[
  {
    "path": ".github/workflows/changelog.yml",
    "content": "name: Changelog Check\n\non:\n  pull_request:\n    branches:\n      - main\n\njobs:\n  changelog:\n    name: Ensure changelog updated\n    runs-on: ubuntu-latest\n\n    steps:\n      - name: Checkout repository\n        uses: actions/checkout@v4\n        with:\n          fetch-depth: 0\n\n      - name: Verify changelog entry\n        env:\n          BASE_SHA: ${{ github.event.pull_request.base.sha }}\n        run: |\n          git fetch --depth=1 origin \"$BASE_SHA\"\n          changed=$(git diff --name-only \"$BASE_SHA\"...HEAD | grep '^CHANGES\\.md$' || true)\n          if [ -z \"$changed\" ]; then\n            echo \"::error::Missing changelog entry. Update CHANGES.md to describe your changes.\"\n            exit 1\n          fi\n"
  },
  {
    "path": ".github/workflows/ci.yml",
    "content": "name: Build and test\n\non:\n  push:\n    branches: [ main ]\n  pull_request:\n    branches: [ main ]\n\njobs:\n  build:\n    strategy:\n      fail-fast: false\n      matrix:\n        os:\n          - ubuntu-latest\n          - macos-latest\n          # - windows-latest\n\n    runs-on: ${{ matrix.os }}\n\n    steps:\n      - name: Checkout tree\n        uses: actions/checkout@v4\n\n      - name: Set-up OCaml\n        uses: ocaml/setup-ocaml@v3\n        with:\n          ocaml-compiler: 5\n\n      - name: Install dependencies\n        run: opam install . --deps-only --with-test\n\n      - name: Build and test nx\n        run: |\n          opam exec -- dune build --release packages/nx\n          opam exec -- dune build --release @packages/nx/runtest\n\n      - name: Build and test brot\n        run: |\n          opam exec -- dune build --release packages/brot\n          opam exec -- dune build --release @packages/brot/runtest\n\n      - name: Build and test talon\n        run: |\n          opam exec -- dune build --release packages/talon\n          opam exec -- dune build --release @packages/talon/runtest\n\n      - name: Build and test rune\n        run: |\n          opam exec -- dune build --release packages/rune\n          opam exec -- dune build --release @packages/rune/runtest\n\n      - name: Build and test kaun\n        run: |\n          opam exec -- dune build --release packages/kaun\n          opam exec -- dune build --release @packages/kaun/runtest\n\n      - name: Build and test fehu\n        run: |\n          opam exec -- dune build --release packages/fehu\n          opam exec -- dune build --release @packages/fehu/runtest\n\n      - name: Build and test sowilo\n        run: |\n          opam exec -- dune build --release packages/sowilo\n          opam exec -- dune build --release @packages/sowilo/runtest\n\n      - name: Build and test hugin\n        run: |\n          opam exec -- dune build --release packages/hugin\n          opam exec -- 
dune build --release @packages/hugin/runtest\n\n      - name: Build and test quill\n        run: |\n          opam exec -- dune build --release packages/quill\n          opam exec -- dune build --release @packages/quill/runtest\n"
  },
  {
    "path": ".gitignore",
    "content": "# Opam switch\n_opam/\n\n# Build output directory\n_build/\n\n# Catch-all for _*/ directories\n_*/\n\n# Dune lock files\ndune.lock/\ndune-tsan.lock/\n\n# Development tools lock files\ndev-tools.locks/\n\n# VSCode editor configuration directory\n.vscode/\n\n# Python virtual environment directory\n.venv/\n\n# Environment variable files\n.env\n\n# Development packages\ndev.opam\n\n# Agent files\nCLAUDE.local.md\n\n# Run logs\nruns/\n\n# Agents\n.agents/\n.claude/"
  },
  {
    "path": ".ocamlformat",
    "content": "# OCamlFormat configuration file\n\n# Pin the version of OCamlFormat to ensure consistent formatting across different environments.\n# Uncomment and update this line to specify a version:\n# version = 0.26.2\n\n# The formatting style to use. Options include 'default', 'ocamlformat', and 'janestreet'.\n# 'default' is a good starting point for most projects.\nprofile = default\n\n# Parse and format comments in docstrings\nparse-docstrings = true\n\n# Wrap comments and docstrings to fit within the 'max-width'\nwrap-comments = true\n"
  },
  {
    "path": "AGENTS.md",
    "content": "# agents.md\n\nraven is an ecosystem of packages that brings modern machine learning capabilities to ocaml. it provides familiar equivalents of python packages.\n\n## philosophy\n\nraven is inspired by unix's philosophy of doing one thing well, and tinygrad's philosophy of minimalism and clarity. while our scope is larger than tinygrad's, we aim for the same beautiful and minimal code that covers the use cases of the python equivalents.\n\n- strive for the \"right\", principled implementations and designs that stand the test of time.\n- every line must have purpose. choose clarity over cleverness.\n- public apis stay small and modern. no legacy layers, no extra knobs.\n- do not maintain compatibility for its own sake. breaking changes are fine when they move us toward the correct design.\n- focus on _modern_ numerical computing and machine learning. old or classic apis from numpy, pandas, jax, etc. are out of scope.\n- minimize api surface as much as possible and offer the most elegant apis that cover user needs.\n\n## projects\n\n- **nx**: n-dimensional arrays with pluggable backend architecture - equivalent to numpy.\n\n  the backend interface is defined at `packages/nx/lib/core/backend.mli`. 
NEVER add a backend operation without being asked to do so.\n  frontend apis are defined in a single file `packages/nx/lib/frontend.ml` using the backend operations.\n  nx comes with a default c backend in `packages/nx/lib/backend_c/`.\n\n- **rune**: tensor computation with automatic differentiation and jit compilation - equivalent to jax.\n\n  rune is architected as a backend for nx in `packages/rune/lib/nx_rune.ml`, where each backend operation raises an effect, or, if the effect is unhandled, falls back to the nx c backend.\n\n  this allows us to provide an nx-like api, while providing additional features such as automatic differentiation and jit compilation:\n  - for automatic differentiation in `packages/rune/lib/autodiff.ml`, the effect handler catches each operation and re-executes it alongside its gradient calculation; the re-executed calls are not caught again (unless the user nests `grad` calls), so they run as normal on the c backend.\n  - for jit compilation, all effects are handled to build a computation graph, which is then jitted using `rune.jit`.\n  - and similarly for other features such as debug and vmap.\n\n- **kaun**: neural networks and training utilities built on rune - equivalent to flax.\n\n  kaun builds on rune to provide high-level neural network abstractions such as ptree, layers, optimizers, training loops, datasets, metrics, etc.\n\n  it also provides ready-to-use models in `packages/kaun/lib/kaun-models` and datasets in `packages/kaun/lib/kaun-datasets`.\n\n- **fehu**: reinforcement learning environments and algorithms built on rune and kaun - equivalent to gym and stable baselines.\n- **talon**: dataframe library for data manipulation and analysis - equivalent to pandas and polars.\n- **brot**: tokenization and text processing - equivalent to huggingface tokenizers and parts of huggingface transformers.\n- **hugin**: visualization library for plotting and rendering - equivalent to matplotlib and plotly.\n- 
**quill**: interactive computing environment for ocaml - equivalent to jupyter notebooks and ipython.\n- **sowilo**: image processing and computer vision built on rune - equivalent to opencv with differentiable operations.\n\n## project structure\n\n- packages live in `packages/` such as `packages/nx/`, `packages/rune/`, `packages/kaun/`, `packages/sowilo/`, `packages/talon/`, `packages/hugin/`, `packages/quill/`, and `packages/fehu/`, each with `lib/` sources and `test/` suites.\n- documentation assets live under `www/` (static site).\n\n## guidelines\n\n- modules and variants are `Capitalized_snake_case`. values and functions use `snake_case`.\n- docstrings are only used in `mli` files. they start with `(** [function_name args...] ... *)`.\n- operations that match on dtypes need explicit type annotations, e.g. `let nonzero (type a b) (t : (a, b) t) =`.\n\n## performance\n\n- keep allocations to a minimum. allocate outside of loops and reuse buffers when possible.\n- prefer loop-based implementations over higher-order functions for performance-critical code.\n- use unsafe Bigarray and Bytes functions (e.g. `Bigarray.Array1.unsafe_get`) when safety checks are redundant.\n\n## changelog\n\nevery user-facing commit MUST include a corresponding entry in `CHANGES.md`. if a commit adds a feature, fixes a bug, changes an API, or improves performance in a way that users would notice, update the changelog as part of that commit.\n\nentries go under the current unreleased version, grouped by package with `### Package` headers. add new entries at the top of the relevant package section.\n\nwriting style:\n- lead with what changed from the user's perspective, not what code was modified.\n- explain *why* when the reason isn't obvious (e.g. a bug fix should say what was wrong).\n- name the affected functions or types so users can find them.\n- keep each entry to 1-3 lines. 
use backticks for code identifiers.\n- do not include internal refactors, style changes, or test-only changes.\n\n## important rules\n\n- NEVER stage or commit changes unless explicitly requested\n- NEVER run `dune clean`\n- NEVER use the `--force` argument\n- NEVER run dune build with DUNE_CACHE=disabled\n- NEVER try to remove the dune lock file\n- NEVER use git stash, git checkout, git reset, git restore, or ANY git command that modifies the working tree\n- NEVER use git commands to \"test\" or \"isolate\" changes — reason about the code instead\n- NEVER add new backend operations to nx unless explicitly requested\n- NEVER hide warnings and NEVER hide unused variables by adding an underscore. ALWAYS treat warnings as errors that need a proper fix.\n- ALWAYS add changelog entry(ies) in `CHANGES.md` when committing user-facing changes.\n"
  },
  {
    "path": "CHANGES.md",
    "content": "# Changelog\n\nAll notable changes to this project will be documented in this file.\n\n- Only document user-facing changes (features, bug fixes, performance improvements, API changes, etc.)\n- Add new entries at the top of the appropriate section (most recent first)\n\n## [1.0.0~beta1] - Unreleased\n\n### Munin (new)\n\nLocal experiment tracking for Raven. Evolves `kaun-board` into a full\nexperiment tracker — the Raven equivalent of W&B or MLFlow, without a server.\nLog metrics and artifacts from your training script, monitor runs live in the\nterminal with `munin watch`, then compare results with `munin compare`. Data\nis plain JSON on disk, so `jq` and shell scripts work out of the box. Git\ncommit, command line, and system info are captured automatically. The\n`munin.sys` sub-library adds opt-in CPU and memory monitoring in a background\nthread.\n\n### Norn (new)\n\n- New package: Markov chain Monte Carlo sampling with automatic gradients via\n  Rune. Provides HMC and NUTS samplers with Stan-style window adaptation (dual\n  averaging for step size, Welford estimation for mass matrix). Includes\n  symplectic integrators (leapfrog, mclachlan, yoshida), mass matrix metrics\n  (unit, diagonal, dense), and convergence diagnostics (ESS, split R-hat).\n  Equivalent to BlackJAX/PyMC in Python.\n\n### Vega (new)\n\n- New package: per-parameter gradient-based optimizers (SGD, Adam, AdamW,\n  RMSprop, Adagrad) and learning-rate schedules. Built on Nx with no autodiff\n  dependency. Optimizers compose via `Vega.chain` and schedules are plain\n  `int -> float` functions. Equivalent to Optax in JAX.\n\n### Nx\n\n- Remove `~out` parameter from all backend compute operations. Operations now\n  allocate and return their result instead of writing to a caller-provided\n  buffer. 
This simplifies the effect system, fixes vmap, and prepares the\n  architecture for JIT compilation.\n- Add `Shape.reduce_output_shape` for computing output shapes after axis\n  reduction.\n- Add machine learning examples: PCA, K-Means, DBSCAN, and t-SNE implemented\n  from Nx primitives.\n- Fix incorrect results for views and slices in binary, unary, ternary, cast,\n  and shape C stubs. The `iterate_inner_dims` helpers did not account for the\n  ndarray offset, producing wrong results when the data starts at a non-zero\n  offset in the underlying buffer.\n\n### Rune\n\n- Add `Rune.jacfwd` and `Rune.jacrev` for computing full Jacobian matrices.\n  `jacfwd` uses forward-mode AD (column-by-column via JVP); `jacrev` uses\n  reverse-mode AD (row-by-row via VJP). Prefer `jacfwd` when inputs are smaller\n  than outputs, and `jacrev` otherwise.\n- Guard against in-place tensor mutation inside `grad`, `vjp`, `jvp`, and\n  `vmap`. Using `set_item`, `set_slice`, `blit`, or `assign` inside these\n  transformations now raises `Invalid_argument` with a message directing users\n  to use `scatter` instead.\n\n### Kaun\n\n- Optimizers extracted to the new Vega package. `Kaun.Optim` now delegates to\n  Vega for per-leaf updates across parameter trees. `Train.make` accepts a\n  `Vega.t` directly instead of `Optim.algorithm`. Learning-rate schedules move\n  from `Optim.Schedule` to `Vega.Schedule`.\n\n### Talon\n\n- Add `to_html` and `pp_display` for rich table rendering in Quill notebooks.\n  Tables display as styled HTML in the web UI and published books, and as inline\n  HTML in markdown output files.\n- Add `Talon.take` for selecting rows by an array of indices. Indices may repeat\n  and need not be sorted.\n- Fix CSV auto-detection defaulting numeric columns to float32. Parsed values go\n  through `float_of_string` which produces 64-bit floats; defaulting to float32\n  silently truncated precision. Now defaults to float64.\n\n### Hugin\n\n- Fix contour rendering. 
The marching squares implementation produced disconnected\n  2-point line segments instead of joined polylines. Contour lines now render as\n  smooth connected curves, and filled contours (`~filled:true`) produce correct\n  closed polygons instead of degenerate 2-point fills.\n\n### Quill\n\n- Allow `quill file.md` without requiring `quill -- file.md` or `quill run file.md`.\n  The CLI now detects file arguments and routes them to the default TUI command.\n- Fix image Display outputs showing raw base64 text in markdown files. Images now\n  render as inline `<img>` tags with data URIs, visible in any markdown viewer.\n- Add `--figures-dir` flag to `quill run` for writing images to disk and\n  referencing them by path instead of inlining base64 data.\n- Add rich table display for Talon dataframes in liveview and published books.\n- Improve table styling in the web notebook and book build with clean borders,\n  monospace font, and proper header treatment.\n- Resolve relative notebook paths to absolute and change into the notebook\n  directory before execution, so that relative file references in code cells\n  work correctly.\n- Add `vega` to the default Raven packages loaded in Quill kernels.\n\n## [1.0.0~alpha3] - 2026-03-14\n\nThis release reshapes raven's foundations. Every package received API\nimprovements, several were rewritten, and two new packages — nx-oxcaml and\nkaun-board — were built as part of our Outreachy internships.\n\n### Highlights\n\n- **Unified tensor type** — `Nx.t` and `Rune.t` are now the same type.\n  Downstream packages no longer need to choose between them or convert at\n  boundaries. Rune is now a pure transformation library (grad, vjp, vmap)\n  over standard Nx tensors.\n- **nx-oxcaml** (new, Outreachy) — Pure-OCaml tensor backend using OxCaml's\n  unboxed types and SIMD intrinsics. Performance approaches the C backend —\n  in pure OCaml.\n- **kaun-board** (new, Outreachy) — TUI dashboard for monitoring training\n  runs in the terminal. 
Live metrics, loss curves, and system stats.\n- **quill** — Rewritten from the ground up with two interfaces: a terminal UI\n  with syntax highlighting and code completion, and a web frontend via\n  `quill serve` with a CodeMirror 6 editor, WebSocket-based execution,\n  autocompletion, and diagnostics.\n- **brot** — The tokenization library formerly known as saga. Complete rewrite\n  with a cleaner API. [1.3-6x faster than HuggingFace Tokenizers](packages/brot/bench/)\n  on most benchmarks.\n- **nx** — Redesigned backend interface, RNG with effect-based scoping.\n  Einsum **8-20x** faster, matmul dispatch at BLAS parity with NumPy.\n\n### Breaking changes\n\n- **nx**: Redesigned backend interface with new `Nx_buffer` type. Removed\n  `nx.datasets` library. Moved NN functions to Kaun (use `Kaun.Fn`). Renamed\n  `im2col`/`col2im` to `extract_patches`/`combine_patches`. RNG uses\n  effect-based implicit scoping instead of explicit key threading. Removed\n  in-place mutation operations (`ifill`, `iadd`, `isub`, `imul`, `idiv`,\n  `ipow`, `imod`, `imaximum`, `iminimum` and `_s` variants). Removed\n  `Symbolic_shape` module; shapes are concrete `int array` throughout.\n  Removed `Instrumentation` module.\n- **rune**: `Rune.t` no longer exists — use `Nx.t` everywhere. `Rune` no\n  longer re-exports tensor operations; use `open Nx` for tensor ops and\n  `Rune.grad`, `Rune.vjp`, etc. for autodiff. Remove any `Rune.to_nx` /\n  `Rune.of_nx` calls. Removed `enable_debug`, `disable_debug`, `with_debug`;\n  use `Rune.debug f x` instead.\n- **rune**: Removed JIT/LLVM backend. This will come back in a future\n  release with a proper ML compiler.\n- **kaun**: Rewritten core modules API, datasets, and HuggingFace integration.\n  Removed `kaun-models`.\n- **brot**: Renamed from saga. Rewritten API focused on tokenization.\n\n### Nx\n\n- Unify `Nx.t` and `Rune.t` into a single tensor type. 
A new `nx.effect` library (`Nx_effect`) implements the backend interface with OCaml 5 effects: each operation raises an effect that autodiff/vmap/debug handlers can intercept, falling back to the C backend when unhandled. `Nx.t` is now `Nx_effect.t` everywhere — no more type conversions between Nx and Rune.\n- Make transcendental, trigonometric, and hyperbolic operations (`exp`, `log`, `sin`, `cos`, `tan`, `asin`, `acos`, `atan`, `atan2`, `sinh`, `cosh`, `tanh`, `asinh`, `acosh`, `atanh`, `erf`, `sigmoid`) polymorphic over all numeric types including complex, matching the backend and effect definitions.\n- Make `isinf`, `isfinite`, `ceil`, `floor`, `round` polymorphic (non-float dtypes return all-false/all-true or no-op as appropriate).\n- Redesign backend interface with more granular operations (e.g. dedicated unary and binary kernels). This improves performance by letting backends optimize individual ops directly, and prepares for the JIT pipeline which will decompose composite operations at the compiler level instead of the frontend.\n- Rewrite `Nx_buffer` module with new interface. The backend now returns `Nx_buffer.t` instead of raw bigarrays.\n- Add new C kernels for unary, binary, and sort operations, and route new backend ops to C kernels.\n- Add scipy-style `correlate`, `convolve`, and sliding window filters.\n- Generalize `unfold`/`fold` to arbitrary leading dimensions.\n- Remove neural-network functions from Nx (softmax, log_softmax, relu, gelu, silu, sigmoid, tanh). These now live in `Kaun.Fn`.\n- Rename `im2col`/`col2im` to `extract_patches`/`combine_patches`.\n- Remove `nx.datasets` module. Datasets are now in `kaun.datasets`.\n- Simplify `Nx_io` interface. Inline vendor libraries (safetensors, and npy) directly into nx_io.\n- Move the `Rng` module from Rune into Nx with effect-based implicit scoping. 
Random number generation uses `Nx.Rng.run` to scope RNG state instead of explicit key threading.\n- Reduce matmul dispatch overhead to reach BLAS parity with NumPy.\n- Fix Threefry2x32 to match the Random123 standard.\n- Fix `save_image` crash on multi-dimensional genarray.\n- Pre-reduce independent axes in einsum to avoid OOM on large contractions.\n- Make Nx backends pluggable via Dune virtual libraries. The new `nx.backend` virtual library defines the backend interface, with the C backend (`nx.c`) as the default implementation. Alternative backends (e.g., `nx-oxcaml`) can be swapped in at link time. The `Nx_c` module is renamed to `Nx_backend`.\n- Fix `.top` libraries failing to load in utop with \"Reference to undefined compilation unit `Parse`\".\n- Fix OpenMP flag filtering in `discover.ml`: strip `-Xpreprocessor -fopenmp` as a pair on macOS to prevent dangling `-Xpreprocessor` from consuming subsequent flags and causing linker failures. (@Alizter)\n- Add missing bool→low-precision cast support (f16/bf16/fp8) in the C backend.\n- Add UInt32/UInt64 dtypes, rename complex dtypes to Complex64/Complex128, and drop Complex16/QInt8/QUInt8/Int/NativeInt as tensor element dtypes.\n- Remove in-place mutation operations (`ifill`, `iadd`, `isub`, `imul`, `idiv`, `ipow`, `imod`, `imaximum`, `iminimum` and `_s` variants). Use functional operations instead.\n- Remove `Symbolic_shape` module; shapes are now concrete `int array` throughout.\n- Remove `Instrumentation` module. Nx no longer wraps operations in tracing spans. 
Debugging tensor operations is handled by Rune's effect-based debug handler.\n- Fix critical correctness issue in fancy slicing (`L`) where permutations were ignored if the number of indices matched the dimension size (e.g., `slice [L [1; 0]] x` returned `x` unmodified).\n- Rewrite `slice` implementation to use `as_strided` for contiguous operations, reducing overhead to **O(1)** for view-based slices and separating gather operations for better performance.\n- Optimize `set_slice` by replacing scalar-loop index calculations with vectorized coordinate arithmetic, significantly improving performance for fancy index assignments.\n- Improve `einsum` performance **8–20×** with greedy contraction path optimizer (e.g., MatMul 100×100 f32 207.83 µs → 10.76 µs, **19×**; BatchMatMul 200×200 f32 8.78 ms → 435.39 µs, **20×**)\n- Rewrite `diagonal` using flatten + gather approach instead of O(N²) eye matrix masking, reducing memory from O(N²) to O(N)\n- Improve error messages for shape operations (`broadcast`, `reshape`, `blit`) with per-dimension detail and element counts.\n\n### nx-oxcaml (new)\n\nNew pure-OCaml tensor backend that can be swapped in at link time via Dune virtual libraries. Uses OxCaml's unboxed types for zero-cost tensor element access, SIMD intrinsics for vectorized kernels, and parallel matmul. Performance approaches the native C backend — in pure OCaml. Supports the full Nx operation set: elementwise, reductions, matmul, gather/scatter, sort/argsort, argmax/argmin, unfold/fold, pad, cat, associative scan, and threefry RNG. (@nirnayroy, @tmattio)\n\n### Rune\n\n- Unify tensor types: `Rune.t` is now `Nx.t`. Rune no longer re-exports the Nx frontend — it is a pure transformation library exporting only `grad`, `grads`, `value_and_grad`, `vjp`, `jvp`, `vmap`, `no_grad`, `detach`, and debugging/gradcheck utilities. All tensor creation and manipulation uses `Nx` directly.\n- Remove `Tensor` module and `Nx_rune` backend. 
Effect definitions moved to the new `nx.effect` library shared with Nx.\n- Remove `Rune.to_nx` / `Rune.of_nx` (no longer needed — types are identical).\n- Remove `Rune.enable_debug`, `Rune.disable_debug`, `Rune.with_debug`. Use `Rune.debug f x` to run a computation with debug logging enabled.\n- Remove JIT compilation support from Rune. The `Rune.Jit` module and LLVM/Metal backends have been removed and will be re-introduced later as a standalone package.\n- Update to new `Nx_buffer.t` type.\n- Propagate new backend operations through effects and autodiff.\n- Rewrite `Autodiff` module to fix critical JVP correctness issues, enable higher-order derivatives (nested gradients), and introduce `vjp` as a first-class primitive.\n- Fix pointer-based hashing in autodiff, correcting nested JVP handler behavior.\n- Add autodiff support for `as_strided`, enabling gradients through slicing and indexing operations\n- Add autodiff support for `cummax` and `cummin` cumulative operations\n- Add autodiff support for FFT operations\n- Add autodiff support for some linear algebra operations: QR decomposition (`qr`), Cholesky decomposition (`cholesky`), and triangular solve (`triangular_solve`).\n\n### Kaun\n\n- Simplify and redesign the core API for better discoverability and composability. Layers, optimizers, and training utilities now follow consistent patterns and compose more naturally.\n- Add `Fn` module with `conv1d`, `conv2d`, `max_pool`, `avg_pool` — neural network operations that were previously in Nx now live here with a cleaner, more focused API.\n- Redesign datasets and HuggingFace integration with simpler, more composable APIs.\n- Remove `kaun-models` library. Pre-built models now live in examples.\n- Reinitialize dataset each epoch to avoid iterator exhaustion (#147, @Shocker444, @tmattio)\n\n### kaun-board (new)\n\nTUI dashboard for monitoring training runs in the terminal. Displays live metrics, loss curves, and system stats. 
Extracted from kaun's console module into a standalone package. (#166, #167, #170, @Arsalaan-Alam)\n\n### Brot\n\n- Rename the library from saga to brot.\n- Simplify brot to a tokenization-only library. Remove the sampler, n-gram models, and I/O utilities. The sampler is rewritten with nx tensors and moved to `dev/mimir` as the seed of an experimental inference engine.\n- Merge `brot.tokenizers` sub-library into `brot`.\n- Remove dependency on Nx.\n- Use `Buffer.add_substring` instead of char-by-char loop in whitespace pre-tokenizer.\n- Compact BPE symbols in-place after merges, avoiding an intermediate array allocation.\n- Replace list cons + reverse with forward `List.init` in BPE `word_to_tokens`.\n- Use pre-allocated arrays with `Array.blit` instead of `Array.append` in encoding merge and padding, halving per-field allocations.\n- Avoid allocating an unused `words` array in post-processor encoding conversion.\n- Reduce WordPiece substring allocations from O(n²) to O(n) per word by building the prefixed candidate string once per position.\n- Add `encode_ids` fast path that bypasses `Encoding.t` construction entirely when only token IDs are needed.\n- Add ASCII property table for O(1) character classification in pre-tokenizers, replacing O(log n) binary search for `is_alphabetic` (600 ranges), `is_numeric` (230 ranges), and `is_whitespace` (10 ranges). Yields 12-27% speedup on encode benchmarks with ~30% allocation reduction.\n- Add inline ASCII fast paths in all pre-tokenizer loops, skipping UTF-8 decoding and using `Buffer.add_char` instead of `String.sub` for single-byte characters. 
Combined with the property table, yields 20-30% total speedup and 36-55% allocation reduction vs baseline.\n- Parallelize batch encoding with OCaml 5 domains.\n- Optimize BPE merge loop with open-addressing hash, flat arrays, and shift-based heap.\n- Add trie-based WordPiece lookup and normalizer fast path.\n- Remove dependency on `str` library.\n- Generate unicode data offline, removing runtime dependency on `uucp`.\n- Remove unused `Grapheme` module. Grapheme cluster segmentation is not needed for tokenization.\n- Remove `uutf` dependency in favour of OCaml `Stdlib` unicode support.\n\n### Fehu\n\n- Simplify and redesign the core API. Environments and training utilities now follow consistent functional patterns that are easier to use and compose.\n- Remove `fehu.algorithms` — fehu now only depends on rune, and users bring their own algorithms. Examples provided for well-known RL algorithms like DQN and REINFORCE.\n\n### Sowilo\n\n- Cleaner public API — internal implementation split into focused submodules while the public surface stays small.\n- Faster grayscale conversion, edge detection, and gaussian blur.\n\n### Quill\n\nRewritten from the ground up. Terminal UI with syntax highlighting, code completion, and a compact single-line footer. Web frontend via `quill serve` with a CodeMirror 6 editor, WebSocket-based execution, autocompletion, and diagnostics. Markdown notebook format shared across both interfaces.\n\nInteractive REPL: `quill` with no file argument launches a toplevel with syntax highlighting, tab completion, persistent history, smart phrase-aware submission, and piped mode.\n\n### Hugin\n\nRewritten from the ground up with a declarative, composable API. Plots are\nbuilt by combining inert mark descriptions (`line`, `point`, `bar`, `hist`,\n`heatmap`, `contour`, `errorbar`, etc.) with `layers`, decorating them\n(`title`, `xlabel`, `legend`, etc.), and laying them out (`grid`, `hstack`,\n`vstack`). 
A compilation pass resolves data to a Scene IR that separate\nbackends render.\n\n- New declarative specification API replacing the imperative figure/axes/artist\n  architecture. Marks compose with `layers`, decorations chain functionally,\n  and grid layouts nest arbitrarily.\n- **ucairo** — Minimal Cairo FFI bindings (36 C stubs) replacing the `cairo2`\n  opam dependency.\n- Dual-backend rendering: Cairo (PNG, PDF, interactive SDL window) and SVG from\n  a shared Scene IR.\n- OKLCH perceptual color space with `Color.oklch`, `Color.hex`, named CSS\n  colors, and alpha support.\n- Curated colormaps (`Cmap.viridis`, `plasma`, `inferno`, `magma`, `cividis`,\n  `turbo`, `coolwarm`, `spectral`).\n- Theme system with `light`, `dark`, and `minimal` presets.\n- Linear, log, and symlog axis scaling with automatic tick generation.\n- Legend placement with configurable location and multi-column layout.\n- Interactive `show` with SDL window resizing, Escape/Q to close.\n- Rewritten examples and documentation.\n\n### Talon\n\n- Remove `jsont`, `bytesrw`, and `csv` dependencies from Talon. CSV support is now built-in via the `talon.csv` sub-library with a minimal RFC 4180 parser.\n- Remove `talon.json` sub-library.\n\n## [1.0.0~alpha2] - 2025-11-03\n\nWe're excited to announce the release of Raven 1.0.0~alpha2! Less than a month after alpha1, this release notably includes contributions from Outreachy applicants in preparation for the upcoming _two_ internships.\n\nSome highlights from this release include:\n\n- NumPy-compatible text I/O with `Nx_io.{save,load}_txt`\n- Lots of new functions in Nx/Rune, including neural-net ones `dropout`, `log_softmax`, `batch_norm`, `layer_norm`, activation functions like `celu`, and generic ones like `conjugate`, `index_put`, and more.\n- Addition of `.top` libraries for `nx`, `rune`, and `hugin` that auto-install pretty-printers in the OCaml toplevel. You can run e.g. 
`#require \"nx.top\"`.\n- Addition of a visualization API in Fehu via the new `fehu.visualize` library, supporting video recording.\n- Redesign of the Kaun core data structure and checkpointing subsystem for complete snapshotting.\n- Many, many bug fixes and correctness improvements.\n\nWe've also made numerous performance improvements across the board:\n\n- Nx elementwise ops: 5–50× faster (e.g., Add 50×50 f32 88.81 µs → 1.83 µs, **48×**; Mul 100×100 f32 78.51 µs → 2.41 µs, **33×**).\n- Nx conv2d: **4–5×** faster on common shapes; up to **115×** on heavy f64 batched cases (e.g., B16 C64→128 16×16 K3 f64 1.61 s → 13.96 ms).\n- Rune autodiff: **1.2–3.7×** faster on core grads (e.g., MatMulGrad Medium 34.04 ms → 11.91 ms, **2.86×**; Large 190.19 ms → 50.97 ms, **3.73×**).\n- Talon dataframes: big wins in joins and group-bys (Join 805.35 ms → 26.10 ms, **31×**; Group-by 170.80 ms → 19.03 ms, **9×**; Filter 9.93 ms → 3.39 ms, **3×**).\n- Brot tokenizers: realistic workloads **4–17%** faster (e.g., WordPiece encode single 136.05 µs → 115.92 µs, **1.17×**; BPE batch_32 24.52 ms → 22.27 ms, **1.10×**).\n\nThis release closes 8 user-reported issues or feature requests and includes 30 community contributions from 8 unique contributors.\n\n### Nx\n\n- Fix einsum output axis ordering for free axes (e.g., `i,jk->jki`, `ij,klj->kli`) by correcting final transpose permutation and intermediate left-axis reordering.\n- Add `Nx_io.Cache_dir` module with consolidated cache directory utilities respecting `RAVEN_CACHE_ROOT`, `XDG_CACHE_HOME`, and `HOME` fallback, replacing project-specific cache logic across the whole raven ecosystem (#134, @Arsalaan-Alam)\n- Add `Nx_io.save_txt` / `Nx_io.load_txt` with NumPy-compatible formatting, comments, and dtype support (#120, @six-shot)\n- Optimize `multi_dot` for matrix chains, reducing intermediate allocations and improving performance\n- Add public `index_put` function for indexed updates\n- Clarify `reshape` documentation to match its view-only 
semantics\n- Provide `nx.top`, `rune.top`, and `hugin.top` libraries that auto-install pretty printers in the OCaml toplevel and update Quill to load them\n- Add `ifill` for explicit in-place fills and make `fill` return a copied tensor\n- Speed up contiguous elementwise ops via vectorized loops\n- Fast-path contiguous single-axis reductions to avoid iterator fallback\n- Speed up float reductions with contiguous multi-axis fast paths\n- Fast-path padding-free `unfold` to lower conv2d overhead\n- Move neural-network operations (softmax, log_softmax, relu, gelu, silu, sigmoid, tanh) from Kaun to Nx\n- Add public `conjugate` function for complex number conjugation (#125, @Arsalaan-Alam)\n- Fix complex vdot to conjugate first tensor before multiplication, ensuring correct mathematical behavior (#123, @Arsalaan-Alam)\n- Update comparison and conditional operations to use boolean tensors (#115, @nirnayroy)\n- Add support for rcond parameter and underdetermined systems to `lstsq` (#102, @Shocker444)\n- Fix `matrix_rank`/`pinv` Hermitian fast paths to use eigen-decomposition and match NumPy for complex inputs (#96, @six-shot, @tmattio)\n- Optimize matmul BLAS dispatch for strided tensors, improving matrix multiplication performance\n- Fix slow builds reported since alpha1 (#88, @tmattio)\n- Fix macOS ARM crash when loading extended bigarray kinds\n- Add float16 and bfloat16 support to safetensors I/O, including precise conversions that preserve denormals/NaNs (#84, @six-shot, @tmattio)\n- Refined `View` internals for leaner contiguity checks and stride handling, cutting redundant materialization on hot paths\n- Merge `Lazy_view` into the core `View` API so movement ops operate on a single composed view\n- Documented the reworked `View` interface\n- Documented the `Symbolic_shape` interface\n- Added Accelerate framework flag when compiling on macOS, fixing issues in some environments (#129, @nirnayroy)\n\n### Hugin\n\n- Fix random `SIGBUS`/bus errors on macOS when closing 
`Hugin.show` windows by\n  destroying SDL windows with the correct pointer in the finalizer.\n- Let `Hugin.show` windows close cleanly via the window button or `Esc`/`q`, avoiding frozen macOS REPL sessions\n\n### Rune\n\n- Add `Rune.no_grad` and `Rune.detach` to mirror JAX stop-gradient semantics\n- Improve gradient performance slightly by replacing the reverse-mode tape's linear PhysicalTbl with an identity hash table\n- Fix `Rune.Rng.shuffle` flattening outputs for multi-dimensional tensors; the\n  shuffle now gathers along axis 0 and keeps shapes intact\n- Replace `Rune.Rng.truncated_normal` clipping with rejection sampling so\n  samples stay inside the requested interval without boundary spikes\n- Add support for categorical sampling with `Rune.Rng.categorical` (#89, @nirnayroy)\n- Allow plain `llvm-config` in discovery, fixing the build on some platforms (#71, @stepbrobd)\n\n### Kaun\n\n- Added Similarity and Polysemy analysis to the BERT example (#137, @nirnayroy)\n- Support attention masks via the new `Kaun.Attention` module\n- Support loading sharded Hugging Face safetensors\n- Fix BERT and GPT‑2 model loading\n- API simplification: removed type parameters from public types; `Ptree` now supports mixed‑dtype trees via packed tensors with typed getters.\n- Checkpointing overhaul: versioned `Train_state` with schema tagging, explicit `Checkpoint.{Snapshot,Artifact,Manifest,Repository}` (retention, tags, metadata), and simple save/load helpers for snapshots and params.\n- Overhaul dataset combinators: derive tensor specs from Rune dtype, fix sampling/window bugs, validate weighted sampling, and respect `drop_remainder`\n- Make dataset `prefetch` truly asynchronous with background domains and allow reusing an external Domainslib pool via `parallel_map ~pool`\n- Use `Dataset.iter` for epoch batches to reduce overhead\n- Update BERT and GPT-2 tokenizer cache to use `Nx.Cache` for consistent cache directory resolution (#134, @Arsalaan-Alam)\n- Honor text dataset 
encodings via incremental Uutf decoding (#122, @Satarupa22-SD).\n- Preserve empty sequential modules when unflattening so indices stay aligned for checkpoint round-tripping\n- Prevent `Training.fit`/`evaluate` from consuming entire datasets eagerly and fail fast when a dataset yields no batches, avoiding hangs and division-by-zero crashes\n- Allow metric history to tolerate metrics that appear or disappear between epochs so dynamic metric sets no longer raise during training\n- Make `Optimizer.clip_by_global_norm` robust to zero gradients and empty parameter trees to avoid NaNs during training\n- Split CSV loader into `from_csv` and `from_csv_with_labels` to retain labels when requested (#114, @Satarupa22-SD)\n- Implement AUC-ROC and AUC-PR in Kaun metrics and simplify their signatures (#124, #131, @Shocker444)\n- Add mean absolute percentage error, explained variance, R² (with optional adjustment), KL-divergence, and top-k accuracy to Kaun metrics\n- Add NDCG, MAP, and MRR ranking metrics to Kaun metrics\n- Add BLEU, ROUGE, and METEOR metrics to Kaun for pre-tokenized sequences, removing tokenizer dependencies\n- Add SSIM, IoU, and Dice metrics for vision workloads in Kaun\n\n### Talon\n\n- Remove automatic sentinel-based null detection for numeric columns; explicit masks (via [_opt] constructors) now define missing data semantics\n- Replace join nested loops with hashed join indices, cutting lookup from O(n·m) to near O(n)\n- Reuse a shared Nx-based column reindexer so filter/sample paths avoid repeated array copies\n- Fix `fillna` to honor column null masks and replacements, restoring expected nullable semantics\n- Preserve null masks when reindexing during joins so sentinel values remain valid data\n- Handle numeric index columns in `pivot`, preventing distinct keys from collapsing into a single bucket\n- Respect null masks when serializing numeric columns to JSON, emitting JSON `null` instead of sentinel values\n- Detect big integers as int64 in Talon CSV 
loader (#121, @Arsalaan-Alam)\n- Allow forcing column types in Talon JSON loader (#104, @nirnayroy)\n- Add documentation to compare Talon and Pandas (#154, @Satarupa22-SD)\n\n### Saga\n\n- Remove legacy `Normalizers.nmt` and `Normalizers.precompiled` constructors (and their JSON serializers) so the public surface only advertises supported normalizers\n- Tighten template processor JSON parsing: require integer type ids, drop the legacy special-token list format, and ensure multi-id special tokens round-trip with the new record fields\n- Make tokenizer JSON loading tolerant of HuggingFace quirks (missing `model.type`, string-encoded merges), restoring compatibility with upstream `tokenizer.json` files\n- Cache byte-level encode/decode lookup tables to avoid rebuilding them during tokenization, trimming avoidable allocations\n- Skip BPE dropout sampling when dropout is disabled, removing redundant RNG work on common hot paths\n- Fix Unigram tokenization so longest matches are emitted without aborting the sequence when a vocab hit occurs\n- Recompute pad token ids when the pad special string changes, preventing padding with stale ids\n- Fix Unigram `token_to_id`/`id_to_token` vocabulary lookups (#117, @RidwanAdebosin)\n- Optimize `Pre_tokenizers.whitespace` to reduce allocations and improve tokenization performance\n- Simplify the tokenizers interface\n\n### Sowilo\n\n- Add `resize` (nearest & bilinear) that works for 2D, batched, and NHWC tensors\n- Update grayscale conversion and RGB/BGR channel swaps to run entirely on Rune ops, keeping batched inputs compatible with JIT backends\n- Make `median_blur` compute the true median so salt-and-pepper noise is removed as expected\n- Fix `erode`/`dilate` so custom structuring elements (e.g. cross vs. 
square) and batched tensors produce the correct morphology result\n\n### Fehu\n\n- Added snapshot-based save/load for DQN and REINFORCE agents (#127, @RidwanAdebosin, @tmattio)\n- Added typed `Render` payloads with enforced `render_mode` selection in `Env.create`, auto human-mode rendering, and vectorized `Env.render` accessors so environments consistently expose frames for downstream tooling\n- Introduced the `Fehu_visualize` library with ffmpeg/gif/W&B sinks, overlay combinators, rollout/evaluation recorders, and video wrappers for single and vectorized environments, providing a cohesive visualization stack for Fehu\n- Added a `Fehu.Policy` helper module (random/deterministic/greedy) and sink `with_*` guards so visualization sinks handle directory creation and cleanup automatically\n- Added `Buffer.Replay.sample_tensors` to streamline batched training loops and exploration handling\n- Reworked `Fehu_algorithms.Dqn` around `init`/`step`/`train` primitives with functional state, warmup control, and snapshotting helpers\n- Rebuilt `Fehu_algorithms.Reinforce` on the same `init`/`step`/`train` interface with optional baselines, tensor-based rollouts, snapshot save/load, and updated tests/examples/docs using the new workflow\n- Upgraded the GridWorld environment to return ANSI and RGB-array frames using the new render types, and updated the DQN example to optionally record pre- and post-training rollouts via `FEHU_DQN_RECORD_DIR` using `Fehu_visualize` sinks\n- Reworked space sampling to return `(value, next_rng)` and split keys internally, fixing correlated draws in Box/Multi-discrete/Tuple/Dict/Sequence/Text samplers while adding `Space.boundary_values` for deterministic compatibility checks\n- Extended vectorized environments to reuse space boundary probes and now store structured `final_observation` payloads in `Info`, improving downstream consumption\n- Added `Buffer.Replay.add_many` and `Buffer.Replay.sample_arrays`, preserved backing storage on `clear`, and 
exposed struct-of-arrays batches for vectorised learners\n- Tightened `Env.create` diagnostics with contextual error messages and an optional `~validate_transition` hook for custom invariants\n- Enriched `Wrapper` utilities with `map_info`, Box `clip_action`/`clip_observation`, and time-limit info reporting elapsed steps\n- Upgraded `Info` values to carry int/float/bool arrays with stable JSON round-tripping (handling NaN/∞) and sorted metadata serialization for deterministic diffs\n- Improved training helpers: Welford-based normalization with optional unbiased variance, documented `done = terminated || truncated`, and returned `nan` when explained variance is undefined\n- Treat time-limit truncations as terminals when computing rollout advantages and expose the `truncated` flag in buffer steps\n- Require callers of `Training.compute_gae` to pass final bootstrapping values and ensure `Training.evaluate` feeds the current observation to policies\n- Allow `Space.Sequence.create` to omit `max_length`, keeping sequences unbounded above while preserving validation and sampling semantics\n- Validate vectorized environments by round-tripping sample actions/observations across every instance, preventing incompatible spaces from slipping through\n- Finish clipped value loss support in Fehu.Training (#119, @nirnayroy)\n\n### Nx-datasets\n\n- Migrate to `Nx.Cache` for cache directory resolution, enabling consistent behavior. (#133, @Arsalaan-Alam)\n- Fix cache directory resolution to respect `RAVEN_CACHE_ROOT` (or fall back to `XDG_CACHE_HOME`/`HOME`), allowing custom cache locations. (#128, @Arsalaan-Alam)\n- Switch CIFAR-10 loader to the binary archive so parsing succeeds again\n- Add a CIFAR-10 example\n- Standardize dataset examples on `Logs`\n- Use `Logs` for dataset loader logging (#95, @Satarupa22-SD)\n\n## [1.0.0~alpha1] - 2025-10-02\n\nThis release expands the Raven ecosystem with three new libraries (Talon, Saga, Fehu) and significant enhancements to existing ones. 
`alpha1` focuses on breadth—adding foundational capabilities across data processing, NLP, and reinforcement learning—while continuing to iterate on core infrastructure.\n\n### New Libraries\n\n#### Talon - DataFrame Processing\nWe've added Talon, a new DataFrame library inspired by pandas and polars:\n- Columnar data structures that support mixed types (integers, floats, strings, etc.) within a single table (aka heterogeneous datasets)\n- Operations: filter rows, group by columns, join tables, compute aggregates\n- Load and save data in CSV and JSON formats\n- Seamless conversion to/from Nx arrays for numerical operations\n\n#### Saga - NLP & Text Processing\nSaga is a new text processing library for building language models. It provides:\n- Tokenizers: Byte-pair encoding (BPE), WordPiece subword tokenization, and character-level splitting\n- Text generation: Control output with temperature scaling, top-k filtering, nucleus (top-p) sampling, and custom sampling strategies\n- Language models: Train and generate text with statistical n-gram models (bigrams, trigrams, etc.)\n- I/O: Read large text files line-by-line and batch-process corpora\n\n#### Fehu - Reinforcement Learning\nFehu brings reinforcement learning to Raven, with an API inspired by Gymnasium and Stable-Baselines3:\n- Standard RL environment interface (reset, step, render) with example environments like Random Walk and CartPole\n- Environment wrappers to modify observations, rewards, or episode termination conditions\n- Vectorized environments to collect experience from multiple parallel rollouts\n- Training utilities: Generalized advantage estimation (GAE), trajectory collection and management\n- RL algorithms: Policy gradient method (REINFORCE), deep Q-learning (DQN) with replay buffer\n- Use Kaun neural networks as function approximators for policies and value functions\n\n### Major Enhancements\n\n#### Nx - Array Computing\nWe've significantly expanded Nx following early user feedback from 
alpha0:\n- Complete linear algebra suite: LAPACK-backed operations matching NumPy including singular value decomposition (SVD), QR factorization, Cholesky decomposition, eigenvalue/eigenvector computation, matrix inverse, and solving linear systems\n- FFT operations: Fast Fourier transforms (FFT/IFFT) for frequency domain analysis and signal processing\n- Advanced operations: Einstein summation notation (`einsum`) for complex tensor operations, extract/construct diagonal matrices (`diag`), cumulative sums and products along axes\n- Extended dtypes: Machine learning-focused types including bfloat16 (brain floating point), complex16, and float8 for reduced-precision training\n- Symbolic shapes: Internal infrastructure for symbolic shape inference to enable dynamic shapes in future releases (not yet exposed in public API)\n- Lazy views: Array views only copy and reorder memory when stride patterns require it, avoiding unnecessary allocations\n\n#### Rune - Autodiff & JIT\nWe've continued iterating on Rune's autodiff capabilities, and made progress on upcoming features:\n- Forward-mode AD: Compute Jacobian-vector products (`jvp`) for forward-mode automatic differentiation, complementing existing reverse-mode\n- JIT: Ongoing development of LLVM-based just-in-time compilation for Rune computations (currently in prototype stage)\n- vmap: Experimental support for vectorized mapping to automatically batch operations (work-in-progress, not yet stable)\n- LLVM backend: Added compilation backend with support for LLVM versions 19, 20, and 21\n- Metal backend: Continued work on GPU acceleration for macOS using Metal compute shaders\n\n#### Kaun - Deep Learning\nWe've expanded Kaun with high-level APIs for deep learning. 
These APIs are inspired by popular Python frameworks like TensorFlow, PyTorch, and Flax, and should feel familiar to users building models in Python:\n- High-level training: Keras-style `fit()` function to train models with automatic batching, gradient computation, and parameter updates\n- Training state: Encapsulated training state (TrainState) holding parameters, optimizer state, and step count; automatic history tracking of loss and metrics\n- Checkpoints: Save and load model weights to disk for model persistence and transfer learning\n- Metrics: Automatic metric computation during training including accuracy, precision, recall, F1 score, mean absolute error (MAE), and mean squared error (MSE)\n- Data pipeline: Composable dataset operations (map, filter, batch, shuffle, cache) inspired by TensorFlow's `tf.data` for building input pipelines\n- Model zoo: Reference implementations of classic and modern architectures (LeNet5 for basic CNNs, BERT for masked language modeling, GPT2 for autoregressive generation) including reusable transformer components\n- Ecosystem integration: Load HuggingFace model architectures (`kaun.huggingface`), access common datasets like MNIST and CIFAR-10 (`kaun.datasets`), and use standardized model definitions (`kaun.models`)\n\n### Contributors\n\nThanks to everyone who contributed to this release:\n\n- @adamchol (Adam Cholewi) - Implemented the initial `associative_scan` native backend operation for cumulative operations\n- @akshay-gulab (Akshay Gulabrao)\n- @dhruvmakwana (Dhruv Makwana) - Implemented `einsum` for Einstein summation notation\n- @gabyfle (Gabriel Santamaria) - Built PocketFFT bindings that replaced our custom FFT kernels\n- @lukstafi (Lukasz Stafiniak) - Major contributions to Fehu and FunOCaml workshop on training Sokoban agents\n- @nickbetteridge\n- @sidkshatriya (Sidharth Kshatriya)\n\n## [1.0.0~alpha0] - 2025-07-05\n\n### Initial Alpha Release\n\nWe're excited to release the zeroth alpha of Raven, an OCaml machine 
learning ecosystem bringing modern scientific computing to OCaml.\n\n### Added\n\n#### Core Libraries\n\n- **Nx** - N-dimensional array library with NumPy-like API\n  - Multi-dimensional tensors with support for several data types.\n  - Zero-copy operations: slicing, reshaping, broadcasting\n  - Element-wise and linear algebra operations\n  - Swappable backends: Native OCaml, C, Metal\n  - I/O support for images (PNG, JPEG) and NumPy files (.npy, .npz)\n\n- **Hugin** - Publication-quality plotting library\n  - 2D plots: line, scatter, bar, histogram, step, error bars, fill-between\n  - 3D plots: line3d, scatter3d\n  - Image visualization: imshow, matshow\n  - Contour plots with customizable levels\n  - Text annotations and legends\n\n- **Quill** - Interactive notebook environment\n  - Markdown-based notebooks with live formatting\n  - OCaml code execution with persistent session state\n  - Integrated data visualization via Hugin\n  - Web server mode for browser-based editing\n\n#### ML/AI Components\n\n- **Rune** - Automatic differentiation and JIT compilation framework\n  - Reverse-mode automatic differentiation\n  - Functional API for pure computations\n  - Basic JIT infrastructure (in development)\n\n- **Kaun** - Deep learning framework (experimental)\n  - Flax-inspired functional API\n  - Basic neural network components\n  - Example implementations for XOR and MNIST\n\n- **Sowilo** - Computer vision library\n  - Image manipulation: flip, crop, color conversions\n  - Filtering: gaussian_blur, median_blur\n  - Morphological operations and edge detection\n\n#### Supporting Libraries\n\n- **Nx-datasets** - Common ML datasets (MNIST, Iris, California Housing)\n- **Nx-text** - Text processing and tokenization utilities\n\n### Known Issues\n\nThis is an alpha release with several limitations:\n- Quill editor has UI bugs being addressed\n- APIs may change significantly before stable release\n\n### Contributors\n\nInitial development by the Raven team. 
Special thanks to all early testers and contributors.\n\n@axrwl\n@gabyfle\n@hesterjeng\n@ghennequin\n@blueavee\n\nAnd to our early sponsors:\n\n@daemonfire300\n@gabyfle\n@sabine\n\n[1.0.0~alpha0]: https://github.com/raven-ocaml/raven/releases/tag/v1.0.0~alpha0\n[1.0.0~alpha1]: https://github.com/raven-ocaml/raven/releases/tag/v1.0.0~alpha1\n[1.0.0~alpha2]: https://github.com/raven-ocaml/raven/releases/tag/v1.0.0~alpha2\n"
  },
  {
    "path": "CONTRIBUTING.md",
    "content": "# Contributing to Raven\n\n## Documentation Style\n\n### Overview\n\nThis guide establishes documentation conventions for the Raven ecosystem. We follow the Unix philosophy: terse, precise, no fluff. Document contracts and invariants, not implementation details.\n\n### General Principles\n\n1. **Be imperative and active** - \"Creates tensor\" not \"This function creates a tensor\"\n2. **Document invariants, not implementation** - What must be true, not how it works\n3. **Mention performance only when surprising** - O(1) views vs O(n) copies\n4. **No redundant information** - If it's obvious from the type, don't repeat it\n\n### Documentation Template\n\n```ocaml\nval zeros : ('a, 'b) dtype -> int array -> ('a, 'b) t\n(** [zeros dtype shape] creates zero-filled tensor.   (* <-- function application pattern *)\n\n    Extended description if needed. State invariants.  (* <-- optional extended info *)\n\n    @raise Exception_name if [condition]               (* <-- exceptions *)\n\n    Example creating a 2x3 matrix of zeros:            (* <-- example with description *)\n    {[\n      let t = Nx.zeros Nx.float32 [|2; 3|] in\n      Nx.to_array t = [|0.; 0.; 0.; 0.; 0.; 0.|]\n    ]} *)\n```\n\n### Formatting Conventions\n\n#### Code References\n- Use `[code]` for inline code: parameter names, function names, expressions\n- Use `{[ ... 
]}` for code blocks\n- No backticks - this is odoc, not Markdown\n\n#### First Line\nAlways start with: `[function_name arg1 arg2] does X`\nNot: \"Creates a tensor with...\" or \"This function...\"\n\n#### Mathematical Notation\n- Use ASCII: `a * b`, not `a × b`\n- Use `x^2` or `x ** 2` for powers\n- Use `[start, stop)` for half-open intervals\n\n### What to Document\n\n✓ **Invariants and preconditions**: \"Length of [data] must equal product of [shape].\"  \n✓ **Surprising performance**: \"Returns view if possible (O(1)), otherwise copies (O(n)).\"  \n✓ **Shape transformations**: \"Result has shape [|m; n|] where m = length of [a].\"\n\n✗ **Not**: obvious information, implementation details, or redundant parameter descriptions\n\n### Code Examples\n\nMust be valid, compilable OCaml:\n- Use qualified names (`Nx.function` not `open Nx`)\n- Show expected results with `=`\n- Each example in its own `{[ ... ]}` block with a description before it\n- Self-contained (independently executable)\n\n### Examples\n\n#### Function with Constraints\n```ocaml\nval arange : ('a, 'b) dtype -> int -> int -> int -> ('a, 'b) t\n(** [arange dtype start stop step] generates values in [start, stop).\n\n    Step must be non-zero. Result length is [(stop - start) / step] rounded\n    toward zero.\n\n    @raise Failure if [step = 0]\n\n    Generating even numbers from 0 to 10:\n    {[\n      let t1 = Nx.arange Nx.int32 0 10 2 in\n      Nx.to_array t1 = [|0l; 2l; 4l; 6l; 8l|]\n    ]} *)\n```\n\n#### Function with Multiple Behaviors\n```ocaml\nval dot : ('a, 'b) t -> ('a, 'b) t -> ('a, 'b) t\n(** [dot a b] computes generalized dot product.\n\n    For 1-D tensors, returns inner product (scalar). For 2-D, performs\n    matrix multiplication. 
Otherwise, contracts last axis of [a] with\n    second-last of [b].\n\n    @raise Invalid_argument if contraction axes have different sizes\n\n    Computing inner product of two vectors:\n    {[\n      let v1 = Nx.of_array Nx.float32 [|1.; 2.|] in\n      let v2 = Nx.of_array Nx.float32 [|3.; 4.|] in\n      let scalar = Nx.dot v1 v2 in\n      Nx.to_scalar scalar = 11.\n    ]} *)\n```\n\n#### Optional Parameters\n```ocaml\nval sum : ?axes:int array -> ?keepdims:bool -> ('a, 'b) t -> ('a, 'b) t\n(** [sum ?axes ?keepdims t] sums elements along specified axes.\n\n    Default sums all axes. If [keepdims] is true, retains reduced\n    dimensions with size 1.\n\n    @raise Invalid_argument if any axis is out of bounds\n\n    Summing all elements:\n    {[\n      let t = Nx.of_array Nx.float32 ~shape:[|2; 2|] [|1.; 2.; 3.; 4.|] in\n      Nx.to_scalar (Nx.sum t) = 10.\n    ]}\n\n    Summing along rows (axis 0):\n    {[\n      let t = Nx.of_array Nx.float32 ~shape:[|2; 2|] [|1.; 2.; 3.; 4.|] in\n      let sum_axis0 = Nx.sum ~axes:[|0|] t in\n      Nx.to_array sum_axis0 = [|4.; 6.|]\n    ]} *)\n```\n\n### Special Documentation Cases\n\n**Broadcasting**: Always explain compatibility rules\n```ocaml\n(** [add t1 t2] computes element-wise sum with broadcasting.\n\n    Shapes must be broadcast-compatible: each dimension must be equal\n    or one of them must be 1. *)\n```\n\n**Memory behavior**: Be explicit about views vs copies\n```ocaml\n(** [transpose t] returns view with swapped axes (no copy). *)\n(** [flatten t] returns new 1-D tensor (always copies). *)\n(** [reshape shape t] returns view if possible, otherwise copies. *)\n```\n\n**Complex shapes**: Use examples to clarify\n```ocaml\n(** [stack axis tensors] stacks along new axis at position [axis].\n\n    All tensors must have identical shape. 
Result has rank + 1.\n\n    Stacking two 2x2 matrices along a new first axis:\n    {[\n      let t1 = Nx.of_array Nx.int32 ~shape:[|2; 2|] [|1l; 2l; 3l; 4l|] in\n      let t2 = Nx.of_array Nx.int32 ~shape:[|2; 2|] [|5l; 6l; 7l; 8l|] in\n      let stacked = Nx.stack ~axis:0 [t1; t2] in\n      Nx.shape stacked = [|2; 2; 2|] &&\n      Nx.to_array stacked = [|1l; 2l; 3l; 4l; 5l; 6l; 7l; 8l|]\n    ]} *)\n```\n\n### Module-level Documentation\n\n```ocaml\n(** N-dimensional array operations.\n\n    This module provides NumPy-style tensor operations for OCaml.\n    Tensors are immutable views over mutable buffers, supporting\n    broadcasting, slicing, and efficient memory layout transformations.\n\n    {1 Creating Tensors}\n\n    Use {!create}, {!zeros}, {!ones}, or {!arange} to construct tensors... *)\n```\n\nRemember: If the Unix manual wouldn't say it, neither should we.\n\n## Error Message\n\n### Format\n\n```\noperation: cannot <action> <from> to <to> (<specific problem>)\nhint: <guidance>\n```\n\nAll lowercase except dtypes. 
Hints are optional.\n\n**Alternative formats when needed:**\n```\noperation: invalid <thing> (<specific problem>)\noperation: <what failed> (<specific problem>)\n```\n\n### Examples\n\n```\nreshape: cannot reshape [10,10] to [12,10] (100→120 elements)\n\nbroadcast: cannot broadcast [2,3] with [4,5] (dim 0: 2≠4, dim 1: 3≠5)\nhint: broadcasting requires dimensions to be either equal or 1\n\nempty: invalid shape [-1, 10] (negative dimension)\n\nmatmul: cannot multiply Float32 @ Int64 (dtype mismatch)\nhint: cast one array to match the other's dtype\n```\n\n### Rules\n\n#### Always include:\n- **Operation name** - what function failed\n- **Full context** - complete shapes, not just sizes\n- **Specific problem** - which dimension/axis failed and why\n\n#### Structure consistently:\n- For transformations: `[10,10] to [12,10]`\n- For operations: `[2,3] with [4,5]`\n- For access: `[5,2] in shape [3,4]`\n- For invalid inputs: `invalid X (reason)`\n\n#### Make problems obvious:\n- Show comparisons: `2≠4`, `5≥3`, `100→120`\n- Point to location: `dim 0:`, `axis 1:`\n- State violations: `axis 2 repeated`, `multiple -1`\n\n#### Multiple issues:\n```\nconv2d: invalid configuration\n  - input channels: 3 ≠ 5 (weight expects 5)\n  - kernel [6,6] > input [5,5] with 'valid' padding\n```\n\n#### Add hints when:\n- The fix is non-obvious\n- There's a specific function to call\n- The rule isn't clear from context\n- Backend limitations exist\n\n### Special Cases\n\n**Performance warnings:**\n```\nreshape: requires copy from strided view [100,10] to [1000]\nhint: call contiguous() first to avoid copy\n```\n\n**Empty/scalar edge cases:**\n```\nsqueeze: cannot squeeze scalar (already rank 0)\nargmax: empty axis returns no indices (size 0)\n```\n\n**Backend limitations:**\n```\ngather: indices dtype Int64 not supported (backend uses Int32)\nhint: cast indices to Int32\n```\n\n### Common Patterns\n\n**Shape changes:**\n```\nreshape: cannot reshape [2,5,10] to [4,26] (100→104 
elements)\n```\n\n**Invalid access:**\n```\nslice: cannot slice [(0,5), (2,12)] in shape [10,10] (axis 1: 12>10)\n```\n\n**Type/value errors:**\n```\npad: invalid padding [-1, 2] (negative values)\nhint: use shrink() to remove elements\n```\n\n**Configuration errors:**\n```\npermute: invalid axes [0,2,2] (axis 2 repeated)\narange: invalid range [10, 5, 1] (start > stop with positive step)\n```\n\n### Don'ts\n\n❌ Vague errors: `invalid shape`\n❌ Missing context: `100 != 120`\n❌ Redundant hints: `shapes must be compatible (incompatible shapes)`\n❌ Teaching basics: `broadcasting requires...` (save for hints)\n\n### Summary\n\nShow exactly what they tried, what failed, and where. Use the standard format when possible, adapt when needed. Include hints only when they add value.\n"
  },
  {
    "path": "LICENSE",
    "content": "ISC License\n\nCopyright (c) 2025, Thibaut Mattio\n\nPermission to use, copy, modify, and/or distribute this software for any\npurpose with or without fee is hereby granted, provided that the above\ncopyright notice and this permission notice appear in all copies.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\" AND THE AUTHOR DISCLAIMS ALL WARRANTIES\nWITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF\nMERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR\nANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES\nWHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN\nACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF\nOR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE."
  },
  {
    "path": "README.md",
    "content": "<p align=\"center\">\n  <img src=\"www/site/raven.svg\" width=\"80\" alt=\"raven\">\n</p>\n\n<h3 align=\"center\">modern scientific computing for OCaml</h3>\n\n<p align=\"center\">\n  <a href=\"https://raven-ml.dev/docs/\">docs</a> &middot;\n  <a href=\"https://raven-ml.dev/docs/installation/\">install</a> &middot;\n  <a href=\"https://github.com/raven-ml/raven/issues\">issues</a>\n</p>\n\n---\n\nRaven is an ecosystem of OCaml libraries for numerical computing, machine learning, and data science. Everything you know from Python — NumPy, JAX, PyTorch, Matplotlib, Jupyter — rebuilt with type safety.\n\n> Raven is **alpha**. APIs will change. [Feedback welcome.](https://github.com/raven-ml/raven/issues)\n\n```ocaml\n(* nx — n-dimensional arrays *)\nlet x = Nx.linspace float32 0. 10. 100\nlet y = Nx.sin x\n\n(* rune — automatic differentiation *)\nlet grad_f = Rune.grad (fun x -> Rune.sum (Rune.mul x x)) x\n\n(* brot — tokenization *)\nlet tokenizer = Brot.from_file \"tokenizer.json\" |> Result.get_ok\nlet ids = Brot.encode_ids tokenizer \"The meaning of life is\"\n\n(* kaun — neural networks *)\nlet model = Kaun.Layer.sequential [\n  Kaun.Layer.linear ~in_features:768 ~out_features:128 ();\n  Kaun.Layer.relu ();\n  Kaun.Layer.linear ~in_features:128 ~out_features:10 ();\n]\n\n(* talon — dataframes *)\nlet df = Talon.create [\n  \"name\", Talon.Col.string_list [ \"Alice\"; \"Bob\"; \"Charlie\" ];\n  \"score\", Talon.Col.float64_list [ 85.5; 92.0; 78.5 ];\n]\n\n(* hugin — plotting *)\nlet () = Hugin.(figure () |> subplot |> Plotting.plot ~x ~y |> ignore; show ())\n```\n\n## Packages\n\n|     | Package                        | Like              | What it does                                             |\n| --- | ------------------------------ | ----------------- | -------------------------------------------------------- |\n|     | [**nx**](packages/nx/)         | NumPy             | N-dimensional arrays with linear algebra operations      |\n| ᛏ   | 
[**tolk**](packages/tolk/)     | tinygrad          | Minimal ML compiler for GPU tensor computation           |\n| ᚱ   | [**rune**](packages/rune/)     | JAX               | Automatic differentiation and functional transformations |\n| ᚲ   | [**kaun**](packages/kaun/)     | Flax              | Neural networks and training                             |\n| ᚹ   | [**vega**](packages/vega/)     | Optax             | Composable gradient-based optimizers                     |\n| ᚾ   | [**norn**](packages/norn/)     | BlackJAX          | MCMC sampling with automatic gradients                   |\n| ᚨ   | [**brot**](packages/brot/)     | HF Tokenizers     | Fast, HuggingFace-compatible tokenization                |\n| ᛃ   | [**talon**](packages/talon/)   | Polars            | Fast and elegant dataframes with type-safe operations    |\n| ᛞ   | [**hugin**](packages/hugin/)   | Matplotlib        | Publication-quality plotting                             |\n| ᛈ   | [**quill**](packages/quill/)   | Jupyter + IPython | Interactive REPL and markdown notebooks                  |\n| ᚠ   | [**fehu**](packages/fehu/)     | Gymnasium         | Reinforcement learning environments                      |\n| ᛋ   | [**sowilo**](packages/sowilo/) | OpenCV            | Differentiable computer vision                           |\n| ᛗ   | [**munin**](packages/munin/)  | W&B / MLFlow      | Local experiment tracking with live TUI dashboard        |\n\n## Getting started\n\n```bash\nopam install raven\n```\n\nThis installs the full ecosystem. You can also install only what you need — e.g. `opam install kaun` for neural networks, or `opam install nx` for just arrays.\n\nAdd to your `dune` file:\n\n```dune\n(executable\n (name main)\n (libraries raven))\n```\n\nSee the [installation guide](https://raven-ml.dev/docs/installation/) for system dependencies and editor setup.\n\n## Support\n\nBuilding a scientific computing ecosystem takes sustained effort. 
Sponsorships help us ship JIT compilation, distributed training, better developer tooling, and production deployment through MirageOS.\n\n**[Support Raven →](https://raven-ml.dev/docs/support-raven/)**\n\nThanks to our sponsors [Ahrefs](https://ahrefs.com) and [Tarides](https://tarides.com).\n\n## License\n\n[ISC](LICENSE)\n"
  },
  {
    "path": "TODO.md",
    "content": "# todo\n\n## beta (jit)\n\ngoalpost: jit-compiled gpt2 matching pytorch performance\n\nperf:\n- close rune grad performance gap (within <2x of pytorch)\n- close nx performance gaps (within <2x of numpy)\n\ntolk:\n- integrate tolk as rune jit transformation\n- kernel fusion and optimization\n- cpu, cuda, metal backends\n\n## v1 (production)\n\ngoalpost: end-to-end train -> deploy as unikernel or static binary\n\ntraining:\n- gradient accumulation\n- mixed precision (fp16/bf16 forward, fp32 master weights, loss scaling)\n- gradient checkpointing (rune.checkpoint, recompute activations in backward)\n- flash attention (tolk kernel and/or kaun.fn primitive)\n- parallel data loading (ocaml 5 domains, background prefetch)\n- layer completions: transposed conv, group norm, full conv2d stride/dilation/padding\n- onnx import (onnx -> tolk ir adapter, cover resnet/bert/gpt2/llama/vit/whisper ops)\n\ndeployment:\n- aot compilation: cpu (c via clang, musl static linking) and gpu (cuda/metal/opencl)\n- mimir: kv cache, continuous batching, pagedattention\n- mimir: http server (rest api, /health, /metrics, sigterm, structured logging)\n- post-training quantization (int8/int4, tolk quantized kernels)\n- mirageos unikernel deployment (raven-mirage package)\n  - no blas dep (tolk aot generates all compute)\n  - weight loading via network (mirage-http)\n  - verify ocaml 5 effects on mirageos runtime\n  - http server on mirageos network stack\n\ndocs/website:\n- landing page rewrite with benchmarks\n- deployment guide (aot, static binary, docker, mirageos, gpu)\n- end-to-end examples (serving, onnx+deploy workflow)\n"
  },
  {
    "path": "dev/README.md",
    "content": "# dev\n\nDevelopment sandbox for experiments and prototypes that support the Raven ecosystem.\n\n## Projects\n\n| Name | Description |\n| ---- | ----------- |\n| [mimir](mimir/) | Experimental inference engine |\n| [tolk](tolk/) | ML compiler inspired by tinygrad |\n"
  },
  {
    "path": "dev/mimir/README.md",
    "content": "# mimir\n\nExperimental inference engine for raven.\n\nThe gap between \"I can run a forward pass\" and \"I can serve a model in production\" is large. mimir is where we figure out what the OCaml answer to that gap looks like.\n\n## Current state\n\nThe sampling layer: composable logits processors (temperature, top-k, top-p, repetition penalty, n-gram blocking), stopping criteria, and the autoregressive generation loop operating on nx tensors.\n\nThis is the outermost piece of the inference puzzle — the part that turns model logits into actual token sequences. Everything below is open.\n\n## What we want to explore\n\n**Memory management for KV cache.** The attention mechanism produces intermediate state (keys and values) that grows linearly with sequence length. Naive allocation wastes memory; the interesting question is whether we can apply OS-style virtual memory ideas — fixed-size blocks, deferred allocation, reference-counted sharing — to make long sequences and shared prefixes cheap. This is the core idea behind PagedAttention.\n\n**Request scheduling.** A single request is simple. Thousands of concurrent requests with different prompt lengths, generation limits, and priority levels is a scheduling problem. Batching amortizes GPU overhead but introduces latency trade-offs. Continuous batching (letting new requests join mid-batch as others finish) changes the calculus further. OCaml's algebraic types and pattern matching may give us a cleaner expression of scheduling policies than the typical mutable-state approach.\n\n**Prefill/decode asymmetry.** The two phases of autoregressive generation have opposite performance characteristics — one is compute-bound, the other memory-bound. An engine that treats them identically leaves performance on the table.\n\n**JIT compilation of decode steps.** The decode phase repeats the same computation graph with different inputs. 
If rune's JIT can capture and replay these graphs, we avoid per-step compilation overhead — similar in spirit to CUDA graph capture.\n\n**Structured generation.** Constraining the sampling step so that output conforms to a grammar, regex, or JSON schema. This means masking logits at each step based on what the constraint automaton allows, which interacts with the sampling pipeline we already have.\n\n**Tensor parallelism.** Splitting a model across multiple devices. This is a rune-level concern more than a mimir concern, but the inference engine needs to coordinate it.\n\n## References\n\n- [Nano-vLLM](https://github.com/GeeeekExplorer/nano-vllm) — minimal (~1,200 lines) inference engine by a DeepSeek contributor, good for understanding the essential moving parts\n- [vLLM: PagedAttention paper](https://arxiv.org/abs/2309.06180)\n- [SGLang](https://github.com/sgl-project/sglang) — alternative engine with RadixAttention for prefix sharing\n"
  },
  {
    "path": "dev/mimir/dune-project",
    "content": "(lang dune 3.21)\n\n(name mimir)\n\n(package\n (name mimir)\n (synopsis \"Experimental inference engine for Raven\")\n (description\n  \"Mimir is an inference engine for the Raven ecosystem. It provides sampling, KV cache management, request scheduling, and structured generation for serving ML models.\")\n (depends\n  (ocaml\n   (>= 5.2))))\n"
  },
  {
    "path": "dev/mimir/lib/dune",
    "content": "(library\n (name mimir)\n (public_name mimir)\n (libraries nx unix))\n"
  },
  {
    "path": "dev/mimir/lib/mimir.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\ninclude Sampler\n"
  },
  {
    "path": "dev/mimir/lib/mimir.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Mimir - Text generation with composable logits processors.\n\n    Experimental inference/generation library for the Raven ML ecosystem.\n    Provides the autoregressive decode loop, composable logits processors,\n    stopping criteria, and generation configuration. *)\n\ninclude module type of Sampler\n"
  },
  {
    "path": "dev/mimir/lib/sampler.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* ───── Core Types ───── *)\n\ntype logits = (float, Bigarray.float32_elt) Nx.t\ntype token_ids = int array\n\n(* ───── Logits Processors ───── *)\n\ntype logits_processor = {\n  name : string;\n  process : prompt_length:int -> token_ids -> logits -> logits;\n}\n\ntype logits_processor_list = logits_processor list\n\n(* ───── Stopping Criteria ───── *)\n\ntype stopping_criterion = {\n  name : string;\n  should_stop : prompt_length:int -> start_time:float -> token_ids -> bool;\n}\n\ntype stopping_criteria_list = stopping_criterion list\n\n(* ───── Generation Configuration ───── *)\n\ntype generation_config = {\n  max_length : int;\n  max_new_tokens : int option;\n  min_length : int;\n  min_new_tokens : int;\n  do_sample : bool;\n  temperature : float;\n  top_k : int;\n  top_p : float;\n  repetition_penalty : float;\n  no_repeat_ngram_size : int;\n  bad_words_ids : int list list;\n  force_words_ids : int list list;\n  pad_token_id : int option;\n  bos_token_id : int option;\n  eos_token_id : int option;\n  eos_token_ids : int list;\n}\n\nlet default =\n  {\n    max_length = 100;\n    max_new_tokens = None;\n    min_length = 0;\n    min_new_tokens = 0;\n    do_sample = false;\n    temperature = 1.0;\n    top_k = 0;\n    top_p = 1.0;\n    repetition_penalty = 1.0;\n    no_repeat_ngram_size = 0;\n    bad_words_ids = [];\n    force_words_ids = [];\n    pad_token_id = None;\n    bos_token_id = None;\n    eos_token_id = None;\n    eos_token_ids = [];\n  }\n\n(* ───── Builder Pattern ───── *)\n\nlet with_temperature temperature config = { config with temperature }\nlet with_top_k top_k config = { config with top_k }\nlet with_top_p top_p config = { config with top_p }\n\nlet with_repetition_penalty 
repetition_penalty config =\n  { config with repetition_penalty }\n\nlet with_max_length max_length config = { config with max_length }\n\nlet with_max_new_tokens max_new_tokens config =\n  { config with max_new_tokens = Some max_new_tokens }\n\nlet with_min_length min_length config = { config with min_length }\nlet with_min_new_tokens min_new_tokens config = { config with min_new_tokens }\n\nlet with_no_repeat_ngram no_repeat_ngram_size config =\n  { config with no_repeat_ngram_size }\n\nlet with_do_sample do_sample config = { config with do_sample }\n\n(* ───── Preset Configurations ───── *)\n\nlet creative_writing =\n  default |> with_do_sample true |> with_temperature 0.8 |> with_top_p 0.9\n  |> with_repetition_penalty 1.2\n  |> with_no_repeat_ngram 3 |> with_max_new_tokens 512\n\nlet chat =\n  default |> with_do_sample true |> with_temperature 0.7 |> with_top_p 0.95\n  |> with_repetition_penalty 1.1\n  |> with_max_new_tokens 512\n\nlet code_generation =\n  default |> with_do_sample true |> with_temperature 0.2 |> with_top_k 5\n  |> with_repetition_penalty 1.0\n  |> with_max_new_tokens 1024\n\nlet factual =\n  default |> with_do_sample true |> with_temperature 0.3 |> with_top_k 10\n  |> with_repetition_penalty 1.1\n  |> with_max_new_tokens 256\n\nlet from_preset = function\n  | \"creative_writing\" -> creative_writing\n  | \"chat\" -> chat\n  | \"code_generation\" -> code_generation\n  | \"factual\" -> factual\n  | _ -> default\n\n(* ───── Logits Processors ───── *)\n\nlet neg_infinity = Float.neg_infinity\n\nlet temperature_warper ~temperature =\n  {\n    name = Printf.sprintf \"temperature(%.2f)\" temperature;\n    process =\n      (fun ~prompt_length:_ _tokens logits ->\n        if temperature = 1.0 then logits else Nx.div_s logits temperature);\n  }\n\nlet top_k_warper ~k =\n  {\n    name = Printf.sprintf \"top_k(%d)\" k;\n    process =\n      (fun ~prompt_length:_ _tokens logits ->\n        if k <= 0 then logits\n        else\n          let sorted_values, 
_sorted_indices =\n            Nx.sort ~descending:true logits\n          in\n          let vocab_size = Nx.numel logits in\n          let cutoff_k = min k vocab_size in\n          let threshold = Nx.item [ cutoff_k - 1 ] sorted_values in\n          let mask = Nx.less_s logits threshold in\n          Nx.where mask (Nx.full_like logits neg_infinity) logits);\n  }\n\nlet top_p_warper ~p =\n  {\n    name = Printf.sprintf \"top_p(%.2f)\" p;\n    process =\n      (fun ~prompt_length:_ _tokens logits ->\n        if p >= 1.0 then logits\n        else\n          let probs = Nx.softmax logits in\n          let sorted_probs, sorted_indices = Nx.sort ~descending:true probs in\n          let cumulative = Nx.cumsum sorted_probs in\n          (* Find where cumulative exceeds p, keeping at least 1 token *)\n          let cutoff_mask = Nx.greater_s cumulative p in\n          (* Shift mask right by 1 so the token that crosses p is kept *)\n          let n = Nx.numel logits in\n          let shifted_arr = Nx.to_array cutoff_mask in\n          let new_mask_arr = Array.make n false in\n          for i = 1 to n - 1 do\n            new_mask_arr.(i) <- shifted_arr.(i - 1)\n          done;\n          let shifted_mask = Nx.create Nx.bool [| n |] new_mask_arr in\n          (* Map mask back to original token order *)\n          let result = Nx.copy logits in\n          let sorted_idx_arr = Nx.to_array sorted_indices in\n          let shifted_mask_arr = Nx.to_array shifted_mask in\n          for i = 0 to n - 1 do\n            if shifted_mask_arr.(i) then\n              Nx.set_item\n                [ Int32.to_int sorted_idx_arr.(i) ]\n                neg_infinity result\n          done;\n          result);\n  }\n\nlet repetition_penalty ~penalty =\n  {\n    name = Printf.sprintf \"repetition_penalty(%.2f)\" penalty;\n    process =\n      (fun ~prompt_length:_ previous_tokens logits ->\n        if penalty = 1.0 then logits\n        else\n          let result = Nx.copy logits in\n          let 
vocab_size = Nx.numel result in\n          Array.iter\n            (fun token_id ->\n              if token_id < vocab_size then begin\n                let score = Nx.item [ token_id ] result in\n                let penalized =\n                  if score < 0.0 then score *. penalty else score /. penalty\n                in\n                Nx.set_item [ token_id ] penalized result\n              end)\n            previous_tokens;\n          result);\n  }\n\nlet no_repeat_ngram ~ngram_size =\n  {\n    name = Printf.sprintf \"no_repeat_ngram(%d)\" ngram_size;\n    process =\n      (fun ~prompt_length:_ previous_tokens logits ->\n        let len = Array.length previous_tokens in\n        if ngram_size <= 0 || len < ngram_size - 1 then logits\n        else\n          let result = Nx.copy logits in\n          (* Get the last (ngram_size - 1) tokens as the current prefix *)\n          let prefix_start = len - (ngram_size - 1) in\n          let prefix =\n            Array.sub previous_tokens prefix_start (ngram_size - 1)\n          in\n          (* Scan history for matching prefixes *)\n          for i = 0 to len - ngram_size do\n            let matches = ref true in\n            for j = 0 to ngram_size - 2 do\n              if previous_tokens.(i + j) <> prefix.(j) then matches := false\n            done;\n            if !matches then begin\n              let blocked_token = previous_tokens.(i + ngram_size - 1) in\n              if blocked_token < Nx.numel result then\n                Nx.set_item [ blocked_token ] neg_infinity result\n            end\n          done;\n          result);\n  }\n\nlet min_length ~min_length ~eos_token_ids =\n  {\n    name = Printf.sprintf \"min_length(%d)\" min_length;\n    process =\n      (fun ~prompt_length:_ tokens logits ->\n        if Array.length tokens >= min_length then logits\n        else\n          let result = Nx.copy logits in\n          let vocab_size = Nx.numel result in\n          List.iter\n            (fun eos_id ->\n     
         if eos_id < vocab_size then\n                Nx.set_item [ eos_id ] neg_infinity result)\n            eos_token_ids;\n          result);\n  }\n\nlet min_new_tokens ~min_new_tokens ~eos_token_ids =\n  {\n    name = Printf.sprintf \"min_new_tokens(%d)\" min_new_tokens;\n    process =\n      (fun ~prompt_length tokens logits ->\n        let new_tokens = Array.length tokens - prompt_length in\n        if new_tokens >= min_new_tokens then logits\n        else\n          let result = Nx.copy logits in\n          let vocab_size = Nx.numel result in\n          List.iter\n            (fun eos_id ->\n              if eos_id < vocab_size then\n                Nx.set_item [ eos_id ] neg_infinity result)\n            eos_token_ids;\n          result);\n  }\n\nlet bad_words ~bad_words_ids =\n  {\n    name = \"bad_words\";\n    process =\n      (fun ~prompt_length:_ tokens logits ->\n        let result = Nx.copy logits in\n        let len = Array.length tokens in\n        let vocab_size = Nx.numel result in\n        List.iter\n          (fun bad_sequence ->\n            let seq_len = List.length bad_sequence in\n            if seq_len > 0 && len >= seq_len - 1 then (\n              let prefix_len = seq_len - 1 in\n              let matches = ref true in\n              let prefix = List.rev (List.tl (List.rev bad_sequence)) in\n              List.iteri\n                (fun i expected ->\n                  if tokens.(len - prefix_len + i) <> expected then\n                    matches := false)\n                prefix;\n              if !matches then begin\n                let bad_token = List.nth bad_sequence (seq_len - 1) in\n                if bad_token < vocab_size then\n                  Nx.set_item [ bad_token ] neg_infinity result\n              end))\n          bad_words_ids;\n        result);\n  }\n\nlet force_words ~force_words_ids ~iteration =\n  {\n    name = \"force_words\";\n    process =\n      (fun ~prompt_length:_ _tokens logits ->\n        if iteration >= 
List.length force_words_ids then logits\n        else\n          let forced_tokens = List.nth force_words_ids iteration in\n          let result = Nx.full_like logits neg_infinity in\n          List.iter\n            (fun token_id ->\n              if token_id < Nx.numel result then\n                Nx.set_item [ token_id ] (Nx.item [ token_id ] logits) result)\n            forced_tokens;\n          result);\n  }\n\nlet custom ~name ~process = { name; process }\n\n(* ───── Stopping Criteria ───── *)\n\nlet max_length_criteria ~max_length =\n  {\n    name = Printf.sprintf \"max_length(%d)\" max_length;\n    should_stop =\n      (fun ~prompt_length:_ ~start_time:_ tokens ->\n        Array.length tokens >= max_length);\n  }\n\nlet max_new_tokens_criteria ~max_new_tokens =\n  {\n    name = Printf.sprintf \"max_new_tokens(%d)\" max_new_tokens;\n    should_stop =\n      (fun ~prompt_length ~start_time:_ tokens ->\n        Array.length tokens - prompt_length >= max_new_tokens);\n  }\n\nlet eos_token_criteria ~eos_token_ids =\n  {\n    name = \"eos_token\";\n    should_stop =\n      (fun ~prompt_length:_ ~start_time:_ tokens ->\n        let len = Array.length tokens in\n        if len = 0 then false else List.mem tokens.(len - 1) eos_token_ids);\n  }\n\nlet max_time_criteria ~max_time =\n  {\n    name = Printf.sprintf \"max_time(%.1fs)\" max_time;\n    should_stop =\n      (fun ~prompt_length:_ ~start_time _tokens ->\n        Unix.gettimeofday () -. 
start_time > max_time);\n  }\n\nlet stop_strings_criteria ~stop_strings ~decoder =\n  {\n    name = \"stop_strings\";\n    should_stop =\n      (fun ~prompt_length:_ ~start_time:_ tokens ->\n        let text = decoder tokens in\n        List.exists\n          (fun stop_str -> String_util.contains_substring text stop_str)\n          stop_strings);\n  }\n\nlet custom_criteria ~name ~should_stop = { name; should_stop }\n\n(* ───── Utilities ───── *)\n\nlet apply_processors ~processors ~prompt_length ~tokens ~logits =\n  List.fold_left\n    (fun acc processor -> processor.process ~prompt_length tokens acc)\n    logits processors\n\nlet check_stopping ~criteria ~prompt_length ~start_time ~tokens =\n  List.exists\n    (fun criterion -> criterion.should_stop ~prompt_length ~start_time tokens)\n    criteria\n\n(* ───── Main Generation Functions ───── *)\n\ntype generation_output = {\n  sequences : int array list;\n  scores : float list list option;\n}\n\nlet sample_from_logits logits =\n  let probs = Nx.softmax logits in\n  let probs_arr = Nx.to_array probs in\n  let r = Random.float 1.0 in\n  let cumsum = ref 0.0 in\n  (* Default to the last token: float rounding can leave the final cumsum\n     just below r, in which case the loop never fires. *)\n  let result = ref (Array.length probs_arr - 1) in\n  (try\n     for i = 0 to Array.length probs_arr - 1 do\n       cumsum := !cumsum +. 
probs_arr.(i);\n       if !cumsum > r then begin\n         result := i;\n         raise_notrace Exit\n       end\n     done\n   with Exit -> ());\n  !result\n\nlet argmax logits = Int32.to_int (Nx.item [ 0 ] (Nx.argmax logits))\n\nlet generate ~model ?(input_ids = [||]) ?(generation_config = default)\n    ?(logits_processor = []) ?(stopping_criteria = []) () =\n  let start_time = Unix.gettimeofday () in\n  let prompt_length = Array.length input_ids in\n\n  let processors =\n    let ps = [] in\n    let ps =\n      if generation_config.temperature <> 1.0 then\n        temperature_warper ~temperature:generation_config.temperature :: ps\n      else ps\n    in\n    let ps =\n      if generation_config.top_k > 0 then\n        top_k_warper ~k:generation_config.top_k :: ps\n      else ps\n    in\n    let ps =\n      if generation_config.top_p < 1.0 then\n        top_p_warper ~p:generation_config.top_p :: ps\n      else ps\n    in\n    let ps =\n      if generation_config.repetition_penalty <> 1.0 then\n        repetition_penalty ~penalty:generation_config.repetition_penalty :: ps\n      else ps\n    in\n    let ps =\n      if generation_config.no_repeat_ngram_size > 0 then\n        no_repeat_ngram ~ngram_size:generation_config.no_repeat_ngram_size :: ps\n      else ps\n    in\n    let eos_ids =\n      match generation_config.eos_token_id with\n      | Some id -> id :: generation_config.eos_token_ids\n      | None -> generation_config.eos_token_ids\n    in\n    let ps =\n      if generation_config.min_length > 0 then\n        min_length ~min_length:generation_config.min_length\n          ~eos_token_ids:eos_ids\n        :: ps\n      else ps\n    in\n    let ps =\n      if generation_config.min_new_tokens > 0 then\n        min_new_tokens ~min_new_tokens:generation_config.min_new_tokens\n          ~eos_token_ids:eos_ids\n        :: ps\n      else ps\n    in\n    ps @ logits_processor\n  in\n\n  let criteria =\n    let cs = [] in\n    let cs =\n      max_length_criteria 
~max_length:generation_config.max_length :: cs\n    in\n    let cs =\n      match generation_config.max_new_tokens with\n      | Some max_new -> max_new_tokens_criteria ~max_new_tokens:max_new :: cs\n      | None -> cs\n    in\n    let eos_ids =\n      match generation_config.eos_token_id with\n      | Some id -> id :: generation_config.eos_token_ids\n      | None -> generation_config.eos_token_ids\n    in\n    let cs =\n      if eos_ids <> [] then eos_token_criteria ~eos_token_ids:eos_ids :: cs\n      else cs\n    in\n    cs @ stopping_criteria\n  in\n\n  let tokens_ref = ref (Array.copy input_ids) in\n\n  let rec generate_loop () =\n    let current_tokens = !tokens_ref in\n    if\n      Array.length current_tokens > prompt_length\n      && check_stopping ~criteria ~prompt_length ~start_time\n           ~tokens:current_tokens\n    then current_tokens\n    else begin\n      let raw_logits = model current_tokens in\n      let processed =\n        apply_processors ~processors ~prompt_length ~tokens:current_tokens\n          ~logits:raw_logits\n      in\n      let next_token =\n        if generation_config.do_sample then sample_from_logits processed\n        else argmax processed\n      in\n      tokens_ref := Array.append current_tokens [| next_token |];\n      generate_loop ()\n    end\n  in\n\n  let sequences = generate_loop () in\n  { sequences = [ sequences ]; scores = None }\n\nlet generate_text ~model ~tokenizer ~decoder ?(prompt = \"\")\n    ?(generation_config = default) ?(logits_processor = [])\n    ?(stopping_criteria = []) () =\n  let input_ids = tokenizer prompt in\n  let output =\n    generate ~model ~input_ids ~generation_config ~logits_processor\n      ~stopping_criteria ()\n  in\n  match output.sequences with seq :: _ -> decoder seq | [] -> \"\"\n"
  },
  {
    "path": "dev/mimir/lib/sampler.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Text generation with composable logits processors.\n\n    Provides the autoregressive decode loop, composable logits processors,\n    stopping criteria, and generation configuration for language model\n    inference. Operates on nx tensors for logits. *)\n\n(** {1 Core Types} *)\n\ntype logits = (float, Bigarray.float32_elt) Nx.t\n(** 1D float32 tensor of unnormalized token probabilities. Length equals\n    vocabulary size. *)\n\ntype token_ids = int array\n(** Sequence of token IDs representing encoded text. *)\n\ntype logits_processor = {\n  name : string;\n  process : prompt_length:int -> token_ids -> logits -> logits;\n}\n(** Transforms logits before sampling. *)\n\ntype logits_processor_list = logits_processor list\n\ntype stopping_criterion = {\n  name : string;\n  should_stop : prompt_length:int -> start_time:float -> token_ids -> bool;\n}\n(** Determines when to end generation. 
*)\n\ntype stopping_criteria_list = stopping_criterion list\n\n(** {1 Generation Configuration} *)\n\ntype generation_config = {\n  max_length : int;\n  max_new_tokens : int option;\n  min_length : int;\n  min_new_tokens : int;\n  do_sample : bool;\n  temperature : float;\n  top_k : int;\n  top_p : float;\n  repetition_penalty : float;\n  no_repeat_ngram_size : int;\n  bad_words_ids : int list list;\n  force_words_ids : int list list;\n  pad_token_id : int option;\n  bos_token_id : int option;\n  eos_token_id : int option;\n  eos_token_ids : int list;\n}\n\nval default : generation_config\n\n(** {2 Builder Pattern} *)\n\nval with_temperature : float -> generation_config -> generation_config\nval with_top_k : int -> generation_config -> generation_config\nval with_top_p : float -> generation_config -> generation_config\nval with_repetition_penalty : float -> generation_config -> generation_config\nval with_max_length : int -> generation_config -> generation_config\nval with_max_new_tokens : int -> generation_config -> generation_config\nval with_min_length : int -> generation_config -> generation_config\nval with_min_new_tokens : int -> generation_config -> generation_config\nval with_no_repeat_ngram : int -> generation_config -> generation_config\nval with_do_sample : bool -> generation_config -> generation_config\n\n(** {2 Presets} *)\n\nval creative_writing : generation_config\nval chat : generation_config\nval code_generation : generation_config\nval factual : generation_config\nval from_preset : string -> generation_config\n\n(** {1 Logits Processors} *)\n\nval temperature_warper : temperature:float -> logits_processor\nval top_k_warper : k:int -> logits_processor\nval top_p_warper : p:float -> logits_processor\nval repetition_penalty : penalty:float -> logits_processor\nval no_repeat_ngram : ngram_size:int -> logits_processor\nval min_length : min_length:int -> eos_token_ids:int list -> logits_processor\n\nval min_new_tokens :\n  min_new_tokens:int -> 
eos_token_ids:int list -> logits_processor\n\nval bad_words : bad_words_ids:int list list -> logits_processor\n\nval force_words :\n  force_words_ids:int list list -> iteration:int -> logits_processor\n\nval custom :\n  name:string ->\n  process:(prompt_length:int -> token_ids -> logits -> logits) ->\n  logits_processor\n\n(** {1 Stopping Criteria} *)\n\nval max_length_criteria : max_length:int -> stopping_criterion\nval max_new_tokens_criteria : max_new_tokens:int -> stopping_criterion\nval eos_token_criteria : eos_token_ids:int list -> stopping_criterion\nval max_time_criteria : max_time:float -> stopping_criterion\n\nval stop_strings_criteria :\n  stop_strings:string list ->\n  decoder:(token_ids -> string) ->\n  stopping_criterion\n\nval custom_criteria :\n  name:string ->\n  should_stop:(prompt_length:int -> start_time:float -> token_ids -> bool) ->\n  stopping_criterion\n\n(** {1 Generation} *)\n\ntype generation_output = {\n  sequences : int array list;\n  scores : float list list option;\n}\n\nval generate :\n  model:(token_ids -> logits) ->\n  ?input_ids:token_ids ->\n  ?generation_config:generation_config ->\n  ?logits_processor:logits_processor_list ->\n  ?stopping_criteria:stopping_criteria_list ->\n  unit ->\n  generation_output\n\nval generate_text :\n  model:(token_ids -> logits) ->\n  tokenizer:(string -> token_ids) ->\n  decoder:(token_ids -> string) ->\n  ?prompt:string ->\n  ?generation_config:generation_config ->\n  ?logits_processor:logits_processor_list ->\n  ?stopping_criteria:stopping_criteria_list ->\n  unit ->\n  string\n\n(** {1 Utilities} *)\n\nval apply_processors :\n  processors:logits_processor_list ->\n  prompt_length:int ->\n  tokens:token_ids ->\n  logits:logits ->\n  logits\n\nval check_stopping :\n  criteria:stopping_criteria_list ->\n  prompt_length:int ->\n  start_time:float ->\n  tokens:token_ids ->\n  bool\n"
  },
  {
    "path": "dev/mimir/lib/string_util.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nlet contains_substring s sub =\n  let len_s = String.length s in\n  let len_sub = String.length sub in\n  if len_sub = 0 then true\n  else if len_sub > len_s then false\n  else\n    let rec check i =\n      if i > len_s - len_sub then false\n      else if String.sub s i len_sub = sub then true\n      else check (i + 1)\n    in\n    check 0\n"
  },
  {
    "path": "dev/umbra/README.md",
    "content": "# Umbra\n\nComputational astronomy for OCaml, powered by [Nx](../../packages/nx/) and [Rune](../../packages/rune/)\n\nUmbra provides dimensionally-typed physical quantities, cosmological distances,\nspectral energy distributions, dust extinction, synthetic photometry, coordinate\ntransforms, time scales, catalog cross-matching, and weak lensing survey science.\nAll computations operate on Nx tensors and are differentiable through Rune --\nfit cosmological parameters, propagate uncertainties via Jacobians, or sample\nposteriors with HMC, all from the same forward model.\n\n## Quick Start\n\nCompute the luminosity distance to a galaxy at redshift 0.5:\n\n```ocaml\nopen Umbra\n\nlet () =\n  let f64 = Nx.float64 in\n  let z = Nx.scalar f64 0.5 in\n  let dl = Cosmo.luminosity_distance ~p:Cosmo.planck18 z in\n  Printf.printf \"d_L(z=0.5) = %.1f Mpc\\n\"\n    (Nx.item [] (Unit.Length.in_mpc dl))\n```\n\nFit stellar temperature from photometry with automatic derivatives:\n\n```ocaml\nlet model params =\n  let temp = Unit.Temperature.of_kelvin (Nx.exp (Nx.slice [ I 0 ] params)) in\n  let av = Nx.reshape [||] (Nx.slice [ I 1 ] params) in\n  let rv = Nx.scalar Nx.float64 3.1 in\n  List.map (fun bp ->\n    let wave = Photometry.wavelength bp in\n    let sed =\n      Spectrum.blackbody ~temperature:temp ~wavelength:wave\n      |> Extinction.apply (Extinction.ccm89 ~rv) ~av\n      |> Spectrum.as_flux_density\n    in\n    Photometry.ab_mag bp sed) bands\n  |> Nx.stack ~axis:0\n\n(* Rune differentiates through the entire pipeline *)\nlet loss, grad = Rune.value_and_grad chi2 params\n```\n\n## Features\n\n- **Dimensional types**: `Unit.Length`, `Unit.Mass`, `Unit.Time`, `Unit.Angle`, etc. 
with compile-time safety\n- **Physical constants**: CODATA 2022 and IAU 2015 via `Const`\n- **Cosmology**: LCDM, wCDM, w0waCDM distances, growth factors, and matter power spectra via `Cosmo`\n- **Spectra**: blackbody, power-law, and line profiles (Gaussian, Lorentzian, Voigt) via `Spectrum`\n- **Extinction**: CCM89, Fitzpatrick99, O'Donnell94, Calzetti00 dust laws via `Extinction`\n- **Photometry**: AB, ST, and Vega magnitudes through standard filter bandpasses via `Photometry`\n- **Filters**: SDSS, Johnson-Cousins, 2MASS, Gaia DR3, Rubin/LSST, Euclid via `Filters`\n- **Coordinates**: ICRS, Galactic, Ecliptic, Supergalactic frame transforms and kd-tree cross-matching via `Coord`\n- **Time**: UTC, TAI, TT, TDB time scales with phantom-typed safety via `Time`\n- **Observer geometry**: altitude-azimuth coordinates and airmass via `Altaz`\n- **Survey science**: angular power spectra and Fisher forecasting via `Survey`\n- **FITS I/O**: image and table read/write via `Umbra_fits`\n- **Fully differentiable**: all forward models work with Rune's autodiff, Jacobians, and MCMC\n\n## Examples\n\n| Example | Concept |\n|---------|---------|\n| [`01-constants-and-units`](examples/01-constants-and-units/) | Type-safe physical quantities and conversions |\n| [`02-cosmological-distances`](examples/02-cosmological-distances/) | LCDM distances and SN Ia fitting |\n| [`03-blackbody-fitting`](examples/03-blackbody-fitting/) | Fit stellar temperature from photometry |\n| [`04-extinction-and-magnitudes`](examples/04-extinction-and-magnitudes/) | Dust extinction, magnitude systems, K-corrections |\n| [`05-sed-fitting`](examples/05-sed-fitting/) | Full SED pipeline: blackbody, extinction, photometry |\n| [`06-coordinates-and-time`](examples/06-coordinates-and-time/) | Frame transforms, time scales, observer geometry |\n| [`07-batch-photometry`](examples/07-batch-photometry/) | Batched operations over parameter grids |\n| [`08-photometric-redshifts`](examples/08-photometric-redshifts/) | 
Two-stage photo-z: grid search + gradient refinement |\n| [`09-gravitational-lensing`](examples/09-gravitational-lensing/) | Point-mass lens model parameter fitting |\n| [`10-uncertainty-propagation`](examples/10-uncertainty-propagation/) | AD Jacobians for error propagation vs Monte Carlo |\n| [`11-bayesian-sed`](examples/11-bayesian-sed/) | Fisher matrix + HMC posterior sampling |\n| [`12-survey-optimization`](examples/12-survey-optimization/) | Differentiable Fisher forecasting for survey design |\n\n## Papers\n\n- [**Perlmutter et al. 1999**](papers/perlmutter1999/) -- Reproducing the Nobel Prize-winning discovery of cosmic acceleration using the Pantheon+ dataset\n\n## Contributing\n\nSee the [Raven monorepo README](../../README.md) for guidelines.\n\n## License\n\nISC License. See [LICENSE](../../LICENSE) for details.\n"
  },
  {
    "path": "dev/umbra/dune-project",
    "content": "(lang dune 3.21)\n\n(name umbra)\n\n(package\n (name umbra)\n (synopsis \"Astronomy library for OCaml\")\n (description\n  \"Physical units, celestial coordinates, FITS I/O, cosmological distances, and catalog cross-matching. Built on Nx and Talon.\")\n (depends\n  (ocaml\n   (>= 5.2.0))\n  dune\n  (nx\n   (>= 1.0.0~alpha3))\n  (talon\n   (>= 1.0.0~alpha3))\n  (windtrap :with-test)))\n"
  },
  {
    "path": "dev/umbra/examples/01-constants-and-units/README.md",
    "content": "# `01-constants-and-units`\n\nIntroduction to Umbra's type-safe unit system and physical constants. Creates\nquantities in different units, converts between them, and demonstrates how\nphantom types prevent mixing incompatible dimensions at compile time.\n\n```bash\ndune exec dev/umbra/examples/01-constants-and-units/main.exe\n```\n\n## What You'll Learn\n\n- Creating quantities with scalar constructors (`Length.pc`, `Angle.deg`, `Mass.solar_mass`)\n- Converting between units (`Length.in_ly`, `Angle.in_arcsec`)\n- Adding quantities of the same dimension (`Unit.(+)`)\n- Using physical constants (`Const.c`, `Const.h_si`, `Const.k_b_si`)\n- Cross-dimension conversions (`parallax_to_distance`, `wavelength_to_frequency`)\n- Batch operations on tensor-valued quantities\n\n## Key Functions\n\n| Function                    | Purpose                                      |\n| --------------------------- | -------------------------------------------- |\n| `Length.pc`, `Length.au`     | Create length quantities in parsecs, AU       |\n| `Length.in_m`, `Length.in_ly`| Extract values in metres, light-years         |\n| `Angle.deg`, `Angle.arcsec` | Create angles in degrees, arcseconds          |\n| `Temperature.kelvin`        | Create temperature quantities                 |\n| `Mass.solar_mass`           | Create mass in solar masses                   |\n| `Const.c`, `Const.h_si`    | Speed of light, Planck constant               |\n| `parallax_to_distance`      | Convert stellar parallax to distance          |\n| `wavelength_to_frequency`   | Convert wavelength to frequency via c/lambda  |\n\n## Try It\n\n1. Compute the Schwarzschild radius of the Sun using `Const.g_si`, `Const.solar_mass`, and `Const.c`.\n2. Add `Length.ly 4.246` (Proxima Centauri) and check it matches the parallax-derived distance.\n3. 
Use `Unit.doppler_optical` to compute the observed wavelength of H-alpha at a radial velocity of 100 km/s.\n\n## Next Steps\n\nContinue to [02-cosmological-distances](../02-cosmological-distances/) to compute\ndistances and times in an expanding universe.\n"
  },
  {
    "path": "dev/umbra/examples/01-constants-and-units/dune",
    "content": "(executable\n (name main)\n (libraries nx umbra))\n"
  },
  {
    "path": "dev/umbra/examples/01-constants-and-units/main.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Type-safe units and physical constants.\n\n   Introduces Umbra's dimensional type system: quantities carry phantom types\n   that prevent mixing incompatible dimensions at compile time. Shows how to\n   create, convert, and combine quantities in different units, and how to use\n   physical and astronomical constants. *)\n\nopen Nx\nopen Umbra\n\nlet f64 = Nx.float64\n\nlet () =\n  Printf.printf \"Type-safe units and physical constants\\n\";\n  Printf.printf \"======================================\\n\\n\";\n\n  (* --- Length: metres, parsecs, AU, light-years --- *)\n  Printf.printf \"Length conversions\\n\";\n  Printf.printf \"------------------\\n\";\n  let d_pc = Unit.Length.pc 1.0 in\n  Printf.printf \"  1 parsec = %.4e m\\n\" (item [] (Unit.Length.in_m d_pc));\n  Printf.printf \"  1 parsec = %.6f ly\\n\" (item [] (Unit.Length.in_ly d_pc));\n  Printf.printf \"  1 parsec = %.0f AU\\n\" (item [] (Unit.Length.in_au d_pc));\n\n  let d_au = Unit.Length.au 1.0 in\n  Printf.printf \"  1 AU     = %.4e m\\n\" (item [] (Unit.Length.in_m d_au));\n  Printf.printf \"  1 AU     = %.4e pc\\n\\n\" (item [] (Unit.Length.in_pc d_au));\n\n  (* Adding lengths of different units — the type system ensures consistency *)\n  let d_total = Unit.( + ) (Unit.Length.kpc 10.0) (Unit.Length.pc 500.0) in\n  Printf.printf \"  10 kpc + 500 pc = %.3f kpc\\n\\n\"\n    (item [] (Unit.Length.in_kpc d_total));\n\n  (* --- Angle: degrees, radians, arcseconds --- *)\n  Printf.printf \"Angle conversions\\n\";\n  Printf.printf \"-----------------\\n\";\n  let a_deg = Unit.Angle.deg 1.0 in\n  Printf.printf \"  1 degree = %.6f rad\\n\" (item [] (Unit.Angle.in_rad a_deg));\n  Printf.printf \"  1 degree = %.1f arcmin\\n\"\n    (item [] 
(Unit.Angle.in_arcmin a_deg));\n  Printf.printf \"  1 degree = %.1f arcsec\\n\"\n    (item [] (Unit.Angle.in_arcsec a_deg));\n\n  let a_mas = Unit.Angle.mas 1.0 in\n  Printf.printf \"  1 mas    = %.4e arcsec\\n\\n\"\n    (item [] (Unit.Angle.in_arcsec a_mas));\n\n  (* --- Temperature --- *)\n  Printf.printf \"Temperature\\n\";\n  Printf.printf \"-----------\\n\";\n  let sun_t = Unit.Temperature.kelvin 5778.0 in\n  Printf.printf \"  Sun surface: %.0f K\\n\"\n    (item [] (Unit.Temperature.in_kelvin sun_t));\n  let sirius_t = Unit.Temperature.kelvin 9940.0 in\n  Printf.printf \"  Sirius:      %.0f K\\n\\n\"\n    (item [] (Unit.Temperature.in_kelvin sirius_t));\n\n  (* --- Time durations --- *)\n  Printf.printf \"Time durations\\n\";\n  Printf.printf \"--------------\\n\";\n  let t_yr = Unit.Time.yr 1.0 in\n  Printf.printf \"  1 Julian year = %.0f days\\n\"\n    (item [] (Unit.Time.in_day t_yr));\n  Printf.printf \"  1 Julian year = %.2f s\\n\" (item [] (Unit.Time.in_s t_yr));\n\n  let t_gyr = Unit.Time.gyr 13.8 in\n  Printf.printf \"  Age of universe ~ %.2e yr\\n\\n\"\n    (item [] (Unit.Time.in_yr t_gyr));\n\n  (* --- Mass: kg, solar masses, Earth masses --- *)\n  Printf.printf \"Mass conversions\\n\";\n  Printf.printf \"----------------\\n\";\n  let m_sun = Unit.Mass.solar_mass 1.0 in\n  Printf.printf \"  1 solar mass = %.4e kg\\n\" (item [] (Unit.Mass.in_kg m_sun));\n  Printf.printf \"  1 solar mass = %.0f Earth masses\\n\"\n    (item [] (Unit.Mass.in_earth_mass m_sun));\n  Printf.printf \"  1 solar mass = %.1f Jupiter masses\\n\\n\"\n    (item [] (Unit.Mass.in_jupiter_mass m_sun));\n\n  (* --- Physical constants --- *)\n  Printf.printf \"Physical constants\\n\";\n  Printf.printf \"------------------\\n\";\n  Printf.printf \"  c     = %.0f m/s\\n\" (Unit.to_float Const.c);\n  Printf.printf \"  h     = %.4e J s\\n\" Const.h_si;\n  Printf.printf \"  k_B   = %.4e J/K\\n\" Const.k_b_si;\n  Printf.printf \"  G     = %.4e m^3 kg^-1 s^-2\\n\" Const.g_si;\n  Printf.printf 
\"  sigma = %.4e W m^-2 K^-4\\n\\n\" Const.sigma_sb_si;\n\n  (* --- Astronomical constants --- *)\n  Printf.printf \"Astronomical constants\\n\";\n  Printf.printf \"----------------------\\n\";\n  Printf.printf \"  L_sun = %.4e W\\n\"\n    (item [] (Unit.Power.in_w Const.solar_luminosity));\n  Printf.printf \"  R_sun = %.4e m\\n\"\n    (item [] (Unit.Length.in_m Const.solar_radius));\n  Printf.printf \"  M_sun = %.4e kg\\n\"\n    (item [] (Unit.Mass.in_kg Const.solar_mass));\n  Printf.printf \"  1 AU  = %.4e m\\n\" (item [] (Unit.Length.in_m Const.au));\n  Printf.printf \"  1 pc  = %.4e m\\n\\n\" (item [] (Unit.Length.in_m Const.pc));\n\n  (* --- Cross-dimension: parallax to distance --- *)\n  Printf.printf \"Parallax to distance\\n\";\n  Printf.printf \"--------------------\\n\";\n  let parallax = Unit.Angle.arcsec 1.0 in\n  let dist = Unit.parallax_to_distance parallax in\n  Printf.printf \"  1 arcsec parallax -> %.3f pc\\n\"\n    (item [] (Unit.Length.in_pc dist));\n\n  let proxima_parallax = Unit.Angle.mas 768.5 in\n  let proxima_dist = Unit.parallax_to_distance proxima_parallax in\n  Printf.printf \"  Proxima Cen (768.5 mas) -> %.3f pc\\n\"\n    (item [] (Unit.Length.in_pc proxima_dist));\n\n  (* --- Tensor operations: batch unit conversions --- *)\n  Printf.printf \"\\nBatch operations\\n\";\n  Printf.printf \"----------------\\n\";\n  let wavelengths_nm =\n    create f64 [| 5 |] [| 380.0; 450.0; 550.0; 650.0; 750.0 |]\n  in\n  let wavelengths = Unit.Length.of_nm wavelengths_nm in\n  let wavelengths_angstrom = Unit.Length.in_angstrom wavelengths in\n  Printf.printf \"  Wavelengths (nm):       %s\\n\"\n    (Nx.data_to_string wavelengths_nm);\n  Printf.printf \"  Wavelengths (angstrom): %s\\n\"\n    (Nx.data_to_string wavelengths_angstrom);\n\n  (* Convert wavelength to frequency *)\n  let freqs = Unit.wavelength_to_frequency wavelengths in\n  Printf.printf \"  Frequencies (Hz):       %s\\n\"\n    (Nx.data_to_string (Unit.Frequency.in_hz freqs));\n\n  (* Wien's 
law: peak wavelength of a blackbody *)\n  Printf.printf \"\\nWien's displacement law\\n\";\n  Printf.printf \"----------------------\\n\";\n  let b_wien = Const.b_wien_si in\n  let sun_peak_m = b_wien /. item [] (Unit.Temperature.in_kelvin sun_t) in\n  Printf.printf \"  Sun (T=%.0f K): peak at %.0f nm\\n\"\n    (item [] (Unit.Temperature.in_kelvin sun_t))\n    (sun_peak_m *. 1e9);\n  let sirius_peak_m = b_wien /. item [] (Unit.Temperature.in_kelvin sirius_t) in\n  Printf.printf \"  Sirius (T=%.0f K): peak at %.0f nm\\n\"\n    (item [] (Unit.Temperature.in_kelvin sirius_t))\n    (sirius_peak_m *. 1e9)\n"
  },
  {
    "path": "dev/umbra/examples/02-cosmological-distances/README.md",
    "content": "# `02-cosmological-distances`\n\nCosmological distance calculations and parameter fitting. First prints a\ndistance table for the Planck 2018 cosmology, then fits H0 and Omega_m from\nsynthetic Type Ia supernova distance moduli using gradient descent.\n\n```bash\ndune exec dev/umbra/examples/02-cosmological-distances/main.exe\n```\n\n## What You'll Learn\n\n- Using preset cosmologies (`Cosmo.planck18`)\n- Computing distances (`comoving_distance`, `luminosity_distance`, `angular_diameter_distance`)\n- Computing distance modulus and lookback time\n- Building differentiable cosmological models with `create_flat_lcdm`\n- Fitting cosmological parameters with Rune autodiff and Vega optimizers\n\n## Key Functions\n\n| Function                      | Purpose                                       |\n| ----------------------------- | --------------------------------------------- |\n| `Cosmo.planck18`              | Planck 2018 flat LCDM preset                  |\n| `Cosmo.comoving_distance`     | Line-of-sight comoving distance               |\n| `Cosmo.luminosity_distance`   | Luminosity distance at redshift z              |\n| `Cosmo.distance_modulus`      | Distance modulus mu = 5 log10(d_L/Mpc) + 25   |\n| `Cosmo.lookback_time`         | Time since light was emitted                   |\n| `Cosmo.age`                   | Age of the universe at redshift z              |\n| `Cosmo.create_flat_lcdm`     | Tensor-parameterized cosmology for autodiff    |\n| `Rune.value_and_grads`        | Forward pass + gradient computation            |\n\n## How It Works\n\nThe distance modulus forward model uses `Cosmo.distance_modulus`, which\ninternally integrates E(z) via 16-point Gauss-Legendre quadrature. 
Since all\noperations are Nx tensor ops, gradients flow through the entire pipeline\nautomatically via Rune.\n\nThe optimizer starts from H0=65, Omega_m=0.25 and converges toward the true\nvalues (H0~73, Omega_m~0.3) that generated the synthetic data.\n\n## Try It\n\n1. Change the preset to `Cosmo.wmap9` and compare the distance table.\n2. Add `Omega_L` as a free parameter using `create_lcdm` for a non-flat model.\n3. Use `Cosmo.z_at_value` to find the redshift where the lookback time is 10 Gyr.\n\n## Next Steps\n\nContinue to [03-blackbody-fitting](../03-blackbody-fitting/) to fit stellar\ntemperatures from photometry.\n"
  },
  {
    "path": "dev/umbra/examples/02-cosmological-distances/dune",
    "content": "(executable\n (name main)\n (libraries nx rune vega umbra))\n"
  },
  {
    "path": "dev/umbra/examples/02-cosmological-distances/main.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Differentiable cosmological parameter fitting from Type Ia supernova distance\n   moduli.\n\n   Fits H0 (Hubble constant) and Omega_m (matter density fraction) by gradient\n   descent on the distance modulus residuals. The forward model uses\n   Umbra.Cosmo.distance_modulus directly -- its Gauss-Legendre quadrature,\n   luminosity distance, and distance modulus are all Nx tensor operations,\n   making them natively differentiable through Rune's autodiff.\n\n   Also demonstrates basic cosmological distance queries: comoving distance,\n   luminosity distance, angular diameter distance, lookback time, and the age of\n   the universe at various redshifts. *)\n\nopen Nx\nopen Umbra\n\nlet f64 = Nx.float64\n\n(* --- Part 1: Distance table for the Planck 2018 cosmology --- *)\n\nlet print_distance_table () =\n  Printf.printf \"Cosmological distances (Planck 2018)\\n\";\n  Printf.printf \"====================================\\n\\n\";\n  let p = Cosmo.planck18 in\n  Printf.printf \"  H0      = %.2f km/s/Mpc\\n\" (item [] (Cosmo.h0 p));\n  Printf.printf \"  Omega_m = %.4f\\n\" (item [] (Cosmo.omega_m p));\n  Printf.printf \"  Omega_L = %.4f\\n\\n\" (item [] (Cosmo.omega_l p));\n\n  Printf.printf \"%6s  %10s  %10s  %10s  %8s  %8s\\n\" \"z\" \"d_C (Mpc)\" \"d_L (Mpc)\"\n    \"d_A (Mpc)\" \"mu\" \"t_lb (Gyr)\";\n  Printf.printf \"%6s  %10s  %10s  %10s  %8s  %8s\\n\" \"------\" \"----------\"\n    \"----------\" \"----------\" \"--------\" \"----------\";\n  let redshifts = [| 0.01; 0.1; 0.3; 0.5; 1.0; 2.0; 3.0; 5.0 |] in\n  Array.iter\n    (fun z ->\n      let zv = scalar f64 z in\n      let d_c = item [] (Unit.Length.in_mpc (Cosmo.comoving_distance ~p zv)) in\n      let d_l =\n        item [] 
(Unit.Length.in_mpc (Cosmo.luminosity_distance ~p zv))\n      in\n      let d_a =\n        item [] (Unit.Length.in_mpc (Cosmo.angular_diameter_distance ~p zv))\n      in\n      let mu = item [] (Cosmo.distance_modulus ~p zv) in\n      let t_lb = item [] (Unit.Time.in_gyr (Cosmo.lookback_time ~p zv)) in\n      Printf.printf \"%6.2f  %10.1f  %10.1f  %10.1f  %8.2f  %8.2f\\n\" z d_c d_l\n        d_a mu t_lb)\n    redshifts;\n  Printf.printf \"\\n\";\n\n  (* Age of the universe *)\n  let age_now = item [] (Unit.Time.in_gyr (Cosmo.age ~p (scalar f64 0.0))) in\n  Printf.printf \"  Age of the universe (z=0): %.2f Gyr\\n\\n\" age_now\n\n(* --- Part 2: Fit H0 and Omega_m from SN Ia data --- *)\n\n(* Representative SN Ia data points (z, observed distance modulus). Based on\n   Pantheon+ compilation values for flat LCDM with H0 ~ 73, Omega_m ~ 0.3. *)\nlet z_arr = [| 0.01; 0.03; 0.08; 0.15; 0.25; 0.40; 0.55; 0.70; 0.85; 1.00 |]\nlet n_sn = Array.length z_arr\n\nlet mu_obs =\n  [| 33.07; 35.47; 37.62; 39.07; 40.24; 41.42; 42.23; 42.85; 43.34; 43.74 |]\n\n(* Forward model: compute distance modulus for all SNe. The differentiable\n   parameters are H0 and Omega_m, which flow through Cosmo.distance_modulus via\n   Nx tensor operations. 
*)\nlet loss params =\n  match params with\n  | [ h0; omega_m ] ->\n      let p = Cosmo.create_flat_lcdm ~h0 ~omega_m in\n      let total = ref (scalar f64 0.0) in\n      for i = 0 to n_sn - 1 do\n        let z_i = scalar f64 z_arr.(i) in\n        let mu_pred = Cosmo.distance_modulus ~p z_i in\n        let mu_obs_i = scalar f64 mu_obs.(i) in\n        let residual = sub mu_pred mu_obs_i in\n        total := add !total (square residual)\n      done;\n      div_s !total (Float.of_int n_sn)\n  | _ -> failwith \"expected [h0; omega_m]\"\n\nlet fit_cosmology () =\n  Printf.printf \"Fitting H0 and Omega_m from Type Ia supernovae\\n\";\n  Printf.printf \"===============================================\\n\";\n  Printf.printf \"  Data: %d distance moduli (Pantheon+-like)\\n\" n_sn;\n  Printf.printf \"  Method: Adam optimizer, 300 steps\\n\";\n  Printf.printf \"  Model: flat LCDM via Cosmo.distance_modulus\\n\\n\";\n\n  let algo = Vega.adam (Vega.Schedule.constant 0.5) in\n  let h0 = ref (scalar f64 65.0) in\n  let omega_m = ref (scalar f64 0.25) in\n  let states = [| Vega.init algo !h0; Vega.init algo !omega_m |] in\n  let steps = 300 in\n\n  Printf.printf \"%5s  %10s  %8s  %8s\\n\" \"step\" \"loss\" \"H0\" \"Omega_m\";\n  Printf.printf \"%5s  %10s  %8s  %8s\\n\" \"-----\" \"----------\" \"--------\"\n    \"--------\";\n\n  let refs = [| h0; omega_m |] in\n  for i = 0 to steps - 1 do\n    let loss_val, grads = Rune.value_and_grads loss [ !h0; !omega_m ] in\n    List.iteri\n      (fun j g ->\n        let p, s = Vega.step states.(j) ~grad:g ~param:!(refs.(j)) in\n        refs.(j) := p;\n        states.(j) <- s)\n      grads;\n    if i mod 50 = 0 || i = steps - 1 then\n      Printf.printf \"%5d  %10.6f  %8.2f  %8.4f\\n\" i (item [] loss_val)\n        (item [] !h0) (item [] !omega_m)\n  done;\n\n  Printf.printf \"\\nFitted parameters:\\n\";\n  Printf.printf \"  H0      = %.2f km/s/Mpc\\n\" (item [] !h0);\n  Printf.printf \"  Omega_m = %.4f\\n\" (item [] !omega_m)\n\nlet () =\n  
print_distance_table ();\n  fit_cosmology ()\n"
  },
  {
    "path": "dev/umbra/examples/03-blackbody-fitting/README.md",
    "content": "# `03-blackbody-fitting`\n\nFits the effective temperature and luminosity normalization of a star from\nsynthetic UGRIZ broadband photometry using gradient descent on a blackbody\nmodel.\n\n```bash\ndune exec dev/umbra/examples/03-blackbody-fitting/main.exe\n```\n\n## What You'll Learn\n\n- Using physical constants (`Const.h_si`, `Const.k_b_si`, `Const.c`)\n- Building a differentiable Planck function from Nx tensor operations\n- Parameterizing in log-space for numerical stability\n- Fitting chi-squared with Rune autodiff and Vega's Adam optimizer\n\n## Key Functions\n\n| Function              | Purpose                                            |\n| --------------------- | -------------------------------------------------- |\n| `Const.h_si`          | Planck constant (J s)                              |\n| `Const.k_b_si`        | Boltzmann constant (J/K)                           |\n| `Const.c`             | Speed of light (typed velocity)                    |\n| `Unit.to_float`       | Extract scalar SI value from a typed constant      |\n| `Rune.value_and_grads`| Compute loss and gradients in one pass             |\n| `Vega.adam`           | Adam optimizer                                     |\n| `Vega.step`           | Apply one optimization step                        |\n\n## How It Works\n\nThe Planck spectral radiance B(lambda, T) = 2hc^2 / lambda^5 / (exp(hc /\nlambda k T) - 1) is implemented entirely with Nx tensor operations. Since Rune\ncan differentiate any Nx computation, gradients of chi-squared with respect to\nlog(T) and log(A) are computed automatically.\n\nThe optimizer starts from T=5000 K and converges toward the true temperature\nof 5800 K (Sun-like star). Log-space parameterization ensures positivity and\nimproves gradient conditioning.\n\n## Try It\n\n1. Change the true temperature to 10000 K (A-type star) and observe how the\n   SED shape changes.\n2. Add a third parameter for a dust extinction term.\n3. 
Replace the central-wavelength approximation with proper filter integration\n   using `Photometry.ab_mag` (see example 05).\n\n## Next Steps\n\nContinue to [04-extinction-and-magnitudes](../04-extinction-and-magnitudes/) to\nlearn about dust extinction, K-corrections, and magnitude systems.\n"
  },
  {
    "path": "dev/umbra/examples/03-blackbody-fitting/dune",
    "content": "(executable\n (name main)\n (libraries nx rune vega umbra))\n"
  },
  {
    "path": "dev/umbra/examples/03-blackbody-fitting/main.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Differentiable blackbody SED fitting.\n\n   Given broadband photometric measurements in UGRIZ bands, fit the stellar\n   effective temperature and luminosity normalization by gradient descent on the\n   chi-squared statistic. The Planck function is evaluated as Nx tensor\n   operations, making it fully differentiable through Rune.\n\n   Uses Umbra.Const for physical constants. *)\n\nopen Nx\nopen Umbra\n\nlet f64 = Nx.float64\n\n(* Central wavelengths of SDSS UGRIZ bands in meters *)\nlet lambda =\n  create f64 [| 5 |] [| 3.551e-7; 4.686e-7; 6.166e-7; 7.480e-7; 8.932e-7 |]\n\n(* Physical constants from Umbra *)\nlet h_planck = Const.h_si\nlet c_light = Unit.to_float Const.c\nlet k_boltz = Const.k_b_si\n\n(* Pre-computed constant tensors *)\nlet two_hc2 = scalar f64 (2.0 *. h_planck *. c_light *. c_light)\nlet hc_over_k = scalar f64 (h_planck *. c_light /. k_boltz)\nlet lam5 = pow_s lambda 5.0\n\n(* Generate synthetic observations from a Sun-like star *)\nlet true_temp = 5800.0\nlet true_log_norm = -50.0\n\nlet planck_scalar lam_m temp =\n  let x = h_planck *. c_light /. (lam_m *. k_boltz *. temp) in\n  2.0 *. h_planck *. c_light *. c_light\n  /. (lam_m *. lam_m *. lam_m *. lam_m *. lam_m)\n  /. (Float.exp x -. 1.0)\n\nlet flux_obs =\n  let norm = Float.exp true_log_norm in\n  let fluxes =\n    Array.init 5 (fun i ->\n        let lam_m =\n          [| 3.551e-7; 4.686e-7; 6.166e-7; 7.480e-7; 8.932e-7 |].(i)\n        in\n        norm\n        *. planck_scalar lam_m true_temp\n        *. (1.0 +. (0.02 *. (Float.of_int i -. 
2.0))))\n  in\n  create f64 [| 5 |] fluxes\n\n(* Fractional errors: 5% photometry *)\nlet flux_err = mul_s flux_obs 0.05\nlet band_names = [| \"U\"; \"G\"; \"R\"; \"I\"; \"Z\" |]\n\n(* Differentiable forward model: Planck function at 5 wavelengths. Parameterized\n   in log-space for positivity and gradient conditioning.\n\n   B(lambda, T) = 2hc^2 / lambda^5 / (exp(hc / (lambda * k * T)) - 1) *)\nlet loss params =\n  match params with\n  | [ log_temp; log_norm ] ->\n      let temp = exp log_temp in\n      let norm = exp log_norm in\n      let exponent = div hc_over_k (mul lambda temp) in\n      let planck =\n        div (div two_hc2 lam5) (sub (exp exponent) (scalar f64 1.0))\n      in\n      let flux_pred = mul norm planck in\n      let residual = div (sub flux_pred flux_obs) flux_err in\n      sum (square residual)\n  | _ -> failwith \"expected [log_temp; log_norm]\"\n\nlet () =\n  Printf.printf \"Differentiable blackbody SED fitting\\n\";\n  Printf.printf \"====================================\\n\";\n  Printf.printf \"Fitting temperature and normalization to UGRIZ photometry\\n\\n\";\n\n  Printf.printf \"True parameters:\\n\";\n  Printf.printf \"  T    = %.0f K\\n\" true_temp;\n  Printf.printf \"  logA = %.1f\\n\\n\" true_log_norm;\n\n  Printf.printf \"Synthetic observations (5%% errors):\\n\";\n  for i = 0 to 4 do\n    Printf.printf \"  %s: %.4e +/- %.4e\\n\" band_names.(i) (item [ i ] flux_obs)\n      (item [ i ] flux_err)\n  done;\n  Printf.printf \"\\n\";\n\n  (* Start from a guess *)\n  let algo = Vega.adam (Vega.Schedule.constant 1e-2) in\n  let log_temp = ref (scalar f64 (Float.log 5000.0)) in\n  let log_norm = ref (scalar f64 (-52.0)) in\n  let states = [| Vega.init algo !log_temp; Vega.init algo !log_norm |] in\n  let steps = 500 in\n\n  Printf.printf \"%5s  %12s  %8s  %10s\\n\" \"step\" \"chi2\" \"T (K)\" \"log_norm\";\n  Printf.printf \"%5s  %12s  %8s  %10s\\n\" \"-----\" \"------------\" \"--------\"\n    \"----------\";\n\n  let refs = [| log_temp; 
log_norm |] in\n  for i = 0 to steps - 1 do\n    let loss_val, grads = Rune.value_and_grads loss [ !log_temp; !log_norm ] in\n    List.iteri\n      (fun j g ->\n        let p, s = Vega.step states.(j) ~grad:g ~param:!(refs.(j)) in\n        refs.(j) := p;\n        states.(j) <- s)\n      grads;\n    if i mod 100 = 0 || i = steps - 1 then\n      Printf.printf \"%5d  %12.4f  %8.1f  %10.3f\\n\" i (item [] loss_val)\n        (Float.exp (item [] !log_temp))\n        (item [] !log_norm)\n  done;\n\n  Printf.printf \"\\nFitted parameters:\\n\";\n  Printf.printf \"  T    = %.1f K  (true: %.0f K)\\n\"\n    (Float.exp (item [] !log_temp))\n    true_temp;\n  Printf.printf \"  logA = %.3f  (true: %.1f)\\n\" (item [] !log_norm)\n    true_log_norm\n"
  },
  {
    "path": "dev/umbra/examples/04-extinction-and-magnitudes/README.md",
    "content": "# `04-extinction-and-magnitudes`\n\nExplores three key photometric concepts: magnitude systems (AB, ST, Vega),\nK-corrections from redshift, and interstellar dust extinction. Shows how to\ncompose `Spectrum`, `Extinction`, `Photometry`, and `Filters` modules.\n\n```bash\ndune exec dev/umbra/examples/04-extinction-and-magnitudes/main.exe\n```\n\n## What You'll Learn\n\n- Computing AB, ST, and Vega magnitudes through real SDSS filters\n- Understanding K-corrections from redshift-shifted SEDs\n- Applying extinction laws (CCM89, Fitzpatrick99, O'Donnell94)\n- Measuring colors and color excess from dust reddening\n\n## Key Functions\n\n| Function                    | Purpose                                        |\n| --------------------------- | ---------------------------------------------- |\n| `Photometry.ab_mag`         | AB magnitude through a bandpass                |\n| `Photometry.st_mag`         | ST magnitude through a bandpass                |\n| `Photometry.vega_mag`       | Vega magnitude through a bandpass              |\n| `Photometry.color`          | Color index (mag difference between two bands) |\n| `Spectrum.blackbody`        | Planck spectral radiance                       |\n| `Spectrum.redshift`         | Apply cosmological redshift to an SED          |\n| `Spectrum.as_flux_density`  | Cast spectrum to flux density kind             |\n| `Extinction.ccm89`          | Cardelli, Clayton & Mathis (1989) dust law     |\n| `Extinction.fitzpatrick99`  | Fitzpatrick (1999) dust law                    |\n| `Extinction.apply`          | Redden a spectrum by A_V magnitudes            |\n| `Filters.sdss_r`            | Pre-built SDSS r-band bandpass                 |\n\n## How It Works\n\n**Magnitude systems** differ in their reference flux:\n- AB: constant f_nu = 3631 Jy\n- ST: constant f_lambda = 3.63e-9 erg/s/cm^2/A\n- Vega: the spectrum of alpha Lyrae\n\n**K-corrections** arise because redshift moves the SED across the 
bandpass,\nchanging the measured flux even without distance dimming. K(z) = m_obs - m_rest.\n\n**Extinction** attenuates and reddens starlight. The extinction curve\nA_lambda/A_V depends on wavelength and the dust grain properties (encoded in\nR_V). Higher A_V means more dimming; bluer bands are affected more, producing\nreddening.\n\n## Try It\n\n1. Compare Galactic extinction (CCM89, R_V=3.1) with starburst attenuation\n   (`Extinction.calzetti00`).\n2. Apply both redshift and extinction to see their combined effect on colors.\n3. Use `Extinction.unredden` to recover the intrinsic SED from a reddened\n   observation.\n\n## Next Steps\n\nContinue to [05-sed-fitting](../05-sed-fitting/) to fit temperature, extinction,\nand normalization simultaneously.\n"
  },
  {
    "path": "dev/umbra/examples/04-extinction-and-magnitudes/dune",
    "content": "(executable\n (name main)\n (libraries nx rune umbra))\n"
  },
  {
    "path": "dev/umbra/examples/04-extinction-and-magnitudes/main.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* K-corrections, extinction, and magnitude systems.\n\n   Demonstrates three key photometric concepts:\n\n   1. Magnitude systems: AB, ST, and Vega magnitudes through real SDSS filters.\n   2. K-corrections: the difference between observed and rest-frame magnitudes\n   due to redshift shifting the SED across the bandpass. 3. Extinction: how\n   interstellar dust reddens and dims stellar light, comparing CCM89 and\n   Fitzpatrick99 extinction laws. *)\n\nopen Nx\nopen Umbra\n\nlet f64 = Nx.float64\n\nlet () =\n  Printf.printf \"Extinction, K-corrections, and magnitude systems\\n\";\n  Printf.printf \"=================================================\\n\\n\";\n\n  (* --- Part 1: Magnitude systems --- *)\n  Printf.printf \"Part 1: AB, ST, and Vega magnitudes\\n\";\n  Printf.printf \"------------------------------------\\n\\n\";\n\n  let temp = Unit.Temperature.of_kelvin (Nx.scalar f64 6000.0) in\n  let norm = Nx.scalar f64 (Float.exp (-50.0)) in\n  let bands =\n    [|\n      (\"SDSS u\", Filters.sdss_u);\n      (\"SDSS g\", Filters.sdss_g);\n      (\"SDSS r\", Filters.sdss_r);\n      (\"SDSS i\", Filters.sdss_i);\n      (\"SDSS z\", Filters.sdss_z);\n    |]\n  in\n\n  Printf.printf \"  Source: T=6000 K blackbody\\n\\n\";\n  Printf.printf \"%8s  %8s  %8s  %8s\\n\" \"Band\" \"AB\" \"ST\" \"Vega\";\n  Printf.printf \"%8s  %8s  %8s  %8s\\n\" \"--------\" \"--------\" \"--------\"\n    \"--------\";\n  Array.iter\n    (fun (name, bp) ->\n      let bp_wave = Photometry.wavelength bp in\n      let sed =\n        Spectrum.blackbody ~temperature:temp ~wavelength:bp_wave\n        |> Spectrum.scale norm |> Spectrum.as_flux_density\n      in\n      let m_ab = item [] (Photometry.ab_mag bp sed) in\n      let m_st 
= item [] (Photometry.st_mag bp sed) in\n      let m_vega = item [] (Photometry.vega_mag bp sed) in\n      Printf.printf \"%8s  %+8.3f  %+8.3f  %+8.3f\\n\" name m_ab m_st m_vega)\n    bands;\n\n  Printf.printf \"\\n  Note: AB and ST systems are defined by reference flux\\n\";\n  Printf.printf \"  densities; Vega magnitudes use the alpha Lyr spectrum.\\n\\n\";\n\n  (* --- Part 2: K-corrections --- *)\n  Printf.printf \"Part 2: K-corrections\\n\";\n  Printf.printf \"---------------------\\n\\n\";\n\n  let bp = Filters.sdss_r in\n  let bp_wave = Photometry.wavelength bp in\n\n  let rest_sed =\n    Spectrum.blackbody ~temperature:temp ~wavelength:bp_wave\n    |> Spectrum.scale norm |> Spectrum.as_flux_density\n  in\n  let m_ab_rest = item [] (Photometry.ab_mag bp rest_sed) in\n  let m_st_rest = item [] (Photometry.st_mag bp rest_sed) in\n  let m_vega_rest = item [] (Photometry.vega_mag bp rest_sed) in\n\n  Printf.printf \"  Rest-frame SDSS r-band:\\n\";\n  Printf.printf \"    AB   = %.3f\\n\" m_ab_rest;\n  Printf.printf \"    ST   = %.3f\\n\" m_st_rest;\n  Printf.printf \"    Vega = %.3f\\n\\n\" m_vega_rest;\n\n  Printf.printf \"  K-correction = m_obs(z) - m_rest\\n\\n\";\n  Printf.printf \"%6s  %8s  %8s  %8s\\n\" \"z\" \"K_AB\" \"K_ST\" \"K_Vega\";\n  Printf.printf \"%6s  %8s  %8s  %8s\\n\" \"------\" \"--------\" \"--------\" \"--------\";\n\n  let redshifts = [| 0.1; 0.2; 0.3; 0.5; 0.7; 1.0 |] in\n  Array.iter\n    (fun z ->\n      let zv = Nx.scalar f64 z in\n      let obs_sed =\n        Spectrum.blackbody ~temperature:temp ~wavelength:bp_wave\n        |> Spectrum.scale norm |> Spectrum.as_flux_density\n        |> Spectrum.redshift ~z:zv\n      in\n      let k_ab = item [] (Photometry.ab_mag bp obs_sed) -. m_ab_rest in\n      let k_st = item [] (Photometry.st_mag bp obs_sed) -. m_st_rest in\n      let k_vega = item [] (Photometry.vega_mag bp obs_sed) -. 
m_vega_rest in\n      Printf.printf \"%6.2f  %+8.3f  %+8.3f  %+8.3f\\n\" z k_ab k_st k_vega)\n    redshifts;\n\n  Printf.printf \"\\n\";\n\n  (* --- Part 3: Color evolution with redshift --- *)\n  Printf.printf \"Part 3: Color evolution (u-r) with redshift\\n\";\n  Printf.printf \"-------------------------------------------\\n\\n\";\n  Printf.printf \"%6s  %8s\\n\" \"z\" \"u-r (AB)\";\n  Printf.printf \"%6s  %8s\\n\" \"------\" \"--------\";\n  Array.iter\n    (fun z ->\n      let zv = Nx.scalar f64 z in\n      let color =\n        item []\n          (Photometry.color Filters.sdss_u Filters.sdss_r\n             (Spectrum.blackbody ~temperature:temp ~wavelength:bp_wave\n             |> Spectrum.scale norm |> Spectrum.as_flux_density\n             |> Spectrum.redshift ~z:zv))\n      in\n      Printf.printf \"%6.2f  %+8.3f\\n\" z color)\n    redshifts;\n\n  Printf.printf \"\\n\";\n\n  (* --- Part 4: Extinction --- *)\n  Printf.printf \"Part 4: Dust extinction\\n\";\n  Printf.printf \"-----------------------\\n\\n\";\n\n  let rv = Nx.scalar f64 3.1 in\n  let av_values = [| 0.0; 0.5; 1.0; 2.0; 3.0 |] in\n\n  Printf.printf \"  CCM89 extinction law (R_V = 3.1)\\n\";\n  Printf.printf \"  Reddening a T=6000 K blackbody through SDSS r-band\\n\\n\";\n  Printf.printf \"%6s  %8s  %8s  %8s\\n\" \"A_V\" \"m_AB\" \"delta_m\" \"E(u-r)\";\n  Printf.printf \"%6s  %8s  %8s  %8s\\n\" \"------\" \"--------\" \"--------\" \"--------\";\n\n  let unreddened_sed =\n    Spectrum.blackbody ~temperature:temp ~wavelength:bp_wave\n    |> Spectrum.scale norm |> Spectrum.as_flux_density\n  in\n  let m0 = item [] (Photometry.ab_mag bp unreddened_sed) in\n  let color0 =\n    item [] (Photometry.color Filters.sdss_u Filters.sdss_r unreddened_sed)\n  in\n\n  Array.iter\n    (fun av_f ->\n      let av = Nx.scalar f64 av_f in\n      let reddened =\n        Spectrum.blackbody ~temperature:temp ~wavelength:bp_wave\n        |> Spectrum.scale norm\n        |> Extinction.apply (Extinction.ccm89 ~rv) ~av\n    
    |> Spectrum.as_flux_density\n      in\n      let m = item [] (Photometry.ab_mag bp reddened) in\n      let color =\n        item [] (Photometry.color Filters.sdss_u Filters.sdss_r reddened)\n      in\n      Printf.printf \"%6.1f  %8.3f  %+8.3f  %+8.3f\\n\" av_f m (m -. m0)\n        (color -. color0))\n    av_values;\n\n  Printf.printf \"\\n\";\n\n  (* Compare extinction laws *)\n  Printf.printf \"  Comparing extinction laws at A_V = 1.0:\\n\\n\";\n  Printf.printf \"%16s  %8s  %8s\\n\" \"Law\" \"r-band\" \"E(u-r)\";\n  Printf.printf \"%16s  %8s  %8s\\n\" \"----------------\" \"--------\" \"--------\";\n\n  let av_one = Nx.scalar f64 1.0 in\n  let laws =\n    [|\n      (\"CCM89\", Extinction.ccm89 ~rv);\n      (\"Fitzpatrick99\", Extinction.fitzpatrick99 ~rv);\n      (\"O'Donnell94\", Extinction.odonnell94 ~rv);\n    |]\n  in\n  Array.iter\n    (fun (name, law) ->\n      let reddened =\n        Spectrum.blackbody ~temperature:temp ~wavelength:bp_wave\n        |> Spectrum.scale norm\n        |> Extinction.apply law ~av:av_one\n        |> Spectrum.as_flux_density\n      in\n      let m = item [] (Photometry.ab_mag bp reddened) in\n      let color =\n        item [] (Photometry.color Filters.sdss_u Filters.sdss_r reddened)\n      in\n      Printf.printf \"%16s  %+8.3f  %+8.3f\\n\" name (m -. m0) (color -. color0))\n    laws\n"
  },
  {
    "path": "dev/umbra/examples/05-sed-fitting/README.md",
    "content": "# `05-sed-fitting`\n\nFull SED fitting pipeline: fits stellar temperature, dust extinction (A_V), and\nflux normalization simultaneously from UGRIZ photometry. Demonstrates the\ncomposable differentiable pipeline through Spectrum, Extinction, and Photometry.\n\n```bash\ndune exec dev/umbra/examples/05-sed-fitting/main.exe\n```\n\n## What You'll Learn\n\n- Building a full astrophysical forward model from composable modules\n- How the blackbody -> extinction -> photometry pipeline is end-to-end differentiable\n- Creating custom bandpasses with `Photometry.tophat`\n- Fitting multiple correlated parameters (T, A_V, normalization) simultaneously\n\n## Key Functions\n\n| Function                     | Purpose                                       |\n| ---------------------------- | --------------------------------------------- |\n| `Spectrum.blackbody`         | Planck spectral radiance at given wavelengths  |\n| `Spectrum.scale`             | Scale spectrum values by a factor              |\n| `Spectrum.as_flux_density`   | Cast to flux density kind for photometry       |\n| `Extinction.ccm89`           | Create CCM89 extinction law with R_V           |\n| `Extinction.apply`           | Apply dust reddening to a spectrum             |\n| `Photometry.tophat`          | Create a rectangular bandpass                  |\n| `Photometry.ab_mag`          | Compute AB magnitude through a bandpass        |\n| `Rune.value_and_grads`       | Autodiff through the entire pipeline           |\n\n## How It Works\n\nThe forward model constructs a synthetic SED at each optimization step:\n\n1. **Spectrum.blackbody** generates the Planck function at temperature T\n2. **Spectrum.scale** applies the flux normalization\n3. **Extinction.apply** reddens the spectrum using CCM89 with extinction A_V\n4. 
**Photometry.ab_mag** integrates through each bandpass to produce magnitudes\n\nSince every step is built from Nx tensor operations, Rune computes gradients\nof chi-squared with respect to all three parameters (log T, A_V, log norm) in\na single backward pass.\n\nThe temperature and normalization are parameterized in log-space for positivity\nand better gradient conditioning. A_V is left in linear space since it can\nmeaningfully be zero or negative (de-reddening).\n\n## Try It\n\n1. Replace tophat filters with real SDSS filters from `Filters.sdss_u`, etc.\n2. Add a redshift parameter to fit photometric redshifts.\n3. Try `Extinction.fitzpatrick99` instead of `ccm89` and compare results.\n4. Increase the photometric noise and observe how parameter uncertainties grow.\n\n## Next Steps\n\nContinue to [06-coordinates-and-time](../06-coordinates-and-time/) to work with\ncelestial coordinates, time scales, and observing conditions.\n"
  },
  {
    "path": "dev/umbra/examples/05-sed-fitting/dune",
    "content": "(executable\n (name main)\n (libraries nx rune vega umbra))\n"
  },
  {
    "path": "dev/umbra/examples/05-sed-fitting/main.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Differentiable SED fitting: temperature + extinction from photometry.\n\n   Demonstrates the composable differentiable pipeline: Spectrum.blackbody ->\n   Extinction.apply -> Photometry.ab_mag\n\n   All operations flow through Nx tensor ops, making the entire pipeline\n   differentiable via Rune's autodiff. We fit stellar temperature, dust\n   extinction, and flux normalization simultaneously by gradient descent on\n   photometric residuals. *)\n\nopen Nx\nopen Umbra\n\nlet f64 = Nx.float64\n\n(* Define 5 broadband filters (UGRIZ-like tophats) *)\nlet n_bp = 100\n\nlet band_u =\n  Photometry.tophat ~lo:(Unit.Length.m 3.0e-7) ~hi:(Unit.Length.m 4.0e-7)\n    ~n:n_bp\n\nlet band_g =\n  Photometry.tophat ~lo:(Unit.Length.m 4.0e-7) ~hi:(Unit.Length.m 5.5e-7)\n    ~n:n_bp\n\nlet band_r =\n  Photometry.tophat ~lo:(Unit.Length.m 5.5e-7) ~hi:(Unit.Length.m 7.0e-7)\n    ~n:n_bp\n\nlet band_i =\n  Photometry.tophat ~lo:(Unit.Length.m 7.0e-7) ~hi:(Unit.Length.m 8.5e-7)\n    ~n:n_bp\n\nlet band_z =\n  Photometry.tophat ~lo:(Unit.Length.m 8.5e-7) ~hi:(Unit.Length.m 1.0e-6)\n    ~n:n_bp\n\nlet bands = [ band_u; band_g; band_r; band_i; band_z ]\nlet band_names = [| \"U\"; \"G\"; \"R\"; \"I\"; \"Z\" |]\n\n(* True parameters *)\nlet true_temp = 6500.0 (* K -- F-type star *)\nlet true_av = 0.5 (* moderate extinction *)\nlet true_log_norm = -50.0\n\n(* Generate synthetic observations *)\nlet rv = Nx.scalar f64 3.1\n\nlet obs_mags =\n  let temp = Unit.Temperature.of_kelvin (Nx.scalar f64 true_temp) in\n  let av = Nx.scalar f64 true_av in\n  let norm = Nx.scalar f64 (Float.exp true_log_norm) in\n  let mags =\n    List.map\n      (fun bp ->\n        let bp_wave = Photometry.wavelength bp in\n        let sed =\n        
  Spectrum.blackbody ~temperature:temp ~wavelength:bp_wave\n          |> Spectrum.scale norm\n          |> Extinction.apply (Extinction.ccm89 ~rv) ~av\n          |> Spectrum.as_flux_density\n        in\n        Photometry.ab_mag bp sed)\n      bands\n  in\n  (* Perturb with fixed ~0.03 mag offsets (about 3% in flux) to mimic noise *)\n  let noise = [| 0.03; -0.02; 0.01; -0.01; 0.02 |] in\n  List.mapi (fun i m -> Nx.add_s m noise.(i)) mags\n\nlet obs_errs = List.init 5 (fun _ -> Nx.scalar f64 0.05)\n\n(* Forward model: generate magnitudes from parameters *)\nlet forward_model log_temp av log_norm =\n  let temp = Unit.Temperature.of_kelvin (exp log_temp) in\n  let norm = exp log_norm in\n  List.map\n    (fun bp ->\n      let bp_wave = Photometry.wavelength bp in\n      let sed =\n        Spectrum.blackbody ~temperature:temp ~wavelength:bp_wave\n        |> Spectrum.scale norm\n        |> Extinction.apply (Extinction.ccm89 ~rv) ~av\n        |> Spectrum.as_flux_density\n      in\n      Photometry.ab_mag bp sed)\n    bands\n\n(* Loss function: chi-squared *)\nlet loss params =\n  match params with\n  | [ log_temp; av; log_norm ] ->\n      let pred = forward_model log_temp av log_norm in\n      List.fold_left2\n        (fun acc p (o, e) ->\n          let residual = div (sub p o) e in\n          add acc (square residual))\n        (scalar f64 0.0) pred\n        (List.combine obs_mags obs_errs)\n  | _ -> failwith \"expected [log_temp; av; log_norm]\"\n\nlet () =\n  Printf.printf \"Differentiable SED Fitting\\n\";\n  Printf.printf \"==========================\\n\";\n  Printf.printf\n    \"Pipeline: Spectrum.blackbody -> Extinction.ccm89 -> Photometry.ab_mag\\n\\n\";\n\n  Printf.printf \"True parameters:\\n\";\n  Printf.printf \"  T    = %.0f K\\n\" true_temp;\n  Printf.printf \"  A_V  = %.2f mag\\n\" true_av;\n  Printf.printf \"  logN = %.1f\\n\\n\" true_log_norm;\n\n  Printf.printf \"Observed magnitudes (with noise):\\n\";\n  List.iteri\n    (fun i m ->\n      Printf.printf \"  %s = %.3f +/- %.3f\\n\" band_names.(i) 
(item [] m)\n        (item [] (List.nth obs_errs i)))\n    obs_mags;\n  Printf.printf \"\\n\";\n\n  (* Initial guesses *)\n  let algo = Vega.adam (Vega.Schedule.constant 1e-3) in\n  let log_temp = ref (scalar f64 (Float.log 7000.0)) in\n  let av = ref (scalar f64 0.3) in\n  let log_norm = ref (scalar f64 (-50.5)) in\n  let states =\n    [| Vega.init algo !log_temp; Vega.init algo !av; Vega.init algo !log_norm |]\n  in\n  let steps = 1000 in\n\n  Printf.printf \"%5s  %10s  %8s  %8s  %8s\\n\" \"step\" \"chi2\" \"T (K)\" \"A_V\"\n    \"log_norm\";\n  Printf.printf \"%5s  %10s  %8s  %8s  %8s\\n\" \"-----\" \"----------\" \"--------\"\n    \"--------\" \"--------\";\n\n  let refs = [| log_temp; av; log_norm |] in\n  for i = 0 to steps - 1 do\n    let loss_val, grads =\n      Rune.value_and_grads loss [ !log_temp; !av; !log_norm ]\n    in\n    if i mod 200 = 0 || i = steps - 1 then\n      Printf.printf \"%5d  %10.4f  %8.1f  %8.3f  %8.3f\\n\" i (item [] loss_val)\n        (Float.exp (item [] !log_temp))\n        (item [] !av) (item [] !log_norm);\n    List.iteri\n      (fun j g ->\n        let p, s = Vega.step states.(j) ~grad:g ~param:!(refs.(j)) in\n        refs.(j) := p;\n        states.(j) <- s)\n      grads\n  done;\n\n  Printf.printf \"\\nFitted parameters:\\n\";\n  Printf.printf \"  T    = %.1f K  (true: %.0f K)\\n\"\n    (Float.exp (item [] !log_temp))\n    true_temp;\n  Printf.printf \"  A_V  = %.3f    (true: %.2f)\\n\" (item [] !av) true_av;\n  Printf.printf \"  logN = %.3f    (true: %.1f)\\n\" (item [] !log_norm)\n    true_log_norm;\n\n  (* Show fitted vs observed magnitudes *)\n  Printf.printf \"\\nFitted vs observed magnitudes:\\n\";\n  let fitted_mags = forward_model !log_temp !av !log_norm in\n  Printf.printf \"%5s  %8s  %8s  %8s\\n\" \"Band\" \"Observed\" \"Fitted\" \"Residual\";\n  Printf.printf \"%5s  %8s  %8s  %8s\\n\" \"-----\" \"--------\" \"--------\" \"--------\";\n  List.iteri\n    (fun i (obs, fit) ->\n      let o = item [] obs in\n      let f = 
item [] fit in\n      Printf.printf \"%5s  %8.3f  %8.3f  %+8.3f\\n\" band_names.(i) o f (f -. o))\n    (List.combine obs_mags fitted_mags)\n"
  },
  {
    "path": "dev/umbra/examples/06-coordinates-and-time/README.md",
    "content": "# `06-coordinates-and-time`\n\nCelestial coordinates, astronomical time scales, and survey selection.\nDemonstrates frame transforms (ICRS, Galactic), angular separation, time scale\nconversions (UTC, TAI, TT, TDB), altitude-azimuth coordinates, airmass, and\na practical survey selection function.\n\n```bash\ndune exec dev/umbra/examples/06-coordinates-and-time/main.exe\n```\n\n## What You'll Learn\n\n- Creating celestial coordinates in ICRS and converting to Galactic frame\n- Computing angular separations between objects\n- Parsing ISO 8601 dates and converting between time scales\n- Computing horizontal coordinates for a ground-based observer\n- Building a survey selection function from airmass, altitude, and magnitude cuts\n\n## Key Functions\n\n| Function                     | Purpose                                       |\n| ---------------------------- | --------------------------------------------- |\n| `Coord.of_radec`             | Create ICRS coordinates from RA/Dec           |\n| `Coord.galactic`             | Convert to Galactic coordinates               |\n| `Coord.separation`           | Angular separation between positions          |\n| `Time.of_iso`                | Parse ISO 8601 date-time as UTC               |\n| `Time.utc_to_tai`            | Convert UTC to TAI                            |\n| `Time.tai_to_tt`             | Convert TAI to Terrestrial Time               |\n| `Time.tt_to_tdb`             | Convert TT to Barycentric Dynamical Time      |\n| `Time.to_jd`, `Time.to_mjd`  | Extract Julian Date / Modified Julian Date    |\n| `Altaz.make_observer`        | Create a ground-based observer location       |\n| `Altaz.of_coord`             | Convert celestial to horizontal coordinates   |\n| `Altaz.alt`, `Altaz.az`      | Extract altitude and azimuth                  |\n| `Altaz.airmass`              | Compute airmass at given altitude              |\n| `Filters.rubin_r`            | Pre-built Rubin/LSST r-band filter         
   |\n\n## How It Works\n\n**Coordinates**: Positions are stored as (longitude, latitude) pairs in typed\nangle quantities. Frame transforms use 3x3 rotation matrices to convert\nbetween ICRS, Galactic, Ecliptic, and Supergalactic systems. Angular separation\nuses the Vincenty formula for numerical stability.\n\n**Time**: Julian Dates carry phantom type tags (UTC, TAI, TT, TDB) that\nenforce correct scale conversions at compile time. UTC-TAI uses the IERS\nleap-second table; TT = TAI + 32.184s exactly; TDB-TT uses the Fairhead &\nBretagnon series.\n\n**Altaz**: Converts ICRS to horizontal coordinates using IAU 2006 precession\nand the Earth Rotation Angle. Airmass uses the Pickering (2002) formula.\n\n**Selection**: Combines altitude (above horizon), airmass (atmospheric\nextinction), and magnitude limit into a boolean selection function -- a\nbuilding block for survey simulations.\n\n## Try It\n\n1. Add atmospheric refraction with `Altaz.of_coord ~refraction:true`.\n2. Compute the position angle from Vega to Deneb with `Coord.position_angle`.\n3. Use `Coord.of_galactic` to create coordinates in the Galactic plane and\n   convert to ICRS.\n4. Change the observer location and time to see how visibility changes.\n\n## Next Steps\n\nExplore the other Umbra examples for more advanced topics: catalog\ncross-matching with `Coord.nearest`, cosmological power spectra with\n`Cosmo.linear_power`, and Fisher matrix forecasts.\n"
  },
  {
    "path": "dev/umbra/examples/06-coordinates-and-time/dune",
    "content": "(executable\n (name main)\n (libraries nx rune umbra))\n"
  },
  {
    "path": "dev/umbra/examples/06-coordinates-and-time/main.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Coordinates, time scales, and survey selection.\n\n   Demonstrates Umbra's coordinate, time, and observing modules: - Coord:\n   celestial coordinates with frame transforms (ICRS, Galactic, Ecliptic,\n   Supergalactic) and angular separation. - Time: astronomical time with\n   type-safe scale conversions (UTC, TAI, TT, TDB) and ISO 8601 parsing. -\n   Altaz: horizontal coordinates, airmass, and atmospheric refraction.\n\n   Combines these into a survey selection function that determines which targets\n   are observable given an observer, time, and observing constraints. *)\n\nopen Nx\nopen Umbra\n\nlet f64 = Nx.float64\n\nlet () =\n  Printf.printf \"Coordinates, time scales, and survey selection\\n\";\n  Printf.printf \"===============================================\\n\\n\";\n\n  (* --- Part 1: Coordinate frames --- *)\n  Printf.printf \"Part 1: Coordinate frame transforms\\n\";\n  Printf.printf \"------------------------------------\\n\\n\";\n\n  let targets =\n    [|\n      (\"Galactic center\", 266.417, -28.936);\n      (\"Vega\", 279.235, 38.784);\n      (\"North Galactic Pole\", 192.860, 27.128);\n      (\"LMC\", 80.894, -69.756);\n      (\"M31 (Andromeda)\", 10.685, 41.269);\n    |]\n  in\n\n  Printf.printf \"%20s  %8s  %8s  %8s  %8s\\n\" \"Object\" \"RA\" \"Dec\" \"l\" \"b\";\n  Printf.printf \"%20s  %8s  %8s  %8s  %8s\\n\" \"--------------------\" \"--------\"\n    \"--------\" \"--------\" \"--------\";\n\n  Array.iter\n    (fun (name, ra_deg, dec_deg) ->\n      let coord =\n        Coord.of_radec\n          ~ra:(Unit.Angle.of_deg (Nx.create f64 [| 1 |] [| ra_deg |]))\n          ~dec:(Unit.Angle.of_deg (Nx.create f64 [| 1 |] [| dec_deg |]))\n      in\n      let gal = Coord.galactic 
coord in\n      let l = item [ 0 ] (Unit.Angle.in_deg (Coord.lon gal)) in\n      let b = item [ 0 ] (Unit.Angle.in_deg (Coord.lat gal)) in\n      Printf.printf \"%20s  %8.2f  %+8.2f  %8.2f  %+8.2f\\n\" name ra_deg dec_deg l\n        b)\n    targets;\n  Printf.printf \"\\n\";\n\n  (* Angular separation *)\n  Printf.printf \"Angular separations:\\n\";\n  let vega =\n    Coord.of_radec\n      ~ra:(Unit.Angle.of_deg (Nx.create f64 [| 1 |] [| 279.235 |]))\n      ~dec:(Unit.Angle.of_deg (Nx.create f64 [| 1 |] [| 38.784 |]))\n  in\n  let altair =\n    Coord.of_radec\n      ~ra:(Unit.Angle.of_deg (Nx.create f64 [| 1 |] [| 297.696 |]))\n      ~dec:(Unit.Angle.of_deg (Nx.create f64 [| 1 |] [| 8.868 |]))\n  in\n  let deneb =\n    Coord.of_radec\n      ~ra:(Unit.Angle.of_deg (Nx.create f64 [| 1 |] [| 310.358 |]))\n      ~dec:(Unit.Angle.of_deg (Nx.create f64 [| 1 |] [| 45.280 |]))\n  in\n  let sep_va = item [ 0 ] (Unit.Angle.in_deg (Coord.separation vega altair)) in\n  let sep_vd = item [ 0 ] (Unit.Angle.in_deg (Coord.separation vega deneb)) in\n  let sep_ad = item [ 0 ] (Unit.Angle.in_deg (Coord.separation altair deneb)) in\n  Printf.printf \"  Vega - Altair:  %.2f deg\\n\" sep_va;\n  Printf.printf \"  Vega - Deneb:   %.2f deg\\n\" sep_vd;\n  Printf.printf \"  Altair - Deneb: %.2f deg\\n\" sep_ad;\n  Printf.printf \"  (Summer Triangle)\\n\\n\";\n\n  (* --- Part 2: Time scales --- *)\n  Printf.printf \"Part 2: Astronomical time scales\\n\";\n  Printf.printf \"--------------------------------\\n\\n\";\n\n  let t_utc = Time.of_iso \"2024-06-21T04:00:00\" in\n  let t_tai = Time.utc_to_tai t_utc in\n  let t_tt = Time.tai_to_tt t_tai in\n  let t_tdb = Time.tt_to_tdb t_tt in\n\n  Printf.printf \"  UTC: %s\\n\" (Time.to_iso t_utc);\n  Printf.printf \"  JD (UTC): %.6f\\n\" (Time.to_jd t_utc);\n  Printf.printf \"  MJD (UTC): %.6f\\n\" (Time.to_mjd t_utc);\n  Printf.printf \"  JD (TAI): %.6f\\n\" (Time.to_jd t_tai);\n  Printf.printf \"  JD (TT):  %.6f\\n\" (Time.to_jd t_tt);\n  
Printf.printf \"  JD (TDB): %.6f\\n\" (Time.to_jd t_tdb);\n\n  let dt_tai_utc =\n    Unit.to_float (Time.diff t_tai (Time.unsafe_of_jd (Time.to_jd t_utc)))\n  in\n  Printf.printf \"\\n  TAI - UTC = %.1f s (leap seconds)\\n\" (dt_tai_utc *. 86400.0);\n\n  let t_j2000 = Time.of_iso \"2000-01-01T12:00:00\" in\n  let dt_j2000 = Unit.to_float (Time.diff t_utc t_j2000) in\n  Printf.printf \"  Days since J2000.0: %.2f\\n\\n\" dt_j2000;\n\n  (* --- Part 3: Horizontal coordinates and airmass --- *)\n  Printf.printf \"Part 3: Altitude-azimuth and airmass\\n\";\n  Printf.printf \"------------------------------------\\n\\n\";\n\n  (* Observer at Cerro Pachon (Rubin site) *)\n  let obs =\n    Altaz.make_observer\n      ~lat:(Unit.Angle.deg (-30.2444))\n      ~lon:(Unit.Angle.deg (-70.7494))\n      ~height:(Unit.Length.m 2663.0) ()\n  in\n  let obstime = Time.of_iso \"2024-06-21T04:00:00\" in\n\n  Printf.printf \"  Observer: Cerro Pachon (Rubin Observatory)\\n\";\n  Printf.printf \"    Lat: %.4f deg\\n\" (-30.2444);\n  Printf.printf \"    Lon: %.4f deg\\n\" (-70.7494);\n  Printf.printf \"    Elevation: %.0f m\\n\" 2663.0;\n  Printf.printf \"  Time: 2024-06-21 04:00 UTC\\n\\n\";\n\n  let stars =\n    [|\n      (\"Vega\", 279.235, 38.784);\n      (\"Sirius\", 101.287, -16.716);\n      (\"Canopus\", 95.988, -52.696);\n      (\"Alpha Cen\", 219.902, -60.834);\n      (\"Fomalhaut\", 344.413, -29.622);\n    |]\n  in\n\n  Printf.printf \"%12s  %7s  %7s  %8s\\n\" \"Star\" \"Alt\" \"Az\" \"Airmass\";\n  Printf.printf \"%12s  %7s  %7s  %8s\\n\" \"------------\" \"-------\" \"-------\"\n    \"--------\";\n\n  Array.iter\n    (fun (name, ra_deg, dec_deg) ->\n      let coord =\n        Coord.of_radec\n          ~ra:(Unit.Angle.of_deg (Nx.create f64 [| 1 |] [| ra_deg |]))\n          ~dec:(Unit.Angle.of_deg (Nx.create f64 [| 1 |] [| dec_deg |]))\n      in\n      let hz = Altaz.of_coord ~obstime ~observer:obs coord in\n      let alt_deg =\n        item [ 0 ] 
(Unit.Angle.to_tensor (Altaz.alt hz)) *. 180.0 /. Float.pi\n      in\n      let az_deg =\n        item [ 0 ] (Unit.Angle.to_tensor (Altaz.az hz)) *. 180.0 /. Float.pi\n      in\n      let am = item [ 0 ] (Altaz.airmass hz) in\n      Printf.printf \"%12s  %+7.1f  %7.1f  %8.2f\\n\" name alt_deg az_deg am)\n    stars;\n  Printf.printf \"\\n\";\n\n  (* --- Part 4: Survey selection --- *)\n  Printf.printf \"Part 4: Survey selection function\\n\";\n  Printf.printf \"---------------------------------\\n\\n\";\n\n  let mag_limit = 20.0 in\n  let airmass_cut = 2.0 in\n  Printf.printf \"  Selection criteria:\\n\";\n  Printf.printf \"    Magnitude limit: r < %.1f (AB)\\n\" mag_limit;\n  Printf.printf \"    Airmass cut: X < %.1f\\n\" airmass_cut;\n  Printf.printf \"    Above horizon: alt > 0 deg\\n\\n\";\n\n  let bp = Filters.rubin_r in\n  let norm = Nx.scalar f64 (Float.exp (-49.0)) in\n\n  let star_data =\n    [|\n      (\"Vega\", 279.235, 38.784, 5800.0);\n      (\"Sirius\", 101.287, -16.716, 9940.0);\n      (\"Canopus\", 95.988, -52.696, 7350.0);\n      (\"Alpha Cen\", 219.902, -60.834, 5790.0);\n      (\"Fomalhaut\", 344.413, -29.622, 8590.0);\n    |]\n  in\n\n  Printf.printf \"%12s  %7s  %8s  %6s  %s\\n\" \"Star\" \"Alt\" \"Airmass\" \"r_mag\"\n    \"Select?\";\n  Printf.printf \"%12s  %7s  %8s  %6s  %s\\n\" \"------------\" \"-------\" \"--------\"\n    \"------\" \"-------\";\n\n  Array.iter\n    (fun (name, ra_deg, dec_deg, temp_k) ->\n      let coord =\n        Coord.of_radec\n          ~ra:(Unit.Angle.of_deg (Nx.create f64 [| 1 |] [| ra_deg |]))\n          ~dec:(Unit.Angle.of_deg (Nx.create f64 [| 1 |] [| dec_deg |]))\n      in\n      let hz = Altaz.of_coord ~obstime ~observer:obs coord in\n      let alt_deg =\n        item [ 0 ] (Unit.Angle.to_tensor (Altaz.alt hz)) *. 180.0 /. 
Float.pi\n      in\n      let am = item [ 0 ] (Altaz.airmass hz) in\n\n      (* Synthetic magnitude through Rubin r-band *)\n      let temp = Unit.Temperature.of_kelvin (Nx.scalar f64 temp_k) in\n      let bp_wave = Photometry.wavelength bp in\n      let sed =\n        Spectrum.blackbody ~temperature:temp ~wavelength:bp_wave\n        |> Spectrum.scale norm |> Spectrum.as_flux_density\n      in\n      let r_mag = item [] (Photometry.ab_mag bp sed) in\n\n      let selected = alt_deg > 0.0 && am < airmass_cut && r_mag < mag_limit in\n      Printf.printf \"%12s  %+7.1f  %8.2f  %6.2f  %s\\n\" name alt_deg am r_mag\n        (if selected then \"YES\" else \"no\"))\n    star_data;\n\n  Printf.printf \"\\n  Observer height: %.0f m\\n\"\n    (item [] (Unit.Length.to_tensor (Altaz.observer_height obs)))\n
  },
  {
    "path": "dev/umbra/examples/07-batch-photometry/README.md",
    "content": "# `07-batch-photometry`\n\nComputes SDSS g-r colors for a grid of blackbody templates at different\ntemperatures and dust extinctions in a single pass using batch operations.\nInstead of looping over individual spectra, the values tensor has a leading\nbatch dimension and all photometry operations broadcast over it.\n\n```bash\ncd dev/umbra\ndune exec --root . examples/07-batch-photometry/main.exe\n```\n\n## What You'll Learn\n\n- Constructing batched spectra by stacking blackbodies into a leading dimension\n- Broadcasting extinction across a batch of SEDs with per-spectrum A_V\n- Computing synthetic SDSS photometry with AB magnitudes\n- Exploring color-temperature and color-extinction relations\n\n## Key Functions\n\n| Function                   | Purpose                                          |\n| -------------------------- | ------------------------------------------------ |\n| `Spectrum.blackbody`       | Generate Planck spectrum at a given temperature  |\n| `Spectrum.create`          | Build a spectrum from wavelength and value arrays |\n| `Spectrum.as_flux_density` | Cast to flux density kind for photometry         |\n| `Nx.stack`                 | Stack individual spectra into a batch dimension  |\n| `Extinction.ccm89`         | Create CCM89 dust extinction law                 |\n| `Extinction.apply`         | Apply reddening with per-spectrum A_V broadcast  |\n| `Photometry.ab_mag`        | Compute AB magnitude through a bandpass          |\n| `Filters.sdss_g`           | SDSS g-band filter response                      |\n\n## How It Works\n\nThe example first builds a grid of 20 blackbody spectra from 3000 K to 30000 K\nby stacking individual `Spectrum.blackbody` outputs into a `[n_temp; 500]`\nvalues tensor. When this batch spectrum is passed to `Photometry.ab_mag`, the\nintegration broadcasts over the leading dimension, producing one magnitude per\ntemperature in a single call.\n\nThe second half demonstrates per-spectrum extinction. 
A T=6000 K blackbody is\nreplicated into 10 copies, and `Extinction.apply` is called with an A_V tensor\nof shape `[n_av; 1]` that broadcasts against the `[n_av; 500]` flux values.\nThis yields reddened g-r colors across a range of dust columns without any\nexplicit loop.\n\n## Try It\n\n1. Increase the temperature grid to 100 points and plot the g-r color curve to\n   see where the blue turnover occurs.\n2. Add a third band (sdss_i) and compute the g-r vs r-i color-color diagram.\n3. Replace the blackbody with a power-law spectrum and observe how the color\n   trends differ.\n\n## Next Steps\n\nContinue to [08-photometric-redshifts](../08-photometric-redshifts/) to learn\nhow to estimate galaxy redshifts by combining grid search with gradient-based\nrefinement through the differentiable photometry pipeline.\n"
  },
  {
    "path": "dev/umbra/examples/07-batch-photometry/dune",
    "content": "(executable\n (name main)\n (libraries nx rune umbra))\n"
  },
  {
    "path": "dev/umbra/examples/07-batch-photometry/main.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Batch template photometry.\n\n   Computes SDSS g-r colors for a grid of blackbody templates at different\n   temperatures and dust extinctions in a single pass, demonstrating batched\n   spectra. Instead of looping over individual spectra, the values tensor has a\n   leading batch dimension and all photometry operations broadcast over it. *)\n\nopen Nx\nopen Umbra\n\nlet f64 = Nx.float64\n\nlet () =\n  Printf.printf \"Batch Template Photometry\\n\";\n  Printf.printf \"=========================\\n\\n\";\n\n  (* Temperature grid: 20 blackbodies from 3000K to 30000K *)\n  let n_temp = 20 in\n  let temps =\n    Array.init n_temp (fun i ->\n        3000.0\n        +. (Float.of_int i *. (30000.0 -. 3000.0) /. Float.of_int (n_temp - 1)))\n  in\n\n  (* Shared wavelength grid covering SDSS g and r *)\n  let wavelength = Unit.Length.of_m (Nx.linspace f64 3e-7 1.1e-6 500) in\n\n  (* Build batch spectrum: stack individual blackbodies into [n_temp; 500] *)\n  let values =\n    Nx.stack\n      (List.init n_temp (fun i ->\n           let temp = Unit.Temperature.of_kelvin (Nx.scalar f64 temps.(i)) in\n           Spectrum.values (Spectrum.blackbody ~temperature:temp ~wavelength)))\n  in\n  let batch = Spectrum.create ~wavelength ~values |> Spectrum.as_flux_density in\n\n  (* AB magnitudes in g and r — returns shape [n_temp] each *)\n  let g_mag = Photometry.ab_mag Filters.sdss_g batch in\n  let r_mag = Photometry.ab_mag Filters.sdss_r batch in\n  let g_r = Nx.sub g_mag r_mag in\n\n  Printf.printf \"Unreddened blackbody colors (SDSS g-r):\\n\";\n  Printf.printf \"%8s  %8s  %8s  %8s\\n\" \"T (K)\" \"g\" \"r\" \"g-r\";\n  Printf.printf \"%8s  %8s  %8s  %8s\\n\" \"--------\" \"--------\" \"--------\"\n    
\"--------\";\n  Array.iteri\n    (fun i t ->\n      if i mod 4 = 0 || i = n_temp - 1 then\n        Printf.printf \"%8.0f  %+8.3f  %+8.3f  %+8.3f\\n\" t (item [ i ] g_mag)\n          (item [ i ] r_mag) (item [ i ] g_r))\n    temps;\n\n  (* Now apply per-spectrum extinction: A_V from 0.0 to 2.0 *)\n  Printf.printf \"\\nReddening a T=6000K blackbody (SDSS g-r vs A_V):\\n\";\n  let n_av = 10 in\n  let av_values = Nx.linspace f64 0.0 2.0 n_av in\n\n  (* Single-temperature spectrum, batched over A_V *)\n  let temp_6k = Unit.Temperature.of_kelvin (Nx.scalar f64 6000.0) in\n  let sed_1d =\n    Spectrum.blackbody ~temperature:temp_6k ~wavelength\n    |> Spectrum.as_flux_density\n  in\n  (* Replicate into [n_av; 500] *)\n  let sed_values =\n    Nx.stack (List.init n_av (fun _ -> Spectrum.values sed_1d))\n  in\n  let sed_batch =\n    Spectrum.create ~wavelength ~values:sed_values |> Spectrum.as_flux_density\n  in\n  (* Per-spectrum A_V: reshape to [n_av; 1] to broadcast with [n_av; 500] *)\n  let rv = Nx.scalar f64 3.1 in\n  let av_col = Nx.reshape [| n_av; 1 |] av_values in\n  let reddened = Extinction.apply (Extinction.ccm89 ~rv) ~av:av_col sed_batch in\n  let g_red = Photometry.ab_mag Filters.sdss_g reddened in\n  let r_red = Photometry.ab_mag Filters.sdss_r reddened in\n  let g_r_red = Nx.sub g_red r_red in\n\n  Printf.printf \"%8s  %8s\\n\" \"A_V\" \"g-r\";\n  Printf.printf \"%8s  %8s\\n\" \"--------\" \"--------\";\n  for i = 0 to n_av - 1 do\n    Printf.printf \"%8.2f  %+8.3f\\n\" (item [ i ] av_values) (item [ i ] g_r_red)\n  done\n"
  },
  {
    "path": "dev/umbra/examples/08-photometric-redshifts/README.md",
    "content": "# `08-photometric-redshifts`\n\nTwo-stage photometric redshift estimation: coarse grid search followed by\ngradient-based refinement using Adam. The full pipeline (blackbody -> redshift\n-> extinction -> photometry) is differentiable through Rune, enabling gradient\ndescent on redshift and normalization parameters against synthetic SDSS ugriz\nobservations.\n\n```bash\ncd dev/umbra\ndune exec --root . examples/08-photometric-redshifts/main.exe\n```\n\n## What You'll Learn\n\n- Building an end-to-end differentiable photometric pipeline through SDSS ugriz filters\n- Composing spectrum redshifting, dust extinction, and synthetic photometry\n- Combining grid search initialization with autodiff gradient refinement\n- Using multi-parameter gradients to jointly fit redshift and normalization\n\n## Key Functions\n\n| Function                   | Purpose                                              |\n| -------------------------- | ---------------------------------------------------- |\n| `Spectrum.blackbody`       | Generate a template SED at given temperature         |\n| `Spectrum.redshift`        | Apply cosmological redshift to a spectrum             |\n| `Spectrum.scale`           | Scale spectrum by a normalization factor              |\n| `Extinction.apply`         | Apply dust reddening with CCM89 law                  |\n| `Photometry.ab_mag`        | Compute AB magnitude through a bandpass              |\n| `Photometry.wavelength`    | Extract the wavelength grid of a bandpass filter      |\n| `Rune.value_and_grads`     | Compute loss and parameter gradients in one pass     |\n| `Vega.adam`                | Adam optimizer for gradient refinement               |\n\n## How It Works\n\nThe example generates synthetic observed magnitudes for a galaxy at z=0.3 with\nT=5500 K, A_V=0.2, by pushing a blackbody through the full pipeline:\n`blackbody -> scale -> extinction -> redshift -> ab_mag` in each of the five\nSDSS bands. 
These serve as the \"data\" to fit against.\n\nStage 1 performs a coarse grid search over 30 redshift values from z = 0.01 to\n0.88 in steps of 0.03, computing chi-squared at each point with a fixed\ntemplate. This identifies a rough minimum without requiring gradients.\n\nStage 2 takes the best grid redshift and refines it with 500 Adam optimizer\nsteps. The loss function (sum of squared magnitude residuals) flows through\n`Spectrum.redshift` and `Photometry.ab_mag`, so Rune provides exact gradients\nwith respect to log(1+z) and log(normalization). The parameterization in\nlog-space ensures positivity and improves conditioning.\n\n## Try It\n\n1. Change the true redshift to z=0.7 and observe how the grid search coarseness\n   affects the initial estimate.\n2. Add temperature as a third free parameter in the refinement stage.\n3. Replace the single blackbody template with a composite SED that includes an\n   emission line.\n\n## Next Steps\n\nContinue to [09-gravitational-lensing](../09-gravitational-lensing/) to see how\nRune's autodiff can fit physical parameters of a gravitational lens model from\nobserved image positions.\n"
  },
  {
    "path": "dev/umbra/examples/08-photometric-redshifts/dune",
    "content": "(executable\n (name main)\n (libraries nx rune vega umbra))\n"
  },
  {
    "path": "dev/umbra/examples/08-photometric-redshifts/main.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Photometric redshift estimation via template fitting.\n\n   Demonstrates composing Spectrum.redshift -> Extinction.apply ->\n   Photometry.ab_mag through real SDSS filters, with gradient refinement via\n   Rune's autodiff. Auto-resampling makes the pipeline seamless.\n\n   Stage 1: Grid search over redshift to find a coarse estimate. Stage 2: Adam\n   optimizer refines z and normalization using AD gradients. *)\n\nopen Nx\nopen Umbra\n\nlet f64 = Nx.float64\n\nlet bands =\n  [\n    Filters.sdss_u;\n    Filters.sdss_g;\n    Filters.sdss_r;\n    Filters.sdss_i;\n    Filters.sdss_z;\n  ]\n\nlet band_names = [| \"u\"; \"g\"; \"r\"; \"i\"; \"z\" |]\n\n(* True parameters for synthetic galaxy *)\nlet true_z = 0.3\nlet true_temp = 5500.0\nlet true_av = 0.2\nlet true_log_norm = -50.0\nlet rv = Nx.scalar f64 3.1\n\n(* Synthetic observed magnitudes *)\nlet obs_mags =\n  let temp = Unit.Temperature.of_kelvin (Nx.scalar f64 true_temp) in\n  let z = Nx.scalar f64 true_z in\n  let av = Nx.scalar f64 true_av in\n  let norm = Nx.scalar f64 (Float.exp true_log_norm) in\n  List.map\n    (fun bp ->\n      let bp_wave = Photometry.wavelength bp in\n      let sed =\n        Spectrum.blackbody ~temperature:temp ~wavelength:bp_wave\n        |> Spectrum.scale norm\n        |> Extinction.apply (Extinction.ccm89 ~rv) ~av\n        |> Spectrum.as_flux_density |> Spectrum.redshift ~z\n      in\n      Photometry.ab_mag bp sed)\n    bands\n\n(* Grid search: coarse scan over z *)\nlet grid_search () =\n  let best_z = ref 0.0 in\n  let best_chi2 = ref Float.infinity in\n  let n_z = 30 in\n  for iz = 0 to n_z - 1 do\n    let z = Nx.scalar f64 (0.01 +. (Float.of_int iz *. 
0.03)) in\n    let temp = Unit.Temperature.of_kelvin (Nx.scalar f64 5000.0) in\n    let norm = Nx.scalar f64 (Float.exp (-50.0)) in\n    let pred =\n      List.map\n        (fun bp ->\n          let bp_wave = Photometry.wavelength bp in\n          let sed =\n            Spectrum.blackbody ~temperature:temp ~wavelength:bp_wave\n            |> Spectrum.scale norm |> Spectrum.as_flux_density\n            |> Spectrum.redshift ~z\n          in\n          Photometry.ab_mag bp sed)\n        bands\n    in\n    (* Chi-squared over the five band magnitudes *)\n    let chi2 =\n      List.fold_left2\n        (fun acc p o -> add acc (square (sub p o)))\n        (scalar f64 0.0) pred obs_mags\n    in\n    let chi2_v = item [] chi2 in\n    if chi2_v < !best_chi2 then begin\n      best_chi2 := chi2_v;\n      best_z := item [] z\n    end\n  done;\n  !best_z\n\n(* Gradient refinement around grid minimum *)\nlet refine z0 =\n  let loss params =\n    match params with\n    | [ log_z1; log_norm ] ->\n        let z = sub (exp log_z1) (scalar f64 1.0) in\n        let temp = Unit.Temperature.of_kelvin (Nx.scalar f64 5500.0) in\n        let norm = exp log_norm in\n        let pred =\n          List.map\n            (fun bp ->\n              let bp_wave = Photometry.wavelength bp in\n              let sed =\n                Spectrum.blackbody ~temperature:temp ~wavelength:bp_wave\n                |> Spectrum.scale norm |> Spectrum.as_flux_density\n                |> Spectrum.redshift ~z\n              in\n              Photometry.ab_mag bp sed)\n            bands\n        in\n        List.fold_left2\n          (fun acc p o -> add acc (square (sub p o)))\n          (scalar f64 0.0) pred obs_mags\n    | _ -> failwith \"expected [log_z1; log_norm]\"\n  in\n  let algo = Vega.adam (Vega.Schedule.constant 5e-4) in\n  let log_z1 = ref (scalar f64 (Float.log (1.0 +. 
z0))) in\n  let log_norm = ref (scalar f64 (-50.0)) in\n  let states = [| Vega.init algo !log_z1; Vega.init algo !log_norm |] in\n  let refs = [| log_z1; log_norm |] in\n  for _ = 0 to 499 do\n    let _loss_val, grads = Rune.value_and_grads loss [ !log_z1; !log_norm ] in\n    List.iteri\n      (fun j g ->\n        let p, s = Vega.step states.(j) ~grad:g ~param:!(refs.(j)) in\n        refs.(j) := p;\n        states.(j) <- s)\n      grads\n  done;\n  Float.exp (item [] !log_z1) -. 1.0\n\nlet () =\n  Printf.printf \"Photometric Redshift Estimation\\n\";\n  Printf.printf \"===============================\\n\";\n  Printf.printf\n    \"Pipeline: blackbody -> redshift -> extinction -> ab_mag (SDSS)\\n\\n\";\n  Printf.printf \"True: z=%.3f  T=%.0fK  A_V=%.2f\\n\\n\" true_z true_temp true_av;\n  Printf.printf \"Observed magnitudes:\\n\";\n  List.iteri\n    (fun i m -> Printf.printf \"  %s = %.3f\\n\" band_names.(i) (item [] m))\n    obs_mags;\n  Printf.printf \"\\nStep 1: Grid search (z = 0.01 to 0.88)...\\n\";\n  let z_grid = grid_search () in\n  Printf.printf \"  Best grid z = %.3f\\n\" z_grid;\n  Printf.printf \"\\nStep 2: Gradient refinement (500 Adam steps)...\\n\";\n  let z_fit = refine z_grid in\n  Printf.printf \"  Refined z = %.4f  (true: %.3f)\\n\" z_fit true_z;\n  Printf.printf \"  Error = %.4f\\n\" (Float.abs (z_fit -. true_z))\n"
  },
  {
    "path": "dev/umbra/examples/09-gravitational-lensing/README.md",
    "content": "# `09-gravitational-lensing`\n\nFits gravitational lens parameters (lens center and Einstein radius) from\nobserved image positions of a quadruply-imaged quasar. The point-mass lens\nequation is expressed as Nx tensor operations, making the model fully\ndifferentiable through Rune for gradient-based fitting with Adam.\n\n```bash\ncd dev/umbra\ndune exec --root . examples/09-gravitational-lensing/main.exe\n```\n\n## What You'll Learn\n\n- Expressing the gravitational lens equation as differentiable tensor operations\n- Minimizing source-plane variance to fit lens parameters\n- Fitting physical parameters (lens position, Einstein radius) via Adam optimizer\n- Using autodiff gradients with a physics-based loss function\n\n## Key Functions\n\n| Function                | Purpose                                                |\n| ----------------------- | ------------------------------------------------------ |\n| `Nx.square`             | Squared distances for radial computation               |\n| `Nx.sqrt`               | Radial distance from lens center                       |\n| `Nx.mean`               | Mean source position across images                     |\n| `Rune.value_and_grads`  | Compute loss and gradients for all lens parameters      |\n| `Vega.adam`             | Adam optimizer for parameter fitting                   |\n| `Vega.step`             | Apply one optimization update                          |\n\n## How It Works\n\nA point-mass gravitational lens deflects light according to the lens equation:\nbeta = theta - theta_E^2 * theta_hat / |theta|, where beta is the true source\nposition, theta is the observed image position, and theta_E is the Einstein\nradius. 
If the lens model is correct, all observed images should map back to the\nsame source position in the source plane.\n\nThe example generates synthetic image positions for a quadruply-imaged quasar\nwith known lens parameters (x_L=0.1, y_L=-0.05, theta_E=1.0) plus small noise.\nThe loss function maps each image back to the source plane using the current\nlens parameters and computes the variance of the inferred source positions.\nFor noiseless data a correct lens model yields exactly zero variance; with\nnoise, the attainable minimum is small but nonzero.\n\nStarting from an initial guess of (x_L=0, y_L=0, theta_E=0.5), the Adam\noptimizer runs for 500 steps. Rune differentiates through the entire lens\nequation -- including the division by |theta|, the square root, and the\nmean/variance -- to provide exact gradients that drive convergence to the true\nparameters.\n\n## Try It\n\n1. Increase the noise level from 0.005 to 0.05 and observe how far the\n   recovered parameters drift from the true values.\n2. Add a shear term (gamma_1, gamma_2) to the lens model for external\n   tidal perturbation.\n3. Replace the point-mass with a singular isothermal sphere (SIS) profile\n   where the deflection is constant: alpha = theta_E * theta_hat.\n\n## Next Steps\n\nContinue to [10-uncertainty-propagation](../10-uncertainty-propagation/) to\nlearn how to automatically propagate parameter uncertainties through\ncosmological distance calculations using exact AD Jacobians.\n"
  },
  {
    "path": "dev/umbra/examples/09-gravitational-lensing/dune",
    "content": "(executable\n (name main)\n (libraries nx rune vega umbra))\n"
  },
  {
    "path": "dev/umbra/examples/09-gravitational-lensing/main.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Differentiable gravitational lens parameter fitting.\n\n   A point-mass gravitational lens produces multiple images of a background\n   source. Given the observed image positions, we fit the lens center and\n   Einstein radius by requiring that all images map back to the same source\n   position. The lens equation and source-plane mapping are expressed as Nx\n   tensor operations, making the entire model differentiable through Rune. *)\n\nopen Nx\n\nlet f64 = Nx.float64\n\n(* True lens parameters (for generating synthetic data) *)\nlet true_x_l = 0.1\nlet true_y_l = -0.05\nlet true_theta_e = 1.0\n\n(* Generate synthetic image positions for a quadruply-imaged quasar. Source at\n   (0.15, 0.08), lens at (true_x_l, true_y_l). *)\nlet source_x = 0.15\nlet source_y = 0.08\nlet () = Printf.printf \"Differentiable gravitational lens modeling\\n\"\nlet () = Printf.printf \"==========================================\\n\\n\"\n\n(* Solve lens equation: beta = theta - theta_E^2 * theta / |theta|^2 for point\n   mass (where theta is relative to lens center). We generate 4 image positions\n   by solving analytically + adding noise. *)\nlet img_x, img_y =\n  (* For a point mass, images lie along the source-lens axis. Place 4 images at\n     realistic positions around the lens. *)\n  let dx = source_x -. true_x_l in\n  let dy = source_y -. true_y_l in\n  let beta = Float.sqrt ((dx *. dx) +. (dy *. dy)) in\n  let cos_a = dx /. beta and sin_a = dy /. beta in\n  (* Two images along the axis *)\n  let theta_p =\n    (beta\n    +. Float.sqrt ((beta *. beta) +. (4.0 *. true_theta_e *. true_theta_e)))\n    /. 2.0\n  in\n  let theta_m =\n    (beta\n    -. Float.sqrt ((beta *. beta) +. (4.0 *. true_theta_e *. 
true_theta_e)))\n    /. 2.0\n  in\n  (* Image positions in 2D (along and perpendicular to axis, with noise) *)\n  let noise = 0.005 in\n  let x1 = true_x_l +. (theta_p *. cos_a) +. (noise *. 0.3) in\n  let y1 = true_y_l +. (theta_p *. sin_a) -. (noise *. 0.2) in\n  let x2 = true_x_l +. (theta_m *. cos_a) -. (noise *. 0.5) in\n  let y2 = true_y_l +. (theta_m *. sin_a) +. (noise *. 0.4) in\n  (* Add two more images from slight perturbation (simulating extended\n     source) *)\n  let x3 = true_x_l +. (theta_p *. 0.7 *. cos_a) +. (theta_p *. 0.3 *. sin_a) in\n  let y3 = true_y_l +. (theta_p *. 0.7 *. sin_a) -. (theta_p *. 0.3 *. cos_a) in\n  let x4 = true_x_l -. (theta_p *. 0.5 *. cos_a) +. (theta_p *. 0.4 *. sin_a) in\n  let y4 = true_y_l -. (theta_p *. 0.5 *. sin_a) -. (theta_p *. 0.4 *. cos_a) in\n  ( create f64 [| 4 |] [| x1; x2; x3; x4 |],\n    create f64 [| 4 |] [| y1; y2; y3; y4 |] )\n\n(* Loss: given lens params, map each image back to the source plane. All images\n   should map to the same source -> minimize variance of inferred source\n   positions. 
*)\nlet loss params =\n  match params with\n  | [ x_l; y_l; theta_e ] ->\n      (* Displacement from lens center *)\n      let dx = sub img_x x_l in\n      let dy = sub img_y y_l in\n      (* Distance from lens center *)\n      let r_sq = add (square dx) (square dy) in\n      let r = sqrt r_sq in\n      (* Point-mass deflection: alpha = theta_E^2 / r *)\n      let alpha = div (square theta_e) r in\n      (* Source position for each image: beta = theta - alpha * hat(theta) *)\n      let beta_x = sub img_x (mul alpha (div dx r)) in\n      let beta_y = sub img_y (mul alpha (div dy r)) in\n      (* Variance of source positions (should be ~0 if lens model is correct) *)\n      let mean_bx = mean beta_x in\n      let mean_by = mean beta_y in\n      let var_x = mean (square (sub beta_x mean_bx)) in\n      let var_y = mean (square (sub beta_y mean_by)) in\n      add var_x var_y\n  | _ -> failwith \"expected [x_l; y_l; theta_e]\"\n\nlet () =\n  Printf.printf \"True parameters:\\n\";\n  Printf.printf \"  x_L     = %.3f arcsec\\n\" true_x_l;\n  Printf.printf \"  y_L     = %.3f arcsec\\n\" true_y_l;\n  Printf.printf \"  theta_E = %.3f arcsec\\n\\n\" true_theta_e;\n  let algo = Vega.adam (Vega.Schedule.constant 1e-2) in\n  let x_l = ref (scalar f64 0.0) in\n  let y_l = ref (scalar f64 0.0) in\n  let theta_e = ref (scalar f64 0.5) in\n  let states =\n    [| Vega.init algo !x_l; Vega.init algo !y_l; Vega.init algo !theta_e |]\n  in\n  let steps = 500 in\n  Printf.printf \"%5s  %12s  %8s  %8s  %8s\\n\" \"step\" \"loss\" \"x_L\" \"y_L\" \"theta_E\";\n  Printf.printf \"%5s  %12s  %8s  %8s  %8s\\n\" \"-----\" \"------------\" \"--------\"\n    \"--------\" \"--------\";\n  let refs = [| x_l; y_l; theta_e |] in\n  for i = 0 to steps - 1 do\n    let loss_val, grads = Rune.value_and_grads loss [ !x_l; !y_l; !theta_e ] in\n    List.iteri\n      (fun j g ->\n        let p, s = Vega.step states.(j) ~grad:g ~param:!(refs.(j)) in\n        refs.(j) := p;\n        states.(j) <- s)\n      
grads;\n    if i mod 100 = 0 || i = steps - 1 then\n      Printf.printf \"%5d  %12.8f  %8.4f  %8.4f  %8.4f\\n\" i (item [] loss_val)\n        (item [] !x_l) (item [] !y_l) (item [] !theta_e)\n  done;\n  Printf.printf \"\\nFitted parameters:\\n\";\n  Printf.printf \"  x_L     = %.4f  (true: %.4f)\\n\" (item [] !x_l) true_x_l;\n  Printf.printf \"  y_L     = %.4f  (true: %.4f)\\n\" (item [] !y_l) true_y_l;\n  Printf.printf \"  theta_E = %.4f  (true: %.4f)\\n\" (item [] !theta_e)\n    true_theta_e\n"
  },
  {
    "path": "dev/umbra/examples/10-uncertainty-propagation/README.md",
    "content": "# `10-uncertainty-propagation`\n\nAutomatic uncertainty propagation through cosmological distance calculations.\nPropagates H0 and Omega_m uncertainties through distance modulus using exact\nAD Jacobians via forward-mode differentiation. The linear error propagation\nformula (Sigma_out = J Sigma_in J^T) is validated against Monte Carlo sampling\nwith 50,000 draws.\n\n```bash\ncd dev/umbra\ndune exec --root . examples/10-uncertainty-propagation/main.exe\n```\n\n## What You'll Learn\n\n- Computing exact Jacobians automatically with forward-mode AD (`Rune.jacfwd`)\n- Applying linear error propagation via the Jacobian covariance formula\n- Validating analytical uncertainty estimates with Monte Carlo sampling\n- Propagating scalar uncertainties through cosmological models with JVP\n\n## Key Functions\n\n| Function                     | Purpose                                          |\n| ---------------------------- | ------------------------------------------------ |\n| `Cosmo.create_flat_lcdm`     | Create a flat Lambda-CDM cosmology               |\n| `Cosmo.distance_modulus`     | Compute distance modulus at a given redshift      |\n| `Rune.jacfwd`                | Forward-mode Jacobian of a function               |\n| `Rune.jvp`                   | Jacobian-vector product for scalar propagation    |\n| `Nx.cholesky`                | Cholesky decomposition for MC sampling            |\n| `Nx.matmul`                  | Matrix multiply for J Sigma J^T                  |\n| `Nx.diag`                    | Build diagonal covariance from variances          |\n\n## How It Works\n\nGiven input parameters with uncertainties (H0 = 70 +/- 1 km/s/Mpc, Omega_m =\n0.30 +/- 0.01), the example propagates these through `Cosmo.distance_modulus`\nat five redshifts (z = 0.1 to 1.0). The propagation uses the standard linear\nformula: Sigma_out = J Sigma_in J^T, where J is the Jacobian of the distance\nmodulus with respect to [H0, Omega_m]. 
Rather than deriving J analytically,\n`Rune.jacfwd` computes it automatically with just two JVP evaluations (one per\ninput parameter).\n\nFor validation, the example draws 50,000 Monte Carlo samples from the input\ncovariance via Cholesky decomposition, evaluates the model at each sample, and\ncomputes empirical output statistics. Agreement below 1% between AD and MC\nconfirms that linear propagation is accurate for these parameter ranges.\n\nA scalar API demo shows the simpler case: propagating redshift uncertainty\n(z = 0.5 +/- 0.01) through a single `jvp` call, which returns both the output\nvalue and its sensitivity to the input perturbation.\n\n## Try It\n\n1. Add correlation between H0 and Omega_m by putting off-diagonal terms in the\n   input covariance matrix.\n2. Increase the uncertainties to see where linear propagation breaks down and\n   MC diverges from AD.\n3. Propagate uncertainties through `Cosmo.luminosity_distance` instead of\n   distance modulus and compare the relative errors.\n\n## Next Steps\n\nContinue to [11-bayesian-sed](../11-bayesian-sed/) to see how Fisher information\nand Hamiltonian Monte Carlo provide both theoretical bounds and full Bayesian\nposteriors for SED parameter estimation.\n"
  },
  {
    "path": "dev/umbra/examples/10-uncertainty-propagation/dune",
    "content": "(executable\n (name main)\n (libraries nx rune umbra))\n"
  },
  {
    "path": "dev/umbra/examples/10-uncertainty-propagation/main.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Automatic uncertainty propagation through cosmological distances.\n\n   Demonstrates propagating H0 and Omega_m uncertainties through\n   Umbra.Cosmo.distance_modulus using exact AD Jacobians. The linear error\n   propagation formula (Sigma_out = J Sigma_in J^T) is computed automatically\n   via Rune.jacfwd. Results are validated against Monte Carlo sampling.\n\n   Fisher, propagation, and Monte Carlo are all trivial given Rune's jacfwd --\n   no dedicated library needed. *)\n\nopen Nx\nopen Umbra\n\nlet f64 = Nx.float64\n\n(* Redshifts to evaluate *)\nlet redshifts = [| 0.1; 0.3; 0.5; 0.7; 1.0 |]\n\n(* Forward model: given [H0; Omega_m], compute distance modulus at z *)\nlet distance_modulus_at z p =\n  let h0 = Nx.reshape [||] (Nx.slice [ I 0 ] p) in\n  let om = Nx.reshape [||] (Nx.slice [ I 1 ] p) in\n  let cosmo = Cosmo.create_flat_lcdm ~h0 ~omega_m:om in\n  Cosmo.distance_modulus ~p:cosmo (Nx.scalar f64 z)\n\n(* Linear error propagation: Sigma_out = J Sigma_in J^T *)\nlet propagate f ~mean ~cov =\n  let j = Rune.jacfwd f mean in\n  let mean_out = f mean in\n  let cov_out = Nx.matmul (Nx.matmul j cov) (Nx.matrix_transpose j) in\n  let cov_out = Nx.div_s (Nx.add cov_out (Nx.matrix_transpose cov_out)) 2.0 in\n  (mean_out, cov_out)\n\n(* Monte Carlo validation *)\nlet monte_carlo ?(n_samples = 50_000) f ~mean ~cov =\n  let n = Nx.numel mean in\n  let l = Nx.cholesky cov in\n  let z = Nx.randn f64 [| n_samples; n |] in\n  let samples = Nx.add (Nx.matmul z (Nx.matrix_transpose l)) mean in\n  let y0 = f (Nx.slice [ I 0 ] samples) in\n  let m = Nx.numel y0 in\n  let outputs = Nx.zeros f64 [| n_samples; m |] in\n  Nx.set_slice [ I 0 ] outputs y0;\n  for i = 1 to n_samples - 1 do\n    Nx.set_slice [ 
I i ] outputs (f (Nx.slice [ I i ] samples))\n  done;\n  let mean_out = Nx.mean ~axes:[ 0 ] outputs in\n  let centered = Nx.sub outputs mean_out in\n  let cov_out =\n    Nx.div_s\n      (Nx.matmul (Nx.matrix_transpose centered) centered)\n      (Float.of_int (n_samples - 1))\n  in\n  (mean_out, cov_out)\n\nlet () =\n  Printf.printf \"Automatic Uncertainty Propagation through Cosmology\\n\";\n  Printf.printf \"====================================================\\n\\n\";\n\n  (* Parameters with uncertainties *)\n  let h0_mean = 70.0 and h0_std = 1.0 in\n  let om_mean = 0.30 and om_std = 0.01 in\n\n  Printf.printf \"Input parameters:\\n\";\n  Printf.printf \"  H0      = %.1f +/- %.1f km/s/Mpc\\n\" h0_mean h0_std;\n  Printf.printf \"  Omega_m = %.2f +/- %.2f\\n\\n\" om_mean om_std;\n\n  let mean = Nx.create f64 [| 2 |] [| h0_mean; om_mean |] in\n  let std = Nx.create f64 [| 2 |] [| h0_std; om_std |] in\n  let cov = Nx.diag (Nx.square std) in\n\n  Printf.printf \"%5s  %10s  %10s  %10s  %10s\\n\" \"z\" \"mu (AD)\" \"sigma (AD)\"\n    \"sigma (MC)\" \"agreement\";\n  Printf.printf \"%5s  %10s  %10s  %10s  %10s\\n\" \"-----\" \"----------\"\n    \"----------\" \"----------\" \"----------\";\n\n  Array.iter\n    (fun z ->\n      (* AD-based propagation *)\n      let f p = Nx.reshape [| 1 |] (distance_modulus_at z p) in\n      let mean_ad, cov_ad = propagate f ~mean ~cov in\n      let mu_ad = item [ 0 ] mean_ad in\n      let std_ad = Float.sqrt (item [ 0; 0 ] cov_ad) in\n\n      (* Monte Carlo validation *)\n      let _, cov_mc = monte_carlo f ~mean ~cov in\n      let std_mc = Float.sqrt (item [ 0; 0 ] cov_mc) in\n\n      let agreement = Float.abs (std_ad -. std_mc) /. std_mc *. 
100.0 in\n      Printf.printf \"%5.1f  %10.4f  %10.4f  %10.4f  %9.1f%%\\n\" z mu_ad std_ad\n        std_mc agreement)\n    redshifts;\n\n  Printf.printf \"\\n\";\n  Printf.printf \"AD uses exact Jacobians (2 JVP calls for 2 parameters).\\n\";\n  Printf.printf \"MC uses 50,000 samples for validation.\\n\";\n  Printf.printf \"Agreement < 1%% confirms linear propagation is accurate.\\n\";\n\n  (* Also demonstrate the simple scalar API *)\n  Printf.printf \"\\n--- Scalar API demo ---\\n\\n\";\n  Printf.printf \"Propagating z = 0.5 +/- 0.01 through distance_modulus:\\n\";\n  let x = Nx.scalar f64 0.5 in\n  let y, dy =\n    Rune.jvp (fun z -> Cosmo.distance_modulus z) x (Nx.scalar f64 1.0)\n  in\n  let mu_mean = Nx.item [] y in\n  let mu_std = Float.abs (Nx.item [] dy) *. 0.01 in\n  Printf.printf \"  mu = %.4f +/- %.4f\\n\" mu_mean mu_std\n"
  },
  {
    "path": "dev/umbra/examples/11-bayesian-sed/README.md",
    "content": "# `11-bayesian-sed`\n\nFisher information matrix analysis and Hamiltonian Monte Carlo sampling for\nBayesian SED parameter estimation. Computes Cramer-Rao bounds (theoretical\nminimum uncertainties) from the Fisher matrix, then samples the full posterior\nvia HMC through the differentiable spectrum -> extinction -> photometry pipeline.\n\n```bash\ncd dev/umbra\ndune exec --root . examples/11-bayesian-sed/main.exe\n```\n\n## What You'll Learn\n\n- Computing the Fisher information matrix via reverse-mode Jacobians\n- Deriving Cramer-Rao bounds on SED parameters (temperature, extinction)\n- Sampling full Bayesian posteriors with Hamiltonian Monte Carlo\n- Comparing Fisher-predicted vs HMC-sampled uncertainties\n- Building differentiable forward models through tophat bandpasses\n\n## Key Functions\n\n| Function                   | Purpose                                             |\n| -------------------------- | --------------------------------------------------- |\n| `Rune.jacrev`              | Reverse-mode Jacobian for Fisher matrix computation |\n| `Nx.inv`                   | Matrix inverse for Fisher -> covariance             |\n| `Nx.diagonal`              | Extract diagonal (marginal variances)               |\n| `Spectrum.blackbody`       | Generate Planck SED at given temperature            |\n| `Extinction.apply`         | Apply CCM89 dust reddening                          |\n| `Photometry.tophat`        | Create rectangular bandpass filters                 |\n| `Photometry.ab_mag`        | Compute AB magnitude through a bandpass             |\n| `Norn.hmc`                 | Hamiltonian Monte Carlo posterior sampling           |\n\n## How It Works\n\nThe forward model maps two parameters -- log(T) and A_V -- to five broadband\nmagnitudes through the pipeline: `blackbody -> extinction -> ab_mag`. 
Synthetic\nobservations are generated at T=6500 K, A_V=0.5 with realistic photometric\nerrors (0.03-0.05 mag).\n\nThe Fisher information matrix F = J^T C^-1 J is computed from the Jacobian of\nthe model (via `Rune.jacrev`) and the observational covariance C. Inverting F\ngives the Cramer-Rao lower bound -- the best achievable 1-sigma uncertainty on\neach parameter for a given dataset, regardless of estimation method.\n\nThe example then samples the actual Bayesian posterior using `Norn.hmc`. The\nlog-posterior is a Gaussian likelihood with flat priors, and HMC uses Rune's\ngradients to efficiently explore the parameter space with 500 post-warmup\nsamples. Comparing the HMC posterior width to the Fisher prediction validates\nthat the model is well-behaved: when they agree, the posterior is approximately\nGaussian and the Fisher bound is tight.\n\n## Try It\n\n1. Reduce the photometric errors to 0.01 mag and observe how both Fisher bounds\n   and HMC posteriors tighten.\n2. Add a third parameter (redshift) and examine the resulting parameter\n   degeneracies in the Fisher matrix.\n3. Replace the flat prior with an informative Gaussian prior on A_V and see\n   how the posterior shifts.\n\n## Next Steps\n\nContinue to [12-survey-optimization](../12-survey-optimization/) to see how\ndifferentiable Fisher forecasting enables gradient-based optimization of survey\ndesign parameters for weak gravitational lensing.\n"
  },
  {
    "path": "dev/umbra/examples/11-bayesian-sed/dune",
    "content": "(executable\n (name main)\n (libraries nx rune vega norn umbra))\n"
  },
  {
    "path": "dev/umbra/examples/11-bayesian-sed/main.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Fisher information and HMC sampling for SED parameter estimation.\n\n   Demonstrates two capabilities:\n\n   1. Fisher matrix: compute the Cramer-Rao bounds on temperature and extinction\n   -- \"how well CAN I constrain these parameters from UGRIZ photometry?\" --\n   before taking any data. Computed inline from Rune.jacrev + linear algebra.\n\n   2. HMC sampling: full Bayesian posterior through the differentiable Spectrum\n   -> Extinction -> Photometry pipeline, via Norn.hmc. *)\n\nopen Nx\nopen Umbra\n\nlet f64 = Nx.float64\n\n(* Bandpasses *)\nlet n_bp = 20\n\nlet bands =\n  [\n    Photometry.tophat ~lo:(Unit.Length.m 3.0e-7) ~hi:(Unit.Length.m 4.0e-7)\n      ~n:n_bp;\n    Photometry.tophat ~lo:(Unit.Length.m 4.0e-7) ~hi:(Unit.Length.m 5.5e-7)\n      ~n:n_bp;\n    Photometry.tophat ~lo:(Unit.Length.m 5.5e-7) ~hi:(Unit.Length.m 7.0e-7)\n      ~n:n_bp;\n    Photometry.tophat ~lo:(Unit.Length.m 7.0e-7) ~hi:(Unit.Length.m 8.5e-7)\n      ~n:n_bp;\n    Photometry.tophat ~lo:(Unit.Length.m 8.5e-7) ~hi:(Unit.Length.m 1.0e-6)\n      ~n:n_bp;\n  ]\n\nlet band_names = [| \"U\"; \"G\"; \"R\"; \"I\"; \"Z\" |]\nlet rv = Nx.scalar f64 3.1\n\n(* Forward model: [log_T, A_V] -> 5 magnitudes *)\nlet model params =\n  let log_temp = Nx.reshape [||] (Nx.slice [ I 0 ] params) in\n  let av = Nx.reshape [||] (Nx.slice [ I 1 ] params) in\n  let temp = Unit.Temperature.of_kelvin (Nx.exp log_temp) in\n  let mags =\n    List.map\n      (fun bp ->\n        let wave = Photometry.wavelength bp in\n        let sed =\n          Spectrum.blackbody ~temperature:temp ~wavelength:wave\n          |> Extinction.apply (Extinction.ccm89 ~rv) ~av\n          |> Spectrum.as_flux_density\n        in\n        Photometry.ab_mag bp 
sed)\n      bands\n  in\n  Nx.stack ~axis:0 mags\n\n(* True parameters *)\nlet true_log_temp = Float.log 6500.0\nlet true_av = 0.5\nlet true_params = Nx.create f64 [| 2 |] [| true_log_temp; true_av |]\n\n(* Synthetic observations *)\nlet obs_errs = Nx.create f64 [| 5 |] [| 0.05; 0.03; 0.03; 0.04; 0.05 |]\n\nlet obs_mags =\n  let true_mags = model true_params in\n  let noise = Nx.create f64 [| 5 |] [| 0.03; -0.02; 0.01; -0.01; 0.02 |] in\n  Nx.add true_mags noise\n\n(* Fisher information: F = J^T C^-1 J *)\nlet fisher f ~params ~obs_cov =\n  let j = Rune.jacrev f params in\n  let jt = Nx.matrix_transpose j in\n  Nx.matmul (Nx.matmul jt (Nx.inv obs_cov)) j\n\n(* Cramer-Rao bounds: sigma = sqrt(diag(F^-1)) *)\nlet marginal_sigma f = Nx.sqrt (Nx.diagonal (Nx.inv f))\n\nlet () =\n  Printf.printf \"Fisher Information & HMC for SED Fitting\\n\";\n  Printf.printf \"=========================================\\n\\n\";\n\n  Printf.printf \"True parameters:\\n\";\n  Printf.printf \"  T   = %.0f K  (log_T = %.4f)\\n\" (Float.exp true_log_temp)\n    true_log_temp;\n  Printf.printf \"  A_V = %.2f\\n\\n\" true_av;\n\n  Printf.printf \"Observed magnitudes:\\n\";\n  Array.iteri\n    (fun i name ->\n      Printf.printf \"  %s = %.3f +/- %.3f\\n\" name (item [ i ] obs_mags)\n        (item [ i ] obs_errs))\n    band_names;\n  Printf.printf \"\\n\";\n\n  (* --- Fisher Information --- *)\n  Printf.printf \"=== Fisher Information ===\\n\\n\";\n\n  let obs_cov = Nx.diag (Nx.square obs_errs) in\n  let f = fisher model ~params:true_params ~obs_cov in\n  let sigma = marginal_sigma f in\n\n  Printf.printf \"Fisher matrix:\\n\";\n  Printf.printf \"  F = [[ %10.2f  %10.2f ]\\n\"\n    (item [ 0; 0 ] f)\n    (item [ 0; 1 ] f);\n  Printf.printf \"       [ %10.2f  %10.2f ]]\\n\\n\"\n    (item [ 1; 0 ] f)\n    (item [ 1; 1 ] f);\n\n  Printf.printf \"Cramer-Rao bounds (best achievable 1-sigma):\\n\";\n  let sigma_log_t = item [ 0 ] sigma in\n  let sigma_av = item [ 1 ] sigma in\n  Printf.printf \"  
sigma(log_T) = %.4f  ->  sigma(T) ~ %.0f K\\n\" sigma_log_t\n    (sigma_log_t *. Float.exp true_log_temp);\n  Printf.printf \"  sigma(A_V)   = %.4f\\n\\n\" sigma_av;\n\n  (* --- HMC Sampling --- *)\n  Printf.printf \"=== HMC Posterior Sampling ===\\n\\n\";\n\n  (* Log-posterior: Gaussian likelihood, flat prior *)\n  let log_posterior params =\n    let pred = model params in\n    let residuals = Nx.div (Nx.sub pred obs_mags) obs_errs in\n    Nx.mul_s (Nx.sum (Nx.square residuals)) (-0.5)\n  in\n\n  let init = Nx.create f64 [| 2 |] [| Float.log 7000.0; 0.3 |] in\n  let result =\n    Norn.hmc ~step_size:0.001 ~num_leapfrog:10 ~num_warmup:200 ~n:500\n      log_posterior init\n  in\n\n  Printf.printf \"HMC diagnostics:\\n\";\n  Printf.printf \"  Accept rate: %.1f%%\\n\\n\" (result.stats.accept_rate *. 100.);\n\n  (* Sample statistics *)\n  let sample_mean = Nx.mean ~axes:[ 0 ] result.samples in\n  let centered = Nx.sub result.samples sample_mean in\n  let sample_cov =\n    Nx.div_s\n      (Nx.matmul (Nx.matrix_transpose centered) centered)\n      (Float.of_int 499)\n  in\n  let sample_std = Nx.sqrt (Nx.diagonal sample_cov) in\n\n  let hmc_log_t = item [ 0 ] sample_mean in\n  let hmc_av = item [ 1 ] sample_mean in\n  let hmc_sigma_log_t = item [ 0 ] sample_std in\n  let hmc_sigma_av = item [ 1 ] sample_std in\n\n  Printf.printf \"Posterior (HMC):\\n\";\n  Printf.printf \"  log_T = %.4f +/- %.4f  ->  T ~ %.0f K\\n\" hmc_log_t\n    hmc_sigma_log_t (Float.exp hmc_log_t);\n  Printf.printf \"  A_V   = %.4f +/- %.4f\\n\\n\" hmc_av hmc_sigma_av;\n\n  (* --- Comparison --- *)\n  Printf.printf \"=== Fisher vs HMC Comparison ===\\n\\n\";\n  Printf.printf \"  %12s  %10s  %10s\\n\" \"\" \"Fisher s\" \"HMC s\";\n  Printf.printf \"  %12s  %10s  %10s\\n\" \"------------\" \"----------\" \"----------\";\n  Printf.printf \"  %12s  %10.4f  %10.4f\\n\" \"s(log_T)\" sigma_log_t\n    hmc_sigma_log_t;\n  Printf.printf \"  %12s  %10.4f  %10.4f\\n\\n\" \"s(A_V)\" sigma_av hmc_sigma_av;\n\n  
Printf.printf \"Fisher gives the theoretical minimum uncertainty.\\n\";\n  Printf.printf \"HMC gives the actual posterior width.\\n\";\n  Printf.printf \"Agreement confirms the model is well-behaved (near-linear).\\n\"\n"
  },
  {
    "path": "dev/umbra/examples/12-survey-optimization/README.md",
    "content": "# `12-survey-optimization`\n\nDifferentiable survey optimization for a Stage IV weak lensing survey. Uses\nexact autodiff gradients to optimize survey parameters that minimize the\nuncertainty on S8 = sigma8 * sqrt(Omega_m / 0.3), replacing traditional grid\nsearch with gradient-based Fisher forecasting. Demonstrates both a single-bin\narea/depth tradeoff and joint optimization of sky fraction with tomographic bin\nedges.\n\n```bash\ncd dev/umbra\ndune exec --root . examples/12-survey-optimization/main.exe\n```\n\n## What You'll Learn\n\n- Computing differentiable Fisher information matrices for survey forecasting\n- Optimizing the area/depth tradeoff for sky coverage vs galaxy density\n- Jointly optimizing sky fraction and tomographic bin edges with gradient descent\n- Using sigmoid-windowed bins for smooth gradient flow through discrete boundaries\n- Comparing gradient-based optimization against brute-force grid search\n\n## Key Functions\n\n| Function                    | Purpose                                              |\n| --------------------------- | ---------------------------------------------------- |\n| `Survey.angular_cl`         | Compute angular power spectra for tracer pairs       |\n| `Survey.weak_lensing`       | Create a weak lensing tracer from n(z)               |\n| `Survey.smail`              | Smail redshift distribution for source galaxies      |\n| `Cosmo.planck18`            | Planck 2018 fiducial cosmology                       |\n| `Cosmo.linear_power`        | Linear matter power spectrum P(k, z)                 |\n| `Cosmo.comoving_distance`   | Comoving distance for lensing kernel computation     |\n| `Rune.value_and_grad`       | Loss and gradient for survey parameter optimization  |\n| `Vega.adam`                 | Adam optimizer for continuous parameter search       |\n\n## How It Works\n\nPart 1 tackles the area/depth tradeoff for a single tomographic bin. 
A fixed\ngalaxy budget (n_gal * f_sky = constant) means wider surveys are shallower. The\nFisher matrix for [Omega_m, sigma8] is computed from Limber-integrated angular\npower spectra, with shape noise that depends on galaxy density. The objective\nfunction -- sigma(S8) derived from the 2x2 Fisher inverse -- is fully\ndifferentiable through f_sky via sigmoid parameterization. Adam finds the\noptimal sky fraction in 300 steps with exact gradients, verified by a\nfinite-difference check.\n\nPart 2 extends to joint optimization of sky fraction and two tomographic bin\nedges that divide galaxies into three redshift bins. The bin boundaries use\nsigmoid window functions (with width delta=0.03) so that gradients flow smoothly\nthrough the discrete bin assignment. Narrower bins concentrate signal but\nincrease shot noise; the optimizer balances this tradeoff automatically. The\nLimber integral uses precomputed cosmological grids (comoving distances, Hubble\nrates, power spectra) evaluated at five cosmologies (the fiducial plus four\nperturbations) for numerical derivatives of C_l with respect to Omega_m and\nsigma8, while gradients with respect to survey parameters (f_sky, z1, z2) flow\nthrough Rune's autodiff.\n\nA brute-force grid search over 12 x 15 x 15 = 2700 parameter combinations\nvalidates the gradient result, demonstrating that 500 Adam steps achieve equal\nor better precision with orders of magnitude fewer function evaluations.\n\n## Try It\n\n1. Increase the galaxy budget from 10 to 50 gal/arcmin2 and observe how the\n   optimal sky fraction shifts toward wider coverage.\n2. Add a fourth tomographic bin and compare the improvement in sigma(S8).\n3. Replace the Smail n(z) with a sharper distribution and see how the optimal\n   bin edges respond.\n\n## Next Steps\n\nThis is the final example in the Umbra series. 
For earlier topics, revisit\n[01-constants-and-units](../01-constants-and-units/) for physical constants and\nunit handling, or [05-sed-fitting](../05-sed-fitting/) for the foundations of\ndifferentiable spectral energy distribution fitting that this example builds on.\n"
  },
  {
    "path": "dev/umbra/examples/12-survey-optimization/dune",
    "content": "(executable\n (name main)\n (libraries nx rune vega umbra))\n"
  },
  {
    "path": "dev/umbra/examples/12-survey-optimization/main.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Differentiable survey optimization via autodiff gradients through the Fisher\n   information matrix.\n\n   Traditional survey optimization uses grid search over discrete Fisher\n   forecasts. Umbra's fully differentiable cosmology pipeline enables\n   gradient-based continuous optimization: compute Fisher(survey_params) and\n   minimize sigma(S8) with respect to survey parameters using exact autodiff\n   gradients from Rune.\n\n   Part 1: Area/depth tradeoff -- optimize f_sky with fixed n(z) shape. Part 2:\n   Joint area + bin edge optimization -- optimize f_sky and tomographic bin\n   edges simultaneously, with gradients flowing through the lensing kernel\n   computation via differentiable n(z) windowing. *)\n\nopen Nx\nopen Umbra\n\nlet f64 = Nx.float64\nlet sigma_e = 0.26\nlet steradian_to_arcmin2 = 11818102.86004228\nlet c_km_s = 299792.458\nlet h0_ref = 100.0\n\n(* Fiducial cosmology *)\nlet p_fid = Cosmo.planck18\nlet omega_m_fid = Nx.item [] (Cosmo.omega_m p_fid)\nlet sigma8_fid = Nx.item [] (Cosmo.sigma8 p_fid)\n\n(* S8 = sigma8 * sqrt(omega_m / 0.3) -- derivatives at fiducial *)\nlet ds8_dom = sigma8_fid /. (2.0 *. Float.sqrt (0.3 *. omega_m_fid))\nlet ds8_ds8 = Float.sqrt (omega_m_fid /. 0.3)\n\n(* ell weights: (2*ell+1) * dell / 2 *)\nlet ell_weights ell =\n  let n_ell = (Nx.shape ell).(0) in\n  let dell =\n    Array.init n_ell (fun l ->\n        if l = 0 then Nx.item [ 1 ] ell -. Nx.item [ 0 ] ell\n        else if l = n_ell - 1 then Nx.item [ l ] ell -. Nx.item [ l - 1 ] ell\n        else 0.5 *. (Nx.item [ l + 1 ] ell -. Nx.item [ l - 1 ] ell))\n  in\n  Nx.create f64 [| n_ell |]\n    (Array.init n_ell (fun l ->\n         ((2.0 *. Nx.item [ l ] ell) +. 1.0) *. dell.(l) /. 
2.0))\n\n(* Compute dCl/d(theta) via central finite differences *)\nlet finite_diff_cl ~ell ~tracers ~param_name ~set_param ~fid_val ~eps =\n  let p_plus = set_param (scalar f64 (fid_val +. eps)) p_fid in\n  let p_minus = set_param (scalar f64 (fid_val -. eps)) p_fid in\n  let cl_p =\n    Survey.Cls.to_tensor\n      (Survey.angular_cl ~p:p_plus ~power:Survey.linear ~ell tracers)\n  in\n  let cl_m =\n    Survey.Cls.to_tensor\n      (Survey.angular_cl ~p:p_minus ~power:Survey.linear ~ell tracers)\n  in\n  let dcl = Nx.div_s (Nx.sub cl_p cl_m) (2.0 *. eps) in\n  Printf.printf \"  dCl/d(%-8s): max=%.3e\\n\" param_name (Nx.item [] (Nx.max dcl));\n  dcl\n\n(* 2x2 analytical Fisher inverse -> sigma(S8) -- all differentiable *)\nlet sigma_s8_from_fisher f11 f12 f22 =\n  let det = Nx.sub (Nx.mul f11 f22) (Nx.mul f12 f12) in\n  let a = scalar f64 ds8_dom and b = scalar f64 ds8_ds8 in\n  let sigma_sq =\n    Nx.div\n      (Nx.add\n         (Nx.sub\n            (Nx.mul f22 (Nx.mul a a))\n            (Nx.mul_s (Nx.mul f12 (Nx.mul a b)) 2.0))\n         (Nx.mul f11 (Nx.mul b b)))\n      det\n  in\n  Nx.sqrt sigma_sq\n\n(* ===================================================================== *)\n(* Part 1: Area/depth tradeoff (single bin) *)\n(* ===================================================================== *)\n\nlet part1 () =\n  Printf.printf \"--- Part 1: Area/Depth Tradeoff (1 bin) ---\\n\\n\";\n  let budget = 10.0 in\n  let ell = Nx.logspace f64 1.0 3.0 30 in\n  let w_ell = ell_weights ell in\n\n  let nz = Survey.smail ~a:2.0 ~b:1.5 ~z0:0.3 () in\n  let wl = Survey.weak_lensing nz in\n\n  Printf.printf \"Precomputing signal derivatives...\\n\";\n  let cl_fid =\n    Survey.Cls.to_tensor\n      (Survey.angular_cl ~p:p_fid ~power:Survey.linear ~ell [ wl ])\n  in\n  let cl_fid_flat = Nx.flatten cl_fid in\n  let eps = 1e-4 in\n  let dcl_dom =\n    Nx.flatten\n      (finite_diff_cl ~ell ~tracers:[ wl ] ~param_name:\"omega_m\"\n         ~set_param:(fun v p -> Cosmo.set_t 
~omega_m:v p)\n         ~fid_val:omega_m_fid ~eps)\n  in\n  let dcl_ds8 =\n    Nx.flatten\n      (finite_diff_cl ~ell ~tracers:[ wl ] ~param_name:\"sigma8\"\n         ~set_param:(fun v p -> Cosmo.set_t ~sigma8:v p)\n         ~fid_val:sigma8_fid ~eps)\n  in\n  Printf.printf \"\\n\";\n\n  let objective log_f_sky =\n    let f_sky = Nx.sigmoid log_f_sky in\n    let n_gal = Nx.div (scalar f64 budget) f_sky in\n    let noise =\n      Nx.div\n        (scalar f64 (sigma_e *. sigma_e))\n        (Nx.mul_s n_gal steradian_to_arcmin2)\n    in\n    let cl_obs = Nx.add cl_fid_flat noise in\n    let cl_obs_sq = Nx.mul cl_obs cl_obs in\n    let weighted_dom =\n      Nx.div (Nx.mul w_ell (Nx.mul dcl_dom dcl_dom)) cl_obs_sq\n    in\n    let weighted_ds8 =\n      Nx.div (Nx.mul w_ell (Nx.mul dcl_ds8 dcl_ds8)) cl_obs_sq\n    in\n    let weighted_x = Nx.div (Nx.mul w_ell (Nx.mul dcl_dom dcl_ds8)) cl_obs_sq in\n    let f11 = Nx.mul f_sky (Nx.sum weighted_dom) in\n    let f12 = Nx.mul f_sky (Nx.sum weighted_x) in\n    let f22 = Nx.mul f_sky (Nx.sum weighted_ds8) in\n    sigma_s8_from_fisher f11 f12 f22\n  in\n\n  (* Gradient check *)\n  let log_f_sky_init = scalar f64 0.0 in\n  let v0, g0 = Rune.value_and_grad objective log_f_sky_init in\n  let fd_eps = 1e-5 in\n  let vp = item [] (objective (scalar f64 fd_eps)) in\n  let vm = item [] (objective (scalar f64 (-.fd_eps))) in\n  let fd = (vp -. vm) /. (2.0 *. fd_eps) in\n  Printf.printf \"Gradient check: AD=%.6e  FD=%.6e  rel=%.2e\\n\\n\" (item [] g0) fd\n    (Float.abs (item [] g0 -. fd) /. Float.abs fd);\n\n  let f_sky_0 = 1.0 /. (1.0 +. Float.exp (-0.0)) in\n  Printf.printf \"Initial: f_sky=%.3f  n_gal=%.1f  sigma(S8)=%.6f\\n\" f_sky_0\n    (budget /. 
f_sky_0) (item [] v0);\n\n  let algo = Vega.adam (Vega.Schedule.constant 0.01) in\n  let log_f_sky = ref log_f_sky_init in\n  let state = ref (Vega.init algo !log_f_sky) in\n  let best_sigma = ref (item [] v0) in\n  let best_f_sky = ref f_sky_0 in\n  Printf.printf \"\\n%5s  %8s  %8s  %10s\\n\" \"step\" \"f_sky\" \"n_gal\" \"sigma(S8)\";\n  Printf.printf \"%5s  %8s  %8s  %10s\\n\" \"-----\" \"--------\" \"--------\"\n    \"----------\";\n  let steps = 300 in\n  for i = 0 to steps - 1 do\n    let sigma_val, grad = Rune.value_and_grad objective !log_f_sky in\n    let p, s = Vega.step !state ~grad ~param:!log_f_sky in\n    log_f_sky := p;\n    state := s;\n    let f_sky_cur = 1.0 /. (1.0 +. Float.exp (-.item [] !log_f_sky)) in\n    let sigma_cur = item [] sigma_val in\n    if sigma_cur < !best_sigma then begin\n      best_sigma := sigma_cur;\n      best_f_sky := f_sky_cur\n    end;\n    if i mod 50 = 0 || i = steps - 1 then\n      Printf.printf \"%5d  %8.4f  %8.1f  %10.6f\\n\" i f_sky_cur\n        (budget /. f_sky_cur) sigma_cur\n  done;\n  Printf.printf \"\\nOptimal: f_sky=%.4f  n_gal=%.1f gal/arcmin2\\n\" !best_f_sky\n    (budget /. !best_f_sky);\n  Printf.printf \"Improvement: sigma(S8) reduced by %.1f%% vs initial\\n\\n\"\n    ((1.0 -. (!best_sigma /. item [] v0)) *. 100.0)\n\n(* ===================================================================== *)\n(* Part 2: Joint area + bin edge optimization (3 bins) *)\n(* ===================================================================== *)\n\n(* Precomputed cosmological grids -- expensive, done once per cosmology. *)\ntype cosmo_grid = {\n  n_z : int;\n  dz : float;\n  z_arr : float array;\n  z_vec : Nx.float64_t;\n  chi_safe : Nx.float64_t;\n  omega_m_t : Nx.float64_t;\n  integ_weight : Nx.float64_t;\n  w_pk : Nx.float64_t;\n  ell_factor_sq : Nx.float64_t;\n}\n\nlet precompute_grid ~p ~ell =\n  let zmax = 3.0 in\n  (* 51 points give an even number of subintervals, as the composite Simpson\n     weights below require. *)\n  let n_z = 51 in\n  let dz = zmax /. 
Float.of_int (n_z - 1) in\n  let z_arr = Array.init n_z (fun i -> Float.of_int i *. dz) in\n  z_arr.(0) <- 1e-6;\n  let z_vec = Nx.create f64 [| n_z |] z_arr in\n  let sw =\n    Array.init n_z (fun i ->\n        if i = 0 || i = n_z - 1 then 1.0 else if i mod 2 = 1 then 4.0 else 2.0)\n  in\n  let simpson_w = Nx.mul_s (Nx.create f64 [| n_z |] sw) (dz /. 3.0) in\n  let h_t = Nx.item [] (Nx.div (Cosmo.h0 p) (Nx.scalar f64 h0_ref)) in\n  let chi_vec =\n    Nx.create f64 [| n_z |]\n      (Array.init n_z (fun j ->\n           let z_t = Nx.scalar f64 z_arr.(j) in\n           let chi =\n             Nx.item [] (Unit.Length.in_mpc (Cosmo.comoving_distance ~p z_t))\n           in\n           chi *. h_t))\n  in\n  let chi_safe = Nx.clamp ~min:1e-10 chi_vec in\n  let h_vec_f =\n    Array.init n_z (fun j ->\n        Nx.item [] (Cosmo.hubble ~p (Nx.scalar f64 z_arr.(j))))\n  in\n  let dchi_dz_vec =\n    Nx.create f64 [| n_z |]\n      (Array.init n_z (fun j -> h_t *. c_km_s /. h_vec_f.(j)))\n  in\n  let omega_m_t = Nx.scalar f64 (Nx.item [] (Cosmo.omega_m p)) in\n  let integ_weight =\n    Nx.create f64 [| n_z |]\n      (Array.init n_z (fun j ->\n           let sw_j = Nx.item [ j ] simpson_w in\n           let dchi_j = Nx.item [ j ] dchi_dz_vec in\n           let chi_j = Nx.item [ j ] chi_safe in\n           sw_j *. dchi_j /. (chi_j *. chi_j) /. (c_km_s *. 
c_km_s)))\n  in\n  let pk_grid =\n    Nx.stack\n      (List.init n_z (fun j ->\n           let z_t = Nx.scalar f64 z_arr.(j) in\n           let chi_j = Nx.item [ j ] chi_safe in\n           let k_vec = Nx.div_s (Nx.add_s ell 0.5) chi_j in\n           Cosmo.linear_power ~p k_vec z_t))\n  in\n  let w_pk =\n    Nx.mul\n      (Nx.reshape [| n_z; 1 |]\n         (Nx.create f64 [| n_z |]\n            (Array.init n_z (fun j -> Nx.item [ j ] integ_weight))))\n      pk_grid\n  in\n  let l = ell in\n  let num =\n    Nx.mul\n      (Nx.mul (Nx.sub_s l 1.0) l)\n      (Nx.mul (Nx.add_s l 1.0) (Nx.add_s l 2.0))\n  in\n  let den = Nx.mul (Nx.add_s l 0.5) (Nx.add_s l 0.5) in\n  let ell_factor = Nx.div (Nx.sqrt (Nx.abs num)) den in\n  let ell_factor_sq = Nx.mul ell_factor ell_factor in\n  {\n    n_z;\n    dz;\n    z_arr;\n    z_vec;\n    chi_safe;\n    omega_m_t;\n    integ_weight;\n    w_pk;\n    ell_factor_sq;\n  }\n\n(* Reverse cumulative trapezoidal sum *)\nlet rev_cumtrapz f_vec n dz =\n  let left = Nx.slice [ R (0, n - 1) ] f_vec in\n  let right = Nx.slice [ R (1, n) ] f_vec in\n  let mid = Nx.mul_s (Nx.add left right) (0.5 *. dz) in\n  let partial = Nx.flip (Nx.cumsum ~axis:0 (Nx.flip mid)) in\n  Nx.concatenate [ partial; Nx.zeros f64 [| 1 |] ]\n\n(* Fast WL-only angular Cl from precomputed cosmo grid + pre-evaluated n(z)\n   tensors. nz_tensors are [n_z] tensors, one per bin, evaluated on the z grid.\n   Differentiable through the n(z) values. *)\nlet fast_wl_cl grid nz_tensors =\n  let n_z = grid.n_z and dz = grid.dz in\n  let n_bins = Array.length nz_tensors in\n  (* Build WL kernels *)\n  let prefactor =\n    Nx.mul_s grid.omega_m_t (3.0 *. h0_ref *. h0_ref /. (2.0 *. 
c_km_s))\n  in\n  let one_plus_z = Nx.add_s grid.z_vec 1.0 in\n  let kernels =\n    Array.init n_bins (fun b ->\n        let nz_t = nz_tensors.(b) in\n        let a_vec = rev_cumtrapz nz_t n_z dz in\n        let nz_over_chi = Nx.div nz_t grid.chi_safe in\n        let b_vec = rev_cumtrapz nz_over_chi n_z dz in\n        let g_vec = Nx.sub a_vec (Nx.mul grid.chi_safe b_vec) in\n        Nx.mul prefactor (Nx.mul one_plus_z (Nx.mul grid.chi_safe g_vec)))\n  in\n  (* Limber integration for all pairs *)\n  let pairs = ref [] in\n  for i = 0 to n_bins - 1 do\n    for j = i to n_bins - 1 do\n      pairs := (i, j) :: !pairs\n    done\n  done;\n  let pairs = List.rev !pairs in\n  Nx.stack\n    (List.map\n       (fun (i, j) ->\n         let ki = Nx.reshape [| n_z; 1 |] kernels.(i) in\n         let kj = Nx.reshape [| n_z; 1 |] kernels.(j) in\n         let integrand = Nx.mul (Nx.mul ki kj) grid.w_pk in\n         Nx.mul grid.ell_factor_sq (Nx.sum ~axes:[ 0 ] integrand))\n       pairs)\n\n(* Parent n(z): Smail distribution, evaluated as float *)\nlet parent_nz =\n  let a = 2.0 and b = 1.5 and z0 = 0.3 in\n  let raw z_f = (z_f ** a) *. Float.exp (-.((z_f /. z0) ** b)) in\n  let norm =\n    let n = 256 in\n    let h = 3.0 /. Float.of_int n in\n    let s = ref (raw 1e-6 +. raw 3.0) in\n    for i = 1 to n - 1 do\n      let x = Float.of_int i *. h in\n      let w = if i mod 2 = 1 then 4.0 else 2.0 in\n      s := !s +. (w *. raw x)\n    done;\n    !s *. h /. 3.0\n  in\n  fun z_f -> raw z_f /. 
norm\n\n(* Build a differentiable bin n(z) with smooth sigmoid edges *)\nlet make_bin_eval z_lo z_hi delta z =\n  let parent_val = parent_nz (Nx.item [] z) in\n  if parent_val < 1e-30 then scalar f64 0.0\n  else\n    let lo_gate = Nx.sigmoid (Nx.div_s (Nx.sub z z_lo) delta) in\n    let hi_gate = Nx.sigmoid (Nx.div_s (Nx.sub z_hi z) delta) in\n    Nx.mul_s (Nx.mul lo_gate hi_gate) parent_val\n\nlet part2 () =\n  Printf.printf \"--- Part 2: Joint Area + Bin Edges (3 bins) ---\\n\\n\";\n  let budget = 10.0 in\n  let ell = Nx.logspace f64 1.0 3.0 20 in\n  let w_ell = ell_weights ell in\n  let eps = 1e-4 in\n  let delta = 0.03 in\n\n  Printf.printf \"Precomputing cosmo grids (fiducial + 4 perturbations)...\\n\";\n  let grid_fid = precompute_grid ~p:p_fid ~ell in\n  let grid_p_om =\n    precompute_grid\n      ~p:(Cosmo.set_t ~omega_m:(scalar f64 (omega_m_fid +. eps)) p_fid)\n      ~ell\n  in\n  let grid_m_om =\n    precompute_grid\n      ~p:(Cosmo.set_t ~omega_m:(scalar f64 (omega_m_fid -. eps)) p_fid)\n      ~ell\n  in\n  let grid_p_s8 =\n    precompute_grid\n      ~p:(Cosmo.set_t ~sigma8:(scalar f64 (sigma8_fid +. eps)) p_fid)\n      ~ell\n  in\n  let grid_m_s8 =\n    precompute_grid\n      ~p:(Cosmo.set_t ~sigma8:(scalar f64 (sigma8_fid -. 
eps)) p_fid)\n      ~ell\n  in\n  Printf.printf \"Done.\\n\\n\";\n\n  let n_z = grid_fid.n_z in\n  let z_arr = grid_fid.z_arr in\n  let dz = grid_fid.dz in\n\n  let objective params =\n    let log_f_sky = Nx.get [ 0 ] params in\n    let z1 = Nx.get [ 1 ] params in\n    let z2 = Nx.get [ 2 ] params in\n    let f_sky = Nx.sigmoid log_f_sky in\n    let n_gal = Nx.div (scalar f64 budget) f_sky in\n\n    (* Differentiable n(z) bin functions *)\n    let nz_funs =\n      [|\n        make_bin_eval (scalar f64 0.0) z1 delta;\n        make_bin_eval z1 z2 delta;\n        make_bin_eval z2 (scalar f64 3.0) delta;\n      |]\n    in\n\n    (* Evaluate n(z) on z grid -- differentiable through bin edges *)\n    let nz_tensors =\n      Array.init 3 (fun b ->\n          Nx.stack\n            (List.init n_z (fun j -> nz_funs.(b) (Nx.scalar f64 z_arr.(j)))))\n    in\n\n    (* Galaxy fraction per bin: integral of window_i(z) n(z) dz. Parent n(z) is\n       normalized so this gives the fraction of total galaxies in each bin.\n       Differentiable through bin edges -- narrow bins get fewer galaxies. *)\n    let gal_fracs =\n      Array.init 3 (fun b ->\n          let nz_t = nz_tensors.(b) in\n          let left = Nx.slice [ R (0, n_z - 2) ] nz_t in\n          let right = Nx.slice [ R (1, n_z - 1) ] nz_t in\n          Nx.mul_s (Nx.sum (Nx.add left right)) (0.5 *. dz))\n    in\n\n    (* Per-bin noise: sigma_e^2 / (n_gal_bin * ster) where n_gal_bin = n_gal *\n       f_i. Bins with fewer galaxies have higher shot noise. *)\n    let noise_per_bin =\n      Array.init 3 (fun b ->\n          Nx.div\n            (scalar f64 (sigma_e *. 
sigma_e))\n            (Nx.mul_s (Nx.mul n_gal gal_fracs.(b)) steradian_to_arcmin2))\n    in\n\n    (* Fast Cl from precomputed grids -- only n(z) -> kernel is traced *)\n    let cl_fid = fast_wl_cl grid_fid nz_tensors in\n    let cl_p_om = fast_wl_cl grid_p_om nz_tensors in\n    let cl_m_om = fast_wl_cl grid_m_om nz_tensors in\n    let cl_p_s8 = fast_wl_cl grid_p_s8 nz_tensors in\n    let cl_m_s8 = fast_wl_cl grid_m_s8 nz_tensors in\n    let dcl_dom = Nx.div_s (Nx.sub cl_p_om cl_m_om) (2.0 *. eps) in\n    let dcl_ds8 = Nx.div_s (Nx.sub cl_p_s8 cl_m_s8) (2.0 *. eps) in\n\n    (* Full Fisher via Tr[C^-1 dC/dtheta_i C^-1 dC/dtheta_j] with analytical 3x3\n       inverse. Vectorized over ell: each matrix element is a [n_ell] tensor. *)\n    let n_bins = 3 in\n\n    (* Pair index: (i,j) -> spectrum row in cl arrays. Ordering: (0,0)=0,\n       (0,1)=1, (0,2)=2, (1,1)=3, (1,2)=4, (2,2)=5 *)\n    let pidx i j =\n      let a, b = if i <= j then (i, j) else (j, i) in\n      (a * ((2 * n_bins) - a - 1) / 2) + b\n    in\n\n    (* Build 3x3 C(ell) = Cl + N, stored as flat [9] of [n_ell] tensors *)\n    let c =\n      Array.init 9 (fun idx ->\n          let i = idx / 3 and j = idx mod 3 in\n          let cl_ij = Nx.slice [ I (pidx i j) ] cl_fid in\n          if i = j then Nx.add cl_ij noise_per_bin.(i) else cl_ij)\n    in\n\n    (* 3x3 inverse via cofactors / determinant *)\n    let det =\n      Nx.add\n        (Nx.sub\n           (Nx.mul c.(0) (Nx.sub (Nx.mul c.(4) c.(8)) (Nx.mul c.(5) c.(7))))\n           (Nx.mul c.(1) (Nx.sub (Nx.mul c.(3) c.(8)) (Nx.mul c.(5) c.(6)))))\n        (Nx.mul c.(2) (Nx.sub (Nx.mul c.(3) c.(7)) (Nx.mul c.(4) c.(6))))\n    in\n    let ci = Array.make 9 (scalar f64 0.0) in\n    ci.(0) <- Nx.div (Nx.sub (Nx.mul c.(4) c.(8)) (Nx.mul c.(5) c.(7))) det;\n    ci.(1) <- Nx.div (Nx.sub (Nx.mul c.(2) c.(7)) (Nx.mul c.(1) c.(8))) det;\n    ci.(2) <- Nx.div (Nx.sub (Nx.mul c.(1) c.(5)) (Nx.mul c.(2) c.(4))) det;\n    ci.(3) <- Nx.div (Nx.sub (Nx.mul c.(5) 
c.(6)) (Nx.mul c.(3) c.(8))) det;\n    ci.(4) <- Nx.div (Nx.sub (Nx.mul c.(0) c.(8)) (Nx.mul c.(2) c.(6))) det;\n    ci.(5) <- Nx.div (Nx.sub (Nx.mul c.(2) c.(3)) (Nx.mul c.(0) c.(5))) det;\n    ci.(6) <- Nx.div (Nx.sub (Nx.mul c.(3) c.(7)) (Nx.mul c.(4) c.(6))) det;\n    ci.(7) <- Nx.div (Nx.sub (Nx.mul c.(1) c.(6)) (Nx.mul c.(0) c.(7))) det;\n    ci.(8) <- Nx.div (Nx.sub (Nx.mul c.(0) c.(4)) (Nx.mul c.(1) c.(3))) det;\n\n    (* Build dC/dtheta matrices: symmetric, no noise term *)\n    let dc_om =\n      Array.init 9 (fun idx ->\n          Nx.slice [ I (pidx (idx / 3) (idx mod 3)) ] dcl_dom)\n    in\n    let dc_s8 =\n      Array.init 9 (fun idx ->\n          Nx.slice [ I (pidx (idx / 3) (idx mod 3)) ] dcl_ds8)\n    in\n\n    (* 3x3 matmul: (AB)_ij = sum_k A_ik B_kj, vectorized over ell *)\n    let mm a b =\n      Array.init 9 (fun idx ->\n          let i = idx / 3 and j = idx mod 3 in\n          Nx.add\n            (Nx.add (Nx.mul a.(i * 3) b.(j)) (Nx.mul a.((i * 3) + 1) b.(3 + j)))\n            (Nx.mul a.((i * 3) + 2) b.(6 + j)))\n    in\n\n    (* Tr[AB] = sum_ij A_ij B_ji, returns [n_ell] tensor *)\n    let tr a b =\n      let t = ref (Nx.mul a.(0) b.(0)) in\n      for i = 0 to 2 do\n        for j = 0 to 2 do\n          if i > 0 || j > 0 then\n            t := Nx.add !t (Nx.mul a.((i * 3) + j) b.((j * 3) + i))\n        done\n      done;\n      !t\n    in\n\n    (* D1 = C^-1 dC/d(Omega_m), D2 = C^-1 dC/d(sigma8) *)\n    let d1 = mm ci dc_om in\n    let d2 = mm ci dc_s8 in\n\n    (* F_ij = f_sky * sum_ell w_ell * Tr[D_i D_j] *)\n    let f11 = Nx.mul f_sky (Nx.sum (Nx.mul w_ell (tr d1 d1))) in\n    let f12 = Nx.mul f_sky (Nx.sum (Nx.mul w_ell (tr d1 d2))) in\n    let f22 = Nx.mul f_sky (Nx.sum (Nx.mul w_ell (tr d2 d2))) in\n    sigma_s8_from_fisher f11 f12 f22\n  in\n\n  let params = Nx.create f64 [| 3 |] [| -1.1; 0.5; 1.0 |] in\n  Printf.printf \"Computing initial sigma(S8)...\\n\";\n  let v0 = item [] (objective params) in\n  let f_sky_0 = 1.0 /. (1.0 +. 
Float.exp 1.1) in\n  Printf.printf\n    \"Initial: f_sky=%.3f  bins=[0, 0.50, 1.00, 3.0]  sigma(S8)=%.6f\\n\\n\" f_sky_0\n    v0;\n\n  let algo = Vega.adam (Vega.Schedule.constant 0.03) in\n  let params = ref params in\n  let state = ref (Vega.init algo !params) in\n  let best_sigma = ref v0 in\n  let best_params = ref !params in\n  Printf.printf \"%5s  %8s  %8s  %8s  %10s\\n\" \"step\" \"f_sky\" \"z1\" \"z2\"\n    \"sigma(S8)\";\n  Printf.printf \"%5s  %8s  %8s  %8s  %10s\\n\" \"-----\" \"--------\" \"--------\"\n    \"--------\" \"----------\";\n  let steps = 500 in\n  for i = 0 to steps - 1 do\n    let sigma_val, grad = Rune.value_and_grad objective !params in\n    let p, s = Vega.step !state ~grad ~param:!params in\n    let z1 = Float.max 0.1 (Float.min 2.8 (item [ 1 ] p)) in\n    let z2 = Float.max (z1 +. 0.1) (Float.min 2.9 (item [ 2 ] p)) in\n    params := Nx.create f64 [| 3 |] [| item [ 0 ] p; z1; z2 |];\n    state := s;\n    let sigma_cur = item [] sigma_val in\n    if sigma_cur < !best_sigma then begin\n      best_sigma := sigma_cur;\n      best_params := !params\n    end;\n    if i mod 50 = 0 || i = steps - 1 then begin\n      let f_sky = 1.0 /. (1.0 +. Float.exp (-.item [ 0 ] !params)) in\n      Printf.printf \"%5d  %8.4f  %8.3f  %8.3f  %10.6f\\n\" i f_sky\n        (item [ 1 ] !params) (item [ 2 ] !params) sigma_cur\n    end\n  done;\n  let f_sky_opt = 1.0 /. (1.0 +. Float.exp (-.item [ 0 ] !best_params)) in\n  Printf.printf\n    \"\\nGrad optimal: f_sky=%.4f  bins=[0, %.2f, %.2f, 3.0]  sigma(S8)=%.6f\\n\"\n    f_sky_opt (item [ 1 ] !best_params) (item [ 2 ] !best_params) !best_sigma;\n\n  (* Grid search validation *)\n  let grid_best_sigma = ref infinity in\n  let grid_best_fs = ref 0.0 in\n  let grid_best_z1 = ref 0.0 in\n  let grid_best_z2 = ref 0.0 in\n  let n_fs = 12 and n_z1 = 15 and n_z2 = 15 in\n  let n_grid_evals = ref 0 in\n  Printf.printf \"\\nGrid search (%d*%d*%d)...\\n%!\" n_fs n_z1 n_z2;\n  for fi = 0 to n_fs - 1 do\n    let fs = 0.1 +. 
(Float.of_int fi *. 0.88 /. Float.of_int (n_fs - 1)) in\n    let log_fs = Float.log (fs /. (1.0 -. fs)) in\n    for z1i = 0 to n_z1 - 1 do\n      let z1_v = 0.2 +. (Float.of_int z1i *. 2.4 /. Float.of_int (n_z1 - 1)) in\n      for z2i = 0 to n_z2 - 1 do\n        let z2_v =\n          z1_v +. 0.15\n          +. (Float.of_int z2i *. (2.7 -. z1_v) /. Float.of_int (n_z2 - 1))\n        in\n        if z2_v > z1_v +. 0.1 && z2_v < 2.9 then begin\n          incr n_grid_evals;\n          let p = Nx.create f64 [| 3 |] [| log_fs; z1_v; z2_v |] in\n          let s = item [] (objective p) in\n          if s < !grid_best_sigma then begin\n            grid_best_sigma := s;\n            grid_best_fs := fs;\n            grid_best_z1 := z1_v;\n            grid_best_z2 := z2_v\n          end\n        end\n      done\n    done\n  done;\n  Printf.printf\n    \"Grid optimal: f_sky=%.4f  bins=[0, %.2f, %.2f, 3.0]  sigma(S8)=%.6f  (%d \\\n     evals)\\n\"\n    !grid_best_fs !grid_best_z1 !grid_best_z2 !grid_best_sigma !n_grid_evals;\n  Printf.printf \"\\nComparison:\\n\";\n  Printf.printf \"  Gradient:  sigma(S8)=%.6f  (%d evals)\\n\" !best_sigma steps;\n  Printf.printf \"  Grid:      sigma(S8)=%.6f  (%d evals)\\n\" !grid_best_sigma\n    !n_grid_evals;\n  let rel = (1.0 -. (!best_sigma /. !grid_best_sigma)) *. 100.0 in\n  if rel >= 0.0 then\n    Printf.printf \"  Gradient %.1f%% better with %.0f* fewer evaluations\\n\" rel\n      (Float.of_int !n_grid_evals /. Float.of_int steps)\n  else\n    Printf.printf\n      \"  Gradient within %.1f%% of grid with %.0f* fewer evaluations\\n\"\n      (Float.abs rel)\n      (Float.of_int !n_grid_evals /. Float.of_int steps)\n\nlet () =\n  Printf.printf \"=== Differentiable Survey Optimization ===\\n\";\n  Printf.printf \"Stage IV Weak Lensing Survey\\n\\n\";\n  part1 ();\n  part2 ()\n"
  },
  {
    "path": "dev/umbra/examples/README.md",
    "content": "# Umbra Examples\n\nLearn Umbra through progressively complex examples. Start with\n`01-constants-and-units` and work through the numbered examples in order.\n\n## Examples\n\n| Example | Concept | Key Functions |\n|---------|---------|---------------|\n| [`01-constants-and-units`](./01-constants-and-units/) | Type-safe physical quantities, conversions, constants | `Unit.Length.of_m`, `Const.c`, `Unit.Angle.deg` |\n| [`02-cosmological-distances`](./02-cosmological-distances/) | LCDM distances, SN Ia fitting | `Cosmo.luminosity_distance`, `Cosmo.distance_modulus` |\n| [`03-blackbody-fitting`](./03-blackbody-fitting/) | Fit stellar temperature from photometry | `Spectrum.blackbody`, `Photometry.ab_mag` |\n| [`04-extinction-and-magnitudes`](./04-extinction-and-magnitudes/) | Dust extinction, magnitude systems, K-corrections | `Extinction.ccm89`, `Photometry.vega_mag`, `Photometry.color` |\n| [`05-sed-fitting`](./05-sed-fitting/) | Full SED pipeline: blackbody, extinction, photometry | `Spectrum.blackbody`, `Extinction.apply`, `Photometry.ab_mag` |\n| [`06-coordinates-and-time`](./06-coordinates-and-time/) | Frame transforms, time scales, observer geometry | `Coord.galactic_of_icrs`, `Time.of_iso`, `Altaz.airmass` |\n| [`07-batch-photometry`](./07-batch-photometry/) | Batched operations over temperature and extinction grids | `Spectrum.blackbody`, `Extinction.apply`, `Photometry.ab_mag` |\n| [`08-photometric-redshifts`](./08-photometric-redshifts/) | Two-stage photo-z: grid search + gradient refinement | `Spectrum.redshift`, `Photometry.ab_mag`, `Rune.value_and_grad` |\n| [`09-gravitational-lensing`](./09-gravitational-lensing/) | Point-mass lens model parameter fitting | `Rune.value_and_grad`, `Vega.adam` |\n| [`10-uncertainty-propagation`](./10-uncertainty-propagation/) | AD Jacobians for error propagation vs Monte Carlo | `Rune.jacfwd`, `Cosmo.distance_modulus` |\n| [`11-bayesian-sed`](./11-bayesian-sed/) | Fisher matrix + HMC posterior sampling | 
`Rune.jacrev`, `Norn.hmc` |\n| [`12-survey-optimization`](./12-survey-optimization/) | Differentiable Fisher forecasting for survey design | `Survey.angular_cl`, `Cosmo.linear_power` |\n\n## Running Examples\n\nAll examples can be run with:\n\n```bash\ncd dev/umbra\ndune exec --root . examples/<name>/main.exe\n```\n\nFor example:\n\n```bash\ncd dev/umbra\ndune exec --root . examples/01-constants-and-units/main.exe\n```\n\n## Quick Reference\n\n### Cosmological Distances\n\n```ocaml\nopen Umbra\n\nlet cosmo = Cosmo.planck18 in\nlet z = Nx.scalar Nx.float64 0.5 in\nlet dl = Cosmo.luminosity_distance cosmo z\n```\n\n### Synthetic Photometry\n\n```ocaml\nlet sed =\n  Spectrum.blackbody\n    ~temperature:(Unit.Temperature.of_kelvin (Nx.scalar Nx.float64 5800.0))\n    ~wavelength:wave\n  |> Extinction.apply (Extinction.ccm89 ~rv) ~av\n  |> Spectrum.as_flux_density\nin\nlet mag = Photometry.ab_mag (Filters.sdss_r ()) sed\n```\n\n### Coordinate Transforms\n\n```ocaml\nlet ra = Unit.Angle.deg 83.633 in\nlet dec = Unit.Angle.deg (-5.550) in\nlet l, b = Coord.galactic_of_icrs ra dec\n```\n
  },
  {
    "path": "dev/umbra/lib/altaz.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nlet pi = Float.pi\n\ntype observer = { lat : float; lon : float; height : float }\n\nlet make_observer ~lat ~lon ?(height = Unit.Length.m 0.0) () =\n  let lat = Nx.item [] (Unit.Angle.to_tensor lat) in\n  let lon = Nx.item [] (Unit.Angle.to_tensor lon) in\n  let height = Nx.item [] (Unit.Length.to_tensor height) in\n  { lat; lon; height }\n\nlet observer_height obs =\n  Unit.Length.of_tensor (Nx.scalar Nx.float64 obs.height)\n\ntype t = { az : Nx.float64_t; alt : Nx.float64_t }\n\nlet alt t = Unit.Angle.of_tensor t.alt\nlet az t = Unit.Angle.of_tensor t.az\n\n(* Earth Rotation Angle from UT1 Julian Date. ERA = 2π(0.7790572732640 +\n   1.00273781191135448 * Du) where Du = JD_UT1 - 2451545.0 *)\nlet era jd_ut1 =\n  let du = jd_ut1 -. 2_451_545.0 in\n  let theta =\n    2.0 *. pi *. (0.779_057_273_264_0 +. (1.002_737_811_911_354_48 *. du))\n  in\n  Float.rem theta (2.0 *. pi)\n\n(* IAU 2006 precession angles (Capitaine et al. 2003). T = Julian centuries from\n   J2000.0 TT. Returns (zeta_A, z_A, theta_A) in radians. *)\nlet precession_angles t_cy =\n  let arcsec_to_rad x = x *. pi /. 648_000.0 in\n  let t2 = t_cy *. t_cy in\n  let t3 = t2 *. t_cy in\n  (* zeta_A = 2.5976176'' + 2306.0809506''T + 1.0109032''T² + 0.0182337''T³ *)\n  let zeta_a =\n    arcsec_to_rad\n      (2.597_617_6 +. (2306.080_950_6 *. t_cy) +. (1.010_903_2 *. t2)\n     +. (0.018_233_7 *. t3))\n  in\n  (* z_A = -2.5976176'' + 2306.0803226''T + 1.0947790''T² + 0.0182273''T³ *)\n  let z_a =\n    arcsec_to_rad\n      (~-.2.597_617_6 +. (2306.080_322_6 *. t_cy) +. (1.094_779_0 *. t2)\n     +. (0.018_227_3 *. 
t3))\n  in\n  (* theta_A = 2004.1917476''T - 0.4269353''T² - 0.0418251''T³ *)\n  let theta_a =\n    arcsec_to_rad\n      ((2004.191_747_6 *. t_cy) -. (0.426_935_3 *. t2) -. (0.041_825_1 *. t3))\n  in\n  (zeta_a, z_a, theta_a)\n\n(* Apply IAU 2006 precession matrix to ICRS (RA, Dec) → mean (RA, Dec) of date.\n   R = Rz(-z_A) · Ry(theta_A) · Rz(-zeta_A) *)\nlet precess_to_date ra dec t_cy =\n  let zeta_a, z_a, theta_a = precession_angles t_cy in\n  let sz = Float.sin zeta_a and cz = Float.cos zeta_a in\n  let sa = Float.sin z_a and ca = Float.cos z_a in\n  let st = Float.sin theta_a and ct = Float.cos theta_a in\n  (* Rotation matrix elements *)\n  let r11 = (ca *. ct *. cz) -. (sa *. sz) in\n  let r12 = ~-.((ca *. ct *. sz) +. (sa *. cz)) in\n  let r13 = ~-.(ca *. st) in\n  let r21 = (sa *. ct *. cz) +. (ca *. sz) in\n  let r22 = ~-.((sa *. ct *. sz) -. (ca *. cz)) in\n  let r23 = ~-.(sa *. st) in\n  let r31 = st *. cz in\n  let r32 = ~-.(st *. sz) in\n  let r33 = ct in\n  let n = Nx.numel ra in\n  let ra_out = Nx.zeros Nx.float64 [| n |] in\n  let dec_out = Nx.zeros Nx.float64 [| n |] in\n  for i = 0 to n - 1 do\n    let r = Nx.item [ i ] ra in\n    let d = Nx.item [ i ] dec in\n    let cd = Float.cos d in\n    let x = cd *. Float.cos r in\n    let y = cd *. Float.sin r in\n    let z = Float.sin d in\n    let x' = (r11 *. x) +. (r12 *. y) +. (r13 *. z) in\n    let y' = (r21 *. x) +. (r22 *. y) +. (r23 *. z) in\n    let z' = (r31 *. x) +. (r32 *. y) +. (r33 *. z) in\n    Nx.set_item [ i ] (Float.atan2 y' x') ra_out;\n    Nx.set_item [ i ] (Float.asin (Float.max ~-.1.0 (Float.min 1.0 z'))) dec_out\n  done;\n  (ra_out, dec_out)\n\nlet airmass hz =\n  let n = Nx.numel hz.alt in\n  let out = Nx.zeros Nx.float64 [| n |] in\n  let to_deg = 180.0 /. pi in\n  for i = 0 to n - 1 do\n    let alt_deg = Nx.item [ i ] hz.alt *. to_deg in\n    (* Pickering (2002): X = 1 / sin(h + 244/(165 + 47h^1.1)) where h in deg *)\n    let arg =\n      alt_deg\n      +. (244.0 /. (165.0 +. 
(47.0 *. Float.pow (Float.abs alt_deg) 1.1)))\n    in\n    let x = 1.0 /. Float.sin (arg *. pi /. 180.0) in\n    Nx.set_item [ i ] (Float.max 1.0 x) out\n  done;\n  out\n\n(* Bennett (1982) atmospheric refraction for geometric altitude. R (arcmin) =\n   cot(h + 7.31/(h + 4.4)) where h in degrees. Returns refraction in radians.\n   Clamps to 0 below -1°. *)\nlet refraction_correction alt_rad =\n  let h = alt_rad *. 180.0 /. pi in\n  if h < -1.0 then 0.0\n  else\n    let arg = (h +. (7.31 /. (h +. 4.4))) *. pi /. 180.0 in\n    let r_arcmin = 1.0 /. Float.tan arg in\n    r_arcmin *. pi /. (180.0 *. 60.0)\n\nlet refraction hz =\n  let n = Nx.numel hz.alt in\n  let out = Nx.zeros Nx.float64 [| n |] in\n  for i = 0 to n - 1 do\n    Nx.set_item [ i ] (refraction_correction (Nx.item [ i ] hz.alt)) out\n  done;\n  Unit.Angle.of_tensor out\n\nlet of_coord ?(refraction = false) ~obstime ~observer c =\n  let icrs = Coord.icrs c in\n  let ra_rad = Unit.Angle.to_tensor (Coord.lon icrs) in\n  let dec_rad = Unit.Angle.to_tensor (Coord.lat icrs) in\n  (* Convert UTC → UT1 (ignoring DUT1 < 1s) then to TT for precession *)\n  let jd_utc = Time.to_jd obstime in\n  let jd_ut1 = jd_utc in\n  let jd_tt = Time.to_jd (Time.tai_to_tt (Time.utc_to_tai obstime)) in\n  let t_cy = (jd_tt -. 2_451_545.0) /. 36_525.0 in\n  (* Precess ICRS to mean RA/Dec of date *)\n  let ra_date, dec_date = precess_to_date ra_rad dec_rad t_cy in\n  (* Hour angle: HA = ERA + observer_lon - RA_date *)\n  let era_val = era jd_ut1 in\n  let n = Nx.numel ra_rad in\n  let alt_out = Nx.zeros Nx.float64 [| n |] in\n  let az_out = Nx.zeros Nx.float64 [| n |] in\n  let slat = Float.sin observer.lat and clat = Float.cos observer.lat in\n  for i = 0 to n - 1 do\n    let ha = era_val +. observer.lon -. 
Nx.item [ i ] ra_date in\n    let dec = Nx.item [ i ] dec_date in\n    let sdec = Float.sin dec and cdec = Float.cos dec in\n    let sha = Float.sin ha and cha = Float.cos ha in\n    (* alt = asin(sin(lat)sin(dec) + cos(lat)cos(dec)cos(ha)) *)\n    let sin_alt = (slat *. sdec) +. (clat *. cdec *. cha) in\n    let alt = Float.asin (Float.max ~-.1.0 (Float.min 1.0 sin_alt)) in\n    (* az = atan2(-cos(dec)sin(ha), cos(lat)sin(dec) -\n       sin(lat)cos(dec)cos(ha)) *)\n    let num = ~-.(cdec *. sha) in\n    let den = (clat *. sdec) -. (slat *. cdec *. cha) in\n    let az = Float.atan2 num den in\n    let az = if az < 0.0 then az +. (2.0 *. pi) else az in\n    let alt = if refraction then alt +. refraction_correction alt else alt in\n    Nx.set_item [ i ] alt alt_out;\n    Nx.set_item [ i ] az az_out\n  done;\n  { alt = alt_out; az = az_out }\n\nlet to_coord ~obstime ~observer t =\n  let jd_utc = Time.to_jd obstime in\n  let jd_ut1 = jd_utc in\n  let jd_tt = Time.to_jd (Time.tai_to_tt (Time.utc_to_tai obstime)) in\n  let t_cy = (jd_tt -. 2_451_545.0) /. 36_525.0 in\n  let era_val = era jd_ut1 in\n  let slat = Float.sin observer.lat and clat = Float.cos observer.lat in\n  let zeta_a, z_a, theta_a = precession_angles t_cy in\n  (* Inverse precession matrix = transpose of forward *)\n  let sz = Float.sin zeta_a and cz = Float.cos zeta_a in\n  let sa = Float.sin z_a and ca = Float.cos z_a in\n  let st = Float.sin theta_a and ct = Float.cos theta_a in\n  let r11 = (ca *. ct *. cz) -. (sa *. sz) in\n  let r12 = ~-.((ca *. ct *. sz) +. (sa *. cz)) in\n  let r13 = ~-.(ca *. st) in\n  let r21 = (sa *. ct *. cz) +. (ca *. sz) in\n  let r22 = ~-.((sa *. ct *. sz) -. (ca *. cz)) in\n  let r23 = ~-.(sa *. st) in\n  let r31 = st *. cz in\n  let r32 = ~-.(st *. 
sz) in\n  let r33 = ct in\n  (* Transpose for inverse *)\n  let ri11 = r11 and ri12 = r21 and ri13 = r31 in\n  let ri21 = r12 and ri22 = r22 and ri23 = r32 in\n  let ri31 = r13 and ri32 = r23 and ri33 = r33 in\n  let n = Nx.numel t.alt in\n  let ra_out = Nx.zeros Nx.float64 [| n |] in\n  let dec_out = Nx.zeros Nx.float64 [| n |] in\n  for i = 0 to n - 1 do\n    let alt = Nx.item [ i ] t.alt in\n    let az = Nx.item [ i ] t.az in\n    let salt = Float.sin alt and calt = Float.cos alt in\n    let saz = Float.sin az and caz = Float.cos az in\n    (* (Alt, Az) → (HA, Dec) *)\n    let sin_dec = (slat *. salt) +. (clat *. calt *. caz) in\n    let dec = Float.asin (Float.max ~-.1.0 (Float.min 1.0 sin_dec)) in\n    let num = ~-.(calt *. saz) in\n    let den = (clat *. salt) -. (slat *. calt *. caz) in\n    let ha = Float.atan2 num den in\n    (* RA_date = ERA + observer_lon - HA *)\n    let ra_date = era_val +. observer.lon -. ha in\n    (* Deprecess: mean of date → ICRS *)\n    let cd = Float.cos dec in\n    let x = cd *. Float.cos ra_date in\n    let y = cd *. Float.sin ra_date in\n    let z = Float.sin dec in\n    let x' = (ri11 *. x) +. (ri12 *. y) +. (ri13 *. z) in\n    let y' = (ri21 *. x) +. (ri22 *. y) +. (ri23 *. z) in\n    let z' = (ri31 *. x) +. (ri32 *. y) +. (ri33 *. z) in\n    let ra = Float.atan2 y' x' in\n    let ra = if ra < 0.0 then ra +. (2.0 *. pi) else ra in\n    let dec = Float.asin (Float.max ~-.1.0 (Float.min 1.0 z')) in\n    Nx.set_item [ i ] ra ra_out;\n    Nx.set_item [ i ] dec dec_out\n  done;\n  Coord.of_radec\n    ~ra:(Unit.Angle.of_tensor ra_out)\n    ~dec:(Unit.Angle.of_tensor dec_out)\n"
  },
  {
    "path": "dev/umbra/lib/altaz.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Altitude-azimuth (horizontal) coordinates.\n\n    Converts celestial coordinates to local horizon coordinates for a given\n    observer location and time. Uses IAU 2006 precession (Capitaine et al. 2003)\n    and the Earth Rotation Angle.\n\n    {b Warning.} Nutation and polar motion are omitted. Atmospheric refraction\n    can be applied via {!refraction} or the [~refraction] parameter of\n    {!of_coord}. Accuracy is ~1 arcminute for dates within a few centuries of\n    J2000.0.\n\n    {[\n      let obs = Altaz.make_observer ~lat:(Unit.Angle.deg 28.7624) ~lon:(Unit.Angle.deg (-17.8792)) () in\n      let t = Time.of_iso \"2024-06-21T22:00:00\" in\n      let vega =\n        Coord.of_radec\n          ~ra:(Unit.Angle.deg 279.2347)\n          ~dec:(Unit.Angle.deg 38.7837)\n      in\n      let hz = Altaz.of_coord ~obstime:t ~observer:obs vega in\n      let alt_deg = Nx.item [] (Unit.Angle.in_deg (Altaz.alt hz))\n    ]} *)\n\n(** {1:observer Observer} *)\n\ntype observer\n(** The type for a ground-based observer location. *)\n\nval make_observer :\n  lat:Unit.angle Unit.t ->\n  lon:Unit.angle Unit.t ->\n  ?height:Unit.length Unit.t ->\n  unit ->\n  observer\n(** [make_observer ~lat ~lon ?height ()] is an observer at geodetic latitude\n    [lat], longitude [lon], and elevation [height] above the reference\n    ellipsoid. [lon] is positive East. [height] defaults to sea level.\n\n    [height] is stored for forward compatibility but does not yet affect\n    coordinate transforms. *)\n\nval observer_height : observer -> Unit.length Unit.t\n(** [observer_height obs] is the observer's elevation above the reference\n    ellipsoid. 
*)\n\n(** {1:coords Horizontal coordinates} *)\n\ntype t\n(** The type for altitude-azimuth coordinates. Azimuth is measured from North\n    through East. *)\n\nval alt : t -> Unit.angle Unit.t\n(** [alt t] is the altitude (elevation above the horizon). *)\n\nval az : t -> Unit.angle Unit.t\n(** [az t] is the azimuth (North = 0, East = 90 deg). *)\n\n(** {1:derived Derived quantities} *)\n\nval airmass : t -> Nx.float64_t\n(** [airmass hz] is the airmass at the altitude of [hz] using the Pickering\n    (2002) formula. Well-behaved from zenith to horizon. Not differentiable\n    (operates on float-level altitude values). *)\n\n(** {1:refraction Atmospheric refraction} *)\n\nval refraction : t -> Unit.angle Unit.t\n(** [refraction hz] is the atmospheric refraction correction at the geometric\n    altitude of [hz], using the Bennett (1982) formula. The correction is\n    positive (objects appear higher than their geometric position). Returns zero\n    for altitudes below -1°.\n\n    Not differentiable (scalar-level trigonometry). *)\n\n(** {1:converting Converting} *)\n\nval of_coord :\n  ?refraction:bool ->\n  obstime:Time.utc Time.t ->\n  observer:observer ->\n  Coord.t ->\n  t\n(** [of_coord ~obstime ~observer c] converts celestial coordinates [c] to\n    altitude-azimuth for [observer] at [obstime]. Applies IAU 2006 precession to\n    move from ICRS to the mean equator of date.\n\n    When [refraction] is [true], the Bennett (1982) atmospheric refraction\n    correction is applied to the computed altitude. [refraction] defaults to\n    [false].\n\n    Not differentiable (scalar-level rotation matrices). *)\n\nval to_coord : obstime:Time.utc Time.t -> observer:observer -> t -> Coord.t\n(** [to_coord ~obstime ~observer t] converts altitude-azimuth coordinates [t]\n    back to ICRS celestial coordinates for [observer] at [obstime]. Not\n    differentiable (scalar-level rotation matrices). *)\n"
  },
  {
    "path": "dev/umbra/lib/const.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Fundamental constants (CODATA 2022) *)\n\nlet c = Unit.Velocity.m_s 299_792_458.0\nlet m_e = Unit.Mass.kg 9.109_383_713_9e-31\nlet m_p = Unit.Mass.kg 1.672_621_925_95e-27\nlet m_n = Unit.Mass.kg 1.674_927_500_56e-27\nlet u = Unit.Mass.kg 1.660_539_068_92e-27\n\n(* Astronomical constants (IAU 2015) *)\n\nlet au = Unit.Length.au 1.0\nlet pc = Unit.Length.pc 1.0\nlet solar_mass = Unit.Mass.solar_mass 1.0\nlet solar_radius = Unit.Length.solar_radius 1.0\nlet solar_luminosity = Unit.Power.solar_luminosity 1.0\nlet earth_mass = Unit.Mass.earth_mass 1.0\nlet earth_radius = Unit.Length.earth_radius 1.0\nlet jupiter_mass = Unit.Mass.jupiter_mass 1.0\nlet jupiter_radius = Unit.Length.jupiter_radius 1.0\n\n(* Raw SI floats for compound dimensions (CODATA 2022) *)\n\nlet h_si = 6.626_070_15e-34\nlet hbar_si = 1.054_571_817e-34\nlet g_si = 6.674_30e-11\nlet k_b_si = 1.380_649e-23\nlet sigma_sb_si = 5.670_374_419e-8\nlet n_a = 6.022_140_76e23\nlet sigma_t_si = 6.652_458_705_1e-29\nlet b_wien_si = 2.897_771_955e-3\nlet alpha = 7.297_352_5643e-3\nlet a_0 = Unit.Length.m 5.291_772_105_44e-11\nlet gm_sun_si = 1.327_124_4e20\nlet gm_earth_si = 3.986_004e14\nlet gm_jup_si = 1.266_865_3e17\nlet l_bol0 = Unit.Power.w 3.0128e28\n"
  },
  {
    "path": "dev/umbra/lib/const.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Physical and astronomical constants.\n\n    Typed constants use {!Unit.t} with the appropriate phantom dimension. Raw SI\n    floats are provided for compound dimensions that do not map to a single\n    {!Unit} dimension type.\n\n    Fundamental constants follow\n    {{:https://physics.nist.gov/cuu/Constants/}CODATA 2022}. Astronomical\n    constants follow IAU 2015. *)\n\n(** {1:fundamental Fundamental constants} *)\n\nval c : Unit.velocity Unit.t\n(** [c] is the speed of light in vacuum (299 792 458 m/s, exact). *)\n\n(** {1:particle Particle masses} *)\n\nval m_e : Unit.mass Unit.t\n(** [m_e] is the electron mass (9.109 383 7139e-31 kg). *)\n\nval m_p : Unit.mass Unit.t\n(** [m_p] is the proton mass (1.672 621 925 95e-27 kg). *)\n\nval m_n : Unit.mass Unit.t\n(** [m_n] is the neutron mass (1.674 927 500 56e-27 kg). *)\n\nval u : Unit.mass Unit.t\n(** [u] is the atomic mass unit (1.660 539 068 92e-27 kg). *)\n\n(** {1:astro Astronomical constants} *)\n\nval au : Unit.length Unit.t\n(** [au] is one astronomical unit. *)\n\nval pc : Unit.length Unit.t\n(** [pc] is one parsec. *)\n\nval solar_mass : Unit.mass Unit.t\n(** [solar_mass] is one solar mass. *)\n\nval solar_radius : Unit.length Unit.t\n(** [solar_radius] is one solar radius. *)\n\nval solar_luminosity : Unit.power Unit.t\n(** [solar_luminosity] is one solar luminosity. *)\n\nval earth_mass : Unit.mass Unit.t\n(** [earth_mass] is one Earth mass. *)\n\nval earth_radius : Unit.length Unit.t\n(** [earth_radius] is one Earth radius. *)\n\nval jupiter_mass : Unit.mass Unit.t\n(** [jupiter_mass] is one Jupiter mass. *)\n\nval jupiter_radius : Unit.length Unit.t\n(** [jupiter_radius] is one Jupiter radius. 
*)\n\n(** {1:si Raw SI constants}\n\n    Constants with compound dimensions that do not map to a single {!Unit}\n    dimension type. CODATA 2022 values. *)\n\nval h_si : float\n(** [h_si] is the Planck constant (6.626 070 15e-34 J s, exact). *)\n\nval hbar_si : float\n(** [hbar_si] is the reduced Planck constant (1.054 571 817e-34 J s). *)\n\nval g_si : float\n(** [g_si] is the gravitational constant (6.674 30e-11 m{^ 3} kg{^ -1} s{^ -2}).\n*)\n\nval k_b_si : float\n(** [k_b_si] is the Boltzmann constant (1.380 649e-23 J K{^ -1}, exact). *)\n\nval sigma_sb_si : float\n(** [sigma_sb_si] is the Stefan-Boltzmann constant (5.670 374 419e-8 W m{^ -2}\n    K{^ -4}). *)\n\nval n_a : float\n(** [n_a] is the Avogadro constant (6.022 140 76e23 mol{^ -1}, exact). *)\n\nval sigma_t_si : float\n(** [sigma_t_si] is the Thomson scattering cross-section (6.652 458 705 1e-29\n    m{^ 2}). *)\n\nval b_wien_si : float\n(** [b_wien_si] is the Wien displacement law constant (2.897 771 955e-3 m K). *)\n\nval alpha : float\n(** [alpha] is the fine-structure constant (7.297 352 5643e-3). *)\n\nval a_0 : Unit.length Unit.t\n(** [a_0] is the Bohr radius (5.291 772 105 44e-11 m). *)\n\nval gm_sun_si : float\n(** [gm_sun_si] is the solar mass parameter (1.327 124 4e20 m{^ 3} s{^ -2}).\n    More precise than [g_si * solar_mass] for orbital mechanics. *)\n\nval gm_earth_si : float\n(** [gm_earth_si] is the Earth mass parameter (3.986 004e14 m{^ 3} s{^ -2}). *)\n\nval gm_jup_si : float\n(** [gm_jup_si] is the Jupiter mass parameter (1.266 865 3e17 m{^ 3} s{^ -2}).\n*)\n\nval l_bol0 : Unit.power Unit.t\n(** [l_bol0] is the IAU 2015 zero-point bolometric luminosity (3.0128e28 W). *)\n"
  },
  {
    "path": "dev/umbra/lib/coord.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nlet pi = Float.pi\nlet deg_to_rad = pi /. 180.0\nlet two_pi = Nx.scalar Nx.float64 (2.0 *. pi)\n\ntype frame = ICRS | Galactic | Ecliptic_j2000 | Supergalactic\n\n(* Internally stores lon/lat in radians *)\ntype t = { frame : frame; lon : Nx.float64_t; lat : Nx.float64_t }\n\n(* IAU rotation matrices *)\n\nlet ra_gp = 192.85948 *. deg_to_rad\nlet dec_gp = 27.12825 *. deg_to_rad\nlet l_ncp = 122.93192 *. deg_to_rad\n\nlet icrs_to_gal =\n  let sd = Float.sin dec_gp and cd = Float.cos dec_gp in\n  let sa = Float.sin ra_gp and ca = Float.cos ra_gp in\n  let sl = Float.sin l_ncp and cl = Float.cos l_ncp in\n  [|\n    [|\n      (~-.sl *. sa) -. (cl *. ca *. sd);\n      (sl *. ca) -. (cl *. sa *. sd);\n      cl *. cd;\n    |];\n    [|\n      (cl *. sa) -. (sl *. ca *. sd);\n      (~-.cl *. ca) -. (sl *. sa *. sd);\n      sl *. cd;\n    |];\n    [| ca *. cd; sa *. cd; sd |];\n  |]\n\nlet transpose_3x3 m =\n  [|\n    [| m.(0).(0); m.(1).(0); m.(2).(0) |];\n    [| m.(0).(1); m.(1).(1); m.(2).(1) |];\n    [| m.(0).(2); m.(1).(2); m.(2).(2) |];\n  |]\n\nlet gal_to_icrs = transpose_3x3 icrs_to_gal\n\n(* Fixed J2000.0 mean obliquity: 23.4392911 degrees *)\nlet obliquity = 23.4392911 *. deg_to_rad\n\nlet icrs_to_ecl =\n  let se = Float.sin obliquity and ce = Float.cos obliquity in\n  [| [| 1.0; 0.0; 0.0 |]; [| 0.0; ce; se |]; [| 0.0; ~-.se; ce |] |]\n\nlet ecl_to_icrs = transpose_3x3 icrs_to_ecl\n\n(* Supergalactic: defined relative to Galactic. SGP at (l=47.37, b=6.32), SGL\n   origin at l=137.37 *)\nlet sgl_l0 = 137.37 *. deg_to_rad\nlet sgp_l = 47.37 *. deg_to_rad\nlet sgp_b = 6.32 *. 
deg_to_rad\n\nlet gal_to_sgal =\n  let sb = Float.sin sgp_b and cb = Float.cos sgp_b in\n  let sl = Float.sin sgp_l and cl = Float.cos sgp_l in\n  let sl0 = Float.sin sgl_l0 and cl0 = Float.cos sgl_l0 in\n  let r00 = (~-.sl0 *. sl) -. (cl0 *. cl *. sb) in\n  let r01 = (sl0 *. cl) -. (cl0 *. sl *. sb) in\n  let r02 = cl0 *. cb in\n  let r10 = (cl0 *. sl) -. (sl0 *. cl *. sb) in\n  let r11 = (~-.cl0 *. cl) -. (sl0 *. sl *. sb) in\n  let r12 = sl0 *. cb in\n  let r20 = cl *. cb in\n  let r21 = sl *. cb in\n  let r22 = sb in\n  [| [| r00; r01; r02 |]; [| r10; r11; r12 |]; [| r20; r21; r22 |] |]\n\nlet sgal_to_gal = transpose_3x3 gal_to_sgal\n\nlet rotate mat lon_rad lat_rad =\n  let cl = Nx.cos lat_rad and sl = Nx.sin lat_rad in\n  let ca = Nx.cos lon_rad and sa = Nx.sin lon_rad in\n  let x = Nx.mul cl ca and y = Nx.mul cl sa in\n  let x' =\n    Nx.add\n      (Nx.add (Nx.mul_s x mat.(0).(0)) (Nx.mul_s y mat.(0).(1)))\n      (Nx.mul_s sl mat.(0).(2))\n  in\n  let y' =\n    Nx.add\n      (Nx.add (Nx.mul_s x mat.(1).(0)) (Nx.mul_s y mat.(1).(1)))\n      (Nx.mul_s sl mat.(1).(2))\n  in\n  let z' =\n    Nx.add\n      (Nx.add (Nx.mul_s x mat.(2).(0)) (Nx.mul_s y mat.(2).(1)))\n      (Nx.mul_s sl mat.(2).(2))\n  in\n  let z_clamped = Nx.clamp ~min:(-1.0) ~max:1.0 z' in\n  let lat' = Nx.asin z_clamped in\n  let lon' = Nx.atan2 y' x' in\n  let mask = Nx.less_s lon' 0.0 in\n  let lon' = Nx.where mask (Nx.add lon' two_pi) lon' in\n  (lon', lat')\n\nlet ensure_1d t = if Nx.ndim t = 0 then Nx.reshape [| 1 |] t else t\n\nlet make frame ~lon ~lat =\n  let lon_rad = ensure_1d (Unit.Angle.to_tensor lon) in\n  let lat_rad = ensure_1d (Unit.Angle.to_tensor lat) in\n  if Nx.ndim lon_rad <> 1 || Nx.ndim lat_rad <> 1 then\n    invalid_arg \"Coord: lon and lat must be scalar or 1-D tensors\";\n  if Nx.numel lon_rad <> Nx.numel lat_rad then\n    invalid_arg \"Coord: lon and lat must have the same length\";\n  { frame; lon = lon_rad; lat = lat_rad }\n\nlet of_radec ~ra ~dec = make ICRS ~lon:ra 
~lat:dec\nlet of_galactic ~l ~b = make Galactic ~lon:l ~lat:b\nlet of_ecliptic_j2000 ~lon ~lat = make Ecliptic_j2000 ~lon ~lat\nlet of_supergalactic ~sgl ~sgb = make Supergalactic ~lon:sgl ~lat:sgb\nlet frame c = c.frame\nlet size c = Nx.numel c.lon\nlet lon c = Unit.Angle.of_tensor c.lon\nlet lat c = Unit.Angle.of_tensor c.lat\n\nlet to_icrs c =\n  match c.frame with\n  | ICRS -> c\n  | Galactic ->\n      let lon', lat' = rotate gal_to_icrs c.lon c.lat in\n      { frame = ICRS; lon = lon'; lat = lat' }\n  | Ecliptic_j2000 ->\n      let lon', lat' = rotate ecl_to_icrs c.lon c.lat in\n      { frame = ICRS; lon = lon'; lat = lat' }\n  | Supergalactic ->\n      let gal_lon, gal_lat = rotate sgal_to_gal c.lon c.lat in\n      let icrs_lon, icrs_lat = rotate gal_to_icrs gal_lon gal_lat in\n      { frame = ICRS; lon = icrs_lon; lat = icrs_lat }\n\nlet ra c = lon (to_icrs c)\nlet dec c = lat (to_icrs c)\n\nlet to_frame target c =\n  if c.frame = target then c\n  else\n    let icrs = to_icrs c in\n    match target with\n    | ICRS -> icrs\n    | Galactic ->\n        let lon', lat' = rotate icrs_to_gal icrs.lon icrs.lat in\n        { frame = Galactic; lon = lon'; lat = lat' }\n    | Ecliptic_j2000 ->\n        let lon', lat' = rotate icrs_to_ecl icrs.lon icrs.lat in\n        { frame = Ecliptic_j2000; lon = lon'; lat = lat' }\n    | Supergalactic ->\n        let gal_lon, gal_lat = rotate icrs_to_gal icrs.lon icrs.lat in\n        let sg_lon, sg_lat = rotate gal_to_sgal gal_lon gal_lat in\n        { frame = Supergalactic; lon = sg_lon; lat = sg_lat }\n\nlet icrs c = to_frame ICRS c\nlet galactic c = to_frame Galactic c\nlet ecliptic_j2000 c = to_frame Ecliptic_j2000 c\nlet supergalactic c = to_frame Supergalactic c\n\nlet trig_of a b =\n  let a = to_icrs a and b = to_icrs b in\n  let dlon = Nx.sub b.lon a.lon in\n  let cos_lat1 = Nx.cos a.lat and sin_lat1 = Nx.sin a.lat in\n  let cos_lat2 = Nx.cos b.lat and sin_lat2 = Nx.sin b.lat in\n  let cos_dlon = Nx.cos dlon and sin_dlon = 
Nx.sin dlon in\n  (dlon, cos_lat1, sin_lat1, cos_lat2, sin_lat2, cos_dlon, sin_dlon)\n\nlet separation a b =\n  if size a <> size b then\n    invalid_arg \"Coord.separation: arrays must have the same length\";\n  let _, cos_lat1, sin_lat1, cos_lat2, sin_lat2, cos_dlon, sin_dlon =\n    trig_of a b\n  in\n  (* Vincenty formula *)\n  let a1 = Nx.mul cos_lat2 sin_dlon in\n  let a2 =\n    Nx.sub (Nx.mul cos_lat1 sin_lat2)\n      (Nx.mul (Nx.mul sin_lat1 cos_lat2) cos_dlon)\n  in\n  let num = Nx.sqrt (Nx.add (Nx.square a1) (Nx.square a2)) in\n  let den =\n    Nx.add (Nx.mul sin_lat1 sin_lat2)\n      (Nx.mul (Nx.mul cos_lat1 cos_lat2) cos_dlon)\n  in\n  let sep = Nx.atan2 num den in\n  Unit.Angle.of_tensor (Nx.abs sep)\n\nlet position_angle a b =\n  if size a <> size b then\n    invalid_arg \"Coord.position_angle: arrays must have the same length\";\n  let _, cos_lat1, sin_lat1, cos_lat2, sin_lat2, cos_dlon, sin_dlon =\n    trig_of a b\n  in\n  let num = Nx.mul cos_lat2 sin_dlon in\n  let den =\n    Nx.sub (Nx.mul cos_lat1 sin_lat2)\n      (Nx.mul (Nx.mul sin_lat1 cos_lat2) cos_dlon)\n  in\n  let pa = Nx.atan2 num den in\n  let mask = Nx.less_s pa 0.0 in\n  Unit.Angle.of_tensor (Nx.where mask (Nx.add pa two_pi) pa)\n\n(* --- Offset operations --- *)\n\nlet offset_by ~position_angle ~separation c =\n  let pa = Unit.Angle.to_tensor position_angle in\n  let sep = Unit.Angle.to_tensor separation in\n  let cos_sep = Nx.cos sep and sin_sep = Nx.sin sep in\n  let cos_pa = Nx.cos pa and sin_pa = Nx.sin pa in\n  let sin_lat = Nx.sin c.lat and cos_lat = Nx.cos c.lat in\n  (* lat2 = asin(sin(lat1)*cos(sep) + cos(lat1)*sin(sep)*cos(pa)) *)\n  let sin_lat2 =\n    Nx.add (Nx.mul sin_lat cos_sep) (Nx.mul (Nx.mul cos_lat sin_sep) cos_pa)\n  in\n  let lat2 = Nx.asin (Nx.clamp ~min:(-1.0) ~max:1.0 sin_lat2) in\n  (* lon2 = lon1 + atan2(sin(pa)*sin(sep), cos(lat1)*cos(sep) -\n     sin(lat1)*sin(sep)*cos(pa)) *)\n  let num = Nx.mul sin_pa sin_sep in\n  let den =\n    Nx.sub (Nx.mul cos_lat 
cos_sep) (Nx.mul (Nx.mul sin_lat sin_sep) cos_pa)\n  in\n  let dlon = Nx.atan2 num den in\n  let lon2 = Nx.add c.lon dlon in\n  let lon2 = Nx.where (Nx.less_s lon2 0.0) (Nx.add lon2 two_pi) lon2 in\n  let lon2 =\n    Nx.where (Nx.greater_equal lon2 two_pi) (Nx.sub lon2 two_pi) lon2\n  in\n  { frame = c.frame; lon = lon2; lat = lat2 }\n\nlet spherical_offsets_to a b =\n  if size a <> size b then\n    invalid_arg \"Coord.spherical_offsets_to: arrays must have the same length\";\n  if a.frame <> b.frame then\n    invalid_arg\n      \"Coord.spherical_offsets_to: coordinates must be in the same frame\";\n  (* Δlon = (lon_b - lon_a) * cos(lat_a), Δlat = lat_b - lat_a *)\n  let dlon = Nx.mul (Nx.sub b.lon a.lon) (Nx.cos a.lat) in\n  let dlat = Nx.sub b.lat a.lat in\n  (Unit.Angle.of_tensor dlon, Unit.Angle.of_tensor dlat)\n\n(* --- Catalog cross-matching --- *)\n\ntype coord = t\ntype result = { indices : Nx.int32_t; separations : Unit.angle Unit.t }\n\ntype within_result = {\n  indices_a : Nx.int32_t;\n  indices_b : Nx.int32_t;\n  separations : Unit.angle Unit.t;\n}\n\nlet to_xyz c =\n  let icrs = to_icrs c in\n  let n = size c in\n  let xs = Array.make n 0.0 in\n  let ys = Array.make n 0.0 in\n  let zs = Array.make n 0.0 in\n  for i = 0 to n - 1 do\n    let r = Nx.item [ i ] icrs.lon in\n    let d = Nx.item [ i ] icrs.lat in\n    let cd = Float.cos d in\n    xs.(i) <- cd *. Float.cos r;\n    ys.(i) <- cd *. Float.sin r;\n    zs.(i) <- Float.sin d\n  done;\n  (xs, ys, zs)\n\nlet chord_to_rad chord_sq =\n  let chord = Float.sqrt (Float.max 0.0 chord_sq) in\n  let half_chord = Float.min 1.0 (chord /. 2.0) in\n  2.0 *. 
Float.asin half_chord\n\nmodule Index = struct\n  type t = { tree : Kdtree.t }\n\n  let of_coord c =\n    let xs, ys, zs = to_xyz c in\n    let tree = Kdtree.build xs ys zs in\n    { tree }\n\n  let nearest idx query =\n    let qx, qy, qz = to_xyz query in\n    let n = Array.length qx in\n    let indices = Nx.zeros Nx.int32 [| n |] in\n    let seps = Nx.zeros Nx.float64 [| n |] in\n    for i = 0 to n - 1 do\n      let j, dist_sq = Kdtree.nearest idx.tree qx.(i) qy.(i) qz.(i) in\n      Nx.set_item [ i ] (Int32.of_int j) indices;\n      Nx.set_item [ i ] (chord_to_rad dist_sq) seps\n    done;\n    { indices; separations = Unit.Angle.of_tensor seps }\n\n  let within idx query ~max_sep =\n    let max_sep_rad = Nx.item [] (Unit.Angle.to_tensor max_sep) in\n    let half_angle = max_sep_rad /. 2.0 in\n    let chord = 2.0 *. Float.sin half_angle in\n    let max_dist_sq = chord *. chord in\n    let qx, qy, qz = to_xyz query in\n    let na = Array.length qx in\n    let acc = ref [] and count = ref 0 in\n    for i = 0 to na - 1 do\n      let matches = Kdtree.within idx.tree qx.(i) qy.(i) qz.(i) max_dist_sq in\n      List.iter\n        (fun (j, dist_sq) ->\n          acc := (i, j, chord_to_rad dist_sq) :: !acc;\n          incr count)\n        matches\n    done;\n    let n = !count in\n    let out_a = Nx.zeros Nx.int32 [| n |] in\n    let out_b = Nx.zeros Nx.int32 [| n |] in\n    let out_s = Nx.zeros Nx.float64 [| n |] in\n    let k = ref (n - 1) in\n    List.iter\n      (fun (i, j, sep) ->\n        let k' = !k in\n        Nx.set_item [ k' ] (Int32.of_int i) out_a;\n        Nx.set_item [ k' ] (Int32.of_int j) out_b;\n        Nx.set_item [ k' ] sep out_s;\n        decr k)\n      !acc;\n    {\n      indices_a = out_a;\n      indices_b = out_b;\n      separations = Unit.Angle.of_tensor out_s;\n    }\nend\n\nlet nearest query catalog =\n  if size catalog = 0 then invalid_arg \"Coord.nearest: catalog is empty\";\n  let idx = Index.of_coord catalog in\n  Index.nearest idx 
query\n\nlet within a b ~max_sep =\n  let idx = Index.of_coord b in\n  Index.within idx a ~max_sep\n"
  },
  {
    "path": "dev/umbra/lib/coord.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Celestial coordinates with frame transforms and catalog matching.\n\n    Positions are stored as longitude/latitude pairs in 1D {!Unit.angle}\n    quantities and can be converted between {!ICRS}, {!Galactic},\n    {!Ecliptic_j2000}, and {!Supergalactic} frames via 3x3 rotation matrices.\n\n    {[\n      let c = Coord.of_radec ~ra:(Unit.Angle.deg ra) ~dec:(Unit.Angle.deg dec) in\n      let gal = Coord.galactic c\n    ]} *)\n\n(** {1:types Types} *)\n\n(** The type for celestial reference frames. *)\ntype frame =\n  | ICRS  (** International Celestial Reference System. *)\n  | Galactic  (** IAU Galactic coordinates. *)\n  | Ecliptic_j2000  (** Ecliptic coordinates at J2000.0 epoch. *)\n  | Supergalactic  (** Supergalactic coordinates. *)\n\ntype t\n(** The type for celestial coordinates. A pair of 1D angle quantities\n    (longitude, latitude), tagged with a {!frame}. *)\n\n(** {1:constructors Constructors}\n\n    All constructors accept scalar or 1D angle quantities of equal length.\n\n    Raises [Invalid_argument] if the inputs are neither scalar nor 1D, or differ\n    in length. *)\n\nval of_radec : ra:Unit.angle Unit.t -> dec:Unit.angle Unit.t -> t\n(** [of_radec ~ra ~dec] is a coordinate in the ICRS frame. [ra] and [dec] must\n    be scalar or 1-D angle quantities with matching sizes. *)\n\nval of_galactic : l:Unit.angle Unit.t -> b:Unit.angle Unit.t -> t\n(** [of_galactic ~l ~b] is a coordinate in the Galactic frame. [l] and [b] must\n    be scalar or 1-D angle quantities with matching sizes. *)\n\nval of_ecliptic_j2000 : lon:Unit.angle Unit.t -> lat:Unit.angle Unit.t -> t\n(** [of_ecliptic_j2000 ~lon ~lat] is a coordinate in the ecliptic frame at the\n    J2000.0 mean obliquity (23.4392911 degrees). 
[lon] and [lat] must be scalar\n    or 1-D angle quantities with matching sizes. *)\n\nval of_supergalactic : sgl:Unit.angle Unit.t -> sgb:Unit.angle Unit.t -> t\n(** [of_supergalactic ~sgl ~sgb] is a coordinate in the Supergalactic frame.\n    [sgl] and [sgb] must be scalar or 1-D angle quantities with matching sizes.\n*)\n\n(** {1:accessors Accessors} *)\n\nval frame : t -> frame\n(** [frame c] is the reference frame of [c]. *)\n\nval size : t -> int\n(** [size c] is the number of positions in [c]. *)\n\nval lon : t -> Unit.angle Unit.t\n(** [lon c] is the longitude component of [c]. *)\n\nval lat : t -> Unit.angle Unit.t\n(** [lat c] is the latitude component of [c]. *)\n\nval ra : t -> Unit.angle Unit.t\n(** [ra c] is the ICRS right ascension of [c]. Converts to ICRS first if [c] is\n    in another frame. *)\n\nval dec : t -> Unit.angle Unit.t\n(** [dec c] is the ICRS declination of [c]. Converts to ICRS first if [c] is in\n    another frame. *)\n\n(** {1:transforms Frame transforms} *)\n\nval to_frame : frame -> t -> t\n(** [to_frame f c] is [c] converted to frame [f]. Returns [c] unchanged if [c]\n    is already in [f]. All conversions go through ICRS as the pivot frame. Not\n    differentiable (scalar-level rotation matrices). *)\n\nval icrs : t -> t\n(** [icrs c] is [to_frame ICRS c]. *)\n\nval galactic : t -> t\n(** [galactic c] is [to_frame Galactic c]. *)\n\nval ecliptic_j2000 : t -> t\n(** [ecliptic_j2000 c] is [to_frame Ecliptic_j2000 c]. *)\n\nval supergalactic : t -> t\n(** [supergalactic c] is [to_frame Supergalactic c]. *)\n\n(** {1:separation Angular separation} *)\n\nval separation : t -> t -> Unit.angle Unit.t\n(** [separation a b] is the angular separation between corresponding positions\n    of [a] and [b], computed with the Vincenty formula. Both coordinates are\n    converted to ICRS before computation. Not differentiable (scalar-level\n    trigonometry).\n\n    Raises [Invalid_argument] if [a] and [b] differ in {!size}. 
*)\n\nval position_angle : t -> t -> Unit.angle Unit.t\n(** [position_angle a b] is the position angle from [a] to [b], measured North\n    through East, in \\[0, 2{e pi}). Both coordinates are converted to ICRS\n    before computation. Not differentiable (scalar-level trigonometry).\n\n    Raises [Invalid_argument] if [a] and [b] differ in {!size}. *)\n\n(** {1:offsets Offset operations} *)\n\nval offset_by :\n  position_angle:Unit.angle Unit.t -> separation:Unit.angle Unit.t -> t -> t\n(** [offset_by ~position_angle ~separation c] is the coordinate obtained by\n    moving each position in [c] along bearing [position_angle] (North through\n    East) by angular distance [separation]. The result is in the same frame as\n    [c]. Not differentiable (scalar-level trigonometry). *)\n\nval spherical_offsets_to : t -> t -> Unit.angle Unit.t * Unit.angle Unit.t\n(** [spherical_offsets_to a b] is [(dlon, dlat)] where\n    [dlon = (lon_b - lon_a) * cos(lat_a)] and [dlat = lat_b - lat_a]. Both\n    coordinates must be in the same frame. Not differentiable (scalar-level\n    trigonometry).\n\n    Raises [Invalid_argument] if [a] and [b] differ in {!size} or {!frame}. *)\n\n(** {1:matching Catalog cross-matching}\n\n    Matches positions between catalogs using a 3D kd-tree built from unit-sphere\n    Cartesian coordinates. All indices in results are 0-based.\n\n    {b Warning.} Cross-matching is not differentiable: it produces integer\n    indices and uses discrete tree search. *)\n\ntype coord = t\n(** Alias for {!t}, used inside {!Index} to avoid shadowing. *)\n\ntype result = {\n  indices : Nx.int32_t;  (** 0-based indices into the catalog. *)\n  separations : Unit.angle Unit.t;  (** Angular distances. *)\n}\n(** The type for nearest-match results. For each query position, {!indices}\n    gives the index of the nearest catalog entry and {!separations} gives the\n    angular distance to it. Both have the same length as the query. 
*)\n\ntype within_result = {\n  indices_a : Nx.int32_t;  (** 0-based indices into the query. *)\n  indices_b : Nx.int32_t;  (** 0-based indices into the catalog. *)\n  separations : Unit.angle Unit.t;  (** Angular distances. *)\n}\n(** The type for within-radius match results. Each entry represents one matched\n    pair. The three fields have equal length. *)\n\n(** {2:index Reusable index}\n\n    Build a kd-tree once and query it many times. *)\n\nmodule Index : sig\n  type t\n  (** The type for a prebuilt spatial index over a catalog. *)\n\n  val of_coord : coord -> t\n  (** [of_coord c] builds a kd-tree index from the positions in [c]. Coordinates\n      are converted to ICRS internally. *)\n\n  val nearest : t -> coord -> result\n  (** [nearest idx query] finds, for each position in [query], the nearest\n      position in the indexed catalog. *)\n\n  val within : t -> coord -> max_sep:Unit.angle Unit.t -> within_result\n  (** [within idx query ~max_sep] finds all pairs where a position in [query] is\n      within [max_sep] of a position in the indexed catalog. *)\nend\n\nval nearest : t -> t -> result\n(** [nearest query catalog] finds, for each position in [query], the nearest\n    position in [catalog].\n\n    Raises [Invalid_argument] if [catalog] is empty. *)\n\nval within : t -> t -> max_sep:Unit.angle Unit.t -> within_result\n(** [within a b ~max_sep] finds all pairs of positions where the separation is\n    at most [max_sep]. Builds a kd-tree on [b]. *)\n"
  },
  {
    "path": "dev/umbra/lib/cosmo.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Cosmological distance calculations for ΛCDM, wCDM, and w0waCDM universes.\n\n   w0waCDM subsumes all models:\n   - flat ΛCDM: omega_k = 0, w0 = -1, wa = 0\n   - non-flat ΛCDM: w0 = -1, wa = 0\n   - wCDM: wa = 0\n   - w0waCDM: general case\n\n   All computations use Nx tensor ops, making them natively differentiable\n   through Rune's autodiff. GL quadrature is vectorized as tensor operations. *)\n\nlet f64 = Nx.float64\nlet c_km_s = Nx.scalar f64 299792.458\nlet _mpc_m = 3.085_677_581_491_367_3e22\n\ntype params = {\n  h0 : Nx.float64_t;\n  omega_m : Nx.float64_t;\n  omega_l : Nx.float64_t;\n  omega_k : Nx.float64_t;\n  w0 : Nx.float64_t;\n  wa : Nx.float64_t;\n  omega_b : Nx.float64_t option;\n  n_s : Nx.float64_t option;\n  sigma8 : Nx.float64_t option;\n}\n\nlet err_missing name =\n  invalid_arg\n    (\"Cosmo: \" ^ name ^ \" not set (use Cosmo.set or a preset like planck18)\")\n\n(* --- Constructors --- *)\n\nlet flat_lcdm ~h0 ~omega_m =\n  if h0 <= 0.0 then invalid_arg \"Cosmo.flat_lcdm: h0 must be positive\";\n  if omega_m < 0.0 then\n    invalid_arg \"Cosmo.flat_lcdm: omega_m must be non-negative\";\n  {\n    h0 = Nx.scalar f64 h0;\n    omega_m = Nx.scalar f64 omega_m;\n    omega_l = Nx.scalar f64 (1.0 -. omega_m);\n    omega_k = Nx.scalar f64 0.0;\n    w0 = Nx.scalar f64 (-1.0);\n    wa = Nx.scalar f64 0.0;\n    omega_b = None;\n    n_s = None;\n    sigma8 = None;\n  }\n\nlet lcdm ~h0 ~omega_m ~omega_l =\n  if h0 <= 0.0 then invalid_arg \"Cosmo.lcdm: h0 must be positive\";\n  {\n    h0 = Nx.scalar f64 h0;\n    omega_m = Nx.scalar f64 omega_m;\n    omega_l = Nx.scalar f64 omega_l;\n    omega_k = Nx.scalar f64 (1.0 -. omega_m -. 
omega_l);\n    w0 = Nx.scalar f64 (-1.0);\n    wa = Nx.scalar f64 0.0;\n    omega_b = None;\n    n_s = None;\n    sigma8 = None;\n  }\n\nlet wcdm ~h0 ~omega_m ?omega_l ~w0 () =\n  if h0 <= 0.0 then invalid_arg \"Cosmo.wcdm: h0 must be positive\";\n  let omega_l = match omega_l with Some v -> v | None -> 1.0 -. omega_m in\n  {\n    h0 = Nx.scalar f64 h0;\n    omega_m = Nx.scalar f64 omega_m;\n    omega_l = Nx.scalar f64 omega_l;\n    omega_k = Nx.scalar f64 (1.0 -. omega_m -. omega_l);\n    w0 = Nx.scalar f64 w0;\n    wa = Nx.scalar f64 0.0;\n    omega_b = None;\n    n_s = None;\n    sigma8 = None;\n  }\n\nlet w0wacdm ~h0 ~omega_m ?omega_l ~w0 ~wa () =\n  if h0 <= 0.0 then invalid_arg \"Cosmo.w0wacdm: h0 must be positive\";\n  let omega_l = match omega_l with Some v -> v | None -> 1.0 -. omega_m in\n  {\n    h0 = Nx.scalar f64 h0;\n    omega_m = Nx.scalar f64 omega_m;\n    omega_l = Nx.scalar f64 omega_l;\n    omega_k = Nx.scalar f64 (1.0 -. omega_m -. omega_l);\n    w0 = Nx.scalar f64 w0;\n    wa = Nx.scalar f64 wa;\n    omega_b = None;\n    n_s = None;\n    sigma8 = None;\n  }\n\n(* Tensor constructors for differentiable construction *)\n\nlet create_flat_lcdm ~h0 ~omega_m =\n  {\n    h0;\n    omega_m;\n    omega_l = Nx.sub (Nx.scalar f64 1.0) omega_m;\n    omega_k = Nx.scalar f64 0.0;\n    w0 = Nx.scalar f64 (-1.0);\n    wa = Nx.scalar f64 0.0;\n    omega_b = None;\n    n_s = None;\n    sigma8 = None;\n  }\n\nlet create_lcdm ~h0 ~omega_m ~omega_l =\n  {\n    h0;\n    omega_m;\n    omega_l;\n    omega_k = Nx.sub (Nx.scalar f64 1.0) (Nx.add omega_m omega_l);\n    w0 = Nx.scalar f64 (-1.0);\n    wa = Nx.scalar f64 0.0;\n    omega_b = None;\n    n_s = None;\n    sigma8 = None;\n  }\n\nlet create_wcdm ~h0 ~omega_m ?omega_l ~w0 () =\n  let omega_l =\n    match omega_l with\n    | Some v -> v\n    | None -> Nx.sub (Nx.scalar f64 1.0) omega_m\n  in\n  {\n    h0;\n    omega_m;\n    omega_l;\n    omega_k = Nx.sub (Nx.scalar f64 1.0) (Nx.add omega_m omega_l);\n    w0;\n    
wa = Nx.scalar f64 0.0;\n    omega_b = None;\n    n_s = None;\n    sigma8 = None;\n  }\n\nlet create_w0wacdm ~h0 ~omega_m ?omega_l ~w0 ~wa () =\n  let omega_l =\n    match omega_l with\n    | Some v -> v\n    | None -> Nx.sub (Nx.scalar f64 1.0) omega_m\n  in\n  {\n    h0;\n    omega_m;\n    omega_l;\n    omega_k = Nx.sub (Nx.scalar f64 1.0) (Nx.add omega_m omega_l);\n    w0;\n    wa;\n    omega_b = None;\n    n_s = None;\n    sigma8 = None;\n  }\n\n(* Accessors *)\n\nlet h0 p = p.h0\nlet omega_m p = p.omega_m\nlet omega_l p = p.omega_l\nlet omega_k p = p.omega_k\nlet w0 p = p.w0\nlet wa p = p.wa\n\nlet omega_b p =\n  match p.omega_b with Some v -> v | None -> err_missing \"omega_b\"\n\nlet n_s p = match p.n_s with Some v -> v | None -> err_missing \"n_s\"\nlet sigma8 p = match p.sigma8 with Some v -> v | None -> err_missing \"sigma8\"\n\nlet set ?omega_b ?n_s ?sigma8 p =\n  let omega_b =\n    match omega_b with Some v -> Some (Nx.scalar f64 v) | None -> p.omega_b\n  in\n  let n_s = match n_s with Some v -> Some (Nx.scalar f64 v) | None -> p.n_s in\n  let sigma8 =\n    match sigma8 with Some v -> Some (Nx.scalar f64 v) | None -> p.sigma8\n  in\n  { p with omega_b; n_s; sigma8 }\n\nlet set_t ?h0 ?omega_m ?omega_l ?omega_b ?n_s ?sigma8 p =\n  let h0 = match h0 with Some v -> v | None -> p.h0 in\n  let omega_m = match omega_m with Some v -> v | None -> p.omega_m in\n  let omega_l = match omega_l with Some v -> v | None -> p.omega_l in\n  let omega_k = Nx.sub (Nx.scalar f64 1.0) (Nx.add omega_m omega_l) in\n  let omega_b = match omega_b with Some v -> Some v | None -> p.omega_b in\n  let n_s = match n_s with Some v -> Some v | None -> p.n_s in\n  let sigma8 = match sigma8 with Some v -> Some v | None -> p.sigma8 in\n  { p with h0; omega_m; omega_l; omega_k; omega_b; n_s; sigma8 }\n\n(* Presets *)\n\nlet default = flat_lcdm ~h0:70.0 ~omega_m:0.3\n\nlet planck18 =\n  flat_lcdm ~h0:67.66 ~omega_m:0.3111\n  |> set ~omega_b:0.0490 ~n_s:0.9665 ~sigma8:0.8102\n\nlet planck15 
=\n  flat_lcdm ~h0:67.74 ~omega_m:0.3075\n  |> set ~omega_b:0.0486 ~n_s:0.9667 ~sigma8:0.8159\n\nlet wmap9 =\n  flat_lcdm ~h0:69.32 ~omega_m:0.2865\n  |> set ~omega_b:0.0463 ~n_s:0.9608 ~sigma8:0.820\n\n(* --- E(z) computation ---\n\n   E(z) = H(z)/H0 = sqrt(Ω_m(1+z)³ + Ω_k(1+z)² + Ω_de(z))\n\n   where Ω_de(z) = Ω_Λ * (1+z)^(3(1+w0+wa)) * exp(-3*wa*z/(1+z))\n\n   For ΛCDM (w0=-1, wa=0): Ω_de(z) = Ω_Λ (constant) For wCDM (wa=0): Ω_de(z) =\n   Ω_Λ * (1+z)^(3(1+w0)) *)\n\nlet e_of p z =\n  let one_plus_z = Nx.add_s z 1.0 in\n  let cubed = Nx.mul one_plus_z (Nx.mul one_plus_z one_plus_z) in\n  let matter = Nx.mul p.omega_m cubed in\n  let curvature = Nx.mul p.omega_k (Nx.mul one_plus_z one_plus_z) in\n  (* Dark energy: Ω_Λ * (1+z)^(3(1+w0+wa)) * exp(-3*wa*z/(1+z)) *)\n  let w_eff = Nx.add_s (Nx.add p.w0 p.wa) 1.0 in\n  let de_power = Nx.pow one_plus_z (Nx.mul_s w_eff 3.0) in\n  let wa_arg = Nx.mul (Nx.mul_s p.wa (-3.0)) (Nx.div z one_plus_z) in\n  let de = Nx.mul p.omega_l (Nx.mul de_power (Nx.exp wa_arg)) in\n  Nx.sqrt (Nx.add matter (Nx.add curvature de))\n\n(* 16-point Gauss-Legendre nodes and weights on [-1, 1] as Nx tensors *)\nlet gl_nodes =\n  Nx.create f64 [| 16 |]\n    [|\n      -0.9894009349916499;\n      -0.9445750230732326;\n      -0.8656312023878318;\n      -0.7554044083550030;\n      -0.6178762444026438;\n      -0.4580167776572274;\n      -0.2816035507792589;\n      -0.0950125098376374;\n      0.0950125098376374;\n      0.2816035507792589;\n      0.4580167776572274;\n      0.6178762444026438;\n      0.7554044083550030;\n      0.8656312023878318;\n      0.9445750230732326;\n      0.9894009349916499;\n    |]\n\nlet gl_weights =\n  Nx.create f64 [| 16 |]\n    [|\n      0.0271524594117541;\n      0.0622535239386479;\n      0.0951585116824928;\n      0.1246289712555339;\n      0.1495959888165767;\n      0.1691565193950025;\n      0.1826034150449236;\n      0.1894506104550685;\n      0.1894506104550685;\n      0.1826034150449236;\n      0.1691565193950025;\n    
  0.1495959888165767;\n      0.1246289712555339;\n      0.0951585116824928;\n      0.0622535239386479;\n      0.0271524594117541;\n    |]\n\n(* GL quadrature in scale-factor space.\n\n   All cosmological integrals ∫₀ᶻ g(z') dz' are evaluated via the substitution a\n   = 1/(1+z), which maps [0, z] → [1/(1+z), 1]. This bounded range is\n   well-resolved by 16-point GL even at z = 1089 (CMB). Direct quadrature over\n   [0, z] in redshift space under-resolves the integrand at large z. *)\nlet gl_quad_a p z f =\n  let a_lo = Nx.recip (Nx.add_s z 1.0) in\n  let one = Nx.scalar f64 1.0 in\n  let half = Nx.div_s (Nx.sub one a_lo) 2.0 in\n  let mid = Nx.div_s (Nx.add one a_lo) 2.0 in\n  let a = Nx.add (Nx.mul half gl_nodes) mid in\n  let e_z = e_of p (Nx.sub_s (Nx.recip a) 1.0) in\n  Nx.mul half (Nx.sum (Nx.mul (f a e_z) gl_weights))\n\n(* ∫₀ᶻ dz'/E(z') = ∫_{a_lo}^1 da/(a² E(a)) *)\nlet integrate_inv_ez p z =\n  gl_quad_a p z (fun a e -> Nx.recip (Nx.mul (Nx.mul a a) e))\n\n(* ∫₀ᶻ dz'/((1+z') E(z')) = ∫_{a_lo}^1 da/(a E(a)) *)\nlet integrate_inv_z1_ez p z = gl_quad_a p z (fun a e -> Nx.recip (Nx.mul a e))\n\n(* --- Derived quantities --- *)\n\nlet hubble ?(p = default) z = Nx.mul p.h0 (e_of p z)\n\nlet critical_density ?(p = default) z =\n  let h_z = hubble ~p z in\n  let h_si = Nx.div_s (Nx.mul_s h_z 1e3) _mpc_m in\n  Nx.div_s (Nx.mul_s (Nx.mul h_si h_si) 3.0) (8.0 *. Float.pi *. 
6.674_30e-11)\n\n(* --- Distances ---\n\n   Line-of-sight comoving distance: χ = d_H ∫₀ᶻ dz'/E(z')\n\n   Transverse comoving distance (curvature-corrected):\n   - Ω_k > 0 (open): d_M = d_H/√Ω_k · sinh(√Ω_k · χ/d_H)\n   - Ω_k = 0 (flat): d_M = χ\n   - Ω_k < 0 (closed): d_M = d_H/√|Ω_k| · sin(√|Ω_k| · χ/d_H) *)\n\nlet comoving_distance_mpc p z =\n  let d_h = Nx.div c_km_s p.h0 in\n  Nx.mul d_h (integrate_inv_ez p z)\n\nlet transverse_comoving_mpc p z =\n  let d_h = Nx.div c_km_s p.h0 in\n  let chi = Nx.mul d_h (integrate_inv_ez p z) in\n  let ok_f = Nx.item [] p.omega_k in\n  if Float.abs ok_f < 1e-10 then chi (* flat *)\n  else\n    let sqrt_ok = Nx.sqrt (Nx.abs p.omega_k) in\n    let arg = Nx.div (Nx.mul sqrt_ok chi) d_h in\n    if ok_f > 0.0 then Nx.div (Nx.mul d_h (Nx.sinh arg)) sqrt_ok\n    else Nx.div (Nx.mul d_h (Nx.sin arg)) sqrt_ok\n\nlet comoving_distance ?(p = default) z =\n  Unit.Length.of_tensor (Nx.mul_s (comoving_distance_mpc p z) _mpc_m)\n\nlet luminosity_distance ?(p = default) z =\n  let dm_mpc = transverse_comoving_mpc p z in\n  Unit.Length.of_tensor (Nx.mul_s (Nx.mul (Nx.add_s z 1.0) dm_mpc) _mpc_m)\n\nlet angular_diameter_distance ?(p = default) z =\n  let dm_mpc = transverse_comoving_mpc p z in\n  Unit.Length.of_tensor (Nx.mul_s (Nx.div dm_mpc (Nx.add_s z 1.0)) _mpc_m)\n\nlet distance_modulus ?(p = default) z =\n  let dl_mpc = Nx.mul (Nx.add_s z 1.0) (transverse_comoving_mpc p z) in\n  (* mu = 5 * log10(dL_Mpc) + 25 = 5/ln10 * ln(dL_Mpc) + 25 *)\n  let five_over_ln10 = 5.0 /. 
Float.log 10.0 in\n  Nx.add_s (Nx.mul_s (Nx.log dl_mpc) five_over_ln10) 25.0\n\n(* --- Angular scale --- *)\n\nlet angular_size ?(p = default) ~z phys =\n  let da = angular_diameter_distance ~p z in\n  Unit.Angle.of_tensor\n    (Nx.div (Unit.Length.to_tensor phys) (Unit.Length.to_tensor da))\n\nlet physical_size ?(p = default) ~z ang =\n  let da = angular_diameter_distance ~p z in\n  Unit.Length.of_tensor\n    (Nx.mul (Unit.Angle.to_tensor ang) (Unit.Length.to_tensor da))\n\n(* --- Cosmic times --- *)\n\n(* 1/H0 in seconds: (km/s/Mpc)^{-1} = Mpc/km · s *)\nlet _hubble_time_s p = Nx.mul_s (Nx.recip p.h0) 3.0856776e19\n\nlet lookback_time ?(p = default) z =\n  Unit.Time.of_tensor (Nx.mul (_hubble_time_s p) (integrate_inv_z1_ez p z))\n\nlet age ?(p = default) z =\n  (* age(z) = t_H ∫₀^{1/(1+z)} da/(a E(a)). We reuse gl_quad_a with an upper\n     limit at z_max=1000 (≈ a_lo → 0) for the total integral, then subtract the\n     lookback from 0 to z. *)\n  let t_h_s = _hubble_time_s p in\n  let total = integrate_inv_z1_ez p (Nx.scalar f64 1000.0) in\n  let lb = integrate_inv_z1_ez p z in\n  Unit.Time.of_tensor (Nx.mul t_h_s (Nx.sub total lb))\n\n(* --- z_at_value: inverse lookup via Brent's method ---\n\n   Given a monotonic cosmological function f and a target value, find the\n   redshift z such that f(z) ≈ target. Not differentiable. *)\n\nlet z_at_value ?(p = default) ?(zmin = 1e-8) ?(zmax = 1000.0) ?(xtol = 1e-8) f\n    target =\n  let target_v = Nx.item [] target in\n  let eval z = Nx.item [] (f ~p (Nx.scalar f64 z)) -. target_v in\n  (* Brent's method *)\n  let a = ref zmin and b = ref zmax in\n  let fa = ref (eval !a) and fb = ref (eval !b) in\n  if !fa *. !fb > 0.0 then\n    invalid_arg \"Cosmo.z_at_value: target outside [f(zmin), f(zmax)]\";\n  if Float.abs !fa < Float.abs !fb then begin\n    let tmp = !a in\n    a := !b;\n    b := tmp;\n    let tmp = !fa in\n    fa := !fb;\n    fb := tmp\n  end;\n  let c = ref !a and fc = ref !fa in\n  let d = ref (!b -. 
!a) in\n  let mflag = ref true in\n  let max_iter = 100 in\n  let i = ref 0 in\n  while Float.abs !fb > xtol && !i < max_iter do\n    let s =\n      if Float.abs (!fa -. !fc) > 1e-30 && Float.abs (!fb -. !fc) > 1e-30 then\n        (* Inverse quadratic interpolation *)\n        let s1 = !a *. !fb *. !fc /. ((!fa -. !fb) *. (!fa -. !fc)) in\n        let s2 = !b *. !fa *. !fc /. ((!fb -. !fa) *. (!fb -. !fc)) in\n        let s3 = !c *. !fa *. !fb /. ((!fc -. !fa) *. (!fc -. !fb)) in\n        s1 +. s2 +. s3\n      else\n        (* Secant method *)\n        !b -. (!fb *. (!b -. !a) /. (!fb -. !fa))\n    in\n    let cond1 =\n      let lo = ((3.0 *. !a) +. !b) /. 4.0 in\n      not (if lo < !b then lo <= s && s <= !b else !b <= s && s <= lo)\n    in\n    let cond2 = !mflag && Float.abs (s -. !b) >= Float.abs (!b -. !c) /. 2.0 in\n    let cond3 =\n      (not !mflag) && Float.abs (s -. !b) >= Float.abs (!c -. !d) /. 2.0\n    in\n    let cond4 = !mflag && Float.abs (!b -. !c) < xtol in\n    let cond5 = (not !mflag) && Float.abs (!c -. !d) < xtol in\n    let s =\n      if cond1 || cond2 || cond3 || cond4 || cond5 then begin\n        mflag := true;\n        (!a +. !b) /. 2.0\n      end\n      else begin\n        mflag := false;\n        s\n      end\n    in\n    let fs = eval s in\n    d := !c;\n    c := !b;\n    fc := !fb;\n    if !fa *. fs < 0.0 then begin\n      b := s;\n      fb := fs\n    end\n    else begin\n      a := s;\n      fa := fs\n    end;\n    if Float.abs !fa < Float.abs !fb then begin\n      let tmp = !a in\n      a := !b;\n      b := tmp;\n      let tmp = !fa in\n      fa := !fb;\n      fb := tmp\n    end;\n    incr i\n  done;\n  Nx.scalar f64 !b\n\n(* Growth factor and growth rate *)\n\n(* E(a) from scale factor: a = 1/(1+z), so z = 1/a - 1 *)\nlet e_at_a p a = e_of p (Nx.sub_s (Nx.recip a) 1.0)\n\n(* GL quadrature of f(a') from 0 to a. Transforms [-1,1] to [0,a]. 
*)\nlet gl_integrate_a p a f =\n  let half = Nx.div_s a 2.0 in\n  let a_prime = Nx.add (Nx.mul half gl_nodes) half in\n  let e_a = e_at_a p a_prime in\n  Nx.mul half (Nx.sum (Nx.mul (f a_prime e_a) gl_weights))\n\n(* Growth integral: J(a) = ∫₀ᵃ da' / (a'³ E³(a')). Integrand at a'→0:\n   ~a'^(3/2)/Ω_m^(3/2) → 0, so well-behaved. *)\nlet growth_integral p a =\n  gl_integrate_a p a (fun a_prime e_a ->\n      let a3 = Nx.mul a_prime (Nx.mul a_prime a_prime) in\n      let e3 = Nx.mul e_a (Nx.mul e_a e_a) in\n      Nx.recip (Nx.mul a3 e3))\n\n(* Unnormalized growth factor: D(a) ∝ E(a) × J(a) *)\nlet growth_unnorm p a = Nx.mul (e_at_a p a) (growth_integral p a)\n\nlet growth_factor ?(p = default) z =\n  let a = Nx.recip (Nx.add_s z 1.0) in\n  let d_a = growth_unnorm p a in\n  let d_1 = growth_unnorm p (Nx.scalar f64 1.0) in\n  Nx.div d_a d_1\n\n(* Growth rate: f(a) = dlnD/dlna. Since D(a) = E(a) J(a) / const,\n   f = dlnE/dlna + (dJ/dlna) / J = dlnE/dlna + 1 / (a² E³(a) J(a))\n\n   dlnE/dlna = a/(2E²) × dE²/da, with\n   dE²/da = -3Ωm a⁻⁴ - 2Ωk a⁻³ + ΩΛ exp(f_de) × (-3(1+w0+wa)/a + 3wa) *)\nlet growth_rate ?(p = default) z =\n  let a = Nx.recip (Nx.add_s z 1.0) in\n  let e_a = e_at_a p a in\n  let e2 = Nx.mul e_a e_a in\n  let j_a = growth_integral p a in\n  (* dE²/da *)\n  let a2 = Nx.mul a a in\n  let a3 = Nx.mul a2 a in\n  let a4 = Nx.mul a3 a in\n  let dm = Nx.mul_s (Nx.div p.omega_m a4) (-3.0) in\n  let dk = Nx.mul_s (Nx.div p.omega_k a3) (-2.0) in\n  (* Dark energy contribution: need f_de(a) and f_de'(a) *)\n  let f_de =\n    Nx.add\n      (Nx.mul (Nx.mul_s (Nx.add_s (Nx.add p.w0 p.wa) 1.0) (-3.0)) (Nx.log a))\n      (Nx.mul p.wa (Nx.mul_s (Nx.sub_s a 1.0) 3.0))\n  in\n  let f_de_prime =\n    Nx.add\n      (Nx.div (Nx.mul_s (Nx.add_s (Nx.add p.w0 p.wa) 1.0) (-3.0)) a)\n      (Nx.mul_s p.wa 3.0)\n  in\n  let dde = Nx.mul (Nx.mul p.omega_l (Nx.exp f_de)) f_de_prime in\n  let de2_da = Nx.add dm (Nx.add dk dde) in\n  (* dlnE/dlna = a/(2E²) × dE²/da *)\n  let dln_e = Nx.div (Nx.mul a 
de2_da) (Nx.mul_s e2 2.0) in\n  (* 1/(a² E³ J) *)\n  let e3 = Nx.mul e_a e2 in\n  let term2 = Nx.recip (Nx.mul a2 (Nx.mul e3 j_a)) in\n  Nx.add dln_e term2\n\n(* Eisenstein-Hu transfer function (1998) *)\n\nlet t_cmb = 2.7255\n\n(* Eisenstein & Hu (1998) transfer function with baryon oscillations. Scalar\n   cosmological quantities are computed in float arithmetic (the transfer\n   function is a fitting formula. The wavenumber k may be a tensor of arbitrary\n   shape; the result has the same shape. Differentiable through cosmological\n   parameters via Rune. *)\nlet eisenstein_hu p k =\n  let s = Nx.scalar f64 in\n  let om = p.omega_m in\n  let ob = omega_b p in\n  let h = Nx.div_s p.h0 100.0 in\n  let h2 = Nx.mul h h in\n  let w_m = Nx.mul om h2 in\n  let w_b = Nx.mul ob h2 in\n  let fb = Nx.div ob om in\n  let fc = Nx.sub (s 1.0) fb in\n  let t27sq = (t_cmb /. 2.7) ** 2.0 in\n  let t27_4 = t27sq *. t27sq in\n  (* Eq. 2,3: equality epoch *)\n  let z_eq = Nx.div_s (Nx.mul_s w_m 2.50e4) t27_4 in\n  let k_eq = Nx.div (Nx.div_s (Nx.mul_s w_m 7.46e-2) t27sq) h in\n  (* Eq. 4: drag epoch *)\n  let b1 =\n    Nx.mul\n      (Nx.pow w_m (s (-0.419)))\n      (Nx.add_s (Nx.mul_s (Nx.pow w_m (s 0.674)) 0.607) 1.0)\n    |> fun x -> Nx.mul_s x 0.313\n  in\n  let b2 = Nx.mul_s (Nx.pow w_m (s 0.223)) 0.238 in\n  let z_d =\n    Nx.mul\n      (Nx.div\n         (Nx.mul_s (Nx.pow w_m (s 0.251)) 1291.0)\n         (Nx.add_s (Nx.mul_s (Nx.pow w_m (s 0.828)) 0.659) 1.0))\n      (Nx.add_s (Nx.mul b1 (Nx.pow w_b b2)) 1.0)\n  in\n  (* Eq. 5: baryon/photon momentum ratios *)\n  let r_d = Nx.mul (Nx.div_s (Nx.mul_s w_b 31.5) t27_4) (Nx.div (s 1e3) z_d) in\n  let r_eq =\n    Nx.mul (Nx.div_s (Nx.mul_s w_b 31.5) t27_4) (Nx.div (s 1e3) z_eq)\n  in\n  (* Eq. 
6: sound horizon *)\n  let sh_d =\n    Nx.mul\n      (Nx.mul\n         (Nx.div (s 2.0) (Nx.mul_s k_eq 3.0))\n         (Nx.sqrt (Nx.div (s 6.0) r_eq)))\n      (Nx.log\n         (Nx.div\n            (Nx.add (Nx.sqrt (Nx.add_s r_d 1.0)) (Nx.sqrt (Nx.add r_eq r_d)))\n            (Nx.add_s (Nx.sqrt r_eq) 1.0)))\n  in\n  (* Eq. 7: Silk damping *)\n  let k_silk =\n    Nx.div\n      (Nx.mul\n         (Nx.mul (Nx.mul_s (Nx.pow w_b (s 0.52)) 1.6) (Nx.pow w_m (s 0.73)))\n         (Nx.add_s (Nx.pow (Nx.mul_s w_m 10.4) (s (-0.95))) 1.0))\n      h\n  in\n  (* CDM transfer function (Eqs. 11, 12, 17, 18) *)\n  let a1 =\n    Nx.mul\n      (Nx.pow (Nx.mul_s w_m 46.9) (s 0.670))\n      (Nx.add_s (Nx.pow (Nx.mul_s w_m 32.1) (s (-0.532))) 1.0)\n  in\n  let a2 =\n    Nx.mul\n      (Nx.pow (Nx.mul_s w_m 12.0) (s 0.424))\n      (Nx.add_s (Nx.pow (Nx.mul_s w_m 45.0) (s (-0.582))) 1.0)\n  in\n  let alpha_c =\n    Nx.mul\n      (Nx.pow a1 (Nx.neg fb))\n      (Nx.pow a2 (Nx.neg (Nx.mul fb (Nx.mul fb fb))))\n  in\n  let b1c =\n    Nx.div (s 0.944) (Nx.add_s (Nx.pow (Nx.mul_s w_m 458.0) (s (-0.708))) 1.0)\n  in\n  let b2c = Nx.pow (Nx.mul_s w_m 0.395) (s (-0.0266)) in\n  let beta_c =\n    Nx.recip (Nx.add_s (Nx.mul b1c (Nx.sub (Nx.pow fc b2c) (s 1.0))) 1.0)\n  in\n  (* T_tilde: Eq. 10, 19. Operates on k tensor. alpha, beta are scalar\n     tensors. *)\n  let t_tilde k1 alpha beta =\n    let q = Nx.div k1 (Nx.mul_s k_eq 13.41) in\n    let l = Nx.log (Nx.add_s (Nx.mul q (Nx.mul_s beta 1.8)) (Float.exp 1.0)) in\n    let c =\n      Nx.add\n        (Nx.div (s 386.0) (Nx.add_s (Nx.mul_s (Nx.pow q (s 1.08)) 69.9) 1.0))\n        (Nx.div (s 14.2) alpha)\n    in\n    Nx.div l (Nx.add l (Nx.mul c (Nx.mul q q)))\n  in\n  let ksh = Nx.mul k sh_d in\n  (* Eq. 
17, 18 *)\n  let f_ =\n    let x = Nx.div_s ksh 5.4 in\n    let x2 = Nx.mul x x in\n    Nx.recip (Nx.add_s (Nx.mul x2 x2) 1.0)\n  in\n  let tc =\n    Nx.add\n      (Nx.mul f_ (t_tilde k (s 1.0) beta_c))\n      (Nx.mul (Nx.sub (s 1.0) f_) (t_tilde k alpha_c beta_c))\n  in\n  (* Baryon transfer function (Eqs. 14, 19, 21) *)\n  let y = Nx.div (Nx.add_s z_eq 1.0) (Nx.add_s z_d 1.0) in\n  let x_ = Nx.sqrt (Nx.add_s y 1.0) in\n  let g_eh =\n    Nx.mul y\n      (Nx.add (Nx.mul_s x_ (-6.0))\n         (Nx.mul\n            (Nx.add_s (Nx.mul_s y 3.0) 2.0)\n            (Nx.log (Nx.div (Nx.add_s x_ 1.0) (Nx.sub_s x_ 1.0)))))\n  in\n  let alpha_b =\n    Nx.mul_s\n      (Nx.mul (Nx.mul k_eq sh_d)\n         (Nx.mul (Nx.pow (Nx.add_s r_d 1.0) (s (-0.75))) g_eh))\n      2.07\n  in\n  let beta_node = Nx.mul_s (Nx.pow w_m (s 0.435)) 8.41 in\n  let beta_b =\n    Nx.add (Nx.add_s fb 0.5)\n      (Nx.mul\n         (Nx.sub_s (Nx.mul_s fb 2.0) 3.0)\n         (Nx.neg\n            (Nx.sqrt\n               (Nx.add_s (Nx.mul (Nx.mul_s w_m 17.2) (Nx.mul_s w_m 17.2)) 1.0))))\n  in\n  (* Eq. 22: tilde_s per-k *)\n  let tilde_s =\n    let bns = Nx.div beta_node ksh in\n    let bns3 = Nx.mul bns (Nx.mul bns bns) in\n    Nx.div sh_d (Nx.pow (Nx.add_s bns3 1.0) (s (1.0 /. 3.0)))\n  in\n  let tb =\n    let term1 =\n      Nx.div\n        (t_tilde k (s 1.0) (s 1.0))\n        (Nx.add_s\n           (let x = Nx.div_s ksh 5.2 in\n            Nx.mul x x)\n           1.0)\n    in\n    let bbks = Nx.div beta_b ksh in\n    let bbks3 = Nx.mul bbks (Nx.mul bbks bbks) in\n    let term2 =\n      Nx.mul\n        (Nx.div alpha_b (Nx.add_s bbks3 1.0))\n        (Nx.exp (Nx.neg (Nx.pow (Nx.div k k_silk) (s 1.4))))\n    in\n    let sinc_arg = Nx.mul k tilde_s in\n    Nx.mul (Nx.add term1 term2) (Nx.div (Nx.sin sinc_arg) sinc_arg)\n  in\n  (* Total: fb * Tb + fc * Tc *)\n  Nx.add (Nx.mul tb fb) (Nx.mul tc fc)\n\n(* Matter power spectrum *)\n\n(* Simpson's rule integration on a uniform grid of n+1 points from a to b. 
n\n   must be even. f is evaluated at each grid point, returns [n+1] tensor. *)\nlet simps_integrate f a b n =\n  let h = (b -. a) /. Float.of_int n in\n  let xs =\n    Nx.create f64\n      [| n + 1 |]\n      (Array.init (n + 1) (fun i -> a +. (Float.of_int i *. h)))\n  in\n  let ys = f xs in\n  (* Simpson weights: 1, 4, 2, 4, 2, ..., 4, 1 *)\n  let w =\n    Array.init (n + 1) (fun i ->\n        if i = 0 || i = n then 1.0 else if i mod 2 = 1 then 4.0 else 2.0)\n  in\n  let weights = Nx.create f64 [| n + 1 |] w in\n  Nx.mul_s (Nx.sum (Nx.mul ys weights)) (h /. 3.0)\n\n(* σ²(R) = 1/(2π²) ∫ k³ P_unnorm(k) W²(kR) d(ln k) where P_unnorm = k^n_s ×\n   T²(k) and W is the top-hat window. Integration in ln(k) space: the integrand\n   is k³ P W² (the dk/k from d(ln k) cancels one power of k, giving k² P W² dk\n   equivalent). *)\nlet sigma_sq p r =\n  let ns = n_s p in\n  simps_integrate\n    (fun lnk ->\n      let k = Nx.exp lnk in\n      let x = Nx.mul_s k r in\n      (* Top-hat window: W(x) = 3(sin x - x cos x)/x³ *)\n      let x2 = Nx.mul x x in\n      let x3 = Nx.mul x2 x in\n      let w =\n        Nx.div (Nx.mul_s (Nx.sub (Nx.sin x) (Nx.mul x (Nx.cos x))) 3.0) x3\n      in\n      let t = eisenstein_hu p k in\n      let pk = Nx.mul (Nx.pow k ns) (Nx.mul t t) in\n      let k3 = Nx.mul k (Nx.mul k k) in\n      Nx.mul k3 (Nx.mul (Nx.mul w w) pk))\n    (Float.log 1e-4) (Float.log 1e4) 512\n  |> fun integral -> Nx.div_s integral (2.0 *. Float.pi *. Float.pi)\n\nlet linear_power ?(p = default) k z =\n  let s8 = sigma8 p in\n  let g = growth_factor ~p z in\n  let t = eisenstein_hu p k in\n  let ns = n_s p in\n  let pk_unnorm = Nx.mul (Nx.pow k ns) (Nx.mul t t) in\n  (* Normalization: A = σ8² / σ²_unnorm(R=8) *)\n  let s2 = sigma_sq p 8.0 in\n  let norm = Nx.div (Nx.mul s8 s8) s2 in\n  Nx.mul norm (Nx.mul pk_unnorm (Nx.mul g g))\n\n(* Halofit (Takahashi et al. 
2012) *)\n\n(* Ω_m(a) = Ω_m a⁻³ / E²(a) *)\nlet omega_m_a p a =\n  let e2 =\n    let e = e_at_a p a in\n    Nx.mul e e\n  in\n  let a3 = Nx.mul a (Nx.mul a a) in\n  Nx.div (Nx.div p.omega_m a3) e2\n\n(* Ω_de(a) = Ω_Λ exp(f_de(a)) / E²(a) *)\nlet omega_de_a p a =\n  let e2 =\n    let e = e_at_a p a in\n    Nx.mul e e\n  in\n  let f_de =\n    Nx.add\n      (Nx.mul (Nx.mul_s (Nx.add_s (Nx.add p.w0 p.wa) 1.0) (-3.0)) (Nx.log a))\n      (Nx.mul p.wa (Nx.mul_s (Nx.sub_s a 1.0) 3.0))\n  in\n  Nx.div (Nx.mul p.omega_l (Nx.exp f_de)) e2\n\n(* w(a) = w0 + wa(1-a) *)\nlet w_of p a = Nx.add p.w0 (Nx.mul p.wa (Nx.sub (Nx.scalar f64 1.0) a))\n\n(* σ²(R, z) using linear P(k) at z=0, scaled by D²(z)/D²(0)=D²(z). For Halofit\n   we need σ(R) at various R to find k_nl, plus derivatives. *)\nlet sigma_sq_at_z p r z =\n  let g = growth_factor ~p z in\n  Nx.mul (sigma_sq p r) (Nx.mul g g)\n\n(* Find k_nl where σ(1/k_nl, z) = 1, plus n_eff and C at the nonlinear scale. We\n   compute σ²(R) on a grid, interpolate to find R_nl, then compute spectral\n   index and curvature from Gaussian-filtered integrals. *)\nlet halofit_params p z =\n  let g = growth_factor ~p z in\n  let g2 = Nx.mul g g in\n  let ns = n_s p in\n  let s8 = sigma8 p in\n  let s2_8 = sigma_sq p 8.0 in\n  let pknorm = Nx.div (Nx.mul s8 s8) s2_8 in\n  let n_r = 256 in\n  let logr =\n    Nx.create f64 [| n_r |]\n      (Array.init n_r (fun i ->\n           Float.log 1e-4\n           +. Float.of_int i\n              *. (Float.log 1e1 -. Float.log 1e-4)\n              /. Float.of_int (n_r - 1)))\n  in\n  (* Compute σ²(R) for each R using Gaussian filter exp(-(kR)²) *)\n  let n_k = 512 in\n  let lnk_min = Float.log 1e-4 in\n  let lnk_max = Float.log 1e4 in\n  let dlnk = (lnk_max -. lnk_min) /. Float.of_int (n_k - 1) in\n  let lnk =\n    Nx.create f64 [| n_k |]\n      (Array.init n_k (fun i -> lnk_min +. (Float.of_int i *. 
dlnk)))\n  in\n  let k = Nx.exp lnk in\n  let t = eisenstein_hu p k in\n  let pk_base = Nx.mul pknorm (Nx.mul (Nx.pow k ns) (Nx.mul t t)) in\n  let pk_at_z = Nx.mul pk_base g2 in\n  (* k³ P(k) / (2π²) *)\n  let k3pk =\n    Nx.div\n      (Nx.mul (Nx.mul k (Nx.mul k k)) pk_at_z)\n      (Nx.scalar f64 (2.0 *. Float.pi *. Float.pi))\n  in\n  (* Trapezoidal weights [n_k] *)\n  let trap_w =\n    Nx.create f64 [| n_k |]\n      (Array.init n_k (fun j -> if j = 0 || j = n_k - 1 then 0.5 else 1.0))\n  in\n  (* Float-level σ²(R) grid for root-finding *)\n  let sigma2_arr = Array.make n_r 0.0 in\n  for i = 0 to n_r - 1 do\n    let r = Float.exp (Nx.item [ i ] logr) in\n    let kr = Nx.mul_s k r in\n    let y2 = Nx.mul kr kr in\n    let gauss = Nx.exp (Nx.neg y2) in\n    let integrand = Nx.mul k3pk gauss in\n    sigma2_arr.(i) <-\n      Nx.item [] (Nx.mul_s (Nx.sum (Nx.mul trap_w integrand)) dlnk)\n  done;\n  (* Find R_nl where σ² = 1 by linear interpolation in log space *)\n  let r_nl = ref (Float.exp (Nx.item [ 0 ] logr)) in\n  (let found = ref false in\n   for i = 0 to n_r - 2 do\n     if (not !found) && sigma2_arr.(i) >= 1.0 && sigma2_arr.(i + 1) <= 1.0 then begin\n       let ls0 = Float.log sigma2_arr.(i) in\n       let ls1 = Float.log sigma2_arr.(i + 1) in\n       let lr0 = Nx.item [ i ] logr in\n       let lr1 = Nx.item [ i + 1 ] logr in\n       let frac = (0.0 -. ls0) /. (ls1 -. ls0) in\n       r_nl := Float.exp (lr0 +. (frac *. (lr1 -. lr0)));\n       found := true\n     end\n   done);\n  let r_nl_f = !r_nl in\n  (* Differentiable Newton refinement for R_nl. Compute σ² at the float root,\n     then one Newton step: R' = R + R*(σ²-1)/dn where dσ²/dR = -dn/R.\n     Numerically R' ≈ R, but the gradient dR'/dp = -(∂σ²/∂p)/(∂σ²/∂R) is exact\n     via the implicit function theorem. 
*)\n  let kr0 = Nx.mul_s k r_nl_f in\n  let y2_0 = Nx.mul kr0 kr0 in\n  let gauss0 = Nx.exp (Nx.neg y2_0) in\n  let integrand0 = Nx.mul k3pk gauss0 in\n  let trap_sum f = Nx.mul_s (Nx.sum (Nx.mul trap_w f)) dlnk in\n  let s2_0 = trap_sum integrand0 in\n  let dn_0 = trap_sum (Nx.mul_s (Nx.mul integrand0 y2_0) 2.0) in\n  let r_nl_t =\n    Nx.add_s (Nx.mul_s (Nx.div (Nx.sub_s s2_0 1.0) dn_0) r_nl_f) r_nl_f\n  in\n  let k_nl = Nx.recip r_nl_t in\n  (* Recompute n_eff and C at the tensor R_nl for full differentiability. *)\n  let kr = Nx.mul k r_nl_t in\n  let y2 = Nx.mul kr kr in\n  let gauss = Nx.exp (Nx.neg y2) in\n  let integrand = Nx.mul k3pk gauss in\n  let s2 = trap_sum integrand in\n  let dn = trap_sum (Nx.mul_s (Nx.mul integrand y2) 2.0) in\n  let dc =\n    trap_sum (Nx.mul (Nx.mul_s integrand 4.0) (Nx.sub y2 (Nx.mul y2 y2)))\n  in\n  let n_eff = Nx.sub_s dn 3.0 in\n  let c_curv = Nx.add (Nx.mul dn dn) (Nx.div dc s2) in\n  (k_nl, n_eff, c_curv)\n\nlet nonlinear_power ?(p = default) k z =\n  let s = Nx.scalar f64 in\n  let pk_lin = linear_power ~p k z in\n  let k_nl, n, c = halofit_params p z in\n  let n2 = Nx.mul n n in\n  let n3 = Nx.mul n2 n in\n  let n4 = Nx.mul n3 n in\n  let a = Nx.recip (Nx.add_s z 1.0) in\n  let om_m = omega_m_a p a in\n  let om_de = omega_de_a p a in\n  let w = w_of p a in\n  let odew1 = Nx.mul om_de (Nx.add_s w 1.0) in\n  (* Takahashi et al. 
2012 coefficients — all tensor *)\n  let a_n =\n    Nx.pow (s 10.0)\n      (Nx.add\n         (Nx.add\n            (Nx.add\n               (Nx.add\n                  (Nx.add\n                     (Nx.add_s (Nx.mul_s n 2.8553) 1.5222)\n                     (Nx.mul_s n2 2.3706))\n                  (Nx.mul_s n3 0.9903))\n               (Nx.mul_s n4 0.2250))\n            (Nx.mul_s c (-0.6038)))\n         (Nx.mul_s odew1 0.1749))\n  in\n  let b_n =\n    Nx.pow (s 10.0)\n      (Nx.add\n         (Nx.add\n            (Nx.add\n               (Nx.add_s (Nx.mul_s n 0.5864) (-0.5642))\n               (Nx.mul_s n2 0.5716))\n            (Nx.mul_s c (-1.5474)))\n         (Nx.mul_s odew1 0.2279))\n  in\n  let c_n =\n    Nx.pow (s 10.0)\n      (Nx.add\n         (Nx.add (Nx.add_s (Nx.mul_s n 2.0404) 0.3698) (Nx.mul_s n2 0.8161))\n         (Nx.mul_s c 0.5869))\n  in\n  let gamma_n =\n    Nx.add (Nx.add_s (Nx.mul_s n (-0.0843)) 0.1971) (Nx.mul_s c 0.8460)\n  in\n  let alpha_n =\n    Nx.abs\n      (Nx.add\n         (Nx.add (Nx.add_s (Nx.mul_s n 1.3373) 6.0835) (Nx.mul_s n2 (-0.1959)))\n         (Nx.mul_s c (-5.5274)))\n  in\n  let beta_n =\n    Nx.add\n      (Nx.add\n         (Nx.add\n            (Nx.add\n               (Nx.add_s (Nx.mul_s n (-0.7354)) 2.0379)\n               (Nx.mul_s n2 0.3157))\n            (Nx.mul_s n3 1.2490))\n         (Nx.mul_s n4 0.3980))\n      (Nx.mul_s c (-0.1682))\n  in\n  let nu_n = Nx.pow (s 10.0) (Nx.add_s (Nx.mul_s n 3.6902) 5.2105) in\n  let f1 = Nx.pow om_m (s (-0.0307)) in\n  let f2 = Nx.pow om_m (s (-0.0585)) in\n  let f3 = Nx.pow om_m (s 0.0743) in\n  let y = Nx.div k k_nl in\n  (* Δ²_L = k³ P_lin / (2π²) *)\n  let d2l =\n    Nx.div\n      (Nx.mul (Nx.mul k (Nx.mul k k)) pk_lin)\n      (s (2.0 *. Float.pi *. 
Float.pi))\n  in\n  (* f(y) = y/4 + y²/8 *)\n  let fy = Nx.add (Nx.div_s y 4.0) (Nx.div_s (Nx.mul y y) 8.0) in\n  (* Quasi-linear term: Δ²_Q *)\n  let d2q =\n    Nx.mul d2l\n      (Nx.mul\n         (Nx.div\n            (Nx.pow (Nx.add_s d2l 1.0) beta_n)\n            (Nx.add_s (Nx.mul d2l alpha_n) 1.0))\n         (Nx.exp (Nx.neg fy)))\n  in\n  (* Halo term: Δ²_H; denominator exponent is 3 − γ (Smith et al. 2003, Eq. C4) *)\n  let three_f1 = Nx.mul_s f1 3.0 in\n  let d2h_prime =\n    Nx.div\n      (Nx.mul a_n (Nx.pow y three_f1))\n      (Nx.add_s\n         (Nx.add\n            (Nx.mul b_n (Nx.pow y f2))\n            (Nx.pow (Nx.mul (Nx.mul c_n f3) y) (Nx.sub (s 3.0) gamma_n)))\n         1.0)\n  in\n  let d2h = Nx.div d2h_prime (Nx.add_s (Nx.div nu_n (Nx.mul y y)) 1.0) in\n  let d2nl = Nx.add d2q d2h in\n  Nx.div (Nx.mul_s d2nl (2.0 *. Float.pi *. Float.pi)) (Nx.mul k (Nx.mul k k))\n\n(* BAO distance measures *)\n\nlet dh ?(p = default) z =\n  Unit.Length.of_tensor (Nx.mul_s (Nx.div c_km_s (hubble ~p z)) _mpc_m)\n\nlet dm ?(p = default) z =\n  Unit.Length.of_tensor (Nx.mul_s (transverse_comoving_mpc p z) _mpc_m)\n\nlet dv ?(p = default) z =\n  let dh_mpc = Nx.div c_km_s (hubble ~p z) in\n  let dm_mpc = transverse_comoving_mpc p z in\n  let cube = Nx.mul z (Nx.mul dh_mpc (Nx.mul dm_mpc dm_mpc)) in\n  Unit.Length.of_tensor (Nx.mul_s (Nx.pow_s cube (1.0 /. 3.0)) _mpc_m)\n\nlet sound_horizon ?(p = default) () =\n  let ob = omega_b p in\n  let h = Nx.div_s p.h0 100.0 in\n  let h2 = Nx.mul h h in\n  let w_m = Nx.mul p.omega_m h2 in\n  let w_b = Nx.mul ob h2 in\n  (* Eisenstein & Hu (1998) Eq. 2–6: sound horizon at drag epoch in Mpc/h *)\n  let t27sq = (t_cmb /. 2.7) ** 2.0 in\n  let t27_4 = t27sq *. 
t27sq in\n  let z_eq = Nx.div_s (Nx.mul_s w_m 2.50e4) t27_4 in\n  let k_eq = Nx.div (Nx.div_s (Nx.mul_s w_m 7.46e-2) t27sq) h in\n  let b1_z =\n    Nx.mul\n      (Nx.pow w_m (Nx.scalar f64 (-0.419)))\n      (Nx.add_s (Nx.mul_s (Nx.pow w_m (Nx.scalar f64 0.674)) 0.607) 1.0)\n    |> fun x -> Nx.mul_s x 0.313\n  in\n  let b2_z = Nx.mul_s (Nx.pow w_m (Nx.scalar f64 0.223)) 0.238 in\n  let z_d =\n    Nx.mul\n      (Nx.div\n         (Nx.mul_s (Nx.pow w_m (Nx.scalar f64 0.251)) 1291.0)\n         (Nx.add_s (Nx.mul_s (Nx.pow w_m (Nx.scalar f64 0.828)) 0.659) 1.0))\n      (Nx.add_s (Nx.mul b1_z (Nx.pow w_b b2_z)) 1.0)\n  in\n  let r_d =\n    Nx.mul (Nx.div_s (Nx.mul_s w_b 31.5) t27_4) (Nx.div (Nx.scalar f64 1e3) z_d)\n  in\n  let r_eq =\n    Nx.mul\n      (Nx.div_s (Nx.mul_s w_b 31.5) t27_4)\n      (Nx.div (Nx.scalar f64 1e3) z_eq)\n  in\n  (* Eq. 6 from Eisenstein & Hu: sound horizon in Mpc/h *)\n  let sh_d =\n    Nx.mul\n      (Nx.mul\n         (Nx.div (Nx.scalar f64 2.0) (Nx.mul_s k_eq 3.0))\n         (Nx.sqrt (Nx.div (Nx.scalar f64 6.0) r_eq)))\n      (Nx.log\n         (Nx.div\n            (Nx.add (Nx.sqrt (Nx.add_s r_d 1.0)) (Nx.sqrt (Nx.add r_eq r_d)))\n            (Nx.add_s (Nx.sqrt r_eq) 1.0)))\n  in\n  (* sh_d is in Mpc/h, convert to Mpc then to metres *)\n  let rs_mpc = Nx.div sh_d h in\n  Unit.Length.of_tensor (Nx.mul_s rs_mpc _mpc_m)\n"
  },
  {
    "path": "dev/umbra/lib/cosmo.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Cosmology for {e Λ}CDM, wCDM, and w0waCDM universes.\n\n    Computes distances, growth factors, and matter power spectra. Supports flat\n    and non-flat {e Λ}CDM, wCDM, and w0waCDM cosmologies through a single\n    parameter type. All functions are differentiable through Rune.\n\n    {[\n      let z = Nx.scalar Nx.float64 0.5 in\n      let dl = Cosmo.luminosity_distance z in\n      let dl_mpc = Unit.Length.in_mpc dl\n    ]}\n\n    Power spectrum functions require [omega_b], [n_s], and [sigma8] to be set\n    via {!set} or by using a preset like {!planck18}. *)\n\n(** {1:params Parameters} *)\n\ntype params\n(** The type for cosmological parameters. Subsumes flat {e Λ}CDM, non-flat\n    {e Λ}CDM, wCDM, and w0waCDM. *)\n\n(** {2:float_constructors Float constructors}\n\n    Create parameters from plain floats. *)\n\nval flat_lcdm : h0:float -> omega_m:float -> params\n(** [flat_lcdm ~h0 ~omega_m] is flat {e Λ}CDM with {e Ω}{_ L}[ = 1 - omega_m].\n\n    Raises [Invalid_argument] if [h0 <= 0] or [omega_m < 0]. *)\n\nval lcdm : h0:float -> omega_m:float -> omega_l:float -> params\n(** [lcdm ~h0 ~omega_m ~omega_l] is {e Λ}CDM with curvature\n    {e Ω}{_ k}[ = 1 - omega_m - omega_l].\n\n    Raises [Invalid_argument] if [h0 <= 0]. *)\n\nval wcdm :\n  h0:float -> omega_m:float -> ?omega_l:float -> w0:float -> unit -> params\n(** [wcdm ~h0 ~omega_m ~w0 ()] is wCDM with constant dark energy equation of\n    state [w0]. [omega_l] defaults to [1 - omega_m] (flat). *)\n\nval w0wacdm :\n  h0:float ->\n  omega_m:float ->\n  ?omega_l:float ->\n  w0:float ->\n  wa:float ->\n  unit ->\n  params\n(** [w0wacdm ~h0 ~omega_m ~w0 ~wa ()] is the CPL parameterization\n    [w(z) = w0 + wa * z/(1+z)]. 
[omega_l] defaults to [1 - omega_m] (flat). *)\n\n(** {2:tensor_constructors Tensor constructors}\n\n    Create parameters from Nx scalar tensors for differentiable construction. *)\n\nval create_flat_lcdm : h0:Nx.float64_t -> omega_m:Nx.float64_t -> params\n\nval create_lcdm :\n  h0:Nx.float64_t -> omega_m:Nx.float64_t -> omega_l:Nx.float64_t -> params\n\nval create_wcdm :\n  h0:Nx.float64_t ->\n  omega_m:Nx.float64_t ->\n  ?omega_l:Nx.float64_t ->\n  w0:Nx.float64_t ->\n  unit ->\n  params\n\nval create_w0wacdm :\n  h0:Nx.float64_t ->\n  omega_m:Nx.float64_t ->\n  ?omega_l:Nx.float64_t ->\n  w0:Nx.float64_t ->\n  wa:Nx.float64_t ->\n  unit ->\n  params\n\n(** {2:accessors Accessors} *)\n\nval h0 : params -> Nx.float64_t\n(** [h0 p] is the Hubble constant H{_ 0} in km s{^ -1} Mpc{^ -1}. *)\n\nval omega_m : params -> Nx.float64_t\n(** [omega_m p] is the matter density parameter {e Ω}{_ m}. *)\n\nval omega_l : params -> Nx.float64_t\n(** [omega_l p] is the dark energy density parameter {e Ω}{_ Λ}. *)\n\nval omega_k : params -> Nx.float64_t\n(** [omega_k p] is the curvature density parameter {e Ω}{_ k}[ = 1 - Ω_m - Ω_Λ].\n*)\n\nval w0 : params -> Nx.float64_t\n(** [w0 p] is the dark energy equation of state parameter w{_ 0}. *)\n\nval wa : params -> Nx.float64_t\n(** [wa p] is the CPL time-varying dark energy parameter w{_ a}. *)\n\nval omega_b : params -> Nx.float64_t\n(** [omega_b p] is the baryon density parameter {e Ω}{_ b}.\n\n    Raises [Invalid_argument] if not set. *)\n\nval n_s : params -> Nx.float64_t\n(** [n_s p] is the primordial spectral index n{_ s}.\n\n    Raises [Invalid_argument] if not set. *)\n\nval sigma8 : params -> Nx.float64_t\n(** [sigma8 p] is the amplitude of matter fluctuations {e σ}{_ 8}.\n\n    Raises [Invalid_argument] if not set. 
*)\n\n(** {2:set Setting power spectrum parameters} *)\n\nval set : ?omega_b:float -> ?n_s:float -> ?sigma8:float -> params -> params\n(** [set ~omega_b ~n_s ~sigma8 p] is [p] with the given power spectrum\n    parameters set. Unspecified parameters retain their previous value. *)\n\nval set_t :\n  ?h0:Nx.float64_t ->\n  ?omega_m:Nx.float64_t ->\n  ?omega_l:Nx.float64_t ->\n  ?omega_b:Nx.float64_t ->\n  ?n_s:Nx.float64_t ->\n  ?sigma8:Nx.float64_t ->\n  params ->\n  params\n(** [set_t] is like {!set} but takes Nx scalar tensors for differentiable\n    construction. Recomputes {e Ω}{_ k} when [omega_m] or [omega_l] changes. *)\n\n(** {2:presets Presets} *)\n\nval default : params\n(** [default] is flat {e Λ}CDM with [h0 = 70], [omega_m = 0.3]. *)\n\nval planck18 : params\n(** [planck18] is Planck 2018 flat {e Λ}CDM: [h0 = 67.66], [omega_m = 0.3111],\n    [omega_b = 0.0490], [n_s = 0.9665], [sigma8 = 0.8102]. *)\n\nval planck15 : params\n(** [planck15] is Planck 2015 flat {e Λ}CDM: [h0 = 67.74], [omega_m = 0.3075],\n    [omega_b = 0.0486], [n_s = 0.9667], [sigma8 = 0.8159]. *)\n\nval wmap9 : params\n(** [wmap9] is WMAP9 flat {e Λ}CDM: [h0 = 69.32], [omega_m = 0.2865],\n    [omega_b = 0.0463], [n_s = 0.9608], [sigma8 = 0.820]. *)\n\n(** {1:e_z Hubble parameter} *)\n\nval e_of : params -> Nx.float64_t -> Nx.float64_t\n(** [e_of p z] is E(z) = H(z)/H{_ 0} at redshift [z]. Fully differentiable\n    through Rune. *)\n\nval hubble : ?p:params -> Nx.float64_t -> Nx.float64_t\n(** [hubble z] is H(z) in km s{^ -1} Mpc{^ -1}. [p] defaults to {!default}. *)\n\nval critical_density : ?p:params -> Nx.float64_t -> Nx.float64_t\n(** [critical_density z] is the critical density {e rho}{_ c}(z) in kg m{^ -3}.\n    [p] defaults to {!default}. *)\n\n(** {1:distances Distances} *)\n\nval comoving_distance : ?p:params -> Nx.float64_t -> Unit.length Unit.t\n(** [comoving_distance z] is the line-of-sight comoving distance at redshift\n    [z]. [p] defaults to {!default}. 
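\n\n    For example, with the {!planck18} preset:\n    {[\n      let z = Nx.scalar Nx.float64 0.5 in\n      let d = Cosmo.comoving_distance ~p:Cosmo.planck18 z in\n      let d_mpc = Unit.Length.in_mpc d\n    ]}\n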
*)\n\nval luminosity_distance : ?p:params -> Nx.float64_t -> Unit.length Unit.t\n(** [luminosity_distance z] is the luminosity distance at redshift [z]. For\n    non-flat models, applies the curvature correction via the transverse\n    comoving distance. [p] defaults to {!default}. *)\n\nval angular_diameter_distance : ?p:params -> Nx.float64_t -> Unit.length Unit.t\n(** [angular_diameter_distance z] is the angular diameter distance at redshift\n    [z]. [p] defaults to {!default}. *)\n\nval distance_modulus : ?p:params -> Nx.float64_t -> Nx.float64_t\n(** [distance_modulus z] is the distance modulus\n    {e mu}[ = 5 log10(d_L / Mpc) + 25]. [p] defaults to {!default}. *)\n\n(** {1:angular Angular scale} *)\n\nval angular_size :\n  ?p:params -> z:Nx.float64_t -> Unit.length Unit.t -> Unit.angle Unit.t\n(** [angular_size ~z length] is the angular size of [length] at redshift [z]\n    under the small-angle approximation [{e theta} = l / d_A]. [p] defaults to\n    {!default}. *)\n\nval physical_size :\n  ?p:params -> z:Nx.float64_t -> Unit.angle Unit.t -> Unit.length Unit.t\n(** [physical_size ~z angle] is the physical size subtended by [angle] at\n    redshift [z] under the small-angle approximation [l = {e theta} * d_A]. [p]\n    defaults to {!default}. *)\n\n(** {1:times Cosmic times} *)\n\nval lookback_time : ?p:params -> Nx.float64_t -> Unit.time Unit.t\n(** [lookback_time z] is the lookback time to redshift [z]. [p] defaults to\n    {!default}. *)\n\nval age : ?p:params -> Nx.float64_t -> Unit.time Unit.t\n(** [age z] is the age of the universe at redshift [z].\n\n    Integrates from [z] to [z = 1000]. This approximation is accurate to ~0.1%\n    for late-time cosmology ([z < 10]) but omits the radiation era and is not\n    suitable for CMB-epoch calculations. [p] defaults to {!default}. 
*)\n\n(** {1:inverse Inverse lookup} *)\n\nval z_at_value :\n  ?p:params ->\n  ?zmin:float ->\n  ?zmax:float ->\n  ?xtol:float ->\n  (p:params -> Nx.float64_t -> Nx.float64_t) ->\n  Nx.float64_t ->\n  Nx.float64_t\n(** [z_at_value f target] finds the redshift [z] where [f ~p z = target] using\n    Brent's method. [f] must be a monotonic function of redshift.\n\n    For distance functions, unwrap the unit first:\n    {[\n    z_at_value\n      (fun ~p z -> Unit.Length.in_mpc (Cosmo.comoving_distance ~p z))\n      target\n    ]}\n\n    [zmin] defaults to [1e-8]. [zmax] defaults to [1000.0]. [xtol] defaults to\n    [1e-8].\n\n    {b Warning.} Not differentiable (iterative root-finding).\n\n    Raises [Invalid_argument] if [target] is outside [[f(zmin), f(zmax)]]. *)\n\n(** {1:bao BAO distance measures} *)\n\nval dh : ?p:params -> Nx.float64_t -> Unit.length Unit.t\n(** [dh z] is the Hubble distance D{_ H}(z) = c / H(z). [p] defaults to\n    {!default}. *)\n\nval dm : ?p:params -> Nx.float64_t -> Unit.length Unit.t\n(** [dm z] is the comoving transverse distance D{_ M}(z). Equal to\n    {!comoving_distance} for flat cosmologies; includes curvature correction\n    otherwise. [p] defaults to {!default}. *)\n\nval dv : ?p:params -> Nx.float64_t -> Unit.length Unit.t\n(** [dv z] is the volume-averaged BAO distance D{_ V}(z) = (z D{_ H}(z)\n    D{_ M}{^ 2}(z)){^ 1/3}. [p] defaults to {!default}. *)\n\nval sound_horizon : ?p:params -> unit -> Unit.length Unit.t\n(** [sound_horizon ()] is the comoving sound horizon at the drag epoch\n    r{_ s}(z{_ drag}), using the Eisenstein & Hu (1998) fitting formulae for\n    z{_ drag} and the sound horizon integral.\n\n    Raises [Invalid_argument] if [omega_b] is not set in [p]. [p] defaults to\n    {!default}. 
*)\n\n(** {1:growth Structure growth} *)\n\nval growth_factor : ?p:params -> Nx.float64_t -> Nx.float64_t\n(** [growth_factor z] is the linear growth factor D(z), normalized to D(0) = 1.\n    Computed via the integral form D(a) {e ∝} E(a) {e ∫}{_ 0}{^ a} da' /\n    (a'{^ 3} E{^ 3}(a')).\n\n    Does not require [omega_b], [n_s], or [sigma8]. [p] defaults to {!default}.\n*)\n\nval growth_rate : ?p:params -> Nx.float64_t -> Nx.float64_t\n(** [growth_rate z] is the linear growth rate f(z) = d ln D / d ln a, computed\n    from the exact derivative of the integral-form growth factor.\n\n    [p] defaults to {!default}. *)\n\n(** {1:power Matter power spectrum}\n\n    All power spectrum functions require [omega_b], [n_s], and [sigma8] to be\n    set in the parameters. Use {!set} or a preset like {!planck18}.\n\n    Wavenumbers [k] are in h/Mpc. Power spectra are in (Mpc/h){^ 3}. *)\n\nval linear_power : ?p:params -> Nx.float64_t -> Nx.float64_t -> Nx.float64_t\n(** [linear_power ~p k z] is the linear matter power spectrum P(k, z). Uses the\n    Eisenstein & Hu (1998) transfer function with baryon oscillations and\n    {e σ}{_ 8} normalization.\n\n    Raises [Invalid_argument] if [omega_b], [n_s], or [sigma8] are not set. *)\n\nval nonlinear_power : ?p:params -> Nx.float64_t -> Nx.float64_t -> Nx.float64_t\n(** [nonlinear_power ~p k z] is the nonlinear matter power spectrum via the\n    Halofit fitting formula (Takahashi et al. 2012).\n\n    {b Warning.} The nonlinear scale k{_ nl} is found by float-level\n    root-finding; gradients do not flow through it. The mapping from k{_ nl} to\n    P{_ nl}(k) is differentiable.\n\n    Raises [Invalid_argument] if [omega_b], [n_s], or [sigma8] are not set. *)\n"
  },
  {
    "path": "dev/umbra/lib/dune",
    "content": "(library\n (name umbra)\n (public_name umbra)\n (private_modules kdtree filter_data vega_data)\n (libraries nx unix))\n"
  },
  {
    "path": "dev/umbra/lib/extinction.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nlet f64 = Nx.float64\n\n(* Extinction law: wavelength in metres → A_λ/A_V *)\ntype law = wavelength:Nx.float64_t -> Nx.float64_t\n\n(* Horner evaluation: c0 + y*(c1 + y*(c2 + ...)) *)\nlet horner y coeffs =\n  let n = Array.length coeffs in\n  let acc = ref (Nx.scalar f64 coeffs.(n - 1)) in\n  for i = n - 2 downto 0 do\n    acc := Nx.add_s (Nx.mul y !acc) coeffs.(i)\n  done;\n  !acc\n\n(* Shared CCM89/O'Donnell94 implementation parameterized by R_V. Only the\n   optical a/b polynomial coefficients differ between the two laws; IR and UV\n   regions are identical. Uses Nx.where for differentiable piecewise selection.\n   Valid for 0.125–3.5 μm (x = 0.3–8.0 μm⁻¹). *)\nlet ccm89_impl a_opt b_opt ~rv ~wavelength =\n  (* Convert wavelength (m) to inverse microns *)\n  let x = Nx.div (Nx.scalar f64 1e-6) wavelength in\n  (* Infrared: 0.3 ≤ x < 1.1 *)\n  let a_ir = Nx.mul_s (Nx.pow_s x 1.61) 0.574 in\n  let b_ir = Nx.mul_s (Nx.pow_s x 1.61) (-0.527) in\n  (* Optical/NIR: 1.1 ≤ x ≤ 3.3, polynomial in (x - 1.82) *)\n  let y = Nx.sub_s x 1.82 in\n  let a_o = horner y a_opt in\n  let b_o = horner y b_opt in\n  (* UV: 3.3 < x ≤ 8.0 *)\n  let fa =\n    Nx.where (Nx.greater_equal_s x 5.9)\n      (Nx.add\n         (Nx.mul_s (Nx.square (Nx.sub_s x 5.9)) (-0.04473))\n         (Nx.mul_s (Nx.pow_s (Nx.sub_s x 5.9) 3.0) (-0.009779)))\n      (Nx.scalar f64 0.0)\n  in\n  let fb =\n    Nx.where (Nx.greater_equal_s x 5.9)\n      (Nx.add\n         (Nx.mul_s (Nx.square (Nx.sub_s x 5.9)) 0.2130)\n         (Nx.mul_s (Nx.pow_s (Nx.sub_s x 5.9) 3.0) 0.1207))\n      (Nx.scalar f64 0.0)\n  in\n  (* a(x) = 1.752 - 0.316*x - 0.104/((x-4.67)² + 0.341) + F_a *)\n  let a_uv_base = Nx.add_s (Nx.mul_s x (-0.316)) 1.752 in\n  let bump_a 
=\n    Nx.div (Nx.scalar f64 (-0.104))\n      (Nx.add (Nx.square (Nx.sub_s x 4.67)) (Nx.scalar f64 0.341))\n  in\n  let a_uv = Nx.add (Nx.add a_uv_base bump_a) fa in\n  (* b(x) = -3.090 + 1.825*x + 1.206/((x-4.62)² + 0.263) + F_b *)\n  let b_uv_base = Nx.add_s (Nx.mul_s x 1.825) (-3.090) in\n  let bump_b =\n    Nx.div (Nx.scalar f64 1.206)\n      (Nx.add (Nx.square (Nx.sub_s x 4.62)) (Nx.scalar f64 0.263))\n  in\n  let b_uv = Nx.add (Nx.add b_uv_base bump_b) fb in\n  (* Piecewise selection using Nx.where *)\n  let ir_mask = Nx.less_s x 1.1 in\n  let uv_mask = Nx.greater_s x 3.3 in\n  let a = Nx.where ir_mask a_ir (Nx.where uv_mask a_uv a_o) in\n  let b = Nx.where ir_mask b_ir (Nx.where uv_mask b_uv b_o) in\n  (* A_λ/A_V = a(x) + b(x)/R_V *)\n  Nx.add a (Nx.div b rv)\n\n(* CCM89: Cardelli, Clayton & Mathis 1989, ApJ 345, 245 — optical\n   coefficients *)\nlet ccm89_a =\n  [| 1.0; 0.17699; -0.50447; -0.02427; 0.72085; 0.01979; -0.77530; 0.32999 |]\n\nlet ccm89_b =\n  [| 0.0; 1.41338; 2.28305; 1.07233; -5.38434; -0.62251; 5.30260; -2.09002 |]\n\nlet ccm89 ~rv = fun ~wavelength -> ccm89_impl ccm89_a ccm89_b ~rv ~wavelength\n\n(* O'Donnell 1994, ApJ 422, 158 — revised optical coefficients (degree 8) *)\nlet od94_a =\n  [| 1.0; 0.104; -0.609; 0.701; 1.137; -1.718; -0.827; 1.647; -0.505 |]\n\nlet od94_b =\n  [| 0.0; 1.952; 2.908; -3.989; -7.985; 11.102; 5.491; -10.805; 3.347 |]\n\nlet odonnell94 ~rv = fun ~wavelength -> ccm89_impl od94_a od94_b ~rv ~wavelength\n\n(* Calzetti 2000: Calzetti et al. 2000, ApJ 533, 682. Starburst attenuation law.\n   Fixed R_V = 4.05. Valid 0.12–2.2 μm. 
*)\nlet calzetti00 =\n fun ~wavelength ->\n  let lam_um = Nx.mul_s wavelength 1e6 in\n  let rv = 4.05 in\n  (* Blue: 0.12–0.63 μm k'(λ) = 2.659 * (-2.156 + 1.509/λ - 0.198/λ² + 0.011/λ³)\n     + R_V *)\n  let k_blue =\n    Nx.add_s\n      (Nx.mul_s\n         (Nx.add_s\n            (Nx.add\n               (Nx.mul_s (Nx.recip lam_um) 1.509)\n               (Nx.add\n                  (Nx.mul_s (Nx.pow_s lam_um (-2.0)) (-0.198))\n                  (Nx.mul_s (Nx.pow_s lam_um (-3.0)) 0.011)))\n            (-2.156))\n         2.659)\n      rv\n  in\n  (* Red: 0.63–2.2 μm k'(λ) = 2.659 * (-1.857 + 1.040/λ) + R_V *)\n  let k_red =\n    Nx.add_s\n      (Nx.mul_s (Nx.add_s (Nx.mul_s (Nx.recip lam_um) 1.040) (-1.857)) 2.659)\n      rv\n  in\n  let blue_mask = Nx.less_s lam_um 0.63 in\n  let k = Nx.where blue_mask k_blue k_red in\n  (* A_λ/A_V = k'(λ) / R_V *)\n  Nx.div_s k rv\n\n(* Fitzpatrick 1999: Fitzpatrick 1999, PASP 111, 63. R_V-dependent extinction\n   using cubic spline for optical/NIR and Fitzpatrick & Massa parameterization\n   for UV. Valid 0.1–3.5 μm. *)\n\n(* FM UV parameters (fixed) *)\nlet f99_x0_sq = 4.596 *. 4.596\nlet f99_gamma_sq = 0.99 *. 0.99\nlet f99_c3 = 3.23\nlet f99_c4 = 0.41\nlet f99_c5 = 5.9\n\n(* Spline anchor x-values (inverse microns) *)\nlet f99_xk =\n  [|\n    0.;\n    1e4 /. 26500.;\n    1e4 /. 12200.;\n    1e4 /. 6000.;\n    1e4 /. 5470.;\n    1e4 /. 4670.;\n    1e4 /. 4110.;\n    1e4 /. 2700.;\n    1e4 /. 2600.;\n  |]\n\nlet f99_hk = Array.init 8 (fun i -> f99_xk.(i + 1) -. f99_xk.(i))\n\n(* Drude profile at a fixed x-value *)\nlet f99_drude x =\n  let x2 = x *. x in\n  let y = x2 -. f99_x0_sq in\n  x2 /. ((y *. y) +. (x2 *. f99_gamma_sq))\n\n(* Precompute spline basis matrix M (7×9): maps 9 anchor y-values to 7 interior\n   second derivatives. Natural boundary conditions: m[0] = m[8] = 0.\n\n   The tridiagonal system Am = Dy is solved offline; M = A⁻¹D is stored. 
At\n   runtime m[j] = Σ M[j][i] y[i] — a weighted sum of Nx scalars, fully\n   differentiable through Rune. *)\nlet f99_basis =\n  let n = 7 in\n  let h = f99_hk in\n  (* Right-hand side matrix D (7×9) *)\n  let d_mat =\n    Array.init n (fun j ->\n        Array.init 9 (fun i ->\n            if i = j then 6.0 /. h.(j)\n            else if i = j + 1 then ~-.((6.0 /. h.(j + 1)) +. (6.0 /. h.(j)))\n            else if i = j + 2 then 6.0 /. h.(j + 1)\n            else 0.0))\n  in\n  (* Tridiagonal A: diag, sub, sup *)\n  let diag = Array.init n (fun j -> 2.0 *. (h.(j) +. h.(j + 1))) in\n  let sub j = h.(j) in\n  let sup j = h.(j + 1) in\n  (* Solve A X_col = D_col for each of 9 columns via Thomas algorithm *)\n  let m = Array.init n (fun _ -> Array.make 9 0.0) in\n  for col = 0 to 8 do\n    let b = Array.init n (fun j -> d_mat.(j).(col)) in\n    let c = Array.make n 0.0 in\n    let d = Array.make n 0.0 in\n    c.(0) <- sup 0 /. diag.(0);\n    d.(0) <- b.(0) /. diag.(0);\n    for i = 1 to n - 1 do\n      let w = diag.(i) -. (sub i *. c.(i - 1)) in\n      c.(i) <- (if i < n - 1 then sup i /. w else 0.0);\n      d.(i) <- (b.(i) -. (sub i *. d.(i - 1))) /. w\n    done;\n    m.(n - 1).(col) <- d.(n - 1);\n    for i = n - 2 downto 0 do\n      m.(i).(col) <- d.(i) -. (c.(i) *. m.(i + 1).(col))\n    done\n  done;\n  m\n\n(* Evaluate a cubic spline piece on [xk, xk1] at tensor x. mk and mk1 are second\n   derivatives (Nx scalars); yk, yk1 are y-values. *)\nlet f99_eval_piece hk xk yk yk1 mk mk1 x =\n  let a = yk in\n  let c = Nx.mul_s mk 0.5 in\n  let d = Nx.div_s (Nx.sub mk1 mk) (6.0 *. hk) in\n  let b =\n    Nx.sub\n      (Nx.div_s (Nx.sub yk1 yk) hk)\n      (Nx.mul_s (Nx.add (Nx.mul_s mk 2.0) mk1) (hk /. 
6.0))\n  in\n  let t = Nx.sub_s x xk in\n  Nx.add a (Nx.mul t (Nx.add b (Nx.mul t (Nx.add c (Nx.mul t d)))))\n\nlet fitzpatrick99 ~rv =\n  let rv2 = Nx.mul rv rv in\n  let rv3 = Nx.mul rv2 rv in\n  let rv4 = Nx.mul rv2 rv2 in\n  (* FM UV c1, c2 — computed once, used for anchor y-values and the closure *)\n  let c2_uv = Nx.add_s (Nx.mul_s (Nx.recip rv) 4.717) (-0.824) in\n  let c1_uv = Nx.sub (Nx.scalar f64 2.030) (Nx.mul_s c2_uv 3.007) in\n  let uv_anchor xk =\n    Nx.add c1_uv (Nx.add_s (Nx.mul_s c2_uv xk) (f99_c3 *. f99_drude xk))\n  in\n  (* 9 anchor E(λ-V)/E(B-V) values *)\n  let y =\n    [|\n      Nx.neg rv;\n      Nx.sub (Nx.mul_s rv (0.26469 /. 3.1)) rv;\n      Nx.sub (Nx.mul_s rv (0.82925 /. 3.1)) rv;\n      Nx.sub\n        (Nx.add\n           (Nx.add_s (Nx.mul_s rv 1.00270) (-0.422809))\n           (Nx.mul_s rv2 2.13572e-04))\n        rv;\n      Nx.sub\n        (Nx.add\n           (Nx.add_s (Nx.mul_s rv 1.00216) (-5.13540e-02))\n           (Nx.mul_s rv2 (-7.35778e-05)))\n        rv;\n      Nx.sub\n        (Nx.add\n           (Nx.add_s (Nx.mul_s rv 1.00184) 0.700127)\n           (Nx.mul_s rv2 (-3.32598e-05)))\n        rv;\n      Nx.sub\n        (Nx.add\n           (Nx.add\n              (Nx.add\n                 (Nx.add_s (Nx.mul_s rv 1.01707) 1.19456)\n                 (Nx.mul_s rv2 (-5.46959e-03)))\n              (Nx.mul_s rv3 7.97809e-04))\n           (Nx.mul_s rv4 (-4.45636e-05)))\n        rv;\n      (* UV anchors from FM parameterization *)\n      uv_anchor f99_xk.(7);\n      uv_anchor f99_xk.(8);\n    |]\n  in\n  (* Second derivatives m[0..8]: m[0] = m[8] = 0, m[1..7] from basis matrix *)\n  let zero = Nx.scalar f64 0.0 in\n  let m2 = Array.make 9 zero in\n  for j = 0 to 6 do\n    let acc = ref zero in\n    for i = 0 to 8 do\n      acc := Nx.add !acc (Nx.mul_s y.(i) f99_basis.(j).(i))\n    done;\n    m2.(j + 1) <- !acc\n  done;\n  (* Precompute spline piece coefficients for intervals 0..6 *)\n  let h = f99_hk in\n  let pieces =\n    Array.init 7 (fun 
k ->\n        let hk = h.(k) in\n        let yk = y.(k) in\n        let yk1 = y.(k + 1) in\n        let mk = m2.(k) in\n        let mk1 = m2.(k + 1) in\n        (hk, f99_xk.(k), yk, yk1, mk, mk1))\n  in\n  fun ~wavelength ->\n    (* Convert wavelength (m) to inverse microns *)\n    let x = Nx.div (Nx.scalar f64 1e-6) wavelength in\n    (* Evaluate spline for each interval *)\n    let eval k =\n      let hk, xk, yk, yk1, mk, mk1 = pieces.(k) in\n      f99_eval_piece hk xk yk yk1 mk mk1 x\n    in\n    let s0 = eval 0 in\n    let s1 = eval 1 in\n    let s2 = eval 2 in\n    let s3 = eval 3 in\n    let s4 = eval 4 in\n    let s5 = eval 5 in\n    let s6 = eval 6 in\n    let opt_nir =\n      Nx.where\n        (Nx.less_s x f99_xk.(1))\n        s0\n        (Nx.where\n           (Nx.less_s x f99_xk.(2))\n           s1\n           (Nx.where\n              (Nx.less_s x f99_xk.(3))\n              s2\n              (Nx.where\n                 (Nx.less_s x f99_xk.(4))\n                 s3\n                 (Nx.where\n                    (Nx.less_s x f99_xk.(5))\n                    s4\n                    (Nx.where (Nx.less_s x f99_xk.(6)) s5 s6)))))\n    in\n    (* UV: FM parameterization for x ≥ 1e4/2700 *)\n    let x2 = Nx.square x in\n    let y_bump = Nx.sub x2 (Nx.scalar f64 f99_x0_sq) in\n    let drude =\n      Nx.div x2 (Nx.add (Nx.mul y_bump y_bump) (Nx.mul_s x2 f99_gamma_sq))\n    in\n    let fuv =\n      Nx.where\n        (Nx.greater_equal_s x f99_c5)\n        (let dx = Nx.sub_s x f99_c5 in\n         let dx2 = Nx.square dx in\n         Nx.add (Nx.mul_s dx2 0.5392) (Nx.mul_s (Nx.mul dx2 dx) 0.05644))\n        (Nx.scalar f64 0.0)\n    in\n    let k_uv =\n      Nx.add c1_uv\n        (Nx.add (Nx.mul c2_uv x)\n           (Nx.add (Nx.mul_s drude f99_c3) (Nx.mul_s fuv f99_c4)))\n    in\n    (* Select optical/NIR vs UV *)\n    let e_over_ebv = Nx.where (Nx.less_s x f99_xk.(7)) opt_nir k_uv in\n    (* A(λ)/A(V) = E(λ-V)/E(B-V) / R_V + 1 *)\n    Nx.add_s (Nx.div e_over_ebv rv) 
1.0\n\nlet curve law ~wavelength = law ~wavelength:(Unit.Length.to_tensor wavelength)\nlet ln10_over_2_5 = Float.log 10.0 *. 0.4\n\nlet scale_flux sign law ~av spectrum =\n  let wave_m = Unit.Length.to_tensor (Spectrum.wavelength spectrum) in\n  let a_lambda = Nx.mul (law ~wavelength:wave_m) av in\n  let factor = Nx.exp (Nx.mul_s a_lambda (sign *. ln10_over_2_5)) in\n  Spectrum.scale factor spectrum\n\nlet apply law ~av spectrum = scale_flux (-1.0) law ~av spectrum\nlet unredden law ~av spectrum = scale_flux 1.0 law ~av spectrum\n"
  },
  {
    "path": "dev/umbra/lib/extinction.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Dust extinction laws.\n\n    Extinction laws describe how interstellar dust attenuates and reddens light\n    as a function of wavelength. A {!law} maps wavelength to the normalised\n    extinction curve A{_ lambda} / A{_ V}.\n\n    {!apply} and {!unredden} are differentiable through Rune with respect to\n    [av]. The extinction curve evaluation itself (law constructors and {!curve})\n    is not differentiable (scalar-level polynomial and spline evaluation). *)\n\n(** {1:types Types} *)\n\ntype law\n(** The type for extinction laws. *)\n\n(** {1:laws Standard laws} *)\n\nval ccm89 : rv:Nx.float64_t -> law\n(** [ccm89 ~rv] is the\n    {{:https://ui.adsabs.harvard.edu/abs/1989ApJ...345..245C}Cardelli, Clayton &\n     Mathis (1989)} Milky Way extinction law. [rv] is the total-to-selective\n    extinction ratio R{_ V} (typically 3.1).\n\n    Valid for 0.125--3.5 {e mu}m (0.3--8.0 {e mu}m{^ -1}). Values outside this\n    range are extrapolations. *)\n\nval fitzpatrick99 : rv:Nx.float64_t -> law\n(** [fitzpatrick99 ~rv] is the\n    {{:https://ui.adsabs.harvard.edu/abs/1999PASP..111...63F}Fitzpatrick (1999)}\n    R{_ V}-dependent Milky Way extinction law. Uses a cubic spline for\n    optical/NIR and the Fitzpatrick & Massa UV parameterization.\n\n    Valid for 0.1--3.5 {e mu}m (0.3--10.0 {e mu}m{^ -1}). *)\n\nval odonnell94 : rv:Nx.float64_t -> law\n(** [odonnell94 ~rv] is the\n    {{:https://ui.adsabs.harvard.edu/abs/1994ApJ...422..158O}O'Donnell (1994)}\n    Milky Way extinction law. Identical to {!ccm89} except for revised optical\n    coefficients (1.1--3.3 {e mu}m{^ -1}).\n\n    Valid for 0.125--3.5 {e mu}m. 
*)\n\nval calzetti00 : law\n(** [calzetti00] is the\n    {{:https://ui.adsabs.harvard.edu/abs/2000ApJ...533..682C}Calzetti et al.\n     (2000)} starburst attenuation law with fixed R{_ V} = 4.05.\n\n    Valid for 0.12--2.2 {e mu}m. Values outside this range are extrapolations.\n*)\n\n(** {1:evaluation Evaluation} *)\n\nval curve : law -> wavelength:Unit.length Unit.t -> Nx.float64_t\n(** [curve law ~wavelength] is A{_ lambda} / A{_ V} at the given wavelengths.\n    Not differentiable. *)\n\n(** {1:application Application} *)\n\nval apply : law -> av:Nx.float64_t -> 'a Spectrum.t -> 'a Spectrum.t\n(** [apply law ~av spectrum] reddens [spectrum] by applying [av] magnitudes of\n    V-band extinction. The spectral kind is preserved. Differentiable through\n    Rune with respect to [av] and the spectrum values. *)\n\nval unredden : law -> av:Nx.float64_t -> 'a Spectrum.t -> 'a Spectrum.t\n(** [unredden law ~av spectrum] de-reddens [spectrum] by removing [av]\n    magnitudes of V-band extinction. The spectral kind is preserved.\n    Differentiable through Rune with respect to [av] and the spectrum values. *)\n"
  },
  {
    "path": "dev/umbra/lib/filter_data.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n[@@@ocamlformat \"disable\"]\n\n(* Filter transmission curves from the SVO Filter Profile Service.\n   http://svo2.cab.inta-csic.es/theory/fps/\n   Wavelengths in Angstroms, throughput dimensionless. *)\n\n(* SDSS *)\n\nlet sdss_u_wave =\n  [|\n    2980.0; 3005.0; 3030.0; 3055.0; 3080.0; 3105.0;\n    3130.0; 3155.0; 3180.0; 3205.0; 3230.0; 3255.0;\n    3280.0; 3305.0; 3330.0; 3355.0; 3380.0; 3405.0;\n    3430.0; 3455.0; 3480.0; 3505.0; 3530.0; 3555.0;\n    3580.0; 3605.0; 3630.0; 3655.0; 3680.0; 3705.0;\n    3730.0; 3755.0; 3780.0; 3805.0; 3830.0; 3855.0;\n    3880.0; 3905.0; 3930.0; 3955.0; 3980.0; 4005.0;\n    4030.0; 4055.0; 4080.0; 4105.0; 4130.0\n  |]\n\nlet sdss_u_thru =\n  [|\n    0.0; 0.0001; 0.0005; 0.0013; 0.0026; 0.0052;\n    0.0093; 0.0161; 0.024; 0.0323; 0.0405; 0.0485;\n    0.0561; 0.0634; 0.07; 0.0756; 0.0803; 0.0848;\n    0.0883; 0.0917; 0.0959; 0.1001; 0.1029; 0.1044;\n    0.1053; 0.1063; 0.1075; 0.1085; 0.1084; 0.1064;\n    0.1024; 0.0966; 0.0887; 0.0787; 0.0672; 0.0549;\n    0.0413; 0.0268; 0.0145; 0.0075; 0.0042; 0.0022;\n    0.001; 0.0006; 0.0004; 0.0002; 0.0\n  |]\n\n(* SDSS *)\n\nlet sdss_g_wave =\n  [|\n    3630.0; 3655.0; 3680.0; 3705.0; 3730.0; 3755.0;\n    3780.0; 3805.0; 3830.0; 3855.0; 3880.0; 3905.0;\n    3930.0; 3955.0; 3980.0; 4005.0; 4030.0; 4055.0;\n    4080.0; 4105.0; 4130.0; 4155.0; 4180.0; 4205.0;\n    4230.0; 4255.0; 4280.0; 4305.0; 4330.0; 4355.0;\n    4380.0; 4405.0; 4430.0; 4455.0; 4480.0; 4505.0;\n    4530.0; 4555.0; 4580.0; 4605.0; 4630.0; 4655.0;\n    4680.0; 4705.0; 4730.0; 4755.0; 4780.0; 4805.0;\n    4830.0; 4855.0; 4880.0; 4905.0; 4930.0; 4955.0;\n    4980.0; 5005.0; 5030.0; 5055.0; 5080.0; 5105.0;\n    5130.0; 5155.0; 5180.0; 5205.0; 5230.0; 
5255.0;\n    5280.0; 5305.0; 5330.0; 5355.0; 5380.0; 5405.0;\n    5430.0; 5455.0; 5480.0; 5505.0; 5530.0; 5555.0;\n    5580.0; 5605.0; 5630.0; 5655.0; 5680.0; 5705.0;\n    5730.0; 5755.0; 5780.0; 5805.0; 5830.0\n  |]\n\nlet sdss_g_thru =\n  [|\n    0.0; 0.0003; 0.0008; 0.0013; 0.0019; 0.0024;\n    0.0034; 0.0055; 0.0103; 0.0194; 0.0326; 0.0492;\n    0.0686; 0.09; 0.1123; 0.1342; 0.1545; 0.1722;\n    0.1873; 0.2003; 0.2116; 0.2214; 0.2301; 0.2378;\n    0.2448; 0.2513; 0.2574; 0.2633; 0.2691; 0.2747;\n    0.2801; 0.2852; 0.2899; 0.294; 0.2979; 0.3016;\n    0.3055; 0.3097; 0.3141; 0.3184; 0.3224; 0.3257;\n    0.3284; 0.3307; 0.3327; 0.3346; 0.3364; 0.3383;\n    0.3403; 0.3425; 0.3448; 0.3472; 0.3495; 0.3519;\n    0.3541; 0.3562; 0.3581; 0.3597; 0.3609; 0.3613;\n    0.3609; 0.3595; 0.3581; 0.3558; 0.3452; 0.3194;\n    0.2807; 0.2339; 0.1839; 0.1352; 0.0911; 0.0548;\n    0.0295; 0.0166; 0.0112; 0.0077; 0.005; 0.0032;\n    0.0021; 0.0015; 0.0012; 0.001; 0.0009; 0.0008;\n    0.0006; 0.0005; 0.0003; 0.0001; 0.0\n  |]\n\n(* SDSS *)\n\nlet sdss_r_wave =\n  [|\n    5380.0; 5405.0; 5430.0; 5455.0; 5480.0; 5505.0;\n    5530.0; 5555.0; 5580.0; 5605.0; 5630.0; 5655.0;\n    5680.0; 5705.0; 5730.0; 5755.0; 5780.0; 5805.0;\n    5830.0; 5855.0; 5880.0; 5905.0; 5930.0; 5955.0;\n    5980.0; 6005.0; 6030.0; 6055.0; 6080.0; 6105.0;\n    6130.0; 6155.0; 6180.0; 6205.0; 6230.0; 6255.0;\n    6280.0; 6305.0; 6330.0; 6355.0; 6380.0; 6405.0;\n    6430.0; 6455.0; 6480.0; 6505.0; 6530.0; 6555.0;\n    6580.0; 6605.0; 6630.0; 6655.0; 6680.0; 6705.0;\n    6730.0; 6755.0; 6780.0; 6805.0; 6830.0; 6855.0;\n    6880.0; 6905.0; 6930.0; 6955.0; 6980.0; 7005.0;\n    7030.0; 7055.0; 7080.0; 7105.0; 7130.0; 7155.0;\n    7180.0; 7205.0; 7230.0\n  |]\n\nlet sdss_r_thru =\n  [|\n    0.0; 0.0014; 0.0099; 0.0259; 0.0497; 0.0807;\n    0.1186; 0.1625; 0.2093; 0.2555; 0.2975; 0.3326;\n    0.3609; 0.3834; 0.401; 0.4147; 0.4253; 0.4333;\n    0.4395; 0.4446; 0.4489; 0.4527; 0.4563; 0.4599;\n    0.4634; 0.4665; 0.4689; 
0.4703; 0.4711; 0.4717;\n    0.4727; 0.4744; 0.4767; 0.4792; 0.4819; 0.4844;\n    0.4867; 0.4887; 0.4902; 0.4909; 0.4912; 0.4912;\n    0.4912; 0.4914; 0.4915; 0.4912; 0.4901; 0.4878;\n    0.4852; 0.4818; 0.4697; 0.4421; 0.4009; 0.3499;\n    0.2924; 0.2318; 0.1715; 0.1152; 0.0687; 0.038;\n    0.0212; 0.0134; 0.0099; 0.0076; 0.0055; 0.0039;\n    0.0027; 0.002; 0.0015; 0.0012; 0.001; 0.0007;\n    0.0004; 0.0002; 0.0\n  |]\n\n(* SDSS *)\n\nlet sdss_i_wave =\n  [|\n    6430.0; 6455.0; 6480.0; 6505.0; 6530.0; 6555.0;\n    6580.0; 6605.0; 6630.0; 6655.0; 6680.0; 6705.0;\n    6730.0; 6755.0; 6780.0; 6805.0; 6830.0; 6855.0;\n    6880.0; 6905.0; 6930.0; 6955.0; 6980.0; 7005.0;\n    7030.0; 7055.0; 7080.0; 7105.0; 7130.0; 7155.0;\n    7180.0; 7205.0; 7230.0; 7255.0; 7280.0; 7305.0;\n    7330.0; 7355.0; 7380.0; 7405.0; 7430.0; 7455.0;\n    7480.0; 7505.0; 7530.0; 7555.0; 7580.0; 7605.0;\n    7630.0; 7655.0; 7680.0; 7705.0; 7730.0; 7755.0;\n    7780.0; 7805.0; 7830.0; 7855.0; 7880.0; 7905.0;\n    7930.0; 7955.0; 7980.0; 8005.0; 8030.0; 8055.0;\n    8080.0; 8105.0; 8130.0; 8155.0; 8180.0; 8205.0;\n    8230.0; 8255.0; 8280.0; 8305.0; 8330.0; 8355.0;\n    8380.0; 8405.0; 8430.0; 8455.0; 8480.0; 8505.0;\n    8530.0; 8555.0; 8580.0; 8605.0; 8630.0\n  |]\n\nlet sdss_i_thru =\n  [|\n    0.0; 0.0001; 0.0003; 0.0004; 0.0004; 0.0003;\n    0.0003; 0.0004; 0.0009; 0.0019; 0.0034; 0.0056;\n    0.0103; 0.0194; 0.0344; 0.0561; 0.0839; 0.1164;\n    0.1528; 0.1948; 0.2408; 0.2857; 0.3233; 0.3503;\n    0.3759; 0.399; 0.4162; 0.4233; 0.4165; 0.3943;\n    0.376; 0.3823; 0.3918; 0.3892; 0.3828; 0.382;\n    0.3884; 0.3872; 0.3821; 0.3787; 0.3759; 0.3727;\n    0.3681; 0.3618; 0.3565; 0.3554; 0.3478; 0.1473;\n    0.2096; 0.2648; 0.33; 0.3256; 0.3223; 0.3179;\n    0.3129; 0.3077; 0.3026; 0.298; 0.2944; 0.2921;\n    0.2916; 0.2921; 0.2927; 0.2923; 0.2896; 0.284;\n    0.2758; 0.2642; 0.2427; 0.2091; 0.1689; 0.1276;\n    0.0901; 0.0603; 0.0378; 0.0218; 0.0117; 0.0068;\n    0.0048; 0.0033; 0.002; 0.0013; 
0.001; 0.0009;\n    0.0009; 0.0008; 0.0005; 0.0002; 0.0\n  |]\n\n(* SDSS *)\n\nlet sdss_z_wave =\n  [|\n    7730.0; 7755.0; 7780.0; 7805.0; 7830.0; 7855.0;\n    7880.0; 7905.0; 7930.0; 7955.0; 7980.0; 8005.0;\n    8030.0; 8055.0; 8080.0; 8105.0; 8130.0; 8155.0;\n    8180.0; 8205.0; 8230.0; 8255.0; 8280.0; 8305.0;\n    8330.0; 8355.0; 8380.0; 8405.0; 8430.0; 8455.0;\n    8480.0; 8505.0; 8530.0; 8555.0; 8580.0; 8605.0;\n    8630.0; 8655.0; 8680.0; 8705.0; 8730.0; 8755.0;\n    8780.0; 8805.0; 8830.0; 8855.0; 8880.0; 8905.0;\n    8930.0; 8955.0; 8980.0; 9005.0; 9030.0; 9055.0;\n    9080.0; 9105.0; 9130.0; 9155.0; 9180.0; 9205.0;\n    9230.0; 9255.0; 9280.0; 9305.0; 9330.0; 9355.0;\n    9380.0; 9405.0; 9430.0; 9455.0; 9480.0; 9505.0;\n    9530.0; 9555.0; 9580.0; 9605.0; 9630.0; 9655.0;\n    9680.0; 9705.0; 9730.0; 9755.0; 9780.0; 9805.0;\n    9830.0; 9855.0; 9880.0; 9905.0; 9930.0; 9955.0;\n    9980.0; 10005.0; 10030.0; 10055.0; 10080.0; 10105.0;\n    10130.0; 10155.0; 10180.0; 10205.0; 10230.0; 10255.0;\n    10280.0; 10305.0; 10330.0; 10355.0; 10380.0; 10405.0;\n    10430.0; 10455.0; 10480.0; 10505.0; 10530.0; 10555.0;\n    10580.0; 10605.0; 10630.0; 10655.0; 10680.0; 10705.0;\n    10730.0; 10755.0; 10780.0; 10805.0; 10830.0; 10855.0;\n    10880.0; 10905.0; 10930.0; 10955.0; 10980.0; 11005.0;\n    11030.0; 11055.0; 11080.0; 11105.0; 11130.0; 11155.0;\n    11180.0; 11205.0; 11230.0\n  |]\n\nlet sdss_z_thru =\n  [|\n    0.0; 0.0; 0.0001; 0.0001; 0.0001; 0.0002;\n    0.0002; 0.0003; 0.0005; 0.0007; 0.0011; 0.0017;\n    0.0027; 0.004; 0.0057; 0.0079; 0.0106; 0.0139;\n    0.0178; 0.0222; 0.0271; 0.0324; 0.0382; 0.0446;\n    0.0511; 0.0564; 0.0603; 0.0637; 0.0667; 0.0694;\n    0.0717; 0.0736; 0.0752; 0.0765; 0.0775; 0.0782;\n    0.0786; 0.0787; 0.0785; 0.078; 0.0772; 0.0763;\n    0.0751; 0.0738; 0.0723; 0.0708; 0.0693; 0.0674;\n    0.0632; 0.0581; 0.0543; 0.0526; 0.0523; 0.0522;\n    0.0512; 0.0496; 0.0481; 0.0473; 0.0476; 0.0482;\n    0.0476; 0.0447; 0.0391; 0.0329; 0.0283; 
0.0264;\n    0.0271; 0.0283; 0.0275; 0.0254; 0.0252; 0.0256;\n    0.0246; 0.0244; 0.0252; 0.0258; 0.0265; 0.0274;\n    0.0279; 0.0271; 0.0252; 0.0236; 0.0227; 0.0222;\n    0.0216; 0.0208; 0.0196; 0.0183; 0.0171; 0.016;\n    0.0149; 0.0138; 0.0128; 0.0118; 0.0108; 0.0099;\n    0.0091; 0.0083; 0.0075; 0.0068; 0.0061; 0.0055;\n    0.005; 0.0045; 0.0041; 0.0037; 0.0033; 0.003;\n    0.0027; 0.0025; 0.0023; 0.0021; 0.0019; 0.0018;\n    0.0017; 0.0016; 0.0015; 0.0014; 0.0013; 0.0012;\n    0.0011; 0.001; 0.0009; 0.0008; 0.0008; 0.0007;\n    0.0006; 0.0006; 0.0006; 0.0005; 0.0005; 0.0004;\n    0.0004; 0.0003; 0.0003; 0.0002; 0.0002; 0.0001;\n    0.0001; 0.0; 0.0\n  |]\n\n(* Johnson-Cousins *)\n\nlet johnson_u_wave =\n  [|\n    3000.0; 3100.0; 3200.0; 3300.0; 3400.0; 3500.0;\n    3600.0; 3700.0; 3800.0; 3900.0; 4000.0; 4100.0;\n    4200.0\n  |]\n\nlet johnson_u_thru =\n  [|\n    0.0; 0.1; 0.61; 0.84; 0.93; 0.97;\n    1.0; 0.97; 0.73; 0.36; 0.05; 0.01;\n    0.0\n  |]\n\n(* Johnson-Cousins *)\n\nlet johnson_b_wave =\n  [|\n    3700.0; 3800.0; 4000.0; 4200.0; 4400.0; 4600.0;\n    4800.0; 5000.0; 5200.0; 5400.0; 5600.0\n  |]\n\nlet johnson_b_thru =\n  [|\n    0.0; 0.11; 0.92; 1.0; 0.94; 0.79;\n    0.58; 0.36; 0.15; 0.04; 0.0\n  |]\n\n(* Johnson-Cousins *)\n\nlet johnson_v_wave =\n  [|\n    4600.0; 4800.0; 5000.0; 5200.0; 5400.0; 5600.0;\n    5800.0; 6000.0; 6200.0; 6400.0; 6600.0; 6800.0;\n    7000.0; 7200.0; 7400.0\n  |]\n\nlet johnson_v_thru =\n  [|\n    0.0; 0.02; 0.38; 0.91; 0.98; 0.72;\n    0.62; 0.4; 0.2; 0.08; 0.02; 0.01;\n    0.01; 0.01; 0.0\n  |]\n\n(* Johnson-Cousins *)\n\nlet cousins_r_wave =\n  [|\n    5400.0; 5450.0; 5500.0; 5550.0; 5600.0; 5650.0;\n    5700.0; 5750.0; 5800.0; 5850.0; 5900.0; 5950.0;\n    6000.0; 6050.0; 6100.0; 6150.0; 6200.0; 6250.0;\n    6300.0; 6350.0; 6400.0; 6450.0; 6500.0; 6550.0;\n    6600.0; 6650.0; 6700.0; 6750.0; 6800.0; 6850.0;\n    6900.0; 6950.0; 7000.0; 7050.0; 7100.0; 7150.0;\n    7200.0; 7250.0; 7300.0; 7350.0; 7400.0; 7450.0;\n    
7500.0; 7550.0; 7600.0; 7650.0; 7700.0; 7750.0;\n    7800.0; 7850.0; 7900.0; 7950.0; 8000.0\n  |]\n\nlet cousins_r_thru =\n  [|\n    0.0; 0.002; 0.01; 0.03; 0.07; 0.18;\n    0.4; 0.77; 0.89; 0.96; 0.99; 0.999;\n    1.0; 0.997; 0.99; 0.976; 0.96; 0.946;\n    0.93; 0.912; 0.895; 0.88; 0.86; 0.845;\n    0.825; 0.806; 0.788; 0.765; 0.742; 0.72;\n    0.7; 0.676; 0.65; 0.626; 0.6; 0.568;\n    0.53; 0.48; 0.395; 0.3; 0.215; 0.155;\n    0.12; 0.1; 0.085; 0.075; 0.06; 0.05;\n    0.04; 0.029; 0.02; 0.01; 0.0\n  |]\n\n(* Johnson-Cousins *)\n\nlet cousins_i_wave =\n  [|\n    7000.0; 7050.0; 7100.0; 7150.0; 7200.0; 7250.0;\n    7300.0; 7350.0; 7400.0; 7450.0; 7500.0; 7550.0;\n    7600.0; 7650.0; 7700.0; 7750.0; 7800.0; 7850.0;\n    7900.0; 7950.0; 8000.0; 8050.0; 8100.0; 8150.0;\n    8200.0; 8250.0; 8300.0; 8350.0; 8400.0; 8450.0;\n    8500.0; 8550.0; 8600.0; 8650.0; 8700.0; 8750.0;\n    8800.0; 8850.0; 8900.0; 8950.0; 9000.0; 9050.0;\n    9100.0\n  |]\n\nlet cousins_i_thru =\n  [|\n    0.0; 0.005; 0.02; 0.05; 0.1; 0.17;\n    0.33; 0.7; 0.82; 0.9; 0.95; 0.98;\n    0.99; 0.994; 0.98; 0.95; 0.913; 0.87;\n    0.83; 0.79; 0.75; 0.71; 0.673; 0.65;\n    0.63; 0.61; 0.58; 0.55; 0.51; 0.47;\n    0.405; 0.33; 0.25; 0.18; 0.14; 0.11;\n    0.08; 0.06; 0.035; 0.02; 0.01; 0.005;\n    0.0\n  |]\n\n(* 2MASS *)\n\nlet twomass_j_wave =\n  [|\n    10620.0; 10660.0; 10700.0; 10750.0; 10780.0; 10820.0;\n    10840.0; 10870.0; 10890.0; 10930.0; 10960.0; 11020.0;\n    11050.0; 11070.0; 11090.0; 11120.0; 11160.0; 11170.0;\n    11200.0; 11230.0; 11280.0; 11290.0; 11320.0; 11340.0;\n    11380.0; 11400.0; 11430.0; 11470.0; 11540.0; 11590.0;\n    11640.0; 11670.0; 11700.0; 11730.0; 11750.0; 11790.0;\n    11820.0; 11860.0; 11880.0; 11920.0; 11950.0; 11990.0;\n    12020.0; 12090.0; 12160.0; 12210.0; 12270.0; 12310.0;\n    12360.0; 12400.0; 12440.0; 12470.0; 12530.0; 12550.0;\n    12580.0; 12600.0; 12650.0; 12700.0; 12750.0; 12790.0;\n    12860.0; 12920.0; 12970.0; 13020.0; 13050.0; 13070.0;\n    13100.0; 
13130.0; 13160.0; 13190.0; 13230.0; 13260.0;\n    13300.0; 13330.0; 13340.0; 13360.0; 13390.0; 13430.0;\n    13460.0; 13490.0; 13530.0; 13550.0; 13600.0; 13630.0;\n    13700.0; 13730.0; 13770.0; 13830.0; 13880.0; 13920.0;\n    13950.0; 13960.0; 13970.0; 13980.0; 14000.0; 14010.0;\n    14020.0; 14040.0; 14060.0; 14070.0; 14100.0; 14120.0;\n    14160.0; 14210.0; 14260.0; 14420.0; 14500.0\n  |]\n\nlet twomass_j_thru =\n  [|\n    0.0; 0.0004; 0.0015; 0.0027; 0.0055; 0.0123;\n    0.0203; 0.0306; 0.0405; 0.0515; 0.0564; 0.0718;\n    0.2736; 0.341; 0.3584; 0.3801; 0.3307; 0.2395;\n    0.2501; 0.2833; 0.2582; 0.2515; 0.5381; 0.2232;\n    0.5369; 0.1102; 0.5292; 0.2619; 0.3202; 0.1743;\n    0.607; 0.6179; 0.6763; 0.7279; 0.7465; 0.8304;\n    0.7903; 0.8096; 0.8369; 0.836; 0.7499; 0.708;\n    0.6988; 0.7049; 0.7004; 0.7328; 0.7057; 0.8424;\n    0.9219; 0.9525; 0.9676; 0.9595; 0.9227; 0.893;\n    0.8529; 0.8023; 0.7501; 0.6781; 0.6524; 0.6388;\n    0.6424; 0.6486; 0.6824; 0.7529; 0.7759; 0.8118;\n    0.777; 0.721; 0.9525; 0.8551; 0.8414; 1.0;\n    0.8947; 0.8549; 0.5379; 0.2799; 0.9065; 0.6893;\n    0.5533; 0.2432; 0.0144; 0.0002; 0.0401; 0.0045;\n    0.0003; 0.0372; 0.0005; 0.0; 0.0001; 0.0033;\n    0.0003; 0.0085; 0.0254; 0.1184; 0.0001; 0.0001;\n    0.0521; 0.0104; 0.0478; 0.0004; 0.0024; 0.0053;\n    0.0086; 0.0007; 0.0003; 0.0004; 0.0\n  |]\n\n(* 2MASS *)\n\nlet twomass_h_wave =\n  [|\n    12890.0; 13150.0; 13410.0; 13680.0; 13970.0; 14180.0;\n    14400.0; 14620.0; 14780.0; 14860.0; 14930.0; 15040.0;\n    15150.0; 15280.0; 15390.0; 15460.0; 15510.0; 15560.0;\n    15650.0; 15720.0; 15770.0; 15830.0; 15920.0; 15970.0;\n    16020.0; 16130.0; 16190.0; 16280.0; 16330.0; 16420.0;\n    16480.0; 16570.0; 16590.0; 16710.0; 16840.0; 17010.0;\n    17150.0; 17270.0; 17390.0; 17460.0; 17510.0; 17530.0;\n    17560.0; 17640.0; 17750.0; 17850.0; 17900.0; 17960.0;\n    18030.0; 18100.0; 18130.0; 18180.0; 18280.0; 18350.0;\n    18500.0; 18710.0; 18930.0; 19140.0\n  |]\n\nlet 
twomass_h_thru =\n  [|\n    0.0; 0.0; 0.0; 0.0; 0.0; 0.0;\n    0.0005; 0.028; 0.081; 0.287; 0.871; 0.201;\n    0.438; 0.686; 0.818; 0.882; 0.912; 0.927;\n    0.929; 0.873; 0.857; 0.883; 0.918; 0.927;\n    0.908; 0.926; 0.92; 0.924; 0.924; 0.942;\n    0.949; 0.981; 0.994; 1.0; 0.956; 0.924;\n    0.982; 0.992; 0.989; 0.979; 0.968; 0.937;\n    0.919; 0.842; 0.667; 0.269; 0.452; 0.173;\n    0.108; 0.071; 0.005; 0.02; 0.0004; 0.0;\n    0.0001; 0.0; 0.0; 0.0\n  |]\n\n(* 2MASS *)\n\nlet twomass_ks_wave =\n  [|\n    19000.0; 19150.0; 19270.0; 19340.0; 19390.0; 19480.0;\n    19570.0; 19620.0; 19690.0; 19760.0; 19810.0; 19890.0;\n    19900.0; 19980.0; 20080.0; 20140.0; 20190.0; 20280.0;\n    20370.0; 20450.0; 20610.0; 20720.0; 20750.0; 20820.0;\n    20890.0; 20990.0; 21060.0; 21130.0; 21200.0; 21240.0;\n    21380.0; 21450.0; 21550.0; 21690.0; 21760.0; 21850.0;\n    21970.0; 22080.0; 22130.0; 22180.0; 22320.0; 22370.0;\n    22480.0; 22560.0; 22600.0; 22630.0; 22650.0; 22700.0;\n    22720.0; 22760.0; 22770.0; 22810.0; 22840.0; 22860.0;\n    22910.0; 22930.0; 22950.0; 22970.0; 22990.0; 23060.0;\n    23110.0; 23160.0; 23200.0; 23250.0; 23280.0; 23350.0;\n    23390.0; 23440.0; 23460.0; 23520.0; 23610.0; 23630.0;\n    23700.0; 23750.0; 23840.0; 23990.0\n  |]\n\nlet twomass_ks_thru =\n  [|\n    0.0; 0.0; 0.0; 0.0002; 0.0005; 0.0054;\n    0.0119; 0.0197; 0.0422; 0.0873; 0.1528; 0.2482;\n    0.1902; 0.2339; 0.2946; 0.3982; 0.3366; 0.6207;\n    0.765; 0.7464; 0.6251; 0.7255; 0.6895; 0.7879;\n    0.8181; 0.8228; 0.8633; 0.8778; 0.8549; 0.8953;\n    0.9189; 0.9268; 0.9267; 0.9009; 0.9228; 0.8428;\n    0.9459; 0.9804; 0.9879; 0.9848; 0.9647; 0.9816;\n    0.9834; 0.9613; 0.9792; 1.0; 0.9632; 0.9812;\n    0.9681; 0.9109; 0.9821; 0.8896; 0.8918; 0.9424;\n    0.8404; 0.8042; 0.7077; 0.6576; 0.5607; 0.4437;\n    0.3482; 0.2302; 0.1626; 0.136; 0.0921; 0.0624;\n    0.0431; 0.034; 0.031; 0.0118; 0.0068; 0.0007;\n    0.003; 0.0021; 0.0004; 0.0\n  |]\n\n(* Gaia DR3 *)\n\nlet gaia_g_wave =\n  [|\n  
  3200.0; 3300.0; 3400.0; 3500.0; 3600.0; 3700.0;\n    3800.0; 3900.0; 4000.0; 4100.0; 4200.0; 4300.0;\n    4400.0; 4500.0; 4600.0; 4700.0; 4800.0; 4900.0;\n    5000.0; 5100.0; 5200.0; 5300.0; 5400.0; 5500.0;\n    5600.0; 5700.0; 5800.0; 5900.0; 6000.0; 6100.0;\n    6200.0; 6300.0; 6400.0; 6500.0; 6600.0; 6700.0;\n    6800.0; 6900.0; 7000.0; 7100.0; 7200.0; 7300.0;\n    7400.0; 7500.0; 7600.0; 7700.0; 7800.0; 7900.0;\n    8000.0; 8100.0; 8200.0; 8300.0; 8400.0; 8500.0;\n    8600.0; 8700.0; 8800.0; 8900.0; 9000.0; 9100.0;\n    9200.0; 9300.0; 9400.0; 9500.0; 9600.0; 9700.0;\n    9800.0; 9900.0; 10000.0; 10100.0; 10200.0; 10300.0;\n    10400.0; 10500.0\n  |]\n\nlet gaia_g_thru =\n  [|\n    2.37366962e-08; 0.00976875472; 0.0868837415; 0.125910068; 0.121442511; 0.109349045;\n    0.116293195; 0.204618287; 0.34084777; 0.433235889; 0.492915186; 0.532506055;\n    0.560121042; 0.58187167; 0.598921356; 0.612743401; 0.624456273; 0.634592054;\n    0.642876868; 0.651384274; 0.659234285; 0.665180853; 0.672624175; 0.677892686;\n    0.68337283; 0.688218588; 0.692909244; 0.698360314; 0.701281364; 0.705926392;\n    0.709945761; 0.712286557; 0.714900215; 0.716852196; 0.718062023; 0.717424017;\n    0.716404699; 0.713025742; 0.709495858; 0.702344476; 0.694885081; 0.682863231;\n    0.670880823; 0.654375536; 0.636105955; 0.615501457; 0.592399976; 0.567402553;\n    0.539583616; 0.510092228; 0.4791254; 0.447393833; 0.414784905; 0.38035191;\n    0.347263747; 0.313995072; 0.280491684; 0.249470941; 0.218314877; 0.189578109;\n    0.162072087; 0.137119296; 0.113758622; 0.0931891382; 0.074983285; 0.058819497;\n    0.0451523186; 0.0338677803; 0.0245381883; 0.0171045299; 0.0113958923; 0.00725157056;\n    0.00436700622; 0.00241251048\n  |]\n\n(* Gaia DR3 *)\n\nlet gaia_bp_wave =\n  [|\n    3250.0; 3300.0; 3350.0; 3400.0; 3450.0; 3500.0;\n    3550.0; 3600.0; 3650.0; 3700.0; 3750.0; 3800.0;\n    3850.0; 3900.0; 3950.0; 4000.0; 4050.0; 4100.0;\n    4150.0; 4200.0; 4250.0; 4300.0; 4350.0; 4400.0;\n    
4450.0; 4500.0; 4550.0; 4600.0; 4650.0; 4700.0;\n    4750.0; 4800.0; 4850.0; 4900.0; 4950.0; 5000.0;\n    5050.0; 5100.0; 5150.0; 5200.0; 5250.0; 5300.0;\n    5350.0; 5400.0; 5450.0; 5500.0; 5550.0; 5600.0;\n    5650.0; 5700.0; 5750.0; 5800.0; 5850.0; 5900.0;\n    5950.0; 6000.0; 6050.0; 6100.0; 6150.0; 6200.0;\n    6250.0; 6300.0; 6350.0; 6400.0; 6450.0; 6500.0;\n    6550.0; 6600.0; 6650.0; 6700.0; 6750.0; 6800.0;\n    6850.0; 6900.0; 6950.0; 7000.0; 7050.0; 7100.0;\n    7150.0; 7200.0; 7250.0; 7300.0; 7350.0; 7400.0;\n    7450.0; 7500.0\n  |]\n\nlet gaia_bp_thru =\n  [|\n    3.87054116e-05; 0.0109458069; 0.0960352312; 0.209777042; 0.24623711; 0.184648618;\n    0.196988564; 0.235262373; 0.223965129; 0.204351616; 0.178318209; 0.162143918;\n    0.184059158; 0.254352193; 0.34761739; 0.432175816; 0.492469156; 0.533465853;\n    0.560569552; 0.578302699; 0.589904162; 0.59902208; 0.607578555; 0.615301491;\n    0.623213524; 0.626992584; 0.627863884; 0.627028071; 0.627574894; 0.629195435;\n    0.632206645; 0.634636782; 0.635341726; 0.635285854; 0.634064731; 0.631462795;\n    0.63078819; 0.630124067; 0.630179832; 0.630007723; 0.627664462; 0.623347947;\n    0.621221392; 0.620168751; 0.619955482; 0.622688637; 0.622951427; 0.619327372;\n    0.614279326; 0.608587274; 0.60526859; 0.613287749; 0.63076648; 0.643202692;\n    0.641217847; 0.627856079; 0.613625406; 0.613169161; 0.625579651; 0.649530382;\n    0.666534979; 0.666866457; 0.650929127; 0.620667611; 0.578699833; 0.528381533;\n    0.46236659; 0.341204564; 0.158484372; 0.0351559339; 0.00370522417; 0.000728910264;\n    0.000524017362; 0.000284946553; 0.000113610422; 2.0255692e-05; 6.08899493e-06; 4.25533546e-06;\n    2.88792024e-06; 9.89680713e-07; 4.24900727e-07; 1.27016225e-07; 1.16831386e-07; 7.93884518e-09;\n    1.27555036e-07; 2.17167412e-08\n  |]\n\n(* Gaia DR3 *)\n\nlet gaia_rp_wave =\n  [|\n    6100.0; 6150.0; 6200.0; 6250.0; 6300.0; 6350.0;\n    6400.0; 6450.0; 6500.0; 6550.0; 6600.0; 6650.0;\n    6700.0; 6750.0; 
6800.0; 6850.0; 6900.0; 6950.0;\n    7000.0; 7050.0; 7100.0; 7150.0; 7200.0; 7250.0;\n    7300.0; 7350.0; 7400.0; 7450.0; 7500.0; 7550.0;\n    7600.0; 7650.0; 7700.0; 7750.0; 7800.0; 7850.0;\n    7900.0; 7950.0; 8000.0; 8050.0; 8100.0; 8150.0;\n    8200.0; 8250.0; 8300.0; 8350.0; 8400.0; 8450.0;\n    8500.0; 8550.0; 8600.0; 8650.0; 8700.0; 8750.0;\n    8800.0; 8850.0; 8900.0; 8950.0; 9000.0; 9050.0;\n    9100.0; 9150.0; 9200.0; 9250.0; 9300.0; 9350.0;\n    9400.0; 9450.0; 9500.0; 9550.0; 9600.0; 9650.0;\n    9700.0; 9750.0; 9800.0; 9850.0; 9900.0; 9950.0;\n    10000.0; 10050.0; 10100.0; 10150.0; 10200.0; 10250.0;\n    10300.0; 10350.0; 10400.0; 10450.0; 10500.0; 10550.0;\n    10600.0; 10650.0; 10700.0; 10750.0; 10800.0\n  |]\n\nlet gaia_rp_thru =\n  [|\n    0.0001067; 0.000705; 0.0089591; 0.0894186; 0.3945348; 0.6832151;\n    0.7284574; 0.6783742; 0.6932457; 0.6991653; 0.7068345; 0.7168661;\n    0.7258579; 0.7314582; 0.7317729; 0.729553; 0.7311262; 0.7341997;\n    0.7375911; 0.7377587; 0.7351913; 0.7317705; 0.7322348; 0.7341152;\n    0.7395558; 0.7439523; 0.7434368; 0.7401882; 0.7383857; 0.7400737;\n    0.7391916; 0.7378262; 0.7299905; 0.7234387; 0.7148353; 0.7081058;\n    0.7045418; 0.7029044; 0.703763; 0.7037788; 0.7012269; 0.698329;\n    0.6904644; 0.6830179; 0.6750185; 0.6668831; 0.6552453; 0.6437497;\n    0.6278626; 0.6142203; 0.5984866; 0.5817457; 0.5664293; 0.5505743;\n    0.5320554; 0.5156898; 0.4998404; 0.4817145; 0.4631831; 0.443315;\n    0.4236545; 0.4041978; 0.3837304; 0.3611222; 0.3409582; 0.320113;\n    0.2991975; 0.278412; 0.2555403; 0.2372075; 0.2165387; 0.1977315;\n    0.1787908; 0.1634732; 0.1453902; 0.1318572; 0.1142639; 0.0987333;\n    0.0815165; 0.0661173; 0.0521649; 0.0400458; 0.030169; 0.0228553;\n    0.0165918; 0.0122218; 0.0086189; 0.006114; 0.0042268; 0.0028113;\n    0.001905; 0.0012324; 0.0007693; 0.0004905; 0.0003028\n  |]\n\n(* LSST/LSST.u — 60 points *)\nlet rubin_u_wave =\n  [|\n    3.200000e+03; 3.215000e+03; 3.230000e+03; 
3.245000e+03; 3.260000e+03; 3.275000e+03; 3.290000e+03; 3.305000e+03;\n    3.320000e+03; 3.335000e+03; 3.350000e+03; 3.365000e+03; 3.380000e+03; 3.395000e+03; 3.410000e+03; 3.425000e+03;\n    3.440000e+03; 3.455000e+03; 3.470000e+03; 3.485000e+03; 3.500000e+03; 3.515000e+03; 3.530000e+03; 3.545000e+03;\n    3.560000e+03; 3.575000e+03; 3.590000e+03; 3.605000e+03; 3.620000e+03; 3.635000e+03; 3.650000e+03; 3.665000e+03;\n    3.680000e+03; 3.695000e+03; 3.710000e+03; 3.725000e+03; 3.740000e+03; 3.755000e+03; 3.770000e+03; 3.785000e+03;\n    3.800000e+03; 3.815000e+03; 3.830000e+03; 3.845000e+03; 3.860000e+03; 3.875000e+03; 3.890000e+03; 3.905000e+03;\n    3.920000e+03; 3.935000e+03; 3.950000e+03; 3.965000e+03; 3.980000e+03; 3.995000e+03; 4.010000e+03; 4.025000e+03;\n    4.040000e+03; 4.055000e+03; 4.070000e+03; 4.085000e+03\n  |]\n\nlet rubin_u_thru =\n  [|\n    1.429550e-14; 5.824880e-03; 9.177360e-03; 1.413040e-02; 2.023590e-02; 2.751190e-02; 3.708220e-02; 4.640890e-02;\n    5.690710e-02; 6.560040e-02; 7.538320e-02; 8.192530e-02; 8.826960e-02; 9.514300e-02; 1.009060e-01; 1.072220e-01;\n    1.120190e-01; 1.179670e-01; 1.231450e-01; 1.283730e-01; 1.337000e-01; 1.381080e-01; 1.432610e-01; 1.478000e-01;\n    1.527230e-01; 1.573360e-01; 1.620670e-01; 1.666840e-01; 1.716940e-01; 1.764620e-01; 1.811790e-01; 1.858970e-01;\n    1.906280e-01; 1.950920e-01; 1.996840e-01; 2.041020e-01; 2.082430e-01; 2.126950e-01; 2.169940e-01; 2.214680e-01;\n    2.221790e-01; 2.194800e-01; 2.155940e-01; 2.047200e-01; 1.938620e-01; 1.822830e-01; 1.701760e-01; 1.575460e-01;\n    1.442870e-01; 1.308010e-01; 1.168670e-01; 1.024170e-01; 8.739400e-02; 7.196930e-02; 5.599790e-02; 3.968800e-02;\n    2.581460e-02; 1.741130e-02; 8.801300e-03; 2.931970e-04\n  |]\n\n(* LSST/LSST.g — 60 points *)\nlet rubin_g_wave =\n  [|\n    3.864000e+03; 3.894000e+03; 3.925000e+03; 3.955000e+03; 3.986000e+03; 4.016000e+03; 4.047000e+03; 4.078000e+03;\n    4.108000e+03; 4.139000e+03; 4.169000e+03; 4.200000e+03; 
4.231000e+03; 4.261000e+03; 4.292000e+03; 4.322000e+03;\n    4.353000e+03; 4.384000e+03; 4.414000e+03; 4.445000e+03; 4.475000e+03; 4.506000e+03; 4.537000e+03; 4.567000e+03;\n    4.598000e+03; 4.628000e+03; 4.659000e+03; 4.690000e+03; 4.720000e+03; 4.751000e+03; 4.781000e+03; 4.812000e+03;\n    4.842000e+03; 4.873000e+03; 4.904000e+03; 4.934000e+03; 4.965000e+03; 4.995000e+03; 5.026000e+03; 5.057000e+03;\n    5.087000e+03; 5.118000e+03; 5.148000e+03; 5.179000e+03; 5.210000e+03; 5.240000e+03; 5.271000e+03; 5.301000e+03;\n    5.332000e+03; 5.363000e+03; 5.393000e+03; 5.424000e+03; 5.454000e+03; 5.485000e+03; 5.516000e+03; 5.546000e+03;\n    5.577000e+03; 5.607000e+03; 5.638000e+03; 5.669000e+03\n  |]\n\nlet rubin_g_thru =\n  [|\n    4.995720e-14; 1.504200e-02; 3.744180e-02; 7.157070e-02; 1.087810e-01; 1.464570e-01; 1.868340e-01; 2.288970e-01;\n    2.711150e-01; 3.057660e-01; 3.235230e-01; 3.295650e-01; 3.352140e-01; 3.406260e-01; 3.460290e-01; 3.509490e-01;\n    3.552980e-01; 3.595040e-01; 3.634800e-01; 3.669990e-01; 3.708900e-01; 3.741200e-01; 3.769130e-01; 3.795000e-01;\n    3.822150e-01; 3.843350e-01; 3.866750e-01; 3.889150e-01; 3.912650e-01; 3.926710e-01; 3.941770e-01; 3.949940e-01;\n    3.969120e-01; 3.983260e-01; 3.989640e-01; 3.998360e-01; 4.005050e-01; 4.012760e-01; 4.004640e-01; 4.001320e-01;\n    4.011100e-01; 4.024610e-01; 4.033200e-01; 4.036730e-01; 4.041100e-01; 4.038960e-01; 4.034070e-01; 4.035910e-01;\n    4.037410e-01; 4.053230e-01; 3.897960e-01; 3.553770e-01; 3.076690e-01; 2.585420e-01; 2.090320e-01; 1.607920e-01;\n    1.108210e-01; 6.215070e-02; 2.627740e-02; 8.320810e-04\n  |]\n\n(* LSST/LSST.r — 60 points *)\nlet rubin_r_wave =\n  [|\n    5.370000e+03; 5.398000e+03; 5.427000e+03; 5.455000e+03; 5.484000e+03; 5.513000e+03; 5.541000e+03; 5.570000e+03;\n    5.599000e+03; 5.627000e+03; 5.656000e+03; 5.684000e+03; 5.713000e+03; 5.742000e+03; 5.770000e+03; 5.799000e+03;\n    5.828000e+03; 5.856000e+03; 5.885000e+03; 5.913000e+03; 5.942000e+03; 
5.971000e+03; 5.999000e+03; 6.028000e+03;\n    6.057000e+03; 6.085000e+03; 6.114000e+03; 6.142000e+03; 6.171000e+03; 6.200000e+03; 6.228000e+03; 6.257000e+03;\n    6.286000e+03; 6.314000e+03; 6.343000e+03; 6.371000e+03; 6.400000e+03; 6.429000e+03; 6.457000e+03; 6.486000e+03;\n    6.515000e+03; 6.543000e+03; 6.572000e+03; 6.600000e+03; 6.629000e+03; 6.658000e+03; 6.686000e+03; 6.715000e+03;\n    6.744000e+03; 6.772000e+03; 6.801000e+03; 6.829000e+03; 6.858000e+03; 6.887000e+03; 6.915000e+03; 6.944000e+03;\n    6.973000e+03; 7.001000e+03; 7.030000e+03; 7.059000e+03\n  |]\n\nlet rubin_r_thru =\n  [|\n    4.419110e-13; 2.309990e-02; 5.277260e-02; 9.905770e-02; 1.473500e-01; 1.958210e-01; 2.426460e-01; 2.913780e-01;\n    3.398970e-01; 3.865220e-01; 4.118140e-01; 4.177100e-01; 4.186640e-01; 4.200420e-01; 4.218920e-01; 4.241900e-01;\n    4.259640e-01; 4.276140e-01; 4.258520e-01; 4.267580e-01; 4.263350e-01; 4.286710e-01; 4.309470e-01; 4.325390e-01;\n    4.342740e-01; 4.365000e-01; 4.394080e-01; 4.418750e-01; 4.444110e-01; 4.462740e-01; 4.484890e-01; 4.503210e-01;\n    4.432760e-01; 4.520930e-01; 4.562140e-01; 4.584970e-01; 4.602860e-01; 4.625800e-01; 4.638100e-01; 4.629030e-01;\n    4.637690e-01; 4.655010e-01; 4.663710e-01; 4.690410e-01; 4.694020e-01; 4.697590e-01; 4.700940e-01; 4.709560e-01;\n    4.711620e-01; 4.671030e-01; 4.403810e-01; 3.889860e-01; 3.325260e-01; 2.460020e-01; 2.127620e-01; 1.675560e-01;\n    1.164150e-01; 6.332250e-02; 2.799150e-02; 9.340470e-04\n  |]\n\n(* LSST/LSST.i — 60 points *)\nlet rubin_i_wave =\n  [|\n    6.760000e+03; 6.786000e+03; 6.813000e+03; 6.839000e+03; 6.866000e+03; 6.892000e+03; 6.919000e+03; 6.946000e+03;\n    6.972000e+03; 6.999000e+03; 7.025000e+03; 7.052000e+03; 7.079000e+03; 7.105000e+03; 7.132000e+03; 7.158000e+03;\n    7.185000e+03; 7.212000e+03; 7.238000e+03; 7.265000e+03; 7.291000e+03; 7.318000e+03; 7.345000e+03; 7.371000e+03;\n    7.398000e+03; 7.424000e+03; 7.451000e+03; 7.478000e+03; 7.504000e+03; 7.531000e+03; 
7.557000e+03; 7.584000e+03;\n    7.610000e+03; 7.637000e+03; 7.664000e+03; 7.690000e+03; 7.717000e+03; 7.743000e+03; 7.770000e+03; 7.797000e+03;\n    7.823000e+03; 7.850000e+03; 7.876000e+03; 7.903000e+03; 7.930000e+03; 7.956000e+03; 7.983000e+03; 8.009000e+03;\n    8.036000e+03; 8.063000e+03; 8.089000e+03; 8.116000e+03; 8.142000e+03; 8.169000e+03; 8.196000e+03; 8.222000e+03;\n    8.249000e+03; 8.275000e+03; 8.302000e+03; 8.329000e+03\n  |]\n\nlet rubin_i_thru =\n  [|\n    8.017680e-13; 2.428840e-02; 5.232930e-02; 1.008480e-01; 1.393260e-01; 1.741710e-01; 2.376070e-01; 2.935610e-01;\n    3.473580e-01; 3.942050e-01; 4.356630e-01; 4.633490e-01; 4.648150e-01; 4.656090e-01; 4.657820e-01; 4.634080e-01;\n    4.285660e-01; 4.468650e-01; 4.358040e-01; 4.434830e-01; 4.449560e-01; 4.513710e-01; 4.624340e-01; 4.619650e-01;\n    4.645030e-01; 4.657510e-01; 4.657690e-01; 4.658270e-01; 4.653330e-01; 4.654000e-01; 4.648360e-01; 4.636220e-01;\n    1.706640e-01; 2.698000e-01; 4.015270e-01; 4.520990e-01; 4.617890e-01; 4.617680e-01; 4.610620e-01; 4.606790e-01;\n    4.597540e-01; 4.585490e-01; 4.570360e-01; 4.535390e-01; 4.537330e-01; 4.537350e-01; 4.537880e-01; 4.513960e-01;\n    4.520760e-01; 4.306080e-01; 3.909260e-01; 3.398770e-01; 2.831320e-01; 2.292450e-01; 1.858430e-01; 1.434810e-01;\n    9.907340e-02; 5.212160e-02; 2.456530e-02; 8.945460e-04\n  |]\n\n(* LSST/LSST.z — 60 points *)\nlet rubin_z_wave =\n  [|\n    8.030000e+03; 8.052000e+03; 8.075000e+03; 8.098000e+03; 8.121000e+03; 8.144000e+03; 8.167000e+03; 8.190000e+03;\n    8.213000e+03; 8.236000e+03; 8.259000e+03; 8.282000e+03; 8.305000e+03; 8.328000e+03; 8.351000e+03; 8.374000e+03;\n    8.397000e+03; 8.420000e+03; 8.443000e+03; 8.466000e+03; 8.489000e+03; 8.512000e+03; 8.535000e+03; 8.558000e+03;\n    8.581000e+03; 8.604000e+03; 8.627000e+03; 8.650000e+03; 8.673000e+03; 8.696000e+03; 8.718000e+03; 8.741000e+03;\n    8.764000e+03; 8.787000e+03; 8.810000e+03; 8.833000e+03; 8.856000e+03; 8.879000e+03; 8.902000e+03; 
8.925000e+03;\n    8.948000e+03; 8.971000e+03; 8.994000e+03; 9.017000e+03; 9.040000e+03; 9.063000e+03; 9.086000e+03; 9.109000e+03;\n    9.132000e+03; 9.155000e+03; 9.178000e+03; 9.201000e+03; 9.224000e+03; 9.247000e+03; 9.270000e+03; 9.293000e+03;\n    9.316000e+03; 9.339000e+03; 9.362000e+03; 9.385000e+03\n  |]\n\nlet rubin_z_thru =\n  [|\n    1.039400e-12; 1.983520e-02; 4.060140e-02; 7.732710e-02; 1.178440e-01; 1.535950e-01; 1.866070e-01; 2.287220e-01;\n    2.798410e-01; 2.967030e-01; 3.561920e-01; 3.877460e-01; 4.214340e-01; 4.411140e-01; 4.470510e-01; 4.475610e-01;\n    4.480250e-01; 4.471650e-01; 4.471040e-01; 4.466870e-01; 4.464830e-01; 4.458890e-01; 4.466020e-01; 4.471110e-01;\n    4.474290e-01; 4.476080e-01; 4.474960e-01; 4.474640e-01; 4.472600e-01; 4.469260e-01; 4.456110e-01; 4.441420e-01;\n    4.428510e-01; 4.412100e-01; 4.406520e-01; 4.402080e-01; 4.389510e-01; 4.380910e-01; 4.371550e-01; 4.326220e-01;\n    4.198210e-01; 3.961630e-01; 3.715530e-01; 3.863600e-01; 4.102820e-01; 3.960420e-01; 3.749220e-01; 3.652600e-01;\n    3.317280e-01; 2.897530e-01; 2.623570e-01; 2.428380e-01; 2.063130e-01; 1.666310e-01; 1.330530e-01; 8.828810e-02;\n    4.608740e-02; 1.844720e-02; 9.801210e-03; 2.278930e-04\n  |]\n\n(* LSST/LSST.y — 60 points *)\nlet rubin_y_wave =\n  [|\n    9.084000e+03; 9.116000e+03; 9.148000e+03; 9.180000e+03; 9.213000e+03; 9.245000e+03; 9.277000e+03; 9.310000e+03;\n    9.342000e+03; 9.374000e+03; 9.406000e+03; 9.439000e+03; 9.471000e+03; 9.503000e+03; 9.536000e+03; 9.568000e+03;\n    9.600000e+03; 9.632000e+03; 9.665000e+03; 9.697000e+03; 9.729000e+03; 9.762000e+03; 9.794000e+03; 9.826000e+03;\n    9.858000e+03; 9.891000e+03; 9.923000e+03; 9.955000e+03; 9.988000e+03; 1.002000e+04; 1.005200e+04; 1.008400e+04;\n    1.011700e+04; 1.014900e+04; 1.018100e+04; 1.021400e+04; 1.024600e+04; 1.027800e+04; 1.031000e+04; 1.034300e+04;\n    1.037500e+04; 1.040700e+04; 1.044000e+04; 1.047200e+04; 1.050400e+04; 1.053600e+04; 1.056900e+04; 1.060100e+04;\n    
1.063300e+04; 1.066600e+04; 1.069800e+04; 1.073000e+04; 1.076200e+04; 1.079500e+04; 1.082700e+04; 1.085900e+04;\n    1.089200e+04; 1.092400e+04; 1.095600e+04; 1.098900e+04\n  |]\n\nlet rubin_y_thru =\n  [|\n    4.969710e-13; 2.294700e-02; 5.618380e-02; 9.902450e-02; 1.553400e-01; 1.976760e-01; 2.337680e-01; 2.243520e-01;\n    1.759080e-01; 2.175850e-01; 2.643880e-01; 2.261180e-01; 2.467280e-01; 2.298940e-01; 2.452360e-01; 2.340200e-01;\n    2.363950e-01; 2.468170e-01; 2.376270e-01; 2.601730e-01; 2.440820e-01; 2.274890e-01; 2.249930e-01; 2.233450e-01;\n    2.183580e-01; 2.090180e-01; 1.985730e-01; 1.887810e-01; 1.784510e-01; 1.678880e-01; 1.585850e-01; 1.488040e-01;\n    1.392830e-01; 1.302160e-01; 1.212720e-01; 1.124510e-01; 1.041190e-01; 9.340160e-02; 8.306840e-02; 7.337710e-02;\n    6.467130e-02; 5.662540e-02; 4.896100e-02; 4.205990e-02; 3.581300e-02; 3.014650e-02; 2.491370e-02; 2.058860e-02;\n    1.681940e-02; 1.360580e-02; 1.116100e-02; 9.274090e-03; 7.773550e-03; 6.345540e-03; 5.132820e-03; 3.961830e-03;\n    3.044690e-03; 2.228830e-03; 1.600220e-03; 1.225340e-03\n  |]\n\n(* Euclid/VIS.vis — 60 points *)\nlet euclid_vis_wave =\n  [|\n    4.369190e+03; 4.459140e+03; 4.549090e+03; 4.639040e+03; 4.738980e+03; 4.828920e+03; 4.918870e+03; 5.018810e+03;\n    5.108760e+03; 5.198710e+03; 5.298650e+03; 5.388590e+03; 5.478540e+03; 5.578480e+03; 5.668430e+03; 5.758380e+03;\n    5.858320e+03; 5.948270e+03; 6.038210e+03; 6.138150e+03; 6.228100e+03; 6.318050e+03; 6.417990e+03; 6.507940e+03;\n    6.597880e+03; 6.697820e+03; 6.787770e+03; 6.877720e+03; 6.977660e+03; 7.067610e+03; 7.157550e+03; 7.247500e+03;\n    7.347440e+03; 7.437390e+03; 7.527340e+03; 7.627280e+03; 7.717230e+03; 7.807170e+03; 7.907110e+03; 7.997060e+03;\n    8.087010e+03; 8.186950e+03; 8.276900e+03; 8.366840e+03; 8.466780e+03; 8.556730e+03; 8.646680e+03; 8.746620e+03;\n    8.836570e+03; 8.926510e+03; 9.026460e+03; 9.116400e+03; 9.206350e+03; 9.306290e+03; 9.396240e+03; 9.486180e+03;\n    9.586130e+03; 
9.676070e+03; 9.766020e+03; 9.865960e+03\n  |]\n\nlet euclid_vis_thru =\n  [|\n    5.667901e-04; 1.630730e-03; 4.172531e-03; 1.124922e-03; 2.177489e-03; 3.386911e-03; 3.641207e-03; 1.371951e-02;\n    9.284691e-03; 1.350449e-02; 2.210145e-02; 5.507258e-02; 7.012943e-01; 7.157499e-01; 7.257763e-01; 7.345625e-01;\n    7.426705e-01; 7.485474e-01; 7.527962e-01; 7.552509e-01; 7.566956e-01; 7.574142e-01; 7.585293e-01; 7.588235e-01;\n    7.574724e-01; 7.558434e-01; 7.571670e-01; 7.570556e-01; 7.567355e-01; 7.559193e-01; 7.545203e-01; 7.533978e-01;\n    7.508411e-01; 7.461720e-01; 7.414610e-01; 7.350350e-01; 7.301607e-01; 7.229322e-01; 7.122474e-01; 7.009171e-01;\n    6.870897e-01; 6.690291e-01; 6.520249e-01; 6.316748e-01; 6.043242e-01; 5.787996e-01; 5.515213e-01; 5.182027e-01;\n    4.851838e-01; 4.521600e-01; 4.136761e-01; 3.779477e-01; 3.006216e-01; 1.259014e-02; 1.804853e-03; 2.027216e-03;\n    1.289551e-03; 6.535986e-04; 7.194299e-04; 4.038814e-04\n  |]\n\n(* Euclid/NISP.Y — 60 points *)\nlet euclid_y_wave =\n  [|\n    9.330000e+03; 9.380000e+03; 9.430000e+03; 9.480000e+03; 9.540000e+03; 9.590000e+03; 9.640000e+03; 9.700000e+03;\n    9.750000e+03; 9.800000e+03; 9.850000e+03; 9.910000e+03; 9.960000e+03; 1.001000e+04; 1.007000e+04; 1.012000e+04;\n    1.017000e+04; 1.022000e+04; 1.028000e+04; 1.033000e+04; 1.038000e+04; 1.044000e+04; 1.049000e+04; 1.054000e+04;\n    1.059000e+04; 1.065000e+04; 1.070000e+04; 1.075000e+04; 1.081000e+04; 1.086000e+04; 1.091000e+04; 1.096000e+04;\n    1.102000e+04; 1.107000e+04; 1.112000e+04; 1.118000e+04; 1.123000e+04; 1.128000e+04; 1.133000e+04; 1.139000e+04;\n    1.144000e+04; 1.149000e+04; 1.155000e+04; 1.160000e+04; 1.165000e+04; 1.170000e+04; 1.176000e+04; 1.181000e+04;\n    1.186000e+04; 1.192000e+04; 1.197000e+04; 1.202000e+04; 1.207000e+04; 1.213000e+04; 1.218000e+04; 1.223000e+04;\n    1.229000e+04; 1.234000e+04; 1.239000e+04; 1.245000e+04\n  |]\n\nlet euclid_y_thru =\n  [|\n    1.401100e-04; 9.044250e-04; 2.786910e-02; 1.932270e-01; 
7.417070e-01; 7.539270e-01; 7.683890e-01; 7.725430e-01;\n    7.736840e-01; 7.748860e-01; 7.762310e-01; 7.789970e-01; 7.770510e-01; 7.763010e-01; 7.815180e-01; 7.831650e-01;\n    7.782630e-01; 7.784520e-01; 7.745790e-01; 7.736720e-01; 7.806700e-01; 7.784520e-01; 7.792640e-01; 7.782380e-01;\n    7.713900e-01; 7.704340e-01; 7.682960e-01; 7.640030e-01; 7.705670e-01; 7.696290e-01; 7.648760e-01; 7.663780e-01;\n    7.621110e-01; 7.587620e-01; 7.627900e-01; 7.664810e-01; 7.633970e-01; 7.646770e-01; 7.645850e-01; 7.692010e-01;\n    7.718380e-01; 7.719360e-01; 7.718390e-01; 7.717390e-01; 7.688890e-01; 7.725960e-01; 7.771790e-01; 7.775880e-01;\n    7.796710e-01; 7.805190e-01; 7.824850e-01; 7.843440e-01; 7.759730e-01; 3.002440e-01; 7.116640e-02; 1.522080e-02;\n    3.885880e-03; 1.834220e-03; 1.115430e-03; 6.624430e-04\n  |]\n\n(* Euclid/NISP.J — 60 points *)\nlet euclid_j_wave =\n  [|\n    1.141000e+04; 1.148000e+04; 1.156000e+04; 1.164000e+04; 1.172000e+04; 1.180000e+04; 1.188000e+04; 1.196000e+04;\n    1.204000e+04; 1.212000e+04; 1.220000e+04; 1.228000e+04; 1.236000e+04; 1.244000e+04; 1.252000e+04; 1.260000e+04;\n    1.268000e+04; 1.276000e+04; 1.284000e+04; 1.292000e+04; 1.299000e+04; 1.307000e+04; 1.315000e+04; 1.323000e+04;\n    1.331000e+04; 1.339000e+04; 1.347000e+04; 1.355000e+04; 1.363000e+04; 1.371000e+04; 1.379000e+04; 1.387000e+04;\n    1.395000e+04; 1.403000e+04; 1.411000e+04; 1.419000e+04; 1.427000e+04; 1.435000e+04; 1.443000e+04; 1.451000e+04;\n    1.458000e+04; 1.466000e+04; 1.474000e+04; 1.482000e+04; 1.490000e+04; 1.498000e+04; 1.506000e+04; 1.514000e+04;\n    1.522000e+04; 1.530000e+04; 1.538000e+04; 1.546000e+04; 1.554000e+04; 1.562000e+04; 1.570000e+04; 1.578000e+04;\n    1.586000e+04; 1.594000e+04; 1.602000e+04; 1.610000e+04\n  |]\n\nlet euclid_j_thru =\n  [|\n    1.576900e-04; 4.015150e-04; 3.417080e-03; 1.226570e-01; 7.426110e-01; 7.817110e-01; 7.813510e-01; 7.840630e-01;\n    7.888400e-01; 7.907170e-01; 7.833700e-01; 7.884310e-01; 7.896350e-01; 
7.852690e-01; 7.966270e-01; 7.958310e-01;\n    7.988340e-01; 7.953290e-01; 7.964360e-01; 7.932720e-01; 7.885410e-01; 7.955600e-01; 7.943190e-01; 7.956280e-01;\n    8.027170e-01; 8.039270e-01; 8.032210e-01; 7.995800e-01; 8.013920e-01; 8.024890e-01; 7.976110e-01; 7.968730e-01;\n    7.954540e-01; 7.861820e-01; 7.882250e-01; 7.912090e-01; 7.856070e-01; 7.868450e-01; 7.890830e-01; 7.843430e-01;\n    7.847770e-01; 7.870230e-01; 7.847020e-01; 7.808650e-01; 7.816630e-01; 7.821060e-01; 7.840170e-01; 7.832910e-01;\n    7.825870e-01; 7.872660e-01; 7.816920e-01; 7.772310e-01; 7.810290e-01; 6.745540e-01; 2.009210e-01; 2.169140e-02;\n    3.511730e-03; 9.199010e-04; 3.143400e-04; 1.453270e-04\n  |]\n\n(* Euclid/NISP.H — 60 points *)\nlet euclid_h_wave =\n  [|\n    1.480000e+04; 1.489000e+04; 1.499000e+04; 1.509000e+04; 1.519000e+04; 1.529000e+04; 1.539000e+04; 1.549000e+04;\n    1.559000e+04; 1.569000e+04; 1.579000e+04; 1.589000e+04; 1.599000e+04; 1.609000e+04; 1.619000e+04; 1.629000e+04;\n    1.639000e+04; 1.649000e+04; 1.659000e+04; 1.669000e+04; 1.678000e+04; 1.688000e+04; 1.698000e+04; 1.708000e+04;\n    1.718000e+04; 1.728000e+04; 1.738000e+04; 1.748000e+04; 1.758000e+04; 1.768000e+04; 1.778000e+04; 1.788000e+04;\n    1.798000e+04; 1.808000e+04; 1.818000e+04; 1.828000e+04; 1.838000e+04; 1.848000e+04; 1.858000e+04; 1.868000e+04;\n    1.877000e+04; 1.887000e+04; 1.897000e+04; 1.907000e+04; 1.917000e+04; 1.927000e+04; 1.937000e+04; 1.947000e+04;\n    1.957000e+04; 1.967000e+04; 1.977000e+04; 1.987000e+04; 1.997000e+04; 2.007000e+04; 2.017000e+04; 2.027000e+04;\n    2.037000e+04; 2.047000e+04; 2.057000e+04; 2.067000e+04\n  |]\n\nlet euclid_h_thru =\n  [|\n    1.433800e-04; 3.416300e-04; 1.371980e-03; 1.165150e-02; 2.166120e-01; 7.653910e-01; 7.770660e-01; 7.766830e-01;\n    7.792960e-01; 7.733530e-01; 7.817380e-01; 7.820990e-01; 7.830000e-01; 7.815810e-01; 7.808620e-01; 7.824440e-01;\n    7.788240e-01; 7.785320e-01; 7.810690e-01; 7.777990e-01; 7.773880e-01; 7.825950e-01; 
7.841750e-01; 7.836200e-01;\n    7.841940e-01; 7.853450e-01; 7.824440e-01; 7.815110e-01; 7.833050e-01; 7.838540e-01; 7.843000e-01; 7.839500e-01;\n    7.850730e-01; 7.855760e-01; 7.873750e-01; 7.901270e-01; 7.888930e-01; 7.901500e-01; 7.918380e-01; 7.927560e-01;\n    7.897410e-01; 7.867750e-01; 7.847490e-01; 7.831360e-01; 7.785180e-01; 7.761400e-01; 7.759030e-01; 7.723980e-01;\n    7.665570e-01; 7.651970e-01; 7.639360e-01; 7.611570e-01; 7.567070e-01; 7.462570e-01; 5.895400e-01; 1.360190e-01;\n    1.785060e-02; 3.091850e-03; 7.171190e-04; 1.507920e-04\n  |]\n"
  },
  {
    "path": "dev/umbra/lib/filters.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nlet f64 = Nx.float64\nlet angstrom_to_m = 1e-10\n\nlet make wave_a thru_a =\n  let n = Array.length wave_a in\n  let w = Nx.create f64 [| n |] wave_a in\n  let w = Nx.mul_s w angstrom_to_m in\n  let t = Nx.create f64 [| n |] thru_a in\n  Photometry.bandpass ~wavelength:(Unit.Length.of_tensor w) ~throughput:t\n\n(* SDSS *)\n\nlet sdss_u = make Filter_data.sdss_u_wave Filter_data.sdss_u_thru\nlet sdss_g = make Filter_data.sdss_g_wave Filter_data.sdss_g_thru\nlet sdss_r = make Filter_data.sdss_r_wave Filter_data.sdss_r_thru\nlet sdss_i = make Filter_data.sdss_i_wave Filter_data.sdss_i_thru\nlet sdss_z = make Filter_data.sdss_z_wave Filter_data.sdss_z_thru\n\n(* Johnson-Cousins *)\n\nlet johnson_u = make Filter_data.johnson_u_wave Filter_data.johnson_u_thru\nlet johnson_b = make Filter_data.johnson_b_wave Filter_data.johnson_b_thru\nlet johnson_v = make Filter_data.johnson_v_wave Filter_data.johnson_v_thru\nlet cousins_r = make Filter_data.cousins_r_wave Filter_data.cousins_r_thru\nlet cousins_i = make Filter_data.cousins_i_wave Filter_data.cousins_i_thru\n\n(* 2MASS *)\n\nlet twomass_j = make Filter_data.twomass_j_wave Filter_data.twomass_j_thru\nlet twomass_h = make Filter_data.twomass_h_wave Filter_data.twomass_h_thru\nlet twomass_ks = make Filter_data.twomass_ks_wave Filter_data.twomass_ks_thru\n\n(* Gaia DR3 *)\n\nlet gaia_g = make Filter_data.gaia_g_wave Filter_data.gaia_g_thru\nlet gaia_bp = make Filter_data.gaia_bp_wave Filter_data.gaia_bp_thru\nlet gaia_rp = make Filter_data.gaia_rp_wave Filter_data.gaia_rp_thru\n\n(* Rubin/LSST *)\n\nlet rubin_u = make Filter_data.rubin_u_wave Filter_data.rubin_u_thru\nlet rubin_g = make Filter_data.rubin_g_wave Filter_data.rubin_g_thru\nlet rubin_r = make 
Filter_data.rubin_r_wave Filter_data.rubin_r_thru\nlet rubin_i = make Filter_data.rubin_i_wave Filter_data.rubin_i_thru\nlet rubin_z = make Filter_data.rubin_z_wave Filter_data.rubin_z_thru\nlet rubin_y = make Filter_data.rubin_y_wave Filter_data.rubin_y_thru\n\n(* Euclid *)\n\nlet euclid_vis = make Filter_data.euclid_vis_wave Filter_data.euclid_vis_thru\nlet euclid_y = make Filter_data.euclid_y_wave Filter_data.euclid_y_thru\nlet euclid_j = make Filter_data.euclid_j_wave Filter_data.euclid_j_thru\nlet euclid_h = make Filter_data.euclid_h_wave Filter_data.euclid_h_thru\n"
  },
  {
    "path": "dev/umbra/lib/filters.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Standard astronomical filter bandpasses.\n\n    Tabulated transmission curves from the\n    {{:https://svo2.cab.inta-csic.es/theory/fps/} SVO Filter Profile Service}.\n    Each value is a pre-built {!Photometry.bandpass}.\n\n    {[\n    let mag = Photometry.ab_mag Filters.sdss_r sed\n    ]} *)\n\n(** {1:sdss SDSS ugriz} *)\n\nval sdss_u : Photometry.bandpass\n(** [sdss_u] is the SDSS u-band (298--413 nm, 47 points). *)\n\nval sdss_g : Photometry.bandpass\n(** [sdss_g] is the SDSS g-band (363--583 nm, 89 points). *)\n\nval sdss_r : Photometry.bandpass\n(** [sdss_r] is the SDSS r-band (538--723 nm, 75 points). *)\n\nval sdss_i : Photometry.bandpass\n(** [sdss_i] is the SDSS i-band (643--863 nm, 89 points). *)\n\nval sdss_z : Photometry.bandpass\n(** [sdss_z] is the SDSS z-band (773--1123 nm, 141 points). *)\n\n(** {1:johnson Johnson-Cousins UBVRI} *)\n\nval johnson_u : Photometry.bandpass\n(** [johnson_u] is the Johnson U-band (300--420 nm, 13 points). *)\n\nval johnson_b : Photometry.bandpass\n(** [johnson_b] is the Johnson B-band (370--560 nm, 11 points). *)\n\nval johnson_v : Photometry.bandpass\n(** [johnson_v] is the Johnson V-band (460--740 nm, 15 points). *)\n\nval cousins_r : Photometry.bandpass\n(** [cousins_r] is the Cousins R-band (540--800 nm, 53 points). *)\n\nval cousins_i : Photometry.bandpass\n(** [cousins_i] is the Cousins I-band (700--910 nm, 43 points). *)\n\n(** {1:twomass 2MASS JHKs} *)\n\nval twomass_j : Photometry.bandpass\n(** [twomass_j] is the 2MASS J-band (1062--1450 nm, 107 points). *)\n\nval twomass_h : Photometry.bandpass\n(** [twomass_h] is the 2MASS H-band (1289--1914 nm, 58 points). 
*)\n\nval twomass_ks : Photometry.bandpass\n(** [twomass_ks] is the 2MASS Ks-band (1900--2399 nm, 76 points). *)\n\n(** {1:gaia Gaia DR3} *)\n\nval gaia_g : Photometry.bandpass\n(** [gaia_g] is the Gaia DR3 G-band (330--1040 nm, 74 points). *)\n\nval gaia_bp : Photometry.bandpass\n(** [gaia_bp] is the Gaia DR3 BP-band (328--748 nm, 86 points). *)\n\nval gaia_rp : Photometry.bandpass\n(** [gaia_rp] is the Gaia DR3 RP-band (618--1076 nm, 95 points). *)\n\n(** {1:rubin Rubin/LSST ugrizy} *)\n\nval rubin_u : Photometry.bandpass\n(** [rubin_u] is the Rubin/LSST u-band (320--409 nm, 60 points). *)\n\nval rubin_g : Photometry.bandpass\n(** [rubin_g] is the Rubin/LSST g-band (386--567 nm, 60 points). *)\n\nval rubin_r : Photometry.bandpass\n(** [rubin_r] is the Rubin/LSST r-band (537--706 nm, 60 points). *)\n\nval rubin_i : Photometry.bandpass\n(** [rubin_i] is the Rubin/LSST i-band (676--833 nm, 60 points). *)\n\nval rubin_z : Photometry.bandpass\n(** [rubin_z] is the Rubin/LSST z-band (803--935 nm, 60 points). *)\n\nval rubin_y : Photometry.bandpass\n(** [rubin_y] is the Rubin/LSST y-band (908--1099 nm, 60 points). *)\n\n(** {1:euclid Euclid} *)\n\nval euclid_vis : Photometry.bandpass\n(** [euclid_vis] is the Euclid VIS-band (437--987 nm, 60 points). *)\n\nval euclid_y : Photometry.bandpass\n(** [euclid_y] is the Euclid NISP Y-band (933--1245 nm, 60 points). *)\n\nval euclid_j : Photometry.bandpass\n(** [euclid_j] is the Euclid NISP J-band (1141--1610 nm, 60 points). *)\n\nval euclid_h : Photometry.bandpass\n(** [euclid_h] is the Euclid NISP H-band (1480--2067 nm, 60 points). *)\n"
  },
  {
    "path": "dev/umbra/lib/fits/dune",
    "content": "(library\n (name umbra_fits)\n (public_name umbra.fits)\n (private_modules fits_parser)\n (libraries nx nx.io talon unix))\n"
  },
  {
    "path": "dev/umbra/lib/fits/fits_parser.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nlet err_truncated = \"Fits: unexpected end of file\"\nlet err_no_simple = \"Fits: missing SIMPLE keyword in primary HDU\"\nlet err_bad_tform msg = \"Fits: unsupported TFORM: \" ^ msg\nlet block_size = 2880\n\ntype keyword = { key : string; value : string; comment : string }\n\ntype header = {\n  keywords : keyword list;\n  xtension : string;\n  bitpix : int;\n  naxis : int array;\n  data_bytes : int;\n}\n\ntype col_desc = {\n  name : string;\n  tform : char;\n  repeat : int;\n  width : int;\n  tnull : int64 option;\n  tscal : float;\n  tzero : float;\n}\n\nlet swap16 buf pos =\n  let b0 = Bytes.get_uint8 buf pos in\n  let b1 = Bytes.get_uint8 buf (pos + 1) in\n  Bytes.set_uint8 buf pos b1;\n  Bytes.set_uint8 buf (pos + 1) b0\n\nlet swap32 buf pos =\n  let b0 = Bytes.get_uint8 buf pos in\n  let b1 = Bytes.get_uint8 buf (pos + 1) in\n  let b2 = Bytes.get_uint8 buf (pos + 2) in\n  let b3 = Bytes.get_uint8 buf (pos + 3) in\n  Bytes.set_uint8 buf pos b3;\n  Bytes.set_uint8 buf (pos + 1) b2;\n  Bytes.set_uint8 buf (pos + 2) b1;\n  Bytes.set_uint8 buf (pos + 3) b0\n\nlet swap64 buf pos =\n  let b0 = Bytes.get_uint8 buf pos in\n  let b1 = Bytes.get_uint8 buf (pos + 1) in\n  let b2 = Bytes.get_uint8 buf (pos + 2) in\n  let b3 = Bytes.get_uint8 buf (pos + 3) in\n  let b4 = Bytes.get_uint8 buf (pos + 4) in\n  let b5 = Bytes.get_uint8 buf (pos + 5) in\n  let b6 = Bytes.get_uint8 buf (pos + 6) in\n  let b7 = Bytes.get_uint8 buf (pos + 7) in\n  Bytes.set_uint8 buf pos b7;\n  Bytes.set_uint8 buf (pos + 1) b6;\n  Bytes.set_uint8 buf (pos + 2) b5;\n  Bytes.set_uint8 buf (pos + 3) b4;\n  Bytes.set_uint8 buf (pos + 4) b3;\n  Bytes.set_uint8 buf (pos + 5) b2;\n  Bytes.set_uint8 buf (pos + 6) b1;\n  Bytes.set_uint8 buf 
(pos + 7) b0\n\nlet trim_right s =\n  let len = String.length s in\n  let i = ref (len - 1) in\n  while !i >= 0 && s.[!i] = ' ' do\n    decr i\n  done;\n  if !i = len - 1 then s else String.sub s 0 (!i + 1)\n\nlet parse_card card =\n  let key = trim_right (String.sub card 0 8) in\n  if key = \"COMMENT\" || key = \"HISTORY\" then\n    let content =\n      if String.length card > 8 then\n        trim_right (String.sub card 8 (String.length card - 8))\n      else \"\"\n    in\n    { key; value = content; comment = \"\" }\n  else if String.length card < 10 || card.[8] <> '=' || card.[9] <> ' ' then\n    { key; value = \"\"; comment = \"\" }\n  else\n    let rest = String.sub card 10 (String.length card - 10) in\n    let rest = String.trim rest in\n    if String.length rest > 0 && rest.[0] = '\\'' then begin\n      let len = String.length rest in\n      let i = ref 1 in\n      let buf = Buffer.create 68 in\n      while !i < len do\n        if rest.[!i] = '\\'' then\n          begin if !i + 1 < len && rest.[!i + 1] = '\\'' then begin\n            Buffer.add_char buf '\\'';\n            i := !i + 2\n          end\n          else i := len\n          end\n        else begin\n          Buffer.add_char buf rest.[!i];\n          i := !i + 1\n        end\n      done;\n      { key; value = trim_right (Buffer.contents buf); comment = \"\" }\n    end\n    else\n      begin match String.index_opt rest '/' with\n      | Some i ->\n          let value = trim_right (String.sub rest 0 i) in\n          let comment =\n            String.trim (String.sub rest (i + 1) (String.length rest - i - 1))\n          in\n          { key; value; comment }\n      | None -> { key; value = trim_right rest; comment = \"\" }\n      end\n\nlet read_one_header ic =\n  let keywords = ref [] in\n  let found_end = ref false in\n  let card_buf = Bytes.create 80 in\n  while not !found_end do\n    let block = Bytes.create block_size in\n    (match In_channel.really_input ic block 0 block_size with\n    | None -> 
failwith err_truncated\n    | Some () -> ());\n    for card_i = 0 to 35 do\n      if not !found_end then begin\n        Bytes.blit block (card_i * 80) card_buf 0 80;\n        let card = Bytes.to_string card_buf in\n        let key = trim_right (String.sub card 0 8) in\n        if key = \"END\" then found_end := true\n        else if key <> \"\" then keywords := parse_card card :: !keywords\n      end\n    done\n  done;\n  List.rev !keywords\n\nlet find_keyword keywords key =\n  match List.find_opt (fun kw -> kw.key = key) keywords with\n  | Some kw -> Some kw.value\n  | None -> None\n\nlet find_keyword_int keywords key =\n  match find_keyword keywords key with\n  | Some v -> Some (int_of_string (String.trim v))\n  | None -> None\n\nlet find_keyword_exn keywords key =\n  match find_keyword keywords key with\n  | Some v -> v\n  | None -> failwith (\"Fits: missing required keyword \" ^ key)\n\nlet find_keyword_int_exn keywords key =\n  int_of_string (String.trim (find_keyword_exn keywords key))\n\nlet compute_data_bytes keywords =\n  let bitpix = find_keyword_int_exn keywords \"BITPIX\" in\n  let naxis_n = find_keyword_int_exn keywords \"NAXIS\" in\n  if naxis_n = 0 then 0\n  else begin\n    let total = ref (abs bitpix / 8) in\n    for i = 1 to naxis_n do\n      let key = Printf.sprintf \"NAXIS%d\" i in\n      total := !total * find_keyword_int_exn keywords key\n    done;\n    let pcount =\n      match find_keyword_int keywords \"PCOUNT\" with Some v -> v | None -> 0\n    in\n    let gcount =\n      match find_keyword_int keywords \"GCOUNT\" with Some v -> v | None -> 1\n    in\n    (!total + pcount) * gcount\n  end\n\nlet build_header keywords =\n  let bitpix = find_keyword_int_exn keywords \"BITPIX\" in\n  let naxis_n = find_keyword_int_exn keywords \"NAXIS\" in\n  let naxis =\n    Array.init naxis_n (fun i ->\n        find_keyword_int_exn keywords (Printf.sprintf \"NAXIS%d\" (i + 1)))\n  in\n  let xtension =\n    match find_keyword keywords \"XTENSION\" with Some v 
-> v | None -> \"\"\n  in\n  let data_bytes = compute_data_bytes keywords in\n  { keywords; xtension; bitpix; naxis; data_bytes }\n\nlet read_headers ic =\n  In_channel.seek ic 0L;\n  let headers = ref [] in\n  let first = ref true in\n  let continue = ref true in\n  while !continue do\n    let keywords = try Some (read_one_header ic) with Failure _ -> None in\n    match keywords with\n    | None -> continue := false\n    | Some keywords ->\n        if !first then begin\n          first := false;\n          match find_keyword keywords \"SIMPLE\" with\n          | Some _ -> ()\n          | None -> failwith err_no_simple\n        end;\n        let hdr = build_header keywords in\n        headers := hdr :: !headers;\n        let data_blocks =\n          if hdr.data_bytes = 0 then 0\n          else (hdr.data_bytes + block_size - 1) / block_size\n        in\n        In_channel.seek ic\n          (Int64.add (In_channel.pos ic)\n             (Int64.of_int (data_blocks * block_size)))\n  done;\n  List.rev !headers\n\nlet seek_to_data ic headers hdu =\n  if hdu < 0 || hdu >= List.length headers then\n    failwith\n      (Printf.sprintf \"Fits: HDU %d out of range (file has %d)\" hdu\n         (List.length headers));\n  In_channel.seek ic 0L;\n  for i = 0 to hdu do\n    let h = List.nth headers i in\n    let found_end = ref false in\n    while not !found_end do\n      let block = Bytes.create block_size in\n      (match In_channel.really_input ic block 0 block_size with\n      | None -> failwith err_truncated\n      | Some () -> ());\n      for card_i = 0 to 35 do\n        if not !found_end then begin\n          let key = trim_right (Bytes.sub_string block (card_i * 80) 8) in\n          if key = \"END\" then found_end := true\n        end\n      done\n    done;\n    if i < hdu then begin\n      let data_blocks =\n        if h.data_bytes = 0 then 0\n        else (h.data_bytes + block_size - 1) / block_size\n      in\n      In_channel.seek ic\n        (Int64.add (In_channel.pos 
ic)\n           (Int64.of_int (data_blocks * block_size)))\n    end\n  done;\n  let h = List.nth headers hdu in\n  h.data_bytes\n\nlet parse_tform s =\n  let s = String.trim s in\n  let len = String.length s in\n  if len = 0 then failwith (err_bad_tform \"empty\");\n  let i = ref 0 in\n  while !i < len && s.[!i] >= '0' && s.[!i] <= '9' do\n    incr i\n  done;\n  let repeat = if !i = 0 then 1 else int_of_string (String.sub s 0 !i) in\n  if !i >= len then failwith (err_bad_tform s);\n  let code = s.[!i] in\n  let width =\n    match code with\n    | 'L' -> 1\n    | 'B' -> 1\n    | 'I' -> 2\n    | 'J' -> 4\n    | 'K' -> 8\n    | 'E' -> 4\n    | 'D' -> 8\n    | 'A' -> 1\n    | c -> failwith (err_bad_tform (String.make 1 c))\n  in\n  (code, repeat, width)\n\nlet parse_bintable_cols hdr =\n  let keywords = hdr.keywords in\n  let tfields = find_keyword_int_exn keywords \"TFIELDS\" in\n  List.init tfields (fun i ->\n      let col = i + 1 in\n      let name =\n        match find_keyword keywords (Printf.sprintf \"TTYPE%d\" col) with\n        | Some v -> v\n        | None -> Printf.sprintf \"col%d\" col\n      in\n      let tform_s = find_keyword_exn keywords (Printf.sprintf \"TFORM%d\" col) in\n      let tform, repeat, width = parse_tform tform_s in\n      let tnull =\n        match find_keyword keywords (Printf.sprintf \"TNULL%d\" col) with\n        | Some v -> Some (Int64.of_string (String.trim v))\n        | None -> None\n      in\n      let tscal =\n        match find_keyword keywords (Printf.sprintf \"TSCAL%d\" col) with\n        | Some v -> float_of_string (String.trim v)\n        | None -> 1.0\n      in\n      let tzero =\n        match find_keyword keywords (Printf.sprintf \"TZERO%d\" col) with\n        | Some v -> float_of_string (String.trim v)\n        | None -> 0.0\n      in\n      { name; tform; repeat; width; tnull; tscal; tzero })\n"
  },
  {
    "path": "dev/umbra/lib/fits/fits_parser.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(**/**)\n\n(** Internal FITS parser. *)\n\ntype keyword = { key : string; value : string; comment : string }\n\ntype header = {\n  keywords : keyword list;\n  xtension : string;\n  bitpix : int;\n  naxis : int array;\n  data_bytes : int;\n}\n\ntype col_desc = {\n  name : string;\n  tform : char;\n  repeat : int;\n  width : int;\n  tnull : int64 option;\n  tscal : float;\n  tzero : float;\n}\n\nval read_headers : In_channel.t -> header list\nval seek_to_data : In_channel.t -> header list -> int -> int\nval parse_bintable_cols : header -> col_desc list\nval find_keyword : keyword list -> string -> string option\nval find_keyword_int : keyword list -> string -> int option\nval trim_right : string -> string\nval block_size : int\nval swap16 : bytes -> int -> unit\nval swap32 : bytes -> int -> unit\nval swap64 : bytes -> int -> unit\n\n(**/**)\n"
  },
  {
    "path": "dev/umbra/lib/fits/umbra_fits.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nlet err_not_bintable = \"Fits.read_table: HDU is not a BINTABLE\"\nlet err_not_image = \"Fits.read_image: HDU is not an image\"\nlet err_unsupported_bitpix n = Printf.sprintf \"Fits: unsupported BITPIX %d\" n\nlet err_truncated_data = \"Fits: unexpected end of file in data\"\n\ntype header_card = { key : string; value : string; comment : string }\ntype hdu_type = Primary | Image | Bintable | Ascii_table\n\ntype hdu_info = {\n  index : int;\n  hdu_type : hdu_type;\n  dimensions : int array;\n  num_rows : int option;\n  num_cols : int option;\n}\n\nlet hdu_type_of_header i (hdr : Fits_parser.header) =\n  match hdr.xtension with\n  | \"\" -> if i = 0 then Primary else Image\n  | \"BINTABLE\" -> Bintable\n  | \"TABLE\" -> Ascii_table\n  | \"IMAGE\" -> Image\n  | _ -> Image\n\nlet read_input ic buf n =\n  match In_channel.really_input ic buf 0 n with\n  | None -> failwith err_truncated_data\n  | Some () -> ()\n\nlet info path =\n  let ic = In_channel.open_bin path in\n  Fun.protect\n    ~finally:(fun () -> In_channel.close ic)\n    (fun () ->\n      let headers = Fits_parser.read_headers ic in\n      List.mapi\n        (fun i (hdr : Fits_parser.header) ->\n          let ht = hdu_type_of_header i hdr in\n          let num_rows, num_cols =\n            match ht with\n            | Bintable | Ascii_table ->\n                let nrows =\n                  if Array.length hdr.naxis >= 2 then Some hdr.naxis.(1)\n                  else None\n                in\n                let ncols =\n                  Fits_parser.find_keyword_int hdr.keywords \"TFIELDS\"\n                in\n                (nrows, ncols)\n            | _ -> (None, None)\n          in\n          {\n            index = i;\n            
hdu_type = ht;\n            dimensions = hdr.naxis;\n            num_rows;\n            num_cols;\n          })\n        headers)\n\nlet header ?(hdu = 0) path =\n  let ic = In_channel.open_bin path in\n  Fun.protect\n    ~finally:(fun () -> In_channel.close ic)\n    (fun () ->\n      let headers = Fits_parser.read_headers ic in\n      if hdu < 0 || hdu >= List.length headers then\n        failwith (Printf.sprintf \"Fits.header: HDU %d out of range\" hdu);\n      let h = List.nth headers hdu in\n      List.map\n        (fun (kw : Fits_parser.keyword) ->\n          { key = kw.key; value = kw.value; comment = kw.comment })\n        h.keywords)\n\nlet read_table ?(hdu = 1) path =\n  let ic = In_channel.open_bin path in\n  Fun.protect\n    ~finally:(fun () -> In_channel.close ic)\n    (fun () ->\n      let headers = Fits_parser.read_headers ic in\n      if hdu < 0 || hdu >= List.length headers then\n        failwith (Printf.sprintf \"Fits.read_table: HDU %d out of range\" hdu);\n      let h = List.nth headers hdu in\n      (match hdu_type_of_header hdu h with\n      | Bintable -> ()\n      | _ -> failwith err_not_bintable);\n      let cols = Fits_parser.parse_bintable_cols h in\n      let nrows = if Array.length h.naxis >= 2 then h.naxis.(1) else 0 in\n      let row_bytes = h.naxis.(0) in\n      let (_ : int) = Fits_parser.seek_to_data ic headers hdu in\n      let row_buf = Bytes.create row_bytes in\n      let col_info =\n        List.map\n          (fun (cd : Fits_parser.col_desc) ->\n            let elem_bytes = cd.repeat * cd.width in\n            (cd, Bytes.create (nrows * elem_bytes), elem_bytes))\n          cols\n      in\n      let col_offsets =\n        let off = ref 0 in\n        List.map\n          (fun (cd : Fits_parser.col_desc) ->\n            let o = !off in\n            off := !off + (cd.repeat * cd.width);\n            o)\n          cols\n      in\n      for row = 0 to nrows - 1 do\n        read_input ic row_buf row_bytes;\n        List.iter2\n          
(fun offset (_cd, buf, elem_bytes) ->\n            Bytes.blit row_buf offset buf (row * elem_bytes) elem_bytes)\n          col_offsets col_info\n      done;\n      let err_vector name repeat =\n        failwith\n          (Printf.sprintf \"Fits: vector column '%s' (repeat=%d) not supported\"\n             name repeat)\n      in\n      let talon_cols =\n        List.map\n          (fun (cd, buf, _) ->\n            let col =\n              match cd.Fits_parser.tform with\n              | 'E' ->\n                  if cd.repeat <> 1 then err_vector cd.name cd.repeat;\n                  Talon.Col.float32\n                    (Array.init nrows (fun i ->\n                         let pos = i * 4 in\n                         Fits_parser.swap32 buf pos;\n                         let v =\n                           Int32.float_of_bits (Bytes.get_int32_le buf pos)\n                         in\n                         if cd.tzero = 0.0 && cd.tscal = 1.0 then v\n                         else (v *. cd.tscal) +. cd.tzero))\n              | 'D' ->\n                  if cd.repeat <> 1 then err_vector cd.name cd.repeat;\n                  Talon.Col.float64\n                    (Array.init nrows (fun i ->\n                         let pos = i * 8 in\n                         Fits_parser.swap64 buf pos;\n                         let v =\n                           Int64.float_of_bits (Bytes.get_int64_le buf pos)\n                         in\n                         if cd.tzero = 0.0 && cd.tscal = 1.0 then v\n                         else (v *. cd.tscal) +. 
cd.tzero))\n              | 'J' ->\n                  if cd.repeat <> 1 then err_vector cd.name cd.repeat;\n                  Talon.Col.int32\n                    (Array.init nrows (fun i ->\n                         let pos = i * 4 in\n                         Fits_parser.swap32 buf pos;\n                         let v = Bytes.get_int32_le buf pos in\n                         if cd.tzero = 0.0 && cd.tscal = 1.0 then v\n                         else\n                           Int32.of_float\n                             ((Int32.to_float v *. cd.tscal) +. cd.tzero)))\n              | 'K' ->\n                  if cd.repeat <> 1 then err_vector cd.name cd.repeat;\n                  Talon.Col.int64\n                    (Array.init nrows (fun i ->\n                         let pos = i * 8 in\n                         Fits_parser.swap64 buf pos;\n                         let v = Bytes.get_int64_le buf pos in\n                         if cd.tzero = 0.0 && cd.tscal = 1.0 then v\n                         else\n                           Int64.of_float\n                             ((Int64.to_float v *. cd.tscal) +. cd.tzero)))\n              | 'I' ->\n                  if cd.repeat <> 1 then err_vector cd.name cd.repeat;\n                  Talon.Col.int32\n                    (Array.init nrows (fun i ->\n                         let pos = i * 2 in\n                         Fits_parser.swap16 buf pos;\n                         let v = Bytes.get_int16_le buf pos in\n                         if cd.tzero = 0.0 && cd.tscal = 1.0 then Int32.of_int v\n                         else\n                           Int32.of_float\n                             ((Float.of_int v *. cd.tscal) +. 
cd.tzero)))\n              | 'B' ->\n                  if cd.repeat <> 1 then err_vector cd.name cd.repeat;\n                  Talon.Col.int32\n                    (Array.init nrows (fun i ->\n                         let v = Bytes.get_uint8 buf i in\n                         if cd.tzero = 0.0 && cd.tscal = 1.0 then Int32.of_int v\n                         else\n                           Int32.of_float\n                             ((Float.of_int v *. cd.tscal) +. cd.tzero)))\n              | 'L' ->\n                  if cd.repeat <> 1 then err_vector cd.name cd.repeat;\n                  Talon.Col.bool\n                    (Array.init nrows (fun i ->\n                         let c = Bytes.get buf i in\n                         c = 'T' || c = '\\x01'))\n              | 'A' ->\n                  Talon.Col.string\n                    (Array.init nrows (fun i ->\n                         Fits_parser.trim_right\n                           (Bytes.sub_string buf (i * cd.repeat) cd.repeat)))\n              | c -> failwith (Printf.sprintf \"Fits: unsupported TFORM '%c'\" c)\n            in\n            (cd.name, col))\n          col_info\n      in\n      Talon.create talon_cols)\n\nlet find_keyword_float keywords key =\n  match Fits_parser.find_keyword keywords key with\n  | Some v -> Some (float_of_string (String.trim v))\n  | None -> None\n\nlet read_image ?(hdu = 0) path =\n  let ic = In_channel.open_bin path in\n  Fun.protect\n    ~finally:(fun () -> In_channel.close ic)\n    (fun () ->\n      let headers = Fits_parser.read_headers ic in\n      if hdu < 0 || hdu >= List.length headers then\n        failwith (Printf.sprintf \"Fits.read_image: HDU %d out of range\" hdu);\n      let h = List.nth headers hdu in\n      (match hdu_type_of_header hdu h with\n      | Primary | Image -> ()\n      | _ -> failwith err_not_image);\n      let bscale =\n        match find_keyword_float h.keywords \"BSCALE\" with\n        | Some v -> v\n        | None -> 1.0\n      in\n      let 
bzero =\n        match find_keyword_float h.keywords \"BZERO\" with\n        | Some v -> v\n        | None -> 0.0\n      in\n      let has_scaling = bscale <> 1.0 || bzero <> 0.0 in\n      let (_ : int) = Fits_parser.seek_to_data ic headers hdu in\n      let shape = Array.to_list h.naxis |> List.rev |> Array.of_list in\n      let total = Array.fold_left ( * ) 1 shape in\n      let apply_scaling raw =\n        Nx.add_s (Nx.mul_s (Nx.astype Nx.float64 raw) bscale) bzero\n      in\n      match h.bitpix with\n      | 8 ->\n          let buf = Bytes.create total in\n          read_input ic buf total;\n          let raw =\n            Nx.create Nx.uint8 shape\n              (Array.init total (fun i -> Bytes.get_uint8 buf i))\n          in\n          if has_scaling then Nx_io.P (apply_scaling raw) else Nx_io.P raw\n      | 16 ->\n          let buf = Bytes.create (total * 2) in\n          read_input ic buf (total * 2);\n          let raw =\n            Nx.create Nx.int16 shape\n              (Array.init total (fun i ->\n                   let pos = i * 2 in\n                   Fits_parser.swap16 buf pos;\n                   Bytes.get_int16_le buf pos))\n          in\n          if has_scaling then Nx_io.P (apply_scaling raw) else Nx_io.P raw\n      | 32 ->\n          let buf = Bytes.create (total * 4) in\n          read_input ic buf (total * 4);\n          let raw =\n            Nx.create Nx.int32 shape\n              (Array.init total (fun i ->\n                   let pos = i * 4 in\n                   Fits_parser.swap32 buf pos;\n                   Bytes.get_int32_le buf pos))\n          in\n          if has_scaling then Nx_io.P (apply_scaling raw) else Nx_io.P raw\n      | 64 ->\n          let buf = Bytes.create (total * 8) in\n          read_input ic buf (total * 8);\n          let raw =\n            Nx.create Nx.int64 shape\n              (Array.init total (fun i ->\n                   let pos = i * 8 in\n                   Fits_parser.swap64 buf pos;\n                 
  Bytes.get_int64_le buf pos))\n          in\n          if has_scaling then Nx_io.P (apply_scaling raw) else Nx_io.P raw\n      | -32 ->\n          let buf = Bytes.create (total * 4) in\n          read_input ic buf (total * 4);\n          let raw =\n            Nx.create Nx.float32 shape\n              (Array.init total (fun i ->\n                   let pos = i * 4 in\n                   Fits_parser.swap32 buf pos;\n                   Int32.float_of_bits (Bytes.get_int32_le buf pos)))\n          in\n          if has_scaling then Nx_io.P (apply_scaling raw) else Nx_io.P raw\n      | -64 ->\n          let buf = Bytes.create (total * 8) in\n          read_input ic buf (total * 8);\n          let raw =\n            Nx.create Nx.float64 shape\n              (Array.init total (fun i ->\n                   let pos = i * 8 in\n                   Fits_parser.swap64 buf pos;\n                   Int64.float_of_bits (Bytes.get_int64_le buf pos)))\n          in\n          if has_scaling then Nx_io.P (apply_scaling raw) else Nx_io.P raw\n      | n -> failwith (err_unsupported_bitpix n))\n\nlet pad_to_block oc written =\n  let rem = written mod Fits_parser.block_size in\n  if rem > 0 then\n    output_string oc (String.make (Fits_parser.block_size - rem) '\\x00')\n\nlet write_card oc key value =\n  let card = Bytes.make 80 ' ' in\n  Bytes.blit_string key 0 card 0 (Int.min 8 (String.length key));\n  Bytes.set card 8 '=';\n  Bytes.set card 9 ' ';\n  let v = String.trim value in\n  Bytes.blit_string v 0 card 10 (Int.min 70 (String.length v));\n  output_bytes oc card\n\nlet write_card_str oc key value =\n  write_card oc key (Printf.sprintf \"'%-8s'\" value)\n\nlet write_card_int oc key value =\n  write_card oc key (Printf.sprintf \"%20d\" value)\n\nlet write_end oc cards_written =\n  let card = Bytes.make 80 ' ' in\n  Bytes.blit_string \"END\" 0 card 0 3;\n  output_bytes oc card;\n  let total_cards = cards_written + 1 in\n  let rem = total_cards * 80 mod Fits_parser.block_size in\n  
if rem > 0 then\n    output_string oc (String.make (Fits_parser.block_size - rem) ' ')\n\nlet write_empty_primary oc =\n  write_card oc \"SIMPLE\" \"                   T\";\n  write_card_int oc \"BITPIX\" 8;\n  write_card_int oc \"NAXIS\" 0;\n  write_end oc 3\n\nlet write_image_typed (type a b) ?(overwrite = true) path (tensor : (a, b) Nx.t)\n    =\n  if (not overwrite) && Sys.file_exists path then\n    failwith (\"Fits.write_image: file exists: \" ^ path);\n  let oc = Out_channel.open_bin path in\n  Fun.protect\n    ~finally:(fun () -> Out_channel.close oc)\n    (fun () ->\n      let shape = Nx.shape tensor in\n      let ndim = Array.length shape in\n      let fits_shape = Array.init ndim (fun i -> shape.(ndim - 1 - i)) in\n      let total = Nx.numel tensor in\n      let dt = Nx.dtype_to_string (Nx.dtype tensor) in\n      let bitpix, elem_bytes =\n        match dt with\n        | \"uint8\" -> (8, 1)\n        | \"int16\" -> (16, 2)\n        | \"int32\" -> (32, 4)\n        | \"int64\" -> (64, 8)\n        | \"float32\" -> (-32, 4)\n        | \"float64\" -> (-64, 8)\n        | s -> failwith (\"Fits.write_image: unsupported dtype \" ^ s)\n      in\n      write_card oc \"SIMPLE\" \"                   T\";\n      write_card_int oc \"BITPIX\" bitpix;\n      write_card_int oc \"NAXIS\" ndim;\n      for i = 0 to ndim - 1 do\n        write_card_int oc (Printf.sprintf \"NAXIS%d\" (i + 1)) fits_shape.(i)\n      done;\n      write_end oc (3 + ndim);\n      let flat = Nx.reshape [| total |] tensor in\n      let arr = Nx.to_array flat in\n      let data_bytes = total * elem_bytes in\n      let buf = Bytes.create data_bytes in\n      (match dt with\n      | \"uint8\" ->\n          Array.iteri\n            (fun i (v : a) -> Bytes.set_uint8 buf i (Obj.magic v : int))\n            arr\n      | \"int16\" ->\n          Array.iteri\n            (fun i (v : a) ->\n              let pos = i * 2 in\n              Bytes.set_int16_le buf pos (Obj.magic v : int);\n              
Fits_parser.swap16 buf pos)\n            arr\n      | \"int32\" ->\n          Array.iteri\n            (fun i (v : a) ->\n              let pos = i * 4 in\n              Bytes.set_int32_le buf pos (Obj.magic v : int32);\n              Fits_parser.swap32 buf pos)\n            arr\n      | \"int64\" ->\n          Array.iteri\n            (fun i (v : a) ->\n              let pos = i * 8 in\n              Bytes.set_int64_le buf pos (Obj.magic v : int64);\n              Fits_parser.swap64 buf pos)\n            arr\n      | \"float32\" ->\n          Array.iteri\n            (fun i (v : a) ->\n              let pos = i * 4 in\n              Bytes.set_int32_le buf pos\n                (Int32.bits_of_float (Obj.magic v : float));\n              Fits_parser.swap32 buf pos)\n            arr\n      | \"float64\" ->\n          Array.iteri\n            (fun i (v : a) ->\n              let pos = i * 8 in\n              Bytes.set_int64_le buf pos\n                (Int64.bits_of_float (Obj.magic v : float));\n              Fits_parser.swap64 buf pos)\n            arr\n      | _ -> assert false);\n      output_bytes oc buf;\n      pad_to_block oc data_bytes)\n\nlet write_image ?overwrite path tensor =\n  write_image_typed ?overwrite path tensor\n\nlet write_table ?(overwrite = true) path df =\n  if (not overwrite) && Sys.file_exists path then\n    failwith (\"Fits.write_table: file exists: \" ^ path);\n  let oc = Out_channel.open_bin path in\n  Fun.protect\n    ~finally:(fun () -> Out_channel.close oc)\n    (fun () ->\n      write_empty_primary oc;\n      let col_names = Talon.column_names df in\n      let nrows = Talon.num_rows df in\n      let ncols = List.length col_names in\n      let col_info =\n        List.map\n          (fun name ->\n            let col = Talon.get_column_exn df name in\n            match Talon.Col.dtype col with\n            | `Float32 -> (name, col, \"1E\", 4)\n            | `Float64 -> (name, col, \"1D\", 8)\n            | `Int32 -> (name, col, \"1J\", 
4)\n            | `Int64 -> (name, col, \"1K\", 8)\n            | `String -> (\n                match Talon.to_string_array df name with\n                | Some arr ->\n                    let maxlen =\n                      Array.fold_left\n                        (fun acc v ->\n                          match v with\n                          | Some s -> max acc (String.length s)\n                          | None -> acc)\n                        1 arr\n                    in\n                    (name, col, Printf.sprintf \"%dA\" maxlen, maxlen)\n                | None -> failwith \"Fits.write_table: string column missing\")\n            | `Bool -> (name, col, \"1L\", 1)\n            | `Other -> failwith \"Fits.write_table: unsupported dtype\")\n          col_names\n      in\n      let row_bytes =\n        List.fold_left (fun acc (_, _, _, eb) -> acc + eb) 0 col_info\n      in\n      write_card_str oc \"XTENSION\" \"BINTABLE\";\n      write_card_int oc \"BITPIX\" 8;\n      write_card_int oc \"NAXIS\" 2;\n      write_card_int oc \"NAXIS1\" row_bytes;\n      write_card_int oc \"NAXIS2\" nrows;\n      write_card_int oc \"PCOUNT\" 0;\n      write_card_int oc \"GCOUNT\" 1;\n      write_card_int oc \"TFIELDS\" ncols;\n      let cards = ref 8 in\n      List.iteri\n        (fun i (name, _col, tform, _eb) ->\n          let n = i + 1 in\n          write_card_str oc (Printf.sprintf \"TTYPE%d\" n) name;\n          write_card_str oc (Printf.sprintf \"TFORM%d\" n) tform;\n          cards := !cards + 2)\n        col_info;\n      write_end oc !cards;\n      let col_arrays =\n        List.map\n          (fun (name, col, _tform, _eb) ->\n            match Talon.Col.dtype col with\n            | `Float32 -> (\n                match Talon.to_array Nx.float32 df name with\n                | Some a -> `F32 a\n                | None -> assert false)\n            | `Float64 -> (\n                match Talon.to_array Nx.float64 df name with\n                | Some a -> `F64 a\n           
     | None -> assert false)\n            | `Int32 -> (\n                match Talon.to_array Nx.int32 df name with\n                | Some a -> `I32 a\n                | None -> assert false)\n            | `Int64 -> (\n                match Talon.to_array Nx.int64 df name with\n                | Some a -> `I64 a\n                | None -> assert false)\n            | `String -> (\n                match Talon.to_string_array df name with\n                | Some a -> `Str a\n                | None -> assert false)\n            | `Bool -> (\n                match Talon.to_bool_array df name with\n                | Some a -> `Bool a\n                | None -> assert false)\n            | `Other -> failwith \"Fits.write_table: unsupported dtype\")\n          col_info\n      in\n      let row_buf = Bytes.create row_bytes in\n      for row = 0 to nrows - 1 do\n        let off = ref 0 in\n        List.iter2\n          (fun (_, _, _, eb) col_arr ->\n            (match col_arr with\n            | `F32 arr ->\n                Bytes.set_int32_le row_buf !off (Int32.bits_of_float arr.(row));\n                Fits_parser.swap32 row_buf !off\n            | `F64 arr ->\n                Bytes.set_int64_le row_buf !off (Int64.bits_of_float arr.(row));\n                Fits_parser.swap64 row_buf !off\n            | `I32 arr ->\n                Bytes.set_int32_le row_buf !off arr.(row);\n                Fits_parser.swap32 row_buf !off\n            | `I64 arr ->\n                Bytes.set_int64_le row_buf !off arr.(row);\n                Fits_parser.swap64 row_buf !off\n            | `Str arr -> (\n                Bytes.fill row_buf !off eb ' ';\n                match arr.(row) with\n                | Some s ->\n                    let len = Int.min eb (String.length s) in\n                    Bytes.blit_string s 0 row_buf !off len\n                | None -> ())\n            | `Bool arr ->\n                let v = match arr.(row) with Some true -> 'T' | _ -> 'F' in\n                
Bytes.set row_buf !off v);\n            off := !off + eb)\n          col_info col_arrays;\n        output_bytes oc row_buf\n      done;\n      pad_to_block oc (nrows * row_bytes))\n"
  },
  {
    "path": "dev/umbra/lib/fits/umbra_fits.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** FITS file I/O.\n\n    Reads and writes {{:https://fits.gsfc.nasa.gov/fits_standard.html}FITS}\n    files. Binary tables are loaded into {!Talon.t} dataframes and images into\n    {!Nx.t} tensors. All data is converted from FITS big-endian on read and\n    written as big-endian on write. *)\n\n(** {1:inspect Inspection} *)\n\n(** The type for FITS header data unit kinds. *)\ntype hdu_type =\n  | Primary  (** Primary HDU. *)\n  | Image  (** Image extension. *)\n  | Bintable  (** Binary table extension. *)\n  | Ascii_table  (** ASCII table extension. *)\n\ntype hdu_info = {\n  index : int;  (** Zero-based HDU index. *)\n  hdu_type : hdu_type;  (** Kind of HDU. *)\n  dimensions : int array;  (** NAXIS values. *)\n  num_rows : int option;  (** Row count for table HDUs. *)\n  num_cols : int option;  (** Column count for table HDUs. *)\n}\n(** The type for HDU summary information. *)\n\ntype header_card = {\n  key : string;  (** Keyword name (up to 8 characters). *)\n  value : string;  (** Parsed value string. *)\n  comment : string;  (** Inline comment, if any. *)\n}\n(** The type for FITS header cards. *)\n\nval info : string -> hdu_info list\n(** [info path] is the summary information for every HDU in the FITS file at\n    [path].\n\n    Raises [Failure] if [path] cannot be read or is not a valid FITS file. *)\n\nval header : ?hdu:int -> string -> header_card list\n(** [header path] is the header cards for HDU [hdu] in the FITS file at [path],\n    including COMMENT and HISTORY cards.\n\n    [hdu] defaults to [0] (primary HDU).\n\n    Raises [Failure] if [hdu] is out of range. 
*)\n\n(** {1:reading Reading} *)\n\nval read_table : ?hdu:int -> string -> Talon.t\n(** [read_table path] reads a BINTABLE extension into a dataframe.\n\n    [hdu] defaults to [1] (first extension).\n\n    Supported TFORM types: [E] (float32), [D] (float64), [J] (int32), [K]\n    (int64), [I] (int16), [B] (uint8), [L] (logical), [A] (string). Vector\n    columns (repeat > 1) are not supported except for strings. TSCAL and TZERO\n    are applied when present.\n\n    Raises [Failure] if the HDU is not a BINTABLE, [hdu] is out of range, or a\n    column has an unsupported TFORM type. *)\n\nval read_image : ?hdu:int -> string -> Nx_io.packed\n(** [read_image path] reads an image HDU into a packed {!Nx.t} tensor.\n\n    [hdu] defaults to [0] (primary HDU).\n\n    Supported BITPIX values: [8], [16], [32], [64], [-32], [-64].\n\n    When BSCALE or BZERO header cards are present with non-trivial values\n    (BSCALE != 1.0 or BZERO != 0.0), the physical values [BZERO + BSCALE * raw]\n    are computed and the result is returned as float64 regardless of the\n    original BITPIX. When neither card is present or both have default values,\n    the raw data type is preserved.\n\n    Raises [Failure] if the HDU is not an image, [hdu] is out of range, or\n    BITPIX is unsupported. *)\n\n(** {1:writing Writing} *)\n\nval write_table : ?overwrite:bool -> string -> Talon.t -> unit\n(** [write_table path df] writes [df] as a single BINTABLE extension preceded by\n    an empty primary HDU.\n\n    [overwrite] defaults to [true].\n\n    Raises [Failure] if [overwrite] is [false] and [path] already exists. *)\n\nval write_image : ?overwrite:bool -> string -> ('a, 'b) Nx.t -> unit\n(** [write_image path tensor] writes [tensor] as a primary image HDU.\n\n    [overwrite] defaults to [true].\n\n    Supported dtypes: uint8, int16, int32, int64, float32, float64.\n\n    Raises [Failure] if [overwrite] is [false] and [path] already exists, or if\n    the dtype is unsupported. *)\n"
  },
  {
    "path": "dev/umbra/lib/galactocentric.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nlet pi = Float.pi\nlet f64 = Nx.float64\nlet galcen_distance_default = Unit.Length.of_kpc (Nx.scalar f64 8.122)\nlet z_sun_default = Unit.Length.of_kpc (Nx.scalar f64 0.0208)\n\ntype t = { x : Nx.float64_t; y : Nx.float64_t; z : Nx.float64_t }\n\nlet x t = Unit.Length.of_kpc t.x\nlet y t = Unit.Length.of_kpc t.y\nlet z t = Unit.Length.of_kpc t.z\n\n(* Convert via Galactic coordinates. In Galactic (l,b) the GC is at l=0, b=0, so\n   heliocentric Galactic Cartesian is: x_h = d cos(b) cos(l) toward GC y_h = d\n   cos(b) sin(l) toward rotation z_h = d sin(b) toward NGP\n\n   Galactocentric = heliocentric shifted by Sun's position: x_gc = x_h -\n   galcen_distance y_gc = y_h z_gc = z_h + z_sun *)\n\nlet of_coord ?(galcen_distance = galcen_distance_default)\n    ?(z_sun = z_sun_default) ~distance c =\n  let galcen_distance = Nx.item [] (Unit.Length.in_kpc galcen_distance) in\n  let z_sun = Nx.item [] (Unit.Length.in_kpc z_sun) in\n  let gal = Coord.galactic c in\n  let l_rad = Unit.Angle.to_tensor (Coord.lon gal) in\n  let b_rad = Unit.Angle.to_tensor (Coord.lat gal) in\n  let d_kpc = Unit.Length.in_kpc distance in\n  let n = Nx.numel l_rad in\n  let x_out = Nx.zeros Nx.float64 [| n |] in\n  let y_out = Nx.zeros Nx.float64 [| n |] in\n  let z_out = Nx.zeros Nx.float64 [| n |] in\n  for i = 0 to n - 1 do\n    let l = Nx.item [ i ] l_rad in\n    let b = Nx.item [ i ] b_rad in\n    let d = Nx.item [ i ] d_kpc in\n    let cb = Float.cos b in\n    let xh = d *. cb *. Float.cos l in\n    let yh = d *. cb *. Float.sin l in\n    let zh = d *. Float.sin b in\n    Nx.set_item [ i ] (xh -. galcen_distance) x_out;\n    Nx.set_item [ i ] yh y_out;\n    Nx.set_item [ i ] (zh +. 
z_sun) z_out\n  done;\n  { x = x_out; y = y_out; z = z_out }\n\nlet to_coord ?(galcen_distance = galcen_distance_default)\n    ?(z_sun = z_sun_default) t =\n  let galcen_distance = Nx.item [] (Unit.Length.in_kpc galcen_distance) in\n  let z_sun = Nx.item [] (Unit.Length.in_kpc z_sun) in\n  let n = Nx.numel t.x in\n  let l_out = Nx.zeros Nx.float64 [| n |] in\n  let b_out = Nx.zeros Nx.float64 [| n |] in\n  let d_out = Nx.zeros Nx.float64 [| n |] in\n  for i = 0 to n - 1 do\n    let xg = Nx.item [ i ] t.x in\n    let yg = Nx.item [ i ] t.y in\n    let zg = Nx.item [ i ] t.z in\n    let xh = xg +. galcen_distance in\n    let yh = yg in\n    let zh = zg -. z_sun in\n    let d = Float.sqrt ((xh *. xh) +. (yh *. yh) +. (zh *. zh)) in\n    let b = Float.asin (Float.max ~-.1.0 (Float.min 1.0 (zh /. d))) in\n    let l = Float.atan2 yh xh in\n    let l = if l < 0.0 then l +. (2.0 *. pi) else l in\n    Nx.set_item [ i ] l l_out;\n    Nx.set_item [ i ] b b_out;\n    Nx.set_item [ i ] d d_out\n  done;\n  let coord =\n    Coord.of_galactic\n      ~l:(Unit.Angle.of_tensor l_out)\n      ~b:(Unit.Angle.of_tensor b_out)\n  in\n  let distance = Unit.Length.of_kpc d_out in\n  (coord, distance)\n"
  },
  {
    "path": "dev/umbra/lib/galactocentric.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Galactocentric Cartesian coordinates.\n\n    Converts celestial positions with distances to a right-handed Cartesian\n    frame centered on the Galactic center. The x-axis points from the Sun toward\n    the Galactic center (l=0, b=0), y in the direction of Galactic rotation, z\n    toward the North Galactic Pole.\n\n    Coordinates go through the Galactic frame (ICRS {e ->} Galactic {e ->}\n    heliocentric Cartesian {e ->} Galactocentric). The Galactic center position\n    is defined by the IAU Galactic coordinate system (l=0, b=0).\n\n    Default parameters follow\n    {{:https://ui.adsabs.harvard.edu/abs/2018A%26A...615L..15G}GRAVITY\n     Collaboration (2018)} for the Galactic center distance.\n\n    {[\n      let star =\n        Coord.of_radec\n          ~ra:(Unit.Angle.of_deg (Nx.create f64 [| 1 |] [| 266.0 |]))\n          ~dec:(Unit.Angle.of_deg (Nx.create f64 [| 1 |] [| -29.0 |]))\n      in\n      let gc =\n        Galactocentric.of_coord\n          ~distance:(Unit.Length.of_kpc (Nx.create f64 [| 1 |] [| 8.0 |]))\n          star\n      in\n      let x_kpc = Nx.item [ 0 ] (Unit.Length.in_kpc (Galactocentric.x gc))\n    ]} *)\n\n(** {1:coords Coordinates} *)\n\ntype t\n(** The type for Galactocentric Cartesian positions. *)\n\nval x : t -> Unit.length Unit.t\n(** [x t] is the x coordinate (toward the Galactic center). *)\n\nval y : t -> Unit.length Unit.t\n(** [y t] is the y coordinate (direction of Galactic rotation). *)\n\nval z : t -> Unit.length Unit.t\n(** [z t] is the z coordinate (toward the North Galactic Pole). 
*)\n\n(** {1:converting Converting} *)\n\nval of_coord :\n  ?galcen_distance:Unit.length Unit.t ->\n  ?z_sun:Unit.length Unit.t ->\n  distance:Unit.length Unit.t ->\n  Coord.t ->\n  t\n(** [of_coord ~distance c] converts celestial coordinates [c] with [distance] to\n    Galactocentric Cartesian. Not differentiable (scalar-level trigonometry).\n\n    [galcen_distance] is the Sun-GC distance (defaults to 8.122 kpc, GRAVITY\n    Collaboration 2018). [z_sun] is the Sun's height above the Galactic midplane\n    (defaults to 0.0208 kpc). *)\n\nval to_coord :\n  ?galcen_distance:Unit.length Unit.t ->\n  ?z_sun:Unit.length Unit.t ->\n  t ->\n  Coord.t * Unit.length Unit.t\n(** [to_coord t] converts Galactocentric Cartesian coordinates [t] back to ICRS\n    celestial coordinates and a distance. Not differentiable (scalar-level\n    trigonometry).\n\n    [galcen_distance] defaults to 8.122 kpc. [z_sun] defaults to 0.0208 kpc. *)\n"
  },
  {
    "path": "dev/umbra/lib/kdtree.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\ntype node =\n  | Leaf\n  | Node of {\n      idx : int;\n      x : float;\n      y : float;\n      z : float;\n      split : int;\n      left : node;\n      right : node;\n    }\n\ntype t = { root : node; size : int }\n\nlet coord split x y z = match split with 0 -> x | 1 -> y | _ -> z\n\nlet build xs ys zs =\n  let n = Array.length xs in\n  if n <> Array.length ys || n <> Array.length zs then\n    invalid_arg \"Kdtree.build: arrays must have the same length\";\n  let indices = Array.init n Fun.id in\n  let rec build_rec start len depth =\n    if len = 0 then Leaf\n    else if len = 1 then\n      let i = indices.(start) in\n      Node\n        {\n          idx = i;\n          x = xs.(i);\n          y = ys.(i);\n          z = zs.(i);\n          split = depth mod 3;\n          left = Leaf;\n          right = Leaf;\n        }\n    else begin\n      let split = depth mod 3 in\n      let sub = Array.sub indices start len in\n      Array.sort\n        (fun a b ->\n          Float.compare\n            (coord split xs.(a) ys.(a) zs.(a))\n            (coord split xs.(b) ys.(b) zs.(b)))\n        sub;\n      Array.blit sub 0 indices start len;\n      let mid = len / 2 in\n      let mi = indices.(start + mid) in\n      let left = build_rec start mid (depth + 1) in\n      let right = build_rec (start + mid + 1) (len - mid - 1) (depth + 1) in\n      Node\n        { idx = mi; x = xs.(mi); y = ys.(mi); z = zs.(mi); split; left; right }\n    end\n  in\n  { root = build_rec 0 n 0; size = n }\n\nlet sq_dist px py pz qx qy qz =\n  let dx = px -. qx and dy = py -. qy and dz = pz -. qz in\n  (dx *. dx) +. (dy *. dy) +. (dz *. 
dz)\n\nlet nearest tree qx qy qz =\n  if tree.size = 0 then invalid_arg \"Kdtree.nearest: empty tree\";\n  let best_idx = ref 0 in\n  let best_dist = ref Float.infinity in\n  let rec search node =\n    match node with\n    | Leaf -> ()\n    | Node { idx; x; y; z; split; left; right } ->\n        let d = sq_dist x y z qx qy qz in\n        if d < !best_dist then begin\n          best_dist := d;\n          best_idx := idx\n        end;\n        let q_split = coord split qx qy qz in\n        let p_split = coord split x y z in\n        let diff = q_split -. p_split in\n        let near, far = if diff < 0.0 then (left, right) else (right, left) in\n        search near;\n        if diff *. diff < !best_dist then search far\n  in\n  search tree.root;\n  (!best_idx, !best_dist)\n\nlet within tree qx qy qz max_dist_sq =\n  let results = ref [] in\n  let rec search node =\n    match node with\n    | Leaf -> ()\n    | Node { idx; x; y; z; split; left; right } ->\n        let d = sq_dist x y z qx qy qz in\n        if d <= max_dist_sq then results := (idx, d) :: !results;\n        let q_split = coord split qx qy qz in\n        let p_split = coord split x y z in\n        let diff = q_split -. p_split in\n        let near, far = if diff < 0.0 then (left, right) else (right, left) in\n        search near;\n        if diff *. diff <= max_dist_sq then search far\n  in\n  search tree.root;\n  !results\n"
  },
  {
    "path": "dev/umbra/lib/kdtree.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** 3D kd-tree for nearest-neighbor queries.\n\n    {b Note.} Private module. *)\n\ntype t\n(** The type for a 3D kd-tree. *)\n\nval build : float array -> float array -> float array -> t\n(** [build xs ys zs] is a kd-tree over the points [(xs.(i), ys.(i), zs.(i))].\n    The three arrays must have equal length.\n\n    Raises [Invalid_argument] if the arrays differ in length. *)\n\nval nearest : t -> float -> float -> float -> int * float\n(** [nearest tree qx qy qz] is [(i, d2)] where [i] is the index of the nearest\n    point to [(qx, qy, qz)] and [d2] is the squared Euclidean distance.\n\n    Raises [Invalid_argument] if the tree is empty. *)\n\nval within : t -> float -> float -> float -> float -> (int * float) list\n(** [within tree qx qy qz max_d2] is the list of [(i, d2)] pairs for all points\n    within squared Euclidean distance [max_d2] of [(qx, qy, qz)]. *)\n"
  },
  {
    "path": "dev/umbra/lib/photometry.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nlet f64 = Nx.float64\n\n(* Speed of light for f_lambda to f_nu conversion *)\nlet _c = 299_792_458.0\n\n(* AB magnitude zero-point: 3631 Jy = 3631e-26 W/m²/Hz *)\nlet _ab_zp = 3631.0e-26\n\n(* Wavelength stored internally in metres (SI base unit) *)\ntype bandpass = { wavelength : Nx.float64_t; throughput : Nx.float64_t }\ntype detector = Energy | Photon\n\nlet bandpass ~wavelength ~throughput =\n  let wavelength = Unit.Length.to_tensor wavelength in\n  if Nx.ndim wavelength <> 1 then\n    invalid_arg \"Photometry.bandpass: wavelength must be a 1-D tensor\";\n  if Nx.ndim throughput <> 1 then\n    invalid_arg \"Photometry.bandpass: throughput must be a 1-D tensor\";\n  if Nx.numel wavelength <> Nx.numel throughput then\n    invalid_arg\n      \"Photometry.bandpass: wavelength and throughput must have the same length\";\n  { wavelength; throughput }\n\nlet tophat ~lo ~hi ~n =\n  let lo_m = Nx.item [] (Unit.Length.to_tensor lo) in\n  let hi_m = Nx.item [] (Unit.Length.to_tensor hi) in\n  let wavelength = Nx.linspace f64 lo_m hi_m n in\n  let throughput = Nx.ones f64 [| n |] in\n  { wavelength; throughput }\n\nlet wavelength bp = Unit.Length.of_tensor bp.wavelength\nlet throughput bp = bp.throughput\n\n(* Differentiable trapezoidal integration along the last axis of y. x is always\n   1-D (the wavelength grid). When y has leading batch dimensions the result\n   preserves them. All Nx ops — fully differentiable through Rune. 
*)\nlet trapz y x =\n  let m = Nx.numel x in\n  let x0 = Nx.slice [ R (0, m - 1) ] x in\n  let x1 = Nx.slice [ R (1, m) ] x in\n  let dx = Nx.sub x1 x0 in\n  let y_shape = Nx.shape y in\n  let ndim = Array.length y_shape in\n  if ndim <= 1 then begin\n    let y0 = Nx.slice [ R (0, m - 1) ] y in\n    let y1 = Nx.slice [ R (1, m) ] y in\n    let y_avg = Nx.div_s (Nx.add y0 y1) 2.0 in\n    Nx.sum (Nx.mul y_avg dx)\n  end\n  else begin\n    let y2d = Nx.reshape [| -1; m |] y in\n    let y0 = Nx.slice [ A; R (0, m - 1) ] y2d in\n    let y1 = Nx.slice [ A; R (1, m) ] y2d in\n    let y_avg = Nx.div_s (Nx.add y0 y1) 2.0 in\n    let result = Nx.sum ~axes:[ 1 ] (Nx.mul y_avg dx) in\n    let batch_shape = Array.sub y_shape 0 (ndim - 1) in\n    Nx.reshape batch_shape result\n  end\n\nlet pivot_wavelength bp =\n  let lam = bp.wavelength in\n  let t = bp.throughput in\n  (* lambda_p = sqrt(integral T lambda d lambda / integral T/lambda d lambda) *)\n  let num = trapz (Nx.mul t lam) lam in\n  let den = trapz (Nx.div t lam) lam in\n  Unit.Length.of_tensor (Nx.sqrt (Nx.div num den))\n\n(* Detector weight: 1 for energy-counting, lambda for photon-counting *)\nlet detector_weight detector lam throughput =\n  match detector with Energy -> throughput | Photon -> Nx.mul throughput lam\n\n(* ST magnitude zero-point: -2.5 log10(f_lambda / 3.63e-9 erg/s/cm²/Å) In SI:\n   3.63e-9 erg/s/cm²/Å = 3.63e-9 * 1e-7 * 1e4 * 1e10 W/m²/m = 3.63e-2 W/m²/m *)\nlet _st_zp = 3.63e-2\n\nlet align_spectrum bp spectrum =\n  let lam_bp = bp.wavelength in\n  let lam_sp = Unit.Length.to_tensor (Spectrum.wavelength spectrum) in\n  let same =\n    Nx.numel lam_bp = Nx.numel lam_sp\n    && Nx.item [] (Nx.max (Nx.abs (Nx.sub lam_bp lam_sp))) = 0.0\n  in\n  if same then spectrum\n  else Spectrum.resample ~wavelength:(Unit.Length.of_tensor lam_bp) spectrum\n\nlet flux_density ?(detector = Energy) bp spectrum =\n  let spectrum = align_spectrum bp spectrum in\n  let lam = bp.wavelength in\n  let f = Spectrum.values 
spectrum in\n  let w = detector_weight detector lam bp.throughput in\n  Nx.div (trapz (Nx.mul f w) lam) (trapz w lam)\n\nlet ab_mag ?(detector = Energy) bp spectrum =\n  let spectrum = align_spectrum bp spectrum in\n  let lam = bp.wavelength in\n  let f_lambda = Spectrum.values spectrum in\n  let f_nu = Nx.div (Nx.mul f_lambda (Nx.square lam)) (Nx.scalar f64 _c) in\n  let w = detector_weight detector lam bp.throughput in\n  let mean_fnu = Nx.div (trapz (Nx.mul f_nu w) lam) (trapz w lam) in\n  Nx.mul_s\n    (Nx.log (Nx.div mean_fnu (Nx.scalar f64 _ab_zp)))\n    (-2.5 /. Float.log 10.0)\n\nlet st_mag ?(detector = Energy) bp spectrum =\n  let spectrum = align_spectrum bp spectrum in\n  let lam = bp.wavelength in\n  let f_lambda = Spectrum.values spectrum in\n  let w = detector_weight detector lam bp.throughput in\n  let mean_flam = Nx.div (trapz (Nx.mul f_lambda w) lam) (trapz w lam) in\n  Nx.mul_s\n    (Nx.log (Nx.div mean_flam (Nx.scalar f64 _st_zp)))\n    (-2.5 /. Float.log 10.0)\n\nlet _vega_spectrum =\n  let n = Array.length Vega_data.wave in\n  let w = Nx.create f64 [| n |] Vega_data.wave in\n  let w = Nx.mul_s w 1e-10 in\n  let f = Nx.create f64 [| n |] Vega_data.flux in\n  Spectrum.create ~wavelength:(Unit.Length.of_tensor w) ~values:f\n  |> Spectrum.as_flux_density\n\nlet vega_mag ?(detector = Energy) bp spectrum =\n  let f_src = flux_density ~detector bp spectrum in\n  let f_vega = flux_density ~detector bp _vega_spectrum in\n  Nx.mul_s (Nx.log (Nx.div f_src f_vega)) (-2.5 /. 
Float.log 10.0)\n\nlet color ?detector bp1 bp2 spectrum =\n  Nx.sub (ab_mag ?detector bp1 spectrum) (ab_mag ?detector bp2 spectrum)\n\nlet effective_wavelength ?(detector = Energy) bp spectrum =\n  let spectrum = align_spectrum bp spectrum in\n  let lam = bp.wavelength in\n  let f = Spectrum.values spectrum in\n  let w = detector_weight detector lam bp.throughput in\n  let fw = Nx.mul f w in\n  let num = trapz (Nx.mul fw (Nx.square lam)) lam in\n  let den = trapz (Nx.mul fw lam) lam in\n  Unit.Length.of_tensor (Nx.div num den)\n"
  },
  {
    "path": "dev/umbra/lib/photometry.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Synthetic photometry.\n\n    Computes broadband fluxes and magnitudes by integrating spectra through\n    filter bandpasses using trapezoidal quadrature.\n\n    {[\n      let bp = Photometry.tophat\n        ~lo:(Unit.Length.nm 400.0)\n        ~hi:(Unit.Length.nm 500.0) ~n:100 in\n      let mag = Photometry.ab_mag bp sed\n    ]}\n\n    All photometry functions accept batched spectra (values with leading batch\n    dimensions). When a spectrum has shape [[batch; n_lambda]], the result has\n    shape [[batch]]. *)\n\n(** {1:types Types} *)\n\ntype bandpass\n(** The type for filter transmission curves. *)\n\ntype detector =\n  | Energy\n  | Photon\n      (** The detector convention.\n\n          - {!Energy}: counts incident energy (default). The bandpass-weighted\n            mean is [<f_nu> = integral f_nu T d lambda / integral T d lambda].\n          - {!Photon}: counts photons. Weights both numerator and denominator by\n            [lambda]:\n            [<f_nu> = integral f_nu T lambda d lambda / integral T lambda d\n             lambda]. *)\n\n(** {1:constructors Constructors} *)\n\nval bandpass :\n  wavelength:Unit.length Unit.t -> throughput:Nx.float64_t -> bandpass\n(** [bandpass ~wavelength ~throughput] is a filter from 1-D arrays. [throughput]\n    is dimensionless (typically in \\[0, 1\\]).\n\n    Raises [Invalid_argument] if tensors are not 1-D or have different lengths.\n*)\n\nval tophat : lo:Unit.length Unit.t -> hi:Unit.length Unit.t -> n:int -> bandpass\n(** [tophat ~lo ~hi ~n] is a rectangular bandpass from [lo] to [hi] with [n]\n    wavelength points and unit throughput. 
*)\n\n(** {1:accessors Accessors} *)\n\nval wavelength : bandpass -> Unit.length Unit.t\n(** [wavelength bp] is the wavelength grid. *)\n\nval throughput : bandpass -> Nx.float64_t\n(** [throughput bp] is the throughput curve. *)\n\nval pivot_wavelength : bandpass -> Unit.length Unit.t\n(** [pivot_wavelength bp] is the pivot wavelength\n    {e lambda}{_ p}[ = sqrt(integral T lambda d lambda / integral T/lambda d\n                    lambda)]. *)\n\n(** {1:photometry Synthetic photometry} *)\n\nval flux_density :\n  ?detector:detector ->\n  bandpass ->\n  Spectrum.flux_density Spectrum.t ->\n  Nx.float64_t\n(** [flux_density ?detector bp spectrum] is the bandpass-weighted mean flux\n    density [<f> = integral f T w d lambda / integral T w d lambda] where [w] is\n    [1] for {!Energy} and [lambda] for {!Photon}. [detector] defaults to\n    {!Energy}.\n\n    The spectrum is resampled to the bandpass wavelength grid via linear\n    interpolation if they differ. Differentiable through Rune. *)\n\nval ab_mag :\n  ?detector:detector ->\n  bandpass ->\n  Spectrum.flux_density Spectrum.t ->\n  Nx.float64_t\n(** [ab_mag ?detector bp spectrum] is the AB magnitude of [spectrum] through\n    [bp].\n\n    Computes the mean spectral flux density in f{_ nu}:\n    [<f_nu> = integral (f_lambda lambda{^2}/c) T w d lambda / integral T w d\n     lambda], where [w] is [1] for {!Energy} and [lambda] for {!Photon}, then\n    [m_AB = -2.5 log10(<f_nu> / 3631 Jy)]. [detector] defaults to {!Energy}.\n\n    The spectrum is resampled to the bandpass wavelength grid via linear\n    interpolation if they differ. Differentiable through Rune. 
*)\n\nval st_mag :\n  ?detector:detector ->\n  bandpass ->\n  Spectrum.flux_density Spectrum.t ->\n  Nx.float64_t\n(** [st_mag ?detector bp spectrum] is the ST magnitude of [spectrum] through\n    [bp].\n\n    Computes the bandpass-weighted mean f{_ lambda}, then\n    [m_ST = -2.5 log10(<f_lambda> / 3.63e-9 erg s{^-1} cm{^-2} A{^-1})].\n    [detector] defaults to {!Energy}.\n\n    The spectrum is resampled to the bandpass wavelength grid via linear\n    interpolation if they differ. Differentiable through Rune. *)\n\nval vega_mag :\n  ?detector:detector ->\n  bandpass ->\n  Spectrum.flux_density Spectrum.t ->\n  Nx.float64_t\n(** [vega_mag ?detector bp spectrum] is the Vega magnitude of [spectrum] through\n    [bp].\n\n    Computes [-2.5 log10(<f_lambda> / <f_lambda,Vega>)] where the Vega reference\n    spectrum is from CALSPEC alpha_lyr_stis_011.fits (Bohlin 2014). [detector]\n    defaults to {!Energy}.\n\n    The spectrum is resampled to the bandpass wavelength grid via linear\n    interpolation if they differ. Differentiable through Rune. *)\n\nval color :\n  ?detector:detector ->\n  bandpass ->\n  bandpass ->\n  Spectrum.flux_density Spectrum.t ->\n  Nx.float64_t\n(** [color ?detector bp1 bp2 spectrum] is\n    [ab_mag ?detector bp1 spectrum - ab_mag ?detector bp2 spectrum].\n\n    Differentiable through Rune. *)\n\nval effective_wavelength :\n  ?detector:detector ->\n  bandpass ->\n  Spectrum.flux_density Spectrum.t ->\n  Unit.length Unit.t\n(** [effective_wavelength ?detector bp spectrum] is the source-dependent\n    effective wavelength\n    {e lambda}{_ eff}[ = integral f T w lambda{^2} d lambda / integral f T w\n                      lambda d lambda].\n\n    Unlike {!pivot_wavelength}, this depends on the source spectrum. The\n    spectrum is resampled if grids differ. Differentiable through Rune. *)\n"
  },
  {
    "path": "dev/umbra/lib/spectrum.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nlet f64 = Nx.float64\n\n(* Physical constants (SI) *)\nlet _h = 6.626_070_15e-34\nlet _c = 299_792_458.0\nlet _k_b = 1.380_649e-23\nlet _two_hc2 = 2.0 *. _h *. _c *. _c\nlet _hc_over_k = _h *. _c /. _k_b\n\n(* Spectral kinds — phantom, no runtime representation *)\ntype flux_density\ntype radiance\ntype sampled\n\n(* Wavelength stored internally in metres (SI base unit) *)\ntype 'a t = { wavelength : Nx.float64_t; values : Nx.float64_t }\n\nlet validate_increasing name wl =\n  let n = Nx.numel wl in\n  if n > 1 then\n    for i = 1 to n - 1 do\n      if Nx.item [ i ] wl <= Nx.item [ i - 1 ] wl then\n        invalid_arg (name ^ \": wavelength must be strictly increasing\")\n    done\n\nlet create ~wavelength ~values =\n  let wavelength = Unit.Length.to_tensor wavelength in\n  if Nx.ndim wavelength <> 1 then\n    invalid_arg \"Spectrum.create: wavelength must be a 1-D tensor\";\n  let v_shape = Nx.shape values in\n  let v_ndim = Array.length v_shape in\n  if v_ndim = 0 then invalid_arg \"Spectrum.create: values must be at least 1-D\";\n  if v_shape.(v_ndim - 1) <> Nx.numel wavelength then\n    invalid_arg\n      \"Spectrum.create: last dimension of values must match wavelength length\";\n  validate_increasing \"Spectrum.create\" wavelength;\n  { wavelength; values }\n\nlet wavelength t = Unit.Length.of_tensor t.wavelength\nlet values t = t.values\nlet as_flux_density t = { wavelength = t.wavelength; values = t.values }\nlet as_sampled t = { wavelength = t.wavelength; values = t.values }\n\nlet blackbody ~temperature ~wavelength =\n  let wavelength = Unit.Length.to_tensor wavelength in\n  let temp = Unit.Temperature.to_tensor temperature in\n  let two_hc2 = Nx.scalar f64 _two_hc2 in\n  let hc_k = 
Nx.scalar f64 _hc_over_k in\n  let lam5 = Nx.pow_s wavelength 5.0 in\n  let exponent = Nx.div hc_k (Nx.mul wavelength temp) in\n  let values =\n    Nx.div (Nx.div two_hc2 lam5) (Nx.sub (Nx.exp exponent) (Nx.scalar f64 1.0))\n  in\n  { wavelength; values }\n\nlet power_law ~amplitude ~index ~pivot ~wavelength =\n  let wavelength = Unit.Length.to_tensor wavelength in\n  let pivot = Unit.Length.to_tensor pivot in\n  let ratio = Nx.div wavelength pivot in\n  let values = Nx.mul amplitude (Nx.pow ratio index) in\n  { wavelength; values }\n\nlet redshift ~z t =\n  let one_plus_z = Nx.add_s z 1.0 in\n  {\n    wavelength = Nx.mul t.wavelength one_plus_z;\n    values = Nx.div t.values one_plus_z;\n  }\n\nlet scale factor t = { t with values = Nx.mul factor t.values }\n\nlet mul a b =\n  if Nx.numel a.wavelength <> Nx.numel b.wavelength then\n    invalid_arg \"Spectrum.mul: spectra must have the same wavelength grid\";\n  let max_diff =\n    Nx.item [] (Nx.max (Nx.abs (Nx.sub a.wavelength b.wavelength)))\n  in\n  if max_diff > 0.0 then\n    invalid_arg \"Spectrum.mul: spectra must have the same wavelength grid\";\n  { wavelength = a.wavelength; values = Nx.mul a.values b.values }\n\nlet div a b =\n  if Nx.numel a.wavelength <> Nx.numel b.wavelength then\n    invalid_arg \"Spectrum.div: spectra must have the same wavelength grid\";\n  let max_diff =\n    Nx.item [] (Nx.max (Nx.abs (Nx.sub a.wavelength b.wavelength)))\n  in\n  if max_diff > 0.0 then\n    invalid_arg \"Spectrum.div: spectra must have the same wavelength grid\";\n  { wavelength = a.wavelength; values = Nx.div a.values b.values }\n\nlet add a b =\n  if Nx.numel a.wavelength <> Nx.numel b.wavelength then\n    invalid_arg \"Spectrum.add: spectra must have the same wavelength grid\";\n  let max_diff =\n    Nx.item [] (Nx.max (Nx.abs (Nx.sub a.wavelength b.wavelength)))\n  in\n  if max_diff > 0.0 then\n    invalid_arg \"Spectrum.add: spectra must have the same wavelength grid\";\n  { wavelength = a.wavelength; values 
= Nx.add a.values b.values }\n\nlet resample ~wavelength t =\n  let new_wave = Unit.Length.to_tensor wavelength in\n  if Nx.ndim new_wave <> 1 then\n    invalid_arg \"Spectrum.resample: wavelength must be a 1-D tensor\";\n  validate_increasing \"Spectrum.resample\" new_wave;\n  let old_wave = t.wavelength in\n  let old_values = t.values in\n  let n_old = Nx.numel old_wave and n_new = Nx.numel new_wave in\n  (* Find lower bracket index for each target wavelength (non-differentiable) *)\n  let lo_arr =\n    Array.init n_new (fun j ->\n        let x = Nx.item [ j ] new_wave in\n        let lo = ref 0 and hi = ref (n_old - 1) in\n        while !hi - !lo > 1 do\n          let mid = (!lo + !hi) / 2 in\n          if Nx.item [ mid ] old_wave <= x then lo := mid else hi := mid\n        done;\n        !lo)\n  in\n  let hi_arr =\n    Array.init n_new (fun j -> Int32.of_int (min (lo_arr.(j) + 1) (n_old - 1)))\n  in\n  let lo_arr = Array.map Int32.of_int lo_arr in\n  let lo_t = Nx.create Nx.int32 [| n_new |] lo_arr in\n  let hi_t = Nx.create Nx.int32 [| n_new |] hi_arr in\n  (* Gather source wavelengths and values at bracket endpoints. Nx.take uses\n     B.gather, which Rune differentiates through. 
*)\n  let x0 = Nx.take lo_t old_wave in\n  let x1 = Nx.take hi_t old_wave in\n  let y0 = Nx.take ~axis:(-1) lo_t old_values in\n  let y1 = Nx.take ~axis:(-1) hi_t old_values in\n  (* Linear interpolation — differentiable through Rune *)\n  let dx = Nx.clamp ~min:1e-30 (Nx.sub x1 x0) in\n  let frac = Nx.div (Nx.sub new_wave x0) dx in\n  let values = Nx.add y0 (Nx.mul frac (Nx.sub y1 y0)) in\n  { wavelength = new_wave; values }\n\nlet gaussian ~amplitude ~center ~stddev ~wavelength =\n  let wavelength = Unit.Length.to_tensor wavelength in\n  let center = Unit.Length.to_tensor center in\n  let stddev = Unit.Length.to_tensor stddev in\n  let x = Nx.sub wavelength center in\n  let z = Nx.div x stddev in\n  let values = Nx.mul amplitude (Nx.exp (Nx.mul_s (Nx.mul z z) (-0.5))) in\n  { wavelength; values }\n\nlet lorentzian ~amplitude ~center ~fwhm ~wavelength =\n  let wavelength = Unit.Length.to_tensor wavelength in\n  let center = Unit.Length.to_tensor center in\n  let half_gamma = Nx.div_s (Unit.Length.to_tensor fwhm) 2.0 in\n  let x = Nx.sub wavelength center in\n  let hg2 = Nx.mul half_gamma half_gamma in\n  let values = Nx.mul amplitude (Nx.div hg2 (Nx.add (Nx.mul x x) hg2)) in\n  { wavelength; values }\n\nlet voigt ~amplitude ~center ~sigma ~gamma ~wavelength =\n  let wavelength = Unit.Length.to_tensor wavelength in\n  let center = Unit.Length.to_tensor center in\n  let sigma = Unit.Length.to_tensor sigma in\n  let gamma = Unit.Length.to_tensor gamma in\n  (* Pseudo-Voigt mixing via Thompson, Cox & Hastings (1987). *)\n  let sqrt_2ln2 = Float.sqrt (2.0 *. Float.log 2.0) in\n  let fg = Nx.mul_s sigma (2.0 *. 
sqrt_2ln2) in\n  let fl = Nx.mul_s gamma 2.0 in\n  let fg2 = Nx.mul fg fg in\n  let fg3 = Nx.mul fg2 fg in\n  let fg4 = Nx.mul fg3 fg in\n  let fg5 = Nx.mul fg4 fg in\n  let fl2 = Nx.mul fl fl in\n  let fl3 = Nx.mul fl2 fl in\n  let fl4 = Nx.mul fl3 fl in\n  let fl5 = Nx.mul fl4 fl in\n  let f =\n    Nx.pow_s\n      (Nx.add fg5\n         (Nx.add\n            (Nx.mul_s (Nx.mul fg4 fl) 2.69269)\n            (Nx.add\n               (Nx.mul_s (Nx.mul fg3 fl2) 2.42843)\n               (Nx.add\n                  (Nx.mul_s (Nx.mul fg2 fl3) 4.47163)\n                  (Nx.add (Nx.mul_s (Nx.mul fg fl4) 0.07842) fl5)))))\n      0.2\n  in\n  let ratio = Nx.div fl f in\n  let ratio2 = Nx.mul ratio ratio in\n  let ratio3 = Nx.mul ratio2 ratio in\n  let eta =\n    Nx.add (Nx.mul_s ratio 1.36603)\n      (Nx.add (Nx.mul_s ratio2 (-0.47719)) (Nx.mul_s ratio3 0.11116))\n  in\n  (* Gaussian component (unit height at center) *)\n  let x = Nx.sub wavelength center in\n  let sig_eff = Nx.div_s f (2.0 *. sqrt_2ln2) in\n  let z_g = Nx.div x sig_eff in\n  let gauss = Nx.exp (Nx.mul_s (Nx.mul z_g z_g) (-0.5)) in\n  (* Lorentzian component (unit height at center) *)\n  let hf = Nx.div_s f 2.0 in\n  let hf2 = Nx.mul hf hf in\n  let lorentz = Nx.div hf2 (Nx.add (Nx.mul x x) hf2) in\n  let values =\n    Nx.mul amplitude\n      (Nx.add (Nx.mul eta lorentz)\n         (Nx.mul (Nx.sub (Nx.scalar f64 1.0) eta) gauss))\n  in\n  { wavelength; values }\n"
  },
  {
    "path": "dev/umbra/lib/spectrum.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Sampled spectral quantities on a wavelength grid.\n\n    A {!'a t} pairs a wavelength grid with spectral values parameterised by a\n    phantom {e kind} that tracks the physical meaning of the values:\n\n    - {!flux_density}: spectral flux density f{_ lambda} (W m{^ -2} m{^ -1}).\n    - {!radiance}: spectral radiance B{_ lambda} (W m{^ -2} m{^ -1} sr{^ -1}).\n    - {!sampled}: arbitrary values with no physical assumption.\n\n    Operations that depend on the physical interpretation of the values (e.g.,\n    {!redshift}, {!val-Photometry.ab_mag}) require a specific kind, preventing\n    accidental misuse at compile time. Use {!as_flux_density} to explicitly\n    reinterpret values when the physical meaning is known to the caller.\n\n    {[\n      let wave = Unit.Length.of_m (Nx.linspace Nx.float64 1e-7 1e-5 1000) in\n      let sed =\n        Spectrum.blackbody\n          ~temperature:(Unit.Temperature.of_kelvin (Nx.scalar Nx.float64 5800.0))\n          ~wavelength:wave\n        |> Spectrum.as_flux_density\n      in\n      let reddened =\n        Extinction.apply (Extinction.ccm89 ~rv) ~av sed\n    ]}\n\n    {2:batch Batched spectra}\n\n    Values may have leading batch dimensions: a spectrum with wavelength\n    [[n_lambda]] and values [[batch; n_lambda]] represents [batch] spectra\n    sharing a wavelength grid. All operations ({!resample}, {!scale}, {!add},\n    {!val-Photometry.ab_mag}, {!val-Extinction.apply}, etc.) 
broadcast over\n    leading dimensions via Nx:\n\n    {[\n      let values = Nx.stack (List.map Spectrum.values templates) in\n      let batch =\n        Spectrum.create ~wavelength ~values |> Spectrum.as_flux_density\n      in\n      let mags = Photometry.ab_mag bp batch  (* shape [batch] *)\n    ]}\n\n    {b Note.} {!redshift} with a per-spectrum [z] does not broadcast — it\n    changes the wavelength grid, breaking the shared-grid invariant. Use\n    [List.map] or [Rune.vmap] for per-spectrum redshifts. *)\n\n(** {1:kinds Spectral kinds} *)\n\ntype flux_density\n(** Phantom type for spectral flux density f{_ lambda} (W m{^ -2} m{^ -1}). *)\n\ntype radiance\n(** Phantom type for spectral radiance B{_ lambda} (W m{^ -2} m{^ -1} sr{^ -1}).\n*)\n\ntype sampled\n(** Phantom type for arbitrary sampled spectral values. *)\n\n(** {1:types Types} *)\n\ntype 'a t\n(** The type for spectra parameterised by spectral kind ['a]. *)\n\n(** {1:constructors Constructors} *)\n\nval create : wavelength:Unit.length Unit.t -> values:Nx.float64_t -> sampled t\n(** [create ~wavelength ~values] is a tabulated spectrum. [wavelength] must be\n    1-D. [values] must be at least 1-D with its last dimension matching\n    [wavelength]; leading dimensions are preserved as batch dimensions.\n\n    Raises [Invalid_argument] if [wavelength] is not 1-D, the last dimension of\n    [values] does not match, or [wavelength] is not strictly increasing. *)\n\n(** {1:accessors Accessors} *)\n\nval wavelength : 'a t -> Unit.length Unit.t\n(** [wavelength s] is the wavelength grid. *)\n\nval values : 'a t -> Nx.float64_t\n(** [values s] is the spectral values. *)\n\n(** {1:casts Kind casts} *)\n\nval as_flux_density : _ t -> flux_density t\n(** [as_flux_density s] reinterprets [s] as spectral flux density. The caller is\n    responsible for ensuring the values represent f{_ lambda}. 
Use this when\n    working with external data or when only relative values matter (e.g.,\n    fitting colours from a blackbody model). *)\n\nval as_sampled : _ t -> sampled t\n(** [as_sampled s] forgets the spectral kind. *)\n\n(** {1:models Parametric models} *)\n\nval blackbody :\n  temperature:Unit.temperature Unit.t ->\n  wavelength:Unit.length Unit.t ->\n  radiance t\n(** [blackbody ~temperature ~wavelength] is the Planck spectral radiance\n    B{_ lambda}(T) in W m{^ -2} m{^ -1} sr{^ -1} at the given wavelengths. This\n    is a per-steradian quantity; multiply by a solid angle to obtain spectral\n    irradiance. Differentiable through Rune. *)\n\nval power_law :\n  amplitude:Nx.float64_t ->\n  index:Nx.float64_t ->\n  pivot:Unit.length Unit.t ->\n  wavelength:Unit.length Unit.t ->\n  sampled t\n(** [power_law ~amplitude ~index ~pivot ~wavelength] is the spectrum\n    [amplitude * (wavelength / pivot){^index}]. Differentiable through Rune. *)\n\n(** {1:operations Operations} *)\n\nval redshift : z:Nx.float64_t -> flux_density t -> flux_density t\n(** [redshift ~z s] shifts [s] to redshift [z]. Wavelengths are multiplied by\n    [(1+z)] and values are divided by [(1+z)].\n\n    Restricted to {!flux_density} spectra because the [(1+z){^ -1}] dimming\n    factor is specific to spectral flux density. Differentiable through Rune. *)\n\nval scale : Nx.float64_t -> 'a t -> 'a t\n(** [scale factor s] is [s] with values multiplied element-wise by [factor].\n    [factor] may be a scalar or a tensor that broadcasts with the values.\n    Differentiable through Rune. *)\n\nval mul : 'a t -> sampled t -> 'a t\n(** [mul a b] multiplies values element-wise. [a]'s spectral kind is preserved;\n    [b] is treated as a dimensionless modifier (transmission curve, efficiency\n    function, etc.). Both must share the same wavelength grid. Differentiable\n    through Rune.\n\n    Raises [Invalid_argument] if the wavelength grids differ in length or\n    values. 
*)\n\nval div : 'a t -> sampled t -> 'a t\n(** [div a b] divides values element-wise. [a]'s spectral kind is preserved; [b]\n    is treated as a dimensionless modifier. Both must share the same wavelength\n    grid. Differentiable through Rune.\n\n    Raises [Invalid_argument] if the wavelength grids differ in length or\n    values. *)\n\nval add : 'a t -> 'a t -> 'a t\n(** [add a b] is the element-wise sum of two spectra. Both must share the same\n    wavelength grid. Differentiable through Rune.\n\n    Raises [Invalid_argument] if the wavelength grids differ in length or\n    values. *)\n\nval resample : wavelength:Unit.length Unit.t -> 'a t -> 'a t\n(** [resample ~wavelength s] resamples [s] onto a new wavelength grid using\n    linear interpolation. Leading batch dimensions are preserved. Differentiable\n    through Rune with respect to the spectrum values (index computation is not\n    differentiable, but the interpolation weights and gather operations are).\n\n    Raises [Invalid_argument] if [wavelength] is not 1-D or not strictly\n    increasing. *)\n\n(** {1:lines Line profiles} *)\n\nval gaussian :\n  amplitude:Nx.float64_t ->\n  center:Unit.length Unit.t ->\n  stddev:Unit.length Unit.t ->\n  wavelength:Unit.length Unit.t ->\n  sampled t\n(** [gaussian ~amplitude ~center ~stddev ~wavelength] is the Gaussian profile\n    [amplitude * exp(-0.5 * ((lambda - center) / stddev){^2})].\n\n    [amplitude], [center], and [stddev] may be scalar tensors; they broadcast\n    against [wavelength]. Differentiable through Rune. *)\n\nval lorentzian :\n  amplitude:Nx.float64_t ->\n  center:Unit.length Unit.t ->\n  fwhm:Unit.length Unit.t ->\n  wavelength:Unit.length Unit.t ->\n  sampled t\n(** [lorentzian ~amplitude ~center ~fwhm ~wavelength] is the Lorentzian profile\n    [amplitude * (gamma/2){^2} / ((lambda - center){^2} + (gamma/2){^2})] where\n    [gamma = fwhm]. The profile is height-normalized, peaking at [amplitude] at\n    [center] (not area-normalized). Differentiable through Rune. 
*)\n\nval voigt :\n  amplitude:Nx.float64_t ->\n  center:Unit.length Unit.t ->\n  sigma:Unit.length Unit.t ->\n  gamma:Unit.length Unit.t ->\n  wavelength:Unit.length Unit.t ->\n  sampled t\n(** [voigt ~amplitude ~center ~sigma ~gamma ~wavelength] is the pseudo-Voigt\n    approximation of the Voigt profile (Thompson, Cox & Hastings 1987). [sigma]\n    is the Gaussian standard deviation and [gamma] is the Lorentzian half-width\n    at half-maximum. Accurate to <1% of the exact Faddeeva-based Voigt.\n    Differentiable through Rune. *)\n"
  },
  {
    "path": "dev/umbra/lib/survey.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nlet f64 = Nx.float64\nlet c_km_s = 299792.458\nlet h0_ref = 100.0\nlet steradian_to_arcmin2 = 11818102.86004228\nlet c1_rho_crit = 0.0134\n\n(* Redshift distributions *)\n\ntype nz = { eval : Nx.float64_t -> Nx.float64_t; zmax : float }\n\nlet simps_float f a b n =\n  let h = (b -. a) /. Float.of_int n in\n  let sum = ref (f a +. f b) in\n  for i = 1 to n - 1 do\n    let x = a +. (Float.of_int i *. h) in\n    let w = if i mod 2 = 1 then 4.0 else 2.0 in\n    sum := !sum +. (w *. f x)\n  done;\n  !sum *. h /. 3.0\n\nlet smail ?(zmax = 10.0) ~a ~b ~z0 () =\n  let raw z_f = (z_f ** a) *. Float.exp (-.((z_f /. z0) ** b)) in\n  let norm = simps_float raw 0.0 zmax 256 in\n  let eval z =\n    let z_f = Nx.item [] z in\n    Nx.scalar f64 (raw z_f /. norm)\n  in\n  { eval; zmax }\n\nlet tabulated ~z ~pz () =\n  let n = (Nx.shape z).(0) in\n  let zmax = Nx.item [ n - 1 ] z in\n  let norm = ref 0.0 in\n  for i = 0 to n - 2 do\n    let dz = Nx.item [ i + 1 ] z -. Nx.item [ i ] z in\n    norm := !norm +. (0.5 *. (Nx.item [ i ] pz +. Nx.item [ i + 1 ] pz) *. dz)\n  done;\n  let eval zq =\n    let zq_f = Nx.item [] zq in\n    if zq_f <= Nx.item [ 0 ] z || zq_f >= zmax then Nx.scalar f64 0.0\n    else begin\n      let idx = ref 0 in\n      for i = 0 to n - 2 do\n        if Nx.item [ i ] z <= zq_f then idx := i\n      done;\n      let i = !idx in\n      let z0 = Nx.item [ i ] z and z1 = Nx.item [ i + 1 ] z in\n      let p0 = Nx.item [ i ] pz and p1 = Nx.item [ i + 1 ] pz in\n      let frac = (zq_f -. z0) /. (z1 -. z0) in\n      Nx.scalar f64 ((p0 +. (frac *. (p1 -. p0))) /. 
!norm)\n    end\n  in\n  { eval; zmax }\n\nlet custom_nz ?(zmax = 10.0) eval = { eval; zmax }\nlet eval_nz nz z = nz.eval z\nlet nz_zmax nz = nz.zmax\n\n(* Galaxy bias *)\n\ntype bias = Cosmo.params -> Nx.float64_t -> Nx.float64_t\n\nlet constant_bias b _p _z = Nx.scalar f64 b\n\nlet inverse_growth_bias b0 p z =\n  let d = Cosmo.growth_factor ~p z in\n  Nx.div (Nx.scalar f64 b0) d\n\n(* Power spectrum backends *)\n\ntype power = Cosmo.params -> Nx.float64_t -> Nx.float64_t -> Nx.float64_t\n\nlet linear p k z = Cosmo.linear_power ~p k z\nlet nonlinear p k z = Cosmo.nonlinear_power ~p k z\n\nlet baryonic_feedback ?(a_bary = 0.0) ?(log10_k_star = 1.0) ?(sigma = 0.55)\n    base_power =\n fun p k z ->\n  let pk = base_power p k z in\n  if a_bary = 0.0 then pk\n  else\n    let inv_sigma2 = -1.0 /. (sigma *. sigma) in\n    let log10_k = Nx.div_s (Nx.log k) (Float.log 10.0) in\n    let delta = Nx.sub_s log10_k log10_k_star in\n    let gauss = Nx.exp (Nx.mul_s (Nx.mul delta delta) inv_sigma2) in\n    Nx.sub pk (Nx.mul_s (Nx.mul gauss pk) a_bary)\n\n(* Tracers *)\n\ntype tracer_kind =\n  | Weak_lensing of { ia_bias : bias option; sigma_e : float; m_bias : float }\n  | Number_counts of { bias : bias }\n  | Custom of {\n      kernel :\n        p:Cosmo.params -> z:Nx.float64_t -> chi:Nx.float64_t -> Nx.float64_t;\n    }\n\ntype tracer = {\n  nz : nz option;\n  n_gal : float;\n  noise : float;\n  kind : tracer_kind;\n  zmax : float;\n}\n\nlet weak_lensing ?ia_bias ?(sigma_e = 0.26) ?(m_bias = 0.0) ?(n_gal = 1.0) nz =\n  let noise = sigma_e *. sigma_e /. (n_gal *. steradian_to_arcmin2) in\n  {\n    nz = Some nz;\n    n_gal;\n    noise;\n    kind = Weak_lensing { ia_bias; sigma_e; m_bias };\n    zmax = nz.zmax;\n  }\n\nlet number_counts ~bias ?(n_gal = 1.0) nz =\n  let noise = 1.0 /. (n_gal *. 
steradian_to_arcmin2) in\n  { nz = Some nz; n_gal; noise; kind = Number_counts { bias }; zmax = nz.zmax }\n\nlet tracer ?(noise = 0.0) ?(zmax = 3.0) kernel =\n  { nz = None; n_gal = 0.0; noise; kind = Custom { kernel }; zmax }\n\n(* Cls result type *)\n\ntype cls = {\n  ell : Nx.float64_t;\n  tracers : tracer array;\n  spectra : Nx.float64_t;\n}\n\n(* Cl index ordering: upper triangle *)\n\nlet pair_index nt i j =\n  let a, b = if i <= j then (i, j) else (j, i) in\n  (a * ((2 * nt) - a - 1) / 2) + b\n\nlet cl_pairs nt =\n  let pairs = ref [] in\n  for i = 0 to nt - 1 do\n    for j = i to nt - 1 do\n      pairs := (i, j) :: !pairs\n    done\n  done;\n  List.rev !pairs\n\n(* Evaluate n(z) for one bin on the z grid. Returns tensor [n_z]. Uses Nx.stack\n   so gradients flow through custom_nz eval functions. *)\nlet eval_nz_grid nz z_arr n_z =\n  Nx.stack (List.init n_z (fun j -> nz.eval (Nx.scalar f64 z_arr.(j))))\n\n(* Reverse cumulative trapezoidal sum of [f_vec] (length [n]) with spacing [dz].\n   result[j] = ∫_{x_j}^{x_{n-1}} f(x) dx via the trapezoidal rule. *)\nlet rev_cumtrapz f_vec n dz =\n  let left = Nx.slice [ R (0, n - 1) ] f_vec in\n  let right = Nx.slice [ R (1, n) ] f_vec in\n  let mid = Nx.mul_s (Nx.add left right) (0.5 *. dz) in\n  let partial = Nx.flip (Nx.cumsum ~axis:0 (Nx.flip mid)) in\n  Nx.concatenate [ partial; Nx.zeros f64 [| 1 |] ]\n\n(* Angular power spectra *)\n\nlet angular_cl ?(p = Cosmo.planck18) ?(power = nonlinear) ~ell tracers =\n  let tracers_arr = Array.of_list tracers in\n  let nt = Array.length tracers_arr in\n  let pairs = cl_pairs nt in\n  let pairs_arr = Array.of_list pairs in\n  let zmax =\n    Array.fold_left (fun acc t -> Float.max acc t.zmax) 0.0 tracers_arr\n  in\n  (* Odd point count so the composite Simpson weights below are valid. *)\n  let n_z = 101 in\n  let dz = zmax /. Float.of_int (n_z - 1) in\n  let z_arr = Array.init n_z (fun i -> Float.of_int i *. 
dz) in\n  z_arr.(0) <- 1e-6;\n  let z_vec = Nx.create f64 [| n_z |] z_arr in\n\n  (* Simpson weights: tensor [n_z] *)\n  let sw =\n    Array.init n_z (fun i ->\n        if i = 0 || i = n_z - 1 then 1.0 else if i mod 2 = 1 then 4.0 else 2.0)\n  in\n  let simpson_w = Nx.mul_s (Nx.create f64 [| n_z |] sw) (dz /. 3.0) in\n\n  (* Precompute z-dependent quantities as tensors — differentiable through p.\n     comoving_distance and growth_factor use GL quadrature internally and cannot\n     accept vector z, so we loop over scalar z values. *)\n  let h_t = Nx.div (Cosmo.h0 p) (Nx.scalar f64 h0_ref) in\n  let chi_vec =\n    Nx.stack\n      (List.init n_z (fun j ->\n           let z_t = Nx.scalar f64 z_arr.(j) in\n           Nx.mul (Unit.Length.in_mpc (Cosmo.comoving_distance ~p z_t)) h_t))\n  in\n  let chi_safe = Nx.clamp ~min:1e-10 chi_vec in\n  let h_vec = Cosmo.hubble ~p z_vec in\n  let dchi_dz_vec = Nx.div (Nx.mul_s h_t c_km_s) h_vec in\n  let growth_vec =\n    Nx.stack\n      (List.init n_z (fun j -> Cosmo.growth_factor ~p (Nx.scalar f64 z_arr.(j))))\n  in\n  let omega_m_t = Cosmo.omega_m p in\n\n  (* n(z) values per tracer: tensors [n_z], differentiable through custom_nz *)\n  let nz_arrs = Array.make nt (Nx.zeros f64 [| n_z |]) in\n  Array.iteri\n    (fun idx t ->\n      match t.nz with\n      | Some nz -> nz_arrs.(idx) <- eval_nz_grid nz z_arr n_z\n      | None -> ())\n    tracers_arr;\n\n  (* Kernel base vectors per tracer: tensor [n_z], without ell_factor for WL *)\n  let kernel_bases = Array.make nt (Nx.zeros f64 [| n_z |]) in\n  let kernel_has_ell_factor = Array.make nt false in\n  Array.iteri\n    (fun idx t ->\n      match t.kind with\n      | Weak_lensing { ia_bias; sigma_e = _; m_bias } ->\n          let nz_tensor = nz_arrs.(idx) in\n          (* A(z_j) = ∫_{z_j}^{zmax} n(z') dz' *)\n          let a_vec = rev_cumtrapz nz_tensor n_z dz in\n          (* B(z_j) = ∫_{z_j}^{zmax} n(z')/χ(z') dz' — tensor, through chi *)\n          let nz_over_chi = Nx.div 
nz_tensor chi_safe in\n          let b_vec = rev_cumtrapz nz_over_chi n_z dz in\n          (* g = A - chi * B *)\n          let g_vec = Nx.sub a_vec (Nx.mul chi_vec b_vec) in\n          (* WL kernel base: (3 H0² Ωm / 2c) × (1+z) × χ × g *)\n          let prefactor =\n            Nx.mul_s omega_m_t (3.0 *. h0_ref *. h0_ref /. (2.0 *. c_km_s))\n          in\n          let one_plus_z = Nx.add_s z_vec 1.0 in\n          let k_base =\n            Nx.mul prefactor (Nx.mul one_plus_z (Nx.mul chi_vec g_vec))\n          in\n          (* Add NLA intrinsic alignment if present *)\n          let k_base =\n            match ia_bias with\n            | None -> k_base\n            | Some ia_b ->\n                let ia_tensor =\n                  Nx.stack\n                    (List.init n_z (fun j -> ia_b p (Nx.scalar f64 z_arr.(j))))\n                in\n                (* K_IA = -(C₁ ρ_crit Ωm / D(z)) × n(z) × b_IA(z) × H(z) *)\n                let ia_kernel =\n                  Nx.mul\n                    (Nx.mul_s omega_m_t (-.c1_rho_crit))\n                    (Nx.mul\n                       (Nx.div nz_tensor growth_vec)\n                       (Nx.mul ia_tensor h_vec))\n                in\n                Nx.add k_base ia_kernel\n          in\n          (* Shear multiplicative bias: W_obs = (1+m) W_true *)\n          let k_base =\n            if m_bias = 0.0 then k_base else Nx.mul_s k_base (1.0 +. 
m_bias)\n          in\n          kernel_bases.(idx) <- k_base;\n          kernel_has_ell_factor.(idx) <- true\n      | Number_counts { bias } ->\n          let nz_tensor = nz_arrs.(idx) in\n          let bias_tensor =\n            Nx.stack (List.init n_z (fun j -> bias p (Nx.scalar f64 z_arr.(j))))\n          in\n          (* NC kernel: n(z) × b(z) × H(z) — no ell factor *)\n          kernel_bases.(idx) <- Nx.mul nz_tensor (Nx.mul bias_tensor h_vec);\n          kernel_has_ell_factor.(idx) <- false\n      | Custom { kernel } ->\n          (* Custom kernel: user provides the full W(z) *)\n          kernel_bases.(idx) <-\n            Nx.stack\n              (List.init n_z (fun j ->\n                   let z_t = Nx.scalar f64 z_arr.(j) in\n                   let chi_t = Nx.get [ j ] chi_safe in\n                   kernel ~p ~z:z_t ~chi:chi_t));\n          kernel_has_ell_factor.(idx) <- false)\n    tracers_arr;\n\n  (* Common integration weight: dchi/dz / chi² / c² × simpson *)\n  let integ_weight =\n    Nx.mul simpson_w\n      (Nx.div_s\n         (Nx.div dchi_dz_vec (Nx.mul chi_safe chi_safe))\n         (c_km_s *. c_km_s))\n  in\n\n  (* Power spectrum grid [n_z, n_ell]: loop over z (scalar), vectorized over k.\n     Both linear_power and nonlinear_power accept vector k but scalar z. *)\n  let pk_grid =\n    Nx.stack\n      (List.init n_z (fun z_idx ->\n           let z_t = Nx.scalar f64 z_arr.(z_idx) in\n           let chi_z = Nx.get [ z_idx ] chi_safe in\n           let k_vec = Nx.div (Nx.add_s ell 0.5) chi_z in\n           power p k_vec z_t))\n  in\n\n  (* ell_factor vector [n_ell]: sqrt((ℓ-1)ℓ(ℓ+1)(ℓ+2)) / (ℓ+0.5)² *)\n  let ell_factor_vec =\n    let l = ell in\n    let num =\n      Nx.mul\n        (Nx.mul (Nx.sub_s l 1.0) l)\n        (Nx.mul (Nx.add_s l 1.0) (Nx.add_s l 2.0))\n    in\n    let den = Nx.mul (Nx.add_s l 0.5) (Nx.add_s l 0.5) in\n    Nx.div (Nx.sqrt (Nx.abs num)) den\n  in\n\n  (* Limber integration: functional, no in-place mutation. 
integ_weight is\n     [n_z], pk_grid is [n_z, n_ell]. For each pair (i,j): C_ℓ = Σ_z K_i(z)\n     K_j(z) P(k,z) w(z) kernel_bases are [n_z], broadcast with pk_grid [n_z,\n     n_ell]. *)\n  let w_pk = Nx.mul (Nx.reshape [| n_z; 1 |] integ_weight) pk_grid in\n  let spectra =\n    Nx.stack\n      (List.map\n         (fun (i, j) ->\n           let ki = Nx.reshape [| n_z; 1 |] kernel_bases.(i) in\n           let kj = Nx.reshape [| n_z; 1 |] kernel_bases.(j) in\n           let integrand = Nx.mul (Nx.mul ki kj) w_pk in\n           let cl_row = Nx.sum ~axes:[ 0 ] integrand in\n           let ell_power =\n             (if kernel_has_ell_factor.(i) then 1 else 0)\n             + if kernel_has_ell_factor.(j) then 1 else 0\n           in\n           if ell_power = 0 then cl_row\n           else if ell_power = 1 then Nx.mul ell_factor_vec cl_row\n           else Nx.mul (Nx.mul ell_factor_vec ell_factor_vec) cl_row)\n         (Array.to_list pairs_arr))\n  in\n  { ell; tracers = tracers_arr; spectra }\n\n(* Cls submodule *)\n\nmodule Cls = struct\n  let get cls ~i ~j =\n    let n = Array.length cls.tracers in\n    if i < 0 || i >= n || j < 0 || j >= n then\n      invalid_arg \"Survey.Cls.get: index out of range\";\n    Nx.slice [ I (pair_index n i j) ] cls.spectra\n\n  let ell cls = cls.ell\n  let n_tracers cls = Array.length cls.tracers\n  let to_tensor cls = cls.spectra\n\n  let noise cls =\n    let n_ell = (Nx.shape cls.ell).(0) in\n    let nt = Array.length cls.tracers in\n    let pairs = cl_pairs nt in\n    let n_cls = List.length pairs in\n    let result = Nx.zeros f64 [| n_cls; n_ell |] in\n    let pair_idx = ref 0 in\n    List.iter\n      (fun (i, j) ->\n        if i = j then begin\n          let noise_val = cls.tracers.(i).noise in\n          for l = 0 to n_ell - 1 do\n            Nx.set_item [ !pair_idx; l ] noise_val result\n          done\n        end;\n        incr pair_idx)\n      pairs;\n    result\n\n  let gaussian_covariance ?(f_sky = 0.25) cls =\n    let ell = 
cls.ell in\n    let n_ell = (Nx.shape ell).(0) in\n    let nt = Array.length cls.tracers in\n    let pairs = cl_pairs nt in\n    let n_cls = List.length pairs in\n    let n = n_cls * n_ell in\n    let cov = Nx.zeros f64 [| n; n |] in\n    let cl_noise = noise cls in\n    let cl_obs = Nx.add cls.spectra cl_noise in\n    let pairs_arr = Array.of_list pairs in\n    let find_pair a b = pair_index nt a b in\n    (* Δℓ via finite differences *)\n    let dell =\n      Array.init n_ell (fun l ->\n          if l = 0 then Nx.item [ 1 ] ell -. Nx.item [ 0 ] ell\n          else if l = n_ell - 1 then Nx.item [ l ] ell -. Nx.item [ l - 1 ] ell\n          else 0.5 *. (Nx.item [ l + 1 ] ell -. Nx.item [ l - 1 ] ell))\n    in\n    for p1 = 0 to n_cls - 1 do\n      let i, j = pairs_arr.(p1) in\n      for p2 = p1 to n_cls - 1 do\n        let m, nn = pairs_arr.(p2) in\n        let im = find_pair i m and jn = find_pair j nn in\n        let in_ = find_pair i nn and jm = find_pair j m in\n        for l = 0 to n_ell - 1 do\n          let ell_l = Nx.item [ l ] ell in\n          let norm = ((2.0 *. ell_l) +. 1.0) *. dell.(l) *. f_sky in\n          let c_im = Nx.get [ im; l ] cl_obs in\n          let c_jn = Nx.get [ jn; l ] cl_obs in\n          let c_in = Nx.get [ in_; l ] cl_obs in\n          let c_jm = Nx.get [ jm; l ] cl_obs in\n          let val_ =\n            Nx.div_s (Nx.add (Nx.mul c_im c_jn) (Nx.mul c_in c_jm)) norm\n          in\n          let row = (p1 * n_ell) + l in\n          let col = (p2 * n_ell) + l in\n          Nx.set [ row; col ] cov val_;\n          if p1 <> p2 then Nx.set [ col; row ] cov val_\n        done\n      done\n    done;\n    cov\nend\n"
  },
  {
    "path": "dev/umbra/lib/survey.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Angular power spectra and survey science.\n\n    The central type is {!tracer}: one tracer per tomographic bin. {!angular_cl}\n    cross-correlates a list of tracers and returns a structured {!cls} value\n    with typed accessors.\n\n    {!angular_cl} and {!inverse_growth_bias} are differentiable through Rune.\n    {!Cls.noise} and {!Cls.gaussian_covariance} are not (they use in-place\n    mutation); compute them once at a fiducial cosmology.\n\n    {[\n      let nz1 = Survey.smail ~a:2.0 ~b:1.5 ~z0:0.3 () in\n      let nz2 = Survey.smail ~a:2.0 ~b:1.5 ~z0:0.7 () in\n      let wl1 = Survey.weak_lensing ~n_gal:26.0 nz1 in\n      let wl2 = Survey.weak_lensing ~n_gal:26.0 nz2 in\n      let ell = Nx.logspace Nx.float64 1.0 3.0 50 in\n      let cls = Survey.angular_cl ~p:Cosmo.planck18 ~ell [ wl1; wl2 ] in\n      let cl_auto = Survey.Cls.get cls ~i:0 ~j:0 in\n      let cl_cross = Survey.Cls.get cls ~i:0 ~j:1\n    ]} *)\n\n(** {1:nz Redshift distributions} *)\n\ntype nz\n(** A normalized redshift probability density n(z) with a maximum redshift. *)\n\nval smail : ?zmax:float -> a:float -> b:float -> z0:float -> unit -> nz\n(** [smail ~a ~b ~z0 ()] is n(z) {e ∝} z{^ a} exp(-(z/z0){^ b}). Auto-normalized\n    via Simpson's rule. [zmax] defaults to [10.0]. *)\n\nval tabulated : z:Nx.float64_t -> pz:Nx.float64_t -> unit -> nz\n(** [tabulated ~z ~pz ()] is n(z) linearly interpolated from sampled points.\n    Auto-normalized. [zmax] is inferred from the last element of [z]. *)\n\nval custom_nz : ?zmax:float -> (Nx.float64_t -> Nx.float64_t) -> nz\n(** [custom_nz f] is a redshift distribution with evaluation function [f]. [f z]\n    maps a scalar tensor [z] to n(z). 
For differentiable survey optimization,\n    [f] should use tensor operations so gradients flow through Rune. [zmax]\n    defaults to [10.0]. *)\n\nval eval_nz : nz -> Nx.float64_t -> Nx.float64_t\n(** [eval_nz nz z] evaluates the normalized n(z) at [z]. *)\n\nval nz_zmax : nz -> float\n(** [nz_zmax nz] is the maximum redshift of the distribution. *)\n\n(** {1:bias Galaxy bias} *)\n\ntype bias = Cosmo.params -> Nx.float64_t -> Nx.float64_t\n(** A galaxy bias function. [bias p z] is b(z) under cosmology [p]. *)\n\nval constant_bias : float -> bias\n(** [constant_bias b] is a redshift-independent linear bias. Not differentiable\n    (constant value). *)\n\nval inverse_growth_bias : float -> bias\n(** [inverse_growth_bias b0] is [b0 / D(z)], where D is the linear growth\n    factor. Differentiable through Rune. *)\n\n(** {1:power Power spectrum backends} *)\n\ntype power = Cosmo.params -> Nx.float64_t -> Nx.float64_t -> Nx.float64_t\n(** [power p k z] is the matter power spectrum P(k, z) in (Mpc/h){^ 3}. [k] is a\n    1-D tensor of wavenumbers in h/Mpc, [z] is a scalar tensor. *)\n\nval linear : power\n(** [linear] is the linear matter power spectrum via Eisenstein & Hu (1998).\n    Differentiable through Rune. *)\n\nval nonlinear : power\n(** [nonlinear] is the nonlinear power spectrum via Halofit (Takahashi et al.\n    2012). Differentiable through Rune (except the nonlinear scale k{_ nl} which\n    is found by float-level root-finding). 
*)\n\nval baryonic_feedback :\n  ?a_bary:float -> ?log10_k_star:float -> ?sigma:float -> power -> power\n(** [baryonic_feedback base_power] wraps [base_power] with a Gaussian\n    suppression in log{_ 10}(k) that models baryonic feedback on the matter\n    power spectrum:\n\n    P{_ bary}(k, z) = P(k, z) {e ×} (1 - a{_ bary} {e ×} exp(-(log{_ 10}(k) -\n    log{_ 10}(k{_ star})){^ 2} / {e σ}{^ 2})).\n\n    [a_bary] is the suppression amplitude (default [0.0] = no effect).\n    [log10_k_star] is the log{_ 10} of the peak suppression wavenumber in h/Mpc\n    (default [1.0], i.e. k{_ star} = 10 h/Mpc). [sigma] is the Gaussian width in\n    log{_ 10}(k) (default [0.55]).\n\n    Differentiable through Rune. *)\n\n(** {1:tracers Tracers} *)\n\ntype tracer\n(** The type for a single tomographic tracer. One tracer = one redshift bin with\n    its physics (lensing kernel, galaxy bias, etc.) and noise properties.\n    {!angular_cl} cross-correlates a list of tracers. *)\n\nval weak_lensing :\n  ?ia_bias:bias ->\n  ?sigma_e:float ->\n  ?m_bias:float ->\n  ?n_gal:float ->\n  nz ->\n  tracer\n(** [weak_lensing nz] is a weak gravitational lensing tracer with redshift\n    distribution [nz]. [sigma_e] is the intrinsic ellipticity dispersion\n    (default [0.26]). [n_gal] is the galaxy number density in\n    galaxies/arcmin{^ 2} (default [1.0]). [ia_bias], if provided, adds NLA\n    intrinsic alignment.\n\n    [m_bias] is the shear multiplicative bias (default [0.0]). The lensing\n    kernel is scaled by [(1 + m_bias)], so auto-spectra scale as [(1 + m){^ 2}]\n    and cross-spectra as [(1 + m{_ i})(1 + m{_ j})]. Differentiable through Rune\n    when used with {!angular_cl}. *)\n\nval number_counts : bias:bias -> ?n_gal:float -> nz -> tracer\n(** [number_counts ~bias nz] is a galaxy number counts tracer with redshift\n    distribution [nz] and galaxy bias model [bias]. [n_gal] is the galaxy number\n    density in galaxies/arcmin{^ 2} (default [1.0]). 
*)\n\nval tracer :\n  ?noise:float ->\n  ?zmax:float ->\n  (p:Cosmo.params -> z:Nx.float64_t -> chi:Nx.float64_t -> Nx.float64_t) ->\n  tracer\n(** [tracer kernel] is a custom tracer with kernel function [kernel].\n\n    [kernel ~p ~z ~chi] returns the full projection kernel W(z) at scalar\n    redshift [z] and comoving distance [chi] (Mpc/h) under cosmology [p].\n\n    [noise] is the constant noise power N{_ ℓ} for auto-correlations (default\n    [0.0]). [zmax] defaults to [3.0]. *)\n\n(** {1:cls Angular power spectra} *)\n\ntype cls\n(** The type for a set of angular power spectra. Stores all auto- and\n    cross-correlations for a list of tracers, along with the ell values and\n    tracer metadata needed for noise and covariance computation. *)\n\nval angular_cl :\n  ?p:Cosmo.params -> ?power:power -> ell:Nx.float64_t -> tracer list -> cls\n(** [angular_cl ~ell tracers] computes angular power spectra C{_ ℓ} for all\n    auto- and cross-correlations via the Limber approximation. Differentiable\n    through Rune.\n\n    [power] defaults to {!nonlinear}. [p] defaults to {!Cosmo.planck18}.\n\n    Raises [Invalid_argument] if [omega_b], [n_s], or [sigma8] are not set in\n    [p]. *)\n\n(** {2:cls_access Structured access} *)\n\nmodule Cls : sig\n  val get : cls -> i:int -> j:int -> Nx.float64_t\n  (** [get cls ~i ~j] is the angular power spectrum C{_ ℓ}{^ ij} between tracers\n      [i] and [j]. Returns a 1-D tensor of shape [[n_ell]]. [get cls ~i ~j] and\n      [get cls ~j ~i] return the same spectrum.\n\n      Raises [Invalid_argument] if [i] or [j] is out of range. *)\n\n  val ell : cls -> Nx.float64_t\n  (** [ell cls] is the multipole values, shape [[n_ell]]. *)\n\n  val n_tracers : cls -> int\n  (** [n_tracers cls] is the number of tracers. *)\n\n  val to_tensor : cls -> Nx.float64_t\n  (** [to_tensor cls] is all spectra packed as a tensor of shape\n      [[n_cls; n_ell]] where [n_cls = n * (n + 1) / 2], ordered as (0,0), (0,1),\n      ..., (1,1), .... 
*)\n\n  val noise : cls -> Nx.float64_t\n  (** [noise cls] is the shot noise power spectra. Weak lensing:\n      {e σ}{_ e}{^ 2}/n{_ gal}. Number counts: 1/n{_ gal}. Custom: the [noise]\n      value. Cross-spectra are zero. Shape [[n_cls; n_ell]].\n\n      Not differentiable. *)\n\n  val gaussian_covariance : ?f_sky:float -> cls -> Nx.float64_t\n  (** [gaussian_covariance cls] is the Gaussian covariance matrix. [f_sky]\n      defaults to [0.25]. Returns a dense matrix of shape [[n; n]] where\n      [n = n_cls * n_ell].\n\n      Not differentiable. *)\nend\n"
  },
  {
    "path": "dev/umbra/lib/time.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Astronomical time with phantom-typed time scales.\n\n   Internal representation: Julian Date (float) in the tagged scale. MJD = JD -\n   2400000.5. The Unix epoch (1970-01-01T00:00:00 UTC) is JD 2440587.5. *)\n\ntype 'a t = float\ntype utc\ntype tai\ntype tt\ntype tdb\n\n(* Constructors *)\n\nlet unsafe_of_jd jd = jd\nlet unsafe_of_mjd mjd = mjd +. 2_400_000.5\nlet of_unix u = (u /. 86_400.0) +. 2_440_587.5\nlet now () = of_unix (Unix.gettimeofday ())\n\n(* Comparison *)\n\nlet compare (a : float) (b : float) = Float.compare a b\nlet equal (a : float) (b : float) = Float.equal a b\n\n(* Eliminators *)\n\nlet to_jd t = t\nlet to_mjd t = t -. 2_400_000.5\nlet to_unix t = (t -. 2_440_587.5) *. 86_400.0\n\n(* Duration *)\n\nlet diff a b = Unit.Time.day (a -. b)\nlet add t dt = t +. Nx.item [] (Unit.Time.in_day dt)\n\n(* Leap second table: (JD of midnight UTC when leap second is introduced,\n   cumulative TAI-UTC). Source: IERS Bulletin C. 
*)\nlet leap_seconds =\n  [|\n    (2441317.5, 10.0);\n    (* 1972-01-01 *)\n    (2441499.5, 11.0);\n    (* 1972-07-01 *)\n    (2441683.5, 12.0);\n    (* 1973-01-01 *)\n    (2442048.5, 13.0);\n    (* 1974-01-01 *)\n    (2442413.5, 14.0);\n    (* 1975-01-01 *)\n    (2442778.5, 15.0);\n    (* 1976-01-01 *)\n    (2443144.5, 16.0);\n    (* 1977-01-01 *)\n    (2443509.5, 17.0);\n    (* 1978-01-01 *)\n    (2443874.5, 18.0);\n    (* 1979-01-01 *)\n    (2444239.5, 19.0);\n    (* 1980-01-01 *)\n    (2444786.5, 20.0);\n    (* 1981-07-01 *)\n    (2445151.5, 21.0);\n    (* 1982-07-01 *)\n    (2445516.5, 22.0);\n    (* 1983-07-01 *)\n    (2446247.5, 23.0);\n    (* 1985-07-01 *)\n    (2447161.5, 24.0);\n    (* 1988-01-01 *)\n    (2447892.5, 25.0);\n    (* 1990-01-01 *)\n    (2448257.5, 26.0);\n    (* 1991-01-01 *)\n    (2448804.5, 27.0);\n    (* 1992-07-01 *)\n    (2449169.5, 28.0);\n    (* 1993-07-01 *)\n    (2449534.5, 29.0);\n    (* 1994-07-01 *)\n    (2450083.5, 30.0);\n    (* 1996-01-01 *)\n    (2450630.5, 31.0);\n    (* 1997-07-01 *)\n    (2451179.5, 32.0);\n    (* 1999-01-01 *)\n    (2453736.5, 33.0);\n    (* 2006-01-01 *)\n    (2454832.5, 34.0);\n    (* 2009-01-01 *)\n    (2456109.5, 35.0);\n    (* 2012-07-01 *)\n    (2457204.5, 36.0);\n    (* 2015-07-01 *)\n    (2457754.5, 37.0);\n    (* 2017-01-01 *)\n  |]\n\nlet tai_minus_utc jd_utc =\n  let n = Array.length leap_seconds in\n  let rec search i =\n    if i < 0 then 10.0\n    else\n      let jd, dt = leap_seconds.(i) in\n      if jd_utc >= jd then dt else search (i - 1)\n  in\n  search (n - 1)\n\n(* UTC <-> TAI *)\n\nlet utc_to_tai utc_jd =\n  let dt = tai_minus_utc utc_jd in\n  utc_jd +. (dt /. 86_400.0)\n\nlet tai_to_utc tai_jd =\n  (* Approximate: convert TAI to approximate UTC, look up, refine *)\n  let approx_utc = tai_jd -. (37.0 /. 86_400.0) in\n  let dt = tai_minus_utc approx_utc in\n  tai_jd -. (dt /. 86_400.0)\n\n(* TAI <-> TT: TT = TAI + 32.184s (exact by definition) *)\n\nlet tt_offset = 32.184 /. 
86_400.0\nlet tai_to_tt tai_jd = tai_jd +. tt_offset\nlet tt_to_tai tt_jd = tt_jd -. tt_offset\n\n(* TT <-> TDB: Fairhead & Bretagnon 1990 series (first 10 terms). Accuracy ~1μs\n   for dates within a few centuries of J2000.0.\n\n   T = (JD_TT - 2451545.0) / 36525.0 (Julian centuries from J2000.0 TT) TDB - TT\n   ≈ Σ Aᵢ sin(ωᵢ T + φᵢ) in seconds *)\n\nlet fb_terms =\n  [|\n    (* amplitude (s), frequency (rad/century), phase (rad) *)\n    (1.656_674_564e-3, 6_283.075_849_991, 6.240_054_195);\n    (2.227_2e-5, 5_753.384_884_897, 4.296_977_442);\n    (1.3886e-5, 12_566.151_699_983, 6.196_904_410);\n    (3.150e-6, 529.690_965_095, 0.444_401_603);\n    (1.575e-6, 6_069.776_754_553, 4.021_195_093);\n    (1.020_5e-5, 213.299_095_438, 5.543_113_262);\n    (3.978e-6, 77_713.771_467_920, 5.198_467_090);\n    (4.354e-6, 7_860.419_392_439, 5.988_822_341);\n    (1.456e-6, 11_506.769_769_794, 2.457_236_222);\n    (1.126e-6, 3_930.209_696_220, 5.316_024_159);\n  |]\n\nlet tt_to_tdb tt_jd =\n  let t = (tt_jd -. 2_451_545.0) /. 36_525.0 in\n  let sum = ref 0.0 in\n  for i = 0 to Array.length fb_terms - 1 do\n    let amp, freq, phase = fb_terms.(i) in\n    sum := !sum +. (amp *. Float.sin ((freq *. t) +. phase))\n  done;\n  tt_jd +. (!sum /. 86_400.0)\n\nlet tdb_to_tt tdb_jd =\n  (* Single Newton iteration: TT ≈ TDB, compute correction *)\n  let tt_approx = tdb_jd in\n  let tdb_from_approx = tt_to_tdb tt_approx in\n  let correction = tdb_jd -. tdb_from_approx in\n  tt_approx +. correction\n\n(* ISO 8601 parsing and formatting for UTC *)\n\n(* Calendar date to JD (valid for dates after 1582-10-15, Gregorian calendar) *)\nlet cal_to_jd y m d =\n  let y, m = if m <= 2 then (y - 1, m + 12) else (y, m) in\n  let a = y / 100 in\n  let b = 2 - a + (a / 4) in\n  Float.floor (365.25 *. Float.of_int (y + 4716))\n  +. Float.floor (30.6001 *. Float.of_int (m + 1))\n  +. d +. Float.of_int b -. 1524.5\n\n(* JD to calendar date *)\nlet jd_to_cal jd =\n  let jd = jd +. 
0.5 in\n  let z = Float.to_int (Float.floor jd) in\n  let f = jd -. Float.of_int z in\n  let a =\n    if z < 2299161 then z\n    else\n      let alpha =\n        Float.to_int (Float.floor ((Float.of_int z -. 1867216.25) /. 36524.25))\n      in\n      z + 1 + alpha - (alpha / 4)\n  in\n  let b = a + 1524 in\n  let c = Float.to_int (Float.floor ((Float.of_int b -. 122.1) /. 365.25)) in\n  let d = Float.to_int (Float.floor (365.25 *. Float.of_int c)) in\n  let e = Float.to_int (Float.floor (Float.of_int (b - d) /. 30.6001)) in\n  let day_frac =\n    Float.of_int (b - d) -. Float.floor (30.6001 *. Float.of_int e) +. f\n  in\n  let month = if e < 14 then e - 1 else e - 13 in\n  let year = if month > 2 then c - 4716 else c - 4715 in\n  (year, month, day_frac)\n\nlet of_iso s =\n  let s =\n    let len = String.length s in\n    if len > 0 && s.[len - 1] = 'Z' then String.sub s 0 (len - 1) else s\n  in\n  match\n    Scanf.sscanf s \"%d-%d-%dT%d:%d:%f\" (fun y mo d h mi s ->\n        (y, mo, d, h, mi, s))\n  with\n  | y, mo, d, h, mi, sec ->\n      let day =\n        Float.of_int d\n        +. (Float.of_int h /. 24.0)\n        +. (Float.of_int mi /. 1440.0)\n        +. (sec /. 86_400.0)\n      in\n      cal_to_jd y mo day\n  | exception _ -> (\n      match Scanf.sscanf s \"%d-%d-%d\" (fun y mo d -> (y, mo, d)) with\n      | y, mo, d -> cal_to_jd y mo (Float.of_int d)\n      | exception _ -> invalid_arg (\"Time.of_iso: cannot parse \" ^ s))\n\nlet to_iso t =\n  let y, m, day_frac = jd_to_cal t in\n  let d = Float.to_int (Float.floor day_frac) in\n  let frac = day_frac -. Float.of_int d in\n  let total_sec = frac *. 86_400.0 in\n  let h = Float.to_int (Float.floor (total_sec /. 3600.0)) in\n  let rem = total_sec -. (Float.of_int h *. 3600.0) in\n  let mi = Float.to_int (Float.floor (rem /. 60.0)) in\n  let sec = rem -. (Float.of_int mi *. 
60.0) in\n  if Float.abs sec < 0.0005 then\n    Printf.sprintf \"%04d-%02d-%02dT%02d:%02d:%02dZ\" y m d h mi 0\n  else Printf.sprintf \"%04d-%02d-%02dT%02d:%02d:%06.3fZ\" y m d h mi sec\n"
  },
  {
    "path": "dev/umbra/lib/time.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Astronomical time with phantom-typed time scales.\n\n    Times are stored internally as Julian Dates (float). Scale conversions are\n    type-safe: {!utc_to_tai} accepts a [utc t] and returns a [tai t].\n\n    {[\n      let t = Time.of_iso \"2024-01-01T00:00:00\" in\n      let tai = Time.utc_to_tai t in\n      let tt = Time.tai_to_tt tai in\n      let jd = Time.to_jd tt\n    ]} *)\n\n(** {1:types Types} *)\n\ntype 'scale t\n(** The type for a Julian Date tagged with time scale ['scale]. *)\n\ntype utc\n(** Coordinated Universal Time. *)\n\ntype tai\n(** International Atomic Time. *)\n\ntype tt\n(** Terrestrial Time. *)\n\ntype tdb\n(** Barycentric Dynamical Time. *)\n\n(** {1:constructors Constructors} *)\n\nval unsafe_of_jd : float -> 'a t\n(** [unsafe_of_jd jd] is a time from the Julian Date [jd]. The caller must\n    ensure [jd] is in the intended time scale. *)\n\nval unsafe_of_mjd : float -> 'a t\n(** [unsafe_of_mjd mjd] is a time from the Modified Julian Date [mjd] (MJD = JD\n    \\- 2400000.5). The caller must ensure [mjd] is in the intended time scale.\n*)\n\nval of_iso : string -> utc t\n(** [of_iso s] parses an ISO 8601 date-time string as UTC. Accepted formats:\n    [\"YYYY-MM-DD\"], [\"YYYY-MM-DDThh:mm:ss\"], and [\"YYYY-MM-DDThh:mm:ssZ\"].\n\n    {b Warning.} Uses the Gregorian calendar; dates before 1582-10-15 produce\n    incorrect Julian Dates. Leap seconds (e.g. [23:59:60]) cannot be represented\n    and are parsed as the following second.\n\n    Raises [Invalid_argument] if [s] cannot be parsed. *)\n\nval of_unix : float -> utc t\n(** [of_unix u] is the UTC time corresponding to the Unix timestamp [u] (seconds\n    since 1970-01-01T00:00:00 UTC). 
*)\n\nval now : unit -> utc t\n(** [now ()] is the current UTC time from the system clock. *)\n\n(** {1:comparison Comparison} *)\n\nval compare : 'a t -> 'a t -> int\n(** [compare a b] orders times by their Julian Date values. *)\n\nval equal : 'a t -> 'a t -> bool\n(** [equal a b] is [true] iff [a] and [b] have the same Julian Date value. *)\n\n(** {1:eliminators Eliminators} *)\n\nval to_jd : 'a t -> float\n(** [to_jd t] is the Julian Date of [t]. *)\n\nval to_mjd : 'a t -> float\n(** [to_mjd t] is the Modified Julian Date of [t] (MJD = JD - 2400000.5). *)\n\nval to_iso : utc t -> string\n(** [to_iso t] formats [t] as an ISO 8601 string with trailing [Z]. Output is\n    [\"YYYY-MM-DDThh:mm:ssZ\"] when the fractional seconds are below 0.5 ms, or\n    [\"YYYY-MM-DDThh:mm:ss.sssZ\"] otherwise.\n\n    {b Warning.} Leap-second labels like [23:59:60] cannot be produced; times\n    within a leap second round to [00:00:00] of the following day. *)\n\nval to_unix : utc t -> float\n(** [to_unix t] is the Unix timestamp of [t] (seconds since 1970-01-01T00:00:00\n    UTC). *)\n\n(** {1:scales Scale conversions}\n\n    UTC/TAI conversions use the IERS leap-second table (Bulletin C), currently\n    covering 1972-01-01 through 2017-01-01 (TAI-UTC = 37 s). Dates before\n    1972-01-01 use TAI-UTC = 10 s.\n\n    TT = TAI + 32.184 s (exact by definition).\n\n    TDB-TT uses the first 10 terms of the Fairhead & Bretagnon (1990) series,\n    accurate to ~1 us within a few centuries of J2000.0. *)\n\nval utc_to_tai : utc t -> tai t\n(** [utc_to_tai t] converts [t] from UTC to TAI. *)\n\nval tai_to_utc : tai t -> utc t\n(** [tai_to_utc t] converts [t] from TAI to UTC. *)\n\nval tai_to_tt : tai t -> tt t\n(** [tai_to_tt t] converts [t] from TAI to TT. *)\n\nval tt_to_tai : tt t -> tai t\n(** [tt_to_tai t] converts [t] from TT to TAI. *)\n\nval tt_to_tdb : tt t -> tdb t\n(** [tt_to_tdb t] converts [t] from TT to TDB. 
*)\n\nval tdb_to_tt : tdb t -> tt t\n(** [tdb_to_tt t] converts [t] from TDB to TT. Uses a single Newton iteration;\n    accurate to ~1 us. *)\n\n(** {1:duration Duration} *)\n\nval diff : 'a t -> 'a t -> Unit.time Unit.t\n(** [diff a b] is the duration [a - b] as a {!Unit.time} quantity. *)\n\nval add : 'a t -> Unit.time Unit.t -> 'a t\n(** [add t dt] is [t] offset by the duration [dt]. *)\n"
  },
  {
    "path": "dev/umbra/lib/umbra.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nmodule Unit = Unit\nmodule Const = Const\nmodule Time = Time\nmodule Coord = Coord\nmodule Altaz = Altaz\nmodule Galactocentric = Galactocentric\nmodule Cosmo = Cosmo\nmodule Spectrum = Spectrum\nmodule Extinction = Extinction\nmodule Photometry = Photometry\nmodule Filters = Filters\nmodule Survey = Survey\n"
  },
  {
    "path": "dev/umbra/lib/umbra.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Computational astronomy for OCaml.\n\n    Umbra provides dimensionally-typed physical quantities, astronomical\n    constants, cosmological distance calculations, spectral energy\n    distributions, dust extinction, synthetic photometry, coordinate transforms,\n    time scales, and catalog cross-matching.\n\n    All computations operate on {!Nx} tensors and are differentiable through\n    {!Rune} by default.\n\n    {[\n      open Umbra\n\n      let z = Nx.scalar Nx.float64 0.5 in\n      let dl = Cosmo.luminosity_distance z in\n      let dl_mpc = Unit.Length.in_mpc dl in\n\n      let rv = Nx.scalar Nx.float64 3.1 in\n      let av = Nx.scalar Nx.float64 0.5 in\n      let wave = Unit.Length.of_m (Nx.linspace Nx.float64 3e-7 1e-6 1000) in\n      let bp = Photometry.tophat\n        ~lo:(Unit.Length.nm 400.0) ~hi:(Unit.Length.nm 700.0) ~n:1000 in\n      let sed =\n        Spectrum.blackbody\n          ~temperature:(Unit.Temperature.of_kelvin (Nx.scalar Nx.float64 5800.0))\n          ~wavelength:wave\n        |> Extinction.apply (Extinction.ccm89 ~rv) ~av\n        |> Spectrum.as_flux_density\n      in\n      let mag = Photometry.ab_mag bp sed\n    ]} *)\n\n(** {1:units Units and constants} *)\n\nmodule Unit = Unit\n(** Physical quantities with compile-time dimensional safety. *)\n\nmodule Const = Const\n(** Physical and astronomical constants (CODATA 2022, IAU 2015). *)\n\n(** {1:astro Astronomy} *)\n\nmodule Time = Time\n(** Astronomical time with phantom-typed time scales. *)\n\nmodule Coord = Coord\n(** Celestial coordinates with frame transforms and catalog cross-matching. *)\n\nmodule Altaz = Altaz\n(** Altitude-azimuth (horizontal) coordinates. 
*)\n\nmodule Galactocentric = Galactocentric\n(** Galactocentric Cartesian coordinates. *)\n\nmodule Cosmo = Cosmo\n(** Cosmological distances for {e Λ}CDM, wCDM, and w0waCDM universes. *)\n\nmodule Spectrum = Spectrum\n(** Sampled spectral values on a wavelength grid. *)\n\nmodule Extinction = Extinction\n(** Dust extinction laws. *)\n\nmodule Photometry = Photometry\n(** Synthetic photometry over filter bandpasses. *)\n\nmodule Filters = Filters\n(** Standard astronomical filter bandpasses (SDSS, Johnson-Cousins, 2MASS, Gaia\n    DR3). *)\n\n(** {1:survey Survey science} *)\n\nmodule Survey = Survey\n(** Angular power spectra, probes, and survey likelihood. *)\n"
  },
  {
    "path": "dev/umbra/lib/unit.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nlet f64 = Nx.float64\n\ntype 'a t = Nx.float64_t\ntype length\ntype mass\ntype time\ntype angle\ntype velocity\ntype power\ntype temperature\ntype energy\ntype frequency\ntype dimensionless\n\n(* Arithmetic — all Nx ops, fully traced by rune *)\n\nlet ( + ) a b = Nx.add a b\nlet ( - ) a b = Nx.sub a b\nlet neg x = Nx.neg x\nlet abs x = Nx.abs x\nlet scale s x = Nx.mul_s x s\nlet scale_t s x = Nx.mul s x\nlet ratio a b = Nx.div a b\nlet zero = Nx.scalar f64 0.0\nlet compare a b = Float.compare (Nx.item [] a) (Nx.item [] b)\nlet equal a b = Float.equal (Nx.item [] a) (Nx.item [] b)\nlet pp fmt x = Format.fprintf fmt \"%g\" (Nx.item [] x)\nlet to_float x = Nx.item [] x\n\n(* Physical constants used in cross-dimension combinators *)\n\nlet c_m_s = Nx.scalar f64 299_792_458.0\nlet h_si = Nx.scalar f64 6.626_070_15e-34\nlet hc_si = Nx.scalar f64 (6.626_070_15e-34 *. 299_792_458.0)\nlet one = Nx.scalar f64 1.0\nlet au_m_t = Nx.scalar f64 1.495_978_707e11\n\n(* Cross-dimension combinators — all Nx ops *)\n\nlet length_per_time d t = Nx.div d t\nlet velocity_times_time v t = Nx.mul v t\nlet length_per_velocity d v = Nx.div d v\nlet wavelength_to_frequency lam = Nx.div c_m_s lam\nlet frequency_to_wavelength nu = Nx.div c_m_s nu\nlet frequency_to_energy nu = Nx.mul h_si nu\nlet energy_to_frequency e = Nx.div e h_si\nlet energy_to_wavelength e = Nx.div hc_si e\n\n(* Parallax: 1 arcsec ↔ 1 parsec. parallax(rad) = 1 AU / distance(m), so\n   distance(m) = 1 AU / parallax(rad). Uses the scale factors defined below. *)\n\nlet parallax_to_distance p = Nx.div au_m_t p\nlet distance_to_parallax d = Nx.div au_m_t d\n\n(* Spectral density: f_ν = f_λ · λ²/c, where f_λ is per-metre and f_ν is\n   per-hertz. 
*)\nlet flam_to_fnu ~wavelength flam =\n  Nx.div (Nx.mul flam (Nx.square wavelength)) c_m_s\n\nlet fnu_to_flam ~wavelength fnu =\n  Nx.div (Nx.mul fnu c_m_s) (Nx.square wavelength)\n\n(* Doppler conventions: velocity ↔ observed wavelength given rest wavelength.\n   All three conventions agree at v << c. *)\n\n(* Optical: z = v/c, λ_obs = λ_rest * (1 + v/c) *)\nlet doppler_optical ~rest v = Nx.mul rest (Nx.add_s (Nx.div v c_m_s) 1.0)\n\nlet doppler_optical_inv ~rest obs =\n  Nx.mul c_m_s (Nx.sub_s (Nx.div obs rest) 1.0)\n\n(* Radio: v = c*(1 - λ_rest/λ_obs), λ_obs = λ_rest / (1 - v/c) *)\nlet doppler_radio ~rest v = Nx.div rest (Nx.sub one (Nx.div v c_m_s))\nlet doppler_radio_inv ~rest obs = Nx.mul c_m_s (Nx.sub one (Nx.div rest obs))\n\n(* Relativistic: λ_obs = λ_rest * sqrt((1+β)/(1-β)), β = v/c *)\nlet doppler_relativistic ~rest v =\n  let beta = Nx.div v c_m_s in\n  Nx.mul rest (Nx.sqrt (Nx.div (Nx.add_s beta 1.0) (Nx.sub one beta)))\n\nlet doppler_relativistic_inv ~rest obs =\n  let r2 = Nx.square (Nx.div obs rest) in\n  Nx.mul c_m_s (Nx.div (Nx.sub_s r2 1.0) (Nx.add_s r2 1.0))\n\n(* Scale factors to SI base unit *)\n\nlet pc_m = 3.085_677_581_491_367_3e16\nlet au_m = 1.495_978_707e11\nlet ly_m = 9.460_730_472_580_8e15\nlet solar_radius_m = 6.957e8\nlet earth_radius_m = 6.371e6\nlet jupiter_radius_m = 7.1492e7\nlet solar_mass_kg = 1.988_4e30\nlet earth_mass_kg = 5.972_2e24\nlet jupiter_mass_kg = 1.898_2e27\nlet solar_luminosity_w = 3.828e26\nlet julian_year_s = 365.25 *. 86_400.0\nlet ev_j = 1.602_176_634e-19\n\nmodule Length = struct\n  let of_tensor x = x\n  let to_tensor x = x\n  let m x = Nx.scalar f64 x\n  let km x = Nx.scalar f64 (x *. 1e3)\n  let cm x = Nx.scalar f64 (x *. 1e-2)\n  let mm x = Nx.scalar f64 (x *. 1e-3)\n  let um x = Nx.scalar f64 (x *. 1e-6)\n  let nm x = Nx.scalar f64 (x *. 1e-9)\n  let angstrom x = Nx.scalar f64 (x *. 1e-10)\n  let au x = Nx.scalar f64 (x *. au_m)\n  let pc x = Nx.scalar f64 (x *. 
pc_m)\n  let kpc x = Nx.scalar f64 (x *. pc_m *. 1e3)\n  let mpc x = Nx.scalar f64 (x *. pc_m *. 1e6)\n  let gpc x = Nx.scalar f64 (x *. pc_m *. 1e9)\n  let ly x = Nx.scalar f64 (x *. ly_m)\n  let solar_radius x = Nx.scalar f64 (x *. solar_radius_m)\n  let earth_radius x = Nx.scalar f64 (x *. earth_radius_m)\n  let jupiter_radius x = Nx.scalar f64 (x *. jupiter_radius_m)\n  let of_m x = x\n  let of_km x = Nx.mul_s x 1e3\n  let of_cm x = Nx.mul_s x 1e-2\n  let of_mm x = Nx.mul_s x 1e-3\n  let of_um x = Nx.mul_s x 1e-6\n  let of_nm x = Nx.mul_s x 1e-9\n  let of_angstrom x = Nx.mul_s x 1e-10\n  let of_au x = Nx.mul_s x au_m\n  let of_pc x = Nx.mul_s x pc_m\n  let of_kpc x = Nx.mul_s x (pc_m *. 1e3)\n  let of_mpc x = Nx.mul_s x (pc_m *. 1e6)\n  let of_gpc x = Nx.mul_s x (pc_m *. 1e9)\n  let of_ly x = Nx.mul_s x ly_m\n  let of_solar_radius x = Nx.mul_s x solar_radius_m\n  let of_earth_radius x = Nx.mul_s x earth_radius_m\n  let of_jupiter_radius x = Nx.mul_s x jupiter_radius_m\n  let in_m x = x\n  let in_km x = Nx.div_s x 1e3\n  let in_cm x = Nx.mul_s x (1.0 /. 1e-2)\n  let in_mm x = Nx.mul_s x (1.0 /. 1e-3)\n  let in_um x = Nx.mul_s x (1.0 /. 1e-6)\n  let in_nm x = Nx.mul_s x (1.0 /. 1e-9)\n  let in_angstrom x = Nx.mul_s x (1.0 /. 1e-10)\n  let in_au x = Nx.div_s x au_m\n  let in_pc x = Nx.div_s x pc_m\n  let in_kpc x = Nx.div_s x (pc_m *. 1e3)\n  let in_mpc x = Nx.div_s x (pc_m *. 1e6)\n  let in_gpc x = Nx.div_s x (pc_m *. 1e9)\n  let in_ly x = Nx.div_s x ly_m\n  let in_solar_radius x = Nx.div_s x solar_radius_m\n  let in_earth_radius x = Nx.div_s x earth_radius_m\n  let in_jupiter_radius x = Nx.div_s x jupiter_radius_m\nend\n\nmodule Mass = struct\n  let of_tensor x = x\n  let to_tensor x = x\n  let kg x = Nx.scalar f64 x\n  let g x = Nx.scalar f64 (x *. 1e-3)\n  let mg x = Nx.scalar f64 (x *. 1e-6)\n  let solar_mass x = Nx.scalar f64 (x *. solar_mass_kg)\n  let earth_mass x = Nx.scalar f64 (x *. earth_mass_kg)\n  let jupiter_mass x = Nx.scalar f64 (x *. 
jupiter_mass_kg)\n  let of_kg x = x\n  let of_g x = Nx.mul_s x 1e-3\n  let of_mg x = Nx.mul_s x 1e-6\n  let of_solar_mass x = Nx.mul_s x solar_mass_kg\n  let of_earth_mass x = Nx.mul_s x earth_mass_kg\n  let of_jupiter_mass x = Nx.mul_s x jupiter_mass_kg\n  let in_kg x = x\n  let in_g x = Nx.mul_s x (1.0 /. 1e-3)\n  let in_mg x = Nx.mul_s x (1.0 /. 1e-6)\n  let in_solar_mass x = Nx.div_s x solar_mass_kg\n  let in_earth_mass x = Nx.div_s x earth_mass_kg\n  let in_jupiter_mass x = Nx.div_s x jupiter_mass_kg\nend\n\nmodule Time = struct\n  let of_tensor x = x\n  let to_tensor x = x\n  let s x = Nx.scalar f64 x\n  let ms x = Nx.scalar f64 (x *. 1e-3)\n  let us x = Nx.scalar f64 (x *. 1e-6)\n  let min x = Nx.scalar f64 (x *. 60.0)\n  let hr x = Nx.scalar f64 (x *. 3600.0)\n  let day x = Nx.scalar f64 (x *. 86_400.0)\n  let yr x = Nx.scalar f64 (x *. julian_year_s)\n  let myr x = Nx.scalar f64 (x *. julian_year_s *. 1e6)\n  let gyr x = Nx.scalar f64 (x *. julian_year_s *. 1e9)\n  let of_s x = x\n  let of_ms x = Nx.mul_s x 1e-3\n  let of_us x = Nx.mul_s x 1e-6\n  let of_min x = Nx.mul_s x 60.0\n  let of_hr x = Nx.mul_s x 3600.0\n  let of_day x = Nx.mul_s x 86_400.0\n  let of_yr x = Nx.mul_s x julian_year_s\n  let of_myr x = Nx.mul_s x (julian_year_s *. 1e6)\n  let of_gyr x = Nx.mul_s x (julian_year_s *. 1e9)\n  let in_s x = x\n  let in_ms x = Nx.mul_s x (1.0 /. 1e-3)\n  let in_us x = Nx.mul_s x (1.0 /. 1e-6)\n  let in_min x = Nx.div_s x 60.0\n  let in_hr x = Nx.div_s x 3600.0\n  let in_day x = Nx.div_s x 86_400.0\n  let in_yr x = Nx.div_s x julian_year_s\n  let in_myr x = Nx.div_s x (julian_year_s *. 1e6)\n  let in_gyr x = Nx.div_s x (julian_year_s *. 1e9)\nend\n\nmodule Angle = struct\n  let deg_rad = Float.pi /. 180.0\n  let of_tensor x = x\n  let to_tensor x = x\n  let rad x = Nx.scalar f64 x\n  let deg x = Nx.scalar f64 (x *. deg_rad)\n  let arcmin x = Nx.scalar f64 (x *. deg_rad /. 60.0)\n  let arcsec x = Nx.scalar f64 (x *. deg_rad /. 
3600.0)\n  let mas x = Nx.scalar f64 (x *. deg_rad /. 3_600_000.0)\n  let hour_angle x = Nx.scalar f64 (x *. Float.pi /. 12.0)\n  let of_rad x = x\n  let of_deg x = Nx.mul_s x deg_rad\n  let of_arcmin x = Nx.mul_s x (deg_rad /. 60.0)\n  let of_arcsec x = Nx.mul_s x (deg_rad /. 3600.0)\n  let of_mas x = Nx.mul_s x (deg_rad /. 3_600_000.0)\n  let of_hour_angle x = Nx.mul_s x (Float.pi /. 12.0)\n  let in_rad x = x\n  let in_deg x = Nx.div_s x deg_rad\n  let in_arcmin x = Nx.mul_s (Nx.div_s x deg_rad) 60.0\n  let in_arcsec x = Nx.mul_s (Nx.div_s x deg_rad) 3600.0\n  let in_mas x = Nx.mul_s (Nx.div_s x deg_rad) 3_600_000.0\n  let in_hour_angle x = Nx.mul_s x (12.0 /. Float.pi)\n  let sin x = Nx.sin x\n  let cos x = Nx.cos x\n  let tan x = Nx.tan x\n  let asin x = Nx.asin x\n  let acos x = Nx.acos x\n  let atan2 ~y ~x = Nx.atan2 y x\n\n  let wrap_360 x =\n    let d = in_deg x in\n    let d = Nx.sub d (Nx.mul_s (Nx.floor (Nx.div_s d 360.0)) 360.0) in\n    of_deg d\n\n  let wrap_180 x =\n    let d = Nx.add_s (in_deg x) 180.0 in\n    let d = Nx.sub d (Nx.mul_s (Nx.floor (Nx.div_s d 360.0)) 360.0) in\n    of_deg (Nx.sub_s d 180.0)\nend\n\nmodule Velocity = struct\n  let of_tensor x = x\n  let to_tensor x = x\n  let m_s x = Nx.scalar f64 x\n  let km_s x = Nx.scalar f64 (x *. 1e3)\n  let km_hr x = Nx.scalar f64 (x *. (1e3 /. 3600.0))\n  let of_m_s x = x\n  let of_km_s x = Nx.mul_s x 1e3\n  let of_km_hr x = Nx.mul_s x (1e3 /. 3600.0)\n  let in_m_s x = x\n  let in_km_s x = Nx.div_s x 1e3\n  let in_km_hr x = Nx.div_s x (1e3 /. 3600.0)\nend\n\nmodule Power = struct\n  let of_tensor x = x\n  let to_tensor x = x\n  let w x = Nx.scalar f64 x\n  let kw x = Nx.scalar f64 (x *. 1e3)\n  let solar_luminosity x = Nx.scalar f64 (x *. solar_luminosity_w)\n  let erg_s x = Nx.scalar f64 (x *. 
1e-7)\n  let of_w x = x\n  let of_kw x = Nx.mul_s x 1e3\n  let of_solar_luminosity x = Nx.mul_s x solar_luminosity_w\n  let of_erg_s x = Nx.mul_s x 1e-7\n  let in_w x = x\n  let in_kw x = Nx.div_s x 1e3\n  let in_solar_luminosity x = Nx.div_s x solar_luminosity_w\n  let in_erg_s x = Nx.mul_s x (1.0 /. 1e-7)\nend\n\nmodule Temperature = struct\n  let of_tensor x = x\n  let to_tensor x = x\n  let kelvin x = Nx.scalar f64 x\n  let of_kelvin x = x\n  let in_kelvin x = x\nend\n\nmodule Energy = struct\n  let of_tensor x = x\n  let to_tensor x = x\n  let j x = Nx.scalar f64 x\n  let erg x = Nx.scalar f64 (x *. 1e-7)\n  let ev x = Nx.scalar f64 (x *. ev_j)\n  let kev x = Nx.scalar f64 (x *. ev_j *. 1e3)\n  let mev x = Nx.scalar f64 (x *. ev_j *. 1e6)\n  let of_j x = x\n  let of_erg x = Nx.mul_s x 1e-7\n  let of_ev x = Nx.mul_s x ev_j\n  let of_kev x = Nx.mul_s x (ev_j *. 1e3)\n  let of_mev x = Nx.mul_s x (ev_j *. 1e6)\n  let in_j x = x\n  let in_erg x = Nx.mul_s x (1.0 /. 1e-7)\n  let in_ev x = Nx.div_s x ev_j\n  let in_kev x = Nx.div_s x (ev_j *. 1e3)\n  let in_mev x = Nx.div_s x (ev_j *. 1e6)\nend\n\nmodule Frequency = struct\n  let of_tensor x = x\n  let to_tensor x = x\n  let hz x = Nx.scalar f64 x\n  let khz x = Nx.scalar f64 (x *. 1e3)\n  let mhz x = Nx.scalar f64 (x *. 1e6)\n  let ghz x = Nx.scalar f64 (x *. 1e9)\n  let of_hz x = x\n  let of_khz x = Nx.mul_s x 1e3\n  let of_mhz x = Nx.mul_s x 1e6\n  let of_ghz x = Nx.mul_s x 1e9\n  let in_hz x = x\n  let in_khz x = Nx.div_s x 1e3\n  let in_mhz x = Nx.div_s x 1e6\n  let in_ghz x = Nx.div_s x 1e9\nend\n\nmodule Dimensionless = struct\n  let of_tensor x = x\n  let to_tensor x = x\n  let v x = Nx.scalar f64 x\n  let to_float x = Nx.item [] x\nend\n"
  },
  {
    "path": "dev/umbra/lib/unit.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Physical quantities with compile-time dimensional safety.\n\n    A {e quantity} is an {!Nx.float64_t} tensor of arbitrary shape tagged with a\n    phantom dimension type. Arithmetic requires matching dimensions:\n    [length t + length t] typechecks, [length t + mass t] does not. Values are\n    stored in SI base units internally.\n\n    Each dimension module provides three families of functions:\n    - {e Scalar constructors} ([Length.km], [Mass.kg], ...) create a 0-d\n      quantity from a [float].\n    - {e Tensor constructors} ([Length.of_km], [Mass.of_kg], ...) wrap an\n      arbitrary-shape {!Nx.float64_t}.\n    - {e Extractors} ([Length.in_km], [Mass.in_kg], ...) return the numeric\n      value in a given unit as an {!Nx.float64_t}.\n\n    {[\n    open Unit\n\n    let d = Length.(kpc 10.0 + pc 500.0)\n    let d_mpc = Length.in_mpc d\n    ]} *)\n\n(** {1:types Types} *)\n\ntype 'a t\n(** The type for a physical quantity with dimension ['a]. Internally an\n    {!Nx.float64_t} in SI base units. *)\n\ntype length\n(** Phantom type for length (SI: metres). *)\n\ntype mass\n(** Phantom type for mass (SI: kilograms). *)\n\ntype time\n(** Phantom type for time duration (SI: seconds). *)\n\ntype angle\n(** Phantom type for angles (SI: radians). *)\n\ntype velocity\n(** Phantom type for velocity (SI: m/s). *)\n\ntype power\n(** Phantom type for power / luminosity (SI: watts). *)\n\ntype temperature\n(** Phantom type for temperature (SI: kelvin). *)\n\ntype energy\n(** Phantom type for energy (SI: joules). *)\n\ntype frequency\n(** Phantom type for frequency (SI: hertz). *)\n\ntype dimensionless\n(** Phantom type for dimensionless quantities. 
*)\n\n(** {1:arithmetic Arithmetic}\n\n    All operations require matching dimensions. *)\n\nval ( + ) : 'a t -> 'a t -> 'a t\n(** [a + b] is the element-wise sum of [a] and [b]. *)\n\nval ( - ) : 'a t -> 'a t -> 'a t\n(** [a - b] is the element-wise difference of [a] and [b]. *)\n\nval neg : 'a t -> 'a t\n(** [neg x] is the element-wise negation of [x]. *)\n\nval abs : 'a t -> 'a t\n(** [abs x] is the element-wise absolute value of [x]. *)\n\nval scale : float -> 'a t -> 'a t\n(** [scale s x] multiplies every element of [x] by [s]. *)\n\nval scale_t : Nx.float64_t -> 'a t -> 'a t\n(** [scale_t s x] multiplies every element of [x] by the tensor [s]. Keeps the\n    result in the typed world when the scale factor is a fitted parameter. *)\n\nval ratio : 'a t -> 'a t -> dimensionless t\n(** [ratio a b] is the element-wise division [a / b], yielding a dimensionless\n    quantity. *)\n\nval zero : 'a t\n(** [zero] is the scalar quantity [0.0]. *)\n\n(** {1:predicates Predicates, comparisons, and converting}\n\n    These functions extract scalar values and are intended for 0-d tensors. *)\n\nval compare : 'a t -> 'a t -> int\n(** [compare a b] orders [a] and [b] by their scalar SI values. *)\n\nval equal : 'a t -> 'a t -> bool\n(** [equal a b] is [true] iff [a] and [b] have the same scalar SI value. *)\n\nval pp : Format.formatter -> 'a t -> unit\n(** [pp] formats the scalar SI value of a quantity. *)\n\nval to_float : 'a t -> float\n(** [to_float x] is the scalar value of [x] in SI base units. *)\n\n(** {1:cross Cross-dimension combinators}\n\n    Functions that relate quantities of different dimensions. *)\n\nval length_per_time : length t -> time t -> velocity t\n(** [length_per_time d t] is [d / t] as a velocity. *)\n\nval velocity_times_time : velocity t -> time t -> length t\n(** [velocity_times_time v t] is [v * t] as a length. *)\n\nval length_per_velocity : length t -> velocity t -> time t\n(** [length_per_velocity d v] is [d / v] as a time. 
*)\n\nval wavelength_to_frequency : length t -> frequency t\n(** [wavelength_to_frequency lam] is [c / lam]. *)\n\nval frequency_to_wavelength : frequency t -> length t\n(** [frequency_to_wavelength nu] is [c / nu]. *)\n\nval frequency_to_energy : frequency t -> energy t\n(** [frequency_to_energy nu] is [h * nu]. *)\n\nval energy_to_frequency : energy t -> frequency t\n(** [energy_to_frequency e] is [e / h]. *)\n\nval energy_to_wavelength : energy t -> length t\n(** [energy_to_wavelength e] is [h * c / e]. *)\n\nval parallax_to_distance : angle t -> length t\n(** [parallax_to_distance p] is the distance corresponding to parallax [p]. Uses\n    [d = 1 AU / p]. One arcsecond of parallax gives one parsec. *)\n\nval distance_to_parallax : length t -> angle t\n(** [distance_to_parallax d] is the parallax corresponding to distance [d]. Uses\n    [p = 1 AU / d]. *)\n\nval flam_to_fnu : wavelength:length t -> Nx.float64_t -> Nx.float64_t\n(** [flam_to_fnu ~wavelength flam] converts spectral flux density from\n    per-wavelength (F{_ {e lambda}}, W m{^ -2} m{^ -1}) to per-frequency\n    (F{_ {e nu}}, W m{^ -2} Hz{^ -1}) at the given wavelengths. Uses\n    [f_nu = f_lambda * lambda{^ 2} / c]. *)\n\nval fnu_to_flam : wavelength:length t -> Nx.float64_t -> Nx.float64_t\n(** [fnu_to_flam ~wavelength fnu] converts spectral flux density from\n    per-frequency (F{_ {e nu}}) to per-wavelength (F{_ {e lambda}}) at the given\n    wavelengths. Uses [f_lambda = f_nu * c / lambda{^ 2}]. *)\n\n(** {2:doppler Doppler conventions}\n\n    Three conventions for converting between radial velocity and observed\n    wavelength, given a rest wavelength. All agree at [v << c]. *)\n\nval doppler_optical : rest:length t -> velocity t -> length t\n(** [doppler_optical ~rest v] is the observed wavelength under the optical (cz)\n    convention: [lambda_obs = lambda_rest * (1 + v/c)]. 
*)\n\nval doppler_optical_inv : rest:length t -> length t -> velocity t\n(** [doppler_optical_inv ~rest obs] is the velocity under the optical\n    convention: [v = c * (lambda_obs/lambda_rest - 1)]. *)\n\nval doppler_radio : rest:length t -> velocity t -> length t\n(** [doppler_radio ~rest v] is the observed wavelength under the radio\n    convention: [lambda_obs = lambda_rest / (1 - v/c)]. *)\n\nval doppler_radio_inv : rest:length t -> length t -> velocity t\n(** [doppler_radio_inv ~rest obs] is the velocity under the radio convention:\n    [v = c * (1 - lambda_rest/lambda_obs)]. *)\n\nval doppler_relativistic : rest:length t -> velocity t -> length t\n(** [doppler_relativistic ~rest v] is the observed wavelength under the full\n    relativistic Doppler formula:\n    [lambda_obs = lambda_rest * sqrt((1 + v/c) / (1 - v/c))]. *)\n\nval doppler_relativistic_inv : rest:length t -> length t -> velocity t\n(** [doppler_relativistic_inv ~rest obs] is the velocity under the relativistic\n    formula: [v = c * (r{^ 2} - 1) / (r{^ 2} + 1)] where\n    [r = lambda_obs/lambda_rest]. *)\n\n(** {1:length Length} *)\n\nmodule Length : sig\n  val of_tensor : Nx.float64_t -> length t\n  (** [of_tensor x] wraps [x] as a length. [x] must be in metres. *)\n\n  val to_tensor : length t -> Nx.float64_t\n  (** [to_tensor x] is the underlying tensor in metres. *)\n\n  (** {2:scalar Scalar constructors}\n\n      Each function creates a 0-d length quantity from a [float] value in the\n      named unit. *)\n\n  val m : float -> length t\n  (** [m x] is [x] metres. *)\n\n  val km : float -> length t\n  (** [km x] is [x] kilometres. *)\n\n  val cm : float -> length t\n  (** [cm x] is [x] centimetres. *)\n\n  val mm : float -> length t\n  (** [mm x] is [x] millimetres. *)\n\n  val um : float -> length t\n  (** [um x] is [x] micrometres. *)\n\n  val nm : float -> length t\n  (** [nm x] is [x] nanometres. *)\n\n  val angstrom : float -> length t\n  (** [angstrom x] is [x] angstroms. 
*)\n\n  val au : float -> length t\n  (** [au x] is [x] astronomical units. *)\n\n  val pc : float -> length t\n  (** [pc x] is [x] parsecs. *)\n\n  val kpc : float -> length t\n  (** [kpc x] is [x] kiloparsecs. *)\n\n  val mpc : float -> length t\n  (** [mpc x] is [x] megaparsecs. *)\n\n  val gpc : float -> length t\n  (** [gpc x] is [x] gigaparsecs. *)\n\n  val ly : float -> length t\n  (** [ly x] is [x] light-years. *)\n\n  val solar_radius : float -> length t\n  (** [solar_radius x] is [x] solar radii. *)\n\n  val earth_radius : float -> length t\n  (** [earth_radius x] is [x] Earth equatorial radii. *)\n\n  val jupiter_radius : float -> length t\n  (** [jupiter_radius x] is [x] Jupiter equatorial radii. *)\n\n  (** {2:tensor Tensor constructors}\n\n      Each function wraps an arbitrary-shape {!Nx.float64_t} (in the named unit)\n      as a length quantity. *)\n\n  val of_m : Nx.float64_t -> length t\n  val of_km : Nx.float64_t -> length t\n  val of_cm : Nx.float64_t -> length t\n  val of_mm : Nx.float64_t -> length t\n  val of_um : Nx.float64_t -> length t\n  val of_nm : Nx.float64_t -> length t\n  val of_angstrom : Nx.float64_t -> length t\n  val of_au : Nx.float64_t -> length t\n  val of_pc : Nx.float64_t -> length t\n  val of_kpc : Nx.float64_t -> length t\n  val of_mpc : Nx.float64_t -> length t\n  val of_gpc : Nx.float64_t -> length t\n  val of_ly : Nx.float64_t -> length t\n  val of_solar_radius : Nx.float64_t -> length t\n  val of_earth_radius : Nx.float64_t -> length t\n  val of_jupiter_radius : Nx.float64_t -> length t\n\n  (** {2:extract Extracting}\n\n      Each function returns the numeric value in the named unit as an\n      {!Nx.float64_t}. 
*)\n\n  val in_m : length t -> Nx.float64_t\n  val in_km : length t -> Nx.float64_t\n  val in_cm : length t -> Nx.float64_t\n  val in_mm : length t -> Nx.float64_t\n  val in_um : length t -> Nx.float64_t\n  val in_nm : length t -> Nx.float64_t\n  val in_angstrom : length t -> Nx.float64_t\n  val in_au : length t -> Nx.float64_t\n  val in_pc : length t -> Nx.float64_t\n  val in_kpc : length t -> Nx.float64_t\n  val in_mpc : length t -> Nx.float64_t\n  val in_gpc : length t -> Nx.float64_t\n  val in_ly : length t -> Nx.float64_t\n  val in_solar_radius : length t -> Nx.float64_t\n  val in_earth_radius : length t -> Nx.float64_t\n  val in_jupiter_radius : length t -> Nx.float64_t\nend\n\n(** {1:mass Mass} *)\n\nmodule Mass : sig\n  val of_tensor : Nx.float64_t -> mass t\n  (** [of_tensor x] wraps [x] as a mass. [x] must be in kilograms. *)\n\n  val to_tensor : mass t -> Nx.float64_t\n  (** [to_tensor x] is the underlying tensor in kilograms. *)\n\n  (** {2:scalar Scalar constructors} *)\n\n  val kg : float -> mass t\n  (** [kg x] is [x] kilograms. *)\n\n  val g : float -> mass t\n  (** [g x] is [x] grams. *)\n\n  val mg : float -> mass t\n  (** [mg x] is [x] milligrams. *)\n\n  val solar_mass : float -> mass t\n  (** [solar_mass x] is [x] solar masses. *)\n\n  val earth_mass : float -> mass t\n  (** [earth_mass x] is [x] Earth masses. *)\n\n  val jupiter_mass : float -> mass t\n  (** [jupiter_mass x] is [x] Jupiter masses. 
*)\n\n  (** {2:tensor Tensor constructors} *)\n\n  val of_kg : Nx.float64_t -> mass t\n  val of_g : Nx.float64_t -> mass t\n  val of_mg : Nx.float64_t -> mass t\n  val of_solar_mass : Nx.float64_t -> mass t\n  val of_earth_mass : Nx.float64_t -> mass t\n  val of_jupiter_mass : Nx.float64_t -> mass t\n\n  (** {2:extract Extracting} *)\n\n  val in_kg : mass t -> Nx.float64_t\n  val in_g : mass t -> Nx.float64_t\n  val in_mg : mass t -> Nx.float64_t\n  val in_solar_mass : mass t -> Nx.float64_t\n  val in_earth_mass : mass t -> Nx.float64_t\n  val in_jupiter_mass : mass t -> Nx.float64_t\nend\n\n(** {1:time Time duration} *)\n\nmodule Time : sig\n  val of_tensor : Nx.float64_t -> time t\n  (** [of_tensor x] wraps [x] as a time duration. [x] must be in seconds. *)\n\n  val to_tensor : time t -> Nx.float64_t\n  (** [to_tensor x] is the underlying tensor in seconds. *)\n\n  (** {2:scalar Scalar constructors} *)\n\n  val s : float -> time t\n  (** [s x] is [x] seconds. *)\n\n  val ms : float -> time t\n  (** [ms x] is [x] milliseconds. *)\n\n  val us : float -> time t\n  (** [us x] is [x] microseconds. *)\n\n  val min : float -> time t\n  (** [min x] is [x] minutes. *)\n\n  val hr : float -> time t\n  (** [hr x] is [x] hours. *)\n\n  val day : float -> time t\n  (** [day x] is [x] days (86 400 s). *)\n\n  val yr : float -> time t\n  (** [yr x] is [x] Julian years (365.25 days). *)\n\n  val myr : float -> time t\n  (** [myr x] is [x] megayears. *)\n\n  val gyr : float -> time t\n  (** [gyr x] is [x] gigayears. 
*)\n\n  (** {2:tensor Tensor constructors} *)\n\n  val of_s : Nx.float64_t -> time t\n  val of_ms : Nx.float64_t -> time t\n  val of_us : Nx.float64_t -> time t\n  val of_min : Nx.float64_t -> time t\n  val of_hr : Nx.float64_t -> time t\n  val of_day : Nx.float64_t -> time t\n  val of_yr : Nx.float64_t -> time t\n  val of_myr : Nx.float64_t -> time t\n  val of_gyr : Nx.float64_t -> time t\n\n  (** {2:extract Extracting} *)\n\n  val in_s : time t -> Nx.float64_t\n  val in_ms : time t -> Nx.float64_t\n  val in_us : time t -> Nx.float64_t\n  val in_min : time t -> Nx.float64_t\n  val in_hr : time t -> Nx.float64_t\n  val in_day : time t -> Nx.float64_t\n  val in_yr : time t -> Nx.float64_t\n  val in_myr : time t -> Nx.float64_t\n  val in_gyr : time t -> Nx.float64_t\nend\n\n(** {1:angle Angle} *)\n\nmodule Angle : sig\n  val of_tensor : Nx.float64_t -> angle t\n  (** [of_tensor x] wraps [x] as an angle. [x] must be in radians. *)\n\n  val to_tensor : angle t -> Nx.float64_t\n  (** [to_tensor x] is the underlying tensor in radians. *)\n\n  (** {2:scalar Scalar constructors} *)\n\n  val rad : float -> angle t\n  (** [rad x] is [x] radians. *)\n\n  val deg : float -> angle t\n  (** [deg x] is [x] degrees. *)\n\n  val arcmin : float -> angle t\n  (** [arcmin x] is [x] arcminutes. *)\n\n  val arcsec : float -> angle t\n  (** [arcsec x] is [x] arcseconds. *)\n\n  val mas : float -> angle t\n  (** [mas x] is [x] milliarcseconds. *)\n\n  val hour_angle : float -> angle t\n  (** [hour_angle x] is [x] hour angles (1 h = 15 deg). 
*)\n\n  (** {2:tensor Tensor constructors} *)\n\n  val of_rad : Nx.float64_t -> angle t\n  val of_deg : Nx.float64_t -> angle t\n  val of_arcmin : Nx.float64_t -> angle t\n  val of_arcsec : Nx.float64_t -> angle t\n  val of_mas : Nx.float64_t -> angle t\n  val of_hour_angle : Nx.float64_t -> angle t\n\n  (** {2:extract Extracting} *)\n\n  val in_rad : angle t -> Nx.float64_t\n  val in_deg : angle t -> Nx.float64_t\n  val in_arcmin : angle t -> Nx.float64_t\n  val in_arcsec : angle t -> Nx.float64_t\n  val in_mas : angle t -> Nx.float64_t\n  val in_hour_angle : angle t -> Nx.float64_t\n\n  (** {2:trig Trigonometric functions} *)\n\n  val sin : angle t -> Nx.float64_t\n  (** [sin a] is the sine of [a]. *)\n\n  val cos : angle t -> Nx.float64_t\n  (** [cos a] is the cosine of [a]. *)\n\n  val tan : angle t -> Nx.float64_t\n  (** [tan a] is the tangent of [a]. *)\n\n  val asin : Nx.float64_t -> angle t\n  (** [asin x] is the arc sine of [x]. *)\n\n  val acos : Nx.float64_t -> angle t\n  (** [acos x] is the arc cosine of [x]. *)\n\n  val atan2 : y:Nx.float64_t -> x:Nx.float64_t -> angle t\n  (** [atan2 ~y ~x] is the two-argument arc tangent of [y] and [x]. *)\n\n  (** {2:wrap Wrapping} *)\n\n  val wrap_360 : angle t -> angle t\n  (** [wrap_360 a] normalizes [a] into \\[0, 360) degrees. *)\n\n  val wrap_180 : angle t -> angle t\n  (** [wrap_180 a] normalizes [a] into \\[-180, 180) degrees. *)\nend\n\n(** {1:velocity Velocity} *)\n\nmodule Velocity : sig\n  val of_tensor : Nx.float64_t -> velocity t\n  (** [of_tensor x] wraps [x] as a velocity. [x] must be in m/s. *)\n\n  val to_tensor : velocity t -> Nx.float64_t\n  (** [to_tensor x] is the underlying tensor in m/s. *)\n\n  (** {2:scalar Scalar constructors} *)\n\n  val m_s : float -> velocity t\n  (** [m_s x] is [x] m/s. *)\n\n  val km_s : float -> velocity t\n  (** [km_s x] is [x] km/s. *)\n\n  val km_hr : float -> velocity t\n  (** [km_hr x] is [x] km/h. 
*)\n\n  (** {2:tensor Tensor constructors} *)\n\n  val of_m_s : Nx.float64_t -> velocity t\n  val of_km_s : Nx.float64_t -> velocity t\n  val of_km_hr : Nx.float64_t -> velocity t\n\n  (** {2:extract Extracting} *)\n\n  val in_m_s : velocity t -> Nx.float64_t\n  val in_km_s : velocity t -> Nx.float64_t\n  val in_km_hr : velocity t -> Nx.float64_t\nend\n\n(** {1:power Power / Luminosity} *)\n\nmodule Power : sig\n  val of_tensor : Nx.float64_t -> power t\n  (** [of_tensor x] wraps [x] as a power. [x] must be in watts. *)\n\n  val to_tensor : power t -> Nx.float64_t\n  (** [to_tensor x] is the underlying tensor in watts. *)\n\n  (** {2:scalar Scalar constructors} *)\n\n  val w : float -> power t\n  (** [w x] is [x] watts. *)\n\n  val kw : float -> power t\n  (** [kw x] is [x] kilowatts. *)\n\n  val solar_luminosity : float -> power t\n  (** [solar_luminosity x] is [x] solar luminosities. *)\n\n  val erg_s : float -> power t\n  (** [erg_s x] is [x] erg/s. *)\n\n  (** {2:tensor Tensor constructors} *)\n\n  val of_w : Nx.float64_t -> power t\n  val of_kw : Nx.float64_t -> power t\n  val of_solar_luminosity : Nx.float64_t -> power t\n  val of_erg_s : Nx.float64_t -> power t\n\n  (** {2:extract Extracting} *)\n\n  val in_w : power t -> Nx.float64_t\n  val in_kw : power t -> Nx.float64_t\n  val in_solar_luminosity : power t -> Nx.float64_t\n  val in_erg_s : power t -> Nx.float64_t\nend\n\n(** {1:temperature Temperature} *)\n\nmodule Temperature : sig\n  val of_tensor : Nx.float64_t -> temperature t\n  (** [of_tensor x] wraps [x] as a temperature. [x] must be in kelvin. *)\n\n  val to_tensor : temperature t -> Nx.float64_t\n  (** [to_tensor x] is the underlying tensor in kelvin. *)\n\n  (** {2:scalar Scalar constructors} *)\n\n  val kelvin : float -> temperature t\n  (** [kelvin x] is [x] kelvin. 
*)\n\n  (** {2:tensor Tensor constructors} *)\n\n  val of_kelvin : Nx.float64_t -> temperature t\n\n  (** {2:extract Extracting} *)\n\n  val in_kelvin : temperature t -> Nx.float64_t\nend\n\n(** {1:energy Energy} *)\n\nmodule Energy : sig\n  val of_tensor : Nx.float64_t -> energy t\n  (** [of_tensor x] wraps [x] as an energy. [x] must be in joules. *)\n\n  val to_tensor : energy t -> Nx.float64_t\n  (** [to_tensor x] is the underlying tensor in joules. *)\n\n  (** {2:scalar Scalar constructors} *)\n\n  val j : float -> energy t\n  (** [j x] is [x] joules. *)\n\n  val erg : float -> energy t\n  (** [erg x] is [x] ergs. *)\n\n  val ev : float -> energy t\n  (** [ev x] is [x] electronvolts. *)\n\n  val kev : float -> energy t\n  (** [kev x] is [x] kiloelectronvolts. *)\n\n  val mev : float -> energy t\n  (** [mev x] is [x] megaelectronvolts. *)\n\n  (** {2:tensor Tensor constructors} *)\n\n  val of_j : Nx.float64_t -> energy t\n  val of_erg : Nx.float64_t -> energy t\n  val of_ev : Nx.float64_t -> energy t\n  val of_kev : Nx.float64_t -> energy t\n  val of_mev : Nx.float64_t -> energy t\n\n  (** {2:extract Extracting} *)\n\n  val in_j : energy t -> Nx.float64_t\n  val in_erg : energy t -> Nx.float64_t\n  val in_ev : energy t -> Nx.float64_t\n  val in_kev : energy t -> Nx.float64_t\n  val in_mev : energy t -> Nx.float64_t\nend\n\n(** {1:frequency Frequency} *)\n\nmodule Frequency : sig\n  val of_tensor : Nx.float64_t -> frequency t\n  (** [of_tensor x] wraps [x] as a frequency. [x] must be in hertz. *)\n\n  val to_tensor : frequency t -> Nx.float64_t\n  (** [to_tensor x] is the underlying tensor in hertz. *)\n\n  (** {2:scalar Scalar constructors} *)\n\n  val hz : float -> frequency t\n  (** [hz x] is [x] hertz. *)\n\n  val khz : float -> frequency t\n  (** [khz x] is [x] kilohertz. *)\n\n  val mhz : float -> frequency t\n  (** [mhz x] is [x] megahertz. *)\n\n  val ghz : float -> frequency t\n  (** [ghz x] is [x] gigahertz. 
*)\n\n  (** {2:tensor Tensor constructors} *)\n\n  val of_hz : Nx.float64_t -> frequency t\n  val of_khz : Nx.float64_t -> frequency t\n  val of_mhz : Nx.float64_t -> frequency t\n  val of_ghz : Nx.float64_t -> frequency t\n\n  (** {2:extract Extracting} *)\n\n  val in_hz : frequency t -> Nx.float64_t\n  val in_khz : frequency t -> Nx.float64_t\n  val in_mhz : frequency t -> Nx.float64_t\n  val in_ghz : frequency t -> Nx.float64_t\nend\n\n(** {1:dimensionless Dimensionless} *)\n\nmodule Dimensionless : sig\n  val of_tensor : Nx.float64_t -> dimensionless t\n  (** [of_tensor x] wraps [x] as a dimensionless quantity. *)\n\n  val to_tensor : dimensionless t -> Nx.float64_t\n  (** [to_tensor x] is the underlying tensor. *)\n\n  val v : float -> dimensionless t\n  (** [v x] is the scalar dimensionless quantity [x]. *)\n\n  val to_float : dimensionless t -> float\n  (** [to_float x] is the scalar value of [x]. Intended for 0-d tensors. *)\nend\n"
  },
  {
    "path": "dev/umbra/lib/vega_data.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Vega (alpha Lyrae) reference spectrum from CALSPEC: alpha_lyr_stis_011.fits\n   (Bohlin 2014). Subsampled to 300 points from 900-250000 Angstroms.\n\n   wave: wavelength in Angstroms. flux: spectral flux density f_lambda in\n   W/m^2/m (SI). *)\n\n[@@@ocamlformat \"disable\"]\n\nlet wave =\n  [|\n    900.452; 917.759; 935.398; 953.377; 971.701;\n    989.386; 1008.402; 1027.784; 1047.539; 1067.673;\n    1087.104; 1107.998; 1129.294; 1151.0; 1172.199951171875;\n    1198.199951171875; 1230.0; 1240.699951171875; 1264.300048828125; 1287.9000244140625;\n    1312.5999755859375; 1337.4000244140625; 1363.4000244140625; 1388.0999755859375; 1415.300048828125;\n    1442.4000244140625; 1469.5; 1496.699951171875; 1526.199951171875; 1554.5;\n    1584.0; 1614.5999755859375; 1645.300048828125; 1676.115356; 1707.652832;\n    1740.569458; 1773.494019; 1807.798096; 1842.109619; 1876.428101;\n    1912.126099; 1947.830688; 1984.914551; 2023.37793; 2061.84668;\n    2100.320312; 2140.172119; 2181.401855; 2222.634521; 2265.243652;\n    2307.85376; 2351.838867; 2395.823242; 2441.179932; 2487.907959;\n    2536.005615; 2584.09668; 2632.18042; 2683.001953; 2733.811768;\n    2785.980957; 2838.134277; 2891.641846; 2946.500488; 3002.706055;\n    3060.254883; 3119.17334; 3176.848877; 3237.280029; 3300.467529;\n    3363.663574; 3426.86792; 3492.827881; 3558.794678; 3624.767822;\n    3693.495117; 3764.976562; 3836.461914; 3907.949951; 3982.189453;\n    4059.178955; 4136.16748; 4213.153809; 4292.885742; 4375.36084;\n    4457.828613; 4543.035156; 4630.977539; 4718.905273; 4806.816406;\n    4897.454102; 4990.81543; 5086.894531; 5182.942871; 5281.700684;\n    5380.419922; 5485.38916; 5587.699219; 5694.90625; 5802.137207;\n    
5914.267578; 6026.421387; 6138.598145; 6255.674805; 6377.652344;\n    6494.770996; 6621.67041; 6743.70752; 6875.525879; 7002.478027;\n    7139.210449; 7271.072266; 7412.712402; 7549.477051; 7696.016602;\n    7842.55957; 7989.103027; 8140.52832; 8296.831055; 8453.124023;\n    8614.286133; 8775.428711; 8946.311523; 9112.282227; 9287.978516;\n    9463.631836; 9644.114258; 9824.539062; 10014.648438; 10205.378729443204;\n    10401.529770846228; 10601.450905715283; 10794.393204668508; 11001.865315614596; 11213.325115099247;\n    11428.849247658822; 11648.515831526549; 11860.514306414294; 12088.477647568725; 12320.82252599989;\n    12557.633156314389; 12786.177214329819; 13031.93212871661; 13282.410540749495; 13537.703237775104;\n    13797.902752017966; 14049.019236198084; 14319.046427392743; 14594.263638185937; 14874.770622496817;\n    15145.48568465375; 15436.587354440591; 15733.284102879634; 16035.6834692452; 16343.895059966944;\n    16641.34761761389; 16961.200290683963; 17287.200646834564; 17619.466846796713; 17940.134316746688;\n    18284.950136177056; 18636.393439411142; 18994.591609034607; 19359.674476061784; 19712.013050644106;\n    20090.8850151999; 20477.039034852016; 20870.61507330569; 21250.45218142002; 21658.893498090372;\n    22075.185203484638; 22499.47818484361; 22931.926229505145; 23349.278401987125; 23798.05991182052;\n    24255.46716331803; 24721.665946277586; 25171.590688206736; 25655.39769412964; 26148.503644305452;\n    26651.08726772658; 27163.33072884503; 27657.69282371489; 28189.283604893368; 28731.09175155295;\n    29283.313645250695; 29816.258607862546; 30389.33779785106; 30973.431775616467; 31568.752249243276;\n    32175.51499603123; 32761.096902670255; 33390.77694148892; 34032.559656653226; 34686.6776658844;\n    35317.961760150065; 35996.78565914454; 36688.65679708345; 37393.82594651755; 38112.548700060266;\n    38806.182319759384; 39552.05106970492; 40312.25568451587; 41087.071705009694; 41834.84049088929;\n    42638.92113772262; 
43458.45650302316; 44293.74363220345; 45145.08527967651; 45966.708340478566;\n    46850.20497002188; 47750.6827218135; 48668.46797921292; 49603.893398540575; 50506.66503950412;\n    51477.42126605539; 52466.83577548603; 53475.2671869996; 54448.49633089426; 55495.01596936514;\n    56561.650090809424; 57648.78530287201; 58756.81564377781; 59826.16692337332; 60976.04732038082;\n    62148.02883123579; 63342.53624731101; 64495.34586512943; 65734.96955693385; 66998.41926097606;\n    68286.15292170878; 69598.63728556872; 70865.30551837588; 72227.3621205334; 73615.59793944974;\n    75030.51614901233; 76396.04247255616; 77864.4018237676; 79360.98356865476; 80886.33015165699;\n    82440.99444348396; 83941.38860077565; 85554.77221228057; 87199.16563569565; 88875.16489081086;\n    90492.65845739808; 92231.95982648393; 94004.69119201355; 95811.49509052176; 97653.02640827911;\n    99430.27365358408; 101341.3591939848; 103289.17648427008; 105274.43152183376; 107190.38538724542;\n    109250.62305986807; 111350.45923995743; 113490.65502485144; 115671.98614086409; 117777.17147181589;\n    120040.89097212761; 122348.11997351186; 124699.69474434588; 126969.18087410457; 129409.57409375694;\n    131896.8725467459; 134431.9777685813; 137015.80862256046; 139509.44325303897; 142190.86481564713;\n    144923.8242628955; 147709.31217146275; 150397.56442345414; 153288.25958223912; 156234.51493917976;\n    159237.39838144454; 162297.9983210655; 165251.7590085775; 168427.95711765194; 171665.20289426477;\n    174964.66969630358; 178148.95889525744; 181573.045814387; 185062.94491283095; 188619.92112426818;\n    192245.26369497634; 195744.05300937625; 199506.32395143431; 203340.90709103053; 207249.19229470106;\n    211021.04729625938; 215076.94755071797; 219210.80366346712; 223424.11397425184; 227718.40562110202;\n    231862.792374509; 236319.27844847148; 240861.41978391458; 245490.86270582714; 249958.7012923323\n  |]\n\nlet flux =\n  [|\n    1.2837948759614193e-10; 1.1943805588998657e-07; 
1.173052282865683e-06; 2.1574317088379757e-06; 1.3420374216366326e-06;\n    4.100760634173639e-06; 6.2335830079973675e-06; 4.519122285273625e-06; 1.093481569114374e-05; 1.3420373761618976e-05;\n    1.8457114492775872e-05; 4.486309626372531e-05; 0.0006154832663014531; 0.0017448125872761011; 0.00016285567835438997;\n    0.00012107902148272842; 0.0002881045511458069; 0.0008428706787526608; 0.00555039057508111; 0.021335331723093987;\n    0.032872095704078674; 0.04942339286208153; 0.05074312165379524; 0.05717964470386505; 0.062469806522130966;\n    0.0662260353565216; 0.06470223516225815; 0.0783146470785141; 0.06845968961715698; 0.07111997902393341;\n    0.0734374150633812; 0.0725170373916626; 0.07288476824760437; 0.06900300085544586; 0.0641390010714531;\n    0.06533699482679367; 0.06514099985361099; 0.06312000006437302; 0.058240000158548355; 0.060812000185251236;\n    0.059772998094558716; 0.060645997524261475; 0.05867400020360947; 0.0558370016515255; 0.05407499894499779;\n    0.05430099740624428; 0.04962899908423424; 0.05051000043749809; 0.047245997935533524; 0.044964998960494995;\n    0.04547500237822533; 0.04311799630522728; 0.03780499845743179; 0.041788000613451004; 0.04115099832415581;\n    0.03668300062417984; 0.03907399997115135; 0.03461499884724617; 0.040741000324487686; 0.03929299861192703;\n    0.03749600052833557; 0.037234000861644745; 0.03612099960446358; 0.03686100244522095; 0.03653800114989281;\n    0.036056000739336014; 0.03495499864220619; 0.03484500199556351; 0.033263999968767166; 0.03422100096940994;\n    0.032947998493909836; 0.032896000891923904; 0.03175799921154976; 0.031553998589515686; 0.03090999834239483;\n    0.0309200007468462; 0.045221999287605286; 0.033542998135089874; 0.07968499511480331; 0.06394299864768982;\n    0.08404000103473663; 0.07898300141096115; 0.07822800427675247; 0.0719740018248558; 0.0667010024189949;\n    0.06643600016832352; 0.06298799812793732; 0.05953500047326088; 0.05647499859333038; 0.05225900188088417;\n    
0.047912996262311935; 0.04777200147509575; 0.04543299973011017; 0.04206399992108345; 0.04022299870848656;\n    0.03820599988102913; 0.03601999953389168; 0.03411700204014778; 0.032175999134778976; 0.03042599931359291;\n    0.028788000345230103; 0.027058999985456467; 0.02562600001692772; 0.024166999384760857; 0.02274700067937374;\n    0.021289000287652016; 0.019968999549746513; 0.019201001152396202; 0.01802999898791313; 0.017007999122142792;\n    0.01603100076317787; 0.015209999866783619; 0.014260999858379364; 0.013499000109732151; 0.012649999931454659;\n    0.011848000809550285; 0.011309999972581863; 0.010559000074863434; 0.009970699436962605; 0.009328600019216537;\n    0.0090791005641222; 0.008882399648427963; 0.009110400453209877; 0.008642599917948246; 0.008087899535894394;\n    0.007705099880695343; 0.00721339974552393; 0.006843299604952335; 0.006151599809527397; 0.006024217698723078;\n    0.005648050922900438; 0.0052790273912250996; 0.004945714958012104; 0.004534630570560694; 0.0043441662564873695;\n    0.004064818844199181; 0.0037846784107387066; 0.0035608832258731127; 0.0033259775955229998; 0.0031156735494732857;\n    0.0029069569427520037; 0.002545868745073676; 0.002553011057898402; 0.0023879422806203365; 0.0022300160489976406;\n    0.0020839935168623924; 0.001953843282535672; 0.0018244864186272025; 0.0017038591904565692; 0.0015895807882770896;\n    0.0014951423509046435; 0.0013784831389784813; 0.0012856320245191455; 0.0012396031524986029; 0.0011531008640304208;\n    0.0011078655952587724; 0.0010332672391086817; 0.0009435904212296009; 0.0008983552106656134; 0.0008412160095758736;\n    0.0007756645791232586; 0.0007241599960252643; 0.0006796390516683459; 0.0006226585246622562; 0.0005917081143707037;\n    0.0005520281265489757; 0.0005140146822668612; 0.0004784614429809153; 0.00044632062781602144; 0.00036132606328465044;\n    0.0003871974186040461; 0.0003606118552852422; 0.0003356928064022213; 0.0003139481705147773; 0.00029244160396046937;\n    
0.0002722047793213278; 0.0002522854192648083; 0.0002364928077440709; 0.00021379583631642163; 0.00019514624727889895;\n    0.00018959103908855468; 0.00017943295824807137; 0.0001626086450414732; 0.00015602176426909864; 0.00013110271538607776;\n    0.0001348326331935823; 0.0001258649572264403; 0.0001058662383002229; 0.00010872319398913532; 0.00010110463335877284;\n    9.396224049851298e-05; 8.72959935804829e-05; 8.126463944790885e-05; 7.560627273051068e-05; 7.031296263448894e-05;\n    6.562278576893732e-05; 6.097228470025584e-05; 5.6718592531979084e-05; 4.850483310292475e-05; 4.869529584539123e-05;\n    4.574310514726676e-05; 4.2584575567161664e-05; 3.8981630495982245e-05; 3.6783359973924235e-05; 3.387084507266991e-05;\n    3.1902720365906134e-05; 2.955366471724119e-05; 2.7506175683811307e-05; 2.5553918021614663e-05; 2.3807999241398647e-05;\n    2.1720832592109218e-05; 2.0554240109049715e-05; 1.9094017261522822e-05; 1.7729023966239765e-05; 1.6522752048331313e-05;\n    1.520537625765428e-05; 1.422924833605066e-05; 1.3229311662144028e-05; 1.2340479770500679e-05; 1.143577628681669e-05;\n    1.063423951563891e-05; 9.872383998299483e-06; 9.134336323768366e-06; 8.507391612511128e-06; 7.931238542369101e-06;\n    7.359846222243505e-06; 6.820991984568536e-06; 6.3662591855973005e-06; 5.908352250116877e-06; 5.479807896335842e-06;\n    5.0814210226235446e-06; 4.719538992503658e-06; 4.393369636090938e-06; 4.080691269336967e-06; 3.7862655517528765e-06;\n    3.291852635811665e-06; 3.273599986641784e-06; 3.0180608519003727e-06; 2.8196607217978453e-06; 2.615705398056889e-06;\n    2.4268288143503014e-06; 2.2609663119510515e-06; 2.0966911051800707e-06; 1.937177557920222e-06; 1.8046463310383842e-06;\n    1.6800512412373791e-06; 1.5578368675051024e-06; 1.4118143099040026e-06; 1.3411839745458565e-06; 1.2443647392501589e-06;\n    1.1586560049181571e-06; 1.0753279866548837e-06; 9.967616279027425e-07; 9.150207347374817e-07; 8.610560371380416e-07;\n    7.983616114870529e-07; 
7.404287885037775e-07; 6.840038508926227e-07; 6.342451115415315e-07; 5.927398092353542e-07;\n    5.49964795482083e-07; 5.098086148791481e-07; 4.7227135269167775e-07; 4.399718420700083e-07; 4.0822783375915606e-07;\n    3.7775359373881656e-07; 3.5092992334284645e-07; 3.2537599281567964e-07; 3.019648033841804e-07; 2.7887102760359994e-07;\n    2.6038017608698283e-07; 2.414131188288593e-07; 2.2458880266640335e-07; 2.0831998881476466e-07; 1.930828688045949e-07;\n    1.790361494613535e-07; 1.6260864299511013e-07; 1.5435520595019625e-07; 1.4181631513565662e-07; 1.32769287120027e-07;\n    1.230079931247019e-07; 1.1451648163074424e-07; 1.0618367696224595e-07; 9.83270425081173e-08; 9.118463850654734e-08;\n    8.451839761391966e-08; 7.86695650845104e-08; 7.293977688505038e-08; 6.760678417094823e-08; 6.264678376055599e-08;\n    5.8289920445986354e-08; 5.4044157593580167e-08; 5.0052349820361997e-08; 4.447334234214395e-08; 4.3013120176738084e-08;\n    4.002918529977251e-08; 3.6950016379933004e-08; 3.4370817303397416e-08; 3.1863038429946755e-08; 2.9625088160400992e-08\n  |]\n"
  },
  {
    "path": "dev/umbra/papers/perlmutter1999/.gitignore",
    "content": "/data/\n"
  },
  {
    "path": "dev/umbra/papers/perlmutter1999/download_data.sh",
    "content": "#!/usr/bin/env bash\n# Download Pantheon+ Type Ia supernova data (Scolnic et al. 2022, Brout et al. 2022).\n#\n# The Pantheon+ compilation contains 1701 light curves of 1550 unique SNe Ia\n# spanning 0.001 < z < 2.26, extending the original 42 high-z supernovae from\n# Perlmutter et al. (1999) that first demonstrated cosmic acceleration.\n#\n# Source: https://github.com/PantheonPlusSH0ES/DataRelease\n# Papers: arXiv:2112.03863 (data), arXiv:2202.04077 (cosmology)\n\nset -euo pipefail\n\nDIR=\"$(cd \"$(dirname \"$0\")\" && pwd)\"\nDATA_DIR=\"${DIR}/data\"\nmkdir -p \"${DATA_DIR}\"\n\nBASE_URL=\"https://raw.githubusercontent.com/PantheonPlusSH0ES/DataRelease/main/Pantheon%2B_Data/4_DISTANCES_AND_COVAR\"\n\necho \"Downloading Pantheon+ SN Ia distance data...\"\ncurl -fSL \"${BASE_URL}/Pantheon%2BSH0ES.dat\" -o \"${DATA_DIR}/Pantheon+SH0ES.dat\"\necho \"  -> ${DATA_DIR}/Pantheon+SH0ES.dat ($(wc -l < \"${DATA_DIR}/Pantheon+SH0ES.dat\") lines)\"\n\necho \"Downloading paper PDF (Perlmutter et al. 1999, arXiv:astro-ph/9812133)...\"\ncurl -fSL \"https://arxiv.org/pdf/astro-ph/9812133\" -o \"${DATA_DIR}/perlmutter1999.pdf\"\necho \"  -> ${DATA_DIR}/perlmutter1999.pdf\"\n\necho \"Done.\"\n"
  },
  {
    "path": "dev/umbra/papers/perlmutter1999/perlmutter1999.md",
    "content": "# The Accelerating Universe\n\nReproducing the key result of Perlmutter et al. (1999), \"Measurements of\n$\\Omega$ and $\\Lambda$ from 42 High-Redshift Supernovae\" (ApJ 517, 565) --\nthe Nobel Prize-winning discovery that the expansion of the universe is\naccelerating.\n\nWe use the modern Pantheon+ dataset (Scolnic et al. 2022, 1701 SNe Ia spanning\n$0.001 < z < 2.26$) which extends the original 42 supernovae and confirms the\nresult with far greater precision.\n\n<!-- quill:cell id=\"c_intro\" -->\n## Background\n\nType Ia supernovae (SNe Ia) are \"standardizable candles\": after correcting for\nthe correlation between peak luminosity and light-curve width, they have\nremarkably uniform absolute magnitudes. This lets us measure their distances\nthrough the **distance modulus**:\n\n$$\\mu = m - M = 5 \\log_{10}\\!\\left(\\frac{d_L}{\\text{Mpc}}\\right) + 25$$\n\nwhere $d_L$ is the luminosity distance, which depends on the cosmological\nparameters $\\Omega_M$ (matter density) and $\\Omega_\\Lambda$ (dark energy\ndensity). In 1998--1999, two independent teams (the Supernova Cosmology Project\nand the High-z Supernova Search Team) found that distant SNe Ia are **fainter\nthan expected** in a decelerating universe -- implying that the expansion is\naccelerating, driven by a cosmological constant or dark energy.\n\nWe reproduce three key results:\n1. The **Hubble diagram** ($\\mu$ vs $z$) with cosmological model curves\n2. **Residuals** relative to an empty universe, showing the acceleration signal\n3. 
**Confidence contours** in the $\\Omega_M$--$\\Omega_\\Lambda$ plane\n\n<!-- quill:cell id=\"c_setup\" -->\n## Setup\n\n\n<!-- quill:cell id=\"c_arrnj4crg3br\" -->\n```ocaml\n#require \"umbra\";;\n\nopen Nx\nopen Umbra\n\nlet f64 = Nx.float64\nlet f32 = Nx.float32\n```\n<!-- quill:output -->\n<!-- out:stdout -->\nval f64 : (float, Nx.float64_elt) Nx.dtype = Nx.Float64\nval f32 : (float, Nx.float32_elt) Nx.dtype = Nx.Float32\n<!-- /quill:output -->\n\n## Loading the Pantheon+ data\n\nThe Pantheon+ compilation (Scolnic et al. 2022) provides standardized distance\nmoduli for 1701 SN Ia light curves from 18 surveys. We load the data file\n(downloaded by `download_data.sh`) and extract redshift, distance modulus, and\nthe diagonal error (for plotting; full cosmological fits require the covariance\nmatrix).\n\n\n<!-- quill:cell id=\"c_data\" -->\n```ocaml\nlet df = Talon_csv.read ~sep:' ' \"data/Pantheon+SH0ES.dat\"\n\nlet () =\n  Printf.printf \"Loaded %d light curves, %d columns\\n\"\n    (Talon.num_rows df) (List.length (Talon.column_names df))\n```\n<!-- quill:output -->\n<!-- out:stdout -->\nLoaded 1701 light curves, 47 columns\nval df : Talon.t =\n  \n<!-- out:display text/html 
-->\n<table>\n<thead><tr><th>CID</th><th>IDSURVEY</th><th>zHD</th><th>zHDERR</th><th>zCMB</th><th>zCMBERR</th><th>zHEL</th><th>zHELERR</th><th>m_b_corr</th><th>m_b_corr_err_DIAG</th><th>…</th></tr></thead>\n<tbody>\n<tr><td>2011fe</td><td>51</td><td>0.00122</td><td>0.00084</td><td>0.00122</td><td>2e-05</td><td>0.00082</td><td>2e-05</td><td>9.74571</td><td>1.51621</td><td>…</td></tr>\n<tr><td>2011fe</td><td>56</td><td>0.00122</td><td>0.00084</td><td>0.00122</td><td>2e-05</td><td>0.00082</td><td>2e-05</td><td>9.80286</td><td>1.51723</td><td>…</td></tr>\n<tr><td>2012cg</td><td>51</td><td>0.00256</td><td>0.00084</td><td>0.00256</td><td>2e-05</td><td>0.00144</td><td>2e-05</td><td>11.4703</td><td>0.781906</td><td>…</td></tr>\n<tr><td>2012cg</td><td>56</td><td>0.00256</td><td>0.00084</td><td>0.00256</td><td>2e-05</td><td>0.00144</td><td>2e-05</td><td>11.4919</td><td>0.798612</td><td>…</td></tr>\n<tr><td>1994DRichmond</td><td>50</td><td>0.00299</td><td>0.00084</td><td>0.00299</td><td>4e-05</td><td>0.00187</td><td>4e-05</td><td>11.5227</td><td>0.880798</td><td>…</td></tr>\n<tr><td>1981B</td><td>50</td><td>0.00317</td><td>0.00084</td><td>0.0035</td><td>1e-05</td><td>0.00236</td><td>1e-05</td><td>11.5416</td><td>0.613941</td><td>…</td></tr>\n<tr><td>2013aa</td><td>56</td><td>0.00331</td><td>0.00085</td><td>0.00478</td><td>0.00015</td><td>0.00411</td><td>0.00015</td><td>11.2074</td><td>0.59407</td><td>…</td></tr>\n<tr><td>2013aa</td><td>5</td><td>0.00331</td><td>0.00085</td><td>0.00478</td><td>0.00015</td><td>0.00411</td><td>0.00015</td><td>11.2998</td><td>0.579622</td><td>…</td></tr>\n<tr><td>2017cbv</td><td>5</td><td>0.00331</td><td>0.00085</td><td>0.00478</td><td>0.00015</td><td>0.00411</td><td>0.00015</td><td>11.1483</td><td>0.577815</td><td>…</td></tr>\n<tr><td>2017cbv</td><td>18</td><td>0.00331</td><td>0.00085</td><td>0.00478</td><td>0.00015</td><td>0.00411</td><td>0.00015</td><td>11.2577</td><td>0.577916</td><td>…</td></tr>\n<tr><td>2001el</td><td>50</td><td>0.00333</td>
<td>0.00084</td><td>0.00357</td><td>1e-05</td><td>0.00379</td><td>1e-05</td><td>12.2481</td><td>0.590389</td><td>…</td></tr>\n<tr><td>2011by</td><td>51</td><td>0.00349</td><td>0.00084</td><td>0.00369</td><td>2e-05</td><td>0.00313</td><td>2e-05</td><td>12.5403</td><td>0.55206</td><td>…</td></tr>\n<tr><td>1998aq</td><td>50</td><td>0.00349</td><td>0.00084</td><td>0.00369</td><td>1e-05</td><td>0.00313</td><td>1e-05</td><td>12.2437</td><td>0.544824</td><td>…</td></tr>\n<tr><td>1990N</td><td>50</td><td>0.00359</td><td>0.00084</td><td>0.00462</td><td>2e-05</td><td>0.00355</td><td>2e-05</td><td>12.4439</td><td>0.550332</td><td>…</td></tr>\n<tr><td>2021pit</td><td>56</td><td>0.00384</td><td>0.00084</td><td>0.00366</td><td>1e-05</td><td>0.00388</td><td>1e-05</td><td>11.7469</td><td>0.565861</td><td>…</td></tr>\n<tr><td>2005df</td><td>50</td><td>0.00407</td><td>0.00084</td><td>0.00435</td><td>1e-05</td><td>0.00435</td><td>1e-05</td><td>12.1403</td><td>0.475638</td><td>…</td></tr>\n<tr><td>2005df_ANU</td><td>50</td><td>0.00407</td><td>0.00084</td><td>0.00435</td><td>1e-05</td><td>0.00435</td><td>1e-05</td><td>12.1249</td><td>0.478515</td><td>…</td></tr>\n<tr><td>2013dy</td><td>51</td><td>0.00432</td><td>0.00084</td><td>0.00293</td><td>0.00012</td><td>0.00394</td><td>0.00012</td><td>12.246</td><td>0.513549</td><td>…</td></tr>\n<tr><td>2013dy</td><td>56</td><td>0.00432</td><td>0.00084</td><td>0.00293</td><td>0.00012</td><td>0.00394</td><td>0.00012</td><td>12.3081</td><td>0.530151</td><td>…</td></tr>\n<tr><td>2012ht</td><td>56</td><td>0.00465</td><td>0.00084</td><td>0.00465</td><td>2e-05</td><td>0.00352</td><td>2e-05</td><td>12.6779</td><td>0.441191</td><td>…</td></tr>\n<tr><td>…</td><td>…</td><td>…</td><td>…</td><td>…</td><td>…</td><td>…</td><td>…</td><td>…</td><td>…</td><td>…</td></tr>\n</tbody>\n</table>\n<p><small>1701 rows × 47 columns</small></p>\n<!-- /quill:output -->\n\n<!-- quill:cell id=\"c_extract\" -->\n```ocaml\nlet col name =\n  Talon.get_column_exn df name |> 
Talon.Col.to_tensor f64 |> Option.get\n\nlet sn_z = col \"zHD\"\nlet sn_mu = col \"MU_SH0ES\"\nlet sn_mu_err = col \"MU_SH0ES_ERR_DIAG\"\n\nlet () =\n  let n = (Nx.shape sn_z).(0) in\n  Printf.printf \"%d SNe Ia, z in [%.4f, %.3f]\\n\" n\n    (Nx.item [] (Nx.min sn_z)) (Nx.item [] (Nx.max sn_z))\n```\n<!-- quill:output -->\n<!-- out:stdout -->\n1701 SNe Ia, z in [0.0012, 2.261]\nval col : string -> (float, Nx.float64_elt) Nx.t = <fun>\nval sn_z : (float, Nx.float64_elt) Nx.t = float64 [1701] \n  [0.00122, 0.00122, ..., 1.91165, 2.26137]\nval sn_mu : (float, Nx.float64_elt) Nx.t = float64 [1701] \n  [28.9987, 29.0559, ..., 45.4233, 46.1828]\nval sn_mu_err : (float, Nx.float64_elt) Nx.t = float64 [1701] \n  [1.51645, 1.51747, ..., 0.358642, 0.281309]\n<!-- /quill:output -->\n\n## Cosmological models\n\nWe compute the theoretical distance modulus $\\mu(z)$ for several cosmologies\nto compare with the data. The key insight from Perlmutter et al. is that the\ndata prefer $\\Omega_\\Lambda > 0$ (accelerating expansion) over\n$\\Omega_\\Lambda = 0$ (decelerating expansion).\n\nThe models we compare:\n- **Best-fit $\\Lambda$CDM**: $(\\Omega_M, \\Omega_\\Lambda) = (0.3, 0.7)$, $H_0 = 70$\n- **Einstein--de Sitter**: $(\\Omega_M, \\Omega_\\Lambda) = (1, 0)$ -- matter-only, decelerating\n- **Empty (Milne)**: $(\\Omega_M, \\Omega_\\Lambda) = (0, 0)$ -- coasting, no gravity\n- **Open CDM**: $(\\Omega_M, \\Omega_\\Lambda) = (0.3, 0)$ -- matter only, curved\n\n\n<!-- quill:cell id=\"c_models\" -->\n```ocaml\nlet h0 = 70.0\n\nlet p_lcdm = Cosmo.lcdm ~h0 ~omega_m:0.3 ~omega_l:0.7\nlet p_edsit = Cosmo.lcdm ~h0 ~omega_m:1.0 ~omega_l:0.0\nlet p_empty = Cosmo.lcdm ~h0 ~omega_m:0.01 ~omega_l:0.0\nlet p_open = Cosmo.lcdm ~h0 ~omega_m:0.3 ~omega_l:0.0\n\nlet n_grid = 200\nlet z_grid = Nx.logspace f64 (-2.5) 0.4 n_grid  (* z ~ 0.003 to 2.5 *)\n\nlet mu_of_model p =\n  Nx.init f64 [| n_grid |] (fun idx ->\n    let z = Nx.scalar f64 (Nx.item [idx.(0)] z_grid) in\n    Nx.item [] 
(Cosmo.distance_modulus ~p z))\n\nlet mu_lcdm = mu_of_model p_lcdm\nlet mu_edsit = mu_of_model p_edsit\nlet mu_empty = mu_of_model p_empty\nlet mu_open = mu_of_model p_open\n\nlet () = Printf.printf \"Theory curves computed for %d redshift points\\n\" n_grid\n```\n<!-- quill:output -->\n<!-- out:stdout -->\nTheory curves computed for 200 redshift points\nval h0 : float = 70.\nval p_lcdm : Umbra.Cosmo.params = <abstr>\nval p_edsit : Umbra.Cosmo.params = <abstr>\nval p_empty : Umbra.Cosmo.params = <abstr>\nval p_open : Umbra.Cosmo.params = <abstr>\nval n_grid : int = 200\nval z_grid : (float, Nx.float64_elt) Nx.t = float64 [200] \n  [0.00316228, 0.00327019, ..., 2.429, 2.51189]\nval mu_of_model : Umbra.Cosmo.params -> (float, Nx.float64_elt) Nx.t = <fun>\nval mu_lcdm : (float, Nx.float64_elt) Nx.t = float64 [200] \n  [30.6639, 30.737, ..., 46.4718, 46.5603]\nval mu_edsit : (float, Nx.float64_elt) Nx.t = float64 [200] \n  [30.6603, 30.7332, ..., 45.6533, 45.7352]\nval mu_empty : (float, Nx.float64_elt) Nx.t = float64 [200] \n  [30.662, 30.735, ..., 46.7919, 46.9042]\nval mu_open : (float, Nx.float64_elt) Nx.t = float64 [200] \n  [30.6615, 30.7345, ..., 46.3264, 46.4235]\n<!-- /quill:output -->\n\n## The Hubble diagram\n\nThe Hubble diagram plots distance modulus $\\mu$ against redshift $z$. 
Distant\nsupernovae ($z > 0.3$) are systematically fainter than predicted by\ndecelerating models (Einstein--de Sitter, Open CDM), showing that the expansion\nhas been **accelerating**.\n\n\n<!-- quill:cell id=\"c_hubble\" -->\n```ocaml\nlet to32 t = Nx.astype f32 t\n\nlet _fig =\n  Hugin.layers [\n    Hugin.point ~x:(to32 sn_z) ~y:(to32 sn_mu)\n      ~color:(Hugin.Color.with_alpha 0.3 Hugin.Color.blue)\n      ~size:2.0 ~marker:Hugin.Circle () ;\n    Hugin.line ~x:(to32 z_grid) ~y:(to32 mu_lcdm)\n      ~color:Hugin.Color.vermillion ~line_width:2.5\n      ~label:\"ΛCDM (0.3, 0.7)\" () ;\n    Hugin.line ~x:(to32 z_grid) ~y:(to32 mu_edsit)\n      ~color:Hugin.Color.sky_blue ~line_width:2.0\n      ~line_style:`Dashed ~label:\"EdS (1, 0)\" () ;\n    Hugin.line ~x:(to32 z_grid) ~y:(to32 mu_empty)\n      ~color:Hugin.Color.green ~line_width:2.0\n      ~line_style:`Dotted ~label:\"Empty (0, 0)\" () ;\n    Hugin.line ~x:(to32 z_grid) ~y:(to32 mu_open)\n      ~color:Hugin.Color.orange ~line_width:2.0\n      ~line_style:`Dash_dot ~label:\"Open (0.3, 0)\" () ;\n  ]\n  |> Hugin.xscale `Log\n  |> Hugin.xlim 0.01 2.5\n  |> Hugin.xlabel \"Redshift z\"\n  |> Hugin.ylabel \"Distance modulus μ (mag)\"\n  |> Hugin.title \"SN Ia Hubble Diagram (Pantheon+, 1701 light curves)\"\n  |> Hugin.legend ~loc:Hugin.Lower_right\n  |> Hugin.grid_lines true\n```\n<!-- quill:output -->\n<!-- out:stdout -->\nval to32 : ('a, 'b) Nx.t -> (float, Nx.float32_elt) Nx.t = <fun>\nval _fig : Hugin.t =\n  \n<!-- out:display image/png -->\n<img src=\"figures/c_hubble.png\">\n<!-- /quill:output -->\n\n## Residuals: the acceleration signal\n\nResiduals $\\Delta\\mu = \\mu_\\text{obs} - \\mu_\\text{empty}(z)$ relative to an\nempty (coasting) universe isolate the acceleration signal. Positive residuals\nat high redshift mean supernovae are **fainter than expected** -- i.e. farther\naway than in a coasting universe. 
This is the direct evidence for cosmic\nacceleration.\n\nWe bin the data in redshift to show the trend clearly.\n\n\n<!-- quill:cell id=\"c_residual\" -->\n```ocaml\nlet sn_mu_empty =\n  let n = (Nx.shape sn_z).(0) in\n  Nx.init f64 [| n |] (fun idx ->\n    let z = Nx.scalar f64 (Nx.item [idx.(0)] sn_z) in\n    Nx.item [] (Cosmo.distance_modulus ~p:p_empty z))\n\nlet sn_residual = Nx.sub sn_mu sn_mu_empty\n\n(* Model residuals on the grid *)\nlet res_lcdm = Nx.sub mu_lcdm mu_empty\nlet res_edsit = Nx.sub mu_edsit mu_empty\nlet res_open = Nx.sub mu_open mu_empty\n\n(* Bin the residuals using Talon grouping *)\nlet n_bins = 25\nlet log_z_min = Float.log10 0.01\nlet log_z_max = Float.log10 2.3\nlet bin_width = (log_z_max -. log_z_min) /. Float.of_int n_bins\n\nlet bin_df =\n  let df = Talon.create [\n    \"z\", Talon.Col.of_tensor sn_z;\n    \"res\", Talon.Col.of_tensor sn_residual;\n  ] in\n  let df = Talon.filter_by df Talon.Row.(map (number \"z\") ~f:(fun z -> z > 0.01)) in\n  Talon.with_column df \"bin\" f64 Talon.Row.(\n       map (number \"z\") ~f:(fun z ->\n         let b = int_of_float ((Float.log10 z -. log_z_min) /. bin_width) in\n         Float.of_int (Int.max 0 (Int.min (n_bins - 1) b))))\n\nlet groups =\n  Talon.group_by bin_df Talon.Row.(map (number \"bin\") ~f:int_of_float)\n  |> List.filter (fun (_, g) -> Talon.num_rows g > 2)\n  |> List.sort (fun (a, _) (b, _) -> Int.compare a b)\n\nlet n_groups = List.length groups\nlet bz = Nx.create f32 [| n_groups |]\n  (Array.of_list (List.map (fun (_, g) -> Talon.Agg.mean g \"z\") groups))\nlet bmu = Nx.create f32 [| n_groups |]\n  (Array.of_list (List.map (fun (_, g) -> Talon.Agg.mean g \"res\") groups))\nlet berr = Nx.create f32 [| n_groups |]\n  (Array.of_list (List.map (fun (_, g) ->\n    Talon.Agg.std g \"res\"\n    /. 
Float.sqrt (Float.of_int (Talon.num_rows g - 1))) groups))\n```\n<!-- quill:output -->\n<!-- out:stdout -->\nval sn_mu_empty : (float, Nx.float64_elt) Nx.t = float64 [1701] \n  [28.5917, 28.5917, ..., 46.007, 46.5544]\nval sn_residual : (float, Nx.float64_elt) Nx.t = float64 [1701] \n  [0.40697, 0.46417, ..., -0.583671, -0.371643]\nval res_lcdm : (float, Nx.float64_elt) Nx.t = float64 [200] \n  [0.00189584, 0.00196019, ..., -0.320084, -0.343972]\nval res_edsit : (float, Nx.float64_elt) Nx.t = float64 [200] \n  [-0.00170018, -0.00175822, ..., -1.13865, -1.16906]\nval res_open : (float, Nx.float64_elt) Nx.t = float64 [200] \n  [-0.000498446, -0.000515476, ..., -0.465507, -0.480768]\nval n_bins : int = 25\nval log_z_min : float = -2.\nval log_z_max : float = 0.361727836017592841\nval bin_width : float = 0.094469113440703717\nval bin_df : Talon.t =\n  \n<!-- out:display text/html -->\n<table>\n<thead><tr><th>z</th><th>res</th><th>bin</th></tr></thead>\n<tbody>\n<tr><td>0.01016</td><td>-0.424629577679</td><td>0.</td></tr>\n<tr><td>0.01017</td><td>-0.287976550182</td><td>0.</td></tr>\n<tr><td>0.01017</td><td>-0.238776550182</td><td>0.</td></tr>\n<tr><td>0.01026</td><td>0.112394684341</td><td>0.</td></tr>\n<tr><td>0.01026</td><td>-0.000105315659297</td><td>0.</td></tr>\n<tr><td>0.01028</td><td>0.0491444208064</td><td>0.</td></tr>\n<tr><td>0.01042</td><td>-0.135579053456</td><td>0.</td></tr>\n<tr><td>0.01044</td><td>-0.17646444435</td><td>0.</td></tr>\n<tr><td>0.01061</td><td>0.0819784530003</td><td>0.</td></tr>\n<tr><td>0.01061</td><td>-0.0055215469997</td><td>0.</td></tr>\n<tr><td>0.01073</td><td>-0.215072175983</td><td>0.</td></tr>\n<tr><td>0.01079</td><td>-0.267045255969</td><td>0.</td></tr>\n<tr><td>0.01079</td><td>-0.154445255969</td><td>0.</td></tr>\n<tr><td>0.01096</td><td>0.201426552503</td><td>0.</td></tr>\n<tr><td>0.01114</td><td>0.328659998161</td><td>0.</td></tr>\n<tr><td>0.01114</td><td>0.231859998161</td><td>0.</td></tr>\n<tr><td>0.01122</td><td>0.41403573076
7</td><td>0.</td></tr>\n<tr><td>0.01122</td><td>0.0129357307671</td><td>0.</td></tr>\n<tr><td>0.01122</td><td>0.0740357307671</td><td>0.</td></tr>\n<tr><td>0.01155</td><td>0.0993356408497</td><td>0.</td></tr>\n<tr><td>…</td><td>…</td><td>…</td></tr>\n</tbody>\n</table>\n<p><small>1590 rows × 3 columns</small></p>\n<!-- out:stdout -->\n\nval groups : (int * Talon.t) list =\n  [(0,\n    \n<!-- out:display text/html -->\n<table>\n<thead><tr><th>z</th><th>res</th><th>bin</th></tr></thead>\n<tbody>\n<tr><td>0.01016</td><td>-0.424629577679</td><td>0.</td></tr>\n<tr><td>0.01017</td><td>-0.287976550182</td><td>0.</td></tr>\n<tr><td>0.01017</td><td>-0.238776550182</td><td>0.</td></tr>\n<tr><td>0.01026</td><td>0.112394684341</td><td>0.</td></tr>\n<tr><td>0.01026</td><td>-0.000105315659297</td><td>0.</td></tr>\n<tr><td>0.01028</td><td>0.0491444208064</td><td>0.</td></tr>\n<tr><td>0.01042</td><td>-0.135579053456</td><td>0.</td></tr>\n<tr><td>0.01044</td><td>-0.17646444435</td><td>0.</td></tr>\n<tr><td>0.01061</td><td>0.0819784530003</td><td>0.</td></tr>\n<tr><td>0.01061</td><td>-0.0055215469997</td><td>0.</td></tr>\n<tr><td>0.01073</td><td>-0.215072175983</td><td>0.</td></tr>\n<tr><td>0.01079</td><td>-0.267045255969</td><td>0.</td></tr>\n<tr><td>0.01079</td><td>-0.154445255969</td><td>0.</td></tr>\n<tr><td>0.01096</td><td>0.201426552503</td><td>0.</td></tr>\n<tr><td>0.01114</td><td>0.328659998161</td><td>0.</td></tr>\n<tr><td>0.01114</td><td>0.231859998161</td><td>0.</td></tr>\n<tr><td>0.01122</td><td>0.414035730767</td><td>0.</td></tr>\n<tr><td>0.01122</td><td>0.0129357307671</td><td>0.</td></tr>\n<tr><td>0.01122</td><td>0.0740357307671</td><td>0.</td></tr>\n<tr><td>0.01155</td><td>0.0993356408497</td><td>0.</td></tr>\n<tr><td>…</td><td>…</td><td>…</td></tr>\n</tbody>\n</table>\n<p><small>24 rows × 3 columns</small></p>\n<!-- out:stdout -->\n);\n   (1,\n    \n<!-- out:display text/html 
-->\n<table>\n<thead><tr><th>z</th><th>res</th><th>bin</th></tr></thead>\n<tbody>\n<tr><td>0.01246</td><td>0.178978224041</td><td>1.</td></tr>\n<tr><td>0.01258</td><td>0.0374364117569</td><td>1.</td></tr>\n<tr><td>0.01258</td><td>0.0540364117569</td><td>1.</td></tr>\n<tr><td>0.01259</td><td>-0.084899767747</td><td>1.</td></tr>\n<tr><td>0.01259</td><td>-0.154499767747</td><td>1.</td></tr>\n<tr><td>0.01279</td><td>0.0424614815497</td><td>1.</td></tr>\n<tr><td>0.01283</td><td>-0.0770620111033</td><td>1.</td></tr>\n<tr><td>0.01283</td><td>-0.210762011103</td><td>1.</td></tr>\n<tr><td>0.01303</td><td>-0.0118654606603</td><td>1.</td></tr>\n<tr><td>0.01303</td><td>0.0133345393397</td><td>1.</td></tr>\n<tr><td>0.01304</td><td>-0.173842071152</td><td>1.</td></tr>\n<tr><td>0.01304</td><td>-0.181742071152</td><td>1.</td></tr>\n<tr><td>0.01312</td><td>-0.242409144024</td><td>1.</td></tr>\n<tr><td>0.01312</td><td>-0.290709144024</td><td>1.</td></tr>\n<tr><td>0.01325</td><td>0.184241133384</td><td>1.</td></tr>\n<tr><td>0.01325</td><td>0.200641133384</td><td>1.</td></tr>\n<tr><td>0.01325</td><td>0.140741133384</td><td>1.</td></tr>\n<tr><td>0.01375</td><td>-0.0679294440597</td><td>1.</td></tr>\n<tr><td>0.01375</td><td>-0.0966294440597</td><td>1.</td></tr>\n<tr><td>0.01375</td><td>-0.0929294440597</td><td>1.</td></tr>\n<tr><td>…</td><td>…</td><td>…</td></tr>\n</tbody>\n</table>\n<p><small>52 rows × 3 columns</small></p>\n<!-- out:stdout -->\n);\n   (2,\n    \n<!-- out:display text/html 
-->\n<table>\n<thead><tr><th>z</th><th>res</th><th>bin</th></tr></thead>\n<tbody>\n<tr><td>0.01546</td><td>-0.0881971344432</td><td>2.</td></tr>\n<tr><td>0.01549</td><td>0.14766106801</td><td>2.</td></tr>\n<tr><td>0.0155</td><td>0.131148947175</td><td>2.</td></tr>\n<tr><td>0.0155</td><td>0.0842489471753</td><td>2.</td></tr>\n<tr><td>0.0155</td><td>-0.127551052825</td><td>2.</td></tr>\n<tr><td>0.0155</td><td>-0.241751052825</td><td>2.</td></tr>\n<tr><td>0.01557</td><td>-0.346610654758</td><td>2.</td></tr>\n<tr><td>0.01557</td><td>-0.329610654758</td><td>2.</td></tr>\n<tr><td>0.01562</td><td>0.0293736691777</td><td>2.</td></tr>\n<tr><td>0.01562</td><td>-0.0136263308223</td><td>2.</td></tr>\n<tr><td>0.01565</td><td>0.183674953401</td><td>2.</td></tr>\n<tr><td>0.01576</td><td>0.00234770294004</td><td>2.</td></tr>\n<tr><td>0.01578</td><td>-0.16752766025</td><td>2.</td></tr>\n<tr><td>0.01578</td><td>-0.0964276602495</td><td>2.</td></tr>\n<tr><td>0.01581</td><td>-0.177684167024</td><td>2.</td></tr>\n<tr><td>0.01581</td><td>-0.269884167024</td><td>2.</td></tr>\n<tr><td>0.01587</td><td>0.213026247358</td><td>2.</td></tr>\n<tr><td>0.01588</td><td>-0.151252326051</td><td>2.</td></tr>\n<tr><td>0.0159</td><td>-0.0164068904934</td><td>2.</td></tr>\n<tr><td>0.0159</td><td>-0.0504068904934</td><td>2.</td></tr>\n<tr><td>…</td><td>…</td><td>…</td></tr>\n</tbody>\n</table>\n<p><small>72 rows × 3 columns</small></p>\n<!-- out:stdout -->\n);\n   (3,\n    \n<!-- out:display text/html 
-->\n<table>\n<thead><tr><th>z</th><th>res</th><th>bin</th></tr></thead>\n<tbody>\n<tr><td>0.01947</td><td>0.148326584074</td><td>3.</td></tr>\n<tr><td>0.01947</td><td>0.182126584074</td><td>3.</td></tr>\n<tr><td>0.01975</td><td>0.0993213367396</td><td>3.</td></tr>\n<tr><td>0.01975</td><td>0.0817213367396</td><td>3.</td></tr>\n<tr><td>0.01976</td><td>0.0277114394619</td><td>3.</td></tr>\n<tr><td>0.01995</td><td>-0.31497156999</td><td>3.</td></tr>\n<tr><td>0.02001</td><td>-0.000956680793522</td><td>3.</td></tr>\n<tr><td>0.02006</td><td>-0.17972935267</td><td>3.</td></tr>\n<tr><td>0.02019</td><td>-0.198795323932</td><td>3.</td></tr>\n<tr><td>0.02019</td><td>-0.00399532393241</td><td>3.</td></tr>\n<tr><td>0.02023</td><td>-0.110835916616</td><td>3.</td></tr>\n<tr><td>0.02023</td><td>-0.209135916616</td><td>3.</td></tr>\n<tr><td>0.02023</td><td>-0.134335916616</td><td>3.</td></tr>\n<tr><td>0.02023</td><td>-0.0621359166155</td><td>3.</td></tr>\n<tr><td>0.02024</td><td>0.287380263147</td><td>3.</td></tr>\n<tr><td>0.02034</td><td>0.0942711314822</td><td>3.</td></tr>\n<tr><td>0.02034</td><td>-0.147328868518</td><td>3.</td></tr>\n<tr><td>0.02035</td><td>-0.152406886052</td><td>3.</td></tr>\n<tr><td>0.02035</td><td>-0.157706886052</td><td>3.</td></tr>\n<tr><td>0.02035</td><td>-0.230206886052</td><td>3.</td></tr>\n<tr><td>…</td><td>…</td><td>…</td></tr>\n</tbody>\n</table>\n<p><small>92 rows × 3 columns</small></p>\n<!-- out:stdout -->\n);\n   (4,\n    \n<!-- out:display text/html 
-->\n<table>\n<thead><tr><th>z</th><th>res</th><th>bin</th></tr></thead>\n<tbody>\n<tr><td>0.02388</td><td>-0.0795275874192</td><td>4.</td></tr>\n<tr><td>0.0239</td><td>-0.187266827175</td><td>4.</td></tr>\n<tr><td>0.0239</td><td>-0.0822668271749</td><td>4.</td></tr>\n<tr><td>0.02391</td><td>-0.177885876584</td><td>4.</td></tr>\n<tr><td>0.02401</td><td>0.0154444709488</td><td>4.</td></tr>\n<tr><td>0.02411</td><td>0.00941249193803</td><td>4.</td></tr>\n<tr><td>0.02411</td><td>-0.624387508062</td><td>4.</td></tr>\n<tr><td>0.02412</td><td>-0.218198645962</td><td>4.</td></tr>\n<tr><td>0.02417</td><td>-0.158848742089</td><td>4.</td></tr>\n<tr><td>0.02417</td><td>-0.0169487420891</td><td>4.</td></tr>\n<tr><td>0.02428</td><td>0.0693737074972</td><td>4.</td></tr>\n<tr><td>0.02429</td><td>-0.0347311260491</td><td>4.</td></tr>\n<tr><td>0.02432</td><td>-0.127443419316</td><td>4.</td></tr>\n<tr><td>0.02432</td><td>-0.161543419316</td><td>4.</td></tr>\n<tr><td>0.02432</td><td>0.117156580684</td><td>4.</td></tr>\n<tr><td>0.02434</td><td>-0.106149778375</td><td>4.</td></tr>\n<tr><td>0.02453</td><td>-0.125437393885</td><td>4.</td></tr>\n<tr><td>0.02453</td><td>-0.0588373938845</td><td>4.</td></tr>\n<tr><td>0.02453</td><td>-0.184137393885</td><td>4.</td></tr>\n<tr><td>0.02457</td><td>0.0413818842716</td><td>4.</td></tr>\n<tr><td>…</td><td>…</td><td>…</td></tr>\n</tbody>\n</table>\n<p><small>112 rows × 3 columns</small></p>\n<!-- out:stdout -->\n);\n   (5,\n    \n<!-- out:display text/html 
-->\n<table>\n<thead><tr><th>z</th><th>res</th><th>bin</th></tr></thead>\n<tbody>\n<tr><td>0.02969</td><td>-0.275399367382</td><td>5.</td></tr>\n<tr><td>0.02978</td><td>-0.058667628548</td><td>5.</td></tr>\n<tr><td>0.02978</td><td>-0.048567628548</td><td>5.</td></tr>\n<tr><td>0.0299</td><td>-0.133627805804</td><td>5.</td></tr>\n<tr><td>0.02996</td><td>0.287555242197</td><td>5.</td></tr>\n<tr><td>0.03012</td><td>-0.0656808035682</td><td>5.</td></tr>\n<tr><td>0.03012</td><td>-0.101380803568</td><td>5.</td></tr>\n<tr><td>0.03012</td><td>-0.109080803568</td><td>5.</td></tr>\n<tr><td>0.03023</td><td>-0.0747137430286</td><td>5.</td></tr>\n<tr><td>0.03031</td><td>-0.0448378058181</td><td>5.</td></tr>\n<tr><td>0.03036</td><td>-0.122370156394</td><td>5.</td></tr>\n<tr><td>0.03047</td><td>-0.0761406185614</td><td>5.</td></tr>\n<tr><td>0.03059</td><td>-0.260703391037</td><td>5.</td></tr>\n<tr><td>0.03075</td><td>-0.278601806314</td><td>5.</td></tr>\n<tr><td>0.03076</td><td>-0.0231184984297</td><td>5.</td></tr>\n<tr><td>0.03083</td><td>-0.171728924107</td><td>5.</td></tr>\n<tr><td>0.03086</td><td>-0.222272818751</td><td>5.</td></tr>\n<tr><td>0.03091</td><td>-0.149641417105</td><td>5.</td></tr>\n<tr><td>0.03096</td><td>0.00229566781329</td><td>5.</td></tr>\n<tr><td>0.03108</td><td>0.26806774982</td><td>5.</td></tr>\n<tr><td>…</td><td>…</td><td>…</td></tr>\n</tbody>\n</table>\n<p><small>99 rows × 3 columns</small></p>\n<!-- out:stdout -->\n);\n   (6,\n    \n<!-- out:display text/html 
-->\n<table>\n<thead><tr><th>z</th><th>res</th><th>bin</th></tr></thead>\n<tbody>\n<tr><td>0.03697</td><td>0.037370558993</td><td>6.</td></tr>\n<tr><td>0.03702</td><td>-0.234817279977</td><td>6.</td></tr>\n<tr><td>0.03702</td><td>-0.111717279977</td><td>6.</td></tr>\n<tr><td>0.03702</td><td>0.0290827200227</td><td>6.</td></tr>\n<tr><td>0.03702</td><td>0.0395827200227</td><td>6.</td></tr>\n<tr><td>0.03705</td><td>-0.0785080806863</td><td>6.</td></tr>\n<tr><td>0.03707</td><td>0.00409884352897</td><td>6.</td></tr>\n<tr><td>0.03725</td><td>-0.047410467157</td><td>6.</td></tr>\n<tr><td>0.03725</td><td>-0.061410467157</td><td>6.</td></tr>\n<tr><td>0.0373</td><td>-0.187276253133</td><td>6.</td></tr>\n<tr><td>0.0374</td><td>-0.0840961258395</td><td>6.</td></tr>\n<tr><td>0.03753</td><td>-0.121668755977</td><td>6.</td></tr>\n<tr><td>0.03756</td><td>0.260864344984</td><td>6.</td></tr>\n<tr><td>0.03756</td><td>0.146064344984</td><td>6.</td></tr>\n<tr><td>0.03787</td><td>-0.0843128672071</td><td>6.</td></tr>\n<tr><td>0.0379</td><td>-0.0299641891915</td><td>6.</td></tr>\n<tr><td>0.03796</td><td>-0.00696275219823</td><td>6.</td></tr>\n<tr><td>0.03818</td><td>-0.214844515711</td><td>6.</td></tr>\n<tr><td>0.03818</td><td>-0.246944515711</td><td>6.</td></tr>\n<tr><td>0.03828</td><td>-0.00033051522886</td><td>6.</td></tr>\n<tr><td>…</td><td>…</td><td>…</td></tr>\n</tbody>\n</table>\n<p><small>61 rows × 3 columns</small></p>\n<!-- out:stdout -->\n);\n   (7,\n    \n<!-- out:display text/html 
-->\n<table>\n<thead><tr><th>z</th><th>res</th><th>bin</th></tr></thead>\n<tbody>\n<tr><td>0.0459</td><td>-0.135594053043</td><td>7.</td></tr>\n<tr><td>0.04625</td><td>-0.269958777154</td><td>7.</td></tr>\n<tr><td>0.04631</td><td>0.00296267259202</td><td>7.</td></tr>\n<tr><td>0.04643</td><td>-0.217483496434</td><td>7.</td></tr>\n<tr><td>0.04656</td><td>-0.0402921381081</td><td>7.</td></tr>\n<tr><td>0.04664</td><td>-0.0809044161309</td><td>7.</td></tr>\n<tr><td>0.04682</td><td>-0.0821587037548</td><td>7.</td></tr>\n<tr><td>0.04682</td><td>-0.0750587037548</td><td>7.</td></tr>\n<tr><td>0.04691</td><td>0.193176209883</td><td>7.</td></tr>\n<tr><td>0.04738</td><td>-0.0229677846992</td><td>7.</td></tr>\n<tr><td>0.0476</td><td>0.00344066051645</td><td>7.</td></tr>\n<tr><td>0.04777</td><td>-0.010780094188</td><td>7.</td></tr>\n<tr><td>0.04777</td><td>-0.042680094188</td><td>7.</td></tr>\n<tr><td>0.04819</td><td>-0.127231460348</td><td>7.</td></tr>\n<tr><td>0.04837</td><td>-0.259617069447</td><td>7.</td></tr>\n<tr><td>0.0486</td><td>-0.313360481081</td><td>7.</td></tr>\n<tr><td>0.04865</td><td>-0.00494607201814</td><td>7.</td></tr>\n<tr><td>0.04934</td><td>-0.0775548977512</td><td>7.</td></tr>\n<tr><td>0.0494</td><td>-0.21085715033</td><td>7.</td></tr>\n<tr><td>0.04944</td><td>-0.0989568708902</td><td>7.</td></tr>\n<tr><td>…</td><td>…</td><td>…</td></tr>\n</tbody>\n</table>\n<p><small>47 rows × 3 columns</small></p>\n<!-- out:stdout -->\n);\n   (8,\n    \n<!-- out:display text/html 
-->\n<table>\n<thead><tr><th>z</th><th>res</th><th>bin</th></tr></thead>\n<tbody>\n<tr><td>0.05708</td><td>-0.0115206686667</td><td>8.</td></tr>\n<tr><td>0.05728</td><td>0.0590741402491</td><td>8.</td></tr>\n<tr><td>0.05824</td><td>-0.064625197056</td><td>8.</td></tr>\n<tr><td>0.05824</td><td>-0.114325197056</td><td>8.</td></tr>\n<tr><td>0.0583</td><td>-0.0945240955029</td><td>8.</td></tr>\n<tr><td>0.05886</td><td>-0.0901701116884</td><td>8.</td></tr>\n<tr><td>0.05886</td><td>0.0756298883116</td><td>8.</td></tr>\n<tr><td>0.05974</td><td>-0.803417802658</td><td>8.</td></tr>\n<tr><td>0.06092</td><td>-0.115228066137</td><td>8.</td></tr>\n<tr><td>0.06099</td><td>0.0133048886582</td><td>8.</td></tr>\n<tr><td>0.06099</td><td>0.0427048886582</td><td>8.</td></tr>\n<tr><td>0.06121</td><td>-0.0195443596127</td><td>8.</td></tr>\n<tr><td>0.06137</td><td>-0.202880711986</td><td>8.</td></tr>\n<tr><td>0.06137</td><td>-0.194580711986</td><td>8.</td></tr>\n<tr><td>0.06153</td><td>-0.0948022912486</td><td>8.</td></tr>\n<tr><td>0.06372</td><td>0.103960473042</td><td>8.</td></tr>\n<tr><td>0.06384</td><td>-0.232250654126</td><td>8.</td></tr>\n<tr><td>0.06446</td><td>-0.194986434258</td><td>8.</td></tr>\n<tr><td>0.06533</td><td>-0.218508110773</td><td>8.</td></tr>\n<tr><td>0.06627</td><td>0.0260876633256</td><td>8.</td></tr>\n<tr><td>…</td><td>…</td><td>…</td></tr>\n</tbody>\n</table>\n<p><small>32 rows × 3 columns</small></p>\n<!-- out:stdout -->\n);\n   (9,\n    \n<!-- out:display text/html 
-->\n<table>\n<thead><tr><th>z</th><th>res</th><th>bin</th></tr></thead>\n<tbody>\n<tr><td>0.07089</td><td>-0.0324754668871</td><td>9.</td></tr>\n<tr><td>0.0709</td><td>0.242007811196</td><td>9.</td></tr>\n<tr><td>0.07091</td><td>-0.116208867472</td><td>9.</td></tr>\n<tr><td>0.07116</td><td>-0.140111813817</td><td>9.</td></tr>\n<tr><td>0.07158</td><td>-0.259128451072</td><td>9.</td></tr>\n<tr><td>0.07167</td><td>-0.0958508194342</td><td>9.</td></tr>\n<tr><td>0.07193</td><td>0.0423148998275</td><td>9.</td></tr>\n<tr><td>0.07222</td><td>0.103375550607</td><td>9.</td></tr>\n<tr><td>0.07252</td><td>0.0846613871215</td><td>9.</td></tr>\n<tr><td>0.07393</td><td>-0.0276218037538</td><td>9.</td></tr>\n<tr><td>0.0744</td><td>-0.16077227092</td><td>9.</td></tr>\n<tr><td>0.07446</td><td>-0.147085210939</td><td>9.</td></tr>\n<tr><td>0.0752</td><td>-0.143129423208</td><td>9.</td></tr>\n<tr><td>0.0752</td><td>-0.0538294232081</td><td>9.</td></tr>\n<tr><td>0.0756</td><td>0.182834610902</td><td>9.</td></tr>\n<tr><td>0.07575</td><td>-0.0897256482245</td><td>9.</td></tr>\n<tr><td>0.07588</td><td>-0.00548430816918</td><td>9.</td></tr>\n<tr><td>0.07845</td><td>-0.143184159753</td><td>9.</td></tr>\n<tr><td>0.07859</td><td>0.112798691001</td><td>9.</td></tr>\n<tr><td>0.07875</td><td>0.292116111885</td><td>9.</td></tr>\n<tr><td>…</td><td>…</td><td>…</td></tr>\n</tbody>\n</table>\n<p><small>33 rows × 3 columns</small></p>\n<!-- out:stdout -->\n);\n   (10,\n    \n<!-- out:display text/html 
-->\n<table>\n<thead><tr><th>z</th><th>res</th><th>bin</th></tr></thead>\n<tbody>\n<tr><td>0.0887</td><td>-0.214759965462</td><td>10.</td></tr>\n<tr><td>0.09039</td><td>-0.114490123038</td><td>10.</td></tr>\n<tr><td>0.09089</td><td>-0.0202850975671</td><td>10.</td></tr>\n<tr><td>0.09205</td><td>0.0842789146355</td><td>10.</td></tr>\n<tr><td>0.09293</td><td>0.125110163521</td><td>10.</td></tr>\n<tr><td>0.0995</td><td>0.185107403999</td><td>10.</td></tr>\n<tr><td>0.10165</td><td>-0.00302391994575</td><td>10.</td></tr>\n<tr><td>0.10221</td><td>0.0947708510874</td><td>10.</td></tr>\n<tr><td>0.10246</td><td>-0.0681907017381</td><td>10.</td></tr>\n<tr><td>0.10294</td><td>-0.148432610935</td><td>10.</td></tr>\n<tr><td>0.10361</td><td>-0.0109079016729</td><td>10.</td></tr>\n<tr><td>0.10374</td><td>-0.139664168122</td><td>10.</td></tr>\n<tr><td>0.10507</td><td>0.0529089143211</td><td>10.</td></tr>\n<tr><td>0.10661</td><td>-0.148665966028</td><td>10.</td></tr>\n<tr><td>0.10707</td><td>0.0428133675802</td><td>10.</td></tr>\n<tr><td>0.10711</td><td>0.00906130066566</td><td>10.</td></tr>\n<tr><td>0.10713</td><td>0.359235381084</td><td>10.</td></tr>\n<tr><td>0.10774</td><td>0.064581151469</td><td>10.</td></tr>\n<tr><td>0.10794</td><td>-0.0363509057905</td><td>10.</td></tr>\n<tr><td>0.10908</td><td>-0.0604317214155</td><td>10.</td></tr>\n</tbody>\n</table>\n<p><small>20 rows × 3 columns</small></p>\n<!-- out:stdout -->\n);\n   (11,\n    \n<!-- out:display text/html 
-->\n<table>\n<thead><tr><th>z</th><th>res</th><th>bin</th></tr></thead>\n<tbody>\n<tr><td>0.11001</td><td>0.0886813554461</td><td>11.</td></tr>\n<tr><td>0.11259</td><td>0.0870049833038</td><td>11.</td></tr>\n<tr><td>0.11388</td><td>-0.172851036977</td><td>11.</td></tr>\n<tr><td>0.1165</td><td>-0.0717173699591</td><td>11.</td></tr>\n<tr><td>0.11653</td><td>0.0170929257421</td><td>11.</td></tr>\n<tr><td>0.1176</td><td>-0.0613460214423</td><td>11.</td></tr>\n<tr><td>0.11792</td><td>-0.031472958122</td><td>11.</td></tr>\n<tr><td>0.11818</td><td>-0.229620528577</td><td>11.</td></tr>\n<tr><td>0.11901</td><td>-0.0984636003687</td><td>11.</td></tr>\n<tr><td>0.12014</td><td>-0.0872353293955</td><td>11.</td></tr>\n<tr><td>0.12058</td><td>0.101178454287</td><td>11.</td></tr>\n<tr><td>0.12086</td><td>-0.0619431118695</td><td>11.</td></tr>\n<tr><td>0.12207</td><td>-0.125106114742</td><td>11.</td></tr>\n<tr><td>0.12231</td><td>-0.112215348847</td><td>11.</td></tr>\n<tr><td>0.12278</td><td>-0.0476216623284</td><td>11.</td></tr>\n<tr><td>0.12316</td><td>-0.143318307083</td><td>11.</td></tr>\n<tr><td>0.12357</td><td>-0.13125195377</td><td>11.</td></tr>\n<tr><td>0.12377</td><td>0.0495330304242</td><td>11.</td></tr>\n<tr><td>0.12383</td><td>0.0717196359588</td><td>11.</td></tr>\n<tr><td>0.12393</td><td>-0.103934884938</td><td>11.</td></tr>\n<tr><td>…</td><td>…</td><td>…</td></tr>\n</tbody>\n</table>\n<p><small>38 rows × 3 columns</small></p>\n<!-- out:stdout -->\n);\n   (12,\n    \n<!-- out:display text/html 
-->\n<table>\n<thead><tr><th>z</th><th>res</th><th>bin</th></tr></thead>\n<tbody>\n<tr><td>0.13614</td><td>0.00634512405369</td><td>12.</td></tr>\n<tr><td>0.13658</td><td>-0.130706237119</td><td>12.</td></tr>\n<tr><td>0.137</td><td>0.0242022119563</td><td>12.</td></tr>\n<tr><td>0.13713</td><td>-0.0258886328085</td><td>12.</td></tr>\n<tr><td>0.13745</td><td>0.35822685965</td><td>12.</td></tr>\n<tr><td>0.13822</td><td>-0.12008128119</td><td>12.</td></tr>\n<tr><td>0.13826</td><td>-0.0561499783706</td><td>12.</td></tr>\n<tr><td>0.1384</td><td>0.0141110181508</td><td>12.</td></tr>\n<tr><td>0.13851</td><td>0.0294747952992</td><td>12.</td></tr>\n<tr><td>0.13875</td><td>-0.0147267381425</td><td>12.</td></tr>\n<tr><td>0.1388</td><td>0.0312404310394</td><td>12.</td></tr>\n<tr><td>0.13955</td><td>-0.0066181867595</td><td>12.</td></tr>\n<tr><td>0.14082</td><td>0.128628511143</td><td>12.</td></tr>\n<tr><td>0.14104</td><td>0.231016921037</td><td>12.</td></tr>\n<tr><td>0.14123</td><td>-0.0148979084376</td><td>12.</td></tr>\n<tr><td>0.14134</td><td>0.0663005741243</td><td>12.</td></tr>\n<tr><td>0.14325</td><td>-0.0414714699277</td><td>12.</td></tr>\n<tr><td>0.14345</td><td>0.423697521</td><td>12.</td></tr>\n<tr><td>0.14359</td><td>0.0195383381869</td><td>12.</td></tr>\n<tr><td>0.14404</td><td>-0.0256092965459</td><td>12.</td></tr>\n<tr><td>…</td><td>…</td><td>…</td></tr>\n</tbody>\n</table>\n<p><small>64 rows × 3 columns</small></p>\n<!-- out:stdout -->\n);\n   (13,\n    \n<!-- out:display text/html 
-->\n<table>\n<thead><tr><th>z</th><th>res</th><th>bin</th></tr></thead>\n<tbody>\n<tr><td>0.16924</td><td>0.0389733472685</td><td>13.</td></tr>\n<tr><td>0.16971</td><td>0.189083761363</td><td>13.</td></tr>\n<tr><td>0.17042</td><td>-0.0185879074946</td><td>13.</td></tr>\n<tr><td>0.17124</td><td>0.0383736768909</td><td>13.</td></tr>\n<tr><td>0.17169</td><td>-0.0498724221099</td><td>13.</td></tr>\n<tr><td>0.17256</td><td>-0.0199123832614</td><td>13.</td></tr>\n<tr><td>0.1727</td><td>-0.25631246117</td><td>13.</td></tr>\n<tr><td>0.17297</td><td>0.19872715637</td><td>13.</td></tr>\n<tr><td>0.17331</td><td>-0.145074641558</td><td>13.</td></tr>\n<tr><td>0.17374</td><td>0.00231747862402</td><td>13.</td></tr>\n<tr><td>0.17378</td><td>-0.189222107681</td><td>13.</td></tr>\n<tr><td>0.17392</td><td>0.0581902517402</td><td>13.</td></tr>\n<tr><td>0.17417</td><td>0.0296229858146</td><td>13.</td></tr>\n<tr><td>0.1742</td><td>-0.212480783249</td><td>13.</td></tr>\n<tr><td>0.17438</td><td>-0.0413020372001</td><td>13.</td></tr>\n<tr><td>0.17443</td><td>0.00732580573651</td><td>13.</td></tr>\n<tr><td>0.17444</td><td>-0.062508604123</td><td>13.</td></tr>\n<tr><td>0.17498</td><td>0.188743907993</td><td>13.</td></tr>\n<tr><td>0.17666</td><td>-0.0606712840265</td><td>13.</td></tr>\n<tr><td>0.17713</td><td>-0.0738066505178</td><td>13.</td></tr>\n<tr><td>…</td><td>…</td><td>…</td></tr>\n</tbody>\n</table>\n<p><small>117 rows × 3 columns</small></p>\n<!-- out:stdout -->\n);\n   (14,\n    \n<!-- out:display text/html 
-->\n<table>\n<thead><tr><th>z</th><th>res</th><th>bin</th></tr></thead>\n<tbody>\n<tr><td>0.21037</td><td>-0.53515754654</td><td>14.</td></tr>\n<tr><td>0.21084</td><td>-0.031562219736</td><td>14.</td></tr>\n<tr><td>0.21095</td><td>-0.190002164661</td><td>14.</td></tr>\n<tr><td>0.21114</td><td>-0.0429424845705</td><td>14.</td></tr>\n<tr><td>0.21134</td><td>0.14110646419</td><td>14.</td></tr>\n<tr><td>0.21174</td><td>-0.0372897541268</td><td>14.</td></tr>\n<tr><td>0.212</td><td>-0.0942081001471</td><td>14.</td></tr>\n<tr><td>0.21225</td><td>-0.0732110933607</td><td>14.</td></tr>\n<tr><td>0.2135</td><td>-0.00278059242427</td><td>14.</td></tr>\n<tr><td>0.21365</td><td>-0.0178518660398</td><td>14.</td></tr>\n<tr><td>0.21398</td><td>-0.142524867022</td><td>14.</td></tr>\n<tr><td>0.2144</td><td>-0.0166920577869</td><td>14.</td></tr>\n<tr><td>0.21507</td><td>-0.159519939026</td><td>14.</td></tr>\n<tr><td>0.21521</td><td>0.420530657906</td><td>14.</td></tr>\n<tr><td>0.21578</td><td>0.175431938785</td><td>14.</td></tr>\n<tr><td>0.2165</td><td>-0.232502482431</td><td>14.</td></tr>\n<tr><td>0.21689</td><td>0.00510984041876</td><td>14.</td></tr>\n<tr><td>0.21692</td><td>0.0307803130448</td><td>14.</td></tr>\n<tr><td>0.21742</td><td>0.0715943550243</td><td>14.</td></tr>\n<tr><td>0.21794</td><td>-0.0793987415813</td><td>14.</td></tr>\n<tr><td>…</td><td>…</td><td>…</td></tr>\n</tbody>\n</table>\n<p><small>142 rows × 3 columns</small></p>\n<!-- out:stdout -->\n);\n   (15,\n    \n<!-- out:display text/html 
-->\n<table>\n<thead><tr><th>z</th><th>res</th><th>bin</th></tr></thead>\n<tbody>\n<tr><td>0.26141</td><td>-0.063389305081</td><td>15.</td></tr>\n<tr><td>0.26162</td><td>-0.0941332792955</td><td>15.</td></tr>\n<tr><td>0.26172</td><td>0.174141517223</td><td>15.</td></tr>\n<tr><td>0.26173</td><td>0.16764901455</td><td>15.</td></tr>\n<tr><td>0.26175</td><td>-0.121535981157</td><td>15.</td></tr>\n<tr><td>0.26184</td><td>0.179131697137</td><td>15.</td></tr>\n<tr><td>0.262</td><td>0.205752656004</td><td>15.</td></tr>\n<tr><td>0.26303</td><td>0.738750935002</td><td>15.</td></tr>\n<tr><td>0.26323</td><td>0.111609861946</td><td>15.</td></tr>\n<tr><td>0.2636</td><td>-0.130892774852</td><td>15.</td></tr>\n<tr><td>0.26393</td><td>0.0251761002073</td><td>15.</td></tr>\n<tr><td>0.26397</td><td>-0.230491074865</td><td>15.</td></tr>\n<tr><td>0.26408</td><td>0.0991994542632</td><td>15.</td></tr>\n<tr><td>0.26419</td><td>-0.0286096346745</td><td>15.</td></tr>\n<tr><td>0.2646</td><td>-0.0244674248267</td><td>15.</td></tr>\n<tr><td>0.26463</td><td>0.131857822634</td><td>15.</td></tr>\n<tr><td>0.26582</td><td>0.0577820584371</td><td>15.</td></tr>\n<tr><td>0.26583</td><td>-0.226609147044</td><td>15.</td></tr>\n<tr><td>0.2664</td><td>-0.266102716605</td><td>15.</td></tr>\n<tr><td>0.267</td><td>-0.0153587433182</td><td>15.</td></tr>\n<tr><td>…</td><td>…</td><td>…</td></tr>\n</tbody>\n</table>\n<p><small>150 rows × 3 columns</small></p>\n<!-- out:stdout -->\n);\n   (16,\n    \n<!-- out:display text/html 
-->\n<table>\n<thead><tr><th>z</th><th>res</th><th>bin</th></tr></thead>\n<tbody>\n<tr><td>0.32548</td><td>0.180663773592</td><td>16.</td></tr>\n<tr><td>0.3256</td><td>0.214952098223</td><td>16.</td></tr>\n<tr><td>0.3258</td><td>0.114833307567</td><td>16.</td></tr>\n<tr><td>0.32581</td><td>-0.00134261005185</td><td>16.</td></tr>\n<tr><td>0.32632</td><td>-0.00261164522524</td><td>16.</td></tr>\n<tr><td>0.32804</td><td>-0.0658203680861</td><td>16.</td></tr>\n<tr><td>0.32842</td><td>0.23901384648</td><td>16.</td></tr>\n<tr><td>0.32848</td><td>0.281661625296</td><td>16.</td></tr>\n<tr><td>0.32851</td><td>-0.135564457581</td><td>16.</td></tr>\n<tr><td>0.32868</td><td>0.000754754949696</td><td>16.</td></tr>\n<tr><td>0.32868</td><td>0.0291547549497</td><td>16.</td></tr>\n<tr><td>0.32871</td><td>-0.143471204838</td><td>16.</td></tr>\n<tr><td>0.32907</td><td>-0.017181284091</td><td>16.</td></tr>\n<tr><td>0.32941</td><td>0.334561631049</td><td>16.</td></tr>\n<tr><td>0.32952</td><td>0.161934844394</td><td>16.</td></tr>\n<tr><td>0.32968</td><td>0.0247326862566</td><td>16.</td></tr>\n<tr><td>0.32995</td><td>-0.235994772669</td><td>16.</td></tr>\n<tr><td>0.33047</td><td>0.146304669334</td><td>16.</td></tr>\n<tr><td>0.33056</td><td>0.082430130099</td><td>16.</td></tr>\n<tr><td>0.33063</td><td>0.0834056020207</td><td>16.</td></tr>\n<tr><td>…</td><td>…</td><td>…</td></tr>\n</tbody>\n</table>\n<p><small>130 rows × 3 columns</small></p>\n<!-- out:stdout -->\n);\n   (17,\n    \n<!-- out:display text/html 
-->\n<table>\n<thead><tr><th>z</th><th>res</th><th>bin</th></tr></thead>\n<tbody>\n<tr><td>0.40368</td><td>0.0305227159842</td><td>17.</td></tr>\n<tr><td>0.40463</td><td>0.0832671563379</td><td>17.</td></tr>\n<tr><td>0.40483</td><td>-0.255385075024</td><td>17.</td></tr>\n<tr><td>0.4055</td><td>0.183823918891</td><td>17.</td></tr>\n<tr><td>0.40646</td><td>0.0885295187297</td><td>17.</td></tr>\n<tr><td>0.40895</td><td>0.0818394810777</td><td>17.</td></tr>\n<tr><td>0.4092</td><td>0.120588849089</td><td>17.</td></tr>\n<tr><td>0.40935</td><td>0.0852588703231</td><td>17.</td></tr>\n<tr><td>0.40949</td><td>0.0566911608651</td><td>17.</td></tr>\n<tr><td>0.41004</td><td>0.0584848294338</td><td>17.</td></tr>\n<tr><td>0.41123</td><td>0.0295285142735</td><td>17.</td></tr>\n<tr><td>0.4114</td><td>0.199979142478</td><td>17.</td></tr>\n<tr><td>0.41161</td><td>0.178283386575</td><td>17.</td></tr>\n<tr><td>0.41266</td><td>0.148013322853</td><td>17.</td></tr>\n<tr><td>0.41657</td><td>0.0642467660022</td><td>17.</td></tr>\n<tr><td>0.41857</td><td>-0.0600359425703</td><td>17.</td></tr>\n<tr><td>0.41936</td><td>0.00406598556371</td><td>17.</td></tr>\n<tr><td>0.41939</td><td>-0.222016063038</td><td>17.</td></tr>\n<tr><td>0.4196</td><td>0.182509917202</td><td>17.</td></tr>\n<tr><td>0.41965</td><td>0.114106661782</td><td>17.</td></tr>\n<tr><td>…</td><td>…</td><td>…</td></tr>\n</tbody>\n</table>\n<p><small>97 rows × 3 columns</small></p>\n<!-- out:stdout -->\n);\n   (18,\n    \n<!-- out:display text/html 
-->\n<table>\n<thead><tr><th>z</th><th>res</th><th>bin</th></tr></thead>\n<tbody>\n<tr><td>0.50282</td><td>0.0115974630251</td><td>18.</td></tr>\n<tr><td>0.50285</td><td>-0.0792578980026</td><td>18.</td></tr>\n<tr><td>0.50306</td><td>-0.00224520003424</td><td>18.</td></tr>\n<tr><td>0.50316</td><td>-0.00676282447449</td><td>18.</td></tr>\n<tr><td>0.50593</td><td>-0.0864656605981</td><td>18.</td></tr>\n<tr><td>0.50615</td><td>-0.0622987115987</td><td>18.</td></tr>\n<tr><td>0.50725</td><td>0.0635424328218</td><td>18.</td></tr>\n<tr><td>0.50739</td><td>0.0689229785269</td><td>18.</td></tr>\n<tr><td>0.50825</td><td>-0.15809275328</td><td>18.</td></tr>\n<tr><td>0.51016</td><td>-0.0926766588196</td><td>18.</td></tr>\n<tr><td>0.51095</td><td>-0.132914123298</td><td>18.</td></tr>\n<tr><td>0.51169</td><td>0.193508853341</td><td>18.</td></tr>\n<tr><td>0.51387</td><td>0.0506093976964</td><td>18.</td></tr>\n<tr><td>0.51437</td><td>0.0172694047828</td><td>18.</td></tr>\n<tr><td>0.51469</td><td>0.11564493179</td><td>18.</td></tr>\n<tr><td>0.51726</td><td>-0.130069979609</td><td>18.</td></tr>\n<tr><td>0.51883</td><td>-0.0362931904153</td><td>18.</td></tr>\n<tr><td>0.51885</td><td>-0.0488939890317</td><td>18.</td></tr>\n<tr><td>0.51941</td><td>0.00968501476334</td><td>18.</td></tr>\n<tr><td>0.51968</td><td>0.0550258324352</td><td>18.</td></tr>\n<tr><td>…</td><td>…</td><td>…</td></tr>\n</tbody>\n</table>\n<p><small>96 rows × 3 columns</small></p>\n<!-- out:stdout -->\n);\n   (19,\n    \n<!-- out:display text/html 
-->\n<table>\n<thead><tr><th>z</th><th>res</th><th>bin</th></tr></thead>\n<tbody>\n<tr><td>0.62525</td><td>-0.0676927812368</td><td>19.</td></tr>\n<tr><td>0.62725</td><td>-0.148965950221</td><td>19.</td></tr>\n<tr><td>0.63077</td><td>-0.0857981161031</td><td>19.</td></tr>\n<tr><td>0.63183</td><td>-0.442410797453</td><td>19.</td></tr>\n<tr><td>0.63225</td><td>0.094002948885</td><td>19.</td></tr>\n<tr><td>0.63399</td><td>-0.165486457732</td><td>19.</td></tr>\n<tr><td>0.63777</td><td>-0.248879776508</td><td>19.</td></tr>\n<tr><td>0.63794</td><td>0.0843028521275</td><td>19.</td></tr>\n<tr><td>0.63824</td><td>-0.121262699059</td><td>19.</td></tr>\n<tr><td>0.63873</td><td>-0.00232867361532</td><td>19.</td></tr>\n<tr><td>0.63934</td><td>0.00470128976596</td><td>19.</td></tr>\n<tr><td>0.64185</td><td>0.0517482101598</td><td>19.</td></tr>\n<tr><td>0.64311</td><td>-0.440236071775</td><td>19.</td></tr>\n<tr><td>0.64371</td><td>-0.00444929008427</td><td>19.</td></tr>\n<tr><td>0.64371</td><td>-0.000249290084263</td><td>19.</td></tr>\n<tr><td>0.6477</td><td>-0.00131150843364</td><td>19.</td></tr>\n<tr><td>0.64852</td><td>0.215975033203</td><td>19.</td></tr>\n<tr><td>0.6487</td><td>-0.0909737694835</td><td>19.</td></tr>\n<tr><td>0.64962</td><td>-0.0477982164029</td><td>19.</td></tr>\n<tr><td>0.66213</td><td>0.0994509027894</td><td>19.</td></tr>\n<tr><td>…</td><td>…</td><td>…</td></tr>\n</tbody>\n</table>\n<p><small>76 rows × 3 columns</small></p>\n<!-- out:stdout -->\n);\n   (20,\n    \n<!-- out:display text/html 
-->\n<table>\n<thead><tr><th>z</th><th>res</th><th>bin</th></tr></thead>\n<tbody>\n<tr><td>0.77929</td><td>-0.286728045808</td><td>20.</td></tr>\n<tr><td>0.78807</td><td>-0.0531352325948</td><td>20.</td></tr>\n<tr><td>0.78907</td><td>0.0228403967112</td><td>20.</td></tr>\n<tr><td>0.78928</td><td>-0.162099242024</td><td>20.</td></tr>\n<tr><td>0.79662</td><td>-0.484747596675</td><td>20.</td></tr>\n<tr><td>0.79863</td><td>0.039436351798</td><td>20.</td></tr>\n<tr><td>0.83981</td><td>-0.174927172808</td><td>20.</td></tr>\n<tr><td>0.83981</td><td>-0.333727172808</td><td>20.</td></tr>\n<tr><td>0.85482</td><td>0.0320794309533</td><td>20.</td></tr>\n<tr><td>0.93585</td><td>-0.308388644025</td><td>20.</td></tr>\n</tbody>\n</table>\n<p><small>10 rows × 3 columns</small></p>\n<!-- out:stdout -->\n);\n   (21,\n    \n<!-- out:display text/html -->\n<table>\n<thead><tr><th>z</th><th>res</th><th>bin</th></tr></thead>\n<tbody>\n<tr><td>0.97423</td><td>0.154450661289</td><td>21.</td></tr>\n<tr><td>1.01242</td><td>0.0501694564944</td><td>21.</td></tr>\n<tr><td>1.01988</td><td>0.186919560833</td><td>21.</td></tr>\n<tr><td>1.02088</td><td>-0.121019067405</td><td>21.</td></tr>\n<tr><td>1.02789</td><td>0.42494712992</td><td>21.</td></tr>\n<tr><td>1.04817</td><td>0.254697417559</td><td>21.</td></tr>\n<tr><td>1.12092</td><td>0.0759845634931</td><td>21.</td></tr>\n</tbody>\n</table>\n<p><small>7 rows × 3 columns</small></p>\n<!-- out:stdout -->\n);\n   (22,\n    \n<!-- out:display text/html 
-->\n<table>\n<thead><tr><th>z</th><th>res</th><th>bin</th></tr></thead>\n<tbody>\n<tr><td>1.23225</td><td>-0.324587237938</td><td>22.</td></tr>\n<tr><td>1.23597</td><td>0.250701908425</td><td>22.</td></tr>\n<tr><td>1.29911</td><td>0.0602023436219</td><td>22.</td></tr>\n<tr><td>1.3041</td><td>-0.0948606170644</td><td>22.</td></tr>\n<tr><td>1.30611</td><td>0.199592161811</td><td>22.</td></tr>\n<tr><td>1.31317</td><td>-0.0496838731239</td><td>22.</td></tr>\n<tr><td>1.3291</td><td>0.00165723209277</td><td>22.</td></tr>\n<tr><td>1.34101</td><td>-0.145764204634</td><td>22.</td></tr>\n<tr><td>1.35136</td><td>-0.373984554953</td><td>22.</td></tr>\n<tr><td>1.35608</td><td>-0.124370205399</td><td>22.</td></tr>\n<tr><td>1.39103</td><td>-0.216813113651</td><td>22.</td></tr>\n<tr><td>1.41633</td><td>-0.608368865186</td><td>22.</td></tr>\n</tbody>\n</table>\n<p><small>12 rows × 3 columns</small></p>\n<!-- out:stdout -->\n);\n   (23,\n    \n<!-- out:display text/html -->\n<table>\n<thead><tr><th>z</th><th>res</th><th>bin</th></tr></thead>\n<tbody>\n<tr><td>1.5429</td><td>-0.239796061037</td><td>23.</td></tr>\n<tr><td>1.54901</td><td>-0.0492647551793</td><td>23.</td></tr>\n<tr><td>1.61505</td><td>-0.312860625322</td><td>23.</td></tr>\n<tr><td>1.69706</td><td>-0.341579608953</td><td>23.</td></tr>\n<tr><td>1.80119</td><td>-0.33004923418</td><td>23.</td></tr>\n</tbody>\n</table>\n<p><small>5 rows × 3 columns</small></p>\n<!-- out:stdout -->\n)]\nval n_groups : int = 24\nval bz : (float, Nx.float32_elt) Nx.t = float32 [24] \n  [0.0109479, 0.0140023, ..., 1.32297, 1.64104]\nval bmu : (float, Nx.float32_elt) Nx.t = float32 [24] \n  [0.00578864, -0.0499531, ..., -0.118857, -0.25471]\nval berr : (float, Nx.float32_elt) Nx.t = float32 [24] \n  [0.043205, 0.0225362, ..., 0.070028, 0.0543295]\n<!-- /quill:output -->\n\n<!-- quill:cell id=\"c_residual_plot\" -->\n```ocaml\nlet _fig =\n  Hugin.layers [\n    Hugin.errorbar ~x:bz ~y:bmu ~yerr:(`Symmetric berr)\n      ~color:Hugin.Color.black 
~cap_size:4.0 ~line_width:1.5 () ;\n    Hugin.point ~x:bz ~y:bmu\n      ~color:Hugin.Color.black ~size:5.0 ~marker:Hugin.Circle () ;\n    Hugin.line ~x:(to32 z_grid) ~y:(to32 res_lcdm)\n      ~color:Hugin.Color.vermillion ~line_width:2.5\n      ~label:\"ΛCDM (0.3, 0.7)\" () ;\n    Hugin.line ~x:(to32 z_grid) ~y:(to32 res_edsit)\n      ~color:Hugin.Color.sky_blue ~line_width:2.0\n      ~line_style:`Dashed ~label:\"EdS (1, 0)\" () ;\n    Hugin.line ~x:(to32 z_grid) ~y:(to32 res_open)\n      ~color:Hugin.Color.orange ~line_width:2.0\n      ~line_style:`Dash_dot ~label:\"Open (0.3, 0)\" () ;\n    Hugin.hline ~y:0.0 ~line_style:`Dotted ~color:Hugin.Color.gray () ;\n  ]\n  |> Hugin.xscale `Log\n  |> Hugin.xlim 0.01 2.5\n  |> Hugin.xlabel \"Redshift z\"\n  |> Hugin.ylabel \"Δμ (mag, relative to empty universe)\"\n  |> Hugin.title \"Hubble Residuals: The Acceleration Signal\"\n  |> Hugin.legend ~loc:Hugin.Upper_left\n  |> Hugin.grid_lines true\n```\n<!-- quill:output -->\n<!-- out:stdout -->\nval _fig : Hugin.t =\n  \n<!-- out:display image/png -->\n<img src=\"figures/c_residual_plot.png\">\n<!-- /quill:output -->\n\n## Confidence contours in the $\\Omega_M$--$\\Omega_\\Lambda$ plane\n\nFollowing Perlmutter et al. (1999, Fig. 7), we scan a grid of\n$(\\Omega_M, \\Omega_\\Lambda)$ values and compute $\\chi^2$ at each point. The\nconfidence contours are drawn at $\\Delta\\chi^2 = 2.30, 6.17, 11.8$ (68.3%,\n95.4%, 99.7% for 2 parameters). 
We use only the Hubble-flow SNe ($z > 0.01$)\nand the diagonal errors (sufficient for this visualization).\n\n\n<!-- quill:cell id=\"c_chisq\" -->\n```ocaml\n(* Filter Hubble-flow SNe using Talon *)\nlet hf =\n  Talon.filter_by df Talon.Row.(\n    map2 (number \"zHD\") (number \"MU_SH0ES_ERR_DIAG\")\n      ~f:(fun z err -> z > 0.01 && err > 0.0 && err < 10.0))\n\nlet hf_col name =\n  Talon.get_column_exn hf name |> Talon.Col.to_tensor f64 |> Option.get\n\nlet hf_z = hf_col \"zHD\"\nlet hf_mu = hf_col \"MU_SH0ES\"\nlet hf_w = Nx.recip (Nx.square (hf_col \"MU_SH0ES_ERR_DIAG\"))\nlet n_hf = (Nx.shape hf_z).(0)\n\nlet () = Printf.printf \"Using %d Hubble-flow SNe for chi-squared grid\\n\" n_hf\n\n(* Chi-squared for a given (omega_m, omega_l) with M marginalized analytically.\n   chi2 = sum w_i (mu_i - mu_th(z_i) - M)^2\n   Minimizing over M:  M* = sum(w_i * (mu_i - mu_th_i)) / sum(w_i)\n   chi2_min = sum(w_i * d_i^2) - (sum(w_i * d_i))^2 / sum(w_i) *)\nlet hf_z_arr = Array.init n_hf (fun i -> Nx.item [i] hf_z)\nlet hf_mu_arr = Array.init n_hf (fun i -> Nx.item [i] hf_mu)\nlet hf_w_arr = Array.init n_hf (fun i -> Nx.item [i] hf_w)\nlet sum_w = Array.fold_left ( +. ) 0.0 hf_w_arr\n\n(* Pure-float distance modulus via 16-point Gauss-Legendre quadrature.\n   Avoids all tensor allocation in the chi2 hot loop. 
*)\nlet gl_n = [| -0.9894009349916499; -0.9445750230732326; -0.8656312023878318;\n  -0.7554044083550030; -0.6178762444026438; -0.4580167776572274;\n  -0.2816035507792589; -0.0950125098376374;  0.0950125098376374;\n   0.2816035507792589;  0.4580167776572274;  0.6178762444026438;\n   0.7554044083550030;  0.8656312023878318;  0.9445750230732326;\n   0.9894009349916499 |]\nlet gl_wt = [| 0.0271524594117541; 0.0622535239386479; 0.0951585116824928;\n  0.1246289712555339; 0.1495959888165767; 0.1691565193950025;\n  0.1826034150449236; 0.1894506104550685; 0.1894506104550685;\n  0.1826034150449236; 0.1691565193950025; 0.1495959888165767;\n  0.1246289712555339; 0.0951585116824928; 0.0622535239386479;\n  0.0271524594117541 |]\n\nlet dist_mod_f omega_m omega_l z =\n  let c_over_h0 = 299792.458 /. 70.0 in\n  let omega_k = 1.0 -. omega_m -. omega_l in\n  let half_z = z *. 0.5 in\n  let integral = ref 0.0 in\n  for k = 0 to 15 do\n    let zp = half_z *. gl_n.(k) +. half_z in\n    let opz = 1.0 +. zp in\n    let ez = Float.sqrt (omega_m *. opz *. opz *. opz\n                         +. omega_k *. opz *. opz +. omega_l) in\n    integral := !integral +. gl_wt.(k) /. ez\n  done;\n  let chi = c_over_h0 *. half_z *. !integral in\n  let dl = (1.0 +. z) *. chi in\n  5.0 /. Float.log 10.0 *. Float.log dl +. 25.0\n\nlet chi2_at omega_m omega_l =\n  let sum_wd = ref 0.0 in\n  let sum_wdd = ref 0.0 in\n  let ok = ref true in\n  for i = 0 to n_hf - 1 do\n    let mu_th_i = dist_mod_f omega_m omega_l hf_z_arr.(i) in\n    if Float.is_nan mu_th_i then ok := false\n    else begin\n      let d = hf_mu_arr.(i) -. mu_th_i in\n      let w = hf_w_arr.(i) in\n      sum_wd := !sum_wd +. w *. d;\n      sum_wdd := !sum_wdd +. w *. d *. d\n    end\n  done;\n  if not !ok then infinity\n  else !sum_wdd -. (!sum_wd *. !sum_wd /. 
sum_w)\n\n(* Scan the grid -- axis range matches Perlmutter 1999 Figure 7 *)\nlet n_om = 100\nlet n_ol = 100\nlet om_min = 0.0 and om_max = 3.0\nlet ol_min = -1.0 and ol_max = 3.0\n\nlet () = Printf.printf \"Computing chi-squared on %dx%d grid...\\n%!\" n_om n_ol\n\nlet chi2_grid =\n  Nx.init f64 [| n_ol; n_om |] (fun idx ->\n    let j = idx.(0) and i = idx.(1) in\n    let omega_m = om_min +. (Float.of_int i +. 0.5) *. (om_max -. om_min) /. Float.of_int n_om in\n    let omega_l = ol_min +. (Float.of_int j +. 0.5) *. (ol_max -. ol_min) /. Float.of_int n_ol in\n    if omega_m < 0.001 then 1e10\n    else chi2_at omega_m omega_l)\n\nlet chi2_min = Nx.item [] (Nx.min chi2_grid)\nlet delta_chi2 = Nx.sub_s chi2_grid chi2_min\n\nlet () =\n  let flat_idx = Int32.to_int (Nx.item [] (Nx.argmin chi2_grid)) in\n  let best_i = flat_idx mod n_om in\n  let best_j = flat_idx / n_om in\n  let best_om = om_min +. (Float.of_int best_i +. 0.5) *. (om_max -. om_min) /. Float.of_int n_om in\n  let best_ol = ol_min +. (Float.of_int best_j +. 0.5) *. (ol_max -. ol_min) /. 
Float.of_int n_ol in\n  Printf.printf \"Best fit: Omega_M = %.2f, Omega_Lambda = %.2f (chi2 = %.1f, dof ~ %d)\\n\"\n    best_om best_ol chi2_min (n_hf - 1)\n```\n<!-- quill:output -->\n<!-- out:stdout -->\nUsing 1590 Hubble-flow SNe for chi-squared grid\nComputing chi-squared on 100x100 grid...\nBest fit: Omega_M = 0.23, Omega_Lambda = 0.54 (chi2 = 684.2, dof ~ 1589)\nval hf : Talon.t =\n  \n<!-- out:display text/html -->\n<table>\n<thead><tr><th>CID</th><th>IDSURVEY</th><th>zHD</th><th>zHDERR</th><th>zCMB</th><th>zCMBERR</th><th>zHEL</th><th>zHELERR</th><th>m_b_corr</th><th>m_b_corr_err_DIAG</th><th>…</th></tr></thead>\n<tbody>\n<tr><td>2013E</td><td>56</td><td>0.01016</td><td>0.00085</td><td>0.01042</td><td>8e-05</td><td>0.00936</td><td>8e-05</td><td>13.5264</td><td>0.3475</td><td>…</td></tr>\n<tr><td>1999ac</td><td>57</td><td>0.01017</td><td>0.00084</td><td>0.00979</td><td>2e-05</td><td>0.00947</td><td>2e-05</td><td>13.6652</td><td>0.364224</td><td>…</td></tr>\n<tr><td>1999ac</td><td>62</td><td>0.01017</td><td>0.00084</td><td>0.00979</td><td>2e-05</td><td>0.00947</td><td>2e-05</td><td>13.7144</td><td>0.34081</td><td>…</td></tr>\n<tr><td>2009an</td><td>51</td><td>0.01026</td><td>0.00084</td><td>0.00921</td><td>1e-05</td><td>0.00887</td><td>1e-05</td><td>14.0848</td><td>0.305101</td><td>…</td></tr>\n<tr><td>2009an</td><td>65</td><td>0.01026</td><td>0.00084</td><td>0.00921</td><td>1e-05</td><td>0.00887</td><td>1e-05</td><td>13.9723</td><td>0.297865</td><td>…</td></tr>\n<tr><td>2006bh</td><td>5</td><td>0.01028</td><td>0.00086</td><td>0.01042</td><td>0.00015</td><td>0.01077</td><td>0.00015</td><td>14.0258</td><td>0.246478</td><td>…</td></tr>\n<tr><td>2004S</td><td>57</td><td>0.01042</td><td>0.00084</td><td>0.0098</td><td>2e-05</td><td>0.0093</td><td>2e-05</td><td>13.8706</td><td>0.316076</td><td>…</td></tr>\n<tr><td>2021hpr</td><td>57</td><td>0.01044</td><td>0.00084</td><td>0.00958</td><td>2e-05</td><td>0.00938</td><td>2e-05</td><td>13.8339</td><td>0.342855</td><td>…<
/td></tr>\n<tr><td>2002dp</td><td>63</td><td>0.01061</td><td>0.00084</td><td>0.01049</td><td>1e-05</td><td>0.01169</td><td>1e-05</td><td>14.1276</td><td>0.307827</td><td>…</td></tr>\n<tr><td>2002dp</td><td>57</td><td>0.01061</td><td>0.00084</td><td>0.01049</td><td>1e-05</td><td>0.01169</td><td>1e-05</td><td>14.0401</td><td>0.273239</td><td>…</td></tr>\n<tr><td>1997do</td><td>62</td><td>0.01073</td><td>0.00084</td><td>0.01048</td><td>2e-05</td><td>0.01012</td><td>2e-05</td><td>13.8551</td><td>0.363667</td><td>…</td></tr>\n<tr><td>1997bq</td><td>62</td><td>0.01079</td><td>0.00084</td><td>0.00993</td><td>2e-05</td><td>0.00973</td><td>2e-05</td><td>13.8153</td><td>0.322889</td><td>…</td></tr>\n<tr><td>2008fv_comb</td><td>50</td><td>0.01079</td><td>0.00084</td><td>0.00993</td><td>2e-05</td><td>0.00973</td><td>2e-05</td><td>13.9279</td><td>0.377003</td><td>…</td></tr>\n<tr><td>ASASSN-16jf</td><td>150</td><td>0.01096</td><td>0.00084</td><td>0.0104</td><td>1e-05</td><td>0.01144</td><td>1e-05</td><td>14.3179</td><td>0.313698</td><td>…</td></tr>\n<tr><td>iPTF13ebh</td><td>56</td><td>0.01114</td><td>0.00085</td><td>0.01238</td><td>5e-05</td><td>0.01317</td><td>5e-05</td><td>14.4807</td><td>0.341421</td><td>…</td></tr>\n<tr><td>iPTF13ebh</td><td>5</td><td>0.01114</td><td>0.00085</td><td>0.01238</td><td>5e-05</td><td>0.01317</td><td>5e-05</td><td>14.3839</td><td>0.293983</td><td>…</td></tr>\n<tr><td>2010ko</td><td>56</td><td>0.01122</td><td>0.00084</td><td>0.01096</td><td>2e-05</td><td>0.01082</td><td>2e-05</td><td>14.5817</td><td>0.352878</td><td>…</td></tr>\n<tr><td>2013ex</td><td>51</td><td>0.01122</td><td>0.00084</td><td>0.01096</td><td>2e-05</td><td>0.01082</td><td>2e-05</td><td>14.1806</td><td>0.282135</td><td>…</td></tr>\n<tr><td>2013ex</td><td>56</td><td>0.01122</td><td>0.00084</td><td>0.01096</td><td>2e-05</td><td>0.01082</td><td>2e-05</td><td>14.2417</td><td>0.32405</td><td>…</td></tr>\n<tr><td>2009ab</td><td>5</td><td>0.01155</td><td>0.00085</td><td>0.01189</td><td>8e
-05</td><td>0.01219</td><td>8e-05</td><td>14.3303</td><td>0.27987</td><td>…</td></tr>\n<tr><td>…</td><td>…</td><td>…</td><td>…</td><td>…</td><td>…</td><td>…</td><td>…</td><td>…</td><td>…</td><td>…</td></tr>\n</tbody>\n</table>\n<p><small>1590 rows × 47 columns</small></p>\n<!-- out:stdout -->\n\nval hf_col : string -> (float, Nx.float64_elt) Nx.t = <fun>\nval hf_z : (float, Nx.float64_elt) Nx.t = float64 [1590] \n  [0.01016, 0.01017, ..., 1.91165, 2.26137]\nval hf_mu : (float, Nx.float64_elt) Nx.t = float64 [1590] \n  [32.7794, 32.9182, ..., 45.4233, 46.1828]\nval hf_w : (float, Nx.float64_elt) Nx.t = float64 [1590] \n  [8.23147, 7.49694, ..., 7.77459, 12.6367]\nval n_hf : int = 1590\nval hf_z_arr : float array =\n  [|0.01016; 0.01017; 0.01017; 0.01026; ...|]\nval hf_mu_arr : float array =\n  [|32.7794; 32.9182; 32.9674; 33.3378; ...|]\nval hf_w_arr : float array =\n  [|8.23146814613716593; 7.4969352680999215; 8.55574221326686413; ...|]\nval sum_w : float = 38213.713851628665\nval gl_n : float array =\n  [|-0.989400934991649939; -0.9445750230732326; -0.865631202387831755;\n    -0.755404408355003; -0.617876244402643771; -0.458016777657227425;\n    -0.281603550779258915; -0.0950125098376374; 0.0950125098376374;\n    0.281603550779258915; 0.458016777657227425; 0.617876244402643771;\n    0.755404408355003; 0.865631202387831755; 0.9445750230732326;\n    0.989400934991649939|]\nval gl_wt : float array =\n  [|0.0271524594117541; 0.0622535239386479; 0.0951585116824928;\n    0.124628971255533905; 0.149595988816576708; 0.169156519395002508;\n    0.182603415044923612; 0.189450610455068502; 0.189450610455068502;\n    0.182603415044923612; 0.169156519395002508; 0.149595988816576708;\n    0.124628971255533905; 
0.0951585116824928; 0.0622535239386479;\n    0.0271524594117541|]\nval dist_mod_f : float -> float -> float -> float = <fun>\nval chi2_at : float -> float -> float = <fun>\nval n_om : int = 100\nval n_ol : int = 100\nval om_min : float = 0.\nval om_max : float = 3.\nval ol_min : float = -1.\nval ol_max : float = 3.\nval chi2_grid : (float, Nx.float64_elt) Nx.t = float64 [100; 100] \n  [[1522.01, 1543.98, ..., 3820.7, 3844.02],\n   [1484.88, 1506.67, ..., 3779.71, 3803.06],\n   ...\n   [inf, inf, ..., 877.663, 867.429],\n   [inf, inf, ..., 902.83, 890.429]]\nval chi2_min : float = 684.197413463096268\nval delta_chi2 : (float, Nx.float64_elt) Nx.t = float64 [100; 100] \n  [[837.808, 859.782, ..., 3136.5, 3159.83],\n   [800.684, 822.469, ..., 3095.52, 3118.86],\n   ...\n   [inf, inf, ..., 193.466, 183.232],\n   [inf, inf, ..., 218.632, 206.232]]\n<!-- /quill:output -->\n\n## Contour plot\n\nReproducing Perlmutter et al. (1999) Figure 7. The contour levels correspond\nto 68%, 90%, 95%, and 99% confidence regions for two parameters\n($\\Delta\\chi^2 = 2.30, 4.61, 5.99, 9.21$). The diagonal solid line marks\n**flat** universes ($\\Omega_M + \\Omega_\\Lambda = 1$). The dashed line marks\nzero current deceleration ($q_0 = \\Omega_M/2 - \\Omega_\\Lambda = 0$):\nuniverses above it are accelerating today. The upper-left gray region has no\nBig Bang.\n\n\n<!-- quill:cell id=\"c_contour\" -->\n```ocaml\n(* Confidence levels for 2 parameters:\n   68% -> delta_chi2 = 2.30,  90% -> 4.61,  95% -> 5.99,  99% -> 9.21 *)\nlet confidence_levels = [| 2.30; 4.61; 5.99; 9.21 |]\n\n(* \"No Big Bang\" boundary: upper-left region where the universe has no\n   initial singularity. Approximate as OmegaL > 4*OmegaM*(cosh(...))^3\n   for plotting purposes; simplified to a polygon here. 
*)\nlet no_bb_x = Nx.create f32 [| 5 |] [| 0.0; 0.0; 1.0; 2.0; 3.0 |]\nlet no_bb_y1 = Nx.create f32 [| 5 |] [| 3.0; 1.0; 2.2; 2.8; 3.0 |]\nlet no_bb_y2 = Nx.full f32 [| 5 |] 3.0\n\nlet _fig =\n  Hugin.layers [\n    (* \"No Big Bang\" shaded region *)\n    Hugin.fill_between ~x:no_bb_x ~y1:no_bb_y1 ~y2:no_bb_y2\n      ~color:(Hugin.Color.with_alpha 0.15 Hugin.Color.gray) () ;\n    (* Filled confidence contours -- blue/teal like Figure 7 *)\n    Hugin.contour ~data:(to32 delta_chi2)\n      ~x0:om_min ~x1:om_max ~y0:ol_min ~y1:ol_max\n      ~levels:(`Values confidence_levels)\n      ~filled:true\n      ~cmap:(Hugin.Cmap.of_colors [|\n        Hugin.Color.with_alpha 0.8 (Hugin.Color.hex \"#1a5276\");\n        Hugin.Color.with_alpha 0.6 (Hugin.Color.hex \"#2e86c1\");\n        Hugin.Color.with_alpha 0.4 (Hugin.Color.hex \"#85c1e9\");\n        Hugin.Color.with_alpha 0.2 (Hugin.Color.hex \"#d4e6f1\");\n        Hugin.Color.with_alpha 0.0 Hugin.Color.white;\n      |]) () ;\n    (* Contour outlines *)\n    Hugin.contour ~data:(to32 delta_chi2)\n      ~x0:om_min ~x1:om_max ~y0:ol_min ~y1:ol_max\n      ~levels:(`Values confidence_levels)\n      ~color:(Hugin.Color.hex \"#2e86c1\") ~line_width:1.0 () ;\n    (* Flat universe line: OmegaM + OmegaL = 1 *)\n    Hugin.line\n      ~x:(Nx.create f32 [| 2 |] [| 0.0; 3.0 |])\n      ~y:(Nx.create f32 [| 2 |] [| 1.0; -2.0 |])\n      ~color:Hugin.Color.black ~line_width:1.5\n      ~label:\"Flat\" () ;\n    (* No-deceleration line: q0 = 0, i.e. 
OmegaL = OmegaM / 2 *)\n    Hugin.line\n      ~x:(Nx.create f32 [| 2 |] [| 0.0; 3.0 |])\n      ~y:(Nx.create f32 [| 2 |] [| 0.0; 1.5 |])\n      ~color:Hugin.Color.gray ~line_style:`Dashed ~line_width:1.0\n      ~label:\"Accelerating/decelerating\" () ;\n    (* Lambda = 0 line *)\n    Hugin.hline ~y:0.0 ~color:Hugin.Color.gray ~line_style:`Dotted\n      ~line_width:0.5 () ;\n  ]\n  |> Hugin.xlim 0.0 3.0\n  |> Hugin.ylim (-1.0) 3.0\n  |> Hugin.xlabel \"Omega_M\"\n  |> Hugin.ylabel \"Omega_Lambda\"\n  |> Hugin.title \"Confidence Contours in the Omega_M - Omega_Lambda Plane\"\n  |> Hugin.legend ~loc:Hugin.Upper_right\n```\n<!-- quill:output -->\n<!-- out:stdout -->\nval confidence_levels : float array = [|2.3; 4.61; 5.99; 9.21|]\nval no_bb_x : (float, Nx.float32_elt) Nx.t = float32 [5] [0, 0, ..., 2, 3]\nval no_bb_y1 : (float, Nx.float32_elt) Nx.t = float32 [5] [3, 1, ..., 2.8, 3]\nval no_bb_y2 : (float, Nx.float32_elt) Nx.t = float32 [5] [3, 3, ..., 3, 3]\nval _fig : Hugin.t =\n  \n<!-- out:display image/png -->\n<img src=\"figures/c_contour.png\">\n<!-- /quill:output -->\n\n## Best-fit flat $\\Lambda$CDM\n\nRestricting to flat universes ($\\Omega_M + \\Omega_\\Lambda = 1$), we find the\nbest-fit $\\Omega_M$ by scanning along the flatness constraint and use Umbra's\n`Cosmo.flat_lcdm` to compute the corresponding distances.\n\n\n<!-- quill:cell id=\"c_bestfit\" -->\n```ocaml\nlet n_flat = 200\nlet flat_om = Nx.linspace f64 0.01 0.99 n_flat\nlet flat_chi2 = Nx.init f64 [| n_flat |] (fun i ->\n  let om = Nx.item [i.(0)] flat_om in\n  chi2_at om (1.0 -. om))\n\nlet best_flat_i = Int32.to_int (Nx.item [] (Nx.argmin flat_chi2))\nlet omega_m_best = Nx.item [best_flat_i] flat_om\nlet omega_l_best = 1.0 -. 
omega_m_best\n\nlet () =\n  Printf.printf \"\\n=== Flat ΛCDM best fit ===\\n\";\n  Printf.printf \"  Omega_M     = %.3f\\n\" omega_m_best;\n  Printf.printf \"  Omega_L     = %.3f\\n\" omega_l_best;\n  Printf.printf \"  chi2        = %.1f  (dof ~ %d)\\n\"\n    (Nx.item [best_flat_i] flat_chi2) (n_hf - 1);\n  Printf.printf \"\\nPerlmutter et al. (1999) found Omega_M ~ 0.28, Omega_L ~ 0.72\\n\";\n  Printf.printf \"Planck 2018 finds            Omega_M = 0.315, Omega_L = 0.685\\n\"\n```\n<!-- quill:output -->\n<!-- out:stdout -->\n\n=== Flat ΛCDM best fit ===\n  Omega_M     = 0.350\n  Omega_L     = 0.650\n  chi2        = 684.6  (dof ~ 1589)\n\nPerlmutter et al. (1999) found Omega_M ~ 0.28, Omega_L ~ 0.72\nPlanck 2018 finds            Omega_M = 0.315, Omega_L = 0.685\nval n_flat : int = 200\nval flat_om : (float, Nx.float64_elt) Nx.t = float64 [200] \n  [0.01, 0.0149246, ..., 0.985075, 0.99]\nval flat_chi2 : (float, Nx.float64_elt) Nx.t = float64 [200] \n  [1265.72, 1240.76, ..., 1313.88, 1321.44]\nval best_flat_i : int = 69\nval omega_m_best : float = 0.349798994974874378\nval omega_l_best : float = 0.650201005025125678\n<!-- /quill:output -->\n\n### $\\chi^2$ profile along the flat-universe constraint\n\n\n<!-- quill:cell id=\"c_flat_chi2\" -->\n```ocaml\nlet chi2_min_flat = Nx.item [best_flat_i] flat_chi2\nlet delta_flat = Nx.sub_s flat_chi2 chi2_min_flat\n\nlet _fig =\n  Hugin.layers [\n    Hugin.line ~x:(to32 flat_om) ~y:(to32 delta_flat)\n      ~color:Hugin.Color.vermillion ~line_width:2.5 () ;\n    Hugin.hline ~y:1.0 ~line_style:`Dashed ~color:Hugin.Color.gray\n      ~label:\"Δχ² = 1 (1σ)\" () ;\n    Hugin.hline ~y:4.0 ~line_style:`Dotted ~color:Hugin.Color.gray\n      ~label:\"Δχ² = 4 (2σ)\" () ;\n    Hugin.vline ~x:omega_m_best ~line_style:`Dashed\n      ~color:Hugin.Color.sky_blue\n      ~label:(Printf.sprintf \"Best fit: Ω_M = %.3f\" omega_m_best) () ;\n  ]\n  |> Hugin.xlim 0.0 1.0\n  |> Hugin.ylim 0.0 20.0\n  |> Hugin.xlabel \"Ω_M (flat universe)\"\n  |> 
Hugin.ylabel \"Δχ²\"\n  |> Hugin.title \"χ² Profile: Flat ΛCDM\"\n  |> Hugin.legend ~loc:Hugin.Upper_right\n  |> Hugin.grid_lines true\n```\n<!-- quill:output -->\n<!-- out:stdout -->\nval chi2_min_flat : float = 684.600426847840708\nval delta_flat : (float, Nx.float64_elt) Nx.t = float64 [200] \n  [581.117, 556.156, ..., 629.283, 636.844]\nval _fig : Hugin.t =\n  \n<!-- out:display image/png -->\n<img src=\"figures/c_flat_chi2.png\">\n<!-- /quill:output -->\n\n## Cosmological implications\n\nWith the best-fit flat $\\Lambda$CDM parameters, we compute some fundamental\nproperties of the universe using Umbra's cosmology module.\n\n\n<!-- quill:cell id=\"c_cosmo\" -->\n```ocaml\nlet p_best = Cosmo.flat_lcdm ~h0:70.0 ~omega_m:omega_m_best\n\nlet () =\n  let z0 = Nx.scalar f64 0.0 in\n  let z1 = Nx.scalar f64 1.0 in\n  let z_star = Nx.scalar f64 1089.0 in\n  Printf.printf \"\\n=== Universe properties (H₀ = 70 km/s/Mpc, Ω_M = %.3f) ===\\n\\n\" omega_m_best;\n  Printf.printf \"  Age of the universe           = %.2f Gyr\\n\"\n    (Nx.item [] (Unit.Time.in_gyr (Cosmo.age ~p:p_best z0)));\n  Printf.printf \"  Lookback time to z=1          = %.2f Gyr\\n\"\n    (Nx.item [] (Unit.Time.in_gyr (Cosmo.lookback_time ~p:p_best z1)));\n  Printf.printf \"  Comoving distance to z=1      = %.0f Mpc\\n\"\n    (Nx.item [] (Unit.Length.in_mpc (Cosmo.comoving_distance ~p:p_best z1)));\n  Printf.printf \"  Luminosity distance to z=1    = %.0f Mpc\\n\"\n    (Nx.item [] (Unit.Length.in_mpc (Cosmo.luminosity_distance ~p:p_best z1)));\n  Printf.printf \"  Ang. 
diameter distance to z=1 = %.0f Mpc\\n\"\n    (Nx.item [] (Unit.Length.in_mpc (Cosmo.angular_diameter_distance ~p:p_best z1)));\n  Printf.printf \"  Comoving distance to CMB      = %.0f Mpc\\n\"\n    (Nx.item [] (Unit.Length.in_mpc (Cosmo.comoving_distance ~p:p_best z_star)))\n```\n<!-- quill:output -->\n<!-- out:stdout -->\n\n=== Universe properties (H₀ = 70 km/s/Mpc, Ω_M = 0.350) ===\n\n  Age of the universe           = 12.89 Gyr\n  Lookback time to z=1          = 7.52 Gyr\n  Comoving distance to z=1      = 3212 Mpc\n  Luminosity distance to z=1    = 6423 Mpc\n  Ang. diameter distance to z=1 = 1606 Mpc\n  Comoving distance to CMB      = 12758 Mpc\nval p_best : Umbra.Cosmo.params = <abstr>\n<!-- /quill:output -->\n\n## Conclusion\n\nWe have reproduced the central result of Perlmutter et al. (1999) using the\nmodern Pantheon+ dataset and Umbra's cosmology module:\n\n1. The **Hubble diagram** shows that distant SNe Ia are fainter than predicted\n   by decelerating models, confirming cosmic acceleration.\n2. **Residuals** relative to an empty universe clearly show the acceleration\n   signal at $z > 0.2$.\n3. **Confidence contours** in the $\\Omega_M$--$\\Omega_\\Lambda$ plane strongly\n   exclude $\\Omega_\\Lambda = 0$ and are consistent with a flat universe with\n   $\\Omega_M \\approx 0.3$, $\\Omega_\\Lambda \\approx 0.7$.\n\nThe analysis required only Umbra's `Cosmo.lcdm`, `Cosmo.flat_lcdm`, and\n`Cosmo.distance_modulus` functions -- the entire theoretical framework for\nSN Ia cosmology in a few lines of OCaml.\n\n### References\n\n- Perlmutter, S. et al. 1999, ApJ, 517, 565 (arXiv:astro-ph/9812133)\n- Riess, A.G. et al. 1998, AJ, 116, 1009 (arXiv:astro-ph/9805201)\n- Scolnic, D.M. et al. 2022, ApJ, 938, 113 (arXiv:2112.03863)\n- Brout, D. et al. 2022, ApJ, 938, 110 (arXiv:2202.04077)\n"
  },
  {
    "path": "dev/umbra/test/dune",
    "content": "(test\n (name test_umbra)\n (libraries umbra umbra.fits nx nx.io talon windtrap))\n"
  },
  {
    "path": "dev/umbra/test/test_umbra.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Windtrap\nopen Umbra\n\nlet eps = 1e-6\nlet f64 = Nx.float64\nlet v x = Nx.item [] x\n\n(* Unit tests *)\n\nlet test_length_conversion () =\n  let d = Unit.Length.kpc 10.0 in\n  let mpc = v (Unit.Length.in_mpc d) in\n  is_true ~msg:\"10 kpc = 0.01 Mpc\" (Float.abs (mpc -. 0.01) < eps);\n  let m = v (Unit.Length.in_m d) in\n  let back = v (Unit.Length.in_kpc (Unit.Length.m m)) in\n  is_true ~msg:\"kpc -> m -> kpc roundtrip\" (Float.abs (back -. 10.0) < eps)\n\nlet test_length_arithmetic () =\n  let open Unit in\n  let d = Length.kpc 10.0 + Length.pc 500.0 in\n  let kpc = v (Length.in_kpc d) in\n  is_true ~msg:\"10 kpc + 500 pc = 10.5 kpc\" (Float.abs (kpc -. 10.5) < eps)\n\nlet test_mass_conversion () =\n  let m = Unit.Mass.solar_mass 1.0 in\n  let kg = v (Unit.Mass.in_kg m) in\n  is_true ~msg:\"1 Msun ~ 1.988e30 kg\"\n    (Float.abs (kg -. 1.9884e30) /. 1.9884e30 < 1e-4)\n\nlet test_velocity_cross_dim () =\n  let d = Unit.Length.km 100.0 in\n  let t = Unit.Time.s 10.0 in\n  let vel = Unit.length_per_time d t in\n  let km_s = v (Unit.Velocity.in_km_s vel) in\n  is_true ~msg:\"100 km / 10 s = 10 km/s\" (Float.abs (km_s -. 10.0) < eps)\n\nlet test_angle_trig () =\n  let a = Unit.Angle.deg 90.0 in\n  is_true ~msg:\"sin(90°) = 1\"\n    (Float.abs (Nx.item [] (Unit.Angle.sin a) -. 1.0) < eps);\n  is_true ~msg:\"cos(90°) = 0\" (Float.abs (Nx.item [] (Unit.Angle.cos a)) < eps)\n\nlet test_wavelength_frequency () =\n  let lam = Unit.Length.nm 500.0 in\n  let nu = Unit.wavelength_to_frequency lam in\n  let lam2 = Unit.frequency_to_wavelength nu in\n  let nm2 = v (Unit.Length.in_nm lam2) in\n  is_true ~msg:\"wavelength -> freq -> wavelength roundtrip\"\n    (Float.abs (nm2 -. 
500.0) < eps)\n\nlet test_phantom_type_safety () =\n  (* This is a compile-time test: the following should NOT typecheck: let _ =\n     Unit.(Length.m 1.0 + Mass.kg 1.0) The fact that this module compiles proves\n     type safety. *)\n  let _d = Unit.(Length.m 1.0 + Length.km 1.0) in\n  let _m = Unit.(Mass.kg 1.0 + Mass.g 500.0) in\n  ()\n\n(* Const tests *)\n\nlet test_const_c () =\n  let c_km_s = v (Unit.Velocity.in_km_s Const.c) in\n  is_true ~msg:\"c ~ 299792 km/s\" (Float.abs (c_km_s -. 299792.458) < 1.0)\n\n(* Coord tests *)\n\nlet deg_eps = 1e-6\n\nlet test_coord_roundtrip () =\n  let ra =\n    Unit.Angle.of_deg (Nx.create f64 [| 4 |] [| 180.0; 0.0; 90.0; 266.405 |])\n  in\n  let dec =\n    Unit.Angle.of_deg (Nx.create f64 [| 4 |] [| 45.0; -30.0; 0.0; -28.936 |])\n  in\n  let c = Coord.of_radec ~ra ~dec in\n  let gal = Coord.galactic c in\n  let back = Coord.icrs gal in\n  let ra' = Unit.Angle.in_deg (Coord.ra back) in\n  let dec' = Unit.Angle.in_deg (Coord.dec back) in\n  let ra_orig = Unit.Angle.in_deg ra in\n  let dec_orig = Unit.Angle.in_deg dec in\n  for i = 0 to 3 do\n    is_true\n      ~msg:(Printf.sprintf \"RA roundtrip [%d]\" i)\n      (Float.abs (Nx.item [ i ] ra_orig -. Nx.item [ i ] ra') < deg_eps);\n    is_true\n      ~msg:(Printf.sprintf \"Dec roundtrip [%d]\" i)\n      (Float.abs (Nx.item [ i ] dec_orig -. Nx.item [ i ] dec') < deg_eps)\n  done\n\nlet test_separation_poles () =\n  let c1 =\n    Coord.of_radec\n      ~ra:(Unit.Angle.of_deg (Nx.create f64 [| 1 |] [| 0.0 |]))\n      ~dec:(Unit.Angle.of_deg (Nx.create f64 [| 1 |] [| 90.0 |]))\n  in\n  let c2 =\n    Coord.of_radec\n      ~ra:(Unit.Angle.of_deg (Nx.create f64 [| 1 |] [| 0.0 |]))\n      ~dec:(Unit.Angle.of_deg (Nx.create f64 [| 1 |] [| -90.0 |]))\n  in\n  let sep = Coord.separation c1 c2 in\n  is_true ~msg:\"Pole separation = 180\"\n    (Float.abs (Nx.item [ 0 ] (Unit.Angle.in_deg sep) -. 
180.0) < deg_eps)\n\n(* Cosmo tests *)\n\nlet test_cosmo_distances () =\n  let z = Nx.scalar f64 0.1 in\n  let dc = v (Unit.Length.in_mpc (Cosmo.comoving_distance z)) in\n  is_true\n    ~msg:(Printf.sprintf \"comoving(0.1) ~ 421 Mpc, got %.1f\" dc)\n    (Float.abs (dc -. 421.0) < 5.0);\n  let dl = v (Unit.Length.in_mpc (Cosmo.luminosity_distance z)) in\n  is_true\n    ~msg:(Printf.sprintf \"luminosity(0.1) ~ 463 Mpc, got %.1f\" dl)\n    (Float.abs (dl -. 463.0) < 5.0)\n\nlet test_cosmo_lookback () =\n  let z = Nx.scalar f64 1.0 in\n  let t = v (Unit.Time.in_gyr (Cosmo.lookback_time z)) in\n  is_true\n    ~msg:(Printf.sprintf \"lookback(1.0) ~ 7.7 Gyr, got %.1f\" t)\n    (Float.abs (t -. 7.7) < 0.3)\n\nlet test_cosmo_angular_scale () =\n  let phys = Unit.Length.kpc 1.0 in\n  let z = Nx.scalar f64 0.022 in\n  let ang = Cosmo.angular_size ~z phys in\n  let arcsec = v (Unit.Angle.in_arcsec ang) in\n  is_true\n    ~msg:(Printf.sprintf \"1 kpc at z=0.022 ~ 2.3 arcsec, got %.2f\" arcsec)\n    (Float.abs (arcsec -. 2.3) < 0.2)\n\n(* Cosmo: high-z regression tests. These catch quadrature under-resolution at\n   large z. *)\n\nlet test_cosmo_age_planck18 () =\n  let p = Cosmo.planck18 in\n  let t = v (Unit.Time.in_gyr (Cosmo.age ~p (Nx.scalar f64 0.0))) in\n  is_true\n    ~msg:(Printf.sprintf \"age(Planck18, z=0) ~ 13.8 Gyr, got %.1f\" t)\n    (Float.abs (t -. 13.8) < 0.3)\n\nlet test_cosmo_age_at_z1 () =\n  let p = Cosmo.planck18 in\n  let age_0 = v (Unit.Time.in_gyr (Cosmo.age ~p (Nx.scalar f64 0.0))) in\n  let age_1 = v (Unit.Time.in_gyr (Cosmo.age ~p (Nx.scalar f64 1.0))) in\n  let lb_1 =\n    v (Unit.Time.in_gyr (Cosmo.lookback_time ~p (Nx.scalar f64 1.0)))\n  in\n  is_true\n    ~msg:\n      (Printf.sprintf\n         \"age(z=0) - age(z=1) = lookback(z=1): %.2f - %.2f = %.2f vs %.2f\" age_0\n         age_1 (age_0 -. age_1) lb_1)\n    (Float.abs (age_0 -. age_1 -. 
lb_1) < 0.05)\n\nlet test_cosmo_comoving_cmb () =\n  let p = Cosmo.planck18 in\n  let dc =\n    v (Unit.Length.in_mpc (Cosmo.comoving_distance ~p (Nx.scalar f64 1089.0)))\n  in\n  is_true\n    ~msg:(Printf.sprintf \"comoving(z=1089) ~ 14000 Mpc, got %.0f\" dc)\n    (Float.abs (dc -. 14000.0) < 500.0)\n\nlet test_cosmo_comoving_high_z () =\n  let p = Cosmo.planck18 in\n  let dc_2 =\n    v (Unit.Length.in_mpc (Cosmo.comoving_distance ~p (Nx.scalar f64 2.0)))\n  in\n  let dc_5 =\n    v (Unit.Length.in_mpc (Cosmo.comoving_distance ~p (Nx.scalar f64 5.0)))\n  in\n  let dc_10 =\n    v (Unit.Length.in_mpc (Cosmo.comoving_distance ~p (Nx.scalar f64 10.0)))\n  in\n  is_true ~msg:\"comoving distances monotonically increase\"\n    (dc_2 < dc_5 && dc_5 < dc_10);\n  is_true\n    ~msg:(Printf.sprintf \"comoving(z=10) ~ 9700 Mpc, got %.0f\" dc_10)\n    (Float.abs (dc_10 -. 9700.0) < 300.0)\n\nlet test_cosmo_lookback_high_z () =\n  let p = Cosmo.planck18 in\n  let lb_5 =\n    v (Unit.Time.in_gyr (Cosmo.lookback_time ~p (Nx.scalar f64 5.0)))\n  in\n  is_true\n    ~msg:(Printf.sprintf \"lookback(z=5) ~ 12.5 Gyr, got %.1f\" lb_5)\n    (Float.abs (lb_5 -. 12.5) < 0.3)\n\n(* FITS tests *)\n\nlet test_fits_image_roundtrip () =\n  let path = \"_test_image.fits\" in\n  Fun.protect\n    ~finally:(fun () -> if Sys.file_exists path then Sys.remove path)\n    (fun () ->\n      let data =\n        Nx.create Nx.float32 [| 2; 3 |] [| 1.0; 2.0; 3.0; 4.0; 5.0; 6.0 |]\n      in\n      Umbra_fits.write_image path data;\n      let packed = Umbra_fits.read_image ~hdu:0 path in\n      let result = Nx_io.to_typed Nx.float32 packed in\n      is_true ~msg:\"Image shape\" (Nx.shape result = [| 2; 3 |]);\n      for i = 0 to 5 do\n        let row = i / 3 and col = i mod 3 in\n        is_true\n          ~msg:(Printf.sprintf \"Image value [%d,%d]\" row col)\n          (Float.abs (Nx.item [ row; col ] data -. 
Nx.item [ row; col ] result)\n          < 1e-6)\n      done)\n\nlet test_fits_table_roundtrip () =\n  let path = \"_test_table.fits\" in\n  Fun.protect\n    ~finally:(fun () -> if Sys.file_exists path then Sys.remove path)\n    (fun () ->\n      let df =\n        Talon.create\n          [\n            (\"ra\", Talon.Col.float64 [| 10.0; 20.0; 30.0 |]);\n            (\"dec\", Talon.Col.float64 [| -10.0; 0.0; 10.0 |]);\n          ]\n      in\n      Umbra_fits.write_table path df;\n      let df2 = Umbra_fits.read_table ~hdu:1 path in\n      is_true ~msg:\"Table rows\" (Talon.num_rows df2 = 3);\n      match Talon.to_array Nx.float64 df2 \"ra\" with\n      | Some arr -> is_true ~msg:\"ra[0]\" (Float.abs (arr.(0) -. 10.0) < 1e-10)\n      | None -> fail \"ra column missing\")\n\n(* Coord cross-matching tests *)\n\nlet test_match_nearest_self () =\n  let ra = Unit.Angle.of_deg (Nx.create f64 [| 3 |] [| 10.0; 20.0; 30.0 |]) in\n  let dec = Unit.Angle.of_deg (Nx.create f64 [| 3 |] [| -10.0; 0.0; 10.0 |]) in\n  let c = Coord.of_radec ~ra ~dec in\n  let { Coord.indices; separations } = Coord.nearest c c in\n  for i = 0 to 2 do\n    is_true\n      ~msg:(Printf.sprintf \"Self-match index[%d]\" i)\n      (Int32.to_int (Nx.item [ i ] indices) = i);\n    is_true\n      ~msg:(Printf.sprintf \"Self-match separation[%d]\" i)\n      (Nx.item [ i ] (Unit.Angle.in_rad separations) < 1e-10)\n  done\n\n(* Time tests *)\n\nlet test_time_jd_mjd () =\n  let t = Time.unsafe_of_jd 2451545.0 in\n  is_true ~msg:\"J2000.0 JD\" (Float.abs (Time.to_jd t -. 2451545.0) < 1e-10);\n  is_true ~msg:\"J2000.0 MJD\" (Float.abs (Time.to_mjd t -. 51544.5) < 1e-10);\n  let t2 = Time.unsafe_of_mjd 51544.5 in\n  is_true ~msg:\"MJD roundtrip\" (Float.abs (Time.to_jd t2 -. 2451545.0) < 1e-10)\n\nlet test_time_iso () =\n  let t = Time.of_iso \"2000-01-01T12:00:00\" in\n  is_true ~msg:\"J2000.0 from ISO\" (Float.abs (Time.to_jd t -. 
2451545.0) < 1e-6);\n  let s = Time.to_iso t in\n  is_true ~msg:\"ISO roundtrip\" (s = \"2000-01-01T12:00:00Z\")\n\nlet test_time_utc_tai_tt () =\n  let utc = Time.unsafe_of_jd 2451545.0 in\n  let tai = Time.utc_to_tai utc in\n  let dt_s = (Time.to_jd tai -. Time.to_jd utc) *. 86400.0 in\n  is_true\n    ~msg:(Printf.sprintf \"TAI-UTC at J2000 = 32s, got %.1f\" dt_s)\n    (Float.abs (dt_s -. 32.0) < 0.1);\n  let tt = Time.tai_to_tt tai in\n  let dt_tt = (Time.to_jd tt -. Time.to_jd tai) *. 86400.0 in\n  is_true\n    ~msg:(Printf.sprintf \"TT-TAI = 32.184s, got %.6f\" dt_tt)\n    (Float.abs (dt_tt -. 32.184) < 1e-3);\n  let tai' = Time.tt_to_tai tt in\n  is_true ~msg:\"TT->TAI roundtrip\"\n    (Float.abs (Time.to_jd tai' -. Time.to_jd tai) < 1e-12);\n  let utc' = Time.tai_to_utc tai in\n  is_true ~msg:\"TAI->UTC roundtrip\"\n    (Float.abs (Time.to_jd utc' -. Time.to_jd utc) < 1e-10)\n\nlet test_time_tdb () =\n  let tt = Time.unsafe_of_jd 2451545.0 in\n  let tdb = Time.tt_to_tdb tt in\n  let dt_ms = (Time.to_jd tdb -. Time.to_jd tt) *. 86400.0 *. 1000.0 in\n  is_true\n    ~msg:(Printf.sprintf \"TDB-TT < 2ms, got %.3f ms\" dt_ms)\n    (Float.abs dt_ms < 2.0);\n  let tt' = Time.tdb_to_tt tdb in\n  is_true ~msg:\"TDB->TT roundtrip\"\n    (Float.abs (Time.to_jd tt' -. Time.to_jd tt) < 1e-10)\n\nlet test_time_unix () =\n  let t = Time.of_unix 0.0 in\n  is_true ~msg:\"Unix epoch JD\" (Float.abs (Time.to_jd t -. 2440587.5) < 1e-10);\n  let u = Time.to_unix t in\n  is_true ~msg:\"Unix roundtrip\" (Float.abs u < 1e-6)\n\nlet test_time_diff_add () =\n  let t1 = Time.unsafe_of_jd 2451545.0 in\n  let t2 = Time.unsafe_of_jd 2451546.0 in\n  let dt = Time.diff t2 t1 in\n  is_true ~msg:\"diff = 1 day\"\n    (Float.abs (v (Unit.Time.in_day dt) -. 1.0) < 1e-10);\n  let t3 = Time.add t1 (Unit.Time.day 1.0) in\n  is_true ~msg:\"add 1 day\" (Float.abs (Time.to_jd t3 -. 
2451546.0) < 1e-10)\n\n(* Cosmo preset tests *)\n\nlet test_cosmo_planck18 () =\n  let z = Nx.scalar f64 0.5 in\n  let dc =\n    v (Unit.Length.in_mpc (Cosmo.comoving_distance ~p:Cosmo.planck18 z))\n  in\n  is_true\n    ~msg:(Printf.sprintf \"Planck18 comoving(0.5) ~ 1960 Mpc, got %.0f\" dc)\n    (Float.abs (dc -. 1960.0) < 30.0)\n\nlet test_cosmo_hubble () =\n  let z = Nx.scalar f64 0.0 in\n  let h0 = Nx.item [] (Cosmo.hubble z) in\n  is_true\n    ~msg:(Printf.sprintf \"H(0) = H0 = 70, got %.1f\" h0)\n    (Float.abs (h0 -. 70.0) < 1e-6)\n\n(* Coord FK5/Supergalactic tests *)\n\nlet test_coord_ecliptic_roundtrip () =\n  let ra = Unit.Angle.of_deg (Nx.create f64 [| 2 |] [| 180.0; 45.0 |]) in\n  let dec = Unit.Angle.of_deg (Nx.create f64 [| 2 |] [| 45.0; -30.0 |]) in\n  let c = Coord.of_radec ~ra ~dec in\n  let ecl = Coord.ecliptic_j2000 c in\n  let back = Coord.icrs ecl in\n  let ra' = Unit.Angle.in_deg (Coord.ra back) in\n  let dec' = Unit.Angle.in_deg (Coord.dec back) in\n  let ra_orig = Unit.Angle.in_deg ra in\n  let dec_orig = Unit.Angle.in_deg dec in\n  for i = 0 to 1 do\n    is_true\n      ~msg:(Printf.sprintf \"Ecliptic RA roundtrip [%d]\" i)\n      (Float.abs (Nx.item [ i ] ra_orig -. Nx.item [ i ] ra') < deg_eps);\n    is_true\n      ~msg:(Printf.sprintf \"Ecliptic Dec roundtrip [%d]\" i)\n      (Float.abs (Nx.item [ i ] dec_orig -. 
Nx.item [ i ] dec') < deg_eps)\n  done\n\nlet test_coord_supergalactic_roundtrip () =\n  let ra = Unit.Angle.of_deg (Nx.create f64 [| 2 |] [| 180.0; 45.0 |]) in\n  let dec = Unit.Angle.of_deg (Nx.create f64 [| 2 |] [| 45.0; -30.0 |]) in\n  let c = Coord.of_radec ~ra ~dec in\n  let sg = Coord.supergalactic c in\n  let back = Coord.icrs sg in\n  let ra' = Unit.Angle.in_deg (Coord.ra back) in\n  let dec' = Unit.Angle.in_deg (Coord.dec back) in\n  let ra_orig = Unit.Angle.in_deg ra in\n  let dec_orig = Unit.Angle.in_deg dec in\n  for i = 0 to 1 do\n    is_true\n      ~msg:(Printf.sprintf \"Supergalactic RA roundtrip [%d]\" i)\n      (Float.abs (Nx.item [ i ] ra_orig -. Nx.item [ i ] ra') < 1e-4);\n    is_true\n      ~msg:(Printf.sprintf \"Supergalactic Dec roundtrip [%d]\" i)\n      (Float.abs (Nx.item [ i ] dec_orig -. Nx.item [ i ] dec') < 1e-4)\n  done\n\n(* Unit energy-wavelength-frequency tests *)\n\nlet test_energy_wavelength_frequency () =\n  let e = Unit.Energy.ev 2.0 in\n  let nu = Unit.energy_to_frequency e in\n  let e2 = Unit.frequency_to_energy nu in\n  is_true ~msg:\"energy->freq->energy roundtrip\"\n    (Float.abs (v (Unit.Energy.in_ev e2) -. 2.0) < 1e-6);\n  let lam = Unit.energy_to_wavelength e in\n  let nu2 = Unit.wavelength_to_frequency lam in\n  let e3 = Unit.frequency_to_energy nu2 in\n  is_true ~msg:\"energy->wavelength->freq->energy roundtrip\"\n    (Float.abs (v (Unit.Energy.in_ev e3) -. 
2.0) < 1e-6)\n\n(* Spectrum tests *)\n\nlet test_spectrum_blackbody_wien () =\n  (* Wien's displacement law: λ_max * T = 2.898e-3 m·K *)\n  let temp = Unit.Temperature.of_kelvin (Nx.scalar f64 5778.0) in\n  let wave = Unit.Length.of_m (Nx.linspace f64 1e-7 3e-6 1000) in\n  let spec = Spectrum.blackbody ~temperature:temp ~wavelength:wave in\n  let vals = Spectrum.values spec in\n  (* Find index of max value *)\n  let peak_idx = ref 0 in\n  let peak_val = ref (Nx.item [ 0 ] vals) in\n  for i = 1 to 999 do\n    let vi = Nx.item [ i ] vals in\n    if vi > !peak_val then begin\n      peak_val := vi;\n      peak_idx := i\n    end\n  done;\n  let wave_m = Unit.Length.in_m (Spectrum.wavelength spec) in\n  let peak_lam = Nx.item [ !peak_idx ] wave_m in\n  let wien = peak_lam *. 5778.0 in\n  is_true\n    ~msg:(Printf.sprintf \"Wien's law: λ_max*T ~ 2.898e-3, got %.4e\" wien)\n    (Float.abs (wien -. 2.898e-3) /. 2.898e-3 < 0.01)\n\nlet test_spectrum_redshift () =\n  let wave = Unit.Length.of_m (Nx.linspace f64 1e-7 1e-6 100) in\n  let values = Nx.ones f64 [| 100 |] in\n  let spec =\n    Spectrum.create ~wavelength:wave ~values |> Spectrum.as_flux_density\n  in\n  let z = Nx.scalar f64 1.0 in\n  let shifted = Spectrum.redshift ~z spec in\n  (* Wavelengths should double at z=1 *)\n  let orig_wave = Unit.Length.in_m (Spectrum.wavelength spec) in\n  let shifted_wave = Unit.Length.in_m (Spectrum.wavelength shifted) in\n  let ratio = Nx.item [ 50 ] shifted_wave /. Nx.item [ 50 ] orig_wave in\n  is_true ~msg:\"Redshift z=1 doubles wavelength\"\n    (Float.abs (ratio -. 2.0) < 1e-10);\n  (* Values should halve at z=1 *)\n  let val_ratio =\n    Nx.item [ 50 ] (Spectrum.values shifted) /. Nx.item [ 50 ] values\n  in\n  is_true ~msg:\"Redshift z=1 halves values\"\n    (Float.abs (val_ratio -. 
0.5) < 1e-10)\n\nlet test_spectrum_scale () =\n  let wave = Unit.Length.of_m (Nx.linspace f64 1e-7 1e-6 10) in\n  let values = Nx.ones f64 [| 10 |] in\n  let spec = Spectrum.create ~wavelength:wave ~values in\n  let scaled = Spectrum.scale (Nx.scalar f64 3.0) spec in\n  is_true ~msg:\"Scale by 3\"\n    (Float.abs (Nx.item [ 0 ] (Spectrum.values scaled) -. 3.0) < 1e-10)\n\n(* Extinction tests *)\n\nlet test_extinction_ccm89_v_band () =\n  (* At V-band (550nm), A_λ/A_V should be ~1.0 for R_V=3.1 *)\n  let rv = Nx.scalar f64 3.1 in\n  let wave_v = Unit.Length.of_m (Nx.create f64 [| 1 |] [| 5.5e-7 |]) in\n  let alav = Extinction.curve (Extinction.ccm89 ~rv) ~wavelength:wave_v in\n  let val_v = Nx.item [ 0 ] alav in\n  is_true\n    ~msg:(Printf.sprintf \"CCM89 A_V/A_V ~ 1.0 at 550nm, got %.3f\" val_v)\n    (Float.abs (val_v -. 1.0) < 0.1)\n\nlet test_extinction_apply_unredden () =\n  (* apply then unredden should recover original spectrum *)\n  let rv = Nx.scalar f64 3.1 in\n  let law = Extinction.ccm89 ~rv in\n  let wave = Unit.Length.of_m (Nx.linspace f64 3e-7 1e-6 50) in\n  let temp = Unit.Temperature.of_kelvin (Nx.scalar f64 6000.0) in\n  let spec = Spectrum.blackbody ~temperature:temp ~wavelength:wave in\n  let av = Nx.scalar f64 1.0 in\n  let reddened = Extinction.apply law ~av spec in\n  let recovered = Extinction.unredden law ~av reddened in\n  (* Compare values *)\n  let orig_val = Nx.item [ 25 ] (Spectrum.values spec) in\n  let rec_val = Nx.item [ 25 ] (Spectrum.values recovered) in\n  is_true ~msg:\"apply + unredden roundtrip\"\n    (Float.abs (rec_val -. orig_val) /. 
orig_val < 1e-10)\n\nlet test_extinction_ccm89_monotonic () =\n  (* Extinction should increase toward blue wavelengths (for optical) *)\n  let rv = Nx.scalar f64 3.1 in\n  let wave =\n    Unit.Length.of_m (Nx.create f64 [| 3 |] [| 4e-7; 5.5e-7; 8e-7 |])\n  in\n  let alav = Extinction.curve (Extinction.ccm89 ~rv) ~wavelength:wave in\n  let blue = Nx.item [ 0 ] alav in\n  let green = Nx.item [ 1 ] alav in\n  let red = Nx.item [ 2 ] alav in\n  is_true ~msg:\"CCM89: A_blue > A_green\" (blue > green);\n  is_true ~msg:\"CCM89: A_green > A_red\" (green > red)\n\n(* Photometry tests *)\n\nlet test_photometry_ab_mag_flat () =\n  (* A flat f_nu spectrum at 3631 Jy should give m_AB = 0 in any band. f_nu =\n     3631e-26 W/m²/Hz, so f_lambda = f_nu * c / lambda² *)\n  let n = 100 in\n  let bp =\n    Photometry.tophat ~lo:(Unit.Length.m 4e-7) ~hi:(Unit.Length.m 7e-7) ~n\n  in\n  let wave_m = Unit.Length.to_tensor (Photometry.wavelength bp) in\n  let c = 299_792_458.0 in\n  let ab_zp = 3631.0e-26 in\n  (* f_lambda = f_nu * c / lambda^2 *)\n  let f_lambda =\n    Nx.mul_s (Nx.recip (Nx.square wave_m)) (ab_zp *. c)\n  in\n  let spec =
    Spectrum.create ~wavelength:(Photometry.wavelength bp) ~values:f_lambda\n    |> Spectrum.as_flux_density\n  in\n  let mag = Nx.item [] (Photometry.ab_mag bp spec) in\n  is_true\n    ~msg:(Printf.sprintf \"Flat f_nu=3631Jy gives m_AB ~ 0, got %.3f\" mag)\n    (Float.abs mag < 0.05)\n\nlet test_photometry_color_same_band () =\n  (* The color of a band against itself should be 0 *)\n  let bp =\n    Photometry.tophat ~lo:(Unit.Length.m 4e-7) ~hi:(Unit.Length.m 5.5e-7) ~n:50\n  in\n  let spec =\n    Spectrum.blackbody\n      ~temperature:(Unit.Temperature.of_kelvin (Nx.scalar f64 5000.0))\n      ~wavelength:(Photometry.wavelength bp)\n    |> Spectrum.as_flux_density\n  in\n  let col = Nx.item [] (Photometry.color bp bp spec) in\n  is_true ~msg:\"Same-band color = 0\" (Float.abs col < 1e-10)\n\nlet test_photometry_blue_star_color () =\n  (* A hot star should be brighter (lower mag) in blue than red *)\n  let n = 100 in\n  let bp_b =\n    Photometry.tophat ~lo:(Unit.Length.m 4e-7) ~hi:(Unit.Length.m 5e-7) ~n\n  in\n  let bp_r =\n    Photometry.tophat ~lo:(Unit.Length.m 6e-7) ~hi:(Unit.Length.m 7e-7) ~n\n  in\n  let temp = Unit.Temperature.of_kelvin (Nx.scalar f64 20000.0) in\n  let spec_b =\n    Spectrum.blackbody ~temperature:temp\n      ~wavelength:(Photometry.wavelength bp_b)\n    |> Spectrum.as_flux_density\n  in\n  let spec_r =\n    Spectrum.blackbody ~temperature:temp\n      ~wavelength:(Photometry.wavelength bp_r)\n    |> Spectrum.as_flux_density\n  in\n  let mag_b = Nx.item [] (Photometry.ab_mag bp_b spec_b) in\n  let mag_r = Nx.item [] (Photometry.ab_mag bp_r spec_r) in\n  is_true ~msg:\"Hot star: blue mag < red mag (brighter in blue)\" (mag_b < mag_r)\n\n(* Cosmo: extended models *)\n\nlet test_cosmo_flat_lcdm_same_as_default () =\n  let p = Cosmo.flat_lcdm ~h0:70.0 ~omega_m:0.3 in\n  let z = Nx.scalar f64 0.5 in\n  let dc_default = v (Unit.Length.in_mpc (Cosmo.comoving_distance z)) in\n  let dc_flat = v 
(Unit.Length.in_mpc (Cosmo.comoving_distance ~p z)) in\n  is_true ~msg:\"flat_lcdm(70,0.3) = default\"\n    (Float.abs (dc_default -. dc_flat) < 1e-6)\n\nlet test_cosmo_nonflat_lcdm () =\n  (* Open universe: omega_m=0.3, omega_l=0.5 → omega_k=0.2. Result should differ\n     from flat LCDM. *)\n  let p_flat = Cosmo.flat_lcdm ~h0:70.0 ~omega_m:0.3 in\n  let p_open = Cosmo.lcdm ~h0:70.0 ~omega_m:0.3 ~omega_l:0.5 in\n  let z = Nx.scalar f64 1.0 in\n  let dl_flat =\n    v (Unit.Length.in_mpc (Cosmo.luminosity_distance ~p:p_flat z))\n  in\n  let dl_open =\n    v (Unit.Length.in_mpc (Cosmo.luminosity_distance ~p:p_open z))\n  in\n  is_true\n    ~msg:\n      (Printf.sprintf \"Non-flat LCDM differs from flat: %.0f vs %.0f\" dl_open\n         dl_flat)\n    (Float.abs (dl_open -. dl_flat) > 10.0)\n\nlet test_cosmo_wcdm () =\n  (* w0 = -1 should be identical to ΛCDM *)\n  let p_lcdm = Cosmo.flat_lcdm ~h0:70.0 ~omega_m:0.3 in\n  let p_wcdm = Cosmo.wcdm ~h0:70.0 ~omega_m:0.3 ~w0:(-1.0) () in\n  let z = Nx.scalar f64 0.5 in\n  let dc_lcdm = v (Unit.Length.in_mpc (Cosmo.comoving_distance ~p:p_lcdm z)) in\n  let dc_wcdm = v (Unit.Length.in_mpc (Cosmo.comoving_distance ~p:p_wcdm z)) in\n  is_true\n    ~msg:(Printf.sprintf \"wCDM(w0=-1) = LCDM: %.1f vs %.1f\" dc_wcdm dc_lcdm)\n    (Float.abs (dc_wcdm -. dc_lcdm) < 1.0)\n\nlet test_cosmo_w0wacdm () =\n  (* w0=-1, wa=0 should reduce to ΛCDM *)\n  let p_lcdm = Cosmo.flat_lcdm ~h0:70.0 ~omega_m:0.3 in\n  let p_cpl = Cosmo.w0wacdm ~h0:70.0 ~omega_m:0.3 ~w0:(-1.0) ~wa:0.0 () in\n  let z = Nx.scalar f64 1.0 in\n  let dl_lcdm =\n    v (Unit.Length.in_mpc (Cosmo.luminosity_distance ~p:p_lcdm z))\n  in\n  let dl_cpl = v (Unit.Length.in_mpc (Cosmo.luminosity_distance ~p:p_cpl z)) in\n  is_true\n    ~msg:(Printf.sprintf \"w0waCDM(-1,0) = LCDM: %.1f vs %.1f\" dl_cpl dl_lcdm)\n    (Float.abs (dl_cpl -. 
dl_lcdm) < 1.0)\n\nlet test_cosmo_e_of () =\n  (* E(z=0) = 1 for any cosmology *)\n  let p = Cosmo.planck18 in\n  let z = Nx.scalar f64 0.0 in\n  let e0 = v (Cosmo.e_of p z) in\n  is_true\n    ~msg:(Printf.sprintf \"E(z=0) = 1, got %.6f\" e0)\n    (Float.abs (e0 -. 1.0) < 1e-6)\n\nlet test_cosmo_z_at_value () =\n  (* Roundtrip: compute dl at z=0.5, then find z back *)\n  let p = Cosmo.default in\n  let z0 = 0.5 in\n  let dl = Cosmo.luminosity_distance ~p (Nx.scalar f64 z0) in\n  let z_found =\n    v\n      (Cosmo.z_at_value ~p\n         (fun ~p z -> Unit.Length.to_tensor (Cosmo.luminosity_distance ~p z))\n         (Unit.Length.to_tensor dl))\n  in\n  is_true\n    ~msg:(Printf.sprintf \"z_at_value roundtrip: expected 0.5, got %.6f\" z_found)\n    (Float.abs (z_found -. z0) < 1e-6)\n\n(* AltAz tests *)\n\nlet test_altaz_zenith () =\n  (* Roundtrip: ICRS -> AltAz -> ICRS should recover RA/Dec *)\n  let obs =\n    Altaz.make_observer ~lat:(Unit.Angle.deg 0.0) ~lon:(Unit.Angle.deg 0.0) ()\n  in\n  (* A direct zenith check (alt ~ 90°) would need the exact sidereal time: at\n     J2000.0 the ERA is ~280.46°, so RA ~ 280.46° transits at lon=0. Rather\n     than depend on that, test the roundtrip. *)\n  let t = Time.of_iso \"2024-01-01T00:00:00\" in\n  let ra = Unit.Angle.of_deg (Nx.create f64 [| 1 |] [| 180.0 |]) in\n  let dec = Unit.Angle.of_deg (Nx.create f64 [| 1 |] [| 45.0 |]) in\n  let c = Coord.of_radec ~ra ~dec in\n  let hz = Altaz.of_coord ~obstime:t ~observer:obs c in\n  let back = Altaz.to_coord ~obstime:t ~observer:obs hz in\n  let ra' = Nx.item [ 0 ] (Unit.Angle.in_deg (Coord.ra back)) in\n  let dec' = Nx.item [ 0 ] (Unit.Angle.in_deg (Coord.dec back)) in\n  is_true\n    ~msg:(Printf.sprintf \"AltAz RA roundtrip: 180 vs %.4f\" ra')\n    (Float.abs (ra' -. 180.0) < 0.1);\n  is_true\n    ~msg:(Printf.sprintf \"AltAz Dec roundtrip: 45 vs %.4f\" dec')\n    (Float.abs (dec' -. 
45.0) < 0.1)\n\nlet test_altaz_north_pole () =\n  (* Polaris (dec ~ 90) should always be near alt = observer lat *)\n  let obs =\n    Altaz.make_observer ~lat:(Unit.Angle.deg 45.0) ~lon:(Unit.Angle.deg 0.0) ()\n  in\n  let t = Time.of_iso \"2024-06-15T12:00:00\" in\n  let ra = Unit.Angle.of_deg (Nx.create f64 [| 1 |] [| 37.95 |]) in\n  let dec = Unit.Angle.of_deg (Nx.create f64 [| 1 |] [| 89.264 |]) in\n  let c = Coord.of_radec ~ra ~dec in\n  let hz = Altaz.of_coord ~obstime:t ~observer:obs c in\n  let alt_deg = Nx.item [ 0 ] (Unit.Angle.in_deg (Altaz.alt hz)) in\n  is_true\n    ~msg:(Printf.sprintf \"Polaris alt ~ 45° from lat=45°, got %.1f\" alt_deg)\n    (Float.abs (alt_deg -. 45.0) < 2.0)\n\n(* Galactocentric tests *)\n\nlet test_galactocentric_gc_position () =\n  (* A point at l=0, b=0, d=galcen_distance should map to near (0, 0, z_sun) in\n     Galactocentric. *)\n  let c =\n    Coord.of_galactic\n      ~l:(Unit.Angle.of_deg (Nx.create f64 [| 1 |] [| 0.0 |]))\n      ~b:(Unit.Angle.of_deg (Nx.create f64 [| 1 |] [| 0.0 |]))\n  in\n  let gc =\n    Galactocentric.of_coord\n      ~distance:(Unit.Length.of_kpc (Nx.create f64 [| 1 |] [| 8.122 |]))\n      c\n  in\n  let xv = Nx.item [ 0 ] (Unit.Length.in_kpc (Galactocentric.x gc)) in\n  let yv = Nx.item [ 0 ] (Unit.Length.in_kpc (Galactocentric.y gc)) in\n  let zv = Nx.item [ 0 ] (Unit.Length.in_kpc (Galactocentric.z gc)) in\n  is_true\n    ~msg:(Printf.sprintf \"GC x ~ 0 kpc, got %.6f\" xv)\n    (Float.abs xv < 1e-10);\n  is_true\n    ~msg:(Printf.sprintf \"GC y ~ 0 kpc, got %.6f\" yv)\n    (Float.abs yv < 1e-10);\n  is_true\n    ~msg:(Printf.sprintf \"GC z ~ z_sun=0.0208 kpc, got %.4f\" zv)\n    (Float.abs (zv -. 
0.0208) < 1e-10)\n\nlet test_galactocentric_roundtrip () =\n  let ra = Unit.Angle.of_deg (Nx.create f64 [| 2 |] [| 180.0; 45.0 |]) in\n  let dec = Unit.Angle.of_deg (Nx.create f64 [| 2 |] [| 30.0; -15.0 |]) in\n  let c = Coord.of_radec ~ra ~dec in\n  let d = Unit.Length.of_kpc (Nx.create f64 [| 2 |] [| 5.0; 12.0 |]) in\n  let gc = Galactocentric.of_coord ~distance:d c in\n  let c', d' = Galactocentric.to_coord gc in\n  let ra' = Unit.Angle.in_deg (Coord.ra c') in\n  let dec' = Unit.Angle.in_deg (Coord.dec c') in\n  let d_kpc' = Unit.Length.in_kpc d' in\n  let ra_orig = Unit.Angle.in_deg ra in\n  let dec_orig = Unit.Angle.in_deg dec in\n  let d_orig = Unit.Length.in_kpc d in\n  for i = 0 to 1 do\n    is_true\n      ~msg:(Printf.sprintf \"Galactocentric RA roundtrip [%d]\" i)\n      (Float.abs (Nx.item [ i ] ra' -. Nx.item [ i ] ra_orig) < 0.01);\n    is_true\n      ~msg:(Printf.sprintf \"Galactocentric Dec roundtrip [%d]\" i)\n      (Float.abs (Nx.item [ i ] dec' -. Nx.item [ i ] dec_orig) < 0.01);\n    is_true\n      ~msg:(Printf.sprintf \"Galactocentric distance roundtrip [%d]\" i)\n      (Float.abs (Nx.item [ i ] d_kpc' -. Nx.item [ i ] d_orig) < 0.01)\n  done\n\n(* Cosmo: growth and power spectrum *)\n\nlet test_cosmo_growth_factor_z0 () =\n  let g = v (Cosmo.growth_factor ~p:Cosmo.planck18 (Nx.scalar f64 0.0)) in\n  is_true\n    ~msg:(Printf.sprintf \"D(z=0) = 1.0, got %.6f\" g)\n    (Float.abs (g -. 1.0) < 1e-4)\n\nlet test_cosmo_growth_factor_z1 () =\n  let g = v (Cosmo.growth_factor ~p:Cosmo.planck18 (Nx.scalar f64 1.0)) in\n  is_true\n    ~msg:(Printf.sprintf \"D(z=1) ~ 0.61, got %.4f\" g)\n    (Float.abs (g -. 0.61) < 0.02)\n\nlet test_cosmo_growth_rate_z0 () =\n  let f = v (Cosmo.growth_rate ~p:Cosmo.planck18 (Nx.scalar f64 0.0)) in\n  (* f(z=0) ~ Ω_m^0.55 ~ 0.524 for Planck18 *)\n  is_true\n    ~msg:(Printf.sprintf \"f(z=0) ~ 0.52, got %.4f\" f)\n    (Float.abs (f -. 
0.52) < 0.02)\n\nlet test_cosmo_growth_monotonic () =\n  let p = Cosmo.planck18 in\n  let d0 = v (Cosmo.growth_factor ~p (Nx.scalar f64 0.0)) in\n  let d1 = v (Cosmo.growth_factor ~p (Nx.scalar f64 0.5)) in\n  let d2 = v (Cosmo.growth_factor ~p (Nx.scalar f64 1.0)) in\n  is_true ~msg:\"D(0) > D(0.5) > D(1)\" (d0 > d1 && d1 > d2)\n\nlet test_cosmo_linear_power () =\n  let p = Cosmo.planck18 in\n  let k = Nx.scalar f64 0.1 in\n  let pk = v (Cosmo.linear_power ~p k (Nx.scalar f64 0.0)) in\n  is_true ~msg:(Printf.sprintf \"P_lin(k=0.1, z=0) > 0, got %.1f\" pk) (pk > 0.0);\n  (* P(k, z=1) should be less than P(k, z=0) *)\n  let pk1 = v (Cosmo.linear_power ~p k (Nx.scalar f64 1.0)) in\n  is_true ~msg:\"P_lin(z=1) < P_lin(z=0)\" (pk1 < pk)\n\nlet test_cosmo_nonlinear_power () =\n  let p = Cosmo.planck18 in\n  let k = Nx.scalar f64 1.0 in\n  let pk_nl = v (Cosmo.nonlinear_power ~p k (Nx.scalar f64 0.0)) in\n  let pk_lin = v (Cosmo.linear_power ~p k (Nx.scalar f64 0.0)) in\n  is_true ~msg:(Printf.sprintf \"P_nl(k=1) > 0, got %.1f\" pk_nl) (pk_nl > 0.0);\n  (* At k=1 h/Mpc, nonlinear should exceed linear *)\n  is_true\n    ~msg:(Printf.sprintf \"P_nl(k=1) > P_lin(k=1): %.1f > %.1f\" pk_nl pk_lin)\n    (pk_nl > pk_lin)\n\nlet test_cosmo_params_accessors () =\n  let p = Cosmo.planck18 in\n  let ob = v (Cosmo.omega_b p) in\n  let ns = v (Cosmo.n_s p) in\n  let s8 = v (Cosmo.sigma8 p) in\n  is_true ~msg:\"Planck18 omega_b = 0.049\" (Float.abs (ob -. 0.049) < 1e-6);\n  is_true ~msg:\"Planck18 n_s = 0.9665\" (Float.abs (ns -. 0.9665) < 1e-6);\n  is_true ~msg:\"Planck18 sigma8 = 0.8102\" (Float.abs (s8 -. 0.8102) < 1e-6)\n\n(* Survey tests *)\n\nlet test_survey_smail_normalized () =\n  let nz = Survey.smail ~a:2.0 ~b:1.5 ~z0:0.3 () in\n  let n = 1000 in\n  let zmax = Survey.nz_zmax nz in\n  let dz = zmax /. Float.of_int n in\n  let sum = ref 0.0 in\n  for i = 0 to n do\n    let z = Float.of_int i *. 
dz in\n    let nz_val = v (Survey.eval_nz nz (Nx.scalar f64 z)) in\n    let w = if i = 0 || i = n then 0.5 else 1.0 in\n    sum := !sum +. (w *. nz_val *. dz)\n  done;\n  is_true\n    ~msg:(Printf.sprintf \"smail integrates to 1.0, got %.6f\" !sum)\n    (Float.abs (!sum -. 1.0) < 1e-3)\n\nlet test_survey_tabulated () =\n  let z = Nx.create f64 [| 5 |] [| 0.0; 0.25; 0.5; 0.75; 1.0 |] in\n  let pz = Nx.create f64 [| 5 |] [| 0.0; 1.0; 2.0; 1.0; 0.0 |] in\n  let nz = Survey.tabulated ~z ~pz () in\n  let mid = v (Survey.eval_nz nz (Nx.scalar f64 0.5)) in\n  is_true ~msg:\"tabulated mid > 0\" (mid > 0.0);\n  let out = v (Survey.eval_nz nz (Nx.scalar f64 1.5)) in\n  is_true ~msg:\"tabulated outside = 0\" (Float.abs out < eps)\n\nlet test_survey_cl_shape () =\n  let p = Cosmo.planck18 in\n  let nz1 = Survey.smail ~a:2.0 ~b:1.5 ~z0:0.3 () in\n  let nz2 = Survey.smail ~a:2.0 ~b:1.5 ~z0:0.7 () in\n  let wl1 = Survey.weak_lensing ~n_gal:26.0 nz1 in\n  let wl2 = Survey.weak_lensing ~n_gal:26.0 nz2 in\n  let ell = Nx.create f64 [| 3 |] [| 100.0; 300.0; 1000.0 |] in\n  let cls = Survey.angular_cl ~p ~power:Survey.linear ~ell [ wl1; wl2 ] in\n  let shape = Nx.shape (Survey.Cls.to_tensor cls) in\n  is_true\n    ~msg:(Printf.sprintf \"C_l shape = [3; 3], got [%d; %d]\" shape.(0) shape.(1))\n    (shape.(0) = 3 && shape.(1) = 3);\n  is_true\n    ~msg:(Printf.sprintf \"n_tracers = 2, got %d\" (Survey.Cls.n_tracers cls))\n    (Survey.Cls.n_tracers cls = 2)\n\nlet test_survey_cl_positive () =\n  let p = Cosmo.planck18 in\n  let nz1 = Survey.smail ~a:2.0 ~b:1.5 ~z0:0.5 () in\n  let wl = Survey.weak_lensing ~n_gal:26.0 nz1 in\n  let ell = Nx.create f64 [| 3 |] [| 100.0; 500.0; 1000.0 |] in\n  let cls = Survey.angular_cl ~p ~power:Survey.linear ~ell [ wl ] in\n  let cl_auto = Survey.Cls.get cls ~i:0 ~j:0 in\n  for l = 0 to 2 do\n    let cl_val = Nx.item [ l ] cl_auto in\n    is_true ~msg:(Printf.sprintf \"C_l[%d] = %.2e > 0\" l cl_val) (cl_val > 0.0)\n  done\n\nlet test_survey_noise_wl () 
=\n  let sigma_e = 0.26 in\n  let n_gal = 30.0 in\n  let nz1 = Survey.smail ~a:2.0 ~b:1.5 ~z0:0.3 () in\n  let wl = Survey.weak_lensing ~sigma_e ~n_gal nz1 in\n  let ell = Nx.create f64 [| 3 |] [| 100.0; 500.0; 1000.0 |] in\n  let cls = Survey.angular_cl ~ell [ wl ] in\n  let nl = Survey.Cls.noise cls in\n  let n0 = Nx.item [ 0; 0 ] nl in\n  let n1 = Nx.item [ 0; 1 ] nl in\n  let n2 = Nx.item [ 0; 2 ] nl in\n  is_true ~msg:\"WL noise > 0\" (n0 > 0.0);\n  is_true\n    ~msg:(Printf.sprintf \"WL noise constant in ℓ: %.2e vs %.2e\" n0 n1)\n    (Float.abs (n0 -. n1) < 1e-20);\n  is_true ~msg:\"WL noise constant in ℓ (2)\" (Float.abs (n1 -. n2) < 1e-20)\n\n(* Spectrum: mul/div *)\n\nlet test_spectrum_mul () =\n  let wave = Unit.Length.of_m (Nx.linspace f64 3e-7 1e-6 10) in\n  let values =\n    Nx.create f64 [| 10 |]\n      [| 1.0; 2.0; 3.0; 4.0; 5.0; 6.0; 7.0; 8.0; 9.0; 10.0 |]\n  in\n  let a =\n    Spectrum.create ~wavelength:wave ~values |> Spectrum.as_flux_density\n  in\n  let trans =\n    Nx.create f64 [| 10 |]\n      [| 0.5; 0.5; 0.5; 0.5; 0.5; 0.5; 0.5; 0.5; 0.5; 0.5 |]\n  in\n  let b = Spectrum.create ~wavelength:wave ~values:trans in\n  let result = Spectrum.mul a b in\n  is_true ~msg:\"mul: 2.0 * 0.5 = 1.0\"\n    (Float.abs (Nx.item [ 1 ] (Spectrum.values result) -. 1.0) < eps);\n  is_true ~msg:\"mul: 10.0 * 0.5 = 5.0\"\n    (Float.abs (Nx.item [ 9 ] (Spectrum.values result) -. 
5.0) < eps)\n\nlet test_spectrum_div () =\n  let wave = Unit.Length.of_m (Nx.linspace f64 3e-7 1e-6 10) in\n  let values =\n    Nx.create f64 [| 10 |]\n      [| 1.0; 2.0; 3.0; 4.0; 5.0; 6.0; 7.0; 8.0; 9.0; 10.0 |]\n  in\n  let a =\n    Spectrum.create ~wavelength:wave ~values |> Spectrum.as_flux_density\n  in\n  let flat =\n    Nx.create f64 [| 10 |]\n      [| 2.0; 2.0; 2.0; 2.0; 2.0; 2.0; 2.0; 2.0; 2.0; 2.0 |]\n  in\n  let b = Spectrum.create ~wavelength:wave ~values:flat in\n  let result = Spectrum.div a b in\n  is_true ~msg:\"div: 4.0 / 2.0 = 2.0\"\n    (Float.abs (Nx.item [ 3 ] (Spectrum.values result) -. 2.0) < eps);\n  is_true ~msg:\"div: 10.0 / 2.0 = 5.0\"\n    (Float.abs (Nx.item [ 9 ] (Spectrum.values result) -. 5.0) < eps)\n\nlet test_spectrum_mul_div_roundtrip () =\n  let wave = Unit.Length.of_m (Nx.linspace f64 3e-7 1e-6 50) in\n  let temp = Unit.Temperature.of_kelvin (Nx.scalar f64 6000.0) in\n  let spec =\n    Spectrum.blackbody ~temperature:temp ~wavelength:wave\n    |> Spectrum.as_flux_density\n  in\n  let trans_vals =\n    Nx.create f64 [| 50 |]\n      (Array.init 50 (fun i ->\n           0.5 +. (0.3 *. Float.sin (Float.of_int i *. 0.2))))\n  in\n  let trans = Spectrum.create ~wavelength:wave ~values:trans_vals in\n  let mulled = Spectrum.mul spec trans in\n  let recovered = Spectrum.div mulled trans in\n  let orig_val = Nx.item [ 25 ] (Spectrum.values spec) in\n  let rec_val = Nx.item [ 25 ] (Spectrum.values recovered) in\n  is_true ~msg:\"mul then div roundtrip\"\n    (Float.abs (rec_val -. orig_val) /. 
orig_val < 1e-10)\n\n(* Spectrum: line profiles *)\n\nlet test_spectrum_gaussian_peak () =\n  let wave = Unit.Length.of_m (Nx.linspace f64 6.4e-7 6.7e-7 1000) in\n  let center = Unit.Length.nm 656.3 in\n  let stddev = Unit.Length.nm 1.0 in\n  let amplitude = Nx.scalar f64 1.0 in\n  let g = Spectrum.gaussian ~amplitude ~center ~stddev ~wavelength:wave in\n  let vals = Spectrum.values g in\n  let peak_idx = ref 0 in\n  let peak_val = ref (Nx.item [ 0 ] vals) in\n  for i = 1 to 999 do\n    let vi = Nx.item [ i ] vals in\n    if vi > !peak_val then begin\n      peak_val := vi;\n      peak_idx := i\n    end\n  done;\n  let wave_m = Unit.Length.in_m (Spectrum.wavelength g) in\n  let peak_lam_nm = Nx.item [ !peak_idx ] wave_m *. 1e9 in\n  is_true\n    ~msg:(Printf.sprintf \"Gaussian peak near 656.3 nm, got %.1f\" peak_lam_nm)\n    (Float.abs (peak_lam_nm -. 656.3) < 0.5);\n  is_true\n    ~msg:(Printf.sprintf \"Gaussian peak amplitude ~ 1.0, got %.4f\" !peak_val)\n    (Float.abs (!peak_val -. 1.0) < 0.01)\n\nlet test_spectrum_lorentzian_peak () =\n  let wave = Unit.Length.of_m (Nx.linspace f64 4.8e-7 5.2e-7 1000) in\n  let center = Unit.Length.nm 500.0 in\n  let fwhm = Unit.Length.nm 2.0 in\n  let amplitude = Nx.scalar f64 3.0 in\n  let l = Spectrum.lorentzian ~amplitude ~center ~fwhm ~wavelength:wave in\n  let vals = Spectrum.values l in\n  let peak_idx = ref 0 in\n  let peak_val = ref (Nx.item [ 0 ] vals) in\n  for i = 1 to 999 do\n    let vi = Nx.item [ i ] vals in\n    if vi > !peak_val then begin\n      peak_val := vi;\n      peak_idx := i\n    end\n  done;\n  let wave_m = Unit.Length.in_m (Spectrum.wavelength l) in\n  let peak_lam_nm = Nx.item [ !peak_idx ] wave_m *. 1e9 in\n  is_true\n    ~msg:(Printf.sprintf \"Lorentzian peak near 500 nm, got %.1f\" peak_lam_nm)\n    (Float.abs (peak_lam_nm -. 500.0) < 0.5);\n  is_true\n    ~msg:(Printf.sprintf \"Lorentzian peak ~ 3.0, got %.4f\" !peak_val)\n    (Float.abs (!peak_val -. 
3.0) < 0.05)\n\nlet test_spectrum_voigt_limits () =\n  let wave = Unit.Length.of_m (Nx.linspace f64 4.8e-7 5.2e-7 1000) in\n  let center = Unit.Length.nm 500.0 in\n  let amplitude = Nx.scalar f64 1.0 in\n  (* Gaussian limit: sigma >> gamma *)\n  let sigma_big = Unit.Length.nm 2.0 in\n  let gamma_tiny = Unit.Length.nm 0.001 in\n  let voigt_g =\n    Spectrum.voigt ~amplitude ~center ~sigma:sigma_big ~gamma:gamma_tiny\n      ~wavelength:wave\n  in\n  let gauss =\n    Spectrum.gaussian ~amplitude ~center ~stddev:sigma_big ~wavelength:wave\n  in\n  let vg_peak = ref 0.0 in\n  let g_peak = ref 0.0 in\n  for i = 0 to 999 do\n    let vv = Nx.item [ i ] (Spectrum.values voigt_g) in\n    let gv = Nx.item [ i ] (Spectrum.values gauss) in\n    if vv > !vg_peak then vg_peak := vv;\n    if gv > !g_peak then g_peak := gv\n  done;\n  is_true\n    ~msg:\n      (Printf.sprintf \"Voigt(sigma>>gamma) peak ~ Gaussian peak: %.4f vs %.4f\"\n         !vg_peak !g_peak)\n    (Float.abs (!vg_peak -. !g_peak) /. !g_peak < 0.05)\n\nlet test_spectrum_line_composability () =\n  let wave = Unit.Length.of_m (Nx.linspace f64 6e-7 7e-7 500) in\n  let continuum =\n    Spectrum.power_law ~amplitude:(Nx.scalar f64 1e-15)\n      ~index:(Nx.scalar f64 (-2.0)) ~pivot:(Unit.Length.nm 650.0)\n      ~wavelength:wave\n  in\n  let ha =\n    Spectrum.gaussian ~amplitude:(Nx.scalar f64 1e-15)\n      ~center:(Unit.Length.nm 656.3) ~stddev:(Unit.Length.nm 0.5)\n      ~wavelength:wave\n  in\n  let composite = Spectrum.add continuum ha in\n  let cont_val = Nx.item [ 0 ] (Spectrum.values continuum) in\n  let comp_val = Nx.item [ 0 ] (Spectrum.values composite) in\n  is_true ~msg:\"Composite spectrum at wing ~ continuum\"\n    (Float.abs (comp_val -. cont_val) /. 
cont_val < 0.01)\n\n(* Altaz: airmass *)\n\nlet test_altaz_airmass_zenith () =\n  let hz =\n    Altaz.of_coord\n      ~obstime:(Time.of_iso \"2024-06-21T12:00:00\")\n      ~observer:\n        (Altaz.make_observer ~lat:(Unit.Angle.deg 45.0)\n           ~lon:(Unit.Angle.deg 0.0) ())\n      (Coord.of_radec\n         ~ra:(Unit.Angle.of_deg (Nx.create f64 [| 1 |] [| 0.0 |]))\n         ~dec:(Unit.Angle.of_deg (Nx.create f64 [| 1 |] [| 89.0 |])))\n  in\n  let x = Altaz.airmass hz in\n  let x0 = Nx.item [ 0 ] x in\n  is_true ~msg:(Printf.sprintf \"Airmass >= 1.0, got %.4f\" x0) (x0 >= 1.0)\n\nlet test_altaz_airmass_low_alt () =\n  let obs =\n    Altaz.make_observer ~lat:(Unit.Angle.deg 30.0) ~lon:(Unit.Angle.deg 0.0) ()\n  in\n  let t = Time.of_iso \"2024-06-21T22:00:00\" in\n  let star_a =\n    Coord.of_radec\n      ~ra:(Unit.Angle.of_deg (Nx.create f64 [| 1 |] [| 0.0 |]))\n      ~dec:(Unit.Angle.of_deg (Nx.create f64 [| 1 |] [| 80.0 |]))\n  in\n  let star_b =\n    Coord.of_radec\n      ~ra:(Unit.Angle.of_deg (Nx.create f64 [| 1 |] [| 180.0 |]))\n      ~dec:(Unit.Angle.of_deg (Nx.create f64 [| 1 |] [| 10.0 |]))\n  in\n  let hz_a = Altaz.of_coord ~obstime:t ~observer:obs star_a in\n  let hz_b = Altaz.of_coord ~obstime:t ~observer:obs star_b in\n  let x_a = Nx.item [ 0 ] (Altaz.airmass hz_a) in\n  let x_b = Nx.item [ 0 ] (Altaz.airmass hz_b) in\n  is_true\n    ~msg:(Printf.sprintf \"Both airmasses >= 1: %.2f, %.2f\" x_a x_b)\n    (x_a >= 1.0 && x_b >= 1.0);\n  is_true\n    ~msg:(Printf.sprintf \"Different airmasses: %.2f vs %.2f\" x_a x_b)\n    (Float.abs (x_a -. x_b) > 0.01)\n\n(* Cosmo: BAO distances *)\n\nlet test_cosmo_dh () =\n  let p = Cosmo.planck18 in\n  let z = Nx.scalar f64 0.0 in\n  let dh0 = v (Unit.Length.in_mpc (Cosmo.dh ~p z)) in\n  let h0 = Nx.item [] (Cosmo.h0 p) in\n  let expected = 299792.458 /. h0 in\n  is_true\n    ~msg:(Printf.sprintf \"D_H(0) = c/H0 ~ %.1f Mpc, got %.1f\" expected dh0)\n    (Float.abs (dh0 -. expected) /. 
expected < 1e-4)\n\nlet test_cosmo_dm_flat () =\n  let p = Cosmo.planck18 in\n  let z = Nx.scalar f64 0.5 in\n  let dm_val = v (Unit.Length.in_mpc (Cosmo.dm ~p z)) in\n  let dc_val = v (Unit.Length.in_mpc (Cosmo.comoving_distance ~p z)) in\n  is_true\n    ~msg:(Printf.sprintf \"D_M = D_C for flat: %.1f vs %.1f\" dm_val dc_val)\n    (Float.abs (dm_val -. dc_val) /. dc_val < 1e-4)\n\nlet test_cosmo_dv () =\n  let p = Cosmo.planck18 in\n  let z = Nx.scalar f64 0.5 in\n  let dv_val = v (Unit.Length.in_mpc (Cosmo.dv ~p z)) in\n  is_true ~msg:(Printf.sprintf \"D_V(0.5) > 0, got %.1f\" dv_val) (dv_val > 0.0);\n  let dh_val = v (Unit.Length.in_mpc (Cosmo.dh ~p z)) in\n  let dm_val = v (Unit.Length.in_mpc (Cosmo.dm ~p z)) in\n  let z_f = 0.5 in\n  let expected = (z_f *. dh_val *. dm_val *. dm_val) ** (1.0 /. 3.0) in\n  is_true\n    ~msg:\n      (Printf.sprintf \"D_V = (z D_H D_M^2)^{1/3}: %.1f vs %.1f\" dv_val expected)\n    (Float.abs (dv_val -. expected) /. expected < 1e-3)\n\nlet test_cosmo_sound_horizon () =\n  let p = Cosmo.planck18 in\n  let rs = v (Unit.Length.in_mpc (Cosmo.sound_horizon ~p ())) in\n  is_true\n    ~msg:(Printf.sprintf \"r_s(Planck18) ~ 147 Mpc, got %.1f\" rs)\n    (Float.abs (rs -. 147.0) < 5.0)\n\n(* Filters *)\n\nlet test_filters_sdss_pivot () =\n  let bp = Filters.sdss_r in\n  let lam_p = v (Unit.Length.in_nm (Photometry.pivot_wavelength bp)) in\n  is_true\n    ~msg:(Printf.sprintf \"SDSS r pivot ~ 620 nm, got %.0f\" lam_p)\n    (Float.abs (lam_p -. 620.0) < 30.0)\n\nlet test_filters_johnson_v_pivot () =\n  let bp = Filters.johnson_v in\n  let lam_p = v (Unit.Length.in_nm (Photometry.pivot_wavelength bp)) in\n  is_true\n    ~msg:(Printf.sprintf \"Johnson V pivot ~ 551 nm, got %.0f\" lam_p)\n    (Float.abs (lam_p -. 
551.0) < 20.0)\n\nlet test_filters_twomass_j_pivot () =\n  let bp = Filters.twomass_j in\n  let lam_p = v (Unit.Length.in_nm (Photometry.pivot_wavelength bp)) in\n  is_true\n    ~msg:(Printf.sprintf \"2MASS J pivot ~ 1235 nm, got %.0f\" lam_p)\n    (Float.abs (lam_p -. 1235.0) < 30.0)\n\nlet test_filters_gaia_ordering () =\n  let bp_p =\n    v (Unit.Length.in_nm (Photometry.pivot_wavelength Filters.gaia_bp))\n  in\n  let g_p =\n    v (Unit.Length.in_nm (Photometry.pivot_wavelength Filters.gaia_g))\n  in\n  let rp_p =\n    v (Unit.Length.in_nm (Photometry.pivot_wavelength Filters.gaia_rp))\n  in\n  is_true\n    ~msg:(Printf.sprintf \"Gaia: BP < G < RP: %.0f < %.0f < %.0f\" bp_p g_p rp_p)\n    (bp_p < g_p && g_p < rp_p)\n\nlet test_filters_photometry () =\n  let bp = Filters.sdss_g in\n  let wave = Photometry.wavelength bp in\n  let temp = Unit.Temperature.of_kelvin (Nx.scalar f64 5800.0) in\n  let sed =\n    Spectrum.blackbody ~temperature:temp ~wavelength:wave\n    |> Spectrum.as_flux_density\n  in\n  let mag = Nx.item [] (Photometry.ab_mag bp sed) in\n  is_true\n    ~msg:(Printf.sprintf \"BB(5800K) through SDSS g is finite, got %.2f\" mag)\n    (Float.is_finite mag)\n\n(* Photometry: auto-resample *)\n\nlet test_photometry_auto_resample () =\n  let bp = Filters.sdss_g in\n  let temp = Unit.Temperature.of_kelvin (Nx.scalar f64 5800.0) in\n  let wave_fine = Unit.Length.of_m (Nx.linspace f64 3e-7 1.1e-6 1000) in\n  let sed =\n    Spectrum.blackbody ~temperature:temp ~wavelength:wave_fine\n    |> Spectrum.as_flux_density\n  in\n  let mag = Nx.item [] (Photometry.ab_mag bp sed) in\n  is_true\n    ~msg:\n      (Printf.sprintf \"Auto-resample: BB(5800K) through SDSS g finite, got %.2f\"\n         mag)\n    (Float.is_finite mag);\n  let manual = Spectrum.resample ~wavelength:(Photometry.wavelength bp) sed in\n  let mag_manual = Nx.item [] (Photometry.ab_mag bp manual) in\n  is_true\n    ~msg:\n      (Printf.sprintf \"Auto-resample matches manual: %.4f vs %.4f\" mag\n      
   mag_manual)\n    (Float.abs (mag -. mag_manual) < 1e-10)\n\n(* Photometry: ST magnitude *)\n\nlet test_photometry_st_mag () =\n  let bp =\n    Photometry.tophat ~lo:(Unit.Length.nm 400.0) ~hi:(Unit.Length.nm 700.0)\n      ~n:100\n  in\n  let wave = Photometry.wavelength bp in\n  let temp = Unit.Temperature.of_kelvin (Nx.scalar f64 5800.0) in\n  let sed =\n    Spectrum.blackbody ~temperature:temp ~wavelength:wave\n    |> Spectrum.as_flux_density\n  in\n  let st = Nx.item [] (Photometry.st_mag bp sed) in\n  let ab = Nx.item [] (Photometry.ab_mag bp sed) in\n  is_true ~msg:(Printf.sprintf \"ST mag is finite: %.2f\" st) (Float.is_finite st);\n  is_true\n    ~msg:(Printf.sprintf \"ST and AB differ: ST=%.2f AB=%.2f\" st ab)\n    (Float.abs (st -. ab) > 0.01)\n\n(* Photometry: Vega magnitude *)\n\nlet test_photometry_vega_mag () =\n  let bp = Filters.johnson_v in\n  let wave = Photometry.wavelength bp in\n  let temp = Unit.Temperature.of_kelvin (Nx.scalar f64 9600.0) in\n  let sed =\n    Spectrum.blackbody ~temperature:temp ~wavelength:wave\n    |> Spectrum.as_flux_density\n  in\n  let vm = Nx.item [] (Photometry.vega_mag bp sed) in\n  is_true\n    ~msg:(Printf.sprintf \"Vega mag of hot BB through V is finite: %.2f\" vm)\n    (Float.is_finite vm);\n  let ab = Nx.item [] (Photometry.ab_mag bp sed) in\n  is_true\n    ~msg:(Printf.sprintf \"Vega and AB differ: V=%.2f AB=%.2f\" vm ab)\n    (Float.abs (vm -. 
ab) > 0.001)\n\n(* Photometry: effective wavelength *)\n\nlet test_photometry_effective_wavelength () =\n  let bp =\n    Photometry.tophat ~lo:(Unit.Length.nm 400.0) ~hi:(Unit.Length.nm 700.0)\n      ~n:100\n  in\n  let wave = Photometry.wavelength bp in\n  let flat_vals = Nx.ones f64 [| 100 |] in\n  let flat =\n    Spectrum.create ~wavelength:wave ~values:flat_vals\n    |> Spectrum.as_flux_density\n  in\n  let lam_eff =\n    v (Unit.Length.in_nm (Photometry.effective_wavelength bp flat))\n  in\n  let lam_pivot = v (Unit.Length.in_nm (Photometry.pivot_wavelength bp)) in\n  is_true\n    ~msg:\n      (Printf.sprintf \"Flat spectrum: eff_wavelength in range: %.1f nm\" lam_eff)\n    (lam_eff > 500.0 && lam_eff < 600.0);\n  is_true\n    ~msg:\n      (Printf.sprintf \"eff_wavelength >= pivot for flat/tophat: %.1f vs %.1f\"\n         lam_eff lam_pivot)\n    (lam_eff >= lam_pivot)\n\n(* Altaz: atmospheric refraction *)\n\nlet test_altaz_refraction () =\n  let obs =\n    Altaz.make_observer ~lat:(Unit.Angle.deg 45.0) ~lon:(Unit.Angle.deg 0.0) ()\n  in\n  let t = Time.of_iso \"2024-06-15T12:00:00\" in\n  let ra = Unit.Angle.of_deg (Nx.create f64 [| 1 |] [| 37.95 |]) in\n  let dec = Unit.Angle.of_deg (Nx.create f64 [| 1 |] [| 89.264 |]) in\n  let c = Coord.of_radec ~ra ~dec in\n  let hz_no = Altaz.of_coord ~refraction:false ~obstime:t ~observer:obs c in\n  let hz_yes = Altaz.of_coord ~refraction:true ~obstime:t ~observer:obs c in\n  let alt_no = Nx.item [ 0 ] (Unit.Angle.in_deg (Altaz.alt hz_no)) in\n  let alt_yes = Nx.item [ 0 ] (Unit.Angle.in_deg (Altaz.alt hz_yes)) in\n  (* Refraction makes objects appear higher *)\n  is_true\n    ~msg:\n      (Printf.sprintf \"Refraction raises altitude: %.4f > %.4f\" alt_yes alt_no)\n    (alt_yes > alt_no);\n  (* At ~45° alt, refraction is ~1 arcmin = 0.017° *)\n  let diff = alt_yes -. 
alt_no in\n  is_true\n    ~msg:(Printf.sprintf \"Refraction at ~45° is small (< 0.1°): %.4f\" diff)\n    (diff > 0.0 && diff < 0.1)\n\nlet test_altaz_refraction_standalone () =\n  let obs =\n    Altaz.make_observer ~lat:(Unit.Angle.deg 45.0) ~lon:(Unit.Angle.deg 0.0) ()\n  in\n  let t = Time.of_iso \"2024-06-15T12:00:00\" in\n  let ra = Unit.Angle.of_deg (Nx.create f64 [| 1 |] [| 37.95 |]) in\n  let dec = Unit.Angle.of_deg (Nx.create f64 [| 1 |] [| 89.264 |]) in\n  let c = Coord.of_radec ~ra ~dec in\n  let hz = Altaz.of_coord ~obstime:t ~observer:obs c in\n  let r = Altaz.refraction hz in\n  let r_arcmin = Nx.item [ 0 ] (Unit.Angle.in_deg r) *. 60.0 in\n  is_true\n    ~msg:(Printf.sprintf \"Refraction > 0 arcmin: %.2f\" r_arcmin)\n    (r_arcmin > 0.0);\n  is_true\n    ~msg:(Printf.sprintf \"Refraction < 2 arcmin at high alt: %.2f\" r_arcmin)\n    (r_arcmin < 2.0)\n\n(* Survey: shear multiplicative bias *)\n\nlet test_survey_m_bias () =\n  let nz = Survey.smail ~a:2.0 ~b:1.5 ~z0:0.5 () in\n  let ell = Nx.logspace f64 1.0 3.0 20 in\n  let wl_no_bias = Survey.weak_lensing ~n_gal:26.0 nz in\n  let wl_with_bias = Survey.weak_lensing ~m_bias:0.02 ~n_gal:26.0 nz in\n  let cls_no = Survey.angular_cl ~ell [ wl_no_bias ] in\n  let cls_yes = Survey.angular_cl ~ell [ wl_with_bias ] in\n  let cl_no = Survey.Cls.get cls_no ~i:0 ~j:0 in\n  let cl_yes = Survey.Cls.get cls_yes ~i:0 ~j:0 in\n  (* Auto-spectrum scales as (1+m)^2 = 1.0404 *)\n  let ratio = Nx.item [ 10 ] (Nx.div cl_yes cl_no) in\n  let expected = 1.02 *. 1.02 in\n  is_true\n    ~msg:\n      (Printf.sprintf\n         \"m_bias=0.02 scales auto-Cl by (1+m)^2: ratio=%.4f vs %.4f\" ratio\n         expected)\n    (Float.abs (ratio -. 
expected) < 1e-4);\n  (* m_bias=0.0 gives same result as no bias *)\n  let wl_zero_bias = Survey.weak_lensing ~m_bias:0.0 ~n_gal:26.0 nz in\n  let cls_zero = Survey.angular_cl ~ell [ wl_zero_bias ] in\n  let cl_zero = Survey.Cls.get cls_zero ~i:0 ~j:0 in\n  let diff = Nx.item [] (Nx.max (Nx.abs (Nx.sub cl_zero cl_no))) in\n  is_true\n    ~msg:(Printf.sprintf \"m_bias=0.0 matches no bias: max_diff=%.2e\" diff)\n    (diff < 1e-30)\n\n(* Spectrum: differentiable resample *)\n\nlet test_spectrum_resample_values () =\n  let wave = Unit.Length.of_m (Nx.linspace f64 1e-7 1e-5 100) in\n  let sed =\n    Spectrum.blackbody\n      ~temperature:(Unit.Temperature.of_kelvin (Nx.scalar f64 5800.0))\n      ~wavelength:wave\n  in\n  let new_wave = Unit.Length.of_m (Nx.linspace f64 2e-7 9e-6 50) in\n  let resampled = Spectrum.resample ~wavelength:new_wave sed in\n  let vals = Spectrum.values resampled in\n  let n = Nx.numel vals in\n  is_true ~msg:(Printf.sprintf \"Resampled has %d points\" n) (n = 50);\n  let v0 = Nx.item [ 0 ] vals in\n  is_true ~msg:(Printf.sprintf \"Resampled values positive: %.2e\" v0) (v0 > 0.0);\n  let vmax = Nx.item [] (Nx.max vals) in\n  is_true\n    ~msg:(Printf.sprintf \"Resampled max is finite: %.2e\" vmax)\n    (Float.is_finite vmax)\n\n(* Survey: baryonic feedback *)\n\nlet test_survey_baryonic_feedback () =\n  let nz = Survey.smail ~a:2.0 ~b:1.5 ~z0:0.5 () in\n  let ell = Nx.logspace f64 1.0 3.0 20 in\n  let wl = Survey.weak_lensing ~n_gal:26.0 nz in\n  let cls_dm = Survey.angular_cl ~power:Survey.nonlinear ~ell [ wl ] in\n  let power_bary = Survey.baryonic_feedback ~a_bary:0.2 Survey.nonlinear in\n  let cls_bary = Survey.angular_cl ~power:power_bary ~ell [ wl ] in\n  let cl_dm = Survey.Cls.get cls_dm ~i:0 ~j:0 in\n  let cl_bary = Survey.Cls.get cls_bary ~i:0 ~j:0 in\n  (* Baryonic feedback suppresses small-scale (high-ell) power *)\n  let ratio_high = Nx.item [ 19 ] (Nx.div cl_bary cl_dm) in\n  is_true\n    ~msg:\n      (Printf.sprintf \"Baryonic 
suppression at high ell: ratio=%.4f < 1\"\n         ratio_high)\n    (ratio_high < 1.0);\n  (* a_bary=0 gives same result as no feedback *)\n  let power_zero = Survey.baryonic_feedback ~a_bary:0.0 Survey.nonlinear in\n  let cls_zero = Survey.angular_cl ~power:power_zero ~ell [ wl ] in\n  let cl_zero = Survey.Cls.get cls_zero ~i:0 ~j:0 in\n  let diff = Nx.item [] (Nx.max (Nx.abs (Nx.sub cl_zero cl_dm))) in\n  is_true\n    ~msg:(Printf.sprintf \"a_bary=0 matches DM-only: max_diff=%.2e\" diff)\n    (diff < 1e-30)\n\n(* Batched spectra *)\n\nlet test_batch_create () =\n  let wave = Unit.Length.of_m (Nx.linspace f64 1e-7 1e-6 100) in\n  let v1 = Nx.ones f64 [| 100 |] in\n  let v2 = Nx.full f64 [| 100 |] 2.0 in\n  let values = Nx.stack [ v1; v2 ] in\n  let s =\n    Spectrum.create ~wavelength:wave ~values |> Spectrum.as_flux_density\n  in\n  let sh = Nx.shape (Spectrum.values s) in\n  is_true ~msg:\"batch values shape [2; 100]\"\n    (Array.length sh = 2 && sh.(0) = 2 && sh.(1) = 100)\n\nlet test_batch_resample () =\n  let wave = Unit.Length.of_m (Nx.linspace f64 1e-7 1e-6 100) in\n  let wave2 = Unit.Length.of_m (Nx.linspace f64 2e-7 8e-7 50) in\n  let bb1 =\n    Spectrum.blackbody\n      ~temperature:(Unit.Temperature.of_kelvin (Nx.scalar f64 5000.0))\n      ~wavelength:wave\n    |> Spectrum.as_sampled\n  in\n  let bb2 =\n    Spectrum.blackbody\n      ~temperature:(Unit.Temperature.of_kelvin (Nx.scalar f64 8000.0))\n      ~wavelength:wave\n    |> Spectrum.as_sampled\n  in\n  let values = Nx.stack [ Spectrum.values bb1; Spectrum.values bb2 ] in\n  let batch = Spectrum.create ~wavelength:wave ~values |> Spectrum.as_sampled in\n  let resampled = Spectrum.resample ~wavelength:wave2 batch in\n  let r_shape = Nx.shape (Spectrum.values resampled) in\n  is_true ~msg:\"batch resample shape [2; 50]\"\n    (Array.length r_shape = 2 && r_shape.(0) = 2 && r_shape.(1) = 50);\n  let r1 = Spectrum.resample ~wavelength:wave2 bb1 in\n  let r2 = Spectrum.resample ~wavelength:wave2 bb2 
in\n  let expected = Nx.stack [ Spectrum.values r1; Spectrum.values r2 ] in\n  let diff =\n    Nx.item [] (Nx.max (Nx.abs (Nx.sub (Spectrum.values resampled) expected)))\n  in\n  is_true\n    ~msg:\n      (Printf.sprintf \"batch resample matches individual: max_diff=%.2e\" diff)\n    (diff < 1e-20)\n\nlet test_batch_ab_mag () =\n  let wave = Unit.Length.of_m (Nx.linspace f64 3e-7 8e-7 200) in\n  let bp =\n    Photometry.tophat ~lo:(Unit.Length.nm 400.0) ~hi:(Unit.Length.nm 600.0)\n      ~n:100\n  in\n  let bb1 =\n    Spectrum.blackbody\n      ~temperature:(Unit.Temperature.of_kelvin (Nx.scalar f64 5000.0))\n      ~wavelength:wave\n    |> Spectrum.as_flux_density\n  in\n  let bb2 =\n    Spectrum.blackbody\n      ~temperature:(Unit.Temperature.of_kelvin (Nx.scalar f64 8000.0))\n      ~wavelength:wave\n    |> Spectrum.as_flux_density\n  in\n  let values = Nx.stack [ Spectrum.values bb1; Spectrum.values bb2 ] in\n  let batch =\n    Spectrum.create ~wavelength:wave ~values |> Spectrum.as_flux_density\n  in\n  let mags_batch = Photometry.ab_mag bp batch in\n  let mag1 = Photometry.ab_mag bp bb1 in\n  let mag2 = Photometry.ab_mag bp bb2 in\n  let expected = Nx.stack [ mag1; mag2 ] in\n  let diff = Nx.item [] (Nx.max (Nx.abs (Nx.sub mags_batch expected))) in\n  is_true\n    ~msg:(Printf.sprintf \"batch ab_mag matches individual: max_diff=%.2e\" diff)\n    (diff < 1e-10)\n\nlet test_batch_extinction () =\n  let wave = Unit.Length.of_m (Nx.linspace f64 3e-7 8e-7 200) in\n  let rv = Nx.scalar f64 3.1 in\n  let av = Nx.scalar f64 0.5 in\n  let bb1 =\n    Spectrum.blackbody\n      ~temperature:(Unit.Temperature.of_kelvin (Nx.scalar f64 5000.0))\n      ~wavelength:wave\n    |> Spectrum.as_flux_density\n  in\n  let bb2 =\n    Spectrum.blackbody\n      ~temperature:(Unit.Temperature.of_kelvin (Nx.scalar f64 8000.0))\n      ~wavelength:wave\n    |> Spectrum.as_flux_density\n  in\n  let values = Nx.stack [ Spectrum.values bb1; Spectrum.values bb2 ] in\n  let batch =\n    
Spectrum.create ~wavelength:wave ~values |> Spectrum.as_flux_density\n  in\n  let reddened = Extinction.apply (Extinction.ccm89 ~rv) ~av batch in\n  let r1 = Extinction.apply (Extinction.ccm89 ~rv) ~av bb1 in\n  let r2 = Extinction.apply (Extinction.ccm89 ~rv) ~av bb2 in\n  let expected = Nx.stack [ Spectrum.values r1; Spectrum.values r2 ] in\n  let diff =\n    Nx.item [] (Nx.max (Nx.abs (Nx.sub (Spectrum.values reddened) expected)))\n  in\n  is_true\n    ~msg:\n      (Printf.sprintf \"batch extinction matches individual: max_diff=%.2e\" diff)\n    (diff < 1e-25)\n\nlet test_batch_scale () =\n  let wave = Unit.Length.of_m (Nx.linspace f64 1e-7 1e-6 100) in\n  let v1 = Nx.ones f64 [| 100 |] in\n  let v2 = Nx.full f64 [| 100 |] 2.0 in\n  let values = Nx.stack [ v1; v2 ] in\n  let batch = Spectrum.create ~wavelength:wave ~values in\n  let scaled = Spectrum.scale (Nx.scalar f64 3.0) batch in\n  let sv = Spectrum.values scaled in\n  let expected =\n    Nx.stack [ Nx.full f64 [| 100 |] 3.0; Nx.full f64 [| 100 |] 6.0 ]\n  in\n  let diff = Nx.item [] (Nx.max (Nx.abs (Nx.sub sv expected))) in\n  is_true ~msg:\"batch scalar scale\" (diff < 1e-15)\n\nlet test_batch_redshift_scalar () =\n  let wave = Unit.Length.of_m (Nx.linspace f64 1e-7 1e-6 100) in\n  let bb1 =\n    Spectrum.blackbody\n      ~temperature:(Unit.Temperature.of_kelvin (Nx.scalar f64 5000.0))\n      ~wavelength:wave\n    |> Spectrum.as_flux_density\n  in\n  let bb2 =\n    Spectrum.blackbody\n      ~temperature:(Unit.Temperature.of_kelvin (Nx.scalar f64 8000.0))\n      ~wavelength:wave\n    |> Spectrum.as_flux_density\n  in\n  let values = Nx.stack [ Spectrum.values bb1; Spectrum.values bb2 ] in\n  let batch =\n    Spectrum.create ~wavelength:wave ~values |> Spectrum.as_flux_density\n  in\n  let z = Nx.scalar f64 0.5 in\n  let shifted = Spectrum.redshift ~z batch in\n  let s1 = Spectrum.redshift ~z bb1 in\n  let s2 = Spectrum.redshift ~z bb2 in\n  let expected = Nx.stack [ Spectrum.values s1; Spectrum.values s2 
] in\n  let diff =\n    Nx.item [] (Nx.max (Nx.abs (Nx.sub (Spectrum.values shifted) expected)))\n  in\n  is_true\n    ~msg:\n      (Printf.sprintf \"batch redshift matches individual: max_diff=%.2e\" diff)\n    (diff < 1e-20)\n\nlet test_batch_create_mismatch () =\n  let wave = Unit.Length.of_m (Nx.linspace f64 1e-7 1e-6 100) in\n  let values = Nx.ones f64 [| 3; 50 |] in\n  let raised =\n    try\n      ignore (Spectrum.create ~wavelength:wave ~values);\n      false\n    with Invalid_argument _ -> true\n  in\n  is_true ~msg:\"mismatched last dim raises\" raised\n\nlet test_batch_roundtrip () =\n  let wave = Unit.Length.of_m (Nx.linspace f64 1e-7 1e-6 100) in\n  let v1 = Nx.ones f64 [| 100 |] in\n  let v2 = Nx.full f64 [| 100 |] 2.0 in\n  let v3 = Nx.full f64 [| 100 |] 3.0 in\n  let values = Nx.stack [ v1; v2; v3 ] in\n  let batch = Spectrum.create ~wavelength:wave ~values in\n  let extracted = Nx.get [ 1 ] (Spectrum.values batch) in\n  let diff = Nx.item [] (Nx.max (Nx.abs (Nx.sub extracted v2))) in\n  is_true ~msg:\"extract second spectrum from batch\" (diff < 1e-15)\n\nlet () =\n  run \"Umbra\"\n    [\n      group \"Unit\"\n        [\n          test \"10 kpc converts to 0.01 Mpc\" test_length_conversion;\n          test \"10 kpc + 500 pc = 10.5 kpc\" test_length_arithmetic;\n          test \"1 solar mass is ~1.988e30 kg\" test_mass_conversion;\n          test \"100 km / 10 s = 10 km/s\" test_velocity_cross_dim;\n          test \"sin(90) = 1 and cos(90) = 0\" test_angle_trig;\n          test \"wavelength to frequency roundtrips\" test_wavelength_frequency;\n          test \"phantom types prevent adding length and mass\"\n            test_phantom_type_safety;\n          test \"2 eV survives energy-wavelength-frequency roundtrip\"\n            test_energy_wavelength_frequency;\n        ];\n      group \"Const\" [ test \"speed of light is ~299792 km/s\" test_const_c ];\n      group \"Time\"\n        [\n          test \"J2000.0 JD and MJD values are correct\" 
test_time_jd_mjd;\n          test \"ISO 8601 parse and format roundtrip\" test_time_iso;\n          test \"UTC to TAI offset is 32s at J2000\" test_time_utc_tai_tt;\n          test \"TDB-TT difference is less than 2 ms\" test_time_tdb;\n          test \"Unix epoch maps to JD 2440587.5\" test_time_unix;\n          test \"diff and add with 1-day offset\" test_time_diff_add;\n        ];\n      group \"Coord\"\n        [\n          test \"ICRS to Galactic and back preserves RA/Dec\" test_coord_roundtrip;\n          test \"ICRS to Ecliptic and back preserves RA/Dec\"\n            test_coord_ecliptic_roundtrip;\n          test \"ICRS to Supergalactic and back preserves RA/Dec\"\n            test_coord_supergalactic_roundtrip;\n          test \"north pole to south pole separation is 180 deg\"\n            test_separation_poles;\n          test \"nearest self-match returns identity indices\"\n            test_match_nearest_self;\n        ];\n      group \"Cosmo\"\n        [\n          test \"H(0) equals H0 = 70 km/s/Mpc\" test_cosmo_hubble;\n          test \"E(z=0) = 1 for any cosmology\" test_cosmo_e_of;\n          test \"comoving(0.1) ~ 421 Mpc and luminosity(0.1) ~ 463 Mpc\"\n            test_cosmo_distances;\n          test \"lookback time at z=1 is ~7.7 Gyr\" test_cosmo_lookback;\n          test \"1 kpc at z=0.022 subtends ~2.3 arcsec\" test_cosmo_angular_scale;\n          test \"Planck18 comoving(0.5) ~ 1960 Mpc\" test_cosmo_planck18;\n          test \"flat_lcdm(70, 0.3) matches default cosmology\"\n            test_cosmo_flat_lcdm_same_as_default;\n          test \"non-flat LCDM differs from flat\" test_cosmo_nonflat_lcdm;\n          test \"wCDM with w0=-1 reduces to LCDM\" test_cosmo_wcdm;\n          test \"w0waCDM with w0=-1 wa=0 reduces to LCDM\" test_cosmo_w0wacdm;\n          test \"z_at_value roundtrips luminosity distance\" test_cosmo_z_at_value;\n          test \"growth factor D(z=0) = 1\" test_cosmo_growth_factor_z0;\n          test \"growth factor D(z=1) ~ 
0.61\" test_cosmo_growth_factor_z1;\n          test \"growth rate f(z=0) ~ 0.52\" test_cosmo_growth_rate_z0;\n          test \"growth factor decreases with redshift\"\n            test_cosmo_growth_monotonic;\n          test \"linear power spectrum is positive and decreases with z\"\n            test_cosmo_linear_power;\n          test \"nonlinear power exceeds linear at k=1 h/Mpc\"\n            test_cosmo_nonlinear_power;\n          test \"Planck18 omega_b, n_s, and sigma8 accessors\"\n            test_cosmo_params_accessors;\n          test \"D_H(0) = c/H0\" test_cosmo_dh;\n          test \"D_M equals D_C for flat geometry\" test_cosmo_dm_flat;\n          test \"D_V = (z * D_H * D_M^2)^(1/3)\" test_cosmo_dv;\n          test \"sound horizon r_s ~ 147 Mpc for Planck18\"\n            test_cosmo_sound_horizon;\n          test \"age of universe ~ 13.8 Gyr for Planck18\" test_cosmo_age_planck18;\n          test \"age(z=0) - age(z=1) = lookback(z=1)\" test_cosmo_age_at_z1;\n          test \"comoving distance to CMB ~ 14000 Mpc\" test_cosmo_comoving_cmb;\n          test \"comoving distances increase at z = 2, 5, 10\"\n            test_cosmo_comoving_high_z;\n          test \"lookback time at z=5 ~ 12.5 Gyr\" test_cosmo_lookback_high_z;\n        ];\n      group \"Altaz\"\n        [\n          test \"ICRS to AltAz and back preserves RA/Dec\" test_altaz_zenith;\n          test \"Polaris altitude ~ observer latitude from lat=45\"\n            test_altaz_north_pole;\n          test \"airmass is >= 1.0 near zenith\" test_altaz_airmass_zenith;\n          test \"airmass differs for high vs low altitude stars\"\n            test_altaz_airmass_low_alt;\n          test \"refraction raises apparent altitude\" test_altaz_refraction;\n          test \"standalone refraction is between 0 and 2 arcmin at high alt\"\n            test_altaz_refraction_standalone;\n        ];\n      group \"Galactocentric\"\n        [\n          test \"l=0 b=0 at galcen_distance maps to origin\"\n           
 test_galactocentric_gc_position;\n          test \"Galactocentric to ICRS roundtrips RA/Dec/distance\"\n            test_galactocentric_roundtrip;\n        ];\n      group \"Spectrum\"\n        [\n          test \"scale by 3 multiplies all values\" test_spectrum_scale;\n          test \"multiply spectrum by transmission\" test_spectrum_mul;\n          test \"divide spectrum by flat transmission\" test_spectrum_div;\n          test \"mul then div roundtrips to original\"\n            test_spectrum_mul_div_roundtrip;\n          test \"blackbody peak obeys Wien's displacement law\"\n            test_spectrum_blackbody_wien;\n          test \"redshift z=1 doubles wavelength and halves flux\"\n            test_spectrum_redshift;\n          test \"resample preserves positivity and finiteness\"\n            test_spectrum_resample_values;\n          test \"Gaussian line peaks at 656.3 nm with unit amplitude\"\n            test_spectrum_gaussian_peak;\n          test \"Lorentzian line peaks at 500 nm with amplitude 3\"\n            test_spectrum_lorentzian_peak;\n          test \"Voigt with sigma >> gamma matches Gaussian\"\n            test_spectrum_voigt_limits;\n          test \"power-law continuum plus Gaussian line composes cleanly\"\n            test_spectrum_line_composability;\n        ];\n      group \"Extinction\"\n        [\n          test \"CCM89 A_V/A_V ~ 1.0 at V-band 550 nm\"\n            test_extinction_ccm89_v_band;\n          test \"CCM89 extinction increases toward blue\"\n            test_extinction_ccm89_monotonic;\n          test \"apply then unredden recovers original spectrum\"\n            test_extinction_apply_unredden;\n        ];\n      group \"Photometry\"\n        [\n          test \"flat f_nu = 3631 Jy gives m_AB ~ 0\" test_photometry_ab_mag_flat;\n          test \"same-band color is zero\" test_photometry_color_same_band;\n          test \"hot star is brighter in blue than red\"\n            test_photometry_blue_star_color;\n          test 
\"auto-resample matches manual resample\"\n            test_photometry_auto_resample;\n          test \"ST and AB magnitudes differ for a blackbody\"\n            test_photometry_st_mag;\n          test \"Vega and AB magnitudes differ through Johnson V\"\n            test_photometry_vega_mag;\n          test \"effective wavelength is in range for flat tophat spectrum\"\n            test_photometry_effective_wavelength;\n        ];\n      group \"Filters\"\n        [\n          test \"SDSS r pivot wavelength ~ 620 nm\" test_filters_sdss_pivot;\n          test \"Johnson V pivot wavelength ~ 551 nm\"\n            test_filters_johnson_v_pivot;\n          test \"2MASS J pivot wavelength ~ 1235 nm\" test_filters_twomass_j_pivot;\n          test \"Gaia BP < G < RP pivot ordering\" test_filters_gaia_ordering;\n          test \"5800 K blackbody through SDSS g yields finite magnitude\"\n            test_filters_photometry;\n        ];\n      group \"Survey\"\n        [\n          test \"Smail n(z) integrates to 1.0\" test_survey_smail_normalized;\n          test \"tabulated n(z) is positive at midpoint and zero outside\"\n            test_survey_tabulated;\n          test \"C_l matrix has correct shape for 2 tracers\" test_survey_cl_shape;\n          test \"auto C_l is positive at all ell\" test_survey_cl_positive;\n          test \"weak lensing noise is constant in ell\" test_survey_noise_wl;\n          test \"shear m_bias=0.02 scales auto C_l by (1+m)^2\" test_survey_m_bias;\n          test \"baryonic feedback suppresses high-ell power\"\n            test_survey_baryonic_feedback;\n        ];\n      group \"FITS\"\n        [\n          test \"2x3 float32 image writes and reads back\"\n            test_fits_image_roundtrip;\n          test \"3-row table with ra/dec writes and reads back\"\n            test_fits_table_roundtrip;\n        ];\n      group \"Batch\"\n        [\n          test \"batch of 2 spectra has shape [2; 100]\" test_batch_create;\n          test 
\"mismatched wavelength and values dims raises\"\n            test_batch_create_mismatch;\n          test \"extract second spectrum from batch\" test_batch_roundtrip;\n          test \"scalar scale applies to all spectra in batch\" test_batch_scale;\n          test \"batch resample matches per-spectrum resample\"\n            test_batch_resample;\n          test \"batch AB magnitudes match per-spectrum magnitudes\"\n            test_batch_ab_mag;\n          test \"batch extinction matches per-spectrum extinction\"\n            test_batch_extinction;\n          test \"batch redshift matches per-spectrum redshift\"\n            test_batch_redshift_scalar;\n        ];\n    ]\n"
  },
  {
    "path": "doc/coming-from-python.md",
    "content": "# Coming from Python\n\nThis page maps Python scientific computing concepts to their Raven equivalents. It assumes you already know OCaml basics.\n\n## Library Mapping\n\n| Python | Raven | Notes |\n|--------|-------|-------|\n| NumPy | [Nx](/docs/nx/) | N-dimensional arrays, broadcasting, linear algebra, FFT |\n| JAX | [Rune](/docs/rune/) | Functional transformations: `grad`, `jvp`, `vmap` |\n| PyTorch / Flax | [Kaun](/docs/kaun/) | Layers, optimizers, training loops |\n| HuggingFace Tokenizers | [Brot](/docs/brot/) | BPE, WordPiece, Unigram; HF-compatible |\n| pandas / Polars | [Talon](/docs/talon/) | Type-safe DataFrames |\n| Matplotlib | [Hugin](/docs/hugin/) | 2D/3D plotting with Cairo |\n| Gymnasium | [Fehu](/docs/fehu/) | RL environments and training utilities |\n| OpenCV | [Sowilo](/docs/sowilo/) | Differentiable image processing |\n| Jupyter + IPython | [Quill](/docs/quill/) | Interactive REPL and markdown notebooks |\n\n## Key Differences\n\n### Explicit Types\n\nNumPy casts types silently. Nx does not.\n\n```python\n# Python: silently upcasts int + float -> float\na = np.array([1, 2, 3])\nb = a + 1.5  # works\n```\n\n<!-- $MDX skip -->\n```ocaml\n(* OCaml: types must match *)\nlet a = Nx.create Nx.Int32 [|3|] [|1l; 2l; 3l|]\n(* Nx.add a (Nx.scalar Nx.Float32 1.5)  -- type error *)\n\n(* Cast explicitly *)\nlet a_f = Nx.astype Nx.Float32 a\nlet b = Nx.add a_f (Nx.scalar Nx.Float32 1.5)\n```\n\n### Array Literals\n\nNumPy uses Python lists. Nx uses OCaml arrays with `[| |]` syntax.\n\n```python\nx = np.array([[1, 2], [3, 4]])\n```\n\n<!-- $MDX skip -->\n```ocaml\nlet x = Nx.create Nx.Float32 [|2; 2|] [|1.; 2.; 3.; 4.|]\n```\n\n### Slicing\n\nNumPy uses `[]` with `:`. 
Nx uses the `slice` function with index constructors.\n\n```python\nx[0:2, :]           # first two rows\nx[:, 1]             # second column\nx[::2]              # every other element\n```\n\n<!-- $MDX skip -->\n```ocaml\nNx.slice [R (0, 2); A] x       (* first two rows *)\nNx.slice [A; I 1] x            (* second column *)\nNx.slice [S (0, -1, 2)] x      (* every other element *)\n```\n\n### No Separate Tensor Type\n\nIn PyTorch, `torch.Tensor` is different from `numpy.ndarray`. In Raven, Rune operates directly on `Nx.t` values. There is no wrapper type.\n\n```python\n# PyTorch: convert between types\nx_np = np.array([1.0, 2.0])\nx_torch = torch.from_numpy(x_np)\nx_torch.requires_grad_(True)\n```\n\n<!-- $MDX skip -->\n```ocaml\n(* Raven: just use Nx tensors directly *)\nlet x = Nx.create Nx.Float32 [|2|] [|1.0; 2.0|]\nlet gradient = Rune.grad (fun x -> Nx.sum (Nx.mul x x)) x\n```\n\n### Functional Transformations\n\nJAX users will find Rune familiar. PyTorch users: think of `grad` as a function transformer, not a method on tensors.\n\n```python\n# JAX style\ngrad_fn = jax.grad(loss_fn)\ngrads = grad_fn(params)\n\n# PyTorch style\nloss = loss_fn(params)\nloss.backward()\ngrads = params.grad\n```\n\n<!-- $MDX skip -->\n```ocaml\n(* Rune: JAX-style functional transforms *)\nlet grad_fn = Rune.grad loss_fn\nlet grads = grad_fn params\n\n(* Or compute value and gradient together *)\nlet loss, grads = Rune.value_and_grad loss_fn params\n```\n\n### Module-Based Layers\n\nKaun layers are records with `init` and `apply`, not classes with `forward`.\n\n```python\n# PyTorch\nclass Model(nn.Module):\n    def __init__(self):\n        super().__init__()\n        self.linear = nn.Linear(784, 10)\n    def forward(self, x):\n        return self.linear(x)\nmodel = Model()\n```\n\n<!-- $MDX skip -->\n```ocaml\n(* Kaun: compose layer records *)\nlet model = Kaun.Layer.sequential [\n  Kaun.Layer.linear ~in_features:784 ~out_features:10 ();\n]\nlet vars = Kaun.Layer.init model ~dtype:Nx.Float32\n```\n\nParameters 
are plain data (`Ptree.t` — a tree of Nx tensors), not hidden inside objects.\n\n### DataFrames\n\npandas uses string-based column access. Talon provides type-safe row operations via an applicative.\n\n```python\n# pandas\ndf['bmi'] = df['weight'] / df['height'] ** 2\n```\n\n<!-- $MDX skip -->\n```ocaml\n(* Talon: type-safe row computation *)\nlet df = Talon.with_column df \"bmi\" Nx.Float64\n  Talon.Row.(map2 (number \"weight\") (number \"height\")\n    ~f:(fun w h -> w /. (h *. h)))\n```\n\n## Detailed Comparisons\n\nEach library has a dedicated comparison page with side-by-side code examples:\n\n- [Nx vs NumPy](/docs/nx/numpy-comparison/)\n- [Rune vs JAX](/docs/rune/jax-comparison/)\n- [Kaun vs PyTorch/Flax](/docs/kaun/pytorch-comparison/)\n- [Brot vs HuggingFace Tokenizers](/docs/brot/hf-tokenizers-comparison/)\n- [Talon vs pandas](/docs/talon/pandas-comparison/)\n- [Hugin vs Matplotlib](/docs/hugin/matplotlib-comparison/)\n- [Sowilo vs OpenCV](/docs/sowilo/opencv-comparison/)\n- [Fehu vs Gymnasium](/docs/fehu/gymnasium-comparison/)\n"
  },
  {
    "path": "doc/ecosystem-overview.md",
    "content": "# The Raven Ecosystem\n\nRaven is nine libraries that share one data type: `Nx.t`, the\nn-dimensional array. Each library does one thing, and they compose\nthrough tensors.\n\n## How the Libraries Fit Together\n\n```\n                         ┌───────────┐\n                         │   Kaun    │  neural networks\n                         │  (Flax)   │\n                         └─────┬─────┘\n                               │\n  ┌───────────┐          ┌─────┴─────┐          ┌───────────┐\n  │  Sowilo   │          │   Rune    │          │   Fehu    │\n  │ (OpenCV)  ├──────────┤  (JAX)    ├──────────┤(Gymnasium)│\n  └─────┬─────┘          └─────┬─────┘          └─────┬─────┘\n        │                      │                      │\n  ┌─────┴──────────────────────┴──────────────────────┴─────┐\n  │                          Nx                              │\n  │                       (NumPy)                            │\n  └──┬──────────────┬──────────────┬──────────────┬─────────┘\n     │              │              │              │\n ┌───┴────┐    ┌────┴───┐    ┌────┴───┐    ┌─────┴────┐\n │ Talon  │    │  Brot  │    │ Hugin  │    │  Quill   │\n │(Polars)│    │(HF Tok)│    │(Mpl)   │    │(Jupyter) │\n └────────┘    └────────┘    └────────┘    └──────────┘\n```\n\n**Nx** is the foundation — every library operates on `Nx.t` tensors.\n\n**Rune** adds functional transformations on top of Nx: `grad`, `jvp`,\n`vmap`. Your Nx code becomes differentiable without changes.\n\n**Kaun** builds on Rune to provide layers, optimizers, training loops,\nand HuggingFace Hub integration.\n\n**Sowilo**, **Fehu**, **Talon**, **Brot**, **Hugin**, and **Quill** each\nuse Nx directly for their domain. Sowilo and Fehu operations are\ncompatible with Rune's `grad` and `vmap` since they are plain Nx\noperations under the hood.\n\n## Which Library Do I Need?\n\n| I want to... 
| Use |\n|---|---|\n| Work with numerical arrays | [Nx](/docs/nx/) |\n| Compute gradients | [Rune](/docs/rune/) |\n| Train neural networks | [Kaun](/docs/kaun/) |\n| Tokenize text for language models | [Brot](/docs/brot/) |\n| Manipulate tabular data | [Talon](/docs/talon/) |\n| Process and transform images | [Sowilo](/docs/sowilo/) |\n| Build RL environments and agents | [Fehu](/docs/fehu/) |\n| Create plots and visualizations | [Hugin](/docs/hugin/) |\n| Run code interactively (REPL or notebooks) | [Quill](/docs/quill/) |\n\n---\n\n## Nx: N-Dimensional Arrays\n\nNx provides the numerical foundation for the entire ecosystem.\nNumPy-like operations on n-dimensional arrays with 19 data types\n(float16 through complex128), broadcasting, slicing, linear algebra,\nFFT, and I/O.\n\n```ocaml\nopen Nx\n\nlet x = linspace Float32 0. 10. 100\nlet y = sin x\nlet mean_y = mean y\n```\n\n[Nx documentation →](/docs/nx/)\n\n## Rune: Automatic Differentiation\n\nFunctional transformations for Nx tensors: reverse-mode AD (grad,\nvjp), forward-mode AD (jvp), and vectorizing maps (vmap). Operates on\n`Nx.t` values directly using OCaml 5 effect handlers — no special\ntensor type needed.\n\n<!-- $MDX skip -->\n```ocaml\nopen Nx\nopen Rune\n\nlet f x = add (mul x x) (sin x)\nlet f' = grad f\nlet f'' = grad f'\n```\n\n[Rune documentation →](/docs/rune/)\n\n## Kaun: Neural Networks\n\nComposable layers, optimizers with learning-rate schedules, training\nloops, data pipelines, and HuggingFace Hub integration. 
Model\nparameters are `Ptree.t` — trees of Nx tensors you can inspect, map,\nand serialize.\n\n<!-- $MDX skip -->\n```ocaml\nopen Kaun\n\nlet model = Layer.sequential [\n  Layer.linear ~in_features:784 ~out_features:128 ();\n  Layer.relu ();\n  Layer.linear ~in_features:128 ~out_features:10 ();\n]\n\nlet trainer = Train.make ~model\n  ~optimizer:(Optim.adam ~lr:(Optim.Schedule.constant 0.001) ())\n```\n\n[Kaun documentation →](/docs/kaun/)\n\n## Brot: Tokenization\n\nFast, HuggingFace-compatible tokenization supporting BPE, WordPiece,\nUnigram, word-level, and character-level algorithms. Composable\npipeline (normalizer → pre-tokenizer → model → post-processor →\ndecoder) with training from scratch.\n\n```ocaml\nopen Brot\n\nlet tokenizer = from_file \"tokenizer.json\" |> Result.get_ok\nlet encoding = encode tokenizer \"Hello, world!\"\nlet ids = Encoding.ids encoding\n```\n\n[Brot documentation →](/docs/brot/)\n\n## Talon: DataFrames\n\nType-safe tabular data with heterogeneous columns, an applicative Row\nsystem for row-wise operations, and vectorized aggregations backed by\nNx.\n\n```ocaml\nopen Talon\n\nlet df = create [\n  \"name\", Col.string_list [\"Alice\"; \"Bob\"; \"Charlie\"];\n  \"score\", Col.float64_list [85.5; 92.0; 78.5];\n]\n\nlet () = print df\n```\n\n[Talon documentation →](/docs/talon/)\n\n## Sowilo: Computer Vision\n\nDifferentiable image processing: geometric transforms (resize, crop,\nflip), spatial filters (Gaussian blur, Sobel, Canny), color space\nconversions, and morphological operations. 
All operations are plain Nx\ncomputations, so they compose with `Rune.grad` and `Rune.vmap`.\n\n<!-- $MDX skip -->\n```ocaml\nopen Sowilo\n\nlet processed =\n  img\n  |> to_float\n  |> resize ~height:224 ~width:224 ~mode:Bilinear\n  |> normalize ~mean:[|0.485; 0.456; 0.406|] ~std:[|0.229; 0.224; 0.225|]\n```\n\n[Sowilo documentation →](/docs/sowilo/)\n\n## Fehu: Reinforcement Learning\n\nRL environments (CartPole, MountainCar, GridWorld), type-safe\nobservation/action spaces, vectorized environments, trajectory\ncollection, replay buffers, and generalized advantage estimation.\n\n<!-- $MDX skip -->\n```ocaml\nopen Fehu\n\nlet env = Fehu_envs.cartpole () in\nlet obs, _info = Env.reset env in\nlet obs, reward, terminated, truncated, _info =\n  Env.step env (Space.sample (Env.action_space env))\n```\n\n[Fehu documentation →](/docs/fehu/)\n\n## Hugin: Visualization\n\nPublication-quality 2D and 3D plots using Cairo rendering. Takes Nx\ntensors as input. Line plots, scatter, bar charts, contour plots,\nimage display.\n\n<!-- $MDX skip -->\n```ocaml\nopen Hugin\nopen Nx\n\nlet fig = figure () in\nlet ax = subplot fig in\nlet _ = Plotting.plot ax ~x ~y ~label:\"sin(x)\" in\nshow fig\n```\n\n[Hugin documentation →](/docs/hugin/)\n\n## Quill: Interactive Computing\n\nInteractive REPL and markdown notebooks. Launch `quill` for a toplevel\nwith syntax highlighting, completion, and history, or open a markdown\nfile for a full notebook experience. Terminal UI, web frontend, and\nbatch mode with all Raven libraries pre-loaded.\n\n<!-- $MDX skip -->\n```bash\nquill                    # interactive REPL\nquill notebook.md        # notebook TUI\nquill serve notebook.md  # web frontend\nquill run notebook.md    # batch evaluation\n```\n\n[Quill documentation →](/docs/quill/)\n\n## Getting Started\n\n1. **New to Raven?** Start with the [Quickstart](/docs/quickstart/)\n2. **Coming from Python?** Read [Coming from Python](/docs/coming-from-python/)\n3. 
**Want a specific library?** Use the table above to find the right docs\n"
  },
  {
    "path": "doc/index.md",
    "content": "# Documentation\n\nWelcome to Raven's documentation. Raven is an ecosystem of OCaml libraries for numerical computing, machine learning, and data science.\n\n## Start Here\n\n- **[Quickstart](/docs/quickstart/)** — zero to gradient in 5 minutes\n- **[Coming from Python](/docs/coming-from-python/)** — map NumPy, PyTorch, pandas concepts to Raven\n- **[Ecosystem Overview](/docs/ecosystem-overview/)** — how the libraries relate and which to use\n\n## Libraries\n\n| | Library | Like | What it does |\n|-|---------|------|-------------|\n| | [**nx**](/docs/nx/) | NumPy | N-dimensional arrays with pluggable backends |\n| ᚱ | [**rune**](/docs/rune/) | JAX | Automatic differentiation and functional transformations |\n| ᚲ | [**kaun**](/docs/kaun/) | PyTorch / Flax | Neural networks and training |\n| ᚨ | [**brot**](/docs/brot/) | HF Tokenizers | Fast tokenization for language models |\n| ᛃ | [**talon**](/docs/talon/) | Pandas / Polars | DataFrames with type-safe columns |\n| ᛞ | [**hugin**](/docs/hugin/) | Matplotlib | Data visualization and plotting |\n| ᛈ | [**quill**](/docs/quill/) | Jupyter + IPython | Interactive REPL and markdown notebooks |\n| ᚠ | [**fehu**](/docs/fehu/) | Gymnasium | Reinforcement learning environments |\n| ᛋ | [**sowilo**](/docs/sowilo/) | OpenCV | Differentiable computer vision |\n\n## Project\n\n- [Installation](/docs/installation/) — system dependencies, opam setup, building from source\n- [Roadmap](/docs/roadmap/) — what works today and what's coming\n- [Introduction](/docs/introduction/) — vision and philosophy\n- [Support Raven](/docs/support-raven/) — sponsorship and contributing\n"
  },
  {
    "path": "doc/installation.md",
    "content": "# Installation\n\n## Prerequisites\n\nRaven requires **OCaml 5.2** or later and **opam**.\n\nIf you don't have opam installed, follow the [official instructions](https://opam.ocaml.org/doc/Install.html). Then create a switch:\n\n```bash\nopam switch create raven 5.2.0\neval $(opam env)\n```\n\n## Installing from opam\n\nInstall the entire ecosystem:\n\n```bash\nopam install raven\n```\n\nOr install individual libraries:\n\n```bash\nopam install nx          # just arrays\nopam install rune        # arrays + autodiff\nopam install kaun        # arrays + autodiff + neural networks\nopam install brot        # tokenization\nopam install talon       # dataframes\n```\n\n## Building from Source\n\n```bash\ngit clone https://github.com/raven-ml/raven\ncd raven\ndune pkg lock && dune build\n```\n\nTo build a specific library:\n\n```bash\ndune build packages/nx            # just nx\ndune build packages/kaun          # kaun + its dependencies\n```\n\n## System Dependencies\n\nMost Raven libraries have no system dependencies beyond OCaml. The exceptions:\n\n| Library | Requires | macOS | Ubuntu/Debian |\n|---------|----------|-------|---------------|\n| **hugin** | Cairo, SDL2 | `brew install cairo sdl2` | `apt install libcairo2-dev libsdl2-dev` |\n\n## Using Raven in Your Project\n\nAdd libraries to your `dune-project`:\n\n```dune\n(lang dune 3.0)\n(package\n (name my_project)\n (depends\n  ocaml\n  dune\n  nx\n  rune))\n```\n\nAnd your `dune` file:\n\n```dune\n(executable\n (name main)\n (libraries nx rune))\n```\n\n## Verify Your Installation\n\nCreate a file `main.ml`:\n\n```ocaml\nlet () =\n  let open Nx in\n  let x = linspace Float32 0. 1. 
5 in\n  print_data x\n```\n\nBuild and run:\n\n```bash\ndune exec ./main.exe\n```\n\nYou should see five evenly-spaced values printed.\n\n## Editor Setup\n\nFor the best development experience, use an editor with OCaml LSP support:\n\n- **VS Code**: Install the [OCaml Platform extension](https://marketplace.visualstudio.com/items?itemName=ocamllabs.ocaml-platform)\n- **Emacs**: Use [ocaml-eglot](https://github.com/tarides/ocaml-eglot)\n- **Vim/Neovim**: Use [ocaml-lsp](https://github.com/ocaml/ocaml-lsp) with your LSP client\n\n## Troubleshooting\n\n**Missing system libraries**: If Hugin fails to build, ensure Cairo and SDL2 development headers are installed.\n\n**Opam switch issues**: Run `eval $(opam env)` after creating or switching opam switches.\n\n**Build failures**: Check your OCaml version with `ocaml --version`. Raven requires 5.2.0 or later.\n\n**Getting help**: Report issues at [github.com/raven-ml/raven/issues](https://github.com/raven-ml/raven/issues).\n"
  },
  {
    "path": "doc/introduction.md",
    "content": "# Introduction\n\nRaven is a project to bring modern scientific computing to the OCaml programming language. We're building a comprehensive ecosystem, from low-level numerical libraries and automatic differentiation to high-level machine learning frameworks and interactive notebooks.\n\nOur ambition is to make scientific computing in OCaml feel as natural as it does in Python. This means not just matching Python's capabilities, but delivering the same level of ergonomics, performance, and developer experience that has made Python the de facto standard for scientific computing.\n\nIf successful, Raven would establish OCaml as a genuine alternative in the scientific computing landscape. It's an ambitious undertaking, but one we believe is both necessary and achievable.\n\n## Why Not Just Use Python?\n\nToday, Python has an effective monopoly on scientific computing. Unlike web development, where we can choose between multiple mature ecosystems, numerical computing offers essentially one realistic option. This lack of choice is unfortunate.\n\nWhat's more problematic is that Python, while excellent for quick experimentation, doesn't particularly shine for building robust production systems. Its interpreted nature, dynamic typing, and limited multicore support create real challenges when you need to deploy and maintain large-scale applications.\n\nIf you've worked in this space, you've likely experienced this firsthand: rapid prototypes that become production nightmares, debugging sessions where type errors only surface at runtime, or performance bottlenecks that force you to drop down to C extensions.\n\nOften, this mismatch forces a wasteful pattern: researchers prototype in Python, then teams reimplement everything for production in other languages. 
This induces all kinds of second-order effects on organization structures, team dynamics, development velocity, and workload.\n\nThe scientific community deserves better options than being forced into one language, and we believe OCaml occupies a unique sweet spot between rapid experimentation and building production-grade systems. It just needs the scientific ecosystem to match its technical strengths. This is the gap that Raven aims to fill.\n\nIn the AI era, we believe OCaml has an important role to play. If you're generating 80% of your code with AI assistance, wouldn't you prefer a language that catches errors at compile time rather than runtime? The productivity gains from AI coding are amplified when you have a type system that gives you stronger guarantees about your generated code. Raven is our contribution to putting OCaml in the spotlight for scientific computing in this new era.\n\n## What Does Success Look Like?\n\nOur goal isn't just to build OCaml versions of Python libraries: it's to create a compelling alternative for busy developers who just want the best tool for the job.\n\nSuccess means two things. First, **OCaml developers shouldn't have to switch to Python for numerical computing**. Whether you're analyzing data, training models, or building computational systems, you should be able to stay in the OCaml ecosystem with the same productivity you'd expect from Python.\n\nSecond, **Raven should break into the mainstream scientific computing conversation**. 
It shouldn't just serve existing OCaml developers: we're building for teams who need to ship reliable systems, not just an OCaml curiosity for language enthusiasts.\n\nWe measure success across five key dimensions:\n\n- **Capability parity**: Everything you can do in Python, you should be able to do with Raven\n- **Development productivity**: Getting from idea to working prototype is as fast as it would be in Python\n- **Developer experience**: Developers get the kind of documentation, tooling, and APIs they wish every project had\n- **Production performance**: Match or exceed NumPy/PyTorch performance on the fast path\n- **Production readiness**: Teams can ship robust, maintainable Raven-built applications that perform well under real-world conditions\n\nWe believe this is achievable through focused execution and strategic choices. We're prioritizing the 80% that matter most, focusing on one blessed workflow per use-case, and building modular components that encourage ecosystem growth, rather than trying to match Python everywhere from day one.\n\n## Why Not Just Use Owl?\n\nOwl deserves credit for the amount of work and love that has been poured into it. It demonstrated that serious numerical computing in OCaml was possible, spanning everything from statistics and signal processing to basic linear algebra and neural networks, and more.\n\nHowever, Owl can't compete with NumPy or PyTorch on performance, and performance parity isn't optional if we want teams to seriously consider OCaml over Python.\n\nThe reality is that we can't realistically match NumPy and PyTorch's performance through traditional optimization. These projects have hundreds of developers working on hand-optimized kernels. With our small team, JIT compilation is our only viable path to competitive performance.\n\nThis creates a fundamental constraint. 
Building for JIT-first changes everything about your design: API choices, memory layouts, operator fusion strategies, even how you structure the development experience. Rather than retrofitting these assumptions onto existing work, we decided a clean slate would be more effective.\n\nThere's also the ecosystem question. Despite Owl's technical achievements, it hasn't generated the kind of flourishing community we need. We suspect this is partly due to its lack of modularity: without libraries designed as composable building blocks, it's challenging to build a broader ecosystem around the foundation.\n\nRaven is designed from the ground up to (1) compete with Python's scientific computing stack on performance and (2) build the flourishing ecosystem that OCaml's scientific computing community deserves.\n\n## What We're Building\n\nRaven is a comprehensive ecosystem that spans the entire scientific computing stack. Here's what we're building:\n\n**Foundation**\n- **Nx**: N-dimensional arrays with pluggable backends (NumPy equivalent)\n- **Brot**: Fast, HuggingFace-compatible tokenization (HF Tokenizers equivalent)\n- **Talon**: Type-safe DataFrames (pandas/Polars equivalent)\n\n**Differentiable Computing**\n- **Rune**: Automatic differentiation using OCaml's effect system (JAX equivalent)\n\n**Domain Frameworks**\n- **Kaun**: Neural networks and training (PyTorch/Flax equivalent)\n- **Sowilo**: Differentiable computer vision (OpenCV equivalent)\n- **Fehu**: Reinforcement learning environments and algorithms (Gymnasium equivalent)\n\n**Tooling**\n- **Hugin**: Publication-quality plotting (Matplotlib equivalent)\n- **Quill**: Interactive notebooks as markdown files (Jupyter equivalent)\n\nNine libraries spanning the full scientific computing stack, all designed to work together seamlessly.\n\n**Key Innovations**\nWhile we aim to feel familiar to Python users, Raven brings genuine innovations to scientific computing:\n\n**Nx** uses pluggable backends inspired by 
Tinygrad's minimalist approach, giving us flexibility to optimize for different hardware without monolithic complexity.\n\n**Rune** implements automatic differentiation using OCaml's effect system. As far as we know, it is the first project of this scale to use effects for autodiff, building on recent research, and implementing JAX's vision for functional numerical computation with a truly functional foundation.\n\n**Quill** rethinks notebooks. Notebooks are plain markdown files — git-friendly, readable without special tooling, and editable in any text editor. Quill runs them as a TUI in the terminal or as a web frontend in the browser, with all Raven packages pre-loaded and zero setup.\n\n**Deployment** is where Raven's story diverges most from Python. AOT compilation generates all compute kernels at compile time, producing binaries with no BLAS or CUDA runtime dependency. This makes it possible to deploy models as MirageOS unikernels — minimal attack surface, millisecond boot, deterministic behavior — or as static binaries with no Python runtime, no dependency hell.\n\n**Current Focus**\nThe alpha milestone is complete — we've trained GPT-2 end-to-end on CPU using the full Raven stack. We're now focused on integrating tolk as a JIT transformation in Rune, with the goal of matching PyTorch performance. After that, V1 brings production-ready training and deployment: AOT compilation, inference serving, ONNX import, and MirageOS unikernel deployment. See the [roadmap](/docs/roadmap/) for details.\n"
  },
  {
    "path": "doc/quickstart.md",
    "content": "# Quickstart\n\nThis gets you from zero to computing gradients and training a model in five minutes.\n\n## Setup\n\n<!-- $MDX skip -->\n```bash\nopam install raven\n```\n\nCreate a `dune-project` and `dune` file:\n\n<!-- $MDX skip -->\n```dune\n; dune-project\n(lang dune 3.20)\n```\n\n<!-- $MDX skip -->\n```dune\n; dune\n(executable\n (name main)\n (libraries kaun))\n```\n\nInstalling `kaun` pulls in `nx` and `rune` automatically.\n\n## Step 1: Arrays with Nx\n\nNx provides n-dimensional arrays. Every value has a data type and a shape.\n\n```ocaml\nopen Nx\n\nlet () =\n  (* Create arrays *)\n  let a = create Float32 [|2; 3|] [|1.; 2.; 3.; 4.; 5.; 6.|] in\n  let b = ones Float32 [|2; 3|] in\n\n  (* Element-wise operations *)\n  let c = add a b in\n  print_data c;\n\n  (* Reductions *)\n  Printf.printf \"sum = %.1f\\n\" (item [] (sum a));\n  Printf.printf \"mean = %.1f\\n\" (item [] (mean a));\n\n  (* Matrix multiplication *)\n  let x = rand Float32 [|3; 4|] in\n  let y = rand Float32 [|4; 2|] in\n  let z = matmul x y in\n  Printf.printf \"matmul shape: %s\\n\"\n    (Array.to_list (shape z) |> List.map string_of_int |> String.concat \"x\")\n```\n\n## Step 2: Gradients with Rune\n\nRune computes derivatives of Nx functions automatically. 
Write a function using Nx operations, then use `grad` to differentiate it.\n\n```ocaml\nopen Nx\nopen Rune\n\nlet () =\n  (* f(x) = x² + sin(x) *)\n  let f x = add (mul x x) (sin x) in\n\n  (* grad returns the derivative function *)\n  let f' = grad f in\n\n  let x = scalar Float32 2.0 in\n  Printf.printf \"f(2)  = %.4f\\n\" (item [] (f x));\n  Printf.printf \"f'(2) = %.4f\\n\" (item [] (f' x));\n\n  (* Higher-order: second derivative *)\n  let f'' = grad f' in\n  Printf.printf \"f''(2) = %.4f\\n\" (item [] (f'' x))\n```\n\n## Step 3: Training with Kaun\n\nKaun provides layers, optimizers, and training loops built on Rune.\n\n<!-- $MDX skip -->\n```ocaml\nopen Kaun\n\nlet () =\n  Nx.Rng.run ~seed:42 @@ fun () ->\n\n  (* XOR dataset *)\n  let x = Nx.create Nx.Float32 [|4; 2|]\n    [|0.; 0.; 0.; 1.; 1.; 0.; 1.; 1.|] in\n  let y = Nx.create Nx.Float32 [|4; 1|]\n    [|0.; 1.; 1.; 0.|] in\n\n  (* Define model *)\n  let model = Layer.sequential [\n    Layer.linear ~in_features:2 ~out_features:8 ();\n    Layer.tanh ();\n    Layer.linear ~in_features:8 ~out_features:1 ();\n  ] in\n\n  (* Create trainer and initialize *)\n  let trainer = Train.make ~model\n    ~optimizer:(Optim.adam ~lr:(Optim.Schedule.constant 0.01) ()) in\n  let st = Train.init trainer ~dtype:Nx.Float32 in\n\n  (* Train *)\n  let st = Train.fit trainer st\n    ~report:(fun ~step ~loss _st ->\n      if step mod 250 = 0 then\n        Printf.printf \"step %4d  loss %.6f\\n\" step loss)\n    (Data.repeat 1000 (x, fun pred -> Loss.binary_cross_entropy pred y))\n  in\n\n  (* Predict *)\n  let pred = Train.predict trainer st x |> Nx.sigmoid in\n  Printf.printf \"\\npredictions (expected 0 1 1 0):\\n\";\n  for i = 0 to 3 do\n    Printf.printf \"  [%.0f, %.0f] -> %.3f\\n\"\n      (Nx.item [i; 0] x) (Nx.item [i; 1] x) (Nx.item [i; 0] pred)\n  done\n```\n\n## Next Steps\n\n- **[Nx](/docs/nx/getting-started/)** — full guide to arrays, slicing, broadcasting, linear algebra\n- **[Rune](/docs/rune/getting-started/)** — 
all transformations: grad, jvp, vmap, and more\n- **[Kaun](/docs/kaun/getting-started/)** — layers, optimizers, training loops, pretrained models\n- **[Ecosystem Overview](/docs/ecosystem-overview/)** — how all 9 libraries fit together\n"
  },
  {
    "path": "doc/roadmap.md",
    "content": "# Roadmap\n\n## Current Status\n\nRaven is in **alpha**. The core stack (Nx -> Rune -> Kaun) works end-to-end: we have successfully trained GPT-2 on CPU using the full Raven stack.\n\n| Library    | Status | What works                                                                |\n| ---------- | ------ | ------------------------------------------------------------------------- |\n| **nx**     | Alpha  | Full NumPy-like API, linear algebra, FFT, I/O (npy, images)               |\n| **rune**   | Alpha  | Reverse and forward-mode AD, vmap, gradient checking                      |\n| **kaun**   | Alpha  | Layers, optimizers, training loops, HuggingFace Hub, MNIST/GPT-2 examples |\n| **brot**   | Alpha  | All 5 algorithms, full pipeline, HF tokenizer.json compat, training       |\n| **talon**  | Alpha  | DataFrames, row operations, aggregations, CSV I/O                         |\n| **hugin**  | Alpha  | 2D/3D plots, scatter, bar, contour, images                                |\n| **fehu**   | Alpha  | Environments (CartPole, GridWorld, MountainCar), vectorized envs, GAE     |\n| **sowilo** | Alpha  | Geometric transforms, filters, edge detection, morphological ops          |\n| **quill**  | Alpha  | Interactive REPL, notebook TUI and web frontend, batch eval, watch mode   |\n\nAPIs will change. 
Bug reports and feedback are welcome.\n\n## Beta: JIT Compilation & Performance\n\nThe beta cycle focuses on **JIT compilation with performance close to PyTorch**.\n\n- Integrate tolk (an OCaml port of tinygrad) as a JIT transformation in Rune\n- Target CPU, CUDA, Metal, OpenCL, and HIP\n- Kernel fusion and optimization\n- Benchmark against PyTorch on standard workloads\n\n## V1: Production-Ready Training & Deployment\n\nV1 makes Raven **production-ready**: train models, deploy them as unikernels or static binaries.\n\n**Training**:\n- Gradient accumulation, mixed precision, gradient checkpointing\n- Flash attention for efficient transformer training\n- ONNX import for PyTorch model portability\n- Parallel data loading, layer completions\n\n**Deployment**:\n- AOT compilation to standalone binaries (CPU and GPU)\n- Inference engine with KV cache, continuous batching, and PagedAttention\n- Post-training quantization (INT8/INT4)\n- MirageOS unikernel deployment -- tolk AOT generates all compute at compile time, no BLAS dependency, enabling deployment as unikernels\n"
  },
  {
    "path": "doc/support-raven.md",
    "content": "# Support Raven\n\n## Raven in One Minute\n\nPython's monopoly on scientific computing forces an impossible choice: ship everything in Python (endure runtime crashes, the GIL's multicore ceiling, and gigabyte containers), or prototype in Python then rewrite for production (doubling the work and creating siloed teams).\n\n**We think there's a better way.** OCaml lets you prototype as quickly as Python and scale the same code to production. Same expressiveness; strong typing that catches bugs before they crash your ML pipeline; JIT compilation that matches NumPy/PyTorch performance. One language from research to production — it just needs a production-grade ML stack.\n\n**Raven brings that stack to OCaml:** Nx (NumPy), Rune (JAX with effects-based autodiff), Kaun (Flax), Brot (tokenization), Hugin (Matplotlib), and Quill (notebooks done right). Train models with automatic differentiation and JIT compilation, then deploy as a MirageOS unikernel or a static binary — no Python, no CUDA dependency hell, no 5 GB Docker images. We built Raven for teams that want both development speed and reliable systems.\n\n_Learn more: [Introduction](/docs/introduction)_\n\n_We're in alpha with the full stack working end-to-end (we've trained GPT-2 on CPU). 
Next milestone: JIT compilation via tolk with performance close to PyTorch._\n\n## Roadmap & Funding Goals\n\n_See the [full roadmap](/docs/roadmap) for our complete vision and timeline._\n\n### Beta — JIT Compilation & Performance\n- Integrate tolk (tinygrad-based compiler) as a JIT transformation in Rune\n- Target CPU, CUDA, Metal, OpenCL, and HIP\n- Kernel fusion and optimization\n- Performance within 2x of PyTorch on standard workloads\n\n### V1 — Production-Ready Training & Deployment\n- Production training: gradient accumulation, mixed precision, gradient checkpointing, flash attention\n- ONNX import for PyTorch model portability\n- AOT compilation to standalone binaries (CPU and GPU)\n- Inference engine with KV cache, continuous batching, and PagedAttention\n- MirageOS unikernel deployment\n- Post-training quantization (INT8/INT4)\n\nWe're also open to discussing custom sponsorship packages based on your needs.\n\n## Ways to Support\n\n### For Developers\n- **Try it out**: Test Raven with your workflows and [report issues](https://github.com/raven-ml/raven/issues)\n- **Contribute code**: See our [contributing guide](https://github.com/raven-ml/raven/blob/main/CONTRIBUTING.md) for areas where we need help\n- **Share feedback**: What would make you switch from Python? 
[Tell us](mailto:thibaut.mattio@gmail.com)\n- **Spread the word**: Star the repo, share with your team, write about your experience\n\n### For Companies\n- **Use Raven**: Reach out if you're interested in using it—we're keen on prioritizing development based on real-world needs\n- **Sponsor development**: Email [thibaut.mattio@gmail.com](mailto:thibaut.mattio@gmail.com) for sponsorship packages\n\n### For Individuals\n- **GitHub Sponsors**: [Support the project with monthly contributions](https://github.com/sponsors/tmattio)\n- **One-time donations**: Every contribution helps us reach the next milestone\n- **Write tutorials**: Help others learn Raven and grow the community\n\n## Current Sponsors\n\nWe're grateful for the support of our sponsors:\n\n### Corporate Sponsors\n\n- [**Ahrefs**](https://ahrefs.com) - Building tools to help you grow your search traffic\n- [**Tarides**](https://tarides.com) - Secure-by-design infrastructure and tooling for a better digital world\n\n### Individual Sponsors\n\nThank you to all our individual sponsors for their support!\n\n## Get in Touch\n\n**For sponsorship inquiries**: [thibaut.mattio@gmail.com](mailto:thibaut.mattio@gmail.com)  \n**For feature requests or bug reports**: [GitHub Issues](https://github.com/raven-ml/raven/issues)\n\n---\n\n_Raven is built by [Thibaut Mattio](https://github.com/tmattio) and contributors. We believe OCaml deserves a world-class scientific computing ecosystem, and we're committed to building it._\n"
  },
  {
    "path": "dune-project",
    "content": "(lang dune 3.21)\n\n(name raven)\n\n(source\n (github raven-ml/raven))\n\n(authors \"Thibaut Mattio <thibaut.mattio@gmail.com>\")\n\n(maintainers \"Thibaut Mattio <thibaut.mattio@gmail.com>\")\n\n(license ISC)\n\n(documentation \"https://raven-ml.dev/docs/\")\n\n(bug_reports \"https://github.com/raven-ml/raven/issues\")\n\n(using mdx 0.4)\n\n(using directory-targets 0.1)\n\n(version 1.0.0~alpha3)\n\n(implicit_transitive_deps false)\n\n(generate_opam_files true)\n\n(opam_file_location inside_opam_directory)\n\n(pin\n (url \"git+https://github.com/invariant-hq/thumper.git\")\n (package\n  (name thumper)))\n\n(package\n (name nx)\n (dir packages/nx)\n (synopsis \"N-dimensional arrays for OCaml\")\n (description\n  \"Nx provides n-dimensional arrays with NumPy-like semantics and OCaml's type safety. 19 data types, broadcasting, slicing, linear algebra, FFT, and I/O. The numerical foundation for the Raven ecosystem.\")\n (depends\n  (ocaml\n   (>= 5.2.0))\n  dune\n  (dune-configurator :build)\n  (conf-pkg-config :build)\n  ; camlzip\n  (conf-zlib :build)\n  logs\n  ; tests\n  (windtrap :with-test)\n  (mdx :with-test)\n  (thumper :with-test))\n (tags\n  (numerical-computation tensor-library machine-learning)))\n\n(package\n (name brot)\n (dir packages/brot)\n (synopsis \"Tokenization for OCaml\")\n (description\n  \"Fast, HuggingFace-compatible tokenization for language models. BPE, WordPiece, Unigram, word-level, and character-level algorithms with composable pipelines and training from scratch.\")\n (depends\n  (ocaml\n   (>= 5.2.0))\n  dune\n  re\n  jsont\n  bytesrw\n  (uunf\n   (>= 15.1.0))\n  uucp\n  (windtrap :with-test)\n  (mdx :with-test)\n  (thumper :with-test))\n (tags\n  (tokenization bpe wordpiece subword-tokenization language-models)))\n\n(package\n (name talon)\n (dir packages/talon)\n (synopsis \"Dataframes for OCaml\")\n (description\n  \"Fast and elegant dataframes with type-safe operations. 
Heterogeneous columns, applicative row operations, vectorized aggregations, and CSV I/O, built on Nx.\")\n (depends\n  (ocaml\n   (>= 5.2.0))\n  dune\n  (nx\n   (= :version))\n  (windtrap :with-test)\n  (mdx :with-test)\n  (thumper :with-test))\n (tags\n  (dataframe data-manipulation data-science tabular-data)))\n\n(package\n (name rune)\n (dir packages/rune)\n (synopsis \"Functional transformations for Nx arrays\")\n (description\n  \"Automatic differentiation and vectorizing maps for Nx tensors. Reverse-mode AD (grad, vjp), forward-mode AD (jvp), vmap, and gradient checking, built on OCaml 5 effect handlers.\")\n (depends\n  (ocaml\n   (>= 5.2.0))\n  dune\n  (dune-configurator :build)\n  (nx\n   (= :version))\n  (tolk\n   (= :version))\n  (windtrap :with-test)\n  (mdx :with-test)\n  (thumper :with-test))\n (tags\n  (automatic-differentiation machine-learning deep-learning optimization)))\n\n(package\n (name tolk)\n (dir packages/tolk)\n (synopsis \"A minimal ML compiler for GPU tensor computation\")\n (description\n  \"Tolk is a minimal, readable ML compiler for GPU tensor computation in the Raven ecosystem.\")\n (depends\n  (ocaml\n   (>= 5.2))\n  dune\n  (windtrap :with-test)\n  (thumper :with-test))\n (tags\n  (compiler gpu tensor-computation)))\n\n(package\n (name norn)\n (dir packages/norn)\n (synopsis \"MCMC sampling for OCaml\")\n (description\n  \"Markov chain Monte Carlo samplers with automatic gradients via Rune. Hamiltonian Monte Carlo with dual-averaging step-size adaptation.\")\n (depends\n  (ocaml\n   (>= 5.2.0))\n  dune\n  (nx\n   (= :version))\n  (rune\n   (= :version))\n  (windtrap :with-test)\n  (thumper :with-test))\n (tags\n  (mcmc sampling bayesian machine-learning)))\n\n(package\n (name vega)\n (dir packages/vega)\n (synopsis \"Per-parameter gradient-based optimizers for OCaml\")\n (description\n  \"Typed, per-parameter optimizer primitives: Adam, AdamW, SGD, RMSprop, Adagrad, and learning-rate schedules. 
Built on Nx with no autodiff dependency.\")\n (depends\n  (ocaml\n   (>= 5.2.0))\n  dune\n  (nx\n   (= :version))\n  (windtrap :with-test)\n  (thumper :with-test))\n (tags\n  (optimization machine-learning gradient-descent)))\n\n(package\n (name kaun)\n (dir packages/kaun)\n (synopsis \"Neural networks for OCaml\")\n (description\n  \"Composable layers, parameter trees, optimizers, training loops, data pipelines, and HuggingFace Hub integration. Built on Rune.\")\n (depends\n  (ocaml\n   (>= 5.2.0))\n  dune\n  (rune\n   (= :version))\n  (vega\n   (= :version))\n  (nx\n   (= :version))\n  jsont\n  bytesrw\n  (windtrap :with-test)\n  (mdx :with-test)\n  (thumper :with-test))\n (tags\n  (neural-networks machine-learning deep-learning)))\n\n(package\n (name munin)\n (dir packages/munin)\n (synopsis \"Local experiment tracking for Raven\")\n (description\n  \"Local-first experiment tracking with append-only event logs, versioned artifacts, a terminal dashboard, and a CLI. The core library (munin) provides Session, Run, Store, and Artifact modules. The TUI library (munin.tui) provides a Mosaic-based dashboard.\")\n (depends\n  (ocaml\n   (>= 5.2.0))\n  dune\n  jsont\n  bytesrw\n  sha\n  cmdliner\n  mosaic\n  (dune-configurator :build)\n  matrix\n  (windtrap :with-test))\n (tags\n  (experiment-tracking machine-learning monitoring)))\n\n(package\n (name sowilo)\n (dir packages/sowilo)\n (synopsis \"Differentiable computer vision for OCaml\")\n (description\n  \"Image processing operations expressed as Nx tensor computations. 
Geometric transforms, spatial filters, edge detection, morphological operations, and color space conversions, all compatible with Rune.grad and Rune.vmap.\")\n (depends\n  (ocaml\n   (>= 5.2.0))\n  dune\n  (nx\n   (= :version))\n  (windtrap :with-test)\n  (mdx :with-test)\n  (thumper :with-test))\n (tags\n  (computer-vision image-processing feature-detection machine-learning)))\n\n(package\n (name fehu)\n (dir packages/fehu)\n (synopsis \"Reinforcement learning for OCaml\")\n (description\n  \"Type-safe RL environments, observation/action spaces, vectorized environments, trajectory collection, replay buffers, and generalized advantage estimation. Built on Nx.\")\n (depends\n  (ocaml\n   (>= 5.2.0))\n  dune\n  (nx\n   (= :version))\n  (windtrap :with-test)\n  (mdx :with-test)\n  (thumper :with-test))\n (tags\n  (reinforcement-learning machine-learning environments)))\n\n(package\n (name hugin)\n (dir packages/hugin)\n (synopsis \"Declarative plotting and visualization for OCaml\")\n (description \"Composable, beautiful-by-default plotting built on Nx.\")\n (depends\n  (ocaml\n   (>= 5.2.0))\n  dune\n  (dune-configurator :build)\n  (conf-sdl2 :build)\n  (conf-cairo :build)\n  (nx\n   (= :version))\n  (windtrap :with-test)\n  (mdx :with-test))\n (tags\n  (visualization plotting charts data-science graphics)))\n\n(package\n (name quill)\n (dir packages/quill)\n (synopsis \"Interactive REPL and markdown notebooks\")\n (description\n  \"Quill is a REPL and notebook environment for OCaml. Interactive toplevel with syntax highlighting, completion, and history. 
Markdown notebooks with a terminal UI, web frontend, batch evaluation, and watch mode.\")\n (depends\n  (ocaml\n   (>= 5.2.0))\n  dune\n  cmarkit\n  cmdliner\n  bytesrw\n  jsont\n  mosaic\n  (windtrap :with-test)\n  (mdx :with-test))\n (tags\n  (repl toplevel notebooks interactive-computing literate-programming)))\n\n(package\n (name raven)\n (allow_empty)\n (dir packages/raven)\n (synopsis \"Modern scientific computing for OCaml\")\n (description\n  \"Raven is an ecosystem of composable libraries for numerical computing in OCaml. Tensors, automatic differentiation, neural networks, dataframes, plotting, tokenization, computer vision, reinforcement learning, and interactive notebooks.\")\n (depends\n  (nx\n   (= :version))\n  (tolk\n   (= :version))\n  (brot\n   (= :version))\n  (talon\n   (= :version))\n  (rune\n   (= :version))\n  (vega\n   (= :version))\n  (kaun\n   (= :version))\n  (munin\n   (= :version))\n  (sowilo\n   (= :version))\n  (fehu\n   (= :version))\n  (hugin\n   (= :version))\n  (quill\n   (= :version)))\n (tags\n  (machine-learning data-science numerical-computation)))\n"
  },
  {
    "path": "dune-workspace.tsan",
    "content": "(lang dune 3.21)\n\n(lock_dir\n (path dune.lock))\n\n; Pin ocaml-variants to the 5.4 branch which includes the\n; __tsan_func_exit signature fix (ocaml/ocaml#14082).\n; Remove this pin once OCaml 5.4.2 is released.\n(pin\n (name ocaml-variants)\n (url \"git+https://github.com/ocaml/ocaml#5.4\")\n (package\n  (name ocaml-variants)\n  (version 5.4.2+trunk)))\n\n(lock_dir\n (path dune-tsan.lock)\n (pins ocaml-variants)\n (depopts ocaml-option-tsan))\n\n(context default)\n\n(context\n (default\n  (name tsan)\n  (lock_dir dune-tsan.lock)))\n"
  },
  {
    "path": "opam/brot.opam",
    "content": "# This file is generated by dune, edit dune-project instead\nopam-version: \"2.0\"\nversion: \"1.0.0~alpha3\"\nsynopsis: \"Tokenization for OCaml\"\ndescription:\n  \"Fast, HuggingFace-compatible tokenization for language models. BPE, WordPiece, Unigram, word-level, and character-level algorithms with composable pipelines and training from scratch.\"\nmaintainer: [\"Thibaut Mattio <thibaut.mattio@gmail.com>\"]\nauthors: [\"Thibaut Mattio <thibaut.mattio@gmail.com>\"]\nlicense: \"ISC\"\ntags: [\n  \"tokenization\" \"bpe\" \"wordpiece\" \"subword-tokenization\" \"language-models\"\n]\nhomepage: \"https://github.com/raven-ml/raven\"\ndoc: \"https://raven-ml.dev/docs/\"\nbug-reports: \"https://github.com/raven-ml/raven/issues\"\ndepends: [\n  \"ocaml\" {>= \"5.2.0\"}\n  \"dune\" {>= \"3.21\"}\n  \"re\"\n  \"jsont\"\n  \"bytesrw\"\n  \"uunf\" {>= \"15.1.0\"}\n  \"uucp\"\n  \"windtrap\" {with-test}\n  \"mdx\" {with-test}\n  \"thumper\" {with-test}\n  \"odoc\" {with-doc}\n]\nbuild: [\n  [\"dune\" \"subst\"] {dev}\n  [\n    \"dune\"\n    \"build\"\n    \"-p\"\n    name\n    \"-j\"\n    jobs\n    \"@install\"\n    \"@runtest\" {with-test}\n    \"@doc\" {with-doc}\n  ]\n]\ndev-repo: \"git+https://github.com/raven-ml/raven.git\"\nx-maintenance-intent: [\"(latest)\"]\n"
  },
  {
    "path": "opam/fehu.opam",
    "content": "# This file is generated by dune, edit dune-project instead\nopam-version: \"2.0\"\nversion: \"1.0.0~alpha3\"\nsynopsis: \"Reinforcement learning for OCaml\"\ndescription:\n  \"Type-safe RL environments, observation/action spaces, vectorized environments, trajectory collection, replay buffers, and generalized advantage estimation. Built on Nx.\"\nmaintainer: [\"Thibaut Mattio <thibaut.mattio@gmail.com>\"]\nauthors: [\"Thibaut Mattio <thibaut.mattio@gmail.com>\"]\nlicense: \"ISC\"\ntags: [\"reinforcement-learning\" \"machine-learning\" \"environments\"]\nhomepage: \"https://github.com/raven-ml/raven\"\ndoc: \"https://raven-ml.dev/docs/\"\nbug-reports: \"https://github.com/raven-ml/raven/issues\"\ndepends: [\n  \"ocaml\" {>= \"5.2.0\"}\n  \"dune\" {>= \"3.21\"}\n  \"nx\" {= version}\n  \"windtrap\" {with-test}\n  \"mdx\" {with-test}\n  \"thumper\" {with-test}\n  \"odoc\" {with-doc}\n]\nbuild: [\n  [\"dune\" \"subst\"] {dev}\n  [\n    \"dune\"\n    \"build\"\n    \"-p\"\n    name\n    \"-j\"\n    jobs\n    \"@install\"\n    \"@runtest\" {with-test}\n    \"@doc\" {with-doc}\n  ]\n]\ndev-repo: \"git+https://github.com/raven-ml/raven.git\"\nx-maintenance-intent: [\"(latest)\"]\n"
  },
  {
    "path": "opam/hugin.opam",
    "content": "# This file is generated by dune, edit dune-project instead\nopam-version: \"2.0\"\nversion: \"1.0.0~alpha3\"\nsynopsis: \"Declarative plotting and visualization for OCaml\"\ndescription: \"Composable, beautiful-by-default plotting built on Nx.\"\nmaintainer: [\"Thibaut Mattio <thibaut.mattio@gmail.com>\"]\nauthors: [\"Thibaut Mattio <thibaut.mattio@gmail.com>\"]\nlicense: \"ISC\"\ntags: [\"visualization\" \"plotting\" \"charts\" \"data-science\" \"graphics\"]\nhomepage: \"https://github.com/raven-ml/raven\"\ndoc: \"https://raven-ml.dev/docs/\"\nbug-reports: \"https://github.com/raven-ml/raven/issues\"\ndepends: [\n  \"ocaml\" {>= \"5.2.0\"}\n  \"dune\" {>= \"3.21\"}\n  \"dune-configurator\" {build}\n  \"conf-sdl2\" {build}\n  \"conf-cairo\" {build}\n  \"nx\" {= version}\n  \"windtrap\" {with-test}\n  \"mdx\" {with-test}\n  \"odoc\" {with-doc}\n]\nbuild: [\n  [\"dune\" \"subst\"] {dev}\n  [\n    \"dune\"\n    \"build\"\n    \"-p\"\n    name\n    \"-j\"\n    jobs\n    \"@install\"\n    \"@runtest\" {with-test}\n    \"@doc\" {with-doc}\n  ]\n]\ndev-repo: \"git+https://github.com/raven-ml/raven.git\"\nx-maintenance-intent: [\"(latest)\"]\n"
  },
  {
    "path": "opam/kaun-board.opam",
    "content": "# This file is generated by dune, edit dune-project instead\nopam-version: \"2.0\"\nversion: \"1.0.0~alpha3\"\nsynopsis: \"Training dashboard and logging for Raven\"\ndescription:\n  \"Lightweight training logger and terminal dashboard for monitoring runs. The core library (kaun-board) provides a Log API for writing JSONL events and a reader for consuming them. The TUI library (kaun-board.tui) provides a Mosaic-based dashboard.\"\nmaintainer: [\"Thibaut Mattio <thibaut.mattio@gmail.com>\"]\nauthors: [\"Thibaut Mattio <thibaut.mattio@gmail.com>\"]\nlicense: \"ISC\"\ntags: [\"training-dashboard\" \"monitoring\" \"logging\" \"machine-learning\"]\nhomepage: \"https://github.com/raven-ml/raven\"\ndoc: \"https://raven-ml.dev/docs/\"\nbug-reports: \"https://github.com/raven-ml/raven/issues\"\ndepends: [\n  \"ocaml\" {>= \"5.2.0\"}\n  \"dune\" {>= \"3.21\"}\n  \"jsont\"\n  \"bytesrw\"\n  \"cmdliner\"\n  \"mosaic\"\n  \"dune-configurator\" {build}\n  \"matrix\"\n  \"odoc\" {with-doc}\n]\nbuild: [\n  [\"dune\" \"subst\"] {dev}\n  [\n    \"dune\"\n    \"build\"\n    \"-p\"\n    name\n    \"-j\"\n    jobs\n    \"@install\"\n    \"@runtest\" {with-test}\n    \"@doc\" {with-doc}\n  ]\n]\ndev-repo: \"git+https://github.com/raven-ml/raven.git\"\nx-maintenance-intent: [\"(latest)\"]\n"
  },
  {
    "path": "opam/kaun.opam",
    "content": "# This file is generated by dune, edit dune-project instead\nopam-version: \"2.0\"\nversion: \"1.0.0~alpha3\"\nsynopsis: \"Neural networks for OCaml\"\ndescription:\n  \"Composable layers, parameter trees, optimizers, training loops, data pipelines, and HuggingFace Hub integration. Built on Rune.\"\nmaintainer: [\"Thibaut Mattio <thibaut.mattio@gmail.com>\"]\nauthors: [\"Thibaut Mattio <thibaut.mattio@gmail.com>\"]\nlicense: \"ISC\"\ntags: [\"neural-networks\" \"machine-learning\" \"deep-learning\"]\nhomepage: \"https://github.com/raven-ml/raven\"\ndoc: \"https://raven-ml.dev/docs/\"\nbug-reports: \"https://github.com/raven-ml/raven/issues\"\ndepends: [\n  \"ocaml\" {>= \"5.2.0\"}\n  \"dune\" {>= \"3.21\"}\n  \"rune\" {= version}\n  \"vega\" {= version}\n  \"nx\" {= version}\n  \"jsont\"\n  \"bytesrw\"\n  \"windtrap\" {with-test}\n  \"mdx\" {with-test}\n  \"thumper\" {with-test}\n  \"odoc\" {with-doc}\n]\nbuild: [\n  [\"dune\" \"subst\"] {dev}\n  [\n    \"dune\"\n    \"build\"\n    \"-p\"\n    name\n    \"-j\"\n    jobs\n    \"@install\"\n    \"@runtest\" {with-test}\n    \"@doc\" {with-doc}\n  ]\n]\ndev-repo: \"git+https://github.com/raven-ml/raven.git\"\nx-maintenance-intent: [\"(latest)\"]\n"
  },
  {
    "path": "opam/munin.opam",
    "content": "# This file is generated by dune, edit dune-project instead\nopam-version: \"2.0\"\nversion: \"1.0.0~alpha3\"\nsynopsis: \"Local experiment tracking for Raven\"\ndescription:\n  \"Local-first experiment tracking with append-only event logs, versioned artifacts, a terminal dashboard, and a CLI. The core library (munin) provides Session, Run, Store, and Artifact modules. The TUI library (munin.tui) provides a Mosaic-based dashboard.\"\nmaintainer: [\"Thibaut Mattio <thibaut.mattio@gmail.com>\"]\nauthors: [\"Thibaut Mattio <thibaut.mattio@gmail.com>\"]\nlicense: \"ISC\"\ntags: [\"experiment-tracking\" \"machine-learning\" \"monitoring\"]\nhomepage: \"https://github.com/raven-ml/raven\"\ndoc: \"https://raven-ml.dev/docs/\"\nbug-reports: \"https://github.com/raven-ml/raven/issues\"\ndepends: [\n  \"ocaml\" {>= \"5.2.0\"}\n  \"dune\" {>= \"3.21\"}\n  \"jsont\"\n  \"bytesrw\"\n  \"sha\"\n  \"cmdliner\"\n  \"mosaic\"\n  \"dune-configurator\" {build}\n  \"matrix\"\n  \"windtrap\" {with-test}\n  \"odoc\" {with-doc}\n]\nbuild: [\n  [\"dune\" \"subst\"] {dev}\n  [\n    \"dune\"\n    \"build\"\n    \"-p\"\n    name\n    \"-j\"\n    jobs\n    \"@install\"\n    \"@runtest\" {with-test}\n    \"@doc\" {with-doc}\n  ]\n]\ndev-repo: \"git+https://github.com/raven-ml/raven.git\"\nx-maintenance-intent: [\"(latest)\"]\n"
  },
  {
    "path": "opam/norn.opam",
    "content": "# This file is generated by dune, edit dune-project instead\nopam-version: \"2.0\"\nversion: \"1.0.0~alpha3\"\nsynopsis: \"MCMC sampling for OCaml\"\ndescription:\n  \"Markov chain Monte Carlo samplers with automatic gradients via Rune. Hamiltonian Monte Carlo with dual-averaging step-size adaptation.\"\nmaintainer: [\"Thibaut Mattio <thibaut.mattio@gmail.com>\"]\nauthors: [\"Thibaut Mattio <thibaut.mattio@gmail.com>\"]\nlicense: \"ISC\"\ntags: [\"mcmc\" \"sampling\" \"bayesian\" \"machine-learning\"]\nhomepage: \"https://github.com/raven-ml/raven\"\ndoc: \"https://raven-ml.dev/docs/\"\nbug-reports: \"https://github.com/raven-ml/raven/issues\"\ndepends: [\n  \"ocaml\" {>= \"5.2.0\"}\n  \"dune\" {>= \"3.21\"}\n  \"nx\" {= version}\n  \"rune\" {= version}\n  \"windtrap\" {with-test}\n  \"thumper\" {with-test}\n  \"odoc\" {with-doc}\n]\nbuild: [\n  [\"dune\" \"subst\"] {dev}\n  [\n    \"dune\"\n    \"build\"\n    \"-p\"\n    name\n    \"-j\"\n    jobs\n    \"@install\"\n    \"@runtest\" {with-test}\n    \"@doc\" {with-doc}\n  ]\n]\ndev-repo: \"git+https://github.com/raven-ml/raven.git\"\nx-maintenance-intent: [\"(latest)\"]\n"
  },
  {
    "path": "opam/nx.opam",
    "content": "# This file is generated by dune, edit dune-project instead\nopam-version: \"2.0\"\nversion: \"1.0.0~alpha3\"\nsynopsis: \"N-dimensional arrays for OCaml\"\ndescription:\n  \"Nx provides n-dimensional arrays with NumPy-like semantics and OCaml's type safety. 19 data types, broadcasting, slicing, linear algebra, FFT, and I/O. The numerical foundation for the Raven ecosystem.\"\nmaintainer: [\"Thibaut Mattio <thibaut.mattio@gmail.com>\"]\nauthors: [\"Thibaut Mattio <thibaut.mattio@gmail.com>\"]\nlicense: \"ISC\"\ntags: [\"numerical-computation\" \"tensor-library\" \"machine-learning\"]\nhomepage: \"https://github.com/raven-ml/raven\"\ndoc: \"https://raven-ml.dev/docs/\"\nbug-reports: \"https://github.com/raven-ml/raven/issues\"\ndepends: [\n  \"ocaml\" {>= \"5.2.0\"}\n  \"dune\" {>= \"3.21\"}\n  \"dune-configurator\" {build}\n  \"conf-pkg-config\" {build}\n  \"conf-zlib\" {build}\n  \"logs\"\n  \"windtrap\" {with-test}\n  \"mdx\" {with-test}\n  \"thumper\" {with-test}\n  \"odoc\" {with-doc}\n]\nbuild: [\n  [\"dune\" \"subst\"] {dev}\n  [\n    \"dune\"\n    \"build\"\n    \"-p\"\n    name\n    \"-j\"\n    jobs\n    \"@install\"\n    \"@runtest\" {with-test}\n    \"@doc\" {with-doc}\n  ]\n]\ndev-repo: \"git+https://github.com/raven-ml/raven.git\"\nx-maintenance-intent: [\"(latest)\"]\ndepexts: [\n  [\"libc-dev\" \"openblas-dev\" \"lapack-dev\"] {os-distribution = \"alpine\"}\n  [\"epel-release\" \"openblas-devel\"] {os-distribution = \"centos\"}\n  [\"libopenblas-dev\" \"liblapacke-dev\"] {os-family = \"debian\"}\n  [\"libopenblas-dev\" \"liblapacke-dev\"] {os-family = \"ubuntu\"}\n  [\"openblas-devel\"] {os-family = \"fedora\"}\n  [\"libopenblas_openmp-devel\"] {os-family = \"suse\" | os-family = \"opensuse\"}\n  [\"openblas\" \"lapacke\" \"cblas\"] {os-distribution = \"arch\"}\n  [\"openblas\"] {os = \"macos\" & os-distribution = \"homebrew\"}\n  [\"openblas\" \"lapacke\"] {os = \"freebsd\"}\n  [\"mingw64-x86_64-cblas\" \"mingw64-x86_64-lapack\"] {os 
= \"cygwin\"}\n]\nx-ci-accept-failures: [\n  \"oraclelinux-7\"\n  \"oraclelinux-8\"\n  \"oraclelinux-9\"\n]\n"
  },
  {
    "path": "opam/nx.opam.template",
    "content": "depexts: [\n  [\"libc-dev\" \"openblas-dev\" \"lapack-dev\"] {os-distribution = \"alpine\"}\n  [\"epel-release\" \"openblas-devel\"] {os-distribution = \"centos\"}\n  [\"libopenblas-dev\" \"liblapacke-dev\"] {os-family = \"debian\"}\n  [\"libopenblas-dev\" \"liblapacke-dev\"] {os-family = \"ubuntu\"}\n  [\"openblas-devel\"] {os-family = \"fedora\"}\n  [\"libopenblas_openmp-devel\"] {os-family = \"suse\" | os-family = \"opensuse\"}\n  [\"openblas\" \"lapacke\" \"cblas\"] {os-distribution = \"arch\"}\n  [\"openblas\"] {os = \"macos\" & os-distribution = \"homebrew\"}\n  [\"openblas\" \"lapacke\"] {os = \"freebsd\"}\n  [\"mingw64-x86_64-cblas\" \"mingw64-x86_64-lapack\"] {os = \"cygwin\"}\n]\nx-ci-accept-failures: [\n  \"oraclelinux-7\"\n  \"oraclelinux-8\"\n  \"oraclelinux-9\"\n]\n"
  },
  {
    "path": "opam/quill.opam",
    "content": "# This file is generated by dune, edit dune-project instead\nopam-version: \"2.0\"\nversion: \"1.0.0~alpha3\"\nsynopsis: \"Interactive REPL and markdown notebooks\"\ndescription:\n  \"Quill is a REPL and notebook environment for OCaml. Interactive toplevel with syntax highlighting, completion, and history. Markdown notebooks with a terminal UI, web frontend, batch evaluation, and watch mode.\"\nmaintainer: [\"Thibaut Mattio <thibaut.mattio@gmail.com>\"]\nauthors: [\"Thibaut Mattio <thibaut.mattio@gmail.com>\"]\nlicense: \"ISC\"\ntags: [\n  \"repl\"\n  \"toplevel\"\n  \"notebooks\"\n  \"interactive-computing\"\n  \"literate-programming\"\n]\nhomepage: \"https://github.com/raven-ml/raven\"\ndoc: \"https://raven-ml.dev/docs/\"\nbug-reports: \"https://github.com/raven-ml/raven/issues\"\ndepends: [\n  \"ocaml\" {>= \"5.2.0\"}\n  \"dune\" {>= \"3.21\"}\n  \"cmarkit\"\n  \"cmdliner\"\n  \"bytesrw\"\n  \"jsont\"\n  \"mosaic\"\n  \"windtrap\" {with-test}\n  \"mdx\" {with-test}\n  \"odoc\" {with-doc}\n]\nbuild: [\n  [\"dune\" \"subst\"] {dev}\n  [\n    \"dune\"\n    \"build\"\n    \"-p\"\n    name\n    \"-j\"\n    jobs\n    \"@install\"\n    \"@runtest\" {with-test}\n    \"@doc\" {with-doc}\n  ]\n]\ndev-repo: \"git+https://github.com/raven-ml/raven.git\"\nx-maintenance-intent: [\"(latest)\"]\n"
  },
  {
    "path": "opam/raven.opam",
    "content": "# This file is generated by dune, edit dune-project instead\nopam-version: \"2.0\"\nversion: \"1.0.0~alpha3\"\nsynopsis: \"Modern scientific computing for OCaml\"\ndescription:\n  \"Raven is an ecosystem of composable libraries for numerical computing in OCaml. Tensors, automatic differentiation, neural networks, dataframes, plotting, tokenization, computer vision, reinforcement learning, and interactive notebooks.\"\nmaintainer: [\"Thibaut Mattio <thibaut.mattio@gmail.com>\"]\nauthors: [\"Thibaut Mattio <thibaut.mattio@gmail.com>\"]\nlicense: \"ISC\"\ntags: [\"machine-learning\" \"data-science\" \"numerical-computation\"]\nhomepage: \"https://github.com/raven-ml/raven\"\ndoc: \"https://raven-ml.dev/docs/\"\nbug-reports: \"https://github.com/raven-ml/raven/issues\"\ndepends: [\n  \"dune\" {>= \"3.21\"}\n  \"nx\" {= version}\n  \"tolk\" {= version}\n  \"brot\" {= version}\n  \"talon\" {= version}\n  \"rune\" {= version}\n  \"vega\" {= version}\n  \"kaun\" {= version}\n  \"munin\" {= version}\n  \"sowilo\" {= version}\n  \"fehu\" {= version}\n  \"hugin\" {= version}\n  \"quill\" {= version}\n  \"odoc\" {with-doc}\n]\nbuild: [\n  [\"dune\" \"subst\"] {dev}\n  [\n    \"dune\"\n    \"build\"\n    \"-p\"\n    name\n    \"-j\"\n    jobs\n    \"@install\"\n    \"@runtest\" {with-test}\n    \"@doc\" {with-doc}\n  ]\n]\ndev-repo: \"git+https://github.com/raven-ml/raven.git\"\nx-maintenance-intent: [\"(latest)\"]\n"
  },
  {
    "path": "opam/rune.opam",
    "content": "# This file is generated by dune, edit dune-project instead\nopam-version: \"2.0\"\nversion: \"1.0.0~alpha3\"\nsynopsis: \"Functional transformations for Nx arrays\"\ndescription:\n  \"Automatic differentiation and vectorizing maps for Nx tensors. Reverse-mode AD (grad, vjp), forward-mode AD (jvp), vmap, and gradient checking, built on OCaml 5 effect handlers.\"\nmaintainer: [\"Thibaut Mattio <thibaut.mattio@gmail.com>\"]\nauthors: [\"Thibaut Mattio <thibaut.mattio@gmail.com>\"]\nlicense: \"ISC\"\ntags: [\n  \"automatic-differentiation\"\n  \"machine-learning\"\n  \"deep-learning\"\n  \"optimization\"\n]\nhomepage: \"https://github.com/raven-ml/raven\"\ndoc: \"https://raven-ml.dev/docs/\"\nbug-reports: \"https://github.com/raven-ml/raven/issues\"\ndepends: [\n  \"ocaml\" {>= \"5.2.0\"}\n  \"dune\" {>= \"3.21\"}\n  \"dune-configurator\" {build}\n  \"nx\" {= version}\n  \"tolk\" {= version}\n  \"windtrap\" {with-test}\n  \"mdx\" {with-test}\n  \"thumper\" {with-test}\n  \"odoc\" {with-doc}\n]\nbuild: [\n  [\"dune\" \"subst\"] {dev}\n  [\n    \"dune\"\n    \"build\"\n    \"-p\"\n    name\n    \"-j\"\n    jobs\n    \"@install\"\n    \"@runtest\" {with-test}\n    \"@doc\" {with-doc}\n  ]\n]\ndev-repo: \"git+https://github.com/raven-ml/raven.git\"\nx-maintenance-intent: [\"(latest)\"]\n"
  },
  {
    "path": "opam/sowilo.opam",
    "content": "# This file is generated by dune, edit dune-project instead\nopam-version: \"2.0\"\nversion: \"1.0.0~alpha3\"\nsynopsis: \"Differentiable computer vision for OCaml\"\ndescription:\n  \"Image processing operations expressed as Nx tensor computations. Geometric transforms, spatial filters, edge detection, morphological operations, and color space conversions, all compatible with Rune.grad and Rune.vmap.\"\nmaintainer: [\"Thibaut Mattio <thibaut.mattio@gmail.com>\"]\nauthors: [\"Thibaut Mattio <thibaut.mattio@gmail.com>\"]\nlicense: \"ISC\"\ntags: [\n  \"computer-vision\" \"image-processing\" \"feature-detection\" \"machine-learning\"\n]\nhomepage: \"https://github.com/raven-ml/raven\"\ndoc: \"https://raven-ml.dev/docs/\"\nbug-reports: \"https://github.com/raven-ml/raven/issues\"\ndepends: [\n  \"ocaml\" {>= \"5.2.0\"}\n  \"dune\" {>= \"3.21\"}\n  \"nx\" {= version}\n  \"windtrap\" {with-test}\n  \"mdx\" {with-test}\n  \"thumper\" {with-test}\n  \"odoc\" {with-doc}\n]\nbuild: [\n  [\"dune\" \"subst\"] {dev}\n  [\n    \"dune\"\n    \"build\"\n    \"-p\"\n    name\n    \"-j\"\n    jobs\n    \"@install\"\n    \"@runtest\" {with-test}\n    \"@doc\" {with-doc}\n  ]\n]\ndev-repo: \"git+https://github.com/raven-ml/raven.git\"\nx-maintenance-intent: [\"(latest)\"]\n"
  },
  {
    "path": "opam/talon.opam",
    "content": "# This file is generated by dune, edit dune-project instead\nopam-version: \"2.0\"\nversion: \"1.0.0~alpha3\"\nsynopsis: \"Dataframes for OCaml\"\ndescription:\n  \"Fast and elegant dataframes with type-safe operations. Heterogeneous columns, applicative row operations, vectorized aggregations, and CSV I/O, built on Nx.\"\nmaintainer: [\"Thibaut Mattio <thibaut.mattio@gmail.com>\"]\nauthors: [\"Thibaut Mattio <thibaut.mattio@gmail.com>\"]\nlicense: \"ISC\"\ntags: [\"dataframe\" \"data-manipulation\" \"data-science\" \"tabular-data\"]\nhomepage: \"https://github.com/raven-ml/raven\"\ndoc: \"https://raven-ml.dev/docs/\"\nbug-reports: \"https://github.com/raven-ml/raven/issues\"\ndepends: [\n  \"ocaml\" {>= \"5.2.0\"}\n  \"dune\" {>= \"3.21\"}\n  \"nx\" {= version}\n  \"windtrap\" {with-test}\n  \"mdx\" {with-test}\n  \"thumper\" {with-test}\n  \"odoc\" {with-doc}\n]\nbuild: [\n  [\"dune\" \"subst\"] {dev}\n  [\n    \"dune\"\n    \"build\"\n    \"-p\"\n    name\n    \"-j\"\n    jobs\n    \"@install\"\n    \"@runtest\" {with-test}\n    \"@doc\" {with-doc}\n  ]\n]\ndev-repo: \"git+https://github.com/raven-ml/raven.git\"\nx-maintenance-intent: [\"(latest)\"]\n"
  },
  {
    "path": "opam/tolk.opam",
    "content": "# This file is generated by dune, edit dune-project instead\nopam-version: \"2.0\"\nversion: \"1.0.0~alpha3\"\nsynopsis: \"A minimal ML compiler for GPU tensor computation\"\ndescription:\n  \"Tolk is a minimal, readable ML compiler for GPU tensor computation in the Raven ecosystem.\"\nmaintainer: [\"Thibaut Mattio <thibaut.mattio@gmail.com>\"]\nauthors: [\"Thibaut Mattio <thibaut.mattio@gmail.com>\"]\nlicense: \"ISC\"\ntags: [\"compiler\" \"gpu\" \"tensor-computation\"]\nhomepage: \"https://github.com/raven-ml/raven\"\ndoc: \"https://raven-ml.dev/docs/\"\nbug-reports: \"https://github.com/raven-ml/raven/issues\"\ndepends: [\n  \"ocaml\" {>= \"5.2\"}\n  \"dune\" {>= \"3.21\"}\n  \"windtrap\" {with-test}\n  \"thumper\" {with-test}\n  \"odoc\" {with-doc}\n]\nbuild: [\n  [\"dune\" \"subst\"] {dev}\n  [\n    \"dune\"\n    \"build\"\n    \"-p\"\n    name\n    \"-j\"\n    jobs\n    \"@install\"\n    \"@runtest\" {with-test}\n    \"@doc\" {with-doc}\n  ]\n]\ndev-repo: \"git+https://github.com/raven-ml/raven.git\"\nx-maintenance-intent: [\"(latest)\"]\n"
  },
  {
    "path": "opam/vega.opam",
    "content": "# This file is generated by dune, edit dune-project instead\nopam-version: \"2.0\"\nversion: \"1.0.0~alpha3\"\nsynopsis: \"Per-parameter gradient-based optimizers for OCaml\"\ndescription:\n  \"Typed, per-parameter optimizer primitives: Adam, AdamW, SGD, RMSprop, Adagrad, and learning-rate schedules. Built on Nx with no autodiff dependency.\"\nmaintainer: [\"Thibaut Mattio <thibaut.mattio@gmail.com>\"]\nauthors: [\"Thibaut Mattio <thibaut.mattio@gmail.com>\"]\nlicense: \"ISC\"\ntags: [\"optimization\" \"machine-learning\" \"gradient-descent\"]\nhomepage: \"https://github.com/raven-ml/raven\"\ndoc: \"https://raven-ml.dev/docs/\"\nbug-reports: \"https://github.com/raven-ml/raven/issues\"\ndepends: [\n  \"ocaml\" {>= \"5.2.0\"}\n  \"dune\" {>= \"3.21\"}\n  \"nx\" {= version}\n  \"windtrap\" {with-test}\n  \"thumper\" {with-test}\n  \"odoc\" {with-doc}\n]\nbuild: [\n  [\"dune\" \"subst\"] {dev}\n  [\n    \"dune\"\n    \"build\"\n    \"-p\"\n    name\n    \"-j\"\n    jobs\n    \"@install\"\n    \"@runtest\" {with-test}\n    \"@doc\" {with-doc}\n  ]\n]\ndev-repo: \"git+https://github.com/raven-ml/raven.git\"\nx-maintenance-intent: [\"(latest)\"]\n"
  },
  {
    "path": "packages/brot/README.md",
    "content": "# Brot\n\nFast tokenization library for OCaml.\n\nBrot tokenizes text into token IDs for language models and reverses the\nprocess. It is part of the Raven ecosystem. It loads and saves HuggingFace\n`tokenizer.json` files, supports BPE, WordPiece, Unigram, word-level, and\ncharacter-level algorithms, and is 1.3-6x faster than HuggingFace\ntokenizers on most benchmarks.\n\n## Features\n\n- Tokenization algorithms: BPE, WordPiece, Unigram, word-level, character-level\n- HuggingFace compatible: load and save `tokenizer.json` files, load\n  vocab/merges model files\n- Composable pipeline: normalizer, pre-tokenizer, post-processor, decoder\n  — each stage independently configurable\n- Rich encoding output: token IDs, string tokens, byte offsets, attention\n  masks, type IDs, word IDs, special token masks\n- Training: train BPE, WordPiece, Unigram, and word-level tokenizers from\n  scratch\n- Performance: 1.3-6x faster than HuggingFace tokenizers (Rust native) on\n  most benchmarks — see [bench/](bench/) for details\n\n## Quick Start\n\n<!-- $MDX skip -->\n```ocaml\nopen Brot\n\nlet () =\n  (* Load a pretrained HuggingFace tokenizer *)\n  let tokenizer = from_file \"tokenizer.json\" |> Result.get_ok in\n\n  (* Encode text to token IDs *)\n  let encoding = encode tokenizer \"Hello world!\" in\n  let ids = Encoding.ids encoding in\n  Printf.printf \"Token IDs: \";\n  Array.iter (fun id -> Printf.printf \"%d \" id) ids;\n  print_newline ();\n\n  (* Decode back to text *)\n  let text = decode tokenizer ids in\n  Printf.printf \"Decoded: %s\\n\" text\n```\n\n## Contributing\n\nSee the [Raven monorepo README](../../README.md) for contribution guidelines.\n\n## License\n\nISC License. See [LICENSE](../../LICENSE) for details.\n"
  },
  {
    "path": "packages/brot/bench/README.md",
    "content": "# Brot Benchmarks\n\nThis directory contains micro-benchmarks for the `brot` library.\nThe suite mirrors HuggingFace's `tokenizers` so we can compare wall-clock\nthroughput for realistic workloads and catch regressions.\n\n## Fixtures\n\nBenchmark inputs live in `./data/`:\n\n- `news_1k.txt`, `wiki_64k.txt`, `code_excerpt.txt` — sample corpora used for\n  encoding workloads.\n- `gpt2.json` — OpenAI GPT-2 (BPE, 50K vocab, 50K merges)\n- `bert_base.json` — Google BERT-base-uncased (WordPiece, 30K vocab)\n- `llama.json` — Meta LLaMA (BPE, 32K vocab, 61K merges, no pre-tokenizer)\n\nDownload the tokenizer model files (from the repository root):\n\n```bash\npackages/brot/bench/download_data.sh\n```\n\n## Running the Benchmarks\n\n### Brot (OCaml)\n\n```bash\ndune exec packages/brot/bench/bench_brot.exe -- --gc\n```\n\n### tokenizers — Rust native\n\n```bash\ncd packages/brot/bench/bench_rust && cargo run --release\n```\n\n### tokenizers — Python (Rust FFI)\n\n```bash\nuv run --with tokenizers packages/brot/bench/bench_tokenizers.py\n```\n\n## Comparison\n\nWall-clock time per run. Lower is better. 
Apple M3 Pro, macOS.\n\n| Benchmark                            | Brot (OCaml) | Rust native | Python (Rust FFI) | Brot vs Rust |\n| ------------------------------------ | ------------ | ----------- | ----------------- | ------------ |\n| **GPT-2** (BPE, 50K vocab)           |              |             |                   |              |\n| Encode/short (1KB)                   | 46μs         | 209μs       | 250μs             | **4.5x**     |\n| Encode/long (64KB)                   | 5.26ms       | 10.25ms     | 13.27ms           | **1.9x**     |\n| Encode/batch_32                      | 1.38ms       | 3.05ms      | 3.91ms            | **2.2x**     |\n| Decode/long                          | 1.19ms       | 1.50ms      | 1.58ms            | **1.3x**     |\n| **BERT-base** (WordPiece, 30K vocab) |              |             |                   |              |\n| Encode/short (1KB)                   | 137μs        | 278μs       | 325μs             | **2.0x**     |\n| Encode/long (64KB)                   | 10.87ms      | 13.95ms     | 16.64ms           | **1.3x**     |\n| Encode/batch_32                      | 2.06ms       | 2.31ms      | 2.66ms            | **1.1x**     |\n| Decode/long                          | 1.25ms       | 7.63ms      | 7.76ms            | **6.1x**     |\n| **LLaMA** (BPE, 32K vocab)           |              |             |                   |              |\n| Encode/short (1KB)                   | 51μs         | 207μs       | 247μs             | **4.1x**     |\n| Encode/long (64KB)                   | 20.15ms      | 13.41ms     | 16.23ms           | 1.5x slower  |\n| Encode/batch_32                      | 1.43ms       | 1.56ms      | 1.51ms            | ~par         |\n| Decode/long                          | 1.12ms       | 5.02ms      | 5.03ms            | **4.5x**     |\n\nNotes:\n- The \"Rust native\" column calls the `tokenizers` crate directly, no Python FFI.\n  Source: `bench_rust/main.rs`.\n- Both brot and HF tokenizers use 
multi-threading for batch encoding (wall < CPU).\n- LLaMA has no pre-tokenizer, so the entire text goes through BPE as a single\n  sequence — this is where brot's BPE is slower on long inputs.\n"
  },
  {
    "path": "packages/brot/bench/bench_brot.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Benchmark suite for Brot tokenizers using realistic fixtures. *)\n\nopen Brot\n\nmodule Fixtures = struct\n  let data_dir = Filename.concat (Sys.getcwd ()) \"packages/brot/bench/data\"\n\n  let read_file name =\n    let path = Filename.concat data_dir name in\n    let ic = open_in_bin path in\n    let len = in_channel_length ic in\n    let content = really_input_string ic len in\n    close_in ic;\n    content\n\n  let load_tokenizer name =\n    let path = Filename.concat data_dir name in\n    match from_file path with\n    | Ok tok -> tok\n    | Error msg ->\n        failwith (Printf.sprintf \"Failed to load tokenizer %s: %s\" path msg)\n\n  let short_text = read_file \"news_1k.txt\"\n  let long_text = read_file \"wiki_64k.txt\"\n\n  let batch_32 =\n    let rec loop acc remaining =\n      if remaining = 0 then List.rev acc\n      else loop (short_text :: acc) (remaining - 1)\n    in\n    loop [] 32\nend\n\nlet encode_single tok text = encode tok text\nlet encode_batch tok texts = encode_batch tok texts\nlet decode_ids tok ids = decode tok ids\n\nlet make_suite ~label ~tokenizer =\n  let open Fixtures in\n  let decode_input =\n    let encoding = encode tokenizer long_text in\n    Array.copy (Encoding.ids encoding)\n  in\n  let benches =\n    [\n      Thumper.bench \"Encode/single_short\" (fun () ->\n          encode_single tokenizer short_text);\n      Thumper.bench \"Encode/single_long\" (fun () ->\n          encode_single tokenizer long_text);\n      Thumper.bench \"Encode/batch_32\" (fun () ->\n          encode_batch tokenizer batch_32);\n      Thumper.bench \"Decode/long\" (fun () -> decode_ids tokenizer decode_input);\n    ]\n  in\n  Thumper.group label benches\n\nlet all_benchmarks =\n  let 
open Fixtures in\n  let gpt2 =\n    make_suite ~label:\"GPT-2\" ~tokenizer:(load_tokenizer \"gpt2.json\")\n  in\n  let bert =\n    make_suite ~label:\"BERT-base\" ~tokenizer:(load_tokenizer \"bert_base.json\")\n  in\n  let llama =\n    make_suite ~label:\"LLaMA\" ~tokenizer:(load_tokenizer \"llama.json\")\n  in\n  [ gpt2; bert; llama ]\n\nlet () = Thumper.run \"brot\" all_benchmarks\n"
  },
  {
    "path": "packages/brot/bench/bench_rust/.gitignore",
    "content": "/target\n"
  },
  {
    "path": "packages/brot/bench/bench_rust/Cargo.toml",
    "content": "[package]\nname = \"bench_tokenizers_rust\"\nedition = \"2021\"\n\n[[bin]]\nname = \"bench_tokenizers_rust\"\npath = \"main.rs\"\n\n[dependencies]\ntokenizers = \"0.22\"\n"
  },
  {
    "path": "packages/brot/bench/bench_rust/main.rs",
    "content": "use std::fs;\nuse std::path::Path;\nuse std::time::{Duration, Instant};\nuse tokenizers::Tokenizer;\n\nconst WARMUP: usize = 4;\nconst TIME_QUOTA: Duration = Duration::from_millis(300);\nconst MIN_MEASUREMENTS: usize = 3;\n\nstruct BenchResult {\n    name: String,\n    wall_per_run: Duration,\n    runs: usize,\n}\n\nfn bench<F: FnMut()>(name: &str, mut f: F) -> BenchResult {\n    // Warmup\n    for _ in 0..WARMUP {\n        f();\n    }\n\n    // Adaptive batching: start with batch_size=1, scale up until each batch\n    // takes at least 2ms of wall time, then collect measurements for ~0.3s.\n    let mut batch_size: usize = 1;\n    let mut measurements: Vec<Duration> = Vec::new();\n    let bench_start = Instant::now();\n\n    loop {\n        let start = Instant::now();\n        for _ in 0..batch_size {\n            f();\n        }\n        let elapsed = start.elapsed();\n\n        if elapsed.as_secs_f64() < 0.002 {\n            // Batch too fast, scale up\n            batch_size = (batch_size as f64 * 1.3).ceil().max((batch_size + 1) as f64) as usize;\n            continue;\n        }\n\n        let per_run = elapsed / batch_size as u32;\n        measurements.push(per_run);\n\n        let total_elapsed = bench_start.elapsed();\n        if measurements.len() >= MIN_MEASUREMENTS && total_elapsed >= TIME_QUOTA {\n            break;\n        }\n\n        batch_size = (batch_size as f64 * 1.3).ceil().max((batch_size + 1) as f64) as usize;\n    }\n\n    // Compute average\n    let total: Duration = measurements.iter().sum();\n    let avg = total / measurements.len() as u32;\n\n    BenchResult {\n        name: name.to_string(),\n        wall_per_run: avg,\n        runs: measurements.len(),\n    }\n}\n\nfn format_duration(d: Duration) -> String {\n    let nanos = d.as_nanos() as f64;\n    if nanos < 1_000.0 {\n        format!(\"{:.2}ns\", nanos)\n    } else if nanos < 1_000_000.0 {\n        format!(\"{:.2}μs\", nanos / 1_000.0)\n    } else if nanos < 
1_000_000_000.0 {\n        format!(\"{:.2}ms\", nanos / 1_000_000.0)\n    } else {\n        format!(\"{:.2}s\", nanos / 1_000_000_000.0)\n    }\n}\n\nfn run_suite(label: &str, tokenizer: &Tokenizer, short_text: &str, long_text: &str) {\n    let batch_32: Vec<&str> = vec![short_text; 32];\n\n    // Pre-compute decode input\n    let encoding = tokenizer\n        .encode(long_text, false)\n        .expect(\"encode for decode input\");\n    let decode_ids: Vec<u32> = encoding.get_ids().to_vec();\n\n    let results = vec![\n        bench(&format!(\"{}/Encode/single_short\", label), || {\n            tokenizer.encode(short_text, false).unwrap();\n        }),\n        bench(&format!(\"{}/Encode/single_long\", label), || {\n            tokenizer.encode(long_text, false).unwrap();\n        }),\n        bench(&format!(\"{}/Encode/batch_32\", label), || {\n            tokenizer\n                .encode_batch(batch_32.clone(), false)\n                .unwrap();\n        }),\n        bench(&format!(\"{}/Decode/long\", label), || {\n            tokenizer.decode(decode_ids.as_slice(), true).unwrap();\n        }),\n    ];\n\n    for r in &results {\n        println!(\n            \"  {:<35} {:>10}  ({} samples)\",\n            r.name,\n            format_duration(r.wall_per_run),\n            r.runs\n        );\n    }\n}\n\nfn main() {\n    let data_dir = Path::new(env!(\"CARGO_MANIFEST_DIR\")).join(\"../data\");\n\n    let short_text =\n        fs::read_to_string(data_dir.join(\"news_1k.txt\")).expect(\"read news_1k.txt\");\n    let long_text =\n        fs::read_to_string(data_dir.join(\"wiki_64k.txt\")).expect(\"read wiki_64k.txt\");\n\n    println!(\"Rust-native HuggingFace tokenizers benchmark\");\n    println!(\"=============================================\\n\");\n\n    let tokenizers = [\n        (\"GPT-2\", \"gpt2.json\"),\n        (\"BERT-base\", \"bert_base.json\"),\n        (\"LLaMA\", \"llama.json\"),\n    ];\n\n    for (label, filename) in &tokenizers {\n        let 
path = data_dir.join(filename);\n        let tokenizer =\n            Tokenizer::from_file(&path).unwrap_or_else(|e| {\n                panic!(\"Failed to load {}: {}\", path.display(), e)\n            });\n        println!(\"{}:\", label);\n        run_suite(label, &tokenizer, &short_text, &long_text);\n        println!();\n    }\n}\n"
  },
  {
    "path": "packages/brot/bench/bench_tokenizers.py",
    "content": "from __future__ import annotations\n\nfrom pathlib import Path\nfrom typing import Any, Callable, List\n\nfrom tokenizers import Tokenizer\n\n_ROOT = Path(__file__).resolve().parent\n_DATA_DIR = _ROOT / \"data\"\n\nimport sys\n\n_SCRIPTS_DIR = _ROOT\nwhile not (_SCRIPTS_DIR / \"dune-project\").exists():\n    _SCRIPTS_DIR = _SCRIPTS_DIR.parent\n_SCRIPTS_DIR = _SCRIPTS_DIR / \"scripts\"\nif str(_SCRIPTS_DIR) not in sys.path:\n    sys.path.insert(0, str(_SCRIPTS_DIR))\n\nimport ubench  # type: ignore\n\n\nSHORT_TEXT = (_DATA_DIR / \"news_1k.txt\").read_text(encoding=\"utf-8\")\nLONG_TEXT = (_DATA_DIR / \"wiki_64k.txt\").read_text(encoding=\"utf-8\")\nBATCH_32 = [SHORT_TEXT] * 32\n\n\ndef load_tokenizer(filename: str) -> Tokenizer:\n    path = _DATA_DIR / filename\n    return Tokenizer.from_file(str(path))\n\n\ndef make_suite(label: str, tokenizer: Tokenizer) -> Any:\n    decode_ids = tokenizer.encode(LONG_TEXT).ids\n\n    benches: List[Any] = [\n        ubench.bench(\"Encode/single_short\", lambda: tokenizer.encode(SHORT_TEXT)),\n        ubench.bench(\"Encode/single_long\", lambda: tokenizer.encode(LONG_TEXT)),\n        ubench.bench(\"Encode/batch_32\", lambda: tokenizer.encode_batch(BATCH_32)),\n        ubench.bench(\"Decode/long\", lambda: tokenizer.decode(decode_ids)),\n    ]\n\n    return ubench.group(label, benches)\n\n\ndef build_benchmarks() -> List[Any]:\n    return [\n        make_suite(\"GPT-2\", load_tokenizer(\"gpt2.json\")),\n        make_suite(\"BERT-base\", load_tokenizer(\"bert_base.json\")),\n        make_suite(\"LLaMA\", load_tokenizer(\"llama.json\")),\n    ]\n\n\ndef default_config() -> ubench.Config:\n    return ubench.Config.default().build()\n\n\ndef main() -> None:\n    benchmarks = build_benchmarks()\n    config = default_config()\n    ubench.run(benchmarks, config=config, output_format=\"pretty\", verbose=False)\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "packages/brot/bench/brot.thumper",
    "content": "# thumper baseline\n# version: 1\n# suite_name: brot\n# host: 1480401c3b76ed18\n# cpu: Apple M1 Max\n# ocaml: 5.4.1\n# git: 31747323\n# dirty: true\n# command: /Users/tmattio/Workspace/raven/_build/default/packages/brot/bench/bench_brot.exe --bless --quick\n\nbert-base/decode_long\talloc_words\t4.445400e+05\t4.445400e+05\t4.445400e+05\t0.000000e+00\t9\t0\nbert-base/decode_long\tcpu_time\t1.424685e-03\t1.370913e-03\t1.470915e-03\t3.509645e-02\t9\t1\nbert-base/decode_long\twall_time\t1.425388e-03\t1.371378e-03\t1.475911e-03\t3.666840e-02\t9\t1\nbert-base/encode_batch_32\talloc_words\t1.089250e+05\t1.089250e+05\t1.089250e+05\t0.000000e+00\t31\t1\nbert-base/encode_batch_32\tcpu_time\t9.091263e-03\t8.686646e-03\t9.591658e-03\t4.977371e-02\t31\t2\nbert-base/encode_batch_32\twall_time\t2.121498e-03\t1.993592e-03\t2.253395e-03\t6.123112e-02\t31\t1\nbert-base/encode_single_long\talloc_words\t1.350547e+06\t1.350547e+06\t1.350547e+06\t0.000000e+00\t9\t1\nbert-base/encode_single_long\tcpu_time\t9.506189e-03\t9.354674e-03\t9.692541e-03\t1.777085e-02\t9\t1\nbert-base/encode_single_long\twall_time\t9.509793e-03\t9.372449e-03\t9.680291e-03\t1.618554e-02\t9\t1\nbert-base/encode_single_short\talloc_words\t2.699600e+04\t2.699600e+04\t2.699600e+04\t0.000000e+00\t9\t0\nbert-base/encode_single_short\tcpu_time\t1.392726e-04\t1.345864e-04\t1.448241e-04\t3.675399e-02\t9\t0\nbert-base/encode_single_short\twall_time\t1.393180e-04\t1.345034e-04\t1.440607e-04\t3.430033e-02\t9\t0\ngpt-2/decode_long\talloc_words\t3.417770e+05\t3.417770e+05\t3.417770e+05\t0.000000e+00\t7\t0\ngpt-2/decode_long\tcpu_time\t1.305595e-03\t1.261472e-03\t1.338931e-03\t2.966443e-02\t7\t0\ngpt-2/decode_long\twall_time\t1.305703e-03\t1.262279e-03\t1.346462e-03\t3.223653e-02\t7\t0\ngpt-2/encode_batch_32\talloc_words\t5.518900e+04\t5.518900e+04\t5.518900e+04\t0.000000e+00\t6\t0\ngpt-2/encode_batch_32\tcpu_time\t3.952923e-03\t3.848310e-03\t4.113309e-03\t3.351934e-02\t6\t1\ngpt-2/encode_batch_32\twall_time\t1.38
6324e-03\t1.328412e-03\t1.431061e-03\t3.702195e-02\t6\t0\ngpt-2/encode_single_long\talloc_words\t6.731690e+05\t6.731690e+05\t6.731690e+05\t0.000000e+00\t19\t2\ngpt-2/encode_single_long\tcpu_time\t3.852835e-03\t3.758677e-03\t3.927922e-03\t2.196376e-02\t19\t1\ngpt-2/encode_single_long\twall_time\t3.856248e-03\t3.756035e-03\t3.930090e-03\t2.256793e-02\t19\t1\ngpt-2/encode_single_short\talloc_words\t1.356200e+04\t1.356200e+04\t1.356200e+04\t0.000000e+00\t13\t0\ngpt-2/encode_single_short\tcpu_time\t5.279107e-05\t5.090596e-05\t5.553182e-05\t4.381283e-02\t13\t0\ngpt-2/encode_single_short\twall_time\t5.282309e-05\t5.106692e-05\t5.499492e-05\t3.718073e-02\t13\t0\nllama/decode_long\talloc_words\t6.844460e+05\t6.844460e+05\t6.844460e+05\t0.000000e+00\t12\t0\nllama/decode_long\tcpu_time\t1.149214e-03\t1.094901e-03\t1.194180e-03\t4.319437e-02\t12\t0\nllama/decode_long\twall_time\t1.149682e-03\t1.095042e-03\t1.189211e-03\t4.095449e-02\t12\t0\nllama/encode_batch_32\talloc_words\t9.471700e+04\t9.471700e+04\t9.471700e+04\t0.000000e+00\t5\t0\nllama/encode_batch_32\tcpu_time\t4.421498e-03\t4.320283e-03\t4.534447e-03\t2.421853e-02\t5\t0\nllama/encode_batch_32\twall_time\t1.467702e-03\t1.366193e-03\t1.593832e-03\t7.754944e-02\t5\t2\nllama/encode_single_long\talloc_words\t1.261210e+06\t1.261210e+06\t1.261210e+06\t0.000000e+00\t9\t1\nllama/encode_single_long\tcpu_time\t1.817278e-02\t1.788729e-02\t1.860684e-02\t1.979736e-02\t9\t2\nllama/encode_single_long\twall_time\t1.819150e-02\t1.794107e-02\t1.863329e-02\t1.902594e-02\t9\t2\nllama/encode_single_short\talloc_words\t2.344400e+04\t2.344400e+04\t2.344400e+04\t0.000000e+00\t42\t0\nllama/encode_single_short\tcpu_time\t6.126139e-05\t6.069695e-05\t6.174501e-05\t8.553960e-03\t42\t6\nllama/encode_single_short\twall_time\t6.130598e-05\t6.079485e-05\t6.183071e-05\t8.448240e-03\t42\t6\n"
  },
  {
    "path": "packages/brot/bench/data/.gitignore",
    "content": "gpt2.json\nbert_base.json\nllama.json\n"
  },
  {
    "path": "packages/brot/bench/data/news_1k.txt",
    "content": "City officials confirmed on Tuesday that the riverside park will reopen this summer after a two-year renovation.\nCrews installed 175 energy-efficient lights, replanted native wildflowers, and added a playground designed by local artists.\nThe project ran $1.8 million under budget, according to Deputy Mayor Alicia Gómez — a welcome surprise for residents concerned about rising taxes.\n\"It's not just a facelift; it's a commitment to public space,\" said Gomez. Cyclists tested the new bike lanes, while children chased bubbles during the ribbon-cutting ceremony.\nThe park will host weekly night markets featuring Afghan bolani, Jamaican patties, and vegan empanadas, with vendors selected through a community ballot.\nPublic transit advocates noted that the expanded bus schedule, combined with real-time arrival boards, should alleviate weekend congestion.\nSustainability officers also unveiled a solar-powered irrigation system and a pollinator habitat that includes milkweed, lavender, and rare prairie clover.\nEarly visitor surveys show 92% satisfaction, with many praising the accessible design, tactile maps, and multilingual audio tours available in English, Spanish, Mandarin, and American Sign Language.\nThe city plans to share open-source blueprints and a detailed maintenance playbook with other municipalities considering similar upgrades."
  },
  {
    "path": "packages/brot/bench/data/wiki_64k.txt",
    "content": "== Early History ==\n\nThe settlement traces its roots to a trading village documented in the 12th-century annals of the Seljuk chronicler al-Biruni. Archaeological digs in 1989 uncovered kiln-fired ceramics, copper ingots, and terraced irrigation canals that reshaped historians' understanding of Central Asian trade routes.\n\n== Linguistics ==\nModern dialect surveys reveal a blend of Chuvash, Khazar, and Oghur loanwords; linguists have mapped palatalized consonants appearing near river valleys, likely a relic of seasonal migration.\n== Technological Renaissance ==\nBy 1893 the town hosted one of the earliest wireless telegraph stations in the region. Engineer Lidiya Petrovna retrofitted surplus naval equipment to send meteorological data to Moscow every sunset. Her notebooks — digitized in 2017 — contain meticulous diagrams of spark-gap transmitters, annotations in French, and the occasional doodle of a cat wearing goggles.\n== Cultural Revival ==\nAnnual festivals now feature Tuvan throat singing workshops, VR reconstructions of vanished monasteries, and fermentation labs explaining the chemistry behind kumis. 
UNESCO added the town's accordion workshops to its intangible heritage list, citing their adaptive use of recycled polymers for reeds.\n== Contemporary Research ==\nIn 2022 a consortium of botanists, data journalists, and Indigenous seed keepers launched the Steppe Observatory, using open satellite data, LoRaWAN sensors, and community weather diaries to forecast dust storms.\n== Notable Figures ==\nHistorian Salome Okafor popularized the settlement after translating 400 folktales into Yoruba, English, and Esperanto, each annotated with QR codes linking to oral history recordings.\n== Gastronomy ==\nLocal chefs pair fermented camel-milk cheese with candied sea buckthorn, while food trucks experiment with kelp-laden naan tacos, reflecting the town's fishing diaspora.\n== Climate Adaptation ==\nFlood mitigation now involves mycelium-reinforced levees, willow microforests, and AI-optimized sluice gates governed by a civic algorithm crafted in nightly town halls.\n== Digital Archives ==\nVolunteer coders maintain a mirrored archive stored on solar-powered Raspberry Pi clusters. The archive syncs monthly via a community-owned satellite uplink leased during lunar downtimes.\n== Everyday Life ==\nSchoolchildren log phenology observations, while retired tram conductors teach visitor orientation classes in a repurposed depot, complete with time-travel escape room puzzles chronicling the town's evolution.\n\n== Early History ==\n\nThe settlement traces its roots to a trading village documented in the 12th-century annals of the Seljuk chronicler al-Biruni. 
Archaeological digs in 1989 uncovered kiln-fired ceramics, copper ingots, and terraced irrigation canals that reshaped historians' understanding of Central Asian trade routes.\n\n== Linguistics ==\nModern dialect surveys reveal a blend of Chuvash, Khazar, and Oghur loanwords; linguists have mapped palatalized consonants appearing near river valleys, likely a relic of seasonal migration.\n== Technological Renaissance ==\nBy 1893 the town hosted one of the earliest wireless telegraph stations in the region. Engineer Lidiya Petrovna retrofitted surplus naval equipment to send meteorological data to Moscow every sunset. Her notebooks — digitized in 2017 — contain meticulous diagrams of spark-gap transmitters, annotations in French, and the occasional doodle of a cat wearing goggles.\n== Cultural Revival ==\nAnnual festivals now feature Tuvan throat singing workshops, VR reconstructions of vanished monasteries, and fermentation labs explaining the chemistry behind kumis. UNESCO added the town's accordion workshops to its intangible heritage list, citing their adaptive use of recycled polymers for reeds.\n== Contemporary Research ==\nIn 2022 a consortium of botanists, data journalists, and Indigenous seed keepers launched the Steppe Observatory, using open satellite data, LoRaWAN sensors, and community weather diaries to forecast dust storms.\n== Notable Figures ==\nHistorian Salome Okafor popularized the settlement after translating 400 folktales into Yoruba, English, and Esperanto, each annotated with QR codes linking to oral history recordings.\n== Gastronomy ==\nLocal chefs pair fermented camel-milk cheese with candied sea buckthorn, while food trucks experiment with kelp-laden naan tacos, reflecting the town's fishing diaspora.\n== Climate Adaptation ==\nFlood mitigation now involves mycelium-reinforced levees, willow microforests, and AI-optimized sluice gates governed by a civic algorithm crafted in nightly town halls.\n== Digital Archives ==\nVolunteer coders 
maintain a mirrored archive stored on solar-powered Raspberry Pi clusters. The archive syncs monthly via a community-owned satellite uplink leased during lunar downtimes.\n== Everyday Life ==\nSchoolchildren log phenology observations, while retired tram conductors teach visitor orientation classes in a repurposed depot, complete with time-travel escape room puzzles chronicling the town's evolution.\n\n== Early History ==\n\nThe settlement traces its roots to a trading village documented in the 12th-century annals of the Seljuk chronicler al-Biruni. Archaeological digs in 1989 uncovered kiln-fired ceramics, copper ingots, and terraced irrigation canals that reshaped historians' understanding of Central Asian trade routes.\n\n== Linguistics ==\nModern dialect surveys reveal a blend of Chuvash, Khazar, and Oghur loanwords; linguists have mapped palatalized consonants appearing near river valleys, likely a relic of seasonal migration.\n== Technological Renaissance ==\nBy 1893 the town hosted one of the earliest wireless telegraph stations in the region. Engineer Lidiya Petrovna retrofitted surplus naval equipment to send meteorological data to Moscow every sunset. Her notebooks — digitized in 2017 — contain meticulous diagrams of spark-gap transmitters, annotations in French, and the occasional doodle of a cat wearing goggles.\n== Cultural Revival ==\nAnnual festivals now feature Tuvan throat singing workshops, VR reconstructions of vanished monasteries, and fermentation labs explaining the chemistry behind kumis. 
UNESCO added the town's accordion workshops to its intangible heritage list, citing their adaptive use of recycled polymers for reeds.\n== Contemporary Research ==\nIn 2022 a consortium of botanists, data journalists, and Indigenous seed keepers launched the Steppe Observatory, using open satellite data, LoRaWAN sensors, and community weather diaries to forecast dust storms.\n== Notable Figures ==\nHistorian Salome Okafor popularized the settlement after translating 400 folktales into Yoruba, English, and Esperanto, each annotated with QR codes linking to oral history recordings.\n== Gastronomy ==\nLocal chefs pair fermented camel-milk cheese with candied sea buckthorn, while food trucks experiment with kelp-laden naan tacos, reflecting the town's fishing diaspora.\n== Climate Adaptation ==\nFlood mitigation now involves mycelium-reinforced levees, willow microforests, and AI-optimized sluice gates governed by a civic algorithm crafted in nightly town halls.\n== Digital Archives ==\nVolunteer coders maintain a mirrored archive stored on solar-powered Raspberry Pi clusters. The archive syncs monthly via a community-owned satellite uplink leased during lunar downtimes.\n== Everyday Life ==\nSchoolchildren log phenology observations, while retired tram conductors teach visitor orientation classes in a repurposed depot, complete with time-travel escape room puzzles chronicling the town's evolution.\n\n== Early History ==\n\nThe settlement traces its roots to a trading village documented in the 12th-century annals of the Seljuk chronicler al-Biruni. 
Archaeological digs in 1989 uncovered kiln-fired ceramics, copper ingots, and terraced irrigation canals that reshaped historians' understanding of Central Asian trade routes.\n\n== Linguistics ==\nModern dialect surveys reveal a blend of Chuvash, Khazar, and Oghur loanwords; linguists have mapped palatalized consonants appearing near river valleys, likely a relic of seasonal migration.\n== Technological Renaissance ==\nBy 1893 the town hosted one of the earliest wireless telegraph stations in the region. Engineer Lidiya Petrovna retrofitted surplus naval equipment to send meteorological data to Moscow every sunset. Her notebooks — digitized in 2017 — contain meticulous diagrams of spark-gap transmitters, annotations in French, and the occasional doodle of a cat wearing goggles.\n== Cultural Revival ==\nAnnual festivals now feature Tuvan throat singing workshops, VR reconstructions of vanished monasteries, and fermentation labs explaining the chemistry behind kumis. UNESCO added the town's accordion workshops to its intangible heritage list, citing their adaptive use of recycled polymers for reeds.\n== Contemporary Research ==\nIn 2022 a consortium of botanists, data journalists, and Indigenous seed keepers launched the Steppe Observatory, using open satellite data, LoRaWAN sensors, and community weather diaries to forecast dust storms.\n== Notable Figures ==\nHistorian Salome Okafor popularized the settlement after translating 400 folktales into Yoruba, English, and Esperanto, each annotated with QR codes linking to oral history recordings.\n== Gastronomy ==\nLocal chefs pair fermented camel-milk cheese with candied sea buckthorn, while food trucks experiment with kelp-laden naan tacos, reflecting the town's fishing diaspora.\n== Climate Adaptation ==\nFlood mitigation now involves mycelium-reinforced levees, willow microforests, and AI-optimized sluice gates governed by a civic algorithm crafted in nightly town halls.\n== Digital Archives ==\nVolunteer coders 
maintain a mirrored archive stored on solar-powered Raspberry Pi clusters. The archive syncs monthly via a community-owned satellite uplink leased during lunar downtimes.\n== Everyday Life ==\nSchoolchildren log phenology observations, while retired tram conductors teach visitor orientation classes in a repurposed depot, complete with time-travel escape room puzzles chronicling the town's evolution.\n\n== Early History ==\n\nThe settlement traces its roots to a trading village documented in the 12th-century annals of the Seljuk chronicler al-Biruni. Archaeological digs in 1989 uncovered kiln-fired ceramics, copper ingots, and terraced irrigation canals that reshaped historians' understanding of Central Asian trade routes.\n\n== Linguistics ==\nModern dialect surveys reveal a blend of Chuvash, Khazar, and Oghur loanwords; linguists have mapped palatalized consonants appearing near river valleys, likely a relic of seasonal migration.\n== Technological Renaissance ==\nBy 1893 the town hosted one of the earliest wireless telegraph stations in the region. Engineer Lidiya Petrovna retrofitted surplus naval equipment to send meteorological data to Moscow every sunset. Her notebooks — digitized in 2017 — contain meticulous diagrams of spark-gap transmitters, annotations in French, and the occasional doodle of a cat wearing goggles.\n== Cultural Revival ==\nAnnual festivals now feature Tuvan throat singing workshops, VR reconstructions of vanished monasteries, and fermentation labs explaining the chemistry behind kumis. 
UNESCO added the town's accordion workshops to its intangible heritage list, citing their adaptive use of recycled polymers for reeds.\n== Contemporary Research ==\nIn 2022 a consortium of botanists, data journalists, and Indigenous seed keepers launched the Steppe Observatory, using open satellite data, LoRaWAN sensors, and community weather diaries to forecast dust storms.\n== Notable Figures ==\nHistorian Salome Okafor popularized the settlement after translating 400 folktales into Yoruba, English, and Esperanto, each annotated with QR codes linking to oral history recordings.\n== Gastronomy ==\nLocal chefs pair fermented camel-milk cheese with candied sea buckthorn, while food trucks experiment with kelp-laden naan tacos, reflecting the town's fishing diaspora.\n== Climate Adaptation ==\nFlood mitigation now involves mycelium-reinforced levees, willow microforests, and AI-optimized sluice gates governed by a civic algorithm crafted in nightly town halls.\n== Digital Archives ==\nVolunteer coders maintain a mirrored archive stored on solar-powered Raspberry Pi clusters. The archive syncs monthly via a community-owned satellite uplink leased during lunar downtimes.\n== Everyday Life ==\nSchoolchildren log phenology observations, while retired tram conductors teach visitor orientation classes in a repurposed depot, complete with time-travel escape room puzzles chronicling the town's evolution.\n\n== Early History ==\n\nThe settlement traces its roots to a trading village documented in the 12th-century annals of the Seljuk chronicler al-Biruni. 
Archaeological digs in 1989 uncovered kiln-fired ceramics, copper ingots, and terraced irrigation canals that reshaped historians' understanding of Central Asian trade routes.\n\n== Linguistics ==\nModern dialect surveys reveal a blend of Chuvash, Khazar, and Oghur loanwords; linguists have mapped palatalized consonants appearing near river valleys, likely a relic of seasonal migration.\n== Technological Renaissance ==\nBy 1893 the town hosted one of the earliest wireless telegraph stations in the region. Engineer Lidiya Petrovna retrofitted surplus naval equipment to send meteorological data to Moscow every sunset. Her notebooks — digitized in 2017 — contain meticulous diagrams of spark-gap transmitters, annotations in French, and the occasional doodle of a cat wearing goggles.\n== Cultural Revival ==\nAnnual festivals now feature Tuvan throat singing workshops, VR reconstructions of vanished monasteries, and fermentation labs explaining the chemistry behind kumis. UNESCO added the town's accordion workshops to its intangible heritage list, citing their adaptive use of recycled polymers for reeds.\n== Contemporary Research ==\nIn 2022 a consortium of botanists, data journalists, and Indigenous seed keepers launched the Steppe Observatory, using open satellite data, LoRaWAN sensors, and community weather diaries to forecast dust storms.\n== Notable Figures ==\nHistorian Salome Okafor popularized the settlement after translating 400 folktales into Yoruba, English, and Esperanto, each annotated with QR codes linking to oral history recordings.\n== Gastronomy ==\nLocal chefs pair fermented camel-milk cheese with candied sea buckthorn, while food trucks experiment with kelp-laden naan tacos, reflecting the town's fishing diaspora.\n== Climate Adaptation ==\nFlood mitigation now involves mycelium-reinforced levees, willow microforests, and AI-optimized sluice gates governed by a civic algorithm crafted in nightly town halls.\n== Digital Archives ==\nVolunteer coders 
maintain a mirrored archive stored on solar-powered Raspberry Pi clusters. The archive syncs monthly via a community-owned satellite uplink leased during lunar downtimes.\n== Everyday Life ==\nSchoolchildren log phenology observations, while retired tram conductors teach visitor orientation classes in a repurposed depot, complete with time-travel escape room puzzles chronicling the town's evolution.\n\n== Early History ==\n\nThe settlement traces its roots to a trading village documented in the 12th-century annals of the Seljuk chronicler al-Biruni. Archaeological digs in 1989 uncovered kiln-fired ceramics, copper ingots, and terraced irrigation canals that reshaped historians' understanding of Central Asian trade routes.\n\n== Linguistics ==\nModern dialect surveys reveal a blend of Chuvash, Khazar, and Oghur loanwords; linguists have mapped palatalized consonants appearing near river valleys, likely a relic of seasonal migration.\n== Technological Renaissance ==\nBy 1893 the town hosted one of the earliest wireless telegraph stations in the region. Engineer Lidiya Petrovna retrofitted surplus naval equipment to send meteorological data to Moscow every sunset. Her notebooks — digitized in 2017 — contain meticulous diagrams of spark-gap transmitters, annotations in French, and the occasional doodle of a cat wearing goggles.\n== Cultural Revival ==\nAnnual festivals now feature Tuvan throat singing workshops, VR reconstructions of vanished monasteries, and fermentation labs explaining the chemistry behind kumis. 
UNESCO added the town's accordion workshops to its intangible heritage list, citing their adaptive use of recycled polymers for reeds.\n== Contemporary Research ==\nIn 2022 a consortium of botanists, data journalists, and Indigenous seed keepers launched the Steppe Observatory, using open satellite data, LoRaWAN sensors, and community weather diaries to forecast dust storms.\n== Notable Figures ==\nHistorian Salome Okafor popularized the settlement after translating 400 folktales into Yoruba, English, and Esperanto, each annotated with QR codes linking to oral history recordings.\n== Gastronomy ==\nLocal chefs pair fermented camel-milk cheese with candied sea buckthorn, while food trucks experiment with kelp-laden naan tacos, reflecting the town's fishing diaspora.\n== Climate Adaptation ==\nFlood mitigation now involves mycelium-reinforced levees, willow microforests, and AI-optimized sluice gates governed by a civic algorithm crafted in nightly town halls.\n== Digital Archives ==\nVolunteer coders maintain a mirrored archive stored on solar-powered Raspberry Pi clusters. The archive syncs monthly via a community-owned satellite uplink leased during lunar downtimes.\n== Everyday Life ==\nSchoolchildren log phenology observations, while retired tram conductors teach visitor orientation classes in a repurposed depot, complete with time-travel escape room puzzles chronicling the town's evolution.\n\n== Early History ==\n\nThe settlement traces its roots to a trading village documented in the 12th-century annals of the Seljuk chronicler al-Biruni. 
Archaeological digs in 1989 uncovered kiln-fired ceramics, copper ingots, and terraced irrigation canals that reshaped historians' understanding of Central Asian trade routes.\n\n== Linguistics ==\nModern dialect surveys reveal a blend of Chuvash, Khazar, and Oghur loanwords; linguists have mapped palatalized consonants appearing near river valleys, likely a relic of seasonal migration.\n== Technological Renaissance ==\nBy 1893 the town hosted one of the earliest wireless telegraph stations in the region. Engineer Lidiya Petrovna retrofitted surplus naval equipment to send meteorological data to Moscow every sunset. Her notebooks — digitized in 2017 — contain meticulous diagrams of spark-gap transmitters, annotations in French, and the occasional doodle of a cat wearing goggles.\n== Cultural Revival ==\nAnnual festivals now feature Tuvan throat singing workshops, VR reconstructions of vanished monasteries, and fermentation labs explaining the chemistry behind kumis. UNESCO added the town's accordion workshops to its intangible heritage list, citing their adaptive use of recycled polymers for reeds.\n== Contemporary Research ==\nIn 2022 a consortium of botanists, data journalists, and Indigenous seed keepers launched the Steppe Observatory, using open satellite data, LoRaWAN sensors, and community weather diaries to forecast dust storms.\n== Notable Figures ==\nHistorian Salome Okafor popularized the settlement after translating 400 folktales into Yoruba, English, and Esperanto, each annotated with QR codes linking to oral history recordings.\n== Gastronomy ==\nLocal chefs pair fermented camel-milk cheese with candied sea buckthorn, while food trucks experiment with kelp-laden naan tacos, reflecting the town's fishing diaspora.\n== Climate Adaptation ==\nFlood mitigation now involves mycelium-reinforced levees, willow microforests, and AI-optimized sluice gates governed by a civic algorithm crafted in nightly town halls.\n== Digital Archives ==\nVolunteer coders 
maintain a mirrored archive stored on solar-powered Raspberry Pi clusters. The archive syncs monthly via a community-owned satellite uplink leased during lunar downtimes.\n== Everyday Life ==\nSchoolchildren log phenology observations, while retired tram conductors teach visitor orientation classes in a repurposed depot, complete with time-travel escape room puzzles chronicling the town's evolution.\n\n== Early History ==\n\nThe settlement traces its roots to a trading village documented in the 12th-century annals of the Seljuk chronicler al-Biruni. Archaeological digs in 1989 uncovered kiln-fired ceramics, copper ingots, and terraced irrigation canals that reshaped historians' understanding of Central Asian trade routes.\n\n== Linguistics ==\nModern dialect surveys reveal a blend of Chuvash, Khazar, and Oghur loanwords; linguists have mapped palatalized consonants appearing near river valleys, likely a relic of seasonal migration.\n== Technological Renaissance ==\nBy 1893 the town hosted one of the earliest wireless telegraph stations in the region. Engineer Lidiya Petrovna retrofitted surplus naval equipment to send meteorological data to Moscow every sunset. Her notebooks — digitized in 2017 — contain meticulous diagrams of spark-gap transmitters, annotations in French, and the occasional doodle of a cat wearing goggles.\n== Cultural Revival ==\nAnnual festivals now feature Tuvan throat singing workshops, VR reconstructions of vanished monasteries, and fermentation labs explaining the chemistry behind kumis. 
UNESCO added the town's accordion workshops to its intangible heritage list, citing their adaptive use of recycled polymers for reeds.\n== Contemporary Research ==\nIn 2022 a consortium of botanists, data journalists, and Indigenous seed keepers launched the Steppe Observatory, using open satellite data, LoRaWAN sensors, and community weather diaries to forecast dust storms.\n== Notable Figures ==\nHistorian Salome Okafor popularized the settlement after translating 400 folktales into Yoruba, English, and Esperanto, each annotated with QR codes linking to oral history recordings.\n== Gastronomy ==\nLocal chefs pair fermented camel-milk cheese with candied sea buckthorn, while food trucks experiment with kelp-laden naan tacos, reflecting the town's fishing diaspora.\n== Climate Adaptation ==\nFlood mitigation now involves mycelium-reinforced levees, willow microforests, and AI-optimized sluice gates governed by a civic algorithm crafted in nightly town halls.\n== Digital Archives ==\nVolunteer coders maintain a mirrored archive stored on solar-powered Raspberry Pi clusters. The archive syncs monthly via a community-owned satellite uplink leased during lunar downtimes.\n== Everyday Life ==\nSchoolchildren log phenology observations, while retired tram conductors teach visitor orientation classes in a repurposed depot, complete with time-travel escape room puzzles chronicling the town's evolution.\n\n== Early History ==\n\nThe settlement traces its roots to a trading village documented in the 12th-century annals of the Seljuk chronicler al-Biruni. 
Archaeological digs in 1989 uncovered kiln-fired ceramics, copper ingots, and terraced irrigation canals that reshaped historians' understanding of Central Asian trade routes.\n\n== Linguistics ==\nModern dialect surveys reveal a blend of Chuvash, Khazar, and Oghur loanwords; linguists have mapped palatalized consonants appearing near river valleys, likely a relic of seasonal migration.\n== Technological Renaissance ==\nBy 1893 the town hosted one of the earliest wireless telegraph stations in the region. Engineer Lidiya Petrovna retrofitted surplus naval equipment to send meteorological data to Moscow every sunset. Her notebooks — digitized in 2017 — contain meticulous diagrams of spark-gap transmitters, annotations in French, and the occasional doodle of a cat wearing goggles.\n== Cultural Revival ==\nAnnual festivals now feature Tuvan throat singing workshops, VR reconstructions of vanished monasteries, and fermentation labs explaining the chemistry behind kumis. UNESCO added the town's accordion workshops to its intangible heritage list, citing their adaptive use of recycled polymers for reeds.\n== Contemporary Research ==\nIn 2022 a consortium of botanists, data journalists, and Indigenous seed keepers launched the Steppe Observatory, using open satellite data, LoRaWAN sensors, and community weather diaries to forecast dust storms.\n== Notable Figures ==\nHistorian Salome Okafor popularized the settlement after translating 400 folktales into Yoruba, English, and Esperanto, each annotated with QR codes linking to oral history recordings.\n== Gastronomy ==\nLocal chefs pair fermented camel-milk cheese with candied sea buckthorn, while food trucks experiment with kelp-laden naan tacos, reflecting the town's fishing diaspora.\n== Climate Adaptation ==\nFlood mitigation now involves mycelium-reinforced levees, willow microforests, and AI-optimized sluice gates governed by a civic algorithm crafted in nightly town halls.\n== Digital Archives ==\nVolunteer coders 
maintain a mirrored archive stored on solar-powered Raspberry Pi clusters. The archive syncs monthly via a community-owned satellite uplink leased during lunar downtimes.\n== Everyday Life ==\nSchoolchildren log phenology observations, while retired tram conductors teach visitor orientation classes in a repurposed depot, complete with time-travel escape room puzzles chronicling the town's evolution.\n\n== Early History ==\n\nThe settlement traces its roots to a trading village documented in the 12th-century annals of the Seljuk chronicler al-Biruni. Archaeological digs in 1989 uncovered kiln-fired ceramics, copper ingots, and terraced irrigation canals that reshaped historians' understanding of Central Asian trade routes.\n\n== Linguistics ==\nModern dialect surveys reveal a blend of Chuvash, Khazar, and Oghur loanwords; linguists have mapped palatalized consonants appearing near river valleys, likely a relic of seasonal migration.\n== Technological Renaissance ==\nBy 1893 the town hosted one of the earliest wireless telegraph stations in the region. Engineer Lidiya Petrovna retrofitted surplus naval equipment to send meteorological data to Moscow every sunset. Her notebooks — digitized in 2017 — contain meticulous diagrams of spark-gap transmitters, annotations in French, and the occasional doodle of a cat wearing goggles.\n== Cultural Revival ==\nAnnual festivals now feature Tuvan throat singing workshops, VR reconstructions of vanished monasteries, and fermentation labs explaining the chemistry behind kumis. 
UNESCO added the town's accordion workshops to its intangible heritage list, citing their adaptive use of recycled polymers for reeds.\n== Contemporary Research ==\nIn 2022 a consortium of botanists, data journalists, and Indigenous seed keepers launched the Steppe Observatory, using open satellite data, LoRaWAN sensors, and community weather diaries to forecast dust storms.\n== Notable Figures ==\nHistorian Salome Okafor popularized the settlement after translating 400 folktales into Yoruba, English, and Esperanto, each annotated with QR codes linking to oral history recordings.\n== Gastronomy ==\nLocal chefs pair fermented camel-milk cheese with candied sea buckthorn, while food trucks experiment with kelp-laden naan tacos, reflecting the town's fishing diaspora.\n== Climate Adaptation ==\nFlood mitigation now involves mycelium-reinforced levees, willow microforests, and AI-optimized sluice gates governed by a civic algorithm crafted in nightly town halls.\n== Digital Archives ==\nVolunteer coders maintain a mirrored archive stored on solar-powered Raspberry Pi clusters. The archive syncs monthly via a community-owned satellite uplink leased during lunar downtimes.\n== Everyday Life ==\nSchoolchildren log phenology observations, while retired tram conductors teach visitor orientation classes in a repurposed depot, complete with time-travel escape room puzzles chronicling the town's evolution.\n\n== Early History ==\n\nThe settlement traces its roots to a trading village documented in the 12th-century annals of the Seljuk chronicler al-Biruni. 
Archaeological digs in 1989 uncovered kiln-fired ceramics, copper ingots, and terraced irrigation canals that reshaped historians' understanding of Central Asian trade routes.\n\n== Linguistics ==\nModern dialect surveys reveal a blend of Chuvash, Khazar, and Oghur loanwords; linguists have mapped palatalized consonants appearing near river valleys, likely a relic of seasonal migration.\n== Technological Renaissance ==\nBy 1893 the town hosted one of the earliest wireless telegraph stations in the region. Engineer Lidiya Petrovna retrofitted surplus naval equipment to send meteorological data to Moscow every sunset. Her notebooks — digitized in 2017 — contain meticulous diagrams of spark-gap transmitters, annotations in French, and the occasional doodle of a cat wearing goggles.\n== Cultural Revival ==\nAnnual festivals now feature Tuvan throat singing workshops, VR reconstructions of vanished monasteries, and fermentation labs explaining the chemistry behind kumis. UNESCO added the town's accordion workshops to its intangible heritage list, citing their adaptive use of recycled polymers for reeds.\n== Contemporary Research ==\nIn 2022 a consortium of botanists, data journalists, and Indigenous seed keepers launched the Steppe Observatory, using open satellite data, LoRaWAN sensors, and community weather diaries to forecast dust storms.\n== Notable Figures ==\nHistorian Salome Okafor popularized the settlement after translating 400 folktales into Yoruba, English, and Esperanto, each annotated with QR codes linking to oral history recordings.\n== Gastronomy ==\nLocal chefs pair fermented camel-milk cheese with candied sea buckthorn, while food trucks experiment with kelp-laden naan tacos, reflecting the town's fishing diaspora.\n== Climate Adaptation ==\nFlood mitigation now involves mycelium-reinforced levees, willow microforests, and AI-optimized sluice gates governed by a civic algorithm crafted in nightly town halls.\n== Digital Archives ==\nVolunteer coders 
maintain a mirrored archive stored on solar-powered Raspberry Pi clusters. The archive syncs monthly via a community-owned satellite uplink leased during lunar downtimes.\n== Everyday Life ==\nSchoolchildren log phenology observations, while retired tram conductors teach visitor orientation classes in a repurposed depot, complete with time-travel escape room puzzles chronicling the town's evolution.\n\n== Early History ==\n\nThe settlement traces its roots to a trading village documented in the 12th-century annals of the Seljuk chronicler al-Biruni. Archaeological digs in 1989 uncovered kiln-fired ceramics, copper ingots, and terraced irrigation canals that reshaped historians' understanding of Central Asian trade routes.\n\n== Linguistics ==\nModern dialect surveys reveal a blend of Chuvash, Khazar, and Oghur loanwords; linguists have mapped palatalized consonants appearing near river valleys, likely a relic of seasonal migration.\n== Technological Renaissance ==\nBy 1893 the town hosted one of the earliest wireless telegraph stations in the region. Engineer Lidiya Petrovna retrofitted surplus naval equipment to send meteorological data to Moscow every sunset. Her notebooks — digitized in 2017 — contain meticulous diagrams of spark-gap transmitters, annotations in French, and the occasional doodle of a cat wearing goggles.\n== Cultural Revival ==\nAnnual festivals now feature Tuvan throat singing workshops, VR reconstructions of vanished monasteries, and fermentation labs explaining the chemistry behind kumis. 
UNESCO added the town's accordion workshops to its intangible heritage list, citing their adaptive use of recycled polymers for reeds.\n== Contemporary Research ==\nIn 2022 a consortium of botanists, data journalists, and Indigenous seed keepers launched the Steppe Observatory, using open satellite data, LoRaWAN sensors, and community weather diaries to forecast dust storms.\n== Notable Figures ==\nHistorian Salome Okafor popularized the settlement after translating 400 folktales into Yoruba, English, and Esperanto, each annotated with QR codes linking to oral history recordings.\n== Gastronomy ==\nLocal chefs pair fermented camel-milk cheese with candied sea buckthorn, while food trucks experiment with kelp-laden naan tacos, reflecting the town's fishing diaspora.\n== Climate Adaptation ==\nFlood mitigation now involves mycelium-reinforced levees, willow microforests, and AI-optimized sluice gates governed by a civic algorithm crafted in nightly town halls.\n== Digital Archives ==\nVolunteer coders maintain a mirrored archive stored on solar-powered Raspberry Pi clusters. The archive syncs monthly via a community-owned satellite uplink leased during lunar downtimes.\n== Everyday Life ==\nSchoolchildren log phenology observations, while retired tram conductors teach visitor orientation classes in a repurposed depot, complete with time-travel escape room puzzles chronicling the town's evolution.\n\n== Early History ==\n\nThe settlement traces its roots to a trading village documented in the 12th-century annals of the Seljuk chronicler al-Biruni. 
Archaeological digs in 1989 uncovered kiln-fired ceramics, copper ingots, and terraced irrigation canals that reshaped historians' understanding of Central Asian trade routes.\n\n== Linguistics ==\nModern dialect surveys reveal a blend of Chuvash, Khazar, and Oghur loanwords; linguists have mapped palatalized consonants appearing near river valleys, likely a relic of seasonal migration.\n== Technological Renaissance ==\nBy 1893 the town hosted one of the earliest wireless telegraph stations in the region. Engineer Lidiya Petrovna retrofitted surplus naval equipment to send meteorological data to Moscow every sunset. Her notebooks — digitized in 2017 — contain meticulous diagrams of spark-gap transmitters, annotations in French, and the occasional doodle of a cat wearing goggles.\n== Cultural Revival ==\nAnnual festivals now feature Tuvan throat singing workshops, VR reconstructions of vanished monasteries, and fermentation labs explaining the chemistry behind kumis. UNESCO added the town's accordion workshops to its intangible heritage list, citing their adaptive use of recycled polymers for reeds.\n== Contemporary Research ==\nIn 2022 a consortium of botanists, data journalists, and Indigenous seed keepers launched the Steppe Observatory, using open satellite data, LoRaWAN sensors, and community weather diaries to forecast dust storms.\n== Notable Figures ==\nHistorian Salome Okafor popularized the settlement after translating 400 folktales into Yoruba, English, and Esperanto, each annotated with QR codes linking to oral history recordings.\n== Gastronomy ==\nLocal chefs pair fermented camel-milk cheese with candied sea buckthorn, while food trucks experiment with kelp-laden naan tacos, reflecting the town's fishing diaspora.\n== Climate Adaptation ==\nFlood mitigation now involves mycelium-reinforced levees, willow microforests, and AI-optimized sluice gates governed by a civic algorithm crafted in nightly town halls.\n== Digital Archives ==\nVolunteer coders 
maintain a mirrored archive stored on solar-powered Raspberry Pi clusters. The archive syncs monthly via a community-owned satellite uplink leased during lunar downtimes.\n== Everyday Life ==\nSchoolchildren log phenology observations, while retired tram conductors teach visitor orientation classes in a repurposed depot, complete with time-travel escape room puzzles chronicling the town's evolution.\n\n== Early History ==\n\nThe settlement traces its roots to a trading village documented in the 12th-century annals of the Seljuk chronicler al-Biruni. Archaeological digs in 1989 uncovered kiln-fired ceramics, copper ingots, and terraced irrigation canals that reshaped historians' understanding of Central Asian trade routes.\n\n== Linguistics ==\nModern dialect surveys reveal a blend of Chuvash, Khazar, and Oghur loanwords; linguists have mapped palatalized consonants appearing near river valleys, likely a relic of seasonal migration.\n== Technological Renaissance ==\nBy 1893 the town hosted one of the earliest wireless telegraph stations in the region. Engineer Lidiya Petrovna retrofitted surplus naval equipment to send meteorological data to Moscow every sunset. Her notebooks — digitized in 2017 — contain meticulous diagrams of spark-gap transmitters, annotations in French, and the occasional doodle of a cat wearing goggles.\n== Cultural Revival ==\nAnnual festivals now feature Tuvan throat singing workshops, VR reconstructions of vanished monasteries, and fermentation labs explaining the chemistry behind kumis. 
UNESCO added the town's accordion workshops to its intangible heritage list, citing their adaptive use of recycled polymers for reeds.\n== Contemporary Research ==\nIn 2022 a consortium of botanists, data journalists, and Indigenous seed keepers launched the Steppe Observatory, using open satellite data, LoRaWAN sensors, and community weather diaries to forecast dust storms.\n== Notable Figures ==\nHistorian Salome Okafor popularized the settlement after translating 400 folktales into Yoruba, English, and Esperanto, each annotated with QR codes linking to oral history recordings.\n== Gastronomy ==\nLocal chefs pair fermented camel-milk cheese with candied sea buckthorn, while food trucks experiment with kelp-laden naan tacos, reflecting the town's fishing diaspora.\n== Climate Adaptation ==\nFlood mitigation now involves mycelium-reinforced levees, willow microforests, and AI-optimized sluice gates governed by a civic algorithm crafted in nightly town halls.\n== Digital Archives ==\nVolunteer coders maintain a mirrored archive stored on solar-powered Raspberry Pi clusters. The archive syncs monthly via a community-owned satellite uplink leased during lunar downtimes.\n== Everyday Life ==\nSchoolchildren log phenology observations, while retired tram conductors teach visitor orientation classes in a repurposed depot, complete with time-travel escape room puzzles chronicling the town's evolution.\n\n== Early History ==\n\nThe settlement traces its roots to a trading village documented in the 12th-century annals of the Seljuk chronicler al-Biruni. 
Archaeological digs in 1989 uncovered kiln-fired ceramics, copper ingots, and terraced irrigation canals that reshaped historians' understanding of Central Asian trade routes.\n\n== Linguistics ==\nModern dialect surveys reveal a blend of Chuvash, Khazar, and Oghur loanwords; linguists have mapped palatalized consonants appearing near river valleys, likely a relic of seasonal migration.\n== Technological Renaissance ==\nBy 1893 the town hosted one of the earliest wireless telegraph stations in the region. Engineer Lidiya Petrovna retrofitted surplus naval equipment to send meteorological data to Moscow every sunset. Her notebooks — digitized in 2017 — contain meticulous diagrams of spark-gap transmitters, annotations in French, and the occasional doodle of a cat wearing goggles.\n== Cultural Revival ==\nAnnual festivals now feature Tuvan throat singing workshops, VR reconstructions of vanished monasteries, and fermentation labs explaining the chemistry behind kumis. UNESCO added the town's accordion workshops to its intangible heritage list, citing their adaptive use of recycled polymers for reeds.\n== Contemporary Research ==\nIn 2022 a consortium of botanists, data journalists, and Indigenous seed keepers launched the Steppe Observatory, using open satellite data, LoRaWAN sensors, and community weather diaries to forecast dust storms.\n== Notable Figures ==\nHistorian Salome Okafor popularized the settlement after translating 400 folktales into Yoruba, English, and Esperanto, each annotated with QR codes linking to oral history recordings.\n== Gastronomy ==\nLocal chefs pair fermented camel-milk cheese with candied sea buckthorn, while food trucks experiment with kelp-laden naan tacos, reflecting the town's fishing diaspora.\n== Climate Adaptation ==\nFlood mitigation now involves mycelium-reinforced levees, willow microforests, and AI-optimized sluice gates governed by a civic algorithm crafted in nightly town halls.\n== Digital Archives ==\nVolunteer coders 
maintain a mirrored archive stored on solar-powered Raspberry Pi clusters. The archive syncs monthly via a community-owned satellite uplink leased during lunar downtimes.\n== Everyday Life ==\nSchoolchildren log phenology observations, while retired tram conductors teach visitor orientation classes in a repurposed depot, complete with time-travel escape room puzzles chronicling the town's evolution.\n\n== Early History ==\n\nThe settlement traces its roots to a trading village documented in the 12th-century annals of the Seljuk chronicler al-Biruni. Archaeological digs in 1989 uncovered kiln-fired ceramics, copper ingots, and terraced irrigation canals that reshaped historians' understanding of Central Asian trade routes.\n\n== Linguistics ==\nModern dialect surveys reveal a blend of Chuvash, Khazar, and Oghur loanwords; linguists have mapped palatalized consonants appearing near river valleys, likely a relic of seasonal migration.\n== Technological Renaissance ==\nBy 1893 the town hosted one of the earliest wireless telegraph stations in the region. Engineer Lidiya Petrovna retrofitted surplus naval equipment to send meteorological data to Moscow every sunset. Her notebooks — digitized in 2017 — contain meticulous diagrams of spark-gap transmitters, annotations in French, and the occasional doodle of a cat wearing goggles.\n== Cultural Revival ==\nAnnual festivals now feature Tuvan throat singing workshops, VR reconstructions of vanished monasteries, and fermentation labs explaining the chemistry behind kumis. 
UNESCO added the town's accordion workshops to its intangible heritage list, citing their adaptive use of recycled polymers for reeds.\n== Contemporary Research ==\nIn 2022 a consortium of botanists, data journalists, and Indigenous seed keepers launched the Steppe Observatory, using open satellite data, LoRaWAN sensors, and community weather diaries to forecast dust storms.\n== Notable Figures ==\nHistorian Salome Okafor popularized the settlement after translating 400 folktales into Yoruba, English, and Esperanto, each annotated with QR codes linking to oral history recordings.\n== Gastronomy ==\nLocal chefs pair fermented camel-milk cheese with candied sea buckthorn, while food trucks experiment with kelp-laden naan tacos, reflecting the town's fishing diaspora.\n== Climate Adaptation ==\nFlood mitigation now involves mycelium-reinforced levees, willow microforests, and AI-optimized sluice gates governed by a civic algorithm crafted in nightly town halls.\n== Digital Archives ==\nVolunteer coders maintain a mirrored archive stored on solar-powered Raspberry Pi clusters. The archive syncs monthly via a community-owned satellite uplink leased during lunar downtimes.\n== Everyday Life ==\nSchoolchildren log phenology observations, while retired tram conductors teach visitor orientation classes in a repurposed depot, complete with time-travel escape room puzzles chronicling the town's evolution.\n\n== Early History ==\n\nThe settlement traces its roots to a trading village documented in the 12th-century annals of the Seljuk chronicler al-Biruni. 
Archaeological digs in 1989 uncovered kiln-fired ceramics, copper ingots, and terraced irrigation canals that reshaped historians' understanding of Central Asian trade routes.\n\n== Linguistics ==\nModern dialect surveys reveal a blend of Chuvash, Khazar, and Oghur loanwords; linguists have mapped palatalized consonants appearing near river valleys, likely a relic of seasonal migration.\n== Technological Renaissance ==\nBy 1893 the town hosted one of the earliest wireless telegraph stations in the region. Engineer Lidiya Petrovna retrofitted surplus naval equipment to send meteorological data to Moscow every sunset. Her notebooks — digitized in 2017 — contain meticulous diagrams of spark-gap transmitters, annotations in French, and the occasional doodle of a cat wearing goggles.\n== Cultural Revival ==\nAnnual festivals now feature Tuvan throat singing workshops, VR reconstructions of vanished monasteries, and fermentation labs explaining the chemistry behind kumis. UNESCO added the town's accordion workshops to its intangible heritage list, citing their adaptive use of recycled polymers for reeds.\n== Contemporary Research ==\nIn 2022 a consortium of botanists, data journalists, and Indigenous seed keepers launched the Steppe Observatory, using open satellite data, LoRaWAN sensors, and community weather diaries to forecast dust storms.\n== Notable Figures ==\nHistorian Salome Okafor popularized the settlement after translating 400 folktales into Yoruba, English, and Esperanto, each annotated with QR codes linking to oral history recordings.\n== Gastronomy ==\nLocal chefs pair fermented camel-milk cheese with candied sea buckthorn, while food trucks experiment with kelp-laden naan tacos, reflecting the town's fishing diaspora.\n== Climate Adaptation ==\nFlood mitigation now involves mycelium-reinforced levees, willow microforests, and AI-optimized sluice gates governed by a civic algorithm crafted in nightly town halls.\n== Digital Archives ==\nVolunteer coders 
maintain a mirrored archive stored on solar-powered Raspberry Pi clusters. The archive syncs monthly via a community-owned satellite uplink leased during lunar downtimes.\n== Everyday Life ==\nSchoolchildren log phenology observations, while retired tram conductors teach visitor orientation classes in a repurposed depot, complete with time-travel escape room puzzles chronicling the town's evolution.\n\n== Early History ==\n\nThe settlement traces its roots to a trading village documented in the 12th-century annals of the Seljuk chronicler al-Biruni. Archaeological digs in 1989 uncovered kiln-fired ceramics, copper ingots, and terraced irrigation canals that reshaped historians' understanding of Central Asian trade routes.\n\n== Linguistics ==\nModern dialect surveys reveal a blend of Chuvash, Khazar, and Oghur loanwords; linguists have mapped palatalized consonants appearing near river valleys, likely a relic of seasonal migration.\n== Technological Renaissance ==\nBy 1893 the town hosted one of the earliest wireless telegraph stations in the region. Engineer Lidiya Petrovna retrofitted surplus naval equipment to send meteorological data to Moscow every sunset. Her notebooks — digitized in 2017 — contain meticulous diagrams of spark-gap transmitters, annotations in French, and the occasional doodle of a cat wearing goggles.\n== Cultural Revival ==\nAnnual festivals now feature Tuvan throat singing workshops, VR reconstructions of vanished monasteries, and fermentation labs explaining the chemistry behind kumis. 
UNESCO added the town's accordion workshops to its intangible heritage list, citing their adaptive use of recycled polymers for reeds.\n== Contemporary Research ==\nIn 2022 a consortium of botanists, data journalists, and Indigenous seed keepers launched the Steppe Observatory, using open satellite data, LoRaWAN sensors, and community weather diaries to forecast dust storms.\n== Notable Figures ==\nHistorian Salome Okafor popularized the settlement after translating 400 folktales into Yoruba, English, and Esperanto, each annotated with QR codes linking to oral history recordings.\n== Gastronomy ==\nLocal chefs pair fermented camel-milk cheese with candied sea buckthorn, while food trucks experiment with kelp-laden naan tacos, reflecting the town's fishing diaspora.\n== Climate Adaptation ==\nFlood mitigation now involves mycelium-reinforced levees, willow microforests, and AI-optimized sluice gates governed by a civic algorithm crafted in nightly town halls.\n== Digital Archives ==\nVolunteer coders maintain a mirrored archive stored on solar-powered Raspberry Pi clusters. The archive syncs monthly via a community-owned satellite uplink leased during lunar downtimes.\n== Everyday Life ==\nSchoolchildren log phenology observations, while retired tram conductors teach visitor orientation classes in a repurposed depot, complete with time-travel escape room puzzles chronicling the town's evolution.\n\n== Early History ==\n\nThe settlement traces its roots to a trading village documented in the 12th-century annals of the Seljuk chronicler al-Biruni. 
Archaeological digs in 1989 uncovered kiln-fired ceramics, copper ingots, and terraced irrigation canals that reshaped historians' understanding of Central Asian trade routes.\n\n== Linguistics ==\nModern dialect surveys reveal a blend of Chuvash, Khazar, and Oghur loanwords; linguists have mapped palatalized consonants appearing near river valleys, likely a relic of seasonal migration.\n== Technological Renaissance ==\nBy 1893 the town hosted one of the earliest wireless telegraph stations in the region. Engineer Lidiya Petrovna retrofitted surplus naval equipment to send meteorological data to Moscow every sunset. Her notebooks — digitized in 2017 — contain meticulous diagrams of spark-gap transmitters, annotations in French, and the occasional doodle of a cat wearing goggles.\n== Cultural Revival ==\nAnnual festivals now feature Tuvan throat singing workshops, VR reconstructions of vanished monasteries, and fermentation labs explaining the chemistry behind kumis. UNESCO added the town's accordion workshops to its intangible heritage list, citing their adaptive use of recycled polymers for reeds.\n== Contemporary Research ==\nIn 2022 a consortium of botanists, data journalists, and Indigenous seed keepers launched the Steppe Observatory, using open satellite data, LoRaWAN sensors, and community weather diaries to forecast dust storms.\n== Notable Figures ==\nHistorian Salome Okafor popularized the settlement after translating 400 folktales into Yoruba, English, and Esperanto, each annotated with QR codes linking to oral history recordings.\n== Gastronomy ==\nLocal chefs pair fermented camel-milk cheese with candied sea buckthorn, while food trucks experiment with kelp-laden naan tacos, reflecting the town's fishing diaspora.\n== Climate Adaptation ==\nFlood mitigation now involves mycelium-reinforced levees, willow microforests, and AI-optimized sluice gates governed by a civic algorithm crafted in nightly town halls.\n== Digital Archives ==\nVolunteer coders 
maintain a mirrored archive stored on solar-powered Raspberry Pi clusters. The archive syncs monthly via a community-owned satellite uplink leased during lunar downtimes.\n== Everyday Life ==\nSchoolchildren log phenology observations, while retired tram conductors teach visitor orientation classes in a repurposed depot, complete with time-travel escape room puzzles chronicling the town's evolution.\n\n== Early History ==\n\nThe settlement traces its roots to a trading village documented in the 12th-century annals of the Seljuk chronicler al-Biruni."
  },
  {
    "path": "packages/brot/bench/download_data.sh",
    "content": "#!/usr/bin/env bash\nset -euo pipefail\n\nDATA_DIR=\"$(cd \"$(dirname \"$0\")/data\" && pwd)\"\n\necho \"Downloading real-world tokenizer models to $DATA_DIR...\"\n\ncurl -sL -o \"$DATA_DIR/gpt2.json\" \\\n  \"https://huggingface.co/openai-community/gpt2/resolve/main/tokenizer.json\"\necho \"  GPT-2 (BPE, 50K vocab)\"\n\ncurl -sL -o \"$DATA_DIR/bert_base.json\" \\\n  \"https://huggingface.co/google-bert/bert-base-uncased/resolve/main/tokenizer.json\"\necho \"  BERT-base (WordPiece, 30K vocab)\"\n\ncurl -sL -o \"$DATA_DIR/llama.json\" \\\n  \"https://huggingface.co/hf-internal-testing/llama-tokenizer/resolve/main/tokenizer.json\"\necho \"  LLaMA (BPE, 32K vocab)\"\n\necho \"Done.\"\n"
  },
  {
    "path": "packages/brot/bench/dune",
    "content": "(data_only_dirs data)\n\n(executable\n (name bench_brot)\n (libraries brot thumper unix))\n\n(rule\n (alias runtest)\n (action\n  (progn\n   (run %{exe:bench_brot.exe} -q)\n   (diff? brot.thumper brot.thumper.corrected))))\n"
  },
  {
    "path": "packages/brot/doc/01-getting-started.md",
    "content": "# Getting Started\n\nThis guide covers the basics: encoding text to token IDs, decoding back\nto text, configuring the pipeline, and training tokenizers from scratch.\n\n## Installation\n\n<!-- $MDX skip -->\n```bash\nopam install brot\n```\n\nOr build from source:\n\n<!-- $MDX skip -->\n```bash\ngit clone https://github.com/raven-ml/raven\ncd raven && dune build brot\n```\n\n## Encoding and Decoding\n\nA tokenizer converts text to token IDs and back. Build one from a\nvocabulary and merge rules, then encode and decode:\n\n```ocaml\nopen Brot\n\nlet tokenizer =\n  bpe\n    ~vocab:\n      [ (\"h\", 0); (\"e\", 1); (\"l\", 2); (\"o\", 3); (\" \", 4); (\"w\", 5);\n        (\"r\", 6); (\"d\", 7); (\"he\", 8); (\"ll\", 9); (\"llo\", 10);\n        (\"hello\", 11); (\"wo\", 12); (\"rl\", 13); (\"rld\", 14); (\"world\", 15) ]\n    ~merges:\n      [ (\"h\", \"e\"); (\"l\", \"l\"); (\"ll\", \"o\"); (\"he\", \"llo\");\n        (\"w\", \"o\"); (\"r\", \"l\"); (\"rl\", \"d\"); (\"wo\", \"rld\") ]\n    ()\n\n(* Encode text to an Encoding *)\nlet encoding = encode tokenizer \"hello world\"\nlet ids = Encoding.ids encoding       (* [| 11; 4; 15 |] *)\nlet tokens = Encoding.tokens encoding (* [| \"hello\"; \" \"; \"world\" |] *)\n\n(* Decode back to text *)\nlet text = decode tokenizer ids (* \"hello world\" *)\n```\n\n`encode` returns an `Encoding.t`. 
For just the IDs, use `encode_ids`:\n\n```ocaml\nopen Brot\n\nlet tokenizer =\n  bpe\n    ~vocab:\n      [ (\"h\", 0); (\"e\", 1); (\"l\", 2); (\"o\", 3); (\" \", 4); (\"w\", 5);\n        (\"r\", 6); (\"d\", 7); (\"he\", 8); (\"ll\", 9); (\"llo\", 10);\n        (\"hello\", 11); (\"wo\", 12); (\"rl\", 13); (\"rld\", 14); (\"world\", 15) ]\n    ~merges:\n      [ (\"h\", \"e\"); (\"l\", \"l\"); (\"ll\", \"o\"); (\"he\", \"llo\");\n        (\"w\", \"o\"); (\"r\", \"l\"); (\"rl\", \"d\"); (\"wo\", \"rld\") ]\n    ()\n\nlet ids = encode_ids tokenizer \"hello world\" (* [| 11; 4; 15 |] *)\n```\n\n## Encoding Output\n\nAn `Encoding.t` carries more than just token IDs. Every field is a\nparallel array of the same length:\n\n- `ids` — integer token IDs for model input\n- `tokens` — string representation of each token\n- `offsets` — `(start, end)` byte positions in the original text\n- `type_ids` — segment IDs (0 for first sentence, 1 for second in pair tasks)\n- `attention_mask` — 1 for real tokens, 0 for padding\n- `special_tokens_mask` — 1 for special tokens (`[CLS]`, `[SEP]`, padding), 0 for content\n- `word_ids` — maps each token to its source word index, or `None` for special tokens\n\n```ocaml\nopen Brot\n\nlet tokenizer =\n  wordpiece\n    ~vocab:\n      [ (\"[UNK]\", 0); (\"[CLS]\", 1); (\"[SEP]\", 2);\n        (\"the\", 3); (\"cat\", 4); (\"play\", 5); (\"##ing\", 6) ]\n    ~specials:(List.map special [ \"[UNK]\"; \"[CLS]\"; \"[SEP]\" ])\n    ~post:(Post_processor.bert ~cls:(\"[CLS]\", 1) ~sep:(\"[SEP]\", 2) ())\n    ~decoder:(Decoder.wordpiece ())\n    ~pre:(Pre_tokenizer.whitespace ())\n    ~unk_token:\"[UNK]\" ()\n\nlet enc = encode tokenizer \"the cat playing\"\n(* tokens: [| \"[CLS]\"; \"the\"; \"cat\"; \"play\"; \"##ing\"; \"[SEP]\" |] *)\nlet ids = Encoding.ids enc\nlet type_ids = Encoding.type_ids enc\nlet attention_mask = Encoding.attention_mask enc\nlet special_tokens_mask = Encoding.special_tokens_mask enc\nlet offsets = Encoding.offsets enc\nlet word_ids 
= Encoding.word_ids enc\n```\n\nSee [Batch Processing](04-batch-processing/) for a deeper look at encoding\nmetadata, sentence pairs, padding, and truncation.\n\n## The Pipeline\n\nTokenization proceeds through up to 5 configurable stages:\n\n1. **Normalizer** — text cleanup (lowercase, accent removal, Unicode normalization)\n2. **Pre-tokenizer** — split text into pieces with byte offsets\n3. **Algorithm** — apply vocabulary-based encoding (BPE, WordPiece, Unigram, etc.)\n4. **Post-processor** — add special tokens and set type IDs\n5. **Decoder** — reverse the encoding back to text\n\nEach stage is optional. Here is a complete BERT-style pipeline:\n\n```ocaml\nopen Brot\n\nlet tokenizer =\n  wordpiece\n    ~normalizer:(Normalizer.bert ~lowercase:true ())\n    ~pre:(Pre_tokenizer.bert ())\n    ~post:(Post_processor.bert ~cls:(\"[CLS]\", 1) ~sep:(\"[SEP]\", 2) ())\n    ~decoder:(Decoder.wordpiece ())\n    ~vocab:\n      [ (\"[UNK]\", 0); (\"[CLS]\", 1); (\"[SEP]\", 2); (\"[PAD]\", 3);\n        (\"the\", 4); (\"cat\", 5); (\"sat\", 6); (\"on\", 7);\n        (\"play\", 8); (\"##ing\", 9); (\"##ed\", 10) ]\n    ~specials:(List.map special [ \"[UNK]\"; \"[CLS]\"; \"[SEP]\"; \"[PAD]\" ])\n    ~unk_token:\"[UNK]\" ~pad_token:\"[PAD]\" ()\n\n(* The normalizer lowercases \"The Cat\" before tokenization *)\nlet enc = encode tokenizer \"The Cat Sat\"\nlet tokens = Encoding.tokens enc\n(* [| \"[CLS]\"; \"the\"; \"cat\"; \"sat\"; \"[SEP]\" |] *)\n\n(* Decode, skipping special tokens *)\nlet text = decode tokenizer ~skip_special_tokens:true (Encoding.ids enc)\n(* \"the cat sat\" *)\n```\n\nSee [The Tokenization Pipeline](02-pipeline/) for a detailed guide to each\nstage.\n\n## Training\n\nTrain a tokenizer from a text corpus. 
Brot supports training BPE,\nWordPiece, Unigram, and word-level tokenizers:\n\n```ocaml\nopen Brot\n\nlet tokenizer =\n  train_bpe ~vocab_size:80 ~show_progress:false\n    (`Seq (List.to_seq\n       [ \"The quick brown fox jumps over the lazy dog\";\n         \"The dog barked loudly at the brown fox\";\n         \"Quick brown foxes are jumping over lazy dogs\";\n         \"The lazy dog slept while the fox jumped\" ]))\n\nlet size = vocab_size tokenizer\nlet enc = encode tokenizer \"The quick fox\"\n```\n\nSee [Choosing an Algorithm](05-algorithms/) for guidance on which algorithm\nto use and how to configure training.\n\n## Loading Pretrained Tokenizers\n\nLoad a HuggingFace `tokenizer.json` file:\n\n<!-- $MDX skip -->\n```ocaml\nopen Brot\n\nlet tokenizer = from_file \"tokenizer.json\" |> Result.get_ok\nlet encoding = encode tokenizer \"Hello world!\"\n```\n\nLoad from separate vocabulary and merges files:\n\n<!-- $MDX skip -->\n```ocaml\nopen Brot\n\nlet tokenizer =\n  from_model_file ~vocab:\"vocab.json\" ~merges:\"merges.txt\"\n    ~pre:(Pre_tokenizer.byte_level ~add_prefix_space:false ())\n    ~decoder:(Decoder.byte_level ())\n    ()\n```\n\nSee [Pretrained Tokenizers](03-pretrained/) for complete pipeline\nconfigurations for BERT, GPT-2, and SentencePiece-style models.\n\n## Batch Processing\n\nEncode multiple texts at once with padding to uniform length:\n\n```ocaml\nopen Brot\n\nlet tokenizer =\n  train_bpe ~vocab_size:80 ~show_progress:false\n    ~specials:(List.map special [ \"[PAD]\" ])\n    ~pad_token:\"[PAD]\"\n    (`Seq (List.to_seq\n       [ \"The quick brown fox jumps over the lazy dog\";\n         \"The dog barked loudly at the brown fox\";\n         \"Quick brown foxes are jumping over lazy dogs\" ]))\n\nlet encodings =\n  encode_batch tokenizer\n    ~padding:(padding `Batch_longest)\n    [ \"The quick fox\"; \"The lazy dog barked\" ]\n\n(* All encodings now have the same length *)\nlet lengths = List.map Encoding.length encodings\n```\n\nSee 
[Batch Processing](04-batch-processing/) for padding strategies,\ntruncation, sentence pairs, and offset alignment.\n\n## Next Steps\n\n- [The Tokenization Pipeline](02-pipeline/) — how the 5 pipeline stages work\n- [Pretrained Tokenizers](03-pretrained/) — loading, saving, and building known model pipelines\n- [Batch Processing](04-batch-processing/) — padding, truncation, encoding metadata\n- [Choosing an Algorithm](05-algorithms/) — BPE vs WordPiece vs Unigram and when to use each\n"
  },
  {
    "path": "packages/brot/doc/02-pipeline.md",
    "content": "# The Tokenization Pipeline\n\nBrot processes text through up to 5 stages, each optional and independently\nconfigurable:\n\n```\ntext\n │\n ├─ 1. Normalizer       — clean and transform text\n ├─ 2. Pre-tokenizer    — split into pieces with byte offsets\n ├─ 3. Algorithm        — map pieces to token IDs (BPE, WordPiece, …)\n ├─ 4. Post-processor   — add special tokens, set type IDs\n └─ 5. Decoder          — reverse the encoding back to text\n │\n ▼\nEncoding.t (ids, tokens, offsets, masks, …)\n```\n\nEach stage is set when constructing the tokenizer. Omit any stage and it\nis skipped.\n\n## Normalization\n\nNormalizers transform text before tokenization. They handle lowercasing,\naccent removal, Unicode normalization, whitespace cleanup, and\nmodel-specific preprocessing.\n\nAvailable normalizers:\n\n- **Unicode**: `nfc`, `nfd`, `nfkc`, `nfkd`\n- **Text transforms**: `lowercase`, `strip_accents`, `strip`, `replace`, `prepend`\n- **Byte-level**: `byte_level` (GPT-2 style byte-to-Unicode mapping)\n- **Model-specific**: `bert` (clean text, CJK padding, optional lowercasing and accent stripping)\n\nCompose normalizers with `sequence`:\n\n```ocaml\nopen Brot\n\nlet n =\n  Normalizer.sequence\n    [ Normalizer.nfd; Normalizer.strip_accents; Normalizer.lowercase ]\n\nlet r1 = Normalizer.apply n \"Café Résumé\" (* \"cafe resume\" *)\nlet r2 = Normalizer.apply n \"HELLO\"        (* \"hello\" *)\n```\n\nThe BERT normalizer combines several transforms:\n\n```ocaml\nopen Brot\n\nlet n = Normalizer.bert ~lowercase:true ()\n(* Lowercases, cleans control characters, pads CJK *)\nlet r1 = Normalizer.apply n \"Hello World\" (* \"hello world\" *)\nlet r2 = Normalizer.apply n \"Café\"        (* \"cafe\" *)\n```\n\n## Pre-tokenization\n\nPre-tokenizers split text into pieces before the algorithm runs. Each\npiece carries byte offsets into the original text. 
The algorithm then\ntokenizes each piece independently.\n\nAvailable pre-tokenizers:\n\n| Pre-tokenizer         | Description                                                     |\n| --------------------- | --------------------------------------------------------------- |\n| `whitespace ()`       | Split on `\\w+\\|[^\\w\\s]+` (word chars grouped, non-word grouped) |\n| `whitespace_split ()` | Split on whitespace (simplest)                                  |\n| `bert ()`             | BERT-style: whitespace + punctuation isolation + CJK separation |\n| `byte_level ()`       | GPT-2 style byte-level encoding with regex splitting            |\n| `punctuation ()`      | Separate punctuation from alphanumeric content                  |\n| `split ~pattern ()`   | Split on a literal string pattern                               |\n| `char_delimiter c`    | Split on a single character                                     |\n| `digits ()`           | Split on digit boundaries                                       |\n| `metaspace ()`        | Replace whitespace with a visible marker (SentencePiece)        |\n| `unicode_scripts ()`  | Split on Unicode script boundaries                              |\n| `fixed_length n`      | Fixed-size character chunks                                     |\n\nUse `pre_tokenize` to inspect how a pre-tokenizer splits text. It returns\na list of `(piece, (start_offset, end_offset))` pairs:\n\n```ocaml\nopen Brot\n\nlet text = \"Hello, world! How's it going?\"\n\nlet whitespace_pieces =\n  Pre_tokenizer.pre_tokenize (Pre_tokenizer.whitespace ()) text\n(* [(\"Hello\", (0,5)); (\",\", (5,6)); (\"world\", (7,12)); (\"!\", (12,13)); ...] *)\n\nlet bert_pieces =\n  Pre_tokenizer.pre_tokenize (Pre_tokenizer.bert ()) text\n\nlet punct_pieces =\n  Pre_tokenizer.pre_tokenize (Pre_tokenizer.punctuation ()) text\n```\n\nCompose pre-tokenizers with `sequence`. 
Each pre-tokenizer in the chain\nprocesses the pieces from the previous one:\n\n```ocaml\nopen Brot\n\nlet pre =\n  Pre_tokenizer.sequence\n    [ Pre_tokenizer.whitespace_split (); Pre_tokenizer.digits () ]\n\nlet pieces = Pre_tokenizer.pre_tokenize pre \"order 42 shipped\"\n(* [(\"order\", _); (\"4\", _); (\"2\", _); (\"shipped\", _)] *)\n```\n\n## Tokenization Algorithms\n\nThe algorithm maps pre-tokenized pieces to token IDs using the vocabulary.\nBrot supports 5 algorithms:\n\n| Algorithm       | How it splits                               | Notable models                 |\n| --------------- | ------------------------------------------- | ------------------------------ |\n| BPE             | Iterative merge of most frequent pairs      | GPT-2, GPT-3/4, RoBERTa, LLaMA |\n| WordPiece       | Greedy longest-match with `##` prefix       | BERT, DistilBERT, Electra      |\n| Unigram         | Probabilistic segmentation (max likelihood) | T5, ALBERT, mBART, XLNet       |\n| Word-level      | Whole words, no subword splitting           | Simple models, prototyping     |\n| Character-level | Each byte is a token                        | Byte-level fallback            |\n\nSee [Choosing an Algorithm](05-algorithms/) for details on each algorithm,\nwhen to use it, and how to configure training.\n\n## Post-processing\n\nPost-processors add special tokens and set type IDs after tokenization.\nThey handle model-specific requirements like `[CLS]`/`[SEP]` for BERT or\n`<s>`/`</s>` for RoBERTa.\n\nAvailable post-processors:\n\n- `bert ~sep ~cls ()` — `[CLS] A [SEP]` or `[CLS] A [SEP] B [SEP]`, type IDs 0/1\n- `roberta ~sep ~cls ()` — `<s> A </s>` or `<s> A </s> </s> B </s>`, all type IDs 0\n- `byte_level ()` — adjust offsets for byte-level encoding\n- `template ~single ()` — custom template with `$A`, `$B`, and literal token placeholders\n- `sequence processors` — chain multiple post-processors\n\n```ocaml\nopen Brot\n\nlet tokenizer =\n  wordpiece\n    ~vocab:\n      [ 
(\"[UNK]\", 0); (\"[CLS]\", 1); (\"[SEP]\", 2);\n        (\"the\", 3); (\"cat\", 4); (\"sat\", 5); (\"how\", 6); (\"are\", 7); (\"you\", 8) ]\n    ~specials:(List.map special [ \"[UNK]\"; \"[CLS]\"; \"[SEP]\" ])\n    ~pre:(Pre_tokenizer.whitespace ())\n    ~post:(Post_processor.bert ~cls:(\"[CLS]\", 1) ~sep:(\"[SEP]\", 2) ())\n    ~decoder:(Decoder.wordpiece ())\n    ~unk_token:\"[UNK]\" ()\n\n(* Single sentence: [CLS] the cat sat [SEP] *)\nlet single = encode tokenizer \"the cat sat\"\n\n(* Sentence pair: [CLS] the cat sat [SEP] how are you [SEP] *)\nlet pair = encode tokenizer ~pair:\"how are you\" \"the cat sat\"\n(* type_ids: 0 for first sentence + [CLS]/[SEP], 1 for second + [SEP] *)\nlet type_ids = Encoding.type_ids pair\n```\n\nThe `template` post-processor gives full control over the format. Use `$A`\nand `$B` as sequence placeholders, and literal token names in brackets.\nAppend `:N` to set type IDs:\n\n```ocaml\nopen Brot\n\nlet tokenizer =\n  word_level\n    ~vocab:\n      [ (\"[BOS]\", 0); (\"[EOS]\", 1); (\"hello\", 2); (\"world\", 3) ]\n    ~specials:(List.map special [ \"[BOS]\"; \"[EOS]\" ])\n    ~pre:(Pre_tokenizer.whitespace ())\n    ~post:\n      (Post_processor.template\n         ~single:\"[BOS]:0 $A:0 [EOS]:0\"\n         ~pair:\"[BOS]:0 $A:0 [EOS]:0 $B:1 [EOS]:1\"\n         ~special_tokens:[ (\"[BOS]\", 0); (\"[EOS]\", 1) ]\n         ())\n    ~unk_token:\"[UNK]\" ()\n\nlet enc = encode tokenizer \"hello world\"\nlet tokens = Encoding.tokens enc   (* [| \"[BOS]\"; \"hello\"; \"world\"; \"[EOS]\" |] *)\nlet type_ids = Encoding.type_ids enc (* [| 0; 0; 0; 0 |] *)\n```\n\n## Decoding\n\nDecoders reverse encoding-specific transformations to produce natural text\nfrom token strings. 
They operate on token *strings* (looked up from the\nvocabulary), not IDs.\n\nDecoders fall into two categories:\n\n- **Per-token** — transform each token independently: `bpe`, `byte_fallback`, `metaspace`\n- **Collapsing** — process the entire token list as a whole: `byte_level`, `wordpiece`, `replace`, `strip`, `fuse`\n\nThis distinction matters when composing with `sequence`: per-token decoders\npass a list of transformed tokens to the next decoder, while collapsing\ndecoders produce a single result.\n\nAvailable decoders:\n\n| Decoder                   | Type       | Description                                      |\n| ------------------------- | ---------- | ------------------------------------------------ |\n| `bpe ()`                  | Per-token  | Strip end-of-word suffix, insert spaces          |\n| `byte_fallback ()`        | Per-token  | Convert `<0x41>` hex tokens to bytes             |\n| `metaspace ()`            | Per-token  | Convert metaspace markers to spaces              |\n| `byte_level ()`           | Collapsing | Reverse GPT-2 byte-to-Unicode encoding           |\n| `wordpiece ()`            | Collapsing | Strip `##` prefix, join subwords                 |\n| `replace ~pattern ~by ()` | Collapsing | Replace literal pattern in joined text           |\n| `strip ()`                | Collapsing | Remove leading/trailing characters               |\n| `fuse ()`                 | Collapsing | Concatenate all tokens with no delimiter         |\n| `ctc ()`                  | Per-token  | CTC output decoding (deduplication, pad removal) |\n\n```ocaml\nopen Brot\n\n(* WordPiece decoder: strips ## prefix and joins subwords *)\nlet wp = Decoder.wordpiece ()\nlet text = Decoder.decode wp [ \"[CLS]\"; \"play\"; \"##ing\"; \"cat\"; \"##s\"; \"[SEP]\" ]\n(* \"[CLS] playing cats [SEP]\" *)\n\n(* Sequence of decoders *)\nlet seq = Decoder.sequence [ Decoder.fuse (); Decoder.replace ~pattern:\"_\" ~by:\" \" () ]\nlet text2 = Decoder.decode seq [ \"_Hello\"; 
\"_world\" ]\n(* \" Hello world\" *)\n```\n\nWhen using `Brot.decode`, the tokenizer looks up token strings from the\nvocabulary and then applies the configured decoder automatically.\n\n## Complete Example\n\nHere is a complete BERT-style tokenizer using all 5 pipeline stages:\n\n```ocaml\nopen Brot\n\nlet tokenizer =\n  wordpiece\n    (* 1. Normalizer: lowercase and clean text *)\n    ~normalizer:(Normalizer.bert ~lowercase:true ())\n    (* 2. Pre-tokenizer: BERT-style splitting *)\n    ~pre:(Pre_tokenizer.bert ())\n    (* 3. Algorithm: WordPiece with ## prefix *)\n    ~vocab:\n      [ (\"[PAD]\", 0); (\"[UNK]\", 1); (\"[CLS]\", 2); (\"[SEP]\", 3);\n        (\"the\", 4); (\"cat\", 5); (\"sat\", 6); (\"on\", 7); (\"mat\", 8);\n        (\"play\", 9); (\"##ing\", 10); (\"##ed\", 11); (\"a\", 12) ]\n    ~specials:(List.map special [ \"[PAD]\"; \"[UNK]\"; \"[CLS]\"; \"[SEP]\" ])\n    ~unk_token:\"[UNK]\" ~pad_token:\"[PAD]\"\n    (* 4. Post-processor: add [CLS] and [SEP] *)\n    ~post:(Post_processor.bert ~cls:(\"[CLS]\", 2) ~sep:(\"[SEP]\", 3) ())\n    (* 5. Decoder: strip ## and join *)\n    ~decoder:(Decoder.wordpiece ())\n    ()\n\n(* \"The Cat\" is normalized to \"the cat\" before tokenization *)\nlet enc = encode tokenizer \"The Cat Played On A Mat\"\nlet tokens = Encoding.tokens enc\n(* [| \"[CLS]\"; \"the\"; \"cat\"; \"play\"; \"##ed\"; \"on\"; \"a\"; \"mat\"; \"[SEP]\" |] *)\n\n(* Decode back, skipping special tokens *)\nlet text = decode tokenizer ~skip_special_tokens:true (Encoding.ids enc)\n(* \"the cat played on a mat\" *)\n```\n"
  },
  {
    "path": "packages/brot/doc/03-pretrained.md",
    "content": "# Pretrained Tokenizers\n\nMost users start by loading an existing tokenizer rather than building one\nfrom scratch. Brot reads and writes HuggingFace `tokenizer.json` files and\nseparate vocabulary/merges model files.\n\n## Loading from tokenizer.json\n\nHuggingFace models ship a `tokenizer.json` that contains the algorithm,\nvocabulary, merge rules, and full pipeline configuration. Load it with\n`from_file`:\n\n<!-- $MDX skip -->\n```ocaml\nopen Brot\n\nlet tokenizer = from_file \"path/to/tokenizer.json\" |> Result.get_ok\nlet encoding = encode tokenizer \"Hello world!\"\nlet ids = Encoding.ids encoding\n```\n\n`from_file` returns `(t, string) result`. Handle errors explicitly when\nthe file may be missing or malformed:\n\n<!-- $MDX skip -->\n```ocaml\nlet tokenizer =\n  match Brot.from_file \"tokenizer.json\" with\n  | Ok t -> t\n  | Error msg -> failwith msg\n```\n\n## Loading from Model Files\n\nOlder models ship separate `vocab.json` and `merges.txt` files instead\nof a single `tokenizer.json`. Use `from_model_file`:\n\n<!-- $MDX skip -->\n```ocaml\nopen Brot\n\n(* BPE: provide both vocab and merges *)\nlet tokenizer =\n  from_model_file ~vocab:\"vocab.json\" ~merges:\"merges.txt\"\n    ~pre:(Pre_tokenizer.byte_level ~add_prefix_space:false ())\n    ~decoder:(Decoder.byte_level ())\n    ()\n\n(* WordPiece: vocab only, no merges *)\nlet tokenizer =\n  from_model_file ~vocab:\"vocab.txt\"\n    ~pre:(Pre_tokenizer.bert ())\n    ~decoder:(Decoder.wordpiece ())\n    ()\n```\n\nWhen `merges` is provided, a BPE tokenizer is created. Without it,\nWordPiece is used. The pipeline stages (normalizer, pre-tokenizer,\npost-processor, decoder) must be configured explicitly since model files\ndo not include them.\n\n## Building Known Pipelines\n\nWhen you need full control over the pipeline or want to understand what\neach stage does, build the tokenizer from scratch with an inline\nvocabulary. 
The following examples show the standard configurations for\nwell-known models.\n\n### BERT (uncased)\n\nBERT uses WordPiece with `##` continuation prefix, BERT normalization\n(lowercase, clean text, CJK padding), BERT pre-tokenization (whitespace +\npunctuation), and `[CLS]`/`[SEP]` post-processing:\n\n```ocaml\nopen Brot\n\nlet tokenizer =\n  wordpiece\n    ~vocab:\n      [ (\"[PAD]\", 0); (\"[UNK]\", 1); (\"[CLS]\", 2); (\"[SEP]\", 3);\n        (\"the\", 4); (\"cat\", 5); (\"sat\", 6); (\"on\", 7); (\"mat\", 8);\n        (\"play\", 9); (\"##ing\", 10); (\"##ed\", 11); (\"a\", 12);\n        (\"is\", 13); (\"good\", 14) ]\n    ~normalizer:(Normalizer.bert ~lowercase:true ())\n    ~pre:(Pre_tokenizer.bert ())\n    ~post:(Post_processor.bert ~cls:(\"[CLS]\", 2) ~sep:(\"[SEP]\", 3) ())\n    ~decoder:(Decoder.wordpiece ())\n    ~specials:(List.map special [ \"[PAD]\"; \"[UNK]\"; \"[CLS]\"; \"[SEP]\" ])\n    ~unk_token:\"[UNK]\" ~pad_token:\"[PAD]\" ()\n\nlet enc = encode tokenizer \"The Cat Is Playing\"\nlet tokens = Encoding.tokens enc\n(* [| \"[CLS]\"; \"the\"; \"cat\"; \"is\"; \"play\"; \"##ing\"; \"[SEP]\" |] *)\nlet decoded = decode tokenizer ~skip_special_tokens:true (Encoding.ids enc)\n(* \"the cat is playing\" *)\n```\n\n### GPT-2\n\nGPT-2 uses BPE with byte-level pre-tokenization (no information loss,\nhandles any Unicode input) and byte-level decoding:\n\n```ocaml\nopen Brot\n\nlet tokenizer =\n  bpe\n    ~vocab:\n      [ (\"H\", 0); (\"e\", 1); (\"l\", 2); (\"o\", 3); (\"Ġ\", 4); (\"w\", 5);\n        (\"r\", 6); (\"d\", 7); (\"He\", 8); (\"ll\", 9); (\"llo\", 10);\n        (\"Hello\", 11); (\"Ġw\", 12); (\"or\", 13); (\"ld\", 14);\n        (\"orld\", 15); (\"Ġworld\", 16) ]\n    ~merges:\n      [ (\"H\", \"e\"); (\"l\", \"l\"); (\"ll\", \"o\"); (\"He\", \"llo\");\n        (\"Ġ\", \"w\"); (\"o\", \"r\"); (\"l\", \"d\"); (\"or\", \"ld\");\n        (\"Ġw\", \"orld\") ]\n    ~pre:(Pre_tokenizer.byte_level ~add_prefix_space:false ())\n    
~decoder:(Decoder.byte_level ())\n    ()\n\nlet enc = encode tokenizer \"Hello world\"\nlet tokens = Encoding.tokens enc (* [| \"Hello\"; \"Ġworld\" |] *)\nlet decoded = decode tokenizer (Encoding.ids enc) (* \"Hello world\" *)\n```\n\n### SentencePiece-style (T5, ALBERT)\n\nSentencePiece models use Unigram with metaspace pre-tokenization (spaces\nreplaced by the visible U+2581 marker, written as the `\\xe2\\x96\\x81` byte\nsequence in the vocabulary below) and metaspace decoding:\n\n```ocaml\nopen Brot\n\nlet tokenizer =\n  unigram\n    ~vocab:\n      [ (\"<unk>\", -1.0); (\"\\xe2\\x96\\x81\", -2.0);\n        (\"\\xe2\\x96\\x81the\", -1.5); (\"\\xe2\\x96\\x81cat\", -1.8);\n        (\"\\xe2\\x96\\x81is\", -1.6); (\"\\xe2\\x96\\x81play\", -2.0);\n        (\"ing\", -2.5); (\"\\xe2\\x96\\x81a\", -1.4); (\"\\xe2\\x96\\x81good\", -2.1) ]\n    ~pre:(Pre_tokenizer.metaspace ())\n    ~decoder:(Decoder.metaspace ())\n    ~unk_token:\"<unk>\" ()\n\nlet enc = encode tokenizer \"the cat is playing\"\n```\n\n## Saving Tokenizers\n\nSave a tokenizer in HuggingFace format for later use or sharing:\n\n<!-- $MDX skip -->\n```ocaml\n(* Save as tokenizer.json (full pipeline) *)\nBrot.save_pretrained tokenizer ~path:\"./my_tokenizer\"\n\n(* Save just the vocabulary and merges files *)\nlet files = Brot.save_model_files tokenizer ~folder:\"./model\" ()\n\n(* Export BPE merges in tiktoken format *)\nBrot.export_tiktoken tokenizer\n  ~merges_path:\"./tiktoken_merges.txt\"\n  ~vocab_path:\"./tiktoken_vocab.txt\"\n```\n\n## Training from Scratch\n\nTrain a tokenizer from a text corpus. 
Configure the full pipeline\nalongside the training parameters:\n\n```ocaml\nopen Brot\n\nlet tokenizer =\n  train_bpe\n    ~vocab_size:120\n    ~min_frequency:1\n    ~show_progress:false\n    ~pre:(Pre_tokenizer.whitespace ())\n    ~specials:(List.map special [ \"[PAD]\"; \"[UNK]\" ])\n    ~unk_token:\"[UNK]\" ~pad_token:\"[PAD]\"\n    (`Seq (List.to_seq\n       [ \"The quick brown fox jumps over the lazy dog.\";\n         \"Machine learning models need good tokenizers.\";\n         \"Subword tokenization handles unknown words gracefully.\";\n         \"The fox jumped over the lazy dog again.\";\n         \"Tokenizers convert text to numerical representations.\" ]))\n\nlet size = vocab_size tokenizer\nlet enc = encode tokenizer \"The quick fox\"\n```\n\nSee [Choosing an Algorithm](05-algorithms/) for guidance on which algorithm\nto train and how to tune parameters like `vocab_size`, `min_frequency`,\nand algorithm-specific options.\n"
  },
  {
    "path": "packages/brot/doc/04-batch-processing.md",
    "content": "# Batch Processing\n\nReal-world usage requires encoding multiple texts into uniform-length\nsequences for model input. This guide covers encoding metadata, sentence\npairs, batch encoding, padding, truncation, and offset alignment.\n\n## Encoding Metadata\n\n`Encoding.t` carries parallel arrays that all share the same length. Each\nfield serves a specific purpose in model input preparation:\n\n| Field                 | Type                | Description                                     |\n| --------------------- | ------------------- | ----------------------------------------------- |\n| `ids`                 | `int array`         | Token IDs for model input                       |\n| `tokens`              | `string array`      | String representation of each token             |\n| `offsets`             | `(int * int) array` | `(start, end)` byte positions in source text    |\n| `type_ids`            | `int array`         | Segment IDs: 0 for sentence A, 1 for sentence B |\n| `attention_mask`      | `int array`         | 1 for real tokens, 0 for padding                |\n| `special_tokens_mask` | `int array`         | 1 for special tokens, 0 for content             |\n| `word_ids`            | `int option array`  | Source word index, or `None` for special tokens |\n\n```ocaml\nopen Brot\n\nlet tokenizer =\n  wordpiece\n    ~vocab:\n      [ (\"[UNK]\", 0); (\"[CLS]\", 1); (\"[SEP]\", 2);\n        (\"the\", 3); (\"cat\", 4); (\"play\", 5); (\"##ing\", 6) ]\n    ~specials:(List.map special [ \"[UNK]\"; \"[CLS]\"; \"[SEP]\" ])\n    ~pre:(Pre_tokenizer.whitespace ())\n    ~post:(Post_processor.bert ~cls:(\"[CLS]\", 1) ~sep:(\"[SEP]\", 2) ())\n    ~decoder:(Decoder.wordpiece ())\n    ~unk_token:\"[UNK]\" ()\n\nlet enc = encode tokenizer \"the cat playing\"\n(* tokens: [| \"[CLS]\"; \"the\"; \"cat\"; \"play\"; \"##ing\"; \"[SEP]\" |] *)\nlet ids = Encoding.ids enc\nlet type_ids = Encoding.type_ids enc\nlet attention_mask = Encoding.attention_mask 
enc\nlet special_tokens_mask = Encoding.special_tokens_mask enc\nlet offsets = Encoding.offsets enc\nlet word_ids = Encoding.word_ids enc\n(* word_ids: [| None; Some 0; Some 1; Some 2; Some 2; None |]\n   \"play\" and \"##ing\" share word index 2 *)\n```\n\n## Sentence Pairs\n\nMany NLP tasks (question answering, natural language inference, sentence\nsimilarity) operate on pairs of sentences. Use `encode ~pair` to encode\nboth sequences together:\n\n```ocaml\nopen Brot\n\nlet tokenizer =\n  wordpiece\n    ~vocab:\n      [ (\"[UNK]\", 0); (\"[CLS]\", 1); (\"[SEP]\", 2);\n        (\"the\", 3); (\"cat\", 4); (\"sat\", 5); (\"how\", 6);\n        (\"are\", 7); (\"you\", 8) ]\n    ~specials:(List.map special [ \"[UNK]\"; \"[CLS]\"; \"[SEP]\" ])\n    ~pre:(Pre_tokenizer.whitespace ())\n    ~post:(Post_processor.bert ~cls:(\"[CLS]\", 1) ~sep:(\"[SEP]\", 2) ())\n    ~decoder:(Decoder.wordpiece ())\n    ~unk_token:\"[UNK]\" ()\n\nlet enc = encode tokenizer ~pair:\"how are you\" \"the cat sat\"\n(* tokens: [| \"[CLS]\"; \"the\"; \"cat\"; \"sat\"; \"[SEP]\"; \"how\"; \"are\"; \"you\"; \"[SEP]\" |] *)\nlet type_ids = Encoding.type_ids enc\n(* [| 0; 0; 0; 0; 0; 1; 1; 1; 1 |] *)\n```\n\nType IDs distinguish the two sentences: 0 for the first sequence\n(including `[CLS]` and first `[SEP]`), 1 for the second (including\nfinal `[SEP]`).\n\n## Batch Encoding\n\nEncode multiple texts at once with `encode_batch`, or multiple sentence\npairs with `encode_pairs_batch`:\n\n```ocaml\nopen Brot\n\nlet tokenizer =\n  wordpiece\n    ~vocab:\n      [ (\"[UNK]\", 0); (\"[CLS]\", 1); (\"[SEP]\", 2);\n        (\"the\", 3); (\"cat\", 4); (\"sat\", 5);\n        (\"how\", 6); (\"are\", 7); (\"you\", 8); (\"good\", 9) ]\n    ~specials:(List.map special [ \"[UNK]\"; \"[CLS]\"; \"[SEP]\" ])\n    ~pre:(Pre_tokenizer.whitespace ())\n    ~post:(Post_processor.bert ~cls:(\"[CLS]\", 1) ~sep:(\"[SEP]\", 2) ())\n    ~decoder:(Decoder.wordpiece ())\n    ~unk_token:\"[UNK]\" ()\n\n(* Batch of single sentences 
*)\nlet encodings =\n  encode_batch tokenizer [ \"the cat\"; \"the cat sat\"; \"good\" ]\nlet lengths = List.map Encoding.length encodings\n(* [4; 5; 3] — each includes [CLS] and [SEP] *)\n\n(* Batch of sentence pairs *)\nlet pairs =\n  encode_pairs_batch tokenizer\n    [ (\"the cat sat\", \"how are you\"); (\"good\", \"the cat\") ]\n```\n\n## Padding\n\nModels require uniform sequence lengths within a batch. Padding extends\nshorter sequences with padding tokens. Three strategies are available:\n\n- **`Batch_longest`** — pad to the longest sequence in the batch\n- **`Fixed n`** — pad every sequence to exactly `n` tokens\n- **`To_multiple n`** — pad to the smallest multiple of `n` that fits\n\nPadding tokens have `attention_mask = 0` and `special_tokens_mask = 1`.\n\n```ocaml\nopen Brot\n\nlet tokenizer =\n  word_level\n    ~vocab:\n      [ (\"[PAD]\", 0); (\"[UNK]\", 1); (\"the\", 2); (\"cat\", 3);\n        (\"sat\", 4); (\"on\", 5); (\"a\", 6); (\"mat\", 7) ]\n    ~specials:(List.map special [ \"[PAD]\"; \"[UNK]\" ])\n    ~pre:(Pre_tokenizer.whitespace ())\n    ~unk_token:\"[UNK]\" ~pad_token:\"[PAD]\" ()\n\nlet texts = [ \"the cat\"; \"the cat sat on a mat\"; \"cat\" ]\n\n(* Pad to longest in batch — all encodings have length 6 *)\nlet batch1 =\n  encode_batch tokenizer ~padding:(padding `Batch_longest) texts\n\n(* Pad to fixed length — all encodings have length 8 *)\nlet batch2 =\n  encode_batch tokenizer ~padding:(padding (`Fixed 8)) texts\n\n(* Pad to multiple of 4 — lengths rounded up to nearest multiple *)\nlet batch3 =\n  encode_batch tokenizer ~padding:(padding (`To_multiple 4)) texts\n```\n\nBy default, padding is applied to the right. 
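Padded positions carry the pad token, with `attention_mask = 0`; a minimal sketch with a tiny word-level vocabulary:\n\n```ocaml\nopen Brot\n\nlet tokenizer =\n  word_level\n    ~vocab:[ (\"[PAD]\", 0); (\"[UNK]\", 1); (\"the\", 2); (\"cat\", 3) ]\n    ~specials:(List.map special [ \"[PAD]\"; \"[UNK]\" ])\n    ~pre:(Pre_tokenizer.whitespace ())\n    ~unk_token:\"[UNK]\" ~pad_token:\"[PAD]\" ()\n\n(* Two real tokens, padded on the right to a fixed length of 4 *)\nlet enc = encode tokenizer ~padding:(padding (`Fixed 4)) \"the cat\"\n(* tokens:         [| \"the\"; \"cat\"; \"[PAD]\"; \"[PAD]\" |]\n   attention_mask: [| 1; 1; 0; 0 |] *)\n```\n\n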
Use `` ~direction:`Left ``\nfor left-padding, which is common for autoregressive generation:\n\n```ocaml\nopen Brot\n\nlet tokenizer =\n  word_level\n    ~vocab:\n      [ (\"[PAD]\", 0); (\"[UNK]\", 1); (\"the\", 2); (\"cat\", 3); (\"sat\", 4) ]\n    ~specials:(List.map special [ \"[PAD]\"; \"[UNK]\" ])\n    ~pre:(Pre_tokenizer.whitespace ())\n    ~unk_token:\"[UNK]\" ~pad_token:\"[PAD]\" ()\n\nlet encodings =\n  encode_batch tokenizer\n    ~padding:(padding ~direction:`Left (`Fixed 5))\n    [ \"the cat\"; \"the cat sat\" ]\n(* tokens: [| \"[PAD]\"; \"[PAD]\"; \"[PAD]\"; \"the\"; \"cat\" |]\n           [| \"[PAD]\"; \"[PAD]\"; \"the\"; \"cat\"; \"sat\" |] *)\n```\n\n## Truncation\n\nTruncation limits sequences to a maximum length. Excess tokens are\ntrimmed from the specified direction:\n\n```ocaml\nopen Brot\n\nlet tokenizer =\n  word_level\n    ~vocab:\n      [ (\"[UNK]\", 0); (\"the\", 1); (\"quick\", 2); (\"brown\", 3);\n        (\"fox\", 4); (\"jumps\", 5); (\"over\", 6) ]\n    ~specials:(List.map special [ \"[UNK]\" ])\n    ~pre:(Pre_tokenizer.whitespace ())\n    ~unk_token:\"[UNK]\" ()\n\nlet text = \"the quick brown fox jumps over\"\n\n(* Truncate from the right (default) *)\nlet enc_right = encode tokenizer ~truncation:(truncation 4) text\nlet tokens_right = Encoding.tokens enc_right\n(* [| \"the\"; \"quick\"; \"brown\"; \"fox\" |] *)\n\n(* Truncate from the left *)\nlet enc_left =\n  encode tokenizer ~truncation:(truncation ~direction:`Left 4) text\nlet tokens_left = Encoding.tokens enc_left\n(* [| \"brown\"; \"fox\"; \"jumps\"; \"over\" |] *)\n```\n\nWhen using a post-processor that adds special tokens, account for the\ntokens it adds. 
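For a BERT-style post-processor that wraps a single sequence in `[CLS]` and `[SEP]`, capping the final length at `n` means truncating content to `n - 2` (a sketch; it assumes, as the note above implies, that truncation applies to content before the post-processor adds its tokens):\n\n```ocaml\nopen Brot\n\nlet tokenizer =\n  wordpiece\n    ~vocab:\n      [ (\"[UNK]\", 0); (\"[CLS]\", 1); (\"[SEP]\", 2);\n        (\"the\", 3); (\"cat\", 4); (\"sat\", 5); (\"on\", 6); (\"a\", 7); (\"mat\", 8) ]\n    ~specials:(List.map special [ \"[UNK]\"; \"[CLS]\"; \"[SEP]\" ])\n    ~pre:(Pre_tokenizer.whitespace ())\n    ~post:(Post_processor.bert ~cls:(\"[CLS]\", 1) ~sep:(\"[SEP]\", 2) ())\n    ~decoder:(Decoder.wordpiece ())\n    ~unk_token:\"[UNK]\" ()\n\n(* Cap the final length at 6: 4 content tokens + [CLS] + [SEP] *)\nlet enc = encode tokenizer ~truncation:(truncation (6 - 2)) \"the cat sat on a mat\"\nlet len = Encoding.length enc (* expected: 6 *)\n```\n\n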
Use `Post_processor.added_tokens` to calculate the\nbudget:\n\n```ocaml\nopen Brot\n\nlet post = Post_processor.bert ~cls:(\"[CLS]\", 1) ~sep:(\"[SEP]\", 2) ()\nlet added_single = Post_processor.added_tokens post ~is_pair:false (* 2 *)\nlet added_pair = Post_processor.added_tokens post ~is_pair:true    (* 3 *)\n```\n\n## Padding and Truncation Together\n\nThe common pattern for model input: truncate long sequences and pad short\nones to a uniform length:\n\n```ocaml\nopen Brot\n\nlet tokenizer =\n  word_level\n    ~vocab:\n      [ (\"[PAD]\", 0); (\"[UNK]\", 1); (\"the\", 2); (\"cat\", 3);\n        (\"sat\", 4); (\"on\", 5); (\"a\", 6); (\"mat\", 7);\n        (\"dog\", 8); (\"ran\", 9); (\"fast\", 10) ]\n    ~specials:(List.map special [ \"[PAD]\"; \"[UNK]\" ])\n    ~pre:(Pre_tokenizer.whitespace ())\n    ~unk_token:\"[UNK]\" ~pad_token:\"[PAD]\" ()\n\nlet encodings =\n  encode_batch tokenizer\n    ~truncation:(truncation 4)\n    ~padding:(padding (`Fixed 4))\n    [ \"the cat sat on a mat\"; \"the dog ran\"; \"cat\" ]\n(* All encodings have exactly 4 tokens.\n   Long sequences are truncated, short ones are padded.\n   attention_mask distinguishes real tokens (1) from padding (0). *)\nlet masks = List.map Encoding.attention_mask encodings\n```\n\n## Offsets and Alignment\n\n`Encoding.offsets` maps each token back to its `(start, end)` byte span\nin the original text. 
This is useful for tasks like named entity\nrecognition where you need to extract the source text for each token:\n\n```ocaml\nopen Brot\n\nlet tokenizer =\n  wordpiece\n    ~vocab:\n      [ (\"[UNK]\", 0); (\"hello\", 1); (\"world\", 2);\n        (\"play\", 3); (\"##ing\", 4) ]\n    ~pre:(Pre_tokenizer.whitespace ())\n    ~decoder:(Decoder.wordpiece ())\n    ~unk_token:\"[UNK]\" ()\n\nlet text = \"hello playing world\"\nlet enc = encode tokenizer text\nlet offsets = Encoding.offsets enc\n(* offsets.(0) = (0, 5)   -> \"hello\"\n   offsets.(1) = (6, 13)  -> \"playing\" (start of \"play\")\n   offsets.(2) = (6, 13)  -> \"playing\" (extent of \"##ing\")\n   offsets.(3) = (14, 19) -> \"world\" *)\n\n(* Extract source span for a token *)\nlet start, end_ = offsets.(0)\nlet source = String.sub text start (end_ - start) (* \"hello\" *)\n```\n\n`Encoding.word_ids` groups subword tokens back to their source word.\nTokens that belong to the same word share the same word index:\n\n```ocaml\nopen Brot\n\nlet tokenizer =\n  wordpiece\n    ~vocab:\n      [ (\"[UNK]\", 0); (\"the\", 1); (\"cat\", 2);\n        (\"play\", 3); (\"##ing\", 4); (\"##s\", 5) ]\n    ~pre:(Pre_tokenizer.whitespace ())\n    ~decoder:(Decoder.wordpiece ())\n    ~unk_token:\"[UNK]\" ()\n\nlet enc = encode tokenizer \"the cat playing\"\nlet word_ids = Encoding.word_ids enc\n(* [| Some 0; Some 1; Some 2; Some 2 |]\n   \"play\" and \"##ing\" share word index 2,\n   indicating they come from the same source word *)\n```\n"
  },
  {
    "path": "packages/brot/doc/05-algorithms.md",
    "content": "# Choosing a Tokenization Algorithm\n\nBrot supports 5 tokenization algorithms. The three subword algorithms\n(BPE, WordPiece, Unigram) handle open vocabulary by splitting rare words\ninto smaller pieces. Word-level and character-level are simpler\nalternatives.\n\n## BPE (Byte Pair Encoding)\n\nBPE starts with individual characters and iteratively merges the most\nfrequent adjacent pairs. The merge rules, learned during training, define\nhow text is split. Used by GPT-2, GPT-3/4, RoBERTa, and LLaMA.\n\nConstructor: `Brot.bpe`. Trainer: `Brot.train_bpe`.\n\nKey parameters:\n- `vocab_size` — target vocabulary size (default: 30000)\n- `min_frequency` — minimum pair frequency for merging (default: 0)\n- `dropout` — probability of skipping merges for data augmentation\n- `byte_fallback` — use `<0x00>` byte tokens instead of unknown token\n- `continuing_subword_prefix` — prefix for non-initial subwords\n- `end_of_word_suffix` — suffix marking word boundaries (e.g., `</w>`)\n\n```ocaml\nopen Brot\n\nlet tokenizer =\n  bpe\n    ~vocab:\n      [ (\"h\", 0); (\"e\", 1); (\"l\", 2); (\"o\", 3); (\" \", 4); (\"w\", 5);\n        (\"r\", 6); (\"d\", 7); (\"he\", 8); (\"ll\", 9); (\"llo\", 10);\n        (\"hello\", 11); (\"wo\", 12); (\"rl\", 13); (\"rld\", 14); (\"world\", 15) ]\n    ~merges:\n      [ (\"h\", \"e\"); (\"l\", \"l\"); (\"ll\", \"o\"); (\"he\", \"llo\");\n        (\"w\", \"o\"); (\"r\", \"l\"); (\"rl\", \"d\"); (\"wo\", \"rld\") ]\n    ()\n\nlet enc = encode tokenizer \"hello world\"\nlet tokens = Encoding.tokens enc (* [| \"hello\"; \" \"; \"world\" |] *)\n```\n\nTraining BPE:\n\n```ocaml\nopen Brot\n\nlet tokenizer =\n  train_bpe ~vocab_size:80 ~min_frequency:1 ~show_progress:false\n    (`Seq (List.to_seq\n       [ \"The quick brown fox jumps over the lazy dog\";\n         \"The dog barked at the brown fox\";\n         \"Quick brown foxes are rare and beautiful\" ]))\n\nlet size = vocab_size tokenizer\nlet enc = encode tokenizer \"The brown 
fox\"\n```\n\n## WordPiece\n\nWordPiece uses a greedy longest-match-first algorithm. For each word, it\nfinds the longest prefix in the vocabulary, then continues with the\nremainder prefixed by a continuation marker (default: `##`). Used by BERT,\nDistilBERT, and Electra.\n\nConstructor: `Brot.wordpiece`. Trainer: `Brot.train_wordpiece`.\n\nKey parameters:\n- `vocab_size` — target vocabulary size (default: 30000)\n- `continuing_subword_prefix` — prefix for non-initial subwords (default: `##`)\n- `max_input_chars_per_word` — words longer than this become unknown (default: 100)\n\n```ocaml\nopen Brot\n\nlet tokenizer =\n  wordpiece\n    ~vocab:\n      [ (\"[UNK]\", 0); (\"the\", 1); (\"cat\", 2); (\"play\", 3);\n        (\"##ing\", 4); (\"##ed\", 5); (\"##s\", 6); (\"un\", 7);\n        (\"##happy\", 8); (\"##ly\", 9) ]\n    ~pre:(Pre_tokenizer.whitespace ())\n    ~decoder:(Decoder.wordpiece ())\n    ~unk_token:\"[UNK]\" ()\n\nlet enc = encode tokenizer \"the cat playing unhappily\"\nlet tokens = Encoding.tokens enc\n(* [| \"the\"; \"cat\"; \"play\"; \"##ing\"; \"un\"; \"##happy\"; \"##ly\" |] *)\nlet decoded = decode tokenizer (Encoding.ids enc)\n(* \"the cat playing unhappily\" *)\n```\n\nTraining WordPiece:\n\n```ocaml\nopen Brot\n\nlet tokenizer =\n  train_wordpiece ~vocab_size:80 ~show_progress:false\n    (`Seq (List.to_seq\n       [ \"The quick brown fox jumps over the lazy dog\";\n         \"The dog barked at the brown fox\";\n         \"Quick brown foxes are rare and beautiful\" ]))\n\nlet size = vocab_size tokenizer\nlet enc = encode tokenizer \"The brown fox\"\n```\n\n## Unigram\n\nUnigram uses probabilistic segmentation: given a vocabulary of subwords\nwith log-probabilities, it finds the segmentation that maximizes the\ntotal likelihood. Training uses the EM algorithm to iteratively prune the\nvocabulary. Used by T5, ALBERT, mBART, and XLNet.\n\nConstructor: `Brot.unigram`. 
Trainer: `Brot.train_unigram`.\n\nKey parameters:\n- `vocab_size` — target vocabulary size (default: 8000)\n- `shrinking_factor` — fraction of vocabulary to retain per pruning round (default: 0.75)\n- `max_piece_length` — maximum subword length (default: 16)\n- `n_sub_iterations` — EM sub-iterations per pruning round (default: 2)\n\nVocabulary entries are `(token, score)` pairs where scores are log\nprobabilities (negative values; scores closer to zero mark more likely\npieces):\n\n```ocaml\nopen Brot\n\nlet tokenizer =\n  unigram\n    ~vocab:\n      [ (\"<unk>\", 0.0); (\"the\", -1.0); (\"cat\", -1.5);\n        (\"th\", -2.0); (\"e\", -2.5); (\"c\", -3.0); (\"a\", -3.0);\n        (\"t\", -3.0); (\"at\", -2.0); (\"he\", -2.0);\n        (\"sat\", -1.8); (\"on\", -1.5) ]\n    ~unk_token:\"<unk>\" ()\n\nlet enc = encode tokenizer \"the cat sat on\"\n```\n\nTraining Unigram:\n\n```ocaml\nopen Brot\n\nlet tokenizer =\n  train_unigram ~vocab_size:60 ~show_progress:false\n    (`Seq (List.to_seq\n       [ \"The quick brown fox jumps over the lazy dog\";\n         \"The dog barked at the brown fox\";\n         \"Quick brown foxes are rare and beautiful\" ]))\n\nlet size = vocab_size tokenizer\nlet enc = encode tokenizer \"The brown fox\"\n```\n\n## Word-level\n\nWord-level tokenization maps each word directly to a token ID. No\nsubword splitting is performed — words not in the vocabulary are replaced\nby the unknown token.\n\nConstructor: `Brot.word_level`. Trainer: `Brot.train_wordlevel`.\n\nBest suited for small controlled vocabularies and prototyping. 
For\nproduction use with open vocabulary, prefer a subword algorithm.\n\nWhen no pre-tokenizer is specified, `word_level` defaults to\n`Pre_tokenizer.whitespace`.\n\n```ocaml\nopen Brot\n\nlet tokenizer =\n  word_level\n    ~vocab:\n      [ (\"[UNK]\", 0); (\"the\", 1); (\"cat\", 2); (\"sat\", 3);\n        (\"on\", 4); (\"a\", 5); (\"mat\", 6) ]\n    ~unk_token:\"[UNK]\" ()\n\n(* Known words get their IDs, unknown words become [UNK] *)\nlet enc = encode tokenizer \"the cat sat on a rug\"\nlet tokens = Encoding.tokens enc\n(* [| \"the\"; \"cat\"; \"sat\"; \"on\"; \"a\"; \"[UNK]\" |] *)\nlet ids = Encoding.ids enc\n(* [| 1; 2; 3; 4; 5; 0 |] *)\n```\n\n## Character-level\n\nCharacter-level tokenization maps each byte to a token with ID equal to\nits ordinal value. No vocabulary or training is needed.\n\nConstructor: `Brot.chars`.\n\nUseful as a byte-level fallback or for models that operate directly on\ncharacters:\n\n```ocaml\nopen Brot\n\nlet tokenizer = chars ()\n\nlet enc = encode tokenizer \"Hi!\"\nlet tokens = Encoding.tokens enc (* [| \"H\"; \"i\"; \"!\" |] *)\nlet ids = Encoding.ids enc       (* [| 72; 105; 33 |] *)\n```\n\n## Quick Reference\n\n| Algorithm       | Splitting strategy                        | Typical vocab | Notable models            | Constructor  | Trainer           |\n| --------------- | ----------------------------------------- | ------------- | ------------------------- | ------------ | ----------------- |\n| BPE             | Iterative merge of frequent pairs         | 30K-50K       | GPT-2, RoBERTa, LLaMA     | `bpe`        | `train_bpe`       |\n| WordPiece       | Greedy longest-match with `##` prefix     | 30K           | BERT, DistilBERT, Electra | `wordpiece`  | `train_wordpiece` |\n| Unigram         | Probabilistic max-likelihood segmentation | 8K-32K        | T5, ALBERT, mBART, XLNet  | `unigram`    | `train_unigram`   |\n| Word-level      | Whole words, no splitting                 | Varies        | Simple models             | 
`word_level` | `train_wordlevel` |\n| Character-level | Each byte is a token                      | 256           | Byte-level models         | `chars`      | —                 |\n"
  },
  {
    "path": "packages/brot/doc/06-hf-tokenizers-comparison.md",
    "content": "# Brot vs. HuggingFace Tokenizers -- A Practical Comparison\n\nThis guide explains how Brot relates to Python's [HuggingFace Tokenizers](https://github.com/huggingface/tokenizers), focusing on:\n\n* How core concepts map (tokenizer types, pipeline stages, encoding results)\n* Where the APIs feel similar vs. deliberately different\n* How to translate common HuggingFace patterns into Brot\n\nIf you already use HuggingFace Tokenizers, this should be enough to become productive in Brot quickly.\n\n---\n\n## 1. Big-Picture Differences\n\n| Aspect             | HuggingFace Tokenizers (Python)                      | Brot (OCaml)                                                                  |\n| ------------------ | ---------------------------------------------------- | ----------------------------------------------------------------------------- |\n| Language           | Python bindings over Rust                            | Native OCaml                                                                  |\n| Core type          | `tokenizers.Tokenizer`                               | `Brot.t`                                                                      |\n| Encoding result    | `tokenizers.Encoding`                                | `Encoding.t`                                                                  |\n| Algorithms         | `BPE`, `WordPiece`, `Unigram`, `WordLevel`           | `Brot.bpe`, `Brot.wordpiece`, `Brot.unigram`, `Brot.word_level`, `Brot.chars` |\n| Pipeline stages    | Mutable properties on `Tokenizer` object             | Immutable `~normalizer`, `~pre`, `~post`, `~decoder` args                     |\n| Mutability         | Tokenizer is mutable (set properties after creation) | Tokenizer is immutable after creation                                         |\n| HuggingFace compat | Native format                                        | Full `tokenizer.json` read/write via `from_file`/`save_pretrained`            |\n| Training    
       | `Trainer` objects passed to `tokenizer.train()`      | `Brot.train_bpe`, `Brot.train_wordpiece`, etc.                                |\n| Padding config     | `tokenizer.enable_padding()`                         | `~padding` arg on `encode`/`encode_batch`                                     |\n| Truncation config  | `tokenizer.enable_truncation()`                      | `~truncation` arg on `encode`/`encode_batch`                                  |\n\n**Brot semantics to know (read once):**\n- Tokenizers are immutable. Pipeline components are set at construction time, not mutated after.\n- `from_file` returns `(t, string) result`. Handle errors explicitly.\n- Padding and truncation are per-call parameters, not global tokenizer state.\n- Special tokens use a record type (`Brot.special`) with explicit control over stripping and normalization.\n- `encode` returns `Encoding.t`; use `encode_ids` when you only need the ID array.\n\n---\n\n## 2. Loading Pretrained Tokenizers\n\n### 2.1 From a tokenizer.json file\n\n**HuggingFace**\n\n```python\nfrom tokenizers import Tokenizer\n\ntokenizer = Tokenizer.from_file(\"tokenizer.json\")\n```\n\n**Brot**\n\n<!-- $MDX skip -->\n```ocaml\nlet tokenizer = Brot.from_file \"tokenizer.json\" |> Result.get_ok\n```\n\nBoth read the same `tokenizer.json` format. 
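To handle a missing or malformed file, match on the returned value; a minimal sketch:\n\n<!-- $MDX skip -->\n```ocaml\nlet tokenizer =\n  match Brot.from_file \"tokenizer.json\" with\n  | Ok t -> t\n  | Error msg -> failwith (\"tokenizer load failed: \" ^ msg)\n```\n\n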
Brot's `from_file` returns a `result` instead of raising an exception.\n\n### 2.2 From vocabulary and merges files\n\n**HuggingFace**\n\n```python\nfrom tokenizers import Tokenizer\nfrom tokenizers.models import BPE\n\ntokenizer = Tokenizer(BPE.from_file(\"vocab.json\", \"merges.txt\"))\n```\n\n**Brot**\n\n<!-- $MDX skip -->\n```ocaml\nlet tokenizer =\n  Brot.from_model_file\n    ~vocab:\"vocab.json\"\n    ~merges:\"merges.txt\"\n    ()\n```\n\nWhen `~merges` is omitted, Brot infers WordPiece instead of BPE.\n\n### 2.3 Saving\n\n**HuggingFace**\n\n```python\ntokenizer.save(\"tokenizer.json\")\n```\n\n**Brot**\n\n<!-- $MDX skip -->\n```ocaml\nBrot.save_pretrained tokenizer ~path:\"./my_tokenizer\"\n```\n\n`save_pretrained` creates `path/tokenizer.json` in HuggingFace format. Use `to_json` when you need the JSON value directly.\n\n---\n\n## 3. Encoding Text\n\n### 3.1 Basic encoding\n\n**HuggingFace**\n\n```python\noutput = tokenizer.encode(\"Hello world!\")\noutput.ids          # [101, 7592, 2088, 999, 102]\noutput.tokens       # ['[CLS]', 'hello', 'world', '!', '[SEP]']\noutput.offsets      # [(0, 0), (0, 5), (6, 11), (11, 12), (0, 0)]\noutput.type_ids     # [0, 0, 0, 0, 0]\noutput.attention_mask  # [1, 1, 1, 1, 1]\n```\n\n**Brot**\n\n<!-- $MDX skip -->\n```ocaml\nlet enc = Brot.encode tokenizer \"Hello world!\"\nlet ids   = Encoding.ids enc            (* int array *)\nlet toks  = Encoding.tokens enc         (* string array *)\nlet offs  = Encoding.offsets enc        (* (int * int) array *)\nlet types = Encoding.type_ids enc       (* int array *)\nlet mask  = Encoding.attention_mask enc (* int array *)\n```\n\n### 3.2 IDs only\n\n**HuggingFace**\n\n```python\nids = tokenizer.encode(\"Hello world!\").ids\n```\n\n**Brot**\n\n<!-- $MDX skip -->\n```ocaml\nlet ids = Brot.encode_ids tokenizer \"Hello world!\"\n```\n\n`encode_ids` is a shortcut that avoids constructing the full `Encoding.t` when you only need token IDs.\n\n### 3.3 Without special 
tokens\n\n**HuggingFace**\n\n```python\noutput = tokenizer.encode(\"Hello world!\", add_special_tokens=False)\n```\n\n**Brot**\n\n<!-- $MDX skip -->\n```ocaml\nlet enc = Brot.encode tokenizer ~add_special_tokens:false \"Hello world!\"\n```\n\n---\n\n## 4. Decoding\n\n### 4.1 Basic decoding\n\n**HuggingFace**\n\n```python\ntext = tokenizer.decode([101, 7592, 2088, 999, 102])\ntext_clean = tokenizer.decode([101, 7592, 2088, 999, 102], skip_special_tokens=True)\n```\n\n**Brot**\n\n<!-- $MDX skip -->\n```ocaml\nlet text = Brot.decode tokenizer [| 101; 7592; 2088; 999; 102 |]\nlet text_clean =\n  Brot.decode tokenizer ~skip_special_tokens:true\n    [| 101; 7592; 2088; 999; 102 |]\n```\n\n### 4.2 Batch decoding\n\n**HuggingFace**\n\n```python\ntexts = tokenizer.decode_batch([[101, 7592, 102], [101, 2088, 102]])\n```\n\n**Brot**\n\n<!-- $MDX skip -->\n```ocaml\nlet texts =\n  Brot.decode_batch tokenizer\n    [ [| 101; 7592; 102 |]; [| 101; 2088; 102 |] ]\n```\n\n---\n\n## 5. Batch Encoding\n\n**HuggingFace**\n\n```python\noutputs = tokenizer.encode_batch([\"Hello world!\", \"How are you?\"])\n# outputs is a list of Encoding objects\nfor enc in outputs:\n    print(enc.ids)\n```\n\n**Brot**\n\n<!-- $MDX skip -->\n```ocaml\nlet encodings =\n  Brot.encode_batch tokenizer\n    [ \"Hello world!\"; \"How are you?\" ]\n\nlet () =\n  List.iter\n    (fun enc ->\n      let ids = Encoding.ids enc in\n      Array.iter (Printf.printf \"%d \") ids;\n      print_newline ())\n    encodings\n```\n\nBoth return a list of encoding objects, one per input.\n\n---\n\n## 6. Padding and Truncation\n\n### 6.1 Padding\n\nIn HuggingFace, padding is global state on the tokenizer. 
In Brot, it is a per-call parameter.\n\n**HuggingFace**\n\n```python\ntokenizer.enable_padding(\n    direction=\"right\",\n    pad_id=0,\n    pad_token=\"[PAD]\",\n    length=128,         # fixed length\n)\noutput = tokenizer.encode(\"Hello\")\n# output.attention_mask shows 0s for padding positions\n```\n\n**Brot**\n\n<!-- $MDX skip -->\n```ocaml\nlet pad = Brot.padding ~pad_id:0 ~pad_token:\"[PAD]\" (`Fixed 128)\nlet enc = Brot.encode tokenizer ~padding:pad \"Hello\"\n(* Encoding.attention_mask enc has 0s for padding positions *)\n```\n\nPadding strategies:\n\n| HuggingFace                             | Brot                            |\n| --------------------------------------- | ------------------------------- |\n| `length=None` (pad to longest in batch) | `` `Batch_longest ``            |\n| `length=128` (fixed)                    | `` `Fixed 128 ``                |\n| `pad_to_multiple_of=8`                  | `` `To_multiple 8 ``            |\n| `direction=\"left\"`                      | ``~direction:`Left``            |\n| `direction=\"right\"` (default)           | ``~direction:`Right`` (default) |\n\n### 6.2 Truncation\n\n**HuggingFace**\n\n```python\ntokenizer.enable_truncation(max_length=512, direction=\"right\")\noutput = tokenizer.encode(\"Very long text ...\")\n```\n\n**Brot**\n\n<!-- $MDX skip -->\n```ocaml\nlet trunc = Brot.truncation 512\nlet enc = Brot.encode tokenizer ~truncation:trunc \"Very long text ...\"\n```\n\nTruncation direction defaults to `` `Right `` in both libraries.\n\n### 6.3 Combined padding and truncation\n\n**HuggingFace**\n\n```python\ntokenizer.enable_padding(length=512, pad_token=\"[PAD]\", pad_id=0)\ntokenizer.enable_truncation(max_length=512)\noutputs = tokenizer.encode_batch(texts)\n```\n\n**Brot**\n\n<!-- $MDX skip -->\n```ocaml\nlet pad = Brot.padding ~pad_token:\"[PAD]\" ~pad_id:0 (`Fixed 512)\nlet trunc = Brot.truncation 512\nlet encodings =\n  Brot.encode_batch tokenizer ~padding:pad ~truncation:trunc texts\n```\n\nThe key\n
difference: Brot passes these as arguments, so different calls can use different settings without mutating the tokenizer.\n\n---\n\n## 7. Sentence Pairs\n\n**HuggingFace**\n\n```python\n# Single pair\noutput = tokenizer.encode(\"premise\", \"hypothesis\")\noutput.type_ids  # [0, 0, 0, 0, 1, 1, 1]  (with BERT post-processor)\n\n# Batch of pairs\noutputs = tokenizer.encode_batch([(\"premise1\", \"hyp1\"), (\"premise2\", \"hyp2\")])\n```\n\n**Brot**\n\n<!-- $MDX skip -->\n```ocaml\n(* Single pair *)\nlet enc = Brot.encode tokenizer ~pair:\"hypothesis\" \"premise\"\nlet type_ids = Encoding.type_ids enc  (* 0s for first, 1s for second *)\n\n(* Batch of pairs *)\nlet encodings =\n  Brot.encode_pairs_batch tokenizer\n    [ (\"premise1\", \"hyp1\"); (\"premise2\", \"hyp2\") ]\n```\n\nBrot uses the `~pair` optional argument on `encode` for single pairs and a dedicated `encode_pairs_batch` for batches, instead of overloading the same function with tuples.\n\n---\n\n## 8. Special Tokens\n\n### 8.1 Defining special tokens\n\n**HuggingFace**\n\n```python\nfrom tokenizers import AddedToken\n\ntokenizer.add_special_tokens([\n    AddedToken(\"[CLS]\", single_word=False, lstrip=False, rstrip=False),\n    AddedToken(\"[SEP]\", single_word=False, lstrip=False, rstrip=False),\n    AddedToken(\"[PAD]\", single_word=False, lstrip=False, rstrip=False),\n])\n```\n\n**Brot**\n\n<!-- $MDX skip -->\n```ocaml\nlet tokenizer =\n  Brot.bpe\n    ~specials:[\n      Brot.special \"[CLS]\";\n      Brot.special \"[SEP]\";\n      Brot.special \"[PAD]\";\n    ]\n    ~pad_token:\"[PAD]\"\n    ~bos_token:\"[CLS]\"\n    ~eos_token:\"[SEP]\"\n    ()\n```\n\nIn HuggingFace, special tokens are added after construction. In Brot, they are part of construction since tokenizers are immutable. 
The `special` function accepts optional `~single_word`, `~lstrip`, `~rstrip`, and `~normalized` parameters matching `AddedToken`.\n\n### 8.2 Role tokens\n\n**HuggingFace**\n\n```python\ntokenizer.pad_token       # \"[PAD]\"\ntokenizer.cls_token       # \"[CLS]\"\ntokenizer.sep_token       # \"[SEP]\"\ntokenizer.unk_token       # \"[UNK]\"\n```\n\n**Brot**\n\n<!-- $MDX skip -->\n```ocaml\nlet pad = Brot.pad_token tokenizer  (* string option *)\nlet bos = Brot.bos_token tokenizer  (* string option *)\nlet eos = Brot.eos_token tokenizer  (* string option *)\nlet unk = Brot.unk_token tokenizer  (* string option *)\n```\n\nBrot uses `bos_token`/`eos_token` instead of `cls_token`/`sep_token` since these are model-agnostic roles. They return `option` instead of raising on missing tokens.\n\n### 8.3 Special tokens mask\n\nBoth libraries provide a mask distinguishing special tokens from content tokens in the encoding:\n\n**HuggingFace**\n\n```python\noutput.special_tokens_mask  # [1, 0, 0, 0, 1]\n```\n\n**Brot**\n\n<!-- $MDX skip -->\n```ocaml\nlet mask = Encoding.special_tokens_mask enc  (* int array: 1 for special, 0 for content *)\n```\n\n---\n\n## 9. Pipeline Components\n\nBoth libraries use the same four-stage pipeline: normalizer, pre-tokenizer, post-processor, decoder. 
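\n\nFor instance, a BERT-style pipeline in Brot supplies all four stages at construction. This is a sketch assembled from the stage constructors covered in the subsections below (vocabulary arguments omitted for brevity):\n\n<!-- $MDX skip -->\n```ocaml\n(* All four pipeline stages configured up front; Brot tokenizers are\n   immutable, so the pipeline cannot be reassigned afterwards. *)\nlet tokenizer =\n  Brot.wordpiece\n    ~normalizer:(Normalizer.bert ())\n    ~pre:(Pre_tokenizer.bert ())\n    ~post:(Post_processor.bert ~sep:(\"[SEP]\", 102) ~cls:(\"[CLS]\", 101) ())\n    ~decoder:(Decoder.wordpiece ~prefix:\"##\" ())\n    ()\n```\n\n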
The difference is how they are configured.\n\n### 9.1 Normalizer\n\n**HuggingFace**\n\n```python\nfrom tokenizers import normalizers\n\ntokenizer.normalizer = normalizers.Sequence([\n    normalizers.NFD(),\n    normalizers.StripAccents(),\n    normalizers.Lowercase(),\n])\n```\n\n**Brot**\n\n<!-- $MDX skip -->\n```ocaml\nlet norm =\n  Normalizer.sequence\n    [ Normalizer.nfd; Normalizer.strip_accents; Normalizer.lowercase ]\n\nlet tokenizer = Brot.bpe ~normalizer:norm ()\n```\n\nCommon normalizers:\n\n| HuggingFace                         | Brot                                       |\n| ----------------------------------- | ------------------------------------------ |\n| `normalizers.NFC()`                 | `Normalizer.nfc`                           |\n| `normalizers.NFD()`                 | `Normalizer.nfd`                           |\n| `normalizers.NFKC()`                | `Normalizer.nfkc`                          |\n| `normalizers.NFKD()`                | `Normalizer.nfkd`                          |\n| `normalizers.Lowercase()`           | `Normalizer.lowercase`                     |\n| `normalizers.StripAccents()`        | `Normalizer.strip_accents`                 |\n| `normalizers.Strip()`               | `Normalizer.strip ()`                      |\n| `normalizers.Replace(pattern, rep)` | `Normalizer.replace ~pattern ~replacement` |\n| `normalizers.Prepend(s)`            | `Normalizer.prepend s`                     |\n| `normalizers.BertNormalizer()`      | `Normalizer.bert ()`                       |\n| `normalizers.ByteLevel()`           | `Normalizer.byte_level ()`                 |\n| `normalizers.Sequence([...])`       | `Normalizer.sequence [...]`                |\n\n### 9.2 Pre-tokenizer\n\n**HuggingFace**\n\n```python\nfrom tokenizers import pre_tokenizers\n\ntokenizer.pre_tokenizer = pre_tokenizers.Sequence([\n    pre_tokenizers.WhitespaceSplit(),\n    pre_tokenizers.Punctuation(),\n])\n```\n\n**Brot**\n\n<!-- $MDX skip -->\n```ocaml\nlet pre 
=\n  Pre_tokenizer.sequence\n    [ Pre_tokenizer.whitespace_split ();\n      Pre_tokenizer.punctuation () ]\n\nlet tokenizer = Brot.bpe ~pre ()\n```\n\nCommon pre-tokenizers:\n\n| HuggingFace                            | Brot                                |\n| -------------------------------------- | ----------------------------------- |\n| `pre_tokenizers.Whitespace()`          | `Pre_tokenizer.whitespace ()`       |\n| `pre_tokenizers.WhitespaceSplit()`     | `Pre_tokenizer.whitespace_split ()` |\n| `pre_tokenizers.BertPreTokenizer()`    | `Pre_tokenizer.bert ()`             |\n| `pre_tokenizers.ByteLevel()`           | `Pre_tokenizer.byte_level ()`       |\n| `pre_tokenizers.Punctuation()`         | `Pre_tokenizer.punctuation ()`      |\n| `pre_tokenizers.Digits()`              | `Pre_tokenizer.digits ()`           |\n| `pre_tokenizers.Metaspace()`           | `Pre_tokenizer.metaspace ()`        |\n| `pre_tokenizers.UnicodeScripts()`      | `Pre_tokenizer.unicode_scripts ()`  |\n| `pre_tokenizers.CharDelimiterSplit(c)` | `Pre_tokenizer.char_delimiter c`    |\n| `pre_tokenizers.Split(pattern, ...)`   | `Pre_tokenizer.split ~pattern ()`   |\n| `pre_tokenizers.Sequence([...])`       | `Pre_tokenizer.sequence [...]`      |\n\n### 9.3 Post-processor\n\n**HuggingFace**\n\n```python\nfrom tokenizers import processors\n\ntokenizer.post_processor = processors.BertProcessing(\n    sep=(\"[SEP]\", 102),\n    cls=(\"[CLS]\", 101),\n)\n```\n\n**Brot**\n\n<!-- $MDX skip -->\n```ocaml\nlet post =\n  Post_processor.bert\n    ~sep:(\"[SEP]\", 102)\n    ~cls:(\"[CLS]\", 101)\n    ()\n\nlet tokenizer = Brot.bpe ~post ()\n```\n\nCommon post-processors:\n\n| HuggingFace                                                   | Brot                                                       |\n| ------------------------------------------------------------- | ---------------------------------------------------------- |\n| `processors.BertProcessing(sep, cls)`                         | 
`Post_processor.bert ~sep ~cls ()`                         |\n| `processors.RobertaProcessing(sep, cls)`                      | `Post_processor.roberta ~sep ~cls ()`                      |\n| `processors.ByteLevel()`                                      | `Post_processor.byte_level ()`                             |\n| `processors.TemplateProcessing(single, pair, special_tokens)` | `Post_processor.template ~single ?pair ~special_tokens ()` |\n| `processors.Sequence([...])`                                  | `Post_processor.sequence [...]`                            |\n\n### 9.4 Decoder\n\n**HuggingFace**\n\n```python\nfrom tokenizers import decoders\n\ntokenizer.decoder = decoders.WordPiece(prefix=\"##\")\n```\n\n**Brot**\n\n<!-- $MDX skip -->\n```ocaml\nlet dec = Decoder.wordpiece ~prefix:\"##\" ()\nlet tokenizer = Brot.wordpiece ~decoder:dec ()\n```\n\nCommon decoders:\n\n| HuggingFace                     | Brot                              |\n| ------------------------------- | --------------------------------- |\n| `decoders.BPEDecoder(suffix)`   | `Decoder.bpe ~suffix ()`          |\n| `decoders.ByteLevel()`          | `Decoder.byte_level ()`           |\n| `decoders.ByteFallback()`       | `Decoder.byte_fallback ()`        |\n| `decoders.WordPiece(prefix)`    | `Decoder.wordpiece ~prefix ()`    |\n| `decoders.Metaspace()`          | `Decoder.metaspace ()`            |\n| `decoders.CTC()`                | `Decoder.ctc ()`                  |\n| `decoders.Replace(pattern, by)` | `Decoder.replace ~pattern ~by ()` |\n| `decoders.Strip()`              | `Decoder.strip ()`                |\n| `decoders.Fuse()`               | `Decoder.fuse ()`                 |\n| `decoders.Sequence([...])`      | `Decoder.sequence [...]`          |\n\n### 9.5 Inspecting the pipeline\n\n**HuggingFace**\n\n```python\ntokenizer.normalizer\ntokenizer.pre_tokenizer\ntokenizer.post_processor\ntokenizer.decoder\n```\n\n**Brot**\n\n<!-- $MDX skip -->\n```ocaml\nlet norm = Brot.normalizer 
tokenizer     (* Normalizer.t option *)\nlet pre  = Brot.pre_tokenizer tokenizer  (* Pre_tokenizer.t option *)\nlet post = Brot.post_processor tokenizer (* Post_processor.t option *)\nlet dec  = Brot.decoder tokenizer        (* Decoder.t option *)\n```\n\nBrot returns `option` for each stage, since any stage can be absent.\n\n---\n\n## 10. Training Tokenizers\n\n### 10.1 BPE training\n\n**HuggingFace**\n\n```python\nfrom tokenizers import Tokenizer\nfrom tokenizers.models import BPE\nfrom tokenizers.trainers import BpeTrainer\n\ntokenizer = Tokenizer(BPE())\ntrainer = BpeTrainer(\n    vocab_size=30000,\n    min_frequency=2,\n    special_tokens=[\"[UNK]\", \"[CLS]\", \"[SEP]\", \"[PAD]\"],\n)\ntokenizer.train([\"corpus.txt\"], trainer)\n```\n\n**Brot**\n\n<!-- $MDX skip -->\n```ocaml\nlet tokenizer =\n  Brot.train_bpe\n    (`Files [ \"corpus.txt\" ])\n    ~vocab_size:30000\n    ~min_frequency:2\n    ~specials:[\n      Brot.special \"[UNK]\";\n      Brot.special \"[CLS]\";\n      Brot.special \"[SEP]\";\n      Brot.special \"[PAD]\";\n    ]\n    ~unk_token:\"[UNK]\"\n    ~pad_token:\"[PAD]\"\n```\n\nBrot combines the `Tokenizer` + `Trainer` pattern into a single function call. 
Training data is passed as `` `Files `` (file paths) or `` `Seq `` (string sequence).\n\n### 10.2 WordPiece training\n\n**HuggingFace**\n\n```python\nfrom tokenizers.models import WordPiece\nfrom tokenizers.trainers import WordPieceTrainer\n\ntokenizer = Tokenizer(WordPiece(unk_token=\"[UNK]\"))\ntrainer = WordPieceTrainer(vocab_size=30000, special_tokens=[\"[UNK]\", \"[PAD]\"])\ntokenizer.train([\"corpus.txt\"], trainer)\n```\n\n**Brot**\n\n<!-- $MDX skip -->\n```ocaml\nlet tokenizer =\n  Brot.train_wordpiece\n    (`Files [ \"corpus.txt\" ])\n    ~vocab_size:30000\n    ~unk_token:\"[UNK]\"\n    ~specials:[ Brot.special \"[UNK]\"; Brot.special \"[PAD]\" ]\n    ~pad_token:\"[PAD]\"\n```\n\n### 10.3 Unigram training\n\n**HuggingFace**\n\n```python\nfrom tokenizers.models import Unigram\nfrom tokenizers.trainers import UnigramTrainer\n\ntokenizer = Tokenizer(Unigram())\ntrainer = UnigramTrainer(vocab_size=8000, special_tokens=[\"<unk>\", \"<pad>\"])\ntokenizer.train([\"corpus.txt\"], trainer)\n```\n\n**Brot**\n\n<!-- $MDX skip -->\n```ocaml\nlet tokenizer =\n  Brot.train_unigram\n    (`Files [ \"corpus.txt\" ])\n    ~vocab_size:8000\n    ~unk_token:\"<unk>\"\n    ~specials:[ Brot.special \"<unk>\"; Brot.special \"<pad>\" ]\n    ~pad_token:\"<pad>\"\n```\n\n### 10.4 Training from in-memory data\n\n**HuggingFace**\n\n```python\nfrom tokenizers import Tokenizer\nfrom tokenizers.models import BPE\nfrom tokenizers.trainers import BpeTrainer\n\ntokenizer = Tokenizer(BPE())\ntrainer = BpeTrainer(vocab_size=1000)\ntokenizer.train_from_iterator(\n    [\"Hello world\", \"How are you?\", \"Hello again\"],\n    trainer,\n)\n```\n\n**Brot**\n\n<!-- $MDX skip -->\n```ocaml\nlet texts = [ \"Hello world\"; \"How are you?\"; \"Hello again\" ]\nlet tokenizer =\n  Brot.train_bpe (`Seq (List.to_seq texts)) ~vocab_size:1000\n```\n\n### 10.5 Extending an existing tokenizer\n\n**HuggingFace**\n\n```python\n# Load, then retrain with more data\ntokenizer = 
Tokenizer.from_file(\"tokenizer.json\")\ntrainer = BpeTrainer(vocab_size=50000)\ntokenizer.train([\"more_data.txt\"], trainer)\n```\n\n**Brot**\n\n<!-- $MDX skip -->\n```ocaml\nlet base = Brot.from_file \"tokenizer.json\" |> Result.get_ok\nlet tokenizer =\n  Brot.train_bpe ~init:base (`Files [ \"more_data.txt\" ]) ~vocab_size:50000\n```\n\nThe `~init` parameter on training functions lets you extend an existing tokenizer with additional data.\n\n---\n\n## 11. Vocabulary Inspection\n\n**HuggingFace**\n\n```python\ntokenizer.get_vocab()                 # dict: token -> id\ntokenizer.get_vocab_size()            # int\ntokenizer.token_to_id(\"[CLS]\")        # int or None\ntokenizer.id_to_token(101)            # str or None\n```\n\n**Brot**\n\n<!-- $MDX skip -->\n```ocaml\nlet v     = Brot.vocab tokenizer         (* (string * int) list *)\nlet size  = Brot.vocab_size tokenizer    (* int *)\nlet id    = Brot.token_to_id tokenizer \"[CLS]\"  (* int option *)\nlet token = Brot.id_to_token tokenizer 101       (* string option *)\n```\n\n`vocab` returns an association list instead of a dictionary. `token_to_id` and `id_to_token` return `option` instead of nullable values.\n\n---\n\n## 12. 
Quick Cheat Sheet\n\n| Task                | HuggingFace Tokenizers                                      | Brot                                                             |\n| ------------------- | ----------------------------------------------------------- | ---------------------------------------------------------------- |\n| Load from file      | `Tokenizer.from_file(\"tokenizer.json\")`                     | `Brot.from_file \"tokenizer.json\"`                                |\n| Save to file        | `tokenizer.save(\"tokenizer.json\")`                          | `Brot.save_pretrained tokenizer ~path:\"./out\"`                   |\n| Encode text         | `tokenizer.encode(\"Hello\")`                                 | `Brot.encode tokenizer \"Hello\"`                                  |\n| Encode IDs only     | `tokenizer.encode(\"Hello\").ids`                             | `Brot.encode_ids tokenizer \"Hello\"`                              |\n| Encode batch        | `tokenizer.encode_batch([\"a\", \"b\"])`                        | `Brot.encode_batch tokenizer [\"a\"; \"b\"]`                         |\n| Encode pair         | `tokenizer.encode(\"a\", \"b\")`                                | `Brot.encode tokenizer ~pair:\"b\" \"a\"`                            |\n| Encode pairs batch  | `tokenizer.encode_batch([(\"a\",\"b\"), ...])`                  | `Brot.encode_pairs_batch tokenizer [(\"a\",\"b\"); ...]`             |\n| Decode              | `tokenizer.decode(ids)`                                     | `Brot.decode tokenizer ids`                                      |\n| Decode batch        | `tokenizer.decode_batch([ids1, ids2])`                      | `Brot.decode_batch tokenizer [ids1; ids2]`                       |\n| Get token IDs       | `output.ids`                                                | `Encoding.ids enc`                                               |\n| Get tokens          | `output.tokens`                                             | 
`Encoding.tokens enc`                                            |\n| Get attention mask  | `output.attention_mask`                                     | `Encoding.attention_mask enc`                                    |\n| Get type IDs        | `output.type_ids`                                           | `Encoding.type_ids enc`                                          |\n| Get offsets         | `output.offsets`                                            | `Encoding.offsets enc`                                           |\n| Padding             | `tokenizer.enable_padding(length=128)`                      | ``Brot.encode tokenizer ~padding:(Brot.padding (`Fixed 128)) ...`` |\n| Truncation          | `tokenizer.enable_truncation(max_length=512)`               | `Brot.encode tokenizer ~truncation:(Brot.truncation 512) ...`    |\n| Vocab size          | `tokenizer.get_vocab_size()`                                | `Brot.vocab_size tokenizer`                                      |\n| Token to ID         | `tokenizer.token_to_id(\"[CLS]\")`                            | `Brot.token_to_id tokenizer \"[CLS]\"`                             |\n| ID to token         | `tokenizer.id_to_token(101)`                                | `Brot.id_to_token tokenizer 101`                                 |\n| Train BPE           | `tokenizer.train(files, BpeTrainer(...))`                   | ``Brot.train_bpe (`Files files) ~vocab_size:30000``              |\n| Train WordPiece     | `tokenizer.train(files, WordPieceTrainer(...))`             | ``Brot.train_wordpiece (`Files files) ~vocab_size:30000``        |\n| Train Unigram       | `tokenizer.train(files, UnigramTrainer(...))`               | ``Brot.train_unigram (`Files files) ~vocab_size:8000``           |\n| Train from iterator | `tokenizer.train_from_iterator(iter, trainer)`              | ``Brot.train_bpe (`Seq seq) ~vocab_size:1000``                   |\n| Set normalizer      | `tokenizer.normalizer = normalizers.Lowercase()`        
    | `Brot.bpe ~normalizer:Normalizer.lowercase ()`                   |\n| Set pre-tokenizer   | `tokenizer.pre_tokenizer = pre_tokenizers.ByteLevel()`      | `Brot.bpe ~pre:(Pre_tokenizer.byte_level ()) ()`                 |\n| Set post-processor  | `tokenizer.post_processor = processors.BertProcessing(...)` | `Brot.bpe ~post:(Post_processor.bert ~sep ~cls ()) ()`           |\n| Set decoder         | `tokenizer.decoder = decoders.WordPiece()`                  | `Brot.bpe ~decoder:(Decoder.wordpiece ()) ()`                    |\n| Add special tokens  | `tokenizer.add_special_tokens([AddedToken(...)])`           | Pass `~specials:[Brot.special \"...\"; ...]` at construction       |\n"
  },
  {
    "path": "packages/brot/doc/dune",
    "content": "(mdx\n (files *.md)\n (package brot)\n (libraries brot))\n"
  },
  {
    "path": "packages/brot/doc/index.md",
    "content": "# Brot\n\nBrot tokenizes text into token IDs for language models and reverses the\nprocess. It supports BPE, WordPiece, Unigram, word-level, and\ncharacter-level algorithms, loads and saves HuggingFace `tokenizer.json`\nfiles, and is 1.3-6x faster than HuggingFace tokenizers on most\nbenchmarks.\n\n## Features\n\n- **Tokenization algorithms**: BPE, WordPiece, Unigram, word-level, character-level\n- **HuggingFace compatible**: load and save `tokenizer.json`, load vocab/merges model files\n- **Composable pipeline**: normalizer, pre-tokenizer, post-processor, decoder — each stage independently configurable\n- **Rich encoding output**: token IDs, string tokens, byte offsets, attention masks, type IDs, word IDs, special token masks\n- **Training**: train BPE, WordPiece, Unigram, and word-level tokenizers from scratch\n- **Performance**: 1.3-6x faster than HuggingFace tokenizers (Rust native)\n\n## Quick Start\n\nBuild a BPE tokenizer from a vocabulary and merge rules, encode text,\nand decode it back:\n\n```ocaml\nopen Brot\n\nlet tokenizer =\n  bpe\n    ~vocab:\n      [ (\"h\", 0); (\"e\", 1); (\"l\", 2); (\"o\", 3); (\" \", 4); (\"w\", 5);\n        (\"r\", 6); (\"d\", 7); (\"he\", 8); (\"ll\", 9); (\"llo\", 10);\n        (\"hello\", 11); (\"wo\", 12); (\"rl\", 13); (\"rld\", 14); (\"world\", 15) ]\n    ~merges:\n      [ (\"h\", \"e\"); (\"l\", \"l\"); (\"ll\", \"o\"); (\"he\", \"llo\");\n        (\"w\", \"o\"); (\"r\", \"l\"); (\"rl\", \"d\"); (\"wo\", \"rld\") ]\n    ()\n\nlet encoding = encode tokenizer \"hello world\"\nlet ids = Encoding.ids encoding         (* [| 11; 4; 15 |] *)\nlet tokens = Encoding.tokens encoding   (* [| \"hello\"; \" \"; \"world\" |] *)\nlet decoded = decode tokenizer ids      (* \"hello world\" *)\n```\n\nLoad a pretrained tokenizer from a HuggingFace `tokenizer.json` file:\n\n<!-- $MDX skip -->\n```ocaml\nopen Brot\n\nlet tokenizer = from_file \"tokenizer.json\" |> Result.get_ok\nlet encoding = encode tokenizer \"Hello 
world!\"\nlet ids = Encoding.ids encoding\n```\n\nTrain a tokenizer from a text corpus:\n\n```ocaml\nopen Brot\n\nlet tokenizer =\n  train_bpe ~vocab_size:100 ~show_progress:false\n    (`Seq (List.to_seq\n       [ \"The quick brown fox jumps over the lazy dog\";\n         \"The dog barked at the fox\";\n         \"Quick brown foxes are rare\" ]))\n\nlet size = vocab_size tokenizer\nlet ids = encode_ids tokenizer \"The quick fox\"\n```\n\n## Next Steps\n\n- [Getting Started](01-getting-started/) — encode, decode, pipeline basics, training\n- [The Tokenization Pipeline](02-pipeline/) — how the 5 pipeline stages work\n- [Pretrained Tokenizers](03-pretrained/) — loading, saving, and building known model pipelines\n- [Batch Processing](04-batch-processing/) — padding, truncation, encoding metadata\n- [Choosing an Algorithm](05-algorithms/) — BPE vs WordPiece vs Unigram and when to use each\n"
  },
  {
    "path": "packages/brot/examples/01-encode-decode/README.md",
    "content": "# `01-encode-decode`\n\nYour first tokenizer. This example shows the minimal steps to encode text into\ntoken IDs and decode back.\n\n```bash\ndune exec brot/examples/01-encode-decode/main.exe\n```\n\n## What You'll Learn\n\n- Creating a BPE tokenizer with `Brot.bpe`\n- Encoding text with `Brot.encode`\n- Inspecting token strings and IDs with `Encoding.tokens` and `Encoding.ids`\n- Decoding token IDs back to text with `Brot.decode`\n\n## Key Functions\n\n| Function          | Purpose                                                |\n| ----------------- | ------------------------------------------------------ |\n| `bpe`             | Create a BPE tokenizer from vocabulary and merge rules |\n| `encode`          | Encode text into an `Encoding.t`                       |\n| `Encoding.ids`    | Get the integer token IDs                              |\n| `Encoding.tokens` | Get the string token representations                   |\n| `decode`          | Convert token IDs back to text                         |\n\n## How BPE Works\n\nBPE (Byte Pair Encoding) iteratively merges the most frequent character pairs.\nGiven the text `\"hello\"` and merge rules like `(\"h\",\"e\")`, `(\"l\",\"l\")`,\n`(\"he\",\"l\")`, `(\"ll\",\"o\")`, `(\"hel\",\"lo\")`, BPE applies merges in priority\norder until no more merges apply, producing `\"hello\"` as a single token.\n\n## Try It\n\n1. Remove some merge rules and run again to see how the text gets split into\n   smaller subword pieces.\n2. Add a new word like `\"held\"` to the vocabulary and encode `\"hello held\"`.\n\n## Next Steps\n\nContinue to [02-encoding-fields](../02-encoding-fields/) to learn about all the\nmetadata in an encoding.\n"
  },
  {
    "path": "packages/brot/examples/01-encode-decode/dune",
    "content": "(executable\n (name main)\n (libraries brot))\n"
  },
  {
    "path": "packages/brot/examples/01-encode-decode/main.ml",
    "content": "(* Encode and decode.\n\n   The simplest possible tokenization: convert text to token IDs and back.\n   Demonstrates creating a BPE tokenizer from an inline vocabulary and merge\n   rules, encoding text, inspecting tokens and IDs, and decoding. *)\n\nopen Brot\n\nlet () =\n  (* Build a small BPE tokenizer. The vocabulary maps token strings to IDs.\n     Merge rules define which character pairs to combine, in priority order. *)\n  let vocab =\n    [\n      (\"h\", 0);\n      (\"e\", 1);\n      (\"l\", 2);\n      (\"o\", 3);\n      (\" \", 4);\n      (\"w\", 5);\n      (\"r\", 6);\n      (\"d\", 7);\n      (\"he\", 8);\n      (\"ll\", 9);\n      (\"llo\", 10);\n      (\"hello\", 11);\n      (\"wo\", 12);\n      (\"rl\", 13);\n      (\"rld\", 14);\n      (\"world\", 15);\n    ]\n  in\n  let merges =\n    [\n      (\"h\", \"e\");\n      (\"l\", \"l\");\n      (\"ll\", \"o\");\n      (\"he\", \"llo\");\n      (\"w\", \"o\");\n      (\"r\", \"l\");\n      (\"rl\", \"d\");\n      (\"wo\", \"rld\");\n    ]\n  in\n  let tokenizer = bpe ~vocab ~merges () in\n\n  (* Encode text into an Encoding *)\n  let text = \"hello world\" in\n  let encoding = encode tokenizer text in\n  let ids = Encoding.ids encoding in\n  let tokens = Encoding.tokens encoding in\n\n  Printf.printf \"Text:    %S\\n\" text;\n  Printf.printf \"Tokens:  [%s]\\n\"\n    (String.concat \"; \"\n       (List.map (fun s -> Printf.sprintf \"%S\" s) (Array.to_list tokens)));\n  Printf.printf \"IDs:     [%s]\\n\"\n    (String.concat \"; \" (Array.to_list (Array.map string_of_int ids)));\n\n  (* Decode token IDs back to text *)\n  let decoded = decode tokenizer ids in\n  Printf.printf \"Decoded: %S\\n\\n\" decoded;\n\n  Printf.printf \"Round-trip matches: %b\\n\\n\" (String.equal text decoded);\n\n  (* Try another text -- unknown characters become individual tokens *)\n  let text2 = \"hello\" in\n  let enc2 = encode tokenizer text2 in\n  Printf.printf \"Text:    %S\\n\" text2;\n  Printf.printf 
\"Tokens:  [%s]\\n\"\n    (String.concat \"; \"\n       (List.map\n          (fun s -> Printf.sprintf \"%S\" s)\n          (Array.to_list (Encoding.tokens enc2))));\n  Printf.printf \"IDs:     [%s]\\n\"\n    (String.concat \"; \"\n       (Array.to_list (Array.map string_of_int (Encoding.ids enc2))))\n"
  },
  {
    "path": "packages/brot/examples/02-encoding-fields/README.md",
    "content": "# `02-encoding-fields`\n\nUnderstanding encodings. An `Encoding.t` bundles token IDs with alignment\nmetadata: byte offsets, word indices, type IDs, attention masks, and\nspecial-token flags.\n\n```bash\ndune exec brot/examples/02-encoding-fields/main.exe\n```\n\n## What You'll Learn\n\n- All parallel arrays in an `Encoding.t` and how they align\n- Byte offsets that map each token back to the original text\n- Word indices that group subword tokens by source word\n- Attention mask (1 = real token, 0 = padding)\n- Special tokens mask (1 = special, 0 = content)\n\n## Key Functions\n\n| Function                       | Purpose                                           |\n| ------------------------------ | ------------------------------------------------- |\n| `Encoding.ids`                 | Token ID array for model input                    |\n| `Encoding.tokens`              | String representation of each token               |\n| `Encoding.offsets`             | `(start, end)` byte spans in the original text    |\n| `Encoding.word_ids`            | Source word index per token (`None` for specials) |\n| `Encoding.type_ids`            | Segment IDs (0 or 1 for sentence pairs)           |\n| `Encoding.attention_mask`      | 1 for real tokens, 0 for padding                  |\n| `Encoding.special_tokens_mask` | 1 for special tokens, 0 for content               |\n| `Encoding.length`              | Number of tokens                                  |\n\n## Offsets\n\nOffsets are byte positions `(start, end)` into the original text. You can\nextract the original substring with `String.sub text start (end - start)`.\nThis is essential for highlighting, named entity recognition, and other tasks\nthat need to map tokens back to source text.\n\n## Try It\n\n1. Add more words to the vocabulary and encode a longer sentence.\n2. 
Encode a text with unknown words and observe the `[UNK]` token.\n\n## Next Steps\n\nContinue to [03-normalizers](../03-normalizers/) to learn how text is cleaned\nbefore tokenization.\n"
  },
  {
    "path": "packages/brot/examples/02-encoding-fields/dune",
    "content": "(executable\n (name main)\n (libraries brot))\n"
  },
  {
    "path": "packages/brot/examples/02-encoding-fields/main.ml",
    "content": "(* Understanding encodings.\n\n   An Encoding bundles token IDs with alignment metadata: byte offsets, word\n   indices, segment type IDs, attention masks, and special-token flags. All\n   arrays share the same length. *)\n\nopen Brot\n\nlet print_encoding enc =\n  let ids = Encoding.ids enc in\n  let tokens = Encoding.tokens enc in\n  let offsets = Encoding.offsets enc in\n  let word_ids = Encoding.word_ids enc in\n  let type_ids = Encoding.type_ids enc in\n  let attn = Encoding.attention_mask enc in\n  let special = Encoding.special_tokens_mask enc in\n\n  Printf.printf \"%-6s %-10s %-4s %-12s %-8s %-8s %-6s %-8s\\n\" \"Index\" \"Token\"\n    \"ID\" \"Offsets\" \"Word_ID\" \"Type_ID\" \"Attn\" \"Special\";\n  Printf.printf \"%s\\n\" (String.make 66 '-');\n\n  for i = 0 to Encoding.length enc - 1 do\n    let s, e = offsets.(i) in\n    let word =\n      match word_ids.(i) with Some w -> string_of_int w | None -> \"-\"\n    in\n    Printf.printf \"%-6d %-10s %-4d (%2d, %2d)     %-8s %-8d %-6d %-8d\\n\" i\n      tokens.(i) ids.(i) s e word type_ids.(i) attn.(i) special.(i)\n  done\n\nlet () =\n  (* Word-level tokenizer: each word maps to one token *)\n  let vocab =\n    [\n      (\"[UNK]\", 0);\n      (\"hello\", 1);\n      (\"world\", 2);\n      (\"the\", 3);\n      (\"is\", 4);\n      (\"great\", 5);\n    ]\n  in\n  let tokenizer =\n    word_level ~vocab ~unk_token:\"[UNK]\" ~pre:(Pre_tokenizer.whitespace ()) ()\n  in\n\n  let text = \"hello world is great\" in\n  Printf.printf \"Text: %S\\n\" text;\n  Printf.printf \"Length: %d tokens\\n\\n\"\n    (Encoding.length (encode tokenizer text));\n  print_encoding (encode tokenizer text);\n\n  (* Show what happens with unknown words *)\n  Printf.printf \"\\n--- Unknown words ---\\n\\n\";\n  let text2 = \"hello universe\" in\n  Printf.printf \"Text: %S\\n\" text2;\n  Printf.printf \"Length: %d tokens\\n\\n\"\n    (Encoding.length (encode tokenizer text2));\n  print_encoding (encode tokenizer text2);\n\n  
(* WordPiece: subword tokens have word_ids linking to the source word *)\n  Printf.printf \"\\n--- Subword tokens (WordPiece) ---\\n\\n\";\n  let wp_vocab =\n    [\n      (\"[UNK]\", 0);\n      (\"play\", 1);\n      (\"##ing\", 2);\n      (\"##ed\", 3);\n      (\"un\", 4);\n      (\"##happy\", 5);\n    ]\n  in\n  let wp = wordpiece ~vocab:wp_vocab ~unk_token:\"[UNK]\" () in\n  let text3 = \"playing\" in\n  Printf.printf \"Text: %S\\n\" text3;\n  Printf.printf \"Length: %d tokens\\n\\n\" (Encoding.length (encode wp text3));\n  print_encoding (encode wp text3)\n"
  },
  {
    "path": "packages/brot/examples/03-normalizers/README.md",
    "content": "# `03-normalizers`\n\nText normalization before tokenization. Normalizers clean and standardize text\nso that surface variations (case, accents, whitespace) don't prevent vocabulary\nmatches.\n\n```bash\ndune exec brot/examples/03-normalizers/main.exe\n```\n\n## What You'll Learn\n\n- Unicode normalization: `nfc`, `nfkc`\n- Text transforms: `lowercase`, `strip_accents`, `strip`, `replace`, `prepend`\n- Model-specific normalization: `bert`\n- Composing normalizers with `sequence`\n- Applying normalizers directly with `Normalizer.apply`\n- How normalization affects tokenization results\n\n## Key Functions\n\n| Function                   | Purpose                            |\n| -------------------------- | ---------------------------------- |\n| `Normalizer.nfc` / `nfkc`  | Unicode normalization forms        |\n| `Normalizer.lowercase`     | Unicode case folding               |\n| `Normalizer.strip_accents` | Remove combining marks             |\n| `Normalizer.strip`         | Strip boundary whitespace          |\n| `Normalizer.replace`       | Regex-based replacement            |\n| `Normalizer.prepend`       | Prepend a string to non-empty text |\n| `Normalizer.bert`          | BERT-specific normalizer           |\n| `Normalizer.sequence`      | Compose normalizers left-to-right  |\n| `Normalizer.apply`         | Apply a normalizer to a string     |\n\n## Why Normalize?\n\nWithout normalization, `\"Hello\"`, `\"hello\"`, and `\"HELLO\"` are three different\ntokens. Normalization maps them all to `\"hello\"` so a single vocabulary entry\ncovers all cases. Similarly, `\"caf\\u{00E9}\"` and `\"cafe\"` can be unified by\nstripping accents.\n\n## Try It\n\n1. Add `Normalizer.nfkd` and see how it differs from `nfd`.\n2. Create a normalizer that replaces email addresses with `<EMAIL>`.\n3. 
Try the BERT normalizer with Chinese characters.\n\n## Next Steps\n\nContinue to [04-pre-tokenizers](../04-pre-tokenizers/) to learn how text is\nsplit into fragments before vocabulary lookup.\n"
  },
  {
    "path": "packages/brot/examples/03-normalizers/dune",
    "content": "(executable\n (name main)\n (libraries brot))\n"
  },
  {
    "path": "packages/brot/examples/03-normalizers/main.ml",
    "content": "(* Text normalization.\n\n   Normalizers transform text before tokenization: lowercasing, accent removal,\n   Unicode normalization, whitespace cleanup, and model-specific preprocessing.\n   They are the first stage in the tokenization pipeline. *)\n\nopen Brot\n\nlet show name norm text =\n  let result = Normalizer.apply norm text in\n  Printf.printf \"  %-20s %S -> %S\\n\" name text result\n\nlet () =\n  Printf.printf \"=== Unicode Normalization ===\\n\\n\";\n  show \"nfc\" Normalizer.nfc \"caf\\xc3\\xa9\";\n  show \"nfkc\" Normalizer.nfkc \"\\xef\\xac\\x81\";\n\n  (* fi ligature -> fi *)\n  Printf.printf \"\\n=== Text Transforms ===\\n\\n\";\n  show \"lowercase\" Normalizer.lowercase \"Hello WORLD\";\n  show \"strip_accents\" Normalizer.strip_accents\n    \"caf\\xc3\\xa9 r\\xc3\\xa9sum\\xc3\\xa9\";\n  show \"strip\" (Normalizer.strip ()) \"  hello  \";\n  show \"replace\"\n    (Normalizer.replace ~pattern:\"\\\\d+\" ~replacement:\"<NUM>\")\n    \"I have 42 apples and 3 oranges\";\n  show \"prepend\" (Normalizer.prepend \">> \") \"hello\";\n\n  Printf.printf \"\\n=== Model-specific ===\\n\\n\";\n  show \"bert (default)\" (Normalizer.bert ()) \"Hello  WORLD!\";\n  show \"bert (no lower)\" (Normalizer.bert ~lowercase:false ()) \"Hello  WORLD!\";\n\n  Printf.printf \"\\n=== Composition ===\\n\\n\";\n  let composed =\n    Normalizer.sequence\n      [ Normalizer.nfd; Normalizer.strip_accents; Normalizer.lowercase ]\n  in\n  show \"nfd+strip+lower\" composed \"Caf\\xc3\\xa9 R\\xc3\\xa9sum\\xc3\\xa9\";\n  show \"nfd+strip+lower\" composed \"HELLO\";\n\n  Printf.printf \"\\n=== Effect on Tokenization ===\\n\\n\";\n  let vocab =\n    [ (\"hello\", 0); (\"world\", 1); (\"cafe\", 2); (\"resume\", 3); (\"<unk>\", 4) ]\n  in\n  let no_norm =\n    word_level ~vocab ~unk_token:\"<unk>\" ~pre:(Pre_tokenizer.whitespace ()) ()\n  in\n  let with_norm =\n    word_level ~vocab ~unk_token:\"<unk>\"\n      ~pre:(Pre_tokenizer.whitespace ())\n      ~normalizer:composed 
()\n  in\n\n  let text = \"HELLO Caf\\xc3\\xa9\" in\n  let enc1 = encode no_norm text in\n  let enc2 = encode with_norm text in\n  Printf.printf \"  Text: %S\\n\" text;\n  Printf.printf \"  Without normalizer: [%s]\\n\"\n    (String.concat \"; \"\n       (List.map\n          (fun s -> Printf.sprintf \"%S\" s)\n          (Array.to_list (Encoding.tokens enc1))));\n  Printf.printf \"  With normalizer:    [%s]\\n\"\n    (String.concat \"; \"\n       (List.map\n          (fun s -> Printf.sprintf \"%S\" s)\n          (Array.to_list (Encoding.tokens enc2))))\n"
  },
  {
    "path": "packages/brot/examples/04-pre-tokenizers/README.md",
    "content": "# `04-pre-tokenizers`\n\nPre-tokenization: splitting text into fragments before vocabulary lookup. Each\nfragment carries byte offsets into the original text.\n\n```bash\ndune exec brot/examples/04-pre-tokenizers/main.exe\n```\n\n## What You'll Learn\n\n- Common pre-tokenizers: `whitespace`, `whitespace_split`, `bert`\n- Punctuation and digit handling\n- Delimiter-based splitting: `char_delimiter`, `split`, `fixed_length`\n- SentencePiece-style `metaspace`\n- Composing pre-tokenizers with `sequence`\n- Using `Pre_tokenizer.pre_tokenize` to see fragments and offsets\n\n## Key Functions\n\n| Function                         | Purpose                                    |\n| -------------------------------- | ------------------------------------------ |\n| `Pre_tokenizer.whitespace`       | Pattern-based: `\\w+` and `[^\\w\\s]+` groups |\n| `Pre_tokenizer.whitespace_split` | Simple whitespace splitting                |\n| `Pre_tokenizer.bert`             | BERT-style: whitespace + punctuation + CJK |\n| `Pre_tokenizer.punctuation`      | Isolate punctuation from words             |\n| `Pre_tokenizer.digits`           | Split on digit boundaries                  |\n| `Pre_tokenizer.char_delimiter`   | Split on a single character                |\n| `Pre_tokenizer.split`            | Split on a literal string pattern          |\n| `Pre_tokenizer.fixed_length`     | Fixed-length character chunks              |\n| `Pre_tokenizer.metaspace`        | Replace spaces with visible markers        |\n| `Pre_tokenizer.sequence`         | Chain pre-tokenizers left-to-right         |\n| `Pre_tokenizer.pre_tokenize`     | Apply and get `(fragment, offsets)` list   |\n\n## Pre-tokenizer vs Tokenizer\n\nPre-tokenization happens *before* the vocabulary-based algorithm (BPE,\nWordPiece, etc.). It determines the boundaries within which subword splitting\noperates. 
For example, with whitespace pre-tokenization, BPE will never merge\ntokens across word boundaries.\n\n## Try It\n\n1. Try `unicode_scripts` on text mixing Latin and CJK characters.\n2. Change the punctuation `behavior` to `` `Merged_with_previous `` or\n   `` `Removed ``.\n3. Create a pre-tokenizer that splits on hyphens.\n\n## Next Steps\n\nContinue to [05-algorithms](../05-algorithms/) to see how different\ntokenization algorithms split the same text.\n"
  },
  {
    "path": "packages/brot/examples/04-pre-tokenizers/dune",
    "content": "(executable\n (name main)\n (libraries brot))\n"
  },
  {
    "path": "packages/brot/examples/04-pre-tokenizers/main.ml",
    "content": "(* Pre-tokenization.\n\n   Pre-tokenizers split text into fragments before vocabulary-based\n   tokenization. Each fragment carries byte offsets into the original text.\n   Different strategies produce different splits, affecting how subword\n   algorithms see the input. *)\n\nopen Brot\n\nlet show name pre text =\n  let result = Pre_tokenizer.pre_tokenize pre text in\n  Printf.printf \"  %-24s %S\\n\" name text;\n  List.iter\n    (fun (fragment, (s, e)) -> Printf.printf \"    %S (%d, %d)\\n\" fragment s e)\n    result;\n  print_newline ()\n\nlet () =\n  let text = \"Hello, world! It's 2026.\" in\n  Printf.printf \"=== Common Pre-tokenizers ===\\n\\n\";\n  Printf.printf \"Text: %S\\n\\n\" text;\n\n  show \"whitespace\" (Pre_tokenizer.whitespace ()) text;\n  show \"whitespace_split\" (Pre_tokenizer.whitespace_split ()) text;\n  show \"bert\" (Pre_tokenizer.bert ()) text;\n  show \"punctuation\" (Pre_tokenizer.punctuation ()) text;\n  show \"digits (individual)\"\n    (Pre_tokenizer.digits ~individual_digits:true ())\n    text;\n  show \"digits (grouped)\"\n    (Pre_tokenizer.digits ~individual_digits:false ())\n    text;\n\n  Printf.printf \"=== Delimiter-based ===\\n\\n\";\n\n  show \"char_delimiter ','\" (Pre_tokenizer.char_delimiter ',') \"a,b,c\";\n  show \"split on '::'\" (Pre_tokenizer.split ~pattern:\"::\" ()) \"mod::func::arg\";\n  show \"fixed_length 3\" (Pre_tokenizer.fixed_length 3) \"abcdefgh\";\n  show \"metaspace\" (Pre_tokenizer.metaspace ()) \"Hello world today\";\n\n  Printf.printf \"=== Composition ===\\n\\n\";\n\n  let composed =\n    Pre_tokenizer.sequence\n      [\n        Pre_tokenizer.whitespace_split ();\n        Pre_tokenizer.punctuation ~behavior:`Isolated ();\n      ]\n  in\n  show \"whitespace + punctuation\" composed text\n"
  },
  {
    "path": "packages/brot/examples/05-algorithms/README.md",
    "content": "# `05-algorithms`\n\nFive tokenization algorithms compared side-by-side. Each algorithm splits text\ndifferently based on its strategy.\n\n```bash\ndune exec brot/examples/05-algorithms/main.exe\n```\n\n## What You'll Learn\n\n- **BPE** (Byte Pair Encoding): merge-based subwords (GPT-2, RoBERTa)\n- **WordPiece**: greedy longest-match with `##` prefix (BERT)\n- **Unigram**: probabilistic segmentation (T5, mBART)\n- **Word-level**: one token per word, no subword splitting\n- **Character-level**: one token per byte, no vocabulary needed\n\n## Key Functions\n\n| Function          | Purpose                                |\n| ----------------- | -------------------------------------- |\n| `Brot.bpe`        | BPE tokenizer from vocab + merge rules |\n| `Brot.wordpiece`  | WordPiece tokenizer from vocab         |\n| `Brot.unigram`    | Unigram tokenizer from vocab + scores  |\n| `Brot.word_level` | Word-level tokenizer from vocab        |\n| `Brot.chars`      | Character-level tokenizer (no vocab)   |\n| `Brot.vocab_size` | Number of vocabulary entries           |\n\n## Algorithm Comparison\n\n| Algorithm  | Subwords?           | Unknown handling         | Vocabulary              |\n| ---------- | ------------------- | ------------------------ | ----------------------- |\n| BPE        | Yes (merges)        | Falls back to characters | `(string * int) list`   |\n| WordPiece  | Yes (`##` prefix)   | `[UNK]` token            | `(string * int) list`   |\n| Unigram    | Yes (probabilistic) | Lowest-score fallback    | `(string * float) list` |\n| Word-level | No                  | `<unk>` token            | `(string * int) list`   |\n| Chars      | No                  | N/A (all bytes valid)    | None needed             |\n\n## Try It\n\n1. Add more merge rules to the BPE tokenizer and see how it affects splitting.\n2. Try encoding a word not in the WordPiece vocabulary.\n3. 
Change the Unigram scores and observe how probabilities affect splitting.\n\n## Next Steps\n\nContinue to [06-special-tokens](../06-special-tokens/) to learn about special\ntokens and post-processing.\n"
  },
  {
    "path": "packages/brot/examples/05-algorithms/dune",
    "content": "(executable\n (name main)\n (libraries brot))\n"
  },
  {
    "path": "packages/brot/examples/05-algorithms/main.ml",
    "content": "(* Tokenization algorithms.\n\n   Five algorithms compared side-by-side: BPE (merge-based), WordPiece (greedy\n   longest-match), Unigram (probabilistic), word-level (whole words), and\n   character-level (per-byte). Each splits text differently. *)\n\nopen Brot\n\nlet show name tokenizer text =\n  let encoding = encode tokenizer text in\n  let tokens = Encoding.tokens encoding in\n  let ids = Encoding.ids encoding in\n  Printf.printf \"  %-12s tokens=[%s]  ids=[%s]\\n\" name\n    (String.concat \", \"\n       (List.map (fun s -> Printf.sprintf \"%S\" s) (Array.to_list tokens)))\n    (String.concat \", \" (Array.to_list (Array.map string_of_int ids)))\n\nlet () =\n  (* --- BPE: iterative merge-based subwords --- *)\n  let bpe_tok =\n    bpe\n      ~vocab:\n        [\n          (\"p\", 0);\n          (\"l\", 1);\n          (\"a\", 2);\n          (\"y\", 3);\n          (\"i\", 4);\n          (\"n\", 5);\n          (\"g\", 6);\n          (\"pl\", 7);\n          (\"ay\", 8);\n          (\"in\", 9);\n          (\"ng\", 10);\n          (\"play\", 11);\n          (\"ing\", 12);\n          (\"playing\", 13);\n        ]\n      ~merges:\n        [\n          (\"p\", \"l\");\n          (\"a\", \"y\");\n          (\"i\", \"n\");\n          (\"n\", \"g\");\n          (\"pl\", \"ay\");\n          (\"in\", \"g\");\n          (\"play\", \"ing\");\n        ]\n      ()\n  in\n\n  (* --- WordPiece: greedy longest-match with ## prefix --- *)\n  let wp_tok =\n    wordpiece\n      ~vocab:\n        [\n          (\"[UNK]\", 0);\n          (\"play\", 1);\n          (\"##ing\", 2);\n          (\"##ed\", 3);\n          (\"run\", 4);\n          (\"##ning\", 5);\n          (\"un\", 6);\n          (\"##known\", 7);\n        ]\n      ~unk_token:\"[UNK]\" ()\n  in\n\n  (* --- Unigram: probabilistic segmentation --- *)\n  let uni_tok =\n    unigram\n      ~vocab:\n        [\n          (\"playing\", -0.5);\n          (\"play\", -1.0);\n          (\"ing\", -1.5);\n          (\"p\", 
-3.0);\n          (\"l\", -3.0);\n          (\"a\", -3.0);\n          (\"y\", -3.0);\n          (\"i\", -3.0);\n          (\"n\", -3.0);\n          (\"g\", -3.0);\n        ]\n      ()\n  in\n\n  (* --- Word-level: whole words only --- *)\n  let wl_tok =\n    word_level\n      ~vocab:[ (\"playing\", 0); (\"hello\", 1); (\"<unk>\", 2) ]\n      ~unk_token:\"<unk>\"\n      ~pre:(Pre_tokenizer.whitespace ())\n      ()\n  in\n\n  (* --- Character-level: one byte per token --- *)\n  let char_tok = chars () in\n\n  Printf.printf \"=== Encoding %S ===\\n\\n\" \"playing\";\n  show \"BPE\" bpe_tok \"playing\";\n  show \"WordPiece\" wp_tok \"playing\";\n  show \"Unigram\" uni_tok \"playing\";\n  show \"Word-level\" wl_tok \"playing\";\n  show \"Chars\" char_tok \"playing\";\n\n  Printf.printf \"\\n=== Encoding %S ===\\n\\n\" \"running\";\n  show \"WordPiece\" wp_tok \"running\";\n  show \"Chars\" char_tok \"running\";\n\n  Printf.printf \"\\n=== Encoding %S (unknown word) ===\\n\\n\" \"unknown\";\n  show \"WordPiece\" wp_tok \"unknown\";\n  show \"Word-level\" wl_tok \"unknown\";\n  show \"Chars\" char_tok \"unknown\";\n\n  Printf.printf \"\\n=== Vocabulary sizes ===\\n\\n\";\n  Printf.printf \"  BPE:        %d\\n\" (vocab_size bpe_tok);\n  Printf.printf \"  WordPiece:  %d\\n\" (vocab_size wp_tok);\n  Printf.printf \"  Unigram:    %d\\n\" (vocab_size uni_tok);\n  Printf.printf \"  Word-level: %d\\n\" (vocab_size wl_tok);\n  Printf.printf \"  Chars:      %d (byte range 0-255)\\n\" (vocab_size char_tok)\n"
  },
  {
    "path": "packages/brot/examples/06-special-tokens/README.md",
    "content": "# `06-special-tokens`\n\nSpecial tokens and post-processing. Post-processors insert tokens like `[CLS]`\nand `[SEP]` after tokenization, and assign type IDs for sentence-pair tasks.\n\n```bash\ndune exec brot/examples/06-special-tokens/main.exe\n```\n\n## What You'll Learn\n\n- Defining special tokens with `Brot.special`\n- BERT-style post-processing: `[CLS] A [SEP]` and `[CLS] A [SEP] B [SEP]`\n- Sentence-pair encoding with `encode ~pair`\n- Type IDs: 0 for first sequence, 1 for second\n- Template-based post-processing for custom formats\n- Skipping special tokens with `~add_special_tokens:false`\n\n## Key Functions\n\n| Function                       | Purpose                                     |\n| ------------------------------ | ------------------------------------------- |\n| `Brot.special`                 | Define a special token configuration        |\n| `Post_processor.bert`          | BERT-style `[CLS] A [SEP] B [SEP]`          |\n| `Post_processor.template`      | Template-based with `$A`, `$B` placeholders |\n| `Brot.encode ~pair`            | Encode a sentence pair                      |\n| `Encoding.type_ids`            | Segment type IDs (0 or 1)                   |\n| `Encoding.special_tokens_mask` | 1 for special tokens, 0 for content         |\n\n## BERT Post-processing\n\nFor a single sentence: `[CLS] tokens [SEP]`\nFor a sentence pair: `[CLS] A_tokens [SEP] B_tokens [SEP]`\n\nType IDs distinguish the two sequences:\n- First sequence (including `[CLS]` and first `[SEP]`): type_id = 0\n- Second sequence (including final `[SEP]`): type_id = 1\n\n## Try It\n\n1. Try the `roberta` post-processor with `<s>` and `</s>` tokens.\n2. Create a custom template with different special tokens.\n3. Encode a pair and check that `type_ids` correctly separates the segments.\n\n## Next Steps\n\nContinue to [07-padding-truncation](../07-padding-truncation/) to learn about\npreparing batches with uniform sequence lengths.\n"
  },
  {
    "path": "packages/brot/examples/06-special-tokens/dune",
    "content": "(executable\n (name main)\n (libraries brot))\n"
  },
  {
    "path": "packages/brot/examples/06-special-tokens/main.ml",
    "content": "(* Special tokens and post-processing.\n\n   Special tokens like [CLS] and [SEP] are inserted by post-processors after\n   tokenization. They mark sequence boundaries and provide structure for model\n   input. Sentence-pair encoding assigns different type IDs to each sequence. *)\n\nopen Brot\n\nlet print_encoding enc =\n  let ids = Encoding.ids enc in\n  let tokens = Encoding.tokens enc in\n  let type_ids = Encoding.type_ids enc in\n  let special = Encoding.special_tokens_mask enc in\n  Printf.printf \"  %-8s %-4s %-8s %-8s\\n\" \"Token\" \"ID\" \"Type_ID\" \"Special\";\n  Printf.printf \"  %s\\n\" (String.make 32 '-');\n  for i = 0 to Encoding.length enc - 1 do\n    Printf.printf \"  %-8s %-4d %-8d %-8d\\n\" tokens.(i) ids.(i) type_ids.(i)\n      special.(i)\n  done\n\nlet () =\n  let vocab =\n    [\n      (\"[UNK]\", 0);\n      (\"[CLS]\", 1);\n      (\"[SEP]\", 2);\n      (\"hello\", 3);\n      (\"world\", 4);\n      (\"how\", 5);\n      (\"are\", 6);\n      (\"you\", 7);\n    ]\n  in\n  let specials = List.map special [ \"[CLS]\"; \"[SEP]\"; \"[UNK]\" ] in\n  let post = Post_processor.bert ~cls:(\"[CLS]\", 1) ~sep:(\"[SEP]\", 2) () in\n  let tokenizer =\n    word_level ~vocab ~unk_token:\"[UNK]\" ~specials ~post\n      ~pre:(Pre_tokenizer.whitespace ())\n      ()\n  in\n\n  (* Single sentence: [CLS] A [SEP] *)\n  Printf.printf \"=== Single Sentence ===\\n\";\n  Printf.printf \"Text: \\\"hello world\\\"\\n\\n\";\n  print_encoding (encode tokenizer \"hello world\");\n\n  (* Sentence pair: [CLS] A [SEP] B [SEP] *)\n  Printf.printf \"\\n=== Sentence Pair ===\\n\";\n  Printf.printf \"A: \\\"hello world\\\", B: \\\"how are you\\\"\\n\\n\";\n  print_encoding (encode tokenizer ~pair:\"how are you\" \"hello world\");\n\n  (* Without special tokens *)\n  Printf.printf \"\\n=== Without Special Tokens ===\\n\";\n  Printf.printf \"Text: \\\"hello world\\\" (add_special_tokens=false)\\n\\n\";\n  print_encoding (encode tokenizer ~add_special_tokens:false 
\"hello world\");\n\n  (* Template-based post-processor *)\n  Printf.printf \"\\n=== Template Post-processor ===\\n\";\n  let template_post =\n    Post_processor.template ~single:\"[CLS] $A [SEP]\"\n      ~pair:\"[CLS] $A [SEP] $B:1 [SEP]:1\"\n      ~special_tokens:[ (\"[CLS]\", 1); (\"[SEP]\", 2) ]\n      ()\n  in\n  let tok2 =\n    word_level ~vocab ~unk_token:\"[UNK]\" ~specials ~post:template_post\n      ~pre:(Pre_tokenizer.whitespace ())\n      ()\n  in\n  Printf.printf \"Template: \\\"[CLS] $A [SEP] $B:1 [SEP]:1\\\"\\n\";\n  Printf.printf \"A: \\\"hello\\\", B: \\\"world\\\"\\n\\n\";\n  print_encoding (encode tok2 ~pair:\"world\" \"hello\")\n"
  },
  {
    "path": "packages/brot/examples/07-padding-truncation/README.md",
    "content": "# `07-padding-truncation`\n\nPadding and truncation for batch processing. Models require uniform sequence\nlengths. Padding adds filler tokens; truncation trims long sequences.\n\n```bash\ndune exec brot/examples/07-padding-truncation/main.exe\n```\n\n## What You'll Learn\n\n- Fixed-length padding with `padding (`Fixed n)`\n- Batch-longest padding with `padding `Batch_longest`\n- Left vs right padding direction\n- Truncation with `truncation max_length`\n- Combining padding and truncation\n- Using `Encoding.attention_mask` to distinguish real tokens from padding\n\n## Key Functions\n\n| Function                  | Purpose                           |\n| ------------------------- | --------------------------------- |\n| `Brot.padding`            | Create a padding configuration    |\n| `Brot.truncation`         | Create a truncation configuration |\n| `Brot.encode_batch`       | Encode multiple texts at once     |\n| `Encoding.attention_mask` | 1 for real tokens, 0 for padding  |\n\n## Padding Strategies\n\n| Strategy             | Behavior                                               |\n| -------------------- | ------------------------------------------------------ |\n| `` `Fixed n ``       | Every sequence padded to exactly `n` tokens            |\n| `` `Batch_longest `` | All sequences padded to match the longest in the batch |\n| `` `To_multiple n `` | Pad to smallest multiple of `n` >= sequence length     |\n\n## Try It\n\n1. Change the padding direction to `` `Left `` and observe where pad tokens appear.\n2. Try `padding (`To_multiple 4)` and see how lengths round up.\n3. Truncate from the left with `truncation ~direction:`Left 3`.\n\n## Next Steps\n\nContinue to [08-decoders](../08-decoders/) to learn how tokens are converted\nback to text.\n"
  },
  {
    "path": "packages/brot/examples/07-padding-truncation/dune",
    "content": "(executable\n (name main)\n (libraries brot))\n"
  },
  {
    "path": "packages/brot/examples/07-padding-truncation/main.ml",
    "content": "(* Padding and truncation.\n\n   Batch processing requires uniform sequence lengths. Padding extends short\n   sequences with pad tokens; truncation trims long ones. The attention mask\n   distinguishes real tokens from padding. *)\n\nopen Brot\n\nlet print_batch label encodings =\n  Printf.printf \"%s\\n\" label;\n  List.iteri\n    (fun i enc ->\n      let ids = Encoding.ids enc in\n      let attn = Encoding.attention_mask enc in\n      Printf.printf \"  [%d] ids=[%s]  attn=[%s]\\n\" i\n        (String.concat \", \" (Array.to_list (Array.map string_of_int ids)))\n        (String.concat \", \" (Array.to_list (Array.map string_of_int attn))))\n    encodings;\n  print_newline ()\n\nlet () =\n  let vocab =\n    [\n      (\"[PAD]\", 0);\n      (\"<unk>\", 1);\n      (\"hello\", 2);\n      (\"world\", 3);\n      (\"how\", 4);\n      (\"are\", 5);\n      (\"you\", 6);\n      (\"doing\", 7);\n      (\"today\", 8);\n    ]\n  in\n  let tokenizer =\n    word_level ~vocab ~unk_token:\"<unk>\"\n      ~specials:[ special \"[PAD]\" ]\n      ~pad_token:\"[PAD]\"\n      ~pre:(Pre_tokenizer.whitespace ())\n      ()\n  in\n\n  let texts = [ \"hello\"; \"hello world\"; \"how are you doing today\" ] in\n  Printf.printf \"Texts:\\n\";\n  List.iteri (fun i t -> Printf.printf \"  [%d] %S\\n\" i t) texts;\n  print_newline ();\n\n  (* No padding *)\n  print_batch \"=== No Padding ===\" (encode_batch tokenizer texts);\n\n  (* Fixed-length padding *)\n  print_batch \"=== Fixed Padding (length=6) ===\"\n    (encode_batch tokenizer ~padding:(padding (`Fixed 6)) texts);\n\n  (* Batch-longest padding *)\n  print_batch \"=== Batch Longest Padding ===\"\n    (encode_batch tokenizer ~padding:(padding `Batch_longest) texts);\n\n  (* Left padding *)\n  print_batch \"=== Left Padding (length=6) ===\"\n    (encode_batch tokenizer\n       ~padding:(padding ~direction:`Left (`Fixed 6))\n       texts);\n\n  (* Truncation *)\n  print_batch \"=== Truncation (max_length=3) ===\"\n    
(encode_batch tokenizer ~truncation:(truncation 3) texts);\n\n  (* Padding + Truncation *)\n  print_batch \"=== Padding + Truncation (pad=4, trunc=4) ===\"\n    (encode_batch tokenizer\n       ~padding:(padding (`Fixed 4))\n       ~truncation:(truncation 4) texts)\n"
  },
  {
    "path": "packages/brot/examples/08-decoders/README.md",
    "content": "# `08-decoders`\n\nDecoders convert token strings back to natural text. Different tokenization\nschemes require different decoding strategies to produce clean output.\n\n```bash\ndune exec brot/examples/08-decoders/main.exe\n```\n\n## What You'll Learn\n\n- Per-token decoders: `wordpiece`, `bpe`, `metaspace`, `byte_fallback`\n- Collapsing decoders: `fuse`, `replace`\n- Composing decoders with `sequence`\n- Integrating a decoder with a tokenizer\n- Skipping special tokens during decoding\n\n## Key Functions\n\n| Function                | Purpose                              |\n| ----------------------- | ------------------------------------ |\n| `Decoder.wordpiece`     | Strip `##` prefix, join subwords     |\n| `Decoder.bpe`           | Strip word-end suffix, insert spaces |\n| `Decoder.metaspace`     | Convert markers back to spaces       |\n| `Decoder.byte_fallback` | Convert `<0xFF>` back to bytes       |\n| `Decoder.fuse`          | Concatenate all tokens               |\n| `Decoder.replace`       | String replacement                   |\n| `Decoder.sequence`      | Chain decoders                       |\n| `Decoder.decode`        | Apply decoder to token list          |\n| `Brot.decode`           | Full decode through tokenizer        |\n\n## Per-token vs Collapsing\n\nSome decoders transform each token independently (per-token: `bpe`,\n`metaspace`, `byte_fallback`), while others combine the entire token list into\na single result (collapsing: `wordpiece`, `fuse`, `replace`). This matters\nwhen composing with `sequence`.\n\n## Try It\n\n1. Try `Decoder.ctc` for speech recognition CTC output.\n2. Compose `byte_fallback` with `fuse` and decode byte tokens.\n3. Use `Decoder.strip` to remove leading/trailing characters.\n\n## Next Steps\n\nContinue to [09-training](../09-training/) to learn how to train tokenizers\nfrom scratch.\n"
  },
  {
    "path": "packages/brot/examples/08-decoders/dune",
    "content": "(executable\n (name main)\n (libraries brot))\n"
  },
  {
    "path": "packages/brot/examples/08-decoders/main.ml",
    "content": "(* Decoders.\n\n   Decoders convert token strings back to natural text by reversing\n   encoding-specific transformations: prefix/suffix removal, space insertion,\n   byte-level decoding, and marker replacement. *)\n\nopen Brot\n\nlet show name decoder tokens =\n  let result = Decoder.decode decoder tokens in\n  Printf.printf \"  %-22s [%s] -> %S\\n\" name\n    (String.concat \"; \" (List.map (fun s -> Printf.sprintf \"%S\" s) tokens))\n    result\n\nlet () =\n  Printf.printf \"=== Per-token Decoders ===\\n\\n\";\n\n  show \"wordpiece\" (Decoder.wordpiece ()) [ \"play\"; \"##ing\"; \"un\"; \"##happy\" ];\n\n  show \"bpe (suffix=</w>)\"\n    (Decoder.bpe ~suffix:\"</w>\" ())\n    [ \"hel\"; \"lo</w>\"; \"wor\"; \"ld</w>\" ];\n\n  show \"metaspace\" (Decoder.metaspace ())\n    [ \"\\xe2\\x96\\x81Hello\"; \"\\xe2\\x96\\x81world\" ];\n\n  show \"byte_fallback\" (Decoder.byte_fallback ()) [ \"hello\"; \"<0x21>\" ];\n\n  Printf.printf \"\\n=== Collapsing Decoders ===\\n\\n\";\n\n  show \"fuse\" (Decoder.fuse ()) [ \"h\"; \"e\"; \"l\"; \"l\"; \"o\" ];\n\n  show \"replace ('_' -> ' ')\"\n    (Decoder.replace ~pattern:\"_\" ~by:\" \" ())\n    [ \"hello_world\" ];\n\n  Printf.printf \"\\n=== Composed Decoder ===\\n\\n\";\n\n  let composed =\n    Decoder.sequence\n      [ Decoder.wordpiece (); Decoder.replace ~pattern:\"  \" ~by:\" \" () ]\n  in\n  show \"wordpiece + replace\" composed [ \"play\"; \"##ing\"; \"is\"; \"great\" ];\n\n  Printf.printf \"\\n=== Integrated with Tokenizer ===\\n\\n\";\n\n  let vocab =\n    [\n      (\"[UNK]\", 0);\n      (\"[CLS]\", 1);\n      (\"[SEP]\", 2);\n      (\"play\", 3);\n      (\"##ing\", 4);\n      (\"##ed\", 5);\n      (\"great\", 6);\n    ]\n  in\n  let tokenizer =\n    wordpiece ~vocab ~unk_token:\"[UNK]\"\n      ~specials:[ special \"[CLS]\"; special \"[SEP]\" ]\n      ~post:(Post_processor.bert ~cls:(\"[CLS]\", 1) ~sep:(\"[SEP]\", 2) ())\n      ~decoder:(Decoder.wordpiece ()) ()\n  in\n\n  let text = \"playing\" in\n 
 let encoding = encode tokenizer text in\n  let ids = Encoding.ids encoding in\n  Printf.printf \"  Text:    %S\\n\" text;\n  Printf.printf \"  Tokens:  [%s]\\n\"\n    (String.concat \"; \"\n       (List.map\n          (fun s -> Printf.sprintf \"%S\" s)\n          (Array.to_list (Encoding.tokens encoding))));\n  Printf.printf \"  IDs:     [%s]\\n\"\n    (String.concat \"; \" (Array.to_list (Array.map string_of_int ids)));\n  Printf.printf \"  Decoded: %S\\n\" (decode tokenizer ids);\n  Printf.printf \"  Decoded (skip specials): %S\\n\"\n    (decode tokenizer ~skip_special_tokens:true ids)\n"
  },
  {
    "path": "packages/brot/examples/09-training/README.md",
    "content": "# `09-training`\n\nTraining tokenizers from scratch. Given a text corpus, each algorithm learns a\nvocabulary tailored to the data.\n\n```bash\ndune exec brot/examples/09-training/main.exe\n```\n\n## What You'll Learn\n\n- Training BPE, WordPiece, word-level, and Unigram tokenizers\n- Controlling vocabulary size with `~vocab_size`\n- Adding special tokens during training\n- Inspecting the learned vocabulary\n\n## Key Functions\n\n| Function               | Purpose                                          |\n| ---------------------- | ------------------------------------------------ |\n| `Brot.train_bpe`       | Train a BPE tokenizer (learns merge rules)       |\n| `Brot.train_wordpiece` | Train a WordPiece tokenizer (learns subwords)    |\n| `Brot.train_wordlevel` | Train a word-level tokenizer (collects words)    |\n| `Brot.train_unigram`   | Train a Unigram tokenizer (learns probabilities) |\n| `Brot.vocab_size`      | Check learned vocabulary size                    |\n| `Brot.token_to_id`     | Look up a token's ID                             |\n\n## Training Data\n\nTraining data is provided as `` `Seq (List.to_seq texts) `` for in-memory text\nor `` `Files [\"path1\"; \"path2\"] `` for files (one sentence per line).\n\n## Try It\n\n1. Add more sentences to the corpus and see how the vocabulary changes.\n2. Train with a smaller `~vocab_size` and observe more subword splitting.\n3. Use `~min_frequency:2` to exclude rare words.\n\n## Next Steps\n\nContinue to [10-bert-pipeline](../10-bert-pipeline/) to assemble a complete\nBERT-style tokenizer pipeline.\n"
  },
  {
    "path": "packages/brot/examples/09-training/dune",
    "content": "(executable\n (name main)\n (libraries brot))\n"
  },
  {
    "path": "packages/brot/examples/09-training/main.ml",
    "content": "(* Training tokenizers.\n\n   Train new tokenizers from a text corpus. Each algorithm learns a different\n   vocabulary: BPE learns merge rules, WordPiece learns subword prefixes,\n   word-level collects unique words, and Unigram learns token probabilities. *)\n\nopen Brot\n\nlet corpus =\n  [\n    \"the cat sat on the mat\";\n    \"the dog sat on the log\";\n    \"the cat and the dog are friends\";\n    \"cats and dogs play together\";\n    \"the cat plays with the dog\";\n    \"playing in the park is fun\";\n    \"the park has many cats and dogs\";\n    \"friends play in the park together\";\n  ]\n\nlet show_trained name tokenizer test_texts =\n  Printf.printf \"--- %s (vocab_size=%d) ---\\n\" name (vocab_size tokenizer);\n  List.iter\n    (fun text ->\n      let enc = encode tokenizer text in\n      Printf.printf \"  %S -> [%s]\\n\" text\n        (String.concat \", \"\n           (List.map\n              (fun s -> Printf.sprintf \"%S\" s)\n              (Array.to_list (Encoding.tokens enc)))))\n    test_texts;\n  print_newline ()\n\nlet () =\n  let data = `Seq (List.to_seq corpus) in\n  let test_texts = [ \"the cat plays\"; \"dogs are friends\" ] in\n\n  Printf.printf \"Training corpus: %d sentences\\n\\n\" (List.length corpus);\n\n  (* Train BPE: learns merge rules by iteratively combining frequent pairs *)\n  let bpe_tok =\n    train_bpe data ~vocab_size:100 ~show_progress:false\n      ~pre:(Pre_tokenizer.whitespace ())\n  in\n  show_trained \"BPE\" bpe_tok test_texts;\n\n  (* Train WordPiece: learns subword prefixes (## for continuation tokens) *)\n  let wp_tok =\n    train_wordpiece data ~vocab_size:100 ~show_progress:false\n      ~pre:(Pre_tokenizer.whitespace ())\n  in\n  show_trained \"WordPiece\" wp_tok test_texts;\n\n  (* Train word-level: each unique word is a token *)\n  let wl_tok =\n    train_wordlevel data ~vocab_size:50 ~show_progress:false\n      ~pre:(Pre_tokenizer.whitespace ())\n  in\n  show_trained \"Word-level\" wl_tok 
test_texts;\n\n  (* Train Unigram: probabilistic subword segmentation *)\n  let uni_tok = train_unigram data ~vocab_size:100 ~show_progress:false in\n  show_trained \"Unigram\" uni_tok test_texts;\n\n  (* Training with special tokens *)\n  Printf.printf \"=== Training with Special Tokens ===\\n\\n\";\n  let wp_with_specials =\n    train_wordpiece data ~vocab_size:100 ~show_progress:false\n      ~pre:(Pre_tokenizer.whitespace ())\n      ~specials:[ special \"[CLS]\"; special \"[SEP]\"; special \"[PAD]\" ]\n      ~pad_token:\"[PAD]\"\n  in\n  Printf.printf \"WordPiece with specials (vocab=%d):\\n\"\n    (vocab_size wp_with_specials);\n  let show_id tok name =\n    Printf.printf \"  %s id = %s\\n\" name\n      (match token_to_id tok name with\n      | Some id -> string_of_int id\n      | None -> \"N/A\")\n  in\n  show_id wp_with_specials \"[CLS]\";\n  show_id wp_with_specials \"[SEP]\";\n  show_id wp_with_specials \"[PAD]\";\n\n  (* Add a post-processor to insert special tokens during encoding *)\n  Printf.printf \"\\n  Encoding with post-processor:\\n\";\n  let wp_full =\n    train_wordpiece data ~vocab_size:100 ~show_progress:false\n      ~pre:(Pre_tokenizer.whitespace ())\n      ~post:\n        (Post_processor.bert\n           ~cls:(\"[CLS]\", Option.get (token_to_id wp_with_specials \"[CLS]\"))\n           ~sep:(\"[SEP]\", Option.get (token_to_id wp_with_specials \"[SEP]\"))\n           ())\n      ~specials:[ special \"[CLS]\"; special \"[SEP]\"; special \"[PAD]\" ]\n      ~pad_token:\"[PAD]\"\n  in\n  let enc = encode wp_full \"the cat plays\" in\n  Printf.printf \"  %S -> [%s]\\n\" \"the cat plays\"\n    (String.concat \", \"\n       (List.map\n          (fun s -> Printf.sprintf \"%S\" s)\n          (Array.to_list (Encoding.tokens enc))))\n"
  },
  {
    "path": "packages/brot/examples/10-bert-pipeline/README.md",
    "content": "# `10-bert-pipeline`\n\nComplete BERT-style tokenizer pipeline. Assembles all stages: normalizer,\npre-tokenizer, WordPiece algorithm, post-processor, decoder, special tokens,\npadding, and truncation.\n\n```bash\ndune exec brot/examples/10-bert-pipeline/main.exe\n```\n\n## What You'll Learn\n\n- Assembling a full tokenization pipeline\n- How all stages work together end-to-end\n- Single sentence and sentence-pair encoding\n- Batch encoding with padding\n- Sentence-pair batch encoding with `encode_pairs_batch`\n- Decoding with and without special tokens\n- Inspecting tokenizer configuration with `Brot.pp`\n\n## Key Functions\n\n| Function                           | Purpose                                       |\n| ---------------------------------- | --------------------------------------------- |\n| `Brot.wordpiece`                   | Full pipeline constructor                     |\n| `Normalizer.bert`                  | BERT normalizer (lowercase, clean, CJK)       |\n| `Pre_tokenizer.bert`               | BERT pre-tokenizer (whitespace + punctuation) |\n| `Post_processor.bert`              | Insert `[CLS]` and `[SEP]` tokens             |\n| `Decoder.wordpiece`                | Reverse `##` prefix joining                   |\n| `Brot.encode ~pair`                | Encode a sentence pair                        |\n| `Brot.encode_pairs_batch`          | Batch-encode sentence pairs                   |\n| `Brot.decode ~skip_special_tokens` | Decode without `[CLS]`/`[SEP]`                |\n| `Brot.pp`                          | Pretty-print tokenizer configuration          |\n\n## The Full Pipeline\n\n```\nInput text\n  |\n  v\nNormalizer.bert     -- lowercase, clean control chars, pad CJK\n  |\n  v\nPre_tokenizer.bert  -- split on whitespace, isolate punctuation\n  |\n  v\nWordPiece model     -- greedy longest-match subword splitting\n  |\n  v\nPost_processor.bert -- insert [CLS] and [SEP], set type_ids\n  |\n  v\nEncoding.t          -- ids, 
tokens, offsets, type_ids, attention_mask\n```\n\n## Try It\n\n1. Encode text with accented characters and see the normalizer at work.\n2. Change `Post_processor.bert` to `Post_processor.roberta` with `<s>` and\n   `</s>` tokens for a RoBERTa-style pipeline.\n3. Use `save_pretrained` to export the tokenizer and reload it with\n   `from_file`.\n\n## Further Reading\n\n- [gpt2_tokenizer](../x-gpt2-tokenizer/) -- loading a real GPT-2 tokenizer\n  from HuggingFace model files\n"
  },
  {
    "path": "packages/brot/examples/10-bert-pipeline/dune",
    "content": "(executable\n (name main)\n (libraries brot))\n"
  },
  {
    "path": "packages/brot/examples/10-bert-pipeline/main.ml",
    "content": "(* BERT-style pipeline.\n\n   Assembles all pipeline stages into a complete BERT-style tokenizer:\n   normalizer, pre-tokenizer, WordPiece algorithm, post-processor, decoder,\n   special tokens, padding, and truncation. *)\n\nopen Brot\n\nlet print_encoding label enc =\n  let tokens = Encoding.tokens enc in\n  let ids = Encoding.ids enc in\n  let type_ids = Encoding.type_ids enc in\n  let attn = Encoding.attention_mask enc in\n  Printf.printf \"%s\\n\" label;\n  Printf.printf \"  tokens:    [%s]\\n\"\n    (String.concat \", \"\n       (List.map (fun s -> Printf.sprintf \"%S\" s) (Array.to_list tokens)));\n  Printf.printf \"  ids:       [%s]\\n\"\n    (String.concat \", \" (Array.to_list (Array.map string_of_int ids)));\n  Printf.printf \"  type_ids:  [%s]\\n\"\n    (String.concat \", \" (Array.to_list (Array.map string_of_int type_ids)));\n  Printf.printf \"  attn_mask: [%s]\\n\"\n    (String.concat \", \" (Array.to_list (Array.map string_of_int attn)));\n  print_newline ()\n\nlet () =\n  (* Build a BERT-style vocabulary *)\n  let vocab =\n    [\n      (\"[PAD]\", 0);\n      (\"[UNK]\", 1);\n      (\"[CLS]\", 2);\n      (\"[SEP]\", 3);\n      (\"the\", 4);\n      (\"cat\", 5);\n      (\"sat\", 6);\n      (\"on\", 7);\n      (\"mat\", 8);\n      (\"dog\", 9);\n      (\"play\", 10);\n      (\"##ing\", 11);\n      (\"##ed\", 12);\n      (\"is\", 13);\n      (\"a\", 14);\n      (\"good\", 15);\n      (\"great\", 16);\n      (\"un\", 17);\n      (\"##happy\", 18);\n      (\"friend\", 19);\n      (\"##s\", 20);\n      (\"how\", 21);\n      (\"are\", 22);\n      (\"you\", 23);\n    ]\n  in\n  let specials = List.map special [ \"[PAD]\"; \"[UNK]\"; \"[CLS]\"; \"[SEP]\" ] in\n\n  (* Assemble the full pipeline *)\n  let tokenizer =\n    wordpiece ~vocab ~unk_token:\"[UNK]\"\n      ~normalizer:(Normalizer.bert ~lowercase:true ())\n      ~pre:(Pre_tokenizer.bert ())\n      ~post:(Post_processor.bert ~cls:(\"[CLS]\", 2) ~sep:(\"[SEP]\", 3) ())\n      
~decoder:(Decoder.wordpiece ()) ~specials ~pad_token:\"[PAD]\" ()\n  in\n\n  (* Inspect the tokenizer *)\n  Printf.printf \"=== Tokenizer Configuration ===\\n\";\n  Format.printf \"%a@.@.\" pp tokenizer;\n\n  (* Single sentence *)\n  Printf.printf \"=== Single Sentence ===\\n\\n\";\n  print_encoding \"\\\"The Cat is Playing\\\"\"\n    (encode tokenizer \"The Cat is Playing\");\n\n  (* Sentence pair *)\n  Printf.printf \"=== Sentence Pair ===\\n\\n\";\n  print_encoding \"A: \\\"the cat sat\\\", B: \\\"how are you\\\"\"\n    (encode tokenizer ~pair:\"how are you\" \"the cat sat\");\n\n  (* Batch with padding *)\n  Printf.printf \"=== Padded Batch ===\\n\\n\";\n  let batch =\n    encode_batch tokenizer ~padding:(padding `Batch_longest)\n      [ \"the cat\"; \"the cat sat on a mat\"; \"good\" ]\n  in\n  List.iteri (fun i enc -> print_encoding (Printf.sprintf \"[%d]\" i) enc) batch;\n\n  (* Sentence pairs batch with padding and truncation *)\n  Printf.printf \"=== Sentence Pairs (pad=12, trunc=12) ===\\n\\n\";\n  let pairs =\n    encode_pairs_batch tokenizer\n      ~padding:(padding (`Fixed 12))\n      ~truncation:(truncation 12)\n      [ (\"the cat sat\", \"how are you\"); (\"good dog\", \"is a friend\") ]\n  in\n  List.iteri\n    (fun i enc -> print_encoding (Printf.sprintf \"pair[%d]\" i) enc)\n    pairs;\n\n  (* Decoding *)\n  Printf.printf \"=== Decoding ===\\n\\n\";\n  let enc = encode tokenizer ~pair:\"how are you\" \"the cat sat\" in\n  let ids = Encoding.ids enc in\n  Printf.printf \"  Full decode:   %S\\n\" (decode tokenizer ids);\n  Printf.printf \"  Skip specials: %S\\n\"\n    (decode tokenizer ~skip_special_tokens:true ids)\n"
  },
  {
    "path": "packages/brot/examples/README.md",
    "content": "# Brot Examples\n\nLearn Brot through progressively complex examples. Start with `01-encode-decode`\nand work through the numbered examples in order.\n\n## Examples\n\n| Example | Concept | Key Functions |\n|---------|---------|---------------|\n| [`01-encode-decode`](./01-encode-decode/) | Text to IDs and back | `bpe`, `encode`, `decode` |\n| [`02-encoding-fields`](./02-encoding-fields/) | Encoding metadata | `Encoding.ids`, `.tokens`, `.offsets` |\n| [`03-normalizers`](./03-normalizers/) | Text normalization | `Normalizer.lowercase`, `.bert`, `.sequence` |\n| [`04-pre-tokenizers`](./04-pre-tokenizers/) | Splitting before vocab | `Pre_tokenizer.whitespace`, `.bert`, `.sequence` |\n| [`05-algorithms`](./05-algorithms/) | Algorithm comparison | `bpe`, `wordpiece`, `unigram`, `word_level`, `chars` |\n| [`06-special-tokens`](./06-special-tokens/) | Special tokens and post-processing | `Post_processor.bert`, `.template`, `encode ~pair` |\n| [`07-padding-truncation`](./07-padding-truncation/) | Batch preparation | `padding`, `truncation`, `encode_batch` |\n| [`08-decoders`](./08-decoders/) | Tokens back to text | `Decoder.wordpiece`, `.bpe`, `.fuse`, `.sequence` |\n| [`09-training`](./09-training/) | Train from scratch | `train_bpe`, `train_wordpiece`, `train_unigram` |\n| [`10-bert-pipeline`](./10-bert-pipeline/) | Full BERT pipeline | All stages assembled end-to-end |\n\nAdvanced:\n\n- [**x-gpt2-tokenizer**](./x-gpt2-tokenizer/): Loading a real GPT-2 tokenizer\n  from HuggingFace model files\n\n## Running Examples\n\nAll examples can be run with:\n\n```bash\ndune exec brot/examples/<name>/main.exe\n```\n\nFor example:\n\n```bash\ndune exec brot/examples/01-encode-decode/main.exe\n```\n\n## Quick Reference\n\n### Encode and Decode\n\n```ocaml\nopen Brot\n\nlet tokenizer = bpe ~vocab:[(\"hello\", 0); ...] ~merges:[...] 
()\nlet encoding = encode tokenizer \"hello world\"\nlet ids = Encoding.ids encoding\nlet text = decode tokenizer ids\n```\n\n### Full Pipeline\n\n```ocaml\nlet tokenizer =\n  wordpiece ~vocab\n    ~normalizer:(Normalizer.bert ~lowercase:true ())\n    ~pre:(Pre_tokenizer.bert ())\n    ~post:(Post_processor.bert ~cls:(\"[CLS]\", 2) ~sep:(\"[SEP]\", 3) ())\n    ~decoder:(Decoder.wordpiece ())\n    ~specials:(List.map special [ \"[CLS]\"; \"[SEP]\"; \"[PAD]\" ])\n    ~pad_token:\"[PAD]\" ()\n```\n\n### Train from Text\n\n```ocaml\nlet tokenizer =\n  train_bpe (`Seq (List.to_seq texts)) ~vocab_size:1000\n```\n"
  },
  {
    "path": "packages/brot/examples/x-gpt2-tokenizer/README.md",
    "content": "# `x-gpt2-tokenizer`\n\nLoading a real GPT-2 tokenizer from HuggingFace model files. This example\ndownloads GPT-2's vocabulary and merges, builds the full byte-level BPE\npipeline, and demonstrates encoding, decoding, and subword inspection.\n\n```bash\ndune exec brot/examples/x-gpt2-tokenizer/main.exe\n```\n\n## What You'll Learn\n\n- Loading a pre-trained tokenizer from vocabulary and merge files\n- Building a byte-level BPE pipeline with `from_model_file`\n- Encoding text and inspecting tokens, IDs, and offsets\n- Decoding token IDs back to text\n- Subword splitting on real vocabulary\n- Batch encoding multiple texts\n\n## Key Functions\n\n| Function                   | Purpose                                         |\n| -------------------------- | ----------------------------------------------- |\n| `Brot.from_model_file`     | Load tokenizer from vocab.json and merges.txt   |\n| `Pre_tokenizer.byte_level` | GPT-2 style byte-level pre-tokenizer            |\n| `Decoder.byte_level`       | Corresponding byte-level decoder                |\n| `Brot.encode`              | Encode text to an `Encoding.t`                  |\n| `Brot.decode`              | Decode token IDs back to text                   |\n| `Brot.encode_batch`        | Encode multiple texts at once                   |\n| `Encoding.tokens`          | Token strings from an encoding                  |\n| `Encoding.ids`             | Token IDs from an encoding                      |\n| `Encoding.offsets`         | Byte offset pairs mapping tokens to source text |\n\n## Prerequisites\n\nThis example downloads GPT-2 model files from HuggingFace on first run\n(~1 MB total). Files are cached in `/tmp/brot_gpt2/`.\n\n## Output Walkthrough\n\n```\nVocabulary: 50257 tokens\n\nText:    \"Hello world! GPT-2 is amazing.\"\nTokens:  [\"Hello\"; \" world\"; \"!\"; \" GPT\"; \"-\"; \"2\"; \" is\"; \" amazing\"; \".\"]\nIDs:     [15496; 995; 0; 402; 12; 17; 318; 4998; 13]\nDecoded: \"Hello world! 
GPT-2 is amazing.\"\nRound-trip: true\n\n=== Subword Splitting ===\n\n  \"tokenization\"       -> 2 tokens: [\"token\", \"ization\"]\n  \"transformer\"        -> 1 tokens: [\"transformer\"]\n  ...\n\n=== Batch Encoding ===\n\n  \"The quick brown fox\"          -> 4 tokens\n  \"jumps over the lazy dog\"      -> 5 tokens\n  \"Machine learning is fun\"      -> 4 tokens\n\n=== Token Offsets ===\n\n  Text: \"Hello, world!\"\n  Hello     offsets=(0, 5)  source=\"Hello\"\n  ,         offsets=(5, 6)  source=\",\"\n  ...\n```\n\n## Try It\n\n1. Change the input text and see how GPT-2 tokenizes different sentences.\n2. Try words with unusual spellings to see subword splitting in action.\n3. Compare the token count for English text vs other languages.\n\n## See Also\n\n- [01-encode-decode](../01-encode-decode/) for basic encoding and decoding\n- [05-algorithms](../05-algorithms/) for comparing tokenization algorithms\n- [08-decoders](../08-decoders/) for decoder options\n"
  },
  {
    "path": "packages/brot/examples/x-gpt2-tokenizer/dune",
    "content": "(executable\n (name main)\n (libraries brot nx unix))\n"
  },
  {
    "path": "packages/brot/examples/x-gpt2-tokenizer/main.ml",
    "content": "(* Loading a real GPT-2 tokenizer.\n\n   Downloads GPT-2's vocabulary and merge files from HuggingFace, builds the\n   full byte-level BPE pipeline, and demonstrates encoding, decoding, and\n   subword inspection on real-world text. *)\n\nopen Brot\n\nlet download url dest =\n  if not (Sys.file_exists dest) then (\n    Printf.printf \"Downloading %s...\\n%!\" (Filename.basename dest);\n    let cmd =\n      Printf.sprintf \"curl -L --fail -s -o %s %s\" (Filename.quote dest)\n        (Filename.quote url)\n    in\n    match Unix.system cmd with\n    | Unix.WEXITED 0 -> ()\n    | _ -> failwith (Printf.sprintf \"Failed to download %s\" url))\n\nlet () =\n  (* Download GPT-2 model files *)\n  let cache = \"/tmp/brot_gpt2\" in\n  if not (Sys.file_exists cache) then Sys.mkdir cache 0o755;\n  let vocab_file = Filename.concat cache \"vocab.json\" in\n  let merges_file = Filename.concat cache \"merges.txt\" in\n  download \"https://huggingface.co/gpt2/raw/main/vocab.json\" vocab_file;\n  download \"https://huggingface.co/gpt2/raw/main/merges.txt\" merges_file;\n\n  (* Build the GPT-2 tokenizer: BPE with byte-level pre-tokenizer *)\n  let tokenizer =\n    from_model_file ~vocab:vocab_file ~merges:merges_file\n      ~pre:(Pre_tokenizer.byte_level ~add_prefix_space:false ())\n      ~decoder:(Decoder.byte_level ()) ()\n  in\n  Printf.printf \"\\nVocabulary: %d tokens\\n\\n\" (vocab_size tokenizer);\n\n  (* Encode text *)\n  let text = \"Hello world! 
GPT-2 is amazing.\" in\n  let enc = encode tokenizer text in\n  Printf.printf \"Text:    %S\\n\" text;\n  Printf.printf \"Tokens:  [%s]\\n\"\n    (String.concat \"; \"\n       (List.map\n          (fun s -> Printf.sprintf \"%S\" s)\n          (Array.to_list (Encoding.tokens enc))));\n  Printf.printf \"IDs:     [%s]\\n\"\n    (String.concat \"; \"\n       (Array.to_list (Array.map string_of_int (Encoding.ids enc))));\n\n  (* Decode back *)\n  let decoded = decode tokenizer (Encoding.ids enc) in\n  Printf.printf \"Decoded: %S\\n\" decoded;\n  Printf.printf \"Round-trip: %b\\n\\n\" (String.equal text decoded);\n\n  (* Subword splitting: see how a long word is broken down *)\n  Printf.printf \"=== Subword Splitting ===\\n\\n\";\n  List.iter\n    (fun word ->\n      let e = encode tokenizer word in\n      let tokens = Encoding.tokens e in\n      Printf.printf \"  %-20s -> %d tokens: [%s]\\n\" (Printf.sprintf \"%S\" word)\n        (Array.length tokens)\n        (String.concat \", \"\n           (List.map (fun s -> Printf.sprintf \"%S\" s) (Array.to_list tokens))))\n    [ \"tokenization\"; \"transformer\"; \"GPT\"; \"Hello\"; \"supercalifragilistic\" ];\n\n  (* Batch encoding *)\n  Printf.printf \"\\n=== Batch Encoding ===\\n\\n\";\n  let texts =\n    [\n      \"The quick brown fox\";\n      \"jumps over the lazy dog\";\n      \"Machine learning is fun\";\n    ]\n  in\n  let batch = encode_batch tokenizer texts in\n  List.iter2\n    (fun text enc ->\n      Printf.printf \"  %-30s -> %d tokens\\n\" (Printf.sprintf \"%S\" text)\n        (Encoding.length enc))\n    texts batch;\n\n  (* Offsets: map tokens back to source text *)\n  Printf.printf \"\\n=== Token Offsets ===\\n\\n\";\n  let text2 = \"Hello, world!\" in\n  let enc2 = encode tokenizer text2 in\n  Printf.printf \"Text: %S\\n\" text2;\n  let tokens = Encoding.tokens enc2 in\n  let offsets = Encoding.offsets enc2 in\n  for i = 0 to Encoding.length enc2 - 1 do\n    let s, e = offsets.(i) in\n    Printf.printf \"  %-8s 
 offsets=(%d, %d)  source=%S\\n\" tokens.(i) s e\n      (String.sub text2 s (e - s))\n  done\n"
  },
  {
    "path": "packages/brot/lib/bpe.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nlet list_drop n l =\n  let rec aux i = function\n    | _ :: l when i < n -> aux (i + 1) l\n    | rest -> rest\n  in\n  if n <= 0 then l else aux 0 l\n\ntype vocab = (string, int) Hashtbl.t\ntype merges = (string * string) list\n\n(* Open-addressing hash table for merge lookups. Returns int directly (no option\n   allocation). -1 = not found. *)\nmodule Merge_map = struct\n  type t = { keys : int array; values : int array; mask : int }\n\n  let[@inline] hash key =\n    let h = key * 0x1B873593 in\n    h lxor (h lsr 16)\n\n  let create entries =\n    let n = List.length entries in\n    let cap = ref 16 in\n    while !cap < n * 4 do\n      cap := !cap * 2\n    done;\n    let mask = !cap - 1 in\n    let keys = Array.make !cap (-1) in\n    let values = Array.make !cap 0 in\n    List.iter\n      (fun (key, value) ->\n        let h = ref (hash key land mask) in\n        while Array.unsafe_get keys !h >= 0 do\n          h := (!h + 1) land mask\n        done;\n        Array.unsafe_set keys !h key;\n        Array.unsafe_set values !h value)\n      entries;\n    { keys; values; mask }\n\n  let[@inline] find t key =\n    let mask = t.mask in\n    let keys = t.keys in\n    let h = ref (hash key land mask) in\n    let k = ref (Array.unsafe_get keys !h) in\n    while !k <> key && !k >= 0 do\n      h := (!h + 1) land mask;\n      k := Array.unsafe_get keys !h\n    done;\n    if !k = key then Array.unsafe_get t.values !h else -1\n\n  let fold f t acc =\n    let keys = t.keys in\n    let values = t.values in\n    let len = Array.length keys in\n    let acc = ref acc in\n    for i = 0 to len - 1 do\n      let k = Array.unsafe_get keys i in\n      if k >= 0 then acc := f k (Array.unsafe_get values i) !acc\n    done;\n 
   !acc\nend\n\nlet[@inline] merge_key a b = (a lsl 21) lor b\nlet[@inline] pack_merge rank new_id = (rank lsl 21) lor new_id\nlet[@inline] merge_rank v = v lsr 21\nlet[@inline] merge_new_id v = v land 0x1FFFFF\n\ntype word = {\n  sym_c : int array;\n  sym_prev : int array;\n  sym_next : int array;\n  sym_len : int array;\n  mutable size : int;\n}\n\n(* Min-heap for BPE merges, keyed by the packed value (rank lsl 21) lor pos:\n   lower rank first, then lower position. Parallel arrays avoid tuple\n   allocation, and the packed key makes each sift comparison a single int\n   compare. *)\nmodule Merge_queue = struct\n  type t = {\n    mutable keys : int array;\n    mutable new_ids : int array;\n    mutable size : int;\n    mutable pop_key : int;\n    mutable pop_new_id : int;\n    mutable skip_keys : int array;\n    mutable skip_new_ids : int array;\n    mutable skip_size : int;\n  }\n\n  let create cap =\n    let cap = max 16 cap in\n    {\n      keys = Array.make cap 0;\n      new_ids = Array.make cap 0;\n      size = 0;\n      pop_key = 0;\n      pop_new_id = 0;\n      skip_keys = [||];\n      skip_new_ids = [||];\n      skip_size = 0;\n    }\n\n  let[@inline] pack_key rank pos = (rank lsl 21) lor pos\n\n  let sift_up t idx =\n    let keys = t.keys in\n    let new_ids = t.new_ids in\n    let key = Array.unsafe_get keys idx in\n    let nid = Array.unsafe_get new_ids idx in\n    let i = ref idx in\n    let cont = ref (!i > 0) in\n    while !cont do\n      let p = (!i - 1) asr 1 in\n      if key < Array.unsafe_get keys p then (\n        Array.unsafe_set keys !i (Array.unsafe_get keys p);\n        Array.unsafe_set new_ids !i (Array.unsafe_get new_ids p);\n        i := p;\n        cont := !i > 0)\n      else cont := false\n    done;\n    Array.unsafe_set keys !i key;\n    Array.unsafe_set new_ids !i nid\n\n  let sift_down t idx =\n    let keys = t.keys in\n    let new_ids = t.new_ids in\n    let size = t.size 
in\n    let key = Array.unsafe_get keys idx in\n    let nid = Array.unsafe_get new_ids idx in\n    let i = ref idx in\n    let continue_ = ref true in\n    while !continue_ do\n      let l = (2 * !i) + 1 in\n      if l >= size then continue_ := false\n      else begin\n        let r = l + 1 in\n        let smallest =\n          if r < size && Array.unsafe_get keys r < Array.unsafe_get keys l then\n            r\n          else l\n        in\n        if Array.unsafe_get keys smallest < key then (\n          Array.unsafe_set keys !i (Array.unsafe_get keys smallest);\n          Array.unsafe_set new_ids !i (Array.unsafe_get new_ids smallest);\n          i := smallest)\n        else continue_ := false\n      end\n    done;\n    Array.unsafe_set keys !i key;\n    Array.unsafe_set new_ids !i nid\n\n  let push t rank pos new_id =\n    let s = t.size in\n    if s = Array.length t.keys then begin\n      let new_cap = max 16 (s * 2) in\n      let grow a =\n        let b = Array.make new_cap 0 in\n        Array.blit a 0 b 0 s;\n        b\n      in\n      t.keys <- grow t.keys;\n      t.new_ids <- grow t.new_ids\n    end;\n    Array.unsafe_set t.keys s (pack_key rank pos);\n    Array.unsafe_set t.new_ids s new_id;\n    t.size <- s + 1;\n    sift_up t s\n\n  let pop t =\n    if t.size = 0 then false\n    else begin\n      t.pop_key <- Array.unsafe_get t.keys 0;\n      t.pop_new_id <- Array.unsafe_get t.new_ids 0;\n      t.size <- t.size - 1;\n      if t.size > 0 then begin\n        Array.unsafe_set t.keys 0 (Array.unsafe_get t.keys t.size);\n        Array.unsafe_set t.new_ids 0 (Array.unsafe_get t.new_ids t.size);\n        sift_down t 0\n      end;\n      true\n    end\nend\n\ntype token = { id : int; value : string; offsets : int * int }\n\n(* Direct-mapped bounded cache: hash key to slot, newest entry wins. Fixed\n   memory, no eviction logic, no unbounded growth. 
*)\ntype cache = {\n  cache_keys : string array;\n  cache_vals : word array;\n  cache_mask : int;\n}\n\nlet empty_word =\n  { sym_c = [||]; sym_prev = [||]; sym_next = [||]; sym_len = [||]; size = 0 }\n\nlet create_cache capacity =\n  (* Round up to power of 2 *)\n  let cap = ref 16 in\n  while !cap < capacity do\n    cap := !cap * 2\n  done;\n  {\n    cache_keys = Array.make !cap \"\";\n    cache_vals = Array.make !cap empty_word;\n    cache_mask = !cap - 1;\n  }\n\nlet[@inline] cache_find c key =\n  let h = Hashtbl.hash key land c.cache_mask in\n  if String.equal (Array.unsafe_get c.cache_keys h) key then\n    Array.unsafe_get c.cache_vals h\n  else empty_word\n\nlet[@inline] cache_add c key value =\n  let h = Hashtbl.hash key land c.cache_mask in\n  Array.unsafe_set c.cache_keys h key;\n  Array.unsafe_set c.cache_vals h value\n\ntype t = {\n  vocab : vocab;\n  vocab_r : string array;\n  merges : Merge_map.t;\n  cache : cache option;\n  dropout : float option;\n  unk_token : string option;\n  continuing_subword_prefix : string option;\n  end_of_word_suffix : string option;\n  fuse_unk : bool;\n  byte_fallback : bool;\n  ignore_merges : bool;\n  ascii_to_id : int array;\n  byte_fallback_ids : int array;\n  char_to_id : Merge_map.t;\n  prefixed_ascii_to_id : int array;\n  prefixed_char_to_id : Merge_map.t;\n  unk_id : int;\n  mutable work_word : word;\n  mutable work_queue : Merge_queue.t;\n  work_in_use : bool Atomic.t;\n}\n\nlet create_word capacity =\n  let cap = max 16 capacity in\n  {\n    sym_c = Array.make cap 0;\n    sym_prev = Array.make cap 0;\n    sym_next = Array.make cap 0;\n    sym_len = Array.make cap 0;\n    size = 0;\n  }\n\nlet ensure_word_capacity word capacity =\n  if Array.length word.sym_c >= capacity then begin\n    word.size <- 0;\n    word\n  end\n  else create_word capacity\n\nlet ensure_queue_capacity queue capacity =\n  let cap = max 16 capacity in\n  if Array.length queue.Merge_queue.keys >= cap then begin\n    queue.Merge_queue.size <- 
0;\n    queue\n  end\n  else Merge_queue.create cap\n\nlet[@inline] add_symbol word c byte_len =\n  let s = word.size in\n  let prev = if s > 0 then s - 1 else -1 in\n  Array.unsafe_set word.sym_c s c;\n  Array.unsafe_set word.sym_prev s prev;\n  Array.unsafe_set word.sym_next s (-1);\n  Array.unsafe_set word.sym_len s byte_len;\n  if prev >= 0 then Array.unsafe_set word.sym_next prev s;\n  word.size <- s + 1\n\nlet apply_merges model dropout word queue =\n  let p = match dropout with Some p -> p | None -> 0.0 in\n  let use_dropout = p > 0.0 in\n  let merges = model.merges in\n  let sym_c = word.sym_c in\n  let sym_prev = word.sym_prev in\n  let sym_next = word.sym_next in\n  let sym_len = word.sym_len in\n  for i = 0 to word.size - 2 do\n    let key =\n      merge_key (Array.unsafe_get sym_c i) (Array.unsafe_get sym_c (i + 1))\n    in\n    let packed = Merge_map.find merges key in\n    if packed >= 0 then\n      Merge_queue.push queue (merge_rank packed) i (merge_new_id packed)\n  done;\n  queue.skip_size <- 0;\n  while Merge_queue.pop queue do\n    let pkey = queue.pop_key in\n    let pos = pkey land 0x1FFFFF in\n    let new_id = queue.pop_new_id in\n    if Array.unsafe_get sym_len pos > 0 then begin\n      let next_pos = Array.unsafe_get sym_next pos in\n      if next_pos >= 0 then begin\n        let key =\n          merge_key\n            (Array.unsafe_get sym_c pos)\n            (Array.unsafe_get sym_c next_pos)\n        in\n        let packed = Merge_map.find merges key in\n        if packed >= 0 && merge_new_id packed = new_id then\n          begin if use_dropout && Random.float 1.0 < p then begin\n            let s = queue.skip_size in\n            if s = Array.length queue.skip_keys then begin\n              let new_cap = max 8 (s * 2) in\n              let grow old =\n                let a = Array.make new_cap 0 in\n                if s > 0 then Array.blit old 0 a 0 s;\n                a\n              in\n              queue.skip_keys <- grow 
queue.skip_keys;\n              queue.skip_new_ids <- grow queue.skip_new_ids\n            end;\n            Array.unsafe_set queue.skip_keys s pkey;\n            Array.unsafe_set queue.skip_new_ids s new_id;\n            queue.skip_size <- s + 1\n          end\n          else begin\n            for i = 0 to queue.skip_size - 1 do\n              Merge_queue.push queue\n                (Array.unsafe_get queue.skip_keys i lsr 21)\n                (Array.unsafe_get queue.skip_keys i land 0x1FFFFF)\n                (Array.unsafe_get queue.skip_new_ids i)\n            done;\n            queue.skip_size <- 0;\n            Array.unsafe_set sym_c pos new_id;\n            Array.unsafe_set sym_len pos\n              (Array.unsafe_get sym_len pos + Array.unsafe_get sym_len next_pos);\n            Array.unsafe_set sym_next pos (Array.unsafe_get sym_next next_pos);\n            Array.unsafe_set sym_len next_pos 0;\n            let new_next = Array.unsafe_get sym_next pos in\n            if new_next >= 0 then Array.unsafe_set sym_prev new_next pos;\n            let prev = Array.unsafe_get sym_prev pos in\n            if prev >= 0 then begin\n              let k =\n                merge_key\n                  (Array.unsafe_get sym_c prev)\n                  (Array.unsafe_get sym_c pos)\n              in\n              let v = Merge_map.find merges k in\n              if v >= 0 then\n                Merge_queue.push queue (merge_rank v) prev (merge_new_id v)\n            end;\n            let next = Array.unsafe_get sym_next pos in\n            if next >= 0 then begin\n              let k =\n                merge_key\n                  (Array.unsafe_get sym_c pos)\n                  (Array.unsafe_get sym_c next)\n              in\n              let v = Merge_map.find merges k in\n              if v >= 0 then\n                Merge_queue.push queue (merge_rank v) pos (merge_new_id v)\n            end\n          end\n          end\n      end\n    end\n  done;\n  (* Compact using 
linked-list traversal: O(N_final) instead of O(N_original) *)\n  let j = ref 0 in\n  let cur = ref 0 in\n  while !cur >= 0 do\n    if !j <> !cur then begin\n      Array.unsafe_set sym_c !j (Array.unsafe_get sym_c !cur);\n      Array.unsafe_set sym_len !j (Array.unsafe_get sym_len !cur)\n    end;\n    incr j;\n    cur := Array.unsafe_get sym_next !cur\n  done;\n  word.size <- !j\n\nlet utf8_byte_len_table =\n  Array.init 256 (fun b ->\n      if b land 0x80 = 0 then 1\n      else if b land 0xE0 = 0xC0 then 2\n      else if b land 0xF0 = 0xE0 then 3\n      else if b land 0xF8 = 0xF0 then 4\n      else 1)\n\nlet[@inline] utf8_byte_len b = Array.unsafe_get utf8_byte_len_table b\n\nlet[@inline] pack_char_key text pos byte_len =\n  let b0 = Char.code (String.unsafe_get text pos) in\n  match byte_len with\n  | 1 -> b0\n  | 2 -> (b0 lsl 8) lor Char.code (String.unsafe_get text (pos + 1))\n  | 3 ->\n      (b0 lsl 16)\n      lor (Char.code (String.unsafe_get text (pos + 1)) lsl 8)\n      lor Char.code (String.unsafe_get text (pos + 2))\n  | _ ->\n      (b0 lsl 24)\n      lor (Char.code (String.unsafe_get text (pos + 1)) lsl 16)\n      lor (Char.code (String.unsafe_get text (pos + 2)) lsl 8)\n      lor Char.code (String.unsafe_get text (pos + 3))\n\n(* Try emitting byte fallback tokens for [byte_len] bytes starting at [src]\n   offset [offset]. Returns true if all bytes had fallback IDs. 
*)\nlet try_byte_fallback model word flush_unk src offset byte_len =\n  let all_found = ref true in\n  for i = 0 to byte_len - 1 do\n    if\n      Array.unsafe_get model.byte_fallback_ids\n        (Char.code (String.unsafe_get src (offset + i)))\n      < 0\n    then all_found := false\n  done;\n  if !all_found then begin\n    flush_unk ();\n    for i = 0 to byte_len - 1 do\n      add_symbol word\n        (Array.unsafe_get model.byte_fallback_ids\n           (Char.code (String.unsafe_get src (offset + i))))\n        1\n    done;\n    true\n  end\n  else false\n\n(* No prefix/suffix — avoids all per-character string allocation for ASCII via\n   pre-computed lookup tables. *)\nlet init_word_fast model word text text_len =\n  let pos = ref 0 in\n  let pending_unk_id = ref (-1) in\n  let pending_unk_len = ref 0 in\n  let flush_unk () =\n    if !pending_unk_id >= 0 then begin\n      add_symbol word !pending_unk_id !pending_unk_len;\n      pending_unk_id := -1;\n      pending_unk_len := 0\n    end\n  in\n  let handle_unk byte_len =\n    if model.unk_id >= 0 then\n      begin if model.fuse_unk then\n        begin if !pending_unk_id >= 0 then\n          pending_unk_len := !pending_unk_len + byte_len\n        else begin\n          pending_unk_id := model.unk_id;\n          pending_unk_len := byte_len\n        end\n        end\n      else begin\n        flush_unk ();\n        add_symbol word model.unk_id byte_len\n      end\n      end\n  in\n  while !pos < text_len do\n    let b = Char.code (String.unsafe_get text !pos) in\n    if b < 128 then begin\n      let id = Array.unsafe_get model.ascii_to_id b in\n      if id >= 0 then begin\n        flush_unk ();\n        add_symbol word id 1\n      end\n      else if model.byte_fallback then begin\n        let fbid = Array.unsafe_get model.byte_fallback_ids b in\n        if fbid >= 0 then begin\n          flush_unk ();\n          add_symbol word fbid 1\n        end\n        else handle_unk 1\n      end\n      else handle_unk 1;\n    
  incr pos\n    end\n    else begin\n      let byte_len = utf8_byte_len b in\n      let key = pack_char_key text !pos byte_len in\n      let id = Merge_map.find model.char_to_id key in\n      if id >= 0 then begin\n        flush_unk ();\n        add_symbol word id byte_len\n      end\n      else if model.byte_fallback then\n        begin if not (try_byte_fallback model word flush_unk text !pos byte_len)\n        then handle_unk byte_len\n        end\n      else handle_unk byte_len;\n      pos := !pos + byte_len\n    end\n  done;\n  flush_unk ()\n\n(* Models with continuing_subword_prefix or end_of_word_suffix *)\nlet init_word_slow model word text text_len =\n  let pos = ref 0 in\n  let pending_unk_id = ref (-1) in\n  let pending_unk_len = ref 0 in\n  let flush_unk () =\n    if !pending_unk_id >= 0 then begin\n      add_symbol word !pending_unk_id !pending_unk_len;\n      pending_unk_id := -1;\n      pending_unk_len := 0\n    end\n  in\n  let handle_unk byte_len =\n    if model.unk_id >= 0 then\n      begin if model.fuse_unk then\n        begin if !pending_unk_id >= 0 then\n          pending_unk_len := !pending_unk_len + byte_len\n        else begin\n          pending_unk_id := model.unk_id;\n          pending_unk_len := byte_len\n        end\n        end\n      else begin\n        flush_unk ();\n        add_symbol word model.unk_id byte_len\n      end\n      end\n  in\n  let has_prefix = model.continuing_subword_prefix <> None in\n  let has_suffix = model.end_of_word_suffix <> None in\n  while !pos < text_len do\n    let b = Char.code (String.unsafe_get text !pos) in\n    let byte_len = utf8_byte_len b in\n    if b land 0xC0 = 0x80 then pos := !pos + 1\n    else begin\n      let start = !pos in\n      let is_first = start = 0 in\n      let is_last = !pos + byte_len >= text_len in\n      pos := !pos + byte_len;\n      (* Suffix only applies at word boundaries (first-not-last or\n         last-not-first), never to middle chars and never to single-char\n         
words *)\n      let needs_string = has_suffix && is_first <> is_last in\n      if needs_string then begin\n        (* Slow path: suffix involved, at most 2x per word *)\n        let char_str = String.sub text start byte_len in\n        let token_str =\n          match\n            ( is_first,\n              is_last,\n              model.continuing_subword_prefix,\n              model.end_of_word_suffix )\n          with\n          | true, false, _, Some suffix -> char_str ^ suffix\n          | true, false, _, None -> char_str\n          | false, true, Some prefix, Some suffix -> prefix ^ char_str ^ suffix\n          | false, true, Some prefix, None -> prefix ^ char_str\n          | false, true, None, Some suffix -> char_str ^ suffix\n          | false, true, None, None -> char_str\n          | _, _, _, _ -> char_str\n        in\n        match Hashtbl.find_opt model.vocab token_str with\n        | Some id ->\n            flush_unk ();\n            add_symbol word id byte_len\n        | None ->\n            if model.byte_fallback then\n              begin if\n                not (try_byte_fallback model word flush_unk text start byte_len)\n              then handle_unk byte_len\n              end\n            else handle_unk byte_len\n      end\n      else begin\n        (* Fast path: no suffix, use packed-int lookup (zero allocation) *)\n        let needs_prefix = has_prefix && not is_first in\n        let id =\n          if needs_prefix then\n            if b < 128 then Array.unsafe_get model.prefixed_ascii_to_id b\n            else\n              Merge_map.find model.prefixed_char_to_id\n                (pack_char_key text start byte_len)\n          else if b < 128 then Array.unsafe_get model.ascii_to_id b\n          else\n            Merge_map.find model.char_to_id (pack_char_key text start byte_len)\n        in\n        if id >= 0 then begin\n          flush_unk ();\n          add_symbol word id byte_len\n        end\n        else if model.byte_fallback then\n   
       begin if\n            not (try_byte_fallback model word flush_unk text start byte_len)\n          then handle_unk byte_len\n          end\n        else handle_unk byte_len\n      end\n    end\n  done;\n  flush_unk ()\n\nlet merge_word model text =\n  let text_len = String.length text in\n  let owned = Atomic.compare_and_set model.work_in_use false true in\n  let word, queue =\n    if owned then begin\n      let w = ensure_word_capacity model.work_word text_len in\n      model.work_word <- w;\n      let q = ensure_queue_capacity model.work_queue text_len in\n      model.work_queue <- q;\n      (w, q)\n    end\n    else (create_word text_len, Merge_queue.create text_len)\n  in\n  if model.continuing_subword_prefix = None && model.end_of_word_suffix = None\n  then init_word_fast model word text text_len\n  else init_word_slow model word text text_len;\n  apply_merges model model.dropout word queue;\n  if owned then begin\n    let n = word.size in\n    let sym_c = Array.make n 0 in\n    let sym_len = Array.make n 0 in\n    Array.blit word.sym_c 0 sym_c 0 n;\n    Array.blit word.sym_len 0 sym_len 0 n;\n    Atomic.set model.work_in_use false;\n    { sym_c; sym_prev = [||]; sym_next = [||]; sym_len; size = n }\n  end\n  else word\n\nlet word_to_tokens model word =\n  let offset = ref 0 in\n  List.init word.size (fun i ->\n      let id = Array.unsafe_get word.sym_c i in\n      let vr = model.vocab_r in\n      let value =\n        if id >= 0 && id < Array.length vr then Array.unsafe_get vr id\n        else \"<unk>\"\n      in\n      let start = !offset in\n      let end_ = start + Array.unsafe_get word.sym_len i in\n      offset := end_;\n      { id; value; offsets = (start, end_) })\n\nlet word_to_ids word =\n  Array.init word.size (fun i -> Array.unsafe_get word.sym_c i)\n\nlet word_to_encoding model word ~type_id =\n  let n = word.size in\n  let ids = Array.make n 0 in\n  let tokens = Array.make n \"\" in\n  let offsets = Array.make n (0, 0) in\n  let offset = ref 
0 in\n  for i = 0 to n - 1 do\n    let id = Array.unsafe_get word.sym_c i in\n    Array.unsafe_set ids i id;\n    let vr = model.vocab_r in\n    Array.unsafe_set tokens i\n      (if id >= 0 && id < Array.length vr then Array.unsafe_get vr id\n       else \"<unk>\");\n    let start = !offset in\n    let end_ = start + Array.unsafe_get word.sym_len i in\n    Array.unsafe_set offsets i (start, end_);\n    offset := end_\n  done;\n  Encoding.create ~ids ~type_ids:(Array.make n type_id) ~tokens\n    ~words:(Array.make n None) ~offsets ~special_tokens_mask:(Array.make n 0)\n    ~attention_mask:(Array.make n 1) ()\n\nlet get_word model text =\n  if model.ignore_merges then merge_word model text\n  else\n    match model.cache with\n    | Some cache when String.length text < 4096 ->\n        let cached = cache_find cache text in\n        if cached.size > 0 then cached\n        else\n          let word = merge_word model text in\n          cache_add cache text word;\n          word\n    | _ -> merge_word model text\n\nlet tokenize model text =\n  if String.length text = 0 then []\n  else\n    match Hashtbl.find_opt model.vocab text with\n    | Some id -> [ { id; value = text; offsets = (0, String.length text) } ]\n    | None -> word_to_tokens model (get_word model text)\n\nlet tokenize_ids model text =\n  if String.length text = 0 then [||]\n  else\n    match Hashtbl.find_opt model.vocab text with\n    | Some id -> [| id |]\n    | None -> word_to_ids (get_word model text)\n\nlet tokenize_encoding model text ~type_id =\n  if String.length text = 0 then Encoding.empty\n  else\n    match Hashtbl.find_opt model.vocab text with\n    | Some id ->\n        Encoding.token ~id ~token:text\n          ~offset:(0, String.length text)\n          ~type_id ~special:false\n    | None -> word_to_encoding model (get_word model text) ~type_id\n\nlet token_to_id model token = Hashtbl.find_opt model.vocab token\n\nlet id_to_token model id =\n  if id >= 0 && id < Array.length model.vocab_r then\n 
   Some (Array.unsafe_get model.vocab_r id)\n  else None\n\nlet get_vocab model = Hashtbl.fold (fun k v acc -> (k, v) :: acc) model.vocab []\nlet get_vocab_size model = Hashtbl.length model.vocab\nlet get_unk_token model = model.unk_token\nlet get_continuing_subword_prefix model = model.continuing_subword_prefix\nlet get_end_of_word_suffix model = model.end_of_word_suffix\n\nlet get_merges model =\n  Merge_map.fold\n    (fun key packed acc ->\n      let a_id = key lsr 21 in\n      let b_id = key land 0x1FFFFF in\n      let rank = merge_rank packed in\n      let vr = model.vocab_r in\n      let vr_len = Array.length vr in\n      if a_id >= 0 && a_id < vr_len && b_id >= 0 && b_id < vr_len then\n        (rank, (Array.unsafe_get vr a_id, Array.unsafe_get vr b_id)) :: acc\n      else acc)\n    model.merges []\n  |> List.sort (fun (r1, _) (r2, _) -> Int.compare r1 r2)\n  |> List.map snd\n\nlet convert_merges_to_merge_map vocab merges continuing_subword_prefix =\n  let csp_str =\n    match continuing_subword_prefix with Some p -> p | None -> \"\"\n  in\n  let csp_len = String.length csp_str in\n  List.mapi\n    (fun rank (a, b) ->\n      match (Hashtbl.find_opt vocab a, Hashtbl.find_opt vocab b) with\n      | Some a_id, Some b_id -> (\n          let alen = String.length a in\n          let blen = String.length b in\n          let new_token =\n            if\n              csp_len > 0 && blen > csp_len\n              && String.starts_with ~prefix:csp_str b\n            then (\n              let brest = blen - csp_len in\n              let s = Bytes.create (alen + brest) in\n              Bytes.blit_string a 0 s 0 alen;\n              Bytes.blit_string b csp_len s alen brest;\n              Bytes.unsafe_to_string s)\n            else\n              let s = Bytes.create (alen + blen) in\n              Bytes.blit_string a 0 s 0 alen;\n              Bytes.blit_string b 0 s alen blen;\n              Bytes.unsafe_to_string s\n          in\n          match Hashtbl.find_opt vocab 
new_token with\n          | Some new_id -> Some ((a_id, b_id), pack_merge rank new_id)\n          | None ->\n              failwith\n                (Printf.sprintf \"Merge token '%s' not in vocabulary\" new_token))\n      | _ ->\n          failwith\n            (Printf.sprintf \"Merge tokens ('%s', '%s') not in vocabulary\" a b))\n    merges\n  |> List.filter_map Fun.id\n  |> fun entries ->\n  Merge_map.create\n    (List.map\n       (fun ((a_id, b_id), packed) -> (merge_key a_id b_id, packed))\n       entries)\n\nlet create ~vocab ~merges ?(cache_capacity = 10000) ?dropout ?unk_token\n    ?continuing_subword_prefix ?end_of_word_suffix ?(fuse_unk = false)\n    ?(byte_fallback = false) ?(ignore_merges = false) () : t =\n  let max_id = Hashtbl.fold (fun _ id acc -> max id acc) vocab (-1) in\n  let vocab_r = Array.make (max_id + 1) \"\" in\n  Hashtbl.iter (fun k v -> Array.unsafe_set vocab_r v k) vocab;\n  let cache =\n    if cache_capacity = 0 then None else Some (create_cache cache_capacity)\n  in\n  let merges =\n    convert_merges_to_merge_map vocab merges continuing_subword_prefix\n  in\n  let ascii_to_id = Array.make 128 (-1) in\n  for i = 0 to 127 do\n    let s = String.make 1 (Char.chr i) in\n    match Hashtbl.find_opt vocab s with\n    | Some id -> ascii_to_id.(i) <- id\n    | None -> ()\n  done;\n  let byte_fallback_ids = Array.make 256 (-1) in\n  for i = 0 to 255 do\n    let hex = Printf.sprintf \"<0x%02X>\" i in\n    match Hashtbl.find_opt vocab hex with\n    | Some id -> byte_fallback_ids.(i) <- id\n    | None -> ()\n  done;\n  (* Build packed-int char lookup table for zero-allocation multi-byte lookup *)\n  let char_entries = ref [] in\n  Hashtbl.iter\n    (fun key id ->\n      let len = String.length key in\n      if len >= 1 && len <= 4 then begin\n        let b0 = Char.code (String.unsafe_get key 0) in\n        let expected_len = utf8_byte_len b0 in\n        if expected_len = len then\n          let packed =\n            match len with\n            | 
1 -> b0\n            | 2 -> (b0 lsl 8) lor Char.code (String.unsafe_get key 1)\n            | 3 ->\n                (b0 lsl 16)\n                lor (Char.code (String.unsafe_get key 1) lsl 8)\n                lor Char.code (String.unsafe_get key 2)\n            | _ ->\n                (b0 lsl 24)\n                lor (Char.code (String.unsafe_get key 1) lsl 16)\n                lor (Char.code (String.unsafe_get key 2) lsl 8)\n                lor Char.code (String.unsafe_get key 3)\n          in\n          char_entries := (packed, id) :: !char_entries\n      end)\n    vocab;\n  let char_to_id = Merge_map.create !char_entries in\n  (* Build prefixed char lookup tables for zero-allocation init_word_slow *)\n  let prefixed_ascii_to_id = Array.make 128 (-1) in\n  let prefixed_char_entries = ref [] in\n  (match continuing_subword_prefix with\n  | Some prefix ->\n      for i = 0 to 127 do\n        let s = prefix ^ String.make 1 (Char.chr i) in\n        match Hashtbl.find_opt vocab s with\n        | Some id -> prefixed_ascii_to_id.(i) <- id\n        | None -> ()\n      done;\n      Hashtbl.iter\n        (fun key id ->\n          let plen = String.length prefix in\n          let klen = String.length key in\n          if klen > plen && String.sub key 0 plen = prefix then begin\n            let rest_len = klen - plen in\n            if rest_len >= 2 && rest_len <= 4 then begin\n              let b0 = Char.code (String.unsafe_get key plen) in\n              let expected = utf8_byte_len b0 in\n              if expected = rest_len then\n                let packed = pack_char_key key plen rest_len in\n                prefixed_char_entries := (packed, id) :: !prefixed_char_entries\n            end\n          end)\n        vocab\n  | None -> ());\n  let prefixed_char_to_id = Merge_map.create !prefixed_char_entries in\n  let unk_id =\n    match unk_token with\n    | Some unk -> (\n        match Hashtbl.find_opt vocab unk with Some id -> id | None -> -1)\n    | None -> -1\n  in\n  
{\n    vocab;\n    vocab_r;\n    merges;\n    cache;\n    dropout;\n    unk_token;\n    continuing_subword_prefix;\n    end_of_word_suffix;\n    fuse_unk;\n    byte_fallback;\n    ignore_merges;\n    ascii_to_id;\n    byte_fallback_ids;\n    char_to_id;\n    prefixed_ascii_to_id;\n    prefixed_char_to_id;\n    unk_id;\n    work_word = create_word 16;\n    work_queue = Merge_queue.create 16;\n    work_in_use = Atomic.make false;\n  }\n\nlet json_of_string s =\n  match Jsont_bytesrw.decode_string Jsont.json s with\n  | Ok v -> v\n  | Error e -> failwith e\n\nlet json_to_string j =\n  match Jsont_bytesrw.encode_string ~format:Jsont.Minify Jsont.json j with\n  | Ok s -> s\n  | Error e -> failwith e\n\nlet read_files ~vocab_file ~merges_file =\n  let vocab_json =\n    let ic = open_in vocab_file in\n    let content =\n      Fun.protect\n        ~finally:(fun () -> close_in ic)\n        (fun () -> really_input_string ic (in_channel_length ic))\n    in\n    json_of_string content\n  in\n  let vocab = Hashtbl.create 1024 in\n  (match vocab_json with\n  | Jsont.Object (mems, _) ->\n      List.iter\n        (fun ((k, _), v) ->\n          match v with\n          | Jsont.Number (f, _) -> Hashtbl.add vocab k (int_of_float f)\n          | _ -> failwith \"Invalid vocab format\")\n        mems\n  | _ -> failwith \"Invalid vocab.json format\");\n  let merges =\n    let ic = open_in merges_file in\n    Fun.protect\n      ~finally:(fun () -> close_in ic)\n      (fun () ->\n        let merges = ref [] in\n        (try\n           while true do\n             let line = input_line ic in\n             (* Skip empty lines and comment lines that start with #version *)\n             if\n               String.length line > 0\n               && not (String.starts_with ~prefix:\"#version\" line)\n             then\n               match String.split_on_char ' ' line with\n               | [ a; b ] -> merges := (a, b) :: !merges\n               | _ -> failwith (Printf.sprintf \"Invalid merge 
line: %s\" line)\n           done\n         with End_of_file -> ());\n        List.rev !merges)\n  in\n  (vocab, merges)\n\nlet from_files ~vocab_file ~merges_file =\n  let vocab, merges = read_files ~vocab_file ~merges_file in\n  create ~vocab ~merges ()\n\nlet save model ~path ?name () =\n  let vocab_file =\n    match name with\n    | Some n -> Filename.concat path (Printf.sprintf \"%s-vocab.json\" n)\n    | None -> Filename.concat path \"vocab.json\"\n  in\n  let merges_file =\n    match name with\n    | Some n -> Filename.concat path (Printf.sprintf \"%s-merges.txt\" n)\n    | None -> Filename.concat path \"merges.txt\"\n  in\n  let vocab_items =\n    Hashtbl.fold (fun k v acc -> (k, v) :: acc) model.vocab []\n    |> List.sort (fun (_, a) (_, b) -> compare a b)\n  in\n  let vocab_json =\n    Jsont.Json.object'\n      (List.map\n         (fun (k, v) -> (Jsont.Json.name k, Jsont.Json.int v))\n         vocab_items)\n  in\n  let oc = open_out vocab_file in\n  Fun.protect\n    ~finally:(fun () -> close_out oc)\n    (fun () -> output_string oc (json_to_string vocab_json));\n  let oc = open_out merges_file in\n  Fun.protect\n    ~finally:(fun () -> close_out oc)\n    (fun () ->\n      output_string oc \"#version: 0.2\\n\";\n      let merges_list =\n        Merge_map.fold\n          (fun key packed acc ->\n            let a_id = key lsr 21 in\n            let b_id = key land 0x1FFFFF in\n            let rank = merge_rank packed in\n            let vr = model.vocab_r in\n            let vr_len = Array.length vr in\n            if a_id >= 0 && a_id < vr_len && b_id >= 0 && b_id < vr_len then\n              (rank, Array.unsafe_get vr a_id, Array.unsafe_get vr b_id) :: acc\n            else acc)\n          model.merges []\n        |> List.sort (fun (r1, _, _) (r2, _, _) -> compare r1 r2)\n      in\n      List.iter (fun (_, a, b) -> Printf.fprintf oc \"%s %s\\n\" a b) merges_list)\n\nlet train ~min_frequency ~vocab_size ~show_progress ~special_tokens\n    ~limit_alphabet 
~initial_alphabet ~continuing_subword_prefix\n    ~end_of_word_suffix ~max_token_length texts existing =\n  let _ = (show_progress, existing) in\n\n  (* Count words from texts *)\n  let word_counts = Hashtbl.create 10000 in\n  List.iter\n    (fun text ->\n      let words = String.split_on_char ' ' text in\n      List.iter\n        (fun word ->\n          if String.length word > 0 then\n            Hashtbl.replace word_counts word\n              (1 + try Hashtbl.find word_counts word with Not_found -> 0))\n        words)\n    texts;\n\n  let compute_pair_counts words_copy =\n    let pair_counts = Hashtbl.create 10000 in\n    Hashtbl.iter\n      (fun word count ->\n        let chars = String.split_on_char ' ' word in\n        for i = 0 to List.length chars - 2 do\n          let a = List.nth chars i in\n          let b = List.nth chars (i + 1) in\n          let pair = (a, b) in\n          Hashtbl.replace pair_counts pair\n            (count + try Hashtbl.find pair_counts pair with Not_found -> 0)\n        done)\n      words_copy;\n    pair_counts\n  in\n\n  (* Build vocabulary *)\n  let vocab = Hashtbl.create 10000 in\n  let vocab_size_ref = ref 0 in\n  List.iter\n    (fun token ->\n      if not (Hashtbl.mem vocab token) then (\n        Hashtbl.add vocab token !vocab_size_ref;\n        incr vocab_size_ref))\n    special_tokens;\n\n  (* Build alphabet *)\n  let alphabet = Hashtbl.create 10000 in\n  Hashtbl.iter\n    (fun word count ->\n      let len = String.length word in\n      let buf = Buffer.create 4 in\n      let rec loop i =\n        if i >= len then ()\n        else\n          let d = String.get_utf_8_uchar word i in\n          let n = Uchar.utf_decode_length d in\n          if Uchar.utf_decode_is_valid d then (\n            let u = Uchar.utf_decode_uchar d in\n            Buffer.clear buf;\n            Buffer.add_utf_8_uchar buf u;\n            let char_str = Buffer.contents buf in\n            Hashtbl.replace alphabet char_str\n              (count + try 
Hashtbl.find alphabet char_str with Not_found -> 0));\n          loop (i + n)\n      in\n      loop 0)\n    word_counts;\n\n  List.iter\n    (fun c ->\n      let char_str = String.make 1 c in\n      Hashtbl.replace alphabet char_str max_int)\n    initial_alphabet;\n\n  let kept = Hashtbl.fold (fun k v acc -> (k, v) :: acc) alphabet [] in\n  let kept = List.sort (fun (_, v1) (_, v2) -> compare v1 v2) kept in\n  let to_remove =\n    match limit_alphabet with\n    | Some limit -> max 0 (List.length kept - limit)\n    | None -> 0\n  in\n  let kept = list_drop to_remove kept in\n  let kept = List.sort (fun (k1, _) (k2, _) -> compare k1 k2) kept in\n  let csp_str =\n    match continuing_subword_prefix with Some p -> p | None -> \"\"\n  in\n  let csp_len = String.length csp_str in\n  List.iter\n    (fun (c, _) ->\n      if not (Hashtbl.mem vocab c) then (\n        Hashtbl.add vocab c !vocab_size_ref;\n        incr vocab_size_ref);\n      if csp_len > 0 then (\n        let clen = String.length c in\n        let s = Bytes.create (csp_len + clen) in\n        Bytes.blit_string csp_str 0 s 0 csp_len;\n        Bytes.blit_string c 0 s csp_len clen;\n        let prefixed = Bytes.unsafe_to_string s in\n        if not (Hashtbl.mem vocab prefixed) then (\n          Hashtbl.add vocab prefixed !vocab_size_ref;\n          incr vocab_size_ref)))\n    kept;\n\n  (* Learn merges *)\n  let merges = ref [] in\n  let words_copy = ref (Hashtbl.create (Hashtbl.length word_counts)) in\n  Hashtbl.iter\n    (fun word count ->\n      let len = String.length word in\n      let chars = ref [] in\n      let buf = Buffer.create 8 in\n      let is_first = ref true in\n      let rec loop i =\n        if i >= len then ()\n        else\n          let d = String.get_utf_8_uchar word i in\n          let n = Uchar.utf_decode_length d in\n          if Uchar.utf_decode_is_valid d then (\n            let u = Uchar.utf_decode_uchar d in\n            Buffer.clear buf;\n            if csp_len > 0 && not !is_first 
then Buffer.add_string buf csp_str;\n            Buffer.add_utf_8_uchar buf u;\n            is_first := false;\n            chars := Buffer.contents buf :: !chars);\n          loop (i + n)\n      in\n      loop 0;\n      let separated = String.concat \" \" (List.rev !chars) in\n      Hashtbl.add !words_copy separated count)\n    word_counts;\n\n  while !vocab_size_ref < vocab_size do\n    let pair_counts = compute_pair_counts !words_copy in\n    let best_pair = ref None in\n    let best_count = ref (-1) in\n    let best_pair_tie = ref (\"\", \"\") in\n    Hashtbl.iter\n      (fun pair count ->\n        if count > !best_count then (\n          best_count := count;\n          best_pair := Some pair;\n          best_pair_tie := pair)\n        else if count = !best_count then\n          (* Deterministic tie-break: among equal-count pairs, keep the\n             lexicographically smallest one as the selected pair. *)\n          if compare pair !best_pair_tie < 0 then (\n            best_pair_tie := pair;\n            best_pair := Some pair))\n      pair_counts;\n    match !best_pair with\n    | None -> vocab_size_ref := vocab_size\n    | Some (a, b) ->\n        if !best_count < min_frequency then vocab_size_ref := vocab_size\n        else\n          let blen = String.length b in\n          let new_token =\n            if\n              csp_len > 0 && blen > csp_len\n              && String.starts_with ~prefix:csp_str b\n            then (\n              let alen = String.length a in\n              let brest = blen - csp_len in\n              let s = Bytes.create (alen + brest) in\n              Bytes.blit_string a 0 s 0 alen;\n              Bytes.blit_string b csp_len s alen brest;\n              Bytes.unsafe_to_string s)\n            else a ^ b\n          in\n          let skip =\n            match max_token_length with\n            | Some l when String.length new_token > l -> true\n            | _ -> false\n          in\n          if not skip then (\n            if not (Hashtbl.mem vocab new_token) then (\n              Hashtbl.add vocab new_token !vocab_size_ref;\n              incr vocab_size_ref);\n            merges := (a, b) :: !merges;\n            let 
new_words = Hashtbl.create (Hashtbl.length !words_copy) in\n            let pat = a ^ \" \" ^ b in\n            let pat_len = String.length pat in\n            Hashtbl.iter\n              (fun word count ->\n                let wlen = String.length word in\n                if wlen < pat_len then Hashtbl.add new_words word count\n                else\n                  let buf = Buffer.create wlen in\n                  let pos = ref 0 in\n                  let changed = ref false in\n                  while !pos <= wlen - pat_len do\n                    let at_boundary =\n                      (!pos = 0\n                      || Char.equal (String.unsafe_get word (!pos - 1)) ' ')\n                      && (!pos + pat_len = wlen\n                         || Char.equal\n                              (String.unsafe_get word (!pos + pat_len))\n                              ' ')\n                    in\n                    if at_boundary && String.sub word !pos pat_len = pat then (\n                      Buffer.add_string buf new_token;\n                      pos := !pos + pat_len;\n                      changed := true)\n                    else (\n                      Buffer.add_char buf (String.unsafe_get word !pos);\n                      incr pos)\n                  done;\n                  if !changed then (\n                    Buffer.add_substring buf word !pos (wlen - !pos);\n                    Hashtbl.add new_words (Buffer.contents buf) count)\n                  else Hashtbl.add new_words word count)\n              !words_copy;\n            words_copy := new_words)\n  done;\n\n  let trained_model =\n    create ~vocab ~merges:(List.rev !merges) ?continuing_subword_prefix\n      ?end_of_word_suffix ()\n  in\n  (trained_model, special_tokens)\n"
  },
  {
    "path": "packages/brot/lib/bpe.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** BPE (Byte Pair Encoding) tokenization model.\n\n    {b Internal module.} Iteratively merges the most frequent adjacent character\n    pairs to build a subword vocabulary. Used by GPT-2, GPT-3, and RoBERTa.\n\n    A word is first split into characters, then merge rules are applied in\n    priority order (earlier rules have higher priority). Merging continues until\n    no more rules apply.\n\n    Tokenized words are cached in a direct-mapped bounded cache for amortized\n    performance. *)\n\ntype t\n(** The type for BPE models. Internally mutable due to the merge cache. *)\n\ntype vocab = (string, int) Hashtbl.t\n(** The type for vocabularies mapping token strings to IDs. *)\n\ntype merges = (string * string) list\n(** The type for merge rules in priority order (earlier rules have higher\n    priority). *)\n\n(** {1:creation Creation} *)\n\nval create :\n  vocab:vocab ->\n  merges:merges ->\n  ?cache_capacity:int ->\n  ?dropout:float ->\n  ?unk_token:string ->\n  ?continuing_subword_prefix:string ->\n  ?end_of_word_suffix:string ->\n  ?fuse_unk:bool ->\n  ?byte_fallback:bool ->\n  ?ignore_merges:bool ->\n  unit ->\n  t\n(** [create ~vocab ~merges ()] is a BPE model.\n\n    - [cache_capacity] is the number of slots in the direct-mapped word cache.\n      Defaults to [10000]. Set to [0] to disable caching. Words longer than 4096\n      bytes bypass the cache.\n    - [dropout] is the probability of randomly skipping a merge during\n      tokenization (BPE-dropout regularization). Defaults to [0.] (no dropout).\n    - [unk_token] is emitted for characters not in [vocab] (when\n      {!byte_fallback} is off). No default.\n    - [continuing_subword_prefix] is prepended to non-initial subwords. 
No\n      default.\n    - [end_of_word_suffix] is appended to the final subword of each word. No\n      default.\n    - [fuse_unk], when [true], merges consecutive unknown bytes into a single\n      [unk_token] instead of emitting one per byte. Defaults to [false].\n    - [byte_fallback], when [true], falls back to byte-level tokens (e.g.\n      [\"<0xFF>\"]) for characters not in [vocab] instead of emitting [unk_token].\n      Defaults to [false].\n    - [ignore_merges], when [true], matches whole words against [vocab] before\n      applying merges and bypasses the word cache. Defaults to [false]. *)\n\nval from_files : vocab_file:string -> merges_file:string -> t\n(** [from_files ~vocab_file ~merges_file] loads a BPE model from\n    HuggingFace-format files.\n\n    - [vocab_file] is a JSON object mapping token strings to integer IDs.\n    - [merges_file] is a text file with one space-separated merge pair per line.\n      An optional [#version:] header line is skipped. *)\n\n(** {1:tokenization Tokenization} *)\n\ntype token = { id : int; value : string; offsets : int * int }\n(** The type for tokens. [id] is the vocabulary index, [value] the string\n    content, and [offsets] the [(start, stop)] byte span in the source text. *)\n\nval tokenize : t -> string -> token list\n(** [tokenize t s] is the BPE tokenization of [s]. *)\n\nval tokenize_ids : t -> string -> int array\n(** [tokenize_ids t s] is like {!tokenize} but returns only token IDs. *)\n\nval tokenize_encoding : t -> string -> type_id:int -> Encoding.t\n(** [tokenize_encoding t s ~type_id] tokenizes [s] and builds an {!Encoding.t}\n    directly, avoiding intermediate list allocation. *)\n\n(** {1:vocabulary Vocabulary} *)\n\nval token_to_id : t -> string -> int option\n(** [token_to_id t tok] is the ID of [tok] in the vocabulary. *)\n\nval id_to_token : t -> int -> string option\n(** [id_to_token t id] is the token string for [id]. 
*)\n\nval get_vocab : t -> (string * int) list\n(** [get_vocab t] is the vocabulary as [(token, id)] pairs. *)\n\nval get_vocab_size : t -> int\n(** [get_vocab_size t] is the number of tokens in the vocabulary. *)\n\nval get_unk_token : t -> string option\n(** [get_unk_token t] is the unknown token, if configured. *)\n\nval get_continuing_subword_prefix : t -> string option\n(** [get_continuing_subword_prefix t] is the subword prefix, if configured (e.g.\n    [\"##\"]). *)\n\nval get_end_of_word_suffix : t -> string option\n(** [get_end_of_word_suffix t] is the word-end suffix, if configured (e.g.\n    [\"</w>\"]). *)\n\nval get_merges : t -> (string * string) list\n(** [get_merges t] is the merge rules in priority order. *)\n\n(** {1:serialization Serialization} *)\n\nval save : t -> path:string -> ?name:string -> unit -> unit\n(** [save t ~path ()] writes the model to [path] as two files:\n\n    - [vocab.json]: a JSON object mapping token strings to IDs.\n    - [merges.txt]: merge pairs, one per line, with a [#version: 0.2] header. 
*)\n\n(** {1:training Training} *)\n\nval train :\n  min_frequency:int ->\n  vocab_size:int ->\n  show_progress:bool ->\n  special_tokens:string list ->\n  limit_alphabet:int option ->\n  initial_alphabet:char list ->\n  continuing_subword_prefix:string option ->\n  end_of_word_suffix:string option ->\n  max_token_length:int option ->\n  string list ->\n  t option ->\n  t * string list\n(** [train ~min_frequency ~vocab_size ~show_progress ~special_tokens\n     ~limit_alphabet ~initial_alphabet ~continuing_subword_prefix\n     ~end_of_word_suffix ~max_token_length texts init] learns BPE merges from\n    [texts].\n\n    The algorithm counts word frequencies, builds an initial character alphabet,\n    then iteratively finds and merges the highest-frequency adjacent pair until\n    [vocab_size] is reached or pair frequency drops below [min_frequency].\n\n    - [min_frequency] is the minimum pair frequency to merge.\n    - [vocab_size] is the target vocabulary size.\n    - [show_progress] is accepted for interface compatibility and currently\n      unused.\n    - [special_tokens] are added to the vocabulary first.\n    - [limit_alphabet] caps the number of distinct initial characters kept.\n    - [initial_alphabet] seeds the character set.\n    - [continuing_subword_prefix] is set on the resulting model.\n    - [end_of_word_suffix] is set on the resulting model.\n    - [max_token_length] limits the byte length of merged tokens.\n    - [init] is accepted for interface compatibility and currently unused.\n\n    Returns [(model, special_tokens)]. *)\n"
  },
  {
    "path": "packages/brot/lib/brot.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nmodule Normalizer = Normalizer\nmodule Pre_tokenizer = Pre_tokenizer\nmodule Post_processor = Post_processor\nmodule Decoder = Decoder\nmodule Encoding = Encoding\n\nlet strf = Printf.sprintf\n\n(* Error messages *)\n\nlet err_pair_no_post = \"pair sequences require a configured post-processor\"\nlet err_no_pad_token = \"padding requested but no pad token configured\"\nlet err_pad_not_in_vocab tok = strf \"pad token '%s' not in vocabulary\" tok\nlet err_add_tokens = \"only supported for word-level tokenizers\"\nlet err_export_tiktoken = \"only supported for BPE models\"\nlet err_infer_type = \"unable to infer model type from JSON\"\n\n(* Types *)\n\ntype direction = [ `Left | `Right ]\n\ntype special = {\n  token : string;\n  single_word : bool;\n  lstrip : bool;\n  rstrip : bool;\n  normalized : bool;\n}\n\ntype pad_length = [ `Batch_longest | `Fixed of int | `To_multiple of int ]\n\ntype padding = {\n  length : pad_length;\n  direction : direction;\n  pad_id : int option;\n  pad_type_id : int option;\n  pad_token : string option;\n}\n\ntype truncation = { max_length : int; direction : direction }\ntype data = [ `Files of string list | `Seq of string Seq.t ]\ntype sequence = { text : string; pair : string option }\n\ntype algorithm =\n  | Alg_bpe of Bpe.t\n  | Alg_wordpiece of Wordpiece.t\n  | Alg_wordlevel of Word_level.t\n  | Alg_unigram of Unigram.t\n  | Alg_chars of Chars.t\n\ntype t = {\n  algorithm : algorithm;\n  normalizer : Normalizer.t option;\n  pre_tokenizer : Pre_tokenizer.t option;\n  post_processor : Post_processor.t option;\n  decoder : Decoder.t option;\n  specials : special list;\n  special_lookup : (string, unit) Hashtbl.t;\n  bos_token : string option;\n  eos_token : string 
option;\n  pad_token : string option;\n  pad_id : int option;\n  pad_type_id : int;\n  unk_token : string option;\n}\n\nlet special ?(single_word = false) ?(lstrip = false) ?(rstrip = false)\n    ?(normalized = false) token =\n  { token; single_word; lstrip; rstrip; normalized }\n\nlet padding ?(direction = `Right) ?pad_id ?pad_type_id ?pad_token length =\n  { length; direction; pad_id; pad_type_id; pad_token }\n\nlet truncation ?(direction = `Right) max_length = { max_length; direction }\n\n(* Algorithm dispatch *)\n\nlet alg_add_tokens algorithm tokens =\n  match algorithm with\n  | Alg_wordlevel model ->\n      ignore (Word_level.add_tokens model tokens);\n      algorithm\n  | Alg_bpe _ | Alg_wordpiece _ | Alg_unigram _ | Alg_chars _ -> algorithm\n\nlet alg_token_to_id algorithm token =\n  match algorithm with\n  | Alg_bpe m -> Bpe.token_to_id m token\n  | Alg_wordpiece m -> Wordpiece.token_to_id m token\n  | Alg_wordlevel m -> Word_level.token_to_id m token\n  | Alg_unigram m -> Unigram.token_to_id m token\n  | Alg_chars m -> Chars.token_to_id m token\n\nlet alg_id_to_token algorithm id =\n  match algorithm with\n  | Alg_bpe m -> Bpe.id_to_token m id\n  | Alg_wordpiece m -> Wordpiece.id_to_token m id\n  | Alg_wordlevel m -> Word_level.id_to_token m id\n  | Alg_unigram m -> Unigram.id_to_token m id\n  | Alg_chars m -> Chars.id_to_token m id\n\nlet alg_vocab algorithm =\n  match algorithm with\n  | Alg_bpe m -> Bpe.get_vocab m\n  | Alg_wordpiece m -> Wordpiece.get_vocab m\n  | Alg_wordlevel m -> Word_level.get_vocab m\n  | Alg_unigram m ->\n      Unigram.get_vocab m |> List.mapi (fun i (token, _) -> (token, i))\n  | Alg_chars m -> Chars.get_vocab m\n\nlet alg_vocab_size algorithm =\n  match algorithm with\n  | Alg_bpe m -> Bpe.get_vocab_size m\n  | Alg_wordpiece m -> Wordpiece.get_vocab_size m\n  | Alg_wordlevel m -> Word_level.get_vocab_size m\n  | Alg_unigram m -> Unigram.get_vocab_size m\n  | Alg_chars m -> Chars.get_vocab_size m\n\nlet alg_save algorithm 
~folder ?prefix () =\n  match algorithm with\n  | Alg_bpe m ->\n      Bpe.save m ~path:folder ?name:prefix ();\n      let name base ext =\n        match prefix with\n        | Some n -> Filename.concat folder (strf \"%s-%s.%s\" n base ext)\n        | None -> Filename.concat folder (strf \"%s.%s\" base ext)\n      in\n      [ name \"vocab\" \"json\"; name \"merges\" \"txt\" ]\n  | Alg_wordpiece m -> [ Wordpiece.save m ~path:folder ?name:prefix () ]\n  | Alg_wordlevel m -> Word_level.save m ~folder ()\n  | Alg_unigram m -> Unigram.save m ~folder ()\n  | Alg_chars m -> Chars.save m ~folder ()\n\nlet alg_tokenize algorithm text =\n  match algorithm with\n  | Alg_bpe m ->\n      Bpe.tokenize m text\n      |> List.map (fun (tok : Bpe.token) -> (tok.id, tok.value, tok.offsets))\n  | Alg_wordpiece m ->\n      Wordpiece.tokenize m text\n      |> List.map (fun (tok : Wordpiece.token) ->\n          (tok.id, tok.value, tok.offsets))\n  | Alg_wordlevel m -> Word_level.tokenize m text\n  | Alg_unigram m -> Unigram.tokenize m text\n  | Alg_chars m -> Chars.tokenize m text\n\nlet alg_tokenize_ids algorithm text =\n  match algorithm with\n  | Alg_bpe m -> Bpe.tokenize_ids m text\n  | Alg_wordpiece m -> Wordpiece.tokenize_ids m text\n  | Alg_wordlevel m -> Word_level.tokenize_ids m text\n  | Alg_unigram m ->\n      Unigram.tokenize m text\n      |> List.map (fun (id, _, _) -> id)\n      |> Array.of_list\n  | Alg_chars m ->\n      Chars.tokenize m text |> List.map (fun (id, _, _) -> id) |> Array.of_list\n\nlet alg_name = function\n  | Alg_bpe _ -> \"BPE\"\n  | Alg_wordpiece _ -> \"WordPiece\"\n  | Alg_wordlevel _ -> \"WordLevel\"\n  | Alg_unigram _ -> \"Unigram\"\n  | Alg_chars _ -> \"Chars\"\n\nlet vocab_to_hashtbl vocab =\n  let tbl = Hashtbl.create (List.length vocab) in\n  List.iter (fun (token, id) -> Hashtbl.add tbl token id) vocab;\n  tbl\n\n(* Special tokens *)\n\nlet dedup_by key items =\n  let seen = Hashtbl.create 16 in\n  let acc = ref [] in\n  List.iter\n    (fun item 
->\n      let k = key item in\n      if not (Hashtbl.mem seen k) then (\n        Hashtbl.replace seen k ();\n        acc := item :: !acc))\n    items;\n  List.rev !acc\n\nlet collect_unique_tokens specials ~bos_token ~eos_token ~pad_token ~unk_token =\n  let items =\n    List.map (fun (s : special) -> s.token) specials\n    @ List.filter_map Fun.id [ bos_token; eos_token; pad_token; unk_token ]\n  in\n  dedup_by Fun.id items\n\nlet build_special_lookup specials ~bos_token ~eos_token ~pad_token ~unk_token =\n  let tokens =\n    collect_unique_tokens specials ~bos_token ~eos_token ~pad_token ~unk_token\n  in\n  let table = Hashtbl.create (List.length tokens) in\n  List.iter (fun t -> Hashtbl.replace table t ()) tokens;\n  table\n\n(* Construction *)\n\nlet create ?normalizer ?pre ?post ?decoder ?(specials = []) ?bos_token\n    ?eos_token ?pad_token ?unk_token algorithm =\n  let all_tokens =\n    collect_unique_tokens specials ~bos_token ~eos_token ~pad_token ~unk_token\n  in\n  let algorithm = alg_add_tokens algorithm all_tokens in\n  let special_lookup =\n    build_special_lookup specials ~bos_token ~eos_token ~pad_token ~unk_token\n  in\n  let pad_id = Option.bind pad_token (alg_token_to_id algorithm) in\n  {\n    algorithm;\n    normalizer;\n    pre_tokenizer = pre;\n    post_processor = post;\n    decoder;\n    specials;\n    special_lookup;\n    bos_token;\n    eos_token;\n    pad_token;\n    pad_id;\n    pad_type_id = 0;\n    unk_token;\n  }\n\n(* Accessors *)\n\nlet normalizer t = t.normalizer\nlet pre_tokenizer t = t.pre_tokenizer\nlet post_processor t = t.post_processor\nlet decoder t = t.decoder\nlet specials t = t.specials\nlet bos_token t = t.bos_token\nlet eos_token t = t.eos_token\nlet pad_token t = t.pad_token\nlet unk_token t = t.unk_token\nlet vocab t = alg_vocab t.algorithm\nlet vocab_size t = alg_vocab_size t.algorithm\nlet token_to_id t token = alg_token_to_id t.algorithm token\nlet id_to_token t id = alg_id_to_token t.algorithm id\n\n(* Algorithm 
constructors *)\n\nlet bpe ?normalizer ?pre ?post ?decoder ?specials ?bos_token ?eos_token\n    ?pad_token ?unk_token ?vocab ?merges ?cache_capacity ?dropout\n    ?continuing_subword_prefix ?end_of_word_suffix ?fuse_unk ?byte_fallback\n    ?ignore_merges () =\n  let vocab_tbl =\n    match vocab with None -> Hashtbl.create 100 | Some v -> vocab_to_hashtbl v\n  in\n  let algorithm =\n    Alg_bpe\n      (Bpe.create ~vocab:vocab_tbl\n         ~merges:(Option.value merges ~default:[])\n         ?cache_capacity ?dropout ?unk_token ?continuing_subword_prefix\n         ?end_of_word_suffix ?fuse_unk ?byte_fallback ?ignore_merges ())\n  in\n  create ?normalizer ?pre ?post ?decoder ?specials ?bos_token ?eos_token\n    ?pad_token ?unk_token algorithm\n\nlet wordpiece ?normalizer ?pre ?post ?decoder ?specials ?bos_token ?eos_token\n    ?pad_token ?unk_token ?vocab ?continuing_subword_prefix\n    ?max_input_chars_per_word () =\n  let vocab_tbl =\n    match vocab with None -> Hashtbl.create 100 | Some v -> vocab_to_hashtbl v\n  in\n  let algorithm =\n    Alg_wordpiece\n      (Wordpiece.create ~vocab:vocab_tbl ?unk_token ?continuing_subword_prefix\n         ?max_input_chars_per_word ())\n  in\n  create ?normalizer ?pre ?post ?decoder ?specials ?bos_token ?eos_token\n    ?pad_token ?unk_token algorithm\n\nlet word_level ?normalizer ?pre ?post ?decoder ?specials ?bos_token ?eos_token\n    ?pad_token ?unk_token ?vocab () =\n  let pre =\n    match pre with Some _ -> pre | None -> Some (Pre_tokenizer.whitespace ())\n  in\n  let algorithm = Alg_wordlevel (Word_level.create ?vocab ?unk_token ()) in\n  create ?normalizer ?pre ?post ?decoder ?specials ?bos_token ?eos_token\n    ?pad_token ?unk_token algorithm\n\nlet unigram ?normalizer ?pre ?post ?decoder ?specials ?bos_token ?eos_token\n    ?pad_token ?unk_token ?vocab () =\n  let algorithm =\n    Alg_unigram (Unigram.create (Option.value vocab ~default:[]))\n  in\n  create ?normalizer ?pre ?post ?decoder ?specials ?bos_token ?eos_token\n 
   ?pad_token ?unk_token algorithm\n\nlet chars ?normalizer ?pre ?post ?decoder ?specials ?bos_token ?eos_token\n    ?pad_token ?unk_token () =\n  let algorithm = Alg_chars (Chars.create ()) in\n  create ?normalizer ?pre ?post ?decoder ?specials ?bos_token ?eos_token\n    ?pad_token ?unk_token algorithm\n\nlet from_model_file ~vocab ?merges ?normalizer ?pre ?post ?decoder ?specials\n    ?bos_token ?eos_token ?pad_token ?unk_token () =\n  let algorithm =\n    match merges with\n    | Some merges_file ->\n        Alg_bpe (Bpe.from_files ~vocab_file:vocab ~merges_file)\n    | None -> Alg_wordpiece (Wordpiece.from_file ~vocab_file:vocab)\n  in\n  create ?normalizer ?pre ?post ?decoder ?specials ?bos_token ?eos_token\n    ?pad_token ?unk_token algorithm\n\nlet add_tokens t tokens =\n  match t.algorithm with\n  | Alg_wordlevel model ->\n      let vocab = Word_level.get_vocab model in\n      let new_model = Word_level.create ~vocab ?unk_token:t.unk_token () in\n      ignore (Word_level.add_tokens new_model tokens);\n      { t with algorithm = Alg_wordlevel new_model }\n  | Alg_bpe _ | Alg_wordpiece _ | Alg_unigram _ | Alg_chars _ ->\n      invalid_arg err_add_tokens\n\n(* Encoding *)\n\nlet encode_text t text =\n  let normalized =\n    match t.normalizer with Some n -> Normalizer.apply n text | None -> text\n  in\n  let pre_tokens =\n    match t.pre_tokenizer with\n    | Some pre -> Pre_tokenizer.pre_tokenize pre normalized\n    | None -> [ (normalized, (0, String.length normalized)) ]\n  in\n  match (t.algorithm, pre_tokens) with\n  | Alg_bpe m, [ (fragment, _) ] -> Bpe.tokenize_encoding m fragment ~type_id:0\n  | Alg_wordpiece m, _ ->\n      Wordpiece.tokenize_spans_encoding m pre_tokens ~type_id:0\n  | _ ->\n      pre_tokens\n      |> List.concat_map (fun (fragment, _) ->\n          alg_tokenize t.algorithm fragment)\n      |> Encoding.from_tokens ~type_id:0\n\nlet post_process t ~add_special primary pair =\n  match t.post_processor with\n  | None ->\n      if 
Option.is_some pair then invalid_arg err_pair_no_post else primary\n  | Some processor ->\n      Post_processor.process processor ?pair primary\n        ~add_special_tokens:add_special\n\nlet encode_single t ~add_special_tokens ~truncation seq =\n  let primary = encode_text t seq.text in\n  let pair = Option.map (encode_text t) seq.pair in\n  let processed = post_process t ~add_special:add_special_tokens primary pair in\n  match truncation with\n  | None -> processed\n  | Some { max_length; direction } ->\n      Encoding.truncate processed ~max_length ~stride:0 ~direction\n\n(* Padding *)\n\nlet resolve_pad t (cfg : padding) =\n  let token =\n    match cfg.pad_token with Some _ as v -> v | None -> t.pad_token\n  in\n  let token =\n    match token with\n    | Some token -> token\n    | None -> invalid_arg err_no_pad_token\n  in\n  let id = match cfg.pad_id with Some _ as v -> v | None -> t.pad_id in\n  let id =\n    match id with\n    | Some id -> id\n    | None -> (\n        match alg_token_to_id t.algorithm token with\n        | Some id -> id\n        | None -> invalid_arg (err_pad_not_in_vocab token))\n  in\n  let type_id = Option.value cfg.pad_type_id ~default:t.pad_type_id in\n  (token, id, type_id)\n\nlet round_up_to_multiple n m = if n mod m = 0 then n else (n + m - 1) / m * m\n\nlet apply_padding t encodings = function\n  | None -> encodings\n  | Some cfg -> (\n      let pad_token, pad_id, pad_type_id = resolve_pad t cfg in\n      let direction = cfg.direction in\n      let pad enc target =\n        if Encoding.length enc >= target then enc\n        else\n          Encoding.pad enc ~target_length:target ~pad_id ~pad_type_id ~pad_token\n            ~direction\n      in\n      match cfg.length with\n      | `Fixed n -> List.map (fun enc -> pad enc n) encodings\n      | `Batch_longest ->\n          let max_len =\n            List.fold_left\n              (fun acc enc -> max acc (Encoding.length enc))\n              0 encodings\n          in\n          List.map 
(fun enc -> pad enc max_len) encodings\n      | `To_multiple m ->\n          if m <= 0 then encodings\n          else\n            List.map\n              (fun enc ->\n                pad enc (round_up_to_multiple (Encoding.length enc) m))\n              encodings)\n\n(* Parallel batch encoding *)\n\nlet encode_parallel t sequences ~add_special_tokens ~truncation =\n  let arr = Array.of_list sequences in\n  let n = Array.length arr in\n  let results =\n    Array.make n (encode_single t ~add_special_tokens ~truncation arr.(0))\n  in\n  let num_domains = min n (Domain.recommended_domain_count ()) in\n  if num_domains <= 1 then\n    for i = 1 to n - 1 do\n      results.(i) <- encode_single t ~add_special_tokens ~truncation arr.(i)\n    done\n  else begin\n    let chunk_size = n / num_domains in\n    let remainder = n mod num_domains in\n    let domains =\n      Array.init (num_domains - 1) (fun d ->\n          let start = ((d + 1) * chunk_size) + min (d + 1) remainder in\n          let len = chunk_size + if d + 1 < remainder then 1 else 0 in\n          Domain.spawn (fun () ->\n              for i = start to start + len - 1 do\n                results.(i) <-\n                  encode_single t ~add_special_tokens ~truncation arr.(i)\n              done))\n    in\n    let main_len = chunk_size + if 0 < remainder then 1 else 0 in\n    for i = 1 to main_len - 1 do\n      results.(i) <- encode_single t ~add_special_tokens ~truncation arr.(i)\n    done;\n    Array.iter Domain.join domains\n  end;\n  Array.to_list results\n\nlet encode_sequences t sequences ~add_special_tokens ~padding ~truncation =\n  let n = List.length sequences in\n  let raw =\n    if n >= 4 then encode_parallel t sequences ~add_special_tokens ~truncation\n    else List.map (encode_single t ~add_special_tokens ~truncation) sequences\n  in\n  apply_padding t raw padding\n\nlet encode t ?pair ?(add_special_tokens = true) ?padding ?truncation text =\n  match\n    encode_sequences t\n      [ { text; pair } 
]\n      ~add_special_tokens ~padding ~truncation\n  with\n  | [ encoding ] -> encoding\n  | _ -> assert false\n\nlet encode_batch t ?(add_special_tokens = true) ?padding ?truncation = function\n  | [] -> []\n  | texts ->\n      let sequences = List.map (fun text -> { text; pair = None }) texts in\n      encode_sequences t sequences ~add_special_tokens ~padding ~truncation\n\nlet encode_pairs_batch t ?(add_special_tokens = true) ?padding ?truncation =\n  function\n  | [] -> []\n  | pairs ->\n      let sequences =\n        List.map (fun (text, pair) -> { text; pair = Some pair }) pairs\n      in\n      encode_sequences t sequences ~add_special_tokens ~padding ~truncation\n\nlet encode_ids t ?pair ?add_special_tokens ?padding ?truncation text =\n  let use_fast_path =\n    Option.is_none pair\n    && (add_special_tokens = None || add_special_tokens = Some false)\n    && Option.is_none padding && Option.is_none truncation\n    && Option.is_none t.post_processor\n  in\n  if not use_fast_path then\n    Encoding.ids (encode t ?pair ?add_special_tokens ?padding ?truncation text)\n  else\n    let normalized =\n      match t.normalizer with Some n -> Normalizer.apply n text | None -> text\n    in\n    let pre_tokens =\n      match t.pre_tokenizer with\n      | Some pre -> Pre_tokenizer.pre_tokenize pre normalized\n      | None -> [ (normalized, (0, String.length normalized)) ]\n    in\n    let id_arrays =\n      List.map\n        (fun (fragment, _) -> alg_tokenize_ids t.algorithm fragment)\n        pre_tokens\n    in\n    let total_len =\n      List.fold_left (fun acc a -> acc + Array.length a) 0 id_arrays\n    in\n    let result = Array.make total_len 0 in\n    let pos = ref 0 in\n    List.iter\n      (fun a ->\n        let len = Array.length a in\n        Array.blit a 0 result !pos len;\n        pos := !pos + len)\n      id_arrays;\n    result\n\n(* Decoding *)\n\nlet decode t ?(skip_special_tokens = false) ids =\n  let tokens =\n    Array.to_list ids\n    |> 
List.filter_map (fun id ->\n        match alg_id_to_token t.algorithm id with\n        | None -> None\n        | Some token\n          when skip_special_tokens && Hashtbl.mem t.special_lookup token ->\n            None\n        | Some token -> Some token)\n  in\n  match t.decoder with\n  | Some decoder -> Decoder.decode decoder tokens\n  | None -> (\n      match t.algorithm with\n      | Alg_wordlevel _ -> String.concat \" \" tokens\n      | _ -> String.concat \"\" tokens)\n\nlet decode_batch t ?(skip_special_tokens = false) id_lists =\n  List.map (decode t ~skip_special_tokens) id_lists\n\n(* Training *)\n\nlet special_tokens_for_training init specials =\n  let items =\n    (match specials with\n      | Some sl -> List.map (fun (s : special) -> s.token) sl\n      | None -> [])\n    @\n    match init with\n    | Some tok -> List.map (fun (s : special) -> s.token) tok.specials\n    | None -> []\n  in\n  dedup_by Fun.id items\n\nlet merge_specials_from_training ~user_specials ~trained_tokens =\n  let items =\n    (match user_specials with Some sl -> sl | None -> [])\n    @ List.map special trained_tokens\n  in\n  dedup_by (fun (s : special) -> s.token) items\n\nlet data_to_strings = function\n  | `Files files ->\n      let lines = ref [] in\n      List.iter\n        (fun file ->\n          let ic = open_in file in\n          (try\n             while true do\n               lines := input_line ic :: !lines\n             done\n           with End_of_file -> ());\n          close_in ic)\n        files;\n      List.rev !lines\n  | `Seq seq -> List.of_seq seq\n\nlet initial_alphabet_of strs =\n  (* First byte of each seed string; empty strings are skipped rather than\n     mapped to a stray space. *)\n  List.filter_map\n    (fun s -> if String.length s > 0 then Some s.[0] else None)\n    strs\n\nlet train_bpe ?init ?normalizer ?pre ?post ?decoder ?specials ?bos_token\n    ?eos_token ?pad_token ?unk_token ?(vocab_size = 30000) ?(min_frequency = 0)\n    ?limit_alphabet ?initial_alphabet ?continuing_subword_prefix\n    ?end_of_word_suffix ?(show_progress = true) ?max_token_length data =\n  let 
special_tokens = special_tokens_for_training init specials in\n  let initial_alphabet =\n    Option.value initial_alphabet ~default:[] |> initial_alphabet_of\n  in\n  let limit_alphabet = Some (Option.value limit_alphabet ~default:1000) in\n  let texts = data_to_strings data in\n  let existing_bpe =\n    Option.bind init (fun t ->\n        match t.algorithm with Alg_bpe m -> Some m | _ -> None)\n  in\n  let trained_model, result_specials =\n    Bpe.train ~min_frequency ~vocab_size ~show_progress ~special_tokens\n      ~limit_alphabet ~initial_alphabet ~continuing_subword_prefix\n      ~end_of_word_suffix ~max_token_length texts existing_bpe\n  in\n  let all_specials =\n    merge_specials_from_training ~user_specials:specials\n      ~trained_tokens:result_specials\n  in\n  create ?normalizer ?pre ?post ?decoder ~specials:all_specials ?bos_token\n    ?eos_token ?pad_token ?unk_token (Alg_bpe trained_model)\n\nlet train_wordpiece ?init ?normalizer ?pre ?post ?decoder ?specials ?bos_token\n    ?eos_token ?pad_token ?unk_token ?(vocab_size = 30000) ?(min_frequency = 0)\n    ?limit_alphabet ?initial_alphabet ?(continuing_subword_prefix = \"##\")\n    ?end_of_word_suffix ?(show_progress = true) data =\n  let special_tokens = special_tokens_for_training init specials in\n  let initial_alphabet =\n    Option.value initial_alphabet ~default:[] |> initial_alphabet_of\n  in\n  let limit_alphabet = Some (Option.value limit_alphabet ~default:1000) in\n  let texts = data_to_strings data in\n  let existing_wp =\n    Option.bind init (fun t ->\n        match t.algorithm with Alg_wordpiece m -> Some m | _ -> None)\n  in\n  let trained_model, result_specials =\n    Wordpiece.train ~min_frequency ~vocab_size ~show_progress ~special_tokens\n      ~limit_alphabet ~initial_alphabet ~continuing_subword_prefix\n      ~end_of_word_suffix texts existing_wp\n  in\n  let all_specials =\n    merge_specials_from_training ~user_specials:specials\n      ~trained_tokens:result_specials\n  in\n  
create ?normalizer ?pre ?post ?decoder ~specials:all_specials ?bos_token\n    ?eos_token ?pad_token ?unk_token (Alg_wordpiece trained_model)\n\nlet train_wordlevel ?init ?normalizer ?pre ?post ?decoder ?specials ?bos_token\n    ?eos_token ?pad_token ?unk_token ?(vocab_size = 30000) ?(min_frequency = 0)\n    ?(show_progress = true) data =\n  let special_tokens = special_tokens_for_training init specials in\n  let texts = data_to_strings data in\n  let existing_wl =\n    Option.bind init (fun t ->\n        match t.algorithm with Alg_wordlevel m -> Some m | _ -> None)\n  in\n  let trained_model, result_specials =\n    Word_level.train ~vocab_size ~min_frequency ~show_progress ~special_tokens\n      texts existing_wl\n  in\n  let all_specials =\n    merge_specials_from_training ~user_specials:specials\n      ~trained_tokens:result_specials\n  in\n  create ?normalizer ?pre ?post ?decoder ~specials:all_specials ?bos_token\n    ?eos_token ?pad_token ?unk_token (Alg_wordlevel trained_model)\n\nlet train_unigram ?init ?normalizer ?pre ?post ?decoder ?specials ?bos_token\n    ?eos_token ?pad_token ?unk_token ?(vocab_size = 8000)\n    ?(show_progress = true) ?(shrinking_factor = 0.75) ?(max_piece_length = 16)\n    ?(n_sub_iterations = 2) data =\n  let special_tokens = special_tokens_for_training init specials in\n  let texts = data_to_strings data in\n  let existing_ug =\n    Option.bind init (fun t ->\n        match t.algorithm with Alg_unigram m -> Some m | _ -> None)\n  in\n  let trained_model, result_specials =\n    Unigram.train ~vocab_size ~show_progress ~special_tokens ~shrinking_factor\n      ~unk_token ~max_piece_length ~n_sub_iterations texts existing_ug\n  in\n  let all_specials =\n    merge_specials_from_training ~user_specials:specials\n      ~trained_tokens:result_specials\n  in\n  create ?normalizer ?pre ?post ?decoder ~specials:all_specials ?bos_token\n    ?eos_token ?pad_token ?unk_token (Alg_unigram trained_model)\n\n(* JSON serialization *)\n\nlet json_obj 
pairs =\n  Jsont.Json.object' (List.map (fun (k, v) -> (Jsont.Json.name k, v)) pairs)\n\nlet json_mem name = function\n  | Jsont.Object (mems, _) -> (\n      match Jsont.Json.find_mem name mems with\n      | Some (_, v) -> v\n      | None -> Jsont.Null ((), Jsont.Meta.none))\n  | _ -> Jsont.Null ((), Jsont.Meta.none)\n\nlet json_string_or_null = function Jsont.String (s, _) -> Some s | _ -> None\nlet json_option_of f = function None -> Jsont.Json.null () | Some v -> f v\n\nlet special_of_json json =\n  let mem name = json_mem name json in\n  let to_bool = function Jsont.Bool (b, _) -> b | _ -> false in\n  let to_str = function\n    | Jsont.String (s, _) -> s\n    | _ -> failwith \"expected string\"\n  in\n  {\n    token = to_str (mem \"content\");\n    single_word = to_bool (mem \"single_word\");\n    lstrip = to_bool (mem \"lstrip\");\n    rstrip = to_bool (mem \"rstrip\");\n    normalized = to_bool (mem \"normalized\");\n  }\n\nlet added_token_to_json ~id (s : special) =\n  json_obj\n    [\n      (\"id\", Jsont.Json.int id);\n      (\"content\", Jsont.Json.string s.token);\n      (\"single_word\", Jsont.Json.bool s.single_word);\n      (\"lstrip\", Jsont.Json.bool s.lstrip);\n      (\"rstrip\", Jsont.Json.bool s.rstrip);\n      (\"normalized\", Jsont.Json.bool s.normalized);\n      (\"special\", Jsont.Json.bool true);\n    ]\n\nlet vocab_to_json vocab =\n  json_obj (List.map (fun (token, id) -> (token, Jsont.Json.int id)) vocab)\n\nlet alg_to_json = function\n  | Alg_bpe bpe ->\n      let vocab_json = vocab_to_json (Bpe.get_vocab bpe) in\n      let merges_json =\n        Bpe.get_merges bpe\n        |> List.map (fun (a, b) ->\n            Jsont.Json.list [ Jsont.Json.string a; Jsont.Json.string b ])\n        |> Jsont.Json.list\n      in\n      json_obj\n        [\n          (\"type\", Jsont.Json.string \"BPE\");\n          (\"dropout\", Jsont.Json.null ());\n          (\"unk_token\", json_option_of Jsont.Json.string (Bpe.get_unk_token bpe));\n          ( 
\"continuing_subword_prefix\",\n            json_option_of Jsont.Json.string\n              (Bpe.get_continuing_subword_prefix bpe) );\n          ( \"end_of_word_suffix\",\n            json_option_of Jsont.Json.string (Bpe.get_end_of_word_suffix bpe) );\n          (\"fuse_unk\", Jsont.Json.bool false);\n          (\"byte_fallback\", Jsont.Json.bool false);\n          (\"ignore_merges\", Jsont.Json.bool false);\n          (\"vocab\", vocab_json);\n          (\"merges\", merges_json);\n        ]\n  | Alg_wordpiece wp ->\n      json_obj\n        [\n          (\"type\", Jsont.Json.string \"WordPiece\");\n          (\"unk_token\", Jsont.Json.string (Wordpiece.get_unk_token wp));\n          ( \"continuing_subword_prefix\",\n            Jsont.Json.string (Wordpiece.get_continuing_subword_prefix wp) );\n          (\"max_input_chars_per_word\", Jsont.Json.int 100);\n          (\"vocab\", vocab_to_json (Wordpiece.get_vocab wp));\n        ]\n  | Alg_wordlevel wl ->\n      json_obj\n        [\n          (\"type\", Jsont.Json.string \"WordLevel\");\n          (\"unk_token\", Jsont.Json.string \"[UNK]\");\n          (\"vocab\", vocab_to_json (Word_level.get_vocab wl));\n        ]\n  | Alg_unigram ug ->\n      let vocab_json =\n        Unigram.get_vocab ug\n        |> List.map (fun (token, score) ->\n            Jsont.Json.list [ Jsont.Json.string token; Jsont.Json.number score ])\n        |> Jsont.Json.list\n      in\n      json_obj\n        [\n          (\"type\", Jsont.Json.string \"Unigram\");\n          (\"unk_id\", Jsont.Json.null ());\n          (\"vocab\", vocab_json);\n        ]\n  | Alg_chars _ ->\n      json_obj [ (\"type\", Jsont.Json.string \"Chars\"); (\"vocab\", json_obj []) ]\n\nlet to_json (t : t) =\n  let vocab_list = alg_vocab t.algorithm in\n  let added_tokens =\n    t.specials\n    |> List.filter_map (fun spec ->\n        List.find_opt (fun (token, _) -> token = spec.token) vocab_list\n        |> Option.map (fun (_, id) -> added_token_to_json ~id spec))\n  
in\n  json_obj\n    [\n      (\"version\", Jsont.Json.string \"1.0\");\n      (\"truncation\", Jsont.Json.null ());\n      (\"padding\", Jsont.Json.null ());\n      (\"added_tokens\", Jsont.Json.list added_tokens);\n      (\"normalizer\", json_option_of Normalizer.to_json t.normalizer);\n      (\"pre_tokenizer\", json_option_of Pre_tokenizer.to_json t.pre_tokenizer);\n      (\"post_processor\", json_option_of Post_processor.to_json t.post_processor);\n      (\"decoder\", json_option_of Decoder.to_json t.decoder);\n      (\"model\", alg_to_json t.algorithm);\n    ]\n\n(* JSON deserialization helpers *)\n\nlet json_to_assoc = function\n  | Jsont.Object (mems, _) ->\n      List.map\n        (fun ((k, _), v) ->\n          match v with\n          | Jsont.Number (f, _) -> (k, int_of_float f)\n          | _ -> failwith (\"Expected number for vocab entry: \" ^ k))\n        mems\n  | _ -> failwith \"Expected object for vocab\"\n\nlet json_to_list = function\n  | Jsont.Array (l, _) -> l\n  | _ -> failwith \"Expected array\"\n\nlet json_to_string = function\n  | Jsont.String (s, _) -> s\n  | _ -> failwith \"Expected string\"\n\nlet json_to_float = function\n  | Jsont.Number (f, _) -> f\n  | _ -> failwith \"Expected number\"\n\nlet json_has_field name j =\n  match json_mem name j with Jsont.Null _ -> false | _ -> true\n\nlet json_result_to_option of_json = function\n  | Jsont.Null _ -> None\n  | j -> ( match of_json j with Ok v -> Some v | Error msg -> failwith msg)\n\nlet infer_model_type mj =\n  match json_string_or_null (json_mem \"type\" mj) with\n  | Some s -> s\n  | None ->\n      if json_has_field \"merges\" mj then \"BPE\"\n      else if json_has_field \"unk_id\" mj then \"Unigram\"\n      else if\n        json_has_field \"continuing_subword_prefix\" mj\n        || json_has_field \"max_input_chars_per_word\" mj\n      then \"WordPiece\"\n      else if json_has_field \"vocab\" mj then \"WordLevel\"\n      else failwith err_infer_type\n\nlet parse_merge = function\n  | 
Jsont.Array ([ a; b ], _) -> (json_to_string a, json_to_string b)\n  | Jsont.String (s, _) -> (\n      match String.split_on_char ' ' s with\n      | [ a; b ] -> (a, b)\n      | _ -> failwith \"Invalid merge string format\")\n  | _ -> failwith \"Invalid merge entry\"\n\nlet alg_of_json mj =\n  let mem name = json_mem name mj in\n  let str name = json_string_or_null (mem name) in\n  match infer_model_type mj with\n  | \"BPE\" ->\n      let vocab_list = json_to_assoc (mem \"vocab\") in\n      let merges = json_to_list (mem \"merges\") |> List.map parse_merge in\n      Alg_bpe\n        (Bpe.create\n           ~vocab:(vocab_to_hashtbl vocab_list)\n           ~merges ?unk_token:(str \"unk_token\")\n           ?continuing_subword_prefix:(str \"continuing_subword_prefix\")\n           ?end_of_word_suffix:(str \"end_of_word_suffix\") ())\n  | \"WordPiece\" ->\n      let vocab_list = json_to_assoc (mem \"vocab\") in\n      let unk_token = str \"unk_token\" |> Option.value ~default:\"[UNK]\" in\n      let continuing_subword_prefix =\n        str \"continuing_subword_prefix\" |> Option.value ~default:\"##\"\n      in\n      let max_input_chars_per_word =\n        match mem \"max_input_chars_per_word\" with\n        | Jsont.Number (f, _) -> int_of_float f\n        | _ -> 100\n      in\n      Alg_wordpiece\n        (Wordpiece.create\n           ~vocab:(vocab_to_hashtbl vocab_list)\n           ~unk_token ~continuing_subword_prefix ~max_input_chars_per_word ())\n  | \"WordLevel\" ->\n      let vocab_list = json_to_assoc (mem \"vocab\") in\n      let unk_token = str \"unk_token\" |> Option.value ~default:\"[UNK]\" in\n      Alg_wordlevel (Word_level.create ~vocab:vocab_list ~unk_token ())\n  | \"Unigram\" ->\n      let vocab =\n        json_to_list (mem \"vocab\")\n        |> List.map (fun arr ->\n            match json_to_list arr with\n            | [ token; score ] -> (json_to_string token, json_to_float score)\n            | _ -> failwith \"Invalid unigram vocab format\")\n    
  in\n      Alg_unigram (Unigram.create vocab)\n  | \"Chars\" -> Alg_chars (Chars.create ())\n  | s -> failwith (strf \"Unsupported model type: %s\" s)\n\nlet from_json json =\n  try\n    let mem name = json_mem name json in\n    let normalizer =\n      json_result_to_option Normalizer.of_json (mem \"normalizer\")\n    in\n    let pre =\n      json_result_to_option Pre_tokenizer.of_json (mem \"pre_tokenizer\")\n    in\n    let post =\n      json_result_to_option Post_processor.of_json (mem \"post_processor\")\n    in\n    let decoder = json_result_to_option Decoder.of_json (mem \"decoder\") in\n    let algorithm = alg_of_json (mem \"model\") in\n    let added_tokens =\n      match mem \"added_tokens\" with\n      | Jsont.Array (l, _) -> List.map special_of_json l\n      | _ -> []\n    in\n    Ok (create ?normalizer ?pre ?post ?decoder ~specials:added_tokens algorithm)\n  with\n  | Failure msg -> Error msg\n  | exn -> Error (Printexc.to_string exn)\n\n(* File I/O *)\n\nlet write_string_to_file path s =\n  let oc = open_out path in\n  Fun.protect ~finally:(fun () -> close_out oc) (fun () -> output_string oc s)\n\nlet from_file path =\n  try\n    let ic = open_in path in\n    let s =\n      Fun.protect\n        ~finally:(fun () -> close_in ic)\n        (fun () -> really_input_string ic (in_channel_length ic))\n    in\n    match Jsont_bytesrw.decode_string Jsont.json s with\n    | Ok json -> from_json json\n    | Error e -> Error e\n  with\n  | Sys_error msg -> Error (\"File error: \" ^ msg)\n  | exn -> Error (Printexc.to_string exn)\n\nlet save_pretrained t ~path =\n  (try Sys.mkdir path 0o755 with Sys_error _ -> ());\n  let json_str =\n    match\n      Jsont_bytesrw.encode_string ~format:Jsont.Minify Jsont.json (to_json t)\n    with\n    | Ok s -> s\n    | Error e -> failwith (\"save_pretrained: failed to encode JSON: \" ^ e)\n  in\n  write_string_to_file (Filename.concat path \"tokenizer.json\") json_str\n\nlet export_tiktoken t ~merges_path ~vocab_path =\n  match 
t.algorithm with\n  | Alg_bpe bpe ->\n      let vocab =\n        alg_vocab t.algorithm\n        |> List.sort (fun (_, id1) (_, id2) -> Int.compare id1 id2)\n      in\n      let json_str =\n        match\n          Jsont_bytesrw.encode_string ~format:Jsont.Minify Jsont.json\n            (vocab_to_json vocab)\n        with\n        | Ok s -> s\n        | Error e -> failwith (\"export_tiktoken: failed to encode vocab: \" ^ e)\n      in\n      write_string_to_file vocab_path json_str;\n      let oc = open_out merges_path in\n      Fun.protect\n        ~finally:(fun () -> close_out oc)\n        (fun () ->\n          output_string oc \"#version: 0.2\\n\";\n          List.iter\n            (fun (a, b) -> Printf.fprintf oc \"%s %s\\n\" a b)\n            (Bpe.get_merges bpe))\n  | _ -> invalid_arg err_export_tiktoken\n\nlet save_model_files t ~folder ?prefix () =\n  alg_save t.algorithm ~folder ?prefix ()\n\n(* Formatting *)\n\nlet pp ppf t =\n  let yes_no = function Some _ -> \"yes\" | None -> \"no\" in\n  Format.fprintf ppf\n    \"@[<1><brot %s@ vocab=%d@ normalizer=%s@ pre=%s@ post=%s@ decoder=%s>@]\"\n    (alg_name t.algorithm)\n    (alg_vocab_size t.algorithm)\n    (yes_no t.normalizer) (yes_no t.pre_tokenizer) (yes_no t.post_processor)\n    (yes_no t.decoder)\n"
  },
  {
    "path": "packages/brot/lib/brot.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Tokenization for OCaml.\n\n    Brot tokenizes text into token IDs for language models and reverses the\n    process. Tokenization proceeds through configurable stages:\n\n    + {e Normalization}: clean and normalize text (lowercase, accent removal,\n      Unicode normalization). See {!Normalizer}.\n    + {e Pre-tokenization}: split text into words or sub-words. See\n      {!Pre_tokenizer}.\n    + {e Tokenization}: apply vocabulary-based encoding (BPE, WordPiece,\n      Unigram, word-level, or character-level).\n    + {e Post-processing}: add special tokens and set type IDs. See\n      {!Post_processor}.\n    + {e Padding/Truncation}: adjust sequence lengths for batching.\n\n    Each stage is optional and configurable. 
Open the module to use it, it\n    defines only modules in your scope.\n\n    {1:quick_start Quick start}\n\n    Load a pretrained tokenizer:\n    {[\n      let tokenizer = Brot.from_file \"tokenizer.json\" |> Result.get_ok in\n      let encoding = Brot.encode tokenizer \"Hello world!\" in\n      let _ids = Encoding.ids encoding\n    ]}\n\n    Create a BPE tokenizer from scratch:\n    {[\n      let tokenizer =\n        Brot.bpe\n          ~vocab:[(\"hello\", 0); (\"world\", 1); (\"[PAD]\", 2)]\n          ~merges:[]\n          ()\n      in\n      let encoding = Brot.encode tokenizer \"hello world\" in\n      let _text = Brot.decode tokenizer (Encoding.ids encoding)\n    ]}\n\n    Train a new tokenizer:\n    {[\n    let texts = [ \"Hello world\"; \"How are you?\"; \"Hello again\" ] in\n    let tokenizer =\n      Brot.train_bpe (`Seq (List.to_seq texts)) ~vocab_size:1000\n    in\n    Brot.save_pretrained tokenizer ~path:\"./my_tokenizer\"\n    ]}\n\n    {!modules:Encoding Normalizer Pre_tokenizer Post_processor Decoder} *)\n\nmodule Normalizer = Normalizer\n(** Text normalization. *)\n\nmodule Pre_tokenizer = Pre_tokenizer\n(** Pre-tokenization. *)\n\nmodule Post_processor = Post_processor\n(** Post-processing. *)\n\nmodule Decoder = Decoder\n(** Token decoding. *)\n\nmodule Encoding = Encoding\n(** Tokenization encodings. *)\n\n(** {1:types Types} *)\n\ntype t\n(** The type for tokenizers. Immutable after creation. *)\n\ntype direction = [ `Left | `Right ]\n(** The type for padding and truncation directions. [`Left] operates at the\n    beginning of the sequence, [`Right] at the end. *)\n\ntype special = {\n  token : string;  (** The token text (e.g., [\"<pad>\"], [\"<unk>\"]). *)\n  single_word : bool;  (** Whether this token must match whole words only. *)\n  lstrip : bool;  (** Whether to strip whitespace on the left. *)\n  rstrip : bool;  (** Whether to strip whitespace on the right. *)\n  normalized : bool;  (** Whether to apply normalization to this token. 
*)\n}\n(** The type for special token configurations.\n\n    Special tokens are never split during tokenization and can be skipped during\n    decoding. Token IDs are assigned automatically when added to the vocabulary.\n    The semantic role (pad, unk, bos, etc.) is contextual, not encoded in the\n    type. *)\n\ntype pad_length = [ `Batch_longest | `Fixed of int | `To_multiple of int ]\n(** The type for padding length strategies.\n\n    - [`Batch_longest]: pad to the longest sequence in the batch.\n    - [`Fixed n]: pad every sequence to exactly [n] tokens.\n    - [`To_multiple n]: pad to the smallest multiple of [n] that is at least the\n      sequence length. *)\n\ntype padding = {\n  length : pad_length;\n  direction : direction;\n  pad_id : int option;\n  pad_type_id : int option;\n  pad_token : string option;\n}\n(** The type for padding configurations.\n\n    When [pad_id], [pad_type_id], or [pad_token] are [None], the tokenizer's\n    configured padding token is used. Raises [Invalid_argument] at padding time\n    if no padding token is configured and these fields are [None]. *)\n\ntype truncation = { max_length : int; direction : direction }\n(** The type for truncation configurations. Sequences exceeding [max_length]\n    tokens are trimmed from the given [direction]. *)\n\ntype data = [ `Files of string list | `Seq of string Seq.t ]\n(** The type for training data sources.\n\n    - [`Files paths]: read training text from files, one line per example.\n    - [`Seq seq]: use a sequence of strings. *)\n\nval special :\n  ?single_word:bool ->\n  ?lstrip:bool ->\n  ?rstrip:bool ->\n  ?normalized:bool ->\n  string ->\n  special\n(** [special token] is a special token configuration for [token].\n\n    [single_word] defaults to [false]. [lstrip] and [rstrip] default to [false].\n    [normalized] defaults to [false]. 
*)\n\nval padding :\n  ?direction:direction ->\n  ?pad_id:int ->\n  ?pad_type_id:int ->\n  ?pad_token:string ->\n  pad_length ->\n  padding\n(** [padding length] is a padding configuration for the given [length] strategy.\n\n    [direction] defaults to [`Right]. Other fields default to [None] (falls back\n    to the tokenizer's configured padding token). *)\n\nval truncation : ?direction:direction -> int -> truncation\n(** [truncation max_length] is a truncation configuration limiting sequences to\n    [max_length] tokens. [direction] defaults to [`Right]. *)\n\n(** {1:constructors Constructors} *)\n\nval bpe :\n  ?normalizer:Normalizer.t ->\n  ?pre:Pre_tokenizer.t ->\n  ?post:Post_processor.t ->\n  ?decoder:Decoder.t ->\n  ?specials:special list ->\n  ?bos_token:string ->\n  ?eos_token:string ->\n  ?pad_token:string ->\n  ?unk_token:string ->\n  ?vocab:(string * int) list ->\n  ?merges:(string * string) list ->\n  ?cache_capacity:int ->\n  ?dropout:float ->\n  ?continuing_subword_prefix:string ->\n  ?end_of_word_suffix:string ->\n  ?fuse_unk:bool ->\n  ?byte_fallback:bool ->\n  ?ignore_merges:bool ->\n  unit ->\n  t\n(** [bpe ()] is a BPE (Byte Pair Encoding) tokenizer. Used by GPT-2, GPT-3,\n    RoBERTa.\n\n    - [normalizer]: text normalization. Default: none.\n    - [pre]: pre-tokenization strategy. Default: none.\n    - [post]: post-processor for special tokens. Default: none.\n    - [decoder]: decoding strategy. Default: none.\n    - [specials]: special tokens to add to vocabulary. Default: [[]].\n    - [bos_token], [eos_token], [pad_token]: role markers; added to vocabulary\n      if not already present. Default: none.\n    - [unk_token]: token for unknown characters. Configures both the role and\n      the BPE model's unknown handling. Default: none.\n    - [vocab]: initial vocabulary as [(token, id)] pairs. 
Default: [[]].\n    - [merges]: merge rules as [(left, right)] pairs learned during training.\n      Default: [[]].\n    - [cache_capacity]: LRU cache size for tokenization results. Default:\n      [10000].\n    - [dropout]: probability \\[[0]; [1]\\] of skipping merges (data\n      augmentation). Default: none (no dropout).\n    - [continuing_subword_prefix]: prefix for non-initial subwords (e.g.,\n      [\"##\"]). Default: none.\n    - [end_of_word_suffix]: suffix marking word boundaries (e.g., [\"</w>\"]).\n      Default: none.\n    - [fuse_unk]: merge consecutive unknown tokens. Default: [false].\n    - [byte_fallback]: use byte-level fallback ([\"<0x00>\"]) instead of unknown\n      token. Default: [false].\n    - [ignore_merges]: skip merge application (character-level output). Default:\n      [false]. *)\n\nval wordpiece :\n  ?normalizer:Normalizer.t ->\n  ?pre:Pre_tokenizer.t ->\n  ?post:Post_processor.t ->\n  ?decoder:Decoder.t ->\n  ?specials:special list ->\n  ?bos_token:string ->\n  ?eos_token:string ->\n  ?pad_token:string ->\n  ?unk_token:string ->\n  ?vocab:(string * int) list ->\n  ?continuing_subword_prefix:string ->\n  ?max_input_chars_per_word:int ->\n  unit ->\n  t\n(** [wordpiece ()] is a WordPiece tokenizer. Used by BERT, DistilBERT, Electra.\n\n    WordPiece uses a greedy longest-match-first algorithm to split words into\n    subword pieces prefixed with a continuation marker (e.g., [\"running\"]\n    becomes [[\"run\"; \"##ning\"]]).\n\n    - [vocab]: initial vocabulary as [(token, id)] pairs. Default: [[]].\n    - [unk_token]: token for out-of-vocabulary words. Default: [\"[UNK]\"].\n    - [continuing_subword_prefix]: prefix for non-initial subwords. Default:\n      [\"##\"].\n    - [max_input_chars_per_word]: words longer than this are replaced with\n      [unk_token]. Default: [100].\n\n    Pipeline parameters ([normalizer], [pre], [post], [decoder], [specials],\n    [bos_token], [eos_token], [pad_token]) are as in {!bpe}. 
*)\n\nval word_level :\n  ?normalizer:Normalizer.t ->\n  ?pre:Pre_tokenizer.t ->\n  ?post:Post_processor.t ->\n  ?decoder:Decoder.t ->\n  ?specials:special list ->\n  ?bos_token:string ->\n  ?eos_token:string ->\n  ?pad_token:string ->\n  ?unk_token:string ->\n  ?vocab:(string * int) list ->\n  unit ->\n  t\n(** [word_level ()] is a word-level tokenizer.\n\n    Maps each word directly to a token ID. No subword splitting is performed.\n    Words not in vocabulary map to [unk_token].\n\n    {b Note.} When [pre] is not provided, {!Pre_tokenizer.whitespace} is used by\n    default.\n\n    - [vocab]: initial vocabulary as [(word, id)] pairs. Default: [[]].\n    - [unk_token]: token for out-of-vocabulary words. Default: [\"<unk>\"].\n\n    Pipeline parameters ([normalizer], [pre], [post], [decoder], [specials],\n    [bos_token], [eos_token], [pad_token]) are as in {!bpe}. *)\n\nval unigram :\n  ?normalizer:Normalizer.t ->\n  ?pre:Pre_tokenizer.t ->\n  ?post:Post_processor.t ->\n  ?decoder:Decoder.t ->\n  ?specials:special list ->\n  ?bos_token:string ->\n  ?eos_token:string ->\n  ?pad_token:string ->\n  ?unk_token:string ->\n  ?vocab:(string * float) list ->\n  unit ->\n  t\n(** [unigram ()] is a Unigram tokenizer. Used by ALBERT, T5, mBART.\n\n    Unigram uses probabilistic segmentation to find optimal subword splits based\n    on token log-probabilities.\n\n    - [vocab]: initial vocabulary as [(token, score)] pairs where scores are\n      token log-probabilities (negative values). Default: [[]].\n    - [unk_token]: token for unknown characters. Default: none.\n\n    Pipeline parameters ([normalizer], [pre], [post], [decoder], [specials],\n    [bos_token], [eos_token], [pad_token]) are as in {!bpe}. 
*)\n\nval chars :\n  ?normalizer:Normalizer.t ->\n  ?pre:Pre_tokenizer.t ->\n  ?post:Post_processor.t ->\n  ?decoder:Decoder.t ->\n  ?specials:special list ->\n  ?bos_token:string ->\n  ?eos_token:string ->\n  ?pad_token:string ->\n  ?unk_token:string ->\n  unit ->\n  t\n(** [chars ()] is a character-level tokenizer.\n\n    Each byte in the input becomes a separate token with ID equal to its ordinal\n    value. No vocabulary is required.\n\n    Pipeline parameters ([normalizer], [pre], [post], [decoder], [specials],\n    [bos_token], [eos_token], [pad_token]) are as in {!bpe}. *)\n\nval from_model_file :\n  vocab:string ->\n  ?merges:string ->\n  ?normalizer:Normalizer.t ->\n  ?pre:Pre_tokenizer.t ->\n  ?post:Post_processor.t ->\n  ?decoder:Decoder.t ->\n  ?specials:special list ->\n  ?bos_token:string ->\n  ?eos_token:string ->\n  ?pad_token:string ->\n  ?unk_token:string ->\n  unit ->\n  t\n(** [from_model_file ~vocab ()] loads a tokenizer from HuggingFace model files.\n\n    The model type is inferred from the arguments: if [merges] is provided, a\n    BPE tokenizer is created; otherwise WordPiece.\n\n    - [vocab]: path to vocabulary file ([vocab.json]). Expected format: JSON\n      object mapping tokens to IDs ([{\"hello\": 0, \"world\": 1}]).\n    - [merges]: path to merges file ([merges.txt]). One merge per line as\n      space-separated token pairs. Lines starting with [\"#version\"] are skipped.\n\n    Raises [Sys_error] if a file cannot be read.\n\n    Pipeline parameters ([normalizer], [pre], [post], [decoder], [specials],\n    [bos_token], [eos_token], [pad_token], [unk_token]) are as in {!bpe}. *)\n\nval add_tokens : t -> string list -> t\n(** [add_tokens t tokens] is [t] with [tokens] added to the vocabulary. Only\n    supported for word-level tokenizers.\n\n    Raises [Invalid_argument] if the tokenizer does not support dynamic\n    vocabulary extension. 
*)\n\n(** {1:accessors Accessors} *)\n\nval normalizer : t -> Normalizer.t option\n(** [normalizer t] is [t]'s normalizer, if any. *)\n\nval pre_tokenizer : t -> Pre_tokenizer.t option\n(** [pre_tokenizer t] is [t]'s pre-tokenizer, if any. *)\n\nval post_processor : t -> Post_processor.t option\n(** [post_processor t] is [t]'s post-processor, if any. *)\n\nval decoder : t -> Decoder.t option\n(** [decoder t] is [t]'s decoder, if any. *)\n\nval specials : t -> special list\n(** [specials t] is [t]'s special tokens. *)\n\nval bos_token : t -> string option\n(** [bos_token t] is [t]'s beginning-of-sequence token, if any. *)\n\nval eos_token : t -> string option\n(** [eos_token t] is [t]'s end-of-sequence token, if any. *)\n\nval pad_token : t -> string option\n(** [pad_token t] is [t]'s padding token, if any. *)\n\nval unk_token : t -> string option\n(** [unk_token t] is [t]'s unknown token, if any. *)\n\n(** {1:vocab Vocabulary} *)\n\nval vocab : t -> (string * int) list\n(** [vocab t] is [t]'s vocabulary as [(token, id)] pairs. *)\n\nval vocab_size : t -> int\n(** [vocab_size t] is the number of tokens in [t]'s vocabulary. *)\n\nval token_to_id : t -> string -> int option\n(** [token_to_id t token] is the ID of [token] in [t], if any. *)\n\nval id_to_token : t -> int -> string option\n(** [id_to_token t id] is the token string for [id] in [t], if any. *)\n\n(** {1:encoding Encoding and decoding} *)\n\nval encode :\n  t ->\n  ?pair:string ->\n  ?add_special_tokens:bool ->\n  ?padding:padding ->\n  ?truncation:truncation ->\n  string ->\n  Encoding.t\n(** [encode t text] is the encoding of [text] by [t].\n\n    - [pair]: a second sentence for sentence-pair tasks. The post-processor\n      merges both sequences with appropriate type IDs. Default: none.\n    - [add_special_tokens]: whether to insert special tokens via the\n      post-processor. Default: [true].\n    - [padding]: padding configuration. 
Default: none (no padding).\n    - [truncation]: truncation configuration. Default: none (no truncation). *)\n\nval encode_batch :\n  t ->\n  ?add_special_tokens:bool ->\n  ?padding:padding ->\n  ?truncation:truncation ->\n  string list ->\n  Encoding.t list\n(** [encode_batch t texts] is the encoding of each text in [texts].\n\n    Optional parameters are as in {!encode}. For sentence-pair tasks, use\n    {!encode_pairs_batch}. *)\n\nval encode_pairs_batch :\n  t ->\n  ?add_special_tokens:bool ->\n  ?padding:padding ->\n  ?truncation:truncation ->\n  (string * string) list ->\n  Encoding.t list\n(** [encode_pairs_batch t pairs] encodes a batch of sentence pairs. Each element\n    is [(primary, secondary)].\n\n    Optional parameters are as in {!encode}. *)\n\nval encode_ids :\n  t ->\n  ?pair:string ->\n  ?add_special_tokens:bool ->\n  ?padding:padding ->\n  ?truncation:truncation ->\n  string ->\n  int array\n(** [encode_ids t text] is [Encoding.ids (encode t text)].\n\n    Optional parameters are as in {!encode}. *)\n\nval decode : t -> ?skip_special_tokens:bool -> int array -> string\n(** [decode t ids] is the text obtained by decoding [ids] through [t]'s\n    vocabulary and decoder.\n\n    [skip_special_tokens] defaults to [false]. *)\n\nval decode_batch :\n  t -> ?skip_special_tokens:bool -> int array list -> string list\n(** [decode_batch t ids_list] decodes each element of [ids_list].\n\n    [skip_special_tokens] defaults to [false]. 
*)\n\n(** {1:training Training} *)\n\nval train_bpe :\n  ?init:t ->\n  ?normalizer:Normalizer.t ->\n  ?pre:Pre_tokenizer.t ->\n  ?post:Post_processor.t ->\n  ?decoder:Decoder.t ->\n  ?specials:special list ->\n  ?bos_token:string ->\n  ?eos_token:string ->\n  ?pad_token:string ->\n  ?unk_token:string ->\n  ?vocab_size:int ->\n  ?min_frequency:int ->\n  ?limit_alphabet:int ->\n  ?initial_alphabet:string list ->\n  ?continuing_subword_prefix:string ->\n  ?end_of_word_suffix:string ->\n  ?show_progress:bool ->\n  ?max_token_length:int ->\n  data ->\n  t\n(** [train_bpe data] trains a BPE tokenizer from [data].\n\n    Learns merge rules by iteratively merging the most frequent adjacent pairs\n    until reaching the target vocabulary size.\n\n    - [init]: existing tokenizer to extend. Default: create new.\n    - [vocab_size]: target vocabulary size including special tokens. Default:\n      [30000].\n    - [min_frequency]: minimum pair frequency to be merged. Default: [0].\n    - [limit_alphabet]: maximum number of initial characters to keep. Default:\n      none (keep all).\n    - [initial_alphabet]: characters to include regardless of frequency.\n      Default: [[]].\n    - [continuing_subword_prefix]: prefix for non-initial subwords. Default:\n      none.\n    - [end_of_word_suffix]: suffix marking word boundaries. Default: none.\n    - [show_progress]: display progress bar. Default: [true].\n    - [max_token_length]: maximum token length. Default: none.\n\n    Pipeline parameters ([normalizer], [pre], [post], [decoder], [specials],\n    [bos_token], [eos_token], [pad_token], [unk_token]) are as in {!bpe}. 
*)\n\nval train_wordpiece :\n  ?init:t ->\n  ?normalizer:Normalizer.t ->\n  ?pre:Pre_tokenizer.t ->\n  ?post:Post_processor.t ->\n  ?decoder:Decoder.t ->\n  ?specials:special list ->\n  ?bos_token:string ->\n  ?eos_token:string ->\n  ?pad_token:string ->\n  ?unk_token:string ->\n  ?vocab_size:int ->\n  ?min_frequency:int ->\n  ?limit_alphabet:int ->\n  ?initial_alphabet:string list ->\n  ?continuing_subword_prefix:string ->\n  ?end_of_word_suffix:string ->\n  ?show_progress:bool ->\n  data ->\n  t\n(** [train_wordpiece data] trains a WordPiece tokenizer from [data].\n\n    Learns subword vocabulary by maximizing language model likelihood.\n\n    - [init]: existing tokenizer to extend. Default: create new.\n    - [vocab_size]: target vocabulary size including special tokens. Default:\n      [30000].\n    - [min_frequency]: minimum frequency for a subword to be included. Default:\n      [0].\n    - [limit_alphabet]: maximum number of initial characters to keep. Default:\n      none (keep all).\n    - [initial_alphabet]: characters to include regardless of frequency.\n      Default: [[]].\n    - [continuing_subword_prefix]: prefix for non-initial subwords. Default:\n      [\"##\"].\n    - [end_of_word_suffix]: suffix marking word boundaries. Default: none.\n    - [show_progress]: display progress bar. Default: [true].\n\n    Pipeline parameters ([normalizer], [pre], [post], [decoder], [specials],\n    [bos_token], [eos_token], [pad_token], [unk_token]) are as in {!bpe}. 
*)\n\nval train_wordlevel :\n  ?init:t ->\n  ?normalizer:Normalizer.t ->\n  ?pre:Pre_tokenizer.t ->\n  ?post:Post_processor.t ->\n  ?decoder:Decoder.t ->\n  ?specials:special list ->\n  ?bos_token:string ->\n  ?eos_token:string ->\n  ?pad_token:string ->\n  ?unk_token:string ->\n  ?vocab_size:int ->\n  ?min_frequency:int ->\n  ?show_progress:bool ->\n  data ->\n  t\n(** [train_wordlevel data] trains a word-level tokenizer from [data].\n\n    Builds vocabulary by collecting unique words, optionally filtering by\n    frequency. No subword splitting.\n\n    - [init]: existing tokenizer to extend. Default: create new.\n    - [vocab_size]: target vocabulary size including special tokens. Default:\n      [30000].\n    - [min_frequency]: minimum frequency for a word to be included. Default:\n      [0].\n    - [show_progress]: display progress bar. Default: [true].\n\n    Pipeline parameters ([normalizer], [pre], [post], [decoder], [specials],\n    [bos_token], [eos_token], [pad_token], [unk_token]) are as in {!bpe}. *)\n\nval train_unigram :\n  ?init:t ->\n  ?normalizer:Normalizer.t ->\n  ?pre:Pre_tokenizer.t ->\n  ?post:Post_processor.t ->\n  ?decoder:Decoder.t ->\n  ?specials:special list ->\n  ?bos_token:string ->\n  ?eos_token:string ->\n  ?pad_token:string ->\n  ?unk_token:string ->\n  ?vocab_size:int ->\n  ?show_progress:bool ->\n  ?shrinking_factor:float ->\n  ?max_piece_length:int ->\n  ?n_sub_iterations:int ->\n  data ->\n  t\n(** [train_unigram data] trains a Unigram tokenizer from [data].\n\n    Learns probabilistic subword vocabulary using EM algorithm.\n\n    - [init]: existing tokenizer to extend. Default: create new.\n    - [vocab_size]: target vocabulary size including special tokens. Default:\n      [8000].\n    - [show_progress]: display progress bar. Default: [true].\n    - [shrinking_factor]: fraction of vocabulary to retain in each pruning\n      iteration. Default: [0.75].\n    - [max_piece_length]: maximum subword length. 
Default: [16].\n    - [n_sub_iterations]: number of EM sub-iterations per pruning round.\n      Default: [2].\n\n    Pipeline parameters ([normalizer], [pre], [post], [decoder], [specials],\n    [bos_token], [eos_token], [pad_token], [unk_token]) are as in {!bpe}. *)\n\n(** {1:model_files Model files} *)\n\nval export_tiktoken : t -> merges_path:string -> vocab_path:string -> unit\n(** [export_tiktoken t ~merges_path ~vocab_path] exports [t]'s BPE merges and\n    vocabulary in tiktoken-compatible format.\n\n    {b Warning.} Only BPE tokenizers are supported. Raises [Invalid_argument]\n    for other model types. *)\n\nval save_model_files :\n  t -> folder:string -> ?prefix:string -> unit -> string list\n(** [save_model_files t ~folder ?prefix ()] saves [t]'s underlying model files\n    (vocabulary and merges) to [folder] and returns the list of created file\n    paths.\n\n    [prefix] defaults to [\"\"]. *)\n\n(** {1:huggingface HuggingFace compatibility} *)\n\nval from_file : string -> (t, string) result\n(** [from_file path] is a tokenizer loaded from a HuggingFace [tokenizer.json]\n    file. Errors if the file cannot be read or has invalid format. *)\n\nval from_json : Jsont.json -> (t, string) result\n(** [from_json json] is a tokenizer deserialized from HuggingFace JSON format.\n    Errors if [json] has a missing or unknown model type, or invalid parameters.\n*)\n\nval to_json : t -> Jsont.json\n(** [to_json t] is [t] serialized to HuggingFace JSON format. *)\n\nval save_pretrained : t -> path:string -> unit\n(** [save_pretrained t ~path] saves [t] to [path] in HuggingFace format. Creates\n    [path/tokenizer.json].\n\n    Raises [Sys_error] if [path] cannot be written. *)\n\n(** {1:fmt Formatting} *)\n\nval pp : Format.formatter -> t -> unit\n(** [pp] formats a tokenizer for inspection. Shows algorithm type, vocabulary\n    size, and configured pipeline stages. *)\n"
  },
  {
    "path": "packages/brot/lib/chars.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\ntype t = unit\n\nlet create () = ()\n\nlet tokenize () text =\n  if String.length text = 0 then []\n  else\n    let chars = ref [] in\n    let offset = ref 0 in\n    String.iter\n      (fun c ->\n        let char_str = String.make 1 c in\n        let id = Char.code c in\n        chars := (id, char_str, (!offset, !offset + 1)) :: !chars;\n        incr offset)\n      text;\n    List.rev !chars\n\nlet token_to_id () token =\n  if String.length token = 1 then Some (Char.code token.[0]) else None\n\nlet id_to_token () id =\n  if id >= 0 && id <= 255 then Some (String.make 1 (Char.chr id)) else None\n\nlet get_vocab () = []\nlet get_vocab_size () = 256 (* All possible byte values *)\nlet save () ~folder:_ () = []\n"
  },
  {
    "path": "packages/brot/lib/chars.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Character-level tokenization model.\n\n    {b Internal module.} Each byte maps to its ordinal value as token ID.\n    Stateless: no vocabulary storage, no training. *)\n\ntype t\n(** The type for character-level models. *)\n\n(** {1:creation Creation} *)\n\nval create : unit -> t\n(** [create ()] is a character-level tokenizer. *)\n\n(** {1:tokenization Tokenization} *)\n\nval tokenize : t -> string -> (int * string * (int * int)) list\n(** [tokenize t s] is the tokenization of [s] as\n    [(byte_value, char_string, (start, stop))] triples, one per byte. *)\n\n(** {1:vocabulary Vocabulary} *)\n\nval token_to_id : t -> string -> int option\n(** [token_to_id t s] is the byte value of [s] when [s] is a single byte. *)\n\nval id_to_token : t -> int -> string option\n(** [id_to_token t b] is the single-byte string for byte value [b]. *)\n\nval get_vocab : t -> (string * int) list\n(** [get_vocab t] is [[]] (no explicit vocabulary). *)\n\nval get_vocab_size : t -> int\n(** [get_vocab_size t] is [256] (all byte values). *)\n\n(** {1:serialization Serialization} *)\n\nval save : t -> folder:string -> unit -> string list\n(** [save t ~folder ()] is [[]] (no files to write). *)\n"
  },
  {
    "path": "packages/brot/lib/decoder.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\ntype t =\n  | BPE of { suffix : string }\n  | Byte_level\n  | Byte_fallback\n  | Word_piece of { prefix : string; cleanup : bool }\n  | Metaspace of { replacement : char; add_prefix_space : bool }\n  | CTC of { pad_token : string; word_delimiter_token : string; cleanup : bool }\n  | Sequence of t list\n  | Replace of { pattern : string; replacement : string }\n  | Strip of { left : bool; right : bool; content : char }\n  | Fuse\n\n(* Errors *)\n\nlet strf = Printf.sprintf\nlet err_replace_missing_pattern = \"missing pattern in Replace decoder\"\n\nlet err_seq_missing_decoders =\n  \"invalid Sequence decoder: missing decoders array\"\n\nlet err_unknown_type typ = strf \"unknown decoder type: %s\" typ\nlet err_expected_object = \"invalid decoder JSON: expected object\"\n\n(* Decoding *)\n\nlet whitespace_re = Re.compile (Re.rep1 (Re.char ' '))\n\n(* Literal string replacement without regex overhead. Returns [s] unchanged when\n   [pattern] does not occur—no allocation on the fast path. 
*)\nlet replace_all ~pattern ~by s =\n  let plen = String.length pattern in\n  let slen = String.length s in\n  if plen = 0 || plen > slen then s\n  else\n    let match_at i =\n      let rec check j =\n        j >= plen\n        || String.unsafe_get s (i + j) = String.unsafe_get pattern j\n           && check (j + 1)\n      in\n      check 0\n    in\n    let rec find_first i =\n      if i > slen - plen then -1\n      else if match_at i then i\n      else find_first (i + 1)\n    in\n    let pos = find_first 0 in\n    if pos < 0 then s\n    else\n      let buf = Buffer.create slen in\n      Buffer.add_substring buf s 0 pos;\n      Buffer.add_string buf by;\n      let i = ref (pos + plen) in\n      while !i <= slen - plen do\n        if match_at !i then (\n          Buffer.add_string buf by;\n          i := !i + plen)\n        else (\n          Buffer.add_char buf (String.unsafe_get s !i);\n          incr i)\n      done;\n      if !i < slen then Buffer.add_substring buf s !i (slen - !i);\n      Buffer.contents buf\n\nlet decode_bpe ~suffix tokens =\n  let suffix_len = String.length suffix in\n  let strip token =\n    if suffix_len > 0 && String.ends_with ~suffix token then\n      String.sub token 0 (String.length token - suffix_len)\n    else token\n  in\n  let rec loop acc = function\n    | [] -> List.rev acc\n    | [ token ] -> List.rev (strip token :: acc)\n    | token :: rest -> loop (\" \" :: strip token :: acc) rest\n  in\n  loop [] tokens\n\nlet decode_byte_level tokens =\n  let buf = Buffer.create 128 in\n  List.iter\n    (fun token -> Buffer.add_string buf (Pre_tokenizer.byte_level_decode token))\n    tokens;\n  Buffer.contents buf\n\nlet decode_byte_fallback tokens =\n  let flush acc = function\n    | [] -> acc\n    | byte_acc ->\n        let bytes = List.rev byte_acc in\n        let s = Bytes.create (List.length bytes) in\n        List.iteri (fun i b -> Bytes.unsafe_set s i (Char.chr b)) bytes;\n        Bytes.unsafe_to_string s :: acc\n  in\n  let 
is_byte_token token =\n    String.length token = 6\n    && String.starts_with ~prefix:\"<0x\" token\n    && String.ends_with ~suffix:\">\" token\n  in\n  let rec loop acc byte_acc = function\n    | [] -> List.rev (flush acc byte_acc)\n    | token :: rest when is_byte_token token -> (\n        let hex = String.sub token 3 2 in\n        match int_of_string_opt (\"0x\" ^ hex) with\n        | Some b when b >= 0 && b <= 255 -> loop acc (b :: byte_acc) rest\n        | _ -> loop (token :: flush acc byte_acc) [] rest)\n    | token :: rest -> loop (token :: flush acc byte_acc) [] rest\n  in\n  loop [] [] tokens\n\nlet decode_wordpiece ~prefix ~cleanup tokens =\n  let plen = String.length prefix in\n  let buf = Buffer.create 128 in\n  List.iteri\n    (fun i token ->\n      if i > 0 && String.starts_with ~prefix token then\n        Buffer.add_substring buf token plen (String.length token - plen)\n      else begin\n        if i > 0 then Buffer.add_char buf ' ';\n        Buffer.add_string buf token\n      end)\n    tokens;\n  let s = Buffer.contents buf in\n  if cleanup then String.trim (Re.replace_string whitespace_re ~by:\" \" s) else s\n\nlet decode_metaspace ~replacement ~add_prefix_space tokens =\n  List.mapi\n    (fun i token ->\n      let s = String.map (fun c -> if c = replacement then ' ' else c) token in\n      if add_prefix_space && i = 0 && String.length s > 0 && s.[0] = ' ' then\n        String.sub s 1 (String.length s - 1)\n      else s)\n    tokens\n\nlet decode_ctc ~pad_token ~word_delimiter_token ~cleanup tokens =\n  let rec dedup acc = function\n    | [] -> List.rev acc\n    | [ x ] -> List.rev (x :: acc)\n    | x :: (y :: _ as rest) ->\n        if String.equal x y then dedup acc rest else dedup (x :: acc) rest\n  in\n  let re =\n    if cleanup then Some (Re.compile (Re.str word_delimiter_token)) else None\n  in\n  dedup [] tokens\n  |> List.filter_map (fun token ->\n      if String.equal token pad_token then None\n      else\n        let s =\n          match 
re with\n          | Some re -> Re.replace_string re ~by:\" \" token\n          | None -> token\n        in\n        if String.equal s \"\" then None else Some s)\n\nlet decode_replace ~pattern ~replacement tokens =\n  [ replace_all ~pattern ~by:replacement (String.concat \"\" tokens) ]\n\nlet strip_token ~left ~right content token =\n  let len = String.length token in\n  let start =\n    if left then\n      let rec find i =\n        if i < len && Char.equal token.[i] content then find (i + 1) else i\n      in\n      find 0\n    else 0\n  in\n  let stop =\n    if right then\n      let rec find i =\n        if i >= 0 && Char.equal token.[i] content then find (i - 1) else i + 1\n      in\n      find (len - 1)\n    else len\n  in\n  if start < stop then String.sub token start (stop - start) else \"\"\n\nlet rec decode_chain decoder tokens =\n  match decoder with\n  | BPE { suffix } -> decode_bpe ~suffix tokens\n  | Byte_level -> [ decode_byte_level tokens ]\n  | Byte_fallback -> decode_byte_fallback tokens\n  | Word_piece { prefix; cleanup } ->\n      [ decode_wordpiece ~prefix ~cleanup tokens ]\n  | Metaspace { replacement; add_prefix_space } ->\n      decode_metaspace ~replacement ~add_prefix_space tokens\n  | CTC { pad_token; word_delimiter_token; cleanup } ->\n      decode_ctc ~pad_token ~word_delimiter_token ~cleanup tokens\n  | Replace { pattern; replacement } ->\n      decode_replace ~pattern ~replacement tokens\n  | Strip { left; right; content } ->\n      [ strip_token ~left ~right content (String.concat \"\" tokens) ]\n  | Fuse -> [ String.concat \"\" tokens ]\n  | Sequence decoders ->\n      List.fold_left (fun toks dec -> decode_chain dec toks) tokens decoders\n\nlet decode decoder tokens = String.concat \"\" (decode_chain decoder tokens)\n\n(* Constructors *)\n\nlet bpe ?(suffix = \"\") () = BPE { suffix }\nlet byte_level () = Byte_level\nlet byte_fallback () = Byte_fallback\n\nlet wordpiece ?(prefix = \"##\") ?(cleanup = true) () =\n  Word_piece { 
prefix; cleanup }\n\nlet metaspace ?(replacement = '_') ?(add_prefix_space = true) () =\n  Metaspace { replacement; add_prefix_space }\n\nlet ctc ?(pad_token = \"<pad>\") ?(word_delimiter_token = \"|\") ?(cleanup = true)\n    () =\n  CTC { pad_token; word_delimiter_token; cleanup }\n\nlet sequence decoders = Sequence decoders\nlet replace ~pattern ~by () = Replace { pattern; replacement = by }\n\nlet strip ?(left = false) ?(right = false) ?(content = ' ') () =\n  Strip { left; right; content }\n\nlet fuse () = Fuse\n\n(* Formatting *)\n\nlet rec pp ppf = function\n  | BPE { suffix } ->\n      if suffix <> \"\" then Format.fprintf ppf \"bpe ~suffix:%S\" suffix\n      else Format.fprintf ppf \"bpe\"\n  | Byte_level -> Format.fprintf ppf \"byte_level\"\n  | Byte_fallback -> Format.fprintf ppf \"byte_fallback\"\n  | Word_piece { prefix; cleanup } ->\n      Format.fprintf ppf \"wordpiece ~prefix:%S ~cleanup:%b\" prefix cleanup\n  | Metaspace { replacement; add_prefix_space } ->\n      Format.fprintf ppf \"metaspace ~replacement:%C ~add_prefix_space:%b\"\n        replacement add_prefix_space\n  | CTC { pad_token; word_delimiter_token; cleanup } ->\n      Format.fprintf ppf\n        \"ctc ~pad_token:%S ~word_delimiter_token:%S ~cleanup:%b\" pad_token\n        word_delimiter_token cleanup\n  | Replace { pattern; replacement } ->\n      Format.fprintf ppf \"replace ~pattern:%S ~by:%S\" pattern replacement\n  | Strip { left; right; content } ->\n      Format.fprintf ppf \"strip ~left:%b ~right:%b ~content:%C\" left right\n        content\n  | Fuse -> Format.fprintf ppf \"fuse\"\n  | Sequence decoders ->\n      Format.fprintf ppf \"@[<hv 2>sequence [%a]@]\"\n        (Format.pp_print_list\n           ~pp_sep:(fun ppf () -> Format.fprintf ppf \";@ \")\n           pp)\n        decoders\n\n(* Serialization *)\n\nlet json_obj pairs =\n  Jsont.Json.object' (List.map (fun (k, v) -> (Jsont.Json.name k, v)) pairs)\n\nlet rec to_json = function\n  | BPE { suffix } ->\n      json_obj\n  
      [\n          (\"type\", Jsont.Json.string \"BPEDecoder\");\n          (\"suffix\", Jsont.Json.string suffix);\n        ]\n  | Byte_level -> json_obj [ (\"type\", Jsont.Json.string \"Byte_level\") ]\n  | Byte_fallback -> json_obj [ (\"type\", Jsont.Json.string \"Byte_fallback\") ]\n  | Word_piece { prefix; cleanup } ->\n      json_obj\n        [\n          (\"type\", Jsont.Json.string \"Word_piece\");\n          (\"prefix\", Jsont.Json.string prefix);\n          (\"cleanup\", Jsont.Json.bool cleanup);\n        ]\n  | Metaspace { replacement; add_prefix_space } ->\n      json_obj\n        [\n          (\"type\", Jsont.Json.string \"Metaspace\");\n          (\"replacement\", Jsont.Json.string (String.make 1 replacement));\n          (\"add_prefix_space\", Jsont.Json.bool add_prefix_space);\n        ]\n  | CTC { pad_token; word_delimiter_token; cleanup } ->\n      json_obj\n        [\n          (\"type\", Jsont.Json.string \"CTC\");\n          (\"pad_token\", Jsont.Json.string pad_token);\n          (\"word_delimiter_token\", Jsont.Json.string word_delimiter_token);\n          (\"cleanup\", Jsont.Json.bool cleanup);\n        ]\n  | Replace { pattern; replacement } ->\n      json_obj\n        [\n          (\"type\", Jsont.Json.string \"Replace\");\n          (\"pattern\", Jsont.Json.string pattern);\n          (\"content\", Jsont.Json.string replacement);\n        ]\n  | Strip { left; right; content } ->\n      json_obj\n        [\n          (\"type\", Jsont.Json.string \"Strip\");\n          (\"strip_left\", Jsont.Json.bool left);\n          (\"strip_right\", Jsont.Json.bool right);\n          (\"content\", Jsont.Json.string (String.make 1 content));\n        ]\n  | Fuse -> json_obj [ (\"type\", Jsont.Json.string \"Fuse\") ]\n  | Sequence decoders ->\n      json_obj\n        [\n          (\"type\", Jsont.Json.string \"Sequence\");\n          (\"decoders\", Jsont.Json.list (List.map to_json decoders));\n        ]\n\nlet find_field fields name = Option.map snd 
(Jsont.Json.find_mem name fields)\n\nlet string_field fields name ~default =\n  match find_field fields name with\n  | Some (Jsont.String (s, _)) -> s\n  | _ -> default\n\nlet bool_field fields name ~default =\n  match find_field fields name with\n  | Some (Jsont.Bool (b, _)) -> b\n  | _ -> default\n\nlet char_field fields name ~default =\n  match find_field fields name with\n  | Some (Jsont.String (s, _)) when String.length s > 0 -> s.[0]\n  | _ -> default\n\nlet rec of_json = function\n  | Jsont.Object (fields, _) -> (\n      let ( let* ) = Result.bind in\n      match find_field fields \"type\" with\n      | Some (Jsont.String (\"BPEDecoder\", _)) ->\n          Ok (BPE { suffix = string_field fields \"suffix\" ~default:\"\" })\n      | Some (Jsont.String ((\"Byte_level\" | \"ByteLevel\"), _)) -> Ok Byte_level\n      | Some (Jsont.String ((\"Byte_fallback\" | \"ByteFallback\"), _)) ->\n          Ok Byte_fallback\n      | Some (Jsont.String ((\"Word_piece\" | \"WordPiece\"), _)) ->\n          Ok\n            (Word_piece\n               {\n                 prefix = string_field fields \"prefix\" ~default:\"##\";\n                 cleanup = bool_field fields \"cleanup\" ~default:true;\n               })\n      | Some (Jsont.String (\"Metaspace\", _)) ->\n          Ok\n            (Metaspace\n               {\n                 replacement = char_field fields \"replacement\" ~default:'_';\n                 add_prefix_space =\n                   bool_field fields \"add_prefix_space\" ~default:true;\n               })\n      | Some (Jsont.String (\"CTC\", _)) ->\n          Ok\n            (CTC\n               {\n                 pad_token = string_field fields \"pad_token\" ~default:\"<pad>\";\n                 word_delimiter_token =\n                   string_field fields \"word_delimiter_token\" ~default:\"|\";\n                 cleanup = bool_field fields \"cleanup\" ~default:true;\n               })\n      | Some (Jsont.String (\"Replace\", _)) ->\n          let* 
pattern =\n            match find_field fields \"pattern\" with\n            | Some (Jsont.String (s, _)) -> Ok s\n            | Some (Jsont.Object (pattern_fields, _)) -> (\n                match Jsont.Json.find_mem \"String\" pattern_fields with\n                | Some (_, Jsont.String (p, _)) -> Ok p\n                | _ -> Error err_replace_missing_pattern)\n            | _ -> Error err_replace_missing_pattern\n          in\n          Ok\n            (Replace\n               {\n                 pattern;\n                 replacement = string_field fields \"content\" ~default:\"\";\n               })\n      | Some (Jsont.String (\"Strip\", _)) ->\n          Ok\n            (Strip\n               {\n                 left = bool_field fields \"strip_left\" ~default:false;\n                 right = bool_field fields \"strip_right\" ~default:false;\n                 content = char_field fields \"content\" ~default:' ';\n               })\n      | Some (Jsont.String (\"Fuse\", _)) -> Ok Fuse\n      | Some (Jsont.String (\"Sequence\", _)) -> (\n          match find_field fields \"decoders\" with\n          | Some (Jsont.Array (decs, _)) ->\n              let* decoders =\n                List.fold_left\n                  (fun acc j ->\n                    let* acc = acc in\n                    let* d = of_json j in\n                    Ok (d :: acc))\n                  (Ok []) decs\n              in\n              Ok (Sequence (List.rev decoders))\n          | _ -> Error err_seq_missing_decoders)\n      | Some (Jsont.String (typ, _)) -> Error (err_unknown_type typ)\n      | _ -> Error \"missing or invalid decoder type field\")\n  | _ -> Error err_expected_object\n"
  },
  {
    "path": "packages/brot/lib/decoder.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Decoding tokens back to text.\n\n    Decoders convert token strings back into natural text by reversing\n    encoding-specific transformations (prefix/suffix removal, byte-level\n    decoding, whitespace normalization, etc.).\n\n    Decoders operate on token {e strings}, not IDs. Convert IDs to strings via\n    vocabulary first, then apply {!decode}.\n\n    Some decoders transform each token independently ({e per-token}: {!bpe},\n    {!metaspace}, {!byte_fallback}), while others collapse the entire token\n    list into a single result ({e collapsing}: {!byte_level}, {!wordpiece},\n    {!replace}, {!strip}, {!fuse}). This distinction matters when composing\n    decoders with {!sequence}. *)\n\ntype t\n(** The type for decoders. *)\n\n(** {1:constructors Constructors} *)\n\nval bpe : ?suffix:string -> unit -> t\n(** [bpe ~suffix ()] is a per-token decoder for BPE-encoded tokens. Strips\n    [suffix] from end-of-word tokens and inserts spaces between words. [suffix]\n    defaults to [\"\"]. *)\n\nval byte_level : unit -> t\n(** [byte_level ()] is a collapsing decoder that reverses GPT-2 style\n    byte-to-Unicode encoding back to original bytes. *)\n\nval byte_fallback : unit -> t\n(** [byte_fallback ()] is a per-token decoder for byte fallback tokens. Converts\n    hex byte tokens (e.g. [\"<0x41>\"]) back to their byte values, accumulating\n    consecutive byte tokens into strings. Non-byte tokens pass through\n    unchanged. *)\n\nval wordpiece : ?prefix:string -> ?cleanup:bool -> unit -> t\n(** [wordpiece ~prefix ~cleanup ()] is a collapsing decoder for WordPiece\n    tokens. Strips continuation [prefix] (default [\"##\"]) from non-initial\n    subwords and joins tokens into words. 
When [cleanup] is [true] (default),\n    normalizes whitespace in the result. *)\n\nval metaspace : ?replacement:char -> ?add_prefix_space:bool -> unit -> t\n(** [metaspace ~replacement ~add_prefix_space ()] is a per-token decoder that\n    converts metaspace markers back to regular spaces. [replacement] defaults to\n    ['_']. When [add_prefix_space] is [true] (default), the leading replacement\n    character on the first token is stripped. *)\n\nval ctc :\n  ?pad_token:string ->\n  ?word_delimiter_token:string ->\n  ?cleanup:bool ->\n  unit ->\n  t\n(** [ctc ~pad_token ~word_delimiter_token ~cleanup ()] is a decoder for\n    {{:https://distill.pub/2017/ctc/}CTC (Connectionist Temporal\n     Classification)} output. Deduplicates consecutive tokens, removes\n    [pad_token] (default [\"<pad>\"]), and when [cleanup] is [true] (default),\n    replaces [word_delimiter_token] (default [\"|\"]) with spaces. *)\n\nval sequence : t list -> t\n(** [sequence decoders] chains [decoders] left-to-right. Each decoder's output\n    token list feeds into the next. *)\n\nval replace : pattern:string -> by:string -> unit -> t\n(** [replace ~pattern ~by ()] is a collapsing decoder that joins the token list,\n    replaces all literal occurrences of [pattern] with [by] in the result, and\n    returns a single-element list. *)\n\nval strip : ?left:bool -> ?right:bool -> ?content:char -> unit -> t\n(** [strip ~left ~right ~content ()] is a collapsing decoder that joins the\n    token list and removes leading (when [left] is [true]) and/or trailing (when\n    [right] is [true]) occurrences of [content] from the result. [left] and\n    [right] default to [false]; [content] defaults to [' ']. *)\n\nval fuse : unit -> t\n(** [fuse ()] is a collapsing decoder that concatenates all tokens into a single\n    string with no delimiter. 
*)\n\n(** {1:ops Operations} *)\n\nval decode : t -> string list -> string\n(** [decode decoder tokens] applies [decoder] to [tokens] and returns the\n    decoded text. *)\n\n(** {1:fmt Formatting} *)\n\nval pp : Format.formatter -> t -> unit\n(** [pp ppf decoder] formats [decoder] for debugging. *)\n\n(** {1:serialization Serialization} *)\n\nval to_json : t -> Jsont.json\n(** [to_json decoder] serializes [decoder] to HuggingFace JSON format. *)\n\nval of_json : Jsont.json -> (t, string) result\n(** [of_json json] is a decoder from HuggingFace JSON format. Errors if [json]\n    is not an object, has a missing or unknown [\"type\"] field, or has invalid\n    parameters. *)\n"
  },
  {
    "path": "packages/brot/lib/dune",
    "content": "(library\n (name brot)\n (public_name brot)\n (private_modules bpe wordpiece word_level unigram chars)\n (libraries re jsont jsont.bytesrw uucp uunf))\n"
  },
  {
    "path": "packages/brot/lib/encoding.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\ntype t = {\n  ids : int array;\n  type_ids : int array;\n  tokens : string array;\n  words : int option array;\n  offsets : (int * int) array;\n  special_tokens_mask : int array;\n  attention_mask : int array;\n  mutable overflowing : t list;\n  sequence_ranges : (int, int * int) Hashtbl.t;\n}\n\n(* Constructors *)\n\nlet empty_ranges : (int, int * int) Hashtbl.t = Hashtbl.create 0\n\nlet empty =\n  {\n    ids = [||];\n    type_ids = [||];\n    tokens = [||];\n    words = [||];\n    offsets = [||];\n    special_tokens_mask = [||];\n    attention_mask = [||];\n    overflowing = [];\n    sequence_ranges = empty_ranges;\n  }\n\nlet create ~ids ~type_ids ~tokens ~words ~offsets ~special_tokens_mask\n    ~attention_mask ?(overflowing = []) () =\n  {\n    ids;\n    type_ids;\n    tokens;\n    words;\n    offsets;\n    special_tokens_mask;\n    attention_mask;\n    overflowing;\n    sequence_ranges = empty_ranges;\n  }\n\nlet token ~id ~token ~offset ~type_id ~special =\n  {\n    ids = [| id |];\n    type_ids = [| type_id |];\n    tokens = [| token |];\n    words = [| None |];\n    offsets = [| offset |];\n    special_tokens_mask = [| (if special then 1 else 0) |];\n    attention_mask = [| 1 |];\n    overflowing = [];\n    sequence_ranges = empty_ranges;\n  }\n\nlet from_tokens tokens ~type_id =\n  let n = List.length tokens in\n  let ids = Array.make n 0 in\n  let token_strs = Array.make n \"\" in\n  let offsets = Array.make n (0, 0) in\n  List.iteri\n    (fun i (id, tok, off) ->\n      ids.(i) <- id;\n      token_strs.(i) <- tok;\n      offsets.(i) <- off)\n    tokens;\n  {\n    ids;\n    tokens = token_strs;\n    offsets;\n    words = Array.make n None;\n    type_ids = Array.make n type_id;\n    
attention_mask = Array.make n 1;\n    special_tokens_mask = Array.make n 0;\n    overflowing = [];\n    sequence_ranges = empty_ranges;\n  }\n\nlet concat a b =\n  {\n    ids = Array.append a.ids b.ids;\n    type_ids = Array.append a.type_ids b.type_ids;\n    tokens = Array.append a.tokens b.tokens;\n    words = Array.append a.words b.words;\n    offsets = Array.append a.offsets b.offsets;\n    special_tokens_mask =\n      Array.append a.special_tokens_mask b.special_tokens_mask;\n    attention_mask = Array.append a.attention_mask b.attention_mask;\n    overflowing = a.overflowing;\n    sequence_ranges = a.sequence_ranges;\n  }\n\nlet concat_list encodings =\n  match encodings with\n  | [] -> empty\n  | [ single ] -> single\n  | first :: _ ->\n      let total =\n        List.fold_left (fun acc t -> acc + Array.length t.ids) 0 encodings\n      in\n      let ids = Array.make total 0 in\n      let type_ids = Array.make total 0 in\n      let tokens = Array.make total \"\" in\n      let words = Array.make total None in\n      let offsets = Array.make total (0, 0) in\n      let special_tokens_mask = Array.make total 0 in\n      let attention_mask = Array.make total 0 in\n      let pos = ref 0 in\n      List.iter\n        (fun t ->\n          let n = Array.length t.ids in\n          Array.blit t.ids 0 ids !pos n;\n          Array.blit t.type_ids 0 type_ids !pos n;\n          Array.blit t.tokens 0 tokens !pos n;\n          Array.blit t.words 0 words !pos n;\n          Array.blit t.offsets 0 offsets !pos n;\n          Array.blit t.special_tokens_mask 0 special_tokens_mask !pos n;\n          Array.blit t.attention_mask 0 attention_mask !pos n;\n          pos := !pos + n)\n        encodings;\n      {\n        ids;\n        type_ids;\n        tokens;\n        words;\n        offsets;\n        special_tokens_mask;\n        attention_mask;\n        overflowing = first.overflowing;\n        sequence_ranges = first.sequence_ranges;\n      }\n\n(* Accessors *)\n\nlet is_empty t = 
Array.length t.ids = 0\nlet length t = Array.length t.ids\nlet ids t = t.ids\nlet type_ids t = t.type_ids\nlet tokens t = t.tokens\nlet word_ids t = t.words\nlet offsets t = t.offsets\nlet special_tokens_mask t = t.special_tokens_mask\nlet attention_mask t = t.attention_mask\nlet overflowing t = t.overflowing\n\n(* Truncation *)\n\nlet slice t start len =\n  {\n    ids = Array.sub t.ids start len;\n    type_ids = Array.sub t.type_ids start len;\n    tokens = Array.sub t.tokens start len;\n    words = Array.sub t.words start len;\n    offsets = Array.sub t.offsets start len;\n    special_tokens_mask = Array.sub t.special_tokens_mask start len;\n    attention_mask = Array.sub t.attention_mask start len;\n    overflowing = [];\n    sequence_ranges = empty_ranges;\n  }\n\nlet truncate t ~max_length ~stride ~direction =\n  let encoding_len = length t in\n  if max_length >= encoding_len then t\n  else if max_length = 0 then { empty with overflowing = [ t ] }\n  else begin\n    assert (stride < max_length);\n    let step = max_length - stride in\n    let ranges =\n      match direction with\n      | `Right ->\n          let rec loop start acc =\n            if start >= encoding_len then List.rev acc\n            else\n              let stop = min (start + max_length) encoding_len in\n              loop (start + step) ((start, stop) :: acc)\n          in\n          loop 0 []\n      | `Left ->\n          (* Windows are generated right-to-left; reverse so the main\n             encoding keeps the last [max_length] tokens and stop once a\n             window reaches the start of the sequence. *)\n          let rec loop stop acc =\n            if stop <= 0 then List.rev acc\n            else\n              let start = max 0 (stop - max_length) in\n              let acc = (start, stop) :: acc in\n              if start = 0 then List.rev acc else loop (stop - step) acc\n          in\n          loop encoding_len []\n    in\n    match ranges with\n    | [] -> empty\n    | (start, stop) :: rest ->\n        let enc = slice t start (stop - start) in\n        enc.overflowing <-\n          List.map (fun (start, stop) -> slice t start (stop - start)) rest;\n        enc\n  end\n\n(* Pad *)\n\nlet pad_array src n fill direction =\n  let src_len = Array.length 
src in\n  let dst = Array.make (src_len + n) fill in\n  let off = match direction with `Left -> n | `Right -> 0 in\n  Array.blit src 0 dst off src_len;\n  dst\n\nlet rec pad t ~target_length ~pad_id ~pad_type_id ~pad_token ~direction =\n  let overflowing =\n    List.map\n      (fun e -> pad e ~target_length ~pad_id ~pad_type_id ~pad_token ~direction)\n      t.overflowing\n  in\n  let current_len = length t in\n  if current_len >= target_length then { t with overflowing }\n  else\n    let n = target_length - current_len in\n    let pad_a arr fill = pad_array arr n fill direction in\n    let sequence_ranges =\n      match direction with\n      | `Right -> t.sequence_ranges\n      | `Left ->\n          if Hashtbl.length t.sequence_ranges = 0 then empty_ranges\n          else begin\n            let tbl = Hashtbl.create (Hashtbl.length t.sequence_ranges) in\n            Hashtbl.iter\n              (fun seq_id (start, stop) ->\n                Hashtbl.add tbl seq_id (start + n, stop + n))\n              t.sequence_ranges;\n            tbl\n          end\n    in\n    {\n      ids = pad_a t.ids pad_id;\n      type_ids = pad_a t.type_ids pad_type_id;\n      tokens = pad_a t.tokens pad_token;\n      words = pad_a t.words None;\n      offsets = pad_a t.offsets (0, 0);\n      special_tokens_mask = pad_a t.special_tokens_mask 1;\n      attention_mask = pad_a t.attention_mask 0;\n      overflowing;\n      sequence_ranges;\n    }\n"
  },
  {
    "path": "packages/brot/lib/encoding.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Tokenization encodings.\n\n    An encoding bundles token IDs for model input with alignment metadata: byte\n    offsets, word indices, segment type IDs, attention masks, and special-token\n    flags.\n\n    Encodings are produced by {!Brot.encode} and post-processed with\n    {!val-truncate} and {!val-pad}. All parallel arrays ({!val-ids},\n    {!val-type_ids}, {!val-tokens}, {!val-word_ids}, {!val-offsets},\n    {!val-special_tokens_mask}, {!val-attention_mask}) share the same length,\n    equal to {!val-length}. *)\n\ntype t\n(** The type for tokenization encodings. *)\n\n(** {1:construct Construction} *)\n\nval empty : t\n(** [empty] is the encoding with no tokens. *)\n\nval create :\n  ids:int array ->\n  type_ids:int array ->\n  tokens:string array ->\n  words:int option array ->\n  offsets:(int * int) array ->\n  special_tokens_mask:int array ->\n  attention_mask:int array ->\n  ?overflowing:t list ->\n  unit ->\n  t\n(** [create ~ids ~type_ids ~tokens ~words ~offsets ~special_tokens_mask\n     ~attention_mask ()] is an encoding from the given arrays.\n\n    All arrays must have the same length; no validation is performed.\n    [overflowing] defaults to [[]]. *)\n\nval token :\n  id:int -> token:string -> offset:int * int -> type_id:int -> special:bool -> t\n(** [token ~id ~token ~offset ~type_id ~special] is a single-token encoding.\n    When [special] is [true], {!val-special_tokens_mask} is [1] and\n    {!val-word_ids} is [None]; otherwise {!val-special_tokens_mask} is [0].\n    {!val-attention_mask} is always [1]. 
*)\n\nval from_tokens : (int * string * (int * int)) list -> type_id:int -> t\n(** [from_tokens tokens ~type_id] is an encoding from a list of\n    [(id, token_string, (start, end_offset))] triples. Every token gets the\n    given [type_id], {!val-attention_mask} [1], {!val-special_tokens_mask} [0]\n    and {!val-word_ids} [None]. *)\n\nval concat : t -> t -> t\n(** [concat a b] is the encoding with [a]'s tokens followed by [b]'s.\n    {!val-overflowing} and sequence ranges are taken from [a]. *)\n\nval concat_list : t list -> t\n(** [concat_list encs] is the concatenation of [encs] in order.\n    {!val-overflowing} and sequence ranges are taken from the first element.\n    Allocates once rather than creating intermediate arrays per pair. *)\n\n(** {1:access Accessors} *)\n\nval ids : t -> int array\n(** [ids enc] is the token ID array. *)\n\nval type_ids : t -> int array\n(** [type_ids enc] is the segment ID array. Typically [0] for the first sequence\n    and [1] for the second in sentence-pair tasks. *)\n\nval tokens : t -> string array\n(** [tokens enc] is the string representation of each token. *)\n\nval word_ids : t -> int option array\n(** [word_ids enc] maps each token to its source word index, or [None] for\n    special tokens. *)\n\nval offsets : t -> (int * int) array\n(** [offsets enc] is the [(start, end_)] byte offset spans into the original\n    text for each token. *)\n\nval special_tokens_mask : t -> int array\n(** [special_tokens_mask enc] is [1] for special tokens ([CLS], [SEP], padding)\n    and [0] for content tokens. *)\n\nval attention_mask : t -> int array\n(** [attention_mask enc] is [1] for real tokens and [0] for padding tokens. *)\n\nval overflowing : t -> t list\n(** [overflowing enc] is the list of overflow encodings produced by\n    {!val-truncate} when the input exceeds [max_length]. Each element is a\n    sliding window over the excess tokens. *)\n\nval is_empty : t -> bool\n(** [is_empty enc] is [true] iff [enc] has no tokens. 
*)\n\nval length : t -> int\n(** [length enc] is the number of tokens in [enc]. *)\n\n(** {1:ops Operations} *)\n\nval truncate :\n  t -> max_length:int -> stride:int -> direction:[ `Left | `Right ] -> t\n(** [truncate enc ~max_length ~stride ~direction] limits [enc] to at most\n    [max_length] tokens.\n\n    Excess tokens are split into sliding windows of size [max_length] with\n    overlap [stride] and stored in {!val-overflowing}. If\n    [length enc <= max_length], [enc] is returned unchanged.\n\n    [stride] must be strictly less than [max_length]. When [max_length] is [0],\n    all tokens move to {!val-overflowing} and {!val-empty} is returned. *)\n\nval pad :\n  t ->\n  target_length:int ->\n  pad_id:int ->\n  pad_type_id:int ->\n  pad_token:string ->\n  direction:[ `Left | `Right ] ->\n  t\n(** [pad enc ~target_length ~pad_id ~pad_type_id ~pad_token ~direction] extends\n    [enc] to exactly [target_length] tokens.\n\n    Padding tokens have {!val-attention_mask} [0], {!val-special_tokens_mask}\n    [1] and offset [(0, 0)]. If [length enc >= target_length], [enc] is\n    returned unchanged. Padding is applied recursively to {!val-overflowing}\n    encodings. When [direction] is [`Left], sequence ranges are shifted by the\n    pad amount. *)\n"
  },
  {
    "path": "packages/brot/lib/normalizer.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Errors *)\n\nlet err_expected_object = \"expected JSON object\"\nlet err_missing_type = \"missing type field\"\nlet err_replace_invalid_pattern = \"invalid pattern\"\nlet err_replace_missing_pattern = \"missing pattern\"\nlet err_replace_missing_content = \"missing content\"\nlet err_prepend_missing = \"missing prepend field\"\nlet err_sequence_missing = \"missing normalizers\"\nlet strf = Printf.sprintf\n\n(* Type *)\n\ntype t =\n  | Bert of {\n      clean_text : bool;\n      handle_chinese_chars : bool;\n      strip_accents : bool option;\n      lowercase : bool;\n    }\n  | Strip of { left : bool; right : bool }\n  | Strip_accents\n  | NFC\n  | NFD\n  | NFKC\n  | NFKD\n  | Lowercase\n  | Replace of { pattern : string; replacement : string; compiled : Re.re }\n  | Prepend of string\n  | Byte_level of { add_prefix_space : bool; use_regex : bool }\n  | Sequence of t list\n\n(* Unicode text transforms *)\n\nlet normalize_utf8 nf text =\n  let len = String.length text in\n  if len = 0 then text\n  else\n    let rec all_ascii i =\n      i >= len\n      || (Char.code (String.unsafe_get text i) < 0x80 && all_ascii (i + 1))\n    in\n    if all_ascii 0 then text else Uunf_string.normalize_utf_8 nf text\n\nlet case_fold text =\n  let len = String.length text in\n  let rec needs_fold i =\n    if i >= len then false\n    else\n      let byte = Char.code (String.unsafe_get text i) in\n      if byte >= 0x41 && byte <= 0x5A then true\n      else if byte >= 128 then true\n      else needs_fold (i + 1)\n  in\n  if not (needs_fold 0) then text\n  else\n    let b = Buffer.create len in\n    let i = ref 0 in\n    while !i < len do\n      let byte = Char.code (String.unsafe_get text !i) in\n      if byte < 128 
then (\n        let c = if byte >= 0x41 && byte <= 0x5A then byte + 32 else byte in\n        Buffer.add_char b (Char.unsafe_chr c);\n        incr i)\n      else\n        let d = String.get_utf_8_uchar text !i in\n        let n = Uchar.utf_decode_length d in\n        (if Uchar.utf_decode_is_valid d then\n           let u = Uchar.utf_decode_uchar d in\n           match Uucp.Case.Fold.fold u with\n           | `Self -> Buffer.add_utf_8_uchar b u\n           | `Uchars us -> List.iter (fun u -> Buffer.add_utf_8_uchar b u) us);\n        i := !i + n\n    done;\n    Buffer.contents b\n\nlet strip_accents_text text =\n  let len = String.length text in\n  let rec has_non_ascii i =\n    if i >= len then false\n    else if Char.code (String.unsafe_get text i) >= 128 then true\n    else has_non_ascii (i + 1)\n  in\n  if not (has_non_ascii 0) then text\n  else\n    let b = Buffer.create len in\n    let i = ref 0 in\n    while !i < len do\n      let byte = Char.code (String.unsafe_get text !i) in\n      if byte < 128 then (\n        Buffer.add_char b (Char.unsafe_chr byte);\n        incr i)\n      else\n        let d = String.get_utf_8_uchar text !i in\n        let n = Uchar.utf_decode_length d in\n        (if Uchar.utf_decode_is_valid d then\n           let u = Uchar.utf_decode_uchar d in\n           match Uucp.Gc.general_category u with\n           | `Mn | `Mc | `Me -> ()\n           | _ -> Buffer.add_utf_8_uchar b u);\n        i := !i + n\n    done;\n    Buffer.contents b\n\n(* UTF-8 helpers *)\n\n(* Returns (codepoint lsl 3) lor byte_length — zero allocation. 
*)\nlet[@inline] utf8_next s i =\n  let d = String.get_utf_8_uchar s i in\n  (Uchar.to_int (Uchar.utf_decode_uchar d) lsl 3) lor Uchar.utf_decode_length d\n\n(* Character classification *)\n\nlet[@inline] is_whitespace code =\n  code = 0x09 || code = 0x0A || code = 0x0D || code = 0x20\n  || Uucp.White.is_white_space (Uchar.of_int code)\n\nlet[@inline] is_control code =\n  if code = 0x09 || code = 0x0A || code = 0x0D then false\n  else\n    match Uucp.Gc.general_category (Uchar.of_int code) with\n    | `Cc | `Cf | `Cn | `Co -> true\n    | _ -> false\n\nlet[@inline] is_chinese_char code =\n  (code >= 0x4E00 && code <= 0x9FFF)\n  || (code >= 0x3400 && code <= 0x4DBF)\n  || (code >= 0x20000 && code <= 0x2A6DF)\n  || (code >= 0x2A700 && code <= 0x2B73F)\n  || (code >= 0x2B740 && code <= 0x2B81F)\n  || (code >= 0x2B820 && code <= 0x2CEAF)\n  || (code >= 0xF900 && code <= 0xFAFF)\n  || (code >= 0x2F800 && code <= 0x2FA1F)\n\n(* Operations *)\n\nlet clean_text s =\n  let len = String.length s in\n  let buf = Buffer.create len in\n  let i = ref 0 in\n  while !i < len do\n    let b0 = Char.code (String.unsafe_get s !i) in\n    if b0 < 128 then begin\n      if b0 = 9 || b0 = 10 || b0 = 13 || b0 = 32 then Buffer.add_char buf ' '\n      else if b0 >= 33 && b0 < 127 then Buffer.add_char buf (Char.unsafe_chr b0);\n      incr i\n    end\n    else begin\n      let p = utf8_next s !i in\n      let code = p lsr 3 and clen = p land 7 in\n      if code <> 0xFFFD && not (is_control code) then\n        if is_whitespace code then Buffer.add_char buf ' '\n        else Buffer.add_substring buf s !i clen;\n      i := !i + clen\n    end\n  done;\n  Buffer.contents buf\n\nlet handle_chinese_chars s =\n  let len = String.length s in\n  let rec has_non_ascii i =\n    i < len\n    && (Char.code (String.unsafe_get s i) >= 128 || has_non_ascii (i + 1))\n  in\n  if not (has_non_ascii 0) then s\n  else\n    let buf = Buffer.create (len + (len / 4)) in\n    let i = ref 0 in\n    while !i < len do\n    
  let b0 = Char.code (String.unsafe_get s !i) in\n      if b0 < 128 then begin\n        Buffer.add_char buf (Char.unsafe_chr b0);\n        incr i\n      end\n      else begin\n        let p = utf8_next s !i in\n        let code = p lsr 3 and clen = p land 7 in\n        if is_chinese_char code then (\n          Buffer.add_char buf ' ';\n          Buffer.add_substring buf s !i clen;\n          Buffer.add_char buf ' ')\n        else Buffer.add_substring buf s !i clen;\n        i := !i + clen\n      end\n    done;\n    Buffer.contents buf\n\nlet do_strip_accents s = strip_accents_text (normalize_utf8 `NFD s)\nlet do_lowercase s = case_fold s\n\nlet strip_whitespace s ~left ~right =\n  let len = String.length s in\n  let start =\n    if left then\n      let rec loop i =\n        if i >= len then len\n        else\n          let p = utf8_next s i in\n          let code = p lsr 3 and clen = p land 7 in\n          if is_whitespace code then loop (i + clen) else i\n      in\n      loop 0\n    else 0\n  in\n  let stop =\n    if right then\n      let rec loop i last =\n        if i >= len then last\n        else\n          let p = utf8_next s i in\n          let code = p lsr 3 and clen = p land 7 in\n          let next = i + clen in\n          if is_whitespace code then loop next last else loop next next\n      in\n      loop start start\n    else len\n  in\n  if start = 0 && stop = len then s else String.sub s start (stop - start)\n\n(* Byte-level encoding *)\n\nlet byte_to_unicode =\n  let is_direct b =\n    (b >= 33 && b <= 126) || (b >= 161 && b <= 172) || b >= 174\n  in\n  let tbl = Array.make 256 0 in\n  let n = ref 0 in\n  for b = 0 to 255 do\n    if is_direct b then tbl.(b) <- b\n    else (\n      tbl.(b) <- 256 + !n;\n      incr n)\n  done;\n  tbl\n\nlet apply_byte_level s ~add_prefix_space ~use_regex:_ =\n  let s =\n    if add_prefix_space && String.length s > 0 then\n      let code = utf8_next s 0 lsr 3 in\n      if is_whitespace code then s else \" \" ^ s\n    
else s\n  in\n  let len = String.length s in\n  let buf = Buffer.create (len * 2) in\n  for i = 0 to len - 1 do\n    let b = Char.code (String.unsafe_get s i) in\n    Buffer.add_utf_8_uchar buf (Uchar.of_int byte_to_unicode.(b))\n  done;\n  Buffer.contents buf\n\n(* Constructors *)\n\nlet nfc = NFC\nlet nfd = NFD\nlet nfkc = NFKC\nlet nfkd = NFKD\nlet lowercase = Lowercase\nlet strip_accents = Strip_accents\nlet strip ?(left = true) ?(right = true) () = Strip { left; right }\n\nlet replace ~pattern ~replacement =\n  Replace { pattern; replacement; compiled = Re.compile (Re.Pcre.re pattern) }\n\nlet prepend s = Prepend s\n\nlet byte_level ?(add_prefix_space = false) () =\n  Byte_level { add_prefix_space; use_regex = false }\n\nlet bert ?(clean_text = true) ?(handle_chinese_chars = true)\n    ?(strip_accents = None) ?(lowercase = true) () =\n  Bert { clean_text; handle_chinese_chars; strip_accents; lowercase }\n\nlet sequence ns = Sequence ns\n\n(* Apply *)\n\nlet rec apply t s =\n  match t with\n  | NFC -> normalize_utf8 `NFC s\n  | NFD -> normalize_utf8 `NFD s\n  | NFKC -> normalize_utf8 `NFKC s\n  | NFKD -> normalize_utf8 `NFKD s\n  | Lowercase -> do_lowercase s\n  | Strip_accents -> do_strip_accents s\n  | Strip { left; right } -> strip_whitespace s ~left ~right\n  | Replace { compiled; replacement; _ } ->\n      Re.replace_string compiled ~by:replacement s\n  | Prepend prefix -> if String.length s = 0 then s else prefix ^ s\n  | Byte_level { add_prefix_space; use_regex } ->\n      apply_byte_level s ~add_prefix_space ~use_regex\n  | Bert\n      {\n        clean_text = ct;\n        handle_chinese_chars = hcc;\n        strip_accents = sa;\n        lowercase = lc;\n      } ->\n      let s = if ct then clean_text s else s in\n      let s = if hcc then handle_chinese_chars s else s in\n      let do_strip = match sa with Some v -> v | None -> lc in\n      let s = if do_strip then do_strip_accents s else s in\n      if lc then do_lowercase s else s\n  | Sequence ns -> 
List.fold_left (fun s n -> apply n s) s ns\n\n(* Formatting *)\n\nlet pp_bool_opt ppf = function\n  | None -> Format.pp_print_string ppf \"None\"\n  | Some b -> Format.fprintf ppf \"Some(%b)\" b\n\nlet rec pp ppf = function\n  | NFC -> Format.pp_print_string ppf \"NFC\"\n  | NFD -> Format.pp_print_string ppf \"NFD\"\n  | NFKC -> Format.pp_print_string ppf \"NFKC\"\n  | NFKD -> Format.pp_print_string ppf \"NFKD\"\n  | Lowercase -> Format.pp_print_string ppf \"Lowercase\"\n  | Strip_accents -> Format.pp_print_string ppf \"StripAccents\"\n  | Strip { left; right } ->\n      Format.fprintf ppf \"@[<1>Strip(left=%b,@ right=%b)@]\" left right\n  | Replace { pattern; replacement; _ } ->\n      Format.fprintf ppf \"@[<1>Replace(%S,@ %S)@]\" pattern replacement\n  | Prepend s -> Format.fprintf ppf \"Prepend(%S)\" s\n  | Byte_level { add_prefix_space; use_regex } ->\n      Format.fprintf ppf \"@[<1>ByteLevel(add_prefix_space=%b,@ use_regex=%b)@]\"\n        add_prefix_space use_regex\n  | Bert { clean_text; handle_chinese_chars; strip_accents; lowercase } ->\n      Format.fprintf ppf\n        \"@[<1>Bert(clean_text=%b,@ handle_chinese_chars=%b,@ \\\n         strip_accents=%a,@ lowercase=%b)@]\"\n        clean_text handle_chinese_chars pp_bool_opt strip_accents lowercase\n  | Sequence ns ->\n      Format.fprintf ppf \"@[<1>Sequence[%a]@]\"\n        (Format.pp_print_list\n           ~pp_sep:(fun ppf () -> Format.fprintf ppf \",@ \")\n           pp)\n        ns\n\n(*---------------------------------------------------------------------------\n  Serialization\n  ---------------------------------------------------------------------------*)\n\nlet json_obj pairs =\n  Jsont.Json.object' (List.map (fun (k, v) -> (Jsont.Json.name k, v)) pairs)\n\nlet typed name = json_obj [ (\"type\", Jsont.Json.string name) ]\nlet typed_with name pairs = json_obj ((\"type\", Jsont.Json.string name) :: pairs)\n\nlet rec to_json = function\n  | Bert { clean_text; handle_chinese_chars; strip_accents; 
lowercase } ->\n      typed_with \"Bert\"\n        [\n          (\"clean_text\", Jsont.Json.bool clean_text);\n          (\"handle_chinese_chars\", Jsont.Json.bool handle_chinese_chars);\n          ( \"strip_accents\",\n            match strip_accents with\n            | None -> Jsont.Json.null ()\n            | Some b -> Jsont.Json.bool b );\n          (\"lowercase\", Jsont.Json.bool lowercase);\n        ]\n  | Strip { left; right } ->\n      typed_with \"Strip\"\n        [\n          (\"strip_left\", Jsont.Json.bool left);\n          (\"strip_right\", Jsont.Json.bool right);\n        ]\n  | Strip_accents -> typed \"StripAccents\"\n  | NFC -> typed \"NFC\"\n  | NFD -> typed \"NFD\"\n  | NFKC -> typed \"NFKC\"\n  | NFKD -> typed \"NFKD\"\n  | Lowercase -> typed \"Lowercase\"\n  | Replace { pattern; replacement; _ } ->\n      typed_with \"Replace\"\n        [\n          (\"pattern\", json_obj [ (\"String\", Jsont.Json.string pattern) ]);\n          (\"content\", Jsont.Json.string replacement);\n        ]\n  | Prepend prefix ->\n      typed_with \"Prepend\" [ (\"prepend\", Jsont.Json.string prefix) ]\n  | Byte_level { add_prefix_space; use_regex } ->\n      typed_with \"ByteLevel\"\n        [\n          (\"add_prefix_space\", Jsont.Json.bool add_prefix_space);\n          (\"use_regex\", Jsont.Json.bool use_regex);\n        ]\n  | Sequence ns ->\n      typed_with \"Sequence\"\n        [ (\"normalizers\", Jsont.Json.list (List.map to_json ns)) ]\n\nlet rec of_json = function\n  | Jsont.Object (fields, _) -> (\n      let find name = Option.map snd (Jsont.Json.find_mem name fields) in\n      let get_bool name default =\n        match find name with Some (Jsont.Bool (b, _)) -> b | _ -> default\n      in\n      match find \"type\" with\n      | Some (Jsont.String ((\"Bert\" | \"BertNormalizer\"), _)) ->\n          let strip_accents =\n            match find \"strip_accents\" with\n            | Some (Jsont.Bool (b, _)) -> Some b\n            | _ -> None\n          in\n     
     Ok\n            (Bert\n               {\n                 clean_text = get_bool \"clean_text\" true;\n                 handle_chinese_chars = get_bool \"handle_chinese_chars\" true;\n                 strip_accents;\n                 lowercase = get_bool \"lowercase\" true;\n               })\n      | Some (Jsont.String (\"Strip\", _)) ->\n          Ok\n            (Strip\n               {\n                 left = get_bool \"strip_left\" true;\n                 right = get_bool \"strip_right\" true;\n               })\n      | Some (Jsont.String (\"StripAccents\", _)) -> Ok Strip_accents\n      | Some (Jsont.String (\"NFC\", _)) -> Ok NFC\n      | Some (Jsont.String (\"NFD\", _)) -> Ok NFD\n      | Some (Jsont.String (\"NFKC\", _)) -> Ok NFKC\n      | Some (Jsont.String (\"NFKD\", _)) -> Ok NFKD\n      | Some (Jsont.String (\"Lowercase\", _)) -> Ok Lowercase\n      | Some (Jsont.String (\"Replace\", _)) ->\n          let pattern =\n            match find \"pattern\" with\n            | Some (Jsont.Object (pf, _)) -> (\n                match Jsont.Json.find_mem \"String\" pf with\n                | Some (_, Jsont.String (p, _)) -> Ok p\n                | _ -> Error err_replace_invalid_pattern)\n            | _ -> Error err_replace_missing_pattern\n          in\n          let replacement =\n            match find \"content\" with\n            | Some (Jsont.String (r, _)) -> Ok r\n            | _ -> Error err_replace_missing_content\n          in\n          Result.bind pattern (fun p ->\n              Result.map\n                (fun r -> replace ~pattern:p ~replacement:r)\n                replacement)\n      | Some (Jsont.String (\"Prepend\", _)) -> (\n          match find \"prepend\" with\n          | Some (Jsont.String (p, _)) -> Ok (Prepend p)\n          | _ -> Error err_prepend_missing)\n      | Some (Jsont.String (\"ByteLevel\", _)) ->\n          Ok\n            (Byte_level\n               {\n                 add_prefix_space = get_bool \"add_prefix_space\" 
false;\n                 use_regex = get_bool \"use_regex\" false;\n               })\n      | Some (Jsont.String (\"Sequence\", _)) -> (\n          match find \"normalizers\" with\n          | Some (Jsont.Array (l, _)) ->\n              let rec build acc = function\n                | [] -> Ok (Sequence (List.rev acc))\n                | item :: rest ->\n                    Result.bind (of_json item) (fun n -> build (n :: acc) rest)\n              in\n              build [] l\n          | _ -> Error err_sequence_missing)\n      | Some (Jsont.String (other, _)) ->\n          Error (strf \"Unknown normalizer type: %s\" other)\n      | _ -> Error err_missing_type)\n  | _ -> Error err_expected_object\n"
  },
  {
    "path": "packages/brot/lib/normalizer.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Text normalization.\n\n    Normalizers transform text before tokenization: lowercasing, accent removal,\n    Unicode normalization, whitespace cleanup, and model-specific preprocessing.\n    They are the first stage in the tokenization pipeline, applied before\n    {!Pre_tokenizer} and vocabulary-based encoding.\n\n    Compose normalizers with {!val-sequence}:\n    {[\n    let n =\n      Normalizer.sequence\n        [ Normalizer.nfd; Normalizer.strip_accents; Normalizer.lowercase ]\n    in\n    Normalizer.apply n \"Caf\\u{00E9}\"\n    (* \"cafe\" *)\n    ]}\n\n    See {!Brot} for the full tokenization pipeline. *)\n\ntype t\n(** The type for normalizers. *)\n\n(** {1:normalizers Normalizers} *)\n\n(** {2:unicode Unicode normalization} *)\n\nval nfc : t\n(** [nfc] is Unicode NFC normalization (canonical composition). *)\n\nval nfd : t\n(** [nfd] is Unicode NFD normalization (canonical decomposition). *)\n\nval nfkc : t\n(** [nfkc] is Unicode NFKC normalization (compatibility composition). *)\n\nval nfkd : t\n(** [nfkd] is Unicode NFKD normalization (compatibility decomposition). *)\n\n(** {2:text Text transforms} *)\n\nval lowercase : t\n(** [lowercase] is Unicode case folding to lowercase. *)\n\nval strip_accents : t\n(** [strip_accents] removes combining marks after NFD decomposition. Applies\n    {!val-nfd} before stripping. *)\n\nval strip : ?left:bool -> ?right:bool -> unit -> t\n(** [strip ?left ?right ()] is a normalizer that strips Unicode whitespace from\n    text boundaries. [left] and [right] default to [true]. *)\n\nval replace : pattern:string -> replacement:string -> t\n(** [replace ~pattern ~replacement] is a normalizer that replaces all [pattern]\n    matches with [replacement]. 
[pattern] is a PCRE regular expression, compiled\n    once at construction time.\n\n    Raises [Re.Pcre.Parse_error] if [pattern] is not valid PCRE. *)\n\nval prepend : string -> t\n(** [prepend s] is a normalizer that prepends [s] to non-empty text. Empty text\n    is returned unchanged. *)\n\n(** {2:byte_level Byte-level encoding} *)\n\nval byte_level : ?add_prefix_space:bool -> unit -> t\n(** [byte_level ?add_prefix_space ()] is GPT-2 style byte-level encoding. Each\n    byte is mapped to a printable Unicode codepoint using the GPT-2\n    byte-to-unicode table.\n    - [add_prefix_space] adds a space prefix when the text does not start with\n      whitespace. Defaults to [false]. *)\n\n(** {2:model Model-specific} *)\n\nval bert :\n  ?clean_text:bool ->\n  ?handle_chinese_chars:bool ->\n  ?strip_accents:bool option ->\n  ?lowercase:bool ->\n  unit ->\n  t\n(** [bert ()] is a BERT normalizer.\n\n    - [clean_text]: remove control characters and normalize whitespace. Default:\n      [true].\n    - [handle_chinese_chars]: pad CJK ideographs with spaces. Default: [true].\n    - [strip_accents]: strip accents after NFD decomposition. When [None],\n      accents are stripped iff [lowercase] is [true]. Default: [None].\n    - [lowercase]: lowercase text via Unicode case folding. Default: [true]. *)\n\n(** {2:composition Composition} *)\n\nval sequence : t list -> t\n(** [sequence ns] is the composition of normalizers [ns], applied left to right.\n*)\n\n(** {1:applying Applying} *)\n\nval apply : t -> string -> string\n(** [apply n s] is [s] normalized by [n]. *)\n\n(** {1:formatting Formatting} *)\n\nval pp : Format.formatter -> t -> unit\n(** [pp ppf n] formats [n] for inspection. *)\n\n(** {1:serialization Serialization} *)\n\nval to_json : t -> Jsont.json\n(** [to_json n] is [n] serialized to HuggingFace-compatible JSON. *)\n\nval of_json : Jsont.json -> (t, string) result\n(** [of_json json] is a normalizer deserialized from HuggingFace JSON. 
Errors if\n    [json] is not an object, has a missing or unknown [\"type\"] field, or has\n    invalid parameters. *)\n"
  },
  {
    "path": "packages/brot/lib/post_processor.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nlet strf = Printf.sprintf\nlet err_type_id tok = strf \"expected integer type id after ':' in '%s'\" tok\nlet err_piece tok = strf \"expected 'id' or 'id:type_id', got '%s'\" tok\nlet err_unknown_special tok = strf \"unknown special token '%s'\" tok\nlet err_mismatch tok = strf \"ids and tokens differ in length for '%s'\" tok\nlet err_expected what v = strf \"expected %s, got %s\" what v\nlet err_seq_id = \"sequence id must be \\\"A\\\", \\\"B\\\", 0 or 1\"\nlet err_type_id_field = \"expected number for 'type_id'\"\nlet err_missing_sequence = \"template references a sequence not provided\"\nlet err_pair_required = \"pair template required when two sequences are provided\"\nlet err_pair_must_ref_both = \"pair template must reference both $A and $B\"\nlet err_template_def = \"expected string, array or null for template\"\nlet err_unsupported_piece = \"expected Sequence or SpecialToken object\"\nlet err_special_missing_id = \"missing 'id' in SpecialToken\"\nlet err_special_missing_ids = \"missing 'ids' in special token\"\nlet err_special_entry = \"expected object for special token entry\"\n\n(* Types *)\n\ntype sequence_id = Sequence_a | Sequence_b\n\ntype template_piece =\n  | Piece_sequence of { id : sequence_id; type_id : int }\n  | Piece_special of { key : string; type_id : int }\n\ntype template = template_piece list\n\ntype special_token = {\n  key : string;\n  value_ids : int list;\n  value_tokens : string list;\n}\n\ntype token = string * int\n\ntype t =\n  | Bert of { sep : token; cls : token }\n  | Roberta of {\n      sep : token;\n      cls : token;\n      pad : token;\n      trim_offsets : bool;\n      add_prefix_space : bool;\n    }\n  | ByteLevel of { trim_offsets : bool }\n  | Template 
of {\n      single : template;\n      pair : template option;\n      special_tokens : special_token list;\n    }\n  | Sequence of t list\n\n(* Helpers *)\n\nlet special_token ~id ~token ~type_id =\n  Encoding.token ~id ~token ~offset:(0, 0) ~type_id ~special:true\n\nlet with_type_id enc type_id =\n  Encoding.create ~ids:(Encoding.ids enc)\n    ~type_ids:(Array.make (Encoding.length enc) type_id)\n    ~tokens:(Encoding.tokens enc) ~words:(Encoding.word_ids enc)\n    ~offsets:(Encoding.offsets enc)\n    ~special_tokens_mask:(Encoding.special_tokens_mask enc)\n    ~attention_mask:(Encoding.attention_mask enc)\n    ()\n\nlet is_ws = function\n  | ' ' | '\\t' | '\\n' | '\\r' | '\\x0b' | '\\x0c' -> true\n  | _ -> false\n\nlet build_special_lookup special_tokens =\n  let tbl = Hashtbl.create (List.length special_tokens + 1) in\n  List.iter (fun tok -> Hashtbl.replace tbl tok.key tok) special_tokens;\n  tbl\n\nlet string_is_int s =\n  let len = String.length s in\n  let rec loop i =\n    if i >= len then true\n    else match s.[i] with '0' .. 
'9' -> loop (i + 1) | _ -> false\n  in\n  len > 0 && loop 0\n\nlet sequence_id_to_label = function Sequence_a -> \"A\" | Sequence_b -> \"B\"\nlet sequence_id_to_index = function Sequence_a -> 0 | Sequence_b -> 1\n\n(* JSON helpers *)\n\nlet json_obj pairs =\n  Jsont.Json.object' (List.map (fun (k, v) -> (Jsont.Json.name k, v)) pairs)\n\nlet json_find name fields =\n  match Jsont.Json.find_mem name fields with\n  | Some (_, v) -> Some v\n  | None -> None\n\nlet json_bool_field fields name ~default =\n  match json_find name fields with\n  | Some (Jsont.Bool (b, _)) -> b\n  | _ -> default\n\nlet json_str_int_pair fields name ~default =\n  match json_find name fields with\n  | Some (Jsont.Array ([ Jsont.String (s, _); Jsont.Number (f, _) ], _)) ->\n      (s, int_of_float f)\n  | _ -> default\n\n(* Processors *)\n\nlet process_bert ~sep ~cls encodings ~add_special_tokens =\n  if not add_special_tokens then encodings\n  else\n    let cls_str, cls_id = cls in\n    let sep_str, sep_id = sep in\n    let cls_tok tid = special_token ~id:cls_id ~token:cls_str ~type_id:tid in\n    let sep_tok tid = special_token ~id:sep_id ~token:sep_str ~type_id:tid in\n    match encodings with\n    | [] -> []\n    | [ encoding ] ->\n        [\n          Encoding.concat_list [ cls_tok 0; with_type_id encoding 0; sep_tok 0 ];\n        ]\n    | [ enc1; enc2 ] ->\n        [\n          Encoding.concat_list\n            [\n              cls_tok 0;\n              with_type_id enc1 0;\n              sep_tok 0;\n              with_type_id enc2 1;\n              sep_tok 1;\n            ];\n        ]\n    | _ -> encodings\n\nlet process_roberta ~sep ~cls ~pad:_ ~trim_offsets:_ ~add_prefix_space:_\n    encodings ~add_special_tokens =\n  if not add_special_tokens then encodings\n  else\n    let cls_str, cls_id = cls in\n    let sep_str, sep_id = sep in\n    let cls_tok = special_token ~id:cls_id ~token:cls_str ~type_id:0 in\n    let sep_tok = special_token ~id:sep_id ~token:sep_str ~type_id:0 in\n    
match encodings with\n    | [] -> []\n    | [ encoding ] ->\n        [ Encoding.concat_list [ cls_tok; with_type_id encoding 0; sep_tok ] ]\n    | [ enc1; enc2 ] ->\n        [\n          Encoding.concat_list\n            [\n              cls_tok;\n              with_type_id enc1 0;\n              sep_tok;\n              sep_tok;\n              with_type_id enc2 0;\n              sep_tok;\n            ];\n        ]\n    | _ -> encodings\n\nlet trim_offset enc_tokens idx (start, stop) =\n  if start >= stop then (start, stop)\n  else\n    let token =\n      if idx < Array.length enc_tokens then enc_tokens.(idx) else \"\"\n    in\n    let decoded = Pre_tokenizer.byte_level_decode token in\n    let len = String.length decoded in\n    let rec leading i =\n      if i >= len then len else if is_ws decoded.[i] then leading (i + 1) else i\n    in\n    let rec trailing i =\n      if i <= 0 then len\n      else if is_ws decoded.[i - 1] then trailing (i - 1)\n      else i\n    in\n    let lead = leading 0 in\n    let trail = trailing len in\n    let trimmed_lead = min (stop - start) lead in\n    let trimmed_trail = min (stop - start - trimmed_lead) (len - trail) in\n    let new_start = start + trimmed_lead in\n    let new_stop = max new_start (stop - trimmed_trail) in\n    (new_start, new_stop)\n\nlet process_byte_level ~trim_offsets encodings ~add_special_tokens:_ =\n  if not trim_offsets then encodings\n  else\n    List.map\n      (fun encoding ->\n        let enc_tokens = Encoding.tokens encoding in\n        let new_offsets =\n          Array.mapi (trim_offset enc_tokens) (Encoding.offsets encoding)\n        in\n        Encoding.create ~ids:(Encoding.ids encoding)\n          ~type_ids:(Encoding.type_ids encoding)\n          ~tokens:enc_tokens\n          ~words:(Encoding.word_ids encoding)\n          ~offsets:new_offsets\n          ~special_tokens_mask:(Encoding.special_tokens_mask encoding)\n          ~attention_mask:(Encoding.attention_mask encoding)\n          
~overflowing:(Encoding.overflowing encoding)\n          ())\n      encodings\n\n(* Template parsing *)\n\nlet split_template_string str =\n  let len = String.length str in\n  let rec skip_ws i =\n    if i >= len then len\n    else match str.[i] with ' ' | '\\t' -> skip_ws (i + 1) | _ -> i\n  in\n  let rec find_end i =\n    if i >= len then len\n    else match str.[i] with ' ' | '\\t' -> i | _ -> find_end (i + 1)\n  in\n  let rec loop i acc =\n    let i = skip_ws i in\n    if i >= len then List.rev acc\n    else\n      let j = find_end i in\n      loop j (String.sub str i (j - i) :: acc)\n  in\n  loop 0 []\n\nlet parse_sequence_base base =\n  let lower = String.lowercase_ascii base in\n  if lower = \"$\" || lower = \"$a\" then Some (Sequence_a, 0)\n  else if lower = \"$b\" then Some (Sequence_b, 0)\n  else if String.length base > 0 && base.[0] = '$' then\n    let rest = String.sub base 1 (String.length base - 1) in\n    if string_is_int rest then Some (Sequence_a, int_of_string rest) else None\n  else None\n\nlet parse_template_piece_from_string ~special_lookup token =\n  let parts = String.split_on_char ':' token in\n  let base, explicit_type =\n    match parts with\n    | [ id; type_part ] when string_is_int type_part ->\n        (id, Some (int_of_string type_part))\n    | [ _; _ ] -> invalid_arg (err_type_id token)\n    | [ id ] -> (id, None)\n    | _ -> invalid_arg (err_piece token)\n  in\n  match parse_sequence_base base with\n  | Some (seq_id, default_type) ->\n      let type_id = Option.value ~default:default_type explicit_type in\n      Piece_sequence { id = seq_id; type_id }\n  | None ->\n      if Hashtbl.mem special_lookup base then\n        let type_id = Option.value ~default:0 explicit_type in\n        Piece_special { key = base; type_id }\n      else invalid_arg (err_unknown_special token)\n\nlet parse_template_string ~special_lookup str =\n  List.map\n    (parse_template_piece_from_string ~special_lookup)\n    (split_template_string str)\n\nlet 
parse_sequence_id_json fields =\n  match json_find \"id\" fields with\n  | Some (Jsont.String (s, _)) -> (\n      match String.lowercase_ascii s with\n      | \"a\" -> Sequence_a\n      | \"b\" -> Sequence_b\n      | _ -> invalid_arg err_seq_id)\n  | Some (Jsont.Number (v, _)) -> (\n      match int_of_float v with\n      | 0 -> Sequence_a\n      | 1 -> Sequence_b\n      | _ -> invalid_arg err_seq_id)\n  | None -> Sequence_a\n  | _ -> invalid_arg err_seq_id\n\nlet json_type_id fields =\n  match json_find \"type_id\" fields with\n  | Some (Jsont.Number (v, _)) -> int_of_float v\n  | None -> 0\n  | _ -> invalid_arg err_type_id_field\n\nlet parse_template_piece_from_json ~special_lookup json =\n  match json with\n  | Jsont.Object (outer_fields, _) -> (\n      match json_find \"Sequence\" outer_fields with\n      | Some (Jsont.Object (fields, _)) ->\n          let id = parse_sequence_id_json fields in\n          let type_id = json_type_id fields in\n          Piece_sequence { id; type_id }\n      | _ -> (\n          match json_find \"SpecialToken\" outer_fields with\n          | Some (Jsont.Object (fields, _)) ->\n              let key =\n                match json_find \"id\" fields with\n                | Some (Jsont.String (s, _)) -> s\n                | _ -> invalid_arg err_special_missing_id\n              in\n              if not (Hashtbl.mem special_lookup key) then\n                invalid_arg (err_unknown_special key);\n              let type_id = json_type_id fields in\n              Piece_special { key; type_id }\n          | _ -> invalid_arg err_unsupported_piece))\n  | _ -> invalid_arg err_unsupported_piece\n\nlet parse_template_definition ~special_lookup = function\n  | Jsont.String (s, _) -> parse_template_string ~special_lookup s\n  | Jsont.Array (l, _) ->\n      List.map (parse_template_piece_from_json ~special_lookup) l\n  | Jsont.Null _ -> []\n  | _ -> invalid_arg err_template_def\n\n(* Template encoding *)\n\nlet build_encoding_from_pieces pieces 
source_encodings special_lookup =\n  let ids_rev = ref [] in\n  let type_ids_rev = ref [] in\n  let tokens_rev = ref [] in\n  let words_rev = ref [] in\n  let offsets_rev = ref [] in\n  let special_mask_rev = ref [] in\n  let attention_rev = ref [] in\n  let append ~id ~token ~word ~type_id ~offset ~special ~attention =\n    ids_rev := id :: !ids_rev;\n    type_ids_rev := type_id :: !type_ids_rev;\n    tokens_rev := token :: !tokens_rev;\n    words_rev := word :: !words_rev;\n    offsets_rev := offset :: !offsets_rev;\n    special_mask_rev := special :: !special_mask_rev;\n    attention_rev := attention :: !attention_rev\n  in\n  let append_sequence seq_id type_id =\n    let index = sequence_id_to_index seq_id in\n    if index >= Array.length source_encodings then\n      invalid_arg err_missing_sequence;\n    let src = source_encodings.(index) in\n    let src_ids = Encoding.ids src in\n    let src_tokens = Encoding.tokens src in\n    let src_words = Encoding.word_ids src in\n    let src_offsets = Encoding.offsets src in\n    let src_special = Encoding.special_tokens_mask src in\n    let src_attention = Encoding.attention_mask src in\n    let len = Array.length src_ids in\n    for i = 0 to len - 1 do\n      let token = if i < Array.length src_tokens then src_tokens.(i) else \"\" in\n      let word = if i < Array.length src_words then src_words.(i) else None in\n      let offset =\n        if i < Array.length src_offsets then src_offsets.(i) else (0, 0)\n      in\n      let special =\n        if i < Array.length src_special && src_special.(i) <> 0 then 1 else 0\n      in\n      let attention =\n        if i < Array.length src_attention && src_attention.(i) <> 0 then 1\n        else 0\n      in\n      append ~id:src_ids.(i) ~token ~word ~type_id ~offset ~special ~attention\n    done\n  in\n  let append_special key type_id =\n    match Hashtbl.find_opt special_lookup key with\n    | None -> invalid_arg (err_unknown_special key)\n    | Some special ->\n        let rec 
loop ids tokens =\n          match (ids, tokens) with\n          | id :: rest_ids, token :: rest_tokens ->\n              append ~id ~token ~word:None ~type_id ~offset:(0, 0) ~special:1\n                ~attention:1;\n              loop rest_ids rest_tokens\n          | [], [] -> ()\n          | _ -> invalid_arg (err_mismatch key)\n        in\n        loop special.value_ids special.value_tokens\n  in\n  List.iter\n    (function\n      | Piece_sequence { id; type_id } -> append_sequence id type_id\n      | Piece_special { key; type_id } -> append_special key type_id)\n    pieces;\n  let to_array r = Array.of_list (List.rev !r) in\n  Encoding.create ~ids:(to_array ids_rev) ~type_ids:(to_array type_ids_rev)\n    ~tokens:(to_array tokens_rev) ~words:(to_array words_rev)\n    ~offsets:(to_array offsets_rev)\n    ~special_tokens_mask:(to_array special_mask_rev)\n    ~attention_mask:(to_array attention_rev) ()\n\nlet process_template ~single ~pair ~special_tokens encodings ~add_special_tokens\n    =\n  if not add_special_tokens then encodings\n  else\n    let special_lookup = build_special_lookup special_tokens in\n    let source = Array.of_list encodings in\n    match Array.length source with\n    | 0 -> []\n    | 1 -> [ build_encoding_from_pieces single source special_lookup ]\n    | 2 ->\n        let pair =\n          match pair with Some p -> p | None -> invalid_arg err_pair_required\n        in\n        [ build_encoding_from_pieces pair source special_lookup ]\n    | _ -> encodings\n\n(* Processing *)\n\nlet rec process_list processor encodings ~add_special_tokens =\n  match processor with\n  | Bert { sep; cls } -> process_bert ~sep ~cls encodings ~add_special_tokens\n  | Roberta { sep; cls; pad; trim_offsets; add_prefix_space } ->\n      process_roberta ~sep ~cls ~pad ~trim_offsets ~add_prefix_space encodings\n        ~add_special_tokens\n  | ByteLevel { trim_offsets } ->\n      process_byte_level ~trim_offsets encodings ~add_special_tokens\n  | Template { single; 
pair; special_tokens } ->\n      process_template ~single ~pair ~special_tokens encodings\n        ~add_special_tokens\n  | Sequence processors ->\n      List.fold_left\n        (fun encs proc -> process_list proc encs ~add_special_tokens)\n        encodings processors\n\nlet process processor ?pair enc ~add_special_tokens =\n  let encodings = match pair with None -> [ enc ] | Some p -> [ enc; p ] in\n  match process_list processor encodings ~add_special_tokens with\n  | [ r ] -> r\n  | r :: _ -> r\n  | [] -> enc\n\nlet rec added_tokens processor ~is_pair =\n  match processor with\n  | Bert _ -> if is_pair then 3 else 2\n  | Roberta _ -> if is_pair then 4 else 2\n  | ByteLevel _ -> 0\n  | Template { single; pair; special_tokens } ->\n      let lookup = build_special_lookup special_tokens in\n      let count_special pieces =\n        List.fold_left\n          (fun acc piece ->\n            match piece with\n            | Piece_special { key; _ } -> (\n                match Hashtbl.find_opt lookup key with\n                | Some tok -> acc + List.length tok.value_ids\n                | None -> acc)\n            | _ -> acc)\n          0 pieces\n      in\n      if is_pair then\n        match pair with\n        | Some p -> count_special p\n        | None -> count_special single\n      else count_special single\n  | Sequence processors ->\n      List.fold_left\n        (fun acc proc -> acc + added_tokens proc ~is_pair)\n        0 processors\n\n(* Constructors *)\n\nlet bert ~sep ~cls () = Bert { sep; cls }\n\nlet roberta ~sep ~cls ?(trim_offsets = true) ?(add_prefix_space = true) () =\n  let pad = (\"<pad>\", 1) in\n  Roberta { sep; cls; pad; trim_offsets; add_prefix_space }\n\nlet byte_level ?(trim_offsets = true) () = ByteLevel { trim_offsets }\n\nlet template ~single ?pair ?(special_tokens = []) () =\n  let specials =\n    List.map\n      (fun (token, id) ->\n        { key = token; value_ids = [ id ]; value_tokens = [ token ] })\n      special_tokens\n  in\n  let 
lookup = build_special_lookup specials in\n  let single = parse_template_string ~special_lookup:lookup single in\n  let has_sequence pieces seq =\n    List.exists\n      (function Piece_sequence { id; _ } when id = seq -> true | _ -> false)\n      pieces\n  in\n  let pair =\n    match pair with\n    | None -> None\n    | Some p ->\n        let tpl = parse_template_string ~special_lookup:lookup p in\n        if not (has_sequence tpl Sequence_a && has_sequence tpl Sequence_b) then\n          invalid_arg err_pair_must_ref_both;\n        Some tpl\n  in\n  Template { single; pair; special_tokens = specials }\n\nlet sequence processors = Sequence processors\n\n(* Formatting *)\n\nlet rec pp ppf = function\n  | Bert { sep = sep_s, _; cls = cls_s, _ } ->\n      Format.fprintf ppf \"@[<2>Bert@ ~cls:%S@ ~sep:%S@]\" cls_s sep_s\n  | Roberta { sep = sep_s, _; cls = cls_s, _; _ } ->\n      Format.fprintf ppf \"@[<2>Roberta@ ~cls:%S@ ~sep:%S@]\" cls_s sep_s\n  | ByteLevel { trim_offsets } ->\n      Format.fprintf ppf \"@[<2>ByteLevel@ ~trim_offsets:%b@]\" trim_offsets\n  | Template _ -> Format.fprintf ppf \"Template\"\n  | Sequence processors ->\n      Format.fprintf ppf \"@[<2>Sequence[@,%a]@]\"\n        (Format.pp_print_list\n           ~pp_sep:(fun ppf () -> Format.fprintf ppf \",@ \")\n           pp)\n        processors\n\n(* Serialization *)\n\nlet token_pair_to_json (s, id) =\n  Jsont.Json.list [ Jsont.Json.string s; Jsont.Json.int id ]\n\nlet template_to_json pieces =\n  let piece_json tag id type_id =\n    json_obj\n      [ (tag, json_obj [ (\"id\", id); (\"type_id\", Jsont.Json.int type_id) ]) ]\n  in\n  Jsont.Json.list\n    (List.map\n       (function\n         | Piece_sequence { id; type_id } ->\n             piece_json \"Sequence\"\n               (Jsont.Json.string (sequence_id_to_label id))\n               type_id\n         | Piece_special { key; type_id } ->\n             piece_json \"SpecialToken\" (Jsont.Json.string key) type_id)\n       pieces)\n\nlet rec 
to_json = function\n  | Bert { sep; cls } ->\n      json_obj\n        [\n          (\"type\", Jsont.Json.string \"BertProcessing\");\n          (\"sep\", token_pair_to_json sep);\n          (\"cls\", token_pair_to_json cls);\n        ]\n  | Roberta { sep; cls; pad; trim_offsets; add_prefix_space } ->\n      json_obj\n        [\n          (\"type\", Jsont.Json.string \"RobertaProcessing\");\n          (\"sep\", token_pair_to_json sep);\n          (\"cls\", token_pair_to_json cls);\n          (\"pad\", token_pair_to_json pad);\n          (\"trim_offsets\", Jsont.Json.bool trim_offsets);\n          (\"add_prefix_space\", Jsont.Json.bool add_prefix_space);\n        ]\n  | ByteLevel { trim_offsets } ->\n      json_obj\n        [\n          (\"type\", Jsont.Json.string \"ByteLevel\");\n          (\"trim_offsets\", Jsont.Json.bool trim_offsets);\n        ]\n  | Template { single; pair; special_tokens } ->\n      let pair_json =\n        match pair with\n        | None -> Jsont.Json.null ()\n        | Some p -> template_to_json p\n      in\n      let special_token_json tok =\n        let ids = Jsont.Json.list (List.map Jsont.Json.int tok.value_ids) in\n        let tokens =\n          Jsont.Json.list (List.map Jsont.Json.string tok.value_tokens)\n        in\n        ( Jsont.Json.name tok.key,\n          json_obj\n            [\n              (\"id\", Jsont.Json.string tok.key); (\"ids\", ids); (\"tokens\", tokens);\n            ] )\n      in\n      let special_json =\n        Jsont.Json.object' (List.map special_token_json special_tokens)\n      in\n      json_obj\n        [\n          (\"type\", Jsont.Json.string \"TemplateProcessing\");\n          (\"single\", template_to_json single);\n          (\"pair\", pair_json);\n          (\"special_tokens\", special_json);\n        ]\n  | Sequence processors ->\n      json_obj\n        [\n          (\"type\", Jsont.Json.string \"Sequence\");\n          (\"processors\", Jsont.Json.list (List.map to_json processors));\n        
]\n\n(* Deserialization *)\n\nlet parse_special_token_json fields alias =\n  let key =\n    match json_find \"id\" fields with\n    | Some (Jsont.String (s, _)) -> s\n    | _ -> alias\n  in\n  let value_ids =\n    match json_find \"ids\" fields with\n    | Some (Jsont.Array (lst, _)) ->\n        List.map\n          (function\n            | Jsont.Number (f, _) -> int_of_float f\n            | v ->\n                invalid_arg\n                  (err_expected \"number\" (Format.asprintf \"%a\" Jsont.pp_json v)))\n          lst\n    | _ -> invalid_arg err_special_missing_ids\n  in\n  let value_tokens =\n    match json_find \"tokens\" fields with\n    | Some (Jsont.Array (lst, _)) ->\n        List.map\n          (function\n            | Jsont.String (s, _) -> s\n            | v ->\n                invalid_arg\n                  (err_expected \"string\" (Format.asprintf \"%a\" Jsont.pp_json v)))\n          lst\n    | _ -> [ key ]\n  in\n  if List.length value_ids <> List.length value_tokens then\n    invalid_arg (err_mismatch key);\n  { key; value_ids; value_tokens }\n\nlet parse_special_tokens_json fields =\n  match json_find \"special_tokens\" fields with\n  | Some (Jsont.Object (tokens, _)) ->\n      List.map\n        (fun ((alias, _), value) ->\n          match value with\n          | Jsont.Object (token_fields, _) ->\n              parse_special_token_json token_fields alias\n          | _ -> invalid_arg err_special_entry)\n        tokens\n  | Some v ->\n      invalid_arg\n        (err_expected \"object for 'special_tokens'\"\n           (Format.asprintf \"%a\" Jsont.pp_json v))\n  | None -> []\n\nlet rec of_json_exn json =\n  match json with\n  | Jsont.Object (fields, _) -> (\n      match json_find \"type\" fields with\n      | Some (Jsont.String (\"BertProcessing\", _)) ->\n          let sep = json_str_int_pair fields \"sep\" ~default:(\"[SEP]\", 102) in\n          let cls = json_str_int_pair fields \"cls\" ~default:(\"[CLS]\", 101) in\n          Bert { sep; cls 
}\n      | Some (Jsont.String (\"RobertaProcessing\", _)) ->\n          let sep = json_str_int_pair fields \"sep\" ~default:(\"</s>\", 2) in\n          let cls = json_str_int_pair fields \"cls\" ~default:(\"<s>\", 0) in\n          let pad = json_str_int_pair fields \"pad\" ~default:(\"<pad>\", 1) in\n          let trim_offsets =\n            json_bool_field fields \"trim_offsets\" ~default:true\n          in\n          let add_prefix_space =\n            json_bool_field fields \"add_prefix_space\" ~default:true\n          in\n          Roberta { sep; cls; pad; trim_offsets; add_prefix_space }\n      | Some (Jsont.String (\"ByteLevel\", _)) ->\n          let trim_offsets =\n            json_bool_field fields \"trim_offsets\" ~default:true\n          in\n          ByteLevel { trim_offsets }\n      | Some (Jsont.String (\"TemplateProcessing\", _)) ->\n          let special_tokens = parse_special_tokens_json fields in\n          let lookup = build_special_lookup special_tokens in\n          let single =\n            match json_find \"single\" fields with\n            | Some json -> parse_template_definition ~special_lookup:lookup json\n            | None -> parse_template_string ~special_lookup:lookup \"$A\"\n          in\n          let pair =\n            match json_find \"pair\" fields with\n            | Some (Jsont.Null _) | None -> None\n            | Some json ->\n                Some (parse_template_definition ~special_lookup:lookup json)\n          in\n          Template { single; pair; special_tokens }\n      | Some (Jsont.String (\"Sequence\", _)) -> (\n          match json_find \"processors\" fields with\n          | Some (Jsont.Array (procs, _)) ->\n              Sequence (List.map of_json_exn procs)\n          | _ -> failwith \"expected array for 'processors'\")\n      | _ -> failwith \"unsupported processor type\")\n  | _ -> failwith \"expected JSON object\"\n\nlet of_json json =\n  try Ok (of_json_exn json) with\n  | Failure msg -> Error msg\n  | 
Invalid_argument msg -> Error msg\n"
  },
  {
    "path": "packages/brot/lib/post_processor.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Post-processing tokenization output with special tokens.\n\n    Post-processors add special tokens and type IDs to tokenized sequences after\n    core tokenization. They handle model-specific requirements like [[CLS]] and\n    [[SEP]] for BERT, sentence pair formatting, and byte-level offset\n    adjustments. *)\n\ntype t\n(** The type for post-processors. *)\n\ntype token = string * int\n(** A special token as [(text, id)]. *)\n\n(** {1:constructors Constructors} *)\n\nval bert : sep:token -> cls:token -> unit -> t\n(** [bert ~sep ~cls ()] is a BERT-style post-processor.\n\n    Single: [[CLS] A [SEP]]. Pair: [[CLS] A [SEP] B [SEP]]. Type IDs: [0] for\n    the first sequence, [1] for the second. *)\n\nval roberta :\n  sep:token ->\n  cls:token ->\n  ?trim_offsets:bool ->\n  ?add_prefix_space:bool ->\n  unit ->\n  t\n(** [roberta ~sep ~cls ()] is a RoBERTa-style post-processor.\n\n    Single: [<s> A </s>]. Pair: [<s> A </s> </s> B </s>]. All type IDs are [0].\n\n    [trim_offsets] defaults to [true]. [add_prefix_space] defaults to [true]. *)\n\nval byte_level : ?trim_offsets:bool -> unit -> t\n(** [byte_level ()] is a byte-level post-processor that adjusts character\n    offsets for byte-level encoding.\n\n    [trim_offsets] removes leading and trailing whitespace from offsets.\n    Defaults to [true]. *)\n\nval template :\n  single:string -> ?pair:string -> ?special_tokens:token list -> unit -> t\n(** [template ~single ()] is a template-based post-processor.\n\n    Templates use [$A] and [$B] as sequence placeholders and literal special\n    token names (e.g. [[CLS]]). Type IDs can be specified with a colon suffix:\n    [$A:0], [[SEP]:1].\n\n    [special_tokens] defaults to [[]]. 
*)\n\nval sequence : t list -> t\n(** [sequence processors] chains [processors] left-to-right. *)\n\n(** {1:processing Processing} *)\n\nval process :\n  t -> ?pair:Encoding.t -> Encoding.t -> add_special_tokens:bool -> Encoding.t\n(** [process t enc ~add_special_tokens] adds special tokens and sets type IDs on\n    [enc].\n\n    When [~pair] is provided, both sequences are merged into a single encoding\n    with appropriate type IDs. When [~add_special_tokens] is [false], special\n    token insertion is skipped but byte-level offset trimming still applies. *)\n\nval added_tokens : t -> is_pair:bool -> int\n(** [added_tokens t ~is_pair] is the number of special tokens [t] adds. Useful\n    for calculating the truncation budget. *)\n\n(** {1:fmt Formatting} *)\n\nval pp : Format.formatter -> t -> unit\n(** [pp] formats a post-processor for inspection. *)\n\n(** {1:serialization Serialization} *)\n\nval of_json : Jsont.json -> (t, string) result\n(** [of_json json] is a post-processor from HuggingFace [tokenizer.json] format.\n    Errors if [json] is not an object, has a missing or unknown [\"type\"] field,\n    or has invalid parameters. *)\n\nval to_json : t -> Jsont.json\n(** [to_json t] is [t] serialized to HuggingFace [tokenizer.json] format. *)\n"
  },
  {
    "path": "packages/brot/lib/pre_tokenizer.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Types *)\n\ntype behavior =\n  [ `Isolated\n  | `Removed\n  | `Merged_with_previous\n  | `Merged_with_next\n  | `Contiguous ]\n\ntype prepend_scheme = [ `First | `Never | `Always ]\n\ntype t =\n  | Byte_level of {\n      add_prefix_space : bool;\n      use_regex : bool;\n      trim_offsets : bool;\n    }\n  | Bert\n  | Whitespace\n  | Whitespace_split\n  | Punctuation of { behavior : behavior }\n  | Split of { pattern : string; behavior : behavior; invert : bool }\n  | Char_delimiter of char\n  | Digits of { individual : bool }\n  | Metaspace of {\n      replacement : char;\n      prepend_scheme : prepend_scheme;\n      split : bool;\n    }\n  | Sequence of t list\n  | Fixed_length of { length : int }\n  | Unicode_scripts\n\n(* Errors *)\n\nlet strf = Printf.sprintf\nlet err_unknown_behavior s = strf \"unknown punctuation behavior '%s'\" s\nlet err_unknown_scheme s = strf \"unknown prepend_scheme '%s'\" s\nlet err_unsupported_type s = strf \"unsupported pre-tokenizer type '%s'\" s\nlet err_expected_char name = strf \"expected single character for '%s'\" name\nlet err_missing_type = \"missing 'type' field\"\nlet err_expected_object = \"expected JSON object\"\nlet err_missing_behavior = \"missing 'behavior' field\"\nlet err_split_missing = \"requires 'pattern' and 'behavior'\"\nlet err_char_delim_missing = \"requires 'delimiter'\"\nlet err_metaspace_missing = \"requires 'replacement' and 'prepend_scheme'\"\nlet err_sequence_missing = \"requires 'pretokenizers' list\"\nlet err_fixed_length = \"requires positive length\"\n\n(* Character classification *)\n\n(* ASCII property table: packed flags for O(1) classification. 
bit 0:\n   whitespace, bit 1: alphabetic, bit 2: numeric, bit 3: punctuation *)\n\nlet ascii_props =\n  let t = Array.make 128 0 in\n  for i = 9 to 13 do\n    t.(i) <- t.(i) lor 1\n  done;\n  t.(32) <- t.(32) lor 1;\n  for i = 65 to 90 do\n    t.(i) <- t.(i) lor 2\n  done;\n  for i = 97 to 122 do\n    t.(i) <- t.(i) lor 2\n  done;\n  for i = 48 to 57 do\n    t.(i) <- t.(i) lor 4\n  done;\n  List.iter\n    (fun i -> t.(i) <- t.(i) lor 8)\n    [\n      33;\n      34;\n      35;\n      37;\n      38;\n      39;\n      40;\n      41;\n      42;\n      44;\n      45;\n      46;\n      47;\n      58;\n      59;\n      63;\n      64;\n      91;\n      92;\n      93;\n      95;\n      123;\n      125;\n    ];\n  t\n\nlet[@inline] is_whitespace code =\n  if code < 128 then Array.unsafe_get ascii_props code land 1 <> 0\n  else Uucp.White.is_white_space (Uchar.of_int code)\n\nlet[@inline] is_alphabetic code =\n  if code < 128 then Array.unsafe_get ascii_props code land 2 <> 0\n  else Uucp.Alpha.is_alphabetic (Uchar.of_int code)\n\nlet[@inline] is_numeric code =\n  if code < 128 then Array.unsafe_get ascii_props code land 4 <> 0\n  else\n    match Uucp.Gc.general_category (Uchar.of_int code) with\n    | `Nd | `Nl | `No -> true\n    | _ -> false\n\nlet[@inline] is_punctuation code =\n  if code < 128 then Array.unsafe_get ascii_props code land 8 <> 0\n  else\n    match Uucp.Gc.general_category (Uchar.of_int code) with\n    | `Pc | `Pd | `Pe | `Pf | `Pi | `Po | `Ps -> true\n    | _ -> false\n\n(* Returns (codepoint lsl 3) lor byte_length — zero allocation. 
*)\nlet[@inline] utf8_next s i =\n  let c = Char.code (String.unsafe_get s i) in\n  if c < 0x80 then (c lsl 3) lor 1\n  else if c < 0xE0 then\n    (((c land 0x1F) lsl 6)\n    lor (Char.code (String.unsafe_get s (i + 1)) land 0x3F))\n    lsl 3\n    lor 2\n  else if c < 0xF0 then\n    (((c land 0x0F) lsl 12)\n    lor ((Char.code (String.unsafe_get s (i + 1)) land 0x3F) lsl 6)\n    lor (Char.code (String.unsafe_get s (i + 2)) land 0x3F))\n    lsl 3\n    lor 3\n  else\n    (((c land 0x07) lsl 18)\n    lor ((Char.code (String.unsafe_get s (i + 1)) land 0x3F) lsl 12)\n    lor ((Char.code (String.unsafe_get s (i + 2)) land 0x3F) lsl 6)\n    lor (Char.code (String.unsafe_get s (i + 3)) land 0x3F))\n    lsl 3\n    lor 4\n\n(* Pre-computed byte ↔ unicode mappings for byte-level encode/decode *)\nlet byte_to_unicode, unicode_to_byte =\n  let is_direct = Array.make 256 false in\n  for i = 33 to 126 do\n    is_direct.(i) <- true\n  done;\n  for i = 161 to 172 do\n    is_direct.(i) <- true\n  done;\n  for i = 174 to 255 do\n    is_direct.(i) <- true\n  done;\n  let byte_to_unicode = Array.make 256 0 in\n  let next_code = ref 0 in\n  let max_code = ref 0 in\n  for b = 0 to 255 do\n    let code =\n      if is_direct.(b) then b\n      else\n        let code = 256 + !next_code in\n        incr next_code;\n        code\n    in\n    byte_to_unicode.(b) <- code;\n    if code > !max_code then max_code := code\n  done;\n  let unicode_to_byte = Array.make (!max_code + 1) (-1) in\n  for b = 0 to 255 do\n    let code = byte_to_unicode.(b) in\n    if code < Array.length unicode_to_byte then unicode_to_byte.(code) <- b\n  done;\n  (byte_to_unicode, unicode_to_byte)\n\nlet byte_level_encode text =\n  let len = String.length text in\n  (* Worst case: every byte remaps to a 2-byte UTF-8 sequence *)\n  let result = Bytes.create (len * 2) in\n  let j = ref 0 in\n  for i = 0 to len - 1 do\n    let u =\n      Array.unsafe_get byte_to_unicode (Char.code (String.unsafe_get text i))\n    in\n    if u < 
128 then begin\n      Bytes.unsafe_set result !j (Char.unsafe_chr u);\n      incr j\n    end\n    else begin\n      Bytes.unsafe_set result !j (Char.unsafe_chr (0xC0 lor (u lsr 6)));\n      Bytes.unsafe_set result (!j + 1)\n        (Char.unsafe_chr (0x80 lor (u land 0x3F)));\n      j := !j + 2\n    end\n  done;\n  Bytes.sub_string result 0 !j\n\nlet byte_level_encode_range text ~start ~len =\n  let result = Bytes.create (len * 2) in\n  let j = ref 0 in\n  for i = start to start + len - 1 do\n    let u =\n      Array.unsafe_get byte_to_unicode (Char.code (String.unsafe_get text i))\n    in\n    if u < 128 then begin\n      Bytes.unsafe_set result !j (Char.unsafe_chr u);\n      incr j\n    end\n    else begin\n      Bytes.unsafe_set result !j (Char.unsafe_chr (0xC0 lor (u lsr 6)));\n      Bytes.unsafe_set result (!j + 1)\n        (Char.unsafe_chr (0x80 lor (u land 0x3F)));\n      j := !j + 2\n    end\n  done;\n  Bytes.sub_string result 0 !j\n\nlet byte_level_decode text =\n  let len = String.length text in\n  let result = Buffer.create len in\n  let i = ref 0 in\n  while !i < len do\n    let b0 = Char.code (String.unsafe_get text !i) in\n    if b0 < 128 then begin\n      (* ASCII: direct lookup, no utf8_next needed *)\n      let byte = Array.unsafe_get unicode_to_byte b0 in\n      Buffer.add_char result\n        (if byte >= 0 then Char.chr byte else Char.unsafe_chr b0);\n      incr i\n    end\n    else begin\n      let p = utf8_next text !i in\n      let code = p lsr 3 and clen = p land 7 in\n      let byte =\n        if code < Array.length unicode_to_byte then unicode_to_byte.(code)\n        else -1\n      in\n      if byte >= 0 then Buffer.add_char result (Char.chr byte)\n      else\n        for j = !i to !i + clen - 1 do\n          Buffer.add_char result (String.unsafe_get text j)\n        done;\n      i := !i + clen\n    end\n  done;\n  Buffer.contents result\n\nlet[@inline] is_other code =\n  (not (is_whitespace code))\n  && (not (is_alphabetic code))\n  && not 
(is_numeric code)\n\nlet split_gpt2_pattern text =\n  let len = String.length text in\n  if len = 0 then []\n  else\n    let spans = ref [] in\n    let pos = ref 0 in\n\n    (* Try: optional leading space + run of chars matching a class.\n       [ascii_mask]: bitmask into ascii_props for the ASCII fast path. [invert]:\n       when true, match chars where (props land mask) = 0. [classify]: predicate\n       for non-ASCII codepoints (slow path only). *)\n    let try_space_run ~ascii_mask ~invert ~classify () =\n      let start = !pos in\n      let b0 = Char.code (String.unsafe_get text !pos) in\n      let has_space =\n        if b0 < 128 then Array.unsafe_get ascii_props b0 land 1 <> 0\n        else is_whitespace b0\n      in\n      let run_start = if has_space then start + 1 else start in\n      if run_start < len then\n        let b = Char.code (String.unsafe_get text run_start) in\n        let ok, clen =\n          if b < 128 then\n            let v = Array.unsafe_get ascii_props b land ascii_mask in\n            ((if invert then v = 0 else v <> 0), 1)\n          else\n            let p = utf8_next text run_start in\n            let code = p lsr 3 and cl = p land 7 in\n            (classify code, cl)\n        in\n        if ok then (\n          let j = ref (run_start + clen) in\n          let continue = ref true in\n          while !j < len && !continue do\n            let b = Char.code (String.unsafe_get text !j) in\n            if b < 128 then\n              let v = Array.unsafe_get ascii_props b land ascii_mask in\n              if if invert then v = 0 else v <> 0 then j := !j + 1\n              else continue := false\n            else\n              let p = utf8_next text !j in\n              if classify (p lsr 3) then j := !j + (p land 7)\n              else continue := false\n          done;\n          spans := (start, !j - start) :: !spans;\n          pos := !j;\n          true)\n        else false\n      else false\n    in\n\n    let[@inline] next_is_alnum 
next_pos =\n      if next_pos >= len then false\n      else\n        let nb = Char.code (String.unsafe_get text next_pos) in\n        if nb < 128 then Array.unsafe_get ascii_props nb land 6 <> 0\n        else\n          let nc = utf8_next text next_pos lsr 3 in\n          is_alphabetic nc || is_numeric nc\n    in\n\n    let rec loop () =\n      if !pos >= len then ()\n      else begin\n        (* 1. Contractions: 's 't 'm 'd 're 've 'll *)\n        let matched_contraction =\n          text.[!pos] = '\\''\n          &&\n          let remaining = len - !pos in\n          remaining >= 2\n          &&\n          let c1 = String.unsafe_get text (!pos + 1) in\n          if c1 = 's' || c1 = 't' || c1 = 'm' || c1 = 'd' then (\n            spans := (!pos, 2) :: !spans;\n            pos := !pos + 2;\n            true)\n          else\n            remaining >= 3\n            &&\n            let c2 = String.unsafe_get text (!pos + 2) in\n            if\n              (c1 = 'r' && c2 = 'e')\n              || (c1 = 'v' && c2 = 'e')\n              || (c1 = 'l' && c2 = 'l')\n            then (\n              spans := (!pos, 3) :: !spans;\n              pos := !pos + 3;\n              true)\n            else false\n        in\n        if matched_contraction then ()\n        else if\n          try_space_run ~ascii_mask:2 ~invert:false ~classify:is_alphabetic ()\n        then ()\n        else if\n          try_space_run ~ascii_mask:4 ~invert:false ~classify:is_numeric ()\n        then ()\n        else if try_space_run ~ascii_mask:7 ~invert:true ~classify:is_other ()\n        then ()\n        (* 5 & 6. 
Whitespace run *)\n          else begin\n          let b0 = Char.code (String.unsafe_get text !pos) in\n          let is_ws, clen =\n            if b0 < 128 then (Array.unsafe_get ascii_props b0 land 1 <> 0, 1)\n            else\n              let p = utf8_next text !pos in\n              let code = p lsr 3 and cl = p land 7 in\n              (is_whitespace code, cl)\n          in\n          if is_ws then begin\n            let j = ref (!pos + clen) in\n            let continue = ref true in\n            while !j < len && !continue do\n              let b = Char.code (String.unsafe_get text !j) in\n              if b < 128 then\n                if Array.unsafe_get ascii_props b land 1 <> 0 then\n                  if next_is_alnum (!j + 1) && b = 0x20 then continue := false\n                  else j := !j + 1\n                else continue := false\n              else\n                let p = utf8_next text !j in\n                let code = p lsr 3 and cl = p land 7 in\n                if is_whitespace code then\n                  if next_is_alnum (!j + cl) && code = 0x20 then\n                    continue := false\n                  else j := !j + cl\n                else continue := false\n            done;\n            spans := (!pos, !j - !pos) :: !spans;\n            pos := !j\n          end\n          else begin\n            (* Fallback: single character *)\n            spans := (!pos, clen) :: !spans;\n            pos := !pos + clen\n          end\n        end;\n        loop ()\n      end\n    in\n    loop ();\n    List.rev !spans\n\n(* Pre-tokenize implementations *)\n\nlet pre_tokenize_whitespace_split text =\n  let pieces = ref [] in\n  let start = ref (-1) in\n  let i = ref 0 in\n  let len = String.length text in\n  let flush () =\n    if !start >= 0 then begin\n      pieces := (String.sub text !start (!i - !start), (!start, !i)) :: !pieces;\n      start := -1\n    end\n  in\n  while !i < len do\n    let b = Char.code (String.unsafe_get text !i) in\n    
if b < 128 then\n      if Array.unsafe_get ascii_props b land 1 <> 0 then (\n        flush ();\n        i := !i + 1)\n      else (\n        if !start < 0 then start := !i;\n        i := !i + 1)\n    else\n      let p = utf8_next text !i in\n      let code = p lsr 3 and l = p land 7 in\n      if is_whitespace code then (\n        flush ();\n        i := !i + l)\n      else (\n        if !start < 0 then start := !i;\n        i := !i + l)\n  done;\n  flush ();\n  List.rev !pieces\n\nlet pre_tokenize_whitespace text =\n  let pieces = ref [] in\n  let start = ref (-1) in\n  let i = ref 0 in\n  let len = String.length text in\n  let in_word = ref false in\n  let in_punct = ref false in\n  let flush () =\n    if !start >= 0 then begin\n      pieces := (String.sub text !start (!i - !start), (!start, !i)) :: !pieces;\n      start := -1\n    end\n  in\n  while !i < len do\n    let b = Char.code (String.unsafe_get text !i) in\n    if b < 128 then\n      let p = Array.unsafe_get ascii_props b in\n      if p land 6 <> 0 || b = 95 then (\n        if !in_punct then flush ();\n        if !start < 0 then start := !i;\n        in_word := true;\n        in_punct := false;\n        i := !i + 1)\n      else if p land 1 <> 0 then (\n        flush ();\n        in_word := false;\n        in_punct := false;\n        i := !i + 1)\n      else (\n        if !in_word then flush ();\n        if !start < 0 then start := !i;\n        in_word := false;\n        in_punct := true;\n        i := !i + 1)\n    else\n      let p = utf8_next text !i in\n      let code = p lsr 3 and l = p land 7 in\n      if is_alphabetic code || is_numeric code then (\n        if !in_punct then flush ();\n        if !start < 0 then start := !i;\n        in_word := true;\n        in_punct := false;\n        i := !i + l)\n      else if is_whitespace code then (\n        flush ();\n        in_word := false;\n        in_punct := false;\n        i := !i + l)\n      else (\n        if !in_word then flush ();\n        if !start 
< 0 then start := !i;\n        in_word := false;\n        in_punct := true;\n        i := !i + l)\n  done;\n  flush ();\n  List.rev !pieces\n\nlet pre_tokenize_byte_level ~add_prefix_space ~use_regex ~trim_offsets:_ text =\n  let orig_len = String.length text in\n  let text, prefix_added =\n    if\n      add_prefix_space && orig_len > 0\n      && not (is_whitespace (Char.code text.[0]))\n    then (\" \" ^ text, true)\n    else (text, false)\n  in\n  if use_regex then\n    let spans = split_gpt2_pattern text in\n    List.map\n      (fun (start, plen) ->\n        let o_start =\n          if prefix_added then if start = 0 then 0 else start - 1 else start\n        in\n        let o_end =\n          min orig_len (if prefix_added then start + plen - 1 else start + plen)\n        in\n        (byte_level_encode_range text ~start ~len:plen, (max 0 o_start, o_end)))\n      spans\n  else [ (byte_level_encode text, (0, orig_len)) ]\n\nlet pre_tokenize_bert text =\n  let pieces = ref [] in\n  let start = ref (-1) in\n  let i = ref 0 in\n  let len = String.length text in\n  let flush () =\n    if !start >= 0 then begin\n      pieces := (String.sub text !start (!i - !start), (!start, !i)) :: !pieces;\n      start := -1\n    end\n  in\n  while !i < len do\n    let b = Char.code (String.unsafe_get text !i) in\n    if b < 128 then\n      let p = Array.unsafe_get ascii_props b in\n      if p land 1 <> 0 then (\n        flush ();\n        i := !i + 1)\n      else if p land 8 <> 0 then (\n        flush ();\n        pieces := (String.sub text !i 1, (!i, !i + 1)) :: !pieces;\n        i := !i + 1)\n      else (\n        if !start < 0 then start := !i;\n        i := !i + 1)\n    else\n      let p = utf8_next text !i in\n      let code = p lsr 3 and l = p land 7 in\n      if is_whitespace code then (\n        flush ();\n        i := !i + l)\n      else if is_punctuation code then (\n        flush ();\n        pieces := (String.sub text !i l, (!i, !i + l)) :: !pieces;\n        i := !i + l)\n 
     else (\n        if !start < 0 then start := !i;\n        i := !i + l)\n  done;\n  flush ();\n  List.rev !pieces\n\nlet pre_tokenize_punctuation ~behavior text =\n  let pieces = ref [] in\n  let start = ref (-1) in\n  let i = ref 0 in\n  let len = String.length text in\n  let last_was_punc = ref false in\n  let flush () =\n    if !start >= 0 then begin\n      pieces := (String.sub text !start (!i - !start), (!start, !i)) :: !pieces;\n      start := -1\n    end\n  in\n  let handle_char is_p l =\n    if is_p then (\n      (match behavior with\n      | `Isolated ->\n          flush ();\n          pieces := (String.sub text !i l, (!i, !i + l)) :: !pieces\n      | `Removed -> flush ()\n      | `Merged_with_previous -> if !start < 0 then start := !i\n      | `Merged_with_next ->\n          flush ();\n          start := !i\n      | `Contiguous ->\n          if not (!start >= 0 && !last_was_punc) then begin\n            flush ();\n            start := !i\n          end);\n      last_was_punc := true;\n      i := !i + l)\n    else (\n      if behavior = `Contiguous && !start >= 0 && !last_was_punc then flush ();\n      if !start < 0 then start := !i;\n      i := !i + l;\n      last_was_punc := false)\n  in\n  while !i < len do\n    let b = Char.code (String.unsafe_get text !i) in\n    if b < 128 then handle_char (Array.unsafe_get ascii_props b land 8 <> 0) 1\n    else\n      let p = utf8_next text !i in\n      let code = p lsr 3 and l = p land 7 in\n      handle_char (is_punctuation code) l\n  done;\n  flush ();\n  List.rev !pieces\n\nlet pre_tokenize_split ~pattern ~behavior ~invert text =\n  let plen = String.length pattern in\n  if plen = 0 then [ (text, (0, String.length text)) ]\n  else\n    let pieces = ref [] in\n    let current = Buffer.create 16 in\n    let current_start = ref 0 in\n    let i = ref 0 in\n    let flush_current () =\n      if Buffer.length current > 0 then (\n        pieces :=\n          ( Buffer.contents current,\n            (!current_start, 
!current_start + Buffer.length current) )\n          :: !pieces;\n        Buffer.clear current)\n    in\n    while !i < String.length text do\n      let is_match =\n        !i + plen <= String.length text && String.sub text !i plen = pattern\n      in\n      let is_delim = if invert then not is_match else is_match in\n      let delim_len = if is_delim then if invert then 1 else plen else 1 in\n      if is_delim then (\n        (match behavior with\n        | `Removed -> flush_current ()\n        | `Isolated ->\n            flush_current ();\n            let delim_str = String.sub text !i delim_len in\n            pieces := (delim_str, (!i, !i + delim_len)) :: !pieces\n        | `Merged_with_previous ->\n            Buffer.add_string current (String.sub text !i delim_len);\n            flush_current ()\n        | `Merged_with_next ->\n            flush_current ();\n            current_start := !i;\n            Buffer.add_string current (String.sub text !i delim_len)\n        | `Contiguous ->\n            if Buffer.length current > 0 && is_delim then\n              Buffer.add_string current (String.sub text !i delim_len)\n            else (\n              flush_current ();\n              Buffer.add_string current (String.sub text !i delim_len)));\n        i := !i + delim_len)\n      else (\n        if Buffer.length current = 0 then current_start := !i;\n        Buffer.add_string current (String.sub text !i 1);\n        i := !i + 1)\n    done;\n    flush_current ();\n    List.rev !pieces\n\nlet pre_tokenize_digits ~individual text =\n  let pieces = ref [] in\n  let start = ref (-1) in\n  let i = ref 0 in\n  let len = String.length text in\n  let in_digits = ref false in\n  let flush () =\n    if !start >= 0 then begin\n      pieces := (String.sub text !start (!i - !start), (!start, !i)) :: !pieces;\n      start := -1\n    end\n  in\n  let handle_char is_d l =\n    if individual && is_d then (\n      flush ();\n      pieces := (String.sub text !i l, (!i, !i + l)) :: 
!pieces;\n      i := !i + l)\n    else (\n      if is_d <> !in_digits then (\n        flush ();\n        in_digits := is_d);\n      if !start < 0 then start := !i;\n      i := !i + l)\n  in\n  while !i < len do\n    let b = Char.code (String.unsafe_get text !i) in\n    if b < 128 then handle_char (Array.unsafe_get ascii_props b land 4 <> 0) 1\n    else\n      let p = utf8_next text !i in\n      let code = p lsr 3 and l = p land 7 in\n      handle_char (is_numeric code) l\n  done;\n  flush ();\n  List.rev !pieces\n\nlet pre_tokenize_metaspace ~replacement ~prepend_scheme ~split text =\n  let repl = String.make 1 replacement in\n  let text =\n    match prepend_scheme with\n    | (`Always | `First) when String.length text > 0 && text.[0] <> ' ' ->\n        \" \" ^ text\n    | _ -> text\n  in\n  let len = String.length text in\n  let buf = Buffer.create len in\n  let i = ref 0 in\n  while !i < len do\n    if text.[!i] = ' ' then (\n      Buffer.add_string buf repl;\n      incr i)\n    else\n      let l = utf8_next text !i land 7 in\n      Buffer.add_substring buf text !i l;\n      i := !i + l\n  done;\n  let transformed = Buffer.contents buf in\n  if split then (\n    let tlen = String.length transformed in\n    let rlen = String.length repl in\n    let splits = ref [] in\n    let start = ref 0 in\n    let pos = ref 0 in\n    while !pos < tlen do\n      if !pos + rlen <= tlen && String.sub transformed !pos rlen = repl then (\n        if !pos > !start then\n          splits :=\n            (String.sub transformed !start (!pos - !start), (!start, !pos))\n            :: !splits;\n        start := !pos;\n        pos := !pos + rlen)\n      else incr pos\n    done;\n    if !pos > !start then\n      splits :=\n        (String.sub transformed !start (!pos - !start), (!start, !pos))\n        :: !splits;\n    List.rev !splits)\n  else [ (transformed, (0, len)) ]\n\nlet pre_tokenize_fixed_length ~length text =\n  if length <= 0 || String.length text = 0 then []\n  else\n    let 
pieces = ref [] in\n    let len = String.length text in\n    let i = ref 0 in\n    while !i < len do\n      let start = !i in\n      let count = ref 0 in\n      while !i < len && !count < length do\n        let l = utf8_next text !i land 7 in\n        i := !i + l;\n        incr count\n      done;\n      pieces := (String.sub text start (!i - start), (start, !i)) :: !pieces\n    done;\n    List.rev !pieces\n\ntype script = [ `Any | Uucp.Script.t ]\n\nlet fixed_script code : script =\n  if code = 0x30FC then (`Hani :> script)\n  else if is_whitespace code then `Any\n  else\n    match Uucp.Script.script (Uchar.of_int code) with\n    | `Hira | `Kana -> (`Hani :> script)\n    | s -> (s :> script)\n\nlet pre_tokenize_unicode_scripts text =\n  let pieces = ref [] in\n  let start = ref (-1) in\n  let len = String.length text in\n  let i = ref 0 in\n  let last_script = ref None in\n  let flush () =\n    if !start >= 0 then begin\n      pieces := (String.sub text !start (!i - !start), (!start, !i)) :: !pieces;\n      start := -1\n    end\n  in\n  let emit (script : script) l =\n    if\n      script <> `Any && !last_script <> Some `Any && !last_script <> Some script\n    then flush ();\n    if !start < 0 then start := !i;\n    i := !i + l;\n    if script <> `Any then last_script := Some script\n  in\n  while !i < len do\n    let b = Char.code (String.unsafe_get text !i) in\n    if b < 128 then\n      let p = Array.unsafe_get ascii_props b in\n      let script : script =\n        if p land 1 <> 0 then `Any else if p land 2 <> 0 then `Latn else `Zyyy\n      in\n      emit script 1\n    else\n      let p = utf8_next text !i in\n      let code = p lsr 3 and l = p land 7 in\n      emit (fixed_script code) l\n  done;\n  flush ();\n  List.rev !pieces\n\n(* Constructors *)\n\nlet whitespace () = Whitespace\nlet whitespace_split () = Whitespace_split\nlet bert () = Bert\n\nlet byte_level ?(add_prefix_space = true) ?(use_regex = true)\n    ?(trim_offsets = true) () =\n  Byte_level { 
add_prefix_space; use_regex; trim_offsets }\n\nlet punctuation ?(behavior = `Isolated) () = Punctuation { behavior }\n\nlet split ~pattern ?(behavior = `Removed) ?(invert = false) () =\n  Split { pattern; behavior; invert }\n\nlet char_delimiter c = Char_delimiter c\n\nlet digits ?(individual_digits = false) () =\n  Digits { individual = individual_digits }\n\nlet metaspace ?(replacement = '_') ?(prepend_scheme = `Always) ?(split = true)\n    () =\n  Metaspace { replacement; prepend_scheme; split }\n\nlet unicode_scripts () = Unicode_scripts\nlet fixed_length n = Fixed_length { length = n }\nlet sequence ts = Sequence ts\n\n(* Dispatch *)\n\nlet rec pre_tokenize t text =\n  match t with\n  | Whitespace -> pre_tokenize_whitespace text\n  | Whitespace_split -> pre_tokenize_whitespace_split text\n  | Bert -> pre_tokenize_bert text\n  | Byte_level { add_prefix_space; use_regex; trim_offsets } ->\n      pre_tokenize_byte_level ~add_prefix_space ~use_regex ~trim_offsets text\n  | Punctuation { behavior } -> pre_tokenize_punctuation ~behavior text\n  | Split { pattern; behavior; invert } ->\n      pre_tokenize_split ~pattern ~behavior ~invert text\n  | Char_delimiter c ->\n      pre_tokenize_split ~pattern:(String.make 1 c) ~behavior:`Removed\n        ~invert:false text\n  | Digits { individual } -> pre_tokenize_digits ~individual text\n  | Metaspace { replacement; prepend_scheme; split } ->\n      pre_tokenize_metaspace ~replacement ~prepend_scheme ~split text\n  | Unicode_scripts -> pre_tokenize_unicode_scripts text\n  | Fixed_length { length } -> pre_tokenize_fixed_length ~length text\n  | Sequence ts -> pre_tokenize_sequence ts text\n\nand pre_tokenize_sequence ts text =\n  let initial = [ (text, (0, String.length text)) ] in\n  List.fold_left\n    (fun pieces t ->\n      List.concat_map\n        (fun (s, (o_start, _)) ->\n          let sub_pieces = pre_tokenize t s in\n          List.map\n            (fun (p, (p_start, p_end)) ->\n              (p, (o_start + 
p_start, o_start + p_end)))\n            sub_pieces)\n        pieces)\n    initial ts\n\n(* Serialization *)\n\nlet json_obj pairs =\n  Jsont.Json.object' (List.map (fun (k, v) -> (Jsont.Json.name k, v)) pairs)\n\nlet behavior_to_string = function\n  | `Isolated -> \"Isolated\"\n  | `Removed -> \"Removed\"\n  | `Merged_with_previous -> \"MergedWithPrevious\"\n  | `Merged_with_next -> \"MergedWithNext\"\n  | `Contiguous -> \"Contiguous\"\n\nlet behavior_of_string = function\n  | \"Isolated\" -> Ok `Isolated\n  | \"Removed\" -> Ok `Removed\n  | \"MergedWithPrevious\" -> Ok `Merged_with_previous\n  | \"MergedWithNext\" -> Ok `Merged_with_next\n  | \"Contiguous\" -> Ok `Contiguous\n  | other -> Error (err_unknown_behavior other)\n\nlet scheme_to_string = function\n  | `First -> \"First\"\n  | `Never -> \"Never\"\n  | `Always -> \"Always\"\n\nlet scheme_of_string = function\n  | \"First\" -> Ok `First\n  | \"Never\" -> Ok `Never\n  | \"Always\" -> Ok `Always\n  | other -> Error (err_unknown_scheme other)\n\n(* Formatting *)\n\nlet rec pp ppf = function\n  | Byte_level { add_prefix_space; use_regex; trim_offsets } ->\n      Format.fprintf ppf\n        \"@[<1>ByteLevel(add_prefix_space=%b,@ use_regex=%b,@ trim_offsets=%b)@]\"\n        add_prefix_space use_regex trim_offsets\n  | Bert -> Format.pp_print_string ppf \"Bert\"\n  | Whitespace -> Format.pp_print_string ppf \"Whitespace\"\n  | Whitespace_split -> Format.pp_print_string ppf \"WhitespaceSplit\"\n  | Punctuation { behavior } ->\n      Format.fprintf ppf \"@[<1>Punctuation(%s)@]\" (behavior_to_string behavior)\n  | Split { pattern; behavior; invert } ->\n      Format.fprintf ppf \"@[<1>Split(%S,@ %s,@ invert=%b)@]\" pattern\n        (behavior_to_string behavior)\n        invert\n  | Char_delimiter c -> Format.fprintf ppf \"CharDelimiter(%C)\" c\n  | Digits { individual } ->\n      Format.fprintf ppf \"Digits(individual=%b)\" individual\n  | Metaspace { replacement; prepend_scheme; split } ->\n      Format.fprintf 
ppf \"@[<1>Metaspace(%C,@ %s,@ split=%b)@]\" replacement\n        (scheme_to_string prepend_scheme)\n        split\n  | Sequence ts ->\n      Format.fprintf ppf \"@[<1>Sequence[%a]@]\"\n        (Format.pp_print_list\n           ~pp_sep:(fun ppf () -> Format.fprintf ppf \",@ \")\n           pp)\n        ts\n  | Fixed_length { length } -> Format.fprintf ppf \"FixedLength(%d)\" length\n  | Unicode_scripts -> Format.pp_print_string ppf \"UnicodeScripts\"\n\nlet rec to_json = function\n  | Byte_level { add_prefix_space; use_regex; trim_offsets } ->\n      json_obj\n        [\n          (\"type\", Jsont.Json.string \"ByteLevel\");\n          (\"add_prefix_space\", Jsont.Json.bool add_prefix_space);\n          (\"use_regex\", Jsont.Json.bool use_regex);\n          (\"trim_offsets\", Jsont.Json.bool trim_offsets);\n        ]\n  | Bert -> json_obj [ (\"type\", Jsont.Json.string \"BertPreTokenizer\") ]\n  | Whitespace -> json_obj [ (\"type\", Jsont.Json.string \"Whitespace\") ]\n  | Whitespace_split ->\n      json_obj [ (\"type\", Jsont.Json.string \"WhitespaceSplit\") ]\n  | Punctuation { behavior } ->\n      json_obj\n        [\n          (\"type\", Jsont.Json.string \"Punctuation\");\n          (\"behavior\", Jsont.Json.string (behavior_to_string behavior));\n        ]\n  | Split { pattern; behavior; invert } ->\n      json_obj\n        [\n          (\"type\", Jsont.Json.string \"Split\");\n          (\"pattern\", Jsont.Json.string pattern);\n          (\"behavior\", Jsont.Json.string (behavior_to_string behavior));\n          (\"invert\", Jsont.Json.bool invert);\n        ]\n  | Char_delimiter delimiter ->\n      json_obj\n        [\n          (\"type\", Jsont.Json.string \"CharDelimiterSplit\");\n          (\"delimiter\", Jsont.Json.string (String.make 1 delimiter));\n        ]\n  | Digits { individual } ->\n      json_obj\n        [\n          (\"type\", Jsont.Json.string \"Digits\");\n          (\"individual_digits\", Jsont.Json.bool individual);\n        ]\n  | 
Metaspace { replacement; prepend_scheme; split } ->\n      json_obj\n        [\n          (\"type\", Jsont.Json.string \"Metaspace\");\n          (\"replacement\", Jsont.Json.string (String.make 1 replacement));\n          (\"prepend_scheme\", Jsont.Json.string (scheme_to_string prepend_scheme));\n          (\"split\", Jsont.Json.bool split);\n        ]\n  | Sequence ts ->\n      json_obj\n        [\n          (\"type\", Jsont.Json.string \"Sequence\");\n          (\"pretokenizers\", Jsont.Json.list (List.map to_json ts));\n        ]\n  | Fixed_length { length } ->\n      json_obj\n        [\n          (\"type\", Jsont.Json.string \"FixedLength\");\n          (\"length\", Jsont.Json.int length);\n        ]\n  | Unicode_scripts -> json_obj [ (\"type\", Jsont.Json.string \"UnicodeScripts\") ]\n\nlet find_field name fields = Option.map snd (Jsont.Json.find_mem name fields)\n\nlet bool_field name default fields =\n  match find_field name fields with\n  | Some (Jsont.Bool (b, _)) -> b\n  | Some (Jsont.Number (f, _)) -> int_of_float f <> 0\n  | Some (Jsont.String (s, _)) -> (\n      match String.lowercase_ascii s with\n      | \"true\" | \"1\" -> true\n      | \"false\" | \"0\" -> false\n      | _ -> default)\n  | _ -> default\n\nlet int_field name default fields =\n  match find_field name fields with\n  | Some (Jsont.Number (f, _)) -> int_of_float f\n  | Some (Jsont.String (s, _)) -> (\n      match int_of_string_opt s with Some v -> v | None -> default)\n  | _ -> default\n\nlet char_of_field name = function\n  | Jsont.String (s, _) when String.length s = 1 -> Ok s.[0]\n  | _ -> Error (err_expected_char name)\n\nlet rec of_json = function\n  | Jsont.Object (fields, _) -> (\n      match find_field \"type\" fields with\n      | Some (Jsont.String (\"ByteLevel\", _)) ->\n          let add_prefix_space = bool_field \"add_prefix_space\" true fields in\n          let use_regex = bool_field \"use_regex\" true fields in\n          let trim_offsets = bool_field \"trim_offsets\" 
true fields in\n          Ok (Byte_level { add_prefix_space; use_regex; trim_offsets })\n      | Some (Jsont.String (\"BertPreTokenizer\", _)) -> Ok Bert\n      | Some (Jsont.String (\"Whitespace\", _)) -> Ok Whitespace\n      | Some (Jsont.String (\"WhitespaceSplit\", _)) -> Ok Whitespace_split\n      | Some (Jsont.String (\"Punctuation\", _)) -> (\n          match find_field \"behavior\" fields with\n          | Some (Jsont.String (s, _)) ->\n              Result.map\n                (fun b -> Punctuation { behavior = b })\n                (behavior_of_string s)\n          | _ -> Error err_missing_behavior)\n      | Some (Jsont.String (\"Split\", _)) -> (\n          match (find_field \"pattern\" fields, find_field \"behavior\" fields) with\n          | ( Some (Jsont.String (pattern, _)),\n              Some (Jsont.String (behavior_str, _)) ) ->\n              Result.map\n                (fun behavior ->\n                  let invert = bool_field \"invert\" false fields in\n                  Split { pattern; behavior; invert })\n                (behavior_of_string behavior_str)\n          | _ -> Error err_split_missing)\n      | Some (Jsont.String (\"CharDelimiterSplit\", _)) -> (\n          match find_field \"delimiter\" fields with\n          | Some v ->\n              Result.map\n                (fun c -> Char_delimiter c)\n                (char_of_field \"delimiter\" v)\n          | None -> Error err_char_delim_missing)\n      | Some (Jsont.String (\"Digits\", _)) ->\n          let individual = bool_field \"individual_digits\" false fields in\n          Ok (Digits { individual })\n      | Some (Jsont.String (\"Metaspace\", _)) -> (\n          match\n            (find_field \"replacement\" fields, find_field \"prepend_scheme\" fields)\n          with\n          | Some (Jsont.String (repl, _)), Some (Jsont.String (scheme, _))\n            when String.length repl = 1 ->\n              Result.map\n                (fun prepend_scheme ->\n                  let split 
= bool_field \"split\" true fields in\n                  Metaspace { replacement = repl.[0]; prepend_scheme; split })\n                (scheme_of_string scheme)\n          | _ -> Error err_metaspace_missing)\n      | Some (Jsont.String (\"Sequence\", _)) -> (\n          match find_field \"pretokenizers\" fields with\n          | Some (Jsont.Array (elements, _)) ->\n              let rec build acc = function\n                | [] -> Ok (Sequence (List.rev acc))\n                | item :: rest -> (\n                    match of_json item with\n                    | Ok t -> build (t :: acc) rest\n                    | Error _ as e -> e)\n              in\n              build [] elements\n          | _ -> Error err_sequence_missing)\n      | Some (Jsont.String (\"FixedLength\", _)) ->\n          let length = int_field \"length\" 0 fields in\n          if length <= 0 then Error err_fixed_length\n          else Ok (Fixed_length { length })\n      | Some (Jsont.String (\"UnicodeScripts\", _)) -> Ok Unicode_scripts\n      | Some (Jsont.String (other, _)) -> Error (err_unsupported_type other)\n      | _ -> Error err_missing_type)\n  | _ -> Error err_expected_object\n"
  },
  {
    "path": "packages/brot/lib/pre_tokenizer.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Pre-tokenization.\n\n    Pre-tokenizers split raw text into pieces before vocabulary-based\n    tokenization (BPE, WordPiece, etc.) is applied. Each piece carries byte\n    offsets into the original text.\n\n    See {!Brot} for the full tokenization pipeline. *)\n\ntype t\n(** The type for pre-tokenizers. *)\n\n(** {1:constructors Constructors} *)\n\nval whitespace : unit -> t\n(** [whitespace ()] splits on whitespace using pattern [\\w+|[^\\w\\s]+].\n\n    Groups word characters (letters, digits, underscore) together and groups\n    non-word, non-space characters together. Whitespace acts as the delimiter\n    but is not included in the output. *)\n\nval whitespace_split : unit -> t\n(** [whitespace_split ()] splits on any whitespace characters.\n\n    Removes whitespace from the output. This is the simplest and fastest\n    pre-tokenizer. *)\n\nval bert : unit -> t\n(** [bert ()] applies BERT-style pre-tokenization.\n\n    Splits on whitespace, isolates punctuation, and separates CJK characters\n    individually. *)\n\nval byte_level :\n  ?add_prefix_space:bool -> ?use_regex:bool -> ?trim_offsets:bool -> unit -> t\n(** [byte_level ()] is a byte-level pre-tokenizer. Used by GPT-2, GPT-3, and\n    RoBERTa.\n\n    Converts text to its byte representation and applies GPT-2's regex pattern\n    for splitting.\n\n    - [add_prefix_space]: add a space at the beginning if the text does not\n      start with whitespace. Default: [true].\n    - [use_regex]: use GPT-2's regex pattern for splitting. Default: [true].\n    - [trim_offsets]: adjust offsets for byte-level encoding. 
Default: [true].\n*)\n\ntype behavior =\n  [ `Isolated  (** Keep delimiter as separate piece *)\n  | `Removed  (** Remove delimiter *)\n  | `Merged_with_previous  (** Merge delimiter with previous piece *)\n  | `Merged_with_next  (** Merge delimiter with next piece *)\n  | `Contiguous  (** Group consecutive delimiters together *) ]\n(** Delimiter handling behavior for splitting operations. *)\n\nval punctuation : ?behavior:behavior -> unit -> t\n(** [punctuation ()] separates punctuation from alphanumeric content.\n\n    [behavior] defaults to [`Isolated]. *)\n\nval split : pattern:string -> ?behavior:behavior -> ?invert:bool -> unit -> t\n(** [split ~pattern ()] splits on a literal string [pattern].\n\n    [behavior] defaults to [`Removed]. When [invert] is [true], splits on\n    everything {e except} the pattern; defaults to [false]. *)\n\nval char_delimiter : char -> t\n(** [char_delimiter c] splits on character [c], removing it from output.\n\n    Equivalent to [split ~pattern:(String.make 1 c) ~behavior:`Removed ()]. *)\n\nval digits : ?individual_digits:bool -> unit -> t\n(** [digits ()] splits on digit boundaries.\n\n    When [individual_digits] is [true], each digit is a separate piece; when\n    [false] (default), consecutive digits are grouped. *)\n\ntype prepend_scheme =\n  [ `First  (** Only prepend to first piece *)\n  | `Never  (** Never prepend *)\n  | `Always  (** Always prepend if not starting with space *) ]\n(** Controls when metaspace prepends the replacement character. *)\n\nval metaspace :\n  ?replacement:char ->\n  ?prepend_scheme:prepend_scheme ->\n  ?split:bool ->\n  unit ->\n  t\n(** [metaspace ()] replaces whitespace with a visible marker. Used by\n    SentencePiece models.\n\n    - [replacement]: character to replace spaces with. Default: ['_'].\n    - [prepend_scheme]: when to prepend the replacement character. Default:\n      [`Always].\n    - [split]: whether to split on the replacement character. 
Default: [true].\n*)\n\nval unicode_scripts : unit -> t\n(** [unicode_scripts ()] splits on Unicode script boundaries.\n\n    Separates text when the writing system changes (e.g., Latin to Cyrillic,\n    Latin to Han). *)\n\nval fixed_length : int -> t\n(** [fixed_length n] splits into fixed-length character chunks.\n\n    The last chunk may be shorter than [n]. *)\n\nval sequence : t list -> t\n(** [sequence ts] chains multiple pre-tokenizers left-to-right.\n\n    Each pre-tokenizer processes the pieces from the previous one. Offsets are\n    composed correctly through the chain. *)\n\n(** {1 Operations} *)\n\nval pre_tokenize : t -> string -> (string * (int * int)) list\n(** [pre_tokenize t text] splits [text] into pieces with byte offsets.\n\n    Returns a list of [(piece, (start, end_))] where [start] and [end_] are byte\n    positions in the original [text]. Offsets are non-overlapping and in\n    ascending order. *)\n\n(** {1 Formatting} *)\n\nval pp : Format.formatter -> t -> unit\n(** [pp ppf t] formats [t] for inspection. *)\n\n(** {1:byte_level_decode Byte-level decoding} *)\n\nval byte_level_decode : string -> string\n(** [byte_level_decode s] reverses byte-level encoding by converting the special\n    Unicode codepoints back to original byte values. *)\n\n(** {1 Serialization} *)\n\nval to_json : t -> Jsont.json\n(** [to_json t] serializes [t] to HuggingFace JSON format. *)\n\nval of_json : Jsont.json -> (t, string) result\n(** [of_json json] is a pre-tokenizer from HuggingFace JSON format. Errors if\n    [json] is not an object, has a missing or unknown [\"type\"] field, or has\n    invalid parameters. *)\n"
  },
  {
    "path": "packages/brot/lib/unigram.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Compact trie for longest-prefix matching *)\n\ntype trie = {\n  trie_ids : int array;\n  child_starts : int array;\n  edge_bytes : bytes;\n  edge_targets : int array;\n}\n\nlet build_trie token_to_ids =\n  if Hashtbl.length token_to_ids = 0 then\n    {\n      trie_ids = [||];\n      child_starts = [| 0 |];\n      edge_bytes = Bytes.empty;\n      edge_targets = [||];\n    }\n  else\n    let cap = ref 256 in\n    let ids = ref (Array.make !cap (-1)) in\n    let ch = ref (Array.init !cap (fun _ -> Hashtbl.create 0)) in\n    let n = ref 1 in\n    !ch.(0) <- Hashtbl.create 64;\n    let grow () =\n      let new_cap = !cap * 2 in\n      let new_ids = Array.make new_cap (-1) in\n      Array.blit !ids 0 new_ids 0 !n;\n      ids := new_ids;\n      let new_ch =\n        Array.init new_cap (fun i ->\n            if i < !n then !ch.(i) else Hashtbl.create 0)\n      in\n      ch := new_ch;\n      cap := new_cap\n    in\n    Hashtbl.iter\n      (fun key id ->\n        let cur = ref 0 in\n        for i = 0 to String.length key - 1 do\n          let byte = Char.code (String.unsafe_get key i) in\n          let child =\n            match Hashtbl.find_opt !ch.(!cur) byte with\n            | Some c -> c\n            | None ->\n                if !n >= !cap then grow ();\n                let c = !n in\n                incr n;\n                !ch.(c) <- Hashtbl.create 4;\n                Hashtbl.add !ch.(!cur) byte c;\n                c\n          in\n          cur := child\n        done;\n        !ids.(!cur) <- id)\n      token_to_ids;\n    let node_count = !n in\n    let trie_ids = Array.init node_count (fun i -> !ids.(i)) in\n    let child_starts = Array.make (node_count + 1) 0 in\n    let total = ref 0 in\n    
for i = 0 to node_count - 1 do\n      child_starts.(i) <- !total;\n      total := !total + Hashtbl.length !ch.(i)\n    done;\n    child_starts.(node_count) <- !total;\n    let edge_bytes = Bytes.create !total in\n    let edge_targets = Array.make !total 0 in\n    let pos = ref 0 in\n    for i = 0 to node_count - 1 do\n      Hashtbl.iter\n        (fun byte child ->\n          Bytes.unsafe_set edge_bytes !pos (Char.unsafe_chr byte);\n          edge_targets.(!pos) <- child;\n          incr pos)\n        !ch.(i)\n    done;\n    for i = 0 to node_count - 1 do\n      let start = child_starts.(i) in\n      let stop = child_starts.(i + 1) in\n      for j = start + 1 to stop - 1 do\n        let kb = Bytes.unsafe_get edge_bytes j in\n        let kt = edge_targets.(j) in\n        let k = ref (j - 1) in\n        while !k >= start && Bytes.unsafe_get edge_bytes !k > kb do\n          Bytes.unsafe_set edge_bytes (!k + 1) (Bytes.unsafe_get edge_bytes !k);\n          edge_targets.(!k + 1) <- edge_targets.(!k);\n          decr k\n        done;\n        Bytes.unsafe_set edge_bytes (!k + 1) kb;\n        edge_targets.(!k + 1) <- kt\n      done\n    done;\n    { trie_ids; child_starts; edge_bytes; edge_targets }\n\nlet[@inline] trie_step trie node byte =\n  let lo = ref (Array.unsafe_get trie.child_starts node) in\n  let hi = ref (Array.unsafe_get trie.child_starts (node + 1) - 1) in\n  let result = ref (-1) in\n  while !lo <= !hi do\n    let mid = !lo + ((!hi - !lo) asr 1) in\n    let mid_byte = Char.code (Bytes.unsafe_get trie.edge_bytes mid) in\n    if mid_byte = byte then (\n      result := Array.unsafe_get trie.edge_targets mid;\n      lo := !hi + 1)\n    else if mid_byte < byte then lo := mid + 1\n    else hi := mid - 1\n  done;\n  !result\n\nlet trie_longest_match trie text ~start =\n  if Array.length trie.trie_ids = 0 then None\n  else\n    let text_len = String.length text in\n    let last_id = ref (-1) in\n    let last_end = ref start in\n    let current = ref 0 in\n    let 
stopped = ref false in\n    let j = ref start in\n    while !j < text_len && not !stopped do\n      let child =\n        trie_step trie !current (Char.code (String.unsafe_get text !j))\n      in\n      if child < 0 then stopped := true\n      else (\n        current := child;\n        incr j;\n        let tid = Array.unsafe_get trie.trie_ids child in\n        if tid >= 0 then (\n          last_id := tid;\n          last_end := !j))\n    done;\n    if !last_id >= 0 then Some (!last_id, !last_end) else None\n\n(* Model type *)\n\ntype t = {\n  vocab : (string * float) array;\n  token_to_ids : (string, int) Hashtbl.t;\n  trie : trie;\n}\n\nlet create vocab_list =\n  let vocab = Array.of_list vocab_list in\n  let token_to_ids = Hashtbl.create (Array.length vocab) in\n  Array.iteri\n    (fun idx (token, _) -> Hashtbl.replace token_to_ids token idx)\n    vocab;\n  let trie = build_trie token_to_ids in\n  { vocab; token_to_ids; trie }\n\nlet token_to_id model token = Hashtbl.find_opt model.token_to_ids token\n\nlet id_to_token model id =\n  if id >= 0 && id < Array.length model.vocab then\n    let token, _ = model.vocab.(id) in\n    Some token\n  else None\n\nlet get_vocab model = Array.to_list model.vocab\nlet get_vocab_size model = Array.length model.vocab\n\nlet tokenize model text =\n  let len = String.length text in\n  let rec consume pos acc =\n    if pos >= len then List.rev acc\n    else if\n      text.[pos] = ' '\n      || text.[pos] = '\\n'\n      || text.[pos] = '\\t'\n      || text.[pos] = '\\r'\n    then consume (pos + 1) acc\n    else\n      match trie_longest_match model.trie text ~start:pos with\n      | Some (id, end_pos) ->\n          let s = String.sub text pos (end_pos - pos) in\n          consume end_pos ((id, s, (pos, end_pos)) :: acc)\n      | None ->\n          let s = String.sub text pos 1 in\n          let id = match token_to_id model s with Some id -> id | None -> 0 in\n          consume (pos + 1) ((id, s, (pos, pos + 1)) :: acc)\n  in\n  
consume 0 []\n\nlet json_obj pairs =\n  Jsont.Json.object' (List.map (fun (k, v) -> (Jsont.Json.name k, v)) pairs)\n\nlet json_to_string j =\n  match Jsont_bytesrw.encode_string ~format:Jsont.Minify Jsont.json j with\n  | Ok s -> s\n  | Error e -> failwith e\n\nlet save model ~folder () =\n  let json_vocab =\n    Array.to_list model.vocab\n    |> List.mapi (fun id (token, prob) ->\n        json_obj\n          [\n            (\"id\", Jsont.Json.int id);\n            (\"token\", Jsont.Json.string token);\n            (\"prob\", Jsont.Json.number prob);\n          ])\n  in\n  let json =\n    json_obj\n      [\n        (\"type\", Jsont.Json.string \"Unigram\");\n        (\"vocab\", Jsont.Json.list json_vocab);\n      ]\n  in\n  let path = Filename.concat folder \"unigram.json\" in\n  let oc = open_out path in\n  Fun.protect\n    ~finally:(fun () -> close_out oc)\n    (fun () -> output_string oc (json_to_string json));\n  [ \"unigram.json\" ]\n\nlet train ~vocab_size ~show_progress ~special_tokens ~shrinking_factor\n    ~unk_token ~max_piece_length ~n_sub_iterations texts existing =\n  let _ =\n    ( show_progress,\n      shrinking_factor,\n      unk_token,\n      max_piece_length,\n      n_sub_iterations,\n      existing )\n  in\n  let counts = Hashtbl.create 10000 in\n  List.iter\n    (fun line ->\n      let words = Re.split (Re.compile (Re.rep1 (Re.set \" \\t\\n\\r\"))) line in\n      List.iter\n        (fun word ->\n          if word <> \"\" then\n            Hashtbl.replace counts word\n              (1 + Option.value ~default:0 (Hashtbl.find_opt counts word)))\n        words)\n    texts;\n\n  let total =\n    Hashtbl.fold (fun _ count acc -> acc + count) counts 0 |> float_of_int\n  in\n  let sorted =\n    Hashtbl.fold (fun token count acc -> (token, count) :: acc) counts []\n    |> List.sort (fun (_, c1) (_, c2) -> compare c2 c1)\n  in\n\n  let take_first n lst =\n    let rec aux i = function\n      | [] -> []\n      | _ when i = 0 -> []\n      | x :: xs -> x :: 
aux (i - 1) xs\n    in\n    aux n lst\n  in\n\n  (* [vocab_size] is the target total size, including special tokens. *)\n  let selected =\n    take_first (max 0 (vocab_size - List.length special_tokens)) sorted\n  in\n  let vocab_with_probs =\n    special_tokens\n    |> List.map (fun token -> (token, 1.0 /. float_of_int (vocab_size + 1)))\n    |> fun specials ->\n    specials\n    @ List.map\n        (fun (token, count) ->\n          let prob = if total = 0. then 0. else float_of_int count /. total in\n          (token, prob))\n        selected\n  in\n  let model = create vocab_with_probs in\n  (model, special_tokens)\n"
  },
  {
    "path": "packages/brot/lib/unigram.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Unigram language model tokenization.\n\n    {b Internal module.} Probabilistic subword tokenization using token\n    log-probabilities. Used by SentencePiece, ALBERT, T5, and mBART.\n\n    Tokenization uses greedy longest-prefix matching via a compact trie with\n    sorted edges and binary-search dispatch. At each byte position the longest\n    vocabulary match is consumed. Unknown single bytes default to ID [0].\n*)\n\ntype t\n(** The type for unigram models. *)\n\n(** {1:creation Creation} *)\n\nval create : (string * float) list -> t\n(** [create vocab] is a unigram model from [(token, log_probability)] pairs. The\n    trie is built at creation time. *)\n\n(** {1:tokenization Tokenization} *)\n\nval tokenize : t -> string -> (int * string * (int * int)) list\n(** [tokenize t s] is the tokenization of [s] as [(id, token, (start, stop))]\n    triples. Offsets are byte positions in [s]. *)\n\n(** {1:vocabulary Vocabulary} *)\n\nval token_to_id : t -> string -> int option\n(** [token_to_id t tok] is the ID of [tok] in the vocabulary. *)\n\nval id_to_token : t -> int -> string option\n(** [id_to_token t id] is the token string for [id]. *)\n\nval get_vocab : t -> (string * float) list\n(** [get_vocab t] is the vocabulary as [(token, score)] pairs. *)\n\nval get_vocab_size : t -> int\n(** [get_vocab_size t] is the number of tokens in the vocabulary. *)\n\n(** {1:serialization Serialization} *)\n\nval save : t -> folder:string -> unit -> string list\n(** [save t ~folder ()] writes [unigram.json] to [folder]. The file contains\n    each token with its ID and log-probability in JSON format. Returns the list\n    of created filenames. 
*)\n\n(** {1:training Training} *)\n\nval train :\n  vocab_size:int ->\n  show_progress:bool ->\n  special_tokens:string list ->\n  shrinking_factor:float ->\n  unk_token:string option ->\n  max_piece_length:int ->\n  n_sub_iterations:int ->\n  string list ->\n  t option ->\n  t * string list\n(** [train ~vocab_size ~show_progress ~special_tokens ~shrinking_factor\n     ~unk_token ~max_piece_length ~n_sub_iterations texts init] learns a unigram\n    model from [texts].\n\n    - [vocab_size] is the target vocabulary size.\n    - [show_progress] enables progress output on [stderr].\n    - [special_tokens] are added to the vocabulary first.\n    - [shrinking_factor] controls vocabulary pruning rate.\n    - [unk_token] is the unknown token, if any.\n    - [max_piece_length] limits the byte length of vocabulary pieces.\n    - [n_sub_iterations] is the number of EM sub-iterations.\n    - [init], when provided, seeds the vocabulary from an existing model.\n\n    Returns [(model, special_tokens)]. *)\n"
  },
  {
    "path": "packages/brot/lib/word_level.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\ntype t = {\n  vocab : (string, int) Hashtbl.t;\n  vocab_r : (int, string) Hashtbl.t;\n  unk_token : string;\n}\n\nlet create ?(vocab = []) ?(unk_token = \"<unk>\") () =\n  let size = max 1 (List.length vocab) in\n  let vocab_tbl = Hashtbl.create size in\n  let vocab_r_tbl = Hashtbl.create size in\n  List.iter\n    (fun (token, id) ->\n      Hashtbl.replace vocab_tbl token id;\n      Hashtbl.replace vocab_r_tbl id token)\n    vocab;\n  { vocab = vocab_tbl; vocab_r = vocab_r_tbl; unk_token }\n\nlet add_token vocab vocab_r token id =\n  Hashtbl.replace vocab token id;\n  Hashtbl.replace vocab_r id token\n\nlet tokenize model text =\n  if String.length text = 0 then []\n  else\n    (* Match HuggingFace tokenizers semantics exactly: 1. Try to find token in\n       vocab 2. Fall back to UNK token if available 3. 
Return empty list if\n       neither exists (error case) *)\n    match Hashtbl.find_opt model.vocab text with\n    | Some id -> [ (id, text, (0, String.length text)) ]\n    | None -> (\n        match Hashtbl.find_opt model.vocab model.unk_token with\n        | Some unk_id -> [ (unk_id, model.unk_token, (0, String.length text)) ]\n        | None -> [] (* Token not found and no UNK token - return empty *))\n\nlet tokenize_ids model text =\n  if String.length text = 0 then [||]\n  else\n    match Hashtbl.find_opt model.vocab text with\n    | Some id -> [| id |]\n    | None -> (\n        match Hashtbl.find_opt model.vocab model.unk_token with\n        | Some unk_id -> [| unk_id |]\n        | None -> [||])\n\nlet token_to_id model token = Hashtbl.find_opt model.vocab token\nlet id_to_token model id = Hashtbl.find_opt model.vocab_r id\n\nlet get_vocab model =\n  Hashtbl.fold (fun token id acc -> (token, id) :: acc) model.vocab []\n\nlet get_vocab_size model = Hashtbl.length model.vocab\n\nlet add_tokens model tokens =\n  let start_id = Hashtbl.length model.vocab in\n  let count = ref 0 in\n  List.iter\n    (fun token ->\n      if not (Hashtbl.mem model.vocab token) then (\n        (* Use [count], not the list position, so assigned IDs stay consecutive\n           when some of [tokens] already exist in the vocabulary. *)\n        add_token model.vocab model.vocab_r token (start_id + !count);\n        incr count))\n    tokens;\n  !count\n\nlet json_obj pairs =\n  Jsont.Json.object' (List.map (fun (k, v) -> (Jsont.Json.name k, v)) pairs)\n\nlet json_to_string j =\n  match Jsont_bytesrw.encode_string ~format:Jsont.Minify Jsont.json j with\n  | Ok s -> s\n  | Error e -> failwith e\n\nlet save model ~folder () =\n  let vocab_items =\n    get_vocab model\n    |> List.sort (fun (_, id1) (_, id2) -> compare id1 id2)\n    |> List.map (fun (token, id) ->\n        json_obj\n          [ (\"token\", Jsont.Json.string token); (\"id\", Jsont.Json.int id) ])\n  in\n  let json =\n    json_obj\n      [\n        (\"type\", Jsont.Json.string \"WordLevel\");\n        (\"unk_token\", Jsont.Json.string model.unk_token);\n        (\"vocab\", 
Jsont.Json.list vocab_items);\n      ]\n  in\n  let path = Filename.concat folder \"wordlevel.json\" in\n  let oc = open_out path in\n  Fun.protect\n    ~finally:(fun () -> close_out oc)\n    (fun () -> output_string oc (json_to_string json));\n  [ \"wordlevel.json\" ]\n\nlet train ~vocab_size ~min_frequency ~show_progress ~special_tokens texts\n    existing =\n  let _ = show_progress in\n  let counts = Hashtbl.create 10000 in\n  List.iter\n    (fun line ->\n      let words = Re.split (Re.compile (Re.rep1 (Re.set \" \\t\\n\\r\"))) line in\n      List.iter\n        (fun word ->\n          if word <> \"\" then\n            Hashtbl.replace counts word\n              (1 + Option.value ~default:0 (Hashtbl.find_opt counts word)))\n        words)\n    texts;\n\n  let items =\n    Hashtbl.fold\n      (fun word count acc ->\n        if count >= min_frequency then (word, count) :: acc else acc)\n      counts []\n    |> List.sort (fun (_, c1) (_, c2) -> compare c2 c1)\n  in\n  (* Special tokens occupy IDs [0 .. n_specials - 1]; word IDs start after\n     them so the two ranges never collide. *)\n  let n_specials = List.length special_tokens in\n  let vocab_items = ref [] in\n  let idx = ref n_specials in\n  List.iter\n    (fun token ->\n      if !idx < vocab_size then (\n        vocab_items := (fst token, !idx) :: !vocab_items;\n        incr idx))\n    items;\n  let vocab_items = List.rev !vocab_items in\n\n  let specials = List.mapi (fun i token -> (token, i)) special_tokens in\n  let vocab = specials @ vocab_items in\n  let model =\n    match existing with\n    | Some model ->\n        model.vocab |> Hashtbl.clear;\n        model.vocab_r |> Hashtbl.clear;\n        List.iter\n          (fun (token, id) -> add_token model.vocab model.vocab_r token id)\n          vocab;\n        model\n    | None -> create ~vocab ()\n  in\n  (model, special_tokens)\n"
  },
  {
    "path": "packages/brot/lib/word_level.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Word-level tokenization model.\n\n    {b Internal module.} Direct vocabulary lookup with no subword splitting.\n    Each input word is mapped to a single token ID via exact string match. Words\n    not in the vocabulary are replaced by [unk_token]. *)\n\ntype t\n(** The type for word-level models. *)\n\n(** {1:creation Creation} *)\n\nval create : ?vocab:(string * int) list -> ?unk_token:string -> unit -> t\n(** [create ?vocab ?unk_token ()] is a word-level model.\n\n    - [vocab] is the initial vocabulary as [(token, id)] pairs. Defaults to\n      [[]].\n    - [unk_token] is the token emitted for unknown words. Defaults to [\"[UNK]\"].\n*)\n\n(** {1:tokenization Tokenization} *)\n\nval tokenize : t -> string -> (int * string * (int * int)) list\n(** [tokenize t s] is [[(id, token, (start, stop))]] for [s]. If [s] is not in\n    the vocabulary, [unk_token] is used. If [unk_token] itself is not in the\n    vocabulary, the empty list is returned. *)\n\nval tokenize_ids : t -> string -> int array\n(** [tokenize_ids t s] is like {!tokenize} but returns only token IDs. *)\n\n(** {1:vocabulary Vocabulary} *)\n\nval token_to_id : t -> string -> int option\n(** [token_to_id t tok] is the ID of [tok] in the vocabulary. *)\n\nval id_to_token : t -> int -> string option\n(** [id_to_token t id] is the token string for [id]. *)\n\nval get_vocab : t -> (string * int) list\n(** [get_vocab t] is the vocabulary as [(token, id)] pairs. *)\n\nval get_vocab_size : t -> int\n(** [get_vocab_size t] is the number of tokens in the vocabulary. *)\n\nval add_tokens : t -> string list -> int\n(** [add_tokens t toks] adds [toks] to the vocabulary, assigning consecutive IDs\n    starting after the current maximum. 
Returns the number of new tokens\n    actually added (duplicates are skipped). Mutates [t]. *)\n\n(** {1:serialization Serialization} *)\n\nval save : t -> folder:string -> unit -> string list\n(** [save t ~folder ()] writes [wordlevel.json] to [folder]. The file contains\n    the vocabulary and [unk_token] in JSON format. Returns the list of created\n    filenames. *)\n\n(** {1:training Training} *)\n\nval train :\n  vocab_size:int ->\n  min_frequency:int ->\n  show_progress:bool ->\n  special_tokens:string list ->\n  string list ->\n  t option ->\n  t * string list\n(** [train ~vocab_size ~min_frequency ~show_progress ~special_tokens texts init]\n    learns a vocabulary from [texts] by counting word frequencies.\n\n    - [vocab_size] is the target vocabulary size.\n    - [min_frequency] is the minimum word frequency to include.\n    - [show_progress] is accepted for interface parity with the other trainers;\n      word-level training currently emits no progress output.\n    - [special_tokens] are added to the vocabulary first.\n    - [init], when provided, seeds the vocabulary from an existing model.\n\n    Returns [(model, special_tokens)]. *)\n
  },
  {
    "path": "packages/brot/lib/wordpiece.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\ntype token = { id : int; value : string; offsets : int * int }\n\n(* Compact trie for zero-allocation longest-prefix matching *)\n\ntype trie = {\n  trie_ids : int array;\n  child_starts : int array;\n  edge_bytes : bytes;\n  edge_targets : int array;\n  (* Flat 256-element arrays for dense nodes (>16 children) — O(1) lookup *)\n  flat_nodes : int array array;\n}\n\nlet build_trie vocab =\n  if Hashtbl.length vocab = 0 then\n    {\n      trie_ids = [||];\n      child_starts = [| 0 |];\n      edge_bytes = Bytes.empty;\n      edge_targets = [||];\n      flat_nodes = [||];\n    }\n  else\n    let cap = ref 256 in\n    let ids = ref (Array.make !cap (-1)) in\n    let ch = ref (Array.init !cap (fun _ -> Hashtbl.create 0)) in\n    let n = ref 1 in\n    !ch.(0) <- Hashtbl.create 64;\n    let grow () =\n      let new_cap = !cap * 2 in\n      let new_ids = Array.make new_cap (-1) in\n      Array.blit !ids 0 new_ids 0 !n;\n      ids := new_ids;\n      let new_ch =\n        Array.init new_cap (fun i ->\n            if i < !n then !ch.(i) else Hashtbl.create 0)\n      in\n      ch := new_ch;\n      cap := new_cap\n    in\n    Hashtbl.iter\n      (fun key id ->\n        let cur = ref 0 in\n        for i = 0 to String.length key - 1 do\n          let byte = Char.code (String.unsafe_get key i) in\n          let child =\n            match Hashtbl.find_opt !ch.(!cur) byte with\n            | Some c -> c\n            | None ->\n                if !n >= !cap then grow ();\n                let c = !n in\n                incr n;\n                !ch.(c) <- Hashtbl.create 4;\n                Hashtbl.add !ch.(!cur) byte c;\n                c\n          in\n          cur := child\n        done;\n        !ids.(!cur) <- 
id)\n      vocab;\n    let node_count = !n in\n    let trie_ids = Array.init node_count (fun i -> !ids.(i)) in\n    let child_starts = Array.make (node_count + 1) 0 in\n    let total = ref 0 in\n    for i = 0 to node_count - 1 do\n      child_starts.(i) <- !total;\n      total := !total + Hashtbl.length !ch.(i)\n    done;\n    child_starts.(node_count) <- !total;\n    let edge_bytes = Bytes.create !total in\n    let edge_targets = Array.make !total 0 in\n    let pos = ref 0 in\n    for i = 0 to node_count - 1 do\n      Hashtbl.iter\n        (fun byte child ->\n          Bytes.unsafe_set edge_bytes !pos (Char.unsafe_chr byte);\n          edge_targets.(!pos) <- child;\n          incr pos)\n        !ch.(i)\n    done;\n    (* Sort each node's children by byte value for binary search *)\n    for i = 0 to node_count - 1 do\n      let start = child_starts.(i) in\n      let stop = child_starts.(i + 1) in\n      for j = start + 1 to stop - 1 do\n        let kb = Bytes.unsafe_get edge_bytes j in\n        let kt = edge_targets.(j) in\n        let k = ref (j - 1) in\n        while !k >= start && Bytes.unsafe_get edge_bytes !k > kb do\n          Bytes.unsafe_set edge_bytes (!k + 1) (Bytes.unsafe_get edge_bytes !k);\n          edge_targets.(!k + 1) <- edge_targets.(!k);\n          decr k\n        done;\n        Bytes.unsafe_set edge_bytes (!k + 1) kb;\n        edge_targets.(!k + 1) <- kt\n      done\n    done;\n    (* Build flat 256-element arrays for dense nodes (>16 children) *)\n    let flat_nodes = Array.make node_count [||] in\n    for i = 0 to node_count - 1 do\n      let start = child_starts.(i) in\n      let count = child_starts.(i + 1) - start in\n      if count > 16 then begin\n        let flat = Array.make 256 (-1) in\n        for j = start to start + count - 1 do\n          let b = Char.code (Bytes.unsafe_get edge_bytes j) in\n          flat.(b) <- Array.unsafe_get edge_targets j\n        done;\n        flat_nodes.(i) <- flat\n      end\n    done;\n    { trie_ids; 
child_starts; edge_bytes; edge_targets; flat_nodes }\n\nlet[@inline] trie_step trie node byte =\n  let flat = Array.unsafe_get trie.flat_nodes node in\n  if Array.length flat > 0 then Array.unsafe_get flat byte\n  else\n    let lo = ref (Array.unsafe_get trie.child_starts node) in\n    let hi = ref (Array.unsafe_get trie.child_starts (node + 1) - 1) in\n    let result = ref (-1) in\n    while !lo <= !hi do\n      let mid = !lo + ((!hi - !lo) asr 1) in\n      let mid_byte = Char.code (Bytes.unsafe_get trie.edge_bytes mid) in\n      if mid_byte = byte then (\n        result := Array.unsafe_get trie.edge_targets mid;\n        lo := !hi + 1)\n      else if mid_byte < byte then lo := mid + 1\n      else hi := mid - 1\n    done;\n    !result\n\nlet trie_longest_match trie sequence ~start ~prefix ~prefix_len =\n  if Array.length trie.trie_ids = 0 then None\n  else\n    let seq_len = String.length sequence in\n    let last_id = ref (-1) in\n    let last_end = ref start in\n    let current = ref 0 in\n    let stopped = ref false in\n    let i = ref 0 in\n    while !i < prefix_len && not !stopped do\n      let child =\n        trie_step trie !current (Char.code (String.unsafe_get prefix !i))\n      in\n      if child < 0 then stopped := true\n      else (\n        current := child;\n        incr i)\n    done;\n    (if not !stopped then\n       let j = ref start in\n       while !j < seq_len && not !stopped do\n         let child =\n           trie_step trie !current (Char.code (String.unsafe_get sequence !j))\n         in\n         if child < 0 then stopped := true\n         else (\n           current := child;\n           incr j;\n           let tid = Array.unsafe_get trie.trie_ids child in\n           if tid >= 0 then (\n             last_id := tid;\n             last_end := !j))\n       done);\n    if !last_id >= 0 then Some (!last_id, !last_end) else None\n\n(* Model type *)\n\ntype t = {\n  vocab : (string, int) Hashtbl.t;\n  vocab_r : string array;\n  trie : trie;\n  
unk_token : string;\n  continuing_subword_prefix : string;\n  max_input_chars_per_word : int;\n}\n\nlet create ~vocab ?(unk_token = \"[UNK]\") ?(continuing_subword_prefix = \"##\")\n    ?(max_input_chars_per_word = 100) () =\n  let max_id = Hashtbl.fold (fun _ id acc -> max id acc) vocab (-1) in\n  let vocab_r = Array.make (max_id + 1) \"\" in\n  Hashtbl.iter (fun k v -> Array.unsafe_set vocab_r v k) vocab;\n  if Hashtbl.length vocab > 0 && not (Hashtbl.mem vocab unk_token) then\n    invalid_arg \"Wordpiece.create: unk_token not in vocab\";\n  let trie = build_trie vocab in\n  {\n    vocab;\n    vocab_r;\n    trie;\n    unk_token;\n    continuing_subword_prefix;\n    max_input_chars_per_word;\n  }\n\nlet read_file ~vocab_file =\n  let vocab = Hashtbl.create 10000 in\n  let ic = open_in vocab_file in\n  Fun.protect\n    ~finally:(fun () -> close_in ic)\n    (fun () ->\n      let index = ref 0 in\n      (try\n         while true do\n           let line = input_line ic in\n           let token = String.trim line in\n           if token <> \"\" then (\n             Hashtbl.add vocab token !index;\n             incr index)\n         done\n       with End_of_file -> ());\n      vocab)\n\nlet from_file ~vocab_file =\n  let vocab = read_file ~vocab_file in\n  create ~vocab ()\n\nlet count_chars s =\n  let len = String.length s in\n  let n = ref 0 in\n  for i = 0 to len - 1 do\n    if Char.code (String.unsafe_get s i) land 0xC0 <> 0x80 then incr n\n  done;\n  !n\n\nlet tokenize model sequence =\n  if Hashtbl.length model.vocab = 0 then []\n  else\n    let seq_len = String.length sequence in\n    if count_chars sequence > model.max_input_chars_per_word then\n      let id = Hashtbl.find model.vocab model.unk_token in\n      [ { id; value = model.unk_token; offsets = (0, seq_len) } ]\n    else\n      let prefix = model.continuing_subword_prefix in\n      let prefix_len = String.length prefix in\n      let rec greedy start acc =\n        if start >= seq_len then List.rev acc\n  
      else\n          let p = if start > 0 then prefix else \"\" in\n          let pl = if start > 0 then prefix_len else 0 in\n          match\n            trie_longest_match model.trie sequence ~start ~prefix:p\n              ~prefix_len:pl\n          with\n          | Some (id, end_byte) ->\n              let value = Array.unsafe_get model.vocab_r id in\n              greedy end_byte ({ id; value; offsets = (start, end_byte) } :: acc)\n          | None ->\n              let id = Hashtbl.find model.vocab model.unk_token in\n              [ { id; value = model.unk_token; offsets = (0, seq_len) } ]\n      in\n      greedy 0 []\n\nlet tokenize_ids model sequence =\n  if Hashtbl.length model.vocab = 0 then [||]\n  else\n    let seq_len = String.length sequence in\n    if count_chars sequence > model.max_input_chars_per_word then\n      let id = Hashtbl.find model.vocab model.unk_token in\n      [| id |]\n    else\n      let prefix = model.continuing_subword_prefix in\n      let prefix_len = String.length prefix in\n      let ids = ref [] in\n      let n = ref 0 in\n      let rec greedy start =\n        if start >= seq_len then ()\n        else\n          let p = if start > 0 then prefix else \"\" in\n          let pl = if start > 0 then prefix_len else 0 in\n          match\n            trie_longest_match model.trie sequence ~start ~prefix:p\n              ~prefix_len:pl\n          with\n          | Some (id, end_byte) ->\n              ids := id :: !ids;\n              incr n;\n              greedy end_byte\n          | None ->\n              let unk_id = Hashtbl.find model.vocab model.unk_token in\n              ids := [ unk_id ];\n              n := 1\n      in\n      greedy 0;\n      let result = Array.make !n 0 in\n      List.iteri (fun i id -> result.(!n - 1 - i) <- id) !ids;\n      result\n\nlet tokenize_spans_encoding model pre_tokens ~type_id =\n  if Hashtbl.length model.vocab = 0 then Encoding.empty\n  else\n    let trie = model.trie in\n    let prefix = 
model.continuing_subword_prefix in\n    let prefix_len = String.length prefix in\n    let unk_id = Hashtbl.find model.vocab model.unk_token in\n    let max_chars = model.max_input_chars_per_word in\n    let vocab_r = model.vocab_r in\n    let unk_token_str = model.unk_token in\n    (* Single pass: convert pre_tokens to array for direct access (no closure),\n       tokenize all fragments and fill growable output arrays directly. Each\n       span's start offset anchors token offsets in the original text, as the\n       interface documents. *)\n    let pre_arr = Array.of_list pre_tokens in\n    let n_pre = Array.length pre_arr in\n    let cap = ref (max 16 (n_pre * 2)) in\n    let ids = ref (Array.make !cap 0) in\n    let token_strs = ref (Array.make !cap \"\") in\n    let offsets_arr = ref (Array.make !cap (0, 0)) in\n    let n = ref 0 in\n    let grow () =\n      let new_cap = !cap * 2 in\n      let new_ids = Array.make new_cap 0 in\n      Array.blit !ids 0 new_ids 0 !n;\n      ids := new_ids;\n      let new_strs = Array.make new_cap \"\" in\n      Array.blit !token_strs 0 new_strs 0 !n;\n      token_strs := new_strs;\n      let new_off = Array.make new_cap (0, 0) in\n      Array.blit !offsets_arr 0 new_off 0 !n;\n      offsets_arr := new_off;\n      cap := new_cap\n    in\n    (* Hoisted mutable state for trie matching — allocated once *)\n    let current = ref 0 in\n    let stopped = ref false in\n    let last_id = ref (-1) in\n    let last_end = ref 0 in\n    let pos = ref 0 in\n    let is_unk = ref false in\n    let char_count = ref 0 in\n    let i_ref = ref 0 in\n    let j_ref = ref 0 in\n    for frag_idx = 0 to n_pre - 1 do\n      let fragment, (span_start, _) = Array.unsafe_get pre_arr frag_idx in\n      let seq_len = String.length fragment in\n      char_count := 0;\n      for k = 0 to seq_len - 1 do\n        if Char.code (String.unsafe_get fragment k) land 0xC0 <> 0x80 then\n          incr char_count\n      done;\n      if !char_count > max_chars then begin\n        if !n >= !cap then grow ();\n        Array.unsafe_set !ids !n unk_id;\n        Array.unsafe_set !token_strs !n 
unk_token_str;\n        Array.unsafe_set !offsets_arr !n (span_start, span_start + seq_len);\n        incr n\n      end\n      else begin\n        pos := 0;\n        is_unk := false;\n        let start_n = !n in\n        while !pos < seq_len && not !is_unk do\n          let match_start = !pos in\n          current := 0;\n          stopped := false;\n          last_id := -1;\n          last_end := !pos;\n          if !pos > 0 then begin\n            i_ref := 0;\n            while !i_ref < prefix_len && not !stopped do\n              let child =\n                trie_step trie !current\n                  (Char.code (String.unsafe_get prefix !i_ref))\n              in\n              if child < 0 then stopped := true\n              else begin\n                current := child;\n                incr i_ref\n              end\n            done\n          end;\n          if not !stopped then begin\n            j_ref := !pos;\n            while !j_ref < seq_len && not !stopped do\n              let child =\n                trie_step trie !current\n                  (Char.code (String.unsafe_get fragment !j_ref))\n              in\n              if child < 0 then stopped := true\n              else begin\n                current := child;\n                incr j_ref;\n                let tid = Array.unsafe_get trie.trie_ids child in\n                if tid >= 0 then begin\n                  last_id := tid;\n                  last_end := !j_ref\n                end\n              end\n            done\n          end;\n          if !last_id >= 0 then begin\n            if !n >= !cap then grow ();\n            Array.unsafe_set !ids !n !last_id;\n            Array.unsafe_set !token_strs !n (Array.unsafe_get vocab_r !last_id);\n            Array.unsafe_set !offsets_arr !n\n              (span_start + match_start, span_start + !last_end);\n            incr n;\n            pos := !last_end\n          end\n          else is_unk := true\n        done;\n        if !is_unk then begin\n          n := start_n;\n          if !n >= !cap then grow 
();\n          Array.unsafe_set !ids !n unk_id;\n          Array.unsafe_set !token_strs !n unk_token_str;\n          Array.unsafe_set !offsets_arr !n (span_start, span_start + seq_len);\n          n := start_n + 1\n        end\n      end\n    done;\n    let total = !n in\n    if total = 0 then Encoding.empty\n    else\n      let final_ids = if total = !cap then !ids else Array.sub !ids 0 total in\n      let final_strs =\n        if total = !cap then !token_strs else Array.sub !token_strs 0 total\n      in\n      let final_off =\n        if total = !cap then !offsets_arr else Array.sub !offsets_arr 0 total\n      in\n      Encoding.create ~ids:final_ids ~type_ids:(Array.make total type_id)\n        ~tokens:final_strs ~words:(Array.make total None) ~offsets:final_off\n        ~special_tokens_mask:(Array.make total 0)\n        ~attention_mask:(Array.make total 1) ()\n\nlet token_to_id model token = Hashtbl.find_opt model.vocab token\n\nlet id_to_token model id =\n  if id >= 0 && id < Array.length model.vocab_r then\n    Some (Array.unsafe_get model.vocab_r id)\n  else None\n\nlet get_vocab model = Hashtbl.fold (fun k v acc -> (k, v) :: acc) model.vocab []\nlet get_vocab_size model = Hashtbl.length model.vocab\nlet get_unk_token model = model.unk_token\nlet get_continuing_subword_prefix model = model.continuing_subword_prefix\n\nlet save model ~path ?name () =\n  let vocab_file =\n    match name with\n    | Some n -> Filename.concat path (n ^ \"-vocab.txt\")\n    | None -> Filename.concat path \"vocab.txt\"\n  in\n  let vocab_list =\n    Hashtbl.fold (fun k v acc -> (v, k) :: acc) model.vocab []\n    |> List.sort compare\n    |> List.map (fun (_, k) -> k)\n  in\n  let oc = open_out vocab_file in\n  Fun.protect\n    ~finally:(fun () -> close_out oc)\n    (fun () ->\n      List.iter\n        (fun token ->\n          output_string oc token;\n          output_char oc '\\n')\n        vocab_list);\n  vocab_file\n\nlet from_bpe bpe =\n  let vocab = Hashtbl.create (Bpe.get_vocab_size bpe) in\n  
List.iter (fun (k, id) -> Hashtbl.add vocab k id) (Bpe.get_vocab bpe);\n  let unk_token =\n    match Bpe.get_unk_token bpe with Some u -> u | None -> \"[UNK]\"\n  in\n  if not (Hashtbl.mem vocab unk_token) then begin\n    let max_id = Hashtbl.fold (fun _ id acc -> max id acc) vocab (-1) in\n    Hashtbl.add vocab unk_token (max_id + 1)\n  end;\n  let continuing_subword_prefix =\n    match Bpe.get_continuing_subword_prefix bpe with\n    | Some p -> p\n    | None -> \"##\"\n  in\n  create ~vocab ~unk_token ~continuing_subword_prefix ()\n\n(* Trainer *)\n\nlet train ~min_frequency ~vocab_size ~show_progress ~special_tokens\n    ~limit_alphabet ~initial_alphabet ~continuing_subword_prefix\n    ~end_of_word_suffix texts existing =\n  let _ = existing in\n  (* WordPiece training uses BPE algorithm internally *)\n  let bpe_trained, result_tokens =\n    Bpe.train ~min_frequency ~vocab_size ~show_progress ~special_tokens\n      ~limit_alphabet ~initial_alphabet\n      ~continuing_subword_prefix:(Some continuing_subword_prefix)\n      ~end_of_word_suffix ~max_token_length:None texts None\n  in\n  let wordpiece_model = from_bpe bpe_trained in\n  (wordpiece_model, result_tokens)\n"
  },
  {
    "path": "packages/brot/lib/wordpiece.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** WordPiece tokenization model.\n\n    {b Internal module.} Greedy longest-match-first subword decomposition\n    against a fixed vocabulary. Used by BERT, DistilBERT, and Electra.\n\n    A word is decomposed left-to-right: at each position the longest vocabulary\n    match is consumed. Continuation pieces are prefixed with\n    {!get_continuing_subword_prefix} (typically [\"##\"]). If no subword is found\n    at any position the {e entire} word falls back to {!get_unk_token}.\n\n    Vocabulary lookup uses a hybrid trie: dense nodes (more than 16 children)\n    use a 256-element flat array for O(1) byte dispatch, sparse nodes use binary\n    search on sorted edges. *)\n\ntype t\n(** The type for WordPiece models. *)\n\n(** {1:creation Creation} *)\n\nval create :\n  vocab:(string, int) Hashtbl.t ->\n  ?unk_token:string ->\n  ?continuing_subword_prefix:string ->\n  ?max_input_chars_per_word:int ->\n  unit ->\n  t\n(** [create ~vocab ()] is a WordPiece model backed by [vocab].\n\n    - [unk_token] is the token emitted for words that cannot be decomposed.\n      Defaults to [\"[UNK]\"].\n    - [continuing_subword_prefix] is prepended to non-initial subwords. Defaults\n      to [\"##\"].\n    - [max_input_chars_per_word] is the UTF-8 character count above which a word\n      is replaced by [unk_token] without attempting decomposition. Defaults to\n      [100].\n\n    Raises [Invalid_argument] if [unk_token] is not in [vocab]. *)\n\nval from_file : vocab_file:string -> t\n(** [from_file ~vocab_file] loads a model from a BERT-style [vocab.txt] file\n    (one token per line, ID equals line number). 
Uses BERT defaults:\n    [unk_token = \"[UNK]\"], [continuing_subword_prefix = \"##\"],\n    [max_input_chars_per_word = 100]. *)\n\n(** {1:tokenization Tokenization} *)\n\ntype token = { id : int; value : string; offsets : int * int }\n(** The type for tokens. [id] is the vocabulary index, [value] the string\n    content, and [offsets] the [(start, stop)] byte span in the source text. *)\n\nval tokenize : t -> string -> token list\n(** [tokenize t s] is the WordPiece decomposition of [s].\n\n    If [s] exceeds {!create}'s [max_input_chars_per_word] (in UTF-8 characters),\n    a single [unk_token] token spanning the whole input is returned. If\n    decomposition fails at any position, the result is likewise a single\n    [unk_token]. *)\n\nval tokenize_ids : t -> string -> int array\n(** [tokenize_ids t s] is like {!tokenize} but returns only token IDs. *)\n\nval tokenize_spans_encoding :\n  t -> (string * (int * int)) list -> type_id:int -> Encoding.t\n(** [tokenize_spans_encoding t spans ~type_id] tokenizes all [spans] and builds\n    an {!Encoding.t} directly. Each element of [spans] is\n    [(fragment, (start, stop))] where offsets are byte positions in the original\n    text.\n\n    This is a single-pass variant that avoids intermediate list and record\n    allocation: mutable refs are hoisted, growable arrays are filled in place,\n    and trie matching is inlined. *)\n\n(** {1:vocabulary Vocabulary} *)\n\nval token_to_id : t -> string -> int option\n(** [token_to_id t tok] is the ID of [tok] in the vocabulary. *)\n\nval id_to_token : t -> int -> string option\n(** [id_to_token t id] is the token string for [id]. *)\n\nval get_vocab : t -> (string * int) list\n(** [get_vocab t] is the vocabulary as [(token, id)] pairs. *)\n\nval get_vocab_size : t -> int\n(** [get_vocab_size t] is the number of tokens in the vocabulary. *)\n\nval get_unk_token : t -> string\n(** [get_unk_token t] is the unknown token string. 
*)\n\nval get_continuing_subword_prefix : t -> string\n(** [get_continuing_subword_prefix t] is the subword continuation prefix (e.g.\n    [\"##\"]). *)\n\n(** {1:serialization Serialization} *)\n\nval save : t -> path:string -> ?name:string -> unit -> string\n(** [save t ~path ()] writes the vocabulary as a plain-text [vocab.txt] file\n    (one token per line) to [path]. If [name] is given the file is named\n    [{name}-vocab.txt]. Returns the filepath written. *)\n\n(** {1:training Training} *)\n\nval train :\n  min_frequency:int ->\n  vocab_size:int ->\n  show_progress:bool ->\n  special_tokens:string list ->\n  limit_alphabet:int option ->\n  initial_alphabet:char list ->\n  continuing_subword_prefix:string ->\n  end_of_word_suffix:string option ->\n  string list ->\n  t option ->\n  t * string list\n(** [train ~min_frequency ~vocab_size ~show_progress ~special_tokens\n     ~limit_alphabet ~initial_alphabet ~continuing_subword_prefix\n     ~end_of_word_suffix texts init] learns a WordPiece vocabulary from [texts]\n    using BPE merge training internally.\n\n    - [min_frequency] is the minimum pair frequency to merge.\n    - [vocab_size] is the target vocabulary size.\n    - [show_progress] enables progress output on [stderr].\n    - [special_tokens] are added to the vocabulary first.\n    - [limit_alphabet] caps the number of distinct initial characters kept.\n    - [initial_alphabet] seeds the character set.\n    - [continuing_subword_prefix] is set on the resulting model.\n    - [end_of_word_suffix] appended to final subwords if given.\n    - [init], when provided, seeds the vocabulary from an existing model.\n\n    Returns [(model, special_tokens)]. *)\n"
  },
  {
    "path": "packages/brot/test/dune",
    "content": "(data_only_dirs fixtures scripts)\n\n(tests\n (names\n  test_tokenization\n  test_vocab\n  test_encoding\n  test_unicode\n  test_bpe\n  test_wordpiece\n  test_hf_tokenizers\n  test_processors\n  test_pretokenizers)\n (package brot)\n (libraries brot windtrap unix jsont))\n"
  },
  {
    "path": "packages/brot/test/fixtures/.gitignore",
    "content": "hf/\n"
  },
  {
    "path": "packages/brot/test/scripts/download_hf_tokenizers.py",
    "content": "#!/usr/bin/env python3\n\"\"\"\nDownload selected HuggingFace tokenizer JSON files into brot/test/fixtures/hf.\n\nRun this script whenever you need to refresh the fixtures:\n\n    python3 brot/test/scripts/download_hf_tokenizers.py\n\nThe files are ignored by git, so each developer/machine maintains its own cache.\n\"\"\"\n\nfrom __future__ import annotations\n\nimport hashlib\nimport json\nimport sys\nimport urllib.request\nfrom pathlib import Path\nfrom typing import Iterable, Tuple\n\n\nFIXTURES: Iterable[Tuple[str, str]] = (\n    (\n        \"bert-base-uncased\",\n        \"https://huggingface.co/bert-base-uncased/resolve/main/tokenizer.json?download=1\",\n    ),\n    (\n        \"gpt2\",\n        \"https://huggingface.co/gpt2/resolve/main/tokenizer.json?download=1\",\n    ),\n    (\n        \"roberta-base\",\n        \"https://huggingface.co/roberta-base/resolve/main/tokenizer.json?download=1\",\n    ),\n)\n\n\ndef download(url: str, dest: Path) -> None:\n    dest.parent.mkdir(parents=True, exist_ok=True)\n    tmp_path = dest.with_suffix(\".tmp\")\n\n    print(f\"→ downloading {url}…\")\n    with urllib.request.urlopen(url) as response, open(tmp_path, \"wb\") as out:\n        while True:\n            chunk = response.read(1024 * 64)\n            if not chunk:\n                break\n            out.write(chunk)\n    tmp_path.replace(dest)\n\n\ndef sha256(path: Path) -> str:\n    h = hashlib.sha256()\n    with path.open(\"rb\") as fh:\n        for chunk in iter(lambda: fh.read(1024 * 64), b\"\"):\n            h.update(chunk)\n    return h.hexdigest()\n\n\ndef summarize(path: Path) -> None:\n    try:\n        with path.open(\"r\", encoding=\"utf-8\") as fh:\n            metadata = json.load(fh)\n        model_type = metadata.get(\"model\", {}).get(\"type\", \"<unknown>\")\n        size = path.stat().st_size\n        digest = sha256(path)[:12]\n        print(f\"  saved {path} ({size} bytes, model={model_type}, sha256={digest})\")\n    except 
Exception as exc:  # pylint: disable=broad-except\n        print(f\"  warning: failed to inspect {path}: {exc}\")\n\n\ndef main() -> int:\n    test_root = Path(__file__).resolve().parents[1]\n    fixtures_dir = test_root / \"fixtures\" / \"hf\"\n    fixtures_dir.mkdir(parents=True, exist_ok=True)\n\n    for model, url in FIXTURES:\n        target = fixtures_dir / model / \"tokenizer.json\"\n        if target.exists():\n            print(f\"✓ {model} already present at {target}\")\n            continue\n        print(f\"Downloading {model} tokenizer…\")\n        try:\n            download(url, target)\n            summarize(target)\n        except Exception as exc:  # pylint: disable=broad-except\n            print(f\"  failed to download {model}: {exc}\", file=sys.stderr)\n            if target.exists():\n                target.unlink()\n            return 1\n\n    print(\"All fixtures downloaded.\")\n    return 0\n\n\nif __name__ == \"__main__\":\n    sys.exit(main())\n"
  },
  {
    "path": "packages/brot/test/test_bpe.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Windtrap\nopen Brot\n\nlet test_bpe_basic () =\n  (* Create a simple vocabulary and merges *)\n  let vocab =\n    [\n      (\"h\", 0);\n      (\"e\", 1);\n      (\"l\", 2);\n      (\"o\", 3);\n      (\"ll\", 4);\n      (\"he\", 5);\n      (\"llo\", 6);\n      (\"hello\", 7);\n    ]\n  in\n\n  let merges =\n    [\n      (\"l\", \"l\");\n      (* rank 0: Merge 'l' + 'l' -> 'll' *)\n      (\"ll\", \"o\");\n      (* rank 1: Merge 'll' + 'o' -> 'llo' *)\n      (\"he\", \"llo\");\n      (* rank 2: Merge 'he' + 'llo' -> 'hello' *)\n    ]\n  in\n\n  let tokenizer = bpe ~vocab ~merges ~unk_token:\"<unk>\" () in\n\n  let encoding = encode tokenizer \"hello\" in\n  let tokens = Encoding.tokens encoding |> Array.to_list in\n\n  Printf.printf \"Tokenized 'hello': \";\n  List.iter (Printf.printf \"%s \") tokens;\n  Printf.printf \"\\n\";\n\n  equal ~msg:\"vocabulary size\" int 8 (vocab_size tokenizer)\n\nlet test_bpe_builder () =\n  let vocab = [ (\"a\", 0); (\"b\", 1); (\"ab\", 2) ] in\n  let merges = [ (\"a\", \"b\") ] in\n\n  let tokenizer = bpe ~vocab ~merges ~cache_capacity:50 () in\n\n  let encoding = encode tokenizer \"ab\" in\n  let tokens = Encoding.tokens encoding in\n  equal ~msg:\"single token for 'ab'\" int 1 (Array.length tokens)\n\nlet test_bpe_save_load () =\n  let vocab = [ (\"t\", 0); (\"e\", 1); (\"s\", 2); (\"test\", 3) ] in\n  let merges = [] in\n  (* No merges for simplicity *)\n\n  let tokenizer = bpe ~vocab ~merges () in\n\n  (* Save the model *)\n  let temp_dir = Filename.temp_dir \"bpe_test\" \"\" in\n  let files = save_model_files tokenizer ~folder:temp_dir () in\n\n  (* Load the model *)\n  let vocab_file = List.find (fun f -> Filename.check_suffix f \".json\") files in\n  let 
merges_file = List.find (fun f -> Filename.check_suffix f \".txt\") files in\n  let loaded_tokenizer =\n    from_model_file ~vocab:vocab_file ~merges:merges_file ()\n  in\n\n  (* Test that loaded tokenizer works the same *)\n  let original_tokens = encode tokenizer \"test\" |> Encoding.tokens in\n  let loaded_tokens = encode loaded_tokenizer \"test\" |> Encoding.tokens in\n\n  equal ~msg:\"same number of tokens\" int\n    (Array.length original_tokens)\n    (Array.length loaded_tokens);\n\n  (* Clean up *)\n  List.iter Sys.remove files;\n  Unix.rmdir temp_dir\n\nlet test_tokenizer_integration () =\n  (* Create a BPE tokenizer using the high-level API *)\n  let vocab =\n    [\n      (\"h\", 0); (\"e\", 1); (\"l\", 2); (\"o\", 3); (\"he\", 4); (\"llo\", 5); (\"hello\", 6);\n    ]\n  in\n  let merges = [ (\"h\", \"e\"); (\"he\", \"llo\") ] in\n  let tokenizer = bpe ~vocab ~merges () in\n\n  (* Test encoding *)\n  let tokens = encode tokenizer \"hello\" |> Encoding.tokens |> Array.to_list in\n\n  Printf.printf \"bpe result: \";\n  List.iter (Printf.printf \"%s \") tokens;\n  Printf.printf \"\\n\";\n\n  equal ~msg:\"tokenizer produces output\" bool true (List.length tokens > 0)\n\nlet () =\n  run \"BPE tests\"\n    [\n      group \"basic\"\n        [\n          test \"basic tokenization\" test_bpe_basic;\n          test \"builder pattern\" test_bpe_builder;\n          test \"save and load\" test_bpe_save_load;\n          test \"tokenizer integration\" test_tokenizer_integration;\n        ];\n    ]\n"
  },
  {
    "path": "packages/brot/test/test_encoding.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Windtrap\nopen Brot\n\nlet make_word_tokenizer ?(specials = []) () =\n  word_level ~pre:(Pre_tokenizer.whitespace ()) ~specials ()\n\nlet test_encode_simple () =\n  let tokenizer = add_tokens (make_word_tokenizer ()) [ \"hello\"; \"world\" ] in\n  let ids = encode tokenizer \"hello world hello\" |> Encoding.ids in\n  equal ~msg:\"encoded length\" int 3 (Array.length ids);\n  equal ~msg:\"repeated token same id\" bool true (ids.(0) = ids.(2))\n\nlet test_encode_with_vocab () =\n  let tokenizer = add_tokens (make_word_tokenizer ()) [ \"hello\"; \"world\" ] in\n  let ids = encode tokenizer \"hello world\" |> Encoding.ids |> Array.to_list in\n  equal ~msg:\"encoded with vocab\" (list int) [ 0; 1 ] ids\n\nlet test_encode_unknown_tokens () =\n  let tokenizer =\n    add_tokens\n      (make_word_tokenizer ~specials:[ special \"<unk>\" ] ())\n      [ \"hello\" ]\n  in\n  let ids =\n    encode tokenizer \"hello unknown world\" |> Encoding.ids |> Array.to_list\n  in\n  equal ~msg:\"encoded something\" bool true (List.length ids > 0)\n\nlet test_encode_empty () =\n  let tokenizer = make_word_tokenizer () in\n  let ids = encode tokenizer \"\" |> Encoding.ids |> Array.to_list in\n  equal ~msg:\"encode empty\" (list int) [] ids\n\nlet test_encode_batch_simple () =\n  let tokenizer =\n    add_tokens (make_word_tokenizer ()) [ \"hello\"; \"world\"; \"hi\"; \"there\" ]\n  in\n  let encodings = encode_batch tokenizer [ \"hello world\"; \"hi there\" ] in\n  equal ~msg:\"batch size\" int 2 (List.length encodings);\n  let first = List.hd encodings in\n  equal ~msg:\"first encoding has ids\" bool true\n    (Array.length (Encoding.ids first) > 0)\n\nlet test_encode_batch_with_padding () =\n  let tokenizer =\n    
add_tokens\n      (make_word_tokenizer ~specials:[ special \"<pad>\" ] ())\n      [ \"hello\"; \"world\"; \"hi\"; \"there\" ]\n  in\n  let padding =\n    {\n      length = `Fixed 5;\n      direction = `Right;\n      pad_id = None;\n      pad_type_id = None;\n      pad_token = Some \"<pad>\";\n    }\n  in\n  let encodings = encode_batch tokenizer ~padding [ \"hello\"; \"hi there\" ] in\n  let first = Encoding.ids (List.nth encodings 0) in\n  let second = Encoding.ids (List.nth encodings 1) in\n  equal ~msg:\"first padded length\" int 5 (Array.length first);\n  equal ~msg:\"second padded length\" int 5 (Array.length second)\n\nlet test_encode_batch_empty () =\n  let tokenizer = make_word_tokenizer () in\n  let encodings = encode_batch tokenizer [] in\n  equal ~msg:\"empty batch\" int 0 (List.length encodings)\n\nlet test_decode_simple () =\n  let tokenizer = add_tokens (make_word_tokenizer ()) [ \"hello\"; \"world\" ] in\n  let decoded = decode tokenizer [| 0; 1 |] in\n  equal ~msg:\"decoded text\" string \"hello world\" decoded\n\nlet test_decode_with_special () =\n  let tokenizer =\n    add_tokens\n      (make_word_tokenizer ~specials:[ special \"<bos>\"; special \"<eos>\" ] ())\n      [ \"hello\" ]\n  in\n  (* <bos>=0, <eos>=1, hello=2 *)\n  let decoded = decode tokenizer [| 0; 2; 1 |] in\n  equal ~msg:\"decoded with special\" string \"<bos> hello <eos>\" decoded\n\nlet test_decode_skip_special () =\n  let tokenizer =\n    add_tokens\n      (make_word_tokenizer ~specials:[ special \"<bos>\"; special \"<eos>\" ] ())\n      [ \"hello\" ]\n  in\n  let decoded = decode ~skip_special_tokens:true tokenizer [| 0; 2; 1 |] in\n  equal ~msg:\"decoded without special\" string \"hello\" decoded\n\nlet test_decode_batch () =\n  let tokenizer =\n    add_tokens (make_word_tokenizer ()) [ \"hello\"; \"world\"; \"hi\"; \"there\" ]\n  in\n  let decoded = decode_batch tokenizer [ [| 0; 1 |]; [| 2; 3 |] ] in\n  equal ~msg:\"decoded count\" int 2 (List.length decoded);\n  equal 
~msg:\"first decoded\" string \"hello world\" (List.nth decoded 0);\n  equal ~msg:\"second decoded\" string \"hi there\" (List.nth decoded 1)\n\nlet test_chars_model () =\n  let tokenizer = chars () in\n  let ids = encode tokenizer \"abc\" |> Encoding.ids |> Array.to_list in\n  equal ~msg:\"char ids\" (list int) [ 97; 98; 99 ] ids\n\nlet suite =\n  [\n    test \"encode simple\" test_encode_simple;\n    test \"encode with vocab\" test_encode_with_vocab;\n    test \"encode unknown tokens\" test_encode_unknown_tokens;\n    test \"encode empty\" test_encode_empty;\n    test \"batch simple\" test_encode_batch_simple;\n    test \"batch with padding\" test_encode_batch_with_padding;\n    test \"batch empty request\" test_encode_batch_empty;\n    test \"decode simple\" test_decode_simple;\n    test \"decode with special\" test_decode_with_special;\n    test \"decode skip special\" test_decode_skip_special;\n    test \"decode batch\" test_decode_batch;\n    test \"chars model\" test_chars_model;\n  ]\n\nlet () = run \"Encoding tests\" [ group \"encoding\" suite ]\n"
  },
  {
    "path": "packages/brot/test/test_hf_tokenizers.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Brot\nopen Windtrap\n\nlet candidate_roots () =\n  match Sys.getenv_opt \"DUNE_SOURCEROOT\" with\n  | Some root -> [ root; Sys.getcwd () ]\n  | None -> [ Sys.getcwd () ]\n\nlet locate_fixture model =\n  let relative =\n    Filename.concat \"brot/test/fixtures/hf\"\n      (Filename.concat model \"tokenizer.json\")\n  in\n  let rec search = function\n    | [] -> None\n    | root :: rest ->\n        let path = Filename.concat root relative in\n        if Sys.file_exists path then Some path else search rest\n  in\n  search (candidate_roots ())\n\nlet with_hf_tokenizer model f =\n  match locate_fixture model with\n  | None -> skip ()\n  | Some path -> (\n      match from_file path with\n      | Ok tok -> f tok\n      | Error msg -> failf \"Failed to load tokenizer %s: %s\" model msg)\n\nlet test_bert_base_uncased () =\n  with_hf_tokenizer \"bert-base-uncased\" (fun tok ->\n      let encoding = encode tok \"Hello world!\" in\n      let tokens = Encoding.tokens encoding |> Array.to_list in\n      equal ~msg:\"token sequence\" (list string)\n        [ \"[CLS]\"; \"hello\"; \"world\"; \"!\"; \"[SEP]\" ]\n        tokens;\n      let type_ids = Encoding.type_ids encoding |> Array.to_list in\n      equal ~msg:\"type ids\" (list int) [ 0; 0; 0; 0; 0 ] type_ids;\n      equal ~msg:\"has [MASK]\" bool true\n        (Option.is_some (token_to_id tok \"[MASK]\")))\n\nlet test_gpt2_small () =\n  with_hf_tokenizer \"gpt2\" (fun tok ->\n      let encoding = encode tok \"Hello world\" in\n      let ids = Encoding.ids encoding |> Array.to_list in\n      equal ~msg:\"ids\" (list int) [ 15496; 995 ] ids;\n      let roundtrip =\n        decode tok (Array.of_list ids) ~skip_special_tokens:true\n      in\n      equal 
~msg:\"decode\" string \"Hello world\" roundtrip)\n\nlet test_roberta_base () =\n  with_hf_tokenizer \"roberta-base\" (fun tok ->\n      let encoding = encode tok \"A quick test\" in\n      let tokens = Encoding.tokens encoding |> Array.to_list in\n      equal ~msg:\"tokens\" (list string)\n        [ \"<s>\"; \"A\"; \"Ġquick\"; \"Ġtest\"; \"</s>\" ]\n        tokens;\n      let attention = Encoding.attention_mask encoding |> Array.to_list in\n      equal ~msg:\"attention mask\" (list int) [ 1; 1; 1; 1; 1 ] attention)\n\nlet () =\n  run \"HF tokenizers\"\n    [\n      group \"bert-base-uncased\" [ test \"encode\" test_bert_base_uncased ];\n      group \"gpt2\" [ test \"encode\" test_gpt2_small ];\n      group \"roberta-base\" [ test \"encode\" test_roberta_base ];\n    ]\n"
  },
  {
    "path": "packages/brot/test/test_pretokenizers.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Windtrap\nmodule Pre = Brot.Pre_tokenizer\n\nlet check_tokenization name input expected =\n  equal ~msg:name (list (pair string (pair int int))) expected input\n\nlet check_strings name input expected =\n  equal ~msg:name (list string) expected (List.map fst input)\n\nlet test_byte_level_basic () =\n  let tokenizer = Pre.byte_level ~add_prefix_space:false ~use_regex:true () in\n\n  (* Test basic tokenization *)\n  let test_case text expected_pieces expected_offsets =\n    let result = Pre.pre_tokenize tokenizer text in\n    let offsets = List.map snd result in\n    check_strings\n      (Printf.sprintf \"ByteLevel pieces for %S\" text)\n      result expected_pieces;\n    equal\n      ~msg:(Printf.sprintf \"ByteLevel offsets for %S\" text)\n      (list (pair int int))\n      expected_offsets offsets\n  in\n\n  (* Basic words *)\n  test_case \"Hello\" [ \"Hello\" ] [ (0, 5) ];\n  test_case \"hello\" [ \"hello\" ] [ (0, 5) ];\n  test_case \"HELLO\" [ \"HELLO\" ] [ (0, 5) ];\n\n  (* Words with spaces - space becomes Ġ (0xC4 0xA0) *)\n  test_case \"Hello world\" [ \"Hello\"; \"\\196\\160world\" ] [ (0, 5); (5, 11) ];\n  test_case \"Hello  world\"\n    [ \"Hello\"; \"\\196\\160\"; \"\\196\\160world\" ]\n    [ (0, 5); (5, 6); (6, 12) ];\n\n  (* Leading/trailing spaces *)\n  test_case \" hello\" [ \"\\196\\160hello\" ] [ (0, 6) ];\n  test_case \"hello \" [ \"hello\"; \"\\196\\160\" ] [ (0, 5); (5, 6) ];\n  (* Note: Python produces ['Ġ', 'Ġhello', 'ĠĠ'] for \" hello \" *)\n  test_case \"  hello  \"\n    [ \"\\196\\160\"; \"\\196\\160hello\"; \"\\196\\160\\196\\160\" ]\n    [ (0, 1); (1, 7); (7, 9) ];\n\n  (* Contractions - should be kept as separate pieces *)\n  test_case \"'s\" [ \"'s\" ] [ (0, 2) ];\n 
 test_case \"'t\" [ \"'t\" ] [ (0, 2) ];\n  test_case \"'re\" [ \"'re\" ] [ (0, 3) ];\n  test_case \"'ve\" [ \"'ve\" ] [ (0, 3) ];\n  test_case \"'m\" [ \"'m\" ] [ (0, 2) ];\n  test_case \"'ll\" [ \"'ll\" ] [ (0, 3) ];\n  test_case \"'d\" [ \"'d\" ] [ (0, 2) ];\n\n  (* Words with contractions *)\n  test_case \"don't\" [ \"don\"; \"'t\" ] [ (0, 3); (3, 5) ];\n  test_case \"it's\" [ \"it\"; \"'s\" ] [ (0, 2); (2, 4) ];\n  test_case \"we're\" [ \"we\"; \"'re\" ] [ (0, 2); (2, 5) ];\n  test_case \"I'll\" [ \"I\"; \"'ll\" ] [ (0, 1); (1, 4) ];\n  test_case \"OpenAI's\" [ \"OpenAI\"; \"'s\" ] [ (0, 6); (6, 8) ]\n\nlet test_byte_level_prefix_space () =\n  (* Test with add_prefix_space=true *)\n  let tokenizer = Pre.byte_level ~add_prefix_space:true ~use_regex:true () in\n\n  let test_case text expected_pieces =\n    let result = Pre.pre_tokenize tokenizer text in\n    check_strings\n      (Printf.sprintf \"ByteLevel with prefix for %S\" text)\n      result expected_pieces\n  in\n\n  (* Should add space prefix when text doesn't start with space *)\n  test_case \"hello\" [ \"\\196\\160hello\" ];\n  test_case \"Hello world\" [ \"\\196\\160Hello\"; \"\\196\\160world\" ];\n\n  (* Should NOT add extra space when text already starts with space *)\n  test_case \" hello\" [ \"\\196\\160hello\" ];\n  test_case \"  hello\" [ \"\\196\\160\"; \"\\196\\160hello\" ]\n\nlet test_byte_level_special_chars () =\n  let tokenizer = Pre.byte_level ~add_prefix_space:false ~use_regex:true () in\n\n  let test_case text desc =\n    let result = Pre.pre_tokenize tokenizer text in\n    let pieces = List.map fst result in\n    (* Just verify it doesn't crash and produces something *)\n    equal\n      ~msg:(Printf.sprintf \"ByteLevel handles %s\" desc)\n      bool true\n      (List.length pieces > 0)\n  in\n\n  (* Punctuation *)\n  test_case \".\" \"period\";\n  test_case \"!\" \"exclamation\";\n  test_case \"?\" \"question\";\n  test_case \",\" \"comma\";\n  test_case \";\" \"semicolon\";\n  
test_case \":\" \"colon\";\n\n  (* Special characters *)\n  test_case \"@\" \"at sign\";\n  test_case \"#\" \"hash\";\n  test_case \"$\" \"dollar\";\n  test_case \"%\" \"percent\";\n  test_case \"^\" \"caret\";\n  test_case \"&\" \"ampersand\";\n  test_case \"*\" \"asterisk\";\n\n  (* Brackets and quotes *)\n  test_case \"()\" \"parentheses\";\n  test_case \"[]\" \"brackets\";\n  test_case \"{}\" \"braces\";\n  test_case \"\\\"\\\"\" \"quotes\";\n  test_case \"''\" \"single quotes\";\n\n  (* Numbers *)\n  test_case \"123\" \"numbers\";\n  test_case \"3.14\" \"decimal\";\n  test_case \"1,000\" \"number with comma\";\n\n  (* Mixed *)\n  test_case \"Hello, world!\" \"punctuated sentence\";\n  test_case \"@user #hashtag\" \"social media\";\n  test_case \"test@example.com\" \"email\";\n  test_case \"https://example.com\" \"URL\";\n  test_case \"function()\" \"function call\";\n  test_case \"a+b=c\" \"math expression\"\n\nlet test_byte_level_unicode () =\n  let tokenizer = Pre.byte_level ~add_prefix_space:false ~use_regex:true () in\n\n  let test_case text desc =\n    let result = Pre.pre_tokenize tokenizer text in\n    let pieces = List.map fst result in\n    (* Byte-level encoding should handle any Unicode by encoding bytes *)\n    equal\n      ~msg:(Printf.sprintf \"ByteLevel handles %s\" desc)\n      bool true\n      (List.length pieces > 0);\n    (* Check that we can reconstruct something (even if not identical due to\n       encoding) *)\n    let concatenated = String.concat \"\" pieces in\n    equal\n      ~msg:(Printf.sprintf \"ByteLevel produces non-empty output for %s\" desc)\n      bool true\n      (String.length concatenated > 0)\n  in\n\n  (* Common accented characters *)\n  test_case \"café\" \"accented e\";\n  test_case \"naïve\" \"diaeresis\";\n  test_case \"résumé\" \"French accents\";\n\n  (* Other languages *)\n  test_case \"你好\" \"Chinese\";\n  test_case \"こんにちは\" \"Japanese\";\n  test_case \"안녕하세요\" \"Korean\";\n  test_case \"Привет\" \"Russian\";\n  
test_case \"مرحبا\" \"Arabic\";\n\n  (* Emojis *)\n  test_case \"😀\" \"emoji\";\n  test_case \"👍🏻\" \"emoji with skin tone\";\n  test_case \"Hello 👋 World\" \"text with emoji\"\n\nlet test_byte_level_edge_cases () =\n  let tokenizer = Pre.byte_level ~add_prefix_space:false ~use_regex:true () in\n\n  (* Empty string *)\n  let result = Pre.pre_tokenize tokenizer \"\" in\n  equal ~msg:\"Empty string\" (list string) [] (List.map fst result);\n\n  (* Single character *)\n  let result = Pre.pre_tokenize tokenizer \"a\" in\n  check_strings \"Single char\" result [ \"a\" ];\n\n  (* Only spaces - Python produces ['ĠĠĠ'] all together *)\n  let result = Pre.pre_tokenize tokenizer \"   \" in\n  check_strings \"Only spaces\" result [ \"\\196\\160\\196\\160\\196\\160\" ];\n\n  (* Only punctuation - Python keeps '...' together *)\n  let result = Pre.pre_tokenize tokenizer \"...\" in\n  check_strings \"Only punctuation\" result [ \"...\" ];\n\n  (* Very long word *)\n  let long_word = String.make 100 'a' in\n  let result = Pre.pre_tokenize tokenizer long_word in\n  equal ~msg:\"Long word produces single token\" int 1 (List.length result);\n\n  (* Mixed whitespace *)\n  let result = Pre.pre_tokenize tokenizer \"hello\\tworld\\nfoo\\rbar\" in\n  equal ~msg:\"Handles tabs and newlines\" bool true (List.length result > 0)\n\nlet test_bert_pretokenizer () =\n  let test_case text expected =\n    let result = Pre.pre_tokenize (Pre.bert ()) text in\n    check_tokenization\n      (Printf.sprintf \"BERT tokenization of %S\" text)\n      result expected\n  in\n\n  (* Basic tokenization *)\n  test_case \"Hello world\" [ (\"Hello\", (0, 5)); (\"world\", (6, 11)) ];\n  test_case \"Hello, world!\"\n    [ (\"Hello\", (0, 5)); (\",\", (5, 6)); (\"world\", (7, 12)); (\"!\", (12, 13)) ];\n\n  (* Punctuation handling *)\n  test_case \"test.\" [ (\"test\", (0, 4)); (\".\", (4, 5)) ];\n  test_case \"a-b\" [ (\"a\", (0, 1)); (\"-\", (1, 2)); (\"b\", (2, 3)) ];\n  test_case \"it's\" [ (\"it\", (0, 2)); 
(\"'\", (2, 3)); (\"s\", (3, 4)) ];\n\n  (* Multiple spaces *)\n  test_case \"hello  world\" [ (\"hello\", (0, 5)); (\"world\", (7, 12)) ];\n\n  (* Unicode *)\n  test_case \"café\" [ (\"café\", (0, 5)) ];\n\n  (* Note: é is 2 bytes in UTF-8 *)\n\n  (* Empty and whitespace *)\n  test_case \"\" [];\n  test_case \"   \" []\n\nlet test_whitespace_pretokenizer () =\n  let test_case text expected =\n    let result = Pre.pre_tokenize (Pre.whitespace ()) text in\n    check_tokenization\n      (Printf.sprintf \"Whitespace tokenization of %S\" text)\n      result expected\n  in\n\n  (* Pattern is \\w+|[^\\w\\s]+ *)\n  test_case \"Hello world\" [ (\"Hello\", (0, 5)); (\"world\", (6, 11)) ];\n  test_case \"Hello, world!\"\n    [ (\"Hello\", (0, 5)); (\",\", (5, 6)); (\"world\", (7, 12)); (\"!\", (12, 13)) ];\n  test_case \"test_var\" [ (\"test_var\", (0, 8)) ];\n  (* underscore is part of \\w *)\n  test_case \"123abc\" [ (\"123abc\", (0, 6)) ];\n  (* numbers are part of \\w *)\n  test_case \"a+b=c\"\n    [\n      (\"a\", (0, 1)); (\"+\", (1, 2)); (\"b\", (2, 3)); (\"=\", (3, 4)); (\"c\", (4, 5));\n    ]\n\nlet test_whitespace_split () =\n  let test_case text expected =\n    let result = Pre.pre_tokenize (Pre.whitespace_split ()) text in\n    check_tokenization\n      (Printf.sprintf \"WhitespaceSplit of %S\" text)\n      result expected\n  in\n\n  (* Simple split on whitespace *)\n  test_case \"Hello world\" [ (\"Hello\", (0, 5)); (\"world\", (6, 11)) ];\n  test_case \"  Hello  world  \" [ (\"Hello\", (2, 7)); (\"world\", (9, 14)) ];\n  test_case \"one\\ttwo\\nthree\"\n    [ (\"one\", (0, 3)); (\"two\", (4, 7)); (\"three\", (8, 13)) ];\n  test_case \"\" [];\n  test_case \"   \" []\n\nlet test_punctuation_pretokenizer () =\n  (* Test different behaviors *)\n  let test_isolated text expected =\n    let tokenizer = Pre.punctuation ~behavior:`Isolated () in\n    let result = Pre.pre_tokenize tokenizer text in\n    check_tokenization\n      (Printf.sprintf \"Punctuation Isolated 
%S\" text)\n      result expected\n  in\n\n  let test_removed text expected =\n    let tokenizer = Pre.punctuation ~behavior:`Removed () in\n    let result = Pre.pre_tokenize tokenizer text in\n    check_tokenization\n      (Printf.sprintf \"Punctuation Removed %S\" text)\n      result expected\n  in\n\n  (* Isolated behavior *)\n  test_isolated \"Hello, world!\"\n    [ (\"Hello\", (0, 5)); (\",\", (5, 6)); (\" world\", (6, 12)); (\"!\", (12, 13)) ];\n\n  (* Removed behavior *)\n  test_removed \"Hello, world!\" [ (\"Hello\", (0, 5)); (\" world\", (6, 12)) ];\n\n  (* Multiple punctuation *)\n  test_isolated \"test...end\"\n    [\n      (\"test\", (0, 4));\n      (\".\", (4, 5));\n      (\".\", (5, 6));\n      (\".\", (6, 7));\n      (\"end\", (7, 10));\n    ];\n\n  (* Unicode punctuation *)\n  test_isolated \"test—end\" [ (\"test\", (0, 4)); (\"—\", (4, 7)); (\"end\", (7, 10)) ]\n(* em dash is 3 bytes *)\n\nlet test_digits_pretokenizer () =\n  let test_individual text expected =\n    let tokenizer = Pre.digits ~individual_digits:true () in\n    let result = Pre.pre_tokenize tokenizer text in\n    check_tokenization\n      (Printf.sprintf \"Digits individual %S\" text)\n      result expected\n  in\n\n  let test_grouped text expected =\n    let tokenizer = Pre.digits ~individual_digits:false () in\n    let result = Pre.pre_tokenize tokenizer text in\n    check_tokenization (Printf.sprintf \"Digits grouped %S\" text) result expected\n  in\n\n  (* Individual digits *)\n  test_individual \"123\" [ (\"1\", (0, 1)); (\"2\", (1, 2)); (\"3\", (2, 3)) ];\n  test_individual \"a1b2\"\n    [ (\"a\", (0, 1)); (\"1\", (1, 2)); (\"b\", (2, 3)); (\"2\", (3, 4)) ];\n\n  (* Grouped digits *)\n  test_grouped \"123\" [ (\"123\", (0, 3)) ];\n  test_grouped \"a123b456\"\n    [ (\"a\", (0, 1)); (\"123\", (1, 4)); (\"b\", (4, 5)); (\"456\", (5, 8)) ];\n  test_grouped \"3.14\" [ (\"3\", (0, 1)); (\".\", (1, 2)); (\"14\", (2, 4)) ]\n\nlet test_split_pretokenizer () =\n  let test_case pattern 
behavior text expected =\n    let tokenizer = Pre.split ~pattern ~behavior () in\n    let result = Pre.pre_tokenize tokenizer text in\n    check_tokenization\n      (Printf.sprintf \"Split pattern=%S behavior=%s text=%S\" pattern\n         (match behavior with\n         | `Isolated -> \"Isolated\"\n         | `Removed -> \"Removed\"\n         | `Merged_with_previous -> \"MergedPrev\"\n         | `Merged_with_next -> \"MergedNext\"\n         | `Contiguous -> \"Contiguous\")\n         text)\n      result expected\n  in\n\n  (* Test different behaviors *)\n  test_case \",\" `Isolated \"a,b,c\"\n    [\n      (\"a\", (0, 1)); (\",\", (1, 2)); (\"b\", (2, 3)); (\",\", (3, 4)); (\"c\", (4, 5));\n    ];\n\n  test_case \",\" `Removed \"a,b,c\" [ (\"a\", (0, 1)); (\"b\", (2, 3)); (\"c\", (4, 5)) ];\n\n  test_case \",\" `Merged_with_previous \"a,b,c\"\n    [ (\"a,\", (0, 2)); (\"b,\", (2, 4)); (\"c\", (4, 5)) ];\n\n  test_case \",\" `Merged_with_next \"a,b,c\"\n    [ (\"a\", (0, 1)); (\",b\", (1, 3)); (\",c\", (3, 5)) ];\n\n  (* Test with longer pattern *)\n  test_case \"::\" `Isolated \"a::b::c\"\n    [\n      (\"a\", (0, 1)); (\"::\", (1, 3)); (\"b\", (3, 4)); (\"::\", (4, 6)); (\"c\", (6, 7));\n    ]\n\nlet test_char_delimiter_split () =\n  let test_case delim text expected =\n    let result = Pre.pre_tokenize (Pre.char_delimiter delim) text in\n    check_tokenization\n      (Printf.sprintf \"CharDelimiterSplit delim='%c' text=%S\" delim text)\n      result expected\n  in\n\n  test_case ',' \"a,b,c\" [ (\"a\", (0, 1)); (\"b\", (2, 3)); (\"c\", (4, 5)) ];\n  test_case ' ' \"hello world\" [ (\"hello\", (0, 5)); (\"world\", (6, 11)) ];\n  test_case '|' \"one|two|three\"\n    [ (\"one\", (0, 3)); (\"two\", (4, 7)); (\"three\", (8, 13)) ];\n  test_case ',' \"\" [];\n  test_case ',' \",\" []\n\nlet test_sequence_pretokenizer () =\n  (* Combine whitespace split then punctuation isolation *)\n  let tokenizers =\n    [ Pre.whitespace_split (); Pre.punctuation ~behavior:`Isolated () 
]\n  in\n  let tokenizer = Pre.sequence tokenizers in\n\n  let test_case text expected =\n    let result = Pre.pre_tokenize tokenizer text in\n    check_tokenization (Printf.sprintf \"Sequence %S\" text) result expected\n  in\n\n  (* First splits on whitespace, then isolates punctuation in each piece *)\n  test_case \"Hello, world!\"\n    [ (\"Hello\", (0, 5)); (\",\", (5, 6)); (\"world\", (7, 12)); (\"!\", (12, 13)) ];\n\n  (* Multiple words and punctuation *)\n  test_case \"test. another, example!\"\n    [\n      (\"test\", (0, 4));\n      (\".\", (4, 5));\n      (\"another\", (6, 13));\n      (\",\", (13, 14));\n      (\"example\", (15, 22));\n      (\"!\", (22, 23));\n    ]\n\nlet test_fixed_length () =\n  let test_case length text expected =\n    let result = Pre.pre_tokenize (Pre.fixed_length length) text in\n    check_tokenization\n      (Printf.sprintf \"FixedLength %d %S\" length text)\n      result expected\n  in\n\n  test_case 3 \"abcdefghi\" [ (\"abc\", (0, 3)); (\"def\", (3, 6)); (\"ghi\", (6, 9)) ];\n  test_case 2 \"abcde\" [ (\"ab\", (0, 2)); (\"cd\", (2, 4)); (\"e\", (4, 5)) ];\n  test_case 5 \"hello\" [ (\"hello\", (0, 5)) ];\n  test_case 0 \"test\" [];\n  test_case 3 \"\" [];\n\n  (* With UTF-8 - counts characters not bytes *)\n  test_case 2 \"café\" [ (\"ca\", (0, 2)); (\"fé\", (2, 5)) ]\n(* é is 2 bytes *)\n\nlet test_unicode_scripts () =\n  let test_case text desc =\n    let tokenizer = Pre.unicode_scripts () in\n    let result = Pre.pre_tokenize tokenizer text in\n    (* Just verify it runs without crashing and produces something reasonable *)\n    equal\n      ~msg:(Printf.sprintf \"UnicodeScripts %s\" desc)\n      bool true\n      (List.length result >= 0)\n  in\n\n  test_case \"Hello world\" \"Latin text\";\n  test_case \"Hello世界\" \"Mixed Latin and Chinese\";\n  test_case \"Привет мир\" \"Cyrillic\";\n  test_case \"مرحبا بالعالم\" \"Arabic\";\n  test_case \"こんにちは世界\" \"Japanese\";\n  test_case \"\" \"Empty string\"\n\nlet 
test_metaspace_basic () =\n  let test_case text expected =\n    let result =\n      Pre.pre_tokenize\n        (Pre.metaspace ~replacement:'_' ~prepend_scheme:`Always ~split:true ())\n        text\n    in\n    check_strings (Printf.sprintf \"Metaspace %S\" text) result expected\n  in\n\n  test_case \"Hello world\" [ \"_Hello\"; \"_world\" ];\n  test_case \" starts with space\" [ \"_starts\"; \"_with\"; \"_space\" ];\n  test_case \"\" []\n\nlet () =\n  run \"Pre-tokenizers Test Suite\"\n    [\n      group \"byte_level\"\n        [\n          test \"ByteLevel basic\" test_byte_level_basic;\n          test \"ByteLevel prefix space\" test_byte_level_prefix_space;\n          test \"ByteLevel special chars\" test_byte_level_special_chars;\n          test \"ByteLevel unicode\" test_byte_level_unicode;\n          test \"ByteLevel edge cases\" test_byte_level_edge_cases;\n        ];\n      group \"bert\" [ test \"BERT tokenization\" test_bert_pretokenizer ];\n      group \"whitespace\"\n        [\n          test \"Whitespace tokenization\" test_whitespace_pretokenizer;\n          test \"WhitespaceSplit\" test_whitespace_split;\n        ];\n      group \"punctuation\"\n        [ test \"Punctuation behaviors\" test_punctuation_pretokenizer ];\n      group \"digits\" [ test \"Digits tokenization\" test_digits_pretokenizer ];\n      group \"split\"\n        [\n          test \"Split with patterns\" test_split_pretokenizer;\n          test \"CharDelimiterSplit\" test_char_delimiter_split;\n        ];\n      group \"sequence\"\n        [ test \"Sequence of tokenizers\" test_sequence_pretokenizer ];\n      group \"fixed_length\" [ test \"FixedLength chunks\" test_fixed_length ];\n      group \"unicode_scripts\" [ test \"UnicodeScripts\" test_unicode_scripts ];\n      group \"metaspace\" [ test \"Metaspace basic\" test_metaspace_basic ];\n    ]\n"
  },
  {
    "path": "packages/brot/test/test_processors.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Windtrap\nopen Brot\n\nlet make_encoding ~ids ~tokens ~type_id =\n  let len = Array.length ids in\n  Encoding.create ~ids:(Array.copy ids) ~type_ids:(Array.make len type_id)\n    ~tokens:(Array.copy tokens) ~words:(Array.make len None)\n    ~offsets:(Array.make len (0, 0))\n    ~special_tokens_mask:(Array.make len 0) ~attention_mask:(Array.make len 1)\n    ()\n\nlet json_obj pairs =\n  Jsont.Json.object' (List.map (fun (k, v) -> (Jsont.Json.name k, v)) pairs)\n\nlet test_template_multi_special () =\n  let processor =\n    Result.get_ok\n      (Post_processor.of_json\n         (json_obj\n            [\n              (\"type\", Jsont.Json.string \"TemplateProcessing\");\n              ( \"single\",\n                Jsont.Json.list\n                  [\n                    json_obj\n                      [\n                        ( \"SpecialToken\",\n                          json_obj\n                            [\n                              (\"id\", Jsont.Json.string \"<multi>\");\n                              (\"type_id\", Jsont.Json.int 2);\n                            ] );\n                      ];\n                    json_obj\n                      [\n                        ( \"Sequence\",\n                          json_obj\n                            [\n                              (\"id\", Jsont.Json.string \"A\");\n                              (\"type_id\", Jsont.Json.int 0);\n                            ] );\n                      ];\n                  ] );\n              (\"pair\", Jsont.Json.null ());\n              ( \"special_tokens\",\n                json_obj\n                  [\n                    ( \"<multi>\",\n                      json_obj\n                       
 [\n                          (\"id\", Jsont.Json.string \"<multi>\");\n                          ( \"ids\",\n                            Jsont.Json.list\n                              [ Jsont.Json.int 100; Jsont.Json.int 101 ] );\n                          ( \"tokens\",\n                            Jsont.Json.list\n                              [\n                                Jsont.Json.string \"<m1>\";\n                                Jsont.Json.string \"<m2>\";\n                              ] );\n                        ] );\n                  ] );\n            ]))\n  in\n  let base = make_encoding ~ids:[| 10 |] ~tokens:[| \"hello\" |] ~type_id:0 in\n  let encoding =\n    Post_processor.process processor base ~add_special_tokens:true\n  in\n  equal ~msg:\"ids\" (array int) [| 100; 101; 10 |] (Encoding.ids encoding);\n  equal ~msg:\"tokens\" (array string)\n    [| \"<m1>\"; \"<m2>\"; \"hello\" |]\n    (Encoding.tokens encoding);\n  equal ~msg:\"type ids\" (array int) [| 2; 2; 0 |] (Encoding.type_ids encoding);\n  equal ~msg:\"special mask\" (array int) [| 1; 1; 0 |]\n    (Encoding.special_tokens_mask encoding);\n  equal ~msg:\"attention mask\" (array int) [| 1; 1; 1 |]\n    (Encoding.attention_mask encoding);\n  equal ~msg:\"added tokens single\" int 2\n    (Post_processor.added_tokens processor ~is_pair:false)\n\nlet test_template_pair_type_ids () =\n  let processor =\n    Post_processor.template ~single:\"$A [SEP]\"\n      ~pair:\"[CLS]:0 $A:0 [SEP]:0 $B:3 [SEP]:3\"\n      ~special_tokens:[ (\"[CLS]\", 101); (\"[SEP]\", 102) ]\n      ()\n  in\n  let seq_a =\n    make_encoding ~ids:[| 10; 11 |] ~tokens:[| \"hello\"; \"world\" |] ~type_id:0\n  in\n  let seq_b = make_encoding ~ids:[| 20 |] ~tokens:[| \"pair\" |] ~type_id:1 in\n  let encoding =\n    Post_processor.process processor ~pair:seq_b seq_a ~add_special_tokens:true\n  in\n  equal ~msg:\"pair ids\" (array int)\n    [| 101; 10; 11; 102; 20; 102 |]\n    (Encoding.ids encoding);\n  equal ~msg:\"pair 
tokens\" (array string)\n    [| \"[CLS]\"; \"hello\"; \"world\"; \"[SEP]\"; \"pair\"; \"[SEP]\" |]\n    (Encoding.tokens encoding);\n  equal ~msg:\"pair type ids\" (array int) [| 0; 0; 0; 0; 3; 3 |]\n    (Encoding.type_ids encoding);\n  equal ~msg:\"pair special mask\" (array int) [| 1; 0; 0; 1; 0; 1 |]\n    (Encoding.special_tokens_mask encoding);\n  equal ~msg:\"added tokens pair\" int 3\n    (Post_processor.added_tokens processor ~is_pair:true);\n  let no_special =\n    Post_processor.process processor ~pair:seq_b seq_a ~add_special_tokens:false\n  in\n  equal ~msg:\"no-special ids\" (array int) (Encoding.ids seq_a)\n    (Encoding.ids no_special)\n\nlet () =\n  run \"Processors\"\n    [\n      group \"template\"\n        [\n          test \"multi-id special expansion\" test_template_multi_special;\n          test \"pair template semantics\" test_template_pair_type_ids;\n        ];\n    ]\n"
  },
  {
    "path": "packages/brot/test/test_tokenization.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Tokenization tests for brot *)\n\nopen Windtrap\nopen Brot\n\n(* Helper function to tokenize text *)\nlet tokenize_text text =\n  (* Pre-tokenize to get all unique tokens *)\n  let pre_tokens =\n    Pre_tokenizer.pre_tokenize (Pre_tokenizer.whitespace ()) text\n  in\n  let unique_tokens =\n    List.fold_left\n      (fun acc (tok, _) -> if List.mem tok acc then acc else tok :: acc)\n      [] pre_tokens\n    |> List.rev\n  in\n  (* Build vocabulary with all tokens from the text plus extras *)\n  let all_tokens =\n    unique_tokens\n    @\n    (* Add numbered words for long text test *)\n    List.init 1000 (fun i -> Printf.sprintf \"word%d\" i)\n  in\n  let vocab = List.mapi (fun i token -> (token, i)) all_tokens in\n\n  (* Create WordLevel tokenizer with the vocabulary *)\n  let tokenizer =\n    word_level ~vocab ~unk_token:\"<unk>\" ~pre:(Pre_tokenizer.whitespace ()) ()\n  in\n  encode tokenizer text |> Encoding.tokens |> Array.to_list\n\n(* Basic Tokenization Tests *)\n\nlet test_tokenize_words_simple () =\n  let tokens = tokenize_text \"Hello world!\" in\n  equal ~msg:\"simple words\" (list string) [ \"Hello\"; \"world\"; \"!\" ] tokens\n\nlet test_tokenize_words_punctuation () =\n  let tokens = tokenize_text \"don't stop, it's fun!\" in\n  equal ~msg:\"words with punctuation\" (list string)\n    [ \"don\"; \"'\"; \"t\"; \"stop\"; \",\"; \"it\"; \"'\"; \"s\"; \"fun\"; \"!\" ]\n    tokens\n\nlet test_tokenize_words_numbers () =\n  let tokens = tokenize_text \"I have 42 apples and 3.14 pies\" in\n  equal ~msg:\"words with numbers\" (list string)\n    [ \"I\"; \"have\"; \"42\"; \"apples\"; \"and\"; \"3\"; \".\"; \"14\"; \"pies\" ]\n    tokens\n\nlet test_tokenize_words_empty () =\n  let tokens = 
tokenize_text \"\" in\n  equal ~msg:\"empty string\" (list string) [] tokens\n\nlet test_tokenize_words_whitespace_only () =\n  let tokens = tokenize_text \"   \\t\\n  \" in\n  equal ~msg:\"whitespace only\" (list string) [] tokens\n\nlet test_tokenize_words_special_chars () =\n  let tokens = tokenize_text \"hello@world.com #ml $100 C++\" in\n  equal ~msg:\"special characters\" (list string)\n    [ \"hello\"; \"@\"; \"world\"; \".\"; \"com\"; \"#\"; \"ml\"; \"$\"; \"100\"; \"C\"; \"++\" ]\n    tokens\n\n(* Character Tokenization Tests *)\n\nlet tokenize_chars text =\n  let chars = ref [] in\n  String.iter (fun c -> chars := String.make 1 c :: !chars) text;\n  List.rev !chars\n\nlet test_tokenize_chars_ascii () =\n  let tokens = tokenize_chars \"Hi!\" in\n  equal ~msg:\"ASCII chars\" (list string) [ \"H\"; \"i\"; \"!\" ] tokens\n\nlet test_tokenize_chars_unicode () =\n  let tokens = tokenize_chars \"Hello 👋 世界\" in\n  (* Note: UTF-8 encoding means multi-byte chars may appear differently *)\n  equal ~msg:\"has tokens\" bool true (List.length tokens > 0)\n\nlet test_tokenize_chars_empty () =\n  let tokens = tokenize_chars \"\" in\n  equal ~msg:\"empty string chars\" (list string) [] tokens\n\n(* Pre-tokenizer Pattern Tests *)\n\nlet test_tokenize_regex_words () =\n  (* Use the helper that sets up vocabulary properly *)\n  let tokens = tokenize_text \"hello-world test_123\" in\n  equal ~msg:\"regex words\" (list string)\n    [ \"hello\"; \"-\"; \"world\"; \"test_123\" ]\n    tokens\n\nlet test_tokenize_regex_custom () =\n  (* Test with punctuation pre-tokenizer *)\n  let text = \"don't stop!\" in\n  let pre_tokens =\n    Pre_tokenizer.pre_tokenize (Pre_tokenizer.punctuation ()) text\n  in\n  let vocab = List.mapi (fun i (tok, _) -> (tok, i)) pre_tokens in\n  let tokenizer =\n    word_level ~vocab ~unk_token:\"<unk>\" ~pre:(Pre_tokenizer.punctuation ()) ()\n  in\n  let tokens = encode tokenizer text |> Encoding.tokens |> Array.to_list in\n  equal ~msg:\"has tokens\" 
bool true (List.length tokens > 0)\n\nlet test_tokenize_regex_no_match () =\n  let tokenizer = word_level () in\n  let tokens =\n    encode tokenizer \"no numbers here\" |> Encoding.tokens |> Array.to_list\n  in\n  equal ~msg:\"regex no match\" (list string) [] tokens\n\n(* Unigram Model Tests *)\n\n(* Round-trip lookups *)\nlet test_unigram_roundtrip () =\n  let tokens = [ \"hello\"; \"world\"; \"test\" ] in\n  let vocab = List.map (fun token -> (token, 0.0)) tokens in\n  let tokenizer = unigram ~vocab () in\n  List.iteri\n    (fun expected_id token ->\n      equal\n        ~msg:(Printf.sprintf \"token_to_id '%s'\" token)\n        (option int) (Some expected_id)\n        (token_to_id tokenizer token);\n      equal\n        ~msg:(Printf.sprintf \"id_to_token %d\" expected_id)\n        (option string) (Some token)\n        (id_to_token tokenizer expected_id))\n    tokens\n\n(* token_to_id - out of vocab *)\nlet test_unigram_token_to_id_oov () =\n  let tokenizer = unigram ~vocab:[ (\"hello\", 0.0); (\"world\", 0.0) ] () in\n  equal ~msg:\"token_to_id out-of-vocab\" (option int) None\n    (token_to_id tokenizer \"missing\")\n\n(* id_to_token - out of bounds *)\nlet test_unigram_id_to_token_oob () =\n  let tokenizer = unigram ~vocab:[ (\"hello\", 0.0); (\"world\", 0.0) ] () in\n  equal ~msg:\"id_to_token negative\" (option string) None\n    (id_to_token tokenizer (-1));\n  equal ~msg:\"id_to_token out of bounds\" (option string) None\n    (id_to_token tokenizer 10)\n\n(* Test empty vocabulary *)\nlet test_unigram_empty_vocab () =\n  let tokenizer = unigram ~vocab:[] () in\n  equal ~msg:\"empty vocab token_to_id\" (option int) None\n    (token_to_id tokenizer \"test\");\n  equal ~msg:\"empty vocab id_to_token\" (option string) None\n    (id_to_token tokenizer 0)\n\n(* Test special characters and unicode *)\nlet test_unigram_special_tokens () =\n  let tokenizer =\n    unigram\n      ~vocab:\n        [\n          (\"<unk>\", 0.0);\n          (\"<s>\", 0.0);\n          
(\"</s>\", 0.0);\n          (\"▁hello\", 0.0);\n          (\"世界\", 0.0);\n        ]\n      ()\n  in\n  equal ~msg:\"special <unk>\" (option int) (Some 0)\n    (token_to_id tokenizer \"<unk>\");\n  equal ~msg:\"special <s>\" (option int) (Some 1) (token_to_id tokenizer \"<s>\");\n  equal ~msg:\"sentencepiece token\" (option int) (Some 3)\n    (token_to_id tokenizer \"▁hello\");\n  equal ~msg:\"unicode token\" (option int) (Some 4) (token_to_id tokenizer \"世界\");\n  equal ~msg:\"id to unicode\" (option string) (Some \"世界\")\n    (id_to_token tokenizer 4)\n\nlet test_unigram_encode_sequence () =\n  let tokenizer = unigram ~vocab:[ (\"hello\", 0.0); (\"world\", 0.0) ] () in\n  let encoding = encode tokenizer \"hello world\" in\n  let tokens = Encoding.tokens encoding |> Array.to_list in\n  equal ~msg:\"unigram encode tokens\" (list string) [ \"hello\"; \"world\" ] tokens\n\nlet test_pad_token_set_at_construction () =\n  let vocab = [ (\"hello\", 0); (\"world\", 1); (\"<unk>\", 2); (\"[PAD]\", 3) ] in\n  let tokenizer =\n    word_level ~vocab ~unk_token:\"<unk>\"\n      ~pre:(Pre_tokenizer.whitespace ())\n      ~specials:[ special \"[PAD]\" ]\n      ~pad_token:\"[PAD]\" ()\n  in\n  equal ~msg:\"pad token set\" (option string) (Some \"[PAD]\")\n    (pad_token tokenizer);\n  let pad_id =\n    match token_to_id tokenizer \"[PAD]\" with\n    | Some id -> id\n    | None -> failwith \"missing pad id\"\n  in\n  let encoding =\n    encode tokenizer \"hello\"\n      ~padding:\n        {\n          length = `Fixed 3;\n          direction = `Right;\n          pad_id = None;\n          pad_type_id = None;\n          pad_token = None;\n        }\n  in\n  let ids = Encoding.ids encoding |> Array.to_list in\n  let pad_ids = List.tl ids in\n  equal ~msg:\"pad id matches configured token\" (list int) [ pad_id; pad_id ]\n    pad_ids\n\n(* Edge Cases *)\n\nlet test_tokenize_long_text () =\n  let text =\n    String.concat \" \" (List.init 1000 (fun i -> Printf.sprintf \"word%d\" i))\n  
in\n  let tokens = tokenize_text text in\n  equal ~msg:\"long text token count\" int 1000 (List.length tokens)\n\nlet test_tokenize_repeated_punctuation () =\n  let tokens = tokenize_text \"wow!!! really???\" in\n  equal ~msg:\"repeated punctuation\" (list string)\n    [ \"wow\"; \"!!!\"; \"really\"; \"???\" ]\n    tokens\n\nlet test_tokenize_mixed_whitespace () =\n  let tokens = tokenize_text \"hello\\tworld\\nthere\\r\\nfriend\" in\n  equal ~msg:\"mixed whitespace\" (list string)\n    [ \"hello\"; \"world\"; \"there\"; \"friend\" ]\n    tokens\n\n(* Test Suite *)\n\nlet tokenization_tests =\n  [\n    (* Words tokenization *)\n    test \"tokenize words simple\" test_tokenize_words_simple;\n    test \"tokenize words punctuation\" test_tokenize_words_punctuation;\n    test \"tokenize words numbers\" test_tokenize_words_numbers;\n    test \"tokenize words empty\" test_tokenize_words_empty;\n    test \"tokenize words whitespace only\" test_tokenize_words_whitespace_only;\n    test \"tokenize words special chars\" test_tokenize_words_special_chars;\n    (* Character tokenization *)\n    test \"tokenize chars ASCII\" test_tokenize_chars_ascii;\n    test \"tokenize chars unicode\" test_tokenize_chars_unicode;\n    test \"tokenize chars empty\" test_tokenize_chars_empty;\n    (* Regex tokenization *)\n    test \"tokenize regex words\" test_tokenize_regex_words;\n    test \"tokenize regex custom\" test_tokenize_regex_custom;\n    test \"tokenize regex no match\" test_tokenize_regex_no_match;\n    (* Edge cases *)\n    test \"tokenize long text\" test_tokenize_long_text;\n    test \"tokenize repeated punctuation\" test_tokenize_repeated_punctuation;\n    test \"tokenize mixed whitespace\" test_tokenize_mixed_whitespace;\n    (* Unigram model tests *)\n    test \"unigram roundtrip\" test_unigram_roundtrip;\n    test \"unigram token_to_id out-of-vocab\" test_unigram_token_to_id_oov;\n    test \"unigram id_to_token out-of-bounds\" test_unigram_id_to_token_oob;\n    test 
\"unigram empty vocab\" test_unigram_empty_vocab;\n    test \"unigram special tokens\" test_unigram_special_tokens;\n    test \"unigram encode sequence\" test_unigram_encode_sequence;\n    test \"pad token reassignment updates id\" test_pad_token_set_at_construction;\n  ]\n\nlet () = run \"brot tokenization\" [ group \"tokenization\" tokenization_tests ]\n"
  },
  {
    "path": "packages/brot/test/test_unicode.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Unicode processing tests for brot *)\n\nopen Windtrap\nopen Brot\n\n(* Normalization via public API *)\n\nlet test_lowercase_normalization () =\n  let text = \"HELLO WORLD\" in\n  let normalizer = Normalizer.lowercase in\n  let result = Normalizer.apply normalizer text in\n  equal ~msg:\"lowercase\" string \"hello world\" result\n\nlet test_strip_accents_normalization () =\n  let text = \"caf\\xC3\\xA9 na\\xC3\\xAFve r\\xC3\\xA9sum\\xC3\\xA9\" in\n  let normalizer =\n    Normalizer.sequence [ Normalizer.nfd; Normalizer.strip_accents ]\n  in\n  let result = Normalizer.apply normalizer text in\n  equal ~msg:\"strip accents\" string \"cafe naive resume\" result\n\nlet test_normalization_sequence () =\n  let text = \"  HELLO  World  \" in\n  let normalizer =\n    Normalizer.sequence\n      [\n        Normalizer.lowercase;\n        Normalizer.strip ();\n        Normalizer.replace ~pattern:\"\\\\s+\" ~replacement:\" \";\n      ]\n  in\n  let result = Normalizer.apply normalizer text in\n  equal ~msg:\"sequence\" string \"hello world\" result\n\n(* Integration with Tokenizer *)\n\nlet test_tokenize_with_normalization () =\n  let text = \"HELLO   WORLD!\" in\n  let normalizer =\n    Normalizer.sequence\n      [\n        Normalizer.lowercase;\n        Normalizer.replace ~pattern:\"\\\\s+\" ~replacement:\" \";\n      ]\n  in\n  let tokenizer =\n    word_level ~normalizer ~pre:(Pre_tokenizer.whitespace ()) ()\n  in\n  let tokenizer = add_tokens tokenizer [ \"hello\"; \"world\"; \"!\" ] in\n  let tokens = encode tokenizer text |> Encoding.tokens |> Array.to_list in\n  equal ~msg:\"normalized tokenization\" (list string) [ \"hello\"; \"world\"; \"!\" ]\n    tokens\n\nlet test_tokenize_unicode_words () =\n  
let text = \"café résumé naïve\" in\n  let tokenizer = word_level ~pre:(Pre_tokenizer.whitespace ()) () in\n  let tokenizer = add_tokens tokenizer [ \"café\"; \"résumé\"; \"naïve\" ] in\n  let tokens = encode tokenizer text |> Encoding.tokens |> Array.to_list in\n  equal ~msg:\"tokenized unicode\" bool true (List.length tokens > 0)\n\nlet test_malformed_unicode () =\n  let text = \"Hello\" ^ String.make 1 '\\xFF' ^ String.make 1 '\\xFE' ^ \"World\" in\n  let tokenizer = chars () in\n  let tokens = encode tokenizer text |> Encoding.tokens |> Array.to_list in\n  equal ~msg:\"handled malformed\" bool true (List.length tokens > 0)\n\n(* Test Suite *)\n\nlet unicode_tests =\n  [\n    (* Normalization *)\n    test \"lowercase normalization\" test_lowercase_normalization;\n    test \"strip accents normalization\" test_strip_accents_normalization;\n    test \"normalization sequence\" test_normalization_sequence;\n    (* Integration *)\n    test \"tokenize with normalization\" test_tokenize_with_normalization;\n    test \"tokenize unicode words\" test_tokenize_unicode_words;\n    (* Error handling *)\n    test \"malformed unicode\" test_malformed_unicode;\n  ]\n\nlet () = run \"brot unicode\" [ group \"unicode\" unicode_tests ]\n"
  },
  {
    "path": "packages/brot/test/test_vocab.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Windtrap\nopen Brot\n\nlet test_vocab_create_empty () =\n  let tokenizer = word_level () in\n  let vocab = vocab tokenizer in\n  equal ~msg:\"empty vocab size\" int 0 (List.length vocab)\n\nlet test_vocab_with_tokenizer () =\n  let tokenizer = word_level () in\n  let vocab = vocab tokenizer in\n  equal ~msg:\"initial vocab size\" int 0 (List.length vocab)\n\nlet test_vocab_add_tokens () =\n  let tokenizer =\n    add_tokens\n      (word_level ~specials:[ special \"<pad>\"; special \"<unk>\" ] ())\n      [ \"hello\"; \"world\" ]\n  in\n  let vocab_size = vocab_size tokenizer in\n  equal ~msg:\"vocab size increased\" bool true (vocab_size >= 2)\n\nlet test_vocab_encode_decode () =\n  let tokenizer =\n    add_tokens\n      (word_level ~pre:(Pre_tokenizer.whitespace ()) ())\n      [ \"hello\"; \"world\" ]\n  in\n  let ids = encode tokenizer \"hello world\" |> Encoding.ids in\n  equal ~msg:\"encoded ids\" bool true (Array.length ids > 0);\n  let decoded = decode tokenizer ids in\n  equal ~msg:\"decoded text\" string \"hello world\" decoded\n\nlet test_vocab_batch_encode () =\n  let tokenizer = add_tokens (Brot.word_level ()) [ \"hello\"; \"world\" ] in\n  let encodings = encode_batch tokenizer [ \"hello\"; \"world\" ] in\n  equal ~msg:\"batch size\" int 2 (List.length encodings)\n\nlet test_vocab_special_tokens () =\n  let tokenizer =\n    add_tokens\n      (word_level ~specials:[ special \"[CLS]\"; special \"[SEP]\" ] ())\n      [ \"test\" ]\n  in\n  let tokens =\n    encode ~add_special_tokens:true tokenizer \"test\" |> Encoding.tokens\n  in\n  equal ~msg:\"tokens emitted\" bool true (Array.length tokens > 0)\n\nlet test_vocab_save_load () =\n  let tokenizer =\n    add_tokens (Brot.word_level ()) 
[ \"hello\"; \"world\"; \"test\" ]\n  in\n  let json = to_json tokenizer in\n  match from_json json with\n  | Error msg -> failf \"failed to round-trip tokenizer: %s\" msg\n  | Ok reloaded ->\n      let original_vocab = vocab tokenizer in\n      let loaded_vocab = vocab reloaded in\n      equal ~msg:\"vocab size matches\" int\n        (List.length original_vocab)\n        (List.length loaded_vocab);\n      List.iter\n        (fun (token, _) ->\n          equal\n            ~msg:(Printf.sprintf \"token %s preserved\" token)\n            bool true\n            (Option.is_some (token_to_id reloaded token)))\n        original_vocab\n\nlet suite =\n  [\n    test \"create empty\" test_vocab_create_empty;\n    test \"with tokenizer\" test_vocab_with_tokenizer;\n    test \"add tokens\" test_vocab_add_tokens;\n    test \"encode decode\" test_vocab_encode_decode;\n    test \"batch encode\" test_vocab_batch_encode;\n    test \"special tokens\" test_vocab_special_tokens;\n    test \"save load\" test_vocab_save_load;\n  ]\n\nlet () = run \"Vocabulary tests\" [ group \"vocab\" suite ]\n"
  },
  {
    "path": "packages/brot/test/test_wordpiece.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Windtrap\nopen Brot\n\nlet test_wordpiece_basic () =\n  (* Create a simple vocabulary *)\n  let vocab =\n    [\n      (\"[UNK]\", 0);\n      (\"hello\", 1);\n      (\"world\", 2);\n      (\"##llo\", 3);\n      (\"##rld\", 4);\n      (\"he\", 5);\n      (\"wo\", 6);\n    ]\n  in\n\n  let tokenizer =\n    wordpiece ~vocab ~unk_token:\"[UNK]\" ~continuing_subword_prefix:\"##\" ()\n  in\n\n  (* Test tokenizing a known word *)\n  let encoding = encode tokenizer \"hello\" in\n  let tokens = Encoding.tokens encoding in\n  equal ~msg:\"single token for 'hello'\" int 1 (Array.length tokens);\n  equal ~msg:\"token value\" string \"hello\" tokens.(0);\n\n  Printf.printf \"Tokenized 'hello': \";\n  Array.iter (Printf.printf \"%s \") tokens;\n  Printf.printf \"\\n\";\n\n  equal ~msg:\"vocabulary size\" int 7 (vocab_size tokenizer)\n\nlet test_wordpiece_subwords () =\n  (* Create vocabulary with subword pieces *)\n  let vocab =\n    [\n      (\"[UNK]\", 0);\n      (\"un\", 1);\n      (\"##able\", 2);\n      (\"##happy\", 3);\n      (\"play\", 4);\n      (\"##ing\", 5);\n      (\"##ed\", 6);\n    ]\n  in\n\n  let tokenizer = wordpiece ~vocab ~unk_token:\"[UNK]\" () in\n\n  (* Test word that can be split into subwords *)\n  let encoding = encode tokenizer \"playing\" in\n  let tokens = Encoding.tokens encoding in\n  Printf.printf \"Tokenized 'playing': \";\n  Array.iter (Printf.printf \"%s \") tokens;\n  Printf.printf \"\\n\";\n\n  equal ~msg:\"should split into subwords\" int 2 (Array.length tokens);\n  equal ~msg:\"first token\" string \"play\" tokens.(0);\n  equal ~msg:\"second token\" string \"##ing\" tokens.(1)\n\nlet test_wordpiece_unknown () =\n  (* Create minimal vocabulary *)\n  let vocab = [ 
(\"[UNK]\", 0); (\"hello\", 1) ] in\n\n  let tokenizer = wordpiece ~vocab ~unk_token:\"[UNK]\" () in\n\n  (* Test unknown word *)\n  let encoding = encode tokenizer \"goodbye\" in\n  let tokens = Encoding.tokens encoding in\n  equal ~msg:\"unknown word becomes single token\" int 1 (Array.length tokens);\n  equal ~msg:\"unknown token\" string \"[UNK]\" tokens.(0)\n\nlet test_wordpiece_max_chars () =\n  (* Create vocabulary *)\n  let vocab = [ (\"[UNK]\", 0); (\"test\", 1) ] in\n\n  let tokenizer =\n    wordpiece ~vocab ~unk_token:\"[UNK]\" ~max_input_chars_per_word:5 ()\n  in\n\n  (* Test word exceeding max chars *)\n  let long_word = String.make 10 'a' in\n  let encoding = encode tokenizer long_word in\n  let tokens = Encoding.tokens encoding in\n  equal ~msg:\"long word becomes unknown\" int 1 (Array.length tokens);\n  equal ~msg:\"unknown token\" string \"[UNK]\" tokens.(0)\n\nlet test_wordpiece_save_load () =\n  (* Create vocabulary *)\n  let vocab =\n    [\n      (\"[PAD]\", 0);\n      (\"[UNK]\", 1);\n      (\"[CLS]\", 2);\n      (\"[SEP]\", 3);\n      (\"hello\", 4);\n      (\"world\", 5);\n    ]\n  in\n\n  let tokenizer = wordpiece ~vocab ~unk_token:\"[UNK]\" () in\n\n  (* Save the model *)\n  let temp_dir = Filename.temp_dir \"wordpiece_test\" \"\" in\n  let files = save_model_files tokenizer ~folder:temp_dir () in\n\n  (* Load the model *)\n  let vocab_file = List.find (fun f -> Filename.check_suffix f \".txt\") files in\n  let loaded_tokenizer = from_model_file ~vocab:vocab_file () in\n\n  (* Test that loaded tokenizer works the same *)\n  let original_tokens = encode tokenizer \"hello\" |> Encoding.tokens in\n  let loaded_tokens = encode loaded_tokenizer \"hello\" |> Encoding.tokens in\n\n  equal ~msg:\"same number of tokens\" int\n    (Array.length original_tokens)\n    (Array.length loaded_tokens);\n\n  (* Clean up *)\n  List.iter Sys.remove files;\n  Unix.rmdir temp_dir\n\nlet test_tokenizer_integration () =\n  (* Create a WordPiece tokenizer using 
the high-level API *)\n  let vocab =\n    [\n      (\"[PAD]\", 0);\n      (\"[UNK]\", 1);\n      (\"[CLS]\", 2);\n      (\"[SEP]\", 3);\n      (\"hello\", 4);\n      (\"world\", 5);\n      (\"##ing\", 6);\n    ]\n  in\n  let tokenizer = wordpiece ~vocab ~unk_token:\"[UNK]\" () in\n\n  (* Test encoding *)\n  let tokens = encode tokenizer \"hello\" |> Encoding.tokens |> Array.to_list in\n\n  Printf.printf \"wordpiece result: \";\n  List.iter (Printf.printf \"%s \") tokens;\n  Printf.printf \"\\n\";\n\n  equal ~msg:\"tokenizer produces output\" bool true (List.length tokens > 0)\n\nlet test_wordpiece_greedy_matching () =\n  (* Test the greedy longest-match-first algorithm *)\n  let vocab =\n    [\n      (\"[UNK]\", 0);\n      (\"un\", 1);\n      (\"able\", 2);\n      (\"unable\", 3);\n      (* Longer match should be preferred *)\n      (\"##able\", 4);\n    ]\n  in\n\n  let tokenizer = wordpiece ~vocab ~unk_token:\"[UNK]\" () in\n\n  (* Should match \"unable\" as a single token, not \"un\" + \"##able\" *)\n  let encoding = encode tokenizer \"unable\" in\n  let tokens = Encoding.tokens encoding in\n  equal ~msg:\"greedy match finds longest token\" int 1 (Array.length tokens);\n  equal ~msg:\"matched full word\" string \"unable\" tokens.(0)\n\nlet () =\n  run \"WordPiece tests\"\n    [\n      group \"basic\"\n        [\n          test \"basic tokenization\" test_wordpiece_basic;\n          test \"subword tokenization\" test_wordpiece_subwords;\n          test \"unknown tokens\" test_wordpiece_unknown;\n          test \"max input chars\" test_wordpiece_max_chars;\n          test \"save and load\" test_wordpiece_save_load;\n          test \"tokenizer integration\" test_tokenizer_integration;\n          test \"greedy matching\" test_wordpiece_greedy_matching;\n        ];\n    ]\n"
  },
  {
    "path": "packages/dune",
    "content": "(dirs :standard \\ nx-oxcaml)\n"
  },
  {
    "path": "packages/fehu/README.md",
    "content": "# Fehu\n\nReinforcement learning environment toolkit for OCaml, built on [Rune](../rune/)\n\nFehu provides type-safe environments, composable wrappers, trajectory\ncollection, replay buffers, GAE computation, policy evaluation, and\nvectorized environments. It follows the Gymnasium interface pattern:\nenvironments expose `reset` and `step` with typed observation and action\nspaces.\n\n## Quick Start\n\nCreate an environment, run a random policy, and evaluate:\n\n```ocaml\nopen Fehu\n\nlet () =\n  let rng = Rune.Rng.key 42 in\n  let env = Fehu_envs.Cartpole.make ~rng () in\n\n  (* Run one episode *)\n  let _obs, _info = Env.reset env () in\n  let done_ = ref false in\n  let total_reward = ref 0.0 in\n  while not !done_ do\n    let act, _ = Space.sample (Env.action_space env)\n      ~rng:(Env.take_rng env) in\n    let s = Env.step env act in\n    total_reward := !total_reward +. s.reward;\n    done_ := s.terminated || s.truncated\n  done;\n  Printf.printf \"Episode reward: %.0f\\n\" !total_reward;\n\n  (* Evaluate over 10 episodes *)\n  let stats = Eval.run env\n    ~policy:(fun _obs ->\n      let act, _ = Space.sample (Env.action_space env)\n        ~rng:(Env.take_rng env) in act)\n    ~n_episodes:10 ()\n  in\n  Printf.printf \"Mean reward: %.1f (std: %.1f)\\n\"\n    stats.mean_reward stats.std_reward\n```\n\n## Features\n\n- **Environments**: typed `('obs, 'act, 'render) Env.t` with lifecycle enforcement (reset before step, auto-guard on terminal states)\n- **Spaces**: Discrete, Box, Multi_binary, Multi_discrete, Tuple, Dict, Sequence, Text with sampling, validation, and serialization\n- **Wrappers**: `map_observation`, `map_action`, `map_reward`, `clip_action`, `clip_observation`, `time_limit`, and custom wrappers via `Env.wrap`\n- **Trajectory collection**: `Collect.rollout` and `Collect.episodes` in structure-of-arrays form with automatic episode resets\n- **Replay buffers**: fixed-capacity circular buffer with uniform random sampling 
(\`Buffer.sample\`, \`Buffer.sample_arrays\`)\n- **GAE**: generalized advantage estimation with proper terminated/truncated handling (\`Gae.compute\`, \`Gae.returns\`)\n- **Evaluation**: \`Eval.run\` computes mean/std reward and episode length over multiple episodes\n- **Vectorized environments**: \`Vec_env.create\` runs multiple environments with batched step and auto-reset\n- **Rendering**: \`Render.image\` and \`Render.rollout\` for frame capture, \`Env.on_render\` for recording\n- **Built-in environments**: CartPole-v1, MountainCar-v0, GridWorld, RandomWalk\n\n## Libraries\n\n| Library | opam package | Description |\n|---------|-------------|-------------|\n| \`fehu\` | \`fehu\` | Core: environments, spaces, wrappers, collection, buffers, GAE, evaluation |\n| \`fehu-envs\` | \`fehu.envs\` | Built-in environments (CartPole, MountainCar, GridWorld, RandomWalk) |\n\n## Built-in Environments\n\n| Environment | Observation | Actions | Reward | Termination |\n|-------------|------------|---------|--------|-------------|\n| CartPole | Box [4] (x, v, θ, ω) | Discrete 2 | +1.0 per step | Pole > ±12° or cart > ±2.4, truncated at 500 |\n| MountainCar | Box [2] (position, velocity) | Discrete 3 | −1.0 per step | Position ≥ 0.5 with v ≥ 0, truncated at 200 |\n| GridWorld | Multi_discrete [5; 5] | Discrete 4 | +10 at goal, −1 otherwise | Reach (4,4), truncated at 200 |\n| RandomWalk | Box [1] | Discrete 2 | −\\|position\\| | None, truncated at 200 |\n\n## Contributing\n\nSee the [Raven monorepo README](../README.md) for guidelines.\n\n## License\n\nISC License. See [LICENSE](../LICENSE) for details.\n"
  },
  {
    "path": "packages/fehu/bench/bench_fehu.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nmodule Gae = Fehu.Gae\nmodule Buffer = Fehu.Buffer\n\nlet gamma = 0.99\nlet lambda = 0.95\n\nlet make_arrays n =\n  let rewards = Array.init n (fun i -> Float.of_int (i mod 5)) in\n  let values = Array.init n (fun i -> Float.of_int (i mod 10) *. 0.1) in\n  let terminated = Array.init n (fun i -> i mod 50 = 49) in\n  let truncated = Array.init n (fun i -> i mod 100 = 99) in\n  let next_values =\n    Array.init n (fun i -> Float.of_int ((i + 1) mod 10) *. 0.1)\n  in\n  (rewards, values, terminated, truncated, next_values)\n\nlet gae_benchmarks () =\n  let sizes = [ (\"256\", 256); (\"1024\", 1024); (\"4096\", 4096) ] in\n  let benches = ref [] in\n  List.iter\n    (fun (label, n) ->\n      let rewards, values, terminated, truncated, next_values = make_arrays n in\n      benches :=\n        Thumper.bench (Printf.sprintf \"compute n=%s\" label) (fun () ->\n            Gae.compute ~rewards ~values ~terminated ~truncated ~next_values\n              ~gamma ~lambda)\n        :: !benches)\n    sizes;\n  List.rev !benches\n\nlet gae_from_values_benchmarks () =\n  let sizes = [ (\"256\", 256); (\"1024\", 1024); (\"4096\", 4096) ] in\n  let benches = ref [] in\n  List.iter\n    (fun (label, n) ->\n      let rewards, values, terminated, truncated, _ = make_arrays n in\n      benches :=\n        Thumper.bench (Printf.sprintf \"compute_from_values n=%s\" label)\n          (fun () ->\n            Gae.compute_from_values ~rewards ~values ~terminated ~truncated\n              ~last_value:0.0 ~gamma ~lambda)\n        :: !benches)\n    sizes;\n  List.rev !benches\n\nlet returns_benchmarks () =\n  let sizes = [ (\"256\", 256); (\"1024\", 1024); (\"4096\", 4096) ] in\n  let benches = ref [] in\n  List.iter\n    (fun 
(label, n) ->\n      let rewards, _, terminated, truncated, _ = make_arrays n in\n      benches :=\n        Thumper.bench (Printf.sprintf \"returns n=%s\" label) (fun () ->\n            Gae.returns ~rewards ~terminated ~truncated ~gamma)\n        :: !benches)\n    sizes;\n  List.rev !benches\n\nlet normalize_benchmarks () =\n  let sizes = [ (\"256\", 256); (\"1024\", 1024); (\"4096\", 4096) ] in\n  let benches = ref [] in\n  List.iter\n    (fun (label, n) ->\n      let arr = Array.init n (fun i -> Float.of_int i *. 0.01) in\n      benches :=\n        Thumper.bench (Printf.sprintf \"normalize n=%s\" label) (fun () ->\n            Gae.normalize arr)\n        :: !benches)\n    sizes;\n  List.rev !benches\n\nlet fill_buffer capacity =\n  let buf : (float, float) Buffer.t = Buffer.create ~capacity in\n  for i = 0 to capacity - 1 do\n    Buffer.add buf\n      {\n        Buffer.observation = Float.of_int i;\n        action = Float.of_int (i mod 4);\n        reward = Float.of_int (i mod 10) *. 0.1;\n        next_observation = Float.of_int (i + 1);\n        terminated = i mod 50 = 49;\n        truncated = i mod 100 = 99;\n      }\n  done;\n  buf\n\nlet buffer_add_benchmarks () =\n  let capacity = 10000 in\n  let buf = fill_buffer capacity in\n  let tr =\n    {\n      Buffer.observation = 0.0;\n      action = 0.0;\n      reward = 1.0;\n      next_observation = 1.0;\n      terminated = false;\n      truncated = false;\n    }\n  in\n  [ Thumper.bench \"add (full buffer, cap=10000)\" (fun () -> Buffer.add buf tr) ]\n\nlet buffer_create_benchmarks () =\n  let sizes = [ (\"100\", 100); (\"1000\", 1000); (\"10000\", 10000) ] in\n  List.map\n    (fun (label, n) ->\n      Thumper.bench (Printf.sprintf \"create+fill cap=%s\" label) (fun () ->\n          fill_buffer n))\n    sizes\n\nlet build_benchmarks () =\n  [\n    Thumper.group \"GAE\" (gae_benchmarks ());\n    Thumper.group \"GAE from values\" (gae_from_values_benchmarks ());\n    Thumper.group \"Returns\" (returns_benchmarks 
());\n    Thumper.group \"Normalize\" (normalize_benchmarks ());\n    Thumper.group \"Buffer add\" (buffer_add_benchmarks ());\n    Thumper.group \"Buffer create\" (buffer_create_benchmarks ());\n  ]\n\nlet () =\n  let benchmarks = build_benchmarks () in\n  Thumper.run \"fehu\" benchmarks\n"
  },
  {
    "path": "packages/fehu/bench/dune",
    "content": "(executable\n (name bench_fehu)\n (libraries nx fehu thumper))\n\n(rule\n (alias runtest)\n (action\n  (progn\n   (run %{exe:bench_fehu.exe} -q)\n   (diff? fehu.thumper fehu.thumper.corrected))))\n"
  },
  {
    "path": "packages/fehu/bench/fehu.thumper",
    "content": "# thumper baseline\n# version: 1\n# suite_name: fehu\n# host: 1480401c3b76ed18\n# cpu: Apple M1 Max\n# ocaml: 5.4.1\n# git: 31747323\n# dirty: true\n# command: /Users/tmattio/Workspace/raven/_build/default/packages/fehu/bench/bench_fehu.exe --bless --quick\n\nbuffer_add/add__full_buffer__cap_10000_\talloc_words\t0.000000e+00\t0.000000e+00\t0.000000e+00\tinf\t5\t0\nbuffer_add/add__full_buffer__cap_10000_\tcpu_time\t1.226649e-08\t1.223096e-08\t1.229882e-08\t2.766164e-03\t5\t0\nbuffer_add/add__full_buffer__cap_10000_\twall_time\t1.227909e-08\t1.224626e-08\t1.230893e-08\t2.552074e-03\t5\t0\nbuffer_create/create_fill_cap_100\talloc_words\t2.116000e+03\t2.116000e+03\t2.116000e+03\t0.000000e+00\t5\t0\nbuffer_create/create_fill_cap_100\tcpu_time\t1.430951e-06\t1.419749e-06\t1.441342e-06\t7.544736e-03\t5\t0\nbuffer_create/create_fill_cap_100\twall_time\t1.433376e-06\t1.418645e-06\t1.445267e-06\t9.286558e-03\t5\t1\nbuffer_create/create_fill_cap_1000\talloc_words\t2.101600e+04\t2.101600e+04\t2.101600e+04\t0.000000e+00\t5\t0\nbuffer_create/create_fill_cap_1000\tcpu_time\t1.815813e-05\t1.805756e-05\t1.829634e-05\t6.575071e-03\t5\t1\nbuffer_create/create_fill_cap_1000\twall_time\t1.817186e-05\t1.806923e-05\t1.829684e-05\t6.262546e-03\t5\t1\nbuffer_create/create_fill_cap_10000\talloc_words\t2.100160e+05\t2.100160e+05\t2.100160e+05\t0.000000e+00\t5\t0\nbuffer_create/create_fill_cap_10000\tcpu_time\t1.670111e-04\t1.660089e-04\t1.677894e-04\t5.330490e-03\t5\t0\nbuffer_create/create_fill_cap_10000\twall_time\t1.673292e-04\t1.663936e-04\t1.679940e-04\t4.782197e-03\t5\t1\ngae/compute_n_1024\talloc_words\t2.053000e+03\t2.053000e+03\t2.053000e+03\t0.000000e+00\t5\t0\ngae/compute_n_1024\tcpu_time\t4.432903e-06\t4.388681e-06\t4.472044e-06\t9.402727e-03\t5\t0\ngae/compute_n_1024\twall_time\t4.434699e-06\t4.389286e-06\t4.487822e-06\t1.110972e-02\t5\t0\ngae/compute_n_256\talloc_words\t5.170000e+02\t5.170000e+02\t5.170000e+02\t0.000000e+00\t5\t0\ngae/compute_n_256\tcpu_time\t8.8
99175e-07\t8.873495e-07\t8.928650e-07\t3.098850e-03\t5\t1\ngae/compute_n_256\twall_time\t8.917142e-07\t8.886773e-07\t8.955820e-07\t3.871563e-03\t5\t1\ngae/compute_n_4096\talloc_words\t8.197000e+03\t8.197000e+03\t8.197000e+03\t0.000000e+00\t5\t0\ngae/compute_n_4096\tcpu_time\t1.953902e-05\t1.938466e-05\t1.977415e-05\t9.967171e-03\t5\t0\ngae/compute_n_4096\twall_time\t1.954775e-05\t1.938586e-05\t1.974381e-05\t9.155682e-03\t5\t0\ngae_from_values/compute_from_values_n_1024\talloc_words\t5.130000e+03\t5.130000e+03\t5.130000e+03\t0.000000e+00\t5\t0\ngae_from_values/compute_from_values_n_1024\tcpu_time\t7.572262e-06\t7.451256e-06\t7.687885e-06\t1.562475e-02\t5\t0\ngae_from_values/compute_from_values_n_1024\twall_time\t7.577999e-06\t7.456862e-06\t7.698184e-06\t1.592253e-02\t5\t0\ngae_from_values/compute_from_values_n_256\talloc_words\t1.290000e+03\t1.290000e+03\t1.290000e+03\t0.000000e+00\t5\t0\ngae_from_values/compute_from_values_n_256\tcpu_time\t1.573065e-06\t1.567452e-06\t1.579053e-06\t3.687355e-03\t5\t0\ngae_from_values/compute_from_values_n_256\twall_time\t1.573980e-06\t1.567490e-06\t1.582763e-06\t4.851611e-03\t5\t0\ngae_from_values/compute_from_values_n_4096\talloc_words\t2.049000e+04\t2.049000e+04\t2.049000e+04\t0.000000e+00\t5\t0\ngae_from_values/compute_from_values_n_4096\tcpu_time\t3.301526e-05\t3.259093e-05\t3.336488e-05\t1.172117e-02\t5\t0\ngae_from_values/compute_from_values_n_4096\twall_time\t3.310795e-05\t3.264492e-05\t3.349818e-05\t1.288613e-02\t5\t0\nnormalize/normalize_n_1024\talloc_words\t3.083000e+03\t3.083000e+03\t3.083000e+03\t0.000000e+00\t5\t0\nnormalize/normalize_n_1024\tcpu_time\t8.499536e-06\t8.421020e-06\t8.595588e-06\t1.026924e-02\t5\t2\nnormalize/normalize_n_1024\twall_time\t8.507517e-06\t8.425891e-06\t8.596021e-06\t9.998763e-03\t5\t2\nnormalize/normalize_n_256\talloc_words\t7.790000e+02\t7.790000e+02\t7.790000e+02\t0.000000e+00\t5\t0\nnormalize/normalize_n_256\tcpu_time\t1.998406e-06\t1.990551e-06\t2.005793e-06\t3.813604e-03\t5\t0\nnormalize/n
ormalize_n_256\twall_time\t1.999451e-06\t1.990430e-06\t2.006996e-06\t4.142583e-03\t5\t0\nnormalize/normalize_n_4096\talloc_words\t1.229900e+04\t1.229900e+04\t1.229900e+04\t0.000000e+00\t5\t0\nnormalize/normalize_n_4096\tcpu_time\t3.403318e-05\t3.376822e-05\t3.423778e-05\t6.898544e-03\t5\t0\nnormalize/normalize_n_4096\twall_time\t3.405293e-05\t3.381521e-05\t3.430319e-05\t7.164939e-03\t5\t0\nreturns/returns_n_1024\talloc_words\t1.025000e+03\t1.025000e+03\t1.025000e+03\t0.000000e+00\t5\t0\nreturns/returns_n_1024\tcpu_time\t2.979521e-06\t2.946996e-06\t3.001828e-06\t9.201606e-03\t5\t0\nreturns/returns_n_1024\twall_time\t2.994462e-06\t2.963207e-06\t3.022511e-06\t9.902417e-03\t5\t0\nreturns/returns_n_256\talloc_words\t2.570000e+02\t2.570000e+02\t2.570000e+02\t0.000000e+00\t5\t0\nreturns/returns_n_256\tcpu_time\t6.113050e-07\t6.027021e-07\t6.163104e-07\t1.113047e-02\t5\t1\nreturns/returns_n_256\twall_time\t6.123541e-07\t6.072009e-07\t6.175059e-07\t8.414242e-03\t5\t1\nreturns/returns_n_4096\talloc_words\t4.097000e+03\t4.097000e+03\t4.097000e+03\t0.000000e+00\t5\t0\nreturns/returns_n_4096\tcpu_time\t1.194814e-05\t1.181334e-05\t1.212174e-05\t1.290600e-02\t5\t0\nreturns/returns_n_4096\twall_time\t1.196891e-05\t1.180767e-05\t1.211994e-05\t1.304521e-02\t5\t0\n"
  },
  {
    "path": "packages/fehu/doc/01-getting-started.md",
    "content": "# Getting Started\n\nThis guide covers the basics: creating environments, running the step loop,\nunderstanding spaces, and using the built-in environments.\n\n## Installation\n\n<!-- $MDX skip -->\n```bash\nopam install fehu\n```\n\nOr build from source:\n\n<!-- $MDX skip -->\n```bash\ngit clone https://github.com/raven-ml/raven\ncd raven && dune build fehu\n```\n\n## Creating an Environment\n\nEnvironments are created via factory functions in `Fehu_envs`. Randomness is\nprovided by the implicit RNG scope from `Nx.Rng.run`:\n\n```ocaml\nopen Fehu\n\nlet () = Nx.Rng.run ~seed:42 @@ fun () ->\n  let env = Fehu_envs.Cartpole.make () in\n  ignore env\n```\n\nThe seed controls all randomness in the scope. Use the same seed to get\nthe same episode sequence.\n\n## The Step Loop\n\nAn environment follows a strict lifecycle: `reset` must be called before the\nfirst `step`, and again after any terminal step (terminated or truncated).\n\n```ocaml\nopen Fehu\n\nlet () = Nx.Rng.run ~seed:42 @@ fun () ->\n  let env = Fehu_envs.Cartpole.make () in\n\n  (* Reset returns the initial observation and info *)\n  let _obs, _info = Env.reset env () in\n\n  (* Step returns observation, reward, terminated, truncated, info *)\n  let s = Env.step env (Space.Discrete.of_int 0) in\n  Printf.printf \"reward: %.1f, terminated: %b, truncated: %b\\n\"\n    s.reward s.terminated s.truncated\n```\n\nA complete episode loop:\n\n```ocaml\nopen Fehu\n\nlet run_episode env =\n  let _obs, _info = Env.reset env () in\n  let done_ = ref false in\n  let total_reward = ref 0.0 in\n  while not !done_ do\n    let act = Space.sample (Env.action_space env) in\n    let s = Env.step env act in\n    total_reward := !total_reward +. 
s.reward;\n    done_ := s.terminated || s.truncated\n  done;\n  !total_reward\n\nlet () = Nx.Rng.run ~seed:42 @@ fun () ->\n  let env = Fehu_envs.Cartpole.make () in\n  let _reward = run_episode env in ()\n```\n\n## Spaces\n\nSpaces define the valid observations and actions for an environment. They\nprovide sampling, validation, and serialization.\n\n### Discrete\n\nInteger choices. Used for environments with a finite number of actions (e.g.,\nleft/right).\n\n```ocaml\nopen Fehu\n\nlet space = Space.Discrete.create 4  (* actions 0, 1, 2, 3 *)\nlet _n = Space.Discrete.n space      (* 4 *)\n\n(* Sample a random action (requires an Nx.Rng scope) *)\nlet _act = Nx.Rng.run ~seed:0 @@ fun () ->\n  Space.sample space\n\n(* Convert between int and discrete element *)\nlet act = Space.Discrete.of_int 2\nlet _i = Space.Discrete.to_int act   (* 2 *)\n\n(* Check membership *)\nlet _valid = Space.contains space act (* true *)\n```\n\n### Box\n\nContinuous vectors with per-dimension bounds. Used for continuous observations\n(e.g., position, velocity) and continuous actions.\n\n```ocaml\nopen Fehu\n\nlet space = Space.Box.create\n  ~low:[| -1.0; -2.0 |]\n  ~high:[| 1.0; 2.0 |]\n\nlet _low, _high = Space.Box.bounds space\nlet _obs = Nx.Rng.run ~seed:0 @@ fun () -> Space.sample space\n```\n\n### Other Space Types\n\n- **Multi_binary**: binary vectors of fixed length (multi-label scenarios)\n- **Multi_discrete**: multiple discrete axes with independent cardinalities\n- **Tuple**: fixed-length heterogeneous sequences\n- **Dict**: named fields with different space types\n- **Sequence**: variable-length homogeneous sequences\n- **Text**: character strings from a fixed alphabet\n\nAll spaces support `contains`, `sample`, `pack`/`unpack` (to/from the\nuniversal `Value.t` type), and `boundary_values`.\n\n## Available Environments\n\n### CartPole\n\nClassic cart-pole balancing. Push a cart left or right to keep a pole upright.\nReward is +1.0 per step. 
Terminates when the pole exceeds +/-12 degrees or the\ncart position leaves the +/-2.4 range. Truncates at 500 steps.\n\n- **Observation**: Box [4] -- x, x_dot, theta, theta_dot\n- **Actions**: Discrete 2 -- 0 = push left, 1 = push right\n\n```ocaml\nlet _env = Nx.Rng.run ~seed:42 @@ fun () -> Fehu_envs.Cartpole.make ()\n```\n\n### MountainCar\n\nA car in a valley must build momentum to climb a hill. Reward is -1.0 per\nstep. Terminates when position >= 0.5 with non-negative velocity. Truncates at\n200 steps.\n\n- **Observation**: Box [2] -- position, velocity\n- **Actions**: Discrete 3 -- 0 = push left, 1 = coast, 2 = push right\n\n```ocaml\nlet _env = Nx.Rng.run ~seed:42 @@ fun () -> Fehu_envs.Mountain_car.make ()\n```\n\n### GridWorld\n\n5x5 grid navigation with an obstacle. Agent starts at (0,0), goal at (4,4),\nobstacle at (2,2). Reward is +10.0 at goal, -1.0 otherwise. Truncates at 200\nsteps.\n\n- **Observation**: Multi_discrete [5; 5] -- (row, col)\n- **Actions**: Discrete 4 -- 0 = up, 1 = down, 2 = left, 3 = right\n\n```ocaml\nlet _env = Nx.Rng.run ~seed:42 @@ fun () -> Fehu_envs.Grid_world.make ()\n```\n\n### RandomWalk\n\nOne-dimensional random walk on [-10, 10]. Reward is -|position|. Terminates at\nthe boundaries or after 200 steps.\n\n- **Observation**: Box [1] in [-10.0, 10.0]\n- **Actions**: Discrete 2 -- 0 = left, 1 = right\n\n```ocaml\nlet _env = Nx.Rng.run ~seed:42 @@ fun () -> Fehu_envs.Random_walk.make ()\n```\n\n## Render Modes\n\nEnvironments can optionally render their state. 
Pass `~render_mode` when\ncreating the environment:\n\n```ocaml\nopen Fehu\n\nlet () = Nx.Rng.run ~seed:42 @@ fun () ->\n  let env = Fehu_envs.Cartpole.make\n    ~render_mode:`Ansi () in\n\n  let _obs, _info = Env.reset env () in\n  let _s = Env.step env (Space.Discrete.of_int 0) in\n\n  (* Render after reset or step *)\n  match Env.render env with\n  | Some text -> print_endline text\n  | None -> ()\n```\n\nSupported render modes vary by environment: `Ansi` for text output,\n`Rgb_array` for pixel frames, `Human` for interactive display.\n\n## Next Steps\n\n- [Environments and Wrappers](../02-environments/) -- custom environments, wrappers, rendering, vectorized environments\n- [Collection and Evaluation](../03-collection-and-evaluation/) -- trajectory collection, replay buffers, GAE, evaluation\n"
  },
  {
    "path": "packages/fehu/doc/02-environments.md",
    "content": "# Environments and Wrappers\n\nThis guide covers creating custom environments, composing wrappers, rendering,\nand running vectorized environments.\n\n## The Env.t Type\n\nAn environment `('obs, 'act, 'render) Env.t` is parameterized by its\nobservation type, action type, and render type. The type system ensures that\npolicies, wrappers, and collection utilities all agree on these types.\n\nThe lifecycle is strict:\n\n1. Call `Env.reset` to get the initial observation\n2. Call `Env.step` with an action to advance one timestep\n3. When `terminated` or `truncated` is true, call `Env.reset` again\n4. Call `Env.close` when done (optional, releases resources)\n\nCalling `step` before `reset`, or after a terminal step without resetting,\nraises `Invalid_argument`.\n\n## Creating Custom Environments\n\nUse `Env.create` to build an environment from `reset` and `step` functions.\nBoth receive the environment handle as their first argument, which provides\naccess to spaces and lifecycle state. Random keys are drawn from the implicit\nRNG scope (see below).\n\n<!-- $MDX skip -->\n```ocaml\nopen Fehu\n\n(* A simple counting environment: agent must choose action 1 *)\nlet make_counter () =\n  let count = ref 0 in\n  Env.create\n    ~id:\"Counter-v0\"\n    ~observation_space:(Space.Discrete.create 100)\n    ~action_space:(Space.Discrete.create 2)\n    ~reset:(fun _env ?options:_ () ->\n      count := 0;\n      Space.Discrete.of_int 0, Info.empty)\n    ~step:(fun _env action ->\n      let a = Space.Discrete.to_int action in\n      if a = 1 then incr count else count := 0;\n      let obs = Space.Discrete.of_int !count in\n      let terminated = !count >= 10 in\n      Env.step_result ~observation:obs\n        ~reward:(if a = 1 then 1.0 else -1.0)\n        ~terminated ())\n    ()\n```\n\n### RNG Management\n\nEnvironments draw random keys from the implicit RNG scope established by\n`Nx.Rng.run`. 
Any call to `Space.sample` or other random operations inside\n`reset` and `step` callbacks will use this scope automatically:\n\n<!-- $MDX skip -->\n```ocaml\nlet make_noisy_env () =\n  Env.create\n    ~observation_space:(Space.Box.create\n      ~low:[| 0.0 |] ~high:[| 1.0 |])\n    ~action_space:(Space.Discrete.create 2)\n    ~reset:(fun env ?options:_ () ->\n      let obs = Space.sample (Env.observation_space env) in\n      obs, Info.empty)\n    ~step:(fun env _action ->\n      let obs = Space.sample (Env.observation_space env) in\n      Env.step_result ~observation:obs\n        ~reward:1.0 ())\n    ()\n```\n\n## Wrappers\n\nWrappers transform an environment's observations, actions, or rewards without\nmodifying the inner environment. They compose: wrap a wrapper to stack\ntransformations.\n\n### map_observation\n\nTransform observations from reset and step:\n\n<!-- $MDX skip -->\n```ocaml\nopen Fehu\n\n(* Normalize observations to [0, 1] *)\nlet env = Env.map_observation\n  ~observation_space:(Space.Box.create\n    ~low:[| 0.0; 0.0; 0.0; 0.0 |]\n    ~high:[| 1.0; 1.0; 1.0; 1.0 |])\n  ~f:(fun obs info ->\n    (* obs is a float32 tensor, transform it *)\n    let normalized = normalize_fn obs in\n    normalized, info)\n  env\n```\n\nThe function `f` receives both the observation and the info dictionary,\nreturning both. This allows wrappers to pass metadata through info.\n\n### map_action\n\nTransform actions before they reach the inner environment:\n\n<!-- $MDX skip -->\n```ocaml\n(* Remap discrete actions *)\nlet env = Env.map_action\n  ~action_space:(Space.Discrete.create 3)\n  ~f:(fun act ->\n    (* Map from 3-action to 2-action space *)\n    let i = Space.Discrete.to_int act in\n    Space.Discrete.of_int (if i >= 2 then 1 else i))\n  env\n```\n\n### map_reward\n\nTransform rewards after each step:\n\n<!-- $MDX skip -->\n```ocaml\n(* Scale rewards *)\nlet env = Env.map_reward\n  ~f:(fun ~reward ~info -> reward *. 
0.01, info)\n  env\n```\n\n### clip_action\n\nClamp continuous actions to the action space bounds. The wrapper relaxes the\naction space to accept any float values, then clips before forwarding:\n\n<!-- $MDX skip -->\n```ocaml\n(* Works with Box action spaces *)\nlet env = Env.clip_action env\n```\n\n### clip_observation\n\nClamp observations to specified bounds:\n\n<!-- $MDX skip -->\n```ocaml\nlet env = Env.clip_observation\n  ~low:[| -1.0; -1.0 |]\n  ~high:[| 1.0; 1.0 |]\n  env\n```\n\n### time_limit\n\nEnforce a maximum episode length. When the limit is reached, the step's\n`truncated` flag is set to true:\n\n<!-- $MDX skip -->\n```ocaml\nlet env = Env.time_limit ~max_episode_steps:200 env\n```\n\n### Custom Wrappers with Env.wrap\n\nFor transformations that need full control over reset and step, use `Env.wrap`.\nThe wrapper shares the inner environment's lifecycle (RNG, closed flag, reset\nflag):\n\n<!-- $MDX skip -->\n```ocaml\nopen Fehu\n\n(* A wrapper that tracks episode reward *)\nlet with_episode_reward env =\n  let episode_reward = ref 0.0 in\n  Env.wrap\n    ~observation_space:(Env.observation_space env)\n    ~action_space:(Env.action_space env)\n    ~reset:(fun inner ?options () ->\n      episode_reward := 0.0;\n      Env.reset inner ?options ())\n    ~step:(fun inner action ->\n      let s = Env.step inner action in\n      episode_reward := !episode_reward +. s.reward;\n      let info =\n        if s.terminated || s.truncated then\n          Info.set \"episode_reward\"\n            (Info.float !episode_reward) s.info\n        else s.info\n      in\n      { s with info })\n    env\n```\n\n## Rendering\n\nEnvironments support optional rendering via render modes. 
Pass\n`~render_mode` at creation time:\n\n<!-- $MDX skip -->\n```ocaml\nlet env = Fehu_envs.Grid_world.make\n  ~render_mode:`Ansi ()\n\nlet () =\n  let _obs, _info = Env.reset env () in\n  match Env.render env with\n  | Some (Text s) -> print_endline s\n  | _ -> ()\n```\n\n### Render Rollout\n\n`Render.rollout` runs a policy and feeds rendered frames to a sink function:\n\n<!-- $MDX skip -->\n```ocaml\nopen Fehu\n\n(* Collect rendered frames from a policy rollout *)\nlet () =\n  let frames = ref [] in\n  Render.rollout env\n    ~policy:(fun _obs -> Space.sample (Env.action_space env))\n    ~steps:100\n    ~sink:(fun img -> frames := img :: !frames)\n    ()\n```\n\n### Recording with on_render\n\n`Render.on_render` wraps an environment so that every frame after reset and\nstep is passed to a sink:\n\n<!-- $MDX skip -->\n```ocaml\nlet env = Render.on_render\n  ~sink:(fun img -> save_frame img)\n  env\n```\n\n## Vectorized Environments\n\n`Vec_env` runs multiple environment instances with batched inputs and outputs.\nTerminated or truncated episodes are automatically reset.\n\n<!-- $MDX skip -->\n```ocaml\nopen Fehu\n\nlet () = Nx.Rng.run ~seed:42 @@ fun () ->\n  (* Create 4 parallel environments *)\n  let envs = List.init 4 (fun _ -> Fehu_envs.Cartpole.make ()) in\n  let vec = Vec_env.create envs in\n\n  let n = Vec_env.num_envs vec in  (* 4 *)\n\n  (* Reset all environments *)\n  let _observations, _infos = Vec_env.reset vec () in\n\n  (* Step all environments with an array of actions *)\n  let actions = Array.init n (fun _ -> Space.Discrete.of_int 0) in\n  let _s = Vec_env.step vec actions in\n\n  (* _s.observations, _s.rewards, _s.terminated, _s.truncated *)\n\n  (* Clean up *)\n  Vec_env.close vec\n```\n\nAll environments must have structurally identical observation and action spaces\n(checked via `Space.equal_spec`). 
On terminal steps, the original terminal\nobservation is stored in the step info under `\"final_observation\"` as a packed\n`Value.t`, and the terminal info under `\"final_info\"`.\n\n## Next Steps\n\n- [Getting Started](../01-getting-started/) -- installation, environments, spaces, step loop\n- [Collection and Evaluation](../03-collection-and-evaluation/) -- trajectory collection, replay buffers, GAE, evaluation\n"
  },
  {
    "path": "packages/fehu/doc/03-collection-and-evaluation.md",
    "content": "# Collection, Buffers, and Evaluation\n\nThis guide covers trajectory collection, replay buffers, generalized advantage\nestimation, and policy evaluation.\n\n## Trajectory Collection\n\n`Collect` gathers agent-environment interactions into structure-of-arrays form\nfor batch processing.\n\n### Rollout\n\n`Collect.rollout` collects a fixed number of transitions. It resets the\nenvironment at the start and automatically on episode boundaries:\n\n<!-- $MDX skip -->\n```ocaml\nopen Fehu\n\nlet () = Nx.Rng.run ~seed:42 @@ fun () ->\n  let env = Fehu_envs.Cartpole.make () in\n\n  (* The policy receives an observation and returns\n     (action, log_prob option, value_estimate option) *)\n  let policy _obs =\n    let act = Space.sample (Env.action_space env) in\n    (act, None, None)\n  in\n\n  let _trajectory = Collect.rollout env ~policy ~n_steps:1024 in ()\n```\n\nThe returned trajectory contains parallel arrays:\n\n<!-- $MDX skip -->\n```ocaml\nlet n = Collect.length trajectory  (* 1024 *)\nlet obs = trajectory.observations       (* 'obs array *)\nlet acts = trajectory.actions           (* 'act array *)\nlet rews = trajectory.rewards           (* float array *)\nlet next_obs = trajectory.next_observations  (* 'obs array *)\nlet terms = trajectory.terminated       (* bool array *)\nlet truncs = trajectory.truncated       (* bool array *)\nlet infos = trajectory.infos            (* Info.t array *)\nlet log_ps = trajectory.log_probs       (* float array option *)\nlet vals = trajectory.values            (* float array option *)\n```\n\nWhen the policy returns `Some log_prob` or `Some value`, those are collected\ninto `log_probs` and `values`. When any return is `None`, the corresponding\nfield is `None` for the entire trajectory.\n\n### Policy Signature\n\nThe policy function has the signature:\n\n```\n'obs -> 'act * float option * float option\n```\n\nThe three components are:\n\n1. **action**: the action to take\n2. 
**log_prob** (optional): the log-probability of the action under the current policy, used for importance sampling in PPO\n3. **value** (optional): the estimated value of the current state, used for GAE computation\n\nFor a simple random policy, return `None` for both:\n\n<!-- $MDX skip -->\n```ocaml\nlet random_policy _obs =\n  let act = Space.sample (Env.action_space env) in\n  (act, None, None)\n```\n\nFor a neural network policy with value head:\n\n<!-- $MDX skip -->\n```ocaml\nlet nn_policy obs =\n  let logits, value = forward_pass model obs in\n  let act = sample_from_logits logits in\n  let log_prob = log_prob_of logits act in\n  (act, Some log_prob, Some value)\n```\n\n### Episodes\n\n`Collect.episodes` collects complete episodes, one trajectory per episode:\n\n<!-- $MDX skip -->\n```ocaml\nlet episodes = Collect.episodes env\n  ~policy ~n_episodes:10\n  ~max_steps:500 ()\n\n(* episodes is a ('obs, 'act) Collect.t list *)\nlet total_rewards = List.map (fun traj ->\n  Array.fold_left (+.) 0.0 traj.rewards) episodes\n```\n\nEach episode runs until termination, truncation, or `max_steps` (default\n1000).\n\n### Concatenating Trajectories\n\n`Collect.concat` merges multiple trajectories into one:\n\n<!-- $MDX skip -->\n```ocaml\nlet combined = Collect.concat [traj1; traj2; traj3]\n```\n\nOptional fields (`log_probs`, `values`) are kept only if present in all inputs.\n\n## Replay Buffers\n\n`Buffer` provides a fixed-capacity circular buffer for off-policy experience\nstorage. 
It stores individual transitions and supports uniform random sampling.\n\n### Creating and Filling\n\n<!-- $MDX skip -->\n```ocaml\nopen Fehu\n\nlet buf = Buffer.create ~capacity:10_000\n\n(* Add transitions one at a time *)\nBuffer.add buf {\n  observation = obs;\n  action = act;\n  reward = 1.0;\n  next_observation = next_obs;\n  terminated = false;\n  truncated = false;\n}\n\nlet n = Buffer.size buf           (* number of stored transitions *)\nlet full = Buffer.is_full buf     (* true when at capacity *)\nlet cap = Buffer.capacity buf     (* 10000 *)\n```\n\nWhen the buffer is full, new transitions overwrite the oldest ones.\n\n### Sampling\n\nDraw a batch of transitions uniformly at random (with replacement):\n\n<!-- $MDX skip -->\n```ocaml\nlet batch = Nx.Rng.run ~seed:0 @@ fun () ->\n  Buffer.sample buf ~batch_size:64\n\n(* batch is a transition array *)\nlet _obs_0 = batch.(0).observation\nlet _rew_0 = batch.(0).reward\n```\n\nFor structure-of-arrays form (more convenient for training):\n\n<!-- $MDX skip -->\n```ocaml\nlet (observations, actions, rewards,\n     next_observations, terminated, truncated) =\n  Nx.Rng.run ~seed:0 @@ fun () ->\n    Buffer.sample_arrays buf ~batch_size:64\n```\n\n### Clearing\n\n<!-- $MDX skip -->\n```ocaml\nBuffer.clear buf  (* removes all transitions, keeps storage allocated *)\n```\n\n## Generalized Advantage Estimation\n\n`Gae` computes advantages and returns for policy gradient methods. It correctly\nhandles the distinction between terminated and truncated episodes:\n\n- **Terminated**: the episode ended naturally (e.g., pole fell). Bootstrap\n  value is zero.\n- **Truncated**: the episode was cut short (e.g., time limit). 
Bootstrap value\n  comes from `next_values`.\n\n### Computing Advantages\n\n<!-- $MDX skip -->\n```ocaml\nopen Fehu\n\n(* From a trajectory with value estimates *)\nlet advantages, returns = Gae.compute\n  ~rewards:trajectory.rewards\n  ~values:(Option.get trajectory.values)\n  ~terminated:trajectory.terminated\n  ~truncated:trajectory.truncated\n  ~next_values    (* V(s_{t+1}) for each step *)\n  ~gamma:0.99     (* discount factor *)\n  ~lambda:0.95    (* GAE smoothing parameter *)\n```\n\nWhen you have values from a value network and the last value estimate,\n`compute_from_values` builds `next_values` for you:\n\n<!-- $MDX skip -->\n```ocaml\nlet advantages, returns = Gae.compute_from_values\n  ~rewards:trajectory.rewards\n  ~values:(Option.get trajectory.values)\n  ~terminated:trajectory.terminated\n  ~truncated:trajectory.truncated\n  ~last_value:0.0   (* V(s_T) for the final state *)\n  ~gamma:0.99\n  ~lambda:0.95\n```\n\n### Monte Carlo Returns\n\nFor simpler algorithms that do not need advantages:\n\n<!-- $MDX skip -->\n```ocaml\nlet rets = Gae.returns\n  ~rewards:trajectory.rewards\n  ~terminated:trajectory.terminated\n  ~truncated:trajectory.truncated\n  ~gamma:0.99\n```\n\n### Normalizing Advantages\n\nNormalize to zero mean and unit variance for training stability:\n\n<!-- $MDX skip -->\n```ocaml\nlet normalized = Gae.normalize advantages\n(* or with custom epsilon *)\nlet normalized = Gae.normalize ~eps:1e-6 advantages\n```\n\n## Policy Evaluation\n\n`Eval.run` runs a deterministic or stochastic policy over multiple episodes\nand reports summary statistics:\n\n<!-- $MDX skip -->\n```ocaml\nopen Fehu\n\nlet () = Nx.Rng.run ~seed:42 @@ fun () ->\n  let env = Fehu_envs.Cartpole.make () in\n\n  (* Evaluate a random policy *)\n  let stats = Eval.run env\n    ~policy:(fun _obs -> Space.sample (Env.action_space env))\n    ~n_episodes:100\n    ~max_steps:500\n    ()\n  in\n  Printf.printf\n    \"Episodes: %d, Mean reward: %.1f +/- %.1f, Mean length: %.0f\\n\"\n 
   stats.n_episodes\n    stats.mean_reward\n    stats.std_reward\n    stats.mean_length\n```\n\nThe evaluation policy has a simpler signature than the collection policy: it\nonly returns an action, not log-probs or value estimates:\n\n```\n'obs -> 'act\n```\n\n`Eval.run` resets the environment between episodes. Default `n_episodes` is 10\nand default `max_steps` is 1000.\n\n## Putting It Together\n\nA typical PPO-style training iteration using these utilities:\n\n<!-- $MDX skip -->\n```ocaml\nopen Fehu\n\n(* 1. Collect rollout; nn_policy (defined above) already returns\n   (act, Some log_prob, Some value) *)\nlet trajectory = Collect.rollout env\n  ~policy:nn_policy\n  ~n_steps:2048\n\n(* 2. Compute advantages *)\nlet last_value = estimate_value model last_obs\nlet advantages, returns = Gae.compute_from_values\n  ~rewards:trajectory.rewards\n  ~values:(Option.get trajectory.values)\n  ~terminated:trajectory.terminated\n  ~truncated:trajectory.truncated\n  ~last_value\n  ~gamma:0.99 ~lambda:0.95\n\nlet advantages = Gae.normalize advantages\n\n(* 3. Update policy using trajectory data + advantages *)\n(* ... your PPO update here ... *)\n\n(* 4. Evaluate *)\nlet stats = Eval.run env\n  ~policy:(fun obs -> greedy_action model obs)\n  ~n_episodes:10 ()\n```\n\n## Next Steps\n\n- [Getting Started](../01-getting-started/) -- installation, environments, spaces, step loop\n- [Environments and Wrappers](../02-environments/) -- custom environments, wrappers, rendering, vectorized environments\n"
  },
  {
    "path": "packages/fehu/doc/04-gymnasium-comparison.md",
    "content": "# Fehu vs. Gymnasium -- A Practical Comparison\n\nThis guide explains how Fehu's reinforcement learning API relates to Python's [Gymnasium](https://gymnasium.farama.org/) (and [Stable Baselines3](https://stable-baselines3.readthedocs.io/) for collection/buffer/GAE), focusing on:\n\n* How core concepts map (Env, Space, step loop, wrappers)\n* Where the APIs feel similar vs. deliberately different\n* How to translate common Gymnasium patterns into Fehu\n\nIf you already use Gymnasium, this should be enough to become productive in Fehu quickly.\n\n---\n\n## 1. Big-Picture Differences\n\n| Aspect                | Gymnasium (Python)                                 | Fehu (OCaml)                                                         |\n| --------------------- | -------------------------------------------------- | -------------------------------------------------------------------- |\n| Language              | Dynamic, interpreted                               | Statically typed, compiled                                           |\n| Environment type      | `gymnasium.Env`                                    | `('obs, 'act, 'render) Env.t`                                        |\n| Observation/action    | Untyped (`np.ndarray`, `int`, etc.)                | Parametric: `'obs` and `'act` tracked in the type                    |\n| Spaces                | `gymnasium.spaces.*`                               | `'a Space.t` with typed modules (`Space.Discrete`, `Space.Box`, ...) |\n| Step result           | Tuple `(obs, reward, terminated, truncated, info)` | Record `Env.step` with named fields                                  |\n| Wrappers              | Subclassing `gymnasium.Wrapper`                    | `Env.wrap` or composable combinators (`map_observation`, etc.)       
|\n| Vectorized envs       | `gymnasium.vector.SyncVectorEnv`                   | `Vec_env.create`                                                     |\n| Trajectory collection | External (Stable Baselines3, TorchRL)              | Built-in: `Collect.rollout`, `Collect.episodes`                      |\n| Replay buffers        | External (Stable Baselines3, TorchRL)              | Built-in: `Buffer.create`, `Buffer.add`, `Buffer.sample`             |\n| GAE                   | External (Stable Baselines3)                       | Built-in: `Gae.compute`, `Gae.returns`, `Gae.normalize`              |\n| Policy evaluation     | Manual loop or SB3 `evaluate_policy`               | Built-in: `Eval.run`                                                 |\n| RNG                   | `np.random` / seed passed to `env.reset(seed=...)` | Implicit scope via `Nx.Rng.run ~seed`                                |\n| Rendering             | String mode `\"human\"`, `\"rgb_array\"`               | Polymorphic variants `` `Human ``, `` `Rgb_array ``, etc.            |\n| Mutability            | Environments are mutable objects                   | Environments are immutable handles; state is internal                |\n\n**Fehu semantics to know (read once):**\n- `Env.reset` must be called before `Env.step`. After a terminal step, another `reset` is required.\n- Spaces validate observations and actions automatically -- `Env.step` raises if an action is outside the action space.\n- RNG is scoped: wrap your code in `Nx.Rng.run ~seed:42 (fun () -> ...)` instead of passing seeds to individual calls.\n- Trajectory collection, replay buffers, GAE, and evaluation are built into Fehu, not external libraries.\n\n---\n\n## 2. 
Spaces\n\n### 2.1 Discrete\n\n**Gymnasium**\n\n```python\nimport gymnasium as gym\n\nspace = gym.spaces.Discrete(5)           # {0, 1, 2, 3, 4}\nspace = gym.spaces.Discrete(5, start=1)  # {1, 2, 3, 4, 5}\n\nsample = space.sample()\nassert space.contains(sample)\n```\n\n**Fehu**\n\n<!-- $MDX skip -->\n```ocaml\nopen Fehu\n\nlet space = Space.Discrete.create 5            (* {0, 1, 2, 3, 4} *)\nlet space = Space.Discrete.create ~start:1 5   (* {1, 2, 3, 4, 5} *)\n\nlet sample = Space.sample space\nlet valid  = Space.contains space sample\n\nlet n     = Space.Discrete.n space      (* 5 *)\nlet start = Space.Discrete.start space  (* 1 *)\n\n(* Convert between discrete elements and ints *)\nlet action = Space.Discrete.of_int 3\nlet value  = Space.Discrete.to_int action\n```\n\nDiscrete elements are `(int32, Nx.int32_elt) Nx.t` scalars, not bare OCaml ints.\n\n### 2.2 Box (continuous)\n\n**Gymnasium**\n\n```python\nimport numpy as np\n\nspace = gym.spaces.Box(\n    low=np.array([-1.0, -2.0]),\n    high=np.array([1.0, 2.0]),\n    dtype=np.float32,\n)\nsample = space.sample()\n```\n\n**Fehu**\n\n<!-- $MDX skip -->\n```ocaml\nlet space =\n  Space.Box.create\n    ~low:[| -1.0; -2.0 |]\n    ~high:[| 1.0; 2.0 |]\n\nlet sample = Space.sample space\nlet (low, high) = Space.Box.bounds space\n```\n\nBox elements are `(float, Nx.float32_elt) Nx.t` tensors. 
Infinite bounds are allowed; sampling falls back to uniform draws in `[-1e6, 1e6]` clamped to bounds.\n\n### 2.3 Multi_binary\n\n**Gymnasium**\n\n```python\nspace = gym.spaces.MultiBinary(4)  # {0,1}^4\n```\n\n**Fehu**\n\n<!-- $MDX skip -->\n```ocaml\nlet space = Space.Multi_binary.create 4\n```\n\nElements are `(int32, Nx.int32_elt) Nx.t` vectors with values 0 or 1.\n\n### 2.4 Multi_discrete\n\n**Gymnasium**\n\n```python\nspace = gym.spaces.MultiDiscrete([3, 5, 2])  # 3 axes: {0..2}, {0..4}, {0..1}\n```\n\n**Fehu**\n\n<!-- $MDX skip -->\n```ocaml\nlet space = Space.Multi_discrete.create [| 3; 5; 2 |]\n```\n\n### 2.5 Composite spaces\n\n**Gymnasium**\n\n```python\nspace = gym.spaces.Tuple((\n    gym.spaces.Discrete(3),\n    gym.spaces.Box(low=0.0, high=1.0, shape=(2,)),\n))\n\nspace = gym.spaces.Dict({\n    \"position\": gym.spaces.Box(low=-10.0, high=10.0, shape=(3,)),\n    \"velocity\": gym.spaces.Box(low=-1.0, high=1.0, shape=(3,)),\n})\n```\n\n**Fehu**\n\n<!-- $MDX skip -->\n```ocaml\nlet space =\n  Space.Tuple.create [\n    Space.Pack (Space.Discrete.create 3);\n    Space.Pack (Space.Box.create ~low:[| 0.0; 0.0 |] ~high:[| 1.0; 1.0 |]);\n  ]\n\nlet space =\n  Space.Dict.create [\n    (\"position\", Space.Pack (Space.Box.create ~low:[| -10.; -10.; -10. |] ~high:[| 10.; 10.; 10. |]));\n    (\"velocity\", Space.Pack (Space.Box.create ~low:[| -1.; -1.; -1. |] ~high:[| 1.; 1.; 1. 
|]));\n  ]\n```\n\nComposite space elements use `Value.t` for heterogeneous data: `Tuple.element = Value.t list`, `Dict.element = (string * Value.t) list`.\n\n### 2.6 Sequence and Text\n\n**Gymnasium**\n\n```python\nspace = gym.spaces.Sequence(gym.spaces.Discrete(5), seed=42)\nspace = gym.spaces.Text(max_length=32, charset=\"abcdef\")\n```\n\n**Fehu**\n\n<!-- $MDX skip -->\n```ocaml\nlet space = Space.Sequence.create ~max_length:10 (Space.Discrete.create 5)\nlet space = Space.Text.create ~charset:\"abcdef\" ~max_length:32 ()\n```\n\n### 2.7 Common operations\n\nAll space types share the same interface:\n\n<!-- $MDX skip -->\n```ocaml\nlet sample = Space.sample space           (* random element *)\nlet valid  = Space.contains space sample  (* membership test *)\nlet spec   = Space.spec space             (* structural description *)\nlet shape  = Space.shape space            (* dimensionality, if defined *)\n\n(* Serialization via Value.t *)\nlet packed   = Space.pack space sample\nlet unpacked = Space.unpack space packed  (* (element, string) result *)\n\n(* Edge cases for testing *)\nlet edges = Space.boundary_values space\n```\n\n---\n\n## 3. Creating Environments\n\n### 3.1 From a registry\n\n**Gymnasium**\n\n```python\nenv = gym.make(\"CartPole-v1\", render_mode=\"human\")\n```\n\n**Fehu** does not have a global registry. 
Environments are constructed directly:\n\n<!-- $MDX skip -->\n```ocaml\nlet env =\n  Env.create\n    ~id:\"CartPole-v1\"\n    ~observation_space:(Space.Box.create\n      ~low:[| -4.8; Float.neg_infinity; -0.418; Float.neg_infinity |]\n      ~high:[| 4.8; Float.infinity; 0.418; Float.infinity |])\n    ~action_space:(Space.Discrete.create 2)\n    ~render_mode:`Human\n    ~render_modes:[\"human\"; \"rgb_array\"]\n    ~reset:(fun _env ?options:_ () ->\n      let obs = (* initial state *) in\n      (obs, Info.empty))\n    ~step:(fun _env action ->\n      let obs = (* next state *) in\n      Env.step_result ~observation:obs ~reward:1.0 ())\n    ()\n```\n\n`Env.create` takes the observation space, action space, and two callbacks: `reset` and `step`. Optional `render` and `close` callbacks handle visualization and cleanup.\n\n### 3.2 Step result construction\n\n**Gymnasium** returns a flat tuple from `env.step()`:\n\n```python\nobs, reward, terminated, truncated, info = env.step(action)\n```\n\n**Fehu** uses a record with named fields, and provides a convenience constructor with defaults:\n\n<!-- $MDX skip -->\n```ocaml\n(* Inside a step callback *)\nEnv.step_result\n  ~observation:obs\n  ~reward:1.0\n  ~terminated:false\n  ~truncated:false\n  ~info:Info.empty\n  ()\n\n(* Defaults: reward=0., terminated=false, truncated=false, info=Info.empty *)\nEnv.step_result ~observation:obs ()\n```\n\n---\n\n## 4. 
Step Loop\n\n### 4.1 Basic episode\n\n**Gymnasium**\n\n```python\nenv = gym.make(\"CartPole-v1\")\nobs, info = env.reset(seed=42)\n\ntotal_reward = 0.0\nwhile True:\n    action = env.action_space.sample()\n    obs, reward, terminated, truncated, info = env.step(action)\n    total_reward += reward\n    if terminated or truncated:\n        break\n\nenv.close()\n```\n\n**Fehu**\n\n<!-- $MDX skip -->\n```ocaml\nlet () =\n  Nx.Rng.run ~seed:42 (fun () ->\n    let env = (* create environment *) in\n    let (obs, _info) = Env.reset env () in\n    let obs = ref obs in\n    let total_reward = ref 0.0 in\n    let done_ = ref false in\n    while not !done_ do\n      let action = Space.sample (Env.action_space env) in\n      let step = Env.step env action in\n      obs := step.observation;\n      total_reward := !total_reward +. step.reward;\n      done_ := step.terminated || step.truncated\n    done;\n    Env.close env)\n```\n\nKey differences:\n- RNG is scoped with `Nx.Rng.run ~seed:42` rather than passed to `reset`.\n- Step results are accessed by field name (`step.observation`, `step.reward`).\n- `Env.step` raises `Invalid_argument` if called without a prior `reset` or after a terminal step without resetting.\n\n### 4.2 Multiple episodes\n\n**Gymnasium**\n\n```python\nfor episode in range(10):\n    obs, info = env.reset()\n    done = False\n    while not done:\n        action = policy(obs)\n        obs, reward, terminated, truncated, info = env.step(action)\n        done = terminated or truncated\n```\n\n**Fehu** -- manual loop or use `Collect.episodes`:\n\n<!-- $MDX skip -->\n```ocaml\n(* Manual *)\nlet () =\n  Nx.Rng.run ~seed:0 (fun () ->\n    let env = (* create environment *) in\n    for _ep = 0 to 9 do\n      let (obs, _info) = Env.reset env () in\n      let obs = ref obs in\n      let done_ = ref false in\n      while not !done_ do\n        let action = policy !obs in\n        let step = Env.step env action in\n        obs := step.observation;\n        done_ := 
step.terminated || step.truncated\n      done\n    done;\n    Env.close env)\n\n(* Or use Collect.episodes directly *)\nlet trajs =\n  Nx.Rng.run ~seed:0 (fun () ->\n    let env = (* create environment *) in\n    Collect.episodes env\n      ~policy:(fun obs -> (policy obs, None, None))\n      ~n_episodes:10 ())\n```\n\n---\n\n## 5. Wrappers\n\n### 5.1 Gymnasium approach: subclassing\n\n**Gymnasium**\n\n```python\nclass NormalizeObservation(gym.Wrapper):\n    def __init__(self, env, mean, std):\n        super().__init__(env)\n        self.mean = mean\n        self.std = std\n\n    def observation(self, obs):\n        return (obs - self.mean) / self.std\n\nenv = NormalizeObservation(env, mean=0.0, std=1.0)\n```\n\n### 5.2 Fehu approach: composable functions\n\n**Fehu** provides `Env.wrap` for full control and specialized combinators for common patterns.\n\n**`map_observation`** -- transform observations:\n\n<!-- $MDX skip -->\n```ocaml\nlet normalized_env =\n  Env.map_observation\n    ~observation_space:obs_space\n    ~f:(fun obs _info ->\n      let normalized = (* normalize obs *) in\n      (normalized, Info.empty))\n    env\n```\n\n**`map_action`** -- transform actions before passing to the inner env:\n\n<!-- $MDX skip -->\n```ocaml\nlet remapped_env =\n  Env.map_action\n    ~action_space:new_action_space\n    ~f:(fun new_action -> (* convert to inner action *))\n    env\n```\n\n**`map_reward`** -- transform rewards:\n\n<!-- $MDX skip -->\n```ocaml\nlet scaled_env =\n  Env.map_reward\n    ~f:(fun ~reward ~info -> (reward *. 
0.1, info))\n    env\n```\n\n**`clip_action`** -- clamp continuous actions to bounds:\n\n<!-- $MDX skip -->\n```ocaml\n(* Gymnasium *)\n(* from gymnasium.wrappers import ClipAction *)\n(* env = ClipAction(env) *)\n\n(* Fehu *)\nlet clipped_env = Env.clip_action env\n```\n\n**`clip_observation`** -- clamp observations:\n\n<!-- $MDX skip -->\n```ocaml\nlet clipped_env =\n  Env.clip_observation\n    ~low:[| -5.0; -5.0 |]\n    ~high:[| 5.0; 5.0 |]\n    env\n```\n\n**`time_limit`** -- enforce maximum episode length:\n\n<!-- $MDX skip -->\n```ocaml\n(* Gymnasium *)\n(* from gymnasium.wrappers import TimeLimit *)\n(* env = TimeLimit(env, max_episode_steps=200) *)\n\n(* Fehu *)\nlet limited_env = Env.time_limit ~max_episode_steps:200 env\n```\n\n### 5.3 Full custom wrapper with `Env.wrap`\n\nWhen the combinators are not enough, use `Env.wrap` directly:\n\n<!-- $MDX skip -->\n```ocaml\nlet custom_env =\n  Env.wrap\n    ~observation_space:new_obs_space\n    ~action_space:new_act_space\n    ~reset:(fun inner ?options () ->\n      let (obs, info) = Env.reset inner ?options () in\n      (transform_obs obs, info))\n    ~step:(fun inner action ->\n      let step = Env.step inner (transform_action action) in\n      { step with observation = transform_obs step.observation })\n    env\n```\n\n`Env.wrap` receives the inner environment as the first argument to `reset`, `step`, `render`, and `close`. Guards (closed check, needs-reset check, space validation) are enforced automatically.\n\n### 5.4 Composing wrappers\n\nWrappers compose by chaining:\n\n<!-- $MDX skip -->\n```ocaml\nlet env =\n  base_env\n  |> Env.time_limit ~max_episode_steps:500\n  |> Env.clip_action\n  |> Env.map_reward ~f:(fun ~reward ~info -> (reward *. 0.01, info))\n```\n\n---\n\n## 6. 
Vectorized Environments\n\n### 6.1 Synchronous vectorization\n\n**Gymnasium**\n\n```python\nenvs = gym.vector.SyncVectorEnv([\n    lambda: gym.make(\"CartPole-v1\") for _ in range(4)\n])\n\nobs, infos = envs.reset()\nactions = envs.action_space.sample()  # batch of 4 actions\nobs, rewards, terminated, truncated, infos = envs.step(actions)\nenvs.close()\n```\n\n**Fehu**\n\n<!-- $MDX skip -->\n```ocaml\nlet venv =\n  Vec_env.create [env1; env2; env3; env4]\n\nlet n = Vec_env.num_envs venv  (* 4 *)\n\nlet (observations, infos) = Vec_env.reset venv ()\nlet actions = Array.init n (fun _ -> Space.sample (Vec_env.action_space venv))\nlet step = Vec_env.step venv actions\n\n(* step.observations : 'obs array      -- one per env *)\n(* step.rewards      : float array     -- one per env *)\n(* step.terminated   : bool array      -- one per env *)\n(* step.truncated    : bool array      -- one per env *)\n(* step.infos        : Info.t array    -- one per env *)\n\nVec_env.close venv\n```\n\nKey differences:\n- `Vec_env.create` takes a list of already-constructed environments. All must have structurally identical spaces.\n- Terminated or truncated environments are automatically reset. The terminal observation is stored in the step's info under `\"final_observation\"` (as a packed `Value.t`), and the terminal info under `\"final_info\"`.\n- The step result is a record with named arrays, not a tuple.\n\n---\n\n## 7. 
Trajectory Collection\n\n### 7.1 Fixed-step rollout\n\n**Gymnasium + Stable Baselines3**\n\n```python\nfrom stable_baselines3.common.buffers import RolloutBuffer\n\n# Manual loop or SB3 internals\nobs, _ = env.reset()\nfor step in range(2048):\n    action, log_prob, value = policy(obs)\n    obs, reward, terminated, truncated, info = env.step(action)\n    buffer.add(obs, action, reward, ...)\n    if terminated or truncated:\n        obs, _ = env.reset()\n```\n\n**Fehu** -- built-in:\n\n<!-- $MDX skip -->\n```ocaml\nlet trajectory =\n  Collect.rollout env\n    ~policy:(fun obs ->\n      let action = (* select action *) in\n      let log_prob = (* optional log probability *) in\n      let value = (* optional value estimate *) in\n      (action, Some log_prob, Some value))\n    ~n_steps:2048\n```\n\n`Collect.rollout` handles resets on episode boundaries automatically and returns a `Collect.t` record:\n\n<!-- $MDX skip -->\n```ocaml\n(* Collect.t fields: *)\ntrajectory.observations       (* 'obs array *)\ntrajectory.actions            (* 'act array *)\ntrajectory.rewards            (* float array *)\ntrajectory.next_observations  (* 'obs array *)\ntrajectory.terminated         (* bool array *)\ntrajectory.truncated          (* bool array *)\ntrajectory.infos              (* Info.t array *)\ntrajectory.log_probs          (* float array option *)\ntrajectory.values             (* float array option *)\n\nlet n = Collect.length trajectory\n```\n\n### 7.2 Complete episodes\n\n**Gymnasium + manual**\n\n```python\nepisodes = []\nfor _ in range(10):\n    obs, _ = env.reset()\n    episode = []\n    done = False\n    while not done:\n        action = policy(obs)\n        next_obs, reward, terminated, truncated, info = env.step(action)\n        episode.append((obs, action, reward, next_obs, terminated, truncated))\n        obs = next_obs\n        done = terminated or truncated\n    episodes.append(episode)\n```\n\n**Fehu** -- built-in:\n\n<!-- $MDX skip -->\n```ocaml\nlet 
episode_list =\n  Collect.episodes env\n    ~policy:(fun obs -> (policy obs, None, None))\n    ~n_episodes:10\n    ~max_steps:1000\n    ()\n(* episode_list : ('obs, 'act) Collect.t list *)\n```\n\nEach element is one episode as a `Collect.t`. Concatenate them with `Collect.concat`:\n\n<!-- $MDX skip -->\n```ocaml\nlet all_transitions = Collect.concat episode_list\n```\n\n---\n\n## 8. Replay Buffers\n\n### 8.1 Standard replay buffer\n\n**Stable Baselines3**\n\n```python\nfrom stable_baselines3.common.buffers import ReplayBuffer\n\nbuffer = ReplayBuffer(buffer_size=100_000, observation_space=..., action_space=...)\nbuffer.add(obs, next_obs, action, reward, done, infos)\nbatch = buffer.sample(batch_size=256)\n```\n\n**Fehu** -- built-in:\n\n<!-- $MDX skip -->\n```ocaml\nlet buf = Buffer.create ~capacity:100_000\n\nlet () =\n  Buffer.add buf {\n    Buffer.observation = obs;\n    action;\n    reward = 1.0;\n    next_observation = next_obs;\n    terminated = false;\n    truncated = false;\n  }\n\n(* Uniform random sampling *)\nlet batch = Buffer.sample buf ~batch_size:256\n(* batch : ('obs, 'act) Buffer.transition array *)\n\n(* Structure-of-arrays form for training loops *)\nlet (observations, actions, rewards, next_observations, terminated, truncated) =\n  Buffer.sample_arrays buf ~batch_size:256\n```\n\n### 8.2 Buffer queries\n\n<!-- $MDX skip -->\n```ocaml\nlet n       = Buffer.size buf       (* current number of stored transitions *)\nlet cap     = Buffer.capacity buf   (* maximum capacity *)\nlet full    = Buffer.is_full buf    (* true when size = capacity *)\nlet ()      = Buffer.clear buf      (* remove all transitions, keep storage *)\n```\n\n---\n\n## 9. 
GAE and Returns\n\n### 9.1 Generalized Advantage Estimation\n\n**Stable Baselines3** (internal)\n\n```python\n# SB3 computes GAE internally in on-policy algorithms\n# or manually:\nimport numpy as np\n\ndef compute_gae(rewards, values, dones, next_values, gamma=0.99, lam=0.95):\n    advantages = np.zeros_like(rewards)\n    last_gae = 0\n    for t in reversed(range(len(rewards))):\n        delta = rewards[t] + gamma * next_values[t] * (1 - dones[t]) - values[t]\n        advantages[t] = last_gae = delta + gamma * lam * (1 - dones[t]) * last_gae\n    returns = advantages + values\n    return advantages, returns\n```\n\n**Fehu** -- built-in, with correct terminated/truncated handling:\n\n<!-- $MDX skip -->\n```ocaml\nlet (advantages, returns) =\n  Gae.compute\n    ~rewards:trajectory.rewards\n    ~values:(Option.get trajectory.values)\n    ~terminated:trajectory.terminated\n    ~truncated:trajectory.truncated\n    ~next_values  (* float array: V(s_{t+1}) for each t *)\n    ~gamma:0.99\n    ~lambda:0.95\n```\n\nWhen you have values from a rollout and a final bootstrap value:\n\n<!-- $MDX skip -->\n```ocaml\nlet (advantages, returns) =\n  Gae.compute_from_values\n    ~rewards:trajectory.rewards\n    ~values:(Option.get trajectory.values)\n    ~terminated:trajectory.terminated\n    ~truncated:trajectory.truncated\n    ~last_value:0.0\n    ~gamma:0.99\n    ~lambda:0.95\n```\n\n`compute_from_values` builds `next_values` from `values` and `last_value` automatically: `next_values.(t) = values.(t+1)` for `t < n-1`, and `next_values.(n-1) = last_value`.\n\n### 9.2 Monte Carlo returns\n\n**Manual Python**\n\n```python\ndef discounted_returns(rewards, dones, gamma=0.99):\n    returns = np.zeros_like(rewards)\n    running = 0.0\n    for t in reversed(range(len(rewards))):\n        running = rewards[t] + gamma * running * (1 - dones[t])\n        returns[t] = running\n    return returns\n```\n\n**Fehu**\n\n<!-- $MDX skip -->\n```ocaml\nlet mc_returns =\n  Gae.returns\n    
~rewards:trajectory.rewards\n    ~terminated:trajectory.terminated\n    ~truncated:trajectory.truncated\n    ~gamma:0.99\n```\n\n### 9.3 Normalization\n\n<!-- $MDX skip -->\n```ocaml\nlet normalized_advantages = Gae.normalize advantages\nlet normalized_custom     = Gae.normalize ~eps:1e-5 advantages\n```\n\n---\n\n## 10. Policy Evaluation\n\n**Gymnasium + Stable Baselines3**\n\n```python\nfrom stable_baselines3.common.evaluation import evaluate_policy\n\nmean_reward, std_reward = evaluate_policy(\n    model, env, n_eval_episodes=10, deterministic=True\n)\n```\n\n**Fehu** -- built-in:\n\n<!-- $MDX skip -->\n```ocaml\nlet stats =\n  Eval.run env\n    ~policy:(fun obs -> (* deterministic action *))\n    ~n_episodes:10\n    ~max_steps:1000\n    ()\n\n(* stats.mean_reward  : float *)\n(* stats.std_reward   : float *)\n(* stats.mean_length  : float *)\n(* stats.n_episodes   : int   *)\n```\n\n`Eval.run` resets the environment between episodes and collects total reward and episode length across all episodes.\n\n---\n\n## 11. Rendering\n\n### 11.1 Render modes\n\n**Gymnasium**\n\n```python\nenv = gym.make(\"CartPole-v1\", render_mode=\"human\")\nenv.reset()\nenv.step(action)\nframe = env.render()  # None for \"human\", np.ndarray for \"rgb_array\"\n```\n\n**Fehu**\n\n<!-- $MDX skip -->\n```ocaml\nlet env =\n  Env.create\n    ~render_mode:`Human\n    ~render_modes:[\"human\"; \"rgb_array\"]\n    ~render:(fun () -> (* return 'render option *))\n    (* ... 
*)\n    ()\n\nlet frame = Env.render env  (* 'render option *)\n```\n\nRender modes are polymorphic variants: `` `Human ``, `` `Rgb_array ``, `` `Ansi ``, `` `Svg ``, `` `Custom of string ``.\n\n### 11.2 Frame type\n\nFor `Rgb_array` environments, Fehu uses `Render.image`:\n\n<!-- $MDX skip -->\n```ocaml\n(* Render.image fields: *)\n(* width        : int                              *)\n(* height       : int                              *)\n(* pixel_format : Render.Pixel.format (Rgb|Rgba|Gray) *)\n(* data         : uint8 bigarray                   *)\n```\n\n### 11.3 Recording rendered rollouts\n\n**Gymnasium**\n\n```python\nfrom gymnasium.wrappers import RecordVideo\nenv = RecordVideo(env, video_folder=\"./videos\")\n```\n\n**Fehu** -- use `Render.rollout` or `Render.on_render`:\n\n<!-- $MDX skip -->\n```ocaml\n(* Run a policy and feed frames to a sink *)\nRender.rollout env\n  ~policy:(fun obs -> (* action *))\n  ~steps:500\n  ~sink:(fun frame -> (* save or display frame *))\n  ()\n\n(* Or wrap the env to capture every rendered frame *)\nlet recording_env =\n  Render.on_render\n    ~sink:(fun frame -> (* process frame *))\n    env\n```\n\n---\n\n## 12. Info Dictionaries\n\n**Gymnasium** uses plain Python dicts for info:\n\n```python\nobs, info = env.reset()\nprint(info.get(\"elapsed_steps\", 0))\n```\n\n**Fehu** uses typed `Info.t` dictionaries with `Value.t` values:\n\n<!-- $MDX skip -->\n```ocaml\nlet info = Info.of_list [\n  (\"elapsed_steps\", Info.int 42);\n  (\"success\", Info.bool true);\n]\n\nlet steps = Info.find \"elapsed_steps\" info  (* Value.t option *)\nlet steps = Info.find_exn \"elapsed_steps\" info  (* Value.t, raises on missing *)\n\nlet info' = Info.set \"custom_key\" (Info.float 3.14) info\nlet info' = Info.merge info1 info2  (* info2 wins on conflicts *)\nlet is_empty = Info.is_empty info\n```\n\n---\n\n## 13. 
Quick Cheat Sheet\n\n| Task                 | Gymnasium / SB3                                   | Fehu                                                                              |\n| -------------------- | ------------------------------------------------- | --------------------------------------------------------------------------------- |\n| Create env           | `gym.make(\"CartPole-v1\")`                         | `Env.create ~observation_space ~action_space ~reset ~step ()`                     |\n| Reset                | `obs, info = env.reset(seed=42)`                  | `let (obs, info) = Env.reset env ()`                                              |\n| Step                 | `obs, r, term, trunc, info = env.step(a)`         | `let s = Env.step env a` (record fields)                                          |\n| Close                | `env.close()`                                     | `Env.close env`                                                                   |\n| Discrete space       | `gym.spaces.Discrete(5)`                          | `Space.Discrete.create 5`                                                         |\n| Box space            | `gym.spaces.Box(low, high)`                       | `Space.Box.create ~low ~high`                                                     |\n| Sample from space    | `space.sample()`                                  | `Space.sample space`                                                              |\n| Contains check       | `space.contains(x)`                               | `Space.contains space x`                                                          |\n| Observation wrapper  | `class W(gym.ObservationWrapper)`                 | `Env.map_observation ~observation_space ~f env`                                   |\n| Action wrapper       | `class W(gym.ActionWrapper)`                      | `Env.map_action ~action_space ~f env`                                             |\n| Reward wrapper       | `class 
W(gym.RewardWrapper)`                      | `Env.map_reward ~f env`                                                           |\n| Clip actions         | `ClipAction(env)`                                 | `Env.clip_action env`                                                             |\n| Time limit           | `TimeLimit(env, max_episode_steps=N)`             | `Env.time_limit ~max_episode_steps:N env`                                         |\n| Vectorize            | `gym.vector.SyncVectorEnv([...])`                 | `Vec_env.create [env1; env2; ...]`                                                |\n| Rollout N steps      | Manual loop / SB3 internal                        | `Collect.rollout env ~policy ~n_steps`                                            |\n| Collect N episodes   | Manual loop                                       | `Collect.episodes env ~policy ~n_episodes ()`                                     |\n| Replay buffer        | `ReplayBuffer(buffer_size=N, ...)`                | `Buffer.create ~capacity:N`                                                       |\n| Add to buffer        | `buffer.add(obs, next_obs, ...)`                  | `Buffer.add buf transition`                                                       |\n| Sample from buffer   | `buffer.sample(batch_size=B)`                     | `Buffer.sample buf ~batch_size:B`                                                 |\n| GAE                  | SB3 internal / manual                             | `Gae.compute ~rewards ~values ~terminated ~truncated ~next_values ~gamma ~lambda` |\n| Discounted returns   | Manual loop                                       | `Gae.returns ~rewards ~terminated ~truncated ~gamma`                              |\n| Normalize advantages | `(adv - mean) / std`                              | `Gae.normalize advantages`                                                        |\n| Evaluate policy      | `evaluate_policy(model, env, n_eval_episodes=10)` | `Eval.run 
env ~policy ~n_episodes:10 ()`                                          |\n| Render               | `env.render()`                                    | `Env.render env`                                                                  |\n| Record frames        | `RecordVideo(env, ...)`                           | `Render.on_render ~sink env`                                                      |\n| Seed RNG             | `env.reset(seed=42)`                              | `Nx.Rng.run ~seed:42 (fun () -> ...)`                                             |\n"
  },
  {
    "path": "packages/fehu/doc/dune",
    "content": "(mdx\n (files *.md)\n (package fehu)\n (libraries fehu fehu.envs nx))\n"
  },
  {
    "path": "packages/fehu/doc/index.md",
    "content": "# Fehu\n\nFehu is a reinforcement learning environment toolkit for OCaml. It provides\ntype-safe environments, composable wrappers, trajectory collection, replay\nbuffers, GAE computation, policy evaluation, and vectorized environments.\n\nFehu follows the Gymnasium interface pattern: environments expose `reset` and\n`step` with typed observation and action spaces. Wrappers compose freely.\nCollection and evaluation utilities handle the plumbing between environments\nand training loops.\n\n## Features\n\n- **Type-safe environments**: observation and action spaces are encoded in the type system\n- **Rich space types**: Discrete, Box, Multi_binary, Multi_discrete, Tuple, Dict, Sequence, Text\n- **Composable wrappers**: map_observation, map_action, map_reward, clip_action, clip_observation, time_limit\n- **Trajectory collection**: rollout and episode collection in structure-of-arrays form\n- **Replay buffers**: fixed-capacity circular buffer with uniform random sampling\n- **GAE**: generalized advantage estimation with proper terminated/truncated handling\n- **Policy evaluation**: run a policy over episodes and get mean/std reward statistics\n- **Vectorized environments**: run multiple environments with batched step and auto-reset\n- **Built-in environments**: CartPole, MountainCar, GridWorld, RandomWalk\n\n## Quick Start\n\nCreate an environment, run a random agent, and evaluate:\n\n```ocaml\nopen Fehu\n\nlet () = Nx.Rng.run ~seed:42 @@ fun () ->\n  let env = Fehu_envs.Cartpole.make () in\n\n  (* Run one episode *)\n  let _obs, _info = Env.reset env () in\n  let done_ = ref false in\n  let total_reward = ref 0.0 in\n  while not !done_ do\n    let act = Space.sample (Env.action_space env) in\n    let s = Env.step env act in\n    total_reward := !total_reward +. 
s.reward;\n    done_ := s.terminated || s.truncated\n  done;\n\n  (* Evaluate over 10 episodes *)\n  let _stats = Eval.run env\n    ~policy:(fun _obs -> Space.sample (Env.action_space env))\n    ~n_episodes:10 ()\n  in ()\n```\n\n## Next Steps\n\n- [Getting Started](01-getting-started/) -- installation, environments, spaces, step loop\n- [Environments and Wrappers](02-environments/) -- custom environments, wrappers, rendering, vectorized environments\n- [Collection and Evaluation](03-collection-and-evaluation/) -- trajectory collection, replay buffers, GAE, evaluation\n"
  },
  {
    "path": "packages/fehu/examples/01-random-agent/dune",
    "content": "(executable\n (name main)\n (libraries nx rune fehu fehu.envs))\n"
  },
  {
    "path": "packages/fehu/examples/01-random-agent/main.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* A random agent on CartPole-v1.\n\n   Demonstrates the Env lifecycle: create, reset, step, render, close. Then uses\n   Eval.run for batch evaluation. *)\n\nopen Fehu\n\nlet () =\n  Nx.Rng.run ~seed:42 @@ fun () ->\n  Printf.printf \"Random Agent on CartPole-v1\\n\";\n  Printf.printf \"===========================\\n\\n\";\n\n  let env = Fehu_envs.Cartpole.make ~render_mode:`Ansi () in\n\n  (* -- Manual episode loop ------------------------------------------------ *)\n  Printf.printf \"Running 5 episodes with random actions...\\n\\n\";\n\n  for episode = 1 to 5 do\n    let obs = ref (fst (Env.reset env ())) in\n    let total_reward = ref 0.0 in\n    let steps = ref 0 in\n    let done_ = ref false in\n    while not !done_ do\n      (* Show the first step of episode 1 *)\n      (if episode = 1 && !steps = 0 then\n         match Env.render env with\n         | Some text -> Printf.printf \"%s\\n\" text\n         | None -> ());\n      let action = Space.sample (Env.action_space env) in\n      let s = Env.step env action in\n      total_reward := !total_reward +. 
s.reward;\n      incr steps;\n      obs := s.observation;\n      done_ := s.terminated || s.truncated\n    done;\n    Printf.printf \"  Episode %d:  reward = %5.1f  length = %3d\\n\" episode\n      !total_reward !steps\n  done;\n\n  (* -- Batch evaluation with Eval.run ------------------------------------ *)\n  Printf.printf \"\\nEvaluating over 100 episodes...\\n\\n\";\n\n  let random_policy _obs = Space.sample (Env.action_space env) in\n  let stats = Eval.run env ~policy:random_policy ~n_episodes:100 () in\n  Printf.printf \"  mean reward: %6.2f +/- %.2f\\n\" stats.mean_reward\n    stats.std_reward;\n  Printf.printf \"  mean length: %6.1f\\n\" stats.mean_length;\n\n  Env.close env;\n  Printf.printf \"\\nDone.\\n\"\n"
  },
  {
    "path": "packages/fehu/examples/02-q-learning/dune",
    "content": "(executable\n (name main)\n (libraries nx rune fehu fehu.envs))\n"
  },
  {
    "path": "packages/fehu/examples/02-q-learning/main.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Tabular Q-learning on CartPole-v1.\n\n   Discretizes the continuous 4D observation into bins, learns a Q-table with\n   epsilon-greedy exploration and temporal difference updates. Uses Eval.run for\n   periodic evaluation. *)\n\nopen Fehu\n\n(* Hyperparameters *)\n\nlet n_bins = 12\nlet n_actions = 2\nlet alpha = 0.1\nlet gamma = 0.99\nlet epsilon_start = 1.0\nlet epsilon_end = 0.01\nlet epsilon_decay = 2000.0\nlet n_episodes = 10_000\nlet eval_interval = 500\n\n(* Sparkline *)\n\nlet sparkline values =\n  let blocks =\n    [|\n      \"\\xe2\\x96\\x81\";\n      \"\\xe2\\x96\\x82\";\n      \"\\xe2\\x96\\x83\";\n      \"\\xe2\\x96\\x84\";\n      \"\\xe2\\x96\\x85\";\n      \"\\xe2\\x96\\x86\";\n      \"\\xe2\\x96\\x87\";\n      \"\\xe2\\x96\\x88\";\n    |]\n  in\n  let lo = Array.fold_left Float.min Float.infinity values in\n  let hi = Array.fold_left Float.max Float.neg_infinity values in\n  let range = hi -. lo in\n  if range < 1e-9 then\n    String.concat \"\" (Array.to_list (Array.map (fun _ -> blocks.(4)) values))\n  else\n    String.concat \"\"\n      (Array.to_list\n         (Array.map\n            (fun v ->\n              let idx = Float.to_int ((v -. lo) /. range *. 7.0) in\n              blocks.(max 0 (min 7 idx)))\n            values))\n\n(* Q-table *)\n\nlet n_states = n_bins * n_bins * n_bins * n_bins\nlet q = Array.make (n_states * n_actions) 0.0\nlet q_get s a = q.((s * n_actions) + a)\nlet q_set s a v = q.((s * n_actions) + a) <- v\n\n(* Discretize: clip each of the 4 obs dimensions into bins. CartPole obs: [x,\n   x_dot, theta, theta_dot] We use generous clip ranges that cover typical\n   CartPole trajectories. 
*)\n\nlet clip_ranges = [| (-2.4, 2.4); (-3.0, 3.0); (-0.21, 0.21); (-3.0, 3.0) |]\n\nlet discretize obs =\n  let arr = (Nx.to_array obs : float array) in\n  let bin i =\n    let lo, hi = clip_ranges.(i) in\n    let v = Float.max lo (Float.min hi arr.(i)) in\n    let normalized = (v -. lo) /. (hi -. lo) in\n    Float.to_int (normalized *. Float.of_int (n_bins - 1))\n    |> max 0\n    |> min (n_bins - 1)\n  in\n  let b0 = bin 0 in\n  let b1 = bin 1 in\n  let b2 = bin 2 in\n  let b3 = bin 3 in\n  (b0 * n_bins * n_bins * n_bins) + (b1 * n_bins * n_bins) + (b2 * n_bins) + b3\n\nlet best_action s = if q_get s 0 >= q_get s 1 then 0 else 1\n\n(* Training *)\n\nlet () =\n  Printf.printf \"Q-Learning on CartPole-v1\\n\";\n  Printf.printf \"==========================\\n\\n\";\n  Printf.printf \"States: %d bins/dim (%d total), Actions: left/right\\n\" n_bins\n    n_states;\n  Printf.printf \"alpha = %.2f, gamma = %.2f, episodes = %d\\n\\n\" alpha gamma\n    n_episodes;\n\n  Nx.Rng.run ~seed:42 @@ fun () ->\n  let sample_uniform () =\n    let t = Nx.rand Nx.float32 [| 1 |] in\n    (Nx.to_array t : float array).(0)\n  in\n\n  let sample_random_action () =\n    let t = Nx.randint Nx.int32 ~high:n_actions [| 1 |] 0 in\n    Int32.to_int (Nx.to_array t : Int32.t array).(0)\n  in\n\n  let env = Fehu_envs.Cartpole.make () in\n\n  let n_evals = n_episodes / eval_interval in\n  let reward_history = Array.make n_evals 0.0 in\n  let eval_idx = ref 0 in\n\n  Printf.printf \"Training...\\n\\n\";\n\n  for episode = 1 to n_episodes do\n    let epsilon =\n      epsilon_end\n      +. (epsilon_start -. epsilon_end)\n         *. exp (-.Float.of_int episode /. 
epsilon_decay)\n    in\n\n    let obs, _info = Env.reset env () in\n    let state = ref (discretize obs) in\n    let done_ = ref false in\n\n    while not !done_ do\n      let a =\n        if sample_uniform () < epsilon then sample_random_action ()\n        else best_action !state\n      in\n      let s = Env.step env (Space.Discrete.of_int a) in\n      let next_state = discretize s.observation in\n      let done_flag = s.terminated || s.truncated in\n\n      let bootstrap =\n        if done_flag then 0.0\n        else Float.max (q_get next_state 0) (q_get next_state 1)\n      in\n      let target = s.reward +. (gamma *. bootstrap) in\n      let old_q = q_get !state a in\n      q_set !state a (old_q +. (alpha *. (target -. old_q)));\n\n      state := next_state;\n      done_ := done_flag\n    done;\n\n    if episode mod eval_interval = 0 then begin\n      let greedy_policy obs =\n        Space.Discrete.of_int (best_action (discretize obs))\n      in\n      let stats = Eval.run env ~policy:greedy_policy ~n_episodes:20 () in\n      Printf.printf\n        \"  episode %5d  eps = %.2f  eval: reward = %5.1f +/- %4.1f\\n%!\" episode\n        epsilon stats.mean_reward stats.std_reward;\n      reward_history.(!eval_idx) <- stats.mean_reward;\n      incr eval_idx\n    end\n  done;\n\n  Printf.printf \"\\n  reward: %s\\n\" (sparkline reward_history);\n\n  (* Final evaluation *)\n  Printf.printf \"\\nFinal evaluation (100 episodes):\\n\";\n  let greedy_policy obs =\n    Space.Discrete.of_int (best_action (discretize obs))\n  in\n  let stats = Eval.run env ~policy:greedy_policy ~n_episodes:100 () in\n  Printf.printf \"  mean reward: %5.1f +/- %.1f\\n\" stats.mean_reward\n    stats.std_reward;\n  Printf.printf \"  mean length: %5.1f\\n\" stats.mean_length;\n\n  if stats.mean_reward >= 195.0 then\n    Printf.printf \"\\nSolved! (mean reward >= 195)\\n\"\n  else Printf.printf \"\\nNot solved yet (mean reward < 195).\\n\";\n\n  Env.close env\n"
  },
  {
    "path": "packages/fehu/examples/03-reinforce/dune",
    "content": "(executable\n (name main)\n (libraries nx rune kaun vega fehu fehu.envs))\n"
  },
  {
    "path": "packages/fehu/examples/03-reinforce/main.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* REINFORCE on CartPole-v1.\n\n   Policy gradient with a small neural network. Collects rollouts, computes\n   discounted returns, and updates the policy by maximizing the expected return\n   weighted by log-probabilities. *)\n\nopen Fehu\nopen Kaun\n\n(* Hyperparameters *)\n\nlet gamma = 0.99\nlet lr = 1e-3\nlet n_steps = 2048\nlet n_updates = 250\nlet eval_interval = 10\nlet eval_episodes = 20\n\n(* Sparkline *)\n\nlet sparkline values =\n  let blocks =\n    [|\n      \"\\xe2\\x96\\x81\";\n      \"\\xe2\\x96\\x82\";\n      \"\\xe2\\x96\\x83\";\n      \"\\xe2\\x96\\x84\";\n      \"\\xe2\\x96\\x85\";\n      \"\\xe2\\x96\\x86\";\n      \"\\xe2\\x96\\x87\";\n      \"\\xe2\\x96\\x88\";\n    |]\n  in\n  let lo = Array.fold_left Float.min Float.infinity values in\n  let hi = Array.fold_left Float.max Float.neg_infinity values in\n  let range = hi -. lo in\n  if range < 1e-9 then\n    String.concat \"\" (Array.to_list (Array.map (fun _ -> blocks.(4)) values))\n  else\n    String.concat \"\"\n      (Array.to_list\n         (Array.map\n            (fun v ->\n              let idx = Float.to_int ((v -. lo) /. range *. 
7.0) in\n              blocks.(max 0 (min 7 idx)))\n            values))\n\n(* Network *)\n\nlet network =\n  Layer.sequential\n    [\n      Layer.linear ~in_features:4 ~out_features:64 ();\n      Layer.relu ();\n      Layer.linear ~in_features:64 ~out_features:2 ();\n    ]\n\n(* Forward pass: obs [batch; 4] -> logits [batch; 2] *)\n\nlet forward params net_state obs =\n  let vars = Layer.make_vars ~params ~state:net_state ~dtype:Nx.float32 in\n  fst (Layer.apply network vars ~training:false obs)\n\n(* Main *)\n\nlet () =\n  Printf.printf \"REINFORCE on CartPole-v1\\n\";\n  Printf.printf \"=========================\\n\\n\";\n  Printf.printf \"Network: Linear(4 -> 64) -> ReLU -> Linear(64 -> 2)\\n\";\n  Printf.printf \"Rollout: %d steps/update, gamma = %.2f, lr = %.4f\\n\\n\" n_steps\n    gamma lr;\n\n  Nx.Rng.run ~seed:42 @@ fun () ->\n  let env = Fehu_envs.Cartpole.make () in\n\n  (* Initialize network *)\n  let vars = Layer.init network ~dtype:Nx.float32 in\n  let params = ref (Layer.params vars) in\n  let net_state = Layer.state vars in\n\n  Printf.printf \"Parameters: %d\\n\\n\" (Ptree.count_parameters !params);\n\n  (* Optimizer *)\n  let algo = Vega.adam (Vega.Schedule.constant lr) in\n  let opt_state = ref (Optim.init algo !params) in\n\n  let policy obs =\n    let obs_batch = Nx.reshape [| 1; 4 |] obs in\n    let logits = Rune.no_grad (fun () -> forward !params net_state obs_batch) in\n    let action_idx = Nx.categorical logits in\n    let action = Nx.reshape [||] action_idx in\n    let log_probs = Nx.log_softmax logits in\n    let action_1 = Nx.reshape [| 1; 1 |] action_idx in\n    let log_prob = Nx.take_along_axis ~axis:1 action_1 log_probs in\n    let lp = Nx.item [ 0; 0 ] log_prob in\n    (action, Some lp, None)\n  in\n\n  (* Greedy policy for evaluation *)\n  let greedy_policy obs =\n    let obs_batch = Nx.reshape [| 1; 4 |] obs in\n    let logits = Rune.no_grad (fun () -> forward !params net_state obs_batch) in\n    let action_idx =\n      Nx.argmax 
logits ~axis:(-1) ~keepdims:false |> Nx.cast Nx.int32\n    in\n    Nx.reshape [||] action_idx\n  in\n\n  (* Training loop *)\n  Printf.printf \"Training...\\n\\n\";\n\n  let n_evals = n_updates / eval_interval in\n  let reward_history = Array.make n_evals 0.0 in\n  let eval_idx = ref 0 in\n\n  for update = 1 to n_updates do\n    (* Collect rollout *)\n    let traj = Collect.rollout env ~policy ~n_steps in\n    let n = Collect.length traj in\n\n    (* Compute discounted returns and normalize *)\n    let returns =\n      Gae.returns ~rewards:traj.rewards ~terminated:traj.terminated\n        ~truncated:traj.truncated ~gamma\n    in\n    let returns = Gae.normalize returns in\n\n    (* Stack observations and actions into batch tensors *)\n    let obs_batch = Nx.stack (Array.to_list traj.observations) in\n    let actions_batch =\n      Nx.stack\n        (Array.to_list (Array.map (fun a -> Nx.reshape [| 1 |] a) traj.actions))\n    in\n    let returns_t = Nx.create Nx.float32 [| n |] returns in\n\n    (* Policy gradient loss *)\n    let loss_fn p =\n      let logits = forward p net_state obs_batch in\n      let log_probs = Nx.log_softmax logits in\n      let action_log_probs =\n        Nx.take_along_axis ~axis:1 actions_batch log_probs\n      in\n      let action_log_probs = Nx.reshape [| n |] action_log_probs in\n      let weighted = Nx.mul action_log_probs returns_t in\n      Nx.neg (Nx.mean weighted)\n    in\n\n    let loss, grads = Grad.value_and_grad loss_fn !params in\n    let new_params, new_opt_state = Optim.update !opt_state !params grads in\n    params := new_params;\n    opt_state := new_opt_state;\n\n    (* Evaluate periodically *)\n    if update mod eval_interval = 0 then begin\n      let stats =\n        Eval.run env ~policy:greedy_policy ~n_episodes:eval_episodes ()\n      in\n      Printf.printf\n        \"  update %3d  loss = %6.3f  eval: reward = %5.1f +/- %4.1f\\n%!\" update\n        (Nx.item [] loss) stats.mean_reward stats.std_reward;\n      
reward_history.(!eval_idx) <- stats.mean_reward;\n      incr eval_idx\n    end\n  done;\n\n  Printf.printf \"\\n  reward: %s\\n\" (sparkline reward_history);\n\n  (* Final evaluation *)\n  Printf.printf \"\\nFinal evaluation (%d episodes):\\n\" 50;\n  let stats = Eval.run env ~policy:greedy_policy ~n_episodes:50 () in\n  Printf.printf \"  mean reward: %5.1f +/- %.1f\\n\" stats.mean_reward\n    stats.std_reward;\n  Printf.printf \"  mean length: %5.1f\\n\" stats.mean_length;\n\n  if stats.mean_reward >= 475.0 then\n    Printf.printf \"\\nSolved! (mean reward >= 475)\\n\"\n  else Printf.printf \"\\nNot solved yet (mean reward < 475).\\n\";\n\n  Env.close env\n"
  },
  {
    "path": "packages/fehu/examples/04-dqn/dune",
    "content": "(executable\n (name main)\n (libraries nx rune kaun vega fehu fehu.envs))\n"
  },
  {
    "path": "packages/fehu/examples/04-dqn/main.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* DQN on CartPole-v1.\n\n   Deep Q-Network with experience replay and a target network. Epsilon-greedy\n   exploration decays linearly. The target network is hard-copied every\n   target_update_interval steps. *)\n\nopen Fehu\nopen Kaun\n\n(* Hyperparameters *)\n\nlet buffer_capacity = 50_000\nlet batch_size = 64\nlet gamma = 0.99\nlet lr = 5e-4\nlet epsilon_start = 1.0\nlet epsilon_end = 0.05\nlet epsilon_decay_steps = 10_000\nlet target_update_interval = 250\nlet learning_starts = 1000\nlet n_steps = 50_000\nlet eval_interval = 2000\nlet eval_episodes = 20\n\n(* Sparkline *)\n\nlet sparkline values =\n  let blocks =\n    [|\n      \"\\xe2\\x96\\x81\";\n      \"\\xe2\\x96\\x82\";\n      \"\\xe2\\x96\\x83\";\n      \"\\xe2\\x96\\x84\";\n      \"\\xe2\\x96\\x85\";\n      \"\\xe2\\x96\\x86\";\n      \"\\xe2\\x96\\x87\";\n      \"\\xe2\\x96\\x88\";\n    |]\n  in\n  let lo = Array.fold_left Float.min Float.infinity values in\n  let hi = Array.fold_left Float.max Float.neg_infinity values in\n  let range = hi -. lo in\n  if range < 1e-9 then\n    String.concat \"\" (Array.to_list (Array.map (fun _ -> blocks.(4)) values))\n  else\n    String.concat \"\"\n      (Array.to_list\n         (Array.map\n            (fun v ->\n              let idx = Float.to_int ((v -. lo) /. range *. 
7.0) in\n              blocks.(max 0 (min 7 idx)))\n            values))\n\n(* Network *)\n\nlet q_network =\n  Layer.sequential\n    [\n      Layer.linear ~in_features:4 ~out_features:128 ();\n      Layer.relu ();\n      Layer.linear ~in_features:128 ~out_features:128 ();\n      Layer.relu ();\n      Layer.linear ~in_features:128 ~out_features:2 ();\n    ]\n\n(* Forward pass: obs [batch; 4] -> q_values [batch; 2] *)\n\nlet forward params net_state obs =\n  let vars = Layer.make_vars ~params ~state:net_state ~dtype:Nx.float32 in\n  fst (Layer.apply q_network vars ~training:false obs)\n\n(* Epsilon schedule: linear decay *)\n\nlet epsilon step =\n  let t =\n    Float.min 1.0 (Float.of_int step /. Float.of_int epsilon_decay_steps)\n  in\n  epsilon_start +. (t *. (epsilon_end -. epsilon_start))\n\n(* Copy parameters for the target network *)\n\nlet copy_params params = Ptree.map { run = (fun t -> Nx.copy t) } params\n\n(* Main *)\n\nlet () =\n  Printf.printf \"DQN on CartPole-v1\\n\";\n  Printf.printf \"===================\\n\\n\";\n  Printf.printf\n    \"Network: Linear(4 -> 128) -> ReLU -> Linear(128 -> 128) -> ReLU -> \\\n     Linear(128 -> 2)\\n\";\n  Printf.printf \"Buffer: %d, batch: %d, gamma = %.2f, lr = %.4f\\n\"\n    buffer_capacity batch_size gamma lr;\n  Printf.printf\n    \"Epsilon: %.2f -> %.2f over %d steps, target update every %d steps\\n\\n\"\n    epsilon_start epsilon_end epsilon_decay_steps target_update_interval;\n\n  Nx.Rng.run ~seed:42 @@ fun () ->\n  let env = Fehu_envs.Cartpole.make () in\n\n  (* Initialize network *)\n  let vars = Layer.init q_network ~dtype:Nx.float32 in\n  let params = ref (Layer.params vars) in\n  let net_state = Layer.state vars in\n  let target_params = ref (copy_params !params) in\n\n  Printf.printf \"Parameters: %d\\n\\n\" (Ptree.count_parameters !params);\n\n  (* Optimizer *)\n  let algo = Vega.adam (Vega.Schedule.constant lr) in\n  let opt_state = ref (Optim.init algo !params) in\n\n  (* Replay buffer *)\n  let buffer 
= Buffer.create ~capacity:buffer_capacity in\n\n  let sample_uniform () =\n    let t = Nx.rand Nx.float32 [| 1 |] in\n    (Nx.to_array t : float array).(0)\n  in\n\n  (* Epsilon-greedy action selection *)\n  let select_action obs eps =\n    if sample_uniform () < eps then Space.sample (Env.action_space env)\n    else begin\n      let obs_batch = Nx.reshape [| 1; 4 |] obs in\n      let q_values =\n        Rune.no_grad (fun () -> forward !params net_state obs_batch)\n      in\n      let action_idx =\n        Nx.argmax q_values ~axis:(-1) ~keepdims:false |> Nx.cast Nx.int32\n      in\n      Nx.reshape [||] action_idx\n    end\n  in\n\n  (* Greedy policy for evaluation *)\n  let greedy_policy obs =\n    let obs_batch = Nx.reshape [| 1; 4 |] obs in\n    let q_values =\n      Rune.no_grad (fun () -> forward !params net_state obs_batch)\n    in\n    let action_idx =\n      Nx.argmax q_values ~axis:(-1) ~keepdims:false |> Nx.cast Nx.int32\n    in\n    Nx.reshape [||] action_idx\n  in\n\n  (* Training step *)\n  let train_step () =\n    let obs_arr, act_arr, rew_arr, next_obs_arr, term_arr, trunc_arr =\n      Buffer.sample_arrays buffer ~batch_size\n    in\n    let n = Array.length obs_arr in\n\n    (* Stack into batch tensors *)\n    let obs_batch = Nx.stack (Array.to_list obs_arr) in\n    let next_obs_batch = Nx.stack (Array.to_list next_obs_arr) in\n    let actions_batch =\n      Nx.stack\n        (Array.to_list (Array.map (fun a -> Nx.reshape [| 1 |] a) act_arr))\n    in\n    let rewards_t = Nx.create Nx.float32 [| n |] rew_arr in\n\n    (* Done mask: 1.0 if not done, 0.0 if done *)\n    let done_mask =\n      Array.init n (fun i -> if term_arr.(i) || trunc_arr.(i) then 0.0 else 1.0)\n    in\n    let done_mask_t = Nx.create Nx.float32 [| n |] done_mask in\n\n    (* Compute TD target with target network (no gradient) *)\n    let td_target =\n      Rune.no_grad (fun () ->\n          let target_q = forward !target_params net_state next_obs_batch in\n          let max_q = 
Nx.max target_q ~axes:[ 1 ] ~keepdims:false in\n          Nx.add rewards_t\n            (Nx.mul (Nx.scalar Nx.float32 gamma) (Nx.mul max_q done_mask_t)))\n    in\n    let td_target = Rune.detach td_target in\n\n    (* Loss: MSE between predicted Q and TD target *)\n    let loss_fn p =\n      let q_values = forward p net_state obs_batch in\n      let q_selected = Nx.take_along_axis ~axis:1 actions_batch q_values in\n      let q_selected = Nx.reshape [| n |] q_selected in\n      let diff = Nx.sub q_selected td_target in\n      Nx.mean (Nx.mul diff diff)\n    in\n\n    let loss, grads = Grad.value_and_grad loss_fn !params in\n    let new_params, new_opt_state = Optim.update !opt_state !params grads in\n    params := new_params;\n    opt_state := new_opt_state;\n    Nx.item [] loss\n  in\n\n  (* Main training loop *)\n  Printf.printf \"Filling buffer (%d steps)...\\n\\n\" learning_starts;\n\n  let obs = ref (fst (Env.reset env ())) in\n  let last_loss = ref 0.0 in\n\n  let n_evals = n_steps / eval_interval in\n  let reward_history = Array.make n_evals 0.0 in\n  let eval_idx = ref 0 in\n\n  Printf.printf \"Training...\\n\\n\";\n\n  for step = 1 to n_steps do\n    let eps = epsilon step in\n    let action = select_action !obs eps in\n    let s = Env.step env action in\n\n    Buffer.add buffer\n      {\n        observation = !obs;\n        action;\n        reward = s.reward;\n        next_observation = s.observation;\n        terminated = s.terminated;\n        truncated = s.truncated;\n      };\n\n    if s.terminated || s.truncated then obs := fst (Env.reset env ())\n    else obs := s.observation;\n\n    (* Train *)\n    if step >= learning_starts then begin\n      last_loss := train_step ();\n\n      (* Update target network *)\n      if step mod target_update_interval = 0 then\n        target_params := copy_params !params\n    end;\n\n    (* Evaluate periodically *)\n    if step mod eval_interval = 0 then begin\n      let stats =\n        Eval.run env 
~policy:greedy_policy ~n_episodes:eval_episodes ()\n      in\n      Printf.printf\n        \"  step %5d  epsilon = %.2f  loss = %6.4f  eval: reward = %5.1f +/- \\\n         %4.1f\\n\\\n         %!\"\n        step eps !last_loss stats.mean_reward stats.std_reward;\n      reward_history.(!eval_idx) <- stats.mean_reward;\n      incr eval_idx;\n      (* Eval.run leaves the env in a done state; reset for training *)\n      obs := fst (Env.reset env ())\n    end\n  done;\n\n  Printf.printf \"\\n  reward: %s\\n\" (sparkline reward_history);\n\n  (* Final evaluation *)\n  Printf.printf \"\\nFinal evaluation (%d episodes):\\n\" 50;\n  let stats = Eval.run env ~policy:greedy_policy ~n_episodes:50 () in\n  Printf.printf \"  mean reward: %5.1f +/- %.1f\\n\" stats.mean_reward\n    stats.std_reward;\n  Printf.printf \"  mean length: %5.1f\\n\" stats.mean_length;\n\n  if stats.mean_reward >= 475.0 then\n    Printf.printf \"\\nSolved! (mean reward >= 475)\\n\"\n  else Printf.printf \"\\nNot solved yet (mean reward < 475).\\n\";\n\n  Env.close env\n"
  },
  {
    "path": "packages/fehu/lib/buffer.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nlet err_capacity = \"Buffer.create: capacity must be positive\"\nlet err_empty = \"Buffer.sample: buffer is empty\"\nlet err_batch_size = \"Buffer.sample: batch_size must be positive\"\n\ntype ('obs, 'act) transition = {\n  observation : 'obs;\n  action : 'act;\n  reward : float;\n  next_observation : 'obs;\n  terminated : bool;\n  truncated : bool;\n}\n\ntype ('obs, 'act) t = {\n  capacity : int;\n  mutable size : int;\n  mutable pos : int;\n  mutable observations : 'obs array;\n  mutable actions : 'act array;\n  rewards : float array;\n  mutable next_observations : 'obs array;\n  terminateds : bool array;\n  truncateds : bool array;\n}\n\n(* Constructor *)\n\nlet create ~capacity =\n  if capacity <= 0 then invalid_arg err_capacity;\n  {\n    capacity;\n    size = 0;\n    pos = 0;\n    observations = [||];\n    actions = [||];\n    rewards = Array.make capacity 0.0;\n    next_observations = [||];\n    terminateds = Array.make capacity false;\n    truncateds = Array.make capacity false;\n  }\n\n(* Mutating *)\n\nlet ensure_init buf (tr : _ transition) =\n  if Array.length buf.observations = 0 then begin\n    buf.observations <- Array.make buf.capacity tr.observation;\n    buf.actions <- Array.make buf.capacity tr.action;\n    buf.next_observations <- Array.make buf.capacity tr.next_observation\n  end\n\nlet add buf tr =\n  ensure_init buf tr;\n  buf.observations.(buf.pos) <- tr.observation;\n  buf.actions.(buf.pos) <- tr.action;\n  buf.rewards.(buf.pos) <- tr.reward;\n  buf.next_observations.(buf.pos) <- tr.next_observation;\n  buf.terminateds.(buf.pos) <- tr.terminated;\n  buf.truncateds.(buf.pos) <- tr.truncated;\n  buf.pos <- (buf.pos + 1) mod buf.capacity;\n  if buf.size < buf.capacity then 
buf.size <- buf.size + 1\n\nlet clear buf =\n  buf.size <- 0;\n  buf.pos <- 0\n\n(* Sampling *)\n\nlet sample_indices buf ~batch_size =\n  if buf.size = 0 then invalid_arg err_empty;\n  if batch_size <= 0 then invalid_arg err_batch_size;\n  let n = min batch_size buf.size in\n  let raw = Nx.randint Nx.int32 ~high:buf.size [| n |] 0 in\n  let idx : Int32.t array = Nx.to_array raw in\n  (idx, n)\n\nlet sample buf ~batch_size =\n  let idx, n = sample_indices buf ~batch_size in\n  Array.init n (fun i ->\n      let j = Int32.to_int idx.(i) in\n      {\n        observation = buf.observations.(j);\n        action = buf.actions.(j);\n        reward = buf.rewards.(j);\n        next_observation = buf.next_observations.(j);\n        terminated = buf.terminateds.(j);\n        truncated = buf.truncateds.(j);\n      })\n\nlet sample_arrays buf ~batch_size =\n  let idx, n = sample_indices buf ~batch_size in\n  let get arr i = arr.(Int32.to_int idx.(i)) in\n  let observations = Array.init n (get buf.observations) in\n  let actions = Array.init n (get buf.actions) in\n  let rewards = Array.init n (get buf.rewards) in\n  let next_observations = Array.init n (get buf.next_observations) in\n  let terminated = Array.init n (get buf.terminateds) in\n  let truncated = Array.init n (get buf.truncateds) in\n  (observations, actions, rewards, next_observations, terminated, truncated)\n\n(* Queries *)\n\nlet size buf = buf.size\nlet is_full buf = buf.size = buf.capacity\nlet capacity buf = buf.capacity\n"
  },
  {
    "path": "packages/fehu/lib/buffer.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Replay buffer for off-policy experience storage.\n\n    A fixed-capacity circular buffer that stores transitions and supports\n    uniform random sampling. Observation and action arrays are lazily\n    initialized on the first {!add}. *)\n\n(** {1:types Types} *)\n\ntype ('obs, 'act) transition = {\n  observation : 'obs;  (** State before the action. *)\n  action : 'act;  (** Action taken. *)\n  reward : float;  (** Scalar reward received. *)\n  next_observation : 'obs;  (** State after the action. *)\n  terminated : bool;  (** Natural episode ending. *)\n  truncated : bool;  (** Forced episode ending. *)\n}\n(** The type for transitions. *)\n\ntype ('obs, 'act) t\n(** A replay buffer of transitions. *)\n\n(** {1:constructors Constructors} *)\n\nval create : capacity:int -> ('obs, 'act) t\n(** [create ~capacity] is an empty buffer that holds at most [capacity]\n    transitions.\n\n    Raises [Invalid_argument] if [capacity <= 0]. *)\n\n(** {1:mutating Mutating} *)\n\nval add : ('obs, 'act) t -> ('obs, 'act) transition -> unit\n(** [add buf tr] appends [tr], overwriting the oldest transition when at\n    capacity. *)\n\nval clear : ('obs, 'act) t -> unit\n(** [clear buf] removes all transitions, keeping storage allocated. *)\n\n(** {1:sampling Sampling} *)\n\nval sample : ('obs, 'act) t -> batch_size:int -> ('obs, 'act) transition array\n(** [sample buf ~batch_size] draws [batch_size] transitions uniformly at random\n    (with replacement).\n\n    Random keys are drawn from the implicit RNG scope.\n\n    If [batch_size] exceeds {!size}, samples [min batch_size size] transitions.\n\n    Raises [Invalid_argument] if [buf] is empty or [batch_size <= 0]. 
*)\n\nval sample_arrays :\n  ('obs, 'act) t ->\n  batch_size:int ->\n  'obs array * 'act array * float array * 'obs array * bool array * bool array\n(** [sample_arrays buf ~batch_size] is like {!sample} but returns\n    structure-of-arrays\n    [(observations, actions, rewards, next_observations, terminated, truncated)]\n    for direct use in training loops. *)\n\n(** {1:queries Queries} *)\n\nval size : ('obs, 'act) t -> int\n(** [size buf] is the number of stored transitions. *)\n\nval is_full : ('obs, 'act) t -> bool\n(** [is_full buf] is [true] iff [size buf = capacity]. *)\n\nval capacity : ('obs, 'act) t -> int\n(** [capacity buf] is the maximum number of transitions [buf] can hold. *)\n"
  },
  {
    "path": "packages/fehu/lib/collect.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nlet err_concat_empty = \"Collect.concat: empty list\"\n\ntype ('obs, 'act) t = {\n  observations : 'obs array;\n  actions : 'act array;\n  rewards : float array;\n  next_observations : 'obs array;\n  terminated : bool array;\n  truncated : bool array;\n  infos : Info.t array;\n  log_probs : float array option;\n  values : float array option;\n}\n\nlet length t = Array.length t.observations\n\n(* Concatenation *)\n\nlet concat_opt_field ts get =\n  if List.for_all (fun t -> Option.is_some (get t)) ts then\n    Some (Array.concat (List.map (fun t -> Option.get (get t)) ts))\n  else None\n\nlet concat = function\n  | [] -> invalid_arg err_concat_empty\n  | [ t ] -> t\n  | ts ->\n      {\n        observations = Array.concat (List.map (fun t -> t.observations) ts);\n        actions = Array.concat (List.map (fun t -> t.actions) ts);\n        rewards = Array.concat (List.map (fun t -> t.rewards) ts);\n        next_observations =\n          Array.concat (List.map (fun t -> t.next_observations) ts);\n        terminated = Array.concat (List.map (fun t -> t.terminated) ts);\n        truncated = Array.concat (List.map (fun t -> t.truncated) ts);\n        infos = Array.concat (List.map (fun t -> t.infos) ts);\n        log_probs = concat_opt_field ts (fun t -> t.log_probs);\n        values = concat_opt_field ts (fun t -> t.values);\n      }\n\n(* Accumulator for building trajectories *)\n\ntype ('obs, 'act) acc = {\n  mutable obs : 'obs list;\n  mutable acts : 'act list;\n  mutable rews : float list;\n  mutable next_obs : 'obs list;\n  mutable terms : bool list;\n  mutable truncs : bool list;\n  mutable infos_acc : Info.t list;\n  mutable lps : float list;\n  mutable vals : float list;\n  mutable count : 
int;\n}\n\nlet create_acc () =\n  {\n    obs = [];\n    acts = [];\n    rews = [];\n    next_obs = [];\n    terms = [];\n    truncs = [];\n    infos_acc = [];\n    lps = [];\n    vals = [];\n    count = 0;\n  }\n\nlet acc_step acc ~current_obs ~action ~lp_opt ~v_opt (s : _ Env.step) =\n  acc.obs <- current_obs :: acc.obs;\n  acc.acts <- action :: acc.acts;\n  acc.rews <- s.reward :: acc.rews;\n  acc.next_obs <- s.observation :: acc.next_obs;\n  acc.terms <- s.terminated :: acc.terms;\n  acc.truncs <- s.truncated :: acc.truncs;\n  acc.infos_acc <- s.info :: acc.infos_acc;\n  (match lp_opt with Some lp -> acc.lps <- lp :: acc.lps | None -> ());\n  (match v_opt with Some v -> acc.vals <- v :: acc.vals | None -> ());\n  acc.count <- acc.count + 1\n\nlet acc_to_trajectory acc =\n  let n = acc.count in\n  let log_probs =\n    if List.length acc.lps = n then Some (Array.of_list (List.rev acc.lps))\n    else None\n  in\n  let values =\n    if List.length acc.vals = n then Some (Array.of_list (List.rev acc.vals))\n    else None\n  in\n  {\n    observations = Array.of_list (List.rev acc.obs);\n    actions = Array.of_list (List.rev acc.acts);\n    rewards = Array.of_list (List.rev acc.rews);\n    next_observations = Array.of_list (List.rev acc.next_obs);\n    terminated = Array.of_list (List.rev acc.terms);\n    truncated = Array.of_list (List.rev acc.truncs);\n    infos = Array.of_list (List.rev acc.infos_acc);\n    log_probs;\n    values;\n  }\n\n(* Collecting *)\n\nlet rollout env ~policy ~n_steps =\n  let acc = create_acc () in\n  let obs, _info = Env.reset env () in\n  let current_obs = ref obs in\n  while acc.count < n_steps do\n    let action, lp_opt, v_opt = policy !current_obs in\n    let s = Env.step env action in\n    acc_step acc ~current_obs:!current_obs ~action ~lp_opt ~v_opt s;\n    current_obs := s.observation;\n    if s.terminated || s.truncated then begin\n      let obs, _info = Env.reset env () in\n      current_obs := obs\n    end\n  done;\n  
acc_to_trajectory acc\n\nlet episodes env ~policy ~n_episodes ?(max_steps = 1000) () =\n  let eps = ref [] in\n  for _ = 1 to n_episodes do\n    let acc = create_acc () in\n    let obs, _info = Env.reset env () in\n    let current_obs = ref obs in\n    let done_flag = ref false in\n    while acc.count < max_steps && not !done_flag do\n      let action, lp_opt, v_opt = policy !current_obs in\n      let s = Env.step env action in\n      acc_step acc ~current_obs:!current_obs ~action ~lp_opt ~v_opt s;\n      current_obs := s.observation;\n      done_flag := s.terminated || s.truncated\n    done;\n    eps := acc_to_trajectory acc :: !eps\n  done;\n  List.rev !eps\n"
  },
  {
    "path": "packages/fehu/lib/collect.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Trajectory collection from environments.\n\n    Collects sequential agent-environment interactions into structure-of-arrays\n    form for batch processing. Handles automatic resets on episode boundaries\n    and records both the current and next observation at each timestep. *)\n\n(** {1:types Types} *)\n\ntype ('obs, 'act) t = {\n  observations : 'obs array;  (** States before each action. *)\n  actions : 'act array;  (** Actions taken. *)\n  rewards : float array;  (** Scalar rewards received. *)\n  next_observations : 'obs array;  (** States after each action. *)\n  terminated : bool array;  (** Natural episode endings. *)\n  truncated : bool array;  (** Forced episode endings. *)\n  infos : Info.t array;  (** Per-step metadata. *)\n  log_probs : float array option;  (** Policy log-probabilities. *)\n  values : float array option;  (** Value estimates. *)\n}\n(** The type for trajectories. All arrays have the same length. Optional fields\n    are [None] when the policy does not provide them. *)\n\n(** {1:accessors Accessors} *)\n\nval length : ('obs, 'act) t -> int\n(** [length traj] is the number of transitions in [traj]. *)\n\n(** {1:combining Combining} *)\n\nval concat : ('obs, 'act) t list -> ('obs, 'act) t\n(** [concat trajs] concatenates [trajs] into a single trajectory. Optional\n    fields are kept only if present in all inputs.\n\n    Raises [Invalid_argument] if [trajs] is empty. 
*)\n\n(** {1:collecting Collecting} *)\n\nval rollout :\n  ('obs, 'act, 'render) Env.t ->\n  policy:('obs -> 'act * float option * float option) ->\n  n_steps:int ->\n  ('obs, 'act) t\n(** [rollout env ~policy ~n_steps] collects [n_steps] transitions.\n\n    Resets [env] at the start and automatically on episode boundaries\n    (terminated or truncated). The [policy] receives the current observation and\n    returns [(action, log_prob_opt, value_opt)]. *)\n\nval episodes :\n  ('obs, 'act, 'render) Env.t ->\n  policy:('obs -> 'act * float option * float option) ->\n  n_episodes:int ->\n  ?max_steps:int ->\n  unit ->\n  ('obs, 'act) t list\n(** [episodes env ~policy ~n_episodes ()] collects complete episodes, one\n    trajectory per episode. Each episode runs until termination, truncation, or\n    [max_steps] (default [1000]). *)\n"
  },
  {
    "path": "packages/fehu/lib/dune",
    "content": "(library\n (name fehu)\n (public_name fehu)\n (libraries nx))\n"
  },
  {
    "path": "packages/fehu/lib/env.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nlet strf = Printf.sprintf\n\n(* Error messages *)\n\nlet err_closed op = strf \"Env: operation '%s' on a closed environment\" op\n\nlet err_needs_reset op =\n  strf \"Env: operation '%s' requires calling reset first\" op\n\nlet err_render_mode mode modes =\n  strf \"Env.create: render mode '%s' not in render_modes [%s]\" mode\n    (String.concat \"; \" modes)\n\nlet err_obs_reset value =\n  strf \"Env.reset: observation outside observation_space (value=%s)\" value\n\nlet err_obs_step value =\n  strf \"Env.step: observation outside observation_space (value=%s)\" value\n\nlet err_action value =\n  strf \"Env.step: action outside action_space (value=%s)\" value\n\n(* Step result *)\n\ntype 'obs step = {\n  observation : 'obs;\n  reward : float;\n  terminated : bool;\n  truncated : bool;\n  info : Info.t;\n}\n\nlet step_result ~observation ?(reward = 0.) 
?(terminated = false)\n    ?(truncated = false) ?(info = Info.empty) () =\n  { observation; reward; terminated; truncated; info }\n\n(* Render mode *)\n\ntype render_mode = [ `Human | `Rgb_array | `Ansi | `Svg | `Custom of string ]\n\nlet render_mode_to_string = function\n  | `Human -> \"human\"\n  | `Rgb_array -> \"rgb_array\"\n  | `Ansi -> \"ansi\"\n  | `Svg -> \"svg\"\n  | `Custom name -> name\n\n(* Shared mutable state *)\n\ntype shared = { mutable closed : bool; mutable needs_reset : bool }\n\n(* Environment *)\n\ntype ('obs, 'act, 'render) t = {\n  id : string option;\n  observation_space : 'obs Space.t;\n  action_space : 'act Space.t;\n  render_mode : render_mode option;\n  render_modes : string list;\n  shared : shared;\n  reset_fn : ?options:Info.t -> unit -> 'obs * Info.t;\n  step_fn : 'act -> 'obs step;\n  render_fn : unit -> 'render option;\n  close_fn : unit -> unit;\n}\n\n(* Lifecycle guards *)\n\nlet ensure_open shared op = if shared.closed then invalid_arg (err_closed op)\n\nlet ensure_reset shared op =\n  if shared.needs_reset then invalid_arg (err_needs_reset op)\n\n(* Constructor *)\n\nlet create ?id ~observation_space ~action_space ?render_mode\n    ?(render_modes = []) ~reset ~step ?render ?close () =\n  (match render_mode with\n  | None -> ()\n  | Some mode ->\n      let mode_s = render_mode_to_string mode in\n      if not (List.mem mode_s render_modes) then\n        invalid_arg (err_render_mode mode_s render_modes));\n  let shared = { closed = false; needs_reset = true } in\n  let render_fn = Option.value render ~default:(fun () -> None) in\n  let close_fn = Option.value close ~default:(fun () -> ()) in\n  let rec env =\n    {\n      id;\n      observation_space;\n      action_space;\n      render_mode;\n      render_modes;\n      shared;\n      reset_fn = (fun ?options () -> reset env ?options ());\n      step_fn = (fun action -> step env action);\n      render_fn;\n      close_fn;\n    }\n  in\n  env\n\n(* Wrap *)\n\nlet wrap ?id 
~observation_space ~action_space ?render_mode ~reset ~step ?render\n    ?close inner =\n  let render_mode =\n    match render_mode with Some _ -> render_mode | None -> inner.render_mode\n  in\n  let render_fn =\n    match render with\n    | Some f -> fun () -> f inner\n    | None -> fun () -> inner.render_fn ()\n  in\n  let close_fn =\n    match close with\n    | Some f -> fun () -> f inner\n    | None -> fun () -> inner.close_fn ()\n  in\n  {\n    id;\n    observation_space;\n    action_space;\n    render_mode;\n    render_modes = inner.render_modes;\n    shared = inner.shared;\n    reset_fn = (fun ?options () -> reset inner ?options ());\n    step_fn = (fun action -> step inner action);\n    render_fn;\n    close_fn;\n  }\n\n(* Accessors *)\n\nlet id env = env.id\nlet observation_space env = env.observation_space\nlet action_space env = env.action_space\nlet render_mode env = env.render_mode\n\n(* Human render helper *)\n\nlet maybe_human_render env =\n  match env.render_mode with\n  | Some `Human -> ignore (env.render_fn ())\n  | _ -> ()\n\n(* Lifecycle — all guards live here *)\n\nlet closed env = env.shared.closed\n\nlet reset env ?options () =\n  ensure_open env.shared \"reset\";\n  let observation, info = env.reset_fn ?options () in\n  if not (Space.contains env.observation_space observation) then\n    invalid_arg\n      (err_obs_reset\n         (Space.pack env.observation_space observation |> Value.to_string));\n  env.shared.needs_reset <- false;\n  maybe_human_render env;\n  (observation, info)\n\nlet step env action =\n  ensure_open env.shared \"step\";\n  ensure_reset env.shared \"step\";\n  if not (Space.contains env.action_space action) then\n    invalid_arg\n      (err_action (Space.pack env.action_space action |> Value.to_string));\n  let result = env.step_fn action in\n  if not (Space.contains env.observation_space result.observation) then\n    invalid_arg\n      (err_obs_step\n         (Space.pack env.observation_space result.observation |> 
Value.to_string));\n  if result.terminated || result.truncated then env.shared.needs_reset <- true;\n  maybe_human_render env;\n  result\n\nlet render env =\n  ensure_open env.shared \"render\";\n  env.render_fn ()\n\nlet close env =\n  if not env.shared.closed then begin\n    env.close_fn ();\n    env.shared.closed <- true;\n    env.shared.needs_reset <- true\n  end\n\n(* Wrapper helpers *)\n\nlet err_clip_bounds = \"Env.clip_action: mismatched low/high bounds\"\nlet err_clip_obs_bounds = \"Env.clip_observation: mismatched low/high bounds\"\nlet err_time_limit = \"Env.time_limit: max_episode_steps must be positive\"\n\nlet derive_id env suffix =\n  match env.id with None -> None | Some id -> Some (id ^ suffix)\n\nlet clamp_tensor ~low ~high tensor =\n  let data = Nx.to_array tensor in\n  let clipped = Array.copy data in\n  let upper = Array.length clipped - 1 in\n  for idx = 0 to upper do\n    let lo = low.(idx) in\n    let hi = high.(idx) in\n    let v = clipped.(idx) in\n    if v < lo then clipped.(idx) <- lo else if v > hi then clipped.(idx) <- hi\n  done;\n  Nx.create Nx.float32 (Nx.shape tensor) clipped\n\n(* Wrappers *)\n\nlet map_observation ~observation_space ~f env =\n  wrap\n    ?id:(derive_id env \"/ObservationWrapper\")\n    ~observation_space ~action_space:env.action_space\n    ~reset:(fun inner ?options () ->\n      let obs, info = reset inner ?options () in\n      f obs info)\n    ~step:(fun inner action ->\n      let s = step inner action in\n      let obs, info = f s.observation s.info in\n      { s with observation = obs; info })\n    env\n\nlet map_action ~action_space ~f env =\n  wrap\n    ?id:(derive_id env \"/ActionWrapper\")\n    ~observation_space:env.observation_space ~action_space\n    ~reset:(fun inner ?options () -> reset inner ?options ())\n    ~step:(fun inner action ->\n      let s = step inner (f action) in\n      {\n        observation = s.observation;\n        reward = s.reward;\n        terminated = s.terminated;\n        
truncated = s.truncated;\n        info = s.info;\n      })\n    env\n\nlet map_reward ~f env =\n  wrap\n    ?id:(derive_id env \"/RewardWrapper\")\n    ~observation_space:env.observation_space ~action_space:env.action_space\n    ~reset:(fun inner ?options () -> reset inner ?options ())\n    ~step:(fun inner action ->\n      let s = step inner action in\n      let reward, info = f ~reward:s.reward ~info:s.info in\n      { s with reward; info })\n    env\n\n(* Clipping *)\n\nlet clip_action env =\n  let low, high = Space.Box.bounds env.action_space in\n  let element_count = Array.length low in\n  if Array.length high <> element_count then invalid_arg err_clip_bounds;\n  let relaxed_low =\n    Array.init element_count (fun i ->\n        if Float.equal low.(i) high.(i) then low.(i) else Float.neg_infinity)\n  in\n  let relaxed_high =\n    Array.init element_count (fun i ->\n        if Float.equal low.(i) high.(i) then high.(i) else Float.infinity)\n  in\n  let relaxed_space = Space.Box.create ~low:relaxed_low ~high:relaxed_high in\n  map_action ~action_space:relaxed_space\n    ~f:(fun action -> clamp_tensor ~low ~high action)\n    env\n\nlet clip_observation ~low ~high env =\n  let inner_low, inner_high = Space.Box.bounds env.observation_space in\n  let n = Array.length low in\n  if Array.length high <> n then invalid_arg err_clip_obs_bounds;\n  if Array.length inner_low <> n then invalid_arg err_clip_obs_bounds;\n  let clamp_low = Array.init n (fun i -> Float.max low.(i) inner_low.(i)) in\n  let clamp_high = Array.init n (fun i -> Float.min high.(i) inner_high.(i)) in\n  let observation_space = Space.Box.create ~low:clamp_low ~high:clamp_high in\n  map_observation ~observation_space\n    ~f:(fun obs info ->\n      (clamp_tensor ~low:clamp_low ~high:clamp_high obs, info))\n    env\n\n(* Limits *)\n\nlet time_limit ~max_episode_steps env =\n  if max_episode_steps <= 0 then invalid_arg err_time_limit;\n  let steps = ref 0 in\n  let add_info info elapsed =\n    info\n    
|> Info.set \"time_limit.truncated\" (Info.bool true)\n    |> Info.set \"time_limit.elapsed_steps\" (Info.int elapsed)\n  in\n  wrap\n    ?id:(derive_id env \"/TimeLimit\")\n    ~observation_space:env.observation_space ~action_space:env.action_space\n    ~reset:(fun inner ?options () ->\n      steps := 0;\n      reset inner ?options ())\n    ~step:(fun inner action ->\n      incr steps;\n      let s = step inner action in\n      if s.terminated || s.truncated then begin\n        steps := 0;\n        s\n      end\n      else if !steps >= max_episode_steps then begin\n        let info = add_info s.info !steps in\n        steps := 0;\n        { s with truncated = true; info }\n      end\n      else s)\n    env\n"
  },
  {
    "path": "packages/fehu/lib/env.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Reinforcement learning environments.\n\n    An environment defines an interactive loop: the agent observes, acts, and\n    receives a reward. The environment enforces a lifecycle: {!reset} must be\n    called before {!step}, and a terminated or truncated episode requires\n    another {!reset}. *)\n\n(** {1:step Step results} *)\n\ntype 'obs step = {\n  observation : 'obs;  (** The observation after the action. *)\n  reward : float;  (** Scalar reward for the transition. *)\n  terminated : bool;  (** [true] when the episode ends naturally. *)\n  truncated : bool;  (** [true] when the episode is cut short. *)\n  info : Info.t;  (** Auxiliary metadata. *)\n}\n(** The type for step results. *)\n\nval step_result :\n  observation:'obs ->\n  ?reward:float ->\n  ?terminated:bool ->\n  ?truncated:bool ->\n  ?info:Info.t ->\n  unit ->\n  'obs step\n(** [step_result ~observation ()] constructs a step result. [reward] defaults to\n    [0.], [terminated] and [truncated] default to [false], [info] defaults to\n    {!Info.empty}. *)\n\n(** {1:render Render modes} *)\n\ntype render_mode = [ `Human | `Rgb_array | `Ansi | `Svg | `Custom of string ]\n(** Rendering modes supported by environments. *)\n\nval render_mode_to_string : render_mode -> string\n(** [render_mode_to_string m] is the string representation of [m]. *)\n\n(** {1:env Environments} *)\n\ntype ('obs, 'act, 'render) t\n(** Environment handle. Use {!create} or {!wrap} to construct. 
*)\n\nval create :\n  ?id:string ->\n  observation_space:'obs Space.t ->\n  action_space:'act Space.t ->\n  ?render_mode:render_mode ->\n  ?render_modes:string list ->\n  reset:(('obs, 'act, 'render) t -> ?options:Info.t -> unit -> 'obs * Info.t) ->\n  step:(('obs, 'act, 'render) t -> 'act -> 'obs step) ->\n  ?render:(unit -> 'render option) ->\n  ?close:(unit -> unit) ->\n  unit ->\n  ('obs, 'act, 'render) t\n(** [create ~observation_space ~action_space ~reset ~step ()] makes a new\n    environment.\n\n    [reset] and [step] receive the environment handle as first argument. Random\n    keys for stochastic behavior are drawn from the implicit RNG scope.\n\n    [render_modes] lists the supported render mode strings. When [render_mode]\n    is provided, it must appear in [render_modes].\n\n    Raises [Invalid_argument] if [render_mode] is not in [render_modes]. *)\n\nval wrap :\n  ?id:string ->\n  observation_space:'obs2 Space.t ->\n  action_space:'act2 Space.t ->\n  ?render_mode:render_mode ->\n  reset:(('obs1, 'act1, 'render) t -> ?options:Info.t -> unit -> 'obs2 * Info.t) ->\n  step:(('obs1, 'act1, 'render) t -> 'act2 -> 'obs2 step) ->\n  ?render:(('obs1, 'act1, 'render) t -> 'render option) ->\n  ?close:(('obs1, 'act1, 'render) t -> unit) ->\n  ('obs1, 'act1, 'render) t ->\n  ('obs2, 'act2, 'render) t\n(** [wrap ~observation_space ~action_space ~reset ~step inner] builds a new\n    environment that wraps [inner]. The wrapper shares [inner]'s lifecycle state\n    (closed flag, reset flag). All guards (closed, needs-reset, space\n    validation) are enforced by {!reset}/{!step}, so wrappers get them\n    automatically.\n\n    The render type is preserved from [inner]. [render_mode] defaults to\n    [inner]'s. *)\n\n(** {1:accessors Accessors} *)\n\nval id : ('obs, 'act, 'render) t -> string option\n(** [id env] is the environment's identifier, if any. 
*)\n\nval observation_space : ('obs, 'act, 'render) t -> 'obs Space.t\n(** [observation_space env] is the space of valid observations. *)\n\nval action_space : ('obs, 'act, 'render) t -> 'act Space.t\n(** [action_space env] is the space of valid actions. *)\n\nval render_mode : ('obs, 'act, 'render) t -> render_mode option\n(** [render_mode env] is the render mode chosen at construction, if any. *)\n\n(** {1:lifecycle Lifecycle} *)\n\nval closed : ('obs, 'act, 'render) t -> bool\n(** [closed env] is [true] iff the environment has been closed. *)\n\nval reset : ('obs, 'act, 'render) t -> ?options:Info.t -> unit -> 'obs * Info.t\n(** [reset env ()] resets the environment to an initial state.\n\n    Raises [Invalid_argument] if [env] is closed, or if the reset function\n    produces an observation outside {!observation_space}. *)\n\nval step : ('obs, 'act, 'render) t -> 'act -> 'obs step\n(** [step env action] advances the environment by one timestep.\n\n    Raises [Invalid_argument] if [env] is closed, if no {!reset} has been called\n    since the last terminal step, if [action] is outside {!action_space}, or if\n    the step function produces an observation outside {!observation_space}. *)\n\nval render : ('obs, 'act, 'render) t -> 'render option\n(** [render env] produces a visualization of the current state.\n\n    Raises [Invalid_argument] if [env] is closed. *)\n\nval close : ('obs, 'act, 'render) t -> unit\n(** [close env] releases resources held by the environment. Subsequent calls are\n    no-ops. *)\n\n(** {1:wrappers Wrappers} *)\n\nval map_observation :\n  observation_space:'obs2 Space.t ->\n  f:('obs1 -> Info.t -> 'obs2 * Info.t) ->\n  ('obs1, 'act, 'render) t ->\n  ('obs2, 'act, 'render) t\n(** [map_observation ~observation_space ~f env] transforms observations. Every\n    observation from {!reset} and {!step} is passed through [f] together with\n    the info dictionary. 
*)\n\nval map_action :\n  action_space:'act2 Space.t ->\n  f:('act2 -> 'act1) ->\n  ('obs, 'act1, 'render) t ->\n  ('obs, 'act2, 'render) t\n(** [map_action ~action_space ~f env] transforms actions before passing them to\n    the inner environment. *)\n\nval map_reward :\n  f:(reward:float -> info:Info.t -> float * Info.t) ->\n  ('obs, 'act, 'render) t ->\n  ('obs, 'act, 'render) t\n(** [map_reward ~f env] transforms rewards after each step. *)\n\n(** {1:clip Clipping} *)\n\nval clip_action :\n  ('obs, Space.Box.element, 'render) t -> ('obs, Space.Box.element, 'render) t\n(** [clip_action env] clamps continuous actions to the bounds of the inner\n    environment's {!Space.Box} action space. The wrapper exposes a relaxed space\n    that accepts any float values, then clips before forwarding. *)\n\nval clip_observation :\n  low:float array ->\n  high:float array ->\n  (Space.Box.element, 'act, 'render) t ->\n  (Space.Box.element, 'act, 'render) t\n(** [clip_observation ~low ~high env] clamps observations to \\[[low]; [high]\\].\n    The wrapper's observation space is the intersection of the provided bounds\n    and the inner space's bounds.\n\n    Raises [Invalid_argument] if [low] and [high] differ in length or do not\n    match the inner space's dimensionality. *)\n\n(** {1:limits Limits} *)\n\nval time_limit :\n  max_episode_steps:int -> ('obs, 'act, 'render) t -> ('obs, 'act, 'render) t\n(** [time_limit ~max_episode_steps env] enforces a maximum episode length. When\n    the limit is reached the step's [truncated] flag is set to [true]. The\n    counter resets on {!reset}.\n\n    Raises [Invalid_argument] if [max_episode_steps <= 0]. *)\n"
  },
  {
    "path": "packages/fehu/lib/envs/cartpole.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Fehu\n\ntype obs = (float, Nx.float32_elt) Nx.t\ntype act = (int32, Nx.int32_elt) Nx.t\ntype render = string\n\n(* Physics constants matching Gymnasium CartPole-v1 *)\n\nlet gravity = 9.8\nlet masscart = 1.0\nlet masspole = 0.1\nlet total_mass = masscart +. masspole\nlet half_pole_length = 0.5\nlet polemass_length = masspole *. half_pole_length\nlet force_mag = 10.0\nlet tau = 0.02\n\n(* Termination thresholds *)\n\nlet theta_threshold = 12. *. Float.pi /. 180.\nlet x_threshold = 2.4\nlet max_steps = 500\n\n(* Float32-representable large bound for \"unbounded\" dimensions *)\nlet f32_max = 3.4028235e38\n\nlet observation_space =\n  Space.Box.create\n    ~low:[| -4.8; -.f32_max; -.theta_threshold *. 2.; -.f32_max |]\n    ~high:[| 4.8; f32_max; theta_threshold *. 2.; f32_max |]\n\nlet action_space = Space.Discrete.create 2\n\nlet make_obs x x_dot theta theta_dot =\n  Nx.create Nx.float32 [| 4 |] [| x; x_dot; theta; theta_dot |]\n\nlet make ?render_mode () =\n  let x = ref 0.0 in\n  let x_dot = ref 0.0 in\n  let theta = ref 0.0 in\n  let theta_dot = ref 0.0 in\n  let steps = ref 0 in\n  let reset _env ?options:_ () =\n    let random_state () =\n      let r = Nx.rand Nx.float32 [| 1 |] in\n      let v = (Nx.to_array r).(0) in\n      (v -. 0.5) *. 0.1\n    in\n    x := random_state ();\n    x_dot := random_state ();\n    theta := random_state ();\n    theta_dot := random_state ();\n    steps := 0;\n    (make_obs !x !x_dot !theta !theta_dot, Info.empty)\n  in\n  let step _env action =\n    let force =\n      if Space.Discrete.to_int action = 1 then force_mag else -.force_mag\n    in\n    let costheta = cos !theta in\n    let sintheta = sin !theta in\n    let temp =\n      (force +. 
(polemass_length *. !theta_dot *. !theta_dot *. sintheta))\n      /. total_mass\n    in\n    let thetaacc =\n      ((gravity *. sintheta) -. (costheta *. temp))\n      /. (half_pole_length\n         *. ((4.0 /. 3.0) -. (masspole *. costheta *. costheta /. total_mass)))\n    in\n    let xacc =\n      temp -. (polemass_length *. thetaacc *. costheta /. total_mass)\n    in\n    x := !x +. (tau *. !x_dot);\n    x_dot := !x_dot +. (tau *. xacc);\n    theta := !theta +. (tau *. !theta_dot);\n    theta_dot := !theta_dot +. (tau *. thetaacc);\n    incr steps;\n    let terminated =\n      !x < -.x_threshold || !x > x_threshold || !theta < -.theta_threshold\n      || !theta > theta_threshold\n    in\n    let truncated = (not terminated) && !steps >= max_steps in\n    let reward = if terminated then 0.0 else 1.0 in\n    let info = Info.set \"steps\" (Info.int !steps) Info.empty in\n    Env.step_result\n      ~observation:(make_obs !x !x_dot !theta !theta_dot)\n      ~reward ~terminated ~truncated ~info ()\n  in\n  let render () =\n    Some\n      (Printf.sprintf\n         \"CartPole: x=%.3f, x_dot=%.3f, theta=%.3f\\xc2\\xb0, theta_dot=%.3f, \\\n          steps=%d\"\n         !x !x_dot\n         (!theta *. 180. /. Float.pi)\n         !theta_dot !steps)\n  in\n  Env.create ?render_mode ~render_modes:[ \"ansi\" ] ~id:\"CartPole-v1\"\n    ~observation_space ~action_space ~reset ~step ~render ()\n"
  },
  {
    "path": "packages/fehu/lib/envs/dune",
    "content": "(library\n (name fehu_envs)\n (public_name fehu.envs)\n (libraries fehu nx))\n"
  },
  {
    "path": "packages/fehu/lib/envs/fehu_envs.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nmodule Random_walk = Random_walk\nmodule Cartpole = Cartpole\nmodule Grid_world = Grid_world\nmodule Mountain_car = Mountain_car\n"
  },
  {
    "path": "packages/fehu/lib/envs/fehu_envs.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Built-in environments for testing and learning.\n\n    Four environments covering the standard RL benchmarks: a simple 1D walk, the\n    classic cart-pole, a grid navigation problem, and the sparse-reward mountain\n    car. All follow the {!Fehu.Env} interface. *)\n\n(** {1:envs Environments} *)\n\nmodule Random_walk : sig\n  (** One-dimensional random walk.\n\n      The agent moves left or right on a line bounded by \\[[-10]; [10]\\]. Reward\n      is [- |position|]. Episodes terminate when the agent reaches a boundary or\n      after 200 steps.\n\n      {b Observation}: {!Fehu.Space.Box} of shape [[1]] in \\[[-10.0]; [10.0]\\].\n\n      {b Actions}: {!Fehu.Space.Discrete} 2 -- 0 = left, 1 = right.\n\n      {b Render modes}: [ansi]. *)\n\n  type obs = (float, Nx.float32_elt) Nx.t\n  type act = (int32, Nx.int32_elt) Nx.t\n  type render = string\n\n  val make :\n    ?render_mode:Fehu.Env.render_mode -> unit -> (obs, act, render) Fehu.Env.t\n  (** [make ()] is a random walk environment. *)\nend\n\nmodule Cartpole : sig\n  (** Classic cart-pole balancing (CartPole-v1).\n\n      A pole is attached to a cart on a frictionless track. The agent pushes the\n      cart left or right to keep the pole upright. Reward is [+1.0] per step\n      while the pole stays up. The episode terminates when the pole exceeds\n      +/-12 degrees or the cart leaves +/-2.4, and truncates at 500 steps.\n\n      {b Observation}: {!Fehu.Space.Box} of shape [[4]] -- [x], [x_dot],\n      [theta], [theta_dot].\n\n      {b Actions}: {!Fehu.Space.Discrete} 2 -- 0 = push left, 1 = push right.\n\n      {b Render modes}: [ansi]. 
*)\n\n  type obs = (float, Nx.float32_elt) Nx.t\n  type act = (int32, Nx.int32_elt) Nx.t\n  type render = string\n\n  val make :\n    ?render_mode:Fehu.Env.render_mode -> unit -> (obs, act, render) Fehu.Env.t\n  (** [make ()] is a cart-pole environment. *)\nend\n\nmodule Grid_world : sig\n  (** 5x5 grid navigation with obstacle.\n\n      The agent starts at [(0, 0)] and must reach the goal at [(4, 4)]. An\n      obstacle at [(2, 2)] blocks movement. Reward is [+10.0] on reaching the\n      goal, [-1.0] otherwise. Truncates at 200 steps.\n\n      {b Observation}: {!Fehu.Space.Multi_discrete} [[5; 5]] -- [(row, col)].\n\n      {b Actions}: {!Fehu.Space.Discrete} 4 -- 0 = up, 1 = down, 2 = left, 3 =\n      right.\n\n      {b Render modes}: [ansi], [rgb_array]. *)\n\n  type obs = (int32, Nx.int32_elt) Nx.t\n  type act = (int32, Nx.int32_elt) Nx.t\n  type render = Text of string | Image of Fehu.Render.image\n\n  val make :\n    ?render_mode:Fehu.Env.render_mode -> unit -> (obs, act, render) Fehu.Env.t\n  (** [make ()] is a grid world environment. *)\nend\n\nmodule Mountain_car : sig\n  (** Mountain car with sparse reward (MountainCar-v0).\n\n      A car sits in a valley between two hills. The engine is too weak to climb\n      the right hill directly; the agent must build momentum by rocking back and\n      forth. Reward is [-1.0] per step. The episode terminates when the car\n      reaches position >= 0.5 with non-negative velocity, and truncates at 200\n      steps.\n\n      {b Observation}: {!Fehu.Space.Box} of shape [[2]] -- [position],\n      [velocity].\n\n      {b Actions}: {!Fehu.Space.Discrete} 3 -- 0 = push left, 1 = coast, 2 =\n      push right.\n\n      {b Render modes}: [ansi]. *)\n\n  type obs = (float, Nx.float32_elt) Nx.t\n  type act = (int32, Nx.int32_elt) Nx.t\n  type render = string\n\n  val make :\n    ?render_mode:Fehu.Env.render_mode -> unit -> (obs, act, render) Fehu.Env.t\n  (** [make ()] is a mountain car environment. *)\nend\n"
  },
  {
    "path": "packages/fehu/lib/envs/grid_world.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Fehu\n\ntype obs = (int32, Nx.int32_elt) Nx.t\ntype act = (int32, Nx.int32_elt) Nx.t\ntype render = Text of string | Image of Render.image\n\nlet grid_size = 5\nlet max_steps = 200\nlet observation_space = Space.Multi_discrete.create [| grid_size; grid_size |]\nlet action_space = Space.Discrete.create 4\nlet is_goal row col = row = grid_size - 1 && col = grid_size - 1\nlet is_obstacle row col = row = 2 && col = 2\n\nlet is_valid row col =\n  row >= 0 && row < grid_size && col >= 0 && col < grid_size\n  && not (is_obstacle row col)\n\nlet make_obs row col =\n  Nx.create Nx.int32 [| 2 |] [| Int32.of_int row; Int32.of_int col |]\n\n(* ANSI rendering *)\n\nlet render_text row col =\n  let buffer = Bytes.make (grid_size * grid_size) '.' 
in\n  Bytes.set buffer ((row * grid_size) + col) 'A';\n  Bytes.set buffer (((grid_size - 1) * grid_size) + (grid_size - 1)) 'G';\n  Bytes.set buffer ((2 * grid_size) + 2) '#';\n  let rows =\n    List.init grid_size (fun r ->\n        Bytes.sub_string buffer (r * grid_size) grid_size)\n  in\n  Format.asprintf \"Position: (%d, %d)@.%a\" row col\n    (Format.pp_print_list\n       ~pp_sep:(fun fmt () -> Format.fprintf fmt \"@.\")\n       Format.pp_print_string)\n    rows\n\n(* RGB rendering *)\n\nlet cell_size = 32\nlet frame_width = grid_size * cell_size\nlet frame_height = grid_size * cell_size\n\nlet fill_rect data ~x0 ~y0 ~w ~h ~r ~g ~b =\n  for dy = 0 to h - 1 do\n    let row_offset = (y0 + dy) * frame_width * 3 in\n    for dx = 0 to w - 1 do\n      let base = row_offset + ((x0 + dx) * 3) in\n      Bigarray.Array1.unsafe_set data base r;\n      Bigarray.Array1.unsafe_set data (base + 1) g;\n      Bigarray.Array1.unsafe_set data (base + 2) b\n    done\n  done\n\nlet render_image row col =\n  let len = frame_width * frame_height * 3 in\n  let data =\n    Bigarray.Array1.create Bigarray.int8_unsigned Bigarray.c_layout len\n  in\n  fill_rect data ~x0:0 ~y0:0 ~w:frame_width ~h:frame_height ~r:30 ~g:33 ~b:36;\n  for gr = 0 to grid_size - 1 do\n    for gc = 0 to grid_size - 1 do\n      let x0 = gc * cell_size in\n      let y0 = gr * cell_size in\n      fill_rect data ~x0 ~y0 ~w:cell_size ~h:cell_size ~r:44 ~g:48 ~b:52;\n      fill_rect data ~x0:(x0 + 1) ~y0:(y0 + 1) ~w:(cell_size - 2)\n        ~h:(cell_size - 2) ~r:54 ~g:60 ~b:65\n    done\n  done;\n  let draw_cell cr cc ~r ~g ~b =\n    fill_rect data\n      ~x0:((cc * cell_size) + 2)\n      ~y0:((cr * cell_size) + 2)\n      ~w:(cell_size - 4) ~h:(cell_size - 4) ~r ~g ~b\n  in\n  draw_cell row col ~r:78 ~g:162 ~b:196;\n  draw_cell (grid_size - 1) (grid_size - 1) ~r:76 ~g:175 ~b:80;\n  draw_cell 2 2 ~r:200 ~g:80 ~b:80;\n  Render.image ~width:frame_width ~height:frame_height data\n\nlet make ?render_mode () =\n  let row = 
ref 0 in\n  let col = ref 0 in\n  let steps = ref 0 in\n  let reset _env ?options:_ () =\n    row := 0;\n    col := 0;\n    steps := 0;\n    (make_obs 0 0, Info.empty)\n  in\n  let step _env action =\n    let r, c = (!row, !col) in\n    let nr, nc =\n      match Space.Discrete.to_int action with\n      | 0 -> (r - 1, c)\n      | 1 -> (r + 1, c)\n      | 2 -> (r, c - 1)\n      | 3 -> (r, c + 1)\n      | _ -> (r, c)\n    in\n    let nr, nc = if is_valid nr nc then (nr, nc) else (r, c) in\n    row := nr;\n    col := nc;\n    incr steps;\n    let terminated = is_goal nr nc in\n    let truncated = (not terminated) && !steps >= max_steps in\n    let reward = if terminated then 10.0 else -1.0 in\n    let info = Info.set \"steps\" (Info.int !steps) Info.empty in\n    Env.step_result ~observation:(make_obs nr nc) ~reward ~terminated ~truncated\n      ~info ()\n  in\n  let render_mode_val = render_mode in\n  let render () =\n    match render_mode_val with\n    | Some `Rgb_array -> Some (Image (render_image !row !col))\n    | _ -> Some (Text (render_text !row !col))\n  in\n  Env.create ?render_mode ~render_modes:[ \"ansi\"; \"rgb_array\" ]\n    ~id:\"GridWorld-v0\" ~observation_space ~action_space ~reset ~step ~render ()\n"
  },
  {
    "path": "packages/fehu/lib/envs/mountain_car.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Fehu\n\ntype obs = (float, Nx.float32_elt) Nx.t\ntype act = (int32, Nx.int32_elt) Nx.t\ntype render = string\n\n(* Physics constants matching Gymnasium MountainCar-v0 *)\n\nlet min_position = -1.2\nlet max_position = 0.6\nlet max_speed = 0.07\nlet goal_position = 0.5\nlet goal_velocity = 0.0\nlet force = 0.001\nlet gravity = 0.0025\nlet max_steps = 200\n\nlet observation_space =\n  Space.Box.create\n    ~low:[| min_position; -.max_speed |]\n    ~high:[| max_position; max_speed |]\n\nlet action_space = Space.Discrete.create 3\n\nlet make_obs position velocity =\n  Nx.create Nx.float32 [| 2 |] [| position; velocity |]\n\nlet make ?render_mode () =\n  let position = ref 0.0 in\n  let velocity = ref 0.0 in\n  let steps = ref 0 in\n  let reset _env ?options:_ () =\n    let r = Nx.rand Nx.float32 [| 1 |] in\n    let v = (Nx.to_array r).(0) in\n    position := -0.6 +. (v *. 0.2);\n    velocity := 0.0;\n    steps := 0;\n    (make_obs !position !velocity, Info.empty)\n  in\n  let step _env action =\n    let force_direction = float_of_int (Space.Discrete.to_int action - 1) in\n    let vel =\n      !velocity +. (force_direction *. force)\n      -. (gravity *. cos (3.0 *. !position))\n    in\n    let vel = Float.max (-.max_speed) (Float.min vel max_speed) in\n    let pos = !position +. 
vel in\n    let pos = Float.max min_position (Float.min pos max_position) in\n    let vel = if pos = min_position && vel < 0.0 then 0.0 else vel in\n    position := pos;\n    velocity := vel;\n    incr steps;\n    let terminated = pos >= goal_position && vel >= goal_velocity in\n    let truncated = (not terminated) && !steps >= max_steps in\n    let reward = -1.0 in\n    let info = Info.set \"steps\" (Info.int !steps) Info.empty in\n    Env.step_result ~observation:(make_obs pos vel) ~reward ~terminated\n      ~truncated ~info ()\n  in\n  let render () =\n    let normalized_pos =\n      (!position -. min_position) /. (max_position -. min_position)\n    in\n    let car_pos = int_of_float (normalized_pos *. 40.0) in\n    let goal_pos =\n      int_of_float\n        ((goal_position -. min_position)\n        /. (max_position -. min_position)\n        *. 40.0)\n    in\n    let track = Bytes.make 41 '-' in\n    Bytes.set track goal_pos 'G';\n    Bytes.set track (max 0 (min 40 car_pos)) 'C';\n    Some\n      (Printf.sprintf \"MountainCar: [%s] pos=%.3f, vel=%.3f, steps=%d\"\n         (Bytes.to_string track) !position !velocity !steps)\n  in\n  Env.create ?render_mode ~render_modes:[ \"ansi\" ] ~id:\"MountainCar-v0\"\n    ~observation_space ~action_space ~reset ~step ~render ()\n"
  },
  {
    "path": "packages/fehu/lib/envs/random_walk.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Fehu\n\ntype obs = (float, Nx.float32_elt) Nx.t\ntype act = (int32, Nx.int32_elt) Nx.t\ntype render = string\n\nlet step_size = 1.0\nlet max_position = 10.0\nlet max_steps = 200\n\nlet observation_space =\n  Space.Box.create ~low:[| -.max_position |] ~high:[| max_position |]\n\nlet action_space = Space.Discrete.create 2\nlet make_obs position = Nx.create Nx.float32 [| 1 |] [| position |]\n\nlet render_ansi position =\n  let offset = int_of_float (position +. max_position) in\n  let offset = max 0 (min 20 offset) in\n  let buffer = Bytes.make 21 '.' in\n  Bytes.set buffer offset 'o';\n  Printf.sprintf \"Position: %+.2f\\n|%s|\" position (Bytes.to_string buffer)\n\nlet make ?render_mode () =\n  let position = ref 0.0 in\n  let steps = ref 0 in\n  let reset _env ?options:_ () =\n    position := 0.0;\n    steps := 0;\n    (make_obs 0.0, Info.empty)\n  in\n  let step _env action =\n    let direction =\n      if Space.Discrete.to_int action = 0 then -.step_size else step_size\n    in\n    let updated = !position +. direction in\n    let clamped = Float.min max_position (Float.max (-.max_position) updated) in\n    position := clamped;\n    incr steps;\n    let terminated = Float.abs clamped >= max_position in\n    let truncated = (not terminated) && !steps >= max_steps in\n    let reward = -.Float.abs clamped in\n    let info = Info.set \"steps\" (Info.int !steps) Info.empty in\n    Env.step_result ~observation:(make_obs clamped) ~reward ~terminated\n      ~truncated ~info ()\n  in\n  let render () = Some (render_ansi !position) in\n  Env.create ?render_mode ~render_modes:[ \"ansi\" ] ~id:\"RandomWalk-v0\"\n    ~observation_space ~action_space ~reset ~step ~render ()\n"
  },
  {
    "path": "packages/fehu/lib/eval.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\ntype stats = {\n  mean_reward : float;\n  std_reward : float;\n  mean_length : float;\n  n_episodes : int;\n}\n\nlet run env ~policy ?(n_episodes = 10) ?(max_steps = 1000) () =\n  let ep_rewards = Array.make n_episodes 0.0 in\n  let ep_lengths = Array.make n_episodes 0.0 in\n  for ep = 0 to n_episodes - 1 do\n    let obs, _info = Env.reset env () in\n    let current_obs = ref obs in\n    let total_reward = ref 0.0 in\n    let steps = ref 0 in\n    let done_flag = ref false in\n    while !steps < max_steps && not !done_flag do\n      let action = policy !current_obs in\n      let s = Env.step env action in\n      total_reward := !total_reward +. s.reward;\n      steps := !steps + 1;\n      current_obs := s.observation;\n      done_flag := s.terminated || s.truncated\n    done;\n    ep_rewards.(ep) <- !total_reward;\n    ep_lengths.(ep) <- Float.of_int !steps\n  done;\n  let n = Float.of_int n_episodes in\n  let mean_reward = ref 0.0 in\n  let mean_length = ref 0.0 in\n  for i = 0 to n_episodes - 1 do\n    mean_reward := !mean_reward +. ep_rewards.(i);\n    mean_length := !mean_length +. ep_lengths.(i)\n  done;\n  mean_reward := !mean_reward /. n;\n  mean_length := !mean_length /. n;\n  let var_sum = ref 0.0 in\n  for i = 0 to n_episodes - 1 do\n    let d = ep_rewards.(i) -. !mean_reward in\n    var_sum := !var_sum +. (d *. d)\n  done;\n  let std_reward = sqrt (!var_sum /. n) in\n  {\n    mean_reward = !mean_reward;\n    std_reward;\n    mean_length = !mean_length;\n    n_episodes;\n  }\n"
  },
  {
    "path": "packages/fehu/lib/eval.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Policy evaluation.\n\n    Runs a deterministic or stochastic policy over multiple episodes and reports\n    summary statistics. *)\n\n(** {1:types Types} *)\n\ntype stats = {\n  mean_reward : float;  (** Mean total reward across episodes. *)\n  std_reward : float;  (** Standard deviation of total rewards. *)\n  mean_length : float;  (** Mean episode length in steps. *)\n  n_episodes : int;  (** Number of episodes evaluated. *)\n}\n(** The type for evaluation statistics. *)\n\n(** {1:running Running} *)\n\nval run :\n  ('obs, 'act, 'render) Env.t ->\n  policy:('obs -> 'act) ->\n  ?n_episodes:int ->\n  ?max_steps:int ->\n  unit ->\n  stats\n(** [run env ~policy ()] evaluates [policy] over [n_episodes] (default [10])\n    episodes of at most [max_steps] (default [1000]) steps each. The environment\n    is reset between episodes. *)\n"
  },
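  {
    "path": "packages/fehu/examples/eval_sketch.ml",
    "content": "(* Illustrative usage sketch for Eval.run; not part of the original tree.\n   [Cartpole.make] is an assumed environment constructor, and the random\n   policy samples from the action space. *)\n\nopen Fehu\n\nlet () =\n  let env = Cartpole.make () in\n  let policy _obs = Space.sample (Env.action_space env) in\n  let stats = Eval.run env ~policy ~n_episodes:5 () in\n  Printf.printf \"mean reward %.2f +/- %.2f over %d episodes\\n\"\n    stats.Eval.mean_reward stats.Eval.std_reward stats.Eval.n_episodes\n"
  },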
  {
    "path": "packages/fehu/lib/fehu.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nmodule Value = Value\nmodule Info = Info\nmodule Space = Space\nmodule Env = Env\nmodule Vec_env = Vec_env\nmodule Collect = Collect\nmodule Buffer = Buffer\nmodule Gae = Gae\nmodule Eval = Eval\nmodule Render = Render\n"
  },
  {
    "path": "packages/fehu/lib/fehu.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** {0 Fehu} Reinforcement learning environments and utilities.\n\n    {1 Core}\n    {!modules: Value Info Space Env}\n\n    {1 Collection and training}\n    {!modules: Collect Buffer Gae Eval}\n\n    {1 Composition}\n    {!modules: Vec_env Render} *)\n\nmodule Value = Value\nmodule Info = Info\nmodule Space = Space\nmodule Env = Env\nmodule Vec_env = Vec_env\nmodule Collect = Collect\nmodule Buffer = Buffer\nmodule Gae = Gae\nmodule Eval = Eval\nmodule Render = Render\n"
  },
  {
    "path": "packages/fehu/lib/gae.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nlet err_lengths = \"Gae: all arrays must have the same length\"\n\nlet err_returns_lengths =\n  \"Gae.returns: rewards, terminated, and truncated must have the same length\"\n\nlet err_cfv_lengths =\n  \"Gae.compute_from_values: all arrays must have the same length\"\n\nlet compute ~rewards ~values ~terminated ~truncated ~next_values ~gamma ~lambda\n    =\n  let n = Array.length rewards in\n  if\n    n <> Array.length values\n    || n <> Array.length terminated\n    || n <> Array.length truncated\n    || n <> Array.length next_values\n  then invalid_arg err_lengths;\n  let advantages = Array.make n 0.0 in\n  let returns = Array.make n 0.0 in\n  let last_gae = ref 0.0 in\n  for t = n - 1 downto 0 do\n    let next_val, continuation =\n      if terminated.(t) then (0.0, 0.0)\n      else if truncated.(t) then (next_values.(t), 0.0)\n      else begin\n        let v = if t = n - 1 then next_values.(t) else values.(t + 1) in\n        (v, 1.0)\n      end\n    in\n    let delta = rewards.(t) +. (gamma *. next_val) -. values.(t) in\n    last_gae := delta +. (gamma *. lambda *. continuation *. !last_gae);\n    advantages.(t) <- !last_gae;\n    returns.(t) <- !last_gae +. 
values.(t)\n  done;\n  (advantages, returns)\n\nlet compute_from_values ~rewards ~values ~terminated ~truncated ~last_value\n    ~gamma ~lambda =\n  let n = Array.length rewards in\n  if\n    n <> Array.length values\n    || n <> Array.length terminated\n    || n <> Array.length truncated\n  then invalid_arg err_cfv_lengths;\n  let next_values =\n    Array.init n (fun t -> if t = n - 1 then last_value else values.(t + 1))\n  in\n  compute ~rewards ~values ~terminated ~truncated ~next_values ~gamma ~lambda\n\nlet returns ~rewards ~terminated ~truncated ~gamma =\n  let n = Array.length rewards in\n  if n <> Array.length terminated || n <> Array.length truncated then\n    invalid_arg err_returns_lengths;\n  let ret = Array.make n 0.0 in\n  let acc = ref 0.0 in\n  for t = n - 1 downto 0 do\n    let cont = if terminated.(t) || truncated.(t) then 0.0 else 1.0 in\n    acc := rewards.(t) +. (gamma *. cont *. !acc);\n    ret.(t) <- !acc\n  done;\n  ret\n\nlet normalize ?(eps = 1e-8) arr =\n  let n = Array.length arr in\n  if n = 0 then arr\n  else begin\n    let mean = ref 0.0 in\n    let m2 = ref 0.0 in\n    for i = 0 to n - 1 do\n      let k = Float.of_int (i + 1) in\n      let x = arr.(i) in\n      let delta = x -. !mean in\n      mean := !mean +. (delta /. k);\n      let delta2 = x -. !mean in\n      m2 := !m2 +. (delta *. delta2)\n    done;\n    let std = sqrt (!m2 /. Float.of_int n) +. eps in\n    let mu = !mean in\n    Array.init n (fun i -> (arr.(i) -. mu) /. std)\n  end\n"
  },
  {
    "path": "packages/fehu/lib/gae.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Generalized Advantage Estimation.\n\n    Correctly handles the distinction between terminated and truncated episodes.\n    On termination, the bootstrap value is zero. On truncation, the bootstrap\n    value comes from [next_values]. *)\n\n(** {1:gae GAE} *)\n\nval compute :\n  rewards:float array ->\n  values:float array ->\n  terminated:bool array ->\n  truncated:bool array ->\n  next_values:float array ->\n  gamma:float ->\n  lambda:float ->\n  float array * float array\n(** [compute ~rewards ~values ~terminated ~truncated ~next_values\n      ~gamma ~lambda] is [(advantages, returns)].\n\n    [next_values.(t)] is V(s_{{t+1}}). When [terminated.(t)] is\n    [true], the bootstrap value is zero and the GAE trace resets.\n    When [truncated.(t)] is [true], the bootstrap value is\n    [next_values.(t)] and the trace resets for the new episode.\n    Otherwise, continuation uses the next step's value.\n\n    Raises [Invalid_argument] if array lengths differ. *)\n\nval compute_from_values :\n  rewards:float array ->\n  values:float array ->\n  terminated:bool array ->\n  truncated:bool array ->\n  last_value:float ->\n  gamma:float ->\n  lambda:float ->\n  float array * float array\n(** [compute_from_values ~rewards ~values ~terminated ~truncated ~last_value\n     ~gamma ~lambda] is [(advantages, returns)].\n\n    Convenience wrapper around {!compute} that builds [next_values] from\n    [values] and [last_value]: [next_values.(t) = values.(t+1)] for [t < n-1],\n    and [next_values.(n-1) = last_value].\n\n    Raises [Invalid_argument] if array lengths differ. 
*)\n\n(** {1:returns Monte Carlo returns} *)\n\nval returns :\n  rewards:float array ->\n  terminated:bool array ->\n  truncated:bool array ->\n  gamma:float ->\n  float array\n(** [returns ~rewards ~terminated ~truncated ~gamma] computes discounted\n    cumulative returns. The accumulation resets at terminal or truncated states.\n*)\n\n(** {1:normalize Normalization} *)\n\nval normalize : ?eps:float -> float array -> float array\n(** [normalize arr] is a copy of [arr] with zero mean and unit variance. [eps]\n    (default [1e-8]) prevents division by zero. *)\n"
  },
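  {
    "path": "packages/fehu/examples/gae_sketch.ml",
    "content": "(* Illustrative usage sketch for Gae.compute_from_values on a three-step\n   rollout that ends in termination; not part of the original tree. The\n   rewards and value estimates are made up for the example. *)\n\nopen Fehu\n\nlet () =\n  let advantages, returns =\n    Gae.compute_from_values\n      ~rewards:[| 1.0; 1.0; 1.0 |]\n      ~values:[| 0.5; 0.5; 0.5 |]\n      ~terminated:[| false; false; true |]\n      ~truncated:[| false; false; false |]\n      ~last_value:0.0 ~gamma:0.99 ~lambda:0.95\n  in\n  Array.iteri\n    (fun t adv -> Printf.printf \"t=%d adv=%.3f ret=%.3f\\n\" t adv returns.(t))\n    advantages\n"
  },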
  {
    "path": "packages/fehu/lib/info.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nmodule String_map = Map.Make (String)\n\ntype t = Value.t String_map.t\n\nlet empty = String_map.empty\nlet is_empty = String_map.is_empty\nlet set key value info = String_map.add key value info\nlet find key info = String_map.find_opt key info\n\nlet find_exn key info =\n  match String_map.find_opt key info with\n  | Some v -> v\n  | None -> invalid_arg (Printf.sprintf \"Info.find_exn: key %S not present\" key)\n\nlet remove key info = String_map.remove key info\nlet merge a b = String_map.union (fun _key _left right -> Some right) a b\nlet to_list info = String_map.bindings info\n\nlet of_list kvs =\n  List.fold_left (fun acc (k, v) -> String_map.add k v acc) String_map.empty kvs\n\nlet to_value info = Value.Dict (String_map.bindings info)\n\nlet pp ppf t =\n  let bindings = String_map.bindings t in\n  Format.fprintf ppf \"{\";\n  List.iteri\n    (fun i (k, v) ->\n      if i > 0 then Format.fprintf ppf \"; \";\n      Format.fprintf ppf \"%s: %a\" k Value.pp v)\n    bindings;\n  Format.fprintf ppf \"}\"\n\n(* Convenience constructors *)\n\nlet null = Value.Null\nlet bool b = Value.Bool b\nlet int i = Value.Int i\nlet float f = Value.Float f\nlet string s = Value.String s\nlet int_array arr = Value.Int_array (Array.copy arr)\nlet float_array arr = Value.Float_array (Array.copy arr)\nlet bool_array arr = Value.Bool_array (Array.copy arr)\n"
  },
  {
    "path": "packages/fehu/lib/info.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Step metadata dictionaries.\n\n    Info dictionaries carry auxiliary data returned by {!Env.reset} and\n    {!Env.step}. Keys are strings and values are {!Value.t}. *)\n\n(** {1:types Types} *)\n\ntype t\n(** The type for info dictionaries. *)\n\n(** {1:constructors Constructors} *)\n\nval empty : t\n(** [empty] is the empty dictionary. *)\n\nval of_list : (string * Value.t) list -> t\n(** [of_list kvs] is a dictionary from the given key-value pairs. *)\n\n(** {1:predicates Predicates} *)\n\nval is_empty : t -> bool\n(** [is_empty t] is [true] iff [t] has no bindings. *)\n\n(** {1:ops Operations} *)\n\nval set : string -> Value.t -> t -> t\n(** [set k v t] is [t] with [k] bound to [v]. *)\n\nval find : string -> t -> Value.t option\n(** [find k t] is the value bound to [k] in [t], if any. *)\n\nval find_exn : string -> t -> Value.t\n(** [find_exn k t] is the value bound to [k] in [t].\n\n    Raises [Invalid_argument] if [k] is not present. *)\n\nval remove : string -> t -> t\n(** [remove k t] is [t] without the binding for [k]. *)\n\nval merge : t -> t -> t\n(** [merge a b] is the union of [a] and [b]. When both have a binding for the\n    same key, the value from [b] wins. *)\n\n(** {1:converting Converting} *)\n\nval to_list : t -> (string * Value.t) list\n(** [to_list t] is the bindings of [t] in key order. *)\n\nval to_value : t -> Value.t\n(** [to_value t] is [t] as a {!Value.Dict}. *)\n\n(** {1:fmt Formatting} *)\n\nval pp : Format.formatter -> t -> unit\n(** [pp] formats an info dictionary for debugging. *)\n\n(** {1:convenience Convenience value constructors} *)\n\nval null : Value.t\n(** [null] is {!Value.Null}. *)\n\nval bool : bool -> Value.t\n(** [bool b] is [Value.Bool b]. 
*)\n\nval int : int -> Value.t\n(** [int i] is [Value.Int i]. *)\n\nval float : float -> Value.t\n(** [float f] is [Value.Float f]. *)\n\nval string : string -> Value.t\n(** [string s] is [Value.String s]. *)\n\nval int_array : int array -> Value.t\n(** [int_array arr] is [Value.Int_array (Array.copy arr)]. *)\n\nval float_array : float array -> Value.t\n(** [float_array arr] is [Value.Float_array (Array.copy arr)]. *)\n\nval bool_array : bool array -> Value.t\n(** [bool_array arr] is [Value.Bool_array (Array.copy arr)]. *)\n"
  },
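  {
    "path": "packages/fehu/examples/info_sketch.ml",
    "content": "(* Illustrative usage sketch for the Info API; not part of the original\n   tree. *)\n\nopen Fehu\n\nlet () =\n  let info =\n    Info.empty\n    |> Info.set \"episode\" (Info.int 3)\n    |> Info.set \"success\" (Info.bool true)\n  in\n  (match Info.find \"episode\" info with\n  | Some (Value.Int n) -> Printf.printf \"episode %d\\n\" n\n  | _ -> ());\n  Format.printf \"info = %a@.\" Info.pp info\n"
  },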
  {
    "path": "packages/fehu/lib/render.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nmodule Pixel = struct\n  type format = Rgb | Rgba | Gray\n\n  let channels = function Rgb -> 3 | Rgba -> 4 | Gray -> 1\nend\n\ntype image = {\n  width : int;\n  height : int;\n  pixel_format : Pixel.format;\n  data : (int, Bigarray.int8_unsigned_elt, Bigarray.c_layout) Bigarray.Array1.t;\n}\n\nlet err_data_length ~expected ~got =\n  Printf.sprintf\n    \"Render.image: data length %d does not match width * height * channels = %d\"\n    got expected\n\nlet image ~width ~height ?(pixel_format = Pixel.Rgb) data =\n  let expected = width * height * Pixel.channels pixel_format in\n  let got = Bigarray.Array1.dim data in\n  if got <> expected then invalid_arg (err_data_length ~expected ~got);\n  { width; height; pixel_format; data }\n\nlet rollout env ~policy ~steps ~sink () =\n  let obs, _info = Env.reset env () in\n  let current_obs = ref obs in\n  for _ = 1 to steps do\n    let action = policy !current_obs in\n    let step = Env.step env action in\n    (match Env.render env with Some frame -> sink frame | None -> ());\n    current_obs := step.Env.observation;\n    if step.Env.terminated || step.Env.truncated then begin\n      let obs, _info = Env.reset env () in\n      current_obs := obs\n    end\n  done\n\nlet derive_id env suffix =\n  match Env.id env with None -> None | Some id -> Some (id ^ suffix)\n\nlet on_render ~sink env =\n  let maybe_record inner =\n    match Env.render inner with Some frame -> sink frame | None -> ()\n  in\n  Env.wrap\n    ?id:(derive_id env \"/OnRender\")\n    ~observation_space:(Env.observation_space env)\n    ~action_space:(Env.action_space env)\n    ~reset:(fun inner ?options () ->\n      let result = Env.reset inner ?options () in\n      maybe_record inner;\n      
result)\n    ~step:(fun inner action ->\n      let s = Env.step inner action in\n      maybe_record inner;\n      s)\n    ~render:(fun inner -> Env.render inner)\n    env\n"
  },
  {
    "path": "packages/fehu/lib/render.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Visualization primitives.\n\n    {!image} is the standard frame type for rgb-rendered environments.\n    {!rollout} runs a policy and feeds rendered frames to a user-provided sink.\n*)\n\n(** {1:pixel Pixel formats} *)\n\nmodule Pixel : sig\n  (** The type for pixel formats. *)\n  type format =\n    | Rgb  (** 3 channels. *)\n    | Rgba  (** 4 channels. *)\n    | Gray  (** 1 channel. *)\n\n  val channels : format -> int\n  (** [channels fmt] is the number of channels for [fmt]. *)\nend\n\n(** {1:image Images} *)\n\ntype image = {\n  width : int;  (** Width in pixels. *)\n  height : int;  (** Height in pixels. *)\n  pixel_format : Pixel.format;  (** Pixel layout. *)\n  data : (int, Bigarray.int8_unsigned_elt, Bigarray.c_layout) Bigarray.Array1.t;\n      (** Raw pixel data of length [width * height * channels]. *)\n}\n(** The type for rendered frames. *)\n\nval image :\n  width:int ->\n  height:int ->\n  ?pixel_format:Pixel.format ->\n  (int, Bigarray.int8_unsigned_elt, Bigarray.c_layout) Bigarray.Array1.t ->\n  image\n(** [image ~width ~height data] constructs a frame. [pixel_format] defaults to\n    [Rgb].\n\n    Raises [Invalid_argument] if [Bigarray.Array1.dim data] does not equal\n    [width * height * channels]. *)\n\n(** {1:rollout Rollout} *)\n\nval rollout :\n  ('obs, 'act, image) Env.t ->\n  policy:('obs -> 'act) ->\n  steps:int ->\n  sink:(image -> unit) ->\n  unit ->\n  unit\n(** [rollout env ~policy ~steps ~sink ()] runs [policy] in [env] for up to\n    [steps] steps. Each rendered frame is passed to [sink]. The environment is\n    reset at the start and on episode boundaries. 
*)\n\n(** {1:recording Recording} *)\n\nval on_render :\n  sink:(image -> unit) -> ('obs, 'act, image) Env.t -> ('obs, 'act, image) Env.t\n(** [on_render ~sink env] wraps [env] so that every rendered frame after\n    {!Env.reset} and {!Env.step} is passed to [sink]. The wrapper is\n    transparent: observations, actions, rewards, and termination signals pass\n    through unchanged. *)\n"
  },
  {
    "path": "packages/fehu/lib/space.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Error messages *)\n\nlet err_discrete_n = \"Space.Discrete.create: n must be strictly positive\"\nlet err_discrete_not = \"Space.Discrete: not a discrete space\"\nlet err_box_empty = \"Space.Box.create: low cannot be empty\"\nlet err_box_shape = \"Space.Box.create: low and high must have identical lengths\"\nlet err_box_not = \"Space.Box: not a box space\"\nlet err_mb_n = \"Space.Multi_binary.create: n must be strictly positive\"\nlet err_md_empty = \"Space.Multi_discrete.create: nvec must not be empty\"\nlet err_seq_min = \"Space.Sequence.create: min_length must be non-negative\"\nlet err_seq_max = \"Space.Sequence.create: max_length must be >= min_length\"\nlet err_text_max = \"Space.Text.create: max_length must be positive\"\nlet err_text_charset = \"Space.Text.create: charset must not be empty\"\nlet strf = Printf.sprintf\nlet errorf fmt = Format.kasprintf (fun msg -> Error msg) fmt\n\n(* Spec *)\n\ntype spec =\n  | Discrete of { start : int; n : int }\n  | Box of { low : float array; high : float array }\n  | Multi_binary of { n : int }\n  | Multi_discrete of { nvec : int array }\n  | Tuple of spec list\n  | Dict of (string * spec) list\n  | Sequence of { min_length : int; max_length : int option; base : spec }\n  | Text of { charset : string; max_length : int }\n\nlet rec equal_spec a b =\n  match (a, b) with\n  | Discrete a, Discrete b -> a.start = b.start && a.n = b.n\n  | Box a, Box b -> a.low = b.low && a.high = b.high\n  | Multi_binary a, Multi_binary b -> a.n = b.n\n  | Multi_discrete a, Multi_discrete b -> a.nvec = b.nvec\n  | Tuple a, Tuple b ->\n      List.length a = List.length b && List.for_all2 equal_spec a b\n  | Dict a, Dict b ->\n      List.length a = List.length b\n      && 
List.for_all2\n           (fun (ka, sa) (kb, sb) -> String.equal ka kb && equal_spec sa sb)\n           a b\n  | Sequence a, Sequence b ->\n      a.min_length = b.min_length\n      && a.max_length = b.max_length\n      && equal_spec a.base b.base\n  | Text a, Text b ->\n      String.equal a.charset b.charset && a.max_length = b.max_length\n  | ( ( Discrete _ | Box _ | Multi_binary _ | Multi_discrete _ | Tuple _\n      | Dict _ | Sequence _ | Text _ ),\n      _ ) ->\n      false\n\n(* Space type *)\n\ntype 'a t = {\n  spec : spec;\n  shape : int array option;\n  contains : 'a -> bool;\n  sample : unit -> 'a;\n  pack : 'a -> Value.t;\n  unpack : Value.t -> ('a, string) result;\n  boundaries : Value.t list;\n  box_bounds : (float array * float array) option;\n  discrete_info : (int * int) option;\n}\n\ntype packed = Pack : 'a t -> packed\n\nlet spec s = s.spec\nlet shape s = s.shape\nlet contains s v = s.contains v\nlet sample s = s.sample ()\nlet pack s v = s.pack v\nlet unpack s v = s.unpack v\nlet boundary_values s = s.boundaries\n\n(* Discrete *)\n\nmodule Discrete = struct\n  type element = (int32, Nx.int32_elt) Nx.t\n\n  let to_int tensor =\n    let reshaped = Nx.reshape [| 1 |] tensor in\n    let arr : Int32.t array = Nx.to_array reshaped in\n    Int32.to_int arr.(0)\n\n  let of_int v = Nx.scalar Nx.int32 (Int32.of_int v)\n\n  let create ?(start = 0) n =\n    if n <= 0 then invalid_arg err_discrete_n;\n    let hi = start + n in\n    let contains tensor =\n      let reshaped = Nx.reshape [| 1 |] tensor in\n      let arr : Int32.t array = Nx.to_array reshaped in\n      Array.length arr = 1\n      &&\n      let v = Int32.to_int arr.(0) in\n      v >= start && v < hi\n    in\n    let sample () =\n      let tensor = Nx.randint Nx.int32 ~high:hi [| 1 |] start in\n      let arr : Int32.t array = Nx.to_array tensor in\n      Nx.scalar Nx.int32 arr.(0)\n    in\n    let pack tensor =\n      let arr : Int32.t array = Nx.to_array (Nx.reshape [| 1 |] tensor) in\n      
Value.Int (Int32.to_int arr.(0))\n    in\n    let unpack = function\n      | Value.Int v when v >= start && v < hi ->\n          Ok (Nx.scalar Nx.int32 (Int32.of_int v))\n      | Value.Int v -> errorf \"Discrete value %d outside [%d, %d)\" v start hi\n      | other -> errorf \"Discrete expects Int, got %s\" (Value.to_string other)\n    in\n    let boundaries =\n      if n = 1 then [ Value.Int start ]\n      else [ Value.Int start; Value.Int (hi - 1) ]\n    in\n    {\n      spec = Discrete { start; n };\n      shape = None;\n      contains;\n      sample;\n      pack;\n      unpack;\n      boundaries;\n      box_bounds = None;\n      discrete_info = Some (start, n);\n    }\n\n  let n s =\n    match s.discrete_info with\n    | Some (_, n) -> n\n    | None -> invalid_arg err_discrete_not\n\n  let start s =\n    match s.discrete_info with\n    | Some (start, _) -> start\n    | None -> invalid_arg err_discrete_not\nend\n\n(* Box *)\n\nmodule Box = struct\n  type element = (float, Nx.float32_elt) Nx.t\n\n  let create ~low ~high =\n    let arity = Array.length low in\n    if arity = 0 then invalid_arg err_box_empty;\n    if arity <> Array.length high then invalid_arg err_box_shape;\n    Array.iteri\n      (fun i lo ->\n        if lo > high.(i) then\n          invalid_arg\n            (strf \"Space.Box.create: low[%d]=%g > high[%d]=%g\" i lo i high.(i)))\n      low;\n    let low = Array.copy low in\n    let high = Array.copy high in\n    let contains tensor =\n      let sh = Nx.shape tensor in\n      Array.length sh = 1\n      && sh.(0) = arity\n      &&\n      let values = Nx.to_array tensor in\n      let rec loop i =\n        if i = arity then true\n        else\n          let v = values.(i) in\n          v >= low.(i) && v <= high.(i) && loop (i + 1)\n      in\n      loop 0\n    in\n    let sample () =\n      let uniform = Nx.rand Nx.float32 [| arity |] in\n      let draws = Nx.to_array uniform in\n      let values =\n        Array.init arity (fun i ->\n            let 
lo = low.(i) in\n            let hi = high.(i) in\n            if Float.equal lo hi then lo\n            else\n              let range = hi -. lo in\n              if Float.is_finite range then lo +. (draws.(i) *. range)\n              else\n                let v = -1e6 +. (draws.(i) *. 2e6) in\n                Float.max lo (Float.min hi v))\n      in\n      Nx.create Nx.float32 [| arity |] values\n    in\n    let pack tensor = Value.Float_array (Array.copy (Nx.to_array tensor)) in\n    let unpack = function\n      | Value.Float_array arr when Array.length arr = arity ->\n          let tensor = Nx.create Nx.float32 [| arity |] arr in\n          if contains tensor then Ok tensor\n          else\n            errorf \"Box value outside bounds: %s\"\n              (Value.to_string (Value.Float_array arr))\n      | Value.Float_array arr ->\n          errorf \"Box expects vector of size %d, got size %d\" arity\n            (Array.length arr)\n      | other ->\n          errorf \"Box expects Float_array, got %s\" (Value.to_string other)\n    in\n    let identical =\n      let same = ref true in\n      let i = ref 0 in\n      while !same && !i < arity do\n        if not (Float.equal low.(!i) high.(!i)) then same := false;\n        incr i\n      done;\n      !same\n    in\n    let boundaries =\n      let lo_v = Value.Float_array (Array.copy low) in\n      let hi_v = Value.Float_array (Array.copy high) in\n      if identical then [ lo_v ] else [ lo_v; hi_v ]\n    in\n    let box_bounds = Some (Array.copy low, Array.copy high) in\n    {\n      spec = Box { low = Array.copy low; high = Array.copy high };\n      shape = Some [| arity |];\n      contains;\n      sample;\n      pack;\n      unpack;\n      boundaries;\n      box_bounds;\n      discrete_info = None;\n    }\n\n  let bounds s =\n    match s.box_bounds with\n    | Some (low, high) -> (Array.copy low, Array.copy high)\n    | None -> invalid_arg err_box_not\nend\n\n(* Multi_binary *)\n\nmodule Multi_binary = struct\n  
type element = (int32, Nx.int32_elt) Nx.t\n\n  let create n =\n    if n <= 0 then invalid_arg err_mb_n;\n    let contains tensor =\n      let sh = Nx.shape tensor in\n      Array.length sh = 1\n      && sh.(0) = n\n      &&\n      let arr : Int32.t array = Nx.to_array tensor in\n      Array.for_all (fun v -> v = Int32.zero || v = Int32.one) arr\n    in\n    let sample () = Nx.randint Nx.int32 ~high:2 [| n |] 0 in\n    let pack tensor =\n      let arr : Int32.t array = Nx.to_array tensor in\n      Value.Bool_array\n        (Array.init n (fun i -> not (Int32.equal arr.(i) Int32.zero)))\n    in\n    let unpack = function\n      | Value.Bool_array arr when Array.length arr = n ->\n          let data =\n            Array.map (fun b -> if b then Int32.one else Int32.zero) arr\n          in\n          Ok (Nx.create Nx.int32 [| n |] data)\n      | Value.Bool_array arr ->\n          errorf \"Multi_binary expects vector of size %d, got size %d\" n\n            (Array.length arr)\n      | other ->\n          errorf \"Multi_binary expects Bool_array, got %s\"\n            (Value.to_string other)\n    in\n    let boundaries =\n      [\n        Value.Bool_array (Array.make n false);\n        Value.Bool_array (Array.make n true);\n      ]\n    in\n    {\n      spec = Multi_binary { n };\n      shape = Some [| n |];\n      contains;\n      sample;\n      pack;\n      unpack;\n      boundaries;\n      box_bounds = None;\n      discrete_info = None;\n    }\nend\n\n(* Multi_discrete *)\n\nmodule Multi_discrete = struct\n  type element = (int32, Nx.int32_elt) Nx.t\n\n  let create nvec =\n    let arity = Array.length nvec in\n    if arity = 0 then invalid_arg err_md_empty;\n    let nvec = Array.copy nvec in\n    Array.iteri\n      (fun i bound ->\n        if bound <= 0 then\n          invalid_arg\n            (strf \"Space.Multi_discrete.create: nvec[%d] must be > 0\" i))\n      nvec;\n    let contains tensor =\n      let sh = Nx.shape tensor in\n      Array.length sh = 1\n      && 
sh.(0) = arity\n      &&\n      let arr : Int32.t array = Nx.to_array tensor in\n      let rec loop i =\n        if i = arity then true\n        else\n          let v = Int32.to_int arr.(i) in\n          v >= 0 && v < nvec.(i) && loop (i + 1)\n      in\n      loop 0\n    in\n    let sample () =\n      let data =\n        Array.init arity (fun i ->\n            let tensor = Nx.randint Nx.int32 ~high:nvec.(i) [| 1 |] 0 in\n            let arr = Nx.to_array tensor in\n            arr.(0))\n      in\n      Nx.create Nx.int32 [| arity |] data\n    in\n    let pack tensor =\n      let arr : Int32.t array = Nx.to_array tensor in\n      Value.Int_array (Array.map Int32.to_int arr)\n    in\n    let unpack = function\n      | Value.Int_array arr when Array.length arr = arity ->\n          let data = Array.map Int32.of_int arr in\n          let tensor = Nx.create Nx.int32 [| arity |] data in\n          if contains tensor then Ok tensor\n          else\n            errorf \"Multi_discrete value outside bounds: %s\"\n              (Value.to_string (Value.Int_array arr))\n      | Value.Int_array arr ->\n          errorf \"Multi_discrete expects vector of size %d, got size %d\" arity\n            (Array.length arr)\n      | other ->\n          errorf \"Multi_discrete expects Int_array, got %s\"\n            (Value.to_string other)\n    in\n    let boundaries =\n      [\n        Value.Int_array (Array.make arity 0);\n        Value.Int_array (Array.init arity (fun i -> nvec.(i) - 1));\n      ]\n    in\n    {\n      spec = Multi_discrete { nvec = Array.copy nvec };\n      shape = Some [| arity |];\n      contains;\n      sample;\n      pack;\n      unpack;\n      boundaries;\n      box_bounds = None;\n      discrete_info = None;\n    }\nend\n\n(* Tuple *)\n\nmodule Tuple = struct\n  type element = Value.t list\n\n  let create spaces =\n    let spaces = Array.of_list spaces in\n    let len = Array.length spaces in\n    let contains values =\n      let rec loop i = function\n        | 
[] -> i = len\n        | v :: rest -> (\n            if i >= len then false\n            else\n              let (Pack s) = spaces.(i) in\n              match s.unpack v with\n              | Ok _ -> loop (i + 1) rest\n              | Error _ -> false)\n      in\n      loop 0 values\n    in\n    let sample () =\n      let values =\n        Array.to_list\n          (Array.init len (fun i ->\n               let (Pack s) = spaces.(i) in\n               let v = s.sample () in\n               s.pack v))\n      in\n      values\n    in\n    let pack values = Value.List values in\n    let unpack = function\n      | Value.List values ->\n          if List.length values <> len then\n            errorf \"Tuple expects %d elements, got %d\" len (List.length values)\n          else\n            let rec loop i = function\n              | [] -> Ok values\n              | v :: rest -> (\n                  let (Pack s) = spaces.(i) in\n                  match s.unpack v with\n                  | Ok _ -> loop (i + 1) rest\n                  | Error msg -> errorf \"Tuple element %d: %s\" i msg)\n            in\n            loop 0 values\n      | other -> errorf \"Tuple expects List, got %s\" (Value.to_string other)\n    in\n    let sub_specs = Array.to_list (Array.map (fun (Pack s) -> s.spec) spaces) in\n    {\n      spec = Tuple sub_specs;\n      shape = None;\n      contains;\n      sample;\n      pack;\n      unpack;\n      boundaries = [];\n      box_bounds = None;\n      discrete_info = None;\n    }\nend\n\n(* Dict *)\n\nmodule Dict = struct\n  type element = (string * Value.t) list\n\n  module String_map = Map.Make (String)\n\n  let create entries =\n    let map =\n      List.fold_left\n        (fun acc (key, space) ->\n          if String_map.mem key acc then\n            invalid_arg (strf \"Space.Dict.create: duplicate key '%s'\" key);\n          String_map.add key space acc)\n        String_map.empty entries\n    in\n    let contains values =\n      let rec loop remaining m 
=\n        match remaining with\n        | [] -> String_map.is_empty m\n        | (key, value) :: rest -> (\n            match String_map.find_opt key m with\n            | None -> false\n            | Some (Pack s) -> (\n                match s.unpack value with\n                | Ok _ -> loop rest (String_map.remove key m)\n                | Error _ -> false))\n      in\n      loop values map\n    in\n    let sample () =\n      if String_map.is_empty map then []\n      else\n        let acc =\n          String_map.fold\n            (fun key (Pack s) acc ->\n              let v = s.sample () in\n              (key, s.pack v) :: acc)\n            map []\n        in\n        List.rev acc\n    in\n    let pack values = Value.Dict values in\n    let unpack = function\n      | Value.Dict values ->\n          if contains values then Ok values\n          else errorf \"Dict contains unexpected keys or values\"\n      | other -> errorf \"Dict expects Dict, got %s\" (Value.to_string other)\n    in\n    let sub_specs =\n      List.rev\n        (String_map.fold (fun key (Pack s) acc -> (key, s.spec) :: acc) map [])\n    in\n    {\n      spec = Dict sub_specs;\n      shape = None;\n      contains;\n      sample;\n      pack;\n      unpack;\n      boundaries = [];\n      box_bounds = None;\n      discrete_info = None;\n    }\nend\n\n(* Sequence *)\n\nmodule Sequence = struct\n  type 'a element = 'a list\n\n  let create ?(min_length = 0) ?max_length base =\n    if min_length < 0 then invalid_arg err_seq_min;\n    let max_length =\n      match max_length with\n      | None -> None\n      | Some m when m < min_length -> invalid_arg err_seq_max\n      | Some _ as m -> m\n    in\n    let contains values =\n      let len = List.length values in\n      len >= min_length\n      && (match max_length with None -> true | Some m -> len <= m)\n      && List.for_all (fun v -> base.contains v) values\n    in\n    let sample () =\n      let length =\n        match max_length with\n        | 
None -> min_length\n        | Some max_len ->\n            if max_len = min_length then min_length\n            else\n              let tensor =\n                Nx.randint Nx.int32 ~high:(max_len + 1) [| 1 |] min_length\n              in\n              let arr = Nx.to_array tensor in\n              Int32.to_int arr.(0)\n      in\n      if length = 0 then []\n      else\n        let rec build i acc =\n          if i = length then List.rev acc\n          else\n            let v = base.sample () in\n            build (i + 1) (v :: acc)\n        in\n        build 0 []\n    in\n    let pack values = Value.List (List.map (fun v -> base.pack v) values) in\n    let unpack = function\n      | Value.List values ->\n          let len = List.length values in\n          let exceeds =\n            match max_length with None -> false | Some m -> len > m\n          in\n          if len < min_length || exceeds then\n            match max_length with\n            | None ->\n                errorf \"Sequence length %d shorter than minimum %d\" len\n                  min_length\n            | Some m ->\n                errorf \"Sequence length %d outside [%d, %d]\" len min_length m\n          else\n            let rec loop acc = function\n              | [] -> Ok (List.rev acc)\n              | v :: rest -> (\n                  match base.unpack v with\n                  | Ok x -> loop (x :: acc) rest\n                  | Error _ as err -> err)\n            in\n            loop [] values\n      | other -> errorf \"Sequence expects List, got %s\" (Value.to_string other)\n    in\n    {\n      spec = Sequence { min_length; max_length; base = base.spec };\n      shape = None;\n      contains;\n      sample;\n      pack;\n      unpack;\n      boundaries = [];\n      box_bounds = None;\n      discrete_info = None;\n    }\nend\n\n(* Text *)\n\nmodule Text = struct\n  type element = string\n\n  let default_charset =\n    \"abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789 \"\n\n 
 let create ?(charset = default_charset) ?(max_length = 64) () =\n    if max_length <= 0 then invalid_arg err_text_max;\n    let charset_len = String.length charset in\n    if charset_len = 0 then invalid_arg err_text_charset;\n    let contains value =\n      let len = String.length value in\n      len <= max_length\n      &&\n      let rec loop i =\n        if i = len then true\n        else String.contains charset value.[i] && loop (i + 1)\n      in\n      loop 0\n    in\n    let sample () =\n      let length =\n        if max_length = 1 then 1\n        else\n          let tensor = Nx.randint Nx.int32 ~high:(max_length + 1) [| 1 |] 1 in\n          let arr = Nx.to_array tensor in\n          Int32.to_int arr.(0)\n      in\n      if length = 0 then \"\"\n      else\n        let idxs = Nx.randint Nx.int32 ~high:charset_len [| length |] 0 in\n        let arr = Nx.to_array idxs in\n        Bytes.init length (fun i -> charset.[Int32.to_int arr.(i)])\n        |> Bytes.to_string\n    in\n    let pack value = Value.String value in\n    let unpack = function\n      | Value.String s when contains s -> Ok s\n      | Value.String s -> errorf \"Text value '%s' violates constraints\" s\n      | other -> errorf \"Text expects String, got %s\" (Value.to_string other)\n    in\n    let example = if charset_len = 0 then \"\" else String.make 1 charset.[0] in\n    let boundaries = [ Value.String \"\"; Value.String example ] in\n    {\n      spec = Text { charset; max_length };\n      shape = None;\n      contains;\n      sample;\n      pack;\n      unpack;\n      boundaries;\n      box_bounds = None;\n      discrete_info = None;\n    }\nend\n"
  },
  {
    "path": "packages/fehu/lib/space.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Observation and action spaces.\n\n    Spaces define valid observations and actions for reinforcement learning\n    environments. They specify shapes, constraints, and provide methods to\n    validate, sample, and serialize values.\n\n    Each space type corresponds to a common RL scenario: discrete choices,\n    continuous vectors, binary indicators, composite structures, and\n    variable-length sequences. *)\n\n(** {1:spec Structural description} *)\n\n(** Structural description of a space. Two spaces are compatible when their\n    specs are equal. *)\ntype spec =\n  | Discrete of { start : int; n : int }\n      (** Integer choices in \\[[start]; [start + n - 1]\\]. *)\n  | Box of { low : float array; high : float array }\n      (** Continuous vector bounded per dimension. *)\n  | Multi_binary of { n : int }  (** Binary vector of length [n]. *)\n  | Multi_discrete of { nvec : int array }\n      (** Multiple discrete axes with per-axis cardinalities. *)\n  | Tuple of spec list  (** Fixed-length heterogeneous sequence. *)\n  | Dict of (string * spec) list  (** Named fields with different types. *)\n  | Sequence of { min_length : int; max_length : int option; base : spec }\n      (** Variable-length homogeneous sequence. *)\n  | Text of { charset : string; max_length : int }\n      (** Character strings from a fixed alphabet. *)\n\nval equal_spec : spec -> spec -> bool\n(** [equal_spec a b] is [true] iff [a] and [b] describe structurally identical\n    spaces. *)\n\n(** {1:spaces Spaces} *)\n\ntype 'a t\n(** The type for spaces over values of type ['a]. 
A space is self-contained: all\n    bounds, constraints, and serialization logic are stored in the value itself.\n*)\n\ntype packed =\n  | Pack : 'a t -> packed\n      (** Type-erased space for heterogeneous collections. *)\n\n(** {1:ops Operations} *)\n\nval spec : 'a t -> spec\n(** [spec s] is the structural description of [s]. *)\n\nval shape : 'a t -> int array option\n(** [shape s] is the dimensionality of [s], if defined. [None] for scalar or\n    variable-length spaces. *)\n\nval contains : 'a t -> 'a -> bool\n(** [contains s v] is [true] iff [v] is valid in [s]. *)\n\nval sample : 'a t -> 'a\n(** [sample s] is a uniformly sampled value from [s].\n\n    Random keys are drawn from the implicit RNG scope. *)\n\nval pack : 'a t -> 'a -> Value.t\n(** [pack s v] is [v] converted to the universal {!Value.t} representation. *)\n\nval unpack : 'a t -> Value.t -> ('a, string) result\n(** [unpack s v] is [Ok x] if [v] can be converted to a valid element of [s], or\n    [Error msg] otherwise. *)\n\nval boundary_values : 'a t -> Value.t list\n(** [boundary_values s] is a list of representative edge-case values for [s].\n    Includes lower/upper bounds or canonical sentinels when known. The empty\n    list when no boundary values apply. *)\n\n(** {1:space_types Space types} *)\n\nmodule Discrete : sig\n  type element = (int32, Nx.int32_elt) Nx.t\n  (** Discrete action represented as a scalar int32 tensor. *)\n\n  val create : ?start:int -> int -> element t\n  (** [create ?start n] is a discrete space with [n] choices in the range\n      \\[[start]; [start + n - 1]\\]. [start] defaults to [0].\n\n      Raises [Invalid_argument] if [n <= 0]. *)\n\n  val n : element t -> int\n  (** [n s] is the number of choices in [s].\n\n      Raises [Invalid_argument] if [s] is not a discrete space. *)\n\n  val start : element t -> int\n  (** [start s] is the starting value of [s].\n\n      Raises [Invalid_argument] if [s] is not a discrete space. 
*)\n\n  val to_int : element -> int\n  (** [to_int e] is the integer value of the discrete element [e]. *)\n\n  val of_int : int -> element\n  (** [of_int v] is a discrete element with value [v]. *)\nend\n\nmodule Box : sig\n  type element = (float, Nx.float32_elt) Nx.t\n  (** Continuous vector represented as a float32 tensor. *)\n\n  val create : low:float array -> high:float array -> element t\n  (** [create ~low ~high] is a continuous space where element [i] satisfies\n      [low.(i) <= x.(i) <= high.(i)]. Both arrays must have the same positive\n      length.\n\n      When the range of a dimension is not finite (e.g. bounds set to\n      [Float.max_float]), sampling falls back to a uniform draw in \\[[-1e6];\n      [1e6]\\] clamped to bounds.\n\n      Raises [Invalid_argument] if [low] is empty, if [low] and [high] differ in\n      length, or if any [low.(i) > high.(i)]. *)\n\n  val bounds : element t -> float array * float array\n  (** [bounds s] is [(low, high)] copies of the bound vectors.\n\n      Raises [Invalid_argument] if [s] is not a box space. *)\nend\n\nmodule Multi_binary : sig\n  type element = (int32, Nx.int32_elt) Nx.t\n  (** Binary vector for multi-label scenarios. *)\n\n  val create : int -> element t\n  (** [create n] is a binary vector space of length [n]. Valid values are int32\n      tensors with [n] elements, each 0 or 1.\n\n      Raises [Invalid_argument] if [n <= 0]. *)\nend\n\nmodule Multi_discrete : sig\n  type element = (int32, Nx.int32_elt) Nx.t\n  (** Multiple discrete choices with independent cardinalities. *)\n\n  val create : int array -> element t\n  (** [create nvec] is a multi-discrete space where element [i] is in \\[[0];\n      [nvec.(i) - 1]\\].\n\n      Raises [Invalid_argument] if [nvec] is empty or any [nvec.(i) <= 0]. *)\nend\n\nmodule Tuple : sig\n  type element = Value.t list\n  (** Fixed-length heterogeneous sequence in {!Value.t} form. 
*)\n\n  val create : packed list -> element t\n  (** [create spaces] is a tuple space. Valid values are lists where element [i]\n      belongs to [spaces.(i)]. {!unpack} validates each element against its\n      subspace. *)\nend\n\nmodule Dict : sig\n  type element = (string * Value.t) list\n  (** Named fields with different space types. *)\n\n  val create : (string * packed) list -> element t\n  (** [create fields] is a dictionary space with named fields. Valid values are\n      association lists matching the keys and subspaces of [fields].\n\n      Raises [Invalid_argument] if [fields] contains duplicate keys. *)\nend\n\nmodule Sequence : sig\n  type 'a element = 'a list\n  (** Variable-length homogeneous sequence. *)\n\n  val create : ?min_length:int -> ?max_length:int -> 'a t -> 'a element t\n  (** [create ?min_length ?max_length s] is a sequence space over [s].\n      [min_length] defaults to [0]. When [max_length] is provided, sampling\n      draws a uniform length in \\[[min_length]; [max_length]\\]; otherwise the\n      sampler returns sequences of length [min_length].\n\n      Raises [Invalid_argument] if [min_length < 0] or\n      [max_length < min_length]. *)\nend\n\nmodule Text : sig\n  type element = string\n  (** String space for textual observations or actions. *)\n\n  val create : ?charset:string -> ?max_length:int -> unit -> element t\n  (** [create ?charset ?max_length ()] is a text space. [charset] defaults to\n      alphanumeric plus space. [max_length] defaults to [64]. Valid strings\n      contain only characters from [charset] and have length at most\n      [max_length].\n\n      Raises [Invalid_argument] if [max_length <= 0] or [charset] is empty. *)\nend\n"
  },
  {
    "path": "packages/fehu/lib/value.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\ntype t =\n  | Null\n  | Bool of bool\n  | Int of int\n  | Float of float\n  | String of string\n  | Int_array of int array\n  | Float_array of float array\n  | Bool_array of bool array\n  | List of t list\n  | Dict of (string * t) list\n\n(* Equality *)\n\nlet rec equal a b =\n  match (a, b) with\n  | Null, Null -> true\n  | Bool a, Bool b -> Bool.equal a b\n  | Int a, Int b -> Int.equal a b\n  | Float a, Float b -> Float.equal a b\n  | String a, String b -> String.equal a b\n  | Int_array a, Int_array b -> a = b\n  | Float_array a, Float_array b -> a = b\n  | Bool_array a, Bool_array b -> a = b\n  | List a, List b -> equal_list a b\n  | Dict a, Dict b -> equal_dict a b\n  | ( ( Null | Bool _ | Int _ | Float _ | String _ | Int_array _ | Float_array _\n      | Bool_array _ | List _ | Dict _ ),\n      _ ) ->\n      false\n\nand equal_list a b =\n  match (a, b) with\n  | [], [] -> true\n  | x :: xs, y :: ys -> equal x y && equal_list xs ys\n  | _ -> false\n\nand equal_dict a b =\n  match (a, b) with\n  | [], [] -> true\n  | (ka, va) :: rest_a, (kb, vb) :: rest_b ->\n      String.equal ka kb && equal va vb && equal_dict rest_a rest_b\n  | _ -> false\n\n(* Formatting *)\n\nlet pp_array pp_elt ppf a =\n  Format.fprintf ppf \"[|\";\n  for i = 0 to Array.length a - 1 do\n    if i > 0 then Format.fprintf ppf \"; \";\n    pp_elt ppf a.(i)\n  done;\n  Format.fprintf ppf \"|]\"\n\nlet rec pp ppf = function\n  | Null -> Format.fprintf ppf \"null\"\n  | Bool b -> Format.fprintf ppf \"%b\" b\n  | Int i -> Format.fprintf ppf \"%d\" i\n  | Float f -> Format.fprintf ppf \"%g\" f\n  | String s -> Format.fprintf ppf \"%S\" s\n  | Int_array a -> pp_array (fun ppf v -> Format.fprintf ppf \"%d\" v) ppf a\n  | Float_array 
a -> pp_array (fun ppf v -> Format.fprintf ppf \"%g\" v) ppf a\n  | Bool_array a -> pp_array (fun ppf v -> Format.fprintf ppf \"%b\" v) ppf a\n  | List items ->\n      Format.fprintf ppf \"[\";\n      List.iteri\n        (fun i v ->\n          if i > 0 then Format.fprintf ppf \"; \";\n          pp ppf v)\n        items;\n      Format.fprintf ppf \"]\"\n  | Dict fields ->\n      Format.fprintf ppf \"{\";\n      List.iteri\n        (fun i (k, v) ->\n          if i > 0 then Format.fprintf ppf \"; \";\n          Format.fprintf ppf \"%s: %a\" k pp v)\n        fields;\n      Format.fprintf ppf \"}\"\n\nlet to_string v = Format.asprintf \"%a\" pp v\n"
  },
  {
    "path": "packages/fehu/lib/value.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Universal value type.\n\n    Values represent heterogeneous data flowing through spaces and info\n    dictionaries. Each variant wraps one kind of scalar, array, or composite\n    datum. *)\n\n(** {1:types Types} *)\n\n(** The type for universal values. *)\ntype t =\n  | Null  (** No value. *)\n  | Bool of bool  (** A boolean. *)\n  | Int of int  (** An integer. *)\n  | Float of float  (** A float. *)\n  | String of string  (** A string. *)\n  | Int_array of int array  (** An integer array. *)\n  | Float_array of float array  (** A float array. *)\n  | Bool_array of bool array  (** A boolean array. *)\n  | List of t list  (** A heterogeneous list. *)\n  | Dict of (string * t) list  (** A string-keyed association list. *)\n\n(** {1:predicates Predicates} *)\n\nval equal : t -> t -> bool\n(** [equal a b] is [true] iff [a] and [b] are structurally equal. *)\n\n(** {1:fmt Formatting} *)\n\nval pp : Format.formatter -> t -> unit\n(** [pp] formats a value for debugging. *)\n\nval to_string : t -> string\n(** [to_string v] is [v] formatted as a string via {!pp}. *)\n"
  },
  {
    "path": "packages/fehu/lib/vec_env.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nlet strf = Printf.sprintf\n\n(* Error messages *)\n\nlet err_empty = \"Vec_env.create: env list must not be empty\"\nlet err_action_len n m = strf \"Vec_env.step: expected %d actions, got %d\" n m\n\nlet err_space kind =\n  strf \"Vec_env.create: all environments must have the same %s space\" kind\n\n(* Types *)\n\ntype 'obs step = {\n  observations : 'obs array;\n  rewards : float array;\n  terminated : bool array;\n  truncated : bool array;\n  infos : Info.t array;\n}\n\ntype ('obs, 'act, 'render) t = {\n  envs : ('obs, 'act, 'render) Env.t array;\n  observation_space : 'obs Space.t;\n  action_space : 'act Space.t;\n}\n\n(* Space compatibility *)\n\nlet ensure_compatible envs =\n  let first = envs.(0) in\n  let obs_spec = Space.spec (Env.observation_space first) in\n  let act_spec = Space.spec (Env.action_space first) in\n  for i = 1 to Array.length envs - 1 do\n    let env = envs.(i) in\n    if not (Space.equal_spec obs_spec (Space.spec (Env.observation_space env)))\n    then invalid_arg (err_space \"observation\");\n    if not (Space.equal_spec act_spec (Space.spec (Env.action_space env))) then\n      invalid_arg (err_space \"action\")\n  done\n\n(* Constructor *)\n\nlet create envs =\n  match envs with\n  | [] -> invalid_arg err_empty\n  | first :: _ ->\n      let envs = Array.of_list envs in\n      ensure_compatible envs;\n      {\n        envs;\n        observation_space = Env.observation_space first;\n        action_space = Env.action_space first;\n      }\n\n(* Accessors *)\n\nlet num_envs t = Array.length t.envs\nlet observation_space t = t.observation_space\nlet action_space t = t.action_space\n\n(* Reset *)\n\nlet reset t () =\n  let n = Array.length t.envs in\n  let results = 
Array.init n (fun i -> Env.reset t.envs.(i) ()) in\n  let observations = Array.map fst results in\n  let infos = Array.map snd results in\n  (observations, infos)\n\n(* Step *)\n\nlet step t actions =\n  let n = Array.length t.envs in\n  if Array.length actions <> n then\n    invalid_arg (err_action_len n (Array.length actions));\n  let results = Array.init n (fun i -> Env.step t.envs.(i) actions.(i)) in\n  let observations = Array.make n results.(0).observation in\n  let rewards = Array.make n 0. in\n  let terminated = Array.make n false in\n  let truncated = Array.make n false in\n  let infos = Array.make n Info.empty in\n  for i = 0 to n - 1 do\n    let result = results.(i) in\n    rewards.(i) <- result.reward;\n    terminated.(i) <- result.terminated;\n    truncated.(i) <- result.truncated;\n    if result.terminated || result.truncated then begin\n      let final_obs = Space.pack t.observation_space result.observation in\n      let info = Info.set \"final_observation\" final_obs result.info in\n      let info = Info.set \"final_info\" (Info.to_value result.info) info in\n      let obs, reset_info = Env.reset t.envs.(i) () in\n      observations.(i) <- obs;\n      infos.(i) <- Info.merge info reset_info\n    end\n    else begin\n      observations.(i) <- result.observation;\n      infos.(i) <- result.info\n    end\n  done;\n  { observations; rewards; terminated; truncated; infos }\n\n(* Close *)\n\nlet close t = Array.iter Env.close t.envs\n"
  },
  {
    "path": "packages/fehu/lib/vec_env.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Vectorized environments.\n\n    Runs multiple environment instances and batches their outputs. All\n    environments must have compatible observation and action spaces. Terminated\n    or truncated episodes are automatically reset. *)\n\n(** {1:types Types} *)\n\ntype ('obs, 'act, 'render) t\n(** The type for vectorized environments. *)\n\ntype 'obs step = {\n  observations : 'obs array;  (** One observation per environment. *)\n  rewards : float array;  (** One reward per environment. *)\n  terminated : bool array;  (** Per-environment termination flags. *)\n  truncated : bool array;  (** Per-environment truncation flags. *)\n  infos : Info.t array;  (** Per-environment info dictionaries. *)\n}\n(** The type for batched step results. All arrays have length {!num_envs}. *)\n\n(** {1:constructors Constructors} *)\n\nval create : ('obs, 'act, 'render) Env.t list -> ('obs, 'act, 'render) t\n(** [create envs] creates a vectorized environment.\n\n    All environments must have structurally identical spaces (checked via\n    {!Space.spec} and {!Space.equal_spec}). Raises [Invalid_argument] if [envs]\n    is empty or spaces differ. *)\n\n(** {1:accessors Accessors} *)\n\nval num_envs : ('obs, 'act, 'render) t -> int\n(** [num_envs t] is the number of environments. *)\n\nval observation_space : ('obs, 'act, 'render) t -> 'obs Space.t\n(** [observation_space t] is the shared observation space. *)\n\nval action_space : ('obs, 'act, 'render) t -> 'act Space.t\n(** [action_space t] is the shared action space. *)\n\n(** {1:lifecycle Lifecycle} *)\n\nval reset : ('obs, 'act, 'render) t -> unit -> 'obs array * Info.t array\n(** [reset t ()] resets all environments. 
*)\n\nval step : ('obs, 'act, 'render) t -> 'act array -> 'obs step\n(** [step t actions] steps all environments with the given actions.\n\n    [actions] must have length [num_envs t]. Terminated or truncated\n    environments are automatically reset. The terminal observation is stored in\n    the step's info under the key [\"final_observation\"] as a packed {!Value.t}.\n    The terminal info is stored under [\"final_info\"].\n\n    Raises [Invalid_argument] if [Array.length actions <> num_envs t]. *)\n\nval close : ('obs, 'act, 'render) t -> unit\n(** [close t] closes all environments. *)\n"
  },
  {
    "path": "packages/fehu/test/dune",
    "content": "(tests\n (names\n  test_value\n  test_info\n  test_space\n  test_env\n  test_env_wrappers\n  test_collect\n  test_buffer\n  test_gae\n  test_eval\n  test_vec_env\n  test_render\n  test_envs)\n (package fehu)\n (libraries fehu fehu.envs nx windtrap))\n"
  },
  {
    "path": "packages/fehu/test/test_buffer.ml",
    "content": "open Fehu\nopen Windtrap\n\nlet make_transition obs act rew next_obs term trunc =\n  Buffer.\n    {\n      observation = obs;\n      action = act;\n      reward = rew;\n      next_observation = next_obs;\n      terminated = term;\n      truncated = trunc;\n    }\n\n(* Creation *)\n\nlet test_create_empty () =\n  let buf = Buffer.create ~capacity:10 in\n  equal ~msg:\"size = 0\" int 0 (Buffer.size buf);\n  is_false ~msg:\"not full\" (Buffer.is_full buf)\n\nlet test_capacity () =\n  let buf = Buffer.create ~capacity:10 in\n  equal ~msg:\"capacity = 10\" int 10 (Buffer.capacity buf)\n\nlet test_create_zero_capacity () =\n  raises_invalid_arg \"Buffer.create: capacity must be positive\" (fun () ->\n      Buffer.create ~capacity:0)\n\nlet test_create_negative_capacity () =\n  raises_invalid_arg \"Buffer.create: capacity must be positive\" (fun () ->\n      Buffer.create ~capacity:(-1))\n\n(* Add/Size *)\n\nlet test_add_increments_size () =\n  let buf = Buffer.create ~capacity:10 in\n  Buffer.add buf (make_transition 1 0 1.0 2 false false);\n  equal ~msg:\"size = 1\" int 1 (Buffer.size buf);\n  Buffer.add buf (make_transition 2 1 2.0 3 false false);\n  equal ~msg:\"size = 2\" int 2 (Buffer.size buf)\n\nlet test_size_capped_at_capacity () =\n  let buf = Buffer.create ~capacity:3 in\n  for i = 1 to 5 do\n    Buffer.add buf (make_transition i 0 1.0 (i + 1) false false)\n  done;\n  equal ~msg:\"size capped at 3\" int 3 (Buffer.size buf)\n\nlet test_is_full () =\n  let buf = Buffer.create ~capacity:2 in\n  Buffer.add buf (make_transition 1 0 1.0 2 false false);\n  is_false ~msg:\"not yet full\" (Buffer.is_full buf);\n  Buffer.add buf (make_transition 2 1 2.0 3 false false);\n  is_true ~msg:\"full\" (Buffer.is_full buf)\n\n(* Sample *)\n\nlet test_sample_batch_size () =\n  let buf = Buffer.create ~capacity:10 in\n  for i = 1 to 5 do\n    Buffer.add buf (make_transition i 0 1.0 (i + 1) false false)\n  done;\n  let batch = Buffer.sample buf ~batch_size:3 in\n  
equal ~msg:\"batch length\" int 3 (Array.length batch)\n\nlet test_sample_empty_raises () =\n  let buf = Buffer.create ~capacity:10 in\n  raises_invalid_arg \"Buffer.sample: buffer is empty\" (fun () ->\n      Buffer.sample buf ~batch_size:1)\n\nlet test_sample_zero_batch_raises () =\n  let buf = Buffer.create ~capacity:10 in\n  Buffer.add buf (make_transition 1 0 1.0 2 false false);\n  raises_invalid_arg \"Buffer.sample: batch_size must be positive\" (fun () ->\n      Buffer.sample buf ~batch_size:0)\n\nlet test_sample_arrays_lengths () =\n  let buf = Buffer.create ~capacity:10 in\n  for i = 1 to 5 do\n    Buffer.add buf (make_transition i 0 1.0 (i + 1) false false)\n  done;\n  let obs, acts, rews, next_obs, terms, truncs =\n    Buffer.sample_arrays buf ~batch_size:3\n  in\n  equal ~msg:\"obs length\" int 3 (Array.length obs);\n  equal ~msg:\"acts length\" int 3 (Array.length acts);\n  equal ~msg:\"rews length\" int 3 (Array.length rews);\n  equal ~msg:\"next_obs length\" int 3 (Array.length next_obs);\n  equal ~msg:\"terms length\" int 3 (Array.length terms);\n  equal ~msg:\"truncs length\" int 3 (Array.length truncs)\n\nlet test_sample_arrays_empty_raises () =\n  let buf = Buffer.create ~capacity:10 in\n  raises_invalid_arg \"Buffer.sample: buffer is empty\" (fun () ->\n      Buffer.sample_arrays buf ~batch_size:1)\n\n(* Clear *)\n\nlet test_clear_resets () =\n  let buf = Buffer.create ~capacity:10 in\n  Buffer.add buf (make_transition 1 0 1.0 2 false false);\n  Buffer.add buf (make_transition 2 1 2.0 3 false false);\n  Buffer.clear buf;\n  equal ~msg:\"size = 0 after clear\" int 0 (Buffer.size buf)\n\nlet test_add_after_clear () =\n  let buf = Buffer.create ~capacity:10 in\n  Buffer.add buf (make_transition 1 0 1.0 2 false false);\n  Buffer.clear buf;\n  Buffer.add buf (make_transition 3 1 3.0 4 false false);\n  equal ~msg:\"size = 1 after re-add\" int 1 (Buffer.size buf)\n\nlet () =\n  Nx.Rng.run ~seed:42 @@ fun () ->\n  run \"Fehu.Buffer\"\n    [\n      group 
\"creation\"\n        [\n          test \"empty\" test_create_empty;\n          test \"capacity\" test_capacity;\n          test \"zero capacity raises\" test_create_zero_capacity;\n          test \"negative capacity raises\" test_create_negative_capacity;\n        ];\n      group \"add/size\"\n        [\n          test \"add increments size\" test_add_increments_size;\n          test \"size capped at capacity\" test_size_capped_at_capacity;\n          test \"is_full\" test_is_full;\n        ];\n      group \"sample\"\n        [\n          test \"batch size\" test_sample_batch_size;\n          test \"empty raises\" test_sample_empty_raises;\n          test \"zero batch raises\" test_sample_zero_batch_raises;\n          test \"sample_arrays lengths\" test_sample_arrays_lengths;\n          test \"sample_arrays empty raises\" test_sample_arrays_empty_raises;\n        ];\n      group \"clear\"\n        [\n          test \"resets size\" test_clear_resets;\n          test \"add after clear\" test_add_after_clear;\n        ];\n    ]\n"
  },
  {
    "path": "packages/fehu/test/test_collect.ml",
    "content": "open Fehu\nopen Windtrap\n\nlet make_test_env ?(max_steps = 100) () =\n  let obs_space = Space.Box.create ~low:[| 0.0 |] ~high:[| 10.0 |] in\n  let act_space = Space.Discrete.create 2 in\n  let state = ref 5.0 in\n  let steps = ref 0 in\n  let reset _env ?options:_ () =\n    state := 5.0;\n    steps := 0;\n    (Nx.create Nx.float32 [| 1 |] [| !state |], Info.empty)\n  in\n  let step _env action =\n    let a : Int32.t array = Nx.to_array (Nx.reshape [| 1 |] action) in\n    state := !state +. if Int32.to_int a.(0) = 0 then -1.0 else 1.0;\n    incr steps;\n    let terminated = !state <= 0.0 || !state >= 10.0 in\n    let truncated = (not terminated) && !steps >= max_steps in\n    Env.step_result\n      ~observation:(Nx.create Nx.float32 [| 1 |] [| !state |])\n      ~reward:1.0 ~terminated ~truncated ()\n  in\n  Env.create ~id:\"Test-v0\" ~observation_space:obs_space ~action_space:act_space\n    ~reset ~step ()\n\n(* Rollout *)\n\nlet test_rollout_length () =\n  let env = make_test_env () in\n  let policy _obs = (Nx.create Nx.int32 [| 1 |] [| 1l |], None, None) in\n  let traj = Collect.rollout env ~policy ~n_steps:5 in\n  equal ~msg:\"length = 5\" int 5 (Collect.length traj)\n\nlet test_rollout_arrays_length () =\n  let env = make_test_env () in\n  let policy _obs = (Nx.create Nx.int32 [| 1 |] [| 1l |], None, None) in\n  let traj = Collect.rollout env ~policy ~n_steps:5 in\n  equal ~msg:\"observations\" int 5 (Array.length traj.observations);\n  equal ~msg:\"actions\" int 5 (Array.length traj.actions);\n  equal ~msg:\"rewards\" int 5 (Array.length traj.rewards);\n  equal ~msg:\"next_observations\" int 5 (Array.length traj.next_observations);\n  equal ~msg:\"terminated\" int 5 (Array.length traj.terminated);\n  equal ~msg:\"truncated\" int 5 (Array.length traj.truncated);\n  equal ~msg:\"infos\" int 5 (Array.length traj.infos)\n\nlet test_rollout_next_obs_populated () =\n  let env = make_test_env () in\n  let policy _obs = (Nx.create Nx.int32 [| 1 |] [| 
1l |], None, None) in\n  let traj = Collect.rollout env ~policy ~n_steps:3 in\n  for i = 0 to 2 do\n    let arr : float array =\n      Nx.to_array (Nx.reshape [| 1 |] traj.next_observations.(i))\n    in\n    is_true ~msg:\"next_obs is finite\" (Float.is_finite arr.(0))\n  done\n\nlet test_rollout_no_log_probs () =\n  let env = make_test_env () in\n  let policy _obs = (Nx.create Nx.int32 [| 1 |] [| 1l |], None, None) in\n  let traj = Collect.rollout env ~policy ~n_steps:3 in\n  is_none ~msg:\"log_probs\" traj.log_probs;\n  is_none ~msg:\"values\" traj.values\n\nlet test_rollout_with_log_probs () =\n  let env = make_test_env () in\n  let policy _obs =\n    (Nx.create Nx.int32 [| 1 |] [| 1l |], Some (-0.5), Some 1.0)\n  in\n  let traj = Collect.rollout env ~policy ~n_steps:4 in\n  is_some ~msg:\"log_probs present\" traj.log_probs;\n  is_some ~msg:\"values present\" traj.values;\n  equal ~msg:\"log_probs length\" int 4 (Array.length (Option.get traj.log_probs));\n  equal ~msg:\"values length\" int 4 (Array.length (Option.get traj.values))\n\n(* Episodes *)\n\nlet test_episodes_count () =\n  let env = make_test_env ~max_steps:10 () in\n  let policy _obs = (Nx.create Nx.int32 [| 1 |] [| 1l |], None, None) in\n  let eps = Collect.episodes env ~policy ~n_episodes:2 ~max_steps:10 () in\n  equal ~msg:\"2 episodes\" int 2 (List.length eps)\n\nlet test_episodes_positive_length () =\n  let env = make_test_env ~max_steps:10 () in\n  let policy _obs = (Nx.create Nx.int32 [| 1 |] [| 1l |], None, None) in\n  let eps = Collect.episodes env ~policy ~n_episodes:2 ~max_steps:10 () in\n  List.iter\n    (fun ep ->\n      is_true ~msg:\"episode has positive length\" (Collect.length ep > 0))\n    eps\n\n(* Concat *)\n\nlet test_concat_two () =\n  let env = make_test_env () in\n  let policy _obs = (Nx.create Nx.int32 [| 1 |] [| 1l |], None, None) in\n  let t1 = Collect.rollout env ~policy ~n_steps:3 in\n  let t2 = Collect.rollout env ~policy ~n_steps:4 in\n  let t = Collect.concat [ t1; t2 
] in\n  equal ~msg:\"total length\" int 7 (Collect.length t)\n\nlet test_concat_empty_raises () =\n  raises_invalid_arg \"Collect.concat: empty list\" (fun () -> Collect.concat [])\n\nlet test_concat_singleton () =\n  let env = make_test_env () in\n  let policy _obs = (Nx.create Nx.int32 [| 1 |] [| 1l |], None, None) in\n  let t1 = Collect.rollout env ~policy ~n_steps:5 in\n  let t = Collect.concat [ t1 ] in\n  equal ~msg:\"same length\" int 5 (Collect.length t)\n\nlet () =\n  Nx.Rng.run ~seed:42 @@ fun () ->\n  run \"Fehu.Collect\"\n    [\n      group \"rollout\"\n        [\n          test \"length\" test_rollout_length;\n          test \"arrays length\" test_rollout_arrays_length;\n          test \"next_observations populated\" test_rollout_next_obs_populated;\n          test \"no log_probs/values\" test_rollout_no_log_probs;\n          test \"with log_probs/values\" test_rollout_with_log_probs;\n        ];\n      group \"episodes\"\n        [\n          test \"count\" test_episodes_count;\n          test \"positive length\" test_episodes_positive_length;\n        ];\n      group \"concat\"\n        [\n          test \"two trajectories\" test_concat_two;\n          test \"empty raises\" test_concat_empty_raises;\n          test \"singleton\" test_concat_singleton;\n        ];\n    ]\n"
  },
  {
    "path": "packages/fehu/test/test_env.ml",
    "content": "open Fehu\nopen Windtrap\n\nlet make_test_env ?(max_steps = 100) () =\n  let obs_space = Space.Box.create ~low:[| 0.0 |] ~high:[| 10.0 |] in\n  let act_space = Space.Discrete.create 2 in\n  let state = ref 5.0 in\n  let steps = ref 0 in\n  let reset _env ?options:_ () =\n    state := 5.0;\n    steps := 0;\n    (Nx.create Nx.float32 [| 1 |] [| !state |], Info.empty)\n  in\n  let step _env action =\n    let a : Int32.t array = Nx.to_array (Nx.reshape [| 1 |] action) in\n    state := !state +. if Int32.to_int a.(0) = 0 then -1.0 else 1.0;\n    incr steps;\n    let terminated = !state <= 0.0 || !state >= 10.0 in\n    let truncated = (not terminated) && !steps >= max_steps in\n    Env.step_result\n      ~observation:(Nx.create Nx.float32 [| 1 |] [| !state |])\n      ~reward:1.0 ~terminated ~truncated ()\n  in\n  Env.create ~id:\"Test-v0\" ~observation_space:obs_space ~action_space:act_space\n    ~reset ~step ()\n\nlet action_left = Nx.create Nx.int32 [| 1 |] [| 0l |]\nlet action_right = Nx.create Nx.int32 [| 1 |] [| 1l |]\n\nlet read_obs obs =\n  let arr : float array = Nx.to_array (Nx.reshape [| 1 |] obs) in\n  arr.(0)\n\n(* Creation *)\n\nlet test_id () =\n  let env = make_test_env () in\n  equal ~msg:\"id is Some Test-v0\" (option string) (Some \"Test-v0\") (Env.id env)\n\nlet test_observation_space () =\n  let env = make_test_env () in\n  let low, high = Space.Box.bounds (Env.observation_space env) in\n  equal ~msg:\"obs low\" (array (float 0.0)) [| 0.0 |] low;\n  equal ~msg:\"obs high\" (array (float 0.0)) [| 10.0 |] high\n\nlet test_action_space () =\n  let env = make_test_env () in\n  equal ~msg:\"act n\" int 2 (Space.Discrete.n (Env.action_space env))\n\nlet test_render_mode_default () =\n  let env = make_test_env () in\n  is_none ~msg:\"render_mode default is None\" (Env.render_mode env)\n\nlet test_render_mode_invalid () =\n  raises_invalid_arg ~msg:\"render_mode not in render_modes\"\n    \"Env.create: render mode 'human' not in render_modes 
[]\" (fun () ->\n      let obs_space = Space.Box.create ~low:[| 0.0 |] ~high:[| 1.0 |] in\n      let act_space = Space.Discrete.create 2 in\n      Env.create ~observation_space:obs_space ~action_space:act_space\n        ~render_mode:`Human ~render_modes:[]\n        ~reset:(fun _env ?options:_ () -> assert false)\n        ~step:(fun _env _ -> assert false)\n        ())\n\n(* Lifecycle *)\n\nlet test_reset_obs () =\n  let env = make_test_env () in\n  let obs, _info = Env.reset env () in\n  equal ~msg:\"reset obs shape\" (array int) [| 1 |] (Nx.shape obs);\n  equal ~msg:\"reset obs value\" (float 0.0) 5.0 (read_obs obs)\n\nlet test_step_after_reset () =\n  let env = make_test_env () in\n  let _obs, _info = Env.reset env () in\n  let step = Env.step env action_right in\n  equal ~msg:\"reward\" (float 0.0) 1.0 step.reward;\n  is_false ~msg:\"not terminated\" step.terminated;\n  is_false ~msg:\"not truncated\" step.truncated\n\nlet test_step_before_reset () =\n  let env = make_test_env () in\n  raises_invalid_arg ~msg:\"step before reset\"\n    \"Env: operation 'step' requires calling reset first\" (fun () ->\n      Env.step env action_left)\n\nlet test_step_after_terminal () =\n  let env = make_test_env () in\n  let _obs, _info = Env.reset env () in\n  (* Move left 5 times: 5 -> 4 -> 3 -> 2 -> 1 -> 0, terminated *)\n  for _ = 1 to 4 do\n    ignore (Env.step env action_left)\n  done;\n  let step = Env.step env action_left in\n  is_true ~msg:\"terminated\" step.terminated;\n  raises_invalid_arg ~msg:\"step after terminal\"\n    \"Env: operation 'step' requires calling reset first\" (fun () ->\n      Env.step env action_left)\n\nlet test_reset_after_terminal () =\n  let env = make_test_env () in\n  let _obs, _info = Env.reset env () in\n  for _ = 1 to 5 do\n    ignore (Env.step env action_left)\n  done;\n  let obs, _info = Env.reset env () in\n  equal ~msg:\"reset clears terminal\" (float 0.0) 5.0 (read_obs obs)\n\nlet test_close () =\n  let env = make_test_env () in\n  
Env.close env;\n  is_true ~msg:\"closed\" (Env.closed env)\n\nlet test_step_on_closed () =\n  let env = make_test_env () in\n  let _obs, _info = Env.reset env () in\n  Env.close env;\n  raises_invalid_arg ~msg:\"step on closed\"\n    \"Env: operation 'step' on a closed environment\" (fun () ->\n      Env.step env action_left)\n\nlet test_reset_on_closed () =\n  let env = make_test_env () in\n  Env.close env;\n  raises_invalid_arg ~msg:\"reset on closed\"\n    \"Env: operation 'reset' on a closed environment\" (fun () ->\n      Env.reset env ())\n\nlet test_render_on_closed () =\n  let env = make_test_env () in\n  Env.close env;\n  raises_invalid_arg ~msg:\"render on closed\"\n    \"Env: operation 'render' on a closed environment\" (fun () -> Env.render env)\n\nlet test_close_idempotent () =\n  let env = make_test_env () in\n  Env.close env;\n  Env.close env;\n  is_true ~msg:\"still closed\" (Env.closed env)\n\n(* step_result *)\n\nlet test_step_result_defaults () =\n  let obs = Nx.create Nx.float32 [| 1 |] [| 0.0 |] in\n  let s = Env.step_result ~observation:obs () in\n  equal ~msg:\"default reward\" (float 0.0) 0.0 s.reward;\n  is_false ~msg:\"default terminated\" s.terminated;\n  is_false ~msg:\"default truncated\" s.truncated;\n  is_true ~msg:\"default info empty\" (Info.is_empty s.info)\n\nlet test_step_result_custom () =\n  let obs = Nx.create Nx.float32 [| 1 |] [| 0.0 |] in\n  let info = Info.set \"k\" (Info.int 1) Info.empty in\n  let s =\n    Env.step_result ~observation:obs ~reward:5.0 ~terminated:true\n      ~truncated:false ~info ()\n  in\n  equal ~msg:\"custom reward\" (float 0.0) 5.0 s.reward;\n  is_true ~msg:\"custom terminated\" s.terminated;\n  is_false ~msg:\"custom truncated\" s.truncated;\n  is_some ~msg:\"custom info has key\" (Info.find \"k\" s.info)\n\n(* time_limit lifecycle enforcement *)\n\nlet test_time_limit_needs_reset () =\n  let env = make_test_env () in\n  let wrapped = Env.time_limit ~max_episode_steps:3 env in\n  let _obs, _info = 
Env.reset wrapped () in\n  for _ = 1 to 2 do\n    ignore (Env.step wrapped action_right)\n  done;\n  let s3 = Env.step wrapped action_right in\n  is_true ~msg:\"step 3 truncated\" s3.truncated;\n  raises_invalid_arg ~msg:\"step after time_limit truncation requires reset\"\n    \"Env: operation 'step' requires calling reset first\" (fun () ->\n      Env.step wrapped action_right)\n\nlet () =\n  Nx.Rng.run ~seed:42 @@ fun () ->\n  run \"Fehu.Env\"\n    [\n      group \"creation\"\n        [\n          test \"id\" test_id;\n          test \"observation_space\" test_observation_space;\n          test \"action_space\" test_action_space;\n          test \"render_mode default\" test_render_mode_default;\n          test \"render_mode invalid\" test_render_mode_invalid;\n        ];\n      group \"lifecycle\"\n        [\n          test \"reset returns valid obs\" test_reset_obs;\n          test \"step after reset\" test_step_after_reset;\n          test \"step before reset\" test_step_before_reset;\n          test \"step after terminal\" test_step_after_terminal;\n          test \"reset after terminal\" test_reset_after_terminal;\n          test \"close\" test_close;\n          test \"step on closed\" test_step_on_closed;\n          test \"reset on closed\" test_reset_on_closed;\n          test \"render on closed\" test_render_on_closed;\n          test \"close idempotent\" test_close_idempotent;\n          test \"time_limit needs reset after truncation\"\n            test_time_limit_needs_reset;\n        ];\n      group \"step_result\"\n        [\n          test \"defaults\" test_step_result_defaults;\n          test \"custom values\" test_step_result_custom;\n        ];\n    ]\n"
  },
  {
    "path": "packages/fehu/test/test_env_wrappers.ml",
    "content": "open Fehu\nopen Windtrap\n\nlet make_test_env ?(max_steps = 100) () =\n  let obs_space = Space.Box.create ~low:[| 0.0 |] ~high:[| 10.0 |] in\n  let act_space = Space.Discrete.create 2 in\n  let state = ref 5.0 in\n  let steps = ref 0 in\n  let reset _env ?options:_ () =\n    state := 5.0;\n    steps := 0;\n    (Nx.create Nx.float32 [| 1 |] [| !state |], Info.empty)\n  in\n  let step _env action =\n    let a : Int32.t array = Nx.to_array (Nx.reshape [| 1 |] action) in\n    state := !state +. if Int32.to_int a.(0) = 0 then -1.0 else 1.0;\n    incr steps;\n    let terminated = !state <= 0.0 || !state >= 10.0 in\n    let truncated = (not terminated) && !steps >= max_steps in\n    Env.step_result\n      ~observation:(Nx.create Nx.float32 [| 1 |] [| !state |])\n      ~reward:1.0 ~terminated ~truncated ()\n  in\n  Env.create ~id:\"Test-v0\" ~observation_space:obs_space ~action_space:act_space\n    ~reset ~step ()\n\nlet action_left = Nx.create Nx.int32 [| 1 |] [| 0l |]\nlet action_right = Nx.create Nx.int32 [| 1 |] [| 1l |]\n\nlet read_obs obs =\n  let arr : float array = Nx.to_array (Nx.reshape [| 1 |] obs) in\n  arr.(0)\n\nlet value = testable ~pp:Value.pp ~equal:Value.equal ()\n\n(* State sharing *)\n\nlet test_close_wrapper_closes_inner () =\n  let env = make_test_env () in\n  let wrapped =\n    Env.map_observation\n      ~observation_space:(Env.observation_space env)\n      ~f:(fun obs info -> (obs, info))\n      env\n  in\n  Env.close wrapped;\n  is_true ~msg:\"inner closed\" (Env.closed env)\n\nlet test_close_inner_closes_wrapper () =\n  let env = make_test_env () in\n  let wrapped =\n    Env.map_observation\n      ~observation_space:(Env.observation_space env)\n      ~f:(fun obs info -> (obs, info))\n      env\n  in\n  Env.close env;\n  is_true ~msg:\"wrapper closed\" (Env.closed wrapped)\n\nlet test_reset_wrapper_clears_inner () =\n  let env = make_test_env () in\n  let wrapped =\n    Env.map_observation\n      
~observation_space:(Env.observation_space env)\n      ~f:(fun obs info -> (obs, info))\n      env\n  in\n  let _obs, _info = Env.reset wrapped () in\n  let step = Env.step env action_left in\n  equal ~msg:\"inner step works\" (float 0.0) 1.0 step.reward\n\n(* map_observation *)\n\nlet test_map_observation_reset () =\n  let env = make_test_env () in\n  let double_space = Space.Box.create ~low:[| 0.0 |] ~high:[| 20.0 |] in\n  let wrapped =\n    Env.map_observation ~observation_space:double_space\n      ~f:(fun obs info ->\n        let v = read_obs obs in\n        (Nx.create Nx.float32 [| 1 |] [| v *. 2.0 |], info))\n      env\n  in\n  let obs, _info = Env.reset wrapped () in\n  equal ~msg:\"doubled reset obs\" (float 0.0) 10.0 (read_obs obs)\n\nlet test_map_observation_step () =\n  let env = make_test_env () in\n  let double_space = Space.Box.create ~low:[| 0.0 |] ~high:[| 20.0 |] in\n  let wrapped =\n    Env.map_observation ~observation_space:double_space\n      ~f:(fun obs info ->\n        let v = read_obs obs in\n        (Nx.create Nx.float32 [| 1 |] [| v *. 
2.0 |], info))\n      env\n  in\n  let _obs, _info = Env.reset wrapped () in\n  let step = Env.step wrapped action_right in\n  (* Inner: 5 + 1 = 6, doubled: 12 *)\n  equal ~msg:\"doubled step obs\" (float 0.0) 12.0 (read_obs step.observation)\n\nlet test_map_observation_id () =\n  let env = make_test_env () in\n  let wrapped =\n    Env.map_observation\n      ~observation_space:(Env.observation_space env)\n      ~f:(fun obs info -> (obs, info))\n      env\n  in\n  equal ~msg:\"id suffix\" (option string) (Some \"Test-v0/ObservationWrapper\")\n    (Env.id wrapped)\n\n(* map_action *)\n\nlet test_map_action_flip () =\n  let env = make_test_env () in\n  let wrapped =\n    Env.map_action ~action_space:(Env.action_space env)\n      ~f:(fun action ->\n        let a : Int32.t array = Nx.to_array (Nx.reshape [| 1 |] action) in\n        let flipped = if Int32.to_int a.(0) = 0 then 1l else 0l in\n        Nx.create Nx.int32 [| 1 |] [| flipped |])\n      env\n  in\n  let _obs, _info = Env.reset wrapped () in\n  (* Send left (0) to wrapper; inner sees right (1): 5 -> 6 *)\n  let step = Env.step wrapped action_left in\n  equal ~msg:\"flipped: left becomes right\" (float 0.0) 6.0\n    (read_obs step.observation)\n\nlet test_map_action_id () =\n  let env = make_test_env () in\n  let wrapped =\n    Env.map_action ~action_space:(Env.action_space env) ~f:Fun.id env\n  in\n  equal ~msg:\"id suffix\" (option string) (Some \"Test-v0/ActionWrapper\")\n    (Env.id wrapped)\n\n(* map_reward *)\n\nlet test_map_reward () =\n  let env = make_test_env () in\n  let wrapped =\n    Env.map_reward ~f:(fun ~reward ~info -> (reward *. 
2.0, info)) env\n  in\n  let _obs, _info = Env.reset wrapped () in\n  let step = Env.step wrapped action_right in\n  equal ~msg:\"doubled reward\" (float 0.0) 2.0 step.reward\n\nlet test_map_reward_id () =\n  let env = make_test_env () in\n  let wrapped = Env.map_reward ~f:(fun ~reward ~info -> (reward, info)) env in\n  equal ~msg:\"id suffix\" (option string) (Some \"Test-v0/RewardWrapper\")\n    (Env.id wrapped)\n\n(* clip_action *)\n\nlet make_box_action_env () =\n  let obs_space = Space.Box.create ~low:[| 0.0 |] ~high:[| 10.0 |] in\n  let act_space = Space.Box.create ~low:[| 0.0 |] ~high:[| 1.0 |] in\n  let last_action = ref 0.0 in\n  let reset _env ?options:_ () =\n    last_action := 0.0;\n    (Nx.create Nx.float32 [| 1 |] [| 5.0 |], Info.empty)\n  in\n  let step _env action =\n    let a : float array = Nx.to_array (Nx.reshape [| 1 |] action) in\n    last_action := a.(0);\n    Env.step_result\n      ~observation:(Nx.create Nx.float32 [| 1 |] [| a.(0) |])\n      ~reward:1.0 ()\n  in\n  let env =\n    Env.create ~id:\"BoxAct-v0\" ~observation_space:obs_space\n      ~action_space:act_space ~reset ~step ()\n  in\n  (env, last_action)\n\nlet test_clip_action () =\n  let env, last_action = make_box_action_env () in\n  let wrapped = Env.clip_action env in\n  let _obs, _info = Env.reset wrapped () in\n  let _step = Env.step wrapped (Nx.create Nx.float32 [| 1 |] [| 2.0 |]) in\n  equal ~msg:\"clamped to upper\" (float 0.0) 1.0 !last_action;\n  let _step = Env.step wrapped (Nx.create Nx.float32 [| 1 |] [| -0.5 |]) in\n  equal ~msg:\"clamped to lower\" (float 0.0) 0.0 !last_action\n\n(* clip_observation *)\n\nlet make_box_obs_env () =\n  let obs_space = Space.Box.create ~low:[| 0.0 |] ~high:[| 10.0 |] in\n  let act_space = Space.Discrete.create 2 in\n  let obs_val = ref 5.0 in\n  let reset _env ?options:_ () =\n    obs_val := 5.0;\n    (Nx.create Nx.float32 [| 1 |] [| !obs_val |], Info.empty)\n  in\n  let step _env action =\n    let a : Int32.t array = Nx.to_array 
(Nx.reshape [| 1 |] action) in\n    obs_val := !obs_val +. if Int32.to_int a.(0) = 0 then -3.0 else 3.0;\n    Env.step_result\n      ~observation:(Nx.create Nx.float32 [| 1 |] [| !obs_val |])\n      ~reward:1.0 ()\n  in\n  Env.create ~id:\"BoxObs-v0\" ~observation_space:obs_space\n    ~action_space:act_space ~reset ~step ()\n\nlet test_clip_observation () =\n  let env = make_box_obs_env () in\n  let wrapped = Env.clip_observation ~low:[| 2.0 |] ~high:[| 8.0 |] env in\n  let _obs, _info = Env.reset wrapped () in\n  (* Step right: inner obs = 8.0, clipped to 8.0 *)\n  let s1 = Env.step wrapped action_right in\n  let arr1 : float array = Nx.to_array (Nx.reshape [| 1 |] s1.observation) in\n  equal ~msg:\"clipped to upper\" (float 0.0) 8.0 arr1.(0);\n  let _obs, _info = Env.reset wrapped () in\n  (* Step left: inner obs = 2.0, within bounds *)\n  let s2 = Env.step wrapped action_left in\n  let arr2 : float array = Nx.to_array (Nx.reshape [| 1 |] s2.observation) in\n  equal ~msg:\"within bounds\" (float 0.0) 2.0 arr2.(0)\n\nlet test_clip_observation_space () =\n  let env = make_box_obs_env () in\n  let wrapped = Env.clip_observation ~low:[| 2.0 |] ~high:[| 8.0 |] env in\n  let low, high = Space.Box.bounds (Env.observation_space wrapped) in\n  equal ~msg:\"clipped low\" (array (float 0.0)) [| 2.0 |] low;\n  equal ~msg:\"clipped high\" (array (float 0.0)) [| 8.0 |] high\n\n(* time_limit *)\n\nlet test_time_limit_truncation () =\n  let env = make_test_env () in\n  let wrapped = Env.time_limit ~max_episode_steps:3 env in\n  let _obs, _info = Env.reset wrapped () in\n  let s1 = Env.step wrapped action_right in\n  is_false ~msg:\"step 1 not truncated\" s1.truncated;\n  let s2 = Env.step wrapped action_right in\n  is_false ~msg:\"step 2 not truncated\" s2.truncated;\n  let s3 = Env.step wrapped action_right in\n  is_true ~msg:\"step 3 truncated\" s3.truncated\n\nlet test_time_limit_info () =\n  let env = make_test_env () in\n  let wrapped = Env.time_limit ~max_episode_steps:2 
env in\n  let _obs, _info = Env.reset wrapped () in\n  let _s1 = Env.step wrapped action_right in\n  let s2 = Env.step wrapped action_right in\n  is_some ~msg:\"time_limit.truncated present\"\n    (Info.find \"time_limit.truncated\" s2.info);\n  is_some ~msg:\"time_limit.elapsed_steps present\"\n    (Info.find \"time_limit.elapsed_steps\" s2.info)\n\nlet test_time_limit_info_values () =\n  let env = make_test_env () in\n  let wrapped = Env.time_limit ~max_episode_steps:2 env in\n  let _obs, _info = Env.reset wrapped () in\n  let _s1 = Env.step wrapped action_right in\n  let s2 = Env.step wrapped action_right in\n  let tl_truncated = Info.find_exn \"time_limit.truncated\" s2.info in\n  equal ~msg:\"truncated is Bool true\" value (Value.Bool true) tl_truncated;\n  let tl_steps = Info.find_exn \"time_limit.elapsed_steps\" s2.info in\n  equal ~msg:\"elapsed_steps is Int 2\" value (Value.Int 2) tl_steps\n\nlet test_time_limit_counter_resets () =\n  let env = make_test_env () in\n  let wrapped = Env.time_limit ~max_episode_steps:3 env in\n  let _obs, _info = Env.reset wrapped () in\n  for _ = 1 to 3 do\n    ignore (Env.step wrapped action_right)\n  done;\n  let _obs, _info = Env.reset wrapped () in\n  let s1 = Env.step wrapped action_right in\n  is_false ~msg:\"counter reset after new episode\" s1.truncated\n\nlet test_time_limit_nonpositive () =\n  let env = make_test_env () in\n  raises_invalid_arg ~msg:\"max_episode_steps=0\"\n    \"Env.time_limit: max_episode_steps must be positive\" (fun () ->\n      Env.time_limit ~max_episode_steps:0 env);\n  raises_invalid_arg ~msg:\"max_episode_steps=-1\"\n    \"Env.time_limit: max_episode_steps must be positive\" (fun () ->\n      Env.time_limit ~max_episode_steps:(-1) env)\n\nlet test_time_limit_needs_reset () =\n  let env = make_test_env () in\n  let wrapped = Env.time_limit ~max_episode_steps:2 env in\n  let _obs, _info = Env.reset wrapped () in\n  let _s1 = Env.step wrapped action_right in\n  let s2 = Env.step wrapped 
action_right in\n  is_true ~msg:\"truncated at limit\" s2.truncated;\n  raises_invalid_arg ~msg:\"step after time_limit truncation\"\n    \"Env: operation 'step' requires calling reset first\" (fun () ->\n      Env.step wrapped action_right)\n\nlet () =\n  Nx.Rng.run ~seed:42 @@ fun () ->\n  run \"Fehu.Env (wrappers)\"\n    [\n      group \"state sharing\"\n        [\n          test \"close wrapper closes inner\" test_close_wrapper_closes_inner;\n          test \"close inner closes wrapper\" test_close_inner_closes_wrapper;\n          test \"reset wrapper clears inner\" test_reset_wrapper_clears_inner;\n        ];\n      group \"map_observation\"\n        [\n          test \"doubles reset obs\" test_map_observation_reset;\n          test \"doubles step obs\" test_map_observation_step;\n          test \"id suffix\" test_map_observation_id;\n        ];\n      group \"map_action\"\n        [\n          test \"flip reverses direction\" test_map_action_flip;\n          test \"id suffix\" test_map_action_id;\n        ];\n      group \"map_reward\"\n        [\n          test \"doubles reward\" test_map_reward;\n          test \"id suffix\" test_map_reward_id;\n        ];\n      group \"clip_action\" [ test \"clamps out-of-bounds\" test_clip_action ];\n      group \"clip_observation\"\n        [\n          test \"clamps to explicit bounds\" test_clip_observation;\n          test \"observation space reflects bounds\" test_clip_observation_space;\n        ];\n      group \"time_limit\"\n        [\n          test \"truncation at limit\" test_time_limit_truncation;\n          test \"info keys present\" test_time_limit_info;\n          test \"info values correct\" test_time_limit_info_values;\n          test \"counter resets on new episode\" test_time_limit_counter_resets;\n          test \"nonpositive raises\" test_time_limit_nonpositive;\n          test \"needs reset after truncation\" test_time_limit_needs_reset;\n        ];\n    ]\n"
  },
  {
    "path": "packages/fehu/test/test_envs.ml",
    "content": "open Fehu\nopen Fehu_envs\nopen Windtrap\n\nlet read_float obs =\n  let arr : float array = Nx.to_array (Nx.reshape [| 1 |] obs) in\n  arr.(0)\n\nlet read_int32_array obs n =\n  let arr : Int32.t array = Nx.to_array (Nx.reshape [| n |] obs) in\n  Array.map Int32.to_int arr\n\nlet discrete action = Nx.create Nx.int32 [| 1 |] [| Int32.of_int action |]\n\n(* Random_walk *)\n\nlet test_rw_creation () =\n  let env = Random_walk.make () in\n  match Env.id env with\n  | Some id ->\n      is_true ~msg:\"id starts with RandomWalk\"\n        (String.length id >= 10 && String.sub id 0 10 = \"RandomWalk\")\n  | None -> fail \"expected an id\"\n\nlet test_rw_reset_obs () =\n  let env = Random_walk.make () in\n  let obs, _info = Env.reset env () in\n  equal ~msg:\"reset obs is 0.0\" (float 1e-6) 0.0 (read_float obs)\n\nlet test_rw_step_left () =\n  let env = Random_walk.make () in\n  let _obs, _info = Env.reset env () in\n  let s = Env.step env (discrete 0) in\n  equal ~msg:\"step left to -1.0\" (float 1e-6) (-1.0) (read_float s.observation)\n\nlet test_rw_step_right () =\n  let env = Random_walk.make () in\n  let _obs, _info = Env.reset env () in\n  let s = Env.step env (discrete 1) in\n  equal ~msg:\"step right to 1.0\" (float 1e-6) 1.0 (read_float s.observation)\n\nlet test_rw_termination () =\n  let env = Random_walk.make () in\n  let _obs, _info = Env.reset env () in\n  let terminated = ref false in\n  for _ = 1 to 20 do\n    if not !terminated then begin\n      let s = Env.step env (discrete 1) in\n      if s.terminated then terminated := true\n      else if s.truncated then begin\n        let _obs, _info = Env.reset env () in\n        ()\n      end\n    end\n  done;\n  is_true ~msg:\"terminated at boundary\" !terminated\n\nlet test_rw_ansi_render () =\n  let env = Random_walk.make ~render_mode:`Ansi () in\n  let _obs, _info = Env.reset env () in\n  match Env.render env with\n  | Some s -> is_true ~msg:\"non-empty render\" (String.length s > 0)\n  | None -> 
fail \"expected Some render\"\n\n(* Cartpole *)\n\nlet test_cp_creation () =\n  let env = Cartpole.make () in\n  match Env.id env with\n  | Some id ->\n      is_true ~msg:\"id starts with CartPole\"\n        (String.length id >= 8 && String.sub id 0 8 = \"CartPole\")\n  | None -> fail \"expected an id\"\n\nlet test_cp_reset_shape () =\n  let env = Cartpole.make () in\n  let obs, _info = Env.reset env () in\n  let shape = Nx.shape obs in\n  equal ~msg:\"obs shape [4]\" (array int) [| 4 |] shape\n\nlet test_cp_step_reward () =\n  let env = Cartpole.make () in\n  let _obs, _info = Env.reset env () in\n  let s = Env.step env (discrete 1) in\n  is_false ~msg:\"not terminated on first step\" s.terminated;\n  equal ~msg:\"reward 1.0\" (float 1e-6) 1.0 s.reward\n\nlet test_cp_termination () =\n  let env = Cartpole.make () in\n  let _obs, _info = Env.reset env () in\n  let done_flag = ref false in\n  for _ = 1 to 600 do\n    if not !done_flag then begin\n      let s = Env.step env (discrete 0) in\n      if s.terminated || s.truncated then done_flag := true\n    end\n  done;\n  is_true ~msg:\"episode ends\" !done_flag\n\n(* Grid_world *)\n\nlet test_gw_creation () =\n  let env = Grid_world.make () in\n  match Env.id env with\n  | Some id ->\n      is_true ~msg:\"id starts with GridWorld\"\n        (String.length id >= 9 && String.sub id 0 9 = \"GridWorld\")\n  | None -> fail \"expected an id\"\n\nlet test_gw_reset_obs () =\n  let env = Grid_world.make () in\n  let obs, _info = Env.reset env () in\n  let pos = read_int32_array obs 2 in\n  equal ~msg:\"row = 0\" int 0 pos.(0);\n  equal ~msg:\"col = 0\" int 0 pos.(1)\n\nlet test_gw_move_down () =\n  let env = Grid_world.make () in\n  let _obs, _info = Env.reset env () in\n  let s = Env.step env (discrete 1) in\n  let pos = read_int32_array s.observation 2 in\n  equal ~msg:\"row = 1 after down\" int 1 pos.(0)\n\nlet test_gw_move_right () =\n  let env = Grid_world.make () in\n  let _obs, _info = Env.reset env () in\n  let s = 
Env.step env (discrete 3) in\n  let pos = read_int32_array s.observation 2 in\n  equal ~msg:\"col = 1 after right\" int 1 pos.(1)\n\nlet test_gw_obstacle () =\n  let env = Grid_world.make () in\n  let _obs, _info = Env.reset env () in\n  (* Navigate to (1, 2): down, right, right *)\n  let _s = Env.step env (discrete 1) in\n  let _s = Env.step env (discrete 3) in\n  let s = Env.step env (discrete 3) in\n  let pos = read_int32_array s.observation 2 in\n  equal ~msg:\"at (1,2)\" int 1 pos.(0);\n  equal ~msg:\"at (1,2)\" int 2 pos.(1);\n  (* Try to move down into obstacle at (2,2) *)\n  let s = Env.step env (discrete 1) in\n  let pos = read_int32_array s.observation 2 in\n  equal ~msg:\"blocked row still 1\" int 1 pos.(0);\n  equal ~msg:\"blocked col still 2\" int 2 pos.(1)\n\nlet test_gw_reach_goal () =\n  let env = Grid_world.make () in\n  let _obs, _info = Env.reset env () in\n  (* Path to (4,4) avoiding obstacle at (2,2): down 4 times to row 4, then right\n     4 times to col 4 *)\n  for _ = 1 to 4 do\n    ignore (Env.step env (discrete 1))\n  done;\n  let s_right1 = Env.step env (discrete 3) in\n  is_false ~msg:\"not done yet\" s_right1.terminated;\n  let _s = Env.step env (discrete 3) in\n  let _s = Env.step env (discrete 3) in\n  let s = Env.step env (discrete 3) in\n  is_true ~msg:\"terminated at goal\" s.terminated;\n  equal ~msg:\"reward 10.0\" (float 1e-6) 10.0 s.reward\n\nlet test_gw_ansi_render () =\n  let env = Grid_world.make ~render_mode:`Ansi () in\n  let _obs, _info = Env.reset env () in\n  match Env.render env with\n  | Some (Grid_world.Text s) ->\n      is_true ~msg:\"non-empty render\" (String.length s > 0)\n  | Some (Grid_world.Image _) -> fail \"expected Text render\"\n  | None -> fail \"expected Some render\"\n\n(* Mountain_car *)\n\nlet test_mc_creation () =\n  let env = Mountain_car.make () in\n  match Env.id env with\n  | Some id ->\n      is_true ~msg:\"id starts with MountainCar\"\n        (String.length id >= 11 && String.sub id 0 11 = 
\"MountainCar\")\n  | None -> fail \"expected an id\"\n\nlet test_mc_reset_shape () =\n  let env = Mountain_car.make () in\n  let obs, _info = Env.reset env () in\n  let shape = Nx.shape obs in\n  equal ~msg:\"obs shape [2]\" (array int) [| 2 |] shape\n\nlet test_mc_step_coast () =\n  let env = Mountain_car.make () in\n  let _obs, _info = Env.reset env () in\n  let s = Env.step env (discrete 1) in\n  let shape = Nx.shape s.observation in\n  equal ~msg:\"obs shape after step\" (array int) [| 2 |] shape;\n  is_false ~msg:\"not terminated\" s.terminated\n\nlet test_mc_reward () =\n  let env = Mountain_car.make () in\n  let _obs, _info = Env.reset env () in\n  let s = Env.step env (discrete 1) in\n  equal ~msg:\"reward -1.0\" (float 1e-6) (-1.0) s.reward\n\nlet () =\n  Nx.Rng.run ~seed:42 @@ fun () ->\n  run \"Fehu_envs\"\n    [\n      group \"RandomWalk\"\n        [\n          test \"creation\" test_rw_creation;\n          test \"reset observation\" test_rw_reset_obs;\n          test \"step left\" test_rw_step_left;\n          test \"step right\" test_rw_step_right;\n          test \"termination at boundary\" test_rw_termination;\n          test \"ansi render\" test_rw_ansi_render;\n        ];\n      group \"CartPole\"\n        [\n          test \"creation\" test_cp_creation;\n          test \"reset shape\" test_cp_reset_shape;\n          test \"step reward\" test_cp_step_reward;\n          test \"termination\" test_cp_termination;\n        ];\n      group \"GridWorld\"\n        [\n          test \"creation\" test_gw_creation;\n          test \"reset observation\" test_gw_reset_obs;\n          test \"move down\" test_gw_move_down;\n          test \"move right\" test_gw_move_right;\n          test \"obstacle blocks movement\" test_gw_obstacle;\n          test \"reach goal\" test_gw_reach_goal;\n          test \"ansi render\" test_gw_ansi_render;\n        ];\n      group \"MountainCar\"\n        [\n          test \"creation\" test_mc_creation;\n          test \"reset 
shape\" test_mc_reset_shape;\n          test \"step coast\" test_mc_step_coast;\n          test \"reward\" test_mc_reward;\n        ];\n    ]\n"
  },
  {
    "path": "packages/fehu/test/test_eval.ml",
    "content": "open Fehu\nopen Windtrap\n\nlet make_test_env ?(max_steps = 100) () =\n  let obs_space = Space.Box.create ~low:[| 0.0 |] ~high:[| 10.0 |] in\n  let act_space = Space.Discrete.create 2 in\n  let state = ref 5.0 in\n  let steps = ref 0 in\n  let reset _env ?options:_ () =\n    state := 5.0;\n    steps := 0;\n    (Nx.create Nx.float32 [| 1 |] [| !state |], Info.empty)\n  in\n  let step _env action =\n    let a : Int32.t array = Nx.to_array (Nx.reshape [| 1 |] action) in\n    state := !state +. if Int32.to_int a.(0) = 0 then -1.0 else 1.0;\n    incr steps;\n    let terminated = !state <= 0.0 || !state >= 10.0 in\n    let truncated = (not terminated) && !steps >= max_steps in\n    Env.step_result\n      ~observation:(Nx.create Nx.float32 [| 1 |] [| !state |])\n      ~reward:1.0 ~terminated ~truncated ()\n  in\n  Env.create ~id:\"Test-v0\" ~observation_space:obs_space ~action_space:act_space\n    ~reset ~step ()\n\n(* Run *)\n\nlet test_constant_reward_stats () =\n  let env = make_test_env ~max_steps:5 () in\n  let policy _obs = Nx.create Nx.int32 [| 1 |] [| 1l |] in\n  let stats = Eval.run env ~policy ~n_episodes:3 ~max_steps:5 () in\n  equal ~msg:\"mean_reward\" (float 1e-6) 5.0 stats.mean_reward;\n  equal ~msg:\"std_reward\" (float 1e-6) 0.0 stats.std_reward;\n  equal ~msg:\"mean_length\" (float 1e-6) 5.0 stats.mean_length;\n  equal ~msg:\"n_episodes\" int 3 stats.n_episodes\n\nlet test_n_episodes_matches () =\n  let env = make_test_env ~max_steps:5 () in\n  let policy _obs = Nx.create Nx.int32 [| 1 |] [| 1l |] in\n  let stats = Eval.run env ~policy ~n_episodes:7 ~max_steps:5 () in\n  equal ~msg:\"n_episodes matches\" int 7 stats.n_episodes\n\nlet () =\n  Nx.Rng.run ~seed:42 @@ fun () ->\n  run \"Fehu.Eval\"\n    [\n      group \"run\"\n        [\n          test \"constant reward statistics\" test_constant_reward_stats;\n          test \"n_episodes matches stats\" test_n_episodes_matches;\n        ];\n    ]\n"
  },
  {
    "path": "packages/fehu/test/test_gae.ml",
    "content": "open Fehu\nopen Windtrap\n\nlet f = float 1e-6\n\n(* Compute *)\n\nlet test_compute_simple () =\n  let rewards = [| 1.0; 1.0; 1.0 |] in\n  let values = [| 0.5; 0.5; 0.5 |] in\n  let terminated = [| false; false; false |] in\n  let truncated = [| false; false; false |] in\n  let next_values = [| 0.5; 0.5; 0.5 |] in\n  let advantages, returns =\n    Gae.compute ~rewards ~values ~terminated ~truncated ~next_values ~gamma:0.99\n      ~lambda:0.95\n  in\n  equal ~msg:\"lengths match\" int 3 (Array.length advantages);\n  for i = 0 to 2 do\n    equal ~msg:\"returns = advantages + values\" f returns.(i)\n      (advantages.(i) +. values.(i))\n  done\n\nlet test_compute_termination () =\n  let rewards = [| 1.0; 1.0; 1.0 |] in\n  let values = [| 0.5; 0.5; 0.5 |] in\n  let terminated = [| false; true; false |] in\n  let truncated = [| false; false; false |] in\n  let next_values = [| 0.5; 0.5; 0.5 |] in\n  let advantages, _returns =\n    Gae.compute ~rewards ~values ~terminated ~truncated ~next_values ~gamma:0.99\n      ~lambda:0.95\n  in\n  (* At step 1 (terminated), bootstrap is 0: delta = 1.0 + 0.99*0 - 0.5 = 0.5 *)\n  equal ~msg:\"terminal advantage\" f 0.5 advantages.(1)\n\nlet test_compute_truncation () =\n  let rewards = [| 1.0; 1.0; 1.0 |] in\n  let values = [| 0.5; 0.5; 0.5 |] in\n  let terminated = [| false; false; false |] in\n  let truncated = [| false; true; false |] in\n  let next_values = [| 0.5; 2.0; 0.5 |] in\n  let advantages_trunc, _returns =\n    Gae.compute ~rewards ~values ~terminated ~truncated ~next_values ~gamma:0.99\n      ~lambda:0.95\n  in\n  (* At step 1 (truncated), bootstrap uses next_values.(1) = 2.0 delta = 1.0 +\n     0.99*2.0 - 0.5 = 2.48 *)\n  let terminated_fake = [| false; true; false |] in\n  let advantages_term, _returns_term =\n    Gae.compute ~rewards ~values ~terminated:terminated_fake\n      ~truncated:[| false; false; false |] ~next_values ~gamma:0.99 ~lambda:0.95\n  in\n  (* With termination instead, bootstrap would 
be 0: delta = 1.0 + 0.99*0 - 0.5\n     = 0.5 These must differ because truncation uses next_values. *)\n  not_equal ~msg:\"truncation differs from termination\" f advantages_trunc.(1)\n    advantages_term.(1)\n\nlet test_compute_length_mismatch () =\n  raises_invalid_arg \"Gae: all arrays must have the same length\" (fun () ->\n      Gae.compute ~rewards:[| 1.0; 1.0 |] ~values:[| 0.5 |]\n        ~terminated:[| false; false |] ~truncated:[| false; false |]\n        ~next_values:[| 0.5; 0.5 |] ~gamma:0.99 ~lambda:0.95)\n\nlet test_compute_empty () =\n  let advantages, returns =\n    Gae.compute ~rewards:[||] ~values:[||] ~terminated:[||] ~truncated:[||]\n      ~next_values:[||] ~gamma:0.99 ~lambda:0.95\n  in\n  equal ~msg:\"empty advantages\" int 0 (Array.length advantages);\n  equal ~msg:\"empty returns\" int 0 (Array.length returns)\n\n(* Returns *)\n\nlet test_returns_simple () =\n  let ret =\n    Gae.returns ~rewards:[| 1.0; 1.0; 1.0 |]\n      ~terminated:[| false; false; false |] ~truncated:[| false; false; false |]\n      ~gamma:1.0\n  in\n  equal ~msg:\"ret[0]\" f 3.0 ret.(0);\n  equal ~msg:\"ret[1]\" f 2.0 ret.(1);\n  equal ~msg:\"ret[2]\" f 1.0 ret.(2)\n\nlet test_returns_gamma_zero () =\n  let ret =\n    Gae.returns ~rewards:[| 1.0; 2.0; 3.0 |]\n      ~terminated:[| false; false; false |] ~truncated:[| false; false; false |]\n      ~gamma:0.0\n  in\n  equal ~msg:\"ret[0]\" f 1.0 ret.(0);\n  equal ~msg:\"ret[1]\" f 2.0 ret.(1);\n  equal ~msg:\"ret[2]\" f 3.0 ret.(2)\n\nlet test_returns_terminated () =\n  let ret =\n    Gae.returns ~rewards:[| 1.0; 1.0; 1.0 |]\n      ~terminated:[| false; true; false |] ~truncated:[| false; false; false |]\n      ~gamma:1.0\n  in\n  (* Step 2: acc = 1.0 Step 1: terminated, so acc = 1.0 + 1.0*0.0*1.0 = 1.0 Step\n     0: acc = 1.0 + 1.0*1.0*1.0 = 2.0 *)\n  equal ~msg:\"ret[0]\" f 2.0 ret.(0);\n  equal ~msg:\"ret[1]\" f 1.0 ret.(1);\n  equal ~msg:\"ret[2]\" f 1.0 ret.(2)\n\nlet test_returns_truncated () =\n  let ret =\n    
Gae.returns ~rewards:[| 1.0; 1.0; 1.0 |]\n      ~terminated:[| false; false; false |] ~truncated:[| false; true; false |]\n      ~gamma:1.0\n  in\n  (* Truncation at step 1 resets accumulation, same as terminated *)\n  equal ~msg:\"ret[0]\" f 2.0 ret.(0);\n  equal ~msg:\"ret[1]\" f 1.0 ret.(1);\n  equal ~msg:\"ret[2]\" f 1.0 ret.(2)\n\nlet test_returns_length_mismatch () =\n  raises_invalid_arg\n    \"Gae.returns: rewards, terminated, and truncated must have the same length\"\n    (fun () ->\n      Gae.returns ~rewards:[| 1.0; 1.0 |] ~terminated:[| false |]\n        ~truncated:[| false; false |] ~gamma:0.99)\n\n(* Compute from values *)\n\nlet test_compute_from_values_simple () =\n  let rewards = [| 1.0; 1.0; 1.0 |] in\n  let values = [| 0.5; 0.5; 0.5 |] in\n  let terminated = [| false; false; false |] in\n  let truncated = [| false; false; false |] in\n  let last_value = 0.5 in\n  let advantages, returns =\n    Gae.compute_from_values ~rewards ~values ~terminated ~truncated ~last_value\n      ~gamma:0.99 ~lambda:0.95\n  in\n  (* next_values should be [| 0.5; 0.5; 0.5 |] (values shifted + last_value) *)\n  let advantages2, returns2 =\n    Gae.compute ~rewards ~values ~terminated ~truncated\n      ~next_values:[| 0.5; 0.5; 0.5 |] ~gamma:0.99 ~lambda:0.95\n  in\n  for i = 0 to 2 do\n    equal ~msg:\"advantages match\" f advantages2.(i) advantages.(i);\n    equal ~msg:\"returns match\" f returns2.(i) returns.(i)\n  done\n\nlet test_compute_from_values_shifted () =\n  let rewards = [| 1.0; 1.0; 1.0 |] in\n  let values = [| 1.0; 2.0; 3.0 |] in\n  let terminated = [| false; false; false |] in\n  let truncated = [| false; false; false |] in\n  let last_value = 4.0 in\n  let advantages, _returns =\n    Gae.compute_from_values ~rewards ~values ~terminated ~truncated ~last_value\n      ~gamma:0.99 ~lambda:0.95\n  in\n  (* next_values = [| 2.0; 3.0; 4.0 |] *)\n  let advantages2, _returns2 =\n    Gae.compute ~rewards ~values ~terminated ~truncated\n      ~next_values:[| 2.0; 
3.0; 4.0 |] ~gamma:0.99 ~lambda:0.95\n  in\n  for i = 0 to 2 do\n    equal ~msg:\"advantages match\" f advantages2.(i) advantages.(i)\n  done\n\n(* Normalize *)\n\nlet test_normalize_mean_zero () =\n  let arr = [| 1.0; 2.0; 3.0; 4.0; 5.0 |] in\n  let normed = Gae.normalize arr in\n  let mean = ref 0.0 in\n  Array.iter (fun x -> mean := !mean +. x) normed;\n  mean := !mean /. Float.of_int (Array.length normed);\n  equal ~msg:\"mean near 0\" f 0.0 !mean\n\nlet test_normalize_std_one () =\n  let arr = [| 1.0; 2.0; 3.0; 4.0; 5.0 |] in\n  let normed = Gae.normalize arr in\n  let n = Array.length normed in\n  let mean = ref 0.0 in\n  Array.iter (fun x -> mean := !mean +. x) normed;\n  mean := !mean /. Float.of_int n;\n  let var = ref 0.0 in\n  Array.iter\n    (fun x ->\n      let d = x -. !mean in\n      var := !var +. (d *. d))\n    normed;\n  var := !var /. Float.of_int n;\n  let std = sqrt !var in\n  is_true ~msg:\"std near 1\" (Float.abs (std -. 1.0) < 0.01)\n\nlet test_normalize_empty () =\n  let normed = Gae.normalize [||] in\n  equal ~msg:\"empty\" int 0 (Array.length normed)\n\nlet test_normalize_single () =\n  let normed = Gae.normalize [| 42.0 |] in\n  equal ~msg:\"single normalizes to 0\" f 0.0 normed.(0)\n\nlet () =\n  run \"Fehu.Gae\"\n    [\n      group \"compute\"\n        [\n          test \"simple\" test_compute_simple;\n          test \"termination\" test_compute_termination;\n          test \"truncation\" test_compute_truncation;\n          test \"length mismatch\" test_compute_length_mismatch;\n          test \"empty\" test_compute_empty;\n        ];\n      group \"returns\"\n        [\n          test \"simple\" test_returns_simple;\n          test \"gamma zero\" test_returns_gamma_zero;\n          test \"terminated resets\" test_returns_terminated;\n          test \"truncated resets\" test_returns_truncated;\n          test \"length mismatch\" test_returns_length_mismatch;\n        ];\n      group \"compute_from_values\"\n        [\n          test 
\"matches compute\" test_compute_from_values_simple;\n          test \"shifted values\" test_compute_from_values_shifted;\n        ];\n      group \"normalize\"\n        [\n          test \"mean near zero\" test_normalize_mean_zero;\n          test \"std near one\" test_normalize_std_one;\n          test \"empty\" test_normalize_empty;\n          test \"single element\" test_normalize_single;\n        ];\n    ]\n"
  },
  {
    "path": "packages/fehu/test/test_info.ml",
    "content": "open Fehu\nopen Windtrap\n\nlet value = testable ~pp:Value.pp ~equal:Value.equal ()\n\n(* Operations *)\n\nlet test_empty_is_empty () =\n  equal ~msg:\"empty is_empty\" bool true (Info.is_empty Info.empty)\n\nlet test_set_then_find () =\n  let info = Info.set \"k\" (Value.Int 1) Info.empty in\n  equal ~msg:\"find after set\" (option value) (Some (Value.Int 1))\n    (Info.find \"k\" info)\n\nlet test_find_missing () =\n  let info = Info.set \"k\" (Value.Int 1) Info.empty in\n  equal ~msg:\"find missing\" (option value) None (Info.find \"other\" info)\n\nlet test_find_exn_existing () =\n  let info = Info.set \"k\" (Value.Int 42) Info.empty in\n  equal ~msg:\"find_exn existing\" value (Value.Int 42) (Info.find_exn \"k\" info)\n\nlet test_remove () =\n  let info = Info.set \"k\" (Value.Int 1) Info.empty in\n  let info = Info.remove \"k\" info in\n  equal ~msg:\"find after remove\" (option value) None (Info.find \"k\" info)\n\nlet test_merge_right_biased () =\n  let a = Info.set \"k\" (Value.Int 1) Info.empty in\n  let b = Info.set \"k\" (Value.Int 2) Info.empty in\n  let merged = Info.merge a b in\n  equal ~msg:\"merge right wins\" value (Value.Int 2) (Info.find_exn \"k\" merged)\n\nlet test_of_list_to_list_round_trip () =\n  let kvs = [ (\"a\", Value.Int 1); (\"c\", Value.Int 3); (\"b\", Value.Int 2) ] in\n  let info = Info.of_list kvs in\n  let result = Info.to_list info in\n  equal ~msg:\"round-trip keys sorted\" (list string) [ \"a\"; \"b\"; \"c\" ]\n    (List.map fst result);\n  equal ~msg:\"round-trip values\" (list value)\n    [ Value.Int 1; Value.Int 2; Value.Int 3 ]\n    (List.map snd result)\n\n(* Errors *)\n\nlet test_find_exn_missing () =\n  raises_invalid_arg \"Info.find_exn: key \\\"missing\\\" not present\" (fun () ->\n      ignore (Info.find_exn \"missing\" Info.empty))\n\n(* Convenience *)\n\nlet test_int_convenience () =\n  equal ~msg:\"Info.int\" value (Value.Int 42) (Info.int 42)\n\nlet test_float_convenience () =\n  equal 
~msg:\"Info.float\" value (Value.Float 1.0) (Info.float 1.0)\n\nlet test_bool_convenience () =\n  equal ~msg:\"Info.bool\" value (Value.Bool true) (Info.bool true)\n\nlet test_string_convenience () =\n  equal ~msg:\"Info.string\" value (Value.String \"hi\") (Info.string \"hi\")\n\nlet test_null_convenience () = equal ~msg:\"Info.null\" value Value.Null Info.null\n\nlet () =\n  run \"Fehu.Info\"\n    [\n      group \"operations\"\n        [\n          test \"empty is_empty\" test_empty_is_empty;\n          test \"set then find\" test_set_then_find;\n          test \"find missing key\" test_find_missing;\n          test \"find_exn existing\" test_find_exn_existing;\n          test \"remove\" test_remove;\n          test \"merge right-biased\" test_merge_right_biased;\n          test \"of_list/to_list round-trip\" test_of_list_to_list_round_trip;\n        ];\n      group \"errors\" [ test \"find_exn missing raises\" test_find_exn_missing ];\n      group \"convenience\"\n        [\n          test \"int\" test_int_convenience;\n          test \"float\" test_float_convenience;\n          test \"bool\" test_bool_convenience;\n          test \"string\" test_string_convenience;\n          test \"null\" test_null_convenience;\n        ];\n    ]\n"
  },
  {
    "path": "packages/fehu/test/test_render.ml",
    "content": "open Fehu\nopen Windtrap\n\nlet make_data n =\n  Bigarray.Array1.create Bigarray.int8_unsigned Bigarray.c_layout n\n\n(* Image *)\n\nlet test_valid_rgb_image () =\n  let data = make_data 12 in\n  let img = Render.image ~width:2 ~height:2 data in\n  equal ~msg:\"width\" int 2 img.width;\n  equal ~msg:\"height\" int 2 img.height\n\nlet test_wrong_data_length_raises () =\n  let data = make_data 10 in\n  raises_invalid_arg\n    \"Render.image: data length 10 does not match width * height * channels = 12\"\n    (fun () -> ignore (Render.image ~width:2 ~height:2 data))\n\nlet test_rgba_channels () =\n  let data = make_data 16 in\n  let img =\n    Render.image ~width:2 ~height:2 ~pixel_format:Render.Pixel.Rgba data\n  in\n  equal ~msg:\"width\" int 2 img.width;\n  equal ~msg:\"height\" int 2 img.height\n\nlet test_gray_channels () =\n  let data = make_data 4 in\n  let img =\n    Render.image ~width:2 ~height:2 ~pixel_format:Render.Pixel.Gray data\n  in\n  equal ~msg:\"width\" int 2 img.width;\n  equal ~msg:\"height\" int 2 img.height\n\nlet test_pixel_format_default_rgb () =\n  let data = make_data 3 in\n  let img = Render.image ~width:1 ~height:1 data in\n  equal ~msg:\"default is Rgb\" int 3 (Render.Pixel.channels img.pixel_format)\n\n(* Rollout *)\n\nlet make_renderable_env () =\n  let obs_space = Space.Box.create ~low:[| 0.0 |] ~high:[| 10.0 |] in\n  let act_space = Space.Discrete.create 2 in\n  let state = ref 5.0 in\n  let reset _env ?options:_ () =\n    state := 5.0;\n    (Nx.create Nx.float32 [| 1 |] [| !state |], Info.empty)\n  in\n  let step _env action =\n    let a : Int32.t array = Nx.to_array (Nx.reshape [| 1 |] action) in\n    state := !state +. 
if Int32.to_int a.(0) = 0 then -1.0 else 1.0;\n    let terminated = !state <= 0.0 || !state >= 10.0 in\n    Env.step_result\n      ~observation:(Nx.create Nx.float32 [| 1 |] [| !state |])\n      ~reward:1.0 ~terminated ()\n  in\n  let render () =\n    let data = make_data 3 in\n    Some (Render.image ~width:1 ~height:1 data)\n  in\n  Env.create ~id:\"Renderable-v0\" ~observation_space:obs_space\n    ~action_space:act_space ~reset ~step ~render ()\n\nlet test_rollout_sink_called () =\n  let env = make_renderable_env () in\n  let count = ref 0 in\n  let policy _obs = Nx.create Nx.int32 [| 1 |] [| 1l |] in\n  let sink _frame = incr count in\n  Render.rollout env ~policy ~steps:3 ~sink ();\n  equal ~msg:\"sink called 3 times\" int 3 !count\n\n(* on_render *)\n\nlet action_right = Nx.create Nx.int32 [| 1 |] [| 1l |]\n\nlet test_on_render_frame_count () =\n  let env = make_renderable_env () in\n  let count = ref 0 in\n  let wrapped = Render.on_render ~sink:(fun _ -> incr count) env in\n  let _obs, _info = Env.reset wrapped () in\n  let _s1 = Env.step wrapped action_right in\n  let _s2 = Env.step wrapped action_right in\n  let _s3 = Env.step wrapped action_right in\n  (* 1 frame from reset + 3 frames from steps = 4 *)\n  equal ~msg:\"frame count\" int 4 !count\n\nlet test_on_render_passthrough () =\n  let env = make_renderable_env () in\n  let wrapped = Render.on_render ~sink:(fun _ -> ()) env in\n  let _obs, _info = Env.reset wrapped () in\n  let step = Env.step wrapped action_right in\n  equal ~msg:\"reward unchanged\" (float 0.0) 1.0 step.reward;\n  is_false ~msg:\"not terminated\" step.terminated;\n  is_false ~msg:\"not truncated\" step.truncated\n\nlet test_on_render_id () =\n  let env = make_renderable_env () in\n  let wrapped = Render.on_render ~sink:(fun _ -> ()) env in\n  equal ~msg:\"id suffix\" (option string) (Some \"Renderable-v0/OnRender\")\n    (Env.id wrapped)\n\nlet () =\n  Nx.Rng.run ~seed:42 @@ fun () ->\n  run \"Fehu.Render\"\n    [\n      group 
\"image\"\n        [\n          test \"valid RGB 2x2\" test_valid_rgb_image;\n          test \"wrong data length raises\" test_wrong_data_length_raises;\n          test \"RGBA 4 channels\" test_rgba_channels;\n          test \"Gray 1 channel\" test_gray_channels;\n          test \"default pixel_format is Rgb\" test_pixel_format_default_rgb;\n        ];\n      group \"rollout\"\n        [ test \"sink called for each step\" test_rollout_sink_called ];\n      group \"on_render\"\n        [\n          test \"frame count\" test_on_render_frame_count;\n          test \"passthrough\" test_on_render_passthrough;\n          test \"id suffix\" test_on_render_id;\n        ];\n    ]\n"
  },
  {
    "path": "packages/fehu/test/test_space.ml",
    "content": "open Fehu\nopen Windtrap\n\nlet value = testable ~pp:Value.pp ~equal:Value.equal ()\n\n(* Helpers *)\n\nlet int32_scalar v = Nx.scalar Nx.int32 (Int32.of_int v)\n\nlet int32_vec arr =\n  Nx.create Nx.int32 [| Array.length arr |] (Array.map Int32.of_int arr)\n\nlet float32_vec arr = Nx.create Nx.float32 [| Array.length arr |] arr\n\nlet read_float32_vec t =\n  let n = (Nx.shape t).(0) in\n  let arr : float array = Nx.to_array (Nx.reshape [| n |] t) in\n  arr\n\n(* Discrete *)\n\nlet test_discrete_default () =\n  let s = Space.Discrete.create 3 in\n  equal ~msg:\"n is 3\" int 3 (Space.Discrete.n s);\n  equal ~msg:\"start is 0\" int 0 (Space.Discrete.start s)\n\nlet test_discrete_custom_start () =\n  let s = Space.Discrete.create ~start:5 3 in\n  equal ~msg:\"start is 5\" int 5 (Space.Discrete.start s);\n  equal ~msg:\"n is 3\" int 3 (Space.Discrete.n s)\n\nlet test_discrete_contains () =\n  let s = Space.Discrete.create 3 in\n  is_true ~msg:\"contains 0\" (Space.contains s (int32_scalar 0));\n  is_true ~msg:\"contains 1\" (Space.contains s (int32_scalar 1));\n  is_true ~msg:\"contains 2\" (Space.contains s (int32_scalar 2));\n  is_false ~msg:\"not contains 3\" (Space.contains s (int32_scalar 3));\n  is_false ~msg:\"not contains -1\" (Space.contains s (int32_scalar (-1)))\n\nlet test_discrete_contains_with_start () =\n  let s = Space.Discrete.create ~start:5 3 in\n  is_true ~msg:\"contains 5\" (Space.contains s (int32_scalar 5));\n  is_true ~msg:\"contains 7\" (Space.contains s (int32_scalar 7));\n  is_false ~msg:\"not contains 4\" (Space.contains s (int32_scalar 4));\n  is_false ~msg:\"not contains 8\" (Space.contains s (int32_scalar 8))\n\nlet test_discrete_sample () =\n  let s = Space.Discrete.create 3 in\n  let v = Space.sample s in\n  is_true ~msg:\"sample is valid\" (Space.contains s v)\n\nlet test_discrete_pack_unpack () =\n  let s = Space.Discrete.create 3 in\n  let v = int32_scalar 2 in\n  let packed = Space.pack s v in\n  equal ~msg:\"pack 
produces Int 2\" value (Value.Int 2) packed;\n  let unpacked = Space.unpack s packed in\n  is_ok ~msg:\"unpack succeeds\" unpacked\n\nlet test_discrete_unpack_invalid () =\n  let s = Space.Discrete.create 3 in\n  is_error ~msg:\"unpack out of range\" (Space.unpack s (Value.Int 5));\n  is_error ~msg:\"unpack wrong type\" (Space.unpack s (Value.String \"x\"))\n\nlet test_discrete_boundary_values () =\n  let s = Space.Discrete.create 3 in\n  let bvs = Space.boundary_values s in\n  equal ~msg:\"2 boundary values\" int 2 (List.length bvs);\n  equal ~msg:\"first boundary\" value (Value.Int 0) (List.hd bvs);\n  equal ~msg:\"last boundary\" value (Value.Int 2) (List.nth bvs 1)\n\nlet test_discrete_boundary_single () =\n  let s = Space.Discrete.create 1 in\n  let bvs = Space.boundary_values s in\n  equal ~msg:\"1 boundary for n=1\" int 1 (List.length bvs)\n\nlet test_discrete_shape () =\n  let s = Space.Discrete.create 3 in\n  is_none ~msg:\"discrete shape is None\" (Space.shape s)\n\nlet test_discrete_error_zero () =\n  raises_invalid_arg \"Space.Discrete.create: n must be strictly positive\"\n    (fun () -> Space.Discrete.create 0)\n\nlet test_discrete_error_negative () =\n  raises_invalid_arg \"Space.Discrete.create: n must be strictly positive\"\n    (fun () -> Space.Discrete.create (-1))\n\n(* Box *)\n\nlet test_box_1d () =\n  let s = Space.Box.create ~low:[| 0.0 |] ~high:[| 10.0 |] in\n  let low, high = Space.Box.bounds s in\n  equal ~msg:\"low\" (array (float 0.)) [| 0.0 |] low;\n  equal ~msg:\"high\" (array (float 0.)) [| 10.0 |] high\n\nlet test_box_contains () =\n  let s = Space.Box.create ~low:[| 0.0 |] ~high:[| 10.0 |] in\n  is_true ~msg:\"mid value\" (Space.contains s (float32_vec [| 5.0 |]));\n  is_true ~msg:\"low bound\" (Space.contains s (float32_vec [| 0.0 |]));\n  is_true ~msg:\"high bound\" (Space.contains s (float32_vec [| 10.0 |]));\n  is_false ~msg:\"below low\" (Space.contains s (float32_vec [| -0.1 |]));\n  is_false ~msg:\"above high\" 
(Space.contains s (float32_vec [| 10.1 |]))\n\nlet test_box_sample () =\n  let s = Space.Box.create ~low:[| 0.0 |] ~high:[| 10.0 |] in\n  let v = Space.sample s in\n  is_true ~msg:\"sample in bounds\" (Space.contains s v)\n\nlet test_box_pack_unpack () =\n  let s = Space.Box.create ~low:[| 0.0 |] ~high:[| 10.0 |] in\n  let v = float32_vec [| 5.0 |] in\n  let packed = Space.pack s v in\n  let unpacked = Space.unpack s packed in\n  is_ok ~msg:\"round-trip succeeds\" unpacked;\n  match unpacked with\n  | Ok t ->\n      let arr = read_float32_vec t in\n      equal ~msg:\"value preserved\" (float 0.01) 5.0 arr.(0)\n  | Error _ -> fail \"unreachable\"\n\nlet test_box_boundary_values () =\n  let s = Space.Box.create ~low:[| 0.0 |] ~high:[| 10.0 |] in\n  let bvs = Space.boundary_values s in\n  equal ~msg:\"2 boundaries\" int 2 (List.length bvs)\n\nlet test_box_boundary_identical () =\n  let s = Space.Box.create ~low:[| 5.0 |] ~high:[| 5.0 |] in\n  let bvs = Space.boundary_values s in\n  equal ~msg:\"1 boundary when identical\" int 1 (List.length bvs)\n\nlet test_box_shape_1d () =\n  let s = Space.Box.create ~low:[| 0.0 |] ~high:[| 10.0 |] in\n  is_some ~msg:\"shape is Some\" (Space.shape s);\n  equal ~msg:\"shape [|1|]\" (array int) [| 1 |] (Option.get (Space.shape s))\n\nlet test_box_2d () =\n  let s = Space.Box.create ~low:[| 0.0; -1.0 |] ~high:[| 1.0; 1.0 |] in\n  equal ~msg:\"shape [|2|]\" (array int) [| 2 |] (Option.get (Space.shape s));\n  is_true ~msg:\"2d in bounds\" (Space.contains s (float32_vec [| 0.5; 0.0 |]));\n  is_false ~msg:\"2d out of bounds\"\n    (Space.contains s (float32_vec [| 0.5; 2.0 |]))\n\nlet test_box_error_empty () =\n  raises_invalid_arg \"Space.Box.create: low cannot be empty\" (fun () ->\n      Space.Box.create ~low:[||] ~high:[||])\n\nlet test_box_error_mismatch () =\n  raises_invalid_arg\n    \"Space.Box.create: low and high must have identical lengths\" (fun () ->\n      Space.Box.create ~low:[| 0.0 |] ~high:[| 1.0; 2.0 |])\n\nlet 
test_box_error_low_gt_high () =\n  raises_match ~msg:\"low > high raises\"\n    (fun exn -> match exn with Invalid_argument _ -> true | _ -> false)\n    (fun () -> Space.Box.create ~low:[| 5.0 |] ~high:[| 1.0 |])\n\n(* Multi_binary *)\n\nlet test_mb_contains () =\n  let s = Space.Multi_binary.create 3 in\n  is_true ~msg:\"all zeros\" (Space.contains s (int32_vec [| 0; 0; 0 |]));\n  is_true ~msg:\"all ones\" (Space.contains s (int32_vec [| 1; 1; 1 |]));\n  is_true ~msg:\"mixed\" (Space.contains s (int32_vec [| 0; 1; 0 |]));\n  is_false ~msg:\"value 2 invalid\" (Space.contains s (int32_vec [| 0; 2; 0 |]));\n  is_false ~msg:\"wrong length\" (Space.contains s (int32_vec [| 0; 1 |]))\n\nlet test_mb_sample () =\n  let s = Space.Multi_binary.create 3 in\n  let v = Space.sample s in\n  is_true ~msg:\"sample valid\" (Space.contains s v)\n\nlet test_mb_boundary_values () =\n  let s = Space.Multi_binary.create 3 in\n  let bvs = Space.boundary_values s in\n  equal ~msg:\"2 boundaries\" int 2 (List.length bvs)\n\nlet test_mb_shape () =\n  let s = Space.Multi_binary.create 3 in\n  equal ~msg:\"shape [|3|]\" (option (array int)) (Some [| 3 |]) (Space.shape s)\n\nlet test_mb_error () =\n  raises_invalid_arg \"Space.Multi_binary.create: n must be strictly positive\"\n    (fun () -> Space.Multi_binary.create 0)\n\n(* Multi_discrete *)\n\nlet test_md_contains () =\n  let s = Space.Multi_discrete.create [| 3; 4 |] in\n  is_true ~msg:\"valid\" (Space.contains s (int32_vec [| 0; 0 |]));\n  is_true ~msg:\"upper valid\" (Space.contains s (int32_vec [| 2; 3 |]));\n  is_false ~msg:\"first oob\" (Space.contains s (int32_vec [| 3; 0 |]));\n  is_false ~msg:\"second oob\" (Space.contains s (int32_vec [| 0; 4 |]));\n  is_false ~msg:\"negative\" (Space.contains s (int32_vec [| -1; 0 |]))\n\nlet test_md_sample () =\n  let s = Space.Multi_discrete.create [| 3; 4 |] in\n  let v = Space.sample s in\n  is_true ~msg:\"sample valid\" (Space.contains s v)\n\nlet test_md_shape () =\n  let s = 
Space.Multi_discrete.create [| 3; 4 |] in\n  equal ~msg:\"shape [|2|]\" (option (array int)) (Some [| 2 |]) (Space.shape s)\n\nlet test_md_error_empty () =\n  raises_invalid_arg \"Space.Multi_discrete.create: nvec must not be empty\"\n    (fun () -> Space.Multi_discrete.create [||])\n\nlet test_md_error_zero_element () =\n  raises_match ~msg:\"nvec element <= 0 raises\"\n    (fun exn -> match exn with Invalid_argument _ -> true | _ -> false)\n    (fun () -> Space.Multi_discrete.create [| 3; 0 |])\n\n(* Tuple *)\n\nlet test_tuple_contains () =\n  let ds = Space.Discrete.create 3 in\n  let bs = Space.Box.create ~low:[| 0.0 |] ~high:[| 1.0 |] in\n  let s = Space.Tuple.create [ Pack ds; Pack bs ] in\n  let valid = [ Value.Int 1; Value.Float_array [| 0.5 |] ] in\n  is_true ~msg:\"valid tuple\" (Space.contains s valid);\n  let bad_length = [ Value.Int 1 ] in\n  is_false ~msg:\"wrong length\" (Space.contains s bad_length);\n  let bad_value = [ Value.Int 5; Value.Float_array [| 0.5 |] ] in\n  is_false ~msg:\"invalid element\" (Space.contains s bad_value)\n\nlet test_tuple_sample () =\n  let ds = Space.Discrete.create 3 in\n  let bs = Space.Box.create ~low:[| 0.0 |] ~high:[| 1.0 |] in\n  let s = Space.Tuple.create [ Pack ds; Pack bs ] in\n  let v = Space.sample s in\n  is_true ~msg:\"sample valid\" (Space.contains s v)\n\nlet test_tuple_pack_unpack () =\n  let ds = Space.Discrete.create 3 in\n  let bs = Space.Box.create ~low:[| 0.0 |] ~high:[| 1.0 |] in\n  let s = Space.Tuple.create [ Pack ds; Pack bs ] in\n  let v = [ Value.Int 1; Value.Float_array [| 0.5 |] ] in\n  let packed = Space.pack s v in\n  let unpacked = Space.unpack s packed in\n  is_ok ~msg:\"round-trip succeeds\" unpacked\n\nlet test_tuple_empty () =\n  let s = Space.Tuple.create [] in\n  is_true ~msg:\"empty tuple valid\" (Space.contains s []);\n  is_false ~msg:\"non-empty invalid\" (Space.contains s [ Value.Int 0 ])\n\n(* Dict *)\n\nlet test_dict_contains () =\n  let ds = Space.Discrete.create 3 in\n  let bs 
= Space.Box.create ~low:[| 0.0 |] ~high:[| 1.0 |] in\n  let s = Space.Dict.create [ (\"action\", Pack ds); (\"obs\", Pack bs) ] in\n  let valid =\n    [ (\"action\", Value.Int 1); (\"obs\", Value.Float_array [| 0.5 |]) ]\n  in\n  is_true ~msg:\"valid dict\" (Space.contains s valid);\n  let missing_key = [ (\"action\", Value.Int 1) ] in\n  is_false ~msg:\"missing key\" (Space.contains s missing_key);\n  let extra_key =\n    [\n      (\"action\", Value.Int 1);\n      (\"obs\", Value.Float_array [| 0.5 |]);\n      (\"extra\", Value.Int 0);\n    ]\n  in\n  is_false ~msg:\"extra key\" (Space.contains s extra_key)\n\nlet test_dict_sample () =\n  let ds = Space.Discrete.create 3 in\n  let s = Space.Dict.create [ (\"a\", Pack ds) ] in\n  let v = Space.sample s in\n  is_true ~msg:\"sample valid\" (Space.contains s v)\n\nlet test_dict_error_duplicate () =\n  let ds = Space.Discrete.create 3 in\n  raises_match ~msg:\"duplicate key raises\"\n    (fun exn -> match exn with Invalid_argument _ -> true | _ -> false)\n    (fun () -> Space.Dict.create [ (\"a\", Pack ds); (\"a\", Pack ds) ])\n\n(* Text *)\n\nlet test_text_contains () =\n  let s = Space.Text.create () in\n  is_true ~msg:\"alpha string\" (Space.contains s \"hello\");\n  is_true ~msg:\"empty string\" (Space.contains s \"\");\n  is_true ~msg:\"with digits\" (Space.contains s \"abc123\");\n  is_true ~msg:\"with space\" (Space.contains s \"hello world\")\n\nlet test_text_contains_invalid () =\n  let s = Space.Text.create ~charset:\"abc\" () in\n  is_false ~msg:\"char outside charset\" (Space.contains s \"abcd\")\n\nlet test_text_contains_too_long () =\n  let s = Space.Text.create ~max_length:3 () in\n  is_false ~msg:\"exceeds max_length\" (Space.contains s \"abcd\");\n  is_true ~msg:\"at max_length\" (Space.contains s \"abc\")\n\nlet test_text_sample () =\n  let s = Space.Text.create () in\n  let v = Space.sample s in\n  is_true ~msg:\"sample valid\" (Space.contains s v);\n  is_true ~msg:\"sample non-empty\" (String.length 
v > 0)\n\nlet test_text_boundary_values () =\n  let s = Space.Text.create () in\n  let bvs = Space.boundary_values s in\n  equal ~msg:\"2 boundaries\" int 2 (List.length bvs)\n\nlet test_text_error_max_length () =\n  raises_invalid_arg \"Space.Text.create: max_length must be positive\" (fun () ->\n      Space.Text.create ~max_length:0 ())\n\nlet test_text_error_charset () =\n  raises_invalid_arg \"Space.Text.create: charset must not be empty\" (fun () ->\n      Space.Text.create ~charset:\"\" ())\n\n(* Sequence *)\n\nlet test_seq_contains () =\n  let ds = Space.Discrete.create 3 in\n  let s = Space.Sequence.create ~min_length:1 ~max_length:3 ds in\n  let v1 = int32_scalar 0 in\n  let v2 = int32_scalar 2 in\n  is_true ~msg:\"length 1\" (Space.contains s [ v1 ]);\n  is_true ~msg:\"length 3\" (Space.contains s [ v1; v2; v1 ]);\n  is_false ~msg:\"empty\" (Space.contains s []);\n  is_false ~msg:\"too long\" (Space.contains s [ v1; v2; v1; v2 ])\n\nlet test_seq_contains_unbounded () =\n  let ds = Space.Discrete.create 3 in\n  let s = Space.Sequence.create ~min_length:0 ds in\n  is_true ~msg:\"empty is valid\" (Space.contains s []);\n  is_true ~msg:\"long is valid\"\n    (Space.contains s (List.init 100 (fun _ -> int32_scalar 0)))\n\nlet test_seq_sample () =\n  let ds = Space.Discrete.create 3 in\n  let s = Space.Sequence.create ~min_length:1 ~max_length:5 ds in\n  let v = Space.sample s in\n  is_true ~msg:\"sample valid\" (Space.contains s v)\n\nlet test_seq_sample_fixed () =\n  let ds = Space.Discrete.create 3 in\n  let s = Space.Sequence.create ~min_length:2 ds in\n  let v = Space.sample s in\n  equal ~msg:\"fixed length 2\" int 2 (List.length v)\n\nlet test_seq_pack_unpack () =\n  let ds = Space.Discrete.create 3 in\n  let s = Space.Sequence.create ~min_length:1 ~max_length:3 ds in\n  let v = [ int32_scalar 0; int32_scalar 1 ] in\n  let packed = Space.pack s v in\n  let unpacked = Space.unpack s packed in\n  is_ok ~msg:\"round-trip succeeds\" unpacked\n\nlet 
test_seq_error_min_negative () =\n  let ds = Space.Discrete.create 3 in\n  raises_invalid_arg \"Space.Sequence.create: min_length must be non-negative\"\n    (fun () -> Space.Sequence.create ~min_length:(-1) ds)\n\nlet test_seq_error_max_lt_min () =\n  let ds = Space.Discrete.create 3 in\n  raises_invalid_arg \"Space.Sequence.create: max_length must be >= min_length\"\n    (fun () -> Space.Sequence.create ~min_length:5 ~max_length:2 ds)\n\n(* Discrete helpers *)\n\nlet test_discrete_to_int () =\n  let v = Space.Discrete.of_int 5 in\n  equal ~msg:\"to_int round-trip\" int 5 (Space.Discrete.to_int v)\n\nlet test_discrete_of_int () =\n  let v = Space.Discrete.of_int 3 in\n  let s = Space.Discrete.create 5 in\n  is_true ~msg:\"of_int creates valid element\" (Space.contains s v)\n\n(* Spec *)\n\nlet test_spec_discrete () =\n  let s = Space.Discrete.create ~start:2 4 in\n  let sp = Space.spec s in\n  equal ~msg:\"discrete spec\" bool true\n    (Space.equal_spec sp (Space.Discrete { start = 2; n = 4 }))\n\nlet test_spec_box () =\n  let s = Space.Box.create ~low:[| 0.0 |] ~high:[| 1.0 |] in\n  let sp = Space.spec s in\n  equal ~msg:\"box spec\" bool true\n    (Space.equal_spec sp (Space.Box { low = [| 0.0 |]; high = [| 1.0 |] }))\n\nlet test_spec_equal_same () =\n  let s1 = Space.Discrete.create 3 in\n  let s2 = Space.Discrete.create 3 in\n  is_true ~msg:\"same spaces equal spec\"\n    (Space.equal_spec (Space.spec s1) (Space.spec s2))\n\nlet test_spec_not_equal_different () =\n  let s1 = Space.Discrete.create 3 in\n  let s2 = Space.Discrete.create 4 in\n  is_false ~msg:\"different spaces not equal spec\"\n    (Space.equal_spec (Space.spec s1) (Space.spec s2))\n\nlet test_spec_not_equal_kinds () =\n  let s1 = Space.Discrete.create 3 in\n  let s2 = Space.Box.create ~low:[| 0.0 |] ~high:[| 1.0 |] in\n  is_false ~msg:\"different kinds not equal spec\"\n    (Space.equal_spec (Space.spec s1) (Space.spec s2))\n\nlet test_spec_tuple () =\n  let ds = Space.Discrete.create 3 in\n  
let bs = Space.Box.create ~low:[| 0.0 |] ~high:[| 1.0 |] in\n  let s = Space.Tuple.create [ Pack ds; Pack bs ] in\n  let sp = Space.spec s in\n  let expected =\n    Space.Tuple\n      [\n        Space.Discrete { start = 0; n = 3 };\n        Space.Box { low = [| 0.0 |]; high = [| 1.0 |] };\n      ]\n  in\n  is_true ~msg:\"tuple spec\" (Space.equal_spec sp expected)\n\nlet test_spec_dict () =\n  let ds = Space.Discrete.create 3 in\n  let s = Space.Dict.create [ (\"a\", Pack ds) ] in\n  let sp = Space.spec s in\n  let expected = Space.Dict [ (\"a\", Space.Discrete { start = 0; n = 3 }) ] in\n  is_true ~msg:\"dict spec\" (Space.equal_spec sp expected)\n\n(* Tuple.unpack validation *)\n\nlet test_tuple_unpack_validates_elements () =\n  let ds = Space.Discrete.create 3 in\n  let s = Space.Tuple.create [ Pack ds ] in\n  (* Value.Int 5 is out of range for Discrete(n=3, start=0) *)\n  let bad = Value.List [ Value.Int 5 ] in\n  is_error ~msg:\"unpack rejects invalid element\" (Space.unpack s bad)\n\nlet test_tuple_unpack_valid () =\n  let ds = Space.Discrete.create 3 in\n  let s = Space.Tuple.create [ Pack ds ] in\n  let good = Value.List [ Value.Int 1 ] in\n  is_ok ~msg:\"unpack accepts valid element\" (Space.unpack s good)\n\n(* Entry point *)\n\nlet () =\n  Nx.Rng.run ~seed:42 @@ fun () ->\n  run \"Fehu.Space\"\n    [\n      group \"Discrete\"\n        [\n          test \"default start\" test_discrete_default;\n          test \"custom start\" test_discrete_custom_start;\n          test \"contains valid/invalid\" test_discrete_contains;\n          test \"contains with start\" test_discrete_contains_with_start;\n          test \"sample\" test_discrete_sample;\n          test \"pack/unpack\" test_discrete_pack_unpack;\n          test \"unpack invalid\" test_discrete_unpack_invalid;\n          test \"boundary values\" test_discrete_boundary_values;\n          test \"boundary single\" test_discrete_boundary_single;\n          test \"shape\" test_discrete_shape;\n          test 
\"error n=0\" test_discrete_error_zero;\n          test \"error n<0\" test_discrete_error_negative;\n          test \"to_int round-trip\" test_discrete_to_int;\n          test \"of_int valid\" test_discrete_of_int;\n        ];\n      group \"Box\"\n        [\n          test \"1d create and bounds\" test_box_1d;\n          test \"contains\" test_box_contains;\n          test \"sample\" test_box_sample;\n          test \"pack/unpack\" test_box_pack_unpack;\n          test \"boundary values\" test_box_boundary_values;\n          test \"boundary identical\" test_box_boundary_identical;\n          test \"shape 1d\" test_box_shape_1d;\n          test \"2d\" test_box_2d;\n          test \"error empty\" test_box_error_empty;\n          test \"error mismatched lengths\" test_box_error_mismatch;\n          test \"error low > high\" test_box_error_low_gt_high;\n        ];\n      group \"Multi_binary\"\n        [\n          test \"contains\" test_mb_contains;\n          test \"sample\" test_mb_sample;\n          test \"boundary values\" test_mb_boundary_values;\n          test \"shape\" test_mb_shape;\n          test \"error n=0\" test_mb_error;\n        ];\n      group \"Multi_discrete\"\n        [\n          test \"contains\" test_md_contains;\n          test \"sample\" test_md_sample;\n          test \"shape\" test_md_shape;\n          test \"error empty\" test_md_error_empty;\n          test \"error element <= 0\" test_md_error_zero_element;\n        ];\n      group \"Tuple\"\n        [\n          test \"contains\" test_tuple_contains;\n          test \"sample\" test_tuple_sample;\n          test \"pack/unpack\" test_tuple_pack_unpack;\n          test \"empty tuple\" test_tuple_empty;\n          test \"unpack validates elements\" test_tuple_unpack_validates_elements;\n          test \"unpack valid\" test_tuple_unpack_valid;\n        ];\n      group \"Dict\"\n        [\n          test \"contains\" test_dict_contains;\n          test \"sample\" test_dict_sample;\n          
test \"error duplicate keys\" test_dict_error_duplicate;\n        ];\n      group \"Text\"\n        [\n          test \"contains\" test_text_contains;\n          test \"contains invalid charset\" test_text_contains_invalid;\n          test \"contains too long\" test_text_contains_too_long;\n          test \"sample\" test_text_sample;\n          test \"boundary values\" test_text_boundary_values;\n          test \"error max_length=0\" test_text_error_max_length;\n          test \"error empty charset\" test_text_error_charset;\n        ];\n      group \"Sequence\"\n        [\n          test \"contains bounded\" test_seq_contains;\n          test \"contains unbounded\" test_seq_contains_unbounded;\n          test \"sample\" test_seq_sample;\n          test \"sample fixed length\" test_seq_sample_fixed;\n          test \"pack/unpack\" test_seq_pack_unpack;\n          test \"error min < 0\" test_seq_error_min_negative;\n          test \"error max < min\" test_seq_error_max_lt_min;\n        ];\n      group \"spec\"\n        [\n          test \"discrete\" test_spec_discrete;\n          test \"box\" test_spec_box;\n          test \"equal same\" test_spec_equal_same;\n          test \"not equal different\" test_spec_not_equal_different;\n          test \"not equal kinds\" test_spec_not_equal_kinds;\n          test \"tuple\" test_spec_tuple;\n          test \"dict\" test_spec_dict;\n        ];\n    ]\n"
  },
  {
    "path": "packages/fehu/test/test_value.ml",
    "content": "open Fehu\nopen Windtrap\n\nlet value = testable ~pp:Value.pp ~equal:Value.equal ()\n\n(* Equality *)\n\nlet test_null_equal () = equal ~msg:\"null = null\" value Value.Null Value.Null\n\nlet test_bool_equal () =\n  equal ~msg:\"true = true\" value (Bool true) (Bool true);\n  not_equal ~msg:\"true <> false\" value (Bool true) (Bool false)\n\nlet test_int_equal () =\n  equal ~msg:\"1 = 1\" value (Int 1) (Int 1);\n  not_equal ~msg:\"1 <> 2\" value (Int 1) (Int 2)\n\nlet test_float_equal () = equal ~msg:\"1.0 = 1.0\" value (Float 1.0) (Float 1.0)\n\nlet test_string_equal () =\n  equal ~msg:\"a = a\" value (String \"a\") (String \"a\");\n  not_equal ~msg:\"a <> b\" value (String \"a\") (String \"b\")\n\nlet test_int_array_equal () =\n  equal ~msg:\"[|1;2|] = [|1;2|]\" value\n    (Int_array [| 1; 2 |])\n    (Int_array [| 1; 2 |])\n\nlet test_float_array_equal () =\n  equal ~msg:\"[|1.0|] = [|1.0|]\" value (Float_array [| 1.0 |])\n    (Float_array [| 1.0 |])\n\nlet test_bool_array_equal () =\n  equal ~msg:\"[|true|] = [|true|]\" value (Bool_array [| true |])\n    (Bool_array [| true |])\n\nlet test_list_equal () =\n  equal ~msg:\"[Int 1] = [Int 1]\" value (List [ Int 1 ]) (List [ Int 1 ])\n\nlet test_dict_equal () =\n  equal ~msg:\"dict equal\" value (Dict [ (\"k\", Int 1) ]) (Dict [ (\"k\", Int 1) ])\n\nlet test_cross_type_inequality () =\n  not_equal ~msg:\"Int 1 <> Float 1.0\" value (Int 1) (Float 1.0);\n  not_equal ~msg:\"Null <> Int 0\" value Null (Int 0)\n\n(* Formatting *)\n\nlet test_to_string_null () =\n  equal ~msg:\"null\" string \"null\" (Value.to_string Null)\n\nlet test_to_string_bool () =\n  equal ~msg:\"bool true\" string \"true\" (Value.to_string (Bool true))\n\nlet test_to_string_int () =\n  equal ~msg:\"int 42\" string \"42\" (Value.to_string (Int 42))\n\nlet test_to_string_float () =\n  let s = Value.to_string (Float 3.14) in\n  is_true ~msg:\"float non-empty\" (String.length s > 0)\n\nlet test_to_string_string () =\n  let s = 
Value.to_string (String \"hello\") in\n  is_true ~msg:\"string non-empty\" (String.length s > 0)\n\nlet test_to_string_arrays () =\n  is_true ~msg:\"int_array non-empty\"\n    (String.length (Value.to_string (Int_array [| 1 |])) > 0);\n  is_true ~msg:\"float_array non-empty\"\n    (String.length (Value.to_string (Float_array [| 1.0 |])) > 0);\n  is_true ~msg:\"bool_array non-empty\"\n    (String.length (Value.to_string (Bool_array [| true |])) > 0)\n\nlet test_to_string_list () =\n  let s = Value.to_string (List [ Int 1; Int 2 ]) in\n  is_true ~msg:\"list non-empty\" (String.length s > 0)\n\nlet test_to_string_dict () =\n  let s = Value.to_string (Dict [ (\"k\", Int 1) ]) in\n  is_true ~msg:\"dict non-empty\" (String.length s > 0)\n\nlet () =\n  run \"Fehu.Value\"\n    [\n      group \"equality\"\n        [\n          test \"null\" test_null_equal;\n          test \"bool\" test_bool_equal;\n          test \"int\" test_int_equal;\n          test \"float\" test_float_equal;\n          test \"string\" test_string_equal;\n          test \"int_array\" test_int_array_equal;\n          test \"float_array\" test_float_array_equal;\n          test \"bool_array\" test_bool_array_equal;\n          test \"list\" test_list_equal;\n          test \"dict\" test_dict_equal;\n          test \"cross-type inequality\" test_cross_type_inequality;\n        ];\n      group \"formatting\"\n        [\n          test \"to_string null\" test_to_string_null;\n          test \"to_string bool\" test_to_string_bool;\n          test \"to_string int\" test_to_string_int;\n          test \"to_string float\" test_to_string_float;\n          test \"to_string string\" test_to_string_string;\n          test \"to_string arrays\" test_to_string_arrays;\n          test \"to_string list\" test_to_string_list;\n          test \"to_string dict\" test_to_string_dict;\n        ];\n    ]\n"
  },
  {
    "path": "packages/fehu/test/test_vec_env.ml",
    "content": "open Fehu\nopen Windtrap\n\nlet make_test_env ?(max_steps = 100) () =\n  let obs_space = Space.Box.create ~low:[| 0.0 |] ~high:[| 10.0 |] in\n  let act_space = Space.Discrete.create 2 in\n  let state = ref 5.0 in\n  let steps = ref 0 in\n  let reset _env ?options:_ () =\n    state := 5.0;\n    steps := 0;\n    (Nx.create Nx.float32 [| 1 |] [| !state |], Info.empty)\n  in\n  let step _env action =\n    let a : Int32.t array = Nx.to_array (Nx.reshape [| 1 |] action) in\n    state := !state +. if Int32.to_int a.(0) = 0 then -1.0 else 1.0;\n    incr steps;\n    let terminated = !state <= 0.0 || !state >= 10.0 in\n    let truncated = (not terminated) && !steps >= max_steps in\n    Env.step_result\n      ~observation:(Nx.create Nx.float32 [| 1 |] [| !state |])\n      ~reward:1.0 ~terminated ~truncated ()\n  in\n  Env.create ~id:\"Test-v0\" ~observation_space:obs_space ~action_space:act_space\n    ~reset ~step ()\n\nlet make_envs n = List.init n (fun _ -> make_test_env ())\n\n(* Creation *)\n\nlet test_create_num_envs () =\n  let venv = Vec_env.create (make_envs 3) in\n  equal ~msg:\"num_envs\" int 3 (Vec_env.num_envs venv)\n\nlet test_spaces_match_first_env () =\n  let envs = make_envs 3 in\n  let venv = Vec_env.create envs in\n  let obs_shape = Space.shape (Vec_env.observation_space venv) in\n  let act_shape = Space.shape (Vec_env.action_space venv) in\n  let first_obs = Space.shape (Env.observation_space (List.hd envs)) in\n  let first_act = Space.shape (Env.action_space (List.hd envs)) in\n  equal ~msg:\"obs space shape\" (option (array int)) first_obs obs_shape;\n  equal ~msg:\"act space shape\" (option (array int)) first_act act_shape\n\nlet test_empty_list_raises () =\n  raises_invalid_arg \"Vec_env.create: env list must not be empty\" (fun () ->\n      ignore (Vec_env.create []))\n\nlet test_incompatible_spaces_raises () =\n  let obs1 = Space.Box.create ~low:[| 0.0 |] ~high:[| 10.0 |] in\n  let act = Space.Discrete.create 2 in\n  let obs2 = 
Space.Box.create ~low:[| 0.0 |] ~high:[| 5.0 |] in\n  let make_env obs =\n    let reset _env ?options:_ () =\n      (Nx.create Nx.float32 [| 1 |] [| 0.0 |], Info.empty)\n    in\n    let step _env _action =\n      Env.step_result ~observation:(Nx.create Nx.float32 [| 1 |] [| 0.0 |]) ()\n    in\n    Env.create ~observation_space:obs ~action_space:act ~reset ~step ()\n  in\n  let e1 = make_env obs1 in\n  let e2 = make_env obs2 in\n  raises_match ~msg:\"incompatible spaces raises\"\n    (fun exn -> match exn with Invalid_argument _ -> true | _ -> false)\n    (fun () -> ignore (Vec_env.create [ e1; e2 ]))\n\n(* Reset *)\n\nlet test_reset_obs_length () =\n  let venv = Vec_env.create (make_envs 3) in\n  let obs, _infos = Vec_env.reset venv () in\n  equal ~msg:\"obs array length\" int 3 (Array.length obs)\n\nlet test_reset_infos_length () =\n  let venv = Vec_env.create (make_envs 3) in\n  let _obs, infos = Vec_env.reset venv () in\n  equal ~msg:\"infos array length\" int 3 (Array.length infos)\n\n(* Step *)\n\nlet test_step_result_lengths () =\n  let venv = Vec_env.create (make_envs 3) in\n  let _obs, _infos = Vec_env.reset venv () in\n  let action = Nx.create Nx.int32 [| 1 |] [| 1l |] in\n  let actions = Array.make 3 action in\n  let s = Vec_env.step venv actions in\n  equal ~msg:\"observations length\" int 3 (Array.length s.observations);\n  equal ~msg:\"rewards length\" int 3 (Array.length s.rewards);\n  equal ~msg:\"terminated length\" int 3 (Array.length s.terminated);\n  equal ~msg:\"truncated length\" int 3 (Array.length s.truncated);\n  equal ~msg:\"infos length\" int 3 (Array.length s.infos)\n\nlet test_wrong_action_count_raises () =\n  let venv = Vec_env.create (make_envs 3) in\n  let _obs, _infos = Vec_env.reset venv () in\n  let action = Nx.create Nx.int32 [| 1 |] [| 1l |] in\n  let actions = Array.make 2 action in\n  raises_invalid_arg \"Vec_env.step: expected 3 actions, got 2\" (fun () ->\n      ignore (Vec_env.step venv actions))\n\nlet 
test_autoreset_final_observation () =\n  let env = make_test_env ~max_steps:3 () in\n  let venv = Vec_env.create [ env ] in\n  let _obs, _infos = Vec_env.reset venv () in\n  let right = Nx.create Nx.int32 [| 1 |] [| 1l |] in\n  let actions = [| right |] in\n  (* Step until truncated at max_steps=3 *)\n  let s1 = Vec_env.step venv actions in\n  is_false ~msg:\"not done after step 1\" s1.truncated.(0);\n  let s2 = Vec_env.step venv actions in\n  is_false ~msg:\"not done after step 2\" s2.truncated.(0);\n  let s3 = Vec_env.step venv actions in\n  is_true ~msg:\"truncated after step 3\" s3.truncated.(0);\n  (* After autoreset, info should have final_observation *)\n  is_some ~msg:\"final_observation key present\"\n    (Info.find \"final_observation\" s3.infos.(0));\n  (* Observation should be from reset (5.0), not terminal *)\n  let arr : float array =\n    Nx.to_array (Nx.reshape [| 1 |] s3.observations.(0))\n  in\n  equal ~msg:\"obs is from reset\" (float 1e-6) 5.0 arr.(0)\n\n(* Close *)\n\nlet test_close_all_envs () =\n  let envs = make_envs 3 in\n  let venv = Vec_env.create envs in\n  Vec_env.close venv;\n  List.iter (fun env -> is_true ~msg:\"env is closed\" (Env.closed env)) envs\n\nlet () =\n  Nx.Rng.run ~seed:42 @@ fun () ->\n  run \"Fehu.Vec_env\"\n    [\n      group \"creation\"\n        [\n          test \"num_envs\" test_create_num_envs;\n          test \"spaces match first env\" test_spaces_match_first_env;\n          test \"empty list raises\" test_empty_list_raises;\n          test \"incompatible spaces raises\" test_incompatible_spaces_raises;\n        ];\n      group \"reset\"\n        [\n          test \"observations length\" test_reset_obs_length;\n          test \"infos length\" test_reset_infos_length;\n        ];\n      group \"step\"\n        [\n          test \"result array lengths\" test_step_result_lengths;\n          test \"wrong action count raises\" test_wrong_action_count_raises;\n          test \"autoreset with final_observation\"\n       
     test_autoreset_final_observation;\n        ];\n      group \"close\" [ test \"closes all inner envs\" test_close_all_envs ];\n    ]\n"
  },
  {
    "path": "packages/hugin/README.md",
    "content": "# Hugin\n\nDeclarative plotting and visualization library for OCaml.\n\nHugin is part of the Raven ecosystem, providing a functional API to create\npublication-quality charts and figures from Nx arrays. You build immutable\nplot specifications with mark constructors, compose them with `|>` pipelines,\nand render to PNG, SVG, PDF, or an interactive SDL window.\n\n## Features\n\n- Line, scatter, bar, histogram, error bar, fill-between, hline/vline, hspan/vspan\n- Heatmap, colormapped image display (`imshow`), contour plots\n- Multi-panel layouts with `Layout.grid`, `Layout.hstack`, `Layout.vstack`\n- Perceptually uniform OKLCH colors with colorblind-friendly Okabe-Ito palette\n- Predefined colormaps: viridis, plasma, inferno, magma, cividis, coolwarm\n- Themes with context scaling (paper, notebook, talk, poster)\n- Axis scales: linear, log, sqrt, asinh, symlog\n- Cairo rendering (PNG, PDF), pure-OCaml SVG backend, interactive SDL display\n- Format printer for Quill notebooks (`#install_printer`)\n\n## Quick Start\n\n<!-- $MDX skip -->\n```ocaml\nopen Hugin\n\nlet () =\n  let x = Nx.linspace Nx.float32 0. 6.28 100 in\n  let y = Nx.sin x in\n  line ~x ~y () |> title \"Sine wave\" |> render_png \"sine.png\"\n```\n\n## Contributing\n\nSee the [Raven monorepo README](../../README.md) for guidelines.\n\n## License\n\nISC License. See [LICENSE](../../LICENSE) for details.\n"
  },
  {
    "path": "packages/hugin/doc/01-getting-started.md",
    "content": "# Getting Started\n\nThis guide covers installation, your first plot, and the key concepts behind Hugin.\n\n## Installation\n\nInstall system dependencies:\n\n<!-- $MDX skip -->\n```bash\n# macOS\nbrew install cairo sdl2\n\n# Ubuntu/Debian\napt install libcairo2-dev libsdl2-dev\n```\n\nThen install hugin:\n\n<!-- $MDX skip -->\n```bash\nopam install hugin\n```\n\nOr build from source:\n\n<!-- $MDX skip -->\n```bash\ngit clone https://github.com/raven-ml/raven\ncd raven && dune build packages/hugin\n```\n\nAdd to your `dune` file:\n\n<!-- $MDX skip -->\n```dune\n(executable\n (name main)\n (libraries hugin))\n```\n\n## Your First Plot\n\n<!-- $MDX skip -->\n```ocaml\nopen Hugin\n\nlet () =\n  let x = Nx.linspace Nx.float32 0. (2. *. Float.pi) 100 in\n  let y = Nx.sin x in\n  line ~x ~y () |> title \"Sine wave\" |> render_png \"sine.png\"\n```\n\nThis creates a 1-D array of 100 points, computes the sine, builds a line specification, adds a title, and writes a PNG file.\n\n## Key Concepts\n\n### Marks\n\nA mark constructor (`line`, `point`, `bar`, `hist`, `heatmap`, etc.) takes data arrays and optional visual properties and returns an immutable plot specification of type `t`. A mark is already a complete spec — you can render it directly:\n\n<!-- $MDX skip -->\n```ocaml\nline ~x ~y () |> render_png \"plot.png\"\n```\n\n### Decorations\n\nDecoration functions add metadata to a spec. They are designed for the `|>` pipeline:\n\n<!-- $MDX skip -->\n```ocaml\nline ~x ~y ()\n|> title \"My Plot\"\n|> xlabel \"Time (s)\"\n|> ylabel \"Amplitude\"\n|> xlim 0. 
10.\n|> grid_lines true\n```\n\nDecorations include `title`, `xlabel`, `ylabel`, `xlim`, `ylim`, `xscale`, `yscale`, `grid_lines`, `legend`, `xticks`, `yticks`, `xinvert`, `yinvert`, `with_theme`, and tick formatting.\n\n### Composition\n\n`layers` overlays multiple marks on shared axes:\n\n<!-- $MDX skip -->\n```ocaml\nlayers [\n  line ~x ~y:(Nx.sin x) ~label:\"sin\" ();\n  line ~x ~y:(Nx.cos x) ~label:\"cos\" ~line_style:`Dashed ();\n]\n|> legend |> render_png \"overlay.png\"\n```\n\nYou can mix mark types freely. A `line` with `point` markers, a `bar` chart with `hline` reference lines — anything goes.\n\n### Layout\n\n`Layout.grid` arranges specs in rows and columns:\n\n<!-- $MDX skip -->\n```ocaml\nlet p1 = line ~x ~y:(Nx.sin x) () |> title \"sin\" in\nlet p2 = line ~x ~y:(Nx.cos x) () |> title \"cos\" in\nLayout.grid [ [ p1; p2 ] ] |> render_png \"grid.png\"\n```\n\n`Layout.hstack` and `Layout.vstack` are shorthands for single-row and single-column grids.\n\n### Rendering\n\nFour output modes:\n\n| Function | Output |\n|----------|--------|\n| `render_png \"file.png\" t` | PNG image file |\n| `render_svg \"file.svg\" t` | SVG document file |\n| `render_pdf \"file.pdf\" t` | PDF document file |\n| `show t` | Interactive SDL window (resize, Esc to close) |\n\nAll renderers accept optional `~width` and `~height` (default 1600×1200) and `~theme`.\n\n`render_svg_to_string` and `render_to_buffer` return the output as a string instead of writing a file.\n\n## Common Marks\n\n### Line\n\n<!-- $MDX skip -->\n```ocaml\nline ~x ~y ()\nline ~x ~y ~color:Color.blue ~line_style:`Dashed ~line_width:2.0 ()\nline ~x ~y ~step:`Post ()  (* staircase plot *)\n```\n\n### Scatter\n\n<!-- $MDX skip -->\n```ocaml\npoint ~x ~y ()\npoint ~x ~y ~color_by:values ~size:8. 
~marker:Star ()\npoint ~x ~y ~size_by:weights ()  (* variable marker size *)\n```\n\n### Bar Chart\n\n<!-- $MDX skip -->\n```ocaml\nbar ~x:categories ~height:values ()\nbar ~x:categories ~height:values ~width:0.5 ~color:Color.orange ()\n```\n\n### Histogram\n\n<!-- $MDX skip -->\n```ocaml\nhist ~x:data ()\nhist ~x:data ~bins:(`Num 30) ~density:true ~color:Color.green ()\n```\n\n### Heatmap\n\n<!-- $MDX skip -->\n```ocaml\n(* data has shape [|rows; cols|] *)\nheatmap ~data ()\nheatmap ~data ~annotate:true ~cmap:Cmap.viridis ()\n```\n\n### Fill Between\n\n<!-- $MDX skip -->\n```ocaml\nfill_between ~x ~y1:(Nx.sub y err) ~y2:(Nx.add y err) ~alpha:0.3 ()\n```\n\n### Error Bars\n\n<!-- $MDX skip -->\n```ocaml\nerrorbar ~x ~y ~yerr:(`Symmetric err) ()\nerrorbar ~x ~y ~yerr:(`Asymmetric (lo, hi)) ~xerr:(`Symmetric xerr) ()\n```\n\n## Next Steps\n\n- [Marks and Styling](/docs/hugin/marks-and-styling/) — full mark catalog and visual properties\n- [Layout and Decorations](/docs/hugin/layout-and-decorations/) — axes, scales, themes, multi-panel\n- [Colors and Colormaps](/docs/hugin/colors-and-colormaps/) — OKLCH colors and colormap reference\n"
  },
  {
    "path": "packages/hugin/doc/02-marks-and-styling.md",
    "content": "# Marks and Styling\n\nEvery visualization in Hugin starts with one or more marks. A mark constructor takes data arrays and optional visual properties and returns an immutable plot specification.\n\n## Mark Catalog\n\n### Line Plots\n\n`line ~x ~y ()` connects points `(x.(i), y.(i))` with straight segments.\n\n| Argument | Type | Default | Description |\n|----------|------|---------|-------------|\n| `~x` | `Nx.float32_t` | required | X coordinates |\n| `~y` | `Nx.float32_t` | required | Y coordinates |\n| `~color` | `Color.t` | theme palette | Line color |\n| `~line_width` | `float` | theme line width | Stroke width |\n| `~line_style` | `` `Solid \\| `Dashed \\| `Dotted \\| `Dash_dot `` | `` `Solid `` | Dash pattern |\n| `~step` | `` `Pre \\| `Post \\| `Mid `` | none | Staircase interpolation |\n| `~marker` | `marker` | none | Marker at each point |\n| `~label` | `string` | none | Legend entry |\n| `~alpha` | `float` | 1.0 | Opacity |\n\nStep modes: `Post` holds each value until the next x-point, `Pre` steps at the current x-point, `Mid` steps at the midpoint.\n\n### Scatter Plots\n\n`point ~x ~y ()` places individual markers at data coordinates.\n\n| Argument | Type | Default | Description |\n|----------|------|---------|-------------|\n| `~x` | `Nx.float32_t` | required | X coordinates |\n| `~y` | `Nx.float32_t` | required | Y coordinates |\n| `~color` | `Color.t` | theme palette | Uniform color |\n| `~color_by` | `Nx.float32_t` | none | Per-point values mapped through sequential colormap |\n| `~size` | `float` | theme marker size | Uniform marker size |\n| `~size_by` | `Nx.float32_t` | none | Per-point values for variable marker area |\n| `~marker` | `marker` | `Circle` | Marker shape |\n| `~label` | `string` | none | Legend entry |\n| `~alpha` | `float` | 1.0 | Opacity |\n\nWhen `~color_by` is set, a colorbar is displayed showing the value-to-color mapping.\n\n### Bar Charts\n\n`bar ~x ~height ()` draws vertical bars centered on `x` 
values.\n\n| Argument | Type | Default | Description |\n|----------|------|---------|-------------|\n| `~x` | `Nx.float32_t` | required | Bar center positions |\n| `~height` | `Nx.float32_t` | required | Bar heights |\n| `~width` | `float` | 0.8 | Bar width |\n| `~bottom` | `float` | 0.0 | Baseline y-value |\n| `~color` | `Color.t` | theme palette | Fill color |\n| `~label` | `string` | none | Legend entry |\n| `~alpha` | `float` | 1.0 | Opacity |\n\n### Histograms\n\n`hist ~x ()` bins the values in `x` and draws a bar chart.\n\n| Argument | Type | Default | Description |\n|----------|------|---------|-------------|\n| `~x` | `Nx.float32_t` | required | Data values |\n| `~bins` | `` `Num of int \\| `Edges of float array `` | `` `Num 10 `` | Number of bins or explicit edges |\n| `~density` | `bool` | false | Normalize so total area equals 1.0 |\n| `~color` | `Color.t` | theme palette | Fill color |\n| `~label` | `string` | none | Legend entry |\n\n### Reference Lines and Spans\n\n`hline ~y ()` draws a horizontal line across the full plot width. `vline ~x ()` draws a vertical line across the full height. Both accept `~color`, `~line_width`, `~line_style`, `~label`, and `~alpha`.\n\n`hspan ~y0 ~y1 ()` shades a horizontal band. `vspan ~x0 ~x1 ()` shades a vertical band. Both accept `~color`, `~alpha` (default 0.2), and `~label`.\n\n### Fill Between\n\n`fill_between ~x ~y1 ~y2 ()` fills the area between two curves. `~alpha` defaults to 0.3.\n\n### Error Bars\n\n`errorbar ~x ~y ~yerr ()` draws error bars at each point.\n\n- `~yerr`: `` `Symmetric e `` draws y ± e, `` `Asymmetric (lo, hi) `` draws [y - lo, y + hi]\n- `~xerr`: optional horizontal error bars with the same format\n- `~cap_size`: cap width (defaults to half the theme marker size)\n\n### Text\n\n`text ~x ~y \"label\" ()` places a string at data coordinates `(x, y)`. Accepts `~color` and `~font_size`.\n\n### Image\n\n`image data` displays an Nx uint8 array as an image. 
`data` must have shape `[|h; w; 3|]` (RGB) or `[|h; w; 4|]` (RGBA).\n\n### Colormapped Image\n\n`imshow ~data ()` displays a 2-D float array through a colormap.\n\n| Argument | Type | Default | Description |\n|----------|------|---------|-------------|\n| `~data` | `Nx.float32_t` | required | 2-D array of shape `[|rows; cols|]` |\n| `~stretch` | `` `Linear \\| `Log \\| `Sqrt \\| `Asinh \\| `Power of float `` | `` `Linear `` | Transfer function before colormap lookup |\n| `~cmap` | `Cmap.t` | theme sequential | Colormap |\n| `~vmin` | `float` | data min | Lower bound of color range |\n| `~vmax` | `float` | data max | Upper bound of color range |\n\n### Heatmap\n\n`heatmap ~data ()` displays a 2-D array as a grid of colored cells. Row 0 appears at the top.\n\n| Argument | Type | Default | Description |\n|----------|------|---------|-------------|\n| `~data` | `Nx.float32_t` | required | 2-D array of shape `[|rows; cols|]` |\n| `~annotate` | `bool` | false | Show cell values |\n| `~cmap` | `Cmap.t` | theme sequential | Colormap |\n| `~vmin` | `float` | data min | Lower bound |\n| `~vmax` | `float` | data max | Upper bound |\n| `~fmt` | `float -> string` | `Printf.sprintf \"%.2g\"` | Cell value formatter (when annotate is true) |\n\n### Contour\n\n`contour ~data ~x0 ~x1 ~y0 ~y1 ()` draws iso-level contour lines through a 2-D grid.\n\n| Argument | Type | Default | Description |\n|----------|------|---------|-------------|\n| `~data` | `Nx.float32_t` | required | 2-D array of shape `[|rows; cols|]` |\n| `~x0`, `~x1`, `~y0`, `~y1` | `float` | required | Data-space rectangle |\n| `~levels` | `` `Num of int \\| `Values of float array `` | `` `Num 8 `` | Number of levels or explicit values |\n| `~filled` | `bool` | false | Fill regions between levels |\n| `~cmap` | `Cmap.t` | theme sequential | Per-level colormap |\n| `~color` | `Color.t` | none | Single stroke color (unfilled contours) |\n| `~line_width` | `float` | theme line width | Stroke width |\n| `~label` | `string` | 
none | Legend entry |\n| `~alpha` | `float` | 1.0 | Opacity |\n\n## Marker Shapes\n\nFive built-in shapes:\n\n| Marker | Description |\n|--------|-------------|\n| `Circle` | Filled circle |\n| `Square` | Filled square |\n| `Triangle` | Filled triangle |\n| `Plus` | Plus sign (+) |\n| `Star` | Five-pointed star |\n\nUse with `line ~marker:Triangle` or `point ~marker:Star`.\n\n## Auto-Coloring\n\nWhen you omit `~color`, marks are colored automatically from the theme's categorical palette. The first mark in a spec gets `palette.(0)`, the second gets `palette.(1)`, and so on. Explicitly setting `~color` takes precedence.\n\n## Next Steps\n\n- [Layout and Decorations](/docs/hugin/layout-and-decorations/) — axes, scales, themes, multi-panel layouts\n- [Colors and Colormaps](/docs/hugin/colors-and-colormaps/) — OKLCH color space, palettes, and colormap reference\n"
  },
  {
    "path": "packages/hugin/doc/03-layout-and-decorations.md",
    "content": "# Layout and Decorations\n\nDecorations add metadata and styling to a plot specification. Layout functions arrange multiple specs into multi-panel figures.\n\n## Decorations\n\nAll decoration functions take a `t` and return a new `t`, designed for the `|>` pipeline:\n\n<!-- $MDX skip -->\n```ocaml\nline ~x ~y ()\n|> title \"Frequency Response\"\n|> xlabel \"Frequency (Hz)\"\n|> ylabel \"Magnitude (dB)\"\n|> xscale `Log\n|> ylim (-60.) 0.\n|> grid_lines true\n```\n\n### Titles and Labels\n\n| Function | Description |\n|----------|-------------|\n| `title s t` | Plot title |\n| `xlabel s t` | X-axis label |\n| `ylabel s t` | Y-axis label |\n\n### Axis Limits\n\n| Function | Description |\n|----------|-------------|\n| `xlim lo hi t` | Fix x-axis range to [lo, hi] |\n| `ylim lo hi t` | Fix y-axis range to [lo, hi] |\n\nWhen omitted, axis ranges are computed automatically from the data with 5% padding.\n\n### Axis Scales\n\n| Function | Description |\n|----------|-------------|\n| `xscale s t` | Set x-axis scale |\n| `yscale s t` | Set y-axis scale |\n\nAvailable scales:\n\n| Scale | When to use |\n|-------|-------------|\n| `` `Linear `` | Default. Uniform spacing. |\n| `` `Log `` | Data spanning multiple orders of magnitude. All values must be positive. |\n| `` `Sqrt `` | Moderate compression of large values. Handles zero. |\n| `` `Asinh `` | Like log but handles zero and negative values. Transitions smoothly from linear near zero to logarithmic at large magnitudes. |\n| `` `Symlog linthresh `` | Linear within [-linthresh, linthresh], logarithmic outside. Good for data with both small and large values centered around zero. 
|\n\n### Axis Direction\n\n| Function | Description |\n|----------|-------------|\n| `xinvert t` | X-axis values increase right-to-left |\n| `yinvert t` | Y-axis values increase top-to-bottom |\n\nUseful for conventions like right ascension in sky charts (xinvert) or magnitude axes in HR diagrams (yinvert).\n\n### Ticks\n\n| Function | Description |\n|----------|-------------|\n| `xticks ticks t` | Explicit tick positions and labels as `(float * string) list` |\n| `yticks ticks t` | Same for y-axis |\n| `xtick_format fmt t` | Custom tick label formatter (preserves auto-generated positions) |\n| `ytick_format fmt t` | Same for y-axis |\n\nExample with explicit ticks:\n\n<!-- $MDX skip -->\n```ocaml\nline ~x ~y ()\n|> xticks [ (0., \"Jan\"); (1., \"Feb\"); (2., \"Mar\"); (3., \"Apr\") ]\n```\n\nExample with custom formatting:\n\n<!-- $MDX skip -->\n```ocaml\nline ~x ~y ()\n|> xtick_format (Printf.sprintf \"%.1f%%\")\n```\n\n### Grid and Legend\n\n| Function | Description |\n|----------|-------------|\n| `grid_lines visible t` | Show or hide grid lines |\n| `legend ?loc t` | Show legend at `loc` (default `Upper_right`) |\n\nLegend locations: `Upper_right`, `Upper_left`, `Lower_right`, `Lower_left`, `Center`.\n\nThe legend is populated from marks that have a `~label`. Marks without labels are excluded.\n\n### Theme Override\n\n`with_theme theme t` renders with `theme` instead of the default.\n\n## Layout\n\n### Grid\n\n`Layout.grid rows` arranges specs in a grid where each inner list is a row:\n\n<!-- $MDX skip -->\n```ocaml\nlet p1 = line ~x ~y:(Nx.sin x) () |> title \"sin\" in\nlet p2 = line ~x ~y:(Nx.cos x) () |> title \"cos\" in\nlet p3 = line ~x ~y:(Nx.tan x) () |> title \"tan\" |> ylim (-5.) 5. 
in\nlet p4 = hist ~x:(Nx.rand Nx.float32 [|500|]) () |> title \"random\" in\nLayout.grid [ [ p1; p2 ]; [ p3; p4 ] ] |> render_png \"grid.png\"\n```\n\n`~gap` controls spacing between panels as a fraction of total size (default 0.05).\n\n### Stack\n\n| Function | Description |\n|----------|-------------|\n| `Layout.hstack specs` | Single row of panels |\n| `Layout.vstack specs` | Single column of panels |\n\nBoth accept `~gap`.\n\n## Themes\n\nA theme controls every non-data visual element: background, typography, axes, grid, spacing, and data palettes.\n\n### Predefined Themes\n\n| Theme | Description |\n|-------|-------------|\n| `Theme.default` | Light background, subtle grid, Okabe-Ito palette |\n| `Theme.dark` | Dark background |\n| `Theme.minimal` | No grid, thin axes |\n\n<!-- $MDX skip -->\n```ocaml\nline ~x ~y () |> with_theme Theme.dark |> render_png \"dark.png\"\n```\n\n### Context Scaling\n\nContext functions scale all visual elements (fonts, line widths, spacing) for different output media:\n\n| Function | Scale factor | Use case |\n|----------|-------------|----------|\n| `Theme.paper` | 1.0 | Journal figures |\n| `Theme.notebook` | 1.3 | Quill notebooks |\n| `Theme.talk` | 1.6 | Slides and presentations |\n| `Theme.poster` | 2.0 | Conference posters |\n\n<!-- $MDX skip -->\n```ocaml\nlet theme = Theme.dark |> Theme.talk in\nline ~x ~y () |> with_theme theme |> render_png \"slide.png\"\n```\n\n### Theme Fields\n\nThe `Theme.t` record is fully public. 
You can create custom themes by modifying fields:\n\n| Field | Type | Description |\n|-------|------|-------------|\n| `background` | `Color.t` | Background color |\n| `palette` | `Color.t array` | Categorical color palette |\n| `sequential` | `Cmap.t` | Default sequential colormap |\n| `diverging` | `Cmap.t` | Default diverging colormap |\n| `font_title` | `Theme.font` | Title font |\n| `font_label` | `Theme.font` | Axis label font |\n| `font_tick` | `Theme.font` | Tick label font |\n| `axis` | `Theme.line` | Axis line style |\n| `grid` | `Theme.line option` | Grid line style (None to hide) |\n| `tick_length` | `float` | Tick mark length |\n| `padding` | `float` | Plot area padding |\n| `title_gap` | `float` | Gap below title |\n| `label_gap` | `float` | Gap between label and axis |\n| `scale_factor` | `float` | Global size multiplier |\n| `line_width` | `float` | Default line width |\n| `marker_size` | `float` | Default marker size |\n\n## Next Steps\n\n- [Colors and Colormaps](/docs/hugin/colors-and-colormaps/) — OKLCH color space, operations, and colormap reference\n- [Matplotlib Comparison](/docs/hugin/matplotlib-comparison/) — side-by-side with Python\n"
  },
  {
    "path": "packages/hugin/doc/04-colors-and-colormaps.md",
    "content": "# Colors and Colormaps\n\nHugin uses the OKLCH color space for perceptually uniform color operations and ships with colorblind-friendly palettes and scientific colormaps.\n\n## Colors\n\n### OKLCH Color Space\n\nColors are represented internally in [OKLCH](https://bottosson.github.io/posts/oklab/), a perceptually uniform color space. Operations like `lighten`, `darken`, and `mix` produce visually consistent results: equal numerical steps yield equal perceived differences.\n\nOKLCH components:\n\n| Component | Range | Description |\n|-----------|-------|-------------|\n| Lightness (L) | [0, 1] | Black to white |\n| Chroma (C) | [0, ~0.4] | Gray to saturated |\n| Hue (H) | [0, 360) | Color wheel angle |\n| Alpha (A) | [0, 1] | Transparency |\n\n### Constructors\n\n<!-- $MDX skip -->\n```ocaml\n(* From OKLCH components *)\nColor.oklch ~l:0.7 ~c:0.15 ~h:145. ()\nColor.oklcha ~l:0.7 ~c:0.15 ~h:145. ~a:0.5 ()\n\n(* From sRGB [0, 1] *)\nColor.rgb ~r:0.2 ~g:0.6 ~b:0.8 ()\nColor.rgba ~r:0.2 ~g:0.6 ~b:0.8 ~a:0.5 ()\n\n(* From hex string *)\nColor.hex \"#3399CC\"\nColor.hex \"#3399CCAA\"  (* with alpha *)\n```\n\nAll constructors convert to OKLCH on creation. The reverse conversion (`to_rgba`) is called at render time.\n\n### Accessors\n\n<!-- $MDX skip -->\n```ocaml\nColor.lightness c    (* OKLCH lightness *)\nColor.chroma c       (* OKLCH chroma *)\nColor.hue c          (* OKLCH hue in degrees *)\nColor.alpha c        (* alpha channel *)\nColor.to_rgba c      (* sRGB (r, g, b, a) tuple, clamped to gamut *)\n```\n\n### Operations\n\n<!-- $MDX skip -->\n```ocaml\nColor.lighten 0.1 c      (* increase lightness by 0.1, clamped to [0, 1] *)\nColor.darken 0.1 c       (* decrease lightness by 0.1, clamped to [0, 1] *)\nColor.with_alpha 0.5 c   (* set alpha *)\nColor.mix 0.5 a b        (* blend a and b: 0.0 = a, 1.0 = b *)\n```\n\n`mix` interpolates all OKLCH components. 
Hue follows the shortest arc on the color wheel.\n\n### Named Colors\n\nThe default named colors follow the [Okabe-Ito palette](https://jfly.uni-koeln.de/color/), designed to be distinguishable under all forms of color-vision deficiency:\n\n| Color | Value |\n|-------|-------|\n| `Color.orange` | Okabe-Ito orange |\n| `Color.sky_blue` | Okabe-Ito sky blue |\n| `Color.green` | Okabe-Ito bluish green |\n| `Color.yellow` | Okabe-Ito yellow |\n| `Color.blue` | Okabe-Ito blue |\n| `Color.vermillion` | Okabe-Ito vermillion |\n| `Color.purple` | Okabe-Ito reddish purple |\n| `Color.black` | Black |\n| `Color.white` | White |\n| `Color.gray` | Neutral gray |\n\n### Formatting\n\n`Color.pp` formats as `oklch(L C H / A)` for debugging.\n\n## Colormaps\n\nA colormap is a continuous mapping from [0, 1] to `Color.t`. Internally stored as a 256-entry lookup table with OKLCH interpolation.\n\n### Evaluation\n\n<!-- $MDX skip -->\n```ocaml\nlet c = Cmap.eval Cmap.viridis 0.5  (* color at midpoint *)\n```\n\nValues are clamped to [0, 1].\n\n### Predefined Colormaps\n\nPerceptually uniform sequential colormaps from the [viridis family](https://bids.github.io/colormap/):\n\n| Colormap | Description |\n|----------|-------------|\n| `Cmap.viridis` | Purple-teal-yellow (default) |\n| `Cmap.plasma` | Purple-orange-yellow |\n| `Cmap.inferno` | Black-purple-orange-yellow |\n| `Cmap.magma` | Black-purple-pink-yellow |\n| `Cmap.cividis` | Optimized for color-vision deficiency |\n\nOther colormaps:\n\n| Colormap | Description |\n|----------|-------------|\n| `Cmap.coolwarm` | Blue-white-red diverging |\n| `Cmap.gray` | Black to white |\n| `Cmap.gray_r` | White to black (standard for astronomy) |\n| `Cmap.hot` | Black-red-yellow-white |\n\n### Custom Colormaps\n\n`Cmap.of_colors` creates a colormap by interpolating linearly through an array of color stops in OKLCH space:\n\n<!-- $MDX skip -->\n```ocaml\nlet my_cmap = Cmap.of_colors [|\n  Color.hex \"#000080\";\n  Color.hex \"#FFFFFF\";\n  
Color.hex \"#800000\";\n|]\n```\n\nStops are evenly spaced from 0 to 1. Requires at least 2 colors.\n\n## Using Colors with Marks\n\n### Uniform Color\n\nSet `~color` on any mark:\n\n<!-- $MDX skip -->\n```ocaml\nline ~x ~y ~color:Color.vermillion ()\nbar ~x ~height ~color:(Color.hex \"#336699\") ()\n```\n\n### Data-Driven Color\n\n`point` supports `~color_by` to map per-point values through the theme's sequential colormap:\n\n<!-- $MDX skip -->\n```ocaml\npoint ~x ~y ~color_by:temperature ~marker:Circle ()\n```\n\nA colorbar is displayed automatically.\n\n### Colormaps on 2-D Data\n\n`heatmap`, `imshow`, and `contour` accept `~cmap` to override the default:\n\n<!-- $MDX skip -->\n```ocaml\nheatmap ~data ~cmap:Cmap.coolwarm ()\nimshow ~data ~cmap:Cmap.inferno ~stretch:`Log ()\ncontour ~data ~x0 ~x1 ~y0 ~y1 ~filled:true ~cmap:Cmap.plasma ()\n```\n\n## Next Steps\n\n- [Matplotlib Comparison](/docs/hugin/matplotlib-comparison/) — side-by-side with Python\n- [Marks and Styling](/docs/hugin/marks-and-styling/) — full mark catalog\n"
  },
  {
    "path": "packages/hugin/doc/05-matplotlib-comparison.md",
    "content": "# Hugin vs Matplotlib\n\nSide-by-side examples comparing Hugin (OCaml) with Matplotlib (Python). Hugin uses a declarative, pipeline-oriented API while Matplotlib uses an imperative, object-oriented approach.\n\n## Key Differences\n\n| | Hugin | Matplotlib |\n|---|---|---|\n| Style | Declarative, immutable specs | Imperative, mutable state |\n| Composition | `\\|>` pipeline | Method calls on axes |\n| State | No global state | `plt` global state |\n| Colors | OKLCH color space | sRGB strings |\n| Output | `render_png`, `render_svg`, `show` | `plt.savefig`, `plt.show` |\n\n## Line Plot\n\n**Hugin:**\n\n<!-- $MDX skip -->\n```ocaml\nopen Hugin\n\nlet () =\n  let x = Nx.linspace Nx.float32 0. (2. *. Float.pi) 100 in\n  layers [\n    line ~x ~y:(Nx.sin x) ~label:\"sin(x)\" ~color:Color.blue ();\n    line ~x ~y:(Nx.cos x) ~label:\"cos(x)\" ~color:Color.vermillion\n      ~line_style:`Dashed ();\n  ]\n  |> title \"Trigonometric Functions\"\n  |> xlabel \"Angle (radians)\"\n  |> ylabel \"Value\"\n  |> ylim (-1.2) 1.2\n  |> grid_lines true\n  |> legend\n  |> render_png \"trig.png\"\n```\n\n**Matplotlib:**\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nx = np.linspace(0, 2 * np.pi, 100)\n\nplt.figure()\nplt.plot(x, np.sin(x), label=\"sin(x)\", color=\"blue\")\nplt.plot(x, np.cos(x), label=\"cos(x)\", color=\"red\", linestyle=\"--\")\nplt.title(\"Trigonometric Functions\")\nplt.xlabel(\"Angle (radians)\")\nplt.ylabel(\"Value\")\nplt.ylim(-1.2, 1.2)\nplt.grid(True)\nplt.legend()\nplt.savefig(\"trig.png\")\n```\n\n## Scatter Plot\n\n**Hugin:**\n\n<!-- $MDX skip -->\n```ocaml\nopen Hugin\n\nlet () =\n  let x = Nx.rand Nx.float32 [| 200 |] in\n  let y = Nx.rand Nx.float32 [| 200 |] in\n  let c = Nx.add x y in\n  point ~x ~y ~color_by:c ~size:8. 
~marker:Circle ()\n  |> title \"Random Scatter\"\n  |> render_png \"scatter.png\"\n```\n\n**Matplotlib:**\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nx = np.random.rand(200)\ny = np.random.rand(200)\nc = x + y\n\nplt.figure()\nplt.scatter(x, y, c=c, s=64, marker=\"o\")\nplt.title(\"Random Scatter\")\nplt.colorbar()\nplt.savefig(\"scatter.png\")\n```\n\n## Bar Chart\n\n**Hugin:**\n\n<!-- $MDX skip -->\n```ocaml\nopen Hugin\n\nlet () =\n  let x = Nx.create Nx.float32 [| 4 |] [| 1.; 2.; 3.; 4. |] in\n  let h = Nx.create Nx.float32 [| 4 |] [| 3.; 7.; 2.; 5. |] in\n  bar ~x ~height:h ~color:Color.orange ()\n  |> title \"Quarterly Revenue\"\n  |> xticks [ (1., \"Q1\"); (2., \"Q2\"); (3., \"Q3\"); (4., \"Q4\") ]\n  |> ylabel \"Revenue ($M)\"\n  |> render_png \"bar.png\"\n```\n\n**Matplotlib:**\n\n```python\nimport matplotlib.pyplot as plt\n\nx = [1, 2, 3, 4]\nh = [3, 7, 2, 5]\n\nplt.figure()\nplt.bar(x, h, color=\"orange\")\nplt.title(\"Quarterly Revenue\")\nplt.xticks(x, [\"Q1\", \"Q2\", \"Q3\", \"Q4\"])\nplt.ylabel(\"Revenue ($M)\")\nplt.savefig(\"bar.png\")\n```\n\n## Histogram\n\n**Hugin:**\n\n<!-- $MDX skip -->\n```ocaml\nopen Hugin\n\nlet () =\n  let data = Nx.randn Nx.float32 [| 1000 |] in\n  hist ~x:data ~bins:(`Num 30) ~density:true ~color:Color.sky_blue ()\n  |> title \"Normal Distribution\"\n  |> xlabel \"Value\"\n  |> ylabel \"Density\"\n  |> render_png \"hist.png\"\n```\n\n**Matplotlib:**\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n\ndata = np.random.randn(1000)\n\nplt.figure()\nplt.hist(data, bins=30, density=True, color=\"skyblue\")\nplt.title(\"Normal Distribution\")\nplt.xlabel(\"Value\")\nplt.ylabel(\"Density\")\nplt.savefig(\"hist.png\")\n```\n\n## Multi-Panel Layout\n\n**Hugin:**\n\n<!-- $MDX skip -->\n```ocaml\nopen Hugin\n\nlet () =\n  let x = Nx.linspace Nx.float32 0. (2. *. 
Float.pi) 100 in\n  let p1 = line ~x ~y:(Nx.sin x) () |> title \"sin\" in\n  let p2 = line ~x ~y:(Nx.cos x) () |> title \"cos\" in\n  let p3 = line ~x ~y:(Nx.tan x) () |> title \"tan\" |> ylim (-5.) 5. in\n  let p4 = hist ~x:(Nx.rand Nx.float32 [| 500 |]) () |> title \"random\" in\n  Layout.grid [ [ p1; p2 ]; [ p3; p4 ] ]\n  |> render_png \"grid.png\"\n```\n\n**Matplotlib:**\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nx = np.linspace(0, 2 * np.pi, 100)\n\nfig, axes = plt.subplots(2, 2)\naxes[0, 0].plot(x, np.sin(x)); axes[0, 0].set_title(\"sin\")\naxes[0, 1].plot(x, np.cos(x)); axes[0, 1].set_title(\"cos\")\naxes[1, 0].plot(x, np.tan(x)); axes[1, 0].set_title(\"tan\")\naxes[1, 0].set_ylim(-5, 5)\naxes[1, 1].hist(np.random.rand(500)); axes[1, 1].set_title(\"random\")\nplt.tight_layout()\nplt.savefig(\"grid.png\")\n```\n\n## Heatmap\n\n**Hugin:**\n\n<!-- $MDX skip -->\n```ocaml\nopen Hugin\n\nlet () =\n  let data = Nx.init Nx.float32 [| 8; 10 |] (fun idx ->\n    let i = Float.of_int idx.(0) and j = Float.of_int idx.(1) in\n    Float.sin (i *. 0.5) *. Float.cos (j *. 0.4))\n  in\n  heatmap ~data ~annotate:true ~cmap:Cmap.viridis ()\n  |> title \"Heatmap\"\n  |> render_png \"heatmap.png\"\n```\n\n**Matplotlib:**\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n\ndata = np.fromfunction(\n    lambda i, j: np.sin(i * 0.5) * np.cos(j * 0.4), (8, 10)\n)\n\nfig, ax = plt.subplots()\nim = ax.imshow(data, cmap=\"viridis\")\nfor i in range(8):\n    for j in range(10):\n        ax.text(j, i, f\"{data[i, j]:.2g}\", ha=\"center\", va=\"center\")\nax.set_title(\"Heatmap\")\nplt.colorbar(im)\nplt.savefig(\"heatmap.png\")\n```\n\n## Styling\n\n**Hugin:**\n\n<!-- $MDX skip -->\n```ocaml\nopen Hugin\n\nlet () =\n  let x = Nx.linspace Nx.float32 0. (2. *. 
Float.pi) 50 in\n  line ~x ~y:(Nx.sin x)\n    ~color:Color.vermillion\n    ~line_style:`Dashed\n    ~line_width:2.5\n    ~marker:Triangle\n    ~alpha:0.7\n    ()\n  |> render_png \"styled.png\"\n```\n\n**Matplotlib:**\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nx = np.linspace(0, 2 * np.pi, 50)\n\nplt.figure()\nplt.plot(x, np.sin(x), color=\"red\", linestyle=\"--\",\n         linewidth=2.5, marker=\"^\", alpha=0.7)\nplt.savefig(\"styled.png\")\n```\n\n## Themes\n\nHugin provides built-in themes with context scaling. Matplotlib uses style sheets.\n\n**Hugin:**\n\n<!-- $MDX skip -->\n```ocaml\n(* Dark theme scaled for a presentation *)\nlet theme = Theme.dark |> Theme.talk in\nline ~x ~y () |> with_theme theme |> render_png \"slide.png\"\n```\n\n**Matplotlib:**\n\n```python\nplt.style.use(\"dark_background\")\nplt.rcParams.update({\"font.size\": 14})\nplt.plot(x, y)\nplt.savefig(\"slide.png\")\n```\n\n## Save and Export\n\n**Hugin:**\n\n<!-- $MDX skip -->\n```ocaml\nlet spec = line ~x ~y () |> title \"My Plot\" in\nspec |> render_png \"plot.png\";\nspec |> render_svg \"plot.svg\";\nspec |> render_pdf \"plot.pdf\";\nspec |> show  (* interactive SDL window *)\n```\n\n**Matplotlib:**\n\n```python\nplt.plot(x, y)\nplt.title(\"My Plot\")\nplt.savefig(\"plot.png\")\nplt.savefig(\"plot.svg\")\nplt.savefig(\"plot.pdf\")\nplt.show()\n```\n\nIn Hugin, the spec is an immutable value. You can render the same spec to multiple formats without rebuilding it. In Matplotlib, the figure is mutable state that `savefig` and `show` consume.\n\n## Interactive Display\n\n**Hugin:**\n\n<!-- $MDX skip -->\n```ocaml\nshow ~width:1600. ~height:1200. spec\n```\n\nThe SDL window is resizable. The plot re-renders at the new dimensions. Press Escape or Q to close.\n\n**Matplotlib:**\n\n```python\nplt.show()\n```\n"
  },
  {
    "path": "packages/hugin/doc/index.md",
    "content": "# Hugin\n\nHugin creates publication-quality plots from Nx arrays using a declarative, pipeline-oriented API.\n\n## What Hugin Does\n\nHugin turns immutable plot specifications into rendered output. You build a specification from mark constructors (`line`, `point`, `bar`, `hist`), decorate it with `title`, `xlabel`, and axis controls via the `|>` pipeline, and render with `render_png`, `render_svg`, or `show`.\n\nInternally, rendering proceeds in three stages: the user-facing spec is compiled to a prepared tree (histograms binned, data bounds computed, marks auto-colored), then resolved to device-pixel coordinates, then drawn by a backend. Data compilation happens once; layout resolution is cheap and repeatable at different sizes.\n\n## System Requirements\n\nHugin needs Cairo and SDL2 for rendering:\n\n<!-- $MDX skip -->\n```bash\n# macOS\nbrew install cairo sdl2\n\n# Ubuntu/Debian\napt install libcairo2-dev libsdl2-dev\n```\n\n## Quick Start\n\n<!-- $MDX skip -->\n```ocaml\nopen Hugin\n\nlet () =\n  let x = Nx.linspace Nx.float32 0. (2. *. Float.pi) 100 in\n  let y = Nx.sin x in\n  line ~x ~y () |> title \"Sine wave\" |> render_png \"sine.png\"\n```\n\nTwo marks on shared axes:\n\n<!-- $MDX skip -->\n```ocaml\nopen Hugin\n\nlet () =\n  let x = Nx.linspace Nx.float32 0. (2. *. Float.pi) 100 in\n  layers [\n    line ~x ~y:(Nx.sin x) ~label:\"sin\" ();\n    line ~x ~y:(Nx.cos x) ~label:\"cos\" ~line_style:`Dashed ();\n  ]\n  |> legend |> render_png \"trig.png\"\n```\n\n## Next Steps\n\n- [Getting Started](/docs/hugin/getting-started/) — installation, first plot, key concepts\n- [Marks and Styling](/docs/hugin/marks-and-styling/) — mark catalog, visual properties\n- [Layout and Decorations](/docs/hugin/layout-and-decorations/) — axes, scales, themes, multi-panel\n- [Colors and Colormaps](/docs/hugin/colors-and-colormaps/) — OKLCH colors, palettes, colormaps\n- [Matplotlib Comparison](/docs/hugin/matplotlib-comparison/) — side-by-side with Python\n"
  },
  {
    "path": "packages/hugin/examples/01-line-plot/README.md",
    "content": "# Line Plot\n\nCreate data with Nx, build a line plot, and render to PNG in three lines.\n\n![Line Plot](line_plot.png)\n"
  },
  {
    "path": "packages/hugin/examples/01-line-plot/dune",
    "content": "(executable\n (name main)\n (libraries hugin nx))\n\n(rule\n (targets line_plot.png)\n (deps main.exe)\n (action\n  (run ./main.exe))\n (mode\n  (promote (until-clean))))\n"
  },
  {
    "path": "packages/hugin/examples/01-line-plot/main.ml",
    "content": "(* Your first plot.\n\n   The simplest possible visualization: create data with Nx, build a line plot,\n   and render to PNG. *)\n\nopen Hugin\n\nlet () =\n  let x = Nx.linspace Nx.float32 0. (2. *. Float.pi) 100 in\n  let y = Nx.sin x in\n  line ~x ~y () |> render_png \"line_plot.png\"\n"
  },
  {
    "path": "packages/hugin/examples/02-styling/README.md",
    "content": "# Styling\n\nMark constructors accept optional visual properties: `~color`, `~line_style`, `~line_width`, `~marker`, and `~alpha`.\n\n![Styling](styling.png)\n"
  },
  {
    "path": "packages/hugin/examples/02-styling/dune",
    "content": "(executable\n (name main)\n (libraries hugin nx))\n\n(rule\n (targets styling.png)\n (deps main.exe)\n (action\n  (run ./main.exe))\n (mode\n  (promote (until-clean))))\n"
  },
  {
    "path": "packages/hugin/examples/02-styling/main.ml",
    "content": "(* Styling.\n\n   Every mark constructor accepts optional visual properties as labeled\n   arguments. This example shows how to set color, line style, width, and marker\n   shape. *)\n\nopen Hugin\n\nlet () =\n  let x = Nx.linspace Nx.float32 0. (2. *. Float.pi) 50 in\n  let y = Nx.sin x in\n  line ~x ~y ~color:Color.vermillion ~line_style:`Dashed ~line_width:2.5\n    ~marker:Triangle ~alpha:0.7 ()\n  |> render_png \"styling.png\"\n"
  },
  {
    "path": "packages/hugin/examples/03-scatter/README.md",
    "content": "# Scatter Plot\n\nThe `point` mark places markers at data coordinates. Pass `~color_by` to map a third variable through the theme's sequential colormap.\n\n![Scatter Plot](scatter.png)\n"
  },
  {
    "path": "packages/hugin/examples/03-scatter/dune",
    "content": "(executable\n (name main)\n (libraries hugin nx))\n\n(rule\n (targets scatter.png)\n (deps main.exe)\n (action\n  (run ./main.exe))\n (mode\n  (promote (until-clean))))\n"
  },
  {
    "path": "packages/hugin/examples/03-scatter/main.ml",
    "content": "(* Scatter plots.\n\n   Point marks place individual markers at data coordinates. Use color_by to map\n   a third variable through the theme's sequential colormap. *)\n\nopen Hugin\n\nlet () =\n  let x = Nx.rand Nx.float32 [| 200 |] in\n  let y = Nx.rand Nx.float32 [| 200 |] in\n  let c = Nx.add x y in\n  point ~x ~y ~color_by:c ~size:8. ~marker:Circle ()\n  |> title \"Random Scatter\" |> render_png \"scatter.png\"\n"
  },
  {
    "path": "packages/hugin/examples/04-bar-chart/README.md",
    "content": "# Bar Chart\n\nBar marks draw vertical bars centered at x positions. Use `~xticks` to label the x-axis with category names.\n\n![Bar Chart](bar_chart.png)\n"
  },
  {
    "path": "packages/hugin/examples/04-bar-chart/dune",
    "content": "(executable\n (name main)\n (libraries hugin nx))\n\n(rule\n (targets bar_chart.png)\n (deps main.exe)\n (action\n  (run ./main.exe))\n (mode\n  (promote (until-clean))))\n"
  },
  {
    "path": "packages/hugin/examples/04-bar-chart/main.ml",
    "content": "(* Bar charts.\n\n   Bar marks draw vertical bars centered at x positions. Height is measured from\n   bottom (default 0). *)\n\nopen Hugin\n\nlet () =\n  let x = Nx.create Nx.float32 [| 5 |] [| 1.; 2.; 3.; 4.; 5. |] in\n  let h = Nx.create Nx.float32 [| 5 |] [| 4.2; 7.1; 3.8; 9.0; 5.5 |] in\n  bar ~x ~height:h ~color:Color.sky_blue ()\n  |> title \"Quarterly Revenue\" |> xlabel \"Quarter\" |> ylabel \"Revenue ($M)\"\n  |> xticks [ (1., \"Q1\"); (2., \"Q2\"); (3., \"Q3\"); (4., \"Q4\"); (5., \"Q5\") ]\n  |> render_png \"bar_chart.png\"\n"
  },
  {
    "path": "packages/hugin/examples/05-histogram/README.md",
    "content": "# Histogram\n\nHistogram marks bin continuous data into evenly spaced intervals. Use `~density:true` to normalize the total area to 1.\n\n![Histogram](histogram.png)\n"
  },
  {
    "path": "packages/hugin/examples/05-histogram/dune",
    "content": "(executable\n (name main)\n (libraries hugin nx))\n\n(rule\n (targets histogram.png)\n (deps main.exe)\n (action\n  (run ./main.exe))\n (mode\n  (promote (until-clean))))\n"
  },
  {
    "path": "packages/hugin/examples/05-histogram/main.ml",
    "content": "(* Histograms.\n\n   Histogram marks bin continuous data. Use ~density:true to normalize so the\n   total area equals 1. *)\n\nopen Hugin\n\nlet () =\n  let samples = Nx.rand Nx.float32 [| 1000 |] in\n  hist ~x:samples ~bins:(`Num 25) ~density:true ~color:Color.green ()\n  |> title \"Distribution\" |> xlabel \"Value\" |> render_png \"histogram.png\"\n"
  },
  {
    "path": "packages/hugin/examples/06-layers/README.md",
    "content": "# Layers\n\n`layers` overlays multiple marks on shared axes. Any mark with a `~label` automatically appears in the legend.\n\n![Layers](layers.png)\n"
  },
  {
    "path": "packages/hugin/examples/06-layers/dune",
    "content": "(executable\n (name main)\n (libraries hugin nx))\n\n(rule\n (targets layers.png)\n (deps main.exe)\n (action\n  (run ./main.exe))\n (mode\n  (promote (until-clean))))\n"
  },
  {
    "path": "packages/hugin/examples/06-layers/main.ml",
    "content": "(* Layers and legends.\n\n   Use layers to overlay different mark types on shared axes. Any mark with a\n   ~label automatically appears in the legend. *)\n\nopen Hugin\n\nlet () =\n  let x = Nx.linspace Nx.float32 0. 10. 100 in\n  let y = Nx.sin x in\n  layers\n    [\n      fill_between ~x ~y1:(Nx.sub_s y 0.3) ~y2:(Nx.add_s y 0.3) ~label:\"± 0.3\"\n        ();\n      line ~x ~y ~label:\"sin(x)\" ();\n      hline ~y:0. ~line_style:`Dashed ~color:Color.gray ~label:\"baseline\" ();\n    ]\n  |> title \"Sine with Confidence Band\"\n  |> xlabel \"x\" |> ylabel \"y\" |> legend |> render_png \"layers.png\"\n"
  },
  {
    "path": "packages/hugin/examples/07-decorations/README.md",
    "content": "# Decorations\n\nDecoration functions (`xscale`, `xlim`, `ylim`, `grid_lines`, `xtick_format`) control axis behavior and compose with `|>`.\n\n![Decorations](decorations.png)\n"
  },
  {
    "path": "packages/hugin/examples/07-decorations/dune",
    "content": "(executable\n (name main)\n (libraries hugin nx))\n\n(rule\n (targets decorations.png)\n (deps main.exe)\n (action\n  (run ./main.exe))\n (mode\n  (promote (until-clean))))\n"
  },
  {
    "path": "packages/hugin/examples/07-decorations/main.ml",
    "content": "(* Axis decorations.\n\n   Decorations control axes limits, scales, and grid visibility. They compose\n   naturally with the |> pipeline. *)\n\nopen Hugin\n\nlet () =\n  let x = Nx.linspace Nx.float32 1. 1000. 100 in\n  let y = Nx.log x in\n  line ~x ~y () |> title \"Logarithmic Scale\" |> xlabel \"x\" |> ylabel \"ln(x)\"\n  |> xscale `Log |> xlim 1. 1000. |> ylim 0. 8.\n  |> xtick_format (Printf.sprintf \"%.0f\")\n  |> grid_lines true\n  |> render_png \"decorations.png\"\n"
  },
  {
    "path": "packages/hugin/examples/08-grid-layout/README.md",
    "content": "# Grid Layout\n\n`grid` arranges independent plots in rows and columns. Each cell has its own axes and decorations.\n\n![Grid Layout](grid_layout.png)\n"
  },
  {
    "path": "packages/hugin/examples/08-grid-layout/dune",
    "content": "(executable\n (name main)\n (libraries hugin nx))\n\n(rule\n (targets grid_layout.png)\n (deps main.exe)\n (action\n  (run ./main.exe))\n (mode\n  (promote (until-clean))))\n"
  },
  {
    "path": "packages/hugin/examples/08-grid-layout/main.ml",
    "content": "(* Grid layout.\n\n   Arrange independent plots in a grid. Each cell is a standalone specification\n   with its own axes and decorations. *)\n\nopen Hugin\n\nlet () =\n  let x = Nx.linspace Nx.float32 0. (2. *. Float.pi) 100 in\n  let p1 = line ~x ~y:(Nx.sin x) () |> title \"sin\" in\n  let p2 = line ~x ~y:(Nx.cos x) () |> title \"cos\" in\n  let p3 = line ~x ~y:(Nx.tan (Nx.mul_s x 0.3)) () |> title \"tan(0.3x)\" in\n  let p4 =\n    point ~x ~y:(Nx.sin x) ~color:Color.vermillion ~marker:Plus ()\n    |> title \"sin (scatter)\"\n  in\n  grid [ [ p1; p2 ]; [ p3; p4 ] ] |> render_png \"grid_layout.png\"\n"
  },
  {
    "path": "packages/hugin/examples/09-themes/README.md",
    "content": "# Themes\n\nThemes control visual appearance: background, fonts, axes, grid, and data colors. Context functions like `Theme.talk` scale everything up for presentations.\n\n![Themes](themes.png)\n"
  },
  {
    "path": "packages/hugin/examples/09-themes/dune",
    "content": "(executable\n (name main)\n (libraries hugin nx))\n\n(rule\n (targets themes.png)\n (deps main.exe)\n (action\n  (run ./main.exe))\n (mode\n  (promote (until-clean))))\n"
  },
  {
    "path": "packages/hugin/examples/09-themes/main.ml",
    "content": "(* Themes and context scaling.\n\n   Themes control the entire visual appearance: colors, fonts, line widths.\n   Context functions like Theme.talk scale everything up for presentations. *)\n\nopen Hugin\n\nlet () =\n  let x = Nx.linspace Nx.float32 0. 10. 80 in\n  let base =\n    layers\n      [\n        line ~x ~y:(Nx.sin x) ~label:\"sin\" ();\n        line ~x ~y:(Nx.cos x) ~label:\"cos\" ();\n      ]\n    |> legend\n  in\n  grid\n    [\n      [\n        base |> with_theme Theme.default |> title \"Default\";\n        base |> with_theme Theme.dark |> title \"Dark\";\n      ];\n      [\n        base |> with_theme Theme.minimal |> title \"Minimal\";\n        base |> with_theme (Theme.talk Theme.default) |> title \"Talk\";\n      ];\n    ]\n  |> render_png \"themes.png\"\n"
  },
  {
    "path": "packages/hugin/examples/10-showcase/README.md",
    "content": "# Showcase\n\nCombines multiple mark types, layouts, and output formats in a single visualization. Renders to both PNG and SVG.\n\n![Showcase](showcase.png)\n"
  },
  {
    "path": "packages/hugin/examples/10-showcase/dune",
    "content": "(executable\n (name main)\n (libraries hugin nx))\n\n(rule\n (targets showcase.png showcase.svg)\n (deps main.exe)\n (action\n  (run ./main.exe))\n (mode\n  (promote (until-clean))))\n"
  },
  {
    "path": "packages/hugin/examples/10-showcase/main.ml",
    "content": "(* Full showcase.\n\n   Demonstrates multiple mark types, layouts, themes, and output formats in a\n   single example. *)\n\nopen Hugin\n\nlet () =\n  let x = Nx.linspace Nx.float32 0. 10. 100 in\n\n  let p1 =\n    layers\n      [\n        line ~x ~y:(Nx.sin x) ~label:\"sin\" ~color:Color.blue ();\n        point\n          ~x:(Nx.mul_s (Nx.rand Nx.float32 [| 30 |]) 10.)\n          ~y:(Nx.sub_s (Nx.mul_s (Nx.rand Nx.float32 [| 30 |]) 2.) 1.)\n          ~color:Color.vermillion ~marker:Star ~label:\"noise\" ();\n      ]\n    |> title \"Lines & Scatter\" |> legend\n  in\n\n  let p2 =\n    let xb = Nx.create Nx.float32 [| 4 |] [| 1.; 2.; 3.; 4. |] in\n    let h = Nx.create Nx.float32 [| 4 |] [| 3.; 7.; 2.; 5. |] in\n    bar ~x:xb ~height:h ~color:Color.orange () |> title \"Bar Chart\"\n  in\n\n  let p3 =\n    hist ~x:(Nx.rand Nx.float32 [| 500 |]) ~bins:(`Num 20) ~color:Color.green ()\n    |> title \"Histogram\"\n  in\n\n  let p4 =\n    let xs = Nx.rand Nx.float32 [| 50 |] in\n    let ys = Nx.rand Nx.float32 [| 50 |] in\n    let cb = Nx.mul_s xs 100. in\n    let sb = Nx.mul_s ys 40. in\n    point ~x:xs ~y:ys ~color_by:cb ~size_by:sb ~marker:Circle ()\n    |> title \"color_by + size_by\" |> xlabel \"x\" |> ylabel \"y\"\n  in\n\n  let p5 =\n    let xl = Nx.linspace Nx.float32 1. 100. 50 in\n    line ~x:xl ~y:(Nx.mul xl xl) ~color:Color.purple ()\n    |> title \"Quadratic (log y)\" |> yscale `Log\n  in\n\n  let p6 =\n    let data =\n      Nx.init Nx.float32 [| 8; 10 |] (fun idx ->\n          let i = Float.of_int idx.(0) and j = Float.of_int idx.(1) in\n          Float.sin (i *. 0.5) *. Float.cos (j *. 0.4))\n    in\n    heatmap ~data ~cmap:Cmap.viridis () |> title \"Heatmap\"\n  in\n\n  let spec = grid [ [ p1; p2 ]; [ p3; p4 ]; [ p5; p6 ] ] in\n  spec |> render_png \"showcase.png\";\n  spec |> render_svg \"showcase.svg\"\n"
  },
  {
    "path": "packages/hugin/examples/11-errorbar/README.md",
    "content": "# Error Bars\n\n`errorbar` shows measurement uncertainty. Use `` `Symmetric `` for equal +/- errors or `` `Asymmetric `` for independent lower and upper bounds.\n\n![Error Bars](errorbar.png)\n"
  },
  {
    "path": "packages/hugin/examples/11-errorbar/dune",
    "content": "(executable\n (name main)\n (libraries hugin nx))\n\n(rule\n (targets errorbar.png)\n (deps main.exe)\n (action\n  (run ./main.exe))\n (mode\n  (promote (until-clean))))\n"
  },
  {
    "path": "packages/hugin/examples/11-errorbar/main.ml",
    "content": "(* Error bars.\n\n   Errorbar marks show measurement uncertainty. Use `Symmetric for equal +/-\n   errors or `Asymmetric for independent lower and upper bounds. *)\n\nopen Hugin\n\nlet () =\n  let x = Nx.create Nx.float32 [| 6 |] [| 1.; 2.; 3.; 4.; 5.; 6. |] in\n  let y = Nx.create Nx.float32 [| 6 |] [| 2.1; 3.8; 3.2; 5.1; 4.5; 6.3 |] in\n  let err = Nx.create Nx.float32 [| 6 |] [| 0.3; 0.5; 0.2; 0.6; 0.4; 0.3 |] in\n  errorbar ~x ~y ~yerr:(`Symmetric err) ~cap_size:6. ~color:Color.blue ()\n  |> title \"Measurements\" |> xlabel \"Trial\" |> ylabel \"Value\"\n  |> render_png \"errorbar.png\"\n"
  },
  {
    "path": "packages/hugin/examples/README.md",
    "content": "# Hugin Examples\n\nLearn Hugin through progressively complex examples. Start with `01-line-plot`\nand work through the numbered examples in order.\n\n## Examples\n\n| Example | Concept | Key Functions |\n|---------|---------|---------------|\n| [`01-line-plot`](./01-line-plot/) | Your first plot | `line`, `render_png` |\n| [`02-styling`](./02-styling/) | Colors, line styles, markers | `~color`, `~line_style`, `~marker`, `~alpha` |\n| [`03-scatter`](./03-scatter/) | Scatter plots and color mapping | `point`, `~color_by` |\n| [`04-bar-chart`](./04-bar-chart/) | Bar charts with categorical axes | `bar`, `xlabel`, `ylabel`, `xticks` |\n| [`05-histogram`](./05-histogram/) | Histograms and density | `hist`, `~bins`, `~density` |\n| [`06-layers`](./06-layers/) | Overlaying marks and legends | `layers`, `fill_between`, `hline`, `legend` |\n| [`07-decorations`](./07-decorations/) | Axis control and grid lines | `xscale`, `xlim`, `ylim`, `xtick_format`, `grid_lines` |\n| [`08-grid-layout`](./08-grid-layout/) | Multi-panel layouts | `Layout.grid` |\n| [`09-themes`](./09-themes/) | Themes and context scaling | `Theme.default`, `Theme.dark`, `Theme.talk` |\n| [`10-showcase`](./10-showcase/) | Full showcase with multiple outputs | All mark types, `heatmap`, `render_svg` |\n| [`11-errorbar`](./11-errorbar/) | Measurement uncertainty | `errorbar`, `~yerr`, `~cap_size` |\n\n## Running Examples\n\nAll examples can be run with:\n\n```bash\ndune exec packages/hugin/examples/<name>/main.exe\n```\n\nFor example:\n\n```bash\ndune exec packages/hugin/examples/01-line-plot/main.exe\n```\n\n## Quick Reference\n\n### Single Plot\n\n```ocaml\nopen Hugin\n\nlet x = Nx.linspace Nx.float32 0. 
6.28 100 in\nlet y = Nx.sin x in\nline ~x ~y () |> title \"Sine\" |> render_png \"plot.png\"\n```\n\n### Multiple Marks on Shared Axes\n\n```ocaml\nlayers\n  [\n    line ~x ~y:(Nx.sin x) ~label:\"sin\" ();\n    line ~x ~y:(Nx.cos x) ~label:\"cos\" ~line_style:`Dashed ();\n  ]\n|> legend |> render_png \"plot.png\"\n```\n\n### Grid Layout\n\n```ocaml\nlet p1 = line ~x ~y:(Nx.sin x) () |> title \"sin\" in\nlet p2 = line ~x ~y:(Nx.cos x) () |> title \"cos\" in\nLayout.grid [ [ p1; p2 ] ] |> render_png \"grid.png\"\n```\n"
  },
  {
    "path": "packages/hugin/lib/axis.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Per-axis configuration and resolution.\n\n   config holds the user-set options (all optional, from decorations). t holds\n   the resolved axis with defaults applied and data bounds merged. *)\n\ntype config = {\n  label : string option;\n  lim : (float * float) option;\n  scale : Spec.scale option;\n  invert : bool;\n  ticks : (float * string) list option;\n  tick_format : (float -> string) option;\n}\n\nlet empty_config =\n  {\n    label = None;\n    lim = None;\n    scale = None;\n    invert = false;\n    ticks = None;\n    tick_format = None;\n  }\n\ntype t = {\n  scale : Spec.scale;\n  invert : bool;\n  lo : float;\n  hi : float;\n  label : string option;\n  ticks : (float * string) list option;\n  tick_format : (float -> string) option;\n}\n\nlet resolve ~data_lo ~data_hi (c : config) =\n  let scale = Option.value ~default:`Linear c.scale in\n  let lo, hi = Option.value ~default:(data_lo, data_hi) c.lim in\n  {\n    scale;\n    invert = c.invert;\n    lo;\n    hi;\n    label = c.label;\n    ticks = c.ticks;\n    tick_format = c.tick_format;\n  }\n\nlet make_scale_and_ticks (a : t) =\n  let s = Scale.make ~invert:a.invert a.scale ~lo:a.lo ~hi:a.hi () in\n  let ticks =\n    match a.ticks with\n    | Some t -> t\n    | None -> Ticks.generate a.scale ~lo:a.lo ~hi:a.hi ()\n  in\n  let ticks =\n    match a.tick_format with\n    | None -> ticks\n    | Some f -> List.map (fun (v, _) -> (v, f v)) ticks\n  in\n  (s, ticks)\n"
  },
  {
    "path": "packages/hugin/lib/axis.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Per-axis configuration and resolution.\n\n    {b Internal module.} Consolidates per-axis state into two stages: {!config}\n    holds user-set options from decorations (all optional), and {!t} holds the\n    resolved axis with defaults applied and data bounds merged. Used by\n    {!Prepared} and {!Resolve}. *)\n\n(** {1:config Configuration} *)\n\ntype config = {\n  label : string option;\n  lim : (float * float) option;\n  scale : Spec.scale option;\n  invert : bool;\n  ticks : (float * string) list option;\n  tick_format : (float -> string) option;\n}\n(** The type for per-axis user options collected from decorations. *)\n\nval empty_config : config\n(** [empty_config] is the default configuration: no label, no limits, [invert]\n    is [false], scale/ticks/format are [None]. *)\n\n(** {1:resolved Resolved axis} *)\n\ntype t = {\n  scale : Spec.scale;\n  invert : bool;\n  lo : float;\n  hi : float;\n  label : string option;\n  ticks : (float * string) list option;\n  tick_format : (float -> string) option;\n}\n(** The type for resolved axes. [scale] defaults to [`Linear], [lo] and [hi]\n    come from data bounds unless overridden by {!config.lim}. *)\n\nval resolve : data_lo:float -> data_hi:float -> config -> t\n(** [resolve ~data_lo ~data_hi c] is a resolved axis from [c]. Uses [data_lo]\n    and [data_hi] when [c.lim] is [None], and [`Linear] when [c.scale] is\n    [None]. *)\n\nval make_scale_and_ticks : t -> Scale.t * (float * string) list\n(** [make_scale_and_ticks a] is [(scale, ticks)] for [a]. Generates ticks via\n    {!Ticks.generate} when [a.ticks] is [None], then applies [a.tick_format] if\n    set. *)\n"
  },
  {
    "path": "packages/hugin/lib/cairo_backend.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Color helpers *)\n\nlet set_color cr c =\n  let r, g, b, a = Color.to_rgba c in\n  Ucairo.set_source_rgba cr r g b a\n\nlet set_font cr (font : Theme.font) =\n  let weight =\n    match font.weight with `Normal -> Ucairo.Normal | `Bold -> Ucairo.Bold\n  in\n  Ucairo.select_font_face cr font.family weight;\n  Ucairo.set_font_size cr font.size\n\n(* Text measurer *)\n\nlet text_measurer cr ~font s =\n  set_font cr font;\n  let ext = Ucairo.text_extents cr s in\n  (ext.width, ext.height)\n\n(* Marker rendering *)\n\nlet draw_marker cr shape size (px, py) =\n  let hs = size /. 2. in\n  match shape with\n  | Spec.Circle ->\n      Ucairo.arc cr px py ~r:hs ~a1:0. ~a2:(2. *. Float.pi);\n      Ucairo.Path.close cr\n  | Spec.Square -> Ucairo.rectangle cr (px -. hs) (py -. hs) ~w:size ~h:size\n  | Spec.Triangle ->\n      Ucairo.move_to cr px (py -. hs);\n      Ucairo.line_to cr (px +. hs) (py +. hs);\n      Ucairo.line_to cr (px -. hs) (py +. hs);\n      Ucairo.Path.close cr\n  | Spec.Plus ->\n      Ucairo.move_to cr (px -. hs) py;\n      Ucairo.line_to cr (px +. hs) py;\n      Ucairo.move_to cr px (py -. hs);\n      Ucairo.line_to cr px (py +. hs)\n  | Spec.Star ->\n      Ucairo.move_to cr (px -. hs) py;\n      Ucairo.line_to cr (px +. hs) py;\n      Ucairo.move_to cr px (py -. hs);\n      Ucairo.line_to cr px (py +. hs);\n      let d = hs *. 0.707 in\n      Ucairo.move_to cr (px -. d) (py -. d);\n      Ucairo.line_to cr (px +. d) (py +. d);\n      Ucairo.move_to cr (px +. d) (py -. d);\n      Ucairo.line_to cr (px -. d) (py +. 
d)\n\n(* Primitive rendering *)\n\nlet rec render_primitive cr = function\n  | Scene.Path { points; close; fill; stroke; line_width; dash } ->\n      if Array.length points < 2 then ()\n      else begin\n        let x0, y0 = points.(0) in\n        Ucairo.move_to cr x0 y0;\n        for i = 1 to Array.length points - 1 do\n          let x, y = points.(i) in\n          Ucairo.line_to cr x y\n        done;\n        if close then Ucairo.Path.close cr;\n        begin match fill with\n        | Some c ->\n            set_color cr c;\n            if stroke <> None then Ucairo.fill_preserve cr else Ucairo.fill cr\n        | None -> ()\n        end;\n        begin match stroke with\n        | Some c ->\n            set_color cr c;\n            Ucairo.set_line_width cr line_width;\n            (match dash with\n            | [] -> Ucairo.set_dash cr [||]\n            | ds -> Ucairo.set_dash cr (Array.of_list ds));\n            Ucairo.stroke cr\n        | None -> ()\n        end\n      end\n  | Scene.Markers { points; shape; size; sizes; fill; fills; stroke } ->\n      let stroke_only =\n        match shape with Spec.Plus | Spec.Star -> true | _ -> false\n      in\n      Array.iteri\n        (fun i pt ->\n          let s = match sizes with Some ss -> ss.(i) | None -> size in\n          let f = match fills with Some fs -> Some fs.(i) | None -> fill in\n          Ucairo.Path.clear cr;\n          draw_marker cr shape s pt;\n          if stroke_only then begin\n            let c =\n              match f with\n              | Some c -> c\n              | None -> ( match stroke with Some c -> c | None -> Color.black)\n            in\n            set_color cr c;\n            Ucairo.set_line_width cr (Float.max 1. (s *. 
0.15));\n            Ucairo.stroke cr\n          end\n          else begin\n            begin match f with\n            | Some c ->\n                set_color cr c;\n                if stroke <> None then Ucairo.fill_preserve cr\n                else Ucairo.fill cr\n            | None -> ()\n            end;\n            begin match stroke with\n            | Some c ->\n                set_color cr c;\n                Ucairo.set_line_width cr (Float.max 1. (s *. 0.15));\n                Ucairo.stroke cr\n            | None -> ()\n            end\n          end)\n        points\n  | Scene.Text { x; y; content; font; color; anchor; baseline; angle } ->\n      set_font cr font;\n      set_color cr color;\n      let ext = Ucairo.text_extents cr content in\n      let dx =\n        match anchor with\n        | `Start -> -.ext.x_bearing\n        | `Middle -> -.(ext.x_bearing +. (ext.width /. 2.))\n        | `End -> -.(ext.x_bearing +. ext.width)\n      in\n      let dy =\n        match baseline with\n        | `Top -> -.ext.y_bearing\n        | `Middle -> -.(ext.y_bearing +. (ext.height /. 2.))\n        | `Bottom -> -.(ext.y_bearing +. ext.height)\n      in\n      Ucairo.save cr;\n      Ucairo.translate cr x y;\n      if angle <> 0. then Ucairo.rotate cr angle;\n      Ucairo.move_to cr dx dy;\n      Ucairo.show_text cr content;\n      Ucairo.restore cr\n  | Scene.Image { x; y; w; h; data } ->\n      let img_surface = Image_util.nx_to_cairo_surface data in\n      let img_w = (Nx.shape data).(1) and img_h = (Nx.shape data).(0) in\n      Ucairo.save cr;\n      Ucairo.translate cr x y;\n      Ucairo.scale cr (w /. float img_w) (h /. float img_h);\n      Ucairo.set_source_surface cr img_surface ~x:0. 
~y:0.;\n      Ucairo.paint cr;\n      Ucairo.restore cr;\n      Ucairo.Surface.finish img_surface\n  | Scene.Clip { x; y; w; h; children } ->\n      Ucairo.save cr;\n      Ucairo.rectangle cr x y ~w ~h;\n      Ucairo.clip cr;\n      List.iter (render_primitive cr) children;\n      Ucairo.restore cr\n  | Scene.Group children -> List.iter (render_primitive cr) children\n\n(* Scene rendering *)\n\nlet render_scene cr (scene : Scene.t) =\n  Ucairo.set_antialias cr Ucairo.Antialias_default;\n  Ucairo.set_line_cap cr Ucairo.Round;\n  Ucairo.set_line_join cr Ucairo.Join_round;\n  List.iter (render_primitive cr) scene.primitives\n\n(* Entry points *)\n\nlet render_to_png filename ~width ~height (scene : Scene.t) =\n  let w = int_of_float width and h = int_of_float height in\n  let surface = Ucairo.Image.create ~w ~h in\n  let cr = Ucairo.create surface in\n  render_scene cr scene;\n  Ucairo.Png.write surface filename;\n  Ucairo.Surface.finish surface\n\nlet render_to_pdf filename ~width ~height (scene : Scene.t) =\n  let surface = Ucairo.Pdf.create filename ~w:width ~h:height in\n  let cr = Ucairo.create surface in\n  render_scene cr scene;\n  Ucairo.Surface.finish surface\n\nlet render_to_buffer ~width ~height (scene : Scene.t) =\n  let w = int_of_float width and h = int_of_float height in\n  let surface = Ucairo.Image.create ~w ~h in\n  let cr = Ucairo.create surface in\n  render_scene cr scene;\n  let buf = Buffer.create 4096 in\n  Ucairo.Png.write_to_stream surface (Buffer.add_string buf);\n  Ucairo.Surface.finish surface;\n  Buffer.contents buf\n\nlet show_interactive ~theme ~width ~height prepared =\n  let w = int_of_float width and h = int_of_float height in\n  let csdl = Cairo_sdl.create ~width:w ~height:h ~title:\"Hugin\" in\n\n  let render_current () =\n    let cr = Cairo_sdl.context csdl in\n    let cw = float (Cairo_sdl.width csdl) in\n    let ch = float (Cairo_sdl.height csdl) in\n    let tm = text_measurer cr in\n    let scene =\n      
Resolve.resolve_prepared ~text_measurer:tm ~theme ~width:cw ~height:ch\n        prepared\n    in\n    render_scene cr scene;\n    Cairo_sdl.present csdl\n  in\n\n  render_current ();\n\n  let ev = Usdl.Event.create () in\n  let quit = ref false in\n  while not !quit do\n    if not (Usdl.Event.wait ev) then quit := true\n    else\n      begin match Usdl.Event.typ ev with\n      | `Quit -> quit := true\n      | `Window_event ->\n          begin match Usdl.Event.window_event_id ev with\n          | `Resized | `Size_changed ->\n              Cairo_sdl.resize csdl;\n              render_current ()\n          | `Exposed -> render_current ()\n          | `Close -> quit := true\n          | _ -> ()\n          end\n      | `Key_down ->\n          let keycode = Usdl.Event.keycode ev in\n          if keycode = Usdl.Keycode.escape || keycode = Usdl.Keycode.q then\n            quit := true\n      | _ -> ()\n      end\n  done;\n  Cairo_sdl.destroy csdl\n"
  },
  {
    "path": "packages/hugin/lib/cairo_backend.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Cairo rendering backend.\n\n    {b Internal module.} Renders {!Scene.t} to PNG, PDF, or an interactive SDL\n    window via Cairo. *)\n\n(** {1:measurer Text measurement} *)\n\nval text_measurer : Ucairo.t -> Resolve.text_measurer\n(** [text_measurer cr] is a text measurer backed by {!Ucairo.text_extents}. *)\n\n(** {1:rendering Rendering} *)\n\nval render_scene : Ucairo.t -> Scene.t -> unit\n(** [render_scene cr scene] draws [scene] onto [cr]. *)\n\nval render_to_png : string -> width:float -> height:float -> Scene.t -> unit\n(** [render_to_png filename ~width ~height scene] writes [scene] as a PNG image.\n*)\n\nval render_to_pdf : string -> width:float -> height:float -> Scene.t -> unit\n(** [render_to_pdf filename ~width ~height scene] writes [scene] as a\n    single-page PDF. *)\n\nval render_to_buffer : width:float -> height:float -> Scene.t -> string\n(** [render_to_buffer ~width ~height scene] is the PNG-encoded contents of\n    [scene] as a string. *)\n\n(** {1:interactive Interactive display} *)\n\nval show_interactive :\n  theme:Theme.t -> width:float -> height:float -> Prepared.t -> unit\n(** [show_interactive ~theme ~width ~height prepared] opens an SDL window and\n    renders [prepared]. Compiles data once; only re-resolves layout on resize.\n    Exits on Escape, Q, or window close. *)\n"
  },
  {
    "path": "packages/hugin/lib/cairo_sdl.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Cairo-SDL integration: shared ARGB8888 surface *)\n\ntype t = {\n  window : Usdl.Window.t;\n  renderer : Usdl.Renderer.t;\n  mutable surface : Usdl.Surface.t;\n  mutable cairo_surface : Ucairo.surface;\n  mutable context : Ucairo.t;\n  mutable width : int;\n  mutable height : int;\n}\n\nlet make_cairo_context surface =\n  let pixels = Usdl.Surface.pixels surface in\n  let stride = Usdl.Surface.pitch surface in\n  let total = Bigarray.Array1.dim pixels in\n  let h = total / stride in\n  let w = stride / 4 in\n  let cs = Ucairo.Image.create_for_data8 pixels ~w ~h ~stride in\n  let cr = Ucairo.create cs in\n  (cs, cr, w, h)\n\nlet create ~width ~height ~title =\n  Usdl.init ();\n  let window = Usdl.Window.create ~title ~w:width ~h:height in\n  let renderer = Usdl.Renderer.create window in\n  let ow, oh = Usdl.Renderer.output_size renderer in\n  let surface = Usdl.Surface.create_argb8888 ~w:ow ~h:oh in\n  let cairo_surface, context, w, h = make_cairo_context surface in\n  { window; renderer; surface; cairo_surface; context; width = w; height = h }\n\nlet context t = t.context\nlet width t = t.width\nlet height t = t.height\n\nlet present t =\n  Ucairo.Surface.flush t.cairo_surface;\n  let tex = Usdl.Texture.of_surface t.renderer t.surface in\n  Usdl.Renderer.clear t.renderer;\n  Usdl.Renderer.copy t.renderer tex;\n  Usdl.Renderer.present t.renderer;\n  Usdl.Texture.destroy tex\n\nlet resize t =\n  let nw, nh = Usdl.Renderer.output_size t.renderer in\n  if nw <> t.width || nh <> t.height then\n    begin if nw > 0 && nh > 0 then begin\n      Ucairo.Surface.finish t.cairo_surface;\n      Usdl.Surface.destroy t.surface;\n      let surface = Usdl.Surface.create_argb8888 ~w:nw ~h:nh in\n      let 
cairo_surface, context, w, h = make_cairo_context surface in\n      t.surface <- surface;\n      t.cairo_surface <- cairo_surface;\n      t.context <- context;\n      t.width <- w;\n      t.height <- h\n    end\n    end\n\nlet destroy t =\n  Ucairo.Surface.finish t.cairo_surface;\n  Usdl.Surface.destroy t.surface;\n  Usdl.Renderer.destroy t.renderer;\n  Usdl.Window.destroy t.window;\n  Usdl.quit ()\n"
  },
  {
    "path": "packages/hugin/lib/cairo_sdl.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Cairo-SDL integration.\n\n    {b Internal module.} Manages a shared ARGB8888 surface between Cairo and SDL\n    for interactive rendering. *)\n\ntype t\n(** The type for Cairo-SDL contexts. *)\n\nval create : width:int -> height:int -> title:string -> t\n(** [create ~width ~height ~title] initializes SDL, creates a resizable window,\n    and sets up a shared Cairo surface.\n\n    Raises [Failure] if SDL initialization fails. *)\n\nval context : t -> Ucairo.t\n(** [context t] is the current Cairo drawing context. Valid until the next\n    {!present} or {!resize}. *)\n\nval width : t -> int\n(** [width t] is the current surface width in pixels. *)\n\nval height : t -> int\n(** [height t] is the current surface height in pixels. *)\n\nval present : t -> unit\n(** [present t] flushes the Cairo surface to the SDL window and prepares a fresh\n    Cairo context for the next frame. *)\n\nval resize : t -> unit\n(** [resize t] updates the surface dimensions to match the renderer output size.\n    No-op if the size has not changed. *)\n\nval destroy : t -> unit\n(** [destroy t] frees all SDL and Cairo resources and calls {!Usdl.quit}. *)\n"
  },
  {
    "path": "packages/hugin/lib/cmap.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\ntype t = Color.t array\n\nlet eval t v =\n  let v = Float.max 0. (Float.min 1. v) in\n  let i = int_of_float (v *. 255.) in\n  t.(min i 255)\n\nlet of_colors stops =\n  let n = Array.length stops in\n  if n < 2 then invalid_arg \"Cmap.of_colors: need at least 2 stops\";\n  Array.init 256 (fun i ->\n      let v = float i /. 255. in\n      let scaled = v *. float (n - 1) in\n      let idx = int_of_float scaled in\n      let idx = min idx (n - 2) in\n      let frac = scaled -. float idx in\n      Color.mix frac stops.(idx) stops.(idx + 1))\n\n(* Decode a canonical 256-entry hex-encoded colormap *)\n\nlet hex_digit c =\n  match c with\n  | '0' .. '9' -> Char.code c - Char.code '0'\n  | 'a' .. 'f' -> 10 + Char.code c - Char.code 'a'\n  | 'A' .. 'F' -> 10 + Char.code c - Char.code 'A'\n  | _ -> invalid_arg (Printf.sprintf \"Cmap.hex_digit: invalid hex digit %C\" c)\n\nlet decode_hex_cmap hex =\n  Array.init 256 (fun i ->\n      let off = i * 6 in\n      let byte j =\n        let h = hex_digit (String.unsafe_get hex (off + (j * 2))) in\n        let l = hex_digit (String.unsafe_get hex (off + (j * 2) + 1)) in\n        float ((h lsl 4) lor l) /. 255.\n      in\n      Color.rgb ~r:(byte 0) ~g:(byte 1) ~b:(byte 2) ())\n\nlet viridis = decode_hex_cmap Cmap_data.viridis_hex\nlet plasma = decode_hex_cmap Cmap_data.plasma_hex\nlet inferno = decode_hex_cmap Cmap_data.inferno_hex\nlet magma = decode_hex_cmap Cmap_data.magma_hex\nlet cividis = decode_hex_cmap Cmap_data.cividis_hex\nlet coolwarm = decode_hex_cmap Cmap_data.coolwarm_hex\n\nlet gray =\n  of_colors [| Color.rgb ~r:0. ~g:0. ~b:0. (); Color.rgb ~r:1. ~g:1. ~b:1. () |]\n\nlet gray_r =\n  of_colors [| Color.rgb ~r:1. ~g:1. ~b:1. (); Color.rgb ~r:0. ~g:0. 
~b:0. () |]\n\nlet hot =\n  of_colors\n    [|\n      Color.rgb ~r:0. ~g:0. ~b:0. ();\n      Color.rgb ~r:0.7 ~g:0. ~b:0. ();\n      Color.rgb ~r:1. ~g:0.6 ~b:0. ();\n      Color.rgb ~r:1. ~g:1. ~b:1. ();\n    |]\n"
  },
  {
    "path": "packages/hugin/lib/cmap.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Colormaps.\n\n    A colormap is a continuous mapping from \\[[0];[1]\\] to {!Color.t}.\n    Internally stored as a 256-entry lookup table with OKLCH interpolation, so\n    {!eval} is a single array access. *)\n\n(** {1:types Types} *)\n\ntype t\n(** The type for colormaps. *)\n\n(** {1:eval Evaluation} *)\n\nval eval : t -> float -> Color.t\n(** [eval cmap v] is the color at position [v], clamped to \\[[0];[1]\\]. *)\n\n(** {1:constructors Constructors} *)\n\nval of_colors : Color.t array -> t\n(** [of_colors stops] is a colormap interpolating linearly through [stops] in\n    OKLCH space. The stops are evenly spaced from [0] to [1].\n\n    Raises [Invalid_argument] if [stops] has fewer than 2 elements. *)\n\n(** {1:predefined Predefined colormaps}\n\n    Perceptually uniform sequential colormaps from the\n    {{:https://bids.github.io/colormap/}viridis family}, plus a diverging\n    colormap. *)\n\nval viridis : t\nval plasma : t\nval inferno : t\nval magma : t\nval cividis : t\nval coolwarm : t\n\nval gray : t\n(** Linear grayscale (black to white). *)\n\nval gray_r : t\n(** Reversed grayscale (white to black). The standard default for astronomical\n    image display. *)\n\nval hot : t\n(** Black-red-yellow-white. Common in X-ray astronomy. *)\n"
  },
  {
    "path": "packages/hugin/lib/cmap_data.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Canonical 256-entry colormap data encoded as hex strings. Each string is 1536\n   characters: 256 entries of 6 hex chars (RRGGBB). *)\n\nlet viridis_hex =\n  \"44015444025645045745055946075a46085c460a5d460b5e470d60470e6147106347116447136548146748166848176948186a481a6c481b6d481c6e481d6f481f70482071482173482374482475482576482677482878482979472a7a472c7a472d7b472e7c472f7d46307e46327e46337f463480453581453781453882443983443a83443b84433d84433e85423f854240864241864142874144874045884046883f47883f48893e49893e4a893e4c8a3d4d8a3d4e8a3c4f8a3c508b3b518b3b528b3a538b3a548c39558c39568c38588c38598c375a8c375b8d365c8d365d8d355e8d355f8d34608d34618d33628d33638d32648e32658e31668e31678e31688e30698e306a8e2f6b8e2f6c8e2e6d8e2e6e8e2e6f8e2d708e2d718e2c718e2c728e2c738e2b748e2b758e2a768e2a778e2a788e29798e297a8e297b8e287c8e287d8e277e8e277f8e27808e26818e26828e26828e25838e25848e25858e24868e24878e23888e23898e238a8d228b8d228c8d228d8d218e8d218f8d21908d21918c20928c20928c20938c1f948c1f958b1f968b1f978b1f988b1f998a1f9a8a1e9b8a1e9c891e9d891f9e891f9f881fa0881fa1881fa1871fa28720a38620a48621a58521a68522a78522a88423a98324aa8325ab8225ac8226ad8127ad8128ae8029af7f2ab07f2cb17e2db27d2eb37c2fb47c31b57b32b67a34b67935b77937b87838b9773aba763bbb753dbc743fbc7340bd7242be7144bf7046c06f48c16e4ac16d4cc26c4ec36b50c46a52c56954c56856c66758c7655ac8645cc8635ec96260ca6063cb5f65cb5e67cc5c69cd5b6ccd5a6ece5870cf5773d05675d05477d1537ad1517cd2507fd34e81d34d84d44b86d54989d5488bd6468ed64590d74393d74195d84098d83e9bd93c9dd93ba0da39a2da37a5db36a8db34aadc32addc30b0dd2fb2dd2db5de2bb8de29bade28bddf26c0df25c2df23c5e021c8e020cae11fcde11dd0e11cd2e21bd5e21ad8e219dae319dde318dfe318e2e418e5e419e7e419eae51aece51befe51cf1e51df4e61ef6e620f8e621fbe723fde725\"\n\nlet plasma_hex =\n  
\"0d088710078813078916078a19068c1b068d1d068e20068f2206902406912605912805922a05932c05942e05952f059631059733059735049837049938049a3a049a3c049b3e049c3f049c41049d43039e44039e46039f48039f4903a04b03a14c02a14e02a25002a25102a35302a35502a45601a45801a45901a55b01a55c01a65e01a66001a66100a76300a76400a76600a76700a86900a86a00a86c00a86e00a86f00a87100a87201a87401a87501a87701a87801a87a02a87b02a87d03a87e03a88004a88104a78305a78405a78606a68707a68808a68a09a58b0aa58d0ba58e0ca48f0da4910ea3920fa39410a29511a19613a19814a099159f9a169f9c179e9d189d9e199da01a9ca11b9ba21d9aa31e9aa51f99a62098a72197a82296aa2395ab2494ac2694ad2793ae2892b02991b12a90b22b8fb32c8eb42e8db52f8cb6308bb7318ab83289ba3388bb3488bc3587bd3786be3885bf3984c03a83c13b82c23c81c33d80c43e7fc5407ec6417dc7427cc8437bc9447aca457acb4679cc4778cc4977cd4a76ce4b75cf4c74d04d73d14e72d24f71d35171d45270d5536fd5546ed6556dd7566cd8576bd9586ada5a6ada5b69db5c68dc5d67dd5e66de5f65de6164df6263e06363e16462e26561e26660e3685fe4695ee56a5de56b5de66c5ce76e5be76f5ae87059e97158e97257ea7457eb7556eb7655ec7754ed7953ed7a52ee7b51ef7c51ef7e50f07f4ff0804ef1814df1834cf2844bf3854bf3874af48849f48948f58b47f58c46f68d45f68f44f79044f79143f79342f89441f89540f9973ff9983ef99a3efa9b3dfa9c3cfa9e3bfb9f3afba139fba238fca338fca537fca636fca835fca934fdab33fdac33fdae32fdaf31fdb130fdb22ffdb42ffdb52efeb72dfeb82cfeba2cfebb2bfebd2afebe2afec029fdc229fdc328fdc527fdc627fdc827fdca26fdcb26fccd25fcce25fcd025fcd225fbd324fbd524fbd724fad824fada24f9dc24f9dd25f8df25f8e125f7e225f7e425f6e626f6e826f5e926f5eb27f4ed27f3ee27f3f027f2f227f1f426f1f525f0f724f0f921\"\n\nlet inferno_hex =\n  
\"00000401000501010601010802010a02020c02020e03021004031204031405041706041907051b08051d09061f0a07220b07240c08260d08290e092b10092d110a30120a32140b34150b37160b39180c3c190c3e1b0c411c0c431e0c451f0c48210c4a230c4c240c4f260c51280b53290b552b0b572d0b592f0a5b310a5c320a5e340a5f3609613809623909633b09643d09653e0966400a67420a68440a68450a69470b6a490b6a4a0c6b4c0c6b4d0d6c4f0d6c510e6c520e6d540f6d550f6d57106e59106e5a116e5c126e5d126e5f136e61136e62146e64156e65156e67166e69166e6a176e6c186e6d186e6f196e71196e721a6e741a6e751b6e771c6d781c6d7a1d6d7c1d6d7d1e6d7f1e6c801f6c82206c84206b85216b87216b88226a8a226a8c23698d23698f24699025689225689326679526679727669827669a28659b29649d29649f2a63a02a63a22b62a32c61a52c60a62d60a82e5fa92e5eab2f5ead305dae305cb0315bb1325ab3325ab43359b63458b73557b93556ba3655bc3754bd3853bf3952c03a51c13a50c33b4fc43c4ec63d4dc73e4cc83f4bca404acb4149cc4248ce4347cf4446d04545d24644d34743d44842d54a41d74b3fd84c3ed94d3dda4e3cdb503bdd513ade5238df5337e05536e15635e25734e35933e45a31e55c30e65d2fe75e2ee8602de9612bea632aeb6429eb6628ec6726ed6925ee6a24ef6c23ef6e21f06f20f1711ff1731df2741cf3761bf37819f47918f57b17f57d15f67e14f68013f78212f78410f8850ff8870ef8890cf98b0bf98c0af98e09fa9008fa9207fa9407fb9606fb9706fb9906fb9b06fb9d07fc9f07fca108fca309fca50afca60cfca80dfcaa0ffcac11fcae12fcb014fcb216fcb418fbb61afbb81dfbba1ffbbc21fbbe23fac026fac228fac42afac62df9c72ff9c932f9cb35f8cd37f8cf3af7d13df7d340f6d543f6d746f5d949f5db4cf4dd4ff4df53f4e156f3e35af3e55df2e661f2e865f2ea69f1ec6df1ed71f1ef75f1f179f2f27df2f482f3f586f3f68af4f88ef5f992f6fa96f8fb9af9fc9dfafda1fcffa4\"\n\nlet magma_hex =\n  
\"00000401000501010601010802010902020b02020d03030f03031204041405041606051806051a07061c08071e0907200a08220b09240c09260d0a290e0b2b100b2d110c2f120d31130d34140e36150e38160f3b180f3d19103f1a10421c10441d11471e114920114b21114e22115024125325125527125829115a2a115c2c115f2d11612f116331116533106734106936106b38106c390f6e3b0f703d0f713f0f72400f74420f75440f764510774710784910784a10794c117a4e117b4f127b51127c52137c54137d56147d57157e59157e5a167e5c167f5d177f5f187f601880621980641a80651a80671b80681c816a1c816b1d816d1d816e1e81701f81721f817320817521817621817822817922827b23827c23827e24828025828125818326818426818627818827818928818b29818c29818e2a81902a81912b81932b80942c80962c80982d80992d809b2e7f9c2e7f9e2f7fa02f7fa1307ea3307ea5317ea6317da8327daa337dab337cad347cae347bb0357bb2357bb3367ab5367ab73779b83779ba3878bc3978bd3977bf3a77c03a76c23b75c43c75c53c74c73d73c83e73ca3e72cc3f71cd4071cf4070d0416fd2426fd3436ed5446dd6456cd8456cd9466bdb476adc4869de4968df4a68e04c67e24d66e34e65e44f64e55064e75263e85362e95462ea5661eb5760ec5860ed5a5fee5b5eef5d5ef05f5ef1605df2625df2645cf3655cf4675cf4695cf56b5cf66c5cf66e5cf7705cf7725cf8745cf8765cf9785df9795df97b5dfa7d5efa7f5efa815ffb835ffb8560fb8761fc8961fc8a62fc8c63fc8e64fc9065fd9266fd9467fd9668fd9869fd9a6afd9b6bfe9d6cfe9f6dfea16efea36ffea571fea772fea973feaa74feac76feae77feb078feb27afeb47bfeb67cfeb77efeb97ffebb81febd82febf84fec185fec287fec488fec68afec88cfeca8dfecc8ffecd90fecf92fed194fed395fed597fed799fed89afdda9cfddc9efddea0fde0a1fde2a3fde3a5fde5a7fde7a9fde9aafdebacfcecaefceeb0fcf0b2fcf2b4fcf4b6fcf6b8fcf7b9fcf9bbfcfbbdfcfdbf\"\n\nlet cividis_hex =\n  
\"00224e00234f00245100255300255400265600275800285900285b00295d002a5f002a61002b62002c64002c66002d68002e6a002e6c002f6d00306f0030700031700031710132710533710833700c34700f357012357014367016377018376f1a386f1c396f1e3a6f203a6f213b6e233c6e243c6e263d6e273e6e293f6e2a3f6d2b406d2d416d2e416d2f426d31436d32436d33446d34456c35456c36466c38476c39486c3a486c3b496c3c4a6c3d4a6c3e4b6c3f4c6c404c6c414d6c424e6c434e6c444f6c45506c46516c47516c48526c49536c4a536c4b546c4c556c4d556c4e566c4f576c50576c51586d52596d535a6d545a6d555b6d555c6d565c6d575d6d585e6d595e6e5a5f6e5b606e5c616e5d616e5e626e5e636f5f636f60646f61656f62656f636670646770656870656870666970676a71686a71696b716a6c716b6d726c6d726c6e726d6f726e6f736f70737071737172747272747273747374757474757575757676767777767777777878777979777a7a787b7a787c7b787d7c787e7c787e7d787f7e78807f78817f788280798381798482798582798683798784788885788985788a86788b87788c88788d88788e89788f8a78908b78918b78928c78928d78938e78948e77958f779690779791779892779992779a93769b94769c95769d95769e96769f9775a09875a19975a29975a39a74a49b74a59c74a69c74a79d73a89e73a99f73aaa073aba072aca172ada272aea371afa471b0a571b1a570b3a670b4a76fb5a86fb6a96fb7a96eb8aa6eb9ab6dbaac6dbbad6dbcae6cbdae6cbeaf6bbfb06bc0b16ac1b26ac2b369c3b369c4b468c5b568c6b667c7b767c8b866c9b965cbb965ccba64cdbb63cebc63cfbd62d0be62d1bf61d2c060d3c05fd4c15fd5c25ed6c35dd7c45cd9c55cdac65bdbc75adcc859ddc858dec958dfca57e0cb56e1cc55e2cd54e4ce53e5cf52e6d051e7d150e8d24fe9d34eead34cebd44bedd54aeed649efd748f0d846f1d945f2da44f3db42f5dc41f6dd3ff7de3ef8df3cf9e03afbe138fce236fde334fee434fee535fee636fee838\"\n\nlet coolwarm_hex =\n  
\"3b4cc03c4ec23d50c33e51c53f53c64055c84257c94358cb445acc455cce465ecf485fd14961d24a63d34b64d54c66d64e68d84f69d9506bda516ddb536edd5470de5572df5673e05875e15977e35a78e45b7ae55d7ce65e7de75f7fe86180e96282ea6384eb6485ec6687ed6788ee688aef6a8bef6b8df06c8ff16e90f26f92f37093f37295f47396f57597f67699f6779af7799cf87a9df87b9ff97da0f97ea1fa80a3fa81a4fb82a6fb84a7fc85a8fc86a9fc88abfd89acfd8badfd8caffe8db0fe8fb1fe90b2fe92b4fe93b5fe94b6ff96b7ff97b8ff98b9ff9abbff9bbcff9dbdff9ebeff9fbfffa1c0ffa2c1ffa3c2fea5c3fea6c4fea7c5fea9c6fdaac7fdabc8fdadc9fdaec9fcafcafcb1cbfcb2ccfbb3cdfbb5cdfab6cefab7cff9b9d0f9bad0f8bbd1f8bcd2f7bed2f6bfd3f6c0d4f5c1d4f4c3d5f4c4d5f3c5d6f2c6d6f1c7d7f0c9d7f0cad8efcbd8eeccd9edcdd9eccedaebcfdaead1dae9d2dbe8d3dbe7d4dbe6d5dbe5d6dce4d7dce3d8dce2d9dce1dadce0dbdcdedcdddddddcdcdedcdbdfdbd9e0dbd8e1dad6e2dad5e3d9d3e4d9d2e5d8d1e6d7cfe7d7cee8d6cce9d5cbead5c9ead4c8ebd3c6ecd3c5edd2c3edd1c2eed0c0efcfbfefcebdf0cdbbf1cdbaf1ccb8f2cbb7f2cab5f2c9b4f3c8b2f3c7b1f4c6aff4c5adf5c4acf5c2aaf5c1a9f5c0a7f6bfa6f6bea4f6bda2f7bca1f7ba9ff7b99ef7b89cf7b79bf7b599f7b497f7b396f7b194f7b093f7af91f7ad90f7ac8ef7aa8cf7a98bf7a889f7a688f6a586f6a385f6a283f5a081f59f80f59d7ef59c7df49a7bf4987af39778f39577f39475f29274f29072f18f71f18d6ff08b6ef08a6cef886bee8669ee8468ed8366ec8165ec7f63eb7d62ea7b60e97a5fe9785de8765ce7745be67259e57058e46e56e36c55e36b54e26952e16751e0654fdf634ede614ddd5f4bdc5d4ada5a49d95847d85646d75445d65244d55042d44e41d24b40d1493fd0473dcf453ccd423bcc403acb3e38ca3b37c83836c73635c53334c43032c32e31c12b30c0282fbe242ebd1f2dbb1b2cba162bb8122ab70d28b50927b40426\"\n"
  },
  {
    "path": "packages/hugin/lib/color.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\ntype t = { l : float; c : float; h : float; a : float }\n\n(* Constructors *)\n\nlet oklch ~l ~c ~h () = { l; c; h; a = 1. }\nlet oklcha ~l ~c ~h ~a () = { l; c; h; a }\n\n(* sRGB <-> linear RGB *)\n\nlet srgb_to_linear c =\n  if c <= 0.04045 then c /. 12.92 else Float.pow ((c +. 0.055) /. 1.055) 2.4\n\nlet linear_to_srgb c =\n  if c <= 0.0031308 then 12.92 *. c\n  else (1.055 *. Float.pow c (1. /. 2.4)) -. 0.055\n\n(* Linear RGB -> OKLab *)\n\nlet linear_rgb_to_oklab r g b =\n  let l = (0.4122214708 *. r) +. (0.5363325363 *. g) +. (0.0514459929 *. b) in\n  let m = (0.2119034982 *. r) +. (0.6806995451 *. g) +. (0.1073969566 *. b) in\n  let s = (0.0883024619 *. r) +. (0.2164557896 *. g) +. (0.6898418685 *. b) in\n  let l = Float.cbrt l and m = Float.cbrt m and s = Float.cbrt s in\n  let lab_l =\n    (0.2104542553 *. l) +. (0.7936177850 *. m) -. (0.0040720468 *. s)\n  in\n  let lab_a =\n    (1.9779984951 *. l) -. (2.4285922050 *. m) +. (0.4505937099 *. s)\n  in\n  let lab_b =\n    (0.0259040371 *. l) +. (0.7827717662 *. m) -. (0.8086757660 *. s)\n  in\n  (lab_l, lab_a, lab_b)\n\n(* OKLab -> linear RGB *)\n\nlet oklab_to_linear_rgb lab_l lab_a lab_b =\n  let l = lab_l +. (0.3963377774 *. lab_a) +. (0.2158037573 *. lab_b) in\n  let m = lab_l -. (0.1055613458 *. lab_a) -. (0.0638541728 *. lab_b) in\n  let s = lab_l -. (0.0894841775 *. lab_a) -. (1.2914855480 *. lab_b) in\n  let l = l *. l *. l and m = m *. m *. m and s = s *. s *. s in\n  let r = (4.0767416621 *. l) -. (3.3077115913 *. m) +. (0.2309699292 *. s) in\n  let g = (-1.2684380046 *. l) +. (2.6097574011 *. m) -. (0.3413193965 *. s) in\n  let b = (-0.0041960863 *. l) -. (0.7034186147 *. m) +. (1.7076147010 *. 
s) in\n  (r, g, b)\n\n(* OKLab <-> OKLCH *)\n\nlet oklab_to_oklch lab_l lab_a lab_b =\n  let c = Float.sqrt ((lab_a *. lab_a) +. (lab_b *. lab_b)) in\n  let h = Float.atan2 lab_b lab_a *. 180. /. Float.pi in\n  let h = if h < 0. then h +. 360. else h in\n  (lab_l, c, h)\n\nlet oklch_to_oklab l c h =\n  let h_rad = h *. Float.pi /. 180. in\n  (l, c *. Float.cos h_rad, c *. Float.sin h_rad)\n\n(* sRGB -> OKLCH *)\n\nlet of_srgb r g b =\n  let lr = srgb_to_linear r\n  and lg = srgb_to_linear g\n  and lb = srgb_to_linear b in\n  let lab_l, lab_a, lab_b = linear_rgb_to_oklab lr lg lb in\n  oklab_to_oklch lab_l lab_a lab_b\n\n(* OKLCH -> sRGB *)\n\nlet to_srgb l c h =\n  let lab_l, lab_a, lab_b = oklch_to_oklab l c h in\n  let lr, lg, lb = oklab_to_linear_rgb lab_l lab_a lab_b in\n  let clamp v = Float.max 0. (Float.min 1. v) in\n  ( linear_to_srgb (clamp lr),\n    linear_to_srgb (clamp lg),\n    linear_to_srgb (clamp lb) )\n\nlet rgb ~r ~g ~b () =\n  let l, c, h = of_srgb r g b in\n  { l; c; h; a = 1. }\n\nlet rgba ~r ~g ~b ~a () =\n  let l, c, h = of_srgb r g b in\n  { l; c; h; a }\n\nlet hex_digit c =\n  match c with\n  | '0' .. '9' -> Char.code c - Char.code '0'\n  | 'a' .. 'f' -> Char.code c - Char.code 'a' + 10\n  | 'A' .. 'F' -> Char.code c - Char.code 'A' + 10\n  | _ -> invalid_arg (Printf.sprintf \"Color.hex: invalid hex digit %C\" c)\n\nlet hex_byte s i =\n  let hi = hex_digit (String.get s i) in\n  let lo = hex_digit (String.get s (i + 1)) in\n  float ((hi * 16) + lo) /. 
255.\n\nlet hex s =\n  let n = String.length s in\n  let off = if n > 0 && String.get s 0 = '#' then 1 else 0 in\n  let len = n - off in\n  match len with\n  | 6 ->\n      let r = hex_byte s off\n      and g = hex_byte s (off + 2)\n      and b = hex_byte s (off + 4) in\n      rgb ~r ~g ~b ()\n  | 8 ->\n      let r = hex_byte s off and g = hex_byte s (off + 2) in\n      let b = hex_byte s (off + 4) and a = hex_byte s (off + 6) in\n      rgba ~r ~g ~b ~a ()\n  | _ ->\n      invalid_arg\n        (Printf.sprintf \"Color.hex: expected 6 or 8 hex digits, got %d\" len)\n\n(* Accessors *)\n\nlet lightness t = t.l\nlet chroma t = t.c\nlet hue t = t.h\nlet alpha t = t.a\n\n(* Converting *)\n\nlet to_rgba t =\n  let r, g, b = to_srgb t.l t.c t.h in\n  (r, g, b, t.a)\n\n(* Operations *)\n\nlet with_alpha a t = { t with a }\nlet lighten amount t = { t with l = Float.min 1. (t.l +. amount) }\nlet darken amount t = { t with l = Float.max 0. (t.l -. amount) }\n\nlet interpolate_hue ratio h1 h2 =\n  let diff = h2 -. h1 in\n  let diff =\n    if diff > 180. then diff -. 360.\n    else if diff < -180. then diff +. 360.\n    else diff\n  in\n  let h = h1 +. (ratio *. diff) in\n  if h < 0. then h +. 360. else if h >= 360. then h -. 360. else h\n\nlet mix ratio a b =\n  {\n    l = a.l +. (ratio *. (b.l -. a.l));\n    c = a.c +. (ratio *. (b.c -. a.c));\n    h = interpolate_hue ratio a.h b.h;\n    a = a.a +. (ratio *. (b.a -. a.a));\n  }\n\n(* Named colors — Okabe-Ito *)\n\nlet orange = hex \"#E69F00\"\nlet sky_blue = hex \"#56B4E9\"\nlet green = hex \"#009E73\"\nlet yellow = hex \"#F0E442\"\nlet blue = hex \"#0072B2\"\nlet vermillion = hex \"#D55E00\"\nlet purple = hex \"#CC79A7\"\nlet black = { l = 0.; c = 0.; h = 0.; a = 1. }\nlet white = { l = 1.; c = 0.; h = 0.; a = 1. }\nlet gray = oklch ~l:0.5 ~c:0. ~h:0. ()\n\n(* Formatting *)\n\nlet pp fmt t = Format.fprintf fmt \"oklch(%g %g %g / %g)\" t.l t.c t.h t.a\n"
  },
  {
    "path": "packages/hugin/lib/color.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Perceptually uniform colors.\n\n    Colors are represented internally in the\n    {{:https://bottosson.github.io/posts/oklab/}OKLCH} color space. All\n    operations ({!lighten}, {!darken}, {!mix}) produce perceptually uniform\n    results: equal numerical steps yield equal perceived differences.\n\n    Constructors accept common input formats (sRGB, hex) and convert to OKLCH on\n    creation. The reverse conversion {!to_rgba} is called only at render time.\n*)\n\n(** {1:types Types} *)\n\ntype t\n(** The type for colors in OKLCH space. Components are lightness \\[0, 1\\],\n    chroma \\[0, ~0.4\\], hue \\[0, 360), and alpha \\[0, 1\\]. *)\n\n(** {1:constructors Constructors} *)\n\nval oklch : l:float -> c:float -> h:float -> unit -> t\n(** [oklch ~l ~c ~h ()] is the fully opaque OKLCH color with lightness [l],\n    chroma [c], and hue [h] (in degrees). *)\n\nval oklcha : l:float -> c:float -> h:float -> a:float -> unit -> t\n(** [oklcha ~l ~c ~h ~a ()] is like {!oklch} with alpha [a]. *)\n\nval rgb : r:float -> g:float -> b:float -> unit -> t\n(** [rgb ~r ~g ~b ()] is the fully opaque color with sRGB components [r], [g],\n    [b] in \\[0, 1\\], converted to OKLCH. *)\n\nval rgba : r:float -> g:float -> b:float -> a:float -> unit -> t\n(** [rgba ~r ~g ~b ~a ()] is like {!rgb} with alpha [a]. *)\n\nval hex : string -> t\n(** [hex s] is the color parsed from the hex string [s]. Accepts [\"#RRGGBB\"] and\n    [\"#RRGGBBAA\"] formats.\n\n    Raises [Invalid_argument] if [s] is not a valid hex color. *)\n\n(** {1:accessors Accessors} *)\n\nval lightness : t -> float\n(** [lightness c] is the OKLCH lightness of [c] in \\[0, 1\\]. 
*)\n\nval chroma : t -> float\n(** [chroma c] is the OKLCH chroma of [c] in \\[0, ~0.4\\]. *)\n\nval hue : t -> float\n(** [hue c] is the OKLCH hue of [c] in degrees \\[0, 360). *)\n\nval alpha : t -> float\n(** [alpha c] is the alpha of [c] in \\[0, 1\\]. *)\n\n(** {1:converting Converting} *)\n\nval to_rgba : t -> float * float * float * float\n(** [to_rgba c] is [(r, g, b, a)] with sRGB components in \\[0, 1\\]. Values are\n    clamped to the sRGB gamut. *)\n\n(** {1:operations Operations} *)\n\nval with_alpha : float -> t -> t\n(** [with_alpha a c] is [c] with alpha set to [a]. *)\n\nval lighten : float -> t -> t\n(** [lighten amount c] is [c] with lightness increased by [amount], clamped to\n    \\[0, 1\\]. *)\n\nval darken : float -> t -> t\n(** [darken amount c] is [c] with lightness decreased by [amount], clamped to\n    \\[0, 1\\]. *)\n\nval mix : float -> t -> t -> t\n(** [mix ratio a b] is the perceptual blend of [a] and [b]. [ratio] is the\n    interpolation factor: [0.0] gives [a], [1.0] gives [b]. Hue is interpolated\n    along the shortest arc. *)\n\n(** {1:named Named colors}\n\n    The default named colors follow the\n    {{:https://jfly.uni-koeln.de/color/}Okabe-Ito} palette, designed to be\n    distinguishable under all forms of color-vision deficiency. *)\n\nval orange : t\nval sky_blue : t\nval green : t\nval yellow : t\nval blue : t\nval vermillion : t\nval purple : t\nval black : t\nval white : t\nval gray : t\n\n(** {1:fmt Formatting} *)\n\nval pp : Format.formatter -> t -> unit\n(** [pp] formats the color as [oklch(L C H / A)]. *)\n"
  },
  {
    "path": "packages/hugin/lib/dune",
    "content": "(library\n (name hugin)\n (public_name hugin)\n (private_modules\n  axis\n  spec\n  scale\n  ticks\n  scene\n  prepared\n  resolve\n  image_util\n  cairo_sdl\n  cairo_backend\n  svg_backend\n  cmap_data)\n (libraries nx nx.buffer ucairo usdl))\n"
  },
  {
    "path": "packages/hugin/lib/hugin.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nmodule Color = Color\nmodule Cmap = Cmap\nmodule Theme = Theme\n\ntype t = Spec.t\ntype marker = Spec.marker = Circle | Square | Triangle | Plus | Star\n\ntype legend_loc = Spec.legend_loc =\n  | Upper_right\n  | Upper_left\n  | Lower_right\n  | Lower_left\n  | Center\n  | Right\n  | Upper_center\n  | Lower_center\n\ntype line_style = Spec.line_style\ntype scale = Spec.scale\ntype stretch = Spec.stretch\n\n(* Mark constructors *)\n\nlet line = Spec.line\nlet point = Spec.point\nlet bar = Spec.bar\nlet hist = Spec.hist\nlet image = Spec.image\nlet text = Spec.text\nlet hline = Spec.hline\nlet vline = Spec.vline\nlet abline = Spec.abline\nlet fill_between = Spec.fill_between\nlet hspan = Spec.hspan\nlet vspan = Spec.vspan\nlet errorbar = Spec.errorbar\nlet heatmap = Spec.heatmap\nlet imshow = Spec.imshow\nlet contour = Spec.contour\n\n(* Composition *)\n\nlet layers = Spec.layers\n\n(* Decorations *)\n\nlet title = Spec.title\nlet xlabel = Spec.xlabel\nlet ylabel = Spec.ylabel\nlet xlim = Spec.xlim\nlet ylim = Spec.ylim\nlet xscale = Spec.xscale\nlet yscale = Spec.yscale\nlet xinvert = Spec.xinvert\nlet yinvert = Spec.yinvert\nlet grid_lines = Spec.grid_lines\nlet legend = Spec.legend\nlet xticks = Spec.xticks\nlet yticks = Spec.yticks\nlet with_theme = Spec.with_theme\nlet xtick_format = Spec.xtick_format\nlet ytick_format = Spec.ytick_format\nlet frame = Spec.frame\nlet no_axes = Spec.no_axes\n\n(* Layout *)\n\nlet grid = Spec.grid_layout\nlet hstack ?gap specs = Spec.grid_layout ?gap [ specs ]\nlet vstack ?gap specs = Spec.grid_layout ?gap (List.map (fun s -> [ s ]) specs)\n\n(* Rendering *)\n\nlet default_width = 1600.\nlet default_height = 1200.\n\n(* Use Cairo text measurement for all backends 
for consistent layout *)\nlet resolve_with_cairo ~theme ~width ~height spec =\n  let surface = Ucairo.Image.create ~w:1 ~h:1 in\n  let cr = Ucairo.create surface in\n  let tm = Cairo_backend.text_measurer cr in\n  let scene = Resolve.resolve ~text_measurer:tm ~theme ~width ~height spec in\n  Ucairo.Surface.finish surface;\n  scene\n\nlet show ?(theme = Theme.default) ?(width = default_width)\n    ?(height = default_height) spec =\n  let prepared = Prepared.compile ~theme spec in\n  Cairo_backend.show_interactive ~theme ~width ~height prepared\n\nlet render_png ?(theme = Theme.default) ?(width = default_width)\n    ?(height = default_height) filename spec =\n  let scene = resolve_with_cairo ~theme ~width ~height spec in\n  Cairo_backend.render_to_png filename ~width ~height scene\n\nlet render_pdf ?(theme = Theme.default) ?(width = default_width)\n    ?(height = default_height) filename spec =\n  let scene = resolve_with_cairo ~theme ~width ~height spec in\n  Cairo_backend.render_to_pdf filename ~width ~height scene\n\nlet render_svg ?(theme = Theme.default) ?(width = default_width)\n    ?(height = default_height) filename spec =\n  let scene = resolve_with_cairo ~theme ~width ~height spec in\n  Svg_backend.render_to_file filename scene\n\nlet render_svg_to_string ?(theme = Theme.default) ?(width = default_width)\n    ?(height = default_height) spec =\n  let scene = resolve_with_cairo ~theme ~width ~height spec in\n  Svg_backend.render scene\n\nlet render_to_buffer ?(theme = Theme.default) ?(width = default_width)\n    ?(height = default_height) spec =\n  let scene = resolve_with_cairo ~theme ~width ~height spec in\n  Cairo_backend.render_to_buffer ~width ~height scene\n\nlet infer_dimensions spec =\n  let rec grid_shape = function\n    | Spec.Grid { rows; _ } ->\n        let nrows = List.length rows in\n        let ncols =\n          List.fold_left (fun acc row -> max acc (List.length row)) 0 rows\n        in\n        Some (nrows, ncols)\n    | Spec.Decorated { 
inner; _ } -> grid_shape inner\n    | _ -> None\n  in\n  match grid_shape spec with\n  | Some (nrows, ncols) when ncols > 0 ->\n      let cell_w = default_width /. float ncols in\n      (default_width, cell_w *. float nrows)\n  | _ -> (default_width, default_height)\n\nlet pp fmt spec =\n  let width, height = infer_dimensions spec in\n  let buf = render_to_buffer ~width ~height spec in\n  let b64 = Image_util.base64_encode buf in\n  Format.fprintf fmt \"![figure](data:image/png;base64,%s)\" b64\n"
  },
  {
    "path": "packages/hugin/lib/hugin.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Declarative plotting and visualization.\n\n    Hugin turns immutable plot specifications into rendered output. A plot is a\n    value of type {!t} built from mark constructors ({!line}, {!point}, {!bar},\n    {!hist}), composed with {!layers}, decorated with {!title}, {!xlabel}, etc.\n    via the [|>] pipeline, and rendered with {!show}, {!render_png}, or\n    {!render_svg}.\n\n    {[\n    let x = Nx.linspace Float32 0. 6.28 100 in\n    let y = Nx.map (fun v -> Float.sin v) x in\n    Hugin.line ~x ~y () |> Hugin.title \"Sine wave\"\n    |> Hugin.render_png \"sine.png\"\n    ]} *)\n\n(** {1:sub Sub-modules} *)\n\nmodule Color = Color\n(** Perceptually uniform OKLCH colors. *)\n\nmodule Cmap = Cmap\n(** Colormaps. *)\n\nmodule Theme = Theme\n(** Visual themes. *)\n\n(** {1:types Types} *)\n\ntype t\n(** The type for plot specifications. Immutable and composable. *)\n\ntype marker =\n  | Circle\n  | Square\n  | Triangle\n  | Plus\n  | Star  (** The type for point marker shapes. *)\n\ntype legend_loc =\n  | Upper_right\n  | Upper_left\n  | Lower_right\n  | Lower_left\n  | Center\n  | Right\n  | Upper_center\n  | Lower_center  (** The type for legend placement. *)\n\ntype line_style = [ `Solid | `Dashed | `Dotted | `Dash_dot ]\n(** The type for line dash patterns. *)\n\ntype scale = [ `Linear | `Log | `Sqrt | `Asinh | `Symlog of float ]\n(** The type for axis scales. [`Sqrt] and [`Asinh] handle zero gracefully.\n    [`Symlog linthresh] is linear within \\[[-linthresh];[linthresh]\\] and\n    logarithmic outside. *)\n\ntype stretch = [ `Linear | `Log | `Sqrt | `Asinh | `Power of float ]\n(** The type for image stretch functions. [`Power a] raises normalized values to\n    the power [a]. 
*)\n\n(** {1:marks Mark constructors}\n\n    Each constructor builds a single-layer specification from data arrays and\n    optional visual properties. A mark is already a valid {!t} that can be\n    rendered directly. *)\n\nval line :\n  x:Nx.float32_t ->\n  y:Nx.float32_t ->\n  ?color:Color.t ->\n  ?line_width:float ->\n  ?line_style:line_style ->\n  ?step:[ `Pre | `Post | `Mid ] ->\n  ?marker:marker ->\n  ?label:string ->\n  ?alpha:float ->\n  unit ->\n  t\n(** [line ~x ~y ()] is a line plot connecting the points [(x.(i), y.(i))].\n\n    [color] defaults to the next color in the theme palette. [line_width]\n    defaults to the theme line width. [line_style] defaults to [`Solid]. [step]\n    draws a staircase line: [`Post] holds each value until the next x-point,\n    [`Pre] steps to the new value at the current x-point, [`Mid] steps at the\n    midpoint between consecutive x-points. *)\n\nval point :\n  x:Nx.float32_t ->\n  y:Nx.float32_t ->\n  ?color:Color.t ->\n  ?color_by:Nx.float32_t ->\n  ?size:float ->\n  ?size_by:Nx.float32_t ->\n  ?marker:marker ->\n  ?label:string ->\n  ?alpha:float ->\n  unit ->\n  t\n(** [point ~x ~y ()] is a scatter plot of discrete markers at [(x.(i), y.(i))].\n\n    [color_by] maps per-point values through the theme's sequential colormap.\n    [size_by] scales marker area per point. [marker] defaults to {!Circle}. *)\n\nval bar :\n  x:Nx.float32_t ->\n  height:Nx.float32_t ->\n  ?width:float ->\n  ?bottom:float ->\n  ?color:Color.t ->\n  ?label:string ->\n  ?alpha:float ->\n  unit ->\n  t\n(** [bar ~x ~height ()] is a bar chart with bars centered on [x] values,\n    extending from [bottom] (default [0.0]) to [bottom + height]. [width]\n    defaults to [0.8]. *)\n\nval hist :\n  x:Nx.float32_t ->\n  ?bins:[ `Num of int | `Edges of float array ] ->\n  ?density:bool ->\n  ?color:Color.t ->\n  ?label:string ->\n  unit ->\n  t\n(** [hist ~x ()] is a histogram of the values in [x].\n\n    [bins] defaults to [`Num 10]. 
When [density] is [true], the histogram is\n    normalized so the total area equals [1.0]. *)\n\nval image : ?extent:float * float * float * float -> Nx.uint8_t -> t\n(** [image ?extent data] displays [data] as an image. [data] has shape\n    [[|h; w; 3|]] (RGB) or [[|h; w; 4|]] (RGBA).\n\n    When [extent] is [(xmin, xmax, ymin, ymax)], the image is placed in data\n    coordinates. Without [extent], the image is centered in the plot area\n    preserving aspect ratio. *)\n\nval text :\n  x:float ->\n  y:float ->\n  string ->\n  ?color:Color.t ->\n  ?font_size:float ->\n  unit ->\n  t\n(** [text ~x ~y s ()] places the string [s] at data coordinates [(x, y)]. *)\n\nval hline :\n  y:float ->\n  ?color:Color.t ->\n  ?line_width:float ->\n  ?line_style:line_style ->\n  ?label:string ->\n  ?alpha:float ->\n  unit ->\n  t\n(** [hline ~y ()] draws a horizontal reference line at [y] spanning the full\n    plot width. *)\n\nval vline :\n  x:float ->\n  ?color:Color.t ->\n  ?line_width:float ->\n  ?line_style:line_style ->\n  ?label:string ->\n  ?alpha:float ->\n  unit ->\n  t\n(** [vline ~x ()] draws a vertical reference line at [x] spanning the full plot\n    height. *)\n\nval abline :\n  slope:float ->\n  intercept:float ->\n  ?color:Color.t ->\n  ?line_width:float ->\n  ?line_style:line_style ->\n  ?label:string ->\n  ?alpha:float ->\n  unit ->\n  t\n(** [abline ~slope ~intercept ()] draws a diagonal line\n    [y = slope * x + intercept] spanning the full plot area. Useful for\n    regression lines and [y = x] references. *)\n\nval fill_between :\n  x:Nx.float32_t ->\n  y1:Nx.float32_t ->\n  y2:Nx.float32_t ->\n  ?where:Nx.float32_t ->\n  ?color:Color.t ->\n  ?alpha:float ->\n  ?label:string ->\n  unit ->\n  t\n(** [fill_between ~x ~y1 ~y2 ()] fills the area between curves [y1] and [y2]\n    over the shared [x] axis. 
[alpha] defaults to [0.3].\n\n    [where] is an optional mask array of the same length as [x]: the fill is\n    only drawn where [where.(i) > 0.], producing separate filled regions. *)\n\nval hspan :\n  y0:float ->\n  y1:float ->\n  ?color:Color.t ->\n  ?alpha:float ->\n  ?label:string ->\n  unit ->\n  t\n(** [hspan ~y0 ~y1 ()] is a horizontal shaded band between [y0] and [y1],\n    spanning the full plot width. [alpha] defaults to [0.2]. *)\n\nval vspan :\n  x0:float ->\n  x1:float ->\n  ?color:Color.t ->\n  ?alpha:float ->\n  ?label:string ->\n  unit ->\n  t\n(** [vspan ~x0 ~x1 ()] is a vertical shaded band between [x0] and [x1], spanning\n    the full plot height. [alpha] defaults to [0.2]. *)\n\nval errorbar :\n  x:Nx.float32_t ->\n  y:Nx.float32_t ->\n  yerr:\n    [ `Symmetric of Nx.float32_t | `Asymmetric of Nx.float32_t * Nx.float32_t ] ->\n  ?xerr:\n    [ `Symmetric of Nx.float32_t | `Asymmetric of Nx.float32_t * Nx.float32_t ] ->\n  ?color:Color.t ->\n  ?line_width:float ->\n  ?cap_size:float ->\n  ?label:string ->\n  ?alpha:float ->\n  unit ->\n  t\n(** [errorbar ~x ~y ~yerr ()] draws error bars at [(x.(i), y.(i))].\n\n    [yerr] specifies vertical error: [`Symmetric e] draws [y +/- e],\n    [`Asymmetric (lo, hi)] draws [[y - lo, y + hi]]. [xerr] adds horizontal\n    error bars. [cap_size] defaults to half the theme marker size. *)\n\nval heatmap :\n  data:Nx.float32_t ->\n  ?annotate:bool ->\n  ?cmap:Cmap.t ->\n  ?vmin:float ->\n  ?vmax:float ->\n  ?fmt:(float -> string) ->\n  unit ->\n  t\n(** [heatmap ~data ()] displays a 2D array as a grid of colored cells. [data]\n    has shape [[|rows; cols|]]. Row 0 appears at the top.\n\n    [cmap] defaults to the theme's sequential colormap. [vmin] and [vmax]\n    override the automatic value range. When [annotate] is [true], each cell\n    shows its value formatted by [fmt] (default [Printf.sprintf \"%.2g\"]). 
*)\n\nval imshow :\n  data:Nx.float32_t ->\n  ?stretch:stretch ->\n  ?cmap:Cmap.t ->\n  ?vmin:float ->\n  ?vmax:float ->\n  unit ->\n  t\n(** [imshow ~data ()] displays a 2D float array as a colormapped image. [data]\n    has shape [[|rows; cols|]].\n\n    [stretch] controls the transfer function applied before colormap lookup:\n    [`Linear] (default), [`Log], [`Sqrt], [`Asinh], or [`Power a]. [cmap]\n    defaults to the theme's sequential colormap. [vmin] and [vmax] override the\n    automatic value range. *)\n\nval contour :\n  data:Nx.float32_t ->\n  x0:float ->\n  x1:float ->\n  y0:float ->\n  y1:float ->\n  ?levels:[ `Num of int | `Values of float array ] ->\n  ?filled:bool ->\n  ?cmap:Cmap.t ->\n  ?color:Color.t ->\n  ?line_width:float ->\n  ?label:string ->\n  ?alpha:float ->\n  unit ->\n  t\n(** [contour ~data ~x0 ~x1 ~y0 ~y1 ()] draws iso-level contour lines through the\n    2D grid [data] of shape [[|rows; cols|]], mapped to the data-space rectangle\n    \\[[x0];[x1]\\] x \\[[y0];[y1]\\].\n\n    [levels] defaults to [`Num 8]. When [filled] is [true], regions between\n    adjacent levels are filled. [color] sets a single stroke color for unfilled\n    contours; [cmap] assigns per-level colors from the theme's sequential\n    colormap. *)\n\n(** {1:composition Composition} *)\n\nval layers : t list -> t\n(** [layers marks] overlays [marks] on shared axes. A single mark is already a\n    valid {!t}; [layers] is only needed to combine multiple marks into one plot.\n*)\n\n(** {1:decorations Decorations}\n\n    Decoration functions add metadata to a specification. They are designed for\n    the [|>] pipeline:\n    {[\n    line ~x ~y () |> title \"My Plot\" |> xlabel \"Time\"\n    ]} *)\n\nval title : string -> t -> t\n(** [title s t] is [t] with plot title [s]. *)\n\nval xlabel : string -> t -> t\n(** [xlabel s t] is [t] with x-axis label [s]. *)\n\nval ylabel : string -> t -> t\n(** [ylabel s t] is [t] with y-axis label [s]. 
*)\n\nval xlim : float -> float -> t -> t\n(** [xlim lo hi t] is [t] with x-axis range fixed to \\[[lo];[hi]\\]. *)\n\nval ylim : float -> float -> t -> t\n(** [ylim lo hi t] is [t] with y-axis range fixed to \\[[lo];[hi]\\]. *)\n\nval xscale : scale -> t -> t\n(** [xscale s t] is [t] with x-axis scale [s]. Defaults to [`Linear].\n\n    [`Sqrt] and [`Asinh] handle zero gracefully. [`Symlog linthresh] is linear\n    within \\[[-linthresh];[linthresh]\\] and logarithmic outside. *)\n\nval yscale : scale -> t -> t\n(** [yscale s t] is [t] with y-axis scale [s]. Defaults to [`Linear]. *)\n\nval xinvert : t -> t\n(** [xinvert t] is [t] with the x-axis inverted (values increase right-to-left).\n    Useful for right ascension in sky charts. *)\n\nval yinvert : t -> t\n(** [yinvert t] is [t] with the y-axis inverted (values increase top-to-bottom).\n    Useful for magnitude axes in HR diagrams. *)\n\nval grid_lines : bool -> t -> t\n(** [grid_lines visible t] is [t] with grid lines shown or hidden. *)\n\nval legend : ?loc:legend_loc -> ?ncol:int -> t -> t\n(** [legend ?loc ?ncol t] is [t] with the legend shown at [loc]. [loc] defaults\n    to {!Upper_right}. [ncol] defaults to [1]; set higher for multi-column\n    layouts with many series. The legend is automatically visible when any mark\n    has a [~label]. *)\n\nval xticks : (float * string) list -> t -> t\n(** [xticks ticks t] is [t] with explicit x-axis tick positions and labels.\n    Overrides auto-generated ticks. *)\n\nval yticks : (float * string) list -> t -> t\n(** [yticks ticks t] is [t] with explicit y-axis tick positions and labels.\n    Overrides auto-generated ticks. *)\n\nval with_theme : Theme.t -> t -> t\n(** [with_theme th t] is [t] rendered with theme [th] instead of the default. *)\n\nval xtick_format : (float -> string) -> t -> t\n(** [xtick_format fmt t] is [t] with x-axis tick labels formatted by [fmt].\n    Overrides auto-generated labels while preserving tick positions. 
*)\n\nval ytick_format : (float -> string) -> t -> t\n(** [ytick_format fmt t] is [t] with y-axis tick labels formatted by [fmt].\n    Overrides auto-generated labels while preserving tick positions. *)\n\nval frame : bool -> t -> t\n(** [frame visible t] is [t] with the axis border rectangle shown or hidden.\n    [visible] defaults to [true]. *)\n\nval no_axes : t -> t\n(** [no_axes t] hides the axis frame, ticks, and tick labels. Title is\n    preserved. The full panel area is used for marks. Useful for image grids:\n\n    {[\n    List.init 10 (fun i ->\n        Hugin.imshow ~data:digits.(i) ~cmap:Cmap.gray ()\n        |> Hugin.title (string_of_int labels.(i))\n        |> Hugin.no_axes)\n    |> Hugin.hstack\n    ]} *)\n\n(** {1:layout Layout} *)\n\nval grid : ?gap:float -> t list list -> t\n(** [grid rows] arranges specifications in a grid. Each inner list is a row of\n    panels. [gap] defaults to [0.05] (fraction of total size). *)\n\nval hstack : ?gap:float -> t list -> t\n(** [hstack specs] arranges [specs] in a single row. *)\n\nval vstack : ?gap:float -> t list -> t\n(** [vstack specs] arranges [specs] in a single column. *)\n\n(** {1:rendering Rendering} *)\n\nval show : ?theme:Theme.t -> ?width:float -> ?height:float -> t -> unit\n(** [show t] displays [t] in an interactive SDL window.\n\n    [width] defaults to [1600.0]. [height] defaults to [1200.0]. The window\n    supports resize (re-resolves at new dimensions) and closes on Escape or Q.\n*)\n\nval render_png :\n  ?theme:Theme.t -> ?width:float -> ?height:float -> string -> t -> unit\n(** [render_png filename t] writes [t] as a PNG image to [filename].\n\n    [width] defaults to [1600.0]. [height] defaults to [1200.0]. *)\n\nval render_pdf :\n  ?theme:Theme.t -> ?width:float -> ?height:float -> string -> t -> unit\n(** [render_pdf filename t] writes [t] as a PDF document to [filename].\n\n    [width] defaults to [1600.0]. [height] defaults to [1200.0]. 
*)\n\nval render_svg :\n  ?theme:Theme.t -> ?width:float -> ?height:float -> string -> t -> unit\n(** [render_svg filename t] writes [t] as an SVG document to [filename].\n\n    [width] defaults to [1600.0]. [height] defaults to [1200.0]. *)\n\nval render_svg_to_string :\n  ?theme:Theme.t -> ?width:float -> ?height:float -> t -> string\n(** [render_svg_to_string t] is [t] rendered as an SVG document string.\n\n    [width] defaults to [1600.0]. [height] defaults to [1200.0]. *)\n\nval render_to_buffer :\n  ?theme:Theme.t -> ?width:float -> ?height:float -> t -> string\n(** [render_to_buffer t] is [t] rendered as a PNG image, returned as a string of\n    bytes. *)\n\n(** {1:fmt Formatting} *)\n\nval pp : Format.formatter -> t -> unit\n(** [pp] renders the specification as a PNG data URI. Intended for use with\n    [#install_printer] in the toplevel and Quill.\n\n    Output format: [![figure](data:image/png;base64,...)] *)\n"
  },
  {
    "path": "packages/hugin/lib/image_util.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Shared image encoding utilities *)\n\n(* Base64 *)\n\nlet base64_alphabet =\n  \"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/\"\n\nlet base64_encode input =\n  let len = String.length input in\n  let out_len = (len + 2) / 3 * 4 in\n  let out = Bytes.create out_len in\n  let rec loop i j =\n    if i < len then begin\n      let b0 = Char.code (String.unsafe_get input i) in\n      let b1 =\n        if i + 1 < len then Char.code (String.unsafe_get input (i + 1)) else 0\n      in\n      let b2 =\n        if i + 2 < len then Char.code (String.unsafe_get input (i + 2)) else 0\n      in\n      Bytes.unsafe_set out j (String.unsafe_get base64_alphabet (b0 lsr 2));\n      Bytes.unsafe_set out (j + 1)\n        (String.unsafe_get base64_alphabet (((b0 land 3) lsl 4) lor (b1 lsr 4)));\n      Bytes.unsafe_set out (j + 2)\n        (if i + 1 < len then\n           String.unsafe_get base64_alphabet\n             (((b1 land 0xf) lsl 2) lor (b2 lsr 6))\n         else '=');\n      Bytes.unsafe_set out (j + 3)\n        (if i + 2 < len then String.unsafe_get base64_alphabet (b2 land 0x3f)\n         else '=');\n      loop (i + 3) (j + 4)\n    end\n  in\n  loop 0 0;\n  Bytes.unsafe_to_string out\n\n(* Nx uint8 image -> Cairo ARGB32 surface *)\n\nlet nx_to_cairo_surface (data : Nx.uint8_t) =\n  let shape = Nx.shape data in\n  let img_h = shape.(0) and img_w = shape.(1) in\n  let channels = if Array.length shape > 2 then shape.(2) else 1 in\n  let stride = Ucairo.Image.stride_for_width img_w in\n  let data_arr =\n    Bigarray.Array1.create Bigarray.int8_unsigned Bigarray.c_layout\n      (stride * img_h)\n  in\n  let buf = Nx.data data in\n  let base = Nx.offset data in\n  let strides = Nx.strides data 
in\n  (* uint8: byte strides = element strides *)\n  let s0 = strides.(0) and s1 = strides.(1) in\n  let s2 = if Array.length strides > 2 then strides.(2) else 0 in\n  for row = 0 to img_h - 1 do\n    let row_base = base + (row * s0) in\n    for col = 0 to img_w - 1 do\n      let off = (row * stride) + (col * 4) in\n      let idx = row_base + (col * s1) in\n      let r = Nx_buffer.unsafe_get buf idx in\n      let g = Nx_buffer.unsafe_get buf (idx + s2) in\n      let b = Nx_buffer.unsafe_get buf (idx + (2 * s2)) in\n      let a =\n        if channels >= 4 then Nx_buffer.unsafe_get buf (idx + (3 * s2)) else 255\n      in\n      (* Cairo ARGB32: premultiplied BGRA in memory on little-endian *)\n      let premul c a = c * a / 255 in\n      Bigarray.Array1.unsafe_set data_arr off (premul b a);\n      Bigarray.Array1.unsafe_set data_arr (off + 1) (premul g a);\n      Bigarray.Array1.unsafe_set data_arr (off + 2) (premul r a);\n      Bigarray.Array1.unsafe_set data_arr (off + 3) a\n    done\n  done;\n  Ucairo.Image.create_for_data8 data_arr ~w:img_w ~h:img_h ~stride\n\nlet nx_to_png_base64 data =\n  let surface = nx_to_cairo_surface data in\n  let png_buf = Buffer.create 4096 in\n  Ucairo.Png.write_to_stream surface (Buffer.add_string png_buf);\n  Ucairo.Surface.finish surface;\n  base64_encode (Buffer.contents png_buf)\n"
  },
  {
    "path": "packages/hugin/lib/image_util.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Image encoding utilities.\n\n    {b Internal module.} Shared base64 encoding and Nx-to-Cairo-surface\n    conversion used by both rendering backends. *)\n\n(** {1:base64 Base64} *)\n\nval base64_encode : string -> string\n(** [base64_encode s] is the base64 encoding of [s]. *)\n\n(** {1:surface Cairo surface conversion} *)\n\nval nx_to_cairo_surface : Nx.uint8_t -> Ucairo.surface\n(** [nx_to_cairo_surface data] is a Cairo ARGB32 image surface from [data].\n    [data] has shape [[|h; w; 3|]] (RGB) or [[|h; w; 4|]] (RGBA). *)\n\nval nx_to_png_base64 : Nx.uint8_t -> string\n(** [nx_to_png_base64 data] is the base64-encoded PNG of [data]. *)\n"
  },
  {
    "path": "packages/hugin/lib/prepared.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Data-only compilation: Spec.t -> Prepared.t\n\n   Compiles once per dataset. Separates data-dependent work (collecting\n   decorations, histogram binning, auto-coloring, data bounds) from\n   layout-dependent work (pixel coordinates, text measurement) which lives in\n   Resolve. *)\n\n(* Data bounds *)\n\nlet nx_finite_range (arr : Nx.float32_t) =\n  let n = (Nx.shape arr).(0) in\n  let lo = ref Float.infinity and hi = ref Float.neg_infinity in\n  for i = 0 to n - 1 do\n    let v = Nx.item [ i ] arr in\n    if Float.is_finite v then begin\n      if v < !lo then lo := v;\n      if v > !hi then hi := v\n    end\n  done;\n  (!lo, !hi)\n\nlet expand_range scale lo hi =\n  if lo = hi then (lo -. 1., hi +. 1.)\n  else\n    match scale with\n    | `Log ->\n        let lo_log = Float.log10 (Float.max 1e-10 lo) in\n        let hi_log = Float.log10 (Float.max 1e-10 hi) in\n        let pad = (hi_log -. lo_log) *. 0.05 in\n        (Float.pow 10. (lo_log -. pad), Float.pow 10. (hi_log +. pad))\n    | `Sqrt ->\n        let lo = Float.max 0. lo in\n        let pad = (hi -. lo) *. 0.05 in\n        (Float.max 0. (lo -. pad), hi +. pad)\n    | `Asinh | `Symlog _ | `Linear ->\n        let pad = (hi -. lo) *. 0.05 in\n        (lo -. pad, hi +. pad)\n\nlet mark_x_range = function\n  | Spec.Line { x; _ } | Spec.Point { x; _ } -> Some (nx_finite_range x)\n  | Spec.Bar { x; width; _ } ->\n      let lo, hi = nx_finite_range x in\n      let w = (match width with Some w -> w | None -> 0.8) /. 2. in\n      Some (lo -. w, hi +. 
w)\n  | Spec.Hist { x; _ } -> Some (nx_finite_range x)\n  | Spec.Image { extent = Some (xmin, xmax, _, _); _ } ->\n      Some (Float.min xmin xmax, Float.max xmin xmax)\n  | Spec.Image _ -> None\n  | Spec.Text_mark { x; _ } -> Some (x, x)\n  | Spec.Hline _ -> None\n  | Spec.Vline { x; _ } -> Some (x, x)\n  | Spec.Abline _ -> None\n  | Spec.Fill_between { x; _ } -> Some (nx_finite_range x)\n  | Spec.Errorbar { x; xerr; _ } ->\n      let lo, hi = nx_finite_range x in\n      let lo, hi =\n        match xerr with\n        | Some (`Symmetric e) ->\n            let _, emax = nx_finite_range e in\n            (lo -. emax, hi +. emax)\n        | Some (`Asymmetric (elo, ehi)) ->\n            let _, emlo = nx_finite_range elo in\n            let _, emhi = nx_finite_range ehi in\n            (lo -. emlo, hi +. emhi)\n        | None -> (lo, hi)\n      in\n      Some (lo, hi)\n  | Spec.Hspan _ -> None\n  | Spec.Vspan { x0; x1; _ } -> Some (Float.min x0 x1, Float.max x0 x1)\n  | Spec.Heatmap { data; _ } ->\n      let shape = Nx.shape data in\n      let cols = float shape.(1) in\n      Some (0., cols)\n  | Spec.Imshow _ -> None\n  | Spec.Contour { x0; x1; _ } -> Some (Float.min x0 x1, Float.max x0 x1)\n\nlet mark_y_range = function\n  | Spec.Line { y; _ } | Spec.Point { y; _ } -> Some (nx_finite_range y)\n  | Spec.Bar { height; bottom; _ } ->\n      let lo, hi = nx_finite_range height in\n      Some (Float.min bottom (bottom +. lo), Float.max bottom (bottom +. 
hi))\n  | Spec.Hist _ -> None\n  | Spec.Image { extent = Some (_, _, ymin, ymax); _ } ->\n      Some (Float.min ymin ymax, Float.max ymin ymax)\n  | Spec.Image _ -> None\n  | Spec.Text_mark { y; _ } -> Some (y, y)\n  | Spec.Hline { y; _ } -> Some (y, y)\n  | Spec.Vline _ -> None\n  | Spec.Abline _ -> None\n  | Spec.Fill_between { y1; y2; _ } ->\n      let lo1, hi1 = nx_finite_range y1 in\n      let lo2, hi2 = nx_finite_range y2 in\n      Some (Float.min lo1 lo2, Float.max hi1 hi2)\n  | Spec.Errorbar { y; yerr; _ } ->\n      let lo, hi = nx_finite_range y in\n      let lo, hi =\n        match yerr with\n        | `Symmetric e ->\n            let _, emax = nx_finite_range e in\n            (lo -. emax, hi +. emax)\n        | `Asymmetric (elo, ehi) ->\n            let _, emlo = nx_finite_range elo in\n            let _, emhi = nx_finite_range ehi in\n            (lo -. emlo, hi +. emhi)\n      in\n      Some (lo, hi)\n  | Spec.Hspan { y0; y1; _ } -> Some (Float.min y0 y1, Float.max y0 y1)\n  | Spec.Vspan _ -> None\n  | Spec.Heatmap { data; _ } ->\n      let shape = Nx.shape data in\n      let rows = float shape.(0) in\n      Some (0., rows)\n  | Spec.Imshow _ -> None\n  | Spec.Contour { y0; y1; _ } -> Some (Float.min y0 y1, Float.max y0 y1)\n\nlet union_range a b =\n  match (a, b) with\n  | None, x | x, None -> x\n  | Some (a0, a1), Some (b0, b1) -> Some (Float.min a0 b0, Float.max a1 b1)\n\nlet compute_data_bounds ~xscale ~yscale marks =\n  let xr =\n    List.fold_left (fun acc m -> union_range acc (mark_x_range m)) None marks\n  in\n  let yr =\n    List.fold_left (fun acc m -> union_range acc (mark_y_range m)) None marks\n  in\n  let xlo, xhi =\n    match xr with Some (a, b) -> expand_range xscale a b | None -> (0., 1.)\n  in\n  let ylo, yhi =\n    match yr with Some (a, b) -> expand_range yscale a b | None -> (0., 1.)\n  in\n  (xlo, xhi, ylo, yhi)\n\n(* Collect decorations from spec tree *)\n\ntype collected = {\n  marks : Spec.mark list;\n  x : Axis.config;\n  y : 
Axis.config;\n  title : string option;\n  grid_visible : bool option;\n  frame_visible : bool option;\n  legend_loc : Spec.legend_loc option;\n  legend_ncol : int;\n  theme_override : Theme.t option;\n}\n\nlet empty_collected =\n  {\n    marks = [];\n    x = Axis.empty_config;\n    y = Axis.empty_config;\n    title = None;\n    grid_visible = None;\n    frame_visible = None;\n    legend_loc = None;\n    legend_ncol = 1;\n    theme_override = None;\n  }\n\nlet rec collect c = function\n  | Spec.Mark m -> { c with marks = m :: c.marks }\n  | Spec.Layers ts -> List.fold_left collect c ts\n  | Spec.Decorated { inner; decorations } ->\n      let c = collect c inner in\n      List.fold_left apply_decoration c decorations\n  | Spec.Grid _ -> c\n\nand apply_decoration c = function\n  | Spec.Title s when c.title = None -> { c with title = Some s }\n  | Spec.Xlabel s when c.x.label = None ->\n      { c with x = { c.x with label = Some s } }\n  | Spec.Ylabel s when c.y.label = None ->\n      { c with y = { c.y with label = Some s } }\n  | Spec.Xlim (lo, hi) when c.x.lim = None ->\n      { c with x = { c.x with lim = Some (lo, hi) } }\n  | Spec.Ylim (lo, hi) when c.y.lim = None ->\n      { c with y = { c.y with lim = Some (lo, hi) } }\n  | Spec.Xscale s when c.x.scale = None ->\n      { c with x = { c.x with scale = Some s } }\n  | Spec.Yscale s when c.y.scale = None ->\n      { c with y = { c.y with scale = Some s } }\n  | Spec.Xinvert -> { c with x = { c.x with invert = true } }\n  | Spec.Yinvert -> { c with y = { c.y with invert = true } }\n  | Spec.Grid_visible v when c.grid_visible = None ->\n      { c with grid_visible = Some v }\n  | Spec.Legend (loc, ncol) when c.legend_loc = None ->\n      { c with legend_loc = Some loc; legend_ncol = ncol }\n  | Spec.Xticks t when c.x.ticks = None ->\n      { c with x = { c.x with ticks = Some t } }\n  | Spec.Yticks t when c.y.ticks = None ->\n      { c with y = { c.y with ticks = Some t } }\n  | Spec.With_theme t when 
c.theme_override = None ->\n      { c with theme_override = Some t }\n  | Spec.Xtick_format f when c.x.tick_format = None ->\n      { c with x = { c.x with tick_format = Some f } }\n  | Spec.Ytick_format f when c.y.tick_format = None ->\n      { c with y = { c.y with tick_format = Some f } }\n  | Spec.Frame v when c.frame_visible = None ->\n      { c with frame_visible = Some v }\n  | _ -> c\n\n(* Auto-coloring *)\n\nlet mark_color = function\n  | Spec.Line { color; _ }\n  | Spec.Point { color; _ }\n  | Spec.Bar { color; _ }\n  | Spec.Hist { color; _ }\n  | Spec.Text_mark { color; _ }\n  | Spec.Hline { color; _ }\n  | Spec.Vline { color; _ }\n  | Spec.Abline { color; _ }\n  | Spec.Fill_between { color; _ }\n  | Spec.Hspan { color; _ }\n  | Spec.Vspan { color; _ }\n  | Spec.Errorbar { color; _ }\n  | Spec.Contour { color; _ } ->\n      color\n  | Spec.Image _ | Spec.Heatmap _ | Spec.Imshow _ -> None\n\nlet auto_color (theme : Theme.t) marks =\n  let n_palette = Array.length theme.palette in\n  List.mapi\n    (fun i m ->\n      match mark_color m with\n      | Some _ -> m\n      | None -> (\n          let c = theme.palette.(i mod n_palette) in\n          match m with\n          | Spec.Line r -> Spec.Line { r with color = Some c }\n          | Spec.Point r -> Spec.Point { r with color = Some c }\n          | Spec.Bar r -> Spec.Bar { r with color = Some c }\n          | Spec.Hist r -> Spec.Hist { r with color = Some c }\n          | Spec.Hline r -> Spec.Hline { r with color = Some c }\n          | Spec.Vline r -> Spec.Vline { r with color = Some c }\n          | Spec.Abline r -> Spec.Abline { r with color = Some c }\n          | Spec.Fill_between r -> Spec.Fill_between { r with color = Some c }\n          | Spec.Hspan r -> Spec.Hspan { r with color = Some c }\n          | Spec.Vspan r -> Spec.Vspan { r with color = Some c }\n          | Spec.Errorbar r -> Spec.Errorbar { r with color = Some c }\n          | Spec.Contour r -> Spec.Contour { r with color = Some c }\n     
     | m -> m))\n    marks\n\n(* Histogram normalization — convert Hist to Bar *)\n\nlet normalize_hist marks =\n  List.map\n    (fun m ->\n      match m with\n      | Spec.Hist { x; bins; density; color; label } ->\n          let xmin, xmax = nx_finite_range x in\n          let edges =\n            match bins with\n            | `Num num_bins ->\n                Array.init (num_bins + 1) (fun i ->\n                    xmin +. ((xmax -. xmin) *. float i /. float num_bins))\n            | `Edges e -> e\n          in\n          let num_bins = Array.length edges - 1 in\n          let n = (Nx.shape x).(0) in\n          let counts = Array.make num_bins 0. in\n          let binned = ref 0 in\n          for i = 0 to n - 1 do\n            let v = Nx.item [ i ] x in\n            if Float.is_finite v && v >= edges.(0) && v <= edges.(num_bins) then begin\n              incr binned;\n              let bin = ref 0 in\n              while !bin < num_bins - 1 && v >= edges.(!bin + 1) do\n                incr bin\n              done;\n              counts.(!bin) <- counts.(!bin) +. 1.\n            end\n          done;\n          if density then begin\n            let total =\n              let b = float !binned in\n              if b = 0. then 1. else b\n            in\n            for i = 0 to num_bins - 1 do\n              let w = edges.(i + 1) -. edges.(i) in\n              counts.(i) <- counts.(i) /. (total *. w)\n            done\n          end;\n          let bar_x =\n            Nx.init Float32 [| num_bins |] (fun idx ->\n                let i = idx.(0) in\n                (edges.(i) +. edges.(i + 1)) /. 2.)\n          in\n          let bar_h =\n            Nx.init Float32 [| num_bins |] (fun idx -> counts.(idx.(0)))\n          in\n          let w = if num_bins > 0 then edges.(1) -. edges.(0) else 1. 
in\n          Spec.Bar\n            {\n              x = bar_x;\n              height = bar_h;\n              width = Some w;\n              bottom = 0.;\n              color;\n              label;\n              alpha = None;\n            }\n      | m -> m)\n    marks\n\n(* Guide ranges *)\n\nlet color_by_range marks =\n  List.fold_left\n    (fun acc m ->\n      match m with\n      | Spec.Point { color_by = Some cb; _ } ->\n          let lo, hi = nx_finite_range cb in\n          union_range acc (Some (lo, hi))\n      | _ -> acc)\n    None marks\n\nlet size_by_range marks =\n  List.fold_left\n    (fun acc m ->\n      match m with\n      | Spec.Point { size_by = Some sb; _ } ->\n          let lo, hi = nx_finite_range sb in\n          union_range acc (Some (lo, hi))\n      | _ -> acc)\n    None marks\n\n(* Collect marks from all panels in a spec tree *)\n\nlet rec collect_all_marks = function\n  | Spec.Mark m -> [ m ]\n  | Spec.Layers ts -> List.concat_map collect_all_marks ts\n  | Spec.Decorated { inner; _ } -> collect_all_marks inner\n  | Spec.Grid { rows; _ } ->\n      List.concat_map (List.concat_map collect_all_marks) rows\n\n(* Grid-level decorations *)\n\ntype grid_decorations = {\n  gd_title : string option;\n  gd_xlabel : string option;\n  gd_ylabel : string option;\n  gd_legend_loc : Spec.legend_loc option;\n  gd_legend_ncol : int;\n  gd_theme_override : Theme.t option;\n}\n\nlet extract_grid_decorations decorations =\n  let d =\n    {\n      gd_title = None;\n      gd_xlabel = None;\n      gd_ylabel = None;\n      gd_legend_loc = None;\n      gd_legend_ncol = 1;\n      gd_theme_override = None;\n    }\n  in\n  List.fold_left\n    (fun d dec ->\n      match dec with\n      | Spec.Title s when d.gd_title = None -> { d with gd_title = Some s }\n      | Spec.Xlabel s when d.gd_xlabel = None -> { d with gd_xlabel = Some s }\n      | Spec.Ylabel s when d.gd_ylabel = None -> { d with gd_ylabel = Some s }\n      | Spec.Legend (loc, ncol) when d.gd_legend_loc = 
None ->\n          { d with gd_legend_loc = Some loc; gd_legend_ncol = ncol }\n      | Spec.With_theme t when d.gd_theme_override = None ->\n          { d with gd_theme_override = Some t }\n      | _ -> d)\n    d decorations\n\n(* Prepared panel — all data-only work done *)\n\ntype panel = {\n  marks : Spec.mark list;\n  x : Axis.t;\n  y : Axis.t;\n  title : string option;\n  legend_loc : Spec.legend_loc option;\n  legend_ncol : int;\n  grid_visible : bool option;\n  frame_visible : bool option;\n  theme_override : Theme.t option;\n  colorbar_range : (float * float) option;\n  size_by_range : (float * float) option;\n}\n\ntype t =\n  | Panel of panel\n  | Grid of { rows : t list list; gap : float }\n  | Decorated_grid of {\n      decorations : grid_decorations;\n      inner : t;\n      all_marks : Spec.mark list;\n    }\n\n(* Imshow: rasterize float32 data to uint8 RGB via stretch + colormap *)\n\nlet apply_stretch stretch v =\n  match stretch with\n  | `Linear -> v\n  | `Log -> Float.log10 (1. +. (9. *. v)) /. Float.log10 10.\n  | `Sqrt -> Float.sqrt (Float.max 0. v)\n  | `Asinh ->\n      let a = 10. in\n      Float.asinh (a *. v) /. Float.asinh a\n  | `Power a -> Float.pow (Float.max 0. v) a\n\nlet rasterize_imshow ~stretch ~cmap ~vmin ~vmax (data : Nx.float32_t) =\n  let shape = Nx.shape data in\n  let rows = shape.(0) and cols = shape.(1) in\n  let lo = ref Float.infinity and hi = ref Float.neg_infinity in\n  for r = 0 to rows - 1 do\n    for c = 0 to cols - 1 do\n      let v = Nx.item [ r; c ] data in\n      if Float.is_finite v then begin\n        if v < !lo then lo := v;\n        if v > !hi then hi := v\n      end\n    done\n  done;\n  let vlo = match vmin with Some v -> v | None -> !lo in\n  let vhi = match vmax with Some v -> v | None -> !hi in\n  let vrange = if vhi = vlo then 1. else vhi -. 
vlo in\n  let rgb = Nx.zeros Nx.uint8 [| rows; cols; 3 |] in\n  for r = 0 to rows - 1 do\n    for c = 0 to cols - 1 do\n      let v = Nx.item [ r; c ] data in\n      let t = Float.max 0. (Float.min 1. ((v -. vlo) /. vrange)) in\n      let t = apply_stretch stretch t in\n      let t = Float.max 0. (Float.min 1. t) in\n      let color = Cmap.eval cmap t in\n      let cr, cg, cb, _ = Color.to_rgba color in\n      Nx.set_item [ r; c; 0 ] (int_of_float (cr *. 255.)) rgb;\n      Nx.set_item [ r; c; 1 ] (int_of_float (cg *. 255.)) rgb;\n      Nx.set_item [ r; c; 2 ] (int_of_float (cb *. 255.)) rgb\n    done\n  done;\n  rgb\n\nlet normalize_imshow (theme : Theme.t) marks =\n  List.map\n    (fun m ->\n      match m with\n      | Spec.Imshow { data; stretch; cmap; vmin; vmax } ->\n          let cmap = match cmap with Some c -> c | None -> theme.sequential in\n          let rgb = rasterize_imshow ~stretch ~cmap ~vmin ~vmax data in\n          Spec.Image { data = rgb; extent = None }\n      | m -> m)\n    marks\n\n(* Contour tracing via marching squares *)\n\ntype contour_paths = { level : float; paths : (float * float) array list }\n\n(* Join 2-point segments that share endpoints into connected polylines. Marching\n   squares produces one segment per cell edge crossing. Segments from adjacent\n   cells share exact floating-point endpoints (deterministic lerp), so we chain\n   them with exact equality via a hashtable. 
*)\nlet join_segments segments =\n  let n = List.length segments in\n  if n = 0 then []\n  else\n    let segs = Array.of_list segments in\n    let visited = Array.make n false in\n    let adj = Hashtbl.create (2 * n) in\n    Array.iteri\n      (fun i (a, b) ->\n        let add pt =\n          let cur = try Hashtbl.find adj pt with Not_found -> [] in\n          Hashtbl.replace adj pt (i :: cur)\n        in\n        add a;\n        add b)\n      segs;\n    let find_unvisited_neighbor pt =\n      match Hashtbl.find adj pt with\n      | exception Not_found -> None\n      | neighbors ->\n          let rec scan = function\n            | [] -> None\n            | j :: rest -> if visited.(j) then scan rest else Some j\n          in\n          scan neighbors\n    in\n    let chains = ref [] in\n    for start = 0 to n - 1 do\n      if not visited.(start) then begin\n        visited.(start) <- true;\n        let a0, b0 = segs.(start) in\n        (* front: backward extensions (cons'd, so in chain order). back: forward\n           extensions (cons'd, so reversed). 
*)\n        let front = ref [ a0 ] in\n        let back = ref [ b0 ] in\n        (* Extend forward from b0 *)\n        let cur = ref b0 in\n        let go = ref true in\n        while !go do\n          match find_unvisited_neighbor !cur with\n          | None -> go := false\n          | Some j ->\n              visited.(j) <- true;\n              let a, b = segs.(j) in\n              let next = if a = !cur then b else a in\n              back := next :: !back;\n              cur := next\n        done;\n        (* Extend backward from a0 *)\n        cur := a0;\n        go := true;\n        while !go do\n          match find_unvisited_neighbor !cur with\n          | None -> go := false\n          | Some j ->\n              visited.(j) <- true;\n              let a, b = segs.(j) in\n              let next = if a = !cur then b else a in\n              front := next :: !front;\n              cur := next\n        done;\n        (* front is in chain order; back is reversed *)\n        chains := Array.of_list (!front @ List.rev !back) :: !chains\n      end\n    done;\n    List.rev !chains\n\nlet trace_contours ~rows ~cols (data : Nx.float32_t) levels =\n  let get r c =\n    if r >= 0 && r < rows && c >= 0 && c < cols then Nx.item [ r; c ] data\n    else 0.\n  in\n  List.map\n    (fun level ->\n      let segments = ref [] in\n      for r = 0 to rows - 2 do\n        for c = 0 to cols - 2 do\n          let v00 = get r c in\n          let v10 = get r (c + 1) in\n          let v11 = get (r + 1) (c + 1) in\n          let v01 = get (r + 1) c in\n          let b0 = if v00 >= level then 1 else 0 in\n          let b1 = if v10 >= level then 1 else 0 in\n          let b2 = if v11 >= level then 1 else 0 in\n          let b3 = if v01 >= level then 1 else 0 in\n          let case = b0 lor (b1 lsl 1) lor (b2 lsl 2) lor (b3 lsl 3) in\n          let lerp va vb =\n            let d = vb -. va in\n            if Float.abs d < 1e-30 then 0.5 else (level -. va) /. 
d\n          in\n          let fc = float c and fr = float r in\n          let top = (fc +. lerp v00 v10, fr) in\n          let right = (fc +. 1., fr +. lerp v10 v11) in\n          let bottom = (fc +. lerp v01 v11, fr +. 1.) in\n          let left = (fc, fr +. lerp v00 v01) in\n          let add a b = segments := (a, b) :: !segments in\n          begin match case with\n          | 0 | 15 -> ()\n          | 1 | 14 -> add top left\n          | 2 | 13 -> add top right\n          | 3 | 12 -> add left right\n          | 4 | 11 -> add right bottom\n          | 5 ->\n              let center = (v00 +. v10 +. v11 +. v01) /. 4. in\n              if center >= level then begin\n                add top right;\n                add bottom left\n              end\n              else begin\n                add top left;\n                add bottom right\n              end\n          | 6 | 9 -> add top bottom\n          | 7 | 8 -> add bottom left\n          | 10 ->\n              let center = (v00 +. v10 +. v11 +. v01) /. 4. 
in\n              if center >= level then begin\n                add top left;\n                add bottom right\n              end\n              else begin\n                add top right;\n                add bottom left\n              end\n          | _ -> ()\n          end\n        done\n      done;\n      let paths = join_segments !segments in\n      { level; paths })\n    levels\n\nlet prepare_contour ~x0 ~x1 ~y0 ~y1 ~data ~levels =\n  let shape = Nx.shape data in\n  let rows = shape.(0) and cols = shape.(1) in\n  let lo = ref Float.infinity and hi = ref Float.neg_infinity in\n  for r = 0 to rows - 1 do\n    for c = 0 to cols - 1 do\n      let v = Nx.item [ r; c ] data in\n      if Float.is_finite v then begin\n        if v < !lo then lo := v;\n        if v > !hi then hi := v\n      end\n    done\n  done;\n  let vlo = !lo and vhi = !hi in\n  let level_values =\n    match levels with\n    | `Values a -> Array.to_list a\n    | `Num n ->\n        let range = vhi -. vlo in\n        if range = 0. then [ vlo ]\n        else\n          List.init n (fun i ->\n              vlo +. (range *. (float (i + 1) /. float (n + 1))))\n  in\n  let contours = trace_contours ~rows ~cols data level_values in\n  (* Map grid coords to data coords *)\n  let xscale = (x1 -. x0) /. float (cols - 1) in\n  let yscale = (y1 -. y0) /. float (rows - 1) in\n  List.map\n    (fun cp ->\n      let paths =\n        List.map\n          (fun seg ->\n            Array.map\n              (fun (gc, gr) -> (x0 +. (gc *. xscale), y0 +. (gr *. 
yscale)))\n              seg)\n          cp.paths\n      in\n      { cp with paths })\n    contours\n\n(* Compile a spec tree into a prepared tree *)\n\nlet compile_panel theme spec =\n  let c = collect empty_collected spec in\n  let c = { c with marks = List.rev c.marks } in\n  let theme = Option.value ~default:theme c.theme_override in\n  let marks =\n    normalize_hist (normalize_imshow theme (auto_color theme c.marks))\n  in\n  let xscale = Option.value ~default:`Linear c.x.scale in\n  let yscale = Option.value ~default:`Linear c.y.scale in\n  let xlo, xhi, ylo, yhi = compute_data_bounds ~xscale ~yscale marks in\n  let x = Axis.resolve ~data_lo:xlo ~data_hi:xhi c.x in\n  let y = Axis.resolve ~data_lo:ylo ~data_hi:yhi c.y in\n  Panel\n    {\n      marks;\n      x;\n      y;\n      title = c.title;\n      legend_loc = c.legend_loc;\n      legend_ncol = c.legend_ncol;\n      grid_visible = c.grid_visible;\n      frame_visible = c.frame_visible;\n      theme_override = c.theme_override;\n      colorbar_range = color_by_range marks;\n      size_by_range = size_by_range marks;\n    }\n\nlet rec compile ~theme spec =\n  match spec with\n  | Spec.Grid { rows; gap } ->\n      let rows = List.map (List.map (compile ~theme)) rows in\n      Grid { rows; gap }\n  | Spec.Decorated { inner = Spec.Grid _ as g; decorations } ->\n      let gd = extract_grid_decorations decorations in\n      let theme = Option.value ~default:theme gd.gd_theme_override in\n      let all_marks = auto_color theme (collect_all_marks g) in\n      let inner = compile ~theme g in\n      Decorated_grid { decorations = gd; inner; all_marks }\n  | spec -> compile_panel theme spec\n"
  },
  {
    "path": "packages/hugin/lib/prepared.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Data-only compilation stage.\n\n    {b Internal module.} Compiles a {!Spec.t} tree into a {!t} tree with all\n    data-dependent work done: decoration collection, histogram binning,\n    auto-coloring, data-bound computation, imshow rasterization, contour\n    tracing, and guide-range detection.\n\n    The result is independent of output dimensions and can be resolved\n    repeatedly at different sizes by {!Resolve.resolve_prepared}. *)\n\n(** {1:bounds Data bounds} *)\n\nval nx_finite_range : Nx.float32_t -> float * float\n(** [nx_finite_range arr] is [(lo, hi)] of the finite values in [arr]. *)\n\n(** {1:marks Mark introspection} *)\n\nval mark_color : Spec.mark -> Color.t option\n(** [mark_color m] is the color of [m], if set. *)\n\n(** {1:contour Contour tracing} *)\n\ntype contour_paths = { level : float; paths : (float * float) array list }\n(** The type for traced contour paths at a single iso-level. Coordinates are in\n    data space. *)\n\nval prepare_contour :\n  x0:float ->\n  x1:float ->\n  y0:float ->\n  y1:float ->\n  data:Nx.float32_t ->\n  levels:[ `Num of int | `Values of float array ] ->\n  contour_paths list\n(** [prepare_contour ~x0 ~x1 ~y0 ~y1 ~data ~levels] traces contour paths through\n    [data] and maps grid coordinates to the data-space rectangle \\[[x0];[x1]\\] x\n    \\[[y0];[y1]\\]. 
*)\n\n(** {1:panel Prepared panel} *)\n\ntype panel = {\n  marks : Spec.mark list;\n  x : Axis.t;\n  y : Axis.t;\n  title : string option;\n  legend_loc : Spec.legend_loc option;\n  legend_ncol : int;\n  grid_visible : bool option;\n  frame_visible : bool option;\n  theme_override : Theme.t option;\n  colorbar_range : (float * float) option;\n  size_by_range : (float * float) option;\n}\n(** The type for prepared panels. All data-only work is done: marks are\n    auto-colored and histograms normalized to bars, data bounds are computed,\n    and guide ranges are detected. *)\n\n(** {1:grid Grid decorations} *)\n\ntype grid_decorations = {\n  gd_title : string option;\n  gd_xlabel : string option;\n  gd_ylabel : string option;\n  gd_legend_loc : Spec.legend_loc option;\n  gd_legend_ncol : int;\n  gd_theme_override : Theme.t option;\n}\n(** The type for grid-level decorations extracted from a decorated grid spec. *)\n\n(** {1:tree Prepared tree} *)\n\ntype t =\n  | Panel of panel\n  | Grid of { rows : t list list; gap : float }\n  | Decorated_grid of {\n      decorations : grid_decorations;\n      inner : t;\n      all_marks : Spec.mark list;\n    }\n      (** The type for prepared spec trees. Mirrors {!Spec.t} structure with all\n          data-only work pre-computed. *)\n\n(** {1:compile Compilation} *)\n\nval compile : theme:Theme.t -> Spec.t -> t\n(** [compile ~theme spec] is the prepared tree for [spec]. Collects decorations,\n    normalizes histograms, auto-colors marks, computes data bounds, and detects\n    colorbar/size-guide ranges. *)\n"
  },
  {
    "path": "packages/hugin/lib/resolve.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Prepared.t + Theme.t -> Scene.t resolution\n\n   Layout and pixel-coordinate work. All data-only processing (histogram\n   binning, auto-coloring, bounds) is done in Prepared. *)\n\ntype text_measurer = font:Theme.font -> string -> float * float\n\n(* Region in device pixels *)\ntype region = { rx : float; ry : float; rw : float; rh : float }\n\n(* Scale-aware coord transform *)\n\nlet data_to_pixel_x sx region v =\n  let u = sx.Scale.to_unit v in\n  region.rx +. (u *. region.rw)\n\nlet data_to_pixel_y sy region v =\n  let u = sy.Scale.to_unit v in\n  region.ry +. region.rh -. (u *. region.rh)\n\n(* Dash pattern *)\n\nlet dash_of_style = function\n  | `Solid -> []\n  | `Dashed -> [ 6.; 4. ]\n  | `Dotted -> [ 2.; 3. ]\n  | `Dash_dot -> [ 6.; 3.; 2.; 3. ]\n\n(* Resolution helpers *)\n\nlet resolve_color ?default_alpha color alpha =\n  let c = Option.value ~default:Color.black color in\n  match (alpha, default_alpha) with\n  | Some a, _ | None, Some a -> Color.with_alpha a c\n  | None, None -> c\n\nlet resolve_line_width sf (theme : Theme.t) line_width =\n  Option.value ~default:theme.line_width line_width *. sf\n\nlet resolve_dash sf line_style =\n  let dash = match line_style with Some s -> dash_of_style s | None -> [] in\n  List.map (fun d -> d *. sf) dash\n\n(* Emit mark primitives *)\n\nlet step_transform step points n =\n  match step with\n  | None -> points\n  | Some `Post ->\n      if n < 2 then points\n      else begin\n        let out = Array.make ((2 * n) - 1) (0., 0.) 
in\n        let k = ref 0 in\n        for i = 0 to n - 2 do\n          let px, py = points.(i) in\n          let px_next, _ = points.(i + 1) in\n          out.(!k) <- (px, py);\n          incr k;\n          out.(!k) <- (px_next, py);\n          incr k\n        done;\n        out.(!k) <- points.(n - 1);\n        Array.sub out 0 (!k + 1)\n      end\n  | Some `Pre ->\n      if n < 2 then points\n      else begin\n        let out = Array.make ((2 * n) - 1) (0., 0.) in\n        let k = ref 0 in\n        out.(!k) <- points.(0);\n        incr k;\n        for i = 1 to n - 1 do\n          let _, py_prev = points.(i - 1) in\n          let px, py = points.(i) in\n          out.(!k) <- (px, py_prev);\n          incr k;\n          out.(!k) <- (px, py);\n          incr k\n        done;\n        Array.sub out 0 !k\n      end\n  | Some `Mid ->\n      if n < 2 then points\n      else begin\n        let out = Array.make ((3 * n) - 2) (0., 0.) in\n        let k = ref 0 in\n        for i = 0 to n - 2 do\n          let px, py = points.(i) in\n          let px_next, py_next = points.(i + 1) in\n          let mx = (px +. px_next) /. 2. 
in\n          out.(!k) <- (px, py);\n          incr k;\n          out.(!k) <- (mx, py);\n          incr k;\n          out.(!k) <- (mx, py_next);\n          incr k\n        done;\n        out.(!k) <- points.(n - 1);\n        Array.sub out 0 (!k + 1)\n      end\n\nlet emit_line_mark sx sy plot_area theme ~x ~y ~color ~line_width ~line_style\n    ~step ~marker ~alpha =\n  let n = (Nx.shape x).(0) in\n  let color = resolve_color color alpha in\n  let sf = theme.Theme.scale_factor in\n  let lw = resolve_line_width sf theme line_width in\n  let scaled_dash = resolve_dash sf line_style in\n  (* Split line into finite-value segments *)\n  let segments = ref [] in\n  let current = ref [] in\n  let all_finite_points = ref [] in\n  for i = 0 to n - 1 do\n    let xv = Nx.item [ i ] x in\n    let yv = Nx.item [ i ] y in\n    if Float.is_finite xv && Float.is_finite yv then begin\n      let px = data_to_pixel_x sx plot_area xv in\n      let py = data_to_pixel_y sy plot_area yv in\n      let pt = (px, py) in\n      current := pt :: !current;\n      all_finite_points := pt :: !all_finite_points\n    end\n    else\n      match !current with\n      | [] -> ()\n      | _ ->\n          segments := Array.of_list (List.rev !current) :: !segments;\n          current := []\n  done;\n  (match !current with\n  | [] -> ()\n  | _ -> segments := Array.of_list (List.rev !current) :: !segments);\n  let segments = List.rev !segments in\n  let paths =\n    List.map\n      (fun points ->\n        let n_pts = Array.length points in\n        let points = step_transform step points n_pts in\n        Scene.Path\n          {\n            points;\n            close = false;\n            fill = None;\n            stroke = Some color;\n            line_width = lw;\n            dash = scaled_dash;\n          })\n      segments\n  in\n  match marker with\n  | Some shape ->\n      let finite_points = Array.of_list (List.rev !all_finite_points) in\n      let ms = theme.marker_size *. 
sf in\n      let markers =\n        Scene.Markers\n          {\n            points = finite_points;\n            shape;\n            size = ms;\n            sizes = None;\n            fill = Some color;\n            fills = None;\n            stroke = None;\n          }\n      in\n      paths @ [ markers ]\n  | None -> paths\n\nlet emit_point_mark sx sy plot_area theme ~x ~y ~color ~color_by ~size ~size_by\n    ~marker ~alpha =\n  let n = (Nx.shape x).(0) in\n  let color = resolve_color color alpha in\n  let shape = Option.value ~default:Spec.Circle marker in\n  let ms =\n    (match size with Some s -> s | None -> theme.Theme.marker_size)\n    *. theme.scale_factor\n  in\n  (* Collect only finite points *)\n  let valid = Array.make n false in\n  let num_valid = ref 0 in\n  for i = 0 to n - 1 do\n    let xv = Nx.item [ i ] x in\n    let yv = Nx.item [ i ] y in\n    if Float.is_finite xv && Float.is_finite yv then begin\n      valid.(i) <- true;\n      incr num_valid\n    end\n  done;\n  let nv = !num_valid in\n  let points = Array.make nv (0., 0.) in\n  let vi = ref 0 in\n  for i = 0 to n - 1 do\n    if valid.(i) then begin\n      let px = data_to_pixel_x sx plot_area (Nx.item [ i ] x) in\n      let py = data_to_pixel_y sy plot_area (Nx.item [ i ] y) in\n      points.(!vi) <- (px, py);\n      incr vi\n    end\n  done;\n  let sizes =\n    match size_by with\n    | Some sb ->\n        let sb_lo, sb_hi = Prepared.nx_finite_range sb in\n        let sb_range = if sb_hi = sb_lo then 1. else sb_hi -. sb_lo in\n        let arr = Array.make nv ms in\n        let vi = ref 0 in\n        for i = 0 to n - 1 do\n          if valid.(i) then begin\n            let sv = (Nx.item [ i ] sb -. sb_lo) /. sb_range in\n            arr.(!vi) <- (ms *. 0.5) +. (ms *. 
Float.sqrt sv);\n            incr vi\n          end\n        done;\n        Some arr\n    | None -> None\n  in\n  let fills =\n    match color_by with\n    | Some cb ->\n        let cb_lo, cb_hi = Prepared.nx_finite_range cb in\n        let cb_range = if cb_hi = cb_lo then 1. else cb_hi -. cb_lo in\n        let arr = Array.make nv Color.black in\n        let vi = ref 0 in\n        for i = 0 to n - 1 do\n          if valid.(i) then begin\n            let cv = (Nx.item [ i ] cb -. cb_lo) /. cb_range in\n            let c = Cmap.eval theme.sequential cv in\n            arr.(!vi) <-\n              (match alpha with Some a -> Color.with_alpha a c | None -> c);\n            incr vi\n          end\n        done;\n        Some arr\n    | None -> None\n  in\n  let fill = if fills <> None then None else Some color in\n  let stroke = Some color in\n  [ Scene.Markers { points; shape; size = ms; sizes; fill; fills; stroke } ]\n\nlet emit_bar_mark sx sy plot_area theme ~x ~height ~width ~bottom ~color ~alpha\n    =\n  let n = (Nx.shape x).(0) in\n  let color = resolve_color color alpha in\n  let w = Option.value ~default:0.8 width in\n  let prims = ref [] in\n  for i = 0 to n - 1 do\n    let xi = Nx.item [ i ] x in\n    let hi = Nx.item [ i ] height in\n    if Float.is_finite xi && Float.is_finite hi then\n      let x0 = data_to_pixel_x sx plot_area (xi -. (w /. 2.)) in\n      let x1 = data_to_pixel_x sx plot_area (xi +. (w /. 2.)) in\n      let y0 = data_to_pixel_y sy plot_area bottom in\n      let y1 = data_to_pixel_y sy plot_area (bottom +. 
hi) in\n      let lx = Float.min x0 x1 and rx = Float.max x0 x1 in\n      let ty = Float.min y0 y1 and by = Float.max y0 y1 in\n      prims :=\n        Scene.Path\n          {\n            points = [| (lx, ty); (rx, ty); (rx, by); (lx, by) |];\n            close = true;\n            fill = Some color;\n            stroke = None;\n            line_width = 0.;\n            dash = [];\n          }\n        :: !prims\n  done;\n  List.rev !prims\n\nlet emit_image_mark sx sy plot_area ~data ~extent =\n  match extent with\n  | Some (xmin, xmax, ymin, ymax) ->\n      let px0 = data_to_pixel_x sx plot_area xmin in\n      let px1 = data_to_pixel_x sx plot_area xmax in\n      let py0 = data_to_pixel_y sy plot_area ymax in\n      let py1 = data_to_pixel_y sy plot_area ymin in\n      let x = Float.min px0 px1 in\n      let y = Float.min py0 py1 in\n      let w = Float.abs (px1 -. px0) in\n      let h = Float.abs (py1 -. py0) in\n      [ Scene.Image { x; y; w; h; data } ]\n  | None ->\n      [\n        Scene.Image\n          {\n            x = plot_area.rx;\n            y = plot_area.ry;\n            w = plot_area.rw;\n            h = plot_area.rh;\n            data;\n          };\n      ]\n\nlet emit_text_mark sx sy plot_area theme ~x ~y ~content ~color ~font_size =\n  let color = Option.value ~default:Color.black color in\n  let size = Option.value ~default:theme.Theme.font_label.size font_size in\n  let px = data_to_pixel_x sx plot_area x in\n  let py = data_to_pixel_y sy plot_area y in\n  [\n    Scene.Text\n      {\n        x = px;\n        y = py;\n        content;\n        font =\n          {\n            family = theme.font_label.family;\n            size = size *. 
theme.scale_factor;\n            weight = `Normal;\n          };\n        color;\n        anchor = `Start;\n        baseline = `Bottom;\n        angle = 0.;\n      };\n  ]\n\nlet emit_hline_mark sy plot_area theme ~y:yv ~color ~line_width ~line_style\n    ~alpha =\n  let color = resolve_color color alpha in\n  let sf = theme.Theme.scale_factor in\n  let lw = resolve_line_width sf theme line_width in\n  let dash = resolve_dash sf line_style in\n  let py = data_to_pixel_y sy plot_area yv in\n  [\n    Scene.Path\n      {\n        points = [| (plot_area.rx, py); (plot_area.rx +. plot_area.rw, py) |];\n        close = false;\n        fill = None;\n        stroke = Some color;\n        line_width = lw;\n        dash;\n      };\n  ]\n\nlet emit_vline_mark sx plot_area theme ~x:xv ~color ~line_width ~line_style\n    ~alpha =\n  let color = resolve_color color alpha in\n  let sf = theme.Theme.scale_factor in\n  let lw = resolve_line_width sf theme line_width in\n  let dash = resolve_dash sf line_style in\n  let px = data_to_pixel_x sx plot_area xv in\n  [\n    Scene.Path\n      {\n        points = [| (px, plot_area.ry); (px, plot_area.ry +. plot_area.rh) |];\n        close = false;\n        fill = None;\n        stroke = Some color;\n        line_width = lw;\n        dash;\n      };\n  ]\n\nlet emit_abline_mark sx sy plot_area theme ~slope ~intercept ~color ~line_width\n    ~line_style ~alpha =\n  let color = resolve_color color alpha in\n  let sf = theme.Theme.scale_factor in\n  let lw = resolve_line_width sf theme line_width in\n  let dash = resolve_dash sf line_style in\n  let x0 = sx.Scale.lo and x1 = sx.Scale.hi in\n  let y0v = (slope *. x0) +. intercept in\n  let y1v = (slope *. x1) +. 
intercept in\n  let px0 = data_to_pixel_x sx plot_area x0 in\n  let py0 = data_to_pixel_y sy plot_area y0v in\n  let px1 = data_to_pixel_x sx plot_area x1 in\n  let py1 = data_to_pixel_y sy plot_area y1v in\n  [\n    Scene.Path\n      {\n        points = [| (px0, py0); (px1, py1) |];\n        close = false;\n        fill = None;\n        stroke = Some color;\n        line_width = lw;\n        dash;\n      };\n  ]\n\nlet emit_fill_between_segment sx sy plot_area color indices x y1 y2 =\n  let n_seg = List.length indices in\n  if n_seg = 0 then []\n  else\n    let points = Array.make (2 * n_seg) (0., 0.) in\n    let k = ref 0 in\n    List.iter\n      (fun i ->\n        let xv = Nx.item [ i ] x in\n        let yv = Nx.item [ i ] y1 in\n        if Float.is_finite xv && Float.is_finite yv then begin\n          points.(!k) <-\n            (data_to_pixel_x sx plot_area xv, data_to_pixel_y sy plot_area yv);\n          incr k\n        end)\n      indices;\n    let forward_count = !k in\n    List.iter\n      (fun i ->\n        let xv = Nx.item [ i ] x in\n        let yv = Nx.item [ i ] y2 in\n        if Float.is_finite xv && Float.is_finite yv then begin\n          points.(!k) <-\n            (data_to_pixel_x sx plot_area xv, data_to_pixel_y sy plot_area yv);\n          incr k\n        end)\n      (List.rev indices);\n    let total = !k in\n    if total < 3 || forward_count = 0 then []\n    else\n      [\n        Scene.Path\n          {\n            points = Array.sub points 0 total;\n            close = true;\n            fill = Some color;\n            stroke = None;\n            line_width = 0.;\n            dash = [];\n          };\n      ]\n\nlet emit_fill_between_mark sx sy plot_area ~x ~y1 ~y2 ~where ~color ~alpha =\n  let n = (Nx.shape x).(0) in\n  let color = resolve_color ~default_alpha:0.3 color alpha in\n  match where with\n  | None ->\n      let indices = List.init n Fun.id in\n      emit_fill_between_segment sx sy plot_area color indices x y1 y2\n  | Some mask 
->\n      (* Split into contiguous runs where mask > 0 *)\n      let segments = ref [] in\n      let current = ref [] in\n      for i = 0 to n - 1 do\n        if Nx.item [ i ] mask > 0. then current := i :: !current\n        else\n          match !current with\n          | [] -> ()\n          | seg ->\n              segments := List.rev seg :: !segments;\n              current := []\n      done;\n      (match !current with\n      | [] -> ()\n      | seg -> segments := List.rev seg :: !segments);\n      List.concat_map\n        (fun seg -> emit_fill_between_segment sx sy plot_area color seg x y1 y2)\n        (List.rev !segments)\n\nlet emit_hspan_mark sy plot_area ~y0 ~y1 ~color ~alpha =\n  let color = resolve_color ~default_alpha:0.2 color alpha in\n  let py0 = data_to_pixel_y sy plot_area y0 in\n  let py1 = data_to_pixel_y sy plot_area y1 in\n  let top = Float.min py0 py1 and bot = Float.max py0 py1 in\n  [\n    Scene.Path\n      {\n        points =\n          [|\n            (plot_area.rx, top);\n            (plot_area.rx +. plot_area.rw, top);\n            (plot_area.rx +. plot_area.rw, bot);\n            (plot_area.rx, bot);\n          |];\n        close = true;\n        fill = Some color;\n        stroke = None;\n        line_width = 0.;\n        dash = [];\n      };\n  ]\n\nlet emit_vspan_mark sx plot_area ~x0 ~x1 ~color ~alpha =\n  let color = resolve_color ~default_alpha:0.2 color alpha in\n  let px0 = data_to_pixel_x sx plot_area x0 in\n  let px1 = data_to_pixel_x sx plot_area x1 in\n  let left = Float.min px0 px1 and right = Float.max px0 px1 in\n  [\n    Scene.Path\n      {\n        points =\n          [|\n            (left, plot_area.ry);\n            (right, plot_area.ry);\n            (right, plot_area.ry +. plot_area.rh);\n            (left, plot_area.ry +. 
plot_area.rh);\n          |];\n        close = true;\n        fill = Some color;\n        stroke = None;\n        line_width = 0.;\n        dash = [];\n      };\n  ]\n\nlet emit_errorbar_mark sx sy plot_area theme ~x ~y ~yerr ~xerr ~color\n    ~line_width ~cap_size ~alpha =\n  let n = (Nx.shape x).(0) in\n  let color = resolve_color color alpha in\n  let sf = theme.Theme.scale_factor in\n  let lw = resolve_line_width sf theme line_width in\n  let cap =\n    (match cap_size with Some s -> s | None -> theme.marker_size *. 0.5) *. sf\n  in\n  let prims = ref [] in\n  let make_path pts =\n    Scene.Path\n      {\n        points = pts;\n        close = false;\n        fill = None;\n        stroke = Some color;\n        line_width = lw;\n        dash = [];\n      }\n  in\n  for i = 0 to n - 1 do\n    let xv = Nx.item [ i ] x in\n    let yv = Nx.item [ i ] y in\n    if Float.is_finite xv && Float.is_finite yv then begin\n      let px = data_to_pixel_x sx plot_area xv in\n      let py = data_to_pixel_y sy plot_area yv in\n      let y_lo, y_hi =\n        match yerr with\n        | `Symmetric e ->\n            let ev = Nx.item [ i ] e in\n            (yv -. ev, yv +. ev)\n        | `Asymmetric (elo, ehi) ->\n            (yv -. Nx.item [ i ] elo, yv +. Nx.item [ i ] ehi)\n      in\n      let py_lo = data_to_pixel_y sy plot_area y_lo in\n      let py_hi = data_to_pixel_y sy plot_area y_hi in\n      prims := make_path [| (px, py_lo); (px, py_hi) |] :: !prims;\n      prims := make_path [| (px -. cap, py_hi); (px +. cap, py_hi) |] :: !prims;\n      prims := make_path [| (px -. cap, py_lo); (px +. cap, py_lo) |] :: !prims;\n      begin match xerr with\n      | Some xerr_val ->\n          let x_lo, x_hi =\n            match xerr_val with\n            | `Symmetric e ->\n                let ev = Nx.item [ i ] e in\n                (xv -. ev, xv +. ev)\n            | `Asymmetric (elo, ehi) ->\n                (xv -. Nx.item [ i ] elo, xv +. 
Nx.item [ i ] ehi)\n          in\n          let px_lo = data_to_pixel_x sx plot_area x_lo in\n          let px_hi = data_to_pixel_x sx plot_area x_hi in\n          prims := make_path [| (px_lo, py); (px_hi, py) |] :: !prims;\n          prims :=\n            make_path [| (px_lo, py -. cap); (px_lo, py +. cap) |] :: !prims;\n          prims :=\n            make_path [| (px_hi, py -. cap); (px_hi, py +. cap) |] :: !prims\n      | None -> ()\n      end\n    end\n  done;\n  List.rev !prims\n\nlet emit_heatmap_mark sx sy plot_area theme ~data ~cmap ~annotate ~vmin ~vmax\n    ~fmt =\n  let shape = Nx.shape data in\n  let rows = shape.(0) and cols = shape.(1) in\n  let frows = float rows in\n  let lo = ref Float.infinity and hi = ref Float.neg_infinity in\n  for r = 0 to rows - 1 do\n    for c = 0 to cols - 1 do\n      let v = Nx.item [ r; c ] data in\n      if Float.is_finite v then begin\n        if v < !lo then lo := v;\n        if v > !hi then hi := v\n      end\n    done\n  done;\n  let vlo = Option.value ~default:!lo vmin in\n  let vhi = Option.value ~default:!hi vmax in\n  let vrange = if vhi = vlo then 1. else vhi -. vlo in\n  let cmap = Option.value ~default:theme.Theme.sequential cmap in\n  let sf = theme.Theme.scale_factor in\n  let prims = ref [] in\n  for r = 0 to rows - 1 do\n    for c = 0 to cols - 1 do\n      let v = Nx.item [ r; c ] data in\n      (* Clamp to [0, 1]; non-finite cells fall back to the low end so\n         Cmap.eval never sees NaN (Float.min/Float.max propagate NaN). *)\n      let t =\n        let raw = (v -. vlo) /. vrange in\n        if Float.is_finite raw then Float.max 0. (Float.min 1. raw) else 0.\n      in\n      let cell_color = Cmap.eval cmap t in\n      let x0 = data_to_pixel_x sx plot_area (float c) in\n      let x1 = data_to_pixel_x sx plot_area (float (c + 1)) in\n      let y0 = data_to_pixel_y sy plot_area (frows -. float r) in\n      let y1 = data_to_pixel_y sy plot_area (frows -. 
float (r + 1)) in\n      let lx = Float.min x0 x1 and rx = Float.max x0 x1 in\n      let ty = Float.min y0 y1 and by = Float.max y0 y1 in\n      prims :=\n        Scene.Path\n          {\n            points = [| (lx, ty); (rx, ty); (rx, by); (lx, by) |];\n            close = true;\n            fill = Some cell_color;\n            stroke = None;\n            line_width = 0.;\n            dash = [];\n          }\n        :: !prims;\n      if annotate then begin\n        let text =\n          match fmt with Some f -> f v | None -> Printf.sprintf \"%.2g\" v\n        in\n        let text_color =\n          if Color.lightness cell_color > 0.65 then Color.black else Color.white\n        in\n        let cx = (lx +. rx) /. 2. in\n        let cy = (ty +. by) /. 2. in\n        let font_size =\n          Float.max (8. *. sf)\n            (Float.min\n               (Float.abs (rx -. lx) *. 0.4)\n               (Float.abs (by -. ty) *. 0.4))\n        in\n        prims :=\n          Scene.Text\n            {\n              x = cx;\n              y = cy;\n              content = text;\n              font =\n                {\n                  family = theme.font_tick.family;\n                  size = font_size;\n                  weight = `Normal;\n                };\n              color = text_color;\n              anchor = `Middle;\n              baseline = `Middle;\n              angle = 0.;\n            }\n          :: !prims\n      end\n    done\n  done;\n  List.rev !prims\n\nlet emit_contour_mark sx sy plot_area theme ~data ~x0 ~x1 ~y0 ~y1 ~levels\n    ~filled ~cmap ~color ~line_width ~alpha =\n  let sf = theme.Theme.scale_factor in\n  let contours = Prepared.prepare_contour ~x0 ~x1 ~y0 ~y1 ~data ~levels in\n  let n_levels = List.length contours in\n  let prims = ref [] in\n  List.iteri\n    (fun i cp ->\n      let t = if n_levels <= 1 then 0.5 else float i /. 
float (n_levels - 1) in\n      let c =\n        match color with\n        | Some c -> c\n        | None ->\n            let cmap = Option.value ~default:theme.Theme.sequential cmap in\n            Cmap.eval cmap t\n      in\n      let c = match alpha with Some a -> Color.with_alpha a c | None -> c in\n      let lw = resolve_line_width sf theme line_width in\n      List.iter\n        (fun seg ->\n          let points =\n            Array.map\n              (fun (dx, dy) ->\n                ( data_to_pixel_x sx plot_area dx,\n                  data_to_pixel_y sy plot_area dy ))\n              seg\n          in\n          if filled then\n            prims :=\n              Scene.Path\n                {\n                  points;\n                  close = true;\n                  fill = Some c;\n                  stroke = None;\n                  line_width = 0.;\n                  dash = [];\n                }\n              :: !prims\n          else\n            prims :=\n              Scene.Path\n                {\n                  points;\n                  close = false;\n                  fill = None;\n                  stroke = Some c;\n                  line_width = lw;\n                  dash = [];\n                }\n              :: !prims)\n        cp.Prepared.paths)\n    contours;\n  List.rev !prims\n\nlet emit_mark sx sy plot_area theme = function\n  | Spec.Line\n      { x; y; color; line_width; line_style; step; marker; label = _; alpha } ->\n      emit_line_mark sx sy plot_area theme ~x ~y ~color ~line_width ~line_style\n        ~step ~marker ~alpha\n  | Spec.Point\n      { x; y; color; color_by; size; size_by; marker; label = _; alpha } ->\n      emit_point_mark sx sy plot_area theme ~x ~y ~color ~color_by ~size\n        ~size_by ~marker ~alpha\n  | Spec.Bar { x; height; width; bottom; color; label = _; alpha } ->\n      emit_bar_mark sx sy plot_area theme ~x ~height ~width ~bottom ~color\n        ~alpha\n  | Spec.Hist _ ->\n      failwith\n        
\"resolve: Spec.Hist reached emit_mark; should have been normalized to \\\n         Bar by Prepared.compile\"\n  | Spec.Image { data; extent } -> emit_image_mark sx sy plot_area ~data ~extent\n  | Spec.Text_mark { x; y; content; color; font_size } ->\n      emit_text_mark sx sy plot_area theme ~x ~y ~content ~color ~font_size\n  | Spec.Hline { y; color; line_width; line_style; label = _; alpha } ->\n      emit_hline_mark sy plot_area theme ~y ~color ~line_width ~line_style\n        ~alpha\n  | Spec.Vline { x; color; line_width; line_style; label = _; alpha } ->\n      emit_vline_mark sx plot_area theme ~x ~color ~line_width ~line_style\n        ~alpha\n  | Spec.Abline\n      { slope; intercept; color; line_width; line_style; label = _; alpha } ->\n      emit_abline_mark sx sy plot_area theme ~slope ~intercept ~color\n        ~line_width ~line_style ~alpha\n  | Spec.Fill_between { x; y1; y2; where; color; alpha; label = _ } ->\n      emit_fill_between_mark sx sy plot_area ~x ~y1 ~y2 ~where ~color ~alpha\n  | Spec.Hspan { y0; y1; color; alpha; label = _ } ->\n      emit_hspan_mark sy plot_area ~y0 ~y1 ~color ~alpha\n  | Spec.Vspan { x0; x1; color; alpha; label = _ } ->\n      emit_vspan_mark sx plot_area ~x0 ~x1 ~color ~alpha\n  | Spec.Errorbar\n      { x; y; yerr; xerr; color; line_width; cap_size; label = _; alpha } ->\n      emit_errorbar_mark sx sy plot_area theme ~x ~y ~yerr ~xerr ~color\n        ~line_width ~cap_size ~alpha\n  | Spec.Heatmap { data; cmap; annotate; vmin; vmax; fmt } ->\n      emit_heatmap_mark sx sy plot_area theme ~data ~cmap ~annotate ~vmin ~vmax\n        ~fmt\n  | Spec.Imshow _ ->\n      failwith\n        \"resolve: Spec.Imshow reached emit_mark; should have been normalized \\\n         to Image by Prepared.compile\"\n  | Spec.Contour\n      {\n        data;\n        x0;\n        x1;\n        y0;\n        y1;\n        levels;\n        filled;\n        cmap;\n        color;\n        line_width;\n        label = _;\n        alpha;\n      } 
->\n      emit_contour_mark sx sy plot_area theme ~data ~x0 ~x1 ~y0 ~y1 ~levels\n        ~filled ~cmap ~color ~line_width ~alpha\n\n(* Axis primitives *)\n\nlet scaled_font (theme : Theme.t) (f : Theme.font) =\n  { f with size = f.size *. theme.scale_factor }\n\nlet emit_axes ~text_measurer sx sy plot_area (theme : Theme.t) ~xticks ~yticks\n    (pp : Prepared.panel) =\n  let sf = theme.scale_factor in\n  let prims = ref [] in\n  let axis_color = theme.axis.color in\n  let lw = theme.axis.width *. sf in\n  let tl = theme.tick_length *. sf in\n  List.iter\n    (fun (v, label) ->\n      let px = data_to_pixel_x sx plot_area v in\n      let by = plot_area.ry +. plot_area.rh in\n      prims :=\n        Scene.Path\n          {\n            points = [| (px, by); (px, by +. tl) |];\n            close = false;\n            fill = None;\n            stroke = Some axis_color;\n            line_width = lw;\n            dash = [];\n          }\n        :: !prims;\n      let font = scaled_font theme theme.font_tick in\n      prims :=\n        Scene.Text\n          {\n            x = px;\n            y = by +. tl +. (8. *. sf);\n            content = label;\n            font;\n            color = axis_color;\n            anchor = `Middle;\n            baseline = `Top;\n            angle = 0.;\n          }\n        :: !prims)\n    xticks;\n\n  (* Y ticks *)\n  List.iter\n    (fun (v, label) ->\n      let py = data_to_pixel_y sy plot_area v in\n      let lx = plot_area.rx in\n      prims :=\n        Scene.Path\n          {\n            points = [| (lx -. tl, py); (lx, py) |];\n            close = false;\n            fill = None;\n            stroke = Some axis_color;\n            line_width = lw;\n            dash = [];\n          }\n        :: !prims;\n      let font = scaled_font theme theme.font_tick in\n      prims :=\n        Scene.Text\n          {\n            x = lx -. tl -. (8. *. 
sf);\n            y = py;\n            content = label;\n            font;\n            color = axis_color;\n            anchor = `End;\n            baseline = `Middle;\n            angle = 0.;\n          }\n        :: !prims)\n    yticks;\n\n  (* Grid *)\n  let show_grid = Option.value ~default:(theme.grid <> None) pp.grid_visible in\n  begin match theme.grid with\n  | Some grid_line when show_grid ->\n      List.iter\n        (fun (v, _) ->\n          let px = data_to_pixel_x sx plot_area v in\n          prims :=\n            Scene.Path\n              {\n                points =\n                  [| (px, plot_area.ry); (px, plot_area.ry +. plot_area.rh) |];\n                close = false;\n                fill = None;\n                stroke = Some grid_line.color;\n                line_width = grid_line.width *. sf;\n                dash = grid_line.dash;\n              }\n            :: !prims)\n        xticks;\n      List.iter\n        (fun (v, _) ->\n          let py = data_to_pixel_y sy plot_area v in\n          prims :=\n            Scene.Path\n              {\n                points =\n                  [| (plot_area.rx, py); (plot_area.rx +. plot_area.rw, py) |];\n                close = false;\n                fill = None;\n                stroke = Some grid_line.color;\n                line_width = grid_line.width *. sf;\n                dash = grid_line.dash;\n              }\n            :: !prims)\n        yticks\n  | _ -> ()\n  end;\n\n  (* Axis border *)\n  let show_frame = Option.value ~default:true pp.frame_visible in\n  if show_frame then begin\n    let lx = plot_area.rx and ty = plot_area.ry in\n    let rx = lx +. plot_area.rw and by = ty +. 
plot_area.rh in\n    prims :=\n      Scene.Path\n        {\n          points = [| (lx, ty); (rx, ty); (rx, by); (lx, by) |];\n          close = true;\n          fill = None;\n          stroke = Some axis_color;\n          line_width = lw;\n          dash = [];\n        }\n      :: !prims\n  end;\n\n  (* Title *)\n  begin match pp.title with\n  | Some s ->\n      let font = scaled_font theme theme.font_title in\n      let cx = plot_area.rx +. (plot_area.rw /. 2.) in\n      prims :=\n        Scene.Text\n          {\n            x = cx;\n            y = plot_area.ry -. (theme.title_gap *. sf);\n            content = s;\n            font;\n            color = axis_color;\n            anchor = `Middle;\n            baseline = `Bottom;\n            angle = 0.;\n          }\n        :: !prims\n  | None -> ()\n  end;\n\n  (* X label *)\n  begin match pp.x.label with\n  | Some s ->\n      let font = scaled_font theme theme.font_label in\n      let cx = plot_area.rx +. (plot_area.rw /. 2.) in\n      let _, tick_h =\n        text_measurer ~font:(scaled_font theme theme.font_tick) \"0\"\n      in\n      let y =\n        plot_area.ry +. plot_area.rh +. tl +. tick_h +. (theme.label_gap *. sf)\n      in\n      prims :=\n        Scene.Text\n          {\n            x = cx;\n            y;\n            content = s;\n            font;\n            color = axis_color;\n            anchor = `Middle;\n            baseline = `Top;\n            angle = 0.;\n          }\n        :: !prims\n  | None -> ()\n  end;\n\n  (* Y label *)\n  begin match pp.y.label with\n  | Some s ->\n      let font = scaled_font theme theme.font_label in\n      let tick_font = scaled_font theme theme.font_tick in\n      let max_ytick_w =\n        List.fold_left\n          (fun acc (_, label) ->\n            let w, _ = text_measurer ~font:tick_font label in\n            Float.max acc w)\n          0. yticks\n      in\n      let _, label_h = text_measurer ~font s in\n      let x =\n        plot_area.rx -. tl -. 
max_ytick_w -. (8. *. sf)\n        -. (theme.label_gap *. sf) -. (label_h /. 2.)\n      in\n      let y = plot_area.ry +. (plot_area.rh /. 2.) in\n      prims :=\n        Scene.Text\n          {\n            x;\n            y;\n            content = s;\n            font;\n            color = axis_color;\n            anchor = `Middle;\n            baseline = `Middle;\n            angle = Float.pi /. 2.;\n          }\n        :: !prims\n  | None -> ()\n  end;\n\n  List.rev !prims\n\n(* Legend *)\n\ntype legend_kind =\n  | Legend_line of Spec.line_style option * Spec.marker option\n  | Legend_point of Spec.marker\n  | Legend_bar\n  | Legend_ref_line of Spec.line_style option\n\nlet mark_label = function\n  | Spec.Line { label; _ }\n  | Spec.Point { label; _ }\n  | Spec.Bar { label; _ }\n  | Spec.Hist { label; _ }\n  | Spec.Hline { label; _ }\n  | Spec.Vline { label; _ }\n  | Spec.Abline { label; _ }\n  | Spec.Fill_between { label; _ }\n  | Spec.Hspan { label; _ }\n  | Spec.Vspan { label; _ }\n  | Spec.Errorbar { label; _ }\n  | Spec.Contour { label; _ } ->\n      label\n  | Spec.Image _ | Spec.Text_mark _ | Spec.Heatmap _ | Spec.Imshow _ -> None\n\nlet mark_legend_kind = function\n  | Spec.Line { line_style; marker; _ } -> Legend_line (line_style, marker)\n  | Spec.Point { marker; _ } ->\n      Legend_point (Option.value ~default:Spec.Circle marker)\n  | Spec.Bar _ | Spec.Hist _ | Spec.Fill_between _ | Spec.Hspan _ | Spec.Vspan _\n    ->\n      Legend_bar\n  | Spec.Hline { line_style; _ }\n  | Spec.Vline { line_style; _ }\n  | Spec.Abline { line_style; _ } ->\n      Legend_ref_line line_style\n  | Spec.Errorbar _ -> Legend_ref_line None\n  | Spec.Contour { filled = true; _ } -> Legend_bar\n  | Spec.Contour _ -> Legend_ref_line None\n  | _ -> Legend_bar\n\nlet emit_legend ~text_measurer ~loc ~ncol plot_area theme marks =\n  let sf = theme.Theme.scale_factor in\n  let entries =\n    List.filter_map\n      (fun m ->\n        match mark_label m with\n        | Some label 
->\n            let color =\n              match Prepared.mark_color m with\n              | Some c -> c\n              | None -> Color.black\n            in\n            Some (label, color, mark_legend_kind m)\n        | None -> None)\n      marks\n  in\n  if entries = [] then []\n  else begin\n    let font = scaled_font theme theme.font_tick in\n    let swatch_size = 10. *. sf in\n    let gap = 4. *. sf in\n    let line_h = Float.max swatch_size (font.size *. 1.2) in\n    let margin = 8. *. sf in\n    let ncol = max 1 ncol in\n    let n_entries = List.length entries in\n    let nrows = (n_entries + ncol - 1) / ncol in\n    (* Compute per-column max label width *)\n    let col_widths = Array.make ncol 0. in\n    List.iteri\n      (fun i (label, _, _) ->\n        let col = i mod ncol in\n        let w, _ = text_measurer ~font label in\n        col_widths.(col) <- Float.max col_widths.(col) w)\n      entries;\n    let col_gap = 12. *. sf in\n    let col_w i = swatch_size +. gap +. col_widths.(i) in\n    let legend_w =\n      let total = ref 0. in\n      for i = 0 to ncol - 1 do\n        total := !total +. col_w i\n      done;\n      !total +. (col_gap *. float (ncol - 1))\n    in\n    let legend_h = (float nrows *. (line_h +. gap)) -. gap in\n    let loc = Option.value ~default:Spec.Upper_right loc in\n    (* x0 is the right edge of the legend area *)\n    let x0, y0 =\n      match loc with\n      | Spec.Upper_right ->\n          (plot_area.rx +. plot_area.rw -. margin, plot_area.ry +. margin)\n      | Spec.Upper_left ->\n          (plot_area.rx +. margin +. legend_w, plot_area.ry +. margin)\n      | Spec.Lower_right ->\n          ( plot_area.rx +. plot_area.rw -. margin,\n            plot_area.ry +. plot_area.rh -. margin -. legend_h )\n      | Spec.Lower_left ->\n          ( plot_area.rx +. margin +. legend_w,\n            plot_area.ry +. plot_area.rh -. margin -. legend_h )\n      | Spec.Center ->\n          ( plot_area.rx +. ((plot_area.rw +. legend_w) /. 
2.),\n            plot_area.ry +. ((plot_area.rh -. legend_h) /. 2.) )\n      | Spec.Right ->\n          ( plot_area.rx +. plot_area.rw -. margin,\n            plot_area.ry +. ((plot_area.rh -. legend_h) /. 2.) )\n      | Spec.Upper_center ->\n          ( plot_area.rx +. ((plot_area.rw +. legend_w) /. 2.),\n            plot_area.ry +. margin )\n      | Spec.Lower_center ->\n          ( plot_area.rx +. ((plot_area.rw +. legend_w) /. 2.),\n            plot_area.ry +. plot_area.rh -. margin -. legend_h )\n    in\n    (* Background box *)\n    let inner_pad = 6. *. sf in\n    let bg_x = x0 -. legend_w -. inner_pad in\n    let bg_y = y0 -. inner_pad in\n    let bg_w = legend_w +. (2. *. inner_pad) in\n    let bg_h = legend_h +. (2. *. inner_pad) in\n    let bg =\n      Scene.Path\n        {\n          points =\n            [|\n              (bg_x, bg_y);\n              (bg_x +. bg_w, bg_y);\n              (bg_x +. bg_w, bg_y +. bg_h);\n              (bg_x, bg_y +. bg_h);\n            |];\n          close = true;\n          fill = Some (Color.with_alpha 0.85 theme.background);\n          stroke = Some (Color.with_alpha 0.3 theme.axis.color);\n          line_width = 1. *. sf;\n          dash = [];\n        }\n    in\n    (* Compute column x-offsets (from right edge of legend) *)\n    let col_offsets = Array.make ncol 0. in\n    let acc = ref 0. in\n    for c = ncol - 1 downto 0 do\n      col_offsets.(c) <- !acc;\n      acc := !acc +. col_w c +. col_gap\n    done;\n    let prims = ref [ bg ] in\n    List.iteri\n      (fun i (label, color, kind) ->\n        let row = i / ncol in\n        let col = i mod ncol in\n        let y = y0 +. (float row *. (line_h +. gap)) in\n        let y_mid = y +. (swatch_size /. 2.) in\n        let cx0 = x0 -. col_offsets.(col) in\n        begin match kind with\n        | Legend_line (line_style, marker) ->\n            prims :=\n              Scene.Path\n                {\n                  points = [| (cx0 -. 
swatch_size, y_mid); (cx0, y_mid) |];\n                  close = false;\n                  fill = None;\n                  stroke = Some color;\n                  line_width = theme.line_width *. sf;\n                  dash = resolve_dash sf line_style;\n                }\n              :: !prims;\n            begin match marker with\n            | Some shape ->\n                let ms = 6. *. sf in\n                prims :=\n                  Scene.Markers\n                    {\n                      points = [| (cx0 -. (swatch_size /. 2.), y_mid) |];\n                      shape;\n                      size = ms;\n                      sizes = None;\n                      fill = Some color;\n                      fills = None;\n                      stroke = None;\n                    }\n                  :: !prims\n            | None -> ()\n            end\n        | Legend_point marker ->\n            let ms = 8. *. sf in\n            prims :=\n              Scene.Markers\n                {\n                  points = [| (cx0 -. (swatch_size /. 2.), y_mid) |];\n                  shape = marker;\n                  size = ms;\n                  sizes = None;\n                  fill = Some color;\n                  fills = None;\n                  stroke = None;\n                }\n              :: !prims\n        | Legend_bar ->\n            prims :=\n              Scene.Path\n                {\n                  points =\n                    [|\n                      (cx0 -. swatch_size, y);\n                      (cx0, y);\n                      (cx0, y +. swatch_size);\n                      (cx0 -. swatch_size, y +. 
swatch_size);\n                    |];\n                  close = true;\n                  fill = Some color;\n                  stroke = None;\n                  line_width = 0.;\n                  dash = [];\n                }\n              :: !prims\n        | Legend_ref_line line_style ->\n            prims :=\n              Scene.Path\n                {\n                  points = [| (cx0 -. swatch_size, y_mid); (cx0, y_mid) |];\n                  close = false;\n                  fill = None;\n                  stroke = Some color;\n                  line_width = theme.line_width *. sf;\n                  dash = resolve_dash sf line_style;\n                }\n              :: !prims\n        end;\n        prims :=\n          Scene.Text\n            {\n              x = cx0 -. swatch_size -. gap;\n              y = y_mid;\n              content = label;\n              font;\n              color = theme.axis.color;\n              anchor = `End;\n              baseline = `Middle;\n              angle = 0.;\n            }\n          :: !prims)\n      entries;\n    List.rev !prims\n  end\n\n(* Colorbar for color_by *)\n\nlet emit_colorbar plot_area (theme : Theme.t) ~height_frac (lo, hi) =\n  let sf = theme.scale_factor in\n  let font = scaled_font theme theme.font_tick in\n  let bar_w = 16. *. sf in\n  let bar_gap = 12. *. sf in\n  let bar_x = plot_area.rx +. plot_area.rw +. bar_gap in\n  let bar_y = plot_area.ry in\n  let bar_h = plot_area.rh *. height_frac in\n  (* Vertical gradient: series of thin horizontal strips *)\n  let n_strips = 64 in\n  let strip_h = bar_h /. float n_strips in\n  let strips =\n    List.init n_strips (fun i ->\n        let t = 1. -. (float i /. float (n_strips - 1)) in\n        let c = Cmap.eval theme.sequential t in\n        let sy = bar_y +. (float i *. strip_h) in\n        Scene.Path\n          {\n            points =\n              [|\n                (bar_x, sy);\n                (bar_x +. bar_w, sy);\n                (bar_x +. 
bar_w, sy +. strip_h +. 1.);\n                (bar_x, sy +. strip_h +. 1.);\n              |];\n            close = true;\n            fill = Some c;\n            stroke = None;\n            line_width = 0.;\n            dash = [];\n          })\n  in\n  (* Border around colorbar *)\n  let border =\n    Scene.Path\n      {\n        points =\n          [|\n            (bar_x, bar_y);\n            (bar_x +. bar_w, bar_y);\n            (bar_x +. bar_w, bar_y +. bar_h);\n            (bar_x, bar_y +. bar_h);\n          |];\n        close = true;\n        fill = None;\n        stroke = Some theme.axis.color;\n        line_width = theme.axis.width *. sf;\n        dash = [];\n      }\n  in\n  (* Tick labels along the right edge *)\n  let ticks = Ticks.generate `Linear ~lo ~hi () in\n  let range = hi -. lo in\n  let range = if range = 0. then 1. else range in\n  let label_x = bar_x +. bar_w +. (6. *. sf) in\n  let tick_prims =\n    List.filter_map\n      (fun (v, label) ->\n        let t = (v -. lo) /. range in\n        if t < -0.01 || t > 1.01 then None\n        else\n          let py = bar_y +. bar_h -. (t *. bar_h) in\n          Some\n            (Scene.Text\n               {\n                 x = label_x;\n                 y = py;\n                 content = label;\n                 font;\n                 color = theme.axis.color;\n                 anchor = `Start;\n                 baseline = `Middle;\n                 angle = 0.;\n               }))\n      ticks\n  in\n  strips @ [ border ] @ tick_prims\n\n(* Size guide for size_by *)\n\nlet emit_size_guide plot_area (theme : Theme.t) ~y_offset (lo, hi) =\n  let sf = theme.scale_factor in\n  let font = scaled_font theme theme.font_tick in\n  let guide_gap = 12. *. sf in\n  let guide_x = plot_area.rx +. plot_area.rw +. guide_gap in\n  let max_r = 12. *. sf in\n  (* Three representative sizes: max, mid, min *)\n  let values = [| hi; (lo +. hi) /. 2.; lo |] in\n  let range = hi -. lo in\n  let range = if range = 0. 
then 1. else range in\n  let prims = ref [] in\n  let cy = ref (plot_area.ry +. y_offset +. max_r +. (4. *. sf)) in\n  Array.iter\n    (fun v ->\n      let t = (v -. lo) /. range in\n      let size = ((max_r *. 0.3) +. (max_r *. 0.7 *. Float.sqrt t)) *. 2. in\n      let cx = guide_x +. max_r in\n      prims :=\n        Scene.Markers\n          {\n            points = [| (cx, !cy) |];\n            shape = Spec.Circle;\n            size;\n            sizes = None;\n            fill = Some (Color.with_alpha 0.2 theme.axis.color);\n            fills = None;\n            stroke = Some theme.axis.color;\n          }\n        :: !prims;\n      let label = Printf.sprintf \"%.4g\" v in\n      let label_x = cx +. max_r +. (6. *. sf) in\n      prims :=\n        Scene.Text\n          {\n            x = label_x;\n            y = !cy;\n            content = label;\n            font;\n            color = theme.axis.color;\n            anchor = `Start;\n            baseline = `Middle;\n            angle = 0.;\n          }\n        :: !prims;\n      cy := !cy +. (max_r *. 2.) +. (8. *. sf))\n    values;\n  List.rev !prims\n\n(* Compute layout padding *)\n\nlet compute_layout ~text_measurer (theme : Theme.t) (pp : Prepared.panel) xticks\n    yticks =\n  let sf = theme.scale_factor in\n  let tick_font = scaled_font theme theme.font_tick in\n  let label_font = scaled_font theme theme.font_label in\n  let title_font = scaled_font theme theme.font_title in\n  let tl = theme.tick_length *. sf in\n\n  (* Left padding: y-tick labels + gap + optional ylabel *)\n  let left =\n    let base = theme.padding *. sf in\n    match yticks with\n    | [] -> base\n    | _ ->\n        let max_ytick_w =\n          List.fold_left\n            (fun acc (_, label) ->\n              let w, _ = text_measurer ~font:tick_font label in\n              Float.max acc w)\n            0. yticks\n        in\n        base +. max_ytick_w +. tl +. (8. *. 
sf)\n  in\n  let left =\n    match pp.y.label with\n    | Some s ->\n        let _, h = text_measurer ~font:label_font s in\n        left +. h +. (theme.label_gap *. sf)\n    | None -> left\n  in\n\n  (* Bottom padding: x-tick labels + gap + optional xlabel *)\n  let bottom =\n    let base = theme.padding *. sf in\n    match xticks with\n    | [] -> base\n    | _ ->\n        let _, tick_h = text_measurer ~font:tick_font \"0\" in\n        base +. tick_h +. tl +. (8. *. sf)\n  in\n  let bottom =\n    match pp.x.label with\n    | Some s ->\n        let _, h = text_measurer ~font:label_font s in\n        bottom +. h +. (theme.label_gap *. sf)\n    | None -> bottom\n  in\n\n  (* Top padding: title *)\n  let top = theme.padding *. sf in\n  let top =\n    match pp.title with\n    | Some s ->\n        let _, h = text_measurer ~font:title_font s in\n        top +. h +. (theme.title_gap *. sf)\n    | None -> top\n  in\n\n  (* Right padding — extra space for colorbar / size guide *)\n  let right =\n    let base = theme.padding *. sf in\n    let colorbar_w =\n      match pp.colorbar_range with\n      | Some (lo, hi) ->\n          let bar_w = 16. *. sf in\n          let bar_gap = 12. *. sf in\n          let ticks = Ticks.generate `Linear ~lo ~hi () in\n          let max_label_w =\n            List.fold_left\n              (fun acc (_, label) ->\n                let w, _ = text_measurer ~font:tick_font label in\n                Float.max acc w)\n              0. ticks\n          in\n          bar_gap +. bar_w +. (6. *. sf) +. max_label_w +. (4. *. sf)\n      | None -> 0.\n    in\n    let size_guide_w =\n      match pp.size_by_range with\n      | Some (lo, hi) ->\n          let guide_gap = 12. *. sf in\n          let max_r = 12. *. sf in\n          let mid = (lo +. hi) /. 2. 
in\n          let max_label_w =\n            List.fold_left\n              (fun acc v ->\n                let w, _ =\n                  text_measurer ~font:tick_font (Printf.sprintf \"%.4g\" v)\n                in\n                Float.max acc w)\n              0. [ lo; mid; hi ]\n          in\n          guide_gap +. (max_r *. 2.) +. (6. *. sf) +. max_label_w +. (4. *. sf)\n      | None -> 0.\n    in\n    base +. Float.max colorbar_w size_guide_w\n  in\n\n  (left, top, right, bottom)\n\n(* Resolve a single prepared panel *)\n\nlet resolve_panel ~text_measurer theme region (pp : Prepared.panel) =\n  let theme = Option.value ~default:theme pp.theme_override in\n\n  let sx, xticks = Axis.make_scale_and_ticks pp.x in\n  let sy, yticks = Axis.make_scale_and_ticks pp.y in\n\n  let left, top, right, bottom =\n    compute_layout ~text_measurer theme pp xticks yticks\n  in\n\n  let plot_area =\n    {\n      rx = region.rx +. left;\n      ry = region.ry +. top;\n      rw = Float.max 1. (region.rw -. left -. right);\n      rh = Float.max 1. (region.rh -. top -. bottom);\n    }\n  in\n\n  (* Background *)\n  let bg =\n    Scene.Path\n      {\n        points =\n          [|\n            (region.rx, region.ry);\n            (region.rx +. region.rw, region.ry);\n            (region.rx +. region.rw, region.ry +. region.rh);\n            (region.rx, region.ry +. 
region.rh);\n          |];\n        close = true;\n        fill = Some theme.background;\n        stroke = None;\n        line_width = 0.;\n        dash = [];\n      }\n  in\n\n  (* Axes decorations *)\n  let axes_prims =\n    emit_axes ~text_measurer sx sy plot_area theme ~xticks ~yticks pp\n  in\n\n  (* Data marks inside clip region *)\n  let data_prims = List.concat_map (emit_mark sx sy plot_area theme) pp.marks in\n  let clipped_data =\n    Scene.Clip\n      {\n        x = plot_area.rx;\n        y = plot_area.ry;\n        w = plot_area.rw;\n        h = plot_area.rh;\n        children = data_prims;\n      }\n  in\n\n  (* Legend *)\n  let legend_prims =\n    emit_legend ~text_measurer ~loc:pp.legend_loc ~ncol:pp.legend_ncol plot_area\n      theme pp.marks\n  in\n\n  (* Colorbar for color_by *)\n  let has_both = pp.colorbar_range <> None && pp.size_by_range <> None in\n  let colorbar_prims =\n    match pp.colorbar_range with\n    | Some range ->\n        let height_frac = if has_both then 0.55 else 1. in\n        emit_colorbar plot_area theme ~height_frac range\n    | None -> []\n  in\n\n  (* Size guide for size_by *)\n  let size_guide_prims =\n    match pp.size_by_range with\n    | Some range ->\n        let y_offset = if has_both then plot_area.rh *. 0.6 else 0. in\n        emit_size_guide plot_area theme ~y_offset range\n    | None -> []\n  in\n\n  [ bg; clipped_data ] @ axes_prims @ legend_prims @ colorbar_prims\n  @ size_guide_prims\n\n(* Resolve a prepared grid layout *)\n\nlet resolve_grid ~resolve_prepared ~text_measurer theme region rows gap =\n  let nrows = List.length rows in\n  let ncols =\n    List.fold_left (fun acc row -> max acc (List.length row)) 0 rows\n  in\n  if nrows = 0 || ncols = 0 then []\n  else begin\n    let cell_w = (region.rw -. (gap *. float (ncols - 1))) /. float ncols in\n    let cell_h = (region.rh -. (gap *. float (nrows - 1))) /. 
float nrows in\n    let prims = ref [] in\n    List.iteri\n      (fun ri row ->\n        List.iteri\n          (fun ci prepared ->\n            let cell_region =\n              {\n                rx = region.rx +. (float ci *. (cell_w +. gap));\n                ry = region.ry +. (float ri *. (cell_h +. gap));\n                rw = cell_w;\n                rh = cell_h;\n              }\n            in\n            let p =\n              resolve_prepared ~text_measurer theme cell_region prepared\n            in\n            prims := List.rev_append p !prims)\n          row)\n      rows;\n    List.rev !prims\n  end\n\n(* Grid-level decorations *)\n\nlet emit_grid_decorations ~text_measurer theme region\n    (gd : Prepared.grid_decorations) all_marks =\n  let sf = theme.Theme.scale_factor in\n  let color = theme.axis.color in\n  let prims = ref [] in\n  let r = ref region in\n\n  (* Title: above grid *)\n  begin match gd.gd_title with\n  | Some s ->\n      let font = scaled_font theme theme.font_title in\n      let _, title_h = text_measurer ~font s in\n      let title_gap = theme.title_gap *. sf in\n      prims :=\n        Scene.Text\n          {\n            x = !r.rx +. (!r.rw /. 2.);\n            y = !r.ry +. title_h;\n            content = s;\n            font;\n            color;\n            anchor = `Middle;\n            baseline = `Bottom;\n            angle = 0.;\n          }\n        :: !prims;\n      let used = title_h +. title_gap in\n      r := { !r with ry = !r.ry +. used; rh = !r.rh -. used }\n  | None -> ()\n  end;\n\n  (* Xlabel: below grid *)\n  begin match gd.gd_xlabel with\n  | Some s ->\n      let font = scaled_font theme theme.font_label in\n      let _, label_h = text_measurer ~font s in\n      let label_gap = theme.label_gap *. sf in\n      let used = label_h +. label_gap in\n      prims :=\n        Scene.Text\n          {\n            x = !r.rx +. (!r.rw /. 2.);\n            y = !r.ry +. !r.rh -. 
label_gap;\n            content = s;\n            font;\n            color;\n            anchor = `Middle;\n            baseline = `Bottom;\n            angle = 0.;\n          }\n        :: !prims;\n      r := { !r with rh = !r.rh -. used }\n  | None -> ()\n  end;\n\n  (* Ylabel: left of grid, rotated *)\n  begin match gd.gd_ylabel with\n  | Some s ->\n      let font = scaled_font theme theme.font_label in\n      let _, label_h = text_measurer ~font s in\n      let label_gap = theme.label_gap *. sf in\n      let used = label_h +. label_gap in\n      prims :=\n        Scene.Text\n          {\n            x = !r.rx +. (label_h /. 2.);\n            y = !r.ry +. (!r.rh /. 2.);\n            content = s;\n            font;\n            color;\n            anchor = `Middle;\n            baseline = `Middle;\n            angle = Float.pi /. 2.;\n          }\n        :: !prims;\n      r := { !r with rx = !r.rx +. used; rw = !r.rw -. used }\n  | None -> ()\n  end;\n\n  (* Shared legend *)\n  let legend_prims =\n    match gd.gd_legend_loc with\n    | Some loc ->\n        emit_legend ~text_measurer ~loc:(Some loc)\n          ~ncol:gd.Prepared.gd_legend_ncol !r theme all_marks\n    | None -> []\n  in\n\n  (List.rev !prims, legend_prims, !r)\n\n(* Top-level resolve from Prepared.t *)\n\nlet rec resolve_tree ~text_measurer theme region = function\n  | Prepared.Panel pp -> resolve_panel ~text_measurer theme region pp\n  | Prepared.Grid { rows; gap } ->\n      let gap_px = gap *. 
Float.min region.rw region.rh in\n      resolve_grid ~resolve_prepared:resolve_tree ~text_measurer theme region\n        rows gap_px\n  | Prepared.Decorated_grid { decorations; inner; all_marks } ->\n      let theme = Option.value ~default:theme decorations.gd_theme_override in\n      let dec_prims, legend_prims, grid_region =\n        emit_grid_decorations ~text_measurer theme region decorations all_marks\n      in\n      dec_prims\n      @ resolve_tree ~text_measurer theme grid_region inner\n      @ legend_prims\n\nlet resolve_prepared ~text_measurer ~theme ~width ~height prepared =\n  let region = { rx = 0.; ry = 0.; rw = width; rh = height } in\n  let primitives = resolve_tree ~text_measurer theme region prepared in\n  { Scene.width; height; primitives }\n\n(* Convenience: compile + resolve in one step *)\n\nlet resolve ~text_measurer ~theme ~width ~height spec =\n  let prepared = Prepared.compile ~theme spec in\n  resolve_prepared ~text_measurer ~theme ~width ~height prepared\n"
  },
  {
    "path": "packages/hugin/lib/resolve.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Specification to scene resolution.\n\n    {b Internal module.} Walks a {!Spec.t} tree, computes data bounds and\n    layout, and emits a {!Scene.t} with all coordinates in device pixels. *)\n\ntype text_measurer = font:Theme.font -> string -> float * float\n(** The type for text measurers. Returns [(width, height)] for a string rendered\n    in the given font. *)\n\nval resolve_prepared :\n  text_measurer:text_measurer ->\n  theme:Theme.t ->\n  width:float ->\n  height:float ->\n  Prepared.t ->\n  Scene.t\n(** [resolve_prepared ~text_measurer ~theme ~width ~height prepared] is the\n    resolved scene for [prepared] at the given dimensions. Layout-only work\n    (pixel coordinates, text measurement) is done here; data work is already\n    done in {!Prepared.compile}. *)\n\nval resolve :\n  text_measurer:text_measurer ->\n  theme:Theme.t ->\n  width:float ->\n  height:float ->\n  Spec.t ->\n  Scene.t\n(** [resolve ~text_measurer ~theme ~width ~height spec] is the resolved scene\n    for [spec] at the given dimensions. Convenience wrapper that calls\n    {!Prepared.compile} then {!resolve_prepared}. *)\n"
  },
  {
    "path": "packages/hugin/lib/scale.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Data-to-unit mapping functions *)\n\ntype t = {\n  to_unit : float -> float;\n  from_unit : float -> float;\n  lo : float;\n  hi : float;\n}\n\nlet maybe_invert invert to_unit from_unit =\n  if invert then ((fun v -> 1. -. to_unit v), fun u -> from_unit (1. -. u))\n  else (to_unit, from_unit)\n\nlet linear ?(invert = false) ~lo ~hi () =\n  let range = hi -. lo in\n  let range = if range = 0. then 1. else range in\n  let to_unit, from_unit =\n    maybe_invert invert\n      (fun v -> (v -. lo) /. range)\n      (fun u -> lo +. (u *. range))\n  in\n  { to_unit; from_unit; lo; hi }\n\nlet log ?(invert = false) ~lo ~hi () =\n  let lo_log = Float.log10 (Float.max 1e-300 lo) in\n  let hi_log = Float.log10 (Float.max 1e-300 hi) in\n  let range = hi_log -. lo_log in\n  let range = if range = 0. then 1. else range in\n  let to_unit, from_unit =\n    maybe_invert invert\n      (fun v ->\n        if v <= 0. then Float.nan else (Float.log10 v -. lo_log) /. range)\n      (fun u -> Float.pow 10. (lo_log +. (u *. range)))\n  in\n  { to_unit; from_unit; lo; hi }\n\nlet sqrt ?(invert = false) ~lo ~hi () =\n  let lo_s = Float.sqrt (Float.max 0. lo) in\n  let hi_s = Float.sqrt (Float.max 0. hi) in\n  let range = hi_s -. lo_s in\n  let range = if range = 0. then 1. else range in\n  let to_unit, from_unit =\n    maybe_invert invert\n      (fun v -> (Float.sqrt (Float.max 0. v) -. lo_s) /. range)\n      (fun u ->\n        let s = lo_s +. (u *. range) in\n        s *. s)\n  in\n  { to_unit; from_unit; lo; hi }\n\nlet asinh ?(invert = false) ~lo ~hi () =\n  let lo_a = Float.asinh lo in\n  let hi_a = Float.asinh hi in\n  let range = hi_a -. lo_a in\n  let range = if range = 0. then 1. 
else range in\n  let to_unit, from_unit =\n    maybe_invert invert\n      (fun v -> (Float.asinh v -. lo_a) /. range)\n      (fun u ->\n        let a = lo_a +. (u *. range) in\n        Float.sinh a)\n  in\n  { to_unit; from_unit; lo; hi }\n\nlet symlog ?(invert = false) ~linthresh ~lo ~hi () =\n  let transform v =\n    if Float.abs v <= linthresh then v /. linthresh\n    else Float.copy_sign (1. +. Float.log10 (Float.abs v /. linthresh)) v\n  in\n  let inv_transform v =\n    if Float.abs v <= 1. then v *. linthresh\n    else Float.copy_sign (linthresh *. Float.pow 10. (Float.abs v -. 1.)) v\n  in\n  let lo_t = transform lo in\n  let hi_t = transform hi in\n  let range = hi_t -. lo_t in\n  let range = if range = 0. then 1. else range in\n  let to_unit, from_unit =\n    maybe_invert invert\n      (fun v -> (transform v -. lo_t) /. range)\n      (fun u -> inv_transform (lo_t +. (u *. range)))\n  in\n  { to_unit; from_unit; lo; hi }\n\nlet make ?(invert = false) kind ~lo ~hi () =\n  match kind with\n  | `Linear -> linear ~invert ~lo ~hi ()\n  | `Log -> log ~invert ~lo ~hi ()\n  | `Sqrt -> sqrt ~invert ~lo ~hi ()\n  | `Asinh -> asinh ~invert ~lo ~hi ()\n  | `Symlog linthresh -> symlog ~invert ~linthresh ~lo ~hi ()\n"
  },
  {
    "path": "packages/hugin/lib/scale.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Data-to-unit mapping functions.\n\n    {b Internal module.} Maps data-space values to the unit interval [[0, 1]]\n    for linear, logarithmic, square-root, inverse-hyperbolic-sine, and\n    symmetric-log scales. When [~invert] is [true], the mapping is reversed\n    ([lo] maps to [1] and [hi] to [0]). *)\n\ntype t = {\n  to_unit : float -> float;  (** [to_unit v] maps data value [v] to [[0, 1]]. *)\n  from_unit : float -> float;\n      (** [from_unit u] maps unit value [u] back to data space. *)\n  lo : float;  (** Lower bound in data space. *)\n  hi : float;  (** Upper bound in data space. *)\n}\n(** The type for scales. *)\n\nval linear : ?invert:bool -> lo:float -> hi:float -> unit -> t\n(** [linear ~lo ~hi ()] is a linear scale over [[lo, hi]]. *)\n\nval log : ?invert:bool -> lo:float -> hi:float -> unit -> t\n(** [log ~lo ~hi ()] is a base-10 logarithmic scale over [[lo, hi]]. *)\n\nval sqrt : ?invert:bool -> lo:float -> hi:float -> unit -> t\n(** [sqrt ~lo ~hi ()] is a square-root scale over [[lo, hi]]. Values below zero\n    are clamped. *)\n\nval asinh : ?invert:bool -> lo:float -> hi:float -> unit -> t\n(** [asinh ~lo ~hi ()] is an inverse-hyperbolic-sine scale over [[lo, hi]].\n    Transitions smoothly from linear near zero to logarithmic at large absolute\n    values. Handles negative values. *)\n\nval symlog :\n  ?invert:bool -> linthresh:float -> lo:float -> hi:float -> unit -> t\n(** [symlog ~linthresh ~lo ~hi ()] is a symmetric logarithmic scale. Linear\n    within \\[[-linthresh];[linthresh]\\], logarithmic outside. 
*)\n\nval make :\n  ?invert:bool ->\n  [ `Linear | `Log | `Sqrt | `Asinh | `Symlog of float ] ->\n  lo:float ->\n  hi:float ->\n  unit ->\n  t\n(** [make kind ~lo ~hi ()] is a scale of the given [kind]. *)\n"
  },
  {
    "path": "packages/hugin/lib/scene.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Scene IR — resolved primitives in device pixels *)\n\ntype primitive =\n  | Path of {\n      points : (float * float) array;\n      close : bool;\n      fill : Color.t option;\n      stroke : Color.t option;\n      line_width : float;\n      dash : float list;\n    }\n  | Markers of {\n      points : (float * float) array;\n      shape : Spec.marker;\n      size : float;\n      sizes : float array option;\n      fill : Color.t option;\n      fills : Color.t array option;\n      stroke : Color.t option;\n    }\n  | Text of {\n      x : float;\n      y : float;\n      content : string;\n      font : Theme.font;\n      color : Color.t;\n      anchor : [ `Start | `Middle | `End ];\n      baseline : [ `Top | `Middle | `Bottom ];\n      angle : float;\n    }\n  | Image of { x : float; y : float; w : float; h : float; data : Nx.uint8_t }\n  | Clip of {\n      x : float;\n      y : float;\n      w : float;\n      h : float;\n      children : primitive list;\n    }\n  | Group of primitive list\n\ntype t = { width : float; height : float; primitives : primitive list }\n\nlet rec fold_primitive f acc = function\n  | Group children | Clip { children; _ } ->\n      List.fold_left (fold_primitive f) acc children\n  | p -> f acc p\n\nlet fold f scene acc = List.fold_left (fold_primitive f) acc scene.primitives\n"
  },
  {
    "path": "packages/hugin/lib/scene.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Scene intermediate representation.\n\n    {b Internal module.} Resolved drawing primitives in device-pixel\n    coordinates. All data-space concepts are gone; backends fold over these\n    primitives to produce output. *)\n\n(** {1:types Types} *)\n\ntype primitive =\n  | Path of {\n      points : (float * float) array;\n      close : bool;\n      fill : Color.t option;\n      stroke : Color.t option;\n      line_width : float;\n      dash : float list;\n    }\n  | Markers of {\n      points : (float * float) array;\n      shape : Spec.marker;\n      size : float;\n      sizes : float array option;\n      fill : Color.t option;\n      fills : Color.t array option;\n      stroke : Color.t option;\n    }\n  | Text of {\n      x : float;\n      y : float;\n      content : string;\n      font : Theme.font;\n      color : Color.t;\n      anchor : [ `Start | `Middle | `End ];\n      baseline : [ `Top | `Middle | `Bottom ];\n      angle : float;\n    }\n  | Image of { x : float; y : float; w : float; h : float; data : Nx.uint8_t }\n  | Clip of {\n      x : float;\n      y : float;\n      w : float;\n      h : float;\n      children : primitive list;\n    }\n  | Group of primitive list  (** The type for drawing primitives. *)\n\ntype t = { width : float; height : float; primitives : primitive list }\n(** The type for resolved scenes. *)\n\n(** {1:traversal Traversal} *)\n\nval fold : ('a -> primitive -> 'a) -> t -> 'a -> 'a\n(** [fold f scene acc] folds [f] over every leaf primitive in [scene],\n    descending into {!Clip} and {!Group} nodes. *)\n"
  },
  {
    "path": "packages/hugin/lib/spec.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\ntype line_style = [ `Solid | `Dashed | `Dotted | `Dash_dot ]\ntype marker = Circle | Square | Triangle | Plus | Star\n\ntype legend_loc =\n  | Upper_right\n  | Upper_left\n  | Lower_right\n  | Lower_left\n  | Center\n  | Right\n  | Upper_center\n  | Lower_center\n\ntype scale = [ `Linear | `Log | `Sqrt | `Asinh | `Symlog of float ]\ntype stretch = [ `Linear | `Log | `Sqrt | `Asinh | `Power of float ]\n\ntype mark =\n  | Line of {\n      x : Nx.float32_t;\n      y : Nx.float32_t;\n      color : Color.t option;\n      line_width : float option;\n      line_style : line_style option;\n      step : [ `Pre | `Post | `Mid ] option;\n      marker : marker option;\n      label : string option;\n      alpha : float option;\n    }\n  | Point of {\n      x : Nx.float32_t;\n      y : Nx.float32_t;\n      color : Color.t option;\n      color_by : Nx.float32_t option;\n      size : float option;\n      size_by : Nx.float32_t option;\n      marker : marker option;\n      label : string option;\n      alpha : float option;\n    }\n  | Bar of {\n      x : Nx.float32_t;\n      height : Nx.float32_t;\n      width : float option;\n      bottom : float;\n      color : Color.t option;\n      label : string option;\n      alpha : float option;\n    }\n  | Hist of {\n      x : Nx.float32_t;\n      bins : [ `Num of int | `Edges of float array ];\n      density : bool;\n      color : Color.t option;\n      label : string option;\n    }\n  | Image of {\n      data : Nx.uint8_t;\n      extent : (float * float * float * float) option;\n    }\n  | Text_mark of {\n      x : float;\n      y : float;\n      content : string;\n      color : Color.t option;\n      font_size : float option;\n    }\n  | Hline of {\n      y : float;\n  
    color : Color.t option;\n      line_width : float option;\n      line_style : line_style option;\n      label : string option;\n      alpha : float option;\n    }\n  | Vline of {\n      x : float;\n      color : Color.t option;\n      line_width : float option;\n      line_style : line_style option;\n      label : string option;\n      alpha : float option;\n    }\n  | Abline of {\n      slope : float;\n      intercept : float;\n      color : Color.t option;\n      line_width : float option;\n      line_style : line_style option;\n      label : string option;\n      alpha : float option;\n    }\n  | Fill_between of {\n      x : Nx.float32_t;\n      y1 : Nx.float32_t;\n      y2 : Nx.float32_t;\n      where : Nx.float32_t option;\n      color : Color.t option;\n      alpha : float option;\n      label : string option;\n    }\n  | Hspan of {\n      y0 : float;\n      y1 : float;\n      color : Color.t option;\n      alpha : float option;\n      label : string option;\n    }\n  | Vspan of {\n      x0 : float;\n      x1 : float;\n      color : Color.t option;\n      alpha : float option;\n      label : string option;\n    }\n  | Errorbar of {\n      x : Nx.float32_t;\n      y : Nx.float32_t;\n      yerr :\n        [ `Symmetric of Nx.float32_t\n        | `Asymmetric of Nx.float32_t * Nx.float32_t ];\n      xerr :\n        [ `Symmetric of Nx.float32_t\n        | `Asymmetric of Nx.float32_t * Nx.float32_t ]\n        option;\n      color : Color.t option;\n      line_width : float option;\n      cap_size : float option;\n      label : string option;\n      alpha : float option;\n    }\n  | Heatmap of {\n      data : Nx.float32_t;\n      cmap : Cmap.t option;\n      annotate : bool;\n      vmin : float option;\n      vmax : float option;\n      fmt : (float -> string) option;\n    }\n  | Imshow of {\n      data : Nx.float32_t;\n      stretch : stretch;\n      cmap : Cmap.t option;\n      vmin : float option;\n      vmax : float option;\n    }\n  | Contour of {\n      
data : Nx.float32_t;\n      x0 : float;\n      x1 : float;\n      y0 : float;\n      y1 : float;\n      levels : [ `Num of int | `Values of float array ];\n      filled : bool;\n      cmap : Cmap.t option;\n      color : Color.t option;\n      line_width : float option;\n      label : string option;\n      alpha : float option;\n    }\n\ntype decoration =\n  | Title of string\n  | Xlabel of string\n  | Ylabel of string\n  | Xlim of float * float\n  | Ylim of float * float\n  | Xscale of scale\n  | Yscale of scale\n  | Xinvert\n  | Yinvert\n  | Grid_visible of bool\n  | Legend of legend_loc * int\n  | Xticks of (float * string) list\n  | Yticks of (float * string) list\n  | With_theme of Theme.t\n  | Xtick_format of (float -> string)\n  | Ytick_format of (float -> string)\n  | Frame of bool\n\ntype t =\n  | Mark of mark\n  | Layers of t list\n  | Decorated of { inner : t; decorations : decoration list }\n  | Grid of { rows : t list list; gap : float }\n\n(* Mark constructors *)\n\nlet line ~x ~y ?color ?line_width ?line_style ?step ?marker ?label ?alpha () =\n  Mark\n    (Line { x; y; color; line_width; line_style; step; marker; label; alpha })\n\nlet point ~x ~y ?color ?color_by ?size ?size_by ?marker ?label ?alpha () =\n  Mark (Point { x; y; color; color_by; size; size_by; marker; label; alpha })\n\nlet bar ~x ~height ?width ?(bottom = 0.) 
?color ?label ?alpha () =\n  Mark (Bar { x; height; width; bottom; color; label; alpha })\n\nlet hist ~x ?(bins = `Num 10) ?(density = false) ?color ?label () =\n  Mark (Hist { x; bins; density; color; label })\n\nlet image ?extent data = Mark (Image { data; extent })\n\nlet text ~x ~y s ?color ?font_size () =\n  Mark (Text_mark { x; y; content = s; color; font_size })\n\nlet hline ~y ?color ?line_width ?line_style ?label ?alpha () =\n  Mark (Hline { y; color; line_width; line_style; label; alpha })\n\nlet vline ~x ?color ?line_width ?line_style ?label ?alpha () =\n  Mark (Vline { x; color; line_width; line_style; label; alpha })\n\nlet abline ~slope ~intercept ?color ?line_width ?line_style ?label ?alpha () =\n  Mark\n    (Abline { slope; intercept; color; line_width; line_style; label; alpha })\n\nlet fill_between ~x ~y1 ~y2 ?where ?color ?alpha ?label () =\n  Mark (Fill_between { x; y1; y2; where; color; alpha; label })\n\nlet hspan ~y0 ~y1 ?color ?alpha ?label () =\n  Mark (Hspan { y0; y1; color; alpha; label })\n\nlet vspan ~x0 ~x1 ?color ?alpha ?label () =\n  Mark (Vspan { x0; x1; color; alpha; label })\n\nlet errorbar ~x ~y ~yerr ?xerr ?color ?line_width ?cap_size ?label ?alpha () =\n  Mark\n    (Errorbar { x; y; yerr; xerr; color; line_width; cap_size; label; alpha })\n\nlet heatmap ~data ?(annotate = false) ?cmap ?vmin ?vmax ?fmt () =\n  Mark (Heatmap { data; cmap; annotate; vmin; vmax; fmt })\n\nlet imshow ~data ?(stretch = `Linear) ?cmap ?vmin ?vmax () =\n  Mark (Imshow { data; stretch; cmap; vmin; vmax })\n\nlet contour ~data ~x0 ~x1 ~y0 ~y1 ?(levels = `Num 8) ?(filled = false) ?cmap\n    ?color ?line_width ?label ?alpha () =\n  Mark\n    (Contour\n       {\n         data;\n         x0;\n         x1;\n         y0;\n         y1;\n         levels;\n         filled;\n         cmap;\n         color;\n         line_width;\n         label;\n         alpha;\n       })\n\n(* Composition *)\n\nlet layers ts = Layers ts\n\n(* Decorations *)\n\nlet decorate d = 
function\n  | Decorated r -> Decorated { r with decorations = d :: r.decorations }\n  | t -> Decorated { inner = t; decorations = [ d ] }\n\nlet title s t = decorate (Title s) t\nlet xlabel s t = decorate (Xlabel s) t\nlet ylabel s t = decorate (Ylabel s) t\nlet xlim lo hi t = decorate (Xlim (lo, hi)) t\nlet ylim lo hi t = decorate (Ylim (lo, hi)) t\nlet xscale s t = decorate (Xscale s) t\nlet yscale s t = decorate (Yscale s) t\nlet xinvert t = decorate Xinvert t\nlet yinvert t = decorate Yinvert t\nlet grid_lines visible t = decorate (Grid_visible visible) t\nlet legend ?(loc = Upper_right) ?(ncol = 1) t = decorate (Legend (loc, ncol)) t\nlet xticks ticks t = decorate (Xticks ticks) t\nlet yticks ticks t = decorate (Yticks ticks) t\nlet with_theme th t = decorate (With_theme th) t\nlet xtick_format fmt t = decorate (Xtick_format fmt) t\nlet ytick_format fmt t = decorate (Ytick_format fmt) t\nlet frame v t = decorate (Frame v) t\n\nlet no_axes t =\n  t |> decorate (Frame false) |> decorate (Xticks []) |> decorate (Yticks [])\n  |> decorate (Grid_visible false)\n\n(* Layout *)\n\nlet grid_layout ?(gap = 0.05) rows = Grid { rows; gap }\n"
  },
  {
    "path": "packages/hugin/lib/spec.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Immutable plot specifications.\n\n    {b Internal module.} The specification tree is the user-facing\n    representation of a plot. {!Prepared.compile} resolves data-dependent work;\n    {!Resolve} turns the result into a {!Scene.t}. *)\n\n(** {1:types Types} *)\n\ntype line_style = [ `Solid | `Dashed | `Dotted | `Dash_dot ]\n(** The type for line dash patterns. *)\n\ntype marker =\n  | Circle\n  | Square\n  | Triangle\n  | Plus\n  | Star  (** The type for point marker shapes. *)\n\ntype legend_loc =\n  | Upper_right\n  | Upper_left\n  | Lower_right\n  | Lower_left\n  | Center\n  | Right\n  | Upper_center\n  | Lower_center  (** The type for legend placement. *)\n\ntype scale = [ `Linear | `Log | `Sqrt | `Asinh | `Symlog of float ]\n(** The type for axis scales. [`Sqrt] and [`Asinh] handle zero gracefully.\n    [`Symlog linthresh] is linear within \\[[-linthresh];[linthresh]\\] and\n    logarithmic outside. *)\n\ntype stretch = [ `Linear | `Log | `Sqrt | `Asinh | `Power of float ]\n(** The type for image stretch functions. [`Power a] raises normalized values to\n    the power [a]. 
*)\n\ntype mark =\n  | Line of {\n      x : Nx.float32_t;\n      y : Nx.float32_t;\n      color : Color.t option;\n      line_width : float option;\n      line_style : line_style option;\n      step : [ `Pre | `Post | `Mid ] option;\n      marker : marker option;\n      label : string option;\n      alpha : float option;\n    }\n  | Point of {\n      x : Nx.float32_t;\n      y : Nx.float32_t;\n      color : Color.t option;\n      color_by : Nx.float32_t option;\n      size : float option;\n      size_by : Nx.float32_t option;\n      marker : marker option;\n      label : string option;\n      alpha : float option;\n    }\n  | Bar of {\n      x : Nx.float32_t;\n      height : Nx.float32_t;\n      width : float option;\n      bottom : float;\n      color : Color.t option;\n      label : string option;\n      alpha : float option;\n    }\n  | Hist of {\n      x : Nx.float32_t;\n      bins : [ `Num of int | `Edges of float array ];\n      density : bool;\n      color : Color.t option;\n      label : string option;\n    }\n  | Image of {\n      data : Nx.uint8_t;\n      extent : (float * float * float * float) option;\n    }\n  | Text_mark of {\n      x : float;\n      y : float;\n      content : string;\n      color : Color.t option;\n      font_size : float option;\n    }\n  | Hline of {\n      y : float;\n      color : Color.t option;\n      line_width : float option;\n      line_style : line_style option;\n      label : string option;\n      alpha : float option;\n    }\n  | Vline of {\n      x : float;\n      color : Color.t option;\n      line_width : float option;\n      line_style : line_style option;\n      label : string option;\n      alpha : float option;\n    }\n  | Abline of {\n      slope : float;\n      intercept : float;\n      color : Color.t option;\n      line_width : float option;\n      line_style : line_style option;\n      label : string option;\n      alpha : float option;\n    }\n  | Fill_between of {\n      x : Nx.float32_t;\n      y1 : 
Nx.float32_t;\n      y2 : Nx.float32_t;\n      where : Nx.float32_t option;\n      color : Color.t option;\n      alpha : float option;\n      label : string option;\n    }\n  | Hspan of {\n      y0 : float;\n      y1 : float;\n      color : Color.t option;\n      alpha : float option;\n      label : string option;\n    }\n  | Vspan of {\n      x0 : float;\n      x1 : float;\n      color : Color.t option;\n      alpha : float option;\n      label : string option;\n    }\n  | Errorbar of {\n      x : Nx.float32_t;\n      y : Nx.float32_t;\n      yerr :\n        [ `Symmetric of Nx.float32_t\n        | `Asymmetric of Nx.float32_t * Nx.float32_t ];\n      xerr :\n        [ `Symmetric of Nx.float32_t\n        | `Asymmetric of Nx.float32_t * Nx.float32_t ]\n        option;\n      color : Color.t option;\n      line_width : float option;\n      cap_size : float option;\n      label : string option;\n      alpha : float option;\n    }\n  | Heatmap of {\n      data : Nx.float32_t;\n      cmap : Cmap.t option;\n      annotate : bool;\n      vmin : float option;\n      vmax : float option;\n      fmt : (float -> string) option;\n    }\n  | Imshow of {\n      data : Nx.float32_t;\n      stretch : stretch;\n      cmap : Cmap.t option;\n      vmin : float option;\n      vmax : float option;\n    }\n  | Contour of {\n      data : Nx.float32_t;\n      x0 : float;\n      x1 : float;\n      y0 : float;\n      y1 : float;\n      levels : [ `Num of int | `Values of float array ];\n      filled : bool;\n      cmap : Cmap.t option;\n      color : Color.t option;\n      line_width : float option;\n      label : string option;\n      alpha : float option;\n    }\n      (** The type for visual marks. Each constructor carries the data arrays\n          and visual properties for one layer. 
*)\n\ntype decoration =\n  | Title of string\n  | Xlabel of string\n  | Ylabel of string\n  | Xlim of float * float\n  | Ylim of float * float\n  | Xscale of scale\n  | Yscale of scale\n  | Xinvert\n  | Yinvert\n  | Grid_visible of bool\n  | Legend of legend_loc * int\n  | Xticks of (float * string) list\n  | Yticks of (float * string) list\n  | With_theme of Theme.t\n  | Xtick_format of (float -> string)\n  | Ytick_format of (float -> string)\n  | Frame of bool\n      (** The type for plot decorations. Applied via {!Decorated} nodes. *)\n\ntype t =\n  | Mark of mark\n  | Layers of t list\n  | Decorated of { inner : t; decorations : decoration list }\n  | Grid of { rows : t list list; gap : float }\n      (** The type for plot specifications. An immutable tree composed via mark\n          constructors, {!layers}, decoration functions, and {!grid_layout}. *)\n\n(** {1:marks Mark constructors} *)\n\nval line :\n  x:Nx.float32_t ->\n  y:Nx.float32_t ->\n  ?color:Color.t ->\n  ?line_width:float ->\n  ?line_style:line_style ->\n  ?step:[ `Pre | `Post | `Mid ] ->\n  ?marker:marker ->\n  ?label:string ->\n  ?alpha:float ->\n  unit ->\n  t\n(** [line ~x ~y ()] is a line mark. *)\n\nval point :\n  x:Nx.float32_t ->\n  y:Nx.float32_t ->\n  ?color:Color.t ->\n  ?color_by:Nx.float32_t ->\n  ?size:float ->\n  ?size_by:Nx.float32_t ->\n  ?marker:marker ->\n  ?label:string ->\n  ?alpha:float ->\n  unit ->\n  t\n(** [point ~x ~y ()] is a scatter mark. *)\n\nval bar :\n  x:Nx.float32_t ->\n  height:Nx.float32_t ->\n  ?width:float ->\n  ?bottom:float ->\n  ?color:Color.t ->\n  ?label:string ->\n  ?alpha:float ->\n  unit ->\n  t\n(** [bar ~x ~height ()] is a bar mark. [bottom] defaults to [0.]. *)\n\nval hist :\n  x:Nx.float32_t ->\n  ?bins:[ `Num of int | `Edges of float array ] ->\n  ?density:bool ->\n  ?color:Color.t ->\n  ?label:string ->\n  unit ->\n  t\n(** [hist ~x ()] is a histogram mark. [bins] defaults to [`Num 10]. 
*)\n\nval image : ?extent:float * float * float * float -> Nx.uint8_t -> t\n(** [image ?extent data] is an image mark. When [extent] is\n    [(xmin, xmax, ymin, ymax)], the image is placed in data coordinates. *)\n\nval text :\n  x:float ->\n  y:float ->\n  string ->\n  ?color:Color.t ->\n  ?font_size:float ->\n  unit ->\n  t\n(** [text ~x ~y s ()] is a text mark at [(x, y)]. *)\n\nval hline :\n  y:float ->\n  ?color:Color.t ->\n  ?line_width:float ->\n  ?line_style:line_style ->\n  ?label:string ->\n  ?alpha:float ->\n  unit ->\n  t\n(** [hline ~y ()] is a horizontal reference line. *)\n\nval vline :\n  x:float ->\n  ?color:Color.t ->\n  ?line_width:float ->\n  ?line_style:line_style ->\n  ?label:string ->\n  ?alpha:float ->\n  unit ->\n  t\n(** [vline ~x ()] is a vertical reference line. *)\n\nval abline :\n  slope:float ->\n  intercept:float ->\n  ?color:Color.t ->\n  ?line_width:float ->\n  ?line_style:line_style ->\n  ?label:string ->\n  ?alpha:float ->\n  unit ->\n  t\n(** [abline ~slope ~intercept ()] is a diagonal line [y = slope * x + intercept]\n    spanning the full plot area. *)\n\nval fill_between :\n  x:Nx.float32_t ->\n  y1:Nx.float32_t ->\n  y2:Nx.float32_t ->\n  ?where:Nx.float32_t ->\n  ?color:Color.t ->\n  ?alpha:float ->\n  ?label:string ->\n  unit ->\n  t\n(** [fill_between ~x ~y1 ~y2 ()] is a filled area between two curves. [where] is\n    a mask array: only fill where [where.(i) > 0.]. *)\n\nval hspan :\n  y0:float ->\n  y1:float ->\n  ?color:Color.t ->\n  ?alpha:float ->\n  ?label:string ->\n  unit ->\n  t\n(** [hspan ~y0 ~y1 ()] is a horizontal shaded band. *)\n\nval vspan :\n  x0:float ->\n  x1:float ->\n  ?color:Color.t ->\n  ?alpha:float ->\n  ?label:string ->\n  unit ->\n  t\n(** [vspan ~x0 ~x1 ()] is a vertical shaded band. 
*)\n\nval errorbar :\n  x:Nx.float32_t ->\n  y:Nx.float32_t ->\n  yerr:\n    [ `Symmetric of Nx.float32_t | `Asymmetric of Nx.float32_t * Nx.float32_t ] ->\n  ?xerr:\n    [ `Symmetric of Nx.float32_t | `Asymmetric of Nx.float32_t * Nx.float32_t ] ->\n  ?color:Color.t ->\n  ?line_width:float ->\n  ?cap_size:float ->\n  ?label:string ->\n  ?alpha:float ->\n  unit ->\n  t\n(** [errorbar ~x ~y ~yerr ()] is an error bar mark. *)\n\nval heatmap :\n  data:Nx.float32_t ->\n  ?annotate:bool ->\n  ?cmap:Cmap.t ->\n  ?vmin:float ->\n  ?vmax:float ->\n  ?fmt:(float -> string) ->\n  unit ->\n  t\n(** [heatmap ~data ()] is a heatmap mark. [data] has shape [[|rows; cols|]]. *)\n\nval imshow :\n  data:Nx.float32_t ->\n  ?stretch:stretch ->\n  ?cmap:Cmap.t ->\n  ?vmin:float ->\n  ?vmax:float ->\n  unit ->\n  t\n(** [imshow ~data ()] is a colormapped image mark. [stretch] defaults to\n    [`Linear]. *)\n\nval contour :\n  data:Nx.float32_t ->\n  x0:float ->\n  x1:float ->\n  y0:float ->\n  y1:float ->\n  ?levels:[ `Num of int | `Values of float array ] ->\n  ?filled:bool ->\n  ?cmap:Cmap.t ->\n  ?color:Color.t ->\n  ?line_width:float ->\n  ?label:string ->\n  ?alpha:float ->\n  unit ->\n  t\n(** [contour ~data ~x0 ~x1 ~y0 ~y1 ()] is a contour mark. [levels] defaults to\n    [`Num 8]. [filled] defaults to [false]. *)\n\n(** {1:composition Composition} *)\n\nval layers : t list -> t\n(** [layers marks] overlays [marks] on shared axes. *)\n\n(** {1:decorations Decorations} *)\n\nval title : string -> t -> t\n(** [title s t] adds plot title [s]. *)\n\nval xlabel : string -> t -> t\n(** [xlabel s t] adds x-axis label [s]. *)\n\nval ylabel : string -> t -> t\n(** [ylabel s t] adds y-axis label [s]. *)\n\nval xlim : float -> float -> t -> t\n(** [xlim lo hi t] fixes the x-axis range. *)\n\nval ylim : float -> float -> t -> t\n(** [ylim lo hi t] fixes the y-axis range. *)\n\nval xscale : scale -> t -> t\n(** [xscale s t] sets the x-axis scale. 
*)\n\nval yscale : scale -> t -> t\n(** [yscale s t] sets the y-axis scale. *)\n\nval xinvert : t -> t\n(** [xinvert t] inverts the x-axis direction (values increase right-to-left). *)\n\nval yinvert : t -> t\n(** [yinvert t] inverts the y-axis direction (values increase top-to-bottom). *)\n\nval grid_lines : bool -> t -> t\n(** [grid_lines visible t] shows or hides grid lines. *)\n\nval legend : ?loc:legend_loc -> ?ncol:int -> t -> t\n(** [legend t] shows the legend. [loc] defaults to {!Upper_right}. [ncol]\n    defaults to [1]; set higher for multi-column layouts. *)\n\nval xticks : (float * string) list -> t -> t\n(** [xticks ticks t] sets explicit x-axis tick positions and labels. *)\n\nval yticks : (float * string) list -> t -> t\n(** [yticks ticks t] sets explicit y-axis tick positions and labels. *)\n\nval with_theme : Theme.t -> t -> t\n(** [with_theme th t] overrides the rendering theme. *)\n\nval xtick_format : (float -> string) -> t -> t\n(** [xtick_format fmt t] formats x-axis tick labels with [fmt]. *)\n\nval ytick_format : (float -> string) -> t -> t\n(** [ytick_format fmt t] formats y-axis tick labels with [fmt]. *)\n\nval frame : bool -> t -> t\n(** [frame visible t] shows or hides the axis border rectangle. *)\n\nval no_axes : t -> t\n(** [no_axes t] hides the axis frame, ticks, and tick labels. The full panel\n    area is used for marks. Title is preserved. Useful for image grids. *)\n\n(** {1:layout Layout} *)\n\nval grid_layout : ?gap:float -> t list list -> t\n(** [grid_layout rows] arranges specs in a grid. [gap] defaults to [0.05]. *)\n"
  },
  {
    "path": "packages/hugin/lib/svg_backend.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* SVG backend *)\n\n(* Text measurer *)\n\nlet text_measurer ~(font : Theme.font) s =\n  let w = float (String.length s) *. font.size *. 0.6 in\n  let h = font.size in\n  (w, h)\n\n(* Helpers *)\n\nlet color_to_rgb_string c =\n  let r, g, b, _ = Color.to_rgba c in\n  Printf.sprintf \"rgb(%d,%d,%d)\"\n    (Float.to_int (r *. 255.))\n    (Float.to_int (g *. 255.))\n    (Float.to_int (b *. 255.))\n\nlet add_fill buf = function\n  | None -> Buffer.add_string buf \" fill=\\\"none\\\"\"\n  | Some c ->\n      Printf.bprintf buf \" fill=\\\"%s\\\"\" (color_to_rgb_string c);\n      let a = Color.alpha c in\n      if a < 1. then Printf.bprintf buf \" fill-opacity=\\\"%.3g\\\"\" a\n\nlet add_stroke buf = function\n  | None -> Buffer.add_string buf \" stroke=\\\"none\\\"\"\n  | Some c ->\n      Printf.bprintf buf \" stroke=\\\"%s\\\"\" (color_to_rgb_string c);\n      let a = Color.alpha c in\n      if a < 1. then Printf.bprintf buf \" stroke-opacity=\\\"%.3g\\\"\" a\n\nlet text_anchor_string = function\n  | `Start -> \"start\"\n  | `Middle -> \"middle\"\n  | `End -> \"end\"\n\nlet dominant_baseline_string = function\n  | `Top -> \"text-before-edge\"\n  | `Middle -> \"central\"\n  | `Bottom -> \"text-after-edge\"\n\nlet escape_xml s =\n  let buf = Buffer.create (String.length s) in\n  String.iter\n    (function\n      | '<' -> Buffer.add_string buf \"&lt;\"\n      | '>' -> Buffer.add_string buf \"&gt;\"\n      | '&' -> Buffer.add_string buf \"&amp;\"\n      | '\"' -> Buffer.add_string buf \"&quot;\"\n      | c -> Buffer.add_char buf c)\n    s;\n  Buffer.contents buf\n\n(* Marker shapes *)\n\nlet marker_path shape size =\n  let hs = size /. 2. 
in\n  match shape with\n  | Spec.Circle ->\n      Printf.sprintf \"M %g 0 A %g %g 0 1 1 %g 0 A %g %g 0 1 1 %g 0 Z\" (-.hs) hs\n        hs hs hs hs (-.hs)\n  | Spec.Square ->\n      Printf.sprintf \"M %g %g L %g %g L %g %g L %g %g Z\" (-.hs) (-.hs) hs (-.hs)\n        hs hs (-.hs) hs\n  | Spec.Triangle ->\n      Printf.sprintf \"M 0 %g L %g %g L %g %g Z\" (-.hs) hs hs (-.hs) hs\n  | Spec.Plus ->\n      Printf.sprintf \"M %g 0 L %g 0 M 0 %g L 0 %g\" (-.hs) hs (-.hs) hs\n  | Spec.Star ->\n      let d = hs *. 0.707 in\n      Printf.sprintf\n        \"M %g 0 L %g 0 M 0 %g L 0 %g M %g %g L %g %g M %g %g L %g %g\" (-.hs) hs\n        (-.hs) hs (-.d) (-.d) d d d (-.d) (-.d) d\n\n(* Primitive rendering — ids threaded through to avoid global state *)\n\ntype ids = { mutable clip_id : int; mutable marker_id : int }\n\nlet fresh_clip ids =\n  ids.clip_id <- ids.clip_id + 1;\n  Printf.sprintf \"clip-%d\" ids.clip_id\n\nlet fresh_marker ids =\n  ids.marker_id <- ids.marker_id + 1;\n  Printf.sprintf \"marker-%d\" ids.marker_id\n\nlet rec render_primitive ids buf = function\n  | Scene.Path { points; close; fill; stroke; line_width; dash } ->\n      if Array.length points < 2 then ()\n      else begin\n        Buffer.add_string buf \"<path d=\\\"\";\n        Array.iteri\n          (fun i (x, y) ->\n            if i = 0 then Printf.bprintf buf \"M %g %g\" x y\n            else Printf.bprintf buf \" L %g %g\" x y)\n          points;\n        if close then Buffer.add_string buf \" Z\";\n        Buffer.add_char buf '\"';\n        add_fill buf fill;\n        add_stroke buf stroke;\n        if line_width > 0. 
then\n          Printf.bprintf buf \" stroke-width=\\\"%g\\\"\" line_width;\n        begin match dash with\n        | [] -> ()\n        | ds ->\n            Buffer.add_string buf \" stroke-dasharray=\\\"\";\n            List.iteri\n              (fun i d ->\n                if i > 0 then Buffer.add_char buf ',';\n                Printf.bprintf buf \"%g\" d)\n              ds;\n            Buffer.add_char buf '\"'\n        end;\n        Buffer.add_string buf \"/>\\n\"\n      end\n  | Scene.Markers { points; shape; size; sizes; fill; fills; stroke } ->\n      let stroke_only =\n        match shape with Spec.Plus | Spec.Star -> true | _ -> false\n      in\n      begin match (fills, sizes) with\n      | None, None ->\n          let id = fresh_marker ids in\n          let d = marker_path shape size in\n          Printf.bprintf buf \"<defs><symbol id=\\\"%s\\\"><path d=\\\"%s\\\"\" id d;\n          if stroke_only then begin\n            Buffer.add_string buf \" fill=\\\"none\\\"\";\n            let stroke_c = match fill with Some _ -> fill | None -> stroke in\n            add_stroke buf stroke_c;\n            Printf.bprintf buf \" stroke-width=\\\"%g\\\"\"\n              (Float.max 1. (size *. 
0.15))\n          end\n          else begin\n            add_fill buf fill;\n            add_stroke buf stroke;\n            if stroke <> None then Buffer.add_string buf \" stroke-width=\\\"1\\\"\"\n          end;\n          Buffer.add_string buf \"/></symbol></defs>\\n\";\n          Array.iter\n            (fun (x, y) ->\n              Printf.bprintf buf \"<use href=\\\"#%s\\\" x=\\\"%g\\\" y=\\\"%g\\\"/>\\n\" id x\n                y)\n            points\n      | _ ->\n          Array.iteri\n            (fun i (x, y) ->\n              let s = match sizes with Some ss -> ss.(i) | None -> size in\n              let f =\n                match fills with Some fs -> Some fs.(i) | None -> fill\n              in\n              let d = marker_path shape s in\n              Printf.bprintf buf \"<path d=\\\"%s\\\" transform=\\\"translate(%g,%g)\\\"\"\n                d x y;\n              if stroke_only then begin\n                Buffer.add_string buf \" fill=\\\"none\\\"\";\n                let stroke_c = match f with Some _ -> f | None -> stroke in\n                add_stroke buf stroke_c;\n                Printf.bprintf buf \" stroke-width=\\\"%g\\\"\"\n                  (Float.max 1. (s *. 
0.15))\n              end\n              else begin\n                add_fill buf f;\n                add_stroke buf stroke;\n                if stroke <> None then\n                  Buffer.add_string buf \" stroke-width=\\\"1\\\"\"\n              end;\n              Buffer.add_string buf \"/>\\n\")\n            points\n      end\n  | Scene.Text { x; y; content; font; color; anchor; baseline; angle } ->\n      Printf.bprintf buf \"<text x=\\\"%g\\\" y=\\\"%g\\\"\" x y;\n      Printf.bprintf buf \" font-family=\\\"%s\\\"\" font.Theme.family;\n      Printf.bprintf buf \" font-size=\\\"%g\\\"\" font.size;\n      begin match font.weight with\n      | `Bold -> Buffer.add_string buf \" font-weight=\\\"bold\\\"\"\n      | `Normal -> ()\n      end;\n      Printf.bprintf buf \" fill=\\\"%s\\\"\" (color_to_rgb_string color);\n      let a = Color.alpha color in\n      if a < 1. then Printf.bprintf buf \" fill-opacity=\\\"%.3g\\\"\" a;\n      Printf.bprintf buf \" text-anchor=\\\"%s\\\"\" (text_anchor_string anchor);\n      Printf.bprintf buf \" dominant-baseline=\\\"%s\\\"\"\n        (dominant_baseline_string baseline);\n      if angle <> 0. then\n        Printf.bprintf buf \" transform=\\\"rotate(%g,%g,%g)\\\"\"\n          (angle *. -180. /. 
Float.pi)\n          x y;\n      Printf.bprintf buf \">%s</text>\\n\" (escape_xml content)\n  | Scene.Image { x; y; w; h; data } ->\n      let b64 = Image_util.nx_to_png_base64 data in\n      Printf.bprintf buf \"<image x=\\\"%g\\\" y=\\\"%g\\\" width=\\\"%g\\\" height=\\\"%g\\\"\" x\n        y w h;\n      Printf.bprintf buf \" href=\\\"data:image/png;base64,%s\\\"/>\\n\" b64\n  | Scene.Clip { x; y; w; h; children } ->\n      let id = fresh_clip ids in\n      Printf.bprintf buf\n        \"<defs><clipPath id=\\\"%s\\\"><rect x=\\\"%g\\\" y=\\\"%g\\\" width=\\\"%g\\\" \\\n         height=\\\"%g\\\"/></clipPath></defs>\\n\"\n        id x y w h;\n      Printf.bprintf buf \"<g clip-path=\\\"url(#%s)\\\">\\n\" id;\n      List.iter (render_primitive ids buf) children;\n      Buffer.add_string buf \"</g>\\n\"\n  | Scene.Group children ->\n      Buffer.add_string buf \"<g>\\n\";\n      List.iter (render_primitive ids buf) children;\n      Buffer.add_string buf \"</g>\\n\"\n\n(* Entry points *)\n\nlet render (scene : Scene.t) =\n  let ids = { clip_id = 0; marker_id = 0 } in\n  let buf = Buffer.create 4096 in\n  Printf.bprintf buf \"<?xml version=\\\"1.0\\\" encoding=\\\"UTF-8\\\"?>\\n\";\n  Printf.bprintf buf\n    \"<svg xmlns=\\\"http://www.w3.org/2000/svg\\\" width=\\\"%g\\\" height=\\\"%g\\\" \\\n     viewBox=\\\"0 0 %g %g\\\">\\n\"\n    scene.width scene.height scene.width scene.height;\n  List.iter (render_primitive ids buf) scene.primitives;\n  Buffer.add_string buf \"</svg>\\n\";\n  Buffer.contents buf\n\nlet render_to_file filename scene =\n  let s = render scene in\n  let oc = open_out filename in\n  output_string oc s;\n  close_out oc\n"
  },
  {
    "path": "packages/hugin/lib/svg_backend.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** SVG rendering backend.\n\n    {b Internal module.} Renders {!Scene.t} to SVG markup. Pure OCaml, no\n    external dependencies beyond Cairo for image encoding. *)\n\n(** {1:measurer Text measurement} *)\n\nval text_measurer : Resolve.text_measurer\n(** [text_measurer] estimates text dimensions from character count and font\n    size. Heuristic: width is [String.length s * 0.6 * font.size]. *)\n\n(** {1:rendering Rendering} *)\n\nval render : Scene.t -> string\n(** [render scene] is [scene] as an SVG document string. *)\n\nval render_to_file : string -> Scene.t -> unit\n(** [render_to_file filename scene] writes [scene] as an SVG file. *)\n"
  },
  {
    "path": "packages/hugin/lib/theme.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\ntype font = { family : string; size : float; weight : [ `Normal | `Bold ] }\ntype line = { color : Color.t; width : float; dash : float list }\n\ntype t = {\n  background : Color.t;\n  palette : Color.t array;\n  sequential : Cmap.t;\n  diverging : Cmap.t;\n  font_title : font;\n  font_label : font;\n  font_tick : font;\n  axis : line;\n  grid : line option;\n  tick_length : float;\n  padding : float;\n  title_gap : float;\n  label_gap : float;\n  scale_factor : float;\n  line_width : float;\n  marker_size : float;\n}\n\nlet axis_color = Color.oklch ~l:0.3 ~c:0. ~h:0. ()\nlet grid_color = Color.with_alpha 0.15 axis_color\n\nlet default =\n  {\n    background = Color.oklch ~l:0.985 ~c:0. ~h:0. ();\n    palette =\n      [|\n        Color.orange;\n        Color.sky_blue;\n        Color.green;\n        Color.darken 0.1 Color.yellow;\n        Color.blue;\n        Color.vermillion;\n        Color.purple;\n        Color.black;\n      |];\n    sequential = Cmap.viridis;\n    diverging = Cmap.coolwarm;\n    font_title = { family = \"sans-serif\"; size = 28.; weight = `Bold };\n    font_label = { family = \"sans-serif\"; size = 22.; weight = `Normal };\n    font_tick = { family = \"sans-serif\"; size = 18.; weight = `Normal };\n    axis = { color = axis_color; width = 2.; dash = [] };\n    grid = Some { color = grid_color; width = 1.; dash = [] };\n    tick_length = 10.;\n    padding = 24.;\n    title_gap = 16.;\n    label_gap = 12.;\n    scale_factor = 1.;\n    line_width = 3.;\n    marker_size = 10.;\n  }\n\nlet dark_bg = Color.oklch ~l:0.15 ~c:0. ~h:0. ()\nlet dark_fg = Color.oklch ~l:0.8 ~c:0. ~h:0. 
()\nlet dark_grid = Color.with_alpha 0.2 dark_fg\n\nlet dark =\n  {\n    default with\n    background = dark_bg;\n    palette =\n      [|\n        Color.orange;\n        Color.sky_blue;\n        Color.green;\n        Color.yellow;\n        Color.blue;\n        Color.vermillion;\n        Color.purple;\n        Color.white;\n      |];\n    axis = { color = dark_fg; width = 2.; dash = [] };\n    grid = Some { color = dark_grid; width = 1.; dash = [] };\n  }\n\nlet minimal =\n  { default with axis = { default.axis with width = 1. }; grid = None }\n\nlet paper t = { t with scale_factor = 1.0 }\nlet notebook t = { t with scale_factor = 1.3 }\nlet talk t = { t with scale_factor = 1.6 }\nlet poster t = { t with scale_factor = 2.0 }\n"
  },
  {
    "path": "packages/hugin/lib/theme.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Visual themes.\n\n    A theme controls every non-data visual element: background color,\n    typography, axes, grid, spacing, and default data palettes.\n\n    Themes separate two orthogonal concerns: {e style} (aesthetic appearance)\n    and {e context} (scaling for the output medium). The {!paper}, {!notebook},\n    {!talk}, and {!poster} functions adjust {!field-scale_factor} to uniformly\n    scale all visual elements for the target medium. *)\n\n(** {1:types Types} *)\n\ntype font = { family : string; size : float; weight : [ `Normal | `Bold ] }\n(** The type for font specifications. [size] is in points before\n    {!field-scale_factor} is applied. *)\n\ntype line = { color : Color.t; width : float; dash : float list }\n(** The type for line styles. [dash] is a list of on/off lengths; empty means\n    solid. *)\n\ntype t = {\n  background : Color.t;\n  palette : Color.t array;\n  sequential : Cmap.t;\n  diverging : Cmap.t;\n  font_title : font;\n  font_label : font;\n  font_tick : font;\n  axis : line;\n  grid : line option;\n  tick_length : float;\n  padding : float;\n  title_gap : float;\n  label_gap : float;\n  scale_factor : float;\n  line_width : float;\n  marker_size : float;\n}\n(** The type for themes. All dimensional values (font sizes, line widths, gaps)\n    are multiplied by {!field-scale_factor} at render time. *)\n\n(** {1:predefined Predefined themes} *)\n\nval default : t\n(** [default] is a light theme with subtle grid, Okabe-Ito categorical palette,\n    and Tufte-informed defaults. *)\n\nval dark : t\n(** [dark] is a dark-background theme. *)\n\nval minimal : t\n(** [minimal] is a theme with no grid and thin axes. 
*)\n\n(** {1:context Context scaling} *)\n\nval paper : t -> t\n(** [paper t] is [t] with [scale_factor = 1.0]. *)\n\nval notebook : t -> t\n(** [notebook t] is [t] with [scale_factor = 1.3]. *)\n\nval talk : t -> t\n(** [talk t] is [t] with [scale_factor = 1.6]. *)\n\nval poster : t -> t\n(** [poster t] is [t] with [scale_factor = 2.0]. *)\n"
  },
  {
    "path": "packages/hugin/lib/ticks.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Nice tick generation *)\n\nlet nice_num v round =\n  let exp = Float.floor (Float.log10 v) in\n  let frac = v /. Float.pow 10. exp in\n  let nice =\n    if round then\n      begin if frac < 1.5 then 1.\n      else if frac < 3. then 2.\n      else if frac < 7. then 5.\n      else 10.\n      end\n    else\n      begin if frac <= 1. then 1.\n      else if frac <= 2. then 2.\n      else if frac <= 5. then 5.\n      else 10.\n      end\n  in\n  nice *. Float.pow 10. exp\n\nlet format_tick v =\n  let v = if Float.abs v < 1e-14 then 0. else v in\n  Printf.sprintf \"%.6g\" v\n\nlet generate_linear ~lo ~hi ~max_ticks =\n  let range = nice_num (hi -. lo) false in\n  let step = nice_num (range /. float max_ticks) true in\n  let lo' = Float.floor (lo /. step) *. step in\n  let acc = ref [] in\n  let v = ref lo' in\n  while !v <= hi +. (step *. 0.5) do\n    if !v >= lo -. (step *. 0.001) && !v <= hi +. (step *. 0.001) then\n      acc := (!v, format_tick !v) :: !acc;\n    v := !v +. step\n  done;\n  List.rev !acc\n\nlet format_log_tick e =\n  let ei = int_of_float e in\n  if ei = 0 then \"1\" else if ei = 1 then \"10\" else Printf.sprintf \"10^%d\" ei\n\nlet generate_log ~lo ~hi ~max_ticks =\n  let lo_exp = Float.floor (Float.log10 (Float.max 1e-300 lo)) in\n  let hi_exp = Float.ceil (Float.log10 (Float.max 1e-300 hi)) in\n  let n_decades = int_of_float (hi_exp -. lo_exp) in\n  let stride = Float.of_int (max 1 ((n_decades + max_ticks - 1) / max_ticks)) in\n  let acc = ref [] in\n  let e = ref lo_exp in\n  while !e <= hi_exp do\n    let v = Float.pow 10. !e in\n    if v >= lo *. 0.999 && v <= hi *. 1.001 then\n      acc := (v, format_log_tick !e) :: !acc;\n    e := !e +. 
stride\n  done;\n  List.rev !acc\n\n(* Sqrt ticks: generate in data space using nice linear ticks *)\n\nlet generate_sqrt ~lo ~hi ~max_ticks =\n  let lo = Float.max 0. lo in\n  generate_linear ~lo ~hi ~max_ticks\n\n(* Asinh ticks: pick nice values in data space *)\n\nlet generate_asinh ~lo ~hi ~max_ticks = generate_linear ~lo ~hi ~max_ticks\n\n(* Symlog ticks: linear ticks inside linthresh, log ticks outside *)\n\nlet generate_symlog ~linthresh ~lo ~hi ~max_ticks =\n  let ticks = ref [] in\n  (* Linear region *)\n  let lin_lo = Float.max lo (-.linthresh) in\n  let lin_hi = Float.min hi linthresh in\n  if lin_lo < lin_hi then begin\n    let lin_ticks =\n      generate_linear ~lo:lin_lo ~hi:lin_hi ~max_ticks:(max_ticks / 2)\n    in\n    ticks := lin_ticks\n  end;\n  (* Positive log region *)\n  if hi > linthresh then begin\n    let pos_lo = Float.max linthresh lo in\n    let pos_ticks = generate_log ~lo:pos_lo ~hi ~max_ticks:(max_ticks / 3) in\n    ticks := !ticks @ pos_ticks\n  end;\n  (* Negative log region *)\n  if lo < -.linthresh then begin\n    let neg_hi = Float.min (-.linthresh) hi in\n    let neg_lo_abs = Float.abs lo in\n    let neg_hi_abs = Float.abs neg_hi in\n    let pos_ticks =\n      generate_log ~lo:neg_hi_abs ~hi:neg_lo_abs ~max_ticks:(max_ticks / 3)\n    in\n    let neg_ticks =\n      List.rev_map (fun (v, _) -> (-.v, format_tick (-.v))) pos_ticks\n    in\n    ticks := neg_ticks @ !ticks\n  end;\n  !ticks\n\nlet generate kind ~lo ~hi ?(max_ticks = 8) () =\n  match kind with\n  | `Linear -> generate_linear ~lo ~hi ~max_ticks\n  | `Log -> generate_log ~lo ~hi ~max_ticks\n  | `Sqrt -> generate_sqrt ~lo ~hi ~max_ticks\n  | `Asinh -> generate_asinh ~lo ~hi ~max_ticks\n  | `Symlog linthresh -> generate_symlog ~linthresh ~lo ~hi ~max_ticks\n"
  },
  {
    "path": "packages/hugin/lib/ticks.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Tick generation for axes.\n\n    {b Internal module.} Produces nicely-spaced tick positions and formatted\n    labels for linear, logarithmic, square-root, asinh, and symlog scales. *)\n\nval generate :\n  [ `Linear | `Log | `Sqrt | `Asinh | `Symlog of float ] ->\n  lo:float ->\n  hi:float ->\n  ?max_ticks:int ->\n  unit ->\n  (float * string) list\n(** [generate kind ~lo ~hi ()] is a list of [(value, label)] pairs for ticks\n    spanning [[lo, hi]]. [max_ticks] defaults to [8]. *)\n"
  },
  {
    "path": "packages/hugin/test/dune",
    "content": "(tests\n (names\n  test_color\n  test_cmap\n  test_scale\n  test_ticks\n  test_resolve\n  test_svg_backend\n  test_image_util)\n (package hugin)\n (libraries hugin nx windtrap))\n"
  },
  {
    "path": "packages/hugin/test/test_cmap.ml",
    "content": "open Hugin\nopen Windtrap\n\nlet check_float msg = equal ~msg (float 0.01)\nlet bw = Cmap.of_colors [| Color.black; Color.white |]\n\n(* eval *)\n\nlet test_eval_at_zero () =\n  let c = Cmap.eval bw 0.0 in\n  check_float \"lightness at 0\" (Color.lightness Color.black) (Color.lightness c)\n\nlet test_eval_at_one () =\n  let c = Cmap.eval bw 1.0 in\n  check_float \"lightness at 1\" (Color.lightness Color.white) (Color.lightness c)\n\nlet test_eval_negative_clamped () =\n  let c0 = Cmap.eval bw 0.0 in\n  let cn = Cmap.eval bw (-0.5) in\n  check_float \"negative clamped\" (Color.lightness c0) (Color.lightness cn)\n\nlet test_eval_above_one_clamped () =\n  let c1 = Cmap.eval bw 1.0 in\n  let ch = Cmap.eval bw 1.5 in\n  check_float \"above 1 clamped\" (Color.lightness c1) (Color.lightness ch)\n\n(* of_colors *)\n\nlet test_two_stops_midpoint () =\n  let mid = Cmap.eval bw 0.5 in\n  let l = Color.lightness mid in\n  is_true ~msg:\"midpoint lightness near 0.5\" (l > 0.4 && l < 0.6)\n\nlet test_three_stops_midpoint () =\n  let red = Color.rgb ~r:1. ~g:0. ~b:0. () in\n  let green = Color.rgb ~r:0. ~g:1. ~b:0. () in\n  let blue = Color.rgb ~r:0. ~g:0. ~b:1. 
() in\n  let cm = Cmap.of_colors [| red; green; blue |] in\n  let mid = Cmap.eval cm 0.5 in\n  let r, g, _, _ = Color.to_rgba mid in\n  (* At 0.5, should be near the green stop *)\n  is_true ~msg:\"green channel high at midpoint\" (g > r)\n\nlet test_one_stop_raises () =\n  raises_invalid_arg \"Cmap.of_colors: need at least 2 stops\" (fun () ->\n      Cmap.of_colors [| Color.black |])\n\nlet test_empty_raises () =\n  raises_invalid_arg \"Cmap.of_colors: need at least 2 stops\" (fun () ->\n      Cmap.of_colors [||])\n\n(* predefined *)\n\nlet test_predefined_no_raise () =\n  let cmaps =\n    [\n      Cmap.viridis;\n      Cmap.plasma;\n      Cmap.inferno;\n      Cmap.magma;\n      Cmap.cividis;\n      Cmap.coolwarm;\n    ]\n  in\n  List.iter\n    (fun cm ->\n      let _ = Cmap.eval cm 0.0 in\n      let _ = Cmap.eval cm 0.5 in\n      let _ = Cmap.eval cm 1.0 in\n      ())\n    cmaps\n\nlet test_viridis_endpoints () =\n  let r0, _, b0, _ = Cmap.eval Cmap.viridis 0.0 |> Color.to_rgba in\n  let r1, _, _, _ = Cmap.eval Cmap.viridis 1.0 |> Color.to_rgba in\n  (* Viridis starts dark purple (low r, high b), ends yellow (high r) *)\n  is_true ~msg:\"viridis start is dark\" (r0 < 0.4);\n  is_true ~msg:\"viridis start has blue\" (b0 > 0.2);\n  is_true ~msg:\"viridis end is bright\" (r1 > 0.8)\n\nlet () =\n  run \"Cmap\"\n    [\n      group \"eval\"\n        [\n          test \"at 0.0\" test_eval_at_zero;\n          test \"at 1.0\" test_eval_at_one;\n          test \"negative clamped\" test_eval_negative_clamped;\n          test \"above 1.0 clamped\" test_eval_above_one_clamped;\n        ];\n      group \"of_colors\"\n        [\n          test \"two stops midpoint\" test_two_stops_midpoint;\n          test \"three stops midpoint\" test_three_stops_midpoint;\n          test \"one stop raises\" test_one_stop_raises;\n          test \"empty raises\" test_empty_raises;\n        ];\n      group \"predefined\"\n        [\n          test \"all evaluate without error\" 
test_predefined_no_raise;\n          test \"viridis endpoints\" test_viridis_endpoints;\n        ];\n    ]\n"
  },
  {
    "path": "packages/hugin/test/test_color.ml",
    "content": "open Hugin\nopen Windtrap\n\nlet rgba_testable = quad (float 0.01) (float 0.01) (float 0.01) (float 0.01)\nlet check_float msg = equal ~msg (float 1e-6)\nlet check_rgba msg expected actual = equal ~msg rgba_testable expected actual\n\n(* sRGB roundtrip *)\n\nlet test_black_roundtrip () =\n  let r, g, b, a = Color.rgb ~r:0. ~g:0. ~b:0. () |> Color.to_rgba in\n  check_rgba \"black\" (0., 0., 0., 1.) (r, g, b, a)\n\nlet test_white_roundtrip () =\n  let r, g, b, a = Color.rgb ~r:1. ~g:1. ~b:1. () |> Color.to_rgba in\n  check_rgba \"white\" (1., 1., 1., 1.) (r, g, b, a)\n\nlet test_red_roundtrip () =\n  let r, g, b, a = Color.rgb ~r:1. ~g:0. ~b:0. () |> Color.to_rgba in\n  check_rgba \"red\" (1., 0., 0., 1.) (r, g, b, a)\n\nlet test_green_roundtrip () =\n  let r, g, b, a = Color.rgb ~r:0. ~g:1. ~b:0. () |> Color.to_rgba in\n  check_rgba \"green\" (0., 1., 0., 1.) (r, g, b, a)\n\nlet test_blue_roundtrip () =\n  let r, g, b, a = Color.rgb ~r:0. ~g:0. ~b:1. () |> Color.to_rgba in\n  (* Pure blue is near the sRGB gamut boundary in OKLCH, so the roundtrip is not\n     perfectly lossless due to gamut clamping. We verify it's close. *)\n  let wide_rgba = quad (float 0.15) (float 0.01) (float 0.01) (float 0.01) in\n  equal ~msg:\"blue\" wide_rgba (0., 0., 1., 1.) (r, g, b, a)\n\nlet test_mid_gray_roundtrip () =\n  let r, g, b, a = Color.rgb ~r:0.5 ~g:0.5 ~b:0.5 () |> Color.to_rgba in\n  check_rgba \"mid gray\" (0.5, 0.5, 0.5, 1.) (r, g, b, a)\n\nlet test_arbitrary_roundtrip () =\n  let r, g, b, a = Color.rgb ~r:0.3 ~g:0.6 ~b:0.9 () |> Color.to_rgba in\n  (* OKLCH roundtrip has small error due to gamut boundary clamping *)\n  let wide_rgba = quad (float 0.03) (float 0.03) (float 0.03) (float 0.01) in\n  equal ~msg:\"arbitrary\" wide_rgba (0.3, 0.6, 0.9, 1.) (r, g, b, a)\n\n(* Hex parsing *)\n\nlet test_hex_6_with_hash () =\n  let r, g, b, _ = Color.hex \"#FF0000\" |> Color.to_rgba in\n  check_rgba \"red hex\" (1., 0., 0., 1.) 
(r, g, b, 1.)\n\nlet test_hex_6_without_hash () =\n  let r, g, b, _ = Color.hex \"FF0000\" |> Color.to_rgba in\n  check_rgba \"red hex no hash\" (1., 0., 0., 1.) (r, g, b, 1.)\n\nlet test_hex_8_with_alpha () =\n  let _, _, _, a = Color.hex \"#FF000080\" |> Color.to_rgba in\n  let expected_a = 128. /. 255. in\n  check_float \"alpha\" expected_a a\n\nlet test_hex_case_insensitive () =\n  let r1, g1, b1, _ = Color.hex \"#ff0000\" |> Color.to_rgba in\n  let r2, g2, b2, _ = Color.hex \"#FF0000\" |> Color.to_rgba in\n  check_rgba \"case insensitive\" (r1, g1, b1, 1.) (r2, g2, b2, 1.)\n\nlet test_hex_invalid_length () =\n  raises_invalid_arg \"Color.hex: expected 6 or 8 hex digits, got 3\" (fun () ->\n      Color.hex \"#FFF\")\n\nlet test_hex_invalid_chars () =\n  raises_invalid_arg \"Color.hex: invalid hex digit 'G'\" (fun () ->\n      Color.hex \"#GGGGGG\")\n\nlet test_hex_empty () =\n  raises_invalid_arg \"Color.hex: expected 6 or 8 hex digits, got 0\" (fun () ->\n      Color.hex \"\")\n\n(* OKLCH constructors *)\n\nlet test_oklch_fields () =\n  let c = Color.oklch ~l:0.5 ~c:0.1 ~h:180. () in\n  check_float \"lightness\" 0.5 (Color.lightness c);\n  check_float \"chroma\" 0.1 (Color.chroma c);\n  check_float \"hue\" 180. (Color.hue c);\n  check_float \"alpha\" 1. (Color.alpha c)\n\nlet test_oklcha_alpha () =\n  let c = Color.oklcha ~l:0.5 ~c:0.1 ~h:180. ~a:0.7 () in\n  check_float \"alpha\" 0.7 (Color.alpha c)\n\n(* Operations *)\n\nlet test_lighten_clamps () =\n  let c = Color.lighten 2.0 (Color.oklch ~l:0.8 ~c:0. ~h:0. ()) in\n  check_float \"clamped to 1\" 1.0 (Color.lightness c)\n\nlet test_darken_clamps () =\n  let c = Color.darken 2.0 (Color.oklch ~l:0.2 ~c:0. ~h:0. ()) in\n  check_float \"clamped to 0\" 0.0 (Color.lightness c)\n\nlet test_lighten_adds () =\n  let c = Color.lighten 0.1 (Color.oklch ~l:0.5 ~c:0. ~h:0. 
()) in\n  check_float \"lighten adds\" 0.6 (Color.lightness c)\n\nlet test_darken_subtracts () =\n  let c = Color.darken 0.1 (Color.oklch ~l:0.5 ~c:0. ~h:0. ()) in\n  check_float \"darken subtracts\" 0.4 (Color.lightness c)\n\nlet test_with_alpha () =\n  let c = Color.with_alpha 0.3 (Color.oklch ~l:0.5 ~c:0. ~h:0. ()) in\n  check_float \"with_alpha\" 0.3 (Color.alpha c)\n\n(* Mix *)\n\nlet a = Color.oklch ~l:0.2 ~c:0.05 ~h:10. ()\nlet b = Color.oklch ~l:0.8 ~c:0.15 ~h:50. ()\n\nlet test_mix_zero () =\n  let m = Color.mix 0.0 a b in\n  check_float \"lightness\" (Color.lightness a) (Color.lightness m);\n  check_float \"chroma\" (Color.chroma a) (Color.chroma m);\n  check_float \"hue\" (Color.hue a) (Color.hue m)\n\nlet test_mix_one () =\n  let m = Color.mix 1.0 a b in\n  check_float \"lightness\" (Color.lightness b) (Color.lightness m);\n  check_float \"chroma\" (Color.chroma b) (Color.chroma m);\n  check_float \"hue\" (Color.hue b) (Color.hue m)\n\nlet test_mix_midpoint_lightness () =\n  let m = Color.mix 0.5 a b in\n  check_float \"midpoint lightness\" 0.5 (Color.lightness m)\n\nlet test_mix_midpoint_chroma () =\n  let m = Color.mix 0.5 a b in\n  check_float \"midpoint chroma\" 0.1 (Color.chroma m)\n\nlet test_mix_hue_forward () =\n  let c1 = Color.oklch ~l:0.5 ~c:0.1 ~h:10. () in\n  let c2 = Color.oklch ~l:0.5 ~c:0.1 ~h:50. () in\n  let m = Color.mix 0.5 c1 c2 in\n  check_float \"hue forward\" 30. (Color.hue m)\n\nlet test_mix_hue_wraps_360 () =\n  let c1 = Color.oklch ~l:0.5 ~c:0.1 ~h:350. () in\n  let c2 = Color.oklch ~l:0.5 ~c:0.1 ~h:10. () in\n  let m = Color.mix 0.5 c1 c2 in\n  check_float \"hue wraps\" 0. (Color.hue m)\n\nlet test_mix_hue_reverse_wrap () =\n  let c1 = Color.oklch ~l:0.5 ~c:0.1 ~h:10. () in\n  let c2 = Color.oklch ~l:0.5 ~c:0.1 ~h:350. () in\n  let m = Color.mix 0.5 c1 c2 in\n  check_float \"hue reverse wrap\" 0. (Color.hue m)\n\nlet test_mix_alpha () =\n  let c1 = Color.oklcha ~l:0.5 ~c:0. ~h:0. 
~a:0.0 () in\n  let c2 = Color.oklcha ~l:0.5 ~c:0. ~h:0. ~a:1.0 () in\n  let m = Color.mix 0.5 c1 c2 in\n  check_float \"alpha interpolates\" 0.5 (Color.alpha m)\n\n(* Gamut clamping *)\n\nlet test_high_chroma_clamped () =\n  let r, g, b, _ = Color.oklch ~l:0.5 ~c:0.4 ~h:0. () |> Color.to_rgba in\n  is_true ~msg:\"r in [0,1]\" (r >= 0. && r <= 1.);\n  is_true ~msg:\"g in [0,1]\" (g >= 0. && g <= 1.);\n  is_true ~msg:\"b in [0,1]\" (b >= 0. && b <= 1.)\n\n(* Named colors *)\n\nlet test_black_named () =\n  let r, g, b, a = Color.black |> Color.to_rgba in\n  check_rgba \"black\" (0., 0., 0., 1.) (r, g, b, a)\n\nlet test_white_named () =\n  let r, g, b, a = Color.white |> Color.to_rgba in\n  check_rgba \"white\" (1., 1., 1., 1.) (r, g, b, a)\n\nlet test_orange_matches_hex () =\n  let r1, g1, b1, _ = Color.orange |> Color.to_rgba in\n  let r2, g2, b2, _ = Color.hex \"#E69F00\" |> Color.to_rgba in\n  check_rgba \"orange\" (r1, g1, b1, 1.) (r2, g2, b2, 1.)\n\nlet () =\n  run \"Color\"\n    [\n      group \"sRGB roundtrip\"\n        [\n          test \"black\" test_black_roundtrip;\n          test \"white\" test_white_roundtrip;\n          test \"red\" test_red_roundtrip;\n          test \"green\" test_green_roundtrip;\n          test \"blue\" test_blue_roundtrip;\n          test \"mid gray\" test_mid_gray_roundtrip;\n          test \"arbitrary\" test_arbitrary_roundtrip;\n        ];\n      group \"hex parsing\"\n        [\n          test \"6-digit with hash\" test_hex_6_with_hash;\n          test \"6-digit without hash\" test_hex_6_without_hash;\n          test \"8-digit with alpha\" test_hex_8_with_alpha;\n          test \"case insensitive\" test_hex_case_insensitive;\n          test \"invalid length\" test_hex_invalid_length;\n          test \"invalid chars\" test_hex_invalid_chars;\n          test \"empty string\" test_hex_empty;\n        ];\n      group \"OKLCH constructors\"\n        [\n          test \"oklch fields\" test_oklch_fields;\n          test \"oklcha 
alpha\" test_oklcha_alpha;\n        ];\n      group \"operations\"\n        [\n          test \"lighten clamps\" test_lighten_clamps;\n          test \"darken clamps\" test_darken_clamps;\n          test \"lighten adds\" test_lighten_adds;\n          test \"darken subtracts\" test_darken_subtracts;\n          test \"with_alpha\" test_with_alpha;\n        ];\n      group \"mix\"\n        [\n          test \"mix 0.0 returns first\" test_mix_zero;\n          test \"mix 1.0 returns second\" test_mix_one;\n          test \"midpoint lightness\" test_mix_midpoint_lightness;\n          test \"midpoint chroma\" test_mix_midpoint_chroma;\n          test \"hue shortest arc forward\" test_mix_hue_forward;\n          test \"hue wraps across 360\" test_mix_hue_wraps_360;\n          test \"hue reverse wrap\" test_mix_hue_reverse_wrap;\n          test \"alpha interpolates\" test_mix_alpha;\n        ];\n      group \"gamut clamping\"\n        [ test \"high chroma clamped\" test_high_chroma_clamped ];\n      group \"named colors\"\n        [\n          test \"black\" test_black_named;\n          test \"white\" test_white_named;\n          test \"orange matches hex\" test_orange_matches_hex;\n        ];\n    ]\n"
  },
  {
    "path": "packages/hugin/test/test_image_util.ml",
    "content": "(*---------------------------------------------------------------------------\n  Tests for Image_util.base64_encode — exercised indirectly through Hugin.pp\n  which calls base64_encode on PNG buffer data. We also test the base64 logic\n  through the pp data URI output.\n  ---------------------------------------------------------------------------*)\n\nopen Hugin\nopen Windtrap\n\nlet contains s sub =\n  let len_s = String.length s and len_sub = String.length sub in\n  if len_sub > len_s then false\n  else\n    let found = ref false in\n    for i = 0 to len_s - len_sub do\n      if (not !found) && String.sub s i len_sub = sub then found := true\n    done;\n    !found\n\nlet is_base64_char = function\n  | 'A' .. 'Z' | 'a' .. 'z' | '0' .. '9' | '+' | '/' | '=' -> true\n  | _ -> false\n\nlet sample_x = Nx.init Float32 [| 5 |] (fun i -> float_of_int i.(0))\nlet sample_y = Nx.init Float32 [| 5 |] (fun i -> float_of_int i.(0))\n\n(* pp produces a data URI with base64 encoded PNG *)\n\nlet test_pp_data_uri () =\n  let spec = Hugin.line ~x:sample_x ~y:sample_y () in\n  let buf = Buffer.create 256 in\n  let fmt = Format.formatter_of_buffer buf in\n  Hugin.pp fmt spec;\n  Format.pp_print_flush fmt ();\n  let output = Buffer.contents buf in\n  is_true ~msg:\"starts with image markdown\"\n    (contains output \"![figure](data:image/png;base64,\");\n  (* Base64 output should only contain valid base64 chars *)\n  let b64_start = \"base64,\" in\n  let start_idx =\n    let rec find i =\n      if i > String.length output - String.length b64_start then -1\n      else if String.sub output i (String.length b64_start) = b64_start then\n        i + String.length b64_start\n      else find (i + 1)\n    in\n    find 0\n  in\n  is_true ~msg:\"found base64 data\" (start_idx > 0);\n  if start_idx > 0 then begin\n    let end_idx = String.length output - 1 in\n    let b64 = String.sub output start_idx (end_idx - start_idx) in\n    is_true ~msg:\"all chars are valid base64\"\n    
  (String.to_seq b64 |> Seq.for_all is_base64_char)\n  end\n\n(* render_to_buffer produces non-empty data *)\n\nlet test_render_to_buffer () =\n  let spec = Hugin.line ~x:sample_x ~y:sample_y () in\n  let buf = Hugin.render_to_buffer spec in\n  is_true ~msg:\"non-empty\" (String.length buf > 0);\n  (* PNG magic bytes: 0x89 P N G *)\n  is_true ~msg:\"PNG magic byte\" (Char.code (String.get buf 0) = 0x89);\n  is_true ~msg:\"PNG P\" (String.get buf 1 = 'P');\n  is_true ~msg:\"PNG N\" (String.get buf 2 = 'N');\n  is_true ~msg:\"PNG G\" (String.get buf 3 = 'G')\n\nlet () =\n  run \"Image_util\"\n    [\n      group \"pp data URI\"\n        [ test \"produces valid base64 data URI\" test_pp_data_uri ];\n      group \"render_to_buffer\"\n        [ test \"produces valid PNG\" test_render_to_buffer ];\n    ]\n"
  },
  {
    "path": "packages/hugin/test/test_resolve.ml",
    "content": "open Hugin\nopen Windtrap\n\n(* Helpers *)\n\nlet contains s sub =\n  let len_s = String.length s and len_sub = String.length sub in\n  if len_sub > len_s then false\n  else\n    let found = ref false in\n    for i = 0 to len_s - len_sub do\n      if (not !found) && String.sub s i len_sub = sub then found := true\n    done;\n    !found\n\nlet count_substring s sub =\n  let len_s = String.length s and len_sub = String.length sub in\n  if len_sub > len_s || len_sub = 0 then 0\n  else begin\n    let count = ref 0 in\n    for i = 0 to len_s - len_sub do\n      if String.sub s i len_sub = sub then incr count\n    done;\n    !count\n  end\n\nlet render ?(width = 400.) ?(height = 300.) spec =\n  let tmp = Filename.temp_file \"hugin_test\" \".svg\" in\n  Hugin.render_svg ~width ~height tmp spec;\n  let ic = open_in tmp in\n  let n = in_channel_length ic in\n  let s = really_input_string ic n in\n  close_in ic;\n  Sys.remove tmp;\n  s\n\nlet sample_x = Nx.init Float32 [| 5 |] (fun i -> float_of_int i.(0))\nlet sample_y = Nx.init Float32 [| 5 |] (fun i -> float_of_int i.(0) *. 2.)\nlet sample_line () = Hugin.line ~x:sample_x ~y:sample_y ()\nlet sample_point () = Hugin.point ~x:sample_x ~y:sample_y ()\n\nlet sample_bar () =\n  Hugin.bar ~x:sample_x\n    ~height:(Nx.init Float32 [| 5 |] (fun i -> float_of_int i.(0) +. 
1.))\n    ()\n\n(* basic marks *)\n\nlet test_line_resolves () =\n  let svg = render (sample_line ()) in\n  is_true ~msg:\"starts with xml\"\n    (String.length svg > 5 && String.sub svg 0 5 = \"<?xml\");\n  is_true ~msg:\"contains path\" (contains svg \"<path\");\n  is_true ~msg:\"contains svg\" (contains svg \"<svg\")\n\nlet test_point_resolves () =\n  let svg = render (sample_point ()) in\n  is_true ~msg:\"contains marker\" (contains svg \"<path d=\\\"M\")\n\nlet test_bar_resolves () =\n  let svg = render (sample_bar ()) in\n  is_true ~msg:\"contains closed path\" (contains svg \" Z\\\"\")\n\nlet test_hist_resolves () =\n  let data = Nx.init Float32 [| 100 |] (fun i -> float_of_int i.(0) /. 10.) in\n  let svg = render (Hugin.hist ~x:data ()) in\n  is_true ~msg:\"contains path\" (contains svg \"<path\")\n\nlet test_text_mark_resolves () =\n  let svg = render (Hugin.text ~x:1. ~y:1. \"hello\" ()) in\n  is_true ~msg:\"contains text\" (contains svg \">hello<\")\n\nlet test_hline_resolves () =\n  let spec = Hugin.layers [ sample_line (); Hugin.hline ~y:3. () ] in\n  let svg = render spec in\n  (* hline adds a horizontal path; 2 paths from line+hline vs 1 without *)\n  let path_count = count_substring svg \"<path\" in\n  is_true ~msg:\"at least 2 paths\" (path_count >= 2)\n\nlet test_vline_resolves () =\n  let spec = Hugin.layers [ sample_line (); Hugin.vline ~x:2. 
() ] in\n  let svg = render spec in\n  let path_count = count_substring svg \"<path\" in\n  is_true ~msg:\"at least 2 paths\" (path_count >= 2)\n\nlet test_empty_layers () =\n  let svg = render (Hugin.layers []) in\n  is_true ~msg:\"valid svg\" (contains svg \"<svg\")\n\n(* decorations *)\n\nlet test_title_appears () =\n  let svg = render (sample_line () |> Hugin.title \"My Title\") in\n  is_true ~msg:\"title in svg\" (contains svg \">My Title<\")\n\nlet test_xlabel_appears () =\n  let svg = render (sample_line () |> Hugin.xlabel \"X Axis\") in\n  is_true ~msg:\"xlabel in svg\" (contains svg \">X Axis<\")\n\nlet test_ylabel_appears () =\n  let svg = render (sample_line () |> Hugin.ylabel \"Y Axis\") in\n  is_true ~msg:\"ylabel in svg\" (contains svg \">Y Axis<\")\n\nlet test_outermost_title_wins () =\n  (* decorate prepends to the decoration list, and apply_decoration keeps the\n     first-seen title. So the outermost (last-applied) title wins. *)\n  let svg =\n    render (sample_line () |> Hugin.title \"Inner\" |> Hugin.title \"Outer\")\n  in\n  is_true ~msg:\"outer title present\" (contains svg \">Outer<\");\n  is_false ~msg:\"inner title absent\" (contains svg \">Inner<\")\n\n(* histogram normalization *)\n\nlet test_hist_bins () =\n  let data = Nx.init Float32 [| 100 |] (fun i -> float_of_int i.(0)) in\n  let svg = render (Hugin.hist ~x:data ~bins:(`Num 5) ()) in\n  is_true ~msg:\"produces paths\" (contains svg \"<path\")\n\nlet test_hist_density () =\n  let data = Nx.init Float32 [| 100 |] (fun i -> float_of_int i.(0)) in\n  let svg = render (Hugin.hist ~x:data ~bins:(`Num 5) ~density:true ()) in\n  (* density normalization should produce bars; 5 bins = 5 closed paths *)\n  let z_count = count_substring svg \" Z\\\"\" in\n  is_true ~msg:\"has 5 bars\" (z_count >= 5)\n\nlet test_hist_edges () =\n  let data = Nx.init Float32 [| 100 |] (fun i -> float_of_int i.(0)) in\n  let svg = render (Hugin.hist ~x:data ~bins:(`Edges [| 0.; 50.; 100. 
|]) ()) in\n  (* 2 bins from 3 edges = 2 closed paths *)\n  let z_count = count_substring svg \" Z\\\"\" in\n  is_true ~msg:\"has 2 bars\" (z_count >= 2)\n\n(* auto coloring *)\n\nlet test_auto_color_different () =\n  let line1 = Hugin.line ~x:sample_x ~y:sample_y () in\n  let y2 = Nx.init Float32 [| 5 |] (fun i -> float_of_int i.(0) *. 3.) in\n  let line2 = Hugin.line ~x:sample_x ~y:y2 () in\n  let svg = render (Hugin.layers [ line1; line2 ]) in\n  let stroke_count = count_substring svg \"stroke=\\\"rgb(\" in\n  is_true ~msg:\"multiple stroke colors\" (stroke_count >= 2)\n\nlet test_explicit_color_preserved () =\n  let svg = render (Hugin.line ~x:sample_x ~y:sample_y ~color:Color.black ()) in\n  is_true ~msg:\"has black stroke\" (contains svg \"stroke=\\\"rgb(0,0,0)\\\"\")\n\n(* grid layout *)\n\nlet test_grid_2x2 () =\n  let a = sample_line () |> Hugin.title \"A\" in\n  let b = sample_line () |> Hugin.title \"B\" in\n  let c = sample_line () |> Hugin.title \"C\" in\n  let d = sample_line () |> Hugin.title \"D\" in\n  let svg = render (Hugin.grid [ [ a; b ]; [ c; d ] ]) in\n  is_true ~msg:\"has A\" (contains svg \">A<\");\n  is_true ~msg:\"has D\" (contains svg \">D<\");\n  (* 4 panels = 4 clip regions *)\n  let clip_count = count_substring svg \"<clipPath\" in\n  is_true ~msg:\"4 clips\" (clip_count = 4)\n\nlet test_grid_empty () =\n  let svg = render (Hugin.grid []) in\n  is_true ~msg:\"valid svg\" (contains svg \"<svg\")\n\nlet test_hstack () =\n  let a = sample_line () |> Hugin.title \"L\" in\n  let b = sample_line () |> Hugin.title \"R\" in\n  let svg = render (Hugin.hstack [ a; b ]) in\n  is_true ~msg:\"has L\" (contains svg \">L<\");\n  is_true ~msg:\"has R\" (contains svg \">R<\")\n\nlet test_vstack () =\n  let a = sample_line () |> Hugin.title \"Top\" in\n  let b = sample_line () |> Hugin.title \"Bot\" in\n  let svg = render (Hugin.vstack [ a; b ]) in\n  is_true ~msg:\"has Top\" (contains svg \">Top<\");\n  is_true ~msg:\"has Bot\" (contains svg 
\">Bot<\")\n\n(* themes *)\n\nlet test_dark_theme () =\n  let svg = render (sample_line () |> Hugin.with_theme Theme.dark) in\n  (* dark theme has dark background — rgb values near 0 *)\n  is_true ~msg:\"has dark fill\" (contains svg \"fill=\\\"rgb(\");\n  is_true ~msg:\"has light strokes\" (contains svg \"stroke=\\\"rgb(\")\n\nlet test_minimal_theme () =\n  let svg_default = render (sample_line ()) in\n  let svg_minimal = render (sample_line () |> Hugin.with_theme Theme.minimal) in\n  (* minimal theme has no grid, so fewer paths *)\n  let default_paths = count_substring svg_default \"<path\" in\n  let minimal_paths = count_substring svg_minimal \"<path\" in\n  is_true ~msg:\"fewer paths than default\" (minimal_paths <= default_paths)\n\n(* grid_lines *)\n\nlet test_grid_lines_off () =\n  let svg_on = render (sample_line () |> Hugin.grid_lines true) in\n  let svg_off = render (sample_line () |> Hugin.grid_lines false) in\n  let on_paths = count_substring svg_on \"<path\" in\n  let off_paths = count_substring svg_off \"<path\" in\n  is_true ~msg:\"fewer paths with grid off\" (off_paths < on_paths)\n\n(* legend *)\n\nlet test_legend_appears () =\n  let line1 = Hugin.line ~x:sample_x ~y:sample_y ~label:\"Series A\" () in\n  let svg = render (line1 |> Hugin.legend) in\n  is_true ~msg:\"legend text\" (contains svg \">Series A<\")\n\n(* fill_between *)\n\nlet test_fill_between_resolves () =\n  let y2 = Nx.init Float32 [| 5 |] (fun i -> float_of_int i.(0) *. 3.) in\n  let svg = render (Hugin.fill_between ~x:sample_x ~y1:sample_y ~y2 ()) in\n  is_true ~msg:\"contains path\" (contains svg \"<path\");\n  is_true ~msg:\"has fill\" (contains svg \"fill=\")\n\nlet test_fill_between_with_label () =\n  let y2 = Nx.init Float32 [| 5 |] (fun i -> float_of_int i.(0) *. 3.) 
in\n  let spec =\n    Hugin.fill_between ~x:sample_x ~y1:sample_y ~y2 ~label:\"band\" ()\n    |> Hugin.legend\n  in\n  let svg = render spec in\n  is_true ~msg:\"legend text\" (contains svg \">band<\")\n\n(* hspan / vspan *)\n\nlet test_hspan_resolves () =\n  let svg_base = render (sample_line ()) in\n  let spec = Hugin.layers [ sample_line (); Hugin.hspan ~y0:1. ~y1:3. () ] in\n  let svg = render spec in\n  (* hspan adds a filled rectangle = one extra closed path *)\n  let base_z = count_substring svg_base \" Z\\\"\" in\n  let with_z = count_substring svg \" Z\\\"\" in\n  is_true ~msg:\"more closed paths with hspan\" (with_z > base_z)\n\nlet test_vspan_resolves () =\n  let svg_base = render (sample_line ()) in\n  let spec = Hugin.layers [ sample_line (); Hugin.vspan ~x0:1. ~x1:3. () ] in\n  let svg = render spec in\n  let base_z = count_substring svg_base \" Z\\\"\" in\n  let with_z = count_substring svg \" Z\\\"\" in\n  is_true ~msg:\"more closed paths with vspan\" (with_z > base_z)\n\n(* step line *)\n\nlet test_step_post () =\n  let svg_normal = render (Hugin.line ~x:sample_x ~y:sample_y ()) in\n  let svg_step = render (Hugin.line ~x:sample_x ~y:sample_y ~step:`Post ()) in\n  (* step line inserts intermediate points, so more L commands in total *)\n  let normal_l = count_substring svg_normal \" L\" in\n  let step_l = count_substring svg_step \" L\" in\n  is_true ~msg:\"step has more L commands\" (step_l > normal_l)\n\nlet test_step_pre () =\n  let svg_normal = render (Hugin.line ~x:sample_x ~y:sample_y ()) in\n  let svg_step = render (Hugin.line ~x:sample_x ~y:sample_y ~step:`Pre ()) in\n  (* pre step also inserts intermediate points *)\n  let normal_l = count_substring svg_normal \" L\" in\n  let step_l = count_substring svg_step \" L\" in\n  is_true ~msg:\"step has more L commands\" (step_l > normal_l)\n\nlet test_step_mid () =\n  let svg_normal = render (Hugin.line ~x:sample_x ~y:sample_y ()) in\n  let svg_step = render (Hugin.line ~x:sample_x ~y:sample_y 
~step:`Mid ()) in\n  (* mid step inserts 2 intermediate points per segment *)\n  let normal_l = count_substring svg_normal \" L\" in\n  let step_l = count_substring svg_step \" L\" in\n  is_true ~msg:\"step has more L commands\" (step_l > normal_l)\n\n(* errorbar *)\n\nlet test_errorbar_symmetric () =\n  let err = Nx.init Float32 [| 5 |] (fun _ -> 0.5) in\n  let svg =\n    render (Hugin.errorbar ~x:sample_x ~y:sample_y ~yerr:(`Symmetric err) ())\n  in\n  (* 5 points × 3 paths each (stem + 2 caps) = 15 paths *)\n  let path_count = count_substring svg \"<path\" in\n  is_true ~msg:\"at least 15 paths for 5 error bars\" (path_count >= 15)\n\nlet test_errorbar_asymmetric () =\n  let elo = Nx.init Float32 [| 5 |] (fun _ -> 0.3) in\n  let ehi = Nx.init Float32 [| 5 |] (fun _ -> 0.7) in\n  let svg =\n    render\n      (Hugin.errorbar ~x:sample_x ~y:sample_y ~yerr:(`Asymmetric (elo, ehi)) ())\n  in\n  let path_count = count_substring svg \"<path\" in\n  is_true ~msg:\"at least 15 paths for 5 error bars\" (path_count >= 15)\n\nlet test_errorbar_with_xerr () =\n  let yerr = Nx.init Float32 [| 5 |] (fun _ -> 0.5) in\n  let xerr = Nx.init Float32 [| 5 |] (fun _ -> 0.2) in\n  let svg =\n    render\n      (Hugin.errorbar ~x:sample_x ~y:sample_y ~yerr:(`Symmetric yerr)\n         ~xerr:(`Symmetric xerr) ())\n  in\n  (* with xerr: 5 points × 6 paths each (yerr stem+2caps + xerr stem+2caps) =\n     30 *)\n  let svg_yerr_only =\n    render (Hugin.errorbar ~x:sample_x ~y:sample_y ~yerr:(`Symmetric yerr) ())\n  in\n  let yerr_paths = count_substring svg_yerr_only \"<path\" in\n  let both_paths = count_substring svg \"<path\" in\n  is_true ~msg:\"xerr adds more paths\" (both_paths > yerr_paths)\n\n(* heatmap *)\n\nlet test_heatmap_resolves () =\n  let data =\n    Nx.init Float32 [| 3; 4 |] (fun i -> float_of_int (i.(0) + i.(1)))\n  in\n  let svg = render (Hugin.heatmap ~data ()) in\n  is_true ~msg:\"contains paths\" (contains svg \"<path\")\n\nlet test_heatmap_annotated () =\n  let data 
=\n    Nx.init Float32 [| 2; 2 |] (fun i -> float_of_int (i.(0) + i.(1)))\n  in\n  let svg = render (Hugin.heatmap ~data ~annotate:true ()) in\n  is_true ~msg:\"contains text\" (contains svg \"<text\")\n\nlet test_heatmap_custom_fmt () =\n  let data = Nx.init Float32 [| 2; 2 |] (fun _ -> 0.5) in\n  let svg =\n    render\n      (Hugin.heatmap ~data ~annotate:true\n         ~fmt:(fun v -> Printf.sprintf \"%.0f%%\" (v *. 100.))\n         ())\n  in\n  is_true ~msg:\"contains formatted text\" (contains svg \">50%<\")\n\n(* imshow *)\n\nlet imshow_data () =\n  Nx.init Float32 [| 4; 6 |] (fun i -> float_of_int i.(0) +. float_of_int i.(1))\n\nlet test_imshow_rasterizes_to_image () =\n  let svg = render (Hugin.imshow ~data:(imshow_data ()) ()) in\n  (* imshow is rasterized to an Image in the Prepared stage — verify the SVG\n     backend emits an <image> element with base64 PNG data *)\n  is_true ~msg:\"contains image element\" (contains svg \"<image\");\n  is_true ~msg:\"has base64 data\" (contains svg \"base64,\")\n\nlet test_imshow_stretches_differ () =\n  (* Different stretches must produce different pixel data *)\n  let data = imshow_data () in\n  let svg_linear = render (Hugin.imshow ~data ~stretch:`Linear ()) in\n  let svg_log = render (Hugin.imshow ~data ~stretch:`Log ()) in\n  let svg_sqrt = render (Hugin.imshow ~data ~stretch:`Sqrt ()) in\n  is_true ~msg:\"log differs from linear\" (svg_log <> svg_linear);\n  is_true ~msg:\"sqrt differs from linear\" (svg_sqrt <> svg_linear);\n  is_true ~msg:\"log differs from sqrt\" (svg_log <> svg_sqrt)\n\nlet test_imshow_cmap_changes_output () =\n  let data = imshow_data () in\n  let svg_default = render (Hugin.imshow ~data ()) in\n  let svg_hot = render (Hugin.imshow ~data ~cmap:Cmap.hot ()) in\n  let svg_gray = render (Hugin.imshow ~data ~cmap:Cmap.gray_r ()) in\n  is_true ~msg:\"hot differs from default\" (svg_hot <> svg_default);\n  is_true ~msg:\"gray_r differs from hot\" (svg_gray <> svg_hot)\n\n(* contour *)\n\nlet 
contour_data () =\n  (* Concentric circles centered at (4.5, 4.5), values = r² *)\n  Nx.init Float32 [| 10; 10 |] (fun i ->\n      let x = float_of_int i.(1) -. 4.5 in\n      let y = float_of_int i.(0) -. 4.5 in\n      (x *. x) +. (y *. y))\n\nlet test_contour_unfilled_has_stroked_paths () =\n  let svg =\n    render\n      (Hugin.contour ~data:(contour_data ()) ~x0:0. ~x1:9. ~y0:0. ~y1:9.\n         ~levels:(`Num 4) ())\n  in\n  (* Unfilled contours are stroked paths (stroke=, fill=\"none\") *)\n  is_true ~msg:\"has stroked paths\" (contains svg \"stroke=\\\"rgb(\");\n  let path_count = count_substring svg \"<path\" in\n  is_true ~msg:\"multiple contour paths\" (path_count >= 4)\n\nlet test_contour_filled_more_paths () =\n  let data = contour_data () in\n  let svg_unfilled =\n    render (Hugin.contour ~data ~x0:0. ~x1:9. ~y0:0. ~y1:9. ~levels:(`Num 4) ())\n  in\n  let svg_filled =\n    render\n      (Hugin.contour ~data ~x0:0. ~x1:9. ~y0:0. ~y1:9. ~levels:(`Num 4)\n         ~filled:true ())\n  in\n  (* Filled contours use fill=rgb(...), unfilled use stroke *)\n  is_true ~msg:\"filled has fill color\" (contains svg_filled \"fill=\\\"rgb(\");\n  (* Filled output differs from unfilled *)\n  is_true ~msg:\"filled differs from unfilled\" (svg_filled <> svg_unfilled)\n\nlet test_contour_level_count_affects_paths () =\n  let data = contour_data () in\n  let svg_few =\n    render (Hugin.contour ~data ~x0:0. ~x1:9. ~y0:0. ~y1:9. ~levels:(`Num 2) ())\n  in\n  let svg_many =\n    render (Hugin.contour ~data ~x0:0. ~x1:9. ~y0:0. ~y1:9. ~levels:(`Num 8) ())\n  in\n  let few_paths = count_substring svg_few \"<path\" in\n  let many_paths = count_substring svg_many \"<path\" in\n  is_true ~msg:\"more levels = more paths\" (many_paths > few_paths)\n\nlet test_contour_legend () =\n  let svg =\n    render\n      (Hugin.contour ~data:(contour_data ()) ~x0:0. ~x1:9. ~y0:0. 
~y1:9.\n         ~label:\"density\" ()\n      |> Hugin.legend)\n  in\n  is_true ~msg:\"legend text\" (contains svg \">density<\")\n\n(* inverted axes *)\n\nlet test_invert_changes_path_data () =\n  (* Inversion reverses the scale mapping, so the path d= attribute must differ\n     between normal and inverted rendering of the same data. *)\n  let svg_normal = render (sample_line ()) in\n  let svg_xinv = render (sample_line () |> Hugin.xinvert) in\n  let svg_yinv = render (sample_line () |> Hugin.yinvert) in\n  is_true ~msg:\"xinvert changes path\" (svg_xinv <> svg_normal);\n  is_true ~msg:\"yinvert changes path\" (svg_yinv <> svg_normal)\n\nlet test_yinvert_hr_diagram () =\n  (* An HR diagram uses yinvert (brighter stars at top) and decorations. *)\n  let bv = Nx.create Float32 [| 5 |] [| -0.3; 0.; 0.5; 1.0; 1.5 |] in\n  let mag = Nx.create Float32 [| 5 |] [| -5.; 0.; 2.; 5.; 10. |] in\n  let svg =\n    render\n      (Hugin.point ~x:bv ~y:mag ()\n      |> Hugin.yinvert |> Hugin.xlabel \"B-V\" |> Hugin.ylabel \"Magnitude\")\n  in\n  is_true ~msg:\"xlabel present\" (contains svg \">B-V<\");\n  is_true ~msg:\"ylabel present\" (contains svg \">Magnitude<\");\n  is_true ~msg:\"has markers\" (contains svg \"<path d=\\\"M\")\n\n(* tick format *)\n\nlet test_xtick_format () =\n  let spec =\n    sample_line ()\n    |> Hugin.xtick_format (fun v -> Printf.sprintf \"%.0f%%\" (v *. 
100.))\n  in\n  let svg = render spec in\n  (* x data is 0..4, so formatted ticks should contain \"%\" *)\n  is_true ~msg:\"formatted ticks contain %\" (contains svg \"%\")\n\nlet test_ytick_format () =\n  let spec =\n    sample_line () |> Hugin.ytick_format (fun v -> Printf.sprintf \"$%.0f\" v)\n  in\n  let svg = render spec in\n  (* y data is 0..8, so formatted ticks should contain \"$\" *)\n  is_true ~msg:\"formatted ticks contain $\" (contains svg \"$\")\n\nlet () =\n  run \"Resolve\"\n    [\n      group \"basic marks\"\n        [\n          test \"line\" test_line_resolves;\n          test \"point\" test_point_resolves;\n          test \"bar\" test_bar_resolves;\n          test \"hist\" test_hist_resolves;\n          test \"text\" test_text_mark_resolves;\n          test \"hline\" test_hline_resolves;\n          test \"vline\" test_vline_resolves;\n          test \"empty layers\" test_empty_layers;\n        ];\n      group \"decorations\"\n        [\n          test \"title appears\" test_title_appears;\n          test \"xlabel appears\" test_xlabel_appears;\n          test \"ylabel appears\" test_ylabel_appears;\n          test \"outermost title wins\" test_outermost_title_wins;\n        ];\n      group \"histogram normalization\"\n        [\n          test \"bins\" test_hist_bins;\n          test \"density\" test_hist_density;\n          test \"edges\" test_hist_edges;\n        ];\n      group \"auto coloring\"\n        [\n          test \"different colors\" test_auto_color_different;\n          test \"explicit color preserved\" test_explicit_color_preserved;\n        ];\n      group \"grid layout\"\n        [\n          test \"2x2 grid\" test_grid_2x2;\n          test \"empty grid\" test_grid_empty;\n          test \"hstack\" test_hstack;\n          test \"vstack\" test_vstack;\n        ];\n      group \"themes\"\n        [\n          test \"dark theme\" test_dark_theme;\n          test \"minimal theme\" test_minimal_theme;\n        ];\n      group \"grid 
lines\" [ test \"grid lines off\" test_grid_lines_off ];\n      group \"legend\" [ test \"legend appears\" test_legend_appears ];\n      group \"fill_between\"\n        [\n          test \"resolves\" test_fill_between_resolves;\n          test \"with label\" test_fill_between_with_label;\n        ];\n      group \"hspan/vspan\"\n        [ test \"hspan\" test_hspan_resolves; test \"vspan\" test_vspan_resolves ];\n      group \"step line\"\n        [\n          test \"post\" test_step_post;\n          test \"pre\" test_step_pre;\n          test \"mid\" test_step_mid;\n        ];\n      group \"errorbar\"\n        [\n          test \"symmetric\" test_errorbar_symmetric;\n          test \"asymmetric\" test_errorbar_asymmetric;\n          test \"with xerr\" test_errorbar_with_xerr;\n        ];\n      group \"heatmap\"\n        [\n          test \"resolves\" test_heatmap_resolves;\n          test \"annotated\" test_heatmap_annotated;\n          test \"custom fmt\" test_heatmap_custom_fmt;\n        ];\n      group \"tick format\"\n        [\n          test \"xtick_format\" test_xtick_format;\n          test \"ytick_format\" test_ytick_format;\n        ];\n      group \"imshow\"\n        [\n          test \"rasterizes to image\" test_imshow_rasterizes_to_image;\n          test \"stretches differ\" test_imshow_stretches_differ;\n          test \"cmap changes output\" test_imshow_cmap_changes_output;\n        ];\n      group \"contour\"\n        [\n          test \"unfilled has stroked paths\"\n            test_contour_unfilled_has_stroked_paths;\n          test \"filled more paths\" test_contour_filled_more_paths;\n          test \"level count affects paths\"\n            test_contour_level_count_affects_paths;\n          test \"legend\" test_contour_legend;\n        ];\n      group \"inverted axes\"\n        [\n          test \"invert changes path data\" test_invert_changes_path_data;\n          test \"yinvert HR diagram\" test_yinvert_hr_diagram;\n        ];\n    ]\n"
  },
  {
    "path": "packages/hugin/test/test_scale.ml",
    "content": "(*---------------------------------------------------------------------------\n  Tests for Scale logic — exercised indirectly through Hugin.render_svg. We\n  verify that linear and log scales produce correct axis tick labels in the SVG\n  output, which proves the scale math is correct.\n  ---------------------------------------------------------------------------*)\n\nopen Hugin\nopen Windtrap\n\nlet contains s sub =\n  let len_s = String.length s and len_sub = String.length sub in\n  if len_sub > len_s then false\n  else\n    let found = ref false in\n    for i = 0 to len_s - len_sub do\n      if (not !found) && String.sub s i len_sub = sub then found := true\n    done;\n    !found\n\nlet count_substring s sub =\n  let len_s = String.length s and len_sub = String.length sub in\n  if len_sub > len_s || len_sub = 0 then 0\n  else begin\n    let count = ref 0 in\n    for i = 0 to len_s - len_sub do\n      if String.sub s i len_sub = sub then incr count\n    done;\n    !count\n  end\n\nlet render spec =\n  let tmp = Filename.temp_file \"hugin_test\" \".svg\" in\n  Hugin.render_svg ~width:400. ~height:300. tmp spec;\n  let ic = open_in tmp in\n  let n = in_channel_length ic in\n  let s = really_input_string ic n in\n  close_in ic;\n  Sys.remove tmp;\n  s\n\nlet x5 = Nx.init Float32 [| 5 |] (fun i -> float_of_int i.(0))\nlet y5 = Nx.init Float32 [| 5 |] (fun i -> float_of_int i.(0))\n\n(* linear scale *)\n\nlet test_linear_ticks_present () =\n  let svg = render (Hugin.line ~x:x5 ~y:y5 ()) in\n  (* Data range 0-4, auto-ticks should include 0 *)\n  is_true ~msg:\"has tick 0\" (contains svg \">0<\")\n\nlet test_linear_xlim () =\n  (* Use different x and y ranges so we can distinguish x ticks from y ticks. x\n     data: 0..10, y data: 100..200. With xlim 0-5, x ticks stay in [0,5] but y\n     ticks are around 100-200 — no overlap. *)\n  let x = Nx.init Float32 [| 11 |] (fun i -> float_of_int i.(0)) in\n  let y =\n    Nx.init Float32 [| 11 |] (fun i -> 100. 
+. (float_of_int i.(0) *. 10.))\n  in\n  let svg = render (Hugin.line ~x ~y () |> Hugin.xlim 0. 5.) in\n  is_true ~msg:\"has tick 0\" (contains svg \">0<\");\n  (* With xlim 0-5, we should not see x-axis tick \"8\" or \"10\". Y-axis ticks are\n     in 100-200 range so no confusion. *)\n  is_false ~msg:\"no tick 8\" (contains svg \">8<\");\n  is_false ~msg:\"no tick 10\" (contains svg \">10<\")\n\nlet test_linear_ylim () =\n  let x = Nx.init Float32 [| 11 |] (fun i -> float_of_int i.(0)) in\n  let y = Nx.init Float32 [| 11 |] (fun i -> float_of_int i.(0)) in\n  let svg = render (Hugin.line ~x ~y () |> Hugin.ylim 0. 5.) in\n  is_true ~msg:\"valid svg\" (contains svg \"<svg\")\n\nlet test_linear_negative_range () =\n  let x = Nx.create Float32 [| 5 |] [| -10.; -5.; 0.; 5.; 10. |] in\n  let y = Nx.create Float32 [| 5 |] [| -10.; -5.; 0.; 5.; 10. |] in\n  let svg = render (Hugin.line ~x ~y ()) in\n  is_true ~msg:\"has tick 0\" (contains svg \">0<\")\n\nlet test_linear_small_range () =\n  let x = Nx.create Float32 [| 3 |] [| 0.; 0.0005; 0.001 |] in\n  let y = Nx.create Float32 [| 3 |] [| 0.; 0.5; 1. |] in\n  let svg = render (Hugin.line ~x ~y ()) in\n  is_true ~msg:\"valid svg\" (contains svg \"<svg\")\n\nlet test_linear_single_point () =\n  let x = Nx.create Float32 [| 1 |] [| 5. |] in\n  let y = Nx.create Float32 [| 1 |] [| 5. |] in\n  let svg = render (Hugin.line ~x ~y ()) in\n  is_true ~msg:\"valid svg\" (contains svg \"<svg\")\n\n(* log scale *)\n\nlet test_log_ticks () =\n  let x = Nx.create Float32 [| 4 |] [| 1.; 10.; 100.; 1000. |] in\n  let y = Nx.create Float32 [| 4 |] [| 1.; 2.; 3.; 4. 
|] in\n  let svg = render (Hugin.line ~x ~y () |> Hugin.xscale `Log) in\n  is_true ~msg:\"has tick 1\" (contains svg \">1<\");\n  is_true ~msg:\"has tick 10\" (contains svg \">10<\");\n  is_true ~msg:\"has tick 10^2\" (contains svg \">10^2<\");\n  is_true ~msg:\"has tick 10^3\" (contains svg \">10^3<\")\n\nlet test_log_y () =\n  let x = Nx.create Float32 [| 4 |] [| 1.; 2.; 3.; 4. |] in\n  let y = Nx.create Float32 [| 4 |] [| 1.; 10.; 100.; 1000. |] in\n  let svg = render (Hugin.line ~x ~y () |> Hugin.yscale `Log) in\n  is_true ~msg:\"has tick 1\" (contains svg \">1<\");\n  is_true ~msg:\"has tick 10^3\" (contains svg \">10^3<\")\n\n(* custom ticks *)\n\nlet test_explicit_xticks () =\n  let svg =\n    render\n      (Hugin.line ~x:x5 ~y:y5 () |> Hugin.xticks [ (0., \"zero\"); (4., \"four\") ])\n  in\n  is_true ~msg:\"has custom tick zero\" (contains svg \">zero<\");\n  is_true ~msg:\"has custom tick four\" (contains svg \">four<\")\n\nlet test_explicit_yticks () =\n  let svg =\n    render\n      (Hugin.line ~x:x5 ~y:y5 () |> Hugin.yticks [ (0., \"low\"); (4., \"high\") ])\n  in\n  is_true ~msg:\"has custom tick low\" (contains svg \">low<\");\n  is_true ~msg:\"has custom tick high\" (contains svg \">high<\")\n\n(* sqrt scale *)\n\nlet test_sqrt_handles_zero () =\n  (* Sqrt scale handles zero gracefully — critical for astronomical fluxes *)\n  let x = Nx.create Float32 [| 5 |] [| 0.; 1.; 4.; 9.; 16. |] in\n  let y = Nx.create Float32 [| 5 |] [| 0.; 1.; 2.; 3.; 4. |] in\n  let svg = render (Hugin.line ~x ~y () |> Hugin.xscale `Sqrt) in\n  is_true ~msg:\"has tick 0\" (contains svg \">0<\");\n  is_true ~msg:\"has path\" (contains svg \"<path\")\n\nlet test_sqrt_differs_from_linear () =\n  let x = Nx.create Float32 [| 5 |] [| 0.; 1.; 4.; 9.; 16. |] in\n  let y = Nx.create Float32 [| 5 |] [| 0.; 1.; 2.; 3.; 4. 
|] in\n  let svg_lin = render (Hugin.line ~x ~y ()) in\n  let svg_sqrt = render (Hugin.line ~x ~y () |> Hugin.yscale `Sqrt) in\n  is_true ~msg:\"sqrt changes output\" (svg_sqrt <> svg_lin)\n\n(* asinh scale *)\n\nlet test_asinh_negative_values () =\n  (* Asinh handles negative values, unlike log — needed for\n     background-subtracted fluxes *)\n  let x = Nx.create Float32 [| 5 |] [| -100.; -1.; 0.; 1.; 100. |] in\n  let y = Nx.create Float32 [| 5 |] [| 0.; 1.; 2.; 3.; 4. |] in\n  let svg = render (Hugin.line ~x ~y () |> Hugin.xscale `Asinh) in\n  is_true ~msg:\"has tick 0\" (contains svg \">0<\");\n  is_true ~msg:\"has path\" (contains svg \"<path\")\n\nlet test_asinh_differs_from_linear () =\n  let x = Nx.create Float32 [| 5 |] [| 0.; 1.; 2.; 3.; 4. |] in\n  let y = Nx.create Float32 [| 5 |] [| -100.; -1.; 0.; 1.; 100. |] in\n  let svg_lin = render (Hugin.line ~x ~y ()) in\n  let svg_asinh = render (Hugin.line ~x ~y () |> Hugin.yscale `Asinh) in\n  is_true ~msg:\"asinh changes output\" (svg_asinh <> svg_lin)\n\n(* symlog scale *)\n\nlet test_symlog_has_linear_and_log_ticks () =\n  (* Symlog should produce ticks in both the linear region (near 0) and the log\n     region (far from 0) *)\n  let x =\n    Nx.create Float32 [| 7 |] [| -1000.; -10.; -1.; 0.; 1.; 10.; 1000. |]\n  in\n  let y = Nx.init Float32 [| 7 |] (fun i -> float_of_int i.(0)) in\n  let svg = render (Hugin.line ~x ~y () |> Hugin.xscale (`Symlog 10.)) in\n  is_true ~msg:\"has tick 0 (linear region)\" (contains svg \">0<\");\n  is_true ~msg:\"has path\" (contains svg \"<path\")\n\nlet test_symlog_differs_from_linear () =\n  let x =\n    Nx.create Float32 [| 7 |] [| -1000.; -10.; -1.; 0.; 1.; 10.; 1000. 
|]\n  in\n  let y = Nx.init Float32 [| 7 |] (fun i -> float_of_int i.(0)) in\n  let svg_lin = render (Hugin.line ~x ~y ()) in\n  let svg_sym = render (Hugin.line ~x ~y () |> Hugin.xscale (`Symlog 10.)) in\n  is_true ~msg:\"symlog changes output\" (svg_sym <> svg_lin)\n\n(* inverted scales *)\n\nlet test_invert_reverses_tick_order () =\n  (* The same tick labels should appear, but xinvert swaps pixel positions. We\n     verify the SVG output actually changes. *)\n  let svg_normal = render (Hugin.line ~x:x5 ~y:y5 ()) in\n  let svg_inv = render (Hugin.line ~x:x5 ~y:y5 () |> Hugin.xinvert) in\n  is_true ~msg:\"has tick 0\" (contains svg_inv \">0<\");\n  is_true ~msg:\"invert changes output\" (svg_inv <> svg_normal)\n\nlet test_invert_preserves_ticks () =\n  (* Inversion should not remove or add ticks, just reposition them *)\n  let svg_normal = render (Hugin.line ~x:x5 ~y:y5 ()) in\n  let svg_inv = render (Hugin.line ~x:x5 ~y:y5 () |> Hugin.yinvert) in\n  let normal_texts = count_substring svg_normal \"<text\" in\n  let inv_texts = count_substring svg_inv \"<text\" in\n  is_true ~msg:\"same number of text elements\" (normal_texts = inv_texts)\n\nlet test_log_inverted () =\n  (* Log + invert is the typical RA axis for sky charts *)\n  let x = Nx.create Float32 [| 4 |] [| 1.; 10.; 100.; 1000. |] in\n  let y = Nx.create Float32 [| 4 |] [| 1.; 2.; 3.; 4. 
|] in\n  let svg =\n    render (Hugin.line ~x ~y () |> Hugin.xscale `Log |> Hugin.xinvert)\n  in\n  is_true ~msg:\"has tick 1\" (contains svg \">1<\");\n  is_true ~msg:\"has tick 10\" (contains svg \">10<\")\n\nlet () =\n  run \"Scale\"\n    [\n      group \"linear\"\n        [\n          test \"ticks present\" test_linear_ticks_present;\n          test \"xlim constrains\" test_linear_xlim;\n          test \"ylim constrains\" test_linear_ylim;\n          test \"negative range\" test_linear_negative_range;\n          test \"small range\" test_linear_small_range;\n          test \"single point\" test_linear_single_point;\n        ];\n      group \"log\"\n        [\n          test \"power-of-10 ticks\" test_log_ticks; test \"log y axis\" test_log_y;\n        ];\n      group \"sqrt\"\n        [\n          test \"handles zero\" test_sqrt_handles_zero;\n          test \"differs from linear\" test_sqrt_differs_from_linear;\n        ];\n      group \"asinh\"\n        [\n          test \"negative values\" test_asinh_negative_values;\n          test \"differs from linear\" test_asinh_differs_from_linear;\n        ];\n      group \"symlog\"\n        [\n          test \"linear and log ticks\" test_symlog_has_linear_and_log_ticks;\n          test \"differs from linear\" test_symlog_differs_from_linear;\n        ];\n      group \"inverted\"\n        [\n          test \"reverses tick order\" test_invert_reverses_tick_order;\n          test \"preserves ticks\" test_invert_preserves_ticks;\n          test \"log inverted\" test_log_inverted;\n        ];\n      group \"custom ticks\"\n        [\n          test \"explicit xticks\" test_explicit_xticks;\n          test \"explicit yticks\" test_explicit_yticks;\n        ];\n    ]\n"
  },
  {
    "path": "packages/hugin/test/test_svg_backend.ml",
    "content": "(*---------------------------------------------------------------------------\n  Tests for the SVG backend — rendered through Hugin.render_svg. We verify SVG\n  structure, XML escaping, and content correctness.\n  ---------------------------------------------------------------------------*)\n\nopen Hugin\nopen Windtrap\n\nlet contains s sub =\n  let len_s = String.length s and len_sub = String.length sub in\n  if len_sub > len_s then false\n  else\n    let found = ref false in\n    for i = 0 to len_s - len_sub do\n      if (not !found) && String.sub s i len_sub = sub then found := true\n    done;\n    !found\n\nlet ends_with s suffix = String.ends_with ~suffix s\n\nlet render ?(width = 400.) ?(height = 300.) spec =\n  let tmp = Filename.temp_file \"hugin_test\" \".svg\" in\n  Hugin.render_svg ~width ~height tmp spec;\n  let ic = open_in tmp in\n  let n = in_channel_length ic in\n  let s = really_input_string ic n in\n  close_in ic;\n  Sys.remove tmp;\n  s\n\nlet sample_x = Nx.init Float32 [| 5 |] (fun i -> float_of_int i.(0))\nlet sample_y = Nx.init Float32 [| 5 |] (fun i -> float_of_int i.(0))\n\n(* SVG structure *)\n\nlet test_svg_envelope () =\n  let svg = render (Hugin.line ~x:sample_x ~y:sample_y ()) in\n  is_true ~msg:\"starts with xml\"\n    (String.length svg > 5 && String.sub svg 0 5 = \"<?xml\");\n  is_true ~msg:\"ends with svg\" (ends_with svg \"</svg>\\n\")\n\nlet test_svg_dimensions () =\n  let svg =\n    render ~width:800. ~height:600. (Hugin.line ~x:sample_x ~y:sample_y ())\n  in\n  is_true ~msg:\"has width\" (contains svg \"width=\\\"800\\\"\");\n  is_true ~msg:\"has height\" (contains svg \"height=\\\"600\\\"\")\n\n(* XML escaping through text marks *)\n\nlet test_xml_escaping () =\n  let svg = render (Hugin.text ~x:1. ~y:1. 
\"a & b < c\" ()) in\n  is_true ~msg:\"ampersand escaped\" (contains svg \"&amp;\");\n  is_true ~msg:\"less-than escaped\" (contains svg \"&lt;\")\n\nlet test_xml_escaping_quotes () =\n  let svg = render (Hugin.text ~x:1. ~y:1. \"say \\\"hello\\\"\" ()) in\n  is_true ~msg:\"quotes escaped\" (contains svg \"&quot;\")\n\n(* Clip regions *)\n\nlet test_clip_region () =\n  (* A line plot should produce a clip region for the data area *)\n  let svg = render (Hugin.line ~x:sample_x ~y:sample_y ()) in\n  is_true ~msg:\"has clipPath\" (contains svg \"<clipPath\")\n\n(* Dash patterns *)\n\nlet test_dashed_line () =\n  let svg =\n    render (Hugin.line ~x:sample_x ~y:sample_y ~line_style:`Dashed ())\n  in\n  is_true ~msg:\"has dasharray\" (contains svg \"stroke-dasharray\")\n\nlet test_dotted_line () =\n  let svg =\n    render (Hugin.line ~x:sample_x ~y:sample_y ~line_style:`Dotted ())\n  in\n  is_true ~msg:\"has dasharray\" (contains svg \"stroke-dasharray\")\n\n(* Marker rendering *)\n\nlet test_markers_in_svg () =\n  let svg = render (Hugin.line ~x:sample_x ~y:sample_y ~marker:Circle ()) in\n  (* Markers use <defs><symbol>...<use> pattern *)\n  is_true ~msg:\"has use elements\" (contains svg \"<use \")\n\nlet test_scatter_markers () =\n  let svg = render (Hugin.point ~x:sample_x ~y:sample_y ()) in\n  is_true ~msg:\"has marker paths\" (contains svg \"<path d=\\\"M\")\n\nlet () =\n  run \"Svg_backend\"\n    [\n      group \"SVG structure\"\n        [\n          test \"XML envelope\" test_svg_envelope;\n          test \"dimensions\" test_svg_dimensions;\n          test \"clip region\" test_clip_region;\n        ];\n      group \"XML escaping\"\n        [\n          test \"ampersand and less-than\" test_xml_escaping;\n          test \"quotes\" test_xml_escaping_quotes;\n        ];\n      group \"line styles\"\n        [ test \"dashed\" test_dashed_line; test \"dotted\" test_dotted_line ];\n      group \"markers\"\n        [\n          test \"line markers\" 
test_markers_in_svg;\n          test \"scatter markers\" test_scatter_markers;\n        ];\n    ]\n"
  },
  {
    "path": "packages/hugin/test/test_ticks.ml",
    "content": "(*---------------------------------------------------------------------------\n  Tests for Ticks — exercised indirectly through SVG output. We verify tick\n  label formatting, count, and presence in rendered SVGs.\n  ---------------------------------------------------------------------------*)\n\nopen Hugin\nopen Windtrap\n\nlet contains s sub =\n  let len_s = String.length s and len_sub = String.length sub in\n  if len_sub > len_s then false\n  else\n    let found = ref false in\n    for i = 0 to len_s - len_sub do\n      if (not !found) && String.sub s i len_sub = sub then found := true\n    done;\n    !found\n\nlet count_substring s sub =\n  let len_s = String.length s and len_sub = String.length sub in\n  if len_sub > len_s || len_sub = 0 then 0\n  else begin\n    let count = ref 0 in\n    for i = 0 to len_s - len_sub do\n      if String.sub s i len_sub = sub then incr count\n    done;\n    !count\n  end\n\nlet render spec =\n  let tmp = Filename.temp_file \"hugin_test\" \".svg\" in\n  Hugin.render_svg ~width:400. ~height:300. tmp spec;\n  let ic = open_in tmp in\n  let n = in_channel_length ic in\n  let s = really_input_string ic n in\n  close_in ic;\n  Sys.remove tmp;\n  s\n\n(* linear tick formatting *)\n\nlet test_zero_label () =\n  let x = Nx.create Float32 [| 5 |] [| -10.; -5.; 0.; 5.; 10. |] in\n  let y = Nx.create Float32 [| 5 |] [| -10.; -5.; 0.; 5.; 10. |] in\n  let svg = render (Hugin.line ~x ~y ()) in\n  (* The zero tick should show \"0\" not \"1e-15\" or similar *)\n  is_true ~msg:\"has zero tick\" (contains svg \">0<\")\n\nlet test_reasonable_count () =\n  let x = Nx.init Float32 [| 101 |] (fun i -> float_of_int i.(0)) in\n  let y = Nx.init Float32 [| 101 |] (fun i -> float_of_int i.(0)) in\n  let svg = render (Hugin.line ~x ~y ()) in\n  (* Count text elements that look like tick labels. Each tick generates a\n     <text> element. A basic line plot should have title-area text + x ticks + y\n     ticks. 
We just check it's not an absurd number. *)\n  let text_count = count_substring svg \"<text \" in\n  is_true ~msg:\"text count reasonable\" (text_count > 2 && text_count < 40)\n\n(* log tick formatting *)\n\nlet test_log_tick_labels () =\n  let x = Nx.create Float32 [| 5 |] [| 0.01; 0.1; 1.; 10.; 100. |] in\n  let y = Nx.create Float32 [| 5 |] [| 1.; 2.; 3.; 4.; 5. |] in\n  let svg = render (Hugin.line ~x ~y () |> Hugin.xscale `Log) in\n  (* Log ticks should be powers of 10, formatted as 10^k *)\n  is_true ~msg:\"has 10^-2\" (contains svg \">10^-2<\");\n  is_true ~msg:\"has 10^2\" (contains svg \">10^2<\")\n\n(* large range doesn't explode *)\n\nlet test_large_range () =\n  let x = Nx.create Float32 [| 2 |] [| 0.; 1e6 |] in\n  let y = Nx.create Float32 [| 2 |] [| 0.; 1. |] in\n  let svg = render (Hugin.line ~x ~y ()) in\n  let text_count = count_substring svg \"<text \" in\n  is_true ~msg:\"tick count bounded\" (text_count < 40)\n\n(* small fractional range *)\n\nlet test_fractional_range () =\n  let x = Nx.create Float32 [| 2 |] [| 0.; 0.001 |] in\n  let y = Nx.create Float32 [| 2 |] [| 0.; 1. |] in\n  let svg = render (Hugin.line ~x ~y ()) in\n  is_true ~msg:\"valid svg\" (contains svg \"<svg\")\n\nlet () =\n  run \"Ticks\"\n    [\n      group \"linear formatting\"\n        [\n          test \"zero label\" test_zero_label;\n          test \"reasonable count\" test_reasonable_count;\n          test \"large range bounded\" test_large_range;\n          test \"fractional range\" test_fractional_range;\n        ];\n      group \"log formatting\" [ test \"log tick labels\" test_log_tick_labels ];\n    ]\n"
  },
  {
    "path": "packages/hugin/top/dune",
    "content": "(library\n (name hugin_top)\n (public_name hugin.top)\n (modules hugin_top)\n (libraries hugin compiler-libs.toplevel)\n (modes byte))\n"
  },
  {
    "path": "packages/hugin/top/hugin_top.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nlet install_printer name =\n  let phrase =\n    Printf.sprintf \"#install_printer %s;;\" name\n    |> Lexing.from_string\n    |> !Toploop.parse_toplevel_phrase\n  in\n  Toploop.execute_phrase false Format.err_formatter phrase |> ignore\n\nlet () = install_printer \"Hugin.pp\"\n"
  },
  {
    "path": "packages/hugin/ucairo/discover/discover.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nmodule C = Configurator.V1\nmodule P = C.Pkg_config\n\nlet default_cairo c =\n  let sys = C.ocaml_config_var_exn c \"system\" in\n  if sys = \"msvc\" || sys = \"win64\" then\n    {\n      P.cflags = [ \"-I\"; \"C:\\\\gtk\\\\include\\\\cairo\" ];\n      libs = [ \"/LC:\\\\gtk\\\\lib\"; \"cairo.lib\" ];\n    }\n  else { P.cflags = [ \"-I/usr/include/cairo\" ]; libs = [ \"-lcairo\" ] }\n\nlet () =\n  C.main ~name:\"cairo-config\" (fun c ->\n      let p =\n        match P.get c with\n        | Some p -> (\n            match P.query p ~package:\"cairo\" with\n            | Some p -> p\n            | None -> default_cairo c)\n        | None -> default_cairo c\n      in\n      let cflags =\n        match Sys.getenv \"CAIRO_CFLAGS\" with\n        | exception Not_found -> \"-fPIC\" :: p.P.cflags\n        | alt -> C.Flags.extract_blank_separated_words alt\n      in\n      let libs =\n        match Sys.getenv \"CAIRO_LIBS\" with\n        | exception Not_found -> p.P.libs\n        | alt -> C.Flags.extract_blank_separated_words alt\n      in\n      C.Flags.write_sexp \"cflags.sexp\" cflags;\n      C.Flags.write_sexp \"clibs.sexp\" libs)\n"
  },
  {
    "path": "packages/hugin/ucairo/discover/dune",
    "content": "(executable\n (name discover)\n (modules discover)\n (libraries dune-configurator))\n\n(rule\n (deps discover.exe)\n (targets cflags.sexp clibs.sexp)\n (action\n  (run ./discover.exe)))\n"
  },
  {
    "path": "packages/hugin/ucairo/dune",
    "content": "(library\n (name ucairo)\n (public_name hugin.ucairo)\n (foreign_stubs\n  (language c)\n  (names ucairo_stubs)\n  (flags\n   (:include discover/cflags.sexp)))\n (c_library_flags\n  (:include discover/clibs.sexp)))\n"
  },
  {
    "path": "packages/hugin/ucairo/ucairo.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Handle types *)\n\ntype t\ntype surface\n\n(* Enums *)\n\ntype font_weight = Normal | Bold\ntype line_cap = Butt | Round | Square\ntype line_join = Join_miter | Join_round | Join_bevel\n\ntype antialias =\n  | Antialias_default\n  | Antialias_none\n  | Antialias_gray\n  | Antialias_subpixel\n\n(* Text extents *)\n\ntype text_extents = {\n  x_bearing : float;\n  y_bearing : float;\n  width : float;\n  height : float;\n  x_advance : float;\n  y_advance : float;\n}\n\n(* Context *)\n\nexternal create : surface -> t = \"caml_ucairo_create\"\n\n(* State *)\n\nexternal save : t -> unit = \"caml_ucairo_save\"\nexternal restore : t -> unit = \"caml_ucairo_restore\"\n\n(* Transformations *)\n\nexternal translate : t -> float -> float -> unit = \"caml_ucairo_translate\"\nexternal scale : t -> float -> float -> unit = \"caml_ucairo_scale\"\nexternal rotate : t -> float -> unit = \"caml_ucairo_rotate\"\n\n(* Source *)\n\nexternal set_source_rgba : t -> float -> float -> float -> float -> unit\n  = \"caml_ucairo_set_source_rgba\"\n\nexternal set_source_surface : t -> surface -> x:float -> y:float -> unit\n  = \"caml_ucairo_set_source_surface\"\n\n(* Stroke/fill parameters *)\n\nexternal set_line_width : t -> float -> unit = \"caml_ucairo_set_line_width\"\nexternal raw_set_line_cap : t -> int -> unit = \"caml_ucairo_set_line_cap\"\n\nlet set_line_cap t cap =\n  raw_set_line_cap t (match cap with Butt -> 0 | Round -> 1 | Square -> 2)\n\nexternal raw_set_line_join : t -> int -> unit = \"caml_ucairo_set_line_join\"\n\nlet set_line_join t join =\n  raw_set_line_join t\n    (match join with Join_miter -> 0 | Join_round -> 1 | Join_bevel -> 2)\n\nexternal set_dash : t -> float array -> unit = 
\"caml_ucairo_set_dash\"\nexternal raw_set_antialias : t -> int -> unit = \"caml_ucairo_set_antialias\"\n\nlet set_antialias t aa =\n  raw_set_antialias t\n    (match aa with\n    | Antialias_default -> 0\n    | Antialias_none -> 1\n    | Antialias_gray -> 2\n    | Antialias_subpixel -> 3)\n\n(* Font *)\n\nexternal raw_select_font_face : t -> string -> int -> unit\n  = \"caml_ucairo_select_font_face\"\n\nlet select_font_face t family weight =\n  raw_select_font_face t family (match weight with Normal -> 0 | Bold -> 1)\n\nexternal set_font_size : t -> float -> unit = \"caml_ucairo_set_font_size\"\nexternal text_extents : t -> string -> text_extents = \"caml_ucairo_text_extents\"\nexternal show_text : t -> string -> unit = \"caml_ucairo_show_text\"\n\n(* Path *)\n\nexternal move_to : t -> float -> float -> unit = \"caml_ucairo_move_to\"\nexternal line_to : t -> float -> float -> unit = \"caml_ucairo_line_to\"\n\nexternal arc : t -> float -> float -> r:float -> a1:float -> a2:float -> unit\n  = \"caml_ucairo_arc_bytecode\" \"caml_ucairo_arc_native\"\n\nexternal rectangle : t -> float -> float -> w:float -> h:float -> unit\n  = \"caml_ucairo_rectangle\"\n\nmodule Path = struct\n  external close : t -> unit = \"caml_ucairo_path_close\"\n  external clear : t -> unit = \"caml_ucairo_path_clear\"\nend\n\n(* Drawing *)\n\nexternal fill : t -> unit = \"caml_ucairo_fill\"\nexternal fill_preserve : t -> unit = \"caml_ucairo_fill_preserve\"\nexternal stroke : t -> unit = \"caml_ucairo_stroke\"\nexternal paint : t -> unit = \"caml_ucairo_paint\"\nexternal clip : t -> unit = \"caml_ucairo_clip\"\n\n(* Surface *)\n\nmodule Surface = struct\n  external finish : surface -> unit = \"caml_ucairo_surface_finish\"\n  external flush : surface -> unit = \"caml_ucairo_surface_flush\"\nend\n\n(* Image *)\n\nmodule Image = struct\n  external create : w:int -> h:int -> surface = \"caml_ucairo_image_create\"\n\n  external create_for_data8 :\n    (int, Bigarray.int8_unsigned_elt, 
Bigarray.c_layout) Bigarray.Array1.t ->\n    w:int ->\n    h:int ->\n    stride:int ->\n    surface = \"caml_ucairo_image_create_for_data8\"\n\n  external stride_for_width : int -> int = \"caml_ucairo_image_stride_for_width\"\n  [@@noalloc]\nend\n\n(* PDF *)\n\nmodule Pdf = struct\n  external create : string -> w:float -> h:float -> surface\n    = \"caml_ucairo_pdf_create\"\nend\n\n(* PNG *)\n\nmodule Png = struct\n  external write : surface -> string -> unit = \"caml_ucairo_png_write\"\n\n  external write_to_stream : surface -> (string -> unit) -> unit\n    = \"caml_ucairo_png_write_to_stream\"\nend\n"
  },
  {
    "path": "packages/hugin/ucairo/ucairo.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Minimal Cairo bindings.\n\n    Thin bindings covering image and PDF surface creation, path drawing, text\n    rendering, and PNG output. Designed for the Hugin rendering backend; not a\n    general-purpose Cairo binding.\n\n    All functions raise [Failure] on Cairo errors and [Invalid_argument] on\n    destroyed handles. *)\n\n(** {1:types Handle types} *)\n\ntype t\n(** The type for Cairo drawing contexts. *)\n\ntype surface\n(** The type for Cairo surfaces. *)\n\n(** {1:enums Enumerations} *)\n\ntype font_weight = Normal | Bold  (** The type for font weight. *)\ntype line_cap = Butt | Round | Square  (** The type for line cap style. *)\n\ntype line_join =\n  | Join_miter\n  | Join_round\n  | Join_bevel  (** The type for line join style. *)\n\ntype antialias =\n  | Antialias_default\n  | Antialias_none\n  | Antialias_gray\n  | Antialias_subpixel  (** The type for antialiasing mode. *)\n\n(** {1:text_extents Text extents} *)\n\ntype text_extents = {\n  x_bearing : float;\n  y_bearing : float;\n  width : float;\n  height : float;\n  x_advance : float;\n  y_advance : float;\n}\n(** The type for text extent measurements. *)\n\n(** {1:context Context creation} *)\n\nval create : surface -> t\n(** [create surface] is a new drawing context targeting [surface]. *)\n\n(** {1:state State} *)\n\nval save : t -> unit\n(** [save t] pushes the current graphics state onto the stack. *)\n\nval restore : t -> unit\n(** [restore t] pops the graphics state from the stack. *)\n\n(** {1:transform Transformations} *)\n\nval translate : t -> float -> float -> unit\n(** [translate t tx ty] translates the user-space origin by [(tx, ty)]. 
*)\n\nval scale : t -> float -> float -> unit\n(** [scale t sx sy] scales the user-space axes by [(sx, sy)]. *)\n\nval rotate : t -> float -> unit\n(** [rotate t angle] rotates the user-space axes by [angle] radians. *)\n\n(** {1:source Source} *)\n\nval set_source_rgba : t -> float -> float -> float -> float -> unit\n(** [set_source_rgba t r g b a] sets the source to the given RGBA color. *)\n\nval set_source_surface : t -> surface -> x:float -> y:float -> unit\n(** [set_source_surface t s ~x ~y] sets [s] as the source, offset by [(x, y)].\n*)\n\n(** {1:stroke_fill Stroke and fill parameters} *)\n\nval set_line_width : t -> float -> unit\n(** [set_line_width t w] sets the stroke line width. *)\n\nval set_line_cap : t -> line_cap -> unit\n(** [set_line_cap t cap] sets the line cap style. *)\n\nval set_line_join : t -> line_join -> unit\n(** [set_line_join t join] sets the line join style. *)\n\nval set_dash : t -> float array -> unit\n(** [set_dash t dashes] sets the dash pattern. An empty array disables dashing.\n*)\n\nval set_antialias : t -> antialias -> unit\n(** [set_antialias t aa] sets the antialiasing mode. *)\n\n(** {1:font Font} *)\n\nval select_font_face : t -> string -> font_weight -> unit\n(** [select_font_face t family weight] selects a toy font face. Slant is always\n    upright. *)\n\nval set_font_size : t -> float -> unit\n(** [set_font_size t size] sets the font size in user-space units. *)\n\nval text_extents : t -> string -> text_extents\n(** [text_extents t s] is the extents of [s] with the current font. *)\n\nval show_text : t -> string -> unit\n(** [show_text t s] renders [s] at the current point. *)\n\n(** {1:path Path operations} *)\n\nval move_to : t -> float -> float -> unit\n(** [move_to t x y] begins a new sub-path at [(x, y)]. *)\n\nval line_to : t -> float -> float -> unit\n(** [line_to t x y] adds a line segment to [(x, y)]. 
*)\n\nval arc : t -> float -> float -> r:float -> a1:float -> a2:float -> unit\n(** [arc t xc yc ~r ~a1 ~a2] adds a circular arc centered at [(xc, yc)] with\n    radius [r] from angle [a1] to [a2] (in radians). *)\n\nval rectangle : t -> float -> float -> w:float -> h:float -> unit\n(** [rectangle t x y ~w ~h] adds a closed rectangle sub-path. *)\n\n(** {1:path_mod Path module} *)\n\nmodule Path : sig\n  val close : t -> unit\n  (** [close t] closes the current sub-path with a line to its start. *)\n\n  val clear : t -> unit\n  (** [clear t] clears the current path. *)\nend\n\n(** {1:drawing Drawing operations} *)\n\nval fill : t -> unit\n(** [fill t] fills the current path and clears it. *)\n\nval fill_preserve : t -> unit\n(** [fill_preserve t] fills the current path without clearing it. *)\n\nval stroke : t -> unit\n(** [stroke t] strokes the current path and clears it. *)\n\nval paint : t -> unit\n(** [paint t] paints the current source everywhere within the current clip. *)\n\nval clip : t -> unit\n(** [clip t] establishes a new clip region by intersecting the current clip with\n    the current path, then clears the path. *)\n\n(** {1:surface Surface operations} *)\n\nmodule Surface : sig\n  val finish : surface -> unit\n  (** [finish s] finalizes the surface and releases external resources. *)\n\n  val flush : surface -> unit\n  (** [flush s] completes any pending drawing operations. *)\nend\n\n(** {1:image Image surface} *)\n\nmodule Image : sig\n  val create : w:int -> h:int -> surface\n  (** [create ~w ~h] is a new ARGB32 image surface of dimensions [w] x [h].\n\n      Raises [Failure] if allocation fails. *)\n\n  val create_for_data8 :\n    (int, Bigarray.int8_unsigned_elt, Bigarray.c_layout) Bigarray.Array1.t ->\n    w:int ->\n    h:int ->\n    stride:int ->\n    surface\n  (** [create_for_data8 data ~w ~h ~stride] wraps existing pixel [data] as an\n      ARGB32 image surface. [data] must remain live for the lifetime of the\n      surface. 
*)\n\n  val stride_for_width : int -> int\n  (** [stride_for_width w] is the minimum stride in bytes for an ARGB32 image of\n      width [w], respecting Cairo alignment requirements. *)\nend\n\n(** {1:pdf PDF surface} *)\n\nmodule Pdf : sig\n  val create : string -> w:float -> h:float -> surface\n  (** [create filename ~w ~h] is a new PDF surface writing to [filename].\n      Dimensions are in points (1 point = 1/72 inch). *)\nend\n\n(** {1:png PNG output} *)\n\nmodule Png : sig\n  val write : surface -> string -> unit\n  (** [write surface filename] writes [surface] as a PNG file. *)\n\n  val write_to_stream : surface -> (string -> unit) -> unit\n  (** [write_to_stream surface f] writes [surface] as PNG data, calling [f] with\n      each chunk. *)\nend\n"
  },
  {
    "path": "packages/hugin/ucairo/ucairo_stubs.c",
    "content": "/*---------------------------------------------------------------------------\n   Copyright (c) 2026 The Raven authors. All rights reserved.\n   SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*/\n\n#define CAML_NAME_SPACE\n#include <caml/alloc.h>\n#include <caml/bigarray.h>\n#include <caml/callback.h>\n#include <caml/custom.h>\n#include <caml/fail.h>\n#include <caml/memory.h>\n#include <caml/mlvalues.h>\n\n#include <cairo.h>\n#include <cairo-pdf.h>\n#include <string.h>\n\n/* --------------------------------------------------------------------------\n   Custom blocks\n   -------------------------------------------------------------------------- */\n\n/* Context (cairo_t *) */\n\n#define Context_val(v) (*(cairo_t **)Data_custom_val(v))\n\nstatic void finalize_context(value v)\n{\n  cairo_t *cr = Context_val(v);\n  if (cr != NULL) { cairo_destroy(cr); Context_val(v) = NULL; }\n}\n\nstatic struct custom_operations context_ops = {\n  .identifier  = \"ucairo.context\",\n  .finalize    = finalize_context,\n  .compare     = custom_compare_default,\n  .hash        = custom_hash_default,\n  .serialize   = custom_serialize_default,\n  .deserialize = custom_deserialize_default,\n  .compare_ext = custom_compare_ext_default,\n};\n\nstatic value alloc_context(cairo_t *cr)\n{\n  value v = caml_alloc_custom(&context_ops, sizeof(cairo_t *), 0, 1);\n  Context_val(v) = cr;\n  return v;\n}\n\n/* Surface (cairo_surface_t *) */\n\n#define Surface_val(v) (*(cairo_surface_t **)Data_custom_val(v))\n\nstatic void finalize_surface(value v)\n{\n  cairo_surface_t *s = Surface_val(v);\n  if (s != NULL) { cairo_surface_destroy(s); Surface_val(v) = NULL; }\n}\n\nstatic struct custom_operations surface_ops = {\n  .identifier  = \"ucairo.surface\",\n  .finalize    = finalize_surface,\n  .compare     = custom_compare_default,\n  .hash        = custom_hash_default,\n  .serialize   = custom_serialize_default,\n  .deserialize = 
custom_deserialize_default,\n  .compare_ext = custom_compare_ext_default,\n};\n\nstatic value alloc_surface(cairo_surface_t *s)\n{\n  value v = caml_alloc_custom(&surface_ops, sizeof(cairo_surface_t *), 0, 1);\n  Surface_val(v) = s;\n  return v;\n}\n\n/* --------------------------------------------------------------------------\n   Helpers\n   -------------------------------------------------------------------------- */\n\nstatic inline cairo_t *check_context(value v, const char *fn)\n{\n  cairo_t *cr = Context_val(v);\n  if (cr == NULL) caml_invalid_argument(fn);\n  return cr;\n}\n\nstatic inline cairo_surface_t *check_surface(value v, const char *fn)\n{\n  cairo_surface_t *s = Surface_val(v);\n  if (s == NULL) caml_invalid_argument(fn);\n  return s;\n}\n\n/* --------------------------------------------------------------------------\n   Context creation\n   -------------------------------------------------------------------------- */\n\nCAMLprim value caml_ucairo_create(value vsurf)\n{\n  CAMLparam1(vsurf);\n  cairo_surface_t *s = check_surface(vsurf, \"Ucairo.create: destroyed surface\");\n  cairo_t *cr = cairo_create(s);\n  if (cairo_status(cr) != CAIRO_STATUS_SUCCESS) {\n    const char *msg = cairo_status_to_string(cairo_status(cr));\n    cairo_destroy(cr);\n    caml_failwith(msg);\n  }\n  CAMLreturn(alloc_context(cr));\n}\n\n/* --------------------------------------------------------------------------\n   State\n   -------------------------------------------------------------------------- */\n\nCAMLprim value caml_ucairo_save(value vcr)\n{\n  CAMLparam1(vcr);\n  cairo_save(check_context(vcr, \"Ucairo.save: destroyed context\"));\n  CAMLreturn(Val_unit);\n}\n\nCAMLprim value caml_ucairo_restore(value vcr)\n{\n  CAMLparam1(vcr);\n  cairo_restore(check_context(vcr, \"Ucairo.restore: destroyed context\"));\n  CAMLreturn(Val_unit);\n}\n\n/* --------------------------------------------------------------------------\n   Transformations\n   
-------------------------------------------------------------------------- */\n\nCAMLprim value caml_ucairo_translate(value vcr, value vtx, value vty)\n{\n  CAMLparam3(vcr, vtx, vty);\n  cairo_translate(check_context(vcr, \"Ucairo.translate: destroyed context\"),\n                  Double_val(vtx), Double_val(vty));\n  CAMLreturn(Val_unit);\n}\n\nCAMLprim value caml_ucairo_scale(value vcr, value vsx, value vsy)\n{\n  CAMLparam3(vcr, vsx, vsy);\n  cairo_scale(check_context(vcr, \"Ucairo.scale: destroyed context\"),\n              Double_val(vsx), Double_val(vsy));\n  CAMLreturn(Val_unit);\n}\n\nCAMLprim value caml_ucairo_rotate(value vcr, value vangle)\n{\n  CAMLparam2(vcr, vangle);\n  cairo_rotate(check_context(vcr, \"Ucairo.rotate: destroyed context\"),\n               Double_val(vangle));\n  CAMLreturn(Val_unit);\n}\n\n/* --------------------------------------------------------------------------\n   Source\n   -------------------------------------------------------------------------- */\n\nCAMLprim value caml_ucairo_set_source_rgba(value vcr, value vr, value vg,\n                                           value vb, value va)\n{\n  CAMLparam5(vcr, vr, vg, vb, va);\n  cairo_set_source_rgba(check_context(vcr, \"Ucairo.set_source_rgba: destroyed context\"),\n                        Double_val(vr), Double_val(vg),\n                        Double_val(vb), Double_val(va));\n  CAMLreturn(Val_unit);\n}\n\nCAMLprim value caml_ucairo_set_source_surface(value vcr, value vsurf,\n                                              value vx, value vy)\n{\n  CAMLparam4(vcr, vsurf, vx, vy);\n  cairo_set_source_surface(\n    check_context(vcr, \"Ucairo.set_source_surface: destroyed context\"),\n    check_surface(vsurf, \"Ucairo.set_source_surface: destroyed surface\"),\n    Double_val(vx), Double_val(vy));\n  CAMLreturn(Val_unit);\n}\n\n/* --------------------------------------------------------------------------\n   Stroke and fill parameters\n   
-------------------------------------------------------------------------- */\n\nCAMLprim value caml_ucairo_set_line_width(value vcr, value vw)\n{\n  CAMLparam2(vcr, vw);\n  cairo_set_line_width(check_context(vcr, \"Ucairo.set_line_width: destroyed context\"),\n                       Double_val(vw));\n  CAMLreturn(Val_unit);\n}\n\nCAMLprim value caml_ucairo_set_line_cap(value vcr, value vcap)\n{\n  CAMLparam2(vcr, vcap);\n  static const cairo_line_cap_t caps[] = {\n    CAIRO_LINE_CAP_BUTT, CAIRO_LINE_CAP_ROUND, CAIRO_LINE_CAP_SQUARE\n  };\n  cairo_set_line_cap(check_context(vcr, \"Ucairo.set_line_cap: destroyed context\"),\n                     caps[Int_val(vcap)]);\n  CAMLreturn(Val_unit);\n}\n\nCAMLprim value caml_ucairo_set_line_join(value vcr, value vjoin)\n{\n  CAMLparam2(vcr, vjoin);\n  static const cairo_line_join_t joins[] = {\n    CAIRO_LINE_JOIN_MITER, CAIRO_LINE_JOIN_ROUND, CAIRO_LINE_JOIN_BEVEL\n  };\n  cairo_set_line_join(check_context(vcr, \"Ucairo.set_line_join: destroyed context\"),\n                      joins[Int_val(vjoin)]);\n  CAMLreturn(Val_unit);\n}\n\nCAMLprim value caml_ucairo_set_dash(value vcr, value varr)\n{\n  CAMLparam2(vcr, varr);\n  cairo_t *cr = check_context(vcr, \"Ucairo.set_dash: destroyed context\");\n  int n = Wosize_val(varr) / Double_wosize;\n  if (n == 0) {\n    cairo_set_dash(cr, NULL, 0, 0.0);\n  } else {\n    double stack_buf[64];\n    double *dashes = n <= 64 ? 
stack_buf\n                             : caml_stat_alloc(n * sizeof(double));\n    for (int i = 0; i < n; i++)\n      dashes[i] = Double_field(varr, i);\n    cairo_set_dash(cr, dashes, n, 0.0);\n    if (dashes != stack_buf) caml_stat_free(dashes);\n  }\n  CAMLreturn(Val_unit);\n}\n\nCAMLprim value caml_ucairo_set_antialias(value vcr, value vaa)\n{\n  CAMLparam2(vcr, vaa);\n  static const cairo_antialias_t aa[] = {\n    CAIRO_ANTIALIAS_DEFAULT, CAIRO_ANTIALIAS_NONE,\n    CAIRO_ANTIALIAS_GRAY, CAIRO_ANTIALIAS_SUBPIXEL\n  };\n  cairo_set_antialias(check_context(vcr, \"Ucairo.set_antialias: destroyed context\"),\n                      aa[Int_val(vaa)]);\n  CAMLreturn(Val_unit);\n}\n\n/* --------------------------------------------------------------------------\n   Font\n   -------------------------------------------------------------------------- */\n\nCAMLprim value caml_ucairo_select_font_face(value vcr, value vfamily,\n                                            value vweight)\n{\n  CAMLparam3(vcr, vfamily, vweight);\n  static const cairo_font_weight_t weights[] = {\n    CAIRO_FONT_WEIGHT_NORMAL, CAIRO_FONT_WEIGHT_BOLD\n  };\n  cairo_select_font_face(\n    check_context(vcr, \"Ucairo.select_font_face: destroyed context\"),\n    String_val(vfamily), CAIRO_FONT_SLANT_NORMAL, weights[Int_val(vweight)]);\n  CAMLreturn(Val_unit);\n}\n\nCAMLprim value caml_ucairo_set_font_size(value vcr, value vsize)\n{\n  CAMLparam2(vcr, vsize);\n  cairo_set_font_size(check_context(vcr, \"Ucairo.set_font_size: destroyed context\"),\n                      Double_val(vsize));\n  CAMLreturn(Val_unit);\n}\n\nCAMLprim value caml_ucairo_text_extents(value vcr, value vstr)\n{\n  CAMLparam2(vcr, vstr);\n  CAMLlocal1(result);\n  cairo_t *cr = check_context(vcr, \"Ucairo.text_extents: destroyed context\");\n  cairo_text_extents_t ext;\n  cairo_text_extents(cr, String_val(vstr), &ext);\n  result = caml_alloc(6 * Double_wosize, Double_array_tag);\n  Store_double_field(result, 0, ext.x_bearing);\n  
Store_double_field(result, 1, ext.y_bearing);\n  Store_double_field(result, 2, ext.width);\n  Store_double_field(result, 3, ext.height);\n  Store_double_field(result, 4, ext.x_advance);\n  Store_double_field(result, 5, ext.y_advance);\n  CAMLreturn(result);\n}\n\nCAMLprim value caml_ucairo_show_text(value vcr, value vstr)\n{\n  CAMLparam2(vcr, vstr);\n  cairo_show_text(check_context(vcr, \"Ucairo.show_text: destroyed context\"),\n                  String_val(vstr));\n  CAMLreturn(Val_unit);\n}\n\n/* --------------------------------------------------------------------------\n   Path operations\n   -------------------------------------------------------------------------- */\n\nCAMLprim value caml_ucairo_move_to(value vcr, value vx, value vy)\n{\n  CAMLparam3(vcr, vx, vy);\n  cairo_move_to(check_context(vcr, \"Ucairo.move_to: destroyed context\"),\n                Double_val(vx), Double_val(vy));\n  CAMLreturn(Val_unit);\n}\n\nCAMLprim value caml_ucairo_line_to(value vcr, value vx, value vy)\n{\n  CAMLparam3(vcr, vx, vy);\n  cairo_line_to(check_context(vcr, \"Ucairo.line_to: destroyed context\"),\n                Double_val(vx), Double_val(vy));\n  CAMLreturn(Val_unit);\n}\n\nCAMLprim value caml_ucairo_arc_native(value vcr, value vxc, value vyc,\n                                      value vr, value va1, value va2)\n{\n  CAMLparam5(vcr, vxc, vyc, vr, va1);\n  CAMLxparam1(va2);\n  cairo_arc(check_context(vcr, \"Ucairo.arc: destroyed context\"),\n            Double_val(vxc), Double_val(vyc),\n            Double_val(vr), Double_val(va1), Double_val(va2));\n  CAMLreturn(Val_unit);\n}\n\nCAMLprim value caml_ucairo_arc_bytecode(value *argv, int argc)\n{\n  (void)argc;\n  return caml_ucairo_arc_native(argv[0], argv[1], argv[2],\n                                argv[3], argv[4], argv[5]);\n}\n\nCAMLprim value caml_ucairo_rectangle(value vcr, value vx, value vy,\n                                     value vw, value vh)\n{\n  CAMLparam5(vcr, vx, vy, vw, vh);\n  
cairo_rectangle(check_context(vcr, \"Ucairo.rectangle: destroyed context\"),\n                  Double_val(vx), Double_val(vy),\n                  Double_val(vw), Double_val(vh));\n  CAMLreturn(Val_unit);\n}\n\nCAMLprim value caml_ucairo_path_close(value vcr)\n{\n  CAMLparam1(vcr);\n  cairo_close_path(check_context(vcr, \"Ucairo.Path.close: destroyed context\"));\n  CAMLreturn(Val_unit);\n}\n\nCAMLprim value caml_ucairo_path_clear(value vcr)\n{\n  CAMLparam1(vcr);\n  cairo_new_path(check_context(vcr, \"Ucairo.Path.clear: destroyed context\"));\n  CAMLreturn(Val_unit);\n}\n\n/* --------------------------------------------------------------------------\n   Drawing operations\n   -------------------------------------------------------------------------- */\n\nCAMLprim value caml_ucairo_fill(value vcr)\n{\n  CAMLparam1(vcr);\n  cairo_fill(check_context(vcr, \"Ucairo.fill: destroyed context\"));\n  CAMLreturn(Val_unit);\n}\n\nCAMLprim value caml_ucairo_fill_preserve(value vcr)\n{\n  CAMLparam1(vcr);\n  cairo_fill_preserve(check_context(vcr, \"Ucairo.fill_preserve: destroyed context\"));\n  CAMLreturn(Val_unit);\n}\n\nCAMLprim value caml_ucairo_stroke(value vcr)\n{\n  CAMLparam1(vcr);\n  cairo_stroke(check_context(vcr, \"Ucairo.stroke: destroyed context\"));\n  CAMLreturn(Val_unit);\n}\n\nCAMLprim value caml_ucairo_paint(value vcr)\n{\n  CAMLparam1(vcr);\n  cairo_paint(check_context(vcr, \"Ucairo.paint: destroyed context\"));\n  CAMLreturn(Val_unit);\n}\n\nCAMLprim value caml_ucairo_clip(value vcr)\n{\n  CAMLparam1(vcr);\n  cairo_clip(check_context(vcr, \"Ucairo.clip: destroyed context\"));\n  CAMLreturn(Val_unit);\n}\n\n/* --------------------------------------------------------------------------\n   Surface operations\n   -------------------------------------------------------------------------- */\n\nCAMLprim value caml_ucairo_surface_finish(value vsurf)\n{\n  CAMLparam1(vsurf);\n  cairo_surface_t *s = Surface_val(vsurf);\n  if (s != NULL) cairo_surface_finish(s);\n  
CAMLreturn(Val_unit);\n}\n\nCAMLprim value caml_ucairo_surface_flush(value vsurf)\n{\n  CAMLparam1(vsurf);\n  cairo_surface_flush(check_surface(vsurf, \"Ucairo.Surface.flush: destroyed surface\"));\n  CAMLreturn(Val_unit);\n}\n\n/* --------------------------------------------------------------------------\n   Image surface\n   -------------------------------------------------------------------------- */\n\nCAMLprim value caml_ucairo_image_create(value vw, value vh)\n{\n  CAMLparam2(vw, vh);\n  cairo_surface_t *s = cairo_image_surface_create(\n    CAIRO_FORMAT_ARGB32, Int_val(vw), Int_val(vh));\n  if (cairo_surface_status(s) != CAIRO_STATUS_SUCCESS) {\n    const char *msg = cairo_status_to_string(cairo_surface_status(s));\n    cairo_surface_destroy(s);\n    caml_failwith(msg);\n  }\n  CAMLreturn(alloc_surface(s));\n}\n\nCAMLprim value caml_ucairo_image_create_for_data8(value vdata, value vw,\n                                                  value vh, value vstride)\n{\n  CAMLparam4(vdata, vw, vh, vstride);\n  unsigned char *data = (unsigned char *)Caml_ba_data_val(vdata);\n  cairo_surface_t *s = cairo_image_surface_create_for_data(\n    data, CAIRO_FORMAT_ARGB32, Int_val(vw), Int_val(vh), Int_val(vstride));\n  if (cairo_surface_status(s) != CAIRO_STATUS_SUCCESS) {\n    const char *msg = cairo_status_to_string(cairo_surface_status(s));\n    cairo_surface_destroy(s);\n    caml_failwith(msg);\n  }\n  CAMLreturn(alloc_surface(s));\n}\n\nCAMLprim value caml_ucairo_image_stride_for_width(value vw)\n{\n  return Val_int(cairo_format_stride_for_width(CAIRO_FORMAT_ARGB32, Int_val(vw)));\n}\n\n/* --------------------------------------------------------------------------\n   PDF surface\n   -------------------------------------------------------------------------- */\n\nCAMLprim value caml_ucairo_pdf_create(value vfilename, value vw, value vh)\n{\n  CAMLparam3(vfilename, vw, vh);\n  cairo_surface_t *s = cairo_pdf_surface_create(\n    String_val(vfilename), Double_val(vw), 
Double_val(vh));\n  if (cairo_surface_status(s) != CAIRO_STATUS_SUCCESS) {\n    const char *msg = cairo_status_to_string(cairo_surface_status(s));\n    cairo_surface_destroy(s);\n    caml_failwith(msg);\n  }\n  CAMLreturn(alloc_surface(s));\n}\n\n/* --------------------------------------------------------------------------\n   PNG output\n   -------------------------------------------------------------------------- */\n\nCAMLprim value caml_ucairo_png_write(value vsurf, value vfilename)\n{\n  CAMLparam2(vsurf, vfilename);\n  cairo_surface_t *s = check_surface(vsurf, \"Ucairo.Png.write: destroyed surface\");\n  cairo_status_t st = cairo_surface_write_to_png(s, String_val(vfilename));\n  if (st != CAIRO_STATUS_SUCCESS)\n    caml_failwith(cairo_status_to_string(st));\n  CAMLreturn(Val_unit);\n}\n\nstatic cairo_status_t\npng_write_func(void *closure, const unsigned char *data, unsigned int length)\n{\n  CAMLparam0();\n  CAMLlocal2(vstr, r);\n  vstr = caml_alloc_string(length);\n  memcpy(Bytes_val(vstr), data, length);\n  /* closure points to CAMLparam-rooted vcallback in the caller frame;\n     re-read after allocation so we get the post-GC value. */\n  r = caml_callback_exn(*(value *)closure, vstr);\n  if (Is_exception_result(r))\n    CAMLreturnT(cairo_status_t, CAIRO_STATUS_WRITE_ERROR);\n  CAMLreturnT(cairo_status_t, CAIRO_STATUS_SUCCESS);\n}\n\nCAMLprim value caml_ucairo_png_write_to_stream(value vsurf, value vcallback)\n{\n  CAMLparam2(vsurf, vcallback);\n  cairo_surface_t *s = check_surface(vsurf,\n    \"Ucairo.Png.write_to_stream: destroyed surface\");\n  /* vcallback is rooted by CAMLparam2; we pass its address as the closure\n     so the callback can retrieve it. This is safe because\n     cairo_surface_write_to_png_stream calls png_write_func synchronously. 
*/\n  cairo_status_t st = cairo_surface_write_to_png_stream(\n    s, png_write_func, &vcallback);\n  if (st != CAIRO_STATUS_SUCCESS)\n    caml_failwith(cairo_status_to_string(st));\n  CAMLreturn(Val_unit);\n}\n"
  },
  {
    "path": "packages/hugin/usdl/discover/discover.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nmodule C = Configurator.V1\n\nlet () =\n  C.main ~name:\"sdl2-config\" (fun c ->\n      let pkg_config =\n        match C.Pkg_config.get c with\n        | None -> C.die \"pkg-config not found\"\n        | Some pc -> pc\n      in\n      let sdl2 =\n        match C.Pkg_config.query pkg_config ~package:\"sdl2\" with\n        | None -> C.die \"SDL2 not found via pkg-config\"\n        | Some info -> info\n      in\n      let cflags = \"-fPIC\" :: sdl2.C.Pkg_config.cflags in\n      let clibs = sdl2.C.Pkg_config.libs in\n      C.Flags.write_sexp \"cflags.sexp\" cflags;\n      C.Flags.write_sexp \"clibs.sexp\" clibs)\n"
  },
  {
    "path": "packages/hugin/usdl/discover/dune",
    "content": "(executable\n (name discover)\n (modules discover)\n (libraries dune-configurator))\n\n(rule\n (deps discover.exe)\n (targets cflags.sexp clibs.sexp)\n (action\n  (run ./discover.exe)))\n"
  },
  {
    "path": "packages/hugin/usdl/dune",
    "content": "(library\n (name usdl)\n (public_name hugin.usdl)\n (foreign_stubs\n  (language c)\n  (names usdl_stubs)\n  (flags\n   (:include discover/cflags.sexp)))\n (c_library_flags\n  (:include discover/clibs.sexp)))\n"
  },
  {
    "path": "packages/hugin/usdl/usdl.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Init / quit *)\n\nexternal init : unit -> unit = \"caml_usdl_init\"\nexternal quit : unit -> unit = \"caml_usdl_quit\"\n\n(* Handle types *)\n\ntype renderer\ntype surface\ntype texture\n\n(* Window *)\n\nmodule Window = struct\n  type t\n\n  external create : title:string -> w:int -> h:int -> t\n    = \"caml_usdl_window_create\"\n\n  external destroy : t -> unit = \"caml_usdl_window_destroy\"\nend\n\n(* Renderer *)\n\nmodule Renderer = struct\n  type t = renderer\n\n  external create : Window.t -> t = \"caml_usdl_renderer_create\"\n  external output_size : t -> int * int = \"caml_usdl_renderer_output_size\"\n  external clear : t -> unit = \"caml_usdl_renderer_clear\"\n  external copy : t -> texture -> unit = \"caml_usdl_renderer_copy\"\n  external present : t -> unit = \"caml_usdl_renderer_present\"\n  external destroy : t -> unit = \"caml_usdl_renderer_destroy\"\nend\n\n(* Surface *)\n\nmodule Surface = struct\n  type t = surface\n\n  external create_argb8888 : w:int -> h:int -> t\n    = \"caml_usdl_surface_create_argb8888\"\n\n  external pitch : t -> int = \"caml_usdl_surface_pitch\"\n\n  external pixels :\n    t -> (int, Bigarray.int8_unsigned_elt, Bigarray.c_layout) Bigarray.Array1.t\n    = \"caml_usdl_surface_pixels\"\n\n  external destroy : t -> unit = \"caml_usdl_surface_destroy\"\nend\n\n(* Texture *)\n\nmodule Texture = struct\n  type t = texture\n\n  external of_surface : Renderer.t -> Surface.t -> t\n    = \"caml_usdl_texture_of_surface\"\n\n  external destroy : t -> unit = \"caml_usdl_texture_destroy\"\nend\n\n(* Event *)\n\nmodule Event = struct\n  type t\n  type event_type = [ `Quit | `Window_event | `Key_down | `Unknown of int ]\n\n  type window_event =\n    [ `Resized | 
`Size_changed | `Exposed | `Close | `Unknown of int ]\n\n  external create : unit -> t = \"caml_usdl_event_create\"\n  external wait : t -> bool = \"caml_usdl_event_wait\"\n  external raw_type : t -> int = \"caml_usdl_event_type\" [@@noalloc]\n  external raw_window_id : t -> int = \"caml_usdl_event_window_id\" [@@noalloc]\n  external keycode : t -> int = \"caml_usdl_event_keycode\" [@@noalloc]\n\n  let typ t =\n    match raw_type t with\n    | 0x100 -> `Quit\n    | 0x200 -> `Window_event\n    | 0x300 -> `Key_down\n    | n -> `Unknown n\n\n  let window_event_id t =\n    match raw_window_id t with\n    | 5 -> `Resized\n    | 6 -> `Size_changed\n    | 3 -> `Exposed\n    | 14 -> `Close\n    | n -> `Unknown n\nend\n\n(* Keycodes *)\n\nmodule Keycode = struct\n  let escape = 27\n  let q = Char.code 'q'\nend\n"
  },
  {
    "path": "packages/hugin/usdl/usdl.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Minimal SDL2 bindings.\n\n    Thin bindings covering window creation, renderer management, surface pixel\n    access, and event polling. Designed for the Cairo-SDL integration layer; not\n    a general-purpose SDL binding.\n\n    All functions raise [Failure] on SDL errors. *)\n\n(** {1:init Initialization} *)\n\nval init : unit -> unit\n(** [init ()] initializes SDL video and sets the render scale quality hint.\n\n    Raises [Failure] if SDL initialization fails. *)\n\nval quit : unit -> unit\n(** [quit ()] shuts down SDL. *)\n\n(** {1:handles Handle types} *)\n\ntype renderer\n(** The type for SDL renderers. *)\n\ntype surface\n(** The type for SDL surfaces. *)\n\ntype texture\n(** The type for SDL textures. *)\n\n(** {1:window Window} *)\n\nmodule Window : sig\n  type t\n  (** The type for SDL windows. *)\n\n  val create : title:string -> w:int -> h:int -> t\n  (** [create ~title ~w ~h] is a resizable, high-DPI-aware window.\n\n      Raises [Failure] if window creation fails. *)\n\n  val destroy : t -> unit\n  (** [destroy t] frees the window. Safe to call more than once. *)\nend\n\n(** {1:renderer Renderer} *)\n\nmodule Renderer : sig\n  type t = renderer\n  (** The type for SDL renderers. *)\n\n  val create : Window.t -> t\n  (** [create win] is a hardware-accelerated, vsync-enabled renderer for [win].\n\n      Raises [Failure] if renderer creation fails. *)\n\n  val output_size : t -> int * int\n  (** [output_size t] is [(w, h)] in pixels (accounting for high-DPI scaling).\n\n      Raises [Failure] if the query fails. *)\n\n  val clear : t -> unit\n  (** [clear t] clears the render target. 
*)\n\n  val copy : t -> texture -> unit\n  (** [copy t tex] copies [tex] to the entire render target. *)\n\n  val present : t -> unit\n  (** [present t] presents the composed backbuffer. *)\n\n  val destroy : t -> unit\n  (** [destroy t] frees the renderer. Safe to call more than once. *)\nend\n\n(** {1:surface Surface} *)\n\nmodule Surface : sig\n  type t = surface\n  (** The type for SDL surfaces. *)\n\n  val create_argb8888 : w:int -> h:int -> t\n  (** [create_argb8888 ~w ~h] is a 32-bit ARGB8888 surface.\n\n      Raises [Failure] if allocation fails. *)\n\n  val pitch : t -> int\n  (** [pitch t] is the byte length of one row. *)\n\n  val pixels :\n    t -> (int, Bigarray.int8_unsigned_elt, Bigarray.c_layout) Bigarray.Array1.t\n  (** [pixels t] is the raw pixel buffer. The bigarray is a view onto\n      SDL-managed memory; it must not outlive the surface. *)\n\n  val destroy : t -> unit\n  (** [destroy t] frees the surface. Safe to call more than once. *)\nend\n\n(** {1:texture Texture} *)\n\nmodule Texture : sig\n  type t = texture\n  (** The type for SDL textures. *)\n\n  val of_surface : renderer -> surface -> t\n  (** [of_surface ren surf] creates a texture from [surf].\n\n      Raises [Failure] if texture creation fails. *)\n\n  val destroy : t -> unit\n  (** [destroy t] frees the texture. Safe to call more than once. *)\nend\n\n(** {1:event Events} *)\n\nmodule Event : sig\n  type t\n  (** The type for event storage. *)\n\n  type event_type = [ `Quit | `Window_event | `Key_down | `Unknown of int ]\n  (** The type for event kinds. *)\n\n  type window_event =\n    [ `Resized | `Size_changed | `Exposed | `Close | `Unknown of int ]\n  (** The type for window event sub-kinds. *)\n\n  val create : unit -> t\n  (** [create ()] allocates event storage. *)\n\n  val wait : t -> bool\n  (** [wait t] blocks until an event arrives. Returns [true] if an event was\n      received, [false] on error. Releases the runtime lock while blocking. 
*)\n\n  val typ : t -> event_type\n  (** [typ t] is the kind of the last received event. *)\n\n  val window_event_id : t -> window_event\n  (** [window_event_id t] is the window event sub-kind. Only meaningful when\n      [typ t] is [`Window_event]. *)\n\n  val keycode : t -> int\n  (** [keycode t] is the key code. Only meaningful when [typ t] is [`Key_down].\n  *)\nend\n\n(** {1:keycode Key codes} *)\n\nmodule Keycode : sig\n  val escape : int\n  val q : int\nend\n"
  },
  {
    "path": "packages/hugin/usdl/usdl_stubs.c",
    "content": "/*---------------------------------------------------------------------------\n   Copyright (c) 2026 The Raven authors. All rights reserved.\n   SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*/\n\n#define CAML_NAME_SPACE\n#include <caml/alloc.h>\n#include <caml/bigarray.h>\n#include <caml/custom.h>\n#include <caml/fail.h>\n#include <caml/memory.h>\n#include <caml/mlvalues.h>\n#include <caml/threads.h>\n#include <string.h>\n\n#ifdef __APPLE__\n#include <SDL2/SDL.h>\n#else\n#include <SDL.h>\n#endif\n\n/* Window */\n\n#define Window_val(v) (*(SDL_Window **)Data_custom_val(v))\n\nstatic void finalize_window(value v)\n{\n  SDL_Window *w = Window_val(v);\n  if (w != NULL) { SDL_DestroyWindow(w); Window_val(v) = NULL; }\n}\n\nstatic struct custom_operations window_ops = {\n  .identifier  = \"usdl.window\",\n  .finalize    = finalize_window,\n  .compare     = custom_compare_default,\n  .hash        = custom_hash_default,\n  .serialize   = custom_serialize_default,\n  .deserialize = custom_deserialize_default,\n  .compare_ext = custom_compare_ext_default,\n};\n\nstatic value alloc_window(SDL_Window *w)\n{\n  value v = caml_alloc_custom(&window_ops, sizeof(SDL_Window *), 0, 1);\n  Window_val(v) = w;\n  return v;\n}\n\n/* Renderer */\n\n#define Renderer_val(v) (*(SDL_Renderer **)Data_custom_val(v))\n\nstatic void finalize_renderer(value v)\n{\n  SDL_Renderer *r = Renderer_val(v);\n  if (r != NULL) { SDL_DestroyRenderer(r); Renderer_val(v) = NULL; }\n}\n\nstatic struct custom_operations renderer_ops = {\n  .identifier  = \"usdl.renderer\",\n  .finalize    = finalize_renderer,\n  .compare     = custom_compare_default,\n  .hash        = custom_hash_default,\n  .serialize   = custom_serialize_default,\n  .deserialize = custom_deserialize_default,\n  .compare_ext = custom_compare_ext_default,\n};\n\nstatic value alloc_renderer(SDL_Renderer *r)\n{\n  value v = caml_alloc_custom(&renderer_ops, 
sizeof(SDL_Renderer *), 0, 1);\n  Renderer_val(v) = r;\n  return v;\n}\n\n/* Surface */\n\n#define Surface_val(v) (*(SDL_Surface **)Data_custom_val(v))\n\nstatic void finalize_surface(value v)\n{\n  SDL_Surface *s = Surface_val(v);\n  if (s != NULL) { SDL_FreeSurface(s); Surface_val(v) = NULL; }\n}\n\nstatic struct custom_operations surface_ops = {\n  .identifier  = \"usdl.surface\",\n  .finalize    = finalize_surface,\n  .compare     = custom_compare_default,\n  .hash        = custom_hash_default,\n  .serialize   = custom_serialize_default,\n  .deserialize = custom_deserialize_default,\n  .compare_ext = custom_compare_ext_default,\n};\n\nstatic value alloc_surface(SDL_Surface *s)\n{\n  value v = caml_alloc_custom(&surface_ops, sizeof(SDL_Surface *), 0, 1);\n  Surface_val(v) = s;\n  return v;\n}\n\n/* Texture */\n\n#define Texture_val(v) (*(SDL_Texture **)Data_custom_val(v))\n\nstatic void finalize_texture(value v)\n{\n  SDL_Texture *t = Texture_val(v);\n  if (t != NULL) { SDL_DestroyTexture(t); Texture_val(v) = NULL; }\n}\n\nstatic struct custom_operations texture_ops = {\n  .identifier  = \"usdl.texture\",\n  .finalize    = finalize_texture,\n  .compare     = custom_compare_default,\n  .hash        = custom_hash_default,\n  .serialize   = custom_serialize_default,\n  .deserialize = custom_deserialize_default,\n  .compare_ext = custom_compare_ext_default,\n};\n\nstatic value alloc_texture(SDL_Texture *t)\n{\n  value v = caml_alloc_custom(&texture_ops, sizeof(SDL_Texture *), 0, 1);\n  Texture_val(v) = t;\n  return v;\n}\n\n/* Event — stored inline in custom block, no heap allocation */\n\n#define Event_val(v) ((SDL_Event *)Data_custom_val(v))\n\nstatic struct custom_operations event_ops = {\n  .identifier  = \"usdl.event\",\n  .finalize    = custom_finalize_default,\n  .compare     = custom_compare_default,\n  .hash        = custom_hash_default,\n  .serialize   = custom_serialize_default,\n  .deserialize = custom_deserialize_default,\n  .compare_ext = 
custom_compare_ext_default,\n};\n\n/* Init / quit */\n\nCAMLprim value caml_usdl_init(value vunit)\n{\n  CAMLparam1(vunit);\n  if (SDL_Init(SDL_INIT_VIDEO) < 0)\n    caml_failwith(SDL_GetError());\n  SDL_SetHint(SDL_HINT_RENDER_SCALE_QUALITY, \"1\");\n  CAMLreturn(Val_unit);\n}\n\nCAMLprim value caml_usdl_quit(value vunit)\n{\n  CAMLparam1(vunit);\n  SDL_Quit();\n  CAMLreturn(Val_unit);\n}\n\n/* Window */\n\nCAMLprim value caml_usdl_window_create(value vtitle, value vw, value vh)\n{\n  CAMLparam3(vtitle, vw, vh);\n  Uint32 flags = SDL_WINDOW_SHOWN | SDL_WINDOW_RESIZABLE |\n                 SDL_WINDOW_ALLOW_HIGHDPI;\n  SDL_Window *win = SDL_CreateWindow(\n    String_val(vtitle),\n    SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED,\n    Int_val(vw), Int_val(vh), flags);\n  if (win == NULL) caml_failwith(SDL_GetError());\n  CAMLreturn(alloc_window(win));\n}\n\nCAMLprim value caml_usdl_window_destroy(value vwin)\n{\n  CAMLparam1(vwin);\n  SDL_Window *w = Window_val(vwin);\n  if (w != NULL) { SDL_DestroyWindow(w); Window_val(vwin) = NULL; }\n  CAMLreturn(Val_unit);\n}\n\n/* Renderer */\n\nCAMLprim value caml_usdl_renderer_create(value vwin)\n{\n  CAMLparam1(vwin);\n  SDL_Window *win = Window_val(vwin);\n  if (win == NULL) caml_invalid_argument(\"Usdl.Renderer.create: destroyed window\");\n  Uint32 flags = SDL_RENDERER_ACCELERATED | SDL_RENDERER_PRESENTVSYNC;\n  SDL_Renderer *ren = SDL_CreateRenderer(win, -1, flags);\n  if (ren == NULL) caml_failwith(SDL_GetError());\n  CAMLreturn(alloc_renderer(ren));\n}\n\nCAMLprim value caml_usdl_renderer_output_size(value vren)\n{\n  CAMLparam1(vren);\n  CAMLlocal1(result);\n  SDL_Renderer *ren = Renderer_val(vren);\n  if (ren == NULL) caml_invalid_argument(\"Usdl.Renderer.output_size: destroyed renderer\");\n  int w, h;\n  if (SDL_GetRendererOutputSize(ren, &w, &h) < 0)\n    caml_failwith(SDL_GetError());\n  result = caml_alloc_tuple(2);\n  Store_field(result, 0, Val_int(w));\n  Store_field(result, 1, Val_int(h));\n  
CAMLreturn(result);\n}\n\nCAMLprim value caml_usdl_renderer_clear(value vren)\n{\n  CAMLparam1(vren);\n  SDL_Renderer *ren = Renderer_val(vren);\n  if (ren == NULL) caml_invalid_argument(\"Usdl.Renderer.clear: destroyed renderer\");\n  if (SDL_RenderClear(ren) < 0) caml_failwith(SDL_GetError());\n  CAMLreturn(Val_unit);\n}\n\nCAMLprim value caml_usdl_renderer_copy(value vren, value vtex)\n{\n  CAMLparam2(vren, vtex);\n  SDL_Renderer *ren = Renderer_val(vren);\n  SDL_Texture *tex = Texture_val(vtex);\n  if (ren == NULL || tex == NULL)\n    caml_invalid_argument(\"Usdl.Renderer.copy: destroyed handle\");\n  if (SDL_RenderCopy(ren, tex, NULL, NULL) < 0)\n    caml_failwith(SDL_GetError());\n  CAMLreturn(Val_unit);\n}\n\nCAMLprim value caml_usdl_renderer_present(value vren)\n{\n  CAMLparam1(vren);\n  SDL_Renderer *ren = Renderer_val(vren);\n  if (ren == NULL) caml_invalid_argument(\"Usdl.Renderer.present: destroyed renderer\");\n  SDL_RenderPresent(ren);\n  CAMLreturn(Val_unit);\n}\n\nCAMLprim value caml_usdl_renderer_destroy(value vren)\n{\n  CAMLparam1(vren);\n  SDL_Renderer *r = Renderer_val(vren);\n  if (r != NULL) { SDL_DestroyRenderer(r); Renderer_val(vren) = NULL; }\n  CAMLreturn(Val_unit);\n}\n\n/* Surface */\n\nCAMLprim value caml_usdl_surface_create_argb8888(value vw, value vh)\n{\n  CAMLparam2(vw, vh);\n  SDL_Surface *s = SDL_CreateRGBSurfaceWithFormat(\n    0, Int_val(vw), Int_val(vh), 32, SDL_PIXELFORMAT_ARGB8888);\n  if (s == NULL) caml_failwith(SDL_GetError());\n  CAMLreturn(alloc_surface(s));\n}\n\nCAMLprim value caml_usdl_surface_destroy(value vsurf)\n{\n  CAMLparam1(vsurf);\n  SDL_Surface *s = Surface_val(vsurf);\n  if (s != NULL) { SDL_FreeSurface(s); Surface_val(vsurf) = NULL; }\n  CAMLreturn(Val_unit);\n}\n\nCAMLprim value caml_usdl_surface_pitch(value vsurf)\n{\n  CAMLparam1(vsurf);\n  SDL_Surface *s = Surface_val(vsurf);\n  if (s == NULL) caml_invalid_argument(\"Usdl.Surface.pitch: destroyed surface\");\n  
CAMLreturn(Val_int(s->pitch));\n}\n\nCAMLprim value caml_usdl_surface_pixels(value vsurf)\n{\n  CAMLparam1(vsurf);\n  SDL_Surface *s = Surface_val(vsurf);\n  if (s == NULL) caml_invalid_argument(\"Usdl.Surface.pixels: destroyed surface\");\n  if (s->pixels == NULL) caml_failwith(\"Usdl.Surface.pixels: NULL pixels\");\n  CAMLreturn(caml_ba_alloc_dims(\n    CAML_BA_UINT8 | CAML_BA_C_LAYOUT | CAML_BA_EXTERNAL,\n    1, s->pixels, (intnat)s->h * s->pitch));\n}\n\n/* Texture */\n\nCAMLprim value caml_usdl_texture_of_surface(value vren, value vsurf)\n{\n  CAMLparam2(vren, vsurf);\n  SDL_Renderer *ren = Renderer_val(vren);\n  SDL_Surface *s = Surface_val(vsurf);\n  if (ren == NULL || s == NULL)\n    caml_invalid_argument(\"Usdl.Texture.of_surface: destroyed handle\");\n  SDL_Texture *tex = SDL_CreateTextureFromSurface(ren, s);\n  if (tex == NULL) caml_failwith(SDL_GetError());\n  CAMLreturn(alloc_texture(tex));\n}\n\nCAMLprim value caml_usdl_texture_destroy(value vtex)\n{\n  CAMLparam1(vtex);\n  SDL_Texture *t = Texture_val(vtex);\n  if (t != NULL) { SDL_DestroyTexture(t); Texture_val(vtex) = NULL; }\n  CAMLreturn(Val_unit);\n}\n\n/* Event */\n\nCAMLprim value caml_usdl_event_create(value vunit)\n{\n  CAMLparam1(vunit);\n  value v = caml_alloc_custom(&event_ops, sizeof(SDL_Event), 0, 1);\n  memset(Event_val(v), 0, sizeof(SDL_Event));\n  CAMLreturn(v);\n}\n\nCAMLprim value caml_usdl_event_wait(value vev)\n{\n  CAMLparam1(vev);\n  SDL_Event ev;\n  caml_release_runtime_system();\n  int ret = SDL_WaitEvent(&ev);\n  caml_acquire_runtime_system();\n  if (ret == 1) memcpy(Event_val(vev), &ev, sizeof(SDL_Event));\n  CAMLreturn(Val_bool(ret == 1));\n}\n\nCAMLprim value caml_usdl_event_type(value vev)\n{\n  return Val_int(Event_val(vev)->type);\n}\n\nCAMLprim value caml_usdl_event_window_id(value vev)\n{\n  SDL_Event *ev = Event_val(vev);\n  if (ev->type == SDL_WINDOWEVENT)\n    return Val_int(ev->window.event);\n  return Val_int(-1);\n}\n\nCAMLprim value 
caml_usdl_event_keycode(value vev)\n{\n  SDL_Event *ev = Event_val(vev);\n  if (ev->type == SDL_KEYDOWN || ev->type == SDL_KEYUP)\n    return Val_int(ev->key.keysym.sym);\n  return Val_int(-1);\n}\n"
  },
  {
    "path": "packages/kaun/README.md",
    "content": "# Kaun\n\nNeural networks and training utilities for OCaml, built on [Rune](../rune/)\n\nKaun provides composable layers, optimizers with learning-rate schedules,\nautomatic differentiation over parameter trees, data pipelines, and a\nhigh-level training loop. It also supports loading pretrained models from\nHuggingFace.\n\n## Quick Start\n\nTrain a small network on XOR:\n\n```ocaml\nopen Kaun\n\nlet () =\n  let rngs = Rune.Rng.key 42 in\n  let dtype = Rune.float32 in\n\n  let x = Rune.create dtype [| 4; 2 |] [| 0.; 0.; 0.; 1.; 1.; 0.; 1.; 1. |] in\n  let y = Rune.create dtype [| 4; 1 |] [| 0.; 1.; 1.; 0. |] in\n\n  let model =\n    Layer.sequential\n      [\n        Layer.linear ~in_features:2 ~out_features:4 ();\n        Layer.tanh ();\n        Layer.linear ~in_features:4 ~out_features:1 ();\n      ]\n  in\n  let trainer =\n    Train.make ~model\n      ~optimizer:(Optim.adam ~lr:(Optim.Schedule.constant 0.01) ())\n  in\n  let st = Train.init trainer ~rngs ~dtype in\n  let st =\n    Train.fit trainer st ~rngs\n      ~report:(fun ~step ~loss _st ->\n        if step mod 200 = 0 then Printf.printf \"step %4d  loss %.6f\\n\" step loss)\n      (Data.repeat 1000 (x, fun pred -> Loss.binary_cross_entropy pred y))\n  in\n  let pred = Train.predict trainer st x |> Rune.sigmoid in\n  for i = 0 to 3 do\n    Printf.printf \"  [%.0f, %.0f] -> %.3f\\n\"\n      (Rune.item [ i; 0 ] x) (Rune.item [ i; 1 ] x) (Rune.item [ i; 0 ] pred)\n  done\n```\n\n## Features\n\n- **Layers**: linear, conv1d, conv2d, layer norm, RMS norm, batch norm, embedding, dropout, multi-head attention with RoPE, and all standard activations (relu, gelu, tanh, sigmoid, etc.)\n- **Composition**: `Layer.sequential` and `Layer.compose` for building models\n- **Optimizers**: SGD, Adam, AdamW, RMSprop, Adagrad with gradient clipping\n- **Schedules**: constant, cosine decay, warmup cosine, exponential decay, warmup linear\n- **Training**: `Train.fit` iterates over `Data.t` pipelines with early 
stopping and per-step reporting; `Train.step` for manual control\n- **Data pipelines**: lazy, composable iterators with shuffle, batching, and `Data.prepare` for the common (x, y) tensor pair workflow\n- **Metrics**: running trackers, dataset evaluation, accuracy, precision, recall, F1\n- **Losses**: cross-entropy, sparse cross-entropy, binary cross-entropy, MSE, MAE\n- **Parameter trees**: `Ptree.t` for heterogeneous tensor storage, mapping, and serialization\n- **Checkpointing**: save/load to SafeTensors format\n- **HuggingFace**: download pretrained weights and configs (`kaun.hf`)\n- **Datasets**: MNIST and FashionMNIST loaders (`kaun.datasets`)\n\n## Libraries\n\n| Library | opam package | Description |\n|---------|-------------|-------------|\n| `kaun` | `kaun` | Core: layers, optimizers, training, data, metrics |\n| `kaun_hf` | `kaun.hf` | HuggingFace Hub integration |\n| `kaun_datasets` | `kaun.datasets` | Dataset loaders (MNIST, FashionMNIST) |\n\n## Examples\n\n- **01-xor** -- Binary classification on XOR with a 2-layer network\n- **02-mnist** -- CNN with conv2d, pooling, and multi-epoch training on MNIST\n- **03-bert** -- Fine-tune pretrained BERT for sentiment classification\n- **04-gpt2** -- Autoregressive text generation with pretrained GPT-2\n\n## Contributing\n\nSee the [Raven monorepo README](../README.md) for guidelines.\n\n## License\n\nISC License. See [LICENSE](../LICENSE) for details.\n"
  },
  {
    "path": "packages/kaun/bench/bench_kaun.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nmodule Fn = Kaun.Fn\nmodule Layer = Kaun.Layer\nmodule Loss = Kaun.Loss\nmodule Attention = Kaun.Attention\n\nlet batch = 32\nlet seq_len = 128\nlet dim = 256\nlet num_heads = 8\nlet num_classes = 10\n\nlet normalization_benchmarks () =\n  let x = Nx.rand Nx.float32 [| batch; seq_len; dim |] in\n  let gamma = Nx.ones Nx.float32 [| dim |] in\n  let beta = Nx.zeros Nx.float32 [| dim |] in\n  let x_4d = Nx.rand Nx.float32 [| batch; dim; 8; 8 |] in\n  let bn4_scale = Nx.ones Nx.float32 [| dim |] in\n  let bn4_bias = Nx.zeros Nx.float32 [| dim |] in\n  let x_2d = Nx.rand Nx.float32 [| batch; dim |] in\n  let bn2_scale = Nx.ones Nx.float32 [| dim |] in\n  let bn2_bias = Nx.zeros Nx.float32 [| dim |] in\n  [\n    Thumper.bench \"layer_norm [32;128;256]\" (fun () ->\n        Fn.layer_norm ~gamma ~beta x);\n    Thumper.bench \"rms_norm [32;128;256]\" (fun () -> Fn.rms_norm x);\n    Thumper.bench \"batch_norm [32;256;8;8]\" (fun () ->\n        Fn.batch_norm ~scale:bn4_scale ~bias:bn4_bias x_4d);\n    Thumper.bench \"batch_norm [32;256]\" (fun () ->\n        Fn.batch_norm ~scale:bn2_scale ~bias:bn2_bias x_2d);\n  ]\n\nlet attention_benchmarks () =\n  let head_dim = dim / num_heads in\n  let q = Nx.rand Nx.float32 [| batch; num_heads; seq_len; head_dim |] in\n  let k = Nx.rand Nx.float32 [| batch; num_heads; seq_len; head_dim |] in\n  let v = Nx.rand Nx.float32 [| batch; num_heads; seq_len; head_dim |] in\n  let x = Nx.rand Nx.float32 [| batch; seq_len; head_dim |] in\n  [\n    Thumper.bench \"dot_product_attention [32;8;128;32]\" (fun () ->\n        Fn.dot_product_attention q k v);\n    Thumper.bench \"dot_product_attention causal [32;8;128;32]\" (fun () ->\n        Fn.dot_product_attention ~is_causal:true q 
k v);\n    Thumper.bench \"rope [32;128;32]\" (fun () -> Attention.rope x);\n  ]\n\nlet loss_benchmarks () =\n  let logits = Nx.rand Nx.float32 [| batch; num_classes |] in\n  let labels_onehot = Nx.zeros Nx.float32 [| batch; num_classes |] in\n  let targets = Nx.rand Nx.float32 [| batch; num_classes |] in\n  let predictions = Nx.rand Nx.float32 [| batch; num_classes |] in\n  let binary_logits = Nx.rand Nx.float32 [| batch |] in\n  let binary_labels = Nx.zeros Nx.float32 [| batch |] in\n  [\n    Thumper.bench \"cross_entropy [32;10]\" (fun () ->\n        Loss.cross_entropy logits labels_onehot);\n    Thumper.bench \"binary_cross_entropy [32]\" (fun () ->\n        Loss.binary_cross_entropy binary_logits binary_labels);\n    Thumper.bench \"mse [32;10]\" (fun () -> Loss.mse predictions targets);\n    Thumper.bench \"mae [32;10]\" (fun () -> Loss.mae predictions targets);\n  ]\n\nlet conv_benchmarks () =\n  let x1d = Nx.rand Nx.float32 [| batch; 64; 128 |] in\n  let w1d = Nx.rand Nx.float32 [| 128; 64; 3 |] in\n  let x2d = Nx.rand Nx.float32 [| batch; 64; 32; 32 |] in\n  let w2d = Nx.rand Nx.float32 [| 128; 64; 3; 3 |] in\n  [\n    Thumper.bench \"conv1d [32;64;128] k=3\" (fun () -> Fn.conv1d x1d w1d);\n    Thumper.bench \"conv1d same [32;64;128] k=3\" (fun () ->\n        Fn.conv1d ~padding:`Same x1d w1d);\n    Thumper.bench \"conv2d [32;64;32;32] k=3x3\" (fun () -> Fn.conv2d x2d w2d);\n    Thumper.bench \"conv2d same [32;64;32;32] k=3x3\" (fun () ->\n        Fn.conv2d ~padding:`Same x2d w2d);\n  ]\n\nlet pooling_benchmarks () =\n  let x2d = Nx.rand Nx.float32 [| batch; 64; 32; 32 |] in\n  [\n    Thumper.bench \"max_pool2d [32;64;32;32] k=2x2\" (fun () ->\n        Fn.max_pool2d ~kernel_size:(2, 2) x2d);\n    Thumper.bench \"avg_pool2d [32;64;32;32] k=2x2\" (fun () ->\n        Fn.avg_pool2d ~kernel_size:(2, 2) x2d);\n  ]\n\nlet layer_benchmarks () =\n  Nx.Rng.run ~seed:42 @@ fun () ->\n  let linear_layer = Layer.linear ~in_features:dim ~out_features:dim () in\n  let 
linear_vars = Layer.init linear_layer ~dtype:Nx.float32 in\n  let ln_layer = Layer.layer_norm ~dim () in\n  let ln_vars = Layer.init ln_layer ~dtype:Nx.float32 in\n  let mha_layer = Attention.multi_head_attention ~embed_dim:dim ~num_heads () in\n  let mha_vars = Layer.init mha_layer ~dtype:Nx.float32 in\n  let x = Nx.rand Nx.float32 [| batch; seq_len; dim |] in\n  [\n    Thumper.bench \"Layer.linear [32;128;256]->[32;128;256]\" (fun () ->\n        Layer.apply linear_layer linear_vars ~training:false x);\n    Thumper.bench \"Layer.layer_norm [32;128;256]\" (fun () ->\n        Layer.apply ln_layer ln_vars ~training:false x);\n    Thumper.bench \"Layer.multi_head_attention [32;128;256] h=8\" (fun () ->\n        Layer.apply mha_layer mha_vars ~training:false x);\n  ]\n\nlet embedding_benchmarks () =\n  Nx.Rng.run ~seed:42 @@ fun () ->\n  let vocab_size = 32000 in\n  let embed_dim = dim in\n  let table = Nx.rand Nx.float32 [| vocab_size; embed_dim |] in\n  let indices =\n    Nx.create Nx.int32 [| batch; seq_len |]\n      (Array.init (batch * seq_len) (fun i -> Int32.of_int (i mod vocab_size)))\n  in\n  [\n    Thumper.bench \"embedding [32;128] vocab=32000 dim=256\" (fun () ->\n        Fn.embedding ~embedding:table indices);\n  ]\n\nlet build_benchmarks () =\n  [\n    Thumper.group \"Normalization\" (normalization_benchmarks ());\n    Thumper.group \"Attention\" (attention_benchmarks ());\n    Thumper.group \"Loss\" (loss_benchmarks ());\n    Thumper.group \"Convolution\" (conv_benchmarks ());\n    Thumper.group \"Pooling\" (pooling_benchmarks ());\n    Thumper.group \"Layer\" (layer_benchmarks ());\n    Thumper.group \"Embedding\" (embedding_benchmarks ());\n  ]\n\nlet () =\n  let benchmarks = build_benchmarks () in\n  Thumper.run \"kaun\" benchmarks\n"
  },
  {
    "path": "packages/kaun/bench/dune",
    "content": "(executable\n (name bench_kaun)\n (libraries nx rune kaun thumper))\n\n(rule\n (alias runtest)\n (action\n  (progn\n   (run %{exe:bench_kaun.exe} -q)\n   (diff? kaun.thumper kaun.thumper.corrected))))\n"
  },
  {
    "path": "packages/kaun/bench/kaun.thumper",
    "content": "# thumper baseline\n# version: 1\n# suite_name: kaun\n# host: 1480401c3b76ed18\n# cpu: Apple M1 Max\n# ocaml: 5.4.1\n# git: 31747323\n# dirty: true\n# command: /Users/tmattio/Workspace/raven/_build/default/packages/kaun/bench/bench_kaun.exe --bless --quick\n\nattention/dot_product_attention__32_8_128_32_\talloc_words\t2.137000e+03\t2.137000e+03\t2.137000e+03\t0.000000e+00\t5\t1\nattention/dot_product_attention__32_8_128_32_\tcpu_time\t9.671033e-02\t9.639139e-02\t9.768167e-02\t6.670834e-03\t5\t0\nattention/dot_product_attention__32_8_128_32_\twall_time\t7.475045e-02\t7.419780e-02\t7.636849e-02\t1.451961e-02\t5\t0\nattention/dot_product_attention_causal__32_8_128_32_\talloc_words\t8.969000e+03\t8.969000e+03\t8.969000e+03\t0.000000e+00\t5\t1\nattention/dot_product_attention_causal__32_8_128_32_\tcpu_time\t1.370568e-01\t1.362397e-01\t1.373701e-01\t4.123616e-03\t5\t0\nattention/dot_product_attention_causal__32_8_128_32_\twall_time\t1.113716e-01\t1.109278e-01\t1.116209e-01\t3.111817e-03\t5\t0\nattention/rope__32_128_32_\talloc_words\t6.995000e+03\t6.995000e+03\t6.995000e+03\t0.000000e+00\t5\t0\nattention/rope__32_128_32_\tcpu_time\t4.598942e-03\t4.544050e-03\t4.656300e-03\t1.220391e-02\t5\t0\nattention/rope__32_128_32_\twall_time\t3.115630e-03\t3.091075e-03\t3.141427e-03\t8.080598e-03\t5\t0\nconvolution/conv1d__32_64_128__k_3\talloc_words\t5.000000e+02\t5.000000e+02\t5.000000e+02\t0.000000e+00\t5\t0\nconvolution/conv1d__32_64_128__k_3\tcpu_time\t1.620888e-03\t1.596007e-03\t1.637217e-03\t1.271213e-02\t5\t0\nconvolution/conv1d__32_64_128__k_3\twall_time\t1.626572e-03\t1.602353e-03\t1.644431e-03\t1.293450e-02\t5\t0\nconvolution/conv1d_same__32_64_128__k_3\talloc_words\t5.100000e+02\t5.100000e+02\t5.100000e+02\t0.000000e+00\t5\t0\nconvolution/conv1d_same__32_64_128__k_3\tcpu_time\t5.615602e-03\t5.517316e-03\t5.660000e-03\t1.270422e-02\t5\t2\nconvolution/conv1d_same__32_64_128__k_3\twall_time\t1.344058e-03\t1.329492e-03\t1.358392e-03\t1.075103e-02\t5\t0\nconvol
ution/conv2d__32_64_32_32__k_3x3\talloc_words\t6.070000e+02\t6.070000e+02\t6.070000e+02\t0.000000e+00\t5\t1\nconvolution/conv2d__32_64_32_32__k_3x3\tcpu_time\t2.908078e-02\t2.856329e-02\t2.982455e-02\t2.168548e-02\t5\t0\nconvolution/conv2d__32_64_32_32__k_3x3\twall_time\t3.174315e-02\t2.900269e-02\t3.589196e-02\t1.085158e-01\t5\t1\nconvolution/conv2d_same__32_64_32_32__k_3x3\talloc_words\t6.200000e+02\t6.200000e+02\t6.200000e+02\t0.000000e+00\t5\t1\nconvolution/conv2d_same__32_64_32_32__k_3x3\tcpu_time\t1.506193e-01\t1.496805e-01\t1.521535e-01\t8.209534e-03\t5\t0\nconvolution/conv2d_same__32_64_32_32__k_3x3\twall_time\t3.016738e-02\t2.915340e-02\t3.056394e-02\t2.337867e-02\t5\t1\nembedding/embedding__32_128__vocab_32000_dim_256\talloc_words\t9.950000e+02\t9.950000e+02\t9.950000e+02\t0.000000e+00\t5\t0\nembedding/embedding__32_128__vocab_32000_dim_256\tcpu_time\t7.132733e-03\t7.093132e-03\t7.245324e-03\t1.066859e-02\t5\t1\nembedding/embedding__32_128__vocab_32000_dim_256\twall_time\t6.695677e-03\t6.659470e-03\t6.761231e-03\t7.598951e-03\t5\t1\nlayer/layer_layer_norm__32_128_256_\talloc_words\t3.773000e+03\t3.773000e+03\t3.773000e+03\t0.000000e+00\t5\t1\nlayer/layer_layer_norm__32_128_256_\tcpu_time\t2.918713e-02\t2.908128e-02\t2.951400e-02\t7.412812e-03\t5\t1\nlayer/layer_layer_norm__32_128_256_\twall_time\t2.688614e-02\t2.676829e-02\t2.714231e-02\t6.955731e-03\t5\t2\nlayer/layer_linear__32_128_256_-__32_128_256_\talloc_words\t5.640000e+02\t5.640000e+02\t5.640000e+02\t0.000000e+00\t5\t0\nlayer/layer_linear__32_128_256_-__32_128_256_\tcpu_time\t7.458522e-03\t7.400407e-03\t7.510652e-03\t7.390491e-03\t5\t0\nlayer/layer_linear__32_128_256_-__32_128_256_\twall_time\t6.948420e-03\t6.920065e-03\t6.969745e-03\t3.574904e-03\t5\t0\nlayer/layer_multi_head_attention__32_128_256__h_8\talloc_words\t3.827000e+03\t3.827000e+03\t3.827000e+03\t0.000000e+00\t5\t1\nlayer/layer_multi_head_attention__32_128_256__h_8\tcpu_time\t1.053272e-01\t1.052223e-01\t1.058294e-01\t2.882088e-03\t5\t0\n
layer/layer_multi_head_attention__32_128_256__h_8\twall_time\t8.404089e-02\t8.353135e-02\t8.467066e-02\t6.778344e-03\t5\t1\nloss/binary_cross_entropy__32_\talloc_words\t5.785000e+03\t5.785000e+03\t5.785000e+03\t0.000000e+00\t5\t0\nloss/binary_cross_entropy__32_\tcpu_time\t2.444898e-05\t2.428635e-05\t2.461726e-05\t6.767379e-03\t5\t0\nloss/binary_cross_entropy__32_\twall_time\t2.458928e-05\t2.440828e-05\t2.479800e-05\t7.924565e-03\t5\t0\nloss/cross_entropy__32_10_\talloc_words\t1.968000e+03\t1.968000e+03\t1.968000e+03\t0.000000e+00\t5\t0\nloss/cross_entropy__32_10_\tcpu_time\t1.317014e-05\t1.291714e-05\t1.357537e-05\t2.498951e-02\t5\t1\nloss/cross_entropy__32_10_\twall_time\t1.346084e-05\t1.302917e-05\t1.403663e-05\t3.742195e-02\t5\t1\nloss/mae__32_10_\talloc_words\t6.390000e+02\t6.390000e+02\t6.390000e+02\t0.000000e+00\t6\t0\nloss/mae__32_10_\tcpu_time\t3.343846e-06\t3.255477e-06\t3.435480e-06\t2.691551e-02\t6\t0\nloss/mae__32_10_\twall_time\t3.398029e-06\t3.290374e-06\t3.521626e-06\t3.402740e-02\t6\t0\nloss/mse__32_10_\talloc_words\t7.030000e+02\t7.030000e+02\t7.030000e+02\t0.000000e+00\t5\t0\nloss/mse__32_10_\tcpu_time\t3.414987e-06\t3.392142e-06\t3.434869e-06\t6.255854e-03\t5\t0\nloss/mse__32_10_\twall_time\t3.428576e-06\t3.401322e-06\t3.456842e-06\t8.096629e-03\t5\t0\nnormalization/batch_norm__32_256_\talloc_words\t4.656000e+03\t4.656000e+03\t4.656000e+03\t0.000000e+00\t5\t0\nnormalization/batch_norm__32_256_\tcpu_time\t2.447512e-04\t2.412132e-04\t2.483533e-04\t1.458635e-02\t5\t0\nnormalization/batch_norm__32_256_\twall_time\t2.451437e-04\t2.417406e-04\t2.493240e-04\t1.546732e-02\t5\t0\nnormalization/batch_norm__32_256_8_8_\talloc_words\t5.006000e+03\t5.006000e+03\t5.006000e+03\t0.000000e+00\t10\t1\nnormalization/batch_norm__32_256_8_8_\tcpu_time\t2.507420e-02\t2.377102e-02\t2.591671e-02\t4.278692e-02\t10\t0\nnormalization/batch_norm__32_256_8_8_\twall_time\t2.386942e-02\t2.278693e-02\t2.460672e-02\t3.811965e-02\t10\t0\nnormalization/layer_norm__32_128_256_\tallo
c_words\t3.754000e+03\t3.754000e+03\t3.754000e+03\t0.000000e+00\t5\t1\nnormalization/layer_norm__32_128_256_\tcpu_time\t2.951118e-02\t2.887281e-02\t2.970830e-02\t1.415546e-02\t5\t0\nnormalization/layer_norm__32_128_256_\twall_time\t2.723408e-02\t2.674500e-02\t2.737987e-02\t1.165582e-02\t5\t0\nnormalization/rms_norm__32_128_256_\talloc_words\t1.868000e+03\t1.868000e+03\t1.868000e+03\t0.000000e+00\t5\t0\nnormalization/rms_norm__32_128_256_\tcpu_time\t7.965700e-03\t7.931148e-03\t7.994719e-03\t3.990293e-03\t5\t0\nnormalization/rms_norm__32_128_256_\twall_time\t7.253412e-03\t7.224232e-03\t7.272761e-03\t3.345277e-03\t5\t2\npooling/avg_pool2d__32_64_32_32__k_2x2\talloc_words\t9.650000e+02\t9.650000e+02\t9.650000e+02\t0.000000e+00\t5\t1\npooling/avg_pool2d__32_64_32_32__k_2x2\tcpu_time\t2.944396e-02\t2.936873e-02\t2.954386e-02\t2.973982e-03\t5\t0\npooling/avg_pool2d__32_64_32_32__k_2x2\twall_time\t2.400500e-02\t2.392232e-02\t2.405630e-02\t2.790660e-03\t5\t0\npooling/max_pool2d__32_64_32_32__k_2x2\talloc_words\t4.980000e+02\t4.980000e+02\t4.980000e+02\t0.000000e+00\t5\t1\npooling/max_pool2d__32_64_32_32__k_2x2\tcpu_time\t7.406087e-02\t7.315733e-02\t7.438250e-02\t8.271349e-03\t5\t0\npooling/max_pool2d__32_64_32_32__k_2x2\twall_time\t2.149041e-02\t2.079528e-02\t2.253364e-02\t4.044505e-02\t5\t1\n"
  },
  {
    "path": "packages/kaun/doc/01-getting-started.md",
    "content": "# Getting Started\n\nThis guide covers installation, key concepts, and two complete examples:\nlearning XOR and classifying MNIST digits.\n\n## Installation\n\n<!-- $MDX skip -->\n```bash\nopam install kaun\n```\n\nOr build from source:\n\n<!-- $MDX skip -->\n```bash\ngit clone https://github.com/raven-ml/raven\ncd raven && dune build kaun\n```\n\n## Key Concepts\n\n**Layer.** A layer is a record `{ init; apply }`. `init` creates fresh\nparameters and state. `apply` runs the forward pass. Layers compose with\n`Layer.sequential` (homogeneous float pipelines) and `Layer.compose`\n(heterogeneous, e.g. embedding to dense).\n\n**Ptree.** A `Ptree.t` is a tree of tensors. Dict nodes hold named\nsubtrees, list nodes hold ordered subtrees, and leaves hold tensors.\nParameters and state are both `Ptree.t` values — plain data you can\ninspect, map, serialize, and load.\n\n**Layer.vars.** A `vars` bundles `params` (trainable), `state`\n(non-trainable, e.g. batch norm running statistics), and a `dtype`\nwitness.\n\n**Train.** `Train.make` pairs a model with an optimizer. `Train.init`\ncreates the initial training state. `Train.fit` trains over a `Data.t`\npipeline. `Train.predict` runs inference.\n\n**Data.** `Data.t` is a lazy, composable iterator. Build from tensors or\narrays, shuffle, batch, map, and feed to `Train.fit`.\n\n**Optim.** An optimizer combines a learning-rate schedule with an update\nrule. Schedules are functions `int -> float`.\n\n## Example: XOR\n\nThe XOR problem is the simplest non-linear classification task. This\nexample trains a small network to learn it.\n\n<!-- $MDX skip -->\n```ocaml\nopen Kaun\n\nlet () =\n  Nx.Rng.run ~seed:42 @@ fun () ->\n\n  (* XOR dataset: 4 examples, 2 features each *)\n  let x = Nx.create Nx.Float32 [| 4; 2 |] [| 0.; 0.; 0.; 1.; 1.; 0.; 1.; 1. |] in\n  let y = Nx.create Nx.Float32 [| 4; 1 |] [| 0.; 1.; 1.; 0. 
|] in\n\n  (* Model: 2 -> 4 -> 1 with tanh activation *)\n  let model =\n    Layer.sequential\n      [\n        Layer.linear ~in_features:2 ~out_features:4 ();\n        Layer.tanh ();\n        Layer.linear ~in_features:4 ~out_features:1 ();\n      ]\n  in\n\n  (* Create a trainer: model + optimizer *)\n  let trainer =\n    Train.make ~model\n      ~optimizer:(Vega.adam (Vega.Schedule.constant 0.01))\n  in\n\n  (* Initialize training state (model vars + optimizer state) *)\n  let st = Train.init trainer ~dtype:Nx.Float32 in\n\n  (* Train for 1000 steps on the same data *)\n  let st =\n    Train.fit trainer st\n      ~report:(fun ~step ~loss _st ->\n        if step mod 200 = 0 then\n          Printf.printf \"step %4d  loss %.6f\\n\" step loss)\n      (Data.repeat 1000 (x, fun pred -> Loss.binary_cross_entropy pred y))\n  in\n\n  (* Predict *)\n  let pred = Train.predict trainer st x |> Nx.sigmoid in\n  Printf.printf \"\\npredictions (expected 0 1 1 0):\\n\";\n  for i = 0 to 3 do\n    Printf.printf \"  [%.0f, %.0f] -> %.3f\\n\"\n      (Nx.item [ i; 0 ] x)\n      (Nx.item [ i; 1 ] x)\n      (Nx.item [ i; 0 ] pred)\n  done\n```\n\nKey points:\n\n- `Data.repeat 1000 (x, loss_fn)` creates a pipeline that yields the\n  same `(input, loss_fn)` pair 1000 times.\n- The loss function `fun pred -> Loss.binary_cross_entropy pred y`\n  receives the model output and computes a scalar loss.\n- `Train.predict` runs in evaluation mode (no dropout, no state\n  updates).\n\n## Example: MNIST\n\nA convolutional network for handwritten digit classification using the\nbuilt-in MNIST dataset loader.\n\n<!-- $MDX skip -->\n```ocaml\nopen Kaun\n\nlet batch_size = 64\nlet epochs = 3\nlet lr = 0.001\n\nlet model =\n  Layer.sequential\n    [\n      Layer.conv2d ~in_channels:1 ~out_channels:16 ();\n      Layer.relu ();\n      Layer.max_pool2d ~kernel_size:(2, 2) ();\n      Layer.conv2d ~in_channels:16 ~out_channels:32 ();\n      Layer.relu ();\n      Layer.max_pool2d ~kernel_size:(2, 2) ();\n    
  Layer.flatten ();\n      Layer.linear ~in_features:(32 * 7 * 7) ~out_features:128 ();\n      Layer.relu ();\n      Layer.linear ~in_features:128 ~out_features:10 ();\n    ]\n\nlet () =\n  Nx.Rng.run ~seed:42 @@ fun () ->\n\n  Printf.printf \"Loading MNIST...\\n%!\";\n  let (x_train, y_train), (x_test, y_test) = Kaun_datasets.mnist () in\n  let n_train = (Nx.shape x_train).(0) in\n  Printf.printf \"  train: %d  test: %d\\n%!\" n_train (Nx.shape x_test).(0);\n\n  (* Fixed test batches *)\n  let test_batches = Data.prepare ~batch_size (x_test, y_test) in\n\n  (* Trainer *)\n  let trainer =\n    Train.make ~model\n      ~optimizer:(Vega.adam (Vega.Schedule.constant lr))\n  in\n  let st = ref (Train.init trainer ~dtype:Nx.Float32) in\n\n  for epoch = 1 to epochs do\n    (* Shuffle training data each epoch *)\n    let train_data =\n      Data.prepare ~shuffle:true ~batch_size (x_train, y_train)\n      |> Data.map (fun (x, y) ->\n             (x, fun logits -> Loss.cross_entropy_sparse logits y))\n    in\n    let num_batches = n_train / batch_size in\n    let tracker = Metric.tracker () in\n\n    st :=\n      Train.fit trainer !st\n        ~report:(fun ~step ~loss _st ->\n          Metric.observe tracker \"loss\" loss;\n          Printf.printf \"\\r  batch %d/%d  loss: %.4f%!\" step num_batches loss)\n        train_data;\n    Printf.printf \"\\n%!\";\n\n    (* Evaluate on test set *)\n    Data.reset test_batches;\n    let test_acc =\n      Metric.eval\n        (fun (x, y) ->\n          let logits = Train.predict trainer !st x in\n          Metric.accuracy logits y)\n        test_batches\n    in\n    Printf.printf \"epoch %d  train_loss: %.4f  test_acc: %.2f%%\\n%!\" epoch\n      (Metric.mean tracker \"loss\")\n      (test_acc *. 100.)\n  done\n```\n\nKey points:\n\n- `Kaun_datasets.mnist ()` returns `((x_train, y_train), (x_test, y_test))`\n  as float32 tensor pairs. 
Images have shape `[N; 1; 28; 28]` (NCHW),\n  labels `[N]`.\n- `Data.prepare ~shuffle:true ~batch_size (x, y)` creates a shuffled,\n  batched pipeline of tensor pairs.\n- `Data.map` attaches the loss function to each batch, producing the\n  `(input, loss_fn)` pairs that `Train.fit` expects.\n- `Metric.eval` folds a function over a data pipeline and returns the\n  mean.\n- `Metric.tracker` accumulates running means for reporting.\n\n## Next Steps\n\n- [Layers and Models](../02-layers-and-models/) — full layer catalog, composition patterns, custom layers\n- [Training](../03-training/) — optimizers, schedules, losses, data pipelines, custom loops\n- [Checkpoints and Pretrained Models](../04-checkpoints-and-pretrained/) — saving, loading, HuggingFace Hub\n"
  },
  {
    "path": "packages/kaun/doc/02-layers-and-models.md",
    "content": "# Layers and Models\n\nA `Layer.t` pairs parameter initialization with a forward computation.\nThis guide covers the built-in layers, composition, custom layers, and\nthe `vars` type.\n\n## The Layer Type\n\nA layer is a record with two fields:\n\n<!-- $MDX skip -->\n```ocaml\ntype ('input, 'output) Layer.t = {\n  init :\n    'layout.\n    dtype:(float, 'layout) Nx.dtype -> 'layout vars;\n  apply :\n    'layout 'in_elt.\n    params:Ptree.t ->\n    state:Ptree.t ->\n    dtype:(float, 'layout) Nx.dtype ->\n    training:bool ->\n    ?ctx:Context.t ->\n    ('input, 'in_elt) Nx.t ->\n    ('output, 'layout) Nx.t * Ptree.t;\n}\n```\n\nThe type parameters `'input` and `'output` describe the element types.\nMost layers use `(float, float) Layer.t` — they accept and produce float\ntensors. `embedding` is `(int32, float) Layer.t` — it accepts int32\nindices and produces float vectors.\n\nUse `Layer.init` and `Layer.apply` instead of accessing fields directly:\n\n<!-- $MDX skip -->\n```ocaml\nlet vars = Layer.init model ~dtype:Nx.Float32\nlet output, vars' = Layer.apply model vars ~training:false x\n```\n\n## The vars Type\n\n`Layer.vars` bundles trainable parameters, non-trainable state, and a\ndtype witness:\n\n<!-- $MDX skip -->\n```ocaml\nLayer.params vars   (* Ptree.t — trainable parameters *)\nLayer.state vars    (* Ptree.t — non-trainable state (e.g. 
batch norm stats) *)\nLayer.dtype vars    (* dtype witness *)\n```\n\nUse `Layer.with_params` and `Layer.with_state` to replace components:\n\n<!-- $MDX skip -->\n```ocaml\nlet vars' = Layer.with_params vars new_params\n```\n\n## Composition\n\n### sequential\n\n`Layer.sequential` chains `(float, float) Layer.t` layers in order.\nParameters are stored as a `Ptree.List`:\n\n<!-- $MDX skip -->\n```ocaml\nlet model = Layer.sequential [\n  Layer.linear ~in_features:784 ~out_features:128 ();\n  Layer.relu ();\n  Layer.linear ~in_features:128 ~out_features:10 ();\n]\n```\n\n### compose\n\n`Layer.compose` chains two layers with different input/output types.\nParameters are stored as a `Ptree.Dict` with keys `\"left\"` and\n`\"right\"`:\n\n<!-- $MDX skip -->\n```ocaml\n(* embedding (int32 -> float) composed with a linear layer (float -> float) *)\nlet embed_then_project =\n  Layer.compose\n    (Layer.embedding ~vocab_size:10000 ~embed_dim:256 ())\n    (Layer.linear ~in_features:256 ~out_features:128 ())\n(* embed_then_project : (int32, float) Layer.t *)\n```\n\n## Dense\n\n<!-- $MDX skip -->\n```ocaml\nLayer.linear ~in_features:784 ~out_features:128 ()\n```\n\nFully connected layer computing `xW + b`. Optional `~weight_init` and\n`~bias_init` arguments override the defaults (Glorot uniform for\nweights, zeros for bias).\n\n## Convolution\n\n<!-- $MDX skip -->\n```ocaml\n(* 1D: input [batch; in_channels; length] *)\nLayer.conv1d ~in_channels:3 ~out_channels:16 ()\nLayer.conv1d ~in_channels:3 ~out_channels:16 ~kernel_size:5 ~stride:2 ~padding:`Valid ()\n\n(* 2D: input [batch; in_channels; height; width] *)\nLayer.conv2d ~in_channels:1 ~out_channels:32 ()\nLayer.conv2d ~in_channels:1 ~out_channels:32 ~kernel_size:(5, 5) ()\n```\n\n`conv1d` supports configurable `~kernel_size` (default 3), `~stride`\n(default 1), `~dilation` (default 1), and `~padding` (default `` `Same ``).\n\n`conv2d` supports configurable `~kernel_size` (default `(3, 3)`). 
Stride\nis `(1, 1)` and padding is `` `Same ``.\n\n## Normalization\n\n<!-- $MDX skip -->\n```ocaml\nLayer.layer_norm ~dim:128 ()              (* learnable gamma and beta *)\nLayer.layer_norm ~dim:128 ~eps:1e-6 ()\n\nLayer.rms_norm ~dim:128 ()                (* learnable scale, no bias *)\n\nLayer.batch_norm ~num_features:32 ()      (* learnable scale and bias,\n                                             running mean/var in state *)\n```\n\n`batch_norm` updates running statistics during training and uses them\nduring evaluation. Normalization axes are inferred from rank: rank 2\nuses `[0]`, rank 3 uses `[0; 2]`, rank 4 uses `[0; 2; 3]`.\n\n## Embedding\n\n<!-- $MDX skip -->\n```ocaml\nLayer.embedding ~vocab_size:10000 ~embed_dim:256 ()\n```\n\nInput: int32 token indices of any shape. Output: float tensors with\n`embed_dim` appended to the input shape.\n\nWhen `~scale:true` (the default), output vectors are multiplied by\n`sqrt(embed_dim)`.\n\n## Regularization\n\n<!-- $MDX skip -->\n```ocaml\nLayer.dropout ~rate:0.1 ()\n```\n\nDuring training (`~training:true`), randomly zeros elements with\nprobability `rate`. Requires `~rngs` during training. Identity during\nevaluation.\n\n## Activations\n\nAll activation layers have no parameters:\n\n<!-- $MDX skip -->\n```ocaml\nLayer.relu ()       (* max(0, x) *)\nLayer.gelu ()       (* Gaussian error linear unit *)\nLayer.silu ()       (* x * sigmoid(x) *)\nLayer.tanh ()       (* hyperbolic tangent *)\nLayer.sigmoid ()    (* logistic function *)\n```\n\n## Pooling\n\n<!-- $MDX skip -->\n```ocaml\nLayer.max_pool2d ~kernel_size:(2, 2) ()\nLayer.avg_pool2d ~kernel_size:(2, 2) ()\nLayer.max_pool2d ~kernel_size:(2, 2) ~stride:(1, 1) ()\n```\n\n`~stride` defaults to `~kernel_size`. No parameters.\n\n## Reshape\n\n<!-- $MDX skip -->\n```ocaml\nLayer.flatten ()\n```\n\nFlattens all dimensions after the batch dimension:\n`[batch; d1; ...; dn]` becomes `[batch; d1 * ... 
* dn]`.\n\n## Multi-Head Attention\n\n<!-- $MDX skip -->\n```ocaml\nAttention.multi_head_attention ~embed_dim:256 ~num_heads:8 ()\n```\n\nInput shape: `[batch; seq_len; embed_dim]`. Output shape:\n`[batch; seq_len; embed_dim]`.\n\nOptions:\n\n- `~num_kv_heads` — for grouped query attention (GQA). Default: same as\n  `num_heads`.\n- `~is_causal:true` — applies a causal mask to prevent attending to\n  future positions.\n- `~rope:true` — applies rotary position embeddings to Q and K.\n  `~rope_theta` sets the base frequency (default 10000.0).\n- `~dropout` — attention dropout rate. Requires `~rngs` during training.\n\nPass an attention mask via `Context`:\n\n<!-- $MDX skip -->\n```ocaml\nlet ctx =\n  Context.empty\n  |> Context.set ~name:Attention.attention_mask_key (Ptree.P mask)\nin\nLayer.apply model vars ~training:false ~ctx input\n```\n\nThe mask is a bool or int32 tensor of shape `[batch; seq_k]`. Nonzero\npositions are kept, zero positions are masked.\n\nRoPE is also available as a standalone function:\n\n<!-- $MDX skip -->\n```ocaml\nlet x' = Attention.rope x              (* default theta=10000, seq_dim=-2 *)\nlet x' = Attention.rope ~theta:500000. ~seq_dim:1 x\n```\n\n## Custom Layers\n\nA custom layer is a `{ init; apply }` record. 
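The smallest possible layer is the identity. As a sketch (it uses only `Layer.make_vars` and `Ptree.empty`, both covered in this guide), it creates no parameters and returns its input unchanged:\n\n<!-- $MDX skip -->\n```ocaml\nlet identity () : (float, float) Layer.t =\n  {\n    (* No parameters or state to create. *)\n    init =\n      (fun ~dtype ->\n        Layer.make_vars ~params:Ptree.empty ~state:Ptree.empty ~dtype);\n    (* The forward pass returns the input and the unchanged state. *)\n    apply =\n      (fun ~params:_ ~state ~dtype:_ ~training:_ ?rngs:_ ?ctx:_ x -> (x, state));\n  }\n```\n\n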
Here is a residual block:\n\n<!-- $MDX skip -->\n```ocaml\nlet residual_block ~dim () : (float, float) Layer.t =\n  let inner = Layer.sequential [\n    Layer.linear ~in_features:dim ~out_features:dim ();\n    Layer.relu ();\n    Layer.linear ~in_features:dim ~out_features:dim ();\n  ] in\n  {\n    init = inner.init;\n    apply = (fun ~params ~state ~dtype ~training ?rngs ?ctx x ->\n      let y, state' = inner.apply ~params ~state ~dtype ~training ?rngs ?ctx x in\n      (Nx.add x y, state'));\n  }\n```\n\nUse `Layer.make_vars` to build vars in custom `init` functions:\n\n<!-- $MDX skip -->\n```ocaml\nLayer.make_vars ~params ~state:Ptree.empty ~dtype\n```\n\n## Context\n\n`Context.t` carries per-call auxiliary data that specific layers read\nduring the forward pass. Most layers ignore it.\n\n<!-- $MDX skip -->\n```ocaml\nlet ctx =\n  Context.empty\n  |> Context.set ~name:\"attention_mask\" (Ptree.P mask)\n  |> Context.set ~name:\"token_type_ids\" (Ptree.P ids)\nin\nLayer.apply model vars ~training:false ~ctx input_ids\n```\n\nContext is forwarded through `compose` and `sequential` to all sublayers.\n`Train.fit`, `Train.step`, and `Train.predict` accept an optional `~ctx`\nargument.\n\n## Weight Initialization\n\nOverride default initialization with `Init.t` values:\n\n<!-- $MDX skip -->\n```ocaml\nLayer.linear ~in_features:128 ~out_features:64\n  ~weight_init:(Init.he_normal ())\n  ~bias_init:Init.zeros\n  ()\n```\n\nAvailable initializers:\n\n- `Init.zeros`, `Init.ones`, `Init.constant v`\n- `Init.uniform ~scale ()`, `Init.normal ~stddev ()`\n- `Init.glorot_uniform ()`, `Init.glorot_normal ()`\n- `Init.he_uniform ()`, `Init.he_normal ()`\n- `Init.lecun_uniform ()`, `Init.lecun_normal ()`\n- `Init.variance_scaling ~scale ~mode ~distribution ()`\n\n## Next Steps\n\n- [Training](../03-training/) — optimizers, losses, data pipelines, training loops\n- [Checkpoints and Pretrained Models](../04-checkpoints-and-pretrained/) — saving, loading, HuggingFace Hub\n"
  },
  {
    "path": "packages/kaun/doc/03-training.md",
    "content": "# Training\n\nThis guide covers optimizers, learning-rate schedules, loss functions,\ndata pipelines, the high-level training loop, metrics, and custom\ntraining.\n\n## Optimizers\n\nOptimizers are provided by the `Vega` package and take a `Schedule.t`\nas the learning rate:\n\n<!-- $MDX skip -->\n```ocaml\n(* SGD with momentum *)\nVega.sgd ~momentum:0.9 (Vega.Schedule.constant 0.1)\n\n(* Adam *)\nVega.adam (Vega.Schedule.constant 1e-3)\n\n(* AdamW with weight decay *)\nVega.adamw ~weight_decay:0.01 (Vega.Schedule.constant 1e-3)\n\n(* RMSprop *)\nVega.rmsprop (Vega.Schedule.constant 1e-3)\n\n(* Adagrad *)\nVega.adagrad (Vega.Schedule.constant 0.01)\n```\n\n`sgd` supports optional `~momentum` (default 0.0) and `~nesterov`\n(default false). `adam` and `adamw` support `~b1` (default 0.9), `~b2`\n(default 0.999), and `~eps` (default 1e-8). `rmsprop` supports `~decay`\n(default 0.9), `~eps`, and `~momentum`.\n\n## Learning-Rate Schedules\n\nA schedule is a function `int -> float` mapping 1-based step numbers to\nlearning rates:\n\n<!-- $MDX skip -->\n```ocaml\n(* Fixed learning rate *)\nVega.Schedule.constant 1e-3\n\n(* Cosine decay from 1e-3 to 0 over 10000 steps *)\nVega.Schedule.cosine_decay ~init_value:1e-3 ~decay_steps:10000 ()\n\n(* Cosine decay with minimum alpha *)\nVega.Schedule.cosine_decay ~init_value:1e-3 ~decay_steps:10000 ~alpha:1e-5 ()\n\n(* Linear warmup from 0 to 1e-3 over 1000 steps *)\nVega.Schedule.linear ~init_value:0. ~end_value:1e-3 ~steps:1000\n\n(* Cosine warmup *)\nVega.Schedule.warmup_cosine ~init_value:0. ~peak_value:1e-3 ~warmup_steps:1000\n\n(* Exponential decay *)\nVega.Schedule.exponential_decay ~init_value:1e-3 ~decay_rate:0.96 ~decay_steps:1000\n```\n\nCompose schedules by writing a custom function:\n\n<!-- $MDX skip -->\n```ocaml\nlet warmup_then_cosine step =\n  if step <= 1000 then\n    Vega.Schedule.linear ~init_value:0. 
~end_value:1e-3 ~steps:1000 step\n  else\n    Vega.Schedule.cosine_decay ~init_value:1e-3 ~decay_steps:9000 () (step - 1000)\n```\n\n## Loss Functions\n\nAll loss functions return scalar tensors that are differentiable through\nRune's autodiff:\n\n<!-- $MDX skip -->\n```ocaml\n(* Multi-class: logits [batch; num_classes], one-hot labels [batch; num_classes] *)\nLoss.cross_entropy logits one_hot_labels\n\n(* Multi-class with integer labels: logits [batch; num_classes], labels [batch] *)\nLoss.cross_entropy_sparse logits class_indices\n\n(* Binary: raw logits (not sigmoid), labels in {0, 1} *)\nLoss.binary_cross_entropy logits labels\n\n(* Regression *)\nLoss.mse predictions targets\nLoss.mae predictions targets\n```\n\n## Data Pipelines\n\n`Data.t` is a lazy, composable iterator. Build pipelines by chaining\nconstructors, transformers, and consumers.\n\n### Constructors\n\n<!-- $MDX skip -->\n```ocaml\n(* From arrays *)\nData.of_array [| example1; example2; example3 |]\n\n(* From tensors: slices along first dimension *)\nData.of_tensor x         (* yields x[0], x[1], ... *)\nData.of_tensors (x, y)   (* yields (x[0], y[0]), (x[1], y[1]), ... *)\n\n(* From a function *)\nData.of_fn 1000 (fun i -> generate_example i)\n\n(* Repeat a value *)\nData.repeat 1000 (x, loss_fn)\n```\n\n### Transformers\n\n<!-- $MDX skip -->\n```ocaml\n(* Map each element *)\nData.map (fun (x, y) -> (preprocess x, y)) data\n\n(* Batch into arrays of size n *)\nData.batch 32 data               (* yields arrays of 32 elements *)\nData.batch ~drop_last:true 32 data\n\n(* Batch and map in one step *)\nData.map_batch 32 collate_fn data\n\n(* Shuffle *)\nData.shuffle rng_key data\n```\n\n### Consumers\n\n<!-- $MDX skip -->\n```ocaml\nData.iter (fun x -> process x) data\nData.iteri (fun i x -> Printf.printf \"%d: %f\\n\" i x) data\nData.fold (fun acc x -> acc +. x) 0. 
data\nData.to_array data\nData.to_seq data\n```\n\n### The prepare Shortcut\n\n`Data.prepare` combines tensor slicing, optional shuffle, and batching\ninto one call. It is the standard way to feed tensor data to training:\n\n<!-- $MDX skip -->\n```ocaml\nlet train_data =\n  Data.prepare ~shuffle:rng_key ~batch_size:64 (x_train, y_train)\n  |> Data.map (fun (x, y) ->\n         (x, fun logits -> Loss.cross_entropy_sparse logits y))\n```\n\n`Data.prepare` yields `(x_batch, y_batch)` tensor pairs. The `Data.map`\nstep attaches the loss function, producing the `(input, loss_fn)` pairs\nthat `Train.fit` expects.\n\n`~drop_last` defaults to `true` in `prepare`.\n\n### Resetting\n\nPipelines are single-pass. Call `Data.reset` to iterate again:\n\n<!-- $MDX skip -->\n```ocaml\nData.reset test_batches;\nlet acc = Metric.eval eval_fn test_batches\n```\n\n## High-Level Training\n\n### Train.make and Train.init\n\nCreate a trainer by pairing a model with an optimizer, then initialize:\n\n<!-- $MDX skip -->\n```ocaml\nlet trainer = Train.make ~model\n  ~optimizer:(Vega.adam (Vega.Schedule.constant 1e-3))\n\nlet st = Train.init trainer ~dtype:Nx.Float32\n```\n\n### Train.fit\n\n`Train.fit` trains over a data pipeline and returns the final state:\n\n<!-- $MDX skip -->\n```ocaml\nlet st = Train.fit trainer st\n  ~report:(fun ~step ~loss _st ->\n    Printf.printf \"step %d  loss %.4f\\n\" step loss)\n  data\n```\n\nEach element of `data` is `(input, loss_fn)` where `loss_fn` takes the\nmodel output and returns a scalar loss.\n\nThe optional `~report` callback is called after every step. 
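\n\nThe callback also receives the updated training state, which makes periodic checkpointing a natural fit. A sketch, assuming the `Checkpoint.save`, `Train.vars`, and `Layer.params` APIs described in the checkpointing guide:\n\n<!-- $MDX skip -->\n```ocaml\n(* Save a checkpoint every 1000 steps from the report callback. *)\nlet st = Train.fit trainer st\n  ~report:(fun ~step ~loss st ->\n    Printf.printf \"step %d  loss %.4f\\n\" step loss;\n    if step mod 1000 = 0 then\n      Checkpoint.save \"latest.safetensors\" (Layer.params (Train.vars st)))\n  data\n```\n\n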
The `~step`\nnumber is 1-based.\n\n### Early Stopping\n\nRaise `Train.Early_stop` inside `~report` to end training early.\n`Train.fit` catches it and returns the current state:\n\n<!-- $MDX skip -->\n```ocaml\nlet st = Train.fit trainer st\n  ~report:(fun ~step:_ ~loss st ->\n    if loss < 0.001 then raise Train.Early_stop)\n  data\n```\n\n### Train.predict\n\nRun inference in evaluation mode (no dropout, no state updates):\n\n<!-- $MDX skip -->\n```ocaml\nlet logits = Train.predict trainer st x\n```\n\n### Train.step\n\nFor manual control over single training steps:\n\n<!-- $MDX skip -->\n```ocaml\nlet loss, st' = Train.step trainer st ~training:true\n  ~loss:(fun logits -> Loss.cross_entropy_sparse logits y)\n  x\n```\n\n### Starting from Pretrained Weights\n\nUse `Train.make_state` to create training state from externally loaded\nweights instead of random initialization:\n\n<!-- $MDX skip -->\n```ocaml\nlet vars = (* load from checkpoint *) in\nlet st = Train.make_state trainer vars\n```\n\n## Metrics\n\n### Metric Functions\n\nMetric functions are plain `predictions -> targets -> float` functions:\n\n<!-- $MDX skip -->\n```ocaml\n(* Multi-class: logits [batch; num_classes], labels [batch] *)\nMetric.accuracy logits targets\n\n(* Binary classification *)\nMetric.binary_accuracy ~threshold:0.5 predictions targets\n\n(* Precision, recall, F1 with averaging mode *)\nMetric.precision Metric.Macro logits targets\nMetric.recall Metric.Micro logits targets\nMetric.f1 Metric.Weighted logits targets\n```\n\nAveraging modes: `Macro` (unweighted mean of per-class scores), `Micro`\n(global aggregation), `Weighted` (mean weighted by class support).\n\n### Dataset Evaluation\n\n`Metric.eval` folds a function over a data pipeline and returns the\nmean:\n\n<!-- $MDX skip -->\n```ocaml\nData.reset test_batches;\nlet test_acc =\n  Metric.eval\n    (fun (x, y) ->\n      let logits = Train.predict trainer st x in\n      Metric.accuracy logits y)\n    
test_batches\n```\n\n`Metric.eval_many` evaluates multiple named metrics at once:\n\n<!-- $MDX skip -->\n```ocaml\nlet results =\n  Metric.eval_many\n    (fun (x, y) ->\n      let logits = Train.predict trainer st x in\n      [ (\"accuracy\", Metric.accuracy logits y);\n        (\"f1\", Metric.f1 Metric.Macro logits y) ])\n    test_batches\n(* results : (string * float) list *)\n```\n\n### Running Tracker\n\n`Metric.tracker` accumulates running means during training:\n\n<!-- $MDX skip -->\n```ocaml\nlet tracker = Metric.tracker () in\n(* In the training loop: *)\nMetric.observe tracker \"loss\" loss_value;\nMetric.observe tracker \"accuracy\" acc_value;\n\n(* After an epoch: *)\nPrintf.printf \"%s\\n\" (Metric.summary tracker);\n(* \"accuracy: 0.9150  loss: 0.4231\" *)\n\nMetric.reset tracker\n```\n\n## Gradient Utilities\n\n### Gradient Clipping\n\nClip gradients by global L2 norm to prevent exploding gradients. Use\nthis with `Train.step` in custom training loops:\n\n<!-- $MDX skip -->\n```ocaml\nlet clipped_grads = Optim.clip_by_global_norm 1.0 grads\n```\n\n### Global Norm\n\nCompute the L2 norm across all leaf tensors:\n\n<!-- $MDX skip -->\n```ocaml\nlet norm = Optim.global_norm grads\n```\n\n### Manual Gradient Computation\n\n`Grad.value_and_grad` differentiates a function with respect to a\n`Ptree.t`:\n\n<!-- $MDX skip -->\n```ocaml\nlet loss, grads = Grad.value_and_grad\n  (fun params ->\n    let output, _state = model.apply ~params ~state ~dtype ~training:true x in\n    Loss.mse output y)\n  params\n```\n\n`Grad.value_and_grad_aux` returns auxiliary data alongside the loss:\n\n<!-- $MDX skip -->\n```ocaml\nlet loss, grads, new_state = Grad.value_and_grad_aux\n  (fun params ->\n    let output, new_state = model.apply ~params ~state ~dtype ~training:true x in\n    (Loss.mse output y, new_state))\n  params\n```\n\n## Next Steps\n\n- [Layers and Models](../02-layers-and-models/) — full layer catalog, composition, custom layers\n- [Checkpoints and Pretrained 
Models](../04-checkpoints-and-pretrained/) — saving, loading, HuggingFace Hub\n"
  },
  {
    "path": "packages/kaun/doc/04-checkpoints-and-pretrained.md",
    "content": "# Checkpoints and Pretrained Models\n\nThis guide covers saving and loading model parameters with SafeTensors,\nand downloading pretrained weights from the HuggingFace Hub.\n\n## SafeTensors Checkpointing\n\nKaun serializes parameter trees to the\n[SafeTensors](https://huggingface.co/docs/safetensors/) format. Tensor\npaths from the tree structure become file keys (e.g. `layers.0.weight`).\n\n### Saving\n\n<!-- $MDX skip -->\n```ocaml\nlet vars = Train.vars st in\nCheckpoint.save \"model.safetensors\" (Layer.params vars)\n```\n\n### Loading\n\n`Checkpoint.load` requires a `~like` template that defines the expected\ntree structure and dtypes. Tensors are cast to the template's dtype if\nneeded. Extra keys in the file are ignored.\n\n<!-- $MDX skip -->\n```ocaml\n(* Initialize model to get the tree structure *)\nlet vars = Layer.init model ~dtype:Nx.Float32 in\nlet params = Checkpoint.load \"model.safetensors\" ~like:(Layer.params vars) in\nlet vars = Layer.with_params vars params\n```\n\n### Saving and Loading State\n\nTo save both parameters and non-trainable state (e.g. 
batch norm\nrunning statistics):\n\n<!-- $MDX skip -->\n```ocaml\n(* Save *)\nlet vars = Train.vars st in\nCheckpoint.save \"params.safetensors\" (Layer.params vars);\nCheckpoint.save \"state.safetensors\" (Layer.state vars)\n\n(* Load *)\nlet vars = Layer.init model ~dtype:Nx.Float32 in\nlet params = Checkpoint.load \"params.safetensors\" ~like:(Layer.params vars) in\nlet state = Checkpoint.load \"state.safetensors\" ~like:(Layer.state vars) in\nlet vars = Layer.with_params vars params |> fun v -> Layer.with_state v state\n```\n\n### Resuming Training\n\nUse `Train.make_state` to create training state from loaded weights:\n\n<!-- $MDX skip -->\n```ocaml\nlet trainer = Train.make ~model ~optimizer in\nlet st = Train.make_state trainer vars in\n(* Continue training from here *)\nlet st = Train.fit trainer st data\n```\n\n## HuggingFace Hub\n\nThe `kaun-hf` package provides access to the HuggingFace Hub for\ndownloading pretrained model weights and configurations.\n\n### Downloading Files\n\n<!-- $MDX skip -->\n```ocaml\nlet path = Kaun_hf.download_file ~model_id:\"bert-base-uncased\"\n  ~filename:\"config.json\" ()\n(* path : string — local filesystem path *)\n```\n\nFiles are cached under `$RAVEN_CACHE_ROOT/huggingface` (or\n`$XDG_CACHE_HOME/raven/huggingface`). Subsequent calls return the cached\npath.\n\nOptions:\n\n- `~token` — HuggingFace API token for private repositories. 
Defaults\n  to the `HF_TOKEN` environment variable.\n- `~cache_dir` — override the default cache directory.\n- `~offline:true` — only return cached files, do not download.\n- `~revision:(Rev \"v1.0\")` — download a specific tag, branch, or commit.\n  Default is `Main`.\n\n### Loading Configuration\n\n<!-- $MDX skip -->\n```ocaml\nlet config = Kaun_hf.load_config ~model_id:\"bert-base-uncased\" ()\n(* config : Jsont.json *)\n```\n\nReturns the parsed `config.json` from the repository.\n\n### Loading Weights\n\n<!-- $MDX skip -->\n```ocaml\nlet weights = Kaun_hf.load_weights ~model_id:\"bert-base-uncased\" ()\n(* weights : (string * Kaun.Ptree.tensor) list *)\n```\n\nReturns a flat list of `(name, tensor)` pairs from the model's\nSafeTensors checkpoint. Sharded checkpoints are handled transparently:\nwhen `model.safetensors.index.json` is present, all shards are\ndownloaded and merged.\n\nTensor names are the raw keys from the SafeTensors file (e.g.\n`bert.encoder.layer.0.attention.self.query.weight`). Your model code\nmaps these to its own parameter structure.\n\n### Loading a Pretrained Model\n\nThe typical pattern for loading pretrained weights:\n\n1. Build the model architecture from the config.\n2. Initialize to get the parameter tree structure.\n3. Load weights and map them to the tree.\n\n<!-- $MDX skip -->\n```ocaml\n(* 1. Build model from config *)\nlet config = Kaun_hf.load_config ~model_id:\"bert-base-uncased\" () in\nlet model = build_bert_model config in\n\n(* 2. Initialize to get tree structure *)\nlet vars = Layer.init model ~dtype:Nx.Float32 in\n\n(* 3. Load and map weights *)\nlet weights = Kaun_hf.load_weights ~model_id:\"bert-base-uncased\" () in\nlet params = map_weights_to_ptree weights (Layer.params vars) in\nlet vars = Layer.with_params vars params in\n\n(* 4. 
Use for inference *)\nlet trainer = Train.make ~model\n  ~optimizer:(Vega.adam (Vega.Schedule.constant 1e-5))\nin\nlet st = Train.make_state trainer vars in\nlet logits = Train.predict trainer st input_ids\n```\n\n### Cache Management\n\n<!-- $MDX skip -->\n```ocaml\n(* Clear all cached files *)\nKaun_hf.clear_cache ()\n\n(* Clear a specific model's cache *)\nKaun_hf.clear_cache ~model_id:\"bert-base-uncased\" ()\n```\n\n## Next Steps\n\n- [Getting Started](../01-getting-started/) — XOR and MNIST examples\n- [Layers and Models](../02-layers-and-models/) — layer catalog, composition, custom layers\n- [Training](../03-training/) — optimizers, losses, data pipelines, training loops\n"
  },
  {
    "path": "packages/kaun/doc/06-pytorch-comparison.md",
    "content": "# Kaun vs. PyTorch / Flax -- A Practical Comparison\n\nThis guide explains how Kaun relates to PyTorch and Flax, focusing on:\n\n* How core concepts map (modules/layers, parameters, training loops)\n* Where the APIs feel similar vs. deliberately different\n* How to translate common patterns between frameworks\n\nKaun's design is closer to Flax than to PyTorch: layers are pure data,\nparameters are explicit trees, and forward passes are functions rather\nthan method calls. If you know Flax, Kaun will feel familiar. If you\nknow only PyTorch, the main shift is from mutable objects to immutable\nrecords.\n\n---\n\n## 1. Big-Picture Differences\n\n| Aspect            | PyTorch                                        | Flax (Linen)                       | Kaun (OCaml)                                          |\n| ----------------- | ---------------------------------------------- | ---------------------------------- | ----------------------------------------------------- |\n| Language          | Python, dynamic                                | Python (JAX), dynamic              | OCaml, statically typed                               |\n| Model definition  | `nn.Module` class with `forward`               | `nn.Module` class with `__call__`  | `Layer.t` record with `init` and `apply`              |\n| Parameter storage | Mutable attributes on module                   | Frozen dict returned by `init`     | `Ptree.t` tree returned by `Layer.init`               |\n| Forward pass      | `model(x)` (stateful method)                   | `model.apply(params, x)`           | `Layer.apply model vars ~training x`                  |\n| Mutation          | Modules are mutable objects                    | Params are immutable dicts         | `Layer.vars` and `Ptree.t` are immutable              |\n| Autograd          | Dynamic tape (`autograd`)                      | Functional transforms (`jax.grad`) | Rune effect-based autodiff                            |\n| Optimizer 
        | `torch.optim.Adam(model.parameters(), lr=...)` | `optax.adam(lr)`                   | `Vega.adam (Vega.Schedule.constant lr)`               |\n| Training loop     | Manual (or Lightning/etc.)                     | Manual (or Orbax/etc.)             | `Train.fit` or manual `Train.step`                    |\n| Data loading      | `DataLoader`                                   | `tf.data` or manual                | `Data.t` lazy pipeline                                |\n| Checkpointing     | `torch.save` / `torch.load` (pickle)           | Orbax / msgpack                    | SafeTensors via `Checkpoint.save` / `Checkpoint.load` |\n| RNG               | Global `torch.manual_seed`                     | Explicit PRNGKey threading         | Implicit scope via `Nx.Rng.run ~seed`                 |\n| Device management | `model.to(\"cuda\")`, `tensor.cuda()`            | `jax.device_put`                   | CPU by default; JIT manages devices internally        |\n\n---\n\n## 2. Defining Models\n\n### PyTorch\n\n```python\nimport torch\nimport torch.nn as nn\n\nclass MLP(nn.Module):\n    def __init__(self, in_features, hidden, out_features):\n        super().__init__()\n        self.fc1 = nn.Linear(in_features, hidden)\n        self.fc2 = nn.Linear(hidden, out_features)\n\n    def forward(self, x):\n        x = torch.relu(self.fc1(x))\n        return self.fc2(x)\n\nmodel = MLP(784, 128, 10)\n```\n\n### Flax\n\n```python\nimport flax.linen as nn\nimport jax\nimport jax.numpy as jnp\n\nclass MLP(nn.Module):\n    hidden: int\n    out_features: int\n\n    @nn.compact\n    def __call__(self, x):\n        x = nn.relu(nn.Dense(self.hidden)(x))\n        return nn.Dense(self.out_features)(x)\n\nmodel = MLP(hidden=128, out_features=10)\nparams = model.init(jax.random.PRNGKey(0), jnp.ones([1, 784]))\n```\n\n### Kaun\n\n<!-- $MDX skip -->\n```ocaml\nopen Kaun\n\nlet model =\n  Layer.sequential\n    [\n      Layer.linear ~in_features:784 ~out_features:128 ();\n      Layer.relu ();\n      Layer.linear 
~in_features:128 ~out_features:10 ();\n    ]\n\nlet vars =\n  Nx.Rng.run ~seed:0 @@ fun () ->\n  Layer.init model ~dtype:Nx.Float32\n```\n\nKey differences:\n\n* PyTorch defines models as classes. Flax defines models as dataclasses\n  with `__call__`. Kaun uses `Layer.t` records -- plain data, not classes.\n* `Layer.sequential` replaces class-based composition for homogeneous\n  float pipelines. `Layer.compose` handles heterogeneous types (e.g.\n  embedding into dense).\n* Activation functions are layers (`Layer.relu ()`) rather than free\n  functions called inside `forward`. This keeps the composition uniform.\n\n---\n\n## 3. Parameters\n\n### PyTorch\n\n```python\n# Parameters live inside the module\nfor name, param in model.named_parameters():\n    print(name, param.shape)\n\n# state_dict is an OrderedDict\nsd = model.state_dict()\nmodel.load_state_dict(sd)\n```\n\n### Flax\n\n```python\n# Params are a frozen dict returned by init\nparams = model.init(key, x)[\"params\"]\njax.tree_util.tree_map(lambda p: p.shape, params)\n```\n\n### Kaun\n\n<!-- $MDX skip -->\n```ocaml\n(* vars bundles params, state, and dtype *)\nlet params = Layer.params vars   (* Ptree.t *)\nlet state  = Layer.state vars    (* Ptree.t *)\nlet dt     = Layer.dtype vars    (* (float, 'layout) Nx.dtype *)\n\n(* Inspect parameter shapes *)\nlet paths = Ptree.flatten_with_paths params\n(* [(\"0.weight\", P tensor); (\"0.bias\", P tensor); ...] *)\n\n(* Count total parameters *)\nlet n = Ptree.count_parameters params\n\n(* Replace parameters *)\nlet vars' = Layer.with_params vars new_params\n```\n\nKey differences:\n\n* PyTorch stores parameters as mutable module attributes. Flax returns\n  frozen dicts. Kaun returns `Ptree.t` -- a tree with `Tensor` leaves,\n  `Dict` nodes, and `List` nodes.\n* `Ptree.t` is plain immutable data. You can map, fold, flatten, and\n  serialize it without going through the model.\n* `Layer.vars` also carries non-trainable state (e.g. 
batch norm\n  running statistics), separate from trainable parameters.\n\n---\n\n## 4. Forward Pass\n\n### PyTorch\n\n```python\nmodel.train()\noutput = model(x)          # stateful: dropout active, batchnorm updates\n\nmodel.eval()\nwith torch.no_grad():\n    output = model(x)      # no dropout, batchnorm uses running stats\n```\n\n### Flax\n\n```python\noutput = model.apply(params, x)\noutput = model.apply(params, x, train=True, rngs={\"dropout\": key})\n```\n\n### Kaun\n\n<!-- $MDX skip -->\n```ocaml\n(* Training: dropout active, batchnorm updates running stats *)\nlet output, vars' = Layer.apply model vars ~training:true x\n\n(* Evaluation: no dropout, batchnorm uses running stats *)\nlet output, vars' = Layer.apply model vars ~training:false x\n\n(* Or through the trainer *)\nlet logits = Train.predict trainer st x\n```\n\nKey differences:\n\n* PyTorch uses `model.train()` / `model.eval()` to switch mode globally.\n  Kaun passes `~training` as an argument on each call.\n* `Layer.apply` returns `(output, updated_vars)`. The updated vars carry\n  new state (e.g. batch norm statistics). Parameters are unchanged.\n* `Train.predict` is a shortcut for evaluation mode with no state updates.\n\n---\n\n## 5. 
Optimizers and LR Schedules\n\n### PyTorch\n\n```python\noptimizer = torch.optim.Adam(model.parameters(), lr=1e-3)\nscheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=10000)\n\noptimizer.zero_grad()\nloss.backward()\noptimizer.step()\nscheduler.step()\n```\n\n### Flax / Optax\n\n```python\nimport optax\n\ntx = optax.adam(learning_rate=optax.cosine_decay_schedule(1e-3, 10000))\nopt_state = tx.init(params)\nupdates, opt_state = tx.update(grads, opt_state, params)\nparams = optax.apply_updates(params, updates)\n```\n\n### Kaun\n\n<!-- $MDX skip -->\n```ocaml\n(* Schedule is a function: step -> lr *)\nlet schedule = Vega.Schedule.cosine_decay ~init_value:1e-3 ~decay_steps:10000 ()\n\n(* Optimizer *)\nlet tx = Vega.adam schedule\n\n(* Init and update via Kaun's Optim bridge *)\nlet st = Optim.init tx params in\nlet updates, st' = Optim.update st params grads in\nlet params' = Optim.apply_updates params updates\n\n(* Or use the convenience function *)\nlet params', st' = Optim.step st params grads\n```\n\nAvailable optimizers:\n\n<!-- $MDX skip -->\n```ocaml\nVega.sgd ~momentum:0.9 ~nesterov:true schedule\nVega.adam ~b1:0.9 ~b2:0.999 ~eps:1e-8 schedule\nVega.adamw ~weight_decay:0.01 schedule\nVega.rmsprop ~decay:0.9 ~momentum:0.0 schedule\nVega.adagrad ~eps:1e-8 schedule\n```\n\nAvailable schedules:\n\n<!-- $MDX skip -->\n```ocaml\nVega.Schedule.constant 1e-3\nVega.Schedule.cosine_decay ~init_value:1e-3 ~decay_steps:10000 ()\nVega.Schedule.warmup_cosine ~init_value:0. ~peak_value:1e-3 ~warmup_steps:1000\nVega.Schedule.linear ~init_value:0. ~end_value:1e-3 ~steps:1000\nVega.Schedule.exponential_decay ~init_value:1e-3 ~decay_rate:0.96 ~decay_steps:1000\n```\n\nKey differences:\n\n* PyTorch couples the optimizer to the model via `model.parameters()`.\n  Vega and Optax are decoupled -- they operate on parameter trees.\n* PyTorch separates scheduler from optimizer. 
Vega (like Optax) bakes\n  the schedule into the optimizer as its last positional argument.\n* A Vega schedule is just `int -> float`. Compose them by writing a\n  plain OCaml function.\n\n---\n\n## 6. Loss Functions\n\n### PyTorch\n\n```python\nloss = nn.functional.cross_entropy(logits, labels)\nloss = nn.functional.binary_cross_entropy_with_logits(logits, labels)\nloss = nn.functional.mse_loss(pred, target)\n```\n\n### Kaun\n\n<!-- $MDX skip -->\n```ocaml\n(* Multi-class with one-hot labels *)\nLoss.cross_entropy logits one_hot_labels\n\n(* Multi-class with integer labels *)\nLoss.cross_entropy_sparse logits class_indices\n\n(* Binary classification (raw logits, not sigmoid) *)\nLoss.binary_cross_entropy logits labels\n\n(* Regression *)\nLoss.mse predictions targets\nLoss.mae predictions targets\n```\n\nKey differences:\n\n* PyTorch's `cross_entropy` expects integer labels (like\n  `cross_entropy_sparse`). Kaun offers both one-hot and integer\n  variants.\n* All Kaun losses return scalar means and are differentiable through\n  Rune's autodiff.\n* Kaun losses are plain functions, not module methods. There is no\n  `nn.CrossEntropyLoss()` class.\n\n---\n\n## 7. 
Training Loops\n\n### PyTorch (manual loop)\n\n```python\nmodel.train()\noptimizer = torch.optim.Adam(model.parameters(), lr=1e-3)\n\nfor epoch in range(10):\n    for x_batch, y_batch in dataloader:\n        optimizer.zero_grad()\n        logits = model(x_batch)\n        loss = nn.functional.cross_entropy(logits, y_batch)\n        loss.backward()\n        optimizer.step()\n        print(f\"loss: {loss.item():.4f}\")\n```\n\n### Kaun with Train.fit\n\n<!-- $MDX skip -->\n```ocaml\nlet trainer =\n  Train.make ~model\n    ~optimizer:(Vega.adam (Vega.Schedule.constant 1e-3))\n\nlet st = Nx.Rng.run ~seed:42 @@ fun () ->\n  Train.init trainer ~dtype:Nx.Float32\n\n(* Train over a data pipeline *)\nlet st =\n  Train.fit trainer st\n    ~report:(fun ~step ~loss _st ->\n      Printf.printf \"step %d  loss %.4f\\n\" step loss)\n    data\n```\n\n`Train.fit` takes a `Data.t` where each element is `(input, loss_fn)`.\nThe loss function receives the model output and returns a scalar loss.\nGradient computation, optimizer step, and state threading are handled\ninternally.\n\n### Kaun with Train.step (manual loop)\n\nFor fine-grained control, use `Train.step` directly:\n\n<!-- $MDX skip -->\n```ocaml\nlet st = ref (Nx.Rng.run ~seed:42 @@ fun () ->\n  Train.init trainer ~dtype:Nx.Float32)\n\nlet () =\n  Data.iter\n    (fun (x, y) ->\n      let loss, st' =\n        Train.step trainer !st ~training:true\n          ~loss:(fun logits -> Loss.cross_entropy_sparse logits y)\n          x\n      in\n      st := st';\n      Printf.printf \"loss: %.4f\\n\" (Nx.item [] loss))\n    data\n```\n\n### Early stopping\n\nRaise `Train.Early_stop` inside the `~report` callback:\n\n<!-- $MDX skip -->\n```ocaml\nlet st =\n  Train.fit trainer st\n    ~report:(fun ~step:_ ~loss _st ->\n      if loss < 0.001 then raise Train.Early_stop)\n    data\n```\n\nKey differences:\n\n* PyTorch training loops are fully manual: zero gradients, forward,\n  backward, step. 
Kaun's `Train.fit` handles the entire loop.\n* `Train.step` is the escape hatch for custom loops, but you never call\n  `backward` or `zero_grad` -- differentiation is implicit.\n* State threading replaces mutation. `Train.fit` returns the final\n  state; `Train.step` returns `(loss, new_state)`.\n\n---\n\n## 8. Data Loading\n\n### PyTorch\n\n```python\nfrom torch.utils.data import DataLoader, TensorDataset\n\ndataset = TensorDataset(x_train, y_train)\nloader = DataLoader(dataset, batch_size=64, shuffle=True, drop_last=True)\n\nfor x_batch, y_batch in loader:\n    ...\n```\n\n### Kaun\n\n<!-- $MDX skip -->\n```ocaml\n(* From tensor pairs -- the common case *)\nlet data =\n  Data.prepare ~shuffle:rng_key ~batch_size:64 (x_train, y_train)\n  |> Data.map (fun (x, y) ->\n         (x, fun logits -> Loss.cross_entropy_sparse logits y))\n\n(* From arrays *)\nlet data = Data.of_array examples |> Data.shuffle rng_key |> Data.batch 32\n\n(* From a generator function *)\nlet data = Data.of_fn 10000 generate_example\n\n(* Repeat a fixed example (useful for toy problems) *)\nlet data = Data.repeat 1000 (x, loss_fn)\n\n(* Consumers *)\nData.iter process data\nData.fold accumulate init data\nlet arr = Data.to_array data\n```\n\nKey differences:\n\n* PyTorch uses `Dataset` + `DataLoader` classes with worker processes\n  for parallel loading. Kaun uses `Data.t`, a lazy composable iterator.\n* `Data.prepare` is the standard shortcut: it slices tensors, optionally\n  shuffles, and batches in one call. `~drop_last` defaults to `true`.\n* Pipelines are single-pass. Call `Data.reset` before iterating again\n  (e.g. between epochs).\n* `Data.map` attaches the loss function to each batch, producing the\n  `(input, loss_fn)` pairs that `Train.fit` expects.\n\n---\n\n## 9. 
Checkpointing\n\n### PyTorch\n\n```python\n# Save\ntorch.save(model.state_dict(), \"model.pt\")\n\n# Load\nmodel.load_state_dict(torch.load(\"model.pt\"))\n```\n\n### Kaun\n\n<!-- $MDX skip -->\n```ocaml\n(* Save parameters *)\nlet vars = Train.vars st in\nCheckpoint.save \"model.safetensors\" (Layer.params vars)\n\n(* Load parameters *)\nlet vars = Layer.init model ~dtype:Nx.Float32 in\nlet params = Checkpoint.load \"model.safetensors\" ~like:(Layer.params vars) in\nlet vars = Layer.with_params vars params\n\n(* Save both params and state (e.g. batch norm stats) *)\nCheckpoint.save \"params.safetensors\" (Layer.params vars);\nCheckpoint.save \"state.safetensors\" (Layer.state vars)\n\n(* Resume training from loaded weights *)\nlet st = Train.make_state trainer vars\n```\n\nKey differences:\n\n* PyTorch uses Python pickle by default (arbitrary code execution risk).\n  Kaun uses SafeTensors -- a flat, memory-mappable format with no code\n  execution.\n* `Checkpoint.load` requires a `~like` template defining the expected\n  tree structure and dtypes. Extra keys in the file are ignored, and\n  tensors are cast to the template's dtype if needed.\n* Pretrained weights from HuggingFace Hub are available via\n  `Kaun_hf.load_weights`.\n\n---\n\n## 10. 
Quick Cheat Sheet\n\n| Task                         | PyTorch                                            | Kaun                                                           |\n| ---------------------------- | -------------------------------------------------- | -------------------------------------------------------------- |\n| Define a model               | `class M(nn.Module): ...`                          | `Layer.sequential [Layer.linear ...; Layer.relu (); ...]`      |\n| Initialize parameters        | `model = M()` (implicit)                           | `Layer.init model ~dtype:Nx.Float32`                           |\n| Forward pass (training)      | `model.train(); y = model(x)`                      | `Layer.apply model vars ~training:true x`                      |\n| Forward pass (eval)          | `model.eval(); y = model(x)`                       | `Train.predict trainer st x`                                   |\n| Count parameters             | `sum(p.numel() for p in model.parameters())`       | `Ptree.count_parameters (Layer.params vars)`                   |\n| Create optimizer             | `Adam(model.parameters(), lr=1e-3)`                | `Vega.adam (Vega.Schedule.constant 1e-3)`                      |\n| Cosine decay schedule        | `CosineAnnealingLR(opt, T_max=N)`                  | `Vega.Schedule.cosine_decay ~init_value:lr ~decay_steps:N ()` |\n| Compute loss                 | `F.cross_entropy(logits, labels)`                  | `Loss.cross_entropy_sparse logits labels`                      |\n| Training step                | `zero_grad(); loss.backward(); opt.step()`         | `Train.step trainer st ~training:true ~loss x`                 |\n| Full training loop           | Manual `for` loop                                  | `Train.fit trainer st data`                                    |\n| Early stopping               | Manual condition check                             | `raise Train.Early_stop` inside `~report`                      |\n| 
Gradient clipping            | `clip_grad_norm_(model.parameters(), max_norm)`    | `Optim.clip_by_global_norm max_norm grads`                     |\n| Data loading                 | `DataLoader(dataset, batch_size=64, shuffle=True)` | `Data.prepare ~shuffle:true ~batch_size:64 (x, y)`             |\n| Save checkpoint              | `torch.save(model.state_dict(), path)`             | `Checkpoint.save path (Layer.params vars)`                     |\n| Load checkpoint              | `model.load_state_dict(torch.load(path))`          | `Checkpoint.load path ~like:(Layer.params vars)`               |\n| Compose heterogeneous layers | Define inside `forward`                            | `Layer.compose embedding_layer dense_layer`                    |\n| Dropout                      | `nn.Dropout(p=0.1)`                                | `Layer.dropout ~rate:0.1 ()`                                   |\n| Batch normalization          | `nn.BatchNorm2d(32)`                               | `Layer.batch_norm ~num_features:32 ()`                         |\n| Layer normalization          | `nn.LayerNorm(128)`                                | `Layer.layer_norm ~dim:128 ()`                                 |\n| Set RNG seed                 | `torch.manual_seed(42)`                            | `Nx.Rng.run ~seed:42 @@ fun () -> ...`                         |\n"
  },
  {
    "path": "packages/kaun/doc/dune",
    "content": "(mdx\n (files *.md)\n (package kaun)\n (libraries kaun rune nx nx.io))\n"
  },
  {
    "path": "packages/kaun/doc/index.md",
    "content": "# Kaun\n\nKaun is a neural network library for OCaml built on\n[Rune](https://github.com/raven-ml/raven/tree/main/rune). It provides\ncomposable layers, parameter trees, optimizers, data pipelines, and a\nhigh-level training loop. Pretrained weights load from the HuggingFace\nHub via SafeTensors.\n\n## Features\n\n- **Composable layers**: `sequential`, `compose`, and custom `{ init; apply }` records\n- **Parameter trees**: `Ptree.t` for inspection, serialization, and transformation\n- **High-level training**: `Train.fit` with data pipelines, or `Train.step` for manual control\n- **Optimizers**: SGD, Adam, AdamW, RMSprop, Adagrad with LR schedules\n- **Losses**: cross-entropy, binary cross-entropy, MSE, MAE\n- **Metrics**: accuracy, precision, recall, F1, running tracker, dataset evaluation\n- **Layers**: linear, conv1d/2d, layer_norm, rms_norm, batch_norm, embedding, dropout, pooling, multi-head attention with GQA and RoPE\n- **Checkpointing**: SafeTensors save/load, HuggingFace Hub integration\n- **Datasets**: MNIST and Fashion-MNIST loaders\n\n## Quick Start\n\nTrain a model on the XOR problem:\n\n<!-- $MDX skip -->\n```ocaml\nopen Kaun\n\nlet () =\n  Nx.Rng.run ~seed:42 @@ fun () ->\n  let x = Nx.create Nx.Float32 [| 4; 2 |] [| 0.; 0.; 0.; 1.; 1.; 0.; 1.; 1. |] in\n  let y = Nx.create Nx.Float32 [| 4; 1 |] [| 0.; 1.; 1.; 0. 
|] in\n\n  let model = Layer.sequential [\n    Layer.linear ~in_features:2 ~out_features:4 ();\n    Layer.tanh ();\n    Layer.linear ~in_features:4 ~out_features:1 ();\n  ] in\n\n  let trainer = Train.make ~model\n    ~optimizer:(Vega.adam (Vega.Schedule.constant 0.01))\n  in\n  let st = Train.init trainer ~dtype:Nx.Float32 in\n  let st = Train.fit trainer st\n    (Data.repeat 1000 (x, fun pred -> Loss.binary_cross_entropy pred y))\n  in\n  let pred = Train.predict trainer st x |> Nx.sigmoid in\n  for i = 0 to 3 do\n    Printf.printf \"[%.0f, %.0f] -> %.3f\\n\"\n      (Nx.item [ i; 0 ] x) (Nx.item [ i; 1 ] x) (Nx.item [ i; 0 ] pred)\n  done\n```\n\n## Next Steps\n\n- [Getting Started](01-getting-started/) — installation, XOR and MNIST examples, key concepts\n- [Layers and Models](02-layers-and-models/) — layer catalog, composition, custom layers\n- [Training](03-training/) — optimizers, losses, data pipelines, metrics, custom loops\n- [Checkpoints and Pretrained Models](04-checkpoints-and-pretrained/) — SafeTensors, HuggingFace Hub\n\n## See Also\n\n- [Munin](/docs/munin/) — experiment tracking with live terminal dashboard and CLI\n"
  },
  {
    "path": "packages/kaun/examples/01-xor/dune",
    "content": "(executable\n (name main)\n (libraries nx rune vega kaun))\n"
  },
  {
    "path": "packages/kaun/examples/01-xor/main.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Kaun\n\nlet () =\n  Nx.Rng.run ~seed:42 @@ fun () ->\n  let dtype = Nx.float32 in\n\n  (* XOR dataset *)\n  let x = Nx.create dtype [| 4; 2 |] [| 0.; 0.; 0.; 1.; 1.; 0.; 1.; 1. |] in\n  let y = Nx.create dtype [| 4; 1 |] [| 0.; 1.; 1.; 0. |] in\n\n  (* Model *)\n  let model =\n    Layer.sequential\n      [\n        Layer.linear ~in_features:2 ~out_features:4 ();\n        Layer.tanh ();\n        Layer.linear ~in_features:4 ~out_features:1 ();\n      ]\n  in\n\n  (* Trainer = model + optimizer *)\n  let trainer =\n    Train.make ~model ~optimizer:(Vega.adam (Vega.Schedule.constant 0.01))\n  in\n\n  (* Initialize train state (vars + optimizer state) *)\n  let st = Train.init trainer ~dtype in\n\n  (* Fit *)\n  let st =\n    Train.fit trainer st\n      ~report:(fun ~step ~loss _st ->\n        if step mod 200 = 0 then Printf.printf \"step %4d  loss %.6f\\n\" step loss)\n      (Data.repeat 1000 (x, fun pred -> Loss.binary_cross_entropy pred y))\n  in\n\n  (* Evaluate *)\n  let pred = Train.predict trainer st x |> Nx.sigmoid in\n  Printf.printf \"\\npredictions (expected 0 1 1 0):\\n\";\n  for i = 0 to 3 do\n    Printf.printf \"  [%.0f, %.0f] -> %.3f\\n\"\n      (Nx.item [ i; 0 ] x)\n      (Nx.item [ i; 1 ] x)\n      (Nx.item [ i; 0 ] pred)\n  done\n"
  },
  {
    "path": "packages/kaun/examples/02-mnist/dune",
    "content": "(executable\n (name main)\n (libraries nx rune vega kaun kaun.datasets))\n"
  },
  {
    "path": "packages/kaun/examples/02-mnist/main.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Kaun\n\nlet batch_size = 64\nlet epochs = 3\nlet lr = 0.001\n\nlet model =\n  Layer.sequential\n    [\n      Layer.conv2d ~in_channels:1 ~out_channels:16 ();\n      Layer.relu ();\n      Layer.max_pool2d ~kernel_size:(2, 2) ();\n      Layer.conv2d ~in_channels:16 ~out_channels:32 ();\n      Layer.relu ();\n      Layer.max_pool2d ~kernel_size:(2, 2) ();\n      Layer.flatten ();\n      Layer.linear ~in_features:(32 * 7 * 7) ~out_features:128 ();\n      Layer.relu ();\n      Layer.linear ~in_features:128 ~out_features:10 ();\n    ]\n\nlet () =\n  Nx.Rng.run ~seed:42 @@ fun () ->\n  let dtype = Nx.float32 in\n\n  Printf.printf \"Loading MNIST...\\n%!\";\n  let (x_train, y_train), (x_test, y_test) = Kaun_datasets.mnist () in\n  let n_train = (Nx.shape x_train).(0) in\n  Printf.printf \"  train: %d  test: %d\\n%!\" n_train (Nx.shape x_test).(0);\n\n  (* Test batches (fixed order, no shuffle) *)\n  let test_batches = Data.prepare ~batch_size (x_test, y_test) in\n\n  (* Trainer *)\n  let trainer =\n    Train.make ~model ~optimizer:(Vega.adam (Vega.Schedule.constant lr))\n  in\n  let st = ref (Train.init trainer ~dtype) in\n\n  for epoch = 1 to epochs do\n    let train_data =\n      Data.prepare ~shuffle:true ~batch_size (x_train, y_train)\n      |> Data.map (fun (x, y) ->\n          (x, fun logits -> Loss.cross_entropy_sparse logits y))\n    in\n    let num_batches = n_train / batch_size in\n    let tracker = Metric.tracker () in\n    st :=\n      Train.fit trainer !st\n        ~report:(fun ~step ~loss _st ->\n          Metric.observe tracker \"loss\" loss;\n          Printf.printf \"\\r  batch %d/%d  loss: %.4f%!\" step num_batches loss)\n        train_data;\n    Printf.printf \"\\n%!\";\n\n    (* 
Evaluate *)\n    Data.reset test_batches;\n    let test_acc =\n      Metric.eval\n        (fun (x, y) ->\n          let logits = Train.predict trainer !st x in\n          Metric.accuracy logits y)\n        test_batches\n    in\n\n    Printf.printf \"epoch %d  train_loss: %.4f  test_acc: %.2f%%\\n%!\" epoch\n      (Metric.mean tracker \"loss\")\n      (test_acc *. 100.)\n  done;\n\n  Printf.printf \"\\nDone.\\n\"\n"
  },
  {
    "path": "packages/kaun/examples/03-bert/bert.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Kaun\n\nlet invalid_argf fmt = Printf.ksprintf invalid_arg fmt\n\nlet require_float_dtype (type p in_elt) ~ctx (expected : (float, p) Nx.dtype)\n    (x : (float, in_elt) Nx.t) : (float, p) Nx.t =\n  match Nx_core.Dtype.equal_witness expected (Nx.dtype x) with\n  | Some Type.Equal -> x\n  | None ->\n      invalid_argf \"%s: dtype mismatch (expected %s, got %s)\" ctx\n        (Nx_core.Dtype.to_string expected)\n        (Nx_core.Dtype.to_string (Nx.dtype x))\n\n(* Config *)\n\ntype config = {\n  vocab_size : int;\n  max_position_embeddings : int;\n  type_vocab_size : int;\n  hidden_size : int;\n  num_hidden_layers : int;\n  num_attention_heads : int;\n  intermediate_size : int;\n  hidden_dropout_prob : float;\n  attention_dropout_prob : float;\n  layer_norm_eps : float;\n}\n\nlet config ~vocab_size ~hidden_size ~num_hidden_layers ~num_attention_heads\n    ~intermediate_size ?(max_position_embeddings = 512) ?(type_vocab_size = 2)\n    ?(hidden_dropout_prob = 0.1) ?(attention_dropout_prob = 0.1)\n    ?(layer_norm_eps = 1e-12) () =\n  if hidden_size mod num_attention_heads <> 0 then\n    invalid_argf\n      \"Bert.config: hidden_size (%d) not divisible by num_attention_heads (%d)\"\n      hidden_size num_attention_heads;\n  if hidden_dropout_prob < 0.0 || hidden_dropout_prob >= 1.0 then\n    invalid_arg \"Bert.config: hidden_dropout_prob must satisfy 0 <= p < 1\";\n  if attention_dropout_prob < 0.0 || attention_dropout_prob >= 1.0 then\n    invalid_arg \"Bert.config: attention_dropout_prob must satisfy 0 <= p < 1\";\n  {\n    vocab_size;\n    max_position_embeddings;\n    type_vocab_size;\n    hidden_size;\n    num_hidden_layers;\n    num_attention_heads;\n    intermediate_size;\n    
hidden_dropout_prob;\n    attention_dropout_prob;\n    layer_norm_eps;\n  }\n\n(* Context keys *)\n\nlet token_type_ids_key = \"token_type_ids\"\n\n(* Helpers *)\n\nlet get_from_ctx_int32 ~name ~default ctx =\n  match ctx with\n  | Some c -> (\n      match Context.find c ~name with\n      | Some tensor -> Ptree.Tensor.to_typed_exn Nx.int32 tensor\n      | None -> default ())\n  | None -> default ()\n\nlet get_attention_mask_bool ctx ~batch ~seq =\n  match ctx with\n  | Some c -> (\n      match Context.find c ~name:Attention.attention_mask_key with\n      | Some tensor -> (\n          match Ptree.Tensor.to_typed Nx.bool tensor with\n          | Some m -> m\n          | None ->\n              let int_mask = Ptree.Tensor.to_typed_exn Nx.int32 tensor in\n              Nx.not_equal int_mask (Nx.zeros Nx.int32 (Nx.shape int_mask)))\n      | None -> Nx.broadcast_to [| batch; seq |] (Nx.scalar Nx.bool true))\n  | None -> Nx.broadcast_to [| batch; seq |] (Nx.scalar Nx.bool true)\n\nlet fields ~ctx t = Ptree.Dict.fields_exn ~ctx t\nlet get fs ~name dtype = Ptree.Dict.get_tensor_exn fs ~name dtype\nlet find ~ctx key fs = Ptree.Dict.find_exn ~ctx key fs\n\n(* Self-attention with biased projections *)\n\nlet self_attention (type l) ~(cfg : config) ~(dtype : (float, l) Nx.dtype)\n    ~training ~attention_mask ~params (x : (float, l) Nx.t) : (float, l) Nx.t =\n  let shape = Nx.shape x in\n  let batch = shape.(0) in\n  let seq = shape.(1) in\n  let h = cfg.hidden_size in\n  let heads = cfg.num_attention_heads in\n  let head_dim = h / heads in\n  let fs = fields ~ctx:\"Bert.attention\" params in\n\n  let proj name =\n    let w = get fs ~name:(name ^ \"_weight\") dtype in\n    let b = get fs ~name:(name ^ \"_bias\") dtype in\n    fun t -> Nx.add (Nx.matmul t w) b\n  in\n  let q = proj \"q\" x in\n  let k = proj \"k\" x in\n  let v = proj \"v\" x in\n\n  let split_heads t =\n    Nx.reshape [| batch; seq; heads; head_dim |] t\n    |> Nx.transpose ~axes:[ 0; 2; 1; 3 ]\n  in\n  let q = 
split_heads q in\n  let k = split_heads k in\n  let v = split_heads v in\n\n  (* Broadcast mask [batch; seq] -> [batch; 1; 1; seq] *)\n  let attention_mask = Nx.reshape [| batch; 1; 1; seq |] attention_mask in\n\n  let dropout_rate =\n    if training && cfg.attention_dropout_prob > 0.0 then\n      Some cfg.attention_dropout_prob\n    else None\n  in\n\n  let attn =\n    Kaun.Fn.dot_product_attention ~attention_mask ?dropout_rate q k v\n  in\n\n  (* Merge heads *)\n  let merged =\n    Nx.transpose attn ~axes:[ 0; 2; 1; 3 ]\n    |> Nx.contiguous\n    |> Nx.reshape [| batch; seq; h |]\n  in\n\n  (* Output projection *)\n  let o_w = get fs ~name:\"o_weight\" dtype in\n  let o_b = get fs ~name:\"o_bias\" dtype in\n  Nx.add (Nx.matmul merged o_w) o_b\n\n(* Encoder block *)\n\nlet encoder_block (type l) ~(cfg : config) ~(dtype : (float, l) Nx.dtype)\n    ~training ~attention_mask ~params (x : (float, l) Nx.t) : (float, l) Nx.t =\n  let fs = fields ~ctx:\"Bert.block\" params in\n\n  (* Self-attention *)\n  let attn_params = find ~ctx:\"Bert.block\" \"attention\" fs in\n  let attn =\n    self_attention ~cfg ~dtype ~training ~attention_mask ~params:attn_params x\n  in\n\n  (* Hidden dropout on attention output *)\n  let attn =\n    if training && cfg.hidden_dropout_prob > 0.0 then\n      Kaun.Fn.dropout ~rate:cfg.hidden_dropout_prob attn\n    else attn\n  in\n\n  (* Residual + LayerNorm (post-norm, original BERT) *)\n  let ln1_g = get fs ~name:\"attn_ln_gamma\" dtype in\n  let ln1_b = get fs ~name:\"attn_ln_beta\" dtype in\n  let x =\n    Kaun.Fn.layer_norm ~gamma:ln1_g ~beta:ln1_b ~epsilon:cfg.layer_norm_eps\n      (Nx.add x attn)\n  in\n\n  (* FFN: up -> GELU -> down *)\n  let ffn_up_w = get fs ~name:\"ffn_up_weight\" dtype in\n  let ffn_up_b = get fs ~name:\"ffn_up_bias\" dtype in\n  let ffn_down_w = get fs ~name:\"ffn_down_weight\" dtype in\n  let ffn_down_b = get fs ~name:\"ffn_down_bias\" dtype in\n\n  let y = Nx.add (Nx.matmul x ffn_up_w) ffn_up_b |> 
Kaun.Activation.gelu in\n  let y = Nx.add (Nx.matmul y ffn_down_w) ffn_down_b in\n\n  (* Hidden dropout on FFN output *)\n  let y =\n    if training && cfg.hidden_dropout_prob > 0.0 then\n      Kaun.Fn.dropout ~rate:cfg.hidden_dropout_prob y\n    else y\n  in\n\n  (* Residual + LayerNorm *)\n  let ln2_g = get fs ~name:\"ffn_ln_gamma\" dtype in\n  let ln2_b = get fs ~name:\"ffn_ln_beta\" dtype in\n  Kaun.Fn.layer_norm ~gamma:ln2_g ~beta:ln2_b ~epsilon:cfg.layer_norm_eps\n    (Nx.add x y)\n\n(* Forward: embeddings + encoder stack *)\n\nlet encode (type l in_elt) ~(cfg : config) ~params\n    ~(dtype : (float, l) Nx.dtype) ~training ?ctx\n    (input_ids : (int32, in_elt) Nx.t) : (float, l) Nx.t =\n  let input_ids = Nx.cast Nx.int32 input_ids in\n  let shape = Nx.shape input_ids in\n  let batch = shape.(0) in\n  let seq = shape.(1) in\n\n  if seq > cfg.max_position_embeddings then\n    invalid_argf \"Bert.encode: seq_len=%d exceeds max_position_embeddings=%d\"\n      seq cfg.max_position_embeddings;\n\n  (* Read auxiliary inputs from context *)\n  let token_type_ids =\n    get_from_ctx_int32 ~name:token_type_ids_key ctx ~default:(fun () ->\n        Nx.zeros Nx.int32 [| batch; seq |])\n  in\n  let attention_mask = get_attention_mask_bool ctx ~batch ~seq in\n\n  (* Params *)\n  let root = fields ~ctx:\"Bert.encode\" params in\n  let emb_t = find ~ctx:\"Bert.encode\" \"embeddings\" root in\n  let layers_t = find ~ctx:\"Bert.encode\" \"layers\" root in\n\n  let emb = fields ~ctx:\"Bert.embeddings\" emb_t in\n  let word_emb = get emb ~name:\"word\" dtype in\n  let pos_emb = get emb ~name:\"pos\" dtype in\n  let type_emb = get emb ~name:\"type\" dtype in\n  let ln_g = get emb ~name:\"ln_gamma\" dtype in\n  let ln_b = get emb ~name:\"ln_beta\" dtype in\n\n  (* Embedding lookup: word + position + token_type *)\n  let position_ids =\n    Nx.arange_f Nx.float32 0.0 (float_of_int seq) 1.0\n    |> Nx.cast Nx.int32\n    |> Nx.reshape [| 1; seq |]\n    |> Nx.broadcast_to [| batch; 
seq |]\n    |> Nx.contiguous\n  in\n  let token_type_ids = Nx.contiguous token_type_ids in\n  let tok = Kaun.Fn.embedding ~scale:false ~embedding:word_emb input_ids in\n  let pos = Kaun.Fn.embedding ~scale:false ~embedding:pos_emb position_ids in\n  let typ = Kaun.Fn.embedding ~scale:false ~embedding:type_emb token_type_ids in\n  let x = Nx.add tok (Nx.add pos typ) in\n  let x =\n    Kaun.Fn.layer_norm ~gamma:ln_g ~beta:ln_b ~epsilon:cfg.layer_norm_eps x\n  in\n\n  (* Embedding dropout *)\n  let x =\n    if training && cfg.hidden_dropout_prob > 0.0 then\n      Kaun.Fn.dropout ~rate:cfg.hidden_dropout_prob x\n    else x\n  in\n\n  (* Encoder stack *)\n  let blocks = Ptree.List.items_exn ~ctx:\"Bert.encode.layers\" layers_t in\n  let x =\n    List.fold_left\n      (fun h block_params ->\n        encoder_block ~cfg ~dtype ~training ~attention_mask ~params:block_params\n          h)\n      x blocks\n  in\n  x\n\n(* Parameter initialization *)\n\nlet init_block_params ~dtype ~hidden ~intermediate =\n  let w = Init.normal ~stddev:0.02 () in\n  let zeros n = Nx.zeros dtype [| n |] in\n  let ones n = Nx.ones dtype [| n |] in\n  let attn_params =\n    Ptree.dict\n      [\n        (\"q_weight\", Ptree.tensor (w.f [| hidden; hidden |] dtype));\n        (\"q_bias\", Ptree.tensor (zeros hidden));\n        (\"k_weight\", Ptree.tensor (w.f [| hidden; hidden |] dtype));\n        (\"k_bias\", Ptree.tensor (zeros hidden));\n        (\"v_weight\", Ptree.tensor (w.f [| hidden; hidden |] dtype));\n        (\"v_bias\", Ptree.tensor (zeros hidden));\n        (\"o_weight\", Ptree.tensor (w.f [| hidden; hidden |] dtype));\n        (\"o_bias\", Ptree.tensor (zeros hidden));\n      ]\n  in\n  Ptree.dict\n    [\n      (\"attention\", attn_params);\n      (\"attn_ln_gamma\", Ptree.tensor (ones hidden));\n      (\"attn_ln_beta\", Ptree.tensor (zeros hidden));\n      (\"ffn_up_weight\", Ptree.tensor (w.f [| hidden; intermediate |] dtype));\n      (\"ffn_up_bias\", Ptree.tensor (zeros 
intermediate));\n      (\"ffn_down_weight\", Ptree.tensor (w.f [| intermediate; hidden |] dtype));\n      (\"ffn_down_bias\", Ptree.tensor (zeros hidden));\n      (\"ffn_ln_gamma\", Ptree.tensor (ones hidden));\n      (\"ffn_ln_beta\", Ptree.tensor (zeros hidden));\n    ]\n\nlet init_encoder_params ~cfg ~dtype =\n  let h = cfg.hidden_size in\n  let w = Init.normal ~stddev:0.02 () in\n  let word = w.f [| cfg.vocab_size; h |] dtype in\n  let pos = w.f [| cfg.max_position_embeddings; h |] dtype in\n  let typ = w.f [| cfg.type_vocab_size; h |] dtype in\n  let blocks =\n    List.init cfg.num_hidden_layers (fun _ ->\n        init_block_params ~dtype ~hidden:h ~intermediate:cfg.intermediate_size)\n  in\n  Ptree.dict\n    [\n      ( \"embeddings\",\n        Ptree.dict\n          [\n            (\"word\", Ptree.tensor word);\n            (\"pos\", Ptree.tensor pos);\n            (\"type\", Ptree.tensor typ);\n            (\"ln_gamma\", Ptree.tensor (Nx.ones dtype [| h |]));\n            (\"ln_beta\", Ptree.tensor (Nx.zeros dtype [| h |]));\n          ] );\n      (\"layers\", Ptree.list blocks);\n    ]\n\n(* Layers *)\n\nlet encoder (cfg : config) () : (int32, float) Layer.t =\n  {\n    Layer.init =\n      (fun ~dtype ->\n        Layer.make_vars\n          ~params:(init_encoder_params ~cfg ~dtype)\n          ~state:Ptree.empty ~dtype);\n    apply =\n      (fun ~params ~state ~dtype ~training ?ctx x ->\n        ignore state;\n        let y = encode ~cfg ~params ~dtype ~training ?ctx x in\n        (y, Ptree.empty));\n  }\n\nlet pooler (cfg : config) () : (float, float) Layer.t =\n  let w_init = Init.normal ~stddev:0.02 () in\n  {\n    Layer.init =\n      (fun ~dtype ->\n        let w = w_init.f [| cfg.hidden_size; cfg.hidden_size |] dtype in\n        let b = Nx.zeros dtype [| cfg.hidden_size |] in\n        Layer.make_vars\n          ~params:\n            (Ptree.dict\n               [ (\"weight\", Ptree.tensor w); (\"bias\", Ptree.tensor b) ])\n          ~state:Ptree.empty 
~dtype);\n    apply =\n      (fun ~params ~state ~dtype ~training ?ctx x ->\n        ignore (training, ctx, state);\n        let x = require_float_dtype ~ctx:\"Bert.pooler\" dtype x in\n        let fs = fields ~ctx:\"Bert.pooler\" params in\n        let w = get fs ~name:\"weight\" dtype in\n        let b = get fs ~name:\"bias\" dtype in\n        let batch = (Nx.shape x).(0) in\n        let cls =\n          Nx.slice [ A; R (0, 1) ] x |> Nx.reshape [| batch; cfg.hidden_size |]\n        in\n        (Nx.add (Nx.matmul cls w) b |> Nx.tanh, Ptree.empty));\n  }\n\nlet for_sequence_classification (cfg : config) ~num_labels () :\n    (int32, float) Layer.t =\n  let w_init = Init.normal ~stddev:0.02 () in\n  {\n    Layer.init =\n      (fun ~dtype ->\n        let enc = init_encoder_params ~cfg ~dtype in\n        let pool_w = w_init.f [| cfg.hidden_size; cfg.hidden_size |] dtype in\n        let cls_w = w_init.f [| cfg.hidden_size; num_labels |] dtype in\n        Layer.make_vars\n          ~params:\n            (Ptree.dict\n               [\n                 (\"encoder\", enc);\n                 ( \"pooler\",\n                   Ptree.dict\n                     [\n                       (\"weight\", Ptree.tensor pool_w);\n                       ( \"bias\",\n                         Ptree.tensor (Nx.zeros dtype [| cfg.hidden_size |]) );\n                     ] );\n                 ( \"classifier\",\n                   Ptree.dict\n                     [\n                       (\"weight\", Ptree.tensor cls_w);\n                       (\"bias\", Ptree.tensor (Nx.zeros dtype [| num_labels |]));\n                     ] );\n               ])\n          ~state:Ptree.empty ~dtype);\n    apply =\n      (fun ~params ~state ~dtype ~training ?ctx x ->\n        ignore state;\n        let root = fields ~ctx:\"Bert.seq_cls\" params in\n        let enc_params = find ~ctx:\"Bert.seq_cls\" \"encoder\" root in\n        let pool_params = find ~ctx:\"Bert.seq_cls\" \"pooler\" root in\n        let 
cls_params = find ~ctx:\"Bert.seq_cls\" \"classifier\" root in\n\n        let hidden = encode ~cfg ~params:enc_params ~dtype ~training ?ctx x in\n\n        (* Pooler: CLS token -> dense -> tanh *)\n        let pool_fs = fields ~ctx:\"Bert.seq_cls.pooler\" pool_params in\n        let pool_w = get pool_fs ~name:\"weight\" dtype in\n        let pool_b = get pool_fs ~name:\"bias\" dtype in\n        let batch = (Nx.shape hidden).(0) in\n        let cls =\n          Nx.slice [ A; R (0, 1) ] hidden\n          |> Nx.reshape [| batch; cfg.hidden_size |]\n        in\n        let pooled = Nx.add (Nx.matmul cls pool_w) pool_b |> Nx.tanh in\n\n        (* Dropout on pooled output during fine-tuning *)\n        let pooled =\n          if training && cfg.hidden_dropout_prob > 0.0 then\n            Kaun.Fn.dropout ~rate:cfg.hidden_dropout_prob pooled\n          else pooled\n        in\n\n        (* Classifier *)\n        let cls_fs = fields ~ctx:\"Bert.seq_cls.classifier\" cls_params in\n        let cls_w = get cls_fs ~name:\"weight\" dtype in\n        let cls_b = get cls_fs ~name:\"bias\" dtype in\n        (Nx.add (Nx.matmul pooled cls_w) cls_b, Ptree.empty));\n  }\n\nlet for_masked_lm (cfg : config) () : (int32, float) Layer.t =\n  let w_init = Init.normal ~stddev:0.02 () in\n  {\n    Layer.init =\n      (fun ~dtype ->\n        let enc = init_encoder_params ~cfg ~dtype in\n        let dense_w = w_init.f [| cfg.hidden_size; cfg.hidden_size |] dtype in\n        Layer.make_vars\n          ~params:\n            (Ptree.dict\n               [\n                 (\"encoder\", enc);\n                 ( \"mlm\",\n                   Ptree.dict\n                     [\n                       (\"dense_weight\", Ptree.tensor dense_w);\n                       ( \"dense_bias\",\n                         Ptree.tensor (Nx.zeros dtype [| cfg.hidden_size |]) );\n                       ( \"ln_gamma\",\n                         Ptree.tensor (Nx.ones dtype [| cfg.hidden_size |]) );\n                    
   ( \"ln_beta\",\n                         Ptree.tensor (Nx.zeros dtype [| cfg.hidden_size |]) );\n                       ( \"decoder_bias\",\n                         Ptree.tensor (Nx.zeros dtype [| cfg.vocab_size |]) );\n                     ] );\n               ])\n          ~state:Ptree.empty ~dtype);\n    apply =\n      (fun ~params ~state ~dtype ~training ?ctx x ->\n        ignore state;\n        let root = fields ~ctx:\"Bert.mlm\" params in\n        let enc_params = find ~ctx:\"Bert.mlm\" \"encoder\" root in\n        let mlm_params = find ~ctx:\"Bert.mlm\" \"mlm\" root in\n\n        let hidden = encode ~cfg ~params:enc_params ~dtype ~training ?ctx x in\n\n        (* MLM transform: dense -> GELU -> LN *)\n        let mlm_fs = fields ~ctx:\"Bert.mlm.head\" mlm_params in\n        let dw = get mlm_fs ~name:\"dense_weight\" dtype in\n        let db = get mlm_fs ~name:\"dense_bias\" dtype in\n        let ln_g = get mlm_fs ~name:\"ln_gamma\" dtype in\n        let ln_b = get mlm_fs ~name:\"ln_beta\" dtype in\n        let dec_b = get mlm_fs ~name:\"decoder_bias\" dtype in\n\n        let h = Nx.add (Nx.matmul hidden dw) db |> Kaun.Activation.gelu in\n        let h =\n          Kaun.Fn.layer_norm ~gamma:ln_g ~beta:ln_b ~epsilon:cfg.layer_norm_eps\n            h\n        in\n\n        (* Tied decoder: logits = h @ word_emb^T + bias *)\n        let enc_root = fields ~ctx:\"Bert.mlm.encoder\" enc_params in\n        let emb_t = find ~ctx:\"Bert.mlm.encoder\" \"embeddings\" enc_root in\n        let emb_fs = fields ~ctx:\"Bert.mlm.embeddings\" emb_t in\n        let word_emb = get emb_fs ~name:\"word\" dtype in\n        let logits =\n          Nx.add (Nx.matmul h (Nx.transpose word_emb ~axes:[ 1; 0 ])) dec_b\n        in\n        (logits, Ptree.empty));\n  }\n\n(* JSON config parsing *)\n\nlet json_mem name = function\n  | Jsont.Object (mems, _) -> (\n      match Jsont.Json.find_mem name mems with\n      | Some (_, v) -> v\n      | None -> Jsont.Null ((), Jsont.Meta.none))\n  
| _ -> Jsont.Null ((), Jsont.Meta.none)\n\nlet json_to_int = function\n  | Jsont.Number (f, _) -> int_of_float f\n  | _ -> failwith \"expected int\"\n\nlet json_to_int_option = function\n  | Jsont.Number (f, _) -> Some (int_of_float f)\n  | _ -> None\n\nlet json_to_float_option = function Jsont.Number (f, _) -> Some f | _ -> None\n\nlet parse_config json =\n  config\n    ~vocab_size:(json |> json_mem \"vocab_size\" |> json_to_int)\n    ~hidden_size:(json |> json_mem \"hidden_size\" |> json_to_int)\n    ~num_hidden_layers:(json |> json_mem \"num_hidden_layers\" |> json_to_int)\n    ~num_attention_heads:(json |> json_mem \"num_attention_heads\" |> json_to_int)\n    ~intermediate_size:(json |> json_mem \"intermediate_size\" |> json_to_int)\n    ?max_position_embeddings:\n      (json |> json_mem \"max_position_embeddings\" |> json_to_int_option)\n    ?type_vocab_size:(json |> json_mem \"type_vocab_size\" |> json_to_int_option)\n    ?hidden_dropout_prob:\n      (json |> json_mem \"hidden_dropout_prob\" |> json_to_float_option)\n    ?attention_dropout_prob:\n      (json |> json_mem \"attention_probs_dropout_prob\" |> json_to_float_option)\n    ?layer_norm_eps:(json |> json_mem \"layer_norm_eps\" |> json_to_float_option)\n    ()\n\n(* HuggingFace weight mapping *)\n\nlet transpose_weight (Ptree.P t) = Ptree.P (Nx.transpose t ~axes:[ 1; 0 ])\nlet cast_tensor dtype (Ptree.P t) = Ptree.P (Nx.cast dtype t)\n\nlet map_hf_weights ~cfg ~dtype hf_weights =\n  let tbl = Hashtbl.create (List.length hf_weights) in\n  List.iter (fun (name, tensor) -> Hashtbl.add tbl name tensor) hf_weights;\n  let hf name =\n    match Hashtbl.find_opt tbl name with\n    | Some t -> cast_tensor dtype t\n    | None -> invalid_argf \"from_pretrained: missing HF weight %S\" name\n  in\n  (* Some checkpoints use LayerNorm.weight/bias, others use\n     LayerNorm.gamma/beta. Try both. 
*)\n  let hf_ln_weight prefix =\n    let w = prefix ^ \".weight\" in\n    let g = prefix ^ \".gamma\" in\n    if Hashtbl.mem tbl w then hf w else hf g\n  in\n  let hf_ln_bias prefix =\n    let b = prefix ^ \".bias\" in\n    let beta = prefix ^ \".beta\" in\n    if Hashtbl.mem tbl b then hf b else hf beta\n  in\n  let hf_t name = Ptree.Tensor (transpose_weight (hf name)) in\n  let hf_b name = Ptree.Tensor (hf name) in\n  let ln_w prefix = Ptree.Tensor (hf_ln_weight prefix) in\n  let ln_b prefix = Ptree.Tensor (hf_ln_bias prefix) in\n  let layer i =\n    let p s = Printf.sprintf \"bert.encoder.layer.%d.%s\" i s in\n    let attn_ln = p \"attention.output.LayerNorm\" in\n    let ffn_ln = p \"output.LayerNorm\" in\n    Ptree.dict\n      [\n        ( \"attention\",\n          Ptree.dict\n            [\n              (\"q_weight\", hf_t (p \"attention.self.query.weight\"));\n              (\"q_bias\", hf_b (p \"attention.self.query.bias\"));\n              (\"k_weight\", hf_t (p \"attention.self.key.weight\"));\n              (\"k_bias\", hf_b (p \"attention.self.key.bias\"));\n              (\"v_weight\", hf_t (p \"attention.self.value.weight\"));\n              (\"v_bias\", hf_b (p \"attention.self.value.bias\"));\n              (\"o_weight\", hf_t (p \"attention.output.dense.weight\"));\n              (\"o_bias\", hf_b (p \"attention.output.dense.bias\"));\n            ] );\n        (\"attn_ln_gamma\", ln_w attn_ln);\n        (\"attn_ln_beta\", ln_b attn_ln);\n        (\"ffn_up_weight\", hf_t (p \"intermediate.dense.weight\"));\n        (\"ffn_up_bias\", hf_b (p \"intermediate.dense.bias\"));\n        (\"ffn_down_weight\", hf_t (p \"output.dense.weight\"));\n        (\"ffn_down_bias\", hf_b (p \"output.dense.bias\"));\n        (\"ffn_ln_gamma\", ln_w ffn_ln);\n        (\"ffn_ln_beta\", ln_b ffn_ln);\n      ]\n  in\n  let emb_ln = \"bert.embeddings.LayerNorm\" in\n  let encoder_params =\n    Ptree.dict\n      [\n        ( \"embeddings\",\n          Ptree.dict\n          
  [\n              (\"word\", hf_b \"bert.embeddings.word_embeddings.weight\");\n              (\"pos\", hf_b \"bert.embeddings.position_embeddings.weight\");\n              (\"type\", hf_b \"bert.embeddings.token_type_embeddings.weight\");\n              (\"ln_gamma\", ln_w emb_ln);\n              (\"ln_beta\", ln_b emb_ln);\n            ] );\n        (\"layers\", Ptree.list (List.init cfg.num_hidden_layers layer));\n      ]\n  in\n  let pooler_params =\n    let has_pooler = Hashtbl.mem tbl \"bert.pooler.dense.weight\" in\n    if has_pooler then\n      Some\n        (Ptree.dict\n           [\n             (\"weight\", hf_t \"bert.pooler.dense.weight\");\n             (\"bias\", hf_b \"bert.pooler.dense.bias\");\n           ])\n    else None\n  in\n  let mlm_params =\n    let has_mlm = Hashtbl.mem tbl \"cls.predictions.transform.dense.weight\" in\n    if has_mlm then\n      let mlm_ln = \"cls.predictions.transform.LayerNorm\" in\n      Some\n        (Ptree.dict\n           [\n             (\"dense_weight\", hf_t \"cls.predictions.transform.dense.weight\");\n             (\"dense_bias\", hf_b \"cls.predictions.transform.dense.bias\");\n             (\"ln_gamma\", ln_w mlm_ln);\n             (\"ln_beta\", ln_b mlm_ln);\n             (\"decoder_bias\", hf_b \"cls.predictions.bias\");\n           ])\n    else None\n  in\n  (encoder_params, pooler_params, mlm_params)\n\n(* Pretrained loading *)\n\nlet from_pretrained ?(model_id = \"bert-base-uncased\") () =\n  let json = Kaun_hf.load_config ~model_id () in\n  let cfg = parse_config json in\n  let hf_weights = Kaun_hf.load_weights ~model_id () in\n  let encoder_params, pooler_params, mlm_params =\n    map_hf_weights ~cfg ~dtype:Nx.float32 hf_weights\n  in\n  (cfg, encoder_params, pooler_params, mlm_params)\n"
  },
  {
    "path": "packages/kaun/examples/03-bert/bert.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** BERT encoder and task heads.\n\n    BERT inputs are passed as int32 [input_ids] with auxiliary data in\n    {!Context}:\n\n    {[\n    let ctx =\n      Context.empty\n      |> Context.set ~name:Bert.token_type_ids_key (Ptree.P token_type_ids)\n      |> Context.set ~name:Attention.attention_mask_key (Ptree.P attention_mask)\n    in\n    Layer.apply model vars ~training:false ~ctx input_ids\n    ]}\n\n    When absent, [token_type_ids] defaults to zeros and [attention_mask]\n    defaults to ones (no padding). *)\n\nopen Kaun\n\n(** {1:config Configuration} *)\n\ntype config = {\n  vocab_size : int;\n  max_position_embeddings : int;\n  type_vocab_size : int;\n  hidden_size : int;\n  num_hidden_layers : int;\n  num_attention_heads : int;\n  intermediate_size : int;\n  hidden_dropout_prob : float;\n  attention_dropout_prob : float;\n  layer_norm_eps : float;\n}\n(** The type for BERT configurations. *)\n\nval config :\n  vocab_size:int ->\n  hidden_size:int ->\n  num_hidden_layers:int ->\n  num_attention_heads:int ->\n  intermediate_size:int ->\n  ?max_position_embeddings:int ->\n  ?type_vocab_size:int ->\n  ?hidden_dropout_prob:float ->\n  ?attention_dropout_prob:float ->\n  ?layer_norm_eps:float ->\n  unit ->\n  config\n(** [config ~vocab_size ~hidden_size ~num_hidden_layers ~num_attention_heads\n     ~intermediate_size ()] is a BERT configuration.\n\n    [max_position_embeddings] defaults to [512]. [type_vocab_size] defaults to\n    [2]. [hidden_dropout_prob] and [attention_dropout_prob] default to [0.1].\n    [layer_norm_eps] defaults to [1e-12].\n\n    Raises [Invalid_argument] if [hidden_size] is not divisible by\n    [num_attention_heads] or if dropout rates are outside [\\[0, 1)]. 
*)\n\n(** {1:context Context keys} *)\n\nval token_type_ids_key : string\n(** [\"token_type_ids\"]. The {!Context} key for segment ids (shape\n    [[batch; seq]], int32, values 0 or 1). *)\n\n(** {1:layers Layers} *)\n\nval encoder : config -> unit -> (int32, float) Layer.t\n(** [encoder cfg ()] is the base BERT encoder.\n\n    Input: int32 [input_ids] of shape [[batch; seq]]. Output: float hidden\n    states of shape [[batch; seq; hidden_size]].\n\n    Reads {!token_type_ids_key} and {!Attention.attention_mask_key} from [ctx].\n*)\n\nval pooler : config -> unit -> (float, float) Layer.t\n(** [pooler cfg ()] maps [[batch; seq; hidden_size]] to [[batch; hidden_size]]\n    by extracting the CLS token (position 0) and applying a dense + tanh. *)\n\nval for_sequence_classification :\n  config -> num_labels:int -> unit -> (int32, float) Layer.t\n(** [for_sequence_classification cfg ~num_labels ()] is encoder + pooler +\n    classifier. Output: logits [[batch; num_labels]]. *)\n\nval for_masked_lm : config -> unit -> (int32, float) Layer.t\n(** [for_masked_lm cfg ()] is encoder + MLM head with tied word embeddings.\n    Output: logits [[batch; seq; vocab_size]]. *)\n\n(** {1:pretrained Pretrained loading} *)\n\nval from_pretrained :\n  ?model_id:string -> unit -> config * Ptree.t * Ptree.t option * Ptree.t option\n(** [from_pretrained ?model_id ()] downloads [model_id] from HuggingFace and\n    returns [(cfg, encoder_params, pooler_params, mlm_params)].\n\n    [encoder_params] is ready for {!encoder}. [pooler_params] and [mlm_params]\n    are [Some _] when the checkpoint contains the corresponding weights.\n\n    [model_id] defaults to [\"bert-base-uncased\"]. *)\n"
  },
  {
    "path": "packages/kaun/examples/03-bert/dune",
    "content": "(executable\n (name main)\n (libraries nx nx.core rune vega kaun kaun.hf jsont))\n"
  },
  {
    "path": "packages/kaun/examples/03-bert/main.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Fine-tune pretrained BERT for binary sentiment classification.\n\n   Downloads bert-base-uncased from HuggingFace (~440MB on first run), assembles\n   a sequence-classification head, and trains on a tiny synthetic dataset to\n   show the full pipeline. *)\n\nopen Kaun\n\nlet print_shape name t =\n  let shape = Nx.shape t in\n  Printf.printf \"%s: [%s]\\n\" name\n    (String.concat \"; \" (Array.to_list (Array.map string_of_int shape)))\n\nlet () =\n  Nx.Rng.run ~seed:42 @@ fun () ->\n  let dtype = Nx.float32 in\n  let num_labels = 2 in\n\n  (* Load pretrained encoder + pooler from HuggingFace *)\n  Printf.printf \"Loading bert-base-uncased...\\n%!\";\n  let cfg, encoder_params, pooler_params, _mlm_params =\n    Bert.from_pretrained ()\n  in\n  Printf.printf \"  hidden=%d  layers=%d  heads=%d  vocab=%d\\n\\n\" cfg.hidden_size\n    cfg.num_hidden_layers cfg.num_attention_heads cfg.vocab_size;\n\n  (* Assemble classification model: pretrained encoder + pooler, fresh\n     classifier head *)\n  let w_init = Init.normal ~stddev:0.02 () in\n  let params =\n    Ptree.dict\n      [\n        (\"encoder\", encoder_params);\n        ( \"pooler\",\n          match pooler_params with\n          | Some p -> p\n          | None ->\n              Ptree.dict\n                [\n                  ( \"weight\",\n                    Ptree.tensor\n                      (w_init.f [| cfg.hidden_size; cfg.hidden_size |] dtype) );\n                  (\"bias\", Ptree.tensor (Nx.zeros dtype [| cfg.hidden_size |]));\n                ] );\n        ( \"classifier\",\n          Ptree.dict\n            [\n              ( \"weight\",\n                Ptree.tensor (w_init.f [| cfg.hidden_size; num_labels |] dtype)\n              
);\n              (\"bias\", Ptree.tensor (Nx.zeros dtype [| num_labels |]));\n            ] );\n      ]\n  in\n\n  let model = Bert.for_sequence_classification cfg ~num_labels () in\n  let vars = Layer.make_vars ~params ~state:Ptree.empty ~dtype in\n\n  (* Tiny synthetic dataset (token ids from bert-base-uncased tokenizer) *)\n  let input_ids =\n    Nx.create Nx.int32 [| 4; 6 |]\n      [|\n        101l;\n        1045l;\n        2293l;\n        2023l;\n        102l;\n        0l;\n        (* \"I love this\" -> 1 *)\n        101l;\n        2307l;\n        3185l;\n        102l;\n        0l;\n        0l;\n        (* \"great movie\" -> 1 *)\n        101l;\n        1045l;\n        5223l;\n        2023l;\n        102l;\n        0l;\n        (* \"I hate this\" -> 0 *)\n        101l;\n        6659l;\n        2143l;\n        102l;\n        0l;\n        0l;\n        (* \"terrible film\" -> 0 *)\n      |]\n  in\n  let labels = Nx.create Nx.int32 [| 4 |] [| 1l; 1l; 0l; 0l |] in\n  let attention_mask =\n    Nx.create Nx.int32 [| 4; 6 |]\n      [|\n        1l;\n        1l;\n        1l;\n        1l;\n        1l;\n        0l;\n        1l;\n        1l;\n        1l;\n        1l;\n        0l;\n        0l;\n        1l;\n        1l;\n        1l;\n        1l;\n        1l;\n        0l;\n        1l;\n        1l;\n        1l;\n        1l;\n        0l;\n        0l;\n      |]\n  in\n  let ctx =\n    Context.empty\n    |> Context.set ~name:Attention.attention_mask_key (Ptree.P attention_mask)\n  in\n\n  (* --- Inference before training --- *)\n  Printf.printf \"=== Before training ===\\n\";\n  let logits_before =\n    let y, _ = Layer.apply model vars ~training:false ~ctx input_ids in\n    y\n  in\n  print_shape \"logits\" logits_before;\n\n  (* --- Fine-tune --- *)\n  Printf.printf \"\\n=== Training ===\\n%!\";\n  let trainer =\n    Train.make ~model\n      ~optimizer:(Vega.adamw ~weight_decay:0.01 (Vega.Schedule.constant 2e-5))\n  in\n  let st = Train.make_state trainer vars in\n  let st =\n 
   Train.fit trainer st ~ctx\n      ~report:(fun ~step ~loss _st ->\n        Printf.printf \"  step %2d  loss %.4f\\n%!\" step loss)\n      (Data.repeat 10\n         (input_ids, fun logits -> Loss.cross_entropy_sparse logits labels))\n  in\n\n  (* --- Predictions after training --- *)\n  Printf.printf \"\\n=== After training ===\\n\";\n  let logits = Train.predict trainer st ~ctx input_ids in\n  let sentences =\n    [| \"I love this\"; \"great movie\"; \"I hate this\"; \"terrible film\" |]\n  in\n  for i = 0 to 3 do\n    let row = Nx.slice [ I i ] logits in\n    let v0 = Nx.item [ 0 ] row in\n    let v1 = Nx.item [ 1 ] row in\n    let pred = if v1 > v0 then \"positive\" else \"negative\" in\n    let label = Int32.to_int (Nx.item [ i ] labels) in\n    let expected = if label = 1 then \"positive\" else \"negative\" in\n    Printf.printf \"  %-20s  pred=%-8s  expected=%-8s  %s\\n\"\n      (Printf.sprintf \"\\\"%s\\\"\" sentences.(i))\n      pred expected\n      (if String.equal pred expected then \"OK\" else \"WRONG\")\n  done\n"
  },
  {
    "path": "packages/kaun/examples/03-bert/reference_hf_output.py",
    "content": "#!/usr/bin/env python3\n\"\"\"Generate reference BERT outputs from HuggingFace transformers.\"\"\"\n\nfrom transformers import BertModel, BertTokenizer\nimport torch\n\n\ndef main():\n    print(\"Generating HuggingFace BERT reference outputs\")\n    print(\"=\" * 50)\n\n    model = BertModel.from_pretrained(\"bert-base-uncased\")\n    tokenizer = BertTokenizer.from_pretrained(\"bert-base-uncased\")\n\n    text = \"Hello world\"\n    print(f'\\nInput text: \"{text}\"')\n\n    inputs = tokenizer(text, return_tensors=\"pt\")\n    print(f\"Token IDs: {inputs['input_ids'][0].tolist()}\")\n\n    with torch.no_grad():\n        outputs = model(**inputs)\n\n    last_hidden_state = outputs.last_hidden_state\n    pooler_output = outputs.pooler_output\n\n    print(f\"Output shape: {list(last_hidden_state.shape)}\")\n\n    cls_token = last_hidden_state[0, 0]\n    print(\"\\nCLS token (first 5 values):\")\n    for i in range(5):\n        print(f\"  [{i}]: {cls_token[i].item():.6f}\")\n\n    print(\"\\nPooler output (first 5 values):\")\n    for i in range(5):\n        print(f\"  [{i}]: {pooler_output[0, i].item():.6f}\")\n\n    print(\"\\n\" + \"=\" * 50)\n    print(\"OCaml expected values:\")\n    cls_vals = \"; \".join(f\"{cls_token[i].item():.6f}\" for i in range(5))\n    pool_vals = \"; \".join(f\"{pooler_output[0, i].item():.6f}\" for i in range(5))\n    print(f\"expected_cls = [| {cls_vals} |]\")\n    print(f\"expected_pooler = [| {pool_vals} |]\")\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "packages/kaun/examples/04-gpt2/dune",
    "content": "(executable\n (name main)\n (libraries nx nx.core rune kaun kaun.hf brot jsont))\n"
  },
  {
    "path": "packages/kaun/examples/04-gpt2/gpt2.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Kaun\n\nlet invalid_argf fmt = Printf.ksprintf invalid_arg fmt\n\n(* Config *)\n\ntype config = {\n  vocab_size : int;\n  n_positions : int;\n  n_embd : int;\n  n_layer : int;\n  n_head : int;\n  n_inner : int;\n  resid_pdrop : float;\n  embd_pdrop : float;\n  attn_pdrop : float;\n  layer_norm_eps : float;\n}\n\nlet config ~vocab_size ~n_embd ~n_layer ~n_head ?(n_positions = 1024)\n    ?(n_inner = 4 * n_embd) ?(resid_pdrop = 0.1) ?(embd_pdrop = 0.1)\n    ?(attn_pdrop = 0.1) ?(layer_norm_eps = 1e-5) () =\n  if n_embd mod n_head <> 0 then\n    invalid_argf \"Gpt2.config: n_embd (%d) not divisible by n_head (%d)\" n_embd\n      n_head;\n  {\n    vocab_size;\n    n_positions;\n    n_embd;\n    n_layer;\n    n_head;\n    n_inner;\n    resid_pdrop;\n    embd_pdrop;\n    attn_pdrop;\n    layer_norm_eps;\n  }\n\n(* Helpers *)\n\nlet fields ~ctx t = Ptree.Dict.fields_exn ~ctx t\nlet get fs ~name dtype = Ptree.Dict.get_tensor_exn fs ~name dtype\nlet find ~ctx key fs = Ptree.Dict.find_exn ~ctx key fs\n\n(* Causal self-attention with combined QKV *)\n\nlet causal_self_attention (type l) ~(cfg : config)\n    ~(dtype : (float, l) Nx.dtype) ~training ~params (x : (float, l) Nx.t) :\n    (float, l) Nx.t =\n  let shape = Nx.shape x in\n  let batch = shape.(0) in\n  let seq = shape.(1) in\n  let h = cfg.n_embd in\n  let heads = cfg.n_head in\n  let head_dim = h / heads in\n  let fs = fields ~ctx:\"Gpt2.attention\" params in\n\n  (* Combined QKV projection: [batch, seq, 3*h] *)\n  let qkv_w = get fs ~name:\"qkv_weight\" dtype in\n  let qkv_b = get fs ~name:\"qkv_bias\" dtype in\n  let qkv = Nx.add (Nx.matmul x qkv_w) qkv_b in\n\n  (* Split into Q, K, V *)\n  let qkv_parts = Nx.split ~axis:(-1) 3 qkv in\n  let q 
= List.nth qkv_parts 0 in\n  let k = List.nth qkv_parts 1 in\n  let v = List.nth qkv_parts 2 in\n\n  let split_heads t =\n    Nx.reshape [| batch; seq; heads; head_dim |] t\n    |> Nx.transpose ~axes:[ 0; 2; 1; 3 ]\n  in\n  let q = split_heads q in\n  let k = split_heads k in\n  let v = split_heads v in\n\n  let dropout_rate =\n    if training && cfg.attn_pdrop > 0.0 then Some cfg.attn_pdrop else None\n  in\n\n  let attn =\n    Kaun.Fn.dot_product_attention ~is_causal:true ?dropout_rate q k v\n  in\n\n  (* Merge heads *)\n  let merged =\n    Nx.transpose attn ~axes:[ 0; 2; 1; 3 ]\n    |> Nx.contiguous\n    |> Nx.reshape [| batch; seq; h |]\n  in\n\n  (* Output projection *)\n  let o_w = get fs ~name:\"o_weight\" dtype in\n  let o_b = get fs ~name:\"o_bias\" dtype in\n  Nx.add (Nx.matmul merged o_w) o_b\n\n(* Transformer block (pre-norm) *)\n\nlet transformer_block (type l) ~(cfg : config) ~(dtype : (float, l) Nx.dtype)\n    ~training ~params (x : (float, l) Nx.t) : (float, l) Nx.t =\n  let fs = fields ~ctx:\"Gpt2.block\" params in\n\n  (* Pre-norm attention *)\n  let ln1_g = get fs ~name:\"ln1_gamma\" dtype in\n  let ln1_b = get fs ~name:\"ln1_beta\" dtype in\n  let x' =\n    Kaun.Fn.layer_norm ~gamma:ln1_g ~beta:ln1_b ~epsilon:cfg.layer_norm_eps x\n  in\n\n  let attn_params = find ~ctx:\"Gpt2.block\" \"attention\" fs in\n  let attn =\n    causal_self_attention ~cfg ~dtype ~training ~params:attn_params x'\n  in\n\n  (* Residual dropout *)\n  let attn =\n    if training && cfg.resid_pdrop > 0.0 then\n      Kaun.Fn.dropout ~rate:cfg.resid_pdrop attn\n    else attn\n  in\n  let x = Nx.add x attn in\n\n  (* Pre-norm FFN *)\n  let ln2_g = get fs ~name:\"ln2_gamma\" dtype in\n  let ln2_b = get fs ~name:\"ln2_beta\" dtype in\n  let x' =\n    Kaun.Fn.layer_norm ~gamma:ln2_g ~beta:ln2_b ~epsilon:cfg.layer_norm_eps x\n  in\n\n  let ffn_up_w = get fs ~name:\"ffn_up_weight\" dtype in\n  let ffn_up_b = get fs ~name:\"ffn_up_bias\" dtype in\n  let ffn_down_w = get fs 
~name:\"ffn_down_weight\" dtype in\n  let ffn_down_b = get fs ~name:\"ffn_down_bias\" dtype in\n\n  let y =\n    Nx.add (Nx.matmul x' ffn_up_w) ffn_up_b |> Kaun.Activation.gelu_approx\n  in\n  let y = Nx.add (Nx.matmul y ffn_down_w) ffn_down_b in\n\n  (* Residual dropout *)\n  let y =\n    if training && cfg.resid_pdrop > 0.0 then\n      Kaun.Fn.dropout ~rate:cfg.resid_pdrop y\n    else y\n  in\n  Nx.add x y\n\n(* Forward: embeddings + transformer stack + final layer norm *)\n\nlet decode (type l in_elt) ~(cfg : config) ~params\n    ~(dtype : (float, l) Nx.dtype) ~training (input_ids : (int32, in_elt) Nx.t)\n    : (float, l) Nx.t =\n  let input_ids = Nx.cast Nx.int32 input_ids in\n  let shape = Nx.shape input_ids in\n  let batch = shape.(0) in\n  let seq = shape.(1) in\n\n  if seq > cfg.n_positions then\n    invalid_argf \"Gpt2.decode: seq_len=%d exceeds n_positions=%d\" seq\n      cfg.n_positions;\n\n  (* Params *)\n  let root = fields ~ctx:\"Gpt2.decode\" params in\n\n  let wte = get root ~name:\"wte\" dtype in\n  let wpe = get root ~name:\"wpe\" dtype in\n  let layers_t = find ~ctx:\"Gpt2.decode\" \"layers\" root in\n\n  (* Embedding lookup: token + position *)\n  let position_ids =\n    Nx.arange_f Nx.float32 0.0 (float_of_int seq) 1.0\n    |> Nx.cast Nx.int32\n    |> Nx.reshape [| 1; seq |]\n    |> Nx.broadcast_to [| batch; seq |]\n    |> Nx.contiguous\n  in\n  let tok = Kaun.Fn.embedding ~scale:false ~embedding:wte input_ids in\n  let pos = Kaun.Fn.embedding ~scale:false ~embedding:wpe position_ids in\n  let x = Nx.add tok pos in\n\n  (* Embedding dropout *)\n  let x =\n    if training && cfg.embd_pdrop > 0.0 then\n      Kaun.Fn.dropout ~rate:cfg.embd_pdrop x\n    else x\n  in\n\n  (* Transformer stack *)\n  let blocks = Ptree.List.items_exn ~ctx:\"Gpt2.decode.layers\" layers_t in\n  let x =\n    List.fold_left\n      (fun h block_params ->\n        transformer_block ~cfg ~dtype ~training ~params:block_params h)\n      x blocks\n  in\n\n  (* Final layer norm 
*)\n  let ln_f_g = get root ~name:\"ln_f_gamma\" dtype in\n  let ln_f_b = get root ~name:\"ln_f_beta\" dtype in\n  Kaun.Fn.layer_norm ~gamma:ln_f_g ~beta:ln_f_b ~epsilon:cfg.layer_norm_eps x\n\n(* Parameter initialization *)\n\nlet init_block_params ~dtype ~n_embd ~n_inner =\n  let w = Init.normal ~stddev:0.02 () in\n  let zeros n = Nx.zeros dtype [| n |] in\n  let ones n = Nx.ones dtype [| n |] in\n  let attn_params =\n    Ptree.dict\n      [\n        (\"qkv_weight\", Ptree.tensor (w.f [| n_embd; 3 * n_embd |] dtype));\n        (\"qkv_bias\", Ptree.tensor (zeros (3 * n_embd)));\n        (\"o_weight\", Ptree.tensor (w.f [| n_embd; n_embd |] dtype));\n        (\"o_bias\", Ptree.tensor (zeros n_embd));\n      ]\n  in\n  Ptree.dict\n    [\n      (\"attention\", attn_params);\n      (\"ln1_gamma\", Ptree.tensor (ones n_embd));\n      (\"ln1_beta\", Ptree.tensor (zeros n_embd));\n      (\"ffn_up_weight\", Ptree.tensor (w.f [| n_embd; n_inner |] dtype));\n      (\"ffn_up_bias\", Ptree.tensor (zeros n_inner));\n      (\"ffn_down_weight\", Ptree.tensor (w.f [| n_inner; n_embd |] dtype));\n      (\"ffn_down_bias\", Ptree.tensor (zeros n_embd));\n      (\"ln2_gamma\", Ptree.tensor (ones n_embd));\n      (\"ln2_beta\", Ptree.tensor (zeros n_embd));\n    ]\n\nlet init_decoder_params ~cfg ~dtype =\n  let h = cfg.n_embd in\n  let w = Init.normal ~stddev:0.02 () in\n  let wte = w.f [| cfg.vocab_size; h |] dtype in\n  let wpe = w.f [| cfg.n_positions; h |] dtype in\n  let blocks =\n    List.init cfg.n_layer (fun _ ->\n        init_block_params ~dtype ~n_embd:h ~n_inner:cfg.n_inner)\n  in\n  Ptree.dict\n    [\n      (\"wte\", Ptree.tensor wte);\n      (\"wpe\", Ptree.tensor wpe);\n      (\"layers\", Ptree.list blocks);\n      (\"ln_f_gamma\", Ptree.tensor (Nx.ones dtype [| h |]));\n      (\"ln_f_beta\", Ptree.tensor (Nx.zeros dtype [| h |]));\n    ]\n\n(* Layers *)\n\nlet decoder (cfg : config) () : (int32, float) Layer.t =\n  {\n    Layer.init =\n      (fun ~dtype ->\n        
Layer.make_vars\n          ~params:(init_decoder_params ~cfg ~dtype)\n          ~state:Ptree.empty ~dtype);\n    apply =\n      (fun ~params ~state ~dtype ~training ?ctx x ->\n        ignore (state, ctx);\n        let y = decode ~cfg ~params ~dtype ~training x in\n        (y, Ptree.empty));\n  }\n\nlet for_causal_lm (cfg : config) () : (int32, float) Layer.t =\n  {\n    Layer.init =\n      (fun ~dtype ->\n        Layer.make_vars\n          ~params:(init_decoder_params ~cfg ~dtype)\n          ~state:Ptree.empty ~dtype);\n    apply =\n      (fun ~params ~state ~dtype ~training ?ctx x ->\n        ignore (state, ctx);\n        let hidden = decode ~cfg ~params ~dtype ~training x in\n        (* Tied LM head: logits = hidden @ wte^T *)\n        let root = fields ~ctx:\"Gpt2.lm_head\" params in\n        let wte = get root ~name:\"wte\" dtype in\n        let logits = Nx.matmul hidden (Nx.transpose wte ~axes:[ 1; 0 ]) in\n        (logits, Ptree.empty));\n  }\n\n(* JSON config parsing *)\n\nlet json_mem name = function\n  | Jsont.Object (mems, _) -> (\n      match Jsont.Json.find_mem name mems with\n      | Some (_, v) -> v\n      | None -> Jsont.Null ((), Jsont.Meta.none))\n  | _ -> Jsont.Null ((), Jsont.Meta.none)\n\nlet json_to_int = function\n  | Jsont.Number (f, _) -> int_of_float f\n  | _ -> failwith \"expected int\"\n\nlet json_to_int_option = function\n  | Jsont.Number (f, _) -> Some (int_of_float f)\n  | _ -> None\n\nlet json_to_float_option = function Jsont.Number (f, _) -> Some f | _ -> None\n\nlet parse_config json =\n  let n_embd = json |> json_mem \"n_embd\" |> json_to_int in\n  config\n    ~vocab_size:(json |> json_mem \"vocab_size\" |> json_to_int)\n    ~n_embd\n    ~n_layer:(json |> json_mem \"n_layer\" |> json_to_int)\n    ~n_head:(json |> json_mem \"n_head\" |> json_to_int)\n    ?n_positions:(json |> json_mem \"n_positions\" |> json_to_int_option)\n    ?n_inner:(json |> json_mem \"n_inner\" |> json_to_int_option)\n    ?resid_pdrop:(json |> json_mem 
\"resid_pdrop\" |> json_to_float_option)\n    ?embd_pdrop:(json |> json_mem \"embd_pdrop\" |> json_to_float_option)\n    ?attn_pdrop:(json |> json_mem \"attn_pdrop\" |> json_to_float_option)\n    ?layer_norm_eps:\n      (json |> json_mem \"layer_norm_epsilon\" |> json_to_float_option)\n    ()\n\n(* HuggingFace weight mapping *)\n\nlet cast_tensor dtype (Ptree.P t) = Ptree.P (Nx.cast dtype t)\n\nlet map_hf_weights ~cfg ~dtype hf_weights =\n  let tbl = Hashtbl.create (List.length hf_weights) in\n  List.iter (fun (name, tensor) -> Hashtbl.add tbl name tensor) hf_weights;\n  let hf name =\n    match Hashtbl.find_opt tbl name with\n    | Some t -> cast_tensor dtype t\n    | None -> invalid_argf \"from_pretrained: missing HF weight %S\" name\n  in\n  (* GPT-2 stores weights as [in, out] — NO transpose needed *)\n  let hf_t name = Ptree.Tensor (hf name) in\n  let layer i =\n    let p s = Printf.sprintf \"h.%d.%s\" i s in\n    Ptree.dict\n      [\n        ( \"attention\",\n          Ptree.dict\n            [\n              (\"qkv_weight\", hf_t (p \"attn.c_attn.weight\"));\n              (\"qkv_bias\", hf_t (p \"attn.c_attn.bias\"));\n              (\"o_weight\", hf_t (p \"attn.c_proj.weight\"));\n              (\"o_bias\", hf_t (p \"attn.c_proj.bias\"));\n            ] );\n        (\"ln1_gamma\", hf_t (p \"ln_1.weight\"));\n        (\"ln1_beta\", hf_t (p \"ln_1.bias\"));\n        (\"ffn_up_weight\", hf_t (p \"mlp.c_fc.weight\"));\n        (\"ffn_up_bias\", hf_t (p \"mlp.c_fc.bias\"));\n        (\"ffn_down_weight\", hf_t (p \"mlp.c_proj.weight\"));\n        (\"ffn_down_bias\", hf_t (p \"mlp.c_proj.bias\"));\n        (\"ln2_gamma\", hf_t (p \"ln_2.weight\"));\n        (\"ln2_beta\", hf_t (p \"ln_2.bias\"));\n      ]\n  in\n  Ptree.dict\n    [\n      (\"wte\", hf_t \"wte.weight\");\n      (\"wpe\", hf_t \"wpe.weight\");\n      (\"layers\", Ptree.list (List.init cfg.n_layer layer));\n      (\"ln_f_gamma\", hf_t \"ln_f.weight\");\n      (\"ln_f_beta\", hf_t \"ln_f.bias\");\n   
 ]\n\n(* Pretrained loading *)\n\nlet from_pretrained ?(model_id = \"gpt2\") () =\n  let json = Kaun_hf.load_config ~model_id () in\n  let cfg = parse_config json in\n  let hf_weights = Kaun_hf.load_weights ~model_id () in\n  let params = map_hf_weights ~cfg ~dtype:Nx.float32 hf_weights in\n  (cfg, params)\n"
  },
  {
    "path": "packages/kaun/examples/04-gpt2/gpt2.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** GPT-2 decoder and language model head.\n\n    GPT-2 inputs are passed as int32 [input_ids]:\n\n    {[\n    Layer.apply model vars ~training:false input_ids\n    ]}\n\n    Position ids are computed automatically from the sequence length. *)\n\nopen Kaun\n\n(** {1:config Configuration} *)\n\ntype config = {\n  vocab_size : int;\n  n_positions : int;\n  n_embd : int;\n  n_layer : int;\n  n_head : int;\n  n_inner : int;\n  resid_pdrop : float;\n  embd_pdrop : float;\n  attn_pdrop : float;\n  layer_norm_eps : float;\n}\n(** The type for GPT-2 configurations. *)\n\nval config :\n  vocab_size:int ->\n  n_embd:int ->\n  n_layer:int ->\n  n_head:int ->\n  ?n_positions:int ->\n  ?n_inner:int ->\n  ?resid_pdrop:float ->\n  ?embd_pdrop:float ->\n  ?attn_pdrop:float ->\n  ?layer_norm_eps:float ->\n  unit ->\n  config\n(** [config ~vocab_size ~n_embd ~n_layer ~n_head ()] is a GPT-2 configuration.\n\n    [n_positions] defaults to [1024]. [n_inner] defaults to [4 * n_embd].\n    Dropout rates default to [0.1]. [layer_norm_eps] defaults to [1e-5].\n\n    Raises [Invalid_argument] if [n_embd] is not divisible by [n_head]. *)\n\n(** {1:layers Layers} *)\n\nval decoder : config -> unit -> (int32, float) Layer.t\n(** [decoder cfg ()] is the GPT-2 transformer decoder.\n\n    Input: int32 [input_ids] of shape [[batch; seq]]. Output: float hidden\n    states of shape [[batch; seq; n_embd]]. *)\n\nval for_causal_lm : config -> unit -> (int32, float) Layer.t\n(** [for_causal_lm cfg ()] is decoder + tied LM head.\n\n    Output: logits [[batch; seq; vocab_size]]. Word embeddings are tied with the\n    LM head projection. 
*)\n\n(** {1:pretrained Pretrained loading} *)\n\nval from_pretrained : ?model_id:string -> unit -> config * Ptree.t\n(** [from_pretrained ?model_id ()] downloads [model_id] from HuggingFace and\n    returns [(cfg, decoder_params)].\n\n    [decoder_params] is ready for {!decoder} or {!for_causal_lm} (the LM head\n    reuses the word embedding weights).\n\n    [model_id] defaults to [\"gpt2\"]. *)\n"
  },
  {
    "path": "packages/kaun/examples/04-gpt2/main.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Autoregressive text generation with pretrained GPT-2.\n\n   Downloads gpt2 from HuggingFace (~548MB on first run) and generates text\n   continuations from several prompts using greedy decoding. *)\n\nopen Kaun\n\n(* Tokenizer *)\n\nlet load_tokenizer model_id =\n  let vocab = Kaun_hf.download_file ~model_id ~filename:\"vocab.json\" () in\n  let merges = Kaun_hf.download_file ~model_id ~filename:\"merges.txt\" () in\n  Brot.from_model_file ~vocab ~merges\n    ~pre:\n      (Brot.Pre_tokenizer.byte_level ~add_prefix_space:false ~use_regex:true ())\n    ~decoder:(Brot.Decoder.byte_level ())\n    ()\n\nlet encode tokenizer text =\n  Array.map Int32.of_int (Brot.encode_ids tokenizer text)\n\nlet decode tokenizer ids = Brot.decode tokenizer (Array.map Int32.to_int ids)\n\n(* Greedy decode: at each step pick the highest-probability next token. *)\n\nlet generate model vars ~max_tokens prompt =\n  let tokens = ref (Array.to_list prompt) in\n  for _ = 1 to max_tokens do\n    let ids = Array.of_list !tokens in\n    let n = Array.length ids in\n    let input = Nx.create Nx.int32 [| 1; n |] ids in\n    let logits, _ = Layer.apply model vars ~training:false input in\n    let last = Nx.slice [ I 0; I (n - 1) ] logits in\n    let next : int32 = Nx.item [] (Nx.argmax ~axis:0 last) in\n    tokens := !tokens @ [ next ]\n  done;\n  Array.of_list !tokens\n\n(* Show the model's top-k predictions at a given position. 
*)\n\nlet print_top_k ~k model vars input_ids ~pos =\n  let logits, _ = Layer.apply model vars ~training:false input_ids in\n  let row = Nx.slice [ I 0; I pos ] logits in\n  let sorted = Nx.argsort ~descending:true ~axis:0 row in\n  let probs = Nx.softmax ~axes:[ 0 ] row in\n  for i = 0 to k - 1 do\n    let idx = Int32.to_int (Nx.item [ i ] sorted) in\n    let prob : float = Nx.item [ idx ] probs in\n    Printf.printf \"    #%d  token %-6d  p=%.4f\\n\" (i + 1) idx prob\n  done\n\nlet () =\n  let model_id = \"gpt2\" in\n  let dtype = Nx.float32 in\n\n  (* Load tokenizer and model *)\n  Printf.printf \"Loading %s...\\n%!\" model_id;\n  let tokenizer = load_tokenizer model_id in\n  let cfg, params = Gpt2.from_pretrained ~model_id () in\n  Printf.printf \"  vocab=%d  n_embd=%d  layers=%d  heads=%d\\n\\n\" cfg.vocab_size\n    cfg.n_embd cfg.n_layer cfg.n_head;\n\n  let model = Gpt2.for_causal_lm cfg () in\n  let vars = Layer.make_vars ~params ~state:Ptree.empty ~dtype in\n\n  (* --- What does the model predict after \"Hello world\"? 
--- *)\n  Printf.printf \"=== Next-token predictions ===\\n\";\n  Printf.printf \"  Prompt: \\\"Hello world\\\"\\n\";\n  Printf.printf \"  Top 5 continuations:\\n\";\n  let hello_ids = encode tokenizer \"Hello world\" in\n  let hello = Nx.create Nx.int32 [| 1; Array.length hello_ids |] hello_ids in\n  print_top_k ~k:5 model vars hello ~pos:(Array.length hello_ids - 1);\n\n  (* --- Greedy generation from several prompts --- *)\n  Printf.printf \"\\n=== Greedy generation (30 tokens each) ===\\n\\n\";\n  let prompts =\n    [ \"The meaning of life is\"; \"Once upon a time\"; \"The quick brown fox\" ]\n  in\n  List.iter\n    (fun text ->\n      let prompt = encode tokenizer text in\n      let generated = generate model vars ~max_tokens:30 prompt in\n      let continuation =\n        Array.sub generated (Array.length prompt)\n          (Array.length generated - Array.length prompt)\n      in\n      Printf.printf \"  \\\"%s\\\" ->\\n\" text;\n      Printf.printf \"    %s\\n\\n\" (decode tokenizer continuation))\n    prompts\n"
  },
  {
    "path": "packages/kaun/examples/04-gpt2/reference_hf_output.py",
    "content": "#!/usr/bin/env python3\n\"\"\"Get reference GPT-2 output from transformers for comparison.\"\"\"\n\nimport torch\nfrom transformers import GPT2LMHeadModel, GPT2Tokenizer\n\nprint(\"Loading GPT-2 model and tokenizer...\")\ntokenizer = GPT2Tokenizer.from_pretrained(\"gpt2\")\nmodel = GPT2LMHeadModel.from_pretrained(\"gpt2\")\nmodel.eval()\n\ntext = \"Hello world\"\nprint(f\"Test input: {repr(text)}\")\n\ninputs = tokenizer(text, return_tensors=\"pt\")\nprint(f\"Input IDs: {inputs.input_ids.tolist()[0]}\")\n\nwith torch.no_grad():\n    outputs = model(**inputs)\n    logits = outputs.logits\n\nprint(f\"\\nLogits shape: {list(logits.shape)}\")\nprint(f\"First 10 logit values at position 0: {logits[0, 0, :10].tolist()}\")\n\n# Top 5 predictions for next token (after \"world\")\nlast_logits = logits[0, -1, :]\nprobs = torch.softmax(last_logits, dim=-1)\ntop_probs, top_indices = torch.topk(probs, 5)\n\nprint(\"\\nTop 5 predicted next tokens:\")\nfor idx, prob in zip(top_indices.tolist(), top_probs.tolist()):\n    token = tokenizer.decode([idx])\n    print(f\"  Token {idx} ({repr(token)}): {prob:.4f}\")\n\n# Greedy generation\ngenerated = model.generate(\n    inputs.input_ids, max_new_tokens=20, do_sample=False\n)\nprint(f\"\\nGreedy generation: {tokenizer.decode(generated[0])}\")\nprint(f\"Token IDs: {generated[0].tolist()}\")\n"
  },
  {
    "path": "packages/kaun/lib/activation.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Re-exports *)\n\nlet relu x = Nx.relu x\nlet sigmoid x = Nx.sigmoid x\nlet tanh x = Nx.tanh x\n\n(* Activations *)\n\nlet relu6 x =\n  let zero = Nx.scalar_like x 0.0 in\n  let six = Nx.scalar_like x 6.0 in\n  Nx.minimum (Nx.maximum x zero) six\n\nlet hard_sigmoid ?(alpha = 1.0 /. 6.0) ?(beta = 0.5) x =\n  let linear =\n    Nx.add (Nx.mul (Nx.scalar_like x alpha) x) (Nx.scalar_like x beta)\n  in\n  let zero = Nx.scalar_like x 0. in\n  let one = Nx.scalar_like x 1. in\n  Nx.minimum one (Nx.maximum zero linear)\n\nlet softplus x = Nx.log (Nx.add (Nx.scalar_like x 1.) (Nx.exp x))\nlet silu x = Nx.mul x (Nx.sigmoid x)\nlet swish x = silu x\nlet hard_silu x = Nx.mul x (hard_sigmoid x)\nlet hard_swish x = hard_silu x\n\nlet prelu ~alpha x =\n  let zero = Nx.zeros_like x in\n  Nx.add (Nx.maximum zero x) (Nx.mul alpha (Nx.minimum zero x))\n\nlet log_sigmoid x =\n  (* Numerically stable: branch on sign to avoid overflow *)\n  let zero = Nx.scalar_like x 0.0 in\n  let one = Nx.scalar_like x 1.0 in\n  let is_positive = Nx.greater x zero in\n  let branch_pos = Nx.neg (Nx.log (Nx.add one (Nx.exp (Nx.neg x)))) in\n  let branch_neg = Nx.sub x (Nx.log (Nx.add one (Nx.exp x))) in\n  Nx.where is_positive branch_pos branch_neg\n\nlet leaky_relu ?(negative_slope = 0.01) x =\n  Nx.maximum x (Nx.mul (Nx.scalar_like x negative_slope) x)\n\nlet hard_tanh x =\n  let one = Nx.scalar_like x 1. in\n  let neg_one = Nx.scalar_like x (-1.0) in\n  Nx.maximum neg_one (Nx.minimum x one)\n\nlet elu ?(alpha = 1.0) x =\n  let zero = Nx.scalar_like x 0.0 in\n  let one = Nx.scalar_like x 1. 
in\n  let alpha_s = Nx.scalar_like x alpha in\n  let exp_minus_one = Nx.sub (Nx.exp x) one in\n  Nx.add (Nx.maximum x zero) (Nx.mul alpha_s (Nx.minimum zero exp_minus_one))\n\nlet selu x =\n  let alpha = 1.6732632423543772848170429916717 in\n  let lambda = 1.0507009873554804934193349852946 in\n  Nx.mul (Nx.scalar_like x lambda) (elu ~alpha x)\n\nlet celu ?(alpha = 1.0) x =\n  let zero = Nx.zeros_like x in\n  let alpha_s = Nx.scalar_like x alpha in\n  let one = Nx.scalar_like x 1. in\n  let neg_term =\n    Nx.mul alpha_s (Nx.sub (Nx.exp (Nx.div (Nx.minimum zero x) alpha_s)) one)\n  in\n  Nx.add (Nx.maximum zero x) neg_term\n\nlet squareplus ?(b = 4.0) x =\n  let half = Nx.scalar_like x 0.5 in\n  let inside = Nx.add (Nx.square x) (Nx.scalar_like x b) in\n  Nx.mul half (Nx.add x (Nx.sqrt inside))\n\nlet glu ?(axis = -1) x =\n  match Nx.split ~axis 2 x with\n  | [ left; right ] -> Nx.mul left (Nx.sigmoid right)\n  | _ -> invalid_arg \"Activation.glu: split did not produce two partitions\"\n\nlet sparse_plus x =\n  let zero = Nx.zeros_like x in\n  let one = Nx.scalar_like x 1. in\n  let neg_one = Nx.scalar_like x (-1.) in\n  let quadratic = Nx.mul (Nx.scalar_like x 0.25) (Nx.square (Nx.add x one)) in\n  let res = Nx.where (Nx.greater_equal x one) x quadratic in\n  Nx.where (Nx.less_equal x neg_one) zero res\n\nlet sparse_sigmoid x =\n  let zero = Nx.zeros_like x in\n  let one = Nx.scalar_like x 1. in\n  let neg_one = Nx.scalar_like x (-1.) 
in\n  let half = Nx.scalar_like x 0.5 in\n  let linear = Nx.mul half (Nx.add x one) in\n  let res = Nx.where (Nx.greater_equal x one) one linear in\n  Nx.where (Nx.less_equal x neg_one) zero res\n\nlet gelu_approx x =\n  let one = Nx.scalar_like x 1.0 in\n  let half = Nx.scalar_like x 0.5 in\n  let sqrt2_pi = Nx.scalar_like x 0.7978845608 in\n  let coeff = Nx.scalar_like x 0.044715 in\n  let x2 = Nx.mul x x in\n  let inner = Nx.add one (Nx.mul coeff x2) in\n  let arg = Nx.mul (Nx.mul x sqrt2_pi) inner in\n  Nx.mul half (Nx.mul x (Nx.add one (Nx.tanh arg)))\n\nlet gelu x =\n  let half = Nx.scalar_like x 0.5 in\n  let one = Nx.scalar_like x 1.0 in\n  let sqrt2 = Nx.scalar_like x 1.4142135623730951 in\n  Nx.mul (Nx.mul half x) (Nx.add one (Nx.erf (Nx.div x sqrt2)))\n\nlet softsign x =\n  let one = Nx.scalar_like x 1.0 in\n  Nx.div x (Nx.add one (Nx.abs x))\n\nlet mish x = Nx.mul x (Nx.tanh (softplus x))\n"
  },
  {
    "path": "packages/kaun/lib/activation.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Activation functions for neural networks.\n\n    All functions are differentiable through Rune's autodiff. The standard\n    activations {!relu}, {!sigmoid} and {!tanh} are re-exported from {!Nx} for\n    convenience. *)\n\n(** {1:standard Standard activations} *)\n\nval relu : ('a, 'b) Nx.t -> ('a, 'b) Nx.t\n(** [relu x] is [max(x, 0)]. *)\n\nval sigmoid : (float, 'b) Nx.t -> (float, 'b) Nx.t\n(** [sigmoid x] is [1 / (1 + exp(-x))]. *)\n\nval tanh : (float, 'b) Nx.t -> (float, 'b) Nx.t\n(** [tanh x] is the hyperbolic tangent of [x]. *)\n\nval relu6 : (float, 'b) Nx.t -> (float, 'b) Nx.t\n(** [relu6 x] is [min(max(x, 0), 6)]. *)\n\nval leaky_relu : ?negative_slope:float -> (float, 'b) Nx.t -> (float, 'b) Nx.t\n(** [leaky_relu x] is [max(x, negative_slope * x)].\n\n    [negative_slope] defaults to [0.01]. *)\n\nval hard_tanh : (float, 'b) Nx.t -> (float, 'b) Nx.t\n(** [hard_tanh x] is [max(-1, min(1, x))]. *)\n\nval hard_sigmoid :\n  ?alpha:float -> ?beta:float -> (float, 'b) Nx.t -> (float, 'b) Nx.t\n(** [hard_sigmoid x] is [min(1, max(0, alpha * x + beta))].\n\n    [alpha] defaults to [1/6]. [beta] defaults to [0.5]. *)\n\nval prelu : alpha:(float, 'b) Nx.t -> (float, 'b) Nx.t -> (float, 'b) Nx.t\n(** [prelu ~alpha x] is [max(0, x) + alpha * min(0, x)].\n\n    [alpha] is a learnable tensor, broadcast against [x]. *)\n\n(** {1:exponential Exponential family} *)\n\nval elu : ?alpha:float -> (float, 'b) Nx.t -> (float, 'b) Nx.t\n(** [elu x] is [x] when [x >= 0] and [alpha * (exp(x) - 1)] otherwise.\n\n    [alpha] defaults to [1.0]. *)\n\nval selu : (float, 'b) Nx.t -> (float, 'b) Nx.t\n(** [selu x] is [lambda * elu(x, alpha)] with self-normalizing constants. 
*)\n\nval celu : ?alpha:float -> (float, 'b) Nx.t -> (float, 'b) Nx.t\n(** [celu x] is [max(0, x) + min(0, alpha * (exp(x/alpha) - 1))].\n\n    [alpha] defaults to [1.0]. *)\n\n(** {1:smooth Smooth activations} *)\n\nval gelu : (float, 'b) Nx.t -> (float, 'b) Nx.t\n(** [gelu x] is [0.5 * x * (1 + erf(x / sqrt(2)))]. *)\n\nval gelu_approx : (float, 'b) Nx.t -> (float, 'b) Nx.t\n(** [gelu_approx x] is the tanh-based approximation of {!gelu}. *)\n\nval silu : (float, 'b) Nx.t -> (float, 'b) Nx.t\n(** [silu x] is [x * sigmoid(x)] (also known as Swish). *)\n\nval swish : (float, 'b) Nx.t -> (float, 'b) Nx.t\n(** [swish x] is {!silu}. *)\n\nval hard_silu : (float, 'b) Nx.t -> (float, 'b) Nx.t\n(** [hard_silu x] is [x * hard_sigmoid(x)]. *)\n\nval hard_swish : (float, 'b) Nx.t -> (float, 'b) Nx.t\n(** [hard_swish x] is {!hard_silu}. *)\n\nval mish : (float, 'b) Nx.t -> (float, 'b) Nx.t\n(** [mish x] is [x * tanh(softplus(x))]. *)\n\nval softplus : (float, 'b) Nx.t -> (float, 'b) Nx.t\n(** [softplus x] is [log(1 + exp(x))]. *)\n\nval softsign : (float, 'b) Nx.t -> (float, 'b) Nx.t\n(** [softsign x] is [x / (abs(x) + 1)]. *)\n\nval squareplus : ?b:float -> (float, 'b) Nx.t -> (float, 'b) Nx.t\n(** [squareplus x] is [0.5 * (x + sqrt(x^2 + b))].\n\n    [b] defaults to [4.0]. *)\n\nval log_sigmoid : (float, 'b) Nx.t -> (float, 'b) Nx.t\n(** [log_sigmoid x] is [log(sigmoid(x))], computed in a numerically stable way\n    by branching on the sign of [x]. *)\n\n(** {1:gating Gating} *)\n\nval glu : ?axis:int -> (float, 'b) Nx.t -> (float, 'b) Nx.t\n(** [glu x] splits [x] in half along [axis] and returns [left * sigmoid(right)].\n\n    [axis] defaults to [-1].\n\n    Raises [Invalid_argument] if the split does not produce two partitions. *)\n\n(** {1:sparse Sparse activations} *)\n\nval sparse_plus : (float, 'b) Nx.t -> (float, 'b) Nx.t\n(** [sparse_plus x] is [x] when [x >= 1], [0] when [x <= -1], and\n    [0.25 * (x + 1)^2] otherwise. 
*)\n\nval sparse_sigmoid : (float, 'b) Nx.t -> (float, 'b) Nx.t\n(** [sparse_sigmoid x] is [1] when [x >= 1], [0] when [x <= -1], and\n    [0.5 * (x + 1)] otherwise. *)\n"
  },
  {
    "path": "packages/kaun/lib/attention.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nlet invalid_argf fmt = Printf.ksprintf invalid_arg fmt\n\nmodule Dtype = Nx_core.Dtype\n\nlet require_same_float_dtype (type p in_elt) ~ctx\n    (expected : (float, p) Nx.dtype) (x : (float, in_elt) Nx.t) :\n    (float, p) Nx.t =\n  match Dtype.equal_witness expected (Nx.dtype x) with\n  | Some Type.Equal -> (x : (float, p) Nx.t)\n  | None ->\n      invalid_argf \"%s: input dtype %s does not match model dtype %s\" ctx\n        (Dtype.to_string (Nx.dtype x))\n        (Dtype.to_string expected)\n\nlet normalize_axis ~ctx ~ndim axis =\n  let normalized = if axis < 0 then ndim + axis else axis in\n  if normalized < 0 || normalized >= ndim then\n    invalid_argf \"%s: axis %d out of bounds for rank %d\" ctx axis ndim;\n  normalized\n\n(* Rotary position embeddings *)\n\nlet rope ?(theta = 10000.0) ?(seq_dim = -2) x =\n  let ctx = \"Attention.rope\" in\n  let shape = Nx.shape x in\n  let ndim = Array.length shape in\n  if ndim < 2 then invalid_argf \"%s: expected rank >= 2, got rank %d\" ctx ndim;\n  let seq_axis = normalize_axis ~ctx ~ndim seq_dim in\n  if seq_axis = ndim - 1 then\n    invalid_argf\n      \"%s: seq_dim points to the last axis; last axis is reserved for head_dim\"\n      ctx;\n  let seq_len = shape.(seq_axis) in\n  let head_dim = shape.(ndim - 1) in\n  if head_dim mod 2 <> 0 then\n    invalid_argf \"%s: head_dim must be even, got %d\" ctx head_dim;\n  let half = head_dim / 2 in\n  let dtype = Nx.dtype x in\n  let inv_freq =\n    let exponents = Nx.arange_f dtype 0.0 (float_of_int head_dim) 2.0 in\n    let normalized =\n      Nx.div exponents (Nx.scalar dtype (float_of_int head_dim))\n    in\n    Nx.pow (Nx.scalar dtype theta) (Nx.neg normalized)\n  in\n  let positions = Nx.arange_f dtype 
0.0 (float_of_int seq_len) 1.0 in\n  let angles =\n    Nx.matmul\n      (Nx.reshape [| seq_len; 1 |] positions)\n      (Nx.reshape [| 1; half |] inv_freq)\n  in\n  let broadcast_shape = Array.make ndim 1 in\n  broadcast_shape.(seq_axis) <- seq_len;\n  broadcast_shape.(ndim - 1) <- half;\n  let cos_angles = Nx.reshape broadcast_shape (Nx.cos angles) in\n  let sin_angles = Nx.reshape broadcast_shape (Nx.sin angles) in\n  let last_axis_slice start stop =\n    let slices = Array.make ndim Nx.A in\n    slices.(ndim - 1) <- Nx.R (start, stop);\n    Array.to_list slices\n  in\n  let x1 = Nx.slice (last_axis_slice 0 half) x in\n  let x2 = Nx.slice (last_axis_slice half head_dim) x in\n  let r1 = Nx.sub (Nx.mul x1 cos_angles) (Nx.mul x2 sin_angles) in\n  let r2 = Nx.add (Nx.mul x1 sin_angles) (Nx.mul x2 cos_angles) in\n  Nx.concatenate ~axis:(-1) [ r1; r2 ]\n\n(* Multi-head self-attention *)\n\nlet attention_mask_key = \"attention_mask\"\nlet apply_rope ~theta t = rope ~theta t\n\nlet multi_head_attention ~embed_dim ~num_heads ?(num_kv_heads = num_heads)\n    ?(dropout = 0.0) ?(is_causal = false) ?(rope = false)\n    ?(rope_theta = 10000.0) () =\n  let use_rope = rope in\n  let head_dim = embed_dim / num_heads in\n  if head_dim * num_heads <> embed_dim then\n    invalid_argf\n      \"Attention.multi_head_attention: embed_dim (%d) not divisible by \\\n       num_heads (%d)\"\n      embed_dim num_heads;\n  if num_heads mod num_kv_heads <> 0 then\n    invalid_argf\n      \"Attention.multi_head_attention: num_heads (%d) not divisible by \\\n       num_kv_heads (%d)\"\n      num_heads num_kv_heads;\n  if dropout < 0.0 || dropout >= 1.0 then\n    invalid_argf\n      \"Attention.multi_head_attention: expected 0.0 <= dropout < 1.0, got %g\"\n      dropout;\n  let weight_init = Init.glorot_uniform () in\n  {\n    Layer.init =\n      (fun ~dtype ->\n        let q_proj =\n          weight_init.f [| embed_dim; num_heads * head_dim |] dtype\n        in\n        let k_proj =\n          
weight_init.f [| embed_dim; num_kv_heads * head_dim |] dtype\n        in\n        let v_proj =\n          weight_init.f [| embed_dim; num_kv_heads * head_dim |] dtype\n        in\n        let out_proj =\n          weight_init.f [| num_heads * head_dim; embed_dim |] dtype\n        in\n        Layer.make_vars\n          ~params:\n            (Ptree.dict\n               [\n                 (\"q_proj\", Ptree.tensor q_proj);\n                 (\"k_proj\", Ptree.tensor k_proj);\n                 (\"v_proj\", Ptree.tensor v_proj);\n                 (\"out_proj\", Ptree.tensor out_proj);\n               ])\n          ~state:(Ptree.list []) ~dtype);\n    apply =\n      (fun ~params ~state ~dtype ~training ?ctx x ->\n        let x =\n          require_same_float_dtype ~ctx:\"Attention.multi_head_attention\" dtype x\n        in\n        let shape = Nx.shape x in\n        let batch = shape.(0) in\n        let seq_len = shape.(1) in\n        let fields =\n          Ptree.Dict.fields_exn ~ctx:\"Attention.multi_head_attention.params\"\n            params\n        in\n        let get name = Ptree.Dict.get_tensor_exn fields ~name dtype in\n        let q_proj = get \"q_proj\" in\n        let k_proj = get \"k_proj\" in\n        let v_proj = get \"v_proj\" in\n        let out_proj = get \"out_proj\" in\n        let q = Nx.matmul x q_proj in\n        let k = Nx.matmul x k_proj in\n        let v = Nx.matmul x v_proj in\n        let reshape_heads t heads =\n          let t = Nx.reshape [| batch; seq_len; heads; head_dim |] t in\n          Nx.transpose t ~axes:[ 0; 2; 1; 3 ]\n        in\n        let q = reshape_heads q num_heads in\n        let k = reshape_heads k num_kv_heads in\n        let v = reshape_heads v num_kv_heads in\n        let repeat_kv t =\n          if num_kv_heads < num_heads then\n            let repetition = num_heads / num_kv_heads in\n            let shape = Nx.shape t in\n            let expanded = Nx.expand_dims [ 2 ] t in\n            let target =\n              
[| shape.(0); shape.(1); repetition; shape.(2); shape.(3) |]\n            in\n            Nx.broadcast_to target expanded\n            |> Nx.contiguous\n            |> Nx.reshape [| shape.(0); num_heads; shape.(2); shape.(3) |]\n          else t\n        in\n        let k = repeat_kv k in\n        let v = repeat_kv v in\n        let q, k =\n          if use_rope then\n            (apply_rope ~theta:rope_theta q, apply_rope ~theta:rope_theta k)\n          else (q, k)\n        in\n        let dropout_rate =\n          if training && dropout > 0.0 then Some dropout else None\n        in\n        (* Read attention mask from context if present. Accepts [batch; seq_k]\n           int32 (0/1) or bool, reshapes to [batch; 1; 1; seq_k] for\n           broadcasting over heads and queries. *)\n        let attention_mask =\n          match ctx with\n          | None -> None\n          | Some ctx -> (\n              match Context.find ctx ~name:attention_mask_key with\n              | None -> None\n              | Some tensor ->\n                  let bool_mask =\n                    match Ptree.Tensor.to_typed Nx.bool tensor with\n                    | Some m -> m\n                    | None ->\n                        (* int/float mask: cast to int32, nonzero = true *)\n                        let int_mask =\n                          match Ptree.Tensor.to_typed Nx.int32 tensor with\n                          | Some m -> m\n                          | None ->\n                              let (Ptree.P raw) = tensor in\n                              Nx.cast Nx.int32 raw\n                        in\n                        Nx.not_equal int_mask\n                          (Nx.zeros Nx.int32 (Nx.shape int_mask))\n                  in\n                  let mask_shape = Nx.shape bool_mask in\n                  let ndim = Array.length mask_shape in\n                  let reshaped =\n                    if ndim = 2 then\n                      Nx.reshape\n                        [| 
mask_shape.(0); 1; 1; mask_shape.(1) |]\n                        bool_mask\n                    else bool_mask\n                  in\n                  Some reshaped)\n        in\n        let attn =\n          Fn.dot_product_attention ?attention_mask ?dropout_rate ~is_causal q k\n            v\n        in\n        let merged =\n          Nx.transpose attn ~axes:[ 0; 2; 1; 3 ]\n          |> Nx.contiguous\n          |> Nx.reshape [| batch; seq_len; embed_dim |]\n        in\n        let output = Nx.matmul merged out_proj in\n        (output, state));\n  }\n"
  },
  {
    "path": "packages/kaun/lib/attention.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Multi-head self-attention.\n\n    Provides scaled dot-product attention with support for grouped query\n    attention (GQA), causal masking, rotary position embeddings (RoPE), and\n    dropout. *)\n\n(** {1:rope Rotary Position Embeddings} *)\n\nval rope : ?theta:float -> ?seq_dim:int -> (float, 'a) Nx.t -> (float, 'a) Nx.t\n(** [rope ?theta ?seq_dim x] applies rotary position embeddings to [x].\n\n    [x] may have any rank [>= 2], with shape [[d0; ...; dn-1]] where:\n    - [head_dim = dn-1] (last axis).\n    - [seq_len] is on axis [seq_dim].\n\n    [theta] defaults to [10000.0]. [seq_dim] defaults to [-2] (second-to-last\n    axis). Negative [seq_dim] values are interpreted relative to rank.\n\n    [head_dim] must be even.\n\n    Raises [Invalid_argument] if [x] has rank < 2, if [seq_dim] is out of\n    bounds, if [seq_dim] designates the last axis, or if [head_dim] is odd. *)\n\n(** {1:mha Multi-Head Attention} *)\n\nval attention_mask_key : string\n(** [attention_mask_key] is [\"attention_mask\"]. The well-known {!Context} key\n    that {!multi_head_attention} reads during the forward pass. *)\n\nval multi_head_attention :\n  embed_dim:int ->\n  num_heads:int ->\n  ?num_kv_heads:int ->\n  ?dropout:float ->\n  ?is_causal:bool ->\n  ?rope:bool ->\n  ?rope_theta:float ->\n  unit ->\n  (float, float) Layer.t\n(** [multi_head_attention ~embed_dim ~num_heads ()] is a multi-head\n    self-attention layer.\n\n    Input shape: [[batch; seq_len; embed_dim]]. Output shape:\n    [[batch; seq_len; embed_dim]].\n\n    [num_kv_heads] defaults to [num_heads] (standard MHA). 
When\n    [num_kv_heads < num_heads], grouped query attention (GQA) is used.\n    [num_heads] must be divisible by [num_kv_heads].\n\n    [dropout] defaults to [0.0]. When positive, dropout is applied during\n    training using keys from the implicit RNG scope.\n\n    [is_causal] defaults to [false]. When [true], a causal mask prevents\n    attending to future positions.\n\n    [rope] defaults to [false]. When [true], rotary position embeddings are\n    applied to Q and K before the attention computation. [rope_theta] defaults\n    to [10000.0].\n\n    When [ctx] contains {!attention_mask_key} (a bool or int32 tensor of shape\n    [[batch; seq_k]]), it is applied as a padding mask. [true] / nonzero keeps\n    the position, [false] / [0] masks it.\n\n    Parameters:\n    - [q_proj] ([[embed_dim; num_heads * head_dim]])\n    - [k_proj] ([[embed_dim; num_kv_heads * head_dim]])\n    - [v_proj] ([[embed_dim; num_kv_heads * head_dim]])\n    - [out_proj] ([[num_heads * head_dim; embed_dim]])\n\n    Raises [Invalid_argument] if [embed_dim] is not divisible by [num_heads], if\n    [num_heads] is not divisible by [num_kv_heads], or if [dropout] is not in\n    the range [0.0 <= dropout < 1.0]. *)\n"
  },
  {
    "path": "packages/kaun/lib/checkpoint.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nlet invalid_argf fmt = Printf.ksprintf invalid_arg fmt\n\nlet shape_to_string s =\n  \"[\" ^ String.concat \"; \" (Array.to_list (Array.map string_of_int s)) ^ \"]\"\n\nlet save path tree =\n  let pairs = Ptree.flatten_with_paths tree in\n  let items =\n    List.map\n      (fun (name, pt) ->\n        let nx = Ptree.with_tensor pt { run = (fun t -> Nx_io.P t) } in\n        (name, nx))\n      pairs\n  in\n  Nx_io.save_safetensors path items\n\nlet load path ~like =\n  let archive = Nx_io.load_safetensors path in\n  let _, rebuild = Ptree.flatten like in\n  let path_leaves = Ptree.flatten_with_paths like in\n  let loaded =\n    List.map\n      (fun (name, template) ->\n        match Hashtbl.find_opt archive name with\n        | None -> invalid_argf \"Checkpoint.load: missing key %S\" name\n        | Some (Nx_io.P nx) ->\n            Ptree.with_tensor template\n              {\n                run =\n                  (fun tmpl ->\n                    let expected = Nx.shape tmpl in\n                    let actual = Nx.shape nx in\n                    if expected <> actual then\n                      invalid_argf\n                        \"Checkpoint.load: shape mismatch for %S: expected %s, \\\n                         got %s\"\n                        name (shape_to_string expected) (shape_to_string actual);\n                    let casted = Nx.cast (Nx.dtype tmpl) nx in\n                    Ptree.P casted);\n              })\n      path_leaves\n  in\n  rebuild loaded\n"
  },
  {
    "path": "packages/kaun/lib/checkpoint.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Model checkpointing.\n\n    {!Checkpoint} serializes {!Ptree.t} parameter trees to and from\n    {{:https://huggingface.co/docs/safetensors/}SafeTensors} files. Tensor paths\n    from {!Ptree.flatten_with_paths} become file keys (e.g.\n    [\"layers.0.weight\"]). *)\n\nval save : string -> Ptree.t -> unit\n(** [save path t] writes [t]'s tensors to a safetensors file at [path].\n\n    Raises [Failure] on I/O errors. *)\n\nval load : string -> like:Ptree.t -> Ptree.t\n(** [load path ~like] loads tensors from a safetensors file and reconstructs a\n    tree with the same structure as [like].\n\n    Each tensor is cast to [like]'s dtype if needed. Extra keys in the file are\n    silently ignored.\n\n    Raises [Invalid_argument] if a key required by [like] is missing from the\n    file, or if a tensor's shape does not match [like]. Raises [Failure] on I/O\n    errors. *)\n"
  },
  {
    "path": "packages/kaun/lib/context.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nmodule Dtype = Nx_core.Dtype\n\ntype t = (string * Ptree.tensor) list\n\nlet empty = []\nlet set ~name tensor t = (name, tensor) :: t\nlet find t ~name = List.assoc_opt name t\n\nlet get_float_exn (type l) ~ctx t ~name ~(dtype : (float, l) Nx.dtype) :\n    (float, l) Nx.t =\n  match find t ~name with\n  | Some (Ptree.P x) -> (\n      match Dtype.equal_witness dtype (Nx.dtype x) with\n      | Some Type.Equal -> x\n      | None ->\n          invalid_arg\n            (Printf.sprintf \"%s: %s dtype mismatch (expected %s, got %s)\" ctx\n               name (Dtype.to_string dtype)\n               (Dtype.to_string (Nx.dtype x))))\n  | None -> invalid_arg (Printf.sprintf \"%s: %s not found in context\" ctx name)\n\nlet get_int32_exn ~ctx t ~name : (int32, Bigarray.int32_elt) Nx.t =\n  match find t ~name with\n  | Some tensor -> Ptree.Tensor.to_typed_exn Nx.int32 tensor\n  | None -> invalid_arg (Printf.sprintf \"%s: %s not found in context\" ctx name)\n\nlet get_bool_exn ~ctx t ~name : (bool, Nx.bool_elt) Nx.t =\n  match find t ~name with\n  | Some tensor -> Ptree.Tensor.to_typed_exn Nx.bool tensor\n  | None -> invalid_arg (Printf.sprintf \"%s: %s not found in context\" ctx name)\n"
  },
  {
    "path": "packages/kaun/lib/context.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Per-call auxiliary data for layers.\n\n    A {!type:t} carries read-only tensors (attention masks, position ids,\n    encoder memory) that specific layers consume during a forward pass. Most\n    layers ignore the context; transformer layers read from it by well-known key\n    names.\n\n    {[\n    let ctx =\n      Context.empty\n      |> Context.set ~name:\"attention_mask\" (Ptree.P mask)\n      |> Context.set ~name:\"token_type_ids\" (Ptree.P ids)\n    in\n    Layer.apply model vars ~training:false ~ctx input_ids\n    ]} *)\n\n(** {1:types Types} *)\n\ntype t\n(** The type for forward-pass contexts. *)\n\n(** {1:constructors Constructors} *)\n\nval empty : t\n(** [empty] is the empty context. *)\n\nval set : name:string -> Ptree.tensor -> t -> t\n(** [set ~name tensor ctx] is [ctx] with [name] bound to [tensor].\n\n    Shadows any previous binding for [name]. *)\n\n(** {1:lookup Lookup} *)\n\nval find : t -> name:string -> Ptree.tensor option\n(** [find ctx ~name] is the tensor bound to [name] in [ctx], if any. *)\n\nval get_float_exn :\n  ctx:string ->\n  t ->\n  name:string ->\n  dtype:(float, 'l) Nx.dtype ->\n  (float, 'l) Nx.t\n(** [get_float_exn ~ctx t ~name ~dtype] is the float tensor bound to [name],\n    cast-checked against [dtype].\n\n    Raises [Invalid_argument] if [name] is missing or has a different dtype.\n    [ctx] is used in error messages. *)\n\nval get_int32_exn :\n  ctx:string -> t -> name:string -> (int32, Bigarray.int32_elt) Nx.t\n(** [get_int32_exn ~ctx t ~name] is the int32 tensor bound to [name].\n\n    Raises [Invalid_argument] if [name] is missing or has a different dtype. 
*)\n\nval get_bool_exn : ctx:string -> t -> name:string -> (bool, Nx.bool_elt) Nx.t\n(** [get_bool_exn ~ctx t ~name] is the bool tensor bound to [name].\n\n    Raises [Invalid_argument] if [name] is missing or has a different dtype. *)\n"
  },
  {
    "path": "packages/kaun/lib/data.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\ntype 'a t = {\n  next : unit -> 'a option;\n  reset : unit -> unit;\n  length : int option;\n}\n\n(* Constructors *)\n\nlet of_array a =\n  let n = Array.length a in\n  let i = ref 0 in\n  {\n    next =\n      (fun () ->\n        if !i >= n then None\n        else\n          let v = a.(!i) in\n          incr i;\n          Some v);\n    reset = (fun () -> i := 0);\n    length = Some n;\n  }\n\nlet of_tensor t =\n  let n = (Nx.shape t).(0) in\n  let i = ref 0 in\n  {\n    next =\n      (fun () ->\n        if !i >= n then None\n        else\n          let v = Nx.slice [ I !i ] t in\n          incr i;\n          Some v);\n    reset = (fun () -> i := 0);\n    length = Some n;\n  }\n\nlet of_tensors (x, y) =\n  let nx = (Nx.shape x).(0) in\n  let ny = (Nx.shape y).(0) in\n  if nx <> ny then\n    invalid_arg\n      (Printf.sprintf \"Data.of_tensors: first dimensions differ (%d vs %d)\" nx\n         ny);\n  let n = nx in\n  let i = ref 0 in\n  {\n    next =\n      (fun () ->\n        if !i >= n then None\n        else\n          let vx = Nx.slice [ I !i ] x in\n          let vy = Nx.slice [ I !i ] y in\n          incr i;\n          Some (vx, vy));\n    reset = (fun () -> i := 0);\n    length = Some n;\n  }\n\nlet of_fn n f =\n  if n < 0 then\n    invalid_arg (Printf.sprintf \"Data.of_fn: expected n >= 0, got %d\" n);\n  let i = ref 0 in\n  {\n    next =\n      (fun () ->\n        if !i >= n then None\n        else\n          let v = f !i in\n          incr i;\n          Some v);\n    reset = (fun () -> i := 0);\n    length = Some n;\n  }\n\nlet repeat n v = of_fn n (fun _ -> v)\n\n(* Transformers *)\n\nlet map f t =\n  {\n    next = (fun () -> Option.map f (t.next ()));\n    reset = t.reset;\n    length = 
t.length;\n  }\n\nlet batch ?(drop_last = false) n t =\n  if n <= 0 then\n    invalid_arg (Printf.sprintf \"Data.batch: expected n > 0, got %d\" n);\n  let batch_len =\n    Option.map (fun l -> if drop_last then l / n else (l + n - 1) / n) t.length\n  in\n  {\n    next =\n      (fun () ->\n        match t.next () with\n        | None -> None\n        | Some first ->\n            let buf = Array.make n first in\n            let k = ref 1 in\n            let continue = ref true in\n            while !k < n && !continue do\n              match t.next () with\n              | Some v ->\n                  buf.(!k) <- v;\n                  incr k\n              | None -> continue := false\n            done;\n            if !k < n && drop_last then None\n            else if !k < n then Some (Array.sub buf 0 !k)\n            else Some buf);\n    reset = t.reset;\n    length = batch_len;\n  }\n\nlet map_batch ?drop_last n f t = map f (batch ?drop_last n t)\n\nlet shuffle t =\n  match t.length with\n  | None -> invalid_arg \"Data.shuffle: requires a pipeline with known length\"\n  | Some n ->\n      let perm_tensor = Nx.permutation n in\n      let perm = Array.map Int32.to_int (Nx.to_array perm_tensor) in\n      (* Eagerly materialize the upstream into an array *)\n      let elements =\n        Array.init n (fun _ ->\n            match t.next () with Some v -> v | None -> assert false)\n      in\n      let i = ref 0 in\n      {\n        next =\n          (fun () ->\n            if !i >= n then None\n            else\n              let v = elements.(perm.(!i)) in\n              incr i;\n              Some v);\n        reset = (fun () -> i := 0);\n        length = Some n;\n      }\n\n(* Consumers *)\n\nlet iter f t =\n  let rec loop () =\n    match t.next () with\n    | None -> ()\n    | Some v ->\n        f v;\n        loop ()\n  in\n  loop ()\n\nlet iteri f t =\n  let i = ref 0 in\n  let rec loop () =\n    match t.next () with\n    | None -> ()\n    | Some v ->\n        f !i 
v;\n        incr i;\n        loop ()\n  in\n  loop ()\n\nlet fold f init t =\n  let rec loop acc =\n    match t.next () with None -> acc | Some v -> loop (f acc v)\n  in\n  loop init\n\nlet to_array t =\n  let items = ref [] in\n  iter (fun v -> items := v :: !items) t;\n  Array.of_list (List.rev !items)\n\nlet rec to_seq t () =\n  match t.next () with None -> Seq.Nil | Some v -> Seq.Cons (v, to_seq t)\n\n(* Properties *)\n\nlet reset t = t.reset ()\nlet length t = t.length\n\n(* Utilities *)\n\nlet stack_batch tensors = Nx.stack (Array.to_list tensors)\nlet shuffle_pipeline = shuffle\n\nlet prepare ?(shuffle = false) ~batch_size ?(drop_last = true) (x, y) =\n  let nx = (Nx.shape x).(0) in\n  let ny = (Nx.shape y).(0) in\n  if nx <> ny then\n    invalid_arg\n      (Printf.sprintf \"Data.prepare: first dimensions differ (%d vs %d)\" nx ny);\n  if batch_size <= 0 then\n    invalid_arg\n      (Printf.sprintf \"Data.prepare: expected batch_size > 0, got %d\" batch_size);\n  let indices = of_fn nx Fun.id in\n  let indices = if shuffle then shuffle_pipeline indices else indices in\n  map_batch ~drop_last batch_size\n    (fun idx_arr ->\n      let n = Array.length idx_arr in\n      let xs = Array.init n (fun j -> Nx.slice [ I idx_arr.(j) ] x) in\n      let ys = Array.init n (fun j -> Nx.slice [ I idx_arr.(j) ] y) in\n      (stack_batch xs, stack_batch ys))\n    indices\n"
  },
  {
    "path": "packages/kaun/lib/data.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Lazy, composable data pipelines for training.\n\n    A {!type:t} is a resettable iterator over elements of type ['a]. Pipelines\n    are built by composing constructors, transformers, and consumers.\n\n    {[\n    Data.of_array examples |> Data.shuffle |> Data.map_batch 32 collate\n    |> Data.iter train_step\n    ]} *)\n\n(** {1:types Types} *)\n\ntype 'a t\n(** The type for lazy data pipelines producing elements of type ['a]. *)\n\n(** {1:constructors Constructors} *)\n\nval of_array : 'a array -> 'a t\n(** [of_array a] is a pipeline yielding the elements of [a] in order. *)\n\nval of_tensor : ('a, 'b) Nx.t -> ('a, 'b) Nx.t t\n(** [of_tensor t] is a pipeline yielding slices along the first dimension of\n    [t]. Each element has shape [t.shape[1:]]. *)\n\nval of_tensors :\n  ('a, 'b) Nx.t * ('c, 'd) Nx.t -> (('a, 'b) Nx.t * ('c, 'd) Nx.t) t\n(** [of_tensors (x, y)] is a pipeline yielding paired slices along the first\n    dimension of [x] and [y].\n\n    Raises [Invalid_argument] if [x] and [y] have different first dimension\n    sizes. *)\n\nval of_fn : int -> (int -> 'a) -> 'a t\n(** [of_fn n f] is a pipeline yielding [f 0], [f 1], ..., [f (n - 1)].\n\n    Raises [Invalid_argument] if [n < 0]. *)\n\nval repeat : int -> 'a -> 'a t\n(** [repeat n v] is a pipeline that yields [v] exactly [n] times.\n\n    Raises [Invalid_argument] if [n < 0]. *)\n\n(** {1:transformers Transformers} *)\n\nval map : ('a -> 'b) -> 'a t -> 'b t\n(** [map f t] is a pipeline that applies [f] to each element of [t]. *)\n\nval batch : ?drop_last:bool -> int -> 'a t -> 'a array t\n(** [batch ?drop_last n t] is a pipeline yielding arrays of [n] consecutive\n    elements from [t].\n\n    [drop_last] defaults to [false]. 
When [true], the final batch is dropped if\n    it has fewer than [n] elements.\n\n    Raises [Invalid_argument] if [n <= 0]. *)\n\nval map_batch : ?drop_last:bool -> int -> ('a array -> 'b) -> 'a t -> 'b t\n(** [map_batch ?drop_last n f t] is [map f (batch ?drop_last n t)]. *)\n\nval shuffle : 'a t -> 'a t\n(** [shuffle t] is a pipeline that yields the elements of [t] in a random order.\n    The permutation is computed once when the pipeline is created.\n\n    Random keys are drawn from the implicit RNG scope.\n\n    Raises [Invalid_argument] if [t] has unknown length. *)\n\n(** {1:consumers Consumers} *)\n\nval iter : ('a -> unit) -> 'a t -> unit\n(** [iter f t] applies [f] to each element of [t]. *)\n\nval iteri : (int -> 'a -> unit) -> 'a t -> unit\n(** [iteri f t] applies [f i x] to each element [x] of [t], where [i] is the\n    0-based index. *)\n\nval fold : ('acc -> 'a -> 'acc) -> 'acc -> 'a t -> 'acc\n(** [fold f init t] folds [f] over the elements of [t]. *)\n\nval to_array : 'a t -> 'a array\n(** [to_array t] collects all elements of [t] into an array. *)\n\nval to_seq : 'a t -> 'a Seq.t\n(** [to_seq t] is a standard [Seq.t] view of [t]. Does not reset [t]. *)\n\n(** {1:properties Properties} *)\n\nval reset : 'a t -> unit\n(** [reset t] resets [t] so that iteration starts from the beginning. *)\n\nval length : 'a t -> int option\n(** [length t] is the number of elements in [t], if known. *)\n\n(** {1:utilities Utilities} *)\n\nval stack_batch : ('a, 'b) Nx.t array -> ('a, 'b) Nx.t\n(** [stack_batch tensors] stacks an array of tensors along a new first axis.\n    Equivalent to [Nx.stack (Array.to_list tensors)]. 
*)\n\nval prepare :\n  ?shuffle:bool ->\n  batch_size:int ->\n  ?drop_last:bool ->\n  ('a, 'b) Nx.t * ('c, 'd) Nx.t ->\n  (('a, 'b) Nx.t * ('c, 'd) Nx.t) t\n(** [prepare ?shuffle ~batch_size (x, y)] is a pipeline that yields batched\n    tensor pairs from [x] and [y].\n\n    Each tensor in a yielded pair has [batch_size] as its first dimension.\n\n    [shuffle] defaults to [false]. When [true], elements are yielded in a random\n    order. [drop_last] defaults to [true].\n\n    Raises [Invalid_argument] if [x] and [y] have different first dimension\n    sizes, or if [batch_size <= 0]. *)\n"
  },
  {
    "path": "packages/kaun/lib/datasets/cifar10.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Bigarray\nopen Dataset_utils\n\nlet src = Logs.Src.create \"kaun.datasets.cifar10\" ~doc:\"CIFAR-10 dataset loader\"\n\nmodule Log = (val Logs.src_log src : Logs.LOG)\n\nmodule Config = struct\n  let url = \"https://www.cs.toronto.edu/~kriz/cifar-10-binary.tar.gz\"\n  let cache_subdir = \"cifar10/\"\n  let archive_name = \"cifar-10-binary.tar.gz\"\n  let extracted_subdir = \"cifar-10-batches-bin/\"\n  let height = 32\n  let width = 32\n  let channels = 3\n  let image_size = channels * height * width\n  let entry_size = 1 + image_size\n  let entries_per_batch = 10000\n\n  let train_batches =\n    [\n      \"data_batch_1.bin\";\n      \"data_batch_2.bin\";\n      \"data_batch_3.bin\";\n      \"data_batch_4.bin\";\n      \"data_batch_5.bin\";\n    ]\n\n  let test_batches = [ \"test_batch.bin\" ]\nend\n\nlet ensure_dataset () =\n  let dataset_dir = get_cache_dir Config.cache_subdir in\n  mkdir_p dataset_dir;\n  let archive_path = dataset_dir ^ Config.archive_name in\n  let extracted_dir = dataset_dir ^ Config.extracted_subdir in\n  let check_file = extracted_dir ^ \"test_batch.bin\" in\n  if not (Sys.file_exists check_file) then (\n    ensure_file Config.url archive_path;\n    if\n      not\n        (ensure_extracted_tar_gz ~tar_gz_path:archive_path\n           ~target_dir:dataset_dir ~check_file)\n    then\n      failwith\n        (Printf.sprintf \"Failed to extract CIFAR-10 archive to %s\" extracted_dir));\n  extracted_dir\n\nlet read_batch_file ~extracted_dir filename =\n  let path = extracted_dir ^ filename in\n  Log.debug (fun m -> m \"Reading CIFAR-10 batch: %s\" path);\n  let ic = open_in_bin path in\n  Fun.protect\n    ~finally:(fun () -> close_in ic)\n    (fun () ->\n      let s = 
really_input_string ic (in_channel_length ic) in\n      let num_entries = String.length s / Config.entry_size in\n      if String.length s <> num_entries * Config.entry_size then\n        failwith\n          (Printf.sprintf\n             \"CIFAR-10 batch %s has unexpected size %d (expected multiple of \\\n              %d)\"\n             filename (String.length s) Config.entry_size);\n      (s, num_entries))\n\nlet load () =\n  let extracted_dir = ensure_dataset () in\n  let load_split batch_files expected_total =\n    let images =\n      Genarray.create int8_unsigned c_layout\n        [| expected_total; Config.channels; Config.height; Config.width |]\n    in\n    let labels = Array1.create int8_unsigned c_layout expected_total in\n    let flat = Bigarray.reshape_1 images (expected_total * Config.image_size) in\n    let offset = ref 0 in\n    List.iter\n      (fun filename ->\n        let s, num_entries = read_batch_file ~extracted_dir filename in\n        for i = 0 to num_entries - 1 do\n          let entry_offset = i * Config.entry_size in\n          let idx = !offset + i in\n          Array1.unsafe_set labels idx (Char.code s.[entry_offset]);\n          let img_offset = entry_offset + 1 in\n          let base = idx * Config.image_size in\n          for p = 0 to Config.image_size - 1 do\n            Array1.unsafe_set flat (base + p)\n              (Char.code (String.unsafe_get s (img_offset + p)))\n          done\n        done;\n        offset := !offset + num_entries)\n      batch_files;\n    (images, labels)\n  in\n  Log.info (fun m -> m \"Loading CIFAR-10 datasets...\");\n  let train_images, train_labels =\n    load_split Config.train_batches\n      (List.length Config.train_batches * Config.entries_per_batch)\n  in\n  let test_images, test_labels =\n    load_split Config.test_batches\n      (List.length Config.test_batches * Config.entries_per_batch)\n  in\n  Log.info (fun m -> m \"CIFAR-10 loading complete\");\n  ((train_images, train_labels), (test_images, 
test_labels))\n"
  },
  {
    "path": "packages/kaun/lib/datasets/dataset_utils.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nlet src = Logs.Src.create \"kaun.datasets\" ~doc:\"Kaun datasets\"\n\nmodule Log = (val Logs.src_log src : Logs.LOG)\n\nlet mkdir_p path =\n  if path = \"\" || path = \".\" || path = Filename.dir_sep then ()\n  else\n    let components =\n      String.split_on_char Filename.dir_sep.[0] path |> List.filter (( <> ) \"\")\n    in\n    let is_absolute = path <> \"\" && path.[0] = Filename.dir_sep.[0] in\n    let initial_prefix = if is_absolute then Filename.dir_sep else \".\" in\n    ignore\n      (List.fold_left\n         (fun prefix comp ->\n           let next =\n             if prefix = Filename.dir_sep then Filename.dir_sep ^ comp\n             else Filename.concat prefix comp\n           in\n           (if Sys.file_exists next then (\n              if not (Sys.is_directory next) then\n                failwith\n                  (Printf.sprintf \"mkdir_p: '%s' exists but is not a directory\"\n                     next))\n            else\n              try Unix.mkdir next 0o755\n              with Unix.Unix_error (Unix.EEXIST, _, _) ->\n                if not (Sys.is_directory next) then\n                  failwith\n                    (Printf.sprintf\n                       \"mkdir_p: '%s' appeared as non-directory after EEXIST\"\n                       next));\n           next)\n         initial_prefix components)\n\nlet get_cache_dir ?(getenv = Sys.getenv_opt) dataset_name =\n  let root =\n    match getenv \"RAVEN_CACHE_ROOT\" with\n    | Some dir when dir <> \"\" -> dir\n    | _ ->\n        let xdg =\n          match getenv \"XDG_CACHE_HOME\" with\n          | Some d when d <> \"\" -> d\n          | _ -> Filename.concat (Sys.getenv \"HOME\") \".cache\"\n        in\n        Filename.concat xdg 
\"raven\"\n  in\n  let path =\n    List.fold_left Filename.concat root (\"datasets\" :: [ dataset_name ])\n  in\n  let sep = Filename.dir_sep.[0] in\n  if path <> \"\" && path.[String.length path - 1] = sep then path\n  else path ^ Filename.dir_sep\n\nlet curl_download ~url ~dest () =\n  let check =\n    lazy (Unix.system \"command -v curl >/dev/null 2>&1\" = Unix.WEXITED 0)\n  in\n  if not (Lazy.force check) then failwith \"curl not found on PATH\";\n  mkdir_p (Filename.dirname dest);\n  let cmd =\n    Printf.sprintf \"curl -L --fail -s -o %s %s\" (Filename.quote dest)\n      (Filename.quote url)\n  in\n  match Unix.system cmd with\n  | Unix.WEXITED 0 -> ()\n  | _ ->\n      (try Sys.remove dest with Sys_error _ -> ());\n      failwith (Printf.sprintf \"Failed to download %s\" url)\n\nlet download_file url dest_path =\n  Log.info (fun m -> m \"Downloading %s to %s\" (Filename.basename url) dest_path);\n  curl_download ~url ~dest:dest_path ();\n  Log.info (fun m -> m \"Downloaded %s\" (Filename.basename dest_path))\n\nlet ensure_file url dest_path =\n  if not (Sys.file_exists dest_path) then download_file url dest_path\n  else Log.debug (fun m -> m \"Found %s\" dest_path)\n\nlet ensure_decompressed_gz ~gz_path ~target_path =\n  if Sys.file_exists target_path then (\n    Log.debug (fun m -> m \"Found %s\" target_path);\n    true)\n  else if Sys.file_exists gz_path then (\n    Log.info (fun m -> m \"Decompressing %s...\" gz_path);\n    let ic = Gzip.open_in gz_path in\n    let oc = open_out_bin target_path in\n    Fun.protect\n      ~finally:(fun () ->\n        Gzip.close_in ic;\n        close_out oc)\n      (fun () ->\n        let buf = Bytes.create 4096 in\n        let rec loop () =\n          let n = Gzip.input ic buf 0 4096 in\n          if n > 0 then (\n            output oc buf 0 n;\n            loop ())\n        in\n        loop ());\n    Log.info (fun m -> m \"Decompressed to %s\" target_path);\n    true)\n  else (\n    Log.warn (fun m -> m \"Compressed file 
%s not found\" gz_path);\n    false)\n\nlet ensure_extracted_tar_gz ~tar_gz_path ~target_dir ~check_file =\n  if Sys.file_exists check_file then (\n    Log.debug (fun m -> m \"Found %s\" check_file);\n    true)\n  else if Sys.file_exists tar_gz_path then (\n    Log.info (fun m -> m \"Extracting %s...\" tar_gz_path);\n    mkdir_p target_dir;\n    let cmd =\n      Printf.sprintf \"tar -xzf %s -C %s\"\n        (Filename.quote tar_gz_path)\n        (Filename.quote target_dir)\n    in\n    match Unix.system cmd with\n    | Unix.WEXITED 0 ->\n        Log.info (fun m -> m \"Extracted to %s\" target_dir);\n        true\n    | _ ->\n        Log.warn (fun m -> m \"Failed to extract %s\" tar_gz_path);\n        false)\n  else (\n    Log.warn (fun m -> m \"Archive %s not found\" tar_gz_path);\n    false)\n"
  },
  {
    "path": "packages/kaun/lib/datasets/dune",
    "content": "(library\n (name kaun_datasets)\n (public_name kaun.datasets)\n (libraries unix zip rune nx kaun logs))\n"
  },
  {
    "path": "packages/kaun/lib/datasets/kaun_datasets.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nlet mnist ?(fashion = false) ?(normalize = true) ?(data_format = `NCHW) () =\n  let (train_images, train_labels), (test_images, test_labels) =\n    Mnist.load ~fashion_mnist:fashion\n  in\n  let make_tensors images labels =\n    let n = Bigarray.Array3.dim1 images in\n    let h = Bigarray.Array3.dim2 images in\n    let w = Bigarray.Array3.dim3 images in\n    let x =\n      Nx.of_bigarray (Bigarray.genarray_of_array3 images)\n      |> Nx.reshape [| n; h; w; 1 |]\n      |> Nx.cast Nx.float32\n    in\n    let x = if normalize then Nx.div_s x 255.0 else x in\n    let x =\n      match data_format with\n      | `NCHW -> Nx.transpose x ~axes:[ 0; 3; 1; 2 ]\n      | `NHWC -> x\n    in\n    let y =\n      Nx.of_bigarray (Bigarray.genarray_of_array1 labels) |> Nx.cast Nx.int32\n    in\n    (x, y)\n  in\n  let train = make_tensors train_images train_labels in\n  let test = make_tensors test_images test_labels in\n  (train, test)\n\nlet cifar10 ?(normalize = true) ?(data_format = `NCHW) () =\n  let (train_images, train_labels), (test_images, test_labels) =\n    Cifar10.load ()\n  in\n  let make_tensors images labels =\n    let x = Nx.of_bigarray images |> Nx.cast Nx.float32 in\n    let x = if normalize then Nx.div_s x 255.0 else x in\n    let x =\n      match data_format with\n      | `NCHW -> x\n      | `NHWC -> Nx.transpose x ~axes:[ 0; 2; 3; 1 ]\n    in\n    let y =\n      Nx.of_bigarray (Bigarray.genarray_of_array1 labels) |> Nx.cast Nx.int32\n    in\n    (x, y)\n  in\n  let train = make_tensors train_images train_labels in\n  let test = make_tensors test_images test_labels in\n  (train, test)\n"
  },
  {
    "path": "packages/kaun/lib/datasets/kaun_datasets.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Dataset loaders for kaun.\n\n    Datasets are downloaded on demand and cached locally under\n    [$RAVEN_CACHE_ROOT/datasets/] (or [$XDG_CACHE_HOME/raven/datasets/]). *)\n\nval mnist :\n  ?fashion:bool ->\n  ?normalize:bool ->\n  ?data_format:[ `NCHW | `NHWC ] ->\n  unit ->\n  (Nx.float32_t * Nx.int32_t) * (Nx.float32_t * Nx.int32_t)\n(** [mnist ()] is [((x_train, y_train), (x_test, y_test))].\n\n    Images are float32 in \\[0, 1\\] (when [normalize] is [true], the default).\n    Labels are int32 class indices.\n\n    [fashion] selects Fashion-MNIST when [true]. Defaults to [false].\n    [data_format] defaults to [`NCHW].\n\n    Tensor shapes:\n    - [`NCHW]: images [[N; 1; 28; 28]], labels [[N]]\n    - [`NHWC]: images [[N; 28; 28; 1]], labels [[N]]\n\n    Raises [Failure] on download or parsing errors. *)\n\nval cifar10 :\n  ?normalize:bool ->\n  ?data_format:[ `NCHW | `NHWC ] ->\n  unit ->\n  (Nx.float32_t * Nx.int32_t) * (Nx.float32_t * Nx.int32_t)\n(** [cifar10 ()] is [((x_train, y_train), (x_test, y_test))].\n\n    Images are float32 in \\[0, 1\\] (when [normalize] is [true], the default).\n    Labels are int32 class indices (0--9: airplane, automobile, bird, cat, deer,\n    dog, frog, horse, ship, truck).\n\n    [data_format] defaults to [`NCHW].\n\n    Tensor shapes:\n    - [`NCHW]: images [[N; 3; 32; 32]], labels [[N]]\n    - [`NHWC]: images [[N; 32; 32; 3]], labels [[N]]\n\n    Raises [Failure] on download, extraction, or parsing errors. *)\n"
  },
  {
    "path": "packages/kaun/lib/datasets/mnist.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Bigarray\nopen Dataset_utils\n\nlet src = Logs.Src.create \"kaun.datasets.mnist\" ~doc:\"MNIST dataset loader\"\n\nmodule Log = (val Logs.src_log src : Logs.LOG)\n\nmodule Config = struct\n  type t = {\n    name : string;\n    cache_subdir : string;\n    train_images_url : string;\n    train_labels_url : string;\n    test_images_url : string;\n    test_labels_url : string;\n    image_magic_number : int;\n    label_magic_number : int;\n  }\n\n  let mnist =\n    {\n      name = \"MNIST\";\n      cache_subdir = \"mnist/\";\n      train_images_url =\n        \"https://ossci-datasets.s3.amazonaws.com/mnist/train-images-idx3-ubyte.gz\";\n      train_labels_url =\n        \"https://ossci-datasets.s3.amazonaws.com/mnist/train-labels-idx1-ubyte.gz\";\n      test_images_url =\n        \"https://ossci-datasets.s3.amazonaws.com/mnist/t10k-images-idx3-ubyte.gz\";\n      test_labels_url =\n        \"https://ossci-datasets.s3.amazonaws.com/mnist/t10k-labels-idx1-ubyte.gz\";\n      image_magic_number = 2051;\n      label_magic_number = 2049;\n    }\n\n  let fashion_mnist =\n    {\n      name = \"Fashion-MNIST\";\n      cache_subdir = \"fashion-mnist/\";\n      train_images_url =\n        \"http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-images-idx3-ubyte.gz\";\n      train_labels_url =\n        \"http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-labels-idx1-ubyte.gz\";\n      test_images_url =\n        \"http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/t10k-images-idx3-ubyte.gz\";\n      test_labels_url =\n        \"http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/t10k-labels-idx1-ubyte.gz\";\n      image_magic_number = 2051;\n      label_magic_number = 
2049;\n    }\nend\n\nlet read_int32_be s pos =\n  let b1 = Char.code s.[pos] in\n  let b2 = Char.code s.[pos + 1] in\n  let b3 = Char.code s.[pos + 2] in\n  let b4 = Char.code s.[pos + 3] in\n  (b1 lsl 24) lor (b2 lsl 16) lor (b3 lsl 8) lor b4\n\nlet ensure_dataset config =\n  let dataset_dir = get_cache_dir config.Config.cache_subdir in\n  mkdir_p dataset_dir;\n  let files_to_process =\n    [\n      (\"train-images-idx3-ubyte\", config.Config.train_images_url);\n      (\"train-labels-idx1-ubyte\", config.Config.train_labels_url);\n      (\"t10k-images-idx3-ubyte\", config.Config.test_images_url);\n      (\"t10k-labels-idx1-ubyte\", config.Config.test_labels_url);\n    ]\n  in\n  List.iter\n    (fun (base_filename, url) ->\n      let gz_filename = base_filename ^ \".gz\" in\n      let gz_path = dataset_dir ^ gz_filename in\n      let path = dataset_dir ^ base_filename in\n      if not (Sys.file_exists path) then (\n        Log.debug (fun m ->\n            m \"File %s not found for %s dataset\" base_filename config.name);\n        ensure_file url gz_path;\n        if not (ensure_decompressed_gz ~gz_path ~target_path:path) then\n          failwith (Printf.sprintf \"Failed to obtain decompressed file %s\" path))\n      else Log.debug (fun m -> m \"Found decompressed file %s\" path))\n    files_to_process\n\nlet read_idx_file ~read_header ~create_array ~populate_array ~expected_magic\n    config filename =\n  Log.debug (fun m -> m \"Reading %s file: %s\" config.Config.name filename);\n  let ic = open_in_bin filename in\n  let s =\n    Fun.protect\n      ~finally:(fun () -> close_in ic)\n      (fun () -> really_input_string ic (in_channel_length ic))\n  in\n  let magic = read_int32_be s 0 in\n  if magic <> expected_magic then\n    failwith\n      (Printf.sprintf \"Invalid magic number %d in %s (expected %d)\" magic\n         filename expected_magic);\n  let dimensions, data_offset = read_header s in\n  let total_items, data_len =\n    match dimensions with\n    | [| d1 
|] -> (d1, d1)\n    | [| d1; d2; d3 |] -> (d1, d1 * d2 * d3)\n    | _ -> failwith \"Unsupported dimension format\"\n  in\n  let expected_len = data_offset + data_len in\n  if String.length s <> expected_len then\n    failwith\n      (Printf.sprintf\n         \"File %s has unexpected length: %d vs %d (header offset %d, data len \\\n          %d)\"\n         filename (String.length s) expected_len data_offset data_len);\n  let arr = create_array dimensions in\n  populate_array arr s data_offset total_items;\n  arr\n\nlet read_images config filename =\n  let read_header s =\n    let num_images = read_int32_be s 4 in\n    let num_rows = read_int32_be s 8 in\n    let num_cols = read_int32_be s 12 in\n    ([| num_images; num_rows; num_cols |], 16)\n  in\n  let create_array dims =\n    Array3.create int8_unsigned c_layout dims.(0) dims.(1) dims.(2)\n  in\n  let populate_array arr s offset _total_items =\n    let num_images = Array3.dim1 arr in\n    let num_rows = Array3.dim2 arr in\n    let num_cols = Array3.dim3 arr in\n    let img_size = num_rows * num_cols in\n    for i = 0 to num_images - 1 do\n      let start_pos = offset + (i * img_size) in\n      for r = 0 to num_rows - 1 do\n        for c = 0 to num_cols - 1 do\n          let pos = start_pos + (r * num_cols) + c in\n          arr.{i, r, c} <- Char.code s.[pos]\n        done\n      done\n    done\n  in\n  read_idx_file ~read_header ~create_array ~populate_array\n    ~expected_magic:config.Config.image_magic_number config filename\n\nlet read_labels config filename =\n  let read_header s =\n    let num_labels = read_int32_be s 4 in\n    ([| num_labels |], 8)\n  in\n  let create_array dims = Array1.create int8_unsigned c_layout dims.(0) in\n  let populate_array arr s offset total_items =\n    for i = 0 to total_items - 1 do\n      arr.{i} <- Char.code s.[offset + i]\n    done\n  in\n  read_idx_file ~read_header ~create_array ~populate_array\n    ~expected_magic:config.Config.label_magic_number config filename\n\nlet 
load ~fashion_mnist =\n  let config = if fashion_mnist then Config.fashion_mnist else Config.mnist in\n  ensure_dataset config;\n  let dataset_dir = get_cache_dir config.Config.cache_subdir in\n  let train_images_path = dataset_dir ^ \"train-images-idx3-ubyte\" in\n  let train_labels_path = dataset_dir ^ \"train-labels-idx1-ubyte\" in\n  let test_images_path = dataset_dir ^ \"t10k-images-idx3-ubyte\" in\n  let test_labels_path = dataset_dir ^ \"t10k-labels-idx1-ubyte\" in\n  Log.info (fun m -> m \"Loading %s datasets...\" config.name);\n  let train_images = read_images config train_images_path in\n  let train_labels = read_labels config train_labels_path in\n  let test_images = read_images config test_images_path in\n  let test_labels = read_labels config test_labels_path in\n  Log.info (fun m -> m \"%s loading complete\" config.name);\n  ((train_images, train_labels), (test_images, test_labels))\n"
  },
  {
    "path": "packages/kaun/lib/dune",
    "content": "(library\n (name kaun)\n (public_name kaun)\n (libraries rune vega nx nx.core nx.io))\n"
  },
  {
    "path": "packages/kaun/lib/fn.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nlet invalid_argf fmt = Printf.ksprintf invalid_arg fmt\n\nlet invalid_argf_fn fn fmt =\n  Printf.ksprintf (fun msg -> invalid_argf \"Fn.%s: %s\" fn msg) fmt\n\n(* Helpers *)\n\nlet normalize_axis ~fn ~ndim ax =\n  let axis = if ax < 0 then ndim + ax else ax in\n  if axis < 0 || axis >= ndim then\n    invalid_argf_fn fn \"axis %d out of bounds for rank %d\" ax ndim;\n  axis\n\nlet normalize_axes ~fn ~ndim axes =\n  match axes with\n  | [] -> invalid_argf_fn fn \"axes must contain at least one axis\"\n  | lst -> List.map (normalize_axis ~fn ~ndim) lst\n\nlet keep_shape ~axes x_shape =\n  Array.mapi\n    (fun idx dim -> if List.exists (fun ax -> ax = idx) axes then 1 else dim)\n    x_shape\n\nlet unaffected_axes ~ndim ~axes =\n  Array.init ndim Fun.id |> Array.to_list\n  |> List.filter (fun ax -> not (List.exists (( = ) ax) axes))\n\nlet core_shape ~axes:unaffected x_shape =\n  Array.of_list (List.map (fun ax -> x_shape.(ax)) unaffected)\n\nlet broadcast_param ~fn ~name ~x_shape ~keep_shape ~core_shape param =\n  let param_shape = Nx.shape param in\n  if param_shape = keep_shape then param\n  else if param_shape = core_shape then Nx.reshape keep_shape param\n  else if param_shape = x_shape then param\n  else\n    invalid_argf_fn fn \"%s: shape must match normalized axes or remaining axes\"\n      name\n\n(* Normalization *)\n\nlet batch_norm ?axes ?(epsilon = 1e-5) ~scale ~bias x =\n  let ndim = Nx.ndim x in\n  let axes =\n    let default =\n      match axes with\n      | Some ax -> ax\n      | None ->\n          if ndim = 2 then [ 0 ] else if ndim = 4 then [ 0; 2; 3 ] else [ 0 ]\n    in\n    normalize_axes ~fn:\"batch_norm\" ~ndim default\n  in\n  let x_shape = Nx.shape x in\n  let keep = keep_shape 
~axes x_shape in\n  let unaffected = unaffected_axes ~ndim ~axes in\n  let core = core_shape ~axes:unaffected x_shape in\n  let broadcast name param =\n    let param =\n      if Nx.dtype param <> Nx.dtype x then Nx.cast (Nx.dtype x) param else param\n    in\n    broadcast_param ~fn:\"batch_norm\" ~name ~x_shape ~keep_shape:keep\n      ~core_shape:core param\n  in\n  let mean_x = Nx.mean x ~axes ~keepdims:true in\n  let variance = Nx.var x ~axes ~keepdims:true in\n  let eps = Nx.scalar_like x epsilon in\n  let normalized = Nx.mul (Nx.sub x mean_x) (Nx.rsqrt (Nx.add variance eps)) in\n  let scale_b = broadcast \"scale\" scale in\n  let bias_b = broadcast \"bias\" bias in\n  Nx.add (Nx.mul normalized scale_b) bias_b\n\nlet rms_norm ?axes ?(epsilon = 1e-5) ?gamma x =\n  let ndim = Nx.ndim x in\n  let axes =\n    let default = match axes with Some ax -> ax | None -> [ -1 ] in\n    normalize_axes ~fn:\"rms_norm\" ~ndim default\n  in\n  let x_shape = Nx.shape x in\n  let keep = keep_shape ~axes x_shape in\n  let mean_square = Nx.mean (Nx.mul x x) ~axes ~keepdims:true in\n  let eps = Nx.scalar_like x epsilon in\n  let normalized = Nx.mul x (Nx.rsqrt (Nx.add mean_square eps)) in\n  match gamma with\n  | None -> normalized\n  | Some gamma ->\n      let gamma_shape = Nx.shape gamma in\n      let gamma =\n        if gamma_shape = keep then gamma\n        else\n          let unaffected = unaffected_axes ~ndim ~axes in\n          let core = core_shape ~axes:unaffected x_shape in\n          if gamma_shape = core then Nx.reshape keep gamma\n          else if gamma_shape = x_shape then gamma\n          else\n            invalid_argf_fn \"rms_norm\"\n              \"gamma: shape must match normalized axes or remaining axes\"\n      in\n      Nx.mul normalized gamma\n\nlet layer_norm ?(axes = [ -1 ]) ?(epsilon = 1e-5) ?gamma ?beta x =\n  let ndim = Nx.ndim x in\n  let axes =\n    List.map\n      (fun ax ->\n        let axis = if ax < 0 then ndim + ax else ax in\n        if axis < 0 
|| axis >= ndim then\n          invalid_argf_fn \"layer_norm\" \"axis %d out of bounds for rank %d\" ax\n            ndim;\n        axis)\n      axes\n  in\n  let x_shape = Nx.shape x in\n  let keep =\n    Array.mapi (fun idx dim -> if List.mem idx axes then dim else 1) x_shape\n  in\n  let broadcast_param name param =\n    let param_shape = Nx.shape param in\n    if param_shape = x_shape then param\n    else if param_shape = keep then param\n    else\n      let axes_shape = Array.of_list (List.map (fun ax -> x_shape.(ax)) axes) in\n      if param_shape = axes_shape then Nx.reshape keep param\n      else\n        invalid_argf_fn \"layer_norm\" \"%s: shape must match normalized axes\" name\n  in\n  let mean_x = Nx.mean x ~axes ~keepdims:true in\n  let centered = Nx.sub x mean_x in\n  let variance = Nx.mean (Nx.mul centered centered) ~axes ~keepdims:true in\n  let eps = Nx.scalar_like x epsilon in\n  let inv_std = Nx.rsqrt (Nx.add variance eps) in\n  let normalized = Nx.mul centered inv_std in\n  let with_scale =\n    match gamma with\n    | None -> normalized\n    | Some gamma ->\n        let gamma_broadcast = broadcast_param \"gamma\" gamma in\n        Nx.mul normalized gamma_broadcast\n  in\n  match beta with\n  | None -> with_scale\n  | Some beta ->\n      let beta_broadcast = broadcast_param \"beta\" beta in\n      Nx.add with_scale beta_broadcast\n\n(* Embedding *)\n\nlet embedding ?(scale = true) ~embedding indices =\n  let embed_shape = Nx.shape embedding in\n  if Array.length embed_shape <> 2 then\n    invalid_argf_fn \"embedding\"\n      \"embedding matrix must have shape [vocab_size; embed_dim]\";\n  let embed_dim = embed_shape.(1) in\n  let indices_shape = Nx.shape indices in\n  let is_scalar = Array.length indices_shape = 0 in\n  let vocab_size = embed_shape.(0) in\n  if vocab_size <= 0 then\n    invalid_argf_fn \"embedding\" \"vocabulary dimension must be positive\";\n  let flat_size = Array.fold_left ( * ) 1 indices_shape in\n  let indices_flat =\n    
if is_scalar then Nx.reshape [| 1 |] indices\n    else Nx.reshape [| flat_size |] indices\n  in\n  let gathered = Nx.take ~axis:0 indices_flat embedding in\n  let output_shape =\n    if is_scalar then [| embed_dim |]\n    else Array.append indices_shape [| embed_dim |]\n  in\n  let embedded = Nx.reshape output_shape gathered in\n  if not scale then embedded\n  else\n    let factor =\n      Nx.scalar_like embedding (Stdlib.sqrt (float_of_int embed_dim))\n    in\n    Nx.mul embedded factor\n\n(* Dropout *)\n\nlet dropout ~rate x =\n  if rate < 0.0 || rate >= 1.0 then\n    invalid_argf_fn \"dropout\" \"rate must satisfy 0.0 <= rate < 1.0\";\n  let tensor_dtype = Nx.dtype x in\n  if not (Nx_core.Dtype.is_float tensor_dtype) then\n    invalid_argf_fn \"dropout\" \"requires floating point dtype\";\n  if rate = 0.0 then x\n  else\n    let keep_prob = 1.0 -. rate in\n    let random_vals = Nx.rand tensor_dtype (Nx.shape x) in\n    let threshold = Nx.scalar_like x keep_prob in\n    let keep_mask = Nx.less random_vals threshold in\n    let keep_mask_float = Nx.cast tensor_dtype keep_mask in\n    let scale = Nx.scalar_like x (1.0 /. 
keep_prob) in\n    Nx.mul x (Nx.mul keep_mask_float scale)\n\n(* Attention *)\n\nlet dot_product_attention (type b) ?attention_mask ?scale ?dropout_rate\n    ?(is_causal = false) (q : (float, b) Nx.t) (k : (float, b) Nx.t)\n    (v : (float, b) Nx.t) =\n  let check_float name (t : (float, b) Nx.t) =\n    match Nx.dtype t with\n    | Nx.Float16 -> ()\n    | Nx.Float32 -> ()\n    | Nx.Float64 -> ()\n    | _ ->\n        invalid_argf_fn \"dot_product_attention\"\n          \"%s: requires floating point dtype\" name\n  in\n  check_float \"query\" q;\n  check_float \"key\" k;\n  check_float \"value\" v;\n  let q_shape = Nx.shape q in\n  let k_shape = Nx.shape k in\n  let v_shape = Nx.shape v in\n  let q_rank = Array.length q_shape in\n  if q_rank < 2 then\n    invalid_argf_fn \"dot_product_attention\" \"query: must have rank >= 2\";\n  if Array.length k_shape <> q_rank || Array.length v_shape <> q_rank then\n    invalid_argf_fn \"dot_product_attention\" \"key/value: must match query rank\";\n  let depth = q_shape.(q_rank - 1) in\n  if k_shape.(q_rank - 1) <> depth then\n    invalid_argf_fn \"dot_product_attention\"\n      \"key last dim %d does not match query last dim %d\"\n      k_shape.(q_rank - 1)\n      depth;\n  let scale_factor =\n    match scale with\n    | Some s -> s\n    | None -> 1.0 /. 
Stdlib.sqrt (float_of_int depth)\n  in\n  let transpose_last_two tensor =\n    let nd = Array.length (Nx.shape tensor) in\n    if nd < 2 then\n      invalid_argf_fn \"dot_product_attention\" \"key/value: must have rank >= 2\";\n    let axes = Array.init nd Fun.id in\n    let tmp = axes.(nd - 1) in\n    axes.(nd - 1) <- axes.(nd - 2);\n    axes.(nd - 2) <- tmp;\n    Nx.transpose tensor ~axes:(Array.to_list axes)\n  in\n  let k_t = transpose_last_two k in\n  let scores = Nx.matmul q k_t in\n  let scores =\n    if scale_factor = 1.0 then scores\n    else Nx.mul scores (Nx.scalar_like scores scale_factor)\n  in\n  let scores =\n    if is_causal then (\n      let scores_shape = Nx.shape scores in\n      let seq_len_q = scores_shape.(q_rank - 2) in\n      let seq_len_k = scores_shape.(q_rank - 1) in\n      if seq_len_q <> seq_len_k then\n        invalid_argf_fn \"dot_product_attention\"\n          \"causal masking requires seq_len_q == seq_len_k\";\n      let ones_matrix =\n        Nx.full (Nx.dtype scores) [| seq_len_q; seq_len_k |] 1.0\n      in\n      let causal_mask = Nx.tril ones_matrix in\n      let causal_mask = Nx.cast Nx.bool causal_mask in\n      let causal_mask = Nx.broadcast_to scores_shape causal_mask in\n      let neg_inf = Nx.scalar_like scores (-1e9) in\n      Nx.where causal_mask scores neg_inf)\n    else scores\n  in\n  let scores =\n    match attention_mask with\n    | None -> scores\n    | Some mask ->\n        let neg_inf = Nx.scalar_like scores (-1e9) in\n        Nx.where mask scores neg_inf\n  in\n  let probs = Nx.softmax ~axes:[ -1 ] scores in\n  let probs =\n    match dropout_rate with None -> probs | Some rate -> dropout ~rate probs\n  in\n  Nx.matmul probs v\n\n(* Conv / Pool helpers *)\n\nlet ceildiv a b = (a + b - 1) / b\n\nlet calculate_nn_padding input_spatial ~kernel_size ~stride ~dilation\n    ~(padding : [ `Same | `Valid ]) =\n  let k = Array.length kernel_size in\n  match padding with\n  | `Valid -> Array.make k (0, 0)\n  | `Same ->\n   
   Array.init k (fun i ->\n          let eff_k = (dilation.(i) * (kernel_size.(i) - 1)) + 1 in\n          let out = ceildiv input_spatial.(i) stride.(i) in\n          let total =\n            Stdlib.max 0 (((out - 1) * stride.(i)) + eff_k - input_spatial.(i))\n          in\n          (total / 2, total - (total / 2)))\n\nlet apply_ceil_mode input_spatial ~kernel_size ~stride ~dilation ~padding\n    ~ceil_mode =\n  if not ceil_mode then padding\n  else\n    Array.init (Array.length kernel_size) (fun i ->\n        let pb, pa = padding.(i) in\n        let padded = input_spatial.(i) + pb + pa in\n        let eff_k = (dilation.(i) * (kernel_size.(i) - 1)) + 1 in\n        let out_floor = ((padded - eff_k) / stride.(i)) + 1 in\n        let out_ceil = ceildiv (padded - eff_k) stride.(i) + 1 in\n        if out_ceil > out_floor then\n          let extra = ((out_ceil - 1) * stride.(i)) + eff_k - padded in\n          (pb, pa + extra)\n        else (pb, pa))\n\n(* Convolution *)\n\nlet conv1d ?(groups = 1) ?(stride = 1) ?(dilation = 1) ?(padding = `Valid) ?bias\n    x w =\n  let x_shape = Nx.shape x in\n  let w_shape = Nx.shape w in\n  if Array.length x_shape <> 3 then\n    invalid_argf_fn \"conv1d\" \"input must be 3D (N, C_in, L)\";\n  if Array.length w_shape <> 3 then\n    invalid_argf_fn \"conv1d\" \"weight must be 3D (C_out, C_in/groups, K)\";\n  let n = x_shape.(0) in\n  let cin = x_shape.(1) in\n  let cout = w_shape.(0) in\n  let cin_per_group = w_shape.(1) in\n  if cin <> groups * cin_per_group then\n    invalid_argf_fn \"conv1d\" \"C_in=%d does not match groups=%d * C_in/g=%d\" cin\n      groups cin_per_group;\n  let kernel_size = [| w_shape.(2) |] in\n  let stride_arr = [| stride |] in\n  let dilation_arr = [| dilation |] in\n  let input_spatial = [| x_shape.(2) |] in\n  let pad_pairs =\n    calculate_nn_padding input_spatial ~kernel_size ~stride:stride_arr\n      ~dilation:dilation_arr ~padding\n  in\n  let kernel_elements = w_shape.(2) in\n  (* unfold: (N, C_in, 
L_in) -> (N, C_in, K, L_out) *)\n  let x_unf =\n    Nx.extract_patches ~kernel_size ~stride:stride_arr ~dilation:dilation_arr\n      ~padding:pad_pairs x\n  in\n  let x_unf_shape = Nx.shape x_unf in\n  let l_out = x_unf_shape.(3) in\n  (* Merge channels and kernel: (N, C_in*K, L_out) *)\n  let x_col = Nx.reshape [| n; cin * kernel_elements; l_out |] x_unf in\n  let result =\n    if groups = 1 then\n      let w_flat = Nx.reshape [| cout; cin * kernel_elements |] w in\n      Nx.matmul w_flat x_col\n    else\n      let rcout = cout / groups in\n      let x_grouped =\n        Nx.reshape [| n; groups; cin_per_group * kernel_elements; l_out |] x_col\n      in\n      let w_grouped =\n        Nx.reshape [| groups; rcout; cin_per_group * kernel_elements |] w\n      in\n      let x_batched =\n        Nx.reshape\n          [| n * groups; cin_per_group * kernel_elements; l_out |]\n          x_grouped\n      in\n      let w_expanded = Nx.unsqueeze ~axes:[ 0 ] w_grouped in\n      let w_expanded =\n        Nx.expand\n          [| n; groups; rcout; cin_per_group * kernel_elements |]\n          w_expanded\n      in\n      let w_expanded =\n        Nx.reshape\n          [| n * groups; rcout; cin_per_group * kernel_elements |]\n          w_expanded\n      in\n      let result = Nx.matmul w_expanded x_batched in\n      let result = Nx.reshape [| n; groups; rcout; l_out |] result in\n      Nx.reshape [| n; cout; l_out |] result\n  in\n  match bias with\n  | None -> result\n  | Some b -> Nx.add result (Nx.reshape [| 1; cout; 1 |] b)\n\nlet conv2d ?(groups = 1) ?(stride = (1, 1)) ?(dilation = (1, 1))\n    ?(padding = `Valid) ?bias x w =\n  let x_shape = Nx.shape x in\n  let w_shape = Nx.shape w in\n  if Array.length x_shape <> 4 then\n    invalid_argf_fn \"conv2d\" \"input must be 4D (N, C_in, H, W)\";\n  if Array.length w_shape <> 4 then\n    invalid_argf_fn \"conv2d\" \"weight must be 4D (C_out, C_in/groups, kH, kW)\";\n  let n = x_shape.(0) in\n  let cin = x_shape.(1) in\n  let cout = 
w_shape.(0) in\n  let cin_per_group = w_shape.(1) in\n  if cin <> groups * cin_per_group then\n    invalid_argf_fn \"conv2d\" \"C_in=%d does not match groups=%d * C_in/g=%d\" cin\n      groups cin_per_group;\n  let sh, sw = stride in\n  let dh, dw = dilation in\n  let kernel_size = [| w_shape.(2); w_shape.(3) |] in\n  let stride_arr = [| sh; sw |] in\n  let dilation_arr = [| dh; dw |] in\n  let input_spatial = [| x_shape.(2); x_shape.(3) |] in\n  let pad_pairs =\n    calculate_nn_padding input_spatial ~kernel_size ~stride:stride_arr\n      ~dilation:dilation_arr ~padding\n  in\n  let kernel_elements = w_shape.(2) * w_shape.(3) in\n  (* unfold: (N, C_in, H, W) -> (N, C_in, kH*kW, L) *)\n  let x_unf =\n    Nx.extract_patches ~kernel_size ~stride:stride_arr ~dilation:dilation_arr\n      ~padding:pad_pairs x\n  in\n  let x_unf_shape = Nx.shape x_unf in\n  let l_out = x_unf_shape.(3) in\n  (* Merge channels and kernel: (N, C_in*kH*kW, L) *)\n  let x_col = Nx.reshape [| n; cin * kernel_elements; l_out |] x_unf in\n  let result =\n    if groups = 1 then\n      let w_flat = Nx.reshape [| cout; cin * kernel_elements |] w in\n      Nx.matmul w_flat x_col\n    else\n      let rcout = cout / groups in\n      let x_grouped =\n        Nx.reshape [| n; groups; cin_per_group * kernel_elements; l_out |] x_col\n      in\n      let w_grouped =\n        Nx.reshape [| groups; rcout; cin_per_group * kernel_elements |] w\n      in\n      let x_batched =\n        Nx.reshape\n          [| n * groups; cin_per_group * kernel_elements; l_out |]\n          x_grouped\n      in\n      let w_expanded = Nx.unsqueeze ~axes:[ 0 ] w_grouped in\n      let w_expanded =\n        Nx.expand\n          [| n; groups; rcout; cin_per_group * kernel_elements |]\n          w_expanded\n      in\n      let w_expanded =\n        Nx.reshape\n          [| n * groups; rcout; cin_per_group * kernel_elements |]\n          w_expanded\n      in\n      let result = Nx.matmul w_expanded x_batched in\n      let result = 
Nx.reshape [| n; groups; rcout; l_out |] result in\n      Nx.reshape [| n; cout; l_out |] result\n  in\n  (* Reshape from (N, C_out, L) to (N, C_out, H_out, W_out) *)\n  let padded_h = input_spatial.(0) + fst pad_pairs.(0) + snd pad_pairs.(0) in\n  let padded_w = input_spatial.(1) + fst pad_pairs.(1) + snd pad_pairs.(1) in\n  let eff_kh = ((kernel_size.(0) - 1) * dh) + 1 in\n  let eff_kw = ((kernel_size.(1) - 1) * dw) + 1 in\n  let h_out = ((padded_h - eff_kh) / sh) + 1 in\n  let w_out = ((padded_w - eff_kw) / sw) + 1 in\n  let result = Nx.reshape [| n; cout; h_out; w_out |] result in\n  match bias with\n  | None -> result\n  | Some b -> Nx.add result (Nx.reshape [| 1; cout; 1; 1 |] b)\n\n(* Pooling *)\n\nlet max_pool1d ~kernel_size ?(stride = 1) ?(dilation = 1) ?(padding = `Valid)\n    ?(ceil_mode = false) x =\n  let x_shape = Nx.shape x in\n  if Array.length x_shape <> 3 then\n    invalid_argf_fn \"max_pool1d\" \"input must be 3D (N, C, L)\";\n  let n = x_shape.(0) in\n  let c = x_shape.(1) in\n  let kernel_size_arr = [| kernel_size |] in\n  let stride_arr = [| stride |] in\n  let dilation_arr = [| dilation |] in\n  let input_spatial = [| x_shape.(2) |] in\n  let pad_pairs =\n    calculate_nn_padding input_spatial ~kernel_size:kernel_size_arr\n      ~stride:stride_arr ~dilation:dilation_arr ~padding\n  in\n  let pad_pairs =\n    apply_ceil_mode input_spatial ~kernel_size:kernel_size_arr\n      ~stride:stride_arr ~dilation:dilation_arr ~padding:pad_pairs ~ceil_mode\n  in\n  (* unfold: (N, C, L) -> (N, C, K, L_out) *)\n  let x_unf =\n    Nx.extract_patches ~kernel_size:kernel_size_arr ~stride:stride_arr\n      ~dilation:dilation_arr ~padding:pad_pairs x\n  in\n  let x_unf_ndim = Nx.ndim x_unf in\n  let reduced = Nx.max x_unf ~axes:[ x_unf_ndim - 2 ] ~keepdims:false in\n  let x_unf_shape = Nx.shape x_unf in\n  let l_out = x_unf_shape.(x_unf_ndim - 1) in\n  Nx.reshape [| n; c; l_out |] reduced\n\nlet max_pool2d ~kernel_size ?(stride = (1, 1)) ?(dilation = (1, 1))\n   
 ?(padding = `Valid) ?(ceil_mode = false) x =\n  let x_shape = Nx.shape x in\n  if Array.length x_shape <> 4 then\n    invalid_argf_fn \"max_pool2d\" \"input must be 4D (N, C, H, W)\";\n  let n = x_shape.(0) in\n  let c = x_shape.(1) in\n  let kh, kw = kernel_size in\n  let sh, sw = stride in\n  let dh, dw = dilation in\n  let kernel_size_arr = [| kh; kw |] in\n  let stride_arr = [| sh; sw |] in\n  let dilation_arr = [| dh; dw |] in\n  let input_spatial = [| x_shape.(2); x_shape.(3) |] in\n  let pad_pairs =\n    calculate_nn_padding input_spatial ~kernel_size:kernel_size_arr\n      ~stride:stride_arr ~dilation:dilation_arr ~padding\n  in\n  let pad_pairs =\n    apply_ceil_mode input_spatial ~kernel_size:kernel_size_arr\n      ~stride:stride_arr ~dilation:dilation_arr ~padding:pad_pairs ~ceil_mode\n  in\n  let x_unf =\n    Nx.extract_patches ~kernel_size:kernel_size_arr ~stride:stride_arr\n      ~dilation:dilation_arr ~padding:pad_pairs x\n  in\n  let x_unf_ndim = Nx.ndim x_unf in\n  let reduced = Nx.max x_unf ~axes:[ x_unf_ndim - 2 ] ~keepdims:false in\n  let x_unf_shape = Nx.shape x_unf in\n  let l_out = x_unf_shape.(x_unf_ndim - 1) in\n  let padded_h = input_spatial.(0) + fst pad_pairs.(0) + snd pad_pairs.(0) in\n  let padded_w = input_spatial.(1) + fst pad_pairs.(1) + snd pad_pairs.(1) in\n  let eff_kh = ((kh - 1) * dh) + 1 in\n  let eff_kw = ((kw - 1) * dw) + 1 in\n  let h_out = ((padded_h - eff_kh) / sh) + 1 in\n  let w_out = ((padded_w - eff_kw) / sw) + 1 in\n  let _ = l_out in\n  Nx.reshape [| n; c; h_out; w_out |] reduced\n\nlet avg_pool1d ~kernel_size ?(stride = 1) ?(dilation = 1) ?(padding = `Valid)\n    ?(ceil_mode = false) ?(count_include_pad = true) x =\n  let x_shape = Nx.shape x in\n  if Array.length x_shape <> 3 then\n    invalid_argf_fn \"avg_pool1d\" \"input must be 3D (N, C, L)\";\n  let n = x_shape.(0) in\n  let c = x_shape.(1) in\n  let kernel_size_arr = [| kernel_size |] in\n  let stride_arr = [| stride |] in\n  let dilation_arr = [| dilation 
|] in\n  let input_spatial = [| x_shape.(2) |] in\n  let pad_pairs =\n    calculate_nn_padding input_spatial ~kernel_size:kernel_size_arr\n      ~stride:stride_arr ~dilation:dilation_arr ~padding\n  in\n  let pad_pairs =\n    apply_ceil_mode input_spatial ~kernel_size:kernel_size_arr\n      ~stride:stride_arr ~dilation:dilation_arr ~padding:pad_pairs ~ceil_mode\n  in\n  let x_unf =\n    Nx.extract_patches ~kernel_size:kernel_size_arr ~stride:stride_arr\n      ~dilation:dilation_arr ~padding:pad_pairs x\n  in\n  let x_unf_ndim = Nx.ndim x_unf in\n  let x_unf_shape = Nx.shape x_unf in\n  let l_out = x_unf_shape.(x_unf_ndim - 1) in\n  let summed = Nx.sum x_unf ~axes:[ x_unf_ndim - 2 ] in\n  let result = Nx.reshape [| n; c; l_out |] summed in\n  if count_include_pad then Nx.div_s result (float_of_int kernel_size)\n  else\n    let ones = Nx.ones_like x in\n    let ones_unf =\n      Nx.extract_patches ~kernel_size:kernel_size_arr ~stride:stride_arr\n        ~dilation:dilation_arr ~padding:pad_pairs ones\n    in\n    let count = Nx.sum ones_unf ~axes:[ Nx.ndim ones_unf - 2 ] in\n    let count = Nx.reshape [| n; c; l_out |] count in\n    Nx.div result count\n\nlet avg_pool2d ~kernel_size ?(stride = (1, 1)) ?(dilation = (1, 1))\n    ?(padding = `Valid) ?(ceil_mode = false) ?(count_include_pad = true) x =\n  let x_shape = Nx.shape x in\n  if Array.length x_shape <> 4 then\n    invalid_argf_fn \"avg_pool2d\" \"input must be 4D (N, C, H, W)\";\n  let n = x_shape.(0) in\n  let c = x_shape.(1) in\n  let kh, kw = kernel_size in\n  let sh, sw = stride in\n  let dh, dw = dilation in\n  let kernel_size_arr = [| kh; kw |] in\n  let stride_arr = [| sh; sw |] in\n  let dilation_arr = [| dh; dw |] in\n  let input_spatial = [| x_shape.(2); x_shape.(3) |] in\n  let pad_pairs =\n    calculate_nn_padding input_spatial ~kernel_size:kernel_size_arr\n      ~stride:stride_arr ~dilation:dilation_arr ~padding\n  in\n  let pad_pairs =\n    apply_ceil_mode input_spatial 
~kernel_size:kernel_size_arr\n      ~stride:stride_arr ~dilation:dilation_arr ~padding:pad_pairs ~ceil_mode\n  in\n  let x_unf =\n    Nx.extract_patches ~kernel_size:kernel_size_arr ~stride:stride_arr\n      ~dilation:dilation_arr ~padding:pad_pairs x\n  in\n  let x_unf_ndim = Nx.ndim x_unf in\n  let x_unf_shape = Nx.shape x_unf in\n  let l_out = x_unf_shape.(x_unf_ndim - 1) in\n  let summed = Nx.sum x_unf ~axes:[ x_unf_ndim - 2 ] in\n  let padded_h = input_spatial.(0) + fst pad_pairs.(0) + snd pad_pairs.(0) in\n  let padded_w = input_spatial.(1) + fst pad_pairs.(1) + snd pad_pairs.(1) in\n  let eff_kh = ((kh - 1) * dh) + 1 in\n  let eff_kw = ((kw - 1) * dw) + 1 in\n  let h_out = ((padded_h - eff_kh) / sh) + 1 in\n  let w_out = ((padded_w - eff_kw) / sw) + 1 in\n  let _ = l_out in\n  let result = Nx.reshape [| n; c; h_out; w_out |] summed in\n  if count_include_pad then\n    let kernel_numel = float_of_int (kh * kw) in\n    Nx.div_s result kernel_numel\n  else\n    let ones = Nx.ones_like x in\n    let ones_unf =\n      Nx.extract_patches ~kernel_size:kernel_size_arr ~stride:stride_arr\n        ~dilation:dilation_arr ~padding:pad_pairs ones\n    in\n    let count = Nx.sum ones_unf ~axes:[ Nx.ndim ones_unf - 2 ] in\n    let count = Nx.reshape [| n; c; h_out; w_out |] count in\n    Nx.div result count\n"
  },
  {
    "path": "packages/kaun/lib/fn.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Functional neural network operations.\n\n    Stateless building blocks for neural networks: normalization, attention,\n    embedding lookup, and regularization. All functions are differentiable\n    through Rune's autodiff. *)\n\n(** {1:norm Normalization} *)\n\nval batch_norm :\n  ?axes:int list ->\n  ?epsilon:float ->\n  scale:(float, 'b) Nx.t ->\n  bias:(float, 'b) Nx.t ->\n  (float, 'b) Nx.t ->\n  (float, 'b) Nx.t\n(** [batch_norm ?axes ?epsilon ~scale ~bias x] normalizes [x] along [axes], then\n    applies learnable [scale] and [bias].\n\n    [axes] defaults to [[0]] for 2D and [[0; 2; 3]] for 4D input. [epsilon]\n    defaults to [1e-5].\n\n    [scale] and [bias] must broadcast across the normalized axes.\n\n    Raises [Invalid_argument] if [axes] is empty or out of bounds, or if\n    [scale]/[bias] shapes are incompatible. *)\n\nval layer_norm :\n  ?axes:int list ->\n  ?epsilon:float ->\n  ?gamma:(float, 'b) Nx.t ->\n  ?beta:(float, 'b) Nx.t ->\n  (float, 'b) Nx.t ->\n  (float, 'b) Nx.t\n(** [layer_norm ?axes ?epsilon ?gamma ?beta x] subtracts the mean and divides by\n    the standard deviation along [axes], optionally scaling by [gamma] and\n    shifting by [beta].\n\n    [axes] defaults to [[-1]]. [epsilon] defaults to [1e-5].\n\n    Raises [Invalid_argument] if [axes] is out of bounds, or if [gamma]/[beta]\n    shapes are incompatible. *)\n\nval rms_norm :\n  ?axes:int list ->\n  ?epsilon:float ->\n  ?gamma:(float, 'b) Nx.t ->\n  (float, 'b) Nx.t ->\n  (float, 'b) Nx.t\n(** [rms_norm ?axes ?epsilon ?gamma x] normalizes [x] by the root mean square\n    along [axes], optionally scaling by [gamma].\n\n    [axes] defaults to [[-1]]. 
[epsilon] defaults to [1e-5].\n\n    Raises [Invalid_argument] if [axes] is empty or out of bounds, or if [gamma]\n    shape is incompatible. *)\n\n(** {1:embedding Embedding} *)\n\nval embedding :\n  ?scale:bool ->\n  embedding:(float, 'b) Nx.t ->\n  (int32, Nx.int32_elt) Nx.t ->\n  (float, 'b) Nx.t\n(** [embedding ?scale ~embedding indices] gathers rows of [embedding] at\n    positions given by [indices].\n\n    [embedding] must have shape [[vocab_size; embed_dim]]. The result has shape\n    [[*indices_shape; embed_dim]].\n\n    When [scale] is [true] (the default), the result is multiplied by\n    [sqrt(embed_dim)].\n\n    Raises [Invalid_argument] if [embedding] is not rank 2 or if [vocab_size] is\n    not positive. *)\n\n(** {1:dropout Dropout} *)\n\nval dropout : rate:float -> (float, 'b) Nx.t -> (float, 'b) Nx.t\n(** [dropout ~rate x] randomly zeroes elements of [x] with probability [rate]\n    and scales the remaining values by [1 / (1 - rate)].\n\n    [rate] must satisfy [0.0 <= rate < 1.0]. When [rate] is [0.0], [x] is\n    returned unchanged.\n\n    Random keys are drawn from the implicit RNG scope.\n\n    Raises [Invalid_argument] if [rate] is out of range or [x] is not floating\n    point. *)\n\n(** {1:attention Attention} *)\n\nval dot_product_attention :\n  ?attention_mask:(bool, Nx.bool_elt) Nx.t ->\n  ?scale:float ->\n  ?dropout_rate:float ->\n  ?is_causal:bool ->\n  (float, 'b) Nx.t ->\n  (float, 'b) Nx.t ->\n  (float, 'b) Nx.t ->\n  (float, 'b) Nx.t\n(** [dot_product_attention ?attention_mask ?scale ?dropout_rate ?is_causal q k\n     v] is scaled dot-product attention.\n\n    [q], [k], [v] must have matching rank (>= 2) and the last dimension of [q]\n    and [k] must agree.\n\n    [scale] defaults to [1 / sqrt(depth)]. 
[is_causal] defaults to [false]; when\n    [true], a lower-triangular mask is applied (requires\n    [seq_len_q = seq_len_k]).\n\n    [attention_mask], when provided, broadcasts to the attention score shape:\n    [true] keeps scores, [false] sets them to negative infinity.\n\n    When [dropout_rate] is set, dropout is applied to attention weights using\n    keys from the implicit RNG scope.\n\n    Raises [Invalid_argument] if ranks, shapes, or dtypes are incompatible. *)\n\n(** {1:conv Convolution} *)\n\nval conv1d :\n  ?groups:int ->\n  ?stride:int ->\n  ?dilation:int ->\n  ?padding:[ `Same | `Valid ] ->\n  ?bias:(float, 'b) Nx.t ->\n  (float, 'b) Nx.t ->\n  (float, 'b) Nx.t ->\n  (float, 'b) Nx.t\n(** [conv1d ?groups ?stride ?dilation ?padding ?bias x w] computes 1D\n    convolution.\n\n    [x]: [(N, C_in, L)]. [w]: [(C_out, C_in/groups, K)].\n\n    [groups] defaults to [1]. [stride] and [dilation] default to [1]. [padding]\n    defaults to [`Valid].\n\n    Raises [Invalid_argument] if input/weight shapes are incompatible or channel\n    counts do not match [groups]. *)\n\nval conv2d :\n  ?groups:int ->\n  ?stride:int * int ->\n  ?dilation:int * int ->\n  ?padding:[ `Same | `Valid ] ->\n  ?bias:(float, 'b) Nx.t ->\n  (float, 'b) Nx.t ->\n  (float, 'b) Nx.t ->\n  (float, 'b) Nx.t\n(** [conv2d ?groups ?stride ?dilation ?padding ?bias x w] computes 2D\n    convolution.\n\n    [x]: [(N, C_in, H, W)]. [w]: [(C_out, C_in/groups, kH, kW)].\n\n    [groups] defaults to [1]. [stride] and [dilation] default to [(1, 1)].\n    [padding] defaults to [`Valid].\n\n    Raises [Invalid_argument] if input/weight shapes are incompatible or channel\n    counts do not match [groups]. 
*)\n\n(** {1:pool Pooling} *)\n\nval max_pool1d :\n  kernel_size:int ->\n  ?stride:int ->\n  ?dilation:int ->\n  ?padding:[ `Same | `Valid ] ->\n  ?ceil_mode:bool ->\n  ('a, 'b) Nx.t ->\n  ('a, 'b) Nx.t\n(** [max_pool1d ~kernel_size ?stride ?dilation ?padding ?ceil_mode x] applies 1D\n    max pooling.\n\n    [x]: [(N, C, L)]. [stride] defaults to [1]. [dilation] defaults to [1].\n    [padding] defaults to [`Valid]. [ceil_mode] defaults to [false]. *)\n\nval max_pool2d :\n  kernel_size:int * int ->\n  ?stride:int * int ->\n  ?dilation:int * int ->\n  ?padding:[ `Same | `Valid ] ->\n  ?ceil_mode:bool ->\n  ('a, 'b) Nx.t ->\n  ('a, 'b) Nx.t\n(** [max_pool2d ~kernel_size ?stride ?dilation ?padding ?ceil_mode x] applies 2D\n    max pooling.\n\n    [x]: [(N, C, H, W)]. [stride] defaults to [(1, 1)]. [dilation] defaults to\n    [(1, 1)]. [padding] defaults to [`Valid]. [ceil_mode] defaults to [false].\n*)\n\nval avg_pool1d :\n  kernel_size:int ->\n  ?stride:int ->\n  ?dilation:int ->\n  ?padding:[ `Same | `Valid ] ->\n  ?ceil_mode:bool ->\n  ?count_include_pad:bool ->\n  (float, 'b) Nx.t ->\n  (float, 'b) Nx.t\n(** [avg_pool1d ~kernel_size ?stride ?dilation ?padding ?ceil_mode\n     ?count_include_pad x] applies 1D average pooling.\n\n    [x]: [(N, C, L)]. [stride] defaults to [1]. [dilation] defaults to [1].\n    [padding] defaults to [`Valid]. [ceil_mode] defaults to [false].\n    [count_include_pad] defaults to [true]. *)\n\nval avg_pool2d :\n  kernel_size:int * int ->\n  ?stride:int * int ->\n  ?dilation:int * int ->\n  ?padding:[ `Same | `Valid ] ->\n  ?ceil_mode:bool ->\n  ?count_include_pad:bool ->\n  (float, 'b) Nx.t ->\n  (float, 'b) Nx.t\n(** [avg_pool2d ~kernel_size ?stride ?dilation ?padding ?ceil_mode\n     ?count_include_pad x] applies 2D average pooling.\n\n    [x]: [(N, C, H, W)]. [stride] defaults to [(1, 1)]. [dilation] defaults to\n    [(1, 1)]. [padding] defaults to [`Valid]. [ceil_mode] defaults to [false].\n    [count_include_pad] defaults to [true]. 
*)\n"
  },
  {
    "path": "packages/kaun/lib/grad.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nmodule Dtype = Nx_core.Dtype\n\nlet invalid_argf fmt = Printf.ksprintf invalid_arg fmt\nlet path_label path = if path = \"\" then \"<root>\" else path\n\nlet err_non_float fn_name path dtype =\n  invalid_argf \"%s: %s expected float dtype, got %s\" fn_name (path_label path)\n    (Dtype.to_string dtype)\n\nlet err_mismatch fn_name path expected actual =\n  invalid_argf \"%s: %s has dtype/layout %s but expected %s\" fn_name\n    (path_label path) (Dtype.to_string actual) (Dtype.to_string expected)\n\nlet value_and_grad_aux f params =\n  let fn_name = \"Grad.value_and_grad\" in\n  let leaves, rebuild = Ptree.flatten params in\n  let path_leaves = Ptree.flatten_with_paths params in\n  let leaf_count = List.length leaves in\n  let path_count = List.length path_leaves in\n  if leaf_count <> path_count then\n    invalid_argf\n      \"%s: internal error: flatten/flatten_with_paths length mismatch (%d vs \\\n       %d)\"\n      fn_name leaf_count path_count;\n  if leaf_count = 0 then\n    let value, aux = f params in\n    (value, rebuild [], aux)\n  else\n    let leaves_array = Array.of_list leaves in\n    let paths_array = Array.of_list (List.map fst path_leaves) in\n    let first_leaf = leaves_array.(0) in\n    let first_path = paths_array.(0) in\n    Ptree.with_tensor first_leaf\n      {\n        run =\n          (fun (type a layout) (first_tensor : (a, layout) Nx.t) ->\n            let first_dtype : (a, layout) Dtype.t = Nx.dtype first_tensor in\n            if not (Dtype.is_float first_dtype) then\n              err_non_float fn_name first_path first_dtype;\n            let typed_inputs =\n              List.mapi\n                (fun index leaf ->\n                  let path = paths_array.(index) in\n 
                 Ptree.with_tensor leaf\n                    {\n                      run =\n                        (fun (type a2 layout2) (tensor : (a2, layout2) Nx.t) ->\n                          let dtype = Nx.dtype tensor in\n                          if not (Dtype.is_float dtype) then\n                            err_non_float fn_name path dtype;\n                          match Dtype.equal_witness first_dtype dtype with\n                          | Some Type.Equal -> (tensor : (a, layout) Nx.t)\n                          | None -> err_mismatch fn_name path first_dtype dtype);\n                    })\n                leaves\n            in\n            let aux = ref None in\n            let objective typed_params =\n              let packed =\n                List.map (fun tensor -> Ptree.P tensor) typed_params\n              in\n              let value, aux_value = f (rebuild packed) in\n              if Option.is_none !aux then aux := Some aux_value;\n              value\n            in\n            let value, grads = Rune.value_and_grads objective typed_inputs in\n            let aux_value =\n              match !aux with\n              | Some value -> value\n              | None ->\n                  invalid_argf\n                    \"%s: internal error: objective did not produce auxiliary \\\n                     output\"\n                    fn_name\n            in\n            let grad_leaves = List.map (fun grad -> Ptree.P grad) grads in\n            (value, rebuild grad_leaves, aux_value));\n      }\n\nlet value_and_grad f params =\n  let value, grads, () = value_and_grad_aux (fun tree -> (f tree, ())) params in\n  (value, grads)\n\nlet value_and_grad_mixed f params =\n  let fn_name = \"Grad.value_and_grad_mixed\" in\n  let leaves, rebuild = Ptree.flatten params in\n  if List.length leaves = 0 then (f params, rebuild [])\n  else\n    let path_leaves = Ptree.flatten_with_paths params in\n    let leaf_count = List.length leaves in\n    let path_count 
= List.length path_leaves in\n    if leaf_count <> path_count then\n      invalid_argf\n        \"%s: internal error: flatten/flatten_with_paths length mismatch (%d vs \\\n         %d)\"\n        fn_name leaf_count path_count;\n    let leaves_array = Array.of_list leaves in\n    let paths_array = Array.of_list (List.map fst path_leaves) in\n    let grads_array = Array.make leaf_count None in\n    let groups = Hashtbl.create 8 in\n    Array.iteri\n      (fun index (Ptree.P tensor) ->\n        let dtype = Nx.dtype tensor in\n        if not (Dtype.is_float dtype) then\n          err_non_float fn_name paths_array.(index) dtype;\n        let group_key = Dtype.to_string dtype in\n        match Hashtbl.find_opt groups group_key with\n        | None -> Hashtbl.add groups group_key (Ptree.P tensor, [ index ])\n        | Some (repr, indices) ->\n            Hashtbl.replace groups group_key (repr, index :: indices))\n      leaves_array;\n    let grouped_indices =\n      Hashtbl.fold\n        (fun _ (repr, indices) acc -> (repr, List.rev indices) :: acc)\n        groups []\n    in\n    let value = ref None in\n    List.iter\n      (fun (repr, indices) ->\n        Ptree.with_tensor repr\n          {\n            run =\n              (fun (type a layout) (repr_tensor : (a, layout) Nx.t) ->\n                let repr_dtype : (a, layout) Dtype.t = Nx.dtype repr_tensor in\n                let typed_inputs =\n                  List.map\n                    (fun index ->\n                      Ptree.with_tensor leaves_array.(index)\n                        {\n                          run =\n                            (fun (type a2 layout2)\n                              (tensor : (a2, layout2) Nx.t)\n                            ->\n                              let dtype = Nx.dtype tensor in\n                              match Dtype.equal_witness repr_dtype dtype with\n                              | Some Type.Equal -> (tensor : (a, layout) Nx.t)\n                              | 
None ->\n                                  err_mismatch fn_name paths_array.(index)\n                                    repr_dtype dtype);\n                        })\n                    indices\n                in\n                let objective group_params =\n                  let packed = Array.copy leaves_array in\n                  List.iter2\n                    (fun index tensor -> packed.(index) <- Ptree.P tensor)\n                    indices group_params;\n                  f (rebuild (Array.to_list packed))\n                in\n                let current_value, current_grads =\n                  Rune.value_and_grads objective typed_inputs\n                in\n                if Option.is_none !value then value := Some current_value;\n                List.iter2\n                  (fun index grad -> grads_array.(index) <- Some (Ptree.P grad))\n                  indices current_grads);\n          })\n      grouped_indices;\n    let value =\n      match !value with\n      | Some v -> v\n      | None ->\n          invalid_argf \"%s: internal error: no autodiff group produced a value\"\n            fn_name\n    in\n    let grad_leaves =\n      Array.to_list\n        (Array.mapi\n           (fun index grad ->\n             match grad with\n             | Some g -> g\n             | None ->\n                 invalid_argf \"%s: internal error: missing gradient for leaf %s\"\n                   fn_name\n                   (path_label paths_array.(index)))\n           grads_array)\n    in\n    (value, rebuild grad_leaves)\n\nlet grad f params = snd (value_and_grad f params)\n"
  },
  {
    "path": "packages/kaun/lib/grad.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Automatic differentiation over parameter trees.\n\n    {!value_and_grad} differentiates scalar losses with respect to {!Ptree.t}\n    leaves. By default, all leaves must share one floating dtype; this enables a\n    single forward/backward pass.\n\n    Use {!value_and_grad_aux} to return auxiliary data (for example updated\n    layer state) alongside the loss. Use {!value_and_grad_mixed} when mixed\n    dtypes are required. *)\n\n(** {1:core Core} *)\n\nval value_and_grad :\n  (Ptree.t -> (float, 'l) Nx.t) -> Ptree.t -> (float, 'l) Nx.t * Ptree.t\n(** [value_and_grad f params] is [(f params, grads)].\n\n    [params] must contain only floating-point leaves and all leaves must have\n    the same dtype/layout witness.\n\n    Raises [Invalid_argument] if a leaf is non-float, or if dtypes/layout differ\n    across leaves. Error messages include leaf paths. *)\n\nval value_and_grad_aux :\n  (Ptree.t -> (float, 'l) Nx.t * 'aux) ->\n  Ptree.t ->\n  (float, 'l) Nx.t * Ptree.t * 'aux\n(** [value_and_grad_aux f params] differentiates [fst (f params)] and returns\n    [(loss, grads, aux)].\n\n    The same dtype constraints and errors as {!value_and_grad} apply. *)\n\nval value_and_grad_mixed :\n  (Ptree.t -> (float, 'l) Nx.t) -> Ptree.t -> (float, 'l) Nx.t * Ptree.t\n(** [value_and_grad_mixed f params] supports mixed floating dtypes/layouts by\n    grouping leaves and running multiple autodiff passes.\n\n    {b Warning.} [f] may be evaluated multiple times (once per dtype/layout\n    group).\n\n    Raises [Invalid_argument] if any leaf is non-float. Error messages include\n    leaf paths. 
*)\n\nval grad : (Ptree.t -> (float, 'l) Nx.t) -> Ptree.t -> Ptree.t\n(** [grad f params] is [snd (value_and_grad f params)]. *)\n"
  },
  {
    "path": "packages/kaun/lib/hf/dune",
    "content": "(library\n (name kaun_hf)\n (public_name kaun.hf)\n (libraries unix rune nx nx.io kaun jsont jsont.bytesrw))\n"
  },
  {
    "path": "packages/kaun/lib/hf/kaun_hf.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Types *)\n\ntype revision = Main | Rev of string\n\n(* Error messages *)\n\nlet err_no_curl = \"curl not found on PATH\"\nlet err_download url = Printf.sprintf \"Failed to download %s\" url\n\nlet err_offline model_id filename =\n  Printf.sprintf \"Not cached (offline): %s/%s\" model_id filename\n\nlet err_no_safetensors model_id =\n  Printf.sprintf \"No safetensors found for %s\" model_id\n\nlet err_missing_tensor model_id name path =\n  Printf.sprintf \"%s: tensor %S missing in shard %s\" model_id name path\n\nlet err_empty_weight_map = \"Empty weight_map in index file\"\nlet err_missing_weight_map = \"Missing weight_map in index file\"\n\nlet err_incomplete_shards =\n  \"Incomplete shard loading: not all weight_map tensors were found\"\n\n(* Cache directory *)\n\nlet default_cache_dir () =\n  match Sys.getenv_opt \"RAVEN_CACHE_ROOT\" with\n  | Some d when d <> \"\" -> Filename.concat d \"huggingface\"\n  | _ ->\n      let xdg =\n        match Sys.getenv_opt \"XDG_CACHE_HOME\" with\n        | Some d when d <> \"\" -> d\n        | _ -> Filename.concat (Sys.getenv \"HOME\") \".cache\"\n      in\n      Filename.concat (Filename.concat xdg \"raven\") \"huggingface\"\n\n(* Filesystem *)\n\nlet rec mkdir_p path =\n  if path = \"\" || path = \".\" || path = Filename.dir_sep then ()\n  else if not (Sys.file_exists path) then begin\n    mkdir_p (Filename.dirname path);\n    try Unix.mkdir path 0o755 with Unix.Unix_error (Unix.EEXIST, _, _) -> ()\n  end\n\nlet rec rm_rf path =\n  if Sys.is_directory path then begin\n    let entries = Sys.readdir path in\n    Array.iter (fun e -> rm_rf (Filename.concat path e)) entries;\n    Unix.rmdir path\n  end\n  else Sys.remove path\n\n(* HTTP via curl *)\n\nlet 
curl_available =\n  lazy (Unix.system \"command -v curl >/dev/null 2>&1\" = Unix.WEXITED 0)\n\nlet check_curl () = if not (Lazy.force curl_available) then failwith err_no_curl\n\nlet header_flags headers =\n  List.map\n    (fun (k, v) -> Printf.sprintf \"-H %s\" (Filename.quote (k ^ \": \" ^ v)))\n    headers\n  |> String.concat \" \"\n\nlet curl_download ~headers ~url ~dest () =\n  check_curl ();\n  mkdir_p (Filename.dirname dest);\n  let hdr = header_flags headers in\n  let cmd =\n    Printf.sprintf \"curl -L --fail -s %s -o %s %s\" hdr (Filename.quote dest)\n      (Filename.quote url)\n  in\n  match Unix.system cmd with\n  | Unix.WEXITED 0 -> ()\n  | _ ->\n      (try Sys.remove dest with Sys_error _ -> ());\n      failwith (err_download url)\n\n(* Hub URL and cache paths *)\n\nlet revision_string = function Main -> \"main\" | Rev r -> r\n\nlet hub_url ~model_id ~revision ~filename =\n  Printf.sprintf \"https://huggingface.co/%s/resolve/%s/%s\" model_id\n    (revision_string revision) filename\n\nlet sanitize_model_id model_id =\n  String.map (fun c -> if c = '/' then '-' else c) model_id\n\nlet cache_path ~cache_dir ~model_id ~revision ~filename =\n  let rev = revision_string revision in\n  let model_dir = sanitize_model_id model_id in\n  Filename.concat cache_dir\n    (Filename.concat model_dir (Filename.concat rev filename))\n\nlet auth_headers = function\n  | Some t -> [ (\"Authorization\", \"Bearer \" ^ t) ]\n  | None -> []\n\n(* Downloading *)\n\nlet download_file ?token ?cache_dir ?(offline = false) ?(revision = Main)\n    ~model_id ~filename () =\n  let token =\n    match token with Some _ as t -> t | None -> Sys.getenv_opt \"HF_TOKEN\"\n  in\n  let cache_dir = Option.value cache_dir ~default:(default_cache_dir ()) in\n  let local = cache_path ~cache_dir ~model_id ~revision ~filename in\n  if Sys.file_exists local then local\n  else if offline then failwith (err_offline model_id filename)\n  else begin\n    let url = hub_url ~model_id ~revision ~filename 
in\n    curl_download ~headers:(auth_headers token) ~url ~dest:local ();\n    local\n  end\n\n(* JSON helpers *)\n\nlet read_json_file path =\n  let ic = open_in path in\n  let s =\n    Fun.protect\n      ~finally:(fun () -> close_in ic)\n      (fun () -> really_input_string ic (in_channel_length ic))\n  in\n  match Jsont_bytesrw.decode_string Jsont.json s with\n  | Ok v -> v\n  | Error e -> failwith e\n\nlet json_mem name = function\n  | Jsont.Object (mems, _) -> (\n      match Jsont.Json.find_mem name mems with\n      | Some (_, v) -> v\n      | None -> Jsont.Null ((), Jsont.Meta.none))\n  | _ -> Jsont.Null ((), Jsont.Meta.none)\n\n(* Tensor conversion *)\n\nlet to_ptree_tensor (Nx_io.P nx) = Kaun.Ptree.P nx\n\n(* Loading *)\n\nlet load_entries ?allowed_names path =\n  let archive = Nx_io.load_safetensors path in\n  match allowed_names with\n  | None ->\n      Hashtbl.fold\n        (fun name packed acc -> (name, to_ptree_tensor packed) :: acc)\n        archive []\n  | Some names ->\n      List.map\n        (fun name ->\n          match Hashtbl.find_opt archive name with\n          | Some packed -> (name, to_ptree_tensor packed)\n          | None -> failwith (err_missing_tensor \"\" name path))\n        names\n\nlet try_download f =\n  try Some (f ()) with Failure _ -> None | Sys_error _ -> None\n\nlet load_sharded ~download index_filename =\n  match try_download (fun () -> download index_filename) with\n  | None -> None\n  | Some index_path ->\n      let json = read_json_file index_path in\n      let weight_map =\n        match json_mem \"weight_map\" json with\n        | Jsont.Object (entries, _) ->\n            List.map\n              (fun ((tensor_name, _), shard_json) ->\n                match shard_json with\n                | Jsont.String (shard, _) -> (tensor_name, shard)\n                | _ -> failwith err_missing_weight_map)\n              entries\n        | _ -> failwith err_missing_weight_map\n      in\n      if weight_map = [] then failwith 
err_empty_weight_map;\n      (* Group tensors by shard filename, preserving file order *)\n      let shards_by_file = Hashtbl.create 8 in\n      let file_order = ref [] in\n      List.iter\n        (fun (tensor_name, shard_filename) ->\n          match Hashtbl.find_opt shards_by_file shard_filename with\n          | Some tensors ->\n              Hashtbl.replace shards_by_file shard_filename\n                (tensor_name :: tensors)\n          | None ->\n              Hashtbl.add shards_by_file shard_filename [ tensor_name ];\n              file_order := shard_filename :: !file_order)\n        weight_map;\n      let file_order = List.rev !file_order in\n      let seen = Hashtbl.create (List.length weight_map) in\n      let entries =\n        List.fold_left\n          (fun acc shard_filename ->\n            let shard_path = download shard_filename in\n            let tensors =\n              match Hashtbl.find_opt shards_by_file shard_filename with\n              | Some names -> List.rev names\n              | None -> []\n            in\n            let new_entries = load_entries ~allowed_names:tensors shard_path in\n            List.iter\n              (fun (name, _) -> Hashtbl.replace seen name ())\n              new_entries;\n            List.rev_append new_entries acc)\n          [] file_order\n      in\n      if Hashtbl.length seen <> List.length weight_map then\n        failwith err_incomplete_shards;\n      Some (List.rev entries)\n\nlet load_single ~download filename =\n  match try_download (fun () -> download filename) with\n  | None -> None\n  | Some path -> Some (load_entries path)\n\nlet load_config ?token ?cache_dir ?offline ?revision ~model_id () =\n  let path =\n    download_file ?token ?cache_dir ?offline ?revision ~model_id\n      ~filename:\"config.json\" ()\n  in\n  read_json_file path\n\nlet load_weights ?token ?cache_dir ?offline ?revision ~model_id () =\n  let download filename =\n    download_file ?token ?cache_dir ?offline ?revision ~model_id 
~filename ()\n  in\n  match load_sharded ~download \"model.safetensors.index.json\" with\n  | Some entries -> entries\n  | None -> (\n      match load_single ~download \"model.safetensors\" with\n      | Some entries -> entries\n      | None -> failwith (err_no_safetensors model_id))\n\n(* Cache management *)\n\nlet clear_cache ?cache_dir ?model_id () =\n  let cache_dir = Option.value cache_dir ~default:(default_cache_dir ()) in\n  match model_id with\n  | Some id ->\n      let path = Filename.concat cache_dir (sanitize_model_id id) in\n      if Sys.file_exists path then rm_rf path\n  | None -> if Sys.file_exists cache_dir then rm_rf cache_dir\n"
  },
  {
    "path": "packages/kaun/lib/hf/kaun_hf.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** HuggingFace Hub integration.\n\n    Download pretrained model weights and configuration files from the\n    {{:https://huggingface.co}HuggingFace Hub}. Supports single-file and sharded\n    SafeTensors checkpoints, caching, authentication, and offline mode.\n\n    {[\n      let config =\n        Kaun_hf.load_config ~model_id:\"bert-base-uncased\" ()\n      in\n      let weights =\n        Kaun_hf.load_weights ~model_id:\"bert-base-uncased\" ()\n      in\n      (* weights : (string * Kaun.Ptree.tensor) list *)\n    ]} *)\n\n(** {1:types Types} *)\n\n(** The type for repository revisions. *)\ntype revision =\n  | Main  (** The default branch. *)\n  | Rev of string  (** A tag, branch name, or commit hash. *)\n\n(** {1:downloading Downloading} *)\n\nval download_file :\n  ?token:string ->\n  ?cache_dir:string ->\n  ?offline:bool ->\n  ?revision:revision ->\n  model_id:string ->\n  filename:string ->\n  unit ->\n  string\n(** [download_file ~model_id ~filename ()] is the local path to [filename] from\n    the repository [model_id].\n\n    The file is downloaded to the cache on first access and served from there on\n    subsequent calls.\n\n    [token] is a HuggingFace API token for private repositories. Defaults to the\n    value of [HF_TOKEN].\n\n    [cache_dir] defaults to [{RAVEN_CACHE_ROOT}/huggingface], or\n    [{XDG_CACHE_HOME}/raven/huggingface] when unset.\n\n    [offline] defaults to [false]. When [true], only cached files are returned.\n\n    [revision] defaults to {!Main}.\n\n    Raises [Failure] if the download fails or the file is not cached in offline\n    mode. 
*)\n\n(** {1:loading Loading} *)\n\nval load_config :\n  ?token:string ->\n  ?cache_dir:string ->\n  ?offline:bool ->\n  ?revision:revision ->\n  model_id:string ->\n  unit ->\n  Jsont.json\n(** [load_config ~model_id ()] is the parsed [config.json] from [model_id].\n\n    Parameters are the same as {!download_file}.\n\n    Raises [Failure] on download or JSON parse errors. *)\n\nval load_weights :\n  ?token:string ->\n  ?cache_dir:string ->\n  ?offline:bool ->\n  ?revision:revision ->\n  model_id:string ->\n  unit ->\n  (string * Kaun.Ptree.tensor) list\n(** [load_weights ~model_id ()] is the list of [(name, tensor)] pairs from\n    [model_id]'s SafeTensors checkpoint.\n\n    Handles sharded checkpoints transparently: when\n    [model.safetensors.index.json] is present, all referenced shards are\n    downloaded and merged. Falls back to [model.safetensors] when no index\n    exists.\n\n    Tensor names are the raw keys from the SafeTensors file (e.g.\n    [\"bert.encoder.layer.0.attention.self.query.weight\"]). Model code is\n    responsible for mapping these to its own parameter structure.\n\n    Parameters are the same as {!download_file}.\n\n    Raises [Failure] if no SafeTensors files are found, or on download/parse\n    errors. *)\n\n(** {1:cache Cache management} *)\n\nval clear_cache : ?cache_dir:string -> ?model_id:string -> unit -> unit\n(** [clear_cache ()] removes all cached files.\n\n    When [model_id] is given, only that model's cache is removed. *)\n"
  },
  {
    "path": "packages/kaun/lib/init.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\ntype t = {\n  f : 'layout. int array -> (float, 'layout) Nx.dtype -> (float, 'layout) Nx.t;\n}\n\ntype mode = [ `Fan_in | `Fan_out | `Fan_avg ]\ntype distribution = [ `Normal | `Truncated_normal | `Uniform ]\n\nlet invalid_argf fmt = Printf.ksprintf invalid_arg fmt\n\nlet check_non_negative what value =\n  if value < 0.0 then invalid_argf \"%s must be >= 0, got %g\" what value\n\nlet normalize_axis ~rank ~name axis =\n  let axis = if axis < 0 then rank + axis else axis in\n  if axis < 0 || axis >= rank then\n    invalid_argf \"invalid %s axis: %d for rank-%d shape\" name axis rank;\n  axis\n\n(* Fan computation for variance scaling. *)\n\nlet compute_fans shape ~in_axis ~out_axis =\n  let rank = Array.length shape in\n  if rank = 0 then (1, 1)\n  else if rank = 1 then\n    let total = shape.(0) in\n    (total, total)\n  else\n    let in_axis = normalize_axis ~rank ~name:\"in\" in_axis in\n    let out_axis = normalize_axis ~rank ~name:\"out\" out_axis in\n    let fan_in = shape.(in_axis) in\n    let fan_out = shape.(out_axis) in\n    let receptive = ref 1 in\n    for i = 0 to rank - 1 do\n      if i <> in_axis && i <> out_axis then receptive := !receptive * shape.(i)\n    done;\n    (fan_in * !receptive, fan_out * !receptive)\n\n(* Truncated normal with bounds at +/-2 standard deviations. *)\n\nlet truncated_normal ~stddev shape dtype =\n  let z = Nx.truncated_normal dtype ~lower:(-2.0) ~upper:2.0 shape in\n  Nx.mul z (Nx.scalar dtype stddev)\n\n(* Variance scaling — the general framework behind glorot/he/lecun. 
*)\n\nlet variance_scaling ~scale ~mode ~distribution ?(in_axis = -2) ?(out_axis = -1)\n    () =\n  check_non_negative \"scale\" scale;\n  {\n    f =\n      (fun shape dtype ->\n        let fan_in, fan_out = compute_fans shape ~in_axis ~out_axis in\n        let n =\n          match mode with\n          | `Fan_in -> float_of_int fan_in\n          | `Fan_out -> float_of_int fan_out\n          | `Fan_avg -> float_of_int (fan_in + fan_out) /. 2.0\n        in\n        if n <= 0.0 then\n          invalid_argf \"non-positive fan: fan_in=%d fan_out=%d\" fan_in fan_out;\n        let variance = scale /. n in\n        match distribution with\n        | `Normal ->\n            let z = Nx.randn dtype shape in\n            Nx.mul z (Nx.scalar dtype (sqrt variance))\n        | `Truncated_normal ->\n            (* Correct for stddev loss from truncation to [-2, 2]. *)\n            truncated_normal\n              ~stddev:(sqrt variance /. 0.87962566103423978)\n              shape dtype\n        | `Uniform ->\n            let limit = sqrt (3.0 *. variance) in\n            let u = Nx.rand dtype shape in\n            Nx.sub\n              (Nx.mul u (Nx.scalar dtype (2.0 *. 
limit)))\n              (Nx.scalar dtype limit));\n  }\n\n(* Constant *)\n\nlet constant value = { f = (fun shape dtype -> Nx.full dtype shape value) }\nlet zeros = constant 0.0\nlet ones = constant 1.0\n\n(* Random *)\n\nlet uniform ?(scale = 0.01) () =\n  check_non_negative \"scale\" scale;\n  {\n    f =\n      (fun shape dtype -> Nx.mul (Nx.rand dtype shape) (Nx.scalar dtype scale));\n  }\n\nlet normal ?(stddev = 0.01) () =\n  check_non_negative \"stddev\" stddev;\n  {\n    f =\n      (fun shape dtype ->\n        Nx.mul (Nx.randn dtype shape) (Nx.scalar dtype stddev));\n  }\n\n(* Glorot / Xavier *)\n\nlet glorot_uniform ?(in_axis = -2) ?(out_axis = -1) () =\n  variance_scaling ~scale:1.0 ~mode:`Fan_avg ~distribution:`Uniform ~in_axis\n    ~out_axis ()\n\nlet glorot_normal ?(in_axis = -2) ?(out_axis = -1) () =\n  variance_scaling ~scale:1.0 ~mode:`Fan_avg ~distribution:`Truncated_normal\n    ~in_axis ~out_axis ()\n\n(* He / Kaiming *)\n\nlet he_uniform ?(in_axis = -2) ?(out_axis = -1) () =\n  variance_scaling ~scale:2.0 ~mode:`Fan_in ~distribution:`Uniform ~in_axis\n    ~out_axis ()\n\nlet he_normal ?(in_axis = -2) ?(out_axis = -1) () =\n  variance_scaling ~scale:2.0 ~mode:`Fan_in ~distribution:`Truncated_normal\n    ~in_axis ~out_axis ()\n\n(* LeCun *)\n\nlet lecun_uniform ?(in_axis = -2) ?(out_axis = -1) () =\n  variance_scaling ~scale:1.0 ~mode:`Fan_in ~distribution:`Uniform ~in_axis\n    ~out_axis ()\n\nlet lecun_normal ?(in_axis = -2) ?(out_axis = -1) () =\n  variance_scaling ~scale:1.0 ~mode:`Fan_in ~distribution:`Truncated_normal\n    ~in_axis ~out_axis ()\n"
  },
  {
    "path": "packages/kaun/lib/init.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Weight initialization strategies.\n\n    Initializers map a shape and float dtype to tensors. Random keys are\n    obtained implicitly via {!Nx.Rng.next_key}. Named families (Glorot, He,\n    LeCun) are defined in terms of {!variance_scaling}. *)\n\n(** {1:types Types} *)\n\ntype t = {\n  f : 'layout. int array -> (float, 'layout) Nx.dtype -> (float, 'layout) Nx.t;\n}\n(** [t] is the type for initializers.\n\n    [i.f shape dtype] is an initialized tensor for [shape] and [dtype]. Random\n    keys are drawn from the implicit RNG scope. *)\n\n(** {1:constant Constant} *)\n\nval zeros : t\n(** [zeros] is the initializer that fills with [0.0]. *)\n\nval ones : t\n(** [ones] is the initializer that fills with [1.0]. *)\n\nval constant : float -> t\n(** [constant v] is the initializer that fills with [v]. *)\n\n(** {1:random Random} *)\n\nval uniform : ?scale:float -> unit -> t\n(** [uniform ?scale ()] is the initializer that samples from [U(0, scale)].\n\n    [scale] defaults to [0.01].\n\n    Raises [Invalid_argument] if [scale] is negative. *)\n\nval normal : ?stddev:float -> unit -> t\n(** [normal ?stddev ()] is the initializer that samples from [N(0, stddev)].\n\n    [stddev] defaults to [0.01].\n\n    Raises [Invalid_argument] if [stddev] is negative. *)\n\n(** {1:variance Variance Scaling} *)\n\ntype mode = [ `Fan_in | `Fan_out | `Fan_avg ]\n(** The type for variance-scaling fan modes. *)\n\ntype distribution = [ `Normal | `Truncated_normal | `Uniform ]\n(** The type for variance-scaling distribution families. 
*)\n\nval variance_scaling :\n  scale:float ->\n  mode:mode ->\n  distribution:distribution ->\n  ?in_axis:int ->\n  ?out_axis:int ->\n  unit ->\n  t\n(** [variance_scaling ~scale ~mode ~distribution ?in_axis ?out_axis ()] is the\n    variance-scaling initializer.\n\n    [in_axis] defaults to [-2] and [out_axis] defaults to [-1]. Negative axes\n    are interpreted from the end.\n\n    The target variance is [scale / n], with:\n    - [n = fan_in] for [`Fan_in].\n    - [n = fan_out] for [`Fan_out].\n    - [n = (fan_in + fan_out) / 2] for [`Fan_avg].\n\n    Distributions are:\n    - [`Normal]: [N(0, scale / n)].\n    - [`Uniform]: [U(-limit, limit)] with [limit = sqrt (3 * scale / n)].\n    - [`Truncated_normal]: normal samples truncated to \\[[-2];[2]\\] and rescaled\n      to match [scale / n].\n\n    Raises [Invalid_argument] if:\n    - [scale] is negative.\n    - [in_axis] or [out_axis] is out of bounds for rank > 1.\n    - the computed fan is non-positive. *)\n\n(** {1:glorot Glorot/Xavier} *)\n\nval glorot_uniform : ?in_axis:int -> ?out_axis:int -> unit -> t\n(** [glorot_uniform ?in_axis ?out_axis ()] is Glorot/Xavier uniform\n    initialization.\n\n    It samples from [U(-limit, limit)] with\n    [limit = sqrt (6 / (fan_in + fan_out))].\n\n    This is the Xavier/Glorot scheme of Glorot and Bengio (2010). It is\n    implemented via {!variance_scaling} with fan-average mode.\n\n    Raises [Invalid_argument] under the same conditions as {!variance_scaling}.\n*)\n\nval glorot_normal : ?in_axis:int -> ?out_axis:int -> unit -> t\n(** [glorot_normal ?in_axis ?out_axis ()] is Glorot/Xavier normal\n    initialization.\n\n    It uses truncated normal sampling with fan-average target variance\n    [2 / (fan_in + fan_out)].\n\n    This is the Xavier/Glorot family of Glorot and Bengio (2010). 
It is\n    implemented via {!variance_scaling}.\n\n    Raises [Invalid_argument] under the same conditions as {!variance_scaling}.\n*)\n\n(** {1:he He/Kaiming} *)\n\nval he_uniform : ?in_axis:int -> ?out_axis:int -> unit -> t\n(** [he_uniform ?in_axis ?out_axis ()] is He/Kaiming uniform initialization.\n\n    It samples from [U(-limit, limit)] with [limit = sqrt (6 / fan_in)].\n\n    This is the Kaiming/He scheme of He et al. (2015), commonly used for\n    ReLU-like activations. It is implemented via {!variance_scaling} in fan-in\n    mode.\n\n    Raises [Invalid_argument] under the same conditions as {!variance_scaling}.\n*)\n\nval he_normal : ?in_axis:int -> ?out_axis:int -> unit -> t\n(** [he_normal ?in_axis ?out_axis ()] is He/Kaiming normal initialization.\n\n    It uses truncated normal sampling with fan-in target variance [2 / fan_in].\n\n    This is the Kaiming/He family of He et al. (2015). It is implemented via\n    {!variance_scaling}.\n\n    Raises [Invalid_argument] under the same conditions as {!variance_scaling}.\n*)\n\n(** {1:lecun LeCun} *)\n\nval lecun_uniform : ?in_axis:int -> ?out_axis:int -> unit -> t\n(** [lecun_uniform ?in_axis ?out_axis ()] is LeCun uniform initialization.\n\n    It samples from [U(-limit, limit)] with [limit = sqrt (3 / fan_in)].\n\n    This is the LeCun fan-in family (Efficient BackProp, LeCun et al., 1998). It\n    is implemented via {!variance_scaling}.\n\n    Raises [Invalid_argument] under the same conditions as {!variance_scaling}.\n*)\n\nval lecun_normal : ?in_axis:int -> ?out_axis:int -> unit -> t\n(** [lecun_normal ?in_axis ?out_axis ()] is LeCun normal initialization.\n\n    It uses truncated normal sampling with fan-in target variance [1 / fan_in].\n\n    This is the LeCun fan-in family (Efficient BackProp, LeCun et al., 1998). It\n    is implemented via {!variance_scaling}.\n\n    Raises [Invalid_argument] under the same conditions as {!variance_scaling}.\n*)\n"
  },
  {
    "path": "packages/kaun/lib/layer.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\ntype 'layout vars = {\n  params : Ptree.t;\n  state : Ptree.t;\n  dtype : (float, 'layout) Nx.dtype;\n}\n\ntype ('input, 'output) t = {\n  init : 'layout. dtype:(float, 'layout) Nx.dtype -> 'layout vars;\n  apply :\n    'layout 'in_elt.\n    params:Ptree.t ->\n    state:Ptree.t ->\n    dtype:(float, 'layout) Nx.dtype ->\n    training:bool ->\n    ?ctx:Context.t ->\n    ('input, 'in_elt) Nx.t ->\n    ('output, 'layout) Nx.t * Ptree.t;\n}\n\nlet invalid_argf fmt = Printf.ksprintf invalid_arg fmt\nlet params v = v.params\nlet state v = v.state\nlet dtype v = v.dtype\nlet with_params v params = { v with params }\nlet with_state v state = { v with state }\nlet make_vars ~params ~state ~dtype = { params; state; dtype }\n\nmodule Dtype = Nx_core.Dtype\n\nlet require_same_float_dtype (type p in_elt) ~ctx\n    (expected : (float, p) Nx.dtype) (x : (float, in_elt) Nx.t) :\n    (float, p) Nx.t =\n  match Dtype.equal_witness expected (Nx.dtype x) with\n  | Some Type.Equal -> (x : (float, p) Nx.t)\n  | None ->\n      invalid_argf \"%s: input dtype %s does not match model dtype %s\" ctx\n        (Dtype.to_string (Nx.dtype x))\n        (Dtype.to_string expected)\n\nlet require_int32_indices (type in_elt) ~ctx (x : (int32, in_elt) Nx.t) :\n    (int32, Bigarray.int32_elt) Nx.t =\n  match Dtype.equal_witness Nx.int32 (Nx.dtype x) with\n  | Some Type.Equal -> (x : (int32, Bigarray.int32_elt) Nx.t)\n  | None ->\n      invalid_argf \"%s: expected int32 indices, got %s\" ctx\n        (Dtype.to_string (Nx.dtype x))\n\nlet init t ~dtype = t.init ~dtype\n\nlet apply (type a b layout in_elt) (t : (a, b) t) (vars : layout vars) ~training\n    ?ctx (x : (a, in_elt) Nx.t) =\n  let y, state =\n    t.apply ~params:vars.params 
~state:vars.state ~dtype:vars.dtype ~training\n      ?ctx x\n  in\n  (y, { vars with state })\n\nlet compose left right =\n  {\n    init =\n      (fun ~dtype ->\n        let k1 = Nx.Rng.next_key () in\n        let k2 = Nx.Rng.next_key () in\n        let left_vars = Nx.Rng.with_key k1 (fun () -> left.init ~dtype) in\n        let right_vars = Nx.Rng.with_key k2 (fun () -> right.init ~dtype) in\n        {\n          params =\n            Ptree.dict\n              [ (\"left\", left_vars.params); (\"right\", right_vars.params) ];\n          state =\n            Ptree.dict\n              [ (\"left\", left_vars.state); (\"right\", right_vars.state) ];\n          dtype;\n        });\n    apply =\n      (fun ~params ~state ~dtype ~training ?ctx x ->\n        let param_fields =\n          Ptree.Dict.fields_exn ~ctx:\"Layer.compose.params\" params\n        in\n        let state_fields =\n          Ptree.Dict.fields_exn ~ctx:\"Layer.compose.state\" state\n        in\n        let left_params =\n          Ptree.Dict.find_exn ~ctx:\"Layer.compose.params\" \"left\" param_fields\n        in\n        let right_params =\n          Ptree.Dict.find_exn ~ctx:\"Layer.compose.params\" \"right\" param_fields\n        in\n        let left_state =\n          Ptree.Dict.find_exn ~ctx:\"Layer.compose.state\" \"left\" state_fields\n        in\n        let right_state =\n          Ptree.Dict.find_exn ~ctx:\"Layer.compose.state\" \"right\" state_fields\n        in\n        let y, left_state' =\n          left.apply ~params:left_params ~state:left_state ~dtype ~training ?ctx\n            x\n        in\n        let z, right_state' =\n          right.apply ~params:right_params ~state:right_state ~dtype ~training\n            ?ctx y\n        in\n        (z, Ptree.dict [ (\"left\", left_state'); (\"right\", right_state') ]));\n  }\n\n(* Dense *)\n\nlet linear ~in_features ~out_features ?weight_init ?bias_init () =\n  let weight_init =\n    match weight_init with\n    | Some init_value -> init_value\n  
  | None -> Init.glorot_uniform ()\n  in\n  let bias_init =\n    match bias_init with Some init_value -> init_value | None -> Init.zeros\n  in\n  {\n    init =\n      (fun ~dtype ->\n        let weight = weight_init.f [| in_features; out_features |] dtype in\n        let bias = bias_init.f [| out_features |] dtype in\n        {\n          params =\n            Ptree.dict\n              [ (\"weight\", Ptree.tensor weight); (\"bias\", Ptree.tensor bias) ];\n          state = Ptree.empty;\n          dtype;\n        });\n    apply =\n      (fun ~params ~state ~dtype ~training ?ctx x ->\n        ignore (training, ctx);\n        let x = require_same_float_dtype ~ctx:\"Layer.linear\" dtype x in\n        let fields = Ptree.Dict.fields_exn ~ctx:\"Layer.linear\" params in\n        let weight = Ptree.Dict.get_tensor_exn fields ~name:\"weight\" dtype in\n        let bias = Ptree.Dict.get_tensor_exn fields ~name:\"bias\" dtype in\n        (Nx.add (Nx.matmul x weight) bias, state));\n  }\n\n(* Convolution *)\n\nlet conv1d ~in_channels ~out_channels ?(kernel_size = 3) ?(stride = 1)\n    ?(dilation = 1) ?(padding = `Same) () =\n  let weight_init = Init.glorot_uniform ~in_axis:1 ~out_axis:0 () in\n  {\n    init =\n      (fun ~dtype ->\n        let weight =\n          weight_init.f [| out_channels; in_channels; kernel_size |] dtype\n        in\n        let bias = Nx.zeros dtype [| out_channels |] in\n        {\n          params =\n            Ptree.dict\n              [ (\"weight\", Ptree.tensor weight); (\"bias\", Ptree.tensor bias) ];\n          state = Ptree.empty;\n          dtype;\n        });\n    apply =\n      (fun ~params ~state ~dtype ~training ?ctx x ->\n        ignore (training, ctx);\n        let x = require_same_float_dtype ~ctx:\"Layer.conv1d\" dtype x in\n        let fields = Ptree.Dict.fields_exn ~ctx:\"Layer.conv1d\" params in\n        let weight = Ptree.Dict.get_tensor_exn fields ~name:\"weight\" dtype in\n        let bias = Ptree.Dict.get_tensor_exn fields 
~name:\"bias\" dtype in\n        let x =\n          match padding with\n          | `Same | `Valid -> x\n          | `Causal ->\n              let pad_left = (kernel_size - 1) * dilation in\n              Nx.pad [| (0, 0); (0, 0); (pad_left, 0) |] 0.0 x\n        in\n        let padding =\n          match padding with `Same -> `Same | `Valid | `Causal -> `Valid\n        in\n        (Fn.conv1d ~stride ~dilation ~padding ~bias x weight, state));\n  }\n\nlet conv2d ~in_channels ~out_channels ?(kernel_size = (3, 3)) () =\n  let kh, kw = kernel_size in\n  let weight_init = Init.glorot_uniform ~in_axis:1 ~out_axis:0 () in\n  {\n    init =\n      (fun ~dtype ->\n        let weight =\n          weight_init.f [| out_channels; in_channels; kh; kw |] dtype\n        in\n        let bias = Nx.zeros dtype [| out_channels |] in\n        {\n          params =\n            Ptree.dict\n              [ (\"weight\", Ptree.tensor weight); (\"bias\", Ptree.tensor bias) ];\n          state = Ptree.empty;\n          dtype;\n        });\n    apply =\n      (fun ~params ~state ~dtype ~training ?ctx x ->\n        ignore (training, ctx);\n        let x = require_same_float_dtype ~ctx:\"Layer.conv2d\" dtype x in\n        let fields = Ptree.Dict.fields_exn ~ctx:\"Layer.conv2d\" params in\n        let weight = Ptree.Dict.get_tensor_exn fields ~name:\"weight\" dtype in\n        let bias = Ptree.Dict.get_tensor_exn fields ~name:\"bias\" dtype in\n        (Fn.conv2d ~padding:`Same ~bias x weight, state));\n  }\n\n(* Normalization *)\n\nlet layer_norm ~dim ?(eps = 1e-5) () =\n  {\n    init =\n      (fun ~dtype ->\n        let gamma = Nx.ones dtype [| dim |] in\n        let beta = Nx.zeros dtype [| dim |] in\n        {\n          params =\n            Ptree.dict\n              [ (\"gamma\", Ptree.tensor gamma); (\"beta\", Ptree.tensor beta) ];\n          state = Ptree.empty;\n          dtype;\n        });\n    apply =\n      (fun ~params ~state ~dtype ~training ?ctx x ->\n        ignore (training, 
ctx);\n        let x = require_same_float_dtype ~ctx:\"Layer.layer_norm\" dtype x in\n        let fields = Ptree.Dict.fields_exn ~ctx:\"Layer.layer_norm\" params in\n        let gamma = Ptree.Dict.get_tensor_exn fields ~name:\"gamma\" dtype in\n        let beta = Ptree.Dict.get_tensor_exn fields ~name:\"beta\" dtype in\n        (Fn.layer_norm ~gamma ~beta ~epsilon:eps x, state));\n  }\n\nlet rms_norm ~dim ?(eps = 1e-6) () =\n  {\n    init =\n      (fun ~dtype ->\n        let scale = Nx.ones dtype [| dim |] in\n        {\n          params = Ptree.dict [ (\"scale\", Ptree.tensor scale) ];\n          state = Ptree.empty;\n          dtype;\n        });\n    apply =\n      (fun ~params ~state ~dtype ~training ?ctx x ->\n        ignore (training, ctx);\n        let x = require_same_float_dtype ~ctx:\"Layer.rms_norm\" dtype x in\n        let fields = Ptree.Dict.fields_exn ~ctx:\"Layer.rms_norm\" params in\n        let scale = Ptree.Dict.get_tensor_exn fields ~name:\"scale\" dtype in\n        (Fn.rms_norm ~gamma:scale ~epsilon:eps x, state));\n  }\n\nlet batch_norm ~num_features () =\n  {\n    init =\n      (fun ~dtype ->\n        let scale = Nx.ones dtype [| num_features |] in\n        let bias = Nx.zeros dtype [| num_features |] in\n        let running_mean = Nx.zeros dtype [| num_features |] in\n        let running_var = Nx.ones dtype [| num_features |] in\n        {\n          params =\n            Ptree.dict\n              [ (\"scale\", Ptree.tensor scale); (\"bias\", Ptree.tensor bias) ];\n          state =\n            Ptree.dict\n              [\n                (\"running_mean\", Ptree.tensor running_mean);\n                (\"running_var\", Ptree.tensor running_var);\n              ];\n          dtype;\n        });\n    apply =\n      (fun ~params ~state ~dtype ~training ?ctx x ->\n        ignore ctx;\n        let x = require_same_float_dtype ~ctx:\"Layer.batch_norm\" dtype x in\n        let params_fields =\n          Ptree.Dict.fields_exn 
~ctx:\"Layer.batch_norm.params\" params\n        in\n        let state_fields =\n          Ptree.Dict.fields_exn ~ctx:\"Layer.batch_norm.state\" state\n        in\n        let scale =\n          Ptree.Dict.get_tensor_exn params_fields ~name:\"scale\" dtype\n        in\n        let bias = Ptree.Dict.get_tensor_exn params_fields ~name:\"bias\" dtype in\n        let running_mean =\n          Ptree.Dict.get_tensor_exn state_fields ~name:\"running_mean\" dtype\n        in\n        let running_var =\n          Ptree.Dict.get_tensor_exn state_fields ~name:\"running_var\" dtype\n        in\n        let rank = Array.length (Nx.shape x) in\n        let axes =\n          match rank with\n          | 2 -> [ 0 ]\n          | 3 -> [ 0; 2 ]\n          | 4 -> [ 0; 2; 3 ]\n          | _ -> [ 0 ]\n        in\n        if training then\n          let momentum = 0.99 in\n          let one_minus = 1.0 -. momentum in\n          let batch_mean = Nx.mean ~axes x in\n          let batch_var = Nx.var ~axes x in\n          let y = Fn.batch_norm ~axes ~scale ~bias x in\n          let running_mean' =\n            Nx.add\n              (Nx.mul running_mean (Nx.scalar dtype momentum))\n              (Nx.mul batch_mean (Nx.scalar dtype one_minus))\n          in\n          let running_var' =\n            Nx.add\n              (Nx.mul running_var (Nx.scalar dtype momentum))\n              (Nx.mul batch_var (Nx.scalar dtype one_minus))\n          in\n          let state' =\n            Ptree.dict\n              [\n                (\"running_mean\", Ptree.tensor running_mean');\n                (\"running_var\", Ptree.tensor running_var');\n              ]\n          in\n          (y, state')\n        else\n          let scale_eval, bias_eval =\n            match rank with\n            | 2 ->\n                ( Nx.reshape [| 1; num_features |] scale,\n                  Nx.reshape [| 1; num_features |] bias )\n            | 3 ->\n                ( Nx.reshape [| 1; num_features; 1 |] scale,\n            
      Nx.reshape [| 1; num_features; 1 |] bias )\n            | 4 ->\n                ( Nx.reshape [| 1; num_features; 1; 1 |] scale,\n                  Nx.reshape [| 1; num_features; 1; 1 |] bias )\n            | _ -> (scale, bias)\n          in\n          let y =\n            Nx.standardize ~axes ~mean:running_mean ~variance:running_var x\n            |> fun normalized -> Nx.add (Nx.mul normalized scale_eval) bias_eval\n          in\n          (y, state));\n  }\n\n(* Embedding *)\n\nlet embedding ~vocab_size ~embed_dim ?(scale = true) () =\n  let emb_init = Init.normal ~stddev:0.02 () in\n  {\n    init =\n      (fun ~dtype ->\n        let embedding = emb_init.f [| vocab_size; embed_dim |] dtype in\n        {\n          params = Ptree.dict [ (\"embedding\", Ptree.tensor embedding) ];\n          state = Ptree.empty;\n          dtype;\n        });\n    apply =\n      (fun ~params ~state ~dtype ~training ?ctx indices ->\n        ignore (training, ctx);\n        let fields = Ptree.Dict.fields_exn ~ctx:\"Layer.embedding\" params in\n        let embedding =\n          Ptree.Dict.get_tensor_exn fields ~name:\"embedding\" dtype\n        in\n        let indices = require_int32_indices ~ctx:\"Layer.embedding\" indices in\n        (Fn.embedding ~scale ~embedding indices, state));\n  }\n\n(* Regularization *)\n\nlet dropout ~rate () =\n  if rate < 0.0 || rate >= 1.0 then\n    invalid_argf \"Layer.dropout: expected 0.0 <= rate < 1.0, got %g\" rate;\n  {\n    init = (fun ~dtype -> { params = Ptree.empty; state = Ptree.empty; dtype });\n    apply =\n      (fun ~params ~state ~dtype ~training ?ctx x ->\n        ignore (params, ctx);\n        let x = require_same_float_dtype ~ctx:\"Layer.dropout\" dtype x in\n        if (not training) || rate = 0.0 then (x, state)\n        else (Fn.dropout ~rate x, state));\n  }\n\n(* Activation layers *)\n\nlet relu () =\n  {\n    init = (fun ~dtype -> { params = Ptree.empty; state = Ptree.empty; dtype });\n    apply =\n      (fun ~params ~state 
~dtype ~training ?ctx x ->\n        ignore (params, training, ctx);\n        let x = require_same_float_dtype ~ctx:\"Layer.relu\" dtype x in\n        (Nx.relu x, state));\n  }\n\nlet gelu () =\n  {\n    init = (fun ~dtype -> { params = Ptree.empty; state = Ptree.empty; dtype });\n    apply =\n      (fun ~params ~state ~dtype ~training ?ctx x ->\n        ignore (params, training, ctx);\n        let x = require_same_float_dtype ~ctx:\"Layer.gelu\" dtype x in\n        (Activation.gelu x, state));\n  }\n\nlet silu () =\n  {\n    init = (fun ~dtype -> { params = Ptree.empty; state = Ptree.empty; dtype });\n    apply =\n      (fun ~params ~state ~dtype ~training ?ctx x ->\n        ignore (params, training, ctx);\n        let x = require_same_float_dtype ~ctx:\"Layer.silu\" dtype x in\n        (Activation.silu x, state));\n  }\n\nlet tanh () =\n  {\n    init = (fun ~dtype -> { params = Ptree.empty; state = Ptree.empty; dtype });\n    apply =\n      (fun ~params ~state ~dtype ~training ?ctx x ->\n        ignore (params, training, ctx);\n        let x = require_same_float_dtype ~ctx:\"Layer.tanh\" dtype x in\n        (Nx.tanh x, state));\n  }\n\nlet sigmoid () =\n  {\n    init = (fun ~dtype -> { params = Ptree.empty; state = Ptree.empty; dtype });\n    apply =\n      (fun ~params ~state ~dtype ~training ?ctx x ->\n        ignore (params, training, ctx);\n        let x = require_same_float_dtype ~ctx:\"Layer.sigmoid\" dtype x in\n        (Nx.sigmoid x, state));\n  }\n\n(* Pooling *)\n\nlet max_pool2d ~kernel_size ?stride () =\n  let stride = match stride with Some value -> value | None -> kernel_size in\n  {\n    init = (fun ~dtype -> { params = Ptree.empty; state = Ptree.empty; dtype });\n    apply =\n      (fun ~params ~state ~dtype ~training ?ctx x ->\n        ignore (params, training, ctx);\n        let x = require_same_float_dtype ~ctx:\"Layer.max_pool2d\" dtype x in\n        (Fn.max_pool2d ~kernel_size ~stride x, state));\n  }\n\nlet avg_pool2d ~kernel_size ?stride () 
=\n  let stride = match stride with Some value -> value | None -> kernel_size in\n  {\n    init = (fun ~dtype -> { params = Ptree.empty; state = Ptree.empty; dtype });\n    apply =\n      (fun ~params ~state ~dtype ~training ?ctx x ->\n        ignore (params, training, ctx);\n        let x = require_same_float_dtype ~ctx:\"Layer.avg_pool2d\" dtype x in\n        (Fn.avg_pool2d ~kernel_size ~stride x, state));\n  }\n\n(* Reshape *)\n\nlet flatten () =\n  {\n    init = (fun ~dtype -> { params = Ptree.empty; state = Ptree.empty; dtype });\n    apply =\n      (fun ~params ~state ~dtype ~training ?ctx x ->\n        ignore (params, training, ctx);\n        let x = require_same_float_dtype ~ctx:\"Layer.flatten\" dtype x in\n        let shape = Nx.shape x in\n        if Array.length shape = 0 then\n          invalid_arg \"Layer.flatten: expected rank >= 1\";\n        let batch_size = shape.(0) in\n        let flat_size =\n          Array.fold_left ( * ) 1 (Array.sub shape 1 (Array.length shape - 1))\n        in\n        let x = if Nx.is_c_contiguous x then x else Nx.contiguous x in\n        (Nx.reshape [| batch_size; flat_size |] x, state));\n  }\n\n(* Composition *)\n\nlet sequential layers =\n  {\n    init =\n      (fun ~dtype ->\n        let n = List.length layers in\n        let keys = Array.init n (fun _ -> Nx.Rng.next_key ()) in\n        let acc_params = ref [] in\n        let acc_state = ref [] in\n        List.iteri\n          (fun i module_ ->\n            let vars =\n              Nx.Rng.with_key keys.(i) (fun () -> module_.init ~dtype)\n            in\n            acc_params := vars.params :: !acc_params;\n            acc_state := vars.state :: !acc_state)\n          layers;\n        {\n          params = Ptree.list (List.rev !acc_params);\n          state = Ptree.list (List.rev !acc_state);\n          dtype;\n        });\n    apply =\n      (fun ~params ~state ~dtype ~training ?ctx input ->\n        let params_items =\n          Ptree.List.items_exn 
~ctx:\"Layer.sequential.params\" params\n        in\n        let state_items =\n          Ptree.List.items_exn ~ctx:\"Layer.sequential.state\" state\n        in\n        match (layers, params_items, state_items) with\n        | [], [], [] ->\n            let input =\n              require_same_float_dtype ~ctx:\"Layer.sequential\" dtype input\n            in\n            (input, Ptree.list [])\n        | first :: rest_layers, p :: ps, s :: ss ->\n            let y, first_state =\n              first.apply ~params:p ~state:s ~dtype ~training ?ctx input\n            in\n            let rec go modules param_values state_values x =\n              match (modules, param_values, state_values) with\n              | [], [], [] -> (x, [])\n              | module_ :: modules_tail, p :: ps_tail, s :: ss_tail ->\n                  let y, state' =\n                    module_.apply ~params:p ~state:s ~dtype ~training ?ctx x\n                  in\n                  let y_final, state_tail = go modules_tail ps_tail ss_tail y in\n                  (y_final, state' :: state_tail)\n              | _ ->\n                  invalid_arg\n                    \"Layer.sequential: params/state/layers length mismatch\"\n            in\n            let y_final, rest_states = go rest_layers ps ss y in\n            (y_final, Ptree.list (first_state :: rest_states))\n        | _ ->\n            invalid_arg \"Layer.sequential: params/state/layers length mismatch\");\n  }\n"
  },
  {
    "path": "packages/kaun/lib/layer.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Composable neural network layers.\n\n    A {!type:t} pairs parameter/state initialization with a forward computation.\n    Layers compose with {!compose} for heterogeneous pipelines (for example\n    embeddings to dense layers) and with {!sequential} for homogeneous float\n    pipelines. *)\n\n(** {1:types Types} *)\n\ntype 'layout vars\n(** The type for model variables.\n\n    [params] are trainable variables consumed by {!Optim}. [state] is\n    non-trainable mutable state updated by forward passes (for example running\n    statistics in {!batch_norm}). *)\n\nval params : 'layout vars -> Ptree.t\n(** [params v] is [v]'s trainable parameter tree. *)\n\nval state : 'layout vars -> Ptree.t\n(** [state v] is [v]'s non-trainable mutable state tree. *)\n\nval dtype : 'layout vars -> (float, 'layout) Nx.dtype\n(** [dtype v] is [v]'s floating dtype witness. *)\n\nval with_params : 'layout vars -> Ptree.t -> 'layout vars\n(** [with_params v params] is [v] with replaced trainable parameters. *)\n\nval with_state : 'layout vars -> Ptree.t -> 'layout vars\n(** [with_state v state] is [v] with replaced non-trainable state. *)\n\nval make_vars :\n  params:Ptree.t ->\n  state:Ptree.t ->\n  dtype:(float, 'layout) Nx.dtype ->\n  'layout vars\n(** [make_vars ~params ~state ~dtype] builds model variables.\n\n    This is mainly useful for layer constructors implemented outside the\n    {!Layer} module. *)\n\ntype ('input, 'output) t = {\n  init : 'layout. 
dtype:(float, 'layout) Nx.dtype -> 'layout vars;\n  apply :\n    'layout 'in_elt.\n    params:Ptree.t ->\n    state:Ptree.t ->\n    dtype:(float, 'layout) Nx.dtype ->\n    training:bool ->\n    ?ctx:Context.t ->\n    ('input, 'in_elt) Nx.t ->\n    ('output, 'layout) Nx.t * Ptree.t;\n}\n(** The type for layers.\n\n    [init] creates fresh [params] and [state]. [apply] computes a forward pass\n    and returns updated [state]. Random operations (weight initialization,\n    dropout) use the implicit RNG scope established by {!Nx.Rng.run} or\n    {!Nx.Rng.with_key}.\n\n    The input tensor's dtype witness ['in_elt] is independent of the model's\n    float dtype witness ['layout]. This allows layers like {!embedding} to\n    accept [int32_elt] indices while the model parameters use [float32_elt].\n    Float-consuming layers (e.g. {!linear}) require the input dtype to match the\n    model dtype exactly and raise [Invalid_argument] on mismatch.\n\n    [ctx] carries per-call auxiliary data (attention masks, position ids,\n    encoder memory). Most layers ignore it; transformer layers read from it\n    using well-known key names. See {!Context}. *)\n\nval init : ('a, 'b) t -> dtype:(float, 'layout) Nx.dtype -> 'layout vars\n(** [init m ~dtype] is [m]'s fresh variables.\n\n    Composite layers ({!compose}, {!sequential}) isolate sub-network RNG streams\n    via {!Nx.Rng.with_key}. *)\n\nval apply :\n  ('a, 'b) t ->\n  'layout vars ->\n  training:bool ->\n  ?ctx:Context.t ->\n  ('a, 'in_elt) Nx.t ->\n  ('b, 'layout) Nx.t * 'layout vars\n(** [apply m vars ~training ?ctx x] is the forward pass of [m].\n\n    Returns [(y, vars')] where [params vars' = params vars] and [state vars'] is\n    the updated state from the forward pass.\n\n    The input tensor's dtype witness ['in_elt] is independent of the model's\n    float dtype witness ['layout]. 
For float-consuming layers, the input must\n    have the same dtype as the model; a mismatch raises [Invalid_argument].\n\n    [training] controls stochastic/stateful behavior. For example, {!dropout}\n    uses dropout masks only when [training = true], and {!batch_norm} updates\n    running statistics only when [training = true].\n\n    [ctx] carries per-call auxiliary data such as attention masks. See\n    {!Context}. *)\n\n(** {1:compose Composition} *)\n\nval compose : ('a, 'b) t -> ('b, 'c) t -> ('a, 'c) t\n(** [compose left right] applies [left] then [right].\n\n    Parameters and state are stored as {!Ptree.Dict} nodes with keys [\"left\"]\n    and [\"right\"]. The RNG key is split between both layers during\n    initialization and forward pass. *)\n\nval sequential : (float, float) t list -> (float, float) t\n(** [sequential layers] applies [layers] in order.\n\n    Parameters and state are stored as {!Ptree.List} nodes with one entry per\n    layer. The RNG key is split per layer during initialization and forward\n    pass.\n\n    Raises [Invalid_argument] if runtime parameter/state list lengths do not\n    match [layers]. *)\n\n(** {1:dense Dense} *)\n\nval linear :\n  in_features:int ->\n  out_features:int ->\n  ?weight_init:Init.t ->\n  ?bias_init:Init.t ->\n  unit ->\n  (float, float) t\n(** [linear ~in_features ~out_features ?weight_init ?bias_init ()] is the fully\n    connected map [xW + b].\n\n    [weight_init] defaults to {!Init.glorot_uniform ()}. [bias_init] defaults to\n    {!Init.zeros}.\n\n    Parameters:\n    - [weight] with shape [[in_features; out_features]].\n    - [bias] with shape [[out_features]]. 
*)\n\n(** {1:conv Convolution} *)\n\nval conv1d :\n  in_channels:int ->\n  out_channels:int ->\n  ?kernel_size:int ->\n  ?stride:int ->\n  ?dilation:int ->\n  ?padding:[ `Same | `Valid | `Causal ] ->\n  unit ->\n  (float, float) t\n(** [conv1d ~in_channels ~out_channels ?kernel_size ?stride ?dilation ?padding\n     ()] is 1D convolution over inputs shaped [[batch; in_channels; length]].\n\n    [kernel_size] defaults to [3]. [stride] defaults to [1]. [dilation] defaults\n    to [1]. [padding] defaults to [`Same].\n\n    Parameters:\n    - [weight] with shape [[out_channels; in_channels; kernel_size]].\n    - [bias] with shape [[out_channels]]. *)\n\nval conv2d :\n  in_channels:int ->\n  out_channels:int ->\n  ?kernel_size:int * int ->\n  unit ->\n  (float, float) t\n(** [conv2d ~in_channels ~out_channels ?kernel_size ()] is 2D convolution over\n    inputs shaped [[batch; in_channels; height; width]].\n\n    [kernel_size] defaults to [(3, 3)]. Stride is fixed to [(1, 1)] and padding\n    mode is [`Same].\n\n    Parameters:\n    - [weight] with shape [[out_channels; in_channels; kh; kw]].\n    - [bias] with shape [[out_channels]]. *)\n\n(** {1:norm Normalization} *)\n\nval layer_norm : dim:int -> ?eps:float -> unit -> (float, float) t\n(** [layer_norm ~dim ?eps ()] is layer normalization with learnable affine\n    parameters.\n\n    [eps] defaults to [1e-5].\n\n    Parameters:\n    - [gamma] with shape [[dim]].\n    - [beta] with shape [[dim]]. *)\n\nval rms_norm : dim:int -> ?eps:float -> unit -> (float, float) t\n(** [rms_norm ~dim ?eps ()] is RMS normalization with learnable scale.\n\n    [eps] defaults to [1e-6].\n\n    Parameters:\n    - [scale] with shape [[dim]]. *)\n\nval batch_norm : num_features:int -> unit -> (float, float) t\n(** [batch_norm ~num_features ()] is stateful batch normalization.\n\n    During training, batch statistics are used and running statistics are\n    updated. 
During evaluation, running statistics are used and preserved.\n\n    Normalization axes are inferred from rank:\n    - rank 2 uses [[0]].\n    - rank 3 uses [[0; 2]].\n    - rank 4 uses [[0; 2; 3]].\n    - other ranks use [[0]].\n\n    Parameters:\n    - [scale] with shape [[num_features]].\n    - [bias] with shape [[num_features]].\n\n    State:\n    - [running_mean] with shape [[num_features]].\n    - [running_var] with shape [[num_features]]. *)\n\n(** {1:embed Embedding} *)\n\nval embedding :\n  vocab_size:int -> embed_dim:int -> ?scale:bool -> unit -> (int32, float) t\n(** [embedding ~vocab_size ~embed_dim ?scale ()] is an embedding lookup layer.\n\n    Inputs are int32 token indices. Output shape is\n    [indices_shape ++ [embed_dim]].\n\n    [scale] defaults to [true]. When [true], output vectors are multiplied by\n    [sqrt embed_dim].\n\n    Parameters:\n    - [embedding] with shape [[vocab_size; embed_dim]]. *)\n\n(** {1:reg Regularization} *)\n\nval dropout : rate:float -> unit -> (float, float) t\n(** [dropout ~rate ()] is elementwise dropout.\n\n    When [training = false], it is identity. When [training = true], dropout\n    masks are generated using keys from the implicit RNG scope.\n\n    Raises [Invalid_argument] if [rate] is outside [0.0 <= rate < 1.0]. *)\n\n(** {1:act Activation Layers} *)\n\nval relu : unit -> (float, float) t\n(** [relu ()] is [max(0, x)]. No parameters. *)\n\nval gelu : unit -> (float, float) t\n(** [gelu ()] is the Gaussian error linear unit. No parameters. *)\n\nval silu : unit -> (float, float) t\n(** [silu ()] is [x * sigmoid(x)]. No parameters. *)\n\nval tanh : unit -> (float, float) t\n(** [tanh ()] is hyperbolic tangent. No parameters. *)\n\nval sigmoid : unit -> (float, float) t\n(** [sigmoid ()] is the logistic function. No parameters. 
*)\n\n(** {1:pool Pooling} *)\n\nval max_pool2d :\n  kernel_size:int * int -> ?stride:int * int -> unit -> (float, float) t\n(** [max_pool2d ~kernel_size ?stride ()] is 2D max pooling.\n\n    [stride] defaults to [kernel_size]. No parameters. *)\n\nval avg_pool2d :\n  kernel_size:int * int -> ?stride:int * int -> unit -> (float, float) t\n(** [avg_pool2d ~kernel_size ?stride ()] is 2D average pooling.\n\n    [stride] defaults to [kernel_size]. No parameters. *)\n\n(** {1:reshape Reshape} *)\n\nval flatten : unit -> (float, float) t\n(** [flatten ()] flattens all dimensions after the batch dimension.\n\n    [[batch; d1; ...; dn]] becomes [[batch; d1 * ... * dn]].\n\n    Raises [Invalid_argument] if the input rank is [0]. *)\n"
  },
  {
    "path": "packages/kaun/lib/loss.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nlet invalid_argf fmt = Printf.ksprintf invalid_arg fmt\n\nlet invalid_argf_fn fn fmt =\n  Printf.ksprintf (fun msg -> invalid_argf \"Loss.%s: %s\" fn msg) fmt\n\nlet check_logits_shape ~fn logits =\n  let logits_shape = Nx.shape logits in\n  let logits_rank = Array.length logits_shape in\n  if logits_rank < 1 then invalid_argf_fn fn \"logits must have rank >= 1\";\n  let class_axis = logits_rank - 1 in\n  let num_classes = logits_shape.(class_axis) in\n  if num_classes <= 0 then\n    invalid_argf_fn fn \"logits class dimension must be positive (got %d)\"\n      num_classes;\n  logits_shape\n\nlet check_same_shape ~fn ~rhs_name lhs rhs =\n  let lhs_rank = Array.length lhs in\n  let rhs_rank = Array.length rhs in\n  if rhs_rank <> lhs_rank then\n    invalid_argf_fn fn \"%s rank mismatch (got %d, expected %d)\" rhs_name\n      rhs_rank lhs_rank;\n  for i = 0 to lhs_rank - 1 do\n    if rhs.(i) <> lhs.(i) then\n      invalid_argf_fn fn \"%s shape mismatch at axis %d (got %d, expected %d)\"\n        rhs_name i rhs.(i) lhs.(i)\n  done\n\nlet check_cross_entropy_shapes logits labels =\n  let fn = \"cross_entropy\" in\n  let logits_shape = check_logits_shape ~fn logits in\n  let labels_shape = Nx.shape labels in\n  check_same_shape ~fn ~rhs_name:\"labels\" logits_shape labels_shape\n\nlet cross_entropy logits labels =\n  check_cross_entropy_shapes logits labels;\n  let max_logits = Nx.max logits ~axes:[ -1 ] ~keepdims:true in\n  let shifted = Nx.sub logits max_logits in\n  let log_sum_exp =\n    Nx.log (Nx.sum (Nx.exp shifted) ~axes:[ -1 ] ~keepdims:true)\n  in\n  let log_softmax = Nx.sub shifted log_sum_exp in\n  let per_example = Nx.neg (Nx.sum (Nx.mul labels log_softmax) ~axes:[ -1 ]) in\n  Nx.mean 
per_example\n\nlet check_sparse_indices_dtype indices =\n  let fn = \"cross_entropy_sparse\" in\n  let dtype = Nx.dtype indices in\n  if not (Nx_core.Dtype.is_int dtype) then\n    invalid_argf_fn fn \"expected integer labels, got %s\"\n      (Nx_core.Dtype.to_string dtype)\n\nlet check_sparse_shapes logits indices =\n  let fn = \"cross_entropy_sparse\" in\n  let logits_shape = check_logits_shape ~fn logits in\n  let indices_shape = Nx.shape indices in\n  let logits_rank = Array.length logits_shape in\n  let indices_rank = Array.length indices_shape in\n  if indices_rank <> logits_rank - 1 then\n    invalid_argf_fn fn \"labels rank mismatch (got %d, expected %d)\" indices_rank\n      (logits_rank - 1);\n  for i = 0 to indices_rank - 1 do\n    if indices_shape.(i) <> logits_shape.(i) then\n      invalid_argf_fn fn\n        \"labels shape mismatch at axis %d (got %d, expected %d)\" i\n        indices_shape.(i) logits_shape.(i)\n  done;\n  let class_axis = logits_rank - 1 in\n  logits_shape.(class_axis)\n\nlet cross_entropy_sparse logits indices =\n  check_sparse_indices_dtype indices;\n  ignore (check_sparse_shapes logits indices : int);\n  let indices_int = Nx.cast Nx.int32 indices in\n  (* Numerically stable log-softmax *)\n  let max_logits = Nx.max logits ~axes:[ -1 ] ~keepdims:true in\n  let shifted = Nx.sub logits max_logits in\n  let log_sum_exp =\n    Nx.log (Nx.sum (Nx.exp shifted) ~axes:[ -1 ] ~keepdims:true)\n  in\n  (* Gather true-class logits: [...] 
→ [...; 1] for take_along_axis *)\n  let indices_expanded = Nx.expand_dims [ -1 ] indices_int in\n  let true_logits = Nx.take_along_axis ~axis:(-1) indices_expanded shifted in\n  (* loss = -(true_logit - log_sum_exp) *)\n  let per_example =\n    Nx.neg\n      (Nx.sub\n         (Nx.squeeze ~axes:[ -1 ] true_logits)\n         (Nx.squeeze ~axes:[ -1 ] log_sum_exp))\n  in\n  Nx.mean per_example\n\nlet binary_cross_entropy logits labels =\n  let fn = \"binary_cross_entropy\" in\n  let logits_shape = Nx.shape logits in\n  let labels_shape = Nx.shape labels in\n  check_same_shape ~fn ~rhs_name:\"labels\" logits_shape labels_shape;\n  let dtype = Nx.dtype logits in\n  let one = Nx.scalar dtype 1.0 in\n  let log_p = Activation.log_sigmoid logits in\n  let log_1_minus_p = Activation.log_sigmoid (Nx.neg logits) in\n  let per_element =\n    Nx.neg\n      (Nx.add (Nx.mul labels log_p) (Nx.mul (Nx.sub one labels) log_1_minus_p))\n  in\n  Nx.mean per_element\n\nlet mse predictions targets =\n  let diff = Nx.sub predictions targets in\n  Nx.mean (Nx.mul diff diff)\n\nlet mae predictions targets = Nx.mean (Nx.abs (Nx.sub predictions targets))\n"
  },
  {
    "path": "packages/kaun/lib/loss.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Loss functions.\n\n    Losses are differentiable through Rune's autodiff and return scalar means.\n    [Invalid_argument] messages are prefixed with [Loss.<function>:]. *)\n\n(** {1:classification Classification} *)\n\nval cross_entropy : (float, 'a) Nx.t -> (float, 'a) Nx.t -> (float, 'a) Nx.t\n(** [cross_entropy logits one_hot_labels] is softmax cross-entropy.\n\n    [logits] has shape [[...; num_classes]] and must be rank >= 1.\n    [one_hot_labels] must have the same shape.\n\n    Uses the log-sum-exp trick for numerical stability.\n\n    Raises [Invalid_argument] if ranks or shapes differ, or if [num_classes] is\n    not positive. *)\n\nval cross_entropy_sparse : (float, 'a) Nx.t -> ('c, 'd) Nx.t -> (float, 'a) Nx.t\n(** [cross_entropy_sparse logits class_indices] is {!cross_entropy} with integer\n    labels.\n\n    [class_indices] has shape [[...]] and must match [logits] without the last\n    dimension. The class dimension is [logits]' last axis.\n\n    Raises [Invalid_argument] if labels are non-integer, ranks mismatch,\n    non-class dimensions differ, or the class dimension is non-positive. *)\n\nval binary_cross_entropy :\n  (float, 'a) Nx.t -> (float, 'a) Nx.t -> (float, 'a) Nx.t\n(** [binary_cross_entropy logits labels] is sigmoid binary cross-entropy.\n\n    [logits] are raw (not sigmoid-normalized). [labels] are typically in\n    \\[[0];[1]\\]. Uses log-sigmoid for numerical stability.\n\n    Raises [Invalid_argument] if [logits] and [labels] shapes differ. 
*)\n\n(** {1:regression Regression} *)\n\nval mse : ('a, 'b) Nx.t -> ('a, 'b) Nx.t -> ('a, 'b) Nx.t\n(** [mse predictions targets] is [mean ((predictions - targets)^2)].\n\n    Shape compatibility follows Nx broadcasting semantics. *)\n\nval mae : ('a, 'b) Nx.t -> ('a, 'b) Nx.t -> ('a, 'b) Nx.t\n(** [mae predictions targets] is [mean (abs (predictions - targets))].\n\n    Shape compatibility follows Nx broadcasting semantics. *)\n"
  },
  {
    "path": "packages/kaun/lib/metric.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nlet strf = Printf.sprintf\n\n(* Tracker *)\n\ntype entry = { mutable sum : float; mutable n : int }\ntype tracker = (string, entry) Hashtbl.t\n\nlet tracker () : tracker = Hashtbl.create 16\n\nlet observe (t : tracker) name value =\n  match Hashtbl.find_opt t name with\n  | Some e ->\n      e.sum <- e.sum +. value;\n      e.n <- e.n + 1\n  | None -> Hashtbl.replace t name { sum = value; n = 1 }\n\nlet find_entry t name = Hashtbl.find t name\n\nlet mean t name =\n  let e = find_entry t name in\n  e.sum /. float_of_int e.n\n\nlet count t name =\n  let e = find_entry t name in\n  e.n\n\nlet reset t = Hashtbl.reset t\n\nlet to_list t =\n  let pairs =\n    Hashtbl.fold (fun k e acc -> (k, e.sum /. float_of_int e.n) :: acc) t []\n  in\n  List.sort (fun (a, _) (b, _) -> String.compare a b) pairs\n\nlet summary t =\n  let pairs = to_list t in\n  String.concat \"  \" (List.map (fun (k, v) -> strf \"%s: %.4f\" k v) pairs)\n\n(* Dataset evaluation *)\n\nlet eval f data =\n  let sum = ref 0.0 in\n  let n = ref 0 in\n  Data.iter\n    (fun x ->\n      sum := !sum +. f x;\n      incr n)\n    data;\n  if !n = 0 then invalid_arg \"Metric.eval: empty dataset\";\n  !sum /. float_of_int !n\n\nlet eval_many f data =\n  let tbl = Hashtbl.create 8 in\n  let n = ref 0 in\n  Data.iter\n    (fun x ->\n      let pairs = f x in\n      List.iter\n        (fun (k, v) ->\n          match Hashtbl.find_opt tbl k with\n          | Some e ->\n              e.sum <- e.sum +. 
v;\n              e.n <- e.n + 1\n          | None -> Hashtbl.replace tbl k { sum = v; n = 1 })\n        pairs;\n      incr n)\n    data;\n  if !n = 0 then invalid_arg \"Metric.eval_many: empty dataset\";\n  let pairs =\n    Hashtbl.fold (fun k e acc -> (k, e.sum /. float_of_int e.n) :: acc) tbl []\n  in\n  List.sort (fun (a, _) (b, _) -> String.compare a b) pairs\n\ntype average = Macro | Micro | Weighted\n\n(* Metric functions *)\n\nlet accuracy (type a b c) (predictions : (float, a) Nx.t)\n    (targets : (b, c) Nx.t) =\n  let pred_shape = Nx.shape predictions in\n  let rank = Array.length pred_shape in\n  let predicted =\n    if rank >= 2 then\n      (* Multi-class: argmax along last axis *)\n      Nx.argmax ~axis:(-1) predictions\n    else\n      (* Binary: threshold at 0.5 *)\n      let half = Nx.scalar (Nx.dtype predictions) 0.5 in\n      Nx.cast Nx.int32 (Nx.greater predictions half)\n  in\n  let targets_i32 = Nx.cast Nx.int32 targets in\n  let correct = Nx.equal predicted targets_i32 in\n  let correct_f = Nx.cast Nx.float32 correct in\n  Nx.item [] (Nx.mean correct_f)\n\nlet binary_accuracy ?(threshold = 0.5) predictions targets =\n  let dtype = Nx.dtype predictions in\n  let thresh = Nx.scalar dtype threshold in\n  let predicted = Nx.cast Nx.float32 (Nx.greater predictions thresh) in\n  let targets_f = Nx.cast Nx.float32 targets in\n  let correct = Nx.equal predicted targets_f in\n  let correct_f = Nx.cast Nx.float32 correct in\n  Nx.item [] (Nx.mean correct_f)\n\n(* Classification metrics *)\n\nlet confusion_counts (type a b c) (predictions : (float, a) Nx.t)\n    (targets : (b, c) Nx.t) =\n  let pred_shape = Nx.shape predictions in\n  let num_classes = pred_shape.(Array.length pred_shape - 1) in\n  let predicted = Nx.argmax ~axis:(-1) predictions in\n  let targets_i32 = Nx.cast Nx.int32 targets in\n  let pred_oh = Nx.cast Nx.float32 (Nx.one_hot ~num_classes predicted) in\n  let tgt_oh = Nx.cast Nx.float32 (Nx.one_hot ~num_classes targets_i32) in\n  let 
tp = Nx.sum (Nx.mul pred_oh tgt_oh) ~axes:[ 0 ] in\n  let pred_sum = Nx.sum pred_oh ~axes:[ 0 ] in\n  let tgt_sum = Nx.sum tgt_oh ~axes:[ 0 ] in\n  let fp = Nx.sub pred_sum tp in\n  let fn = Nx.sub tgt_sum tp in\n  (tp, fp, fn, num_classes)\n\nlet safe_div a b = if b = 0.0 then 0.0 else a /. b\n\nlet precision avg predictions targets =\n  let tp, fp, fn, num_classes = confusion_counts predictions targets in\n  let tp = Nx.to_array tp in\n  let fp = Nx.to_array fp in\n  match avg with\n  | Micro ->\n      let tp_sum = Array.fold_left ( +. ) 0.0 tp in\n      let fp_sum = Array.fold_left ( +. ) 0.0 fp in\n      safe_div tp_sum (tp_sum +. fp_sum)\n  | Macro ->\n      let sum = ref 0.0 in\n      for c = 0 to num_classes - 1 do\n        sum := !sum +. safe_div tp.(c) (tp.(c) +. fp.(c))\n      done;\n      !sum /. float_of_int num_classes\n  | Weighted ->\n      let fn = Nx.to_array fn in\n      let w_sum = ref 0.0 in\n      let total = ref 0.0 in\n      for c = 0 to num_classes - 1 do\n        let support = tp.(c) +. fn.(c) in\n        w_sum := !w_sum +. (support *. safe_div tp.(c) (tp.(c) +. fp.(c)));\n        total := !total +. support\n      done;\n      safe_div !w_sum !total\n\nlet recall avg predictions targets =\n  let tp, _fp, fn, num_classes = confusion_counts predictions targets in\n  let tp = Nx.to_array tp in\n  let fn = Nx.to_array fn in\n  match avg with\n  | Micro ->\n      let tp_sum = Array.fold_left ( +. ) 0.0 tp in\n      let fn_sum = Array.fold_left ( +. ) 0.0 fn in\n      safe_div tp_sum (tp_sum +. fn_sum)\n  | Macro ->\n      let sum = ref 0.0 in\n      for c = 0 to num_classes - 1 do\n        sum := !sum +. safe_div tp.(c) (tp.(c) +. fn.(c))\n      done;\n      !sum /. float_of_int num_classes\n  | Weighted ->\n      let w_sum = ref 0.0 in\n      let total = ref 0.0 in\n      for c = 0 to num_classes - 1 do\n        let support = tp.(c) +. fn.(c) in\n        w_sum := !w_sum +. (support *. safe_div tp.(c) (tp.(c) +. 
fn.(c)));\n        total := !total +. support\n      done;\n      safe_div !w_sum !total\n\nlet f1 avg predictions targets =\n  let tp, fp, fn, num_classes = confusion_counts predictions targets in\n  let tp = Nx.to_array tp in\n  let fp = Nx.to_array fp in\n  let fn = Nx.to_array fn in\n  match avg with\n  | Micro ->\n      let tp_sum = Array.fold_left ( +. ) 0.0 tp in\n      let fp_sum = Array.fold_left ( +. ) 0.0 fp in\n      let fn_sum = Array.fold_left ( +. ) 0.0 fn in\n      safe_div (2.0 *. tp_sum) ((2.0 *. tp_sum) +. fp_sum +. fn_sum)\n  | Macro ->\n      let sum = ref 0.0 in\n      for c = 0 to num_classes - 1 do\n        sum :=\n          !sum +. safe_div (2.0 *. tp.(c)) ((2.0 *. tp.(c)) +. fp.(c) +. fn.(c))\n      done;\n      !sum /. float_of_int num_classes\n  | Weighted ->\n      let w_sum = ref 0.0 in\n      let total = ref 0.0 in\n      for c = 0 to num_classes - 1 do\n        let support = tp.(c) +. fn.(c) in\n        w_sum :=\n          !w_sum\n          +. support\n             *. safe_div (2.0 *. tp.(c)) ((2.0 *. tp.(c)) +. fp.(c) +. fn.(c));\n        total := !total +. support\n      done;\n      safe_div !w_sum !total\n"
  },
  {
    "path": "packages/kaun/lib/metric.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Training metrics.\n\n    {!Metric} provides running scalar tracking and dataset evaluation.\n\n    A {!type:tracker} accumulates named running means during training. For\n    dataset evaluation, {!eval} and {!eval_many} fold user-supplied functions\n    over a {!Data.t} pipeline and return averaged results.\n\n    Metric functions such as {!accuracy} are plain tensor-to-scalar functions\n    that compose freely with {!eval}. *)\n\n(** {1:tracker Running Tracker} *)\n\ntype tracker\n(** A mutable set of named running-mean accumulators. *)\n\nval tracker : unit -> tracker\n(** [tracker ()] is a fresh tracker with no observations. *)\n\nval observe : tracker -> string -> float -> unit\n(** [observe t name value] records [value] under [name]. *)\n\nval mean : tracker -> string -> float\n(** [mean t name] is the running mean of observations under [name].\n\n    Raises [Not_found] if [name] was never observed. *)\n\nval count : tracker -> string -> int\n(** [count t name] is the number of observations under [name].\n\n    Raises [Not_found] if [name] was never observed. *)\n\nval reset : tracker -> unit\n(** [reset t] clears all observations. *)\n\nval to_list : tracker -> (string * float) list\n(** [to_list t] is the current means as [(name, mean)] pairs, sorted by name. *)\n\nval summary : tracker -> string\n(** [summary t] is a human-readable one-liner of all current means, e.g.\n    [\"accuracy: 0.9150  loss: 0.4231\"]. *)\n\n(** {1:eval Dataset Evaluation} *)\n\nval eval : ('a -> float) -> 'a Data.t -> float\n(** [eval f data] is the mean of [f batch] over all elements of [data].\n\n    Raises [Invalid_argument] if [data] yields no elements. 
*)\n\nval eval_many :\n  ('a -> (string * float) list) -> 'a Data.t -> (string * float) list\n(** [eval_many f data] is the per-name mean of [f batch] over all elements of\n    [data]. Returns [(name, mean)] pairs sorted by name.\n\n    Raises [Invalid_argument] if [data] yields no elements. *)\n\n(** {1:average Averaging} *)\n\n(** The type for multi-class averaging modes. *)\ntype average =\n  | Macro  (** The unweighted mean of per-class scores. *)\n  | Micro  (** Aggregates TP, FP, and FN globally before computing the score. *)\n  | Weighted\n      (** The mean of per-class scores weighted by class support (number of\n          true instances). *)\n\n(** {1:compute Common Metric Functions} *)\n\nval accuracy : (float, 'a) Nx.t -> ('b, 'c) Nx.t -> float\n(** [accuracy predictions targets] is the fraction of correct predictions.\n\n    Multi-class: [predictions] has shape [[batch; num_classes]] (logits or\n    probabilities), [targets] has shape [[batch]] (integer class indices).\n    Predicted class is [argmax] along the last axis.\n\n    Binary: both tensors have shape [[batch]] or [[batch; 1]]. Predictions above\n    [0.5] count as class [1]. *)\n\nval binary_accuracy :\n  ?threshold:float -> (float, 'a) Nx.t -> (float, 'a) Nx.t -> float\n(** [binary_accuracy ?threshold predictions targets] is the fraction of correct\n    binary predictions.\n\n    [threshold] defaults to [0.5]. Predictions above [threshold] count as class\n    [1]; targets are expected in \\[[0];[1]\\]. *)\n\n(** {1:classification Classification} *)\n\nval precision : average -> (float, 'a) Nx.t -> ('b, 'c) Nx.t -> float\n(** [precision avg predictions targets] is the precision score.\n\n    [predictions] has shape [[batch; num_classes]] (logits or probabilities).\n    [targets] has shape [[batch]] (integer class indices). Predicted class is\n    [argmax] along the last axis.\n\n    When a class has no predicted instances, its precision is [0.0]. 
*)\n\nval recall : average -> (float, 'a) Nx.t -> ('b, 'c) Nx.t -> float\n(** [recall avg predictions targets] is the recall score.\n\n    Input convention is the same as {!precision}.\n\n    When a class has no true instances, its recall is [0.0]. *)\n\nval f1 : average -> (float, 'a) Nx.t -> ('b, 'c) Nx.t -> float\n(** [f1 avg predictions targets] is the F1 score (harmonic mean of {!precision}\n    and {!recall}).\n\n    Input convention is the same as {!precision}.\n\n    When both precision and recall are [0.0] for a class, its F1 is [0.0]. *)\n"
  },
  {
    "path": "packages/kaun/lib/optim.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nmodule Dtype = Nx_core.Dtype\n\n(* Helpers *)\n\nlet err_expected_float_dtype = \"Optim: expected floating-point dtype\"\n\nlet float_of_scalar (type a b) (dtype : (a, b) Dtype.t) (value : a) : float =\n  match dtype with\n  | Dtype.Float16 ->\n      let value : float = value in\n      value\n  | Dtype.Float32 ->\n      let value : float = value in\n      value\n  | Dtype.Float64 ->\n      let value : float = value in\n      value\n  | Dtype.BFloat16 ->\n      let value : float = value in\n      value\n  | Dtype.Float8_e4m3 ->\n      let value : float = value in\n      value\n  | Dtype.Float8_e5m2 ->\n      let value : float = value in\n      value\n  | _ -> invalid_arg err_expected_float_dtype\n\nlet scalar dt x = Nx.scalar dt (Dtype.of_float dt x)\n\nlet tensor_sum_sq (Ptree.P t) =\n  let dtype = Nx.dtype t in\n  let sq = Nx.mul t t in\n  float_of_scalar dtype (Nx.item [] (Nx.sum sq))\n\n(* Per-leaf packed Vega state with captured dtype for type unification *)\n\ntype packed_vega_state =\n  | PVS : {\n      dtype : ('a, 'b) Dtype.t;\n      st : ('a, 'b) Vega.state;\n    }\n      -> packed_vega_state\n\n(* State *)\n\ntype state = { tx : Vega.t; leaf_states : packed_vega_state array }\n\n(* Init *)\n\nlet init tx params =\n  let leaves, _ = Ptree.flatten params in\n  let leaf_states =\n    Array.of_list\n      (List.map\n         (fun pt ->\n           Ptree.with_tensor pt\n             {\n               run = (fun t -> PVS { dtype = Nx.dtype t; st = Vega.init tx t });\n             })\n         leaves)\n  in\n  { tx; leaf_states }\n\n(* Update: returns updates tree (not new params) *)\n\nlet update st params grads =\n  let param_leaves, rebuild = Ptree.flatten params in\n  let grad_leaves, _ = 
Ptree.flatten grads in\n  let n = Array.length st.leaf_states in\n  (* Empty parameter tree: nothing to update. Guard before [List.hd] and\n     [st.leaf_states.(0)], which would raise on empty inputs. *)\n  if n = 0 then (rebuild [], st)\n  else\n    let update_packed = Array.make n (List.hd param_leaves) in\n    let new_leaf_states = Array.make n st.leaf_states.(0) in\n    List.iteri\n      (fun i param_pt ->\n        let grad_pt = List.nth grad_leaves i in\n        let (PVS { dtype = dt; st = vega_st }) = st.leaf_states.(i) in\n        let param_t = Ptree.Tensor.to_typed_exn dt param_pt in\n        let grad_t = Ptree.Tensor.to_typed_exn dt grad_pt in\n        let upd, new_vega_st =\n          Vega.update vega_st ~grad:grad_t ~param:param_t\n        in\n        update_packed.(i) <- Ptree.P upd;\n        new_leaf_states.(i) <- PVS { dtype = dt; st = new_vega_st })\n      param_leaves;\n    let updates = rebuild (Array.to_list update_packed) in\n    (updates, { tx = st.tx; leaf_states = new_leaf_states })\n\n(* Apply updates: add updates to params *)\n\nlet apply_updates params updates =\n  Ptree.map2 { run = (fun param upd -> Nx.add param upd) } params updates\n\n(* Step: convenience for update + apply_updates *)\n\nlet step st params grads =\n  let updates, new_st = update st params grads in\n  let new_params = apply_updates params updates in\n  (new_params, new_st)\n\n(* Serialization *)\n\nlet state_to_trees st =\n  let n = Array.length st.leaf_states in\n  if n = 0 then (0, [])\n  else\n    (* Get count from first leaf (all leaves share the same count) *)\n    let (PVS { st = first_st; _ }) = st.leaf_states.(0) in\n    let count, _ = Vega.state_to_tensors first_st in\n    (* Extract per-leaf tensor arrays *)\n    let per_leaf_tensors = Array.make n [||] in\n    for i = 0 to n - 1 do\n      let (PVS { st = vega_st; _ }) = st.leaf_states.(i) in\n      let _, tensors = Vega.state_to_tensors vega_st in\n      per_leaf_tensors.(i) <- Array.map (fun t -> Ptree.P t) tensors\n    done;\n    (* Determine number of state tensors per leaf *)\n    let n_tensors = Array.length per_leaf_tensors.(0) in\n    if n_tensors = 0 then (count, [])\n    else\n      (* Transpose: per-leaf x per-tensor -> 
per-tensor x per-leaf *)\n      let tensor_trees =\n        List.init n_tensors (fun m ->\n            let leaves =\n              List.init n (fun i -> Ptree.Tensor per_leaf_tensors.(i).(m))\n            in\n            Ptree.List leaves)\n      in\n      (count, tensor_trees)\n\nlet state_of_trees tx ~count trees =\n  let n_trees = List.length trees in\n  let expected_tensors = Vega.n_tensors tx in\n  if n_trees <> expected_tensors then\n    invalid_arg\n      (Printf.sprintf \"Optim.state_of_trees: expected %d moment trees, got %d\"\n         expected_tensors n_trees);\n  if n_trees = 0 then { tx; leaf_states = [||] }\n  else\n    let first_tree = List.hd trees in\n    let first_items = Ptree.List.items_exn first_tree in\n    let n_leaves = List.length first_items in\n    (* Collect per-tensor leaf lists *)\n    let tensor_leaves =\n      List.map\n        (fun tree -> List.map Ptree.as_tensor_exn (Ptree.List.items_exn tree))\n        trees\n    in\n    (* Transpose: per-tensor x per-leaf -> per-leaf x per-tensor *)\n    let leaf_states =\n      Array.init n_leaves (fun i ->\n          let leaf_tensors =\n            List.map (fun moment -> List.nth moment i) tensor_leaves\n          in\n          (* Use the first tensor's dtype as reference *)\n          let ref_pt = List.hd leaf_tensors in\n          Ptree.with_tensor ref_pt\n            {\n              run =\n                (fun ref_t ->\n                  let dt = Nx.dtype ref_t in\n                  let typed_tensors =\n                    Array.of_list\n                      (List.map (Ptree.Tensor.to_typed_exn dt) leaf_tensors)\n                  in\n                  let vega_st = Vega.state_of_tensors tx ~count typed_tensors in\n                  PVS { dtype = dt; st = vega_st });\n            })\n    in\n    { tx; leaf_states }\n\n(* Gradient utilities *)\n\nlet global_norm t =\n  let sum_sq = Ptree.fold (fun acc p -> acc +. tensor_sum_sq p) 0. 
t in\n  sqrt sum_sq\n\nlet clip_by_global_norm max_norm grads =\n  let norm = global_norm grads in\n  if norm <= max_norm then grads\n  else\n    let scale = max_norm /. norm in\n    Ptree.map\n      {\n        run =\n          (fun t ->\n            let dt = Nx.dtype t in\n            Nx.mul t (scalar dt scale));\n      }\n      grads\n"
  },
  {
    "path": "packages/kaun/lib/optim.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Parameter-tree optimizer.\n\n    Bridges {!Vega} with {!Ptree}: applies per-parameter optimizer steps across\n    all leaves of a heterogeneous parameter tree. *)\n\n(** {1:types Types} *)\n\ntype state\n(** Optimizer state for a parameter tree. Packs per-leaf {!Vega} states. *)\n\n(** {1:core Core} *)\n\nval init : Vega.t -> Ptree.t -> state\n(** [init tx params] initializes optimizer state for all leaves of [params]. *)\n\nval update : state -> Ptree.t -> Ptree.t -> Ptree.t * state\n(** [update state params grads] returns [(updates, new_state)].\n\n    Applies {!Vega.update} to each matching leaf pair. The returned [updates]\n    tree has the same structure as [params] and can be applied via\n    {!apply_updates}. *)\n\nval apply_updates : Ptree.t -> Ptree.t -> Ptree.t\n(** [apply_updates params updates] adds [updates] to [params] element-wise\n    across all leaves. *)\n\nval step : state -> Ptree.t -> Ptree.t -> Ptree.t * state\n(** [step state params grads] returns [(new_params, new_state)].\n\n    Convenience for:\n    {[\n    let updates, state = update state params grads in\n    (apply_updates params updates, state)\n    ]} *)\n\n(** {1:serialization Serialization} *)\n\nval state_to_trees : state -> int * Ptree.t list\n(** [state_to_trees st] is [(count, trees)] where [count] is the optimizer step\n    count and [trees] are the internal state as parameter trees.\n\n    Transforms with no state tensors return an empty list. 
*)\n\nval state_of_trees : Vega.t -> count:int -> Ptree.t list -> state\n(** [state_of_trees tx ~count trees] reconstructs optimizer state from a\n    transformation, step count, and serialized trees.\n\n    Raises [Invalid_argument] if the number of trees does not match the\n    transformation's expectation. *)\n\n(** {1:grad Gradient Utilities} *)\n\nval clip_by_global_norm : float -> Ptree.t -> Ptree.t\n(** [clip_by_global_norm max_norm grads] rescales [grads] so their global L2\n    norm does not exceed [max_norm]. Returns [grads] unchanged if the norm is\n    already within bounds.\n\n    Raises [Invalid_argument] if a leaf tensor is not floating point. *)\n\nval global_norm : Ptree.t -> float\n(** [global_norm t] is the L2 norm across all leaf tensors of [t].\n\n    Raises [Invalid_argument] if a leaf tensor is not floating point. *)\n"
  },
  {
    "path": "packages/kaun/lib/ptree.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\ntype tensor = P : ('a, 'layout) Nx.t -> tensor\ntype t = Tensor of tensor | List of t list | Dict of (string * t) list\n\nlet invalid_argf fmt = Printf.ksprintf invalid_arg fmt\n\nlet invalid_arg_ctx ?ctx msg =\n  match ctx with\n  | None -> invalid_arg msg\n  | Some ctx -> invalid_argf \"%s: %s\" ctx msg\n\nlet expected ?ctx what = invalid_arg_ctx ?ctx (\"expected \" ^ what)\n\nlet key_not_found ?ctx key =\n  match ctx with\n  | None -> invalid_argf \"key %S not found\" key\n  | Some ctx -> invalid_argf \"%s: key %S not found\" ctx key\n\nlet tensor x = Tensor (P x)\nlet list xs = List xs\nlet empty = List []\n\nlet validate_key key =\n  if String.length key = 0 then invalid_arg \"empty key\";\n  for i = 0 to String.length key - 1 do\n    let c = String.unsafe_get key i in\n    match c with\n    | '.' 
| '[' | ']' ->\n        invalid_argf\n          \"key %S contains reserved character %C (keys must not contain '.', \\\n           '[', or ']')\"\n          key c\n    | _ -> ()\n  done\n\nlet dict kvs =\n  let tbl = Hashtbl.create (Stdlib.List.length kvs) in\n  Stdlib.List.iter\n    (fun (k, _) ->\n      validate_key k;\n      if Hashtbl.mem tbl k then invalid_argf \"duplicate key %S\" k\n      else Hashtbl.add tbl k ())\n    kvs;\n  Dict kvs\n\nmodule Tensor = struct\n  let dtype (P t) = Nx_core.Dtype.pack (Nx.dtype t)\n  let shape (P t) = Nx.shape t\n  let numel (P t) = Nx.numel t\n\n  let to_typed (type a l) (dtype : (a, l) Nx.dtype) (P t) : (a, l) Nx.t option =\n    match Nx_core.Dtype.equal_witness (Nx.dtype t) dtype with\n    | Some Type.Equal -> Some t\n    | None -> None\n\n  let to_typed_exn (type a l) (dtype : (a, l) Nx.dtype) (P t) : (a, l) Nx.t =\n    match Nx_core.Dtype.equal_witness (Nx.dtype t) dtype with\n    | Some Type.Equal -> t\n    | None ->\n        invalid_argf \"dtype mismatch: expected %s, got %s\"\n          (Nx_core.Dtype.to_string dtype)\n          (Nx_core.Dtype.to_string (Nx.dtype t))\nend\n\nmodule Dict = struct\n  type fields = (string * t) list\n\n  let fields_exn ?ctx t =\n    match t with Dict kvs -> kvs | _ -> expected ?ctx \"Dict\"\n\n  let find key fields = Stdlib.List.assoc_opt key fields\n\n  let find_exn ?ctx key fields =\n    match find key fields with Some v -> v | None -> key_not_found ?ctx key\n\n  let get_tensor_exn fields ~name dtype =\n    match find_exn name fields with\n    | Tensor p -> Tensor.to_typed_exn dtype p\n    | _ -> invalid_argf \"field %S is not a tensor\" name\nend\n\nmodule List = struct\n  let items_exn ?ctx t =\n    match t with List xs -> xs | _ -> expected ?ctx \"List\"\nend\n\ntype 'r tensor_handler = { run : 'a 'layout. ('a, 'layout) Nx.t -> 'r }\n\ntype map_handler = {\n  run : 'a 'layout. ('a, 'layout) Nx.t -> ('a, 'layout) Nx.t;\n}\n\ntype map2_handler = {\n  run :\n    'a 'layout. 
('a, 'layout) Nx.t -> ('a, 'layout) Nx.t -> ('a, 'layout) Nx.t;\n}\n\nlet with_tensor (P t) (handler : _ tensor_handler) = handler.run t\n\nlet as_tensor_exn ?ctx t =\n  match t with Tensor p -> p | _ -> expected ?ctx \"Tensor\"\n\nlet map (f : map_handler) t =\n  let rec go = function\n    | Tensor (P x) -> Tensor (P (f.run x))\n    | List xs -> List (Stdlib.List.map go xs)\n    | Dict kvs -> Dict (Stdlib.List.map (fun (k, v) -> (k, go v)) kvs)\n  in\n  go t\n\nlet map2 (f : map2_handler) a b =\n  let rec go a b =\n    match (a, b) with\n    | Tensor (P x), Tensor (P y) -> (\n        match Nx_core.Dtype.equal_witness (Nx.dtype x) (Nx.dtype y) with\n        | Some Type.Equal -> Tensor (P (f.run x y))\n        | None -> invalid_arg \"dtype mismatch\")\n    | List xs, List ys ->\n        if Stdlib.List.length xs <> Stdlib.List.length ys then\n          invalid_arg \"list length mismatch\";\n        List (Stdlib.List.map2 go xs ys)\n    | Dict kvs1, Dict kvs2 ->\n        if Stdlib.List.length kvs1 <> Stdlib.List.length kvs2 then\n          invalid_arg \"dict size mismatch\";\n        Dict\n          (Stdlib.List.map\n             (fun (k, v1) ->\n               match Stdlib.List.assoc_opt k kvs2 with\n               | Some v2 -> (k, go v1 v2)\n               | None -> invalid_argf \"key %S not found in second dict\" k)\n             kvs1)\n    | _ -> invalid_arg \"structure mismatch\"\n  in\n  go a b\n\nlet iter f t =\n  let rec go = function\n    | Tensor p -> f p\n    | List xs -> Stdlib.List.iter go xs\n    | Dict kvs -> Stdlib.List.iter (fun (_, v) -> go v) kvs\n  in\n  go t\n\nlet fold f acc t =\n  let rec go acc = function\n    | Tensor p -> f acc p\n    | List xs -> Stdlib.List.fold_left go acc xs\n    | Dict kvs -> Stdlib.List.fold_left (fun acc (_, v) -> go acc v) acc kvs\n  in\n  go acc t\n\nlet flatten t =\n  let tensors = ref [] in\n  iter (fun p -> tensors := p :: !tensors) t;\n  let tensors = Stdlib.List.rev !tensors in\n  let rebuild new_tensors =\n    
let remaining = ref new_tensors in\n    let take () =\n      match !remaining with\n      | [] -> invalid_arg \"not enough tensors to rebuild tree\"\n      | x :: rest ->\n          remaining := rest;\n          x\n    in\n    let rec go = function\n      | Tensor _ -> Tensor (take ())\n      | List xs -> List (Stdlib.List.map go xs)\n      | Dict kvs -> Dict (Stdlib.List.map (fun (k, v) -> (k, go v)) kvs)\n    in\n    let result = go t in\n    (match !remaining with\n    | [] -> ()\n    | _ -> invalid_arg \"too many tensors to rebuild tree\");\n    result\n  in\n  (tensors, rebuild)\n\nlet flatten_with_paths t =\n  let join prefix seg = if prefix = \"\" then seg else prefix ^ \".\" ^ seg in\n  let acc = ref [] in\n  let rec go prefix = function\n    | Tensor p -> acc := (prefix, p) :: !acc\n    | List xs ->\n        Stdlib.List.iteri (fun i v -> go (join prefix (string_of_int i)) v) xs\n    | Dict kvs -> Stdlib.List.iter (fun (k, v) -> go (join prefix k) v) kvs\n  in\n  go \"\" t;\n  Stdlib.List.rev !acc\n\nlet zeros_like t = map { run = Nx.zeros_like } t\nlet count_parameters t = fold (fun acc p -> acc + Tensor.numel p) 0 t\n\nlet pp_shape shape =\n  Stdlib.String.concat \"x\"\n    (Stdlib.Array.to_list (Stdlib.Array.map string_of_int shape))\n\nlet rec pp_with_indent indent ppf = function\n  | Tensor p ->\n      with_tensor p\n        {\n          run =\n            (fun t ->\n              Format.fprintf ppf \"Tensor(%s, %s)\"\n                (Nx_core.Dtype.to_string (Nx.dtype t))\n                (pp_shape (Nx.shape t)));\n        }\n  | List [] -> Format.pp_print_string ppf \"List []\"\n  | List xs ->\n      let next_indent = indent ^ \"  \" in\n      Format.pp_print_string ppf \"List [\";\n      Stdlib.List.iter\n        (fun v ->\n          Format.pp_print_char ppf '\\n';\n          Format.pp_print_string ppf next_indent;\n          pp_with_indent next_indent ppf v)\n        xs;\n      Format.pp_print_char ppf '\\n';\n      Format.pp_print_string ppf 
indent;\n      Format.pp_print_char ppf ']'\n  | Dict [] -> Format.pp_print_string ppf \"Dict {}\"\n  | Dict kvs ->\n      let next_indent = indent ^ \"  \" in\n      Format.pp_print_string ppf \"Dict {\";\n      Stdlib.List.iter\n        (fun (k, v) ->\n          Format.pp_print_char ppf '\\n';\n          Format.pp_print_string ppf next_indent;\n          Format.pp_print_string ppf k;\n          Format.pp_print_string ppf \": \";\n          pp_with_indent next_indent ppf v)\n        kvs;\n      Format.pp_print_char ppf '\\n';\n      Format.pp_print_string ppf indent;\n      Format.pp_print_char ppf '}'\n\nlet pp ppf t = pp_with_indent \"\" ppf t\n"
  },
  {
    "path": "packages/kaun/lib/ptree.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Heterogeneous parameter trees.\n\n    A parameter tree is a finite tree with tensor leaves and container nodes.\n    Leaves are packed tensors ({!tensor}), and containers are either ordered\n    lists ([List]) or string-keyed dicts ([Dict]). *)\n\ntype tensor =\n  | P : ('a, 'layout) Nx.t -> tensor\n      (** A packed tensor. The wrapper hides dtype and layout parameters. *)\n\ntype t =\n  | Tensor of tensor  (** A tensor leaf. *)\n  | List of t list  (** An ordered list branch. *)\n  | Dict of (string * t) list  (** A dict branch. Keys are strings. *)\n\n(** {1:constructors Constructors} *)\n\nval tensor : ('a, 'layout) Nx.t -> t\n(** [tensor x] is [Tensor (P x)]. *)\n\nval list : t list -> t\n(** [list xs] is [List xs]. *)\n\nval dict : (string * t) list -> t\n(** [dict kvs] is [Dict kvs] with key validation.\n\n    Raises [Invalid_argument] if a key is empty, duplicated, or contains ['.'],\n    ['\\['], or ['\\]']. *)\n\nval empty : t\n(** [empty] is [List []]. Canonical value for \"no parameters\" or \"no state\". *)\n\n(** {1:tensor Tensor Inspection} *)\n\nmodule Tensor : sig\n  val dtype : tensor -> Nx_core.Dtype.packed\n  (** [dtype t] is [t]'s dtype. *)\n\n  val shape : tensor -> int array\n  (** [shape t] is [t]'s shape. *)\n\n  val numel : tensor -> int\n  (** [numel t] is the number of elements in [t]. *)\n\n  val to_typed : ('a, 'l) Nx.dtype -> tensor -> ('a, 'l) Nx.t option\n  (** [to_typed dtype t] is [Some x] iff [t] has dtype [dtype], with [x] the\n      typed tensor. It is [None] on dtype mismatch. 
*)\n\n  val to_typed_exn : ('a, 'l) Nx.dtype -> tensor -> ('a, 'l) Nx.t\n  (** [to_typed_exn dtype t] is the typed tensor in [t].\n\n      Raises [Invalid_argument] if [t]'s dtype is not [dtype]. *)\nend\n\n(** {1:dict Dict Access} *)\n\nmodule Dict : sig\n  type fields = (string * t) list\n  (** The type for dict fields. *)\n\n  val fields_exn : ?ctx:string -> t -> fields\n  (** [fields_exn ?ctx t] is [t]'s fields.\n\n      Raises [Invalid_argument] if [t] is not [Dict _]. The optional [ctx] is\n      prefixed to the error message. *)\n\n  val find : string -> fields -> t option\n  (** [find name fields] is [Some v] if [name] is bound in [fields], and [None]\n      otherwise. *)\n\n  val find_exn : ?ctx:string -> string -> fields -> t\n  (** [find_exn ?ctx name fields] is [name]'s value in [fields].\n\n      Raises [Invalid_argument] if [name] is missing. The optional [ctx] is\n      prefixed to the error message. *)\n\n  val get_tensor_exn :\n    fields -> name:string -> ('a, 'l) Nx_core.Dtype.t -> ('a, 'l) Nx.t\n  (** [get_tensor_exn fields ~name dtype] is the typed tensor in [fields] under\n      [name].\n\n      Raises [Invalid_argument] if [name] is missing, [name] is not a tensor, or\n      the tensor dtype differs from [dtype]. *)\nend\n\n(** {1:list List Access} *)\n\nmodule List : sig\n  val items_exn : ?ctx:string -> t -> t list\n  (** [items_exn ?ctx t] is [t]'s items.\n\n      Raises [Invalid_argument] if [t] is not [List _]. The optional [ctx] is\n      prefixed to the error message. *)\nend\n\n(** {1:leaf Leaf Access} *)\n\ntype 'r tensor_handler = { run : 'a 'layout. ('a, 'layout) Nx.t -> 'r }\n(** Rank-2 handler for unpacking {!tensor}. *)\n\nval with_tensor : tensor -> 'a tensor_handler -> 'a\n(** [with_tensor t h] applies [h.run] to the unpacked tensor in [t]. *)\n\nval as_tensor_exn : ?ctx:string -> t -> tensor\n(** [as_tensor_exn ?ctx t] is [t]'s packed tensor.\n\n    Raises [Invalid_argument] if [t] is not [Tensor _]. 
The optional [ctx] is\n    prefixed to the error message. *)\n\n(** {1:functional Functional Operations} *)\n\ntype map_handler = {\n  run : 'a 'layout. ('a, 'layout) Nx.t -> ('a, 'layout) Nx.t;\n}\n(** Rank-2 tensor mapper used by {!map}. *)\n\nval map : map_handler -> t -> t\n(** [map f t] maps [f.run] over tensor leaves and preserves tree structure. *)\n\ntype map2_handler = {\n  run :\n    'a 'layout. ('a, 'layout) Nx.t -> ('a, 'layout) Nx.t -> ('a, 'layout) Nx.t;\n}\n(** Rank-2 tensor zipper used by {!map2}. *)\n\nval map2 : map2_handler -> t -> t -> t\n(** [map2 f a b] zips [a] and [b] and applies [f.run] to paired tensor leaves.\n\n    Lists are matched by position. Dict nodes are matched by key using [a]'s key\n    order.\n\n    Raises [Invalid_argument] on structure mismatch, list or dict size mismatch,\n    missing keys in [b], or paired dtype mismatch. *)\n\nval iter : (tensor -> unit) -> t -> unit\n(** [iter f t] applies [f] to each leaf tensor in depth-first order.\n\n    Leaves are visited left-to-right in list order and dict field order. *)\n\nval fold : ('acc -> tensor -> 'acc) -> 'acc -> t -> 'acc\n(** [fold f acc t] folds leaf tensors in the same traversal order as {!iter}. *)\n\n(** {1:flatten Flatten and Rebuild} *)\n\nval flatten : t -> tensor list * (tensor list -> t)\n(** [flatten t] is [(leaves, rebuild)] where:\n    - [leaves] are [t]'s leaf tensors in depth-first order;\n    - [rebuild new_leaves] rebuilds [t]'s structure with [new_leaves].\n\n    [rebuild] raises [Invalid_argument] if [new_leaves] has a different length\n    than [leaves]. *)\n\nval flatten_with_paths : t -> (string * tensor) list\n(** [flatten_with_paths t] returns [(path, tensor)] pairs where paths are\n    dot-separated strings. Dict keys become path segments; list indices become\n    decimal segments (e.g. 
[\"layers.0.weight\"]).\n\n    If [t] is a tensor leaf, its path is the empty string.\n\n    The path encoding is injective for trees built with {!dict}, because {!dict}\n    rejects keys containing ['.'], ['\\['], or ['\\]']. *)\n\n(** {1:utils Utilities} *)\n\nval zeros_like : t -> t\n(** [zeros_like t] has the same structure as [t], with each tensor replaced by\n    [Nx.zeros_like]. *)\n\nval count_parameters : t -> int\n(** [count_parameters t] is the sum of {!Tensor.numel} over all leaf tensors. *)\n\nval pp : Format.formatter -> t -> unit\n(** [pp] formats trees for debugging. *)\n"
  },
  {
    "path": "packages/kaun/lib/train.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\ntype ('i, 'o) t = { model : ('i, 'o) Layer.t; optimizer : Vega.t }\ntype 'l state = { vars : 'l Layer.vars; opt_state : Optim.state }\n\nlet make ~model ~optimizer = { model; optimizer }\n\nlet init t ~dtype =\n  let vars = Layer.init t.model ~dtype in\n  let opt_state = Optim.init t.optimizer (Layer.params vars) in\n  { vars; opt_state }\n\nlet vars st = st.vars\n\nlet make_state t vars =\n  let opt_state = Optim.init t.optimizer (Layer.params vars) in\n  { vars; opt_state }\n\nlet step (type i o l in_elt) (t : (i, o) t) (st : l state) ~training ?ctx\n    ~(loss : (o, l) Nx.t -> (float, l) Nx.t) (x : (i, in_elt) Nx.t) =\n  let loss_val, grads, new_layer_state =\n    Grad.value_and_grad_aux\n      (fun params ->\n        let vars' = Layer.with_params st.vars params in\n        let pred, vars'' = Layer.apply t.model vars' ~training ?ctx x in\n        (loss pred, Layer.state vars''))\n      (Layer.params st.vars)\n  in\n  let new_params, opt_state =\n    Optim.step st.opt_state (Layer.params st.vars) grads\n  in\n  let vars =\n    Layer.with_params st.vars new_params |> fun v ->\n    Layer.with_state v new_layer_state\n  in\n  (loss_val, { vars; opt_state })\n\nexception Early_stop\n\nlet fit (type i o l in_elt) (t : (i, o) t) (st : l state) ?ctx ?report\n    (data : ((i, in_elt) Nx.t * ((o, l) Nx.t -> (float, l) Nx.t)) Data.t) =\n  let st = ref st in\n  let i = ref 0 in\n  (try\n     Data.iter\n       (fun (x, loss) ->\n         incr i;\n         let loss_val, st' = step t !st ~training:true ?ctx ~loss x in\n         st := st';\n         match report with\n         | Some f -> f ~step:!i ~loss:(Nx.item [] loss_val) !st\n         | None -> ())\n       data\n   with Early_stop -> ());\n  !st\n\nlet 
predict (type i o l in_elt) (t : (i, o) t) (st : l state) ?ctx\n    (x : (i, in_elt) Nx.t) =\n  let y, _ = Layer.apply t.model st.vars ~training:false ?ctx x in\n  y\n"
  },
  {
    "path": "packages/kaun/lib/train.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** High-level training loop.\n\n    {!Train} composes {!Layer}, {!Grad}, and {!Optim} into a single training\n    driver. Users never touch parameter trees, optimizer state, or gradient\n    computation directly.\n\n    For advanced use, {!step} exposes a single training step and {!vars} gives\n    access to the underlying model variables. *)\n\n(** {1:types Types} *)\n\ntype ('i, 'o) t\n(** The type for trainers. A trainer pairs a model with an optimizer. *)\n\ntype 'l state\n(** The type for training state. Bundles model variables and optimizer state. *)\n\n(** {1:core Core} *)\n\nval make : model:('i, 'o) Layer.t -> optimizer:Vega.t -> ('i, 'o) t\n(** [make ~model ~optimizer] creates a trainer. *)\n\nval init : ('i, 'o) t -> dtype:(float, 'l) Nx.dtype -> 'l state\n(** [init trainer ~dtype] initializes model variables and optimizer state.\n\n    Random keys for weight initialization are drawn from the implicit RNG scope.\n*)\n\nval vars : 'l state -> 'l Layer.vars\n(** [vars st] is the current model variables (params + state + dtype). *)\n\nval make_state : ('i, 'o) t -> 'l Layer.vars -> 'l state\n(** [make_state trainer vars] is a training state with [vars] and freshly\n    initialized optimizer state.\n\n    Use this to start training from pretrained or externally loaded weights\n    instead of {!init}. *)\n\n(** {1:training Training} *)\n\nexception Early_stop\n(** Raise inside [report] to end training early. {!fit} catches this exception\n    and returns the current state. 
*)\n\nval step :\n  ('i, 'o) t ->\n  'l state ->\n  training:bool ->\n  ?ctx:Context.t ->\n  loss:(('o, 'l) Nx.t -> (float, 'l) Nx.t) ->\n  ('i, 'in_elt) Nx.t ->\n  (float, 'l) Nx.t * 'l state\n(** [step trainer st ~training ?ctx ~loss x] performs one training step.\n\n    Computes the forward pass, differentiates the loss with respect to trainable\n    parameters, applies the optimizer, and threads updated layer state.\n\n    [ctx] is forwarded to the model's forward pass. See {!Context}.\n\n    When [training = false], gradients are still computed and the optimizer is\n    still applied. Use {!predict} for pure inference. *)\n\nval fit :\n  ('i, 'o) t ->\n  'l state ->\n  ?ctx:Context.t ->\n  ?report:(step:int -> loss:float -> 'l state -> unit) ->\n  (('i, 'in_elt) Nx.t * (('o, 'l) Nx.t -> (float, 'l) Nx.t)) Data.t ->\n  'l state\n(** [fit trainer st ?ctx ?report data] trains the model over [data] and returns\n    the final state.\n\n    Each element of [data] is a pair [(x, loss_fn)] where [x] is the input\n    tensor and [loss_fn] computes the scalar loss from the model output. This\n    allows the loss to depend on per-batch labels.\n\n    [ctx] is forwarded to the model's forward pass on each step. See {!Context}.\n\n    When provided, [report] is called after every step with the step number\n    (1-based), scalar loss, and training state. Raise {!Early_stop} inside\n    [report] to end training early.\n\n    For fixed-data training (same input every step), use {!Data.repeat}:\n    {[\n    Train.fit trainer st (Data.repeat 1000 (x, loss_fn))\n    ]} *)\n\n(** {1:inference Inference} *)\n\nval predict :\n  ('i, 'o) t ->\n  'l state ->\n  ?ctx:Context.t ->\n  ('i, 'in_elt) Nx.t ->\n  ('o, 'l) Nx.t\n(** [predict trainer st ?ctx x] runs the model in evaluation mode (no state\n    updates, no dropout).\n\n    [ctx] is forwarded to the model's forward pass. See {!Context}. *)\n"
  },
  {
    "path": "packages/kaun/test/dune",
    "content": "(tests\n (names\n  test_ptree\n  test_init\n  test_loss\n  test_layer\n  test_fn\n  test_optim\n  test_grad\n  test_attention\n  test_data\n  test_train\n  test_metric\n  test_checkpoint)\n (package kaun)\n (libraries rune vega nx nx.core nx.io kaun windtrap))\n"
  },
  {
    "path": "packages/kaun/test/test_attention.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Windtrap\nmodule Attention = Kaun.Attention\nmodule Layer = Kaun.Layer\nmodule Ptree = Kaun.Ptree\n\nlet dtype = Nx.float32\n\n(* Init *)\n\nlet test_init_param_shapes () =\n  Nx.Rng.run ~seed:42 @@ fun () ->\n  let m = Attention.multi_head_attention ~embed_dim:64 ~num_heads:4 () in\n  let vars = Layer.init m ~dtype in\n  let fields = Ptree.Dict.fields_exn (Layer.params vars) in\n  let shape name =\n    Array.to_list (Nx.shape (Ptree.Dict.get_tensor_exn fields ~name dtype))\n  in\n  equal ~msg:\"q_proj shape\" (list int) [ 64; 64 ] (shape \"q_proj\");\n  equal ~msg:\"k_proj shape\" (list int) [ 64; 64 ] (shape \"k_proj\");\n  equal ~msg:\"v_proj shape\" (list int) [ 64; 64 ] (shape \"v_proj\");\n  equal ~msg:\"out_proj shape\" (list int) [ 64; 64 ] (shape \"out_proj\")\n\nlet test_init_gqa_shapes () =\n  Nx.Rng.run ~seed:42 @@ fun () ->\n  let m =\n    Attention.multi_head_attention ~embed_dim:64 ~num_heads:8 ~num_kv_heads:2 ()\n  in\n  let vars = Layer.init m ~dtype in\n  let fields = Ptree.Dict.fields_exn (Layer.params vars) in\n  let shape name =\n    Array.to_list (Nx.shape (Ptree.Dict.get_tensor_exn fields ~name dtype))\n  in\n  let head_dim = 64 / 8 in\n  equal ~msg:\"q_proj shape\" (list int) [ 64; 8 * head_dim ] (shape \"q_proj\");\n  equal ~msg:\"k_proj shape\" (list int) [ 64; 2 * head_dim ] (shape \"k_proj\");\n  equal ~msg:\"v_proj shape\" (list int) [ 64; 2 * head_dim ] (shape \"v_proj\");\n  equal ~msg:\"out_proj shape\" (list int) [ 64; 64 ] (shape \"out_proj\")\n\n(* Forward *)\n\nlet test_forward_shape () =\n  Nx.Rng.run ~seed:42 @@ fun () ->\n  let m = Attention.multi_head_attention ~embed_dim:64 ~num_heads:4 () in\n  let vars = Layer.init m ~dtype in\n  let x = Nx.randn 
dtype [| 2; 8; 64 |] in\n  let y, _vars' = Layer.apply m vars ~training:false x in\n  equal ~msg:\"output shape\" (list int) [ 2; 8; 64 ] (Array.to_list (Nx.shape y))\n\nlet test_forward_gqa () =\n  Nx.Rng.run ~seed:42 @@ fun () ->\n  let m =\n    Attention.multi_head_attention ~embed_dim:64 ~num_heads:8 ~num_kv_heads:2 ()\n  in\n  let vars = Layer.init m ~dtype in\n  let x = Nx.randn dtype [| 2; 8; 64 |] in\n  let y, _vars' = Layer.apply m vars ~training:false x in\n  equal ~msg:\"GQA output shape\" (list int) [ 2; 8; 64 ]\n    (Array.to_list (Nx.shape y))\n\nlet test_causal_differs () =\n  Nx.Rng.run ~seed:7 @@ fun () ->\n  let m_causal =\n    Attention.multi_head_attention ~embed_dim:32 ~num_heads:2 ~is_causal:true ()\n  in\n  let m_non_causal =\n    Attention.multi_head_attention ~embed_dim:32 ~num_heads:2 ~is_causal:false\n      ()\n  in\n  (* Share one set of parameters so the two outputs differ only through the\n     causal mask; with independent inits the assertion would pass vacuously. *)\n  let vars = Layer.init m_causal ~dtype in\n  let x = Nx.randn dtype [| 1; 6; 32 |] in\n  let y_causal, _ = Layer.apply m_causal vars ~training:false x in\n  let y_non_causal, _ = Layer.apply m_non_causal vars ~training:false x in\n  let sum_causal = Nx.item [] (Nx.sum y_causal) in\n  let sum_non_causal = Nx.item [] (Nx.sum y_non_causal) in\n  is_true ~msg:\"causal vs non-causal differ\"\n    (Float.abs (sum_causal -. 
sum_non_causal) > 1e-6)\n\n(* RoPE *)\n\nlet test_rope_preserves_shape () =\n  Nx.Rng.run ~seed:0 @@ fun () ->\n  let x = Nx.randn dtype [| 2; 4; 8; 16 |] in\n  let y = Attention.rope x in\n  equal ~msg:\"rope output shape\" (list int) [ 2; 4; 8; 16 ]\n    (Array.to_list (Nx.shape y))\n\nlet test_rope_changes_values () =\n  Nx.Rng.run ~seed:0 @@ fun () ->\n  let x = Nx.randn dtype [| 1; 2; 4; 8 |] in\n  let y = Attention.rope x in\n  let diff = Nx.item [] (Nx.sum (Nx.abs (Nx.sub x y))) in\n  is_true ~msg:\"rope changes values\" (diff > 0.0)\n\nlet test_rope_seq_dim () =\n  Nx.Rng.run ~seed:0 @@ fun () ->\n  let x = Nx.randn dtype [| 2; 8; 4; 16 |] in\n  let y = Attention.rope ~seq_dim:1 x in\n  equal ~msg:\"rope seq_dim shape\" (list int) [ 2; 8; 4; 16 ]\n    (Array.to_list (Nx.shape y))\n\nlet test_rope_odd_dim_error () =\n  Nx.Rng.run ~seed:0 @@ fun () ->\n  let x = Nx.randn dtype [| 1; 2; 4; 7 |] in\n  raises_match\n    (fun exn -> match exn with Invalid_argument _ -> true | _ -> false)\n    (fun () -> ignore (Attention.rope x))\n\n(* Dropout *)\n\nlet test_dropout_eval_identity () =\n  Nx.Rng.run ~seed:42 @@ fun () ->\n  let m =\n    Attention.multi_head_attention ~embed_dim:32 ~num_heads:2 ~dropout:0.5 ()\n  in\n  let vars = Layer.init m ~dtype in\n  let x = Nx.randn dtype [| 1; 4; 32 |] in\n  let y, _ = Layer.apply m vars ~training:false x in\n  equal ~msg:\"eval shape\" (list int) [ 1; 4; 32 ] (Array.to_list (Nx.shape y))\n\nlet test_dropout_training () =\n  Nx.Rng.run ~seed:42 @@ fun () ->\n  let m =\n    Attention.multi_head_attention ~embed_dim:32 ~num_heads:2 ~dropout:0.5 ()\n  in\n  let vars = Layer.init m ~dtype in\n  let x = Nx.randn dtype [| 1; 4; 32 |] in\n  let y, _ = Layer.apply m vars ~training:true x in\n  equal ~msg:\"training shape\" (list int) [ 1; 4; 32 ]\n    (Array.to_list (Nx.shape y))\n\n(* RoPE integration *)\n\nlet test_forward_with_rope () =\n  Nx.Rng.run ~seed:42 @@ fun () ->\n  let m =\n    Attention.multi_head_attention 
~embed_dim:32 ~num_heads:2 ~rope:true ()\n  in\n  let vars = Layer.init m ~dtype in\n  let x = Nx.randn dtype [| 1; 8; 32 |] in\n  let y, _ = Layer.apply m vars ~training:false x in\n  equal ~msg:\"rope forward shape\" (list int) [ 1; 8; 32 ]\n    (Array.to_list (Nx.shape y))\n\nlet () =\n  run \"Kaun.Attention\"\n    [\n      group \"init\"\n        [\n          test \"param shapes\" test_init_param_shapes;\n          test \"GQA param shapes\" test_init_gqa_shapes;\n        ];\n      group \"forward\"\n        [\n          test \"output shape\" test_forward_shape;\n          test \"GQA output shape\" test_forward_gqa;\n          test \"causal differs\" test_causal_differs;\n          test \"with RoPE\" test_forward_with_rope;\n        ];\n      group \"rope\"\n        [\n          test \"preserves shape\" test_rope_preserves_shape;\n          test \"changes values\" test_rope_changes_values;\n          test \"respects seq_dim\" test_rope_seq_dim;\n          test \"odd dim error\" test_rope_odd_dim_error;\n        ];\n      group \"dropout\"\n        [\n          test \"eval identity\" test_dropout_eval_identity;\n          test \"training with dropout\" test_dropout_training;\n        ];\n    ]\n"
  },
  {
    "path": "packages/kaun/test/test_checkpoint.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Windtrap\nmodule Checkpoint = Kaun.Checkpoint\nmodule Ptree = Kaun.Ptree\nmodule Optim = Kaun.Optim\n\nlet with_tmpfile f =\n  let path = Filename.temp_file \"ckpt\" \".safetensors\" in\n  Fun.protect ~finally:(fun () -> Sys.remove path) (fun () -> f path)\n\nlet to_array t = Nx.to_array (Nx.reshape [| -1 |] (Nx.cast Nx.float32 t))\n\n(* Checkpoint save/load *)\n\nlet test_roundtrip_single_tensor () =\n  with_tmpfile (fun path ->\n      let t = Nx.create Nx.float32 [| 2; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6. |] in\n      let tree = Ptree.tensor t in\n      Checkpoint.save path tree;\n      let loaded = Checkpoint.load path ~like:tree in\n      match loaded with\n      | Ptree.Tensor (Ptree.P lt) ->\n          let vals = to_array lt in\n          equal ~msg:\"length\" int 6 (Array.length vals);\n          equal ~msg:\"first\" (float 1e-6) 1.0 vals.(0);\n          equal ~msg:\"last\" (float 1e-6) 6.0 vals.(5)\n      | _ -> fail \"expected Tensor\")\n\nlet test_roundtrip_nested_tree () =\n  with_tmpfile (fun path ->\n      let w = Nx.create Nx.float32 [| 2; 2 |] [| 1.; 2.; 3.; 4. 
|] in\n      let b = Nx.create Nx.float32 [| 2 |] [| 0.1; 0.2 |] in\n      let tree =\n        Ptree.dict\n          [\n            ( \"layer0\",\n              Ptree.dict\n                [ (\"weight\", Ptree.tensor w); (\"bias\", Ptree.tensor b) ] );\n            (\"layer1\", Ptree.dict [ (\"weight\", Ptree.tensor w) ]);\n          ]\n      in\n      Checkpoint.save path tree;\n      let loaded = Checkpoint.load path ~like:tree in\n      let pairs = Ptree.flatten_with_paths loaded in\n      equal ~msg:\"num leaves\" int 3 (List.length pairs);\n      let names = List.map fst pairs in\n      equal ~msg:\"paths\" (list string)\n        [ \"layer0.weight\"; \"layer0.bias\"; \"layer1.weight\" ]\n        names)\n\nlet test_roundtrip_list_tree () =\n  with_tmpfile (fun path ->\n      let t0 = Nx.create Nx.float32 [| 3 |] [| 1.; 2.; 3. |] in\n      let t1 = Nx.create Nx.float32 [| 2 |] [| 4.; 5. |] in\n      let tree = Ptree.list [ Ptree.tensor t0; Ptree.tensor t1 ] in\n      Checkpoint.save path tree;\n      let loaded = Checkpoint.load path ~like:tree in\n      let pairs = Ptree.flatten_with_paths loaded in\n      equal ~msg:\"num leaves\" int 2 (List.length pairs);\n      (* Open the existential in a match: an irrefutable let cannot do so. *)\n      let vals =\n        match List.nth pairs 1 with _, Ptree.P lt1 -> to_array lt1\n      in\n      equal ~msg:\"second tensor\" (float 1e-6) 5.0 vals.(1))\n\nlet test_missing_key () =\n  with_tmpfile (fun path ->\n      let t = Nx.create Nx.float32 [| 2 |] [| 1.; 2. |] in\n      let small = Ptree.dict [ (\"a\", Ptree.tensor t) ] in\n      Checkpoint.save path small;\n      let big = Ptree.dict [ (\"a\", Ptree.tensor t); (\"b\", Ptree.tensor t) ] in\n      raises_invalid_arg \"Checkpoint.load: missing key \\\"b\\\"\" (fun () ->\n          ignore (Checkpoint.load path ~like:big)))\n\nlet test_shape_mismatch () =\n  with_tmpfile (fun path ->\n      let t = Nx.create Nx.float32 [| 2; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6. 
|] in\n      let tree = Ptree.tensor t in\n      Checkpoint.save path tree;\n      let wrong =\n        Ptree.tensor\n          (Nx.create Nx.float32 [| 3; 2 |] [| 1.; 2.; 3.; 4.; 5.; 6. |])\n      in\n      raises_invalid_arg\n        \"Checkpoint.load: shape mismatch for \\\"\\\": expected [3; 2], got [2; 3]\"\n        (fun () -> ignore (Checkpoint.load path ~like:wrong)))\n\nlet test_dtype_casting () =\n  with_tmpfile (fun path ->\n      let t = Nx.create Nx.float32 [| 3 |] [| 1.; 2.; 3. |] in\n      Checkpoint.save path (Ptree.tensor t);\n      let template =\n        Ptree.tensor (Nx.create Nx.float64 [| 3 |] [| 0.; 0.; 0. |])\n      in\n      let loaded = Checkpoint.load path ~like:template in\n      match loaded with\n      | Ptree.Tensor (Ptree.P lt) ->\n          let vals = to_array lt in\n          equal ~msg:\"casted value\" (float 1e-6) 2.0 vals.(1)\n      | _ -> fail \"expected Tensor\")\n\nlet test_empty_tree () =\n  with_tmpfile (fun path ->\n      Checkpoint.save path Ptree.empty;\n      let loaded = Checkpoint.load path ~like:Ptree.empty in\n      match loaded with Ptree.List [] -> () | _ -> fail \"expected empty list\")\n\n(* Optim state serialization *)\n\nlet test_optim_sgd_no_momentum () =\n  let params = Ptree.tensor (Nx.create Nx.float32 [| 2 |] [| 1.; 2. |]) in\n  let algo = Vega.sgd (Vega.Schedule.constant 0.01) in\n  let st = Optim.init algo params in\n  let count, trees = Optim.state_to_trees st in\n  equal ~msg:\"count\" int 0 count;\n  equal ~msg:\"no trees\" int 0 (List.length trees);\n  let st' = Optim.state_of_trees algo ~count trees in\n  let count', trees' = Optim.state_to_trees st' in\n  equal ~msg:\"count roundtrip\" int 0 count';\n  equal ~msg:\"trees roundtrip\" int 0 (List.length trees')\n\nlet test_optim_sgd_momentum () =\n  let params = Ptree.tensor (Nx.create Nx.float32 [| 2 |] [| 1.; 2. 
|]) in\n  let algo = Vega.sgd ~momentum:0.9 (Vega.Schedule.constant 0.01) in\n  let st = Optim.init algo params in\n  let count, trees = Optim.state_to_trees st in\n  equal ~msg:\"count\" int 0 count;\n  equal ~msg:\"one tree\" int 1 (List.length trees)\n\nlet test_optim_adam_roundtrip () =\n  let params = Ptree.tensor (Nx.create Nx.float32 [| 2 |] [| 1.; 2. |]) in\n  let algo = Vega.adam (Vega.Schedule.constant 0.001) in\n  let st = Optim.init algo params in\n  let count, trees = Optim.state_to_trees st in\n  equal ~msg:\"count\" int 0 count;\n  equal ~msg:\"two trees\" int 2 (List.length trees);\n  let st' = Optim.state_of_trees algo ~count trees in\n  let count', trees' = Optim.state_to_trees st' in\n  equal ~msg:\"count roundtrip\" int 0 count';\n  equal ~msg:\"trees roundtrip\" int 2 (List.length trees')\n\nlet test_optim_wrong_tree_count () =\n  let algo = Vega.adam (Vega.Schedule.constant 0.001) in\n  raises_invalid_arg \"Optim.state_of_trees: expected 2 moment trees, got 1\"\n    (fun () -> ignore (Optim.state_of_trees algo ~count:0 [ Ptree.empty ]))\n\nlet () =\n  run \"Kaun.Checkpoint\"\n    [\n      group \"save/load\"\n        [\n          test \"roundtrip single tensor\" test_roundtrip_single_tensor;\n          test \"roundtrip nested tree\" test_roundtrip_nested_tree;\n          test \"roundtrip list tree\" test_roundtrip_list_tree;\n          test \"missing key\" test_missing_key;\n          test \"shape mismatch\" test_shape_mismatch;\n          test \"dtype casting\" test_dtype_casting;\n          test \"empty tree\" test_empty_tree;\n        ];\n      group \"optim serialization\"\n        [\n          test \"sgd no momentum\" test_optim_sgd_no_momentum;\n          test \"sgd momentum\" test_optim_sgd_momentum;\n          test \"adam roundtrip\" test_optim_adam_roundtrip;\n          test \"wrong tree count\" test_optim_wrong_tree_count;\n        ];\n    ]\n"
  },
  {
    "path": "packages/kaun/test/test_data.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Windtrap\nmodule Data = Kaun.Data\n\nlet dtype = Nx.float32\n\n(* Constructors *)\n\nlet test_of_array () =\n  let d = Data.of_array [| 10; 20; 30 |] in\n  equal ~msg:\"length\" (option int) (Some 3) (Data.length d);\n  let a = Data.to_array d in\n  equal ~msg:\"elements\" (array int) [| 10; 20; 30 |] a\n\nlet test_of_fn () =\n  let d = Data.of_fn 4 (fun i -> i * i) in\n  equal ~msg:\"length\" (option int) (Some 4) (Data.length d);\n  let a = Data.to_array d in\n  equal ~msg:\"elements\" (array int) [| 0; 1; 4; 9 |] a\n\nlet test_of_fn_negative () =\n  raises_match\n    (fun exn -> match exn with Invalid_argument _ -> true | _ -> false)\n    (fun () -> ignore (Data.of_fn (-1) Fun.id))\n\nlet test_of_tensor () =\n  let t = Nx.create dtype [| 3; 2 |] [| 1.0; 2.0; 3.0; 4.0; 5.0; 6.0 |] in\n  let d = Data.of_tensor t in\n  equal ~msg:\"length\" (option int) (Some 3) (Data.length d);\n  let a = Data.to_array d in\n  equal ~msg:\"count\" int 3 (Array.length a);\n  equal ~msg:\"shape\" (list int) [ 2 ] (Array.to_list (Nx.shape a.(0)));\n  equal ~msg:\"first elem\" (float 1e-6) 1.0 (Nx.item [ 0 ] a.(0))\n\nlet test_of_tensors () =\n  let x = Nx.create dtype [| 3; 2 |] [| 1.0; 2.0; 3.0; 4.0; 5.0; 6.0 |] in\n  let y = Nx.create dtype [| 3 |] [| 10.0; 20.0; 30.0 |] in\n  let d = Data.of_tensors (x, y) in\n  equal ~msg:\"length\" (option int) (Some 3) (Data.length d);\n  let a = Data.to_array d in\n  equal ~msg:\"count\" int 3 (Array.length a);\n  let x0, y0 = a.(0) in\n  equal ~msg:\"x0 shape\" (list int) [ 2 ] (Array.to_list (Nx.shape x0));\n  equal ~msg:\"y0 scalar\" (float 1e-6) 10.0 (Nx.item [] y0)\n\nlet test_of_tensors_mismatch () =\n  let x = Nx.create dtype [| 3; 2 |] [| 1.0; 2.0; 3.0; 4.0; 5.0; 
6.0 |] in\n  let y = Nx.create dtype [| 2 |] [| 10.0; 20.0 |] in\n  raises_match\n    (fun exn -> match exn with Invalid_argument _ -> true | _ -> false)\n    (fun () -> ignore (Data.of_tensors (x, y)))\n\n(* Transformers *)\n\nlet test_map () =\n  let d = Data.of_array [| 1; 2; 3 |] |> Data.map (fun x -> x * 2) in\n  equal ~msg:\"mapped\" (array int) [| 2; 4; 6 |] (Data.to_array d)\n\nlet test_batch () =\n  let d = Data.of_array [| 1; 2; 3; 4; 5 |] |> Data.batch 2 in\n  let batches = Data.to_array d in\n  equal ~msg:\"num batches\" int 3 (Array.length batches);\n  equal ~msg:\"batch 0\" (array int) [| 1; 2 |] batches.(0);\n  equal ~msg:\"batch 1\" (array int) [| 3; 4 |] batches.(1);\n  equal ~msg:\"batch 2 (partial)\" (array int) [| 5 |] batches.(2)\n\nlet test_batch_drop_last () =\n  let d = Data.of_array [| 1; 2; 3; 4; 5 |] |> Data.batch ~drop_last:true 2 in\n  let batches = Data.to_array d in\n  equal ~msg:\"num batches\" int 2 (Array.length batches);\n  equal ~msg:\"batch 0\" (array int) [| 1; 2 |] batches.(0);\n  equal ~msg:\"batch 1\" (array int) [| 3; 4 |] batches.(1)\n\nlet test_batch_invalid_size () =\n  raises_match\n    (fun exn -> match exn with Invalid_argument _ -> true | _ -> false)\n    (fun () -> ignore (Data.of_array [| 1; 2 |] |> Data.batch 0))\n\nlet test_map_batch () =\n  let d =\n    Data.of_array [| 1; 2; 3; 4 |]\n    |> Data.map_batch 2 (fun batch -> Array.fold_left ( + ) 0 batch)\n  in\n  equal ~msg:\"map_batch\" (array int) [| 3; 7 |] (Data.to_array d)\n\nlet test_shuffle_deterministic () =\n  let d1 =\n    Nx.Rng.run ~seed:42 @@ fun () ->\n    Data.of_array [| 0; 1; 2; 3; 4; 5; 6; 7 |] |> Data.shuffle |> Data.to_array\n  in\n  let d2 =\n    Nx.Rng.run ~seed:42 @@ fun () ->\n    Data.of_array [| 0; 1; 2; 3; 4; 5; 6; 7 |] |> Data.shuffle |> Data.to_array\n  in\n  equal ~msg:\"same seed same order\" (array int) d1 d2\n\nlet test_shuffle_different_seed () =\n  let a1 =\n    Nx.Rng.run ~seed:1 @@ fun () ->\n    Data.of_array [| 0; 1; 2; 3; 4; 
5; 6; 7 |] |> Data.shuffle |> Data.to_array\n  in\n  let a2 =\n    Nx.Rng.run ~seed:2 @@ fun () ->\n    Data.of_array [| 0; 1; 2; 3; 4; 5; 6; 7 |] |> Data.shuffle |> Data.to_array\n  in\n  is_true ~msg:\"different seed different order\" (a1 <> a2)\n\n(* Consumers *)\n\nlet test_fold () =\n  let sum = Data.of_array [| 1; 2; 3; 4 |] |> Data.fold ( + ) 0 in\n  equal ~msg:\"fold sum\" int 10 sum\n\nlet test_to_seq () =\n  let s = Data.of_array [| 10; 20; 30 |] |> Data.to_seq in\n  let a = Array.of_seq s in\n  equal ~msg:\"to_seq\" (array int) [| 10; 20; 30 |] a\n\n(* Properties *)\n\nlet test_reset () =\n  let d = Data.of_array [| 1; 2; 3 |] in\n  let a1 = Data.to_array d in\n  Data.reset d;\n  let a2 = Data.to_array d in\n  equal ~msg:\"reset re-iterates\" (array int) a1 a2\n\nlet test_length () =\n  let d = Data.of_array [| 1; 2; 3 |] in\n  equal ~msg:\"known length\" (option int) (Some 3) (Data.length d);\n  let d2 = Data.map (fun x -> x + 1) d in\n  equal ~msg:\"map preserves length\" (option int) (Some 3) (Data.length d2)\n\n(* Utilities *)\n\nlet test_stack_batch () =\n  let tensors =\n    [|\n      Nx.create dtype [| 2 |] [| 1.0; 2.0 |];\n      Nx.create dtype [| 2 |] [| 3.0; 4.0 |];\n      Nx.create dtype [| 2 |] [| 5.0; 6.0 |];\n    |]\n  in\n  let stacked = Data.stack_batch tensors in\n  equal ~msg:\"shape\" (list int) [ 3; 2 ] (Array.to_list (Nx.shape stacked));\n  equal ~msg:\"value\" (float 1e-6) 3.0 (Nx.item [ 1; 0 ] stacked)\n\nlet () =\n  run \"Kaun.Data\"\n    [\n      group \"constructors\"\n        [\n          test \"of_array\" test_of_array;\n          test \"of_fn\" test_of_fn;\n          test \"of_fn negative\" test_of_fn_negative;\n          test \"of_tensor\" test_of_tensor;\n          test \"of_tensors\" test_of_tensors;\n          test \"of_tensors mismatch\" test_of_tensors_mismatch;\n        ];\n      group \"transformers\"\n        [\n          test \"map\" test_map;\n          test \"batch\" test_batch;\n          test \"batch drop_last\" 
test_batch_drop_last;\n          test \"batch invalid size\" test_batch_invalid_size;\n          test \"map_batch\" test_map_batch;\n          test \"shuffle deterministic\" test_shuffle_deterministic;\n          test \"shuffle different seed\" test_shuffle_different_seed;\n        ];\n      group \"consumers\" [ test \"fold\" test_fold; test \"to_seq\" test_to_seq ];\n      group \"properties\" [ test \"reset\" test_reset; test \"length\" test_length ];\n      group \"utilities\" [ test \"stack_batch\" test_stack_batch ];\n    ]\n"
  },
  {
    "path": "packages/kaun/test/test_fn.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Windtrap\nmodule Fn = Kaun.Fn\n\nlet flatten_f32 t = Nx.to_array (Nx.reshape [| -1 |] (Nx.cast Nx.float32 t))\nlet check_shape msg expected t = equal ~msg (array int) expected (Nx.shape t)\n\nlet check_values msg expected t =\n  let actual = flatten_f32 t in\n  let n = Array.length expected in\n  if Array.length actual <> n then\n    failf \"%s: expected %d elements, got %d\" msg n (Array.length actual);\n  for i = 0 to n - 1 do\n    equal\n      ~msg:(Printf.sprintf \"%s[%d]\" msg i)\n      (float 1e-4) expected.(i) actual.(i)\n  done\n\n(* conv1d *)\n\nlet test_conv1d_basic () =\n  let x = Nx.create Nx.float32 [| 1; 1; 5 |] [| 1.; 2.; 3.; 4.; 5. |] in\n  let w = Nx.create Nx.float32 [| 1; 1; 3 |] [| 1.; 1.; 1. |] in\n  let result = Fn.conv1d x w in\n  check_shape \"conv1d basic shape\" [| 1; 1; 3 |] result;\n  check_values \"conv1d basic\" [| 6.; 9.; 12. |] result\n\nlet test_conv1d_same_padding () =\n  let x = Nx.create Nx.float32 [| 1; 1; 5 |] [| 1.; 2.; 3.; 4.; 5. |] in\n  let w = Nx.create Nx.float32 [| 1; 1; 3 |] [| 1.; 1.; 1. |] in\n  let result = Fn.conv1d ~padding:`Same x w in\n  check_shape \"conv1d same shape\" [| 1; 1; 5 |] result;\n  check_values \"conv1d same\" [| 3.; 6.; 9.; 12.; 9. |] result\n\nlet test_conv1d_stride () =\n  let x =\n    Nx.create Nx.float32 [| 1; 1; 8 |] [| 1.; 2.; 3.; 4.; 5.; 6.; 7.; 8. |]\n  in\n  let w = Nx.create Nx.float32 [| 1; 1; 3 |] [| 1.; 1.; 1. |] in\n  let result = Fn.conv1d ~stride:2 x w in\n  check_shape \"conv1d stride shape\" [| 1; 1; 3 |] result;\n  check_values \"conv1d stride\" [| 6.; 12.; 18. |] result\n\nlet test_conv1d_dilation () =\n  let x = Nx.create Nx.float32 [| 1; 1; 7 |] [| 1.; 2.; 3.; 4.; 5.; 6.; 7. 
|] in\n  let w = Nx.create Nx.float32 [| 1; 1; 3 |] [| 1.; 0.; 1. |] in\n  let result = Fn.conv1d ~dilation:2 x w in\n  check_shape \"conv1d dilation shape\" [| 1; 1; 3 |] result;\n  (* kernel [1;0;1] with dilation=2 picks (i, i+2, i+4): 1+5=6, 2+6=8, 3+7=10 *)\n  check_values \"conv1d dilation\" [| 6.; 8.; 10. |] result\n\nlet test_conv1d_bias () =\n  let x = Nx.create Nx.float32 [| 1; 1; 3 |] [| 1.; 2.; 3. |] in\n  let w = Nx.create Nx.float32 [| 1; 1; 2 |] [| 1.; 1. |] in\n  let bias = Nx.create Nx.float32 [| 1 |] [| 10. |] in\n  let result = Fn.conv1d ~bias x w in\n  check_values \"conv1d bias\" [| 13.; 15. |] result\n\nlet test_conv1d_groups () =\n  let x = Nx.create Nx.float32 [| 1; 4; 4 |] (Array.init 16 float_of_int) in\n  let w =\n    Nx.create Nx.float32 [| 2; 2; 2 |] [| 1.; 1.; 1.; 1.; 1.; 1.; 1.; 1. |]\n  in\n  let result = Fn.conv1d ~groups:2 x w in\n  check_shape \"conv1d groups shape\" [| 1; 2; 3 |] result\n\n(* conv2d *)\n\nlet test_conv2d_basic () =\n  let x = Nx.create Nx.float32 [| 1; 1; 4; 4 |] (Array.init 16 float_of_int) in\n  let w = Nx.create Nx.float32 [| 1; 1; 3; 3 |] (Array.make 9 1.0) in\n  let result = Fn.conv2d x w in\n  check_shape \"conv2d basic shape\" [| 1; 1; 2; 2 |] result;\n  check_values \"conv2d basic\" [| 45.; 54.; 81.; 90. |] result\n\nlet test_conv2d_same_padding () =\n  let x =\n    Nx.create Nx.float32 [| 1; 1; 3; 3 |]\n      [| 1.; 2.; 3.; 4.; 5.; 6.; 7.; 8.; 9. |]\n  in\n  let w = Nx.create Nx.float32 [| 1; 1; 2; 2 |] [| 1.; 1.; 1.; 1. |] in\n  let result = Fn.conv2d ~padding:`Same x w in\n  check_shape \"conv2d same shape\" [| 1; 1; 3; 3 |] result\n\nlet test_conv2d_stride () =\n  let x = Nx.create Nx.float32 [| 1; 1; 5; 5 |] (Array.init 25 float_of_int) in\n  let w = Nx.create Nx.float32 [| 1; 1; 3; 3 |] (Array.make 9 1.0) in\n  let result = Fn.conv2d ~stride:(2, 2) x w in\n  check_shape \"conv2d stride shape\" [| 1; 1; 2; 2 |] result;\n  check_values \"conv2d stride\" [| 54.; 72.; 144.; 162. 
|] result\n\nlet test_conv2d_dilation () =\n  let x = Nx.create Nx.float32 [| 1; 1; 5; 5 |] (Array.init 25 float_of_int) in\n  let w =\n    Nx.create Nx.float32 [| 1; 1; 3; 3 |]\n      [| 1.; 0.; 0.; 0.; 0.; 0.; 0.; 0.; 1. |]\n  in\n  let result = Fn.conv2d ~dilation:(2, 2) x w in\n  check_shape \"conv2d dilation shape\" [| 1; 1; 1; 1 |] result;\n  check_values \"conv2d dilation\" [| 24. |] result\n\nlet test_conv2d_multi_channel () =\n  let x = Nx.create Nx.float32 [| 1; 3; 4; 4 |] (Array.init 48 float_of_int) in\n  let w = Nx.create Nx.float32 [| 2; 3; 3; 3 |] (Array.make 54 1.0) in\n  let result = Fn.conv2d x w in\n  check_shape \"conv2d multi-channel shape\" [| 1; 2; 2; 2 |] result\n\nlet test_conv2d_groups () =\n  let x = Nx.create Nx.float32 [| 1; 4; 6; 6 |] (Array.init 144 float_of_int) in\n  let w = Nx.create Nx.float32 [| 4; 2; 2; 2 |] (Array.make 32 1.0) in\n  let result = Fn.conv2d ~groups:2 x w in\n  check_shape \"conv2d groups shape\" [| 1; 4; 5; 5 |] result\n\nlet test_conv2d_bias () =\n  let x = Nx.create Nx.float32 [| 1; 1; 3; 3 |] (Array.make 9 1.0) in\n  let w = Nx.create Nx.float32 [| 2; 1; 2; 2 |] (Array.make 8 1.0) in\n  let bias = Nx.create Nx.float32 [| 2 |] [| 10.; 20. |] in\n  let result = Fn.conv2d ~bias x w in\n  check_shape \"conv2d bias shape\" [| 1; 2; 2; 2 |] result;\n  (* Each 2x2 window of ones with all-ones kernel = 4.0, + bias *)\n  check_values \"conv2d bias\" [| 14.; 14.; 14.; 14.; 24.; 24.; 24.; 24. |] result\n\n(* max_pool1d *)\n\nlet test_max_pool1d_basic () =\n  let x = Nx.create Nx.float32 [| 1; 1; 6 |] [| 1.; 3.; 2.; 5.; 4.; 6. |] in\n  let result = Fn.max_pool1d ~kernel_size:2 ~stride:2 x in\n  check_shape \"max_pool1d shape\" [| 1; 1; 3 |] result;\n  check_values \"max_pool1d\" [| 3.; 5.; 6. |] result\n\nlet test_max_pool1d_same_padding () =\n  let x = Nx.create Nx.float32 [| 1; 1; 5 |] [| 1.; 3.; 2.; 5.; 4. 
|] in\n  let result = Fn.max_pool1d ~kernel_size:3 ~stride:1 ~padding:`Same x in\n  check_shape \"max_pool1d same shape\" [| 1; 1; 5 |] result\n\n(* max_pool2d *)\n\nlet test_max_pool2d_basic () =\n  let x = Nx.create Nx.float32 [| 1; 1; 4; 4 |] (Array.init 16 float_of_int) in\n  let result = Fn.max_pool2d ~kernel_size:(2, 2) ~stride:(2, 2) x in\n  check_shape \"max_pool2d shape\" [| 1; 1; 2; 2 |] result;\n  check_values \"max_pool2d\" [| 5.; 7.; 13.; 15. |] result\n\nlet test_max_pool2d_stride1 () =\n  let x =\n    Nx.create Nx.float32 [| 1; 1; 3; 3 |]\n      [| 1.; 2.; 3.; 4.; 5.; 6.; 7.; 8.; 9. |]\n  in\n  let result = Fn.max_pool2d ~kernel_size:(2, 2) ~stride:(1, 1) x in\n  check_shape \"max_pool2d stride1 shape\" [| 1; 1; 2; 2 |] result;\n  check_values \"max_pool2d stride1\" [| 5.; 6.; 8.; 9. |] result\n\n(* avg_pool1d *)\n\nlet test_avg_pool1d_basic () =\n  let x = Nx.create Nx.float32 [| 1; 1; 6 |] [| 1.; 2.; 3.; 4.; 5.; 6. |] in\n  let result = Fn.avg_pool1d ~kernel_size:2 ~stride:2 x in\n  check_shape \"avg_pool1d shape\" [| 1; 1; 3 |] result;\n  check_values \"avg_pool1d\" [| 1.5; 3.5; 5.5 |] result\n\n(* avg_pool2d *)\n\nlet test_avg_pool2d_basic () =\n  let x = Nx.create Nx.float32 [| 1; 1; 4; 4 |] (Array.init 16 float_of_int) in\n  let result = Fn.avg_pool2d ~kernel_size:(2, 2) ~stride:(2, 2) x in\n  check_shape \"avg_pool2d shape\" [| 1; 1; 2; 2 |] result;\n  check_values \"avg_pool2d\" [| 2.5; 4.5; 10.5; 12.5 |] result\n\nlet test_avg_pool2d_same_padding () =\n  let x =\n    Nx.create Nx.float32 [| 1; 1; 3; 3 |]\n      [| 1.; 2.; 3.; 4.; 5.; 6.; 7.; 8.; 9. 
|]\n  in\n  let result =\n    Fn.avg_pool2d ~kernel_size:(2, 2) ~stride:(2, 2) ~padding:`Same x\n  in\n  check_shape \"avg_pool2d same shape\" [| 1; 1; 2; 2 |] result\n\n(* Gradient tests *)\n\nlet eps = 1e-4\n\nlet check_rune ~eps msg expected actual =\n  let xs = flatten_f32 expected in\n  let ys = flatten_f32 actual in\n  let n = Array.length xs in\n  if Array.length ys <> n then\n    failf \"%s: shape mismatch: expected %d elts, got %d\" msg n (Array.length ys);\n  for i = 0 to n - 1 do\n    equal ~msg:(Printf.sprintf \"%s[%d]\" msg i) (float eps) xs.(i) ys.(i)\n  done\n\nlet test_grad_conv2d () =\n  (* conv2d is correlation (no kernel flip), unlike the old Nx convolve2d *)\n  let x =\n    Nx.create Nx.float32 [| 1; 1; 4; 4 |]\n      (Array.init 16 (fun i -> float_of_int (i + 1)))\n  in\n  let w = Nx.create Nx.float32 [| 1; 1; 2; 2 |] [| 1.; 0.; 0.; 1. |] in\n  (* grad w.r.t. input: sum(conv2d(x, w)) → each input pixel's grad is how many\n     output windows include it, weighted by the kernel value at that position.\n     For a 2x2 kernel [1,0;0,1] on 4x4 input with Valid padding → 3x3 output.\n     JAX: jax.grad(lambda x: jnp.sum(jax.lax.conv(x, w, (1,1), 'VALID')))(x) *)\n  let f_x x = Nx.sum (Fn.conv2d x w) in\n  let grad_x = Rune.grad f_x x in\n  let expected_x =\n    Nx.create Nx.float32 [| 1; 1; 4; 4 |]\n      [| 1.; 1.; 1.; 0.; 1.; 2.; 2.; 1.; 1.; 2.; 2.; 1.; 0.; 1.; 1.; 1. |]\n  in\n  check_rune ~eps \"conv2d dx\" expected_x grad_x;\n  (* grad w.r.t. kernel *)\n  let f_w w = Nx.sum (Fn.conv2d x w) in\n  let grad_w = Rune.grad f_w w in\n  (* For correlation: dL/dw[i,j] = sum of x values at positions covered by\n     w[i,j] across all output windows. w[0,0] covers x[0..2,0..2], w[0,1] covers\n     x[0..2,1..3], etc. *)\n  let expected_w =\n    Nx.create Nx.float32 [| 1; 1; 2; 2 |] [| 54.; 63.; 90.; 99. 
|]\n  in\n  check_rune ~eps \"conv2d dw\" expected_w grad_w\n\nlet test_grad_avg_pool2d () =\n  let x =\n    Nx.create Nx.float32 [| 1; 1; 4; 4 |]\n      (Array.init 16 (fun i -> float_of_int (i + 1)))\n  in\n  (* Non-overlapping 2x2 avg pool: each output = mean of 4 inputs. grad of\n     sum(avg_pool) = 0.25 everywhere (each input contributes to exactly one\n     output, scaled by 1/4) *)\n  let f x = Nx.sum (Fn.avg_pool2d ~kernel_size:(2, 2) ~stride:(2, 2) x) in\n  let grad_x = Rune.grad f x in\n  let expected = Nx.full Nx.float32 [| 1; 1; 4; 4 |] 0.25 in\n  check_rune ~eps \"avg_pool2d dx\" expected grad_x\n\nlet test_grad_avg_pool2d_overlapping () =\n  let x =\n    Nx.create Nx.float32 [| 1; 1; 4; 4 |]\n      (Array.init 16 (fun i -> float_of_int (i + 1)))\n  in\n  (* Overlapping 2x2 avg pool with stride 1: 3x3 output. Each output window\n     contributes 0.25 per input pixel it covers. Corner pixels appear in 1\n     window, edge in 2, interior in 4. *)\n  let f x = Nx.sum (Fn.avg_pool2d ~kernel_size:(2, 2) ~stride:(1, 1) x) in\n  let grad_x = Rune.grad f x in\n  let expected =\n    Nx.create Nx.float32 [| 1; 1; 4; 4 |]\n      [|\n        0.25;\n        0.5;\n        0.5;\n        0.25;\n        0.5;\n        1.0;\n        1.0;\n        0.5;\n        0.5;\n        1.0;\n        1.0;\n        0.5;\n        0.25;\n        0.5;\n        0.5;\n        0.25;\n      |]\n  in\n  check_rune ~eps \"avg_pool2d overlapping dx\" expected grad_x\n\nlet () =\n  run \"Kaun.Fn\"\n    [\n      group \"conv1d\"\n        [\n          test \"basic\" test_conv1d_basic;\n          test \"same padding\" test_conv1d_same_padding;\n          test \"stride\" test_conv1d_stride;\n          test \"dilation\" test_conv1d_dilation;\n          test \"bias\" test_conv1d_bias;\n          test \"groups\" test_conv1d_groups;\n        ];\n      group \"conv2d\"\n        [\n          test \"basic\" test_conv2d_basic;\n          test \"same padding\" test_conv2d_same_padding;\n          test 
\"stride\" test_conv2d_stride;\n          test \"dilation\" test_conv2d_dilation;\n          test \"multi-channel\" test_conv2d_multi_channel;\n          test \"groups\" test_conv2d_groups;\n          test \"bias\" test_conv2d_bias;\n        ];\n      group \"max_pool\"\n        [\n          test \"1d basic\" test_max_pool1d_basic;\n          test \"1d same padding\" test_max_pool1d_same_padding;\n          test \"2d basic\" test_max_pool2d_basic;\n          test \"2d stride 1\" test_max_pool2d_stride1;\n        ];\n      group \"avg_pool\"\n        [\n          test \"1d basic\" test_avg_pool1d_basic;\n          test \"2d basic\" test_avg_pool2d_basic;\n          test \"2d same padding\" test_avg_pool2d_same_padding;\n        ];\n      group \"gradients\"\n        [\n          test \"conv2d\" test_grad_conv2d;\n          test \"avg_pool2d\" test_grad_avg_pool2d;\n          test \"avg_pool2d overlapping\" test_grad_avg_pool2d_overlapping;\n        ];\n    ]\n"
  },
  {
    "path": "packages/kaun/test/test_grad.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Windtrap\nmodule Grad = Kaun.Grad\nmodule Ptree = Kaun.Ptree\n\nlet string_contains s sub =\n  let slen = String.length s in\n  let sub_len = String.length sub in\n  let rec loop i =\n    if i + sub_len > slen then false\n    else if String.sub s i sub_len = sub then true\n    else loop (i + 1)\n  in\n  if sub_len = 0 then true else loop 0\n\nlet raises_invalid_arg_contains needle f =\n  raises_match\n    (fun exn ->\n      match exn with\n      | Invalid_argument msg -> string_contains msg needle\n      | _ -> false)\n    f\n\n(* f(x) = 0.5 * sum(x^2), gradient = x *)\nlet test_scalar_quadratic () =\n  let x = Nx.create Nx.float32 [| 2 |] [| 3.0; -4.0 |] in\n  let params = Ptree.tensor x in\n  let loss, grads =\n    Grad.value_and_grad\n      (fun p ->\n        let (Ptree.P t) = Ptree.as_tensor_exn p in\n        let t = Ptree.Tensor.to_typed_exn Nx.float32 (Ptree.P t) in\n        Nx.mul (Nx.scalar Nx.float32 0.5) (Nx.sum (Nx.mul t t)))\n      params\n  in\n  equal ~msg:\"loss value\" (float 1e-6) 12.5 (Nx.item [] loss);\n  let g = Ptree.Tensor.to_typed_exn Nx.float32 (Ptree.as_tensor_exn grads) in\n  equal ~msg:\"grad[0]\" (float 1e-5) 3.0 (Nx.item [ 0 ] g);\n  equal ~msg:\"grad[1]\" (float 1e-5) (-4.0) (Nx.item [ 1 ] g)\n\n(* f(w, b) = sum(w * x + b), dw = x, db = ones *)\nlet test_multi_leaf_dict () =\n  let w = Nx.create Nx.float32 [| 3 |] [| 1.0; 2.0; 3.0 |] in\n  let b = Nx.create Nx.float32 [| 3 |] [| 0.1; 0.2; 0.3 |] in\n  let x = Nx.create Nx.float32 [| 3 |] [| 4.0; 5.0; 6.0 |] in\n  let params = Ptree.dict [ (\"w\", Ptree.tensor w); (\"b\", Ptree.tensor b) ] in\n  let loss, grads =\n    Grad.value_and_grad\n      (fun p ->\n        let fields = Ptree.Dict.fields_exn p in\n        let 
w = Ptree.Dict.get_tensor_exn fields ~name:\"w\" Nx.float32 in\n        let b = Ptree.Dict.get_tensor_exn fields ~name:\"b\" Nx.float32 in\n        Nx.sum (Nx.add (Nx.mul w x) b))\n      params\n  in\n  equal ~msg:\"loss value\" (float 1e-4) 32.6 (Nx.item [] loss);\n  let grad_fields = Ptree.Dict.fields_exn grads in\n  let gw = Ptree.Dict.get_tensor_exn grad_fields ~name:\"w\" Nx.float32 in\n  let gb = Ptree.Dict.get_tensor_exn grad_fields ~name:\"b\" Nx.float32 in\n  equal ~msg:\"dw[0]\" (float 1e-5) 4.0 (Nx.item [ 0 ] gw);\n  equal ~msg:\"dw[1]\" (float 1e-5) 5.0 (Nx.item [ 1 ] gw);\n  equal ~msg:\"dw[2]\" (float 1e-5) 6.0 (Nx.item [ 2 ] gw);\n  equal ~msg:\"db[0]\" (float 1e-5) 1.0 (Nx.item [ 0 ] gb);\n  equal ~msg:\"db[1]\" (float 1e-5) 1.0 (Nx.item [ 1 ] gb);\n  equal ~msg:\"db[2]\" (float 1e-5) 1.0 (Nx.item [ 2 ] gb)\n\nlet test_nested_tree () =\n  let a = Nx.create Nx.float32 [| 2 |] [| 1.0; 2.0 |] in\n  let b = Nx.create Nx.float32 [| 2 |] [| 3.0; 4.0 |] in\n  let params =\n    Ptree.dict\n      [\n        (\"layer1\", Ptree.dict [ (\"w\", Ptree.tensor a) ]);\n        (\"layer2\", Ptree.dict [ (\"w\", Ptree.tensor b) ]);\n      ]\n  in\n  let _loss, grads =\n    Grad.value_and_grad\n      (fun p ->\n        let f1 = Ptree.Dict.fields_exn p in\n        let l1 = Ptree.Dict.fields_exn (Ptree.Dict.find_exn \"layer1\" f1) in\n        let l2 = Ptree.Dict.fields_exn (Ptree.Dict.find_exn \"layer2\" f1) in\n        let w1 = Ptree.Dict.get_tensor_exn l1 ~name:\"w\" Nx.float32 in\n        let w2 = Ptree.Dict.get_tensor_exn l2 ~name:\"w\" Nx.float32 in\n        Nx.add (Nx.sum (Nx.mul w1 w1)) (Nx.sum (Nx.mul w2 w2)))\n      params\n  in\n  let gf = Ptree.Dict.fields_exn grads in\n  let gl1 = Ptree.Dict.fields_exn (Ptree.Dict.find_exn \"layer1\" gf) in\n  let gl2 = Ptree.Dict.fields_exn (Ptree.Dict.find_exn \"layer2\" gf) in\n  let ga = Ptree.Dict.get_tensor_exn gl1 ~name:\"w\" Nx.float32 in\n  let gb = Ptree.Dict.get_tensor_exn gl2 ~name:\"w\" Nx.float32 in\n  equal 
~msg:\"ga[0]\" (float 1e-5) 2.0 (Nx.item [ 0 ] ga);\n  equal ~msg:\"ga[1]\" (float 1e-5) 4.0 (Nx.item [ 1 ] ga);\n  equal ~msg:\"gb[0]\" (float 1e-5) 6.0 (Nx.item [ 0 ] gb);\n  equal ~msg:\"gb[1]\" (float 1e-5) 8.0 (Nx.item [ 1 ] gb)\n\nlet test_value_and_grad_aux () =\n  let x = Nx.create Nx.float32 [| 2 |] [| 3.0; 4.0 |] in\n  let params = Ptree.tensor x in\n  let loss, grads, aux =\n    Grad.value_and_grad_aux\n      (fun p ->\n        let (Ptree.P t) = Ptree.as_tensor_exn p in\n        let t = Ptree.Tensor.to_typed_exn Nx.float32 (Ptree.P t) in\n        (Nx.sum (Nx.mul t t), Nx.item [ 0 ] t))\n      params\n  in\n  equal ~msg:\"loss value\" (float 1e-6) 25.0 (Nx.item [] loss);\n  equal ~msg:\"aux value\" (float 1e-6) 3.0 aux;\n  let g = Ptree.Tensor.to_typed_exn Nx.float32 (Ptree.as_tensor_exn grads) in\n  equal ~msg:\"grad[0]\" (float 1e-5) 6.0 (Nx.item [ 0 ] g);\n  equal ~msg:\"grad[1]\" (float 1e-5) 8.0 (Nx.item [ 1 ] g)\n\nlet test_grad_convenience () =\n  let x = Nx.create Nx.float32 [| 2 |] [| 5.0; -3.0 |] in\n  let params = Ptree.tensor x in\n  let f p =\n    let (Ptree.P t) = Ptree.as_tensor_exn p in\n    let t = Ptree.Tensor.to_typed_exn Nx.float32 (Ptree.P t) in\n    Nx.sum t\n  in\n  let grads = Grad.grad f params in\n  let g = Ptree.Tensor.to_typed_exn Nx.float32 (Ptree.as_tensor_exn grads) in\n  equal ~msg:\"grad[0]\" (float 1e-5) 1.0 (Nx.item [ 0 ] g);\n  equal ~msg:\"grad[1]\" (float 1e-5) 1.0 (Nx.item [ 1 ] g)\n\nlet test_empty_tree () =\n  let params = Ptree.list [] in\n  let loss, grads =\n    Grad.value_and_grad (fun _p -> Nx.scalar Nx.float32 42.0) params\n  in\n  equal ~msg:\"empty tree loss\" (float 1e-6) 42.0 (Nx.item [] loss);\n  match grads with\n  | Ptree.List [] -> ()\n  | _ -> fail \"expected empty list gradient\"\n\nlet test_non_float_leaf_error () =\n  let params = Ptree.tensor (Nx.zeros Nx.int32 [| 3 |]) in\n  raises_invalid_arg_contains \"<root> expected float dtype\" (fun () ->\n      ignore (Grad.value_and_grad (fun _p -> 
Nx.scalar Nx.float32 0.0) params))\n\nlet test_mixed_dtype_error () =\n  let params =\n    Ptree.dict\n      [\n        (\"a\", Ptree.tensor (Nx.ones Nx.float16 [| 2 |]));\n        (\"b\", Ptree.tensor (Nx.ones Nx.float32 [| 2 |]));\n      ]\n  in\n  raises_invalid_arg_contains \"has dtype/layout\" (fun () ->\n      ignore\n        (Grad.value_and_grad\n           (fun p ->\n             let fields = Ptree.Dict.fields_exn p in\n             let a = Ptree.Dict.get_tensor_exn fields ~name:\"a\" Nx.float16 in\n             let b = Ptree.Dict.get_tensor_exn fields ~name:\"b\" Nx.float32 in\n             Nx.add (Nx.sum (Nx.cast Nx.float32 a)) (Nx.sum b))\n           params))\n\nlet test_value_and_grad_mixed () =\n  let params =\n    Ptree.dict\n      [\n        (\"a\", Ptree.tensor (Nx.ones Nx.float16 [| 2 |]));\n        (\"b\", Ptree.tensor (Nx.ones Nx.float32 [| 2 |]));\n      ]\n  in\n  let loss, grads =\n    Grad.value_and_grad_mixed\n      (fun p ->\n        let fields = Ptree.Dict.fields_exn p in\n        let a = Ptree.Dict.get_tensor_exn fields ~name:\"a\" Nx.float16 in\n        let b = Ptree.Dict.get_tensor_exn fields ~name:\"b\" Nx.float32 in\n        Nx.add (Nx.sum (Nx.cast Nx.float32 a)) (Nx.sum b))\n      params\n  in\n  equal ~msg:\"loss value\" (float 1e-5) 4.0 (Nx.item [] loss);\n  let grad_fields = Ptree.Dict.fields_exn grads in\n  let ga = Ptree.Dict.get_tensor_exn grad_fields ~name:\"a\" Nx.float16 in\n  let gb = Ptree.Dict.get_tensor_exn grad_fields ~name:\"b\" Nx.float32 in\n  equal ~msg:\"ga[0]\" (float 1e-5) 1.0 (Nx.item [ 0 ] (Nx.cast Nx.float32 ga));\n  equal ~msg:\"gb[0]\" (float 1e-5) 1.0 (Nx.item [ 0 ] gb)\n\nlet test_structure_preserved () =\n  let params =\n    Ptree.list\n      [\n        Ptree.tensor (Nx.ones Nx.float32 [| 2 |]);\n        Ptree.tensor (Nx.ones Nx.float32 [| 3 |]);\n      ]\n  in\n  let grads =\n    Grad.grad\n      (fun p ->\n        let items = Ptree.List.items_exn p in\n        let t0 =\n          
Ptree.Tensor.to_typed_exn Nx.float32\n            (Ptree.as_tensor_exn (List.nth items 0))\n        in\n        let t1 =\n          Ptree.Tensor.to_typed_exn Nx.float32\n            (Ptree.as_tensor_exn (List.nth items 1))\n        in\n        Nx.add (Nx.sum t0) (Nx.sum t1))\n      params\n  in\n  let items = Ptree.List.items_exn grads in\n  equal ~msg:\"list length\" int 2 (List.length items);\n  let g0 =\n    Ptree.Tensor.to_typed_exn Nx.float32\n      (Ptree.as_tensor_exn (List.nth items 0))\n  in\n  let g1 =\n    Ptree.Tensor.to_typed_exn Nx.float32\n      (Ptree.as_tensor_exn (List.nth items 1))\n  in\n  equal ~msg:\"g0 shape\" (list int) [ 2 ] (Array.to_list (Nx.shape g0));\n  equal ~msg:\"g1 shape\" (list int) [ 3 ] (Array.to_list (Nx.shape g1))\n\nlet () =\n  run \"Kaun.Grad\"\n    [\n      group \"value_and_grad\"\n        [\n          test \"scalar quadratic\" test_scalar_quadratic;\n          test \"multi-leaf dict\" test_multi_leaf_dict;\n          test \"nested tree\" test_nested_tree;\n          test \"structure preserved\" test_structure_preserved;\n          test \"value_and_grad_aux\" test_value_and_grad_aux;\n          test \"value_and_grad_mixed\" test_value_and_grad_mixed;\n        ];\n      group \"grad\" [ test \"grad convenience\" test_grad_convenience ];\n      group \"edge cases\"\n        [\n          test \"empty tree\" test_empty_tree;\n          test \"non-float leaf error\" test_non_float_leaf_error;\n          test \"mixed dtype error\" test_mixed_dtype_error;\n        ];\n    ]\n"
  },
  {
    "path": "packages/kaun/test/test_init.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Windtrap\nmodule Init = Kaun.Init\n\nlet string_contains s sub =\n  let slen = String.length s in\n  let sub_len = String.length sub in\n  let rec loop i =\n    if i + sub_len > slen then false\n    else if String.sub s i sub_len = sub then true\n    else loop (i + 1)\n  in\n  if sub_len = 0 then true else loop 0\n\nlet raises_invalid_arg_contains needle f =\n  raises_match\n    (fun exn ->\n      match exn with\n      | Invalid_argument msg -> string_contains msg needle\n      | _ -> false)\n    f\n\nlet flatten_f32 t = Nx.to_array (Nx.reshape [| -1 |] (Nx.cast Nx.float32 t))\n\nlet tensor_all pred t =\n  let a = flatten_f32 t in\n  Array.for_all pred a\n\nlet tensor_stats t =\n  let a = flatten_f32 t in\n  let n = Array.length a in\n  let sum = ref 0.0 in\n  for i = 0 to n - 1 do\n    sum := !sum +. a.(i)\n  done;\n  let mean = !sum /. float_of_int n in\n  let sq = ref 0.0 in\n  for i = 0 to n - 1 do\n    let d = a.(i) -. mean in\n    sq := !sq +. (d *. d)\n  done;\n  let variance = !sq /. 
float_of_int n in\n  (mean, variance)\n\nlet compute_fans shape ~in_axis ~out_axis =\n  let rank = Array.length shape in\n  if rank = 0 then (1, 1)\n  else if rank = 1 then (shape.(0), shape.(0))\n  else\n    let normalize_axis axis = if axis < 0 then rank + axis else axis in\n    let in_axis = normalize_axis in_axis in\n    let out_axis = normalize_axis out_axis in\n    let fan_in = shape.(in_axis) in\n    let fan_out = shape.(out_axis) in\n    let receptive = ref 1 in\n    for i = 0 to rank - 1 do\n      if i <> in_axis && i <> out_axis then receptive := !receptive * shape.(i)\n    done;\n    (fan_in * !receptive, fan_out * !receptive)\n\nlet expected_variance ~scale ~mode ~fan_in ~fan_out =\n  let n =\n    match mode with\n    | `Fan_in -> float_of_int fan_in\n    | `Fan_out -> float_of_int fan_out\n    | `Fan_avg -> float_of_int (fan_in + fan_out) /. 2.0\n  in\n  scale /. n\n\nlet uniform_limit variance = sqrt (3.0 *. variance)\n\nlet test_constants () =\n  Nx.Rng.run ~seed:0 @@ fun () ->\n  let shape = [| 11; 13 |] in\n  let zeros = Init.zeros.f shape Nx.float32 in\n  equal ~msg:\"zeros\" bool true (tensor_all (fun x -> x = 0.0) zeros);\n  let ones = Init.ones.f shape Nx.float32 in\n  equal ~msg:\"ones\" bool true (tensor_all (fun x -> x = 1.0) ones);\n  let c = (Init.constant 3.5).f shape Nx.float32 in\n  equal ~msg:\"constant\" bool true (tensor_all (fun x -> x = 3.5) c)\n\nlet test_uniform_range_and_mean () =\n  Nx.Rng.run ~seed:1 @@ fun () ->\n  let scale = 0.25 in\n  let t = (Init.uniform ~scale ()).f [| 120_000 |] Nx.float32 in\n  equal ~msg:\"uniform range\" bool true\n    (tensor_all (fun x -> x >= 0.0 && x < scale) t);\n  let mean, _ = tensor_stats t in\n  equal ~msg:\"uniform mean\" (float 8e-3) (scale /. 
2.0) mean\n\nlet test_normal_mean_and_variance () =\n  Nx.Rng.run ~seed:2 @@ fun () ->\n  let stddev = 0.2 in\n  let t = (Init.normal ~stddev ()).f [| 140_000 |] Nx.float32 in\n  let mean, variance = tensor_stats t in\n  equal ~msg:\"normal mean\" (float 6e-3) 0.0 mean;\n  equal ~msg:\"normal variance\" (float 8e-3) (stddev *. stddev) variance\n\nlet test_glorot_uniform_bounds () =\n  Nx.Rng.run ~seed:3 @@ fun () ->\n  let shape = [| 64; 32 |] in\n  let fan_in, fan_out = compute_fans shape ~in_axis:(-2) ~out_axis:(-1) in\n  let variance = expected_variance ~scale:1.0 ~mode:`Fan_avg ~fan_in ~fan_out in\n  let limit = uniform_limit variance in\n  let t = (Init.glorot_uniform ()).f shape Nx.float32 in\n  equal ~msg:\"glorot_uniform bounds\" bool true\n    (tensor_all (fun x -> x >= -.limit && x <= limit) t)\n\nlet test_glorot_normal_variance () =\n  Nx.Rng.run ~seed:4 @@ fun () ->\n  let shape = [| 960; 480 |] in\n  let fan_in, fan_out = compute_fans shape ~in_axis:(-2) ~out_axis:(-1) in\n  let expected = expected_variance ~scale:1.0 ~mode:`Fan_avg ~fan_in ~fan_out in\n  let t = (Init.glorot_normal ()).f shape Nx.float32 in\n  let _, variance = tensor_stats t in\n  equal ~msg:\"glorot_normal variance\" (float 3e-4) expected variance\n\nlet test_he_uniform_bounds () =\n  Nx.Rng.run ~seed:5 @@ fun () ->\n  let shape = [| 128; 64 |] in\n  let fan_in, fan_out = compute_fans shape ~in_axis:(-2) ~out_axis:(-1) in\n  let variance = expected_variance ~scale:2.0 ~mode:`Fan_in ~fan_in ~fan_out in\n  let limit = uniform_limit variance in\n  let t = (Init.he_uniform ()).f shape Nx.float32 in\n  equal ~msg:\"he_uniform bounds\" bool true\n    (tensor_all (fun x -> x >= -.limit && x <= limit) t)\n\nlet test_he_normal_variance () =\n  Nx.Rng.run ~seed:6 @@ fun () ->\n  let shape = [| 256; 64 |] in\n  let fan_in, fan_out = compute_fans shape ~in_axis:(-2) ~out_axis:(-1) in\n  let expected = expected_variance ~scale:2.0 ~mode:`Fan_in ~fan_in ~fan_out in\n  let t = (Init.he_normal 
()).f shape Nx.float32 in\n  let _, variance = tensor_stats t in\n  equal ~msg:\"he_normal variance\" (float 2e-3) expected variance\n\nlet test_lecun_uniform_bounds () =\n  Nx.Rng.run ~seed:7 @@ fun () ->\n  let shape = [| 128; 32 |] in\n  let fan_in, fan_out = compute_fans shape ~in_axis:(-2) ~out_axis:(-1) in\n  let variance = expected_variance ~scale:1.0 ~mode:`Fan_in ~fan_in ~fan_out in\n  let limit = uniform_limit variance in\n  let t = (Init.lecun_uniform ()).f shape Nx.float32 in\n  equal ~msg:\"lecun_uniform bounds\" bool true\n    (tensor_all (fun x -> x >= -.limit && x <= limit) t)\n\nlet test_lecun_normal_variance () =\n  Nx.Rng.run ~seed:8 @@ fun () ->\n  let shape = [| 128; 16 |] in\n  let fan_in, fan_out = compute_fans shape ~in_axis:(-2) ~out_axis:(-1) in\n  let expected = expected_variance ~scale:1.0 ~mode:`Fan_in ~fan_in ~fan_out in\n  let t = (Init.lecun_normal ()).f shape Nx.float32 in\n  let _, variance = tensor_stats t in\n  equal ~msg:\"lecun_normal variance\" (float 1.5e-3) expected variance\n\nlet test_variance_scaling_axis_override () =\n  Nx.Rng.run ~seed:9 @@ fun () ->\n  let shape = [| 2; 9; 4 |] in\n  let in_axis = 2 in\n  let out_axis = 0 in\n  let fan_in, fan_out = compute_fans shape ~in_axis ~out_axis in\n  let variance = expected_variance ~scale:1.7 ~mode:`Fan_out ~fan_in ~fan_out in\n  let limit = uniform_limit variance in\n  let init =\n    Init.variance_scaling ~scale:1.7 ~mode:`Fan_out ~distribution:`Uniform\n      ~in_axis ~out_axis ()\n  in\n  let t = init.f shape Nx.float32 in\n  equal ~msg:\"variance_scaling axis override\" bool true\n    (tensor_all (fun x -> x >= -.limit && x <= limit) t)\n\nlet test_validation_errors () =\n  raises_invalid_arg_contains \"scale\" (fun () ->\n      ignore (Init.uniform ~scale:(-1.0) ()));\n  raises_invalid_arg_contains \"stddev\" (fun () ->\n      ignore (Init.normal ~stddev:(-0.1) ()));\n  raises_invalid_arg_contains \"scale\" (fun () ->\n      ignore\n        (Init.variance_scaling 
~scale:(-1.0) ~mode:`Fan_in\n           ~distribution:`Uniform ()));\n  let init =\n    Init.variance_scaling ~scale:1.0 ~mode:`Fan_avg ~distribution:`Uniform\n      ~in_axis:9 ()\n  in\n  Nx.Rng.run ~seed:10 @@ fun () ->\n  raises_invalid_arg_contains \"invalid in axis\" (fun () ->\n      ignore (init.f [| 3; 4 |] Nx.float32));\n  let zero_fan =\n    Init.variance_scaling ~scale:1.0 ~mode:`Fan_in ~distribution:`Uniform ()\n  in\n  raises_invalid_arg_contains \"non-positive fan\" (fun () ->\n      ignore (zero_fan.f [| 0; 4 |] Nx.float32))\n\nlet test_deterministic_same_seed () =\n  let init = Init.he_uniform () in\n  let shape = [| 64; 64 |] in\n  let t0 =\n    Nx.Rng.run ~seed:12 @@ fun () -> init.f shape Nx.float32 |> flatten_f32\n  in\n  let t1 =\n    Nx.Rng.run ~seed:12 @@ fun () -> init.f shape Nx.float32 |> flatten_f32\n  in\n  equal ~msg:\"same seed deterministic\" bool true (t0 = t1)\n\nlet () =\n  run \"Kaun.Init\"\n    [\n      group \"constant\" [ test \"zeros ones constant\" test_constants ];\n      group \"random\"\n        [\n          test \"uniform range and mean\" test_uniform_range_and_mean;\n          test \"normal mean and variance\" test_normal_mean_and_variance;\n          test \"deterministic same seed\" test_deterministic_same_seed;\n        ];\n      group \"variance scaling families\"\n        [\n          test \"glorot uniform bounds\" test_glorot_uniform_bounds;\n          test \"glorot normal variance\" test_glorot_normal_variance;\n          test \"he uniform bounds\" test_he_uniform_bounds;\n          test \"he normal variance\" test_he_normal_variance;\n          test \"lecun uniform bounds\" test_lecun_uniform_bounds;\n          test \"lecun normal variance\" test_lecun_normal_variance;\n          test \"variance scaling axis override\"\n            test_variance_scaling_axis_override;\n        ];\n      group \"validation\" [ test \"invalid arguments\" test_validation_errors ];\n    ]\n"
  },
  {
    "path": "packages/kaun/test/test_layer.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Windtrap\nmodule Layer = Kaun.Layer\nmodule Ptree = Kaun.Ptree\n\nlet flatten_f32 t = Nx.to_array (Nx.reshape [| -1 |] (Nx.cast Nx.float32 t))\n\nlet tensor_close ~eps ~expected ~actual =\n  let xs = flatten_f32 expected in\n  let ys = flatten_f32 actual in\n  let nx = Array.length xs in\n  let ny = Array.length ys in\n  if nx <> ny then false\n  else\n    let ok = ref true in\n    for i = 0 to nx - 1 do\n      if abs_float (xs.(i) -. ys.(i)) > eps then ok := false\n    done;\n    !ok\n\nlet apply_out (type a in_elt) (m : (a, float) Layer.t) vars ~training\n    (x : (a, in_elt) Nx.t) =\n  let y, _ = Layer.apply m vars ~training x in\n  y\n\n(* Linear *)\n\nlet test_linear_shapes () =\n  Nx.Rng.run ~seed:42 @@ fun () ->\n  let m = Layer.linear ~in_features:4 ~out_features:3 () in\n  let vars = Layer.init m ~dtype:Nx.float32 in\n  let fields = Ptree.Dict.fields_exn (Layer.params vars) in\n  let w = Ptree.Dict.get_tensor_exn fields ~name:\"weight\" Nx.float32 in\n  let b = Ptree.Dict.get_tensor_exn fields ~name:\"bias\" Nx.float32 in\n  equal ~msg:\"weight shape\" (list int) [ 4; 3 ] (Array.to_list (Nx.shape w));\n  equal ~msg:\"bias shape\" (list int) [ 3 ] (Array.to_list (Nx.shape b))\n\nlet test_linear_forward () =\n  Nx.Rng.run ~seed:42 @@ fun () ->\n  let m = Layer.linear ~in_features:2 ~out_features:3 () in\n  let vars = Layer.init m ~dtype:Nx.float32 in\n  let x = Nx.ones Nx.float32 [| 1; 2 |] in\n  let y = apply_out m vars ~training:false x in\n  equal ~msg:\"output shape\" (list int) [ 1; 3 ] (Array.to_list (Nx.shape y))\n\nlet test_linear_manual_params () =\n  Nx.Rng.run ~seed:42 @@ fun () ->\n  let m = Layer.linear ~in_features:2 ~out_features:2 () in\n  let w = Nx.create Nx.float32 [| 
2; 2 |] [| 1.0; 0.0; 0.0; 1.0 |] in\n  let b = Nx.create Nx.float32 [| 2 |] [| 0.5; -0.5 |] in\n  let params =\n    Ptree.dict [ (\"weight\", Ptree.tensor w); (\"bias\", Ptree.tensor b) ]\n  in\n  let vars =\n    Layer.init m ~dtype:Nx.float32 |> fun vars -> Layer.with_params vars params\n  in\n  let x = Nx.create Nx.float32 [| 1; 2 |] [| 3.0; 4.0 |] in\n  let y = apply_out m vars ~training:false x in\n  let expected = Nx.create Nx.float32 [| 1; 2 |] [| 3.5; 3.5 |] in\n  equal ~msg:\"linear identity + bias\" bool true\n    (tensor_close ~eps:1e-6 ~expected ~actual:y)\n\n(* Normalization *)\n\nlet test_layer_norm_shapes () =\n  Nx.Rng.run ~seed:42 @@ fun () ->\n  let m = Layer.layer_norm ~dim:8 () in\n  let vars = Layer.init m ~dtype:Nx.float32 in\n  let fields = Ptree.Dict.fields_exn (Layer.params vars) in\n  let gamma = Ptree.Dict.get_tensor_exn fields ~name:\"gamma\" Nx.float32 in\n  let beta = Ptree.Dict.get_tensor_exn fields ~name:\"beta\" Nx.float32 in\n  equal ~msg:\"gamma shape\" (list int) [ 8 ] (Array.to_list (Nx.shape gamma));\n  equal ~msg:\"beta shape\" (list int) [ 8 ] (Array.to_list (Nx.shape beta));\n  equal ~msg:\"gamma values\" bool true\n    (Array.for_all (fun x -> x = 1.0) (flatten_f32 gamma));\n  equal ~msg:\"beta values\" bool true\n    (Array.for_all (fun x -> x = 0.0) (flatten_f32 beta))\n\nlet test_layer_norm_forward () =\n  Nx.Rng.run ~seed:42 @@ fun () ->\n  let m = Layer.layer_norm ~dim:4 () in\n  let vars = Layer.init m ~dtype:Nx.float32 in\n  let x =\n    Nx.create Nx.float32 [| 2; 4 |] [| 1.0; 2.0; 3.0; 4.0; 5.0; 6.0; 7.0; 8.0 |]\n  in\n  let y = apply_out m vars ~training:false x in\n  equal ~msg:\"output shape\" (list int) [ 2; 4 ] (Array.to_list (Nx.shape y))\n\nlet test_rms_norm_shapes () =\n  Nx.Rng.run ~seed:42 @@ fun () ->\n  let m = Layer.rms_norm ~dim:6 () in\n  let vars = Layer.init m ~dtype:Nx.float32 in\n  let fields = Ptree.Dict.fields_exn (Layer.params vars) in\n  let scale = Ptree.Dict.get_tensor_exn fields 
~name:\"scale\" Nx.float32 in\n  equal ~msg:\"scale shape\" (list int) [ 6 ] (Array.to_list (Nx.shape scale));\n  equal ~msg:\"scale values\" bool true\n    (Array.for_all (fun x -> x = 1.0) (flatten_f32 scale))\n\nlet test_batch_norm_shapes () =\n  Nx.Rng.run ~seed:42 @@ fun () ->\n  let m = Layer.batch_norm ~num_features:3 () in\n  let vars = Layer.init m ~dtype:Nx.float32 in\n  let param_fields = Ptree.Dict.fields_exn (Layer.params vars) in\n  let state_fields = Ptree.Dict.fields_exn (Layer.state vars) in\n  let scale = Ptree.Dict.get_tensor_exn param_fields ~name:\"scale\" Nx.float32 in\n  let bias = Ptree.Dict.get_tensor_exn param_fields ~name:\"bias\" Nx.float32 in\n  let running_mean =\n    Ptree.Dict.get_tensor_exn state_fields ~name:\"running_mean\" Nx.float32\n  in\n  let running_var =\n    Ptree.Dict.get_tensor_exn state_fields ~name:\"running_var\" Nx.float32\n  in\n  equal ~msg:\"scale shape\" (list int) [ 3 ] (Array.to_list (Nx.shape scale));\n  equal ~msg:\"bias shape\" (list int) [ 3 ] (Array.to_list (Nx.shape bias));\n  equal ~msg:\"running_mean shape\" (list int) [ 3 ]\n    (Array.to_list (Nx.shape running_mean));\n  equal ~msg:\"running_var shape\" (list int) [ 3 ]\n    (Array.to_list (Nx.shape running_var))\n\nlet test_batch_norm_rank3_axes () =\n  Nx.Rng.run ~seed:42 @@ fun () ->\n  let m = Layer.batch_norm ~num_features:3 () in\n  let vars = Layer.init m ~dtype:Nx.float32 in\n  let param_fields = Ptree.Dict.fields_exn (Layer.params vars) in\n  let scale = Ptree.Dict.get_tensor_exn param_fields ~name:\"scale\" Nx.float32 in\n  let bias = Ptree.Dict.get_tensor_exn param_fields ~name:\"bias\" Nx.float32 in\n  let x =\n    Nx.create Nx.float32 [| 2; 3; 4 |]\n      [|\n        1.0;\n        2.0;\n        3.0;\n        4.0;\n        5.0;\n        6.0;\n        7.0;\n        8.0;\n        9.0;\n        10.0;\n        11.0;\n        12.0;\n        2.0;\n        4.0;\n        6.0;\n        8.0;\n        1.0;\n        3.0;\n        5.0;\n        7.0;\n  
      0.5;\n        1.5;\n        2.5;\n        3.5;\n      |]\n  in\n  let y, _ = Layer.apply m vars ~training:true x in\n  let expected = Kaun.Fn.batch_norm ~axes:[ 0; 2 ] ~scale ~bias x in\n  equal ~msg:\"batch_norm rank3 uses [0;2] axes\" bool true\n    (tensor_close ~eps:1e-6 ~expected ~actual:y)\n\nlet test_batch_norm_running_stats_eval () =\n  Nx.Rng.run ~seed:42 @@ fun () ->\n  let m = Layer.batch_norm ~num_features:2 () in\n  let vars0 = Layer.init m ~dtype:Nx.float32 in\n  let x_train = Nx.create Nx.float32 [| 2; 2 |] [| 1.0; 2.0; 3.0; 4.0 |] in\n  let _y_train, vars1 = Layer.apply m vars0 ~training:true x_train in\n  let param_fields = Ptree.Dict.fields_exn (Layer.params vars1) in\n  let state_fields = Ptree.Dict.fields_exn (Layer.state vars1) in\n  let scale = Ptree.Dict.get_tensor_exn param_fields ~name:\"scale\" Nx.float32 in\n  let bias = Ptree.Dict.get_tensor_exn param_fields ~name:\"bias\" Nx.float32 in\n  let running_mean =\n    Ptree.Dict.get_tensor_exn state_fields ~name:\"running_mean\" Nx.float32\n  in\n  let running_var =\n    Ptree.Dict.get_tensor_exn state_fields ~name:\"running_var\" Nx.float32\n  in\n  let x_eval = Nx.create Nx.float32 [| 2; 2 |] [| 10.0; 20.0; 30.0; 40.0 |] in\n  let y_eval, vars2 = Layer.apply m vars1 ~training:false x_eval in\n  let expected =\n    Nx.standardize ~axes:[ 0 ] ~mean:running_mean ~variance:running_var x_eval\n    |> fun z ->\n    Nx.add (Nx.mul z (Nx.reshape [| 1; 2 |] scale)) (Nx.reshape [| 1; 2 |] bias)\n  in\n  equal ~msg:\"batch_norm eval uses running stats\" bool true\n    (tensor_close ~eps:1e-6 ~expected ~actual:y_eval);\n  let state_fields2 = Ptree.Dict.fields_exn (Layer.state vars2) in\n  let running_mean2 =\n    Ptree.Dict.get_tensor_exn state_fields2 ~name:\"running_mean\" Nx.float32\n  in\n  let running_var2 =\n    Ptree.Dict.get_tensor_exn state_fields2 ~name:\"running_var\" Nx.float32\n  in\n  equal ~msg:\"batch_norm eval keeps running_mean\" bool true\n    (tensor_close ~eps:1e-6 
~expected:running_mean ~actual:running_mean2);\n  equal ~msg:\"batch_norm eval keeps running_var\" bool true\n    (tensor_close ~eps:1e-6 ~expected:running_var ~actual:running_var2)\n\nlet test_batch_norm_eval_affine_rank3 () =\n  Nx.Rng.run ~seed:42 @@ fun () ->\n  let m = Layer.batch_norm ~num_features:3 () in\n  let scale = Nx.create Nx.float32 [| 3 |] [| 2.0; 3.0; 4.0 |] in\n  let bias = Nx.create Nx.float32 [| 3 |] [| 10.0; 20.0; 30.0 |] in\n  let running_mean = Nx.create Nx.float32 [| 3 |] [| 1.0; 2.0; 3.0 |] in\n  let running_var = Nx.create Nx.float32 [| 3 |] [| 4.0; 9.0; 16.0 |] in\n  let vars =\n    Layer.init m ~dtype:Nx.float32 |> fun vars ->\n    Layer.with_params vars\n      (Ptree.dict\n         [ (\"scale\", Ptree.tensor scale); (\"bias\", Ptree.tensor bias) ])\n    |> fun vars ->\n    Layer.with_state vars\n      (Ptree.dict\n         [\n           (\"running_mean\", Ptree.tensor running_mean);\n           (\"running_var\", Ptree.tensor running_var);\n         ])\n  in\n  let x =\n    Nx.create Nx.float32 [| 1; 3; 2 |] [| 1.0; 5.0; 2.0; 8.0; 3.0; 11.0 |]\n  in\n  let y, _ = Layer.apply m vars ~training:false x in\n  let expected =\n    Nx.standardize ~axes:[ 0; 2 ] ~mean:running_mean ~variance:running_var x\n    |> fun z ->\n    Nx.add\n      (Nx.mul z (Nx.reshape [| 1; 3; 1 |] scale))\n      (Nx.reshape [| 1; 3; 1 |] bias)\n  in\n  equal ~msg:\"batch_norm eval rank3 applies affine\" bool true\n    (tensor_close ~eps:1e-6 ~expected ~actual:y)\n\n(* Embedding *)\n\nlet test_embedding_shapes () =\n  Nx.Rng.run ~seed:42 @@ fun () ->\n  let m = Layer.embedding ~vocab_size:100 ~embed_dim:16 () in\n  let vars = Layer.init m ~dtype:Nx.float32 in\n  let fields = Ptree.Dict.fields_exn (Layer.params vars) in\n  let emb = Ptree.Dict.get_tensor_exn fields ~name:\"embedding\" Nx.float32 in\n  equal ~msg:\"embedding shape\" (list int) [ 100; 16 ]\n    (Array.to_list (Nx.shape emb))\n\nlet test_embedding_forward () =\n  Nx.Rng.run ~seed:42 @@ fun () ->\n  let m = 
Layer.embedding ~vocab_size:10 ~embed_dim:4 ~scale:false () in\n  let vars = Layer.init m ~dtype:Nx.float32 in\n  let indices = Nx.create Nx.int32 [| 3 |] [| 0l; 5l; 2l |] in\n  let y = apply_out m vars ~training:false indices in\n  equal ~msg:\"embedding output shape\" (list int) [ 3; 4 ]\n    (Array.to_list (Nx.shape y))\n\nlet test_compose_embedding_linear () =\n  Nx.Rng.run ~seed:42 @@ fun () ->\n  let emb = Layer.embedding ~vocab_size:10 ~embed_dim:4 ~scale:false () in\n  let proj = Layer.linear ~in_features:4 ~out_features:2 () in\n  let m = Layer.compose emb proj in\n  let vars = Layer.init m ~dtype:Nx.float32 in\n  let indices = Nx.create Nx.int32 [| 3 |] [| 0l; 5l; 2l |] in\n  let y, _ = Layer.apply m vars ~training:false indices in\n  equal ~msg:\"compose embedding+linear output shape\" (list int) [ 3; 2 ]\n    (Array.to_list (Nx.shape y))\n\n(* Activations *)\n\nlet test_relu () =\n  Nx.Rng.run ~seed:42 @@ fun () ->\n  let m = Layer.relu () in\n  let vars = Layer.init m ~dtype:Nx.float32 in\n  let x = Nx.create Nx.float32 [| 4 |] [| -2.0; -0.5; 0.0; 3.0 |] in\n  let y = apply_out m vars ~training:false x in\n  let expected = Nx.create Nx.float32 [| 4 |] [| 0.0; 0.0; 0.0; 3.0 |] in\n  equal ~msg:\"relu\" bool true (tensor_close ~eps:1e-6 ~expected ~actual:y)\n\nlet test_activation_no_params () =\n  Nx.Rng.run ~seed:42 @@ fun () ->\n  let activations =\n    [\n      Layer.relu ();\n      Layer.gelu ();\n      Layer.silu ();\n      Layer.tanh ();\n      Layer.sigmoid ();\n    ]\n  in\n  let assert_no_params (m : (float, float) Layer.t) =\n    let vars = Layer.init m ~dtype:Nx.float32 in\n    match (Layer.params vars, Layer.state vars) with\n    | Ptree.List [], Ptree.List [] -> ()\n    | _ -> fail \"expected empty params and state\"\n  in\n  List.iter (fun m -> assert_no_params m) activations\n\n(* Dropout *)\n\nlet test_dropout_eval_identity () =\n  Nx.Rng.run ~seed:42 @@ fun () ->\n  let m = Layer.dropout ~rate:0.99 () in\n  let vars = Layer.init m 
~dtype:Nx.float32 in\n  let x = Nx.ones Nx.float32 [| 10 |] in\n  let y = apply_out m vars ~training:false x in\n  equal ~msg:\"dropout eval = identity\" bool true\n    (tensor_close ~eps:1e-6 ~expected:x ~actual:y)\n\nlet test_dropout_training () =\n  Nx.Rng.run ~seed:42 @@ fun () ->\n  let m = Layer.dropout ~rate:0.5 () in\n  let vars = Layer.init m ~dtype:Nx.float32 in\n  let x = Nx.ones Nx.float32 [| 10 |] in\n  let y = apply_out m vars ~training:true x in\n  equal ~msg:\"dropout training shape\" (list int) [ 10 ]\n    (Array.to_list (Nx.shape y))\n\nlet test_dropout_rate_bounds () =\n  raises_match\n    (fun exn -> match exn with Invalid_argument _ -> true | _ -> false)\n    (fun () -> ignore (Layer.dropout ~rate:(-0.1) ()));\n  raises_match\n    (fun exn -> match exn with Invalid_argument _ -> true | _ -> false)\n    (fun () -> ignore (Layer.dropout ~rate:1.0 ()))\n\n(* Flatten *)\n\nlet test_flatten_forward () =\n  Nx.Rng.run ~seed:42 @@ fun () ->\n  let m = Layer.flatten () in\n  let vars = Layer.init m ~dtype:Nx.float32 in\n  let x = Nx.ones Nx.float32 [| 2; 3; 4 |] in\n  let y = apply_out m vars ~training:false x in\n  equal ~msg:\"flatten shape\" (list int) [ 2; 12 ] (Array.to_list (Nx.shape y))\n\n(* Sequential *)\n\nlet test_sequential_init_structure () =\n  Nx.Rng.run ~seed:42 @@ fun () ->\n  let m =\n    Layer.sequential\n      [\n        Layer.linear ~in_features:4 ~out_features:3 ();\n        Layer.relu ();\n        Layer.linear ~in_features:3 ~out_features:2 ();\n      ]\n  in\n  let vars = Layer.init m ~dtype:Nx.float32 in\n  let param_items = Ptree.List.items_exn (Layer.params vars) in\n  let state_items = Ptree.List.items_exn (Layer.state vars) in\n  equal ~msg:\"sequential params length\" int 3 (List.length param_items);\n  equal ~msg:\"sequential state length\" int 3 (List.length state_items);\n  let f0 = Ptree.Dict.fields_exn (List.nth param_items 0) in\n  let w0 = Ptree.Dict.get_tensor_exn f0 ~name:\"weight\" Nx.float32 in\n  equal 
~msg:\"layer0 weight shape\" (list int) [ 4; 3 ]\n    (Array.to_list (Nx.shape w0));\n  (match List.nth param_items 1 with\n  | Ptree.List [] -> ()\n  | _ -> fail \"relu should have no params\");\n  let f2 = Ptree.Dict.fields_exn (List.nth param_items 2) in\n  let w2 = Ptree.Dict.get_tensor_exn f2 ~name:\"weight\" Nx.float32 in\n  equal ~msg:\"layer2 weight shape\" (list int) [ 3; 2 ]\n    (Array.to_list (Nx.shape w2))\n\nlet test_sequential_forward () =\n  Nx.Rng.run ~seed:42 @@ fun () ->\n  let m =\n    Layer.sequential\n      [\n        Layer.linear ~in_features:4 ~out_features:3 ();\n        Layer.relu ();\n        Layer.linear ~in_features:3 ~out_features:2 ();\n      ]\n  in\n  let vars = Layer.init m ~dtype:Nx.float32 in\n  let x = Nx.ones Nx.float32 [| 5; 4 |] in\n  let y = apply_out m vars ~training:false x in\n  equal ~msg:\"sequential output shape\" (list int) [ 5; 2 ]\n    (Array.to_list (Nx.shape y))\n\n(* Convolution *)\n\nlet test_conv1d_shapes () =\n  Nx.Rng.run ~seed:42 @@ fun () ->\n  let m = Layer.conv1d ~in_channels:3 ~out_channels:8 () in\n  let vars = Layer.init m ~dtype:Nx.float32 in\n  let fields = Ptree.Dict.fields_exn (Layer.params vars) in\n  let w = Ptree.Dict.get_tensor_exn fields ~name:\"weight\" Nx.float32 in\n  let b = Ptree.Dict.get_tensor_exn fields ~name:\"bias\" Nx.float32 in\n  equal ~msg:\"weight shape\" (list int) [ 8; 3; 3 ] (Array.to_list (Nx.shape w));\n  equal ~msg:\"bias shape\" (list int) [ 8 ] (Array.to_list (Nx.shape b))\n\nlet test_conv1d_forward () =\n  Nx.Rng.run ~seed:42 @@ fun () ->\n  let m = Layer.conv1d ~in_channels:2 ~out_channels:4 ~kernel_size:3 () in\n  let vars = Layer.init m ~dtype:Nx.float32 in\n  let x = Nx.ones Nx.float32 [| 1; 2; 10 |] in\n  let y = apply_out m vars ~training:false x in\n  let shape = Nx.shape y in\n  equal ~msg:\"conv1d output batch\" int 1 shape.(0);\n  equal ~msg:\"conv1d output channels\" int 4 shape.(1);\n  equal ~msg:\"conv1d output length\" int 10 shape.(2)\n\nlet 
test_conv2d_shapes () =\n  Nx.Rng.run ~seed:42 @@ fun () ->\n  let m = Layer.conv2d ~in_channels:3 ~out_channels:16 () in\n  let vars = Layer.init m ~dtype:Nx.float32 in\n  let fields = Ptree.Dict.fields_exn (Layer.params vars) in\n  let w = Ptree.Dict.get_tensor_exn fields ~name:\"weight\" Nx.float32 in\n  let b = Ptree.Dict.get_tensor_exn fields ~name:\"bias\" Nx.float32 in\n  equal ~msg:\"weight shape\" (list int) [ 16; 3; 3; 3 ]\n    (Array.to_list (Nx.shape w));\n  equal ~msg:\"bias shape\" (list int) [ 16 ] (Array.to_list (Nx.shape b))\n\nlet test_conv2d_forward () =\n  Nx.Rng.run ~seed:42 @@ fun () ->\n  let m = Layer.conv2d ~in_channels:1 ~out_channels:4 ~kernel_size:(3, 3) () in\n  let vars = Layer.init m ~dtype:Nx.float32 in\n  let x = Nx.ones Nx.float32 [| 1; 1; 8; 8 |] in\n  let y = apply_out m vars ~training:false x in\n  equal ~msg:\"conv2d output shape\" (list int) [ 1; 4; 8; 8 ]\n    (Array.to_list (Nx.shape y))\n\n(* Pooling *)\n\nlet test_max_pool2d () =\n  Nx.Rng.run ~seed:42 @@ fun () ->\n  let m = Layer.max_pool2d ~kernel_size:(2, 2) () in\n  let vars = Layer.init m ~dtype:Nx.float32 in\n  let x = Nx.ones Nx.float32 [| 1; 1; 4; 4 |] in\n  let y = apply_out m vars ~training:false x in\n  equal ~msg:\"max_pool2d shape\" (list int) [ 1; 1; 2; 2 ]\n    (Array.to_list (Nx.shape y))\n\nlet test_avg_pool2d () =\n  Nx.Rng.run ~seed:42 @@ fun () ->\n  let m = Layer.avg_pool2d ~kernel_size:(2, 2) () in\n  let vars = Layer.init m ~dtype:Nx.float32 in\n  let x = Nx.ones Nx.float32 [| 1; 1; 6; 6 |] in\n  let y = apply_out m vars ~training:false x in\n  equal ~msg:\"avg_pool2d shape\" (list int) [ 1; 1; 3; 3 ]\n    (Array.to_list (Nx.shape y))\n\n(* Parameter count *)\n\nlet test_param_count () =\n  Nx.Rng.run ~seed:42 @@ fun () ->\n  let m = Layer.linear ~in_features:10 ~out_features:5 () in\n  let vars = Layer.init m ~dtype:Nx.float32 in\n  equal ~msg:\"linear param count\" int 55\n    (Ptree.count_parameters (Layer.params vars))\n\nlet () =\n  run 
\"Kaun.Layer\"\n    [\n      group \"linear\"\n        [\n          test \"shapes\" test_linear_shapes;\n          test \"forward\" test_linear_forward;\n          test \"manual params\" test_linear_manual_params;\n          test \"param count\" test_param_count;\n        ];\n      group \"normalization\"\n        [\n          test \"layer_norm shapes\" test_layer_norm_shapes;\n          test \"layer_norm forward\" test_layer_norm_forward;\n          test \"rms_norm shapes\" test_rms_norm_shapes;\n          test \"batch_norm shapes\" test_batch_norm_shapes;\n          test \"batch_norm rank3 axes\" test_batch_norm_rank3_axes;\n          test \"batch_norm running stats eval\"\n            test_batch_norm_running_stats_eval;\n          test \"batch_norm eval affine rank3\" test_batch_norm_eval_affine_rank3;\n        ];\n      group \"embedding\"\n        [\n          test \"shapes\" test_embedding_shapes;\n          test \"forward\" test_embedding_forward;\n          test \"compose embedding+linear\" test_compose_embedding_linear;\n        ];\n      group \"activation\"\n        [ test \"relu\" test_relu; test \"no params\" test_activation_no_params ];\n      group \"regularization\"\n        [\n          test \"dropout eval identity\" test_dropout_eval_identity;\n          test \"dropout training\" test_dropout_training;\n          test \"dropout rate bounds\" test_dropout_rate_bounds;\n        ];\n      group \"conv\"\n        [\n          test \"conv1d shapes\" test_conv1d_shapes;\n          test \"conv1d forward\" test_conv1d_forward;\n          test \"conv2d shapes\" test_conv2d_shapes;\n          test \"conv2d forward\" test_conv2d_forward;\n        ];\n      group \"pooling\"\n        [ test \"max_pool2d\" test_max_pool2d; test \"avg_pool2d\" test_avg_pool2d ];\n      group \"reshape\" [ test \"flatten\" test_flatten_forward ];\n      group \"sequential\"\n        [\n          test \"init structure\" test_sequential_init_structure;\n          test \"forward\" 
test_sequential_forward;\n        ];\n    ]\n"
  },
  {
    "path": "packages/kaun/test/test_loss.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Windtrap\nmodule Loss = Kaun.Loss\n\nlet flatten_f32 t = Nx.to_array (Nx.reshape [| -1 |] (Nx.cast Nx.float32 t))\n\nlet tensor_close ~eps ~expected ~actual =\n  let xs = flatten_f32 expected in\n  let ys = flatten_f32 actual in\n  let nx = Array.length xs in\n  let ny = Array.length ys in\n  if nx <> ny then false\n  else\n    let ok = ref true in\n    for i = 0 to nx - 1 do\n      if abs_float (xs.(i) -. ys.(i)) > eps then ok := false\n    done;\n    !ok\n\nlet test_cross_entropy_known_value () =\n  let logits = Nx.create Nx.float32 [| 1; 3 |] [| 2.0; 0.0; -2.0 |] in\n  let labels = Nx.create Nx.float32 [| 1; 3 |] [| 1.0; 0.0; 0.0 |] in\n  let expected = log (1.0 +. exp (-2.0) +. exp (-4.0)) in\n  let actual = Loss.cross_entropy logits labels |> Nx.item [] in\n  equal ~msg:\"cross_entropy known value\" (float 1e-6) expected actual\n\nlet test_sparse_matches_dense_2d () =\n  let logits =\n    Nx.create Nx.float32 [| 3; 4 |]\n      [| 2.0; 0.1; -1.0; 0.3; 0.2; 1.7; -0.4; 0.9; -0.1; 0.8; 1.4; -2.0 |]\n  in\n  let indices = Nx.create Nx.int32 [| 3 |] [| 0l; 1l; 2l |] in\n  let one_hot = Nx.cast Nx.float32 (Nx.one_hot ~num_classes:4 indices) in\n  let dense = Loss.cross_entropy logits one_hot in\n  let sparse = Loss.cross_entropy_sparse logits indices in\n  equal ~msg:\"sparse = dense (2d)\" (float 1e-6) (Nx.item [] dense)\n    (Nx.item [] sparse)\n\nlet test_sparse_matches_dense_nd () =\n  let logits =\n    Nx.create Nx.float32 [| 2; 2; 3 |]\n      [| 0.7; -1.2; 0.5; 1.1; 0.2; -0.3; -0.8; 0.9; 0.4; 0.6; -0.5; 1.7 |]\n  in\n  let indices = Nx.create Nx.int32 [| 2; 2 |] [| 0l; 2l; 1l; 2l |] in\n  let one_hot = Nx.cast Nx.float32 (Nx.one_hot ~num_classes:3 indices) in\n  let dense = 
Loss.cross_entropy logits one_hot in\n  let sparse = Loss.cross_entropy_sparse logits indices in\n  equal ~msg:\"sparse = dense (nd)\" (float 1e-6) (Nx.item [] dense)\n    (Nx.item [] sparse)\n\nlet test_cross_entropy_rejects_invalid_shapes () =\n  raises_invalid_arg \"Loss.cross_entropy: logits must have rank >= 1\" (fun () ->\n      let logits = Nx.scalar Nx.float32 0.0 in\n      let labels = Nx.scalar Nx.float32 1.0 in\n      ignore (Loss.cross_entropy logits labels));\n  raises_invalid_arg\n    \"Loss.cross_entropy: labels rank mismatch (got 1, expected 2)\" (fun () ->\n      let logits = Nx.zeros Nx.float32 [| 2; 3 |] in\n      let labels = Nx.zeros Nx.float32 [| 2 |] in\n      ignore (Loss.cross_entropy logits labels));\n  raises_invalid_arg\n    \"Loss.cross_entropy: labels shape mismatch at axis 0 (got 4, expected 2)\"\n    (fun () ->\n      let logits = Nx.zeros Nx.float32 [| 2; 3 |] in\n      let labels = Nx.zeros Nx.float32 [| 4; 3 |] in\n      ignore (Loss.cross_entropy logits labels));\n  raises_invalid_arg\n    \"Loss.cross_entropy: logits class dimension must be positive (got 0)\"\n    (fun () ->\n      let logits = Nx.zeros Nx.float32 [| 2; 0 |] in\n      let labels = Nx.zeros Nx.float32 [| 2; 0 |] in\n      ignore (Loss.cross_entropy logits labels))\n\nlet test_sparse_rejects_non_integer_labels () =\n  let logits = Nx.zeros Nx.float32 [| 2; 3 |] in\n  let bad = Nx.zeros Nx.float32 [| 2 |] in\n  let msg =\n    Printf.sprintf \"Loss.cross_entropy_sparse: expected integer labels, got %s\"\n      (Nx_core.Dtype.to_string Nx.float32)\n  in\n  raises_invalid_arg msg (fun () ->\n      ignore (Loss.cross_entropy_sparse logits bad))\n\nlet test_sparse_rejects_shape_mismatch () =\n  let logits_2d = Nx.zeros Nx.float32 [| 2; 3 |] in\n  let bad_rank = Nx.zeros Nx.int32 [| 2; 1 |] in\n  raises_invalid_arg\n    \"Loss.cross_entropy_sparse: labels rank mismatch (got 2, expected 1)\"\n    (fun () -> ignore (Loss.cross_entropy_sparse logits_2d bad_rank));\n  let 
logits_3d = Nx.zeros Nx.float32 [| 2; 3; 4 |] in\n  let bad_shape = Nx.zeros Nx.int32 [| 2; 5 |] in\n  raises_invalid_arg\n    \"Loss.cross_entropy_sparse: labels shape mismatch at axis 1 (got 5, \\\n     expected 3)\" (fun () ->\n      ignore (Loss.cross_entropy_sparse logits_3d bad_shape))\n\nlet test_sparse_rejects_invalid_logits_shape () =\n  raises_invalid_arg \"Loss.cross_entropy_sparse: logits must have rank >= 1\"\n    (fun () ->\n      let logits = Nx.scalar Nx.float32 0.0 in\n      let labels = Nx.scalar Nx.int32 0l in\n      ignore (Loss.cross_entropy_sparse logits labels));\n  raises_invalid_arg\n    \"Loss.cross_entropy_sparse: logits class dimension must be positive (got 0)\"\n    (fun () ->\n      let logits = Nx.zeros Nx.float32 [| 2; 0 |] in\n      let labels = Nx.zeros Nx.int32 [| 2 |] in\n      ignore (Loss.cross_entropy_sparse logits labels))\n\nlet test_binary_cross_entropy_logits_stable () =\n  let logits =\n    Nx.create Nx.float32 [| 5 |] [| 1000.0; -1000.0; 0.0; 50.0; -50.0 |]\n  in\n  let labels = Nx.create Nx.float32 [| 5 |] [| 1.0; 0.0; 1.0; 0.0; 1.0 |] in\n  let loss = Loss.binary_cross_entropy logits labels |> Nx.item [] in\n  let xs = [| 1000.0; -1000.0; 0.0; 50.0; -50.0 |] in\n  let ys = [| 1.0; 0.0; 1.0; 0.0; 1.0 |] in\n  let expected = ref 0.0 in\n  for i = 0 to Array.length xs - 1 do\n    let x = xs.(i) in\n    let y = ys.(i) in\n    let v = max x 0.0 -. (x *. y) +. log1p (exp (-.abs_float x)) in\n    expected := !expected +. v\n  done;\n  expected := !expected /. 
float_of_int (Array.length xs);\n  equal ~msg:\"binary_cross_entropy stable value\" (float 1e-6) !expected loss;\n  equal ~msg:\"binary_cross_entropy finite\" bool true\n    (match classify_float loss with FP_nan | FP_infinite -> false | _ -> true)\n\nlet test_binary_cross_entropy_rejects_invalid_shapes () =\n  raises_invalid_arg\n    \"Loss.binary_cross_entropy: labels rank mismatch (got 1, expected 2)\"\n    (fun () ->\n      let logits = Nx.zeros Nx.float32 [| 2; 1 |] in\n      let labels = Nx.zeros Nx.float32 [| 2 |] in\n      ignore (Loss.binary_cross_entropy logits labels));\n  raises_invalid_arg\n    \"Loss.binary_cross_entropy: labels shape mismatch at axis 0 (got 3, \\\n     expected 2)\" (fun () ->\n      let logits = Nx.zeros Nx.float32 [| 2; 1 |] in\n      let labels = Nx.zeros Nx.float32 [| 3; 1 |] in\n      ignore (Loss.binary_cross_entropy logits labels))\n\nlet test_mse_gradient_exact () =\n  let predictions = Nx.create Nx.float32 [| 2; 2 |] [| 0.5; -1.0; 2.0; 3.0 |] in\n  let targets = Nx.create Nx.float32 [| 2; 2 |] [| 0.0; 1.0; 1.0; 2.0 |] in\n  let grad = Rune.grad (fun p -> Loss.mse p targets) predictions in\n  let expected =\n    Nx.create Nx.float32 [| 2; 2 |]\n      [|\n        2.0 *. (0.5 -. 0.0) /. 4.0;\n        2.0 *. (-1.0 -. 1.0) /. 4.0;\n        2.0 *. (2.0 -. 1.0) /. 4.0;\n        2.0 *. (3.0 -. 2.0) /. 
4.0;\n      |]\n  in\n  equal ~msg:\"mse grad exact\" bool true\n    (tensor_close ~eps:1e-6 ~expected ~actual:grad)\n\nlet test_cross_entropy_sparse_dense_gradient_match () =\n  let logits =\n    Nx.create Nx.float32 [| 2; 3 |] [| 2.0; 1.0; 0.5; -1.0; 0.2; 0.0 |]\n  in\n  let indices = Nx.create Nx.int32 [| 2 |] [| 0l; 2l |] in\n  let one_hot = Nx.cast Nx.float32 (Nx.one_hot ~num_classes:3 indices) in\n  let dense_grad = Rune.grad (fun x -> Loss.cross_entropy x one_hot) logits in\n  let sparse_grad =\n    Rune.grad (fun x -> Loss.cross_entropy_sparse x indices) logits\n  in\n  equal ~msg:\"cross_entropy sparse grad = dense grad\" bool true\n    (tensor_close ~eps:1e-6 ~expected:dense_grad ~actual:sparse_grad)\n\nlet test_regression_values () =\n  let predictions = Nx.create Nx.float32 [| 3 |] [| 1.0; 4.0; 3.0 |] in\n  let targets = Nx.create Nx.float32 [| 3 |] [| 1.0; 1.0; 2.0 |] in\n  equal ~msg:\"mse value\" (float 1e-6) (10.0 /. 3.0)\n    (Nx.item [] (Loss.mse predictions targets));\n  equal ~msg:\"mae value\" (float 1e-6) (4.0 /. 3.0)\n    (Nx.item [] (Loss.mae predictions targets))\n\nlet test_regression_broadcasting () =\n  let predictions =\n    Nx.create Nx.float32 [| 2; 3 |] [| 0.0; 1.0; 2.0; 3.0; 4.0; 5.0 |]\n  in\n  let targets = Nx.create Nx.float32 [| 1; 3 |] [| 1.0; 1.0; 1.0 |] in\n  equal ~msg:\"mse broadcast\" (float 1e-6) (31.0 /. 6.0)\n    (Nx.item [] (Loss.mse predictions targets));\n  equal ~msg:\"mae broadcast\" (float 1e-6) (11.0 /. 
6.0)\n    (Nx.item [] (Loss.mae predictions targets))\n\nlet test_mae_gradient_exact () =\n  let predictions = Nx.create Nx.float32 [| 2; 2 |] [| 2.0; -1.0; 4.0; 0.0 |] in\n  let targets = Nx.create Nx.float32 [| 2; 2 |] [| 1.0; 1.0; 2.0; -3.0 |] in\n  let grad = Rune.grad (fun p -> Loss.mae p targets) predictions in\n  let expected =\n    Nx.create Nx.float32 [| 2; 2 |] [| 0.25; -0.25; 0.25; 0.25 |]\n  in\n  equal ~msg:\"mae grad exact\" bool true\n    (tensor_close ~eps:1e-6 ~expected ~actual:grad)\n\nlet () =\n  run \"Kaun.Loss\"\n    [\n      group \"cross-entropy\"\n        [\n          test \"cross entropy known value\" test_cross_entropy_known_value;\n          test \"cross entropy rejects invalid shapes\"\n            test_cross_entropy_rejects_invalid_shapes;\n          test \"sparse matches dense (2d)\" test_sparse_matches_dense_2d;\n          test \"sparse matches dense (nd)\" test_sparse_matches_dense_nd;\n          test \"sparse rejects non-integer labels\"\n            test_sparse_rejects_non_integer_labels;\n          test \"sparse rejects shape mismatch\"\n            test_sparse_rejects_shape_mismatch;\n          test \"sparse rejects invalid logits shape\"\n            test_sparse_rejects_invalid_logits_shape;\n          test \"sparse/dense gradients match\"\n            test_cross_entropy_sparse_dense_gradient_match;\n        ];\n      group \"binary\"\n        [\n          test \"binary cross entropy logits stable\"\n            test_binary_cross_entropy_logits_stable;\n          test \"binary cross entropy rejects invalid shapes\"\n            test_binary_cross_entropy_rejects_invalid_shapes;\n        ];\n      group \"regression\"\n        [\n          test \"mse value + mae value\" test_regression_values;\n          test \"mse/mae broadcasting\" test_regression_broadcasting;\n          test \"mse gradient exact\" test_mse_gradient_exact;\n          test \"mae gradient exact\" test_mae_gradient_exact;\n        ];\n    ]\n"
  },
  {
    "path": "packages/kaun/test/test_metric.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Windtrap\nmodule Metric = Kaun.Metric\n\n(* Tracker *)\n\nlet test_tracker_observe_and_mean () =\n  let t = Metric.tracker () in\n  Metric.observe t \"loss\" 1.0;\n  Metric.observe t \"loss\" 3.0;\n  equal ~msg:\"mean of two\" (float 1e-10) 2.0 (Metric.mean t \"loss\")\n\nlet test_tracker_count () =\n  let t = Metric.tracker () in\n  Metric.observe t \"acc\" 0.9;\n  Metric.observe t \"acc\" 0.8;\n  Metric.observe t \"acc\" 0.7;\n  equal ~msg:\"count\" int 3 (Metric.count t \"acc\")\n\nlet test_tracker_not_found () =\n  let t = Metric.tracker () in\n  raises Not_found (fun () -> ignore (Metric.mean t \"missing\"));\n  raises Not_found (fun () -> ignore (Metric.count t \"missing\"))\n\nlet test_tracker_reset () =\n  let t = Metric.tracker () in\n  Metric.observe t \"x\" 1.0;\n  Metric.reset t;\n  equal ~msg:\"empty after reset\"\n    (list (pair string (float 1e-10)))\n    [] (Metric.to_list t)\n\nlet test_tracker_to_list_sorted () =\n  let t = Metric.tracker () in\n  Metric.observe t \"loss\" 0.5;\n  Metric.observe t \"accuracy\" 0.9;\n  Metric.observe t \"lr\" 0.001;\n  let names = List.map fst (Metric.to_list t) in\n  equal ~msg:\"sorted by name\" (list string) [ \"accuracy\"; \"loss\"; \"lr\" ] names\n\nlet test_tracker_summary () =\n  let t = Metric.tracker () in\n  Metric.observe t \"loss\" 0.4;\n  Metric.observe t \"accuracy\" 0.9;\n  let s = Metric.summary t in\n  (* sorted: accuracy before loss *)\n  equal ~msg:\"summary format\" string \"accuracy: 0.9000  loss: 0.4000\" s\n\n(* Dataset evaluation *)\n\nlet test_eval_mean () =\n  let data = Kaun.Data.of_array [| 2.0; 4.0; 6.0 |] in\n  let result = Metric.eval Fun.id data in\n  equal ~msg:\"eval mean\" (float 1e-10) 4.0 result\n\nlet 
test_eval_empty_raises () =\n  let data = Kaun.Data.of_array [||] in\n  raises_invalid_arg \"Metric.eval: empty dataset\" (fun () ->\n      ignore (Metric.eval Fun.id data))\n\nlet test_eval_many () =\n  let data = Kaun.Data.of_array [| 1.0; 3.0 |] in\n  let result =\n    Metric.eval_many\n      (fun x -> [ (\"double\", x *. 2.0); (\"half\", x /. 2.0) ])\n      data\n  in\n  equal ~msg:\"double\" (float 1e-10) 4.0 (List.assoc \"double\" result);\n  equal ~msg:\"half\" (float 1e-10) 1.0 (List.assoc \"half\" result)\n\nlet test_eval_many_empty_raises () =\n  let data = Kaun.Data.of_array [||] in\n  raises_invalid_arg \"Metric.eval_many: empty dataset\" (fun () ->\n      ignore (Metric.eval_many (fun x -> [ (\"v\", x) ]) data))\n\n(* Accuracy *)\n\nlet test_accuracy_multiclass () =\n  (* logits: batch=4, classes=3 *)\n  let predictions =\n    Nx.create Nx.float32 [| 4; 3 |]\n      [|\n        (* predicted class 2 *) 0.1;\n        0.2;\n        0.7;\n        (* predicted class 0 *) 0.9;\n        0.05;\n        0.05;\n        (* predicted class 1 *) 0.1;\n        0.8;\n        0.1;\n        (* predicted class 0 *) 0.6;\n        0.2;\n        0.2;\n      |]\n  in\n  (* targets: class indices *)\n  let targets = Nx.create Nx.int32 [| 4 |] [| 2l; 0l; 0l; 0l |] in\n  (* correct: sample 0 (2=2), sample 1 (0=0), sample 3 (0=0) = 3/4 *)\n  equal ~msg:\"multiclass accuracy\" (float 1e-6) 0.75\n    (Metric.accuracy predictions targets)\n\nlet test_accuracy_binary () =\n  let predictions = Nx.create Nx.float32 [| 4 |] [| 0.8; 0.3; 0.6; 0.1 |] in\n  let targets = Nx.create Nx.int32 [| 4 |] [| 1l; 0l; 1l; 1l |] in\n  (* predicted: 1, 0, 1, 0; targets: 1, 0, 1, 1 => 3/4 correct *)\n  equal ~msg:\"binary accuracy\" (float 1e-6) 0.75\n    (Metric.accuracy predictions targets)\n\nlet test_binary_accuracy_default_threshold () =\n  let predictions = Nx.create Nx.float32 [| 4 |] [| 0.8; 0.3; 0.6; 0.1 |] in\n  let targets = Nx.create Nx.float32 [| 4 |] [| 1.0; 0.0; 1.0; 1.0 |] in\n  equal 
~msg:\"binary_accuracy default\" (float 1e-6) 0.75\n    (Metric.binary_accuracy predictions targets)\n\nlet test_binary_accuracy_custom_threshold () =\n  let predictions = Nx.create Nx.float32 [| 4 |] [| 0.8; 0.3; 0.6; 0.1 |] in\n  let targets = Nx.create Nx.float32 [| 4 |] [| 1.0; 1.0; 1.0; 0.0 |] in\n  (* threshold=0.25: predicted 1, 1, 1, 0; targets: 1, 1, 1, 0 => 4/4 *)\n  equal ~msg:\"binary_accuracy threshold=0.25\" (float 1e-6) 1.0\n    (Metric.binary_accuracy ~threshold:0.25 predictions targets)\n\n(* Precision / Recall / F1 *)\n\n(* Test scenario: 3 classes, 6 samples. predictions (logits): argmax gives [0;\n   1; 0; 2; 1; 0] targets: [0; 1; 1; 2; 0; 0]\n\n   Confusion per class: class 0: TP=2, FP=1, FN=1, support=3 class 1: TP=1,\n   FP=1, FN=1, support=2 class 2: TP=1, FP=0, FN=0, support=1\n\n   Per-class precision: [2/3; 1/2; 1/1] Per-class recall: [2/3; 1/2; 1/1]\n   Per-class f1: [2/3; 1/2; 1/1] *)\n\nlet prf_predictions () =\n  Nx.create Nx.float32 [| 6; 3 |]\n    [|\n      (* pred 0 *) 0.8;\n      0.1;\n      0.1;\n      (* pred 1 *) 0.1;\n      0.7;\n      0.2;\n      (* pred 0 *) 0.6;\n      0.3;\n      0.1;\n      (* pred 2 *) 0.1;\n      0.2;\n      0.7;\n      (* pred 1 *) 0.2;\n      0.6;\n      0.2;\n      (* pred 0 *) 0.5;\n      0.3;\n      0.2;\n    |]\n\nlet prf_targets () = Nx.create Nx.int32 [| 6 |] [| 0l; 1l; 1l; 2l; 0l; 0l |]\n\nlet test_precision_macro () =\n  let predictions = prf_predictions () in\n  let targets = prf_targets () in\n  (* macro = mean(2/3, 1/2, 1/1) = (2/3 + 1/2 + 1) / 3 *)\n  let expected = ((2.0 /. 3.0) +. (1.0 /. 2.0) +. 1.0) /. 3.0 in\n  equal ~msg:\"precision macro\" (float 1e-6) expected\n    (Metric.precision Macro predictions targets)\n\nlet test_precision_micro () =\n  let predictions = prf_predictions () in\n  let targets = prf_targets () in\n  (* micro = sum(TP) / (sum(TP) + sum(FP)) = 4 / (4 + 2) = 2/3 *)\n  equal ~msg:\"precision micro\" (float 1e-6) (4.0 /. 
6.0)\n    (Metric.precision Micro predictions targets)\n\nlet test_precision_weighted () =\n  let predictions = prf_predictions () in\n  let targets = prf_targets () in\n  (* weighted = (3 * 2/3 + 2 * 1/2 + 1 * 1) / 6 = (2 + 1 + 1) / 6 = 2/3 *)\n  let expected =\n    ((3.0 *. 2.0 /. 3.0) +. (2.0 *. 1.0 /. 2.0) +. (1.0 *. 1.0)) /. 6.0\n  in\n  equal ~msg:\"precision weighted\" (float 1e-6) expected\n    (Metric.precision Weighted predictions targets)\n\nlet test_recall_macro () =\n  let predictions = prf_predictions () in\n  let targets = prf_targets () in\n  let expected = ((2.0 /. 3.0) +. (1.0 /. 2.0) +. 1.0) /. 3.0 in\n  equal ~msg:\"recall macro\" (float 1e-6) expected\n    (Metric.recall Macro predictions targets)\n\nlet test_recall_micro () =\n  let predictions = prf_predictions () in\n  let targets = prf_targets () in\n  (* micro recall = sum(TP) / (sum(TP) + sum(FN)) = 4 / (4 + 2) = 2/3 *)\n  equal ~msg:\"recall micro\" (float 1e-6) (4.0 /. 6.0)\n    (Metric.recall Micro predictions targets)\n\nlet test_f1_macro () =\n  let predictions = prf_predictions () in\n  let targets = prf_targets () in\n  (* per-class f1 = [2/3; 1/2; 1] *)\n  let expected = ((2.0 /. 3.0) +. (1.0 /. 2.0) +. 1.0) /. 3.0 in\n  equal ~msg:\"f1 macro\" (float 1e-6) expected\n    (Metric.f1 Macro predictions targets)\n\nlet test_f1_micro () =\n  let predictions = prf_predictions () in\n  let targets = prf_targets () in\n  (* micro f1 = 2*sum(TP) / (2*sum(TP) + sum(FP) + sum(FN)) = 2*4 / (2*4 + 2 +\n     2) = 8/12 = 2/3 *)\n  equal ~msg:\"f1 micro\" (float 1e-6) (8.0 /. 
12.0)\n    (Metric.f1 Micro predictions targets)\n\nlet test_micro_equals_accuracy () =\n  (* For multiclass single-label, micro P = micro R = micro F1 = accuracy *)\n  let predictions = prf_predictions () in\n  let targets = prf_targets () in\n  let acc = Metric.accuracy predictions targets in\n  equal ~msg:\"micro precision = accuracy\" (float 1e-6) acc\n    (Metric.precision Micro predictions targets);\n  equal ~msg:\"micro recall = accuracy\" (float 1e-6) acc\n    (Metric.recall Micro predictions targets);\n  equal ~msg:\"micro f1 = accuracy\" (float 1e-6) acc\n    (Metric.f1 Micro predictions targets)\n\nlet test_precision_zero_predictions () =\n  (* class 2 has no predictions: pred=[0,1,0], targets=[0,1,2] *)\n  let predictions =\n    Nx.create Nx.float32 [| 3; 3 |]\n      [| 0.8; 0.2; 0.0; 0.1; 0.9; 0.0; 0.6; 0.4; 0.0 |]\n  in\n  let targets = Nx.create Nx.int32 [| 3 |] [| 0l; 1l; 2l |] in\n  (* class 0: TP=1, FP=1 => P=1/2 class 1: TP=1, FP=0 => P=1 class 2: TP=0, FP=0\n     => P=0.0 (zero-div) macro = (1/2 + 1 + 0) / 3 = 0.5 *)\n  equal ~msg:\"precision with missing class\" (float 1e-6) 0.5\n    (Metric.precision Macro predictions targets)\n\nlet test_binary_f1 () =\n  (* 2-class problem *)\n  let predictions =\n    Nx.create Nx.float32 [| 4; 2 |]\n      [|\n        0.9;\n        0.1;\n        (* pred 0 *)\n        0.3;\n        0.7;\n        (* pred 1 *)\n        0.4;\n        0.6;\n        (* pred 1 *)\n        0.8;\n        0.2;\n        (* pred 0 *)\n      |]\n  in\n  let targets = Nx.create Nx.int32 [| 4 |] [| 0l; 1l; 0l; 0l |] in\n  (* class 0: TP=2, FP=0, FN=1 => P=1.0, R=2/3, F1=2*1*(2/3)/(1+2/3)=4/5 *)\n  (* class 1: TP=1, FP=1, FN=0 => P=1/2, R=1.0, F1=2*(1/2)*1/(1/2+1)=2/3 *)\n  let expected_macro = ((4.0 /. 5.0) +. (2.0 /. 3.0)) /. 
2.0 in\n  equal ~msg:\"binary f1 macro\" (float 1e-6) expected_macro\n    (Metric.f1 Macro predictions targets)\n\nlet () =\n  run \"Kaun.Metric\"\n    [\n      group \"tracker\"\n        [\n          test \"observe and mean\" test_tracker_observe_and_mean;\n          test \"count\" test_tracker_count;\n          test \"not found raises\" test_tracker_not_found;\n          test \"reset\" test_tracker_reset;\n          test \"to_list sorted\" test_tracker_to_list_sorted;\n          test \"summary\" test_tracker_summary;\n        ];\n      group \"eval\"\n        [\n          test \"eval mean\" test_eval_mean;\n          test \"eval empty raises\" test_eval_empty_raises;\n          test \"eval_many\" test_eval_many;\n          test \"eval_many empty raises\" test_eval_many_empty_raises;\n        ];\n      group \"accuracy\"\n        [\n          test \"multiclass\" test_accuracy_multiclass;\n          test \"binary\" test_accuracy_binary;\n          test \"binary_accuracy default\" test_binary_accuracy_default_threshold;\n          test \"binary_accuracy custom threshold\"\n            test_binary_accuracy_custom_threshold;\n        ];\n      group \"precision/recall/f1\"\n        [\n          test \"precision macro\" test_precision_macro;\n          test \"precision micro\" test_precision_micro;\n          test \"precision weighted\" test_precision_weighted;\n          test \"recall macro\" test_recall_macro;\n          test \"recall micro\" test_recall_micro;\n          test \"f1 macro\" test_f1_macro;\n          test \"f1 micro\" test_f1_micro;\n          test \"micro = accuracy\" test_micro_equals_accuracy;\n          test \"precision zero predictions\" test_precision_zero_predictions;\n          test \"binary f1\" test_binary_f1;\n        ];\n    ]\n"
  },
  {
    "path": "packages/kaun/test/test_optim.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Windtrap\nmodule Optim = Kaun.Optim\nmodule Ptree = Kaun.Ptree\nmodule Grad = Kaun.Grad\n\n(* Schedules *)\n\nlet test_constant_schedule () =\n  let s = Vega.Schedule.constant 0.01 in\n  equal ~msg:\"step 1\" (float 1e-10) 0.01 (s 1);\n  equal ~msg:\"step 100\" (float 1e-10) 0.01 (s 100);\n  equal ~msg:\"step 0\" (float 1e-10) 0.01 (s 0)\n\nlet test_cosine_decay () =\n  let s = Vega.Schedule.cosine_decay ~init_value:0.1 ~decay_steps:100 () in\n  equal ~msg:\"step 0\" (float 1e-10) 0.1 (s 0);\n  equal ~msg:\"step 100 (fully decayed)\" (float 1e-10) 0.0 (s 100);\n  equal ~msg:\"step 200 (past decay)\" (float 1e-10) 0.0 (s 200);\n  let mid = s 50 in\n  equal ~msg:\"step 50 (midpoint)\" (float 1e-6) 0.05 mid\n\nlet test_cosine_decay_alpha () =\n  let s =\n    Vega.Schedule.cosine_decay ~init_value:0.1 ~decay_steps:100 ~alpha:0.1 ()\n  in\n  equal ~msg:\"step 100 (alpha floor)\" (float 1e-10) 0.01 (s 100)\n\nlet test_warmup_cosine () =\n  let s =\n    Vega.Schedule.warmup_cosine ~init_value:0.0 ~peak_value:0.01\n      ~warmup_steps:100\n  in\n  equal ~msg:\"step 0\" (float 1e-10) 0.0 (s 0);\n  equal ~msg:\"step 100 (peak)\" (float 1e-10) 0.01 (s 100);\n  equal ~msg:\"step 200 (past warmup)\" (float 1e-10) 0.01 (s 200)\n\nlet test_warmup_linear () =\n  let s = Vega.Schedule.linear ~init_value:0.0 ~end_value:0.1 ~steps:10 in\n  equal ~msg:\"step 0\" (float 1e-10) 0.0 (s 0);\n  equal ~msg:\"step 5 (midpoint)\" (float 1e-10) 0.05 (s 5);\n  equal ~msg:\"step 10 (peak)\" (float 1e-10) 0.1 (s 10);\n  equal ~msg:\"step 20 (past warmup)\" (float 1e-10) 0.1 (s 20)\n\nlet test_exponential_decay () =\n  let s =\n    Vega.Schedule.exponential_decay ~init_value:1.0 ~decay_rate:0.5\n      ~decay_steps:10\n  in\n  
equal ~msg:\"step 0\" (float 1e-10) 1.0 (s 0);\n  equal ~msg:\"step 10\" (float 1e-6) 0.5 (s 10);\n  equal ~msg:\"step 20\" (float 1e-6) 0.25 (s 20)\n\n(* Helpers *)\n\nlet quadratic_loss params =\n  (* f(x) = 0.5 * sum(x^2), grad = x *)\n  let (Ptree.P t) = Ptree.as_tensor_exn params in\n  let t = Ptree.Tensor.to_typed_exn Nx.float32 (Ptree.P t) in\n  Nx.mul (Nx.scalar Nx.float32 0.5) (Nx.sum (Nx.mul t t))\n\nlet make_params values =\n  Ptree.tensor (Nx.create Nx.float32 [| Array.length values |] values)\n\nlet get_values params =\n  let (Ptree.P t) = Ptree.as_tensor_exn params in\n  let t = Ptree.Tensor.to_typed_exn Nx.float32 (Ptree.P t) in\n  Nx.to_array (Nx.reshape [| -1 |] t)\n\nlet train_steps algo params ~steps =\n  let state = Optim.init algo params in\n  let p = ref params in\n  let s = ref state in\n  for _ = 1 to steps do\n    let _loss, grads = Grad.value_and_grad quadratic_loss !p in\n    let new_params, state' = Optim.step !s !p grads in\n    p := new_params;\n    s := state'\n  done;\n  !p\n\n(* SGD *)\n\nlet test_sgd_basic () =\n  let lr = Vega.Schedule.constant 0.1 in\n  let algo = Vega.sgd lr in\n  let params = make_params [| 4.0; -3.0 |] in\n  let result = train_steps algo params ~steps:1 in\n  let v = get_values result in\n  (* After 1 step: x - lr * x = x * (1 - lr) = x * 0.9 *)\n  equal ~msg:\"sgd[0] after 1 step\" (float 1e-5) 3.6 v.(0);\n  equal ~msg:\"sgd[1] after 1 step\" (float 1e-5) (-2.7) v.(1)\n\nlet test_sgd_converges () =\n  let lr = Vega.Schedule.constant 0.1 in\n  let algo = Vega.sgd lr in\n  let params = make_params [| 10.0; -8.0 |] in\n  let result = train_steps algo params ~steps:100 in\n  let v = get_values result in\n  equal ~msg:\"sgd converges[0]\" (float 1e-3) 0.0 v.(0);\n  equal ~msg:\"sgd converges[1]\" (float 1e-3) 0.0 v.(1)\n\nlet test_sgd_momentum () =\n  let lr = Vega.Schedule.constant 0.01 in\n  let algo = Vega.sgd ~momentum:0.9 lr in\n  let params = make_params [| 5.0; -3.0 |] in\n  let result = train_steps algo 
params ~steps:100 in\n  let v = get_values result in\n  equal ~msg:\"sgd+momentum converges[0]\" (float 0.1) 0.0 v.(0);\n  equal ~msg:\"sgd+momentum converges[1]\" (float 0.1) 0.0 v.(1)\n\nlet test_sgd_nesterov () =\n  let lr = Vega.Schedule.constant 0.01 in\n  let algo = Vega.sgd ~momentum:0.9 ~nesterov:true lr in\n  let params = make_params [| 5.0; -3.0 |] in\n  let result = train_steps algo params ~steps:100 in\n  let v = get_values result in\n  equal ~msg:\"sgd+nesterov converges[0]\" (float 1e-2) 0.0 v.(0);\n  equal ~msg:\"sgd+nesterov converges[1]\" (float 1e-2) 0.0 v.(1)\n\n(* Adam *)\n\nlet test_adam_converges () =\n  let lr = Vega.Schedule.constant 0.1 in\n  let algo = Vega.adam lr in\n  let params = make_params [| 5.0; -3.0 |] in\n  let result = train_steps algo params ~steps:100 in\n  let v = get_values result in\n  equal ~msg:\"adam converges[0]\" (float 0.5) 0.0 v.(0);\n  equal ~msg:\"adam converges[1]\" (float 0.5) 0.0 v.(1)\n\n(* AdamW *)\n\nlet test_adamw_converges () =\n  let lr = Vega.Schedule.constant 0.1 in\n  let algo = Vega.adamw lr in\n  let params = make_params [| 5.0; -3.0 |] in\n  let result = train_steps algo params ~steps:100 in\n  let v = get_values result in\n  equal ~msg:\"adamw converges[0]\" (float 0.5) 0.0 v.(0);\n  equal ~msg:\"adamw converges[1]\" (float 0.5) 0.0 v.(1)\n\n(* RMSprop *)\n\nlet test_rmsprop_converges () =\n  let lr = Vega.Schedule.constant 0.1 in\n  let algo = Vega.rmsprop lr in\n  let params = make_params [| 5.0; -3.0 |] in\n  let result = train_steps algo params ~steps:100 in\n  let v = get_values result in\n  equal ~msg:\"rmsprop converges[0]\" (float 0.5) 0.0 v.(0);\n  equal ~msg:\"rmsprop converges[1]\" (float 0.5) 0.0 v.(1)\n\nlet test_rmsprop_momentum () =\n  let lr = Vega.Schedule.constant 0.01 in\n  let algo = Vega.rmsprop ~momentum:0.9 lr in\n  let params = make_params [| 5.0; -3.0 |] in\n  let result = train_steps algo params ~steps:100 in\n  let v = get_values result in\n  equal ~msg:\"rmsprop+momentum 
converges[0]\" (float 0.5) 0.0 v.(0);\n  equal ~msg:\"rmsprop+momentum converges[1]\" (float 0.5) 0.0 v.(1)\n\n(* Adagrad *)\n\nlet test_adagrad_converges () =\n  let lr = Vega.Schedule.constant 0.5 in\n  let algo = Vega.adagrad lr in\n  let params = make_params [| 5.0; -3.0 |] in\n  let result = train_steps algo params ~steps:100 in\n  let v = get_values result in\n  equal ~msg:\"adagrad converges[0]\" (float 0.5) 0.0 v.(0);\n  equal ~msg:\"adagrad converges[1]\" (float 0.5) 0.0 v.(1)\n\nlet test_invalid_hyperparameters () =\n  let lr = Vega.Schedule.constant 0.1 in\n  raises_match\n    (fun exn -> match exn with Invalid_argument _ -> true | _ -> false)\n    (fun () -> ignore (Vega.sgd ~momentum:1.0 lr));\n  raises_match\n    (fun exn -> match exn with Invalid_argument _ -> true | _ -> false)\n    (fun () -> ignore (Vega.adam ~eps:0.0 lr));\n  raises_match\n    (fun exn -> match exn with Invalid_argument _ -> true | _ -> false)\n    (fun () -> ignore (Vega.adamw ~weight_decay:(-0.1) lr));\n  raises_match\n    (fun exn -> match exn with Invalid_argument _ -> true | _ -> false)\n    (fun () -> ignore (Vega.rmsprop ~decay:1.0 lr));\n  raises_match\n    (fun exn -> match exn with Invalid_argument _ -> true | _ -> false)\n    (fun () -> ignore (Vega.adagrad ~eps:0.0 lr))\n\n(* Gradient utilities *)\n\nlet test_global_norm () =\n  let t =\n    Ptree.dict\n      [\n        (\"a\", Ptree.tensor (Nx.create Nx.float32 [| 2 |] [| 3.0; 4.0 |]));\n        (\"b\", Ptree.tensor (Nx.create Nx.float32 [| 1 |] [| 0.0 |]));\n      ]\n  in\n  (* sqrt(9 + 16 + 0) = 5 *)\n  equal ~msg:\"global_norm\" (float 1e-5) 5.0 (Optim.global_norm t)\n\nlet test_clip_by_global_norm () =\n  let t = Ptree.tensor (Nx.create Nx.float32 [| 2 |] [| 3.0; 4.0 |]) in\n  (* norm = 5, clip to 2.5 → scale by 0.5 *)\n  let clipped = Optim.clip_by_global_norm 2.5 t in\n  let v = get_values clipped in\n  equal ~msg:\"clipped[0]\" (float 1e-5) 1.5 v.(0);\n  equal ~msg:\"clipped[1]\" (float 1e-5) 2.0 v.(1)\n\nlet 
test_clip_no_op () =\n  let t = Ptree.tensor (Nx.create Nx.float32 [| 2 |] [| 1.0; 1.0 |]) in\n  (* norm = sqrt(2) ~ 1.41, max_norm = 5.0 → no clipping *)\n  let clipped = Optim.clip_by_global_norm 5.0 t in\n  let v = get_values clipped in\n  equal ~msg:\"no clip[0]\" (float 1e-5) 1.0 v.(0);\n  equal ~msg:\"no clip[1]\" (float 1e-5) 1.0 v.(1)\n\n(* Multi-parameter tree *)\n\nlet test_multi_param_tree () =\n  let lr = Vega.Schedule.constant 0.1 in\n  let algo = Vega.sgd lr in\n  let params =\n    Ptree.dict\n      [\n        (\"w\", Ptree.tensor (Nx.create Nx.float32 [| 2 |] [| 4.0; -2.0 |]));\n        (\"b\", Ptree.tensor (Nx.create Nx.float32 [| 1 |] [| 1.0 |]));\n      ]\n  in\n  let f p =\n    let fields = Ptree.Dict.fields_exn p in\n    let w = Ptree.Dict.get_tensor_exn fields ~name:\"w\" Nx.float32 in\n    let b = Ptree.Dict.get_tensor_exn fields ~name:\"b\" Nx.float32 in\n    Nx.add\n      (Nx.mul (Nx.scalar Nx.float32 0.5) (Nx.sum (Nx.mul w w)))\n      (Nx.mul (Nx.scalar Nx.float32 0.5) (Nx.sum (Nx.mul b b)))\n  in\n  let state = Optim.init algo params in\n  let _loss, grads = Grad.value_and_grad f params in\n  let result, _state' = Optim.step state params grads in\n  let fields = Ptree.Dict.fields_exn result in\n  let w = Ptree.Dict.get_tensor_exn fields ~name:\"w\" Nx.float32 in\n  let b = Ptree.Dict.get_tensor_exn fields ~name:\"b\" Nx.float32 in\n  (* w_new = w - lr * w = w * 0.9 *)\n  equal ~msg:\"w[0]\" (float 1e-5) 3.6 (Nx.item [ 0 ] w);\n  equal ~msg:\"w[1]\" (float 1e-5) (-1.8) (Nx.item [ 1 ] w);\n  equal ~msg:\"b[0]\" (float 1e-5) 0.9 (Nx.item [ 0 ] b)\n\nlet () =\n  run \"Kaun.Optim\"\n    [\n      group \"schedules\"\n        [\n          test \"constant\" test_constant_schedule;\n          test \"cosine decay\" test_cosine_decay;\n          test \"cosine decay alpha\" test_cosine_decay_alpha;\n          test \"warmup cosine\" test_warmup_cosine;\n          test \"warmup linear\" test_warmup_linear;\n          test \"exponential decay\" 
test_exponential_decay;\n        ];\n      group \"sgd\"\n        [\n          test \"basic step\" test_sgd_basic;\n          test \"converges\" test_sgd_converges;\n          test \"momentum\" test_sgd_momentum;\n          test \"nesterov\" test_sgd_nesterov;\n        ];\n      group \"adam\" [ test \"converges\" test_adam_converges ];\n      group \"adamw\" [ test \"converges\" test_adamw_converges ];\n      group \"rmsprop\"\n        [\n          test \"converges\" test_rmsprop_converges;\n          test \"momentum\" test_rmsprop_momentum;\n        ];\n      group \"adagrad\" [ test \"converges\" test_adagrad_converges ];\n      group \"validation\"\n        [ test \"invalid hyperparameters\" test_invalid_hyperparameters ];\n      group \"gradient utilities\"\n        [\n          test \"global_norm\" test_global_norm;\n          test \"clip_by_global_norm\" test_clip_by_global_norm;\n          test \"clip no-op\" test_clip_no_op;\n        ];\n      group \"multi-param\" [ test \"tree optimizer step\" test_multi_param_tree ];\n    ]\n"
  },
  {
    "path": "packages/kaun/test/test_ptree.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Windtrap\nmodule Ptree = Kaun.Ptree\n\nlet string_contains s sub =\n  let slen = String.length s in\n  let sub_len = String.length sub in\n  let rec loop i =\n    if i + sub_len > slen then false\n    else if String.sub s i sub_len = sub then true\n    else loop (i + 1)\n  in\n  if sub_len = 0 then true else loop 0\n\nlet raises_invalid_arg_any f =\n  raises_match\n    (fun exn -> match exn with Invalid_argument _ -> true | _ -> false)\n    f\n\nlet raises_invalid_arg_contains needle f =\n  raises_match\n    (fun exn ->\n      match exn with\n      | Invalid_argument msg -> string_contains msg needle\n      | _ -> false)\n    f\n\nlet f32_leaf v = Ptree.tensor (Nx.full Nx.float32 [| 1 |] v)\n\nlet f32_value_of_tensor p =\n  let t = Ptree.Tensor.to_typed_exn Nx.float32 p in\n  Nx.item [ 0 ] t\n\nlet f32_value_of_tree t =\n  let p = Ptree.as_tensor_exn t in\n  f32_value_of_tensor p\n\nlet collect_f32_values t =\n  let values = ref [] in\n  Ptree.iter\n    (fun p ->\n      let v =\n        Ptree.with_tensor p\n          {\n            run =\n              (fun (type a) (type layout) (x : (a, layout) Nx.t) ->\n                let y = Nx.cast Nx.float32 x in\n                Nx.item [ 0 ] (Nx.reshape [| -1 |] y));\n          }\n      in\n      values := v :: !values)\n    t;\n  List.rev !values\n\nlet test_dict_key_validation () =\n  raises_invalid_arg \"duplicate key \\\"w\\\"\" (fun () ->\n      Ptree.dict [ (\"w\", f32_leaf 1.0); (\"w\", f32_leaf 2.0) ]);\n  raises_invalid_arg \"empty key\" (fun () -> Ptree.dict [ (\"\", f32_leaf 1.0) ]);\n  raises_invalid_arg_contains \"reserved character '.'\" (fun () ->\n      Ptree.dict [ (\"a.b\", f32_leaf 1.0) ]);\n  raises_invalid_arg_contains \"reserved 
character '['\" (fun () ->\n      Ptree.dict [ (\"a[0]\", f32_leaf 1.0) ]);\n  ignore\n    (Ptree.dict\n       [\n         (\"weight\", f32_leaf 1.0);\n         (\"bias\", f32_leaf 2.0);\n         (\"layer_1\", f32_leaf 3.0);\n       ])\n\nlet test_tensor_module () =\n  let p = Ptree.P (Nx.zeros Nx.float32 [| 2; 3 |]) in\n  let dtype_matches =\n    match Ptree.Tensor.dtype p with\n    | Nx_core.Dtype.Pack dt -> Nx_core.Dtype.equal dt Nx.float32\n  in\n  equal ~msg:\"dtype\" bool true dtype_matches;\n  equal ~msg:\"shape\" (list int) [ 2; 3 ] (Array.to_list (Ptree.Tensor.shape p));\n  equal ~msg:\"numel\" int 6 (Ptree.Tensor.numel p);\n  equal ~msg:\"to_typed hit\" bool true\n    (Option.is_some (Ptree.Tensor.to_typed Nx.float32 p));\n  equal ~msg:\"to_typed miss\" bool true\n    (Option.is_none (Ptree.Tensor.to_typed Nx.float64 p));\n  raises_invalid_arg_contains \"dtype mismatch\" (fun () ->\n      Ptree.Tensor.to_typed_exn Nx.float64 p)\n\nlet test_leaf_access () =\n  let p = Ptree.P (Nx.full Nx.float32 [| 1 |] 7.0) in\n  let v =\n    Ptree.with_tensor p\n      {\n        run =\n          (fun (type a) (type layout) (t : (a, layout) Nx.t) ->\n            let t = Nx.cast Nx.float32 t in\n            Nx.item [ 0 ] t);\n      }\n  in\n  equal ~msg:\"with_tensor\" (float 1e-6) 7.0 v;\n  equal ~msg:\"as_tensor_exn\" (float 1e-6) 7.0\n    (f32_value_of_tree (Ptree.tensor (Nx.full Nx.float32 [| 1 |] 7.0)));\n  raises_invalid_arg_contains \"ctx\" (fun () ->\n      Ptree.as_tensor_exn ~ctx:\"ctx\" (Ptree.list []))\n\nlet test_dict_access () =\n  let fields =\n    Ptree.Dict.fields_exn\n      (Ptree.dict [ (\"w\", f32_leaf 3.0); (\"b\", f32_leaf 4.0) ])\n  in\n  equal ~msg:\"find hit\" bool true (Option.is_some (Ptree.Dict.find \"w\" fields));\n  equal ~msg:\"find miss\" bool true (Option.is_none (Ptree.Dict.find \"x\" fields));\n  equal ~msg:\"find_exn\" (float 1e-6) 4.0\n    (f32_value_of_tree (Ptree.Dict.find_exn \"b\" fields));\n  raises_invalid_arg_contains \"ctx\" 
(fun () ->\n      Ptree.Dict.find_exn ~ctx:\"ctx\" \"x\" fields);\n  equal ~msg:\"get_tensor_exn\" (float 1e-6) 3.0\n    (Nx.item [ 0 ] (Ptree.Dict.get_tensor_exn fields ~name:\"w\" Nx.float32));\n  raises_invalid_arg_any (fun () ->\n      Ptree.Dict.get_tensor_exn fields ~name:\"x\" Nx.float32);\n  raises_invalid_arg_any (fun () ->\n      Ptree.Dict.get_tensor_exn fields ~name:\"w\" Nx.float64);\n  raises_invalid_arg_any (fun () ->\n      Ptree.Dict.get_tensor_exn\n        (Ptree.Dict.fields_exn (Ptree.dict [ (\"node\", Ptree.list []) ]))\n        ~name:\"node\" Nx.float32);\n  raises_invalid_arg_contains \"ctx\" (fun () ->\n      Ptree.Dict.fields_exn ~ctx:\"ctx\"\n        (Ptree.tensor (Nx.zeros Nx.float32 [| 1 |])))\n\nlet test_list_access () =\n  let items =\n    Ptree.List.items_exn\n      (Ptree.list [ f32_leaf 1.0; f32_leaf 2.0; Ptree.list [ f32_leaf 3.0 ] ])\n  in\n  equal ~msg:\"items_exn length\" int 3 (List.length items);\n  raises_invalid_arg_contains \"ctx\" (fun () ->\n      Ptree.List.items_exn ~ctx:\"ctx\" (f32_leaf 1.0))\n\nlet test_map () =\n  let tree =\n    Ptree.dict\n      [ (\"a\", f32_leaf 1.0); (\"b\", Ptree.list [ f32_leaf 2.0; f32_leaf 3.0 ]) ]\n  in\n  let mapped =\n    Ptree.map\n      {\n        run =\n          (fun (type a) (type layout) (t : (a, layout) Nx.t) ->\n            let dt = Nx.dtype t in\n            let ten = Nx_core.Dtype.of_float dt 10.0 in\n            Nx.add t (Nx.scalar dt ten));\n      }\n      tree\n  in\n  equal ~msg:\"map values\"\n    (list (float 1e-6))\n    [ 11.0; 12.0; 13.0 ]\n    (collect_f32_values mapped)\n\nlet test_map2_success_and_order () =\n  let lhs = Ptree.dict [ (\"z\", f32_leaf 1.0); (\"a\", f32_leaf 2.0) ] in\n  let rhs = Ptree.dict [ (\"a\", f32_leaf 20.0); (\"z\", f32_leaf 10.0) ] in\n  let out = Ptree.map2 { run = Nx.add } lhs rhs in\n  let fields = Ptree.Dict.fields_exn out in\n  equal ~msg:\"preserve lhs key order\" (list string) [ \"z\"; \"a\" ]\n    (List.map fst fields);\n  equal 
~msg:\"z value\" (float 1e-6) 11.0\n    (Nx.item [ 0 ] (Ptree.Dict.get_tensor_exn fields ~name:\"z\" Nx.float32));\n  equal ~msg:\"a value\" (float 1e-6) 22.0\n    (Nx.item [ 0 ] (Ptree.Dict.get_tensor_exn fields ~name:\"a\" Nx.float32))\n\nlet test_map2_errors () =\n  raises_invalid_arg_contains \"structure mismatch\" (fun () ->\n      Ptree.map2 { run = Nx.add } (f32_leaf 1.0)\n        (Ptree.dict [ (\"x\", f32_leaf 1.0) ]));\n  raises_invalid_arg_contains \"list length mismatch\" (fun () ->\n      Ptree.map2 { run = Nx.add }\n        (Ptree.list [ f32_leaf 1.0 ])\n        (Ptree.list [ f32_leaf 1.0; f32_leaf 2.0 ]));\n  raises_invalid_arg_contains \"dict size mismatch\" (fun () ->\n      Ptree.map2 { run = Nx.add }\n        (Ptree.dict [ (\"a\", f32_leaf 1.0) ])\n        (Ptree.dict [ (\"a\", f32_leaf 1.0); (\"b\", f32_leaf 2.0) ]));\n  raises_invalid_arg_contains \"not found in second dict\" (fun () ->\n      Ptree.map2 { run = Nx.add }\n        (Ptree.dict [ (\"a\", f32_leaf 1.0) ])\n        (Ptree.dict [ (\"b\", f32_leaf 1.0) ]));\n  raises_invalid_arg_contains \"dtype mismatch\" (fun () ->\n      Ptree.map2 { run = Nx.add }\n        (Ptree.tensor (Nx.ones Nx.float32 [| 1 |]))\n        (Ptree.tensor (Nx.ones Nx.int32 [| 1 |])))\n\nlet test_iter_and_fold_order () =\n  let tree =\n    Ptree.dict\n      [\n        (\"a\", f32_leaf 1.0);\n        (\"b\", Ptree.list [ f32_leaf 2.0; Ptree.dict [ (\"c\", f32_leaf 3.0) ] ]);\n        (\"d\", f32_leaf 4.0);\n      ]\n  in\n  let iter_values = collect_f32_values tree in\n  equal ~msg:\"iter order\" (list (float 1e-6)) [ 1.0; 2.0; 3.0; 4.0 ] iter_values;\n  let fold_values =\n    Ptree.fold\n      (fun acc p ->\n        let v = f32_value_of_tensor p in\n        v :: acc)\n      [] tree\n    |> List.rev\n  in\n  equal ~msg:\"fold order\" (list (float 1e-6)) [ 1.0; 2.0; 3.0; 4.0 ] fold_values\n\nlet test_flatten_and_rebuild () =\n  let tree =\n    Ptree.dict\n      [ (\"a\", f32_leaf 1.0); (\"b\", Ptree.list [ f32_leaf 
2.0; f32_leaf 3.0 ]) ]\n  in\n  let leaves, rebuild = Ptree.flatten tree in\n  equal ~msg:\"flatten length\" int 3 (List.length leaves);\n  equal ~msg:\"flatten order\"\n    (list (float 1e-6))\n    [ 1.0; 2.0; 3.0 ]\n    (List.map f32_value_of_tensor leaves);\n  let leaves_plus_10 =\n    List.map\n      (fun (Ptree.P t) ->\n        Ptree.P\n          (Nx.add t\n             (Nx.scalar (Nx.dtype t) (Nx_core.Dtype.of_float (Nx.dtype t) 10.0))))\n      leaves\n  in\n  let rebuilt = rebuild leaves_plus_10 in\n  equal ~msg:\"rebuild values\"\n    (list (float 1e-6))\n    [ 11.0; 12.0; 13.0 ]\n    (collect_f32_values rebuilt);\n  let first_leaf =\n    match leaves with\n    | first :: _ -> first\n    | [] -> fail \"flatten returned no leaves\"\n  in\n  raises_invalid_arg_contains \"not enough tensors\" (fun () ->\n      rebuild [ first_leaf ]);\n  raises_invalid_arg_contains \"too many tensors\" (fun () ->\n      rebuild (leaves @ [ first_leaf ]))\n\nlet test_flatten_with_paths () =\n  let root = f32_leaf 3.0 in\n  equal ~msg:\"tensor root path\" (list string) [ \"\" ]\n    (List.map fst (Ptree.flatten_with_paths root));\n  let tree =\n    Ptree.dict\n      [\n        (\"w\", f32_leaf 1.0);\n        ( \"layers\",\n          Ptree.list [ f32_leaf 2.0; Ptree.dict [ (\"b\", f32_leaf 3.0) ] ] );\n      ]\n  in\n  equal ~msg:\"nested paths\" (list string)\n    [ \"w\"; \"layers.0\"; \"layers.1.b\" ]\n    (List.map fst (Ptree.flatten_with_paths tree))\n\nlet test_zeros_like_and_count_parameters () =\n  let tree =\n    Ptree.dict\n      [\n        (\"w\", Ptree.tensor (Nx.ones Nx.float32 [| 2; 3 |]));\n        (\"b\", Ptree.tensor (Nx.full Nx.float32 [| 4 |] 5.0));\n      ]\n  in\n  equal ~msg:\"count parameters\" int 10 (Ptree.count_parameters tree);\n  let zeros = Ptree.zeros_like tree in\n  equal ~msg:\"count preserved\" int 10 (Ptree.count_parameters zeros);\n  Ptree.iter\n    (fun p ->\n      Ptree.with_tensor p\n        {\n          run =\n            (fun (type a) (type 
layout) (x : (a, layout) Nx.t) ->\n              let y = Nx.cast Nx.float32 x in\n              let flat = Nx.reshape [| -1 |] y in\n              let n = Nx.numel flat in\n              for i = 0 to n - 1 do\n                equal ~msg:\"zeros_like values\" (float 1e-6) 0.0\n                  (Nx.item [ i ] flat)\n              done);\n        })\n    zeros\n\nlet test_pp () =\n  let tree =\n    Ptree.dict\n      [\n        (\"w\", Ptree.tensor (Nx.ones Nx.float32 [| 2 |]));\n        (\"b\", Ptree.list [ f32_leaf 1.0 ]);\n      ]\n  in\n  let s = Format.asprintf \"%a\" Ptree.pp tree in\n  equal ~msg:\"pp has Dict\" bool true\n    (String.length s >= String.length \"Dict\" && String.sub s 0 4 = \"Dict\");\n  equal ~msg:\"pp has key\" bool true (string_contains s \"w\");\n  equal ~msg:\"pp non-empty\" bool true (String.length s > 0)\n\nlet () =\n  run \"Kaun.Ptree\"\n    [\n      group \"constructors\"\n        [\n          test \"dict key validation\" test_dict_key_validation;\n          test \"leaf access\" test_leaf_access;\n        ];\n      group \"tensor\" [ test \"Tensor module\" test_tensor_module ];\n      group \"containers\"\n        [\n          test \"dict access\" test_dict_access;\n          test \"list access\" test_list_access;\n        ];\n      group \"functional\"\n        [\n          test \"map\" test_map;\n          test \"map2 success and order\" test_map2_success_and_order;\n          test \"map2 errors\" test_map2_errors;\n          test \"iter/fold traversal order\" test_iter_and_fold_order;\n        ];\n      group \"flatten\"\n        [\n          test \"flatten/rebuild\" test_flatten_and_rebuild;\n          test \"flatten_with_paths\" test_flatten_with_paths;\n        ];\n      group \"utilities\"\n        [\n          test \"zeros_like/count_parameters\"\n            test_zeros_like_and_count_parameters;\n          test \"pp\" test_pp;\n        ];\n    ]\n"
  },
  {
    "path": "packages/kaun/test/test_train.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Windtrap\nmodule Data = Kaun.Data\nmodule Layer = Kaun.Layer\nmodule Train = Kaun.Train\nmodule Loss = Kaun.Loss\n\nlet test_make_init () =\n  Nx.Rng.run ~seed:42 @@ fun () ->\n  let model =\n    Layer.sequential\n      [\n        Layer.linear ~in_features:2 ~out_features:4 ();\n        Layer.tanh ();\n        Layer.linear ~in_features:4 ~out_features:1 ();\n      ]\n  in\n  let optimizer = Vega.adam (Vega.Schedule.constant 0.01) in\n  let trainer = Train.make ~model ~optimizer in\n  let st = Train.init trainer ~dtype:Nx.float32 in\n  let vars = Train.vars st in\n  let param_count = Kaun.Ptree.count_parameters (Layer.params vars) in\n  equal ~msg:\"has parameters\" bool true (param_count > 0)\n\nlet test_step () =\n  Nx.Rng.run ~seed:42 @@ fun () ->\n  let model =\n    Layer.sequential\n      [\n        Layer.linear ~in_features:2 ~out_features:4 ();\n        Layer.tanh ();\n        Layer.linear ~in_features:4 ~out_features:1 ();\n      ]\n  in\n  let optimizer = Vega.adam (Vega.Schedule.constant 0.01) in\n  let trainer = Train.make ~model ~optimizer in\n  let st = Train.init trainer ~dtype:Nx.float32 in\n  let x =\n    Nx.create Nx.float32 [| 4; 2 |] [| 0.; 0.; 0.; 1.; 1.; 0.; 1.; 1. |]\n  in\n  let y = Nx.create Nx.float32 [| 4; 1 |] [| 0.; 1.; 1.; 0. 
|] in\n  let loss_val, st' =\n    Train.step trainer st ~training:true\n      ~loss:(fun pred -> Loss.binary_cross_entropy pred y)\n      x\n  in\n  let loss_f = Nx.item [] loss_val in\n  equal ~msg:\"loss is finite\" bool true (Float.is_finite loss_f);\n  let vars0 = Train.vars st in\n  let vars1 = Train.vars st' in\n  equal ~msg:\"params changed\" bool false\n    (Layer.params vars0 == Layer.params vars1)\n\nlet test_fit () =\n  Nx.Rng.run ~seed:42 @@ fun () ->\n  let model =\n    Layer.sequential\n      [\n        Layer.linear ~in_features:2 ~out_features:4 ();\n        Layer.tanh ();\n        Layer.linear ~in_features:4 ~out_features:1 ();\n      ]\n  in\n  let optimizer = Vega.adam (Vega.Schedule.constant 0.05) in\n  let trainer = Train.make ~model ~optimizer in\n  let st = Train.init trainer ~dtype:Nx.float32 in\n  let x =\n    Nx.create Nx.float32 [| 4; 2 |] [| 0.; 0.; 0.; 1.; 1.; 0.; 1.; 1. |]\n  in\n  let y = Nx.create Nx.float32 [| 4; 1 |] [| 0.; 1.; 1.; 0. |] in\n  let st' =\n    Train.fit trainer st\n      (Data.repeat 30 (x, fun pred -> Loss.binary_cross_entropy pred y))\n  in\n  let pred = Train.predict trainer st' x |> Nx.sigmoid in\n  let p0 = Nx.item [ 0; 0 ] pred in\n  let p1 = Nx.item [ 1; 0 ] pred in\n  let p2 = Nx.item [ 2; 0 ] pred in\n  let p3 = Nx.item [ 3; 0 ] pred in\n  equal ~msg:\"[0,0] -> ~0\" bool true (p0 < 0.4);\n  equal ~msg:\"[0,1] -> ~1\" bool true (p1 > 0.6);\n  equal ~msg:\"[1,0] -> ~1\" bool true (p2 > 0.6);\n  equal ~msg:\"[1,1] -> ~0\" bool true (p3 < 0.4)\n\nlet test_predict () =\n  Nx.Rng.run ~seed:42 @@ fun () ->\n  let model =\n    Layer.sequential\n      [ Layer.linear ~in_features:3 ~out_features:2 (); Layer.relu () ]\n  in\n  let optimizer = Vega.sgd (Vega.Schedule.constant 0.01) in\n  let trainer = Train.make ~model ~optimizer in\n  let st = Train.init trainer ~dtype:Nx.float32 in\n  let x = Nx.ones Nx.float32 [| 2; 3 |] in\n  let y = Train.predict trainer st x in\n  equal ~msg:\"predict shape\" (list int) [ 2; 2 ] 
(Array.to_list (Nx.shape y))\n\nlet test_fit_with_reporting () =\n  Nx.Rng.run ~seed:42 @@ fun () ->\n  let model =\n    Layer.sequential [ Layer.linear ~in_features:2 ~out_features:1 () ]\n  in\n  let optimizer = Vega.sgd (Vega.Schedule.constant 0.01) in\n  let trainer = Train.make ~model ~optimizer in\n  let st = Train.init trainer ~dtype:Nx.float32 in\n  let x = Nx.create Nx.float32 [| 2; 2 |] [| 1.; 2.; 3.; 4. |] in\n  let y = Nx.create Nx.float32 [| 2; 1 |] [| 1.; 0. |] in\n  let report_count = ref 0 in\n  let _st' =\n    Train.fit trainer st\n      ~report:(fun ~step ~loss:_ _st ->\n        if step mod 3 = 0 then incr report_count)\n      (Data.repeat 10 (x, fun pred -> Loss.binary_cross_entropy pred y))\n  in\n  equal ~msg:\"report called 3 times (steps 3,6,9)\" int 3 !report_count\n\nlet test_fit_early_stop () =\n  Nx.Rng.run ~seed:42 @@ fun () ->\n  let model =\n    Layer.sequential [ Layer.linear ~in_features:2 ~out_features:1 () ]\n  in\n  let optimizer = Vega.sgd (Vega.Schedule.constant 0.01) in\n  let trainer = Train.make ~model ~optimizer in\n  let st = Train.init trainer ~dtype:Nx.float32 in\n  let x = Nx.create Nx.float32 [| 2; 2 |] [| 1.; 2.; 3.; 4. |] in\n  let y = Nx.create Nx.float32 [| 2; 1 |] [| 1.; 0. 
|] in\n  let last_step = ref 0 in\n  let _st' =\n    Train.fit trainer st\n      ~report:(fun ~step ~loss:_ _st ->\n        last_step := step;\n        if step >= 15 then raise_notrace Train.Early_stop)\n      (Data.repeat 100 (x, fun pred -> Loss.binary_cross_entropy pred y))\n  in\n  equal ~msg:\"stopped at step 15\" int 15 !last_step\n\nlet test_batch_norm_state_threading () =\n  Nx.Rng.run ~seed:42 @@ fun () ->\n  let model =\n    Layer.sequential\n      [\n        Layer.linear ~in_features:2 ~out_features:4 ();\n        Layer.batch_norm ~num_features:4 ();\n        Layer.relu ();\n        Layer.linear ~in_features:4 ~out_features:1 ();\n      ]\n  in\n  let optimizer = Vega.adam (Vega.Schedule.constant 0.01) in\n  let trainer = Train.make ~model ~optimizer in\n  let st0 = Train.init trainer ~dtype:Nx.float32 in\n  let x =\n    Nx.create Nx.float32 [| 4; 2 |] [| 0.; 0.; 0.; 1.; 1.; 0.; 1.; 1. |]\n  in\n  let y = Nx.create Nx.float32 [| 4; 1 |] [| 0.; 1.; 1.; 0. |] in\n  let _, st1 =\n    Train.step trainer st0 ~training:true\n      ~loss:(fun pred -> Loss.binary_cross_entropy pred y)\n      x\n  in\n  let state0 = Layer.state (Train.vars st0) in\n  let state1 = Layer.state (Train.vars st1) in\n  equal ~msg:\"batch_norm state changed after step\" bool false (state0 == state1)\n\nlet () =\n  run \"Kaun.Train\"\n    [\n      group \"make/init\" [ test \"make and init\" test_make_init ];\n      group \"step\" [ test \"single step\" test_step ];\n      group \"fit\"\n        [\n          test \"xor convergence\" test_fit;\n          test \"reporting\" test_fit_with_reporting;\n          test \"early stop\" test_fit_early_stop;\n        ];\n      group \"predict\" [ test \"shapes\" test_predict ];\n      group \"stateful\"\n        [ test \"batch_norm state threading\" test_batch_norm_state_threading ];\n    ]\n"
  },
  {
    "path": "packages/munin/README.md",
    "content": "# Munin\n\nLocal experiment tracking for [Raven](https://github.com/raven-ml/raven).\n\nTrack metrics, save artifacts, and compare runs on the local filesystem.\nComes with a terminal dashboard for live monitoring. Data is plain JSON,\nreadable with `jq`. No server, no accounts.\n\n## Quick Start\n\n```ocaml\nlet session =\n  Munin.Session.start ~experiment:\"mnist\"\n    ~params:[ (\"lr\", `Float 0.001); (\"epochs\", `Int 10) ]\n    ()\nin\nfor step = 1 to 1000 do\n  let loss = train_step () in\n  Munin.Session.log_metric session ~step \"train/loss\" loss\ndone;\nMunin.Session.finish session ()\n```\n\n```sh\nmunin watch          # live terminal dashboard\nmunin compare a b c  # side-by-side params + summary\n```\n\n## Features\n\n- **Scalar metrics** with `define_metric` for auto-computed summaries (min, max, mean, last) and custom x-axes\n- **Media logging** -- images, tables, and files at specific steps\n- **Versioned artifacts** with content-addressed storage, aliases, and lineage tracking\n- **Terminal dashboard** with live metric charts, system resource panels, and EMA smoothing\n- **CLI** -- `runs`, `show`, `compare`, `metrics`, `watch`, `artifacts`, `delete`, `gc`\n- **System monitoring** -- opt-in CPU and memory tracking via background thread\n- **Run grouping** for hyperparameter sweeps, parent/child for nested runs\n- **Provenance** -- git commit, command line, hostname, environment captured automatically\n- **Plain JSON storage** -- append-only JSONL event logs, `jq`-friendly\n\n## Libraries\n\n| Library | Description |\n|---------|-------------|\n| `munin` | Core tracking: Session, Run, Store, Artifact |\n| `munin.tui` | Terminal dashboard (`munin watch`) |\n| `munin.sys` | Background system monitoring (CPU, memory) |\n\n## Examples\n\n- **01-basic** -- Minimal run with scalar metrics\n- **02-metrics** -- Metric definitions, auto-summaries, epoch tracking\n- **03-artifacts** -- Versioned checkpoints with aliases and lineage\n- 
**04-media** -- Logging images and tables\n- **05-parameter-sweep** -- Run grouping for hyperparameter search\n- **06-inspect** -- Reading runs programmatically\n- **07-system-monitor** -- Background CPU and memory tracking\n\n## Contributing\n\nSee the [Raven monorepo README](../../README.md) for guidelines.\n\n## License\n\nISC License. See [LICENSE](../../LICENSE) for details.\n"
  },
  {
    "path": "packages/munin/bin/dune",
    "content": "(executable\n (name main)\n (public_name munin)\n (package munin)\n (libraries munin munin.tui cmdliner jsont jsont.bytesrw))\n"
  },
  {
    "path": "packages/munin/bin/main.ml",
    "content": "let pp_kv pairs =\n  List.iter\n    (fun (key, value) ->\n      Printf.printf \"  %s: %s\\n\" key (Format.asprintf \"%a\" Value.pp value))\n    pairs\n\nlet pp_string_list items = Printf.printf \"  %s\\n\" (String.concat \", \" items)\n\nlet string_of_status = function\n  | `running -> \"running\"\n  | `finished -> \"finished\"\n  | `failed -> \"failed\"\n  | `killed -> \"killed\"\n\nlet string_of_kind = function\n  | `dataset -> \"dataset\"\n  | `model -> \"model\"\n  | `checkpoint -> \"checkpoint\"\n  | `file -> \"file\"\n  | `dir -> \"dir\"\n  | `other -> \"other\"\n\nlet string_of_payload = function `file -> \"file\" | `dir -> \"dir\"\n\nlet value_to_float = function\n  | `Float f -> Some f\n  | `Int n -> Some (Float.of_int n)\n  | `String s -> Float.of_string_opt s\n  | `Bool _ -> None\n\nlet sorted_unique_keys runs f =\n  let seen = Hashtbl.create 16 in\n  List.iter\n    (fun run -> List.iter (fun (k, _) -> Hashtbl.replace seen k ()) (f run))\n    runs;\n  Hashtbl.to_seq_keys seen |> List.of_seq |> List.sort String.compare\n\nlet run_root root f =\n  let store = Store.open_ ?root () in\n  f store\n\nlet runs_cmd =\n  let doc = \"List tracked runs\" in\n  let experiment =\n    let doc = \"Limit the listing to a single experiment.\" in\n    Cmdliner.Arg.(\n      value & opt (some string) None & info [ \"experiment\" ] ~docv:\"NAME\" ~doc)\n  in\n  Cmdliner.Cmd.v\n    (Cmdliner.Cmd.info \"runs\" ~doc)\n    Cmdliner.Term.(\n      const (fun root experiment ->\n          run_root root (fun store ->\n              Store.list_runs store ?experiment ()\n              |> List.iter (fun run ->\n                  Printf.printf \"%s\\t%s\\t%s\\t%s\\t%s\\t%s\\n\" (Run.id run)\n                    (Run.experiment_name run)\n                    (string_of_status (Run.status run))\n                    (Option.value (Run.parent_id run) ~default:\"-\")\n                    (Option.value (Run.name run) ~default:\"-\")\n                    (Option.value 
(Run.provenance run).git_commit ~default:\"-\"))))\n      $ Cmdliner.Arg.(\n          value & opt (some string) None & info [ \"root\" ] ~docv:\"DIR\")\n      $ experiment)\n\nlet show_cmd =\n  let doc = \"Show one run\" in\n  let run_id =\n    Cmdliner.Arg.(required & pos 0 (some string) None & info [] ~docv:\"RUN_ID\")\n  in\n  Cmdliner.Cmd.v\n    (Cmdliner.Cmd.info \"show\" ~doc)\n    Cmdliner.Term.(\n      const (fun root run_id ->\n          run_root root (fun store ->\n              match Store.find_run store run_id with\n              | None ->\n                  Printf.eprintf \"munin: run not found: %s\\n\" run_id;\n                  exit 1\n              | Some run ->\n                  Printf.printf \"id: %s\\n\" (Run.id run);\n                  Printf.printf \"experiment: %s\\n\" (Run.experiment_name run);\n                  Printf.printf \"name: %s\\n\"\n                    (Option.value (Run.name run) ~default:\"-\");\n                  Printf.printf \"parent: %s\\n\"\n                    (Option.value (Run.parent_id run) ~default:\"-\");\n                  Printf.printf \"status: %s\\n\"\n                    (string_of_status (Run.status run));\n                  Printf.printf \"started_at: %.0f\\n\" (Run.started_at run);\n                  Option.iter\n                    (Printf.printf \"ended_at: %.0f\\n\")\n                    (Run.ended_at run);\n                  Printf.printf \"resumable: %b\\n\" (Run.resumable run);\n                  Printf.printf \"notes: %s\\n\"\n                    (Option.value (Run.notes run) ~default:\"-\");\n                  let prov = Run.provenance run in\n                  Printf.printf \"command: %s\\n\" (String.concat \" \" prov.command);\n                  Printf.printf \"cwd: %s\\n\" prov.cwd;\n                  Printf.printf \"hostname: %s\\n\"\n                    (Option.value prov.hostname ~default:\"-\");\n                  Printf.printf \"pid: %d\\n\" prov.pid;\n                  Printf.printf 
\"git_commit: %s\\n\"\n                    (Option.value prov.git_commit ~default:\"-\");\n                  Printf.printf \"git_dirty: %s\\n\"\n                    (match prov.git_dirty with\n                    | None -> \"-\"\n                    | Some true -> \"true\"\n                    | Some false -> \"false\");\n                  Printf.printf \"env:\\n\";\n                  List.iter\n                    (fun (key, value) -> Printf.printf \"  %s=%s\\n\" key value)\n                    prov.env;\n                  Printf.printf \"tags:\\n\";\n                  List.iter (Printf.printf \"  %s\\n\") (Run.tags run);\n                  Printf.printf \"params:\\n\";\n                  pp_kv (Run.params run);\n                  Printf.printf \"summary:\\n\";\n                  pp_kv (Run.summary run);\n                  Printf.printf \"metric_keys:\\n\";\n                  pp_string_list (Run.metric_keys run);\n                  Printf.printf \"latest_metrics:\\n\";\n                  List.iter\n                    (fun (key, (metric : Run.metric)) ->\n                      Printf.printf \"  %s: step=%d value=%g\\n\" key metric.step\n                        metric.value)\n                    (Run.latest_metrics run);\n                  Printf.printf \"children:\\n\";\n                  List.iter\n                    (fun child -> Printf.printf \"  %s\\n\" (Run.id child))\n                    (Run.children run);\n                  Printf.printf \"output_artifacts:\\n\";\n                  List.iter\n                    (fun artifact ->\n                      Printf.printf \"  %s %s aliases=[%s] consumers=[%s]\\n\"\n                        (Artifact.name artifact)\n                        (Artifact.version artifact)\n                        (String.concat \",\" (Artifact.aliases artifact))\n                        (String.concat \",\" (Artifact.consumer_run_ids artifact)))\n                    (Run.output_artifacts run);\n                  Printf.printf 
\"input_artifacts:\\n\";\n                  List.iter\n                    (fun artifact ->\n                      Printf.printf \"  %s %s producer=%s\\n\"\n                        (Artifact.name artifact)\n                        (Artifact.version artifact)\n                        (Option.value\n                           (Artifact.producer_run_id artifact)\n                           ~default:\"-\"))\n                    (Run.input_artifacts run)))\n      $ Cmdliner.Arg.(\n          value & opt (some string) None & info [ \"root\" ] ~docv:\"DIR\")\n      $ run_id)\n\nlet artifacts_cmd =\n  let doc = \"List stored artifacts\" in\n  let name =\n    let doc = \"Limit the listing to a single artifact name.\" in\n    Cmdliner.Arg.(\n      value & opt (some string) None & info [ \"name\" ] ~docv:\"NAME\" ~doc)\n  in\n  Cmdliner.Cmd.v\n    (Cmdliner.Cmd.info \"artifacts\" ~doc)\n    Cmdliner.Term.(\n      const (fun root name ->\n          run_root root (fun store ->\n              Store.list_artifacts store ?name ()\n              |> List.iter (fun artifact ->\n                  Printf.printf \"%s\\t%s\\t%s\\t%s\\t%d\\t%s\\t%s\\n\"\n                    (Artifact.name artifact)\n                    (Artifact.version artifact)\n                    (string_of_kind (Artifact.kind artifact))\n                    (string_of_payload (Artifact.payload artifact))\n                    (Artifact.size_bytes artifact)\n                    (Option.value\n                       (Artifact.producer_run_id artifact)\n                       ~default:\"-\")\n                    (String.concat \",\" (Artifact.consumer_run_ids artifact)))))\n      $ Cmdliner.Arg.(\n          value & opt (some string) None & info [ \"root\" ] ~docv:\"DIR\")\n      $ name)\n\nlet watch_cmd =\n  let doc = \"Launch the live experiment dashboard\" in\n  let experiment =\n    let doc = \"Limit to runs in a single experiment.\" in\n    Cmdliner.Arg.(\n      value & opt (some string) None & info [ \"experiment\" ] 
~docv:\"NAME\" ~doc)\n  in\n  let runs =\n    Cmdliner.Arg.(value & pos_all string [] & info [] ~docv:\"RUN_ID\")\n  in\n  Cmdliner.Cmd.v\n    (Cmdliner.Cmd.info \"watch\" ~doc)\n    Cmdliner.Term.(\n      const (fun root experiment runs ->\n          Munin_tui.run ?root ?experiment ~runs ())\n      $ Cmdliner.Arg.(\n          value & opt (some string) None & info [ \"root\" ] ~docv:\"DIR\")\n      $ experiment $ runs)\n\nlet compare_cmd =\n  let doc = \"Compare runs side by side\" in\n  let root =\n    Cmdliner.Arg.(value & opt (some string) None & info [ \"root\" ] ~docv:\"DIR\")\n  in\n  let run_ids =\n    Cmdliner.Arg.(non_empty & pos_all string [] & info [] ~docv:\"RUN_ID\")\n  in\n  Cmdliner.Cmd.v\n    (Cmdliner.Cmd.info \"compare\" ~doc)\n    Cmdliner.Term.(\n      const (fun root run_ids ->\n          run_root root (fun store ->\n              let runs =\n                List.filter_map (fun id -> Store.find_run store id) run_ids\n              in\n              if runs = [] then (\n                Printf.eprintf \"munin: no runs found\\n\";\n                exit 1);\n              let run_label run =\n                Option.value (Run.name run) ~default:(Run.id run)\n              in\n              (* Header *)\n              Printf.printf \"key\";\n              List.iter (fun run -> Printf.printf \"\\t%s\" (run_label run)) runs;\n              Printf.printf \"\\n\";\n              (* Params *)\n              let param_keys = sorted_unique_keys runs Run.params in\n              List.iter\n                (fun key ->\n                  Printf.printf \"%s\" key;\n                  List.iter\n                    (fun run ->\n                      let v =\n                        match List.assoc_opt key (Run.params run) with\n                        | Some v -> Format.asprintf \"%a\" Value.pp v\n                        | None -> \"-\"\n                      in\n                      Printf.printf \"\\t%s\" v)\n                    runs;\n                  
Printf.printf \"\\n\")\n                param_keys;\n              (* Summaries *)\n              let summary_keys = sorted_unique_keys runs Run.summary in\n              (* Collect goals from metric_defs *)\n              let goals = Hashtbl.create 8 in\n              List.iter\n                (fun run ->\n                  List.iter\n                    (fun (key, (def : Run.metric_def)) ->\n                      match def.goal with\n                      | Some g -> Hashtbl.replace goals key g\n                      | None -> ())\n                    (Run.metric_defs run))\n                runs;\n              List.iter\n                (fun key ->\n                  Printf.printf \"%s\" key;\n                  let values =\n                    List.map\n                      (fun run ->\n                        Run.find_summary run key\n                        |> Fun.flip Option.bind value_to_float)\n                      runs\n                  in\n                  (* Find best index *)\n                  let best_idx =\n                    match Hashtbl.find_opt goals key with\n                    | None -> None\n                    | Some goal ->\n                        let compare =\n                          match goal with\n                          | `Minimize -> fun a b -> Float.compare a b\n                          | `Maximize -> fun a b -> Float.compare b a\n                        in\n                        let best = ref None in\n                        List.iteri\n                          (fun i v ->\n                            match (v, !best) with\n                            | Some v, None -> best := Some (i, v)\n                            | Some v, Some (_, bv) ->\n                                if compare v bv < 0 then best := Some (i, v)\n                            | None, _ -> ())\n                          values;\n                        Option.map fst !best\n                  in\n                  List.iteri\n                    
(fun i _ ->\n                      let s =\n                        match List.nth values i with\n                        | Some v ->\n                            let s = Printf.sprintf \"%g\" v in\n                            if Some i = best_idx then s ^ \"*\" else s\n                        | None -> \"-\"\n                      in\n                      Printf.printf \"\\t%s\" s)\n                    runs;\n                  Printf.printf \"\\n\")\n                summary_keys))\n      $ root $ run_ids)\n\nlet metrics_cmd =\n  let doc = \"Show metric history\" in\n  let root =\n    Cmdliner.Arg.(value & opt (some string) None & info [ \"root\" ] ~docv:\"DIR\")\n  in\n  let run_id =\n    Cmdliner.Arg.(required & pos 0 (some string) None & info [] ~docv:\"RUN_ID\")\n  in\n  let key =\n    let doc = \"Metric key to dump history for.\" in\n    Cmdliner.Arg.(\n      value & opt (some string) None & info [ \"key\" ] ~docv:\"KEY\" ~doc)\n  in\n  let format =\n    let doc = \"Output format: tsv (default), csv, or json.\" in\n    Cmdliner.Arg.(\n      value\n      & opt (enum [ (\"tsv\", `Tsv); (\"csv\", `Csv); (\"json\", `Json) ]) `Tsv\n      & info [ \"format\" ] ~docv:\"FORMAT\" ~doc)\n  in\n  Cmdliner.Cmd.v\n    (Cmdliner.Cmd.info \"metrics\" ~doc)\n    Cmdliner.Term.(\n      const (fun root run_id key format ->\n          run_root root (fun store ->\n              match Store.find_run store run_id with\n              | None ->\n                  Printf.eprintf \"munin: run not found: %s\\n\" run_id;\n                  exit 1\n              | Some run -> (\n                  match key with\n                  | None ->\n                      (* Listing mode *)\n                      Printf.printf \"key\\tlatest_value\\tlatest_step\\tcount\\n\";\n                      List.iter\n                        (fun (key, (m : Run.metric)) ->\n                          let count =\n                            List.length (Run.metric_history run key)\n                          
in\n                          Printf.printf \"%s\\t%g\\t%d\\t%d\\n\" key m.value m.step\n                            count)\n                        (Run.latest_metrics run)\n                  | Some key -> (\n                      let history = Run.metric_history run key in\n                      match format with\n                      | `Tsv ->\n                          Printf.printf \"step\\ttimestamp\\tvalue\\n\";\n                          List.iter\n                            (fun (m : Run.metric) ->\n                              Printf.printf \"%d\\t%.6f\\t%g\\n\" m.step m.timestamp\n                                m.value)\n                            history\n                      | `Csv ->\n                          Printf.printf \"step,timestamp,value\\n\";\n                          List.iter\n                            (fun (m : Run.metric) ->\n                              Printf.printf \"%d,%.6f,%g\\n\" m.step m.timestamp\n                                m.value)\n                            history\n                      | `Json ->\n                          Printf.printf \"[\";\n                          List.iteri\n                            (fun i (m : Run.metric) ->\n                              if i > 0 then Printf.printf \",\";\n                              Printf.printf\n                                \"{\\\"step\\\":%d,\\\"timestamp\\\":%.6f,\\\"value\\\":%g}\"\n                                m.step m.timestamp m.value)\n                            history;\n                          Printf.printf \"]\\n\"))))\n      $ root $ run_id $ key $ format)\n\nlet () =\n  exit\n    (Cmdliner.Cmd.eval\n       (Cmdliner.Cmd.group\n          (Cmdliner.Cmd.info \"munin\" ~doc:\"Local experiment tracking for Raven\")\n          [\n            runs_cmd;\n            show_cmd;\n            artifacts_cmd;\n            watch_cmd;\n            compare_cmd;\n            metrics_cmd;\n          ]))\n"
  },
  {
    "path": "packages/munin/doc/01-getting-started.md",
    "content": "# Getting Started\n\nThis guide covers installation, key concepts, and a complete first\nexample that tracks a run, inspects it via the CLI, and compares two\nruns.\n\n## Installation\n\n<!-- $MDX skip -->\n```bash\nopam install munin\n```\n\nOr build from source:\n\n<!-- $MDX skip -->\n```bash\ngit clone https://github.com/raven-ml/raven\ncd raven && dune build munin\n```\n\n## Key Concepts\n\n**Session.** A session is the write handle for a single run. All\nmutations go through append-only events -- no direct state editing.\n`Session.start` opens a session, `Session.finish` closes it.\n`Session.with_run` wraps both and handles exceptions.\n\n**Run.** A run is the persisted, read-only view of a tracked\nexperiment. It materializes its state by replaying the event log.\n`Run.params`, `Run.summary`, `Run.metric_history`, and other\naccessors expose the data.\n\n**Store.** A store is the root directory containing all experiments\nand artifacts. `Store.open_` creates or opens it.\n`Store.list_runs`, `Store.find_run`, and `Store.latest_run`\ndiscover runs across experiments.\n\n**Artifact.** An artifact is a versioned, content-addressed file or\ndirectory. Versions are auto-incremented (v1, v2, ...). Aliases like\n`\"latest\"` or `\"best\"` resolve to a specific version.\n`Session.log_artifact` produces one, `Session.use_artifact` records\nconsumption.\n\n**Value.** Parameters, summaries, and metadata use a simple scalar\ntype: `` [`Bool of bool | `Int of int | `Float of float | `String of string] ``.\n\n## Example: First Tracked Run\n\nThis example starts a session, logs hyperparameters and metrics,\nsaves an artifact, then reads everything back.\n\n<!-- $MDX skip -->\n```ocaml\nopen Munin\n\nlet () =\n  let session =\n    Session.start ~experiment:\"demo\" ~name:\"baseline\"\n      ~params:[ (\"lr\", `Float 0.001); (\"hidden\", `Int 64) ]\n      ()\n  in\n  (* Log metrics at each step. 
*)\n  Session.define_metric session \"loss\" ~summary:`Min ~goal:`Minimize ();\n\n  for step = 1 to 50 do\n    let loss = 1.0 /. Float.of_int step in\n    let acc = 1.0 -. loss in\n    Session.log_metrics session ~step [ (\"loss\", loss); (\"accuracy\", acc) ]\n  done;\n\n  (* Write a summary value explicitly. *)\n  Session.set_summary session [ (\"note\", `String \"first run\") ];\n\n  Session.finish session ();\n  Printf.printf \"run: %s\\n\" (Run.id (Session.run session))\n```\n\nAfter running, inspect from the terminal:\n\n<!-- $MDX skip -->\n```sh\n# List all runs.\nmunin runs\n\n# Show full details for a run.\nmunin show <RUN_ID>\n\n# Dump metric history as TSV.\nmunin metrics <RUN_ID> --key loss\n\n# Export as JSON.\nmunin metrics <RUN_ID> --key loss --format json\n```\n\n## Example: Comparing Two Runs\n\nRun the same experiment with different hyperparameters, then compare.\n\n<!-- $MDX skip -->\n```ocaml\nopen Munin\n\nlet train ~name ~lr =\n  Session.with_run ~experiment:\"demo\" ~name\n    ~params:[ (\"lr\", `Float lr) ]\n  @@ fun session ->\n  Session.define_metric session \"loss\" ~summary:`Min ~goal:`Minimize ();\n  for step = 1 to 50 do\n    let loss = (1.0 /. Float.of_int step) *. (1.0 /. 
lr) in\n    Session.log_metric session ~step \"loss\" loss\n  done\n\nlet () =\n  train ~name:\"slow\" ~lr:0.01;\n  train ~name:\"fast\" ~lr:0.1\n```\n\nCompare them side by side:\n\n<!-- $MDX skip -->\n```sh\nmunin compare <RUN_ID_1> <RUN_ID_2>\n```\n\nThe compare command prints a table with parameters and summary values.\nWhen a metric has a `goal` declared, the best value is marked with `*`.\n\n## Provenance\n\nEvery run automatically captures:\n\n- The command line (`Sys.argv`)\n- Working directory\n- Hostname and PID\n- Git commit hash and dirty status\n\nPass `~capture_env:[\"CUDA_VISIBLE_DEVICES\"; \"OMP_NUM_THREADS\"]` to\n`Session.start` to also record specific environment variables.\n\n## Store Location\n\nBy default, runs are stored in `$XDG_DATA_HOME/raven/munin`. Override\nwith the `RAVEN_TRACKING_DIR` environment variable, or pass `~root`\nto `Session.start` and `Store.open_`.\n\n## Next Steps\n\n- [Tracking Metrics](../02-tracking/) -- scalars, metric definitions, media, Kaun integration\n- [Artifacts](../03-artifacts/) -- versioned files, aliases, lineage, deduplication\n"
  },
  {
    "path": "packages/munin/doc/02-tracking.md",
    "content": "# Tracking Metrics\n\nThis page covers scalar metric logging, metric definitions with\nsummaries and goals, media logging, and integration with Kaun's\ntraining loop.\n\n## Logging Scalars\n\n`Session.log_metric` records a single scalar at a given step.\n`Session.log_metrics` records several atomically with the same\ntimestamp.\n\n<!-- $MDX skip -->\n```ocaml\nopen Munin\n\nlet () =\n  Session.with_run ~experiment:\"tracking-demo\" @@ fun session ->\n  for step = 1 to 100 do\n    let loss = 1.0 /. Float.of_int step in\n    let acc = 1.0 -. loss in\n    Session.log_metrics session ~step [ (\"loss\", loss); (\"accuracy\", acc) ]\n  done\n```\n\nEach call appends to an event log. The `step` is your x-axis counter\n(typically the global training step). A wall-clock timestamp is added\nautomatically; pass `~timestamp` to override it.\n\nRead metrics back through the `Run` module:\n\n<!-- $MDX skip -->\n```ocaml\nlet run = Session.run session in\nRun.metric_keys run        (* [\"accuracy\"; \"loss\"] *)\nRun.latest_metrics run     (* latest value per key *)\nRun.metric_history run \"loss\"  (* full chronological history *)\n```\n\n## Defining Metrics\n\n`Session.define_metric` declares how a metric should be summarized,\ncompared, and plotted. Call it once per key, before or after logging\nvalues.\n\n<!-- $MDX skip -->\n```ocaml\nSession.define_metric session \"loss\"\n  ~summary:`Min\n  ~goal:`Minimize\n  ();\n\nSession.define_metric session \"accuracy\"\n  ~summary:`Max\n  ~goal:`Maximize\n  ();\n```\n\n### Summary Modes\n\nThe `~summary` parameter controls the auto-computed run summary value:\n\n| Mode   | Summary value |\n|--------|---------------|\n| `` `Min ``  | Minimum over all samples |\n| `` `Max ``  | Maximum over all samples |\n| `` `Mean `` | Arithmetic mean of all samples |\n| `` `Last `` | Most recent sample (default) |\n| `` `None `` | No auto-summary |\n\nWhen the run is loaded, the summary is computed from the full metric\nhistory. 
You do not need to compute it yourself.\n\n### Explicit Summaries\n\n`Session.set_summary` writes explicit summary values that always take\nprecedence over auto-computed ones:\n\n<!-- $MDX skip -->\n```ocaml\nSession.set_summary session\n  [ (\"best_loss\", `Float 0.023); (\"note\", `String \"converged early\") ]\n```\n\nUse this for values that are not simple aggregations of a metric\nhistory, or for non-float summaries.\n\n### Goal\n\nThe `~goal` parameter declares whether lower (`` `Minimize ``) or\nhigher (`` `Maximize ``) values are better. It is used by:\n\n- `munin compare` to mark the best value with `*`\n- `munin watch` TUI for \"best\" badges\n- `Run_monitor.best` to find the best observation\n\n### Step Metric\n\nThe `~step_metric` parameter specifies another metric as the x-axis:\n\n<!-- $MDX skip -->\n```ocaml\nSession.define_metric session \"val/accuracy\"\n  ~summary:`Max ~goal:`Maximize ~step_metric:\"epoch\" ();\n```\n\nThis tells renderers to plot `val/accuracy` against the `epoch`\nmetric instead of the raw step counter.\n\n## Epoch Tracking\n\nEpochs are not a special concept -- log them as a regular metric and\nreference them with `~step_metric`:\n\n<!-- $MDX skip -->\n```ocaml\nSession.define_metric session \"train/loss\"\n  ~summary:`Min ~goal:`Minimize ~step_metric:\"epoch\" ();\nSession.define_metric session \"val/accuracy\"\n  ~summary:`Max ~goal:`Maximize ~step_metric:\"epoch\" ();\n\nfor epoch = 1 to 10 do\n  let steps_per_epoch = 100 in\n  for batch = 1 to steps_per_epoch do\n    let step = ((epoch - 1) * steps_per_epoch) + batch in\n    let loss = 1.0 /. Float.of_int step in\n    Session.log_metrics session ~step\n      [ (\"train/loss\", loss); (\"epoch\", Float.of_int epoch) ]\n  done;\n  let step = epoch * steps_per_epoch in\n  Session.log_metric session ~step \"val/accuracy\" (Float.of_int epoch *. 
0.1)\ndone\n```\n\n## Media Logging\n\n### Images and Files\n\n`Session.log_media` copies a file into the run's `media/` directory\nand records it in the event log. The `~kind` is metadata for\nrenderers.\n\n<!-- $MDX skip -->\n```ocaml\n(* Log an image at a specific step. *)\nSession.log_media session ~step:100 ~key:\"viz/confusion\"\n  ~kind:`Image ~path:\"/tmp/confusion_matrix.png\";\n\n(* Log a text file. *)\nSession.log_media session ~step:1 ~key:\"config\"\n  ~kind:`File ~path:\"config.yaml\"\n```\n\nKeys may contain `/` separators to organize media into a hierarchy.\nThe file is stored at `<run_dir>/media/<key_path>_<step>.<ext>`.\n\nRead media back:\n\n<!-- $MDX skip -->\n```ocaml\nlet run = Session.run session in\nRun.media_keys run                    (* [\"config\"; \"viz/confusion\"] *)\nRun.media_history run \"viz/confusion\" (* list of media_entry records *)\n```\n\n### Structured Tables\n\n`Session.log_table` stores a table as a JSON file. Useful for\nconfusion matrices, per-class metrics, or data samples.\n\n<!-- $MDX skip -->\n```ocaml\nSession.log_table session ~step:1 ~key:\"results/per_class\"\n  ~columns:[ \"class\"; \"precision\"; \"recall\"; \"f1\" ]\n  ~rows:[\n    [ `String \"cat\";  `Float 0.92; `Float 0.88; `Float 0.90 ];\n    [ `String \"dog\";  `Float 0.89; `Float 0.93; `Float 0.91 ];\n    [ `String \"bird\"; `Float 0.95; `Float 0.91; `Float 0.93 ];\n  ]\n```\n\n## Integration with Kaun\n\nMunin has no compile-time dependency on Kaun. 
Integration happens\nthrough `Train.fit`'s `~report` callback:\n\n<!-- $MDX skip -->\n```ocaml\nopen Kaun\n\nlet () =\n  Nx.Rng.run ~seed:42 @@ fun () ->\n  let session =\n    Munin.Session.start ~experiment:\"mnist\" ~name:\"cnn-adam\"\n      ~params:[\n        (\"lr\", `Float 0.001);\n        (\"batch_size\", `Int 64);\n        (\"optimizer\", `String \"adam\");\n      ]\n      ()\n  in\n  Munin.Session.define_metric session \"train/loss\"\n    ~summary:`Min ~goal:`Minimize ();\n  Munin.Session.define_metric session \"val/accuracy\"\n    ~summary:`Max ~goal:`Maximize ();\n\n  let (x_train, y_train), (x_test, y_test) = Kaun_datasets.mnist () in\n  let trainer =\n    Train.make ~model ~optimizer:(Vega.adam (Vega.Schedule.constant 0.001))\n  in\n  let st = ref (Train.init trainer ~dtype:Nx.float32) in\n\n  for epoch = 1 to 3 do\n    let train_data =\n      Data.prepare ~shuffle:true ~batch_size:64 (x_train, y_train)\n      |> Data.map (fun (x, y) ->\n          (x, fun logits -> Loss.cross_entropy_sparse logits y))\n    in\n    st :=\n      Train.fit trainer !st\n        ~report:(fun ~step ~loss _st ->\n          Munin.Session.log_metrics session ~step\n            [ (\"train/loss\", loss); (\"epoch\", Float.of_int epoch) ])\n        train_data;\n\n    (* Evaluate and log validation accuracy. *)\n    let test_batches = Data.prepare ~batch_size:64 (x_test, y_test) in\n    let acc =\n      Metric.eval\n        (fun (x, y) ->\n          let logits = Train.predict trainer !st x in\n          Metric.accuracy logits y)\n        test_batches\n    in\n    Munin.Session.log_metric session ~step:(epoch * 937)\n      \"val/accuracy\" acc\n  done;\n\n  Munin.Session.finish session ()\n```\n\n## System Monitoring\n\n`Munin_sys.start` spawns a background thread that samples CPU and\nmemory usage every 15 seconds (configurable via `~interval`):\n\n<!-- $MDX skip -->\n```ocaml\nlet sysmon = Munin_sys.start session () in\n(* ... training ... 
*)\nMunin_sys.stop sysmon\n```\n\nLogged metrics: `sys/cpu_user`, `sys/cpu_system`, `sys/mem_used_pct`,\n`sys/mem_used_gb`, `sys/proc_cpu_pct`, `sys/proc_mem_mb`.\n\n## Next Steps\n\n- [Artifacts](../03-artifacts/) -- versioned files, aliases, lineage, deduplication\n"
  },
  {
    "path": "packages/munin/doc/03-artifacts.md",
    "content": "# Artifacts\n\nArtifacts are versioned, content-addressed files or directories with\ncross-run lineage tracking. Use them for datasets, model checkpoints,\nand any other outputs you want to version alongside your runs.\n\n## Logging an Artifact\n\n`Session.log_artifact` copies a file or directory into the blob store,\nassigns a version, and records it as an output of the current run.\n\n<!-- $MDX skip -->\n```ocaml\nopen Munin\n\nlet () =\n  Session.with_run ~experiment:\"pipeline\" ~name:\"prepare-data\"\n    ~tags:[ \"data\" ]\n  @@ fun session ->\n  (* ... produce a dataset file ... *)\n  let _artifact =\n    Session.log_artifact session\n      ~name:\"measurements\"\n      ~kind:`dataset\n      ~path:\"data/measurements.csv\"\n      ~metadata:[ (\"rows\", `Int 10000); (\"format\", `String \"csv\") ]\n      ~aliases:[ \"latest\" ]\n      ()\n  in\n  ()\n```\n\nParameters:\n\n- **`~name`** -- logical name for the artifact (e.g. `\"measurements\"`, `\"mnist-cnn\"`).\n- **`~kind`** -- one of `` `checkpoint ``, `` `model ``, `` `dataset ``, `` `file ``, `` `dir ``, `` `other ``.\n- **`~path`** -- path to the file or directory to store.\n- **`~metadata`** -- optional key-value pairs attached to the version.\n- **`~aliases`** -- optional alias list (e.g. `[\"latest\"; \"best\"]`).\n\n## Artifact Kinds\n\nThe `kind` field is a semantic label. It does not affect storage; all\nartifacts are stored the same way.\n\n| Kind | Use for |\n|------|---------|\n| `` `checkpoint `` | Training checkpoints (model + optimizer state) |\n| `` `model `` | Final model weights |\n| `` `dataset `` | Datasets and data splits |\n| `` `file `` | Single files (configs, logs, reports) |\n| `` `dir `` | Directory trees |\n| `` `other `` | Anything else |\n\n## Versioning\n\nEach call to `log_artifact` with the same `~name` creates a new\nversion: `v1`, `v2`, `v3`, and so on. Versions are immutable once\ncreated.\n\n<!-- $MDX skip -->\n```ocaml\n(* First call creates v1. 
*)\nlet v1 =\n  Session.log_artifact session ~name:\"model\" ~kind:`model\n    ~path:\"model_epoch1.safetensors\" ()\nin\n(* Second call creates v2. *)\nlet v2 =\n  Session.log_artifact session ~name:\"model\" ~kind:`model\n    ~path:\"model_epoch2.safetensors\" ()\nin\nPrintf.printf \"%s %s\\n\" (Artifact.version v1) (Artifact.version v2)\n(* prints: v1 v2 *)\n```\n\n## Aliases\n\nAliases are mutable pointers to a specific version. Common aliases:\n\n- `\"latest\"` -- the most recent version\n- `\"best\"` -- the best-performing version\n\nPass `~aliases` when logging to attach them. When a new version gets\nthe same alias, it moves from the old version to the new one.\n\nResolve an alias through the store:\n\n<!-- $MDX skip -->\n```ocaml\nlet store = Store.open_ () in\nmatch Store.find_artifact store ~name:\"model\" ~version:\"latest\" with\n| Some artifact ->\n    Printf.printf \"resolved to %s\\n\" (Artifact.version artifact)\n| None ->\n    Printf.printf \"not found\\n\"\n```\n\n`Store.find_artifact` accepts both explicit versions (`\"v2\"`) and\naliases (`\"latest\"`).\n\n## Content-Addressed Deduplication\n\nArtifact payloads are stored in a blob directory keyed by their\nSHA-256 digest. If two versions have identical content, only one copy\nis stored on disk.\n\n<!-- $MDX skip -->\n```ocaml\n(* These share the same blob if the file content is identical. 
*)\nlet a = Session.log_artifact session ~name:\"config\" ~kind:`file\n  ~path:\"config.yaml\" () in\nlet b = Session.log_artifact session ~name:\"config\" ~kind:`file\n  ~path:\"config.yaml\" () in\nassert (Artifact.digest a = Artifact.digest b)\n```\n\n`Store.gc` removes blobs that are no longer referenced by any artifact\nversion.\n\n## Lineage\n\n### Producer\n\nWhen you call `Session.log_artifact`, the current run is automatically\nrecorded as the producer:\n\n<!-- $MDX skip -->\n```ocaml\nlet artifact =\n  Session.log_artifact session ~name:\"features\" ~kind:`dataset\n    ~path:\"features.csv\" ()\nin\nArtifact.producer_run_id artifact  (* Some \"<run_id>\" *)\n```\n\n### Consumer\n\n`Session.use_artifact` records the current run as a consumer of an\nexisting artifact:\n\n<!-- $MDX skip -->\n```ocaml\n(* Run 2 consumes the artifact produced by Run 1. *)\nlet store = Store.open_ () in\nmatch Store.find_artifact store ~name:\"features\" ~version:\"latest\" with\n| Some artifact ->\n    Session.use_artifact session artifact;\n    let path = Artifact.path artifact in\n    Printf.printf \"loading from: %s\\n\" path\n| None ->\n    failwith \"artifact not found\"\n```\n\nAfter this, the lineage is recorded in both directions:\n\n<!-- $MDX skip -->\n```ocaml\nArtifact.producer_run_id artifact   (* run that created it *)\nArtifact.consumer_run_ids artifact  (* runs that consumed it *)\n\nRun.output_artifacts run1  (* artifacts produced by run1 *)\nRun.input_artifacts run2   (* artifacts consumed by run2 *)\n```\n\n## Loading Artifacts\n\n### From a Store\n\n`Store.find_artifact` resolves by name and version (or alias):\n\n<!-- $MDX skip -->\n```ocaml\nlet store = Store.open_ () in\nmatch Store.find_artifact store ~name:\"mnist-cnn\" ~version:\"best\" with\n| Some artifact ->\n    let path = Artifact.path artifact in\n    Printf.printf \"path: %s (%d bytes)\\n\" path (Artifact.size_bytes artifact)\n| None ->\n    Printf.printf \"not found\\n\"\n```\n\n### Listing 
Artifacts\n\n`Store.list_artifacts` supports filtering by name, kind, alias, and\nlineage:\n\n<!-- $MDX skip -->\n```ocaml\nlet store = Store.open_ () in\n\n(* All artifacts. *)\nlet all = Store.list_artifacts store () in\n\n(* Only checkpoints. *)\nlet checkpoints = Store.list_artifacts store ~kind:`checkpoint () in\n\n(* Only artifacts produced by a specific run. *)\nlet from_run = Store.list_artifacts store ~producer_run:\"<RUN_ID>\" () in\n\nPrintf.printf \"total: %d, checkpoints: %d, from run: %d\\n\"\n  (List.length all) (List.length checkpoints) (List.length from_run)\n```\n\n### From the CLI\n\n<!-- $MDX skip -->\n```sh\n# List all artifacts.\nmunin artifacts\n\n# Filter by name.\nmunin artifacts --name mnist-cnn\n```\n\n## Complete Example: Cross-Run Lineage\n\nA data-preparation run produces a dataset. A training run consumes it\nand produces a model checkpoint.\n\n<!-- $MDX skip -->\n```ocaml\nopen Munin\n\nlet write_file path text =\n  let oc = open_out path in\n  Fun.protect ~finally:(fun () -> close_out oc)\n    (fun () -> output_string oc text)\n\nlet () =\n  (* Run 1: produce a dataset. *)\n  Session.with_run ~experiment:\"pipeline\" ~name:\"prepare-data\"\n    ~tags:[ \"data\" ]\n  @@ fun session ->\n  write_file \"/tmp/data.csv\" \"x,y\\n1.0,2.0\\n3.0,4.0\\n\";\n  ignore\n    (Session.log_artifact session ~name:\"training-data\" ~kind:`dataset\n       ~path:\"/tmp/data.csv\"\n       ~metadata:[ (\"rows\", `Int 2) ]\n       ~aliases:[ \"latest\" ] ())\n\nlet () =\n  (* Run 2: consume the dataset, produce a model. *)\n  let store = Store.open_ () in\n  Session.with_run ~experiment:\"pipeline\" ~name:\"train\"\n    ~tags:[ \"training\" ]\n  @@ fun session ->\n  let dataset =\n    match Store.find_artifact store ~name:\"training-data\" ~version:\"latest\" with\n    | Some a -> a\n    | None -> failwith \"dataset not found\"\n  in\n  Session.use_artifact session dataset;\n  Printf.printf \"training on: %s\\n\" (Artifact.path dataset);\n\n  (* ... 
train model ... *)\n\n  write_file \"/tmp/model.bin\" \"model weights\";\n  ignore\n    (Session.log_artifact session ~name:\"my-model\" ~kind:`model\n       ~path:\"/tmp/model.bin\"\n       ~aliases:[ \"latest\" ] ())\n```\n\n## Garbage Collection\n\n`Store.gc` removes blobs not referenced by any artifact version:\n\n<!-- $MDX skip -->\n```ocaml\nlet store = Store.open_ () in\nlet removed = Store.gc store in\nPrintf.printf \"removed %d unreferenced blobs\\n\" removed\n```\n"
  },
  {
    "path": "packages/munin/doc/04-cli.md",
    "content": "# CLI Reference\n\nThe `munin` command-line tool inspects and manages the local tracking store.\nEvery subcommand accepts `--root DIR` to override the default store location\n(`$RAVEN_TRACKING_DIR` or `$XDG_DATA_HOME/raven/munin`).\n\n## munin runs\n\nList tracked runs. Output is tab-separated: ID, experiment, status, parent,\nname, git commit.\n\n<!-- $MDX skip -->\n```\nmunin runs\n```\n\n```\n20260317T143201_abc  mnist-sweep  finished  -        lr-0.001  a1b2c3d\n20260317T141502_def  mnist-sweep  finished  -        lr-0.01   a1b2c3d\n20260317T140003_ghi  cifar10      running   -        baseline  e4f5a6b\n```\n\nFilter by experiment:\n\n<!-- $MDX skip -->\n```\nmunin runs --experiment mnist-sweep\n```\n\n```\n20260317T143201_abc  mnist-sweep  finished  -  lr-0.001  a1b2c3d\n20260317T141502_def  mnist-sweep  finished  -  lr-0.01   a1b2c3d\n```\n\n## munin show\n\nDisplay full details for a single run.\n\n<!-- $MDX skip -->\n```\nmunin show 20260317T143201_abc\n```\n\n```\nid: 20260317T143201_abc\nexperiment: mnist-sweep\nname: lr-0.001\nparent: -\nstatus: finished\nstarted_at: 1742224321\nended_at: 1742225180\nresumable: false\nnotes: -\ncommand: ./train.exe --lr 0.001\ncwd: /home/user/project\nhostname: workstation\npid: 42519\ngit_commit: a1b2c3d\ngit_dirty: false\nenv:\n  CUDA_VISIBLE_DEVICES=0\ntags:\n  sweep\n  final\nparams:\n  lr: 0.001\n  batch_size: 64\n  epochs: 10\nsummary:\n  loss: 0.0312\n  accuracy: 0.991\nmetric_keys:\n  accuracy, loss, lr\nlatest_metrics:\n  accuracy: step=9380 value=0.991\n  loss: step=9380 value=0.0312\nchildren:\noutput_artifacts:\n  mnist-model v3 aliases=[latest] consumers=[]\ninput_artifacts:\n  mnist-data v1 producer=20260310T090000_xyz\n```\n\n## munin compare\n\nCompare two or more runs side by side. Prints a tab-separated table with\nparameters and summary values. 
When a metric has a declared goal (`Minimize` or\n`Maximize`), the best value is marked with `*`.\n\n<!-- $MDX skip -->\n```\nmunin compare 20260317T143201_abc 20260317T141502_def\n```\n\n```\nkey           lr-0.001   lr-0.01\nbatch_size    64         64\nepochs        10         10\nlr            0.001      0.01\naccuracy      0.991*     0.984\nloss          0.0312*    0.0587\n```\n\nWorks with any number of runs:\n\n<!-- $MDX skip -->\n```\nmunin compare abc def ghi\n```\n\n## munin metrics\n\nTwo modes: listing mode (no `--key`) and history mode (with `--key`).\n\n### Listing mode\n\nShows all metric keys with their latest value, latest step, and sample count.\n\n<!-- $MDX skip -->\n```\nmunin metrics 20260317T143201_abc\n```\n\n```\nkey              latest_value  latest_step  count\naccuracy         0.991         9380         9380\nloss             0.0312        9380         9380\nlr               0.001         9380         1\nsys/cpu_user     12.3          627          627\nsys/mem_used_gb  6.82          627          627\n```\n\n### History mode\n\nDump the full time series for a single key. Supports `--format tsv` (default),\n`csv`, and `json`.\n\n<!-- $MDX skip -->\n```\nmunin metrics 20260317T143201_abc --key loss\n```\n\n```\nstep\ttimestamp\tvalue\n1\t1742224322.123456\t2.3026\n2\t1742224322.234567\t1.8451\n3\t1742224322.345678\t1.2107\n...\n```\n\n<!-- $MDX skip -->\n```\nmunin metrics 20260317T143201_abc --key loss --format csv\n```\n\n```\nstep,timestamp,value\n1,1742224322.123456,2.3026\n2,1742224322.234567,1.8451\n...\n```\n\n<!-- $MDX skip -->\n```\nmunin metrics 20260317T143201_abc --key loss --format json\n```\n\n```json\n[{\"step\":1,\"timestamp\":1742224322.123456,\"value\":2.3026},{\"step\":2,\"timestamp\":1742224322.234567,\"value\":1.8451}]\n```\n\n## munin watch\n\nLaunch the terminal dashboard. 
See [Terminal Dashboard](05-dashboard.md) for\nfull documentation.\n\nAuto-detect the latest run:\n\n<!-- $MDX skip -->\n```\nmunin watch\n```\n\nOpen a specific run:\n\n<!-- $MDX skip -->\n```\nmunin watch 20260317T143201_abc\n```\n\nFilter by experiment (picks the latest run in that experiment):\n\n<!-- $MDX skip -->\n```\nmunin watch --experiment mnist-sweep\n```\n\n## munin artifacts\n\nList stored artifacts. Output is tab-separated: name, version, kind, payload\ntype, size in bytes, producer run, consumer runs.\n\n<!-- $MDX skip -->\n```\nmunin artifacts\n```\n\n```\nmnist-data   v1  dataset  dir   48000000  20260310T090000_xyz  20260317T143201_abc,20260317T141502_def\nmnist-model  v1  model    file  4521984   20260317T141502_def  -\nmnist-model  v2  model    file  4521984   20260317T143201_abc  -\nmnist-model  v3  model    file  4521984   20260317T143201_abc  -\n```\n\nFilter by name:\n\n<!-- $MDX skip -->\n```\nmunin artifacts --name mnist-model\n```\n\n```\nmnist-model  v1  model  file  4521984  20260317T141502_def  -\nmnist-model  v2  model  file  4521984  20260317T143201_abc  -\nmnist-model  v3  model  file  4521984  20260317T143201_abc  -\n```\n\n## munin delete\n\nDelete a run and its event log from the store. Does not remove shared blobs\n(use `munin gc` for that). Removes the experiment directory if no runs remain.\n\n<!-- $MDX skip -->\n```\nmunin delete 20260317T141502_def\n```\n\n```\nDelete run 20260317T141502_def (mnist-sweep / lr-0.01)? [y/N] y\nDeleted.\n```\n\nSkip the confirmation prompt with `--yes`:\n\n<!-- $MDX skip -->\n```\nmunin delete 20260317T141502_def --yes\n```\n\n## munin gc\n\nGarbage-collect unreferenced blobs from the blob store. Blobs that are no\nlonger referenced by any artifact are removed.\n\n<!-- $MDX skip -->\n```\nmunin gc\n```\n\n```\nRemoved 3 unreferenced blob(s).\n```\n"
  },
  {
    "path": "packages/munin/doc/05-dashboard.md",
    "content": "# Terminal Dashboard\n\nMunin includes a terminal-based dashboard for monitoring runs in real time.\nIt renders braille-resolution charts, status indicators, and system resource\nbars directly in the terminal.\n\n## Launching\n\nThe dashboard is started with `munin watch`. With no arguments it auto-detects\nthe most recently started run:\n\n<!-- $MDX skip -->\n```\nmunin watch\n```\n\nTo open a specific run, pass its ID:\n\n<!-- $MDX skip -->\n```\nmunin watch 20260317T143201_abc\n```\n\nTo pick the latest run in a given experiment:\n\n<!-- $MDX skip -->\n```\nmunin watch --experiment mnist-sweep\n```\n\n## Layout\n\nThe dashboard has three sections stacked vertically:\n\n### Header\n\nA single-line bar showing:\n\n- **Experiment and run name** (with run ID in parentheses)\n- **Tags** as inline badges\n- **Epoch** counter (e.g. `Epoch 3/10`) when an `epoch` metric is logged and\n  `epochs` is set in the params\n- **Step** counter (the highest step across all metrics)\n- **Elapsed time** in `HH:MM:SS`\n- **Status badge** on the right: a colored dot and label\n\n### Metrics panel\n\nA grid of braille-resolution line charts, one per user metric (system metrics\nprefixed with `sys/` are excluded). Each chart shows the metric name, latest\nvalue, and best value when a goal is defined.\n\nCharts are arranged in a responsive grid: 2 columns when the terminal is wide\nenough (at least 50 characters), 1 column otherwise. Rows are sized at 14\ncharacters tall. When there are more metrics than fit on screen, they are split\ninto batches and navigated with `<` / `>`.\n\nThe currently selected chart has a white border; unselected charts have a dim\nborder. 
Pressing Enter on the selected chart opens the detail view.\n\n### System panel\n\nA side panel (right 34% of the screen) showing four resource bars when system\nmetrics are available:\n\n- **CPU** -- combined user (green) + system (cyan) percentage with sparkline\n- **Mem** -- system memory percentage and absolute GB with sparkline\n- **Proc** -- process CPU percentage with sparkline\n- **RSS** -- process resident set size in MB with sparkline\n\nThe bars change color based on utilization: green below 50%, yellow 50-80%,\nred above 80%.\n\nToggle the system panel on/off with `[` or `]`.\n\n### Footer\n\nA hint bar showing available keyboard shortcuts for the current mode.\n\n## Keyboard shortcuts\n\n### Dashboard mode\n\n| Key          | Action                                     |\n|--------------|--------------------------------------------|\n| Arrow keys   | Navigate the metric chart grid              |\n| Enter/Space  | Open the selected metric in detail view     |\n| `<` / `>`    | Previous / next batch of metrics            |\n| `[` / `]`    | Toggle the system panel                     |\n| `q` / Escape | Quit the dashboard                          |\n\n### Detail view\n\n| Key          | Action                                     |\n|--------------|--------------------------------------------|\n| `S`          | Cycle EMA smoothing: Off, Light (1), Medium (2), Heavy (3) |\n| `q` / Escape | Return to dashboard                        |\n\n## Status detection\n\nThe dashboard determines run status from the event log:\n\n- **Live** (green) -- new events are arriving.\n- **Stopped** (gray) -- no events received for 5 seconds. 
The run process may\n  have crashed or been suspended.\n- **Done** (blue) -- a `Finished` event with status `finished` was received.\n- **Failed** (red) -- a `Finished` event with status `failed` was received.\n- **Killed** (yellow) -- a `Finished` event with status `killed` was received.\n\nThe dashboard polls the event log on every tick and transitions between states\nautomatically.\n\n## Detail view\n\nPressing Enter on a metric chart opens a full-screen detail view. The chart\nfills 80% of the screen with full axis labels and gridlines.\n\n**EMA smoothing** can be toggled by pressing `S`, cycling through four levels:\n\n| Level  | Alpha | Effect                        |\n|--------|-------|-------------------------------|\n| Off    | --    | Raw values                    |\n| Light  | 0.5   | Mild smoothing                |\n| Medium | 0.3   | Moderate smoothing            |\n| Heavy  | 0.15  | Aggressive smoothing          |\n\nWhen smoothing is active, the chart title shows \"(EMA)\" and the footer displays\nthe current level number.\n\nThe best value (determined by the metric's declared goal, or heuristically for\nkeys containing \"loss\" or \"error\") is displayed below the chart.\n"
  },
  {
    "path": "packages/munin/doc/06-system-monitoring.md",
    "content": "# System Monitoring\n\nThe `munin.sys` library provides background system monitoring that logs CPU and\nmemory metrics alongside your training metrics.\n\n## Setup\n\nAdd `munin.sys` to your dune `libraries`:\n\n<!-- $MDX skip -->\n```lisp\n(executable\n (name main)\n (libraries munin munin.sys))\n```\n\n## Usage\n\nStart a monitor after creating a session, and stop it before finishing:\n\n<!-- $MDX skip -->\n```ocaml\nlet () =\n  let session =\n    Munin.Session.start ~experiment:\"train\" ~name:\"resnet-50\" ()\n  in\n  let monitor = Munin_sys.start session () in\n\n  (* ... training loop ... *)\n\n  Munin_sys.stop monitor;\n  Munin.Session.finish session ()\n```\n\nThe monitor spawns a background thread that samples system and process\nstatistics at a fixed interval and logs them as scalar metrics.\n\n### Configuring the interval\n\nThe default sampling interval is 15 seconds. Override it with `~interval`:\n\n<!-- $MDX skip -->\n```ocaml\nlet monitor = Munin_sys.start session ~interval:5.0 ()\n```\n\nThe first sample is taken after one interval elapses.\n\n## Logged metrics\n\nAll metrics use the `sys/` prefix and are defined with `~summary:`Last`, so the\nfinal sampled value appears in run summaries.\n\n### System-wide\n\n| Metric key          | Description                       | Range    |\n|---------------------|-----------------------------------|----------|\n| `sys/cpu_user`      | User CPU percentage               | 0--100   |\n| `sys/cpu_system`    | System (kernel) CPU percentage    | 0--100   |\n| `sys/mem_used_pct`  | Memory usage percentage           | 0--100   |\n| `sys/mem_used_gb`   | Memory used in GB                 | 0+       |\n\n### Per-process\n\n| Metric key          | Description                       | Range    |\n|---------------------|-----------------------------------|----------|\n| `sys/proc_cpu_pct`  | Process CPU percentage            | 0+       |\n| `sys/proc_mem_mb`   | Process resident set size in MB   | 0+       
|\n\n## Platform support\n\n`munin.sys` works on Linux and macOS. Platform-specific behavior:\n\n- **Linux**: CPU counters are read from `/proc/stat`; memory from\n  `/proc/meminfo`; process stats from `/proc/self/stat` and `Unix.times`.\n- **macOS**: CPU and memory use Mach host statistics; process memory uses\n  `task_info`. Only user/nice/system/idle CPU fields are populated.\n\n## TUI system panel\n\nThe `munin watch` dashboard displays a system panel on the right side with\nCPU, memory, process CPU, and RSS bars. This panel reads the `sys/` metrics\nfrom the run's event log -- it does not perform its own sampling. If your run\ndoes not use `munin.sys`, the system panel shows zeroes.\n\nToggle the panel with `[` or `]` in the dashboard.\n\n## Sysstat module\n\nThe `Munin_sys` module re-exports the `Sysstat` interface, giving direct access\nto stateless, poll-based sampling functions. These are useful for custom\nmonitoring outside the background thread.\n\n### Cpu\n\n<!-- $MDX skip -->\n```ocaml\nlet prev = Munin_sys.Cpu.sample () in\n(* ... wait ... *)\nlet next = Munin_sys.Cpu.sample () in\nlet stats = Munin_sys.Cpu.compute ~prev ~next in\nPrintf.printf \"User: %.1f%%  System: %.1f%%\\n\" stats.user stats.system\n```\n\n`Cpu.sample_per_core` returns an array of per-core counters.\n\n### Mem\n\n<!-- $MDX skip -->\n```ocaml\nlet mem = Munin_sys.Mem.sample () in\nlet used_gb = Int64.to_float mem.used /. 1_073_741_824. in\nPrintf.printf \"Memory: %.1f GB used / %.1f GB total\\n\"\n  used_gb (Int64.to_float mem.total /. 1_073_741_824.)\n```\n\n### Net\n\n<!-- $MDX skip -->\n```ocaml\nlet prev = Munin_sys.Net.sample () in\n(* ... wait ... *)\nlet next = Munin_sys.Net.sample () in\nlet stats = Munin_sys.Net.compute ~prev ~next ~dt:1.0 in\nPrintf.printf \"Rx: %.0f B/s  Tx: %.0f B/s\\n\"\n  stats.rx_bytes_per_sec stats.tx_bytes_per_sec\n```\n\n### Disk_io\n\n<!-- $MDX skip -->\n```ocaml\nlet prev = Munin_sys.Disk_io.sample () in\n(* ... wait ... 
*)\nlet next = Munin_sys.Disk_io.sample () in\nlet stats = Munin_sys.Disk_io.compute ~prev ~next ~dt:1.0 in\nPrintf.printf \"Read: %.0f B/s  Write: %.0f B/s  Util: %.1f%%\\n\"\n  stats.read_bytes_per_sec stats.write_bytes_per_sec\n  stats.utilization_percent\n```\n\n### Fs\n\n<!-- $MDX skip -->\n```ocaml\nlet fs = Munin_sys.Fs.sample () in\nlet used_pct =\n  Int64.to_float fs.used_bytes /. Int64.to_float fs.total_bytes *. 100.\nin\nPrintf.printf \"Disk: %.1f%% used\\n\" used_pct;\nList.iter (fun (p : Munin_sys.Fs.partition) ->\n  Printf.printf \"  %s: %Ld / %Ld bytes\\n\"\n    p.mount_point p.used_bytes p.total_bytes\n) fs.partitions\n```\n\n### Proc\n\nCurrent process stats:\n\n<!-- $MDX skip -->\n```ocaml\nlet prev = Munin_sys.Proc.Self.sample () in\n(* ... wait ... *)\nlet next = Munin_sys.Proc.Self.sample () in\nlet stats = Munin_sys.Proc.Self.compute ~prev ~next ~dt:1.0 ~num_cores:None in\nPrintf.printf \"CPU: %.1f%%  RSS: %Ld bytes\\n\" stats.cpu_percent stats.rss_bytes\n```\n\nProcess table (all visible processes):\n\n<!-- $MDX skip -->\n```ocaml\nlet prev = Munin_sys.Proc.Table.sample () in\n(* ... wait ... *)\nlet next = Munin_sys.Proc.Table.sample () in\nlet stats = Munin_sys.Proc.Table.compute ~prev ~next ~dt:1.0 in\nList.iter (fun (p : Munin_sys.Proc.Table.stats) ->\n  Printf.printf \"%d  %-15s  CPU: %.1f%%  Mem: %.1f%%\\n\"\n    p.pid p.name p.cpu_percent p.mem_percent\n) (List.sort (fun a b -> compare b.cpu_percent a.cpu_percent) stats)\n```\n\n### System info\n\n<!-- $MDX skip -->\n```ocaml\nlet (l1, l5, l15) = Munin_sys.loadavg () in\nPrintf.printf \"Load: %.2f %.2f %.2f\\n\" l1 l5 l15;\nPrintf.printf \"Uptime: %Ld seconds\\n\" (Munin_sys.uptime ())\n```\n"
  },
  {
    "path": "packages/munin/doc/dune",
    "content": "(mdx\n (files *.md)\n (package munin)\n (deps))\n"
  },
  {
    "path": "packages/munin/doc/index.md",
    "content": "# Munin\n\nMunin is a local-first experiment tracker for OCaml. It records\nhyperparameters, metrics, media, and versioned artifacts to disk with\nno external services. A CLI and live TUI let you inspect and compare\nruns from the terminal.\n\n## Features\n\n- **Scalar tracking**: `log_metric`, `log_metrics`, auto-computed summaries\n- **Metric definitions**: summary modes (min/max/mean/last), goals (minimize/maximize), custom x-axes\n- **Media logging**: images, files, audio, and structured tables\n- **Versioned artifacts**: content-addressed deduplication, aliases, cross-run lineage\n- **Provenance**: git commit, command line, environment variables, captured automatically\n- **System monitoring**: background CPU and memory sampling via `Munin_sys`\n- **CLI**: `munin runs`, `munin show`, `munin compare`, `munin metrics`, `munin artifacts`\n- **Live TUI**: `munin watch` with real-time metric charts and system stats\n\n## Quick Start\n\n<!-- $MDX skip -->\n```ocaml\nlet () =\n  Munin.Session.with_run ~experiment:\"demo\"\n    ~params:[ (\"lr\", `Float 0.001); (\"epochs\", `Int 10) ]\n  @@ fun session ->\n  for step = 1 to 100 do\n    let loss = 1.0 /. Float.of_int step in\n    Munin.Session.log_metric session ~step \"loss\" loss\n  done;\n  Munin.Session.set_summary session [ (\"final_loss\", `Float 0.01) ]\n```\n\nInspect the run from the terminal:\n\n<!-- $MDX skip -->\n```sh\nmunin runs\nmunin show <RUN_ID>\nmunin metrics <RUN_ID> --key loss\n```\n\n## Next Steps\n\n- [Getting Started](01-getting-started/) -- installation, key concepts, first example\n- [Tracking Metrics](02-tracking/) -- scalars, metric definitions, media, Kaun integration\n- [Artifacts](03-artifacts/) -- versioned files, aliases, lineage, deduplication\n"
  },
  {
    "path": "packages/munin/examples/01-basic/README.md",
    "content": "# 01-basic\n\nCreates a small local store, logs a run with scalar metrics, writes a summary,\nstores a file artifact, and prints the resulting run id.\n"
  },
  {
    "path": "packages/munin/examples/01-basic/dune",
    "content": "(executable\n (name main)\n (libraries munin))\n"
  },
  {
    "path": "packages/munin/examples/01-basic/main.ml",
    "content": "let () =\n  let root = \"_munin\" in\n  let artifact_path = Filename.concat root \"artifact.txt\" in\n  let write path text =\n    let oc = open_out path in\n    Fun.protect\n      ~finally:(fun () -> close_out oc)\n      (fun () -> output_string oc text)\n  in\n  let session =\n    Session.start ~root ~experiment:\"demo\" ~name:\"baseline\"\n      ~params:[ (\"lr\", `Float 0.001) ]\n      ()\n  in\n  write artifact_path \"hello from munin\\n\";\n  Session.log_metric session ~step:1 \"loss\" 1.25;\n  Session.log_metric session ~step:2 \"loss\" 0.94;\n  Session.set_summary session [ (\"best_loss\", `Float 0.94) ];\n  ignore\n    (Session.log_artifact session ~name:\"notes\" ~kind:`file ~path:artifact_path\n       ());\n  Session.finish session ();\n  Printf.printf \"run: %s\\n\" (Run.id (Session.run session))\n"
  },
  {
    "path": "packages/munin/examples/02-metrics/dune",
    "content": "(executable\n (name main)\n (libraries munin))\n"
  },
  {
    "path": "packages/munin/examples/02-metrics/main.ml",
    "content": "(** Metric definitions and rich scalar logging.\n\n    Demonstrates define_metric with summaries, goals, and step_metric for custom\n    x-axes. Simulates an iterative solver converging over epochs. *)\n\nopen Munin\n\nlet () =\n  let root = \"_munin\" in\n  let session =\n    Session.start ~root ~experiment:\"solver\" ~name:\"conjugate-gradient\"\n      ~params:[ (\"tolerance\", `Float 1e-6); (\"max_iter\", `Int 500) ]\n      ()\n  in\n  (* Declare how metrics should be summarised and compared. *)\n  Session.define_metric session \"residual\" ~summary:`Min ~goal:`Minimize ();\n  Session.define_metric session \"convergence_rate\" ~summary:`Mean\n    ~step_metric:\"epoch\" ();\n\n  (* Simulate an iterative solver: residual shrinks, rate stabilises. *)\n  let residual = ref 1.0 in\n  for epoch = 1 to 20 do\n    let rate = 0.7 +. Random.float 0.1 in\n    residual := !residual *. rate;\n    let step = epoch * 25 in\n    Session.log_metrics session ~step\n      [\n        (\"residual\", !residual);\n        (\"convergence_rate\", rate);\n        (\"epoch\", Float.of_int epoch);\n      ]\n  done;\n\n  Session.set_summary session [ (\"final_residual\", `Float !residual) ];\n  Session.finish session ();\n\n  (* Read back and print. 
*)\n  let run = Session.run session in\n  Printf.printf \"run: %s\\n\" (Run.id run);\n  Printf.printf \"metric keys: %s\\n\" (String.concat \", \" (Run.metric_keys run));\n\n  let defs = Run.metric_defs run in\n  List.iter\n    (fun (key, (def : Run.metric_def)) ->\n      let goal =\n        match def.goal with\n        | Some `Minimize -> \"minimize\"\n        | Some `Maximize -> \"maximize\"\n        | None -> \"none\"\n      in\n      Printf.printf \"  %s: summary=%s goal=%s\\n\" key\n        (match def.summary with\n        | `Min -> \"min\"\n        | `Max -> \"max\"\n        | `Mean -> \"mean\"\n        | `Last -> \"last\"\n        | `None -> \"none\")\n        goal)\n    defs;\n\n  let history = Run.metric_history run \"residual\" in\n  Printf.printf \"residual: %d samples, final=%.2e\\n\" (List.length history)\n    (List.nth history (List.length history - 1)).value\n"
  },
  {
    "path": "packages/munin/examples/03-artifacts/dune",
    "content": "(executable\n (name main)\n (libraries munin))\n"
  },
  {
    "path": "packages/munin/examples/03-artifacts/main.ml",
    "content": "(** Artifact versioning and lineage across runs.\n\n    Run 1 produces a dataset artifact. Run 2 consumes it and produces a result.\n    Demonstrates versioning, aliases, and cross-run lineage. *)\n\nopen Munin\n\nlet write_file path text =\n  let oc = open_out path in\n  Fun.protect\n    ~finally:(fun () -> close_out oc)\n    (fun () -> output_string oc text)\n\nlet () =\n  let root = \"_munin\" in\n  let store = Store.open_ ~root () in\n\n  (* Run 1: produce a dataset. *)\n  let session1 =\n    Session.start ~root ~experiment:\"pipeline\" ~name:\"prepare-data\"\n      ~tags:[ \"data\" ] ()\n  in\n  let data_path = Filename.concat root \"measurements.csv\" in\n  write_file data_path \"wavelength,flux\\n450.0,1.23\\n550.0,2.45\\n650.0,1.87\\n\";\n  let dataset =\n    Session.log_artifact session1 ~name:\"measurements\" ~kind:`dataset\n      ~path:data_path\n      ~metadata:[ (\"rows\", `Int 3) ]\n      ~aliases:[ \"latest\" ] ()\n  in\n  Session.finish session1 ();\n  Printf.printf \"produced: %s v%s (aliases: %s)\\n\" (Artifact.name dataset)\n    (Artifact.version dataset)\n    (String.concat \", \" (Artifact.aliases dataset));\n\n  (* Run 2: consume the dataset, produce a result. *)\n  let session2 =\n    Session.start ~root ~experiment:\"pipeline\" ~name:\"analyse\"\n      ~tags:[ \"analysis\" ] ()\n  in\n  Session.use_artifact session2 dataset;\n  let result_path = Filename.concat root \"result.txt\" in\n  write_file result_path \"peak_wavelength=550.0\\npeak_flux=2.45\\n\";\n  let result =\n    Session.log_artifact session2 ~name:\"analysis-result\" ~kind:`file\n      ~path:result_path ~aliases:[ \"latest\"; \"best\" ] ()\n  in\n  Session.finish session2 ();\n  Printf.printf \"produced: %s v%s\\n\" (Artifact.name result)\n    (Artifact.version result);\n\n  (* Query artifacts from the store. 
*)\n  (match Store.find_artifact store ~name:\"measurements\" ~version:\"latest\" with\n  | Some a ->\n      Printf.printf \"\\nresolved 'measurements:latest' -> v%s (%d bytes)\\n\"\n        (Artifact.version a) (Artifact.size_bytes a);\n      Printf.printf \"  producer: %s\\n\"\n        (Option.value ~default:\"unknown\" (Artifact.producer_run_id a));\n      Printf.printf \"  consumers: %s\\n\"\n        (String.concat \", \" (Artifact.consumer_run_ids a))\n  | None -> Printf.printf \"artifact not found\\n\");\n\n  let all = Store.list_artifacts store () in\n  Printf.printf \"\\nall artifacts: %d\\n\" (List.length all);\n  List.iter\n    (fun a -> Printf.printf \"  %s v%s\\n\" (Artifact.name a) (Artifact.version a))\n    all\n"
  },
  {
    "path": "packages/munin/examples/04-media/dune",
    "content": "(executable\n (name main)\n (libraries munin unix))\n"
  },
  {
    "path": "packages/munin/examples/04-media/main.ml",
    "content": "(** Non-scalar data: media files and tables.\n\n    Demonstrates log_media for files and log_table for structured data. Creates\n    synthetic data to keep the example self-contained. *)\n\nopen Munin\n\nlet write_file path text =\n  let oc = open_out path in\n  Fun.protect\n    ~finally:(fun () -> close_out oc)\n    (fun () -> output_string oc text)\n\n(* Write a tiny PPM image (3x3 red gradient). *)\nlet write_ppm path =\n  let oc = open_out_bin path in\n  Fun.protect\n    ~finally:(fun () -> close_out oc)\n    (fun () ->\n      Printf.fprintf oc \"P6\\n3 3\\n255\\n\";\n      for row = 0 to 2 do\n        for _col = 0 to 2 do\n          let v = 85 * row in\n          output_char oc (Char.chr v);\n          output_char oc '\\000';\n          output_char oc '\\000'\n        done\n      done)\n\nlet () =\n  let root = \"_munin\" in\n  let session = Session.start ~root ~experiment:\"media-demo\" ~name:\"run-1\" () in\n  let tmp = Filename.concat root \"_tmp\" in\n  (try Unix.mkdir tmp 0o755 with Unix.Unix_error (Unix.EEXIST, _, _) -> ());\n\n  (* Log an image at two different steps. *)\n  let img_path = Filename.concat tmp \"sample.ppm\" in\n  write_ppm img_path;\n  Session.log_media session ~step:1 ~key:\"viz/sample\" ~kind:`Image\n    ~path:img_path;\n  write_ppm img_path;\n  Session.log_media session ~step:2 ~key:\"viz/sample\" ~kind:`Image\n    ~path:img_path;\n\n  (* Log a text file. *)\n  let notes_path = Filename.concat tmp \"notes.txt\" in\n  write_file notes_path \"Observation: signal peaks at 550nm.\\n\";\n  Session.log_media session ~step:1 ~key:\"notes\" ~kind:`File ~path:notes_path;\n\n  (* Log a structured table — e.g. per-class metrics or a confusion matrix. 
*)\n  Session.log_table session ~step:1 ~key:\"results/per_band\"\n    ~columns:[ \"band\"; \"snr\"; \"coverage\" ]\n    ~rows:\n      [\n        [ `String \"blue\"; `Float 12.3; `Float 0.95 ];\n        [ `String \"green\"; `Float 18.7; `Float 0.98 ];\n        [ `String \"red\"; `Float 15.1; `Float 0.92 ];\n      ];\n\n  Session.finish session ();\n\n  (* Read back media entries. *)\n  let run = Session.run session in\n  Printf.printf \"run: %s\\n\" (Run.id run);\n  Printf.printf \"media keys: %s\\n\" (String.concat \", \" (Run.media_keys run));\n  let entries = Run.media_history run \"viz/sample\" in\n  Printf.printf \"viz/sample: %d entries\\n\" (List.length entries);\n  List.iter\n    (fun (e : Run.media_entry) ->\n      Printf.printf \"  step=%d kind=%s path=%s\\n\" e.step\n        (match e.kind with\n        | `Image -> \"image\"\n        | `Audio -> \"audio\"\n        | `Table -> \"table\"\n        | `File -> \"file\")\n        (Filename.basename e.path))\n    entries\n"
  },
  {
    "path": "packages/munin/examples/05-parameter-sweep/dune",
    "content": "(executable\n (name main)\n (libraries munin))\n"
  },
  {
    "path": "packages/munin/examples/05-parameter-sweep/main.ml",
    "content": "(** Parameter sweep with grouped runs.\n\n    Runs the same computation with different configurations, grouped under a\n    single sweep. Compares results at the end via Store queries. *)\n\nopen Munin\n\nlet () =\n  let root = \"_munin\" in\n  let store = Store.open_ ~root () in\n  let group = \"sweep-1\" in\n\n  (* Sweep over three configurations. *)\n  let configs =\n    [\n      (\"aggressive\", 0.1, 50);\n      (\"moderate\", 0.01, 100);\n      (\"conservative\", 0.001, 200);\n    ]\n  in\n  List.iter\n    (fun (name, step_size, max_iter) ->\n      let session =\n        Session.start ~root ~experiment:\"optimisation\" ~name ~group\n          ~params:\n            [ (\"step_size\", `Float step_size); (\"max_iter\", `Int max_iter) ]\n          ()\n      in\n      Session.define_metric session \"error\" ~summary:`Min ~goal:`Minimize ();\n\n      (* Simulate convergence: smaller steps converge slower but lower. *)\n      let error = ref 10.0 in\n      for i = 1 to max_iter do\n        error := (!error *. (1.0 -. step_size)) +. Random.float 0.01;\n        if i mod 10 = 0 then Session.log_metric session ~step:i \"error\" !error\n      done;\n      Session.finish session ())\n    configs;\n\n  (* Compare: list all runs in the group and print a results table. 
*)\n  let runs = Store.list_runs store ~experiment:\"optimisation\" ~group () in\n  Printf.printf \"%-15s  %-10s  %-10s  %-12s\\n\" \"name\" \"step_size\" \"max_iter\"\n    \"final_error\";\n  Printf.printf \"%s\\n\" (String.make 52 '-');\n  List.iter\n    (fun run ->\n      let name = Option.value ~default:\"?\" (Run.name run) in\n      let step_size =\n        match Run.find_param run \"step_size\" with\n        | Some (`Float f) -> Printf.sprintf \"%g\" f\n        | _ -> \"?\"\n      in\n      let max_iter =\n        match Run.find_param run \"max_iter\" with\n        | Some v -> Format.asprintf \"%a\" Value.pp v\n        | None -> \"?\"\n      in\n      let latest = Run.latest_metrics run in\n      let error =\n        match List.assoc_opt \"error\" latest with\n        | Some m -> Printf.sprintf \"%.6f\" m.value\n        | None -> \"?\"\n      in\n      Printf.printf \"%-15s  %-10s  %-10s  %-12s\\n\" name step_size max_iter error)\n    runs\n"
  },
  {
    "path": "packages/munin/examples/06-inspect/dune",
    "content": "(executable\n (name main)\n (libraries munin))\n"
  },
  {
    "path": "packages/munin/examples/06-inspect/main.ml",
    "content": "(** Querying and inspecting past runs.\n\n    The \"notebook\" use case: open a store, browse experiments, filter runs,\n    examine provenance, and extract metric histories. Assumes earlier examples\n    have been run to populate the store. *)\n\nopen Munin\n\nlet () =\n  let root = \"_munin\" in\n  let store = Store.open_ ~root () in\n\n  (* List all experiments. *)\n  let experiments = Store.list_experiments store in\n  Printf.printf \"experiments: %s\\n\\n\" (String.concat \", \" experiments);\n\n  (* List runs, optionally filtering. *)\n  let all_runs = Store.list_runs store () in\n  Printf.printf \"total runs: %d\\n\" (List.length all_runs);\n  let finished = Store.list_runs store ~status:`finished () in\n  Printf.printf \"finished runs: %d\\n\\n\" (List.length finished);\n\n  (* Find the latest run and inspect it. *)\n  (match Store.latest_run store () with\n  | None -> Printf.printf \"no runs found\\n\"\n  | Some run ->\n      Printf.printf \"latest run: %s\\n\" (Run.id run);\n      Printf.printf \"  experiment: %s\\n\" (Run.experiment_name run);\n      Printf.printf \"  name: %s\\n\"\n        (Option.value ~default:\"(none)\" (Run.name run));\n      Printf.printf \"  status: %s\\n\"\n        (match Run.status run with\n        | `running -> \"running\"\n        | `finished -> \"finished\"\n        | `failed -> \"failed\"\n        | `killed -> \"killed\");\n      Printf.printf \"  tags: [%s]\\n\" (String.concat \", \" (Run.tags run));\n\n      (* Provenance. *)\n      let prov = Run.provenance run in\n      Printf.printf \"  hostname: %s\\n\" (Option.value ~default:\"?\" prov.hostname);\n      Printf.printf \"  git: %s%s\\n\"\n        (Option.value ~default:\"?\" prov.git_commit)\n        (match prov.git_dirty with Some true -> \" (dirty)\" | _ -> \"\");\n\n      (* Params. 
*)\n      let params = Run.params run in\n      if params <> [] then (\n        Printf.printf \"  params:\\n\";\n        List.iter\n          (fun (k, v) ->\n            Printf.printf \"    %s = %s\\n\" k (Format.asprintf \"%a\" Value.pp v))\n          params);\n\n      (* Metrics. *)\n      let keys = Run.metric_keys run in\n      Printf.printf \"  metrics: %s\\n\" (String.concat \", \" keys);\n      List.iter\n        (fun key ->\n          let history = Run.metric_history run key in\n          let n = List.length history in\n          if n > 0 then\n            let last = (List.nth history (n - 1)).value in\n            Printf.printf \"    %s: %d samples, last=%.4g\\n\" key n last)\n        keys);\n\n  (* List artifacts. *)\n  let artifacts = Store.list_artifacts store () in\n  Printf.printf \"\\nartifacts: %d\\n\" (List.length artifacts);\n  List.iter\n    (fun a ->\n      Printf.printf \"  %s v%s (%d bytes)\\n\" (Artifact.name a)\n        (Artifact.version a) (Artifact.size_bytes a))\n    artifacts\n"
  },
  {
    "path": "packages/munin/examples/07-system-monitor/dune",
    "content": "(executable\n (name main)\n (libraries munin munin.sys))\n"
  },
  {
    "path": "packages/munin/examples/07-system-monitor/main.ml",
    "content": "(** Automatic system metrics during a computation.\n\n    Starts a system monitor that logs CPU and memory usage in the background\n    while a CPU-intensive computation runs. *)\n\nopen Munin\n\n(* A simple CPU-bound computation: count primes up to n. *)\nlet count_primes n =\n  let count = ref 0 in\n  for i = 2 to n do\n    let is_prime = ref true in\n    let j = ref 2 in\n    while !j * !j <= i && !is_prime do\n      if i mod !j = 0 then is_prime := false;\n      incr j\n    done;\n    if !is_prime then incr count\n  done;\n  !count\n\nlet () =\n  let root = \"_munin\" in\n  let session =\n    Session.start ~root ~experiment:\"compute\" ~name:\"prime-sieve\"\n      ~params:[ (\"limit\", `Int 5_000_000) ]\n      ()\n  in\n\n  (* Start system monitoring with a short interval for this demo. *)\n  let monitor = Munin_sys.start session ~interval:0.5 () in\n\n  (* Run the computation, logging progress. *)\n  let steps = 10 in\n  let per_step = 500_000 in\n  for i = 1 to steps do\n    let limit = i * per_step in\n    let n = count_primes limit in\n    Session.log_metrics session ~step:i\n      [ (\"primes_found\", Float.of_int n); (\"limit\", Float.of_int limit) ]\n  done;\n\n  Munin_sys.stop monitor;\n  Session.finish session ();\n\n  (* Check what system metrics were recorded. *)\n  let run = Session.run session in\n  let keys = Run.metric_keys run in\n  let sys_keys =\n    List.filter (fun k -> String.length k > 4 && String.sub k 0 4 = \"sys/\") keys\n  in\n  Printf.printf \"run: %s\\n\" (Run.id run);\n  Printf.printf \"system metrics: %s\\n\" (String.concat \", \" sys_keys);\n  List.iter\n    (fun key ->\n      let history = Run.metric_history run key in\n      let n = List.length history in\n      if n > 0 then\n        let last = (List.nth history (n - 1)).value in\n        Printf.printf \"  %s: %d samples, last=%.2f\\n\" key n last)\n    sys_keys\n"
  },
  {
    "path": "packages/munin/examples/README.md",
    "content": "# Munin Examples\n\n| Example | What you'll learn |\n|---------|-------------------|\n| [01-basic](01-basic/) | Start a session, log metrics, store an artifact, finish |\n| [02-metrics](02-metrics/) | Define metrics with summaries, goals, and custom x-axes |\n| [03-artifacts](03-artifacts/) | Version artifacts, attach aliases, track cross-run lineage |\n| [04-media](04-media/) | Log images, files, and structured tables |\n| [05-parameter-sweep](05-parameter-sweep/) | Group runs under a sweep, compare results |\n| [06-inspect](06-inspect/) | Query the store, browse experiments, examine provenance |\n| [07-system-monitor](07-system-monitor/) | Record CPU and memory usage automatically |\n| [x-kaun-mnist](x-kaun-mnist/) | End-to-end MNIST training with kaun integration |\n\nRun any example with:\n\n```sh\ndune exec packages/munin/examples/01-basic/main.exe\n```\n\nExamples write to a local `_munin/` directory.\n"
  },
  {
    "path": "packages/munin/examples/x-kaun-mnist/dune",
    "content": "(executable\n (name main)\n (libraries nx rune vega kaun kaun.datasets munin munin.sys))\n"
  },
  {
    "path": "packages/munin/examples/x-kaun-mnist/main.ml",
    "content": "(** End-to-end MNIST training with experiment tracking.\n\n    Trains a CNN on MNIST using kaun, logging metrics, hyperparameters, and a\n    model checkpoint via munin. Shows how munin integrates with a real training\n    loop without adding a dependency from kaun to munin. *)\n\nopen Kaun\n\nlet batch_size = 64\nlet epochs = 3\nlet lr = 0.001\n\nlet model =\n  Layer.sequential\n    [\n      Layer.conv2d ~in_channels:1 ~out_channels:16 ();\n      Layer.relu ();\n      Layer.max_pool2d ~kernel_size:(2, 2) ();\n      Layer.conv2d ~in_channels:16 ~out_channels:32 ();\n      Layer.relu ();\n      Layer.max_pool2d ~kernel_size:(2, 2) ();\n      Layer.flatten ();\n      Layer.linear ~in_features:(32 * 7 * 7) ~out_features:128 ();\n      Layer.relu ();\n      Layer.linear ~in_features:128 ~out_features:10 ();\n    ]\n\nlet () =\n  Nx.Rng.run ~seed:42 @@ fun () ->\n  let dtype = Nx.float32 in\n\n  (* Start a tracked run. *)\n  let session =\n    Munin.Session.start ~experiment:\"mnist\" ~name:\"cnn-adam\"\n      ~tags:[ \"baseline\" ]\n      ~params:\n        [\n          (\"lr\", `Float lr);\n          (\"batch_size\", `Int batch_size);\n          (\"epochs\", `Int epochs);\n          (\"optimizer\", `String \"adam\");\n        ]\n      ()\n  in\n  Munin.Session.define_metric session \"train/loss\" ~summary:`Min ~goal:`Minimize\n    ();\n  Munin.Session.define_metric session \"val/accuracy\" ~summary:`Max\n    ~goal:`Maximize ();\n  let sysmon = Munin_sys.start session () in\n\n  Printf.printf \"run: %s\\n%!\" (Munin.Run.id (Munin.Session.run session));\n\n  (* Load data. 
*)\n  Printf.printf \"Loading MNIST...\\n%!\";\n  let (x_train, y_train), (x_test, y_test) = Kaun_datasets.mnist () in\n  let n_train = (Nx.shape x_train).(0) in\n  Printf.printf \"  train: %d  test: %d\\n%!\" n_train (Nx.shape x_test).(0);\n\n  let test_batches = Data.prepare ~batch_size (x_test, y_test) in\n  let trainer =\n    Train.make ~model ~optimizer:(Vega.adam (Vega.Schedule.constant lr))\n  in\n  let st = ref (Train.init trainer ~dtype) in\n  let global_step = ref 0 in\n  let last_acc = ref 0. in\n\n  for epoch = 1 to epochs do\n    let train_data =\n      Data.prepare ~shuffle:true ~batch_size (x_train, y_train)\n      |> Data.map (fun (x, y) ->\n          (x, fun logits -> Loss.cross_entropy_sparse logits y))\n    in\n    let num_batches = n_train / batch_size in\n    let tracker = Metric.tracker () in\n\n    st :=\n      Train.fit trainer !st\n        ~report:(fun ~step ~loss _st ->\n          let s = !global_step + step in\n          Metric.observe tracker \"loss\" loss;\n          Munin.Session.log_metrics session ~step:s\n            [ (\"train/loss\", loss); (\"epoch\", Float.of_int epoch) ];\n          Printf.printf \"\\r  batch %d/%d  loss: %.4f%!\" step num_batches loss)\n        train_data;\n    global_step := !global_step + num_batches;\n    Printf.printf \"\\n%!\";\n\n    (* Evaluate. *)\n    Data.reset test_batches;\n    let test_acc =\n      Metric.eval\n        (fun (x, y) ->\n          let logits = Train.predict trainer !st x in\n          Metric.accuracy logits y)\n        test_batches\n    in\n    last_acc := test_acc;\n\n    Munin.Session.log_metrics session ~step:!global_step\n      [\n        (\"train/loss_avg\", Metric.mean tracker \"loss\");\n        (\"val/accuracy\", test_acc);\n      ];\n\n    Printf.printf \"epoch %d  loss: %.4f  val_acc: %.2f%%\\n%!\" epoch\n      (Metric.mean tracker \"loss\")\n      (test_acc *. 100.)\n  done;\n\n  (* Save model checkpoint as a versioned artifact. 
*)\n  let checkpoint_path =\n    Filename.concat\n      (Munin.Run.dir (Munin.Session.run session))\n      \"model.safetensors\"\n  in\n  Checkpoint.save checkpoint_path (Layer.params (Train.vars !st));\n  ignore\n    (Munin.Session.log_artifact session ~name:\"mnist-cnn\" ~kind:`checkpoint\n       ~path:checkpoint_path\n       ~metadata:[ (\"format\", `String \"safetensors\") ]\n       ~aliases:[ \"latest\" ] ());\n\n  Munin_sys.stop sysmon;\n  Munin.Session.set_notes session\n    (Some (Printf.sprintf \"Final val accuracy: %.2f%%\" (!last_acc *. 100.)));\n  Munin.Session.finish session ();\n  Printf.printf \"\\nDone. Run: %s\\n\" (Munin.Run.id (Munin.Session.run session))\n"
  },
  {
    "path": "packages/munin/lib/artifact.ml",
    "content": "type kind = [ `dataset | `model | `checkpoint | `file | `dir | `other ]\ntype payload = [ `file | `dir ]\n\ntype t = {\n  root : string;\n  name : string;\n  kind : kind;\n  payload : payload;\n  version : string;\n  digest : string;\n  materialized_rel_path : string;\n  size_bytes : int;\n  metadata : (string * Jsont.json) list;\n  aliases : string list;\n  producer_run_id : string option;\n  consumer_run_ids : string list;\n  created_at : float;\n}\n\nlet schema_version = 2\nlet name t = t.name\nlet kind t = t.kind\nlet payload t = t.payload\nlet version t = t.version\nlet digest t = t.digest\nlet size_bytes t = t.size_bytes\nlet metadata t = List.map (fun (k, v) -> (k, Value.of_json v)) t.metadata\nlet aliases t = t.aliases\nlet producer_run_id t = t.producer_run_id\nlet consumer_run_ids t = t.consumer_run_ids\nlet created_at t = t.created_at\nlet path t = Filename.concat t.root t.materialized_rel_path\nlet has_alias t alias = List.exists (String.equal alias) t.aliases\n\nlet kind_of_string = function\n  | \"dataset\" -> `dataset\n  | \"model\" -> `model\n  | \"checkpoint\" -> `checkpoint\n  | \"file\" -> `file\n  | \"dir\" -> `dir\n  | _ -> `other\n\nlet kind_to_string : kind -> string = function\n  | `dataset -> \"dataset\"\n  | `model -> \"model\"\n  | `checkpoint -> \"checkpoint\"\n  | `file -> \"file\"\n  | `dir -> \"dir\"\n  | `other -> \"other\"\n\nlet payload_of_string = function \"dir\" -> `dir | _ -> `file\n\nlet payload_to_string : payload -> string = function\n  | `file -> \"file\"\n  | `dir -> \"dir\"\n\nlet versions_dir root name =\n  Filename.concat\n    (Filename.concat (Filename.concat root \"artifacts\") name)\n    \"versions\"\n\nlet manifest_path root name version =\n  Filename.concat\n    (Filename.concat (versions_dir root name) version)\n    \"manifest.json\"\n\nlet load_manifest root path =\n  try\n    let json = Fs.read_file path |> Json_utils.json_of_string in\n    let schema_ok =\n      match\n        
Json_utils.json_mem \"schema_version\" json |> Json_utils.json_number\n      with\n      | Some v -> int_of_float v = schema_version\n      | None -> false\n    in\n    if not schema_ok then None\n    else\n      match\n        ( Json_utils.json_mem \"name\" json |> Json_utils.json_string,\n          Json_utils.json_mem \"version\" json |> Json_utils.json_string,\n          Json_utils.json_mem \"kind\" json |> Json_utils.json_string,\n          Json_utils.json_mem \"payload\" json |> Json_utils.json_string,\n          Json_utils.json_mem \"digest\" json |> Json_utils.json_string,\n          Json_utils.json_mem \"path\" json |> Json_utils.json_string,\n          Json_utils.json_mem \"size_bytes\" json |> Json_utils.json_number )\n      with\n      | ( Some name,\n          Some version,\n          Some kind,\n          Some payload,\n          Some digest,\n          Some materialized_rel_path,\n          Some size_bytes ) ->\n          Some\n            {\n              root;\n              name;\n              kind = kind_of_string kind;\n              payload = payload_of_string payload;\n              version;\n              digest;\n              materialized_rel_path;\n              size_bytes = int_of_float size_bytes;\n              metadata =\n                Json_utils.json_mem \"metadata\" json |> Json_utils.json_assoc;\n              aliases =\n                Json_utils.json_mem \"aliases\" json\n                |> Json_utils.json_string_list;\n              producer_run_id =\n                Json_utils.json_mem \"producer_run_id\" json\n                |> Json_utils.json_string;\n              consumer_run_ids =\n                Json_utils.json_mem \"consumer_run_ids\" json\n                |> Json_utils.json_string_list;\n              created_at =\n                Option.value\n                  (Json_utils.json_mem \"created_at\" json\n                  |> Json_utils.json_number)\n                  ~default:0.0;\n            }\n      | _ -> None\n  
with _ -> None\n\nlet write_manifest root name version artifact =\n  let json =\n    Json_utils.json_obj\n      ([\n         (\"schema_version\", Jsont.Json.int schema_version);\n         (\"name\", Jsont.Json.string artifact.name);\n         (\"version\", Jsont.Json.string artifact.version);\n         (\"kind\", Jsont.Json.string (kind_to_string artifact.kind));\n         (\"payload\", Jsont.Json.string (payload_to_string artifact.payload));\n         (\"digest\", Jsont.Json.string artifact.digest);\n         (\"path\", Jsont.Json.string artifact.materialized_rel_path);\n         (\"size_bytes\", Jsont.Json.int artifact.size_bytes);\n         (\"metadata\", Json_utils.json_obj artifact.metadata);\n         ( \"aliases\",\n           Jsont.Json.list (List.map Jsont.Json.string artifact.aliases) );\n         ( \"consumer_run_ids\",\n           Jsont.Json.list\n             (List.map Jsont.Json.string artifact.consumer_run_ids) );\n         (\"created_at\", Jsont.Json.number artifact.created_at);\n       ]\n      @\n      match artifact.producer_run_id with\n      | None -> []\n      | Some run_id -> [ (\"producer_run_id\", Jsont.Json.string run_id) ])\n  in\n  Fs.write_file\n    (manifest_path root name version)\n    (Json_utils.json_to_string ~pretty:true json ^ \"\\n\")\n\nlet version_number version =\n  if String.length version >= 2 && version.[0] = 'v' then\n    int_of_string_opt (String.sub version 1 (String.length version - 1))\n  else None\n\nlet compare_version a b =\n  match (version_number a.version, version_number b.version) with\n  | Some a_num, Some b_num -> Int.compare a_num b_num\n  | _ ->\n      let by_name = String.compare a.name b.name in\n      if by_name <> 0 then by_name else String.compare a.version b.version\n\nlet resolve_alias root name alias =\n  Fs.list_dirs (versions_dir root name)\n  |> List.filter_map (fun version ->\n      load_manifest root (manifest_path root name version))\n  |> List.filter (fun artifact -> has_alias artifact 
alias)\n  |> List.sort compare_version |> List.rev\n  |> function\n  | artifact :: _ -> Some artifact\n  | [] -> None\n\nlet load ~root ~name ~version =\n  let path = manifest_path root name version in\n  if Sys.file_exists path then load_manifest root path\n  else resolve_alias root name version\n\nlet list ~root ?name ?kind ?alias ?producer_run ?consumer_run () =\n  let names =\n    match name with\n    | Some name -> [ name ]\n    | None -> Fs.list_dirs (Filename.concat root \"artifacts\")\n  in\n  List.concat_map\n    (fun name ->\n      Fs.list_dirs (versions_dir root name)\n      |> List.filter_map (fun version ->\n          load_manifest root (manifest_path root name version)))\n    names\n  |> List.filter (fun artifact ->\n      Option.fold ~none:true ~some:(fun k -> artifact.kind = k) kind\n      && Option.fold ~none:true ~some:(has_alias artifact) alias\n      && Option.fold ~none:true\n           ~some:(fun run_id -> artifact.producer_run_id = Some run_id)\n           producer_run\n      && Option.fold ~none:true\n           ~some:(fun run_id ->\n             List.exists (String.equal run_id) artifact.consumer_run_ids)\n           consumer_run)\n  |> List.sort (fun a b ->\n      let by_name = String.compare a.name b.name in\n      if by_name <> 0 then by_name else compare_version a b)\n\nlet create ~root ~name ~kind ~payload ~digest ~path:size_path ~metadata ~aliases\n    ~producer_run_id =\n  let version =\n    let max_version =\n      Fs.list_dirs (versions_dir root name)\n      |> List.filter_map version_number\n      |> List.fold_left max 0\n    in\n    Printf.sprintf \"v%d\" (max_version + 1)\n  in\n  let artifact =\n    {\n      root;\n      name;\n      kind;\n      payload;\n      version;\n      digest;\n      materialized_rel_path = size_path;\n      size_bytes = Fs.path_size (Filename.concat root size_path);\n      metadata;\n      aliases;\n      producer_run_id;\n      consumer_run_ids = [];\n      created_at = Unix.gettimeofday ();\n    }\n 
 in\n  write_manifest root name version artifact;\n  artifact\n\nlet add_consumer ~root ~name ~version run_id =\n  match load ~root ~name ~version with\n  | None -> ()\n  | Some artifact ->\n      if List.exists (String.equal run_id) artifact.consumer_run_ids then ()\n      else\n        let artifact =\n          {\n            artifact with\n            consumer_run_ids = artifact.consumer_run_ids @ [ run_id ];\n          }\n        in\n        (* [version] may be an alias (e.g. \"latest\") that [load] resolved:\n           write the manifest back under the resolved concrete version, not\n           the alias. *)\n        write_manifest root name artifact.version artifact\n"
  },
  {
    "path": "packages/munin/lib/artifact.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Versioned local artifacts.\n\n    Artifacts are immutable named payloads with versions, aliases, and lineage.\n*)\n\n(** {1:types Types} *)\n\ntype kind =\n  [ `dataset  (** Dataset. *)\n  | `model  (** Model weights. *)\n  | `checkpoint  (** Checkpoint. *)\n  | `file  (** Single file. *)\n  | `dir  (** Directory tree. *)\n  | `other  (** Unclassified artifact. *) ]\n(** The type for logical artifact kinds. *)\n\ntype payload =\n  [ `file  (** Single file payload. *) | `dir  (** Directory tree payload. *) ]\n(** The type for materialized payload kinds. *)\n\ntype t\n(** The type for artifact handles. *)\n\n(** {1:loading Loading} *)\n\nval load : root:string -> name:string -> version:string -> t option\n(** [load ~root ~name ~version] is the artifact named [name] at [version], if\n    present.\n\n    If [version] does not match an explicit version, it is resolved as an alias.\n    Returns [None] if neither matches. *)\n\nval list :\n  root:string ->\n  ?name:string ->\n  ?kind:kind ->\n  ?alias:string ->\n  ?producer_run:string ->\n  ?consumer_run:string ->\n  unit ->\n  t list\n(** [list ~root ()] is the artifacts stored under [root], filtered when the\n    optional selectors are provided. Results are sorted by name, then by version\n    number. *)\n\n(** {1:identity Identity} *)\n\nval name : t -> string\n(** [name t] is the logical artifact name. *)\n\nval kind : t -> kind\n(** [kind t] is the artifact's logical kind. *)\n\nval payload : t -> payload\n(** [payload t] is the materialized payload kind. *)\n\nval version : t -> string\n(** [version t] is the explicit version such as [\"v1\"]. 
*)\n\n(** {1:content Content} *)\n\nval digest : t -> string\n(** [digest t] is the content-addressed SHA-256 digest of the payload. *)\n\nval path : t -> string\n(** [path t] is the absolute path to the materialized payload in the blob store.\n*)\n\nval size_bytes : t -> int\n(** [size_bytes t] is the total byte size of the materialized payload. *)\n\nval metadata : t -> (string * Value.t) list\n(** [metadata t] is the artifact metadata. *)\n\nval aliases : t -> string list\n(** [aliases t] is the alias list attached to the version. *)\n\nval has_alias : t -> string -> bool\n(** [has_alias t alias] is [true] iff [alias] points at [t]. *)\n\n(** {1:lineage Lineage} *)\n\nval producer_run_id : t -> string option\n(** [producer_run_id t] is the producing run identifier, if known. *)\n\nval consumer_run_ids : t -> string list\n(** [consumer_run_ids t] is the list of consuming run identifiers. *)\n\nval created_at : t -> float\n(** [created_at t] is the artifact creation timestamp ([Unix.gettimeofday] at\n    creation time). *)\n\n(**/**)\n\nval create :\n  root:string ->\n  name:string ->\n  kind:kind ->\n  payload:payload ->\n  digest:string ->\n  path:string ->\n  metadata:(string * Jsont.json) list ->\n  aliases:string list ->\n  producer_run_id:string option ->\n  t\n\nval add_consumer :\n  root:string -> name:string -> version:string -> string -> unit\n\n(**/**)\n"
  },
  {
    "path": "packages/munin/lib/dune",
    "content": "(library\n (name munin)\n (public_name munin)\n (wrapped false)\n (private_modules env fs json_utils event_log index)\n (libraries unix jsont jsont.bytesrw sha))\n"
  },
  {
    "path": "packages/munin/lib/env.ml",
    "content": "let root () =\n  match Sys.getenv_opt \"RAVEN_TRACKING_DIR\" with\n  | Some dir -> dir\n  | None ->\n      let data_home =\n        match Sys.getenv_opt \"XDG_DATA_HOME\" with\n        | Some dir -> dir\n        | None ->\n            (* Fall back to the current directory when HOME is unset rather\n               than raising [Not_found]. *)\n            let home = Option.value (Sys.getenv_opt \"HOME\") ~default:\".\" in\n            Filename.concat (Filename.concat home \".local\") \"share\"\n      in\n      Filename.concat (Filename.concat data_home \"raven\") \"munin\"\n"
  },
  {
    "path": "packages/munin/lib/event_log.ml",
    "content": "type summary_mode = [ `Min | `Max | `Mean | `Last | `None ]\ntype goal = [ `Minimize | `Maximize ]\ntype media_kind = [ `Image | `Audio | `Table | `File ]\n\ntype event =\n  | Metric of { step : int; timestamp : float; key : string; value : float }\n  | Define_metric of {\n      key : string;\n      summary : summary_mode;\n      step_metric : string option;\n      goal : goal option;\n    }\n  | Media of {\n      step : int;\n      timestamp : float;\n      key : string;\n      kind : media_kind;\n      path : string;\n    }\n  | Summary of (string * Value.t) list\n  | Notes of string option\n  | Tags of string list\n  | Artifact_output of { name : string; version : string }\n  | Artifact_input of { name : string; version : string }\n  | Resumed of { at : float }\n  | Finished of { status : string; ended_at : float }\n\nlet json_of_optional_string = function\n  | Some value -> Jsont.Json.string value\n  | None -> Json_utils.null\n\nlet optional_string_of_json json =\n  match json with\n  | Jsont.Null _ -> Some None\n  | Jsont.String (value, _) -> Some (Some value)\n  | _ -> None\n\nlet summary_mode_to_string = function\n  | `Min -> \"min\"\n  | `Max -> \"max\"\n  | `Mean -> \"mean\"\n  | `Last -> \"last\"\n  | `None -> \"none\"\n\nlet summary_mode_of_string = function\n  | \"min\" -> Some `Min\n  | \"max\" -> Some `Max\n  | \"mean\" -> Some `Mean\n  | \"last\" -> Some `Last\n  | \"none\" -> Some `None\n  | _ -> None\n\nlet goal_to_string = function\n  | `Minimize -> \"minimize\"\n  | `Maximize -> \"maximize\"\n\nlet goal_of_string = function\n  | \"minimize\" -> Some `Minimize\n  | \"maximize\" -> Some `Maximize\n  | _ -> None\n\nlet media_kind_to_string = function\n  | `Image -> \"image\"\n  | `Audio -> \"audio\"\n  | `Table -> \"table\"\n  | `File -> \"file\"\n\nlet media_kind_of_string = function\n  | \"image\" -> Some `Image\n  | \"audio\" -> Some `Audio\n  | \"table\" -> Some `Table\n  | \"file\" -> Some `File\n  | _ -> None\n\nlet of_json 
json =\n  match Json_utils.json_mem \"type\" json |> Json_utils.json_string with\n  | Some \"metric\" -> (\n      match\n        ( Json_utils.json_mem \"step\" json |> Json_utils.json_number,\n          Json_utils.json_mem \"timestamp\" json |> Json_utils.json_number,\n          Json_utils.json_mem \"key\" json |> Json_utils.json_string,\n          Json_utils.json_mem \"value\" json |> Json_utils.json_number )\n      with\n      | Some step, Some timestamp, Some key, Some value ->\n          Some (Metric { step = int_of_float step; timestamp; key; value })\n      | _ -> None)\n  | Some \"define_metric\" -> (\n      match\n        ( Json_utils.json_mem \"key\" json |> Json_utils.json_string,\n          Json_utils.json_mem \"summary\" json\n          |> Json_utils.json_string\n          |> Fun.flip Option.bind summary_mode_of_string )\n      with\n      | Some key, Some summary ->\n          let step_metric =\n            Json_utils.json_mem \"step_metric\" json |> Json_utils.json_string\n          in\n          let goal =\n            Json_utils.json_mem \"goal\" json\n            |> Json_utils.json_string\n            |> Fun.flip Option.bind goal_of_string\n          in\n          Some (Define_metric { key; summary; step_metric; goal })\n      | _ -> None)\n  | Some \"media\" -> (\n      match\n        ( Json_utils.json_mem \"step\" json |> Json_utils.json_number,\n          Json_utils.json_mem \"ts\" json |> Json_utils.json_number,\n          Json_utils.json_mem \"key\" json |> Json_utils.json_string,\n          Json_utils.json_mem \"kind\" json\n          |> Json_utils.json_string\n          |> Fun.flip Option.bind media_kind_of_string,\n          Json_utils.json_mem \"path\" json |> Json_utils.json_string )\n      with\n      | Some step, Some ts, Some key, Some kind, Some path ->\n          Some\n            (Media { step = int_of_float step; timestamp = ts; key; kind; path })\n      | _ -> None)\n  | Some \"summary\" ->\n      Some\n        (Summary\n          
 (Json_utils.json_mem \"values\" json\n           |> Json_utils.json_assoc\n           |> List.map (fun (k, v) -> (k, Value.of_json v))))\n  | Some \"notes\" ->\n      Json_utils.json_mem \"value\" json\n      |> optional_string_of_json\n      |> Option.map (fun value -> Notes value)\n  | Some \"tags\" ->\n      Some\n        (Tags (Json_utils.json_mem \"values\" json |> Json_utils.json_string_list))\n  | Some \"artifact_output\" -> (\n      match\n        ( Json_utils.json_mem \"name\" json |> Json_utils.json_string,\n          Json_utils.json_mem \"version\" json |> Json_utils.json_string )\n      with\n      | Some name, Some version -> Some (Artifact_output { name; version })\n      | _ -> None)\n  | Some \"artifact_input\" -> (\n      match\n        ( Json_utils.json_mem \"name\" json |> Json_utils.json_string,\n          Json_utils.json_mem \"version\" json |> Json_utils.json_string )\n      with\n      | Some name, Some version -> Some (Artifact_input { name; version })\n      | _ -> None)\n  | Some \"resumed\" -> (\n      match Json_utils.json_mem \"at\" json |> Json_utils.json_number with\n      | Some at -> Some (Resumed { at })\n      | None -> None)\n  | Some \"finished\" -> (\n      match\n        ( Json_utils.json_mem \"status\" json |> Json_utils.json_string,\n          Json_utils.json_mem \"ended_at\" json |> Json_utils.json_number )\n      with\n      | Some status, Some ended_at -> Some (Finished { status; ended_at })\n      | _ -> None)\n  | _ -> None\n\nlet decode_line line =\n  try Json_utils.json_of_string line |> of_json with _ -> None\n\nlet to_json = function\n  | Metric { step; timestamp; key; value } ->\n      Json_utils.json_obj\n        [\n          (\"type\", Jsont.Json.string \"metric\");\n          (\"step\", Jsont.Json.int step);\n          (\"timestamp\", Jsont.Json.number timestamp);\n          (\"key\", Jsont.Json.string key);\n          (\"value\", Jsont.Json.number value);\n        ]\n  | Define_metric { key; summary; 
step_metric; goal } ->\n      Json_utils.json_obj\n        ([\n           (\"type\", Jsont.Json.string \"define_metric\");\n           (\"key\", Jsont.Json.string key);\n           (\"summary\", Jsont.Json.string (summary_mode_to_string summary));\n         ]\n        @ (match step_metric with\n          | Some sm -> [ (\"step_metric\", Jsont.Json.string sm) ]\n          | None -> [])\n        @\n        match goal with\n        | Some g -> [ (\"goal\", Jsont.Json.string (goal_to_string g)) ]\n        | None -> [])\n  | Media { step; timestamp; key; kind; path } ->\n      Json_utils.json_obj\n        [\n          (\"type\", Jsont.Json.string \"media\");\n          (\"step\", Jsont.Json.int step);\n          (\"ts\", Jsont.Json.number timestamp);\n          (\"key\", Jsont.Json.string key);\n          (\"kind\", Jsont.Json.string (media_kind_to_string kind));\n          (\"path\", Jsont.Json.string path);\n        ]\n  | Summary values ->\n      Json_utils.json_obj\n        [\n          (\"type\", Jsont.Json.string \"summary\");\n          ( \"values\",\n            Json_utils.json_obj\n              (List.map (fun (k, v) -> (k, Value.to_json v)) values) );\n        ]\n  | Notes value ->\n      Json_utils.json_obj\n        [\n          (\"type\", Jsont.Json.string \"notes\");\n          (\"value\", json_of_optional_string value);\n        ]\n  | Tags values ->\n      Json_utils.json_obj\n        [\n          (\"type\", Jsont.Json.string \"tags\");\n          (\"values\", Jsont.Json.list (List.map Jsont.Json.string values));\n        ]\n  | Artifact_output { name; version } ->\n      Json_utils.json_obj\n        [\n          (\"type\", Jsont.Json.string \"artifact_output\");\n          (\"name\", Jsont.Json.string name);\n          (\"version\", Jsont.Json.string version);\n        ]\n  | Artifact_input { name; version } ->\n      Json_utils.json_obj\n        [\n          (\"type\", Jsont.Json.string \"artifact_input\");\n          (\"name\", Jsont.Json.string 
name);\n          (\"version\", Jsont.Json.string version);\n        ]\n  | Resumed { at } ->\n      Json_utils.json_obj\n        [ (\"type\", Jsont.Json.string \"resumed\"); (\"at\", Jsont.Json.number at) ]\n  | Finished { status; ended_at } ->\n      Json_utils.json_obj\n        [\n          (\"type\", Jsont.Json.string \"finished\");\n          (\"status\", Jsont.Json.string status);\n          (\"ended_at\", Jsont.Json.number ended_at);\n        ]\n\nlet encode event = Json_utils.json_to_string (to_json event)\n\nlet read path =\n  if not (Sys.file_exists path) then []\n  else\n    let ic = open_in path in\n    let rec loop acc =\n      match input_line ic with\n      | line ->\n          let acc =\n            match decode_line line with\n            | Some event -> event :: acc\n            | None -> acc\n          in\n          loop acc\n      | exception End_of_file -> List.rev acc\n    in\n    Fun.protect ~finally:(fun () -> close_in ic) (fun () -> loop [])\n"
  },
  {
    "path": "packages/munin/lib/fs.ml",
    "content": "let is_directory path =\n  try (Unix.stat path).Unix.st_kind = Unix.S_DIR\n  with Unix.Unix_error _ -> false\n\nlet ensure_dir path =\n  let rec loop current =\n    if current = \"\" || current = Filename.dir_sep then ()\n    else if Sys.file_exists current then ()\n    else (\n      loop (Filename.dirname current);\n      Unix.mkdir current 0o755)\n  in\n  loop path\n\nlet list_entries path =\n  if Sys.file_exists path && is_directory path then\n    Sys.readdir path |> Array.to_list |> List.sort String.compare\n  else []\n\nlet list_dirs path =\n  List.filter\n    (fun name -> is_directory (Filename.concat path name))\n    (list_entries path)\n\nlet read_file path =\n  let ic = open_in_bin path in\n  Fun.protect\n    ~finally:(fun () -> close_in ic)\n    (fun () -> really_input_string ic (in_channel_length ic))\n\nlet write_file path text =\n  ensure_dir (Filename.dirname path);\n  let oc = open_out_bin path in\n  Fun.protect\n    ~finally:(fun () -> close_out oc)\n    (fun () -> output_string oc text)\n\nlet append_line path line =\n  ensure_dir (Filename.dirname path);\n  let oc = open_out_gen [ Open_creat; Open_append; Open_binary ] 0o644 path in\n  Fun.protect\n    ~finally:(fun () -> close_out oc)\n    (fun () ->\n      output_string oc line;\n      output_char oc '\\n')\n\nlet copy_file src dst =\n  ensure_dir (Filename.dirname dst);\n  let ic = open_in_bin src in\n  let oc = open_out_bin dst in\n  let buffer = Bytes.create 65536 in\n  Fun.protect\n    ~finally:(fun () ->\n      close_in ic;\n      close_out oc)\n    (fun () ->\n      let rec loop () =\n        let count = input ic buffer 0 (Bytes.length buffer) in\n        if count > 0 then (\n          output oc buffer 0 count;\n          loop ())\n      in\n      loop ())\n\nlet rec copy_tree src dst =\n  if is_directory src then (\n    ensure_dir dst;\n    List.iter\n      (fun name ->\n        copy_tree (Filename.concat src name) (Filename.concat dst name))\n      (list_entries src))\n  
else copy_file src dst\n\nlet rec remove_tree path =\n  if is_directory path then (\n    List.iter\n      (fun name -> remove_tree (Filename.concat path name))\n      (list_entries path);\n    Unix.rmdir path)\n  else Sys.remove path\n\nlet rec iter_tree ?(rel = \"\") root f =\n  let path = if rel = \"\" then root else Filename.concat root rel in\n  if is_directory path then (\n    f rel `Dir;\n    List.iter\n      (fun name ->\n        let child = if rel = \"\" then name else Filename.concat rel name in\n        iter_tree ~rel:child root f)\n      (list_entries path))\n  else f rel `File\n\nlet sha256_file path = Sha256.to_hex (Sha256.file_fast path)\n\nlet sha256_path path =\n  if is_directory path then (\n    let ctx = Sha256.init () in\n    iter_tree path (fun rel kind ->\n        if rel <> \"\" then\n          match kind with\n          | `Dir ->\n              Sha256.update_string ctx \"dir:\";\n              Sha256.update_string ctx rel;\n              Sha256.update_string ctx \"\\n\"\n          | `File ->\n              Sha256.update_string ctx \"file:\";\n              Sha256.update_string ctx rel;\n              Sha256.update_string ctx \":\";\n              Sha256.update_string ctx (sha256_file (Filename.concat path rel));\n              Sha256.update_string ctx \"\\n\");\n    Sha256.to_hex (Sha256.finalize ctx))\n  else sha256_file path\n\nlet file_size path =\n  try (Unix.stat path).Unix.st_size with Unix.Unix_error _ -> 0\n\nlet rec path_size path =\n  if is_directory path then\n    list_entries path\n    |> List.fold_left\n         (fun acc name -> acc + path_size (Filename.concat path name))\n         0\n  else file_size path\n\nlet command_output command =\n  let ic = Unix.open_process_in (command ^ \" 2>/dev/null\") in\n  let output =\n    Fun.protect\n      ~finally:(fun () -> ignore (Unix.close_process_in ic))\n      (fun () ->\n        let rec loop acc =\n          match input_line ic with\n          | line -> loop (line :: acc)\n          | 
exception End_of_file -> List.rev acc\n        in\n        String.concat \"\\n\" (loop []))\n  in\n  if output = \"\" then None else Some output\n"
  },
  {
    "path": "packages/munin/lib/index.ml",
    "content": "type status = [ `running | `finished | `failed | `killed ]\n\ntype entry = {\n  experiment : string;\n  name : string option;\n  group : string option;\n  parent_id : string option;\n  status : status;\n  tags : string list;\n  started_at : float;\n}\n\nlet index_path root = Filename.concat root \"index.json\"\n\nlet status_to_string = function\n  | `running -> \"running\"\n  | `finished -> \"finished\"\n  | `failed -> \"failed\"\n  | `killed -> \"killed\"\n\nlet entry_to_json entry =\n  Json_utils.json_obj\n    ([\n       (\"experiment\", Jsont.Json.string entry.experiment);\n       (\"status\", Jsont.Json.string (status_to_string entry.status));\n       (\"tags\", Jsont.Json.list (List.map Jsont.Json.string entry.tags));\n       (\"started_at\", Jsont.Json.number entry.started_at);\n     ]\n    @ (match entry.name with\n      | Some n -> [ (\"name\", Jsont.Json.string n) ]\n      | None -> [])\n    @ (match entry.group with\n      | Some g -> [ (\"group\", Jsont.Json.string g) ]\n      | None -> [])\n    @\n    match entry.parent_id with\n    | Some p -> [ (\"parent_id\", Jsont.Json.string p) ]\n    | None -> [])\n\nlet entry_of_json json =\n  match Json_utils.json_mem \"experiment\" json |> Json_utils.json_string with\n  | None -> None\n  | Some experiment ->\n      let status : status =\n        match Json_utils.json_mem \"status\" json |> Json_utils.json_string with\n        | Some \"finished\" -> `finished\n        | Some \"failed\" -> `failed\n        | Some \"killed\" -> `killed\n        | _ -> `running\n      in\n      let tags =\n        Json_utils.json_mem \"tags\" json |> Json_utils.json_string_list\n      in\n      let started_at =\n        Option.value\n          (Json_utils.json_mem \"started_at\" json |> Json_utils.json_number)\n          ~default:0.0\n      in\n      Some\n        {\n          experiment;\n          name = Json_utils.json_mem \"name\" json |> Json_utils.json_string;\n          group = Json_utils.json_mem \"group\" 
json |> Json_utils.json_string;\n          parent_id =\n            Json_utils.json_mem \"parent_id\" json |> Json_utils.json_string;\n          status;\n          tags;\n          started_at;\n        }\n\nlet read root =\n  let path = index_path root in\n  if not (Sys.file_exists path) then None\n  else\n    try\n      let json = Fs.read_file path |> Json_utils.json_of_string in\n      let tbl = Hashtbl.create 64 in\n      List.iter\n        (fun (id, value) ->\n          match entry_of_json value with\n          | Some entry -> Hashtbl.replace tbl id entry\n          | None -> ())\n        (Json_utils.json_assoc json);\n      Some tbl\n    with _ -> None\n\nlet write root tbl =\n  let entries =\n    Hashtbl.to_seq tbl |> List.of_seq\n    |> List.sort (fun (a, _) (b, _) -> String.compare b a)\n    |> List.map (fun (id, entry) -> (id, entry_to_json entry))\n  in\n  let json = Json_utils.json_obj entries in\n  Fs.write_file (index_path root)\n    (Json_utils.json_to_string ~pretty:true json ^ \"\\n\")\n\nlet modify root f =\n  let tbl =\n    match read root with Some tbl -> tbl | None -> Hashtbl.create 16\n  in\n  f tbl;\n  write root tbl\n\nlet add root ~id entry = modify root (fun tbl -> Hashtbl.replace tbl id entry)\n\nlet update_status root ~id status =\n  modify root (fun tbl ->\n      match Hashtbl.find_opt tbl id with\n      | Some entry -> Hashtbl.replace tbl id { entry with status }\n      | None -> ())\n\nlet remove root ~id = modify root (fun tbl -> Hashtbl.remove tbl id)\n"
  },
  {
    "path": "packages/munin/lib/json_utils.ml",
    "content": "let null = Jsont.Null ((), Jsont.Meta.none)\n\nlet json_obj pairs =\n  Jsont.Json.object'\n    (List.map (fun (key, value) -> (Jsont.Json.name key, value)) pairs)\n\nlet json_mem name = function\n  | Jsont.Object (members, _) -> (\n      match Jsont.Json.find_mem name members with\n      | Some (_, value) -> value\n      | None -> null)\n  | _ -> null\n\nlet json_string = function Jsont.String (value, _) -> Some value | _ -> None\nlet json_number = function Jsont.Number (value, _) -> Some value | _ -> None\nlet json_bool = function Jsont.Bool (value, _) -> Some value | _ -> None\n\nlet json_string_list = function\n  | Jsont.Array (items, _) ->\n      List.filter_map\n        (function Jsont.String (value, _) -> Some value | _ -> None)\n        items\n  | _ -> []\n\nlet json_assoc = function\n  | Jsont.Object (members, _) ->\n      List.map (fun ((key, _), value) -> (key, value)) members\n  | _ -> []\n\nlet json_of_string text =\n  match Jsont_bytesrw.decode_string Jsont.json text with\n  | Ok json -> json\n  | Error message -> failwith message\n\nlet json_to_string ?(pretty = false) json =\n  let format = if pretty then Jsont.Indent else Jsont.Minify in\n  match Jsont_bytesrw.encode_string ~format Jsont.json json with\n  | Ok text -> text\n  | Error message -> failwith message\n"
  },
  {
    "path": "packages/munin/lib/munin.ml",
    "content": "module Value = Value\nmodule Artifact = Artifact\nmodule Run = Run\nmodule Run_monitor = Run_monitor\nmodule Session = Session\nmodule Store = Store\n"
  },
  {
    "path": "packages/munin/lib/munin.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Local experiment tracking for Raven.\n\n    Munin is a local-first experiment tracker. Start with {!Session} to write\n    runs, {!Run} to read them back, {!Run_monitor} for live polling, {!Store}\n    for discovery, and {!Artifact} for versioned payloads.\n\n    {1:library Library [munin]}\n    {!modules:Value Session Run Run_monitor Store Artifact} *)\n\nmodule Value = Value\nmodule Artifact = Artifact\nmodule Run = Run\nmodule Run_monitor = Run_monitor\nmodule Session = Session\nmodule Store = Store\n"
  },
  {
    "path": "packages/munin/lib/run.ml",
    "content": "type status = [ `running | `finished | `failed | `killed ]\ntype metric = { step : int; timestamp : float; value : float }\n\ntype provenance = {\n  notes : string option;\n  command : string list;\n  cwd : string;\n  hostname : string option;\n  pid : int;\n  git_commit : string option;\n  git_dirty : bool option;\n  env : (string * string) list;\n}\n\ntype metric_def = {\n  summary : [ `Min | `Max | `Mean | `Last | `None ];\n  step_metric : string option;\n  goal : [ `Minimize | `Maximize ] option;\n}\n\ntype media_entry = {\n  step : int;\n  timestamp : float;\n  kind : [ `Image | `Audio | `Table | `File ];\n  path : string;\n}\n\n(* Heavy data loaded on demand from manifest + events *)\ntype full = {\n  params : (string * Value.t) list;\n  provenance : provenance;\n  ended_at : float option;\n  summary : (string * Value.t) list;\n  latest_metrics : (string * metric) list;\n  histories : (string * metric list) list;\n  metric_defs : (string * metric_def) list;\n  media : (string * media_entry list) list;\n  input_artifacts : Artifact.t list;\n  output_artifacts : Artifact.t list;\n}\n\n(* Header fields are always available without I/O *)\ntype t = {\n  root : string;\n  id : string;\n  dir : string;\n  experiment_name : string;\n  name : string option;\n  group : string option;\n  parent_id : string option;\n  started_at : float;\n  status : status;\n  tags : string list;\n  full : full Lazy.t;\n}\n\nlet schema_version = 2\n\n(* Header accessors — no I/O *)\n\nlet id t = t.id\nlet dir t = t.dir\nlet experiment_name t = t.experiment_name\nlet name t = t.name\nlet group t = t.group\nlet parent_id t = t.parent_id\nlet started_at t = t.started_at\nlet status t = t.status\nlet tags t = t.tags\nlet resumable t = t.status = `running\n\n(* Full accessors — forces lazy on first access *)\n\nlet full t = Lazy.force t.full\nlet params t = (full t).params\nlet provenance t = (full t).provenance\nlet notes t = (full t).provenance.notes\nlet ended_at t = (full 
t).ended_at\nlet summary t = (full t).summary\nlet find_param t key = List.assoc_opt key (full t).params\nlet find_summary t key = List.assoc_opt key (full t).summary\nlet latest_metrics t = (full t).latest_metrics\nlet metric_keys t = List.map fst (full t).latest_metrics\nlet input_artifacts t = (full t).input_artifacts\nlet output_artifacts t = (full t).output_artifacts\nlet metric_defs t = (full t).metric_defs\nlet media_keys t = List.map fst (full t).media\n\nlet media_history t key =\n  match List.assoc_opt key (full t).media with\n  | Some entries -> entries\n  | None -> []\n\nlet metric_history t key =\n  match List.assoc_opt key (full t).histories with\n  | Some history -> history\n  | None -> []\n\n(* Paths *)\n\nlet run_dir ~root ~experiment id =\n  Filename.concat\n    (Filename.concat\n       (Filename.concat (Filename.concat root \"experiments\") experiment)\n       \"runs\")\n    id\n\nlet manifest_path root experiment id =\n  Filename.concat (run_dir ~root ~experiment id) \"run.json\"\n\nlet events_path dir = Filename.concat dir \"events.jsonl\"\n\n(* Parsing helpers *)\n\nlet status_of_string = function\n  | \"finished\" -> `finished\n  | \"failed\" -> `failed\n  | \"killed\" -> `killed\n  | _ -> `running\n\nlet push_tag seen acc tag =\n  if Hashtbl.mem seen tag then acc\n  else (\n    Hashtbl.replace seen tag ();\n    tag :: acc)\n\nlet provenance_of_json json =\n  let env_json = Json_utils.json_mem \"env\" json in\n  {\n    notes = Json_utils.json_mem \"notes\" json |> Json_utils.json_string;\n    command = Json_utils.json_mem \"command\" json |> Json_utils.json_string_list;\n    cwd =\n      Option.value\n        (Json_utils.json_mem \"cwd\" json |> Json_utils.json_string)\n        ~default:\"\";\n    hostname = Json_utils.json_mem \"hostname\" json |> Json_utils.json_string;\n    pid =\n      Option.value\n        (Json_utils.json_mem \"pid\" json |> Json_utils.json_number)\n        ~default:0.0\n      |> int_of_float;\n    git_commit = 
Json_utils.json_mem \"git_commit\" json |> Json_utils.json_string;\n    git_dirty = Json_utils.json_mem \"git_dirty\" json |> Json_utils.json_bool;\n    env =\n      Json_utils.json_assoc env_json\n      |> List.filter_map (fun (key, value) ->\n          Json_utils.json_string value |> Option.map (fun text -> (key, text)));\n  }\n\nlet sorted_of_hashtbl tbl =\n  Hashtbl.to_seq tbl |> List.of_seq\n  |> List.sort (fun (a, _) (b, _) -> String.compare a b)\n\n(* Materialize full data from manifest JSON + event log *)\nlet materialize_full root dir manifest_json =\n  let tag_seen = Hashtbl.create 8 in\n  let initial_tags =\n    Json_utils.json_mem \"tags\" manifest_json\n    |> Json_utils.json_string_list\n    |> List.fold_left (push_tag tag_seen) []\n    |> List.rev\n  in\n  let params =\n    Json_utils.json_mem \"params\" manifest_json\n    |> Json_utils.json_assoc\n    |> List.map (fun (k, v) -> (k, Value.of_json v))\n  in\n  let summary_table = Hashtbl.create 8 in\n  let history_table = Hashtbl.create 16 in\n  let latest_table = Hashtbl.create 16 in\n  let metric_def_table = Hashtbl.create 8 in\n  let media_table = Hashtbl.create 8 in\n  let input_seen = Hashtbl.create 8 in\n  let output_seen = Hashtbl.create 8 in\n  let input_artifacts = ref [] in\n  let output_artifacts = ref [] in\n  let tags = ref initial_tags in\n  let status = ref `running in\n  let ended_at = ref None in\n  let notes =\n    ref\n      (Json_utils.json_mem \"provenance\" manifest_json |> provenance_of_json)\n        .notes\n  in\n  List.iter\n    (function\n      | Event_log.Metric { step; timestamp; key; value } ->\n          let metric = { step; timestamp; value } in\n          let history =\n            match Hashtbl.find_opt history_table key with\n            | Some history -> history\n            | None -> []\n          in\n          Hashtbl.replace history_table key (metric :: history);\n          Hashtbl.replace latest_table key metric\n      | Define_metric { key; summary; 
step_metric; goal } ->\n          Hashtbl.replace metric_def_table key { summary; step_metric; goal }\n      | Media { step; timestamp; key; kind; path } ->\n          let abs_path = Filename.concat dir path in\n          let entry = { step; timestamp; kind; path = abs_path } in\n          let prev =\n            match Hashtbl.find_opt media_table key with\n            | Some l -> l\n            | None -> []\n          in\n          Hashtbl.replace media_table key (entry :: prev)\n      | Summary values ->\n          List.iter\n            (fun (key, value) -> Hashtbl.replace summary_table key value)\n            values\n      | Notes value -> notes := value\n      | Tags values -> tags := List.fold_left (push_tag tag_seen) !tags values\n      | Artifact_output { name; version } ->\n          let key = name ^ \":\" ^ version in\n          if not (Hashtbl.mem output_seen key) then (\n            Hashtbl.replace output_seen key ();\n            match Artifact.load ~root ~name ~version with\n            | Some artifact -> output_artifacts := artifact :: !output_artifacts\n            | None -> ())\n      | Artifact_input { name; version } ->\n          let key = name ^ \":\" ^ version in\n          if not (Hashtbl.mem input_seen key) then (\n            Hashtbl.replace input_seen key ();\n            match Artifact.load ~root ~name ~version with\n            | Some artifact -> input_artifacts := artifact :: !input_artifacts\n            | None -> ())\n      | Resumed _ ->\n          ended_at := None;\n          status := `running\n      | Finished { status = status_string; ended_at = finished_at } ->\n          status := status_of_string status_string;\n          ended_at := Some finished_at)\n    (Event_log.read (events_path dir));\n  let latest_metrics = sorted_of_hashtbl latest_table in\n  let histories =\n    sorted_of_hashtbl history_table\n    |> List.map (fun (key, history) -> (key, List.rev history))\n  in\n  let metric_defs = sorted_of_hashtbl 
metric_def_table in\n  (* Auto-compute summaries from define_metric declarations. Explicit\n     set_summary always wins; auto-summary only fills gaps. *)\n  Hashtbl.iter\n    (fun key (def : metric_def) ->\n      if not (Hashtbl.mem summary_table key) then\n        match Hashtbl.find_opt history_table key with\n        | None | Some [] -> ()\n        | Some history ->\n            let auto =\n              match def.summary with\n              | `Min ->\n                  Some\n                    (List.fold_left\n                       (fun acc (m : metric) -> Float.min acc m.value)\n                       Float.infinity history)\n              | `Max ->\n                  Some\n                    (List.fold_left\n                       (fun acc (m : metric) -> Float.max acc m.value)\n                       Float.neg_infinity history)\n              | `Mean ->\n                  let n = List.length history in\n                  let sum =\n                    List.fold_left\n                      (fun acc (m : metric) -> acc +. m.value)\n                      0. history\n                  in\n                  Some (sum /. 
Float.of_int n)\n              | `Last -> Some (List.hd history).value\n              | `None -> None\n            in\n            Option.iter\n              (fun v -> Hashtbl.replace summary_table key (`Float v))\n              auto)\n    metric_def_table;\n  let summary = sorted_of_hashtbl summary_table in\n  let media =\n    sorted_of_hashtbl media_table\n    |> List.map (fun (key, entries) -> (key, List.rev entries))\n  in\n  let base_provenance =\n    Json_utils.json_mem \"provenance\" manifest_json |> provenance_of_json\n  in\n  ( !status,\n    List.rev !tags,\n    {\n      params;\n      provenance = { base_provenance with notes = !notes };\n      ended_at = !ended_at;\n      summary;\n      latest_metrics;\n      histories;\n      metric_defs;\n      media;\n      input_artifacts = List.rev !input_artifacts;\n      output_artifacts = List.rev !output_artifacts;\n    } )\n\n(* Full eager load — reads manifest + events immediately *)\nlet load ~root ~experiment ~id =\n  let path = manifest_path root experiment id in\n  if not (Sys.file_exists path) then None\n  else\n    try\n      let json = Fs.read_file path |> Json_utils.json_of_string in\n      let schema_ok =\n        match\n          Json_utils.json_mem \"schema_version\" json |> Json_utils.json_number\n        with\n        | Some value -> int_of_float value = schema_version\n        | None -> false\n      in\n      if not schema_ok then None\n      else\n        let dir = run_dir ~root ~experiment id in\n        let name = Json_utils.json_mem \"name\" json |> Json_utils.json_string in\n        let group =\n          Json_utils.json_mem \"group\" json |> Json_utils.json_string\n        in\n        let parent_id =\n          Json_utils.json_mem \"parent_id\" json |> Json_utils.json_string\n        in\n        let started_at =\n          Option.value\n            (Json_utils.json_mem \"started_at\" json |> Json_utils.json_number)\n            ~default:0.0\n        in\n        let status, tags, full_data 
= materialize_full root dir json in\n        Some\n          {\n            root;\n            id;\n            dir;\n            experiment_name = experiment;\n            name;\n            group;\n            parent_id;\n            started_at;\n            status;\n            tags;\n            full = Lazy.from_val full_data;\n          }\n    with _ -> None\n\n(* Lazy load from index — reads manifest + events only when full data\n   accessed *)\nlet load_from_index ~root id (entry : Index.entry) =\n  let dir = run_dir ~root ~experiment:entry.experiment id in\n  let full =\n    lazy\n      (let path = manifest_path root entry.experiment id in\n       let json = Fs.read_file path |> Json_utils.json_of_string in\n       let _status, _tags, full_data = materialize_full root dir json in\n       full_data)\n  in\n  {\n    root;\n    id;\n    dir;\n    experiment_name = entry.experiment;\n    name = entry.name;\n    group = entry.group;\n    parent_id = entry.parent_id;\n    started_at = entry.started_at;\n    status = entry.status;\n    tags = entry.tags;\n    full;\n  }\n\nlet list ~root ~experiment ?status:status_filter ?tag ?parent\n    ?group:group_filter () =\n  let runs_dir =\n    Filename.concat\n      (Filename.concat (Filename.concat root \"experiments\") experiment)\n      \"runs\"\n  in\n  Fs.list_dirs runs_dir\n  |> List.filter_map (fun id -> load ~root ~experiment ~id)\n  |> List.filter (fun run ->\n      Option.fold ~none:true ~some:(fun s -> status run = s) status_filter\n      && Option.fold ~none:true\n           ~some:(fun tag -> List.exists (String.equal tag) (tags run))\n           tag\n      && Option.fold ~none:true\n           ~some:(fun parent -> parent_id run = Some parent)\n           parent\n      && Option.fold ~none:true ~some:(fun g -> group run = Some g) group_filter)\n  |> List.sort (fun a b -> String.compare (id b) (id a))\n\nlet children t = list ~root:t.root ~experiment:t.experiment_name ~parent:t.id ()\n"
  },
  {
    "path": "packages/munin/lib/run.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Persisted tracked runs.\n\n    Runs are the durable tracked objects of Munin. They expose immutable\n    manifest data together with materialized state rebuilt from the append-only\n    event log. *)\n\n(** {1:types Types} *)\n\ntype status =\n  [ `running  (** Run is actively logging. *)\n  | `finished  (** Run completed successfully. *)\n  | `failed  (** Run terminated with an error. *)\n  | `killed  (** Run was manually terminated. *) ]\n(** The type for run status values. *)\n\ntype metric = {\n  step : int;  (** Step counter at which the sample was logged. *)\n  timestamp : float;  (** Wall-clock time of the sample. *)\n  value : float;  (** Scalar metric value. *)\n}\n(** The type for scalar metric observations. *)\n\ntype provenance = {\n  notes : string option;  (** Free-form run note. *)\n  command : string list;  (** Command line that started the run. *)\n  cwd : string;  (** Working directory at run start. *)\n  hostname : string option;  (** Machine hostname. *)\n  pid : int;  (** Process identifier. *)\n  git_commit : string option;  (** Git HEAD commit hash. *)\n  git_dirty : bool option;  (** Whether the working tree was dirty. *)\n  env : (string * string) list;  (** Captured environment variables. *)\n}\n(** The type for run provenance. *)\n\ntype metric_def = {\n  summary : [ `Min | `Max | `Mean | `Last | `None ];\n      (** How the run summary value is computed from history. *)\n  step_metric : string option;\n      (** Another metric to use as x-axis (e.g. [\"epoch\"]). *)\n  goal : [ `Minimize | `Maximize ] option;\n      (** Whether lower or higher values are better. *)\n}\n(** The type for metric definitions. Declares how a metric should be summarised\n    and plotted. 
*)\n\ntype media_entry = {\n  step : int;  (** Step counter at which the media was logged. *)\n  timestamp : float;  (** Wall-clock time. *)\n  kind : [ `Image | `Audio | `Table | `File ];  (** Media type for renderers. *)\n  path : string;  (** Absolute path to stored file. *)\n}\n(** The type for media log entries. *)\n\ntype t\n(** The type for run handles. *)\n\n(** {1:loading Loading} *)\n\nval load : root:string -> experiment:string -> id:string -> t option\n(** [load ~root ~experiment ~id] is the run [id] in [experiment], if present.\n    Returns [None] if the manifest is missing, has an incompatible schema\n    version, or cannot be read. *)\n\nval list :\n  root:string ->\n  experiment:string ->\n  ?status:status ->\n  ?tag:string ->\n  ?parent:string ->\n  ?group:string ->\n  unit ->\n  t list\n(** [list ~root ~experiment ()] is the runs persisted for [experiment], filtered\n    when the optional selectors are provided. [parent] filters by parent run\n    identifier. Results are sorted by identifier descending (newest first). *)\n\n(** {1:identity Identity} *)\n\nval id : t -> string\n(** [id t] is the unique run identifier. *)\n\nval dir : t -> string\n(** [dir t] is the absolute path to the run directory. *)\n\nval experiment_name : t -> string\n(** [experiment_name t] is the containing experiment name. *)\n\nval name : t -> string option\n(** [name t] is the optional human-readable run name. *)\n\nval group : t -> string option\n(** [group t] is the optional run group for flat grouping (e.g. sweeps). *)\n\nval parent_id : t -> string option\n(** [parent_id t] is the parent run identifier, if any. *)\n\n(** {1:status Status} *)\n\nval started_at : t -> float\n(** [started_at t] is the run start timestamp. *)\n\nval ended_at : t -> float option\n(** [ended_at t] is the run completion timestamp, if any. *)\n\nval status : t -> status\n(** [status t] is the current run status. 
*)\n\nval resumable : t -> bool\n(** [resumable t] is [true] iff [status t] is [`running]. *)\n\n(** {1:provenance Provenance} *)\n\nval provenance : t -> provenance\n(** [provenance t] is the run provenance. *)\n\nval notes : t -> string option\n(** [notes t] is the latest run note, if any. *)\n\n(** {1:metadata Metadata} *)\n\nval tags : t -> string list\n(** [tags t] is the run tag list. *)\n\nval params : t -> (string * Value.t) list\n(** [params t] is the immutable run parameter set. *)\n\nval find_param : t -> string -> Value.t option\n(** [find_param t key] is the parameter value for [key], if present. *)\n\nval summary : t -> (string * Value.t) list\n(** [summary t] is the run summary map, sorted alphabetically by key. Later\n    writes replace earlier values. *)\n\nval find_summary : t -> string -> Value.t option\n(** [find_summary t key] is the summary value for [key], if present. *)\n\n(** {1:metrics Metrics} *)\n\nval metric_keys : t -> string list\n(** [metric_keys t] is the sorted list of metric keys observed in [t]. *)\n\nval latest_metrics : t -> (string * metric) list\n(** [latest_metrics t] is the latest scalar metric value per key, sorted\n    alphabetically by key. *)\n\nval metric_history : t -> string -> metric list\n(** [metric_history t key] is the full history for [key] in chronological order.\n    Returns the empty list if [key] has no samples. *)\n\nval metric_defs : t -> (string * metric_def) list\n(** [metric_defs t] is the metric definitions declared via\n    {!Session.define_metric}, sorted alphabetically by key. *)\n\n(** {1:media Media} *)\n\nval media_keys : t -> string list\n(** [media_keys t] is the sorted list of media keys logged in [t]. *)\n\nval media_history : t -> string -> media_entry list\n(** [media_history t key] is the media entries for [key] in chronological order.\n    Returns the empty list if [key] has no entries. 
*)\n\n(** {1:relations Relations} *)\n\nval children : t -> t list\n(** [children t] is the list of child runs of [t]. Performs a filesystem scan of\n    the experiment directory. *)\n\nval input_artifacts : t -> Artifact.t list\n(** [input_artifacts t] is the list of artifacts consumed by [t]. *)\n\nval output_artifacts : t -> Artifact.t list\n(** [output_artifacts t] is the list of artifacts produced by [t]. *)\n\n(**/**)\n\nval status_of_string : string -> status\nval load_from_index : root:string -> string -> Index.entry -> t\n\n(**/**)\n"
  },
  {
    "path": "packages/munin/lib/run_monitor.ml",
    "content": "type live_status = [ `Live | `Stopped | `Done of Run.status ]\n\ntype t = {\n  run : Run.t;\n  mutable ic : in_channel option;\n  mutable pos : int;\n  mutable last_event_time : float;\n  latest : (string, Run.metric) Hashtbl.t;\n  histories : (string, (int * float) list) Hashtbl.t;\n  defs : (string, Run.metric_def) Hashtbl.t;\n  mutable finished : Run.status option;\n}\n\nlet stopped_timeout = 5.0\nlet events_path dir = Filename.concat dir \"events.jsonl\"\n\nlet start run =\n  {\n    run;\n    ic = None;\n    pos = 0;\n    last_event_time = Unix.gettimeofday ();\n    latest = Hashtbl.create 16;\n    histories = Hashtbl.create 16;\n    defs = Hashtbl.create 8;\n    finished = None;\n  }\n\nlet close t =\n  Option.iter close_in t.ic;\n  t.ic <- None\n\nlet read_new_events t =\n  let path = events_path (Run.dir t.run) in\n  if not (Sys.file_exists path) then []\n  else\n    let stat = Unix.stat path in\n    let size = stat.Unix.st_size in\n    (* Detect truncation/rotation before the early return below; otherwise a\n       shrunken file would short-circuit on [size <= t.pos] forever and never\n       be re-read. *)\n    if size < t.pos then (\n      Option.iter close_in t.ic;\n      t.ic <- None;\n      t.pos <- 0);\n    if size <= t.pos then []\n    else\n      let ic =\n        match t.ic with\n        | Some ic -> ic\n        | None ->\n            let ic = open_in path in\n            t.ic <- Some ic;\n            ic\n      in\n      seek_in ic t.pos;\n      let events = ref [] in\n      (try\n         while true do\n           let line = input_line ic in\n           match Event_log.decode_line line with\n           | Some event -> events := event :: !events\n           | None -> ()\n         done\n       with End_of_file -> ());\n      t.pos <- pos_in ic;\n      List.rev !events\n\nlet process_event t = function\n  | Event_log.Metric { step; timestamp; key; value } ->\n      let metric = Run.{ step; timestamp; value } in\n      Hashtbl.replace t.latest key metric;\n 
     let history =\n        match Hashtbl.find_opt t.histories key with Some h -> h | None -> []\n      in\n      Hashtbl.replace t.histories key ((step, value) :: history);\n      t.last_event_time <- timestamp\n  | Define_metric { key; summary = s; step_metric; goal } ->\n      Hashtbl.replace t.defs key { Run.summary = s; step_metric; goal }\n  | Finished { status; ended_at = _ } ->\n      t.finished <- Some (Run.status_of_string status)\n  | Resumed _ ->\n      t.finished <- None;\n      t.last_event_time <- Unix.gettimeofday ()\n  | Media _ | Summary _ | Notes _ | Tags _ | Artifact_output _\n  | Artifact_input _ ->\n      ()\n\nlet poll t =\n  let events = read_new_events t in\n  List.iter (process_event t) events\n\nlet live_status t =\n  match t.finished with\n  | Some status -> `Done status\n  | None ->\n      let elapsed = Unix.gettimeofday () -. t.last_event_time in\n      if elapsed > stopped_timeout then `Stopped else `Live\n\nlet metrics t =\n  Hashtbl.to_seq t.latest |> List.of_seq\n  |> List.sort (fun (a, _) (b, _) -> String.compare a b)\n\nlet history t key =\n  match Hashtbl.find_opt t.histories key with\n  | Some h -> List.rev h\n  | None -> []\n\nlet metric_defs t =\n  Hashtbl.to_seq t.defs |> List.of_seq\n  |> List.sort (fun (a, _) (b, _) -> String.compare a b)\n\nlet contains_sub ~sub s =\n  let ls = String.length sub and lk = String.length s in\n  ls <= lk\n  &&\n  let rec loop i = i <= lk - ls && (String.sub s i ls = sub || loop (i + 1)) in\n  loop 0\n\nlet is_loss_like key =\n  let key = String.lowercase_ascii key in\n  contains_sub ~sub:\"loss\" key || contains_sub ~sub:\"error\" key\n\nlet best t key =\n  match Hashtbl.find_opt t.histories key with\n  | None | Some [] -> None\n  | Some history ->\n      let minimize =\n        match Hashtbl.find_opt t.defs key with\n        | Some { goal = Some `Minimize; _ } -> true\n        | Some { goal = Some `Maximize; _ } -> false\n        | _ -> is_loss_like key\n      in\n      let compare =\n      
  if minimize then fun a b -> Float.compare a b\n        else fun a b -> Float.compare b a\n      in\n      let best_step, best_value =\n        List.fold_left\n          (fun (bs, bv) (s, v) -> if compare v bv < 0 then (s, v) else (bs, bv))\n          (List.hd history) (List.tl history)\n      in\n      (* Histories retain only (step, value) pairs, so approximate the best\n         observation's timestamp with the latest one. *)\n      let timestamp =\n        match Hashtbl.find_opt t.latest key with\n        | Some m -> m.timestamp\n        | None -> 0.0\n      in\n      Some Run.{ step = best_step; timestamp; value = best_value }\n"
  },
  {
    "path": "packages/munin/lib/run_monitor.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Incremental run monitoring.\n\n    [Run_monitor] polls a run's event log and maintains aggregated metric state.\n    Used by the TUI and future web dashboard for live updates without re-reading\n    the entire log. *)\n\ntype t\n(** The type for run monitors. *)\n\ntype live_status =\n  [ `Live  (** Events are still arriving. *)\n  | `Stopped  (** No events for the timeout period (5 seconds). *)\n  | `Done of Run.status  (** [Finished] event received. *) ]\n(** The type for live run status. *)\n\nval start : Run.t -> t\n(** [start run] opens the run's event log for incremental reading. The event log\n    file handle is acquired lazily on the first {!poll}. Must be paired with\n    {!close}. *)\n\nval poll : t -> unit\n(** [poll t] reads new events since the last poll and updates state. *)\n\nval close : t -> unit\n(** [close t] releases the file handle. *)\n\n(** {1:state State} *)\n\nval live_status : t -> live_status\n(** [live_status t] is the current run status based on event activity. *)\n\nval metrics : t -> (string * Run.metric) list\n(** [metrics t] is the latest metric value per key, sorted alphabetically. *)\n\nval history : t -> string -> (int * float) list\n(** [history t key] is the [(step, value)] history for [key] in chronological\n    order. Returns the empty list if [key] has no samples. *)\n\nval metric_defs : t -> (string * Run.metric_def) list\n(** [metric_defs t] is the metric definitions declared so far, sorted\n    alphabetically by key. 
*)\n\nval best : t -> string -> Run.metric option\n(** [best t key] is the best observation for [key] according to\n    {!Session.define_metric} goal, or a heuristic if undefined (keys containing\n    \"loss\" or \"error\" prefer lower values). Returns [None] if no samples exist\n    for [key]. *)\n"
  },
  {
    "path": "packages/munin/lib/session.ml",
    "content": "let err_not_resumable = \"Munin.Session.resume: run is not resumable\"\nlet err_missing_manifest = \"Munin.Session.run: missing run manifest\"\nlet err_closed_session = \"Munin.Session.log_artifact: closed session\"\nlet err_path_missing = \"Munin.Session.log_artifact: path does not exist: \"\nlet err_media_missing = \"Munin.Session.log_media: path does not exist: \"\n\ntype t = {\n  root : string;\n  experiment : string;\n  id : string;\n  dir : string;\n  mutex : Mutex.t;\n  mutable closed : bool;\n}\n\nlet schema_version = 2\nlet manifest_path dir = Filename.concat dir \"run.json\"\nlet events_path dir = Filename.concat dir \"events.jsonl\"\nlet random_state = lazy (Random.State.make_self_init ())\n\nlet generate_id () =\n  let state = Lazy.force random_state in\n  let now = Unix.gettimeofday () in\n  let tm = Unix.localtime now in\n  let stamp =\n    Printf.sprintf \"%04d-%02d-%02d_%02d-%02d-%02d\" (tm.Unix.tm_year + 1900)\n      (tm.Unix.tm_mon + 1) tm.Unix.tm_mday tm.Unix.tm_hour tm.Unix.tm_min\n      tm.Unix.tm_sec\n  in\n  let suffix = Printf.sprintf \"%04x\" (Random.State.int state 0x10000) in\n  stamp ^ \"_\" ^ suffix\n\nlet status_to_string = function\n  | `finished -> \"finished\"\n  | `failed -> \"failed\"\n  | `killed -> \"killed\"\n\nlet git_output cwd args =\n  let command =\n    String.concat \" \" (List.map Filename.quote (\"git\" :: \"-C\" :: cwd :: args))\n  in\n  Fs.command_output command\n\nlet detect_git_commit cwd = git_output cwd [ \"rev-parse\"; \"HEAD\" ]\n\nlet detect_git_dirty cwd =\n  match git_output cwd [ \"status\"; \"--porcelain\"; \"--untracked-files=no\" ] with\n  | None -> None\n  | Some output -> Some (output <> \"\")\n\nlet capture_env_vars names =\n  List.filter_map\n    (fun name -> Option.map (fun value -> (name, value)) (Sys.getenv_opt name))\n    names\n\nlet first_some a b = match a with Some _ -> a | None -> b\n\nlet root_of_run_dir dir =\n  Filename.dirname (Filename.dirname (Filename.dirname 
(Filename.dirname dir)))\n\nlet with_lock t f =\n  Mutex.lock t.mutex;\n  Fun.protect ~finally:(fun () -> Mutex.unlock t.mutex) f\n\nlet append_event t event =\n  Fs.append_line (events_path t.dir) (Event_log.encode event)\n\nlet write_manifest path json =\n  Fs.write_file path (Json_utils.json_to_string ~pretty:true json ^ \"\\n\")\n\nlet optional_field key f = function None -> [] | Some v -> [ (key, f v) ]\n\nlet provenance_json ?notes ~command ~cwd ~hostname ~pid ~git_commit ~git_dirty\n    ~env () =\n  Json_utils.json_obj\n    ([\n       (\"command\", Jsont.Json.list (List.map Jsont.Json.string command));\n       (\"cwd\", Jsont.Json.string cwd);\n       (\"pid\", Jsont.Json.int pid);\n       ( \"env\",\n         Json_utils.json_obj\n           (List.map (fun (k, v) -> (k, Jsont.Json.string v)) env) );\n     ]\n    @ optional_field \"notes\" Jsont.Json.string notes\n    @ optional_field \"hostname\" Jsont.Json.string hostname\n    @ optional_field \"git_commit\" Jsont.Json.string git_commit\n    @ optional_field \"git_dirty\" Jsont.Json.bool git_dirty)\n\nlet make_manifest ~id ~experiment ~started_at ?name ?group ?parent ~tags ~params\n    ~provenance () =\n  Json_utils.json_obj\n    ([\n       (\"schema_version\", Jsont.Json.int schema_version);\n       (\"id\", Jsont.Json.string id);\n       (\"experiment\", Jsont.Json.string experiment);\n       (\"started_at\", Jsont.Json.number started_at);\n       (\"tags\", Jsont.Json.list (List.map Jsont.Json.string tags));\n       ( \"params\",\n         Json_utils.json_obj\n           (List.map (fun (k, v) -> (k, Value.to_json v)) params) );\n       (\"provenance\", provenance);\n     ]\n    @ optional_field \"name\" Jsont.Json.string name\n    @ optional_field \"group\" Jsont.Json.string group\n    @ optional_field \"parent_id\"\n        (fun run -> Jsont.Json.string (Run.id run))\n        parent)\n\nlet start ?root ~experiment ?name ?group ?parent ?(tags = []) ?(params = [])\n    ?notes ?(capture_env = []) ?command 
?cwd ?hostname ?pid ?git_commit\n    ?git_dirty ?env () =\n  let root = Option.value root ~default:(Env.root ()) in\n  Fs.ensure_dir (Filename.concat root \"experiments\");\n  Fs.ensure_dir (Filename.concat root \"artifacts\");\n  Fs.ensure_dir (Filename.concat (Filename.concat root \"blobs\") \"sha256\");\n  let id = generate_id () in\n  let dir =\n    Filename.concat\n      (Filename.concat\n         (Filename.concat (Filename.concat root \"experiments\") experiment)\n         \"runs\")\n      id\n  in\n  Fs.ensure_dir dir;\n  let cwd = Option.value cwd ~default:(Sys.getcwd ()) in\n  let command = Option.value command ~default:(Array.to_list Sys.argv) in\n  let hostname =\n    match hostname with\n    | Some hostname -> Some hostname\n    | None -> Some (Unix.gethostname ())\n  in\n  let pid = Option.value pid ~default:(Unix.getpid ()) in\n  let git_commit = first_some git_commit (detect_git_commit cwd) in\n  let git_dirty = first_some git_dirty (detect_git_dirty cwd) in\n  let env = Option.value env ~default:(capture_env_vars capture_env) in\n  let provenance =\n    provenance_json ?notes ~command ~cwd ~hostname ~pid ~git_commit ~git_dirty\n      ~env ()\n  in\n  let started_at = Unix.gettimeofday () in\n  let parent_id = Option.map Run.id parent in\n  let manifest =\n    make_manifest ~id ~experiment ~started_at ?name ?group ?parent ~tags ~params\n      ~provenance ()\n  in\n  write_manifest (manifest_path dir) manifest;\n  Index.add root ~id\n    { experiment; name; group; parent_id; status = `running; tags; started_at };\n  { root; experiment; id; dir; mutex = Mutex.create (); closed = false }\n\nlet finish ?(status = `finished) t () =\n  with_lock t (fun () ->\n      if not t.closed then (\n        append_event t\n          (Event_log.Finished\n             {\n               status = status_to_string status;\n               ended_at = Unix.gettimeofday ();\n             });\n        Index.update_status t.root ~id:t.id (status :> Index.status);\n        
t.closed <- true))\n\nlet with_run ?root ~experiment ?name ?parent ?tags ?params ?notes ?capture_env f\n    =\n  let session =\n    start ?root ~experiment ?name ?parent ?tags ?params ?notes ?capture_env ()\n  in\n  match f session with\n  | value ->\n      finish session ();\n      value\n  | exception exn ->\n      finish ~status:`failed session ();\n      raise exn\n\nlet resume run =\n  if not (Run.resumable run) then invalid_arg err_not_resumable;\n  let root = root_of_run_dir (Run.dir run) in\n  let t =\n    {\n      root;\n      experiment = Run.experiment_name run;\n      id = Run.id run;\n      dir = Run.dir run;\n      mutex = Mutex.create ();\n      closed = false;\n    }\n  in\n  append_event t (Event_log.Resumed { at = Unix.gettimeofday () });\n  Index.update_status root ~id:(Run.id run) `running;\n  t\n\nlet run t =\n  match Run.load ~root:t.root ~experiment:t.experiment ~id:t.id with\n  | Some run -> run\n  | None -> failwith err_missing_manifest\n\nlet set_notes t notes =\n  with_lock t (fun () ->\n      if not t.closed then append_event t (Event_log.Notes notes))\n\nlet log_metric t ~step ?timestamp key value =\n  with_lock t (fun () ->\n      if not t.closed then\n        let timestamp =\n          Option.value timestamp ~default:(Unix.gettimeofday ())\n        in\n        append_event t (Event_log.Metric { step; timestamp; key; value }))\n\nlet log_metrics t ~step ?timestamp pairs =\n  with_lock t (fun () ->\n      if not t.closed then\n        let timestamp =\n          Option.value timestamp ~default:(Unix.gettimeofday ())\n        in\n        List.iter\n          (fun (key, value) ->\n            append_event t (Event_log.Metric { step; timestamp; key; value }))\n          pairs)\n\nlet define_metric t key ?(summary = `Last) ?step_metric ?goal () =\n  with_lock t (fun () ->\n      if not t.closed then\n        append_event t\n          (Event_log.Define_metric { key; summary; step_metric; goal }))\n\nlet rel_path_of ~run_dir abs_path =\n  
String.sub abs_path\n    (String.length run_dir + 1)\n    (String.length abs_path - String.length run_dir - 1)\n\nlet media_dest_path run_dir key step ext =\n  let parts = String.split_on_char '/' key in\n  let dir_parts, leaf =\n    match List.rev parts with\n    | [] -> ([], \"media\")\n    | [ single ] -> ([], single)\n    | last :: rest -> (List.rev rest, last)\n  in\n  let media_dir =\n    List.fold_left Filename.concat (Filename.concat run_dir \"media\") dir_parts\n  in\n  let filename = Printf.sprintf \"%s_%d%s\" leaf step ext in\n  (media_dir, Filename.concat media_dir filename)\n\nlet log_media t ~step ~key ~kind ~path =\n  with_lock t (fun () ->\n      if not t.closed then begin\n        if not (Sys.file_exists path) then invalid_arg (err_media_missing ^ path);\n        let ext = Filename.extension path in\n        let media_dir, dest = media_dest_path t.dir key step ext in\n        Fs.ensure_dir media_dir;\n        Fs.copy_file path dest;\n        let timestamp = Unix.gettimeofday () in\n        append_event t\n          (Event_log.Media\n             {\n               step;\n               timestamp;\n               key;\n               kind;\n               path = rel_path_of ~run_dir:t.dir dest;\n             })\n      end)\n\nlet log_table t ~step ~key ~columns ~rows =\n  with_lock t (fun () ->\n      if not t.closed then begin\n        let json =\n          Json_utils.json_obj\n            [\n              (\"columns\", Jsont.Json.list (List.map Jsont.Json.string columns));\n              ( \"rows\",\n                Jsont.Json.list\n                  (List.map\n                     (fun row -> Jsont.Json.list (List.map Value.to_json row))\n                     rows) );\n            ]\n        in\n        let media_dir, dest = media_dest_path t.dir key step \".json\" in\n        Fs.ensure_dir media_dir;\n        Fs.write_file dest (Json_utils.json_to_string ~pretty:true json ^ \"\\n\");\n        let timestamp = Unix.gettimeofday () in\n        
append_event t\n          (Event_log.Media\n             {\n               step;\n               timestamp;\n               key;\n               kind = `Table;\n               path = rel_path_of ~run_dir:t.dir dest;\n             })\n      end)\n\nlet set_summary t values =\n  with_lock t (fun () ->\n      if not t.closed then append_event t (Event_log.Summary values))\n\nlet add_tags t tags =\n  with_lock t (fun () ->\n      if (not t.closed) && tags <> [] then append_event t (Event_log.Tags tags))\n\nlet log_artifact t ~name ~kind ~path ?(metadata = []) ?(aliases = []) () =\n  with_lock t (fun () ->\n      if t.closed then failwith err_closed_session;\n      if not (Sys.file_exists path) then invalid_arg (err_path_missing ^ path);\n      let digest = Fs.sha256_path path in\n      let blob_rel_path =\n        Filename.concat (Filename.concat \"blobs\" \"sha256\") digest\n      in\n      let blob_abs_path = Filename.concat t.root blob_rel_path in\n      if not (Sys.file_exists blob_abs_path) then\n        Fs.copy_tree path blob_abs_path;\n      let payload : Artifact.payload =\n        if Fs.is_directory path then `dir else `file\n      in\n      let json_metadata =\n        List.map (fun (k, v) -> (k, Value.to_json v)) metadata\n      in\n      let artifact =\n        Artifact.create ~root:t.root ~name ~kind ~payload ~digest\n          ~path:blob_rel_path ~metadata:json_metadata ~aliases\n          ~producer_run_id:(Some t.id)\n      in\n      append_event t\n        (Event_log.Artifact_output { name; version = Artifact.version artifact });\n      artifact)\n\nlet use_artifact t artifact =\n  with_lock t (fun () ->\n      if not t.closed then (\n        Artifact.add_consumer ~root:t.root ~name:(Artifact.name artifact)\n          ~version:(Artifact.version artifact)\n          t.id;\n        append_event t\n          (Event_log.Artifact_input\n             {\n               name = Artifact.name artifact;\n               version = Artifact.version artifact;\n        
     })))\n"
  },
  {
    "path": "packages/munin/lib/session.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Active run writers.\n\n    A session is the append-only mutation boundary for a run. All writes go\n    through the event log; no direct state mutation. *)\n\ntype t\n(** The type for active run sessions. *)\n\n(** {1:lifecycle Lifecycle} *)\n\nval start :\n  ?root:string ->\n  experiment:string ->\n  ?name:string ->\n  ?group:string ->\n  ?parent:Run.t ->\n  ?tags:string list ->\n  ?params:(string * Value.t) list ->\n  ?notes:string ->\n  ?capture_env:string list ->\n  ?command:string list ->\n  ?cwd:string ->\n  ?hostname:string ->\n  ?pid:int ->\n  ?git_commit:string ->\n  ?git_dirty:bool ->\n  ?env:(string * string) list ->\n  unit ->\n  t\n(** [start ~experiment ()] starts a new run session.\n\n    Creates the store directory structure if it does not exist.\n\n    - [root] defaults to [$RAVEN_TRACKING_DIR] or [$XDG_DATA_HOME/raven/munin].\n    - [name] defaults to [None].\n    - [tags] defaults to [[]].\n    - [params] defaults to [[]].\n    - [notes] defaults to [None].\n    - [capture_env] is a list of environment variable names to capture into\n      provenance. Defaults to [[]].\n    - [command] defaults to [Sys.argv].\n    - [cwd] defaults to [Sys.getcwd ()].\n    - [hostname] defaults to [Unix.gethostname ()].\n    - [pid] defaults to [Unix.getpid ()].\n    - [git_commit] defaults to the HEAD commit detected from [cwd].\n    - [git_dirty] defaults to the working tree status detected from [cwd].\n    - [env] defaults to the variables captured via [capture_env]. 
*)\n\nval with_run :\n  ?root:string ->\n  experiment:string ->\n  ?name:string ->\n  ?parent:Run.t ->\n  ?tags:string list ->\n  ?params:(string * Value.t) list ->\n  ?notes:string ->\n  ?capture_env:string list ->\n  (t -> 'a) ->\n  'a\n(** [with_run ~experiment f] starts a run, calls [f], and finishes the run as\n    [`finished] on success or [`failed] on exception. The exception is re-raised\n    after the run is closed.\n\n    Optional arguments default as in {!start}. *)\n\nval resume : Run.t -> t\n(** [resume run] reopens an unfinished run for additional logging.\n\n    Raises [Invalid_argument] if [Run.resumable run] is [false]. *)\n\nval run : t -> Run.t\n(** [run t] is the current materialized view of the run.\n\n    Raises [Failure] if the run manifest is missing. *)\n\nval finish : ?status:[ `finished | `failed | `killed ] -> t -> unit -> unit\n(** [finish t ()] closes the run with the given final status.\n\n    [status] defaults to [`finished]. The trailing [unit] argument allows\n    partial application as a finalizer (e.g.\n    [Fun.protect ~finally:(finish session)]). Calling [finish] on an\n    already-closed session is a no-op. *)\n\n(** {1:scalars Scalars} *)\n\nval log_metric : t -> step:int -> ?timestamp:float -> string -> float -> unit\n(** [log_metric t ~step key value] appends a scalar metric sample.\n\n    [timestamp] defaults to [Unix.gettimeofday ()]. Silently ignored if the\n    session is closed. *)\n\nval log_metrics :\n  t -> step:int -> ?timestamp:float -> (string * float) list -> unit\n(** [log_metrics t ~step pairs] appends multiple scalar metric samples\n    atomically.\n\n    [timestamp] defaults to [Unix.gettimeofday ()]. 
*)\n\n(** {1:metric_defs Metric definitions} *)\n\nval define_metric :\n  t ->\n  string ->\n  ?summary:[ `Min | `Max | `Mean | `Last | `None ] ->\n  ?step_metric:string ->\n  ?goal:[ `Minimize | `Maximize ] ->\n  unit ->\n  unit\n(** [define_metric t key ()] declares how a metric should be summarised and\n    plotted.\n\n    - [summary] controls how the run summary value is computed from history:\n      [`Min] (best for loss), [`Max] (best for accuracy), [`Mean], [`Last]\n      (default), [`None] (no auto-summary).\n    - [step_metric] specifies another metric as x-axis (e.g. [\"epoch\"]).\n      Defaults to [None].\n    - [goal] declares whether lower ([`Minimize]) or higher ([`Maximize]) is\n      better, used by the TUI for \"best\" badges and by comparisons. Defaults to\n      [None]. *)\n\n(** {1:media Media} *)\n\nval log_media :\n  t ->\n  step:int ->\n  key:string ->\n  kind:[ `Image | `Audio | `Table | `File ] ->\n  path:string ->\n  unit\n(** [log_media t ~step ~key ~kind ~path] copies [path] into the run's [media/]\n    directory and appends a media event to the log.\n\n    The file is stored at [<run_dir>/media/<key_path>_<step>.<ext>] where\n    [<key_path>] preserves the key's slash-delimited hierarchy as directories.\n    [kind] is metadata for renderers; the TUI ignores media events. Silently\n    ignored if the session is closed.\n\n    @raise Invalid_argument if [path] does not exist. *)\n\nval log_table :\n  t ->\n  step:int ->\n  key:string ->\n  columns:string list ->\n  rows:Value.t list list ->\n  unit\n(** [log_table t ~step ~key ~columns ~rows] stores a table as JSON in the run's\n    [media/] directory and appends a media event with [kind = `Table].\n\n    The JSON file has the structure [{\"columns\": [...], \"rows\": [...]}]. Useful\n    for confusion matrices, per-class metrics, data samples. *)\n\n(** {1:metadata Metadata} *)\n\nval set_notes : t -> string option -> unit\n(** [set_notes t note] replaces the run note. [None] clears it. 
*)\n\nval set_summary : t -> (string * Value.t) list -> unit\n(** [set_summary t values] merges summary values into the run. Later writes\n    replace earlier values for the same key. *)\n\nval add_tags : t -> string list -> unit\n(** [add_tags t tags] appends tags to the run. Duplicate tags are ignored by\n    readers. Empty lists are not written. *)\n\n(** {1:artifacts Artifacts} *)\n\nval log_artifact :\n  t ->\n  name:string ->\n  kind:Artifact.kind ->\n  path:string ->\n  ?metadata:(string * Value.t) list ->\n  ?aliases:string list ->\n  unit ->\n  Artifact.t\n(** [log_artifact t ~name ~kind ~path ()] stores [path] as a versioned artifact,\n    records it as an output of [t], and returns the created version.\n\n    - [metadata] defaults to [[]].\n    - [aliases] defaults to [[]].\n\n    Raises [Failure] if the session is closed. Raises [Invalid_argument] if\n    [path] does not exist. *)\n\nval use_artifact : t -> Artifact.t -> unit\n(** [use_artifact t artifact] records [artifact] as an input of [t]. *)\n"
  },
  {
    "path": "packages/munin/lib/store.ml",
    "content": "type t = { root : string }\n\nlet root t = t.root\n\nlet open_ ?root () =\n  let root = Option.value root ~default:(Env.root ()) in\n  Fs.ensure_dir (Filename.concat root \"experiments\");\n  Fs.ensure_dir (Filename.concat root \"artifacts\");\n  Fs.ensure_dir (Filename.concat (Filename.concat root \"blobs\") \"sha256\");\n  { root }\n\nlet list_experiments t = Fs.list_dirs (Filename.concat t.root \"experiments\")\n\n(* Index-based listing: filter on header fields, return lazy Run.t *)\nlet list_runs_indexed index ~root ?experiment ?status ?tag ?parent ?group () =\n  Hashtbl.to_seq index |> List.of_seq\n  |> List.filter_map (fun (id, (entry : Index.entry)) ->\n      if\n        Option.fold ~none:true ~some:(String.equal entry.experiment) experiment\n        && Option.fold ~none:true ~some:(( = ) entry.status) status\n        && Option.fold ~none:true\n             ~some:(fun t -> List.exists (String.equal t) entry.tags)\n             tag\n        && Option.fold ~none:true\n             ~some:(fun p -> entry.parent_id = Some p)\n             parent\n        && Option.fold ~none:true ~some:(fun g -> entry.group = Some g) group\n      then Some (Run.load_from_index ~root id entry)\n      else None)\n  |> List.sort (fun a b -> String.compare (Run.id b) (Run.id a))\n\n(* Filesystem-based listing: fallback when no index *)\nlet list_runs_scan ~root ?experiment ?status ?tag ?parent ?group () =\n  let runs =\n    match experiment with\n    | Some experiment ->\n        Run.list ~root ~experiment ?status ?tag ?parent ?group ()\n    | None ->\n        Fs.list_dirs (Filename.concat root \"experiments\")\n        |> List.concat_map (fun experiment ->\n            Run.list ~root ~experiment ?status ?tag ?parent ?group ())\n  in\n  List.sort (fun a b -> String.compare (Run.id b) (Run.id a)) runs\n\nlet list_runs t ?experiment ?status ?tag ?parent ?group () =\n  match Index.read t.root with\n  | Some index ->\n      list_runs_indexed index ~root:t.root ?experiment 
?status ?tag ?parent\n        ?group ()\n  | None ->\n      list_runs_scan ~root:t.root ?experiment ?status ?tag ?parent ?group ()\n\nlet find_run t id =\n  match Index.read t.root with\n  | Some index -> (\n      match Hashtbl.find_opt index id with\n      | Some entry -> Some (Run.load_from_index ~root:t.root id entry)\n      | None -> None)\n  | None ->\n      List.find_map\n        (fun experiment -> Run.load ~root:t.root ~experiment ~id)\n        (list_experiments t)\n\nlet latest_run t ?experiment ?status ?tag ?group () =\n  match list_runs t ?experiment ?status ?tag ?group () with\n  | run :: _ -> Some run\n  | [] -> None\n\nlet find_artifact t ~name ~version = Artifact.load ~root:t.root ~name ~version\n\nlet list_artifacts t ?name ?kind ?alias ?producer_run ?consumer_run () =\n  Artifact.list ~root:t.root ?name ?kind ?alias ?producer_run ?consumer_run ()\n\nlet delete_run t run =\n  Fs.remove_tree (Run.dir run);\n  let exp_dir =\n    Filename.concat\n      (Filename.concat t.root \"experiments\")\n      (Run.experiment_name run)\n  in\n  if Fs.list_dirs (Filename.concat exp_dir \"runs\") = [] then\n    Fs.remove_tree exp_dir;\n  Index.remove t.root ~id:(Run.id run)\n\nlet gc t =\n  let blobs_dir = Filename.concat (Filename.concat t.root \"blobs\") \"sha256\" in\n  let referenced = Hashtbl.create 64 in\n  List.iter\n    (fun artifact -> Hashtbl.replace referenced (Artifact.digest artifact) ())\n    (list_artifacts t ());\n  let removed = ref 0 in\n  Fs.list_dirs blobs_dir\n  |> List.iter (fun digest ->\n      if not (Hashtbl.mem referenced digest) then (\n        Fs.remove_tree (Filename.concat blobs_dir digest);\n        incr removed));\n  !removed\n"
  },
  {
    "path": "packages/munin/lib/store.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Local tracking stores.\n\n    A store is the local root directory containing experiments, runs, artifacts,\n    and blobs. *)\n\ntype t\n(** The type for store handles. *)\n\nval open_ : ?root:string -> unit -> t\n(** [open_ ()] opens the local tracking store, creating its root directories if\n    needed.\n\n    [root] defaults to [$RAVEN_TRACKING_DIR] or [$XDG_DATA_HOME/raven/munin]. *)\n\nval root : t -> string\n(** [root t] is the absolute store root path. *)\n\n(** {1:runs Runs} *)\n\nval list_experiments : t -> string list\n(** [list_experiments t] is the experiment names stored in [t]. *)\n\nval list_runs :\n  t ->\n  ?experiment:string ->\n  ?status:Run.status ->\n  ?tag:string ->\n  ?parent:string ->\n  ?group:string ->\n  unit ->\n  Run.t list\n(** [list_runs t ()] is the persisted runs in [t], filtered when the optional\n    selectors are provided. When [experiment] is omitted, searches across all\n    experiments. Results are sorted by identifier descending (newest first). *)\n\nval find_run : t -> string -> Run.t option\n(** [find_run t id] is the run with identifier [id], if present. Searches across\n    all experiments. *)\n\nval latest_run :\n  t ->\n  ?experiment:string ->\n  ?status:Run.status ->\n  ?tag:string ->\n  ?group:string ->\n  unit ->\n  Run.t option\n(** [latest_run t ()] is the most recently started run matching the optional\n    filters, by identifier ordering. *)\n\n(** {1:artifacts Artifacts} *)\n\nval find_artifact : t -> name:string -> version:string -> Artifact.t option\n(** [find_artifact t ~name ~version] is the named artifact version or alias, if\n    present. 
*)\n\nval list_artifacts :\n  t ->\n  ?name:string ->\n  ?kind:Artifact.kind ->\n  ?alias:string ->\n  ?producer_run:string ->\n  ?consumer_run:string ->\n  unit ->\n  Artifact.t list\n(** [list_artifacts t ()] is the stored artifacts in [t], filtered when the\n    optional selectors are provided. *)\n\n(** {1:maintenance Maintenance} *)\n\nval delete_run : t -> Run.t -> unit\n(** [delete_run t run] removes [run] and its event log from the store. Does not\n    remove shared blobs. Removes the experiment directory if no runs remain. *)\n\nval gc : t -> int\n(** [gc t] removes unreferenced blobs. Returns the number removed. *)\n"
  },
  {
    "path": "packages/munin/lib/sys/config/discover.ml",
    "content": "module C = Configurator.V1\n\nlet () =\n  C.main ~name:\"metrics-config\" (fun c ->\n      let system = C.ocaml_config_var_exn c \"system\" in\n      let c_flags, c_library_flags =\n        if String.equal system \"macosx\" then\n          (* macOS: link against IOKit and CoreFoundation frameworks *)\n          ([], [ \"-framework\"; \"IOKit\"; \"-framework\"; \"CoreFoundation\" ])\n        else\n          (* Linux: no extra flags needed *)\n          ([], [])\n      in\n      C.Flags.write_sexp \"c_flags.sexp\" c_flags;\n      C.Flags.write_sexp \"c_library_flags.sexp\" c_library_flags)\n"
  },
  {
    "path": "packages/munin/lib/sys/config/dune",
    "content": "(executable\n (name discover)\n (libraries dune-configurator))\n"
  },
  {
    "path": "packages/munin/lib/sys/dune",
    "content": "(library\n (name munin_sys)\n (public_name munin.sys)\n (private_modules sysstat)\n (libraries unix str munin threads.posix)\n (foreign_stubs\n  (language c)\n  (flags\n   (:standard\n    (:include c_flags.sexp)))\n  (names sysstat_stubs))\n (c_library_flags\n  (:include c_library_flags.sexp)))\n\n(rule\n (targets c_flags.sexp c_library_flags.sexp)\n (action\n  (run %{exe:config/discover.exe})))\n"
  },
  {
    "path": "packages/munin/lib/sys/munin_sys.ml",
    "content": "include Sysstat\n\ntype t = { stop : bool Atomic.t; thread : Thread.t }\n\nlet define session key = Session.define_metric session key ~summary:`Last ()\n\nlet start session ?(interval = 2.0) () =\n  define session \"sys/cpu_user\";\n  define session \"sys/cpu_system\";\n  define session \"sys/mem_used_pct\";\n  define session \"sys/mem_used_gb\";\n  define session \"sys/proc_cpu_pct\";\n  define session \"sys/proc_mem_mb\";\n  define session \"sys/disk_read_mbs\";\n  define session \"sys/disk_write_mbs\";\n  define session \"sys/disk_util_pct\";\n  let stop_flag = Atomic.make false in\n  let prev_cpu = ref (Sysstat.Cpu.sample ()) in\n  let prev_proc = ref (Sysstat.Proc.Self.sample ()) in\n  let prev_disk = ref (Sysstat.Disk_io.sample ()) in\n  let prev_time = ref (Unix.gettimeofday ()) in\n  let step = ref 0 in\n  let thread =\n    Thread.create\n      (fun () ->\n        while not (Atomic.get stop_flag) do\n          Thread.delay interval;\n          if not (Atomic.get stop_flag) then begin\n            incr step;\n            let now = Unix.gettimeofday () in\n            let dt = now -. !prev_time in\n            (* System CPU *)\n            let cpu = Sysstat.Cpu.sample () in\n            let cpu_stats = Sysstat.Cpu.compute ~prev:!prev_cpu ~next:cpu in\n            prev_cpu := cpu;\n            (* System memory *)\n            let mem = Sysstat.Mem.sample () in\n            let mem_pct =\n              Int64.to_float mem.used *. 100. /. Int64.to_float mem.total\n            in\n            let mem_gb = Int64.to_float mem.used /. 1_073_741_824. 
in\n            (* Process stats *)\n            let proc = Sysstat.Proc.Self.sample () in\n            let proc_stats =\n              Sysstat.Proc.Self.compute ~prev:!prev_proc ~next:proc ~dt\n                ~num_cores:None\n            in\n            prev_proc := proc;\n            (* Disk I/O *)\n            let disk = Sysstat.Disk_io.sample () in\n            let disk_stats =\n              Sysstat.Disk_io.compute ~prev:!prev_disk ~next:disk ~dt\n            in\n            prev_disk := disk;\n            prev_time := now;\n            Session.log_metrics session ~step:!step\n              [\n                (\"sys/cpu_user\", cpu_stats.user);\n                (\"sys/cpu_system\", cpu_stats.system);\n                (\"sys/mem_used_pct\", mem_pct);\n                (\"sys/mem_used_gb\", mem_gb);\n                (\"sys/proc_cpu_pct\", proc_stats.cpu_percent);\n                ( \"sys/proc_mem_mb\",\n                  Int64.to_float proc_stats.rss_bytes /. 1_048_576. );\n                ( \"sys/disk_read_mbs\",\n                  disk_stats.read_bytes_per_sec /. 1_048_576. );\n                ( \"sys/disk_write_mbs\",\n                  disk_stats.write_bytes_per_sec /. 1_048_576. );\n                (\"sys/disk_util_pct\", disk_stats.utilization_percent);\n              ]\n          end\n        done)\n      ()\n  in\n  { stop = stop_flag; thread }\n\nlet stop t =\n  if not (Atomic.get t.stop) then begin\n    Atomic.set t.stop true;\n    Thread.join t.thread\n  end\n"
  },
  {
    "path": "packages/munin/lib/sys/munin_sys.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** System monitoring.\n\n    Provides stateless, poll-based system metrics sampling and background\n    session monitoring. Each module samples instantaneous or cumulative values\n    from the operating system. CPU, network, and disk I/O statistics are\n    cumulative since boot and require two samples to compute usage percentages;\n    memory statistics are instantaneous.\n\n    {1:platform Platform support}\n\n    Supported platforms: Linux and macOS. Platform-specific behavior is\n    documented per module. Some metrics have limited availability on certain\n    platforms (e.g., macOS CPU counters populate only user/nice/system/idle\n    fields).\n\n    {1:background Background monitoring}\n\n    {!start} spawns a background thread that periodically samples CPU, memory,\n    and process statistics and logs them as scalar metrics via {!Munin.Session}.\n\n    Logged metrics (all with [sys/] prefix):\n\n    {b System-wide:}\n    - [sys/cpu_user] — user CPU percentage (0–100)\n    - [sys/cpu_system] — system CPU percentage (0–100)\n    - [sys/mem_used_pct] — memory usage percentage (0–100)\n    - [sys/mem_used_gb] — memory used in GB\n\n    {b Per-process:}\n    - [sys/proc_cpu_pct] — process CPU percentage\n    - [sys/proc_mem_mb] — process resident set size in MB\n\n    {b Disk I/O:}\n    - [sys/disk_read_mbs] — disk read rate in MB/s\n    - [sys/disk_write_mbs] — disk write rate in MB/s\n    - [sys/disk_util_pct] — disk utilization percentage *)\n\ninclude module type of Sysstat\n(** @inline *)\n\n(** {1 Background monitoring} *)\n\ntype t\n(** The type for background monitors. 
*)\n\nval start : Session.t -> ?interval:float -> unit -> t\n(** [start session ~interval ()] begins periodic system monitoring.\n\n    All [sys/] metrics are defined with [~summary:`Last] so the final sampled\n    value appears in run summaries.\n\n    [interval] defaults to [2.0] seconds. The first sample is taken after one\n    interval. The monitor thread is a daemon thread. *)\n\nval stop : t -> unit\n(** [stop t] signals the monitoring thread to exit and blocks until it\n    terminates. Safe to call multiple times. *)\n"
  },
  {
    "path": "packages/munin/lib/sys/sysstat.ml",
    "content": "(* C binding types *)\n\ntype raw_memory = {\n  total : int64;\n  page_size : int64;\n  active : int64;\n  inactive : int64;\n  speculative : int64;\n  wired : int64;\n  compressor : int64;\n  purgeable : int64;\n  external_ : int64;\n  free : int64;\n  swap_total : int64;\n  swap_used : int64;\n}\n\ntype raw_network_io = {\n  bytes_rx : int64;\n  packets_rx : int64;\n  bytes_tx : int64;\n  packets_tx : int64;\n}\n\ntype raw_disk_io = {\n  bytes_read : int64;\n  bytes_written : int64;\n  time_ms : int64;\n  num_disks : int64;\n}\n\ntype raw_proc_info = {\n  name : string;\n  ppid : int;\n  state : char;\n  priority : int;\n  nice : int;\n  cmdline : string;\n  user : string;\n  user_time : int64;\n  system_time : int64;\n  resident_size : int64;\n  virtual_size : int64;\n  num_threads : int;\n  num_running : int;\n  faults : int64;\n}\n\ntype raw_statvfs = { total : int64; free : int64; avail : int64 }\ntype raw_self_mem = { rss : int64; vsize : int64 }\ntype raw_timebase = { numer : int; denom : int }\n\n(* C bindings *)\n\nexternal c_clk_tck : unit -> int = \"caml_sysstat_clk_tck\"\nexternal c_getpagesize : unit -> int = \"caml_sysstat_getpagesize\"\n\nexternal c_get_cpu_load : unit -> int64 array array\n  = \"caml_sysstat_get_cpu_load\"\n\nexternal c_get_memory : unit -> raw_memory = \"caml_sysstat_get_memory\"\n\nexternal c_get_network_io : unit -> raw_network_io\n  = \"caml_sysstat_get_network_io\"\n\nexternal c_get_disk_io : unit -> raw_disk_io = \"caml_sysstat_get_disk_io\"\nexternal c_list_pids : unit -> int array = \"caml_sysstat_list_pids\"\n\nexternal c_get_proc_info : int -> raw_proc_info option\n  = \"caml_sysstat_get_proc_info\"\n\nexternal c_statvfs : string -> raw_statvfs = \"caml_sysstat_statvfs\"\nexternal c_proc_self_mem : unit -> raw_self_mem = \"caml_sysstat_proc_self_mem\"\nexternal c_get_timebase : unit -> raw_timebase = \"caml_sysstat_get_timebase\"\n\nexternal c_getmounts : unit -> (string * string * string) array\n  = 
\"caml_sysstat_getmounts\"\n\nexternal c_get_loadavg : unit -> float * float * float\n  = \"caml_sysstat_get_loadavg\"\n\nexternal c_get_uptime : unit -> int64 = \"caml_sysstat_get_uptime\"\n\n(* Helpers *)\n\nlet is_whitespace = function ' ' | '\\t' | '\\r' | '\\n' -> true | _ -> false\n\nlet split_whitespace (s : string) : string list =\n  let len = String.length s in\n  let rec skip i = if i < len && is_whitespace s.[i] then skip (i + 1) else i in\n  let rec take i j =\n    if j < len && not (is_whitespace s.[j]) then take i (j + 1) else j\n  in\n  let rec loop i acc =\n    let i = skip i in\n    if i >= len then List.rev acc\n    else\n      let j = take i i in\n      loop j (String.sub s i (j - i) :: acc)\n  in\n  loop 0 []\n\nlet with_in_file path f =\n  let ic = open_in path in\n  match f ic with\n  | v ->\n      close_in_noerr ic;\n      v\n  | exception e ->\n      close_in_noerr ic;\n      raise e\n\nlet fold_lines path ~init ~f =\n  with_in_file path (fun ic ->\n      let rec loop acc =\n        match input_line ic with\n        | line -> loop (f acc line)\n        | exception End_of_file -> acc\n      in\n      loop init)\n\nlet int64_of_string_opt s = try Some (Int64.of_string s) with _ -> None\nlet int_of_string_opt s = try Some (int_of_string s) with _ -> None\nlet i64_max0 x = if Int64.compare x 0L < 0 then 0L else x\n\nlet i64_delta ~prev ~next =\n  let d = Int64.sub next prev in\n  if Int64.compare d 0L < 0 then 0L else d\n\nlet f_delta_i64 ~prev ~next = Int64.to_float (i64_delta ~prev ~next)\nlet page_size_bytes : int64 Lazy.t = lazy (Int64.of_int (c_getpagesize ()))\nlet page_size () = Lazy.force page_size_bytes\n\n(* Time conversion *)\n\nlet clk_tck : int Lazy.t = lazy (c_clk_tck ())\n\nlet jiffies_to_ns (jiffies : int64) : int64 =\n  let hz = Int64.of_int (Lazy.force clk_tck) in\n  if Int64.compare hz 0L <= 0 then 0L\n  else\n    (* Avoid overflow: (q*1e9) + (r*1e9)/hz where jiffies = q*hz + r. 
*)\n    let q = Int64.div jiffies hz in\n    let r = Int64.rem jiffies hz in\n    Int64.add\n      (Int64.mul q 1_000_000_000L)\n      (Int64.div (Int64.mul r 1_000_000_000L) hz)\n\nlet mach_timebase : (float * float) Lazy.t =\n  lazy\n    (let tb = c_get_timebase () in\n     (float_of_int tb.numer, float_of_int tb.denom))\n\nlet mach_ticks_to_ns (ticks : int64) : float =\n  let numer, denom = Lazy.force mach_timebase in\n  Int64.to_float ticks *. numer /. denom\n\nlet proc_ticks_to_ns (ticks : int64) : float =\n  if Sys.file_exists \"/proc\" then Int64.to_float (jiffies_to_ns ticks)\n  else mach_ticks_to_ns ticks\n\n(* Cpu *)\n\nmodule Cpu = struct\n  type t = {\n    user : int64;\n    nice : int64;\n    system : int64;\n    idle : int64;\n    iowait : int64;\n    irq : int64;\n    softirq : int64;\n    steal : int64;\n    guest : int64;\n  }\n\n  type stats = {\n    user : float;\n    nice : float;\n    system : float;\n    idle : float;\n    iowait : float;\n    irq : float;\n    softirq : float;\n    steal : float;\n    guest : float;\n  }\n\n  let zero : t =\n    {\n      user = 0L;\n      nice = 0L;\n      system = 0L;\n      idle = 0L;\n      iowait = 0L;\n      irq = 0L;\n      softirq = 0L;\n      steal = 0L;\n      guest = 0L;\n    }\n\n  let zero_stats =\n    {\n      user = 0.0;\n      nice = 0.0;\n      system = 0.0;\n      idle = 100.0;\n      iowait = 0.0;\n      irq = 0.0;\n      softirq = 0.0;\n      steal = 0.0;\n      guest = 0.0;\n    }\n\n  let of_linux_fields (fields : string list) : t option =\n    match fields with\n    | _name :: user :: nice :: system :: idle :: rest -> (\n        match\n          ( int64_of_string_opt user,\n            int64_of_string_opt nice,\n            int64_of_string_opt system,\n            int64_of_string_opt idle )\n        with\n        | Some user, Some nice, Some system, Some idle ->\n            let get idx =\n              match List.nth_opt rest idx with\n              | None -> 0L\n              | Some s 
-> Option.value (int64_of_string_opt s) ~default:0L\n            in\n            Some\n              {\n                user;\n                nice;\n                system;\n                idle;\n                iowait = get 0;\n                irq = get 1;\n                softirq = get 2;\n                steal = get 3;\n                guest = get 4;\n              }\n        | _ -> None)\n    | _ -> None\n\n  let of_macos_row (row : int64 array) : t option =\n    if Array.length row < 4 then None\n    else\n      Some\n        {\n          user = row.(0);\n          nice = row.(1);\n          system = row.(2);\n          idle = row.(3);\n          iowait = 0L;\n          irq = 0L;\n          softirq = 0L;\n          steal = 0L;\n          guest = 0L;\n        }\n\n  let is_cpu_core_name (name : string) =\n    (* \"cpu0\", \"cpu1\", ... *)\n    String.length name >= 4\n    && String.starts_with ~prefix:\"cpu\" name\n    && match name.[3] with '0' .. '9' -> true | _ -> false\n\n  let read_linux () : (t * t list) option =\n    let path = \"/proc/stat\" in\n    if not (Sys.file_exists path) then None\n    else\n      let total = ref None in\n      let cores = ref [] in\n      let handle_line line =\n        match split_whitespace line with\n        | [] -> ()\n        | name :: _ as fields ->\n            if name = \"cpu\" then total := of_linux_fields fields\n            else if is_cpu_core_name name then (\n              (* Parenthesize the match: a bare match here would swallow any\n                 following [else] and fail to parse. *)\n              match of_linux_fields fields with\n              | Some t -> cores := t :: !cores\n              | None -> ())\n      in\n      (try fold_lines path ~init:() ~f:(fun () line -> handle_line line)\n       with _ -> ());\n      match !total with None -> None | Some t -> Some (t, List.rev !cores)\n\n  let read_macos () : (t * t list) option =\n    try\n      let rows = c_get_cpu_load () in\n      if Array.length rows = 0 then None\n      else\n        match of_macos_row rows.(0) 
with\n        | None -> None\n        | Some total ->\n            let cores =\n              if Array.length rows <= 1 then []\n              else\n                Array.to_list\n                  (Array.init\n                     (Array.length rows - 1)\n                     (fun i ->\n                       match of_macos_row rows.(i + 1) with\n                       | Some t -> t\n                       | None -> zero))\n            in\n            Some (total, cores)\n    with _ -> None\n\n  let read () =\n    if Sys.file_exists \"/proc/stat\" then read_linux () else read_macos ()\n\n  let sample () : t =\n    match read () with\n    | Some (t, _) -> t\n    | None -> raise (Sys_error \"Cpu.sample: failed to read CPU statistics\")\n\n  let sample_per_core () : t array =\n    match read () with\n    | Some (_, cores) -> Array.of_list cores\n    | None ->\n        raise\n          (Sys_error\n             \"Cpu.sample_per_core: failed to read per-core CPU statistics\")\n\n  let compute ~(prev : t) ~(next : t) : stats =\n    let du = i64_delta ~prev:prev.user ~next:next.user in\n    let dn = i64_delta ~prev:prev.nice ~next:next.nice in\n    let ds = i64_delta ~prev:prev.system ~next:next.system in\n    let di = i64_delta ~prev:prev.idle ~next:next.idle in\n    let diw = i64_delta ~prev:prev.iowait ~next:next.iowait in\n    let dirq = i64_delta ~prev:prev.irq ~next:next.irq in\n    let dsi = i64_delta ~prev:prev.softirq ~next:next.softirq in\n    let dst = i64_delta ~prev:prev.steal ~next:next.steal in\n    let dg = i64_delta ~prev:prev.guest ~next:next.guest in\n\n    let total =\n      List.fold_left Int64.add 0L [ du; dn; ds; di; diw; dirq; dsi; dst; dg ]\n    in\n    if Int64.compare total 0L <= 0 then zero_stats\n    else\n      let total_f = Int64.to_float total in\n      let pct v = Int64.to_float v /. total_f *. 
100.0 in\n      {\n        user = pct du;\n        nice = pct dn;\n        system = pct ds;\n        idle = pct di;\n        iowait = pct diw;\n        irq = pct dirq;\n        softirq = pct dsi;\n        steal = pct dst;\n        guest = pct dg;\n      }\nend\n\n(* Mem *)\n\nmodule Mem = struct\n  type t = {\n    total : int64;\n    used : int64;\n    free : int64;\n    available : int64;\n    compressed : int64;\n    wired : int64;\n    active : int64;\n    inactive : int64;\n    purgeable : int64;\n    speculative : int64;\n    external_ : int64;\n    page_size : int64;\n    swap_total : int64;\n    swap_used : int64;\n  }\n\n  let read_linux_meminfo () =\n    let path = \"/proc/meminfo\" in\n    if not (Sys.file_exists path) then None\n    else\n      let mem_total = ref None in\n      let mem_free = ref None in\n      let mem_available = ref None in\n      let buffers = ref None in\n      let cached = ref None in\n      let swap_total = ref None in\n      let swap_free = ref None in\n      let shmem = ref None in\n      let sreclaimable = ref None in\n      let set_kb (r : int64 option ref) (line : string) =\n        match split_whitespace line with\n        | _key :: value :: _ -> (\n            match int64_of_string_opt value with\n            | None -> ()\n            | Some kb -> r := Some (Int64.mul kb 1024L))\n        | _ -> ()\n      in\n      let handle_line line =\n        if String.starts_with ~prefix:\"MemTotal:\" line then\n          set_kb mem_total line\n        else if String.starts_with ~prefix:\"MemFree:\" line then\n          set_kb mem_free line\n        else if String.starts_with ~prefix:\"MemAvailable:\" line then\n          set_kb mem_available line\n        else if String.starts_with ~prefix:\"Buffers:\" line then\n          set_kb buffers line\n        else if String.starts_with ~prefix:\"Cached:\" line then\n          set_kb cached line\n        else if String.starts_with ~prefix:\"SwapTotal:\" line then\n          set_kb swap_total 
line\n        else if String.starts_with ~prefix:\"SwapFree:\" line then\n          set_kb swap_free line\n        else if String.starts_with ~prefix:\"Shmem:\" line then set_kb shmem line\n        else if String.starts_with ~prefix:\"SReclaimable:\" line then\n          set_kb sreclaimable line\n        else ()\n      in\n      (try fold_lines path ~init:() ~f:(fun () line -> handle_line line)\n       with _ -> ());\n      match !mem_total with\n      | None -> None\n      | Some total ->\n          let free = Option.value !mem_free ~default:0L in\n          let buffers_b = Option.value !buffers ~default:0L in\n          let cached_b = Option.value !cached ~default:0L in\n          let shmem_b = Option.value !shmem ~default:0L in\n          let sreclaimable_b = Option.value !sreclaimable ~default:0L in\n          let swap_total_b = Option.value !swap_total ~default:0L in\n          let swap_free_b = Option.value !swap_free ~default:0L in\n\n          let cache_used =\n            i64_max0 (Int64.sub (Int64.add cached_b sreclaimable_b) shmem_b)\n          in\n          let used =\n            i64_max0\n              (Int64.sub\n                 (Int64.sub (Int64.sub total free) buffers_b)\n                 cache_used)\n          in\n          let swap_used = i64_max0 (Int64.sub swap_total_b swap_free_b) in\n          (* MemAvailable is available on Linux 3.14+; fallback to free +\n             buffers + cached *)\n          let available =\n            match !mem_available with\n            | Some v -> v\n            | None -> Int64.add free (Int64.add buffers_b cached_b)\n          in\n\n          Some\n            {\n              total;\n              used;\n              free;\n              available;\n              compressed = 0L;\n              wired = 0L;\n              active = 0L;\n              inactive = cache_used;\n              purgeable = buffers_b;\n              speculative = 0L;\n              external_ = 0L;\n              page_size = page_size 
();\n              swap_total = swap_total_b;\n              swap_used;\n            }\n\n  let read_macos () =\n    try\n      let raw = c_get_memory () in\n      let ps = raw.page_size in\n      let to_bytes pages = Int64.mul pages ps in\n      let active = to_bytes raw.active in\n      let inactive = to_bytes raw.inactive in\n      let speculative = to_bytes raw.speculative in\n      let wired = to_bytes raw.wired in\n      let compressed = to_bytes raw.compressor in\n      let purgeable = to_bytes raw.purgeable in\n      let external_ = to_bytes raw.external_ in\n      let free = to_bytes raw.free in\n\n      let used_raw =\n        Int64.add active\n          (Int64.add inactive\n             (Int64.add speculative (Int64.add wired compressed)))\n      in\n      let used_raw = i64_max0 (Int64.sub used_raw purgeable) in\n      let used_raw = i64_max0 (Int64.sub used_raw external_) in\n      let used_display = i64_max0 (Int64.sub used_raw compressed) in\n      (* Approximate available as free + inactive + purgeable *)\n      let available = Int64.add free (Int64.add inactive purgeable) in\n\n      Some\n        {\n          total = raw.total;\n          used = used_display;\n          free;\n          available;\n          compressed;\n          wired;\n          active;\n          inactive;\n          purgeable;\n          speculative;\n          external_;\n          page_size = ps;\n          swap_total = raw.swap_total;\n          swap_used = raw.swap_used;\n        }\n    with _ -> None\n\n  let sample () : t =\n    let result =\n      if Sys.file_exists \"/proc/meminfo\" then read_linux_meminfo ()\n      else read_macos ()\n    in\n    match result with\n    | Some t -> t\n    | None -> raise (Sys_error \"Mem.sample: failed to read memory statistics\")\nend\n\n(* Net *)\n\nmodule Net = struct\n  type t = {\n    bytes_rx : int64;\n    packets_rx : int64;\n    bytes_tx : int64;\n    packets_tx : int64;\n  }\n\n  type stats = {\n    rx_bytes_per_sec : 
float;\n    rx_packets_per_sec : float;\n    tx_bytes_per_sec : float;\n    tx_packets_per_sec : float;\n  }\n\n  let read_linux () : t option =\n    let path = \"/proc/net/dev\" in\n    if not (Sys.file_exists path) then None\n    else\n      let add acc (rx_b, rx_p, tx_b, tx_p) =\n        {\n          bytes_rx = Int64.add acc.bytes_rx rx_b;\n          packets_rx = Int64.add acc.packets_rx rx_p;\n          bytes_tx = Int64.add acc.bytes_tx tx_b;\n          packets_tx = Int64.add acc.packets_tx tx_p;\n        }\n      in\n      let parse_line line =\n        match String.split_on_char ':' line with\n        | [ lhs; rhs ] -> (\n            let iface = String.trim lhs in\n            if iface = \"\" || iface = \"lo\" then None\n            else\n              let fields = split_whitespace rhs |> Array.of_list in\n              if Array.length fields < 10 then None\n              else\n                match\n                  ( int64_of_string_opt fields.(0),\n                    int64_of_string_opt fields.(1),\n                    int64_of_string_opt fields.(8),\n                    int64_of_string_opt fields.(9) )\n                with\n                | Some rx_b, Some rx_p, Some tx_b, Some tx_p ->\n                    Some (rx_b, rx_p, tx_b, tx_p)\n                | _ -> None)\n        | _ -> None\n      in\n      let init =\n        { bytes_rx = 0L; packets_rx = 0L; bytes_tx = 0L; packets_tx = 0L }\n      in\n      let total =\n        fold_lines path ~init ~f:(fun acc line ->\n            match parse_line line with\n            | None -> acc\n            | Some tuple -> add acc tuple)\n      in\n      Some total\n\n  let read_macos () : t option =\n    try\n      let raw = c_get_network_io () in\n      Some\n        {\n          bytes_rx = raw.bytes_rx;\n          packets_rx = raw.packets_rx;\n          bytes_tx = raw.bytes_tx;\n          packets_tx = raw.packets_tx;\n        }\n    with _ -> None\n\n  let sample () : t =\n    let result =\n      if 
Sys.file_exists \"/proc/net/dev\" then read_linux () else read_macos ()\n    in\n    match result with\n    | Some t -> t\n    | None -> raise (Sys_error \"Net.sample: failed to read network statistics\")\n\n  let compute ~(prev : t) ~(next : t) ~(dt : float) : stats =\n    if dt <= 0.0 then invalid_arg \"Net.compute: dt must be positive\";\n    {\n      rx_bytes_per_sec =\n        f_delta_i64 ~prev:prev.bytes_rx ~next:next.bytes_rx /. dt;\n      rx_packets_per_sec =\n        f_delta_i64 ~prev:prev.packets_rx ~next:next.packets_rx /. dt;\n      tx_bytes_per_sec =\n        f_delta_i64 ~prev:prev.bytes_tx ~next:next.bytes_tx /. dt;\n      tx_packets_per_sec =\n        f_delta_i64 ~prev:prev.packets_tx ~next:next.packets_tx /. dt;\n    }\nend\n\n(* Disk_io *)\n\nmodule Disk_io = struct\n  type t = {\n    bytes_read : int64;\n    bytes_written : int64;\n    time_ms : int64;\n    num_disks : int64;\n  }\n\n  type stats = {\n    read_bytes_per_sec : float;\n    write_bytes_per_sec : float;\n    utilization_percent : float;\n  }\n\n  type linux_disk = {\n    name : string;\n    sectors_read : int64;\n    sectors_written : int64;\n    io_time_ms : int64;\n  }\n\n  let is_excluded_name name =\n    String.starts_with ~prefix:\"dm-\" name\n    || String.starts_with ~prefix:\"loop\" name\n    || String.starts_with ~prefix:\"md\" name\n    || String.starts_with ~prefix:\"zram\" name\n\n  let partition_base (name : string) : string option =\n    (* If name ends with digits, it's *possibly* a partition. We return the\n       candidate base device name. *)\n    let len = String.length name in\n    let rec find_first_non_digit i =\n      if i < 0 then None\n      else\n        match name.[i] with\n        | '0' .. 
'9' -> find_first_non_digit (i - 1)\n        | _ -> Some i\n    in\n    match find_first_non_digit (len - 1) with\n    | None -> None\n    | Some i when i = len - 1 -> None (* no trailing digits *)\n    | Some i ->\n        let digit_start = i + 1 in\n        if digit_start <= 0 then None\n        else if name.[i] = 'p' then\n          (* nvme0n1p1 / mmcblk0p1 style *)\n          if i <= 0 then None else Some (String.sub name 0 i)\n        else\n          (* sda1 style *)\n          Some (String.sub name 0 (i + 1))\n\n  let read_linux () : t option =\n    let path = \"/proc/diskstats\" in\n    if not (Sys.file_exists path) then None\n    else\n      let parsed =\n        fold_lines path ~init:[] ~f:(fun acc line ->\n            match split_whitespace line with\n            | _major :: _minor :: name :: _reads_ok :: _reads_merged\n              :: sectors_read :: _time_read :: _writes_ok :: _writes_merged\n              :: sectors_written :: _time_write :: _in_flight :: io_time_ms\n              :: _weighted_time :: _ -> (\n                if is_excluded_name name then acc\n                else\n                  match\n                    ( int64_of_string_opt sectors_read,\n                      int64_of_string_opt sectors_written,\n                      int64_of_string_opt io_time_ms )\n                  with\n                  | Some sr, Some sw, Some t ->\n                      {\n                        name;\n                        sectors_read = sr;\n                        sectors_written = sw;\n                        io_time_ms = t;\n                      }\n                      :: acc\n                  | _ -> acc)\n            | _ -> acc)\n      in\n      let candidates = List.rev parsed in\n      let name_set = Hashtbl.create (List.length candidates) in\n      List.iter (fun d -> Hashtbl.replace name_set d.name ()) candidates;\n\n      let is_partition d =\n        match partition_base d.name with\n        | None -> false\n        | Some base -> 
Hashtbl.mem name_set base\n      in\n\n      let bytes_read = ref 0L in\n      let bytes_written = ref 0L in\n      let time_ms = ref 0L in\n      let num_disks = ref 0L in\n\n      List.iter\n        (fun d ->\n          if not (is_partition d) then (\n            bytes_read := Int64.add !bytes_read (Int64.mul d.sectors_read 512L);\n            bytes_written :=\n              Int64.add !bytes_written (Int64.mul d.sectors_written 512L);\n            time_ms := Int64.add !time_ms d.io_time_ms;\n            num_disks := Int64.add !num_disks 1L))\n        candidates;\n\n      Some\n        {\n          bytes_read = !bytes_read;\n          bytes_written = !bytes_written;\n          time_ms = !time_ms;\n          num_disks = !num_disks;\n        }\n\n  let read_macos () : t option =\n    try\n      let raw = c_get_disk_io () in\n      Some\n        {\n          bytes_read = raw.bytes_read;\n          bytes_written = raw.bytes_written;\n          time_ms = raw.time_ms;\n          num_disks = raw.num_disks;\n        }\n    with _ -> None\n\n  let sample () : t =\n    let result =\n      if Sys.file_exists \"/proc/diskstats\" then read_linux () else read_macos ()\n    in\n    match result with\n    | Some t -> t\n    | None ->\n        raise (Sys_error \"Disk_io.sample: failed to read disk I/O statistics\")\n\n  let compute ~(prev : t) ~(next : t) ~(dt : float) : stats =\n    if dt <= 0.0 then invalid_arg \"Disk_io.compute: dt must be positive\";\n\n    let read_bps =\n      f_delta_i64 ~prev:prev.bytes_read ~next:next.bytes_read /. dt\n    in\n    let write_bps =\n      f_delta_i64 ~prev:prev.bytes_written ~next:next.bytes_written /. dt\n    in\n    let time_delta_ms = f_delta_i64 ~prev:prev.time_ms ~next:next.time_ms in\n\n    let utilization =\n      if Int64.compare next.num_disks 0L <= 0 then 0.0\n      else\n        let denom = dt *. 1000.0 *. Int64.to_float next.num_disks in\n        if denom <= 0.0 then 0.0 else 100.0 *. time_delta_ms /. 
denom\n    in\n    {\n      read_bytes_per_sec = read_bps;\n      write_bytes_per_sec = write_bps;\n      utilization_percent = min utilization 100.0;\n    }\nend\n\n(* Fs *)\n\nmodule Fs = struct\n  type partition = {\n    mount_point : string;\n    total_bytes : int64;\n    used_bytes : int64;\n    avail_bytes : int64;\n  }\n\n  type t = {\n    total_bytes : int64;\n    used_bytes : int64;\n    avail_bytes : int64;\n    partitions : partition list;\n  }\n\n  let is_excluded_mount ~mount_point ~fstype =\n    let excluded_fstypes =\n      [\n        \"devfs\";\n        \"devtmpfs\";\n        \"tmpfs\";\n        \"proc\";\n        \"sysfs\";\n        \"cgroup\";\n        \"autofs\";\n        \"overlay\";\n      ]\n    in\n    let excluded_mount_prefixes = [ \"/dev\"; \"/proc\"; \"/sys\"; \"/run\" ] in\n    List.exists (fun p -> String.starts_with ~prefix:p fstype) excluded_fstypes\n    || List.exists\n         (fun p -> String.starts_with ~prefix:p mount_point)\n         excluded_mount_prefixes\n\n  let stat_partition (mount_point : string) : partition option =\n    try\n      let raw = c_statvfs mount_point in\n      if Int64.compare raw.total 0L <= 0 then None\n      else if Int64.compare raw.total 100_000_000L < 0 then None\n      else\n        let used = i64_max0 (Int64.sub raw.total raw.free) in\n        Some\n          {\n            mount_point;\n            total_bytes = raw.total;\n            used_bytes = used;\n            avail_bytes = raw.avail;\n          }\n    with _ -> None\n\n  let partitions () : partition list =\n    try\n      c_getmounts () |> Array.to_list\n      |> List.filter_map (fun (mount_point, _device, fstype) ->\n          if is_excluded_mount ~mount_point ~fstype then None\n          else stat_partition mount_point)\n    with _ -> []\n\n  let sample ?(path = \"/\") () : t =\n    let result =\n      try\n        let raw = c_statvfs path in\n        if Int64.compare raw.total 0L < 0 then None\n        else\n          let used = i64_max0 
(Int64.sub raw.total raw.free) in\n          Some\n            {\n              total_bytes = raw.total;\n              used_bytes = used;\n              avail_bytes = raw.avail;\n              partitions = partitions ();\n            }\n      with _ -> None\n    in\n    match result with\n    | Some t -> t\n    | None ->\n        raise\n          (Sys_error\n             (\"Fs.sample: failed to read filesystem statistics for \" ^ path))\nend\n\n(* Proc *)\n\nmodule Proc = struct\n  type state =\n    | Running\n    | Sleeping\n    | Disk_sleep\n    | Stopped\n    | Zombie\n    | Idle\n    | Unknown\n\n  let state_of_char = function\n    | 'R' -> Running\n    | 'S' -> Sleeping\n    | 'D' -> Disk_sleep\n    | 'T' | 't' -> Stopped\n    | 'Z' -> Zombie\n    | 'I' -> Idle\n    | 'X' -> Unknown (* dead *)\n    | _ -> Unknown\n\n  module Self = struct\n    type t = {\n      utime : float;\n      stime : float;\n      rss_bytes : int64;\n      vsize_bytes : int64;\n    }\n\n    type stats = { cpu_percent : float; rss_bytes : int64; vsize_bytes : int64 }\n\n    let read_proc_stat_tail (path : string) : string list option =\n      try\n        let line = with_in_file path input_line in\n        match String.rindex_opt line ')' with\n        | None -> None\n        | Some k ->\n            let start = k + 2 in\n            if start >= String.length line then None\n            else\n              Some\n                (split_whitespace\n                   (String.sub line start (String.length line - start)))\n      with _ -> None\n\n    let linux_rss_vsize () : (int64 * int64) option =\n      match read_proc_stat_tail \"/proc/self/stat\" with\n      | None -> None\n      | Some fields -> (\n          match (List.nth_opt fields 20, List.nth_opt fields 21) with\n          | Some vsize_s, Some rss_pages_s -> (\n              match\n                (int64_of_string_opt vsize_s, int64_of_string_opt rss_pages_s)\n              with\n              | Some vsize, Some rss_pages ->\n    
              let rss = Int64.mul rss_pages (page_size ()) in\n                  Some (rss, vsize)\n              | _ -> None)\n          | _ -> None)\n\n    let sample () : t =\n      let result =\n        try\n          let times = Unix.times () in\n          let utime = times.Unix.tms_utime in\n          let stime = times.Unix.tms_stime in\n\n          let rss_bytes, vsize_bytes =\n            if Sys.file_exists \"/proc/self/stat\" then\n              match linux_rss_vsize () with\n              | Some (rss, vsz) -> (rss, vsz)\n              | None -> (0L, 0L)\n            else\n              let raw = c_proc_self_mem () in\n              let rss = if Int64.compare raw.rss 0L < 0 then 0L else raw.rss in\n              let vsz =\n                if Int64.compare raw.vsize 0L < 0 then 0L else raw.vsize\n              in\n              (rss, vsz)\n          in\n          Some { utime; stime; rss_bytes; vsize_bytes }\n        with _ -> None\n      in\n      match result with\n      | Some t -> t\n      | None ->\n          raise\n            (Sys_error \"Proc.Self.sample: failed to read process statistics\")\n\n    let compute ~(prev : t) ~(next : t) ~(dt : float) ~(num_cores : int option)\n        : stats =\n      if dt <= 0.0 then invalid_arg \"Proc.Self.compute: dt must be positive\";\n\n      let dt_ok = dt > 0.01 && dt < 10.0 in\n      let cpu_delta = next.utime -. prev.utime +. (next.stime -. prev.stime) in\n      let cpu_delta = max 0.0 cpu_delta in\n\n      let cpu_percent =\n        if not dt_ok then 0.0\n        else\n          let raw = cpu_delta /. dt *. 100.0 in\n          match num_cores with\n          | Some n when n > 0 && n <= 128 -> min (raw /. 
float_of_int n) 100.0\n          | _ -> min raw 800.0\n      in\n      {\n        cpu_percent;\n        rss_bytes = next.rss_bytes;\n        vsize_bytes = next.vsize_bytes;\n      }\n  end\n\n  module Table = struct\n    type t = {\n      pid : int;\n      ppid : int;\n      name : string;\n      cmdline : string;\n      state : state;\n      user : string;\n      priority : int;\n      nice : int;\n      user_time : int64;\n      system_time : int64;\n      resident_size : int64;\n      virtual_size : int64;\n      num_threads : int;\n      num_running : int;\n      faults : int64;\n      mem_percent : float;\n    }\n\n    type stats = {\n      pid : int;\n      name : string;\n      cpu_percent : float;\n      mem_percent : float;\n      rss_bytes : int64;\n    }\n\n    let read_total_mem_bytes_linux () : int64 option =\n      let path = \"/proc/meminfo\" in\n      if not (Sys.file_exists path) then None\n      else\n        let mem_total = ref None in\n        let handle line =\n          if String.starts_with ~prefix:\"MemTotal:\" line then\n            match split_whitespace line with\n            | _key :: value :: _ -> (\n                match int64_of_string_opt value with\n                | Some kb -> mem_total := Some (Int64.mul kb 1024L)\n                | None -> ())\n            | _ -> ()\n        in\n        (try fold_lines path ~init:() ~f:(fun () line -> handle line)\n         with _ -> ());\n        !mem_total\n\n    let total_mem_bytes () : int64 =\n      if Sys.file_exists \"/proc/meminfo\" then\n        Option.value (read_total_mem_bytes_linux ()) ~default:0L\n      else try (c_get_memory ()).total with _ -> 0L\n\n    let read_proc_stat_tail (path : string) : string list option =\n      try\n        let line = with_in_file path input_line in\n        match String.rindex_opt line ')' with\n        | None -> None\n        | Some k ->\n            let start = k + 2 in\n            if start >= String.length line then None\n            else\n         
     Some\n                (split_whitespace\n                   (String.sub line start (String.length line - start)))\n      with _ -> None\n\n    let read_comm (path : string) : string option =\n      try\n        let s = with_in_file path input_line |> String.trim in\n        if s = \"\" then None else Some s\n      with _ -> None\n\n    let read_cmdline (pid : int) : string =\n      let path = Printf.sprintf \"/proc/%d/cmdline\" pid in\n      try\n        let ic = open_in path in\n        let len = in_channel_length ic in\n        if len = 0 then (\n          close_in_noerr ic;\n          \"\")\n        else\n          let buf = Bytes.create len in\n          let n = input ic buf 0 len in\n          close_in_noerr ic;\n          (* Replace null bytes with spaces *)\n          for i = 0 to n - 1 do\n            if Bytes.get buf i = '\\000' then Bytes.set buf i ' '\n          done;\n          String.trim (Bytes.sub_string buf 0 n)\n      with _ -> \"\"\n\n    let get_username (uid : int) : string =\n      try (Unix.getpwuid uid).Unix.pw_name with _ -> \"\"\n\n    let sample_linux () : t list =\n      let total_mem = total_mem_bytes () in\n      let dir = \"/proc\" in\n      let dh = try Some (Unix.opendir dir) with _ -> None in\n      match dh with\n      | None -> []\n      | Some dh ->\n          let procs = ref [] in\n          let add pid fields =\n            (* Fields are tokens starting at /proc/[pid]/stat field 3 (state).\n               Index 0 = state, 1 = ppid, ... 
11 = utime, 12 = stime, 15 =\n               priority, 16 = nice, 17 = num_threads, 20 = vsize, 21 = rss *)\n            let state_char =\n              match List.nth_opt fields 0 with\n              | Some s when String.length s > 0 -> s.[0]\n              | _ -> '?'\n            in\n            let ppid =\n              match List.nth_opt fields 1 with\n              | Some s -> Option.value (int_of_string_opt s) ~default:0\n              | None -> 0\n            in\n            let priority =\n              match List.nth_opt fields 15 with\n              | Some s -> Option.value (int_of_string_opt s) ~default:0\n              | None -> 0\n            in\n            let nice =\n              match List.nth_opt fields 16 with\n              | Some s -> Option.value (int_of_string_opt s) ~default:0\n              | None -> 0\n            in\n            let num_threads =\n              match List.nth_opt fields 17 with\n              | Some s -> Option.value (int_of_string_opt s) ~default:1\n              | None -> 1\n            in\n            match\n              ( List.nth_opt fields 11,\n                List.nth_opt fields 12,\n                List.nth_opt fields 20,\n                List.nth_opt fields 21 )\n            with\n            | Some ut_s, Some st_s, Some vsz_s, Some rss_pages_s -> (\n                match\n                  ( int64_of_string_opt ut_s,\n                    int64_of_string_opt st_s,\n                    int64_of_string_opt vsz_s,\n                    int64_of_string_opt rss_pages_s )\n                with\n                | Some utime, Some stime, Some vsize, Some rss_pages ->\n                    let rss = Int64.mul rss_pages (page_size ()) in\n                    let mem_percent =\n                      if Int64.compare total_mem 0L > 0 then\n                        Int64.to_float rss *. 100.0 /. 
Int64.to_float total_mem\n                      else 0.0\n                    in\n                    let name =\n                      match read_comm (Printf.sprintf \"/proc/%d/comm\" pid) with\n                      | Some n -> n\n                      | None -> string_of_int pid\n                    in\n                    let cmdline = read_cmdline pid in\n                    let uid =\n                      try\n                        (Unix.stat (Printf.sprintf \"/proc/%d\" pid)).Unix.st_uid\n                      with _ -> -1\n                    in\n                    let user = if uid >= 0 then get_username uid else \"\" in\n                    procs :=\n                      {\n                        pid;\n                        ppid;\n                        name;\n                        cmdline;\n                        state = state_of_char state_char;\n                        user;\n                        priority;\n                        nice;\n                        user_time = utime;\n                        system_time = stime;\n                        resident_size = rss;\n                        virtual_size = vsize;\n                        num_threads;\n                        num_running = 0;\n                        faults = 0L;\n                        mem_percent;\n                      }\n                      :: !procs\n                | _ -> ())\n            | _ -> ()\n          in\n          (try\n             while true do\n               let entry = Unix.readdir dh in\n               match int_of_string_opt entry with\n               | None -> ()\n               | Some pid -> (\n                   let stat_path = Printf.sprintf \"/proc/%d/stat\" pid in\n                   if Sys.file_exists stat_path then\n                     match read_proc_stat_tail stat_path with\n                     | Some fields -> add pid fields\n                     | None -> ())\n             done\n           with End_of_file -> ());\n          
Unix.closedir dh;\n          !procs\n\n    let sample_macos () : t list =\n      let total_mem = total_mem_bytes () in\n      let pids =\n        try c_list_pids () |> Array.to_list |> List.filter (fun pid -> pid > 0)\n        with _ -> []\n      in\n      let add acc pid =\n        match c_get_proc_info pid with\n        | None -> acc\n        | Some info ->\n            let mem_percent =\n              if Int64.compare total_mem 0L > 0 then\n                Int64.to_float info.resident_size\n                *. 100.0 /. Int64.to_float total_mem\n              else 0.0\n            in\n            {\n              pid;\n              ppid = info.ppid;\n              name = info.name;\n              cmdline = info.cmdline;\n              state = state_of_char info.state;\n              user = info.user;\n              priority = info.priority;\n              nice = info.nice;\n              user_time = info.user_time;\n              system_time = info.system_time;\n              resident_size = info.resident_size;\n              virtual_size = info.virtual_size;\n              num_threads = info.num_threads;\n              num_running = info.num_running;\n              faults = info.faults;\n              mem_percent;\n            }\n            :: acc\n      in\n      List.fold_left add [] pids\n\n    let sample () : t list =\n      try if Sys.file_exists \"/proc\" then sample_linux () else sample_macos ()\n      with _ -> []\n\n    let compute ~(prev : t list) ~(next : t list) ~(dt : float) : stats list =\n      if dt <= 0.0 then invalid_arg \"Proc.Table.compute: dt must be positive\";\n\n      let prev_by_pid : (int, t) Hashtbl.t =\n        Hashtbl.create (List.length prev)\n      in\n      List.iter (fun (p : t) -> Hashtbl.replace prev_by_pid p.pid p) prev;\n\n      let interval_ns = dt *. 
1e9 in\n      let cpu_percent (prev_p : t) (curr_p : t) =\n        let prev_total = Int64.add prev_p.user_time prev_p.system_time in\n        let curr_total = Int64.add curr_p.user_time curr_p.system_time in\n        if Int64.compare curr_total prev_total <= 0 then 0.0\n        else\n          let delta_ticks = Int64.sub curr_total prev_total in\n          let delta_ns = proc_ticks_to_ns delta_ticks in\n          if interval_ns <= 0.0 then 0.0 else delta_ns /. interval_ns *. 100.0\n      in\n\n      let make_stats ~pid ~name ~cpu_percent ~mem_percent ~rss_bytes : stats =\n        { pid; name; cpu_percent; mem_percent; rss_bytes }\n      in\n      next\n      |> List.filter_map (fun (curr : t) ->\n          match Hashtbl.find_opt prev_by_pid curr.pid with\n          | Some prev_p ->\n              let cpu = cpu_percent prev_p curr in\n              if cpu > 0.0 || curr.mem_percent > 0.0 then\n                Some\n                  (make_stats ~pid:curr.pid ~name:curr.name ~cpu_percent:cpu\n                     ~mem_percent:curr.mem_percent ~rss_bytes:curr.resident_size)\n              else None\n          | None ->\n              if curr.mem_percent > 0.0 then\n                Some\n                  (make_stats ~pid:curr.pid ~name:curr.name ~cpu_percent:0.0\n                     ~mem_percent:curr.mem_percent ~rss_bytes:curr.resident_size)\n              else None)\n  end\nend\n\n(* System info *)\n\nlet loadavg () = c_get_loadavg ()\nlet uptime () = c_get_uptime ()\n"
  },
  {
    "path": "packages/munin/lib/sys/sysstat.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** System metrics collection library.\n\n    Sysstat provides stateless, poll-based system monitoring. The caller manages\n    state and sampling intervals. Each module samples instantaneous or\n    cumulative values from the operating system. CPU, network, and disk I/O\n    statistics are cumulative since boot and require two samples to compute\n    usage percentages; memory statistics are instantaneous.\n\n    {1:platform Platform support}\n\n    Supported platforms: Linux and macOS. Platform-specific behavior is\n    documented per module. Some metrics have limited availability on certain\n    platforms (e.g., macOS CPU counters populate only user/nice/system/idle\n    fields). *)\n\n(** {1:modules Modules} *)\n\n(** CPU statistics. *)\nmodule Cpu : sig\n  type t = {\n    user : int64;  (** Time spent in user mode. *)\n    nice : int64;  (** Time spent in user mode with low priority (nice). *)\n    system : int64;  (** Time spent in system (kernel) mode. *)\n    idle : int64;  (** Time spent idle. *)\n    iowait : int64;  (** Time spent waiting for I/O to complete (Linux only). *)\n    irq : int64;  (** Time spent servicing interrupts (Linux only). *)\n    softirq : int64;  (** Time spent servicing soft interrupts (Linux only). *)\n    steal : int64;\n        (** Time stolen by other VMs in virtualized environments (Linux only).\n        *)\n    guest : int64;  (** Time spent running guest VMs (Linux only). *)\n  }\n  (** Cumulative CPU time counters in ticks since boot.\n\n      All fields represent cumulative time spent in each CPU state. 
The unit is\n      platform-specific ticks (Linux jiffies or macOS Mach ticks) but is\n      abstracted away by {!compute}.\n\n      Platform behavior:\n      - {b macOS}: Only [user], [nice], [system], and [idle] are available;\n        other fields are [0L]. *)\n\n  type stats = {\n    user : float;  (** Percentage of time spent in user mode. *)\n    nice : float;\n        (** Percentage of time spent in user mode with low priority. *)\n    system : float;  (** Percentage of time spent in system mode. *)\n    idle : float;  (** Percentage of time spent idle. *)\n    iowait : float;  (** Percentage of time spent waiting for I/O. *)\n    irq : float;  (** Percentage of time spent servicing interrupts. *)\n    softirq : float;  (** Percentage of time spent servicing soft interrupts. *)\n    steal : float;  (** Percentage of time stolen by hypervisor. *)\n    guest : float;  (** Percentage of time spent running guest VMs. *)\n  }\n  (** CPU usage percentages between two samples.\n\n      All fields are in the range [0.0] to [100.0], where [100.0] represents\n      full utilization. The sum of all fields equals [100.0]. *)\n\n  val sample : unit -> t\n  (** [sample ()] returns aggregate CPU counters across all cores.\n\n      @raise Sys_error\n        if CPU statistics are unavailable on the current platform or an error\n        occurs during sampling. *)\n\n  val sample_per_core : unit -> t array\n  (** [sample_per_core ()] returns per-core CPU counters.\n\n      The array length equals the number of logical CPU cores.\n\n      @raise Sys_error\n        if per-core statistics are unavailable or an error occurs. *)\n\n  val compute : prev:t -> next:t -> stats\n  (** [compute ~prev ~next] calculates CPU usage percentages between two\n      samples.\n\n      Computes the delta for each counter field and converts to percentages of\n      total CPU time elapsed between samples. 
The sum of all fields in the\n      returned [stats] equals [100.0].\n\n      If no time has elapsed between samples (i.e., all counters are\n      identical), returns [0.0] for every field except [idle], which is set to\n      [100.0]. This is not an error condition. *)\nend\n\n(** Memory statistics (instantaneous). *)\nmodule Mem : sig\n  type t = {\n    total : int64;  (** Total physical memory. *)\n    used : int64;\n        (** Memory in use (calculated via platform-specific formula). *)\n    free : int64;  (** Free memory available for allocation. *)\n    available : int64;\n        (** Memory available for starting new applications without swapping.\n\n            On macOS, approximated as [free + inactive + purgeable]. *)\n    compressed : int64;  (** Compressed memory (macOS only, [0L] on Linux). *)\n    wired : int64;\n        (** Wired (non-pageable) memory (macOS only, [0L] on Linux). *)\n    active : int64;  (** Active memory pages (macOS only, [0L] on Linux). *)\n    inactive : int64;\n        (** Inactive memory pages (Linux: cached + sreclaimable - shmem). *)\n    purgeable : int64;  (** Purgeable memory (macOS) or buffers (Linux). *)\n    speculative : int64;  (** Speculative memory (macOS only, [0L] on Linux). *)\n    external_ : int64;  (** External memory (macOS only, [0L] on Linux). *)\n    page_size : int64;  (** System page size in bytes. *)\n    swap_total : int64;  (** Total swap space. *)\n    swap_used : int64;  (** Used swap space. *)\n  }\n  (** Memory usage statistics.\n\n      All size fields are in bytes. Memory statistics are instantaneous\n      snapshots, not cumulative counters.\n\n      Platform behavior:\n      - {b Linux}: The [used] field is computed as\n        [total - free - buffers - (cached + sreclaimable - shmem)].\n      - {b macOS}: The [used] field is computed as\n        [active + inactive + speculative + wired + compressed - purgeable -\n         external], with [compressed] subtracted from the display value. 
*)\n\n  val sample : unit -> t\n  (** [sample ()] returns current memory usage statistics.\n\n      @raise Sys_error\n        if memory statistics are unavailable on the current platform or an error\n        occurs during sampling. *)\nend\n\n(** Network I/O statistics. *)\nmodule Net : sig\n  type t = {\n    bytes_rx : int64;  (** Total bytes received. *)\n    packets_rx : int64;  (** Total packets received. *)\n    bytes_tx : int64;  (** Total bytes transmitted. *)\n    packets_tx : int64;  (** Total packets transmitted. *)\n  }\n  (** Cumulative network I/O counters since boot.\n\n      Aggregates all network interfaces except loopback. Counters are cumulative\n      and monotonically increasing (until system reboot or counter overflow). *)\n\n  type stats = {\n    rx_bytes_per_sec : float;  (** Receive rate in bytes per second. *)\n    rx_packets_per_sec : float;  (** Receive rate in packets per second. *)\n    tx_bytes_per_sec : float;  (** Transmit rate in bytes per second. *)\n    tx_packets_per_sec : float;  (** Transmit rate in packets per second. *)\n  }\n  (** Network I/O rates computed between two samples. *)\n\n  val sample : unit -> t\n  (** [sample ()] returns cumulative network I/O counters.\n\n      @raise Sys_error\n        if network statistics are unavailable on the current platform or an\n        error occurs during sampling. *)\n\n  val compute : prev:t -> next:t -> dt:float -> stats\n  (** [compute ~prev ~next ~dt] calculates network I/O rates between two\n      samples.\n\n      Computes the delta for each counter and divides by [dt] to obtain rates\n      per second. Negative deltas (e.g., from counter overflow or reboot) are\n      treated as zero.\n\n      @raise Invalid_argument if [dt <= 0.0]. *)\nend\n\n(** Disk I/O statistics. *)\nmodule Disk_io : sig\n  type t = {\n    bytes_read : int64;  (** Total bytes read from disk. *)\n    bytes_written : int64;  (** Total bytes written to disk. 
*)\n    time_ms : int64;  (** Cumulative I/O time in milliseconds. *)\n    num_disks : int64;  (** Number of physical disks included in aggregation. *)\n  }\n  (** Cumulative disk I/O counters since boot.\n\n      Aggregates physical disks only, excluding virtual devices, partitions, and\n      metadata devices. Counters are cumulative and monotonically increasing\n      until system reboot.\n\n      Platform behavior:\n      - {b Linux}: Excludes virtual devices ([dm-*], [loop*], [md*], [zram*])\n        and partitions (detected by prefix matching the parent device name). *)\n\n  type stats = {\n    read_bytes_per_sec : float;  (** Read rate in bytes per second. *)\n    write_bytes_per_sec : float;  (** Write rate in bytes per second. *)\n    utilization_percent : float;\n        (** Disk utilization percentage (0.0 to 100.0). *)\n  }\n  (** Disk I/O rates and utilization computed between two samples. *)\n\n  val sample : unit -> t\n  (** [sample ()] returns cumulative disk I/O counters.\n\n      Only physical disks are included; partitions and virtual devices are\n      excluded.\n\n      @raise Sys_error\n        if disk statistics are unavailable on the current platform or an error\n        occurs during sampling. *)\n\n  val compute : prev:t -> next:t -> dt:float -> stats\n  (** [compute ~prev ~next ~dt] calculates disk I/O rates and utilization\n      between two samples.\n\n      Computes the delta for each counter and divides by [dt] to obtain rates\n      per second. Utilization is calculated as\n      [(time_delta / (dt * 1000 * num_disks)) * 100], representing the\n      percentage of time disks were actively performing I/O, capped at [100.0].\n\n      If [num_disks] is [0L], [utilization_percent] is [0.0].\n\n      @raise Invalid_argument if [dt <= 0.0]. *)\nend\n\n(** Filesystem statistics (instantaneous). *)\nmodule Fs : sig\n  type partition = {\n    mount_point : string;  (** Mount point path (e.g., [\"/\"], [\"/home\"]). 
*)\n    total_bytes : int64;  (** Total filesystem size. *)\n    used_bytes : int64;  (** Used space (calculated as [total - free]). *)\n    avail_bytes : int64;  (** Available space for unprivileged users. *)\n  }\n  (** Partition information for a single mounted filesystem.\n\n      All size fields are in bytes. Represents a snapshot of filesystem usage at\n      the time of sampling. *)\n\n  type t = {\n    total_bytes : int64;  (** Total filesystem size for the queried path. *)\n    used_bytes : int64;  (** Used space for the queried path. *)\n    avail_bytes : int64;  (** Available space for the queried path. *)\n    partitions : partition list;\n        (** All mounted partitions (excluding virtual filesystems). *)\n  }\n  (** Filesystem statistics for a specific path.\n\n      All size fields are in bytes. Contains statistics for the filesystem\n      containing the specified path, plus a list of all mounted partitions. *)\n\n  val sample : ?path:string -> unit -> t\n  (** [sample ?path ()] returns filesystem statistics for the specified path.\n\n      Returns statistics for the filesystem containing [path] (default: [\"/\"]),\n      along with a list of all mounted partitions via {!partitions}.\n\n      Virtual and system filesystems (e.g., [devfs], [tmpfs], [proc], [sysfs])\n      are excluded from the partitions list. Filesystems smaller than\n      approximately 100 MB are also excluded.\n\n      @raise Sys_error if the path does not exist or an error occurs. *)\n\n  val partitions : unit -> partition list\n  (** [partitions ()] returns a list of all mounted partitions.\n\n      Enumerates mounted filesystems and queries their usage. Excludes\n      virtual/system filesystems and small filesystems (< 100 MB).\n\n      Returns an empty list if no partitions are found or an error occurs. *)\nend\n\n(** Process statistics. *)\nmodule Proc : sig\n  (** Process state. *)\n  type state =\n    | Running  (** Currently executing on CPU. 
*)\n    | Sleeping  (** Interruptible sleep (waiting for event). *)\n    | Disk_sleep  (** Uninterruptible sleep (waiting for I/O). *)\n    | Stopped  (** Stopped by signal (e.g., SIGSTOP). *)\n    | Zombie  (** Terminated but not yet reaped by parent. *)\n    | Idle  (** Idle kernel thread. *)\n    | Unknown  (** State could not be determined. *)\n\n  (** Current process (self) statistics. *)\n  module Self : sig\n    type t = {\n      utime : float;  (** Cumulative user-mode CPU time in seconds. *)\n      stime : float;  (** Cumulative system-mode CPU time in seconds. *)\n      rss_bytes : int64;  (** Resident set size (physical memory) in bytes. *)\n      vsize_bytes : int64;  (** Virtual memory size in bytes. *)\n    }\n    (** Raw process snapshot for delta calculation.\n\n        Contains cumulative CPU time and instantaneous memory usage for the\n        current process. CPU times are in seconds (converted from\n        platform-specific units by [Unix.times]). *)\n\n    type stats = {\n      cpu_percent : float;\n          (** CPU usage percentage (0.0 to 100.0 per core, or total if\n              [num_cores] provided). *)\n      rss_bytes : int64;  (** Resident set size in bytes. *)\n      vsize_bytes : int64;  (** Virtual memory size in bytes. *)\n    }\n    (** Computed process statistics. *)\n\n    val sample : unit -> t\n    (** [sample ()] returns raw CPU times and memory usage for the current\n        process.\n\n        Uses [Unix.times] for CPU times.\n\n        @raise Sys_error if an error occurs during sampling. *)\n\n    val compute : prev:t -> next:t -> dt:float -> num_cores:int option -> stats\n    (** [compute ~prev ~next ~dt ~num_cores] computes CPU usage percentage\n        between two samples.\n\n        CPU percentage is calculated as\n        [((utime_delta + stime_delta) / dt) * 100]. If [num_cores] is provided,\n        the percentage is normalized by dividing by the number of cores,\n        yielding a value in [0.0] to [100.0]. 
Without normalization, the value\n        can exceed [100.0] on multi-core systems.\n\n        The result is clamped to prevent spurious values from timing anomalies:\n        - With [num_cores]: capped at [100.0]\n        - Without [num_cores]: capped at [800.0]\n\n        If [dt] is outside the range [(0.01, 10.0)], returns [0.0] for\n        [cpu_percent] to avoid division by near-zero or implausibly large\n        intervals.\n\n        @raise Invalid_argument if [dt <= 0.0]. *)\n  end\n\n  (** Process table statistics. *)\n  module Table : sig\n    type t = {\n      pid : int;  (** Process ID. *)\n      ppid : int;  (** Parent process ID. *)\n      name : string;  (** Process name (comm). *)\n      cmdline : string;\n          (** Full command line with arguments. Empty if unavailable. *)\n      state : state;  (** Current process state. *)\n      user : string;  (** Owner username. Empty if UID lookup fails. *)\n      priority : int;  (** Scheduling priority. *)\n      nice : int;  (** Nice value (-20 to 19). *)\n      user_time : int64;  (** Cumulative user-mode CPU time in ticks. *)\n      system_time : int64;  (** Cumulative system-mode CPU time in ticks. *)\n      resident_size : int64;  (** Resident set size in bytes. *)\n      virtual_size : int64;  (** Virtual memory size in bytes. *)\n      num_threads : int;  (** Number of threads. *)\n      num_running : int;\n          (** Number of running threads (macOS only, [0] on Linux). *)\n      faults : int64;  (** Page faults (macOS only, [0L] on Linux). *)\n      mem_percent : float;\n          (** Memory usage as percentage of total physical memory. *)\n    }\n    (** Raw process snapshot for delta calculation.\n\n        Contains cumulative CPU time, state, and instantaneous memory/thread\n        information for a process. CPU times are in platform-specific ticks\n        (Linux jiffies or macOS Mach ticks). *)\n\n    type stats = {\n      pid : int;  (** Process ID. 
*)\n      name : string;  (** Process name. *)\n      cpu_percent : float;  (** CPU usage percentage between samples. *)\n      mem_percent : float;\n          (** Memory usage as percentage of total physical memory. *)\n      rss_bytes : int64;  (** Resident set size in bytes. *)\n    }\n    (** Computed process statistics.\n\n        Contains derived CPU percentage and filtered memory information. Only\n        processes with non-zero CPU or memory usage are included. *)\n\n    val sample : unit -> t list\n    (** [sample ()] returns raw process snapshots for all running processes.\n\n        Enumerates all processes visible to the current user and reads their\n        statistics.\n\n        Returns an empty list if an error occurs during enumeration. Individual\n        process errors (e.g., process termination during sampling) are silently\n        skipped. *)\n\n    val compute : prev:t list -> next:t list -> dt:float -> stats list\n    (** [compute ~prev ~next ~dt] calculates CPU percentages and filters\n        processes.\n\n        Matches processes by PID between [prev] and [next] samples. For matched\n        processes, computes CPU percentage as:\n        [(cpu_time_delta_ns / interval_ns) * 100].\n\n        Only processes with non-zero [cpu_percent] or [mem_percent] are included\n        in the result. New processes (in [next] but not [prev]) are included if\n        their [mem_percent] is non-zero, with [cpu_percent] set to [0.0].\n\n        @raise Invalid_argument if [dt <= 0.0]. *)\n  end\nend\n\n(** {1:sysinfo System information} *)\n\nval loadavg : unit -> float * float * float\n(** [loadavg ()] returns the 1, 5, and 15 minute load averages. *)\n\nval uptime : unit -> int64\n(** [uptime ()] returns system uptime in seconds. *)\n"
  },
  {
    "path": "packages/munin/lib/sys/sysstat_stubs.c",
    "content": "/*\n * sysstat_stubs.c - OCaml FFI bindings for system metrics\n *\n * Platform-specific implementations.\n * macOS uses Mach APIs, IOKit, and libproc.\n * Linux uses /proc filesystem (parsed in OCaml), stubs return defaults.\n */\n\n#include <caml/alloc.h>\n#include <caml/fail.h>\n#include <caml/memory.h>\n#include <caml/mlvalues.h>\n#include <stdint.h>\n#include <stdlib.h>\n#include <string.h>\n#include <sys/param.h>\n#include <sys/statvfs.h>\n#include <sys/time.h>\n#include <time.h>\n#include <unistd.h>\n\n#ifdef __APPLE__\n#include <CoreFoundation/CoreFoundation.h>\n#include <IOKit/IOKitLib.h>\n#include <IOKit/storage/IOBlockStorageDriver.h>\n#include <libproc.h>\n#include <mach/host_info.h>\n#include <mach/mach.h>\n#include <mach/mach_host.h>\n#include <mach/mach_time.h>\n#include <mach/task.h>\n#include <mach/task_info.h>\n#include <mach/vm_statistics.h>\n#include <net/if.h>\n#include <net/if_types.h>\n#include <net/route.h>\n#include <pwd.h>\n#include <sys/mount.h>\n#include <sys/proc_info.h>\n#include <sys/sysctl.h>\n#endif\n\n#ifdef __linux__\n#include <mntent.h>\n#endif\n\n/* Helpers */\n\n#ifdef __APPLE__\n/* Allocate a 4-element int64 array for CPU time counters */\nstatic value alloc_cpu_row(int64_t user, int64_t nice, int64_t sys,\n                           int64_t idle) {\n  CAMLparam0();\n  CAMLlocal1(arr);\n  arr = caml_alloc(4, 0);\n  Store_field(arr, 0, caml_copy_int64(user));\n  Store_field(arr, 1, caml_copy_int64(nice));\n  Store_field(arr, 2, caml_copy_int64(sys));\n  Store_field(arr, 3, caml_copy_int64(idle));\n  CAMLreturn(arr);\n}\n#endif\n\n/* CPU load */\n\nCAMLprim value caml_sysstat_get_cpu_load(value unit) {\n  CAMLparam1(unit);\n  CAMLlocal1(result);\n\n#ifdef __APPLE__\n  CAMLlocal2(row, totalrow);\n\n  natural_t ncpu = 0;\n  processor_info_array_t cpuInfo;\n  mach_msg_type_number_t numCpuInfo;\n\n  host_t host = mach_host_self();\n  kern_return_t kr = host_processor_info(host, PROCESSOR_CPU_LOAD_INFO, &ncpu,\n       
                                  &cpuInfo, &numCpuInfo);\n  mach_port_deallocate(mach_task_self(), host);\n\n  if (kr != KERN_SUCCESS || ncpu == 0) {\n    result = caml_alloc(0, 0);\n    CAMLreturn(result);\n  }\n\n  int64_t sum_user = 0, sum_nice = 0, sum_sys = 0, sum_idle = 0;\n  result = caml_alloc((mlsize_t)(ncpu + 1), 0);\n\n  for (natural_t i = 0; i < ncpu; i++) {\n    integer_t* base = cpuInfo + (CPU_STATE_MAX * i);\n    int64_t user = (int64_t)base[CPU_STATE_USER];\n    int64_t nice = (int64_t)base[CPU_STATE_NICE];\n    int64_t sys = (int64_t)base[CPU_STATE_SYSTEM];\n    int64_t idle = (int64_t)base[CPU_STATE_IDLE];\n\n    sum_user += user;\n    sum_nice += nice;\n    sum_sys += sys;\n    sum_idle += idle;\n\n    row = alloc_cpu_row(user, nice, sys, idle);\n    Store_field(result, (mlsize_t)(i + 1), row);\n  }\n\n  totalrow = alloc_cpu_row(sum_user, sum_nice, sum_sys, sum_idle);\n  Store_field(result, 0, totalrow);\n\n  vm_deallocate(mach_task_self(), (vm_address_t)cpuInfo,\n                (vm_size_t)(numCpuInfo * sizeof(integer_t)));\n#else\n  /* Linux: CPU stats read from /proc/stat in OCaml */\n  result = caml_alloc(0, 0);\n#endif\n\n  CAMLreturn(result);\n}\n\n/* Memory */\n\nCAMLprim value caml_sysstat_get_memory(value unit) {\n  CAMLparam1(unit);\n  CAMLlocal1(result);\n\n#ifdef __APPLE__\n  host_t host = mach_host_self();\n\n  /* Get total memory */\n  host_basic_info_data_t hbi;\n  mach_msg_type_number_t hbi_count = HOST_BASIC_INFO_COUNT;\n  kern_return_t kr =\n      host_info(host, HOST_BASIC_INFO, (host_info_t)&hbi, &hbi_count);\n  if (kr != KERN_SUCCESS) {\n    mach_port_deallocate(mach_task_self(), host);\n    caml_failwith(\"host_info failed\");\n  }\n\n  /* Get VM statistics */\n  vm_statistics64_data_t vm_stats;\n  mach_msg_type_number_t vm_count = HOST_VM_INFO64_COUNT;\n  kr = host_statistics64(host, HOST_VM_INFO64, (host_info64_t)&vm_stats,\n                         &vm_count);\n  mach_port_deallocate(mach_task_self(), host);\n  if (kr != 
KERN_SUCCESS) {\n    caml_failwith(\"host_statistics64 failed\");\n  }\n\n  /* Get swap usage */\n  int mib[2] = {CTL_VM, VM_SWAPUSAGE};\n  struct xsw_usage swap_info;\n  size_t swap_size = sizeof(swap_info);\n  int swap_ok = (sysctl(mib, 2, &swap_info, &swap_size, NULL, 0) == 0);\n\n  /* Return raw page counts - OCaml computes derived values */\n  result = caml_alloc_tuple(12);\n  Store_field(result, 0, caml_copy_int64((int64_t)hbi.max_mem));\n  Store_field(result, 1, caml_copy_int64((int64_t)vm_page_size));\n  Store_field(result, 2, caml_copy_int64((int64_t)vm_stats.active_count));\n  Store_field(result, 3, caml_copy_int64((int64_t)vm_stats.inactive_count));\n  Store_field(result, 4, caml_copy_int64((int64_t)vm_stats.speculative_count));\n  Store_field(result, 5, caml_copy_int64((int64_t)vm_stats.wire_count));\n  Store_field(result, 6,\n              caml_copy_int64((int64_t)vm_stats.compressor_page_count));\n  Store_field(result, 7, caml_copy_int64((int64_t)vm_stats.purgeable_count));\n  Store_field(result, 8,\n              caml_copy_int64((int64_t)vm_stats.external_page_count));\n  Store_field(result, 9, caml_copy_int64((int64_t)vm_stats.free_count));\n  Store_field(result, 10,\n              caml_copy_int64(swap_ok ? (int64_t)swap_info.xsu_total : 0));\n  Store_field(result, 11,\n              caml_copy_int64(swap_ok ? 
(int64_t)swap_info.xsu_used : 0));\n#else\n  /* Linux: Memory stats read from /proc/meminfo in OCaml */\n  result = caml_alloc_tuple(12);\n  for (int i = 0; i < 12; i++) {\n    Store_field(result, i, caml_copy_int64(0));\n  }\n#endif\n\n  CAMLreturn(result);\n}\n\n/* Network I/O */\n\nCAMLprim value caml_sysstat_get_network_io(value unit) {\n  CAMLparam1(unit);\n  CAMLlocal1(result);\n\n#ifdef __APPLE__\n  int mib[6] = {CTL_NET, PF_ROUTE, 0, 0, NET_RT_IFLIST2, 0};\n  size_t len = 0;\n  char* buf = NULL;\n\n  for (int retry = 0; retry < 4; retry++) {\n    len = 0;\n    if (sysctl(mib, 6, NULL, &len, NULL, 0) < 0 || len == 0) goto fail;\n\n    len += 16 * retry * retry * sizeof(struct if_msghdr2);\n    buf = malloc(len);\n    if (!buf) goto fail;\n\n    if (sysctl(mib, 6, buf, &len, NULL, 0) == 0) break;\n    free(buf);\n    buf = NULL;\n    if (retry == 3) goto fail;\n  }\n\n  uint64_t bytes_rx = 0, packets_rx = 0, bytes_tx = 0, packets_tx = 0;\n  for (char* next = buf; next < buf + len;) {\n    struct if_msghdr* ifm = (struct if_msghdr*)next;\n    next += ifm->ifm_msglen;\n    if (ifm->ifm_type != RTM_IFINFO2) continue;\n\n    struct if_msghdr2* ifm2 = (struct if_msghdr2*)ifm;\n    if (ifm2->ifm_data.ifi_type != IFT_LOOP) {\n      bytes_rx += ifm2->ifm_data.ifi_ibytes;\n      packets_rx += ifm2->ifm_data.ifi_ipackets;\n      bytes_tx += ifm2->ifm_data.ifi_obytes;\n      packets_tx += ifm2->ifm_data.ifi_opackets;\n    }\n  }\n  free(buf);\n\n  result = caml_alloc_tuple(4);\n  Store_field(result, 0, caml_copy_int64((int64_t)bytes_rx));\n  Store_field(result, 1, caml_copy_int64((int64_t)packets_rx));\n  Store_field(result, 2, caml_copy_int64((int64_t)bytes_tx));\n  Store_field(result, 3, caml_copy_int64((int64_t)packets_tx));\n  CAMLreturn(result);\n\nfail:\n  result = caml_alloc_tuple(4);\n  for (int i = 0; i < 4; i++) Store_field(result, i, caml_copy_int64(0));\n#else\n  /* Linux: Network stats read from /proc/net/dev in OCaml */\n  result = caml_alloc_tuple(4);\n  
for (int i = 0; i < 4; i++) Store_field(result, i, caml_copy_int64(0));\n#endif\n\n  CAMLreturn(result);\n}\n\n/* Disk I/O */\n\nCAMLprim value caml_sysstat_get_disk_io(value unit) {\n  CAMLparam1(unit);\n  CAMLlocal1(result);\n\n#ifdef __APPLE__\n  io_iterator_t drive_list;\n\n  kern_return_t kr = IOServiceGetMatchingServices(\n      kIOMainPortDefault, IOServiceMatching(\"IOBlockStorageDriver\"),\n      &drive_list);\n\n  if (kr != KERN_SUCCESS) {\n    result = caml_alloc_tuple(4);\n    for (int i = 0; i < 4; i++) Store_field(result, i, caml_copy_int64(0));\n    CAMLreturn(result);\n  }\n\n  uint64_t read_sum = 0, write_sum = 0, time_sum = 0;\n  uint64_t num_disks = 0;\n\n  io_registry_entry_t drive;\n  while ((drive = IOIteratorNext(drive_list)) != 0) {\n    CFMutableDictionaryRef properties = NULL;\n\n    if (IORegistryEntryCreateCFProperties(\n            drive, &properties, kCFAllocatorDefault, 0) != KERN_SUCCESS) {\n      IOObjectRelease(drive);\n      continue;\n    }\n\n    if (!properties) {\n      IOObjectRelease(drive);\n      continue;\n    }\n\n    CFDictionaryRef statistics = CFDictionaryGetValue(\n        properties, CFSTR(kIOBlockStorageDriverStatisticsKey));\n\n    if (statistics) {\n      num_disks++;\n      CFNumberRef number;\n      uint64_t value;\n\n      number = CFDictionaryGetValue(\n          statistics, CFSTR(kIOBlockStorageDriverStatisticsBytesReadKey));\n      if (number) {\n        CFNumberGetValue(number, kCFNumberSInt64Type, &value);\n        read_sum += value;\n      }\n\n      number = CFDictionaryGetValue(\n          statistics, CFSTR(kIOBlockStorageDriverStatisticsBytesWrittenKey));\n      if (number) {\n        CFNumberGetValue(number, kCFNumberSInt64Type, &value);\n        write_sum += value;\n      }\n\n      number = CFDictionaryGetValue(\n          statistics, CFSTR(kIOBlockStorageDriverStatisticsTotalReadTimeKey));\n      if (number) {\n        CFNumberGetValue(number, kCFNumberSInt64Type, &value);\n        time_sum += 
value;\n      }\n\n      number = CFDictionaryGetValue(\n          statistics, CFSTR(kIOBlockStorageDriverStatisticsTotalWriteTimeKey));\n      if (number) {\n        CFNumberGetValue(number, kCFNumberSInt64Type, &value);\n        time_sum += value;\n      }\n    }\n\n    CFRelease(properties);\n    IOObjectRelease(drive);\n  }\n\n  IOObjectRelease(drive_list);\n\n  result = caml_alloc_tuple(4);\n  Store_field(result, 0, caml_copy_int64((int64_t)read_sum));\n  Store_field(result, 1, caml_copy_int64((int64_t)write_sum));\n  Store_field(result, 2,\n              caml_copy_int64((int64_t)(time_sum / 1000000))); /* ns to ms */\n  Store_field(result, 3, caml_copy_int64((int64_t)num_disks));\n#else\n  /* Linux: Disk stats read from /proc/diskstats in OCaml */\n  result = caml_alloc_tuple(4);\n  for (int i = 0; i < 4; i++) {\n    Store_field(result, i, caml_copy_int64(0));\n  }\n#endif\n\n  CAMLreturn(result);\n}\n\n/* Process list */\n\nCAMLprim value caml_sysstat_list_pids(value unit) {\n  CAMLparam1(unit);\n  CAMLlocal1(result);\n\n#ifdef __APPLE__\n  int bufsize = proc_listpids(PROC_ALL_PIDS, 0, NULL, 0);\n  if (bufsize <= 0) {\n    result = caml_alloc(0, 0);\n    CAMLreturn(result);\n  }\n\n  pid_t* pids = malloc(bufsize);\n  if (!pids) {\n    result = caml_alloc(0, 0);\n    CAMLreturn(result);\n  }\n\n  int bytes_written = proc_listpids(PROC_ALL_PIDS, 0, pids, bufsize);\n  if (bytes_written <= 0) {\n    free(pids);\n    result = caml_alloc(0, 0);\n    CAMLreturn(result);\n  }\n\n  int num_pids = bytes_written / (int)sizeof(pid_t);\n\n  result = caml_alloc(num_pids, 0);\n  for (int i = 0; i < num_pids; i++) {\n    Store_field(result, i, Val_int(pids[i]));\n  }\n\n  free(pids);\n#else\n  /* Linux: PIDs enumerated from /proc in OCaml */\n  result = caml_alloc(0, 0);\n#endif\n\n  CAMLreturn(result);\n}\n\n/* Process info */\n\n#ifdef __APPLE__\n/* Map macOS process state to a character code matching Linux convention */\nstatic char macos_state_char(int p_stat) {\n  
switch (p_stat) {\n    case SIDL:\n      return 'I'; /* Idle (being created) */\n    case SRUN:\n      return 'R'; /* Running */\n    case SSLEEP:\n      return 'S'; /* Sleeping */\n    case SSTOP:\n      return 'T'; /* Stopped */\n    case SZOMB:\n      return 'Z'; /* Zombie */\n    default:\n      return '?';\n  }\n}\n\n/* Get command line arguments for a process using sysctl KERN_PROCARGS2 */\nstatic int get_proc_cmdline(pid_t pid, char* buf, size_t bufsize) {\n  if (bufsize == 0) return 0;\n  buf[0] = '\\0';\n\n  int mib[3] = {CTL_KERN, KERN_PROCARGS2, pid};\n  size_t argmax = 0;\n\n  /* Get maximum argument size */\n  size_t argmax_size = sizeof(argmax);\n  int argmax_mib[2] = {CTL_KERN, KERN_ARGMAX};\n  if (sysctl(argmax_mib, 2, &argmax, &argmax_size, NULL, 0) != 0) {\n    argmax = 65536; /* fallback */\n  }\n\n  char* procargs = malloc(argmax);\n  if (!procargs) return 0;\n\n  size_t size = argmax;\n  if (sysctl(mib, 3, procargs, &size, NULL, 0) != 0) {\n    free(procargs);\n    return 0;\n  }\n\n  /* Skip argc (first 4 bytes) */\n  if (size < sizeof(int)) {\n    free(procargs);\n    return 0;\n  }\n\n  int argc;\n  memcpy(&argc, procargs, sizeof(int));\n  char* p = procargs + sizeof(int);\n  char* end = procargs + size;\n\n  /* Skip executable path */\n  while (p < end && *p != '\\0') p++;\n  while (p < end && *p == '\\0') p++;\n\n  /* Copy arguments up to bufsize, replacing nulls with spaces */\n  size_t written = 0;\n  int arg_count = 0;\n  while (p < end && arg_count < argc && written < bufsize - 1) {\n    if (*p == '\\0') {\n      if (written > 0 && written < bufsize - 1) {\n        buf[written++] = ' ';\n      }\n      arg_count++;\n      p++;\n    } else {\n      buf[written++] = *p++;\n    }\n  }\n\n  /* Trim trailing space if present */\n  if (written > 0 && buf[written - 1] == ' ') {\n    written--;\n  }\n  buf[written] = '\\0';\n\n  free(procargs);\n  return (int)written;\n}\n#endif\n\nCAMLprim value caml_sysstat_get_proc_info(value v_pid) {\n  
CAMLparam1(v_pid);\n\n#ifdef __APPLE__\n  CAMLlocal5(result, name_str, cmdline_str, user_str, some);\n\n  pid_t pid = Int_val(v_pid);\n\n  /* Get task info for CPU times, memory, threads */\n  struct proc_taskinfo pti;\n  int ret = proc_pidinfo(pid, PROC_PIDTASKINFO, 0, &pti, PROC_PIDTASKINFO_SIZE);\n\n  if (ret != PROC_PIDTASKINFO_SIZE) {\n    CAMLreturn(Val_int(0)); /* None */\n  }\n\n  /* Get BSD info for ppid, state, nice, uid */\n  struct proc_bsdinfo pbi;\n  ret = proc_pidinfo(pid, PROC_PIDTBSDINFO, 0, &pbi, PROC_PIDTBSDINFO_SIZE);\n\n  int ppid = 0;\n  char state_char = '?';\n  int priority = 0;\n  int nice = 0;\n  uid_t uid = 0;\n\n  if (ret == PROC_PIDTBSDINFO_SIZE) {\n    ppid = (int)pbi.pbi_ppid;\n    state_char = macos_state_char(pbi.pbi_status);\n    nice = (int)pbi.pbi_nice;\n    uid = pbi.pbi_uid;\n  }\n\n  /* Get priority from kinfo_proc via sysctl */\n  {\n    struct kinfo_proc kp;\n    size_t kp_size = sizeof(kp);\n    int mib[4] = {CTL_KERN, KERN_PROC, KERN_PROC_PID, pid};\n    if (sysctl(mib, 4, &kp, &kp_size, NULL, 0) == 0 && kp_size > 0) {\n      priority = (int)kp.kp_proc.p_priority;\n    }\n  }\n\n  /* Get process name from path */\n  char name[MAXPATHLEN];\n  ret = proc_pidpath(pid, name, sizeof(name));\n  if (ret <= 0) {\n    proc_name(pid, name, sizeof(name));\n  } else {\n    char* slash = strrchr(name, '/');\n    if (slash) {\n      memmove(name, slash + 1, strlen(slash + 1) + 1);\n    }\n  }\n\n  /* Get command line */\n  char cmdline[4096];\n  get_proc_cmdline(pid, cmdline, sizeof(cmdline));\n\n  /* Get username from UID */\n  char username[256] = \"\";\n  struct passwd* pw = getpwuid(uid);\n  if (pw && pw->pw_name) {\n    strncpy(username, pw->pw_name, sizeof(username) - 1);\n    username[sizeof(username) - 1] = '\\0';\n  }\n\n  /* Build result tuple with 14 fields */\n  result = caml_alloc_tuple(14);\n  name_str = caml_copy_string(name);\n  Store_field(result, 0, name_str);            /* name */\n  Store_field(result, 1, 
Val_int(ppid));       /* ppid */\n  Store_field(result, 2, Val_int(state_char)); /* state (as char code) */\n  Store_field(result, 3, Val_int(priority));   /* priority */\n  Store_field(result, 4, Val_int(nice));       /* nice */\n  cmdline_str = caml_copy_string(cmdline);\n  Store_field(result, 5, cmdline_str); /* cmdline */\n  user_str = caml_copy_string(username);\n  Store_field(result, 6, user_str); /* user */\n  Store_field(result, 7,\n              caml_copy_int64((int64_t)pti.pti_total_user)); /* user_time */\n  Store_field(result, 8,\n              caml_copy_int64((int64_t)pti.pti_total_system)); /* system_time */\n  Store_field(\n      result, 9,\n      caml_copy_int64((int64_t)pti.pti_resident_size)); /* resident_size */\n  Store_field(\n      result, 10,\n      caml_copy_int64((int64_t)pti.pti_virtual_size));  /* virtual_size */\n  Store_field(result, 11, Val_int(pti.pti_threadnum));  /* num_threads */\n  Store_field(result, 12, Val_int(pti.pti_numrunning)); /* num_running */\n  Store_field(result, 13,\n              caml_copy_int64((int64_t)pti.pti_faults)); /* faults */\n\n  some = caml_alloc(1, 0);\n  Store_field(some, 0, result);\n  CAMLreturn(some);\n#else\n  /* Linux: Process info read from /proc/[pid] in OCaml */\n  (void)v_pid;\n  CAMLreturn(Val_int(0)); /* None */\n#endif\n}\n\n/* Mach timebase */\n\nCAMLprim value caml_sysstat_get_timebase(value unit) {\n  CAMLparam1(unit);\n  CAMLlocal1(result);\n\n#ifdef __APPLE__\n  mach_timebase_info_data_t info;\n  mach_timebase_info(&info);\n\n  result = caml_alloc_tuple(2);\n  Store_field(result, 0, Val_int(info.numer));\n  Store_field(result, 1, Val_int(info.denom));\n#else\n  /* Linux: Not needed, return 1:1 ratio */\n  result = caml_alloc_tuple(2);\n  Store_field(result, 0, Val_int(1));\n  Store_field(result, 1, Val_int(1));\n#endif\n\n  CAMLreturn(result);\n}\n\n/* Cross-platform */\n\n/* Get system load averages (1, 5, 15 minute) */\nCAMLprim value caml_sysstat_get_loadavg(value unit) {\n  
CAMLparam1(unit);\n  CAMLlocal1(result);\n\n  double loadavg[3];\n  if (getloadavg(loadavg, 3) != 3) {\n    loadavg[0] = loadavg[1] = loadavg[2] = 0.0;\n  }\n\n  result = caml_alloc_tuple(3);\n  Store_field(result, 0, caml_copy_double(loadavg[0]));\n  Store_field(result, 1, caml_copy_double(loadavg[1]));\n  Store_field(result, 2, caml_copy_double(loadavg[2]));\n\n  CAMLreturn(result);\n}\n\n/* Get system uptime in seconds */\nCAMLprim value caml_sysstat_get_uptime(value unit) {\n  CAMLparam1(unit);\n\n#ifdef __APPLE__\n  struct timeval boottime;\n  size_t size = sizeof(boottime);\n  int mib[2] = {CTL_KERN, KERN_BOOTTIME};\n\n  if (sysctl(mib, 2, &boottime, &size, NULL, 0) != 0) {\n    CAMLreturn(caml_copy_int64(0));\n  }\n\n  struct timeval now;\n  gettimeofday(&now, NULL);\n\n  int64_t uptime = (int64_t)(now.tv_sec - boottime.tv_sec);\n  CAMLreturn(caml_copy_int64(uptime));\n#else\n  /* Linux: read from /proc/uptime */\n  FILE* f = fopen(\"/proc/uptime\", \"r\");\n  if (!f) {\n    CAMLreturn(caml_copy_int64(0));\n  }\n\n  double uptime_secs = 0.0;\n  if (fscanf(f, \"%lf\", &uptime_secs) != 1) {\n    uptime_secs = 0.0;\n  }\n  fclose(f);\n\n  CAMLreturn(caml_copy_int64((int64_t)uptime_secs));\n#endif\n}\n\n/* Get system page size */\nCAMLprim value caml_sysstat_getpagesize(value unit) {\n  CAMLparam1(unit);\n  CAMLreturn(Val_long(sysconf(_SC_PAGESIZE)));\n}\n\n/* Get disk filesystem statistics using statvfs */\nCAMLprim value caml_sysstat_statvfs(value v_path) {\n  CAMLparam1(v_path);\n  CAMLlocal1(tup);\n\n  const char* path = String_val(v_path);\n  struct statvfs st;\n  if (statvfs(path, &st) != 0) {\n    caml_failwith(\"statvfs failed\");\n  }\n\n  uint64_t fr =\n      (st.f_frsize != 0) ? 
(uint64_t)st.f_frsize : (uint64_t)st.f_bsize;\n  uint64_t total = fr * (uint64_t)st.f_blocks;\n  uint64_t free = fr * (uint64_t)st.f_bfree;\n  uint64_t avail = fr * (uint64_t)st.f_bavail;\n\n  tup = caml_alloc_tuple(3);\n  Store_field(tup, 0, caml_copy_int64((int64_t)total));\n  Store_field(tup, 1, caml_copy_int64((int64_t)free));\n  Store_field(tup, 2, caml_copy_int64((int64_t)avail));\n  CAMLreturn(tup);\n}\n\n/* Get process (self) memory statistics */\nCAMLprim value caml_sysstat_proc_self_mem(value unit) {\n  CAMLparam1(unit);\n  CAMLlocal1(tup);\n\n#ifdef __APPLE__\n  mach_task_basic_info_data_t info;\n  mach_msg_type_number_t count = MACH_TASK_BASIC_INFO_COUNT;\n  kern_return_t kr = task_info(mach_task_self(), MACH_TASK_BASIC_INFO,\n                               (task_info_t)&info, &count);\n  if (kr != KERN_SUCCESS) {\n    tup = caml_alloc_tuple(2);\n    Store_field(tup, 0, caml_copy_int64(-1));\n    Store_field(tup, 1, caml_copy_int64(-1));\n    CAMLreturn(tup);\n  }\n  tup = caml_alloc_tuple(2);\n  Store_field(tup, 0, caml_copy_int64((int64_t)info.resident_size));\n  Store_field(tup, 1, caml_copy_int64((int64_t)info.virtual_size));\n#else\n  /* Linux: Handled in OCaml via /proc/self/stat */\n  tup = caml_alloc_tuple(2);\n  Store_field(tup, 0, caml_copy_int64(-1));\n  Store_field(tup, 1, caml_copy_int64(-1));\n#endif\n\n  CAMLreturn(tup);\n}\n\n/* Get clock ticks per second (for Linux jiffies conversion) */\nCAMLprim value caml_sysstat_clk_tck(value unit) {\n  CAMLparam1(unit);\n  CAMLreturn(Val_long(sysconf(_SC_CLK_TCK)));\n}\n\n/* Mount enumeration */\n\n/* Get list of mounted filesystems.\n   Returns array of (mount_point, device, fstype) tuples. 
*/\nCAMLprim value caml_sysstat_getmounts(value unit) {\n  CAMLparam1(unit);\n  CAMLlocal3(result, tup, str);\n\n#ifdef __APPLE__\n  struct statfs* mntbuf;\n  int count = getmntinfo(&mntbuf, MNT_NOWAIT);\n  if (count <= 0) {\n    result = caml_alloc(0, 0);\n    CAMLreturn(result);\n  }\n\n  result = caml_alloc(count, 0);\n  for (int i = 0; i < count; i++) {\n    tup = caml_alloc_tuple(3);\n    str = caml_copy_string(mntbuf[i].f_mntonname);\n    Store_field(tup, 0, str);\n    str = caml_copy_string(mntbuf[i].f_mntfromname);\n    Store_field(tup, 1, str);\n    str = caml_copy_string(mntbuf[i].f_fstypename);\n    Store_field(tup, 2, str);\n    Store_field(result, i, tup);\n  }\n#elif defined(__linux__)\n  FILE* f = setmntent(\"/proc/mounts\", \"r\");\n  if (!f) {\n    result = caml_alloc(0, 0);\n    CAMLreturn(result);\n  }\n\n  /* Read all entries into a temporary buffer to avoid TOCTOU race */\n  int capacity = 64;\n  int count = 0;\n  char** dirs = malloc(capacity * sizeof(char*));\n  char** devs = malloc(capacity * sizeof(char*));\n  char** types = malloc(capacity * sizeof(char*));\n\n  if (!dirs || !devs || !types) {\n    free(dirs);\n    free(devs);\n    free(types);\n    endmntent(f);\n    result = caml_alloc(0, 0);\n    CAMLreturn(result);\n  }\n\n  struct mntent* mnt;\n  while ((mnt = getmntent(f)) != NULL) {\n    if (count >= capacity) {\n      capacity *= 2;\n      char** new_dirs = realloc(dirs, capacity * sizeof(char*));\n      char** new_devs = realloc(devs, capacity * sizeof(char*));\n      char** new_types = realloc(types, capacity * sizeof(char*));\n      if (!new_dirs || !new_devs || !new_types) {\n        /* Clean up on allocation failure */\n        for (int j = 0; j < count; j++) {\n          free(dirs[j]);\n          free(devs[j]);\n          free(types[j]);\n        }\n        free(new_dirs ? new_dirs : dirs);\n        free(new_devs ? new_devs : devs);\n        free(new_types ? 
new_types : types);\n        endmntent(f);\n        result = caml_alloc(0, 0);\n        CAMLreturn(result);\n      }\n      dirs = new_dirs;\n      devs = new_devs;\n      types = new_types;\n    }\n    dirs[count] = strdup(mnt->mnt_dir);\n    devs[count] = strdup(mnt->mnt_fsname);\n    types[count] = strdup(mnt->mnt_type);\n    if (!dirs[count] || !devs[count] || !types[count]) {\n      /* Clean up on strdup failure */\n      for (int j = 0; j <= count; j++) {\n        free(dirs[j]);\n        free(devs[j]);\n        free(types[j]);\n      }\n      free(dirs);\n      free(devs);\n      free(types);\n      endmntent(f);\n      result = caml_alloc(0, 0);\n      CAMLreturn(result);\n    }\n    count++;\n  }\n  endmntent(f);\n\n  /* Now allocate OCaml structures from the buffered data */\n  result = caml_alloc(count, 0);\n  for (int i = 0; i < count; i++) {\n    tup = caml_alloc_tuple(3);\n    str = caml_copy_string(dirs[i]);\n    Store_field(tup, 0, str);\n    str = caml_copy_string(devs[i]);\n    Store_field(tup, 1, str);\n    str = caml_copy_string(types[i]);\n    Store_field(tup, 2, str);\n    Store_field(result, i, tup);\n  }\n\n  /* Free temporary buffers */\n  for (int i = 0; i < count; i++) {\n    free(dirs[i]);\n    free(devs[i]);\n    free(types[i]);\n  }\n  free(dirs);\n  free(devs);\n  free(types);\n#else\n  result = caml_alloc(0, 0);\n#endif\n\n  CAMLreturn(result);\n}\n"
  },
  {
    "path": "packages/munin/lib/tui/detail.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Mosaic\nmodule Charts = Matrix_charts\n\n(* Exponential moving average: alpha in (0,1]. *)\nlet ema alpha history =\n  match history with\n  | [] -> []\n  | (s0, v0) :: rest ->\n      let rec loop acc prev = function\n        | [] -> List.rev acc\n        | (s, v) :: xs ->\n            let smoothed = (alpha *. v) +. ((1. -. alpha) *. prev) in\n            loop ((s, smoothed) :: acc) smoothed xs\n      in\n      (s0, v0) :: loop [] v0 rest\n\n(* Statistics *)\n\ntype stats = {\n  last : float;\n  best : float option;\n  best_step : int option;\n  mean : float;\n  count : int;\n}\n\nlet compute_stats history ~best =\n  match history with\n  | [] -> None\n  | _ ->\n      let count = ref 0 in\n      let sum = ref 0. in\n      let last = ref 0. in\n      List.iter\n        (fun (_, v) ->\n          incr count;\n          sum := !sum +. v;\n          last := v)\n        history;\n      let n = !count in\n      let mean = !sum /. 
float_of_int n in\n      let best_step =\n        match best with\n        | None -> None\n        | Some bv ->\n            let rec find = function\n              | [] -> None\n              | (s, v) :: _ when Float.equal v bv -> Some s\n              | _ :: rest -> find rest\n            in\n            find history\n      in\n      Some { last = !last; best; best_step; mean; count = n }\n\nlet format_step step =\n  if step < 1000 then string_of_int step\n  else\n    let s = string_of_int step in\n    let len = String.length s in\n    let buf = Buffer.create (len + ((len - 1) / 3)) in\n    for i = 0 to len - 1 do\n      if i > 0 && (len - i) mod 3 = 0 then Buffer.add_char buf ',';\n      Buffer.add_char buf s.[i]\n    done;\n    Buffer.contents buf\n\nlet view_stats_row stats =\n  let parts =\n    [ Printf.sprintf \"Last: %.4g\" stats.last ]\n    @ (match (stats.best, stats.best_step) with\n      | Some bv, Some bs ->\n          [ Printf.sprintf \"Best: %.4g (step %s)\" bv (format_step bs) ]\n      | Some bv, None -> [ Printf.sprintf \"Best: %.4g\" bv ]\n      | None, _ -> [])\n    @ [\n        Printf.sprintf \"Mean: %.4g\" stats.mean;\n        Printf.sprintf \"Samples: %s\" (format_step stats.count);\n      ]\n  in\n  box ~justify_content:Center ~align_items:Center\n    ~size:{ width = pct 100; height = auto }\n    [ text ~style:Theme.muted_style (String.concat \"  \\u{00B7}  \" parts) ]\n\nlet meta_style = Ansi.Style.make ~fg:(Ansi.Color.grayscale ~level:12) ()\n\nlet view_metric_def_row (metric_def : Munin.Run.metric_def) =\n  let parts =\n    (match metric_def.goal with\n      | Some `Minimize -> [ \"Goal: minimize\" ]\n      | Some `Maximize -> [ \"Goal: maximize\" ]\n      | None -> [])\n    @ (match metric_def.summary with\n      | `Last -> []\n      | `Min -> [ \"Summary: min\" ]\n      | `Max -> [ \"Summary: max\" ]\n      | `Mean -> [ \"Summary: mean\" ]\n      | `None -> [ \"Summary: none\" ])\n    @\n    match metric_def.step_metric with\n    | Some 
sm -> [ Printf.sprintf \"Step metric: %s\" sm ]\n    | None -> []\n  in\n  if parts = [] then None\n  else\n    Some\n      (box ~justify_content:Center ~align_items:Center\n         ~size:{ width = pct 100; height = auto }\n         [ text ~style:meta_style (String.concat \"  \\u{00B7}  \" parts) ])\n\n(* View *)\n\nlet view ~tag ~history_for_tag ~best ~size ~smooth ~metric_def =\n  let history = history_for_tag tag in\n  let display_history =\n    match smooth with None -> history | Some alpha -> ema alpha history\n  in\n  let title =\n    match Theme.last_value history with\n    | None -> tag\n    | Some v -> Printf.sprintf \"%s [%.4f]\" tag v\n  in\n  let title = if Option.is_some smooth then title ^ \" (EMA)\" else title in\n  let stats = compute_stats history ~best in\n  let stats_rows =\n    (match stats with Some s -> [ view_stats_row s ] | None -> [])\n    @\n    match metric_def with\n    | Some md -> (\n        match view_metric_def_row md with Some row -> [ row ] | None -> [])\n    | None -> []\n  in\n  box ~flex_direction:Column ~gap:(gap 1) ~align_items:Center ~size\n    ([\n       box ~border:true ~title ~padding:(padding 1)\n         ~size:{ width = pct 100; height = pct 100 }\n         ~flex_grow:1.0\n         [\n           canvas\n             ~size:{ width = pct 100; height = pct 100 }\n             (fun c ~delta:_ ->\n               let width = Canvas.width c in\n               let height = Canvas.height c in\n               let grid = Canvas.grid c in\n               match smooth with\n               | None ->\n                   Theme.draw_metric_chart ~compact:false history grid ~width\n                     ~height\n               | Some _ ->\n                   if history = [] then ()\n                   else\n                     let to_arr h =\n                       Array.of_list\n                         (List.map\n                            (fun (step, value) -> (float_of_int step, value))\n                            h)\n              
       in\n                     let raw_data = to_arr history in\n                     let smooth_data = to_arr display_history in\n                     let chart =\n                       Charts.empty ()\n                       |> Charts.with_frame\n                            (Charts.manual_frame ~margins:(1, 1, 1, 4) ())\n                       |> Charts.with_axes\n                            ~x:\n                              (Charts.Axis.default |> Charts.Axis.with_ticks 6\n                              |> Charts.Axis.with_style Theme.axis_style)\n                            ~y:\n                              (Charts.Axis.default |> Charts.Axis.with_ticks 4\n                              |> Charts.Axis.with_style Theme.axis_style\n                              |> Charts.Axis.with_format (fun _ v ->\n                                  Printf.sprintf \"%.4g\" v))\n                       |> Charts.with_grid\n                            (Charts.Gridlines.default\n                            |> Charts.Gridlines.with_style Theme.grid_style\n                            |> Charts.Gridlines.with_x true\n                            |> Charts.Gridlines.with_y true)\n                       |> Charts.line ~id:\"raw\" ~resolution:`Braille2x4\n                            ~style:\n                              (Ansi.Style.make\n                                 ~fg:(Ansi.Color.grayscale ~level:8)\n                                 ())\n                            ~x:fst ~y:snd raw_data\n                       |> Charts.line ~id:\"smooth\" ~resolution:`Braille2x4\n                            ~style:(Ansi.Style.make ~fg:Ansi.Color.cyan ())\n                            ~x:fst ~y:snd smooth_data\n                     in\n                     ignore (Charts.draw chart grid ~width ~height));\n         ];\n     ]\n    @ stats_rows)\n"
  },
  {
    "path": "packages/munin/lib/tui/dune",
    "content": "(library\n (name munin_tui)\n (public_name munin.tui)\n (libraries\n  munin\n  unix\n  mosaic\n  mosaic.ui\n  jsont\n  jsont.bytesrw\n  matrix.charts\n  matrix.grid))\n"
  },
  {
    "path": "packages/munin/lib/tui/footer.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Mosaic\n\nlet hints = function\n  | `Dashboard ->\n      \"\\u{2190}\\u{2191}\\u{2193}\\u{2192} Select \\u{00B7} Enter Open \\u{00B7} <> \\\n       Batch \\u{00B7} [] System \\u{00B7} i Info \\u{00B7} q Quit\"\n  | `Detail smooth -> (\n      match smooth with\n      | Theme.Off -> \"S Smooth \\u{00B7} Esc back\"\n      | Light | Medium | Heavy ->\n          Printf.sprintf \"S Smooth [%d] \\u{00B7} Esc back\"\n            (Theme.smooth_display smooth))\n  | `Info -> \"q/Esc back\"\n\nlet view ~mode =\n  box ~padding:(padding_xy 2 0) ~background:Theme.header_bg\n    ~size:{ width = pct 100; height = auto }\n    [ text ~style:Theme.muted_style (hints mode) ]\n"
  },
  {
    "path": "packages/munin/lib/tui/header.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Mosaic\n\n(* Styles *)\n\nlet label_style = Ansi.Style.make ~fg:(Ansi.Color.grayscale ~level:16) ()\nlet value_style = Ansi.Style.make ~bold:true ~fg:Ansi.Color.white ()\n\n(* Helpers *)\n\nlet format_elapsed secs =\n  let secs = int_of_float secs in\n  let h = secs / 3600 in\n  let m = secs mod 3600 / 60 in\n  let s = secs mod 60 in\n  Printf.sprintf \"%02d:%02d:%02d\" h m s\n\nlet format_step step =\n  if step < 1000 then string_of_int step\n  else\n    let s = string_of_int step in\n    let len = String.length s in\n    let buf = Buffer.create (len + ((len - 1) / 3)) in\n    for i = 0 to len - 1 do\n      if i > 0 && (len - i) mod 3 = 0 then Buffer.add_char buf ',';\n      Buffer.add_char buf s.[i]\n    done;\n    Buffer.contents buf\n\n(* View *)\n\nlet tag_style =\n  Ansi.Style.make\n    ~fg:(Ansi.Color.grayscale ~level:18)\n    ~bg:(Ansi.Color.grayscale ~level:5)\n    ()\n\nlet view ~run_id ~run_name ~tags ~latest_epoch ~total_epochs ~latest_step\n    ~elapsed_secs ~status =\n  let color = Theme.status_color status in\n  let epoch_items =\n    match (latest_epoch, total_epochs) with\n    | None, _ -> []\n    | Some e, Some t ->\n        [\n          text ~style:label_style \"Epoch \";\n          text ~style:value_style (Printf.sprintf \"%d/%d\" e t);\n        ]\n    | Some e, None ->\n        [\n          text ~style:label_style \"Epoch \";\n          text ~style:value_style (string_of_int e);\n        ]\n  in\n  let step_items =\n    match latest_step with\n    | None -> []\n    | Some s ->\n        [\n          text ~style:label_style \"Step \";\n          text ~style:value_style (format_step s);\n        ]\n  in\n  let sep () = text ~style:Theme.muted_style \" \\u{00B7} \" in\n  let 
stats =\n    [ epoch_items; step_items ]\n    |> List.filter (fun l -> l <> [])\n    |> List.mapi (fun i items -> if i > 0 then sep () :: items else items)\n    |> List.flatten\n  in\n  let stats =\n    if stats <> [] then\n      stats\n      @ [ sep (); text ~style:Theme.muted_style (format_elapsed elapsed_secs) ]\n    else [ text ~style:Theme.muted_style (format_elapsed elapsed_secs) ]\n  in\n  box ~padding:(padding_xy 2 0) ~flex_direction:Row ~gap:(gap 2)\n    ~align_items:Center ~background:Theme.header_bg\n    ~size:{ width = pct 100; height = auto }\n    ([\n       text ~style:value_style \"Munin\"; text ~style:Theme.muted_style \"\\u{2502}\";\n     ]\n    @ (match run_name with\n      | Some name ->\n          [\n            text ~style:value_style name;\n            text ~style:Theme.muted_style (Printf.sprintf \" (%s)\" run_id);\n          ]\n      | None -> [ text ~style:Theme.muted_style run_id ])\n    @ List.map (fun t -> text ~style:tag_style (Printf.sprintf \" %s \" t)) tags\n    @ stats\n    @ [\n        box ~flex_grow:1.0 ~size:{ width = auto; height = auto } [];\n        text ~style:(Ansi.Style.make ~fg:color ()) \"\\u{25CF}\";\n        text\n          ~style:(Ansi.Style.make ~bold:true ~fg:color ())\n          (Theme.status_label status);\n      ])\n"
  },
  {
    "path": "packages/munin/lib/tui/info.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Mosaic\n\n(* Helpers *)\n\nlet format_timestamp t =\n  let tm = Unix.localtime t in\n  Printf.sprintf \"%04d-%02d-%02d %02d:%02d:%02d\" (1900 + tm.tm_year)\n    (1 + tm.tm_mon) tm.tm_mday tm.tm_hour tm.tm_min tm.tm_sec\n\nlet format_elapsed secs =\n  let secs = int_of_float secs in\n  let h = secs / 3600 in\n  let m = secs mod 3600 / 60 in\n  let s = secs mod 60 in\n  Printf.sprintf \"%02d:%02d:%02d\" h m s\n\nlet short_hash s = if String.length s > 7 then String.sub s 0 7 else s\n\n(* View *)\n\nlet view ~(run : Munin.Run.t) ~(status : Theme.run_status) ~elapsed_secs\n    ~(metric_defs : (string * Munin.Run.metric_def) list)\n    ~(latest_metrics : (string * Munin.Run.metric) list)\n    ~(step_metrics : string list) ~(best_for_tag : string -> float option) =\n  let prov = Munin.Run.provenance run in\n  let status_color = Theme.status_color status in\n  let run_section =\n    [ Overview.section_header \"Run\" ]\n    @ (match Munin.Run.name run with\n      | Some name -> [ Overview.kv_row \"Name\" name ]\n      | None -> [])\n    @ [ Overview.kv_row \"ID\" (Munin.Run.id run) ]\n    @ [ Overview.kv_row \"Experiment\" (Munin.Run.experiment_name run) ]\n    @ (match Munin.Run.group run with\n      | Some g -> [ Overview.kv_row \"Group\" g ]\n      | None -> [])\n    @ [\n        box ~flex_direction:Row ~gap:(gap 1)\n          ~size:{ width = pct 100; height = auto }\n          [\n            text ~style:Overview.key_style \"Status\";\n            box ~flex_grow:1.0 ~size:{ width = auto; height = auto } [];\n            text ~style:(Ansi.Style.make ~fg:status_color ()) \"\\u{25CF} \";\n            text\n              ~style:(Ansi.Style.make ~bold:true ~fg:status_color ())\n              
(Theme.status_label status);\n          ];\n        Overview.kv_row \"Started\" (format_timestamp (Munin.Run.started_at run));\n        Overview.kv_row \"Duration\" (format_elapsed elapsed_secs);\n      ]\n    @\n    match Munin.Run.tags run with\n    | [] -> []\n    | tags -> [ Overview.kv_row \"Tags\" (String.concat \", \" tags) ]\n  in\n  let prov_section =\n    [ Overview.section_header \"Provenance\" ]\n    @ [\n        Overview.kv_row \"Command\" (String.concat \" \" prov.command);\n        Overview.kv_row \"Directory\" prov.cwd;\n      ]\n    @ (match prov.hostname with\n      | Some h -> [ Overview.kv_row \"Hostname\" h ]\n      | None -> [])\n    @ [ Overview.kv_row \"PID\" (string_of_int prov.pid) ]\n    @\n    match prov.git_commit with\n    | Some hash ->\n        let dirty =\n          match prov.git_dirty with\n          | Some true -> \" (dirty)\"\n          | Some false -> \" (clean)\"\n          | None -> \"\"\n        in\n        [ Overview.kv_row \"Git\" (short_hash hash ^ dirty) ]\n    | None -> []\n  in\n  let params = Munin.Run.params run in\n  let params_section =\n    if params = [] then []\n    else\n      [ Overview.section_header \"Params\" ]\n      @ List.map\n          (fun (k, v) -> Overview.kv_row k (Overview.format_value v))\n          params\n  in\n  let metrics =\n    List.filter\n      (fun (k, _) -> (not (Overview.is_sys k)) && not (List.mem k step_metrics))\n      latest_metrics\n  in\n  let summary_section =\n    if metrics = [] then []\n    else\n      [ Overview.section_header \"Summary\" ]\n      @ List.map\n          (fun (k, m) ->\n            Overview.format_summary_value ~metric_defs ~best_for_tag k m)\n          metrics\n  in\n  let notes_section =\n    match Munin.Run.notes run with\n    | Some notes when notes <> \"\" ->\n        [\n          Overview.section_header \"Notes\"; text ~style:Overview.val_style notes;\n        ]\n    | _ -> []\n  in\n  box ~flex_direction:Column ~padding:(padding 2) ~gap:(gap 1)\n    
~size:{ width = pct 100; height = pct 100 }\n    (run_section @ prov_section @ params_section @ summary_section\n   @ notes_section)\n"
  },
  {
    "path": "packages/munin/lib/tui/metrics.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Mosaic\n\n(* Constants *)\n\nlet graph_height = 14\nlet min_graph_width = 25\nlet header_height = 3\nlet footer_height = 1\nlet metrics_padding = 2\n(* Layout helpers *)\n\nlet calculate_columns (available_width : int) : int =\n  if available_width < min_graph_width * 2 then 1 else 2\n\nlet calculate_rows_per_batch (screen_height : int) : int =\n  let available_height =\n    screen_height - header_height - footer_height - (metrics_padding * 2)\n  in\n  if available_height < graph_height then 1\n  else max 1 (available_height / graph_height)\n\nlet calculate_graphs_per_batch ~width ~height : int =\n  let columns = calculate_columns width in\n  let rows = calculate_rows_per_batch height in\n  rows * columns\n\n(* Component state and update *)\n\ntype state = {\n  screen_width : int;\n  screen_height : int;\n  current_batch : int;\n  selected : int;\n}\n\ntype msg =\n  | Resize of int * int\n  | Next_batch\n  | Prev_batch\n  | Select_left\n  | Select_right\n  | Select_up\n  | Select_down\n\ntype batch_window = {\n  start_idx : int;\n  end_idx : int;\n  total_batches : int;\n  current_batch : int;\n}\n\nlet batch_window ~width ~height ~current_batch ~total_metrics =\n  if total_metrics = 0 then\n    { start_idx = 0; end_idx = 0; total_batches = 1; current_batch = 0 }\n  else\n    let per_batch = calculate_graphs_per_batch ~width ~height in\n    let total_batches = (total_metrics + per_batch - 1) / per_batch in\n    let current_batch = min current_batch (max 0 (total_batches - 1)) in\n    let start_idx = current_batch * per_batch in\n    let end_idx = min (start_idx + per_batch) total_metrics in\n    { start_idx; end_idx; total_batches; current_batch }\n\nlet initial_state () =\n  { screen_width = 
80; screen_height = 24; current_batch = 0; selected = 0 }\n\nlet visible_count s ~total_metrics =\n  let w =\n    batch_window ~width:s.screen_width ~height:s.screen_height\n      ~current_batch:s.current_batch ~total_metrics\n  in\n  w.end_idx - w.start_idx\n\nlet update (msg : msg) (s : state) ~total_metrics : state =\n  match msg with\n  | Resize (width, height) ->\n      let w =\n        batch_window ~width ~height ~current_batch:s.current_batch\n          ~total_metrics\n      in\n      let n = w.end_idx - w.start_idx in\n      {\n        screen_width = width;\n        screen_height = height;\n        current_batch = w.current_batch;\n        selected = min s.selected (max 0 (n - 1));\n      }\n  | Next_batch ->\n      let w =\n        batch_window ~width:s.screen_width ~height:s.screen_height\n          ~current_batch:(s.current_batch + 1) ~total_metrics\n      in\n      if w.current_batch = s.current_batch then s\n      else { s with current_batch = w.current_batch; selected = 0 }\n  | Prev_batch ->\n      let prev = max 0 (s.current_batch - 1) in\n      if prev = s.current_batch then s\n      else { s with current_batch = prev; selected = 0 }\n  | Select_left ->\n      let n = visible_count s ~total_metrics in\n      if n = 0 || s.selected = 0 then s\n      else { s with selected = s.selected - 1 }\n  | Select_right ->\n      let n = visible_count s ~total_metrics in\n      if n = 0 || s.selected >= n - 1 then s\n      else { s with selected = s.selected + 1 }\n  | Select_up ->\n      let cols = calculate_columns s.screen_width in\n      if s.selected >= cols then { s with selected = s.selected - cols } else s\n  | Select_down ->\n      let n = visible_count s ~total_metrics in\n      let cols = calculate_columns s.screen_width in\n      if s.selected + cols < n then { s with selected = s.selected + cols }\n      else s\n\nlet selected_tag (s : state) ~total_metrics ~all_tags =\n  let w =\n    batch_window ~width:s.screen_width ~height:s.screen_height\n     
 ~current_batch:s.current_batch ~total_metrics\n  in\n  let idx = w.start_idx + s.selected in\n  if idx < w.end_idx then List.nth_opt all_tags idx else None\n\n(* Metric grouping *)\n\nlet group_prefix tag =\n  match String.index_opt tag '/' with\n  | Some i -> String.sub tag 0 i\n  | None -> \"\"\n\nlet section_style =\n  Ansi.Style.make ~fg:(Ansi.Color.grayscale ~level:12) ~dim:true ()\n\nlet view_group_header prefix =\n  box\n    ~size:{ width = pct 100; height = auto }\n    [ text ~style:section_style (Printf.sprintf \"\\u{2500}\\u{2500} %s \" prefix) ]\n\nlet prefix_groups metrics =\n  let rec go current_prefix acc group = function\n    | [] ->\n        let groups =\n          if group = [] then acc else (current_prefix, List.rev group) :: acc\n        in\n        List.rev groups\n    | ((_, tag) as item) :: rest ->\n        let p = group_prefix tag in\n        if String.equal p current_prefix then\n          go current_prefix acc (item :: group) rest\n        else\n          let acc =\n            if group = [] then acc else (current_prefix, List.rev group) :: acc\n          in\n          go p acc [ item ] rest\n  in\n  match metrics with\n  | [] -> []\n  | ((_, tag) as item) :: rest -> go (group_prefix tag) [] [ item ] rest\n\n(* Chart rendering *)\n\nlet dim_border = Ansi.Color.grayscale ~level:6\nlet selected_border = Ansi.Color.white\n\nlet view_metric_chart ~history_for_tag ~(best_for_tag : string -> float option)\n    ~goal_for_tag ~columns ~selected tag =\n  let history = history_for_tag tag in\n  let best = best_for_tag tag in\n  let width_pct = if columns = 1 then 100 else 49 in\n  let goal_arrow =\n    match goal_for_tag tag with\n    | Some `Minimize -> \" \\u{2193}\"\n    | Some `Maximize -> \" \\u{2191}\"\n    | None -> \"\"\n  in\n  let title =\n    match (Theme.last_value history, best) with\n    | Some value, Some best_val ->\n        Printf.sprintf \"%s [%.4f] best: %.4f%s\" tag value best_val goal_arrow\n    | Some value, None -> 
Printf.sprintf \"%s [%.4f]\" tag value\n    | None, _ -> tag\n  in\n  let border_color = if selected then selected_border else dim_border in\n  box ~key:tag ~border:true ~border_color ~title ~padding:(padding 0)\n    ~size:{ width = pct width_pct; height = px 14 }\n    [\n      canvas\n        ~size:{ width = pct 100; height = pct 100 }\n        (fun c ~delta:_ ->\n          Theme.draw_metric_chart ~compact:true history (Canvas.grid c)\n            ~width:(Canvas.width c) ~height:(Canvas.height c));\n    ]\n\n(* View *)\n\nlet rec chunk_by n = function\n  | [] -> []\n  | lst ->\n      let rec take k acc = function\n        | [] -> (List.rev acc, [])\n        | x :: xs ->\n            if k = 0 then (List.rev acc, x :: xs)\n            else take (k - 1) (x :: acc) xs\n      in\n      let group, rest = take n [] lst in\n      group :: chunk_by n rest\n\nlet view (s : state) ~metric_tags ~history_for_tag ~best_for_tag ~goal_for_tag =\n  if metric_tags = [] then\n    box ~padding:(padding 1)\n      ~size:{ width = pct 100; height = auto }\n      [ text ~style:Theme.muted_style \"  Waiting for metrics...\" ]\n  else\n    let columns = calculate_columns s.screen_width in\n    let total_metrics = List.length metric_tags in\n    let w =\n      batch_window ~width:s.screen_width ~height:s.screen_height\n        ~current_batch:s.current_batch ~total_metrics\n    in\n    let visible_metrics =\n      List.mapi (fun i tag -> (i, tag)) metric_tags\n      |> List.filter (fun (i, _) -> i >= w.start_idx && i < w.end_idx)\n      |> List.mapi (fun local_idx (_, tag) -> (local_idx, tag))\n    in\n    let groups = prefix_groups visible_metrics in\n    let batch_header =\n      if w.total_batches > 1 then\n        [\n          box ~flex_direction:Row ~justify_content:Flex_end ~align_items:Center\n            ~size:{ width = pct 100; height = auto }\n            [\n              text ~style:Theme.muted_style\n                (Printf.sprintf \"Batch %d/%d\" (w.current_batch + 1)\n          
         w.total_batches);\n            ];\n        ]\n      else []\n    in\n    let charts =\n      List.concat_map\n        (fun (prefix, group_metrics) ->\n          let header =\n            if prefix <> \"\" then [ view_group_header prefix ] else []\n          in\n          let rows = chunk_by columns group_metrics in\n          let chart_rows =\n            List.mapi\n              (fun row_idx row ->\n                box\n                  ~key:(Printf.sprintf \"row-%s-%d\" prefix row_idx)\n                  ~flex_direction:Row ~gap:(gap 1)\n                  ~size:{ width = pct 100; height = auto }\n                  (List.map\n                     (fun (local_idx, tag) ->\n                       view_metric_chart ~history_for_tag ~best_for_tag\n                         ~goal_for_tag ~columns\n                         ~selected:(local_idx = s.selected) tag)\n                     row))\n              rows\n          in\n          header @ chart_rows)\n        groups\n    in\n    box ~flex_direction:Column ~padding:(padding_lrtb 1 1 1 0) ~gap:(gap 1)\n      ~size:{ width = pct 100; height = auto }\n      (batch_header @ charts)\n"
  },
  {
    "path": "packages/munin/lib/tui/munin_tui.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Mosaic\n\n(* Model *)\n\ntype model = {\n  run : Munin.Run.t;\n  monitor : Munin.Run_monitor.t;\n  run_status : Theme.run_status;\n  metrics_state : Metrics.state;\n  mode : mode;\n  smooth : Theme.smooth;\n  show_system : bool;\n  screen_width : int;\n  was_live : bool;\n  run_completed : bool;\n}\n\nand mode = Dashboard | Detail of string | Info\n\ntype msg =\n  | Tick of float\n  | Quit\n  | Metrics_msg of Metrics.msg\n  | Open_metric of string\n  | Open_selected\n  | Close_metric\n  | Open_info\n  | Close_info\n  | Toggle_smooth\n  | Toggle_system\n  | Terminal_resize of int * int\n\n(* Helpers *)\n\nlet run_status_of_live_status :\n    Munin.Run_monitor.live_status -> Theme.run_status = function\n  | `Live -> Theme.Live\n  | `Stopped -> Theme.Stopped\n  | `Done `finished -> Theme.Done\n  | `Done `failed -> Theme.Failed\n  | `Done `killed -> Theme.Killed\n  | `Done `running -> Theme.Live\n\nlet latest_step monitor =\n  let ms = Munin.Run_monitor.metrics monitor in\n  match ms with\n  | [] -> None\n  | _ ->\n      Some\n        (List.fold_left\n           (fun acc (_, (m : Munin.Run.metric)) -> max acc m.step)\n           0 ms)\n\nlet latest_epoch monitor =\n  let ms = Munin.Run_monitor.metrics monitor in\n  match List.assoc_opt \"epoch\" ms with\n  | Some (m : Munin.Run.metric) -> Some (int_of_float m.value)\n  | None -> None\n\nlet total_epochs run =\n  match Munin.Run.find_param run \"epochs\" with\n  | Some v -> Munin.Value.to_int v\n  | None -> None\n\nlet elapsed_secs m =\n  let end_time =\n    match m.run_status with\n    | Theme.Live -> Unix.gettimeofday ()\n    | Stopped | Done | Failed | Killed -> (\n        match Munin.Run.ended_at m.run with\n        | Some t -> t\n        | 
None -> Unix.gettimeofday ())\n  in\n  end_time -. Munin.Run.started_at m.run\n\nlet metrics_width m =\n  if m.show_system then int_of_float (float_of_int m.screen_width *. 0.66)\n  else m.screen_width\n\nlet best_value monitor tag =\n  Option.map\n    (fun (b : Munin.Run.metric) -> b.value)\n    (Munin.Run_monitor.best monitor tag)\n\n(* View *)\n\nlet divider () =\n  box\n    ~size:{ width = px 1; height = pct 100 }\n    ~background:(Ansi.Color.grayscale ~level:8)\n    [ text \" \" ]\n\nlet is_sys_metric tag = String.length tag > 4 && String.sub tag 0 4 = \"sys/\"\n\nlet step_metrics monitor =\n  let defs = Munin.Run_monitor.metric_defs monitor in\n  let from_defs =\n    List.filter_map (fun (_, (d : Munin.Run.metric_def)) -> d.step_metric) defs\n  in\n  if List.mem \"epoch\" from_defs then from_defs else \"epoch\" :: from_defs\n\nlet user_metric_tags monitor =\n  let sms = step_metrics monitor in\n  List.filter_map\n    (fun (tag, _) ->\n      if is_sys_metric tag || List.mem tag sms then None else Some tag)\n    (Munin.Run_monitor.metrics monitor)\n\nlet blocks =\n  [|\n    \"\\u{2581}\";\n    \"\\u{2582}\";\n    \"\\u{2583}\";\n    \"\\u{2584}\";\n    \"\\u{2585}\";\n    \"\\u{2586}\";\n    \"\\u{2587}\";\n    \"\\u{2588}\";\n  |]\n\nlet mini_sparkline history ~width =\n  let values = List.map snd history in\n  let n = List.length values in\n  if n = 0 then \"\"\n  else\n    let arr =\n      if n <= width then Array.of_list values\n      else\n        let a = Array.of_list values in\n        Array.sub a (n - width) width\n    in\n    let len = Array.length arr in\n    let lo = Array.fold_left min infinity arr in\n    let hi = Array.fold_left max neg_infinity arr in\n    let range = hi -. lo in\n    let buf = Buffer.create (len * 3) in\n    Array.iter\n      (fun v ->\n        let idx =\n          if range = 0. then 3\n          else int_of_float ((v -. lo) /. range *. 7.) 
|> max 0 |> min 7\n        in\n        Buffer.add_string buf blocks.(idx))\n      arr;\n    Buffer.contents buf\n\nlet spark_style = Ansi.Style.make ~fg:(Ansi.Color.grayscale ~level:10) ()\nlet bold_white = Ansi.Style.make ~bold:true ~fg:Ansi.Color.white ()\n\nlet view_summary_banner m =\n  let elapsed = elapsed_secs m in\n  let h = int_of_float elapsed / 3600 in\n  let mi = int_of_float elapsed mod 3600 / 60 in\n  let s = int_of_float elapsed mod 60 in\n  let duration = Printf.sprintf \"%02d:%02d:%02d\" h mi s in\n  let status_color = Theme.status_color m.run_status in\n  let status_label = Theme.status_label m.run_status in\n  let metric_tags = user_metric_tags m.monitor in\n  let capped =\n    if List.length metric_tags > 8 then\n      List.filteri (fun i _ -> i < 8) metric_tags\n    else metric_tags\n  in\n  let metric_entries =\n    List.map\n      (fun tag ->\n        let history = Munin.Run_monitor.history m.monitor tag in\n        let spark = mini_sparkline history ~width:8 in\n        let value =\n          match best_value m.monitor tag with\n          | Some v -> Printf.sprintf \"%.4g\" v\n          | None -> (\n              match Theme.last_value history with\n              | Some v -> Printf.sprintf \"%.4g\" v\n              | None -> \"-\")\n        in\n        box ~flex_direction:Row ~gap:(gap 1)\n          ~size:{ width = pct 50; height = auto }\n          [\n            text ~style:Theme.muted_style tag;\n            text ~style:spark_style spark;\n            text ~style:bold_white value;\n          ])\n      capped\n  in\n  let rec pairs = function\n    | [] -> []\n    | [ x ] -> [ [ x ] ]\n    | x :: y :: rest -> [ x; y ] :: pairs rest\n  in\n  let metric_rows =\n    List.map\n      (fun row ->\n        box ~flex_direction:Row ~size:{ width = pct 100; height = auto } row)\n      (pairs metric_entries)\n  in\n  box ~border:true\n    ~border_color:(Ansi.Color.grayscale ~level:8)\n    ~title:(Printf.sprintf \" Run %s \" status_label)\n    
~padding:(padding_xy 2 0)\n    ~size:{ width = pct 100; height = auto }\n    ([\n       box ~flex_direction:Row ~gap:(gap 1)\n         ~size:{ width = pct 100; height = auto }\n         [\n           text\n             ~style:(Ansi.Style.make ~fg:status_color ())\n             (Printf.sprintf \"%s in %s\" status_label duration);\n         ];\n     ]\n    @ metric_rows)\n\nlet view_dashboard m =\n  let all_metrics = Munin.Run_monitor.metrics m.monitor in\n  let metric_tags = user_metric_tags m.monitor in\n  let history_for_tag tag = Munin.Run_monitor.history m.monitor tag in\n  let best_for_tag tag = best_value m.monitor tag in\n  let goal_for_tag tag =\n    match List.assoc_opt tag (Munin.Run_monitor.metric_defs m.monitor) with\n    | Some (d : Munin.Run.metric_def) -> d.goal\n    | None -> None\n  in\n  let metrics_pct = if m.show_system then 66 else 100 in\n  let sys_latest tag =\n    match List.assoc_opt tag all_metrics with\n    | Some (m : Munin.Run.metric) -> m.value\n    | None -> 0.0\n  in\n  let sys_values =\n    System.\n      {\n        cpu_user = sys_latest \"sys/cpu_user\";\n        cpu_system = sys_latest \"sys/cpu_system\";\n        mem_pct = sys_latest \"sys/mem_used_pct\";\n        mem_gb = sys_latest \"sys/mem_used_gb\";\n        proc_cpu = sys_latest \"sys/proc_cpu_pct\";\n        proc_mem_mb = sys_latest \"sys/proc_mem_mb\";\n        disk_read_mbs = sys_latest \"sys/disk_read_mbs\";\n        disk_write_mbs = sys_latest \"sys/disk_write_mbs\";\n        disk_util_pct = sys_latest \"sys/disk_util_pct\";\n      }\n  in\n  let right_panel =\n    if m.show_system then\n      [\n        divider ();\n        box ~flex_direction:Column ~padding:(padding_lrtb 1 1 1 0) ~gap:(gap 1)\n          ~size:{ width = pct 34; height = auto }\n          [\n            System.view sys_values ~history_for_tag;\n            Overview.view ~run:m.run ~latest_metrics:all_metrics\n              ~step_metrics:(step_metrics m.monitor)\n              
~metric_defs:(Munin.Run_monitor.metric_defs m.monitor)\n              ~best_for_tag:(best_value m.monitor);\n          ];\n      ]\n    else []\n  in\n  let banner = if m.run_completed then [ view_summary_banner m ] else [] in\n  box ~flex_direction:Column\n    ~size:{ width = pct 100; height = pct 100 }\n    ([\n       Header.view ~run_id:(Munin.Run.id m.run) ~run_name:(Munin.Run.name m.run)\n         ~tags:(Munin.Run.tags m.run) ~latest_epoch:(latest_epoch m.monitor)\n         ~total_epochs:(total_epochs m.run) ~latest_step:(latest_step m.monitor)\n         ~elapsed_secs:(elapsed_secs m) ~status:m.run_status;\n     ]\n    @ banner\n    @ [\n        box ~flex_direction:Row ~flex_grow:1.0 ~flex_shrink:1.0\n          ~overflow:{ x = Hidden; y = Hidden }\n          ~size:{ width = pct 100; height = auto }\n          ([\n             box\n               ~size:{ width = pct metrics_pct; height = auto }\n               [\n                 Metrics.view m.metrics_state ~metric_tags ~history_for_tag\n                   ~best_for_tag ~goal_for_tag;\n               ];\n           ]\n          @ right_panel);\n        Footer.view ~mode:`Dashboard;\n      ])\n\nlet view_detail m tag =\n  let smooth_param = Theme.smooth_alpha m.smooth in\n  let history_for_tag t = Munin.Run_monitor.history m.monitor t in\n  let metric_def =\n    List.assoc_opt tag (Munin.Run_monitor.metric_defs m.monitor)\n  in\n  box ~flex_direction:Column\n    ~size:{ width = pct 100; height = pct 100 }\n    [\n      box ~flex_grow:1.0 ~justify_content:Center ~align_items:Center\n        ~size:{ width = pct 100; height = pct 100 }\n        [\n          Detail.view ~tag ~history_for_tag ~best:(best_value m.monitor tag)\n            ~smooth:smooth_param ~metric_def\n            ~size:{ width = pct 80; height = pct 80 };\n        ];\n      Footer.view ~mode:(`Detail m.smooth);\n    ]\n\nlet view_info m =\n  let all_metrics = Munin.Run_monitor.metrics m.monitor in\n  box ~flex_direction:Column\n    ~size:{ width 
= pct 100; height = pct 100 }\n    [\n      Header.view ~run_id:(Munin.Run.id m.run) ~run_name:(Munin.Run.name m.run)\n        ~tags:(Munin.Run.tags m.run) ~latest_epoch:(latest_epoch m.monitor)\n        ~total_epochs:(total_epochs m.run) ~latest_step:(latest_step m.monitor)\n        ~elapsed_secs:(elapsed_secs m) ~status:m.run_status;\n      box ~flex_grow:1.0 ~flex_shrink:1.0 ~overflow:{ x = Hidden; y = Hidden }\n        ~size:{ width = pct 100; height = auto }\n        [\n          Info.view ~run:m.run ~status:m.run_status\n            ~elapsed_secs:(elapsed_secs m)\n            ~metric_defs:(Munin.Run_monitor.metric_defs m.monitor)\n            ~latest_metrics:all_metrics ~step_metrics:(step_metrics m.monitor)\n            ~best_for_tag:(best_value m.monitor);\n        ];\n      Footer.view ~mode:`Info;\n    ]\n\nlet view m =\n  match m.mode with\n  | Dashboard -> view_dashboard m\n  | Detail tag -> view_detail m tag\n  | Info -> view_info m\n\n(* TEA core *)\n\nlet init ~run =\n  let monitor = Munin.Run_monitor.start run in\n  Munin.Run_monitor.poll monitor;\n  let run_status =\n    run_status_of_live_status (Munin.Run_monitor.live_status monitor)\n  in\n  let metrics_state = Metrics.initial_state () in\n  ( {\n      run;\n      monitor;\n      run_status;\n      metrics_state;\n      mode = Dashboard;\n      smooth = Theme.Off;\n      show_system = true;\n      screen_width = 80;\n      was_live = run_status = Theme.Live;\n      run_completed = false;\n    },\n    Cmd.none )\n\nlet update msg m =\n  match msg with\n  | Tick _dt ->\n      Munin.Run_monitor.poll m.monitor;\n      let run_status =\n        run_status_of_live_status (Munin.Run_monitor.live_status m.monitor)\n      in\n      let run_completed =\n        m.run_completed\n        || m.was_live && run_status <> Theme.Live\n           && run_status <> Theme.Stopped\n      in\n      ({ m with run_status; run_completed }, Cmd.none)\n  | Terminal_resize (width, height) ->\n      let m = { m with 
screen_width = width } in\n      let mw = metrics_width m in\n      let metrics_state' =\n        Metrics.update\n          (Metrics.Resize (mw, height))\n          m.metrics_state\n          ~total_metrics:(List.length (user_metric_tags m.monitor))\n      in\n      ({ m with metrics_state = metrics_state' }, Cmd.none)\n  | Metrics_msg metrics_msg ->\n      let total_metrics = List.length (user_metric_tags m.monitor) in\n      let metrics_state' =\n        Metrics.update metrics_msg m.metrics_state ~total_metrics\n      in\n      ({ m with metrics_state = metrics_state' }, Cmd.none)\n  | Open_metric tag -> ({ m with mode = Detail tag }, Cmd.none)\n  | Open_selected -> (\n      let all_tags = user_metric_tags m.monitor in\n      let total_metrics = List.length all_tags in\n      match Metrics.selected_tag m.metrics_state ~total_metrics ~all_tags with\n      | Some tag -> ({ m with mode = Detail tag }, Cmd.none)\n      | None -> (m, Cmd.none))\n  | Close_metric -> ({ m with mode = Dashboard }, Cmd.none)\n  | Open_info -> ({ m with mode = Info }, Cmd.none)\n  | Close_info -> ({ m with mode = Dashboard }, Cmd.none)\n  | Toggle_smooth -> (\n      match m.mode with\n      | Detail _ -> ({ m with smooth = Theme.next_smooth m.smooth }, Cmd.none)\n      | Dashboard | Info -> (m, Cmd.none))\n  | Toggle_system ->\n      let m = { m with show_system = not m.show_system } in\n      let mw = metrics_width m in\n      let total_metrics = List.length (user_metric_tags m.monitor) in\n      let metrics_state' =\n        Metrics.update\n          (Metrics.Resize (mw, m.metrics_state.screen_height))\n          m.metrics_state ~total_metrics\n      in\n      ({ m with metrics_state = metrics_state' }, Cmd.none)\n  | Quit ->\n      Munin.Run_monitor.close m.monitor;\n      (m, Cmd.quit)\n\nlet subscriptions m =\n  Sub.batch\n    [\n      Sub.on_tick (fun ~dt -> Tick dt);\n      Sub.on_resize (fun ~width ~height -> Terminal_resize (width, height));\n      Sub.on_key (fun ev ->\n          
let is c ch =\n            let lo = Uchar.of_char ch in\n            let hi = Uchar.of_char (Char.uppercase_ascii ch) in\n            Uchar.equal c lo || Uchar.equal c hi\n          in\n          let data = Mosaic_ui.Event.Key.data ev in\n          match (data.key, m.mode) with\n          | Char c, Detail _ when is c 's' -> Some Toggle_smooth\n          | Char c, Dashboard when is c 'q' -> Some Quit\n          | Char c, Detail _ when is c 'q' -> Some Close_metric\n          | Char c, Info when is c 'q' -> Some Close_info\n          | Char c, Dashboard when is c 'i' -> Some Open_info\n          | Escape, Dashboard -> Some Quit\n          | Escape, Detail _ -> Some Close_metric\n          | Escape, Info -> Some Close_info\n          | Left, Dashboard -> Some (Metrics_msg Metrics.Select_left)\n          | Right, Dashboard -> Some (Metrics_msg Metrics.Select_right)\n          | Up, Dashboard -> Some (Metrics_msg Metrics.Select_up)\n          | Down, Dashboard -> Some (Metrics_msg Metrics.Select_down)\n          | Char c, Dashboard when is c '[' -> Some Toggle_system\n          | Char c, Dashboard when is c ']' -> Some Toggle_system\n          | Char c, Dashboard when Uchar.equal c (Uchar.of_char '<') ->\n              Some (Metrics_msg Metrics.Prev_batch)\n          | Char c, Dashboard when Uchar.equal c (Uchar.of_char '>') ->\n              Some (Metrics_msg Metrics.Next_batch)\n          | Enter, Dashboard -> Some Open_selected\n          | Char c, Dashboard when is c ' ' -> Some Open_selected\n          | _ -> None);\n    ]\n\nlet run ?root ?experiment ?runs () =\n  let store = Munin.Store.open_ ?root () in\n  let run =\n    match runs with\n    | Some [ run_id ] -> Munin.Store.find_run store run_id\n    | None | Some [] -> Munin.Store.latest_run store ?experiment ()\n    | Some _ ->\n        Printf.printf \"munin: please specify a single run\\n%!\";\n        None\n  in\n  match run with\n  | Some run ->\n      let init () = init ~run in\n      Mosaic.run { init; 
update; view; subscriptions }\n  | None -> (\n      match runs with\n      | Some [ run_id ] -> Printf.printf \"munin: run not found: %s\\n%!\" run_id\n      | Some (_ :: _ :: _) ->\n          (* \"please specify a single run\" was already printed above; avoid a\n             second, misleading \"no runs found\" message. *)\n          ()\n      | None | Some [] ->\n          Printf.printf \"munin: no runs found in %s\\n%!\"\n            (Munin.Store.root store))\n"
  },
  {
    "path": "packages/munin/lib/tui/munin_tui.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Experiment dashboard TUI.\n\n    Terminal-based dashboard for monitoring runs tracked by {!Munin}. Displays\n    live metrics, charts, and system resource usage. *)\n\nval run :\n  ?root:string -> ?experiment:string -> ?runs:string list -> unit -> unit\n(** [run ()] launches the dashboard.\n\n    - [root] defaults to [$RAVEN_TRACKING_DIR] or [$XDG_DATA_HOME/raven/munin].\n    - [experiment] filters runs by experiment name.\n    - [runs] is a list of run IDs to display; currently exactly one must be\n      provided. When omitted, the most recent run is selected automatically. *)\n"
  },
  {
    "path": "packages/munin/lib/tui/overview.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Mosaic\n\n(* Styles *)\n\nlet section_style =\n  Ansi.Style.make ~fg:(Ansi.Color.grayscale ~level:12) ~dim:true ()\n\nlet key_style = Ansi.Style.make ~fg:(Ansi.Color.grayscale ~level:16) ()\nlet val_style = Ansi.Style.make ~fg:Ansi.Color.white ()\n\n(* Helpers *)\n\nlet is_sys tag = String.length tag > 4 && String.sub tag 0 4 = \"sys/\"\n\nlet format_value (v : Munin.Value.t) =\n  match v with\n  | `Bool b -> string_of_bool b\n  | `Int i -> string_of_int i\n  | `Float f ->\n      if Float.is_integer f && Float.abs f < 1e9 then Printf.sprintf \"%.0f\" f\n      else Printf.sprintf \"%g\" f\n  | `String s -> s\n\nlet format_float f =\n  if Float.is_integer f && Float.abs f < 1e6 then Printf.sprintf \"%.0f\" f\n  else if Float.abs f >= 0.01 && Float.abs f < 1e6 then Printf.sprintf \"%.4f\" f\n  else Printf.sprintf \"%.4g\" f\n\n(* View *)\n\nlet section_header label =\n  box\n    ~size:{ width = pct 100; height = auto }\n    [ text ~style:section_style (Printf.sprintf \"\\u{2500}\\u{2500} %s \" label) ]\n\nlet kv_row k v =\n  box ~flex_direction:Row ~gap:(gap 1)\n    ~size:{ width = pct 100; height = auto }\n    [\n      text ~style:key_style k;\n      box ~flex_grow:1.0 ~size:{ width = auto; height = auto } [];\n      text ~style:val_style v;\n    ]\n\nlet badge_style =\n  Ansi.Style.make ~fg:(Ansi.Color.grayscale ~level:10) ~dim:true ()\n\nlet format_summary_value ~(metric_defs : (string * Munin.Run.metric_def) list)\n    ~(best_for_tag : string -> float option) k (m : Munin.Run.metric) =\n  let def = List.assoc_opt k metric_defs in\n  let value =\n    match (def, best_for_tag k) with\n    | Some { summary = `Min | `Max; _ }, Some bv -> format_float bv\n    | _ -> format_float m.value\n  in\n  
let summary_badge =\n    match def with\n    | Some { summary = `Min; _ } -> \" (min)\"\n    | Some { summary = `Max; _ } -> \" (max)\"\n    | Some { summary = `Mean; _ } -> \" (mean)\"\n    | Some { summary = `None; _ } -> \" (none)\"\n    | Some { summary = `Last; _ } | None -> \"\"\n  in\n  let goal_arrow =\n    match def with\n    | Some { goal = Some `Minimize; _ } -> \" \\u{2193}\"\n    | Some { goal = Some `Maximize; _ } -> \" \\u{2191}\"\n    | _ -> \"\"\n  in\n  box ~flex_direction:Row ~gap:(gap 0)\n    ~size:{ width = pct 100; height = auto }\n    [\n      text ~style:key_style k;\n      box ~flex_grow:1.0 ~size:{ width = auto; height = auto } [];\n      text ~style:val_style value;\n      text ~style:badge_style (summary_badge ^ goal_arrow);\n    ]\n\nlet view ~(run : Munin.Run.t)\n    ~(latest_metrics : (string * Munin.Run.metric) list)\n    ~(step_metrics : string list)\n    ~(metric_defs : (string * Munin.Run.metric_def) list)\n    ~(best_for_tag : string -> float option) =\n  let params = Munin.Run.params run in\n  let metrics =\n    List.filter\n      (fun (k, _) -> (not (is_sys k)) && not (List.mem k step_metrics))\n      latest_metrics\n  in\n  let sections =\n    (if params <> [] then\n       [ section_header \"Params\" ]\n       @ List.map (fun (k, v) -> kv_row k (format_value v)) params\n     else [])\n    @\n    if metrics <> [] then\n      [ section_header \"Summary\" ]\n      @ List.map\n          (fun (k, m) -> format_summary_value ~metric_defs ~best_for_tag k m)\n          metrics\n    else []\n  in\n  if sections = [] then box ~size:{ width = px 0; height = px 0 } []\n  else\n    box ~flex_direction:Column ~gap:(gap 0)\n      ~size:{ width = pct 100; height = auto }\n      sections\n"
  },
  {
    "path": "packages/munin/lib/tui/system.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Mosaic\n\n(* Types *)\n\ntype values = {\n  cpu_user : float;\n  cpu_system : float;\n  mem_pct : float;\n  mem_gb : float;\n  proc_cpu : float;\n  proc_mem_mb : float;\n  disk_read_mbs : float;\n  disk_write_mbs : float;\n  disk_util_pct : float;\n}\n\n(* Styles *)\n\nlet label_style = Ansi.Style.make ~fg:Ansi.Color.white ()\nlet bracket_style = Ansi.Style.make ~fg:(Ansi.Color.grayscale ~level:10) ()\nlet value_style = Ansi.Style.make ~fg:Ansi.Color.white ()\nlet user_style = Ansi.Style.make ~fg:Ansi.Color.green ()\nlet system_style = Ansi.Style.make ~fg:Ansi.Color.cyan ()\nlet spark_style = Ansi.Style.make ~fg:(Ansi.Color.grayscale ~level:14) ()\n\nlet bar_color pct =\n  if pct > 80. then Ansi.Color.red\n  else if pct > 50. then Ansi.Color.yellow\n  else Ansi.Color.green\n\n(* Sparkline rendering *)\n\nlet blocks =\n  [|\n    \"\\u{2581}\";\n    \"\\u{2582}\";\n    \"\\u{2583}\";\n    \"\\u{2584}\";\n    \"\\u{2585}\";\n    \"\\u{2586}\";\n    \"\\u{2587}\";\n    \"\\u{2588}\";\n  |]\n\nlet sparkline history ~width =\n  let n = List.length history in\n  if n = 0 then \"\"\n  else\n    let values =\n      if n <= width then Array.of_list history\n      else\n        let arr = Array.of_list history in\n        Array.sub arr (n - width) width\n    in\n    let len = Array.length values in\n    let lo = Array.fold_left min infinity values in\n    let hi = Array.fold_left max neg_infinity values in\n    let range = hi -. lo in\n    let buf = Buffer.create (len * 3) in\n    Array.iter\n      (fun v ->\n        let idx =\n          if range = 0. then 3\n          else int_of_float ((v -. lo) /. range *. 7.) 
|> max 0 |> min 7\n        in\n        Buffer.add_string buf blocks.(idx))\n      values;\n    Buffer.contents buf\n\n(* Bar drawing — reused from previous implementation *)\n\nlet draw_bar c ~y ~width ~label ~value_text ~percent ~spark =\n  let label_len = String.length label in\n  let value_len = String.length value_text in\n  Canvas.draw_text c ~x:0 ~y ~text:label ~style:label_style;\n  let spark_width = 12 in\n  let bar_end = width - spark_width - 2 in\n  let bar_start = label_len in\n  if bar_end - bar_start < 2 then ()\n  else begin\n    Canvas.draw_text c ~x:bar_start ~y ~text:\"[\" ~style:bracket_style;\n    Canvas.draw_text c ~x:bar_end ~y ~text:\"]\" ~style:bracket_style;\n    let inner = bar_end - bar_start - 1 in\n    let fill_space = inner - value_len - 1 in\n    let fill_count =\n      if fill_space > 0 then\n        int_of_float (percent /. 100. *. float_of_int fill_space)\n        |> max 0 |> min fill_space\n      else 0\n    in\n    let color = bar_color percent in\n    let style = Ansi.Style.make ~fg:color () in\n    for i = 0 to fill_count - 1 do\n      Canvas.draw_text c ~x:(bar_start + 1 + i) ~y ~text:\"|\" ~style\n    done;\n    let vx = bar_end - value_len in\n    if vx > bar_start + 1 then\n      Canvas.draw_text c ~x:vx ~y ~text:value_text ~style:value_style;\n    Canvas.draw_text c ~x:(bar_end + 2) ~y ~text:spark ~style:spark_style\n  end\n\nlet draw_cpu_bar c ~y ~width ~label ~cpu_user ~cpu_system ~spark =\n  let label_len = String.length label in\n  let total = cpu_user +. 
cpu_system in\n  let value_text = Printf.sprintf \"%.0f%%\" total in\n  let value_len = String.length value_text in\n  Canvas.draw_text c ~x:0 ~y ~text:label ~style:label_style;\n  let spark_width = 12 in\n  let bar_end = width - spark_width - 2 in\n  let bar_start = label_len in\n  if bar_end - bar_start < 2 then ()\n  else begin\n    Canvas.draw_text c ~x:bar_start ~y ~text:\"[\" ~style:bracket_style;\n    Canvas.draw_text c ~x:bar_end ~y ~text:\"]\" ~style:bracket_style;\n    let inner = bar_end - bar_start - 1 in\n    let fill_space = inner - value_len - 1 in\n    let user_count =\n      if fill_space > 0 then\n        int_of_float (cpu_user /. 100. *. float_of_int fill_space)\n        |> max 0 |> min fill_space\n      else 0\n    in\n    let system_count =\n      if fill_space > 0 then\n        int_of_float (cpu_system /. 100. *. float_of_int fill_space)\n        |> max 0\n        |> min (fill_space - user_count)\n      else 0\n    in\n    for i = 0 to user_count - 1 do\n      Canvas.draw_text c ~x:(bar_start + 1 + i) ~y ~text:\"|\" ~style:user_style\n    done;\n    for i = 0 to system_count - 1 do\n      Canvas.draw_text c\n        ~x:(bar_start + 1 + user_count + i)\n        ~y ~text:\"|\" ~style:system_style\n    done;\n    let vx = bar_end - value_len in\n    if vx > bar_start + 1 then\n      Canvas.draw_text c ~x:vx ~y ~text:value_text ~style:value_style;\n    Canvas.draw_text c ~x:(bar_end + 2) ~y ~text:spark ~style:spark_style\n  end\n\nlet draw_value_line c ~y ~width ~label ~value_text ~spark =\n  let label_len = String.length label in\n  Canvas.draw_text c ~x:0 ~y ~text:label ~style:label_style;\n  let spark_width = 12 in\n  let vx = width - spark_width - 2 - String.length value_text in\n  if vx > label_len then\n    Canvas.draw_text c ~x:vx ~y ~text:value_text ~style:value_style;\n  Canvas.draw_text c ~x:(width - spark_width) ~y ~text:spark ~style:spark_style\n\n(* View *)\n\nlet extract_values history = List.map snd history\n\nlet view (v : values) 
~(history_for_tag : string -> (int * float) list) =\n  let spark tag = sparkline (extract_values (history_for_tag tag)) ~width:10 in\n  let cpu_spark =\n    let h1 = extract_values (history_for_tag \"sys/cpu_user\") in\n    let h2 = extract_values (history_for_tag \"sys/cpu_system\") in\n    let rec zip_add a b =\n      match (a, b) with\n      | x :: xs, y :: ys -> (x +. y) :: zip_add xs ys\n      | [], _ | _, [] -> []\n    in\n    sparkline (zip_add h1 h2) ~width:10\n  in\n  let format_rate mbs =\n    if mbs >= 1024. then Printf.sprintf \"%.1f GB/s\" (mbs /. 1024.)\n    else if mbs >= 1. then Printf.sprintf \"%.1f MB/s\" mbs\n    else Printf.sprintf \"%.0f KB/s\" (mbs *. 1024.)\n  in\n  canvas\n    ~size:{ width = pct 100; height = px 7 }\n    (fun c ~delta:_ ->\n      Canvas.clear c;\n      let w = Canvas.width c in\n      draw_cpu_bar c ~y:0 ~width:w ~label:\"CPU  \" ~cpu_user:v.cpu_user\n        ~cpu_system:v.cpu_system ~spark:cpu_spark;\n      draw_bar c ~y:1 ~width:w ~label:\"Mem  \"\n        ~value_text:(Printf.sprintf \"%.0f%%  %.1fG\" v.mem_pct v.mem_gb)\n        ~percent:v.mem_pct ~spark:(spark \"sys/mem_used_pct\");\n      draw_bar c ~y:2 ~width:w ~label:\"Proc \"\n        ~value_text:(Printf.sprintf \"%.0f%%\" v.proc_cpu)\n        ~percent:v.proc_cpu ~spark:(spark \"sys/proc_cpu_pct\");\n      draw_value_line c ~y:3 ~width:w ~label:\"RSS  \"\n        ~value_text:(Printf.sprintf \"%.0fM\" v.proc_mem_mb)\n        ~spark:(spark \"sys/proc_mem_mb\");\n      draw_bar c ~y:4 ~width:w ~label:\"Disk \"\n        ~value_text:(Printf.sprintf \"%.0f%%\" v.disk_util_pct)\n        ~percent:v.disk_util_pct\n        ~spark:(spark \"sys/disk_util_pct\");\n      draw_value_line c ~y:5 ~width:w ~label:\"  R  \"\n        ~value_text:(format_rate v.disk_read_mbs)\n        ~spark:(spark \"sys/disk_read_mbs\");\n      draw_value_line c ~y:6 ~width:w ~label:\"  W  \"\n        ~value_text:(format_rate v.disk_write_mbs)\n        ~spark:(spark \"sys/disk_write_mbs\"))\n"
  },
  {
    "path": "packages/munin/lib/tui/theme.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Mosaic\nmodule Charts = Matrix_charts\n\n(* Run status *)\n\ntype run_status = Live | Stopped | Done | Failed | Killed\n\nlet status_label = function\n  | Live -> \"LIVE\"\n  | Stopped -> \"Stopped\"\n  | Done -> \"Done\"\n  | Failed -> \"Failed\"\n  | Killed -> \"Killed\"\n\nlet status_color = function\n  | Live -> Ansi.Color.green\n  | Stopped -> Ansi.Color.grayscale ~level:12\n  | Done -> Ansi.Color.of_rgb 80 140 200\n  | Failed -> Ansi.Color.red\n  | Killed -> Ansi.Color.yellow\n\n(* Smooth level *)\n\ntype smooth = Off | Light | Medium | Heavy\n\nlet smooth_alpha = function\n  | Off -> None\n  | Light -> Some 0.5\n  | Medium -> Some 0.3\n  | Heavy -> Some 0.15\n\nlet next_smooth = function\n  | Off -> Light\n  | Light -> Medium\n  | Medium -> Heavy\n  | Heavy -> Off\n\nlet smooth_display = function Off -> 0 | Light -> 1 | Medium -> 2 | Heavy -> 3\n\n(* Shared styles *)\n\nlet muted_style = Ansi.Style.make ~fg:(Ansi.Color.grayscale ~level:14) ()\n\nlet axis_style =\n  Ansi.Style.make ~fg:(Ansi.Color.grayscale ~level:12) ~dim:true ()\n\nlet grid_style =\n  Ansi.Style.make ~fg:(Ansi.Color.grayscale ~level:6) ~dim:true ()\n\nlet header_bg = Ansi.Color.grayscale ~level:2\n\n(* Shared helpers *)\n\nlet last_value history =\n  let rec go = function\n    | [] -> None\n    | [ (_, v) ] -> Some v\n    | _ :: rest -> go rest\n  in\n  go history\n\n(* Chart drawing *)\n\nlet draw_metric_chart ~compact history grid ~width ~height =\n  if history = [] then ()\n  else\n    let data =\n      Array.of_list\n        (List.map (fun (step, value) -> (float_of_int step, value)) history)\n    in\n    let margins, x_ticks, y_ticks, y_format =\n      if compact then ((1, 0, 0, 2), 4, 2, fun _ v -> Printf.sprintf 
\"%.1f\" v)\n      else ((1, 1, 1, 4), 6, 4, fun _ v -> Printf.sprintf \"%.4g\" v)\n    in\n    let chart =\n      Charts.empty ()\n      |> Charts.with_frame (Charts.manual_frame ~margins ())\n      |> Charts.with_axes\n           ~x:\n             (Charts.Axis.default\n             |> Charts.Axis.with_ticks x_ticks\n             |> Charts.Axis.with_style axis_style)\n           ~y:\n             (Charts.Axis.default\n             |> Charts.Axis.with_ticks y_ticks\n             |> Charts.Axis.with_style axis_style\n             |> Charts.Axis.with_format y_format)\n      |> Charts.with_grid\n           (Charts.Gridlines.default\n           |> Charts.Gridlines.with_style grid_style\n           |> Charts.Gridlines.with_x true\n           |> Charts.Gridlines.with_y true)\n      |> Charts.line ~id:\"metric\" ~resolution:`Braille2x4\n           ~style:(Ansi.Style.make ~fg:Ansi.Color.cyan ())\n           ~x:fst ~y:snd data\n    in\n    ignore (Charts.draw chart grid ~width ~height)\n"
  },
  {
    "path": "packages/munin/lib/value.ml",
    "content": "type t = [ `Bool of bool | `Int of int | `Float of float | `String of string ]\n\nlet pp ppf = function\n  | `Bool b -> Format.pp_print_bool ppf b\n  | `Int n -> Format.pp_print_int ppf n\n  | `Float f -> Format.fprintf ppf \"%g\" f\n  | `String s -> Format.fprintf ppf \"%S\" s\n\nlet to_float = function\n  | `Float f -> Some f\n  | `Int n -> Some (Float.of_int n)\n  | _ -> None\n\nlet to_int = function\n  | `Int n -> Some n\n  | `Float f when Float.is_integer f -> Some (Float.to_int f)\n  | _ -> None\n\nlet to_string = function `String s -> Some s | _ -> None\nlet to_bool = function `Bool b -> Some b | _ -> None\n\nlet to_json : t -> Jsont.json = function\n  | `Bool b -> Jsont.Json.bool b\n  | `Int n -> Jsont.Json.int n\n  | `Float f -> Jsont.Json.number f\n  | `String s -> Jsont.Json.string s\n\nlet of_json : Jsont.json -> t = function\n  | Jsont.Bool (b, _) -> `Bool b\n  | Jsont.Number (f, _) ->\n      if Float.is_integer f && Float.abs f < 4503599627370496. then\n        `Int (Float.to_int f)\n      else `Float f\n  | Jsont.String (s, _) -> `String s\n  | _ -> `String \"\"\n"
  },
  {
    "path": "packages/munin/lib/value.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Scalar values for parameters, summaries, and metadata. *)\n\ntype t = [ `Bool of bool | `Int of int | `Float of float | `String of string ]\n(** The type for scalar values. *)\n\nval pp : Format.formatter -> t -> unit\n(** [pp ppf v] pretty-prints [v]. *)\n\nval to_float : t -> float option\n(** [to_float v] extracts a float. [`Int] values are promoted. *)\n\nval to_int : t -> int option\n(** [to_int v] extracts an int. [`Float] values are truncated if integral. *)\n\nval to_string : t -> string option\n(** [to_string v] extracts a string. *)\n\nval to_bool : t -> bool option\n(** [to_bool v] extracts a bool. *)\n\n(**/**)\n\nval to_json : t -> Jsont.json\nval of_json : Jsont.json -> t\n\n(**/**)\n"
  },
  {
    "path": "packages/munin/test/dune",
    "content": "(test\n (name test_munin)\n (package munin)\n (libraries munin munin.sys unix threads.posix windtrap jsont))\n"
  },
  {
    "path": "packages/munin/test/test_munin.ml",
    "content": "open Windtrap\n\n(* Helpers *)\n\nlet with_temp_dir f =\n  let base = Filename.temp_file \"munin\" \"test\" in\n  Sys.remove base;\n  Unix.mkdir base 0o755;\n  Fun.protect\n    ~finally:(fun () -> ignore (Sys.command (\"rm -rf \" ^ Filename.quote base)))\n    (fun () -> f base)\n\nlet write_text path text =\n  let oc = open_out path in\n  Fun.protect\n    ~finally:(fun () -> close_out oc)\n    (fun () -> output_string oc text)\n\nlet make_file root name text =\n  let path = Filename.concat root name in\n  write_text path text;\n  path\n\nlet make_dir root name files =\n  let dir = Filename.concat root name in\n  Unix.mkdir dir 0o755;\n  List.iter\n    (fun (fname, text) -> write_text (Filename.concat dir fname) text)\n    files;\n  dir\n\n(* Session lifecycle *)\n\nlet test_start_creates_run_dir () =\n  with_temp_dir @@ fun root ->\n  let session = Session.start ~root ~experiment:\"exp\" () in\n  let run = Session.run session in\n  is_true ~msg:\"run dir exists\" (Sys.file_exists (Run.dir run));\n  is_true ~msg:\"manifest exists\"\n    (Sys.file_exists (Filename.concat (Run.dir run) \"run.json\"));\n  is_true ~msg:\"status is running\" (Run.status run = `running);\n  Session.finish session ()\n\nlet test_finish_sets_status () =\n  with_temp_dir @@ fun root ->\n  let session = Session.start ~root ~experiment:\"exp\" () in\n  Session.finish session ();\n  let run = Session.run session in\n  is_true ~msg:\"status finished\" (Run.status run = `finished);\n  is_true ~msg:\"ended_at set\" (Option.is_some (Run.ended_at run))\n\nlet test_finish_failed () =\n  with_temp_dir @@ fun root ->\n  let session = Session.start ~root ~experiment:\"exp\" () in\n  Session.finish ~status:`failed session ();\n  is_true ~msg:\"status failed\" (Run.status (Session.run session) = `failed)\n\nlet test_finish_killed () =\n  with_temp_dir @@ fun root ->\n  let session = Session.start ~root ~experiment:\"exp\" () in\n  Session.finish ~status:`killed session ();\n  is_true 
~msg:\"status killed\" (Run.status (Session.run session) = `killed)\n\nlet test_finish_idempotent () =\n  with_temp_dir @@ fun root ->\n  let session = Session.start ~root ~experiment:\"exp\" () in\n  Session.finish session ();\n  Session.finish ~status:`failed session ();\n  is_true ~msg:\"still finished\" (Run.status (Session.run session) = `finished)\n\nlet test_with_run_success () =\n  with_temp_dir @@ fun root ->\n  let result =\n    Session.with_run ~root ~experiment:\"exp\" (fun session ->\n        Session.log_metric session ~step:1 \"x\" 1.0;\n        42)\n  in\n  equal ~msg:\"return value\" int 42 result;\n  let store = Store.open_ ~root () in\n  let run = Option.get (Store.latest_run store ()) in\n  is_true ~msg:\"status finished\" (Run.status run = `finished)\n\nlet test_with_run_exception () =\n  with_temp_dir @@ fun root ->\n  let raised =\n    try\n      ignore\n        (Session.with_run ~root ~experiment:\"exp\" (fun _session ->\n             failwith \"boom\"));\n      false\n    with Failure _ -> true\n  in\n  is_true ~msg:\"exception re-raised\" raised;\n  let store = Store.open_ ~root () in\n  let run = Option.get (Store.latest_run store ()) in\n  is_true ~msg:\"status failed\" (Run.status run = `failed)\n\nlet test_resume () =\n  with_temp_dir @@ fun root ->\n  let session = Session.start ~root ~experiment:\"exp\" () in\n  Session.log_metric session ~step:1 \"x\" 1.0;\n  let run = Session.run session in\n  is_true ~msg:\"resumable before finish\" (Run.resumable run);\n  let resumed = Session.resume run in\n  Session.log_metric resumed ~step:2 \"x\" 0.5;\n  Session.finish resumed ();\n  let final = Session.run resumed in\n  equal ~msg:\"history length\" int 2 (List.length (Run.metric_history final \"x\"));\n  is_false ~msg:\"not resumable after finish\" (Run.resumable final)\n\nlet test_resume_finished_raises () =\n  with_temp_dir @@ fun root ->\n  let session = Session.start ~root ~experiment:\"exp\" () in\n  Session.finish session ();\n  let 
run = Session.run session in\n  raises_invalid_arg \"Munin.Session.resume: run is not resumable\" (fun () ->\n      ignore (Session.resume run))\n\nlet test_ops_after_finish_ignored () =\n  with_temp_dir @@ fun root ->\n  let session = Session.start ~root ~experiment:\"exp\" () in\n  Session.log_metric session ~step:1 \"x\" 1.0;\n  Session.finish session ();\n  Session.log_metric session ~step:2 \"x\" 2.0;\n  Session.set_notes session (Some \"late\");\n  Session.add_tags session [ \"late\" ];\n  Session.set_summary session [ (\"late\", `Float 1.0) ];\n  let run = Session.run session in\n  equal ~msg:\"only one metric\" int 1 (List.length (Run.metric_history run \"x\"));\n  is_true ~msg:\"no late note\" (Run.notes run = None);\n  equal ~msg:\"no tags\" (list string) [] (Run.tags run)\n\nlet lifecycle =\n  [\n    test \"start creates run dir\" test_start_creates_run_dir;\n    test \"finish sets status\" test_finish_sets_status;\n    test \"finish failed\" test_finish_failed;\n    test \"finish killed\" test_finish_killed;\n    test \"finish is idempotent\" test_finish_idempotent;\n    test \"with_run success\" test_with_run_success;\n    test \"with_run exception\" test_with_run_exception;\n    test \"resume\" test_resume;\n    test \"resume finished raises\" test_resume_finished_raises;\n    test \"ops after finish ignored\" test_ops_after_finish_ignored;\n  ]\n\n(* Scalars *)\n\nlet test_log_metric () =\n  with_temp_dir @@ fun root ->\n  let session = Session.start ~root ~experiment:\"exp\" () in\n  Session.log_metric session ~step:1 \"train/loss\" 1.5;\n  Session.finish session ();\n  let run = Session.run session in\n  let m =\n    match List.assoc_opt \"train/loss\" (Run.latest_metrics run) with\n    | Some m -> m\n    | None -> failwith \"missing\"\n  in\n  equal ~msg:\"step\" int 1 m.step;\n  equal ~msg:\"value\" (float 0.0) 1.5 m.value\n\nlet test_log_metrics_batch () =\n  with_temp_dir @@ fun root ->\n  let session = Session.start ~root ~experiment:\"exp\" 
() in\n  Session.log_metrics session ~step:1 [ (\"train/loss\", 1.0); (\"val/acc\", 0.6) ];\n  Session.finish session ();\n  let run = Session.run session in\n  equal ~msg:\"two keys\" int 2 (List.length (Run.metric_keys run))\n\nlet test_metric_history_chronological () =\n  with_temp_dir @@ fun root ->\n  let session = Session.start ~root ~experiment:\"exp\" () in\n  Session.log_metric session ~step:1 \"x\" 3.0;\n  Session.log_metric session ~step:2 \"x\" 2.0;\n  Session.log_metric session ~step:3 \"x\" 1.0;\n  Session.finish session ();\n  let run = Session.run session in\n  let history = Run.metric_history run \"x\" in\n  equal ~msg:\"length\" int 3 (List.length history);\n  let values = List.map (fun (m : Run.metric) -> m.value) history in\n  equal ~msg:\"order\" (list (float 0.0)) [ 3.0; 2.0; 1.0 ] values\n\nlet test_latest_metrics () =\n  with_temp_dir @@ fun root ->\n  let session = Session.start ~root ~experiment:\"exp\" () in\n  Session.log_metric session ~step:1 \"x\" 1.0;\n  Session.log_metric session ~step:2 \"x\" 2.0;\n  Session.finish session ();\n  let run = Session.run session in\n  let latest =\n    match List.assoc_opt \"x\" (Run.latest_metrics run) with\n    | Some m -> m\n    | None -> failwith \"missing\"\n  in\n  equal ~msg:\"latest step\" int 2 latest.step;\n  equal ~msg:\"latest value\" (float 0.0) 2.0 latest.value\n\nlet test_metric_keys_sorted () =\n  with_temp_dir @@ fun root ->\n  let session = Session.start ~root ~experiment:\"exp\" () in\n  Session.log_metric session ~step:1 \"z/loss\" 1.0;\n  Session.log_metric session ~step:1 \"a/acc\" 0.5;\n  Session.log_metric session ~step:1 \"m/lr\" 0.01;\n  Session.finish session ();\n  let run = Session.run session in\n  equal ~msg:\"sorted keys\" (list string)\n    [ \"a/acc\"; \"m/lr\"; \"z/loss\" ]\n    (Run.metric_keys run)\n\nlet test_explicit_timestamp () =\n  with_temp_dir @@ fun root ->\n  let session = Session.start ~root ~experiment:\"exp\" () in\n  Session.log_metric session ~step:1 
~timestamp:42.0 \"x\" 1.0;\n  Session.finish session ();\n  let run = Session.run session in\n  let m =\n    match List.assoc_opt \"x\" (Run.latest_metrics run) with\n    | Some m -> m\n    | None -> failwith \"missing\"\n  in\n  equal ~msg:\"timestamp\" (float 0.0) 42.0 m.timestamp\n\nlet test_missing_metric_history_empty () =\n  with_temp_dir @@ fun root ->\n  let session = Session.start ~root ~experiment:\"exp\" () in\n  Session.finish session ();\n  let run = Session.run session in\n  equal ~msg:\"empty history\" (list int) []\n    (List.map\n       (fun (m : Run.metric) -> m.step)\n       (Run.metric_history run \"nonexistent\"))\n\nlet scalars =\n  [\n    test \"log_metric\" test_log_metric;\n    test \"log_metrics batch\" test_log_metrics_batch;\n    test \"history chronological\" test_metric_history_chronological;\n    test \"latest_metrics\" test_latest_metrics;\n    test \"metric_keys sorted\" test_metric_keys_sorted;\n    test \"explicit timestamp\" test_explicit_timestamp;\n    test \"missing key history empty\" test_missing_metric_history_empty;\n  ]\n\n(* Metric definitions *)\n\nlet test_define_metric_summary_and_goal () =\n  with_temp_dir @@ fun root ->\n  let session = Session.start ~root ~experiment:\"exp\" () in\n  Session.define_metric session \"train/loss\" ~summary:`Min ~goal:`Minimize ();\n  Session.define_metric session \"val/acc\" ~summary:`Max ~goal:`Maximize ();\n  Session.finish session ();\n  let run = Session.run session in\n  let defs = Run.metric_defs run in\n  equal ~msg:\"two defs\" int 2 (List.length defs);\n  let loss_def = List.assoc \"train/loss\" defs in\n  is_true ~msg:\"loss summary min\" (loss_def.summary = `Min);\n  is_true ~msg:\"loss goal minimize\" (loss_def.goal = Some `Minimize);\n  let acc_def = List.assoc \"val/acc\" defs in\n  is_true ~msg:\"acc summary max\" (acc_def.summary = `Max);\n  is_true ~msg:\"acc goal maximize\" (acc_def.goal = Some `Maximize)\n\nlet test_define_metric_all_summaries () =\n  with_temp_dir 
@@ fun root ->\n  let session = Session.start ~root ~experiment:\"exp\" () in\n  Session.define_metric session \"a\" ~summary:`Min ();\n  Session.define_metric session \"b\" ~summary:`Max ();\n  Session.define_metric session \"c\" ~summary:`Mean ();\n  Session.define_metric session \"d\" ~summary:`Last ();\n  Session.define_metric session \"e\" ~summary:`None ();\n  Session.finish session ();\n  let run = Session.run session in\n  let defs = Run.metric_defs run in\n  equal ~msg:\"five defs\" int 5 (List.length defs);\n  is_true ~msg:\"a=Min\" ((List.assoc \"a\" defs).summary = `Min);\n  is_true ~msg:\"b=Max\" ((List.assoc \"b\" defs).summary = `Max);\n  is_true ~msg:\"c=Mean\" ((List.assoc \"c\" defs).summary = `Mean);\n  is_true ~msg:\"d=Last\" ((List.assoc \"d\" defs).summary = `Last);\n  is_true ~msg:\"e=None\" ((List.assoc \"e\" defs).summary = `None)\n\nlet test_define_metric_step_metric () =\n  with_temp_dir @@ fun root ->\n  let session = Session.start ~root ~experiment:\"exp\" () in\n  Session.define_metric session \"train/loss\" ~step_metric:\"epoch\" ();\n  Session.finish session ();\n  let run = Session.run session in\n  let def = List.assoc \"train/loss\" (Run.metric_defs run) in\n  equal ~msg:\"step_metric\" (option string) (Some \"epoch\") def.step_metric\n\nlet test_define_metric_default_summary () =\n  with_temp_dir @@ fun root ->\n  let session = Session.start ~root ~experiment:\"exp\" () in\n  Session.define_metric session \"x\" ();\n  Session.finish session ();\n  let run = Session.run session in\n  let def = List.assoc \"x\" (Run.metric_defs run) in\n  is_true ~msg:\"default summary is Last\" (def.summary = `Last)\n\nlet test_metric_defs_sorted () =\n  with_temp_dir @@ fun root ->\n  let session = Session.start ~root ~experiment:\"exp\" () in\n  Session.define_metric session \"z\" ();\n  Session.define_metric session \"a\" ();\n  Session.define_metric session \"m\" ();\n  Session.finish session ();\n  let run = Session.run session in\n  let keys 
= List.map fst (Run.metric_defs run) in\n  equal ~msg:\"sorted\" (list string) [ \"a\"; \"m\"; \"z\" ] keys\n\nlet metric_definitions =\n  [\n    test \"summary and goal\" test_define_metric_summary_and_goal;\n    test \"all summary modes\" test_define_metric_all_summaries;\n    test \"step_metric\" test_define_metric_step_metric;\n    test \"default summary is Last\" test_define_metric_default_summary;\n    test \"defs sorted\" test_metric_defs_sorted;\n  ]\n\n(* Metadata *)\n\nlet test_set_notes () =\n  with_temp_dir @@ fun root ->\n  let session = Session.start ~root ~experiment:\"exp\" () in\n  Session.set_notes session (Some \"hello\");\n  Session.finish session ();\n  equal ~msg:\"notes set\" (option string) (Some \"hello\")\n    (Run.notes (Session.run session))\n\nlet test_set_notes_replace () =\n  with_temp_dir @@ fun root ->\n  let session = Session.start ~root ~experiment:\"exp\" () in\n  Session.set_notes session (Some \"first\");\n  Session.set_notes session (Some \"second\");\n  Session.finish session ();\n  equal ~msg:\"notes replaced\" (option string) (Some \"second\")\n    (Run.notes (Session.run session))\n\nlet test_clear_notes () =\n  with_temp_dir @@ fun root ->\n  let session = Session.start ~root ~experiment:\"exp\" ~notes:\"initial\" () in\n  Session.set_notes session None;\n  Session.finish session ();\n  equal ~msg:\"notes cleared\" (option string) None\n    (Run.notes (Session.run session))\n\nlet test_set_summary_merge () =\n  with_temp_dir @@ fun root ->\n  let session = Session.start ~root ~experiment:\"exp\" () in\n  Session.set_summary session [ (\"a\", `Float 1.0) ];\n  Session.set_summary session [ (\"b\", `Float 2.0); (\"a\", `Float 3.0) ];\n  Session.finish session ();\n  let run = Session.run session in\n  let summary = Run.summary run in\n  equal ~msg:\"two keys\" int 2 (List.length summary);\n  is_true ~msg:\"a replaced\"\n    (Value.to_float (Option.get (Run.find_summary run \"a\")) = Some 3.0);\n  is_true ~msg:\"b 
present\"\n    (Value.to_float (Option.get (Run.find_summary run \"b\")) = Some 2.0)\n\nlet test_find_summary_missing () =\n  with_temp_dir @@ fun root ->\n  let session = Session.start ~root ~experiment:\"exp\" () in\n  Session.finish session ();\n  equal ~msg:\"missing key\" (option unit) None\n    (Option.map ignore (Run.find_summary (Session.run session) \"nope\"))\n\nlet test_add_tags () =\n  with_temp_dir @@ fun root ->\n  let session = Session.start ~root ~experiment:\"exp\" ~tags:[ \"initial\" ] () in\n  Session.add_tags session [ \"added\" ];\n  Session.finish session ();\n  let run = Session.run session in\n  equal ~msg:\"tags\" (list string) [ \"initial\"; \"added\" ] (Run.tags run)\n\nlet test_add_tags_dedup () =\n  with_temp_dir @@ fun root ->\n  let session = Session.start ~root ~experiment:\"exp\" ~tags:[ \"a\" ] () in\n  Session.add_tags session [ \"a\"; \"b\" ];\n  Session.finish session ();\n  let run = Session.run session in\n  equal ~msg:\"deduped\" (list string) [ \"a\"; \"b\" ] (Run.tags run)\n\nlet test_add_tags_empty_noop () =\n  with_temp_dir @@ fun root ->\n  let session = Session.start ~root ~experiment:\"exp\" () in\n  Session.add_tags session [];\n  Session.finish session ();\n  equal ~msg:\"no tags\" (list string) [] (Run.tags (Session.run session))\n\nlet test_params_immutable () =\n  with_temp_dir @@ fun root ->\n  let session =\n    Session.start ~root ~experiment:\"exp\"\n      ~params:[ (\"lr\", `Float 0.001); (\"bs\", `Int 32) ]\n      ()\n  in\n  Session.finish session ();\n  let run = Session.run session in\n  equal ~msg:\"param count\" int 2 (List.length (Run.params run))\n\nlet metadata =\n  [\n    test \"set_notes\" test_set_notes;\n    test \"set_notes replace\" test_set_notes_replace;\n    test \"clear notes\" test_clear_notes;\n    test \"set_summary merge\" test_set_summary_merge;\n    test \"find_summary missing\" test_find_summary_missing;\n    test \"add_tags\" test_add_tags;\n    test \"add_tags dedup\" 
test_add_tags_dedup;\n    test \"add_tags empty noop\" test_add_tags_empty_noop;\n    test \"params immutable\" test_params_immutable;\n  ]\n\n(* Provenance *)\n\nlet test_provenance_fields () =\n  with_temp_dir @@ fun root ->\n  let session =\n    Session.start ~root ~experiment:\"exp\" ~command:[ \"ocaml\"; \"train.ml\" ]\n      ~cwd:root ~hostname:\"node0\" ~pid:42 ~git_commit:\"abc123\" ~git_dirty:true\n      ()\n  in\n  Session.finish session ();\n  let prov = Run.provenance (Session.run session) in\n  equal ~msg:\"command\" (list string) [ \"ocaml\"; \"train.ml\" ] prov.command;\n  equal ~msg:\"cwd\" string root prov.cwd;\n  equal ~msg:\"hostname\" (option string) (Some \"node0\") prov.hostname;\n  equal ~msg:\"pid\" int 42 prov.pid;\n  equal ~msg:\"git_commit\" (option string) (Some \"abc123\") prov.git_commit;\n  equal ~msg:\"git_dirty\" (option bool) (Some true) prov.git_dirty\n\nlet test_capture_env () =\n  with_temp_dir @@ fun root ->\n  Unix.putenv \"MUNIN_TEST_A\" \"alpha\";\n  let session =\n    Session.start ~root ~experiment:\"exp\" ~capture_env:[ \"MUNIN_TEST_A\" ] ()\n  in\n  Session.finish session ();\n  let prov = Run.provenance (Session.run session) in\n  equal ~msg:\"env captured\"\n    (list (pair string string))\n    [ (\"MUNIN_TEST_A\", \"alpha\") ]\n    prov.env\n\nlet test_capture_env_missing () =\n  with_temp_dir @@ fun root ->\n  (* Use a variable name that won't exist *)\n  let session =\n    Session.start ~root ~experiment:\"exp\"\n      ~capture_env:[ \"MUNIN_NONEXISTENT_9999\" ]\n      ()\n  in\n  Session.finish session ();\n  let prov = Run.provenance (Session.run session) in\n  equal ~msg:\"missing env omitted\" (list (pair string string)) [] prov.env\n\nlet test_explicit_env () =\n  with_temp_dir @@ fun root ->\n  let session =\n    Session.start ~root ~experiment:\"exp\" ~env:[ (\"KEY\", \"value\") ] ()\n  in\n  Session.finish session ();\n  let prov = Run.provenance (Session.run session) in\n  equal ~msg:\"explicit env\"\n    
(list (pair string string))\n    [ (\"KEY\", \"value\") ]\n    prov.env\n\nlet provenance_tests =\n  [\n    test \"all fields round-trip\" test_provenance_fields;\n    test \"capture_env\" test_capture_env;\n    test \"capture_env missing var\" test_capture_env_missing;\n    test \"explicit env\" test_explicit_env;\n  ]\n\n(* Run loading *)\n\nlet test_run_load () =\n  with_temp_dir @@ fun root ->\n  let session = Session.start ~root ~experiment:\"exp\" () in\n  Session.finish session ();\n  let id = Run.id (Session.run session) in\n  is_some ~msg:\"load existing\" (Run.load ~root ~experiment:\"exp\" ~id)\n\nlet test_run_load_missing () =\n  with_temp_dir @@ fun root ->\n  let _store = Store.open_ ~root () in\n  is_none ~msg:\"load missing\"\n    (Run.load ~root ~experiment:\"exp\" ~id:\"nonexistent\")\n\nlet test_run_list_sorted_descending () =\n  with_temp_dir @@ fun root ->\n  let s1 = Session.start ~root ~experiment:\"exp\" () in\n  Session.finish s1 ();\n  let s2 = Session.start ~root ~experiment:\"exp\" () in\n  Session.finish s2 ();\n  let runs = Run.list ~root ~experiment:\"exp\" () in\n  equal ~msg:\"count\" int 2 (List.length runs);\n  let ids = List.map Run.id runs in\n  (* Sorted descending by id *)\n  is_true ~msg:\"descending order\"\n    (String.compare (List.nth ids 0) (List.nth ids 1) > 0)\n\nlet test_run_list_filter_status () =\n  with_temp_dir @@ fun root ->\n  let s1 = Session.start ~root ~experiment:\"exp\" () in\n  Session.finish s1 ();\n  let s2 = Session.start ~root ~experiment:\"exp\" () in\n  Session.finish ~status:`failed s2 ();\n  equal ~msg:\"finished only\" int 1\n    (List.length (Run.list ~root ~experiment:\"exp\" ~status:`finished ()));\n  equal ~msg:\"failed only\" int 1\n    (List.length (Run.list ~root ~experiment:\"exp\" ~status:`failed ()))\n\nlet test_run_list_filter_tag () =\n  with_temp_dir @@ fun root ->\n  let s1 = Session.start ~root ~experiment:\"exp\" ~tags:[ \"train\" ] () in\n  Session.finish s1 ();\n  let s2 = 
Session.start ~root ~experiment:\"exp\" ~tags:[ \"eval\" ] () in\n  Session.finish s2 ();\n  equal ~msg:\"train tagged\" int 1\n    (List.length (Run.list ~root ~experiment:\"exp\" ~tag:\"train\" ()));\n  equal ~msg:\"eval tagged\" int 1\n    (List.length (Run.list ~root ~experiment:\"exp\" ~tag:\"eval\" ()))\n\nlet test_run_list_filter_parent () =\n  with_temp_dir @@ fun root ->\n  let parent = Session.start ~root ~experiment:\"exp\" () in\n  Session.finish parent ();\n  let parent_run = Session.run parent in\n  let child = Session.start ~root ~experiment:\"exp\" ~parent:parent_run () in\n  Session.finish child ();\n  let _other = Session.start ~root ~experiment:\"exp\" () in\n  Session.finish _other ();\n  equal ~msg:\"one child\" int 1\n    (List.length\n       (Run.list ~root ~experiment:\"exp\" ~parent:(Run.id parent_run) ()))\n\nlet test_run_children () =\n  with_temp_dir @@ fun root ->\n  let parent = Session.start ~root ~experiment:\"exp\" () in\n  Session.finish parent ();\n  let parent_run = Session.run parent in\n  let c1 = Session.start ~root ~experiment:\"exp\" ~parent:parent_run () in\n  Session.finish c1 ();\n  let c2 = Session.start ~root ~experiment:\"exp\" ~parent:parent_run () in\n  Session.finish c2 ();\n  let children = Run.children parent_run in\n  equal ~msg:\"two children\" int 2 (List.length children);\n  List.iter\n    (fun child ->\n      equal ~msg:\"parent_id\" (option string)\n        (Some (Run.id parent_run))\n        (Run.parent_id child))\n    children\n\nlet test_run_name () =\n  with_temp_dir @@ fun root ->\n  let s1 = Session.start ~root ~experiment:\"exp\" ~name:\"baseline\" () in\n  Session.finish s1 ();\n  let s2 = Session.start ~root ~experiment:\"exp\" () in\n  Session.finish s2 ();\n  equal ~msg:\"named\" (option string) (Some \"baseline\")\n    (Run.name (Session.run s1));\n  equal ~msg:\"unnamed\" (option string) None (Run.name (Session.run s2))\n\nlet test_experiment_name () =\n  with_temp_dir @@ fun root ->\n  let 
session = Session.start ~root ~experiment:\"my-exp\" () in\n  Session.finish session ();\n  equal ~msg:\"experiment name\" string \"my-exp\"\n    (Run.experiment_name (Session.run session))\n\nlet run_loading =\n  [\n    test \"load existing\" test_run_load;\n    test \"load missing\" test_run_load_missing;\n    test \"list sorted descending\" test_run_list_sorted_descending;\n    test \"list filter status\" test_run_list_filter_status;\n    test \"list filter tag\" test_run_list_filter_tag;\n    test \"list filter parent\" test_run_list_filter_parent;\n    test \"children\" test_run_children;\n    test \"name\" test_run_name;\n    test \"experiment_name\" test_experiment_name;\n  ]\n\n(* Artifacts *)\n\nlet test_file_artifact () =\n  with_temp_dir @@ fun root ->\n  let src = make_file root \"weights.bin\" \"model weights\" in\n  let session = Session.start ~root ~experiment:\"exp\" () in\n  let artifact =\n    Session.log_artifact session ~name:\"model\" ~kind:`checkpoint ~path:src ()\n  in\n  Session.finish session ();\n  equal ~msg:\"name\" string \"model\" (Artifact.name artifact);\n  equal ~msg:\"version\" string \"v1\" (Artifact.version artifact);\n  is_true ~msg:\"kind\" (Artifact.kind artifact = `checkpoint);\n  is_true ~msg:\"payload file\" (Artifact.payload artifact = `file);\n  is_true ~msg:\"size positive\" (Artifact.size_bytes artifact > 0);\n  is_true ~msg:\"path exists\" (Sys.file_exists (Artifact.path artifact));\n  is_true ~msg:\"digest is sha256 hex\"\n    (String.length (Artifact.digest artifact) = 64)\n\nlet test_dir_artifact () =\n  with_temp_dir @@ fun root ->\n  let dir =\n    make_dir root \"dataset\" [ (\"a.txt\", \"data a\"); (\"b.txt\", \"data b\") ]\n  in\n  let session = Session.start ~root ~experiment:\"exp\" () in\n  let artifact =\n    Session.log_artifact session ~name:\"data\" ~kind:`dataset ~path:dir ()\n  in\n  Session.finish session ();\n  is_true ~msg:\"payload dir\" (Artifact.payload artifact = `dir);\n  is_true ~msg:\"kind 
dataset\" (Artifact.kind artifact = `dataset);\n  is_true ~msg:\"blob dir exists\" (Sys.file_exists (Artifact.path artifact))\n\nlet test_artifact_versioning () =\n  with_temp_dir @@ fun root ->\n  let src = make_file root \"model.bin\" \"v1 weights\" in\n  let session = Session.start ~root ~experiment:\"exp\" () in\n  let a1 =\n    Session.log_artifact session ~name:\"model\" ~kind:`model ~path:src ()\n  in\n  write_text src \"v2 weights\";\n  let a2 =\n    Session.log_artifact session ~name:\"model\" ~kind:`model ~path:src ()\n  in\n  Session.finish session ();\n  equal ~msg:\"first version\" string \"v1\" (Artifact.version a1);\n  equal ~msg:\"second version\" string \"v2\" (Artifact.version a2)\n\nlet test_artifact_aliases () =\n  with_temp_dir @@ fun root ->\n  let src = make_file root \"model.bin\" \"weights\" in\n  let session = Session.start ~root ~experiment:\"exp\" () in\n  let artifact =\n    Session.log_artifact session ~name:\"model\" ~kind:`model ~path:src\n      ~aliases:[ \"best\"; \"latest\" ] ()\n  in\n  Session.finish session ();\n  is_true ~msg:\"has best alias\" (Artifact.has_alias artifact \"best\");\n  is_true ~msg:\"has latest alias\" (Artifact.has_alias artifact \"latest\");\n  is_false ~msg:\"no nope alias\" (Artifact.has_alias artifact \"nope\")\n\nlet test_artifact_alias_resolution () =\n  with_temp_dir @@ fun root ->\n  let src = make_file root \"model.bin\" \"weights\" in\n  let session = Session.start ~root ~experiment:\"exp\" () in\n  ignore\n    (Session.log_artifact session ~name:\"model\" ~kind:`model ~path:src\n       ~aliases:[ \"latest\" ] ());\n  write_text src \"v2 weights\";\n  ignore\n    (Session.log_artifact session ~name:\"model\" ~kind:`model ~path:src\n       ~aliases:[ \"latest\" ] ());\n  Session.finish session ();\n  let resolved = Artifact.load ~root ~name:\"model\" ~version:\"latest\" in\n  is_true ~msg:\"alias resolves to v2\"\n    (match resolved with Some a -> Artifact.version a = \"v2\" | None -> false);\n  
let v1 = Artifact.load ~root ~name:\"model\" ~version:\"v1\" in\n  is_some ~msg:\"explicit v1 loads\" v1\n\nlet test_artifact_metadata () =\n  with_temp_dir @@ fun root ->\n  let src = make_file root \"model.bin\" \"weights\" in\n  let session = Session.start ~root ~experiment:\"exp\" () in\n  let artifact =\n    Session.log_artifact session ~name:\"model\" ~kind:`model ~path:src\n      ~metadata:[ (\"framework\", `String \"rune\") ]\n      ()\n  in\n  Session.finish session ();\n  equal ~msg:\"metadata count\" int 1 (List.length (Artifact.metadata artifact))\n\nlet test_artifact_lineage () =\n  with_temp_dir @@ fun root ->\n  let src = make_file root \"model.bin\" \"weights\" in\n  let producer = Session.start ~root ~experiment:\"exp\" ~name:\"producer\" () in\n  let artifact =\n    Session.log_artifact producer ~name:\"model\" ~kind:`model ~path:src ()\n  in\n  let consumer = Session.start ~root ~experiment:\"exp\" ~name:\"consumer\" () in\n  Session.use_artifact consumer artifact;\n  Session.finish producer ();\n  Session.finish consumer ();\n  let producer_run = Session.run producer in\n  let consumer_run = Session.run consumer in\n  equal ~msg:\"producer_run_id\" (option string)\n    (Some (Run.id producer_run))\n    (Artifact.producer_run_id artifact);\n  (* Reload to see consumer linkage *)\n  let reloaded =\n    match Artifact.load ~root ~name:\"model\" ~version:\"v1\" with\n    | Some a -> a\n    | None -> failwith \"missing artifact\"\n  in\n  equal ~msg:\"consumer_run_ids\" (list string)\n    [ Run.id consumer_run ]\n    (Artifact.consumer_run_ids reloaded);\n  equal ~msg:\"output artifacts\" int 1\n    (List.length (Run.output_artifacts producer_run));\n  equal ~msg:\"input artifacts\" int 1\n    (List.length (Run.input_artifacts consumer_run))\n\nlet test_log_artifact_closed_raises () =\n  with_temp_dir @@ fun root ->\n  let src = make_file root \"model.bin\" \"weights\" in\n  let session = Session.start ~root ~experiment:\"exp\" () in\n  
Session.finish session ();\n  raises_failure \"Munin.Session.log_artifact: closed session\" (fun () ->\n      ignore\n        (Session.log_artifact session ~name:\"model\" ~kind:`model ~path:src ()))\n\nlet test_log_artifact_missing_path_raises () =\n  with_temp_dir @@ fun root ->\n  let session = Session.start ~root ~experiment:\"exp\" () in\n  raises_invalid_arg\n    \"Munin.Session.log_artifact: path does not exist: /nonexistent/path\"\n    (fun () ->\n      ignore\n        (Session.log_artifact session ~name:\"model\" ~kind:`model\n           ~path:\"/nonexistent/path\" ()))\n\nlet test_artifact_content_dedup () =\n  with_temp_dir @@ fun root ->\n  let src = make_file root \"model.bin\" \"same content\" in\n  let session = Session.start ~root ~experiment:\"exp\" () in\n  let a1 =\n    Session.log_artifact session ~name:\"model\" ~kind:`model ~path:src ()\n  in\n  let a2 =\n    Session.log_artifact session ~name:\"backup\" ~kind:`model ~path:src ()\n  in\n  Session.finish session ();\n  equal ~msg:\"same digest\" string (Artifact.digest a1) (Artifact.digest a2)\n\nlet artifacts =\n  [\n    test \"file artifact\" test_file_artifact;\n    test \"directory artifact\" test_dir_artifact;\n    test \"versioning\" test_artifact_versioning;\n    test \"aliases\" test_artifact_aliases;\n    test \"alias resolution\" test_artifact_alias_resolution;\n    test \"metadata\" test_artifact_metadata;\n    test \"lineage\" test_artifact_lineage;\n    test \"closed session raises\" test_log_artifact_closed_raises;\n    test \"missing path raises\" test_log_artifact_missing_path_raises;\n    test \"content dedup\" test_artifact_content_dedup;\n  ]\n\n(* Store *)\n\nlet test_store_open () =\n  with_temp_dir @@ fun root ->\n  let store = Store.open_ ~root () in\n  equal ~msg:\"root\" string root (Store.root store);\n  is_true ~msg:\"experiments dir exists\"\n    (Sys.file_exists (Filename.concat root \"experiments\"));\n  is_true ~msg:\"artifacts dir exists\"\n    (Sys.file_exists 
(Filename.concat root \"artifacts\"))\n\nlet test_store_list_experiments () =\n  with_temp_dir @@ fun root ->\n  let s1 = Session.start ~root ~experiment:\"mnist\" () in\n  Session.finish s1 ();\n  let s2 = Session.start ~root ~experiment:\"cifar\" () in\n  Session.finish s2 ();\n  let store = Store.open_ ~root () in\n  let exps = Store.list_experiments store in\n  equal ~msg:\"two experiments\" int 2 (List.length exps);\n  is_true ~msg:\"has mnist\" (List.mem \"mnist\" exps);\n  is_true ~msg:\"has cifar\" (List.mem \"cifar\" exps)\n\nlet test_store_list_runs_all () =\n  with_temp_dir @@ fun root ->\n  let s1 = Session.start ~root ~experiment:\"exp1\" () in\n  Session.finish s1 ();\n  let s2 = Session.start ~root ~experiment:\"exp2\" () in\n  Session.finish s2 ();\n  let store = Store.open_ ~root () in\n  equal ~msg:\"all runs\" int 2 (List.length (Store.list_runs store ()))\n\nlet test_store_list_runs_experiment () =\n  with_temp_dir @@ fun root ->\n  let s1 = Session.start ~root ~experiment:\"exp1\" () in\n  Session.finish s1 ();\n  let s2 = Session.start ~root ~experiment:\"exp2\" () in\n  Session.finish s2 ();\n  let store = Store.open_ ~root () in\n  equal ~msg:\"exp1 runs\" int 1\n    (List.length (Store.list_runs store ~experiment:\"exp1\" ()));\n  equal ~msg:\"exp2 runs\" int 1\n    (List.length (Store.list_runs store ~experiment:\"exp2\" ()))\n\nlet test_store_find_run () =\n  with_temp_dir @@ fun root ->\n  let session = Session.start ~root ~experiment:\"exp\" () in\n  Session.finish session ();\n  let id = Run.id (Session.run session) in\n  let store = Store.open_ ~root () in\n  is_some ~msg:\"found\" (Store.find_run store id)\n\nlet test_store_find_run_missing () =\n  with_temp_dir @@ fun root ->\n  let store = Store.open_ ~root () in\n  is_none ~msg:\"not found\" (Store.find_run store \"nonexistent\")\n\nlet test_store_latest_run () =\n  with_temp_dir @@ fun root ->\n  let session = Session.start ~root ~experiment:\"exp\" ~name:\"only\" () in\n  
Session.finish session ();\n  let store = Store.open_ ~root () in\n  let latest = Store.latest_run store () in\n  is_true ~msg:\"latest returns the run\"\n    (match latest with Some run -> Run.name run = Some \"only\" | None -> false)\n\nlet test_store_latest_run_with_filter () =\n  with_temp_dir @@ fun root ->\n  let s1 =\n    Session.start ~root ~experiment:\"exp\" ~tags:[ \"train\" ] ~name:\"a\" ()\n  in\n  Session.finish s1 ();\n  Unix.sleepf 0.01;\n  let s2 =\n    Session.start ~root ~experiment:\"exp\" ~tags:[ \"eval\" ] ~name:\"b\" ()\n  in\n  Session.finish s2 ();\n  let store = Store.open_ ~root () in\n  let latest = Store.latest_run store ~tag:\"train\" () in\n  is_true ~msg:\"latest with tag filter\"\n    (match latest with Some run -> Run.name run = Some \"a\" | None -> false)\n\nlet test_store_delete_run () =\n  with_temp_dir @@ fun root ->\n  let session = Session.start ~root ~experiment:\"exp\" () in\n  Session.finish session ();\n  let run = Session.run session in\n  let store = Store.open_ ~root () in\n  is_true ~msg:\"run dir exists before\" (Sys.file_exists (Run.dir run));\n  Store.delete_run store run;\n  is_false ~msg:\"run dir removed\" (Sys.file_exists (Run.dir run))\n\nlet test_store_delete_run_cleans_experiment () =\n  with_temp_dir @@ fun root ->\n  let session = Session.start ~root ~experiment:\"temp\" () in\n  Session.finish session ();\n  let run = Session.run session in\n  let store = Store.open_ ~root () in\n  Store.delete_run store run;\n  let exp_dir = Filename.concat (Filename.concat root \"experiments\") \"temp\" in\n  is_false ~msg:\"experiment dir removed\" (Sys.file_exists exp_dir)\n\nlet test_store_gc () =\n  with_temp_dir @@ fun root ->\n  (* Use directory artifact so blob is a directory (gc uses list_dirs) *)\n  let dir = make_dir root \"dataset\" [ (\"a.txt\", \"data\") ] in\n  let session = Session.start ~root ~experiment:\"exp\" () in\n  let artifact =\n    Session.log_artifact session ~name:\"data\" ~kind:`dataset 
~path:dir ()\n  in\n  Session.finish session ();\n  let store = Store.open_ ~root () in\n  (* Blob is referenced, gc should remove 0 *)\n  let removed_before = Store.gc store in\n  equal ~msg:\"no unreferenced before\" int 0 removed_before;\n  is_true ~msg:\"blob exists\" (Sys.file_exists (Artifact.path artifact));\n  (* Remove artifact manifest to make blob unreferenced *)\n  let version_dir =\n    Filename.concat\n      (Filename.concat\n         (Filename.concat (Filename.concat root \"artifacts\") \"data\")\n         \"versions\")\n      \"v1\"\n  in\n  ignore (Sys.command (\"rm -rf \" ^ Filename.quote version_dir));\n  let removed_after = Store.gc store in\n  equal ~msg:\"one blob removed\" int 1 removed_after\n\nlet test_store_find_artifact () =\n  with_temp_dir @@ fun root ->\n  let src = make_file root \"model.bin\" \"weights\" in\n  let session = Session.start ~root ~experiment:\"exp\" () in\n  ignore (Session.log_artifact session ~name:\"model\" ~kind:`model ~path:src ());\n  Session.finish session ();\n  let store = Store.open_ ~root () in\n  is_some ~msg:\"found\" (Store.find_artifact store ~name:\"model\" ~version:\"v1\")\n\nlet test_store_list_artifacts () =\n  with_temp_dir @@ fun root ->\n  let src = make_file root \"model.bin\" \"weights\" in\n  let session = Session.start ~root ~experiment:\"exp\" () in\n  ignore (Session.log_artifact session ~name:\"model\" ~kind:`model ~path:src ());\n  ignore (Session.log_artifact session ~name:\"data\" ~kind:`dataset ~path:src ());\n  Session.finish session ();\n  let store = Store.open_ ~root () in\n  equal ~msg:\"all artifacts\" int 2 (List.length (Store.list_artifacts store ()));\n  equal ~msg:\"filter by kind\" int 1\n    (List.length (Store.list_artifacts store ~kind:`model ()));\n  equal ~msg:\"filter by name\" int 1\n    (List.length (Store.list_artifacts store ~name:\"data\" ()))\n\nlet store_tests =\n  [\n    test \"open creates dirs\" test_store_open;\n    test \"list_experiments\" 
test_store_list_experiments;\n    test \"list_runs all\" test_store_list_runs_all;\n    test \"list_runs by experiment\" test_store_list_runs_experiment;\n    test \"find_run\" test_store_find_run;\n    test \"find_run missing\" test_store_find_run_missing;\n    test \"latest_run\" test_store_latest_run;\n    test \"latest_run with filter\" test_store_latest_run_with_filter;\n    test \"delete_run\" test_store_delete_run;\n    test \"delete_run cleans experiment\" test_store_delete_run_cleans_experiment;\n    test \"gc\" test_store_gc;\n    test \"find_artifact\" test_store_find_artifact;\n    test \"list_artifacts\" test_store_list_artifacts;\n  ]\n\n(* Run monitor *)\n\nlet test_monitor_poll () =\n  with_temp_dir @@ fun root ->\n  let session = Session.start ~root ~experiment:\"exp\" () in\n  Session.log_metric session ~step:1 \"train/loss\" 1.0;\n  Session.log_metric session ~step:2 \"train/loss\" 0.5;\n  let run = Session.run session in\n  let monitor = Run_monitor.start run in\n  Run_monitor.poll monitor;\n  let metrics = Run_monitor.metrics monitor in\n  equal ~msg:\"one key\" int 1 (List.length metrics);\n  let latest = List.assoc \"train/loss\" metrics in\n  equal ~msg:\"latest step\" int 2 latest.step;\n  Run_monitor.close monitor;\n  Session.finish session ()\n\nlet test_monitor_incremental () =\n  with_temp_dir @@ fun root ->\n  let session = Session.start ~root ~experiment:\"exp\" () in\n  Session.log_metric session ~step:1 \"x\" 1.0;\n  let run = Session.run session in\n  let monitor = Run_monitor.start run in\n  Run_monitor.poll monitor;\n  equal ~msg:\"one point after first poll\" int 1\n    (List.length (Run_monitor.history monitor \"x\"));\n  Session.log_metric session ~step:2 \"x\" 2.0;\n  Run_monitor.poll monitor;\n  equal ~msg:\"two points after second poll\" int 2\n    (List.length (Run_monitor.history monitor \"x\"));\n  Run_monitor.close monitor;\n  Session.finish session ()\n\nlet test_monitor_history_chronological () =\n  with_temp_dir @@ 
fun root ->\n  let session = Session.start ~root ~experiment:\"exp\" () in\n  Session.log_metric session ~step:1 \"x\" 3.0;\n  Session.log_metric session ~step:2 \"x\" 1.0;\n  Session.log_metric session ~step:3 \"x\" 2.0;\n  let run = Session.run session in\n  let monitor = Run_monitor.start run in\n  Run_monitor.poll monitor;\n  let history = Run_monitor.history monitor \"x\" in\n  let steps = List.map fst history in\n  equal ~msg:\"chronological steps\" (list int) [ 1; 2; 3 ] steps;\n  Run_monitor.close monitor;\n  Session.finish session ()\n\nlet test_monitor_metric_defs () =\n  with_temp_dir @@ fun root ->\n  let session = Session.start ~root ~experiment:\"exp\" () in\n  Session.define_metric session \"loss\" ~summary:`Min ~goal:`Minimize ();\n  let run = Session.run session in\n  let monitor = Run_monitor.start run in\n  Run_monitor.poll monitor;\n  let defs = Run_monitor.metric_defs monitor in\n  equal ~msg:\"one def\" int 1 (List.length defs);\n  let def = List.assoc \"loss\" defs in\n  is_true ~msg:\"summary min\" (def.summary = `Min);\n  Run_monitor.close monitor;\n  Session.finish session ()\n\nlet test_monitor_best_minimize () =\n  with_temp_dir @@ fun root ->\n  let session = Session.start ~root ~experiment:\"exp\" () in\n  Session.define_metric session \"loss\" ~goal:`Minimize ();\n  Session.log_metric session ~step:1 \"loss\" 1.0;\n  Session.log_metric session ~step:2 \"loss\" 0.3;\n  Session.log_metric session ~step:3 \"loss\" 0.7;\n  let run = Session.run session in\n  let monitor = Run_monitor.start run in\n  Run_monitor.poll monitor;\n  let best = Run_monitor.best monitor \"loss\" in\n  is_true ~msg:\"best is 0.3\"\n    (match best with Some m -> m.value = 0.3 | None -> false);\n  Run_monitor.close monitor;\n  Session.finish session ()\n\nlet test_monitor_best_maximize () =\n  with_temp_dir @@ fun root ->\n  let session = Session.start ~root ~experiment:\"exp\" () in\n  Session.define_metric session \"acc\" ~goal:`Maximize ();\n  
Session.log_metric session ~step:1 \"acc\" 0.5;\n  Session.log_metric session ~step:2 \"acc\" 0.9;\n  Session.log_metric session ~step:3 \"acc\" 0.7;\n  let run = Session.run session in\n  let monitor = Run_monitor.start run in\n  Run_monitor.poll monitor;\n  let best = Run_monitor.best monitor \"acc\" in\n  is_true ~msg:\"best is 0.9\"\n    (match best with Some m -> m.value = 0.9 | None -> false);\n  Run_monitor.close monitor;\n  Session.finish session ()\n\nlet test_monitor_best_loss_heuristic () =\n  with_temp_dir @@ fun root ->\n  let session = Session.start ~root ~experiment:\"exp\" () in\n  Session.log_metric session ~step:1 \"train/loss\" 1.0;\n  Session.log_metric session ~step:2 \"train/loss\" 0.2;\n  Session.log_metric session ~step:3 \"train/loss\" 0.5;\n  let run = Session.run session in\n  let monitor = Run_monitor.start run in\n  Run_monitor.poll monitor;\n  let best = Run_monitor.best monitor \"train/loss\" in\n  is_true ~msg:\"loss heuristic picks minimum\"\n    (match best with Some m -> m.value = 0.2 | None -> false);\n  Run_monitor.close monitor;\n  Session.finish session ()\n\nlet test_monitor_live_status () =\n  with_temp_dir @@ fun root ->\n  let session = Session.start ~root ~experiment:\"exp\" () in\n  Session.log_metric session ~step:1 \"x\" 1.0;\n  let run = Session.run session in\n  let monitor = Run_monitor.start run in\n  Run_monitor.poll monitor;\n  is_true ~msg:\"live during session\" (Run_monitor.live_status monitor = `Live);\n  Session.finish session ();\n  Run_monitor.poll monitor;\n  is_true ~msg:\"done after finish\"\n    (match Run_monitor.live_status monitor with `Done _ -> true | _ -> false);\n  Run_monitor.close monitor\n\nlet test_monitor_best_nonexistent () =\n  with_temp_dir @@ fun root ->\n  let session = Session.start ~root ~experiment:\"exp\" () in\n  let run = Session.run session in\n  let monitor = Run_monitor.start run in\n  Run_monitor.poll monitor;\n  is_none ~msg:\"no best for missing key\" (Run_monitor.best 
monitor \"nope\");\n  Run_monitor.close monitor;\n  Session.finish session ()\n\nlet run_monitor =\n  [\n    test \"poll reads metrics\" test_monitor_poll;\n    test \"incremental polling\" test_monitor_incremental;\n    test \"history chronological\" test_monitor_history_chronological;\n    test \"metric_defs\" test_monitor_metric_defs;\n    test \"best minimize\" test_monitor_best_minimize;\n    test \"best maximize\" test_monitor_best_maximize;\n    test \"best loss heuristic\" test_monitor_best_loss_heuristic;\n    test \"live_status\" test_monitor_live_status;\n    test \"best nonexistent key\" test_monitor_best_nonexistent;\n  ]\n\n(* Robustness *)\n\nlet test_partial_log () =\n  with_temp_dir @@ fun root ->\n  let session = Session.start ~root ~experiment:\"exp\" () in\n  Session.log_metric session ~step:1 \"x\" 1.0;\n  let run = Session.run session in\n  (* Append truncated JSON *)\n  let oc =\n    open_out_gen\n      [ Open_append; Open_creat ]\n      0o644\n      (Filename.concat (Run.dir run) \"events.jsonl\")\n  in\n  Fun.protect\n    ~finally:(fun () -> close_out oc)\n    (fun () -> output_string oc \"{\\\"type\\\":\\\"metric\\\"\");\n  let run = Session.run session in\n  equal ~msg:\"partial line ignored\" int 1\n    (List.length (Run.metric_history run \"x\"));\n  Session.finish session ()\n\nlet test_malformed_line () =\n  with_temp_dir @@ fun root ->\n  let session = Session.start ~root ~experiment:\"exp\" () in\n  Session.log_metric session ~step:1 \"x\" 1.0;\n  let run = Session.run session in\n  let oc =\n    open_out_gen\n      [ Open_append; Open_creat ]\n      0o644\n      (Filename.concat (Run.dir run) \"events.jsonl\")\n  in\n  Fun.protect\n    ~finally:(fun () -> close_out oc)\n    (fun () -> output_string oc \"this is not json at all\\n\");\n  let run = Session.run session in\n  equal ~msg:\"malformed line skipped\" int 1\n    (List.length (Run.metric_history run \"x\"));\n  Session.finish session ()\n\nlet test_empty_event_log () =\n  
with_temp_dir @@ fun root ->\n  let session = Session.start ~root ~experiment:\"exp\" () in\n  let run = Session.run session in\n  (* No events logged *)\n  equal ~msg:\"no metrics\" int 0 (List.length (Run.metric_keys run));\n  equal ~msg:\"no tags\" (list string) [] (Run.tags run);\n  is_true ~msg:\"running\" (Run.status run = `running);\n  Session.finish session ()\n\nlet test_mixed_valid_invalid () =\n  with_temp_dir @@ fun root ->\n  let session = Session.start ~root ~experiment:\"exp\" () in\n  Session.log_metric session ~step:1 \"x\" 1.0;\n  let run = Session.run session in\n  let oc =\n    open_out_gen\n      [ Open_append; Open_creat ]\n      0o644\n      (Filename.concat (Run.dir run) \"events.jsonl\")\n  in\n  Fun.protect\n    ~finally:(fun () -> close_out oc)\n    (fun () ->\n      output_string oc \"garbage\\n\";\n      output_string oc \"{\\\"bad\\\":true}\\n\");\n  Session.log_metric session ~step:2 \"x\" 2.0;\n  let run = Session.run session in\n  equal ~msg:\"only valid metrics\" int 2\n    (List.length (Run.metric_history run \"x\"));\n  Session.finish session ()\n\nlet robustness =\n  [\n    test \"partial log\" test_partial_log;\n    test \"malformed line\" test_malformed_line;\n    test \"empty event log\" test_empty_event_log;\n    test \"mixed valid and invalid\" test_mixed_valid_invalid;\n  ]\n\n(* Auto-computed summaries *)\n\nlet summary_float run key =\n  match Run.find_summary run key with\n  | Some v -> Value.to_float v\n  | None -> None\n\nlet test_auto_summary_min () =\n  with_temp_dir @@ fun root ->\n  let session = Session.start ~root ~experiment:\"exp\" () in\n  Session.define_metric session \"loss\" ~summary:`Min ();\n  Session.log_metric session ~step:1 \"loss\" 1.0;\n  Session.log_metric session ~step:2 \"loss\" 0.3;\n  Session.log_metric session ~step:3 \"loss\" 0.7;\n  Session.finish session ();\n  let run = Session.run session in\n  equal ~msg:\"min summary\"\n    (option (float 0.0))\n    (Some 0.3) (summary_float run 
\"loss\")\n\nlet test_auto_summary_max () =\n  with_temp_dir @@ fun root ->\n  let session = Session.start ~root ~experiment:\"exp\" () in\n  Session.define_metric session \"acc\" ~summary:`Max ();\n  Session.log_metric session ~step:1 \"acc\" 0.5;\n  Session.log_metric session ~step:2 \"acc\" 0.9;\n  Session.log_metric session ~step:3 \"acc\" 0.7;\n  Session.finish session ();\n  let run = Session.run session in\n  equal ~msg:\"max summary\"\n    (option (float 0.0))\n    (Some 0.9) (summary_float run \"acc\")\n\nlet test_auto_summary_mean () =\n  with_temp_dir @@ fun root ->\n  let session = Session.start ~root ~experiment:\"exp\" () in\n  Session.define_metric session \"x\" ~summary:`Mean ();\n  Session.log_metric session ~step:1 \"x\" 1.0;\n  Session.log_metric session ~step:2 \"x\" 2.0;\n  Session.log_metric session ~step:3 \"x\" 3.0;\n  Session.finish session ();\n  let run = Session.run session in\n  equal ~msg:\"mean summary\"\n    (option (float 0.0))\n    (Some 2.0) (summary_float run \"x\")\n\nlet test_auto_summary_last () =\n  with_temp_dir @@ fun root ->\n  let session = Session.start ~root ~experiment:\"exp\" () in\n  Session.define_metric session \"x\" ~summary:`Last ();\n  Session.log_metric session ~step:1 \"x\" 1.0;\n  Session.log_metric session ~step:2 \"x\" 2.0;\n  Session.log_metric session ~step:3 \"x\" 3.0;\n  Session.finish session ();\n  let run = Session.run session in\n  equal ~msg:\"last summary\"\n    (option (float 0.0))\n    (Some 3.0) (summary_float run \"x\")\n\nlet test_auto_summary_none () =\n  with_temp_dir @@ fun root ->\n  let session = Session.start ~root ~experiment:\"exp\" () in\n  Session.define_metric session \"x\" ~summary:`None ();\n  Session.log_metric session ~step:1 \"x\" 1.0;\n  Session.finish session ();\n  let run = Session.run session in\n  is_none ~msg:\"no auto-summary\" (Run.find_summary run \"x\")\n\nlet test_explicit_summary_wins () =\n  with_temp_dir @@ fun root ->\n  let session = Session.start ~root 
~experiment:\"exp\" () in\n  Session.define_metric session \"loss\" ~summary:`Min ();\n  Session.log_metric session ~step:1 \"loss\" 1.0;\n  Session.log_metric session ~step:2 \"loss\" 0.3;\n  Session.set_summary session [ (\"loss\", `Float 999.0) ];\n  Session.finish session ();\n  let run = Session.run session in\n  equal ~msg:\"explicit wins\"\n    (option (float 0.0))\n    (Some 999.0) (summary_float run \"loss\")\n\nlet auto_summaries =\n  [\n    test \"auto-summary min\" test_auto_summary_min;\n    test \"auto-summary max\" test_auto_summary_max;\n    test \"auto-summary mean\" test_auto_summary_mean;\n    test \"auto-summary last\" test_auto_summary_last;\n    test \"auto-summary none\" test_auto_summary_none;\n    test \"explicit summary wins\" test_explicit_summary_wins;\n  ]\n\n(* Grouping *)\n\nlet test_group_round_trip () =\n  with_temp_dir @@ fun root ->\n  let session = Session.start ~root ~experiment:\"exp\" ~group:\"sweep-lr\" () in\n  Session.finish session ();\n  let run = Session.run session in\n  equal ~msg:\"group\" (option string) (Some \"sweep-lr\") (Run.group run)\n\nlet test_group_none_by_default () =\n  with_temp_dir @@ fun root ->\n  let session = Session.start ~root ~experiment:\"exp\" () in\n  Session.finish session ();\n  equal ~msg:\"no group\" (option string) None (Run.group (Session.run session))\n\nlet test_run_list_filter_group () =\n  with_temp_dir @@ fun root ->\n  let s1 = Session.start ~root ~experiment:\"exp\" ~group:\"a\" () in\n  Session.finish s1 ();\n  let s2 = Session.start ~root ~experiment:\"exp\" ~group:\"b\" () in\n  Session.finish s2 ();\n  let s3 = Session.start ~root ~experiment:\"exp\" () in\n  Session.finish s3 ();\n  equal ~msg:\"group a\" int 1\n    (List.length (Run.list ~root ~experiment:\"exp\" ~group:\"a\" ()));\n  equal ~msg:\"group b\" int 1\n    (List.length (Run.list ~root ~experiment:\"exp\" ~group:\"b\" ()));\n  equal ~msg:\"all\" int 3 (List.length (Run.list ~root ~experiment:\"exp\" ()))\n\nlet 
test_store_list_runs_group () =\n  with_temp_dir @@ fun root ->\n  let s1 = Session.start ~root ~experiment:\"exp\" ~group:\"sweep\" () in\n  Session.finish s1 ();\n  let s2 = Session.start ~root ~experiment:\"exp\" () in\n  Session.finish s2 ();\n  let store = Store.open_ ~root () in\n  equal ~msg:\"sweep only\" int 1\n    (List.length (Store.list_runs store ~group:\"sweep\" ()))\n\nlet test_store_latest_run_group () =\n  with_temp_dir @@ fun root ->\n  let s1 = Session.start ~root ~experiment:\"exp\" ~group:\"a\" () in\n  Session.finish s1 ();\n  let s2 = Session.start ~root ~experiment:\"exp\" ~group:\"b\" () in\n  Session.finish s2 ();\n  let store = Store.open_ ~root () in\n  let latest = Store.latest_run store ~group:\"a\" () in\n  is_true ~msg:\"latest in group a\"\n    (match latest with Some r -> Run.group r = Some \"a\" | None -> false)\n\nlet grouping =\n  [\n    test \"group round-trip\" test_group_round_trip;\n    test \"group none by default\" test_group_none_by_default;\n    test \"run list filter group\" test_run_list_filter_group;\n    test \"store list_runs group\" test_store_list_runs_group;\n    test \"store latest_run group\" test_store_latest_run_group;\n  ]\n\n(* Media *)\n\nlet test_log_media_copies_file () =\n  with_temp_dir @@ fun root ->\n  let src = make_file root \"pred.png\" \"image data\" in\n  let session = Session.start ~root ~experiment:\"exp\" () in\n  Session.log_media session ~step:1 ~key:\"predictions\" ~kind:`Image ~path:src;\n  Session.finish session ();\n  let run = Session.run session in\n  let entries = Run.media_history run \"predictions\" in\n  equal ~msg:\"one entry\" int 1 (List.length entries);\n  let entry = List.hd entries in\n  is_true ~msg:\"file exists\" (Sys.file_exists entry.path);\n  is_true ~msg:\"kind is image\" (entry.kind = `Image);\n  equal ~msg:\"step\" int 1 entry.step\n\nlet test_log_media_nested_key () =\n  with_temp_dir @@ fun root ->\n  let src = make_file root \"img.png\" \"pixels\" in\n  let 
session = Session.start ~root ~experiment:\"exp\" () in\n  Session.log_media session ~step:5 ~key:\"train/predictions\" ~kind:`Image\n    ~path:src;\n  Session.finish session ();\n  let run = Session.run session in\n  let entries = Run.media_history run \"train/predictions\" in\n  equal ~msg:\"one entry\" int 1 (List.length entries);\n  let entry = List.hd entries in\n  is_true ~msg:\"file in subdir\"\n    (let parts = String.split_on_char '/' entry.path in\n     List.exists (String.equal \"train\") parts)\n\nlet test_log_media_multiple_steps () =\n  with_temp_dir @@ fun root ->\n  let src = make_file root \"img.png\" \"pixels\" in\n  let session = Session.start ~root ~experiment:\"exp\" () in\n  Session.log_media session ~step:1 ~key:\"pred\" ~kind:`Image ~path:src;\n  Session.log_media session ~step:5 ~key:\"pred\" ~kind:`Image ~path:src;\n  Session.log_media session ~step:10 ~key:\"pred\" ~kind:`Image ~path:src;\n  Session.finish session ();\n  let run = Session.run session in\n  let entries = Run.media_history run \"pred\" in\n  equal ~msg:\"three entries\" int 3 (List.length entries);\n  let steps = List.map (fun (e : Run.media_entry) -> e.step) entries in\n  equal ~msg:\"chronological\" (list int) [ 1; 5; 10 ] steps\n\nlet test_log_media_missing_path_raises () =\n  with_temp_dir @@ fun root ->\n  let session = Session.start ~root ~experiment:\"exp\" () in\n  raises_invalid_arg\n    \"Munin.Session.log_media: path does not exist: /no/such/file\" (fun () ->\n      Session.log_media session ~step:1 ~key:\"x\" ~kind:`File\n        ~path:\"/no/such/file\");\n  Session.finish session ()\n\nlet test_log_media_closed_ignored () =\n  with_temp_dir @@ fun root ->\n  let src = make_file root \"img.png\" \"pixels\" in\n  let session = Session.start ~root ~experiment:\"exp\" () in\n  Session.finish session ();\n  Session.log_media session ~step:1 ~key:\"pred\" ~kind:`Image ~path:src;\n  let run = Session.run session in\n  equal ~msg:\"no media\" int 0 (List.length 
(Run.media_keys run))\n\nlet test_log_table () =\n  with_temp_dir @@ fun root ->\n  let session = Session.start ~root ~experiment:\"exp\" () in\n  Session.log_table session ~step:1 ~key:\"confusion\" ~columns:[ \"cat\"; \"dog\" ]\n    ~rows:[ [ `Int 90; `Int 10 ]; [ `Int 5; `Int 95 ] ];\n  Session.finish session ();\n  let run = Session.run session in\n  let entries = Run.media_history run \"confusion\" in\n  equal ~msg:\"one entry\" int 1 (List.length entries);\n  let entry = List.hd entries in\n  is_true ~msg:\"kind is table\" (entry.kind = `Table);\n  is_true ~msg:\"json file exists\" (Sys.file_exists entry.path)\n\nlet test_media_keys_sorted () =\n  with_temp_dir @@ fun root ->\n  let src = make_file root \"f.bin\" \"data\" in\n  let session = Session.start ~root ~experiment:\"exp\" () in\n  Session.log_media session ~step:1 ~key:\"z/output\" ~kind:`File ~path:src;\n  Session.log_media session ~step:1 ~key:\"a/input\" ~kind:`File ~path:src;\n  Session.log_media session ~step:1 ~key:\"m/middle\" ~kind:`File ~path:src;\n  Session.finish session ();\n  let run = Session.run session in\n  equal ~msg:\"sorted\" (list string)\n    [ \"a/input\"; \"m/middle\"; \"z/output\" ]\n    (Run.media_keys run)\n\nlet test_media_empty_history () =\n  with_temp_dir @@ fun root ->\n  let session = Session.start ~root ~experiment:\"exp\" () in\n  Session.finish session ();\n  let run = Session.run session in\n  equal ~msg:\"no media keys\" (list string) [] (Run.media_keys run);\n  equal ~msg:\"empty history\" int 0\n    (List.length (Run.media_history run \"nonexistent\"))\n\nlet media =\n  [\n    test \"log_media copies file\" test_log_media_copies_file;\n    test \"log_media nested key\" test_log_media_nested_key;\n    test \"log_media multiple steps\" test_log_media_multiple_steps;\n    test \"log_media missing path raises\" test_log_media_missing_path_raises;\n    test \"log_media closed ignored\" test_log_media_closed_ignored;\n    test \"log_table\" test_log_table;\n    test 
\"media_keys sorted\" test_media_keys_sorted;\n    test \"media empty history\" test_media_empty_history;\n  ]\n\n(* System monitor *)\n\nlet test_system_monitor_logs_metrics () =\n  with_temp_dir @@ fun root ->\n  let session = Session.start ~root ~experiment:\"exp\" () in\n  let monitor = Munin_sys.start session ~interval:0.1 () in\n  Thread.delay 0.35;\n  Munin_sys.stop monitor;\n  Session.finish session ();\n  let run = Session.run session in\n  let keys = Run.metric_keys run in\n  is_true ~msg:\"has sys/cpu_user\" (List.mem \"sys/cpu_user\" keys);\n  is_true ~msg:\"has sys/mem_used_pct\" (List.mem \"sys/mem_used_pct\" keys);\n  is_true ~msg:\"has sys/proc_mem_mb\" (List.mem \"sys/proc_mem_mb\" keys)\n\nlet test_system_monitor_defines_metrics () =\n  with_temp_dir @@ fun root ->\n  let session = Session.start ~root ~experiment:\"exp\" () in\n  let monitor = Munin_sys.start session ~interval:100.0 () in\n  Munin_sys.stop monitor;\n  Session.finish session ();\n  let run = Session.run session in\n  let defs = Run.metric_defs run in\n  let has_def key =\n    match List.assoc_opt key defs with\n    | Some d -> d.summary = `Last\n    | None -> false\n  in\n  is_true ~msg:\"cpu_user def\" (has_def \"sys/cpu_user\");\n  is_true ~msg:\"mem_used_pct def\" (has_def \"sys/mem_used_pct\");\n  is_true ~msg:\"proc_mem_mb def\" (has_def \"sys/proc_mem_mb\")\n\nlet test_system_monitor_stop_idempotent () =\n  with_temp_dir @@ fun root ->\n  let session = Session.start ~root ~experiment:\"exp\" () in\n  let monitor = Munin_sys.start session ~interval:100.0 () in\n  Munin_sys.stop monitor;\n  Munin_sys.stop monitor;\n  Session.finish session ()\n\nlet system_monitor_tests =\n  [\n    test \"logs metrics\" test_system_monitor_logs_metrics;\n    test \"defines metrics\" test_system_monitor_defines_metrics;\n    test \"stop idempotent\" test_system_monitor_stop_idempotent;\n  ]\n\n(* Suite *)\n\nlet suite =\n  [\n    group \"Session lifecycle\" lifecycle;\n    group \"Scalars\" 
scalars;\n    group \"Metric definitions\" metric_definitions;\n    group \"Metadata\" metadata;\n    group \"Provenance\" provenance_tests;\n    group \"Run loading\" run_loading;\n    group \"Artifacts\" artifacts;\n    group \"Store\" store_tests;\n    group \"Run monitor\" run_monitor;\n    group \"Robustness\" robustness;\n    group \"Auto-computed summaries\" auto_summaries;\n    group \"Grouping\" grouping;\n    group \"Media\" media;\n    group \"System monitor\" system_monitor_tests;\n  ]\n\nlet () = run \"Munin\" suite\n"
  },
  {
    "path": "packages/norn/README.md",
    "content": "# Norn\n\nMCMC sampling with automatic gradients for OCaml, powered by [Rune](../rune/)\n\nNorn provides Hamiltonian Monte Carlo and NUTS samplers that leverage Rune's\nautomatic differentiation. You supply an unnormalized log-density function and\nan initial position; Norn handles gradient computation, trajectory integration,\nand Stan-style window adaptation of step size and mass matrix. Common workflows\nare one-line calls, while the kernel API gives full control over the sampling\npipeline.\n\n## Quick Start\n\nSample from a 2D Gaussian with NUTS:\n\n```ocaml\nopen Nx\n\nlet () =\n  Rng.run ~seed:42 @@ fun () ->\n  let f64 = Nx.float64 in\n\n  (* Target: N([3; -1], [[1, 0.8]; [0.8, 1]]) *)\n  let mu = Nx.create f64 [| 2 |] [| 3.0; -1.0 |] in\n  let prec = Nx.create f64 [| 2; 2 |] [| 5.0; -4.0; -4.0; 5.0 |] in\n  let log_prob x =\n    let d = Nx.sub x mu in\n    let d_col = Nx.reshape [| 2; 1 |] d in\n    Nx.mul_s (Nx.squeeze (Nx.matmul (Nx.matrix_transpose d_col) (Nx.matmul prec d_col))) (-0.5)\n  in\n\n  let init = Nx.zeros f64 [| 2 |] in\n  let result = Norn.nuts ~n:1000 log_prob init in\n\n  let mean = Nx.mean ~axes:[ 0 ] result.samples in\n  Printf.printf \"posterior mean: %s\\n\" (Nx.data_to_string mean);\n  Printf.printf \"accept rate:   %.2f\\n\" result.stats.accept_rate;\n  Printf.printf \"ESS:           %s\\n\" (Nx.data_to_string (Norn.ess result.samples))\n```\n\n## Features\n\n- **One-line sampling**: `Norn.hmc` and `Norn.nuts` for common workflows\n- **Configurable API**: `Norn.sample` with custom kernels via `make_kernel`\n- **Automatic gradients**: log-density gradients computed by Rune -- no manual derivatives\n- **Symplectic integrators**: `leapfrog`, `mclachlan`, `yoshida`\n- **Mass matrix metrics**: `unit_metric`, `diagonal_metric`, `dense_metric`\n- **Stan-style adaptation**: dual averaging for step size, Welford estimation for mass matrix\n- **Diagnostics**: effective sample size (`ess`) and split R-hat (`rhat`)\n\n## 
Examples\n\n- **01-sampling-basics** -- Sample from a correlated 2D Gaussian with NUTS\n- **02-bayesian-regression** -- Bayesian linear regression with credible intervals\n- **03-diagnostics** -- Multi-chain convergence checking with ESS and R-hat\n\n## Contributing\n\nSee the [Raven monorepo README](../../README.md) for guidelines.\n\n## License\n\nISC License. See [LICENSE](../../LICENSE) for details.\n"
  },
  {
    "path": "packages/norn/bench/bench_norn.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nlet f64 = Nx.float64\n\nlet make_log_prob dim =\n  let mean = Nx.zeros f64 [| dim |] in\n  fun x ->\n    let d = Nx.sub x mean in\n    Nx.mul_s (Nx.sum (Nx.mul d d)) (-0.5)\n\nlet hmc_benches () =\n  let cases = [ (\"2D_3lf\", 2, 3); (\"5D_3lf\", 5, 3) ] in\n  List.map\n    (fun (label, dim, num_leapfrog) ->\n      let log_prob = make_log_prob dim in\n      let metric = Norn.unit_metric dim in\n      let kernel = Norn.hmc_kernel ~num_leapfrog ~step_size:0.1 ~metric () in\n      let init = Nx.zeros f64 [| dim |] in\n      let state = ref (kernel.init init log_prob) in\n      Thumper.bench (Printf.sprintf \"HMC/%s\" label) (fun () ->\n          let new_state, info = kernel.step !state log_prob in\n          state := new_state;\n          info.Norn.acceptance_rate))\n    cases\n\nlet nuts_benches () =\n  let cases = [ (\"2D_d3\", 2, 3); (\"5D_d3\", 5, 3) ] in\n  List.map\n    (fun (label, dim, max_depth) ->\n      let log_prob = make_log_prob dim in\n      let metric = Norn.unit_metric dim in\n      let kernel = Norn.nuts_kernel ~max_depth ~step_size:0.1 ~metric () in\n      let init = Nx.zeros f64 [| dim |] in\n      let state = ref (kernel.init init log_prob) in\n      Thumper.bench (Printf.sprintf \"NUTS/%s\" label) (fun () ->\n          let new_state, info = kernel.step !state log_prob in\n          state := new_state;\n          info.Norn.acceptance_rate))\n    cases\n\nlet ess_benches () =\n  let cases = [ (\"2D_n100\", 2, 100); (\"5D_n100\", 5, 100) ] in\n  List.map\n    (fun (label, dim, n) ->\n      let samples = Nx.randn f64 [| n; dim |] in\n      Thumper.bench (Printf.sprintf \"ESS/%s\" label) (fun () -> Norn.ess samples))\n    cases\n\nlet rhat_benches () =\n  let cases = [ (\"2D_n100\", 
2, 100); (\"5D_n100\", 5, 100) ] in\n  List.map\n    (fun (label, dim, n) ->\n      let chains = Array.init 4 (fun _ -> Nx.randn f64 [| n; dim |]) in\n      Thumper.bench (Printf.sprintf \"Rhat/%s\" label) (fun () ->\n          Norn.rhat chains))\n    cases\n\nlet () =\n  Nx.Rng.run ~seed:42 (fun () ->\n      Thumper.run \"norn\"\n        [\n          Thumper.group \"HMC\" (hmc_benches ());\n          Thumper.group \"NUTS\" (nuts_benches ());\n          Thumper.group \"ESS\" (ess_benches ());\n          Thumper.group \"Rhat\" (rhat_benches ());\n        ])\n"
  },
  {
    "path": "packages/norn/bench/dune",
    "content": "(executable\n (name bench_norn)\n (libraries nx rune norn thumper))\n\n(rule\n (alias runtest)\n (action\n  (progn\n   (run %{exe:bench_norn.exe} -q)\n   (diff? norn.thumper norn.thumper.corrected))))\n"
  },
  {
    "path": "packages/norn/bench/norn.thumper",
    "content": "# thumper baseline\n# version: 1\n# suite_name: norn\n# host: 1480401c3b76ed18\n# cpu: Apple M1 Max\n# ocaml: 5.4.1\n# git: 31747323\n# dirty: true\n# command: /Users/tmattio/Workspace/raven/_build/default/packages/norn/bench/bench_norn.exe --bless --quick\n\ness/ess_2d_n100\talloc_words\t7.352397e+06\t7.352397e+06\t7.352397e+06\t0.000000e+00\t5\t1\ness/ess_2d_n100\tcpu_time\t1.559885e-02\t1.557237e-02\t1.567803e-02\t3.386704e-03\t5\t0\ness/ess_2d_n100\twall_time\t1.560669e-02\t1.557799e-02\t1.567933e-02\t3.246829e-03\t5\t0\ness/ess_5d_n100\talloc_words\t1.837855e+07\t1.837855e+07\t1.837855e+07\t0.000000e+00\t5\t1\ness/ess_5d_n100\tcpu_time\t4.013211e-02\t3.996071e-02\t4.034314e-02\t4.764651e-03\t5\t0\ness/ess_5d_n100\twall_time\t4.025803e-02\t4.005148e-02\t4.049841e-02\t5.550847e-03\t5\t0\nhmc/hmc_2d_3lf\talloc_words\t2.538700e+04\t2.538700e+04\t2.538700e+04\t0.000000e+00\t5\t0\nhmc/hmc_2d_3lf\tcpu_time\t1.123211e-04\t1.114248e-04\t1.134997e-04\t9.236276e-03\t5\t1\nhmc/hmc_2d_3lf\twall_time\t1.124071e-04\t1.114420e-04\t1.136754e-04\t9.934176e-03\t5\t1\nhmc/hmc_5d_3lf\talloc_words\t2.548300e+04\t2.548200e+04\t2.548300e+04\t1.962092e-05\t5\t0\nhmc/hmc_5d_3lf\tcpu_time\t1.165771e-04\t1.158171e-04\t1.172042e-04\t5.949159e-03\t5\t0\nhmc/hmc_5d_3lf\twall_time\t1.167592e-04\t1.160356e-04\t1.174092e-04\t5.882306e-03\t5\t0\nnuts/nuts_2d_d3\talloc_words\t8.002300e+04\t5.842900e+04\t8.409700e+04\t1.603789e-01\t5\t0\nnuts/nuts_2d_d3\tcpu_time\t3.536361e-04\t3.497111e-04\t3.602631e-04\t1.491926e-02\t5\t0\nnuts/nuts_2d_d3\twall_time\t3.537679e-04\t3.499804e-04\t3.604565e-04\t1.480642e-02\t5\t0\nnuts/nuts_5d_d3\talloc_words\t8.291500e+04\t8.290400e+04\t8.291900e+04\t9.045408e-05\t5\t1\nnuts/nuts_5d_d3\tcpu_time\t3.620090e-04\t3.591001e-04\t3.636080e-04\t6.226318e-03\t5\t0\nnuts/nuts_5d_d3\twall_time\t3.621874e-04\t3.585911e-04\t3.638656e-04\t7.281394e-03\t5\t0\nrhat/rhat_2d_n100\talloc_words\t3.096500e+04\t3.096500e+04\t3.096500e+04\t0.000000e+00\t5\t0\nrhat/rhat_
2d_n100\tcpu_time\t1.348335e-04\t1.338291e-04\t1.359049e-04\t7.697749e-03\t5\t1\nrhat/rhat_2d_n100\twall_time\t1.351109e-04\t1.339526e-04\t1.364586e-04\t9.274007e-03\t5\t1\nrhat/rhat_5d_n100\talloc_words\t3.096500e+04\t3.096500e+04\t3.096500e+04\t0.000000e+00\t5\t0\nrhat/rhat_5d_n100\tcpu_time\t1.453519e-04\t1.445829e-04\t1.461137e-04\t5.265860e-03\t5\t0\nrhat/rhat_5d_n100\twall_time\t1.457385e-04\t1.450074e-04\t1.464628e-04\t4.993161e-03\t5\t0\n"
  },
  {
    "path": "packages/norn/doc/01-getting-started.md",
    "content": "# Getting Started\n\nThis guide shows you how to sample from a target distribution using Norn's\nMCMC samplers.\n\n## Installation\n\n<!-- $MDX skip -->\n```bash\nopam install norn\n```\n\nOr build from source:\n\n<!-- $MDX skip -->\n```bash\ngit clone https://github.com/raven-ml/raven\ncd raven && dune build norn\n```\n\nAdd to your `dune` file:\n\n<!-- $MDX skip -->\n```dune\n(executable\n (name main)\n (libraries norn nx rune))\n```\n\n## Your First Sampler\n\nNorn samplers take three things: a sample count, an unnormalized log-density\nfunction, and an initial position. Here we sample from a 2D standard Gaussian\nusing NUTS:\n\n<!-- $MDX skip -->\n```ocaml\nopen Nx\n\nlet () =\n  Rng.run ~seed:42 @@ fun () ->\n  let f64 = Nx.float64 in\n\n  (* log p(x) = -0.5 * ||x||^2  (standard Gaussian) *)\n  let log_prob x = Nx.mul_s (Nx.sum (Nx.square x)) (-0.5) in\n\n  let init = Nx.zeros f64 [| 2 |] in\n  let result = Norn.nuts ~n:1000 log_prob init in\n\n  Printf.printf \"samples shape: %s\\n\"\n    (String.concat \"x\" (List.map string_of_int\n       (Array.to_list (Nx.shape result.samples))));\n  Printf.printf \"accept rate:   %.3f\\n\" result.stats.accept_rate;\n  Printf.printf \"divergences:   %d\\n\" result.stats.num_divergent\n```\n\nKey points:\n- `log_prob` returns a scalar `Nx.float64_t` (not a float) -- Rune differentiates it automatically\n- `init` is the starting position, shape `[dim]`\n- `result.samples` has shape `[n; dim]` -- one row per sample\n- NUTS adapts trajectory length automatically via U-turn detection\n\n## Understanding the Result\n\n`Norn.nuts` and `Norn.hmc` return a `result` record:\n\n<!-- $MDX skip -->\n```ocaml\ntype result = {\n  samples : Nx.float64_t;       (* shape [n; dim] *)\n  log_densities : Nx.float64_t; (* shape [n] *)\n  stats : stats;\n}\n\ntype stats = {\n  accept_rate : float;    (* mean acceptance rate during sampling *)\n  step_size : float;      (* final adapted step size *)\n  num_divergent : int;    (* 
number of divergent transitions *)\n}\n```\n\nCompute posterior summaries from `result.samples`:\n\n<!-- $MDX skip -->\n```ocaml\nlet mean = Nx.mean ~axes:[ 0 ] result.samples in\nlet std = Nx.std ~axes:[ 0 ] result.samples in\nPrintf.printf \"mean: %s\\n\" (Nx.data_to_string mean);\nPrintf.printf \"std:  %s\\n\" (Nx.data_to_string std)\n```\n\n## Using HMC\n\nHMC requires a fixed number of leapfrog steps per transition. It is simpler\nthan NUTS but requires tuning `num_leapfrog`:\n\n<!-- $MDX skip -->\n```ocaml\nlet result =\n  Norn.hmc ~n:1000 ~num_leapfrog:30 log_prob init\n```\n\nDefault values: `step_size = 0.01`, `target_accept = 0.65`, `num_leapfrog = 20`.\nStep size and mass matrix are adapted during warmup regardless of the sampler.\n\n## The Kernel API\n\nFor more control, use `Norn.sample` with a kernel constructor. The `make_kernel`\nfunction receives adapted step size and metric at each warmup step:\n\n<!-- $MDX skip -->\n```ocaml\nlet result =\n  Norn.sample ~n:1000 log_prob init (fun ~step_size ~metric ->\n      Norn.nuts_kernel ~step_size ~metric ())\n```\n\nThis is equivalent to `Norn.nuts ~n:1000 log_prob init`, but you can customize\nthe kernel:\n\n<!-- $MDX skip -->\n```ocaml\nlet result =\n  Norn.sample ~n:1000 log_prob init (fun ~step_size ~metric ->\n      Norn.nuts_kernel ~integrator:Norn.mclachlan ~max_depth:8\n        ~step_size ~metric ())\n```\n\nThe `make_kernel` signature is `step_size:float -> metric:metric -> kernel`.\nDuring warmup, `sample` calls `make_kernel` each step with the latest adapted\nvalues. 
After warmup, it freezes the final step size and metric for all\nsampling steps.\n\n## HMC vs NUTS\n\n| Aspect | HMC | NUTS |\n|--------|-----|------|\n| Trajectory length | Fixed (`num_leapfrog` steps) | Automatic (U-turn detection) |\n| Tuning parameters | `step_size`, `num_leapfrog` | `step_size`, `max_depth` |\n| Default target accept | 0.65 | 0.80 |\n| Gradient evaluations | `num_leapfrog` per step | Variable, up to `2^max_depth` |\n| Best for | Simple, well-conditioned posteriors | General use |\n\nNUTS is the recommended default. Use HMC when you know the optimal trajectory\nlength or need predictable cost per step.\n\n## Next Steps\n\n- [Adaptation and Diagnostics](../02-adaptation-and-diagnostics/) -- warmup windows, ESS, R-hat\n- [Advanced Usage](../03-advanced-usage/) -- custom integrators, metrics, and monitoring\n- [PyMC Comparison](../04-pymc-comparison/) -- mapping from Python's PyMC/BlackJAX to Norn\n"
  },
  {
    "path": "packages/norn/doc/02-adaptation-and-diagnostics.md",
    "content": "# Adaptation and Diagnostics\n\nNorn uses Stan-style window adaptation during warmup to tune step size and\nmass matrix automatically. After sampling, diagnostics like ESS and R-hat\nhelp you assess whether the chain has converged.\n\n## Window Adaptation\n\nWhen you call `Norn.nuts`, `Norn.hmc`, or `Norn.sample`, the first\n`num_warmup` iterations (default `n / 2`) are discarded as warmup. During\nwarmup, Norn adapts two quantities:\n\n1. **Step size** -- via dual averaging (Nesterov 2009)\n2. **Mass matrix** -- via regularized Welford variance estimation\n\nWarmup is divided into three phases following Stan's scheme:\n\n| Phase | What adapts | Description |\n|-------|-------------|-------------|\n| Initial fast | Step size only | Short burn-in to find a reasonable step size |\n| Slow windows | Step size + mass matrix | Doubling windows that collect samples for Welford estimation. At the end of each window, the mass matrix is updated and step size is reset |\n| Final fast | Step size only | Short phase to re-tune step size for the final metric |\n\nThe slow windows double in length (e.g., 25, 50, 100, ...) so more samples\ncontribute to later mass matrix estimates, which are more reliable as the\nchain moves closer to the typical set.\n\n## Step Size Adaptation\n\nStep size is tuned via dual averaging to reach a target acceptance rate.\nThe default targets are:\n\n- HMC: `target_accept = 0.65`\n- NUTS: `target_accept = 0.80`\n\nYou can override the target:\n\n<!-- $MDX skip -->\n```ocaml\nlet result = Norn.nuts ~n:1000 ~target_accept:0.90 log_prob init\n```\n\nHigher target acceptance rates produce smaller step sizes, which give more\naccurate trajectories at the cost of more computation. 
Values between 0.6\nand 0.9 work well for most problems.\n\nYou can also set the initial step size:\n\n<!-- $MDX skip -->\n```ocaml\nlet result = Norn.nuts ~n:1000 ~step_size:0.1 log_prob init\n```\n\nThe final adapted step size is available in `result.stats.step_size`.\n\n## Mass Matrix Adaptation\n\nThe mass matrix (inverse metric) controls the shape of the momentum\ndistribution. A well-chosen mass matrix makes the sampler's kinetic energy\nmatch the target's geometry, improving mixing.\n\nDuring slow windows, Norn collects position samples and estimates the\ninverse mass matrix as a diagonal covariance using the Welford online\nalgorithm with shrinkage regularization. At the end of each slow window:\n\n1. The diagonal inverse mass matrix is computed from accumulated statistics\n2. A `diagonal_metric` is constructed from the estimate\n3. The Welford accumulator is reset for the next window\n4. Step size dual averaging is reset to re-tune for the new metric\n\nAfter warmup, the metric is frozen and used for all sampling iterations.\n\n## Controlling Warmup\n\nSet `num_warmup` explicitly when the default (`n / 2`) is too few or too many:\n\n<!-- $MDX skip -->\n```ocaml\n(* More warmup for a difficult posterior *)\nlet result = Norn.nuts ~n:1000 ~num_warmup:2000 log_prob init\n\n(* Less warmup when the posterior is simple *)\nlet result = Norn.nuts ~n:1000 ~num_warmup:100 log_prob init\n```\n\n## Effective Sample Size\n\nAutocorrelated MCMC samples contain less information than independent samples.\nThe effective sample size (ESS) estimates how many independent samples the\nchain is worth:\n\n<!-- $MDX skip -->\n```ocaml\nlet result = Norn.nuts ~n:2000 log_prob init in\nlet n_eff = Norn.ess result.samples in\nPrintf.printf \"ESS: %s\\n\" (Nx.data_to_string n_eff)\n```\n\n`ess` takes a matrix of shape `[n; dim]` and returns a vector of shape `[dim]`\nwith the ESS for each parameter. 
It uses autocorrelation with the initial\nmonotone sequence estimator.\n\nRules of thumb:\n- ESS > 100 per parameter is often sufficient for posterior means\n- ESS > 400 is preferred for tail quantiles\n- Low ESS relative to `n` suggests poor mixing -- consider reparameterization\n  or a different metric\n\n## Split R-hat\n\nR-hat measures convergence by comparing within-chain and between-chain variance.\nIt requires multiple chains:\n\n<!-- $MDX skip -->\n```ocaml\nopen Nx\n\nlet () =\n  Rng.run ~seed:42 @@ fun () ->\n  let f64 = Nx.float64 in\n  let log_prob x = Nx.mul_s (Nx.sum (Nx.square x)) (-0.5) in\n\n  (* Run 4 chains from different starting points *)\n  let chains =\n    Array.init 4 (fun i ->\n        let init = Nx.mul_s (Nx.ones f64 [| 3 |]) (Float.of_int (i - 2)) in\n        let result = Norn.nuts ~n:500 log_prob init in\n        result.samples)\n  in\n\n  let r = Norn.rhat chains in\n  Printf.printf \"R-hat: %s\\n\" (Nx.data_to_string r)\n```\n\n`rhat` takes an array of chains, each of shape `[n; dim]`, and returns\nshape `[dim]`. 
It uses the split R-hat variant (each chain is split in half\nbefore comparison).\n\nInterpretation:\n- R-hat close to 1.0 indicates convergence\n- R-hat > 1.01 suggests the chains have not mixed\n- R-hat > 1.1 is a strong signal of non-convergence\n\n## Checking Convergence\n\nA practical convergence check combines multiple diagnostics:\n\n<!-- $MDX skip -->\n```ocaml\nlet check_convergence (results : Norn.result array) =\n  let chains = Array.map (fun r -> r.Norn.samples) results in\n  let r = Norn.rhat chains in\n\n  (* Check R-hat for all parameters *)\n  let max_rhat = Nx.item [] (Nx.max r) in\n  if max_rhat > 1.01 then\n    Printf.printf \"WARNING: max R-hat = %.3f (chains have not converged)\\n\"\n      max_rhat;\n\n  (* Check ESS for each chain *)\n  Array.iteri\n    (fun i result ->\n      let n_eff = Norn.ess result.Norn.samples in\n      let min_ess = Nx.item [] (Nx.min n_eff) in\n      Printf.printf \"chain %d: min ESS = %.0f, divergences = %d\\n\" i min_ess\n        result.stats.num_divergent)\n    results;\n\n  (* Check divergences *)\n  let total_div =\n    Array.fold_left (fun acc r -> acc + r.Norn.stats.num_divergent) 0 results\n  in\n  if total_div > 0 then\n    Printf.printf \"WARNING: %d divergent transitions (consider reparameterization)\\n\"\n      total_div\n```\n\n## Next Steps\n\n- [Advanced Usage](../03-advanced-usage/) -- custom integrators, metrics, and monitoring\n- [Getting Started](../01-getting-started/) -- basic usage and the kernel API\n- [PyMC Comparison](../04-pymc-comparison/) -- mapping from Python's PyMC/BlackJAX to Norn\n"
  },
  {
    "path": "packages/norn/doc/03-advanced-usage.md",
    "content": "# Advanced Usage\n\nThis guide covers custom integrators, metrics, kernel composition via\n`Norn.sample`, and monitoring sampling progress.\n\n## Integrators\n\nThe integrator controls how Hamiltonian dynamics are approximated. Norn\nprovides three symplectic integrators:\n\n| Integrator | Order | Grad evals/step | Best for |\n|-----------|-------|-----------------|----------|\n| `leapfrog` | 2nd | 1 | General use (default) |\n| `mclachlan` | 2nd | 2 | Higher acceptance on stiff problems |\n| `yoshida` | 4th | 3 | High accuracy with fewer steps |\n\n### Leapfrog (default)\n\nThe standard velocity Verlet integrator. One gradient evaluation per step,\ngood balance of accuracy and cost:\n\n<!-- $MDX skip -->\n```ocaml\nlet result =\n  Norn.sample ~n:1000 log_prob init (fun ~step_size ~metric ->\n      Norn.nuts_kernel ~integrator:Norn.leapfrog ~step_size ~metric ())\n```\n\n### McLachlan\n\nMcLachlan's two-stage integrator achieves higher acceptance rates than\nleapfrog on challenging posteriors at the cost of two gradient evaluations\nper step:\n\n<!-- $MDX skip -->\n```ocaml\nlet result =\n  Norn.sample ~n:1000 log_prob init (fun ~step_size ~metric ->\n      Norn.nuts_kernel ~integrator:Norn.mclachlan ~step_size ~metric ())\n```\n\nUse McLachlan when leapfrog produces too many divergences or low acceptance\nrates despite adaptation.\n\n### Yoshida\n\nYoshida's fourth-order integrator is more accurate than leapfrog, allowing\nlarger step sizes or fewer integration steps. 
Three gradient evaluations\nper step:\n\n<!-- $MDX skip -->\n```ocaml\nlet result =\n  Norn.sample ~n:1000 log_prob init (fun ~step_size ~metric ->\n      Norn.hmc_kernel ~integrator:Norn.yoshida ~num_leapfrog:10\n        ~step_size ~metric ())\n```\n\nYoshida is most useful with HMC where the trajectory length is fixed -- the\nhigher accuracy lets you use fewer steps for the same trajectory quality.\n\n## Metrics\n\nThe metric defines the mass matrix, which shapes the momentum distribution\nto match the target geometry. A good metric improves mixing by making the\nsampler's kinetic energy reflect the posterior's covariance structure.\n\n### Unit Metric\n\nIdentity mass matrix. Momentum sampled from `N(0, I)`. This is the starting\npoint for adaptation:\n\n<!-- $MDX skip -->\n```ocaml\nlet m = Norn.unit_metric dim\n```\n\n### Diagonal Metric\n\nDiagonal mass matrix estimated from the inverse variance of each parameter.\nThis is what window adaptation produces automatically:\n\n<!-- $MDX skip -->\n```ocaml\nlet f64 = Nx.float64 in\nlet inv_mass_diag = Nx.create f64 [| 2 |] [| 1.0; 0.01 |] in\nlet m = Norn.diagonal_metric inv_mass_diag\n```\n\nUse a diagonal metric when parameters have very different scales. Adaptation\nestimates this automatically, but you can provide your own if you know the\nposterior variances.\n\n### Dense Metric\n\nFull inverse mass matrix. Uses Cholesky decomposition for momentum sampling.\nCaptures correlations between parameters:\n\n<!-- $MDX skip -->\n```ocaml\nlet f64 = Nx.float64 in\nlet inv_mass =\n  Nx.create f64 [| 2; 2 |] [| 1.0; 0.8; 0.8; 1.0 |]\nin\nlet m = Norn.dense_metric inv_mass\n```\n\nDense metrics help with strongly correlated posteriors but are expensive for\nhigh-dimensional problems (`O(dim^2)` storage, `O(dim^3)` Cholesky).\n\n## Composing Kernels with sample\n\n`Norn.sample` is the configurable entry point. 
The `make_kernel` function\nreceives the current adapted step size and metric, returning a kernel:\n\n<!-- $MDX skip -->\n```ocaml\nlet result =\n  Norn.sample ~n:2000 ~num_warmup:1000 ~target_accept:0.85\n    log_prob init (fun ~step_size ~metric ->\n      Norn.nuts_kernel ~integrator:Norn.mclachlan ~max_depth:8\n        ~step_size ~metric ())\n```\n\nThis gives you full control over:\n- The sampler algorithm (HMC vs NUTS)\n- The integrator (leapfrog, mclachlan, yoshida)\n- Algorithm-specific parameters (`num_leapfrog`, `max_depth`)\n\nStep size and metric are still supplied by adaptation.\n\nThe `make_kernel` function is called at every warmup step (with updated\nadaptation values) and once more with the final values before sampling begins.\n\n## Monitoring with report\n\nThe `~report` callback lets you monitor sampling progress. It is called after\neach step with the current step number, state, and diagnostics:\n\n<!-- $MDX skip -->\n```ocaml\nlet report ~step state info =\n  if step mod 100 = 0 then\n    Printf.printf \"step %4d  log_p = %.2f  accept = %.3f  steps = %d%s\\n\"\n      step state.Norn.log_density info.Norn.acceptance_rate\n      info.num_integration_steps\n      (if info.is_divergent then \"  DIVERGENT\" else \"\")\n\nlet result =\n  Norn.sample ~n:1000 ~report log_prob init (fun ~step_size ~metric ->\n      Norn.nuts_kernel ~step_size ~metric ())\n```\n\nStep numbers are negative during warmup (counting down to zero) and\nnon-negative during sampling. 
This makes it easy to distinguish the two\nphases:\n\n<!-- $MDX skip -->\n```ocaml\nlet report ~step _state info =\n  if step < 0 then\n    Printf.printf \"warmup %4d  accept = %.3f\\n\" step info.Norn.acceptance_rate\n  else if step mod 100 = 0 then\n    Printf.printf \"sample %4d  accept = %.3f\\n\" step info.acceptance_rate\n```\n\n## Providing a Known Metric\n\nIf you know the posterior covariance from a previous run or analytic\ncalculation, skip the adaptation overhead by providing the metric directly:\n\n<!-- $MDX skip -->\n```ocaml\nopen Nx\n\nlet () =\n  Rng.run ~seed:42 @@ fun () ->\n  let f64 = Nx.float64 in\n  let log_prob x = Nx.mul_s (Nx.sum (Nx.square x)) (-0.5) in\n  let init = Nx.zeros f64 [| 2 |] in\n\n  (* Use a known diagonal inverse mass *)\n  let inv_mass_diag = Nx.create f64 [| 2 |] [| 1.0; 1.0 |] in\n  let metric = Norn.diagonal_metric inv_mass_diag in\n\n  let result =\n    Norn.sample ~n:1000 ~num_warmup:200 log_prob init (fun ~step_size ~metric:_ ->\n        Norn.nuts_kernel ~step_size ~metric ())\n  in\n  Printf.printf \"accept rate: %.3f\\n\" result.stats.accept_rate\n```\n\nNote that `~metric:_` ignores the adapted metric and uses the fixed one.\nStep size is still adapted during warmup.\n\n## Next Steps\n\n- [Getting Started](../01-getting-started/) -- basic usage and the kernel API\n- [Adaptation and Diagnostics](../02-adaptation-and-diagnostics/) -- warmup windows, ESS, R-hat\n- [PyMC Comparison](../04-pymc-comparison/) -- mapping from Python's PyMC/BlackJAX to Norn\n"
  },
  {
    "path": "packages/norn/doc/04-pymc-comparison.md",
"content": "# PyMC Comparison\n\nThis page maps [PyMC](https://www.pymc.io/) and\n[BlackJAX](https://github.com/blackjax-devs/blackjax) concepts to their\nNorn equivalents. Norn's design is closest to BlackJAX: both provide\nfunctional kernel APIs where the sampler state is explicit and the log-density\nfunction is passed at each step.\n\n## One-Line Sampling\n\n**PyMC:**\n\n```python\nimport pymc as pm\n\nwith pm.Model():\n    x = pm.Normal(\"x\", mu=0, sigma=1, shape=2)\n    trace = pm.sample(1000, tune=500)\n```\n\n**Norn:**\n\n<!-- $MDX skip -->\n```ocaml\nlet log_prob x = Nx.mul_s (Nx.sum (Nx.square x)) (-0.5) in\nlet init = Nx.zeros Nx.float64 [| 2 |] in\nlet result = Norn.nuts ~n:1000 ~num_warmup:500 log_prob init\n```\n\nPyMC builds a probabilistic model and derives the log-density automatically.\nNorn takes the log-density function directly -- you write it yourself or\nbuild it from your model. Rune handles the gradient.\n\n## BlackJAX Kernel API\n\n**BlackJAX:**\n\n```python\nimport blackjax\nimport jax\n\nkernel = blackjax.nuts(log_prob, step_size=0.5,\n                       inverse_mass_matrix=jax.numpy.ones(2))\nstate = kernel.init(jax.numpy.zeros(2))\n\nkey = jax.random.PRNGKey(0)\nfor _ in range(1000):\n    key, subkey = jax.random.split(key)\n    state, info = kernel.step(subkey, state)\n```\n\n**Norn:**\n\n<!-- $MDX skip -->\n```ocaml\nlet metric = Norn.unit_metric 2 in\nlet kernel = Norn.nuts_kernel ~step_size:0.5 ~metric () in\nlet state = ref (kernel.init (Nx.zeros Nx.float64 [| 2 |]) log_prob) in\n\nfor _ = 1 to 1000 do\n  let new_state, _info = kernel.step !state log_prob in\n  state := new_state\ndone\n```\n\nBoth use a `{init; step}` pattern. 
The key difference: BlackJAX threads a\nPRNG key explicitly, while Norn uses Nx's RNG context (`Rng.run`).\n\n## Adaptation\n\n**BlackJAX:**\n\n```python\nwarmup = blackjax.window_adaptation(blackjax.nuts, log_prob)\nstate, kernel, _ = warmup.run(key, jax.numpy.zeros(2), 1000)\n```\n\n**Norn:**\n\n<!-- $MDX skip -->\n```ocaml\n(* Adaptation is built into sample/nuts/hmc *)\nlet result = Norn.nuts ~n:1000 ~num_warmup:500 log_prob init\n\n(* Or use sample for control over the kernel *)\nlet result =\n  Norn.sample ~n:1000 ~num_warmup:500 log_prob init (fun ~step_size ~metric ->\n      Norn.nuts_kernel ~step_size ~metric ())\n```\n\nIn BlackJAX, adaptation is a separate step that returns a tuned kernel. In\nNorn, adaptation is integrated into `sample` -- it adapts step size and mass\nmatrix during warmup, then freezes them for sampling.\n\n## Samplers\n\n| PyMC / BlackJAX | Norn | Notes |\n|-----------------|------|-------|\n| `pm.sample()` (NUTS) | `Norn.nuts ~n log_prob init` | NUTS with adaptation |\n| `blackjax.nuts(log_prob, step_size)` | `Norn.nuts_kernel ~step_size ~metric ()` | NUTS kernel |\n| `blackjax.hmc(log_prob, step_size, ...)` | `Norn.hmc_kernel ~step_size ~metric ()` | HMC kernel |\n| `pm.sample(step=pm.HamiltonianMC(...))` | `Norn.hmc ~n log_prob init` | HMC with adaptation |\n\n## Integrators\n\n| BlackJAX | Norn | Notes |\n|----------|------|-------|\n| `blackjax.mcmc.integrators.velocity_verlet` | `Norn.leapfrog` | Default, 1 grad eval/step |\n| `blackjax.mcmc.integrators.mclachlan` | `Norn.mclachlan` | 2 grad evals/step |\n| `blackjax.mcmc.integrators.yoshida` | `Norn.yoshida` | 3 grad evals/step |\n\nUsage comparison:\n\n```python\n# BlackJAX\nkernel = blackjax.nuts(log_prob, step_size=0.5,\n                       integrator=blackjax.mcmc.integrators.mclachlan)\n```\n\n<!-- $MDX skip -->\n```ocaml\n(* Norn *)\nlet kernel =\n  Norn.nuts_kernel ~integrator:Norn.mclachlan ~step_size:0.5 ~metric ()\n```\n\n## Metrics (Mass Matrix)\n\n| BlackJAX | 
Norn | Notes |\n|----------|------|-------|\n| `blackjax.mcmc.metrics.default_metric(jnp.ones(d))` | `Norn.unit_metric d` | Identity |\n| `blackjax.mcmc.metrics.default_metric(inv_mass_diag)` | `Norn.diagonal_metric inv_mass_diag` | Diagonal |\n| Dense metric via Cholesky | `Norn.dense_metric inv_mass_matrix` | Full matrix |\n\n## Diagnostics\n\n| PyMC / ArviZ | Norn | Notes |\n|--------------|------|-------|\n| `az.ess(trace)` | `Norn.ess samples` | Effective sample size |\n| `az.rhat(trace)` | `Norn.rhat chains` | Split R-hat |\n| `trace.sample_stats[\"diverging\"]` | `result.stats.num_divergent` | Divergence count |\n| `trace.sample_stats[\"accept\"]` | `result.stats.accept_rate` | Mean acceptance rate |\n| `trace.sample_stats[\"step_size\"]` | `result.stats.step_size` | Final step size |\n\n## State and Info\n\n**BlackJAX state:**\n\n```python\nstate.position      # current sample\nstate.logdensity    # log p(x)\nstate.logdensity_grad  # grad log p(x)\n```\n\n**Norn state:**\n\n<!-- $MDX skip -->\n```ocaml\nstate.position         (* Nx.float64_t, shape [dim] *)\nstate.log_density      (* float *)\nstate.grad_log_density (* Nx.float64_t, shape [dim] *)\n```\n\n**BlackJAX info:**\n\n```python\ninfo.acceptance_rate\ninfo.is_divergent\ninfo.energy\ninfo.num_integration_steps\n```\n\n**Norn info:**\n\n<!-- $MDX skip -->\n```ocaml\ninfo.acceptance_rate        (* float in [0, 1] *)\ninfo.is_divergent           (* bool *)\ninfo.energy                 (* float *)\ninfo.num_integration_steps  (* int *)\n```\n\n## Key Differences\n\n| Aspect | PyMC / BlackJAX | Norn |\n|--------|-----------------|------|\n| Language | Python / JAX | OCaml / Rune |\n| Model definition | Declarative (PyMC) or functional (BlackJAX) | Functional -- write `log_prob` directly |\n| Gradients | JAX autodiff | Rune autodiff |\n| PRNG | Explicit key splitting (JAX) | Scoped via `Nx.Rng.run` |\n| Adaptation | Separate step (BlackJAX) or automatic (PyMC) | Integrated into `sample` |\n| Mass matrix 
representation | Diagonal or dense | `metric` record with `sample_momentum`, `kinetic_energy`, `scale` |\n| Multi-chain | Built-in (`chains` parameter) | Run multiple calls, combine with `rhat` |\n| Trace format | ArviZ InferenceData | `result` record with `samples` matrix |\n| Probabilistic DSL | Yes (PyMC) | No -- bring your own log-density |\n"
  },
  {
    "path": "packages/norn/doc/dune",
    "content": "(mdx\n (files *.md)\n (package norn)\n (libraries norn nx rune))\n"
  },
  {
    "path": "packages/norn/doc/index.md",
    "content": "# Norn\n\nNorn provides MCMC sampling with automatic gradients for OCaml. You supply an unnormalized log-density function and an initial position; Norn handles gradient computation via Rune, trajectory integration, and Stan-style window adaptation. One-line convenience functions cover common workflows, while the kernel API gives full control over integrators, metrics, and adaptation.\n\n## Features\n\n- **One-line sampling** -- `Norn.hmc` and `Norn.nuts` with automatic adaptation\n- **Configurable API** -- `Norn.sample` with custom kernels via `make_kernel`\n- **Automatic gradients** -- log-density gradients computed by Rune\n- **Symplectic integrators** -- `leapfrog`, `mclachlan`, `yoshida`\n- **Mass matrix metrics** -- `unit_metric`, `diagonal_metric`, `dense_metric`\n- **Stan-style adaptation** -- dual averaging for step size, Welford estimation for mass matrix\n- **Diagnostics** -- effective sample size (`ess`) and split R-hat (`rhat`)\n\n## Quick Start\n\n<!-- $MDX skip -->\n```ocaml\nopen Nx\n\nlet () =\n  Rng.run ~seed:42 @@ fun () ->\n  let f64 = Nx.float64 in\n\n  (* Target: N([3; -1], I) *)\n  let mu = Nx.create f64 [| 2 |] [| 3.0; -1.0 |] in\n  let log_prob x =\n    let d = Nx.sub x mu in\n    Nx.mul_s (Nx.sum (Nx.square d)) (-0.5)\n  in\n\n  let init = Nx.zeros f64 [| 2 |] in\n  let result = Norn.nuts ~n:1000 log_prob init in\n\n  let mean = Nx.mean ~axes:[ 0 ] result.samples in\n  Printf.printf \"posterior mean: %s\\n\" (Nx.data_to_string mean);\n  Printf.printf \"accept rate:   %.2f\\n\" result.stats.accept_rate;\n  Printf.printf \"ESS:           %s\\n\" (Nx.data_to_string (Norn.ess result.samples))\n```\n\n## Next Steps\n\n- [Getting Started](01-getting-started/) -- installation, first sampler, the kernel API\n- [Adaptation and Diagnostics](02-adaptation-and-diagnostics/) -- warmup windows, ESS, R-hat\n- [Advanced Usage](03-advanced-usage/) -- custom integrators, metrics, and monitoring\n- [PyMC Comparison](04-pymc-comparison/) -- 
mapping from Python's PyMC/BlackJAX to Norn\n"
  },
  {
    "path": "packages/norn/examples/01-sampling-basics/README.md",
    "content": "# `01-sampling-basics`\n\nYour first sampler. This example draws 1000 samples from a 2D correlated\nGaussian using NUTS and prints summary statistics to verify the chain recovered\nthe true distribution.\n\n```bash\ndune exec packages/norn/examples/01-sampling-basics/main.exe\n```\n\n## What You'll Learn\n\n- Defining a log-density function for MCMC\n- One-line sampling with `Norn.nuts`\n- Computing sample mean, variance, and covariance from the output\n- Reading basic diagnostics: ESS, acceptance rate, step size, divergences\n\n## Key Functions\n\n| Function | Purpose |\n| -------- | ------- |\n| `nuts`   | Draw samples using the No-U-Turn Sampler with automatic adaptation |\n| `ess`    | Effective sample size per parameter via autocorrelation |\n\n## How It Works\n\nThe target is a 2D Gaussian with mean `[2, -1]` and covariance `[[1, 0.8], [0.8, 2]]`.\nWe define `log_prob` as the unnormalized log-density (the Mahalanobis form), then\ncall `Norn.nuts ~n:1000 log_prob init`. NUTS handles warmup adaptation (step size\nand mass matrix) automatically.\n\n## Try It\n\n1. Increase `~n` to 5000 and observe ESS and variance estimates improve.\n2. Start from a bad initial point like `[100.0; 100.0]` -- warmup should still converge.\n3. Replace `Norn.nuts` with `Norn.hmc` and compare acceptance rates.\n\n## Next Steps\n\nContinue to [02-bayesian-regression](../02-bayesian-regression/) to see MCMC applied\nto a real inference problem.\n"
  },
  {
    "path": "packages/norn/examples/01-sampling-basics/dune",
    "content": "(executable\n (name main)\n (libraries nx rune norn))\n"
  },
  {
    "path": "packages/norn/examples/01-sampling-basics/main.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Sample from a 2D correlated Gaussian using NUTS.\n\n   Target distribution: N(mu, Sigma) with mu = [2.0; -1.0] Sigma = [[1.0, 0.8],\n   [0.8, 2.0]]\n\n   NUTS automatically adapts step size and trajectory length during warmup, so a\n   single call to [Norn.nuts] is all you need. *)\n\nlet f64 = Nx.float64\n\n(* Target parameters *)\nlet mu = Nx.create f64 [| 2 |] [| 2.0; -1.0 |]\nlet sigma_inv = Nx.inv (Nx.create f64 [| 2; 2 |] [| 1.0; 0.8; 0.8; 2.0 |])\n\n(* Log-density of the target (unnormalized). *)\nlet log_prob x =\n  let d = Nx.sub x mu in\n  let dt = Nx.reshape [| 1; 2 |] d in\n  let mahal = Nx.matmul (Nx.matmul dt sigma_inv) (Nx.reshape [| 2; 1 |] d) in\n  Nx.mul_s (Nx.reshape [||] mahal) (-0.5)\n\nlet () =\n  Nx.Rng.run ~seed:42 @@ fun () ->\n  let init = Nx.zeros f64 [| 2 |] in\n  let result = Norn.nuts ~n:1000 log_prob init in\n\n  (* Sample mean *)\n  let sample_mean = Nx.mean ~axes:[ 0 ] result.samples in\n  Printf.printf \"--- 2D Correlated Gaussian (NUTS, 1000 samples) ---\\n\\n\";\n  Printf.printf \"True mean:    [%6.3f, %6.3f]\\n\" (Nx.item [ 0 ] mu)\n    (Nx.item [ 1 ] mu);\n  Printf.printf \"Sample mean:  [%6.3f, %6.3f]\\n\"\n    (Nx.item [ 0 ] sample_mean)\n    (Nx.item [ 1 ] sample_mean);\n\n  (* Sample variance *)\n  let centered = Nx.sub result.samples sample_mean in\n  let n = Float.of_int ((Nx.shape result.samples).(0) - 1) in\n  let sample_cov =\n    Nx.div_s (Nx.matmul (Nx.matrix_transpose centered) centered) n\n  in\n  Printf.printf \"\\nTrue var:     [%6.3f, %6.3f]\\n\" 1.0 2.0;\n  Printf.printf \"Sample var:   [%6.3f, %6.3f]\\n\"\n    (Nx.item [ 0; 0 ] sample_cov)\n    (Nx.item [ 1; 1 ] sample_cov);\n  Printf.printf \"True cov:     %6.3f\\n\" 0.8;\n  Printf.printf 
\"Sample cov:   %6.3f\\n\" (Nx.item [ 0; 1 ] sample_cov);\n\n  (* Diagnostics *)\n  let e = Norn.ess result.samples in\n  Printf.printf \"\\nESS:          [%6.1f, %6.1f]\\n\" (Nx.item [ 0 ] e)\n    (Nx.item [ 1 ] e);\n  Printf.printf \"Accept rate:  %.3f\\n\" result.stats.accept_rate;\n  Printf.printf \"Step size:    %.4f\\n\" result.stats.step_size;\n  Printf.printf \"Divergent:    %d\\n\" result.stats.num_divergent\n"
  },
  {
    "path": "packages/norn/examples/02-bayesian-regression/README.md",
    "content": "# `02-bayesian-regression`\n\nBayesian linear regression on synthetic data. Generates noisy observations from\n`y = 2x + 1`, defines a Gaussian likelihood with normal priors, and uses NUTS\nto infer the posterior over slope and intercept.\n\n```bash\ndune exec packages/norn/examples/02-bayesian-regression/main.exe\n```\n\n## What You'll Learn\n\n- Building a log-posterior from likelihood and prior\n- Interpreting posterior means and 95% credible intervals\n- Using the configurable `Norn.sample` API with `Norn.nuts_kernel`\n\n## Key Functions\n\n| Function      | Purpose                                           |\n| ------------- | ------------------------------------------------- |\n| `nuts`        | One-line NUTS sampling with automatic adaptation  |\n| `sample`      | Configurable sampling with a user-provided kernel |\n| `nuts_kernel` | Construct a NUTS kernel with explicit parameters  |\n| `ess`         | Effective sample size per parameter               |\n\n## How It Works\n\n1. Generate 50 data points from `y = 2x + 1 + N(0, 0.5)`.\n2. Define `log_posterior` as Gaussian log-likelihood plus `N(0, 10)` priors.\n3. Run `Norn.nuts ~n:2000` to draw posterior samples.\n4. Compute posterior means and 95% credible intervals from the samples.\n5. Re-run with `Norn.sample` + `Norn.nuts_kernel` to show the configurable API.\n\n## Try It\n\n1. Reduce `n_data` to 10 and observe wider credible intervals.\n2. Use a tighter prior `N(0, 1)` and see how it biases the posterior toward zero.\n3. Replace `Norn.nuts` with `Norn.hmc ~num_leapfrog:30` and compare.\n\n## Next Steps\n\nContinue to [03-diagnostics](../03-diagnostics/) to learn about multi-chain\nconvergence analysis with ESS and R-hat.\n"
  },
  {
    "path": "packages/norn/examples/02-bayesian-regression/dune",
    "content": "(executable\n (name main)\n (libraries nx rune norn))\n"
  },
  {
    "path": "packages/norn/examples/02-bayesian-regression/main.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Bayesian linear regression: infer slope and intercept from noisy data.\n\n   Model: y_i = slope * x_i + intercept + eps_i, eps_i ~ N(0, sigma^2)\n\n   True parameters: slope = 2.0, intercept = 1.0, sigma = 0.5\n\n   Priors: slope ~ N(0, 10) intercept ~ N(0, 10)\n\n   We sample the posterior with NUTS and report credible intervals. *)\n\nlet f64 = Nx.float64\n\n(* Generate synthetic data: y = 2x + 1 + noise *)\nlet n_data = 50\nlet true_slope = 2.0\nlet true_intercept = 1.0\nlet noise_sigma = 0.5\n\nlet gen_data () =\n  let x = Nx.linspace f64 (-2.0) 2.0 n_data in\n  let noise = Nx.mul_s (Nx.randn f64 [| n_data |]) noise_sigma in\n  let y =\n    Nx.add (Nx.add (Nx.mul_s x true_slope) (Nx.scalar f64 true_intercept)) noise\n  in\n  (x, y)\n\n(* Log-posterior: Gaussian likelihood + normal prior. params = [slope;\n   intercept] *)\nlet log_posterior x_data y_data params =\n  let slope = Nx.slice [ I 0 ] params in\n  let intercept = Nx.slice [ I 1 ] params in\n  (* Predicted values *)\n  let y_pred = Nx.add (Nx.mul x_data slope) intercept in\n  let residuals = Nx.sub y_data y_pred in\n  (* Log-likelihood: -0.5 * sum((y - y_pred)^2) / sigma^2 *)\n  let ll =\n    Nx.div_s\n      (Nx.mul_s (Nx.sum (Nx.square residuals)) (-0.5))\n      (noise_sigma *. noise_sigma)\n  in\n  (* Log-prior: N(0, 10) on each parameter *)\n  let lp_slope = Nx.mul_s (Nx.square slope) (-0.5 /. 100.0) in\n  let lp_intercept = Nx.mul_s (Nx.square intercept) (-0.5 /. 100.0) in\n  Nx.add ll (Nx.add lp_slope lp_intercept)\n\nlet percentile samples frac =\n  let n = (Nx.shape samples).(0) in\n  let sorted, _ = Nx.sort samples in\n  let idx = Float.to_int (frac *. 
Float.of_int (n - 1)) in\n  Nx.item [ idx ] sorted\n\nlet () =\n  Nx.Rng.run ~seed:42 @@ fun () ->\n  let x_data, y_data = gen_data () in\n  let init = Nx.zeros f64 [| 2 |] in\n  let log_prob = log_posterior x_data y_data in\n  let result = Norn.nuts ~n:2000 ~num_warmup:1000 log_prob init in\n\n  Printf.printf \"--- Bayesian Linear Regression (NUTS, 2000 samples) ---\\n\\n\";\n  Printf.printf \"True:      slope = %.2f, intercept = %.2f\\n\" true_slope\n    true_intercept;\n\n  let sample_mean = Nx.mean ~axes:[ 0 ] result.samples in\n  Printf.printf \"Posterior: slope = %.3f, intercept = %.3f\\n\"\n    (Nx.item [ 0 ] sample_mean)\n    (Nx.item [ 1 ] sample_mean);\n\n  (* 95% credible intervals *)\n  Printf.printf \"\\n95%% credible intervals:\\n\";\n  let slope_samples = Nx.slice [ A; I 0 ] result.samples in\n  let intercept_samples = Nx.slice [ A; I 1 ] result.samples in\n  Printf.printf \"  slope:     [%.3f, %.3f]\\n\"\n    (percentile slope_samples 0.025)\n    (percentile slope_samples 0.975);\n  Printf.printf \"  intercept: [%.3f, %.3f]\\n\"\n    (percentile intercept_samples 0.025)\n    (percentile intercept_samples 0.975);\n\n  (* Diagnostics *)\n  let e = Norn.ess result.samples in\n  Printf.printf \"\\nESS:         [%.1f, %.1f]\\n\" (Nx.item [ 0 ] e)\n    (Nx.item [ 1 ] e);\n  Printf.printf \"Accept rate: %.3f\\n\" result.stats.accept_rate;\n  Printf.printf \"Step size:   %.4f\\n\" result.stats.step_size;\n  Printf.printf \"Divergent:   %d\\n\" result.stats.num_divergent;\n\n  (* Also demonstrate the configurable API *)\n  Printf.printf \"\\n--- Same model with configurable sample API ---\\n\";\n  let result2 =\n    Norn.sample ~n:1000 ~num_warmup:500 log_prob init (fun ~step_size ~metric ->\n        Norn.nuts_kernel ~step_size ~metric ())\n  in\n  let mean2 = Nx.mean ~axes:[ 0 ] result2.samples in\n  Printf.printf \"Posterior: slope = %.3f, intercept = %.3f\\n\"\n    (Nx.item [ 0 ] mean2) (Nx.item [ 1 ] mean2);\n  Printf.printf \"Accept rate: %.3f\\n\" 
result2.stats.accept_rate\n"
  },
  {
    "path": "packages/norn/examples/03-diagnostics/README.md",
    "content": "# `03-diagnostics`\n\nMulti-chain convergence diagnostics. Runs 4 independent NUTS chains on a 3D\nGaussian target, then computes ESS and split R-hat to verify that the chains\nhave converged and mixed.\n\n```bash\ndune exec packages/norn/examples/03-diagnostics/main.exe\n```\n\n## What You'll Learn\n\n- Running multiple chains with different seeds\n- Computing effective sample size (ESS) per chain\n- Computing split R-hat across chains for convergence assessment\n- Interpreting diagnostic thresholds (ESS > 100, R-hat < 1.01)\n\n## Key Functions\n\n| Function | Purpose |\n| -------- | ------- |\n| `nuts`   | Draw samples with automatic adaptation |\n| `ess`    | Effective sample size via initial monotone sequence estimator |\n| `rhat`   | Split R-hat convergence diagnostic across chains |\n\n## How It Works\n\n1. Define a 3D Gaussian target with different scales per dimension (var = 1, 4, 0.25).\n2. Run 4 independent NUTS chains, each with a different random seed.\n3. Report per-chain acceptance rate, step size, and divergence count.\n4. Compute ESS for each chain and R-hat across chains.\n5. Pool all chains for a final posterior summary.\n\n## Interpreting the Output\n\n- **ESS**: The number of effectively independent samples. If ESS is much lower\n  than the actual sample count, the chain has high autocorrelation.\n- **R-hat**: Measures between-chain vs within-chain variance. Values close to\n  1.0 mean the chains agree. Above 1.01 suggests incomplete mixing.\n\n## Try It\n\n1. Reduce `n_samples` to 50 and observe R-hat increase above 1.01.\n2. Use a highly correlated target and see ESS drop.\n3. Try `Norn.hmc` instead of `Norn.nuts` and compare ESS.\n\n## Further Reading\n\n- [Sampling Basics](../01-sampling-basics/) -- single-chain sampling\n- [Bayesian Regression](../02-bayesian-regression/) -- a real inference problem\n"
  },
  {
    "path": "packages/norn/examples/03-diagnostics/dune",
    "content": "(executable\n (name main)\n (libraries nx rune norn))\n"
  },
  {
    "path": "packages/norn/examples/03-diagnostics/main.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Multi-chain convergence diagnostics.\n\n   Run 4 chains on a 3D target distribution, then compute ESS and R-hat to\n   assess whether the chains have converged and mixed well.\n\n   Guidelines: - ESS > 100 per parameter for reliable estimates - R-hat < 1.01\n   indicates convergence across chains *)\n\nlet f64 = Nx.float64\n\n(* Target: 3D Gaussian with different scales per dimension. mu = [1, -2, 0.5]\n   sigma = diag([1, 4, 0.25]) *)\nlet mu = Nx.create f64 [| 3 |] [| 1.0; -2.0; 0.5 |]\nlet inv_var = Nx.create f64 [| 3 |] [| 1.0; 0.25; 4.0 |]\n(* 1/sigma^2 for each dim *)\n\nlet log_prob x =\n  let d = Nx.sub x mu in\n  Nx.mul_s (Nx.sum (Nx.mul (Nx.square d) inv_var)) (-0.5)\n\nlet dim = 3\nlet n_chains = 4\nlet n_samples = 1000\nlet param_names = [| \"x0\"; \"x1\"; \"x2\" |]\n\nlet () =\n  Printf.printf \"--- Multi-Chain Diagnostics (%d chains x %d samples) ---\\n\\n\"\n    n_chains n_samples;\n\n  (* Run chains with different seeds *)\n  let chains =\n    Array.init n_chains (fun i ->\n        Nx.Rng.run ~seed:(i + 1) @@ fun () ->\n        let init = Nx.zeros f64 [| dim |] in\n        Norn.nuts ~n:n_samples ~num_warmup:500 log_prob init)\n  in\n\n  (* Per-chain summary *)\n  Printf.printf \"Per-chain summary:\\n\";\n  Printf.printf \"  %-8s  %-12s  %-12s  %-8s\\n\" \"Chain\" \"Accept Rate\" \"Step Size\"\n    \"Diverg.\";\n  Array.iteri\n    (fun i r ->\n      Printf.printf \"  %-8d  %-12.3f  %-12.4f  %-8d\\n\" (i + 1)\n        r.Norn.stats.accept_rate r.stats.step_size r.stats.num_divergent)\n    chains;\n\n  (* Per-chain ESS *)\n  Printf.printf \"\\nEffective Sample Size (ESS) per chain:\\n\";\n  Printf.printf \"  %-8s\" \"Chain\";\n  Array.iter (fun name -> Printf.printf \"  %-8s\" name) 
param_names;\n  Printf.printf \"\\n\";\n  Array.iteri\n    (fun i r ->\n      let e = Norn.ess r.Norn.samples in\n      Printf.printf \"  %-8d\" (i + 1);\n      for d = 0 to dim - 1 do\n        Printf.printf \"  %-8.1f\" (Nx.item [ d ] e)\n      done;\n      Printf.printf \"\\n\")\n    chains;\n\n  (* R-hat across chains *)\n  let chain_samples = Array.map (fun r -> r.Norn.samples) chains in\n  let r = Norn.rhat chain_samples in\n  Printf.printf \"\\nSplit R-hat (target: < 1.01):\\n\";\n  for d = 0 to dim - 1 do\n    let rv = Nx.item [ d ] r in\n    let status = if rv < 1.01 then \"OK\" else \"WARNING\" in\n    Printf.printf \"  %s: %.4f  [%s]\\n\" param_names.(d) rv status\n  done;\n\n  (* Grand summary *)\n  let all_converged = ref true in\n  for d = 0 to dim - 1 do\n    if Nx.item [ d ] r >= 1.01 then all_converged := false\n  done;\n  Printf.printf \"\\nConvergence: %s\\n\"\n    (if !all_converged then \"All parameters converged (R-hat < 1.01)\"\n     else\n       \"Some parameters have not converged -- increase samples or check model\");\n\n  (* Pooled posterior summary *)\n  Printf.printf \"\\nPooled posterior (all chains):\\n\";\n  Printf.printf \"  %-8s  %-10s  %-10s  %-10s\\n\" \"Param\" \"True\" \"Mean\" \"Std\";\n  let all_samples =\n    Nx.concatenate ~axis:0\n      (Array.to_list (Array.map (fun r -> r.Norn.samples) chains))\n  in\n  let pooled_mean = Nx.mean ~axes:[ 0 ] all_samples in\n  let pooled_centered = Nx.sub all_samples pooled_mean in\n  let nf = Float.of_int ((Nx.shape all_samples).(0) - 1) in\n  let pooled_var =\n    Nx.div_s (Nx.sum ~axes:[ 0 ] (Nx.square pooled_centered)) nf\n  in\n  let pooled_std = Nx.sqrt pooled_var in\n  for d = 0 to dim - 1 do\n    Printf.printf \"  %-8s  %-10.3f  %-10.3f  %-10.3f\\n\" param_names.(d)\n      (Nx.item [ d ] mu)\n      (Nx.item [ d ] pooled_mean)\n      (Nx.item [ d ] pooled_std)\n  done\n"
  },
  {
    "path": "packages/norn/examples/README.md",
    "content": "# Norn Examples\n\nLearn Norn through progressively complex examples. Start with `01-sampling-basics`\nand work through the numbered examples in order.\n\n## Examples\n\n| Example | Concept | Key Functions |\n|---------|---------|---------------|\n| [`01-sampling-basics`](./01-sampling-basics/) | Sample from a correlated Gaussian with NUTS | `nuts`, `ess` |\n| [`02-bayesian-regression`](./02-bayesian-regression/) | Bayesian linear regression with posterior inference | `nuts`, `sample`, `nuts_kernel` |\n| [`03-diagnostics`](./03-diagnostics/) | Multi-chain convergence diagnostics | `nuts`, `ess`, `rhat` |\n\n## Running Examples\n\nAll examples can be run with:\n\n```bash\ndune exec packages/norn/examples/<name>/main.exe\n```\n\nFor example:\n\n```bash\ndune exec packages/norn/examples/01-sampling-basics/main.exe\n```\n\n## Quick Reference\n\n### One-Line Sampling\n\n```ocaml\nlet result =\n  Nx.Rng.run ~seed:42 @@ fun () ->\n  Norn.nuts ~n:1000 log_prob (Nx.zeros Nx.float64 [| dim |])\n```\n\n### Configurable Sampling\n\n```ocaml\nlet result =\n  Norn.sample ~n:1000 ~num_warmup:500 log_prob init\n    (fun ~step_size ~metric ->\n      Norn.nuts_kernel ~step_size ~metric ())\n```\n\n### Convergence Diagnostics\n\n```ocaml\nlet ess = Norn.ess result.samples\nlet rhat = Norn.rhat [| chain1.samples; chain2.samples |]\n```\n"
  },
  {
    "path": "packages/norn/lib/adapt.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Step-size adaptation via dual averaging (Nesterov 2009). *)\n\nlet f64 = Nx.float64\n\ntype step_size = {\n  target_accept : float;\n  mu : float;\n  log_eps : float;\n  log_eps_bar : float;\n  h_bar : float;\n  count : int;\n}\n\nlet step_size_init ?(target_accept = 0.65) eps =\n  {\n    target_accept;\n    mu = Float.log (10.0 *. eps);\n    log_eps = Float.log eps;\n    log_eps_bar = 0.0;\n    h_bar = 0.0;\n    count = 0;\n  }\n\nlet step_size_update ss ~acceptance_rate =\n  let gamma = 0.05 in\n  let t0 = 10.0 in\n  let kappa = 0.75 in\n  let m = Float.of_int (ss.count + 1) in\n  let w = 1.0 /. (m +. t0) in\n  let h_bar =\n    ((1.0 -. w) *. ss.h_bar) +. (w *. (ss.target_accept -. acceptance_rate))\n  in\n  let log_eps = ss.mu -. (Float.sqrt m /. gamma *. h_bar) in\n  let m_pow = m ** -.kappa in\n  (* BlackJAX: log_x_avg uses PREVIOUS log_x, not the newly computed one. *)\n  let log_eps_bar =\n    (m_pow *. ss.log_eps) +. ((1.0 -. m_pow) *. ss.log_eps_bar)\n  in\n  { ss with h_bar; log_eps; log_eps_bar; count = ss.count + 1 }\n\nlet step_size_current ss = Float.exp ss.log_eps\nlet step_size_final ss = Float.exp ss.log_eps_bar\n\n(* Mass-matrix adaptation via Welford's online algorithm. 
*)\n\ntype mass_matrix = {\n  dim : int;\n  count : int;\n  mean : Nx.float64_t;\n  m2 : Nx.float64_t;\n}\n\nlet mass_matrix_init dim =\n  { dim; count = 0; mean = Nx.zeros f64 [| dim |]; m2 = Nx.zeros f64 [| dim |] }\n\nlet mass_matrix_update mm position =\n  let count = mm.count + 1 in\n  let delta = Nx.sub position mm.mean in\n  let mean = Nx.add mm.mean (Nx.div_s delta (Float.of_int count)) in\n  let delta2 = Nx.sub position mean in\n  let m2 = Nx.add mm.m2 (Nx.mul delta delta2) in\n  { mm with count; mean; m2 }\n\nlet mass_matrix_inv_diag mm =\n  if mm.count < 2 then None\n  else\n    let n = Float.of_int mm.count in\n    let variance = Nx.div_s mm.m2 (n -. 1.0) in\n    let w = n /. (n +. 5.0) in\n    let shrinkage = 1e-3 *. 5.0 /. (n +. 5.0) in\n    Some (Nx.add_s (Nx.mul_s variance w) shrinkage)\n\nlet mass_matrix_reset mm =\n  {\n    mm with\n    count = 0;\n    mean = Nx.zeros f64 [| mm.dim |];\n    m2 = Nx.zeros f64 [| mm.dim |];\n  }\n\nlet step_size_reset ss =\n  step_size_init ~target_accept:ss.target_accept (step_size_final ss)\n\n(* Window adaptation schedule (Stan warmup).\n\n   Three phases: - Fast (initial buffer): adapt step size only. - Slow (doubling\n   windows): adapt step size + mass matrix. At each window boundary the mass\n   matrix is finalized (regularized) and both estimators are reset. - Fast\n   (final buffer): adapt step size only with the final mass matrix. *)\n\ntype warmup_action = Fast | Slow | Slow_end\n\nlet build_schedule num_warmup =\n  if num_warmup < 20 then Array.make num_warmup Fast\n  else\n    let initial_buffer, final_buffer, first_window =\n      if 75 + 50 + 25 > num_warmup then\n        let ib = Float.to_int (0.15 *. Float.of_int num_warmup) in\n        let fb = Float.to_int (0.10 *. 
Float.of_int num_warmup) in\n        (ib, fb, num_warmup - ib - fb)\n      else (75, 50, 25)\n    in\n    let schedule = Array.make num_warmup Fast in\n    let slow_end_pos = num_warmup - final_buffer in\n    let pos = ref initial_buffer in\n    let window_size = ref first_window in\n    while !pos < slow_end_pos do\n      let end_pos =\n        if !pos + (3 * !window_size) <= slow_end_pos then !pos + !window_size\n        else slow_end_pos\n      in\n      for j = !pos to end_pos - 1 do\n        schedule.(j) <- (if j = end_pos - 1 then Slow_end else Slow)\n      done;\n      pos := end_pos;\n      window_size := !window_size * 2\n    done;\n    schedule\n"
  },
  {
    "path": "packages/norn/lib/dune",
    "content": "(library\n (name norn)\n (public_name norn)\n (private_modules adapt internal nuts)\n (libraries nx rune))\n"
  },
  {
    "path": "packages/norn/lib/internal.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\ntype state = {\n  position : Nx.float64_t;\n  log_density : float;\n  grad_log_density : Nx.float64_t;\n}\n\ntype info = {\n  acceptance_rate : float;\n  is_divergent : bool;\n  energy : float;\n  num_integration_steps : int;\n}\n\ntype kernel = {\n  init : Nx.float64_t -> (Nx.float64_t -> Nx.float64_t) -> state;\n  step : state -> (Nx.float64_t -> Nx.float64_t) -> state * info;\n}\n\ntype integrator =\n  (Nx.float64_t -> Nx.float64_t) ->\n  Nx.float64_t ->\n  Nx.float64_t ->\n  Nx.float64_t ->\n  (Nx.float64_t -> float * Nx.float64_t) ->\n  float ->\n  Nx.float64_t * Nx.float64_t * float * Nx.float64_t\n\ntype metric = {\n  sample_momentum : int -> Nx.float64_t;\n  kinetic_energy : Nx.float64_t -> float;\n  scale : Nx.float64_t -> Nx.float64_t;\n  is_turning : Nx.float64_t -> Nx.float64_t -> Nx.float64_t -> bool;\n}\n\ntype stats = { accept_rate : float; step_size : float; num_divergent : int }\n\ntype result = {\n  samples : Nx.float64_t;\n  log_densities : Nx.float64_t;\n  stats : stats;\n}\n"
  },
  {
    "path": "packages/norn/lib/norn.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\ninclude Internal\n\nlet f64 = Nx.float64\n\n(* Integrators *)\n\nlet leapfrog kinetic_energy_grad q p g grad_log_prob eps =\n  let p = Nx.add p (Nx.mul_s g (eps /. 2.0)) in\n  let q = Nx.add q (Nx.mul_s (kinetic_energy_grad p) eps) in\n  let lp, g = grad_log_prob q in\n  let p = Nx.add p (Nx.mul_s g (eps /. 2.0)) in\n  (q, p, lp, g)\n\nlet palindromic ~momentum_coeffs ~position_coeffs kinetic_energy_grad q p g\n    grad_log_prob eps =\n  let n_pos = Array.length position_coeffs in\n  let q = ref q in\n  let p = ref (Nx.add p (Nx.mul_s g (momentum_coeffs.(0) *. eps))) in\n  let lp = ref 0.0 in\n  let g = ref g in\n  for i = 0 to n_pos - 1 do\n    q :=\n      Nx.add !q (Nx.mul_s (kinetic_energy_grad !p) (position_coeffs.(i) *. eps));\n    let lp', g' = grad_log_prob !q in\n    lp := lp';\n    g := g';\n    p := Nx.add !p (Nx.mul_s !g (momentum_coeffs.(i + 1) *. eps))\n  done;\n  (!q, !p, !lp, !g)\n\nlet mclachlan =\n  let l = 0.1932093174209856 in\n  palindromic\n    ~momentum_coeffs:[| l; 1.0 -. (2.0 *. l); l |]\n    ~position_coeffs:[| 0.5; 0.5 |]\n\nlet yoshida =\n  let cbrt2 = 2.0 ** (1.0 /. 3.0) in\n  let w1 = 1.0 /. (2.0 -. cbrt2) in\n  let w0 = -.cbrt2 /. (2.0 -. cbrt2) in\n  palindromic\n    ~momentum_coeffs:\n      [| w1 /. 2.0; (w1 +. w0) /. 2.0; (w0 +. w1) /. 2.0; w1 /. 
2.0 |]\n    ~position_coeffs:[| w1; w0; w1 |]\n\n(* Metrics *)\n\nlet euclidean_is_turning scale left_p right_p momentum_sum =\n  let rho = Nx.sub momentum_sum (Nx.div_s (Nx.add left_p right_p) 2.0) in\n  Nx.item [] (Nx.vdot (scale left_p) rho) <= 0.0\n  || Nx.item [] (Nx.vdot (scale right_p) rho) <= 0.0\n\nlet unit_metric dim =\n  {\n    sample_momentum = (fun _dim -> Nx.randn f64 [| dim |]);\n    kinetic_energy = (fun p -> 0.5 *. Nx.item [] (Nx.sum (Nx.square p)));\n    scale = Fun.id;\n    is_turning = euclidean_is_turning Fun.id;\n  }\n\nlet diagonal_metric inv_mass_diag =\n  let mass_diag = Nx.recip inv_mass_diag in\n  let sqrt_mass = Nx.sqrt mass_diag in\n  let scale v = Nx.mul v inv_mass_diag in\n  {\n    sample_momentum = (fun dim -> Nx.mul (Nx.randn f64 [| dim |]) sqrt_mass);\n    kinetic_energy =\n      (fun p -> 0.5 *. Nx.item [] (Nx.sum (Nx.mul (Nx.square p) inv_mass_diag)));\n    scale;\n    is_turning = euclidean_is_turning scale;\n  }\n\nlet dense_metric inv_mass_matrix =\n  let dim = (Nx.shape inv_mass_matrix).(0) in\n  let mass_matrix = Nx.inv inv_mass_matrix in\n  let chol = Nx.cholesky mass_matrix in\n  let scale v =\n    let v_col = Nx.reshape [| dim; 1 |] v in\n    Nx.reshape [| dim |] (Nx.matmul inv_mass_matrix v_col)\n  in\n  {\n    sample_momentum =\n      (fun _dim ->\n        let z = Nx.randn f64 [| dim; 1 |] in\n        Nx.reshape [| dim |] (Nx.matmul chol z));\n    kinetic_energy =\n      (fun p ->\n        let p_col = Nx.reshape [| dim; 1 |] p in\n        0.5\n        *. 
Nx.item []\n             (Nx.matmul\n                (Nx.matrix_transpose p_col)\n                (Nx.matmul inv_mass_matrix p_col)));\n    scale;\n    is_turning = euclidean_is_turning scale;\n  }\n\n(* Kernels *)\n\nlet grad_log_prob log_density_fn q =\n  let lp, g = Rune.value_and_grad log_density_fn q in\n  (Nx.item [] lp, g)\n\nlet init_state position log_density_fn =\n  let lp, g = Rune.value_and_grad log_density_fn position in\n  { position; log_density = Nx.item [] lp; grad_log_density = g }\n\nlet hmc_kernel ?(integrator : integrator = leapfrog) ?(num_leapfrog = 20)\n    ~step_size ~(metric : metric) () =\n  let step (state : state) log_density_fn =\n    let dim = Nx.numel state.position in\n    let glp = grad_log_prob log_density_fn in\n    let p0 = metric.sample_momentum dim in\n    let ke_current = metric.kinetic_energy p0 in\n    let q = ref state.position in\n    let p = ref p0 in\n    let lp = ref state.log_density in\n    let g = ref state.grad_log_density in\n    for _ = 1 to num_leapfrog do\n      let q', p', lp', g' = integrator metric.scale !q !p !g glp step_size in\n      q := q';\n      p := p';\n      lp := lp';\n      g := g'\n    done;\n    let ke_proposed = metric.kinetic_energy !p in\n    let delta = !lp -. state.log_density -. (ke_proposed -. ke_current) in\n    let log_accept = if Float.is_nan delta then Float.neg_infinity else delta in\n    let acceptance_rate = Float.min 1.0 (Float.exp log_accept) in\n    let accepted = Float.log (Nx.item [] (Nx.rand f64 [||])) < log_accept in\n    let new_state =\n      if accepted then\n        { position = !q; log_density = !lp; grad_log_density = !g }\n      else state\n    in\n    let info =\n      {\n        acceptance_rate;\n        (* Divergent when the Hamiltonian energy error exceeds 1000; a NaN\n           [delta] maps to [neg_infinity] and is flagged as divergent. *)\n        is_divergent = -. log_accept > 1000.0;\n        energy = -. !lp +. 
ke_proposed;\n        num_integration_steps = num_leapfrog;\n      }\n    in\n    (new_state, info)\n  in\n  { init = init_state; step }\n\nlet nuts_kernel ?(integrator : integrator = leapfrog) ?(max_depth = 10)\n    ~step_size ~(metric : metric) () =\n  let step state log_density_fn =\n    Nuts.step integrator metric step_size max_depth state log_density_fn\n  in\n  { init = init_state; step }\n\n(* Sampling *)\n\nlet metric_of_mass_matrix dim mm =\n  match Adapt.mass_matrix_inv_diag mm with\n  | None -> unit_metric dim\n  | Some inv_mass_diag -> diagonal_metric inv_mass_diag\n\nlet sample ?(step_size = 0.01) ?(target_accept = 0.65) ?num_warmup ?report ~n\n    log_density_fn init make_kernel =\n  let num_warmup = match num_warmup with Some w -> w | None -> n / 2 in\n  let dim = Nx.numel init in\n  let schedule = Adapt.build_schedule num_warmup in\n  let met = ref (unit_metric dim) in\n  let kern = ref (make_kernel ~step_size ~metric:!met) in\n  let state = ref (!kern.init init log_density_fn) in\n  let ss = ref (Adapt.step_size_init ~target_accept step_size) in\n  let mm = ref (Adapt.mass_matrix_init dim) in\n  for i = 1 to num_warmup do\n    let eps = Adapt.step_size_current !ss in\n    kern := make_kernel ~step_size:eps ~metric:!met;\n    let new_state, info = !kern.step !state log_density_fn in\n    state := new_state;\n    (match schedule.(i - 1) with\n    | Adapt.Fast ->\n        ss := Adapt.step_size_update !ss ~acceptance_rate:info.acceptance_rate\n    | Adapt.Slow ->\n        ss := Adapt.step_size_update !ss ~acceptance_rate:info.acceptance_rate;\n        mm := Adapt.mass_matrix_update !mm new_state.position\n    | Adapt.Slow_end ->\n        ss := Adapt.step_size_update !ss ~acceptance_rate:info.acceptance_rate;\n        mm := Adapt.mass_matrix_update !mm new_state.position;\n        met := metric_of_mass_matrix dim !mm;\n        mm := Adapt.mass_matrix_reset !mm;\n        ss := Adapt.step_size_reset !ss);\n    match report with\n    | Some f -> f 
~step:(-(num_warmup - i + 1)) new_state info\n    | None -> ()\n  done;\n  let final_step_size = Adapt.step_size_final !ss in\n  kern := make_kernel ~step_size:final_step_size ~metric:!met;\n  let samples = Nx.zeros f64 [| n; dim |] in\n  let log_densities = Nx.zeros f64 [| n |] in\n  let total_accept = ref 0.0 in\n  let num_divergent = ref 0 in\n  for i = 0 to n - 1 do\n    let new_state, info = !kern.step !state log_density_fn in\n    state := new_state;\n    total_accept := !total_accept +. info.acceptance_rate;\n    if info.is_divergent then incr num_divergent;\n    Nx.set_slice [ I i ] samples new_state.position;\n    Nx.set_item [ i ] new_state.log_density log_densities;\n    match report with Some f -> f ~step:i new_state info | None -> ()\n  done;\n  {\n    samples;\n    log_densities;\n    stats =\n      {\n        accept_rate = !total_accept /. Float.of_int n;\n        step_size = final_step_size;\n        num_divergent = !num_divergent;\n      };\n  }\n\nlet hmc ?(step_size = 0.01) ?(target_accept = 0.65) ?num_leapfrog ?num_warmup ~n\n    log_prob init =\n  sample ~step_size ~target_accept ?num_warmup ~n log_prob init\n    (fun ~step_size ~metric ->\n      hmc_kernel ?integrator:None ?num_leapfrog ~step_size ~metric ())\n\nlet nuts ?(step_size = 0.01) ?(target_accept = 0.80) ?max_depth ?num_warmup ~n\n    log_prob init =\n  sample ~step_size ~target_accept ?num_warmup ~n log_prob init\n    (fun ~step_size ~metric ->\n      nuts_kernel ?integrator:None ?max_depth ~step_size ~metric ())\n\n(* Diagnostics *)\n\nlet autocorr samples =\n  let n = (Nx.shape samples).(0) in\n  let dim = (Nx.shape samples).(1) in\n  let mean = Nx.mean ~axes:[ 0 ] samples in\n  let centered = Nx.sub samples mean in\n  let max_lag = n / 2 in\n  let acf = Nx.zeros f64 [| max_lag; dim |] in\n  for d = 0 to dim - 1 do\n    let col = Nx.slice [ A; I d ] centered in\n    let v = ref 0.0 in\n    for i = 0 to n - 1 do\n      let x = Nx.item [ i ] col in\n      v := !v +. (x *. 
x)\n    done;\n    v := !v /. Float.of_int n;\n    for lag = 0 to max_lag - 1 do\n      let c = ref 0.0 in\n      for i = 0 to n - 1 - lag do\n        c := !c +. (Nx.item [ i ] col *. Nx.item [ i + lag ] col)\n      done;\n      Nx.set_item [ lag; d ] (!c /. (Float.of_int n *. !v)) acf\n    done\n  done;\n  acf\n\nlet ess samples =\n  let n = (Nx.shape samples).(0) in\n  let dim = (Nx.shape samples).(1) in\n  let acf = autocorr samples in\n  let max_lag = n / 2 in\n  let result = Nx.zeros f64 [| dim |] in\n  for d = 0 to dim - 1 do\n    let tau = ref 1.0 in\n    let lag = ref 1 in\n    let stop = ref false in\n    while !lag < max_lag - 1 && not !stop do\n      let rho1 = Nx.item [ !lag; d ] acf in\n      let rho2 = Nx.item [ !lag + 1; d ] acf in\n      if rho1 +. rho2 < 0.0 then stop := true\n      else begin\n        tau := !tau +. (2.0 *. rho1);\n        incr lag\n      end\n    done;\n    Nx.set_item [ d ] (Float.of_int n /. !tau) result\n  done;\n  result\n\nlet rhat chains =\n  let m = Array.length chains in\n  let n = (Nx.shape chains.(0)).(0) in\n  let dim = (Nx.shape chains.(0)).(1) in\n  let half = n / 2 in\n  let split_chains = Array.make (2 * m) chains.(0) in\n  for i = 0 to m - 1 do\n    split_chains.(2 * i) <- Nx.slice [ R (0, half - 1) ] chains.(i);\n    split_chains.((2 * i) + 1) <- Nx.slice [ R (half, n - 1) ] chains.(i)\n  done;\n  let nf = Float.of_int half in\n  let mf = Float.of_int (2 * m) in\n  let chain_means = Array.map (Nx.mean ~axes:[ 0 ]) split_chains in\n  let grand_mean =\n    Array.fold_left Nx.add (Nx.zeros f64 [| dim |]) chain_means |> fun s ->\n    Nx.div_s s mf\n  in\n  let b =\n    Array.fold_left\n      (fun acc cm ->\n        let diff = Nx.sub cm grand_mean in\n        Nx.add acc (Nx.square diff))\n      (Nx.zeros f64 [| dim |]) chain_means\n    |> fun s -> Nx.mul_s s (nf /. (mf -. 
1.0))\n  in\n  let w =\n    Array.fold_left\n      (fun acc chain ->\n        let cm = Nx.mean ~axes:[ 0 ] chain in\n        let centered = Nx.sub chain cm in\n        let s2 =\n          Nx.div_s (Nx.sum ~axes:[ 0 ] (Nx.square centered)) (nf -. 1.0)\n        in\n        Nx.add acc s2)\n      (Nx.zeros f64 [| dim |]) split_chains\n    |> fun s -> Nx.div_s s mf\n  in\n  let var_hat = Nx.add (Nx.mul_s w ((nf -. 1.0) /. nf)) (Nx.div_s b nf) in\n  Nx.sqrt (Nx.div var_hat w)\n"
  },
  {
    "path": "packages/norn/lib/norn.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** MCMC sampling with automatic gradients.\n\n    Norn provides Markov chain Monte Carlo samplers that leverage {!Rune}'s\n    automatic differentiation. The core abstraction is the {!type-kernel}: a\n    composable [{init; step}] record that any algorithm produces and any\n    sampling loop consumes.\n\n    {b Quick start.}\n    {[\n    let result = Norn.nuts ~n:1000 log_prob (Nx.zeros Nx.float64 [| dim |])\n    ]}\n\n    For configured usage, construct a kernel and pass it to {!sample}:\n    {[\n    let result =\n      Norn.sample ~n:1000 log_prob init (fun ~step_size ~metric ->\n          Norn.nuts_kernel ~step_size ~metric ())\n    ]} *)\n\n(** {1:types Types} *)\n\ntype state = {\n  position : Nx.float64_t;  (** Current sample, shape [[dim]]. *)\n  log_density : float;  (** Log-density at {!position}. *)\n  grad_log_density : Nx.float64_t;\n      (** Gradient of log-density at {!position}, shape [[dim]]. *)\n}\n(** The type for sampler states. Shared across all gradient-based kernels. *)\n\ntype info = {\n  acceptance_rate : float;\n      (** Metropolis acceptance probability in \\[0, 1\\]. *)\n  is_divergent : bool;  (** [true] when the energy error exceeds 1000. *)\n  energy : float;  (** Total Hamiltonian energy of the proposal. *)\n  num_integration_steps : int;  (** Leapfrog steps taken this transition. *)\n}\n(** The type for per-step diagnostics. *)\n\ntype kernel = {\n  init : Nx.float64_t -> (Nx.float64_t -> Nx.float64_t) -> state;\n      (** [init position log_density_fn] is the initial state at [position]. *)\n  step : state -> (Nx.float64_t -> Nx.float64_t) -> state * info;\n      (** [step state log_density_fn] is [(new_state, info)]. *)\n}\n(** The type for sampling kernels. 
Constructed by {!hmc_kernel}, {!nuts_kernel},\n    etc. The [log_density_fn] argument is not baked in so the same kernel can be\n    reused with different targets (e.g. tempering). *)\n\n(** {1:integrators Integrators} *)\n\ntype integrator =\n  (Nx.float64_t -> Nx.float64_t) ->\n  Nx.float64_t ->\n  Nx.float64_t ->\n  Nx.float64_t ->\n  (Nx.float64_t -> float * Nx.float64_t) ->\n  float ->\n  Nx.float64_t * Nx.float64_t * float * Nx.float64_t\n(** The type for symplectic integrators.\n    [integrator kinetic_energy_grad position momentum gradient grad_log_prob\n     step_size] is [(new_pos, new_mom, new_log_density, new_grad)].\n\n    [kinetic_energy_grad] is [M{^-1} p], the gradient of the kinetic energy with\n    respect to momentum. For unit metric this is the identity. The kernel\n    provides it from {!type-metric}[.scale]. *)\n\nval leapfrog : integrator\n(** [leapfrog] is the velocity Verlet integrator (second-order symplectic). *)\n\nval mclachlan : integrator\n(** [mclachlan] is McLachlan's two-stage integrator. Higher acceptance rates\n    than {!leapfrog} on challenging posteriors (McLachlan 1995). Two gradient\n    evaluations per step. *)\n\nval yoshida : integrator\n(** [yoshida] is Yoshida's fourth-order symplectic integrator. More accurate\n    than {!leapfrog} at the cost of three gradient evaluations per step. *)\n\n(** {1:metrics Metrics} *)\n\ntype metric = {\n  sample_momentum : int -> Nx.float64_t;\n      (** [sample_momentum dim] draws momentum from the kinetic energy\n          distribution. *)\n  kinetic_energy : Nx.float64_t -> float;\n      (** [kinetic_energy p] is [0.5 * p{^T} M{^-1} p]. *)\n  scale : Nx.float64_t -> Nx.float64_t;  (** [scale v] is [M{^-1} v]. *)\n  is_turning : Nx.float64_t -> Nx.float64_t -> Nx.float64_t -> bool;\n      (** [is_turning left_p right_p momentum_sum] is the U-turn criterion for\n          NUTS trajectory termination. *)\n}\n(** The type for mass matrix metrics. 
Defines the geometry of the sampling\n    space. *)\n\nval unit_metric : int -> metric\n(** [unit_metric dim] is the identity metric. Momentum sampled from [N(0, I)].\n*)\n\nval diagonal_metric : Nx.float64_t -> metric\n(** [diagonal_metric inv_mass_diag] is a diagonal metric with the given inverse\n    mass diagonal. *)\n\nval dense_metric : Nx.float64_t -> metric\n(** [dense_metric inv_mass_matrix] is a dense metric with the given inverse mass\n    matrix. Uses Cholesky decomposition for momentum sampling. *)\n\n(** {1:kernels Kernels} *)\n\nval hmc_kernel :\n  ?integrator:integrator ->\n  ?num_leapfrog:int ->\n  step_size:float ->\n  metric:metric ->\n  unit ->\n  kernel\n(** [hmc_kernel ~step_size ~metric ()] is a Hamiltonian Monte Carlo kernel.\n\n    [integrator] defaults to {!leapfrog}. [num_leapfrog] defaults to [20]. *)\n\nval nuts_kernel :\n  ?integrator:integrator ->\n  ?max_depth:int ->\n  step_size:float ->\n  metric:metric ->\n  unit ->\n  kernel\n(** [nuts_kernel ~step_size ~metric ()] is a No-U-Turn Sampler kernel.\n\n    NUTS automatically adapts the trajectory length using a binary tree\n    expansion with U-turn detection. This eliminates the [num_leapfrog]\n    parameter of {!hmc_kernel}.\n\n    [integrator] defaults to {!leapfrog}. [max_depth] defaults to [10]. *)\n\n(** {1:sampling Sampling} *)\n\ntype stats = {\n  accept_rate : float;  (** Mean acceptance rate during sampling. *)\n  step_size : float;  (** Final adapted step size. *)\n  num_divergent : int;  (** Number of divergent transitions. *)\n}\n(** The type for aggregate sampling statistics. *)\n\ntype result = {\n  samples : Nx.float64_t;  (** Shape [[n; dim]]. *)\n  log_densities : Nx.float64_t;  (** Shape [[n]]. *)\n  stats : stats;\n}\n(** The type for sampling results. 
*)\n\nval sample :\n  ?step_size:float ->\n  ?target_accept:float ->\n  ?num_warmup:int ->\n  ?report:(step:int -> state -> info -> unit) ->\n  n:int ->\n  (Nx.float64_t -> Nx.float64_t) ->\n  Nx.float64_t ->\n  (step_size:float -> metric:metric -> kernel) ->\n  result\n(** [sample ~n log_prob init make_kernel] draws [n] samples from the\n    distribution with unnormalized log-density [log_prob], starting at [init].\n\n    During [num_warmup] iterations (discarded), step size and mass matrix are\n    adapted using Stan-style window adaptation: an initial fast phase (step size\n    only), doubling slow windows (step size + mass matrix with regularized\n    Welford estimation), and a final fast phase.\n\n    [step_size] defaults to [0.01]. [target_accept] defaults to [0.65].\n    [num_warmup] defaults to [n / 2]. [report] is called after each step with\n    negative step numbers during warmup. *)\n\nval hmc :\n  ?step_size:float ->\n  ?target_accept:float ->\n  ?num_leapfrog:int ->\n  ?num_warmup:int ->\n  n:int ->\n  (Nx.float64_t -> Nx.float64_t) ->\n  Nx.float64_t ->\n  result\n(** [hmc ~n log_prob init] draws [n] samples using Hamiltonian Monte Carlo with\n    window adaptation.\n\n    [step_size] defaults to [0.01]. [target_accept] defaults to [0.65].\n    [num_leapfrog] defaults to [20]. [num_warmup] defaults to [n / 2]. *)\n\nval nuts :\n  ?step_size:float ->\n  ?target_accept:float ->\n  ?max_depth:int ->\n  ?num_warmup:int ->\n  n:int ->\n  (Nx.float64_t -> Nx.float64_t) ->\n  Nx.float64_t ->\n  result\n(** [nuts ~n log_prob init] draws [n] samples using the No-U-Turn Sampler with\n    window adaptation.\n\n    [step_size] defaults to [0.01]. [target_accept] defaults to [0.80].\n    [max_depth] defaults to [10]. [num_warmup] defaults to [n / 2]. *)\n\n(** {1:diagnostics Diagnostics} *)\n\nval ess : Nx.float64_t -> Nx.float64_t\n(** [ess samples] is the effective sample size for each parameter. [samples] has\n    shape [[n; dim]], returns shape [[dim]]. 
Computed via autocorrelation with\n    the initial monotone sequence estimator. *)\n\nval rhat : Nx.float64_t array -> Nx.float64_t\n(** [rhat chains] is the split R-hat convergence diagnostic for each parameter.\n    Each chain has shape [[n; dim]], returns shape [[dim]]. Values close to\n    [1.0] indicate convergence; above [1.01] suggests the chains have not mixed.\n*)\n"
  },
  {
    "path": "packages/norn/lib/nuts.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* No-U-Turn Sampler (Hoffman & Gelman 2014). Recursive binary tree expansion\n   with multinomial sampling and generalised U-turn detection (Betancourt 2013).\n   Follows BlackJAX's implementation strictly: - inner recursion:\n   progressive_uniform_sampling (multinomial) - outer expansion:\n   progressive_biased_sampling - U-turn: rho = m_sum - (m_right + m_left)/2,\n   dot(v, rho) <= 0 *)\n\nlet f64 = Nx.float64\n\n(* Proposal: sampled state and acceptance statistics. *)\n\ntype proposal = {\n  q : Nx.float64_t;\n  lp : float;\n  g : Nx.float64_t;\n  energy : float;\n  weight : float;\n  sum_log_p_accept : float;\n}\n\n(* Trajectory: endpoints and accumulated momentum. *)\n\ntype trajectory = {\n  left_q : Nx.float64_t;\n  left_p : Nx.float64_t;\n  left_g : Nx.float64_t;\n  right_q : Nx.float64_t;\n  right_p : Nx.float64_t;\n  right_g : Nx.float64_t;\n  momentum_sum : Nx.float64_t;\n  num_states : int;\n}\n\nlet log_add_exp a b =\n  if a = Float.neg_infinity then b\n  else if b = Float.neg_infinity then a\n  else if a >= b then a +. Float.log (1.0 +. Float.exp (b -. a))\n  else b +. Float.log (1.0 +. Float.exp (a -. b))\n\n(* One leapfrog step → leaf proposal + trajectory. *)\nlet build_leaf (integrator : Internal.integrator) (metric : Internal.metric)\n    direction step_size grad_log_prob q p g initial_energy =\n  let eps = Float.of_int direction *. step_size in\n  let q', p', lp', g' = integrator metric.scale q p g grad_log_prob eps in\n  let energy = -.lp' +. metric.kinetic_energy p' in\n  let delta = initial_energy -. 
energy in\n  let delta = if Float.is_nan delta then Float.neg_infinity else delta in\n  let proposal =\n    {\n      q = q';\n      lp = lp';\n      g = g';\n      energy;\n      weight = delta;\n      sum_log_p_accept = Float.min delta 0.0;\n    }\n  in\n  let trajectory =\n    {\n      left_q = q';\n      left_p = p';\n      left_g = g';\n      right_q = q';\n      right_p = p';\n      right_g = g';\n      momentum_sum = p';\n      num_states = 1;\n    }\n  in\n  let is_diverging = delta < -1000.0 in\n  (proposal, trajectory, is_diverging)\n\n(* progressive_uniform_sampling: multinomial for inner tree building. *)\nlet uniform_sample prop new_prop =\n  let p_accept = 1.0 /. (1.0 +. Float.exp (prop.weight -. new_prop.weight)) in\n  let u = Nx.item [] (Nx.rand f64 [||]) in\n  let weight = log_add_exp prop.weight new_prop.weight in\n  let sum_log_p_accept =\n    log_add_exp prop.sum_log_p_accept new_prop.sum_log_p_accept\n  in\n  if u < p_accept then { new_prop with weight; sum_log_p_accept }\n  else { prop with weight; sum_log_p_accept }\n\n(* progressive_biased_sampling: for outer tree doubling. *)\nlet biased_sample prop new_prop =\n  let p_accept = Float.min 1.0 (Float.exp (new_prop.weight -. 
prop.weight)) in\n  let u = Nx.item [] (Nx.rand f64 [||]) in\n  let weight = log_add_exp prop.weight new_prop.weight in\n  let sum_log_p_accept =\n    log_add_exp prop.sum_log_p_accept new_prop.sum_log_p_accept\n  in\n  if u < p_accept then { new_prop with weight; sum_log_p_accept }\n  else { prop with weight; sum_log_p_accept }\n\nlet merge_trajectories direction traj new_traj =\n  let l, r = if direction > 0 then (traj, new_traj) else (new_traj, traj) in\n  {\n    left_q = l.left_q;\n    left_p = l.left_p;\n    left_g = l.left_g;\n    right_q = r.right_q;\n    right_p = r.right_p;\n    right_g = r.right_g;\n    momentum_sum = Nx.add l.momentum_sum r.momentum_sum;\n    num_states = l.num_states + r.num_states;\n  }\n\n(* Recursive tree building — buildtree_integrate from BlackJAX\n   trajectory.py:dynamic_recursive_integration. *)\nlet rec build_tree integrator metric step_size grad_log_prob q p g depth\n    direction initial_energy =\n  if depth = 0 then\n    let prop, traj, is_div =\n      build_leaf integrator metric direction step_size grad_log_prob q p g\n        initial_energy\n    in\n    (prop, traj, is_div, false)\n  else\n    let half = depth - 1 in\n    let prop, traj, is_div, is_turn =\n      build_tree integrator metric step_size grad_log_prob q p g half direction\n        initial_energy\n    in\n    if is_div || is_turn then (prop, traj, is_div, is_turn)\n    else\n      let q', p', g' =\n        if direction > 0 then (traj.right_q, traj.right_p, traj.right_g)\n        else (traj.left_q, traj.left_p, traj.left_g)\n      in\n      let new_prop, new_traj, new_div, new_turn =\n        build_tree integrator metric step_size grad_log_prob q' p' g' half\n          direction initial_energy\n      in\n      let merged = merge_trajectories direction traj new_traj in\n      if new_turn then\n        (* Second half turning: keep old proposal state/weight, accumulate\n           sum_log_p_accept for consistent acceptance_rate. 
*)\n        let slpa =\n          log_add_exp prop.sum_log_p_accept new_prop.sum_log_p_accept\n        in\n        ({ prop with sum_log_p_accept = slpa }, merged, new_div, true)\n      else\n        (* Check U-turn on merged trajectory *)\n        let turning =\n          metric.is_turning merged.left_p merged.right_p merged.momentum_sum\n        in\n        (* Always sample when second half is not turning *)\n        let sampled = uniform_sample prop new_prop in\n        (sampled, merged, new_div, turning)\n\n(* Outer expansion loop — dynamic_multiplicative_expansion from BlackJAX\n   trajectory.py. *)\nlet step (integrator : Internal.integrator) (metric : Internal.metric) step_size\n    max_depth (state : Internal.state) log_density_fn =\n  let grad_log_prob q =\n    let lp, g = Rune.value_and_grad log_density_fn q in\n    (Nx.item [] lp, g)\n  in\n  let dim = Nx.numel state.position in\n  let p0 = metric.sample_momentum dim in\n  let ke0 = metric.kinetic_energy p0 in\n  let initial_energy = -.state.log_density +. 
ke0 in\n  let proposal =\n    ref\n      {\n        q = state.position;\n        lp = state.log_density;\n        g = state.grad_log_density;\n        energy = initial_energy;\n        weight = 0.0;\n        sum_log_p_accept = Float.neg_infinity;\n      }\n  in\n  let trajectory =\n    ref\n      {\n        left_q = state.position;\n        left_p = p0;\n        left_g = state.grad_log_density;\n        right_q = state.position;\n        right_p = p0;\n        right_g = state.grad_log_density;\n        momentum_sum = p0;\n        num_states = 0;\n      }\n  in\n  let depth = ref 0 in\n  let diverging = ref false in\n  let turning = ref false in\n  while !depth < max_depth && (not !diverging) && not !turning do\n    let direction = if Nx.item [] (Nx.rand f64 [||]) < 0.5 then -1 else 1 in\n    let q, p, g =\n      if direction > 0 then\n        (!trajectory.right_q, !trajectory.right_p, !trajectory.right_g)\n      else (!trajectory.left_q, !trajectory.left_p, !trajectory.left_g)\n    in\n    let sub_prop, sub_traj, sub_div, sub_turn =\n      build_tree integrator metric step_size grad_log_prob q p g !depth\n        direction initial_energy\n    in\n    (* Update proposal: biased sampling unless subtree diverged or turned *)\n    if sub_div || sub_turn then\n      proposal :=\n        {\n          !proposal with\n          sum_log_p_accept =\n            log_add_exp !proposal.sum_log_p_accept sub_prop.sum_log_p_accept;\n        }\n    else proposal := biased_sample !proposal sub_prop;\n    (* Always merge trajectory *)\n    trajectory := merge_trajectories direction !trajectory sub_traj;\n    (* Check U-turn on full trajectory *)\n    let full_turn =\n      metric.is_turning !trajectory.left_p !trajectory.right_p\n        !trajectory.momentum_sum\n    in\n    diverging := sub_div;\n    turning := sub_turn || full_turn;\n    incr depth\n  done;\n  let p = !proposal in\n  let t = !trajectory in\n  let new_state : Internal.state =\n    { position = p.q; log_density = 
p.lp; grad_log_density = p.g }\n  in\n  let n_states = Float.of_int (max 1 t.num_states) in\n  let acceptance_rate = Float.exp p.sum_log_p_accept /. n_states in\n  let info : Internal.info =\n    {\n      acceptance_rate;\n      is_divergent = !diverging;\n      energy = initial_energy;\n      num_integration_steps = t.num_states;\n    }\n  in\n  (new_state, info)\n"
  },
  {
    "path": "packages/norn/test/debug_nuts.ml",
    "content": "let f64 = Nx.float64\nlet true_mean = Nx.create f64 [| 2 |] [| 3.0; -1.0 |]\nlet true_cov = Nx.create f64 [| 2; 2 |] [| 1.0; 0.5; 0.5; 2.0 |]\nlet cov_inv = Nx.inv true_cov\n\nlet log_prob x =\n  let d = Nx.sub x true_mean in\n  let dt = Nx.reshape [| 1; 2 |] d in\n  let mahal = Nx.matmul (Nx.matmul dt cov_inv) (Nx.reshape [| 2; 1 |] d) in\n  Nx.mul_s (Nx.reshape [||] mahal) (-0.5)\n\nlet compute_stats name positions n =\n  let nf = Float.of_int n in\n  let sum0 = ref 0.0 in\n  let sum1 = ref 0.0 in\n  for i = 0 to n - 1 do\n    let x0, x1 = positions.(i) in\n    sum0 := !sum0 +. x0;\n    sum1 := !sum1 +. x1\n  done;\n  let mean0 = !sum0 /. nf in\n  let mean1 = !sum1 /. nf in\n  let var0 = ref 0.0 in\n  let var1 = ref 0.0 in\n  for i = 0 to n - 1 do\n    let x0, x1 = positions.(i) in\n    var0 := !var0 +. ((x0 -. mean0) *. (x0 -. mean0));\n    var1 := !var1 +. ((x1 -. mean1) *. (x1 -. mean1))\n  done;\n  let var0 = !var0 /. nf in\n  let var1 = !var1 /. nf in\n  Printf.printf \"%s:\\n\" name;\n  Printf.printf \"  mean = [%.4f, %.4f]  (true: [3.0, -1.0])\\n\" mean0 mean1;\n  Printf.printf \"  var  = [%.4f, %.4f]  (true: [1.0, 2.0])\\n\" var0 var1;\n  Printf.printf \"  var error = [%.1f%%, %.1f%%]\\n\"\n    (100.0 *. Float.abs (var0 -. 1.0) /. 1.0)\n    (100.0 *. Float.abs (var1 -. 2.0) /. 
2.0)\n\nlet () =\n  Printf.printf \"=== Comparison: Norn vs BlackJAX ===\\n\\n\";\n  Printf.printf \"Target: 2D Gaussian, mean=[3,-1], cov=[[1,0.5],[0.5,2]]\\n\\n\";\n\n  (* --- HMC: match BlackJAX: 1000 warmup + 3000 samples, fixed --- *)\n  Nx.Rng.run ~seed:42 (fun () ->\n      let metric = Norn.unit_metric 2 in\n      let kernel = Norn.hmc_kernel ~step_size:0.1 ~num_leapfrog:20 ~metric () in\n      let init = Nx.zeros f64 [| 2 |] in\n      let state = ref (kernel.init init log_prob) in\n      for _ = 1 to 1000 do\n        let s, _ = kernel.step !state log_prob in\n        state := s\n      done;\n      let positions = Array.make 3000 (0.0, 0.0) in\n      for i = 0 to 2999 do\n        let s, _ = kernel.step !state log_prob in\n        state := s;\n        positions.(i) <- (Nx.item [ 0 ] s.position, Nx.item [ 1 ] s.position)\n      done;\n      compute_stats \"Norn HMC (fixed eps=0.1, 1000w+3000s)\" positions 3000;\n      Printf.printf\n        \"  BlackJAX: mean=[3.0048, -0.9621] var=[0.9896, 2.0819]\\n\\n\");\n\n  (* --- NUTS: match BlackJAX: 1000 warmup + 3000 samples, fixed --- *)\n  Nx.Rng.run ~seed:42 (fun () ->\n      let metric = Norn.unit_metric 2 in\n      let kernel = Norn.nuts_kernel ~step_size:0.1 ~metric () in\n      let init = Nx.zeros f64 [| 2 |] in\n      let state = ref (kernel.init init log_prob) in\n      for _ = 1 to 1000 do\n        let s, _ = kernel.step !state log_prob in\n        state := s\n      done;\n      let positions = Array.make 3000 (0.0, 0.0) in\n      let total_lf = ref 0 in\n      for i = 0 to 2999 do\n        let s, info = kernel.step !state log_prob in\n        state := s;\n        total_lf := !total_lf + info.num_integration_steps;\n        positions.(i) <- (Nx.item [ 0 ] s.position, Nx.item [ 1 ] s.position)\n      done;\n      compute_stats \"Norn NUTS (fixed eps=0.1, 1000w+3000s)\" positions 3000;\n      Printf.printf \"  avg leapfrog/step = %.1f\\n\"\n        (Float.of_int !total_lf /. 
3000.0);\n      Printf.printf\n        \"  BlackJAX: mean=[3.0297, -0.9192] var=[1.0453, 2.1354]\\n\\n\");\n\n  (* --- NUTS adapted: warmup trace + 100 samples --- *)\n  Nx.Rng.run ~seed:42 (fun () ->\n      let report ~step (_state : Norn.state) (info : Norn.info) =\n        if step < 0 then begin\n          let ws = 1000 + step + 1 in\n          if ws <= 10 || ws mod 100 = 0 then\n            Printf.printf \"  warmup %4d: accept=%.4f lf=%d\\n\" ws\n              info.acceptance_rate info.num_integration_steps\n        end\n      in\n      let result =\n        Norn.sample ~step_size:0.1 ~target_accept:0.80 ~num_warmup:1000 ~report\n          ~n:100 log_prob (Nx.zeros f64 [| 2 |]) (fun ~step_size ~metric ->\n            Norn.nuts_kernel ~step_size ~metric ())\n      in\n      Printf.printf \"\\nAdapted: step_size=%.6f accept=%.4f divergent=%d\\n\"\n        result.stats.step_size result.stats.accept_rate\n        result.stats.num_divergent)\n"
  },
  {
    "path": "packages/norn/test/dune",
    "content": "(test\n (name test_norn)\n (package norn)\n (libraries nx rune norn windtrap))\n\n(executable\n (name debug_nuts)\n (libraries nx rune norn))\n"
  },
  {
    "path": "packages/norn/test/test_blackjax_ref.py",
    "content": "\"\"\"Reference test: 2D correlated Gaussian with BlackJAX NUTS and HMC.\"\"\"\nimport jax\nimport jax.numpy as jnp\nimport blackjax\n\ntrue_mean = jnp.array([3.0, -1.0])\ntrue_cov = jnp.array([[1.0, 0.5], [0.5, 2.0]])\ncov_inv = jnp.linalg.inv(true_cov)\n\ndef log_prob(x):\n    d = x - true_mean\n    return -0.5 * d @ cov_inv @ d\n\n# --- HMC ---\nkey = jax.random.key(42)\ninv_mass = jnp.ones(2)\nhmc_alg = blackjax.hmc(log_prob, step_size=0.1, inverse_mass_matrix=inv_mass, num_integration_steps=20)\nstate = hmc_alg.init(jnp.zeros(2))\nstep_fn = jax.jit(hmc_alg.step)\n\nsamples_hmc = []\nfor i in range(4000):  # 1000 warmup + 3000 samples\n    key = jax.random.fold_in(key, i)\n    state, info = step_fn(key, state)\n    if i >= 1000:\n        samples_hmc.append(state.position)\n\nsamples_hmc = jnp.stack(samples_hmc)\nprint(f\"HMC (no adaptation, fixed step_size=0.1):\")\nprint(f\"  mean = {jnp.mean(samples_hmc, axis=0)}\")\nprint(f\"  var  = {jnp.var(samples_hmc, axis=0)}\")\nprint()\n\n# --- NUTS ---\nkey = jax.random.key(42)\nnuts_alg = blackjax.nuts(log_prob, step_size=0.1, inverse_mass_matrix=inv_mass)\nstate = nuts_alg.init(jnp.zeros(2))\nstep_fn = jax.jit(nuts_alg.step)\n\nsamples_nuts = []\nfor i in range(4000):\n    key = jax.random.fold_in(key, i)\n    state, info = step_fn(key, state)\n    if i >= 1000:\n        samples_nuts.append(state.position)\n\nsamples_nuts = jnp.stack(samples_nuts)\nprint(f\"NUTS (no adaptation, fixed step_size=0.1):\")\nprint(f\"  mean = {jnp.mean(samples_nuts, axis=0)}\")\nprint(f\"  var  = {jnp.var(samples_nuts, axis=0)}\")\nprint()\n\n# --- NUTS with window adaptation ---\nkey = jax.random.key(42)\nwarmup = blackjax.window_adaptation(blackjax.nuts, log_prob, num_steps=1000)\nkey, warmup_key = jax.random.split(key)\n(adapted_state, adapted_params), _ = warmup.run(warmup_key, jnp.zeros(2))\nprint(f\"Window adaptation results:\")\nprint(f\"  step_size = {adapted_params['step_size']}\")\nprint(f\"  inv_mass  = 
{adapted_params['inverse_mass_matrix']}\")\n\nnuts_adapted = blackjax.nuts(log_prob, **adapted_params)\nstep_fn = jax.jit(nuts_adapted.step)\nstate = adapted_state\n\nsamples_adapted = []\nfor i in range(3000):\n    key = jax.random.fold_in(key, i)\n    state, info = step_fn(key, state)\n    samples_adapted.append(state.position)\n\nsamples_adapted = jnp.stack(samples_adapted)\nprint(f\"NUTS (with window adaptation):\")\nprint(f\"  mean = {jnp.mean(samples_adapted, axis=0)}\")\nprint(f\"  var  = {jnp.var(samples_adapted, axis=0)}\")\n"
  },
  {
    "path": "packages/norn/test/test_norn.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Windtrap\n\nlet f64 = Nx.float64\n\n(* 2D correlated Gaussian: mean [3, -1], covariance [[1, 0.5], [0.5, 2]] *)\nlet true_mean = Nx.create f64 [| 2 |] [| 3.0; -1.0 |]\nlet true_cov = Nx.create f64 [| 2; 2 |] [| 1.0; 0.5; 0.5; 2.0 |]\nlet cov_inv = Nx.inv true_cov\n\nlet log_prob x =\n  let d = Nx.sub x true_mean in\n  let dt = Nx.reshape [| 1; 2 |] d in\n  let mahal = Nx.matmul (Nx.matmul dt cov_inv) (Nx.reshape [| 2; 1 |] d) in\n  Nx.mul_s (Nx.reshape [||] mahal) (-0.5)\n\nlet check_result msg result =\n  is_true\n    ~msg:\n      (Printf.sprintf \"%s: accept rate %.2f > 0.4\" msg\n         result.Norn.stats.accept_rate)\n    (result.stats.accept_rate > 0.4);\n  let sample_mean = Nx.mean ~axes:[ 0 ] result.samples in\n  for i = 0 to 1 do\n    let sm = Nx.item [ i ] sample_mean in\n    let tm = Nx.item [ i ] true_mean in\n    is_true\n      ~msg:(Printf.sprintf \"%s: mean[%d]: %.2f ~ %.2f\" msg i sm tm)\n      (Float.abs (sm -. tm) < 0.5)\n  done;\n  let centered = Nx.sub result.samples sample_mean in\n  let n = Float.of_int ((Nx.shape result.samples).(0) - 1) in\n  let sample_cov =\n    Nx.div_s (Nx.matmul (Nx.matrix_transpose centered) centered) n\n  in\n  for i = 0 to 1 do\n    let sc = Nx.item [ i; i ] sample_cov in\n    let tc = Nx.item [ i; i ] true_cov in\n    is_true\n      ~msg:(Printf.sprintf \"%s: var[%d]: %.2f ~ %.2f (within 60%%)\" msg i sc tc)\n      (Float.abs (sc -. tc) /. 
tc < 0.6)\n  done\n\nlet test_hmc () =\n  Nx.Rng.run ~seed:42 (fun () ->\n      let init = Nx.zeros f64 [| 2 |] in\n      let result =\n        Norn.hmc ~step_size:0.1 ~num_leapfrog:20 ~num_warmup:200 ~n:500 log_prob\n          init\n      in\n      check_result \"HMC\" result)\n\nlet test_nuts () =\n  Nx.Rng.run ~seed:42 (fun () ->\n      let init = Nx.zeros f64 [| 2 |] in\n      let result =\n        Norn.nuts ~step_size:0.5 ~max_depth:6 ~num_warmup:500 ~n:800 log_prob\n          init\n      in\n      check_result \"NUTS\" result)\n\nlet test_kernel_api () =\n  Nx.Rng.run ~seed:42 (fun () ->\n      let init = Nx.zeros f64 [| 2 |] in\n      let metric = Norn.unit_metric 2 in\n      let kernel = Norn.hmc_kernel ~step_size:0.1 ~metric () in\n      let state = kernel.init init log_prob in\n      is_true ~msg:\"init log_density is finite\"\n        (Float.is_finite state.log_density);\n      let state', info = kernel.step state log_prob in\n      is_true ~msg:\"step produces finite log_density\"\n        (Float.is_finite state'.log_density);\n      is_true ~msg:\"acceptance_rate in [0, 1]\"\n        (info.acceptance_rate >= 0.0 && info.acceptance_rate <= 1.0))\n\nlet test_sample_with_kernel () =\n  Nx.Rng.run ~seed:42 (fun () ->\n      let init = Nx.zeros f64 [| 2 |] in\n      let result =\n        Norn.sample ~step_size:0.1 ~num_warmup:200 ~n:500 log_prob init\n          (fun ~step_size ~metric -> Norn.hmc_kernel ~step_size ~metric ())\n      in\n      check_result \"sample+kernel\" result)\n\nlet test_diagnostics () =\n  Nx.Rng.run ~seed:42 (fun () ->\n      let init = Nx.zeros f64 [| 2 |] in\n      let chains =\n        Array.init 4 (fun i ->\n            Nx.Rng.run ~seed:i (fun () ->\n                Norn.nuts ~step_size:0.1 ~num_warmup:500 ~n:1000 log_prob init))\n      in\n      let chain_samples = Array.map (fun r -> r.Norn.samples) chains in\n      let r = Norn.rhat chain_samples in\n      for d = 0 to 1 do\n        let rv = Nx.item [ d ] r in\n        is_true 
~msg:(Printf.sprintf \"rhat[%d]: %.3f < 1.1\" d rv) (rv < 1.1)\n      done;\n      let e = Norn.ess chain_samples.(0) in\n      for d = 0 to 1 do\n        let ev = Nx.item [ d ] e in\n        is_true ~msg:(Printf.sprintf \"ess[%d]: %.0f > 50\" d ev) (ev > 50.0)\n      done)\n\nlet () =\n  run \"Norn\"\n    [\n      test \"HMC: 2D Gaussian\" test_hmc;\n      test \"NUTS: 2D Gaussian\" test_nuts;\n      test \"Kernel API\" test_kernel_api;\n      test \"Sample with kernel\" test_sample_with_kernel;\n      test \"Diagnostics\" test_diagnostics;\n    ]\n"
  },
  {
    "path": "packages/nx/README.md",
    "content": "# Nx\n\nN-dimensional array library for OCaml.\n\nNx is the core component of the Raven ecosystem, providing efficient\nnumerical computation with multi-device support. It offers NumPy-like\nfunctionality with the benefits of OCaml's strong static type system.\n\n## Features\n\n- Multi-dimensional arrays (tensors) with arbitrary rank\n- Support for data types: float16, float32, float64, int8, int16, int32,\n  int64, uint8, uint16, complex32, complex64\n- Flexible memory layouts: C-contiguous and strided\n- Zero-copy slicing, reshaping, and broadcasting\n- Element-wise and scalar operations (add, sub, mul, div, map, etc.)\n- Linear algebra routines (`dot`, matrix multiplication, transpose,\n  sum, mean, argmax, etc.)\n- Optimized CPU backend; pure OCaml interface leveraging Bigarray\n- I/O support: image formats (PNG, JPEG), NumPy files (.npy, .npz)\n- Seamless integration with the Raven ecosystem: `sowilo`, `quill`, `hugin`, etc.\n\n## Quick Start\n\n```ocaml\nopen Nx\n\n(* Create a 2x3 tensor *)\nlet a = create float32 [|2;3|] [|1.; 2.; 3.; 4.; 5.; 6.|]\n\n(* Fill a tensor with ones *)\nlet b = full float32 [|2;3|] 1.0\n\n(* Element-wise addition *)\nlet c = add a b\n\n(* Matrix multiplication *)\nlet x = create float32 [|2;3|] [|1.;2.;3.;4.;5.;6.|]\nlet y = create float32 [|3;2|] [|7.;8.;9.;10.;11.;12.|]\nlet z = dot x y\n\n(* Reduction: sum across an axis *)\nlet s = sum ~axes:[|1|] x\n```\n\n## Contributing\n\nSee the [Raven monorepo README](../README.md) for contribution guidelines.\n\n## License\n\nISC License. See [LICENSE](../LICENSE) for details.\n"
  },
  {
    "path": "packages/nx/bench/README.md",
    "content": "# Nx Benchmarks\n\nThis directory contains benchmarks for the nx library. We provide comparative benchmarks against NumPy.\n\nAdditional focused suites live under subdirectories:\n\n- `conv2d/` for convolutional workloads\n- `einsum/` for tensor contractions\n- `matmul/` for dense matrix multiplication\n\n## Results Nx\n\n```\n┌────────────────────────────┬──────────┬──────────┬─────────┬─────────┬────────────┐\n│ Name                       │ Wall/Run │  CPU/Run │ mWd/Run │ Speedup │ vs Fastest │\n├────────────────────────────┼──────────┼──────────┼─────────┼─────────┼────────────┤\n│ Transpose 100x100 f32 (Nx) │ 132.92ns │ 174.70ns │ 114.00w │   1.00x │       100% │\n│ Transpose 200x200 f64 (Nx) │ 144.19ns │ 162.57ns │ 114.00w │   0.92x │       108% │\n│ Transpose 50x50 f32 (Nx)   │ 145.52ns │ 184.50ns │ 114.00w │   0.91x │       109% │\n│ Transpose 500x500 f64 (Nx) │ 147.02ns │ 166.86ns │ 114.00w │   0.90x │       111% │\n│ Transpose 100x100 f64 (Nx) │ 147.10ns │ 232.42ns │ 114.00w │   0.90x │       111% │\n│ Transpose 500x500 f32 (Nx) │ 148.00ns │ 162.61ns │ 114.00w │   0.90x │       111% │\n│ Transpose 200x200 f32 (Nx) │ 150.68ns │ 170.96ns │ 114.00w │   0.88x │       113% │\n│ Transpose 50x50 f64 (Nx)   │ 158.90ns │ 161.09ns │ 114.00w │   0.84x │       120% │\n│ Mul 50x50 f32 (Nx)         │   1.61μs │   1.61μs │ 808.00w │   0.08x │      1211% │\n│ Mul 50x50 f64 (Nx)         │   1.74μs │   1.68μs │ 808.00w │   0.08x │      1311% │\n│ Add 50x50 f64 (Nx)         │   1.77μs │   1.82μs │ 808.00w │   0.07x │      1335% │\n│ Add 50x50 f32 (Nx)         │   1.83μs │   1.83μs │ 808.00w │   0.07x │      1374% │\n│ Mul 100x100 f32 (Nx)       │   2.41μs │   2.52μs │ 808.00w │   0.06x │      1814% │\n│ Add 100x100 f32 (Nx)       │   2.69μs │   2.69μs │ 808.00w │   0.05x │      2022% │\n│ Sum 50x50 f64 (Nx)         │   3.23μs │   3.25μs │ 416.00w │   0.04x │      2432% │\n│ Sum 50x50 f32 (Nx)         │   3.27μs │   3.24μs │ 416.00w │   0.04x │      2457% │\n│ 
Mul 100x100 f64 (Nx)       │  22.87μs │  22.72μs │ 801.00w │   0.01x │     17206% │\n│ Add 100x100 f64 (Nx)       │  24.60μs │  24.55μs │ 801.00w │   0.01x │     18510% │\n│ Sum 100x100 f64 (Nx)       │  87.16μs │ 178.36μs │ 416.00w │   0.00x │     65571% │\n│ Sum 100x100 f32 (Nx)       │  87.68μs │ 178.89μs │ 416.00w │   0.00x │     65966% │\n│ Sum 200x200 f64 (Nx)       │  97.11μs │ 220.67μs │ 416.00w │   0.00x │     73060% │\n│ Sum 200x200 f32 (Nx)       │  98.78μs │ 218.97μs │ 416.00w │   0.00x │     74315% │\n│ Mul 200x200 f32 (Nx)       │ 105.68μs │ 152.89μs │ 801.00w │   0.00x │     79507% │\n│ Add 200x200 f32 (Nx)       │ 109.71μs │ 161.79μs │ 801.00w │   0.00x │     82541% │\n│ Sum 500x500 f32 (Nx)       │ 135.81μs │ 425.91μs │ 416.00w │   0.00x │    102173% │\n│ Sum 500x500 f64 (Nx)       │ 139.32μs │ 427.21μs │ 416.00w │   0.00x │    104814% │\n│ Mul 200x200 f64 (Nx)       │ 157.28μs │ 210.49μs │ 801.00w │   0.00x │    118332% │\n│ Add 200x200 f64 (Nx)       │ 162.52μs │ 223.83μs │ 801.00w │   0.00x │    122270% │\n│ Mul 500x500 f32 (Nx)       │ 195.66μs │ 271.61μs │ 801.00w │   0.00x │    147200% │\n│ Add 500x500 f32 (Nx)       │ 204.88μs │ 301.06μs │ 801.00w │   0.00x │    154141% │\n│ Mul 500x500 f64 (Nx)       │ 277.63μs │ 534.55μs │ 801.00w │   0.00x │    208873% │\n│ Add 500x500 f64 (Nx)       │ 284.96μs │ 571.38μs │ 801.00w │   0.00x │    214386% │\n└────────────────────────────┴──────────┴──────────┴─────────┴─────────┴────────────┘\n```\n\n## Results NumPy\n\n```\n┌───────────────────────────────┬──────────┬──────────┬─────────┬─────────┬────────────┐\n│ Name                          │ Wall/Run │  CPU/Run │ mWd/Run │ Speedup │ vs Fastest │\n├───────────────────────────────┼──────────┼──────────┼─────────┼─────────┼────────────┤\n│ Transpose 200x200 f64 (NumPy) │ 309.12ns │ 308.61ns │   0.04w │   1.00x │       100% │\n│ Transpose 500x500 f64 (NumPy) │ 310.86ns │ 310.27ns │   0.06w │   0.99x │       101% │\n│ Transpose 100x100 f64 (NumPy) │ 
311.56ns │ 309.53ns │   0.04w │   0.99x │       101% │\n│ Transpose 50x50 f32 (NumPy)   │ 311.76ns │ 311.08ns │   0.04w │   0.99x │       101% │\n│ Transpose 50x50 f64 (NumPy)   │ 313.21ns │ 312.09ns │   0.04w │   0.99x │       101% │\n│ Transpose 200x200 f32 (NumPy) │ 313.63ns │ 312.67ns │   0.04w │   0.99x │       101% │\n│ Transpose 500x500 f32 (NumPy) │ 315.64ns │ 313.60ns │   0.05w │   0.98x │       102% │\n│ Transpose 100x100 f32 (NumPy) │ 322.03ns │ 315.85ns │   0.04w │   0.96x │       104% │\n│ Add 50x50 f32 (NumPy)         │ 541.58ns │ 540.36ns │   0.06w │   0.57x │       175% │\n│ Mul 50x50 f32 (NumPy)         │ 547.03ns │ 539.71ns │   0.06w │   0.57x │       177% │\n│ Add 50x50 f64 (NumPy)         │ 688.90ns │ 688.46ns │   0.08w │   0.45x │       223% │\n│ Mul 50x50 f64 (NumPy)         │ 775.58ns │ 711.24ns │   0.08w │   0.40x │       251% │\n│ Mul 100x100 f32 (NumPy)       │   1.06µs │   1.06µs │   0.11w │   0.29x │       342% │\n│ Add 100x100 f32 (NumPy)       │   1.08µs │   1.05µs │   0.12w │   0.29x │       351% │\n│ Sum 50x50 f64 (NumPy)         │   1.88µs │   1.88µs │   0.20w │   0.16x │       608% │\n│ Sum 50x50 f32 (NumPy)         │   1.90µs │   1.89µs │   0.20w │   0.16x │       615% │\n│ Mul 100x100 f64 (NumPy)       │   2.54µs │   2.51µs │   0.32w │   0.12x │       822% │\n│ Add 100x100 f64 (NumPy)       │   2.58µs │   2.52µs │   0.32w │   0.12x │       835% │\n│ Sum 100x100 f32 (NumPy)       │   3.16µs │   3.16µs │   0.34w │   0.10x │      1023% │\n│ Sum 100x100 f64 (NumPy)       │   3.16µs │   3.16µs │   0.34w │   0.10x │      1023% │\n│ Mul 200x200 f32 (NumPy)       │   4.39µs │   4.37µs │   0.55w │   0.07x │      1419% │\n│ Add 200x200 f32 (NumPy)       │   4.39µs │   4.38µs │   0.54w │   0.07x │      1421% │\n│ Add 200x200 f64 (NumPy)       │   8.30µs │   8.21µs │   0.95w │   0.04x │      2685% │\n│ Mul 200x200 f64 (NumPy)       │   8.45µs │   8.32µs │   0.97w │   0.04x │      2734% │\n│ Sum 200x200 f32 (NumPy)       │   9.06µs │   8.94µs 
│   0.92w │   0.03x │      2931% │\n│ Sum 200x200 f64 (NumPy)       │   9.53µs │   9.50µs │   1.26w │   0.03x │      3082% │\n│ Add 500x500 f32 (NumPy)       │  24.95µs │  24.89µs │   3.01w │   0.01x │      8071% │\n│ Mul 500x500 f32 (NumPy)       │  25.01µs │  24.95µs │   3.08w │   0.01x │      8090% │\n│ Sum 500x500 f32 (NumPy)       │  43.39µs │  43.00µs │   5.57w │   0.01x │     14036% │\n│ Sum 500x500 f64 (NumPy)       │  48.26µs │  48.03µs │   8.13w │   0.01x │     15614% │\n│ Add 500x500 f64 (NumPy)       │ 159.37µs │ 159.03µs │  23.29w │   0.00x │     51557% │\n│ Mul 500x500 f64 (NumPy)       │ 167.38µs │ 166.97µs │  28.98w │   0.00x │     54149% │\n└───────────────────────────────┴──────────┴──────────┴─────────┴─────────┴────────────┘\n```\n"
  },
  {
    "path": "packages/nx/bench/bench_numpy.py",
    "content": "from __future__ import annotations\n\nimport sys\nfrom pathlib import Path\nfrom typing import Any, Callable, Iterable, List, Sequence, Tuple\n\nimport numpy as np\n\n_SCRIPTS_DIR = Path(__file__).resolve().parent\nwhile not (_SCRIPTS_DIR / \"dune-project\").exists():\n    _SCRIPTS_DIR = _SCRIPTS_DIR.parent\n_SCRIPTS_DIR = _SCRIPTS_DIR / \"scripts\"\nif str(_SCRIPTS_DIR) not in sys.path:\n    sys.path.insert(0, str(_SCRIPTS_DIR))\n\nimport ubench  # type: ignore\n\n\nSIZES: Sequence[int] = (50, 100, 200, 500)\nDTYPES: Sequence[np.dtype] = (np.float32, np.float64)\nBACKEND_NAME = \"NumPy\"\n_RNG = np.random.default_rng(seed=0)\n\n\ndef _dtype_label(dtype: np.dtype) -> str:\n    if dtype == np.float32:\n        return \"f32\"\n    if dtype == np.float64:\n        return \"f64\"\n    return str(dtype)\n\n\ndef _benchmark_name(op_name: str, size: int, dtype: np.dtype) -> str:\n    return f\"{op_name} {size}x{size} {_dtype_label(dtype)} ({BACKEND_NAME})\"\n\n\ndef _numpy_operations(\n    size: int, dtype: np.dtype\n) -> Iterable[Tuple[str, Callable[[], None]]]:\n    a = _RNG.random((size, size), dtype=dtype)\n    b = _RNG.random((size, size), dtype=dtype)\n\n    ops: List[Tuple[str, Callable[[], None]]] = [\n        (\"Add\", lambda a=a, b=b: np.add(a, b)),\n        (\"Mul\", lambda a=a, b=b: np.multiply(a, b)),\n    ]\n\n    ops.extend(\n        [\n            (\"Sum\", lambda a=a: np.sum(a)),\n            (\"Transpose\", lambda a=a: np.transpose(a)),\n        ]\n    )\n    return ops\n\n\ndef build_benchmarks() -> List[Any]:\n    benchmarks: List[Any] = []\n    for size in SIZES:\n        for dtype in DTYPES:\n            for op_name, fn in _numpy_operations(size, dtype):\n                bench_name = _benchmark_name(op_name, size, dtype)\n                benchmarks.append(ubench.bench(bench_name, fn))\n    return benchmarks\n\n\ndef default_config() -> ubench.Config:\n    return (\n        ubench.Config.default()\n        .time_limit(1.0)\n        
.warmup(1)\n        .min_measurements(5)\n        .min_cpu(0.01)\n        .geometric_scale(1.3)\n        .gc_stabilization(False)\n        .build()\n    )\n\n\ndef main() -> None:\n    benchmarks = build_benchmarks()\n    # Mirror the OCaml defaults for fair comparisons with Nx benchmarks.\n    config = default_config()\n    ubench.run(benchmarks, config=config, output_format=\"pretty\", verbose=False)\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "packages/nx/bench/bench_nx.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Configuration *)\nlet sizes = [ 50; 100; 200; 500 ]\nlet backend_name = \"Nx\"\n\nlet benchmark_name op_name size dtype_label =\n  Printf.sprintf \"%s %dx%d %s (%s)\" op_name size size dtype_label backend_name\n\nlet nx_operations_f32 ~size =\n  let shape = [| size; size |] in\n  let a = Nx.rand Nx.Float32 shape in\n  let b = Nx.rand Nx.Float32 shape in\n\n  let ops = [ (\"Add\", fun () -> Nx.add a b); (\"Mul\", fun () -> Nx.mul a b) ] in\n\n  let ops =\n    ops\n    @ [ (\"Sum\", fun () -> Nx.sum a); (\"Transpose\", fun () -> Nx.transpose a) ]\n  in\n\n  ops\n\nlet nx_operations_f64 ~size =\n  let shape = [| size; size |] in\n  let a = Nx.rand Nx.Float64 shape in\n  let b = Nx.rand Nx.Float64 shape in\n\n  let ops = [ (\"Add\", fun () -> Nx.add a b); (\"Mul\", fun () -> Nx.mul a b) ] in\n\n  let ops =\n    ops\n    @ [ (\"Sum\", fun () -> Nx.sum a); (\"Transpose\", fun () -> Nx.transpose a) ]\n  in\n\n  ops\n\nlet build_benchmarks () =\n  let f32_benches = ref [] in\n  let f64_benches = ref [] in\n  List.iter\n    (fun size ->\n      let ops_f32 = nx_operations_f32 ~size in\n      List.iter\n        (fun (op_name, fn) ->\n          let bench_name = benchmark_name op_name size \"f32\" in\n          f32_benches := Thumper.bench bench_name fn :: !f32_benches)\n        ops_f32;\n\n      let ops_f64 = nx_operations_f64 ~size in\n      List.iter\n        (fun (op_name, fn) ->\n          let bench_name = benchmark_name op_name size \"f64\" in\n          f64_benches := Thumper.bench bench_name fn :: !f64_benches)\n        ops_f64)\n    sizes;\n  [\n    Thumper.group \"f32\" (List.rev !f32_benches);\n    Thumper.group \"f64\" (List.rev !f64_benches);\n  ]\n\nlet () =\n  let benchmarks = build_benchmarks 
() in\n  Thumper.run \"nx\" benchmarks\n"
  },
  {
    "path": "packages/nx/bench/conv2d/README.md",
    "content": "# Nx Conv2d Benchmarks\n\nComparative benchmarks of Nx conv2d operations against PyTorch.\n\n## Results Nx\n\n```\n┌───────────────────────────────────────┬──────────┬──────────┬─────────┬─────────┬────────────┐\n│ Name                                  │ Wall/Run │  CPU/Run │ mWd/Run │ Speedup │ vs Fastest │\n├───────────────────────────────────────┼──────────┼──────────┼─────────┼─────────┼────────────┤\n│ Conv2d B1 C3->32 64x64 K3 f32 (Nx)    │ 251.48μs │ 250.62μs │  2.79kw │   1.00x │       100% │\n│ Conv2d B1 C3->32 64x64 K3 f64 (Nx)    │ 359.48μs │ 357.43μs │  2.79kw │   0.70x │       143% │\n│ Conv2d B8 C32->64 32x32 K3 f32 (Nx)   │   4.30ms │   4.28ms │  2.78kw │   0.06x │      1710% │\n│ Conv2d B16 C64->128 16x16 K3 f32 (Nx) │   4.45ms │   4.35ms │  2.78kw │   0.06x │      1768% │\n│ Conv2d B8 C32->64 32x32 K3 f64 (Nx)   │   5.31ms │   5.25ms │  2.78kw │   0.05x │      2111% │\n│ Conv2d B16 C64->128 16x16 K3 f64 (Nx) │   5.73ms │   5.51ms │  2.78kw │   0.04x │      2279% │\n└───────────────────────────────────────┴──────────┴──────────┴─────────┴─────────┴────────────┘\n```\n\n## Results PyTorch\n\n```\n┌────────────────────────────────────────────┬──────────┬──────────┬─────────┬─────────┬────────────┐\n│ Name                                       │ Wall/Run │  CPU/Run │ mWd/Run │ Speedup │ vs Fastest │\n├────────────────────────────────────────────┼──────────┼──────────┼─────────┼─────────┼────────────┤\n│ Conv2d B1 C3->32 64x64 K3 f32 (PyTorch)    │  85.48µs │ 130.87µs │  69.73w │   1.00x │       100% │\n│ Conv2d B1 C3->32 64x64 K3 f64 (PyTorch)    │ 155.03µs │ 221.13µs │ 118.66w │   0.55x │       181% │\n│ Conv2d B8 C32->64 32x32 K3 f32 (PyTorch)   │ 756.29µs │   3.34ms │  1.75kw │   0.11x │       885% │\n│ Conv2d B8 C32->64 32x32 K3 f64 (PyTorch)   │   1.06ms │   5.96ms │  3.57kw │   0.08x │      1245% │\n│ Conv2d B16 C64->128 16x16 K3 f64 (PyTorch) │   1.16ms │   7.32ms │  4.07kw │   0.07x │      1360% │\n│ Conv2d B16 C64->128 16x16 K3 
f32 (PyTorch) │   2.92ms │   8.72ms │  4.28kw │   0.03x │      3415% │\n└────────────────────────────────────────────┴──────────┴──────────┴─────────┴─────────┴────────────┘\n```\n"
  },
  {
    "path": "packages/nx/bench/conv2d/bench_conv2d_nx.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Benchmarks for Nx.correlate and Nx.extract_patches *)\n\nlet backend_name = \"Nx\"\n\n(* correlate: (leading..., spatial...) with kernel of rank K *)\nlet correlate_configs =\n  [\n    (* (label, input_shape, kernel_shape) *)\n    (\"1D 1k\", [| 100 |], [| 5 |]);\n    (\"1D 10k batched\", [| 16; 10000 |], [| 5 |]);\n    (\"2D 64x64\", [| 64; 64 |], [| 3; 3 |]);\n    (\"2D 256x256\", [| 256; 256 |], [| 3; 3 |]);\n    (\"2D batch 8x64x64\", [| 8; 64; 64 |], [| 3; 3 |]);\n    (\"2D batch 8x256x256\", [| 8; 256; 256 |], [| 3; 3 |]);\n  ]\n\nlet extract_patches_configs =\n  [\n    (* (label, input_shape, kernel_size, stride) *)\n    (\"2D 64x64 k3 s1\", [| 1; 1; 64; 64 |], [| 3; 3 |], [| 1; 1 |]);\n    (\"2D 64x64 k3 s2\", [| 1; 1; 64; 64 |], [| 3; 3 |], [| 2; 2 |]);\n    (\"2D 256x256 k3 s1\", [| 1; 1; 256; 256 |], [| 3; 3 |], [| 1; 1 |]);\n    (\"2D 8x3x64x64 k3 s1\", [| 8; 3; 64; 64 |], [| 3; 3 |], [| 1; 1 |]);\n  ]\n\nlet build_benchmarks () =\n  let benchmarks = ref [] in\n  List.iter\n    (fun (label, input_shape, kernel_shape) ->\n      let x = Nx.rand Nx.Float32 input_shape in\n      let k = Nx.rand Nx.Float32 kernel_shape in\n      let name = Printf.sprintf \"correlate %s f32 (%s)\" label backend_name in\n      benchmarks :=\n        Thumper.bench name (fun () -> Nx.correlate x k) :: !benchmarks)\n    correlate_configs;\n  List.iter\n    (fun (label, input_shape, kernel_size, stride) ->\n      let x = Nx.rand Nx.Float32 input_shape in\n      let k = Array.length kernel_size in\n      let dilation = Array.make k 1 in\n      let padding = Array.make k (0, 0) in\n      let name =\n        Printf.sprintf \"extract_patches %s f32 (%s)\" label backend_name\n      in\n      benchmarks :=\n        
Thumper.bench name (fun () ->\n            Nx.extract_patches ~kernel_size ~stride ~dilation ~padding x)\n        :: !benchmarks)\n    extract_patches_configs;\n  List.rev !benchmarks\n\nlet () =\n  let benchmarks = build_benchmarks () in\n  Thumper.run \"nx_conv2d\" benchmarks\n"
  },
  {
    "path": "packages/nx/bench/conv2d/bench_conv2d_pytorch.py",
    "content": "from __future__ import annotations\n\nimport sys\nfrom pathlib import Path\nfrom typing import Any, Callable, List, Sequence, Tuple\n\nimport torch\nimport torch.nn.functional as F\n\n_SCRIPTS_DIR = Path(__file__).resolve().parent\nwhile not (_SCRIPTS_DIR / \"dune-project\").exists():\n    _SCRIPTS_DIR = _SCRIPTS_DIR.parent\n_SCRIPTS_DIR = _SCRIPTS_DIR / \"scripts\"\nif str(_SCRIPTS_DIR) not in sys.path:\n    sys.path.insert(0, str(_SCRIPTS_DIR))\n\nimport ubench  # type: ignore\n\n\n# Common CNN layer sizes: (batch, in_channels, out_channels, input_size, kernel_size)\nCONFIGS: Sequence[Tuple[int, int, int, int, int]] = (\n    (1, 3, 32, 64, 3),    # Small: first conv layer, single image\n    (8, 32, 64, 32, 3),   # Medium: mid-layer, small batch\n    (16, 64, 128, 16, 3), # Large: deep layer, larger batch\n)\n\nDTYPES: Sequence[torch.dtype] = (torch.float32, torch.float64)\nBACKEND_NAME = \"PyTorch\"\n\n\ndef _dtype_label(dtype: torch.dtype) -> str:\n    if dtype == torch.float32:\n        return \"f32\"\n    if dtype == torch.float64:\n        return \"f64\"\n    return str(dtype)\n\n\ndef _benchmark_name(\n    op_name: str,\n    batch: int,\n    in_ch: int,\n    out_ch: int,\n    img_size: int,\n    kernel_size: int,\n    dtype: torch.dtype,\n) -> str:\n    return (\n        f\"{op_name} B{batch} C{in_ch}->{out_ch} {img_size}x{img_size} \"\n        f\"K{kernel_size} {_dtype_label(dtype)} ({BACKEND_NAME})\"\n    )\n\n\nclass ConvSpec:\n    \"\"\"Conv2d operation specification.\"\"\"\n\n    def __init__(\n        self,\n        name: str,\n        batch: int,\n        in_channels: int,\n        out_channels: int,\n        img_size: int,\n        kernel_size: int,\n    ):\n        self.name = name\n        self.batch = batch\n        self.in_channels = in_channels\n        self.out_channels = out_channels\n        self.img_size = img_size\n        self.kernel_size = kernel_size\n\n\ndef create_conv_specs() -> List[ConvSpec]:\n    \"\"\"Create conv2d 
specs from configs.\"\"\"\n    return [\n        ConvSpec(\"Conv2d\", batch, in_ch, out_ch, img_size, kernel_size)\n        for batch, in_ch, out_ch, img_size, kernel_size in CONFIGS\n    ]\n\n\ndef build_benchmarks() -> List[Any]:\n    \"\"\"Build all conv2d benchmarks.\"\"\"\n    benchmarks: List[Any] = []\n    specs = create_conv_specs()\n\n    torch.manual_seed(0)\n\n    for spec in specs:\n        for dtype in DTYPES:\n            input_shape = (spec.batch, spec.in_channels, spec.img_size, spec.img_size)\n            kernel_shape = (spec.out_channels, spec.in_channels, spec.kernel_size, spec.kernel_size)\n\n            input_tensor = torch.rand(input_shape, dtype=dtype)\n            kernel_tensor = torch.rand(kernel_shape, dtype=dtype)\n\n            bench_name = _benchmark_name(\n                spec.name, spec.batch, spec.in_channels, spec.out_channels,\n                spec.img_size, spec.kernel_size, dtype\n            )\n\n            def make_fn(inp: torch.Tensor, kern: torch.Tensor) -> Callable[[], None]:\n                return lambda: F.conv2d(inp, kern)\n\n            benchmarks.append(ubench.bench(bench_name, make_fn(input_tensor, kernel_tensor)))\n\n    return benchmarks\n\n\ndef default_config() -> ubench.Config:\n    \"\"\"Create default benchmark configuration.\"\"\"\n    return (\n        ubench.Config.default()\n        .time_limit(1.0)\n        .warmup(1)\n        .min_measurements(5)\n        .min_cpu(0.01)\n        .geometric_scale(1.3)\n        .gc_stabilization(False)\n        .build()\n    )\n\n\ndef main() -> None:\n    \"\"\"Main entry point.\"\"\"\n    benchmarks = build_benchmarks()\n    config = default_config()\n    ubench.run(benchmarks, config=config, output_format=\"pretty\", verbose=False)\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "packages/nx/bench/conv2d/dune",
    "content": "(executable\n (name bench_conv2d_nx)\n (libraries nx thumper))\n\n(rule\n (alias runtest)\n (action\n  (progn\n   (run %{exe:bench_conv2d_nx.exe} -q)\n   (diff? nx_conv2d.thumper nx_conv2d.thumper.corrected))))\n"
  },
  {
    "path": "packages/nx/bench/conv2d/nx_conv2d.thumper",
    "content": "# thumper baseline\n# version: 1\n# suite_name: nx_conv2d\n# host: 1480401c3b76ed18\n# cpu: Apple M1 Max\n# ocaml: 5.4.1\n# git: 31747323\n# dirty: true\n# command: ./bench_conv2d_nx.exe -q\n\ncorrelate_1d_10k_batched_f32__nx_\talloc_words\t9.310000e+02\t9.310000e+02\t9.310000e+02\t0.000000e+00\t10\t1\ncorrelate_1d_10k_batched_f32__nx_\tcpu_time\t6.759299e-03\t6.742944e-03\t6.781850e-03\t2.877945e-03\t10\t0\ncorrelate_1d_10k_batched_f32__nx_\twall_time\t6.144004e-03\t6.131814e-03\t6.158235e-03\t2.150099e-03\t10\t0\ncorrelate_1d_1k_f32__nx_\talloc_words\t8.160000e+02\t8.160000e+02\t8.160000e+02\t0.000000e+00\t10\t0\ncorrelate_1d_1k_f32__nx_\tcpu_time\t6.460449e-06\t6.450824e-06\t6.471332e-06\t1.587145e-03\t10\t0\ncorrelate_1d_1k_f32__nx_\twall_time\t6.461020e-06\t6.451447e-06\t6.471827e-06\t1.577114e-03\t10\t1\ncorrelate_2d_256x256_f32__nx_\talloc_words\t8.990000e+02\t8.990000e+02\t8.990000e+02\t0.000000e+00\t10\t0\ncorrelate_2d_256x256_f32__nx_\tcpu_time\t4.401137e-03\t4.389369e-03\t4.411961e-03\t2.566629e-03\t10\t0\ncorrelate_2d_256x256_f32__nx_\twall_time\t3.977256e-03\t3.968869e-03\t3.986821e-03\t2.256904e-03\t10\t0\ncorrelate_2d_64x64_f32__nx_\talloc_words\t8.990000e+02\t8.990000e+02\t8.990000e+02\t0.000000e+00\t10\t2\ncorrelate_2d_64x64_f32__nx_\tcpu_time\t4.350262e-04\t4.309908e-04\t4.382260e-04\t8.315777e-03\t10\t3\ncorrelate_2d_64x64_f32__nx_\twall_time\t3.878239e-04\t3.848593e-04\t3.897182e-04\t6.264335e-03\t10\t2\ncorrelate_2d_batch_8x256x256_f32__nx_\talloc_words\t1.016000e+03\t1.016000e+03\t1.016000e+03\t0.000000e+00\t10\t1\ncorrelate_2d_batch_8x256x256_f32__nx_\tcpu_time\t3.718025e-02\t3.711858e-02\t3.722040e-02\t1.369329e-03\t10\t2\ncorrelate_2d_batch_8x256x256_f32__nx_\twall_time\t3.247770e-02\t3.246563e-02\t3.250109e-02\t5.459608e-04\t10\t1\ncorrelate_2d_batch_8x64x64_f32__nx_\talloc_words\t1.016000e+03\t1.016000e+03\t1.016000e+03\t0.000000e+00\t10\t0\ncorrelate_2d_batch_8x64x64_f32__nx_\tcpu_time\t2.536084e-03\t2.526750e-03\t2.546429
e-03\t3.879779e-03\t10\t0\ncorrelate_2d_batch_8x64x64_f32__nx_\twall_time\t2.322492e-03\t2.315861e-03\t2.330876e-03\t3.232401e-03\t10\t0\nextract_patches_2d_256x256_k3_s1_f32__nx_\talloc_words\t1.390000e+02\t1.390000e+02\t1.390000e+02\t0.000000e+00\t10\t0\nextract_patches_2d_256x256_k3_s1_f32__nx_\tcpu_time\t7.138257e-04\t7.129127e-04\t7.148739e-04\t1.373699e-03\t10\t1\nextract_patches_2d_256x256_k3_s1_f32__nx_\twall_time\t7.139240e-04\t7.129017e-04\t7.150828e-04\t1.527537e-03\t10\t0\nextract_patches_2d_64x64_k3_s1_f32__nx_\talloc_words\t1.390000e+02\t1.390000e+02\t1.390000e+02\t0.000000e+00\t10\t1\nextract_patches_2d_64x64_k3_s1_f32__nx_\tcpu_time\t6.823460e-05\t6.808474e-05\t6.831762e-05\t1.706495e-03\t10\t3\nextract_patches_2d_64x64_k3_s1_f32__nx_\twall_time\t6.821857e-05\t6.813969e-05\t6.832750e-05\t1.376542e-03\t10\t2\nextract_patches_2d_64x64_k3_s2_f32__nx_\talloc_words\t1.390000e+02\t1.390000e+02\t1.390000e+02\t0.000000e+00\t10\t0\nextract_patches_2d_64x64_k3_s2_f32__nx_\tcpu_time\t1.160441e-05\t1.159022e-05\t1.162028e-05\t1.295226e-03\t10\t0\nextract_patches_2d_64x64_k3_s2_f32__nx_\twall_time\t1.160520e-05\t1.158941e-05\t1.162151e-05\t1.382895e-03\t10\t0\nextract_patches_2d_8x3x64x64_k3_s1_f32__nx_\talloc_words\t1.390000e+02\t1.390000e+02\t1.390000e+02\t0.000000e+00\t10\t0\nextract_patches_2d_8x3x64x64_k3_s1_f32__nx_\tcpu_time\t1.006386e-03\t1.005213e-03\t1.007881e-03\t1.325399e-03\t10\t0\nextract_patches_2d_8x3x64x64_k3_s1_f32__nx_\twall_time\t1.006492e-03\t1.005348e-03\t1.008017e-03\t1.325981e-03\t10\t0\n"
  },
  {
    "path": "packages/nx/bench/dune",
    "content": "(executable\n (name bench_nx)\n (modules bench_nx)\n (libraries nx thumper str))\n\n(rule\n (alias runtest)\n (action\n  (progn\n   (run %{exe:bench_nx.exe} -q)\n   (diff? nx.thumper nx.thumper.corrected))))\n"
  },
  {
    "path": "packages/nx/bench/einsum/README.md",
    "content": "# Nx Einsum Benchmarks\n\nComparative benchmarks of Nx einsum operations against NumPy.\n\n## Results Nx\n\n```\n┌──────────────────────────────────┬──────────┬──────────┬─────────┬─────────┬────────────┐\n│ Name                             │ Wall/Run │  CPU/Run │ mWd/Run │ Speedup │ vs Fastest │\n├──────────────────────────────────┼──────────┼──────────┼─────────┼─────────┼────────────┤\n│ InnerProduct 50x50 f64 (Nx)      │   1.90μs │   1.92μs │  1.26kw │   1.00x │       100% │\n│ InnerProduct 100x100 f32 (Nx)    │   1.96μs │   1.99μs │  1.26kw │   0.97x │       103% │\n│ InnerProduct 100x100 f64 (Nx)    │   1.98μs │   2.00μs │  1.26kw │   0.96x │       104% │\n│ InnerProduct 50x50 f32 (Nx)      │   2.06μs │   2.03μs │  1.26kw │   0.92x │       108% │\n│ InnerProduct 200x200 f32 (Nx)    │   2.18μs │   2.13μs │  1.26kw │   0.87x │       114% │\n│ InnerProduct 512x512 f32 (Nx)    │   2.47μs │   2.44μs │  1.26kw │   0.77x │       130% │\n│ InnerProduct 200x200 f64 (Nx)    │   2.56μs │   2.45μs │  1.26kw │   0.74x │       135% │\n│ InnerProduct 512x512 f64 (Nx)    │   2.63μs │   2.63μs │  1.26kw │   0.72x │       138% │\n│ MatMul 50x50 f32 (Nx)            │   3.22μs │   3.22μs │ 952.00w │   0.59x │       169% │\n│ MatMul 50x50 f64 (Nx)            │   4.93μs │   4.95μs │ 952.00w │   0.39x │       259% │\n│ MatMul 100x100 f32 (Nx)          │   6.86μs │   6.80μs │ 952.00w │   0.28x │       360% │\n│ ContractReduce2 50x50 f64 (Nx)   │   7.29μs │   7.26μs │  4.01kw │   0.26x │       383% │\n│ ContractReduce2 50x50 f32 (Nx)   │   7.30μs │   7.31μs │  4.01kw │   0.26x │       383% │\n│ ContractReduce1 50x50 f32 (Nx)   │   7.81μs │   7.80μs │  4.01kw │   0.24x │       410% │\n│ ContractReduce1 50x50 f64 (Nx)   │   8.01μs │   8.04μs │  4.01kw │   0.24x │       421% │\n│ IndependentSum 50x50 f64 (Nx)    │   9.77μs │   9.82μs │  3.81kw │   0.19x │       513% │\n│ IndependentSum 50x50 f32 (Nx)    │  10.06μs │  10.09μs │  3.81kw │   0.19x │       528% │\n│ 
BatchMatMul 50x50 f32 (Nx)       │  11.52μs │  11.49μs │  2.91kw │   0.17x │       605% │\n│ ContractReduce2 100x100 f32 (Nx) │  14.62μs │  14.62μs │  4.01kw │   0.13x │       768% │\n│ ContractReduce2 100x100 f64 (Nx) │  14.96μs │  14.72μs │  4.01kw │   0.13x │       786% │\n│ ContractReduce1 100x100 f64 (Nx) │  17.04μs │  17.04μs │  4.01kw │   0.11x │       895% │\n│ ContractReduce1 100x100 f32 (Nx) │  17.11μs │  16.96μs │  4.01kw │   0.11x │       899% │\n│ MatMul 100x100 f64 (Nx)          │  33.03μs │  33.07μs │ 945.00w │   0.06x │      1734% │\n│ BatchMatMul 50x50 f64 (Nx)       │  35.48μs │  35.42μs │  2.91kw │   0.05x │      1864% │\n│ ContractReduce2 200x200 f32 (Nx) │  50.95μs │  50.92μs │  4.01kw │   0.04x │      2676% │\n│ ContractReduce2 200x200 f64 (Nx) │  53.94μs │  53.27μs │  4.01kw │   0.04x │      2833% │\n│ ContractReduce1 200x200 f32 (Nx) │  57.76μs │  57.50μs │  4.01kw │   0.03x │      3033% │\n│ BatchMatMul 100x100 f32 (Nx)     │  57.95μs │  57.95μs │  2.91kw │   0.03x │      3044% │\n│ ContractReduce1 200x200 f64 (Nx) │  58.40μs │  57.82μs │  4.01kw │   0.03x │      3067% │\n│ MatMul 200x200 f32 (Nx)          │  59.92μs │  59.24μs │ 945.00w │   0.03x │      3147% │\n│ BatchMatMul 100x100 f64 (Nx)     │ 127.22μs │ 127.21μs │  2.91kw │   0.01x │      6681% │\n│ MatMul 200x200 f64 (Nx)          │ 147.15μs │ 145.63μs │ 945.00w │   0.01x │      7728% │\n│ IndependentSum 100x100 f64 (Nx)  │ 184.71μs │ 471.85μs │  3.81kw │   0.01x │      9701% │\n│ IndependentSum 200x200 f32 (Nx)  │ 185.52μs │ 502.80μs │  3.81kw │   0.01x │      9743% │\n│ IndependentSum 200x200 f64 (Nx)  │ 186.57μs │ 508.79μs │  3.81kw │   0.01x │      9799% │\n│ IndependentSum 100x100 f32 (Nx)  │ 223.17μs │ 460.18μs │  3.81kw │   0.01x │     11721% │\n│ BatchMatMul 200x200 f32 (Nx)     │ 235.95μs │ 234.52μs │  2.91kw │   0.01x │     12392% │\n│ IndependentSum 512x512 f64 (Nx)  │ 282.82μs │   1.01ms │  3.81kw │   0.01x │     14853% │\n│ IndependentSum 512x512 f32 (Nx)  │ 283.39μs │ 
970.86μs │  3.81kw │   0.01x │     14883% │\n│ MatMul 512x512 f32 (Nx)          │ 324.12μs │ 424.46μs │ 945.00w │   0.01x │     17022% │\n│ ContractReduce2 512x512 f32 (Nx) │ 414.50μs │ 412.71μs │  4.01kw │   0.00x │     21769% │\n│ BatchMatMul 200x200 f64 (Nx)     │ 438.50μs │ 437.15μs │  2.91kw │   0.00x │     23029% │\n│ ContractReduce2 512x512 f64 (Nx) │ 446.00μs │ 441.93μs │  4.01kw │   0.00x │     23423% │\n│ ContractReduce1 512x512 f32 (Nx) │ 448.72μs │ 447.88μs │  4.01kw │   0.00x │     23566% │\n│ ContractReduce1 512x512 f64 (Nx) │ 509.76μs │ 507.39μs │  4.01kw │   0.00x │     26772% │\n│ MatMul 512x512 f64 (Nx)          │ 766.18μs │   1.20ms │ 945.00w │   0.00x │     40238% │\n│ BatchMatMul 512x512 f32 (Nx)     │ 818.47μs │   1.29ms │  2.91kw │   0.00x │     42985% │\n│ BatchMatMul 512x512 f64 (Nx)     │   2.48ms │   4.50ms │  2.91kw │   0.00x │    130422% │\n└──────────────────────────────────┴──────────┴──────────┴─────────┴─────────┴────────────┘\n```\n\n## Results NumPy\n\n```\n┌─────────────────────────────────────┬──────────┬─────────┬─────────┬────────────┐\n│ Name                                │ Time/Run │ mWd/Run │ Speedup │ vs Fastest │\n├─────────────────────────────────────┼──────────┼─────────┼─────────┼────────────┤\n│ InnerProduct 100x100 f32 (NumPy)    │   1.14µs │   0.12w │   1.00x │       100% │\n│ InnerProduct 50x50 f32 (NumPy)      │   1.14µs │   0.12w │   1.00x │       100% │\n│ InnerProduct 50x50 f64 (NumPy)      │   1.14µs │   0.12w │   0.99x │       101% │\n│ InnerProduct 100x100 f64 (NumPy)    │   1.16µs │   0.13w │   0.98x │       102% │\n│ InnerProduct 200x200 f32 (NumPy)    │   1.16µs │   0.15w │   0.98x │       102% │\n│ InnerProduct 200x200 f64 (NumPy)    │   1.23µs │   0.17w │   0.93x │       108% │\n│ ContractReduce2 50x50 f32 (NumPy)   │  10.83µs │   0.97w │   0.11x │       952% │\n│ ContractReduce2 50x50 f64 (NumPy)   │  15.03µs │   1.29w │   0.08x │      1322% │\n│ ContractReduce1 50x50 f32 (NumPy)   │  16.93µs │   
1.64w │   0.07x │      1489% │\n│ MatMul 50x50 f32 (NumPy)            │  20.34µs │   1.74w │   0.06x │      1789% │\n│ MatMul 50x50 f64 (NumPy)            │  26.58µs │   2.77w │   0.04x │      2338% │\n│ ContractReduce1 50x50 f64 (NumPy)   │  27.01µs │   2.81w │   0.04x │      2375% │\n│ ContractReduce2 100x100 f32 (NumPy) │  57.04µs │   6.33w │   0.02x │      5017% │\n│ ContractReduce2 100x100 f64 (NumPy) │  94.44µs │   9.32w │   0.01x │      8306% │\n│ MatMul 100x100 f32 (NumPy)          │ 102.29µs │  10.32w │   0.01x │      8996% │\n│ ContractReduce1 100x100 f32 (NumPy) │ 104.80µs │  10.60w │   0.01x │      9217% │\n│ BatchMatMul 50x50 f32 (NumPy)       │ 108.84µs │  10.09w │   0.01x │      9572% │\n│ BatchMatMul 50x50 f64 (NumPy)       │ 134.92µs │  13.38w │   0.01x │     11866% │\n│ MatMul 100x100 f64 (NumPy)          │ 161.94µs │  14.91w │   0.01x │     14242% │\n│ ContractReduce1 100x100 f64 (NumPy) │ 222.89µs │  24.17w │   0.01x │     19603% │\n│ ContractReduce2 200x200 f32 (NumPy) │ 483.26µs │  56.67w │   0.00x │     42501% │\n│ BatchMatMul 100x100 f32 (NumPy)     │ 518.96µs │  47.34w │   0.00x │     45641% │\n│ MatMul 200x200 f32 (NumPy)          │ 733.17µs │  80.95w │   0.00x │     64480% │\n│ BatchMatMul 100x100 f64 (NumPy)     │ 762.60µs │  78.79w │   0.00x │     67068% │\n│ ContractReduce2 200x200 f64 (NumPy) │ 843.93µs │ 114.13w │   0.00x │     74221% │\n│ ContractReduce1 200x200 f32 (NumPy) │ 937.95µs │ 112.44w │   0.00x │     82490% │\n│ MatMul 200x200 f64 (NumPy)          │   1.35ms │ 150.59w │   0.00x │    118410% │\n│ ContractReduce1 200x200 f64 (NumPy) │   2.27ms │ 294.93w │   0.00x │    200049% │\n│ BatchMatMul 200x200 f32 (NumPy)     │   3.37ms │ 397.35w │   0.00x │    296058% │\n│ BatchMatMul 200x200 f64 (NumPy)     │   5.80ms │ 712.76w │   0.00x │    510284% │\n└─────────────────────────────────────┴──────────┴─────────┴─────────┴────────────┘\n```\n"
  },
  {
    "path": "packages/nx/bench/einsum/bench_einsum_numpy.py",
    "content": "from __future__ import annotations\n\nimport sys\nfrom pathlib import Path\nfrom typing import Any, Callable, List, Sequence\n\nimport numpy as np\n\n_SCRIPTS_DIR = Path(__file__).resolve().parent\nwhile not (_SCRIPTS_DIR / \"dune-project\").exists():\n    _SCRIPTS_DIR = _SCRIPTS_DIR.parent\n_SCRIPTS_DIR = _SCRIPTS_DIR / \"scripts\"\nif str(_SCRIPTS_DIR) not in sys.path:\n    sys.path.insert(0, str(_SCRIPTS_DIR))\n\nimport ubench  # type: ignore\n\n\nSIZES: Sequence[int] = (50, 100, 200)\nDTYPES: Sequence[np.dtype] = (np.float32, np.float64)\nBACKEND_NAME = \"NumPy\"\n_RNG = np.random.default_rng(seed=0)\n\n\ndef _dtype_label(dtype: np.dtype) -> str:\n    if dtype == np.float32:\n        return \"f32\"\n    if dtype == np.float64:\n        return \"f64\"\n    return str(dtype)\n\n\ndef _benchmark_name(op_name: str, size: int, dtype: np.dtype) -> str:\n    return f\"{op_name} {size}x{size} {_dtype_label(dtype)} ({BACKEND_NAME})\"\n\n\nclass EinsumOp:\n    \"\"\"Einsum operation specification.\"\"\"\n\n    def __init__(\n        self,\n        name: str,\n        subscripts: str,\n        setup: Callable[[int, np.dtype], List[np.ndarray]],\n    ):\n        self.name = name\n        self.subscripts = subscripts\n        self.setup = setup\n\n\n# Define common einsum operations to benchmark - covering key use cases\nEINSUM_OPS = [\n    EinsumOp(\n        \"MatMul\",\n        \"ij,jk->ik\",\n        lambda size, dtype: [\n            _RNG.random((size, size), dtype=dtype),\n            _RNG.random((size, size), dtype=dtype),\n        ],\n    ),\n    EinsumOp(\n        \"BatchMatMul\",\n        \"bij,bjk->bik\",\n        lambda size, dtype: [\n            _RNG.random((4, size, size), dtype=dtype),\n            _RNG.random((4, size, size), dtype=dtype),\n        ],\n    ),\n    EinsumOp(\n        \"InnerProduct\",\n        \"i,i->\",\n        lambda size, dtype: [\n            _RNG.random(size, dtype=dtype),\n            _RNG.random(size, dtype=dtype),\n   
     ],\n    ),\n    # Critical contraction-reduction patterns (known to be slow in Raven)\n    EinsumOp(\n        \"ContractReduce1\",\n        \"ij,kj->\",\n        lambda size, dtype: [\n            _RNG.random((size, size), dtype=dtype),\n            _RNG.random((size, size), dtype=dtype),\n        ],\n    ),\n    EinsumOp(\n        \"ContractReduce2\",\n        \"ij,jk->\",\n        lambda size, dtype: [\n            _RNG.random((size, size), dtype=dtype),\n            _RNG.random((size, size), dtype=dtype),\n        ],\n    ),\n]\n\n\ndef build_benchmarks() -> List[Any]:\n    \"\"\"Build all einsum benchmarks.\"\"\"\n    benchmarks: List[Any] = []\n\n    for size in SIZES:\n        for dtype in DTYPES:\n            for op in EINSUM_OPS:\n                operands = op.setup(size, dtype)\n                bench_name = _benchmark_name(op.name, size, dtype)\n\n                # Capture operands in closure\n                def make_fn(subscripts: str, arrays: List[np.ndarray]) -> Callable[[], None]:\n                    return lambda: np.einsum(subscripts, *arrays)\n\n                benchmarks.append(ubench.bench(bench_name, make_fn(op.subscripts, operands)))\n\n    return benchmarks\n\n\ndef default_config() -> ubench.Config:\n    \"\"\"Create default benchmark configuration.\"\"\"\n    return (\n        ubench.Config.default()\n        .time_limit(1.0)\n        .warmup(1)\n        .min_measurements(5)\n        .min_cpu(0.01)\n        .geometric_scale(1.3)\n        .gc_stabilization(False)\n        .build()\n    )\n\n\ndef main() -> None:\n    \"\"\"Main entry point.\"\"\"\n    benchmarks = build_benchmarks()\n    config = default_config()\n    ubench.run(benchmarks, config=config, output_format=\"pretty\", verbose=False)\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "packages/nx/bench/einsum/bench_einsum_nx.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Configuration *)\nlet sizes = [ 50; 100; 200; 512 ]\nlet backend_name = \"Nx\"\n\nlet benchmark_name op_name size dtype_label =\n  Printf.sprintf \"%s %dx%d %s (%s)\" op_name size size dtype_label backend_name\n\ntype einsum_spec = { name : string; subscripts : string }\n\nlet einsum_specs =\n  [\n    { name = \"MatMul\"; subscripts = \"ij,jk->ik\" };\n    { name = \"BatchMatMul\"; subscripts = \"bij,bjk->bik\" };\n    { name = \"InnerProduct\"; subscripts = \"i,i->\" };\n    (* Critical contraction-reduction patterns (known to be slow) *)\n    { name = \"ContractReduce1\"; subscripts = \"ij,kj->\" };\n    { name = \"ContractReduce2\"; subscripts = \"ij,jk->\" };\n    (* Independent contraction: no shared axes, sum everything *)\n    { name = \"IndependentSum\"; subscripts = \"ab,cd->\" };\n  ]\n\nlet setup_f32 spec size =\n  match spec.name with\n  | \"MatMul\" | \"ContractReduce2\" ->\n      let shape = [| size; size |] in\n      [ Nx.rand Nx.Float32 shape; Nx.rand Nx.Float32 shape ]\n  | \"BatchMatMul\" ->\n      let shape = [| 4; size; size |] in\n      [ Nx.rand Nx.Float32 shape; Nx.rand Nx.Float32 shape ]\n  | \"InnerProduct\" ->\n      let shape = [| size |] in\n      [ Nx.rand Nx.Float32 shape; Nx.rand Nx.Float32 shape ]\n  | \"ContractReduce1\" ->\n      let shape = [| size; size |] in\n      [ Nx.rand Nx.Float32 shape; Nx.rand Nx.Float32 shape ]\n  | \"IndependentSum\" ->\n      let shape = [| size; size |] in\n      [ Nx.rand Nx.Float32 shape; Nx.rand Nx.Float32 shape ]\n  | _ -> failwith (\"Unknown einsum operation: \" ^ spec.name)\n\nlet setup_f64 spec size =\n  match spec.name with\n  | \"MatMul\" | \"ContractReduce2\" ->\n      let shape = [| size; size |] in\n      [ Nx.rand 
Nx.Float64 shape; Nx.rand Nx.Float64 shape ]\n  | \"BatchMatMul\" ->\n      let shape = [| 4; size; size |] in\n      [ Nx.rand Nx.Float64 shape; Nx.rand Nx.Float64 shape ]\n  | \"InnerProduct\" ->\n      let shape = [| size |] in\n      [ Nx.rand Nx.Float64 shape; Nx.rand Nx.Float64 shape ]\n  | \"ContractReduce1\" ->\n      let shape = [| size; size |] in\n      [ Nx.rand Nx.Float64 shape; Nx.rand Nx.Float64 shape ]\n  | \"IndependentSum\" ->\n      let shape = [| size; size |] in\n      [ Nx.rand Nx.Float64 shape; Nx.rand Nx.Float64 shape ]\n  | _ -> failwith (\"Unknown einsum operation: \" ^ spec.name)\n\nlet build_benchmarks () =\n  let f32_benches = ref [] in\n  let f64_benches = ref [] in\n\n  List.iter\n    (fun size ->\n      List.iter\n        (fun spec ->\n          let operands = setup_f32 spec size |> Array.of_list in\n          let bench_name = benchmark_name spec.name size \"f32\" in\n          let fn () = Nx.einsum spec.subscripts operands in\n          f32_benches := Thumper.bench bench_name fn :: !f32_benches)\n        einsum_specs)\n    sizes;\n\n  List.iter\n    (fun size ->\n      List.iter\n        (fun spec ->\n          let operands = setup_f64 spec size |> Array.of_list in\n          let bench_name = benchmark_name spec.name size \"f64\" in\n          let fn () = Nx.einsum spec.subscripts operands in\n          f64_benches := Thumper.bench bench_name fn :: !f64_benches)\n        einsum_specs)\n    sizes;\n\n  [\n    Thumper.group \"f32\" (List.rev !f32_benches);\n    Thumper.group \"f64\" (List.rev !f64_benches);\n  ]\n\nlet () =\n  let benchmarks = build_benchmarks () in\n  Thumper.run \"nx_einsum\" benchmarks\n"
  },
  {
    "path": "packages/nx/bench/einsum/dune",
    "content": "(executable\n (name bench_einsum_nx)\n (modules bench_einsum_nx)\n (libraries nx thumper))\n\n(rule\n (alias runtest)\n (action\n  (progn\n   (run %{exe:bench_einsum_nx.exe} -q)\n   (diff? nx_einsum.thumper nx_einsum.thumper.corrected))))\n"
  },
  {
    "path": "packages/nx/bench/einsum/nx_einsum.thumper",
    "content": "# thumper baseline\n# version: 1\n# suite_name: nx_einsum\n# host: 1480401c3b76ed18\n# cpu: Apple M1 Max\n# ocaml: 5.4.1\n# git: 31747323\n# dirty: true\n# command: /Users/tmattio/Workspace/raven/_build/default/packages/nx/bench/einsum/bench_einsum_nx.exe --bless --quick\n\nf32/batchmatmul_100x100_f32__nx_\talloc_words\t1.701000e+03\t1.701000e+03\t1.701000e+03\t0.000000e+00\t5\t0\nf32/batchmatmul_100x100_f32__nx_\tcpu_time\t5.518725e-05\t5.460953e-05\t5.569620e-05\t9.845340e-03\t5\t0\nf32/batchmatmul_100x100_f32__nx_\twall_time\t5.536186e-05\t5.486018e-05\t5.585119e-05\t8.950249e-03\t5\t0\nf32/batchmatmul_200x200_f32__nx_\talloc_words\t1.701000e+03\t1.701000e+03\t1.701000e+03\t0.000000e+00\t5\t0\nf32/batchmatmul_200x200_f32__nx_\tcpu_time\t1.977725e-04\t1.925440e-04\t2.030003e-04\t2.643504e-02\t5\t0\nf32/batchmatmul_200x200_f32__nx_\twall_time\t1.980320e-04\t1.925745e-04\t2.032979e-04\t2.707490e-02\t5\t0\nf32/batchmatmul_50x50_f32__nx_\talloc_words\t1.701000e+03\t1.701000e+03\t1.701000e+03\t0.000000e+00\t5\t0\nf32/batchmatmul_50x50_f32__nx_\tcpu_time\t9.819694e-06\t9.608194e-06\t1.004464e-05\t2.222318e-02\t5\t0\nf32/batchmatmul_50x50_f32__nx_\twall_time\t9.822858e-06\t9.623288e-06\t1.006387e-05\t2.242628e-02\t5\t0\nf32/batchmatmul_512x512_f32__nx_\talloc_words\t1.701000e+03\t1.701000e+03\t1.701000e+03\t0.000000e+00\t5\t0\nf32/batchmatmul_512x512_f32__nx_\tcpu_time\t1.297080e-03\t1.259932e-03\t1.340206e-03\t3.094424e-02\t5\t1\nf32/batchmatmul_512x512_f32__nx_\twall_time\t7.969367e-04\t7.790774e-04\t8.238494e-04\t2.809007e-02\t5\t1\nf32/contractreduce1_100x100_f32__nx_\talloc_words\t1.813000e+03\t1.813000e+03\t1.813000e+03\t0.000000e+00\t5\t0\nf32/contractreduce1_100x100_f32__nx_\tcpu_time\t1.594601e-05\t1.561383e-05\t1.628084e-05\t2.091450e-02\t5\t0\nf32/contractreduce1_100x100_f32__nx_\twall_time\t1.609735e-05\t1.569702e-05\t1.652829e-05\t2.581989e-02\t5\t1\nf32/contractreduce1_200x200_f32__nx_\talloc_words\t1.813000e+03\t1.813000e+03\t1.813000e+03\t
0.000000e+00\t5\t0\nf32/contractreduce1_200x200_f32__nx_\tcpu_time\t5.603130e-05\t5.519197e-05\t5.691415e-05\t1.536804e-02\t5\t0\nf32/contractreduce1_200x200_f32__nx_\twall_time\t5.617537e-05\t5.508807e-05\t5.705600e-05\t1.751602e-02\t5\t0\nf32/contractreduce1_50x50_f32__nx_\talloc_words\t1.813000e+03\t1.813000e+03\t1.813000e+03\t0.000000e+00\t5\t0\nf32/contractreduce1_50x50_f32__nx_\tcpu_time\t6.199364e-06\t6.090980e-06\t6.304951e-06\t1.725755e-02\t5\t1\nf32/contractreduce1_50x50_f32__nx_\twall_time\t6.211418e-06\t6.096398e-06\t6.304748e-06\t1.677152e-02\t5\t0\nf32/contractreduce1_512x512_f32__nx_\talloc_words\t1.813000e+03\t1.813000e+03\t1.813000e+03\t0.000000e+00\t5\t1\nf32/contractreduce1_512x512_f32__nx_\tcpu_time\t4.509663e-04\t4.489071e-04\t4.521112e-04\t3.552538e-03\t5\t0\nf32/contractreduce1_512x512_f32__nx_\twall_time\t4.516230e-04\t4.494771e-04\t4.528449e-04\t3.728570e-03\t5\t0\nf32/contractreduce2_100x100_f32__nx_\talloc_words\t1.813000e+03\t1.813000e+03\t1.813000e+03\t0.000000e+00\t5\t0\nf32/contractreduce2_100x100_f32__nx_\tcpu_time\t1.377649e-05\t1.358060e-05\t1.393764e-05\t1.295804e-02\t5\t0\nf32/contractreduce2_100x100_f32__nx_\twall_time\t1.381472e-05\t1.361141e-05\t1.401211e-05\t1.450244e-02\t5\t0\nf32/contractreduce2_200x200_f32__nx_\talloc_words\t1.813000e+03\t1.813000e+03\t1.813000e+03\t0.000000e+00\t5\t0\nf32/contractreduce2_200x200_f32__nx_\tcpu_time\t5.053761e-05\t5.032101e-05\t5.081128e-05\t4.850615e-03\t5\t0\nf32/contractreduce2_200x200_f32__nx_\twall_time\t5.056575e-05\t5.036507e-05\t5.088464e-05\t5.137586e-03\t5\t0\nf32/contractreduce2_50x50_f32__nx_\talloc_words\t1.813000e+03\t1.813000e+03\t1.813000e+03\t0.000000e+00\t5\t0\nf32/contractreduce2_50x50_f32__nx_\tcpu_time\t5.939967e-06\t5.918716e-06\t5.964805e-06\t3.879551e-03\t5\t2\nf32/contractreduce2_50x50_f32__nx_\twall_time\t5.953429e-06\t5.924350e-06\t5.978908e-06\t4.582069e-03\t5\t0\nf32/contractreduce2_512x512_f32__nx_\talloc_words\t1.813000e+03\t1.813000e+03\t1.813000e+03\t0.000000
e+00\t5\t1\nf32/contractreduce2_512x512_f32__nx_\tcpu_time\t4.102682e-04\t4.066891e-04\t4.159497e-04\t1.128601e-02\t5\t0\nf32/contractreduce2_512x512_f32__nx_\twall_time\t4.107669e-04\t4.070123e-04\t4.173550e-04\t1.258954e-02\t5\t0\nf32/independentsum_100x100_f32__nx_\talloc_words\t1.699000e+03\t1.699000e+03\t1.699000e+03\t0.000000e+00\t9\t0\nf32/independentsum_100x100_f32__nx_\tcpu_time\t4.854485e-04\t4.696863e-04\t5.062099e-04\t3.761847e-02\t9\t0\nf32/independentsum_100x100_f32__nx_\twall_time\t2.070175e-04\t1.960637e-04\t2.159687e-04\t4.807575e-02\t9\t2\nf32/independentsum_200x200_f32__nx_\talloc_words\t1.699000e+03\t1.699000e+03\t1.699000e+03\t0.000000e+00\t5\t0\nf32/independentsum_200x200_f32__nx_\tcpu_time\t4.956030e-04\t4.876978e-04\t5.040143e-04\t1.646122e-02\t5\t0\nf32/independentsum_200x200_f32__nx_\twall_time\t1.944663e-04\t1.892080e-04\t1.992350e-04\t2.578080e-02\t5\t0\nf32/independentsum_50x50_f32__nx_\talloc_words\t1.699000e+03\t1.699000e+03\t1.699000e+03\t0.000000e+00\t5\t0\nf32/independentsum_50x50_f32__nx_\tcpu_time\t8.687534e-06\t8.653824e-06\t8.723281e-06\t3.997540e-03\t5\t0\nf32/independentsum_50x50_f32__nx_\twall_time\t8.703804e-06\t8.674385e-06\t8.736752e-06\t3.582744e-03\t5\t0\nf32/independentsum_512x512_f32__nx_\talloc_words\t1.699000e+03\t1.699000e+03\t1.699000e+03\t0.000000e+00\t5\t0\nf32/independentsum_512x512_f32__nx_\tcpu_time\t9.743405e-04\t9.667292e-04\t9.815193e-04\t7.589837e-03\t5\t0\nf32/independentsum_512x512_f32__nx_\twall_time\t2.947632e-04\t2.914982e-04\t2.971370e-04\t9.564960e-03\t5\t1\nf32/innerproduct_100x100_f32__nx_\talloc_words\t2.900000e+02\t2.900000e+02\t2.900000e+02\t0.000000e+00\t5\t0\nf32/innerproduct_100x100_f32__nx_\tcpu_time\t1.321850e-06\t1.294614e-06\t1.345927e-06\t1.940965e-02\t5\t0\nf32/innerproduct_100x100_f32__nx_\twall_time\t1.349716e-06\t1.298123e-06\t1.406750e-06\t4.024086e-02\t5\t1\nf32/innerproduct_200x200_f32__nx_\talloc_words\t2.900000e+02\t2.900000e+02\t2.900000e+02\t0.000000e+00\t5\t0\nf32/innerprodu
ct_200x200_f32__nx_\tcpu_time\t1.384659e-06\t1.369420e-06\t1.400144e-06\t1.109454e-02\t5\t0\nf32/innerproduct_200x200_f32__nx_\twall_time\t1.386790e-06\t1.375480e-06\t1.399783e-06\t8.762314e-03\t5\t0\nf32/innerproduct_50x50_f32__nx_\talloc_words\t2.900000e+02\t2.900000e+02\t2.900000e+02\t0.000000e+00\t5\t0\nf32/innerproduct_50x50_f32__nx_\tcpu_time\t1.204197e-06\t1.191251e-06\t1.222291e-06\t1.288819e-02\t5\t0\nf32/innerproduct_50x50_f32__nx_\twall_time\t1.204623e-06\t1.190244e-06\t1.222792e-06\t1.350979e-02\t5\t0\nf32/innerproduct_512x512_f32__nx_\talloc_words\t2.900000e+02\t2.900000e+02\t2.900000e+02\t0.000000e+00\t5\t0\nf32/innerproduct_512x512_f32__nx_\tcpu_time\t1.769961e-06\t1.762406e-06\t1.778535e-06\t4.556398e-03\t5\t1\nf32/innerproduct_512x512_f32__nx_\twall_time\t1.771485e-06\t1.764424e-06\t1.779815e-06\t4.344216e-03\t5\t2\nf32/matmul_100x100_f32__nx_\talloc_words\t1.360000e+02\t1.360000e+02\t1.360000e+02\t0.000000e+00\t5\t0\nf32/matmul_100x100_f32__nx_\tcpu_time\t6.162875e-06\t6.051695e-06\t6.226873e-06\t1.421240e-02\t5\t2\nf32/matmul_100x100_f32__nx_\twall_time\t6.176118e-06\t6.062691e-06\t6.243488e-06\t1.463678e-02\t5\t2\nf32/matmul_200x200_f32__nx_\talloc_words\t1.360000e+02\t1.360000e+02\t1.360000e+02\t0.000000e+00\t5\t0\nf32/matmul_200x200_f32__nx_\tcpu_time\t5.209572e-05\t5.140853e-05\t5.301647e-05\t1.543257e-02\t5\t0\nf32/matmul_200x200_f32__nx_\twall_time\t5.221197e-05\t5.152504e-05\t5.300514e-05\t1.417399e-02\t5\t0\nf32/matmul_50x50_f32__nx_\talloc_words\t1.360000e+02\t1.360000e+02\t1.360000e+02\t0.000000e+00\t5\t0\nf32/matmul_50x50_f32__nx_\tcpu_time\t1.785309e-06\t1.751060e-06\t1.823101e-06\t2.017611e-02\t5\t0\nf32/matmul_50x50_f32__nx_\twall_time\t1.785520e-06\t1.751599e-06\t1.826631e-06\t2.101120e-02\t5\t0\nf32/matmul_512x512_f32__nx_\talloc_words\t1.360000e+02\t1.360000e+02\t1.360000e+02\t0.000000e+00\t5\t0\nf32/matmul_512x512_f32__nx_\tcpu_time\t3.733167e-04\t3.698187e-04\t3.768665e-04\t9.439403e-03\t5\t0\nf32/matmul_512x512_f32__nx_\twall_t
ime\t2.745620e-04\t2.727650e-04\t2.763596e-04\t6.545934e-03\t5\t0\nf64/batchmatmul_100x100_f64__nx_\talloc_words\t1.701000e+03\t1.701000e+03\t1.701000e+03\t0.000000e+00\t5\t0\nf64/batchmatmul_100x100_f64__nx_\tcpu_time\t1.093644e-04\t1.083188e-04\t1.105216e-04\t1.007089e-02\t5\t0\nf64/batchmatmul_100x100_f64__nx_\twall_time\t1.094756e-04\t1.082373e-04\t1.107356e-04\t1.141050e-02\t5\t0\nf64/batchmatmul_200x200_f64__nx_\talloc_words\t1.701000e+03\t1.701000e+03\t1.701000e+03\t0.000000e+00\t5\t0\nf64/batchmatmul_200x200_f64__nx_\tcpu_time\t3.139812e-04\t3.106339e-04\t3.180848e-04\t1.186526e-02\t5\t0\nf64/batchmatmul_200x200_f64__nx_\twall_time\t3.141270e-04\t3.106870e-04\t3.185770e-04\t1.255869e-02\t5\t0\nf64/batchmatmul_50x50_f64__nx_\talloc_words\t1.701000e+03\t1.701000e+03\t1.701000e+03\t0.000000e+00\t5\t0\nf64/batchmatmul_50x50_f64__nx_\tcpu_time\t3.042245e-05\t3.016933e-05\t3.068369e-05\t8.453563e-03\t5\t0\nf64/batchmatmul_50x50_f64__nx_\twall_time\t3.042087e-05\t3.017943e-05\t3.064479e-05\t7.648615e-03\t5\t0\nf64/batchmatmul_512x512_f64__nx_\talloc_words\t1.701000e+03\t1.701000e+03\t1.701000e+03\t0.000000e+00\t5\t0\nf64/batchmatmul_512x512_f64__nx_\tcpu_time\t4.545035e-03\t4.510594e-03\t4.573573e-03\t6.928353e-03\t5\t0\nf64/batchmatmul_512x512_f64__nx_\twall_time\t2.484462e-03\t2.456361e-03\t2.499965e-03\t8.775377e-03\t5\t1\nf64/contractreduce1_100x100_f64__nx_\talloc_words\t1.813000e+03\t1.813000e+03\t1.813000e+03\t0.000000e+00\t5\t0\nf64/contractreduce1_100x100_f64__nx_\tcpu_time\t1.557706e-05\t1.550447e-05\t1.564236e-05\t4.426011e-03\t5\t1\nf64/contractreduce1_100x100_f64__nx_\twall_time\t1.559369e-05\t1.551448e-05\t1.566053e-05\t4.683153e-03\t5\t1\nf64/contractreduce1_200x200_f64__nx_\talloc_words\t1.813000e+03\t1.813000e+03\t1.813000e+03\t0.000000e+00\t5\t0\nf64/contractreduce1_200x200_f64__nx_\tcpu_time\t5.395237e-05\t5.369265e-05\t5.411275e-05\t3.893179e-03\t5\t1\nf64/contractreduce1_200x200_f64__nx_\twall_time\t5.396311e-05\t5.369013e-05\t5.412146e-05\t3.9
96517e-03\t5\t1\nf64/contractreduce1_50x50_f64__nx_\talloc_words\t1.813000e+03\t1.813000e+03\t1.813000e+03\t0.000000e+00\t5\t0\nf64/contractreduce1_50x50_f64__nx_\tcpu_time\t6.248360e-06\t6.184944e-06\t6.325221e-06\t1.122510e-02\t5\t0\nf64/contractreduce1_50x50_f64__nx_\twall_time\t6.253809e-06\t6.190696e-06\t6.330196e-06\t1.115326e-02\t5\t0\nf64/contractreduce1_512x512_f64__nx_\talloc_words\t1.813000e+03\t1.813000e+03\t1.813000e+03\t0.000000e+00\t5\t1\nf64/contractreduce1_512x512_f64__nx_\tcpu_time\t4.840624e-04\t4.812912e-04\t4.858375e-04\t4.695958e-03\t5\t0\nf64/contractreduce1_512x512_f64__nx_\twall_time\t4.840321e-04\t4.809812e-04\t4.858114e-04\t4.989489e-03\t5\t0\nf64/contractreduce2_100x100_f64__nx_\talloc_words\t1.813000e+03\t1.813000e+03\t1.813000e+03\t0.000000e+00\t5\t0\nf64/contractreduce2_100x100_f64__nx_\tcpu_time\t1.337326e-05\t1.330468e-05\t1.341755e-05\t4.219924e-03\t5\t1\nf64/contractreduce2_100x100_f64__nx_\twall_time\t1.340222e-05\t1.332199e-05\t1.346779e-05\t5.439589e-03\t5\t0\nf64/contractreduce2_200x200_f64__nx_\talloc_words\t1.813000e+03\t1.813000e+03\t1.813000e+03\t0.000000e+00\t5\t0\nf64/contractreduce2_200x200_f64__nx_\tcpu_time\t4.878437e-05\t4.843472e-05\t4.914975e-05\t7.328470e-03\t5\t0\nf64/contractreduce2_200x200_f64__nx_\twall_time\t4.879236e-05\t4.848573e-05\t4.915805e-05\t6.889622e-03\t5\t0\nf64/contractreduce2_50x50_f64__nx_\talloc_words\t1.813000e+03\t1.813000e+03\t1.813000e+03\t0.000000e+00\t5\t0\nf64/contractreduce2_50x50_f64__nx_\tcpu_time\t5.661167e-06\t5.639267e-06\t5.688871e-06\t4.381094e-03\t5\t2\nf64/contractreduce2_50x50_f64__nx_\twall_time\t5.669255e-06\t5.642517e-06\t5.701602e-06\t5.210925e-03\t5\t2\nf64/contractreduce2_512x512_f64__nx_\talloc_words\t1.813000e+03\t1.813000e+03\t1.813000e+03\t0.000000e+00\t8\t0\nf64/contractreduce2_512x512_f64__nx_\tcpu_time\t4.410580e-04\t4.296686e-04\t4.590414e-04\t3.329810e-02\t8\t0\nf64/contractreduce2_512x512_f64__nx_\twall_time\t4.411046e-04\t4.302303e-04\t4.575018e-04\t3.091271e-0
2\t8\t0\nf64/independentsum_100x100_f64__nx_\talloc_words\t1.699000e+03\t1.699000e+03\t1.699000e+03\t0.000000e+00\t5\t0\nf64/independentsum_100x100_f64__nx_\tcpu_time\t4.113704e-04\t4.017748e-04\t4.217268e-04\t2.425059e-02\t5\t0\nf64/independentsum_100x100_f64__nx_\twall_time\t1.732455e-04\t1.688961e-04\t1.779039e-04\t2.599720e-02\t5\t2\nf64/independentsum_200x200_f64__nx_\talloc_words\t1.699000e+03\t1.699000e+03\t1.699000e+03\t0.000000e+00\t5\t0\nf64/independentsum_200x200_f64__nx_\tcpu_time\t4.733798e-04\t4.692991e-04\t4.784865e-04\t9.704117e-03\t5\t0\nf64/independentsum_200x200_f64__nx_\twall_time\t1.991224e-04\t1.954485e-04\t2.025067e-04\t1.772320e-02\t5\t1\nf64/independentsum_50x50_f64__nx_\talloc_words\t1.699000e+03\t1.699000e+03\t1.699000e+03\t0.000000e+00\t5\t0\nf64/independentsum_50x50_f64__nx_\tcpu_time\t8.313444e-06\t8.269508e-06\t8.365449e-06\t5.770216e-03\t5\t2\nf64/independentsum_50x50_f64__nx_\twall_time\t8.318378e-06\t8.276324e-06\t8.372429e-06\t5.776654e-03\t5\t2\nf64/independentsum_512x512_f64__nx_\talloc_words\t1.699000e+03\t1.699000e+03\t1.699000e+03\t0.000000e+00\t5\t1\nf64/independentsum_512x512_f64__nx_\tcpu_time\t9.901247e-04\t9.801404e-04\t1.003114e-03\t1.160142e-02\t5\t0\nf64/independentsum_512x512_f64__nx_\twall_time\t3.018494e-04\t2.935635e-04\t3.142709e-04\t3.430086e-02\t5\t0\nf64/innerproduct_100x100_f64__nx_\talloc_words\t2.900000e+02\t2.900000e+02\t2.900000e+02\t0.000000e+00\t5\t0\nf64/innerproduct_100x100_f64__nx_\tcpu_time\t1.254726e-06\t1.244257e-06\t1.268506e-06\t9.663129e-03\t5\t1\nf64/innerproduct_100x100_f64__nx_\twall_time\t1.254991e-06\t1.245116e-06\t1.267312e-06\t8.843216e-03\t5\t1\nf64/innerproduct_200x200_f64__nx_\talloc_words\t2.900000e+02\t2.900000e+02\t2.900000e+02\t0.000000e+00\t5\t0\nf64/innerproduct_200x200_f64__nx_\tcpu_time\t1.372800e-06\t1.366217e-06\t1.378100e-06\t4.327955e-03\t5\t0\nf64/innerproduct_200x200_f64__nx_\twall_time\t1.373272e-06\t1.367614e-06\t1.379922e-06\t4.481246e-03\t5\t0\nf64/innerproduct_50x50_
f64__nx_\talloc_words\t2.900000e+02\t2.900000e+02\t2.900000e+02\t0.000000e+00\t5\t0\nf64/innerproduct_50x50_f64__nx_\tcpu_time\t1.169241e-06\t1.165243e-06\t1.177169e-06\t5.099995e-03\t5\t1\nf64/innerproduct_50x50_f64__nx_\twall_time\t1.169412e-06\t1.165101e-06\t1.177018e-06\t5.095543e-03\t5\t1\nf64/innerproduct_512x512_f64__nx_\talloc_words\t2.900000e+02\t2.900000e+02\t2.900000e+02\t0.000000e+00\t5\t0\nf64/innerproduct_512x512_f64__nx_\tcpu_time\t1.794112e-06\t1.775288e-06\t1.820588e-06\t1.262473e-02\t5\t1\nf64/innerproduct_512x512_f64__nx_\twall_time\t1.796327e-06\t1.776489e-06\t1.823544e-06\t1.309738e-02\t5\t1\nf64/matmul_100x100_f64__nx_\talloc_words\t1.360000e+02\t1.360000e+02\t1.360000e+02\t0.000000e+00\t5\t0\nf64/matmul_100x100_f64__nx_\tcpu_time\t2.720731e-05\t2.716254e-05\t2.725049e-05\t1.616393e-03\t5\t0\nf64/matmul_100x100_f64__nx_\twall_time\t2.721241e-05\t2.717116e-05\t2.725219e-05\t1.488889e-03\t5\t0\nf64/matmul_200x200_f64__nx_\talloc_words\t1.360000e+02\t1.360000e+02\t1.360000e+02\t0.000000e+00\t5\t0\nf64/matmul_200x200_f64__nx_\tcpu_time\t1.192722e-04\t1.190362e-04\t1.195069e-04\t1.973204e-03\t5\t0\nf64/matmul_200x200_f64__nx_\twall_time\t1.192726e-04\t1.190383e-04\t1.194766e-04\t1.837440e-03\t5\t0\nf64/matmul_50x50_f64__nx_\talloc_words\t1.360000e+02\t1.360000e+02\t1.360000e+02\t0.000000e+00\t5\t0\nf64/matmul_50x50_f64__nx_\tcpu_time\t3.063932e-06\t3.014326e-06\t3.101383e-06\t1.420676e-02\t5\t1\nf64/matmul_50x50_f64__nx_\twall_time\t3.065170e-06\t3.015116e-06\t3.105687e-06\t1.477417e-02\t5\t1\nf64/matmul_512x512_f64__nx_\talloc_words\t1.360000e+02\t1.360000e+02\t1.360000e+02\t0.000000e+00\t5\t1\nf64/matmul_512x512_f64__nx_\tcpu_time\t9.722075e-04\t9.688681e-04\t9.791965e-04\t5.311801e-03\t5\t1\nf64/matmul_512x512_f64__nx_\twall_time\t5.703058e-04\t5.670109e-04\t5.757878e-04\t7.694888e-03\t5\t0\n"
  },
  {
    "path": "packages/nx/bench/matmul/README.md",
    "content": "# Nx MatMul Benchmarks\n\nFocused benchmarks for dense matrix multiplication comparing Nx and NumPy.\n\nWe benchmark four representative shapes (square, tall-skinny, wide, large) in\nboth `f32` and `f64` to mirror the workloads we rely on for Rune and the Nx\nbackend. Each shape is tested in two modes:\n\n- **alloc**: a fresh output buffer is allocated every call (`Nx.matmul a b` /\n  `np.matmul(a, b)`)\n- **reuse**: a pre-allocated output buffer is passed in (`Nx.matmul ~out a b` /\n  `np.matmul(a, b, out=out)`)\n\nThe reuse variant isolates pure BLAS compute time from allocation overhead.\n\n## Running the Benchmarks\n\n### Nx (OCaml)\n\n```bash\ndune exec nx/bench/matmul/bench_matmul_nx.exe\n```\n\n### NumPy (Python)\n\n```bash\npython nx/bench/matmul/bench_matmul_numpy.py\n```\n\n## Results Nx (OCaml)\n\n```\n┌─────────────────────────────────────────────────────┬──────────┬──────────┬─────────┬─────────┬────────────┐\n│ Name                                                │ Wall/Run │  CPU/Run │ mWd/Run │ Speedup │ vs Fastest │\n├─────────────────────────────────────────────────────┼──────────┼──────────┼─────────┼─────────┼────────────┤\n│ MatMul SquareSmall 64x64 @ 64x64 f32 reuse (Nx)     │   1.43μs │   1.48μs │ 408.00w │   1.00x │       100% │\n│ MatMul SquareSmall 64x64 @ 64x64 f32 (Nx)           │   2.50μs │   2.49μs │ 688.00w │   0.57x │       175% │\n│ MatMul SquareSmall 64x64 @ 64x64 f64 reuse (Nx)     │   2.82μs │   2.85μs │ 408.00w │   0.51x │       198% │\n│ MatMul SquareSmall 64x64 @ 64x64 f64 (Nx)           │   4.31μs │   4.31μs │ 688.00w │   0.33x │       302% │\n│ MatMul Wide 128x256 @ 256x64 f32 reuse (Nx)         │   5.56μs │   5.55μs │ 408.00w │   0.26x │       390% │\n│ MatMul Wide 128x256 @ 256x64 f32 (Nx)               │   6.46μs │   6.46μs │ 688.00w │   0.22x │       453% │\n│ MatMul TallSkinny 256x64 @ 64x256 f32 reuse (Nx)    │   9.13μs │   9.12μs │ 408.00w │   0.16x │       640% │\n│ MatMul Wide 128x256 @ 256x64 f64 
reuse (Nx)         │  16.12μs │  16.14μs │ 408.00w │   0.09x │      1130% │\n│ MatMul Wide 128x256 @ 256x64 f64 (Nx)               │  17.51μs │  17.50μs │ 688.00w │   0.08x │      1227% │\n│ MatMul TallSkinny 256x64 @ 64x256 f64 reuse (Nx)    │  28.96μs │  28.93μs │ 408.00w │   0.05x │      2030% │\n│ MatMul TallSkinny 256x64 @ 64x256 f32 (Nx)          │  66.79μs │  66.82μs │ 681.00w │   0.02x │      4682% │\n│ MatMul TallSkinny 256x64 @ 64x256 f64 (Nx)          │ 104.87μs │ 104.87μs │ 681.00w │   0.01x │      7350% │\n│ MatMul SquareLarge 512x512 @ 512x512 f32 reuse (Nx) │ 142.69μs │ 240.46μs │ 408.00w │   0.01x │     10001% │\n│ MatMul SquareLarge 512x512 @ 512x512 f32 (Nx)       │ 244.27μs │ 332.73μs │ 681.00w │   0.01x │     17122% │\n│ MatMul SquareLarge 512x512 @ 512x512 f64 reuse (Nx) │ 455.47μs │ 865.78μs │ 408.00w │   0.00x │     31925% │\n│ MatMul SquareLarge 512x512 @ 512x512 f64 (Nx)       │ 558.42μs │ 965.35μs │ 681.00w │   0.00x │     39141% │\n└─────────────────────────────────────────────────────┴──────────┴──────────┴─────────┴─────────┴────────────┘\n```\n\n## Results NumPy (Python)\n\n```\n┌────────────────────────────────────────────────────────┬──────────┬──────────┬─────────┬─────────┬────────────┐\n│ Name                                                   │ Wall/Run │  CPU/Run │ mWd/Run │ Speedup │ vs Fastest │\n├────────────────────────────────────────────────────────┼──────────┼──────────┼─────────┼─────────┼────────────┤\n│ MatMul SquareSmall 64x64 @ 64x64 f32 (NumPy)           │   1.62µs │   1.62µs │   0.15w │   1.00x │       100% │\n│ MatMul SquareSmall 64x64 @ 64x64 f32 reuse (NumPy)     │   1.62µs │   1.62µs │   0.15w │   1.00x │       100% │\n│ MatMul SquareSmall 64x64 @ 64x64 f64 reuse (NumPy)     │   2.97µs │   2.97µs │   0.25w │   0.55x │       183% │\n│ MatMul SquareSmall 64x64 @ 64x64 f64 (NumPy)           │   3.03µs │   3.03µs │   0.25w │   0.54x │       187% │\n│ MatMul Wide 128x256 @ 256x64 f32 reuse (NumPy)         │   5.67µs 
│   5.67µs │   0.58w │   0.29x │       349% │\n│ MatMul Wide 128x256 @ 256x64 f32 (NumPy)               │   5.67µs │   5.67µs │   0.58w │   0.29x │       349% │\n│ MatMul TallSkinny 256x64 @ 64x256 f32 reuse (NumPy)    │   9.17µs │   9.16µs │   0.95w │   0.18x │       565% │\n│ MatMul TallSkinny 256x64 @ 64x256 f32 (NumPy)          │   9.94µs │   9.94µs │   0.94w │   0.16x │       613% │\n│ MatMul Wide 128x256 @ 256x64 f64 reuse (NumPy)         │  15.82µs │  15.82µs │   1.65w │   0.10x │       975% │\n│ MatMul Wide 128x256 @ 256x64 f64 (NumPy)               │  16.82µs │  16.82µs │   1.65w │   0.10x │      1036% │\n│ MatMul TallSkinny 256x64 @ 64x256 f64 reuse (NumPy)    │  28.73µs │  28.73µs │   2.77w │   0.06x │      1771% │\n│ MatMul TallSkinny 256x64 @ 64x256 f64 (NumPy)          │  29.63µs │  29.62µs │   2.73w │   0.05x │      1826% │\n│ MatMul SquareLarge 512x512 @ 512x512 f32 reuse (NumPy) │ 140.12µs │ 238.93µs │  22.98w │   0.01x │      8635% │\n│ MatMul SquareLarge 512x512 @ 512x512 f32 (NumPy)       │ 142.90µs │ 241.33µs │  22.51w │   0.01x │      8807% │\n│ MatMul SquareLarge 512x512 @ 512x512 f64 (NumPy)       │ 458.38µs │ 872.47µs │  84.53w │   0.00x │     28249% │\n│ MatMul SquareLarge 512x512 @ 512x512 f64 reuse (NumPy) │ 458.52µs │ 870.74µs │  87.59w │   0.00x │     28257% │\n└────────────────────────────────────────────────────────┴──────────┴──────────┴─────────┴─────────┴────────────┘\n```\n\n## Comparison (reuse, f32)\n\nPure BLAS compute time (pre-allocated output, f32):\n\n| Shape                      | Nx       | NumPy    | Ratio     |\n| -------------------------- | -------- | -------- | --------- |\n| SquareSmall 64x64          | 1.43μs   | 1.62μs   | **0.88x** |\n| Wide 128x256 @ 256x64      | 5.56μs   | 5.67μs   | **0.98x** |\n| TallSkinny 256x64 @ 64x256 | 9.13μs   | 9.17μs   | **1.00x** |\n| SquareLarge 512x512        | 142.69μs | 140.12μs | **1.02x** |\n\nWith a pre-allocated output buffer, Nx is at parity with NumPy.\n\n## Notes on 
allocation overhead\n\nThe alloc variants show a large gap on some shapes, most dramatically TallSkinny\nf32 where Nx takes 66.79μs (alloc) vs 9.13μs (reuse) — nearly 58μs of pure\nallocation overhead for a 256x256 output buffer (256 KB).\n\nNumPy's alloc path barely suffers (9.94μs vs 9.17μs reuse). This is likely because\nPython's memory allocator (pymalloc) recycles recently freed blocks: in a\nbenchmark loop, each iteration frees and immediately re-allocates the same-sized\noutput, so pymalloc returns the same already-faulted virtual pages. No new page\nfaults occur.\n"
  },
  {
    "path": "packages/nx/bench/matmul/bench_matmul_numpy.py",
    "content": "from __future__ import annotations\n\nimport sys\nfrom dataclasses import dataclass\nfrom pathlib import Path\nfrom typing import Any, Callable, List, Sequence, Tuple\n\nimport numpy as np\n\n_SCRIPTS_DIR = Path(__file__).resolve().parent\nwhile not (_SCRIPTS_DIR / \"dune-project\").exists():\n    _SCRIPTS_DIR = _SCRIPTS_DIR.parent\n_SCRIPTS_DIR = _SCRIPTS_DIR / \"scripts\"\nif str(_SCRIPTS_DIR) not in sys.path:\n    sys.path.insert(0, str(_SCRIPTS_DIR))\n\nimport ubench  # type: ignore\n\n\nBACKEND_NAME = \"NumPy\"\nDTYPES: Sequence[np.dtype] = (np.float32, np.float64)\n\n\n@dataclass(frozen=True)\nclass MatmulCase:\n    \"\"\"Specification for a matrix multiplication benchmark.\"\"\"\n\n    name: str\n    m: int\n    k: int\n    n: int\n    seed: int\n\n\nCASES: Sequence[MatmulCase] = (\n    MatmulCase(\"SquareSmall\", m=64, k=64, n=64, seed=11),\n    MatmulCase(\"TallSkinny\", m=256, k=64, n=256, seed=17),\n    MatmulCase(\"Wide\", m=128, k=256, n=64, seed=23),\n    MatmulCase(\"SquareLarge\", m=512, k=512, n=512, seed=29),\n)\n\n\ndef _dtype_label(dtype: np.dtype) -> str:\n    if dtype == np.float32:\n        return \"f32\"\n    if dtype == np.float64:\n        return \"f64\"\n    return str(dtype)\n\n\ndef _benchmark_name(case: MatmulCase, dtype: np.dtype, suffix: str = \"\") -> str:\n    return f\"MatMul {case.name} {case.m}x{case.k} @ {case.k}x{case.n} {_dtype_label(dtype)}{suffix} ({BACKEND_NAME})\"\n\n\ndef _make_operands(case: MatmulCase, dtype: np.dtype) -> Tuple[np.ndarray, np.ndarray]:\n    lhs_rng = np.random.default_rng(seed=case.seed)\n    rhs_rng = np.random.default_rng(seed=case.seed + 1)\n    lhs = lhs_rng.random((case.m, case.k), dtype=dtype)\n    rhs = rhs_rng.random((case.k, case.n), dtype=dtype)\n    return lhs, rhs\n\n\ndef build_benchmarks() -> List[Any]:\n    \"\"\"Build benchmarks for NumPy matmul.\"\"\"\n    benchmarks: List[Any] = []\n\n    for case in CASES:\n        for dtype in DTYPES:\n            lhs, rhs = 
_make_operands(case, dtype)\n\n            def make_fn(a: np.ndarray, b: np.ndarray) -> Callable[[], None]:\n                return lambda: np.matmul(a, b)\n\n            benchmarks.append(ubench.bench(_benchmark_name(case, dtype), make_fn(lhs, rhs)))\n\n            out = np.empty((case.m, case.n), dtype=dtype)\n\n            def make_fn_reuse(a: np.ndarray, b: np.ndarray, o: np.ndarray) -> Callable[[], None]:\n                return lambda: np.matmul(a, b, out=o)\n\n            benchmarks.append(ubench.bench(_benchmark_name(case, dtype, \" reuse\"), make_fn_reuse(lhs, rhs, out)))\n\n    return benchmarks\n\n\ndef default_config() -> ubench.Config:\n    \"\"\"Create default benchmark configuration.\"\"\"\n    return (\n        ubench.Config.default()\n        .time_limit(1.0)\n        .warmup(1)\n        .min_measurements(5)\n        .min_cpu(0.01)\n        .geometric_scale(1.3)\n        .gc_stabilization(False)\n        .build()\n    )\n\n\ndef main() -> None:\n    \"\"\"Main entry point.\"\"\"\n    benchmarks = build_benchmarks()\n    config = default_config()\n    ubench.run(benchmarks, config=config, output_format=\"pretty\", verbose=False)\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "packages/nx/bench/matmul/bench_matmul_nx.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nlet backend_name = \"Nx\"\n\ntype matmul_case = { name : string; m : int; k : int; n : int }\n\nlet cases =\n  [\n    { name = \"SquareSmall\"; m = 64; k = 64; n = 64 };\n    { name = \"TallSkinny\"; m = 256; k = 64; n = 256 };\n    { name = \"Wide\"; m = 128; k = 256; n = 64 };\n    { name = \"SquareLarge\"; m = 512; k = 512; n = 512 };\n  ]\n\nlet benchmark_name case dtype_label suffix =\n  Printf.sprintf \"MatMul %s %dx%d @ %dx%d %s%s (%s)\" case.name case.m case.k\n    case.k case.n dtype_label suffix backend_name\n\nlet setup_operands (type a b) (dtype : (a, b) Nx.dtype) case =\n  let lhs = Nx.rand dtype [| case.m; case.k |] in\n  let rhs = Nx.rand dtype [| case.k; case.n |] in\n  (lhs, rhs)\n\nlet add_case (type a b) benches case (dtype : (a, b) Nx.dtype) dtype_label =\n  let lhs, rhs = setup_operands dtype case in\n  let name = benchmark_name case dtype_label \"\" in\n  let fn () = Nx.matmul lhs rhs in\n  benches := Thumper.bench name fn :: !benches\n\nlet build_benchmarks () =\n  let f32_benches = ref [] in\n  let f64_benches = ref [] in\n  List.iter\n    (fun case ->\n      add_case f32_benches case Nx.Float32 \"f32\";\n      add_case f64_benches case Nx.Float64 \"f64\")\n    cases;\n  [\n    Thumper.group \"f32\" (List.rev !f32_benches);\n    Thumper.group \"f64\" (List.rev !f64_benches);\n  ]\n\nlet () =\n  let benchmarks = build_benchmarks () in\n  Thumper.run \"nx_matmul\" benchmarks\n"
  },
  {
    "path": "packages/nx/bench/matmul/dune",
    "content": "(executable\n (name bench_matmul_nx)\n (modules bench_matmul_nx)\n (libraries nx thumper))\n\n(rule\n (alias runtest)\n (action\n  (progn\n   (run %{exe:bench_matmul_nx.exe} -q)\n   (diff? nx_matmul.thumper nx_matmul.thumper.corrected))))\n"
  },
  {
    "path": "packages/nx/bench/matmul/nx_matmul.thumper",
    "content": "# thumper baseline\n# version: 1\n# suite_name: nx_matmul\n# host: 1480401c3b76ed18\n# cpu: Apple M1 Max\n# ocaml: 5.4.1\n# git: 31747323\n# dirty: true\n# command: /Users/tmattio/Workspace/raven/_build/default/packages/nx/bench/matmul/bench_matmul_nx.exe --bless --quick\n\nf32/matmul_squarelarge_512x512___512x512_f32__nx_\talloc_words\t1.360000e+02\t1.360000e+02\t1.360000e+02\t0.000000e+00\t5\t0\nf32/matmul_squarelarge_512x512___512x512_f32__nx_\tcpu_time\t3.098033e-04\t3.078516e-04\t3.124807e-04\t7.470928e-03\t5\t1\nf32/matmul_squarelarge_512x512___512x512_f32__nx_\twall_time\t2.106800e-04\t2.089472e-04\t2.131048e-04\t9.867098e-03\t5\t1\nf32/matmul_squaresmall_64x64___64x64_f32__nx_\talloc_words\t1.360000e+02\t1.360000e+02\t1.360000e+02\t0.000000e+00\t5\t0\nf32/matmul_squaresmall_64x64___64x64_f32__nx_\tcpu_time\t1.483007e-06\t1.465233e-06\t1.502532e-06\t1.257557e-02\t5\t2\nf32/matmul_squaresmall_64x64___64x64_f32__nx_\twall_time\t1.483088e-06\t1.465796e-06\t1.505107e-06\t1.325325e-02\t5\t2\nf32/matmul_tallskinny_256x64___64x256_f32__nx_\talloc_words\t1.360000e+02\t1.360000e+02\t1.360000e+02\t0.000000e+00\t5\t0\nf32/matmul_tallskinny_256x64___64x256_f32__nx_\tcpu_time\t5.986367e-05\t5.930609e-05\t6.047566e-05\t9.768660e-03\t5\t0\nf32/matmul_tallskinny_256x64___64x256_f32__nx_\twall_time\t5.989386e-05\t5.933197e-05\t6.040919e-05\t8.992800e-03\t5\t0\nf32/matmul_wide_128x256___256x64_f32__nx_\talloc_words\t1.360000e+02\t1.360000e+02\t1.360000e+02\t0.000000e+00\t5\t0\nf32/matmul_wide_128x256___256x64_f32__nx_\tcpu_time\t5.925620e-06\t5.888825e-06\t5.966923e-06\t6.589886e-03\t5\t0\nf32/matmul_wide_128x256___256x64_f32__nx_\twall_time\t5.929617e-06\t5.893135e-06\t5.978258e-06\t7.177803e-03\t5\t0\nf64/matmul_squarelarge_512x512___512x512_f64__nx_\talloc_words\t1.360000e+02\t1.360000e+02\t1.360000e+02\t0.000000e+00\t5\t1\nf64/matmul_squarelarge_512x512___512x512_f64__nx_\tcpu_time\t9.528020e-04\t9.434341e-04\t9.591325e-04\t8.238030e-03\t5\t0\nf64/matmul_sq
uarelarge_512x512___512x512_f64__nx_\twall_time\t5.317828e-04\t5.251065e-04\t5.359640e-04\t1.020860e-02\t5\t0\nf64/matmul_squaresmall_64x64___64x64_f64__nx_\talloc_words\t1.360000e+02\t1.360000e+02\t1.360000e+02\t0.000000e+00\t5\t0\nf64/matmul_squaresmall_64x64___64x64_f64__nx_\tcpu_time\t3.022022e-06\t3.001909e-06\t3.045365e-06\t7.190003e-03\t5\t1\nf64/matmul_squaresmall_64x64___64x64_f64__nx_\twall_time\t3.024805e-06\t3.004257e-06\t3.049405e-06\t7.462840e-03\t5\t0\nf64/matmul_tallskinny_256x64___64x256_f64__nx_\talloc_words\t1.360000e+02\t1.360000e+02\t1.360000e+02\t0.000000e+00\t5\t0\nf64/matmul_tallskinny_256x64___64x256_f64__nx_\tcpu_time\t8.711857e-05\t8.639228e-05\t8.780838e-05\t8.127438e-03\t5\t0\nf64/matmul_tallskinny_256x64___64x256_f64__nx_\twall_time\t8.725737e-05\t8.644738e-05\t8.805477e-05\t9.210608e-03\t5\t0\nf64/matmul_wide_128x256___256x64_f64__nx_\talloc_words\t1.360000e+02\t1.360000e+02\t1.360000e+02\t0.000000e+00\t5\t0\nf64/matmul_wide_128x256___256x64_f64__nx_\tcpu_time\t1.791992e-05\t1.777490e-05\t1.808887e-05\t8.760198e-03\t5\t1\nf64/matmul_wide_128x256___256x64_f64__nx_\twall_time\t1.793295e-05\t1.778910e-05\t1.806828e-05\t7.784129e-03\t5\t1\n"
  },
  {
    "path": "packages/nx/bench/nx.thumper",
    "content": "# thumper baseline\n# version: 1\n# suite_name: nx\n# host: 1480401c3b76ed18\n# cpu: Apple M1 Max\n# ocaml: 5.4.1\n# git: 31747323\n# dirty: true\n# command: /Users/tmattio/Workspace/raven/_build/default/packages/nx/bench/bench_nx.exe --bless --quick\n\nf32/add_100x100_f32__nx_\talloc_words\t1.590000e+02\t1.590000e+02\t1.590000e+02\t0.000000e+00\t5\t0\nf32/add_100x100_f32__nx_\tcpu_time\t2.289842e-06\t2.255045e-06\t2.324444e-06\t1.515354e-02\t5\t0\nf32/add_100x100_f32__nx_\twall_time\t2.292567e-06\t2.259659e-06\t2.327562e-06\t1.480952e-02\t5\t0\nf32/add_200x200_f32__nx_\talloc_words\t1.590000e+02\t1.590000e+02\t1.590000e+02\t0.000000e+00\t17\t0\nf32/add_200x200_f32__nx_\tcpu_time\t1.221056e-04\t1.180869e-04\t1.254806e-04\t3.027581e-02\t17\t0\nf32/add_200x200_f32__nx_\twall_time\t9.132800e-05\t8.955639e-05\t9.291314e-05\t1.837742e-02\t17\t2\nf32/add_500x500_f32__nx_\talloc_words\t1.590000e+02\t1.590000e+02\t1.590000e+02\t0.000000e+00\t5\t0\nf32/add_500x500_f32__nx_\tcpu_time\t1.982840e-04\t1.953932e-04\t2.003410e-04\t1.247646e-02\t5\t0\nf32/add_500x500_f32__nx_\twall_time\t1.331384e-04\t1.319710e-04\t1.343860e-04\t9.069246e-03\t5\t2\nf32/add_50x50_f32__nx_\talloc_words\t1.590000e+02\t1.590000e+02\t1.590000e+02\t0.000000e+00\t5\t0\nf32/add_50x50_f32__nx_\tcpu_time\t9.139271e-07\t9.030871e-07\t9.326158e-07\t1.615483e-02\t5\t2\nf32/add_50x50_f32__nx_\twall_time\t9.144538e-07\t9.013141e-07\t9.312193e-07\t1.635138e-02\t5\t2\nf32/mul_100x100_f32__nx_\talloc_words\t1.590000e+02\t1.590000e+02\t1.590000e+02\t0.000000e+00\t5\t0\nf32/mul_100x100_f32__nx_\tcpu_time\t2.232628e-06\t2.220281e-06\t2.244351e-06\t5.390599e-03\t5\t0\nf32/mul_100x100_f32__nx_\twall_time\t2.232888e-06\t2.220330e-06\t2.246141e-06\t5.779777e-03\t5\t0\nf32/mul_200x200_f32__nx_\talloc_words\t1.590000e+02\t1.590000e+02\t1.590000e+02\t0.000000e+00\t19\t0\nf32/mul_200x200_f32__nx_\tcpu_time\t1.201819e-04\t1.157264e-04\t1.250037e-04\t3.859657e-02\t19\t1\nf32/mul_200x200_f32__nx_\twall_time\t8.937
001e-05\t8.702655e-05\t9.206869e-05\t2.820937e-02\t19\t1\nf32/mul_500x500_f32__nx_\talloc_words\t1.590000e+02\t1.590000e+02\t1.590000e+02\t0.000000e+00\t5\t0\nf32/mul_500x500_f32__nx_\tcpu_time\t1.998010e-04\t1.955131e-04\t2.045045e-04\t2.250090e-02\t5\t0\nf32/mul_500x500_f32__nx_\twall_time\t1.381852e-04\t1.347488e-04\t1.447954e-04\t3.635207e-02\t5\t2\nf32/mul_50x50_f32__nx_\talloc_words\t1.590000e+02\t1.590000e+02\t1.590000e+02\t0.000000e+00\t5\t0\nf32/mul_50x50_f32__nx_\tcpu_time\t9.028346e-07\t8.931068e-07\t9.136291e-07\t1.136551e-02\t5\t0\nf32/mul_50x50_f32__nx_\twall_time\t9.034284e-07\t8.932583e-07\t9.148317e-07\t1.193975e-02\t5\t0\nf32/sum_100x100_f32__nx_\talloc_words\t1.470000e+02\t1.470000e+02\t1.470000e+02\t0.000000e+00\t5\t0\nf32/sum_100x100_f32__nx_\tcpu_time\t1.651748e-04\t1.621466e-04\t1.690628e-04\t2.093597e-02\t5\t0\nf32/sum_100x100_f32__nx_\twall_time\t9.238947e-05\t9.053141e-05\t9.423027e-05\t2.001771e-02\t5\t0\nf32/sum_200x200_f32__nx_\talloc_words\t1.470000e+02\t1.470000e+02\t1.470000e+02\t0.000000e+00\t5\t0\nf32/sum_200x200_f32__nx_\tcpu_time\t1.989311e-04\t1.979527e-04\t2.003839e-04\t6.110793e-03\t5\t1\nf32/sum_200x200_f32__nx_\twall_time\t1.002071e-04\t9.748815e-05\t1.027331e-04\t2.617052e-02\t5\t0\nf32/sum_500x500_f32__nx_\talloc_words\t1.470000e+02\t1.470000e+02\t1.470000e+02\t0.000000e+00\t5\t0\nf32/sum_500x500_f32__nx_\tcpu_time\t4.345019e-04\t4.263330e-04\t4.394754e-04\t1.512357e-02\t5\t2\nf32/sum_500x500_f32__nx_\twall_time\t1.474488e-04\t1.430595e-04\t1.530280e-04\t3.380340e-02\t5\t2\nf32/sum_50x50_f32__nx_\talloc_words\t1.470000e+02\t1.470000e+02\t1.470000e+02\t0.000000e+00\t5\t0\nf32/sum_50x50_f32__nx_\tcpu_time\t2.932343e-06\t2.916853e-06\t2.948537e-06\t5.402607e-03\t5\t0\nf32/sum_50x50_f32__nx_\twall_time\t2.935280e-06\t2.917064e-06\t2.949515e-06\t5.527796e-03\t5\t0\nf32/transpose_100x100_f32__nx_\talloc_words\t9.600000e+01\t9.600000e+01\t9.600000e+01\t0.000000e+00\t5\t0\nf32/transpose_100x100_f32__nx_\tcpu_time\t1.841045e-07\t1.8
37203e-07\t1.846028e-07\t2.396922e-03\t5\t1\nf32/transpose_100x100_f32__nx_\twall_time\t1.842488e-07\t1.838311e-07\t1.847817e-07\t2.579518e-03\t5\t0\nf32/transpose_200x200_f32__nx_\talloc_words\t9.600000e+01\t9.600000e+01\t9.600000e+01\t0.000000e+00\t5\t0\nf32/transpose_200x200_f32__nx_\tcpu_time\t1.840894e-07\t1.834876e-07\t1.846789e-07\t3.235640e-03\t5\t1\nf32/transpose_200x200_f32__nx_\twall_time\t1.842397e-07\t1.836915e-07\t1.847628e-07\t2.907419e-03\t5\t1\nf32/transpose_500x500_f32__nx_\talloc_words\t9.600000e+01\t9.600000e+01\t9.600000e+01\t0.000000e+00\t5\t0\nf32/transpose_500x500_f32__nx_\tcpu_time\t1.834628e-07\t1.819333e-07\t1.845332e-07\t7.085538e-03\t5\t1\nf32/transpose_500x500_f32__nx_\twall_time\t1.836824e-07\t1.822987e-07\t1.847774e-07\t6.747174e-03\t5\t1\nf32/transpose_50x50_f32__nx_\talloc_words\t9.600000e+01\t9.600000e+01\t9.600000e+01\t0.000000e+00\t5\t0\nf32/transpose_50x50_f32__nx_\tcpu_time\t1.801717e-07\t1.778792e-07\t1.833509e-07\t1.518458e-02\t5\t0\nf32/transpose_50x50_f32__nx_\twall_time\t1.803913e-07\t1.781703e-07\t1.836419e-07\t1.516598e-02\t5\t0\nf64/add_100x100_f64__nx_\talloc_words\t1.590000e+02\t1.590000e+02\t1.590000e+02\t0.000000e+00\t5\t0\nf64/add_100x100_f64__nx_\tcpu_time\t2.126338e-05\t2.101478e-05\t2.156238e-05\t1.287656e-02\t5\t0\nf64/add_100x100_f64__nx_\twall_time\t2.133682e-05\t2.108753e-05\t2.165103e-05\t1.320477e-02\t5\t0\nf64/add_200x200_f64__nx_\talloc_words\t1.590000e+02\t1.590000e+02\t1.590000e+02\t0.000000e+00\t5\t0\nf64/add_200x200_f64__nx_\tcpu_time\t1.894001e-04\t1.868058e-04\t1.918444e-04\t1.330146e-02\t5\t2\nf64/add_200x200_f64__nx_\twall_time\t1.351414e-04\t1.344452e-04\t1.359930e-04\t5.726566e-03\t5\t1\nf64/add_500x500_f64__nx_\talloc_words\t1.590000e+02\t1.590000e+02\t1.590000e+02\t0.000000e+00\t5\t0\nf64/add_500x500_f64__nx_\tcpu_time\t2.356499e-04\t2.299707e-04\t2.426843e-04\t2.697567e-02\t5\t0\nf64/add_500x500_f64__nx_\twall_time\t1.503470e-04\t1.468076e-04\t1.573330e-04\t3.500354e-02\t5\t1\nf64/add_50x50_
f64__nx_\talloc_words\t1.590000e+02\t1.590000e+02\t1.590000e+02\t0.000000e+00\t5\t0\nf64/add_50x50_f64__nx_\tcpu_time\t1.121478e-06\t1.103626e-06\t1.134863e-06\t1.392706e-02\t5\t1\nf64/add_50x50_f64__nx_\twall_time\t1.160394e-06\t1.130030e-06\t1.201028e-06\t3.059240e-02\t5\t1\nf64/mul_100x100_f64__nx_\talloc_words\t1.590000e+02\t1.590000e+02\t1.590000e+02\t0.000000e+00\t5\t0\nf64/mul_100x100_f64__nx_\tcpu_time\t2.125428e-05\t2.111687e-05\t2.142256e-05\t7.191079e-03\t5\t1\nf64/mul_100x100_f64__nx_\twall_time\t2.127835e-05\t2.112211e-05\t2.146393e-05\t8.032123e-03\t5\t1\nf64/mul_200x200_f64__nx_\talloc_words\t1.590000e+02\t1.590000e+02\t1.590000e+02\t0.000000e+00\t17\t0\nf64/mul_200x200_f64__nx_\tcpu_time\t1.814761e-04\t1.743791e-04\t1.874073e-04\t3.589524e-02\t17\t0\nf64/mul_200x200_f64__nx_\twall_time\t1.352870e-04\t1.329161e-04\t1.380020e-04\t1.879655e-02\t17\t1\nf64/mul_500x500_f64__nx_\talloc_words\t1.590000e+02\t1.590000e+02\t1.590000e+02\t0.000000e+00\t5\t0\nf64/mul_500x500_f64__nx_\tcpu_time\t2.469418e-04\t2.437919e-04\t2.492588e-04\t1.106920e-02\t5\t1\nf64/mul_500x500_f64__nx_\twall_time\t1.515527e-04\t1.476347e-04\t1.561116e-04\t2.796696e-02\t5\t1\nf64/mul_50x50_f64__nx_\talloc_words\t1.590000e+02\t1.590000e+02\t1.590000e+02\t0.000000e+00\t5\t0\nf64/mul_50x50_f64__nx_\tcpu_time\t1.084578e-06\t1.080511e-06\t1.090346e-06\t4.534014e-03\t5\t1\nf64/mul_50x50_f64__nx_\twall_time\t1.084654e-06\t1.080573e-06\t1.090667e-06\t4.653020e-03\t5\t1\nf64/sum_100x100_f64__nx_\talloc_words\t1.470000e+02\t1.470000e+02\t1.470000e+02\t0.000000e+00\t6\t0\nf64/sum_100x100_f64__nx_\tcpu_time\t2.008794e-04\t1.965453e-04\t2.072756e-04\t2.670825e-02\t6\t0\nf64/sum_100x100_f64__nx_\twall_time\t9.554162e-05\t9.382525e-05\t9.769565e-05\t2.025505e-02\t6\t0\nf64/sum_200x200_f64__nx_\talloc_words\t1.470000e+02\t1.470000e+02\t1.470000e+02\t0.000000e+00\t5\t0\nf64/sum_200x200_f64__nx_\tcpu_time\t2.272642e-04\t2.234083e-04\t2.335007e-04\t2.220419e-02\t5\t1\nf64/sum_200x200_f64__nx_\twall_time\
t1.069815e-04\t1.031181e-04\t1.131110e-04\t4.670401e-02\t5\t1\nf64/sum_500x500_f64__nx_\talloc_words\t1.470000e+02\t1.470000e+02\t1.470000e+02\t0.000000e+00\t5\t0\nf64/sum_500x500_f64__nx_\tcpu_time\t4.457657e-04\t4.412075e-04\t4.507980e-04\t1.075731e-02\t5\t0\nf64/sum_500x500_f64__nx_\twall_time\t1.481280e-04\t1.460633e-04\t1.505301e-04\t1.507728e-02\t5\t0\nf64/sum_50x50_f64__nx_\talloc_words\t1.470000e+02\t1.470000e+02\t1.470000e+02\t0.000000e+00\t5\t0\nf64/sum_50x50_f64__nx_\tcpu_time\t2.815102e-06\t2.810154e-06\t2.819670e-06\t1.690112e-03\t5\t1\nf64/sum_50x50_f64__nx_\twall_time\t2.815492e-06\t2.811970e-06\t2.819541e-06\t1.344629e-03\t5\t1\nf64/transpose_100x100_f64__nx_\talloc_words\t9.600000e+01\t9.600000e+01\t9.600000e+01\t0.000000e+00\t5\t0\nf64/transpose_100x100_f64__nx_\tcpu_time\t1.838005e-07\t1.801340e-07\t1.880564e-07\t2.155166e-02\t5\t1\nf64/transpose_100x100_f64__nx_\twall_time\t1.843967e-07\t1.809335e-07\t1.879831e-07\t1.911527e-02\t5\t1\nf64/transpose_200x200_f64__nx_\talloc_words\t9.600000e+01\t9.600000e+01\t9.600000e+01\t0.000000e+00\t5\t0\nf64/transpose_200x200_f64__nx_\tcpu_time\t1.849292e-07\t1.835602e-07\t1.873169e-07\t1.015702e-02\t5\t1\nf64/transpose_200x200_f64__nx_\twall_time\t1.849859e-07\t1.836472e-07\t1.873381e-07\t9.976236e-03\t5\t1\nf64/transpose_500x500_f64__nx_\talloc_words\t9.600000e+01\t9.600000e+01\t9.600000e+01\t0.000000e+00\t5\t0\nf64/transpose_500x500_f64__nx_\tcpu_time\t1.841557e-07\t1.828908e-07\t1.855841e-07\t7.312706e-03\t5\t0\nf64/transpose_500x500_f64__nx_\twall_time\t1.843796e-07\t1.832046e-07\t1.858367e-07\t7.137778e-03\t5\t1\nf64/transpose_50x50_f64__nx_\talloc_words\t9.600000e+01\t9.600000e+01\t9.600000e+01\t0.000000e+00\t5\t0\nf64/transpose_50x50_f64__nx_\tcpu_time\t1.836186e-07\t1.785118e-07\t1.879890e-07\t2.580672e-02\t5\t0\nf64/transpose_50x50_f64__nx_\twall_time\t1.842476e-07\t1.789137e-07\t1.888056e-07\t2.684399e-02\t5\t0\n"
  },
  {
    "path": "packages/nx/doc/01-getting-started.md",
    "content": "# Getting Started\n\nThis guide covers installation, data types, array creation, slicing, broadcasting, and basic operations.\n\n## Installation\n\n<!-- $MDX skip -->\n```bash\nopam install nx\n```\n\nOr build from source:\n\n<!-- $MDX skip -->\n```bash\ngit clone https://github.com/raven-ml/raven\ncd raven && dune build packages/nx\n```\n\nAdd to your `dune` file:\n\n<!-- $MDX skip -->\n```dune\n(executable\n (name main)\n (libraries nx))\n```\n\n## Creating Arrays\n\n```ocaml\nopen Nx\n\nlet () =\n  (* From explicit values: provide dtype, shape, and flat data *)\n  let a = create Float32 [|2; 3|] [|1.; 2.; 3.; 4.; 5.; 6.|] in\n  print_data a;\n\n  (* Filled arrays *)\n  let z = zeros Float32 [|3; 3|] in\n  let o = ones Int32 [|5|] in\n  let f = full Float64 [|2; 2|] 3.14 in\n  ignore (z, o, f);\n\n  (* Ranges and sequences *)\n  let r = arange Int32 0 10 1 in          (* [0, 1, ..., 9] *)\n  let l = linspace Float32 0. 1. 5 in     (* 5 points in [0, 1] *)\n  ignore (r, l);\n\n  (* Random arrays *)\n  let x = rand Float32 [|3; 4|] in\n  let y = randn Float32 [|3; 4|] in\n  ignore (x, y);\n\n  (* Special matrices *)\n  let i = eye Float32 3 in               (* 3×3 identity *)\n  print_data i\n```\n\n## Data Types\n\nEvery array has a `dtype` that determines its element type. Common dtypes:\n\n| Dtype | OCaml type | Typical use |\n|-------|-----------|-------------|\n| `Float32` | `float` | Neural networks, images |\n| `Float64` | `float` | Scientific computing |\n| `Int32` | `int32` | Integer data, indices |\n| `Int64` | `int64` | Large integers |\n| `Bool` | `bool` | Masks, conditions |\n| `Complex128` | `Complex.t` | Signal processing |\n\nNx does not automatically cast between types. Convert explicitly with `astype`:\n\n```ocaml\nopen Nx\n\nlet () =\n  let x = create Int32 [|3|] [|1l; 2l; 3l|] in\n  let y = astype Float32 x in\n  print_data y   (* [1. 2. 3.] 
as float32 *)\n```\n\n## Array Properties\n\n```ocaml\nopen Nx\n\nlet () =\n  let x = rand Float32 [|2; 3; 4|] in\n  Printf.printf \"shape: [|%s|]\\n\"\n    (Array.to_list (shape x) |> List.map string_of_int |> String.concat \"; \");\n  Printf.printf \"ndim: %d\\n\" (ndim x);         (* 3 *)\n  Printf.printf \"size: %d\\n\" (size x);          (* 24 *)\n  Printf.printf \"dtype: %s\\n\" (dtype_to_string (dtype x))\n```\n\n## Element-wise Operations\n\nBinary operations work element-wise and support broadcasting:\n\n```ocaml\nopen Nx\n\nlet () =\n  let a = create Float32 [|3|] [|1.; 2.; 3.|] in\n  let b = create Float32 [|3|] [|4.; 5.; 6.|] in\n\n  let _ = add a b in       (* [5. 7. 9.] *)\n  let _ = mul a b in       (* [4. 10. 18.] *)\n  let _ = sub a b in       (* [-3. -3. -3.] *)\n  let _ = div a b in       (* [0.25 0.4 0.5] *)\n\n  (* Scalar operations *)\n  let _ = add a (scalar Float32 10.) in   (* [11. 12. 13.] *)\n\n  (* Math functions *)\n  let _ = sin a in\n  let _ = exp a in\n  let _ = sqrt (abs a) in\n  ()\n```\n\n## Reductions\n\n```ocaml\nopen Nx\n\nlet () =\n  let x = create Float32 [|2; 3|] [|1.; 2.; 3.; 4.; 5.; 6.|] in\n\n  (* Reduce all elements *)\n  Printf.printf \"sum = %.1f\\n\" (item [] (sum x));\n  Printf.printf \"mean = %.1f\\n\" (item [] (mean x));\n\n  (* Reduce along an axis *)\n  let col_sums = sum ~axes:[0] x in    (* sum each column *)\n  print_data col_sums;   (* [5. 7. 9.] *)\n\n  let row_sums = sum ~axes:[1] x in    (* sum each row *)\n  print_data row_sums    (* [6. 15.] 
*)\n```\n\n## Slicing and Indexing\n\n### Basic indexing\n\n```ocaml\nopen Nx\n\nlet () =\n  let x = create Int32 [|3; 3|] [|1l; 2l; 3l; 4l; 5l; 6l; 7l; 8l; 9l|] in\n\n  (* Get a row *)\n  let row = get [1] x in           (* [4, 5, 6] *)\n  print_data row;\n\n  (* Get a scalar *)\n  let v = item [1; 2] x in        (* 6l *)\n  Printf.printf \"x[1,2] = %ld\\n\" v\n```\n\n### Advanced slicing\n\n```ocaml\nopen Nx\n\nlet () =\n  let x = create Int32 [|4; 4|]\n    [|1l; 2l; 3l; 4l; 5l; 6l; 7l; 8l;\n      9l; 10l; 11l; 12l; 13l; 14l; 15l; 16l|] in\n\n  (* Range: rows 0 to 2 (exclusive), all columns *)\n  let sub = slice [R (0, 2); A] x in\n  print_data sub;\n\n  (* Single index on one axis, range on another *)\n  let row1_cols = slice [I 1; R (0, 3)] x in\n  print_data row1_cols;\n\n  (* Gather specific indices *)\n  let picked = slice [L [0; 3]; L [1; 2]] x in\n  print_data picked\n```\n\nIndex types: `I i` (single index), `R (start, stop)` (half-open range), `Rs (start, stop, step)` (strided range), `L indices` (gather), `A` (all), `N` (new axis).\n\n## Broadcasting\n\nOperations automatically broadcast arrays with compatible shapes. 
Dimensions are aligned from the right, and each pair must be equal or one must be 1:\n\n```ocaml\nopen Nx\n\nlet () =\n  let matrix = ones Float32 [|3; 4|] in\n  let row = create Float32 [|1; 4|] [|10.; 20.; 30.; 40.|] in\n  let result = add matrix row in    (* row added to every row *)\n  print_data result\n```\n\n## Matrix Multiplication\n\n```ocaml\nopen Nx\n\nlet () =\n  let a = rand Float32 [|3; 4|] in\n  let b = rand Float32 [|4; 2|] in\n  let c = matmul a b in\n  Printf.printf \"(%d×%d) × (%d×%d) = (%d×%d)\\n\"\n    (dim 0 a) (dim 1 a) (dim 0 b) (dim 1 b) (dim 0 c) (dim 1 c)\n```\n\n## Next Steps\n\n- [Array Operations](/docs/nx/array-operations/) — reshaping, views, joining, transposing\n- [Linear Algebra](/docs/nx/linear-algebra/) — decompositions, solvers, FFT\n- [NumPy Comparison](/docs/nx/numpy-comparison/) — side-by-side reference if you're coming from Python\n"
  },
  {
    "path": "packages/nx/doc/02-array-operations.md",
    "content": "# Array Operations\n\nThis guide covers reshaping, broadcasting, joining, slicing, and the view model that underlies Nx's efficiency.\n\n## Views and Copies\n\nMany Nx operations return **views** — tensors that share the underlying buffer with the original but have different shape, strides, or offset. Views are O(1) and allocate no new data.\n\nView-producing operations: `reshape`, `transpose`, `slice`, `squeeze`, `unsqueeze`, `flip`, `get`, `moveaxis`, `swapaxes`.\n\nCopy-producing operations: `contiguous`, `copy`, `concatenate`, `stack`, `pad`, element-wise operations.\n\nUse `is_c_contiguous` to check whether elements are laid out contiguously in row-major order, and `contiguous` to force a copy when needed:\n\n<!-- $MDX skip -->\n```ocaml\nlet t = Nx.transpose x in\nNx.is_c_contiguous t       (* often false *)\nlet t' = Nx.contiguous t   (* force a contiguous copy *)\n```\n\n## Reshaping\n\n### reshape\n\nChange the shape without changing the data order. The total number of elements must match. Use `-1` to infer one dimension:\n\n```ocaml\nopen Nx\n\nlet () =\n  let x = create Int32 [|6|] [|1l; 2l; 3l; 4l; 5l; 6l|] in\n  let a = reshape [|2; 3|] x in\n  let b = reshape [|3; -1|] x in   (* -1 inferred as 2 *)\n  print_data a;\n  print_data b\n```\n\n### flatten and unflatten\n\n`flatten` collapses dimensions into one. 
`unflatten` expands a dimension back:\n\n```ocaml\nopen Nx\n\nlet () =\n  let x = zeros Float32 [|2; 3; 4|] in\n  ignore (flatten x |> shape);                  (* [|24|] *)\n  ignore (flatten ~start_dim:1 x |> shape);     (* [|2; 12|] *)\n\n  let y = zeros Float32 [|2; 12|] in\n  ignore (unflatten 1 [|3; 4|] y |> shape)      (* [|2; 3; 4|] *)\n```\n\n### squeeze and unsqueeze\n\nRemove or add dimensions of size 1:\n\n```ocaml\nopen Nx\n\nlet () =\n  let x = ones Float32 [|1; 3; 1; 4|] in\n  let a = squeeze x in                    (* [|3; 4|] *)\n  let b = squeeze ~axes:[0] x in          (* [|3; 1; 4|] *)\n  Printf.printf \"squeeze all: %dx%d\\n\" (dim 0 a) (dim 1 a);\n  Printf.printf \"squeeze [0]: %dx%dx%d\\n\" (dim 0 b) (dim 1 b) (dim 2 b);\n\n  let y = create Float32 [|3|] [|1.; 2.; 3.|] in\n  let c = unsqueeze ~axes:[0; 2] y in     (* [|1; 3; 1|] *)\n  Printf.printf \"unsqueeze: %dx%dx%d\\n\" (dim 0 c) (dim 1 c) (dim 2 c)\n```\n\n## Broadcasting\n\nBinary operations automatically broadcast operands. Dimensions are aligned from the right, and each pair must be equal or one must be 1:\n\n```ocaml\nopen Nx\n\nlet () =\n  (* Add a row vector to every row of a matrix *)\n  let matrix = ones Float32 [|3; 4|] in\n  let row = create Float32 [|1; 4|] [|10.; 20.; 30.; 40.|] in\n  let result = add matrix row in\n  print_data result;\n\n  (* Add a column vector to every column *)\n  let col = create Float32 [|3; 1|] [|100.; 200.; 300.|] in\n  let result2 = add matrix col in\n  print_data result2\n```\n\nYou can also broadcast explicitly:\n\n<!-- $MDX skip -->\n```ocaml\nlet x = Nx.broadcast_to [|3; 3|] (Nx.create Nx.Float32 [|1; 3|] [|1.; 2.; 3.|])\n(* Repeats the row 3 times without copying data *)\n```\n\n### Broadcasting rules\n\nShapes are compatible when, aligned from the right, every dimension pair is either equal or one of them is 1. 
The result shape takes the maximum at each position.\n\n```\n[|   3; 4|]  +  [|1; 4|]  →  [|3; 4|]   ✓\n[|2; 3; 4|]  +  [|   4|]  →  [|2; 3; 4|] ✓\n[|   3; 4|]  +  [|3; 1|]  →  [|3; 4|]   ✓\n[|      3|]  +  [|   4|]  →  error        ✗\n```\n\n## Transposing and Permuting\n\n### transpose\n\nReverse dimensions (no copy):\n\n```ocaml\nopen Nx\n\nlet () =\n  let x = create Int32 [|2; 3|] [|1l; 2l; 3l; 4l; 5l; 6l|] in\n  let t = transpose x in\n  print_data t\n  (* [[1, 4],\n      [2, 5],\n      [3, 6]] *)\n```\n\nSpecify a permutation for higher-rank tensors:\n\n<!-- $MDX skip -->\n```ocaml\n(* Permute [batch; height; width; channels] to [batch; channels; height; width] *)\nlet nhwc_to_nchw x = Nx.transpose ~axes:[0; 3; 1; 2] x\n```\n\n### moveaxis and swapaxes\n\nMove or swap individual dimensions:\n\n<!-- $MDX skip -->\n```ocaml\nNx.moveaxis 0 2 x    (* move axis 0 to position 2 *)\nNx.swapaxes 1 2 x    (* swap axes 1 and 2 *)\n```\n\n### flip\n\nReverse elements along axes:\n\n<!-- $MDX skip -->\n```ocaml\nNx.flip ~axes:[1] x     (* mirror columns *)\nNx.flip x                (* reverse all dimensions *)\n```\n\n## Indexing and Slicing\n\n### get\n\nIndex from the outermost dimension inward. 
Returns a sub-tensor (view):\n\n```ocaml\nopen Nx\n\nlet () =\n  let x = create Int32 [|2; 3|] [|1l; 2l; 3l; 4l; 5l; 6l|] in\n  let row = get [1] x in      (* second row: [4, 5, 6] *)\n  print_data row\n```\n\n### item\n\nExtract a scalar value:\n\n<!-- $MDX skip -->\n```ocaml\nlet v = Nx.item [1; 2] matrix    (* element at row 1, column 2 *)\n```\n\n### slice\n\nAdvanced indexing with range and index specifications:\n\n```ocaml\nopen Nx\n\nlet () =\n  let x = create Int32 [|3; 3|] [|1l; 2l; 3l; 4l; 5l; 6l; 7l; 8l; 9l|] in\n\n  (* R (start, stop): half-open range *)\n  let rows_0_1 = slice [R (0, 2); A] x in\n  print_data rows_0_1;\n\n  (* I i: single index (reduces dimension) *)\n  let col_1 = slice [A; I 1] x in\n  print_data col_1;\n\n  (* L [indices]: gather specific indices *)\n  let corners = slice [L [0; 2]; L [0; 2]] x in\n  print_data corners\n```\n\nIndex types:\n- `I i` — single index (reduces dimension)\n- `R (start, stop)` — half-open range\n- `Rs (start, stop, step)` — strided range\n- `L indices` — gather listed indices\n- `A` — all elements (default for trailing axes)\n- `N` — insert new axis of size 1\n\n## Joining and Splitting\n\n### concatenate\n\nJoin tensors along an existing axis:\n\n```ocaml\nopen Nx\n\nlet () =\n  let a = ones Float32 [|2; 3|] in\n  let b = zeros Float32 [|2; 3|] in\n  let c = concatenate ~axis:0 [a; b] in   (* [|4; 3|] *)\n  Printf.printf \"concat axis 0: %dx%d\\n\" (dim 0 c) (dim 1 c);\n  let d = concatenate ~axis:1 [a; b] in   (* [|2; 6|] *)\n  Printf.printf \"concat axis 1: %dx%d\\n\" (dim 0 d) (dim 1 d)\n```\n\nShorthands: `vstack` (axis 0), `hstack` (axis 1), `dstack` (axis 2).\n\n### stack\n\nJoin tensors along a **new** axis:\n\n```ocaml\nopen Nx\n\nlet () =\n  let a = create Float32 [|3|] [|1.; 2.; 3.|] in\n  let b = create Float32 [|3|] [|4.; 5.; 6.|] in\n  let c = stack ~axis:0 [a; b] in   (* [|2; 3|] *)\n  print_data c\n```\n\n### split\n\nSplit a tensor into equal parts along an axis:\n\n<!-- $MDX skip 
-->\n```ocaml\nlet parts = Nx.split ~axis:0 2 x    (* split into 2 along axis 0 *)\n```\n\n## Tiling and Repeating\n\n### tile\n\nReplicate the tensor according to a repeat pattern:\n\n<!-- $MDX skip -->\n```ocaml\n(* Tile a [2; 3] tensor 2x along rows, 3x along columns → [4; 9] *)\nNx.tile [|2; 3|] x\n```\n\n### repeat\n\nRepeat elements along a single axis:\n\n<!-- $MDX skip -->\n```ocaml\n(* Repeat each element 3 times along axis 0 *)\nNx.repeat ~axis:0 3 x\n```\n\n### pad\n\nPad with a constant value:\n\n<!-- $MDX skip -->\n```ocaml\n(* Pad: 1 before and 2 after along axis 0, 0 and 1 along axis 1 *)\nNx.pad [|(1, 2); (0, 1)|] 0. x\n```\n\n## Next Steps\n\n- [Linear Algebra](/docs/nx/linear-algebra/) — matrix operations, decompositions, FFT\n- [Input/Output](/docs/nx/io/) — reading and writing images, npy, npz files\n- [NumPy Comparison](/docs/nx/numpy-comparison/) — side-by-side reference\n"
  },
  {
    "path": "packages/nx/doc/03-linear-algebra.md",
    "content": "# Linear Algebra\n\nNx provides a comprehensive linear algebra suite and FFT operations. This guide covers the most commonly used operations.\n\n## Matrix Multiplication\n\n### matmul\n\nGeneral matrix multiplication supporting batched inputs:\n\n```ocaml\nopen Nx\n\nlet () =\n  let a = rand Float32 [|3; 4|] in\n  let b = rand Float32 [|4; 2|] in\n  let c = matmul a b in   (* [|3; 2|] *)\n  Printf.printf \"result shape: [|%d; %d|]\\n\" (dim 0 c) (dim 1 c)\n```\n\n`matmul` supports batched matrix multiplication: leading dimensions are broadcast.\n\n<!-- $MDX skip -->\n```ocaml\n(* Batched: [|batch; m; k|] × [|batch; k; n|] → [|batch; m; n|] *)\nlet a = Nx.rand Nx.Float32 [|10; 3; 4|] in\nlet b = Nx.rand Nx.Float32 [|10; 4; 2|] in\nlet c = Nx.matmul a b   (* [|10; 3; 2|] *)\n```\n\n### Related products\n\n| Function | Purpose |\n|----------|---------|\n| `dot` | Inner product (flattened inputs) |\n| `vdot` | Complex-conjugate inner product |\n| `inner` | Inner product over last axes |\n| `outer` | Outer product of 1-D tensors |\n| `tensordot` | Contraction over specified axes |\n| `einsum` | Einstein summation notation |\n| `kron` | Kronecker product |\n| `cross` | Cross product of 3-element vectors |\n\n### einsum\n\nEinstein summation provides a compact notation for many tensor operations:\n\n<!-- $MDX skip -->\n```ocaml\n(* Matrix multiplication: ij,jk->ik *)\nlet c = Nx.einsum \"ij,jk->ik\" [|a; b|]\n\n(* Batch matrix multiply: bij,bjk->bik *)\nlet c = Nx.einsum \"bij,bjk->bik\" [|a; b|]\n\n(* Trace: ii-> *)\nlet tr = Nx.einsum \"ii->\" [|m|]\n\n(* Transpose: ij->ji *)\nlet t = Nx.einsum \"ij->ji\" [|m|]\n```\n\n## Decompositions\n\n### Cholesky\n\nFactor a symmetric positive-definite matrix: A = L·Lᵀ\n\n<!-- $MDX skip -->\n```ocaml\nlet l = Nx.cholesky a               (* lower triangular by default *)\nlet u = Nx.cholesky ~upper:true a   (* upper triangular *)\n```\n\n### QR\n\nFactor A = Q·R where Q is orthogonal and R is upper 
triangular:\n\n<!-- $MDX skip -->\n```ocaml\nlet q, r = Nx.qr a                        (* reduced by default *)\nlet q_full, r_full = Nx.qr ~mode:`Complete a\n```\n\n### SVD\n\nSingular value decomposition A = U·Σ·Vᵀ:\n\n<!-- $MDX skip -->\n```ocaml\nlet u, s, vt = Nx.svd a\nlet s_only = Nx.svdvals a   (* singular values only, more efficient *)\n```\n\n### Eigendecomposition\n\n<!-- $MDX skip -->\n```ocaml\n(* General: returns complex eigenvalues and eigenvectors *)\nlet eigenvalues, eigenvectors = Nx.eig a\nlet eigenvalues_only = Nx.eigvals a\n\n(* Symmetric/Hermitian: returns real eigenvalues *)\nlet eigenvalues, eigenvectors = Nx.eigh a\nlet eigenvalues_only = Nx.eigvalsh a\n```\n\n## Solving Linear Systems\n\n### solve\n\nSolve A·x = b for x:\n\n<!-- $MDX skip -->\n```ocaml\nlet x = Nx.solve a b\n```\n\n### lstsq\n\nLeast-squares solution (for overdetermined systems):\n\n<!-- $MDX skip -->\n```ocaml\nlet x, residuals, rank, sv = Nx.lstsq a b\n```\n\n### inv and pinv\n\nMatrix inverse and pseudo-inverse:\n\n<!-- $MDX skip -->\n```ocaml\nlet a_inv = Nx.inv a          (* requires square, non-singular *)\nlet a_pinv = Nx.pinv a        (* works for any shape *)\n```\n\n## Norms and Properties\n\n### norm\n\nCompute various matrix and vector norms:\n\n<!-- $MDX skip -->\n```ocaml\n(* Vector norms *)\nlet l2 = Nx.norm v                            (* L2 by default *)\nlet l1 = Nx.norm ~ord:(`Float 1.) 
v           (* L1 norm *)\nlet linf = Nx.norm ~ord:`Inf v                (* max absolute value *)\n\n(* Matrix norms *)\nlet fro = Nx.norm ~ord:`Fro m                 (* Frobenius norm *)\n\n(* Along specific axes *)\nlet row_norms = Nx.norm ~axis:[1] m           (* per-row L2 norm *)\n```\n\n### Other properties\n\n<!-- $MDX skip -->\n```ocaml\nlet d = Nx.det m                    (* determinant *)\nlet sd = Nx.slogdet m              (* sign and log-determinant *)\nlet tr = Nx.trace m                (* sum of diagonal elements *)\nlet r = Nx.matrix_rank m           (* numerical rank *)\nlet c = Nx.cond m                  (* condition number *)\nlet diag = Nx.diagonal m           (* extract diagonal *)\n```\n\n## FFT\n\nNx provides the full suite of discrete Fourier transforms.\n\n### Basic FFT\n\n<!-- $MDX skip -->\n```ocaml\n(* 1-D complex FFT and inverse *)\nlet spectrum = Nx.fft x\nlet reconstructed = Nx.ifft spectrum\n\n(* 2-D FFT *)\nlet spectrum_2d = Nx.fft2 image\n\n(* N-D FFT *)\nlet spectrum_nd = Nx.fftn ~axes:[0; 1; 2] volume\n```\n\n### Real FFT\n\nFor real-valued inputs, `rfft` is more efficient — it exploits conjugate symmetry and returns only the positive-frequency half:\n\n<!-- $MDX skip -->\n```ocaml\nlet spectrum = Nx.rfft signal           (* n/2+1 complex outputs *)\nlet signal_back = Nx.irfft spectrum     (* back to real *)\n\nlet spectrum_2d = Nx.rfft2 image\nlet spectrum_nd = Nx.rfftn ~axes:[0; 1] data\n```\n\n### Frequency axes\n\n<!-- $MDX skip -->\n```ocaml\nlet freqs = Nx.fftfreq n          (* frequency bins for fft *)\nlet rfreqs = Nx.rfftfreq n        (* frequency bins for rfft *)\nlet shifted = Nx.fftshift spectrum (* shift zero-frequency to center *)\n```\n\n## Next Steps\n\n- [Array Operations](/docs/nx/array-operations/) — reshaping, broadcasting, slicing\n- [Input/Output](/docs/nx/io/) — reading and writing files\n- [NumPy Comparison](/docs/nx/numpy-comparison/) — side-by-side reference\n"
  },
  {
    "path": "packages/nx/doc/04-io.md",
    "content": "# Input/Output Operations\n\nThe `Nx_io` module provides functions to load and save Nx tensors in various file formats,\nincluding image formats and NumPy formats.\n\n## Features\n\n- **Image I/O**: Load and save images in PNG, JPEG, BMP, TGA, and GIF formats\n- **NumPy .npy format**: Load and save single arrays in NumPy's native format\n- **NumPy .npz archives**: Load and save multiple named arrays in compressed archives\n- **Runtime dtype detection**: Handle arrays with types determined at runtime\n- **Type conversion utilities**: Convert between different numeric types\n\n## Image Operations\n\n### Loading Images\n\nImages can be loaded as uint8 tensors:\n\n<!-- $MDX skip -->\n```ocaml\nopen Nx_io\n\n(* Load as RGB: shape [|height; width; 3|] *)\nlet img = load_image \"photo.png\"\n\n(* Load as grayscale: shape [|height; width|] *)\nlet gray = load_image ~grayscale:true \"photo.png\"\n```\n\n### Saving Images\n\nSave uint8 tensors as images:\n\n<!-- $MDX skip -->\n```ocaml\n(* Save RGB or grayscale based on shape *)\nsave_image \"output.png\" img\n```\n\nSupported shapes:\n- `[|height; width|]` — Grayscale\n- `[|height; width; 1|]` — Grayscale with explicit channel\n- `[|height; width; 3|]` — RGB\n- `[|height; width; 4|]` — RGBA\n\n## NumPy Format Support\n\n### Single Arrays (.npy)\n\nThe .npy format stores a single array with its dtype and shape information:\n\n<!-- $MDX skip -->\n```ocaml\n(* Load array with runtime-detected type *)\nlet P arr = load_npy \"data.npy\"\n(* arr : ('a, 'b) Nx.t *)\n\n(* Convert to specific type *)\nlet float_arr = load_npy \"data.npy\" |> as_float32\n\n(* Save array *)\nsave_npy \"output.npy\" my_array\n```\n\n### Archives (.npz)\n\nThe .npz format stores multiple named arrays in a compressed archive:\n\n<!-- $MDX skip -->\n```ocaml\n(* Load entire archive *)\nlet archive = load_npz \"bundle.npz\"\n\n(* Access specific array *)\nlet () =\n  match Hashtbl.find_opt archive \"weights\" with\n  | Some (P arr) ->\n     
 let _weights = as_float32 (P arr) in\n      (* use weights *)\n      ()\n  | None -> failwith \"weights not found\"\n\n(* Load single array directly *)\nlet P data = load_npz_member ~name:\"data\" \"bundle.npz\"\n\n(* Save multiple arrays *)\nlet () =\n  save_npz \"model.npz\" [\n    (\"inputs\", P input_array);\n    (\"labels\", P label_array);\n    (\"weights\", P weight_array)\n  ]\n```\n\n## Packed Arrays and Type Conversions\n\nSince file formats store type information that's only known at runtime,\nloaded arrays are wrapped in the `packed_nx` type:\n\n```ocaml\ntype packed_nx = P : ('a, 'b) Nx.t -> packed_nx\n```\n\nConvert packed arrays to specific types using the provided functions:\n\n<!-- $MDX skip -->\n```ocaml\nlet packed = load_npy \"data.npy\"\nlet float32_array = as_float32 packed\nlet int32_array = as_int32 packed\nlet uint8_array = as_uint8 packed\n```\n\nAvailable conversions:\n- Floating point: `as_float16`, `as_float32`, `as_float64`\n- Signed integers: `as_int8`, `as_int16`, `as_int32`, `as_int64`\n- Unsigned integers: `as_uint8`, `as_uint16`\n- Complex: `as_complex32`, `as_complex64`\n\n## Examples\n\n### Image Processing Pipeline\n\n```ocaml\nopen Nx\nopen Nx_io\n\nlet process_image input_path output_path =\n  (* Load image *)\n  let img = load_image input_path in\n\n  (* Convert to float for processing *)\n  let img_float = Nx.astype float32 img in\n\n  (* Normalize to [0, 1] *)\n  let normalized = Nx.div_s img_float 255.0 in\n\n  (* Apply some processing: reduce brightness and add bias *)\n  let processed = Nx.add_s (Nx.mul_s normalized 0.8) 0.1 in\n\n  (* Convert back to uint8 *)\n  let result = Nx.astype uint8 (Nx.clip ~min:0.0 ~max:255.0 (Nx.mul_s processed 255.0)) in\n\n  (* Save result *)\n  save_image output_path result\n```\n\n### Model Checkpoint Save/Load\n\n<!-- $MDX skip -->\n```ocaml\nlet save_checkpoint ~path ~epoch ~model =\n  let weights = Model.get_weights model in\n  let optimizer_state = Model.get_optimizer_state model 
in\n\n  save_npz path [\n    (\"epoch\", P (Nx.scalar int32 epoch));\n    (\"weights\", P weights);\n    (\"optimizer_state\", P optimizer_state);\n  ]\n\nlet load_checkpoint path =\n  let archive = load_npz path in\n  let epoch =\n    match Hashtbl.find_opt archive \"epoch\" with\n    | Some p -> Nx.item [] (as_int32 p)\n    | None -> failwith \"epoch not found\"\n  in\n  let weights =\n    match Hashtbl.find_opt archive \"weights\" with\n    | Some p -> as_float32 p\n    | None -> failwith \"weights not found\"\n  in\n  (epoch, weights)\n```\n\n## Error Handling\n\nAll I/O operations may raise `Failure` exceptions:\n- File not found or inaccessible\n- Unsupported file format\n- Invalid data format\n- Incompatible array shapes (for `save_image`)\n- Missing archive members (for `load_npz_member`)\n\n## Performance Considerations\n\n- Image loading/saving uses the stb_image libraries (header-only C libraries)\n- NumPy format I/O is implemented in pure OCaml\n- Large .npz archives are loaded entirely into memory\n- For very large datasets, consider loading arrays individually with `load_npz_member`\n"
  },
  {
    "path": "packages/nx/doc/05-numpy-comparison.md",
    "content": "# Nx vs NumPy Comparison\n\nThis document compares the Nx library (OCaml) with NumPy (Python), highlighting similarities, differences, and providing equivalent code examples.\n\n- [Nx vs NumPy Comparison](#nx-vs-numpy-comparison)\n  - [1. Overview](#1-overview)\n  - [2. Array Creation](#2-array-creation)\n    - [Basic Array Creation](#basic-array-creation)\n    - [Advanced Array Creation](#advanced-array-creation)\n  - [3. Array Operations](#3-array-operations)\n    - [Basic Operations](#basic-operations)\n    - [Array Manipulation](#array-manipulation)\n  - [4. Element Access and Slicing](#4-element-access-and-slicing)\n  - [5. Statistical Functions](#5-statistical-functions)\n  - [6. Linear Algebra](#6-linear-algebra)\n  - [7. Broadcasting](#7-broadcasting)\n  - [8. Conditional Operations](#8-conditional-operations)\n  - [9. Random Number Generation](#9-random-number-generation)\n  - [10. Real-World Example: Linear Regression](#10-real-world-example-linear-regression)\n\n## 1. Overview\n\nNx is a numerical computing library for OCaml. It takes heavy inspiration from NumPy and aims to be as familiar as possible to NumPy users. That said, there are some phyilosophical differences between the two.\n\n- **Pure OCaml Implementation:** Nx is fully native OCaml without C bindings. For that reason, it typically doesn't match NumPy's raw performances. But it is not trying to: while we care about performance, we prioritize the local development experience, where performance is not critical. That said, Nx uses a backend architecture under the hood, so it can easily be extended to use C or CUDA backends. This is what libraries like Rune are doing, implementing custom backends for Nx, making them suitable for production use cases.\n- **Portable Compilation:** In return for a pure OCaml implementation, you get to compile Nx to JavaScript, WebAssembly, or even unikernels. 
This makes it suitable for a wide range of applications.\n- **Type Safety First:** Nx leverages OCaml's strong type system and doesn't perform automatic type casting between array types. You can still use the `astype` function for explicit type conversions.\n- **Bigarray Foundation:** Built on OCaml's Bigarray, Nx uses uint8 instead of boolean arrays and doesn't support string arrays.\n\nApart from the above, Nx is designed to be as close to NumPy as possible. The broadcasting rules are the same, and most functions behave similarly. If you notice an undocumented difference, please open an issue; it's probably a bug.\n\n## 2. Array Creation\n\n### Basic Array Creation\n\n**Nx:**\n```ocaml\n(* Creating a zeros array *)\nlet zeros = Nx.zeros Nx.float64 [|3; 3|]\n\n(* Creating a ones array *)\nlet ones = Nx.ones Nx.float64 [|3; 3|]\n\n(* Creating an array with a specific value *)\nlet full = Nx.full Nx.float64 [|3; 3|] 5.0\n\n(* Creating a range *)\nlet range = Nx.arange Nx.int32 0 10 1\n\n(* Creating an identity matrix *)\nlet identity = Nx.identity Nx.float64 3\n```\n\n**NumPy:**\n```python\n# Creating a zeros array\nzeros = np.zeros((3, 3))\n\n# Creating a ones array\nones = np.ones((3, 3))\n\n# Creating an array with a specific value\nfull = np.full((3, 3), 5.0)\n\n# Creating a range\nrange_array = np.arange(0, 10, 1)\n\n# Creating an identity matrix\nidentity = np.identity(3)\n```\n\n### Advanced Array Creation\n\n**Nx:**\n```ocaml\n(* Creating from existing data *)\nlet data = [|1.0; 2.0; 3.0; 4.0|]\nlet arr = Nx.create Nx.float64 [|2; 2|] data\n\n(* Creating using a function *)\nlet init_arr = Nx.init Nx.float64 [|3; 3|] (fun idx ->\n  float_of_int (idx.(0) + idx.(1)))\n```\n\n**NumPy:**\n```python\n# Creating from existing data\ndata = [1.0, 2.0, 3.0, 4.0]\narr = np.array(data).reshape(2, 2)\n\n# Creating using a function\ninit_arr = np.fromfunction(lambda i, j: i + j, (3, 3))\n```\n\n## 3. 
Array Operations\n\n### Basic Operations\n\n**Nx:**\n<!-- $MDX skip -->\n```ocaml\n(* Element-wise addition *)\nlet result = Nx.add arr1 arr2\n\n(* Scalar multiplication *)\nlet scaled = Nx.mul_s arr 2.0\n\n(* Matrix multiplication *)\nlet matmul_result = Nx.matmul arr1 arr2\n```\n\n**NumPy:**\n```python\n# Element-wise addition\nresult = arr1 + arr2\n\n# In-place addition\narr1 += arr2\n\n# Scalar multiplication\nscaled = arr * 2.0\n\n# Matrix multiplication\nmatmul_result = arr1 @ arr2  # or np.matmul(arr1, arr2)\n```\n\n### Array Manipulation\n\n**Nx:**\n<!-- $MDX skip -->\n```ocaml\n(* Reshape array *)\nlet reshaped = Nx.reshape [|1; 6|] arr\n\n(* Transpose array *)\nlet transposed = Nx.transpose arr\n\n(* Flatten array *)\nlet flattened = Nx.flatten arr\n\n(* Concatenate arrays *)\nlet concat = Nx.concatenate ~axis:0 [arr1; arr2]\n```\n\n**NumPy:**\n```python\n# Reshape array\nreshaped = arr.reshape(1, 6)\n\n# Transpose array\ntransposed = arr.transpose()\n\n# Flatten array\nflattened = arr.flatten()\n\n# Concatenate arrays\nconcat = np.concatenate([arr1, arr2], axis=0)\n```\n\n## 4. Element Access and Slicing\n\n**Nx:**\n<!-- $MDX skip -->\n```ocaml\n(* Get a single element *)\nlet element = Nx.item [0; 1] arr\n\n(* Set a single element *)\nlet () = Nx.set_item [0; 1] 5.0 arr\n\n(* Get a slice/subarray *)\nlet row = Nx.get [0] arr\n```\n\n**NumPy:**\n```python\n# Get a single element\nelement = arr[0, 1]\n\n# Set a single element\narr[0, 1] = 5.0\n\n# Get a slice/subarray (named `row` to avoid shadowing Python's built-in `slice`)\nrow = arr[0]\n```\n\n## 5. 
Statistical Functions\n\n**Nx:**\n<!-- $MDX skip -->\n```ocaml\n(* Sum of all elements *)\nlet total = Nx.sum arr\n\n(* Mean of all elements *)\nlet avg = Nx.mean arr\n\n(* Min and max values *)\nlet min_val = Nx.min arr\nlet max_val = Nx.max arr\n\n(* Sum along an axis *)\nlet axis_sum = Nx.sum ~axes:[|0|] arr\n```\n\n**NumPy:**\n```python\n# Sum of all elements\ntotal = np.sum(arr)\n\n# Mean of all elements\navg = np.mean(arr)\n\n# Min and max values\nmin_val = np.min(arr)\nmax_val = np.max(arr)\n\n# Sum along an axis\naxis_sum = np.sum(arr, axis=0)\n```\n\n## 6. Linear Algebra\n\n**Nx:**\n<!-- $MDX skip -->\n```ocaml\n(* Matrix inverse *)\nlet inv_a = Nx.inv a\n\n(* Solve linear system Ax = b *)\nlet x = Nx.solve a b\n\n(* SVD decomposition *)\nlet u, s, vt = Nx.svd a\n\n(* Eigenvalue decomposition *)\nlet eigenvalues, eigenvectors = Nx.eig a\n```\n\n**NumPy:**\n```python\n# Matrix inverse\ninv_a = np.linalg.inv(a)\n\n# Solve linear system Ax = b\nx = np.linalg.solve(a, b)\n\n# SVD decomposition\nu, s, vt = np.linalg.svd(a)\n\n# Eigenvalue decomposition\neigenvalues, eigenvectors = np.linalg.eig(a)\n```\n\n## 7. Broadcasting\n\n**Nx:**\n<!-- $MDX skip -->\n```ocaml\n(* Broadcast a smaller array to match dimensions *)\nlet broadcasted = Nx.broadcast_to [|3; 3|] smaller_arr\n\n(* Broadcasting happens automatically in operations *)\nlet result = Nx.add matrix vector\n```\n\n**NumPy:**\n```python\n# Broadcast a smaller array to match dimensions\nbroadcasted = np.broadcast_to(smaller_arr, (3, 3))\n\n# Broadcasting happens automatically in operations\nresult = matrix + vector\n```\n\n## 8. Conditional Operations\n\n**Nx:**\n<!-- $MDX skip -->\n```ocaml\n(* Create a boolean mask *)\nlet mask = Nx.greater arr (Nx.scalar Nx.float64 0.5)\n\n(* Apply condition with where *)\nlet result = Nx.where mask arr1 arr2\n```\n\n**NumPy:**\n```python\n# Create a boolean mask\nmask = arr > 0.5\n\n# Apply condition with where\nresult = np.where(mask, arr1, arr2)\n```\n\n## 9. 
Random Number Generation\n\n**Nx:**\n```ocaml\n(* Generate uniform random numbers *)\nlet random = Nx.rand Nx.float64 [|3; 3|]\n\n(* Generate normal distributed random numbers *)\nlet normal = Nx.randn Nx.float64 [|3; 3|]\n\n(* For reproducibility, wrap in Rng.run *)\nlet reproducible =\n  Nx.Rng.run ~seed:42 (fun () ->\n    Nx.rand Nx.float64 [|3; 3|])\n```\n\n**NumPy:**\n```python\n# Generate uniform random numbers\nrandom = np.random.rand(3, 3)\n\n# Generate normal distributed random numbers\nnormal = np.random.randn(3, 3)\n```\n\n## 10. Real-World Example: Linear Regression\n\n**Nx:**\n```ocaml\n(* Generate sample data *)\nlet x = Nx.linspace Nx.float64 0.0 10.0 100\nlet y =\n  Nx.(\n    add (mul_s x 2.0)\n      (randn Nx.float64 [|100|]))\n\n(* Reshape x for design matrix *)\nlet x_design = Nx.(concatenate ~axis:1 [ones Nx.float64 [|100; 1|];\n                                           reshape [|100; 1|] x])\n\n(* Compute coefficients using normal equation *)\nlet xtx = Nx.matmul (Nx.transpose x_design) x_design\nlet xty = Nx.matmul (Nx.transpose x_design) (Nx.reshape [|100; 1|] y)\nlet coeffs = Nx.solve xtx xty\n\n(* Make predictions *)\nlet y_pred = Nx.matmul x_design coeffs\n```\n\n**NumPy:**\n```python\n# Generate sample data\nx = np.linspace(0, 10, 100)\ny = 2 * x + np.random.randn(100)\n\n# Reshape x for design matrix\nx_design = np.column_stack((np.ones(100), x))\n\n# Compute coefficients using normal equation\nxtx = x_design.T @ x_design\nxty = x_design.T @ y.reshape(100, 1)\ncoeffs = np.linalg.solve(xtx, xty)\n\n# Make predictions\ny_pred = x_design @ coeffs\n```\n"
  },
  {
    "path": "packages/nx/doc/dune",
    "content": "(mdx\n (files *.md)\n (package nx)\n (libraries nx nx.io))\n"
  },
  {
    "path": "packages/nx/doc/index.md",
    "content": "# nx\n\nNx provides n-dimensional arrays with NumPy-like semantics and OCaml's type safety. It is the numerical foundation for the entire Raven ecosystem.\n\n## Features\n\n- **19 data types** — float16 through float64, int4 through int64, complex64/128, bool\n- **Broadcasting** — automatic shape matching for binary operations\n- **Views** — reshape, transpose, and slice without copying data\n- **Linear algebra** — matmul, solve, cholesky, QR, SVD, eigendecomposition\n- **FFT** — full suite of discrete Fourier transforms\n- **Signal processing** — convolution, correlation, filtering\n- **I/O** — read and write images (PNG, JPEG), NumPy files (.npy, .npz)\n- **Pluggable backends** — default C backend, extensible architecture\n\n## Quick Start\n\n```ocaml\nopen Nx\n\nlet () =\n  (* Create and manipulate arrays *)\n  let x = linspace Float32 0. 10. 5 in\n  let y = mul x x in\n  Printf.printf \"x = \"; print_data x;\n  Printf.printf \"y = x² = \"; print_data y;\n\n  (* Matrix operations *)\n  let a = rand Float32 [|3; 3|] in\n  let b = rand Float32 [|3; 3|] in\n  let c = matmul a b in\n  Printf.printf \"matmul shape: [|%d; %d|]\\n\" (dim 0 c) (dim 1 c)\n```\n\n## Next Steps\n\n- [Getting Started](/docs/nx/getting-started/) — installation, dtypes, slicing, broadcasting\n- [Array Operations](/docs/nx/array-operations/) — reshaping, views, joining, splitting\n- [Linear Algebra](/docs/nx/linear-algebra/) — decompositions, solvers, FFT\n- [Input/Output](/docs/nx/io/) — images, npy, npz files\n- [NumPy Comparison](/docs/nx/numpy-comparison/) — side-by-side reference\n"
  },
  {
    "path": "packages/nx/examples/01-creating-arrays/README.md",
    "content": "# `01-creating-arrays`\n\nBuild arrays from scratch — constants, ranges, grids, and custom data. This\nexample walks through the most common ways to create arrays in Nx.\n\n```bash\ndune exec nx/examples/01-creating-arrays/main.exe\n```\n\n## What You'll Learn\n\n- Choosing a dtype (`float32`, `float64`, `int32`)\n- Filling arrays with constants: `zeros`, `ones`, `full`\n- Generating ranges: `arange`, `linspace`, `logspace`\n- Building arrays from OCaml data: `create`, `init`\n- Diagonal and special matrices: `identity`, `eye`, `tril`, `triu`\n- Coordinate grids with `meshgrid`\n\n## Key Functions\n\n| Function                       | Purpose                                |\n| ------------------------------ | -------------------------------------- |\n| `zeros dtype shape`            | Array of all zeros                     |\n| `ones dtype shape`             | Array of all ones                      |\n| `full dtype shape value`       | Array filled with a value              |\n| `arange dtype start stop step` | Integer-stepped range (exclusive stop) |\n| `linspace dtype start stop n`  | Evenly spaced floats                   |\n| `logspace dtype start stop n`  | Logarithmically spaced values          |\n| `create dtype shape data`      | Array from an OCaml array              |\n| `init dtype shape f`           | Array from a function of indices       |\n| `identity dtype n`             | n×n identity matrix                    |\n| `eye ?k dtype n`               | Ones on the k-th diagonal              |\n| `meshgrid x y`                 | Coordinate grids from 1D arrays        |\n| `tril m` / `triu m`            | Lower / upper triangular part          |\n\n## Output Walkthrough\n\nWhen you run this example, you'll see arrays printed in a compact format:\n\n```\nzeros (2×3):\n[[0, 0, 0],\n [0, 0, 0]]\n\narange 0..9:\n[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\n\n5×5 multiplication table:\n[[1, 2, 3, 4, 5],\n [2, 4, 6, 8, 10],\n [3, 6, 9, 12, 15],\n [4, 8, 12, 
16, 20],\n [5, 10, 15, 20, 25]]\n```\n\nThe multiplication table is built with `init`, which calls a function with the\nindex array for each element:\n\n```ocaml\ninit int32 [| 5; 5 |] (fun idx ->\n    Int32.of_int ((idx.(0) + 1) * (idx.(1) + 1)))\n```\n\n`meshgrid` builds a pair of 2D coordinate grids from two 1D arrays — useful\nfor evaluating functions over a grid:\n\n```\nmeshgrid X:\n[[0, 1, 2],\n [0, 1, 2]]\nmeshgrid Y:\n[[0, 0, 0],\n [1, 1, 1]]\n```\n\n## Dtypes\n\nEvery array has a dtype that determines the element type and precision. The\nfirst argument to most creation functions is the dtype:\n\n```ocaml\nzeros float32 [| 2; 3 |]   (* 32-bit floats *)\nones float64 [| 3 |]       (* 64-bit floats *)\narange int32 0 10 1        (* 32-bit integers *)\n```\n\nNx supports 18 dtypes including `Float16`, `BFloat16`, `Complex128`, `Bool`,\nand various integer widths.\n\n## Try It\n\n1. Create a 10-element `linspace` from -1.0 to 1.0 and print it.\n2. Use `init` to build a 4×4 matrix where each element is the sum of its row\n   and column index.\n3. Try `eye ~k:(-1) float64 4` to see the subdiagonal.\n\n## Next Steps\n\nContinue to [02-infix-and-arithmetic](../02-infix-and-arithmetic/) to learn how\nthe Infix module makes array math read like algebra.\n"
  },
  {
    "path": "packages/nx/examples/01-creating-arrays/dune",
    "content": "(executable\n (name main)\n (libraries nx))\n"
  },
  {
    "path": "packages/nx/examples/01-creating-arrays/main.ml",
    "content": "(** Build arrays from scratch — constants, ranges, grids, and custom data.\n\n    This example walks through the most common ways to create arrays. By the end\n    you'll know how to pick a dtype, fill arrays with constants, generate\n    ranges, and build grids and triangular matrices. *)\n\nopen Nx\n\nlet () =\n  (* Constant-filled arrays: zeros, ones, and an arbitrary fill value. *)\n  let z = zeros float32 [| 2; 3 |] in\n  Printf.printf \"zeros (2×3):\\n%s\\n\\n\" (data_to_string z);\n\n  let o = ones float64 [| 3 |] in\n  Printf.printf \"ones (3):\\n%s\\n\\n\" (data_to_string o);\n\n  let pi = full float64 [| 2; 2 |] Float.pi in\n  Printf.printf \"full π (2×2):\\n%s\\n\\n\" (data_to_string pi);\n\n  (* Ranges: integer steps and evenly-spaced floats. *)\n  let ints = arange int32 0 10 1 in\n  Printf.printf \"arange 0..9:\\n%s\\n\\n\" (data_to_string ints);\n\n  let spaced = linspace float64 0.0 1.0 5 in\n  Printf.printf \"linspace 0..1 (5 points):\\n%s\\n\\n\" (data_to_string spaced);\n\n  let decades = logspace float64 1.0 4.0 4 in\n  Printf.printf \"logspace 10¹..10⁴:\\n%s\\n\\n\" (data_to_string decades);\n\n  (* From raw data: pack an OCaml array into a 2×3 matrix. *)\n  let data = create float64 [| 2; 3 |] [| 1.0; 2.0; 3.0; 4.0; 5.0; 6.0 |] in\n  Printf.printf \"create from data (2×3):\\n%s\\n\\n\" (data_to_string data);\n\n  (* Build a multiplication table with [init]. *)\n  let mul_table =\n    init int32 [| 5; 5 |] (fun idx ->\n        Int32.of_int ((idx.(0) + 1) * (idx.(1) + 1)))\n  in\n  Printf.printf \"5×5 multiplication table:\\n%s\\n\\n\" (data_to_string mul_table);\n\n  (* Identity and eye: diagonal matrices. *)\n  let id = identity float64 3 in\n  Printf.printf \"identity 3×3:\\n%s\\n\\n\" (data_to_string id);\n\n  let e = eye ~k:1 float64 3 in\n  Printf.printf \"eye (k=1, superdiagonal):\\n%s\\n\\n\" (data_to_string e);\n\n  (* Coordinate grids with meshgrid. 
*)\n  let xs = arange_f float64 0.0 3.0 1.0 in\n  let ys = arange_f float64 0.0 2.0 1.0 in\n  let grid_x, grid_y = meshgrid xs ys in\n  Printf.printf \"meshgrid X:\\n%s\\n\" (data_to_string grid_x);\n  Printf.printf \"meshgrid Y:\\n%s\\n\\n\" (data_to_string grid_y);\n\n  (* Triangular matrices: tril and triu. *)\n  let m = ones float64 [| 4; 4 |] in\n  Printf.printf \"tril (lower triangle):\\n%s\\n\" (data_to_string (tril m));\n  Printf.printf \"triu (upper triangle):\\n%s\\n\" (data_to_string (triu m))\n"
  },
  {
    "path": "packages/nx/examples/02-infix-and-arithmetic/README.md",
    "content": "# `02-infix-and-arithmetic`\n\nElement-wise math with operators — the Infix module makes array code read like\nalgebra.\n\n```bash\ndune exec nx/examples/02-infix-and-arithmetic/main.exe\n```\n\n## What You'll Learn\n\n- Using the `Infix` module for clean operator-based math\n- Scalar arithmetic: `*$`, `+$`, `-$`, `/$` (array op scalar)\n- Element-wise operations: `*`, `/`, `+`, `-` (array op array)\n- Math functions: `abs`, `sqrt`, `square`, `exp`, `sign`\n- Clamping values with `clamp ~min ~max`\n- Min-max normalization for scaling data to [0, 1]\n\n## Key Functions\n\n| Function / Operator | Purpose                            |\n| ------------------- | ---------------------------------- |\n| `a +$ s`            | Add scalar to all elements         |\n| `a -$ s`            | Subtract scalar from all elements  |\n| `a *$ s`            | Multiply all elements by scalar    |\n| `a /$ s`            | Divide all elements by scalar      |\n| `a + b`             | Element-wise addition              |\n| `a - b`             | Element-wise subtraction           |\n| `a * b`             | Element-wise multiplication        |\n| `a / b`             | Element-wise division              |\n| `abs a`             | Absolute value of each element     |\n| `sqrt a`            | Square root of each element        |\n| `square a`          | Square each element                |\n| `exp a`             | e^x for each element               |\n| `sign a`            | Sign of each element (-1, 0, or 1) |\n| `clamp ~min ~max a` | Clip values to [min, max]          |\n| `min a` / `max a`   | Minimum / maximum element          |\n\n## Output Walkthrough\n\nWhen you run this example, you'll see various arithmetic operations applied to\narrays:\n\n```\nCelsius:    [0, 20, 37, 100, -40]\nFahrenheit: [32, 68, 98.6, 212, -40]\n```\n\nTemperature conversion uses scalar arithmetic — `*$` for multiplication and\n`+$` for addition:\n\n```ocaml\nlet fahrenheit = celsius *$ 1.8 +$ 32.0 
in\n```\n\nThis reads just like the formula `F = C × 1.8 + 32`, but operates on the entire\narray at once.\n\nBMI calculation demonstrates element-wise array operations:\n\n```\nHeights (m): [1.65, 1.8, 1.72, 1.55]\nWeights (kg): [68, 90, 75, 52]\nBMI:          [24.977, 27.778, 25.351, 21.639]\n```\n\nThe formula `BMI = weight / height²` becomes:\n\n```ocaml\nlet bmi = weight_kg / (height_m * height_m) in\n```\n\nBoth `*` and `/` work element-by-element, computing BMI for all individuals\nsimultaneously.\n\nMin-max normalization scales exam scores to the range [0, 1]:\n\n```\nRaw scores:  [72, 85, 60, 93, 78, 55]\nNormalized:  [0.447, 0.789, 0.132, 1, 0.605, 0]\n```\n\nThe implementation extracts minimum and maximum values, then applies the\nnormalization formula:\n\n```ocaml\nlet lo = min scores in\nlet hi = max scores in\nlet normalized = (scores - lo) / (hi - lo) in\n```\n\nMath functions apply element-wise to transform arrays:\n\n```\nx:       [-2, -1, 0, 1, 2]\nabs(x):  [2, 1, 0, 1, 2]\nx²:      [4, 1, 0, 1, 4]\n√|x|:    [1.414, 1, 0, 1, 1.414]\nexp(x):  [0.135, 0.368, 1, 2.718, 7.389]\nsign(x): [-1, -1, 0, 1, 1]\n```\n\n`clamp` restricts values to a valid range — useful for sensor data or ensuring\ninputs stay within bounds:\n\n```\nSensor readings: [-5, 12, 105, 42, -1, 99]\nClamped [0,100]: [0, 12, 100, 42, 0, 99]\n```\n\n## Try It\n\n1. Create an array of angles in degrees and convert to radians using `*$`.\n2. Build two 1D arrays and compute their element-wise difference, then use\n   `abs` to get absolute differences.\n3. Generate random-looking data with `create`, then normalize it to [-1, 1]\n   instead of [0, 1].\n\n## Next Steps\n\nContinue to [03-indexing-and-slicing](../03-indexing-and-slicing/) to learn how\nto extract and modify specific regions of arrays.\n"
  },
  {
    "path": "packages/nx/examples/02-infix-and-arithmetic/dune",
    "content": "(executable\n (name main)\n (libraries nx))\n"
  },
  {
    "path": "packages/nx/examples/02-infix-and-arithmetic/main.ml",
    "content": "(** Element-wise math with operators — the Infix module makes array code read\n    like algebra.\n\n    Temperature conversions, BMI calculations, and score normalization, all\n    expressed with clean infix operators instead of verbose function calls. *)\n\nopen Nx\nopen Nx.Infix\n\nlet () =\n  (* --- Temperature conversion: C → F --- *)\n  let celsius = create float64 [| 5 |] [| 0.0; 20.0; 37.0; 100.0; -40.0 |] in\n  let fahrenheit = (celsius *$ 1.8) +$ 32.0 in\n  Printf.printf \"Celsius:    %s\\n\" (data_to_string celsius);\n  Printf.printf \"Fahrenheit: %s\\n\\n\" (data_to_string fahrenheit);\n\n  (* --- BMI from height and weight arrays --- *)\n  let height_m = create float64 [| 4 |] [| 1.65; 1.80; 1.72; 1.55 |] in\n  let weight_kg = create float64 [| 4 |] [| 68.0; 90.0; 75.0; 52.0 |] in\n  let bmi = weight_kg / (height_m * height_m) in\n  Printf.printf \"Heights (m): %s\\n\" (data_to_string height_m);\n  Printf.printf \"Weights (kg): %s\\n\" (data_to_string weight_kg);\n  Printf.printf \"BMI:          %s\\n\\n\" (data_to_string bmi);\n\n  (* --- Exam score normalization (min-max scaling to [0, 1]) --- *)\n  let scores =\n    create float64 [| 6 |] [| 72.0; 85.0; 60.0; 93.0; 78.0; 55.0 |]\n  in\n  let lo = min scores in\n  let hi = max scores in\n  let normalized = (scores - lo) / (hi - lo) in\n  Printf.printf \"Raw scores:  %s\\n\" (data_to_string scores);\n  Printf.printf \"Normalized:  %s\\n\\n\" (data_to_string normalized);\n\n  (* --- Math functions: exp, log, sqrt, abs --- *)\n  let x = create float64 [| 5 |] [| -2.0; -1.0; 0.0; 1.0; 2.0 |] in\n  Printf.printf \"x:       %s\\n\" (data_to_string x);\n  Printf.printf \"abs(x):  %s\\n\" (data_to_string (abs x));\n  Printf.printf \"x²:      %s\\n\" (data_to_string (square x));\n  Printf.printf \"√|x|:    %s\\n\" (data_to_string (sqrt (abs x)));\n  Printf.printf \"exp(x):  %s\\n\" (data_to_string (exp x));\n  Printf.printf \"sign(x): %s\\n\\n\" (data_to_string (sign x));\n\n  (* --- Clamp: cap 
sensor readings to a valid range --- *)\n  let readings =\n    create float64 [| 6 |] [| -5.0; 12.0; 105.0; 42.0; -1.0; 99.0 |]\n  in\n  let clamped = clamp ~min:0.0 ~max:100.0 readings in\n  Printf.printf \"Sensor readings: %s\\n\" (data_to_string readings);\n  Printf.printf \"Clamped [0,100]: %s\\n\" (data_to_string clamped)\n"
  },
  {
    "path": "packages/nx/examples/03-indexing-and-slicing/README.md",
    "content": "# `03-indexing-and-slicing`\n\nSelect, slice, and mask — extract exactly the data you need. This example uses\na grade book to demonstrate every way Nx lets you reach into an array.\n\n```bash\ndune exec nx/examples/03-indexing-and-slicing/main.exe\n```\n\n## What You'll Learn\n\n- Reading single elements with `item`\n- Selecting rows and columns with `I` and `A`\n- Range slicing with `R` and strided slicing with `Rs`\n- Infix indexing syntax: `.%{}` and `.${}`\n- Boolean masks with `compress` and `where`\n- Picking rows by index with `take`\n\n## Key Functions\n\n| Function / Index              | Purpose                                |\n| ----------------------------- | -------------------------------------- |\n| `item [i; j] t`               | Extract a single OCaml scalar          |\n| `I n`                         | Select index `n` along one axis        |\n| `A`                           | Select all indices along an axis       |\n| `R (start, stop)`             | Half-open range `[start, stop)`        |\n| `Rs (start, stop, step)`      | Range with stride                      |\n| `t.${[...]}`                  | Infix slicing (synonym for `slice`)    |\n| `compress ~axis ~condition t` | Keep rows/cols where condition is true |\n| `where cond then_ else_`      | Element-wise conditional selection     |\n| `take ~axis indices t`        | Gather rows by integer indices         |\n| `greater_s t scalar`          | Element-wise `t > scalar` → bool mask  |\n\n## Output Walkthrough\n\nThe example starts with a 5×4 grade book (5 students, 4 subjects):\n\n```\nGrade book (students × subjects):\n[[88, 72, 95, 83],\n [45, 90, 67, 78],\n [92, 85, 91, 70],\n [76, 63, 80, 95],\n [60, 78, 55, 82]]\n```\n\n### Single element\n\n```ocaml\nitem [ 0; 1 ] grades              (* → 72.0 *)\n```\n\n### Row and column selection\n\nThe infix `.${[...]}` operator makes slicing readable. 
`I n` picks one index,\n`A` keeps the full axis:\n\n```ocaml\ngrades.${[ I 2; A ]}              (* student 2, all subjects → [92, 85, 91, 70] *)\ngrades.${[ A; I 0 ]}              (* all students, Math      → [88, 45, 92, 76, 60] *)\n```\n\n### Range and strided slicing\n\n`R (start, stop)` is a half-open range. `Rs (start, stop, step)` adds a stride:\n\n```ocaml\ngrades.${[ R (1, 4); R (0, 2) ]}  (* students 1-3, Math & Science *)\ngrades.${[ Rs (0, 5, 2); Rs (0, 4, 2) ]}  (* every other student & subject *)\n```\n\n### Boolean masks\n\nBuild a boolean mask, then use `compress` to filter rows:\n\n```ocaml\nlet high_math = greater_s (grades.${[ A; I 0 ]}) 85.0 in\ncompress ~axis:0 ~condition:high_math grades\n```\n\n```\nMath > 85 mask: [true, false, true, false, false]\nStudents with Math > 85:\n[[88, 72, 95, 83],\n [92, 85, 91, 70]]\n```\n\n### Conditional replacement\n\n`where` replaces elements based on a condition — here, flooring all grades\nbelow 60:\n\n```ocaml\nwhere (less_s grades 60.0) (full float64 [| 5; 4 |] 60.0) grades\n```\n\n## Index Types at a Glance\n\n| Index          | Meaning        | Example                        |\n| -------------- | -------------- | ------------------------------ |\n| `I n`          | Single index   | `I 2` — third element          |\n| `A`            | All indices    | `A` — keep entire axis         |\n| `R (a, b)`     | Range `[a, b)` | `R (1, 4)` — indices 1, 2, 3   |\n| `Rs (a, b, s)` | Strided range  | `Rs (0, 10, 2)` — even indices |\n| `L [...]`      | Explicit list  | `L [0; 3; 7]` — pick specific  |\n| `M mask`       | Boolean mask   | `M bool_array` — where true    |\n| `N`            | New axis       | `N` — insert dimension         |\n\n## Try It\n\n1. Extract the Art column (column 3) for all students.\n2. Use `Rs (4, -1, -1)` to reverse the student order (negative step).\n3. 
Find students whose average grade across all subjects exceeds 80 using\n   `mean ~axes:[1]` and a boolean mask.\n\n## Next Steps\n\nContinue to [04-reshaping-and-broadcasting](../04-reshaping-and-broadcasting/)\nto learn how to change array shapes and let broadcasting align dimensions\nautomatically.\n"
  },
  {
    "path": "packages/nx/examples/03-indexing-and-slicing/dune",
    "content": "(executable\n (name main)\n (libraries nx))\n"
  },
  {
    "path": "packages/nx/examples/03-indexing-and-slicing/main.ml",
    "content": "(** Select, slice, and mask — extract exactly the data you need.\n\n    A grade book of 5 students across 4 subjects. We'll pull out individual\n    scores, entire rows and columns, ranges, and use boolean masks to find top\n    performers. *)\n\nopen Nx\nopen Nx.Infix\n\nlet () =\n  (* Grade book: 5 students × 4 subjects (Math, Science, English, Art). *)\n  let grades =\n    create float64 [| 5; 4 |]\n      [|\n        88.0;\n        72.0;\n        95.0;\n        83.0;\n        45.0;\n        90.0;\n        67.0;\n        78.0;\n        92.0;\n        85.0;\n        91.0;\n        70.0;\n        76.0;\n        63.0;\n        80.0;\n        95.0;\n        60.0;\n        78.0;\n        55.0;\n        82.0;\n      |]\n  in\n  Printf.printf \"Grade book (students × subjects):\\n%s\\n\\n\"\n    (data_to_string grades);\n\n  (* Single element: student 0's Science score (row 0, col 1). *)\n  let score = item [ 0; 1 ] grades in\n  Printf.printf \"Student 0, Science: %.0f\\n\\n\" score;\n\n  (* Entire row: all of student 2's grades. *)\n  let student_2 = grades.${[ I 2; A ]} in\n  Printf.printf \"Student 2 (all subjects): %s\\n\\n\" (data_to_string student_2);\n\n  (* Entire column: everyone's Math scores (column 0). *)\n  let math = grades.${[ A; I 0 ]} in\n  Printf.printf \"Math scores (all students): %s\\n\\n\" (data_to_string math);\n\n  (* Range: students 1-3, first two subjects. *)\n  let subset = grades.${[ R (1, 4); R (0, 2) ]} in\n  Printf.printf \"Students 1-3, Math & Science:\\n%s\\n\\n\" (data_to_string subset);\n\n  (* Strided: every other student, every other subject. *)\n  let strided = grades.${[ Rs (0, 5, 2); Rs (0, 4, 2) ]} in\n  Printf.printf \"Every other student & subject:\\n%s\\n\\n\"\n    (data_to_string strided);\n\n  (* Boolean mask: which students scored above 85 in Math? 
*)\n  let math_scores = grades.${[ A; I 0 ]} in\n  let high_math = greater_s math_scores 85.0 in\n  Printf.printf \"Math > 85 mask: %s\\n\" (data_to_string high_math);\n\n  let top_students = compress ~axis:0 ~condition:high_math grades in\n  Printf.printf \"Students with Math > 85:\\n%s\\n\\n\" (data_to_string top_students);\n\n  (* where: replace failing grades (<60) with 60. *)\n  let passing =\n    where (less_s grades 60.0) (full float64 [| 5; 4 |] 60.0) grades\n  in\n  Printf.printf \"After floor at 60:\\n%s\\n\\n\" (data_to_string passing);\n\n  (* take: select specific students by index. *)\n  let picks = take ~axis:0 (create int32 [| 3 |] [| 0l; 2l; 4l |]) grades in\n  Printf.printf \"Students 0, 2, 4:\\n%s\\n\" (data_to_string picks)\n"
  },
  {
    "path": "packages/nx/examples/04-reshaping-and-broadcasting/README.md",
    "content": "# `04-reshaping-and-broadcasting`\n\nChange array shapes and let broadcasting align dimensions automatically. This\nexample reshapes a flat signal into frames, centers data by subtracting column\nmeans, and builds an outer product — all without explicit loops.\n\n```bash\ndune exec nx/examples/04-reshaping-and-broadcasting/main.exe\n```\n\n## What You'll Learn\n\n- Reshaping flat arrays into multi-dimensional frames with `reshape`\n- Flattening back to 1D with `flatten`\n- Transposing rows and columns\n- Stacking arrays vertically and horizontally: `vstack`, `hstack`\n- Broadcasting: how `keepdims` enables operations on different-shaped arrays\n- Building outer products via broadcasting\n- Adding and removing dimensions with `expand_dims` and `squeeze`\n\n## Key Functions\n\n| Function              | Purpose                                                |\n| --------------------- | ------------------------------------------------------ |\n| `reshape shape t`     | Change array shape (total elements must match)         |\n| `flatten t`           | Collapse all dimensions into 1D                        |\n| `transpose t`         | Reverse all axes (swap rows and columns)               |\n| `vstack ts`           | Stack arrays vertically (along axis 0)                 |\n| `hstack ts`           | Stack arrays horizontally (along axis 1)               |\n| `expand_dims axes t`  | Insert size-1 dimensions at specified positions        |\n| `squeeze t`           | Remove all size-1 dimensions                           |\n| `mean ~keepdims:true` | Reduce while keeping axis as size 1 (for broadcasting) |\n\n## Output Walkthrough\n\nReshape a flat 12-element signal into a 3×4 matrix of frames:\n\n```ocaml\nlet signal = arange_f float64 0.0 12.0 1.0 in\nlet frames = reshape [| 3; 4 |] signal\n```\n\n```\nFlat signal (12 samples):\n[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]\n\nReshaped into 3 frames of 4:\n[[0, 1, 2, 3],\n [4, 5, 6, 7],\n [8, 9, 10, 
11]]\n```\n\n### Broadcasting in action\n\nSubtracting column means centers the data. The `keepdims:true` parameter gives\nthe mean shape `[1; 3]` instead of `[3]`, which broadcasts against `[4; 3]`:\n\n```ocaml\nlet col_means = mean ~axes:[ 0 ] ~keepdims:true data in\nlet centered = data - col_means\n```\n\n### Outer product via broadcasting\n\nReshape vectors into compatible shapes and multiply — no loops needed:\n\n```ocaml\nlet outer = reshape [| 4; 1 |] x * reshape [| 1; 3 |] y\n```\n\n```\nOuter product (x × y):\n[[10, 20, 30],\n [20, 40, 60],\n [30, 60, 90],\n [40, 80, 120]]\n```\n\n## Broadcasting Rules\n\nTwo dimensions are compatible for broadcasting when they are either:\n1. Equal, or\n2. One of them is 1\n\nWhen dimensions differ, the size-1 dimension is stretched to match. This is\nwhy `keepdims:true` is essential for reductions used in arithmetic.\n\n## Try It\n\n1. Reshape the signal into `[4; 3]` instead of `[3; 4]` and compare with the\n   transpose of the original frames.\n2. Stack three 1D arrays of different values with `vstack`, then compute\n   row-wise means using `mean ~axes:[1]`.\n3. Compute an outer product of two vectors of different lengths (e.g., 5 and 3)\n   using `reshape` and broadcasting.\n\n## Next Steps\n\nContinue to [05-reductions-and-statistics](../05-reductions-and-statistics/) to\nlearn how to summarize data with aggregations along any axis.\n"
  },
  {
    "path": "packages/nx/examples/04-reshaping-and-broadcasting/dune",
    "content": "(executable\n (name main)\n (libraries nx))\n"
  },
  {
    "path": "packages/nx/examples/04-reshaping-and-broadcasting/main.ml",
    "content": "(** Change array shapes and let broadcasting align dimensions automatically.\n\n    Reshape a flat signal into frames, center data by subtracting column means\n    (broadcasting in action), and build an outer product without any loops. *)\n\nopen Nx\nopen Nx.Infix\n\nlet () =\n  (* --- Reshape: flat signal → frames --- *)\n  let signal = arange_f float64 0.0 12.0 1.0 in\n  Printf.printf \"Flat signal (12 samples):\\n%s\\n\\n\" (data_to_string signal);\n\n  let frames = reshape [| 3; 4 |] signal in\n  Printf.printf \"Reshaped into 3 frames of 4:\\n%s\\n\\n\" (data_to_string frames);\n\n  let flat_again = flatten frames in\n  Printf.printf \"Flattened back: %s\\n\\n\" (data_to_string flat_again);\n\n  (* --- Transpose: swap rows and columns --- *)\n  Printf.printf \"Transposed:\\n%s\\n\\n\" (data_to_string (transpose frames));\n\n  (* --- Stacking arrays --- *)\n  let a = create float64 [| 3 |] [| 1.0; 2.0; 3.0 |] in\n  let b = create float64 [| 3 |] [| 4.0; 5.0; 6.0 |] in\n  Printf.printf \"vstack [a; b]:\\n%s\\n\" (data_to_string (vstack [ a; b ]));\n  Printf.printf \"hstack [a; b]: %s\\n\\n\" (data_to_string (hstack [ a; b ]));\n\n  (* --- Broadcasting: subtract column means to center data --- *)\n  let data =\n    create float64 [| 4; 3 |]\n      [|\n        10.0;\n        200.0;\n        3000.0;\n        20.0;\n        400.0;\n        1000.0;\n        30.0;\n        100.0;\n        2000.0;\n        40.0;\n        300.0;\n        4000.0;\n      |]\n  in\n  Printf.printf \"Raw data (4 samples × 3 features):\\n%s\\n\" (data_to_string data);\n\n  (* Mean along axis 0 with keepdims — shape [1; 3] broadcasts against [4;\n     3]. 
*)\n  let col_means = mean ~axes:[ 0 ] ~keepdims:true data in\n  Printf.printf \"Column means: %s\\n\" (data_to_string col_means);\n\n  let centered = data - col_means in\n  Printf.printf \"Centered (zero-mean columns):\\n%s\\n\\n\"\n    (data_to_string centered);\n\n  (* --- Outer product via broadcasting --- *)\n  let x = create float64 [| 4 |] [| 1.0; 2.0; 3.0; 4.0 |] in\n  let y = create float64 [| 3 |] [| 10.0; 20.0; 30.0 |] in\n\n  (* x as column [4;1], y as row [1;3] → result is [4;3]. *)\n  let outer = reshape [| 4; 1 |] x * reshape [| 1; 3 |] y in\n  Printf.printf \"x = %s\\n\" (data_to_string x);\n  Printf.printf \"y = %s\\n\" (data_to_string y);\n  Printf.printf \"Outer product (x × y):\\n%s\\n\\n\" (data_to_string outer);\n\n  (* --- expand_dims / squeeze --- *)\n  let v = arange float64 0 4 1 in\n  let row = expand_dims [ 0 ] v in\n  let col = expand_dims [ 1 ] v in\n  Printf.printf \"Vector:     shape %s → %s\\n\"\n    (shape_to_string (shape v))\n    (data_to_string v);\n  Printf.printf \"Row vector: shape %s → %s\\n\"\n    (shape_to_string (shape row))\n    (data_to_string row);\n  Printf.printf \"Col vector: shape %s\\n%s\\n\"\n    (shape_to_string (shape col))\n    (data_to_string col)\n"
  },
  {
    "path": "packages/nx/examples/05-reductions-and-statistics/README.md",
    "content": "# `05-reductions-and-statistics`\n\nSummarize data with reductions — means, variances, and aggregations along any\naxis. This example analyzes daily temperature readings across four cities.\n\n```bash\ndune exec nx/examples/05-reductions-and-statistics/main.exe\n```\n\n## What You'll Learn\n\n- Reducing along specific axes with `mean`, `std`, `sum`\n- Finding extremes and their positions with `min`, `max`, `argmax`\n- Computing running totals with `cumsum`\n- Preserving dimensions for broadcasting with `keepdims`\n- Detecting outliers using z-score normalization\n- Testing conditions with `all` and `any`\n\n## Key Functions\n\n| Function                      | Purpose                                     |\n| ----------------------------- | ------------------------------------------- |\n| `mean ~axes t`                | Average values along specified axes         |\n| `std ~axes t`                 | Standard deviation along axes               |\n| `min t` / `max t`             | Global minimum / maximum                    |\n| `min ~axes t` / `max ~axes t` | Per-axis minimum / maximum                  |\n| `argmax ~axis t`              | Index of the maximum along an axis          |\n| `cumsum ~axis t`              | Cumulative sum along an axis                |\n| `all t` / `any t`             | Test if all / any elements are true         |\n| `greater_s t s`               | Element-wise `t > s` returning a bool array |\n| `less_s t s`                  | Element-wise `t < s` returning a bool array |\n\n## Output Walkthrough\n\nThe dataset is a 4×7 matrix — 4 cities, 7 days of temperature readings:\n\n```ocaml\nlet city_means = mean ~axes:[ 1 ] temps in\n```\n\n```\nCity averages:\n  Paris       mean=22.9  std=2.3\n  Cairo       mean=32.0  std=2.1\n  Helsinki    mean=-5.6  std=2.6\n  London      mean=14.9  std=1.3\n```\n\n### Axis semantics\n\n- `~axes:[1]` reduces across columns (days) → one value per city\n- `~axes:[0]` reduces across rows (cities) 
→ one value per day\n- No axis → reduces everything to a scalar\n\n### Outlier detection with z-scores\n\nUsing `keepdims:true` to broadcast the mean and std against the original data:\n\n```ocaml\nlet mu = mean ~axes:[ 1 ] ~keepdims:true temps in\nlet sigma = std ~axes:[ 1 ] ~keepdims:true temps in\nlet z_scores = (temps - mu) / sigma in\nlet outlier_mask = greater_s (abs z_scores) 1.5\n```\n\n### Condition testing\n\n```ocaml\nlet all_above_zero = all (greater_s temps 0.0) in    (* false — Helsinki *)\nlet any_below_neg5 = any (less_s temps (-5.0)) in     (* true  — Helsinki *)\n```\n\n## Try It\n\n1. Compute the daily average across all cities with `mean ~axes:[0]` and find\n   which day was warmest on average.\n2. Use `cumsum ~axis:1` on the full temperature matrix to see running totals\n   per city.\n3. Find the day with the smallest temperature range across cities using\n   `max ~axes:[0]` minus `min ~axes:[0]`.\n\n## Next Steps\n\nContinue to [06-random-numbers](../06-random-numbers/) to generate synthetic\ndata with controlled, reproducible distributions.\n"
  },
  {
    "path": "packages/nx/examples/05-reductions-and-statistics/dune",
    "content": "(executable\n (name main)\n (libraries nx))\n"
  },
  {
    "path": "packages/nx/examples/05-reductions-and-statistics/main.ml",
    "content": "(** Summarize data with reductions — means, variances, and aggregations along\n    any axis.\n\n    Analyze daily temperature readings across four cities. Compute averages,\n    find extremes, track running totals, and flag outliers. *)\n\nopen Nx\nopen Nx.Infix\n\nlet () =\n  (* Daily temperatures (°C) for 4 cities over 7 days. Rows = cities, columns =\n     days. *)\n  let temps =\n    create float64 [| 4; 7 |]\n      [|\n        22.0;\n        24.0;\n        19.0;\n        25.0;\n        23.0;\n        21.0;\n        26.0;\n        (* Paris *)\n        30.0;\n        32.0;\n        35.0;\n        31.0;\n        29.0;\n        33.0;\n        34.0;\n        (* Cairo *)\n        -5.0;\n        -8.0;\n        -3.0;\n        -10.0;\n        -2.0;\n        -7.0;\n        -4.0;\n        (* Helsinki *)\n        15.0;\n        14.0;\n        16.0;\n        13.0;\n        17.0;\n        15.0;\n        14.0;\n        (* London *)\n      |]\n  in\n  let cities = [| \"Paris\"; \"Cairo\"; \"Helsinki\"; \"London\" |] in\n  Printf.printf \"Daily temperatures (4 cities × 7 days):\\n%s\\n\\n\"\n    (data_to_string temps);\n\n  (* --- Per-city statistics (reduce along axis 1 = across days) --- *)\n  let city_means = mean ~axes:[ 1 ] temps in\n  let city_stds = std ~axes:[ 1 ] temps in\n  Printf.printf \"City averages:\\n\";\n  for i = 0 to 3 do\n    Printf.printf \"  %-10s  mean=%.1f  std=%.1f\\n\" cities.(i)\n      (item [ i ] city_means) (item [ i ] city_stds)\n  done;\n  print_newline ();\n\n  (* --- Hottest day per city (argmax along axis 1) --- *)\n  let hottest_day = argmax ~axis:1 temps in\n  Printf.printf \"Hottest day per city:\\n\";\n  for i = 0 to 3 do\n    Printf.printf \"  %-10s  day %ld\\n\" cities.(i) (item [ i ] hottest_day)\n  done;\n  print_newline ();\n\n  (* --- Global extremes --- *)\n  Printf.printf \"Warmest reading: %.1f°C\\n\" (item [] (max temps));\n  Printf.printf \"Coldest reading: %.1f°C\\n\\n\" (item [] (min temps));\n\n  (* --- 
Cumulative sum: running total of Cairo's temperatures --- *)\n  let cairo = temps.${[ I 1; A ]} in\n  let cumulative = cumsum ~axis:0 cairo in\n  Printf.printf \"Cairo daily:      %s\\n\" (data_to_string cairo);\n  Printf.printf \"Cairo cumulative: %s\\n\\n\" (data_to_string cumulative);\n\n  (* --- Outlier detection with z-scores --- *)\n  let mu = mean ~axes:[ 1 ] ~keepdims:true temps in\n  let sigma = std ~axes:[ 1 ] ~keepdims:true temps in\n  let z_scores = (temps - mu) / sigma in\n  let outlier_mask = greater_s (abs z_scores) 1.5 in\n  Printf.printf \"Z-scores:\\n%s\\n\" (data_to_string z_scores);\n  Printf.printf \"Outliers (|z| > 1.5): %s\\n\\n\" (data_to_string outlier_mask);\n\n  (* --- Check if all/any values meet a condition --- *)\n  let all_above_zero = all (greater_s temps 0.0) in\n  let any_below_neg5 = any (less_s temps (-5.0)) in\n  Printf.printf \"All temps > 0?   %b\\n\" (item [] all_above_zero);\n  Printf.printf \"Any temp < -5?   %b\\n\" (item [] any_below_neg5)\n"
  },
  {
    "path": "packages/nx/examples/06-random-numbers/README.md",
    "content": "# `06-random-numbers`\n\nImplicit RNG with reproducible scopes — generate distributions, sample,\nand shuffle. Wrap code in `Rng.run ~seed` for deterministic results.\n\n```bash\ndune exec nx/examples/06-random-numbers/main.exe\n```\n\n## What You'll Learn\n\n- Generating uniform, normal, and integer distributions\n- Running a Monte Carlo simulation to estimate pi\n- Creating synthetic training data with controlled noise\n- Verifying reproducibility with `Rng.run ~seed`\n- Shuffling arrays with `Rng.shuffle`\n\n## Key Functions\n\n| Function                                 | Purpose                                           |\n| ---------------------------------------- | ------------------------------------------------- |\n| `Rng.run ~seed f`                        | Execute `f` in a deterministic RNG scope          |\n| `Rng.uniform dtype shape`                | Uniform random values in [0, 1)                   |\n| `Rng.normal dtype shape`                 | Standard normal distribution (mean=0, std=1)      |\n| `Rng.randint ~high dtype shape low`      | Random integers in [low, high)                    |\n| `Rng.shuffle t`                          | Randomly permute array elements                   |\n| `rand dtype shape`                       | Shorthand for uniform random values               |\n\n## Output Walkthrough\n\n### Monte Carlo pi estimation\n\nDrop random points in a unit square. The fraction inside the unit circle\napproximates pi/4:\n\n```ocaml\nlet xs = rand float64 [| n |] in\nlet ys = rand float64 [| n |] in\nlet inside = less_s ((xs * xs) + (ys * ys)) 1.0 in\nlet pi_est = item [] (sum (cast Float64 inside)) *. 4.0 /. 
Float.of_int n\n```\n\n```\nMonte Carlo pi (100000 points): 3.1420  (actual: 3.1416)\n```\n\n### Reproducibility\n\nSame seed always produces the same numbers:\n\n```ocaml\nlet a = Rng.run ~seed:99 (fun () -> Rng.normal Float64 [| 3 |]) in\nlet b = Rng.run ~seed:99 (fun () -> Rng.normal Float64 [| 3 |]) in\n(* Identical? true *)\n```\n\n## Try It\n\n1. Roll 1000 dice with `Rng.randint` and compute the mean — it should\n   approach the theoretical expected value of 3.5.\n2. Increase the Monte Carlo sample count to 1,000,000 and observe how the pi\n   estimate improves.\n3. Generate two clusters of 2D points (one centered at origin, one at (3, 3))\n   using `Rng.normal` with offsets.\n\n## Next Steps\n\nContinue to [07-linear-algebra](../07-linear-algebra/) to learn matrix\noperations, decompositions, and solving linear systems.\n"
  },
  {
    "path": "packages/nx/examples/06-random-numbers/dune",
    "content": "(executable\n (name main)\n (libraries nx))\n"
  },
  {
    "path": "packages/nx/examples/06-random-numbers/main.ml",
    "content": "(** Implicit RNG with reproducible scopes — generate distributions, sample, and\n    shuffle.\n\n    Roll dice, estimate pi with Monte Carlo, and generate noisy training data.\n    Every result is reproducible inside an [Rng.run] scope: same seed, same\n    numbers. Outside any scope the global fallback provides convenient but\n    non-reproducible randomness. *)\n\nopen Nx\nopen Nx.Infix\n\nlet () =\n  (* --- Dice simulation: roll 10 six-sided dice --- *)\n  let dice = randint Int32 ~high:7 [| 10 |] 1 in\n  Printf.printf \"10 dice rolls: %s\\n\\n\" (data_to_string dice);\n\n  (* --- Uniform random floats in [0, 1) --- *)\n  let uniform = rand Float64 [| 5 |] in\n  Printf.printf \"Uniform [0,1): %s\\n\\n\" (data_to_string uniform);\n\n  (* --- Normal distribution (mean=0, std=1) --- *)\n  let normal = randn Float64 [| 5 |] in\n  Printf.printf \"Normal(0,1):   %s\\n\\n\" (data_to_string normal);\n\n  (* --- Monte Carlo pi estimation ---\n\n     Drop N random points in a unit square. The fraction landing inside the unit\n     circle (distance from origin < 1) approximates pi/4. *)\n  let n = 100_000 in\n  let xs = rand float64 [| n |] in\n  let ys = rand float64 [| n |] in\n  let inside = less_s ((xs * xs) + (ys * ys)) 1.0 in\n  let count = sum (cast Float64 inside) in\n  let pi_est = item [] count *. 4.0 /. 
Float.of_int n in\n  Printf.printf \"Monte Carlo pi (%d points): %.4f  (actual: %.4f)\\n\\n\" n pi_est\n    Float.pi;\n\n  (* --- Synthetic training data: y = 3x + 2 + noise --- *)\n  let x = rand Float64 [| 8 |] in\n  let noise = randn Float64 [| 8 |] *$ 0.1 in\n  let y = (x *$ 3.0) +$ 2.0 + noise in\n  Printf.printf \"x:     %s\\n\" (data_to_string x);\n  Printf.printf \"y ~ 3x+2: %s\\n\\n\" (data_to_string y);\n\n  (* --- Reproducibility: Rng.run ~seed gives the same result --- *)\n  let a = Rng.run ~seed:99 (fun () -> randn Float64 [| 3 |]) in\n  let b = Rng.run ~seed:99 (fun () -> randn Float64 [| 3 |]) in\n  Printf.printf \"Same seed, run 1: %s\\n\" (data_to_string a);\n  Printf.printf \"Same seed, run 2: %s\\n\" (data_to_string b);\n  Printf.printf \"Identical? %b\\n\\n\" (item [] (all (equal a b)));\n\n  (* --- Shuffle: random permutation of a dataset --- *)\n  let data = arange int32 0 8 1 in\n  let shuffled = shuffle data in\n  Printf.printf \"Original:  %s\\n\" (data_to_string data);\n  Printf.printf \"Shuffled:  %s\\n\" (data_to_string shuffled)\n"
  },
  {
    "path": "packages/nx/examples/07-linear-algebra/README.md",
    "content": "# `07-linear-algebra`\n\nSolve systems, decompose matrices, and fit models — linear algebra made\npractical. This example covers matrix multiplication, linear solves, least\nsquares fitting, eigendecomposition, and SVD.\n\n```bash\ndune exec nx/examples/07-linear-algebra/main.exe\n```\n\n## What You'll Learn\n\n- Matrix multiplication with `@@` and dot products with `<.>`\n- Solving linear systems with `/@`\n- Computing inverses, determinants, and norms\n- Fitting a line to data with least squares (`lstsq`)\n- Eigendecomposition of symmetric matrices (`eigh`)\n- Singular value decomposition and reconstruction (`svd`)\n\n## Key Functions\n\n| Function    | Purpose                                            |\n| ----------- | -------------------------------------------------- |\n| `a @@ b`    | Matrix multiplication                              |\n| `u <.> v`   | Vector dot product                                 |\n| `a /@ b`    | Solve linear system Ax = b                         |\n| `inv m`     | Matrix inverse                                     |\n| `det m`     | Determinant                                        |\n| `norm m`    | Matrix norm (Frobenius by default)                 |\n| `lstsq a b` | Least squares solution to overdetermined system    |\n| `eigh m`    | Eigenvalues and eigenvectors of a symmetric matrix |\n| `svd m`     | Singular value decomposition (U, S, Vt)            |\n| `diag v`    | Create diagonal matrix from a vector               |\n\n## Output Walkthrough\n\n### Matrix multiplication\n\n```ocaml\nlet a = create float64 [| 2; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6. |] in\nlet b = create float64 [| 3; 2 |] [| 7.; 8.; 9.; 10.; 11.; 12. 
|] in\na @@ b    (* [2; 2] result *)\n```\n\n### Solving linear systems\n\nThe `/@` operator solves Ax = b for x:\n\n```ocaml\nlet x = coeff /@ rhs    (* x = [2; 3; -1] *)\n```\n\n### Inverse verification\n\n```ocaml\nlet m_inv = inv m in\nm @@ m_inv    (* ≈ identity matrix *)\n```\n\n### Least squares fitting\n\nBuild a design matrix [x, 1] and solve for slope and intercept:\n\n```ocaml\nlet design = hstack [ x_col; ones float64 [| 6; 1 |] ] in\nlet coeffs, _, _, _ = lstsq design y_col in\n(* m ≈ 1.96, c ≈ 1.08 *)\n```\n\n### SVD decomposition and reconstruction\n\n```ocaml\nlet u_mat, s_vec, vt = svd data in\nlet reconstructed = u_mat.${[ A; R (0, 2) ]} @@ diag s_vec @@ vt\n(* reconstructed ≈ original *)\n```\n\n## Try It\n\n1. Solve a different 3×3 system and verify the solution by computing\n   `coeff @@ x` — it should match the right-hand side.\n2. Extend the least squares example to fit a quadratic by adding an x^2\n   column to the design matrix.\n3. Use SVD for low-rank approximation: zero out the smallest singular value,\n   reconstruct, and compare to the original.\n\n## Next Steps\n\nContinue to [08-signal-processing](../08-signal-processing/) to apply\nfrequency analysis with FFT.\n"
  },
  {
    "path": "packages/nx/examples/07-linear-algebra/dune",
    "content": "(executable\n (name main)\n (libraries nx))\n"
  },
  {
    "path": "packages/nx/examples/07-linear-algebra/main.ml",
    "content": "(** Solve systems, decompose matrices, and fit models — linear algebra made\n    practical.\n\n    Fit a line to noisy data with least squares, verify matrix inverses,\n    decompose matrices with SVD and eigendecomposition. *)\n\nopen Nx\nopen Nx.Infix\n\nlet () =\n  (* --- Matrix multiplication --- *)\n  let a = create float64 [| 2; 3 |] [| 1.0; 2.0; 3.0; 4.0; 5.0; 6.0 |] in\n  let b = create float64 [| 3; 2 |] [| 7.0; 8.0; 9.0; 10.0; 11.0; 12.0 |] in\n  Printf.printf \"A (2×3):\\n%s\\n\" (data_to_string a);\n  Printf.printf \"B (3×2):\\n%s\\n\" (data_to_string b);\n  Printf.printf \"A @@ B:\\n%s\\n\\n\" (data_to_string (a @@ b));\n\n  (* --- Dot product --- *)\n  let u = create float64 [| 3 |] [| 1.0; 2.0; 3.0 |] in\n  let v = create float64 [| 3 |] [| 4.0; 5.0; 6.0 |] in\n  Printf.printf \"u · v = %s\\n\\n\" (data_to_string (u <.> v));\n\n  (* --- Solving linear systems: A x = b --- *)\n  let coeff =\n    create float64 [| 3; 3 |]\n      [| 2.0; 1.0; -1.0; -3.0; -1.0; 2.0; -2.0; 1.0; 2.0 |]\n  in\n  let rhs = create float64 [| 3; 1 |] [| 8.0; -11.0; -3.0 |] in\n  let x = coeff /@ rhs in\n  Printf.printf \"System Ax = b:\\n\";\n  Printf.printf \"A:\\n%s\\n\" (data_to_string coeff);\n  Printf.printf \"b: %s\\n\" (data_to_string (flatten rhs));\n  Printf.printf \"x: %s\\n\\n\" (data_to_string (flatten x));\n\n  (* --- Inverse: verify A @@ inv(A) ≈ I --- *)\n  let m = create float64 [| 2; 2 |] [| 4.0; 7.0; 2.0; 6.0 |] in\n  let m_inv = inv m in\n  let product = m @@ m_inv in\n  Printf.printf \"M:\\n%s\\n\" (data_to_string m);\n  Printf.printf \"M⁻¹:\\n%s\\n\" (data_to_string m_inv);\n  Printf.printf \"M × M⁻¹ ≈ I:\\n%s\\n\\n\" (data_to_string product);\n\n  (* --- Determinant and norm --- *)\n  Printf.printf \"det(M) = %.1f\\n\" (item [] (det m));\n  Printf.printf \"‖M‖_F  = %.4f\\n\\n\" (item [] (norm m));\n\n  (* --- Least squares: fit y = mx + c to noisy data --- *)\n  let x_data = create float64 [| 6 |] [| 0.0; 1.0; 2.0; 3.0; 4.0; 5.0 |] in\n  
let y_data = create float64 [| 6 |] [| 1.1; 2.9; 5.2; 6.8; 9.1; 10.8 |] in\n\n  (* Build design matrix: [x, 1] for y = m*x + c *)\n  let x_col = reshape [| 6; 1 |] x_data in\n  let design = hstack [ x_col; ones float64 [| 6; 1 |] ] in\n  let y_col = reshape [| 6; 1 |] y_data in\n\n  let coeffs, _residuals, _rank, _sv = lstsq design y_col in\n  Printf.printf \"Least squares fit  y = m·x + c:\\n\";\n  Printf.printf \"  m = %.4f\\n\" (item [ 0; 0 ] coeffs);\n  Printf.printf \"  c = %.4f\\n\\n\" (item [ 1; 0 ] coeffs);\n\n  (* --- Eigendecomposition of a symmetric matrix --- *)\n  let sym =\n    create float64 [| 3; 3 |]\n      [| 2.0; -1.0; 0.0; -1.0; 2.0; -1.0; 0.0; -1.0; 2.0 |]\n  in\n  let eigenvalues, eigenvectors = eigh sym in\n  Printf.printf \"Symmetric matrix:\\n%s\\n\" (data_to_string sym);\n  Printf.printf \"Eigenvalues:  %s\\n\" (data_to_string eigenvalues);\n  Printf.printf \"Eigenvectors:\\n%s\\n\\n\" (data_to_string eigenvectors);\n\n  (* --- SVD: decompose and reconstruct --- *)\n  let data = create float64 [| 3; 2 |] [| 1.0; 2.0; 3.0; 4.0; 5.0; 6.0 |] in\n  let u_mat, s_vec, vt = svd data in\n  (* Reconstruct: U @ diag(S) @ Vt *)\n  let s_diag = diag s_vec in\n  let reconstructed = u_mat.${[ A; R (0, 2) ]} @@ s_diag @@ vt in\n  Printf.printf \"Original:\\n%s\\n\" (data_to_string data);\n  Printf.printf \"Singular values: %s\\n\" (data_to_string s_vec);\n  Printf.printf \"Reconstructed (U·S·Vt):\\n%s\\n\" (data_to_string reconstructed)\n"
  },
  {
    "path": "packages/nx/examples/08-signal-processing/README.md",
    "content": "# `08-signal-processing`\n\nAnalyze frequencies with FFT — decompose signals and filter noise. This example\nbuilds a signal from two sine waves plus noise, identifies the component\nfrequencies, and filters the noise in the frequency domain.\n\n```bash\ndune exec nx/examples/08-signal-processing/main.exe\n```\n\n## What You'll Learn\n\n- Constructing synthetic signals from sine waves and noise\n- Transforming to the frequency domain with `rfft`\n- Mapping frequency bins to Hz with `rfftfreq`\n- Identifying dominant frequency components by magnitude\n- Filtering noise by zeroing small-magnitude frequency bins\n- Reconstructing a clean signal with `irfft`\n\n## Key Functions\n\n| Function                      | Purpose                                           |\n| ----------------------------- | ------------------------------------------------- |\n| `rfft t`                      | Real-valued FFT (time domain to frequency domain) |\n| `irfft ~n t`                  | Inverse real FFT (frequency domain back to time)  |\n| `rfftfreq ~d n`               | Frequency bin labels for `rfft` output            |\n| `linspace dtype start stop n` | Evenly spaced time samples                        |\n| `sin t`                       | Element-wise sine                                 |\n| `Rng.normal ~key dtype shape` | Gaussian noise                                    |\n| `Nx.Infix` (`+`, `*`, `*$`)   | Clean arithmetic on arrays                        |\n\n## Output Walkthrough\n\n### Signal construction\n\nA 256-sample signal at 256 Hz composed of two sine waves plus noise:\n\n```\nSignal: 256 samples at 256 Hz\nComponents: 5 Hz (amplitude 1.0) + 20 Hz (amplitude 0.5) + noise\n```\n\n### Frequency analysis\n\nAfter `rfft`, compute magnitudes scaled by 2/N to get amplitudes:\n\n```\nDominant frequencies:\n  5.0 Hz  (magnitude 1.002)\n  20.0 Hz  (magnitude 0.501)\n```\n\nThe FFT correctly recovers both sine components. 
Noise spreads across many\nbins with small magnitudes.\n\n### Noise filtering\n\nZero all frequency bins below a threshold, then reconstruct with `irfft`:\n\n```\nAfter filtering (threshold=0.2):\n  Original first 8:  [1.29, 1.16, 0.81, 0.44, ...]\n  Filtered first 8:  [1.00, 1.16, 0.79, 0.36, ...]\n```\n\nThe filtered signal retains the two sine waves while removing noise.\n\n### Frequency bins\n\nFor an N-sample real signal, `rfft` produces N/2 + 1 complex bins from 0 Hz\nto the Nyquist frequency (sample_rate / 2):\n\n```\nTotal bins: 129 (for 256-sample signal)\n```\n\n## Try It\n\n1. Add a third sine wave at 40 Hz and verify it appears in the dominant\n   frequencies.\n2. Raise the filter threshold to 0.5 and observe how the 20 Hz component\n   gets removed (its amplitude is only 0.5).\n3. Double the sample rate to 512 Hz and check how the frequency resolution\n   changes.\n\n## Next Steps\n\nContinue to [09-image-processing](../09-image-processing/) to apply\nconvolution and pooling to 2D image data.\n"
  },
  {
    "path": "packages/nx/examples/08-signal-processing/dune",
    "content": "(executable\n (name main)\n (libraries nx))\n"
  },
  {
    "path": "packages/nx/examples/08-signal-processing/main.ml",
    "content": "(** Analyze frequencies with FFT — decompose signals and filter noise.\n\n    Build a signal from two sine waves plus noise. Use the real FFT to identify\n    component frequencies, then filter the noise and reconstruct a clean signal.\n*)\n\nopen Nx\nopen Nx.Infix\n\nlet () =\n  (* Signal parameters. *)\n  let n = 256 in\n  let sample_rate = 256.0 in\n  let dt = 1.0 /. sample_rate in\n\n  (* Time axis: n samples at the given rate. *)\n  let t = linspace float64 0.0 (Float.of_int n *. dt) n ~endpoint:false in\n\n  (* Build signal: 5 Hz sine + 20 Hz sine + noise. *)\n  let noise = randn Float64 [| n |] *$ 0.3 in\n  let pi2 = 2.0 *. Float.pi in\n  let signal_5hz = sin (t *$ (pi2 *. 5.0)) in\n  let signal_20hz = sin (t *$ (pi2 *. 20.0)) *$ 0.5 in\n  let signal = signal_5hz + signal_20hz + noise in\n\n  Printf.printf \"Signal: %d samples at %.0f Hz\\n\" n sample_rate;\n  Printf.printf\n    \"Components: 5 Hz (amplitude 1.0) + 20 Hz (amplitude 0.5) + noise\\n\\n\";\n\n  (* Show first 8 samples. *)\n  Printf.printf \"First 8 samples: %s\\n\\n\"\n    (data_to_string (slice [ R (0, 8) ] signal));\n\n  (* --- Real FFT: transform to frequency domain --- *)\n  let spectrum = rfft signal in\n  let freqs = rfftfreq ~d:dt n in\n\n  (* Magnitudes (scaled by 2/N for single-sided spectrum). Extract real and\n     imaginary parts to compute |z| = sqrt(re² + im²). *)\n  let spectrum_arr = to_array spectrum in\n  let re =\n    create float64 (shape spectrum)\n      (Array.map (fun c -> c.Complex.re) spectrum_arr)\n  in\n  let im =\n    create float64 (shape spectrum)\n      (Array.map (fun c -> c.Complex.im) spectrum_arr)\n  in\n  let magnitudes = sqrt ((re * re) + (im * im)) *$ (2.0 /. Float.of_int n) in\n\n  (* Find the dominant frequencies (magnitude > 0.3). 
*)\n  Printf.printf \"Dominant frequencies:\\n\";\n  let n_freqs = (shape magnitudes).(0) in\n  for i = 0 to pred n_freqs do\n    let mag = item [ i ] magnitudes in\n    if Stdlib.( > ) mag 0.3 then\n      Printf.printf \"  %.1f Hz  (magnitude %.3f)\\n\" (item [ i ] freqs) mag\n  done;\n  print_newline ();\n\n  (* --- Filter: zero out small frequency components --- *)\n  let threshold = 0.2 in\n  let mag_arr = to_array magnitudes in\n  let filtered =\n    Array.mapi\n      (fun i c ->\n        if Stdlib.( < ) mag_arr.(i) threshold then Complex.zero else c)\n      (to_array spectrum)\n  in\n  let clean_spectrum = create Complex128 (shape spectrum) filtered in\n\n  (* Inverse FFT back to time domain. *)\n  let clean_signal = irfft ~n clean_spectrum in\n\n  Printf.printf \"After filtering (threshold=%.1f):\\n\" threshold;\n  Printf.printf \"  Original first 8:  %s\\n\"\n    (data_to_string (slice [ R (0, 8) ] signal));\n  Printf.printf \"  Filtered first 8:  %s\\n\\n\"\n    (data_to_string (slice [ R (0, 8) ] clean_signal));\n\n  (* --- Frequency bins explained --- *)\n  Printf.printf \"Frequency bins (first 10): %s\\n\"\n    (data_to_string (slice [ R (0, 10) ] freqs));\n  Printf.printf \"Total bins: %d (for %d-sample signal)\\n\" (shape freqs).(0) n\n"
  },
  {
    "path": "packages/nx/examples/09-image-processing/README.md",
    "content": "# `09-image-processing`\n\nLoad, transform, and save images as arrays — convolutions, pooling, and pixel\nmath. This example creates a synthetic grayscale image, blurs it, detects edges\nwith Sobel filters, and downsamples with max pooling.\n\n```bash\ndune exec nx/examples/09-image-processing/main.exe\n```\n\n## What You'll Learn\n\n- Creating synthetic images with `init` and pixel math\n- Applying 2D convolution with `correlate2d` (NCHW format)\n- Gaussian blur with a 3x3 kernel\n- Sobel edge detection (horizontal + vertical gradients)\n- Downsampling with `max_pool2d`\n- Converting between `UInt8` and `Float32` for computation\n- Saving arrays as PNG files with `Nx_io.save_image`\n\n## Key Functions\n\n| Function                                      | Purpose                                            |\n| --------------------------------------------- | -------------------------------------------------- |\n| `init UInt8 shape f`                          | Create an image by computing each pixel            |\n| `correlate2d ~padding_mode:\\`Same img kernel` | 2D convolution (expects NCHW)                      |\n| `max_pool2d ~kernel_size ~stride img`         | Downsample by taking max in each window            |\n| `cast Float32 t`                              | Convert dtype for floating-point operations        |\n| `clamp ~min ~max t`                           | Clamp values to a valid pixel range                |\n| `contiguous t`                                | Ensure contiguous memory layout (required for I/O) |\n| `Nx_io.save_image path t`                     | Save a 2D (HxW) array as a grayscale PNG           |\n\n## Output Walkthrough\n\n### Synthetic image\n\nA 64x64 horizontal gradient with a bright rectangle in the center:\n\n```ocaml\nlet img = init UInt8 [| h; w |] (fun idx ->\n    let y = idx.(0) and x = idx.(1) in\n    let base = x * 255 / (w - 1) in\n    if y >= 16 && y < 48 && x >= 16 && x < 48 then 220 else base)\n```\n\n### 
NCHW format\n\nConvolution operations expect 4D tensors in NCHW format (batch, channels,\nheight, width). Convert with:\n\n```ocaml\nlet img_f = cast Float32 img |> contiguous |> reshape [| 1; 1; h; w |]\n```\n\n### Gaussian blur\n\nA 3x3 kernel with weights summing to 1, giving more weight to the center:\n\n```ocaml\nlet blur_kernel = create Float32 [| 1; 1; 3; 3 |]\n  [| 1./16.; 2./16.; 1./16.;\n     2./16.; 4./16.; 2./16.;\n     1./16.; 2./16.; 1./16. |]\n```\n\n### Sobel edge detection\n\nCombines horizontal and vertical gradient magnitudes:\n\n```ocaml\nlet gx = correlate2d ~padding_mode:`Same img_f sobel_x in\nlet gy = correlate2d ~padding_mode:`Same img_f sobel_y in\nlet edges = sqrt (add (mul gx gx) (mul gy gy))\n```\n\n### Max pooling\n\n2x downsampling by taking the maximum in each 2x2 window:\n\n```\nSaved: pooled.png (64x64 -> 32x32)\n```\n\n## Output Files\n\nRunning this example creates four PNG files in the current directory:\n\n| File           | Description                    |\n| -------------- | ------------------------------ |\n| `gradient.png` | Original synthetic image       |\n| `blurred.png`  | After Gaussian blur            |\n| `edges.png`    | Sobel edge detection result    |\n| `pooled.png`   | 2x downsampled via max pooling |\n\n## Try It\n\n1. Replace the blur kernel with a sharpening kernel:\n   `[| 0.; -1.; 0.; -1.; 5.; -1.; 0.; -1.; 0. |]`\n2. Try a larger pooling window (`4, 4`) and observe the effect on image size\n   and detail.\n3. Chain blur and edge detection: blur first to reduce noise, then apply Sobel.\n\n## Next Steps\n\nYou've completed the Nx examples! For machine learning workflows, see the\n[kaun examples](/docs/kaun/).\n"
  },
  {
    "path": "packages/nx/examples/09-image-processing/dune",
    "content": "(executable\n (name main)\n (libraries nx nx.io))\n"
  },
  {
    "path": "packages/nx/examples/09-image-processing/main.ml",
    "content": "(** Load, transform, and save images as arrays — convolutions, pooling, and\n    pixel math.\n\n    Create a synthetic grayscale gradient, blur it, detect edges with Sobel\n    filters, and downsample with max pooling. Results are saved as PNG files. *)\n\nopen Nx\n\nlet () =\n  let h = 64 and w = 64 in\n\n  (* --- Create a gradient image with a bright rectangle --- *)\n  let img =\n    init UInt8 [| h; w |] (fun idx ->\n        let y = idx.(0) and x = idx.(1) in\n        (* Background: horizontal gradient. *)\n        let base = x * 255 / (w - 1) in\n        (* Bright rectangle in the center. *)\n        if y >= 16 && y < 48 && x >= 16 && x < 48 then 220 else base)\n  in\n  Printf.printf \"Created %dx%d grayscale image\\n\" h w;\n\n  (* Save the original. *)\n  Nx_io.save_image \"gradient.png\" (contiguous img);\n  Printf.printf \"Saved: gradient.png\\n\";\n\n  (* --- Gaussian blur with a 3x3 kernel --- *)\n\n  (* Convert to float for convolution. The scipy-style correlate works on raw\n     spatial dims, so we use [H; W] directly. *)\n  let img_f = cast Float32 img |> contiguous in\n\n  let blur_kernel =\n    create Float32 [| 3; 3 |]\n      [|\n        1.0 /. 16.0;\n        2.0 /. 16.0;\n        1.0 /. 16.0;\n        2.0 /. 16.0;\n        4.0 /. 16.0;\n        2.0 /. 16.0;\n        1.0 /. 16.0;\n        2.0 /. 16.0;\n        1.0 /. 
16.0;\n      |]\n  in\n  let blurred = correlate ~padding:`Same img_f blur_kernel in\n  let blurred_img =\n    clamp ~min:0.0 ~max:255.0 blurred |> cast UInt8 |> contiguous\n  in\n  Nx_io.save_image \"blurred.png\" blurred_img;\n  Printf.printf \"Saved: blurred.png\\n\";\n\n  (* --- Sobel edge detection --- *)\n  let sobel_x =\n    create Float32 [| 3; 3 |]\n      [| -1.0; 0.0; 1.0; -2.0; 0.0; 2.0; -1.0; 0.0; 1.0 |]\n  in\n  let sobel_y =\n    create Float32 [| 3; 3 |]\n      [| -1.0; -2.0; -1.0; 0.0; 0.0; 0.0; 1.0; 2.0; 1.0 |]\n  in\n\n  let gx = correlate ~padding:`Same img_f sobel_x in\n  let gy = correlate ~padding:`Same img_f sobel_y in\n  let edges = sqrt (add (mul gx gx) (mul gy gy)) in\n  let edges_img = clamp ~min:0.0 ~max:255.0 edges |> cast UInt8 |> contiguous in\n  Nx_io.save_image \"edges.png\" edges_img;\n  Printf.printf \"Saved: edges.png\\n\";\n\n  (* --- Max pooling: 2x downsample using maximum_filter --- *)\n  let pooled =\n    maximum_filter ~kernel_size:[| 2; 2 |] ~stride:[| 2; 2 |] img_f\n  in\n  let pool_h = (shape pooled).(0) and pool_w = (shape pooled).(1) in\n  let pooled_img =\n    clamp ~min:0.0 ~max:255.0 pooled |> cast UInt8 |> contiguous\n  in\n  Nx_io.save_image \"pooled.png\" pooled_img;\n  Printf.printf \"Saved: pooled.png (%dx%d -> %dx%d)\\n\" h w pool_h pool_w;\n\n  Printf.printf \"\\nAll images saved to the current directory.\\n\"\n"
  },
  {
    "path": "packages/nx/examples/10-machine-learning/README.md",
    "content": "# Machine Learning\n\nFour classic ML algorithms built from Nx primitives: SVD, broadcasting,\nreductions, and scalar loops.\n\n| File       | Algorithm | Key Nx operations                                         |\n| ---------- | --------- | --------------------------------------------------------- |\n| `pca.ml`   | PCA       | `svd`, `mean`, `matmul`, `cumsum`                         |\n| `kmeans.ml`| K-Means   | broadcasting, `argmin`, `categorical`, `sq_distances`     |\n| `dbscan.ml`| DBSCAN    | pairwise distances, `less_equal_s`, boolean `item` in BFS |\n| `tsne.ml`  | t-SNE     | `exp`, `log`, Student-t kernel, momentum gradient descent |\n\n## Running\n\n```bash\ndune exec nx/examples/10-machine-learning/pca.exe\ndune exec nx/examples/10-machine-learning/kmeans.exe\ndune exec nx/examples/10-machine-learning/dbscan.exe\ndune exec nx/examples/10-machine-learning/tsne.exe\n```\n"
  },
  {
    "path": "packages/nx/examples/10-machine-learning/dbscan.ml",
    "content": "(** DBSCAN density-based clustering.\n\n    Generate two dense clusters with scattered noise, find clusters using\n    neighbourhood density, and report cluster sizes and noise count. *)\n\nopen Nx\n\nlet () =\n  let eps = 1.5 in\n  let min_samples = 5 in\n\n  (* Two tight blobs plus uniform noise *)\n  let c1 = add_s (mul_s (randn Float64 [| 80; 2 |]) 0.6) 3.0 in\n  let c2 = sub_s (mul_s (randn Float64 [| 80; 2 |]) 0.6) 3.0 in\n  let noise = sub_s (mul_s (rand Float64 [| 20; 2 |]) 14.0) 7.0 in\n  let data = concatenate ~axis:0 [ c1; c2; noise ] in\n  let n = (shape data).(0) in\n  Printf.printf \"Data: %d points (eps=%.1f, min_samples=%d)\\n\\n\" n eps\n    min_samples;\n\n  (* Pairwise Euclidean distance matrix [n, n] *)\n  let diff = sub (expand_dims [ 1 ] data) (expand_dims [ 0 ] data) in\n  let dist = sqrt (sum ~axes:[ 2 ] (square diff)) in\n\n  (* Neighbour adjacency and core-point mask *)\n  let neighbours = less_equal_s dist eps in\n  let counts = sum ~axes:[ 1 ] (cast Float64 neighbours) in\n  let core = greater_equal_s counts (Float.of_int min_samples) in\n\n  (* BFS cluster expansion *)\n  let labels = Array.make n (-1) in\n  let cluster_id = ref 0 in\n  for i = 0 to n - 1 do\n    if labels.(i) = -1 && item [ i ] core then begin\n      let c = !cluster_id in\n      incr cluster_id;\n      labels.(i) <- c;\n      let q = Queue.create () in\n      Queue.push i q;\n      while not (Queue.is_empty q) do\n        let p = Queue.pop q in\n        for j = 0 to n - 1 do\n          if labels.(j) = -1 && item [ p; j ] neighbours then begin\n            labels.(j) <- c;\n            if item [ j ] core then Queue.push j q\n          end\n        done\n      done\n    end\n  done;\n\n  let n_clusters = !cluster_id in\n  let n_noise =\n    Array.fold_left (fun acc l -> if l = -1 then acc + 1 else acc) 0 labels\n  in\n  Printf.printf \"Clusters found: %d\\n\" n_clusters;\n  Printf.printf \"Noise points:   %d\\n\\n\" n_noise;\n  for c = 0 to n_clusters - 
1 do\n    let count =\n      Array.fold_left (fun acc l -> if l = c then acc + 1 else acc) 0 labels\n    in\n    Printf.printf \"  Cluster %d: %d points\\n\" c count\n  done\n"
  },
  {
    "path": "packages/nx/examples/10-machine-learning/dune",
    "content": "(executables\n (names pca kmeans dbscan tsne)\n (libraries nx))\n"
  },
  {
    "path": "packages/nx/examples/10-machine-learning/kmeans.ml",
    "content": "(** K-means clustering with kmeans++ initialisation.\n\n    Generate synthetic blobs, cluster them with Lloyd's algorithm, and report\n    centroid positions and inertia. *)\n\nopen Nx\n\n(* Pairwise squared L2 distances: [n, d] x [k, d] -> [n, k] *)\nlet sq_distances a b =\n  sum ~axes:[ 2 ] (square (sub (expand_dims [ 1 ] a) (expand_dims [ 0 ] b)))\n\n(* Isotropic Gaussian blobs around given centres. *)\nlet make_blobs ~samples_per_cluster centers =\n  let d = (shape centers).(1) in\n  let blobs =\n    List.init\n      (shape centers).(0)\n      (fun c ->\n        add (randn Float64 [| samples_per_cluster; d |]) (get [ c ] centers))\n  in\n  shuffle (concatenate ~axis:0 blobs)\n\n(* Kmeans++ initialisation: pick k centres from data. *)\nlet kmeanspp data k =\n  let n = (shape data).(0) in\n  let d = (shape data).(1) in\n  let centroids = zeros Float64 [| k; d |] in\n  let idx = Int32.to_int (item [] (randint Int32 ~high:n [||] 0)) in\n  set [ 0 ] centroids (get [ idx ] data);\n  for c = 1 to k - 1 do\n    let current = slice [ R (0, c); A ] centroids in\n    let min_d = min ~axes:[ 1 ] (sq_distances data current) in\n    let chosen =\n      Int32.to_int (item [] (categorical (log (clamp ~min:1e-30 min_d))))\n    in\n    set [ c ] centroids (get [ chosen ] data)\n  done;\n  centroids\n\nlet () =\n  let true_centers =\n    create Float64 [| 3; 2 |] [| 0.0; 0.0; 7.0; 7.0; -5.0; 10.0 |]\n  in\n  let data = make_blobs ~samples_per_cluster:100 true_centers in\n  let n = (shape data).(0) in\n  let d = (shape data).(1) in\n  let k = 3 in\n  Printf.printf \"Data: %d points, %d features, %d clusters\\n\\n\" n d k;\n\n  let centroids = kmeanspp data k in\n  let labels = ref (zeros Int32 [| n |]) in\n  let max_iter = 100 in\n  let tol = 1e-6 in\n  let converged = ref false in\n  let iter = ref 0 in\n  while !iter < max_iter && not !converged do\n    labels := argmin ~axis:1 (sq_distances data centroids);\n\n    let old = copy centroids in\n    for c = 0 to k 
- 1 do\n      let mask = cast Float64 (equal !labels (scalar Int32 (Int32.of_int c))) in\n      let count = item [] (sum mask) in\n      if count > 0.0 then begin\n        let total = sum ~axes:[ 0 ] (mul data (expand_dims [ 1 ] mask)) in\n        set [ c ] centroids (div_s total count)\n      end\n    done;\n\n    let shift = item [] (max (abs (sub centroids old))) in\n    converged := shift < tol;\n    incr iter\n  done;\n\n  Printf.printf \"Converged after %d iterations\\n\\n\" !iter;\n  Printf.printf \"Centroids:\\n%s\\n\" (data_to_string centroids);\n\n  for c = 0 to k - 1 do\n    let count =\n      item []\n        (sum (cast Float64 (equal !labels (scalar Int32 (Int32.of_int c)))))\n    in\n    Printf.printf \"  Cluster %d: %.0f points\\n\" c count\n  done;\n\n  let inertia = item [] (sum (min ~axes:[ 1 ] (sq_distances data centroids))) in\n  Printf.printf \"\\nInertia: %.2f\\n\" inertia\n"
  },
  {
    "path": "packages/nx/examples/10-machine-learning/pca.ml",
    "content": "(** Principal component analysis via SVD.\n\n    Generate synthetic data with known structure, project to lower dimensions,\n    and verify the explained variance captures the signal. *)\n\nopen Nx\nopen Nx.Infix\n\nlet () =\n  (* 200 points in 5D: most variance along axes 0 and 1 *)\n  let n = 200 in\n  let scale = create Float64 [| 1; 5 |] [| 10.0; 5.0; 1.0; 1.0; 1.0 |] in\n  let data = randn Float64 [| n; 5 |] * scale in\n  Printf.printf \"Data shape: [%d; %d]\\n\\n\" (shape data).(0) (shape data).(1);\n\n  (* Center *)\n  let mu = mean ~axes:[ 0 ] ~keepdims:true data in\n  let centered = data - mu in\n\n  (* Economy SVD: centered = U diag(S) Vt *)\n  let _u, s, vt = svd ~full_matrices:false centered in\n\n  (* Explained variance ratio: s_i^2 / sum(s^2) *)\n  let s2 = square s in\n  let ratios = s2 /$ item [] (sum s2) in\n  Printf.printf \"Singular values:          %s\\n\" (data_to_string s);\n  Printf.printf \"Explained variance ratio: %s\\n\" (data_to_string ratios);\n  Printf.printf \"Cumulative:               %s\\n\\n\"\n    (data_to_string (cumsum ratios));\n\n  (* Project to 2 components *)\n  let n_components = 2 in\n  let components = slice [ R (0, n_components); A ] vt in\n  let projected = matmul centered (matrix_transpose components) in\n  Printf.printf \"Projected shape: [%d; %d]\\n\"\n    (shape projected).(0)\n    (shape projected).(1);\n\n  (* Reconstruct and measure error *)\n  let reconstructed = matmul projected components + mu in\n  let rmse = sqrt (mean (square (data - reconstructed))) in\n  Printf.printf \"Reconstruction RMSE (2 of 5 components): %.4f\\n\" (item [] rmse)\n"
  },
  {
    "path": "packages/nx/examples/10-machine-learning/tsne.ml",
    "content": "(** t-SNE dimensionality reduction.\n\n    Embed three 10-dimensional clusters into 2D using the exact t-SNE algorithm.\n    Reports KL divergence and per-cluster spread. *)\n\nopen Nx\n\n(* Pairwise squared distances: [n, d] -> [n, n] *)\nlet pairwise_sq data =\n  let diff = sub (expand_dims [ 1 ] data) (expand_dims [ 0 ] data) in\n  sum ~axes:[ 2 ] (square diff)\n\n(* Off-diagonal mask: 1 everywhere except the diagonal. *)\nlet off_diag n =\n  sub (full Float64 [| n; n |] 1.0) (cast Float64 (eye Float64 n))\n\n(* Compute symmetric P matrix via binary search for each row's bandwidth. *)\nlet compute_p dist_sq ~perplexity =\n  let n = (shape dist_sq).(0) in\n  let target = Float.log perplexity in\n  let p = zeros Float64 [| n; n |] in\n  for i = 0 to n - 1 do\n    let di = get [ i ] dist_sq in\n    let lo = ref 1e-10 in\n    let hi = ref 1e4 in\n    let row = ref (zeros Float64 [| n |]) in\n    for _ = 0 to 50 do\n      let sigma = (!lo +. !hi) /. 2.0 in\n      let beta = 1.0 /. (2.0 *. sigma *. sigma) in\n      let pi = exp (mul_s di (-.beta)) in\n      set_item [ i ] 0.0 pi;\n      let s = item [] (sum pi) in\n      let pi = div_s pi (Float.max s 1e-30) in\n      let h = -.item [] (sum (mul pi (log (clamp ~min:1e-30 pi)))) in\n      row := pi;\n      if h > target then hi := sigma else lo := sigma\n    done;\n    set [ i ] p !row\n  done;\n  (* Symmetrise: P = (P + P^T) / (2n) *)\n  let p = div_s (add p (matrix_transpose p)) (2.0 *. 
Float.of_int n) in\n  clamp ~min:1e-12 p\n\nlet () =\n  let n_per = 50 in\n  let dim = 10 in\n  let perplexity = 20.0 in\n  let max_iter = 500 in\n  let lr = 100.0 in\n\n  (* Three well-separated clusters in 10D *)\n  let c0 = randn Float64 [| n_per; dim |] in\n  let c1 = add_s (randn Float64 [| n_per; dim |]) 8.0 in\n  let c2 = sub_s (randn Float64 [| n_per; dim |]) 8.0 in\n  let data = concatenate ~axis:0 [ c0; c1; c2 ] in\n  let n = (shape data).(0) in\n  Printf.printf \"Data: %d points in %dD, perplexity=%.0f\\n\\n\" n dim perplexity;\n\n  let dist_sq = pairwise_sq data in\n  let p = compute_p dist_sq ~perplexity in\n\n  let y = ref (mul_s (randn Float64 [| n; 2 |]) 1e-4) in\n  let vel = ref (zeros Float64 [| n; 2 |]) in\n  let mask = off_diag n in\n\n  for iter = 1 to max_iter do\n    let y_diff = sub (expand_dims [ 1 ] !y) (expand_dims [ 0 ] !y) in\n    let y_dsq = sum ~axes:[ 2 ] (square y_diff) in\n    let inv_d = mul (div (scalar Float64 1.0) (add_s y_dsq 1.0)) mask in\n    let q_sum = Float.max (item [] (sum inv_d)) 1e-30 in\n    let q = clamp ~min:1e-12 (div_s inv_d q_sum) in\n\n    let p_eff = if iter <= 100 then mul_s p 4.0 else p in\n\n    (* Gradient: 4 sum_j (p_ij - q_ij)(y_i - y_j)(1+||y_i-y_j||^2)^{-1} *)\n    let mult = mul (sub p_eff q) inv_d in\n    let grad =\n      mul_s (sum ~axes:[ 1 ] (mul (expand_dims [ 2 ] mult) y_diff)) 4.0\n    in\n\n    let momentum = if iter <= 100 then 0.5 else 0.8 in\n    vel := sub (mul_s !vel momentum) (mul_s grad lr);\n    y := add !y !vel;\n\n    if iter = 1 || iter mod 100 = 0 then begin\n      let kl = item [] (sum (mul p (log (div p q)))) in\n      Printf.printf \"  iter %4d  KL = %.4f\\n\" iter kl\n    end\n  done;\n\n  Printf.printf \"\\nPer-cluster spread (mean std of embedded coordinates):\\n\";\n  for c = 0 to 2 do\n    let lo = c * n_per in\n    let cluster = slice [ R (lo, lo + n_per); A ] !y in\n    let sx = item [] (mean (std ~axes:[ 0 ] cluster)) in\n    Printf.printf \"  Cluster %d: %.4f\\n\" c 
sx\n  done\n"
  },
  {
    "path": "packages/nx/examples/README.md",
    "content": "# Nx Examples\n\nTen standalone examples that teach Nx from the ground up. Each builds on the\nprevious one, progressing from array creation to machine learning.\n\n## Examples\n\n| #   | Example                                                      | What You'll Learn                                                    |\n| --- | ------------------------------------------------------------ | -------------------------------------------------------------------- |\n| 01  | [Creating Arrays](01-creating-arrays/)                       | `zeros`, `ones`, `arange`, `linspace`, `init`, `meshgrid`, dtypes    |\n| 02  | [Infix and Arithmetic](02-infix-and-arithmetic/)             | `Nx.Infix` operators (`+`, `*$`, `/`), `abs`, `sqrt`, `exp`, `clamp` |\n| 03  | [Indexing and Slicing](03-indexing-and-slicing/)             | `I`, `R`, `Rs`, `A`, `.${[...]}`, `compress`, `where`, `take`        |\n| 04  | [Reshaping and Broadcasting](04-reshaping-and-broadcasting/) | `reshape`, `flatten`, `transpose`, `vstack`, broadcasting rules      |\n| 05  | [Reductions and Statistics](05-reductions-and-statistics/)   | `mean`, `std`, `argmax`, `cumsum`, `all`, `any`, axis parameter      |\n| 06  | [Random Numbers](06-random-numbers/)                         | `Rng.run`, `Rng.uniform`, `Rng.normal`, `Rng.shuffle`, Monte Carlo   |\n| 07  | [Linear Algebra](07-linear-algebra/)                         | `@@`, `/@`, `inv`, `det`, `lstsq`, `eigh`, `svd`                     |\n| 08  | [Signal Processing](08-signal-processing/)                   | `rfft`, `irfft`, `rfftfreq`, frequency filtering                     |\n| 09  | [Image Processing](09-image-processing/)                     | `correlate2d`, `max_pool2d`, Sobel edges, `Nx_io.save_image`         |\n| 10  | [Machine Learning](10-machine-learning/)                     | PCA, K-Means, DBSCAN, t-SNE from Nx primitives                       |\n\n## Running\n\nFrom the repository root:\n\n```bash\ndune exec 
nx/examples/01-creating-arrays/main.exe\ndune exec nx/examples/02-infix-and-arithmetic/main.exe\n# ... and so on through 10\n```\n\n## Dependencies\n\n- Examples 01-08 and 10 use only `nx`\n- Example 09 adds `nx.io` (image I/O)\n"
  },
  {
    "path": "packages/nx/lib/.ocamlformat-ignore",
    "content": "prelude.ml"
  },
  {
    "path": "packages/nx/lib/backend/dune",
    "content": "(library\n (name nx_backend)\n (public_name nx.backend)\n (virtual_modules nx_backend)\n (default_implementation nx_c)\n (libraries nx_core nx_buffer))\n"
  },
  {
    "path": "packages/nx/lib/backend/nx_backend.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\ninclude Nx_core.Backend_intf.S\n\n(* TODO: [create_context : unit -> context] won't work for backends that need\n   parameters (e.g. GPU device index, memory limits). We'll likely need\n   [create_context : ?config:config -> unit -> context] or similar, with a\n   default config for each backend. *)\nval create_context : unit -> context\n"
  },
  {
    "path": "packages/nx/lib/backend_c/config/discover.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nmodule C = Configurator.V1\n\nlet test_blas =\n  {|\n#include <cblas.h>\n\nint main() {\n  float a[6] = {1.0f, 2.0f, 3.0f, 4.0f, 5.0f, 6.0f};\n  float b[6] = {7.0f, 8.0f, 9.0f, 10.0f, 11.0f, 12.0f};\n  float c[9] = {0.0f};\n  cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans, 3, 3, 2, 1.0f, a, 2,\n              b, 3, 0.0f, c, 3);\n  return (int)c[0];\n}\n|}\n\nlet test_lapacke =\n  {|\n#include <lapacke.h>\n\nint main() {\n  double a[4] = {4.0, 1.0, 1.0, 3.0};\n  lapack_int info = LAPACKE_dpotrf(LAPACK_ROW_MAJOR, 'L', 2, a, 2);\n  return (int)info;\n}\n|}\n\nlet openblas_default system : C.Pkg_config.package_conf =\n  match system with\n  | \"mingw\" | \"mingw64\" ->\n      (* On cygwin, discover.exe is a mingw binary whose Sys.file_exists cannot\n         resolve cygwin paths. The C compiler is a cygwin binary and handles\n         them fine, so pass the known sysroot paths directly. 
*)\n      let triplet =\n        if system = \"mingw64\" then \"x86_64-w64-mingw32\" else \"i686-w64-mingw32\"\n      in\n      let prefix = \"/usr/\" ^ triplet ^ \"/sys-root/mingw\" in\n      {\n        C.Pkg_config.cflags = [ \"-I\" ^ prefix ^ \"/include\" ];\n        libs = [ \"-L\" ^ prefix ^ \"/lib\"; \"-lcblas\" ];\n      }\n  | _ ->\n      let search =\n        [ \"/usr/local/opt/openblas/lib\"; \"/opt/OpenBLAS/lib/\"; \"/usr/lib\" ]\n        |> List.filter Sys.file_exists\n      in\n      let libs = List.map (fun path -> \"-L\" ^ path) search @ [ \"-lopenblas\" ] in\n      let include_dirs =\n        [ \"/usr/include/openblas\" ] |> List.filter Sys.file_exists\n      in\n      let cflags = List.map (fun path -> \"-I\" ^ path) include_dirs in\n      { C.Pkg_config.cflags; libs }\n\nlet default_ldlibs = [ \"-lm\" ]\n\nlet string_contains ~needle haystack =\n  let h_len = String.length haystack in\n  let n_len = String.length needle in\n  let rec aux i =\n    if i + n_len > h_len then false\n    else if String.sub haystack i n_len = needle then true\n    else aux (i + 1)\n  in\n  if n_len = 0 then true else aux 0\n\nlet list_find_map f lst =\n  let rec aux = function\n    | [] -> None\n    | x :: xs -> ( match f x with None -> aux xs | some -> some)\n  in\n  aux lst\n\nlet libomp_paths c =\n  let env = Sys.getenv_opt \"LIBOMP_PREFIX\" in\n  let brew_prefix =\n    if C.Process.run_ok c \"brew\" [ \"--prefix\"; \"libomp\" ] then\n      Some\n        (C.Process.run_capture_exn c \"brew\" [ \"--prefix\"; \"libomp\" ]\n        |> String.trim)\n    else None\n  in\n  let candidates =\n    List.filter_map\n      (fun x -> x)\n      [\n        env;\n        brew_prefix;\n        Some \"/opt/homebrew/opt/libomp\";\n        Some \"/usr/local/opt/libomp\";\n      ]\n  in\n  list_find_map\n    (fun prefix ->\n      let include_dir = Filename.concat prefix \"include\" in\n      let lib_dir = Filename.concat prefix \"lib\" in\n      let header = Filename.concat include_dir 
\"omp.h\" in\n      if Sys.file_exists header then\n        Some ([ \"-I\" ^ include_dir ], [ \"-L\" ^ lib_dir ])\n      else None)\n    candidates\n\nlet compiler_is_clang c =\n  let compiler =\n    Sys.getenv_opt \"CC\"\n    |> Option.value\n         ~default:\n           (match C.ocaml_config_var c \"c_compiler\" with\n           | Some cc when String.trim cc <> \"\" -> cc\n           | _ -> \"cc\")\n  in\n  if C.Process.run_ok c compiler [ \"--version\" ] then\n    let version =\n      C.Process.run_capture_exn c compiler [ \"--version\" ]\n      |> String.lowercase_ascii\n    in\n    string_contains ~needle:\"clang\" version\n  else false\n\nlet detect_openmp c system base_flags =\n  let test c_flags link_flags =\n    C.c_test c ~c_flags ~link_flags\n      \"#include <omp.h>\\nint main(){return omp_get_num_threads();}\"\n  in\n  match system with\n  | \"macosx\" ->\n      if compiler_is_clang c then\n        let include_flags, lib_dir_flags =\n          match libomp_paths c with Some paths -> paths | None -> ([], [])\n        in\n        let openmp_flags = include_flags @ [ \"-Xpreprocessor\"; \"-fopenmp\" ] in\n        let openmp_libs = lib_dir_flags @ [ \"-lomp\" ] in\n        if test openmp_flags openmp_libs then\n          (base_flags @ openmp_flags, openmp_libs)\n        else (base_flags, [])\n      else if test [ \"-fopenmp\" ] [ \"-fopenmp\" ] then\n        (base_flags @ [ \"-fopenmp\" ], [ \"-fopenmp\" ])\n      else (base_flags, [])\n  | \"linux\" | \"linux_elf\" | \"mingw\" | \"mingw64\" | \"cygwin\" ->\n      if test [ \"-fopenmp\" ] [ \"-fopenmp\" ] then\n        (base_flags @ [ \"-fopenmp\" ], [ \"-fopenmp\" ])\n      else (base_flags, [])\n  | _ ->\n      if test [ \"-fopenmp\" ] [ \"-fopenmp\" ] then\n        (base_flags @ [ \"-fopenmp\" ], [ \"-fopenmp\" ])\n      else (base_flags, [])\n\nlet pkg_query c package =\n  match C.Pkg_config.get c with\n  | None -> None\n  | Some pc -> C.Pkg_config.query pc ~package\n\nlet ensure_lapacke c c_flags 
libs pkg_query_fn =\n  if C.c_test c test_lapacke ~c_flags ~link_flags:libs then (libs, c_flags)\n  else\n    let lapacke_conf =\n      match pkg_query_fn \"llapacke\" with\n      | Some conf -> conf\n      | None -> (\n          match pkg_query_fn \"lapacke\" with\n          | Some conf -> conf\n          | None -> { C.Pkg_config.cflags = []; libs = [ \"-llapacke\" ] })\n    in\n    let libs = lapacke_conf.libs @ libs in\n    let c_flags = lapacke_conf.cflags @ c_flags in\n    if C.c_test c test_lapacke ~c_flags ~link_flags:libs then (libs, c_flags)\n    else (\n      Printf.printf\n        {|\nUnable to link against LAPACKE even after adding (%s) to the link flags.\nVerify that a LAPACKE implementation is installed and visible to the\nbuild system (consider installing lapacke or setting PKG_CONFIG_PATH).\n|}\n        (String.concat \" \" lapacke_conf.libs);\n      failwith \"Unable to link against lapacke.\")\n\nlet () =\n  C.main ~name:\"nx_c\" (fun c ->\n      let system = C.ocaml_config_var_exn c \"system\" in\n      let architecture = C.ocaml_config_var_exn c \"architecture\" in\n      let word_size = C.ocaml_config_var_exn c \"word_size\" in\n\n      let base_flags =\n        let opt_flags =\n          match architecture with\n          | \"amd64\" | \"x86_64\" -> [ \"-O3\"; \"-march=native\"; \"-fPIC\" ]\n          | \"arm64\" | \"aarch64\" -> [ \"-O3\"; \"-mcpu=native\"; \"-fPIC\" ]\n          | \"power\" | \"ppc\" | \"ppc64\" | \"ppc64le\" ->\n              [ \"-O3\"; \"-mcpu=native\"; \"-fPIC\" ]\n          | \"riscv32\" -> [ \"-O3\"; \"-march=rv32gc\"; \"-fPIC\" ]\n          | \"riscv64\" -> [ \"-O3\"; \"-march=rv64gc\"; \"-fPIC\" ]\n          | \"riscv\" ->\n              if word_size = \"64\" then [ \"-O3\"; \"-march=rv64gc\"; \"-fPIC\" ]\n              else [ \"-O3\"; \"-march=rv32gc\"; \"-fPIC\" ]\n          | \"s390x\" -> [ \"-O3\"; \"-march=native\"; \"-fPIC\" ]\n          | _ -> [ \"-O3\"; \"-fPIC\" ]\n        in\n        (* Suppress 
vectorization failure warnings from clang *)\n        if compiler_is_clang c then opt_flags @ [ \"-Wno-pass-failed\" ]\n        else opt_flags\n      in\n\n      let opt_flags =\n        match system with\n        | \"macosx\" -> List.filter (fun flag -> flag <> \"-fPIC\") base_flags\n        | _ -> base_flags\n      in\n\n      let opt_flags, openmp_libs = detect_openmp c system opt_flags in\n\n      let openblas_conf =\n        match pkg_query c \"openblas\" with\n        | Some conf -> conf\n        | None -> (\n            match pkg_query c \"cblas\" with\n            | Some conf -> conf\n            | None -> openblas_default system)\n      in\n\n      let filter_openmp_flags flags =\n        let rec loop acc = function\n          | [] -> List.rev acc\n          | \"-Xpreprocessor\" :: \"-fopenmp\" :: rest -> loop acc rest\n          | \"-fopenmp\" :: rest -> loop acc rest\n          | flag :: rest -> loop (flag :: acc) rest\n        in\n        loop [] flags\n      in\n      let openblas_cflags = filter_openmp_flags openblas_conf.cflags in\n      let openblas_libs = filter_openmp_flags openblas_conf.libs in\n      let c_flags = opt_flags @ openblas_cflags in\n      let libs =\n        (if system = \"macosx\" then [ \"-framework\"; \"Accelerate\" ] else [])\n        @ openblas_libs @ openmp_libs @ default_ldlibs\n      in\n\n      if not (C.c_test c test_blas ~c_flags ~link_flags:libs) then (\n        Printf.printf\n          {|\nUnable to link against OpenBLAS: the current values for cflags and libs\nare respectively (%s) and (%s).\nCheck that OpenBLAS is installed and, if necessary, extend PKG_CONFIG_PATH\nwith the directory containing openblas.pc.\n|}\n          (String.concat \" \" openblas_cflags)\n          (String.concat \" \" openblas_libs);\n        failwith \"Unable to link against openblas.\");\n\n      let libs, c_flags = ensure_lapacke c c_flags libs (pkg_query c) in\n\n      C.Flags.write_sexp \"c_flags.sexp\" c_flags;\n      C.Flags.write_sexp 
\"c_library_flags.sexp\" libs)\n"
  },
  {
    "path": "packages/nx/lib/backend_c/config/dune",
    "content": "(executable\n (name discover)\n (libraries dune-configurator))\n"
  },
  {
    "path": "packages/nx/lib/backend_c/dune",
    "content": "(library\n (name nx_c)\n (public_name nx.c)\n (implements nx.backend)\n (libraries nx_buffer nx_core nx.pocketfft)\n (foreign_stubs\n  (language c)\n  (names\n   nx_c_binary\n   nx_c_unary\n   nx_c_reduce\n   nx_c_sort\n   nx_c_scan\n   nx_c_ternary\n   nx_c_cast\n   nx_c_memory\n   nx_c_index\n   nx_c_random\n   nx_c_shape\n   nx_c_window\n   nx_c_matmul\n   nx_c_cholesky\n   nx_c_qr\n   nx_c_eig\n   nx_c_solve\n   nx_c_svd)\n  (flags\n   :standard\n   (:include c_flags.sexp)))\n (c_library_flags\n  :standard\n  (:include c_library_flags.sexp)))\n\n(rule\n (targets c_library_flags.sexp c_flags.sexp)\n (action\n  (run config/discover.exe)))\n"
  },
  {
    "path": "packages/nx/lib/backend_c/nx_backend.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Nx_core\n\nlet err op fmt = Printf.ksprintf (fun msg -> invalid_arg (op ^ \": \" ^ msg)) fmt\n\ntype ('a, 'b) buffer = ('a, 'b) Nx_buffer.t\ntype context = unit\n\nlet create_context () = ()\n\ntype ('a, 'b) t = {\n  context : context;\n  dtype : ('a, 'b) Dtype.t;\n  buffer : ('a, 'b) buffer;\n  view : View.t;\n}\n\n(* We define an FFI tensor type for easy access to the view fields in C.\n\n   XXX: probably more efficient to inline those in our [t] type and have the\n   view function create a view when called. *)\ntype ('a, 'b) ffi_tensor = {\n  data : ('a, 'b) buffer;\n  shape : int array;\n  strides : int array;\n  offset : int;\n}\n[@@warning \"-69\"]\n\n(* ───── External FFI Declarations ───── *)\n\nexternal caml_add :\n  ('a, 'b) ffi_tensor -> ('a, 'b) ffi_tensor -> ('a, 'b) ffi_tensor -> unit\n  = \"caml_nx_add\"\n\nexternal caml_mul :\n  ('a, 'b) ffi_tensor -> ('a, 'b) ffi_tensor -> ('a, 'b) ffi_tensor -> unit\n  = \"caml_nx_mul\"\n\nexternal caml_idiv :\n  ('a, 'b) ffi_tensor -> ('a, 'b) ffi_tensor -> ('a, 'b) ffi_tensor -> unit\n  = \"caml_nx_idiv\"\n\nexternal caml_fdiv :\n  ('a, 'b) ffi_tensor -> ('a, 'b) ffi_tensor -> ('a, 'b) ffi_tensor -> unit\n  = \"caml_nx_fdiv\"\n\nexternal caml_max :\n  ('a, 'b) ffi_tensor -> ('a, 'b) ffi_tensor -> ('a, 'b) ffi_tensor -> unit\n  = \"caml_nx_max\"\n\nexternal caml_min :\n  ('a, 'b) ffi_tensor -> ('a, 'b) ffi_tensor -> ('a, 'b) ffi_tensor -> unit\n  = \"caml_nx_min\"\n\nexternal caml_sub :\n  ('a, 'b) ffi_tensor -> ('a, 'b) ffi_tensor -> ('a, 'b) ffi_tensor -> unit\n  = \"caml_nx_sub\"\n\nexternal caml_mod :\n  ('a, 'b) ffi_tensor -> ('a, 'b) ffi_tensor -> ('a, 'b) ffi_tensor -> unit\n  = \"caml_nx_mod\"\n\nexternal caml_pow :\n  ('a, 'b) 
ffi_tensor -> ('a, 'b) ffi_tensor -> ('a, 'b) ffi_tensor -> unit\n  = \"caml_nx_pow\"\n\nexternal caml_cmpeq :\n  ('a, 'b) ffi_tensor ->\n  ('a, 'b) ffi_tensor ->\n  (bool, Dtype.bool_elt) ffi_tensor ->\n  unit = \"caml_nx_cmpeq\"\n\nexternal caml_cmpne :\n  ('a, 'b) ffi_tensor ->\n  ('a, 'b) ffi_tensor ->\n  (bool, Dtype.bool_elt) ffi_tensor ->\n  unit = \"caml_nx_cmpne\"\n\nexternal caml_cmplt :\n  ('a, 'b) ffi_tensor ->\n  ('a, 'b) ffi_tensor ->\n  (bool, Dtype.bool_elt) ffi_tensor ->\n  unit = \"caml_nx_cmplt\"\n\nexternal caml_cmple :\n  ('a, 'b) ffi_tensor ->\n  ('a, 'b) ffi_tensor ->\n  (bool, Dtype.bool_elt) ffi_tensor ->\n  unit = \"caml_nx_cmple\"\n\nexternal caml_xor :\n  ('a, 'b) ffi_tensor -> ('a, 'b) ffi_tensor -> ('a, 'b) ffi_tensor -> unit\n  = \"caml_nx_xor\"\n\nexternal caml_or :\n  ('a, 'b) ffi_tensor -> ('a, 'b) ffi_tensor -> ('a, 'b) ffi_tensor -> unit\n  = \"caml_nx_or\"\n\nexternal caml_and :\n  ('a, 'b) ffi_tensor -> ('a, 'b) ffi_tensor -> ('a, 'b) ffi_tensor -> unit\n  = \"caml_nx_and\"\n\nexternal caml_atan2 :\n  ('a, 'b) ffi_tensor -> ('a, 'b) ffi_tensor -> ('a, 'b) ffi_tensor -> unit\n  = \"caml_nx_atan2\"\n\n(* ───── Unary Operation FFI Declarations ───── *)\n\nexternal caml_neg : ('a, 'b) ffi_tensor -> ('a, 'b) ffi_tensor -> unit\n  = \"caml_nx_neg\"\n\nexternal caml_sin : ('a, 'b) ffi_tensor -> ('a, 'b) ffi_tensor -> unit\n  = \"caml_nx_sin\"\n\nexternal caml_cos : ('a, 'b) ffi_tensor -> ('a, 'b) ffi_tensor -> unit\n  = \"caml_nx_cos\"\n\nexternal caml_sqrt : ('a, 'b) ffi_tensor -> ('a, 'b) ffi_tensor -> unit\n  = \"caml_nx_sqrt\"\n\nexternal caml_abs : ('a, 'b) ffi_tensor -> ('a, 'b) ffi_tensor -> unit\n  = \"caml_nx_abs\"\n\nexternal caml_log : ('a, 'b) ffi_tensor -> ('a, 'b) ffi_tensor -> unit\n  = \"caml_nx_log\"\n\nexternal caml_exp : ('a, 'b) ffi_tensor -> ('a, 'b) ffi_tensor -> unit\n  = \"caml_nx_exp\"\n\nexternal caml_recip : ('a, 'b) ffi_tensor -> ('a, 'b) ffi_tensor -> unit\n  = \"caml_nx_recip\"\n\nexternal caml_sign : 
('a, 'b) ffi_tensor -> ('a, 'b) ffi_tensor -> unit\n  = \"caml_nx_sign\"\n\nexternal caml_tan : ('a, 'b) ffi_tensor -> ('a, 'b) ffi_tensor -> unit\n  = \"caml_nx_tan\"\n\nexternal caml_asin : ('a, 'b) ffi_tensor -> ('a, 'b) ffi_tensor -> unit\n  = \"caml_nx_asin\"\n\nexternal caml_acos : ('a, 'b) ffi_tensor -> ('a, 'b) ffi_tensor -> unit\n  = \"caml_nx_acos\"\n\nexternal caml_atan : ('a, 'b) ffi_tensor -> ('a, 'b) ffi_tensor -> unit\n  = \"caml_nx_atan\"\n\nexternal caml_sinh : ('a, 'b) ffi_tensor -> ('a, 'b) ffi_tensor -> unit\n  = \"caml_nx_sinh\"\n\nexternal caml_cosh : ('a, 'b) ffi_tensor -> ('a, 'b) ffi_tensor -> unit\n  = \"caml_nx_cosh\"\n\nexternal caml_tanh : ('a, 'b) ffi_tensor -> ('a, 'b) ffi_tensor -> unit\n  = \"caml_nx_tanh\"\n\nexternal caml_trunc : ('a, 'b) ffi_tensor -> ('a, 'b) ffi_tensor -> unit\n  = \"caml_nx_trunc\"\n\nexternal caml_ceil : ('a, 'b) ffi_tensor -> ('a, 'b) ffi_tensor -> unit\n  = \"caml_nx_ceil\"\n\nexternal caml_floor : ('a, 'b) ffi_tensor -> ('a, 'b) ffi_tensor -> unit\n  = \"caml_nx_floor\"\n\nexternal caml_round : ('a, 'b) ffi_tensor -> ('a, 'b) ffi_tensor -> unit\n  = \"caml_nx_round\"\n\nexternal caml_erf : ('a, 'b) ffi_tensor -> ('a, 'b) ffi_tensor -> unit\n  = \"caml_nx_erf\"\n\n(* ───── Ternary Operation FFI Declarations ───── *)\n\nexternal caml_where :\n  (bool, Dtype.bool_elt) ffi_tensor ->\n  ('a, 'b) ffi_tensor ->\n  ('a, 'b) ffi_tensor ->\n  ('a, 'b) ffi_tensor ->\n  unit = \"caml_nx_where\"\n\n(* ───── Reduction Operation FFI Declarations ───── *)\n\nexternal caml_reduce_sum :\n  ('a, 'b) ffi_tensor -> ('a, 'b) ffi_tensor -> int array -> bool -> unit\n  = \"caml_nx_reduce_sum\"\n\nexternal caml_reduce_max :\n  ('a, 'b) ffi_tensor -> ('a, 'b) ffi_tensor -> int array -> bool -> unit\n  = \"caml_nx_reduce_max\"\n\nexternal caml_reduce_prod :\n  ('a, 'b) ffi_tensor -> ('a, 'b) ffi_tensor -> int array -> bool -> unit\n  = \"caml_nx_reduce_prod\"\n\nexternal caml_reduce_min :\n  ('a, 'b) ffi_tensor -> ('a, 'b) 
ffi_tensor -> int array -> bool -> unit\n  = \"caml_nx_reduce_min\"\n\nexternal caml_associative_scan :\n  ('a, 'b) ffi_tensor -> ('a, 'b) ffi_tensor -> int -> int -> unit\n  = \"caml_nx_associative_scan\"\n\nexternal caml_argmax :\n  ('a, 'b) ffi_tensor ->\n  (int32, Dtype.int32_elt) ffi_tensor ->\n  int ->\n  bool ->\n  unit = \"caml_nx_argmax\"\n\nexternal caml_argmin :\n  ('a, 'b) ffi_tensor ->\n  (int32, Dtype.int32_elt) ffi_tensor ->\n  int ->\n  bool ->\n  unit = \"caml_nx_argmin\"\n\nexternal caml_sort :\n  ('a, 'b) ffi_tensor -> ('a, 'b) ffi_tensor -> int -> bool -> unit\n  = \"caml_nx_sort\"\n\nexternal caml_argsort :\n  ('a, 'b) ffi_tensor ->\n  (int32, Dtype.int32_elt) ffi_tensor ->\n  int ->\n  bool ->\n  unit = \"caml_nx_argsort\"\n\n(* Cast operation FFI declaration *)\nexternal caml_cast : ('a, 'b) ffi_tensor -> ('c, 'd) ffi_tensor -> unit\n  = \"caml_nx_cast\"\n\n(* ───── Memory Operation FFI Declarations ───── *)\n\nexternal caml_copy : ('a, 'b) ffi_tensor -> ('a, 'b) ffi_tensor = \"caml_nx_copy\"\n\nexternal caml_contiguous : ('a, 'b) ffi_tensor -> ('a, 'b) ffi_tensor\n  = \"caml_nx_contiguous\"\n\nexternal caml_assign : ('a, 'b) ffi_tensor -> ('a, 'b) ffi_tensor -> unit\n  = \"caml_nx_assign\"\n\n(* ───── Index Operation FFI Declarations ───── *)\n\nexternal caml_gather :\n  ('a, 'b) ffi_tensor ->\n  (int32, Dtype.int32_elt) ffi_tensor ->\n  ('a, 'b) ffi_tensor ->\n  int ->\n  unit = \"caml_nx_op_gather\"\n\nexternal caml_scatter :\n  ('a, 'b) ffi_tensor ->\n  (int32, Dtype.int32_elt) ffi_tensor ->\n  ('a, 'b) ffi_tensor ->\n  int ->\n  ('a, 'b) ffi_tensor ->\n  int ->\n  bool ->\n  unit = \"caml_nx_op_scatter_bc\" \"caml_nx_op_scatter\"\n\n(* ───── Linear Algebra Operation FFI Declarations ───── *)\n\nexternal caml_cholesky :\n  ('a, 'b) ffi_tensor -> ('a, 'b) ffi_tensor -> bool -> unit\n  = \"caml_nx_op_cholesky\"\n\nexternal caml_matmul :\n  ('a, 'b) ffi_tensor -> ('a, 'b) ffi_tensor -> ('a, 'b) ffi_tensor -> unit\n  = 
\"caml_nx_matmul\"\n\nexternal caml_triangular_solve :\n  ('a, 'b) ffi_tensor ->\n  ('a, 'b) ffi_tensor ->\n  ('a, 'b) ffi_tensor ->\n  bool ->\n  bool ->\n  bool ->\n  unit = \"caml_nx_op_triangular_solve_bc\" \"caml_nx_op_triangular_solve\"\n\nexternal caml_qr :\n  ('a, 'b) ffi_tensor ->\n  ('a, 'b) ffi_tensor ->\n  ('a, 'b) ffi_tensor ->\n  bool ->\n  unit = \"caml_nx_op_qr\"\n\nexternal caml_eig :\n  ('a, 'b) ffi_tensor ->\n  ('c, 'd) ffi_tensor ->\n  ('e, 'f) ffi_tensor ->\n  bool ->\n  bool ->\n  unit = \"caml_nx_op_eig\"\n\nexternal caml_svd :\n  ('a, 'b) ffi_tensor ->\n  ('a, 'b) ffi_tensor ->\n  ('c, 'd) ffi_tensor ->\n  ('a, 'b) ffi_tensor ->\n  bool ->\n  unit = \"caml_nx_op_svd\"\n\n(* ───── Shape Operation FFI Declarations ───── *)\n\nexternal caml_cat :\n  ('a, 'b) ffi_tensor list -> int -> ('a, 'b) ffi_tensor -> unit = \"caml_nx_cat\"\n\nexternal caml_pad :\n  ('a, 'b) ffi_tensor -> int array -> 'a -> ('a, 'b) ffi_tensor -> unit\n  = \"caml_nx_pad\"\n\n(* ───── Window Operation FFI Declarations ───── *)\n\nexternal caml_unfold :\n  ('a, 'b) ffi_tensor ->\n  int array ->\n  int array ->\n  int array ->\n  int array ->\n  ('a, 'b) ffi_tensor ->\n  unit = \"caml_nx_op_unfold_bc\" \"caml_nx_op_unfold\"\n\nexternal caml_fold :\n  ('a, 'b) ffi_tensor ->\n  int array ->\n  int array ->\n  int array ->\n  int array ->\n  int array ->\n  ('a, 'b) ffi_tensor ->\n  unit = \"caml_nx_op_fold_bc\" \"caml_nx_op_fold\"\n\n(* ───── Random Operation FFI Declarations ───── *)\n\nexternal caml_threefry :\n  (int32, Dtype.int32_elt) ffi_tensor ->\n  (int32, Dtype.int32_elt) ffi_tensor ->\n  (int32, Dtype.int32_elt) ffi_tensor ->\n  unit = \"caml_nx_threefry\"\n\n(* ───── Helper Functions ───── *)\n\nlet view t = t.view\nlet dtype t = t.dtype\nlet to_host t = t.buffer\nlet context t = t.context\nlet shape t = View.shape t.view\n\nlet strides t =\n  match View.strides_opt t.view with\n  | Some s -> s\n  | None -> invalid_arg \"strides: cannot get strides for view\"\n\nlet 
offset t = View.offset t.view\nlet is_contiguous t = View.is_c_contiguous t.view\n\n(* Check if a tensor can be efficiently operated on *)\nlet can_get_strides t = View.can_get_strides t.view\n\n(* Convert tensor to FFI representation if possible *)\nlet to_ffi_tensor t =\n  if not (can_get_strides t) then\n    invalid_arg \"to_ffi_tensor: tensor has non-materializable view\"\n  else\n    { data = t.buffer; shape = shape t; strides = strides t; offset = offset t }\n\n(* Create a new tensor with given shape *)\nlet create_tensor ctx dtype shape_arr =\n  let size = Array.fold_left ( * ) 1 shape_arr in\n  let kind = Dtype.to_buffer_kind dtype in\n  let buffer = Nx_buffer.create kind size in\n  let view = View.create shape_arr in\n  { context = ctx; dtype; buffer; view }\n\nlet buffer ctx dtype shape_arr =\n  let kind = Dtype.to_buffer_kind dtype in\n  let size = Array.fold_left ( * ) 1 shape_arr in\n  let buffer = Nx_buffer.create kind size in\n  let view = View.create shape_arr in\n  { context = ctx; dtype; buffer; view }\n\nlet full ctx dtype shape_arr value =\n  let t = buffer ctx dtype shape_arr in\n  Nx_buffer.fill t.buffer value;\n  t\n\n(* Materialize a tensor to contiguous layout if needed *)\nlet materialize t =\n  (* Check if it has broadcast dimensions (zero strides) *)\n  let strides_arr = strides t in\n  let has_broadcast = Array.exists (( = ) 0) strides_arr in\n\n  if is_contiguous t && offset t = 0 && not has_broadcast then t\n  else\n    (* Create a contiguous copy *)\n    let out_shape = shape t in\n    let out = create_tensor t.context t.dtype out_shape in\n    let t_ffi = to_ffi_tensor t in\n    let out_ffi = to_ffi_tensor out in\n    caml_assign t_ffi out_ffi;\n    out\n\n(* Ensure tensor is materializable for C operations *)\nlet ensure_materializable t =\n  if not (can_get_strides t) then\n    (* Broadcast views or complex chains need materialization *)\n    materialize t\n  else\n    (* Check for zero strides (broadcast dimensions) *)\n    let 
strides_arr = strides t in\n    if Array.exists (( = ) 0) strides_arr then\n      (* Has broadcast dimensions - need to materialize *)\n      materialize t\n    else t\n\n(* Generic binary operation - allocates output and returns it *)\nlet binary_op op_name ffi_op x y =\n  let x_shape = shape x in\n  let y_shape = shape y in\n  if x_shape <> y_shape then\n    err op_name \"shape mismatch: x %s, y %s\" (Shape.to_string x_shape)\n      (Shape.to_string y_shape)\n  else\n    let x' = ensure_materializable x in\n    let y' = ensure_materializable y in\n    let out = buffer () (dtype x) x_shape in\n    let x_ffi = to_ffi_tensor x' in\n    let y_ffi = to_ffi_tensor y' in\n    let out_ffi = to_ffi_tensor out in\n    ffi_op x_ffi y_ffi out_ffi;\n    out\n\n(* Comparison operation - allocates bool output and returns it *)\nlet comparison_op op_name ffi_op x y =\n  let x_shape = shape x in\n  let y_shape = shape y in\n  if x_shape <> y_shape then\n    err op_name \"shape mismatch: x %s, y %s\" (Shape.to_string x_shape)\n      (Shape.to_string y_shape)\n  else\n    let x' = ensure_materializable x in\n    let y' = ensure_materializable y in\n    let out = buffer () Dtype.Bool x_shape in\n    let x_ffi = to_ffi_tensor x' in\n    let y_ffi = to_ffi_tensor y' in\n    let out_ffi = to_ffi_tensor out in\n    ffi_op x_ffi y_ffi out_ffi;\n    out\n\n(* ───── Buffer Allocation ───── *)\n\nlet from_host ctx array =\n  let dtype = Dtype.of_buffer_kind (Nx_buffer.kind array) in\n  let size = Nx_buffer.length array in\n  (* Create a view for the 1D array *)\n  let view = View.create [| size |] in\n  (* Note: We're sharing the buffer directly, assuming it's contiguous *)\n  { context = ctx; dtype; buffer = array; view }\n\n(* Generic unary operation - allocates output and returns it *)\nlet unary_op _op_name ffi_op x =\n  let x' = ensure_materializable x in\n  let out = buffer () (dtype x) (shape x) in\n  let x_ffi = to_ffi_tensor x' in\n  let out_ffi = to_ffi_tensor out in\n  ffi_op 
x_ffi out_ffi;\n  out\n\n(* ───── Binary Operations ───── *)\n\nlet add x y = binary_op \"add\" caml_add x y\nlet sub x y = binary_op \"sub\" caml_sub x y\nlet mul x y = binary_op \"mul\" caml_mul x y\nlet max x y = binary_op \"max\" caml_max x y\nlet min x y = binary_op \"min\" caml_min x y\nlet mod_ x y = binary_op \"mod\" caml_mod x y\nlet pow x y = binary_op \"pow\" caml_pow x y\nlet xor x y = binary_op \"xor\" caml_xor x y\nlet or_ x y = binary_op \"or\" caml_or x y\nlet and_ x y = binary_op \"and\" caml_and x y\nlet atan2 y x = binary_op \"atan2\" caml_atan2 y x\n\n(* ───── Comparison Operations ───── *)\n\nlet cmpeq x y = comparison_op \"cmpeq\" caml_cmpeq x y\nlet cmpne x y = comparison_op \"cmpne\" caml_cmpne x y\nlet cmplt x y = comparison_op \"cmplt\" caml_cmplt x y\nlet cmple x y = comparison_op \"cmple\" caml_cmple x y\n\n(* ───── Unary Operations ───── *)\n\nlet neg x = unary_op \"neg\" caml_neg x\nlet log x = unary_op \"log\" caml_log x\nlet exp x = unary_op \"exp\" caml_exp x\nlet sin x = unary_op \"sin\" caml_sin x\nlet cos x = unary_op \"cos\" caml_cos x\nlet sqrt x = unary_op \"sqrt\" caml_sqrt x\nlet abs x = unary_op \"abs\" caml_abs x\nlet recip x = unary_op \"recip\" caml_recip x\nlet sign x = unary_op \"sign\" caml_sign x\nlet tan x = unary_op \"tan\" caml_tan x\nlet asin x = unary_op \"asin\" caml_asin x\nlet acos x = unary_op \"acos\" caml_acos x\nlet atan x = unary_op \"atan\" caml_atan x\nlet sinh x = unary_op \"sinh\" caml_sinh x\nlet cosh x = unary_op \"cosh\" caml_cosh x\nlet tanh x = unary_op \"tanh\" caml_tanh x\nlet trunc x = unary_op \"trunc\" caml_trunc x\nlet ceil x = unary_op \"ceil\" caml_ceil x\nlet floor x = unary_op \"floor\" caml_floor x\nlet round x = unary_op \"round\" caml_round x\nlet erf x = unary_op \"erf\" caml_erf x\n\n(* Ternary Op - allocates output and returns it *)\nlet where cond if_true if_false =\n  let cond_shape = shape cond in\n  let if_true_shape = shape if_true in\n  let if_false_shape = shape if_false 
in\n\n  if cond_shape <> if_true_shape || if_true_shape <> if_false_shape then\n    err \"where\" \"shape mismatch: cond %s, if_true %s, if_false %s\"\n      (Shape.to_string cond_shape)\n      (Shape.to_string if_true_shape)\n      (Shape.to_string if_false_shape)\n  else\n    let cond' = ensure_materializable cond in\n    let if_true' = ensure_materializable if_true in\n    let if_false' = ensure_materializable if_false in\n    let out = buffer () (dtype if_true) if_true_shape in\n    let cond_ffi = to_ffi_tensor cond' in\n    let if_true_ffi = to_ffi_tensor if_true' in\n    let if_false_ffi = to_ffi_tensor if_false' in\n    let out_ffi = to_ffi_tensor out in\n    caml_where cond_ffi if_true_ffi if_false_ffi out_ffi;\n    out\n\n(* Reduction Ops - allocates output and returns it *)\nlet reduce_op _op_name ffi_op ~axes ~keepdims x =\n  let input_shape = shape x in\n  let ndim = Array.length input_shape in\n\n  if ndim = 0 then begin\n    let out = buffer () (dtype x) [||] in\n    Nx_buffer.set out.buffer 0 (Nx_buffer.get x.buffer 0);\n    out\n  end\n  else\n    let normalized_axes =\n      Array.map (fun ax -> if ax < 0 then ax + ndim else ax) axes\n    in\n    let out_shape =\n      Shape.reduce_output_shape input_shape normalized_axes keepdims\n    in\n    let out = buffer () (dtype x) out_shape in\n    let x' = ensure_materializable x in\n    let x_ffi = to_ffi_tensor x' in\n    let out_ffi = to_ffi_tensor out in\n    ffi_op x_ffi out_ffi normalized_axes keepdims;\n    out\n\nlet reduce_sum ~axes ~keepdims x =\n  reduce_op \"reduce_sum\" caml_reduce_sum ~axes ~keepdims x\n\nlet reduce_max ~axes ~keepdims x =\n  reduce_op \"reduce_max\" caml_reduce_max ~axes ~keepdims x\n\nlet reduce_prod ~axes ~keepdims x =\n  reduce_op \"reduce_prod\" caml_reduce_prod ~axes ~keepdims x\n\nlet reduce_min ~axes ~keepdims x =\n  reduce_op \"reduce_min\" caml_reduce_min ~axes ~keepdims x\n\nlet associative_scan ~axis ~op x =\n  let x_shape = shape x in\n  let rank = Array.length 
x_shape in\n  if rank = 0 then invalid_arg \"associative_scan: requires rank >= 1\"\n  else\n    let axis = if axis < 0 then axis + rank else axis in\n    if axis < 0 || axis >= rank then\n      err \"associative_scan\" \"axis %d out of bounds for rank %d\" axis rank\n    else\n      let x' = ensure_materializable x in\n      let out = buffer () (dtype x) x_shape in\n      let x_ffi = to_ffi_tensor x' in\n      let out_ffi = to_ffi_tensor out in\n      let op_tag =\n        match op with `Sum -> 0 | `Prod -> 1 | `Max -> 2 | `Min -> 3\n      in\n      caml_associative_scan x_ffi out_ffi axis op_tag;\n      out\n\n(* Movement Ops - These are view-only operations *)\nlet expand x shape = { x with view = View.expand x.view shape }\nlet reshape x shape = { x with view = View.reshape x.view shape }\nlet permute x axes = { x with view = View.permute x.view axes }\n\nlet pad x padding fill_value =\n  let x' = ensure_materializable x in\n\n  (* Calculate output shape *)\n  let in_shape = shape x in\n  let ndim = Array.length in_shape in\n\n  (* Convert pairs to flat array for C interface *)\n  let padding_flat =\n    Array.init (2 * ndim) (fun i ->\n        let dim = i / 2 in\n        if i mod 2 = 0 then fst padding.(dim) else snd padding.(dim))\n  in\n\n  (* Calculate output shape *)\n  let out_shape =\n    Array.init ndim (fun i ->\n        let before, after = padding.(i) in\n        in_shape.(i) + before + after)\n  in\n\n  let out = create_tensor x.context x.dtype out_shape in\n  let x_ffi = to_ffi_tensor x' in\n  let out_ffi = to_ffi_tensor out in\n  caml_pad x_ffi padding_flat fill_value out_ffi;\n  out\n\nlet shrink x bounds = { x with view = View.shrink x.view bounds }\nlet flip x axes = { x with view = View.flip x.view axes }\n\nlet cat tensors ~axis =\n  match tensors with\n  | [] -> invalid_arg \"cat: empty tensor list\"\n  | first :: _ ->\n      let tensors' = List.map ensure_materializable tensors in\n\n      (* Calculate output shape *)\n      let first_shape 
= shape first in\n      let ndim = Array.length first_shape in\n      let norm_axis = if axis < 0 then ndim + axis else axis in\n\n      (* Sum up dimensions along concatenation axis *)\n      let total_axis_size =\n        List.fold_left\n          (fun acc t ->\n            let s = shape t in\n            acc + s.(norm_axis))\n          0 tensors\n      in\n\n      let out_shape =\n        Array.mapi\n          (fun i dim -> if i = norm_axis then total_axis_size else dim)\n          first_shape\n      in\n      let out = buffer () (dtype first) out_shape in\n      let tensors_ffi = List.map to_ffi_tensor tensors' in\n      let out_ffi = to_ffi_tensor out in\n      caml_cat tensors_ffi norm_axis out_ffi;\n      out\n\n(* ───── Other Ops ───── *)\n\nlet cast (type a b c d) ~(dtype : (c, d) Dtype.t) (x : (a, b) t) : (c, d) t =\n  let x' = ensure_materializable x in\n  let out = buffer () dtype (shape x) in\n  let x_ffi = to_ffi_tensor x' in\n  let out_ffi = to_ffi_tensor out in\n  caml_cast x_ffi out_ffi;\n  out\n\nlet contiguous x =\n  (* Check if already contiguous with no offset and no broadcast dimensions *)\n  let strides_arr = strides x in\n  let has_broadcast = Array.exists (( = ) 0) strides_arr in\n  if is_contiguous x && offset x = 0 && not has_broadcast then x\n  else\n    let x' = ensure_materializable x in\n    let x_ffi = to_ffi_tensor x' in\n    let out_ffi = caml_contiguous x_ffi in\n    (* Create tensor from FFI result - it's contiguous so simple view *)\n    let view = View.create out_ffi.shape in\n    { context = x.context; dtype = x.dtype; buffer = out_ffi.data; view }\n\nlet copy x =\n  let x' = ensure_materializable x in\n  let x_ffi = to_ffi_tensor x' in\n  let out_ffi = caml_copy x_ffi in\n  (* Create tensor from FFI result - it's contiguous so simple view *)\n  let view = View.create out_ffi.shape in\n  { context = x.context; dtype = x.dtype; buffer = out_ffi.data; view }\n\nlet assign dst src =\n  let src' = ensure_materializable src in\n  
(* dst doesn't need materialization - we're writing to it *)\n  let src_ffi = to_ffi_tensor src' in\n  let dst_ffi = to_ffi_tensor dst in\n  caml_assign src_ffi dst_ffi\n\nlet threefry key counter =\n  let key' = ensure_materializable key in\n  let counter' = ensure_materializable counter in\n  let out = buffer () Dtype.Int32 (shape counter) in\n  let key_ffi = to_ffi_tensor key' in\n  let counter_ffi = to_ffi_tensor counter' in\n  let out_ffi = to_ffi_tensor out in\n  caml_threefry key_ffi counter_ffi out_ffi;\n  out\n\n(* ───── Element Access Ops ───── *)\n\nlet gather data indices ~axis =\n  (* Ensure inputs are materializable. Preserve broadcasted strides for indices\n     to enable C fast paths (e.g., memcpy row gather). *)\n  let data' = ensure_materializable data in\n  (* Do not materialize indices unless we cannot get strides *)\n  let indices' =\n    if can_get_strides indices then indices else ensure_materializable indices\n  in\n  let out = buffer () (dtype data) (shape indices) in\n  let data_ffi = to_ffi_tensor data' in\n  let indices_ffi = to_ffi_tensor indices' in\n  let out_ffi = to_ffi_tensor out in\n  caml_gather data_ffi indices_ffi out_ffi axis;\n  out\n\nlet scatter ?(mode = `Set) ?(unique_indices = false) data_template ~indices\n    ~updates ~axis =\n  (* Ensure inputs are materializable *)\n  let template' = ensure_materializable data_template in\n  let indices' = ensure_materializable indices in\n  let updates' = ensure_materializable updates in\n\n  (* Create output tensor - for Set mode, start with a copy of template *)\n  let out =\n    if mode = `Set then copy data_template (* Start with copy of template *)\n    else\n      create_tensor data_template.context data_template.dtype\n        (shape data_template)\n  in\n\n  (* Convert to FFI tensors *)\n  let template_ffi = to_ffi_tensor template' in\n  let indices_ffi = to_ffi_tensor indices' in\n  let updates_ffi = to_ffi_tensor updates' in\n  let out_ffi = to_ffi_tensor out in\n\n  (* 
Convert mode to integer: 0 for Set, 1 for Add *)\n  let mode_int = match mode with `Set -> 0 | `Add -> 1 in\n\n  (* Call FFI function *)\n  caml_scatter template_ffi indices_ffi updates_ffi axis out_ffi mode_int\n    unique_indices;\n\n  out\n\nlet unfold x ~kernel_size ~stride ~dilation ~padding =\n  let x' = ensure_materializable x in\n  let in_shape = shape x in\n  let k = Array.length kernel_size in\n  let leading_ndim = Array.length in_shape - k in\n  let leading_shape = Array.sub in_shape 0 leading_ndim in\n  let spatial_dims = Array.sub in_shape leading_ndim k in\n\n  let padding_flat =\n    Array.init\n      (Array.length padding * 2)\n      (fun i ->\n        let dim = i / 2 in\n        if i mod 2 = 0 then fst padding.(dim) else snd padding.(dim))\n  in\n\n  let out_spatial =\n    Array.init k (fun i ->\n        let pad_before, pad_after = padding.(i) in\n        let padded = spatial_dims.(i) + pad_before + pad_after in\n        let kernel_extent = (dilation.(i) * (kernel_size.(i) - 1)) + 1 in\n        let diff = padded - kernel_extent in\n        if diff < 0 then\n          invalid_arg \"unfold: kernel size larger than padded input\"\n        else (diff / stride.(i)) + 1)\n  in\n\n  let kernel_prod = Array.fold_left ( * ) 1 kernel_size in\n  let spatial_prod = Array.fold_left ( * ) 1 out_spatial in\n  let out_shape =\n    Array.concat [ leading_shape; [| kernel_prod; spatial_prod |] ]\n  in\n\n  let out = create_tensor x.context x.dtype out_shape in\n  let x_ffi = to_ffi_tensor x' in\n  let out_ffi = to_ffi_tensor out in\n  caml_unfold x_ffi kernel_size stride dilation padding_flat out_ffi;\n  out\n\nlet fold x ~output_size ~kernel_size ~stride ~dilation ~padding =\n  let x' = ensure_materializable x in\n  let in_shape = shape x in\n  let leading_ndim = Array.length in_shape - 2 in\n  let leading_shape = Array.sub in_shape 0 leading_ndim in\n\n  let padding_flat =\n    Array.init\n      (Array.length padding * 2)\n      (fun i ->\n        let dim = i / 2 
in\n        if i mod 2 = 0 then fst padding.(dim) else snd padding.(dim))\n  in\n\n  let _ =\n    Array.init (Array.length output_size) (fun i ->\n        let pad_before, pad_after = padding.(i) in\n        let padded = output_size.(i) + pad_before + pad_after in\n        let kernel_extent = (dilation.(i) * (kernel_size.(i) - 1)) + 1 in\n        let diff = padded - kernel_extent in\n        if diff < 0 then\n          invalid_arg \"fold: kernel size larger than padded output\"\n        else (diff / stride.(i)) + 1)\n  in\n\n  let out_shape = Array.concat [ leading_shape; output_size ] in\n\n  let out = create_tensor x.context x.dtype out_shape in\n  let x_ffi = to_ffi_tensor x' in\n  let out_ffi = to_ffi_tensor out in\n  caml_fold x_ffi output_size kernel_size stride dilation padding_flat out_ffi;\n  out\n\nlet matmul x y =\n  let x' = if is_contiguous x then x else ensure_materializable x in\n  let y' = if is_contiguous y then y else ensure_materializable y in\n  let x_shape = shape x in\n  let y_shape = shape y in\n  let x_ndim = Array.length x_shape in\n  let y_ndim = Array.length y_shape in\n  let m = x_shape.(x_ndim - 2) in\n  let n = y_shape.(y_ndim - 1) in\n  let max_ndim = Int.max x_ndim y_ndim in\n  let batch_nd = max_ndim - 2 in\n  let batch_dims =\n    Array.init batch_nd (fun i ->\n        let a_idx = i - (max_ndim - x_ndim) in\n        let b_idx = i - (max_ndim - y_ndim) in\n        let sa = if a_idx >= 0 then x_shape.(a_idx) else 1 in\n        let sb = if b_idx >= 0 then y_shape.(b_idx) else 1 in\n        Int.max sa sb)\n  in\n  let out_shape = Array.append batch_dims [| m; n |] in\n  let out = buffer () (dtype x) out_shape in\n  let x_ffi = to_ffi_tensor x' in\n  let y_ffi = to_ffi_tensor y' in\n  let out_ffi = to_ffi_tensor out in\n  caml_matmul x_ffi y_ffi out_ffi;\n  out\n\n(* Helper to compute contiguous strides in bytes *)\nlet contiguous_strides shape elem_size =\n  let ndim = Array.length shape in\n  if ndim = 0 then [||]\n  else\n    let 
strides = Array.make ndim 1 in\n    for i = ndim - 2 downto 0 do\n      strides.(i) <- strides.(i + 1) * shape.(i + 1)\n    done;\n    Array.map (fun s -> s * elem_size) strides\n\n(* ───── Fourier Transforms Using PocketFFT ───── *)\n\nlet fft (type a b) ?out (x : (a, b) t) ~axes : (a, b) t =\n  let x' = materialize x in\n  let out_shape = shape x' in\n  let out =\n    match out with\n    | Some o -> o\n    | None -> create_tensor x.context x.dtype out_shape\n  in\n\n  let shape_arr = out_shape in\n  let elem_size = Dtype.itemsize x.dtype in\n  let strides_in = contiguous_strides out_shape elem_size in\n  let strides_out = contiguous_strides out_shape elem_size in\n  (* Normalize negative axes *)\n  let ndim = Array.length out_shape in\n  let axes_arr = Array.map (fun ax -> if ax < 0 then ndim + ax else ax) axes in\n\n  (match (x.dtype : (a, b) Dtype.t) with\n  | Dtype.Complex64 ->\n      Pocketfft.c2c_f32 ~shape:shape_arr ~stride_in:strides_in\n        ~stride_out:strides_out ~axes:axes_arr ~forward:true ~fct:1.0\n        ~data_in:(Nx_buffer.to_bigarray1 x'.buffer)\n        ~data_out:(Nx_buffer.to_bigarray1 out.buffer)\n        ~nthreads:1\n  | Dtype.Complex128 ->\n      Pocketfft.c2c_f64 ~shape:shape_arr ~stride_in:strides_in\n        ~stride_out:strides_out ~axes:axes_arr ~forward:true ~fct:1.0\n        ~data_in:(Nx_buffer.to_bigarray1 x'.buffer)\n        ~data_out:(Nx_buffer.to_bigarray1 out.buffer)\n        ~nthreads:1\n  | _ -> invalid_arg \"fft: unsupported dtype\");\n\n  out\n\nlet ifft (type a b) ?out (x : (a, b) t) ~axes : (a, b) t =\n  let x' = materialize x in\n  let out_shape = shape x' in\n  let out =\n    match out with\n    | Some o -> o\n    | None -> create_tensor x.context x.dtype out_shape\n  in\n\n  let shape_arr = out_shape in\n  let elem_size = Dtype.itemsize x.dtype in\n  let strides_in = contiguous_strides out_shape elem_size in\n  let strides_out = contiguous_strides out_shape elem_size in\n  (* Normalize negative axes *)\n  let ndim = 
Array.length out_shape in\n  let axes_arr = Array.map (fun ax -> if ax < 0 then ndim + ax else ax) axes in\n\n  (match (x.dtype : (a, b) Dtype.t) with\n  | Dtype.Complex64 ->\n      Pocketfft.c2c_f32 ~shape:shape_arr ~stride_in:strides_in\n        ~stride_out:strides_out ~axes:axes_arr ~forward:false ~fct:1.0\n        ~data_in:(Nx_buffer.to_bigarray1 x'.buffer)\n        ~data_out:(Nx_buffer.to_bigarray1 out.buffer)\n        ~nthreads:1\n  | Dtype.Complex128 ->\n      Pocketfft.c2c_f64 ~shape:shape_arr ~stride_in:strides_in\n        ~stride_out:strides_out ~axes:axes_arr ~forward:false ~fct:1.0\n        ~data_in:(Nx_buffer.to_bigarray1 x'.buffer)\n        ~data_out:(Nx_buffer.to_bigarray1 out.buffer)\n        ~nthreads:1\n  | _ -> invalid_arg \"ifft: unsupported dtype\");\n\n  out\n\nlet rfft (type a b c d) ?out (x : (a, b) t) ~(dtype : (c, d) Dtype.t) ~axes :\n    (c, d) t =\n  let x' = materialize x in\n\n  (* Calculate output shape for rfft *)\n  let in_shape = shape x' in\n  let out_shape = Array.copy in_shape in\n  let last_axis = Array.length axes - 1 in\n  (if last_axis >= 0 then\n     let axis_idx =\n       if axes.(last_axis) < 0 then Array.length in_shape + axes.(last_axis)\n       else axes.(last_axis)\n     in\n     out_shape.(axis_idx) <- (in_shape.(axis_idx) / 2) + 1);\n\n  let out =\n    match out with\n    | Some o -> o\n    | None -> create_tensor x.context dtype out_shape\n  in\n\n  let strides_in = contiguous_strides in_shape (Dtype.itemsize x.dtype) in\n  let strides_out = contiguous_strides out_shape (Dtype.itemsize dtype) in\n\n  (* Normalize negative axes *)\n  let ndim = Array.length in_shape in\n  let axes_normalized =\n    Array.map (fun ax -> if ax < 0 then ndim + ax else ax) axes\n  in\n\n  (match ((x.dtype : (a, b) Dtype.t), (dtype : (c, d) Dtype.t)) with\n  | Dtype.Float32, Dtype.Complex64 ->\n      Pocketfft.r2c_f32 ~shape_in:in_shape ~stride_in:strides_in\n        ~stride_out:strides_out ~axes:axes_normalized ~forward:true ~fct:1.0\n  
      ~data_in:(Nx_buffer.to_bigarray1 x'.buffer)\n        ~data_out:(Nx_buffer.to_bigarray1 out.buffer)\n        ~nthreads:1\n  | Dtype.Float64, Dtype.Complex128 ->\n      Pocketfft.r2c_f64 ~shape_in:in_shape ~stride_in:strides_in\n        ~stride_out:strides_out ~axes:axes_normalized ~forward:true ~fct:1.0\n        ~data_in:(Nx_buffer.to_bigarray1 x'.buffer)\n        ~data_out:(Nx_buffer.to_bigarray1 out.buffer)\n        ~nthreads:1\n  | _ -> invalid_arg \"rfft: unsupported dtype combination\");\n\n  out\n\nlet irfft (type a b c d) ?out ?s (x : (a, b) t) ~(dtype : (c, d) Dtype.t) ~axes\n    : (c, d) t =\n  let x' = materialize x in\n\n  (* Calculate output shape for irfft *)\n  let in_shape = shape x' in\n  let out_shape = Array.copy in_shape in\n  let last_axis = Array.length axes - 1 in\n\n  (if last_axis >= 0 then\n     let axis_idx =\n       if axes.(last_axis) < 0 then Array.length in_shape + axes.(last_axis)\n       else axes.(last_axis)\n     in\n     let size =\n       match s with\n       | None -> (in_shape.(axis_idx) - 1) * 2\n       | Some sizes -> sizes.(last_axis)\n     in\n     out_shape.(axis_idx) <- size);\n\n  let out =\n    match out with\n    | Some o -> o\n    | None -> create_tensor x.context dtype out_shape\n  in\n\n  let strides_in = contiguous_strides in_shape (Dtype.itemsize x.dtype) in\n  let strides_out = contiguous_strides out_shape (Dtype.itemsize dtype) in\n\n  (* Normalize negative axes *)\n  let ndim = Array.length in_shape in\n  let axes_normalized =\n    Array.map (fun ax -> if ax < 0 then ndim + ax else ax) axes\n  in\n\n  (match ((x.dtype : (a, b) Dtype.t), (dtype : (c, d) Dtype.t)) with\n  | Dtype.Complex64, Dtype.Float32 ->\n      Pocketfft.c2r_f32 ~shape_out:out_shape ~stride_in:strides_in\n        ~stride_out:strides_out ~axes:axes_normalized ~forward:false ~fct:1.0\n        ~data_in:(Nx_buffer.to_bigarray1 x'.buffer)\n        ~data_out:(Nx_buffer.to_bigarray1 out.buffer)\n        ~nthreads:1\n  | Dtype.Complex128, 
Dtype.Float64 ->\n      Pocketfft.c2r_f64 ~shape_out:out_shape ~stride_in:strides_in\n        ~stride_out:strides_out ~axes:axes_normalized ~forward:false ~fct:1.0\n        ~data_in:(Nx_buffer.to_bigarray1 x'.buffer)\n        ~data_out:(Nx_buffer.to_bigarray1 out.buffer)\n        ~nthreads:1\n  | _ -> invalid_arg \"irfft: unsupported dtype combination\");\n\n  out\n\n(* ───── Linear Algebra Operations ───── *)\n\nlet cholesky ~upper x =\n  (* Ensure input is materializable *)\n  let x' = ensure_materializable x in\n\n  (* Create output tensor with same shape and dtype *)\n  let out_shape = shape x in\n  let out = create_tensor x.context x.dtype out_shape in\n\n  (* Convert to FFI tensors *)\n  let x_ffi = to_ffi_tensor x' in\n  let out_ffi = to_ffi_tensor out in\n\n  (* Call FFI function *)\n  caml_cholesky x_ffi out_ffi upper;\n\n  out\n\nlet qr ~reduced x =\n  let x' = ensure_materializable x in\n  let x_shape = shape x in\n  let m = x_shape.(Array.length x_shape - 2) in\n  let n = x_shape.(Array.length x_shape - 1) in\n  let k = Stdlib.min m n in\n\n  (* Calculate Q and R shapes *)\n  let q_shape = Array.copy x_shape in\n  let r_shape = Array.copy x_shape in\n\n  if reduced then (\n    (* Reduced QR: Q is m×k, R is k×n *)\n    q_shape.(Array.length q_shape - 1) <- k;\n    r_shape.(Array.length r_shape - 2) <- k)\n  else (\n    (* Complete QR: Q is m×m, R is m×n *)\n    q_shape.(Array.length q_shape - 1) <- m;\n    (* R shape is already m×n from the copy *)\n    ());\n\n  let q = create_tensor x.context x.dtype q_shape in\n  let r = create_tensor x.context x.dtype r_shape in\n\n  let x_ffi = to_ffi_tensor x' in\n  let q_ffi = to_ffi_tensor q in\n  let r_ffi = to_ffi_tensor r in\n\n  caml_qr x_ffi q_ffi r_ffi reduced;\n  (q, r)\n\nlet svd (type a b) ~full_matrices (x : (a, b) t) :\n    (a, b) t * (float, Dtype.float64_elt) t * (a, b) t =\n  let x' = ensure_materializable x in\n  let x_shape = shape x in\n  let m = x_shape.(Array.length x_shape - 2) in\n  let n = 
x_shape.(Array.length x_shape - 1) in\n  let k = Stdlib.min m n in\n\n  (* Calculate U, S, Vt shapes *)\n  let batch_shape = Array.sub x_shape 0 (Array.length x_shape - 2) in\n\n  let u_shape =\n    Array.append batch_shape (if full_matrices then [| m; m |] else [| m; k |])\n  in\n  let s_shape = Array.append batch_shape [| k |] in\n  let vt_shape =\n    Array.append batch_shape (if full_matrices then [| n; n |] else [| k; n |])\n  in\n\n  let u = create_tensor x.context x.dtype u_shape in\n  let s = create_tensor x.context Dtype.Float64 s_shape in\n  let vt = create_tensor x.context x.dtype vt_shape in\n\n  let x_ffi = to_ffi_tensor x' in\n  let u_ffi = to_ffi_tensor u in\n  let s_ffi = to_ffi_tensor s in\n  let vt_ffi = to_ffi_tensor vt in\n\n  caml_svd x_ffi u_ffi s_ffi vt_ffi full_matrices;\n  (u, s, vt)\n\nlet eig (type a b) ~vectors (x : (a, b) t) :\n    (Complex.t, Dtype.complex64_elt) t\n    * (Complex.t, Dtype.complex64_elt) t option =\n  let x' = ensure_materializable x in\n  let x_shape = shape x in\n  let n = x_shape.(Array.length x_shape - 1) in\n\n  (* Eigenvalues and eigenvectors are always complex128 *)\n  let batch_shape = Array.sub x_shape 0 (Array.length x_shape - 2) in\n  let vals_shape = Array.append batch_shape [| n |] in\n  let vecs_shape = x_shape in\n\n  let vals = create_tensor x.context Dtype.Complex128 vals_shape in\n  let vecs =\n    if vectors then create_tensor x.context Dtype.Complex128 vecs_shape\n    else\n      (* Create dummy tensor for C interface *)\n      create_tensor x.context Dtype.Complex128 [| 1 |]\n  in\n\n  let x_ffi = to_ffi_tensor x' in\n  let vals_ffi = to_ffi_tensor vals in\n  let vecs_ffi = to_ffi_tensor vecs in\n\n  caml_eig x_ffi vals_ffi vecs_ffi false vectors;\n  if vectors then (vals, Some vecs) else (vals, None)\n\nlet eigh (type a b) ~vectors (x : (a, b) t) :\n    (float, Dtype.float64_elt) t * (a, b) t option =\n  let x' = ensure_materializable x in\n  let x_shape = shape x in\n\n  (* For 
symmetric/hermitian matrices, eigenvalues are always float64 *)\n  let batch_shape = Array.sub x_shape 0 (Array.length x_shape - 2) in\n  let n = x_shape.(Array.length x_shape - 1) in\n  let vals_shape = Array.append batch_shape [| n |] in\n\n  let vals = create_tensor x.context Dtype.Float64 vals_shape in\n  let vecs =\n    if vectors then create_tensor x.context x.dtype x_shape\n    else\n      (* Create dummy tensor for C interface *)\n      create_tensor x.context x.dtype [| 1 |]\n  in\n\n  let x_ffi = to_ffi_tensor x' in\n  let vals_ffi = to_ffi_tensor vals in\n  let vecs_ffi = to_ffi_tensor vecs in\n\n  caml_eig x_ffi vals_ffi vecs_ffi true vectors;\n  if vectors then (vals, Some vecs) else (vals, None)\n\nlet triangular_solve ~upper ~transpose ~unit_diag a b =\n  let a' = ensure_materializable a in\n\n  (* Handle 1D input b by expanding to 2D *)\n  let b_shape = shape b in\n  let b_ndim = Array.length b_shape in\n  let b_is_1d = b_ndim = 1 in\n\n  let b_expanded, out_shape =\n    if b_is_1d then\n      (* Expand 1D to 2D by adding a trailing dimension *)\n      let new_shape = [| b_shape.(0); 1 |] in\n      let b_reshaped = reshape b new_shape in\n      (b_reshaped, b_shape) (* Keep original shape for output *)\n    else (b, shape b)\n  in\n\n  let b' = ensure_materializable b_expanded in\n\n  (* Create output with appropriate shape *)\n  let out_shape_expanded = shape b_expanded in\n  let out_expanded = create_tensor b.context b.dtype out_shape_expanded in\n\n  let a_ffi = to_ffi_tensor a' in\n  let b_ffi = to_ffi_tensor b' in\n  let out_ffi = to_ffi_tensor out_expanded in\n\n  caml_triangular_solve a_ffi b_ffi out_ffi upper transpose unit_diag;\n\n  (* Squeeze output back to 1D if input was 1D *)\n  if b_is_1d then reshape out_expanded out_shape else out_expanded\n\nlet div x y =\n  let dt = dtype x in\n  if Dtype.is_int dt || Dtype.is_uint dt then binary_op \"idiv\" caml_idiv x y\n  else binary_op \"fdiv\" caml_fdiv x y\n\nlet argmax ~axis ~keepdims x =\n 
 let x' = ensure_materializable x in\n  let out_shape = Shape.reduce_output_shape (shape x) [| axis |] keepdims in\n  let out = buffer () Dtype.Int32 out_shape in\n  let x_ffi = to_ffi_tensor x' in\n  let out_ffi = to_ffi_tensor out in\n  caml_argmax x_ffi out_ffi axis keepdims;\n  out\n\nlet argmin ~axis ~keepdims x =\n  let x' = ensure_materializable x in\n  let out_shape = Shape.reduce_output_shape (shape x) [| axis |] keepdims in\n  let out = buffer () Dtype.Int32 out_shape in\n  let x_ffi = to_ffi_tensor x' in\n  let out_ffi = to_ffi_tensor out in\n  caml_argmin x_ffi out_ffi axis keepdims;\n  out\n\nlet sort ~axis ~descending x =\n  let x' = ensure_materializable x in\n  let out = buffer () (dtype x) (shape x) in\n  let x_ffi = to_ffi_tensor x' in\n  let out_ffi = to_ffi_tensor out in\n  caml_sort x_ffi out_ffi axis descending;\n  out\n\nlet argsort ~axis ~descending x =\n  let x' = ensure_materializable x in\n  let out = buffer () Dtype.Int32 (shape x) in\n  let x_ffi = to_ffi_tensor x' in\n  let out_ffi = to_ffi_tensor out in\n  caml_argsort x_ffi out_ffi axis descending;\n  out\n"
  },
  {
    "path": "packages/nx/lib/backend_c/nx_c_binary.c",
    "content": "/*---------------------------------------------------------------------------\n   Copyright (c) 2026 The Raven authors. All rights reserved.\n   SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*/\n\n// Binary operations for nx C backend\n\n#include <caml/alloc.h>\n#include <caml/bigarray.h>\n#include <caml/custom.h>\n#include <caml/fail.h>\n#include <caml/memory.h>\n#include <caml/threads.h>\n#include <string.h>\n\n#include \"nx_c_shared.h\"\n\n#if defined(_OPENMP)\n#define NX_PARALLEL_THRESHOLD 32768\n#define NX_FOR_EACH_ELEM(total, BODY)                                           \\\n  do {                                                                          \\\n    if ((total) >= NX_PARALLEL_THRESHOLD) {                                     \\\n      _Pragma(\"omp parallel for simd schedule(static)\")                         \\\n      for (long i = 0; i < (total); ++i) {                                      \\\n        BODY;                                                                   \\\n      }                                                                         \\\n    } else {                                                                    \\\n      _Pragma(\"omp simd\")                                                       \\\n      for (long i = 0; i < (total); ++i) {                                      \\\n        BODY;                                                                   \\\n      }                                                                         \\\n    }                                                                           \\\n  } while (0)\n#else\n#define NX_FOR_EACH_ELEM(total, BODY)                                           \\\n  do {                                                                          \\\n    for (long i = 0; i < (total); ++i) {                                        \\\n      BODY;                                    
                                 \\\n    }                                                                           \\\n  } while (0)\n#endif\n\n// Type definitions for binary operations\ntypedef void (*binary_op_t)(const ndarray_t *, const ndarray_t *, ndarray_t *);\n\n// Dispatch table for each type\ntypedef struct {\n  binary_op_t i8, u8, i16, u16, i32, i64, u32, u64, inat;\n  binary_op_t f16, f32, f64;\n  binary_op_t c32, c64;\n  binary_op_t bf16, bool_, i4, u4, f8e4m3, f8e5m2;\n} binary_op_table;\n\n// Macro to generate all standard type variants for an operation\n// Note: float16, bfloat16, fp8 types need special handling with conversion\n#define GENERATE_BINARY_OP(name, OP_EXPR)               \\\n  BINARY_OP_FOR_TYPE(name, int8_t, i8, OP_EXPR)         \\\n  BINARY_OP_FOR_TYPE(name, uint8_t, u8, OP_EXPR)        \\\n  BINARY_OP_FOR_TYPE(name, int16_t, i16, OP_EXPR)       \\\n  BINARY_OP_FOR_TYPE(name, uint16_t, u16, OP_EXPR)      \\\n  BINARY_OP_FOR_TYPE(name, int32_t, i32, OP_EXPR)       \\\n  BINARY_OP_FOR_TYPE(name, int64_t, i64, OP_EXPR)       \\\n  BINARY_OP_FOR_TYPE(name, uint32_t, u32, OP_EXPR)      \\\n  BINARY_OP_FOR_TYPE(name, uint64_t, u64, OP_EXPR)      \\\n  BINARY_OP_FOR_TYPE(name, intnat, inat, OP_EXPR)       \\\n  BINARY_OP_FOR_TYPE(name, float, f32, OP_EXPR)         \\\n  BINARY_OP_FOR_TYPE(name, double, f64, OP_EXPR)\n\n// Macro to build dispatch table\n#define BUILD_DISPATCH_TABLE(name)                                             \\\n  static const binary_op_table name##_table = {.i8 = nx_c_##name##_i8,         \\\n                                               .u8 = nx_c_##name##_u8,         \\\n                                               .i16 = nx_c_##name##_i16,       \\\n                                               .u16 = nx_c_##name##_u16,       \\\n                                               .i32 = nx_c_##name##_i32,       \\\n                                               .i64 = nx_c_##name##_i64,       \\\n                   
                            .u32 = nx_c_##name##_u32,       \\\n                                               .u64 = nx_c_##name##_u64,       \\\n                                               .inat = nx_c_##name##_inat,     \\\n                                               .f16 = nx_c_##name##_f16,       \\\n                                               .f32 = nx_c_##name##_f32,       \\\n                                               .f64 = nx_c_##name##_f64,       \\\n                                               .c32 = nx_c_##name##_c32,       \\\n                                               .c64 = nx_c_##name##_c64,       \\\n                                               .bf16 = nx_c_##name##_bf16,     \\\n                                               .bool_ = nx_c_##name##_bool_,   \\\n                                               .i4 = nx_c_##name##_i4,         \\\n                                               .u4 = nx_c_##name##_u4,         \\\n                                               .f8e4m3 = nx_c_##name##_f8e4m3, \\\n                                               .f8e5m2 = nx_c_##name##_f8e5m2}\n\n// Helper to iterate over inner dimensions with a kernel function for binary\n// operations\ntypedef void (*kernel_fn)(void *, void *, void *, long, long, long);\n\nstatic inline void iterate_inner_dims(const ndarray_t *x, const ndarray_t *y,\n                                      const ndarray_t *z, long outer_idx,\n                                      kernel_fn kernel, void *x_data,\n                                      void *y_data, void *z_data) {\n  if (x->ndim <= 1) {\n    kernel(x_data, y_data, z_data, x->offset + outer_idx * x->strides[0],\n           y->offset + outer_idx * y->strides[0],\n           z->offset + outer_idx * z->strides[0]);\n    return;\n  }\n\n  long x_base = x->offset + outer_idx * x->strides[0];\n  long y_base = y->offset + outer_idx * y->strides[0];\n  long z_base = z->offset + outer_idx * z->strides[0];\n\n  // 
Create temporary iterator for inner dimensions\n  int inner_ndim = x->ndim - 1;\n  int coords_stack[MAX_NDIM];\n  int *coords = coords_stack;\n  bool heap_alloc = false;\n  if (inner_ndim > MAX_NDIM) {\n    coords = (int *)calloc(inner_ndim, sizeof(int));\n    if (!coords) {\n      fprintf(stderr, \"nx: iterate_inner_dims: allocation failed\\n\");\n      abort();\n    }\n    heap_alloc = true;\n  } else {\n    memset(coords_stack, 0, inner_ndim * sizeof(int));\n  }\n\n  // Iterate over inner dimensions\n  bool done = false;\n  while (!done) {\n    long x_off = x_base;\n    long y_off = y_base;\n    long z_off = z_base;\n\n    for (int i = 0; i < inner_ndim; i++) {\n      x_off += coords[i] * x->strides[i + 1];\n      y_off += coords[i] * y->strides[i + 1];\n      z_off += coords[i] * z->strides[i + 1];\n    }\n\n    kernel(x_data, y_data, z_data, x_off, y_off, z_off);\n\n    // Advance to next position\n    done = true;\n    for (int i = inner_ndim - 1; i >= 0; i--) {\n      coords[i]++;\n      if (coords[i] < x->shape[i + 1]) {\n        done = false;\n        break;\n      }\n      coords[i] = 0;\n    }\n  }\n\n  if (heap_alloc) free(coords);\n}\n\n// Generic binary operation kernel\n#define BINARY_OP_KERNEL(name, T, suffix, OP)                             \\\n  static inline void nx_c_##name##_##suffix##_kernel(void *x_data, void *y_data, \\\n                                              void *z_data, long x_off,   \\\n                                              long y_off, long z_off) {   \\\n    T *x = (T *)x_data;                                                   \\\n    T *y = (T *)y_data;                                                   \\\n    T *z = (T *)z_data;                                                   \\\n    z[z_off] = OP(x[x_off], y[y_off]);                                    \\\n  }\n\n// Generic binary operation implementation\n#define BINARY_OP_IMPL(name, T, suffix)                                        \\\n  static void 
nx_c_##name##_##suffix(const ndarray_t *x, const ndarray_t *y,   \\\n                                     ndarray_t *z) {                           \\\n    if (!x || !y || !z) {                                                      \\\n      fprintf(stderr, \"nx: nx_c_\" #name \"_\" #suffix \": null pointer\\n\");       \\\n      abort();                                                                 \\\n    }                                                                          \\\n    long total = total_elements_safe(x);                                       \\\n    if (total == 0) return;                                                    \\\n                                                                               \\\n    if (is_fully_contiguous(x, y, z)) {                                        \\\n      T *restrict xs = (T *)x->data + x->offset;                               \\\n      T *restrict ys = (T *)y->data + y->offset;                               \\\n      T *restrict zs = (T *)z->data + z->offset;                               \\\n      NX_FOR_EACH_ELEM(total, nx_c_##name##_##suffix##_kernel(xs, ys, zs, i, i, i)); \\\n    } else if (x->shape[0] > 1 && total / x->shape[0] > 50) {                  \\\n      _Pragma(\"omp parallel for if(x->shape[0] > 4)\") for (long i = 0;         \\\n                                                           i < x->shape[0];    \\\n                                                           i++) {              \\\n        iterate_inner_dims(x, y, z, i, nx_c_##name##_##suffix##_kernel,        \\\n                           x->data, y->data, z->data);                         \\\n      }                                                                        \\\n    } else {                                                                   \\\n      nd_iterator_t it;                                                        \\\n      nd_iterator_init_safe(&it, x, y, z);                                     \\\n      
do {                                                                     \\\n        long x_off, y_off, z_off;                                              \\\n        nd_iterator_get_offsets(&it, &x_off, &y_off, &z_off);                  \\\n        nx_c_##name##_##suffix##_kernel(x->data, y->data, z->data,             \\\n                                        x->offset + x_off, y->offset + y_off,  \\\n                                        z->offset + z_off);                    \\\n      } while (nd_iterator_next(&it));                                         \\\n      nd_iterator_destroy(&it);                                                \\\n    }                                                                          \\\n  }\n\n// Macro to generate both kernel and implementation for an operation\n#define BINARY_OP_FOR_TYPE(name, T, suffix, OP) \\\n  BINARY_OP_KERNEL(name, T, suffix, OP)         \\\n  BINARY_OP_IMPL(name, T, suffix)\n\n// Low-precision float kernel (convert to float for op)\n#define LOW_PREC_OP_KERNEL(name, T, suffix, OP, TO_FLOAT, FROM_FLOAT)     \\\n  static void nx_c_##name##_##suffix##_kernel(void *x_data, void *y_data, \\\n                                              void *z_data, long x_off,   \\\n                                              long y_off, long z_off) {   \\\n    T *x = (T *)x_data;                                                   \\\n    T *y = (T *)y_data;                                                   \\\n    T *z = (T *)z_data;                                                   \\\n    float a = TO_FLOAT(x[x_off]);                                         \\\n    float b = TO_FLOAT(y[y_off]);                                         \\\n    z[z_off] = FROM_FLOAT(OP(a, b));                                      \\\n  }\n\n// For floating-point division, no zero check - let IEEE 754 semantics apply\n// (produces inf/-inf/NaN as appropriate)\n\n// For low-precision, use the impl with the special kernel\n#define 
LOW_PREC_OP_IMPL(name, T, suffix) BINARY_OP_IMPL(name, T, suffix)\n\n// Complex OP for arithmetic - reuse from nx_c_shared.h where possible\n// COMPLEX_ADD and COMPLEX_MUL are defined in nx_c_shared.h\n#define COMPLEX_SUB(x, y) ((x) - (y))\n#define COMPLEX_DIV(x, y) ((x) / (y))\n\n// Helper macros for int4 saturation\n#define CLAMP_I4(x) ((x) < -8 ? -8 : ((x) > 7 ? 7 : (x)))\n#define CLAMP_U4(x) ((x) < 0 ? 0 : ((x) > 15 ? 15 : (x)))\n\n// Special implementation for int4 (packed, unpack/op/pack with saturation).\n// Offsets are in nibble (element) units; each tensor gets its own byte and\n// nibble position, since strided offsets can differ between x, y and z.\n#define INT4_OP_IMPL(name, signedness, suffix, OP)                            \\\n  static void nx_c_##name##_##suffix##_kernel(void *x_data, void *y_data,     \\\n                                              void *z_data, long x_off,       \\\n                                              long y_off, long z_off) {       \\\n    uint8_t *x = (uint8_t *)x_data;                                           \\\n    uint8_t *y = (uint8_t *)y_data;                                           \\\n    uint8_t *z = (uint8_t *)z_data;                                           \\\n    long xb = x_off / 2, yb = y_off / 2, zb = z_off / 2;                      \\\n    int xn = x_off % 2, yn = y_off % 2, zn = z_off % 2;                       \\\n    int a = xn ? (signedness ? (int8_t)(x[xb] >> 4)                           \\\n                             : (x[xb] >> 4) & 0x0F)                           \\\n               : (signedness ? (int8_t)((x[xb] & 0x0F) << 4) >> 4             \\\n                             : x[xb] & 0x0F);                                 \\\n    int b = yn ? (signedness ? (int8_t)(y[yb] >> 4)                           \\\n                             : (y[yb] >> 4) & 0x0F)                           \\\n               : (signedness ? (int8_t)((y[yb] & 0x0F) << 4) >> 4             \\\n                             : y[yb] & 0x0F);                                 \\\n    int res = OP(a, b);                                                       \\\n    /* Saturate to 4-bit range */                                             \\\n    res = signedness ? CLAMP_I4(res) : CLAMP_U4(res);                         \\\n    uint8_t nib = (uint8_t)res & 0x0F;                                        \\\n    if (zn) {                                                                 \\\n      z[zb] = (z[zb] & 0x0F) | (nib << 4);                                    \\\n    } else {                                                                  \\\n      z[zb] = (z[zb] & 0xF0) | nib;                                           \\\n    }                                                                         \\\n  }                                                                           \\\n  static void nx_c_##name##_##suffix(const ndarray_t *x, const ndarray_t *y,  \\\n                                     ndarray_t *z) {                          \\\n    if (is_fully_contiguous(x, y, z)) {                                       \\\n      long total = total_elements_safe(x);                                    \\\n      /* Packed nibbles: adjacent elements share a byte, so a parallel        \\\n         read-modify-write of z would race; keep this loop serial. */         \\\n      for (long i = 0; i < total; i++) {                                      \\\n        nx_c_##name##_##suffix##_kernel(x->data, y->data, z->data,            \\\n                                        x->offset + i, y->offset + i,         \\\n                                        z->offset + i);                       \\\n      }                                                                       \\\n    } else {                                                                  \\\n      nd_iterator_t it;                                           
            \\\n      nd_iterator_init_safe(&it, x, y, z);                                    \\\n      void *x_data = x->data;                                                 \\\n      void *y_data = y->data;                                                 \\\n      void *z_data = z->data;                                                 \\\n      do {                                                                    \\\n        long x_off, y_off, z_off;                                             \\\n        nd_iterator_get_offsets(&it, &x_off, &y_off, &z_off);                 \\\n        nx_c_##name##_##suffix##_kernel(x_data, y_data, z_data,               \\\n                                        x_off + x->offset, y_off + y->offset, \\\n                                        z_off + z->offset);                   \\\n      } while (nd_iterator_next(&it));                                        \\\n      nd_iterator_destroy(&it);                                               \\\n    }                                                                         \\\n  }\n\n// For bool, treat as uint8_t with standard arithmetic\n// Note: Results may exceed 0/1 range (e.g., 1+1=2), stored in uint8_t\n\n// Generate for all ops\n// Addition\n#define ADD_OP(x, y) ((x) + (y))\nGENERATE_BINARY_OP(add, ADD_OP)\n\n// Float16, BFloat16, FP8 variants need conversion\nLOW_PREC_OP_KERNEL(add, uint16_t, f16, ADD_OP, half_to_float, float_to_half)\nLOW_PREC_OP_IMPL(add, uint16_t, f16)\nLOW_PREC_OP_KERNEL(add, caml_ba_bfloat16, bf16, ADD_OP, bfloat16_to_float,\n                   float_to_bfloat16)\nLOW_PREC_OP_IMPL(add, caml_ba_bfloat16, bf16)\nLOW_PREC_OP_KERNEL(add, caml_ba_fp8_e4m3, f8e4m3, ADD_OP, fp8_e4m3_to_float,\n                   float_to_fp8_e4m3)\nLOW_PREC_OP_IMPL(add, caml_ba_fp8_e4m3, f8e4m3)\nLOW_PREC_OP_KERNEL(add, caml_ba_fp8_e5m2, f8e5m2, ADD_OP, fp8_e5m2_to_float,\n                   float_to_fp8_e5m2)\nLOW_PREC_OP_IMPL(add, caml_ba_fp8_e5m2, 
f8e5m2)\n\nBINARY_OP_FOR_TYPE(add, complex32, c32, COMPLEX_ADD)\nBINARY_OP_FOR_TYPE(add, complex64, c64, COMPLEX_ADD)\n\nINT4_OP_IMPL(add, 1, i4, ADD_OP)\nINT4_OP_IMPL(add, 0, u4, ADD_OP)\nBINARY_OP_FOR_TYPE(add, caml_ba_bool, bool_, ADD_OP)  // Standard arithmetic\nBUILD_DISPATCH_TABLE(add);\n\n// Subtraction\n#define SUB_OP(x, y) ((x) - (y))\nGENERATE_BINARY_OP(sub, SUB_OP)\n\n// Float16, BFloat16, FP8 variants need conversion\nLOW_PREC_OP_KERNEL(sub, uint16_t, f16, SUB_OP, half_to_float, float_to_half)\nLOW_PREC_OP_IMPL(sub, uint16_t, f16)\nLOW_PREC_OP_KERNEL(sub, caml_ba_bfloat16, bf16, SUB_OP, bfloat16_to_float,\n                   float_to_bfloat16)\nLOW_PREC_OP_IMPL(sub, caml_ba_bfloat16, bf16)\nLOW_PREC_OP_KERNEL(sub, caml_ba_fp8_e4m3, f8e4m3, SUB_OP, fp8_e4m3_to_float,\n                   float_to_fp8_e4m3)\nLOW_PREC_OP_IMPL(sub, caml_ba_fp8_e4m3, f8e4m3)\nLOW_PREC_OP_KERNEL(sub, caml_ba_fp8_e5m2, f8e5m2, SUB_OP, fp8_e5m2_to_float,\n                   float_to_fp8_e5m2)\nLOW_PREC_OP_IMPL(sub, caml_ba_fp8_e5m2, f8e5m2)\n\nBINARY_OP_FOR_TYPE(sub, complex32, c32, COMPLEX_SUB)\nBINARY_OP_FOR_TYPE(sub, complex64, c64, COMPLEX_SUB)\n\nINT4_OP_IMPL(sub, 1, i4, SUB_OP)\nINT4_OP_IMPL(sub, 0, u4, SUB_OP)\nBINARY_OP_FOR_TYPE(sub, caml_ba_bool, bool_,\n                   SUB_OP)  // Standard arithmetic (may wrap)\nBUILD_DISPATCH_TABLE(sub);\n\n// Multiplication\n#define MUL_OP(x, y) ((x) * (y))\nGENERATE_BINARY_OP(mul, MUL_OP)\n\n// Float16, BFloat16, FP8 variants need conversion\nLOW_PREC_OP_KERNEL(mul, uint16_t, f16, MUL_OP, half_to_float, float_to_half)\nLOW_PREC_OP_IMPL(mul, uint16_t, f16)\nLOW_PREC_OP_KERNEL(mul, caml_ba_bfloat16, bf16, MUL_OP, bfloat16_to_float,\n                   float_to_bfloat16)\nLOW_PREC_OP_IMPL(mul, caml_ba_bfloat16, bf16)\nLOW_PREC_OP_KERNEL(mul, caml_ba_fp8_e4m3, f8e4m3, MUL_OP, fp8_e4m3_to_float,\n                   float_to_fp8_e4m3)\nLOW_PREC_OP_IMPL(mul, caml_ba_fp8_e4m3, f8e4m3)\nLOW_PREC_OP_KERNEL(mul, caml_ba_fp8_e5m2, f8e5m2, 
MUL_OP, fp8_e5m2_to_float,\n                   float_to_fp8_e5m2)\nLOW_PREC_OP_IMPL(mul, caml_ba_fp8_e5m2, f8e5m2)\n\nBINARY_OP_FOR_TYPE(mul, complex32, c32, COMPLEX_MUL)\nBINARY_OP_FOR_TYPE(mul, complex64, c64, COMPLEX_MUL)\n\nINT4_OP_IMPL(mul, 1, i4, MUL_OP)\nINT4_OP_IMPL(mul, 0, u4, MUL_OP)\nBINARY_OP_FOR_TYPE(mul, caml_ba_bool, bool_, MUL_OP)  // Standard arithmetic\nBUILD_DISPATCH_TABLE(mul);\n\n// Integer division - truncates and checks for zero\n#define INT_DIV_OP(x, y) \\\n  ((y) == 0 ? (fprintf(stderr, \"nx: division by zero\\n\"), abort(), (x)) : ((x) / (y)))\n\n// Integer types\nBINARY_OP_FOR_TYPE(idiv, int8_t, i8, INT_DIV_OP)\nBINARY_OP_FOR_TYPE(idiv, uint8_t, u8, INT_DIV_OP)\nBINARY_OP_FOR_TYPE(idiv, int16_t, i16, INT_DIV_OP)\nBINARY_OP_FOR_TYPE(idiv, uint16_t, u16, INT_DIV_OP)\nBINARY_OP_FOR_TYPE(idiv, int32_t, i32, INT_DIV_OP)\nBINARY_OP_FOR_TYPE(idiv, int64_t, i64, INT_DIV_OP)\nBINARY_OP_FOR_TYPE(idiv, uint32_t, u32, INT_DIV_OP)\nBINARY_OP_FOR_TYPE(idiv, uint64_t, u64, INT_DIV_OP)\nBINARY_OP_FOR_TYPE(idiv, intnat, inat, INT_DIV_OP)\n\n// For float types, idiv truncates the result\n#define FLOAT_IDIV_OP(x, y) (trunc((x) / (y)))\nBINARY_OP_FOR_TYPE(idiv, float, f32, FLOAT_IDIV_OP)\nBINARY_OP_FOR_TYPE(idiv, double, f64, FLOAT_IDIV_OP)\n\nLOW_PREC_OP_KERNEL(idiv, uint16_t, f16, FLOAT_IDIV_OP, half_to_float,\n                   float_to_half)\nLOW_PREC_OP_IMPL(idiv, uint16_t, f16)\nLOW_PREC_OP_KERNEL(idiv, caml_ba_bfloat16, bf16, FLOAT_IDIV_OP,\n                   bfloat16_to_float, float_to_bfloat16)\nLOW_PREC_OP_IMPL(idiv, caml_ba_bfloat16, bf16)\nLOW_PREC_OP_KERNEL(idiv, caml_ba_fp8_e4m3, f8e4m3, FLOAT_IDIV_OP,\n                   fp8_e4m3_to_float, float_to_fp8_e4m3)\nLOW_PREC_OP_IMPL(idiv, caml_ba_fp8_e4m3, f8e4m3)\nLOW_PREC_OP_KERNEL(idiv, caml_ba_fp8_e5m2, f8e5m2, FLOAT_IDIV_OP,\n                   fp8_e5m2_to_float, float_to_fp8_e5m2)\nLOW_PREC_OP_IMPL(idiv, caml_ba_fp8_e5m2, f8e5m2)\n\n// Complex idiv also truncates both real and imaginary 
parts\nstatic void nx_c_idiv_c32_kernel(void *x_data, void *y_data, void *z_data,\n                                 long x_off, long y_off, long z_off) {\n  complex32 *x = (complex32 *)x_data;\n  complex32 *y = (complex32 *)y_data;\n  complex32 *z = (complex32 *)z_data;\n  complex32 res = x[x_off] / y[y_off];\n  z[z_off] = truncf(crealf(res)) + I * truncf(cimagf(res));\n}\n\nstatic void nx_c_idiv_c64_kernel(void *x_data, void *y_data, void *z_data,\n                                 long x_off, long y_off, long z_off) {\n  complex64 *x = (complex64 *)x_data;\n  complex64 *y = (complex64 *)y_data;\n  complex64 *z = (complex64 *)z_data;\n  complex64 res = x[x_off] / y[y_off];\n  z[z_off] = trunc(creal(res)) + I * trunc(cimag(res));\n}\n\nBINARY_OP_IMPL(idiv, complex32, c32)\nBINARY_OP_IMPL(idiv, complex64, c64)\n\n\nINT4_OP_IMPL(idiv, 1, i4, INT_DIV_OP)\nINT4_OP_IMPL(idiv, 0, u4, INT_DIV_OP)\nBINARY_OP_FOR_TYPE(idiv, caml_ba_bool, bool_, INT_DIV_OP)\nBUILD_DISPATCH_TABLE(idiv);\n\n// Floating-point division - follows IEEE 754 (inf/NaN for division by zero)\n#define FLOAT_DIV_OP(x, y) ((x) / (y))\n\n// Integer types converted to float for fdiv\nBINARY_OP_FOR_TYPE(fdiv, int8_t, i8, FLOAT_DIV_OP)\nBINARY_OP_FOR_TYPE(fdiv, uint8_t, u8, FLOAT_DIV_OP)\nBINARY_OP_FOR_TYPE(fdiv, int16_t, i16, FLOAT_DIV_OP)\nBINARY_OP_FOR_TYPE(fdiv, uint16_t, u16, FLOAT_DIV_OP)\nBINARY_OP_FOR_TYPE(fdiv, int32_t, i32, FLOAT_DIV_OP)\nBINARY_OP_FOR_TYPE(fdiv, int64_t, i64, FLOAT_DIV_OP)\nBINARY_OP_FOR_TYPE(fdiv, uint32_t, u32, FLOAT_DIV_OP)\nBINARY_OP_FOR_TYPE(fdiv, uint64_t, u64, FLOAT_DIV_OP)\nBINARY_OP_FOR_TYPE(fdiv, intnat, inat, FLOAT_DIV_OP)\n\n// Floating-point types use IEEE 754 semantics\nBINARY_OP_FOR_TYPE(fdiv, float, f32, FLOAT_DIV_OP)\nBINARY_OP_FOR_TYPE(fdiv, double, f64, FLOAT_DIV_OP)\n\nLOW_PREC_OP_KERNEL(fdiv, uint16_t, f16, FLOAT_DIV_OP, half_to_float,\n                   float_to_half)\nLOW_PREC_OP_IMPL(fdiv, uint16_t, f16)\nLOW_PREC_OP_KERNEL(fdiv, caml_ba_bfloat16, bf16, 
FLOAT_DIV_OP,\n                   bfloat16_to_float, float_to_bfloat16)\nLOW_PREC_OP_IMPL(fdiv, caml_ba_bfloat16, bf16)\nLOW_PREC_OP_KERNEL(fdiv, caml_ba_fp8_e4m3, f8e4m3, FLOAT_DIV_OP,\n                   fp8_e4m3_to_float, float_to_fp8_e4m3)\nLOW_PREC_OP_IMPL(fdiv, caml_ba_fp8_e4m3, f8e4m3)\nLOW_PREC_OP_KERNEL(fdiv, caml_ba_fp8_e5m2, f8e5m2, FLOAT_DIV_OP,\n                   fp8_e5m2_to_float, float_to_fp8_e5m2)\nLOW_PREC_OP_IMPL(fdiv, caml_ba_fp8_e5m2, f8e5m2)\n\nBINARY_OP_FOR_TYPE(fdiv, complex32, c32, COMPLEX_DIV)\nBINARY_OP_FOR_TYPE(fdiv, complex64, c64, COMPLEX_DIV)\n\nINT4_OP_IMPL(fdiv, 1, i4, FLOAT_DIV_OP)\nINT4_OP_IMPL(fdiv, 0, u4, FLOAT_DIV_OP)\nBINARY_OP_FOR_TYPE(fdiv, caml_ba_bool, bool_, FLOAT_DIV_OP)\nBUILD_DISPATCH_TABLE(fdiv);\n\n// Max/Min with special complex\n#define MAX_OP(x, y) ((x) > (y) ? (x) : (y))\nGENERATE_BINARY_OP(max, MAX_OP)\n\n// Float16, BFloat16, FP8 variants need conversion\nLOW_PREC_OP_KERNEL(max, uint16_t, f16, MAX_OP, half_to_float, float_to_half)\nLOW_PREC_OP_IMPL(max, uint16_t, f16)\nLOW_PREC_OP_KERNEL(max, caml_ba_bfloat16, bf16, MAX_OP, bfloat16_to_float,\n                   float_to_bfloat16)\nLOW_PREC_OP_IMPL(max, caml_ba_bfloat16, bf16)\nLOW_PREC_OP_KERNEL(max, caml_ba_fp8_e4m3, f8e4m3, MAX_OP, fp8_e4m3_to_float,\n                   float_to_fp8_e4m3)\nLOW_PREC_OP_IMPL(max, caml_ba_fp8_e4m3, f8e4m3)\nLOW_PREC_OP_KERNEL(max, caml_ba_fp8_e5m2, f8e5m2, MAX_OP, fp8_e5m2_to_float,\n                   float_to_fp8_e5m2)\nLOW_PREC_OP_IMPL(max, caml_ba_fp8_e5m2, f8e5m2)\n\nBINARY_OP_FOR_TYPE(max, complex32, c32, complex_max)\nBINARY_OP_FOR_TYPE(max, complex64, c64, complex64_max)\nINT4_OP_IMPL(max, 1, i4, MAX_OP)\nINT4_OP_IMPL(max, 0, u4, MAX_OP)\nBINARY_OP_FOR_TYPE(max, caml_ba_bool, bool_, MAX_OP)  // Standard comparison\nBUILD_DISPATCH_TABLE(max);\n\n#define MIN_OP(x, y) ((x) < (y) ? 
(x) : (y))\nGENERATE_BINARY_OP(min, MIN_OP)\n\n// Float16, BFloat16, FP8 variants need conversion\nLOW_PREC_OP_KERNEL(min, uint16_t, f16, MIN_OP, half_to_float, float_to_half)\nLOW_PREC_OP_IMPL(min, uint16_t, f16)\nLOW_PREC_OP_KERNEL(min, caml_ba_bfloat16, bf16, MIN_OP, bfloat16_to_float,\n                   float_to_bfloat16)\nLOW_PREC_OP_IMPL(min, caml_ba_bfloat16, bf16)\nLOW_PREC_OP_KERNEL(min, caml_ba_fp8_e4m3, f8e4m3, MIN_OP, fp8_e4m3_to_float,\n                   float_to_fp8_e4m3)\nLOW_PREC_OP_IMPL(min, caml_ba_fp8_e4m3, f8e4m3)\nLOW_PREC_OP_KERNEL(min, caml_ba_fp8_e5m2, f8e5m2, MIN_OP, fp8_e5m2_to_float,\n                   float_to_fp8_e5m2)\nLOW_PREC_OP_IMPL(min, caml_ba_fp8_e5m2, f8e5m2)\n\nBINARY_OP_FOR_TYPE(min, complex32, c32, complex_min)\nBINARY_OP_FOR_TYPE(min, complex64, c64, complex64_min)\nINT4_OP_IMPL(min, 1, i4, MIN_OP)\nINT4_OP_IMPL(min, 0, u4, MIN_OP)\nBINARY_OP_FOR_TYPE(min, caml_ba_bool, bool_, MIN_OP)  // Standard comparison\nBUILD_DISPATCH_TABLE(min);\n\n// =========== MODULO ===========\n#define MOD_OP(x, y) \\\n  ((y) == 0 ? 
(fprintf(stderr, \"nx: modulo by zero\\n\"), abort(), 0) : ((x) % (y)))\n#define FMOD_OP(x, y) (fmod((x), (y)))\n\n// Integer modulo\nBINARY_OP_FOR_TYPE(mod, int8_t, i8, MOD_OP)\nBINARY_OP_FOR_TYPE(mod, uint8_t, u8, MOD_OP)\nBINARY_OP_FOR_TYPE(mod, int16_t, i16, MOD_OP)\nBINARY_OP_FOR_TYPE(mod, uint16_t, u16, MOD_OP)\nBINARY_OP_FOR_TYPE(mod, int32_t, i32, MOD_OP)\nBINARY_OP_FOR_TYPE(mod, int64_t, i64, MOD_OP)\nBINARY_OP_FOR_TYPE(mod, uint32_t, u32, MOD_OP)\nBINARY_OP_FOR_TYPE(mod, uint64_t, u64, MOD_OP)\nBINARY_OP_FOR_TYPE(mod, intnat, inat, MOD_OP)\n\n// Float modulo uses fmod\nBINARY_OP_FOR_TYPE(mod, float, f32, FMOD_OP)\nBINARY_OP_FOR_TYPE(mod, double, f64, FMOD_OP)\nLOW_PREC_OP_KERNEL(mod, uint16_t, f16, FMOD_OP, half_to_float, float_to_half)\nLOW_PREC_OP_IMPL(mod, uint16_t, f16)\nLOW_PREC_OP_KERNEL(mod, caml_ba_bfloat16, bf16, FMOD_OP, bfloat16_to_float,\n                   float_to_bfloat16)\nLOW_PREC_OP_IMPL(mod, caml_ba_bfloat16, bf16)\nLOW_PREC_OP_KERNEL(mod, caml_ba_fp8_e4m3, f8e4m3, FMOD_OP, fp8_e4m3_to_float,\n                   float_to_fp8_e4m3)\nLOW_PREC_OP_IMPL(mod, caml_ba_fp8_e4m3, f8e4m3)\nLOW_PREC_OP_KERNEL(mod, caml_ba_fp8_e5m2, f8e5m2, FMOD_OP, fp8_e5m2_to_float,\n                   float_to_fp8_e5m2)\nLOW_PREC_OP_IMPL(mod, caml_ba_fp8_e5m2, f8e5m2)\n\n// Complex modulo not well-defined\n\nINT4_OP_IMPL(mod, 1, i4, MOD_OP)\nINT4_OP_IMPL(mod, 0, u4, MOD_OP)\nBINARY_OP_FOR_TYPE(mod, caml_ba_bool, bool_, MOD_OP)\n\n// Build dispatch table with NULL for unsupported complex types\nstatic const binary_op_table mod_table = {.i8 = nx_c_mod_i8,\n                                          .u8 = nx_c_mod_u8,\n                                          .i16 = nx_c_mod_i16,\n                                          .u16 = nx_c_mod_u16,\n                                          .i32 = nx_c_mod_i32,\n                                          .i64 = nx_c_mod_i64,\n                                          .u32 = nx_c_mod_u32,\n                                  
        .u64 = nx_c_mod_u64,\n                                          .inat = nx_c_mod_inat,\n                                          .f16 = nx_c_mod_f16,\n                                          .f32 = nx_c_mod_f32,\n                                          .f64 = nx_c_mod_f64,\n                                          .c32 = NULL,\n                                          .c64 = NULL,\n                                          .bf16 = nx_c_mod_bf16,\n                                          .bool_ = nx_c_mod_bool_,\n                                          .i4 = nx_c_mod_i4,\n                                          .u4 = nx_c_mod_u4,\n                                          .f8e4m3 = nx_c_mod_f8e4m3,\n                                          .f8e5m2 = nx_c_mod_f8e5m2};\n\n// =========== POWER ===========\n#define POW_OP(x, y) (pow((double)(x), (double)(y)))\n#define FPOW_OP(x, y) (powf((x), (y)))\n#define DPOW_OP(x, y) (pow((x), (y)))\n\n// All types use pow, converting to appropriate precision\nBINARY_OP_FOR_TYPE(pow, int8_t, i8, POW_OP)\nBINARY_OP_FOR_TYPE(pow, uint8_t, u8, POW_OP)\nBINARY_OP_FOR_TYPE(pow, int16_t, i16, POW_OP)\nBINARY_OP_FOR_TYPE(pow, uint16_t, u16, POW_OP)\nBINARY_OP_FOR_TYPE(pow, int32_t, i32, POW_OP)\nBINARY_OP_FOR_TYPE(pow, int64_t, i64, POW_OP)\nBINARY_OP_FOR_TYPE(pow, uint32_t, u32, POW_OP)\nBINARY_OP_FOR_TYPE(pow, uint64_t, u64, POW_OP)\nBINARY_OP_FOR_TYPE(pow, intnat, inat, POW_OP)\n\nBINARY_OP_FOR_TYPE(pow, float, f32, FPOW_OP)\nBINARY_OP_FOR_TYPE(pow, double, f64, DPOW_OP)\nLOW_PREC_OP_KERNEL(pow, uint16_t, f16, FPOW_OP, half_to_float, float_to_half)\nLOW_PREC_OP_IMPL(pow, uint16_t, f16)\nLOW_PREC_OP_KERNEL(pow, caml_ba_bfloat16, bf16, FPOW_OP, bfloat16_to_float,\n                   float_to_bfloat16)\nLOW_PREC_OP_IMPL(pow, caml_ba_bfloat16, bf16)\nLOW_PREC_OP_KERNEL(pow, caml_ba_fp8_e4m3, f8e4m3, FPOW_OP, fp8_e4m3_to_float,\n                   float_to_fp8_e4m3)\nLOW_PREC_OP_IMPL(pow, caml_ba_fp8_e4m3, 
f8e4m3)\nLOW_PREC_OP_KERNEL(pow, caml_ba_fp8_e5m2, f8e5m2, FPOW_OP, fp8_e5m2_to_float,\n                   float_to_fp8_e5m2)\nLOW_PREC_OP_IMPL(pow, caml_ba_fp8_e5m2, f8e5m2)\n\n// Complex power using cpow\n#define CPOW32_OP(x, y) (cpowf((x), (y)))\n#define CPOW64_OP(x, y) (cpow((x), (y)))\nBINARY_OP_FOR_TYPE(pow, complex32, c32, CPOW32_OP)\nBINARY_OP_FOR_TYPE(pow, complex64, c64, CPOW64_OP)\n\n\nINT4_OP_IMPL(pow, 1, i4, POW_OP)\nINT4_OP_IMPL(pow, 0, u4, POW_OP)\nBINARY_OP_FOR_TYPE(pow, caml_ba_bool, bool_, POW_OP)\nBUILD_DISPATCH_TABLE(pow);\n\n// =========== ATAN2 ===========\n#define ATAN2F_OP(x, y) (atan2f((x), (y)))\n#define ATAN2D_OP(x, y) (atan2((x), (y)))\n\nBINARY_OP_FOR_TYPE(atan2, float, f32, ATAN2F_OP)\nBINARY_OP_FOR_TYPE(atan2, double, f64, ATAN2D_OP)\n\nLOW_PREC_OP_KERNEL(atan2, uint16_t, f16, ATAN2F_OP, half_to_float, float_to_half)\nLOW_PREC_OP_IMPL(atan2, uint16_t, f16)\nLOW_PREC_OP_KERNEL(atan2, caml_ba_bfloat16, bf16, ATAN2F_OP, bfloat16_to_float,\n                   float_to_bfloat16)\nLOW_PREC_OP_IMPL(atan2, caml_ba_bfloat16, bf16)\nLOW_PREC_OP_KERNEL(atan2, caml_ba_fp8_e4m3, f8e4m3, ATAN2F_OP,\n                   fp8_e4m3_to_float, float_to_fp8_e4m3)\nLOW_PREC_OP_IMPL(atan2, caml_ba_fp8_e4m3, f8e4m3)\nLOW_PREC_OP_KERNEL(atan2, caml_ba_fp8_e5m2, f8e5m2, ATAN2F_OP,\n                   fp8_e5m2_to_float, float_to_fp8_e5m2)\nLOW_PREC_OP_IMPL(atan2, caml_ba_fp8_e5m2, f8e5m2)\n\nstatic const binary_op_table atan2_table = {.i8 = NULL,\n                                            .u8 = NULL,\n                                            .i16 = NULL,\n                                            .u16 = NULL,\n                                            .i32 = NULL,\n                                            .i64 = NULL,\n                                            .u32 = NULL,\n                                            .u64 = NULL,\n                                            .inat = NULL,\n                                            .f16 = 
nx_c_atan2_f16,\n                                            .f32 = nx_c_atan2_f32,\n                                            .f64 = nx_c_atan2_f64,\n                                            .c32 = NULL,\n                                            .c64 = NULL,\n                                            .bf16 = nx_c_atan2_bf16,\n                                            .bool_ = NULL,\n                                            .i4 = NULL,\n                                            .u4 = NULL,\n                                            .f8e4m3 = nx_c_atan2_f8e4m3,\n                                            .f8e5m2 = nx_c_atan2_f8e5m2};\n\n// =========== COMPARISON - LESS THAN ===========\n#define CMPLT_OP(x, y) ((x) < (y) ? 1 : 0)\n\n// Comparison operations that output uint8\n#define COMPARISON_OP_FOR_TYPE(name, T, suffix, OP)                            \\\n  static void nx_c_##name##_##suffix##_kernel(void *x_data, void *y_data,      \\\n                                              void *z_data, long x_off,        \\\n                                              long y_off, long z_off) {        \\\n    T *x = (T *)x_data;                                                        \\\n    T *y = (T *)y_data;                                                        \\\n    uint8_t *z = (uint8_t *)z_data;                                            \\\n    z[z_off] = OP(x[x_off], y[y_off]);                                         \\\n  }                                                                            \\\n  static void nx_c_##name##_##suffix(const ndarray_t *x, const ndarray_t *y,   \\\n                                     ndarray_t *z) {                           \\\n    if (!x || !y || !z) {                                                      \\\n      fprintf(stderr, \"nx: nx_c_\" #name \"_\" #suffix \": null pointer\\n\");       \\\n      abort();                                                                 \\\n    }                     
                                                     \\\n    long total = total_elements_safe(x);                                       \\\n    if (total == 0) return;                                                    \\\n                                                                               \\\n    if (is_fully_contiguous(x, y, z)) {                                        \\\n      _Pragma(\"omp parallel for simd if(total > 1000)\") for (long i = 0;       \\\n                                                             i < total; i++) { \\\n        nx_c_##name##_##suffix##_kernel(x->data, y->data, z->data,             \\\n                                        x->offset + i, y->offset + i,          \\\n                                        z->offset + i);                        \\\n      }                                                                        \\\n    } else if (x->shape[0] > 1 && total / x->shape[0] > 50) {                  \\\n      _Pragma(\"omp parallel for if(x->shape[0] > 4)\") for (long i = 0;         \\\n                                                           i < x->shape[0];    \\\n                                                           i++) {              \\\n        iterate_inner_dims(x, y, z, i, nx_c_##name##_##suffix##_kernel,        \\\n                           x->data, y->data, z->data);                         \\\n      }                                                                        \\\n    } else {                                                                   \\\n      nd_iterator_t it;                                                        \\\n      nd_iterator_init_safe(&it, x, y, z);                                     \\\n      do {                                                                     \\\n        long x_off, y_off, z_off;                                              \\\n        nd_iterator_get_offsets(&it, &x_off, &y_off, &z_off);                  \\\n        
nx_c_##name##_##suffix##_kernel(x->data, y->data, z->data,             \\\n                                        x->offset + x_off, y->offset + y_off,  \\\n                                        z->offset + z_off);                    \\\n      } while (nd_iterator_next(&it));                                         \\\n      nd_iterator_destroy(&it);                                                \\\n    }                                                                          \\\n  }\n\nCOMPARISON_OP_FOR_TYPE(cmplt, int8_t, i8, CMPLT_OP)\nCOMPARISON_OP_FOR_TYPE(cmplt, uint8_t, u8, CMPLT_OP)\nCOMPARISON_OP_FOR_TYPE(cmplt, int16_t, i16, CMPLT_OP)\nCOMPARISON_OP_FOR_TYPE(cmplt, uint16_t, u16, CMPLT_OP)\nCOMPARISON_OP_FOR_TYPE(cmplt, int32_t, i32, CMPLT_OP)\nCOMPARISON_OP_FOR_TYPE(cmplt, int64_t, i64, CMPLT_OP)\nCOMPARISON_OP_FOR_TYPE(cmplt, uint32_t, u32, CMPLT_OP)\nCOMPARISON_OP_FOR_TYPE(cmplt, uint64_t, u64, CMPLT_OP)\nCOMPARISON_OP_FOR_TYPE(cmplt, intnat, inat, CMPLT_OP)\nCOMPARISON_OP_FOR_TYPE(cmplt, float, f32, CMPLT_OP)\nCOMPARISON_OP_FOR_TYPE(cmplt, double, f64, CMPLT_OP)\n\n// Low precision comparisons\nstatic void nx_c_cmplt_f16_kernel(void *x_data, void *y_data, void *z_data,\n                                  long x_off, long y_off, long z_off) {\n  uint16_t *x = (uint16_t *)x_data;\n  uint16_t *y = (uint16_t *)y_data;\n  bool *z = (bool *)z_data;\n  float a = half_to_float(x[x_off]);\n  float b = half_to_float(y[y_off]);\n  z[z_off] = a < b ? 
1 : 0;\n}\nBINARY_OP_IMPL(cmplt, uint16_t, f16)\n\n// Similar for other low-precision types\n#define LOW_PREC_CMP_KERNEL(name, T, suffix, OP, TO_FLOAT)                \\\n  static void nx_c_##name##_##suffix##_kernel(void *x_data, void *y_data, \\\n                                              void *z_data, long x_off,   \\\n                                              long y_off, long z_off) {   \\\n    T *x = (T *)x_data;                                                   \\\n    T *y = (T *)y_data;                                                   \\\n    bool *z = (bool *)z_data;                                             \\\n    float a = TO_FLOAT(x[x_off]);                                         \\\n    float b = TO_FLOAT(y[y_off]);                                         \\\n    z[z_off] = OP(a, b);                                                  \\\n  }                                                                       \\\n  BINARY_OP_IMPL(name, T, suffix)\n\nLOW_PREC_CMP_KERNEL(cmplt, caml_ba_bfloat16, bf16, CMPLT_OP, bfloat16_to_float)\nLOW_PREC_CMP_KERNEL(cmplt, caml_ba_fp8_e4m3, f8e4m3, CMPLT_OP,\n                    fp8_e4m3_to_float)\nLOW_PREC_CMP_KERNEL(cmplt, caml_ba_fp8_e5m2, f8e5m2, CMPLT_OP,\n                    fp8_e5m2_to_float)\n\n// Complex comparison not well-defined\n\n// Int4 comparison implementation - unpacks 4-bit values and outputs bool (0 or 1)\n#define INT4_COMPARISON_OP_IMPL(name, signedness, suffix, OP)                  \\\n  static void nx_c_##name##_##suffix##_kernel(void *x_data, void *y_data,      \\\n                                              void *z_data, long x_off,        \\\n                                              long y_off, long z_off) {        \\\n    uint8_t *x = (uint8_t *)x_data;                                            \\\n    uint8_t *y = (uint8_t *)y_data;                                            \\\n    bool *z = (bool *)z_data;                                                  \\\n    /* Unpack x value */          
                                             \\\n    long x_byte_off = x_off / 2;                                               \\\n    int x_nib_off = x_off % 2;                                                 \\\n    int a = x_nib_off ? (signedness ? (int8_t)(x[x_byte_off] >> 4)             \\\n                                    : (x[x_byte_off] >> 4) & 0x0F)             \\\n                      : (signedness ? (int8_t)((x[x_byte_off] & 0x0F) << 4) >> 4 \\\n                                    : x[x_byte_off] & 0x0F);                    \\\n    /* Unpack y value */                                                       \\\n    long y_byte_off = y_off / 2;                                               \\\n    int y_nib_off = y_off % 2;                                                 \\\n    int b = y_nib_off ? (signedness ? (int8_t)(y[y_byte_off] >> 4)             \\\n                                    : (y[y_byte_off] >> 4) & 0x0F)             \\\n                      : (signedness ? 
(int8_t)((y[y_byte_off] & 0x0F) << 4) >> 4 \\\n                                    : y[y_byte_off] & 0x0F);                    \\\n    /* Store comparison result (0 or 1) */                                     \\\n    z[z_off] = OP(a, b);                                                       \\\n  }                                                                            \\\n  static void nx_c_##name##_##suffix(const ndarray_t *x, const ndarray_t *y,   \\\n                                     ndarray_t *z) {                           \\\n    if (is_fully_contiguous(x, y, z)) {                                        \\\n      long total = total_elements_safe(x);                                     \\\n      /* Pass element offsets through so the kernel's nibble indexing */       \\\n      /* (off / 2, off % 2) stays correct for packed int4 data.       */       \\\n      _Pragma(\"omp parallel for if(total > 10000)\") for (long i = 0;           \\\n                                                         i < total; i++) {     \\\n        nx_c_##name##_##suffix##_kernel(x->data, y->data, z->data,             \\\n                                        x->offset + i, y->offset + i,          \\\n                                        z->offset + i);                        \\\n      }                                                                        \\\n    } else {                                                                   \\\n      nd_iterator_t it;                                                        \\\n      nd_iterator_init_safe(&it, x, y, z);                                     \\\n      void *x_data = x->data;                                                  \\\n      void *y_data = y->data;                                                  \\\n      void *z_data = z->data;                                                  \\\n      do {                                                                     \\\n        long x_off, y_off, z_off;                                              \\\n        nd_iterator_get_offsets(&it, 
&x_off, &y_off, &z_off);                  \\\n        nx_c_##name##_##suffix##_kernel(x_data, y_data, z_data,                \\\n                                        x_off + x->offset, y_off + y->offset,  \\\n                                        z_off + z->offset);                    \\\n      } while (nd_iterator_next(&it));                                         \\\n      nd_iterator_destroy(&it);                                                \\\n    }                                                                          \\\n  }\n\n// Define comparison operators\n#define CMPGT_OP(x, y) ((x) > (y) ? 1 : 0)\n#define CMPLE_OP(x, y) ((x) <= (y) ? 1 : 0)\n#define CMPGE_OP(x, y) ((x) >= (y) ? 1 : 0)\n#define CMPEQ_OP(x, y) ((x) == (y) ? 1 : 0)\n#define CMPNE_OP(x, y) ((x) != (y) ? 1 : 0)\n\n// Generate int4/uint4 comparison operations\nINT4_COMPARISON_OP_IMPL(cmplt, 1, i4, CMPLT_OP)\nINT4_COMPARISON_OP_IMPL(cmplt, 0, u4, CMPLT_OP)\nINT4_COMPARISON_OP_IMPL(cmpgt, 1, i4, CMPGT_OP)\nINT4_COMPARISON_OP_IMPL(cmpgt, 0, u4, CMPGT_OP)\nINT4_COMPARISON_OP_IMPL(cmple, 1, i4, CMPLE_OP)\nINT4_COMPARISON_OP_IMPL(cmple, 0, u4, CMPLE_OP)\nINT4_COMPARISON_OP_IMPL(cmpge, 1, i4, CMPGE_OP)\nINT4_COMPARISON_OP_IMPL(cmpge, 0, u4, CMPGE_OP)\nINT4_COMPARISON_OP_IMPL(cmpeq, 1, i4, CMPEQ_OP)\nINT4_COMPARISON_OP_IMPL(cmpeq, 0, u4, CMPEQ_OP)\nINT4_COMPARISON_OP_IMPL(cmpne, 1, i4, CMPNE_OP)\nINT4_COMPARISON_OP_IMPL(cmpne, 0, u4, CMPNE_OP)\n\nCOMPARISON_OP_FOR_TYPE(cmplt, caml_ba_bool, bool_, CMPLT_OP)\n\n// Build dispatch table with NULL for unsupported complex types\nstatic const binary_op_table cmplt_table = {.i8 = nx_c_cmplt_i8,\n                                            .u8 = nx_c_cmplt_u8,\n                                            .i16 = nx_c_cmplt_i16,\n                                            .u16 = nx_c_cmplt_u16,\n                                            .i32 = nx_c_cmplt_i32,\n                                        
    .i64 = nx_c_cmplt_i64,\n                                            .u32 = nx_c_cmplt_u32,\n                                            .u64 = nx_c_cmplt_u64,\n                                            .inat = nx_c_cmplt_inat,\n                                            .f16 = nx_c_cmplt_f16,\n                                            .f32 = nx_c_cmplt_f32,\n                                            .f64 = nx_c_cmplt_f64,\n                                            .c32 = NULL,\n                                            .c64 = NULL,\n                                            .bf16 = nx_c_cmplt_bf16,\n                                            .bool_ = nx_c_cmplt_bool_,\n                                            .i4 = nx_c_cmplt_i4,\n                                            .u4 = nx_c_cmplt_u4,\n                                            .f8e4m3 = nx_c_cmplt_f8e4m3,\n                                            .f8e5m2 = nx_c_cmplt_f8e5m2};\n\n// =========== COMPARISON - NOT EQUAL ===========\nCOMPARISON_OP_FOR_TYPE(cmpne, int8_t, i8, CMPNE_OP)\nCOMPARISON_OP_FOR_TYPE(cmpne, uint8_t, u8, CMPNE_OP)\nCOMPARISON_OP_FOR_TYPE(cmpne, int16_t, i16, CMPNE_OP)\nCOMPARISON_OP_FOR_TYPE(cmpne, uint16_t, u16, CMPNE_OP)\nCOMPARISON_OP_FOR_TYPE(cmpne, int32_t, i32, CMPNE_OP)\nCOMPARISON_OP_FOR_TYPE(cmpne, int64_t, i64, CMPNE_OP)\nCOMPARISON_OP_FOR_TYPE(cmpne, uint32_t, u32, CMPNE_OP)\nCOMPARISON_OP_FOR_TYPE(cmpne, uint64_t, u64, CMPNE_OP)\nCOMPARISON_OP_FOR_TYPE(cmpne, intnat, inat, CMPNE_OP)\nCOMPARISON_OP_FOR_TYPE(cmpne, float, f32, CMPNE_OP)\nCOMPARISON_OP_FOR_TYPE(cmpne, double, f64, CMPNE_OP)\n\n// Low precision\nstatic void nx_c_cmpne_f16_kernel(void *x_data, void *y_data, void *z_data,\n                                  long x_off, long y_off, long z_off) {\n  uint16_t *x = (uint16_t *)x_data;\n  uint16_t *y = (uint16_t *)y_data;\n  uint8_t *z = (uint8_t *)z_data;\n  float a = half_to_float(x[x_off]);\n  float b = half_to_float(y[y_off]);\n  z[z_off] = 
a != b ? 1 : 0;\n}\nBINARY_OP_IMPL(cmpne, uint16_t, f16)\n\nLOW_PREC_CMP_KERNEL(cmpne, caml_ba_bfloat16, bf16, CMPNE_OP, bfloat16_to_float)\nLOW_PREC_CMP_KERNEL(cmpne, caml_ba_fp8_e4m3, f8e4m3, CMPNE_OP,\n                    fp8_e4m3_to_float)\nLOW_PREC_CMP_KERNEL(cmpne, caml_ba_fp8_e5m2, f8e5m2, CMPNE_OP,\n                    fp8_e5m2_to_float)\n\n// Complex comparison for inequality\nstatic void nx_c_cmpne_c32_kernel(void *x_data, void *y_data, void *z_data,\n                                  long x_off, long y_off, long z_off) {\n  complex32 *x = (complex32 *)x_data;\n  complex32 *y = (complex32 *)y_data;\n  uint8_t *z = (uint8_t *)z_data;\n  z[z_off] = (x[x_off] != y[y_off]) ? 1 : 0;\n}\nBINARY_OP_IMPL(cmpne, complex32, c32)\n\nstatic void nx_c_cmpne_c64_kernel(void *x_data, void *y_data, void *z_data,\n                                  long x_off, long y_off, long z_off) {\n  complex64 *x = (complex64 *)x_data;\n  complex64 *y = (complex64 *)y_data;\n  uint8_t *z = (uint8_t *)z_data;\n  z[z_off] = (x[x_off] != y[y_off]) ? 
1 : 0;\n}\nBINARY_OP_IMPL(cmpne, complex64, c64)\n\n// Int4 variants are generated above via INT4_COMPARISON_OP_IMPL\n\nCOMPARISON_OP_FOR_TYPE(cmpne, caml_ba_bool, bool_, CMPNE_OP)\n\n// Build dispatch table (every dtype has a handler)\nstatic const binary_op_table cmpne_table = {.i8 = nx_c_cmpne_i8,\n                                            .u8 = nx_c_cmpne_u8,\n                                            .i16 = nx_c_cmpne_i16,\n                                            .u16 = nx_c_cmpne_u16,\n                                            .i32 = nx_c_cmpne_i32,\n                                            .i64 = nx_c_cmpne_i64,\n                                            .u32 = nx_c_cmpne_u32,\n                                            .u64 = nx_c_cmpne_u64,\n                                            .inat = nx_c_cmpne_inat,\n                                            .f16 = nx_c_cmpne_f16,\n                                            .f32 = nx_c_cmpne_f32,\n                                            .f64 = nx_c_cmpne_f64,\n                                            .c32 = nx_c_cmpne_c32,\n                                            .c64 = nx_c_cmpne_c64,\n                                            .bf16 = nx_c_cmpne_bf16,\n                                            .bool_ = nx_c_cmpne_bool_,\n                                            .i4 = nx_c_cmpne_i4,\n                                            .u4 = nx_c_cmpne_u4,\n                                            .f8e4m3 = nx_c_cmpne_f8e4m3,\n                                            .f8e5m2 = nx_c_cmpne_f8e5m2};\n\n// =========== COMPARISON - EQUAL ===========\nCOMPARISON_OP_FOR_TYPE(cmpeq, int8_t, i8, CMPEQ_OP)\nCOMPARISON_OP_FOR_TYPE(cmpeq, uint8_t, u8, CMPEQ_OP)\nCOMPARISON_OP_FOR_TYPE(cmpeq, int16_t, i16, CMPEQ_OP)\nCOMPARISON_OP_FOR_TYPE(cmpeq, uint16_t, u16, CMPEQ_OP)\nCOMPARISON_OP_FOR_TYPE(cmpeq, int32_t, i32, CMPEQ_OP)\nCOMPARISON_OP_FOR_TYPE(cmpeq, int64_t, i64, 
CMPEQ_OP)\nCOMPARISON_OP_FOR_TYPE(cmpeq, uint32_t, u32, CMPEQ_OP)\nCOMPARISON_OP_FOR_TYPE(cmpeq, uint64_t, u64, CMPEQ_OP)\nCOMPARISON_OP_FOR_TYPE(cmpeq, intnat, inat, CMPEQ_OP)\nCOMPARISON_OP_FOR_TYPE(cmpeq, float, f32, CMPEQ_OP)\nCOMPARISON_OP_FOR_TYPE(cmpeq, double, f64, CMPEQ_OP)\n\n// Low precision\nstatic void nx_c_cmpeq_f16_kernel(void *x_data, void *y_data, void *z_data,\n                                  long x_off, long y_off, long z_off) {\n  uint16_t *x = (uint16_t *)x_data;\n  uint16_t *y = (uint16_t *)y_data;\n  uint8_t *z = (uint8_t *)z_data;\n  float a = half_to_float(x[x_off]);\n  float b = half_to_float(y[y_off]);\n  z[z_off] = a == b ? 1 : 0;\n}\nBINARY_OP_IMPL(cmpeq, uint16_t, f16)\n\nLOW_PREC_CMP_KERNEL(cmpeq, caml_ba_bfloat16, bf16, CMPEQ_OP, bfloat16_to_float)\nLOW_PREC_CMP_KERNEL(cmpeq, caml_ba_fp8_e4m3, f8e4m3, CMPEQ_OP,\n                    fp8_e4m3_to_float)\nLOW_PREC_CMP_KERNEL(cmpeq, caml_ba_fp8_e5m2, f8e5m2, CMPEQ_OP,\n                    fp8_e5m2_to_float)\n\n// Complex comparison for equality\nstatic void nx_c_cmpeq_c32_kernel(void *x_data, void *y_data, void *z_data,\n                                  long x_off, long y_off, long z_off) {\n  complex32 *x = (complex32 *)x_data;\n  complex32 *y = (complex32 *)y_data;\n  uint8_t *z = (uint8_t *)z_data;\n  z[z_off] = (x[x_off] == y[y_off]) ? 1 : 0;\n}\nBINARY_OP_IMPL(cmpeq, complex32, c32)\n\nstatic void nx_c_cmpeq_c64_kernel(void *x_data, void *y_data, void *z_data,\n                                  long x_off, long y_off, long z_off) {\n  complex64 *x = (complex64 *)x_data;\n  complex64 *y = (complex64 *)y_data;\n  uint8_t *z = (uint8_t *)z_data;\n  z[z_off] = (x[x_off] == y[y_off]) ? 
1 : 0;\n}\nBINARY_OP_IMPL(cmpeq, complex64, c64)\n\nCOMPARISON_OP_FOR_TYPE(cmpeq, caml_ba_bool, bool_, CMPEQ_OP)\n\nstatic const binary_op_table cmpeq_table = {.i8 = nx_c_cmpeq_i8,\n                                            .u8 = nx_c_cmpeq_u8,\n                                            .i16 = nx_c_cmpeq_i16,\n                                            .u16 = nx_c_cmpeq_u16,\n                                            .i32 = nx_c_cmpeq_i32,\n                                            .i64 = nx_c_cmpeq_i64,\n                                            .u32 = nx_c_cmpeq_u32,\n                                            .u64 = nx_c_cmpeq_u64,\n                                            .inat = nx_c_cmpeq_inat,\n                                            .f16 = nx_c_cmpeq_f16,\n                                            .f32 = nx_c_cmpeq_f32,\n                                            .f64 = nx_c_cmpeq_f64,\n                                            .c32 = nx_c_cmpeq_c32,\n                                            .c64 = nx_c_cmpeq_c64,\n                                            .bf16 = nx_c_cmpeq_bf16,\n                                            .bool_ = nx_c_cmpeq_bool_,\n                                            .i4 = nx_c_cmpeq_i4,\n                                            .u4 = nx_c_cmpeq_u4,\n                                            .f8e4m3 = nx_c_cmpeq_f8e4m3,\n                                            .f8e5m2 = nx_c_cmpeq_f8e5m2};\n\n// =========== COMPARISON - LESS THAN OR EQUAL ===========\nCOMPARISON_OP_FOR_TYPE(cmple, int8_t, i8, CMPLE_OP)\nCOMPARISON_OP_FOR_TYPE(cmple, uint8_t, u8, CMPLE_OP)\nCOMPARISON_OP_FOR_TYPE(cmple, int16_t, i16, CMPLE_OP)\nCOMPARISON_OP_FOR_TYPE(cmple, uint16_t, u16, CMPLE_OP)\nCOMPARISON_OP_FOR_TYPE(cmple, int32_t, i32, CMPLE_OP)\nCOMPARISON_OP_FOR_TYPE(cmple, int64_t, i64, CMPLE_OP)\nCOMPARISON_OP_FOR_TYPE(cmple, uint32_t, u32, CMPLE_OP)\nCOMPARISON_OP_FOR_TYPE(cmple, uint64_t, u64, 
CMPLE_OP)\nCOMPARISON_OP_FOR_TYPE(cmple, intnat, inat, CMPLE_OP)\nCOMPARISON_OP_FOR_TYPE(cmple, float, f32, CMPLE_OP)\nCOMPARISON_OP_FOR_TYPE(cmple, double, f64, CMPLE_OP)\n\n// Low precision comparisons\nstatic void nx_c_cmple_f16_kernel(void *x_data, void *y_data, void *z_data,\n                                  long x_off, long y_off, long z_off) {\n  uint16_t *x = (uint16_t *)x_data;\n  uint16_t *y = (uint16_t *)y_data;\n  uint8_t *z = (uint8_t *)z_data;\n  float a = half_to_float(x[x_off]);\n  float b = half_to_float(y[y_off]);\n  z[z_off] = a <= b ? 1 : 0;\n}\nBINARY_OP_IMPL(cmple, uint16_t, f16)\n\nLOW_PREC_CMP_KERNEL(cmple, caml_ba_bfloat16, bf16, CMPLE_OP, bfloat16_to_float)\nLOW_PREC_CMP_KERNEL(cmple, caml_ba_fp8_e4m3, f8e4m3, CMPLE_OP,\n                    fp8_e4m3_to_float)\nLOW_PREC_CMP_KERNEL(cmple, caml_ba_fp8_e5m2, f8e5m2, CMPLE_OP,\n                    fp8_e5m2_to_float)\n\nCOMPARISON_OP_FOR_TYPE(cmple, caml_ba_bool, bool_, CMPLE_OP)\n\n// Build dispatch table with NULL for unsupported complex types\nstatic const binary_op_table cmple_table = {.i8 = nx_c_cmple_i8,\n                                            .u8 = nx_c_cmple_u8,\n                                            .i16 = nx_c_cmple_i16,\n                                            .u16 = nx_c_cmple_u16,\n                                            .i32 = nx_c_cmple_i32,\n                                            .i64 = nx_c_cmple_i64,\n                                            .u32 = nx_c_cmple_u32,\n                                            .u64 = nx_c_cmple_u64,\n                                            .inat = nx_c_cmple_inat,\n                                            .f16 = nx_c_cmple_f16,\n                                            .f32 = nx_c_cmple_f32,\n                                            .f64 = nx_c_cmple_f64,\n                                            .c32 = NULL,\n                                            .c64 = NULL,\n                                     
       .bf16 = nx_c_cmple_bf16,\n                                            .bool_ = nx_c_cmple_bool_,\n                                            .i4 = nx_c_cmple_i4,\n                                            .u4 = nx_c_cmple_u4,\n                                            .f8e4m3 = nx_c_cmple_f8e4m3,\n                                            .f8e5m2 = nx_c_cmple_f8e5m2};\n\n// =========== BITWISE XOR ===========\n#define XOR_OP(x, y) ((x) ^ (y))\n\n// Bitwise operations only for integer types\nBINARY_OP_FOR_TYPE(xor, int8_t, i8, XOR_OP)\nBINARY_OP_FOR_TYPE(xor, uint8_t, u8, XOR_OP)\nBINARY_OP_FOR_TYPE(xor, int16_t, i16, XOR_OP)\nBINARY_OP_FOR_TYPE(xor, uint16_t, u16, XOR_OP)\nBINARY_OP_FOR_TYPE(xor, int32_t, i32, XOR_OP)\nBINARY_OP_FOR_TYPE(xor, int64_t, i64, XOR_OP)\nBINARY_OP_FOR_TYPE(xor, uint32_t, u32, XOR_OP)\nBINARY_OP_FOR_TYPE(xor, uint64_t, u64, XOR_OP)\nBINARY_OP_FOR_TYPE(xor, intnat, inat, XOR_OP)\n\n// Float bitwise operations not well-defined\n\nINT4_OP_IMPL(xor, 1, i4, XOR_OP)\nINT4_OP_IMPL(xor, 0, u4, XOR_OP)\nBINARY_OP_FOR_TYPE(xor, caml_ba_bool, bool_, XOR_OP)\n\n// Build dispatch table with NULL for unsupported float/complex types\nstatic const binary_op_table xor_table = {.i8 = nx_c_xor_i8,\n                                          .u8 = nx_c_xor_u8,\n                                          .i16 = nx_c_xor_i16,\n                                          .u16 = nx_c_xor_u16,\n                                          .i32 = nx_c_xor_i32,\n                                          .i64 = nx_c_xor_i64,\n                                          .u32 = nx_c_xor_u32,\n                                          .u64 = nx_c_xor_u64,\n                                          .inat = nx_c_xor_inat,\n                                          .f16 = NULL,\n                                          .f32 = NULL,\n                                          .f64 = NULL,\n                                          .c32 = NULL,\n                        
                  .c64 = NULL,\n                                          .bf16 = NULL,\n                                          .bool_ = nx_c_xor_bool_,\n                                          .i4 = nx_c_xor_i4,\n                                          .u4 = nx_c_xor_u4,\n                                          .f8e4m3 = NULL,\n                                          .f8e5m2 = NULL};\n\n// =========== BITWISE OR ===========\n#define OR_OP(x, y) ((x) | (y))\n\nBINARY_OP_FOR_TYPE(or, int8_t, i8, OR_OP)\nBINARY_OP_FOR_TYPE(or, uint8_t, u8, OR_OP)\nBINARY_OP_FOR_TYPE(or, int16_t, i16, OR_OP)\nBINARY_OP_FOR_TYPE(or, uint16_t, u16, OR_OP)\nBINARY_OP_FOR_TYPE(or, int32_t, i32, OR_OP)\nBINARY_OP_FOR_TYPE(or, int64_t, i64, OR_OP)\nBINARY_OP_FOR_TYPE(or, uint32_t, u32, OR_OP)\nBINARY_OP_FOR_TYPE(or, uint64_t, u64, OR_OP)\nBINARY_OP_FOR_TYPE(or, intnat, inat, OR_OP)\n\n// Float bitwise operations not well-defined\n\nINT4_OP_IMPL(or, 1, i4, OR_OP)\nINT4_OP_IMPL(or, 0, u4, OR_OP)\nBINARY_OP_FOR_TYPE(or, caml_ba_bool, bool_, OR_OP)\n\n// Build dispatch table with NULL for unsupported float/complex types\nstatic const binary_op_table or_table = {.i8 = nx_c_or_i8,\n                                         .u8 = nx_c_or_u8,\n                                         .i16 = nx_c_or_i16,\n                                         .u16 = nx_c_or_u16,\n                                         .i32 = nx_c_or_i32,\n                                         .i64 = nx_c_or_i64,\n                                         .u32 = nx_c_or_u32,\n                                         .u64 = nx_c_or_u64,\n                                         .inat = nx_c_or_inat,\n                                         .f16 = NULL,\n                                         .f32 = NULL,\n                                         .f64 = NULL,\n                                         .c32 = NULL,\n                                         .c64 = NULL,\n                                         .bf16 = 
NULL,\n                                         .bool_ = nx_c_or_bool_,\n                                         .i4 = nx_c_or_i4,\n                                         .u4 = nx_c_or_u4,\n                                         .f8e4m3 = NULL,\n                                         .f8e5m2 = NULL};\n\n// =========== BITWISE AND ===========\n#define AND_OP(x, y) ((x) & (y))\n\nBINARY_OP_FOR_TYPE(and, int8_t, i8, AND_OP)\nBINARY_OP_FOR_TYPE(and, uint8_t, u8, AND_OP)\nBINARY_OP_FOR_TYPE(and, int16_t, i16, AND_OP)\nBINARY_OP_FOR_TYPE(and, uint16_t, u16, AND_OP)\nBINARY_OP_FOR_TYPE(and, int32_t, i32, AND_OP)\nBINARY_OP_FOR_TYPE(and, int64_t, i64, AND_OP)\nBINARY_OP_FOR_TYPE(and, uint32_t, u32, AND_OP)\nBINARY_OP_FOR_TYPE(and, uint64_t, u64, AND_OP)\nBINARY_OP_FOR_TYPE(and, intnat, inat, AND_OP)\n\n// Float bitwise operations not well-defined\n\nINT4_OP_IMPL(and, 1, i4, AND_OP)\nINT4_OP_IMPL(and, 0, u4, AND_OP)\nBINARY_OP_FOR_TYPE(and, caml_ba_bool, bool_, AND_OP)\n\n// Build dispatch table with NULL for unsupported float/complex types\nstatic const binary_op_table and_table = {.i8 = nx_c_and_i8,\n                                          .u8 = nx_c_and_u8,\n                                          .i16 = nx_c_and_i16,\n                                          .u16 = nx_c_and_u16,\n                                          .i32 = nx_c_and_i32,\n                                          .i64 = nx_c_and_i64,\n                                          .u32 = nx_c_and_u32,\n                                          .u64 = nx_c_and_u64,\n                                          .inat = nx_c_and_inat,\n                                          .f16 = NULL,\n                                          .f32 = NULL,\n                                          .f64 = NULL,\n                                          .c32 = NULL,\n                                          .c64 = NULL,\n                                          .bf16 = NULL,\n                                 
         .bool_ = nx_c_and_bool_,\n                                          .i4 = nx_c_and_i4,\n                                          .u4 = nx_c_and_u4,\n                                          .f8e4m3 = NULL,\n                                          .f8e5m2 = NULL};\n\n// Shared dispatch infrastructure\n\n// Generic dispatch function for binary operations\nstatic void dispatch_binary_op(value v_x, value v_y, value v_z,\n                               const binary_op_table *table,\n                               const char *op_name) {\n  // Extract ndarrays from FFI tensors\n  ndarray_t x = extract_ndarray(v_x);\n  ndarray_t y = extract_ndarray(v_y);\n  ndarray_t z = extract_ndarray(v_z);\n\n  // Check shapes match\n  if (x.ndim != y.ndim || x.ndim != z.ndim) {\n    cleanup_ndarray(&x);\n    cleanup_ndarray(&y);\n    cleanup_ndarray(&z);\n    caml_failwith(\"shape mismatch\");\n  }\n  for (int i = 0; i < x.ndim; i++) {\n    if (x.shape[i] != y.shape[i] || x.shape[i] != z.shape[i]) {\n      cleanup_ndarray(&x);\n      cleanup_ndarray(&y);\n      cleanup_ndarray(&z);\n      caml_failwith(\"shape mismatch\");\n    }\n  }\n\n  // Get bigarray kind from the data field\n  value v_x_data = Field(v_x, FFI_TENSOR_DATA);\n  value v_y_data = Field(v_y, FFI_TENSOR_DATA);\n  value v_z_data = Field(v_z, FFI_TENSOR_DATA);\n\n  struct caml_ba_array *ba = Caml_ba_array_val(v_x_data);\n  int kind = nx_buffer_get_kind(ba);\n\n  // Check kinds match for y and z\n  int kind_y = nx_buffer_get_kind(Caml_ba_array_val(v_y_data));\n  int kind_z = nx_buffer_get_kind(Caml_ba_array_val(v_z_data));\n  if (kind != kind_y || kind != kind_z) {\n    cleanup_ndarray(&x);\n    cleanup_ndarray(&y);\n    cleanup_ndarray(&z);\n    caml_failwith(\"dtype mismatch\");\n  }\n\n  // Select operation based on dtype\n  binary_op_t op = NULL;\n  switch (kind) {\n    case CAML_BA_SINT8:\n      op = table->i8;\n      break;\n    case CAML_BA_UINT8:\n      op = table->u8;\n      break;\n    case 
CAML_BA_SINT16:\n      op = table->i16;\n      break;\n    case CAML_BA_UINT16:\n      op = table->u16;\n      break;\n    case CAML_BA_INT32:\n      op = table->i32;\n      break;\n    case CAML_BA_INT64:\n      op = table->i64;\n      break;\n    case NX_BA_UINT32:\n      op = table->u32;\n      break;\n    case NX_BA_UINT64:\n      op = table->u64;\n      break;\n    case CAML_BA_CAML_INT:\n    case CAML_BA_NATIVE_INT:\n      op = table->inat;\n      break;\n    case CAML_BA_FLOAT16:\n      op = table->f16;\n      break;\n    case CAML_BA_FLOAT32:\n      op = table->f32;\n      break;\n    case CAML_BA_FLOAT64:\n      op = table->f64;\n      break;\n    case CAML_BA_COMPLEX32:\n      op = table->c32;\n      break;\n    case CAML_BA_COMPLEX64:\n      op = table->c64;\n      break;\n    case NX_BA_BFLOAT16:\n      op = table->bf16;\n      break;\n    case NX_BA_BOOL:\n      op = table->bool_;\n      break;\n    case NX_BA_INT4:\n      op = table->i4;\n      break;\n    case NX_BA_UINT4:\n      op = table->u4;\n      break;\n    case NX_BA_FP8_E4M3:\n      op = table->f8e4m3;\n      break;\n    case NX_BA_FP8_E5M2:\n      op = table->f8e5m2;\n      break;\n    default:\n      cleanup_ndarray(&x);\n      cleanup_ndarray(&y);\n      cleanup_ndarray(&z);\n      caml_failwith(\"dispatch_binary_op: unsupported dtype\");\n  }\n\n  if (!op) {\n    char msg[256];\n    snprintf(msg, sizeof(msg), \"%s: operation not supported for dtype\",\n             op_name);\n    cleanup_ndarray(&x);\n    cleanup_ndarray(&y);\n    cleanup_ndarray(&z);\n    caml_failwith(msg);\n  }\n\n  // Enter blocking section for potentially long computation\n  caml_enter_blocking_section();\n  op(&x, &y, &z);\n  caml_leave_blocking_section();\n\n  // Clean up if heap allocated\n  cleanup_ndarray(&x);\n  cleanup_ndarray(&y);\n  cleanup_ndarray(&z);\n}\n\n// Generic dispatch function for comparison operations (output is always bool)\nstatic void dispatch_comparison_op(value v_x, value v_y, value v_z,\n  
                                 const binary_op_table *table,\n                                   const char *op_name) {\n  // Extract ndarrays from FFI tensors\n  ndarray_t x = extract_ndarray(v_x);\n  ndarray_t y = extract_ndarray(v_y);\n  ndarray_t z = extract_ndarray(v_z);\n\n  // Check shapes match\n  if (x.ndim != y.ndim || x.ndim != z.ndim) {\n    cleanup_ndarray(&x);\n    cleanup_ndarray(&y);\n    cleanup_ndarray(&z);\n    caml_failwith(\"shape mismatch\");\n  }\n  for (int i = 0; i < x.ndim; i++) {\n    if (x.shape[i] != y.shape[i] || x.shape[i] != z.shape[i]) {\n      cleanup_ndarray(&x);\n      cleanup_ndarray(&y);\n      cleanup_ndarray(&z);\n      caml_failwith(\"shape mismatch\");\n    }\n  }\n\n  // Get bigarray kind from the data field\n  value v_x_data = Field(v_x, FFI_TENSOR_DATA);\n  value v_y_data = Field(v_y, FFI_TENSOR_DATA);\n  value v_z_data = Field(v_z, FFI_TENSOR_DATA);\n\n  struct caml_ba_array *ba = Caml_ba_array_val(v_x_data);\n  int kind = nx_buffer_get_kind(ba);\n\n  // Check input kinds match\n  int kind_y = nx_buffer_get_kind(Caml_ba_array_val(v_y_data));\n  if (kind != kind_y) {\n    cleanup_ndarray(&x);\n    cleanup_ndarray(&y);\n    cleanup_ndarray(&z);\n    caml_failwith(\"dtype mismatch: comparison inputs must have same dtype\");\n  }\n\n  // Check output is bool (stored as uint8)\n  int kind_z = nx_buffer_get_kind(Caml_ba_array_val(v_z_data));\n  if (kind_z != NX_BA_BOOL) {\n    cleanup_ndarray(&x);\n    cleanup_ndarray(&y);\n    cleanup_ndarray(&z);\n    caml_failwith(\"dtype mismatch: comparison output must be bool\");\n  }\n\n  // Select operation based on input dtype\n  binary_op_t op = NULL;\n  switch (kind) {\n    case CAML_BA_SINT8:\n      op = table->i8;\n      break;\n    case CAML_BA_UINT8:\n      op = table->u8;\n      break;\n    case CAML_BA_SINT16:\n      op = table->i16;\n      break;\n    case CAML_BA_UINT16:\n      op = table->u16;\n      break;\n    case CAML_BA_INT32:\n      op = table->i32;\n      break;\n    case 
CAML_BA_INT64:\n      op = table->i64;\n      break;\n    case NX_BA_UINT32:\n      op = table->u32;\n      break;\n    case NX_BA_UINT64:\n      op = table->u64;\n      break;\n    case CAML_BA_CAML_INT:\n    case CAML_BA_NATIVE_INT:\n      op = table->inat;\n      break;\n    case CAML_BA_FLOAT16:\n      op = table->f16;\n      break;\n    case CAML_BA_FLOAT32:\n      op = table->f32;\n      break;\n    case CAML_BA_FLOAT64:\n      op = table->f64;\n      break;\n    case CAML_BA_COMPLEX32:\n      op = table->c32;\n      break;\n    case CAML_BA_COMPLEX64:\n      op = table->c64;\n      break;\n    case NX_BA_BFLOAT16:\n      op = table->bf16;\n      break;\n    case NX_BA_BOOL:\n      op = table->bool_;\n      break;\n    case NX_BA_INT4:\n      op = table->i4;\n      break;\n    case NX_BA_UINT4:\n      op = table->u4;\n      break;\n    case NX_BA_FP8_E4M3:\n      op = table->f8e4m3;\n      break;\n    case NX_BA_FP8_E5M2:\n      op = table->f8e5m2;\n      break;\n    default:\n      cleanup_ndarray(&x);\n      cleanup_ndarray(&y);\n      cleanup_ndarray(&z);\n      caml_failwith(\"dispatch_comparison_op: unsupported dtype\");\n  }\n\n  if (!op) {\n    char msg[256];\n    snprintf(msg, sizeof(msg), \"%s: operation not supported for dtype\",\n             op_name);\n    cleanup_ndarray(&x);\n    cleanup_ndarray(&y);\n    cleanup_ndarray(&z);\n    caml_failwith(msg);\n  }\n\n  // Enter blocking section for potentially long computation\n  caml_enter_blocking_section();\n  op(&x, &y, &z);\n  caml_leave_blocking_section();\n\n  // Clean up if heap allocated\n  cleanup_ndarray(&x);\n  cleanup_ndarray(&y);\n  cleanup_ndarray(&z);\n}\n\n// ============================================================================\n// OCaml FFI Stubs\n// ============================================================================\n\n// Macro to define FFI stub for each operation\n#define DEFINE_FFI_STUB(name)                                      \\\n  CAMLprim value 
caml_nx_##name(value v_x, value v_y, value v_z) { \\\n    CAMLparam3(v_x, v_y, v_z);                                     \\\n    dispatch_binary_op(v_x, v_y, v_z, &name##_table, #name);       \\\n    CAMLreturn(Val_unit);                                          \\\n  }\n\n// Macro to define FFI stub for comparison operations\n#define DEFINE_CMP_FFI_STUB(name)                                  \\\n  CAMLprim value caml_nx_##name(value v_x, value v_y, value v_z) { \\\n    CAMLparam3(v_x, v_y, v_z);                                     \\\n    dispatch_comparison_op(v_x, v_y, v_z, &name##_table, #name);   \\\n    CAMLreturn(Val_unit);                                          \\\n  }\n\nDEFINE_FFI_STUB(add)\nDEFINE_FFI_STUB(sub)\nDEFINE_FFI_STUB(mul)\nDEFINE_FFI_STUB(idiv)\nDEFINE_FFI_STUB(fdiv)\nDEFINE_FFI_STUB(max)\nDEFINE_FFI_STUB(min)\nDEFINE_FFI_STUB(mod)\nDEFINE_FFI_STUB(pow)\nDEFINE_FFI_STUB(atan2)\nDEFINE_CMP_FFI_STUB(cmpeq)\nDEFINE_CMP_FFI_STUB(cmpne)\nDEFINE_CMP_FFI_STUB(cmplt)\nDEFINE_CMP_FFI_STUB(cmple)\nDEFINE_FFI_STUB(xor)\nDEFINE_FFI_STUB(or)\nDEFINE_FFI_STUB(and)\n"
  },
  {
    "path": "packages/nx/lib/backend_c/nx_c_cast.c",
    "content": "/*---------------------------------------------------------------------------\n   Copyright (c) 2026 The Raven authors. All rights reserved.\n   SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*/\n\n// Cast operations for nx C backend\n\n#include <caml/alloc.h>\n#include <caml/bigarray.h>\n#include <caml/custom.h>\n#include <caml/fail.h>\n#include <caml/memory.h>\n#include <caml/threads.h>\n\n#include \"nx_c_shared.h\"\n\n// Type definitions for cast operations\ntypedef void (*cast_op_t)(const ndarray_t *, ndarray_t *);\n\n// Enum for dtypes to index the table\ntypedef enum {\n  NX_DTYPE_I8 = 0,\n  NX_DTYPE_U8,\n  NX_DTYPE_I16,\n  NX_DTYPE_U16,\n  NX_DTYPE_I32,\n  NX_DTYPE_I64,\n  NX_DTYPE_U32,\n  NX_DTYPE_U64,\n  NX_DTYPE_INAT,\n  NX_DTYPE_F16,\n  NX_DTYPE_F32,\n  NX_DTYPE_F64,\n  NX_DTYPE_C32,\n  NX_DTYPE_C64,\n  NX_DTYPE_BF16,\n  NX_DTYPE_BOOL,\n  NX_DTYPE_I4,\n  NX_DTYPE_U4,\n  NX_DTYPE_F8E4M3,\n  NX_DTYPE_F8E5M2,\n  NX_NUM_DTYPES\n} nx_dtype;\n\n// Map caml_ba_kind to nx_dtype\nstatic nx_dtype kind_to_dtype(int kind) {\n  switch (kind) {\n    case CAML_BA_SINT8:\n      return NX_DTYPE_I8;\n    case CAML_BA_UINT8:\n      return NX_DTYPE_U8;\n    case CAML_BA_SINT16:\n      return NX_DTYPE_I16;\n    case CAML_BA_UINT16:\n      return NX_DTYPE_U16;\n    case CAML_BA_INT32:\n      return NX_DTYPE_I32;\n    case CAML_BA_INT64:\n      return NX_DTYPE_I64;\n    case NX_BA_UINT32:\n      return NX_DTYPE_U32;\n    case NX_BA_UINT64:\n      return NX_DTYPE_U64;\n    case CAML_BA_CAML_INT:\n    case CAML_BA_NATIVE_INT:\n      return NX_DTYPE_INAT;\n    case CAML_BA_FLOAT16:\n      return NX_DTYPE_F16;\n    case CAML_BA_FLOAT32:\n      return NX_DTYPE_F32;\n    case CAML_BA_FLOAT64:\n      return NX_DTYPE_F64;\n    case CAML_BA_COMPLEX32:\n      return NX_DTYPE_C32;\n    case CAML_BA_COMPLEX64:\n      return NX_DTYPE_C64;\n    case NX_BA_BFLOAT16:\n      return NX_DTYPE_BF16;\n    case 
NX_BA_BOOL:\n      return NX_DTYPE_BOOL;\n    case NX_BA_INT4:\n      return NX_DTYPE_I4;\n    case NX_BA_UINT4:\n      return NX_DTYPE_U4;\n    case NX_BA_FP8_E4M3:\n      return NX_DTYPE_F8E4M3;\n    case NX_BA_FP8_E5M2:\n      return NX_DTYPE_F8E5M2;\n    default:\n      return NX_NUM_DTYPES;\n  }\n}\n\n// Helper to iterate over inner dimensions for unary (cast) operations\ntypedef void (*kernel_fn)(void *, void *, long, long);\n\nstatic inline void iterate_inner_dims2(const ndarray_t *x, const ndarray_t *z,\n                                       long outer_idx, kernel_fn kernel,\n                                       void *x_data, void *z_data) {\n  if (x->ndim <= 1) {\n    kernel(x_data, z_data, x->offset + outer_idx * x->strides[0],\n           z->offset + outer_idx * z->strides[0]);\n    return;\n  }\n\n  long x_base = x->offset + outer_idx * x->strides[0];\n  long z_base = z->offset + outer_idx * z->strides[0];\n\n  int inner_ndim = x->ndim - 1;\n  int *coords = (int *)calloc(inner_ndim, sizeof(int));\n  if (!coords) {\n    fprintf(stderr, \"nx: iterate_inner_dims2: allocation failed\\n\");\n    abort();\n  }\n\n  bool done = false;\n  while (!done) {\n    long x_off = x_base;\n    long z_off = z_base;\n\n    for (int i = 0; i < inner_ndim; i++) {\n      x_off += coords[i] * x->strides[i + 1];\n      z_off += coords[i] * z->strides[i + 1];\n    }\n\n    kernel(x_data, z_data, x_off, z_off);\n\n    done = true;\n    for (int i = inner_ndim - 1; i >= 0; i--) {\n      coords[i]++;\n      if (coords[i] < x->shape[i + 1]) {\n        done = false;\n        break;\n      }\n      coords[i] = 0;\n    }\n  }\n\n  free(coords);\n}\n\n// Generic cast implementation macro\n#define CAST_IMPL(src_suffix, dst_suffix)                                      \\\n  static void nx_c_cast_##src_suffix##_to_##dst_suffix(const ndarray_t *src,   \\\n                                                       ndarray_t *dst) {       \\\n    if (!src || !dst) {                            
                            \\\n      fprintf(stderr, \"nx: nx_c_cast_\" #src_suffix \"_to_\" #dst_suffix          \\\n              \": null pointer\\n\");                                             \\\n      abort();                                         \\\n    }                                                                          \\\n    long total = total_elements_safe(src);                                     \\\n    if (total == 0) return;                                                    \\\n                                                                               \\\n    if (is_contiguous(src) && is_contiguous(dst)) {                            \\\n      _Pragma(\"omp parallel for simd if(total > 1000)\") for (long i = 0;       \\\n                                                             i < total; i++) { \\\n        nx_c_cast_##src_suffix##_to_##dst_suffix##_kernel(                     \\\n            src->data, dst->data, src->offset + i, dst->offset + i);           \\\n      }                                                                        \\\n    } else if (src->shape[0] > 1 && total / src->shape[0] > 50) {              \\\n      _Pragma(\"omp parallel for if(src->shape[0] > 4)\") for (long i = 0;       \\\n                                                             i <               \\\n                                                             src->shape[0];    \\\n                                                             i++) {            \\\n        iterate_inner_dims2(src, dst, i,                                       \\\n                            nx_c_cast_##src_suffix##_to_##dst_suffix##_kernel, \\\n                            src->data, dst->data);                             \\\n      }                                                                        \\\n    } else {                                                                   \\\n      nd_copy_iterator_t it;                                            
       \\\n      nd_copy_iterator_init(&it, src, dst);                                    \\\n      do {                                                                     \\\n        long src_off, dst_off;                                                 \\\n        nd_copy_iterator_get_offsets(&it, &src_off, &dst_off);                 \\\n        nx_c_cast_##src_suffix##_to_##dst_suffix##_kernel(                     \\\n            src->data, dst->data, src->offset + src_off,                       \\\n            dst->offset + dst_off);                                            \\\n      } while (nd_copy_iterator_next(&it));                                    \\\n      nd_copy_iterator_destroy(&it);                                           \\\n    }                                                                          \\\n  }\n\n// Standard real types (integer and float, excluding bool, low prec, complex,\n// packed)\n#define STANDARD_REAL_TYPES \\\n  SR_TYPE(i8, int8_t)       \\\n  SR_TYPE(u8, uint8_t)      \\\n  SR_TYPE(i16, int16_t)     \\\n  SR_TYPE(u16, uint16_t)    \\\n  SR_TYPE(i32, int32_t)     \\\n  SR_TYPE(i64, int64_t)     \\\n  SR_TYPE(u32, uint32_t)    \\\n  SR_TYPE(u64, uint64_t)    \\\n  SR_TYPE(inat, intnat)     \\\n  SR_TYPE(f32, float)       \\\n  SR_TYPE(f64, double)\n\n// Low precision float types\n#define LOW_PREC_TYPES                                                    \\\n  LP_TYPE(f16, uint16_t, half_to_float, float_to_half)                    \\\n  LP_TYPE(bf16, caml_ba_bfloat16, bfloat16_to_float, float_to_bfloat16)   \\\n  LP_TYPE(f8e4m3, caml_ba_fp8_e4m3, fp8_e4m3_to_float, float_to_fp8_e4m3) \\\n  LP_TYPE(f8e5m2, caml_ba_fp8_e5m2, fp8_e5m2_to_float, float_to_fp8_e5m2)\n\n// Complex types\n#define COMPLEX_TYPES                                                        \\\n  CP_TYPE(c32, complex32, 
crealf(src[src_off]), cimagf(src[src_off]), float) \\\n  CP_TYPE(c64, complex64, creal(src[src_off]), cimag(src[src_off]), double)\n\n// Generate cast for standard real to standard real\n\n#define GEN_CAST_STANDARD_TO_STANDARD(src_suffix, dst_suffix, src_t, dst_t) \\\n  static void nx_c_cast_##src_suffix##_to_##dst_suffix##_kernel(            \\\n      void *src_data, void *dst_data, long src_off, long dst_off) {         \\\n    src_t *src = (src_t *)src_data;                                         \\\n    dst_t *dst = (dst_t *)dst_data;                                         \\\n    dst[dst_off] = (dst_t)src[src_off];                                     \\\n  }                                                                         \\\n  CAST_IMPL(src_suffix, dst_suffix)\n\n// Generate all standard-to-standard cast combinations\n#define GEN_ALL_STANDARD_CASTS(src_suffix, src_t)                 \\\n  GEN_CAST_STANDARD_TO_STANDARD(src_suffix, i8, src_t, int8_t)    \\\n  GEN_CAST_STANDARD_TO_STANDARD(src_suffix, u8, src_t, uint8_t)   \\\n  GEN_CAST_STANDARD_TO_STANDARD(src_suffix, i16, src_t, int16_t)  \\\n  GEN_CAST_STANDARD_TO_STANDARD(src_suffix, u16, src_t, uint16_t) \\\n  GEN_CAST_STANDARD_TO_STANDARD(src_suffix, i32, src_t, int32_t)  \\\n  GEN_CAST_STANDARD_TO_STANDARD(src_suffix, i64, src_t, int64_t)  \\\n  GEN_CAST_STANDARD_TO_STANDARD(src_suffix, u32, src_t, uint32_t) \\\n  GEN_CAST_STANDARD_TO_STANDARD(src_suffix, u64, src_t, uint64_t) \\\n  GEN_CAST_STANDARD_TO_STANDARD(src_suffix, inat, src_t, intnat)  \\\n  GEN_CAST_STANDARD_TO_STANDARD(src_suffix, f32, src_t, float)    \\\n  GEN_CAST_STANDARD_TO_STANDARD(src_suffix, f64, src_t, double)\n\n// Generate casts for each standard type\nGEN_ALL_STANDARD_CASTS(i8, int8_t)\nGEN_ALL_STANDARD_CASTS(u8, uint8_t)\nGEN_ALL_STANDARD_CASTS(i16, int16_t)\nGEN_ALL_STANDARD_CASTS(u16, uint16_t)\nGEN_ALL_STANDARD_CASTS(i32, int32_t)\nGEN_ALL_STANDARD_CASTS(i64, int64_t)\nGEN_ALL_STANDARD_CASTS(u32, 
uint32_t)\nGEN_ALL_STANDARD_CASTS(u64, uint64_t)\nGEN_ALL_STANDARD_CASTS(inat, intnat)\nGEN_ALL_STANDARD_CASTS(f32, float)\nGEN_ALL_STANDARD_CASTS(f64, double)\n\n// Generate cast for bool to standard real\n\n#define GEN_CAST_BOOL_TO_STANDARD(dst_suffix, dst_t)                \\\n  static void nx_c_cast_bool_to_##dst_suffix##_kernel(              \\\n      void *src_data, void *dst_data, long src_off, long dst_off) { \\\n    uint8_t *src = (uint8_t *)src_data;                             \\\n    dst_t *dst = (dst_t *)dst_data;                                 \\\n    dst[dst_off] = (dst_t)src[src_off];                             \\\n  }                                                                 \\\n  CAST_IMPL(bool, dst_suffix)\n\n#define SR_TYPE(dst_suffix, dst_t) GEN_CAST_BOOL_TO_STANDARD(dst_suffix, dst_t)\nSTANDARD_REAL_TYPES\n#undef SR_TYPE\n\n// Generate cast for standard real to bool\n\n#define GEN_CAST_STANDARD_TO_BOOL(src_suffix, src_t)                \\\n  static void nx_c_cast_##src_suffix##_to_bool_kernel(              \\\n      void *src_data, void *dst_data, long src_off, long dst_off) { \\\n    src_t *src = (src_t *)src_data;                                 \\\n    uint8_t *dst = (uint8_t *)dst_data;                             \\\n    dst[dst_off] = (src[src_off] != 0) ? 1 : 0;                     \\\n  }                                                                 \\\n  CAST_IMPL(src_suffix, bool)\n\n#define SR_TYPE(src_suffix, src_t) GEN_CAST_STANDARD_TO_BOOL(src_suffix, src_t)\nSTANDARD_REAL_TYPES\n#undef SR_TYPE\n\n// Bool to bool\n\nstatic void nx_c_cast_bool_to_bool_kernel(void *src_data, void *dst_data,\n                                          long src_off, long dst_off) {\n  uint8_t *src = (uint8_t *)src_data;\n  uint8_t *dst = (uint8_t *)dst_data;\n  dst[dst_off] = (src[src_off] != 0) ? 
1 : 0;\n}\nCAST_IMPL(bool, bool)\n\n// Generate cast for low prec to standard real\n\n#define GEN_CAST_LP_TO_STANDARD(src_suffix, dst_suffix, src_t, dst_t, \\\n                                TO_FLOAT)                             \\\n  static void nx_c_cast_##src_suffix##_to_##dst_suffix##_kernel(      \\\n      void *src_data, void *dst_data, long src_off, long dst_off) {   \\\n    src_t *src = (src_t *)src_data;                                   \\\n    dst_t *dst = (dst_t *)dst_data;                                   \\\n    float temp = TO_FLOAT(src[src_off]);                              \\\n    dst[dst_off] = (dst_t)temp;                                       \\\n  }                                                                   \\\n  CAST_IMPL(src_suffix, dst_suffix)\n\n// Generate all low-precision to standard casts\n#define GEN_ALL_LP_TO_STANDARD_CASTS(src_suffix, src_t, TO_FLOAT)     \\\n  GEN_CAST_LP_TO_STANDARD(src_suffix, i8, src_t, int8_t, TO_FLOAT)    \\\n  GEN_CAST_LP_TO_STANDARD(src_suffix, u8, src_t, uint8_t, TO_FLOAT)   \\\n  GEN_CAST_LP_TO_STANDARD(src_suffix, i16, src_t, int16_t, TO_FLOAT)  \\\n  GEN_CAST_LP_TO_STANDARD(src_suffix, u16, src_t, uint16_t, TO_FLOAT) \\\n  GEN_CAST_LP_TO_STANDARD(src_suffix, i32, src_t, int32_t, TO_FLOAT)  \\\n  GEN_CAST_LP_TO_STANDARD(src_suffix, i64, src_t, int64_t, TO_FLOAT)  \\\n  GEN_CAST_LP_TO_STANDARD(src_suffix, u32, src_t, uint32_t, TO_FLOAT) \\\n  GEN_CAST_LP_TO_STANDARD(src_suffix, u64, src_t, uint64_t, TO_FLOAT) \\\n  GEN_CAST_LP_TO_STANDARD(src_suffix, inat, src_t, intnat, TO_FLOAT)  \\\n  GEN_CAST_LP_TO_STANDARD(src_suffix, f32, src_t, float, TO_FLOAT)    \\\n  GEN_CAST_LP_TO_STANDARD(src_suffix, f64, src_t, double, TO_FLOAT)\n\n// Generate casts for each low-precision type\nGEN_ALL_LP_TO_STANDARD_CASTS(f16, uint16_t, half_to_float)\nGEN_ALL_LP_TO_STANDARD_CASTS(bf16, caml_ba_bfloat16, bfloat16_to_float)\nGEN_ALL_LP_TO_STANDARD_CASTS(f8e4m3, caml_ba_fp8_e4m3, 
fp8_e4m3_to_float)\nGEN_ALL_LP_TO_STANDARD_CASTS(f8e5m2, caml_ba_fp8_e5m2, fp8_e5m2_to_float)\n\n// Generate cast for standard real to low prec\n\n#define GEN_CAST_STANDARD_TO_LP(src_suffix, dst_suffix, src_t, dst_t, \\\n                                FROM_FLOAT)                           \\\n  static void nx_c_cast_##src_suffix##_to_##dst_suffix##_kernel(      \\\n      void *src_data, void *dst_data, long src_off, long dst_off) {   \\\n    src_t *src = (src_t *)src_data;                                   \\\n    dst_t *dst = (dst_t *)dst_data;                                   \\\n    float temp = (float)src[src_off];                                 \\\n    dst[dst_off] = FROM_FLOAT(temp);                                  \\\n  }                                                                   \\\n  CAST_IMPL(src_suffix, dst_suffix)\n\n// Generate all standard to low-precision casts\n#define GEN_ALL_STANDARD_TO_LP_CASTS(dst_suffix, dst_t, FROM_FLOAT)     \\\n  GEN_CAST_STANDARD_TO_LP(i8, dst_suffix, int8_t, dst_t, FROM_FLOAT)    \\\n  GEN_CAST_STANDARD_TO_LP(u8, dst_suffix, uint8_t, dst_t, FROM_FLOAT)   \\\n  GEN_CAST_STANDARD_TO_LP(i16, dst_suffix, int16_t, dst_t, FROM_FLOAT)  \\\n  GEN_CAST_STANDARD_TO_LP(u16, dst_suffix, uint16_t, dst_t, FROM_FLOAT) \\\n  GEN_CAST_STANDARD_TO_LP(i32, dst_suffix, int32_t, dst_t, FROM_FLOAT)  \\\n  GEN_CAST_STANDARD_TO_LP(i64, dst_suffix, int64_t, dst_t, FROM_FLOAT)  \\\n  GEN_CAST_STANDARD_TO_LP(u32, dst_suffix, uint32_t, dst_t, FROM_FLOAT) \\\n  GEN_CAST_STANDARD_TO_LP(u64, dst_suffix, uint64_t, dst_t, FROM_FLOAT) \\\n  GEN_CAST_STANDARD_TO_LP(inat, dst_suffix, intnat, dst_t, FROM_FLOAT)  \\\n  GEN_CAST_STANDARD_TO_LP(f32, dst_suffix, float, dst_t, FROM_FLOAT)    \\\n  GEN_CAST_STANDARD_TO_LP(f64, dst_suffix, double, dst_t, FROM_FLOAT)\n\n// Generate casts for each low-precision type\nGEN_ALL_STANDARD_TO_LP_CASTS(f16, uint16_t, float_to_half)\nGEN_ALL_STANDARD_TO_LP_CASTS(bf16, caml_ba_bfloat16, 
float_to_bfloat16)\nGEN_ALL_STANDARD_TO_LP_CASTS(f8e4m3, caml_ba_fp8_e4m3, float_to_fp8_e4m3)\nGEN_ALL_STANDARD_TO_LP_CASTS(f8e5m2, caml_ba_fp8_e5m2, float_to_fp8_e5m2)\n\n// Bool to low prec\n#define GEN_CAST_BOOL_TO_LP(dst_suffix, dst_t, FROM_FLOAT)            \\\n  static void nx_c_cast_bool_to_##dst_suffix##_kernel(                \\\n      void *src_data, void *dst_data, long src_off, long dst_off) {   \\\n    uint8_t *src = (uint8_t *)src_data;                               \\\n    dst_t *dst = (dst_t *)dst_data;                                   \\\n    float temp = (src[src_off] != 0) ? 1.0f : 0.0f;                   \\\n    dst[dst_off] = FROM_FLOAT(temp);                                  \\\n  }                                                                   \\\n  CAST_IMPL(bool, dst_suffix)\n\nGEN_CAST_BOOL_TO_LP(f16, uint16_t, float_to_half)\nGEN_CAST_BOOL_TO_LP(bf16, caml_ba_bfloat16, float_to_bfloat16)\nGEN_CAST_BOOL_TO_LP(f8e4m3, caml_ba_fp8_e4m3, float_to_fp8_e4m3)\nGEN_CAST_BOOL_TO_LP(f8e5m2, caml_ba_fp8_e5m2, float_to_fp8_e5m2)\n\n// Generate cast for low prec to bool\n\n#define GEN_CAST_LP_TO_BOOL(src_suffix, src_t, TO_FLOAT)            \\\n  static void nx_c_cast_##src_suffix##_to_bool_kernel(              \\\n      void *src_data, void *dst_data, long src_off, long dst_off) { \\\n    src_t *src = (src_t *)src_data;                                 \\\n    uint8_t *dst = (uint8_t *)dst_data;                             \\\n    float temp = TO_FLOAT(src[src_off]);                            \\\n    dst[dst_off] = (temp != 0.0f) ? 
1 : 0;                          \\\n  }                                                                 \\\n  CAST_IMPL(src_suffix, bool)\n\n#define LP_TYPE(suffix, t, to_f, from_f) GEN_CAST_LP_TO_BOOL(suffix, t, to_f)\nLOW_PREC_TYPES\n#undef LP_TYPE\n\n// Generate cast for low prec to low prec\n\n#define GEN_CAST_LP_TO_LP(src_suffix, dst_suffix, src_t, dst_t, TO_FLOAT, \\\n                          FROM_FLOAT)                                     \\\n  static void nx_c_cast_##src_suffix##_to_##dst_suffix##_kernel(          \\\n      void *src_data, void *dst_data, long src_off, long dst_off) {       \\\n    src_t *src = (src_t *)src_data;                                       \\\n    dst_t *dst = (dst_t *)dst_data;                                       \\\n    float temp = TO_FLOAT(src[src_off]);                                  \\\n    dst[dst_off] = FROM_FLOAT(temp);                                      \\\n  }                                                                       \\\n  CAST_IMPL(src_suffix, dst_suffix)\n\n// Generate all low-precision to low-precision casts\n// Identity casts for low-precision types\nstatic void nx_c_cast_f16_to_f16_kernel(void *src_data, void *dst_data,\n                                        long src_off, long dst_off) {\n  uint16_t *src = (uint16_t *)src_data;\n  uint16_t *dst = (uint16_t *)dst_data;\n  dst[dst_off] = src[src_off];\n}\nCAST_IMPL(f16, f16)\n\nGEN_CAST_LP_TO_LP(f16, bf16, uint16_t, caml_ba_bfloat16, half_to_float,\n                  float_to_bfloat16)\nGEN_CAST_LP_TO_LP(f16, f8e4m3, uint16_t, caml_ba_fp8_e4m3, half_to_float,\n                  float_to_fp8_e4m3)\nGEN_CAST_LP_TO_LP(f16, f8e5m2, uint16_t, caml_ba_fp8_e5m2, half_to_float,\n                  float_to_fp8_e5m2)\nGEN_CAST_LP_TO_LP(bf16, f16, caml_ba_bfloat16, uint16_t, bfloat16_to_float,\n                  float_to_half)\nGEN_CAST_LP_TO_LP(bf16, f8e4m3, caml_ba_bfloat16, caml_ba_fp8_e4m3,\n                  bfloat16_to_float, 
float_to_fp8_e4m3)\nGEN_CAST_LP_TO_LP(bf16, f8e5m2, caml_ba_bfloat16, caml_ba_fp8_e5m2,\n                  bfloat16_to_float, float_to_fp8_e5m2)\nGEN_CAST_LP_TO_LP(f8e4m3, f16, caml_ba_fp8_e4m3, uint16_t, fp8_e4m3_to_float,\n                  float_to_half)\nGEN_CAST_LP_TO_LP(f8e4m3, bf16, caml_ba_fp8_e4m3, caml_ba_bfloat16,\n                  fp8_e4m3_to_float, float_to_bfloat16)\nGEN_CAST_LP_TO_LP(f8e4m3, f8e5m2, caml_ba_fp8_e4m3, caml_ba_fp8_e5m2,\n                  fp8_e4m3_to_float, float_to_fp8_e5m2)\nGEN_CAST_LP_TO_LP(f8e5m2, f16, caml_ba_fp8_e5m2, uint16_t, fp8_e5m2_to_float,\n                  float_to_half)\nGEN_CAST_LP_TO_LP(f8e5m2, bf16, caml_ba_fp8_e5m2, caml_ba_bfloat16,\n                  fp8_e5m2_to_float, float_to_bfloat16)\nGEN_CAST_LP_TO_LP(f8e5m2, f8e4m3, caml_ba_fp8_e5m2, caml_ba_fp8_e4m3,\n                  fp8_e5m2_to_float, float_to_fp8_e4m3)\n\n// Generate cast for complex to standard real\n\n#define GEN_CAST_CP_TO_STANDARD(src_suffix, dst_suffix, src_t, dst_t, RE_FN) \\\n  static void nx_c_cast_##src_suffix##_to_##dst_suffix##_kernel(             \\\n      void *src_data, void *dst_data, long src_off, long dst_off) {          \\\n    src_t *src = (src_t *)src_data;                                          \\\n    dst_t *dst = (dst_t *)dst_data;                                          \\\n    double temp = RE_FN;                                                     \\\n    dst[dst_off] = (dst_t)temp;                                              \\\n  }                                                                          \\\n  CAST_IMPL(src_suffix, dst_suffix)\n\n// Generate all complex to standard casts\n#define GEN_ALL_CP_TO_STANDARD_CASTS(src_suffix, src_t, RE_FN)     \\\n  GEN_CAST_CP_TO_STANDARD(src_suffix, i8, src_t, int8_t, RE_FN)    \\\n  GEN_CAST_CP_TO_STANDARD(src_suffix, u8, src_t, uint8_t, RE_FN)   \\\n  GEN_CAST_CP_TO_STANDARD(src_suffix, i16, src_t, int16_t, RE_FN)  \\\n  GEN_CAST_CP_TO_STANDARD(src_suffix, u16, src_t, 
uint16_t, RE_FN) \\\n  GEN_CAST_CP_TO_STANDARD(src_suffix, i32, src_t, int32_t, RE_FN)  \\\n  GEN_CAST_CP_TO_STANDARD(src_suffix, i64, src_t, int64_t, RE_FN)  \\\n  GEN_CAST_CP_TO_STANDARD(src_suffix, u32, src_t, uint32_t, RE_FN) \\\n  GEN_CAST_CP_TO_STANDARD(src_suffix, u64, src_t, uint64_t, RE_FN) \\\n  GEN_CAST_CP_TO_STANDARD(src_suffix, inat, src_t, intnat, RE_FN)  \\\n  GEN_CAST_CP_TO_STANDARD(src_suffix, f32, src_t, float, RE_FN)    \\\n  GEN_CAST_CP_TO_STANDARD(src_suffix, f64, src_t, double, RE_FN)\n\nGEN_ALL_CP_TO_STANDARD_CASTS(c32, complex32, crealf(src[src_off]))\nGEN_ALL_CP_TO_STANDARD_CASTS(c64, complex64, creal(src[src_off]))\n\n// Generate cast for complex to bool\n\n#define GEN_CAST_CP_TO_BOOL(src_suffix, src_t, RE_FN, IM_FN)        \\\n  static void nx_c_cast_##src_suffix##_to_bool_kernel(              \\\n      void *src_data, void *dst_data, long src_off, long dst_off) { \\\n    src_t *src = (src_t *)src_data;                                 \\\n    uint8_t *dst = (uint8_t *)dst_data;                             \\\n    double re = RE_FN;                                              \\\n    double im = IM_FN;                                              \\\n    dst[dst_off] = (re != 0 || im != 0) ? 
1 : 0;                    \\\n  }                                                                 \\\n  CAST_IMPL(src_suffix, bool)\n\n#define CP_TYPE(suffix, t, re_fn, im_fn, base_t) \\\n  GEN_CAST_CP_TO_BOOL(suffix, t, re_fn, im_fn)\nCOMPLEX_TYPES\n#undef CP_TYPE\n\n// Generate cast for complex to low prec float\n\n#define GEN_CAST_CP_TO_LP(src_suffix, dst_suffix, src_t, dst_t, RE_FN, \\\n                          FROM_FLOAT)                                  \\\n  static void nx_c_cast_##src_suffix##_to_##dst_suffix##_kernel(       \\\n      void *src_data, void *dst_data, long src_off, long dst_off) {    \\\n    src_t *src = (src_t *)src_data;                                    \\\n    dst_t *dst = (dst_t *)dst_data;                                    \\\n    float temp = RE_FN;                                                \\\n    dst[dst_off] = FROM_FLOAT(temp);                                   \\\n  }                                                                    \\\n  CAST_IMPL(src_suffix, dst_suffix)\n\n// Generate all complex to low-precision casts\n#define GEN_ALL_CP_TO_LP_CASTS(src_suffix, src_t, RE_FN)                    \\\n  GEN_CAST_CP_TO_LP(src_suffix, f16, src_t, uint16_t, RE_FN, float_to_half) \\\n  GEN_CAST_CP_TO_LP(src_suffix, bf16, src_t, caml_ba_bfloat16, RE_FN,       \\\n                    float_to_bfloat16)                                      \\\n  GEN_CAST_CP_TO_LP(src_suffix, f8e4m3, src_t, caml_ba_fp8_e4m3, RE_FN,     \\\n                    float_to_fp8_e4m3)                                      \\\n  GEN_CAST_CP_TO_LP(src_suffix, f8e5m2, src_t, caml_ba_fp8_e5m2, RE_FN,     \\\n                    float_to_fp8_e5m2)\n\nGEN_ALL_CP_TO_LP_CASTS(c32, complex32, crealf(src[src_off]))\nGEN_ALL_CP_TO_LP_CASTS(c64, complex64, creal(src[src_off]))\n\n// Generate cast for standard real to c32/c64\n\n#define GEN_CAST_STANDARD_TO_C32_C64(src_suffix, dst_suffix, src_t, dst_t, \\\n                                     BASE_T)                    
           \\\n  static void nx_c_cast_##src_suffix##_to_##dst_suffix##_kernel(           \\\n      void *src_data, void *dst_data, long src_off, long dst_off) {        \\\n    src_t *src = (src_t *)src_data;                                        \\\n    dst_t *dst = (dst_t *)dst_data;                                        \\\n    BASE_T temp = (BASE_T)src[src_off];                                    \\\n    dst[dst_off] = temp + 0.0 * I;                                         \\\n  }                                                                        \\\n  CAST_IMPL(src_suffix, dst_suffix)\n\n// Generate all standard to complex casts for c32/c64\n#define GEN_ALL_STANDARD_TO_C32_C64_CASTS(dst_suffix, dst_t, BASE_T)     \\\n  GEN_CAST_STANDARD_TO_C32_C64(i8, dst_suffix, int8_t, dst_t, BASE_T)    \\\n  GEN_CAST_STANDARD_TO_C32_C64(u8, dst_suffix, uint8_t, dst_t, BASE_T)   \\\n  GEN_CAST_STANDARD_TO_C32_C64(i16, dst_suffix, int16_t, dst_t, BASE_T)  \\\n  GEN_CAST_STANDARD_TO_C32_C64(u16, dst_suffix, uint16_t, dst_t, BASE_T) \\\n  GEN_CAST_STANDARD_TO_C32_C64(i32, dst_suffix, int32_t, dst_t, BASE_T)  \\\n  GEN_CAST_STANDARD_TO_C32_C64(i64, dst_suffix, int64_t, dst_t, BASE_T)  \\\n  GEN_CAST_STANDARD_TO_C32_C64(u32, dst_suffix, uint32_t, dst_t, BASE_T) \\\n  GEN_CAST_STANDARD_TO_C32_C64(u64, dst_suffix, uint64_t, dst_t, BASE_T) \\\n  GEN_CAST_STANDARD_TO_C32_C64(inat, dst_suffix, intnat, dst_t, BASE_T)  \\\n  GEN_CAST_STANDARD_TO_C32_C64(f32, dst_suffix, float, dst_t, BASE_T)    \\\n  GEN_CAST_STANDARD_TO_C32_C64(f64, dst_suffix, double, dst_t, BASE_T)\n\nGEN_ALL_STANDARD_TO_C32_C64_CASTS(c32, complex32, float)\nGEN_ALL_STANDARD_TO_C32_C64_CASTS(c64, complex64, double)\n\n// Bool to c32/c64\n\n#define GEN_CAST_BOOL_TO_CP(dst_suffix, dst_t, BASE_T)              \\\n  static void nx_c_cast_bool_to_##dst_suffix##_kernel(              \\\n      void *src_data, void *dst_data, long src_off, long dst_off) { \\\n    uint8_t 
*src = (uint8_t *)src_data;                             \\\n    dst_t *dst = (dst_t *)dst_data;                                 \\\n    BASE_T temp = (BASE_T)src[src_off];                             \\\n    dst[dst_off] = temp + 0.0 * I;                                  \\\n  }                                                                 \\\n  CAST_IMPL(bool, dst_suffix)\n\nGEN_CAST_BOOL_TO_CP(c32, complex32, float)\nGEN_CAST_BOOL_TO_CP(c64, complex64, double)\n\n// Low prec to c32/c64\n\n#define GEN_CAST_LP_TO_CP(src_suffix, src_t, dst_suffix, dst_t, BASE_T, \\\n                          TO_FLOAT)                                     \\\n  static void nx_c_cast_##src_suffix##_to_##dst_suffix##_kernel(        \\\n      void *src_data, void *dst_data, long src_off, long dst_off) {     \\\n    src_t *src = (src_t *)src_data;                                     \\\n    dst_t *dst = (dst_t *)dst_data;                                     \\\n    BASE_T temp = (BASE_T)TO_FLOAT(src[src_off]);                       \\\n    dst[dst_off] = temp + 0.0 * I;                                      \\\n  }                                                                     \\\n  CAST_IMPL(src_suffix, dst_suffix)\n\n#define DEFINE_CASTS_LP_TO_CP(src_suffix, src_t, TO_FLOAT)              \\\n  GEN_CAST_LP_TO_CP(src_suffix, src_t, c32, complex32, float, TO_FLOAT) \\\n  GEN_CAST_LP_TO_CP(src_suffix, src_t, c64, complex64, double, TO_FLOAT)\n\n#define LP_TYPE(suffix, t, to_f, from_f) DEFINE_CASTS_LP_TO_CP(suffix, t, to_f)\nLOW_PREC_TYPES\n#undef LP_TYPE\n\n// Complex to complex pairs (individual)\n\nstatic void nx_c_cast_c32_to_c32_kernel(void *src_data, void *dst_data,\n                                        long src_off, long dst_off) {\n  complex32 *src = (complex32 *)src_data;\n  complex32 *dst = (complex32 *)dst_data;\n  dst[dst_off] = src[src_off];\n}\nCAST_IMPL(c32, c32)\n\nstatic void nx_c_cast_c32_to_c64_kernel(void *src_data, void *dst_data,\n                               
         long src_off, long dst_off) {\n  complex32 *src = (complex32 *)src_data;\n  complex64 *dst = (complex64 *)dst_data;\n  dst[dst_off] = (complex64)src[src_off];\n}\nCAST_IMPL(c32, c64)\n\nstatic void nx_c_cast_c64_to_c32_kernel(void *src_data, void *dst_data,\n                                        long src_off, long dst_off) {\n  complex64 *src = (complex64 *)src_data;\n  complex32 *dst = (complex32 *)dst_data;\n  dst[dst_off] = (complex32)src[src_off];\n}\nCAST_IMPL(c64, c32)\n\nstatic void nx_c_cast_c64_to_c64_kernel(void *src_data, void *dst_data,\n                                        long src_off, long dst_off) {\n  complex64 *src = (complex64 *)src_data;\n  complex64 *dst = (complex64 *)dst_data;\n  dst[dst_off] = src[src_off];\n}\nCAST_IMPL(c64, c64)\n\n// Generate cast for i4 to standard real\n\n#define GEN_CAST_I4_TO_STANDARD(dst_suffix, dst_t)                  \\\n  static void nx_c_cast_i4_to_##dst_suffix##_kernel(                \\\n      void *src_data, void *dst_data, long src_off, long dst_off) { \\\n    uint8_t *src = (uint8_t *)src_data;                             \\\n    dst_t *dst = (dst_t *)dst_data;                                 \\\n    long byte_off = src_off / 2;                                    \\\n    int nib_off = src_off % 2;                                      \\\n    int a = nib_off ? 
((int8_t)src[byte_off] >> 4)                  \\\n                    : (int8_t)((src[byte_off] & 0x0F) << 4) >> 4;   \\\n    dst[dst_off] = (dst_t)a;                                        \\\n  }                                                                 \\\n  CAST_IMPL(i4, dst_suffix)\n\n#define SR_TYPE(dst_suffix, dst_t) GEN_CAST_I4_TO_STANDARD(dst_suffix, dst_t)\nSTANDARD_REAL_TYPES\n#undef SR_TYPE\n\n// Generate cast for u4 to standard real\n\n#define GEN_CAST_U4_TO_STANDARD(dst_suffix, dst_t)                        \\\n  static void nx_c_cast_u4_to_##dst_suffix##_kernel(                      \\\n      void *src_data, void *dst_data, long src_off, long dst_off) {       \\\n    uint8_t *src = (uint8_t *)src_data;                                   \\\n    dst_t *dst = (dst_t *)dst_data;                                       \\\n    long byte_off = src_off / 2;                                          \\\n    int nib_off = src_off % 2;                                            \\\n    int a = nib_off ? (src[byte_off] >> 4) & 0x0F : src[byte_off] & 0x0F; \\\n    dst[dst_off] = (dst_t)a;                                              \\\n  }                                                                       \\\n  CAST_IMPL(u4, dst_suffix)\n\n#define SR_TYPE(dst_suffix, dst_t) GEN_CAST_U4_TO_STANDARD(dst_suffix, dst_t)\nSTANDARD_REAL_TYPES\n#undef SR_TYPE\n\n// i4 to bool\n\nstatic void nx_c_cast_i4_to_bool_kernel(void *src_data, void *dst_data,\n                                        long src_off, long dst_off) {\n  uint8_t *src = (uint8_t *)src_data;\n  uint8_t *dst = (uint8_t *)dst_data;\n  long byte_off = src_off / 2;\n  int nib_off = src_off % 2;\n  int a = nib_off ? ((int8_t)src[byte_off] >> 4)\n                  : (int8_t)((src[byte_off] & 0x0F) << 4) >> 4;\n  dst[dst_off] = (a != 0) ? 
1 : 0;\n}\nCAST_IMPL(i4, bool)\n\n// u4 to bool\n\nstatic void nx_c_cast_u4_to_bool_kernel(void *src_data, void *dst_data,\n                                        long src_off, long dst_off) {\n  uint8_t *src = (uint8_t *)src_data;\n  uint8_t *dst = (uint8_t *)dst_data;\n  long byte_off = src_off / 2;\n  int nib_off = src_off % 2;\n  int a = nib_off ? (src[byte_off] >> 4) & 0x0F : src[byte_off] & 0x0F;\n  dst[dst_off] = (a != 0) ? 1 : 0;\n}\nCAST_IMPL(u4, bool)\n\n// Generate cast for i4 to low prec\n\n#define GEN_CAST_I4_TO_LP(dst_suffix, dst_t, FROM_FLOAT)            \\\n  static void nx_c_cast_i4_to_##dst_suffix##_kernel(                \\\n      void *src_data, void *dst_data, long src_off, long dst_off) { \\\n    uint8_t *src = (uint8_t *)src_data;                             \\\n    dst_t *dst = (dst_t *)dst_data;                                 \\\n    long byte_off = src_off / 2;                                    \\\n    int nib_off = src_off % 2;                                      \\\n    int a = nib_off ? 
((int8_t)src[byte_off] >> 4)                  \\\n                    : (int8_t)((src[byte_off] & 0x0F) << 4) >> 4;   \\\n    float temp = (float)a;                                          \\\n    dst[dst_off] = FROM_FLOAT(temp);                                \\\n  }                                                                 \\\n  CAST_IMPL(i4, dst_suffix)\n\n#define LP_TYPE(suffix, t, to_f, from_f) GEN_CAST_I4_TO_LP(suffix, t, from_f)\nLOW_PREC_TYPES\n#undef LP_TYPE\n\n// Generate cast for u4 to low prec\n\n#define GEN_CAST_U4_TO_LP(dst_suffix, dst_t, FROM_FLOAT)                  \\\n  static void nx_c_cast_u4_to_##dst_suffix##_kernel(                      \\\n      void *src_data, void *dst_data, long src_off, long dst_off) {       \\\n    uint8_t *src = (uint8_t *)src_data;                                   \\\n    dst_t *dst = (dst_t *)dst_data;                                       \\\n    long byte_off = src_off / 2;                                          \\\n    int nib_off = src_off % 2;                                            \\\n    int a = nib_off ? 
(src[byte_off] >> 4) & 0x0F : src[byte_off] & 0x0F; \\\n    float temp = (float)a;                                                \\\n    dst[dst_off] = FROM_FLOAT(temp);                                      \\\n  }                                                                       \\\n  CAST_IMPL(u4, dst_suffix)\n\n#define LP_TYPE(suffix, t, to_f, from_f) GEN_CAST_U4_TO_LP(suffix, t, from_f)\nLOW_PREC_TYPES\n#undef LP_TYPE\n\n// Generate cast for i4 to c32/c64\n\n#define GEN_CAST_I4_TO_CP(dst_suffix, dst_t, BASE_T)                \\\n  static void nx_c_cast_i4_to_##dst_suffix##_kernel(                \\\n      void *src_data, void *dst_data, long src_off, long dst_off) { \\\n    uint8_t *src = (uint8_t *)src_data;                             \\\n    dst_t *dst = (dst_t *)dst_data;                                 \\\n    long byte_off = src_off / 2;                                    \\\n    int nib_off = src_off % 2;                                      \\\n    int a = nib_off ? ((int8_t)src[byte_off] >> 4)                  \\\n                    : (int8_t)((src[byte_off] & 0x0F) << 4) >> 4;   \\\n    BASE_T temp = (BASE_T)a;                                        \\\n    dst[dst_off] = temp + 0.0 * I;                                  \\\n  }                                                                 \\\n  CAST_IMPL(i4, dst_suffix)\n\nGEN_CAST_I4_TO_CP(c32, complex32, float)\nGEN_CAST_I4_TO_CP(c64, complex64, double)\n\n// Similar for u4 to c32/c64\n\n#define GEN_CAST_U4_TO_CP(dst_suffix, dst_t, BASE_T)                      \\\n  static void nx_c_cast_u4_to_##dst_suffix##_kernel(                      \\\n      void *src_data, void *dst_data, long src_off, long dst_off) {       \\\n    uint8_t *src = (uint8_t *)src_data;                                   \\\n    dst_t *dst = (dst_t *)dst_data;                                       \\\n    long byte_off = src_off / 2;                                          \\\n    int nib_off = src_off % 2;                
                            \\\n    int a = nib_off ? (src[byte_off] >> 4) & 0x0F : src[byte_off] & 0x0F; \\\n    BASE_T temp = (BASE_T)a;                                              \\\n    dst[dst_off] = temp + 0.0 * I;                                        \\\n  }                                                                       \\\n  CAST_IMPL(u4, dst_suffix)\n\nGEN_CAST_U4_TO_CP(c32, complex32, float)\nGEN_CAST_U4_TO_CP(c64, complex64, double)\n\n// Generate cast for standard real to i4/u4\n\n#define GEN_CAST_STANDARD_TO_PACKED(src_suffix, src_t, packed_suffix) \\\n  static void nx_c_cast_##src_suffix##_to_##packed_suffix##_kernel(   \\\n      void *src_data, void *dst_data, long src_off, long dst_off) {   \\\n    src_t *src = (src_t *)src_data;                                   \\\n    uint8_t *dst = (uint8_t *)dst_data;                               \\\n    long byte_off = dst_off / 2;                                      \\\n    int nib_off = dst_off % 2;                                        \\\n    int res = (int)src[src_off];                                      \\\n    uint8_t nib = (uint8_t)res & 0x0F;                                \\\n    if (nib_off) {                                                    \\\n      dst[byte_off] = (dst[byte_off] & 0x0F) | (nib << 4);            \\\n    } else {                                                          \\\n      dst[byte_off] = (dst[byte_off] & 0xF0) | nib;                   \\\n    }                                                                 \\\n  }                                                                   \\\n  CAST_IMPL(src_suffix, packed_suffix)\n\n// Generate standard to packed casts\nGEN_CAST_STANDARD_TO_PACKED(i8, int8_t, i4)\nGEN_CAST_STANDARD_TO_PACKED(u8, uint8_t, i4)\nGEN_CAST_STANDARD_TO_PACKED(i16, int16_t, i4)\nGEN_CAST_STANDARD_TO_PACKED(u16, uint16_t, i4)\nGEN_CAST_STANDARD_TO_PACKED(i32, int32_t, i4)\nGEN_CAST_STANDARD_TO_PACKED(i64, int64_t, 
i4)\nGEN_CAST_STANDARD_TO_PACKED(u32, uint32_t, i4)\nGEN_CAST_STANDARD_TO_PACKED(u64, uint64_t, i4)\nGEN_CAST_STANDARD_TO_PACKED(inat, intnat, i4)\nGEN_CAST_STANDARD_TO_PACKED(f32, float, i4)\nGEN_CAST_STANDARD_TO_PACKED(f64, double, i4)\n\nGEN_CAST_STANDARD_TO_PACKED(i8, int8_t, u4)\nGEN_CAST_STANDARD_TO_PACKED(u8, uint8_t, u4)\nGEN_CAST_STANDARD_TO_PACKED(i16, int16_t, u4)\nGEN_CAST_STANDARD_TO_PACKED(u16, uint16_t, u4)\nGEN_CAST_STANDARD_TO_PACKED(i32, int32_t, u4)\nGEN_CAST_STANDARD_TO_PACKED(i64, int64_t, u4)\nGEN_CAST_STANDARD_TO_PACKED(u32, uint32_t, u4)\nGEN_CAST_STANDARD_TO_PACKED(u64, uint64_t, u4)\nGEN_CAST_STANDARD_TO_PACKED(inat, intnat, u4)\nGEN_CAST_STANDARD_TO_PACKED(f32, float, u4)\nGEN_CAST_STANDARD_TO_PACKED(f64, double, u4)\n\n// Bool to i4/u4\n\n#define GEN_CAST_BOOL_TO_PACKED(packed_suffix)                      \\\n  static void nx_c_cast_bool_to_##packed_suffix##_kernel(           \\\n      void *src_data, void *dst_data, long src_off, long dst_off) { \\\n    uint8_t *src = (uint8_t *)src_data;                             \\\n    uint8_t *dst = (uint8_t *)dst_data;                             \\\n    long byte_off = dst_off / 2;                                    \\\n    int nib_off = dst_off % 2;                                      \\\n    int res = (int)src[src_off];                                    \\\n    uint8_t nib = (uint8_t)res & 0x0F;                              \\\n    if (nib_off) {                                                  \\\n      dst[byte_off] = (dst[byte_off] & 0x0F) | (nib << 4);          \\\n    } else {                                                        \\\n      dst[byte_off] = (dst[byte_off] & 0xF0) | nib;                 \\\n    }                                                               \\\n  }                                                                 \\\n  CAST_IMPL(bool, packed_suffix)\n\nGEN_CAST_BOOL_TO_PACKED(i4)\nGEN_CAST_BOOL_TO_PACKED(u4)\n\n// Low prec to i4/u4\n\n#define 
GEN_CAST_LP_TO_PACKED(src_suffix, src_t, TO_FLOAT, packed_suffix) \\\n  static void nx_c_cast_##src_suffix##_to_##packed_suffix##_kernel(       \\\n      void *src_data, void *dst_data, long src_off, long dst_off) {       \\\n    src_t *src = (src_t *)src_data;                                       \\\n    uint8_t *dst = (uint8_t *)dst_data;                                   \\\n    long byte_off = dst_off / 2;                                          \\\n    int nib_off = dst_off % 2;                                            \\\n    float temp = TO_FLOAT(src[src_off]);                                  \\\n    int res = (int)temp;                                                  \\\n    uint8_t nib = (uint8_t)res & 0x0F;                                    \\\n    if (nib_off) {                                                        \\\n      dst[byte_off] = (dst[byte_off] & 0x0F) | (nib << 4);                \\\n    } else {                                                              \\\n      dst[byte_off] = (dst[byte_off] & 0xF0) | nib;                       \\\n    }                                                                     \\\n  }                                                                       \\\n  CAST_IMPL(src_suffix, packed_suffix)\n\n// Generate low-precision to packed casts\nGEN_CAST_LP_TO_PACKED(f16, uint16_t, half_to_float, i4)\nGEN_CAST_LP_TO_PACKED(bf16, caml_ba_bfloat16, bfloat16_to_float, i4)\nGEN_CAST_LP_TO_PACKED(f8e4m3, caml_ba_fp8_e4m3, fp8_e4m3_to_float, i4)\nGEN_CAST_LP_TO_PACKED(f8e5m2, caml_ba_fp8_e5m2, fp8_e5m2_to_float, i4)\n\nGEN_CAST_LP_TO_PACKED(f16, uint16_t, half_to_float, u4)\nGEN_CAST_LP_TO_PACKED(bf16, caml_ba_bfloat16, bfloat16_to_float, u4)\nGEN_CAST_LP_TO_PACKED(f8e4m3, caml_ba_fp8_e4m3, fp8_e4m3_to_float, u4)\nGEN_CAST_LP_TO_PACKED(f8e5m2, caml_ba_fp8_e5m2, fp8_e5m2_to_float, u4)\n\n// Complex to i4/u4\n\n#define GEN_CAST_CP_TO_PACKED(src_suffix, src_t, RE_FN, packed_suffix) \\\n  static void 
nx_c_cast_##src_suffix##_to_##packed_suffix##_kernel(    \\\n      void *src_data, void *dst_data, long src_off, long dst_off) {    \\\n    src_t *src = (src_t *)src_data;                                    \\\n    uint8_t *dst = (uint8_t *)dst_data;                                \\\n    long byte_off = dst_off / 2;                                       \\\n    int nib_off = dst_off % 2;                                         \\\n    float temp = RE_FN;                                                \\\n    int res = (int)temp;                                               \\\n    uint8_t nib = (uint8_t)res & 0x0F;                                 \\\n    if (nib_off) {                                                     \\\n      dst[byte_off] = (dst[byte_off] & 0x0F) | (nib << 4);             \\\n    } else {                                                           \\\n      dst[byte_off] = (dst[byte_off] & 0xF0) | nib;                    \\\n    }                                                                  \\\n  }                                                                    \\\n  CAST_IMPL(src_suffix, packed_suffix)\n\n// Generate complex to packed casts\nGEN_CAST_CP_TO_PACKED(c32, complex32, crealf(src[src_off]), i4)\nGEN_CAST_CP_TO_PACKED(c64, complex64, creal(src[src_off]), i4)\n\nGEN_CAST_CP_TO_PACKED(c32, complex32, crealf(src[src_off]), u4)\nGEN_CAST_CP_TO_PACKED(c64, complex64, creal(src[src_off]), u4)\n\n// Identity casts for other low-precision and special types\nstatic void nx_c_cast_bf16_to_bf16_kernel(void *src_data, void *dst_data,\n                                          long src_off, long dst_off) {\n  caml_ba_bfloat16 *src = (caml_ba_bfloat16 *)src_data;\n  caml_ba_bfloat16 *dst = (caml_ba_bfloat16 *)dst_data;\n  dst[dst_off] = src[src_off];\n}\nCAST_IMPL(bf16, bf16)\n\nstatic void nx_c_cast_f8e4m3_to_f8e4m3_kernel(void *src_data, void *dst_data,\n                                              long src_off, long dst_off) {\n  
caml_ba_fp8_e4m3 *src = (caml_ba_fp8_e4m3 *)src_data;\n  caml_ba_fp8_e4m3 *dst = (caml_ba_fp8_e4m3 *)dst_data;\n  dst[dst_off] = src[src_off];\n}\nCAST_IMPL(f8e4m3, f8e4m3)\n\nstatic void nx_c_cast_f8e5m2_to_f8e5m2_kernel(void *src_data, void *dst_data,\n                                              long src_off, long dst_off) {\n  caml_ba_fp8_e5m2 *src = (caml_ba_fp8_e5m2 *)src_data;\n  caml_ba_fp8_e5m2 *dst = (caml_ba_fp8_e5m2 *)dst_data;\n  dst[dst_off] = src[src_off];\n}\nCAST_IMPL(f8e5m2, f8e5m2)\n\n// Packed to packed\n\nstatic void nx_c_cast_i4_to_i4_kernel(void *src_data, void *dst_data,\n                                      long src_off, long dst_off) {\n  uint8_t *src = (uint8_t *)src_data;\n  uint8_t *dst = (uint8_t *)dst_data;\n  long byte_off = src_off / 2;\n  int nib_off = src_off % 2;\n  int a = nib_off ? ((int8_t)src[byte_off] >> 4)\n                  : (int8_t)((src[byte_off] & 0x0F) << 4) >> 4;\n  uint8_t nib = (uint8_t)a & 0x0F;\n  long d_byte_off = dst_off / 2;\n  int d_nib_off = dst_off % 2;\n  if (d_nib_off) {\n    dst[d_byte_off] = (dst[d_byte_off] & 0x0F) | (nib << 4);\n  } else {\n    dst[d_byte_off] = (dst[d_byte_off] & 0xF0) | nib;\n  }\n}\nCAST_IMPL(i4, i4)\n\nstatic void nx_c_cast_i4_to_u4_kernel(void *src_data, void *dst_data,\n                                      long src_off, long dst_off) {\n  uint8_t *src = (uint8_t *)src_data;\n  uint8_t *dst = (uint8_t *)dst_data;\n  long byte_off = src_off / 2;\n  int nib_off = src_off % 2;\n  int a = nib_off ? 
((int8_t)src[byte_off] >> 4)\n                  : (int8_t)((src[byte_off] & 0x0F) << 4) >> 4;\n  uint8_t nib = (uint8_t)a & 0x0F;\n  long d_byte_off = dst_off / 2;\n  int d_nib_off = dst_off % 2;\n  if (d_nib_off) {\n    dst[d_byte_off] = (dst[d_byte_off] & 0x0F) | (nib << 4);\n  } else {\n    dst[d_byte_off] = (dst[d_byte_off] & 0xF0) | nib;\n  }\n}\nCAST_IMPL(i4, u4)\n\nstatic void nx_c_cast_u4_to_i4_kernel(void *src_data, void *dst_data,\n                                      long src_off, long dst_off) {\n  uint8_t *src = (uint8_t *)src_data;\n  uint8_t *dst = (uint8_t *)dst_data;\n  long byte_off = src_off / 2;\n  int nib_off = src_off % 2;\n  int a = nib_off ? (src[byte_off] >> 4) & 0x0F : src[byte_off] & 0x0F;\n  uint8_t nib = (uint8_t)a & 0x0F;\n  long d_byte_off = dst_off / 2;\n  int d_nib_off = dst_off % 2;\n  if (d_nib_off) {\n    dst[d_byte_off] = (dst[d_byte_off] & 0x0F) | (nib << 4);\n  } else {\n    dst[d_byte_off] = (dst[d_byte_off] & 0xF0) | nib;\n  }\n}\nCAST_IMPL(u4, i4)\n\nstatic void nx_c_cast_u4_to_u4_kernel(void *src_data, void *dst_data,\n                                      long src_off, long dst_off) {\n  uint8_t *src = (uint8_t *)src_data;\n  uint8_t *dst = (uint8_t *)dst_data;\n  long byte_off = src_off / 2;\n  int nib_off = src_off % 2;\n  int a = nib_off ? 
(src[byte_off] >> 4) & 0x0F : src[byte_off] & 0x0F;\n  uint8_t nib = (uint8_t)a & 0x0F;\n  long d_byte_off = dst_off / 2;\n  int d_nib_off = dst_off % 2;\n  if (d_nib_off) {\n    dst[d_byte_off] = (dst[d_byte_off] & 0x0F) | (nib << 4);\n  } else {\n    dst[d_byte_off] = (dst[d_byte_off] & 0xF0) | nib;\n  }\n}\nCAST_IMPL(u4, u4)\n\n// Dispatch table\n\nstatic const cast_op_t cast_table[NX_NUM_DTYPES][NX_NUM_DTYPES] =\n    {\n        [NX_DTYPE_I8] =\n            {\n                [NX_DTYPE_I8] = nx_c_cast_i8_to_i8,\n                [NX_DTYPE_U8] = nx_c_cast_i8_to_u8,\n                [NX_DTYPE_I16] = nx_c_cast_i8_to_i16,\n                [NX_DTYPE_U16] = nx_c_cast_i8_to_u16,\n                [NX_DTYPE_I32] = nx_c_cast_i8_to_i32,\n                [NX_DTYPE_I64] = nx_c_cast_i8_to_i64,\n                [NX_DTYPE_U32] = nx_c_cast_i8_to_u32,\n                [NX_DTYPE_U64] = nx_c_cast_i8_to_u64,\n                [NX_DTYPE_INAT] = nx_c_cast_i8_to_inat,\n                [NX_DTYPE_F16] = nx_c_cast_i8_to_f16,\n                [NX_DTYPE_F32] = nx_c_cast_i8_to_f32,\n                [NX_DTYPE_F64] = nx_c_cast_i8_to_f64,\n                [NX_DTYPE_C32] = nx_c_cast_i8_to_c32,\n                [NX_DTYPE_C64] = nx_c_cast_i8_to_c64,\n                [NX_DTYPE_BF16] = nx_c_cast_i8_to_bf16,\n                [NX_DTYPE_BOOL] = nx_c_cast_i8_to_bool,\n                [NX_DTYPE_I4] = nx_c_cast_i8_to_i4,\n                [NX_DTYPE_U4] = nx_c_cast_i8_to_u4,\n                [NX_DTYPE_F8E4M3] = nx_c_cast_i8_to_f8e4m3,\n                [NX_DTYPE_F8E5M2] = nx_c_cast_i8_to_f8e5m2,\n            },\n        [NX_DTYPE_U8] =\n            {\n                [NX_DTYPE_I8] = nx_c_cast_u8_to_i8,\n                [NX_DTYPE_U8] = nx_c_cast_u8_to_u8,\n                [NX_DTYPE_I16] = nx_c_cast_u8_to_i16,\n                [NX_DTYPE_U16] = nx_c_cast_u8_to_u16,\n                [NX_DTYPE_I32] = nx_c_cast_u8_to_i32,\n                [NX_DTYPE_I64] = nx_c_cast_u8_to_i64,\n                [NX_DTYPE_U32] = 
nx_c_cast_u8_to_u32,\n                [NX_DTYPE_U64] = nx_c_cast_u8_to_u64,\n                [NX_DTYPE_INAT] = nx_c_cast_u8_to_inat,\n                [NX_DTYPE_F16] = nx_c_cast_u8_to_f16,\n                [NX_DTYPE_F32] = nx_c_cast_u8_to_f32,\n                [NX_DTYPE_F64] = nx_c_cast_u8_to_f64,\n                [NX_DTYPE_C32] = nx_c_cast_u8_to_c32,\n                [NX_DTYPE_C64] = nx_c_cast_u8_to_c64,\n                [NX_DTYPE_BF16] = nx_c_cast_u8_to_bf16,\n                [NX_DTYPE_BOOL] = nx_c_cast_u8_to_bool,\n                [NX_DTYPE_I4] = nx_c_cast_u8_to_i4,\n                [NX_DTYPE_U4] = nx_c_cast_u8_to_u4,\n                [NX_DTYPE_F8E4M3] = nx_c_cast_u8_to_f8e4m3,\n                [NX_DTYPE_F8E5M2] = nx_c_cast_u8_to_f8e5m2,\n            },\n        [NX_DTYPE_I16] =\n            {\n                [NX_DTYPE_I8] = nx_c_cast_i16_to_i8,\n                [NX_DTYPE_U8] = nx_c_cast_i16_to_u8,\n                [NX_DTYPE_I16] = nx_c_cast_i16_to_i16,\n                [NX_DTYPE_U16] = nx_c_cast_i16_to_u16,\n                [NX_DTYPE_I32] = nx_c_cast_i16_to_i32,\n                [NX_DTYPE_I64] = nx_c_cast_i16_to_i64,\n                [NX_DTYPE_U32] = nx_c_cast_i16_to_u32,\n                [NX_DTYPE_U64] = nx_c_cast_i16_to_u64,\n                [NX_DTYPE_INAT] = nx_c_cast_i16_to_inat,\n                [NX_DTYPE_F16] = nx_c_cast_i16_to_f16,\n                [NX_DTYPE_F32] = nx_c_cast_i16_to_f32,\n                [NX_DTYPE_F64] = nx_c_cast_i16_to_f64,\n                [NX_DTYPE_C32] = nx_c_cast_i16_to_c32,\n                [NX_DTYPE_C64] = nx_c_cast_i16_to_c64,\n                [NX_DTYPE_BF16] = nx_c_cast_i16_to_bf16,\n                [NX_DTYPE_BOOL] = nx_c_cast_i16_to_bool,\n                [NX_DTYPE_I4] = nx_c_cast_i16_to_i4,\n                [NX_DTYPE_U4] = nx_c_cast_i16_to_u4,\n                [NX_DTYPE_F8E4M3] = nx_c_cast_i16_to_f8e4m3,\n                [NX_DTYPE_F8E5M2] = nx_c_cast_i16_to_f8e5m2,\n            },\n        [NX_DTYPE_U16] =\n            {\n     
           [NX_DTYPE_I8] = nx_c_cast_u16_to_i8,\n                [NX_DTYPE_U8] = nx_c_cast_u16_to_u8,\n                [NX_DTYPE_I16] = nx_c_cast_u16_to_i16,\n                [NX_DTYPE_U16] = nx_c_cast_u16_to_u16,\n                [NX_DTYPE_I32] = nx_c_cast_u16_to_i32,\n                [NX_DTYPE_I64] = nx_c_cast_u16_to_i64,\n                [NX_DTYPE_U32] = nx_c_cast_u16_to_u32,\n                [NX_DTYPE_U64] = nx_c_cast_u16_to_u64,\n                [NX_DTYPE_INAT] = nx_c_cast_u16_to_inat,\n                [NX_DTYPE_F16] = nx_c_cast_u16_to_f16,\n                [NX_DTYPE_F32] = nx_c_cast_u16_to_f32,\n                [NX_DTYPE_F64] = nx_c_cast_u16_to_f64,\n                [NX_DTYPE_C32] = nx_c_cast_u16_to_c32,\n                [NX_DTYPE_C64] = nx_c_cast_u16_to_c64,\n                [NX_DTYPE_BF16] = nx_c_cast_u16_to_bf16,\n                [NX_DTYPE_BOOL] = nx_c_cast_u16_to_bool,\n                [NX_DTYPE_I4] = nx_c_cast_u16_to_i4,\n                [NX_DTYPE_U4] = nx_c_cast_u16_to_u4,\n                [NX_DTYPE_F8E4M3] = nx_c_cast_u16_to_f8e4m3,\n                [NX_DTYPE_F8E5M2] = nx_c_cast_u16_to_f8e5m2,\n            },\n        [NX_DTYPE_I32] =\n            {\n                [NX_DTYPE_I8] = nx_c_cast_i32_to_i8,\n                [NX_DTYPE_U8] = nx_c_cast_i32_to_u8,\n                [NX_DTYPE_I16] = nx_c_cast_i32_to_i16,\n                [NX_DTYPE_U16] = nx_c_cast_i32_to_u16,\n                [NX_DTYPE_I32] = nx_c_cast_i32_to_i32,\n                [NX_DTYPE_I64] = nx_c_cast_i32_to_i64,\n                [NX_DTYPE_U32] = nx_c_cast_i32_to_u32,\n                [NX_DTYPE_U64] = nx_c_cast_i32_to_u64,\n                [NX_DTYPE_INAT] = nx_c_cast_i32_to_inat,\n                [NX_DTYPE_F16] = nx_c_cast_i32_to_f16,\n                [NX_DTYPE_F32] = nx_c_cast_i32_to_f32,\n                [NX_DTYPE_F64] = nx_c_cast_i32_to_f64,\n                [NX_DTYPE_C32] = nx_c_cast_i32_to_c32,\n                [NX_DTYPE_C64] = nx_c_cast_i32_to_c64,\n                [NX_DTYPE_BF16] = 
nx_c_cast_i32_to_bf16,\n                [NX_DTYPE_BOOL] = nx_c_cast_i32_to_bool,\n                [NX_DTYPE_I4] = nx_c_cast_i32_to_i4,\n                [NX_DTYPE_U4] = nx_c_cast_i32_to_u4,\n                [NX_DTYPE_F8E4M3] = nx_c_cast_i32_to_f8e4m3,\n                [NX_DTYPE_F8E5M2] = nx_c_cast_i32_to_f8e5m2,\n            },\n        [NX_DTYPE_I64] =\n            {\n                [NX_DTYPE_I8] = nx_c_cast_i64_to_i8,\n                [NX_DTYPE_U8] = nx_c_cast_i64_to_u8,\n                [NX_DTYPE_I16] = nx_c_cast_i64_to_i16,\n                [NX_DTYPE_U16] = nx_c_cast_i64_to_u16,\n                [NX_DTYPE_I32] = nx_c_cast_i64_to_i32,\n                [NX_DTYPE_I64] = nx_c_cast_i64_to_i64,\n                [NX_DTYPE_U32] = nx_c_cast_i64_to_u32,\n                [NX_DTYPE_U64] = nx_c_cast_i64_to_u64,\n                [NX_DTYPE_INAT] = nx_c_cast_i64_to_inat,\n                [NX_DTYPE_F16] = nx_c_cast_i64_to_f16,\n                [NX_DTYPE_F32] = nx_c_cast_i64_to_f32,\n                [NX_DTYPE_F64] = nx_c_cast_i64_to_f64,\n                [NX_DTYPE_C32] = nx_c_cast_i64_to_c32,\n                [NX_DTYPE_C64] = nx_c_cast_i64_to_c64,\n                [NX_DTYPE_BF16] = nx_c_cast_i64_to_bf16,\n                [NX_DTYPE_BOOL] = nx_c_cast_i64_to_bool,\n                [NX_DTYPE_I4] = nx_c_cast_i64_to_i4,\n                [NX_DTYPE_U4] = nx_c_cast_i64_to_u4,\n                [NX_DTYPE_F8E4M3] = nx_c_cast_i64_to_f8e4m3,\n                [NX_DTYPE_F8E5M2] = nx_c_cast_i64_to_f8e5m2,\n            },\n        [NX_DTYPE_U32] =\n            {\n                [NX_DTYPE_I8] = nx_c_cast_u32_to_i8,\n                [NX_DTYPE_U8] = nx_c_cast_u32_to_u8,\n                [NX_DTYPE_I16] = nx_c_cast_u32_to_i16,\n                [NX_DTYPE_U16] = nx_c_cast_u32_to_u16,\n                [NX_DTYPE_I32] = nx_c_cast_u32_to_i32,\n                [NX_DTYPE_I64] = nx_c_cast_u32_to_i64,\n                [NX_DTYPE_U32] = nx_c_cast_u32_to_u32,\n                [NX_DTYPE_U64] = 
nx_c_cast_u32_to_u64,\n                [NX_DTYPE_INAT] = nx_c_cast_u32_to_inat,\n                [NX_DTYPE_F16] = nx_c_cast_u32_to_f16,\n                [NX_DTYPE_F32] = nx_c_cast_u32_to_f32,\n                [NX_DTYPE_F64] = nx_c_cast_u32_to_f64,\n                [NX_DTYPE_C32] = nx_c_cast_u32_to_c32,\n                [NX_DTYPE_C64] = nx_c_cast_u32_to_c64,\n                [NX_DTYPE_BF16] = nx_c_cast_u32_to_bf16,\n                [NX_DTYPE_BOOL] = nx_c_cast_u32_to_bool,\n                [NX_DTYPE_I4] = nx_c_cast_u32_to_i4,\n                [NX_DTYPE_U4] = nx_c_cast_u32_to_u4,\n                [NX_DTYPE_F8E4M3] = nx_c_cast_u32_to_f8e4m3,\n                [NX_DTYPE_F8E5M2] = nx_c_cast_u32_to_f8e5m2,\n            },\n        [NX_DTYPE_U64] =\n            {\n                [NX_DTYPE_I8] = nx_c_cast_u64_to_i8,\n                [NX_DTYPE_U8] = nx_c_cast_u64_to_u8,\n                [NX_DTYPE_I16] = nx_c_cast_u64_to_i16,\n                [NX_DTYPE_U16] = nx_c_cast_u64_to_u16,\n                [NX_DTYPE_I32] = nx_c_cast_u64_to_i32,\n                [NX_DTYPE_I64] = nx_c_cast_u64_to_i64,\n                [NX_DTYPE_U32] = nx_c_cast_u64_to_u32,\n                [NX_DTYPE_U64] = nx_c_cast_u64_to_u64,\n                [NX_DTYPE_INAT] = nx_c_cast_u64_to_inat,\n                [NX_DTYPE_F16] = nx_c_cast_u64_to_f16,\n                [NX_DTYPE_F32] = nx_c_cast_u64_to_f32,\n                [NX_DTYPE_F64] = nx_c_cast_u64_to_f64,\n                [NX_DTYPE_C32] = nx_c_cast_u64_to_c32,\n                [NX_DTYPE_C64] = nx_c_cast_u64_to_c64,\n                [NX_DTYPE_BF16] = nx_c_cast_u64_to_bf16,\n                [NX_DTYPE_BOOL] = nx_c_cast_u64_to_bool,\n                [NX_DTYPE_I4] = nx_c_cast_u64_to_i4,\n                [NX_DTYPE_U4] = nx_c_cast_u64_to_u4,\n                [NX_DTYPE_F8E4M3] = nx_c_cast_u64_to_f8e4m3,\n                [NX_DTYPE_F8E5M2] = nx_c_cast_u64_to_f8e5m2,\n            },\n        [NX_DTYPE_INAT] =\n            {\n                [NX_DTYPE_I8] = 
nx_c_cast_inat_to_i8,\n                [NX_DTYPE_U8] = nx_c_cast_inat_to_u8,\n                [NX_DTYPE_I16] = nx_c_cast_inat_to_i16,\n                [NX_DTYPE_U16] = nx_c_cast_inat_to_u16,\n                [NX_DTYPE_I32] = nx_c_cast_inat_to_i32,\n                [NX_DTYPE_I64] = nx_c_cast_inat_to_i64,\n                [NX_DTYPE_U32] = nx_c_cast_inat_to_u32,\n                [NX_DTYPE_U64] = nx_c_cast_inat_to_u64,\n                [NX_DTYPE_INAT] = nx_c_cast_inat_to_inat,\n                [NX_DTYPE_F16] = nx_c_cast_inat_to_f16,\n                [NX_DTYPE_F32] = nx_c_cast_inat_to_f32,\n                [NX_DTYPE_F64] = nx_c_cast_inat_to_f64,\n                [NX_DTYPE_C32] = nx_c_cast_inat_to_c32,\n                [NX_DTYPE_C64] = nx_c_cast_inat_to_c64,\n                [NX_DTYPE_BF16] = nx_c_cast_inat_to_bf16,\n                [NX_DTYPE_BOOL] = nx_c_cast_inat_to_bool,\n                [NX_DTYPE_I4] = nx_c_cast_inat_to_i4,\n                [NX_DTYPE_U4] = nx_c_cast_inat_to_u4,\n                [NX_DTYPE_F8E4M3] = nx_c_cast_inat_to_f8e4m3,\n                [NX_DTYPE_F8E5M2] = nx_c_cast_inat_to_f8e5m2,\n            },\n        [NX_DTYPE_F16] =\n            {\n                [NX_DTYPE_I8] = nx_c_cast_f16_to_i8,\n                [NX_DTYPE_U8] = nx_c_cast_f16_to_u8,\n                [NX_DTYPE_I16] = nx_c_cast_f16_to_i16,\n                [NX_DTYPE_U16] = nx_c_cast_f16_to_u16,\n                [NX_DTYPE_I32] = nx_c_cast_f16_to_i32,\n                [NX_DTYPE_I64] = nx_c_cast_f16_to_i64,\n                [NX_DTYPE_U32] = nx_c_cast_f16_to_u32,\n                [NX_DTYPE_U64] = nx_c_cast_f16_to_u64,\n                [NX_DTYPE_INAT] = nx_c_cast_f16_to_inat,\n                [NX_DTYPE_F16] = nx_c_cast_f16_to_f16,\n                [NX_DTYPE_F32] = nx_c_cast_f16_to_f32,\n                [NX_DTYPE_F64] = nx_c_cast_f16_to_f64,\n                [NX_DTYPE_C32] = nx_c_cast_f16_to_c32,\n                [NX_DTYPE_C64] = nx_c_cast_f16_to_c64,\n                [NX_DTYPE_BF16] = 
nx_c_cast_f16_to_bf16,\n                [NX_DTYPE_BOOL] = nx_c_cast_f16_to_bool,\n                [NX_DTYPE_I4] = nx_c_cast_f16_to_i4,\n                [NX_DTYPE_U4] = nx_c_cast_f16_to_u4,\n                [NX_DTYPE_F8E4M3] = nx_c_cast_f16_to_f8e4m3,\n                [NX_DTYPE_F8E5M2] = nx_c_cast_f16_to_f8e5m2,\n            },\n        [NX_DTYPE_F32] =\n            {\n                [NX_DTYPE_I8] = nx_c_cast_f32_to_i8,\n                [NX_DTYPE_U8] = nx_c_cast_f32_to_u8,\n                [NX_DTYPE_I16] = nx_c_cast_f32_to_i16,\n                [NX_DTYPE_U16] = nx_c_cast_f32_to_u16,\n                [NX_DTYPE_I32] = nx_c_cast_f32_to_i32,\n                [NX_DTYPE_I64] = nx_c_cast_f32_to_i64,\n                [NX_DTYPE_U32] = nx_c_cast_f32_to_u32,\n                [NX_DTYPE_U64] = nx_c_cast_f32_to_u64,\n                [NX_DTYPE_INAT] = nx_c_cast_f32_to_inat,\n                [NX_DTYPE_F16] = nx_c_cast_f32_to_f16,\n                [NX_DTYPE_F32] = nx_c_cast_f32_to_f32,\n                [NX_DTYPE_F64] = nx_c_cast_f32_to_f64,\n                [NX_DTYPE_C32] = nx_c_cast_f32_to_c32,\n                [NX_DTYPE_C64] = nx_c_cast_f32_to_c64,\n                [NX_DTYPE_BF16] = nx_c_cast_f32_to_bf16,\n                [NX_DTYPE_BOOL] = nx_c_cast_f32_to_bool,\n                [NX_DTYPE_I4] = nx_c_cast_f32_to_i4,\n                [NX_DTYPE_U4] = nx_c_cast_f32_to_u4,\n                [NX_DTYPE_F8E4M3] = nx_c_cast_f32_to_f8e4m3,\n                [NX_DTYPE_F8E5M2] = nx_c_cast_f32_to_f8e5m2,\n            },\n        [NX_DTYPE_F64] =\n            {\n                [NX_DTYPE_I8] = nx_c_cast_f64_to_i8,\n                [NX_DTYPE_U8] = nx_c_cast_f64_to_u8,\n                [NX_DTYPE_I16] = nx_c_cast_f64_to_i16,\n                [NX_DTYPE_U16] = nx_c_cast_f64_to_u16,\n                [NX_DTYPE_I32] = nx_c_cast_f64_to_i32,\n                [NX_DTYPE_I64] = nx_c_cast_f64_to_i64,\n                [NX_DTYPE_U32] = nx_c_cast_f64_to_u32,\n                [NX_DTYPE_U64] = 
nx_c_cast_f64_to_u64,\n                [NX_DTYPE_INAT] = nx_c_cast_f64_to_inat,\n                [NX_DTYPE_F16] = nx_c_cast_f64_to_f16,\n                [NX_DTYPE_F32] = nx_c_cast_f64_to_f32,\n                [NX_DTYPE_F64] = nx_c_cast_f64_to_f64,\n                [NX_DTYPE_C32] = nx_c_cast_f64_to_c32,\n                [NX_DTYPE_C64] = nx_c_cast_f64_to_c64,\n                [NX_DTYPE_BF16] = nx_c_cast_f64_to_bf16,\n                [NX_DTYPE_BOOL] = nx_c_cast_f64_to_bool,\n                [NX_DTYPE_I4] = nx_c_cast_f64_to_i4,\n                [NX_DTYPE_U4] = nx_c_cast_f64_to_u4,\n                [NX_DTYPE_F8E4M3] = nx_c_cast_f64_to_f8e4m3,\n                [NX_DTYPE_F8E5M2] = nx_c_cast_f64_to_f8e5m2,\n            },\n        [NX_DTYPE_C32] =\n            {\n                [NX_DTYPE_I8] = nx_c_cast_c32_to_i8,\n                [NX_DTYPE_U8] = nx_c_cast_c32_to_u8,\n                [NX_DTYPE_I16] = nx_c_cast_c32_to_i16,\n                [NX_DTYPE_U16] = nx_c_cast_c32_to_u16,\n                [NX_DTYPE_I32] = nx_c_cast_c32_to_i32,\n                [NX_DTYPE_I64] = nx_c_cast_c32_to_i64,\n                [NX_DTYPE_U32] = nx_c_cast_c32_to_u32,\n                [NX_DTYPE_U64] = nx_c_cast_c32_to_u64,\n                [NX_DTYPE_INAT] = nx_c_cast_c32_to_inat,\n                [NX_DTYPE_F16] = nx_c_cast_c32_to_f16,\n                [NX_DTYPE_F32] = nx_c_cast_c32_to_f32,\n                [NX_DTYPE_F64] = nx_c_cast_c32_to_f64,\n                [NX_DTYPE_C32] = nx_c_cast_c32_to_c32,\n                [NX_DTYPE_C64] = nx_c_cast_c32_to_c64,\n                [NX_DTYPE_BF16] = nx_c_cast_c32_to_bf16,\n                [NX_DTYPE_BOOL] = nx_c_cast_c32_to_bool,\n                [NX_DTYPE_I4] = nx_c_cast_c32_to_i4,\n                [NX_DTYPE_U4] = nx_c_cast_c32_to_u4,\n                [NX_DTYPE_F8E4M3] = nx_c_cast_c32_to_f8e4m3,\n                [NX_DTYPE_F8E5M2] = nx_c_cast_c32_to_f8e5m2,\n            },\n        [NX_DTYPE_C64] =\n            {\n                [NX_DTYPE_I8] = 
nx_c_cast_c64_to_i8,\n                [NX_DTYPE_U8] = nx_c_cast_c64_to_u8,\n                [NX_DTYPE_I16] = nx_c_cast_c64_to_i16,\n                [NX_DTYPE_U16] = nx_c_cast_c64_to_u16,\n                [NX_DTYPE_I32] = nx_c_cast_c64_to_i32,\n                [NX_DTYPE_I64] = nx_c_cast_c64_to_i64,\n                [NX_DTYPE_U32] = nx_c_cast_c64_to_u32,\n                [NX_DTYPE_U64] = nx_c_cast_c64_to_u64,\n                [NX_DTYPE_INAT] = nx_c_cast_c64_to_inat,\n                [NX_DTYPE_F16] = nx_c_cast_c64_to_f16,\n                [NX_DTYPE_F32] = nx_c_cast_c64_to_f32,\n                [NX_DTYPE_F64] = nx_c_cast_c64_to_f64,\n                [NX_DTYPE_C32] = nx_c_cast_c64_to_c32,\n                [NX_DTYPE_C64] = nx_c_cast_c64_to_c64,\n                [NX_DTYPE_BF16] = nx_c_cast_c64_to_bf16,\n                [NX_DTYPE_BOOL] = nx_c_cast_c64_to_bool,\n                [NX_DTYPE_I4] = nx_c_cast_c64_to_i4,\n                [NX_DTYPE_U4] = nx_c_cast_c64_to_u4,\n                [NX_DTYPE_F8E4M3] = nx_c_cast_c64_to_f8e4m3,\n                [NX_DTYPE_F8E5M2] = nx_c_cast_c64_to_f8e5m2,\n            },\n        [NX_DTYPE_BF16] =\n            {\n                [NX_DTYPE_I8] = nx_c_cast_bf16_to_i8,\n                [NX_DTYPE_U8] = nx_c_cast_bf16_to_u8,\n                [NX_DTYPE_I16] = nx_c_cast_bf16_to_i16,\n                [NX_DTYPE_U16] = nx_c_cast_bf16_to_u16,\n                [NX_DTYPE_I32] = nx_c_cast_bf16_to_i32,\n                [NX_DTYPE_I64] = nx_c_cast_bf16_to_i64,\n                [NX_DTYPE_U32] = nx_c_cast_bf16_to_u32,\n                [NX_DTYPE_U64] = nx_c_cast_bf16_to_u64,\n                [NX_DTYPE_INAT] = nx_c_cast_bf16_to_inat,\n                [NX_DTYPE_F16] = nx_c_cast_bf16_to_f16,\n                [NX_DTYPE_F32] = nx_c_cast_bf16_to_f32,\n                [NX_DTYPE_F64] = nx_c_cast_bf16_to_f64,\n                [NX_DTYPE_C32] = nx_c_cast_bf16_to_c32,\n                [NX_DTYPE_C64] = nx_c_cast_bf16_to_c64,\n                [NX_DTYPE_BF16] = 
nx_c_cast_bf16_to_bf16,\n                [NX_DTYPE_BOOL] = nx_c_cast_bf16_to_bool,\n                [NX_DTYPE_I4] = nx_c_cast_bf16_to_i4,\n                [NX_DTYPE_U4] = nx_c_cast_bf16_to_u4,\n                [NX_DTYPE_F8E4M3] = nx_c_cast_bf16_to_f8e4m3,\n                [NX_DTYPE_F8E5M2] = nx_c_cast_bf16_to_f8e5m2,\n            },\n        [NX_DTYPE_BOOL] =\n            {\n                [NX_DTYPE_I8] = nx_c_cast_bool_to_i8,\n                [NX_DTYPE_U8] = nx_c_cast_bool_to_u8,\n                [NX_DTYPE_I16] = nx_c_cast_bool_to_i16,\n                [NX_DTYPE_U16] = nx_c_cast_bool_to_u16,\n                [NX_DTYPE_I32] = nx_c_cast_bool_to_i32,\n                [NX_DTYPE_I64] = nx_c_cast_bool_to_i64,\n                [NX_DTYPE_U32] = nx_c_cast_bool_to_u32,\n                [NX_DTYPE_U64] = nx_c_cast_bool_to_u64,\n                [NX_DTYPE_INAT] = nx_c_cast_bool_to_inat,\n                [NX_DTYPE_F16] = nx_c_cast_bool_to_f16,\n                [NX_DTYPE_F32] = nx_c_cast_bool_to_f32,\n                [NX_DTYPE_F64] = nx_c_cast_bool_to_f64,\n                [NX_DTYPE_C32] = nx_c_cast_bool_to_c32,\n                [NX_DTYPE_C64] = nx_c_cast_bool_to_c64,\n                [NX_DTYPE_BF16] = nx_c_cast_bool_to_bf16,\n                [NX_DTYPE_BOOL] = nx_c_cast_bool_to_bool,\n                [NX_DTYPE_I4] = nx_c_cast_bool_to_i4,\n                [NX_DTYPE_U4] = nx_c_cast_bool_to_u4,\n                [NX_DTYPE_F8E4M3] = nx_c_cast_bool_to_f8e4m3,\n                [NX_DTYPE_F8E5M2] = nx_c_cast_bool_to_f8e5m2,\n            },\n        [NX_DTYPE_I4] =\n            {\n                [NX_DTYPE_I8] = nx_c_cast_i4_to_i8,\n                [NX_DTYPE_U8] = nx_c_cast_i4_to_u8,\n                [NX_DTYPE_I16] = nx_c_cast_i4_to_i16,\n                [NX_DTYPE_U16] = nx_c_cast_i4_to_u16,\n                [NX_DTYPE_I32] = nx_c_cast_i4_to_i32,\n                [NX_DTYPE_I64] = nx_c_cast_i4_to_i64,\n                [NX_DTYPE_U32] = nx_c_cast_i4_to_u32,\n                [NX_DTYPE_U64] = 
nx_c_cast_i4_to_u64,\n                [NX_DTYPE_INAT] = nx_c_cast_i4_to_inat,\n                [NX_DTYPE_F16] = nx_c_cast_i4_to_f16,\n                [NX_DTYPE_F32] = nx_c_cast_i4_to_f32,\n                [NX_DTYPE_F64] = nx_c_cast_i4_to_f64,\n                [NX_DTYPE_C32] = nx_c_cast_i4_to_c32,\n                [NX_DTYPE_C64] = nx_c_cast_i4_to_c64,\n                [NX_DTYPE_BF16] = nx_c_cast_i4_to_bf16,\n                [NX_DTYPE_BOOL] = nx_c_cast_i4_to_bool,\n                [NX_DTYPE_I4] = nx_c_cast_i4_to_i4,\n                [NX_DTYPE_U4] = nx_c_cast_i4_to_u4,\n                [NX_DTYPE_F8E4M3] = nx_c_cast_i4_to_f8e4m3,\n                [NX_DTYPE_F8E5M2] = nx_c_cast_i4_to_f8e5m2,\n            },\n        [NX_DTYPE_U4] =\n            {\n                [NX_DTYPE_I8] = nx_c_cast_u4_to_i8,\n                [NX_DTYPE_U8] = nx_c_cast_u4_to_u8,\n                [NX_DTYPE_I16] = nx_c_cast_u4_to_i16,\n                [NX_DTYPE_U16] = nx_c_cast_u4_to_u16,\n                [NX_DTYPE_I32] = nx_c_cast_u4_to_i32,\n                [NX_DTYPE_I64] = nx_c_cast_u4_to_i64,\n                [NX_DTYPE_U32] = nx_c_cast_u4_to_u32,\n                [NX_DTYPE_U64] = nx_c_cast_u4_to_u64,\n                [NX_DTYPE_INAT] = nx_c_cast_u4_to_inat,\n                [NX_DTYPE_F16] = nx_c_cast_u4_to_f16,\n                [NX_DTYPE_F32] = nx_c_cast_u4_to_f32,\n                [NX_DTYPE_F64] = nx_c_cast_u4_to_f64,\n                [NX_DTYPE_C32] = nx_c_cast_u4_to_c32,\n                [NX_DTYPE_C64] = nx_c_cast_u4_to_c64,\n                [NX_DTYPE_BF16] = nx_c_cast_u4_to_bf16,\n                [NX_DTYPE_BOOL] = nx_c_cast_u4_to_bool,\n                [NX_DTYPE_I4] = nx_c_cast_u4_to_i4,\n                [NX_DTYPE_U4] = nx_c_cast_u4_to_u4,\n                [NX_DTYPE_F8E4M3] = nx_c_cast_u4_to_f8e4m3,\n                [NX_DTYPE_F8E5M2] = nx_c_cast_u4_to_f8e5m2,\n            },\n        [NX_DTYPE_F8E4M3] =\n            {\n                [NX_DTYPE_I8] = nx_c_cast_f8e4m3_to_i8,\n                
[NX_DTYPE_U8] = nx_c_cast_f8e4m3_to_u8,\n                [NX_DTYPE_I16] = nx_c_cast_f8e4m3_to_i16,\n                [NX_DTYPE_U16] = nx_c_cast_f8e4m3_to_u16,\n                [NX_DTYPE_I32] = nx_c_cast_f8e4m3_to_i32,\n                [NX_DTYPE_I64] = nx_c_cast_f8e4m3_to_i64,\n                [NX_DTYPE_U32] = nx_c_cast_f8e4m3_to_u32,\n                [NX_DTYPE_U64] = nx_c_cast_f8e4m3_to_u64,\n                [NX_DTYPE_INAT] = nx_c_cast_f8e4m3_to_inat,\n                [NX_DTYPE_F16] = nx_c_cast_f8e4m3_to_f16,\n                [NX_DTYPE_F32] = nx_c_cast_f8e4m3_to_f32,\n                [NX_DTYPE_F64] = nx_c_cast_f8e4m3_to_f64,\n                [NX_DTYPE_C32] = nx_c_cast_f8e4m3_to_c32,\n                [NX_DTYPE_C64] = nx_c_cast_f8e4m3_to_c64,\n                [NX_DTYPE_BF16] = nx_c_cast_f8e4m3_to_bf16,\n                [NX_DTYPE_BOOL] = nx_c_cast_f8e4m3_to_bool,\n                [NX_DTYPE_I4] = nx_c_cast_f8e4m3_to_i4,\n                [NX_DTYPE_U4] = nx_c_cast_f8e4m3_to_u4,\n                [NX_DTYPE_F8E4M3] = nx_c_cast_f8e4m3_to_f8e4m3,\n                [NX_DTYPE_F8E5M2] = nx_c_cast_f8e4m3_to_f8e5m2,\n            },\n        [NX_DTYPE_F8E5M2] =\n            {\n                [NX_DTYPE_I8] = nx_c_cast_f8e5m2_to_i8,\n                [NX_DTYPE_U8] = nx_c_cast_f8e5m2_to_u8,\n                [NX_DTYPE_I16] = nx_c_cast_f8e5m2_to_i16,\n                [NX_DTYPE_U16] = nx_c_cast_f8e5m2_to_u16,\n                [NX_DTYPE_I32] = nx_c_cast_f8e5m2_to_i32,\n                [NX_DTYPE_I64] = nx_c_cast_f8e5m2_to_i64,\n                [NX_DTYPE_U32] = nx_c_cast_f8e5m2_to_u32,\n                [NX_DTYPE_U64] = nx_c_cast_f8e5m2_to_u64,\n                [NX_DTYPE_INAT] = nx_c_cast_f8e5m2_to_inat,\n                [NX_DTYPE_F16] = nx_c_cast_f8e5m2_to_f16,\n                [NX_DTYPE_F32] = nx_c_cast_f8e5m2_to_f32,\n                [NX_DTYPE_F64] = nx_c_cast_f8e5m2_to_f64,\n                [NX_DTYPE_C32] = nx_c_cast_f8e5m2_to_c32,\n                [NX_DTYPE_C64] = 
nx_c_cast_f8e5m2_to_c64,\n                [NX_DTYPE_BF16] = nx_c_cast_f8e5m2_to_bf16,\n                [NX_DTYPE_BOOL] = nx_c_cast_f8e5m2_to_bool,\n                [NX_DTYPE_I4] = nx_c_cast_f8e5m2_to_i4,\n                [NX_DTYPE_U4] = nx_c_cast_f8e5m2_to_u4,\n                [NX_DTYPE_F8E4M3] = nx_c_cast_f8e5m2_to_f8e4m3,\n                [NX_DTYPE_F8E5M2] = nx_c_cast_f8e5m2_to_f8e5m2,\n            }\n};\n\n// Dispatch function for cast operations\nstatic void dispatch_cast(value v_src, value v_dst) {\n  ndarray_t src = extract_ndarray(v_src);\n  ndarray_t dst = extract_ndarray(v_dst);\n\n  if (src.ndim != dst.ndim) {\n    cleanup_ndarray(&src);\n    cleanup_ndarray(&dst);\n    caml_failwith(\"shape mismatch\");\n  }\n  for (int i = 0; i < src.ndim; i++) {\n    if (src.shape[i] != dst.shape[i]) {\n      cleanup_ndarray(&src);\n      cleanup_ndarray(&dst);\n      caml_failwith(\"shape mismatch\");\n    }\n  }\n\n  value v_src_data = Field(v_src, FFI_TENSOR_DATA);\n  value v_dst_data = Field(v_dst, FFI_TENSOR_DATA);\n\n  int src_kind = nx_buffer_get_kind(Caml_ba_array_val(v_src_data));\n  int dst_kind = nx_buffer_get_kind(Caml_ba_array_val(v_dst_data));\n\n  nx_dtype src_dtype = kind_to_dtype(src_kind);\n  nx_dtype dst_dtype = kind_to_dtype(dst_kind);\n\n  if (src_dtype == NX_NUM_DTYPES || dst_dtype == NX_NUM_DTYPES) {\n    cleanup_ndarray(&src);\n    cleanup_ndarray(&dst);\n    caml_failwith(\"unsupported dtype\");\n  }\n\n  cast_op_t op = cast_table[src_dtype][dst_dtype];\n\n  if (!op) {\n    cleanup_ndarray(&src);\n    cleanup_ndarray(&dst);\n    caml_failwith(\"cast not supported for this dtype combination\");\n  }\n\n  caml_enter_blocking_section();\n  op(&src, &dst);\n  caml_leave_blocking_section();\n\n  cleanup_ndarray(&src);\n  cleanup_ndarray(&dst);\n}\n\n// OCaml FFI Stub\nCAMLprim value caml_nx_cast(value v_src, value v_dst) {\n  CAMLparam2(v_src, v_dst);\n  dispatch_cast(v_src, v_dst);\n  CAMLreturn(Val_unit);\n}\n"
  },
  {
    "path": "packages/nx/lib/backend_c/nx_c_cholesky.c",
    "content": "/*---------------------------------------------------------------------------\n   Copyright (c) 2026 The Raven authors. All rights reserved.\n   SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*/\n\n#include <caml/alloc.h>\n#include <caml/bigarray.h>\n#include <caml/custom.h>\n#include <caml/fail.h>\n#include <caml/memory.h>\n#include <caml/threads.h>\n#include <complex.h>\n#include <float.h>\n#include <lapacke.h>\n\n#include \"nx_c_shared.h\"\n\n// Machine epsilon for different precisions\n#define NX_EPS32 FLT_EPSILON\n#define NX_EPS64 DBL_EPSILON\n\n// Helper functions for packing/unpacking matrices\nstatic void nx_pack_f32(float* dst, const float* src, int m, int n,\n                        int stride_row, int stride_col) {\n  for (int i = 0; i < m; i++) {\n    for (int j = 0; j < n; j++) {\n      dst[i * n + j] = src[i * stride_row + j * stride_col];\n    }\n  }\n}\n\nstatic void nx_unpack_f32(float* dst, const float* src, int m, int n,\n                          int stride_row, int stride_col) {\n  for (int i = 0; i < m; i++) {\n    for (int j = 0; j < n; j++) {\n      dst[i * stride_row + j * stride_col] = src[i * n + j];\n    }\n  }\n}\n\nstatic void nx_pack_f64(double* dst, const double* src, int m, int n,\n                        int stride_row, int stride_col) {\n  for (int i = 0; i < m; i++) {\n    for (int j = 0; j < n; j++) {\n      dst[i * n + j] = src[i * stride_row + j * stride_col];\n    }\n  }\n}\n\nstatic void nx_unpack_f64(double* dst, const double* src, int m, int n,\n                          int stride_row, int stride_col) {\n  for (int i = 0; i < m; i++) {\n    for (int j = 0; j < n; j++) {\n      dst[i * stride_row + j * stride_col] = src[i * n + j];\n    }\n  }\n}\n\nstatic void nx_pack_c32(complex32* dst, const complex32* src, int m, int n,\n                        int stride_row, int stride_col) {\n  for (int i = 0; i < m; i++) {\n    for (int j = 0; j < n; j++) 
{\n      dst[i * n + j] = src[i * stride_row + j * stride_col];\n    }\n  }\n}\n\nstatic void nx_unpack_c32(complex32* dst, const complex32* src, int m, int n,\n                          int stride_row, int stride_col) {\n  for (int i = 0; i < m; i++) {\n    for (int j = 0; j < n; j++) {\n      dst[i * stride_row + j * stride_col] = src[i * n + j];\n    }\n  }\n}\n\nstatic void nx_pack_c64(complex64* dst, const complex64* src, int m, int n,\n                        int stride_row, int stride_col) {\n  for (int i = 0; i < m; i++) {\n    for (int j = 0; j < n; j++) {\n      dst[i * n + j] = src[i * stride_row + j * stride_col];\n    }\n  }\n}\n\nstatic void nx_unpack_c64(complex64* dst, const complex64* src, int m, int n,\n                          int stride_row, int stride_col) {\n  for (int i = 0; i < m; i++) {\n    for (int j = 0; j < n; j++) {\n      dst[i * stride_row + j * stride_col] = src[i * n + j];\n    }\n  }\n}\n\n// Helper function to check if character matches (case insensitive)\nstatic int lsame(char ca, char cb) {\n  if (ca == cb) return 1;\n  int inta = (unsigned char)ca;\n  int intb = (unsigned char)cb;\n  return (inta >= 'A' && inta <= 'Z' ? inta + 32 : inta) ==\n         (intb >= 'A' && intb <= 'Z' ? intb + 32 : intb);\n}\n\n// Cholesky decomposition implementations using LAPACK\n\nstatic int cholesky_float32(float* a, int n, int upper) {\n  char uplo = upper ? 
'U' : 'L';\n  lapack_int info = LAPACKE_spotrf(LAPACK_ROW_MAJOR, uplo, n, a, n);\n  if (info == 0) {\n    // Zero out the unused triangle\n    if (upper) {\n      // Zero the lower triangle\n      for (int i = 0; i < n; i++) {\n        for (int j = 0; j < i; j++) {\n          a[i * n + j] = 0.0f;\n        }\n      }\n    } else {\n      // Zero the upper triangle\n      for (int i = 0; i < n; i++) {\n        for (int j = i + 1; j < n; j++) {\n          a[i * n + j] = 0.0f;\n        }\n      }\n    }\n  }\n  return (int)info;\n}\n\nstatic int cholesky_float64(double* a, int n, int upper) {\n  char uplo = upper ? 'U' : 'L';\n  lapack_int info = LAPACKE_dpotrf(LAPACK_ROW_MAJOR, uplo, n, a, n);\n  if (info == 0) {\n    // Zero out the unused triangle\n    if (upper) {\n      // Zero the lower triangle\n      for (int i = 0; i < n; i++) {\n        for (int j = 0; j < i; j++) {\n          a[i * n + j] = 0.0;\n        }\n      }\n    } else {\n      // Zero the upper triangle\n      for (int i = 0; i < n; i++) {\n        for (int j = i + 1; j < n; j++) {\n          a[i * n + j] = 0.0;\n        }\n      }\n    }\n  }\n  return (int)info;\n}\n\nstatic int cholesky_complex32(complex32* a, int n, int upper) {\n  char uplo = upper ? 'U' : 'L';\n  lapack_int info = LAPACKE_cpotrf(LAPACK_ROW_MAJOR, uplo, n, a, n);\n  if (info == 0) {\n    // Zero out the unused triangle\n    if (upper) {\n      // Zero the lower triangle\n      for (int i = 0; i < n; i++) {\n        for (int j = 0; j < i; j++) {\n          a[i * n + j] = 0.0f + 0.0f * I;\n        }\n      }\n    } else {\n      // Zero the upper triangle\n      for (int i = 0; i < n; i++) {\n        for (int j = i + 1; j < n; j++) {\n          a[i * n + j] = 0.0f + 0.0f * I;\n        }\n      }\n    }\n  }\n  return (int)info;\n}\n\nstatic int cholesky_complex64(complex64* a, int n, int upper) {\n  char uplo = upper ? 
'U' : 'L';\n  lapack_int info = LAPACKE_zpotrf(LAPACK_ROW_MAJOR, uplo, n, a, n);\n  if (info == 0) {\n    // Zero out the unused triangle\n    if (upper) {\n      // Zero the lower triangle\n      for (int i = 0; i < n; i++) {\n        for (int j = 0; j < i; j++) {\n          a[i * n + j] = 0.0 + 0.0 * I;\n        }\n      }\n    } else {\n      // Zero the upper triangle\n      for (int i = 0; i < n; i++) {\n        for (int j = i + 1; j < n; j++) {\n          a[i * n + j] = 0.0 + 0.0 * I;\n        }\n      }\n    }\n  }\n  return (int)info;\n}\n\n// Lower precision implementations that upcast to float32/float64\nstatic int cholesky_float16(uint16_t* a, int n, int upper) {\n  float* a_float = (float*)malloc(n * n * sizeof(float));\n  if (!a_float) return -1;\n  for (int i = 0; i < n * n; i++) {\n    a_float[i] = half_to_float(a[i]);\n  }\n  int status = cholesky_float32(a_float, n, upper);\n  if (status == 0) {\n    for (int i = 0; i < n * n; i++) {\n      a[i] = float_to_half(a_float[i]);\n    }\n  }\n  free(a_float);\n  return status;\n}\n\nstatic int cholesky_bfloat16(caml_ba_bfloat16* a, int n, int upper) {\n  float* a_float = (float*)malloc(n * n * sizeof(float));\n  if (!a_float) return -1;\n  for (int i = 0; i < n * n; i++) {\n    a_float[i] = bfloat16_to_float(a[i]);\n  }\n  int status = cholesky_float32(a_float, n, upper);\n  if (status == 0) {\n    for (int i = 0; i < n * n; i++) {\n      a[i] = float_to_bfloat16(a_float[i]);\n    }\n  }\n  free(a_float);\n  return status;\n}\n\nstatic int cholesky_f8e4m3(caml_ba_fp8_e4m3* a, int n, int upper) {\n  float* a_float = (float*)malloc(n * n * sizeof(float));\n  if (!a_float) return -1;\n  for (int i = 0; i < n * n; i++) {\n    a_float[i] = fp8_e4m3_to_float(a[i]);\n  }\n  int status = cholesky_float32(a_float, n, upper);\n  if (status == 0) {\n    for (int i = 0; i < n * n; i++) {\n      a[i] = float_to_fp8_e4m3(a_float[i]);\n    }\n  }\n  free(a_float);\n  return status;\n}\n\nstatic int 
cholesky_f8e5m2(caml_ba_fp8_e5m2* a, int n, int upper) {\n  float* a_float = (float*)malloc(n * n * sizeof(float));\n  if (!a_float) return -1;\n  for (int i = 0; i < n * n; i++) {\n    a_float[i] = fp8_e5m2_to_float(a[i]);\n  }\n  int status = cholesky_float32(a_float, n, upper);\n  if (status == 0) {\n    for (int i = 0; i < n * n; i++) {\n      a[i] = float_to_fp8_e5m2(a_float[i]);\n    }\n  }\n  free(a_float);\n  return status;\n}\n\n// OCaml FFI stub\nCAMLprim value caml_nx_op_cholesky(value v_in, value v_out, value v_upper) {\n  CAMLparam3(v_in, v_out, v_upper);\n  int upper = Int_val(v_upper);\n  ndarray_t in = extract_ndarray(v_in);\n  ndarray_t out = extract_ndarray(v_out);\n  struct caml_ba_array* ba_in = Caml_ba_array_val(Field(v_in, FFI_TENSOR_DATA));\n  struct caml_ba_array* ba_out =\n      Caml_ba_array_val(Field(v_out, FFI_TENSOR_DATA));\n  int kind = nx_buffer_get_kind(ba_in);\n  if (in.ndim < 2) {\n    cleanup_ndarray(&in);\n    cleanup_ndarray(&out);\n    caml_failwith(\"cholesky: input must have at least 2 dimensions\");\n  }\n  if (in.shape[in.ndim - 1] != in.shape[in.ndim - 2]) {\n    cleanup_ndarray(&in);\n    cleanup_ndarray(&out);\n    caml_failwith(\"cholesky: input must be square matrix\");\n  }\n  int n = in.shape[in.ndim - 1];\n  int batch_size = 1;\n  for (int i = 0; i < in.ndim - 2; i++) {\n    batch_size *= in.shape[i];\n  }\n  int s_in_row = in.strides[in.ndim - 2];\n  int s_in_col = in.strides[in.ndim - 1];\n  int s_out_row = out.strides[out.ndim - 2];\n  int s_out_col = out.strides[out.ndim - 1];\n  caml_enter_blocking_section();\n  for (int b = 0; b < batch_size; b++) {\n    size_t off_in = in.offset;\n    size_t off_out = out.offset;\n    if (in.ndim > 2) {\n      int remaining = b;\n      for (int i = in.ndim - 3; i >= 0; i--) {\n        int coord = remaining % in.shape[i];\n        remaining /= in.shape[i];\n        off_in += coord * in.strides[i];\n        off_out += coord * out.strides[i];\n      }\n    }\n    int status = 
0;\n    switch (kind) {\n      case CAML_BA_FLOAT32: {\n        float* base_in = (float*)ba_in->data + off_in;\n        float* base_out = (float*)ba_out->data + off_out;\n        float* A = (float*)malloc((size_t)n * n * sizeof(float));\n        nx_pack_f32(A, base_in, n, n, s_in_row, s_in_col);\n        status = cholesky_float32(A, n, upper);\n        if (status == 0) {\n          nx_unpack_f32(base_out, A, n, n, s_out_row, s_out_col);\n        }\n        free(A);\n        break;\n      }\n      case CAML_BA_FLOAT64: {\n        double* base_in = (double*)ba_in->data + off_in;\n        double* base_out = (double*)ba_out->data + off_out;\n        double* A = (double*)malloc((size_t)n * n * sizeof(double));\n        nx_pack_f64(A, base_in, n, n, s_in_row, s_in_col);\n        status = cholesky_float64(A, n, upper);\n        if (status == 0) {\n          nx_unpack_f64(base_out, A, n, n, s_out_row, s_out_col);\n        }\n        free(A);\n        break;\n      }\n      case CAML_BA_COMPLEX32: {\n        complex32* base_in = (complex32*)ba_in->data + off_in;\n        complex32* base_out = (complex32*)ba_out->data + off_out;\n        complex32* A = (complex32*)malloc((size_t)n * n * sizeof(complex32));\n        nx_pack_c32(A, base_in, n, n, s_in_row, s_in_col);\n        status = cholesky_complex32(A, n, upper);\n        if (status == 0) {\n          nx_unpack_c32(base_out, A, n, n, s_out_row, s_out_col);\n        }\n        free(A);\n        break;\n      }\n      case CAML_BA_COMPLEX64: {\n        complex64* base_in = (complex64*)ba_in->data + off_in;\n        complex64* base_out = (complex64*)ba_out->data + off_out;\n        complex64* A = (complex64*)malloc((size_t)n * n * sizeof(complex64));\n        nx_pack_c64(A, base_in, n, n, s_in_row, s_in_col);\n        status = cholesky_complex64(A, n, upper);\n        if (status == 0) {\n          nx_unpack_c64(base_out, A, n, n, s_out_row, s_out_col);\n        }\n        free(A);\n        break;\n      }\n      case 
CAML_BA_FLOAT16: {\n        uint16_t* base_in = (uint16_t*)ba_in->data + off_in;\n        uint16_t* base_out = (uint16_t*)ba_out->data + off_out;\n        uint16_t* A = (uint16_t*)malloc((size_t)n * n * sizeof(uint16_t));\n        // Pack into A (copy since same type)\n        for (int i = 0; i < n; i++) {\n          for (int j = 0; j < n; j++) {\n            A[i * n + j] = base_in[i * s_in_row + j * s_in_col];\n          }\n        }\n        status = cholesky_float16(A, n, upper);\n        if (status == 0) {\n          // Unpack back\n          for (int i = 0; i < n; i++) {\n            for (int j = 0; j < n; j++) {\n              base_out[i * s_out_row + j * s_out_col] = A[i * n + j];\n            }\n          }\n        }\n        free(A);\n        break;\n      }\n      case NX_BA_BFLOAT16: {\n        caml_ba_bfloat16* base_in = (caml_ba_bfloat16*)ba_in->data + off_in;\n        caml_ba_bfloat16* base_out = (caml_ba_bfloat16*)ba_out->data + off_out;\n        caml_ba_bfloat16* A =\n            (caml_ba_bfloat16*)malloc((size_t)n * n * sizeof(caml_ba_bfloat16));\n        // Pack into A\n        for (int i = 0; i < n; i++) {\n          for (int j = 0; j < n; j++) {\n            A[i * n + j] = base_in[i * s_in_row + j * s_in_col];\n          }\n        }\n        status = cholesky_bfloat16(A, n, upper);\n        if (status == 0) {\n          for (int i = 0; i < n; i++) {\n            for (int j = 0; j < n; j++) {\n              base_out[i * s_out_row + j * s_out_col] = A[i * n + j];\n            }\n          }\n        }\n        free(A);\n        break;\n      }\n      case NX_BA_FP8_E4M3: {\n        caml_ba_fp8_e4m3* base_in = (caml_ba_fp8_e4m3*)ba_in->data + off_in;\n        caml_ba_fp8_e4m3* base_out = (caml_ba_fp8_e4m3*)ba_out->data + off_out;\n        caml_ba_fp8_e4m3* A =\n            (caml_ba_fp8_e4m3*)malloc((size_t)n * n * sizeof(caml_ba_fp8_e4m3));\n        for (int i = 0; i < n; i++) {\n          for (int j = 0; j < n; j++) {\n            A[i * n + j] = 
base_in[i * s_in_row + j * s_in_col];\n          }\n        }\n        status = cholesky_f8e4m3(A, n, upper);\n        if (status == 0) {\n          for (int i = 0; i < n; i++) {\n            for (int j = 0; j < n; j++) {\n              base_out[i * s_out_row + j * s_out_col] = A[i * n + j];\n            }\n          }\n        }\n        free(A);\n        break;\n      }\n      case NX_BA_FP8_E5M2: {\n        caml_ba_fp8_e5m2* base_in = (caml_ba_fp8_e5m2*)ba_in->data + off_in;\n        caml_ba_fp8_e5m2* base_out = (caml_ba_fp8_e5m2*)ba_out->data + off_out;\n        caml_ba_fp8_e5m2* A =\n            (caml_ba_fp8_e5m2*)malloc((size_t)n * n * sizeof(caml_ba_fp8_e5m2));\n        for (int i = 0; i < n; i++) {\n          for (int j = 0; j < n; j++) {\n            A[i * n + j] = base_in[i * s_in_row + j * s_in_col];\n          }\n        }\n        status = cholesky_f8e5m2(A, n, upper);\n        if (status == 0) {\n          for (int i = 0; i < n; i++) {\n            for (int j = 0; j < n; j++) {\n              base_out[i * s_out_row + j * s_out_col] = A[i * n + j];\n            }\n          }\n        }\n        free(A);\n        break;\n      }\n      default:\n        caml_leave_blocking_section();\n        cleanup_ndarray(&in);\n        cleanup_ndarray(&out);\n        caml_failwith(\"cholesky: unsupported dtype\");\n    }\n    if (status != 0) {\n      caml_leave_blocking_section();\n      cleanup_ndarray(&in);\n      cleanup_ndarray(&out);\n      caml_invalid_argument(\"cholesky: not positive-definite\");\n    }\n  }\n  caml_leave_blocking_section();\n  cleanup_ndarray(&in);\n  cleanup_ndarray(&out);\n  CAMLreturn(Val_unit);\n}\n"
  },
  {
    "path": "packages/nx/lib/backend_c/nx_c_eig.c",
    "content": "/*---------------------------------------------------------------------------\n   Copyright (c) 2026 The Raven authors. All rights reserved.\n   SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*/\n\n// Eigenvalue decomposition implementations\n#include <caml/alloc.h>\n#include <caml/bigarray.h>\n#include <caml/custom.h>\n#include <caml/fail.h>\n#include <caml/memory.h>\n#include <caml/threads.h>\n#include <complex.h>\n#include <float.h>\n#include <lapacke.h>\n\n#include \"nx_c_shared.h\"\n\n// Machine epsilon for float32 and float64\n#define NX_EPS32 FLT_EPSILON\n#define NX_EPS64 DBL_EPSILON\n\n// Helper functions for packing/unpacking matrices\nstatic void nx_pack_f32(float* dst, const float* src, int m, int n,\n                        int stride_row, int stride_col) {\n  for (int i = 0; i < m; i++) {\n    for (int j = 0; j < n; j++) {\n      dst[i * n + j] = src[i * stride_row + j * stride_col];\n    }\n  }\n}\n\nstatic void nx_unpack_f32(float* dst, const float* src, int m, int n,\n                          int stride_row, int stride_col) {\n  for (int i = 0; i < m; i++) {\n    for (int j = 0; j < n; j++) {\n      dst[i * stride_row + j * stride_col] = src[i * n + j];\n    }\n  }\n}\n\nstatic void nx_pack_f64(double* dst, const double* src, int m, int n,\n                        int stride_row, int stride_col) {\n  for (int i = 0; i < m; i++) {\n    for (int j = 0; j < n; j++) {\n      dst[i * n + j] = src[i * stride_row + j * stride_col];\n    }\n  }\n}\n\nstatic void nx_unpack_f64(double* dst, const double* src, int m, int n,\n                          int stride_row, int stride_col) {\n  for (int i = 0; i < m; i++) {\n    for (int j = 0; j < n; j++) {\n      dst[i * stride_row + j * stride_col] = src[i * n + j];\n    }\n  }\n}\n\nstatic void nx_pack_c32(complex32* dst, const complex32* src, int m, int n,\n                        int stride_row, int stride_col) {\n  for (int i = 0; i < 
m; i++) {\n    for (int j = 0; j < n; j++) {\n      dst[i * n + j] = src[i * stride_row + j * stride_col];\n    }\n  }\n}\n\nstatic void nx_unpack_c32(complex32* dst, const complex32* src, int m, int n,\n                          int stride_row, int stride_col) {\n  for (int i = 0; i < m; i++) {\n    for (int j = 0; j < n; j++) {\n      dst[i * stride_row + j * stride_col] = src[i * n + j];\n    }\n  }\n}\n\nstatic void nx_pack_c64(complex64* dst, const complex64* src, int m, int n,\n                        int stride_row, int stride_col) {\n  for (int i = 0; i < m; i++) {\n    for (int j = 0; j < n; j++) {\n      dst[i * n + j] = src[i * stride_row + j * stride_col];\n    }\n  }\n}\n\nstatic void nx_unpack_c64(complex64* dst, const complex64* src, int m, int n,\n                          int stride_row, int stride_col) {\n  for (int i = 0; i < m; i++) {\n    for (int j = 0; j < n; j++) {\n      dst[i * stride_row + j * stride_col] = src[i * n + j];\n    }\n  }\n}\n\n// Math helper functions\nstatic inline float sign_float32(float x) {\n  return (x > 0.0f) ? 1.0f : ((x < 0.0f) ? -1.0f : 0.0f);\n}\n\nstatic inline double sign_float64(double x) {\n  return (x > 0.0) ? 1.0 : ((x < 0.0) ? 
-1.0 : 0.0);\n}\n\nstatic inline float hypot_float32(float x, float y) {\n  return hypotf(x, y);\n}\n\nstatic inline double hypot_float64(double x, double y) {\n  return hypot(x, y);\n}\n\n// Givens rotation computation\nstatic void givens_float32(float f, float g, float *c, float *s) {\n  if (g == 0.0f) {\n    *c = 1.0f;\n    *s = 0.0f;\n  } else if (fabsf(g) > fabsf(f)) {\n    float t = f / g;\n    float tt = hypotf(1.0f, t);\n    *c = 1.0f / tt;\n    *s = t * (*c);\n  } else {\n    float t = g / f;\n    float tt = hypotf(1.0f, t);\n    *s = 1.0f / tt;\n    *c = t * (*s);\n  }\n}\n\nstatic void givens_float64(double f, double g, double *c, double *s) {\n  if (g == 0.0) {\n    *c = 1.0;\n    *s = 0.0;\n  } else if (fabs(g) > fabs(f)) {\n    double t = f / g;\n    double tt = hypot(1.0, t);\n    *c = 1.0 / tt;\n    *s = t * (*c);\n  } else {\n    double t = g / f;\n    double tt = hypot(1.0, t);\n    *s = 1.0 / tt;\n    *c = t * (*s);\n  }\n}\n\n// Apply Givens rotation from the right\nstatic void apply_givens_right_float32(float *a, int m, int n, int i, int j,\n                                       float c, float s) {\n  for (int k = 0; k < m; k++) {\n    float temp = c * a[k * n + i] + s * a[k * n + j];\n    a[k * n + j] = -s * a[k * n + i] + c * a[k * n + j];\n    a[k * n + i] = temp;\n  }\n}\n\nstatic void apply_givens_right_float64(double *a, int m, int n, int i, int j,\n                                       double c, double s) {\n  for (int k = 0; k < m; k++) {\n    double temp = c * a[k * n + i] + s * a[k * n + j];\n    a[k * n + j] = -s * a[k * n + i] + c * a[k * n + j];\n    a[k * n + i] = temp;\n  }\n}\n\n// Apply Givens rotation from the left\nstatic void apply_givens_left_float32(float *a, int m, int n, int i, int j,\n                                      float c, float s) {\n  for (int k = 0; k < n; k++) {\n    float temp = c * a[i * n + k] + s * a[j * n + k];\n    a[j * n + k] = -s * a[i * n + k] + c * a[j * n + k];\n    a[i * n + k] = temp;\n  
}\n}\n\nstatic void apply_givens_left_float64(double *a, int m, int n, int i, int j,\n                                      double c, double s) {\n  for (int k = 0; k < n; k++) {\n    double temp = c * a[i * n + k] + s * a[j * n + k];\n    a[j * n + k] = -s * a[i * n + k] + c * a[j * n + k];\n    a[i * n + k] = temp;\n  }\n}\n\n// Eigenvalue decomposition helpers\nstatic void tridiagonalize_float32(float* a, float* q, float* diag,\n                                   float* offdiag, int n) {\n  for (int i = 0; i < n; i++) {\n    for (int j = 0; j < n; j++) {\n      q[i * n + j] = (i == j) ? 1.0f : 0.0f;\n    }\n  }\n  // NOTE: each Householder step updates the trailing submatrix that the\n  // next step reads, so this loop is inherently sequential and must not be\n  // parallelized.\n  for (int k = 0; k < n - 2; k++) {\n    float norm2 = 0.0f;\n    for (int i = k + 1; i < n; i++) {\n      norm2 += a[i * n + k] * a[i * n + k];\n    }\n    if (norm2 <= 0.0f) continue;\n    float norm = sqrtf(norm2);\n    float sign = sign_float32(a[(k + 1) * n + k]);\n    float alpha = -sign * norm;\n    float beta = 1.0f / (alpha * (a[(k + 1) * n + k] / norm));\n    float* v = (float*)calloc(n, sizeof(float));\n    if (!v) continue;\n    for (int i = k + 1; i < n; i++) v[i] = a[i * n + k] / alpha;\n    v[k + 1] -= 1.0f;\n    for (int j = k + 1; j < n; j++) {\n      float gamma = 0.0f;\n      for (int i = k + 1; i < n; i++) gamma += v[i] * a[i * n + j];\n      gamma *= beta;\n      for (int i = k + 1; i < n; i++) a[i * n + j] -= gamma * v[i];\n      gamma = 0.0f;\n      for (int i = k + 1; i < n; i++) gamma += v[i] * a[j * n + i];\n      gamma *= beta;\n      for (int i = k + 1; i < n; i++) a[j * n + i] -= gamma * v[i];\n    }\n    for (int j = 0; j < n; j++) {\n      float gamma = 0.0f;\n      for (int i = k + 1; i < n; i++) gamma += v[i] * q[i * n + j];\n      gamma *= beta;\n      for (int i = k + 1; i < n; i++) q[i * n + j] -= gamma * v[i];\n    }\n    free(v);\n  }\n  for (int i = 0; i < n; i++) diag[i] = a[i * n + i];\n  for (int i = 0; i < n - 1; i++) offdiag[i] = a[i * n + (i + 1)];\n}\n\nstatic void 
tridiagonalize_float64(double* a, double* q, double* diag,\n                                   double* offdiag, int n) {\n  for (int i = 0; i < n; i++) {\n    for (int j = 0; j < n; j++) {\n      q[i * n + j] = (i == j) ? 1.0 : 0.0;\n    }\n  }\n  // NOTE: each Householder step updates the trailing submatrix that the\n  // next step reads, so this loop is inherently sequential and must not be\n  // parallelized.\n  for (int k = 0; k < n - 2; k++) {\n    double norm2 = 0.0;\n    for (int i = k + 1; i < n; i++) {\n      norm2 += a[i * n + k] * a[i * n + k];\n    }\n    if (norm2 <= 0.0) continue;\n    double norm = sqrt(norm2);\n    double sign = sign_float64(a[(k + 1) * n + k]);\n    double alpha = -sign * norm;\n    double beta = 1.0 / (alpha * (a[(k + 1) * n + k] / norm));\n    double* v = (double*)calloc(n, sizeof(double));\n    if (!v) continue;\n    for (int i = k + 1; i < n; i++) v[i] = a[i * n + k] / alpha;\n    v[k + 1] -= 1.0;\n    for (int j = k + 1; j < n; j++) {\n      double gamma = 0.0;\n      for (int i = k + 1; i < n; i++) gamma += v[i] * a[i * n + j];\n      gamma *= beta;\n      for (int i = k + 1; i < n; i++) a[i * n + j] -= gamma * v[i];\n      gamma = 0.0;\n      for (int i = k + 1; i < n; i++) gamma += v[i] * a[j * n + i];\n      gamma *= beta;\n      for (int i = k + 1; i < n; i++) a[j * n + i] -= gamma * v[i];\n    }\n    for (int j = 0; j < n; j++) {\n      double gamma = 0.0;\n      for (int i = k + 1; i < n; i++) gamma += v[i] * q[i * n + j];\n      gamma *= beta;\n      for (int i = k + 1; i < n; i++) q[i * n + j] -= gamma * v[i];\n    }\n    free(v);\n  }\n  for (int i = 0; i < n; i++) diag[i] = a[i * n + i];\n  for (int i = 0; i < n - 1; i++) offdiag[i] = a[i * n + (i + 1)];\n}\n\nstatic void qr_iteration_tridiag_float32(float* diag, float* offdiag, float* q,\n                                         int n) {\n  const float tol = NX_EPS32 * n;\n  const int max_iter = 30 * n;\n  int iter = 0;\n  while (iter++ < max_iter) {\n    int converged = 1;\n    for (int i = 0; i < n - 1; i++) {\n      if (fabsf(offdiag[i]) > tol * (fabsf(diag[i]) + fabsf(diag[i + 1]))) {\n     
   converged = 0;\n        break;\n      } else {\n        offdiag[i] = 0.0f;\n      }\n    }\n    if (converged) break;\n    int q_pos = n - 1;\n    while (q_pos > 0 && offdiag[q_pos - 1] == 0.0f) q_pos--;\n    if (q_pos == 0) continue;\n    int p_pos = q_pos - 1;\n    while (p_pos > 0 && offdiag[p_pos - 1] != 0.0f) p_pos--;\n    float d = (diag[q_pos - 1] - diag[q_pos]) / 2.0f;\n    float shift =\n        diag[q_pos] -\n        offdiag[q_pos - 1] * offdiag[q_pos - 1] /\n            (d + sign_float32(d) * hypot_float32(d, offdiag[q_pos - 1]));\n    float f = diag[p_pos] - shift;\n    float g = offdiag[p_pos];\n    for (int k = p_pos; k < q_pos; k++) {\n      float c, s;\n      givens_float32(f, g, &c, &s);\n      if (k > p_pos) offdiag[k - 1] = hypot_float32(f, g);\n      f = c * diag[k] + s * offdiag[k];\n      offdiag[k] = -s * diag[k] + c * offdiag[k];\n      g = s * diag[k + 1];\n      diag[k + 1] = c * diag[k + 1];\n      apply_givens_right_float32(q, n, n, k, k + 1, c, s);\n      givens_float32(f, g, &c, &s);\n      diag[k] = hypot_float32(f, g);\n      f = c * offdiag[k] + s * diag[k + 1];\n      diag[k + 1] = -s * offdiag[k] + c * diag[k + 1];\n      if (k < q_pos - 1) {\n        g = s * offdiag[k + 1];\n        offdiag[k + 1] = c * offdiag[k + 1];\n      }\n      apply_givens_left_float32(q, n, n, k, k + 1, c, s);\n    }\n  }\n}\n\nstatic void qr_iteration_tridiag_float64(double* diag, double* offdiag,\n                                         double* q, int n) {\n  const double tol = NX_EPS64 * n;\n  const int max_iter = 30 * n;\n  int iter = 0;\n  while (iter++ < max_iter) {\n    int converged = 1;\n    for (int i = 0; i < n - 1; i++) {\n      if (fabs(offdiag[i]) > tol * (fabs(diag[i]) + fabs(diag[i + 1]))) {\n        converged = 0;\n        break;\n      } else {\n        offdiag[i] = 0.0;\n      }\n    }\n    if (converged) break;\n    int q_pos = n - 1;\n    while (q_pos > 0 && offdiag[q_pos - 1] == 0.0) q_pos--;\n    if (q_pos == 0) continue;\n    
int p_pos = q_pos - 1;\n    while (p_pos > 0 && offdiag[p_pos - 1] != 0.0) p_pos--;\n    double d = (diag[q_pos - 1] - diag[q_pos]) / 2.0;\n    double shift =\n        diag[q_pos] -\n        offdiag[q_pos - 1] * offdiag[q_pos - 1] /\n            (d + sign_float64(d) * hypot_float64(d, offdiag[q_pos - 1]));\n    double f = diag[p_pos] - shift;\n    double g = offdiag[p_pos];\n    for (int k = p_pos; k < q_pos; k++) {\n      double c, s;\n      givens_float64(f, g, &c, &s);\n      if (k > p_pos) offdiag[k - 1] = hypot_float64(f, g);\n      f = c * diag[k] + s * offdiag[k];\n      offdiag[k] = -s * diag[k] + c * offdiag[k];\n      g = s * diag[k + 1];\n      diag[k + 1] = c * diag[k + 1];\n      apply_givens_right_float64(q, n, n, k, k + 1, c, s);\n      givens_float64(f, g, &c, &s);\n      diag[k] = hypot_float64(f, g);\n      f = c * offdiag[k] + s * diag[k + 1];\n      diag[k + 1] = -s * offdiag[k] + c * diag[k + 1];\n      if (k < q_pos - 1) {\n        g = s * offdiag[k + 1];\n        offdiag[k + 1] = c * offdiag[k + 1];\n      }\n      apply_givens_left_float64(q, n, n, k, k + 1, c, s);\n    }\n  }\n}\n\n// Forward declarations\nstatic void eigh_float32(float* a, float* eigvals, float* eigvecs, int n);\nstatic void eigh_float64(double* a, double* eigvals, double* eigvecs, int n);\n\n// General eigenvalue decomposition for float32\nstatic void eig_float32(float* a, complex32* eigvals, complex32* eigvecs, int n) {\n   // Create a copy of the input matrix since LAPACK overwrites it\n   float* a_copy = (float*)malloc(n * n * sizeof(float));\n   if (!a_copy) return;\n   memcpy(a_copy, a, n * n * sizeof(float));\n\n   // Allocate workspace for eigenvalues\n   float* wr = (float*)malloc(n * sizeof(float));\n   float* wi = (float*)malloc(n * sizeof(float));\n   if (!wr || !wi) {\n     free(a_copy);\n     free(wr);\n     free(wi);\n     return;\n   }\n\n   // Allocate workspace for eigenvectors if requested\n   float* vr = NULL;\n   if (eigvecs) {\n     vr = 
(float*)malloc(n * n * sizeof(float));\n     if (!vr) {\n       free(a_copy);\n       free(wr);\n       free(wi);\n       return;\n     }\n   }\n\n   // Call LAPACK general eigenvalue decomposition\n   // LAPACK_ROW_MAJOR: row-major storage (matches our layout)\n   // 'N': don't compute left eigenvectors\n   // 'V': compute right eigenvectors if requested\n   int info = LAPACKE_sgeev(LAPACK_ROW_MAJOR,\n                            'N', eigvecs ? 'V' : 'N',\n                            n, a_copy, n,\n                            wr, wi, NULL, n,\n                            vr, n);\n\n   if (info == 0) {\n     // Convert real/imaginary eigenvalues to complex format\n     for (int i = 0; i < n; i++) {\n       eigvals[i] = wr[i] + wi[i] * I;\n     }\n\n     // Convert eigenvectors to complex format if requested. LAPACK packs a\n     // complex conjugate pair (wi[j] > 0) as real part in column j and\n     // imaginary part in column j + 1.\n     if (eigvecs) {\n       for (int j = 0; j < n; j++) {\n         if (wi[j] > 0.0f) {\n           for (int i = 0; i < n; i++) {\n             eigvecs[i * n + j] = vr[i * n + j] + vr[i * n + j + 1] * I;\n             eigvecs[i * n + j + 1] = vr[i * n + j] - vr[i * n + j + 1] * I;\n           }\n           j++;\n         } else {\n           for (int i = 0; i < n; i++) {\n             eigvecs[i * n + j] = vr[i * n + j] + 0.0f * I;\n           }\n         }\n       }\n     }\n   }\n\n   free(a_copy);\n   free(wr);\n   free(wi);\n   free(vr);\n}\n\n// General eigenvalue decomposition for float64\nstatic void eig_float64(double* a, complex64* eigvals, complex64* eigvecs, int n) {\n   // Create a copy of the input matrix since LAPACK overwrites it\n   double* a_copy = (double*)malloc(n * n * sizeof(double));\n   if (!a_copy) return;\n   memcpy(a_copy, a, n * n * sizeof(double));\n\n   // Allocate workspace for eigenvalues\n   double* wr = (double*)malloc(n * sizeof(double));\n   double* wi = (double*)malloc(n * sizeof(double));\n   if (!wr || !wi) {\n     free(a_copy);\n     free(wr);\n     free(wi);\n     return;\n   }\n\n   // Allocate workspace for eigenvectors if requested\n   double* vr = NULL;\n   if (eigvecs) {\n     vr = (double*)malloc(n * n * sizeof(double));\n     if (!vr) {\n       free(a_copy);\n       free(wr);\n       free(wi);\n       return;\n     }\n   }\n\n   // Call LAPACK general eigenvalue decomposition\n   // LAPACK_ROW_MAJOR: row-major storage (matches our layout)\n   // 'N': don't compute 
left eigenvectors\n   // 'V': compute right eigenvectors if requested\n   int info = LAPACKE_dgeev(LAPACK_ROW_MAJOR,\n                            'N', eigvecs ? 'V' : 'N',\n                            n, a_copy, n,\n                            wr, wi, NULL, n,\n                            vr, n);\n\n   if (info == 0) {\n     // Convert real/imaginary eigenvalues to complex format\n     for (int i = 0; i < n; i++) {\n       eigvals[i] = wr[i] + wi[i] * I;\n     }\n\n     // Convert eigenvectors to complex format if requested. LAPACK packs a\n     // complex conjugate pair (wi[j] > 0) as real part in column j and\n     // imaginary part in column j + 1.\n     if (eigvecs) {\n       for (int j = 0; j < n; j++) {\n         if (wi[j] > 0.0) {\n           for (int i = 0; i < n; i++) {\n             eigvecs[i * n + j] = vr[i * n + j] + vr[i * n + j + 1] * I;\n             eigvecs[i * n + j + 1] = vr[i * n + j] - vr[i * n + j + 1] * I;\n           }\n           j++;\n         } else {\n           for (int i = 0; i < n; i++) {\n             eigvecs[i * n + j] = vr[i * n + j] + 0.0 * I;\n           }\n         }\n       }\n     }\n   }\n\n   free(a_copy);\n   free(wr);\n   free(wi);\n   free(vr);\n}\n\nstatic void eigh_float32(float* a, float* eigvals, float* eigvecs, int n) {\n   // Create a copy of the input matrix since LAPACK overwrites it\n   float* a_copy = (float*)malloc(n * n * sizeof(float));\n   if (!a_copy) return;\n   memcpy(a_copy, a, n * n * sizeof(float));\n\n   // Call LAPACK symmetric eigenvalue decomposition\n   // LAPACK_ROW_MAJOR: row-major storage\n   // 'V': compute eigenvectors, 'N': eigenvalues only\n   // 'L': lower triangular (arbitrary choice)\n   int info = LAPACKE_ssyev(LAPACK_ROW_MAJOR,\n                            eigvecs ? 
'V' : 'N', 'L',\n                            n, a_copy, n, eigvals);\n\n   if (info == 0 && eigvecs) {\n     // Copy eigenvectors to output\n     memcpy(eigvecs, a_copy, n * n * sizeof(float));\n   }\n\n   free(a_copy);\n}\n\nstatic void eigh_float64(double* a, double* eigvals, double* eigvecs, int n) {\n   // Create a copy of the input matrix since LAPACK overwrites it\n   double* a_copy = (double*)malloc(n * n * sizeof(double));\n   if (!a_copy) return;\n   memcpy(a_copy, a, n * n * sizeof(double));\n\n   // Call LAPACK symmetric eigenvalue decomposition\n   // LAPACK_ROW_MAJOR: row-major storage\n   // 'V': compute eigenvectors, 'N': eigenvalues only\n   // 'L': lower triangular (arbitrary choice)\n   int info = LAPACKE_dsyev(LAPACK_ROW_MAJOR,\n                            eigvecs ? 'V' : 'N', 'L',\n                            n, a_copy, n, eigvals);\n\n   if (info == 0 && eigvecs) {\n     // Copy eigenvectors to output\n     memcpy(eigvecs, a_copy, n * n * sizeof(double));\n   }\n\n   free(a_copy);\n}\n\nstatic void eigh_complex32(complex32* a, float* eigvals, complex32* eigvecs,\n                           int n) {\n   // Create a copy of the input matrix since LAPACK overwrites it\n   complex32* a_copy = (complex32*)malloc(n * n * sizeof(complex32));\n   if (!a_copy) return;\n   memcpy(a_copy, a, n * n * sizeof(complex32));\n\n   // Call LAPACK Hermitian eigenvalue decomposition\n   // LAPACK_ROW_MAJOR: row-major storage\n   // 'V': compute eigenvectors, 'N': eigenvalues only\n   // 'L': lower triangular (arbitrary choice)\n   int info = LAPACKE_cheev(LAPACK_ROW_MAJOR,\n                            eigvecs ? 
'V' : 'N', 'L',\n                            n, a_copy, n, eigvals);\n\n   if (info == 0 && eigvecs) {\n     // Copy eigenvectors to output\n     memcpy(eigvecs, a_copy, n * n * sizeof(complex32));\n   }\n\n   free(a_copy);\n}\n\nstatic void eigh_complex64(complex64* a, double* eigvals, complex64* eigvecs,\n                           int n) {\n   // Create a copy of the input matrix since LAPACK overwrites it\n   complex64* a_copy = (complex64*)malloc(n * n * sizeof(complex64));\n   if (!a_copy) return;\n   memcpy(a_copy, a, n * n * sizeof(complex64));\n\n   // Call LAPACK Hermitian eigenvalue decomposition\n   // LAPACK_ROW_MAJOR: row-major storage\n   // 'V': compute eigenvectors, 'N': eigenvalues only\n   // 'L': lower triangular (arbitrary choice)\n   int info = LAPACKE_zheev(LAPACK_ROW_MAJOR,\n                            eigvecs ? 'V' : 'N', 'L',\n                            n, a_copy, n, eigvals);\n\n   if (info == 0 && eigvecs) {\n     // Copy eigenvectors to output\n     memcpy(eigvecs, a_copy, n * n * sizeof(complex64));\n   }\n\n   free(a_copy);\n}\n\nstatic void eigh_float16(uint16_t* a, uint16_t* eigvals, uint16_t* eigvecs,\n                         int n) {\n  float* a_float = (float*)malloc(n * n * sizeof(float));\n  float* eigvals_float = (float*)malloc(n * sizeof(float));\n  float* eigvecs_float = eigvecs ? 
(float*)malloc(n * n * sizeof(float)) : NULL;\n  if (!a_float || !eigvals_float || (eigvecs && !eigvecs_float)) {\n    free(a_float);\n    free(eigvals_float);\n    free(eigvecs_float);\n    return;\n  }\n  for (int i = 0; i < n * n; i++) a_float[i] = half_to_float(a[i]);\n  eigh_float32(a_float, eigvals_float, eigvecs_float, n);\n  for (int i = 0; i < n; i++) eigvals[i] = float_to_half(eigvals_float[i]);\n  if (eigvecs) {\n    for (int i = 0; i < n * n; i++)\n      eigvecs[i] = float_to_half(eigvecs_float[i]);\n    free(eigvecs_float);\n  }\n  free(a_float);\n  free(eigvals_float);\n}\n\nstatic void eigh_bfloat16(caml_ba_bfloat16* a, caml_ba_bfloat16* eigvals,\n                          caml_ba_bfloat16* eigvecs, int n) {\n  float* a_float = (float*)malloc(n * n * sizeof(float));\n  float* eigvals_float = (float*)malloc(n * sizeof(float));\n  float* eigvecs_float = eigvecs ? (float*)malloc(n * n * sizeof(float)) : NULL;\n  if (!a_float || !eigvals_float || (eigvecs && !eigvecs_float)) {\n    free(a_float);\n    free(eigvals_float);\n    free(eigvecs_float);\n    return;\n  }\n  for (int i = 0; i < n * n; i++) a_float[i] = bfloat16_to_float(a[i]);\n  eigh_float32(a_float, eigvals_float, eigvecs_float, n);\n  for (int i = 0; i < n; i++) eigvals[i] = float_to_bfloat16(eigvals_float[i]);\n  if (eigvecs) {\n    for (int i = 0; i < n * n; i++)\n      eigvecs[i] = float_to_bfloat16(eigvecs_float[i]);\n    free(eigvecs_float);\n  }\n  free(a_float);\n  free(eigvals_float);\n}\n\nstatic void eigh_f8e4m3(caml_ba_fp8_e4m3* a, caml_ba_fp8_e4m3* eigvals,\n                        caml_ba_fp8_e4m3* eigvecs, int n) {\n  float* a_float = (float*)malloc(n * n * sizeof(float));\n  float* eigvals_float = (float*)malloc(n * sizeof(float));\n  float* eigvecs_float = eigvecs ? 
(float*)malloc(n * n * sizeof(float)) : NULL;\n  if (!a_float || !eigvals_float || (eigvecs && !eigvecs_float)) {\n    free(a_float);\n    free(eigvals_float);\n    free(eigvecs_float);\n    return;\n  }\n  for (int i = 0; i < n * n; i++) a_float[i] = fp8_e4m3_to_float(a[i]);\n  eigh_float32(a_float, eigvals_float, eigvecs_float, n);\n  for (int i = 0; i < n; i++) eigvals[i] = float_to_fp8_e4m3(eigvals_float[i]);\n  if (eigvecs) {\n    for (int i = 0; i < n * n; i++)\n      eigvecs[i] = float_to_fp8_e4m3(eigvecs_float[i]);\n    free(eigvecs_float);\n  }\n  free(a_float);\n  free(eigvals_float);\n}\n\nstatic void eigh_f8e5m2(caml_ba_fp8_e5m2* a, caml_ba_fp8_e5m2* eigvals,\n                        caml_ba_fp8_e5m2* eigvecs, int n) {\n  float* a_float = (float*)malloc(n * n * sizeof(float));\n  float* eigvals_float = (float*)malloc(n * sizeof(float));\n  float* eigvecs_float = eigvecs ? (float*)malloc(n * n * sizeof(float)) : NULL;\n  if (!a_float || !eigvals_float || (eigvecs && !eigvecs_float)) {\n    free(a_float);\n    free(eigvals_float);\n    free(eigvecs_float);\n    return;\n  }\n  for (int i = 0; i < n * n; i++) a_float[i] = fp8_e5m2_to_float(a[i]);\n  eigh_float32(a_float, eigvals_float, eigvecs_float, n);\n  for (int i = 0; i < n; i++) eigvals[i] = float_to_fp8_e5m2(eigvals_float[i]);\n  if (eigvecs) {\n    for (int i = 0; i < n * n; i++)\n      eigvecs[i] = float_to_fp8_e5m2(eigvecs_float[i]);\n    free(eigvecs_float);\n  }\n  free(a_float);\n  free(eigvals_float);\n}\n\nCAMLprim value caml_nx_op_eig(value v_in, value v_vals, value v_vecs,\n                              value v_symmetric, value v_compute_vectors) {\n  CAMLparam5(v_in, v_vals, v_vecs, v_symmetric, v_compute_vectors);\n  int symmetric = Int_val(v_symmetric);\n  int compute_vectors = Int_val(v_compute_vectors);\n  ndarray_t in = extract_ndarray(v_in);\n  ndarray_t vals = extract_ndarray(v_vals);\n  ndarray_t vecs = extract_ndarray(v_vecs);\n  struct caml_ba_array* ba_in = 
Caml_ba_array_val(Field(v_in, FFI_TENSOR_DATA));\n  struct caml_ba_array* ba_vals =\n      Caml_ba_array_val(Field(v_vals, FFI_TENSOR_DATA));\n  struct caml_ba_array* ba_vecs =\n      Caml_ba_array_val(Field(v_vecs, FFI_TENSOR_DATA));\n  int kind = nx_buffer_get_kind(ba_in);\n  if (in.ndim < 2) {\n    cleanup_ndarray(&in);\n    cleanup_ndarray(&vals);\n    cleanup_ndarray(&vecs);\n    caml_failwith(\"eig: input must have at least 2 dimensions\");\n  }\n  int n = in.shape[in.ndim - 1];\n  if (in.shape[in.ndim - 2] != n) {\n    cleanup_ndarray(&in);\n    cleanup_ndarray(&vals);\n    cleanup_ndarray(&vecs);\n    caml_failwith(\"eig: input must be square matrix\");\n  }\n  // General eigenvalue decomposition is now supported\n  int batch_size = 1;\n  for (int i = 0; i < in.ndim - 2; i++) {\n    batch_size *= in.shape[i];\n  }\n  int s_in_row = in.strides[in.ndim - 2];\n  int s_in_col = in.strides[in.ndim - 1];\n  int s_vals_stride = vals.strides[vals.ndim - 1];\n  int s_vecs_row = compute_vectors ? vecs.strides[vecs.ndim - 2] : 0;\n  int s_vecs_col = compute_vectors ? vecs.strides[vecs.ndim - 1] : 0;\n  caml_enter_blocking_section();\n  for (int b = 0; b < batch_size; b++) {\n    size_t off_in = in.offset;\n    size_t off_vals = vals.offset;\n    size_t off_vecs = compute_vectors ? vecs.offset : 0;\n    if (in.ndim > 2) {\n      int remaining = b;\n      for (int i = in.ndim - 3; i >= 0; i--) {\n        int coord = remaining % in.shape[i];\n        remaining /= in.shape[i];\n        off_in += coord * in.strides[i];\n        off_vals += coord * vals.strides[i];\n        if (compute_vectors) off_vecs += coord * vecs.strides[i];\n      }\n    }\n    switch (kind) {\n      case CAML_BA_FLOAT32: {\n        float* base_in = (float*)ba_in->data + off_in;\n        if (symmetric) {\n          double* base_vals = (double*)ba_vals->data + off_vals;  // Eigenvalues are always float64\n          float* base_vecs =\n              compute_vectors ? 
(float*)ba_vecs->data + off_vecs : NULL;\n          float* A = (float*)malloc((size_t)n * n * sizeof(float));\n          float* temp_vals = (float*)malloc(n * sizeof(float));\n          float* temp_vecs = compute_vectors ? (float*)malloc(n * n * sizeof(float)) : NULL;\n          if (!A || !temp_vals || (compute_vectors && !temp_vecs)) {\n            free(A);\n            free(temp_vals);\n            free(temp_vecs);\n            continue;\n          }\n          nx_pack_f32(A, base_in, n, n, s_in_row, s_in_col);\n          eigh_float32(A, temp_vals, temp_vecs, n);\n          // Convert eigenvalues from float32 to float64\n          for (int i = 0; i < n; i++) {\n            base_vals[i * s_vals_stride] = (double)temp_vals[i];\n          }\n          if (compute_vectors) {\n            nx_unpack_f32(base_vecs, temp_vecs, n, n, s_vecs_row, s_vecs_col);\n          }\n          free(A);\n          free(temp_vals);\n          free(temp_vecs);\n        } else {\n          // General eigenvalue decomposition - output is complex64\n          complex64* base_vals = (complex64*)ba_vals->data + off_vals;\n          complex64* base_vecs =\n              compute_vectors ? 
(complex64*)ba_vecs->data + off_vecs : NULL;\n          float* A = (float*)malloc((size_t)n * n * sizeof(float));\n          if (!A) continue;\n          nx_pack_f32(A, base_in, n, n, s_in_row, s_in_col);\n          // Allocate temporary buffers for complex32 results from LAPACK\n\n          complex32* temp_vals = (complex32*)malloc(n * sizeof(complex32));\n          complex32* temp_vecs = compute_vectors ?\n              (complex32*)malloc(n * n * sizeof(complex32)) : NULL;\n          if (!temp_vals || (compute_vectors && !temp_vecs)) {\n            free(A);\n            free(temp_vals);\n            free(temp_vecs);\n            continue;\n          }\n\n          eig_float32(A, temp_vals, temp_vecs, n);\n\n          // Convert eigenvalues from complex32 to complex64\n          for (int i = 0; i < n; i++) {\n            base_vals[i * s_vals_stride] = (double)crealf(temp_vals[i]) + (double)cimagf(temp_vals[i]) * I;\n          }\n\n          if (compute_vectors) {\n            // Unpack and convert complex eigenvectors from complex32 to complex64\n            for (int i = 0; i < n; i++) {\n              for (int j = 0; j < n; j++) {\n                base_vecs[i * s_vecs_row + j * s_vecs_col] =\n                    (double)crealf(temp_vecs[i * n + j]) + (double)cimagf(temp_vecs[i * n + j]) * I;\n              }\n            }\n            free(temp_vecs);\n          }\n          free(temp_vals);\n          free(A);\n        }\n        break;\n      }\n      case CAML_BA_FLOAT64: {\n        double* base_in = (double*)ba_in->data + off_in;\n        if (symmetric) {\n          double* base_vals = (double*)ba_vals->data + off_vals;\n          double* base_vecs =\n              compute_vectors ? 
(double*)ba_vecs->data + off_vecs : NULL;\n          double* A = (double*)malloc((size_t)n * n * sizeof(double));\n          double* temp_vals = (double*)malloc(n * sizeof(double));\n          double* temp_vecs = compute_vectors ?\n              (double*)malloc((size_t)n * n * sizeof(double)) : NULL;\n          if (!A || !temp_vals || (compute_vectors && !temp_vecs)) {\n            free(A);\n            free(temp_vals);\n            free(temp_vecs);\n            continue;\n          }\n          nx_pack_f64(A, base_in, n, n, s_in_row, s_in_col);\n          eigh_float64(A, temp_vals, temp_vecs, n);\n          // Copy eigenvalues to output with proper striding\n          for (int i = 0; i < n; i++) {\n            base_vals[i * s_vals_stride] = temp_vals[i];\n          }\n          if (compute_vectors) {\n            // Unpack from a separate buffer: src and dst must not alias\n            nx_unpack_f64(base_vecs, temp_vecs, n, n, s_vecs_row, s_vecs_col);\n          }\n          free(A);\n          free(temp_vals);\n          free(temp_vecs);\n        } else {\n          // General eigenvalue decomposition - output is complex\n          complex64* base_vals = (complex64*)ba_vals->data + off_vals;\n          complex64* base_vecs =\n              compute_vectors ? (complex64*)ba_vecs->data + off_vecs : NULL;\n          double* A = (double*)malloc((size_t)n * n * sizeof(double));\n          if (!A) continue;\n          nx_pack_f64(A, base_in, n, n, s_in_row, s_in_col);\n          // Allocate temporary buffers for complex results\n          complex64* temp_vals = (complex64*)malloc(n * sizeof(complex64));\n          complex64* temp_vecs = compute_vectors ?\n              (complex64*)malloc(n * n * sizeof(complex64)) : NULL;\n          if (!temp_vals || (compute_vectors && !temp_vecs)) {\n            free(A);\n            free(temp_vals);\n            free(temp_vecs);\n            continue;\n          }\n\n          eig_float64(A, temp_vals, temp_vecs, n);\n\n          // Copy eigenvalues to output with proper striding\n          for (int i = 0; i < n; i++) {\n            base_vals[i * s_vals_stride] = temp_vals[i];\n          }\n\n          if (compute_vectors) {\n            // Unpack complex eigenvectors with proper striding\n            for (int i = 0; i < n; i++) {\n              for (int j = 0; j < n; j++) {\n                base_vecs[i * s_vecs_row + j * s_vecs_col] = temp_vecs[i * n + j];\n              }\n            }\n            free(temp_vecs);\n          }\n          free(temp_vals);\n          free(A);\n        }\n        break;\n      }\n      case CAML_BA_COMPLEX32: {\n        complex32* 
base_in = (complex32*)ba_in->data + off_in;\n        float* base_vals = (float*)ba_vals->data + off_vals;\n        complex32* base_vecs =\n            compute_vectors ? (complex32*)ba_vecs->data + off_vecs : NULL;\n        complex32* A = (complex32*)malloc((size_t)n * n * sizeof(complex32));\n        if (!A) continue;\n        nx_pack_c32(A, base_in, n, n, s_in_row, s_in_col);\n        eigh_complex32(A, base_vals, base_vecs, n);\n        if (compute_vectors) {\n          nx_unpack_c32(base_vecs, base_vecs, n, n, s_vecs_row, s_vecs_col);\n        }\n        free(A);\n        break;\n      }\n      case CAML_BA_COMPLEX64: {\n        complex64* base_in = (complex64*)ba_in->data + off_in;\n        double* base_vals = (double*)ba_vals->data + off_vals;\n        complex64* base_vecs =\n            compute_vectors ? (complex64*)ba_vecs->data + off_vecs : NULL;\n        complex64* A = (complex64*)malloc((size_t)n * n * sizeof(complex64));\n        if (!A) continue;\n        nx_pack_c64(A, base_in, n, n, s_in_row, s_in_col);\n        eigh_complex64(A, base_vals, base_vecs, n);\n        if (compute_vectors) {\n          nx_unpack_c64(base_vecs, base_vecs, n, n, s_vecs_row, s_vecs_col);\n        }\n        free(A);\n        break;\n      }\n      case CAML_BA_FLOAT16: {\n        uint16_t* base_in = (uint16_t*)ba_in->data + off_in;\n        uint16_t* base_vals = (uint16_t*)ba_vals->data + off_vals;\n        uint16_t* base_vecs =\n            compute_vectors ? 
(uint16_t*)ba_vecs->data + off_vecs : NULL;\n        uint16_t* A = (uint16_t*)malloc((size_t)n * n * sizeof(uint16_t));\n        if (!A) continue;\n        for (int i = 0; i < n; i++) {\n          for (int j = 0; j < n; j++) {\n            A[i * n + j] = base_in[i * s_in_row + j * s_in_col];\n          }\n        }\n        eigh_float16(A, base_vals, base_vecs, n);\n        if (compute_vectors) {\n          for (int i = 0; i < n; i++) {\n            for (int j = 0; j < n; j++) {\n              base_vecs[i * s_vecs_row + j * s_vecs_col] = base_vecs[i * n + j];\n            }\n          }\n        }\n        free(A);\n        break;\n      }\n      case NX_BA_BFLOAT16: {\n        caml_ba_bfloat16* base_in = (caml_ba_bfloat16*)ba_in->data + off_in;\n        caml_ba_bfloat16* base_vals =\n            (caml_ba_bfloat16*)ba_vals->data + off_vals;\n        caml_ba_bfloat16* base_vecs =\n            compute_vectors ? (caml_ba_bfloat16*)ba_vecs->data + off_vecs\n                            : NULL;\n        caml_ba_bfloat16* A =\n            (caml_ba_bfloat16*)malloc((size_t)n * n * sizeof(caml_ba_bfloat16));\n        if (!A) continue;\n        for (int i = 0; i < n; i++) {\n          for (int j = 0; j < n; j++) {\n            A[i * n + j] = base_in[i * s_in_row + j * s_in_col];\n          }\n        }\n        eigh_bfloat16(A, base_vals, base_vecs, n);\n        if (compute_vectors) {\n          for (int i = 0; i < n; i++) {\n            for (int j = 0; j < n; j++) {\n              base_vecs[i * s_vecs_row + j * s_vecs_col] = base_vecs[i * n + j];\n            }\n          }\n        }\n        free(A);\n        break;\n      }\n      case NX_BA_FP8_E4M3: {\n        caml_ba_fp8_e4m3* base_in = (caml_ba_fp8_e4m3*)ba_in->data + off_in;\n        caml_ba_fp8_e4m3* base_vals =\n            (caml_ba_fp8_e4m3*)ba_vals->data + off_vals;\n        caml_ba_fp8_e4m3* base_vecs =\n            compute_vectors ? 
(caml_ba_fp8_e4m3*)ba_vecs->data + off_vecs\n                            : NULL;\n        caml_ba_fp8_e4m3* A =\n            (caml_ba_fp8_e4m3*)malloc((size_t)n * n * sizeof(caml_ba_fp8_e4m3));\n        if (!A) continue;\n        for (int i = 0; i < n; i++) {\n          for (int j = 0; j < n; j++) {\n            A[i * n + j] = base_in[i * s_in_row + j * s_in_col];\n          }\n        }\n        eigh_f8e4m3(A, base_vals, base_vecs, n);\n        if (compute_vectors) {\n          for (int i = 0; i < n; i++) {\n            for (int j = 0; j < n; j++) {\n              base_vecs[i * s_vecs_row + j * s_vecs_col] = base_vecs[i * n + j];\n            }\n          }\n        }\n        free(A);\n        break;\n      }\n      case NX_BA_FP8_E5M2: {\n        caml_ba_fp8_e5m2* base_in = (caml_ba_fp8_e5m2*)ba_in->data + off_in;\n        caml_ba_fp8_e5m2* base_vals =\n            (caml_ba_fp8_e5m2*)ba_vals->data + off_vals;\n        caml_ba_fp8_e5m2* base_vecs =\n            compute_vectors ? (caml_ba_fp8_e5m2*)ba_vecs->data + off_vecs\n                            : NULL;\n        caml_ba_fp8_e5m2* A =\n            (caml_ba_fp8_e5m2*)malloc((size_t)n * n * sizeof(caml_ba_fp8_e5m2));\n        if (!A) continue;\n        for (int i = 0; i < n; i++) {\n          for (int j = 0; j < n; j++) {\n            A[i * n + j] = base_in[i * s_in_row + j * s_in_col];\n          }\n        }\n        eigh_f8e5m2(A, base_vals, base_vecs, n);\n        if (compute_vectors) {\n          for (int i = 0; i < n; i++) {\n            for (int j = 0; j < n; j++) {\n              base_vecs[i * s_vecs_row + j * s_vecs_col] = base_vecs[i * n + j];\n            }\n          }\n        }\n        free(A);\n        break;\n      }\n      default:\n        caml_leave_blocking_section();\n        cleanup_ndarray(&in);\n        cleanup_ndarray(&vals);\n        cleanup_ndarray(&vecs);\n        caml_failwith(\"eig: unsupported dtype\");\n    }\n  }\n  caml_leave_blocking_section();\n  cleanup_ndarray(&in);\n  
cleanup_ndarray(&vals);\n  cleanup_ndarray(&vecs);\n  CAMLreturn(Val_unit);\n}\n"
  },
  {
    "path": "packages/nx/lib/backend_c/nx_c_index.c",
    "content": "/*---------------------------------------------------------------------------\n   Copyright (c) 2026 The Raven authors. All rights reserved.\n   SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*/\n\n// Gather and scatter operations for nx C backend\n\n#include <caml/alloc.h>\n#include <caml/bigarray.h>\n#include <caml/custom.h>\n#include <caml/fail.h>\n#include <caml/memory.h>\n#include <caml/threads.h>\n\n#include \"nx_c_shared.h\"\n\n// Helper to check if shapes are equal\nstatic bool shape_equal(const int *shape1, const int *shape2, int ndim) {\n  for (int i = 0; i < ndim; i++) {\n    if (shape1[i] != shape2[i]) return false;\n  }\n  return true;\n}\n\n// Helper to check if two ndarrays have the same shape\nstatic bool same_shape(const ndarray_t *a, const ndarray_t *b) {\n  if (a->ndim != b->ndim) return false;\n  return shape_equal(a->shape, b->shape, a->ndim);\n}\n\n// Forward declaration - implementation after multi_iterator definition\nstatic void copy_ndarray(const ndarray_t *src, ndarray_t *dst, int kind);\n\n// Type definitions for element-wise in-place operations (for scatter modes)\ntypedef void (*elem_op_fn)(void *, long, void *, long);\n\n// Dispatch table for each type\ntypedef struct {\n  elem_op_fn i8, u8, i16, u16, i32, i64, u32, u64, inat;\n  elem_op_fn f16, f32, f64;\n  elem_op_fn c32, c64;\n  elem_op_fn bf16, bool_, i4, u4, f8e4m3, f8e5m2;\n} elem_op_table;\n\n// Macro to generate all standard type variants for an element operation\n#define GENERATE_ELEM_OP(name, OP_EXPR)               \\\n  ELEM_OP_FOR_TYPE(name, int8_t, i8, OP_EXPR)         \\\n  ELEM_OP_FOR_TYPE(name, uint8_t, u8, OP_EXPR)        \\\n  ELEM_OP_FOR_TYPE(name, int16_t, i16, OP_EXPR)       \\\n  ELEM_OP_FOR_TYPE(name, uint16_t, u16, OP_EXPR)      \\\n  ELEM_OP_FOR_TYPE(name, int32_t, i32, OP_EXPR)       \\\n  ELEM_OP_FOR_TYPE(name, int64_t, i64, OP_EXPR)       \\\n  ELEM_OP_FOR_TYPE(name, uint32_t, u32, 
OP_EXPR)      \\\n  ELEM_OP_FOR_TYPE(name, uint64_t, u64, OP_EXPR)      \\\n  ELEM_OP_FOR_TYPE(name, intnat, inat, OP_EXPR)       \\\n  ELEM_OP_FOR_TYPE(name, float, f32, OP_EXPR)         \\\n  ELEM_OP_FOR_TYPE(name, double, f64, OP_EXPR)\n\n// Macro to build dispatch table\n#define BUILD_ELEM_OP_TABLE(name)             \\\n  static const elem_op_table name##_table = { \\\n      .i8 = nx_c_elem_##name##_i8,            \\\n      .u8 = nx_c_elem_##name##_u8,            \\\n      .i16 = nx_c_elem_##name##_i16,          \\\n      .u16 = nx_c_elem_##name##_u16,          \\\n      .i32 = nx_c_elem_##name##_i32,          \\\n      .i64 = nx_c_elem_##name##_i64,          \\\n      .u32 = nx_c_elem_##name##_u32,          \\\n      .u64 = nx_c_elem_##name##_u64,          \\\n      .inat = nx_c_elem_##name##_inat,        \\\n      .f16 = nx_c_elem_##name##_f16,          \\\n      .f32 = nx_c_elem_##name##_f32,          \\\n      .f64 = nx_c_elem_##name##_f64,          \\\n      .c32 = nx_c_elem_##name##_c32,          \\\n      .c64 = nx_c_elem_##name##_c64,          \\\n      .bf16 = nx_c_elem_##name##_bf16,        \\\n      .bool_ = nx_c_elem_##name##_bool_,      \\\n      .i4 = nx_c_elem_##name##_i4,            \\\n      .u4 = nx_c_elem_##name##_u4,            \\\n      .f8e4m3 = nx_c_elem_##name##_f8e4m3,    \\\n      .f8e5m2 = nx_c_elem_##name##_f8e5m2}\n\n// Generic element operation\n#define ELEM_OP_FOR_TYPE(name, T, suffix, OP_EXPR)                            \\\n  static void nx_c_elem_##name##_##suffix(void *src, long src_off, void *dst, \\\n                                          long dst_off) {                     \\\n    T *s = (T *)src;                                                          \\\n    T *d = (T *)dst;                                                          \\\n    T a = s[src_off];                                                         \\\n    T b = d[dst_off];                                                         \\\n    d[dst_off] = 
OP_EXPR(a, b);                                               \\\n  }\n\n// Low-precision float elem op (convert to float)\n#define LOW_PREC_ELEM_OP(name, T, suffix, OP_EXPR, TO_FLOAT, FROM_FLOAT)      \\\n  static void nx_c_elem_##name##_##suffix(void *src, long src_off, void *dst, \\\n                                          long dst_off) {                     \\\n    T *s = (T *)src;                                                          \\\n    T *d = (T *)dst;                                                          \\\n    float a = TO_FLOAT(s[src_off]);                                           \\\n    float b = TO_FLOAT(d[dst_off]);                                           \\\n    d[dst_off] = FROM_FLOAT(OP_EXPR(a, b));                                   \\\n  }\n\n// Complex elem op\n#define COMPLEX_ELEM_OP_FOR_TYPE(name, T, suffix, OP_EXPR)                    \\\n  static void nx_c_elem_##name##_##suffix(void *src, long src_off, void *dst, \\\n                                          long dst_off) {                     \\\n    T *s = (T *)src;                                                          \\\n    T *d = (T *)dst;                                                          \\\n    T a = s[src_off];                                                         \\\n    T b = d[dst_off];                                                         \\\n    d[dst_off] = OP_EXPR(a, b);                                               \\\n  }\n\n// Int4 elem op (packed)\n#define INT4_ELEM_OP(name, signedness, suffix, OP_EXPR)                       \\\n  static void nx_c_elem_##name##_##suffix(void *src, long src_off, void *dst, \\\n                                          long dst_off) {                     \\\n    uint8_t *s = (uint8_t *)src;                                              \\\n    uint8_t *d = (uint8_t *)dst;                                              \\\n    long s_byte = src_off / 2;                                                \\\n    int 
s_nib = src_off % 2;                                                  \\\n    int a = s_nib ? (signedness ? (int8_t)(s[s_byte] & 0xF0) >> 4             \\\n                                : (s[s_byte] >> 4) & 0x0F)                    \\\n                  : (signedness ? (int8_t)((s[s_byte] & 0x0F) << 4) >> 4      \\\n                                : s[s_byte] & 0x0F);                          \\\n    long d_byte = dst_off / 2;                                                \\\n    int d_nib = dst_off % 2;                                                  \\\n    int b = d_nib ? (signedness ? (int8_t)(d[d_byte] & 0xF0) >> 4             \\\n                                : (d[d_byte] >> 4) & 0x0F)                    \\\n                  : (signedness ? (int8_t)((d[d_byte] & 0x0F) << 4) >> 4      \\\n                                : d[d_byte] & 0x0F);                          \\\n    int res = OP_EXPR(a, b);                                                  \\\n    res = signedness ? CLAMP_I4(res) : CLAMP_U4(res);                         \\\n    uint8_t nib = (uint8_t)res & 0x0F;                                        \\\n    if (d_nib) {                                                              \\\n      d[d_byte] = (d[d_byte] & 0x0F) | (nib << 4);                            \\\n    } else {                                                                  \\\n      d[d_byte] = (d[d_byte] & 0xF0) | nib;                                   \\\n    }                                                                         \\\n  }\n\n// Generate for set (assign)\n#define SET_EXPR(a, b) (a)\nGENERATE_ELEM_OP(set, SET_EXPR)\n\nLOW_PREC_ELEM_OP(set, uint16_t, f16, SET_EXPR, half_to_float, float_to_half)\nLOW_PREC_ELEM_OP(set, caml_ba_bfloat16, bf16, SET_EXPR, bfloat16_to_float,\n                 float_to_bfloat16)\nLOW_PREC_ELEM_OP(set, caml_ba_fp8_e4m3, f8e4m3, SET_EXPR, fp8_e4m3_to_float,\n                 float_to_fp8_e4m3)\nLOW_PREC_ELEM_OP(set, caml_ba_fp8_e5m2, 
f8e5m2, SET_EXPR, fp8_e5m2_to_float,\n                 float_to_fp8_e5m2)\n\nCOMPLEX_ELEM_OP_FOR_TYPE(set, complex32, c32, SET_EXPR)\nCOMPLEX_ELEM_OP_FOR_TYPE(set, complex64, c64, SET_EXPR)\nINT4_ELEM_OP(set, 1, i4, SET_EXPR)\nINT4_ELEM_OP(set, 0, u4, SET_EXPR)\nELEM_OP_FOR_TYPE(set, caml_ba_bool, bool_, SET_EXPR)\nBUILD_ELEM_OP_TABLE(set);\n\n// Generate for add (accumulate)\n#define ADD_EXPR(a, b) ((a) + (b))\nGENERATE_ELEM_OP(add, ADD_EXPR)\n\nLOW_PREC_ELEM_OP(add, uint16_t, f16, ADD_EXPR, half_to_float, float_to_half)\nLOW_PREC_ELEM_OP(add, caml_ba_bfloat16, bf16, ADD_EXPR, bfloat16_to_float,\n                 float_to_bfloat16)\nLOW_PREC_ELEM_OP(add, caml_ba_fp8_e4m3, f8e4m3, ADD_EXPR, fp8_e4m3_to_float,\n                 float_to_fp8_e4m3)\nLOW_PREC_ELEM_OP(add, caml_ba_fp8_e5m2, f8e5m2, ADD_EXPR, fp8_e5m2_to_float,\n                 float_to_fp8_e5m2)\n\nCOMPLEX_ELEM_OP_FOR_TYPE(add, complex32, c32, ADD_EXPR)\nCOMPLEX_ELEM_OP_FOR_TYPE(add, complex64, c64, ADD_EXPR)\nINT4_ELEM_OP(add, 1, i4, ADD_EXPR)\nINT4_ELEM_OP(add, 0, u4, ADD_EXPR)\nELEM_OP_FOR_TYPE(add, caml_ba_bool, bool_, ADD_EXPR)\nBUILD_ELEM_OP_TABLE(add);\n\n// Multi-dimensional iterator for shapes\ntypedef struct {\n  int ndim;\n  long *shape;\n  long *coords;  // Changed to long to handle large dimensions\n  int has_elements;\n} multi_iterator_t;\n\nstatic void multi_iterator_init(multi_iterator_t *it, const ndarray_t *nd) {\n  it->ndim = nd->ndim;\n  it->shape = (long *)malloc(it->ndim * sizeof(long));\n  it->coords = (long *)calloc(it->ndim, sizeof(long));\n  it->has_elements = 1;\n  for (int i = 0; i < it->ndim; i++) {\n    long dim = nd->shape[i];\n    it->shape[i] = dim;\n    it->coords[i] = 0;\n    if (dim == 0) it->has_elements = 0;\n  }\n  if (it->ndim == 0) it->has_elements = 1;\n}\n\nstatic int multi_iterator_next(multi_iterator_t *it) {\n  for (int i = it->ndim - 1; i >= 0; i--) {\n    it->coords[i]++;\n    if (it->coords[i] < it->shape[i]) return 1;\n    it->coords[i] = 0;\n  }\n  
return 0;\n}\n\nstatic void multi_iterator_destroy(multi_iterator_t *it) {\n  free(it->shape);\n  free(it->coords);\n}\n\nstatic long compute_offset(const ndarray_t *nd, const long *coords) {\n  long off = 0;\n  for (int i = 0; i < nd->ndim; i++) {\n    off += coords[i] * nd->strides[i];\n  }\n  return off;\n}\n\n// Helper to get element byte size for memset (returns bytes per element, 0.5\n// approximated as special case)\nstatic double get_elem_byte_size(int kind) {\n  switch (kind) {\n    case CAML_BA_SINT8:\n    case CAML_BA_UINT8:\n    case NX_BA_BOOL:\n    case NX_BA_FP8_E4M3:\n    case NX_BA_FP8_E5M2:\n      return 1.0;\n    case CAML_BA_SINT16:\n    case CAML_BA_UINT16:\n    case CAML_BA_FLOAT16:\n    case NX_BA_BFLOAT16:\n      return 2.0;\n    case CAML_BA_INT32:\n    case CAML_BA_FLOAT32:\n    case NX_BA_UINT32:\n      return 4.0;\n    case CAML_BA_INT64:\n    case CAML_BA_NATIVE_INT:\n    case CAML_BA_CAML_INT:\n    case CAML_BA_FLOAT64:\n    case NX_BA_UINT64:\n      return 8.0;\n    case CAML_BA_COMPLEX32:\n      return 8.0;\n    case CAML_BA_COMPLEX64:\n      return 16.0;\n    case NX_BA_INT4:\n    case NX_BA_UINT4:\n      return 0.5;\n    default:\n      caml_failwith(\"unsupported kind\");\n      return 0;\n  }\n}\n\n// Integer element size in bytes for common kinds. 
Returns 0 if unsupported\nstatic inline size_t elem_size_from_kind(int kind) {\n  switch (kind) {\n    case CAML_BA_SINT8:\n    case CAML_BA_UINT8:\n    case NX_BA_BOOL:\n    case NX_BA_FP8_E4M3:\n    case NX_BA_FP8_E5M2:\n      return 1;\n    case CAML_BA_SINT16:\n    case CAML_BA_UINT16:\n    case CAML_BA_FLOAT16:\n    case NX_BA_BFLOAT16:\n      return 2;\n    case CAML_BA_INT32:\n    case CAML_BA_FLOAT32:\n    case NX_BA_UINT32:\n      return 4;\n    case CAML_BA_INT64:\n    case CAML_BA_FLOAT64:\n    case CAML_BA_NATIVE_INT:\n    case CAML_BA_CAML_INT:\n    case NX_BA_UINT64:\n      return 8;\n    case CAML_BA_COMPLEX32:\n      return 8;  // 2 * float32\n    case CAML_BA_COMPLEX64:\n      return 16; // 2 * float64\n    default:\n      return 0;\n  }\n}\n\n// Zero the output array - requires passing the value to access bigarray\nstatic void zero_ndarray(ndarray_t *nd, void *data, int kind) {\n  long total_elems = total_elements_safe(nd);\n  double bytes_per_elem = get_elem_byte_size(kind);\n  long total_bytes;\n  if (kind == NX_BA_INT4 || kind == NX_BA_UINT4) {\n    // Round up: an odd element count still occupies the final packed byte\n    total_bytes = (total_elems + 1) / 2;\n  } else {\n    total_bytes = (long)(total_elems * bytes_per_elem);\n  }\n  memset(data, 0, total_bytes);\n}\n\n// Helper to copy data from one ndarray to another (assuming same shape)\nstatic void copy_ndarray(const ndarray_t *src, ndarray_t *dst, int kind) {\n  if (!src || !dst) return;\n  if (!same_shape(src, dst)) return;\n  \n  // Get element size based on kind\n  size_t elem_size = 1;\n  switch (kind) {\n    case CAML_BA_SINT8:\n    case CAML_BA_UINT8:\n    case NX_BA_BOOL:\n    case NX_BA_FP8_E4M3:\n    case NX_BA_FP8_E5M2:\n      elem_size = 1;\n      break;\n    case CAML_BA_SINT16:\n    case CAML_BA_UINT16:\n    case CAML_BA_FLOAT16:\n    case NX_BA_BFLOAT16:\n      elem_size = 2;\n      break;\n    case CAML_BA_INT32:\n    case CAML_BA_FLOAT32:\n    case NX_BA_UINT32:\n      elem_size = 4;\n      break;\n    case CAML_BA_INT64:\n    case CAML_BA_FLOAT64:\n  
  case NX_BA_UINT64:\n      elem_size = 8;\n      break;\n    case CAML_BA_NATIVE_INT:\n    case CAML_BA_CAML_INT:\n      elem_size = sizeof(intnat);\n      break;\n    case CAML_BA_COMPLEX32:\n      elem_size = 8;  // 2 * float32\n      break;\n    case CAML_BA_COMPLEX64:\n      elem_size = 16;  // 2 * float64\n      break;\n    default:\n      return;  // Unsupported type\n  }\n  \n  // Use multi-iterator to copy elements\n  multi_iterator_t it;\n  multi_iterator_init(&it, src);\n\n  if (it.has_elements) {\n    do {\n      long src_off = compute_offset(src, it.coords);\n      long dst_off = compute_offset(dst, it.coords);\n\n      memcpy(dst->data + (dst->offset + dst_off) * elem_size,\n             src->data + (src->offset + src_off) * elem_size,\n             elem_size);\n    } while (multi_iterator_next(&it));\n  }\n\n  multi_iterator_destroy(&it);\n}\n\n// Generic gather implementation\nstatic const char *generic_gather(const ndarray_t *data,\n                                  const ndarray_t *indices, ndarray_t *out,\n                                  int axis, elem_op_fn op) {\n  const char *error_msg = NULL;\n\n  if (data->ndim != indices->ndim || data->ndim != out->ndim) {\n    error_msg = \"ndim mismatch\";\n    return error_msg;\n  }\n  for (int i = 0; i < data->ndim; i++) {\n    if (i != axis) {\n      if (indices->shape[i] != data->shape[i]) {\n        error_msg = \"shape mismatch on non-axis dims\";\n        return error_msg;\n      }\n    }\n  }\n  if (!shape_equal(indices->shape, out->shape, data->ndim)) {\n    error_msg = \"output shape must match indices\";\n    return error_msg;\n  }\n\n  if (total_elements_safe(indices) == 0) {\n    return NULL;\n  }\n\n  multi_iterator_t it;\n  multi_iterator_init(&it, indices);\n\n  if (it.has_elements) {\n    do {\n      long indices_off = compute_offset(indices, it.coords);\n      int32_t index = *\n          ((int32_t *)(indices->data\n                        + (indices->offset + indices_off) * 
sizeof(int32_t)));\n      // Handle negative indices (Python-style)\n      if (index < 0) {\n        index += data->shape[axis];\n      }\n      if (index < 0 || index >= data->shape[axis]) {\n        error_msg = \"index out of bounds\";\n        break;\n      }\n\n      long data_coords[32];  // Stack buffer for coordinates\n      for (int i = 0; i < it.ndim; i++) {\n        data_coords[i] = (i == axis) ? index : it.coords[i];\n      }\n      long data_off = compute_offset(data, data_coords);\n      long out_off = compute_offset(out, it.coords);\n\n      // Apply set op (copy)\n      op(data->data, data->offset + data_off, out->data,\n         out->offset + out_off);\n    } while (multi_iterator_next(&it));\n  }\n  multi_iterator_destroy(&it);\n\n  return error_msg;\n}\n\n// Generic scatter implementation\nstatic const char *generic_scatter(const ndarray_t *template,\n                                   const ndarray_t *indices,\n                                   const ndarray_t *updates, ndarray_t *out,\n                                   void *out_raw_data, int axis, elem_op_fn op,\n                                   int unique, int kind, int mode) {\n  const char *error_msg = NULL;\n\n  // NULL checks\n  if (!template || !indices || !updates || !out) {\n    error_msg = \"generic_scatter: NULL pointer\";\n    return error_msg;\n  }\n  if (!template->shape || !indices->shape || !updates->shape || !out->shape) {\n    error_msg = \"generic_scatter: NULL shape array\";\n    return error_msg;\n  }\n\n  if (template->ndim != indices->ndim || template->ndim != updates->ndim ||\n      template->ndim != out->ndim) {\n    error_msg = \"ndim mismatch\";\n    return error_msg;\n  }\n  for (int i = 0; i < template->ndim; i++) {\n    if (i != axis) {\n      if (indices->shape[i] != template->shape[i] ||\n          updates->shape[i] != indices->shape[i]) {\n        error_msg = \"shape mismatch on non-axis dims\";\n        return error_msg;\n      }\n    } else {\n      if 
(indices->shape[i] != updates->shape[i]) {\n        error_msg = \"indices and updates mismatch on axis\";\n        return error_msg;\n      }\n    }\n  }\n  if (!shape_equal(template->shape, out->shape, template->ndim)) {\n    error_msg = \"output shape must match template\";\n    return error_msg;\n  }\n\n  // For Set mode (0), copy template to output first\n  // For Add mode (1), zero the output\n  if (mode == 0) {\n    // Set mode - copy template data to output to preserve existing values\n    copy_ndarray(template, out, kind);\n  } else {\n    // Add mode - zero the output\n    zero_ndarray(out, out_raw_data, kind);\n  }\n\n  multi_iterator_t it;\n  multi_iterator_init(&it, indices);\n\n  if (it.has_elements) {\n    do {\n      long indices_off = compute_offset(indices, it.coords);\n      int32_t index = *\n          ((int32_t *)(indices->data\n                        + (indices->offset + indices_off) * sizeof(int32_t)));\n      // Handle negative indices (Python-style)\n      if (index < 0) {\n        index += template->shape[axis];\n      }\n      if (index < 0 || index >= template->shape[axis]) {\n        error_msg = \"index out of bounds\";\n        break;\n      }\n\n      long out_coords[32];  // Stack buffer for coordinates\n      for (int i = 0; i < it.ndim; i++) {\n        out_coords[i] = (i == axis) ? 
index : it.coords[i];\n      }\n      long out_off = compute_offset(out, out_coords);\n      long updates_off = compute_offset(updates, it.coords);\n\n      // Apply op\n      op(updates->data, updates->offset + updates_off, out->data,\n         out->offset + out_off);\n    } while (multi_iterator_next(&it));\n  }\n  multi_iterator_destroy(&it);\n\n  return error_msg;\n}\n\n// Dispatch for gather\nstatic void dispatch_gather(value v_data, value v_indices, value v_out,\n                            int axis) {\n  ndarray_t data = extract_ndarray(v_data);\n  ndarray_t indices = extract_ndarray(v_indices);\n  ndarray_t out = extract_ndarray(v_out);\n\n  value v_data_data = Field(v_data, FFI_TENSOR_DATA);\n  value v_out_data = Field(v_out, FFI_TENSOR_DATA);\n  value v_indices_data = Field(v_indices, FFI_TENSOR_DATA);\n\n  struct caml_ba_array *ba_data = Caml_ba_array_val(v_data_data);\n  struct caml_ba_array *ba_indices = Caml_ba_array_val(v_indices_data);\n  int kind = nx_buffer_get_kind(ba_data);\n  if (kind != nx_buffer_get_kind(Caml_ba_array_val(v_out_data)))\n    caml_failwith(\"dtype mismatch\");\n  if (nx_buffer_get_kind(ba_indices) != CAML_BA_INT32)\n    caml_failwith(\"indices must be int32\");\n\n  elem_op_fn op = NULL;\n  switch (kind) {\n    case CAML_BA_SINT8:\n      op = set_table.i8;\n      break;\n    case CAML_BA_UINT8:\n      op = set_table.u8;\n      break;\n    case CAML_BA_SINT16:\n      op = set_table.i16;\n      break;\n    case CAML_BA_UINT16:\n      op = set_table.u16;\n      break;\n    case CAML_BA_INT32:\n      op = set_table.i32;\n      break;\n    case CAML_BA_INT64:\n      op = set_table.i64;\n      break;\n    case NX_BA_UINT32:\n      op = set_table.u32;\n      break;\n    case NX_BA_UINT64:\n      op = set_table.u64;\n      break;\n    case CAML_BA_NATIVE_INT:\n    case CAML_BA_CAML_INT:\n      op = set_table.inat;\n      break;\n    case CAML_BA_FLOAT16:\n      op = set_table.f16;\n      break;\n    case CAML_BA_FLOAT32:\n      op = 
set_table.f32;\n      break;\n    case CAML_BA_FLOAT64:\n      op = set_table.f64;\n      break;\n    case CAML_BA_COMPLEX32:\n      op = set_table.c32;\n      break;\n    case CAML_BA_COMPLEX64:\n      op = set_table.c64;\n      break;\n    case NX_BA_BFLOAT16:\n      op = set_table.bf16;\n      break;\n    case NX_BA_BOOL:\n      op = set_table.bool_;\n      break;\n    case NX_BA_INT4:\n      op = set_table.i4;\n      break;\n    case NX_BA_UINT4:\n      op = set_table.u4;\n      break;\n    case NX_BA_FP8_E4M3:\n      op = set_table.f8e4m3;\n      break;\n    case NX_BA_FP8_E5M2:\n      op = set_table.f8e5m2;\n      break;\n    default:\n      caml_failwith(\"unsupported dtype for gather\");\n  }\n\n  if (!op) caml_failwith(\"gather not supported for dtype\");\n\n  // Fast path: 2D gather along axis 0 with broadcasted indices on dim 1,\n  // contiguous data/out -> memcpy whole rows\n  const char *error = NULL;\n  size_t elem_size = elem_size_from_kind(kind);\n  if (axis == 0 && data.ndim == 2 && indices.ndim == 2 && out.ndim == 2 &&\n      elem_size > 0 && is_contiguous(&data) && is_contiguous(&out)) {\n    // Broadcasting along dim 1 is represented by stride==0 on indices dim 1\n    if (indices.strides[1] == 0 &&\n        data.shape[1] == out.shape[1] && indices.shape[0] == out.shape[0]) {\n      long n = out.shape[0];\n      long d = out.shape[1];\n      long data_row_stride = data.strides[0]; // in elements\n      long out_row_stride = out.strides[0];   // in elements\n      char *restrict data_ptr = (char *)data.data;\n      char *restrict out_ptr = (char *)out.data;\n      int32_t *restrict idx_ptr = (int32_t *)indices.data;\n      long idx_off0 = indices.offset; // element offset\n      long idx_row_stride = indices.strides[0]; // in elements\n      long data_base = data.offset; // element offset\n      long out_base = out.offset;   // element offset\n      size_t row_bytes = (size_t)d * elem_size;\n\n      caml_enter_blocking_section();\n      for (long 
i = 0; i < n; i++) {\n        long idx_eoff = idx_off0 + i * idx_row_stride; // indices strides[1]==0\n        int32_t index = idx_ptr[idx_eoff];\n        if (index < 0) index += data.shape[0];\n        if (index < 0 || index >= data.shape[0]) {\n          error = \"index out of bounds\";\n          break;\n        }\n        long src_eoff = data_base + index * data_row_stride;\n        long dst_eoff = out_base + i * out_row_stride;\n        memcpy(out_ptr + (size_t)dst_eoff * elem_size,\n               data_ptr + (size_t)src_eoff * elem_size, row_bytes);\n      }\n      caml_leave_blocking_section();\n      cleanup_ndarray(&data);\n      cleanup_ndarray(&indices);\n      cleanup_ndarray(&out);\n      if (error) caml_failwith(error);\n      return;\n    }\n  }\n\n  caml_enter_blocking_section();\n  error = generic_gather(&data, &indices, &out, axis, op);\n  caml_leave_blocking_section();\n\n  cleanup_ndarray(&data);\n  cleanup_ndarray(&indices);\n  cleanup_ndarray(&out);\n\n  if (error) caml_failwith(error);\n}\n\n// Dispatch for scatter\nstatic void dispatch_scatter(value v_template, value v_indices, value v_updates,\n                             value v_out, int axis, value v_mode,\n                             value v_unique) {\n  ndarray_t templ = extract_ndarray(v_template);\n  ndarray_t indices = extract_ndarray(v_indices);\n  ndarray_t updates = extract_ndarray(v_updates);\n\n  // Check if v_out is None (represented as 0 in OCaml)\n  ndarray_t out;\n  if (v_out == Val_int(0)) {\n    // If v_out is None, use template as output\n    out = templ;\n  } else {\n    out = extract_ndarray(v_out);\n  }\n\n  value v_template_data = Field(v_template, FFI_TENSOR_DATA);\n  value v_updates_data = Field(v_updates, FFI_TENSOR_DATA);\n  value v_out_data =\n      (v_out == Val_int(0)) ? 
v_template_data : Field(v_out, FFI_TENSOR_DATA);\n  value v_indices_data = Field(v_indices, FFI_TENSOR_DATA);\n\n  struct caml_ba_array *ba_templ = Caml_ba_array_val(v_template_data);\n  int kind = nx_buffer_get_kind(ba_templ);\n  if (kind != nx_buffer_get_kind(Caml_ba_array_val(v_updates_data)) ||\n      kind != nx_buffer_get_kind(Caml_ba_array_val(v_out_data)))\n    caml_failwith(\"dtype mismatch\");\n  if (nx_buffer_get_kind(Caml_ba_array_val(v_indices_data)) != CAML_BA_INT32)\n    caml_failwith(\"indices must be int32\");\n\n  const elem_op_table *table = Int_val(v_mode) == 0 ? &set_table : &add_table;\n\n  elem_op_fn op = NULL;\n  switch (kind) {\n    case CAML_BA_SINT8:\n      op = table->i8;\n      break;\n    case CAML_BA_UINT8:\n      op = table->u8;\n      break;\n    case CAML_BA_SINT16:\n      op = table->i16;\n      break;\n    case CAML_BA_UINT16:\n      op = table->u16;\n      break;\n    case CAML_BA_INT32:\n      op = table->i32;\n      break;\n    case CAML_BA_INT64:\n      op = table->i64;\n      break;\n    case NX_BA_UINT32:\n      op = table->u32;\n      break;\n    case NX_BA_UINT64:\n      op = table->u64;\n      break;\n    case CAML_BA_NATIVE_INT:\n    case CAML_BA_CAML_INT:\n      op = table->inat;\n      break;\n    case CAML_BA_FLOAT16:\n      op = table->f16;\n      break;\n    case CAML_BA_FLOAT32:\n      op = table->f32;\n      break;\n    case CAML_BA_FLOAT64:\n      op = table->f64;\n      break;\n    case CAML_BA_COMPLEX32:\n      op = table->c32;\n      break;\n    case CAML_BA_COMPLEX64:\n      op = table->c64;\n      break;\n    case NX_BA_BFLOAT16:\n      op = table->bf16;\n      break;\n    case NX_BA_BOOL:\n      op = table->bool_;\n      break;\n    case NX_BA_INT4:\n      op = table->i4;\n      break;\n    case NX_BA_UINT4:\n      op = table->u4;\n      break;\n    case NX_BA_FP8_E4M3:\n      op = table->f8e4m3;\n      break;\n    case NX_BA_FP8_E5M2:\n      op = table->f8e5m2;\n      break;\n    default:\n      
caml_failwith(\"unsupported dtype for scatter\");\n  }\n\n  if (!op) caml_failwith(\"scatter not supported for dtype\");\n\n  int unique = Bool_val(v_unique);\n  int mode = Int_val(v_mode);\n\n  // Derive the raw data pointer for zeroing BEFORE releasing the runtime lock.\n  value actual_v_out = (v_out == Val_int(0)) ? v_template : v_out;\n  value v_actual_data = Field(actual_v_out, FFI_TENSOR_DATA);\n  void *out_raw_data = Caml_ba_data_val(v_actual_data);\n\n  caml_enter_blocking_section();\n  const char *error = generic_scatter(&templ, &indices, &updates, &out,\n                                      out_raw_data, axis, op, unique, kind, mode);\n  caml_leave_blocking_section();\n\n  cleanup_ndarray(&indices);\n  cleanup_ndarray(&updates);\n  // Only cleanup templ and out if they're different\n  if (v_out != Val_int(0)) {\n    cleanup_ndarray(&templ);\n    cleanup_ndarray(&out);\n  } else {\n    // When v_out is None, out == templ, so only cleanup once\n    cleanup_ndarray(&templ);\n  }\n\n  if (error) caml_failwith(error);\n}\n\n// ============================================================================\n// OCaml FFI Stubs\n// ============================================================================\n\nCAMLprim value caml_nx_op_gather(value v_data, value v_indices, value v_out,\n                                 value v_axis) {\n  CAMLparam4(v_data, v_indices, v_out, v_axis);\n  dispatch_gather(v_data, v_indices, v_out, Int_val(v_axis));\n  CAMLreturn(Val_unit);\n}\n\nCAMLprim value caml_nx_op_scatter(value v_template, value v_indices,\n                                  value v_updates, value v_axis, value v_out,\n                                  value v_mode, value v_unique) {\n  CAMLparam5(v_template, v_indices, v_updates, v_axis, v_out);\n  CAMLxparam2(v_mode, v_unique);\n  dispatch_scatter(v_template, v_indices, v_updates, v_out, Int_val(v_axis),\n                   v_mode, v_unique);\n  CAMLreturn(Val_unit);\n}\n\n// Bytecode wrapper for scatter (7 
arguments)\nCAMLprim value caml_nx_op_scatter_bc(value *argv, int argn) {\n  CAMLparam0();\n  (void)argn;\n  value ret = caml_nx_op_scatter(argv[0], argv[1], argv[2], argv[3], argv[4],\n                                 argv[5], argv[6]);\n  CAMLreturn(ret);\n}\n"
  },
  {
    "path": "packages/nx/lib/backend_c/nx_c_matmul.c",
    "content": "/*---------------------------------------------------------------------------\n   Copyright (c) 2026 The Raven authors. All rights reserved.\n   SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*/\n\n// Matrix multiplication for nx C backend\n\n#include <caml/alloc.h>\n#include <caml/bigarray.h>\n#include <caml/custom.h>\n#include <caml/fail.h>\n#include <caml/memory.h>\n#include <caml/threads.h>\n#include <cblas.h>\n#include <limits.h>\n\n#include \"nx_c_shared.h\"\n\n// Type definitions for matmul operations\ntypedef void (*matmul_op_t)(const ndarray_t *, const ndarray_t *, ndarray_t *);\n\n// Dispatch table for each type\ntypedef struct {\n  matmul_op_t i8, u8, i16, u16, i32, i64, u32, u64, inat;\n  matmul_op_t f16, f32, f64;\n  matmul_op_t c32, c64;\n  matmul_op_t bf16, bool_, i4, u4, f8e4m3, f8e5m2;\n} matmul_op_table;\n\n// Macro to generate all standard type variants for matmul\n#define GENERATE_MATMUL_OP(suffix, T, ACCUM_T, CAST) \\\n  MATMUL_OP_FOR_TYPE(suffix, T, ACCUM_T, CAST)\n\n// Helper to iterate over batch dimensions with a kernel function for matmul\ntypedef void (*matmul_kernel_t)(void *, long, long, long, void *, long, long,\n                                long, void *, long, long, long, long, long,\n                                long);\n\nstatic inline void iterate_batch(\n    const long *batch_shape, int batch_nd, const long *batch_strides_a,\n    const long *batch_strides_b, const long *batch_strides_c, void *a_data,\n    void *b_data, void *c_data, long a_off, long b_off, long c_off, long a_rs,\n    long a_cs, long b_rs, long b_cs, long c_rs, long c_cs, long m, long k,\n    long n, matmul_kernel_t kernel) {\n  if (batch_nd <= 0) {\n    kernel(a_data, a_off, a_rs, a_cs, b_data, b_off, b_rs, b_cs, c_data, c_off,\n           c_rs, c_cs, m, k, n);\n    return;\n  }\n\n  int coords_buf[MAX_NDIM];\n  int *coords = coords_buf; // batch_nd <= MAX_NDIM by construction\n  for 
(int i = 0; i < batch_nd; ++i) coords[i] = 0;\n\n  bool done = false;\n  while (!done) {\n    long a_batch_off = a_off;\n    long b_batch_off = b_off;\n    long c_batch_off = c_off;\n\n    for (int i = 0; i < batch_nd; i++) {\n      a_batch_off += coords[i] * batch_strides_a[i];\n      b_batch_off += coords[i] * batch_strides_b[i];\n      c_batch_off += coords[i] * batch_strides_c[i];\n    }\n\n    kernel(a_data, a_batch_off, a_rs, a_cs, b_data, b_batch_off, b_rs, b_cs,\n           c_data, c_batch_off, c_rs, c_cs, m, k, n);\n\n    // Advance to next position\n    done = true;\n    for (int i = batch_nd - 1; i >= 0; i--) {\n      coords[i]++;\n      if (coords[i] < batch_shape[i]) {\n        done = false;\n        break;\n      }\n      coords[i] = 0;\n    }\n  }\n}\n\n// Generic matmul kernel\n#define MATMUL_OP_KERNEL(suffix, T, ACCUM_T, CAST)                                  \\\n  static void nx_c_matmul_##suffix##_kernel(                                        \\\n      void *a_data, long a_off, long a_rs, long a_cs, void *b_data,                 \\\n      long b_off, long b_rs, long b_cs, void *c_data, long c_off, long c_rs,        \\\n      long c_cs, long m, long k, long n) {                                          \\\n    T *restrict a = (T *)a_data;                                                    \\\n    T *restrict b = (T *)b_data;                                                    \\\n    T *restrict c = (T *)c_data;                                                    \\\n    /* Generic kernel (naive triple loop). Specific types may override               \\\n       with specialized kernels below. 
*/                                            \\\n    _Pragma(\"omp parallel for collapse(2) if(m * n > 1000)\")                        \\\n    for (long i = 0; i < m; i++) {                                                  \\\n      for (long j = 0; j < n; j++) {                                                \\\n        ACCUM_T sum = 0;                                                            \\\n        for (long p = 0; p < k; p++) {                                              \\\n          sum += (ACCUM_T)a[a_off + i * a_rs + p * a_cs] *                          \\\n                 (ACCUM_T)b[b_off + p * b_rs + j * b_cs];                           \\\n        }                                                                           \\\n        c[c_off + i * c_rs + j * c_cs] = CAST(sum);                                 \\\n      }                                                                             \\\n    }                                                                               \\\n  }\n\n// Generic matmul implementation\n#define MATMUL_OP_IMPL(suffix, ELEM_SIZE)                                      \\\n  static void nx_c_matmul_##suffix(const ndarray_t *a, const ndarray_t *b,     \\\n                                   ndarray_t *c) {                             \\\n    if (!a || !b || !c) {                                                      \\\n      caml_failwith(\"nx_c_matmul_\" #suffix \": null pointer\");                  \\\n    }                                                                          \\\n    int nd = a->ndim > b->ndim ? 
a->ndim : b->ndim;                            \\\n    if (c->ndim != nd) {                                                       \\\n      caml_failwith(\"nx_c_matmul_\" #suffix \": output ndim mismatch\");          \\\n    }                                                                          \\\n    if (a->ndim < 2 || b->ndim < 2) {                                          \\\n      caml_failwith(\"nx_c_matmul_\" #suffix \": input ndim < 2\");                \\\n    }                                                                          \\\n    long m = a->shape[a->ndim - 2];                                            \\\n    long k = a->shape[a->ndim - 1];                                            \\\n    long kk = b->shape[b->ndim - 2];                                           \\\n    long n = b->shape[b->ndim - 1];                                            \\\n    if (k != kk) {                                                             \\\n      /* Build shape strings for error message */                              \\\n      char shape_a_str[256] = \"[\";                                             \\\n      char shape_b_str[256] = \"[\";                                             \\\n      for (int i = 0; i < a->ndim; i++) {                                      \\\n        char buf[32];                                                          \\\n        snprintf(buf, sizeof(buf), \"%s%d\", i > 0 ? \",\" : \"\", a->shape[i]);   \\\n        strcat(shape_a_str, buf);                                              \\\n      }                                                                        \\\n      strcat(shape_a_str, \"]\");                                                \\\n      for (int i = 0; i < b->ndim; i++) {                                      \\\n        char buf[32];                                                          \\\n        snprintf(buf, sizeof(buf), \"%s%d\", i > 0 ? 
\",\" : \"\", b->shape[i]);   \\\n        strcat(shape_b_str, buf);                                              \\\n      }                                                                        \\\n      strcat(shape_b_str, \"]\");                                                \\\n      char msg[512];                                                           \\\n      snprintf(msg, sizeof(msg),                                               \\\n               \"dot: cannot contract %s (last axis: %ld) to %s (axis %d: %ld) \"\\\n               \"(size %ld≠%ld)\",                                               \\\n               shape_a_str, k, shape_b_str, b->ndim - 2, kk, k, kk);           \\\n      caml_invalid_argument(msg);                                              \\\n    }                                                                          \\\n    if (c->shape[c->ndim - 2] != m || c->shape[c->ndim - 1] != n) {            \\\n      caml_failwith(\"nx_c_matmul_\" #suffix \": output shape mismatch\");         \\\n    }                                                                          \\\n    int batch_nd = nd - 2;                                                     \\\n    long batch_shape_buf[MAX_NDIM];                                            \\\n    long batch_strides_a_buf[MAX_NDIM];                                        \\\n    long batch_strides_b_buf[MAX_NDIM];                                        \\\n    long batch_strides_c_buf[MAX_NDIM];                                        \\\n    long *batch_shape = batch_shape_buf;                                       \\\n    long *batch_strides_a = batch_strides_a_buf;                                \\\n    long *batch_strides_b = batch_strides_b_buf;                                \\\n    long *batch_strides_c = batch_strides_c_buf;                                \\\n    int a_batch_offset = nd - a->ndim;                                         \\\n    int b_batch_offset = nd - b->ndim;      
                                   \\\n    for (int i = 0; i < batch_nd; i++) {                                       \\\n      long sa = 1, sb = 1;                                                     \\\n      long stra = 0, strb = 0;                                                 \\\n      if (i >= a_batch_offset) {                                               \\\n        int a_i = i - a_batch_offset;                                          \\\n        sa = a->shape[a_i];                                                    \\\n        stra = a->strides[a_i];                                                \\\n      }                                                                        \\\n      if (i >= b_batch_offset) {                                               \\\n        int b_i = i - b_batch_offset;                                          \\\n        sb = b->shape[b_i];                                                    \\\n        strb = b->strides[b_i];                                                \\\n      }                                                                        \\\n      if (sa != sb && sa != 1 && sb != 1) {                                    \\\n        caml_failwith(\"nx_c_matmul_\" #suffix \": batch shape mismatch\");        \\\n      }                                                                        \\\n      long s = sa > sb ? sa : sb;                                              \\\n      batch_shape[i] = s;                                                      \\\n      batch_strides_a[i] = (sa == 1) ? 0 : stra;                               \\\n      batch_strides_b[i] = (sb == 1) ? 
0 : strb;                               \\\n      batch_strides_c[i] = c->strides[i];                                      \\\n      if (c->shape[i] != s) {                                                  \\\n        caml_failwith(\"nx_c_matmul_\" #suffix \": output batch shape mismatch\"); \\\n      }                                                                        \\\n    }                                                                          \\\n    long a_rs = a->strides[a->ndim - 2];                                       \\\n    long a_cs = a->strides[a->ndim - 1];                                       \\\n    long b_rs = b->strides[b->ndim - 2];                                       \\\n    long b_cs = b->strides[b->ndim - 1];                                       \\\n    long c_rs = c->strides[c->ndim - 2];                                       \\\n    long c_cs = c->strides[c->ndim - 1];                                       \\\n    void *a_data = (char *)a->data + (ELEM_SIZE ? a->offset * ELEM_SIZE : a->offset / 2);  \\\n    void *b_data = (char *)b->data + (ELEM_SIZE ? b->offset * ELEM_SIZE : b->offset / 2);  \\\n    void *c_data = (char *)c->data + (ELEM_SIZE ? 
c->offset * ELEM_SIZE : c->offset / 2);  \\\n    caml_enter_blocking_section();                                             \\\n    iterate_batch(batch_shape, batch_nd, batch_strides_a, batch_strides_b,     \\\n                  batch_strides_c, a_data, b_data, c_data, 0, 0, 0, a_rs,      \\\n                  a_cs, b_rs, b_cs, c_rs, c_cs, m, k, n,                       \\\n                  nx_c_matmul_##suffix##_kernel);                              \\\n    caml_leave_blocking_section();                                             \\\n  }\n\n// Macro to generate both kernel and implementation for matmul\n#define MATMUL_OP_FOR_TYPE(suffix, T, ACCUM_T, CAST) \\\n  MATMUL_OP_KERNEL(suffix, T, ACCUM_T, CAST)         \\\n  MATMUL_OP_IMPL(suffix, sizeof(T))\n\n// Low-precision float kernel (convert to float for mul/acc)\n#define LOW_PREC_MATMUL_KERNEL(suffix, T, TO_FLOAT, FROM_FLOAT)               \\\n  static void nx_c_matmul_##suffix##_kernel(                                  \\\n      void *a_data, long a_off, long a_rs, long a_cs, void *b_data,           \\\n      long b_off, long b_rs, long b_cs, void *c_data, long c_off, long c_rs,  \\\n      long c_cs, long m, long k, long n) {                                    \\\n    T *a = (T *)a_data;                                                       \\\n    T *b = (T *)b_data;                                                       \\\n    T *c = (T *)c_data;                                                       \\\n    _Pragma(\"omp parallel for collapse(2) if(m * n > 1000)\") for (long i = 0; \\\n                                                                  i < m;      \\\n                                                                  i++) {      \\\n      for (long j = 0; j < n; j++) {                                          \\\n        float sum = 0.0f;                                                     \\\n        for (long p = 0; p < k; p++) {                                        \\\n          float aa = 
TO_FLOAT(a[a_off + i * a_rs + p * a_cs]);                \\\n          float bb = TO_FLOAT(b[b_off + p * b_rs + j * b_cs]);                \\\n          sum += aa * bb;                                                     \\\n        }                                                                     \\\n        c[c_off + i * c_rs + j * c_cs] = FROM_FLOAT(sum);                     \\\n      }                                                                       \\\n    }                                                                         \\\n  }\n\n// For low-precision, use the impl with the special kernel\n#define LOW_PREC_MATMUL_IMPL(suffix, T) MATMUL_OP_IMPL(suffix, sizeof(T))\n\n// Special implementation for int4 (packed, unpack/mul/acc/pack with saturation)\n#define INT4_MATMUL_IMPL(signedness, suffix)                                   \\\n  static void nx_c_matmul_##suffix##_kernel(                                   \\\n      void *a_data, long a_off, long a_rs, long a_cs, void *b_data,            \\\n      long b_off, long b_rs, long b_cs, void *c_data, long c_off, long c_rs,   \\\n      long c_cs, long m, long k, long n) {                                     \\\n    uint8_t *a = (uint8_t *)a_data;                                            \\\n    uint8_t *b = (uint8_t *)b_data;                                            \\\n    uint8_t *c = (uint8_t *)c_data;                                            \\\n    _Pragma(\"omp parallel for collapse(2) if(m * n > 1000)\") for (long i = 0;  \\\n                                                                  i < m;       \\\n                                                                  i++) {       \\\n      for (long j = 0; j < n; j++) {                                           \\\n        int32_t sum = 0;                                                       \\\n        for (long p = 0; p < k; p++) {                                         \\\n          long a_idx = a_off + i * a_rs + p * a_cs;      
                       \\\n          long a_byte_off = a_idx / 2;                                         \\\n          int a_nib_off = a_idx % 2;                                           \\\n          /* signed int4: cast before shifting so the nibble sign-extends */   \\\n          int aa =                                                             \\\n              a_nib_off                                                        \\\n                  ? (signedness ? ((int8_t)a[a_byte_off] >> 4)                 \\\n                                : ((a[a_byte_off] >> 4) & 0x0F))               \\\n                  : (signedness ? ((int8_t)((a[a_byte_off] & 0x0F) << 4) >> 4) \\\n                                : (a[a_byte_off] & 0x0F));                     \\\n          long b_idx = b_off + p * b_rs + j * b_cs;                            \\\n          long b_byte_off = b_idx / 2;                                         \\\n          int b_nib_off = b_idx % 2;                                           \\\n          int bb =                                                             \\\n              b_nib_off                                                        \\\n                  ? (signedness ? ((int8_t)b[b_byte_off] >> 4)                 \\\n                                : ((b[b_byte_off] >> 4) & 0x0F))               \\\n                  : (signedness ? ((int8_t)((b[b_byte_off] & 0x0F) << 4) >> 4) \\\n                                : (b[b_byte_off] & 0x0F));                     \\\n          sum += aa * bb;                                                      \\\n        }                                                                      \\\n        int res = signedness ? 
CLAMP_I4(sum) : CLAMP_U4(sum);                  \\\n        uint8_t nib = (uint8_t)res & 0x0F;                                     \\\n        long c_idx = c_off + i * c_rs + j * c_cs;                              \\\n        long c_byte_off = c_idx / 2;                                           \\\n        int c_nib_off = c_idx % 2;                                             \\\n        if (c_nib_off) {                                                       \\\n          c[c_byte_off] = (c[c_byte_off] & 0x0F) | (nib << 4);                 \\\n        } else {                                                               \\\n          c[c_byte_off] = (c[c_byte_off] & 0xF0) | nib;                        \\\n        }                                                                      \\\n      }                                                                        \\\n    }                                                                          \\\n  }                                                                            \\\n  MATMUL_OP_IMPL(suffix, 0)  /* int4 offset is in nibbles, handled in kernel */\n\n// Generate for integer types with wider accumulation\nGENERATE_MATMUL_OP(i8, int8_t, int64_t, (int8_t))\nGENERATE_MATMUL_OP(u8, uint8_t, uint64_t, (uint8_t))\nGENERATE_MATMUL_OP(i16, int16_t, int64_t, (int16_t))\nGENERATE_MATMUL_OP(u16, uint16_t, uint64_t, (uint16_t))\nGENERATE_MATMUL_OP(i32, int32_t, int64_t, (int32_t))\nGENERATE_MATMUL_OP(i64, int64_t, int64_t, (int64_t))\nGENERATE_MATMUL_OP(u32, uint32_t, uint64_t, (uint32_t))\nGENERATE_MATMUL_OP(u64, uint64_t, uint64_t, (uint64_t))\nGENERATE_MATMUL_OP(inat, intnat, int64_t, (intnat))\nGENERATE_MATMUL_OP(bool_, caml_ba_bool, uint64_t, (caml_ba_bool))\n\n// Float types with same-type accumulation\n/* BLAS-based GEMM kernels for float32/float64 using CBLAS.\n   These use optimized BLAS routines when possible, falling back to\n   packing for non-contiguous strides. 
*/\n\nstatic inline int setup_blas_row_major_params(long rows, long cols, long row_stride,\n                                              long col_stride, CBLAS_TRANSPOSE *trans,\n                                              int *ld) {\n  if (row_stride <= 0 || col_stride <= 0) return 0;\n  if (col_stride == 1) {\n    if (row_stride < cols || row_stride > INT_MAX) return 0;\n    *trans = CblasNoTrans;\n    *ld = (int)row_stride;\n    return 1;\n  }\n  if (row_stride == 1) {\n    if (col_stride < rows || col_stride > INT_MAX) return 0;\n    *trans = CblasTrans;\n    *ld = (int)col_stride;\n    return 1;\n  }\n  return 0;\n}\n\nstatic inline int setup_blas_row_major_output(long rows, long cols, long row_stride,\n                                              long col_stride, int *ld) {\n  if (col_stride != 1) return 0;\n  if (row_stride < cols || row_stride > INT_MAX) return 0;\n  *ld = (int)row_stride;\n  return 1;\n}\n\nstatic void nx_c_matmul_f32_kernel(void *a_data, long a_off, long a_rs,\n                                   long a_cs, void *b_data, long b_off,\n                                   long b_rs, long b_cs, void *c_data,\n                                   long c_off, long c_rs, long c_cs, long m,\n                                   long k, long n) {\n  float *restrict a = (float *)a_data;\n  float *restrict b = (float *)b_data;\n  float *restrict c = (float *)c_data;\n\n  int use_blas_direct = 0;\n  CBLAS_TRANSPOSE trans_a = CblasNoTrans;\n  CBLAS_TRANSPOSE trans_b = CblasNoTrans;\n  int lda = 0, ldb = 0, ldc = 0;\n\n  if (setup_blas_row_major_params(m, k, a_rs, a_cs, &trans_a, &lda) &&\n      setup_blas_row_major_params(k, n, b_rs, b_cs, &trans_b, &ldb) &&\n      setup_blas_row_major_output(m, n, c_rs, c_cs, &ldc)) {\n    use_blas_direct = 1;\n  }\n\n  if (use_blas_direct) {\n    cblas_sgemm(CblasRowMajor, trans_a, trans_b, m, n, k, 1.0f,\n                a + a_off, lda, b + b_off, ldb, 0.0f, c + c_off, ldc);\n  } else {\n    /* Non-contiguous layout: 
pack matrices first */\n    float *a_packed = (float *)malloc(m * k * sizeof(float));\n    float *b_packed = (float *)malloc(k * n * sizeof(float));\n    float *c_packed = (float *)malloc(m * n * sizeof(float));\n    if (!a_packed || !b_packed || !c_packed) {\n      free(a_packed);\n      free(b_packed);\n      free(c_packed);\n      return;\n    }\n\n    /* Pack A and B */\n    for (long i = 0; i < m; i++) {\n      for (long j = 0; j < k; j++) {\n        a_packed[i * k + j] = a[a_off + i * a_rs + j * a_cs];\n      }\n    }\n    for (long i = 0; i < k; i++) {\n      for (long j = 0; j < n; j++) {\n        b_packed[i * n + j] = b[b_off + i * b_rs + j * b_cs];\n      }\n    }\n\n    /* Compute using BLAS */\n    cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans, m, n, k, 1.0f,\n                a_packed, k, b_packed, n, 0.0f, c_packed, n);\n\n    /* Unpack C */\n    for (long i = 0; i < m; i++) {\n      for (long j = 0; j < n; j++) {\n        c[c_off + i * c_rs + j * c_cs] = c_packed[i * n + j];\n      }\n    }\n\n    free(a_packed);\n    free(b_packed);\n    free(c_packed);\n  }\n}\n\nstatic void nx_c_matmul_f64_kernel(void *a_data, long a_off, long a_rs,\n                                   long a_cs, void *b_data, long b_off,\n                                   long b_rs, long b_cs, void *c_data,\n                                   long c_off, long c_rs, long c_cs, long m,\n                                   long k, long n) {\n  double *restrict a = (double *)a_data;\n  double *restrict b = (double *)b_data;\n  double *restrict c = (double *)c_data;\n\n  int use_blas_direct = 0;\n  CBLAS_TRANSPOSE trans_a = CblasNoTrans;\n  CBLAS_TRANSPOSE trans_b = CblasNoTrans;\n  int lda = 0, ldb = 0, ldc = 0;\n\n  if (setup_blas_row_major_params(m, k, a_rs, a_cs, &trans_a, &lda) &&\n      setup_blas_row_major_params(k, n, b_rs, b_cs, &trans_b, &ldb) &&\n      setup_blas_row_major_output(m, n, c_rs, c_cs, &ldc)) {\n    use_blas_direct = 1;\n  }\n\n  if (use_blas_direct) {\n   
 cblas_dgemm(CblasRowMajor, trans_a, trans_b, m, n, k, 1.0,\n                a + a_off, lda, b + b_off, ldb, 0.0, c + c_off, ldc);\n  } else {\n    /* Non-contiguous layout: pack matrices first */\n    double *a_packed = (double *)malloc(m * k * sizeof(double));\n    double *b_packed = (double *)malloc(k * n * sizeof(double));\n    double *c_packed = (double *)malloc(m * n * sizeof(double));\n    if (!a_packed || !b_packed || !c_packed) {\n      free(a_packed);\n      free(b_packed);\n      free(c_packed);\n      return;\n    }\n\n    /* Pack A and B */\n    for (long i = 0; i < m; i++) {\n      for (long j = 0; j < k; j++) {\n        a_packed[i * k + j] = a[a_off + i * a_rs + j * a_cs];\n      }\n    }\n    for (long i = 0; i < k; i++) {\n      for (long j = 0; j < n; j++) {\n        b_packed[i * n + j] = b[b_off + i * b_rs + j * b_cs];\n      }\n    }\n\n    /* Compute using BLAS */\n    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans, m, n, k, 1.0,\n                a_packed, k, b_packed, n, 0.0, c_packed, n);\n\n    /* Unpack C */\n    for (long i = 0; i < m; i++) {\n      for (long j = 0; j < n; j++) {\n        c[c_off + i * c_rs + j * c_cs] = c_packed[i * n + j];\n      }\n    }\n\n    free(a_packed);\n    free(b_packed);\n    free(c_packed);\n  }\n}\n\n/* Use the optimized kernels for f32/f64 and the generic implementation glue */\nMATMUL_OP_IMPL(f32, sizeof(float))\nMATMUL_OP_IMPL(f64, sizeof(double))\n\n// Complex types with BLAS GEMM\nstatic void nx_c_matmul_c32_kernel(void *a_data, long a_off, long a_rs,\n                                   long a_cs, void *b_data, long b_off,\n                                   long b_rs, long b_cs, void *c_data,\n                                   long c_off, long c_rs, long c_cs, long m,\n                                   long k, long n) {\n  complex32 *restrict a = (complex32 *)a_data;\n  complex32 *restrict b = (complex32 *)b_data;\n  complex32 *restrict c = (complex32 *)c_data;\n\n  complex32 alpha = 1.0f + 
0.0f * I;\n  complex32 beta = 0.0f + 0.0f * I;\n\n  int use_blas_direct = 0;\n  CBLAS_TRANSPOSE trans_a = CblasNoTrans;\n  CBLAS_TRANSPOSE trans_b = CblasNoTrans;\n  int lda = 0, ldb = 0, ldc = 0;\n\n  if (setup_blas_row_major_params(m, k, a_rs, a_cs, &trans_a, &lda) &&\n      setup_blas_row_major_params(k, n, b_rs, b_cs, &trans_b, &ldb) &&\n      setup_blas_row_major_output(m, n, c_rs, c_cs, &ldc)) {\n    use_blas_direct = 1;\n  }\n\n  if (use_blas_direct) {\n    cblas_cgemm(CblasRowMajor, trans_a, trans_b, m, n, k, &alpha,\n                a + a_off, lda, b + b_off, ldb, &beta, c + c_off, ldc);\n  } else {\n    /* Non-contiguous layout: pack matrices first */\n    complex32 *a_packed = (complex32 *)malloc(m * k * sizeof(complex32));\n    complex32 *b_packed = (complex32 *)malloc(k * n * sizeof(complex32));\n    complex32 *c_packed = (complex32 *)malloc(m * n * sizeof(complex32));\n    if (!a_packed || !b_packed || !c_packed) {\n      free(a_packed);\n      free(b_packed);\n      free(c_packed);\n      return;\n    }\n\n    /* Pack A and B */\n    for (long i = 0; i < m; i++) {\n      for (long j = 0; j < k; j++) {\n        a_packed[i * k + j] = a[a_off + i * a_rs + j * a_cs];\n      }\n    }\n    for (long i = 0; i < k; i++) {\n      for (long j = 0; j < n; j++) {\n        b_packed[i * n + j] = b[b_off + i * b_rs + j * b_cs];\n      }\n    }\n\n    /* Compute using BLAS */\n    cblas_cgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans, m, n, k, &alpha,\n                a_packed, k, b_packed, n, &beta, c_packed, n);\n\n    /* Unpack C */\n    for (long i = 0; i < m; i++) {\n      for (long j = 0; j < n; j++) {\n        c[c_off + i * c_rs + j * c_cs] = c_packed[i * n + j];\n      }\n    }\n\n    free(a_packed);\n    free(b_packed);\n    free(c_packed);\n  }\n}\n\nstatic void nx_c_matmul_c64_kernel(void *a_data, long a_off, long a_rs,\n                                   long a_cs, void *b_data, long b_off,\n                                   long b_rs, long b_cs, void 
*c_data,\n                                   long c_off, long c_rs, long c_cs, long m,\n                                   long k, long n) {\n  complex64 *restrict a = (complex64 *)a_data;\n  complex64 *restrict b = (complex64 *)b_data;\n  complex64 *restrict c = (complex64 *)c_data;\n\n  complex64 alpha = 1.0 + 0.0 * I;\n  complex64 beta = 0.0 + 0.0 * I;\n\n  int use_blas_direct = 0;\n  CBLAS_TRANSPOSE trans_a = CblasNoTrans;\n  CBLAS_TRANSPOSE trans_b = CblasNoTrans;\n  int lda = 0, ldb = 0, ldc = 0;\n\n  if (setup_blas_row_major_params(m, k, a_rs, a_cs, &trans_a, &lda) &&\n      setup_blas_row_major_params(k, n, b_rs, b_cs, &trans_b, &ldb) &&\n      setup_blas_row_major_output(m, n, c_rs, c_cs, &ldc)) {\n    use_blas_direct = 1;\n  }\n\n  if (use_blas_direct) {\n    cblas_zgemm(CblasRowMajor, trans_a, trans_b, m, n, k, &alpha,\n                a + a_off, lda, b + b_off, ldb, &beta, c + c_off, ldc);\n  } else {\n    /* Non-contiguous layout: pack matrices first */\n    complex64 *a_packed = (complex64 *)malloc(m * k * sizeof(complex64));\n    complex64 *b_packed = (complex64 *)malloc(k * n * sizeof(complex64));\n    complex64 *c_packed = (complex64 *)malloc(m * n * sizeof(complex64));\n    if (!a_packed || !b_packed || !c_packed) {\n      free(a_packed);\n      free(b_packed);\n      free(c_packed);\n      return;\n    }\n\n    /* Pack A and B */\n    for (long i = 0; i < m; i++) {\n      for (long j = 0; j < k; j++) {\n        a_packed[i * k + j] = a[a_off + i * a_rs + j * a_cs];\n      }\n    }\n    for (long i = 0; i < k; i++) {\n      for (long j = 0; j < n; j++) {\n        b_packed[i * n + j] = b[b_off + i * b_rs + j * b_cs];\n      }\n    }\n\n    /* Compute using BLAS */\n    cblas_zgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans, m, n, k, &alpha,\n                a_packed, k, b_packed, n, &beta, c_packed, n);\n\n    /* Unpack C */\n    for (long i = 0; i < m; i++) {\n      for (long j = 0; j < n; j++) {\n        c[c_off + i * c_rs + j * c_cs] = c_packed[i 
* n + j];\n      }\n    }\n\n    free(a_packed);\n    free(b_packed);\n    free(c_packed);\n  }\n}\n\nMATMUL_OP_IMPL(c32, sizeof(complex32))\nMATMUL_OP_IMPL(c64, sizeof(complex64))\n\n// Low-precision floats\nLOW_PREC_MATMUL_KERNEL(f16, uint16_t, half_to_float, float_to_half)\nLOW_PREC_MATMUL_IMPL(f16, uint16_t)\nLOW_PREC_MATMUL_KERNEL(bf16, caml_ba_bfloat16, bfloat16_to_float,\n                       float_to_bfloat16)\nLOW_PREC_MATMUL_IMPL(bf16, caml_ba_bfloat16)\nLOW_PREC_MATMUL_KERNEL(f8e4m3, caml_ba_fp8_e4m3, fp8_e4m3_to_float,\n                       float_to_fp8_e4m3)\nLOW_PREC_MATMUL_IMPL(f8e4m3, caml_ba_fp8_e4m3)\nLOW_PREC_MATMUL_KERNEL(f8e5m2, caml_ba_fp8_e5m2, fp8_e5m2_to_float,\n                       float_to_fp8_e5m2)\nLOW_PREC_MATMUL_IMPL(f8e5m2, caml_ba_fp8_e5m2)\n\n// Int4/Uint4\nINT4_MATMUL_IMPL(1, i4)\nINT4_MATMUL_IMPL(0, u4)\n\n// Build dispatch table\n#define BUILD_DISPATCH_TABLE(name)                                             \\\n  static const matmul_op_table name##_table = {.i8 = nx_c_##name##_i8,         \\\n                                               .u8 = nx_c_##name##_u8,         \\\n                                               .i16 = nx_c_##name##_i16,       \\\n                                               .u16 = nx_c_##name##_u16,       \\\n                                               .i32 = nx_c_##name##_i32,       \\\n                                               .i64 = nx_c_##name##_i64,       \\\n                                               .u32 = nx_c_##name##_u32,       \\\n                                               .u64 = nx_c_##name##_u64,       \\\n                                               .inat = nx_c_##name##_inat,     \\\n                                               .f16 = nx_c_##name##_f16,       \\\n                                               .f32 = nx_c_##name##_f32,       \\\n                                               .f64 = nx_c_##name##_f64,       \\\n                                    
           .c32 = nx_c_##name##_c32,       \\\n                                               .c64 = nx_c_##name##_c64,       \\\n                                               .bf16 = nx_c_##name##_bf16,     \\\n                                               .bool_ = nx_c_##name##_bool_,   \\\n                                               .i4 = nx_c_##name##_i4,         \\\n                                               .u4 = nx_c_##name##_u4,         \\\n                                               .f8e4m3 = nx_c_##name##_f8e4m3, \\\n                                               .f8e5m2 = nx_c_##name##_f8e5m2}\n\nBUILD_DISPATCH_TABLE(matmul);\n\n// Generic dispatch function for matmul operations\nstatic void dispatch_matmul_op(value v_a, value v_b, value v_c,\n                               const matmul_op_table *table,\n                               const char *op_name) {\n  // Extract ndarrays using stack-allocated buffers (no malloc)\n  int sa[MAX_NDIM], stra[MAX_NDIM];\n  int sb[MAX_NDIM], strb[MAX_NDIM];\n  int sc[MAX_NDIM], strc[MAX_NDIM];\n  ndarray_t A = extract_ndarray_stack(v_a, sa, stra);\n  ndarray_t B = extract_ndarray_stack(v_b, sb, strb);\n  ndarray_t C = extract_ndarray_stack(v_c, sc, strc);\n\n  // Get bigarray kind from the data field\n  struct caml_ba_array *ba = Caml_ba_array_val(Field(v_a, FFI_TENSOR_DATA));\n  int kind = nx_buffer_get_kind(ba);\n\n  // Check kinds match for b and c\n  int kind_b = nx_buffer_get_kind(Caml_ba_array_val(Field(v_b, FFI_TENSOR_DATA)));\n  int kind_c = nx_buffer_get_kind(Caml_ba_array_val(Field(v_c, FFI_TENSOR_DATA)));\n  if (kind != kind_b || kind != kind_c) {\n    caml_failwith(\"dtype mismatch\");\n  }\n\n  // Select operation based on dtype\n  matmul_op_t op = NULL;\n  switch (kind) {\n    case CAML_BA_SINT8:\n      op = table->i8;\n      break;\n    case CAML_BA_UINT8:\n      op = table->u8;\n      break;\n    case CAML_BA_SINT16:\n      op = table->i16;\n      break;\n    case CAML_BA_UINT16:\n      op 
= table->u16;\n      break;\n    case CAML_BA_INT32:\n      op = table->i32;\n      break;\n    case CAML_BA_INT64:\n      op = table->i64;\n      break;\n    case NX_BA_UINT32:\n      op = table->u32;\n      break;\n    case NX_BA_UINT64:\n      op = table->u64;\n      break;\n    case CAML_BA_CAML_INT:\n    case CAML_BA_NATIVE_INT:\n      op = table->inat;\n      break;\n    case CAML_BA_FLOAT16:\n      op = table->f16;\n      break;\n    case CAML_BA_FLOAT32:\n      op = table->f32;\n      break;\n    case CAML_BA_FLOAT64:\n      op = table->f64;\n      break;\n    case CAML_BA_COMPLEX32:\n      op = table->c32;\n      break;\n    case CAML_BA_COMPLEX64:\n      op = table->c64;\n      break;\n    case NX_BA_BFLOAT16:\n      op = table->bf16;\n      break;\n    case NX_BA_BOOL:\n      op = table->bool_;\n      break;\n    case NX_BA_INT4:\n      op = table->i4;\n      break;\n    case NX_BA_UINT4:\n      op = table->u4;\n      break;\n    case NX_BA_FP8_E4M3:\n      op = table->f8e4m3;\n      break;\n    case NX_BA_FP8_E5M2:\n      op = table->f8e5m2;\n      break;\n    default:\n      caml_failwith(\"dispatch_matmul_op: unsupported dtype\");\n  }\n\n  if (!op) {\n    char msg[256];\n    snprintf(msg, sizeof(msg), \"%s: operation not supported for dtype\",\n             op_name);\n    caml_failwith(msg);\n  }\n\n  // Perform the operation (no cleanup needed — stack-allocated)\n  op(&A, &B, &C);\n}\n\n// ============================================================================\n// OCaml FFI Stubs\n// ============================================================================\n\nCAMLprim value caml_nx_matmul(value v_a, value v_b, value v_c) {\n  CAMLparam3(v_a, v_b, v_c);\n  dispatch_matmul_op(v_a, v_b, v_c, &matmul_table, \"matmul\");\n  CAMLreturn(Val_unit);\n}\n\n"
  },
  {
    "path": "packages/nx/lib/backend_c/nx_c_memory.c",
    "content": "/*---------------------------------------------------------------------------\n   Copyright (c) 2026 The Raven authors. All rights reserved.\n   SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*/\n\n// Memory operations for nx C backend\n\n#include <caml/alloc.h>\n#include <caml/bigarray.h>\n#include <caml/custom.h>\n#include <caml/fail.h>\n#include <caml/memory.h>\n#include <caml/threads.h>\n\n#include \"nx_c_shared.h\"\n\n// Helper to copy an int array (shape or strides)\nstatic value copy_int_array(value v_old) {\n  CAMLparam1(v_old);\n  int n = Wosize_val(v_old);\n  value v_new = caml_alloc(n, 0);\n  for (int i = 0; i < n; i++) {\n    Store_field(v_new, i, Field(v_old, i));\n  }\n  CAMLreturn(v_new);\n}\n\n// Helper to create a new tensor value\nstatic value create_tensor_value(value v_shape, value v_strides, value v_data,\n                                 long offset) {\n  CAMLparam3(v_shape, v_strides, v_data);\n  CAMLlocal1(v_new);\n  v_new = caml_alloc(4, 0);\n  Store_field(v_new, FFI_TENSOR_DATA, v_data);\n  Store_field(v_new, FFI_TENSOR_SHAPE, v_shape);\n  Store_field(v_new, FFI_TENSOR_STRIDES, v_strides);\n  Store_field(v_new, FFI_TENSOR_OFFSET, Val_long(offset));\n  CAMLreturn(v_new);\n}\n\n// Helper to set standard C-contiguous strides\nstatic void set_standard_strides(int *strides, int *shape, int ndim) {\n  if (ndim == 0) return;\n  int stride = 1;\n  for (int i = ndim - 1; i >= 0; i--) {\n    strides[i] = stride;\n    stride *= shape[i];\n  }\n}\n\n// Helper to check if two ndarrays have the same shape\nstatic bool same_shape(const ndarray_t *a, const ndarray_t *b) {\n  if (a->ndim != b->ndim) return false;\n  for (int i = 0; i < a->ndim; i++) {\n    if (a->shape[i] != b->shape[i]) return false;\n  }\n  return true;\n}\n\n// Helper to check if an ndarray is C-contiguous (row-major, no gaps)\nstatic bool is_c_contiguous(const ndarray_t *nd) {\n  if (nd->ndim == 0) return 
true;\n  long s = 1;\n  for (int i = nd->ndim - 1; i >= 0; i--) {\n    if (nd->strides[i] != s) return false;\n    s *= nd->shape[i];\n  }\n  return true;\n}\n\n// Helper to get element size in bytes based on kind\nstatic long get_element_size(int kind) {\n  switch (kind) {\n    case CAML_BA_SINT8:\n    case CAML_BA_UINT8:\n    case NX_BA_BOOL:\n    case NX_BA_FP8_E4M3:\n    case NX_BA_FP8_E5M2:\n      return 1;\n    case CAML_BA_SINT16:\n    case CAML_BA_UINT16:\n    case CAML_BA_FLOAT16:\n    case NX_BA_BFLOAT16:\n      return 2;\n    case CAML_BA_INT32:\n    case CAML_BA_FLOAT32:\n    case NX_BA_UINT32:\n      return 4;\n    case CAML_BA_COMPLEX32:\n      return 8;  // 2 * float32\n    case CAML_BA_INT64:\n    case CAML_BA_FLOAT64:\n    case NX_BA_UINT64:\n      return 8;\n    case CAML_BA_COMPLEX64:\n      return 16;  // 2 * float64\n    case CAML_BA_NATIVE_INT:\n    case CAML_BA_CAML_INT:\n      return sizeof(intnat);\n    case NX_BA_INT4:\n    case NX_BA_UINT4:\n      // Special handling required; size not used for memcpy\n      caml_failwith(\"get_element_size: int4/uint4 not supported for size\");\n    default:\n      caml_failwith(\"get_element_size: unsupported kind\");\n  }\n  return 0;  // Unreachable\n}\n\n// Core copy function: copies data from src to dst (assumes same shape and\n// dtype)\nstatic void nx_c_copy(const ndarray_t *src, const ndarray_t *dst, int kind) {\n  if (!src || !dst) { fprintf(stderr, \"nx: nx_c_copy: null pointer\\n\"); abort(); }\n  if (!same_shape(src, dst)) { fprintf(stderr, \"nx: nx_c_copy: shape mismatch\\n\"); abort(); }\n\n  long total = total_elements_safe(src);\n  if (total == 0) return;\n\n  if (kind == NX_BA_INT4 || kind == NX_BA_UINT4) {\n    bool signedness = (kind == NX_BA_INT4);\n    nd_copy_iterator_t it;\n    nd_copy_iterator_init(&it, src, dst);\n    do {\n      long src_off, dst_off;\n      nd_copy_iterator_get_offsets(&it, &src_off, &dst_off);\n      long abs_src_off = src->offset + src_off;\n      long 
byte_off = abs_src_off / 2;\n      int nib_off = abs_src_off % 2;\n      uint8_t *sdata = (uint8_t *)src->data;\n      int val;\n      if (nib_off) {\n        // Mask the high nibble first, then arithmetic-shift to sign-extend\n        val = signedness ? (int8_t)(sdata[byte_off] & 0xF0) >> 4\n                         : (sdata[byte_off] >> 4) & 0x0F;\n      } else {\n        val = signedness ? (int8_t)((sdata[byte_off] & 0x0F) << 4) >> 4\n                         : sdata[byte_off] & 0x0F;\n      }\n      long abs_dst_off = dst->offset + dst_off;\n      byte_off = abs_dst_off / 2;\n      nib_off = abs_dst_off % 2;\n      uint8_t *ddata = (uint8_t *)dst->data;\n      uint8_t nib = (uint8_t)val & 0x0F;\n      if (nib_off) {\n        ddata[byte_off] = (ddata[byte_off] & 0x0F) | (nib << 4);\n      } else {\n        ddata[byte_off] = (ddata[byte_off] & 0xF0) | nib;\n      }\n    } while (nd_copy_iterator_next(&it));\n    nd_copy_iterator_destroy(&it);\n  } else {\n    long elsize = get_element_size(kind);\n    // Cannot use memcpy if src has broadcasts (zero strides)\n    bool src_has_broadcast = false;\n    for (int i = 0; i < src->ndim; i++) {\n      if (src->strides[i] == 0 && src->shape[i] > 1) {\n        src_has_broadcast = true;\n        break;\n      }\n    }\n    bool cont = !src_has_broadcast && is_c_contiguous(src) && is_c_contiguous(dst);\n    if (cont) {\n      memcpy((char *)dst->data + dst->offset * elsize,\n             (char *)src->data + src->offset * elsize, total * elsize);\n    } else {\n      nd_copy_iterator_t it;\n      nd_copy_iterator_init(&it, src, dst);\n      do {\n        long src_off, dst_off;\n        nd_copy_iterator_get_offsets(&it, &src_off, &dst_off);\n        memcpy((char *)dst->data + (dst->offset + dst_off) * elsize,\n               (char *)src->data + (src->offset + src_off) * elsize, elsize);\n      } while (nd_copy_iterator_next(&it));\n      nd_copy_iterator_destroy(&it);\n    }\n  }\n}\n\n// FFI stub for assign (in-place copy)\nCAMLprim value caml_nx_assign(value v_src, value v_dst) {\n  CAMLparam2(v_src, 
v_dst);\n  ndarray_t src = extract_ndarray(v_src);\n  ndarray_t dst = extract_ndarray(v_dst);\n  struct caml_ba_array *ba_src =\n      Caml_ba_array_val(Field(v_src, FFI_TENSOR_DATA));\n  struct caml_ba_array *ba_dst =\n      Caml_ba_array_val(Field(v_dst, FFI_TENSOR_DATA));\n  int kind_src = nx_buffer_get_kind(ba_src);\n  int kind_dst = nx_buffer_get_kind(ba_dst);\n  if (kind_src != kind_dst) {\n    cleanup_ndarray(&src);\n    cleanup_ndarray(&dst);\n    caml_failwith(\"caml_nx_assign: dtype mismatch\");\n  }\n  if (!same_shape(&src, &dst)) {\n    cleanup_ndarray(&src);\n    cleanup_ndarray(&dst);\n    caml_failwith(\"caml_nx_assign: shape mismatch\");\n  }\n  caml_enter_blocking_section();\n  nx_c_copy(&src, &dst, kind_src);\n  caml_leave_blocking_section();\n  cleanup_ndarray(&src);\n  cleanup_ndarray(&dst);\n  CAMLreturn(Val_unit);\n}\n\n// Helper to create a contiguous tensor (shared or copied)\nstatic value make_contiguous(value v_src, bool force_copy) {\n  CAMLparam1(v_src);\n  CAMLlocal4(v_new_data, v_new_shape, v_new_strides, v_new);\n  ndarray_t src = extract_ndarray(v_src);\n  struct caml_ba_array *ba = Caml_ba_array_val(Field(v_src, FFI_TENSOR_DATA));\n  int flags = ba->flags;\n  int kind = nx_buffer_get_kind(ba);\n  long total = total_elements_safe(&src);\n  bool can_share = !force_copy && is_c_contiguous(&src) && src.offset == 0;\n  if (can_share) {\n    v_new_data = Field(v_src, FFI_TENSOR_DATA);\n    v_new_shape = copy_int_array(Field(v_src, FFI_TENSOR_SHAPE));\n    v_new_strides = copy_int_array(Field(v_src, FFI_TENSOR_STRIDES));\n    v_new = create_tensor_value(v_new_shape, v_new_strides, v_new_data, 0);\n  } else {\n    intnat dims[1];\n    dims[0] = (intnat)total;\n    if (kind == NX_BA_INT4 || kind == NX_BA_UINT4) {\n      dims[0] = (intnat)((total + 1) / 2);\n      flags = (flags & ~CAML_BA_KIND_MASK) |\n              CAML_BA_UINT8;  // Use byte array for packed\n    }\n    v_new_data = caml_ba_alloc(flags, 1, NULL, dims);\n    v_new_shape = 
copy_int_array(Field(v_src, FFI_TENSOR_SHAPE));\n    v_new_strides = caml_alloc(src.ndim, 0);\n    int strides[32];  // Fixed-size stack buffer for C-contiguous strides\n    if (src.ndim > 32) {\n      cleanup_ndarray(&src);\n      caml_failwith(\"make_contiguous: ndim exceeds 32\");\n    }\n    // Calculate C-contiguous strides\n    if (src.ndim > 0) {\n      int stride = 1;\n      for (int i = src.ndim - 1; i >= 0; i--) {\n        strides[i] = stride;\n        stride *= src.shape[i];\n      }\n    }\n    for (int i = 0; i < src.ndim; i++) {\n      Store_field(v_new_strides, i, Val_long(strides[i]));\n    }\n    ndarray_t dst = {0};\n    dst.data = Caml_ba_data_val(v_new_data);\n    dst.ndim = src.ndim;\n    dst.shape = src.shape;  // Safe to reuse: dst is a temporary view\n    dst.strides = strides;\n    dst.offset = 0;\n    caml_enter_blocking_section();\n    nx_c_copy(&src, &dst, kind);\n    caml_leave_blocking_section();\n    v_new = create_tensor_value(v_new_shape, v_new_strides, v_new_data, 0);\n  }\n  cleanup_ndarray(&src);\n  CAMLreturn(v_new);\n}\n\n// FFI stub for copy (always own buffer)\nCAMLprim value caml_nx_copy(value v_src) {\n  return make_contiguous(v_src, true);\n}\n\n// FFI stub for contiguous (may share buffer)\nCAMLprim value caml_nx_contiguous(value v_src) {\n  return make_contiguous(v_src, false);\n}\n"
  },
  {
    "path": "packages/nx/lib/backend_c/nx_c_qr.c",
    "content": "/*---------------------------------------------------------------------------\n   Copyright (c) 2026 The Raven authors. All rights reserved.\n   SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*/\n\n// QR decomposition implementations\n#include <caml/alloc.h>\n#include <caml/bigarray.h>\n#include <caml/custom.h>\n#include <caml/fail.h>\n#include <caml/memory.h>\n#include <caml/threads.h>\n#include <complex.h>\n#include <float.h>\n#include <lapacke.h>\n\n#include \"nx_c_shared.h\"\n\n// Machine epsilon for float32 and float64\n#define NX_EPS32 FLT_EPSILON\n#define NX_EPS64 DBL_EPSILON\n\n// Helper functions for packing/unpacking matrices\nstatic void nx_pack_f32(float* dst, const float* src, int m, int n,\n                        int stride_row, int stride_col) {\n  for (int i = 0; i < m; i++) {\n    for (int j = 0; j < n; j++) {\n      dst[i * n + j] = src[i * stride_row + j * stride_col];\n    }\n  }\n}\n\nstatic void nx_unpack_f32(float* dst, const float* src, int m, int n,\n                          int stride_row, int stride_col) {\n  for (int i = 0; i < m; i++) {\n    for (int j = 0; j < n; j++) {\n      dst[i * stride_row + j * stride_col] = src[i * n + j];\n    }\n  }\n}\n\nstatic void nx_pack_f64(double* dst, const double* src, int m, int n,\n                        int stride_row, int stride_col) {\n  for (int i = 0; i < m; i++) {\n    for (int j = 0; j < n; j++) {\n      dst[i * n + j] = src[i * stride_row + j * stride_col];\n    }\n  }\n}\n\nstatic void nx_unpack_f64(double* dst, const double* src, int m, int n,\n                          int stride_row, int stride_col) {\n  for (int i = 0; i < m; i++) {\n    for (int j = 0; j < n; j++) {\n      dst[i * stride_row + j * stride_col] = src[i * n + j];\n    }\n  }\n}\n\nstatic void nx_pack_c32(complex32* dst, const complex32* src, int m, int n,\n                        int stride_row, int stride_col) {\n  for (int i = 0; i < m; i++) 
{\n    for (int j = 0; j < n; j++) {\n      dst[i * n + j] = src[i * stride_row + j * stride_col];\n    }\n  }\n}\n\nstatic void nx_unpack_c32(complex32* dst, const complex32* src, int m, int n,\n                          int stride_row, int stride_col) {\n  for (int i = 0; i < m; i++) {\n    for (int j = 0; j < n; j++) {\n      dst[i * stride_row + j * stride_col] = src[i * n + j];\n    }\n  }\n}\n\nstatic void nx_pack_c64(complex64* dst, const complex64* src, int m, int n,\n                        int stride_row, int stride_col) {\n  for (int i = 0; i < m; i++) {\n    for (int j = 0; j < n; j++) {\n      dst[i * n + j] = src[i * stride_row + j * stride_col];\n    }\n  }\n}\n\nstatic void nx_unpack_c64(complex64* dst, const complex64* src, int m, int n,\n                          int stride_row, int stride_col) {\n  for (int i = 0; i < m; i++) {\n    for (int j = 0; j < n; j++) {\n      dst[i * stride_row + j * stride_col] = src[i * n + j];\n    }\n  }\n}\n\nstatic void qr_decompose_float32(float* a, float* q, float* r, int m, int n,\n                                 int reduced) {\n  const int k = reduced ? (m < n ? m : n) : m;\n  const int minmn = m < n ? m : n;\n  const int lda = k > n ? 
k : n;  // Leading dimension must be >= max(k, n)\n\n  // LAPACK destroys the input matrix, so we need to make a copy with proper size\n  float* a_copy = (float*)calloc(m * lda, sizeof(float));\n  if (!a_copy) return;\n\n  // Copy input matrix to a_copy (only the m×n part)\n  for (int i = 0; i < m; i++) {\n    for (int j = 0; j < n; j++) {\n      a_copy[i * lda + j] = a[i * n + j];\n    }\n  }\n\n  // Allocate workspace for Householder reflectors\n  float* tau = (float*)malloc(minmn * sizeof(float));\n  if (!tau) {\n    free(a_copy);\n    return;\n  }\n\n  // Step 1: QR factorization using Householder reflectors\n  lapack_int info = LAPACKE_sgeqrf(LAPACK_ROW_MAJOR, m, n, a_copy, lda, tau);\n  if (info != 0) {\n    free(a_copy);\n    free(tau);\n    return;\n  }\n\n  // Extract R from the upper triangular part of a_copy\n  for (int i = 0; i < m; i++) {\n    for (int j = 0; j < n; j++) {\n      r[i * n + j] = (i <= j) ? a_copy[i * lda + j] : 0.0f;\n    }\n  }\n\n  // Step 2: Generate Q from the Householder reflectors\n  info = LAPACKE_sorgqr(LAPACK_ROW_MAJOR, m, k, minmn, a_copy, lda, tau);\n  if (info != 0) {\n    free(a_copy);\n    free(tau);\n    return;\n  }\n\n  // Copy Q to the output (only the first k columns)\n  for (int i = 0; i < m; i++) {\n    for (int j = 0; j < k; j++) {\n      q[i * k + j] = a_copy[i * lda + j];\n    }\n  }\n\n  free(a_copy);\n  free(tau);\n}\n\nstatic void qr_decompose_float64(double* a, double* q, double* r, int m, int n,\n                                 int reduced) {\n  const int k = reduced ? (m < n ? m : n) : m;\n  const int minmn = m < n ? m : n;\n  const int lda = k > n ? 
k : n;  // Leading dimension must be >= max(k, n)\n\n  // LAPACK destroys the input matrix, so we need to make a copy with proper size\n  double* a_copy = (double*)calloc(m * lda, sizeof(double));\n  if (!a_copy) return;\n\n  // Copy input matrix to a_copy (only the m×n part)\n  for (int i = 0; i < m; i++) {\n    for (int j = 0; j < n; j++) {\n      a_copy[i * lda + j] = a[i * n + j];\n    }\n  }\n\n  // Allocate workspace for Householder reflectors\n  double* tau = (double*)malloc(minmn * sizeof(double));\n  if (!tau) {\n    free(a_copy);\n    return;\n  }\n\n  // Step 1: QR factorization using Householder reflectors\n  lapack_int info = LAPACKE_dgeqrf(LAPACK_ROW_MAJOR, m, n, a_copy, lda, tau);\n  if (info != 0) {\n    free(a_copy);\n    free(tau);\n    return;\n  }\n\n  // Extract R from the upper triangular part of a_copy\n  for (int i = 0; i < m; i++) {\n    for (int j = 0; j < n; j++) {\n      r[i * n + j] = (i <= j) ? a_copy[i * lda + j] : 0.0;\n    }\n  }\n\n  // Step 2: Generate Q from the Householder reflectors\n  info = LAPACKE_dorgqr(LAPACK_ROW_MAJOR, m, k, minmn, a_copy, lda, tau);\n  if (info != 0) {\n    free(a_copy);\n    free(tau);\n    return;\n  }\n\n  // Copy Q to the output (only the first k columns)\n  for (int i = 0; i < m; i++) {\n    for (int j = 0; j < k; j++) {\n      q[i * k + j] = a_copy[i * lda + j];\n    }\n  }\n\n  free(a_copy);\n  free(tau);\n}\n\nstatic void qr_decompose_complex32(complex32* a, complex32* q, complex32* r,\n                                   int m, int n, int reduced) {\n  const int k = reduced ? (m < n ? m : n) : m;\n  const int minmn = m < n ? m : n;\n  const int lda = k > n ? 
k : n;  // Leading dimension must be >= max(k, n)\n\n  // LAPACK destroys the input matrix, so we need to make a copy with proper size\n  complex32* a_copy = (complex32*)calloc(m * lda, sizeof(complex32));\n  if (!a_copy) return;\n\n  // Copy input matrix to a_copy (only the m×n part)\n  for (int i = 0; i < m; i++) {\n    for (int j = 0; j < n; j++) {\n      a_copy[i * lda + j] = a[i * n + j];\n    }\n  }\n\n  // Allocate workspace for Householder reflectors\n  complex32* tau = (complex32*)malloc(minmn * sizeof(complex32));\n  if (!tau) {\n    free(a_copy);\n    return;\n  }\n\n  // Step 1: QR factorization using Householder reflectors\n  lapack_int info = LAPACKE_cgeqrf(LAPACK_ROW_MAJOR, m, n, a_copy, lda, tau);\n  if (info != 0) {\n    free(a_copy);\n    free(tau);\n    return;\n  }\n\n  // Extract R from the upper triangular part of a_copy\n  for (int i = 0; i < m; i++) {\n    for (int j = 0; j < n; j++) {\n      r[i * n + j] = (i <= j) ? a_copy[i * lda + j] : 0.0f + 0.0f * I;\n    }\n  }\n\n  // Step 2: Generate Q from the Householder reflectors\n  info = LAPACKE_cungqr(LAPACK_ROW_MAJOR, m, k, minmn, a_copy, lda, tau);\n  if (info != 0) {\n    free(a_copy);\n    free(tau);\n    return;\n  }\n\n  // Copy Q to the output (only the first k columns)\n  for (int i = 0; i < m; i++) {\n    for (int j = 0; j < k; j++) {\n      q[i * k + j] = a_copy[i * lda + j];\n    }\n  }\n\n  free(a_copy);\n  free(tau);\n}\n\nstatic void qr_decompose_complex64(complex64* a, complex64* q, complex64* r,\n                                   int m, int n, int reduced) {\n  const int k = reduced ? (m < n ? m : n) : m;\n  const int minmn = m < n ? m : n;\n  const int lda = k > n ? 
k : n;  // Leading dimension must be >= max(k, n)\n\n  // LAPACK destroys the input matrix, so we need to make a copy with proper size\n  complex64* a_copy = (complex64*)calloc(m * lda, sizeof(complex64));\n  if (!a_copy) return;\n\n  // Copy input matrix to a_copy (only the m×n part)\n  for (int i = 0; i < m; i++) {\n    for (int j = 0; j < n; j++) {\n      a_copy[i * lda + j] = a[i * n + j];\n    }\n  }\n\n  // Allocate workspace for Householder reflectors\n  complex64* tau = (complex64*)malloc(minmn * sizeof(complex64));\n  if (!tau) {\n    free(a_copy);\n    return;\n  }\n\n  // Step 1: QR factorization using Householder reflectors\n  lapack_int info = LAPACKE_zgeqrf(LAPACK_ROW_MAJOR, m, n, a_copy, lda, tau);\n  if (info != 0) {\n    free(a_copy);\n    free(tau);\n    return;\n  }\n\n  // Extract R from the upper triangular part of a_copy\n  for (int i = 0; i < m; i++) {\n    for (int j = 0; j < n; j++) {\n      r[i * n + j] = (i <= j) ? a_copy[i * lda + j] : 0.0 + 0.0 * I;\n    }\n  }\n\n  // Step 2: Generate Q from the Householder reflectors\n  info = LAPACKE_zungqr(LAPACK_ROW_MAJOR, m, k, minmn, a_copy, lda, tau);\n  if (info != 0) {\n    free(a_copy);\n    free(tau);\n    return;\n  }\n\n  // Copy Q to the output (only the first k columns)\n  for (int i = 0; i < m; i++) {\n    for (int j = 0; j < k; j++) {\n      q[i * k + j] = a_copy[i * lda + j];\n    }\n  }\n\n  free(a_copy);\n  free(tau);\n}\n\nstatic int qr_decompose_float16(uint16_t* a, uint16_t* q, uint16_t* r, int m,\n                                int n, int reduced) {\n  float* a_float = (float*)malloc(m * n * sizeof(float));\n  int k = reduced ? (m < n ? 
m : n) : m;\n  float* q_float = (float*)malloc(m * k * sizeof(float));\n  float* r_float = (float*)malloc(m * n * sizeof(float));\n  if (!a_float || !q_float || !r_float) {\n    free(a_float);\n    free(q_float);\n    free(r_float);\n    return -1;\n  }\n  for (int i = 0; i < m * n; i++) a_float[i] = half_to_float(a[i]);\n  qr_decompose_float32(a_float, q_float, r_float, m, n, reduced);\n  for (int i = 0; i < m * k; i++) q[i] = float_to_half(q_float[i]);\n  for (int i = 0; i < m * n; i++) r[i] = float_to_half(r_float[i]);\n  free(a_float);\n  free(q_float);\n  free(r_float);\n  return 0;\n}\n\nstatic int qr_decompose_bfloat16(caml_ba_bfloat16* a, caml_ba_bfloat16* q,\n                                 caml_ba_bfloat16* r, int m, int n,\n                                 int reduced) {\n  float* a_float = (float*)malloc(m * n * sizeof(float));\n  int k = reduced ? (m < n ? m : n) : m;\n  float* q_float = (float*)malloc(m * k * sizeof(float));\n  float* r_float = (float*)malloc(m * n * sizeof(float));\n  if (!a_float || !q_float || !r_float) {\n    free(a_float);\n    free(q_float);\n    free(r_float);\n    return -1;\n  }\n  for (int i = 0; i < m * n; i++) a_float[i] = bfloat16_to_float(a[i]);\n  qr_decompose_float32(a_float, q_float, r_float, m, n, reduced);\n  for (int i = 0; i < m * k; i++) q[i] = float_to_bfloat16(q_float[i]);\n  for (int i = 0; i < m * n; i++) r[i] = float_to_bfloat16(r_float[i]);\n  free(a_float);\n  free(q_float);\n  free(r_float);\n  return 0;\n}\n\nstatic int qr_decompose_f8e4m3(caml_ba_fp8_e4m3* a, caml_ba_fp8_e4m3* q,\n                               caml_ba_fp8_e4m3* r, int m, int n, int reduced) {\n  float* a_float = (float*)malloc(m * n * sizeof(float));\n  int k = reduced ? (m < n ? 
m : n) : m;\n  float* q_float = (float*)malloc(m * k * sizeof(float));\n  float* r_float = (float*)malloc(m * n * sizeof(float));\n  if (!a_float || !q_float || !r_float) {\n    free(a_float);\n    free(q_float);\n    free(r_float);\n    return -1;\n  }\n  for (int i = 0; i < m * n; i++) a_float[i] = fp8_e4m3_to_float(a[i]);\n  qr_decompose_float32(a_float, q_float, r_float, m, n, reduced);\n  for (int i = 0; i < m * k; i++) q[i] = float_to_fp8_e4m3(q_float[i]);\n  for (int i = 0; i < m * n; i++) r[i] = float_to_fp8_e4m3(r_float[i]);\n  free(a_float);\n  free(q_float);\n  free(r_float);\n  return 0;\n}\n\nstatic int qr_decompose_f8e5m2(caml_ba_fp8_e5m2* a, caml_ba_fp8_e5m2* q,\n                               caml_ba_fp8_e5m2* r, int m, int n, int reduced) {\n  float* a_float = (float*)malloc(m * n * sizeof(float));\n  int k = reduced ? (m < n ? m : n) : m;\n  float* q_float = (float*)malloc(m * k * sizeof(float));\n  float* r_float = (float*)malloc(m * n * sizeof(float));\n  if (!a_float || !q_float || !r_float) {\n    free(a_float);\n    free(q_float);\n    free(r_float);\n    return -1;\n  }\n  for (int i = 0; i < m * n; i++) a_float[i] = fp8_e5m2_to_float(a[i]);\n  qr_decompose_float32(a_float, q_float, r_float, m, n, reduced);\n  for (int i = 0; i < m * k; i++) q[i] = float_to_fp8_e5m2(q_float[i]);\n  for (int i = 0; i < m * n; i++) r[i] = float_to_fp8_e5m2(r_float[i]);\n  free(a_float);\n  free(q_float);\n  free(r_float);\n  return 0;\n}\n\n\nCAMLprim value caml_nx_op_qr(value v_in, value v_q, value v_r,\n                             value v_reduced) {\n  CAMLparam4(v_in, v_q, v_r, v_reduced);\n  int reduced = Int_val(v_reduced);\n  ndarray_t in = extract_ndarray(v_in);\n  ndarray_t q_nd = extract_ndarray(v_q);\n  ndarray_t r_nd = extract_ndarray(v_r);\n  struct caml_ba_array* ba_in = Caml_ba_array_val(Field(v_in, FFI_TENSOR_DATA));\n  struct caml_ba_array* ba_q = Caml_ba_array_val(Field(v_q, FFI_TENSOR_DATA));\n  struct caml_ba_array* ba_r = 
Caml_ba_array_val(Field(v_r, FFI_TENSOR_DATA));\n  int kind = nx_buffer_get_kind(ba_in);\n  if (in.ndim < 2) {\n    cleanup_ndarray(&in);\n    cleanup_ndarray(&q_nd);\n    cleanup_ndarray(&r_nd);\n    caml_failwith(\"qr: input must have at least 2 dimensions\");\n  }\n  int m = in.shape[in.ndim - 2];\n  int n = in.shape[in.ndim - 1];\n  int k = reduced ? (m < n ? m : n) : m;\n  int rows_r = reduced ? k : m;\n  int batch_size = 1;\n  for (int i = 0; i < in.ndim - 2; i++) {\n    batch_size *= in.shape[i];\n  }\n  int s_in_row = in.strides[in.ndim - 2];\n  int s_in_col = in.strides[in.ndim - 1];\n  int s_q_row = q_nd.strides[q_nd.ndim - 2];\n  int s_q_col = q_nd.strides[q_nd.ndim - 1];\n  int s_r_row = r_nd.strides[r_nd.ndim - 2];\n  int s_r_col = r_nd.strides[r_nd.ndim - 1];\n  caml_enter_blocking_section();\n  for (int b = 0; b < batch_size; b++) {\n    size_t off_in = in.offset;\n    size_t off_q = q_nd.offset;\n    size_t off_r = r_nd.offset;\n    if (in.ndim > 2) {\n      int remaining = b;\n      for (int i = in.ndim - 3; i >= 0; i--) {\n        int coord = remaining % in.shape[i];\n        remaining /= in.shape[i];\n        off_in += coord * in.strides[i];\n        off_q += coord * q_nd.strides[i];\n        off_r += coord * r_nd.strides[i];\n      }\n    }\n    int status = 0;\n    switch (kind) {\n      case CAML_BA_FLOAT32: {\n        float* base_in = (float*)ba_in->data + off_in;\n        float* base_q = (float*)ba_q->data + off_q;\n        float* base_r = (float*)ba_r->data + off_r;\n        float* A = (float*)malloc((size_t)m * n * sizeof(float));\n        float* Q = (float*)malloc((size_t)m * k * sizeof(float));\n        float* R = (float*)malloc((size_t)m * n * sizeof(float));\n        nx_pack_f32(A, base_in, m, n, s_in_row, s_in_col);\n        qr_decompose_float32(A, Q, R, m, n, reduced);\n        nx_unpack_f32(base_q, Q, m, k, s_q_row, s_q_col);\n        nx_unpack_f32(base_r, R, rows_r, n, s_r_row, s_r_col);\n        free(A);\n        free(Q);\n        
free(R);\n        break;\n      }\n      case CAML_BA_FLOAT64: {\n        double* base_in = (double*)ba_in->data + off_in;\n        double* base_q = (double*)ba_q->data + off_q;\n        double* base_r = (double*)ba_r->data + off_r;\n        double* A = (double*)malloc((size_t)m * n * sizeof(double));\n        double* Q = (double*)malloc((size_t)m * k * sizeof(double));\n        double* R = (double*)malloc((size_t)m * n * sizeof(double));\n        nx_pack_f64(A, base_in, m, n, s_in_row, s_in_col);\n        qr_decompose_float64(A, Q, R, m, n, reduced);\n        nx_unpack_f64(base_q, Q, m, k, s_q_row, s_q_col);\n        nx_unpack_f64(base_r, R, rows_r, n, s_r_row, s_r_col);\n        free(A);\n        free(Q);\n        free(R);\n        break;\n      }\n      case CAML_BA_COMPLEX32: {\n        complex32* base_in = (complex32*)ba_in->data + off_in;\n        complex32* base_q = (complex32*)ba_q->data + off_q;\n        complex32* base_r = (complex32*)ba_r->data + off_r;\n        complex32* A = (complex32*)malloc((size_t)m * n * sizeof(complex32));\n        complex32* Q = (complex32*)malloc((size_t)m * k * sizeof(complex32));\n        complex32* R = (complex32*)malloc((size_t)m * n * sizeof(complex32));\n        nx_pack_c32(A, base_in, m, n, s_in_row, s_in_col);\n        qr_decompose_complex32(A, Q, R, m, n, reduced);\n        nx_unpack_c32(base_q, Q, m, k, s_q_row, s_q_col);\n        nx_unpack_c32(base_r, R, rows_r, n, s_r_row, s_r_col);\n        free(A);\n        free(Q);\n        free(R);\n        break;\n      }\n      case CAML_BA_COMPLEX64: {\n        complex64* base_in = (complex64*)ba_in->data + off_in;\n        complex64* base_q = (complex64*)ba_q->data + off_q;\n        complex64* base_r = (complex64*)ba_r->data + off_r;\n        complex64* A = (complex64*)malloc((size_t)m * n * sizeof(complex64));\n        complex64* Q = (complex64*)malloc((size_t)m * k * sizeof(complex64));\n        complex64* R = (complex64*)malloc((size_t)m * n * sizeof(complex64));\n        
nx_pack_c64(A, base_in, m, n, s_in_row, s_in_col);\n        qr_decompose_complex64(A, Q, R, m, n, reduced);\n        nx_unpack_c64(base_q, Q, m, k, s_q_row, s_q_col);\n        nx_unpack_c64(base_r, R, rows_r, n, s_r_row, s_r_col);\n        free(A);\n        free(Q);\n        free(R);\n        break;\n      }\n      case CAML_BA_FLOAT16: {\n        uint16_t* base_in = (uint16_t*)ba_in->data + off_in;\n        uint16_t* base_q = (uint16_t*)ba_q->data + off_q;\n        uint16_t* base_r = (uint16_t*)ba_r->data + off_r;\n        uint16_t* A = (uint16_t*)malloc((size_t)m * n * sizeof(uint16_t));\n        uint16_t* Q = (uint16_t*)malloc((size_t)m * k * sizeof(uint16_t));\n        uint16_t* R = (uint16_t*)malloc((size_t)m * n * sizeof(uint16_t));\n        for (int i = 0; i < m; i++) {\n          for (int j = 0; j < n; j++) {\n            A[i * n + j] = base_in[i * s_in_row + j * s_in_col];\n          }\n        }\n        status = qr_decompose_float16(A, Q, R, m, n, reduced);\n        if (status == 0) {\n          for (int i = 0; i < m; i++) {\n            for (int j = 0; j < k; j++) {\n              base_q[i * s_q_row + j * s_q_col] = Q[i * k + j];\n            }\n          }\n          for (int i = 0; i < rows_r; i++) {\n            for (int j = 0; j < n; j++) {\n              base_r[i * s_r_row + j * s_r_col] = R[i * n + j];\n            }\n          }\n        }\n        free(A);\n        free(Q);\n        free(R);\n        break;\n      }\n      case NX_BA_BFLOAT16: {\n        caml_ba_bfloat16* base_in = (caml_ba_bfloat16*)ba_in->data + off_in;\n        caml_ba_bfloat16* base_q = (caml_ba_bfloat16*)ba_q->data + off_q;\n        caml_ba_bfloat16* base_r = (caml_ba_bfloat16*)ba_r->data + off_r;\n        caml_ba_bfloat16* A =\n            (caml_ba_bfloat16*)malloc((size_t)m * n * sizeof(caml_ba_bfloat16));\n        caml_ba_bfloat16* Q =\n            (caml_ba_bfloat16*)malloc((size_t)m * k * sizeof(caml_ba_bfloat16));\n        caml_ba_bfloat16* R =\n            
(caml_ba_bfloat16*)malloc((size_t)m * n * sizeof(caml_ba_bfloat16));\n        for (int i = 0; i < m; i++) {\n          for (int j = 0; j < n; j++) {\n            A[i * n + j] = base_in[i * s_in_row + j * s_in_col];\n          }\n        }\n        status = qr_decompose_bfloat16(A, Q, R, m, n, reduced);\n        if (status == 0) {\n          for (int i = 0; i < m; i++) {\n            for (int j = 0; j < k; j++) {\n              base_q[i * s_q_row + j * s_q_col] = Q[i * k + j];\n            }\n          }\n          for (int i = 0; i < rows_r; i++) {\n            for (int j = 0; j < n; j++) {\n              base_r[i * s_r_row + j * s_r_col] = R[i * n + j];\n            }\n          }\n        }\n        free(A);\n        free(Q);\n        free(R);\n        break;\n      }\n      case NX_BA_FP8_E4M3: {\n        caml_ba_fp8_e4m3* base_in = (caml_ba_fp8_e4m3*)ba_in->data + off_in;\n        caml_ba_fp8_e4m3* base_q = (caml_ba_fp8_e4m3*)ba_q->data + off_q;\n        caml_ba_fp8_e4m3* base_r = (caml_ba_fp8_e4m3*)ba_r->data + off_r;\n        caml_ba_fp8_e4m3* A =\n            (caml_ba_fp8_e4m3*)malloc((size_t)m * n * sizeof(caml_ba_fp8_e4m3));\n        caml_ba_fp8_e4m3* Q =\n            (caml_ba_fp8_e4m3*)malloc((size_t)m * k * sizeof(caml_ba_fp8_e4m3));\n        caml_ba_fp8_e4m3* R =\n            (caml_ba_fp8_e4m3*)malloc((size_t)m * n * sizeof(caml_ba_fp8_e4m3));\n        for (int i = 0; i < m; i++) {\n          for (int j = 0; j < n; j++) {\n            A[i * n + j] = base_in[i * s_in_row + j * s_in_col];\n          }\n        }\n        status = qr_decompose_f8e4m3(A, Q, R, m, n, reduced);\n        if (status == 0) {\n          for (int i = 0; i < m; i++) {\n            for (int j = 0; j < k; j++) {\n              base_q[i * s_q_row + j * s_q_col] = Q[i * k + j];\n            }\n          }\n          for (int i = 0; i < rows_r; i++) {\n            for (int j = 0; j < n; j++) {\n              base_r[i * s_r_row + j * s_r_col] = R[i * n + j];\n            }\n          }\n 
       }\n        free(A);\n        free(Q);\n        free(R);\n        break;\n      }\n      case NX_BA_FP8_E5M2: {\n        caml_ba_fp8_e5m2* base_in = (caml_ba_fp8_e5m2*)ba_in->data + off_in;\n        caml_ba_fp8_e5m2* base_q = (caml_ba_fp8_e5m2*)ba_q->data + off_q;\n        caml_ba_fp8_e5m2* base_r = (caml_ba_fp8_e5m2*)ba_r->data + off_r;\n        caml_ba_fp8_e5m2* A =\n            (caml_ba_fp8_e5m2*)malloc((size_t)m * n * sizeof(caml_ba_fp8_e5m2));\n        caml_ba_fp8_e5m2* Q =\n            (caml_ba_fp8_e5m2*)malloc((size_t)m * k * sizeof(caml_ba_fp8_e5m2));\n        caml_ba_fp8_e5m2* R =\n            (caml_ba_fp8_e5m2*)malloc((size_t)m * n * sizeof(caml_ba_fp8_e5m2));\n        for (int i = 0; i < m; i++) {\n          for (int j = 0; j < n; j++) {\n            A[i * n + j] = base_in[i * s_in_row + j * s_in_col];\n          }\n        }\n        status = qr_decompose_f8e5m2(A, Q, R, m, n, reduced);\n        if (status == 0) {\n          for (int i = 0; i < m; i++) {\n            for (int j = 0; j < k; j++) {\n              base_q[i * s_q_row + j * s_q_col] = Q[i * k + j];\n            }\n          }\n          for (int i = 0; i < rows_r; i++) {\n            for (int j = 0; j < n; j++) {\n              base_r[i * s_r_row + j * s_r_col] = R[i * n + j];\n            }\n          }\n        }\n        free(A);\n        free(Q);\n        free(R);\n        break;\n      }\n      default:\n        caml_leave_blocking_section();\n        cleanup_ndarray(&in);\n        cleanup_ndarray(&q_nd);\n        cleanup_ndarray(&r_nd);\n        caml_failwith(\"qr: unsupported dtype\");\n    }\n    if (status != 0) {\n      caml_leave_blocking_section();\n      cleanup_ndarray(&in);\n      cleanup_ndarray(&q_nd);\n      cleanup_ndarray(&r_nd);\n      caml_failwith(\"qr: decomposition failed\");\n    }\n  }\n  caml_leave_blocking_section();\n  cleanup_ndarray(&in);\n  cleanup_ndarray(&q_nd);\n  cleanup_ndarray(&r_nd);\n  CAMLreturn(Val_unit);\n}\n"
  },
  {
    "path": "packages/nx/lib/backend_c/nx_c_random.c",
    "content": "/*---------------------------------------------------------------------------\n   Copyright (c) 2026 The Raven authors. All rights reserved.\n   SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*/\n\n// PRNG operations for nx C backend\n\n#include <caml/alloc.h>\n#include <caml/bigarray.h>\n#include <caml/custom.h>\n#include <caml/fail.h>\n#include <caml/memory.h>\n#include <caml/threads.h>\n#include <stdint.h>\n#include <string.h>\n\n#include \"nx_c_shared.h\"\n\n// Type definitions for binary operations (reused structure from binary ops)\ntypedef void (*binary_op_t)(const ndarray_t *, const ndarray_t *, ndarray_t *);\n\n// Dispatch table for each type (only int32 supported for threefry)\ntypedef struct {\n  binary_op_t i8, u8, i16, u16, i32, i64, u32, u64, inat;\n  binary_op_t f16, f32, f64;\n  binary_op_t c32, c64;\n  binary_op_t bf16, bool_, i4, u4, f8e4m3, f8e5m2;\n} binary_op_table;\n\n// Threefry2x32 definitions\ntypedef uint32_t u32_t;\n\ntypedef struct {\n  u32_t v[2];\n} tfry_ctr_t;\n\ntypedef tfry_ctr_t tfry_key_t;\n\n#define ROTL_32(x, r) (((x) << (r)) | ((x) >> (32u - (r))))\n\nstatic tfry_ctr_t threefry2x32(tfry_key_t key, tfry_ctr_t ctr) {\n  tfry_ctr_t X;\n  u32_t ks[3];\n  ks[0] = key.v[0];\n  ks[1] = key.v[1];\n  ks[2] = 0x1BD11BDA ^ ks[0] ^ ks[1];\n\n  u32_t X0 = ctr.v[0] + ks[0];\n  u32_t X1 = ctr.v[1] + ks[1];\n\n  // Random123 Threefry2x32: 8 rotation constants, one rotation per round.\n  const int rots[8] = {13, 15, 26, 6, 17, 29, 16, 24};\n\n  for (int r = 0; r < 20; r++) {\n    X0 += X1;\n    X1 = ROTL_32(X1, rots[r % 8]);\n    X1 ^= X0;\n    if ((r + 1) % 4 == 0) {\n      int s = (r + 1) / 4;\n      X0 += ks[s % 3];\n      X1 += ks[(s + 1) % 3] + (u32_t)s;\n    }\n  }\n\n  X.v[0] = X0;\n  X.v[1] = X1;\n  return X;\n}\n\n// Threefry implementation for int32 (only supported type)\nstatic void nx_c_threefry_i32(const ndarray_t *key_p, const ndarray_t *ctr_p,\n           
                   ndarray_t *out_p) {\n  if (!key_p || !ctr_p || !out_p) {\n    fprintf(stderr, \"nx: nx_c_threefry_i32: null pointer\\n\");\n    abort();\n  }\n\n  ndarray_t key = *key_p;\n  ndarray_t ctr = *ctr_p;\n  ndarray_t out = *out_p;\n\n  // Dimension check already done before blocking section in dispatch_binary_op\n\n  long total_vectors = total_elements_safe(&key) / 2;\n  if (total_vectors == 0) return;\n\n  long last_stride_key = key.strides[key.ndim - 1];\n  long last_stride_ctr = ctr.strides[ctr.ndim - 1];\n  long last_stride_out = out.strides[out.ndim - 1];\n\n  int prefix_ndim = key.ndim - 1;\n  key.ndim = prefix_ndim;\n  ctr.ndim = prefix_ndim;\n  out.ndim = prefix_ndim;\n\n  if (is_fully_contiguous(key_p, ctr_p, out_p) &&\n      key_p->strides[key_p->ndim - 1] == 1 &&\n      ctr_p->strides[ctr_p->ndim - 1] == 1 &&\n      out_p->strides[out_p->ndim - 1] == 1) {\n    _Pragma(\n        \"omp parallel for simd if(total_vectors > 1000)\") for (long i = 0;\n                                                               i <\n                                                               total_vectors;\n                                                               i++) {\n      long off = i * 2;\n      tfry_key_t k;\n      tfry_ctr_t c;\n      int32_t *key_data = (int32_t *)key.data;\n      int32_t *ctr_data = (int32_t *)ctr.data;\n      int32_t *out_data = (int32_t *)out.data;\n      k.v[0] = (u32_t)key_data[key.offset + off];\n      k.v[1] = (u32_t)key_data[key.offset + off + 1];\n      c.v[0] = (u32_t)ctr_data[ctr.offset + off];\n      c.v[1] = (u32_t)ctr_data[ctr.offset + off + 1];\n      tfry_ctr_t res = threefry2x32(k, c);\n      out_data[out.offset + off] = (int32_t)res.v[0];\n      out_data[out.offset + off + 1] = (int32_t)res.v[1];\n    }\n  } else {\n    nd_iterator_t it;\n    nd_iterator_init_safe(&it, &key, &ctr, &out);\n    do {\n      long key_base, ctr_base, out_base;\n      nd_iterator_get_offsets(&it, &key_base, &ctr_base, &out_base);\n 
     long key_off0 = key.offset + key_base;\n      long key_off1 = key_off0 + last_stride_key;\n      long ctr_off0 = ctr.offset + ctr_base;\n      long ctr_off1 = ctr_off0 + last_stride_ctr;\n      long out_off0 = out.offset + out_base;\n      long out_off1 = out_off0 + last_stride_out;\n      tfry_key_t k;\n      tfry_ctr_t c;\n      int32_t *key_data = (int32_t *)key.data;\n      int32_t *ctr_data = (int32_t *)ctr.data;\n      int32_t *out_data = (int32_t *)out.data;\n      k.v[0] = (u32_t)key_data[key_off0];\n      k.v[1] = (u32_t)key_data[key_off1];\n      c.v[0] = (u32_t)ctr_data[ctr_off0];\n      c.v[1] = (u32_t)ctr_data[ctr_off1];\n      tfry_ctr_t res = threefry2x32(k, c);\n      out_data[out_off0] = (int32_t)res.v[0];\n      out_data[out_off1] = (int32_t)res.v[1];\n    } while (nd_iterator_next(&it));\n    nd_iterator_destroy(&it);\n  }\n}\n\n// Build dispatch table (i32 and u32 share the same bit-level 32-bit kernel)\nstatic const binary_op_table threefry_table = {.i8 = NULL,\n                                               .u8 = NULL,\n                                               .i16 = NULL,\n                                               .u16 = NULL,\n                                               .i32 = nx_c_threefry_i32,\n                                               .i64 = NULL,\n                                               .u32 = nx_c_threefry_i32,\n                                               .u64 = NULL,\n                                               .inat = NULL,\n                                               .f16 = NULL,\n                                               .f32 = NULL,\n                                               .f64 = NULL,\n                                               .c32 = NULL,\n                                               .c64 = NULL,\n                                               .bf16 = NULL,\n                                               .bool_ = NULL,\n                                               .i4 = NULL,\n                         
                      .u4 = NULL,\n                                               .f8e4m3 = NULL,\n                                               .f8e5m2 = NULL};\n\n// Reuse dispatch from binary (compatible structure)\nstatic void dispatch_binary_op(value v_x, value v_y, value v_z,\n                               const binary_op_table *table,\n                               const char *op_name) {\n  // Extract ndarrays from FFI tensors\n  ndarray_t x = extract_ndarray(v_x);\n  ndarray_t y = extract_ndarray(v_y);\n  ndarray_t z = extract_ndarray(v_z);\n\n  // Check shapes match\n  if (x.ndim != y.ndim || x.ndim != z.ndim) {\n    cleanup_ndarray(&x);\n    cleanup_ndarray(&y);\n    cleanup_ndarray(&z);\n    caml_failwith(\"shape mismatch\");\n  }\n  for (int i = 0; i < x.ndim; i++) {\n    if (x.shape[i] != y.shape[i] || x.shape[i] != z.shape[i]) {\n      cleanup_ndarray(&x);\n      cleanup_ndarray(&y);\n      cleanup_ndarray(&z);\n      caml_failwith(\"shape mismatch\");\n    }\n  }\n\n  // Get bigarray kind from the data field\n  value v_x_data = Field(v_x, FFI_TENSOR_DATA);\n  value v_y_data = Field(v_y, FFI_TENSOR_DATA);\n  value v_z_data = Field(v_z, FFI_TENSOR_DATA);\n\n  struct caml_ba_array *ba = Caml_ba_array_val(v_x_data);\n  int kind = nx_buffer_get_kind(ba);\n\n  // Check kinds match for y and z\n  int kind_y = nx_buffer_get_kind(Caml_ba_array_val(v_y_data));\n  int kind_z = nx_buffer_get_kind(Caml_ba_array_val(v_z_data));\n  if (kind != kind_y || kind != kind_z) {\n    cleanup_ndarray(&x);\n    cleanup_ndarray(&y);\n    cleanup_ndarray(&z);\n    caml_failwith(\"dtype mismatch\");\n  }\n\n  // Select operation based on dtype\n  binary_op_t op = NULL;\n  switch (kind) {\n    case CAML_BA_SINT8:\n      op = table->i8;\n      break;\n    case CAML_BA_UINT8:\n      op = table->u8;\n      break;\n    case CAML_BA_SINT16:\n      op = table->i16;\n      break;\n    case CAML_BA_UINT16:\n      op = table->u16;\n      break;\n    case CAML_BA_INT32:\n      op = 
table->i32;\n      break;\n    case CAML_BA_INT64:\n      op = table->i64;\n      break;\n    case NX_BA_UINT32:\n      op = table->u32;\n      break;\n    case NX_BA_UINT64:\n      op = table->u64;\n      break;\n    case CAML_BA_CAML_INT:\n    case CAML_BA_NATIVE_INT:\n      op = table->inat;\n      break;\n    case CAML_BA_FLOAT16:\n      op = table->f16;\n      break;\n    case CAML_BA_FLOAT32:\n      op = table->f32;\n      break;\n    case CAML_BA_FLOAT64:\n      op = table->f64;\n      break;\n    case CAML_BA_COMPLEX32:\n      op = table->c32;\n      break;\n    case CAML_BA_COMPLEX64:\n      op = table->c64;\n      break;\n    case NX_BA_BFLOAT16:\n      op = table->bf16;\n      break;\n    case NX_BA_BOOL:\n      op = table->bool_;\n      break;\n    case NX_BA_INT4:\n      op = table->i4;\n      break;\n    case NX_BA_UINT4:\n      op = table->u4;\n      break;\n    case NX_BA_FP8_E4M3:\n      op = table->f8e4m3;\n      break;\n    case NX_BA_FP8_E5M2:\n      op = table->f8e5m2;\n      break;\n    default:\n      cleanup_ndarray(&x);\n      cleanup_ndarray(&y);\n      cleanup_ndarray(&z);\n      caml_failwith(\"dispatch_binary_op: unsupported dtype\");\n  }\n\n  if (!op) {\n    char msg[256];\n    snprintf(msg, sizeof(msg), \"%s: operation not supported for dtype\",\n             op_name);\n    cleanup_ndarray(&x);\n    cleanup_ndarray(&y);\n    cleanup_ndarray(&z);\n    caml_failwith(msg);\n  }\n\n  // For threefry, validate that last dimension is 2 before blocking section\n  if (strcmp(op_name, \"threefry\") == 0) {\n    if (x.ndim < 1 || x.shape[x.ndim - 1] != 2 ||\n        y.shape[y.ndim - 1] != 2 || z.shape[z.ndim - 1] != 2) {\n      cleanup_ndarray(&x);\n      cleanup_ndarray(&y);\n      cleanup_ndarray(&z);\n      caml_failwith(\"threefry: last dimension must be 2\");\n    }\n  }\n\n  // Enter blocking section for potentially long computation\n  caml_enter_blocking_section();\n  op(&x, &y, &z);\n  caml_leave_blocking_section();\n\n  // Clean up if 
heap allocated\n  cleanup_ndarray(&x);\n  cleanup_ndarray(&y);\n  cleanup_ndarray(&z);\n}\n\n// ============================================================================\n// OCaml FFI Stubs\n// ============================================================================\n\nCAMLprim value caml_nx_threefry(value v_x, value v_y, value v_z) {\n  CAMLparam3(v_x, v_y, v_z);\n  dispatch_binary_op(v_x, v_y, v_z, &threefry_table, \"threefry\");\n  CAMLreturn(Val_unit);\n}\n"
  },
  {
    "path": "packages/nx/lib/backend_c/nx_c_reduce.c",
    "content": "/*---------------------------------------------------------------------------\n   Copyright (c) 2026 The Raven authors. All rights reserved.\n   SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*/\n\n// Reduction operations for nx C backend\n\n#include <caml/alloc.h>\n#include <caml/bigarray.h>\n#include <caml/custom.h>\n#include <caml/fail.h>\n#include <caml/memory.h>\n#include <caml/threads.h>\n#include <complex.h>\n#include <math.h>\n#include <stdlib.h>\n#include <string.h>\n\n#include \"nx_c_shared.h\"\n\n// Type definitions for reduction operations\ntypedef void (*reduce_op_t)(const ndarray_t *, ndarray_t *, const int *, int,\n                            bool);\n\n// Dispatch table for each type\ntypedef struct {\n  reduce_op_t i8, u8, i16, u16, i32, i64, u32, u64, inat;\n  reduce_op_t f16, f32, f64;\n  reduce_op_t c32, c64;\n  reduce_op_t bf16, bool_, i4, u4, f8e4m3, f8e5m2;\n} reduce_op_table;\n\n// Forward declarations for optimized fallback symbols generated later\n// (needed so we can call them from fast paths before their definitions)\n// Forward decls for wrapper functions that call the generic (macro-generated)\n// fallback implementations. 
Definitions are placed after the generic impls.\nstatic void nx_c_reduce_sum_f32_generic_wrap(const ndarray_t *, ndarray_t *,\n                                             const int *, int, bool);\nstatic void nx_c_reduce_sum_f64_generic_wrap(const ndarray_t *, ndarray_t *,\n                                             const int *, int, bool);\nstatic void nx_c_reduce_max_f32_generic_wrap(const ndarray_t *, ndarray_t *,\n                                             const int *, int, bool);\nstatic void nx_c_reduce_max_f64_generic_wrap(const ndarray_t *, ndarray_t *,\n                                             const int *, int, bool);\n\n// Helper utilities for optimized paths\nstatic inline bool normalize_and_sort_axes(int ndim, const int *axes,\n                                           int num_axes, int *out_axes) {\n  if (!axes || !out_axes || num_axes <= 0) return false;\n  for (int i = 0; i < num_axes; ++i) {\n    int axis = axes[i];\n    if (axis < 0) axis += ndim;\n    if (axis < 0 || axis >= ndim) return false;\n    out_axes[i] = axis;\n  }\n  for (int i = 0; i < num_axes - 1; ++i) {\n    for (int j = i + 1; j < num_axes; ++j) {\n      if (out_axes[j] < out_axes[i]) {\n        int tmp = out_axes[i];\n        out_axes[i] = out_axes[j];\n        out_axes[j] = tmp;\n      }\n    }\n  }\n  for (int i = 1; i < num_axes; ++i) {\n    if (out_axes[i] == out_axes[i - 1]) return false;\n  }\n  return true;\n}\n\nstatic inline bool axes_are_trailing(int ndim, int num_axes,\n                                     const int *sorted_axes) {\n  for (int i = 0; i < num_axes; ++i) {\n    if (sorted_axes[i] != ndim - num_axes + i) return false;\n  }\n  return true;\n}\n\nstatic inline long product_of_axes(const ndarray_t *input,\n                                   const int *sorted_axes, int num_axes) {\n  long prod = 1;\n  for (int i = 0; i < num_axes; ++i) {\n    long dim = input->shape[sorted_axes[i]];\n    if (dim == 0) return 0;\n    prod *= dim;\n  }\n  return 
prod;\n}\n\nstatic inline void fill_zero_float(float *dst, long count) {\n  if (count <= 0) return;\n  memset(dst, 0, (size_t)count * sizeof(float));\n}\n\nstatic inline void fill_zero_double(double *dst, long count) {\n  if (count <= 0) return;\n  memset(dst, 0, (size_t)count * sizeof(double));\n}\n\nstatic inline bool reduce_sum_single_axis_f32(const ndarray_t *input,\n                                              ndarray_t *output, int axis,\n                                              bool keepdims) {\n  (void)keepdims;\n  if (!is_contiguous(input) || !is_contiguous(output)) return false;\n\n  float *restrict in = (float *)input->data + input->offset;\n  float *restrict out = (float *)output->data + output->offset;\n\n  long axis_size = input->shape[axis];\n  long inner = 1;\n  for (int i = axis + 1; i < input->ndim; ++i) {\n    long dim = input->shape[i];\n    inner *= dim;\n  }\n\n  if (axis_size == 0 || inner == 0) {\n    fill_zero_float(out, total_elements_safe(output));\n    return true;\n  }\n\n  long total = total_elements_safe(input);\n  if (total == 0) {\n    fill_zero_float(out, total_elements_safe(output));\n    return true;\n  }\n\n  long total_out = total_elements_safe(output);\n  if (total_out == 0) return true;\n\n#if defined(_OPENMP)\n#pragma omp parallel for schedule(static) if (total_out > 4096)\n#endif\n  for (long idx = 0; idx < total_out; ++idx) {\n    long outer_idx = inner == 1 ? idx : idx / inner;\n    long inner_idx = inner == 1 ? 
0 : idx % inner;\n    long base = outer_idx * axis_size * inner + inner_idx;\n    float acc = 0.0f;\n    for (long k = 0; k < axis_size; ++k) {\n      acc += in[base + k * inner];\n    }\n    out[idx] = acc;\n  }\n\n  return true;\n}\n\nstatic inline bool reduce_sum_single_axis_f64(const ndarray_t *input,\n                                              ndarray_t *output, int axis,\n                                              bool keepdims) {\n  (void)keepdims;\n  if (!is_contiguous(input) || !is_contiguous(output)) return false;\n\n  double *restrict in = (double *)input->data + input->offset;\n  double *restrict out = (double *)output->data + output->offset;\n\n  long axis_size = input->shape[axis];\n  long inner = 1;\n  for (int i = axis + 1; i < input->ndim; ++i) {\n    long dim = input->shape[i];\n    inner *= dim;\n  }\n\n  if (axis_size == 0 || inner == 0) {\n    fill_zero_double(out, total_elements_safe(output));\n    return true;\n  }\n\n  long total = total_elements_safe(input);\n  if (total == 0) {\n    fill_zero_double(out, total_elements_safe(output));\n    return true;\n  }\n\n  long total_out = total_elements_safe(output);\n  if (total_out == 0) return true;\n\n#if defined(_OPENMP)\n#pragma omp parallel for schedule(static) if (total_out > 4096)\n#endif\n  for (long idx = 0; idx < total_out; ++idx) {\n    long outer_idx = inner == 1 ? idx : idx / inner;\n    long inner_idx = inner == 1 ? 
0 : idx % inner;\n    long base = outer_idx * axis_size * inner + inner_idx;\n    double acc = 0.0;\n    for (long k = 0; k < axis_size; ++k) {\n      acc += in[base + k * inner];\n    }\n    out[idx] = acc;\n  }\n\n  return true;\n}\n\n// Helper functions\nstatic int cmp_int(const void *a, const void *b) {\n  return *(const int *)a - *(const int *)b;\n}\n\nstatic long get_offset(const ndarray_t *nd, const int *coord) {\n  long off = 0;\n  for (int i = 0; i < nd->ndim; ++i) {\n    off += (long)coord[i] * nd->strides[i];\n  }\n  return off;\n}\n\nstatic void get_coord_from_idx(long idx, const ndarray_t *nd, int *coord) {\n  for (int i = nd->ndim - 1; i >= 0; --i) {\n    coord[i] = idx % nd->shape[i];\n    idx /= nd->shape[i];\n  }\n}\n\n// Macro to generate reduction implementation for standard types\n#define REDUCE_OP_IMPL(name, T, suffix, IDENTITY, HAS_IDENTITY, OP)        \\\n  static void nx_c_##name##_##suffix(const ndarray_t *input,               \\\n                                     ndarray_t *output, const int *axes,   \\\n                                     int num_axes, bool keepdims) {        \\\n    if (!input || !output) {                                               \\\n      fprintf(stderr, \"nx: nx_c_\" #name \"_\" #suffix \": null pointer\\n\");   \\\n      abort();                                                             \\\n    }                                                                      \\\n    bool *is_reduced = (bool *)calloc(input->ndim, sizeof(bool));          \\\n    if (!is_reduced) { fprintf(stderr, \"nx: allocation failed\\n\"); abort(); } \\\n    for (int i = 0; i < num_axes; ++i) {                                   \\\n      if (axes[i] < 0 || axes[i] >= input->ndim) {                         \\\n        fprintf(stderr, \"nx: invalid axis\\n\");                             \\\n        abort();                                                           \\\n      }                                                           
         \\\n      is_reduced[axes[i]] = true;                                          \\\n    }                                                                      \\\n    int num_kept = input->ndim - num_axes;                                 \\\n    int *kept_axes = (int *)malloc(num_kept * sizeof(int));                \\\n    if (!kept_axes) {                                                      \\\n      fprintf(stderr, \"nx: allocation failed\\n\");                          \\\n      abort();                                                             \\\n    }                                                                      \\\n    int kk = 0;                                                            \\\n    for (int i = 0; i < input->ndim; ++i) {                                \\\n      if (!is_reduced[i]) kept_axes[kk++] = i;                             \\\n    }                                                                      \\\n    long reduce_prod = 1;                                                  \\\n    bool zero_size = false;                                                \\\n    for (int i = 0; i < num_axes; ++i) {                                   \\\n      long ss = input->shape[axes[i]];                                     \\\n      if (ss == 0) zero_size = true;                                       \\\n      reduce_prod *= ss;                                                   \\\n    }                                                                      \\\n    if (zero_size) reduce_prod = 0;                                        \\\n    if (reduce_prod == 0 && !HAS_IDENTITY) {                               \\\n      free(is_reduced);                                                    \\\n      free(kept_axes);                                                     \\\n      fprintf(stderr, \"nx: zero-size array to reduction operation \" #name  \\\n              \" which has no identity\\n\");                                  \\\n   
   abort();                             \\\n    }                                                                      \\\n    long total_out = total_elements_safe(output);                          \\\n    if (total_out == 0) {                                                  \\\n      free(is_reduced);                                                    \\\n      free(kept_axes);                                                     \\\n      return;                                                              \\\n    }                                                                      \\\n    _Pragma(\"omp parallel for if(total_out > 1000)\") for (long idx = 0;    \\\n                                                          idx < total_out; \\\n                                                          ++idx) {         \\\n      int local_out_coord[MAX_NDIM];                                       \\\n      int local_in_coord[MAX_NDIM];                                        \\\n      int local_reduced_coord[MAX_NDIM];                                   \\\n      get_coord_from_idx(idx, output, local_out_coord);                    \\\n      memset(local_in_coord, 0, input->ndim * sizeof(int));                \\\n      if (keepdims) {                                                      \\\n        for (int d = 0; d < input->ndim; ++d) {                            \\\n          if (!is_reduced[d]) local_in_coord[d] = local_out_coord[d];      \\\n        }                                                                  \\\n      } else {                                                             \\\n        for (int ii = 0; ii < num_kept; ++ii) {                            \\\n          local_in_coord[kept_axes[ii]] = local_out_coord[ii];             \\\n        }                                                                  \\\n      }                                                                    \\\n      T acc;                                              
                 \\\n      if (reduce_prod == 0) {                                              \\\n        acc = IDENTITY;                                                    \\\n      } else {                                                             \\\n        bool first = true;                                                 \\\n        memset(local_reduced_coord, 0, num_axes * sizeof(int));            \\\n        bool inner_done = false;                                           \\\n        while (!inner_done) {                                              \\\n          for (int j = 0; j < num_axes; ++j) {                             \\\n            local_in_coord[axes[j]] = local_reduced_coord[j];              \\\n          }                                                                \\\n          long in_off = input->offset + get_offset(input, local_in_coord); \\\n          T val = ((T *)input->data)[in_off];                              \\\n          if (first) {                                                     \\\n            acc = val;                                                     \\\n            first = false;                                                 \\\n          } else {                                                         \\\n            acc = OP(acc, val);                                            \\\n          }                                                                \\\n          inner_done = true;                                               \\\n          for (int j = num_axes - 1; j >= 0; --j) {                        \\\n            local_reduced_coord[j]++;                                      \\\n            if (local_reduced_coord[j] < input->shape[axes[j]]) {          \\\n              inner_done = false;                                          \\\n              break;                                                       \\\n            }                                                              \\\n    
        local_reduced_coord[j] = 0;                                    \\\n          }                                                                \\\n        }                                                                  \\\n      }                                                                    \\\n      long out_off = output->offset + get_offset(output, local_out_coord); \\\n      ((T *)output->data)[out_off] = acc;                                  \\\n    }                                                                      \\\n    free(is_reduced);                                                      \\\n    free(kept_axes);                                                       \\\n  }\n\n// Macro to generate both for a type\n#define REDUCE_OP_FOR_TYPE(name, T, suffix, IDENTITY, HAS_IDENTITY, OP) \\\n  REDUCE_OP_IMPL(name, T, suffix, IDENTITY, HAS_IDENTITY, OP)\n\n// Low-precision reduce impl\n#define LOW_PREC_REDUCE_OP_IMPL(name, T, suffix, IDENTITY_FLOAT, HAS_IDENTITY, \\\n                                OP_FLOAT, TO_FLOAT, FROM_FLOAT)                \\\n  static void nx_c_##name##_##suffix(const ndarray_t *input,                   \\\n                                     ndarray_t *output, const int *axes,       \\\n                                     int num_axes, bool keepdims) {            \\\n    if (!input || !output) {                                                   \\\n      caml_failwith(\"nx_c_\" #name \"_\" #suffix \": null pointer\");               \\\n    }                                                                          \\\n    bool *is_reduced = (bool *)calloc(input->ndim, sizeof(bool));              \\\n    if (!is_reduced) caml_failwith(\"allocation failed\");                       \\\n    for (int i = 0; i < num_axes; ++i) {                                       \\\n      if (axes[i] < 0 || axes[i] >= input->ndim) {                             \\\n        free(is_reduced);                                                
      \\\n        caml_failwith(\"invalid axis\");                                         \\\n      }                                                                        \\\n      is_reduced[axes[i]] = true;                                              \\\n    }                                                                          \\\n    int num_kept = input->ndim - num_axes;                                     \\\n    int *kept_axes = (int *)malloc(num_kept * sizeof(int));                    \\\n    if (!kept_axes) {                                                          \\\n      free(is_reduced);                                                        \\\n      caml_failwith(\"allocation failed\");                                      \\\n    }                                                                          \\\n    int kk = 0;                                                                \\\n    for (int i = 0; i < input->ndim; ++i) {                                    \\\n      if (!is_reduced[i]) kept_axes[kk++] = i;                                 \\\n    }                                                                          \\\n    long reduce_prod = 1;                                                      \\\n    bool zero_size = false;                                                    \\\n    for (int i = 0; i < num_axes; ++i) {                                       \\\n      long ss = input->shape[axes[i]];                                         \\\n      if (ss == 0) zero_size = true;                                           \\\n      reduce_prod *= ss;                                                       \\\n    }                                                                          \\\n    if (zero_size) reduce_prod = 0;                                            \\\n    if (reduce_prod == 0 && !HAS_IDENTITY) {                                   \\\n      free(is_reduced);                                                      
  \\\n      free(kept_axes);                                                         \\\n      caml_failwith(\"zero-size array to reduction operation \" #name            \\\n                    \" which has no identity\");                                 \\\n    }                                                                          \\\n    long total_out = total_elements_safe(output);                              \\\n    if (total_out == 0) {                                                      \\\n      free(is_reduced);                                                        \\\n      free(kept_axes);                                                         \\\n      return;                                                                  \\\n    }                                                                          \\\n    _Pragma(\"omp parallel for if(total_out > 1000)\") for (long idx = 0;        \\\n                                                          idx < total_out;     \\\n                                                          ++idx) {             \\\n      int local_out_coord[MAX_NDIM];                                           \\\n      int local_in_coord[MAX_NDIM];                                            \\\n      int local_reduced_coord[MAX_NDIM];                                       \\\n      get_coord_from_idx(idx, output, local_out_coord);                        \\\n      memset(local_in_coord, 0, input->ndim * sizeof(int));                    \\\n      if (keepdims) {                                                          \\\n        for (int d = 0; d < input->ndim; ++d) {                                \\\n          if (!is_reduced[d]) local_in_coord[d] = local_out_coord[d];          \\\n        }                                                                      \\\n      } else {                                                                 \\\n        for (int ii = 0; ii < num_kept; ++ii) {                                
\\\n          local_in_coord[kept_axes[ii]] = local_out_coord[ii];                 \\\n        }                                                                      \\\n      }                                                                        \\\n      float acc;                                                               \\\n      if (reduce_prod == 0) {                                                  \\\n        acc = IDENTITY_FLOAT;                                                  \\\n      } else {                                                                 \\\n        bool first = true;                                                     \\\n        memset(local_reduced_coord, 0, num_axes * sizeof(int));                \\\n        bool inner_done = false;                                               \\\n        while (!inner_done) {                                                  \\\n          for (int j = 0; j < num_axes; ++j) {                                 \\\n            local_in_coord[axes[j]] = local_reduced_coord[j];                  \\\n          }                                                                    \\\n          long in_off = input->offset + get_offset(input, local_in_coord);     \\\n          float val = TO_FLOAT(((T *)input->data)[in_off]);                    \\\n          if (first) {                                                         \\\n            acc = val;                                                         \\\n            first = false;                                                     \\\n          } else {                                                             \\\n            acc = OP_FLOAT(acc, val);                                          \\\n          }                                                                    \\\n          inner_done = true;                                                   \\\n          for (int j = num_axes - 1; j >= 0; --j) {                            \\\n    
        local_reduced_coord[j]++;                                          \\\n            if (local_reduced_coord[j] < input->shape[axes[j]]) {              \\\n              inner_done = false;                                              \\\n              break;                                                           \\\n            }                                                                  \\\n            local_reduced_coord[j] = 0;                                        \\\n          }                                                                    \\\n        }                                                                      \\\n      }                                                                        \\\n      long out_off = output->offset + get_offset(output, local_out_coord);     \\\n      ((T *)output->data)[out_off] = FROM_FLOAT(acc);                          \\\n    }                                                                          \\\n    free(is_reduced);                                                          \\\n    free(kept_axes);                                                           \\\n  }\n\n// Int4/Uint4 reduce impl\n#define INT4_REDUCE_IMPL(name, signedness, suffix, IDENTITY, HAS_IDENTITY, OP, \\\n                         CLAMP)                                                \\\n  static void nx_c_##name##_##suffix(const ndarray_t *input,                   \\\n                                     ndarray_t *output, const int *axes,       \\\n                                     int num_axes, bool keepdims) {            \\\n    if (!input || !output) {                                                   \\\n      caml_failwith(\"nx_c_\" #name \"_\" #suffix \": null pointer\");               \\\n    }                                                                          \\\n    bool *is_reduced = (bool *)calloc(input->ndim, sizeof(bool));              \\\n    if (!is_reduced) caml_failwith(\"allocation 
failed\");                       \\\n    for (int i = 0; i < num_axes; ++i) {                                       \\\n      if (axes[i] < 0 || axes[i] >= input->ndim) {                             \\\n        free(is_reduced);                                                      \\\n        caml_failwith(\"invalid axis\");                                         \\\n      }                                                                        \\\n      is_reduced[axes[i]] = true;                                              \\\n    }                                                                          \\\n    int num_kept = input->ndim - num_axes;                                     \\\n    int *kept_axes = (int *)malloc(num_kept * sizeof(int));                    \\\n    if (!kept_axes) {                                                          \\\n      free(is_reduced);                                                        \\\n      caml_failwith(\"allocation failed\");                                      \\\n    }                                                                          \\\n    int kk = 0;                                                                \\\n    for (int i = 0; i < input->ndim; ++i) {                                    \\\n      if (!is_reduced[i]) kept_axes[kk++] = i;                                 \\\n    }                                                                          \\\n    long reduce_prod = 1;                                                      \\\n    bool zero_size = false;                                                    \\\n    for (int i = 0; i < num_axes; ++i) {                                       \\\n      long ss = input->shape[axes[i]];                                         \\\n      if (ss == 0) zero_size = true;                                           \\\n      reduce_prod *= ss;                                                       \\\n    }                                             
                             \\\n    if (zero_size) reduce_prod = 0;                                            \\\n    if (reduce_prod == 0 && !HAS_IDENTITY) {                                   \\\n      free(is_reduced);                                                        \\\n      free(kept_axes);                                                         \\\n      caml_failwith(\"zero-size array to reduction operation \" #name            \\\n                    \" which has no identity\");                                 \\\n    }                                                                          \\\n    long total_out = total_elements_safe(output);                              \\\n    if (total_out == 0) {                                                      \\\n      free(is_reduced);                                                        \\\n      free(kept_axes);                                                         \\\n      return;                                                                  \\\n    }                                                                          \\\n    _Pragma(\"omp parallel for if(total_out > 1000)\") for (long idx = 0;        \\\n                                                          idx < total_out;     \\\n                                                          ++idx) {             \\\n      int *local_out_coord = (int *)calloc(output->ndim, sizeof(int));         \\\n      if (!local_out_coord) { fprintf(stderr, \"nx: allocation failed\\n\"); abort(); } \\\n      int *local_in_coord = (int *)calloc(input->ndim, sizeof(int));           \\\n      if (!local_in_coord) {                                                   \\\n        fprintf(stderr, \"nx: allocation failed\\n\");                            \\\n        abort();                                                               \\\n      }                                                                        \\\n      int *local_reduced_coord = (int 
*)calloc(num_axes, sizeof(int));         \\\n      if (!local_reduced_coord) {                                              \\\n        fprintf(stderr, \"nx: allocation failed\\n\");                            \\\n        abort();                                                               \\\n      }                                                                        \\\n      get_coord_from_idx(idx, output, local_out_coord);                        \\\n      memset(local_in_coord, 0, input->ndim * sizeof(int));                    \\\n      if (keepdims) {                                                          \\\n        for (int d = 0; d < input->ndim; ++d) {                                \\\n          if (!is_reduced[d]) local_in_coord[d] = local_out_coord[d];          \\\n        }                                                                      \\\n      } else {                                                                 \\\n        for (int ii = 0; ii < num_kept; ++ii) {                                \\\n          local_in_coord[kept_axes[ii]] = local_out_coord[ii];                 \\\n        }                                                                      \\\n      }                                                                        \\\n      int acc;                                                                 \\\n      if (reduce_prod == 0) {                                                  \\\n        acc = IDENTITY;                                                        \\\n      } else {                                                                 \\\n        bool first = true;                                                     \\\n        memset(local_reduced_coord, 0, num_axes * sizeof(int));                \\\n        bool inner_done = false;                                               \\\n        while (!inner_done) {                                                  \\\n          for (int j = 0; j < num_axes; 
++j) {                                 \\\n            local_in_coord[axes[j]] = local_reduced_coord[j];                  \\\n          }                                                                    \\\n          long in_off = input->offset + get_offset(input, local_in_coord);     \\\n          long byte_off = in_off / 2;                                          \\\n          int nib_off = in_off % 2;                                            \\\n          uint8_t *in_data = (uint8_t *)input->data;                           \\\n          /* Extract the nibble; signed values are shifted into the high      \\\n             bits and arithmetic-shifted back so bit 3 is sign-extended. */    \\\n          int val =                                                            \\\n              nib_off ? (signedness                                            \\\n                             ? (int8_t)(in_data[byte_off] & 0xF0) >> 4         \\\n                             : (in_data[byte_off] >> 4) & 0x0F)                \\\n                      : (signedness                                            \\\n                             ? (int8_t)((in_data[byte_off] & 0x0F) << 4) >> 4  \\\n                             : in_data[byte_off] & 0x0F);                      \\\n          if (first) {                                                         \\\n            acc = val;                                                         \\\n            first = false;                                                     \\\n          } else {                                                             \\\n            acc = OP(acc, val);                                                \\\n          }                                                                    \\\n          inner_done = true;                                                   \\\n          for (int j = num_axes - 1; j >= 0; --j) {                            \\\n            local_reduced_coord[j]++;                                          \\\n            if (local_reduced_coord[j] < input->shape[axes[j]]) {              \\\n              inner_done = false;                                              \\\n              break;                            
                               \\\n            }                                                                  \\\n            local_reduced_coord[j] = 0;                                        \\\n          }                                                                    \\\n        }                                                                      \\\n      }                                                                        \\\n      long out_off = output->offset + get_offset(output, local_out_coord);     \\\n      long out_byte_off = out_off / 2;                                         \\\n      int out_nib_off = out_off % 2;                                           \\\n      int res = CLAMP(acc);                                                    \\\n      uint8_t nib = (uint8_t)res & 0x0F;                                       \\\n      uint8_t *out_data = (uint8_t *)output->data;                             \\\n      if (out_nib_off) {                                                       \\\n        out_data[out_byte_off] = (out_data[out_byte_off] & 0x0F) | (nib << 4); \\\n      } else {                                                                 \\\n        out_data[out_byte_off] = (out_data[out_byte_off] & 0xF0) | nib;        \\\n      }                                                                        \\\n      free(local_out_coord);                                                   \\\n      free(local_in_coord);                                                    \\\n      free(local_reduced_coord);                                               \\\n    }                                                                          \\\n    free(is_reduced);                                                          \\\n    free(kept_axes);                                                           \\\n  }\n\n// Macro to build dispatch table\n#define BUILD_DISPATCH_TABLE(name)                                             \\\n  static const 
reduce_op_table name##_table = {.i8 = nx_c_##name##_i8,         \\\n                                               .u8 = nx_c_##name##_u8,         \\\n                                               .i16 = nx_c_##name##_i16,       \\\n                                               .u16 = nx_c_##name##_u16,       \\\n                                               .i32 = nx_c_##name##_i32,       \\\n                                               .i64 = nx_c_##name##_i64,       \\\n                                               .u32 = nx_c_##name##_u32,       \\\n                                               .u64 = nx_c_##name##_u64,       \\\n                                               .inat = nx_c_##name##_inat,     \\\n                                               .f16 = nx_c_##name##_f16,       \\\n                                               .f32 = nx_c_##name##_f32,       \\\n                                               .f64 = nx_c_##name##_f64,       \\\n                                               .c32 = nx_c_##name##_c32,       \\\n                                               .c64 = nx_c_##name##_c64,       \\\n                                               .bf16 = nx_c_##name##_bf16,     \\\n                                               .bool_ = nx_c_##name##_bool_,   \\\n                                               .i4 = nx_c_##name##_i4,         \\\n                                               .u4 = nx_c_##name##_u4,         \\\n                                               .f8e4m3 = nx_c_##name##_f8e4m3, \\\n                                               .f8e5m2 = nx_c_##name##_f8e5m2};\n\n// Generate for reduce_sum\n#define SUM_OP(acc, val) ((acc) + (val))\n#define SUM_IDENTITY(T) ((T)0)\n#define SUM_HAS_IDENTITY 1\n#define SUM_OP_FLOAT(acc, val) ((acc) + (val))\n#define SUM_IDENTITY_FLOAT 0.0f\n#define SUM_COMPLEX_IDENTITY (0)\n#define SUM_COMPLEX_OP(acc, val) COMPLEX_ADD(acc, val)\n\nREDUCE_OP_FOR_TYPE(reduce_sum, int8_t, i8, 
SUM_IDENTITY(int8_t),\n                   SUM_HAS_IDENTITY, SUM_OP)\nREDUCE_OP_FOR_TYPE(reduce_sum, uint8_t, u8, SUM_IDENTITY(uint8_t),\n                   SUM_HAS_IDENTITY, SUM_OP)\nREDUCE_OP_FOR_TYPE(reduce_sum, int16_t, i16, SUM_IDENTITY(int16_t),\n                   SUM_HAS_IDENTITY, SUM_OP)\nREDUCE_OP_FOR_TYPE(reduce_sum, uint16_t, u16, SUM_IDENTITY(uint16_t),\n                   SUM_HAS_IDENTITY, SUM_OP)\nREDUCE_OP_FOR_TYPE(reduce_sum, int32_t, i32, SUM_IDENTITY(int32_t),\n                   SUM_HAS_IDENTITY, SUM_OP)\nREDUCE_OP_FOR_TYPE(reduce_sum, int64_t, i64, SUM_IDENTITY(int64_t),\n                   SUM_HAS_IDENTITY, SUM_OP)\nREDUCE_OP_FOR_TYPE(reduce_sum, uint32_t, u32, SUM_IDENTITY(uint32_t),\n                   SUM_HAS_IDENTITY, SUM_OP)\nREDUCE_OP_FOR_TYPE(reduce_sum, uint64_t, u64, SUM_IDENTITY(uint64_t),\n                   SUM_HAS_IDENTITY, SUM_OP)\nREDUCE_OP_FOR_TYPE(reduce_sum, intnat, inat, SUM_IDENTITY(intnat),\n                   SUM_HAS_IDENTITY, SUM_OP)\n// Optimized last-dim contiguous fast paths for f32/f64 reduce_sum\nstatic void nx_c_reduce_sum_f32(const ndarray_t *input, ndarray_t *output,\n                                const int *axes, int num_axes, bool keepdims) {\n  int sorted_axes[MAX_NDIM];\n  bool have_sorted = false;\n  if (num_axes > 0) {\n    have_sorted =\n        normalize_and_sort_axes(input->ndim, axes, num_axes, sorted_axes);\n    if (!have_sorted) {\n      nx_c_reduce_sum_f32_generic_wrap(input, output, axes, num_axes, keepdims);\n      return;\n    }\n  }\n\n  if (num_axes == input->ndim && is_contiguous(input) &&\n      is_contiguous(output)) {\n    long total = total_elements_safe(input);\n    long out_total = total_elements_safe(output);\n    if (out_total == 0) return;\n    float *out = (float *)output->data + output->offset;\n    if (total == 0) {\n      fill_zero_float(out, out_total);\n      return;\n    }\n    float *in = (float *)input->data + input->offset;\n    float result = 0.0f;\n#if 
defined(_OPENMP)\n#pragma omp parallel for reduction(+:result) if (total > 4096)\n#endif\n    for (long i = 0; i < total; ++i) {\n      result += in[i];\n    }\n    out[0] = result;\n    return;\n  }\n\n  if (have_sorted && num_axes == 1) {\n    if (reduce_sum_single_axis_f32(input, output, sorted_axes[0], keepdims))\n      return;\n  }\n\n  if (have_sorted && is_contiguous(input) && is_contiguous(output) &&\n      axes_are_trailing(input->ndim, num_axes, sorted_axes)) {\n    long total = total_elements_safe(input);\n    long K = product_of_axes(input, sorted_axes, num_axes);\n    float *out = (float *)output->data + output->offset;\n    long out_total = total_elements_safe(output);\n    if (K == 0 || total == 0 || out_total == 0) {\n      fill_zero_float(out, out_total);\n      return;\n    }\n    long M = total / K;\n    float *in = (float *)input->data + input->offset;\n#if defined(_OPENMP)\n#pragma omp parallel for if (M > 1024)\n#endif\n    for (long m = 0; m < M; ++m) {\n      const float *chunk = in + (m * K);\n      float acc = 0.0f;\n      for (long p = 0; p < K; ++p) acc += chunk[p];\n      out[m] = acc;\n    }\n    return;\n  }\n\n  nx_c_reduce_sum_f32_generic_wrap(input, output, axes, num_axes, keepdims);\n}\n\nstatic void nx_c_reduce_sum_f64(const ndarray_t *input, ndarray_t *output,\n                                const int *axes, int num_axes, bool keepdims) {\n  int sorted_axes[MAX_NDIM];\n  bool have_sorted = false;\n  if (num_axes > 0) {\n    have_sorted =\n        normalize_and_sort_axes(input->ndim, axes, num_axes, sorted_axes);\n    if (!have_sorted) {\n      nx_c_reduce_sum_f64_generic_wrap(input, output, axes, num_axes, keepdims);\n      return;\n    }\n  }\n\n  if (num_axes == input->ndim && is_contiguous(input) &&\n      is_contiguous(output)) {\n    long total = total_elements_safe(input);\n    long out_total = total_elements_safe(output);\n    if (out_total == 0) return;\n    double *out = (double *)output->data + output->offset;\n    if 
(total == 0) {\n      fill_zero_double(out, out_total);\n      return;\n    }\n    double *in = (double *)input->data + input->offset;\n    double result = 0.0;\n#if defined(_OPENMP)\n#pragma omp parallel for reduction(+:result) if (total > 4096)\n#endif\n    for (long i = 0; i < total; ++i) {\n      result += in[i];\n    }\n    out[0] = result;\n    return;\n  }\n\n  if (have_sorted && num_axes == 1) {\n    if (reduce_sum_single_axis_f64(input, output, sorted_axes[0], keepdims))\n      return;\n  }\n\n  if (have_sorted && is_contiguous(input) && is_contiguous(output) &&\n      axes_are_trailing(input->ndim, num_axes, sorted_axes)) {\n    long total = total_elements_safe(input);\n    long K = product_of_axes(input, sorted_axes, num_axes);\n    double *out = (double *)output->data + output->offset;\n    long out_total = total_elements_safe(output);\n    if (K == 0 || total == 0 || out_total == 0) {\n      fill_zero_double(out, out_total);\n      return;\n    }\n    long M = total / K;\n    double *in = (double *)input->data + input->offset;\n#if defined(_OPENMP)\n#pragma omp parallel for if (M > 1024)\n#endif\n    for (long m = 0; m < M; ++m) {\n      const double *chunk = in + (m * K);\n      double acc = 0.0;\n      for (long p = 0; p < K; ++p) acc += chunk[p];\n      out[m] = acc;\n    }\n    return;\n  }\n\n  nx_c_reduce_sum_f64_generic_wrap(input, output, axes, num_axes, keepdims);\n}\n\n// Provide generic versions under alternate names to call in fallback\n#define nx_c_reduce_sum_f32_generic nx_c_reduce_sum_f32_fallback\n#define nx_c_reduce_sum_f64_generic nx_c_reduce_sum_f64_fallback\n\nREDUCE_OP_FOR_TYPE(reduce_sum, float, f32_fallback, SUM_IDENTITY(float),\n                   SUM_HAS_IDENTITY, SUM_OP)\nREDUCE_OP_FOR_TYPE(reduce_sum, double, f64_fallback, SUM_IDENTITY(double),\n                   SUM_HAS_IDENTITY, SUM_OP)\nREDUCE_OP_FOR_TYPE(reduce_sum, complex32, c32, SUM_COMPLEX_IDENTITY,\n                   SUM_HAS_IDENTITY, 
SUM_COMPLEX_OP)\nREDUCE_OP_FOR_TYPE(reduce_sum, complex64, c64, SUM_COMPLEX_IDENTITY,\n                   SUM_HAS_IDENTITY, SUM_COMPLEX_OP)\nREDUCE_OP_FOR_TYPE(reduce_sum, caml_ba_bool, bool_, SUM_IDENTITY(caml_ba_bool),\n                   SUM_HAS_IDENTITY, SUM_OP)\n\nLOW_PREC_REDUCE_OP_IMPL(reduce_sum, uint16_t, f16, SUM_IDENTITY_FLOAT,\n                        SUM_HAS_IDENTITY, SUM_OP_FLOAT, half_to_float,\n                        float_to_half)\nLOW_PREC_REDUCE_OP_IMPL(reduce_sum, caml_ba_bfloat16, bf16, SUM_IDENTITY_FLOAT,\n                        SUM_HAS_IDENTITY, SUM_OP_FLOAT, bfloat16_to_float,\n                        float_to_bfloat16)\nLOW_PREC_REDUCE_OP_IMPL(reduce_sum, caml_ba_fp8_e4m3, f8e4m3,\n                        SUM_IDENTITY_FLOAT, SUM_HAS_IDENTITY, SUM_OP_FLOAT,\n                        fp8_e4m3_to_float, float_to_fp8_e4m3)\nLOW_PREC_REDUCE_OP_IMPL(reduce_sum, caml_ba_fp8_e5m2, f8e5m2,\n                        SUM_IDENTITY_FLOAT, SUM_HAS_IDENTITY, SUM_OP_FLOAT,\n                        fp8_e5m2_to_float, float_to_fp8_e5m2)\n\nINT4_REDUCE_IMPL(reduce_sum, 1, i4, 0, SUM_HAS_IDENTITY, SUM_OP, CLAMP_I4)\nINT4_REDUCE_IMPL(reduce_sum, 0, u4, 0, SUM_HAS_IDENTITY, SUM_OP, CLAMP_U4)\n// Define wrappers now that generic functions exist\nstatic void nx_c_reduce_sum_f32_generic_wrap(const ndarray_t *input,\n                                             ndarray_t *output,\n                                             const int *axes, int num_axes,\n                                             bool keepdims) {\n  nx_c_reduce_sum_f32_fallback(input, output, axes, num_axes, keepdims);\n}\nstatic void nx_c_reduce_sum_f64_generic_wrap(const ndarray_t *input,\n                                             ndarray_t *output,\n                                             const int *axes, int num_axes,\n                                             bool keepdims) {\n  nx_c_reduce_sum_f64_fallback(input, output, axes, num_axes, keepdims);\n}\n// Build dispatch table 
(bind f32/f64 to optimized versions)\nstatic const reduce_op_table reduce_sum_table = {\n    .i8 = nx_c_reduce_sum_i8,\n    .u8 = nx_c_reduce_sum_u8,\n    .i16 = nx_c_reduce_sum_i16,\n    .u16 = nx_c_reduce_sum_u16,\n    .i32 = nx_c_reduce_sum_i32,\n    .i64 = nx_c_reduce_sum_i64,\n    .u32 = nx_c_reduce_sum_u32,\n    .u64 = nx_c_reduce_sum_u64,\n    .inat = nx_c_reduce_sum_inat,\n    .f16 = nx_c_reduce_sum_f16,\n    .f32 = nx_c_reduce_sum_f32,\n    .f64 = nx_c_reduce_sum_f64,\n    .c32 = nx_c_reduce_sum_c32,\n    .c64 = nx_c_reduce_sum_c64,\n    .bf16 = nx_c_reduce_sum_bf16,\n    .bool_ = nx_c_reduce_sum_bool_,\n    .i4 = nx_c_reduce_sum_i4,\n    .u4 = nx_c_reduce_sum_u4,\n    .f8e4m3 = nx_c_reduce_sum_f8e4m3,\n    .f8e5m2 = nx_c_reduce_sum_f8e5m2};\n\n// Generate for reduce_max\n#define MAX_OP(acc, val) ((acc) > (val) ? (acc) : (val))\n#define MAX_IDENTITY(T) ((T)0)  // unused\n#define MAX_HAS_IDENTITY 0\n/* NaN propagation: if either operand is NaN, return NaN */\n#define MAX_OP_FLOAT(acc, val) \\\n  (isnan(acc) || isnan(val) ? NAN : ((acc) > (val) ? (acc) : (val)))\n#define MAX_IDENTITY_FLOAT 0.0f   // unused\n#define MAX_COMPLEX_IDENTITY (0)  // unused\n#define MAX_COMPLEX_OP(acc, val) complex_max(acc, val)\n#define MAX_COMPLEX64_OP(acc, val) complex64_max(acc, val)\n\n// Optimized last-dim contiguous fast paths for f32/f64 reduce_max\nstatic void nx_c_reduce_max_f32(const ndarray_t *input, ndarray_t *output,\n                                const int *axes, int num_axes, bool keepdims) {\n  int last = input->ndim - 1;\n  if (num_axes == 1 && axes[0] == last && is_contiguous(input) &&\n      is_contiguous(output) && input->ndim >= 1) {\n    long K = input->shape[last];\n    long total = total_elements_safe(input);\n    long M = (K == 0) ? 
0 : (total / K);\n    float *in = (float *)input->data;\n    float *out = (float *)output->data;\n    long in_off = input->offset;\n    long out_off = output->offset;\n    _Pragma(\"omp parallel for if(M > 1024)\")\n    for (long r = 0; r < M; ++r) {\n      const float *row = in + in_off + r * K;\n      float acc = -INFINITY;\n      for (long p = 0; p < K; ++p) {\n        float v = row[p];\n        acc = (isnan(acc) || isnan(v)) ? NAN : (v > acc ? v : acc);\n      }\n      out[out_off + r] = acc;\n    }\n    return;\n  }\n  // Fallback to generic implementation\n  nx_c_reduce_max_f32_generic_wrap(input, output, axes, num_axes, keepdims);\n}\n\nstatic void nx_c_reduce_max_f64(const ndarray_t *input, ndarray_t *output,\n                                const int *axes, int num_axes, bool keepdims) {\n  int last = input->ndim - 1;\n  if (num_axes == 1 && axes[0] == last && is_contiguous(input) &&\n      is_contiguous(output) && input->ndim >= 1) {\n    long K = input->shape[last];\n    long total = total_elements_safe(input);\n    long M = (K == 0) ? 0 : (total / K);\n    double *in = (double *)input->data;\n    double *out = (double *)output->data;\n    long in_off = input->offset;\n    long out_off = output->offset;\n    _Pragma(\"omp parallel for if(M > 1024)\")\n    for (long r = 0; r < M; ++r) {\n      const double *row = in + in_off + r * K;\n      double acc = -INFINITY;\n      for (long p = 0; p < K; ++p) {\n        double v = row[p];\n        acc = (isnan(acc) || isnan(v)) ? NAN : (v > acc ? 
v : acc);\n      }\n      out[out_off + r] = acc;\n    }\n    return;\n  }\n  // Fallback to generic implementation\n  nx_c_reduce_max_f64_generic_wrap(input, output, axes, num_axes, keepdims);\n}\n\n// Provide generic versions under alternate names to call in fallback\n#define nx_c_reduce_max_f32_generic nx_c_reduce_max_f32_fallback\n#define nx_c_reduce_max_f64_generic nx_c_reduce_max_f64_fallback\n\nREDUCE_OP_FOR_TYPE(reduce_max, int8_t, i8, MAX_IDENTITY(int8_t),\n                   MAX_HAS_IDENTITY, MAX_OP)\nREDUCE_OP_FOR_TYPE(reduce_max, uint8_t, u8, MAX_IDENTITY(uint8_t),\n                   MAX_HAS_IDENTITY, MAX_OP)\nREDUCE_OP_FOR_TYPE(reduce_max, int16_t, i16, MAX_IDENTITY(int16_t),\n                   MAX_HAS_IDENTITY, MAX_OP)\nREDUCE_OP_FOR_TYPE(reduce_max, uint16_t, u16, MAX_IDENTITY(uint16_t),\n                   MAX_HAS_IDENTITY, MAX_OP)\nREDUCE_OP_FOR_TYPE(reduce_max, int32_t, i32, MAX_IDENTITY(int32_t),\n                   MAX_HAS_IDENTITY, MAX_OP)\nREDUCE_OP_FOR_TYPE(reduce_max, int64_t, i64, MAX_IDENTITY(int64_t),\n                   MAX_HAS_IDENTITY, MAX_OP)\nREDUCE_OP_FOR_TYPE(reduce_max, uint32_t, u32, MAX_IDENTITY(uint32_t),\n                   MAX_HAS_IDENTITY, MAX_OP)\nREDUCE_OP_FOR_TYPE(reduce_max, uint64_t, u64, MAX_IDENTITY(uint64_t),\n                   MAX_HAS_IDENTITY, MAX_OP)\nREDUCE_OP_FOR_TYPE(reduce_max, intnat, inat, MAX_IDENTITY(intnat),\n                   MAX_HAS_IDENTITY, MAX_OP)\nREDUCE_OP_FOR_TYPE(reduce_max, float, f32_fallback, MAX_IDENTITY(float),\n                   MAX_HAS_IDENTITY, MAX_OP_FLOAT)\nREDUCE_OP_FOR_TYPE(reduce_max, double, f64_fallback, MAX_IDENTITY(double),\n                   MAX_HAS_IDENTITY, MAX_OP_FLOAT)\nREDUCE_OP_FOR_TYPE(reduce_max, complex32, c32, MAX_COMPLEX_IDENTITY,\n                   MAX_HAS_IDENTITY, MAX_COMPLEX_OP)\nREDUCE_OP_FOR_TYPE(reduce_max, complex64, c64, MAX_COMPLEX_IDENTITY,\n                   MAX_HAS_IDENTITY, MAX_COMPLEX64_OP)\nREDUCE_OP_FOR_TYPE(reduce_max, caml_ba_bool, bool_, 
MAX_IDENTITY(caml_ba_bool),\n                   MAX_HAS_IDENTITY, MAX_OP)\n\nLOW_PREC_REDUCE_OP_IMPL(reduce_max, uint16_t, f16, MAX_IDENTITY_FLOAT,\n                        MAX_HAS_IDENTITY, MAX_OP_FLOAT, half_to_float,\n                        float_to_half)\nLOW_PREC_REDUCE_OP_IMPL(reduce_max, caml_ba_bfloat16, bf16, MAX_IDENTITY_FLOAT,\n                        MAX_HAS_IDENTITY, MAX_OP_FLOAT, bfloat16_to_float,\n                        float_to_bfloat16)\nLOW_PREC_REDUCE_OP_IMPL(reduce_max, caml_ba_fp8_e4m3, f8e4m3,\n                        MAX_IDENTITY_FLOAT, MAX_HAS_IDENTITY, MAX_OP_FLOAT,\n                        fp8_e4m3_to_float, float_to_fp8_e4m3)\nLOW_PREC_REDUCE_OP_IMPL(reduce_max, caml_ba_fp8_e5m2, f8e5m2,\n                        MAX_IDENTITY_FLOAT, MAX_HAS_IDENTITY, MAX_OP_FLOAT,\n                        fp8_e5m2_to_float, float_to_fp8_e5m2)\n\nINT4_REDUCE_IMPL(reduce_max, 1, i4, 0, MAX_HAS_IDENTITY, MAX_OP, CLAMP_I4)\nINT4_REDUCE_IMPL(reduce_max, 0, u4, 0, MAX_HAS_IDENTITY, MAX_OP, CLAMP_U4)\n// Define wrappers now that generic functions exist\nstatic void nx_c_reduce_max_f32_generic_wrap(const ndarray_t *input,\n                                             ndarray_t *output,\n                                             const int *axes, int num_axes,\n                                             bool keepdims) {\n  nx_c_reduce_max_f32_fallback(input, output, axes, num_axes, keepdims);\n}\nstatic void nx_c_reduce_max_f64_generic_wrap(const ndarray_t *input,\n                                             ndarray_t *output,\n                                             const int *axes, int num_axes,\n                                             bool keepdims) {\n  nx_c_reduce_max_f64_fallback(input, output, axes, num_axes, keepdims);\n}\n// Build dispatch table (bind f32/f64 to optimized versions)\nstatic const reduce_op_table reduce_max_table = {\n    .i8 = nx_c_reduce_max_i8,\n    .u8 = nx_c_reduce_max_u8,\n    .i16 = nx_c_reduce_max_i16,\n    .u16 = 
nx_c_reduce_max_u16,\n    .i32 = nx_c_reduce_max_i32,\n    .i64 = nx_c_reduce_max_i64,\n    .u32 = nx_c_reduce_max_u32,\n    .u64 = nx_c_reduce_max_u64,\n    .inat = nx_c_reduce_max_inat,\n    .f16 = nx_c_reduce_max_f16,\n    .f32 = nx_c_reduce_max_f32,\n    .f64 = nx_c_reduce_max_f64,\n    .c32 = nx_c_reduce_max_c32,\n    .c64 = nx_c_reduce_max_c64,\n    .bf16 = nx_c_reduce_max_bf16,\n    .bool_ = nx_c_reduce_max_bool_,\n    .i4 = nx_c_reduce_max_i4,\n    .u4 = nx_c_reduce_max_u4,\n    .f8e4m3 = nx_c_reduce_max_f8e4m3,\n    .f8e5m2 = nx_c_reduce_max_f8e5m2};\n\n// Generate for reduce_prod\n#define PROD_OP(acc, val) ((acc) * (val))\n#define PROD_IDENTITY(T) ((T)1)\n#define PROD_HAS_IDENTITY 1\n#define PROD_OP_FLOAT(acc, val) ((acc) * (val))\n#define PROD_IDENTITY_FLOAT 1.0f\n#define PROD_COMPLEX_IDENTITY (1)\n#define PROD_COMPLEX_OP(acc, val) COMPLEX_MUL(acc, val)\n\nREDUCE_OP_FOR_TYPE(reduce_prod, int8_t, i8, PROD_IDENTITY(int8_t),\n                   PROD_HAS_IDENTITY, PROD_OP)\nREDUCE_OP_FOR_TYPE(reduce_prod, uint8_t, u8, PROD_IDENTITY(uint8_t),\n                   PROD_HAS_IDENTITY, PROD_OP)\nREDUCE_OP_FOR_TYPE(reduce_prod, int16_t, i16, PROD_IDENTITY(int16_t),\n                   PROD_HAS_IDENTITY, PROD_OP)\nREDUCE_OP_FOR_TYPE(reduce_prod, uint16_t, u16, PROD_IDENTITY(uint16_t),\n                   PROD_HAS_IDENTITY, PROD_OP)\nREDUCE_OP_FOR_TYPE(reduce_prod, int32_t, i32, PROD_IDENTITY(int32_t),\n                   PROD_HAS_IDENTITY, PROD_OP)\nREDUCE_OP_FOR_TYPE(reduce_prod, int64_t, i64, PROD_IDENTITY(int64_t),\n                   PROD_HAS_IDENTITY, PROD_OP)\nREDUCE_OP_FOR_TYPE(reduce_prod, uint32_t, u32, PROD_IDENTITY(uint32_t),\n                   PROD_HAS_IDENTITY, PROD_OP)\nREDUCE_OP_FOR_TYPE(reduce_prod, uint64_t, u64, PROD_IDENTITY(uint64_t),\n                   PROD_HAS_IDENTITY, PROD_OP)\nREDUCE_OP_FOR_TYPE(reduce_prod, intnat, inat, PROD_IDENTITY(intnat),\n                   PROD_HAS_IDENTITY, PROD_OP)\nREDUCE_OP_FOR_TYPE(reduce_prod, float, f32, PROD_IDENTITY(float),\n                   PROD_HAS_IDENTITY, 
PROD_OP)\nREDUCE_OP_FOR_TYPE(reduce_prod, double, f64, PROD_IDENTITY(double),\n                   PROD_HAS_IDENTITY, PROD_OP)\nREDUCE_OP_FOR_TYPE(reduce_prod, complex32, c32, PROD_COMPLEX_IDENTITY,\n                   PROD_HAS_IDENTITY, PROD_COMPLEX_OP)\nREDUCE_OP_FOR_TYPE(reduce_prod, complex64, c64, PROD_COMPLEX_IDENTITY,\n                   PROD_HAS_IDENTITY, PROD_COMPLEX_OP)\nREDUCE_OP_FOR_TYPE(reduce_prod, caml_ba_bool, bool_,\n                   PROD_IDENTITY(caml_ba_bool), PROD_HAS_IDENTITY, PROD_OP)\n\nLOW_PREC_REDUCE_OP_IMPL(reduce_prod, uint16_t, f16, PROD_IDENTITY_FLOAT,\n                        PROD_HAS_IDENTITY, PROD_OP_FLOAT, half_to_float,\n                        float_to_half)\nLOW_PREC_REDUCE_OP_IMPL(reduce_prod, caml_ba_bfloat16, bf16,\n                        PROD_IDENTITY_FLOAT, PROD_HAS_IDENTITY, PROD_OP_FLOAT,\n                        bfloat16_to_float, float_to_bfloat16)\nLOW_PREC_REDUCE_OP_IMPL(reduce_prod, caml_ba_fp8_e4m3, f8e4m3,\n                        PROD_IDENTITY_FLOAT, PROD_HAS_IDENTITY, PROD_OP_FLOAT,\n                        fp8_e4m3_to_float, float_to_fp8_e4m3)\nLOW_PREC_REDUCE_OP_IMPL(reduce_prod, caml_ba_fp8_e5m2, f8e5m2,\n                        PROD_IDENTITY_FLOAT, PROD_HAS_IDENTITY, PROD_OP_FLOAT,\n                        fp8_e5m2_to_float, float_to_fp8_e5m2)\n\nINT4_REDUCE_IMPL(reduce_prod, 1, i4, 1, PROD_HAS_IDENTITY, PROD_OP, CLAMP_I4)\nINT4_REDUCE_IMPL(reduce_prod, 0, u4, 1, PROD_HAS_IDENTITY, PROD_OP, CLAMP_U4)\nBUILD_DISPATCH_TABLE(reduce_prod)\n\n// Generate for reduce_min\n#define MIN_OP(acc, val) ((acc) < (val) ? (acc) : (val))\n#define MIN_IDENTITY(T) ((T)0)  // unused\n#define MIN_HAS_IDENTITY 0\n/* NaN propagation: if either operand is NaN, return NaN */\n#define MIN_OP_FLOAT(acc, val) \\\n  (isnan(acc) || isnan(val) ? NAN : ((acc) < (val) ? 
(acc) : (val)))\n#define MIN_IDENTITY_FLOAT 0.0f   // unused\n#define MIN_COMPLEX_IDENTITY (0)  // unused\n#define MIN_COMPLEX_OP(acc, val) complex_min(acc, val)\n#define MIN_COMPLEX64_OP(acc, val) complex64_min(acc, val)\n\n// Forward declarations for optimized f32/f64 implementations\nstatic void nx_c_reduce_min_f32_generic_wrap(const ndarray_t *, ndarray_t *,\n                                             const int *, int, bool);\nstatic void nx_c_reduce_min_f64_generic_wrap(const ndarray_t *, ndarray_t *,\n                                             const int *, int, bool);\n\n// Optimized last-dim contiguous fast paths for f32/f64 reduce_min\nstatic void nx_c_reduce_min_f32(const ndarray_t *input, ndarray_t *output,\n                                const int *axes, int num_axes, bool keepdims) {\n  int last = input->ndim - 1;\n  if (num_axes == 1 && axes[0] == last && is_contiguous(input) &&\n      is_contiguous(output) && input->ndim >= 1) {\n    long K = input->shape[last];\n    long total = total_elements_safe(input);\n    long M = (K == 0) ? 0 : (total / K);\n    float *in = (float *)input->data;\n    float *out = (float *)output->data;\n    long in_off = input->offset;\n    long out_off = output->offset;\n    _Pragma(\"omp parallel for if(M > 1024)\")\n    for (long r = 0; r < M; ++r) {\n      const float *row = in + in_off + r * K;\n      float acc = INFINITY;\n      for (long p = 0; p < K; ++p) {\n        float v = row[p];\n        acc = (isnan(acc) || isnan(v)) ? NAN : (v < acc ? 
v : acc);\n      }\n      out[out_off + r] = acc;\n    }\n    return;\n  }\n  // Fallback to generic implementation\n  nx_c_reduce_min_f32_generic_wrap(input, output, axes, num_axes, keepdims);\n}\n\nstatic void nx_c_reduce_min_f64(const ndarray_t *input, ndarray_t *output,\n                                const int *axes, int num_axes, bool keepdims) {\n  int last = input->ndim - 1;\n  if (num_axes == 1 && axes[0] == last && is_contiguous(input) &&\n      is_contiguous(output) && input->ndim >= 1) {\n    long K = input->shape[last];\n    long total = total_elements_safe(input);\n    long M = (K == 0) ? 0 : (total / K);\n    double *in = (double *)input->data;\n    double *out = (double *)output->data;\n    long in_off = input->offset;\n    long out_off = output->offset;\n    _Pragma(\"omp parallel for if(M > 1024)\")\n    for (long r = 0; r < M; ++r) {\n      const double *row = in + in_off + r * K;\n      double acc = INFINITY;\n      for (long p = 0; p < K; ++p) {\n        double v = row[p];\n        acc = (isnan(acc) || isnan(v)) ? NAN : (v < acc ? 
v : acc);\n      }\n      out[out_off + r] = acc;\n    }\n    return;\n  }\n  // Fallback to generic implementation\n  nx_c_reduce_min_f64_generic_wrap(input, output, axes, num_axes, keepdims);\n}\n\n// Provide generic versions under alternate names to call in fallback\n#define nx_c_reduce_min_f32_generic nx_c_reduce_min_f32_fallback\n#define nx_c_reduce_min_f64_generic nx_c_reduce_min_f64_fallback\n\nREDUCE_OP_FOR_TYPE(reduce_min, int8_t, i8, MIN_IDENTITY(int8_t),\n                   MIN_HAS_IDENTITY, MIN_OP)\nREDUCE_OP_FOR_TYPE(reduce_min, uint8_t, u8, MIN_IDENTITY(uint8_t),\n                   MIN_HAS_IDENTITY, MIN_OP)\nREDUCE_OP_FOR_TYPE(reduce_min, int16_t, i16, MIN_IDENTITY(int16_t),\n                   MIN_HAS_IDENTITY, MIN_OP)\nREDUCE_OP_FOR_TYPE(reduce_min, uint16_t, u16, MIN_IDENTITY(uint16_t),\n                   MIN_HAS_IDENTITY, MIN_OP)\nREDUCE_OP_FOR_TYPE(reduce_min, int32_t, i32, MIN_IDENTITY(int32_t),\n                   MIN_HAS_IDENTITY, MIN_OP)\nREDUCE_OP_FOR_TYPE(reduce_min, int64_t, i64, MIN_IDENTITY(int64_t),\n                   MIN_HAS_IDENTITY, MIN_OP)\nREDUCE_OP_FOR_TYPE(reduce_min, uint32_t, u32, MIN_IDENTITY(uint32_t),\n                   MIN_HAS_IDENTITY, MIN_OP)\nREDUCE_OP_FOR_TYPE(reduce_min, uint64_t, u64, MIN_IDENTITY(uint64_t),\n                   MIN_HAS_IDENTITY, MIN_OP)\nREDUCE_OP_FOR_TYPE(reduce_min, intnat, inat, MIN_IDENTITY(intnat),\n                   MIN_HAS_IDENTITY, MIN_OP)\nREDUCE_OP_FOR_TYPE(reduce_min, float, f32_fallback, MIN_IDENTITY(float),\n                   MIN_HAS_IDENTITY, MIN_OP_FLOAT)\nREDUCE_OP_FOR_TYPE(reduce_min, double, f64_fallback, MIN_IDENTITY(double),\n                   MIN_HAS_IDENTITY, MIN_OP_FLOAT)\nREDUCE_OP_FOR_TYPE(reduce_min, complex32, c32, MIN_COMPLEX_IDENTITY,\n                   MIN_HAS_IDENTITY, MIN_COMPLEX_OP)\nREDUCE_OP_FOR_TYPE(reduce_min, complex64, c64, MIN_COMPLEX_IDENTITY,\n                   MIN_HAS_IDENTITY, MIN_COMPLEX64_OP)\nREDUCE_OP_FOR_TYPE(reduce_min, caml_ba_bool, bool_, 
MIN_IDENTITY(caml_ba_bool),\n                   MIN_HAS_IDENTITY, MIN_OP)\n\nLOW_PREC_REDUCE_OP_IMPL(reduce_min, uint16_t, f16, MIN_IDENTITY_FLOAT,\n                        MIN_HAS_IDENTITY, MIN_OP_FLOAT, half_to_float,\n                        float_to_half)\nLOW_PREC_REDUCE_OP_IMPL(reduce_min, caml_ba_bfloat16, bf16, MIN_IDENTITY_FLOAT,\n                        MIN_HAS_IDENTITY, MIN_OP_FLOAT, bfloat16_to_float,\n                        float_to_bfloat16)\nLOW_PREC_REDUCE_OP_IMPL(reduce_min, caml_ba_fp8_e4m3, f8e4m3,\n                        MIN_IDENTITY_FLOAT, MIN_HAS_IDENTITY, MIN_OP_FLOAT,\n                        fp8_e4m3_to_float, float_to_fp8_e4m3)\nLOW_PREC_REDUCE_OP_IMPL(reduce_min, caml_ba_fp8_e5m2, f8e5m2,\n                        MIN_IDENTITY_FLOAT, MIN_HAS_IDENTITY, MIN_OP_FLOAT,\n                        fp8_e5m2_to_float, float_to_fp8_e5m2)\n\nINT4_REDUCE_IMPL(reduce_min, 1, i4, 0, MIN_HAS_IDENTITY, MIN_OP, CLAMP_I4)\nINT4_REDUCE_IMPL(reduce_min, 0, u4, 0, MIN_HAS_IDENTITY, MIN_OP, CLAMP_U4)\n// Define wrappers now that generic functions exist\nstatic void nx_c_reduce_min_f32_generic_wrap(const ndarray_t *input,\n                                             ndarray_t *output,\n                                             const int *axes, int num_axes,\n                                             bool keepdims) {\n  nx_c_reduce_min_f32_fallback(input, output, axes, num_axes, keepdims);\n}\nstatic void nx_c_reduce_min_f64_generic_wrap(const ndarray_t *input,\n                                             ndarray_t *output,\n                                             const int *axes, int num_axes,\n                                             bool keepdims) {\n  nx_c_reduce_min_f64_fallback(input, output, axes, num_axes, keepdims);\n}\n// Build dispatch table (bind f32/f64 to optimized versions)\nstatic const reduce_op_table reduce_min_table = {\n    .i8 = nx_c_reduce_min_i8,\n    .u8 = nx_c_reduce_min_u8,\n    .i16 = nx_c_reduce_min_i16,\n    .u16 = 
nx_c_reduce_min_u16,\n    .i32 = nx_c_reduce_min_i32,\n    .i64 = nx_c_reduce_min_i64,\n    .u32 = nx_c_reduce_min_u32,\n    .u64 = nx_c_reduce_min_u64,\n    .inat = nx_c_reduce_min_inat,\n    .f16 = nx_c_reduce_min_f16,\n    .f32 = nx_c_reduce_min_f32,\n    .f64 = nx_c_reduce_min_f64,\n    .c32 = nx_c_reduce_min_c32,\n    .c64 = nx_c_reduce_min_c64,\n    .bf16 = nx_c_reduce_min_bf16,\n    .bool_ = nx_c_reduce_min_bool_,\n    .i4 = nx_c_reduce_min_i4,\n    .u4 = nx_c_reduce_min_u4,\n    .f8e4m3 = nx_c_reduce_min_f8e4m3,\n    .f8e5m2 = nx_c_reduce_min_f8e5m2};\n\n// Generic dispatch function for reduction operations\nstatic void dispatch_reduce_op(value v_input, value v_output, int *axes,\n                               int num_axes, bool keepdims,\n                               const reduce_op_table *table,\n                               const char *op_name) {\n  ndarray_t input = extract_ndarray(v_input);\n  ndarray_t output = extract_ndarray(v_output);\n\n  // Validate axes before entering blocking section\n  // (Cannot call caml_failwith from within blocking section)\n  for (int i = 0; i < num_axes; ++i) {\n    if (axes[i] < 0 || axes[i] >= input.ndim) {\n      char msg[256];\n      snprintf(msg, sizeof(msg), \"%s: axis %d out of bounds for tensor of rank %d\",\n               op_name, axes[i], input.ndim);\n      cleanup_ndarray(&input);\n      cleanup_ndarray(&output);\n      caml_failwith(msg);\n    }\n  }\n\n  // Sort axes for consistency\n  qsort(axes, num_axes, sizeof(int), cmp_int);\n\n  // Check dtypes match\n  value v_input_data = Field(v_input, FFI_TENSOR_DATA);\n  value v_output_data = Field(v_output, FFI_TENSOR_DATA);\n  struct caml_ba_array *ba_input = Caml_ba_array_val(v_input_data);\n  struct caml_ba_array *ba_output = Caml_ba_array_val(v_output_data);\n  int kind_input = nx_buffer_get_kind(ba_input);\n  int kind_output = nx_buffer_get_kind(ba_output);\n  if (kind_input != kind_output) {\n    cleanup_ndarray(&input);\n    cleanup_ndarray(&output);\n    caml_failwith(\"dtype mismatch\");\n  }\n\n  // Select 
operation based on dtype\n  reduce_op_t op = NULL;\n  switch (kind_input) {\n    case CAML_BA_SINT8:\n      op = table->i8;\n      break;\n    case CAML_BA_UINT8:\n      op = table->u8;\n      break;\n    case CAML_BA_SINT16:\n      op = table->i16;\n      break;\n    case CAML_BA_UINT16:\n      op = table->u16;\n      break;\n    case CAML_BA_INT32:\n      op = table->i32;\n      break;\n    case CAML_BA_INT64:\n      op = table->i64;\n      break;\n    case NX_BA_UINT32:\n      op = table->u32;\n      break;\n    case NX_BA_UINT64:\n      op = table->u64;\n      break;\n    case CAML_BA_CAML_INT:\n    case CAML_BA_NATIVE_INT:\n      op = table->inat;\n      break;\n    case CAML_BA_FLOAT16:\n      op = table->f16;\n      break;\n    case CAML_BA_FLOAT32:\n      op = table->f32;\n      break;\n    case CAML_BA_FLOAT64:\n      op = table->f64;\n      break;\n    case CAML_BA_COMPLEX32:\n      op = table->c32;\n      break;\n    case CAML_BA_COMPLEX64:\n      op = table->c64;\n      break;\n    case NX_BA_BFLOAT16:\n      op = table->bf16;\n      break;\n    case NX_BA_BOOL:\n      op = table->bool_;\n      break;\n    case NX_BA_INT4:\n      op = table->i4;\n      break;\n    case NX_BA_UINT4:\n      op = table->u4;\n      break;\n    case NX_BA_FP8_E4M3:\n      op = table->f8e4m3;\n      break;\n    case NX_BA_FP8_E5M2:\n      op = table->f8e5m2;\n      break;\n    default:\n      cleanup_ndarray(&input);\n      cleanup_ndarray(&output);\n      caml_failwith(\"dispatch_reduce_op: unsupported dtype\");\n  }\n\n  if (!op) {\n    char msg[256];\n    snprintf(msg, sizeof(msg), \"%s: operation not supported for dtype\",\n             op_name);\n    cleanup_ndarray(&input);\n    cleanup_ndarray(&output);\n    caml_failwith(msg);\n  }\n\n  // Enter blocking section for potentially long computation\n  caml_enter_blocking_section();\n  op(&input, &output, axes, num_axes, keepdims);\n  caml_leave_blocking_section();\n\n  // Clean up if heap allocated\n  
cleanup_ndarray(&input);\n  cleanup_ndarray(&output);\n}\n\n// ============================================================================\n// OCaml FFI Stubs\n// ============================================================================\n\n// Macro to define FFI stub for each operation\n#define DEFINE_FFI_STUB(name)                                                  \\\n  CAMLprim value caml_nx_##name(value v_input, value v_output, value v_axes,   \\\n                                value v_keepdims) {                            \\\n    CAMLparam4(v_input, v_output, v_axes, v_keepdims);                         \\\n    int num_axes = Wosize_val(v_axes);                                         \\\n    int *axes = (int *)malloc(num_axes * sizeof(int));                         \\\n    if (!axes) caml_failwith(\"allocation failed\");                             \\\n    for (int i = 0; i < num_axes; ++i) {                                       \\\n      axes[i] = Int_val(Field(v_axes, i));                                     \\\n    }                                                                          \\\n    bool keepdims = Bool_val(v_keepdims);                                      \\\n    dispatch_reduce_op(v_input, v_output, axes, num_axes, keepdims,            \\\n                       &name##_table, #name);                                  \\\n    free(axes);                                                                \\\n    CAMLreturn(Val_unit);                                                      \\\n  }\n\nDEFINE_FFI_STUB(reduce_sum)\nDEFINE_FFI_STUB(reduce_max)\nDEFINE_FFI_STUB(reduce_prod)\nDEFINE_FFI_STUB(reduce_min)\n"
  },
  {
    "path": "packages/nx/lib/backend_c/nx_c_scan.c",
    "content": "/*---------------------------------------------------------------------------\n   Copyright (c) 2026 The Raven authors. All rights reserved.\n   SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*/\n\n#include \"nx_c_shared.h\"\n\ntypedef void (*scan_fn_t)(const ndarray_t *, ndarray_t *, int axis);\n\ntypedef struct {\n  int axis_len;\n  long axis_stride_in;\n  long axis_stride_out;\n  int outer_dims;\n  int *outer_axes;\n  int *outer_coord;\n  const char *op_name;\n} scan_plan_t;\n\nstatic scan_plan_t scan_prepare(const ndarray_t *input, const ndarray_t *output,\n                                int axis, const char *op_name) {\n  scan_plan_t plan;\n  plan.axis_len = 0;\n  plan.axis_stride_in = 0;\n  plan.axis_stride_out = 0;\n  plan.outer_dims = 0;\n  plan.outer_axes = NULL;\n  plan.outer_coord = NULL;\n  plan.op_name = op_name;\n\n  if (!input || !output) {\n    fprintf(stderr, \"nx: associative_scan: null tensor\\n\");\n    abort();\n  }\n\n  if (input->ndim != output->ndim) {\n    fprintf(stderr, \"nx: %s: rank mismatch\\n\", op_name);\n    abort();\n  }\n\n  if (input->ndim <= 0) {\n    fprintf(stderr, \"nx: %s: tensor rank must be >= 1\\n\", op_name);\n    abort();\n  }\n\n  if (axis < 0 || axis >= input->ndim) {\n    fprintf(stderr, \"nx: %s: axis %d out of bounds for rank %d\\n\",\n            op_name, axis, input->ndim);\n    abort();\n  }\n\n  for (int i = 0; i < input->ndim; ++i) {\n    if (input->shape[i] != output->shape[i]) {\n      fprintf(stderr, \"nx: %s: shape mismatch on dim %d\\n\", op_name, i);\n      abort();\n    }\n  }\n\n  plan.axis_len = input->shape[axis];\n  plan.axis_stride_in = input->strides[axis];\n  plan.axis_stride_out = output->strides[axis];\n  plan.outer_dims = input->ndim - 1;\n\n  if (plan.outer_dims > 0) {\n    plan.outer_axes = (int *)malloc(plan.outer_dims * sizeof(int));\n    plan.outer_coord = (int *)calloc(plan.outer_dims, sizeof(int));\n    if 
(!plan.outer_axes || !plan.outer_coord) {\n      if (plan.outer_axes) free(plan.outer_axes);\n      if (plan.outer_coord) free(plan.outer_coord);\n      fprintf(stderr, \"nx: associative_scan: allocation failed\\n\");\n      abort();\n    }\n    int idx = 0;\n    for (int i = 0; i < input->ndim; ++i) {\n      if (i != axis) {\n        plan.outer_axes[idx++] = i;\n      }\n    }\n  }\n\n  return plan;\n}\n\nstatic void scan_plan_destroy(scan_plan_t *plan) {\n  if (plan->outer_axes) free(plan->outer_axes);\n  if (plan->outer_coord) free(plan->outer_coord);\n}\n\nstatic bool advance_outer_coords(const ndarray_t *input, const int *outer_axes,\n                                 int *outer_coord, int outer_dims) {\n  if (outer_dims == 0) return false;\n  for (int idx = outer_dims - 1; idx >= 0; --idx) {\n    int axis = outer_axes[idx];\n    outer_coord[idx]++;\n    if (outer_coord[idx] < input->shape[axis]) {\n      return true;\n    }\n    outer_coord[idx] = 0;\n  }\n  return false;\n}\n\n#define SUM_EXPR(acc, val) ((acc) + (val))\n#define PROD_EXPR(acc, val) ((acc) * (val))\n#define MAX_EXPR(acc, val) ((acc) > (val) ? (acc) : (val))\n#define MIN_EXPR(acc, val) ((acc) < (val) ? (acc) : (val))\n#define MAX_FLOAT_EXPR(acc, val)                                                \\\n  (isnan((double)(acc)) || isnan((double)(val))                                  \\\n       ? NAN                                                                     \\\n       : ((acc) > (val) ? (acc) : (val)))\n#define MIN_FLOAT_EXPR(acc, val)                                                \\\n  (isnan((double)(acc)) || isnan((double)(val))                                  \\\n       ? NAN                                                                     \\\n       : ((acc) < (val) ? 
(acc) : (val)))\n#define MAX_COMPLEX32_EXPR(acc, val) complex_max(acc, val)\n#define MIN_COMPLEX32_EXPR(acc, val) complex_min(acc, val)\n#define MAX_COMPLEX64_EXPR(acc, val) complex64_max(acc, val)\n#define MIN_COMPLEX64_EXPR(acc, val) complex64_min(acc, val)\n\n#define DEFINE_SCAN_DIRECT(OPNAME, TYPE, SUFFIX, ACC_EXPR)                      \\\n  static void nx_c_scan_##OPNAME##_##SUFFIX(const ndarray_t *input,            \\\n                                            ndarray_t *output, int axis) {     \\\n    scan_plan_t plan = scan_prepare(input, output, axis, \"scan_\" #OPNAME);     \\\n    if (plan.axis_len <= 0) {                                                  \\\n      scan_plan_destroy(&plan);                                                \\\n      return;                                                                  \\\n    }                                                                          \\\n    TYPE *in_data = (TYPE *)input->data;                                       \\\n    TYPE *out_data = (TYPE *)output->data;                                     \\\n    const int outer_dims = plan.outer_dims;                                    \\\n    const int *outer_axes = plan.outer_axes;                                   \\\n    int *outer_coord = plan.outer_coord;                                       \\\n    const long axis_stride_in = plan.axis_stride_in;                           \\\n    const long axis_stride_out = plan.axis_stride_out;                         \\\n    while (true) {                                                             \\\n      long in_base = input->offset;                                            \\\n      long out_base = output->offset;                                          \\\n      for (int i = 0; i < outer_dims; ++i) {                                  \\\n        int ax = outer_axes[i];                                                \\\n        long coord = outer_coord[i];                                       
    \\\n        in_base += coord * input->strides[ax];                                 \\\n        out_base += coord * output->strides[ax];                               \\\n      }                                                                        \\\n      long in_off = in_base;                                                   \\\n      long out_off = out_base;                                                 \\\n      TYPE acc = in_data[in_off];                                              \\\n      out_data[out_off] = acc;                                                 \\\n      for (int k = 1; k < plan.axis_len; ++k) {                                \\\n        in_off += axis_stride_in;                                              \\\n        out_off += axis_stride_out;                                            \\\n        TYPE val = in_data[in_off];                                            \\\n        acc = ACC_EXPR;                                                        \\\n        out_data[out_off] = acc;                                               \\\n      }                                                                        \\\n      if (outer_dims == 0) break;                                              \\\n      if (!advance_outer_coords(input, outer_axes, outer_coord, outer_dims))   \\\n        break;                                                                 \\\n    }                                                                          \\\n    scan_plan_destroy(&plan);                                                  \\\n  }\n\n#define DEFINE_SCAN_LOW_PREC(OPNAME, STORAGE_TYPE, SUFFIX, ACC_EXPR, TO_FLOAT,  \\\n                             FROM_FLOAT)                                       \\\n  static void nx_c_scan_##OPNAME##_##SUFFIX(const ndarray_t *input,            \\\n                                            ndarray_t *output, int axis) {     \\\n    scan_plan_t plan = scan_prepare(input, output, axis, \"scan_\" 
#OPNAME);     \\\n    if (plan.axis_len <= 0) {                                                  \\\n      scan_plan_destroy(&plan);                                                \\\n      return;                                                                  \\\n    }                                                                          \\\n    STORAGE_TYPE *in_data = (STORAGE_TYPE *)input->data;                       \\\n    STORAGE_TYPE *out_data = (STORAGE_TYPE *)output->data;                     \\\n    const int outer_dims = plan.outer_dims;                                    \\\n    const int *outer_axes = plan.outer_axes;                                   \\\n    int *outer_coord = plan.outer_coord;                                       \\\n    const long axis_stride_in = plan.axis_stride_in;                           \\\n    const long axis_stride_out = plan.axis_stride_out;                         \\\n    while (true) {                                                             \\\n      long in_base = input->offset;                                            \\\n      long out_base = output->offset;                                          \\\n      for (int i = 0; i < outer_dims; ++i) {                                  \\\n        int ax = outer_axes[i];                                                \\\n        long coord = outer_coord[i];                                           \\\n        in_base += coord * input->strides[ax];                                 \\\n        out_base += coord * output->strides[ax];                               \\\n      }                                                                        \\\n      long in_off = in_base;                                                   \\\n      long out_off = out_base;                                                 \\\n      float acc = TO_FLOAT(in_data[in_off]);                                   \\\n      out_data[out_off] = FROM_FLOAT(acc);                                
     \\\n      for (int k = 1; k < plan.axis_len; ++k) {                                \\\n        in_off += axis_stride_in;                                              \\\n        out_off += axis_stride_out;                                            \\\n        float val = TO_FLOAT(in_data[in_off]);                                  \\\n        acc = ACC_EXPR;                                                        \\\n        out_data[out_off] = FROM_FLOAT(acc);                                   \\\n      }                                                                        \\\n      if (outer_dims == 0) break;                                              \\\n      if (!advance_outer_coords(input, outer_axes, outer_coord, outer_dims))   \\\n        break;                                                                 \\\n    }                                                                          \\\n    scan_plan_destroy(&plan);                                                  \\\n  }\n\n#define DEFINE_SCAN_INT4(OPNAME, SUFFIX, IS_SIGNED, ACC_EXPR)                  \\\n  static void nx_c_scan_##OPNAME##_##SUFFIX(const ndarray_t *input,            \\\n                                            ndarray_t *output, int axis) {     \\\n    scan_plan_t plan = scan_prepare(input, output, axis, \"scan_\" #OPNAME);     \\\n    if (plan.axis_len <= 0) {                                                  \\\n      scan_plan_destroy(&plan);                                                \\\n      return;                                                                  \\\n    }                                                                          \\\n    uint8_t *in_data = (uint8_t *)input->data;                                 \\\n    uint8_t *out_data = (uint8_t *)output->data;                               \\\n    const bool is_signed = (IS_SIGNED);                                        \\\n    const int outer_dims = plan.outer_dims;                             
       \\\n    const int *outer_axes = plan.outer_axes;                                   \\\n    int *outer_coord = plan.outer_coord;                                       \\\n    const long axis_stride_in = plan.axis_stride_in;                           \\\n    const long axis_stride_out = plan.axis_stride_out;                         \\\n    while (true) {                                                             \\\n      long in_base = input->offset;                                            \\\n      long out_base = output->offset;                                          \\\n      for (int i = 0; i < outer_dims; ++i) {                                  \\\n        int ax = outer_axes[i];                                                \\\n        long coord = outer_coord[i];                                           \\\n        in_base += coord * input->strides[ax];                                 \\\n        out_base += coord * output->strides[ax];                               \\\n      }                                                                        \\\n      long in_off = in_base;                                                   \\\n      long out_off = out_base;                                                 \\\n      int acc = int4_get(in_data, in_off, is_signed);                          \\\n      int4_set(out_data, out_off, acc, is_signed);                             \\\n      for (int k = 1; k < plan.axis_len; ++k) {                                \\\n        in_off += axis_stride_in;                                              \\\n        out_off += axis_stride_out;                                            \\\n        int val = int4_get(in_data, in_off, is_signed);                        \\\n        acc = ACC_EXPR;                                                        \\\n        int4_set(out_data, out_off, acc, is_signed);                           \\\n      }                                                                        
\\\n      if (outer_dims == 0) break;                                              \\\n      if (!advance_outer_coords(input, outer_axes, outer_coord, outer_dims))   \\\n        break;                                                                 \\\n    }                                                                          \\\n    scan_plan_destroy(&plan);                                                  \\\n  }\n\ntypedef struct {\n  scan_fn_t i8;\n  scan_fn_t u8;\n  scan_fn_t i16;\n  scan_fn_t u16;\n  scan_fn_t i32;\n  scan_fn_t i64;\n  scan_fn_t u32;\n  scan_fn_t u64;\n  scan_fn_t inat;\n  scan_fn_t f16;\n  scan_fn_t f32;\n  scan_fn_t f64;\n  scan_fn_t c32;\n  scan_fn_t c64;\n  scan_fn_t bf16;\n  scan_fn_t bool_;\n  scan_fn_t i4;\n  scan_fn_t u4;\n  scan_fn_t f8e4m3;\n  scan_fn_t f8e5m2;\n} scan_dispatch_table;\n\n// Sum implementations\nDEFINE_SCAN_DIRECT(sum, int8_t, i8, SUM_EXPR(acc, val))\nDEFINE_SCAN_DIRECT(sum, uint8_t, u8, SUM_EXPR(acc, val))\nDEFINE_SCAN_DIRECT(sum, int16_t, i16, SUM_EXPR(acc, val))\nDEFINE_SCAN_DIRECT(sum, uint16_t, u16, SUM_EXPR(acc, val))\nDEFINE_SCAN_DIRECT(sum, int32_t, i32, SUM_EXPR(acc, val))\nDEFINE_SCAN_DIRECT(sum, int64_t, i64, SUM_EXPR(acc, val))\nDEFINE_SCAN_DIRECT(sum, uint32_t, u32, SUM_EXPR(acc, val))\nDEFINE_SCAN_DIRECT(sum, uint64_t, u64, SUM_EXPR(acc, val))\nDEFINE_SCAN_DIRECT(sum, intnat, inat, SUM_EXPR(acc, val))\nDEFINE_SCAN_DIRECT(sum, float, f32, SUM_EXPR(acc, val))\nDEFINE_SCAN_DIRECT(sum, double, f64, SUM_EXPR(acc, val))\nDEFINE_SCAN_DIRECT(sum, complex32, c32, SUM_EXPR(acc, val))\nDEFINE_SCAN_DIRECT(sum, complex64, c64, SUM_EXPR(acc, val))\nDEFINE_SCAN_DIRECT(sum, caml_ba_bool, bool_, SUM_EXPR(acc, val))\nDEFINE_SCAN_LOW_PREC(sum, uint16_t, f16, SUM_EXPR(acc, val), half_to_float,\n                     float_to_half)\nDEFINE_SCAN_LOW_PREC(sum, caml_ba_bfloat16, bf16, SUM_EXPR(acc, val),\n                     bfloat16_to_float, float_to_bfloat16)\nDEFINE_SCAN_LOW_PREC(sum, caml_ba_fp8_e4m3, f8e4m3, 
SUM_EXPR(acc, val),\n                     fp8_e4m3_to_float, float_to_fp8_e4m3)\nDEFINE_SCAN_LOW_PREC(sum, caml_ba_fp8_e5m2, f8e5m2, SUM_EXPR(acc, val),\n                     fp8_e5m2_to_float, float_to_fp8_e5m2)\nDEFINE_SCAN_INT4(sum, i4, true, SUM_EXPR(acc, val))\nDEFINE_SCAN_INT4(sum, u4, false, SUM_EXPR(acc, val))\n\nstatic const scan_dispatch_table scan_sum_table = {\n    .i8 = nx_c_scan_sum_i8,\n    .u8 = nx_c_scan_sum_u8,\n    .i16 = nx_c_scan_sum_i16,\n    .u16 = nx_c_scan_sum_u16,\n    .i32 = nx_c_scan_sum_i32,\n    .i64 = nx_c_scan_sum_i64,\n    .u32 = nx_c_scan_sum_u32,\n    .u64 = nx_c_scan_sum_u64,\n    .inat = nx_c_scan_sum_inat,\n    .f16 = nx_c_scan_sum_f16,\n    .f32 = nx_c_scan_sum_f32,\n    .f64 = nx_c_scan_sum_f64,\n    .c32 = nx_c_scan_sum_c32,\n    .c64 = nx_c_scan_sum_c64,\n    .bf16 = nx_c_scan_sum_bf16,\n    .bool_ = nx_c_scan_sum_bool_,\n    .i4 = nx_c_scan_sum_i4,\n    .u4 = nx_c_scan_sum_u4,\n    .f8e4m3 = nx_c_scan_sum_f8e4m3,\n    .f8e5m2 = nx_c_scan_sum_f8e5m2};\n\n// Prod implementations\nDEFINE_SCAN_DIRECT(prod, int8_t, i8, PROD_EXPR(acc, val))\nDEFINE_SCAN_DIRECT(prod, uint8_t, u8, PROD_EXPR(acc, val))\nDEFINE_SCAN_DIRECT(prod, int16_t, i16, PROD_EXPR(acc, val))\nDEFINE_SCAN_DIRECT(prod, uint16_t, u16, PROD_EXPR(acc, val))\nDEFINE_SCAN_DIRECT(prod, int32_t, i32, PROD_EXPR(acc, val))\nDEFINE_SCAN_DIRECT(prod, int64_t, i64, PROD_EXPR(acc, val))\nDEFINE_SCAN_DIRECT(prod, uint32_t, u32, PROD_EXPR(acc, val))\nDEFINE_SCAN_DIRECT(prod, uint64_t, u64, PROD_EXPR(acc, val))\nDEFINE_SCAN_DIRECT(prod, intnat, inat, PROD_EXPR(acc, val))\nDEFINE_SCAN_DIRECT(prod, float, f32, PROD_EXPR(acc, val))\nDEFINE_SCAN_DIRECT(prod, double, f64, PROD_EXPR(acc, val))\nDEFINE_SCAN_DIRECT(prod, complex32, c32, PROD_EXPR(acc, val))\nDEFINE_SCAN_DIRECT(prod, complex64, c64, PROD_EXPR(acc, val))\nDEFINE_SCAN_DIRECT(prod, caml_ba_bool, bool_, PROD_EXPR(acc, val))\nDEFINE_SCAN_LOW_PREC(prod, uint16_t, f16, PROD_EXPR(acc, val), half_to_float,\n                     
float_to_half)\nDEFINE_SCAN_LOW_PREC(prod, caml_ba_bfloat16, bf16, PROD_EXPR(acc, val),\n                     bfloat16_to_float, float_to_bfloat16)\nDEFINE_SCAN_LOW_PREC(prod, caml_ba_fp8_e4m3, f8e4m3, PROD_EXPR(acc, val),\n                     fp8_e4m3_to_float, float_to_fp8_e4m3)\nDEFINE_SCAN_LOW_PREC(prod, caml_ba_fp8_e5m2, f8e5m2, PROD_EXPR(acc, val),\n                     fp8_e5m2_to_float, float_to_fp8_e5m2)\nDEFINE_SCAN_INT4(prod, i4, true, PROD_EXPR(acc, val))\nDEFINE_SCAN_INT4(prod, u4, false, PROD_EXPR(acc, val))\n\nstatic const scan_dispatch_table scan_prod_table = {\n    .i8 = nx_c_scan_prod_i8,\n    .u8 = nx_c_scan_prod_u8,\n    .i16 = nx_c_scan_prod_i16,\n    .u16 = nx_c_scan_prod_u16,\n    .i32 = nx_c_scan_prod_i32,\n    .i64 = nx_c_scan_prod_i64,\n    .u32 = nx_c_scan_prod_u32,\n    .u64 = nx_c_scan_prod_u64,\n    .inat = nx_c_scan_prod_inat,\n    .f16 = nx_c_scan_prod_f16,\n    .f32 = nx_c_scan_prod_f32,\n    .f64 = nx_c_scan_prod_f64,\n    .c32 = nx_c_scan_prod_c32,\n    .c64 = nx_c_scan_prod_c64,\n    .bf16 = nx_c_scan_prod_bf16,\n    .bool_ = nx_c_scan_prod_bool_,\n    .i4 = nx_c_scan_prod_i4,\n    .u4 = nx_c_scan_prod_u4,\n    .f8e4m3 = nx_c_scan_prod_f8e4m3,\n    .f8e5m2 = nx_c_scan_prod_f8e5m2};\n\n// Max implementations\nDEFINE_SCAN_DIRECT(max, int8_t, i8, MAX_EXPR(acc, val))\nDEFINE_SCAN_DIRECT(max, uint8_t, u8, MAX_EXPR(acc, val))\nDEFINE_SCAN_DIRECT(max, int16_t, i16, MAX_EXPR(acc, val))\nDEFINE_SCAN_DIRECT(max, uint16_t, u16, MAX_EXPR(acc, val))\nDEFINE_SCAN_DIRECT(max, int32_t, i32, MAX_EXPR(acc, val))\nDEFINE_SCAN_DIRECT(max, int64_t, i64, MAX_EXPR(acc, val))\nDEFINE_SCAN_DIRECT(max, uint32_t, u32, MAX_EXPR(acc, val))\nDEFINE_SCAN_DIRECT(max, uint64_t, u64, MAX_EXPR(acc, val))\nDEFINE_SCAN_DIRECT(max, intnat, inat, MAX_EXPR(acc, val))\nDEFINE_SCAN_DIRECT(max, float, f32, MAX_FLOAT_EXPR(acc, val))\nDEFINE_SCAN_DIRECT(max, double, f64, MAX_FLOAT_EXPR(acc, val))\nDEFINE_SCAN_DIRECT(max, complex32, c32, MAX_COMPLEX32_EXPR(acc, 
val))\nDEFINE_SCAN_DIRECT(max, complex64, c64, MAX_COMPLEX64_EXPR(acc, val))\nDEFINE_SCAN_DIRECT(max, caml_ba_bool, bool_, MAX_EXPR(acc, val))\nDEFINE_SCAN_LOW_PREC(max, uint16_t, f16, MAX_FLOAT_EXPR(acc, val), half_to_float,\n                     float_to_half)\nDEFINE_SCAN_LOW_PREC(max, caml_ba_bfloat16, bf16, MAX_FLOAT_EXPR(acc, val),\n                     bfloat16_to_float, float_to_bfloat16)\nDEFINE_SCAN_LOW_PREC(max, caml_ba_fp8_e4m3, f8e4m3, MAX_FLOAT_EXPR(acc, val),\n                     fp8_e4m3_to_float, float_to_fp8_e4m3)\nDEFINE_SCAN_LOW_PREC(max, caml_ba_fp8_e5m2, f8e5m2, MAX_FLOAT_EXPR(acc, val),\n                     fp8_e5m2_to_float, float_to_fp8_e5m2)\nDEFINE_SCAN_INT4(max, i4, true, MAX_EXPR(acc, val))\nDEFINE_SCAN_INT4(max, u4, false, MAX_EXPR(acc, val))\n\nstatic const scan_dispatch_table scan_max_table = {\n    .i8 = nx_c_scan_max_i8,\n    .u8 = nx_c_scan_max_u8,\n    .i16 = nx_c_scan_max_i16,\n    .u16 = nx_c_scan_max_u16,\n    .i32 = nx_c_scan_max_i32,\n    .i64 = nx_c_scan_max_i64,\n    .u32 = nx_c_scan_max_u32,\n    .u64 = nx_c_scan_max_u64,\n    .inat = nx_c_scan_max_inat,\n    .f16 = nx_c_scan_max_f16,\n    .f32 = nx_c_scan_max_f32,\n    .f64 = nx_c_scan_max_f64,\n    .c32 = nx_c_scan_max_c32,\n    .c64 = nx_c_scan_max_c64,\n    .bf16 = nx_c_scan_max_bf16,\n    .bool_ = nx_c_scan_max_bool_,\n    .i4 = nx_c_scan_max_i4,\n    .u4 = nx_c_scan_max_u4,\n    .f8e4m3 = nx_c_scan_max_f8e4m3,\n    .f8e5m2 = nx_c_scan_max_f8e5m2};\n\n// Min implementations\nDEFINE_SCAN_DIRECT(min, int8_t, i8, MIN_EXPR(acc, val))\nDEFINE_SCAN_DIRECT(min, uint8_t, u8, MIN_EXPR(acc, val))\nDEFINE_SCAN_DIRECT(min, int16_t, i16, MIN_EXPR(acc, val))\nDEFINE_SCAN_DIRECT(min, uint16_t, u16, MIN_EXPR(acc, val))\nDEFINE_SCAN_DIRECT(min, int32_t, i32, MIN_EXPR(acc, val))\nDEFINE_SCAN_DIRECT(min, int64_t, i64, MIN_EXPR(acc, val))\nDEFINE_SCAN_DIRECT(min, uint32_t, u32, MIN_EXPR(acc, val))\nDEFINE_SCAN_DIRECT(min, uint64_t, u64, MIN_EXPR(acc, val))\nDEFINE_SCAN_DIRECT(min, 
intnat, inat, MIN_EXPR(acc, val))\nDEFINE_SCAN_DIRECT(min, float, f32, MIN_FLOAT_EXPR(acc, val))\nDEFINE_SCAN_DIRECT(min, double, f64, MIN_FLOAT_EXPR(acc, val))\nDEFINE_SCAN_DIRECT(min, complex32, c32, MIN_COMPLEX32_EXPR(acc, val))\nDEFINE_SCAN_DIRECT(min, complex64, c64, MIN_COMPLEX64_EXPR(acc, val))\nDEFINE_SCAN_DIRECT(min, caml_ba_bool, bool_, MIN_EXPR(acc, val))\nDEFINE_SCAN_LOW_PREC(min, uint16_t, f16, MIN_FLOAT_EXPR(acc, val), half_to_float,\n                     float_to_half)\nDEFINE_SCAN_LOW_PREC(min, caml_ba_bfloat16, bf16, MIN_FLOAT_EXPR(acc, val),\n                     bfloat16_to_float, float_to_bfloat16)\nDEFINE_SCAN_LOW_PREC(min, caml_ba_fp8_e4m3, f8e4m3, MIN_FLOAT_EXPR(acc, val),\n                     fp8_e4m3_to_float, float_to_fp8_e4m3)\nDEFINE_SCAN_LOW_PREC(min, caml_ba_fp8_e5m2, f8e5m2, MIN_FLOAT_EXPR(acc, val),\n                     fp8_e5m2_to_float, float_to_fp8_e5m2)\nDEFINE_SCAN_INT4(min, i4, true, MIN_EXPR(acc, val))\nDEFINE_SCAN_INT4(min, u4, false, MIN_EXPR(acc, val))\n\nstatic const scan_dispatch_table scan_min_table = {\n    .i8 = nx_c_scan_min_i8,\n    .u8 = nx_c_scan_min_u8,\n    .i16 = nx_c_scan_min_i16,\n    .u16 = nx_c_scan_min_u16,\n    .i32 = nx_c_scan_min_i32,\n    .i64 = nx_c_scan_min_i64,\n    .u32 = nx_c_scan_min_u32,\n    .u64 = nx_c_scan_min_u64,\n    .inat = nx_c_scan_min_inat,\n    .f16 = nx_c_scan_min_f16,\n    .f32 = nx_c_scan_min_f32,\n    .f64 = nx_c_scan_min_f64,\n    .c32 = nx_c_scan_min_c32,\n    .c64 = nx_c_scan_min_c64,\n    .bf16 = nx_c_scan_min_bf16,\n    .bool_ = nx_c_scan_min_bool_,\n    .i4 = nx_c_scan_min_i4,\n    .u4 = nx_c_scan_min_u4,\n    .f8e4m3 = nx_c_scan_min_f8e4m3,\n    .f8e5m2 = nx_c_scan_min_f8e5m2};\n\nstatic void dispatch_scan_op(value v_input, value v_output, int axis,\n                             const scan_dispatch_table *table,\n                             const char *op_name) {\n  ndarray_t input = extract_ndarray(v_input);\n  ndarray_t output = extract_ndarray(v_output);\n\n  value 
v_input_data = Field(v_input, FFI_TENSOR_DATA);\n  value v_output_data = Field(v_output, FFI_TENSOR_DATA);\n  struct caml_ba_array *ba_input = Caml_ba_array_val(v_input_data);\n  struct caml_ba_array *ba_output = Caml_ba_array_val(v_output_data);\n  int kind_input = nx_buffer_get_kind(ba_input);\n  int kind_output = nx_buffer_get_kind(ba_output);\n\n  if (kind_input != kind_output) {\n    cleanup_ndarray(&input);\n    cleanup_ndarray(&output);\n    caml_failwith(\"associative_scan: dtype mismatch\");\n  }\n\n  scan_fn_t fn = NULL;\n  switch (kind_input) {\n    case CAML_BA_SINT8:\n      fn = table->i8;\n      break;\n    case CAML_BA_UINT8:\n      fn = table->u8;\n      break;\n    case CAML_BA_SINT16:\n      fn = table->i16;\n      break;\n    case CAML_BA_UINT16:\n      fn = table->u16;\n      break;\n    case CAML_BA_INT32:\n      fn = table->i32;\n      break;\n    case CAML_BA_INT64:\n      fn = table->i64;\n      break;\n    case NX_BA_UINT32:\n      fn = table->u32;\n      break;\n    case NX_BA_UINT64:\n      fn = table->u64;\n      break;\n    case CAML_BA_CAML_INT:\n    case CAML_BA_NATIVE_INT:\n      fn = table->inat;\n      break;\n    case CAML_BA_FLOAT16:\n      fn = table->f16;\n      break;\n    case CAML_BA_FLOAT32:\n      fn = table->f32;\n      break;\n    case CAML_BA_FLOAT64:\n      fn = table->f64;\n      break;\n    case CAML_BA_COMPLEX32:\n      fn = table->c32;\n      break;\n    case CAML_BA_COMPLEX64:\n      fn = table->c64;\n      break;\n    case NX_BA_BFLOAT16:\n      fn = table->bf16;\n      break;\n    case NX_BA_BOOL:\n      fn = table->bool_;\n      break;\n    case NX_BA_INT4:\n      fn = table->i4;\n      break;\n    case NX_BA_UINT4:\n      fn = table->u4;\n      break;\n    case NX_BA_FP8_E4M3:\n      fn = table->f8e4m3;\n      break;\n    case NX_BA_FP8_E5M2:\n      fn = table->f8e5m2;\n      break;\n    default:\n      cleanup_ndarray(&input);\n      cleanup_ndarray(&output);\n      caml_failwith(\"associative_scan: 
unsupported dtype\");\n  }\n\n  if (!fn) {\n    cleanup_ndarray(&input);\n    cleanup_ndarray(&output);\n    caml_failwith(\"associative_scan: operation not supported for dtype\");\n  }\n\n  caml_enter_blocking_section();\n  fn(&input, &output, axis);\n  caml_leave_blocking_section();\n\n  cleanup_ndarray(&input);\n  cleanup_ndarray(&output);\n}\n\nCAMLprim value caml_nx_associative_scan(value v_input, value v_output,\n                                        value v_axis, value v_op_tag) {\n  CAMLparam4(v_input, v_output, v_axis, v_op_tag);\n  int axis = Int_val(v_axis);\n  int op_tag = Int_val(v_op_tag);\n  const scan_dispatch_table *table = NULL;\n  const char *op_name = NULL;\n  switch (op_tag) {\n    case 0:\n      table = &scan_sum_table;\n      op_name = \"scan_sum\";\n      break;\n    case 1:\n      table = &scan_prod_table;\n      op_name = \"scan_prod\";\n      break;\n    case 2:\n      table = &scan_max_table;\n      op_name = \"scan_max\";\n      break;\n    case 3:\n      table = &scan_min_table;\n      op_name = \"scan_min\";\n      break;\n    default:\n      caml_failwith(\"associative_scan: invalid operation tag\");\n  }\n\n  dispatch_scan_op(v_input, v_output, axis, table, op_name);\n\n  CAMLreturn(Val_unit);\n}\n"
  },
  {
    "path": "packages/nx/lib/backend_c/nx_c_shape.c",
    "content": "/*---------------------------------------------------------------------------\n   Copyright (c) 2026 The Raven authors. All rights reserved.\n   SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*/\n\n// Pad and concatenate operations for nx C backend\n\n#include <stdbool.h>\n#include <stdio.h>\n#include <stdlib.h>\n\n#include <caml/alloc.h>\n#include <caml/bigarray.h>\n#include <caml/custom.h>\n#include <caml/fail.h>\n#include <caml/memory.h>\n#include <caml/threads.h>\n\n#include \"nx_c_shared.h\"\n\n// Type for fill kernel functions\ntypedef void (*fill_kernel_fn)(void *data, void *val, long offset);\n\n// Helper to iterate over inner dimensions for fill operations\nstatic inline void iterate_inner_dims_fill(const ndarray_t *z, long outer_idx,\n                                           fill_kernel_fn kernel, void *val,\n                                           void *z_data) {\n  if (z->ndim <= 1) {\n    kernel(z_data, val, z->offset + outer_idx * z->strides[0]);\n    return;\n  }\n\n  long z_base = z->offset + outer_idx * z->strides[0];\n\n  // Create temporary iterator for inner dimensions\n  int inner_ndim = z->ndim - 1;\n  int *coords = (int *)calloc(inner_ndim, sizeof(int));\n  if (!coords) {\n    fprintf(stderr, \"nx: iterate_inner_dims_fill: allocation failed\\n\");\n    abort();\n  }\n\n  // Iterate over inner dimensions\n  bool done = false;\n  while (!done) {\n    long z_off = z_base;\n\n    for (int i = 0; i < inner_ndim; i++) {\n      z_off += coords[i] * z->strides[i + 1];\n    }\n\n    kernel(z_data, val, z_off);\n\n    // Advance to next position\n    done = true;\n    for (int i = inner_ndim - 1; i >= 0; i--) {\n      coords[i]++;\n      if (coords[i] < z->shape[i + 1]) {\n        done = false;\n        break;\n      }\n      coords[i] = 0;\n    }\n  }\n\n  free(coords);\n}\n\n// Type definitions for pad and cat operations\n\n// Fill operation type: fill ndarray with constant value\ntypedef void (*fill_op_t)(const ndarray_t *, void 
*);\n\n// Copy operation type: copy src to dst with pad_before offsets (used for pad\n// and cat)\ntypedef void (*copy_op_t)(const ndarray_t *, const ndarray_t *, long *);\n\n// Dispatch tables for fill and copy\ntypedef struct {\n  fill_op_t i8, u8, i16, u16, i32, i64, u32, u64, inat;\n  fill_op_t f16, f32, f64;\n  fill_op_t c32, c64;\n  fill_op_t bf16, bool_, i4, u4, f8e4m3, f8e5m2;\n} fill_op_table;\n\ntypedef struct {\n  copy_op_t i8, u8, i16, u16, i32, i64, u32, u64, inat;\n  copy_op_t f16, f32, f64;\n  copy_op_t c32, c64;\n  copy_op_t bf16, bool_, i4, u4, f8e4m3, f8e5m2;\n} copy_op_table;\n\n// Helper to get element size in bytes for memcpy eligibility (0 for\n// unsupported)\nstatic int get_elem_size(int kind) {\n  switch (kind) {\n    case CAML_BA_SINT8:\n    case CAML_BA_UINT8:\n    case NX_BA_BOOL:\n    case NX_BA_FP8_E4M3:\n    case NX_BA_FP8_E5M2:\n      return 1;\n    case CAML_BA_SINT16:\n    case CAML_BA_UINT16:\n    case CAML_BA_FLOAT16:\n    case NX_BA_BFLOAT16:\n      return 2;\n    case CAML_BA_INT32:\n    case CAML_BA_FLOAT32:\n    case NX_BA_UINT32:\n      return 4;\n    case CAML_BA_INT64:\n    case CAML_BA_NATIVE_INT:\n    case CAML_BA_FLOAT64:\n    case NX_BA_UINT64:\n      return 8;\n    case CAML_BA_COMPLEX32:\n      return 8;\n    case CAML_BA_COMPLEX64:\n      return 16;\n    case NX_BA_INT4:\n    case NX_BA_UINT4:\n      return 0;  // Packed, no memcpy\n    default:\n      return 0;\n  }\n}\n\n// is_contiguous is now defined in nx_c_shared.h\n\n// Helper iterator for inner dimensions in copy operations\ntypedef void (*copy_kernel_fn)(void *, void *, long, long);\n\nstatic inline void iterate_inner_dims_copy(const ndarray_t *src,\n                                           const ndarray_t *dst, long outer_idx,\n                                           copy_kernel_fn kernel,\n                                           void *src_data, void *dst_data,\n                                           long *pad_before) {\n  if (src->ndim <= 1) 
{\n    long src_base = src->offset + outer_idx * src->strides[0];\n    long dst_base = dst->offset + (outer_idx + pad_before[0]) * dst->strides[0];\n    kernel(src_data, dst_data, src_base, dst_base);\n    return;\n  }\n\n  long src_base = src->offset + outer_idx * src->strides[0];\n  long dst_base = dst->offset + (outer_idx + pad_before[0]) * dst->strides[0];\n  int inner_ndim = src->ndim - 1;\n  int *coords = (int *)calloc(inner_ndim, sizeof(int));\n  if (!coords) {\n    fprintf(stderr, \"nx: iterate_inner_dims_copy: allocation failed\\n\");\n    abort();\n  }\n\n  bool done = false;\n  while (!done) {\n    long src_off = src_base;\n    long dst_off = dst_base;\n\n    for (int i = 0; i < inner_ndim; i++) {\n      src_off += coords[i] * src->strides[i + 1];\n      dst_off += (coords[i] + pad_before[i + 1]) * dst->strides[i + 1];\n    }\n\n    kernel(src_data, dst_data, src_off, dst_off);\n\n    done = true;\n    for (int i = inner_ndim - 1; i >= 0; i--) {\n      coords[i]++;\n      if (coords[i] < src->shape[i + 1]) {\n        done = false;\n        break;\n      }\n      coords[i] = 0;\n    }\n  }\n\n  free(coords);\n}\n\n// Macro for standard fill operations\n#define FILL_OP_KERNEL(name, T, suffix)                                  \\\n  static void nx_c_##name##_##suffix##_kernel(void *z_data, void *val_p, \\\n                                              long z_off) {              \\\n    T *z = (T *)z_data;                                                  \\\n    T val = *(T *)val_p;                                                 \\\n    z[z_off] = val;                                                      \\\n  }\n\n#define FILL_OP_IMPL(name, T, suffix)                                          \\\n  static void nx_c_##name##_##suffix(const ndarray_t *z, void *val_p) {        \\\n    if (!z) {                                                                  \\\n      fprintf(stderr, \"nx: nx_c_\" #name \"_\" #suffix \": null pointer\\n\");       \\\n      
abort();                                                                 \\\n    }                                                                          \\\n    long total = total_elements_safe(z);                                       \\\n    if (total == 0) return;                                                    \\\n                                                                               \\\n    if (is_contiguous(z)) {                                                    \\\n      _Pragma(\"omp parallel for simd if(total > 1000)\") for (long i = 0;       \\\n                                                             i < total; i++) { \\\n        nx_c_##name##_##suffix##_kernel(z->data, val_p, z->offset + i);        \\\n      }                                                                        \\\n    } else if (z->shape[0] > 1 && total / z->shape[0] > 50) {                  \\\n      _Pragma(\"omp parallel for if(z->shape[0] > 4)\") for (long i = 0;         \\\n                                                           i < z->shape[0];    \\\n                                                           i++) {              \\\n        iterate_inner_dims_fill(                                               \\\n            z, i, (fill_kernel_fn)nx_c_##name##_##suffix##_kernel, val_p,      \\\n            z->data);                                                          \\\n      }                                                                        \\\n    } else {                                                                   \\\n      nd_iterator_t it;                                                        \\\n      nd_iterator_init_safe(&it, z, z, z);                                     \\\n      do {                                                                     \\\n        long x_off, y_off, z_off;                                              \\\n        nd_iterator_get_offsets(&it, &x_off, &y_off, &z_off);                  \\\n        
nx_c_##name##_##suffix##_kernel(z->data, val_p, z->offset + z_off);    \\\n      } while (nd_iterator_next(&it));                                         \\\n      nd_iterator_destroy(&it);                                                \\\n    }                                                                          \\\n  }\n\n#define FILL_OP_FOR_TYPE(name, T, suffix) \\\n  FILL_OP_KERNEL(name, T, suffix)         \\\n  FILL_OP_IMPL(name, T, suffix)\n\n#define GENERATE_FILL_OP(name)               \\\n  FILL_OP_FOR_TYPE(name, int8_t, i8)         \\\n  FILL_OP_FOR_TYPE(name, uint8_t, u8)        \\\n  FILL_OP_FOR_TYPE(name, int16_t, i16)       \\\n  FILL_OP_FOR_TYPE(name, uint16_t, u16)      \\\n  FILL_OP_FOR_TYPE(name, int32_t, i32)       \\\n  FILL_OP_FOR_TYPE(name, int64_t, i64)       \\\n  FILL_OP_FOR_TYPE(name, uint32_t, u32)      \\\n  FILL_OP_FOR_TYPE(name, uint64_t, u64)      \\\n  FILL_OP_FOR_TYPE(name, intnat, inat)       \\\n  FILL_OP_FOR_TYPE(name, float, f32)         \\\n  FILL_OP_FOR_TYPE(name, double, f64)        \\\n  FILL_OP_FOR_TYPE(name, complex32, c32)     \\\n  FILL_OP_FOR_TYPE(name, complex64, c64)\n\n// For low-precision fill\n#define LOW_PREC_FILL_KERNEL(name, T, suffix, TO_FLOAT, FROM_FLOAT)      \\\n  static void nx_c_##name##_##suffix##_kernel(void *z_data, void *val_p, \\\n                                              long z_off) {              \\\n    T *z = (T *)z_data;                                                  \\\n    float val = *(float *)val_p;                                         \\\n    z[z_off] = FROM_FLOAT(val);                                          \\\n  }\n\n#define LOW_PREC_FILL_IMPL(name, T, suffix) FILL_OP_IMPL(name, T, suffix)\n\n// For int4 fill (packed, with saturation)\n#define INT4_FILL_IMPL(name, signedness, suffix)                         \\\n  static void nx_c_##name##_##suffix##_kernel(void *z_data, void *val_p, \\\n                                              long z_off) {              \\\n    uint8_t 
*z = (uint8_t *)z_data;                                      \\\n    int val = *(int *)val_p;                                             \\\n    val = signedness ? CLAMP_I4(val) : CLAMP_U4(val);                    \\\n    uint8_t nib = (uint8_t)val & 0x0F;                                   \\\n    long byte_off = z_off / 2;                                           \\\n    int nib_off = z_off % 2;                                             \\\n    if (nib_off) {                                                       \\\n      z[byte_off] = (z[byte_off] & 0x0F) | (nib << 4);                   \\\n    } else {                                                             \\\n      z[byte_off] = (z[byte_off] & 0xF0) | nib;                          \\\n    }                                                                    \\\n  }                                                                      \\\n  FILL_OP_IMPL(name, uint8_t, suffix)  // Use uint8_t for packed\n\n// Macro for standard copy operations\n#define COPY_OP_KERNEL(name, T, suffix)                                       \\\n  static void nx_c_##name##_##suffix##_kernel(void *src_data, void *dst_data, \\\n                                              long src_off, long dst_off) {   \\\n    T *src = (T *)src_data;                                                   \\\n    T *dst = (T *)dst_data;                                                   \\\n    dst[dst_off] = src[src_off];                                              \\\n  }\n\n#define COPY_OP_IMPL(name, T, suffix)                                          \\\n  static void nx_c_##name##_##suffix(const ndarray_t *src,                     \\\n                                     const ndarray_t *dst, long *pad_before) { \\\n    if (!src || !dst) {                                                        \\\n      fprintf(stderr, \"nx: nx_c_\" #name \"_\" #suffix \": null pointer\\n\");       \\\n      abort();                                              
                   \\\n    }                                                                          \\\n    long total = total_elements_safe(src);                                     \\\n    if (total == 0) return;                                                    \\\n                                                                               \\\n    /* Even if both are contiguous, we can't do a simple linear copy          \\\n       because the destination has different dimensions due to padding, so     \\\n       always take a coordinate-based path. */                                 \\\n    if (src->ndim > 0 && src->shape[0] > 1 &&                                  \\\n               total / src->shape[0] > 50) {                                   \\\n      _Pragma(\"omp parallel for if(src->shape[0] > 4)\") for (long i = 0;       \\\n                                                             i <               \\\n                                                             src->shape[0];    \\\n                                                             i++) {            \\\n        iterate_inner_dims_copy(src, dst, i, nx_c_##name##_##suffix##_kernel,  \\\n                                src->data, dst->data, pad_before);             \\\n      }                                                                        \\\n    } else {                                                                   \\\n      int ndim = src->ndim;                                                    \\\n      int *coords = (int *)calloc(ndim, sizeof(int));                          \\\n      if (!coords) {                                                           \\\n        fprintf(stderr, \"nx: nx_c_\" #name \"_\" #suffix \": allocation failed\\n\"); \\\n        abort();                                                               \\\n      }                                                    
                    \\\n      bool done = false;                                                       \\\n      while (!done) {                                                          \\\n        long src_off = 0;                                                      \\\n        long dst_off = 0;                                                      \\\n        for (int d = 0; d < ndim; d++) {                                       \\\n          src_off += (long)coords[d] * src->strides[d];                        \\\n          dst_off += ((long)coords[d] + pad_before[d]) * dst->strides[d];      \\\n        }                                                                      \\\n        nx_c_##name##_##suffix##_kernel(src->data, dst->data,                  \\\n                                        src->offset + src_off,                 \\\n                                        dst->offset + dst_off);                \\\n        done = true;                                                           \\\n        for (int d = ndim - 1; d >= 0; d--) {                                  \\\n          coords[d]++;                                                         \\\n          if (coords[d] < src->shape[d]) {                                     \\\n            done = false;                                                      \\\n            break;                                                             \\\n          }                                                                    \\\n          coords[d] = 0;                                                       \\\n        }                                                                      \\\n      }                                                                        \\\n      free(coords);                                                            \\\n    }                                                                          \\\n  }\n\n#define COPY_OP_FOR_TYPE(name, T, suffix) \\\n  
COPY_OP_KERNEL(name, T, suffix)         \\\n  COPY_OP_IMPL(name, T, suffix)\n\n#define GENERATE_COPY_OP(name)               \\\n  COPY_OP_FOR_TYPE(name, int8_t, i8)         \\\n  COPY_OP_FOR_TYPE(name, uint8_t, u8)        \\\n  COPY_OP_FOR_TYPE(name, int16_t, i16)       \\\n  COPY_OP_FOR_TYPE(name, uint16_t, u16)      \\\n  COPY_OP_FOR_TYPE(name, int32_t, i32)       \\\n  COPY_OP_FOR_TYPE(name, int64_t, i64)       \\\n  COPY_OP_FOR_TYPE(name, uint32_t, u32)      \\\n  COPY_OP_FOR_TYPE(name, uint64_t, u64)      \\\n  COPY_OP_FOR_TYPE(name, intnat, inat)       \\\n  COPY_OP_FOR_TYPE(name, float, f32)         \\\n  COPY_OP_FOR_TYPE(name, double, f64)        \\\n  COPY_OP_FOR_TYPE(name, complex32, c32)     \\\n  COPY_OP_FOR_TYPE(name, complex64, c64)\n\n// For low-precision copy (bitwise copy)\n#define LOW_PREC_COPY_KERNEL(name, T, suffix) COPY_OP_KERNEL(name, T, suffix)\n#define LOW_PREC_COPY_IMPL(name, T, suffix) COPY_OP_IMPL(name, T, suffix)\n\n// For int4 copy (packed)\n#define INT4_COPY_IMPL(name, signedness, suffix)                               \\\n  static void nx_c_##name##_##suffix##_kernel(void *src_data, void *dst_data,  \\\n                                              long src_off, long dst_off) {    \\\n    uint8_t *src = (uint8_t *)src_data;                                        \\\n    uint8_t *dst = (uint8_t *)dst_data;                                        \\\n    long byte_off_src = src_off / 2;                                           \\\n    int nib_off_src = src_off % 2;                                             \\\n    int a = nib_off_src                                                        \\\n                ? (signedness ? (int8_t)(src[byte_off_src] >> 4)               \\\n                              : (src[byte_off_src] >> 4) & 0x0F)               \\\n                : (signedness ? 
(int8_t)((src[byte_off_src] & 0x0F) << 4) >> 4 \\\n                              : src[byte_off_src] & 0x0F);                     \\\n    uint8_t nib = (uint8_t)a & 0x0F;                                           \\\n    long byte_off_dst = dst_off / 2;                                           \\\n    int nib_off_dst = dst_off % 2;                                             \\\n    if (nib_off_dst) {                                                         \\\n      dst[byte_off_dst] = (dst[byte_off_dst] & 0x0F) | (nib << 4);             \\\n    } else {                                                                   \\\n      dst[byte_off_dst] = (dst[byte_off_dst] & 0xF0) | nib;                    \\\n    }                                                                          \\\n  }                                                                            \\\n  static void nx_c_##name##_##suffix(const ndarray_t *src,                     \\\n                                     const ndarray_t *dst, long *pad_before) { \\\n    long total = total_elements_safe(src);                                     \\\n    if (total == 0) return;                                                    \\\n                                                                               \\\n    if (src->ndim > 0 && src->shape[0] > 1 && total / src->shape[0] > 50) {    \\\n      _Pragma(\"omp parallel for if(src->shape[0] > 4)\") for (long i = 0;       \\\n                                                             i <               \\\n                                                             src->shape[0];    \\\n                                                             i++) {            \\\n        iterate_inner_dims_copy(src, dst, i, nx_c_##name##_##suffix##_kernel,  \\\n                                src->data, dst->data, pad_before);             \\\n      }                                                                        \\\n    } else {                          
                                         \\\n      int ndim = src->ndim;                                                    \\\n      int *coords = (int *)calloc(ndim, sizeof(int));                          \\\n      if (!coords) {                                                           \\\n        fprintf(stderr, \"nx: nx_c_\" #name \"_\" #suffix \": allocation failed\\n\"); \\\n        abort();                                                               \\\n      }                                                                        \\\n      bool done = false;                                                       \\\n      while (!done) {                                                          \\\n        long src_off = 0;                                                      \\\n        long dst_off = 0;                                                      \\\n        for (int d = 0; d < ndim; d++) {                                       \\\n          src_off += (long)coords[d] * src->strides[d];                        \\\n          dst_off += ((long)coords[d] + pad_before[d]) * dst->strides[d];      \\\n        }                                                                      \\\n        nx_c_##name##_##suffix##_kernel(src->data, dst->data,                  \\\n                                        src_off + src->offset,                 \\\n                                        dst_off + dst->offset);                \\\n        done = true;                                                           \\\n        for (int d = ndim - 1; d >= 0; d--) {                                  \\\n          coords[d]++;                                                         \\\n          if (coords[d] < src->shape[d]) {                                     \\\n            done = false;                                                      \\\n            break;                                                             \\\n          }                           
                                         \\\n          coords[d] = 0;                                                       \\\n        }                                                                      \\\n      }                                                                        \\\n      free(coords);                                                            \\\n    }                                                                          \\\n  }\n\n// Generate fill and copy for all ops\nGENERATE_FILL_OP(fill)\nLOW_PREC_FILL_KERNEL(fill, uint16_t, f16, , float_to_half)\nLOW_PREC_FILL_IMPL(fill, uint16_t, f16)\nLOW_PREC_FILL_KERNEL(fill, caml_ba_bfloat16, bf16, , float_to_bfloat16)\nLOW_PREC_FILL_IMPL(fill, caml_ba_bfloat16, bf16)\nLOW_PREC_FILL_KERNEL(fill, caml_ba_fp8_e4m3, f8e4m3, , float_to_fp8_e4m3)\nLOW_PREC_FILL_IMPL(fill, caml_ba_fp8_e4m3, f8e4m3)\nLOW_PREC_FILL_KERNEL(fill, caml_ba_fp8_e5m2, f8e5m2, , float_to_fp8_e5m2)\nLOW_PREC_FILL_IMPL(fill, caml_ba_fp8_e5m2, f8e5m2)\nINT4_FILL_IMPL(fill, 1, i4)\nINT4_FILL_IMPL(fill, 0, u4)\nFILL_OP_FOR_TYPE(fill, caml_ba_bool, bool_)\n\n// Build dispatch table for fill operations\nstatic const fill_op_table fill_table = {.i8 = nx_c_fill_i8,\n                                         .u8 = nx_c_fill_u8,\n                                         .i16 = nx_c_fill_i16,\n                                         .u16 = nx_c_fill_u16,\n                                         .i32 = nx_c_fill_i32,\n                                         .i64 = nx_c_fill_i64,\n                                         .u32 = nx_c_fill_u32,\n                                         .u64 = nx_c_fill_u64,\n                                         .inat = nx_c_fill_inat,\n                                         .f16 = nx_c_fill_f16,\n                                         .f32 = nx_c_fill_f32,\n                                         .f64 = nx_c_fill_f64,\n                                         .c32 = nx_c_fill_c32,\n            
                             .c64 = nx_c_fill_c64,\n                                         .bf16 = nx_c_fill_bf16,\n                                         .bool_ = nx_c_fill_bool_,\n                                         .i4 = nx_c_fill_i4,\n                                         .u4 = nx_c_fill_u4,\n                                         .f8e4m3 = nx_c_fill_f8e4m3,\n                                         .f8e5m2 = nx_c_fill_f8e5m2};\n\nGENERATE_COPY_OP(copy)\nLOW_PREC_COPY_KERNEL(copy, uint16_t, f16)\nLOW_PREC_COPY_IMPL(copy, uint16_t, f16)\nLOW_PREC_COPY_KERNEL(copy, caml_ba_bfloat16, bf16)\nLOW_PREC_COPY_IMPL(copy, caml_ba_bfloat16, bf16)\nLOW_PREC_COPY_KERNEL(copy, caml_ba_fp8_e4m3, f8e4m3)\nLOW_PREC_COPY_IMPL(copy, caml_ba_fp8_e4m3, f8e4m3)\nLOW_PREC_COPY_KERNEL(copy, caml_ba_fp8_e5m2, f8e5m2)\nLOW_PREC_COPY_IMPL(copy, caml_ba_fp8_e5m2, f8e5m2)\nINT4_COPY_IMPL(copy, 1, i4)\nINT4_COPY_IMPL(copy, 0, u4)\nCOPY_OP_FOR_TYPE(copy, caml_ba_bool, bool_)\n\nstatic const copy_op_table copy_table = {.i8 = nx_c_copy_i8,\n                                         .u8 = nx_c_copy_u8,\n                                         .i16 = nx_c_copy_i16,\n                                         .u16 = nx_c_copy_u16,\n                                         .i32 = nx_c_copy_i32,\n                                         .i64 = nx_c_copy_i64,\n                                         .u32 = nx_c_copy_u32,\n                                         .u64 = nx_c_copy_u64,\n                                         .inat = nx_c_copy_inat,\n                                         .f16 = nx_c_copy_f16,\n                                         .f32 = nx_c_copy_f32,\n                                         .f64 = nx_c_copy_f64,\n                                         .c32 = nx_c_copy_c32,\n                                         .c64 = nx_c_copy_c64,\n                                         .bf16 = nx_c_copy_bf16,\n                                         .bool_ = 
nx_c_copy_bool_,\n                                         .i4 = nx_c_copy_i4,\n                                         .u4 = nx_c_copy_u4,\n                                         .f8e4m3 = nx_c_copy_f8e4m3,\n                                         .f8e5m2 = nx_c_copy_f8e5m2};\n\n// ============================================================================\n// OCaml FFI Stubs\n// ============================================================================\n\nCAMLprim value caml_nx_pad(value v_input, value v_pads, value v_fill,\n                           value v_output) {\n  CAMLparam4(v_input, v_pads, v_fill, v_output);\n  ndarray_t input = extract_ndarray(v_input);\n  ndarray_t output = extract_ndarray(v_output);\n  int ndim = input.ndim;\n  if (ndim != output.ndim) {\n    cleanup_ndarray(&input);\n    cleanup_ndarray(&output);\n    caml_failwith(\"pad: ndim mismatch\");\n  }\n  long *pad_before = (long *)malloc(ndim * sizeof(long));\n  long *pad_after = (long *)malloc(ndim * sizeof(long));\n  if (!pad_before || !pad_after) {\n    free(pad_before);  // free(NULL) is a no-op, so this is safe either way\n    free(pad_after);\n    cleanup_ndarray(&input);\n    cleanup_ndarray(&output);\n    caml_failwith(\"pad: allocation failed\");\n  }\n  // v_pads is a flat array: [before_0, after_0, before_1, after_1, ...]\n  for (int i = 0; i < ndim; i++) {\n    pad_before[i] = Long_val(Field(v_pads, 2 * i));\n    pad_after[i] = Long_val(Field(v_pads, 2 * i + 1));\n    if (pad_before[i] < 0 || pad_after[i] < 0) {\n      free(pad_before);\n      free(pad_after);\n      cleanup_ndarray(&input);\n      cleanup_ndarray(&output);\n      caml_failwith(\"pad: negative padding\");\n    }\n    if (input.shape[i] + pad_before[i] + pad_after[i] != output.shape[i]) {\n      free(pad_before);\n      free(pad_after);\n      cleanup_ndarray(&input);\n      cleanup_ndarray(&output);\n      caml_failwith(\"pad: shape mismatch\");\n    }\n  }\n  free(pad_after);  // Not used for copy, only validation\n\n  value v_input_data = Field(v_input, FFI_TENSOR_DATA);\n  struct caml_ba_array *ba 
= Caml_ba_array_val(v_input_data);\n  int kind = nx_buffer_get_kind(ba);\n\n  value v_output_data = Field(v_output, FFI_TENSOR_DATA);\n  int kind_out = nx_buffer_get_kind(Caml_ba_array_val(v_output_data));\n  if (kind != kind_out) {\n    free(pad_before);\n    cleanup_ndarray(&input);\n    cleanup_ndarray(&output);\n    caml_failwith(\"pad: dtype mismatch\");\n  }\n\n  fill_op_t fill_op = NULL;\n  copy_op_t copy_op = NULL;\n  switch (kind) {\n    case CAML_BA_SINT8:\n      fill_op = fill_table.i8;\n      copy_op = copy_table.i8;\n      break;\n    case CAML_BA_UINT8:\n      fill_op = fill_table.u8;\n      copy_op = copy_table.u8;\n      break;\n    case CAML_BA_SINT16:\n      fill_op = fill_table.i16;\n      copy_op = copy_table.i16;\n      break;\n    case CAML_BA_UINT16:\n      fill_op = fill_table.u16;\n      copy_op = copy_table.u16;\n      break;\n    case CAML_BA_INT32:\n      fill_op = fill_table.i32;\n      copy_op = copy_table.i32;\n      break;\n    case CAML_BA_INT64:\n      fill_op = fill_table.i64;\n      copy_op = copy_table.i64;\n      break;\n    case NX_BA_UINT32:\n      fill_op = fill_table.u32;\n      copy_op = copy_table.u32;\n      break;\n    case NX_BA_UINT64:\n      fill_op = fill_table.u64;\n      copy_op = copy_table.u64;\n      break;\n    case CAML_BA_CAML_INT:\n    case CAML_BA_NATIVE_INT:\n      fill_op = fill_table.inat;\n      copy_op = copy_table.inat;\n      break;\n    case CAML_BA_FLOAT16:\n      fill_op = fill_table.f16;\n      copy_op = copy_table.f16;\n      break;\n    case CAML_BA_FLOAT32:\n      fill_op = fill_table.f32;\n      copy_op = copy_table.f32;\n      break;\n    case CAML_BA_FLOAT64:\n      fill_op = fill_table.f64;\n      copy_op = copy_table.f64;\n      break;\n    case CAML_BA_COMPLEX32:\n      fill_op = fill_table.c32;\n      copy_op = copy_table.c32;\n      break;\n    case CAML_BA_COMPLEX64:\n      fill_op = fill_table.c64;\n      copy_op = copy_table.c64;\n      break;\n    case NX_BA_BFLOAT16:\n      
fill_op = fill_table.bf16;\n      copy_op = copy_table.bf16;\n      break;\n    case NX_BA_BOOL:\n      fill_op = fill_table.bool_;\n      copy_op = copy_table.bool_;\n      break;\n    case NX_BA_INT4:\n      fill_op = fill_table.i4;\n      copy_op = copy_table.i4;\n      break;\n    case NX_BA_UINT4:\n      fill_op = fill_table.u4;\n      copy_op = copy_table.u4;\n      break;\n    case NX_BA_FP8_E4M3:\n      fill_op = fill_table.f8e4m3;\n      copy_op = copy_table.f8e4m3;\n      break;\n    case NX_BA_FP8_E5M2:\n      fill_op = fill_table.f8e5m2;\n      copy_op = copy_table.f8e5m2;\n      break;\n    default:\n      free(pad_before);\n      cleanup_ndarray(&input);\n      cleanup_ndarray(&output);\n      caml_failwith(\"pad: unsupported dtype\");\n  }\n\n  // Convert fill_value to C type\n  union {\n    int8_t i8;\n    uint8_t u8;\n    int16_t i16;\n    uint16_t u16;\n    int32_t i32;\n    int64_t i64;\n    uint32_t u32;\n    uint64_t u64;\n    intnat inat;\n    float f32;\n    double f64;\n    complex32 c32;\n    complex64 c64;\n    uint16_t f16;\n    caml_ba_bfloat16 bf16;\n    caml_ba_fp8_e4m3 f8e4m3;\n    caml_ba_fp8_e5m2 f8e5m2;\n    uint8_t bool_val;\n    int i4_val;\n  } fill_c;\n  void *fill_p = &fill_c;\n  switch (kind) {\n    case CAML_BA_SINT8:\n      fill_c.i8 = (int8_t)Long_val(v_fill);\n      break;\n    case CAML_BA_UINT8:\n      fill_c.u8 = (uint8_t)Long_val(v_fill);\n      break;\n    case CAML_BA_SINT16:\n      fill_c.i16 = (int16_t)Long_val(v_fill);\n      break;\n    case CAML_BA_UINT16:\n      fill_c.u16 = (uint16_t)Long_val(v_fill);\n      break;\n    case CAML_BA_INT32:\n      fill_c.i32 = Int32_val(v_fill);\n      break;\n    case CAML_BA_INT64:\n      fill_c.i64 = Int64_val(v_fill);\n      break;\n    case NX_BA_UINT32:\n      fill_c.u32 = (uint32_t)Int32_val(v_fill);\n      break;\n    case NX_BA_UINT64:\n      fill_c.u64 = (uint64_t)Int64_val(v_fill);\n      break;\n    case CAML_BA_CAML_INT:\n    case CAML_BA_NATIVE_INT:\n      
fill_c.inat = Long_val(v_fill);\n      break;\n    case CAML_BA_FLOAT32:\n      fill_c.f32 = (float)Double_val(v_fill);\n      break;\n    case CAML_BA_FLOAT64:\n      fill_c.f64 = Double_val(v_fill);\n      break;\n    case CAML_BA_COMPLEX32:\n      // For complex types, v_fill is a Complex.t record {re: float; im: float}\n      if (Is_block(v_fill)) {\n        // Complex record - use Double_field to access float fields directly\n        fill_c.c32 = (float)Double_field(v_fill, 0) +\n                     I * (float)Double_field(v_fill, 1);\n      } else {\n        // Should not happen for complex types, but handle gracefully\n        fill_c.c32 = 0.0f + I * 0.0f;\n      }\n      break;\n    case CAML_BA_COMPLEX64:\n      if (Is_block(v_fill)) {\n        // Complex record - use Double_field to access float fields directly\n        fill_c.c64 = Double_field(v_fill, 0) + I * Double_field(v_fill, 1);\n      } else {\n        // Should not happen for complex types, but handle gracefully\n        fill_c.c64 = 0.0 + I * 0.0;\n      }\n      break;\n    case CAML_BA_FLOAT16:\n      fill_c.f16 = float_to_half((float)Double_val(v_fill));\n      break;\n    case NX_BA_BFLOAT16:\n      fill_c.bf16 = float_to_bfloat16((float)Double_val(v_fill));\n      break;\n    case NX_BA_FP8_E4M3:\n      fill_c.f8e4m3 = float_to_fp8_e4m3((float)Double_val(v_fill));\n      break;\n    case NX_BA_FP8_E5M2:\n      fill_c.f8e5m2 = float_to_fp8_e5m2((float)Double_val(v_fill));\n      break;\n    case NX_BA_BOOL:\n      fill_c.bool_val = Bool_val(v_fill) ? 
1 : 0;\n      break;\n    case NX_BA_INT4:\n      fill_c.i4_val = CLAMP_I4(Long_val(v_fill));\n      break;\n    case NX_BA_UINT4:\n      fill_c.i4_val = CLAMP_U4(Long_val(v_fill));\n      break;\n  }\n\n  caml_enter_blocking_section();\n  fill_op(&output, fill_p);\n  copy_op(&input, &output, pad_before);\n  caml_leave_blocking_section();\n\n  free(pad_before);\n  cleanup_ndarray(&input);\n  cleanup_ndarray(&output);\n  CAMLreturn(Val_unit);\n}\n\nCAMLprim value caml_nx_cat(value v_inputs, value v_axis, value v_output) {\n  CAMLparam3(v_inputs, v_axis, v_output);\n  int axis = Int_val(v_axis);\n  ndarray_t output = extract_ndarray(v_output);\n  int ndim = output.ndim;\n  if (axis < 0 || axis >= ndim) {\n    cleanup_ndarray(&output);\n    caml_failwith(\"cat: invalid axis\");\n  }\n  if (v_inputs == Val_emptylist) {\n    // Guard before Field(v_inputs, 0): the empty list is not a block\n    cleanup_ndarray(&output);\n    caml_failwith(\"cat: empty input list\");\n  }\n\n  value v_first_data = Field(Field(v_inputs, 0), FFI_TENSOR_DATA);\n  struct caml_ba_array *ba = Caml_ba_array_val(v_first_data);\n  int kind = nx_buffer_get_kind(ba);\n\n  value v_output_data = Field(v_output, FFI_TENSOR_DATA);\n  int kind_out = nx_buffer_get_kind(Caml_ba_array_val(v_output_data));\n  if (kind != kind_out) {\n    cleanup_ndarray(&output);\n    caml_failwith(\"cat: dtype mismatch\");\n  }\n\n  copy_op_t copy_op = NULL;\n  switch (kind) {\n    case CAML_BA_SINT8:\n      copy_op = copy_table.i8;\n      break;\n    case CAML_BA_UINT8:\n      copy_op = copy_table.u8;\n      break;\n    case CAML_BA_SINT16:\n      copy_op = copy_table.i16;\n      break;\n    case CAML_BA_UINT16:\n      copy_op = copy_table.u16;\n      break;\n    case CAML_BA_INT32:\n      copy_op = copy_table.i32;\n      break;\n    case CAML_BA_INT64:\n      copy_op = copy_table.i64;\n      break;\n    case NX_BA_UINT32:\n      copy_op = copy_table.u32;\n      break;\n    case NX_BA_UINT64:\n      copy_op = copy_table.u64;\n      break;\n    case CAML_BA_CAML_INT:\n    case CAML_BA_NATIVE_INT:\n      copy_op = copy_table.inat;\n      break;\n    case CAML_BA_FLOAT16:\n      copy_op = copy_table.f16;\n      
break;\n    case CAML_BA_FLOAT32:\n      copy_op = copy_table.f32;\n      break;\n    case CAML_BA_FLOAT64:\n      copy_op = copy_table.f64;\n      break;\n    case CAML_BA_COMPLEX32:\n      copy_op = copy_table.c32;\n      break;\n    case CAML_BA_COMPLEX64:\n      copy_op = copy_table.c64;\n      break;\n    case NX_BA_BFLOAT16:\n      copy_op = copy_table.bf16;\n      break;\n    case NX_BA_BOOL:\n      copy_op = copy_table.bool_;\n      break;\n    case NX_BA_INT4:\n      copy_op = copy_table.i4;\n      break;\n    case NX_BA_UINT4:\n      copy_op = copy_table.u4;\n      break;\n    case NX_BA_FP8_E4M3:\n      copy_op = copy_table.f8e4m3;\n      break;\n    case NX_BA_FP8_E5M2:\n      copy_op = copy_table.f8e5m2;\n      break;\n    default:\n      cleanup_ndarray(&output);\n      caml_failwith(\"cat: unsupported dtype\");\n  }\n\n  long *pad_before = (long *)malloc(ndim * sizeof(long));\n  if (!pad_before) {\n    cleanup_ndarray(&output);\n    caml_failwith(\"cat: allocation failed\");\n  }\n\n  long current = 0;\n  value tail = v_inputs;\n  while (tail != Val_int(0)) {  // Empty list is Val_int(0) in OCaml\n    value v_in = Field(tail, 0);\n    ndarray_t in = extract_ndarray(v_in);\n    if (in.ndim != ndim) {\n      free(pad_before);\n      cleanup_ndarray(&in);\n      cleanup_ndarray(&output);\n      caml_failwith(\"cat: ndim mismatch\");\n    }\n    for (int i = 0; i < ndim; i++) {\n      pad_before[i] = (i == axis) ? 
current : 0;\n      if (i != axis && in.shape[i] != output.shape[i]) {\n        free(pad_before);\n        cleanup_ndarray(&in);\n        cleanup_ndarray(&output);\n        caml_failwith(\"cat: shape mismatch\");\n      }\n    }\n    copy_op(&in, &output, pad_before);\n    current += in.shape[axis];\n    cleanup_ndarray(&in);\n    tail = Field(tail, 1);\n  }\n  if (current != output.shape[axis]) {\n    free(pad_before);\n    cleanup_ndarray(&output);\n    caml_failwith(\"cat: concatenated size mismatch\");\n  }\n\n  free(pad_before);\n  cleanup_ndarray(&output);\n  CAMLreturn(Val_unit);\n}\n"
  },
  {
    "path": "packages/nx/lib/backend_c/nx_c_shared.h",
    "content": "/*---------------------------------------------------------------------------\n   Copyright (c) 2026 The Raven authors. All rights reserved.\n   SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*/\n\n#ifndef NX_C_SHARED_H\n#define NX_C_SHARED_H\n\n#include <caml/alloc.h>\n#include <caml/bigarray.h>\n#include <caml/custom.h>\n#include <caml/fail.h>\n#include <caml/memory.h>\n#include <caml/mlvalues.h>\n#include <caml/threads.h>\n#include <complex.h>\n#include <limits.h>  // LONG_MAX, used by total_elements_safe\n#include <math.h>\n#include <stdbool.h>\n#include <stdint.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n\n#include \"nx_buffer_stubs.h\"  // For extended kinds, caml_ba_* typedefs, and conversions\n\n#ifdef _OPENMP\n#include <omp.h>\n#endif\n\n// Maximum number of dimensions supported\n#define MAX_NDIM 32\n\n// FFI tensor field indices (matches OCaml record)\n// type ffi_tensor = {\n//   data : buffer;       (* Field 0 *)\n//   shape : int array;   (* Field 1 *)\n//   strides : int array; (* Field 2 *)\n//   offset : int;        (* Field 3 *)\n// }\n#define FFI_TENSOR_DATA 0\n#define FFI_TENSOR_SHAPE 1\n#define FFI_TENSOR_STRIDES 2\n#define FFI_TENSOR_OFFSET 3\n\ntypedef float _Complex complex32;\ntypedef double _Complex complex64;\n\n// Int4/uint4 clamping macros for saturation\n#define CLAMP_I4(x) ((x) < -8 ? -8 : ((x) > 7 ? 7 : (x)))\n#define CLAMP_U4(x) ((x) < 0 ? 0 : ((x) > 15 ? 
15 : (x)))\n\nstatic inline int int4_get(const uint8_t *data, long offset, bool is_signed) {\n  long byte_off = offset / 2;\n  int nibble_off = offset % 2;\n  uint8_t byte = data[byte_off];\n  if (is_signed) {\n    if (nibble_off) {\n      return (int8_t)(byte & 0xF0) >> 4;\n    } else {\n      return (int8_t)((byte & 0x0F) << 4) >> 4;\n    }\n  } else {\n    if (nibble_off) {\n      return (byte >> 4) & 0x0F;\n    } else {\n      return byte & 0x0F;\n    }\n  }\n}\n\nstatic inline void int4_set(uint8_t *data, long offset, int value,\n                            bool is_signed) {\n  int clamped = is_signed ? CLAMP_I4(value) : CLAMP_U4(value);\n  uint8_t nibble = (uint8_t)(clamped & 0x0F);\n  long byte_off = offset / 2;\n  int nibble_off = offset % 2;\n  if (nibble_off) {\n    data[byte_off] = (data[byte_off] & 0x0F) | (nibble << 4);\n  } else {\n    data[byte_off] = (data[byte_off] & 0xF0) | nibble;\n  }\n}\n\n// Complex arithmetic operations\n#define COMPLEX_ADD(a, b) ((a) + (b))\n#define COMPLEX_MUL(a, b) ((a) * (b))\n\n// Complex comparison operations (lexicographic order)\nstatic inline complex32 complex_max(complex32 a, complex32 b) {\n  float a_real = crealf(a), a_imag = cimagf(a);\n  float b_real = crealf(b), b_imag = cimagf(b);\n  if (a_real > b_real) return a;\n  if (a_real < b_real) return b;\n  return (a_imag >= b_imag) ? a : b;\n}\n\nstatic inline complex64 complex64_max(complex64 a, complex64 b) {\n  double a_real = creal(a), a_imag = cimag(a);\n  double b_real = creal(b), b_imag = cimag(b);\n  if (a_real > b_real) return a;\n  if (a_real < b_real) return b;\n  return (a_imag >= b_imag) ? a : b;\n}\n\nstatic inline complex32 complex_min(complex32 a, complex32 b) {\n  float a_real = crealf(a), a_imag = cimagf(a);\n  float b_real = crealf(b), b_imag = cimagf(b);\n  if (a_real < b_real) return a;\n  if (a_real > b_real) return b;\n  return (a_imag < b_imag) ? 
a : b;\n}\n\nstatic inline complex64 complex64_min(complex64 a, complex64 b) {\n  double a_real = creal(a), a_imag = cimag(a);\n  double b_real = creal(b), b_imag = cimag(b);\n  if (a_real < b_real) return a;\n  if (a_real > b_real) return b;\n  return (a_imag < b_imag) ? a : b;\n}\n\n// Core ndarray structure for strided array operations\ntypedef struct {\n  void *data;\n  int ndim;\n  int *shape;\n  int *strides;\n  int offset;\n} ndarray_t;\n\n// Iterator for n-dimensional arrays (binary operations)\ntypedef struct {\n  int ndim;\n  int *shape;\n  int *coords;\n  int *x_strides;\n  int *y_strides;\n  int *z_strides;\n} nd_iterator_t;\n\n// Iterator for copying between two arrays\ntypedef struct {\n  int ndim;\n  int *shape;\n  int *coords;\n  int *src_strides;\n  int *dst_strides;\n} nd_copy_iterator_t;\n\n// Single array iterator for unary operations\ntypedef struct {\n  int ndim;\n  int *shape;\n  int *coords;\n  int *strides;\n} nd_single_iterator_t;\n\n// Macro to iterate over all types (extended to include nx_buffer types)\n// Note: int4/uint4 need special handling (2 values per byte)\n// Note: float16 uses caml_ba_uint16 like the standard library\n#define FOR_EACH_TYPE(MACRO)                      \\\n  MACRO(int8_t, i8, CAML_BA_SINT8)                \\\n  MACRO(uint8_t, u8, CAML_BA_UINT8)               \\\n  MACRO(int16_t, i16, CAML_BA_SINT16)             \\\n  MACRO(uint16_t, u16, CAML_BA_UINT16)            \\\n  MACRO(int32_t, i32, CAML_BA_INT32)              \\\n  MACRO(int64_t, i64, CAML_BA_INT64)              \\\n  MACRO(caml_ba_uint32, u32, NX_BA_UINT32)        \\\n  MACRO(caml_ba_uint64, u64, NX_BA_UINT64)        \\\n  MACRO(intnat, inat, CAML_BA_NATIVE_INT)         \\\n  MACRO(uint16_t, f16, CAML_BA_FLOAT16)           \\\n  MACRO(float, f32, CAML_BA_FLOAT32)              \\\n  MACRO(double, f64, CAML_BA_FLOAT64)             \\\n  MACRO(complex32, c32, CAML_BA_COMPLEX32)        \\\n  MACRO(complex64, c64, CAML_BA_COMPLEX64)        \\\n  
MACRO(caml_ba_bfloat16, bf16, NX_BA_BFLOAT16)   \\\n  MACRO(caml_ba_bool, bool_, NX_BA_BOOL)          \\\n  MACRO(uint8_t, i4, NX_BA_INT4)                  \\\n  MACRO(uint8_t, u4, NX_BA_UINT4)                 \\\n  MACRO(caml_ba_fp8_e4m3, f8e4m3, NX_BA_FP8_E4M3) \\\n  MACRO(caml_ba_fp8_e5m2, f8e5m2, NX_BA_FP8_E5M2)\n\n// Helper functions for safe operations\n//\n// IMPORTANT: Functions in this header may be called from within\n// caml_enter_blocking_section() / caml_leave_blocking_section() pairs.\n// They must NEVER call caml_failwith or any other OCaml runtime function.\n// Use fprintf(stderr, ...) + abort() for unrecoverable errors instead.\n\nstatic inline long total_elements_safe(const ndarray_t *arr) {\n  if (!arr || arr->ndim == 0) return 1;\n  long total = 1;\n  for (int i = 0; i < arr->ndim; i++) {\n    long dim = arr->shape[i];\n    if (dim <= 0) return 0;\n    if (total > LONG_MAX / dim) {\n      fprintf(stderr, \"nx: total_elements_safe: integer overflow\\n\");\n      abort();\n    }\n    total *= dim;\n  }\n  return total;\n}\n\nstatic inline bool is_fully_contiguous(const ndarray_t *x, const ndarray_t *y,\n                                       const ndarray_t *z) {\n  if (!x || !y || !z || x->ndim != y->ndim || x->ndim != z->ndim) return false;\n  if (x->ndim == 0) return true;\n\n  // Check C-contiguous layout\n  int expected_stride = 1;\n  for (int i = x->ndim - 1; i >= 0; i--) {\n    if (x->strides[i] != expected_stride || y->strides[i] != expected_stride ||\n        z->strides[i] != expected_stride) {\n      return false;\n    }\n    expected_stride *= x->shape[i];\n  }\n  return true;\n}\n\nstatic inline bool is_contiguous(const ndarray_t *x) {\n  if (!x || x->ndim == 0) return true;\n  \n  // Check C-contiguous layout\n  int expected_stride = 1;\n  for (int i = x->ndim - 1; i >= 0; i--) {\n    if (x->strides[i] != expected_stride) {\n      return false;\n    }\n    expected_stride *= x->shape[i];\n  }\n  return true;\n}\n\nstatic inline void 
nd_iterator_init_safe(nd_iterator_t *it, const ndarray_t *x,\n                                         const ndarray_t *y,\n                                         const ndarray_t *z) {\n  if (!it || !x || !y || !z) {\n    fprintf(stderr, \"nx: nd_iterator_init_safe: null pointer\\n\");\n    abort();\n  }\n  if (x->ndim != y->ndim || x->ndim != z->ndim) {\n    fprintf(stderr, \"nx: nd_iterator_init_safe: dimension mismatch\\n\");\n    abort();\n  }\n  it->ndim = x->ndim;\n  it->shape = x->shape;\n  it->coords = (int *)calloc(x->ndim, sizeof(int));\n  it->x_strides = x->strides;\n  it->y_strides = y->strides;\n  it->z_strides = z->strides;\n  if (!it->coords) {\n    fprintf(stderr, \"nx: nd_iterator_init_safe: allocation failed\\n\");\n    abort();\n  }\n}\n\nstatic inline void nd_iterator_get_offsets(const nd_iterator_t *it, long *x_off,\n                                           long *y_off, long *z_off) {\n  *x_off = 0;\n  *y_off = 0;\n  *z_off = 0;\n  for (int i = 0; i < it->ndim; i++) {\n    *x_off += it->coords[i] * it->x_strides[i];\n    *y_off += it->coords[i] * it->y_strides[i];\n    *z_off += it->coords[i] * it->z_strides[i];\n  }\n}\n\nstatic inline bool nd_iterator_next(nd_iterator_t *it) {\n  for (int i = it->ndim - 1; i >= 0; i--) {\n    it->coords[i]++;\n    if (it->coords[i] < it->shape[i]) {\n      return true;\n    }\n    it->coords[i] = 0;\n  }\n  return false;\n}\n\nstatic inline void nd_iterator_destroy(nd_iterator_t *it) {\n  if (it && it->coords) {\n    free(it->coords);\n    it->coords = NULL;\n  }\n}\n\n// Single array iterator functions\nstatic inline void nd_iterator_init(nd_single_iterator_t *it, const ndarray_t *arr) {\n  if (!it || !arr) {\n    fprintf(stderr, \"nx: nd_iterator_init: null pointer\\n\");\n    abort();\n  }\n  it->ndim = arr->ndim;\n  it->shape = arr->shape;\n  it->coords = (int *)calloc(arr->ndim, sizeof(int));\n  it->strides = arr->strides;\n  if (!it->coords) {\n    fprintf(stderr, \"nx: nd_iterator_init: allocation 
failed\\n\");\n    abort();\n  }\n}\n\nstatic inline void nd_iterator_get_offset(const nd_single_iterator_t *it, long *offset) {\n  *offset = 0;\n  for (int i = 0; i < it->ndim; i++) {\n    *offset += it->coords[i] * it->strides[i];\n  }\n}\n\nstatic inline bool nd_single_iterator_next(nd_single_iterator_t *it) {\n  for (int i = it->ndim - 1; i >= 0; i--) {\n    it->coords[i]++;\n    if (it->coords[i] < it->shape[i]) {\n      return true;\n    }\n    it->coords[i] = 0;\n  }\n  return false;\n}\n\nstatic inline void nd_single_iterator_destroy(nd_single_iterator_t *it) {\n  if (it && it->coords) {\n    free(it->coords);\n    it->coords = NULL;\n  }\n}\n\n// Copy iterator functions\nstatic inline void nd_copy_iterator_init(nd_copy_iterator_t *it, const ndarray_t *src, const ndarray_t *dst) {\n  if (!it || !src || !dst) {\n    fprintf(stderr, \"nx: nd_copy_iterator_init: null pointer\\n\");\n    abort();\n  }\n  if (src->ndim != dst->ndim) {\n    fprintf(stderr, \"nx: nd_copy_iterator_init: dimension mismatch\\n\");\n    abort();\n  }\n  it->ndim = src->ndim;\n  it->shape = src->shape;\n  it->coords = (int *)calloc(src->ndim, sizeof(int));\n  it->src_strides = src->strides;\n  it->dst_strides = dst->strides;\n  if (!it->coords) {\n    fprintf(stderr, \"nx: nd_copy_iterator_init: allocation failed\\n\");\n    abort();\n  }\n}\n\nstatic inline void nd_copy_iterator_get_offsets(const nd_copy_iterator_t *it, long *src_off, long *dst_off) {\n  *src_off = 0;\n  *dst_off = 0;\n  for (int i = 0; i < it->ndim; i++) {\n    *src_off += it->coords[i] * it->src_strides[i];\n    *dst_off += it->coords[i] * it->dst_strides[i];\n  }\n}\n\nstatic inline bool nd_copy_iterator_next(nd_copy_iterator_t *it) {\n  for (int i = it->ndim - 1; i >= 0; i--) {\n    it->coords[i]++;\n    if (it->coords[i] < it->shape[i]) {\n      return true;\n    }\n    it->coords[i] = 0;\n  }\n  return false;\n}\n\nstatic inline void nd_copy_iterator_destroy(nd_copy_iterator_t *it) {\n  if (it && it->coords) {\n 
   free(it->coords);\n    it->coords = NULL;\n  }\n}\n\n// Helper to extract ndarray from FFI tensor\nstatic inline ndarray_t extract_ndarray(value v_ffi_tensor) {\n  value v_data = Field(v_ffi_tensor, FFI_TENSOR_DATA);\n  value v_shape = Field(v_ffi_tensor, FFI_TENSOR_SHAPE);\n  value v_strides = Field(v_ffi_tensor, FFI_TENSOR_STRIDES);\n  int offset = Int_val(Field(v_ffi_tensor, FFI_TENSOR_OFFSET));\n\n  struct caml_ba_array *ba = Caml_ba_array_val(v_data);\n  void *data = ba->data;\n\n  int ndim = Wosize_val(v_shape);\n\n  // Always allocate on heap to avoid stack corruption\n  int *shape = (int *)malloc(ndim * sizeof(int));\n  int *strides = (int *)malloc(ndim * sizeof(int));\n\n  if (!shape || !strides) {\n    if (shape) free(shape);\n    if (strides) free(strides);\n    caml_failwith(\"extract_ndarray: allocation failed\");\n  }\n\n  // Extract shape and strides\n  for (int i = 0; i < ndim; i++) {\n    shape[i] = Int_val(Field(v_shape, i));\n    strides[i] = Int_val(Field(v_strides, i));\n  }\n\n  ndarray_t arr = {data, ndim, shape, strides, offset};\n  return arr;\n}\n\n// Clean up heap-allocated arrays if needed\nstatic inline void cleanup_ndarray(ndarray_t *arr) {\n  // Always free since we always allocate on heap now\n  if (arr->shape) free(arr->shape);\n  if (arr->strides) free(arr->strides);\n}\n\n// Extract ndarray using caller-provided stack buffers (no malloc)\nstatic inline ndarray_t extract_ndarray_stack(value v_ffi_tensor,\n                                              int *shape_buf,\n                                              int *strides_buf) {\n  value v_data = Field(v_ffi_tensor, FFI_TENSOR_DATA);\n  value v_shape = Field(v_ffi_tensor, FFI_TENSOR_SHAPE);\n  value v_strides = Field(v_ffi_tensor, FFI_TENSOR_STRIDES);\n  int offset = Int_val(Field(v_ffi_tensor, FFI_TENSOR_OFFSET));\n\n  struct caml_ba_array *ba = Caml_ba_array_val(v_data);\n  void *data = ba->data;\n\n  int ndim = Wosize_val(v_shape);\n\n  for (int i = 0; i < ndim; i++) {\n   
 shape_buf[i] = Int_val(Field(v_shape, i));\n    strides_buf[i] = Int_val(Field(v_strides, i));\n  }\n\n  ndarray_t arr = {data, ndim, shape_buf, strides_buf, offset};\n  return arr;\n}\n\n#endif  // NX_C_SHARED_H\n"
  },
  {
    "path": "packages/nx/lib/backend_c/nx_c_solve.c",
    "content": "/*---------------------------------------------------------------------------\n   Copyright (c) 2026 The Raven authors. All rights reserved.\n   SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*/\n\n#include <caml/alloc.h>\n#include <caml/bigarray.h>\n#include <caml/custom.h>\n#include <caml/fail.h>\n#include <caml/memory.h>\n#include <caml/threads.h>\n#include <complex.h>\n#include <float.h>\n#include <lapacke.h>\n\n#include \"nx_c_shared.h\"\n\n// Machine epsilon for float32 and float64\n#define NX_EPS32 FLT_EPSILON\n#define NX_EPS64 DBL_EPSILON\n\n// Helper functions for shape and stride operations\nstatic inline int nx_ndim(value v_shape) { return Wosize_val(v_shape); }\n\nstatic inline int nx_shape_at(value v_shape, int idx) {\n  return Int_val(Field(v_shape, idx));\n}\n\nstatic inline int nx_stride_at(value v_strides, int idx) {\n  return Int_val(Field(v_strides, idx));\n}\n\nstatic inline int nx_batch_size(value v_shape) {\n  int ndim = Wosize_val(v_shape);\n  if (ndim <= 2) return 1;\n  int batch_size = 1;\n  for (int i = 0; i < ndim - 2; i++) {\n    batch_size *= Int_val(Field(v_shape, i));\n  }\n  return batch_size;\n}\n\nstatic inline size_t nx_batch_offset_elems(int b, value v_shape,\n                                           value v_strides) {\n  int ndim = Wosize_val(v_shape);\n  if (ndim <= 2) return 0;\n  size_t offset = 0;\n  int remaining = b;\n  // Calculate offset for batch dimensions\n  for (int i = ndim - 3; i >= 0; i--) {\n    int dim_size = Int_val(Field(v_shape, i));\n    int coord = remaining % dim_size;\n    remaining /= dim_size;\n    offset += coord * Int_val(Field(v_strides, i));\n  }\n  return offset;\n}\n\n// Helper functions for packing/unpacking matrices\nstatic void nx_pack_f32(float* dst, const float* src, int m, int n,\n                        int stride_row, int stride_col) {\n  for (int i = 0; i < m; i++) {\n    for (int j = 0; j < n; j++) {\n      
dst[i * n + j] = src[i * stride_row + j * stride_col];\n    }\n  }\n}\n\nstatic void nx_unpack_f32(float* dst, const float* src, int m, int n,\n                          int stride_row, int stride_col) {\n  for (int i = 0; i < m; i++) {\n    for (int j = 0; j < n; j++) {\n      dst[i * stride_row + j * stride_col] = src[i * n + j];\n    }\n  }\n}\n\nstatic void nx_pack_f64(double* dst, const double* src, int m, int n,\n                        int stride_row, int stride_col) {\n  for (int i = 0; i < m; i++) {\n    for (int j = 0; j < n; j++) {\n      dst[i * n + j] = src[i * stride_row + j * stride_col];\n    }\n  }\n}\n\nstatic void nx_unpack_f64(double* dst, const double* src, int m, int n,\n                          int stride_row, int stride_col) {\n  for (int i = 0; i < m; i++) {\n    for (int j = 0; j < n; j++) {\n      dst[i * stride_row + j * stride_col] = src[i * n + j];\n    }\n  }\n}\n\nstatic void nx_pack_c32(complex32* dst, const complex32* src, int m, int n,\n                        int stride_row, int stride_col) {\n  for (int i = 0; i < m; i++) {\n    for (int j = 0; j < n; j++) {\n      dst[i * n + j] = src[i * stride_row + j * stride_col];\n    }\n  }\n}\n\nstatic void nx_unpack_c32(complex32* dst, const complex32* src, int m, int n,\n                          int stride_row, int stride_col) {\n  for (int i = 0; i < m; i++) {\n    for (int j = 0; j < n; j++) {\n      dst[i * stride_row + j * stride_col] = src[i * n + j];\n    }\n  }\n}\n\nstatic void nx_pack_c64(complex64* dst, const complex64* src, int m, int n,\n                        int stride_row, int stride_col) {\n  for (int i = 0; i < m; i++) {\n    for (int j = 0; j < n; j++) {\n      dst[i * n + j] = src[i * stride_row + j * stride_col];\n    }\n  }\n}\n\nstatic void nx_unpack_c64(complex64* dst, const complex64* src, int m, int n,\n                          int stride_row, int stride_col) {\n  for (int i = 0; i < m; i++) {\n    for (int j = 0; j < n; j++) {\n      dst[i * stride_row + j 
* stride_col] = src[i * n + j];\n    }\n  }\n}\n\n// Triangular solve implementations\nstatic void triangular_solve_float32(const float* a, const float* b, float* x,\n                                     int m, int n, int upper, int transpose,\n                                     int unit_diag) {\n  float tol = NX_EPS32 * m;\n  memcpy(x, b, m * n * sizeof(float));\n  if (!transpose) {\n    if (upper) {\n#pragma omp parallel for if (n > 100)\n      for (int j = 0; j < n; j++) {\n        for (int i = m - 1; i >= 0; i--) {\n          float sum = 0.0f;\n          for (int k = i + 1; k < m; k++) {\n            sum += a[i * m + k] * x[k * n + j];\n          }\n          x[i * n + j] -= sum;\n          if (!unit_diag) {\n            if (fabsf(a[i * m + i]) < tol) {\n              x[i * n + j] = (x[i * n + j] == 0.0f) ? 0.0f : INFINITY;\n            } else {\n              x[i * n + j] /= a[i * m + i];\n            }\n          }\n        }\n      }\n    } else {\n#pragma omp parallel for if (n > 100)\n      for (int j = 0; j < n; j++) {\n        for (int i = 0; i < m; i++) {\n          float sum = 0.0f;\n          for (int k = 0; k < i; k++) {\n            sum += a[i * m + k] * x[k * n + j];\n          }\n          x[i * n + j] -= sum;\n          if (!unit_diag) {\n            if (fabsf(a[i * m + i]) < tol) {\n              x[i * n + j] = (x[i * n + j] == 0.0f) ? 0.0f : INFINITY;\n            } else {\n              x[i * n + j] /= a[i * m + i];\n            }\n          }\n        }\n      }\n    }\n  } else {\n    if (upper) {\n#pragma omp parallel for if (n > 100)\n      for (int j = 0; j < n; j++) {\n        for (int i = 0; i < m; i++) {\n          float sum = 0.0f;\n          for (int k = 0; k < i; k++) {\n            sum += a[k * m + i] * x[k * n + j];\n          }\n          x[i * n + j] -= sum;\n          if (!unit_diag) {\n            if (fabsf(a[i * m + i]) < tol) {\n              x[i * n + j] = (x[i * n + j] == 0.0f) ? 
0.0f : INFINITY;\n            } else {\n              x[i * n + j] /= a[i * m + i];\n            }\n          }\n        }\n      }\n    } else {\n#pragma omp parallel for if (n > 100)\n      for (int j = 0; j < n; j++) {\n        for (int i = m - 1; i >= 0; i--) {\n          float sum = 0.0f;\n          for (int k = i + 1; k < m; k++) {\n            sum += a[k * m + i] * x[k * n + j];\n          }\n          x[i * n + j] -= sum;\n          if (!unit_diag) {\n            if (fabsf(a[i * m + i]) < tol) {\n              x[i * n + j] = (x[i * n + j] == 0.0f) ? 0.0f : INFINITY;\n            } else {\n              x[i * n + j] /= a[i * m + i];\n            }\n          }\n        }\n      }\n    }\n  }\n}\n\nstatic void triangular_solve_float64(const double* a, const double* b,\n                                     double* x, int m, int n, int upper,\n                                     int transpose, int unit_diag) {\n  double tol = NX_EPS64 * m;\n  memcpy(x, b, m * n * sizeof(double));\n  if (!transpose) {\n    if (upper) {\n#pragma omp parallel for if (n > 100)\n      for (int j = 0; j < n; j++) {\n        for (int i = m - 1; i >= 0; i--) {\n          double sum = 0.0;\n          for (int k = i + 1; k < m; k++) {\n            sum += a[i * m + k] * x[k * n + j];\n          }\n          x[i * n + j] -= sum;\n          if (!unit_diag) {\n            if (fabs(a[i * m + i]) < tol) {\n              x[i * n + j] = (x[i * n + j] == 0.0) ? 
0.0 : INFINITY;\n            } else {\n              x[i * n + j] /= a[i * m + i];\n            }\n          }\n        }\n      }\n    } else {\n#pragma omp parallel for if (n > 100)\n      for (int j = 0; j < n; j++) {\n        for (int i = 0; i < m; i++) {\n          double sum = 0.0;\n          for (int k = 0; k < i; k++) {\n            sum += a[i * m + k] * x[k * n + j];\n          }\n          x[i * n + j] -= sum;\n          if (!unit_diag) {\n            if (fabs(a[i * m + i]) < tol) {\n              x[i * n + j] = (x[i * n + j] == 0.0) ? 0.0 : INFINITY;\n            } else {\n              x[i * n + j] /= a[i * m + i];\n            }\n          }\n        }\n      }\n    }\n  } else {\n    if (upper) {\n#pragma omp parallel for if (n > 100)\n      for (int j = 0; j < n; j++) {\n        for (int i = 0; i < m; i++) {\n          double sum = 0.0;\n          for (int k = 0; k < i; k++) {\n            sum += a[k * m + i] * x[k * n + j];\n          }\n          x[i * n + j] -= sum;\n          if (!unit_diag) {\n            if (fabs(a[i * m + i]) < tol) {\n              x[i * n + j] = (x[i * n + j] == 0.0) ? 0.0 : INFINITY;\n            } else {\n              x[i * n + j] /= a[i * m + i];\n            }\n          }\n        }\n      }\n    } else {\n#pragma omp parallel for if (n > 100)\n      for (int j = 0; j < n; j++) {\n        for (int i = m - 1; i >= 0; i--) {\n          double sum = 0.0;\n          for (int k = i + 1; k < m; k++) {\n            sum += a[k * m + i] * x[k * n + j];\n          }\n          x[i * n + j] -= sum;\n          if (!unit_diag) {\n            if (fabs(a[i * m + i]) < tol) {\n              x[i * n + j] = (x[i * n + j] == 0.0) ? 
0.0 : INFINITY;\n            } else {\n              x[i * n + j] /= a[i * m + i];\n            }\n          }\n        }\n      }\n    }\n  }\n}\n\nstatic void triangular_solve_complex32(const complex32* a, const complex32* b,\n                                       complex32* x, int m, int n, int upper,\n                                       int transpose, int unit_diag) {\n  float tol = NX_EPS32 * m;\n  memcpy(x, b, m * n * sizeof(complex32));\n  if (!transpose) {\n    if (upper) {\n#pragma omp parallel for if (n > 100)\n      for (int j = 0; j < n; j++) {\n        for (int i = m - 1; i >= 0; i--) {\n          complex32 sum = 0.0f + 0.0f * I;\n          for (int k = i + 1; k < m; k++) {\n            sum += a[i * m + k] * x[k * n + j];\n          }\n          x[i * n + j] -= sum;\n          if (!unit_diag) {\n            if (cabsf(a[i * m + i]) < tol) {\n              x[i * n + j] = (cabsf(x[i * n + j]) == 0.0f)\n                                 ? (0.0f + 0.0f * I)\n                                 : (INFINITY + NAN * I);\n            } else {\n              x[i * n + j] /= a[i * m + i];\n            }\n          }\n        }\n      }\n    } else {\n#pragma omp parallel for if (n > 100)\n      for (int j = 0; j < n; j++) {\n        for (int i = 0; i < m; i++) {\n          complex32 sum = 0.0f + 0.0f * I;\n          for (int k = 0; k < i; k++) {\n            sum += a[i * m + k] * x[k * n + j];\n          }\n          x[i * n + j] -= sum;\n          if (!unit_diag) {\n            if (cabsf(a[i * m + i]) < tol) {\n              x[i * n + j] = (cabsf(x[i * n + j]) == 0.0f)\n                                 ? 
(0.0f + 0.0f * I)\n                                 : (INFINITY + NAN * I);\n            } else {\n              x[i * n + j] /= a[i * m + i];\n            }\n          }\n        }\n      }\n    }\n  } else {\n    if (upper) {\n#pragma omp parallel for if (n > 100)\n      for (int j = 0; j < n; j++) {\n        for (int i = 0; i < m; i++) {\n          complex32 sum = 0.0f + 0.0f * I;\n          for (int k = 0; k < i; k++) {\n            sum += conjf(a[k * m + i]) * x[k * n + j];\n          }\n          x[i * n + j] -= sum;\n          if (!unit_diag) {\n            if (cabsf(a[i * m + i]) < tol) {\n              x[i * n + j] = (cabsf(x[i * n + j]) == 0.0f)\n                                 ? (0.0f + 0.0f * I)\n                                 : (INFINITY + NAN * I);\n            } else {\n              x[i * n + j] /= conjf(a[i * m + i]);\n            }\n          }\n        }\n      }\n    } else {\n#pragma omp parallel for if (n > 100)\n      for (int j = 0; j < n; j++) {\n        for (int i = m - 1; i >= 0; i--) {\n          complex32 sum = 0.0f + 0.0f * I;\n          for (int k = i + 1; k < m; k++) {\n            sum += conjf(a[k * m + i]) * x[k * n + j];\n          }\n          x[i * n + j] -= sum;\n          if (!unit_diag) {\n            if (cabsf(a[i * m + i]) < tol) {\n              x[i * n + j] = (cabsf(x[i * n + j]) == 0.0f)\n                                 ? 
(0.0f + 0.0f * I)\n                                 : (INFINITY + NAN * I);\n            } else {\n              x[i * n + j] /= conjf(a[i * m + i]);\n            }\n          }\n        }\n      }\n    }\n  }\n}\n\nstatic void triangular_solve_complex64(const complex64* a, const complex64* b,\n                                       complex64* x, int m, int n, int upper,\n                                       int transpose, int unit_diag) {\n  double tol = NX_EPS64 * m;\n  memcpy(x, b, m * n * sizeof(complex64));\n  if (!transpose) {\n    if (upper) {\n#pragma omp parallel for if (n > 100)\n      for (int j = 0; j < n; j++) {\n        for (int i = m - 1; i >= 0; i--) {\n          complex64 sum = 0.0 + 0.0 * I;\n          for (int k = i + 1; k < m; k++) {\n            sum += a[i * m + k] * x[k * n + j];\n          }\n          x[i * n + j] -= sum;\n          if (!unit_diag) {\n            if (cabs(a[i * m + i]) < tol) {\n              x[i * n + j] = (cabs(x[i * n + j]) == 0.0) ? (0.0 + 0.0 * I)\n                                                         : (INFINITY + NAN * I);\n            } else {\n              x[i * n + j] /= a[i * m + i];\n            }\n          }\n        }\n      }\n    } else {\n#pragma omp parallel for if (n > 100)\n      for (int j = 0; j < n; j++) {\n        for (int i = 0; i < m; i++) {\n          complex64 sum = 0.0 + 0.0 * I;\n          for (int k = 0; k < i; k++) {\n            sum += a[i * m + k] * x[k * n + j];\n          }\n          x[i * n + j] -= sum;\n          if (!unit_diag) {\n            if (cabs(a[i * m + i]) < tol) {\n              x[i * n + j] = (cabs(x[i * n + j]) == 0.0) ? 
(0.0 + 0.0 * I)\n                                                         : (INFINITY + NAN * I);\n            } else {\n              x[i * n + j] /= a[i * m + i];\n            }\n          }\n        }\n      }\n    }\n  } else {\n    if (upper) {\n#pragma omp parallel for if (n > 100)\n      for (int j = 0; j < n; j++) {\n        for (int i = 0; i < m; i++) {\n          complex64 sum = 0.0 + 0.0 * I;\n          for (int k = 0; k < i; k++) {\n            sum += conj(a[k * m + i]) * x[k * n + j];\n          }\n          x[i * n + j] -= sum;\n          if (!unit_diag) {\n            if (cabs(a[i * m + i]) < tol) {\n              x[i * n + j] = (cabs(x[i * n + j]) == 0.0) ? (0.0 + 0.0 * I)\n                                                         : (INFINITY + NAN * I);\n            } else {\n              x[i * n + j] /= conj(a[i * m + i]);\n            }\n          }\n        }\n      }\n    } else {\n#pragma omp parallel for if (n > 100)\n      for (int j = 0; j < n; j++) {\n        for (int i = m - 1; i >= 0; i--) {\n          complex64 sum = 0.0 + 0.0 * I;\n          for (int k = i + 1; k < m; k++) {\n            sum += conj(a[k * m + i]) * x[k * n + j];\n          }\n          x[i * n + j] -= sum;\n          if (!unit_diag) {\n            if (cabs(a[i * m + i]) < tol) {\n              x[i * n + j] = (cabs(x[i * n + j]) == 0.0) ? 
(0.0 + 0.0 * I)\n                                                         : (INFINITY + NAN * I);\n            } else {\n              x[i * n + j] /= conj(a[i * m + i]);\n            }\n          }\n        }\n      }\n    }\n  }\n}\n\nstatic int triangular_solve_float16(const uint16_t* a, const uint16_t* b,\n                                    uint16_t* x, int m, int n, int upper,\n                                    int transpose, int unit_diag) {\n  float* a_float = (float*)malloc(m * m * sizeof(float));\n  float* b_float = (float*)malloc(m * n * sizeof(float));\n  float* x_float = (float*)malloc(m * n * sizeof(float));\n  if (!a_float || !b_float || !x_float) {\n    free(a_float);\n    free(b_float);\n    free(x_float);\n    return -1;\n  }\n  for (int i = 0; i < m * m; i++) a_float[i] = half_to_float(a[i]);\n  for (int i = 0; i < m * n; i++) b_float[i] = half_to_float(b[i]);\n  triangular_solve_float32(a_float, b_float, x_float, m, n, upper, transpose,\n                           unit_diag);\n  for (int i = 0; i < m * n; i++) x[i] = float_to_half(x_float[i]);\n  free(a_float);\n  free(b_float);\n  free(x_float);\n  return 0;\n}\n\nstatic int triangular_solve_bfloat16(const caml_ba_bfloat16* a,\n                                     const caml_ba_bfloat16* b,\n                                     caml_ba_bfloat16* x, int m, int n,\n                                     int upper, int transpose, int unit_diag) {\n  float* a_float = (float*)malloc(m * m * sizeof(float));\n  float* b_float = (float*)malloc(m * n * sizeof(float));\n  float* x_float = (float*)malloc(m * n * sizeof(float));\n  if (!a_float || !b_float || !x_float) {\n    free(a_float);\n    free(b_float);\n    free(x_float);\n    return -1;\n  }\n  for (int i = 0; i < m * m; i++) a_float[i] = bfloat16_to_float(a[i]);\n  for (int i = 0; i < m * n; i++) b_float[i] = bfloat16_to_float(b[i]);\n  triangular_solve_float32(a_float, b_float, x_float, m, n, upper, transpose,\n                           
unit_diag);\n  for (int i = 0; i < m * n; i++) x[i] = float_to_bfloat16(x_float[i]);\n  free(a_float);\n  free(b_float);\n  free(x_float);\n  return 0;\n}\n\nstatic int triangular_solve_f8e4m3(const caml_ba_fp8_e4m3* a,\n                                   const caml_ba_fp8_e4m3* b,\n                                   caml_ba_fp8_e4m3* x, int m, int n, int upper,\n                                   int transpose, int unit_diag) {\n  float* a_float = (float*)malloc(m * m * sizeof(float));\n  float* b_float = (float*)malloc(m * n * sizeof(float));\n  float* x_float = (float*)malloc(m * n * sizeof(float));\n  if (!a_float || !b_float || !x_float) {\n    free(a_float);\n    free(b_float);\n    free(x_float);\n    return -1;\n  }\n  for (int i = 0; i < m * m; i++) a_float[i] = fp8_e4m3_to_float(a[i]);\n  for (int i = 0; i < m * n; i++) b_float[i] = fp8_e4m3_to_float(b[i]);\n  triangular_solve_float32(a_float, b_float, x_float, m, n, upper, transpose,\n                           unit_diag);\n  for (int i = 0; i < m * n; i++) x[i] = float_to_fp8_e4m3(x_float[i]);\n  free(a_float);\n  free(b_float);\n  free(x_float);\n  return 0;\n}\n\nstatic int triangular_solve_f8e5m2(const caml_ba_fp8_e5m2* a,\n                                   const caml_ba_fp8_e5m2* b,\n                                   caml_ba_fp8_e5m2* x, int m, int n, int upper,\n                                   int transpose, int unit_diag) {\n  float* a_float = (float*)malloc(m * m * sizeof(float));\n  float* b_float = (float*)malloc(m * n * sizeof(float));\n  float* x_float = (float*)malloc(m * n * sizeof(float));\n  if (!a_float || !b_float || !x_float) {\n    free(a_float);\n    free(b_float);\n    free(x_float);\n    return -1;\n  }\n  for (int i = 0; i < m * m; i++) a_float[i] = fp8_e5m2_to_float(a[i]);\n  for (int i = 0; i < m * n; i++) b_float[i] = fp8_e5m2_to_float(b[i]);\n  triangular_solve_float32(a_float, b_float, x_float, m, n, upper, transpose,\n                           unit_diag);\n  for (int i 
= 0; i < m * n; i++) x[i] = float_to_fp8_e5m2(x_float[i]);\n  free(a_float);\n  free(b_float);\n  free(x_float);\n  return 0;\n}\n\n// ============================================================================\n// General Linear System Solving (Ax = b)\n// ============================================================================\n\n// General solve implementations using LAPACK\nstatic void solve_float32(float* a, float* b, int n, int nrhs) {\n  // Create copies since LAPACK overwrites inputs\n  float* a_copy = (float*)malloc(n * n * sizeof(float));\n  float* b_copy = (float*)malloc(n * nrhs * sizeof(float));\n  if (!a_copy || !b_copy) {\n    free(a_copy);\n    free(b_copy);\n    return;\n  }\n  memcpy(a_copy, a, n * n * sizeof(float));\n  memcpy(b_copy, b, n * nrhs * sizeof(float));\n\n  // Pivot array for LAPACK\n  int* ipiv = (int*)malloc(n * sizeof(int));\n  if (!ipiv) {\n    free(a_copy);\n    free(b_copy);\n    return;\n  }\n\n  // Solve using LAPACK\n  int info =\n      LAPACKE_sgesv(LAPACK_ROW_MAJOR, n, nrhs, a_copy, n, ipiv, b_copy, nrhs);\n\n  if (info == 0) {\n    // Copy solution back to b\n    memcpy(b, b_copy, n * nrhs * sizeof(float));\n  }\n\n  free(a_copy);\n  free(b_copy);\n  free(ipiv);\n}\n\nstatic void solve_float64(double* a, double* b, int n, int nrhs) {\n  // Create copies since LAPACK overwrites inputs\n  double* a_copy = (double*)malloc(n * n * sizeof(double));\n  double* b_copy = (double*)malloc(n * nrhs * sizeof(double));\n  if (!a_copy || !b_copy) {\n    free(a_copy);\n    free(b_copy);\n    return;\n  }\n  memcpy(a_copy, a, n * n * sizeof(double));\n  memcpy(b_copy, b, n * nrhs * sizeof(double));\n\n  // Pivot array for LAPACK\n  int* ipiv = (int*)malloc(n * sizeof(int));\n  if (!ipiv) {\n    free(a_copy);\n    free(b_copy);\n    return;\n  }\n\n  // Solve using LAPACK\n  int info =\n      LAPACKE_dgesv(LAPACK_ROW_MAJOR, n, nrhs, a_copy, n, ipiv, b_copy, nrhs);\n\n  if (info == 0) {\n    // Copy solution back to b\n    memcpy(b, b_copy, n * nrhs * sizeof(double));\n  }\n\n  free(a_copy);\n  free(b_copy);\n  free(ipiv);\n}\n\nstatic void solve_complex32(complex32* a, complex32* b, int n, int nrhs) {\n  // Create copies since LAPACK overwrites inputs\n  complex32* a_copy = (complex32*)malloc(n * n * sizeof(complex32));\n  complex32* b_copy = (complex32*)malloc(n * nrhs * sizeof(complex32));\n  if (!a_copy || !b_copy) {\n    free(a_copy);\n    free(b_copy);\n    return;\n  }\n  memcpy(a_copy, a, n * n * sizeof(complex32));\n  memcpy(b_copy, b, n * nrhs * sizeof(complex32));\n\n  // Pivot array for LAPACK\n  int* ipiv = (int*)malloc(n * sizeof(int));\n  if (!ipiv) {\n    free(a_copy);\n    free(b_copy);\n    return;\n  }\n\n  // Solve using LAPACK\n  int info =\n      LAPACKE_cgesv(LAPACK_ROW_MAJOR, n, nrhs, a_copy, n, ipiv, b_copy, nrhs);\n\n  if (info == 0) {\n    // Copy solution back to b\n    memcpy(b, b_copy, n * nrhs * sizeof(complex32));\n  }\n\n  free(a_copy);\n  free(b_copy);\n  free(ipiv);\n}\n\nstatic void solve_complex64(complex64* a, complex64* b, int n, int nrhs) {\n  // Create copies since LAPACK overwrites inputs\n  complex64* a_copy = (complex64*)malloc(n * n * sizeof(complex64));\n  complex64* b_copy = (complex64*)malloc(n * nrhs * sizeof(complex64));\n  if (!a_copy || !b_copy) {\n    free(a_copy);\n    free(b_copy);\n    return;\n  }\n  memcpy(a_copy, a, n * n * sizeof(complex64));\n  memcpy(b_copy, b, n * nrhs * sizeof(complex64));\n\n  // Pivot array for LAPACK\n  int* ipiv = (int*)malloc(n * sizeof(int));\n  if (!ipiv) {\n    free(a_copy);\n    free(b_copy);\n    return;\n  }\n\n  // Solve using LAPACK\n  int info =\n      LAPACKE_zgesv(LAPACK_ROW_MAJOR, n, nrhs, a_copy, n, ipiv, b_copy, nrhs);\n\n  if (info == 0) {\n    // Copy solution back to b\n    memcpy(b, b_copy, n * nrhs * sizeof(complex64));\n  }\n\n  free(a_copy);\n  free(b_copy);\n  free(ipiv);\n}\n\n// 
============================================================================\n// OCaml FFI Stubs\n// ============================================================================\n\n\nCAMLprim value caml_nx_op_triangular_solve(value v_a, value v_b, value v_out,\n                                           value v_upper, value v_transpose,\n                                           value v_unit_diag) {\n  CAMLparam5(v_a, v_b, v_out, v_upper, v_transpose);\n  CAMLxparam1(v_unit_diag);\n  int upper = Int_val(v_upper);\n  int transpose = Int_val(v_transpose);\n  int unit_diag = Int_val(v_unit_diag);\n  ndarray_t a = extract_ndarray(v_a);\n  ndarray_t b = extract_ndarray(v_b);\n  ndarray_t out = extract_ndarray(v_out);\n  struct caml_ba_array* ba_a = Caml_ba_array_val(Field(v_a, FFI_TENSOR_DATA));\n  struct caml_ba_array* ba_b = Caml_ba_array_val(Field(v_b, FFI_TENSOR_DATA));\n  struct caml_ba_array* ba_out =\n      Caml_ba_array_val(Field(v_out, FFI_TENSOR_DATA));\n  int kind = nx_buffer_get_kind(ba_a);\n  if (a.ndim < 2 || b.ndim < 2) {\n    cleanup_ndarray(&a);\n    cleanup_ndarray(&b);\n    cleanup_ndarray(&out);\n    caml_failwith(\"triangular_solve: inputs must have at least 2 dimensions\");\n  }\n  int m = a.shape[a.ndim - 1];\n  if (a.shape[a.ndim - 2] != m) {\n    cleanup_ndarray(&a);\n    cleanup_ndarray(&b);\n    cleanup_ndarray(&out);\n    caml_failwith(\"triangular_solve: A must be square\");\n  }\n  int bn = b.shape[b.ndim - 1];\n  if (b.shape[b.ndim - 2] != m) {\n    cleanup_ndarray(&a);\n    cleanup_ndarray(&b);\n    cleanup_ndarray(&out);\n    caml_failwith(\"triangular_solve: incompatible dimensions\");\n  }\n  int batch_size = 1;\n  for (int i = 0; i < a.ndim - 2; i++) {\n    batch_size *= a.shape[i];\n  }\n  int s_a_row = a.strides[a.ndim - 2];\n  int s_a_col = a.strides[a.ndim - 1];\n  int s_b_row = b.strides[b.ndim - 2];\n  int s_b_col = b.strides[b.ndim - 1];\n  int s_out_row = out.strides[out.ndim - 2];\n  int s_out_col = out.strides[out.ndim - 
1];\n  caml_enter_blocking_section();\n  for (int batch = 0; batch < batch_size; batch++) {\n    size_t off_a = a.offset;\n    size_t off_b = b.offset;\n    size_t off_out = out.offset;\n    if (a.ndim > 2) {\n      int remaining = batch;\n      for (int i = a.ndim - 3; i >= 0; i--) {\n        int coord = remaining % a.shape[i];\n        remaining /= a.shape[i];\n        off_a += coord * a.strides[i];\n        off_b += coord * b.strides[i];\n        off_out += coord * out.strides[i];\n      }\n    }\n    switch (kind) {\n      case CAML_BA_FLOAT32: {\n        float* base_a = (float*)ba_a->data + off_a;\n        float* base_b = (float*)ba_b->data + off_b;\n        float* base_out = (float*)ba_out->data + off_out;\n        float* A = (float*)malloc((size_t)m * m * sizeof(float));\n        float* B = (float*)malloc((size_t)m * bn * sizeof(float));\n        float* X = (float*)malloc((size_t)m * bn * sizeof(float));\n        nx_pack_f32(A, base_a, m, m, s_a_row, s_a_col);\n        nx_pack_f32(B, base_b, m, bn, s_b_row, s_b_col);\n        triangular_solve_float32(A, B, X, m, bn, upper, transpose, unit_diag);\n        nx_unpack_f32(base_out, X, m, bn, s_out_row, s_out_col);\n        free(A);\n        free(B);\n        free(X);\n        break;\n      }\n      case CAML_BA_FLOAT64: {\n        double* base_a = (double*)ba_a->data + off_a;\n        double* base_b = (double*)ba_b->data + off_b;\n        double* base_out = (double*)ba_out->data + off_out;\n        double* A = (double*)malloc((size_t)m * m * sizeof(double));\n        double* B = (double*)malloc((size_t)m * bn * sizeof(double));\n        double* X = (double*)malloc((size_t)m * bn * sizeof(double));\n        nx_pack_f64(A, base_a, m, m, s_a_row, s_a_col);\n        nx_pack_f64(B, base_b, m, bn, s_b_row, s_b_col);\n        triangular_solve_float64(A, B, X, m, bn, upper, transpose, unit_diag);\n        nx_unpack_f64(base_out, X, m, bn, s_out_row, s_out_col);\n        free(A);\n        free(B);\n        free(X);\n     
   break;\n      }\n      case CAML_BA_COMPLEX32: {\n        complex32* base_a = (complex32*)ba_a->data + off_a;\n        complex32* base_b = (complex32*)ba_b->data + off_b;\n        complex32* base_out = (complex32*)ba_out->data + off_out;\n        complex32* A = (complex32*)malloc((size_t)m * m * sizeof(complex32));\n        complex32* B = (complex32*)malloc((size_t)m * bn * sizeof(complex32));\n        complex32* X = (complex32*)malloc((size_t)m * bn * sizeof(complex32));\n        nx_pack_c32(A, base_a, m, m, s_a_row, s_a_col);\n        nx_pack_c32(B, base_b, m, bn, s_b_row, s_b_col);\n        triangular_solve_complex32(A, B, X, m, bn, upper, transpose, unit_diag);\n        nx_unpack_c32(base_out, X, m, bn, s_out_row, s_out_col);\n        free(A);\n        free(B);\n        free(X);\n        break;\n      }\n      case CAML_BA_COMPLEX64: {\n        complex64* base_a = (complex64*)ba_a->data + off_a;\n        complex64* base_b = (complex64*)ba_b->data + off_b;\n        complex64* base_out = (complex64*)ba_out->data + off_out;\n        complex64* A = (complex64*)malloc((size_t)m * m * sizeof(complex64));\n        complex64* B = (complex64*)malloc((size_t)m * bn * sizeof(complex64));\n        complex64* X = (complex64*)malloc((size_t)m * bn * sizeof(complex64));\n        nx_pack_c64(A, base_a, m, m, s_a_row, s_a_col);\n        nx_pack_c64(B, base_b, m, bn, s_b_row, s_b_col);\n        triangular_solve_complex64(A, B, X, m, bn, upper, transpose, unit_diag);\n        nx_unpack_c64(base_out, X, m, bn, s_out_row, s_out_col);\n        free(A);\n        free(B);\n        free(X);\n        break;\n      }\n      case CAML_BA_FLOAT16: {\n        uint16_t* base_a = (uint16_t*)ba_a->data + off_a;\n        uint16_t* base_b = (uint16_t*)ba_b->data + off_b;\n        uint16_t* base_out = (uint16_t*)ba_out->data + off_out;\n        uint16_t* A = (uint16_t*)malloc((size_t)m * m * sizeof(uint16_t));\n        uint16_t* B = (uint16_t*)malloc((size_t)m * bn * sizeof(uint16_t));\n       
 uint16_t* X = (uint16_t*)malloc((size_t)m * bn * sizeof(uint16_t));\n        for (int i = 0; i < m; i++) {\n          for (int j = 0; j < m; j++) {\n            A[i * m + j] = base_a[i * s_a_row + j * s_a_col];\n          }\n        }\n        for (int i = 0; i < m; i++) {\n          for (int j = 0; j < bn; j++) {\n            B[i * bn + j] = base_b[i * s_b_row + j * s_b_col];\n          }\n        }\n        triangular_solve_float16(A, B, X, m, bn, upper, transpose, unit_diag);\n        for (int i = 0; i < m; i++) {\n          for (int j = 0; j < bn; j++) {\n            base_out[i * s_out_row + j * s_out_col] = X[i * bn + j];\n          }\n        }\n        free(A);\n        free(B);\n        free(X);\n        break;\n      }\n      case NX_BA_BFLOAT16: {\n        caml_ba_bfloat16* base_a = (caml_ba_bfloat16*)ba_a->data + off_a;\n        caml_ba_bfloat16* base_b = (caml_ba_bfloat16*)ba_b->data + off_b;\n        caml_ba_bfloat16* base_out = (caml_ba_bfloat16*)ba_out->data + off_out;\n        caml_ba_bfloat16* A =\n            (caml_ba_bfloat16*)malloc((size_t)m * m * sizeof(caml_ba_bfloat16));\n        caml_ba_bfloat16* B = (caml_ba_bfloat16*)malloc(\n            (size_t)m * bn * sizeof(caml_ba_bfloat16));\n        caml_ba_bfloat16* X = (caml_ba_bfloat16*)malloc(\n            (size_t)m * bn * sizeof(caml_ba_bfloat16));\n        for (int i = 0; i < m; i++) {\n          for (int j = 0; j < m; j++) {\n            A[i * m + j] = base_a[i * s_a_row + j * s_a_col];\n          }\n        }\n        for (int i = 0; i < m; i++) {\n          for (int j = 0; j < bn; j++) {\n            B[i * bn + j] = base_b[i * s_b_row + j * s_b_col];\n          }\n        }\n        triangular_solve_bfloat16(A, B, X, m, bn, upper, transpose, unit_diag);\n        for (int i = 0; i < m; i++) {\n          for (int j = 0; j < bn; j++) {\n            base_out[i * s_out_row + j * s_out_col] = X[i * bn + j];\n          }\n        }\n        free(A);\n        free(B);\n        free(X);\n        
break;\n      }\n      case NX_BA_FP8_E4M3: {\n        caml_ba_fp8_e4m3* base_a = (caml_ba_fp8_e4m3*)ba_a->data + off_a;\n        caml_ba_fp8_e4m3* base_b = (caml_ba_fp8_e4m3*)ba_b->data + off_b;\n        caml_ba_fp8_e4m3* base_out = (caml_ba_fp8_e4m3*)ba_out->data + off_out;\n        caml_ba_fp8_e4m3* A =\n            (caml_ba_fp8_e4m3*)malloc((size_t)m * m * sizeof(caml_ba_fp8_e4m3));\n        caml_ba_fp8_e4m3* B = (caml_ba_fp8_e4m3*)malloc(\n            (size_t)m * bn * sizeof(caml_ba_fp8_e4m3));\n        caml_ba_fp8_e4m3* X = (caml_ba_fp8_e4m3*)malloc(\n            (size_t)m * bn * sizeof(caml_ba_fp8_e4m3));\n        for (int i = 0; i < m; i++) {\n          for (int j = 0; j < m; j++) {\n            A[i * m + j] = base_a[i * s_a_row + j * s_a_col];\n          }\n        }\n        for (int i = 0; i < m; i++) {\n          for (int j = 0; j < bn; j++) {\n            B[i * bn + j] = base_b[i * s_b_row + j * s_b_col];\n          }\n        }\n        triangular_solve_f8e4m3(A, B, X, m, bn, upper, transpose, unit_diag);\n        for (int i = 0; i < m; i++) {\n          for (int j = 0; j < bn; j++) {\n            base_out[i * s_out_row + j * s_out_col] = X[i * bn + j];\n          }\n        }\n        free(A);\n        free(B);\n        free(X);\n        break;\n      }\n      case NX_BA_FP8_E5M2: {\n        caml_ba_fp8_e5m2* base_a = (caml_ba_fp8_e5m2*)ba_a->data + off_a;\n        caml_ba_fp8_e5m2* base_b = (caml_ba_fp8_e5m2*)ba_b->data + off_b;\n        caml_ba_fp8_e5m2* base_out = (caml_ba_fp8_e5m2*)ba_out->data + off_out;\n        caml_ba_fp8_e5m2* A =\n            (caml_ba_fp8_e5m2*)malloc((size_t)m * m * sizeof(caml_ba_fp8_e5m2));\n        caml_ba_fp8_e5m2* B = (caml_ba_fp8_e5m2*)malloc(\n            (size_t)m * bn * sizeof(caml_ba_fp8_e5m2));\n        caml_ba_fp8_e5m2* X = (caml_ba_fp8_e5m2*)malloc(\n            (size_t)m * bn * sizeof(caml_ba_fp8_e5m2));\n        for (int i = 0; i < m; i++) {\n          for (int j = 0; j < m; j++) {\n            A[i * m + j] 
= base_a[i * s_a_row + j * s_a_col];\n          }\n        }\n        for (int i = 0; i < m; i++) {\n          for (int j = 0; j < bn; j++) {\n            B[i * bn + j] = base_b[i * s_b_row + j * s_b_col];\n          }\n        }\n        triangular_solve_f8e5m2(A, B, X, m, bn, upper, transpose, unit_diag);\n        for (int i = 0; i < m; i++) {\n          for (int j = 0; j < bn; j++) {\n            base_out[i * s_out_row + j * s_out_col] = X[i * bn + j];\n          }\n        }\n        free(A);\n        free(B);\n        free(X);\n        break;\n      }\n      default:\n        caml_leave_blocking_section();\n        cleanup_ndarray(&a);\n        cleanup_ndarray(&b);\n        cleanup_ndarray(&out);\n        caml_failwith(\"triangular_solve: unsupported dtype\");\n    }\n  }\n  caml_leave_blocking_section();\n  cleanup_ndarray(&a);\n  cleanup_ndarray(&b);\n  cleanup_ndarray(&out);\n  CAMLreturn(Val_unit);\n}\n\n// Bytecode wrapper for triangular_solve (6 arguments)\nCAMLprim value caml_nx_op_triangular_solve_bc(value *argv, int argn) {\n  CAMLparam0();\n  (void)argn;\n  value ret = caml_nx_op_triangular_solve(argv[0], argv[1], argv[2], argv[3],\n                                          argv[4], argv[5]);\n  CAMLreturn(ret);\n}\n"
  },
  {
    "path": "packages/nx/lib/backend_c/nx_c_sort.c",
    "content": "/*---------------------------------------------------------------------------\n   Copyright (c) 2026 The Raven authors. All rights reserved.\n   SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*/\n\n// Sort/search kernels for nx C backend: argmax/argmin/sort/argsort.\n\n#include <caml/alloc.h>\n#include <caml/bigarray.h>\n#include <caml/fail.h>\n#include <caml/memory.h>\n#include <limits.h>\n#include <stdlib.h>\n#include <string.h>\n\n#include \"nx_c_shared.h\"\n\n#if defined(_OPENMP)\n#define NX_ARG_PAR_THRESHOLD 4096\n#define NX_SORT_PAR_THRESHOLD 64\n#else\n#define NX_ARG_PAR_THRESHOLD LONG_MAX\n#define NX_SORT_PAR_THRESHOLD LONG_MAX\n#endif\n\ntypedef struct {\n  const ndarray_t* x;\n  int kind;\n  long x_base;\n  long x_stride_axis;\n  bool descending;\n} slice_sort_ctx_t;\n\nstatic inline int compare_u64(uint64_t a, uint64_t b) {\n  return (a > b) - (a < b);\n}\n\nstatic inline int compare_i64(int64_t a, int64_t b) {\n  return (a > b) - (a < b);\n}\n\nstatic inline int compare_float_total(double a, double b) {\n  if (a < b) return -1;\n  if (a > b) return 1;\n  if (a == b) return 0;\n  bool a_nan = isnan(a);\n  bool b_nan = isnan(b);\n  if (a_nan && b_nan) return 0;\n  if (a_nan) return -1;\n  if (b_nan) return 1;\n  return 0;\n}\n\nstatic inline bool is_nan_at(int kind, const void* data, long off) {\n  switch (kind) {\n    case CAML_BA_FLOAT16:\n      return isnan(half_to_float(((const uint16_t*)data)[off]));\n    case CAML_BA_FLOAT32:\n      return isnan(((const float*)data)[off]);\n    case CAML_BA_FLOAT64:\n      return isnan(((const double*)data)[off]);\n    case NX_BA_BFLOAT16:\n      return isnan(bfloat16_to_float(((const caml_ba_bfloat16*)data)[off]));\n    case NX_BA_FP8_E4M3:\n      return isnan(fp8_e4m3_to_float(((const caml_ba_fp8_e4m3*)data)[off]));\n    case NX_BA_FP8_E5M2:\n      return isnan(fp8_e5m2_to_float(((const caml_ba_fp8_e5m2*)data)[off]));\n    default:\n      
return false;\n  }\n}\n\nstatic inline int compare_values_at(int kind, const void* data, long a_off,\n                                    long b_off) {\n  switch (kind) {\n    case CAML_BA_SINT8: {\n      int8_t a = ((const int8_t*)data)[a_off];\n      int8_t b = ((const int8_t*)data)[b_off];\n      return (a > b) - (a < b);\n    }\n    case CAML_BA_UINT8: {\n      uint8_t a = ((const uint8_t*)data)[a_off];\n      uint8_t b = ((const uint8_t*)data)[b_off];\n      return (a > b) - (a < b);\n    }\n    case CAML_BA_SINT16: {\n      int16_t a = ((const int16_t*)data)[a_off];\n      int16_t b = ((const int16_t*)data)[b_off];\n      return (a > b) - (a < b);\n    }\n    case CAML_BA_UINT16: {\n      uint16_t a = ((const uint16_t*)data)[a_off];\n      uint16_t b = ((const uint16_t*)data)[b_off];\n      return (a > b) - (a < b);\n    }\n    case CAML_BA_INT32: {\n      int32_t a = ((const int32_t*)data)[a_off];\n      int32_t b = ((const int32_t*)data)[b_off];\n      return (a > b) - (a < b);\n    }\n    case CAML_BA_INT64: {\n      int64_t a = ((const int64_t*)data)[a_off];\n      int64_t b = ((const int64_t*)data)[b_off];\n      return compare_i64(a, b);\n    }\n    case NX_BA_UINT32: {\n      caml_ba_uint32 a = ((const caml_ba_uint32*)data)[a_off];\n      caml_ba_uint32 b = ((const caml_ba_uint32*)data)[b_off];\n      return (a > b) - (a < b);\n    }\n    case NX_BA_UINT64: {\n      caml_ba_uint64 a = ((const caml_ba_uint64*)data)[a_off];\n      caml_ba_uint64 b = ((const caml_ba_uint64*)data)[b_off];\n      return compare_u64(a, b);\n    }\n    case CAML_BA_CAML_INT:\n    case CAML_BA_NATIVE_INT: {\n      intnat a = ((const intnat*)data)[a_off];\n      intnat b = ((const intnat*)data)[b_off];\n      return (a > b) - (a < b);\n    }\n    case CAML_BA_FLOAT16: {\n      float a = half_to_float(((const uint16_t*)data)[a_off]);\n      float b = half_to_float(((const uint16_t*)data)[b_off]);\n      return compare_float_total(a, b);\n    }\n    case CAML_BA_FLOAT32: {\n      
float a = ((const float*)data)[a_off];\n      float b = ((const float*)data)[b_off];\n      return compare_float_total(a, b);\n    }\n    case CAML_BA_FLOAT64: {\n      double a = ((const double*)data)[a_off];\n      double b = ((const double*)data)[b_off];\n      return compare_float_total(a, b);\n    }\n    case CAML_BA_COMPLEX32: {\n      complex32 a = ((const complex32*)data)[a_off];\n      complex32 b = ((const complex32*)data)[b_off];\n      int c = compare_float_total(crealf(a), crealf(b));\n      if (c != 0) return c;\n      return compare_float_total(cimagf(a), cimagf(b));\n    }\n    case CAML_BA_COMPLEX64: {\n      complex64 a = ((const complex64*)data)[a_off];\n      complex64 b = ((const complex64*)data)[b_off];\n      int c = compare_float_total(creal(a), creal(b));\n      if (c != 0) return c;\n      return compare_float_total(cimag(a), cimag(b));\n    }\n    case NX_BA_BFLOAT16: {\n      float a = bfloat16_to_float(((const caml_ba_bfloat16*)data)[a_off]);\n      float b = bfloat16_to_float(((const caml_ba_bfloat16*)data)[b_off]);\n      return compare_float_total(a, b);\n    }\n    case NX_BA_BOOL: {\n      caml_ba_bool a = ((const caml_ba_bool*)data)[a_off];\n      caml_ba_bool b = ((const caml_ba_bool*)data)[b_off];\n      return (a > b) - (a < b);\n    }\n    case NX_BA_INT4: {\n      int a = int4_get((const uint8_t*)data, a_off, true);\n      int b = int4_get((const uint8_t*)data, b_off, true);\n      return (a > b) - (a < b);\n    }\n    case NX_BA_UINT4: {\n      int a = int4_get((const uint8_t*)data, a_off, false);\n      int b = int4_get((const uint8_t*)data, b_off, false);\n      return (a > b) - (a < b);\n    }\n    case NX_BA_FP8_E4M3: {\n      float a = fp8_e4m3_to_float(((const caml_ba_fp8_e4m3*)data)[a_off]);\n      float b = fp8_e4m3_to_float(((const caml_ba_fp8_e4m3*)data)[b_off]);\n      return compare_float_total(a, b);\n    }\n    case NX_BA_FP8_E5M2: {\n      float a = fp8_e5m2_to_float(((const caml_ba_fp8_e5m2*)data)[a_off]);\n  
    float b = fp8_e5m2_to_float(((const caml_ba_fp8_e5m2*)data)[b_off]);\n      return compare_float_total(a, b);\n    }\n    default:\n      return 0;\n  }\n}\n\nstatic inline void copy_value_at(int kind, const void* src_data, long src_off,\n                                 void* dst_data, long dst_off) {\n  switch (kind) {\n    case CAML_BA_SINT8:\n      ((int8_t*)dst_data)[dst_off] = ((const int8_t*)src_data)[src_off];\n      return;\n    case CAML_BA_UINT8:\n      ((uint8_t*)dst_data)[dst_off] = ((const uint8_t*)src_data)[src_off];\n      return;\n    case CAML_BA_SINT16:\n      ((int16_t*)dst_data)[dst_off] = ((const int16_t*)src_data)[src_off];\n      return;\n    case CAML_BA_UINT16:\n      ((uint16_t*)dst_data)[dst_off] = ((const uint16_t*)src_data)[src_off];\n      return;\n    case CAML_BA_INT32:\n      ((int32_t*)dst_data)[dst_off] = ((const int32_t*)src_data)[src_off];\n      return;\n    case CAML_BA_INT64:\n      ((int64_t*)dst_data)[dst_off] = ((const int64_t*)src_data)[src_off];\n      return;\n    case NX_BA_UINT32:\n      ((caml_ba_uint32*)dst_data)[dst_off] =\n          ((const caml_ba_uint32*)src_data)[src_off];\n      return;\n    case NX_BA_UINT64:\n      ((caml_ba_uint64*)dst_data)[dst_off] =\n          ((const caml_ba_uint64*)src_data)[src_off];\n      return;\n    case CAML_BA_CAML_INT:\n    case CAML_BA_NATIVE_INT:\n      ((intnat*)dst_data)[dst_off] = ((const intnat*)src_data)[src_off];\n      return;\n    case CAML_BA_FLOAT16:\n      ((uint16_t*)dst_data)[dst_off] = ((const uint16_t*)src_data)[src_off];\n      return;\n    case CAML_BA_FLOAT32:\n      ((float*)dst_data)[dst_off] = ((const float*)src_data)[src_off];\n      return;\n    case CAML_BA_FLOAT64:\n      ((double*)dst_data)[dst_off] = ((const double*)src_data)[src_off];\n      return;\n    case CAML_BA_COMPLEX32:\n      ((complex32*)dst_data)[dst_off] = ((const complex32*)src_data)[src_off];\n      return;\n    case CAML_BA_COMPLEX64:\n      ((complex64*)dst_data)[dst_off] = 
((const complex64*)src_data)[src_off];\n      return;\n    case NX_BA_BFLOAT16:\n      ((caml_ba_bfloat16*)dst_data)[dst_off] =\n          ((const caml_ba_bfloat16*)src_data)[src_off];\n      return;\n    case NX_BA_BOOL:\n      ((caml_ba_bool*)dst_data)[dst_off] =\n          ((const caml_ba_bool*)src_data)[src_off];\n      return;\n    case NX_BA_INT4: {\n      int v = int4_get((const uint8_t*)src_data, src_off, true);\n      int4_set((uint8_t*)dst_data, dst_off, v, true);\n      return;\n    }\n    case NX_BA_UINT4: {\n      int v = int4_get((const uint8_t*)src_data, src_off, false);\n      int4_set((uint8_t*)dst_data, dst_off, v, false);\n      return;\n    }\n    case NX_BA_FP8_E4M3:\n      ((caml_ba_fp8_e4m3*)dst_data)[dst_off] =\n          ((const caml_ba_fp8_e4m3*)src_data)[src_off];\n      return;\n    case NX_BA_FP8_E5M2:\n      ((caml_ba_fp8_e5m2*)dst_data)[dst_off] =\n          ((const caml_ba_fp8_e5m2*)src_data)[src_off];\n      return;\n    default:\n      return;\n  }\n}\n\nstatic inline bool kind_supported_for_sort(int kind) {\n  switch (kind) {\n    case CAML_BA_SINT8:\n    case CAML_BA_UINT8:\n    case CAML_BA_SINT16:\n    case CAML_BA_UINT16:\n    case CAML_BA_INT32:\n    case CAML_BA_INT64:\n    case NX_BA_UINT32:\n    case NX_BA_UINT64:\n    case CAML_BA_CAML_INT:\n    case CAML_BA_NATIVE_INT:\n    case CAML_BA_FLOAT16:\n    case CAML_BA_FLOAT32:\n    case CAML_BA_FLOAT64:\n    case CAML_BA_COMPLEX32:\n    case CAML_BA_COMPLEX64:\n    case NX_BA_BFLOAT16:\n    case NX_BA_BOOL:\n    case NX_BA_INT4:\n    case NX_BA_UINT4:\n    case NX_BA_FP8_E4M3:\n    case NX_BA_FP8_E5M2:\n      return true;\n    default:\n      return false;\n  }\n}\n\nstatic inline long product_range(const int* shape, int start, int end) {\n  long p = 1;\n  for (int i = start; i < end; ++i) p *= shape[i];\n  return p;\n}\n\nstatic inline long compute_base_offset_same_rank(const ndarray_t* arr,\n                                                 const ndarray_t* ref, int axis,\n  
                                               long outer_idx,\n                                                 long inner_idx) {\n  long off = arr->offset;\n  long tmp_outer = outer_idx;\n  for (int d = axis - 1; d >= 0; --d) {\n    int coord = (int)(tmp_outer % ref->shape[d]);\n    tmp_outer /= ref->shape[d];\n    off += (long)coord * arr->strides[d];\n  }\n\n  long tmp_inner = inner_idx;\n  for (int d = ref->ndim - 1; d > axis; --d) {\n    int coord = (int)(tmp_inner % ref->shape[d]);\n    tmp_inner /= ref->shape[d];\n    off += (long)coord * arr->strides[d];\n  }\n  return off;\n}\n\nstatic inline long compute_out_offset_arg(const ndarray_t* x,\n                                          const ndarray_t* out, int axis,\n                                          long outer_idx, long inner_idx,\n                                          bool keepdims) {\n  long off = out->offset;\n  long tmp_outer = outer_idx;\n  for (int d = axis - 1; d >= 0; --d) {\n    int coord = (int)(tmp_outer % x->shape[d]);\n    tmp_outer /= x->shape[d];\n    off += (long)coord * out->strides[d];\n  }\n\n  long tmp_inner = inner_idx;\n  for (int d = x->ndim - 1; d > axis; --d) {\n    int coord = (int)(tmp_inner % x->shape[d]);\n    tmp_inner /= x->shape[d];\n    int out_d = keepdims ? 
d : (d - 1);\n    off += (long)coord * out->strides[out_d];\n  }\n  return off;\n}\n\nstatic inline int compare_slice_indices(const slice_sort_ctx_t* ctx, int ia,\n                                        int ib) {\n  long a_off = ctx->x_base + (long)ia * ctx->x_stride_axis;\n  long b_off = ctx->x_base + (long)ib * ctx->x_stride_axis;\n\n  bool a_nan = is_nan_at(ctx->kind, ctx->x->data, a_off);\n  bool b_nan = is_nan_at(ctx->kind, ctx->x->data, b_off);\n\n  int cmp = 0;\n  if (a_nan && b_nan) {\n    cmp = 0;\n  } else if (a_nan) {\n    cmp = 1;\n  } else if (b_nan) {\n    cmp = -1;\n  } else {\n    cmp = compare_values_at(ctx->kind, ctx->x->data, a_off, b_off);\n    if (ctx->descending) cmp = -cmp;\n  }\n  if (cmp != 0) return cmp;\n  return (ia > ib) - (ia < ib);\n}\n\nstatic void stable_mergesort_indices(int* idx, int* tmp, int n,\n                                     const slice_sort_ctx_t* ctx) {\n  if (n <= 1) return;\n\n  int* src = idx;\n  int* dst = tmp;\n  for (int width = 1; width < n; width <<= 1) {\n    for (int left = 0; left < n; left += (width << 1)) {\n      int mid = left + width;\n      int right = left + (width << 1);\n      if (mid > n) mid = n;\n      if (right > n) right = n;\n\n      int i = left;\n      int j = mid;\n      int k = left;\n      while (i < mid && j < right) {\n        if (compare_slice_indices(ctx, src[i], src[j]) <= 0) {\n          dst[k++] = src[i++];\n        } else {\n          dst[k++] = src[j++];\n        }\n      }\n      while (i < mid) dst[k++] = src[i++];\n      while (j < right) dst[k++] = src[j++];\n    }\n    int* swap = src;\n    src = dst;\n    dst = swap;\n  }\n\n  if (src != idx) memcpy(idx, src, (size_t)n * sizeof(int));\n}\n\nstatic void arg_reduce_impl(const ndarray_t* x, ndarray_t* out, int kind,\n                            int axis, bool keepdims, bool is_max) {\n  long axis_size = x->shape[axis];\n  long outer = product_range(x->shape, 0, axis);\n  long inner = product_range(x->shape, axis + 1, 
x->ndim);\n  long groups = outer * inner;\n  if (groups == 0 || axis_size == 0) return;\n\n  bool fast_path = is_contiguous(x) && is_contiguous(out);\n  if (fast_path) {\n    int32_t* restrict out_data = (int32_t*)out->data;\n    const void* x_data = x->data;\n    long x_stride_axis = inner;\n\n    _Pragma(\n        \"omp parallel for if(groups >= NX_ARG_PAR_THRESHOLD)\") for (long g = 0;\n                                                                    g < groups;\n                                                                    ++g) {\n      long outer_idx = g / inner;\n      long inner_idx = g - (outer_idx * inner);\n      long base = x->offset + (outer_idx * axis_size * inner) + inner_idx;\n\n      int best_idx = 0;\n      long best_off = base;\n      for (long k = 1; k < axis_size; ++k) {\n        long off = base + (k * x_stride_axis);\n        int cmp = compare_values_at(kind, x_data, off, best_off);\n        if ((is_max && cmp > 0) || (!is_max && cmp < 0)) {\n          best_idx = (int)k;\n          best_off = off;\n        }\n      }\n      out_data[out->offset + g] = (int32_t)best_idx;\n    }\n    return;\n  }\n\n  for (long outer_idx = 0; outer_idx < outer; ++outer_idx) {\n    for (long inner_idx = 0; inner_idx < inner; ++inner_idx) {\n      long x_base =\n          compute_base_offset_same_rank(x, x, axis, outer_idx, inner_idx);\n      long out_off =\n          compute_out_offset_arg(x, out, axis, outer_idx, inner_idx, keepdims);\n      long x_stride_axis = x->strides[axis];\n\n      int best_idx = 0;\n      long best_off = x_base;\n      for (long k = 1; k < axis_size; ++k) {\n        long off = x_base + (k * x_stride_axis);\n        int cmp = compare_values_at(kind, x->data, off, best_off);\n        if ((is_max && cmp > 0) || (!is_max && cmp < 0)) {\n          best_idx = (int)k;\n          best_off = off;\n        }\n      }\n      ((int32_t*)out->data)[out_off] = (int32_t)best_idx;\n    }\n  }\n}\n\nstatic int sort_impl(const ndarray_t* x, 
ndarray_t* out, int kind, int axis,\n                     bool descending, bool write_indices) {\n  long axis_size = x->shape[axis];\n  long outer = product_range(x->shape, 0, axis);\n  long inner = product_range(x->shape, axis + 1, x->ndim);\n  long groups = outer * inner;\n  if (groups == 0 || axis_size == 0) return 0;\n  if (axis_size > INT_MAX) return -2;\n\n  int nthreads = 1;\n#if defined(_OPENMP)\n  if (groups >= NX_SORT_PAR_THRESHOLD) nthreads = omp_get_max_threads();\n#endif\n\n  int** idx_bufs = (int**)malloc((size_t)nthreads * sizeof(int*));\n  int** tmp_bufs = (int**)malloc((size_t)nthreads * sizeof(int*));\n  if (!idx_bufs || !tmp_bufs) {\n    free(idx_bufs);\n    free(tmp_bufs);\n    return -1;\n  }\n  for (int t = 0; t < nthreads; ++t) {\n    idx_bufs[t] = (int*)malloc((size_t)axis_size * sizeof(int));\n    tmp_bufs[t] = (int*)malloc((size_t)axis_size * sizeof(int));\n    if (!idx_bufs[t] || !tmp_bufs[t]) {\n      for (int j = 0; j <= t; ++j) {\n        free(idx_bufs[j]);\n        free(tmp_bufs[j]);\n      }\n      free(idx_bufs);\n      free(tmp_bufs);\n      return -1;\n    }\n  }\n\n  bool fast_path = is_contiguous(x) && is_contiguous(out);\n\n  _Pragma(\n      \"omp parallel for if(groups >= NX_SORT_PAR_THRESHOLD)\") for (long g = 0;\n                                                                   g < groups;\n                                                                   ++g) {\n    int tid = 0;\n#if defined(_OPENMP)\n    tid = omp_get_thread_num();\n#endif\n    int* idx = idx_bufs[tid];\n    int* tmp = tmp_bufs[tid];\n    for (int i = 0; i < axis_size; ++i) idx[i] = i;\n\n    long outer_idx = g / inner;\n    long inner_idx = g - (outer_idx * inner);\n\n    long x_base, out_base;\n    long x_stride_axis, out_stride_axis;\n    if (fast_path) {\n      x_base = x->offset + (outer_idx * axis_size * inner) + inner_idx;\n      out_base = out->offset + (outer_idx * axis_size * inner) + inner_idx;\n      x_stride_axis = inner;\n      
out_stride_axis = inner;\n    } else {\n      x_base = compute_base_offset_same_rank(x, x, axis, outer_idx, inner_idx);\n      out_base =\n          compute_base_offset_same_rank(out, x, axis, outer_idx, inner_idx);\n      x_stride_axis = x->strides[axis];\n      out_stride_axis = out->strides[axis];\n    }\n\n    slice_sort_ctx_t ctx = {.x = x,\n                            .kind = kind,\n                            .x_base = x_base,\n                            .x_stride_axis = x_stride_axis,\n                            .descending = descending};\n    stable_mergesort_indices(idx, tmp, (int)axis_size, &ctx);\n\n    if (write_indices) {\n      int32_t* out_data = (int32_t*)out->data;\n      for (long k = 0; k < axis_size; ++k) {\n        long dst_off = out_base + (k * out_stride_axis);\n        out_data[dst_off] = (int32_t)idx[k];\n      }\n    } else {\n      for (long k = 0; k < axis_size; ++k) {\n        long src_off = x_base + ((long)idx[k] * x_stride_axis);\n        long dst_off = out_base + (k * out_stride_axis);\n        copy_value_at(kind, x->data, src_off, out->data, dst_off);\n      }\n    }\n  }\n\n  for (int t = 0; t < nthreads; ++t) {\n    free(idx_bufs[t]);\n    free(tmp_bufs[t]);\n  }\n  free(idx_bufs);\n  free(tmp_bufs);\n  return 0;\n}\n\nstatic const char* validate_axis(const ndarray_t* x, int axis, const char* op) {\n  if (axis < 0 || axis >= x->ndim) {\n    if (strcmp(op, \"argmax\") == 0) return \"argmax: axis out of bounds\";\n    if (strcmp(op, \"argmin\") == 0) return \"argmin: axis out of bounds\";\n    if (strcmp(op, \"sort\") == 0) return \"sort: axis out of bounds\";\n    if (strcmp(op, \"argsort\") == 0) return \"argsort: axis out of bounds\";\n    return \"axis out of bounds\";\n  }\n  return NULL;\n}\n\nstatic const char* validate_same_shape(const ndarray_t* a, const ndarray_t* b,\n                                       const char* op) {\n  if (a->ndim != b->ndim) {\n    if (strcmp(op, \"sort\") == 0) return \"sort: shape 
mismatch\";\n    if (strcmp(op, \"argsort\") == 0) return \"argsort: shape mismatch\";\n    return \"shape mismatch\";\n  }\n  for (int i = 0; i < a->ndim; ++i) {\n    if (a->shape[i] != b->shape[i]) {\n      if (strcmp(op, \"sort\") == 0) return \"sort: shape mismatch\";\n      if (strcmp(op, \"argsort\") == 0) return \"argsort: shape mismatch\";\n      return \"shape mismatch\";\n    }\n  }\n  return NULL;\n}\n\nstatic const char* validate_arg_output(const ndarray_t* x, const ndarray_t* out,\n                                       int axis, bool keepdims,\n                                       const char* op) {\n  if (keepdims) {\n    if (out->ndim != x->ndim) {\n      if (strcmp(op, \"argmax\") == 0) return \"argmax: shape mismatch\";\n      if (strcmp(op, \"argmin\") == 0) return \"argmin: shape mismatch\";\n      return \"shape mismatch\";\n    }\n    for (int d = 0; d < x->ndim; ++d) {\n      int expected = (d == axis) ? 1 : x->shape[d];\n      if (out->shape[d] != expected) {\n        if (strcmp(op, \"argmax\") == 0) return \"argmax: shape mismatch\";\n        if (strcmp(op, \"argmin\") == 0) return \"argmin: shape mismatch\";\n        return \"shape mismatch\";\n      }\n    }\n  } else {\n    if (out->ndim != x->ndim - 1) {\n      if (strcmp(op, \"argmax\") == 0) return \"argmax: shape mismatch\";\n      if (strcmp(op, \"argmin\") == 0) return \"argmin: shape mismatch\";\n      return \"shape mismatch\";\n    }\n    for (int d = 0; d < x->ndim; ++d) {\n      if (d < axis) {\n        if (out->shape[d] != x->shape[d]) {\n          if (strcmp(op, \"argmax\") == 0) return \"argmax: shape mismatch\";\n          if (strcmp(op, \"argmin\") == 0) return \"argmin: shape mismatch\";\n          return \"shape mismatch\";\n        }\n      } else if (d > axis) {\n        if (out->shape[d - 1] != x->shape[d]) {\n          if (strcmp(op, \"argmax\") == 0) return \"argmax: shape mismatch\";\n          if (strcmp(op, \"argmin\") == 0) return \"argmin: shape mismatch\";\n 
         return \"shape mismatch\";\n        }\n      }\n    }\n  }\n  return NULL;\n}\n\nCAMLprim value caml_nx_argmax(value v_x, value v_out, value v_axis,\n                              value v_keepdims) {\n  CAMLparam4(v_x, v_out, v_axis, v_keepdims);\n\n  ndarray_t x = extract_ndarray(v_x);\n  ndarray_t out = extract_ndarray(v_out);\n  const char* err = NULL;\n  int axis = Int_val(v_axis);\n  bool keepdims = Bool_val(v_keepdims);\n\n  int kind = nx_buffer_get_kind(Caml_ba_array_val(Field(v_x, FFI_TENSOR_DATA)));\n  int out_kind =\n      nx_buffer_get_kind(Caml_ba_array_val(Field(v_out, FFI_TENSOR_DATA)));\n\n  err = validate_axis(&x, axis, \"argmax\");\n  if (err) goto fail;\n  err = validate_arg_output(&x, &out, axis, keepdims, \"argmax\");\n  if (err) goto fail;\n  if (out_kind != CAML_BA_INT32) {\n    err = \"argmax: output must be int32\";\n    goto fail;\n  }\n  if (!kind_supported_for_sort(kind)) {\n    err = \"argmax: unsupported dtype\";\n    goto fail;\n  }\n  if (x.shape[axis] > INT_MAX || x.shape[axis] > INT32_MAX) {\n    err = \"argmax: axis too large\";\n    goto fail;\n  }\n\n  caml_enter_blocking_section();\n  arg_reduce_impl(&x, &out, kind, axis, keepdims, true);\n  caml_leave_blocking_section();\n\n  cleanup_ndarray(&x);\n  cleanup_ndarray(&out);\n  CAMLreturn(Val_unit);\n\nfail:\n  cleanup_ndarray(&x);\n  cleanup_ndarray(&out);\n  caml_failwith(err);\n}\n\nCAMLprim value caml_nx_argmin(value v_x, value v_out, value v_axis,\n                              value v_keepdims) {\n  CAMLparam4(v_x, v_out, v_axis, v_keepdims);\n\n  ndarray_t x = extract_ndarray(v_x);\n  ndarray_t out = extract_ndarray(v_out);\n  const char* err = NULL;\n  int axis = Int_val(v_axis);\n  bool keepdims = Bool_val(v_keepdims);\n\n  int kind = nx_buffer_get_kind(Caml_ba_array_val(Field(v_x, FFI_TENSOR_DATA)));\n  int out_kind =\n      nx_buffer_get_kind(Caml_ba_array_val(Field(v_out, FFI_TENSOR_DATA)));\n\n  err = validate_axis(&x, axis, \"argmin\");\n  if (err) goto 
fail;\n  err = validate_arg_output(&x, &out, axis, keepdims, \"argmin\");\n  if (err) goto fail;\n  if (out_kind != CAML_BA_INT32) {\n    err = \"argmin: output must be int32\";\n    goto fail;\n  }\n  if (!kind_supported_for_sort(kind)) {\n    err = \"argmin: unsupported dtype\";\n    goto fail;\n  }\n  if (x.shape[axis] > INT_MAX || x.shape[axis] > INT32_MAX) {\n    err = \"argmin: axis too large\";\n    goto fail;\n  }\n\n  caml_enter_blocking_section();\n  arg_reduce_impl(&x, &out, kind, axis, keepdims, false);\n  caml_leave_blocking_section();\n\n  cleanup_ndarray(&x);\n  cleanup_ndarray(&out);\n  CAMLreturn(Val_unit);\n\nfail:\n  cleanup_ndarray(&x);\n  cleanup_ndarray(&out);\n  caml_failwith(err);\n}\n\nCAMLprim value caml_nx_sort(value v_x, value v_out, value v_axis,\n                            value v_descending) {\n  CAMLparam4(v_x, v_out, v_axis, v_descending);\n\n  ndarray_t x = extract_ndarray(v_x);\n  ndarray_t out = extract_ndarray(v_out);\n  const char* err = NULL;\n  int status = 0;\n  int axis = Int_val(v_axis);\n  bool descending = Bool_val(v_descending);\n\n  int kind = nx_buffer_get_kind(Caml_ba_array_val(Field(v_x, FFI_TENSOR_DATA)));\n  int out_kind =\n      nx_buffer_get_kind(Caml_ba_array_val(Field(v_out, FFI_TENSOR_DATA)));\n\n  err = validate_axis(&x, axis, \"sort\");\n  if (err) goto fail;\n  err = validate_same_shape(&x, &out, \"sort\");\n  if (err) goto fail;\n  if (kind != out_kind) {\n    err = \"sort: dtype mismatch\";\n    goto fail;\n  }\n  if (!kind_supported_for_sort(kind)) {\n    err = \"sort: unsupported dtype\";\n    goto fail;\n  }\n  if (x.shape[axis] > INT_MAX) {\n    err = \"sort: axis too large\";\n    goto fail;\n  }\n\n  caml_enter_blocking_section();\n  status = sort_impl(&x, &out, kind, axis, descending, false);\n  caml_leave_blocking_section();\n  if (status == -1) {\n    err = \"sort: allocation failed\";\n    goto fail;\n  }\n  if (status == -2) {\n    err = \"sort: axis too large\";\n    goto fail;\n  }\n\n  
cleanup_ndarray(&x);\n  cleanup_ndarray(&out);\n  CAMLreturn(Val_unit);\n\nfail:\n  cleanup_ndarray(&x);\n  cleanup_ndarray(&out);\n  caml_failwith(err);\n}\n\nCAMLprim value caml_nx_argsort(value v_x, value v_out, value v_axis,\n                               value v_descending) {\n  CAMLparam4(v_x, v_out, v_axis, v_descending);\n\n  ndarray_t x = extract_ndarray(v_x);\n  ndarray_t out = extract_ndarray(v_out);\n  const char* err = NULL;\n  int status = 0;\n  int axis = Int_val(v_axis);\n  bool descending = Bool_val(v_descending);\n\n  int kind = nx_buffer_get_kind(Caml_ba_array_val(Field(v_x, FFI_TENSOR_DATA)));\n  int out_kind =\n      nx_buffer_get_kind(Caml_ba_array_val(Field(v_out, FFI_TENSOR_DATA)));\n\n  err = validate_axis(&x, axis, \"argsort\");\n  if (err) goto fail;\n  err = validate_same_shape(&x, &out, \"argsort\");\n  if (err) goto fail;\n  if (out_kind != CAML_BA_INT32) {\n    err = \"argsort: output must be int32\";\n    goto fail;\n  }\n  if (!kind_supported_for_sort(kind)) {\n    err = \"argsort: unsupported dtype\";\n    goto fail;\n  }\n  if (x.shape[axis] > INT_MAX || x.shape[axis] > INT32_MAX) {\n    err = \"argsort: axis too large\";\n    goto fail;\n  }\n\n  caml_enter_blocking_section();\n  status = sort_impl(&x, &out, kind, axis, descending, true);\n  caml_leave_blocking_section();\n  if (status == -1) {\n    err = \"argsort: allocation failed\";\n    goto fail;\n  }\n  if (status == -2) {\n    err = \"argsort: axis too large\";\n    goto fail;\n  }\n\n  cleanup_ndarray(&x);\n  cleanup_ndarray(&out);\n  CAMLreturn(Val_unit);\n\nfail:\n  cleanup_ndarray(&x);\n  cleanup_ndarray(&out);\n  caml_failwith(err);\n}\n"
  },
  {
    "path": "packages/nx/lib/backend_c/nx_c_svd.c",
    "content": "/*---------------------------------------------------------------------------\n   Copyright (c) 2026 The Raven authors. All rights reserved.\n   SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*/\n\n#include <caml/alloc.h>\n#include <caml/bigarray.h>\n#include <caml/custom.h>\n#include <caml/fail.h>\n#include <caml/memory.h>\n#include <caml/threads.h>\n#include <complex.h>\n#include <float.h>\n#include <lapacke.h>\n\n#include \"nx_c_shared.h\"\n\n// Machine epsilon for float32 and float64\n#define NX_EPS32 FLT_EPSILON\n#define NX_EPS64 DBL_EPSILON\n\n// Helper to get element size\nstatic long get_element_size(int kind) {\n  switch (kind) {\n    case CAML_BA_SINT8:\n    case CAML_BA_UINT8:\n    case NX_BA_BOOL:\n    case NX_BA_FP8_E4M3:\n    case NX_BA_FP8_E5M2:\n      return 1;\n    case CAML_BA_SINT16:\n    case CAML_BA_UINT16:\n    case CAML_BA_FLOAT16:\n    case NX_BA_BFLOAT16:\n      return 2;\n    case CAML_BA_INT32:\n    case CAML_BA_FLOAT32:\n    case NX_BA_UINT32:\n      return 4;\n    case CAML_BA_INT64:\n    case CAML_BA_FLOAT64:\n    case NX_BA_UINT64:\n    case CAML_BA_NATIVE_INT:\n    case CAML_BA_CAML_INT:\n      return 8;\n    case CAML_BA_COMPLEX32:\n      return 8;  // two float32 components\n    case CAML_BA_COMPLEX64:\n      return 16;\n    case NX_BA_INT4:\n    case NX_BA_UINT4:\n      caml_failwith(\"get_element_size: int4/uint4 require special handling\");\n    default:\n      caml_failwith(\"get_element_size: unsupported kind\");\n  }\n  return 0;\n}\n\n// Helper functions for shape and stride operations\nstatic inline int nx_ndim(value v_shape) { return Wosize_val(v_shape); }\n\nstatic inline int nx_shape_at(value v_shape, int idx) {\n  return Int_val(Field(v_shape, idx));\n}\n\nstatic inline int nx_stride_at(value v_strides, int idx) {\n  return Int_val(Field(v_strides, idx));\n}\n\nstatic inline int nx_batch_size(value v_shape) {\n  int ndim = Wosize_val(v_shape);\n  if (ndim <= 2) return 
1;\n  int batch_size = 1;\n  for (int i = 0; i < ndim - 2; i++) {\n    batch_size *= Int_val(Field(v_shape, i));\n  }\n  return batch_size;\n}\n\nstatic inline size_t nx_batch_offset_elems(int b, value v_shape,\n                                           value v_strides) {\n  int ndim = Wosize_val(v_shape);\n  if (ndim <= 2) return 0;\n  size_t offset = 0;\n  int remaining = b;\n  // Calculate offset for batch dimensions\n  for (int i = ndim - 3; i >= 0; i--) {\n    int dim_size = Int_val(Field(v_shape, i));\n    int coord = remaining % dim_size;\n    remaining /= dim_size;\n    offset += coord * Int_val(Field(v_strides, i));\n  }\n  return offset;\n}\n\n// Helper functions for packing/unpacking matrices\nstatic void nx_pack_f32(float* dst, const float* src, int m, int n,\n                        int stride_row, int stride_col) {\n  for (int i = 0; i < m; i++) {\n    for (int j = 0; j < n; j++) {\n      dst[i * n + j] = src[i * stride_row + j * stride_col];\n    }\n  }\n}\n\nstatic void nx_unpack_f32(float* dst, const float* src, int m, int n,\n                          int stride_row, int stride_col) {\n  for (int i = 0; i < m; i++) {\n    for (int j = 0; j < n; j++) {\n      dst[i * stride_row + j * stride_col] = src[i * n + j];\n    }\n  }\n}\n\nstatic void nx_pack_f64(double* dst, const double* src, int m, int n,\n                        int stride_row, int stride_col) {\n  for (int i = 0; i < m; i++) {\n    for (int j = 0; j < n; j++) {\n      dst[i * n + j] = src[i * stride_row + j * stride_col];\n    }\n  }\n}\n\nstatic void nx_unpack_f64(double* dst, const double* src, int m, int n,\n                          int stride_row, int stride_col) {\n  for (int i = 0; i < m; i++) {\n    for (int j = 0; j < n; j++) {\n      dst[i * stride_row + j * stride_col] = src[i * n + j];\n    }\n  }\n}\n\nstatic void nx_pack_c32(complex32* dst, const complex32* src, int m, int n,\n                        int stride_row, int stride_col) {\n  for (int i = 0; i < m; i++) {\n    
for (int j = 0; j < n; j++) {\n      dst[i * n + j] = src[i * stride_row + j * stride_col];\n    }\n  }\n}\n\nstatic void nx_unpack_c32(complex32* dst, const complex32* src, int m, int n,\n                          int stride_row, int stride_col) {\n  for (int i = 0; i < m; i++) {\n    for (int j = 0; j < n; j++) {\n      dst[i * stride_row + j * stride_col] = src[i * n + j];\n    }\n  }\n}\n\nstatic void nx_pack_c64(complex64* dst, const complex64* src, int m, int n,\n                        int stride_row, int stride_col) {\n  for (int i = 0; i < m; i++) {\n    for (int j = 0; j < n; j++) {\n      dst[i * n + j] = src[i * stride_row + j * stride_col];\n    }\n  }\n}\n\nstatic void nx_unpack_c64(complex64* dst, const complex64* src, int m, int n,\n                          int stride_row, int stride_col) {\n  for (int i = 0; i < m; i++) {\n    for (int j = 0; j < n; j++) {\n      dst[i * stride_row + j * stride_col] = src[i * n + j];\n    }\n  }\n}\n\n// SVD helper functions\nstatic inline float sign_float32(float x) { return (x >= 0.0f) ? 1.0f : -1.0f; }\n\nstatic inline double sign_float64(double x) { return (x >= 0.0) ? 
1.0 : -1.0; }\n\nstatic inline float hypot_float32(float a, float b) {\n  float absa = fabsf(a);\n  float absb = fabsf(b);\n  if (absa > absb) {\n    float ratio = absb / absa;\n    return absa * sqrtf(1.0f + ratio * ratio);\n  } else if (absb > 0.0f) {\n    float ratio = absa / absb;\n    return absb * sqrtf(1.0f + ratio * ratio);\n  } else {\n    return 0.0f;\n  }\n}\n\nstatic inline double hypot_float64(double a, double b) {\n  double absa = fabs(a);\n  double absb = fabs(b);\n  if (absa > absb) {\n    double ratio = absb / absa;\n    return absa * sqrt(1.0 + ratio * ratio);\n  } else if (absb > 0.0) {\n    double ratio = absa / absb;\n    return absb * sqrt(1.0 + ratio * ratio);\n  } else {\n    return 0.0;\n  }\n}\n\nstatic void givens_float32(float a, float b, float* c, float* s) {\n  if (b == 0.0f) {\n    *c = 1.0f;\n    *s = 0.0f;\n  } else if (fabsf(b) > fabsf(a)) {\n    float t = a / b;\n    float sign_b = sign_float32(b);\n    *s = sign_b / sqrtf(1.0f + t * t);\n    *c = *s * t;\n  } else {\n    float t = b / a;\n    float sign_a = sign_float32(a);\n    *c = sign_a / sqrtf(1.0f + t * t);\n    *s = *c * t;\n  }\n}\n\nstatic void givens_float64(double a, double b, double* c, double* s) {\n  if (b == 0.0) {\n    *c = 1.0;\n    *s = 0.0;\n  } else if (fabs(b) > fabs(a)) {\n    double t = a / b;\n    double sign_b = sign_float64(b);\n    *s = sign_b / sqrt(1.0 + t * t);\n    *c = *s * t;\n  } else {\n    double t = b / a;\n    double sign_a = sign_float64(a);\n    *c = sign_a / sqrt(1.0 + t * t);\n    *s = *c * t;\n  }\n}\n\nstatic void apply_givens_left_float32(float* a, int m, int n, int i, int j,\n                                      float c, float s) {\n#pragma omp parallel for if (n > 100)\n  for (int k = 0; k < n; k++) {\n    float temp = c * a[i * n + k] + s * a[j * n + k];\n    a[j * n + k] = -s * a[i * n + k] + c * a[j * n + k];\n    a[i * n + k] = temp;\n  }\n}\n\nstatic void apply_givens_left_float64(double* a, int m, int n, int i, int j,\n        
                              double c, double s) {\n#pragma omp parallel for if (n > 100)\n  for (int k = 0; k < n; k++) {\n    double temp = c * a[i * n + k] + s * a[j * n + k];\n    a[j * n + k] = -s * a[i * n + k] + c * a[j * n + k];\n    a[i * n + k] = temp;\n  }\n}\n\nstatic void apply_givens_right_float32(float* a, int m, int n, int i, int j,\n                                       float c, float s) {\n#pragma omp parallel for if (m > 100)\n  for (int k = 0; k < m; k++) {\n    float temp = c * a[k * n + i] + s * a[k * n + j];\n    a[k * n + j] = -s * a[k * n + i] + c * a[k * n + j];\n    a[k * n + i] = temp;\n  }\n}\n\nstatic void apply_givens_right_float64(double* a, int m, int n, int i, int j,\n                                       double c, double s) {\n#pragma omp parallel for if (m > 100)\n  for (int k = 0; k < m; k++) {\n    double temp = c * a[k * n + i] + s * a[k * n + j];\n    a[k * n + j] = -s * a[k * n + i] + c * a[k * n + j];\n    a[k * n + i] = temp;\n  }\n}\n\nstatic void bidiagonalize_float32(float* a, float* u, float* v, float* diag,\n                                  float* superdiag, int m, int n) {\n  const int minmn = (m < n ? m : n);\n#pragma omp parallel for if (m > 100)\n  for (int i = 0; i < m; i++)\n    for (int j = 0; j < m; j++) u[i * m + j] = (i == j) ? 1.f : 0.f;\n#pragma omp parallel for if (n > 100)\n  for (int i = 0; i < n; i++)\n    for (int j = 0; j < n; j++) v[i * n + j] = (i == j) ? 
1.f : 0.f;\n  for (int p = 0; p < minmn; ++p) {\n    float norm2 = 0.f;\n    for (int i = p; i < m; i++) norm2 += a[i * n + p] * a[i * n + p];\n    float norm = sqrtf(norm2);\n    if (norm > 0.f) {\n      float sign = sign_float32(a[p * n + p]);\n      float alpha = -sign * norm;\n      a[p * n + p] -= alpha;\n      float beta = 1.f / (alpha * a[p * n + p]);\n#pragma omp parallel for if (n - p > 100)\n      for (int j = p + 1; j < n; ++j) {\n        float gamma = 0.f;\n        for (int i = p; i < m; i++) gamma += a[i * n + p] * a[i * n + j];\n        gamma *= beta;\n        for (int i = p; i < m; i++) a[i * n + j] -= gamma * a[i * n + p];\n      }\n#pragma omp parallel for if (m > 100)\n      for (int j = 0; j < m; ++j) {\n        float gamma = 0.f;\n        for (int i = p; i < m; i++) gamma += a[i * n + p] * u[i * m + j];\n        gamma *= beta;\n        for (int i = p; i < m; i++) u[i * m + j] -= gamma * a[i * n + p];\n      }\n    }\n    diag[p] = a[p * n + p];\n    if (p < n - 1) {\n      float norm2r = 0.f;\n      for (int j = p + 1; j < n; j++) norm2r += a[p * n + j] * a[p * n + j];\n      float normr = sqrtf(norm2r);\n      if (normr > 0.f) {\n        float sign = sign_float32(a[p * n + (p + 1)]);\n        float alpha = -sign * normr;\n        a[p * n + (p + 1)] -= alpha;\n        float beta = 1.f / (alpha * a[p * n + (p + 1)]);\n#pragma omp parallel for if (m - p > 100)\n        for (int i = p + 1; i < m; ++i) {\n          float gamma = 0.f;\n          for (int j = p + 1; j < n; j++) gamma += a[i * n + j] * a[p * n + j];\n          gamma *= beta;\n          for (int j = p + 1; j < n; j++) a[i * n + j] -= gamma * a[p * n + j];\n        }\n#pragma omp parallel for if (n > 100)\n        for (int j = 0; j < n; ++j) {\n          float gamma = 0.f;\n          for (int t = p + 1; t < n; t++) gamma += v[t * n + j] * a[p * n + t];\n          gamma *= beta;\n          for (int t = p + 1; t < n; t++) v[t * n + j] -= gamma * a[p * n + t];\n        }\n      }\n      
superdiag[p] = (p < minmn - 1) ? a[p * n + (p + 1)] : 0.f;\n    }\n  }\n}\n\nstatic void bidiagonalize_float64(double* a, double* u, double* v, double* diag,\n                                  double* superdiag, int m, int n) {\n  const int minmn = (m < n ? m : n);\n#pragma omp parallel for if (m > 100)\n  for (int i = 0; i < m; i++)\n    for (int j = 0; j < m; j++) u[i * m + j] = (i == j) ? 1.0 : 0.0;\n#pragma omp parallel for if (n > 100)\n  for (int i = 0; i < n; i++)\n    for (int j = 0; j < n; j++) v[i * n + j] = (i == j) ? 1.0 : 0.0;\n  for (int p = 0; p < minmn; ++p) {\n    double norm2 = 0.0;\n    for (int i = p; i < m; i++) norm2 += a[i * n + p] * a[i * n + p];\n    double norm = sqrt(norm2);\n    if (norm > 0.0) {\n      double sign = sign_float64(a[p * n + p]);\n      double alpha = -sign * norm;\n      a[p * n + p] -= alpha;\n      double beta = 1.0 / (alpha * a[p * n + p]);\n#pragma omp parallel for if (n - p > 100)\n      for (int j = p + 1; j < n; ++j) {\n        double gamma = 0.0;\n        for (int i = p; i < m; i++) gamma += a[i * n + p] * a[i * n + j];\n        gamma *= beta;\n        for (int i = p; i < m; i++) a[i * n + j] -= gamma * a[i * n + p];\n      }\n#pragma omp parallel for if (m > 100)\n      for (int j = 0; j < m; ++j) {\n        double gamma = 0.0;\n        for (int i = p; i < m; i++) gamma += a[i * n + p] * u[i * m + j];\n        gamma *= beta;\n        for (int i = p; i < m; i++) u[i * m + j] -= gamma * a[i * n + p];\n      }\n    }\n    diag[p] = a[p * n + p];\n    if (p < n - 1) {\n      double norm2r = 0.0;\n      for (int j = p + 1; j < n; j++) norm2r += a[p * n + j] * a[p * n + j];\n      double normr = sqrt(norm2r);\n      if (normr > 0.0) {\n        double sign = sign_float64(a[p * n + (p + 1)]);\n        double alpha = -sign * normr;\n        a[p * n + (p + 1)] -= alpha;\n        double beta = 1.0 / (alpha * a[p * n + (p + 1)]);\n#pragma omp parallel for if (m - p > 100)\n        for (int i = p + 1; i < m; ++i) {\n          
double gamma = 0.0;\n          for (int j = p + 1; j < n; j++) gamma += a[i * n + j] * a[p * n + j];\n          gamma *= beta;\n          for (int j = p + 1; j < n; j++) a[i * n + j] -= gamma * a[p * n + j];\n        }\n#pragma omp parallel for if (n > 100)\n        for (int j = 0; j < n; ++j) {\n          double gamma = 0.0;\n          for (int t = p + 1; t < n; t++) gamma += v[t * n + j] * a[p * n + t];\n          gamma *= beta;\n          for (int t = p + 1; t < n; t++) v[t * n + j] -= gamma * a[p * n + t];\n        }\n      }\n      superdiag[p] = (p < minmn - 1) ? a[p * n + (p + 1)] : 0.0;\n    }\n  }\n}\n\nstatic void svd_qr_iteration_float32(float* diag, float* superdiag, float* u,\n                                     float* v, int m, int n, int p, int q) {\n  float d = (diag[q - 1] - diag[q]) / 2.0f;\n  float shift =\n      diag[q] - superdiag[q - 1] * superdiag[q - 1] /\n                    (d + sign_float32(d) * hypot_float32(d, superdiag[q - 1]));\n  float c, s;\n  float f = diag[p] - shift;\n  float g = superdiag[p];\n  for (int k = p; k < q; k++) {\n    givens_float32(f, g, &c, &s);\n    if (k > p) superdiag[k - 1] = hypot_float32(f, g);\n    f = c * diag[k] + s * superdiag[k];\n    superdiag[k] = -s * diag[k] + c * superdiag[k];\n    g = s * diag[k + 1];\n    diag[k + 1] = c * diag[k + 1];\n    apply_givens_right_float32(v, n, n, k, k + 1, c, s);\n    givens_float32(f, g, &c, &s);\n    diag[k] = hypot_float32(f, g);\n    f = c * superdiag[k] + s * diag[k + 1];\n    diag[k + 1] = -s * superdiag[k] + c * diag[k + 1];\n    if (k < q - 1) {\n      g = s * superdiag[k + 1];\n      superdiag[k + 1] = c * superdiag[k + 1];\n    }\n    apply_givens_left_float32(u, m, m, k, k + 1, c, s);\n  }\n  superdiag[q - 1] = f;\n}\n\nstatic void svd_qr_iteration_float64(double* diag, double* superdiag, double* u,\n                                     double* v, int m, int n, int p, int q) {\n  double d = (diag[q - 1] - diag[q]) / 2.0;\n  double shift =\n      diag[q] - 
superdiag[q - 1] * superdiag[q - 1] /\n                    (d + sign_float64(d) * hypot_float64(d, superdiag[q - 1]));\n  double c, s;\n  double f = diag[p] - shift;\n  double g = superdiag[p];\n  for (int k = p; k < q; k++) {\n    givens_float64(f, g, &c, &s);\n    if (k > p) superdiag[k - 1] = hypot_float64(f, g);\n    f = c * diag[k] + s * superdiag[k];\n    superdiag[k] = -s * diag[k] + c * superdiag[k];\n    g = s * diag[k + 1];\n    diag[k + 1] = c * diag[k + 1];\n    apply_givens_right_float64(v, n, n, k, k + 1, c, s);\n    givens_float64(f, g, &c, &s);\n    diag[k] = hypot_float64(f, g);\n    f = c * superdiag[k] + s * diag[k + 1];\n    diag[k + 1] = -s * superdiag[k] + c * diag[k + 1];\n    if (k < q - 1) {\n      g = s * superdiag[k + 1];\n      superdiag[k + 1] = c * superdiag[k + 1];\n    }\n    apply_givens_left_float64(u, m, m, k, k + 1, c, s);\n  }\n  superdiag[q - 1] = f;\n}\n\nstatic void svd_iterate_float32(float* diag, float* superdiag, float* u,\n                                float* v, int m, int n) {\n  const int minmn = (m < n ? m : n);\n  const float tol = NX_EPS32 * (float)(m > n ? 
m : n);\n  const int max_iter = 75 * minmn;\n  int iter = 0;\n  while (iter++ < max_iter) {\n    int converged = 1;\n    for (int i = 0; i < minmn - 1; i++) {\n      if (fabsf(superdiag[i]) > tol * (fabsf(diag[i]) + fabsf(diag[i + 1]))) {\n        converged = 0;\n        break;\n      }\n    }\n    if (converged) break;\n    int q_pos = minmn - 1;\n    while (q_pos > 0 &&\n           fabsf(superdiag[q_pos - 1]) <=\n               tol * (fabsf(diag[q_pos - 1]) + fabsf(diag[q_pos]))) {\n      superdiag[q_pos - 1] = 0.0f;\n      q_pos--;\n    }\n    int p_pos = q_pos;\n    while (p_pos > 0 && fabsf(superdiag[p_pos - 1]) >\n                            tol * (fabsf(diag[p_pos - 1]) + fabsf(diag[p_pos])))\n      p_pos--;\n    if (p_pos < q_pos) {\n      svd_qr_iteration_float32(diag, superdiag, u, v, m, n, p_pos, q_pos);\n    }\n  }\n  if (iter >= max_iter) {\n    // Max iterations reached without full convergence; the current\n    // approximation is returned as-is.\n  }\n}\n\nstatic void svd_iterate_float64(double* diag, double* superdiag, double* u,\n                                double* v, int m, int n) {\n  const int minmn = (m < n ? m : n);\n  const double tol = NX_EPS64 * (double)(m > n ? 
m : n);\n  const int max_iter = 75 * minmn;\n  int iter = 0;\n  while (iter++ < max_iter) {\n    int converged = 1;\n    for (int i = 0; i < minmn - 1; i++) {\n      if (fabs(superdiag[i]) > tol * (fabs(diag[i]) + fabs(diag[i + 1]))) {\n        converged = 0;\n        break;\n      }\n    }\n    if (converged) break;\n    int q_pos = minmn - 1;\n    while (q_pos > 0 && fabs(superdiag[q_pos - 1]) <=\n                            tol * (fabs(diag[q_pos - 1]) + fabs(diag[q_pos]))) {\n      superdiag[q_pos - 1] = 0.0;\n      q_pos--;\n    }\n    int p_pos = q_pos;\n    while (p_pos > 0 && fabs(superdiag[p_pos - 1]) >\n                            tol * (fabs(diag[p_pos - 1]) + fabs(diag[p_pos])))\n      p_pos--;\n    if (p_pos < q_pos) {\n      svd_qr_iteration_float64(diag, superdiag, u, v, m, n, p_pos, q_pos);\n    }\n  }\n  if (iter >= max_iter) {\n    // Handle non-convergence if needed\n  }\n}\n\n// SVD implementations\nstatic void svd_float32(float* a, float* u, float* s, float* vt, int m, int n,\n                        int full_matrices) {\n  // LAPACK destroys the input matrix, so we need to make a copy\n  float* a_copy = (float*)malloc(m * n * sizeof(float));\n  if (!a_copy) return;\n  memcpy(a_copy, a, m * n * sizeof(float));\n\n  char jobu = full_matrices ? 'A' : 'S';\n  char jobvt = full_matrices ? 'A' : 'S';\n  int minmn = m < n ? m : n;\n  // ldu: U is [m, m] (full) or [m, minmn] (econ), leading dim is # cols\n  lapack_int ldu = full_matrices ? 
m : minmn;\n  // ldvt: VT is [n, n] (full) or [minmn, n] (econ), leading dim is # cols = n\n  lapack_int ldvt = n;\n\n  // Workspace for unconverged superdiagonal elements (not used in our interface)\n  float* superb = (float*)malloc((minmn - 1) * sizeof(float));\n  if (!superb) {\n    free(a_copy);\n    return;\n  }\n\n  lapack_int info = LAPACKE_sgesvd(LAPACK_ROW_MAJOR, jobu, jobvt, m, n, a_copy, n, s, u, ldu, vt, ldvt, superb);\n  (void)info;  // gesvd failures are currently not propagated to the caller\n  free(a_copy);\n  free(superb);\n  // Note: LAPACK returns singular values in descending order, which matches our expectation\n}\n\nstatic void svd_float64(double* a, double* u, double* s, double* vt, int m,\n                        int n, int full_matrices) {\n  // LAPACK destroys the input matrix, so we need to make a copy\n  double* a_copy = (double*)malloc(m * n * sizeof(double));\n  if (!a_copy) return;\n  memcpy(a_copy, a, m * n * sizeof(double));\n\n  char jobu = full_matrices ? 'A' : 'S';\n  char jobvt = full_matrices ? 'A' : 'S';\n  int minmn = m < n ? m : n;\n  // ldu: U is [m, m] (full) or [m, minmn] (econ), leading dim is # cols\n  lapack_int ldu = full_matrices ? m : minmn;\n  // ldvt: VT is [n, n] (full) or [minmn, n] (econ), leading dim is # cols = n\n  lapack_int ldvt = n;\n\n  // Workspace for unconverged superdiagonal elements (not used in our interface)\n  double* superb = (double*)malloc((minmn - 1) * sizeof(double));\n  if (!superb) {\n    free(a_copy);\n    return;\n  }\n\n  lapack_int info = LAPACKE_dgesvd(LAPACK_ROW_MAJOR, jobu, jobvt, m, n, a_copy, n, s, u, ldu, vt, ldvt, superb);\n  (void)info;  // gesvd failures are currently not propagated to the caller\n  free(a_copy);\n  free(superb);\n  // Note: LAPACK returns singular values in descending order, which matches our expectation\n}\n\n// Complex SVD helpers (similar structure, with conj and cabs)\nstatic inline complex32 sign_complex32(complex32 x) {\n  float mag = cabsf(x);\n  return (mag == 0.0f) ? (1.0f + 0.0f * I) : (x / mag);\n}\n\nstatic inline complex64 sign_complex64(complex64 x) {\n  double mag = cabs(x);\n  return (mag == 0.0) ? 
(1.0 + 0.0 * I) : (x / mag);\n}\n\nstatic inline float hypot_complex32(complex32 a, complex32 b) {\n  return hypot_float32(crealf(a), cimagf(a)) +\n         hypot_float32(crealf(b), cimagf(b));  // Approximate for simplicity\n}\n\nstatic inline double hypot_complex64(complex64 a, complex64 b) {\n  return hypot_float64(creal(a), cimag(a)) + hypot_float64(creal(b), cimag(b));\n}\n\nstatic void givens_complex32(complex32 a, complex32 b, float* c, complex32* s) {\n  float na = cabsf(a);\n  float nb = cabsf(b);\n  if (nb == 0.0f) {\n    *c = 1.0f;\n    *s = 0.0f + 0.0f * I;\n  } else if (nb > na) {\n    complex32 t = a / b;\n    *s = (1.0f / sqrtf(1.0f + cabsf(t) * cabsf(t))) * sign_complex32(b);\n    *c = crealf(*s * t);\n  } else {\n    complex32 t = b / a;\n    *c = 1.0f / sqrtf(1.0f + cabsf(t) * cabsf(t));\n    *s = *c * t * sign_complex32(a);\n  }\n}\n\nstatic void givens_complex64(complex64 a, complex64 b, double* c,\n                             complex64* s) {\n  double na = cabs(a);\n  double nb = cabs(b);\n  if (nb == 0.0) {\n    *c = 1.0;\n    *s = 0.0 + 0.0 * I;\n  } else if (nb > na) {\n    complex64 t = a / b;\n    *s = (1.0 / sqrt(1.0 + cabs(t) * cabs(t))) * sign_complex64(b);\n    *c = creal(*s * t);\n  } else {\n    complex64 t = b / a;\n    *c = 1.0 / sqrt(1.0 + cabs(t) * cabs(t));\n    *s = *c * t * sign_complex64(a);\n  }\n}\n\nstatic void apply_givens_left_complex32(complex32* a, int m, int n, int i,\n                                        int j, float c, complex32 s) {\n#pragma omp parallel for if (n > 100)\n  for (int k = 0; k < n; k++) {\n    complex32 temp = c * a[i * n + k] + s * a[j * n + k];\n    a[j * n + k] = -conj(s) * a[i * n + k] + c * a[j * n + k];\n    a[i * n + k] = temp;\n  }\n}\n\nstatic void apply_givens_left_complex64(complex64* a, int m, int n, int i,\n                                        int j, double c, complex64 s) {\n#pragma omp parallel for if (n > 100)\n  for (int k = 0; k < n; k++) {\n    complex64 temp = c * a[i * n + 
k] + s * a[j * n + k];\n    a[j * n + k] = -conj(s) * a[i * n + k] + c * a[j * n + k];\n    a[i * n + k] = temp;\n  }\n}\n\nstatic void apply_givens_right_complex32(complex32* a, int m, int n, int i,\n                                         int j, float c, complex32 s) {\n#pragma omp parallel for if (m > 100)\n  for (int k = 0; k < m; k++) {\n    complex32 temp = c * a[k * n + i] + conj(s) * a[k * n + j];\n    a[k * n + j] = -s * a[k * n + i] + c * a[k * n + j];\n    a[k * n + i] = temp;\n  }\n}\n\nstatic void apply_givens_right_complex64(complex64* a, int m, int n, int i,\n                                         int j, double c, complex64 s) {\n#pragma omp parallel for if (m > 100)\n  for (int k = 0; k < m; k++) {\n    complex64 temp = c * a[k * n + i] + conj(s) * a[k * n + j];\n    a[k * n + j] = -s * a[k * n + i] + c * a[k * n + j];\n    a[k * n + i] = temp;\n  }\n}\n\nstatic void bidiagonalize_complex32(complex32* a, complex32* u, complex32* v,\n                                    float* diag, float* superdiag, int m,\n                                    int n) {\n  const int minmn = (m < n ? m : n);\n#pragma omp parallel for if (m > 100)\n  for (int i = 0; i < m; i++)\n    for (int j = 0; j < m; j++) u[i * m + j] = (i == j) ? 1.0f : 0.0f;\n#pragma omp parallel for if (n > 100)\n  for (int i = 0; i < n; i++)\n    for (int j = 0; j < n; j++) v[i * n + j] = (i == j) ? 
1.0f : 0.0f;\n  for (int p = 0; p < minmn; ++p) {\n    float norm2 = 0.0f;\n    for (int i = p; i < m; i++) {\n      norm2 += crealf(a[i * n + p] * conjf(a[i * n + p]));\n    }\n    float norm = sqrtf(norm2);\n    if (norm > 0.0f) {\n      // Guard: the pivot can be exactly zero even when the column norm is not.\n      float app = cabsf(a[p * n + p]);\n      complex32 phase = (app > 0.0f) ? a[p * n + p] / app : (1.0f + 0.0f * I);\n      complex32 alpha = -norm * phase;\n      a[p * n + p] -= alpha;\n      float beta = 1.0f / crealf(conjf(alpha) * a[p * n + p] / norm);\n#pragma omp parallel for if (n - p > 100)\n      for (int j = p + 1; j < n; ++j) {\n        complex32 gamma = 0.0f + 0.0f * I;\n        for (int i = p; i < m; i++) gamma += conjf(a[i * n + p]) * a[i * n + j];\n        gamma *= beta;\n        for (int i = p; i < m; i++) a[i * n + j] -= gamma * a[i * n + p];\n      }\n#pragma omp parallel for if (m > 100)\n      for (int j = 0; j < m; ++j) {\n        complex32 gamma = 0.0f + 0.0f * I;\n        for (int i = p; i < m; i++) gamma += conjf(a[i * n + p]) * u[i * m + j];\n        gamma *= beta;\n        for (int i = p; i < m; i++) u[i * m + j] -= gamma * a[i * n + p];\n      }\n    }\n    diag[p] = cabsf(a[p * n + p]);\n    a[p * n + p] = diag[p];\n    if (p < n - 1) {\n      float norm2r = 0.0f;\n      for (int j = p + 1; j < n; j++) {\n        norm2r += crealf(a[p * n + j] * conjf(a[p * n + j]));\n      }\n      float normr = sqrtf(norm2r);\n      if (normr > 0.0f) {\n        float apr = cabsf(a[p * n + (p + 1)]);\n        complex32 phase =\n            (apr > 0.0f) ? a[p * n + (p + 1)] / apr : (1.0f + 0.0f * I);\n        complex32 alpha = -normr * phase;\n        a[p * n + (p + 1)] -= alpha;\n        float beta = 1.0f / crealf(a[p * n + (p + 1)] * conjf(alpha) / normr);\n#pragma omp parallel for if (m - p > 100)\n        for (int i = p + 1; i < m; ++i) {\n          complex32 gamma = 0.0f + 0.0f * I;\n          for (int j = p + 1; j < n; j++)\n            gamma += a[i * n + j] * conjf(a[p * n + j]);\n          gamma *= beta;\n          for (int j = p + 1; j < n; j++) a[i * n + j] -= gamma * a[p * n + j];\n        }\n#pragma omp parallel for if (n > 100)\n        for 
(int j = 0; j < n; ++j) {\n          complex32 gamma = 0.0f + 0.0f * I;\n          for (int t = p + 1; t < n; t++)\n            gamma += v[t * n + j] * conjf(a[p * n + t]);\n          gamma *= beta;\n          for (int t = p + 1; t < n; t++) v[t * n + j] -= gamma * a[p * n + t];\n        }\n      }\n      superdiag[p] = cabsf(a[p * n + (p + 1)]);\n      a[p * n + (p + 1)] = superdiag[p];\n    }\n  }\n}\n\nstatic void bidiagonalize_complex64(complex64* a, complex64* u, complex64* v,\n                                    double* diag, double* superdiag, int m,\n                                    int n) {\n  const int minmn = (m < n ? m : n);\n#pragma omp parallel for if (m > 100)\n  for (int i = 0; i < m; i++)\n    for (int j = 0; j < m; j++) u[i * m + j] = (i == j) ? 1.0 : 0.0;\n#pragma omp parallel for if (n > 100)\n  for (int i = 0; i < n; i++)\n    for (int j = 0; j < n; j++) v[i * n + j] = (i == j) ? 1.0 : 0.0;\n  for (int p = 0; p < minmn; ++p) {\n    double norm2 = 0.0;\n    for (int i = p; i < m; i++) {\n      norm2 += creal(a[i * n + p] * conj(a[i * n + p]));\n    }\n    double norm = sqrt(norm2);\n    if (norm > 0.0) {\n      // Guard: the pivot can be exactly zero even when the column norm is not.\n      double app = cabs(a[p * n + p]);\n      complex64 phase = (app > 0.0) ? a[p * n + p] / app : (1.0 + 0.0 * I);\n      complex64 alpha = -norm * phase;\n      a[p * n + p] -= alpha;\n      double beta = 1.0 / creal(conj(alpha) * a[p * n + p] / norm);\n#pragma omp parallel for if (n - p > 100)\n      for (int j = p + 1; j < n; ++j) {\n        complex64 gamma = 0.0 + 0.0 * I;\n        for (int i = p; i < m; i++) gamma += conj(a[i * n + p]) * a[i * n + j];\n        gamma *= beta;\n        for (int i = p; i < m; i++) a[i * n + j] -= gamma * a[i * n + p];\n      }\n#pragma omp parallel for if (m > 100)\n      for (int j = 0; j < m; ++j) {\n        complex64 gamma = 0.0 + 0.0 * I;\n        for (int i = p; i < m; i++) gamma += conj(a[i * n + p]) * u[i * m + j];\n        gamma *= beta;\n        for (int i = p; i < m; i++) u[i * m + j] -= gamma * a[i * n + p];\n      }\n    }\n    diag[p] = cabs(a[p * 
n + p]);\n    a[p * n + p] = diag[p];\n    if (p < n - 1) {\n      double norm2r = 0.0;\n      for (int j = p + 1; j < n; j++) {\n        norm2r += creal(a[p * n + j] * conj(a[p * n + j]));\n      }\n      double normr = sqrt(norm2r);\n      if (normr > 0.0) {\n        double apr = cabs(a[p * n + (p + 1)]);\n        complex64 phase =\n            (apr > 0.0) ? a[p * n + (p + 1)] / apr : (1.0 + 0.0 * I);\n        complex64 alpha = -normr * phase;\n        a[p * n + (p + 1)] -= alpha;\n        double beta = 1.0 / creal(a[p * n + (p + 1)] * conj(alpha) / normr);\n#pragma omp parallel for if (m - p > 100)\n        for (int i = p + 1; i < m; ++i) {\n          complex64 gamma = 0.0 + 0.0 * I;\n          for (int j = p + 1; j < n; j++)\n            gamma += a[i * n + j] * conj(a[p * n + j]);\n          gamma *= beta;\n          for (int j = p + 1; j < n; j++) a[i * n + j] -= gamma * a[p * n + j];\n        }\n#pragma omp parallel for if (n > 100)\n        for (int j = 0; j < n; ++j) {\n          complex64 gamma = 0.0 + 0.0 * I;\n          for (int t = p + 1; t < n; t++)\n            gamma += v[t * n + j] * conj(a[p * n + t]);\n          gamma *= beta;\n          for (int t = p + 1; t < n; t++) v[t * n + j] -= gamma * a[p * n + t];\n        }\n      }\n      superdiag[p] = cabs(a[p * n + (p + 1)]);\n      a[p * n + (p + 1)] = superdiag[p];\n    }\n  }\n}\n\nstatic void svd_qr_iteration_complex32(float* diag, float* superdiag,\n                                       complex32* u, complex32* v, int m, int n,\n                                       int p, int q) {\n  float d = (diag[q - 1] - diag[q]) / 2.0f;\n  float shift =\n      diag[q] - superdiag[q - 1] * superdiag[q - 1] /\n                    (d + sign_float32(d) * hypot_float32(d, superdiag[q - 1]));\n  float c;\n  complex32 s;\n  float f = diag[p] - shift;\n  float g = superdiag[p];\n  for (int k = p; k < q; k++) {\n    givens_complex32(f + 0.0f * I, g + 0.0f * I, &c, &s);\n    if (k > p) superdiag[k - 1] = hypot_float32(f, g);\n    f = c * diag[k] +\n        crealf(s) * superdiag[k];  // 
Simplified for real diag/superdiag\n    superdiag[k] = -crealf(conj(s)) * diag[k] + c * superdiag[k];\n    g = crealf(s) * diag[k + 1];\n    diag[k + 1] = c * diag[k + 1];\n    apply_givens_right_complex32(v, n, n, k, k + 1, c, s);\n    givens_complex32(f + 0.0f * I, g + 0.0f * I, &c, &s);\n    diag[k] = hypot_float32(f, g);\n    f = c * superdiag[k] + crealf(s) * diag[k + 1];\n    diag[k + 1] = -crealf(conj(s)) * superdiag[k] + c * diag[k + 1];\n    if (k < q - 1) {\n      g = crealf(s) * superdiag[k + 1];\n      superdiag[k + 1] = c * superdiag[k + 1];\n    }\n    apply_givens_left_complex32(u, m, m, k, k + 1, c, s);\n  }\n  superdiag[q - 1] = f;\n}\n\nstatic void svd_qr_iteration_complex64(double* diag, double* superdiag,\n                                       complex64* u, complex64* v, int m, int n,\n                                       int p, int q) {\n  double d = (diag[q - 1] - diag[q]) / 2.0;\n  double shift =\n      diag[q] - superdiag[q - 1] * superdiag[q - 1] /\n                    (d + sign_float64(d) * hypot_float64(d, superdiag[q - 1]));\n  double c;\n  complex64 s;\n  double f = diag[p] - shift;\n  double g = superdiag[p];\n  for (int k = p; k < q; k++) {\n    givens_complex64(f + 0.0 * I, g + 0.0 * I, &c, &s);\n    if (k > p) superdiag[k - 1] = hypot_float64(f, g);\n    f = c * diag[k] + creal(s) * superdiag[k];\n    superdiag[k] = -creal(conj(s)) * diag[k] + c * superdiag[k];\n    g = creal(s) * diag[k + 1];\n    diag[k + 1] = c * diag[k + 1];\n    apply_givens_right_complex64(v, n, n, k, k + 1, c, s);\n    givens_complex64(f + 0.0 * I, g + 0.0 * I, &c, &s);\n    diag[k] = hypot_float64(f, g);\n    f = c * superdiag[k] + creal(s) * diag[k + 1];\n    diag[k + 1] = -creal(conj(s)) * superdiag[k] + c * diag[k + 1];\n    if (k < q - 1) {\n      g = creal(s) * superdiag[k + 1];\n      superdiag[k + 1] = c * superdiag[k + 1];\n    }\n    apply_givens_left_complex64(u, m, m, k, k + 1, c, s);\n  }\n  superdiag[q - 1] = f;\n}\n\nstatic void 
svd_iterate_complex32(float* diag, float* superdiag, complex32* u,\n                                  complex32* v, int m, int n) {\n  const int minmn = (m < n ? m : n);\n  const float tol = NX_EPS32 * (float)(m > n ? m : n);\n  const int max_iter = 75 * minmn;\n  int iter = 0;\n  while (iter++ < max_iter) {\n    int converged = 1;\n    for (int i = 0; i < minmn - 1; i++) {\n      if (fabsf(superdiag[i]) > tol * (diag[i] + diag[i + 1])) {\n        converged = 0;\n        break;\n      }\n    }\n    if (converged) break;\n    int q_pos = minmn - 1;\n    while (q_pos > 0 && fabsf(superdiag[q_pos - 1]) <=\n                            tol * (diag[q_pos - 1] + diag[q_pos])) {\n      superdiag[q_pos - 1] = 0.0f;\n      q_pos--;\n    }\n    int p_pos = q_pos;\n    while (p_pos > 0 &&\n           fabsf(superdiag[p_pos - 1]) > tol * (diag[p_pos - 1] + diag[p_pos]))\n      p_pos--;\n    if (p_pos < q_pos) {\n      svd_qr_iteration_complex32(diag, superdiag, u, v, m, n, p_pos, q_pos);\n    }\n  }\n}\n\nstatic void svd_iterate_complex64(double* diag, double* superdiag, complex64* u,\n                                  complex64* v, int m, int n) {\n  const int minmn = (m < n ? m : n);\n  const double tol = NX_EPS64 * (double)(m > n ? 
m : n);\n  const int max_iter = 75 * minmn;\n  int iter = 0;\n  while (iter++ < max_iter) {\n    int converged = 1;\n    for (int i = 0; i < minmn - 1; i++) {\n      if (fabs(superdiag[i]) > tol * (diag[i] + diag[i + 1])) {\n        converged = 0;\n        break;\n      }\n    }\n    if (converged) break;\n    int q_pos = minmn - 1;\n    while (q_pos > 0 && fabs(superdiag[q_pos - 1]) <=\n                            tol * (diag[q_pos - 1] + diag[q_pos])) {\n      superdiag[q_pos - 1] = 0.0;\n      q_pos--;\n    }\n    int p_pos = q_pos;\n    while (p_pos > 0 &&\n           fabs(superdiag[p_pos - 1]) > tol * (diag[p_pos - 1] + diag[p_pos]))\n      p_pos--;\n    if (p_pos < q_pos) {\n      svd_qr_iteration_complex64(diag, superdiag, u, v, m, n, p_pos, q_pos);\n    }\n  }\n}\n\nstatic void svd_complex32(complex32* a, complex32* u, float* s, complex32* vt,\n                          int m, int n, int full_matrices) {\n  // LAPACK destroys the input matrix, so we need to make a copy\n  complex32* a_copy = (complex32*)malloc(m * n * sizeof(complex32));\n  if (!a_copy) return;\n  memcpy(a_copy, a, m * n * sizeof(complex32));\n\n  char jobu = full_matrices ? 'A' : 'S';\n  char jobvt = full_matrices ? 'A' : 'S';\n  int minmn = m < n ? m : n;\n  lapack_int ldu = full_matrices ? m : minmn;\n  lapack_int ldvt = full_matrices ? 
n : n;  // row-major: ld of VT is its column count, n, in both modes\n\n  // Workspace for unconverged superdiagonal elements (not used in our interface)\n  float* superb = (float*)malloc((minmn - 1) * sizeof(float));\n  if (!superb) {\n    free(a_copy);\n    return;\n  }\n\n  lapack_int info = LAPACKE_cgesvd(LAPACK_ROW_MAJOR, jobu, jobvt, m, n, a_copy, n, s, u, ldu, vt, ldvt, superb);\n  (void)info;  // gesvd failures are currently not propagated to the caller\n  free(a_copy);\n  free(superb);\n  // Note: LAPACK returns singular values in descending order, which matches our expectation\n  // Note: For complex SVD, LAPACK returns V^H (conjugate transpose), but our interface expects V^T\n  // We need to conjugate the result to match our expected output\n  if (full_matrices) {\n    for (int i = 0; i < n; i++) {\n      for (int j = 0; j < n; j++) {\n        vt[i * n + j] = conjf(vt[i * n + j]);\n      }\n    }\n  } else {\n    for (int i = 0; i < minmn; i++) {\n      for (int j = 0; j < n; j++) {\n        vt[i * n + j] = conjf(vt[i * n + j]);\n      }\n    }\n  }\n}\n\nstatic void svd_complex64(complex64* a, complex64* u, double* s, complex64* vt,\n                          int m, int n, int full_matrices) {\n  // LAPACK destroys the input matrix, so we need to make a copy\n  complex64* a_copy = (complex64*)malloc(m * n * sizeof(complex64));\n  if (!a_copy) return;\n  memcpy(a_copy, a, m * n * sizeof(complex64));\n\n  char jobu = full_matrices ? 'A' : 'S';\n  char jobvt = full_matrices ? 'A' : 'S';\n  int minmn = m < n ? m : n;\n  lapack_int ldu = full_matrices ? m : minmn;\n  lapack_int ldvt = full_matrices ? 
n : n;  // row-major: ld of VT is its column count, n, in both modes\n\n  // Workspace for unconverged superdiagonal elements (not used in our interface)\n  double* superb = (double*)malloc((minmn - 1) * sizeof(double));\n  if (!superb) {\n    free(a_copy);\n    return;\n  }\n\n  lapack_int info = LAPACKE_zgesvd(LAPACK_ROW_MAJOR, jobu, jobvt, m, n, a_copy, n, s, u, ldu, vt, ldvt, superb);\n  (void)info;  // gesvd failures are currently not propagated to the caller\n  free(a_copy);\n  free(superb);\n  // Note: LAPACK returns singular values in descending order, which matches our expectation\n  // Note: For complex SVD, LAPACK returns V^H (conjugate transpose), but our interface expects V^T\n  // We need to conjugate the result to match our expected output\n  if (full_matrices) {\n    for (int i = 0; i < n; i++) {\n      for (int j = 0; j < n; j++) {\n        vt[i * n + j] = conj(vt[i * n + j]);\n      }\n    }\n  } else {\n    for (int i = 0; i < minmn; i++) {\n      for (int j = 0; j < n; j++) {\n        vt[i * n + j] = conj(vt[i * n + j]);\n      }\n    }\n  }\n}\n\nstatic void svd_float16(uint16_t* a, uint16_t* u, uint16_t* s, uint16_t* vt,\n                        int m, int n, int full_matrices) {\n  int minmn = m < n ? m : n;\n  float* a_float = (float*)malloc(m * n * sizeof(float));\n  int u_cols = full_matrices ? m : minmn;\n  float* u_float = (float*)malloc(m * u_cols * sizeof(float));\n  float* s_float = (float*)malloc(minmn * sizeof(float));\n  int vt_rows = full_matrices ? 
n : minmn;\n  float* vt_float = (float*)malloc(vt_rows * n * sizeof(float));\n  if (!a_float || !u_float || !s_float || !vt_float) {\n    free(a_float);\n    free(u_float);\n    free(s_float);\n    free(vt_float);\n    return;\n  }\n  for (int i = 0; i < m * n; i++) a_float[i] = half_to_float(a[i]);\n  svd_float32(a_float, u_float, s_float, vt_float, m, n, full_matrices);\n  for (int i = 0; i < m * u_cols; i++) u[i] = float_to_half(u_float[i]);\n  for (int i = 0; i < minmn; i++) s[i] = float_to_half(s_float[i]);\n  for (int i = 0; i < vt_rows * n; i++) vt[i] = float_to_half(vt_float[i]);\n  free(a_float);\n  free(u_float);\n  free(s_float);\n  free(vt_float);\n}\n\nstatic void svd_bfloat16(caml_ba_bfloat16* a, caml_ba_bfloat16* u,\n                         caml_ba_bfloat16* s, caml_ba_bfloat16* vt, int m,\n                         int n, int full_matrices) {\n  int minmn = m < n ? m : n;\n  float* a_float = (float*)malloc(m * n * sizeof(float));\n  int u_cols = full_matrices ? m : minmn;\n  float* u_float = (float*)malloc(m * u_cols * sizeof(float));\n  float* s_float = (float*)malloc(minmn * sizeof(float));\n  int vt_rows = full_matrices ? 
n : minmn;\n  float* vt_float = (float*)malloc(vt_rows * n * sizeof(float));\n  if (!a_float || !u_float || !s_float || !vt_float) {\n    free(a_float);\n    free(u_float);\n    free(s_float);\n    free(vt_float);\n    return;\n  }\n  for (int i = 0; i < m * n; i++) a_float[i] = bfloat16_to_float(a[i]);\n  svd_float32(a_float, u_float, s_float, vt_float, m, n, full_matrices);\n  for (int i = 0; i < m * u_cols; i++) u[i] = float_to_bfloat16(u_float[i]);\n  for (int i = 0; i < minmn; i++) s[i] = float_to_bfloat16(s_float[i]);\n  for (int i = 0; i < vt_rows * n; i++) vt[i] = float_to_bfloat16(vt_float[i]);\n  free(a_float);\n  free(u_float);\n  free(s_float);\n  free(vt_float);\n}\n\nstatic void svd_f8e4m3(caml_ba_fp8_e4m3* a, caml_ba_fp8_e4m3* u,\n                       caml_ba_fp8_e4m3* s, caml_ba_fp8_e4m3* vt, int m, int n,\n                       int full_matrices) {\n  int minmn = m < n ? m : n;\n  float* a_float = (float*)malloc(m * n * sizeof(float));\n  int u_cols = full_matrices ? m : minmn;\n  float* u_float = (float*)malloc(m * u_cols * sizeof(float));\n  float* s_float = (float*)malloc(minmn * sizeof(float));\n  int vt_rows = full_matrices ? 
n : minmn;\n  float* vt_float = (float*)malloc(vt_rows * n * sizeof(float));\n  if (!a_float || !u_float || !s_float || !vt_float) {\n    free(a_float);\n    free(u_float);\n    free(s_float);\n    free(vt_float);\n    return;\n  }\n  for (int i = 0; i < m * n; i++) a_float[i] = fp8_e4m3_to_float(a[i]);\n  svd_float32(a_float, u_float, s_float, vt_float, m, n, full_matrices);\n  for (int i = 0; i < m * u_cols; i++) u[i] = float_to_fp8_e4m3(u_float[i]);\n  for (int i = 0; i < minmn; i++) s[i] = float_to_fp8_e4m3(s_float[i]);\n  for (int i = 0; i < vt_rows * n; i++) vt[i] = float_to_fp8_e4m3(vt_float[i]);\n  free(a_float);\n  free(u_float);\n  free(s_float);\n  free(vt_float);\n}\n\nstatic void svd_f8e5m2(caml_ba_fp8_e5m2* a, caml_ba_fp8_e5m2* u,\n                       caml_ba_fp8_e5m2* s, caml_ba_fp8_e5m2* vt, int m, int n,\n                       int full_matrices) {\n  int minmn = m < n ? m : n;\n  float* a_float = (float*)malloc(m * n * sizeof(float));\n  int u_cols = full_matrices ? m : minmn;\n  float* u_float = (float*)malloc(m * u_cols * sizeof(float));\n  float* s_float = (float*)malloc(minmn * sizeof(float));\n  int vt_rows = full_matrices ? 
n : minmn;\n  float* vt_float = (float*)malloc(vt_rows * n * sizeof(float));\n  if (!a_float || !u_float || !s_float || !vt_float) {\n    free(a_float);\n    free(u_float);\n    free(s_float);\n    free(vt_float);\n    return;\n  }\n  for (int i = 0; i < m * n; i++) a_float[i] = fp8_e5m2_to_float(a[i]);\n  svd_float32(a_float, u_float, s_float, vt_float, m, n, full_matrices);\n  for (int i = 0; i < m * u_cols; i++) u[i] = float_to_fp8_e5m2(u_float[i]);\n  for (int i = 0; i < minmn; i++) s[i] = float_to_fp8_e5m2(s_float[i]);\n  for (int i = 0; i < vt_rows * n; i++) vt[i] = float_to_fp8_e5m2(vt_float[i]);\n  free(a_float);\n  free(u_float);\n  free(s_float);\n  free(vt_float);\n}\n\n// ============================================================================\n// OCaml FFI Stubs\n// ============================================================================\n\nCAMLprim value caml_nx_op_svd(value v_in, value v_u, value v_s, value v_vt,\n                              value v_full_matrices) {\n  CAMLparam5(v_in, v_u, v_s, v_vt, v_full_matrices);\n  int full_matrices = Int_val(v_full_matrices);\n  ndarray_t in = extract_ndarray(v_in);\n  ndarray_t u_nd = extract_ndarray(v_u);\n  ndarray_t s_nd = extract_ndarray(v_s);\n  ndarray_t vt_nd = extract_ndarray(v_vt);\n  struct caml_ba_array* ba_in = Caml_ba_array_val(Field(v_in, FFI_TENSOR_DATA));\n  struct caml_ba_array* ba_u = Caml_ba_array_val(Field(v_u, FFI_TENSOR_DATA));\n  struct caml_ba_array* ba_s = Caml_ba_array_val(Field(v_s, FFI_TENSOR_DATA));\n  struct caml_ba_array* ba_vt = Caml_ba_array_val(Field(v_vt, FFI_TENSOR_DATA));\n  int kind = nx_buffer_get_kind(ba_in);\n  if (in.ndim < 2) {\n    cleanup_ndarray(&in);\n    cleanup_ndarray(&u_nd);\n    cleanup_ndarray(&s_nd);\n    cleanup_ndarray(&vt_nd);\n    caml_failwith(\"svd: input must have at least 2 dimensions\");\n  }\n  int m = in.shape[in.ndim - 2];\n  int n = in.shape[in.ndim - 1];\n  int minmn = m < n ? m : n;\n  int u_cols = full_matrices ? 
m : minmn;\n  int vt_rows = full_matrices ? n : minmn;\n  if (u_nd.shape[u_nd.ndim - 1] != u_cols || u_nd.shape[u_nd.ndim - 2] != m ||\n      vt_nd.shape[vt_nd.ndim - 1] != n ||\n      vt_nd.shape[vt_nd.ndim - 2] != vt_rows ||\n      s_nd.shape[s_nd.ndim - 1] != minmn) {\n    cleanup_ndarray(&in);\n    cleanup_ndarray(&u_nd);\n    cleanup_ndarray(&s_nd);\n    cleanup_ndarray(&vt_nd);\n    caml_failwith(\"svd: output shapes mismatch\");\n  }\n  int batch_size = 1;\n  for (int i = 0; i < in.ndim - 2; i++) {\n    batch_size *= in.shape[i];\n  }\n  int s_in_row = in.strides[in.ndim - 2];\n  int s_in_col = in.strides[in.ndim - 1];\n  int s_u_row = u_nd.strides[u_nd.ndim - 2];\n  int s_u_col = u_nd.strides[u_nd.ndim - 1];\n  int s_s_stride = s_nd.strides[s_nd.ndim - 1];\n  int s_vt_row = vt_nd.strides[vt_nd.ndim - 2];\n  int s_vt_col = vt_nd.strides[vt_nd.ndim - 1];\n  caml_enter_blocking_section();\n  for (int b = 0; b < batch_size; b++) {\n    size_t off_in = in.offset;\n    size_t off_u = u_nd.offset;\n    size_t off_s = s_nd.offset;\n    size_t off_vt = vt_nd.offset;\n    if (in.ndim > 2) {\n      int remaining = b;\n      for (int i = in.ndim - 3; i >= 0; i--) {\n        int coord = remaining % in.shape[i];\n        remaining /= in.shape[i];\n        off_in += coord * in.strides[i];\n        off_u += coord * u_nd.strides[i];\n        off_s += coord * s_nd.strides[i];\n        off_vt += coord * vt_nd.strides[i];\n      }\n    }\n    switch (kind) {\n      case CAML_BA_FLOAT32: {\n        float* base_in = (float*)ba_in->data + off_in;\n        float* base_u = (float*)ba_u->data + off_u;\n        double* base_s = (double*)ba_s->data + off_s;  // S is always float64\n        float* base_vt = (float*)ba_vt->data + off_vt;\n        float* A = (float*)malloc((size_t)m * n * sizeof(float));\n        float* U = (float*)malloc((size_t)m * u_cols * sizeof(float));\n        float* S = (float*)malloc((size_t)minmn * sizeof(float));\n        float* VT = 
(float*)malloc((size_t)vt_rows * n * sizeof(float));\n        if (!A || !U || !S || !VT) {\n          free(A);\n          free(U);\n          free(S);\n          free(VT);\n          continue;\n        }\n        nx_pack_f32(A, base_in, m, n, s_in_row, s_in_col);\n        svd_float32(A, U, S, VT, m, n, full_matrices);\n        nx_unpack_f32(base_u, U, m, u_cols, s_u_row, s_u_col);\n        // Convert S from float32 to float64\n        for (int i = 0; i < minmn; i++) base_s[i * s_s_stride] = (double)S[i];\n        nx_unpack_f32(base_vt, VT, vt_rows, n, s_vt_row, s_vt_col);\n        free(A);\n        free(U);\n        free(S);\n        free(VT);\n        break;\n      }\n      case CAML_BA_FLOAT64: {\n        double* base_in = (double*)ba_in->data + off_in;\n        double* base_u = (double*)ba_u->data + off_u;\n        double* base_s = (double*)ba_s->data + off_s;\n        double* base_vt = (double*)ba_vt->data + off_vt;\n        double* A = (double*)malloc((size_t)m * n * sizeof(double));\n        double* U = (double*)malloc((size_t)m * u_cols * sizeof(double));\n        double* S = (double*)malloc((size_t)minmn * sizeof(double));\n        double* VT = (double*)malloc((size_t)vt_rows * n * sizeof(double));\n        if (!A || !U || !S || !VT) {\n          free(A);\n          free(U);\n          free(S);\n          free(VT);\n          continue;\n        }\n        nx_pack_f64(A, base_in, m, n, s_in_row, s_in_col);\n        svd_float64(A, U, S, VT, m, n, full_matrices);\n        nx_unpack_f64(base_u, U, m, u_cols, s_u_row, s_u_col);\n        for (int i = 0; i < minmn; i++) base_s[i * s_s_stride] = S[i];\n        nx_unpack_f64(base_vt, VT, vt_rows, n, s_vt_row, s_vt_col);\n        free(A);\n        free(U);\n        free(S);\n        free(VT);\n        break;\n      }\n      case CAML_BA_COMPLEX32: {\n        complex32* base_in = (complex32*)ba_in->data + off_in;\n        complex32* base_u = (complex32*)ba_u->data + off_u;\n        double* base_s = (double*)ba_s->data 
+ off_s;  // S is always float64\n        complex32* base_vt = (complex32*)ba_vt->data + off_vt;\n        complex32* A = (complex32*)malloc((size_t)m * n * sizeof(complex32));\n        complex32* U =\n            (complex32*)malloc((size_t)m * u_cols * sizeof(complex32));\n        float* S = (float*)malloc((size_t)minmn * sizeof(float));\n        complex32* VT =\n            (complex32*)malloc((size_t)vt_rows * n * sizeof(complex32));\n        if (!A || !U || !S || !VT) {\n          free(A);\n          free(U);\n          free(S);\n          free(VT);\n          continue;\n        }\n        nx_pack_c32(A, base_in, m, n, s_in_row, s_in_col);\n        svd_complex32(A, U, S, VT, m, n, full_matrices);\n        nx_unpack_c32(base_u, U, m, u_cols, s_u_row, s_u_col);\n        // Convert S from float32 to float64\n        for (int i = 0; i < minmn; i++) base_s[i * s_s_stride] = (double)S[i];\n        nx_unpack_c32(base_vt, VT, vt_rows, n, s_vt_row, s_vt_col);\n        free(A);\n        free(U);\n        free(S);\n        free(VT);\n        break;\n      }\n      case CAML_BA_COMPLEX64: {\n        complex64* base_in = (complex64*)ba_in->data + off_in;\n        complex64* base_u = (complex64*)ba_u->data + off_u;\n        double* base_s = (double*)ba_s->data + off_s;\n        complex64* base_vt = (complex64*)ba_vt->data + off_vt;\n        complex64* A = (complex64*)malloc((size_t)m * n * sizeof(complex64));\n        complex64* U =\n            (complex64*)malloc((size_t)m * u_cols * sizeof(complex64));\n        double* S = (double*)malloc((size_t)minmn * sizeof(double));\n        complex64* VT =\n            (complex64*)malloc((size_t)vt_rows * n * sizeof(complex64));\n        if (!A || !U || !S || !VT) {\n          free(A);\n          free(U);\n          free(S);\n          free(VT);\n          continue;\n        }\n        nx_pack_c64(A, base_in, m, n, s_in_row, s_in_col);\n        svd_complex64(A, U, S, VT, m, n, full_matrices);\n        nx_unpack_c64(base_u, U, m, 
u_cols, s_u_row, s_u_col);\n        for (int i = 0; i < minmn; i++) base_s[i * s_s_stride] = S[i];\n        nx_unpack_c64(base_vt, VT, vt_rows, n, s_vt_row, s_vt_col);\n        free(A);\n        free(U);\n        free(S);\n        free(VT);\n        break;\n      }\n      case CAML_BA_FLOAT16: {\n        uint16_t* base_in = (uint16_t*)ba_in->data + off_in;\n        uint16_t* base_u = (uint16_t*)ba_u->data + off_u;\n        uint16_t* base_s = (uint16_t*)ba_s->data + off_s;\n        uint16_t* base_vt = (uint16_t*)ba_vt->data + off_vt;\n        uint16_t* A = (uint16_t*)malloc((size_t)m * n * sizeof(uint16_t));\n        uint16_t* U = (uint16_t*)malloc((size_t)m * u_cols * sizeof(uint16_t));\n        uint16_t* S = (uint16_t*)malloc((size_t)minmn * sizeof(uint16_t));\n        uint16_t* VT =\n            (uint16_t*)malloc((size_t)vt_rows * n * sizeof(uint16_t));\n        if (!A || !U || !S || !VT) {\n          free(A);\n          free(U);\n          free(S);\n          free(VT);\n          continue;\n        }\n        for (int i = 0; i < m; i++) {\n          for (int j = 0; j < n; j++) {\n            A[i * n + j] = base_in[i * s_in_row + j * s_in_col];\n          }\n        }\n        svd_float16(A, U, S, VT, m, n, full_matrices);\n        for (int i = 0; i < m; i++) {\n          for (int j = 0; j < u_cols; j++) {\n            base_u[i * s_u_row + j * s_u_col] = U[i * u_cols + j];\n          }\n        }\n        for (int i = 0; i < minmn; i++) base_s[i * s_s_stride] = S[i];\n        for (int i = 0; i < vt_rows; i++) {\n          for (int j = 0; j < n; j++) {\n            base_vt[i * s_vt_row + j * s_vt_col] = VT[i * n + j];\n          }\n        }\n        free(A);\n        free(U);\n        free(S);\n        free(VT);\n        break;\n      }\n      case NX_BA_BFLOAT16: {\n        caml_ba_bfloat16* base_in = (caml_ba_bfloat16*)ba_in->data + off_in;\n        caml_ba_bfloat16* base_u = (caml_ba_bfloat16*)ba_u->data + off_u;\n        caml_ba_bfloat16* base_s = 
(caml_ba_bfloat16*)ba_s->data + off_s;\n        caml_ba_bfloat16* base_vt = (caml_ba_bfloat16*)ba_vt->data + off_vt;\n        caml_ba_bfloat16* A =\n            (caml_ba_bfloat16*)malloc((size_t)m * n * sizeof(caml_ba_bfloat16));\n        caml_ba_bfloat16* U = (caml_ba_bfloat16*)malloc(\n            (size_t)m * u_cols * sizeof(caml_ba_bfloat16));\n        caml_ba_bfloat16* S =\n            (caml_ba_bfloat16*)malloc((size_t)minmn * sizeof(caml_ba_bfloat16));\n        caml_ba_bfloat16* VT = (caml_ba_bfloat16*)malloc(\n            (size_t)vt_rows * n * sizeof(caml_ba_bfloat16));\n        if (!A || !U || !S || !VT) {\n          free(A);\n          free(U);\n          free(S);\n          free(VT);\n          continue;\n        }\n        for (int i = 0; i < m; i++) {\n          for (int j = 0; j < n; j++) {\n            A[i * n + j] = base_in[i * s_in_row + j * s_in_col];\n          }\n        }\n        svd_bfloat16(A, U, S, VT, m, n, full_matrices);\n        for (int i = 0; i < m; i++) {\n          for (int j = 0; j < u_cols; j++) {\n            base_u[i * s_u_row + j * s_u_col] = U[i * u_cols + j];\n          }\n        }\n        for (int i = 0; i < minmn; i++) base_s[i * s_s_stride] = S[i];\n        for (int i = 0; i < vt_rows; i++) {\n          for (int j = 0; j < n; j++) {\n            base_vt[i * s_vt_row + j * s_vt_col] = VT[i * n + j];\n          }\n        }\n        free(A);\n        free(U);\n        free(S);\n        free(VT);\n        break;\n      }\n      case NX_BA_FP8_E4M3: {\n        caml_ba_fp8_e4m3* base_in = (caml_ba_fp8_e4m3*)ba_in->data + off_in;\n        caml_ba_fp8_e4m3* base_u = (caml_ba_fp8_e4m3*)ba_u->data + off_u;\n        caml_ba_fp8_e4m3* base_s = (caml_ba_fp8_e4m3*)ba_s->data + off_s;\n        caml_ba_fp8_e4m3* base_vt = (caml_ba_fp8_e4m3*)ba_vt->data + off_vt;\n        caml_ba_fp8_e4m3* A =\n            (caml_ba_fp8_e4m3*)malloc((size_t)m * n * sizeof(caml_ba_fp8_e4m3));\n        caml_ba_fp8_e4m3* U = (caml_ba_fp8_e4m3*)malloc(\n       
     (size_t)m * u_cols * sizeof(caml_ba_fp8_e4m3));\n        caml_ba_fp8_e4m3* S =\n            (caml_ba_fp8_e4m3*)malloc((size_t)minmn * sizeof(caml_ba_fp8_e4m3));\n        caml_ba_fp8_e4m3* VT = (caml_ba_fp8_e4m3*)malloc(\n            (size_t)vt_rows * n * sizeof(caml_ba_fp8_e4m3));\n        if (!A || !U || !S || !VT) {\n          free(A);\n          free(U);\n          free(S);\n          free(VT);\n          continue;\n        }\n        for (int i = 0; i < m; i++) {\n          for (int j = 0; j < n; j++) {\n            A[i * n + j] = base_in[i * s_in_row + j * s_in_col];\n          }\n        }\n        svd_f8e4m3(A, U, S, VT, m, n, full_matrices);\n        for (int i = 0; i < m; i++) {\n          for (int j = 0; j < u_cols; j++) {\n            base_u[i * s_u_row + j * s_u_col] = U[i * u_cols + j];\n          }\n        }\n        for (int i = 0; i < minmn; i++) base_s[i * s_s_stride] = S[i];\n        for (int i = 0; i < vt_rows; i++) {\n          for (int j = 0; j < n; j++) {\n            base_vt[i * s_vt_row + j * s_vt_col] = VT[i * n + j];\n          }\n        }\n        free(A);\n        free(U);\n        free(S);\n        free(VT);\n        break;\n      }\n      case NX_BA_FP8_E5M2: {\n        caml_ba_fp8_e5m2* base_in = (caml_ba_fp8_e5m2*)ba_in->data + off_in;\n        caml_ba_fp8_e5m2* base_u = (caml_ba_fp8_e5m2*)ba_u->data + off_u;\n        caml_ba_fp8_e5m2* base_s = (caml_ba_fp8_e5m2*)ba_s->data + off_s;\n        caml_ba_fp8_e5m2* base_vt = (caml_ba_fp8_e5m2*)ba_vt->data + off_vt;\n        caml_ba_fp8_e5m2* A =\n            (caml_ba_fp8_e5m2*)malloc((size_t)m * n * sizeof(caml_ba_fp8_e5m2));\n        caml_ba_fp8_e5m2* U = (caml_ba_fp8_e5m2*)malloc(\n            (size_t)m * u_cols * sizeof(caml_ba_fp8_e5m2));\n        caml_ba_fp8_e5m2* S =\n            (caml_ba_fp8_e5m2*)malloc((size_t)minmn * sizeof(caml_ba_fp8_e5m2));\n        caml_ba_fp8_e5m2* VT = (caml_ba_fp8_e5m2*)malloc(\n            (size_t)vt_rows * n * sizeof(caml_ba_fp8_e5m2));\n        
if (!A || !U || !S || !VT) {\n          free(A);\n          free(U);\n          free(S);\n          free(VT);\n          continue;\n        }\n        for (int i = 0; i < m; i++) {\n          for (int j = 0; j < n; j++) {\n            A[i * n + j] = base_in[i * s_in_row + j * s_in_col];\n          }\n        }\n        svd_f8e5m2(A, U, S, VT, m, n, full_matrices);\n        for (int i = 0; i < m; i++) {\n          for (int j = 0; j < u_cols; j++) {\n            base_u[i * s_u_row + j * s_u_col] = U[i * u_cols + j];\n          }\n        }\n        for (int i = 0; i < minmn; i++) base_s[i * s_s_stride] = S[i];\n        for (int i = 0; i < vt_rows; i++) {\n          for (int j = 0; j < n; j++) {\n            base_vt[i * s_vt_row + j * s_vt_col] = VT[i * n + j];\n          }\n        }\n        free(A);\n        free(U);\n        free(S);\n        free(VT);\n        break;\n      }\n      default:\n        caml_leave_blocking_section();\n        cleanup_ndarray(&in);\n        cleanup_ndarray(&u_nd);\n        cleanup_ndarray(&s_nd);\n        cleanup_ndarray(&vt_nd);\n        caml_failwith(\"svd: unsupported dtype\");\n    }\n  }\n  caml_leave_blocking_section();\n  cleanup_ndarray(&in);\n  cleanup_ndarray(&u_nd);\n  cleanup_ndarray(&s_nd);\n  cleanup_ndarray(&vt_nd);\n  CAMLreturn(Val_unit);\n}\n"
  },
  {
    "path": "packages/nx/lib/backend_c/nx_c_ternary.c",
    "content": "/*---------------------------------------------------------------------------\n   Copyright (c) 2026 The Raven authors. All rights reserved.\n   SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*/\n\n// Ternary operations for nx C backend\n\n#include <caml/alloc.h>\n#include <caml/bigarray.h>\n#include <caml/custom.h>\n#include <caml/fail.h>\n#include <caml/memory.h>\n#include <caml/threads.h>\n\n#include \"nx_c_shared.h\"\n\n// Type definitions for ternary operations\ntypedef void (*ternary_op_t)(const ndarray_t *, const ndarray_t *,\n                             const ndarray_t *, ndarray_t *);\n\n// Dispatch table for each type\ntypedef struct {\n  ternary_op_t i8, u8, i16, u16, i32, i64, u32, u64, inat;\n  ternary_op_t f16, f32, f64;\n  ternary_op_t c32, c64;\n  ternary_op_t bf16, bool_, i4, u4, f8e4m3, f8e5m2;\n} ternary_op_table;\n\n// Iterator for ternary operations (4 arrays)\ntypedef struct {\n  int ndim;\n  int *shape;\n  int *coords;\n  int *cond_strides;\n  int *x_strides;\n  int *y_strides;\n  int *z_strides;\n} nd_iterator_ternary_t;\n\n// Check if all 4 arrays are fully contiguous\nstatic inline bool is_fully_contiguous_ternary(const ndarray_t *cond,\n                                               const ndarray_t *x,\n                                               const ndarray_t *y,\n                                               const ndarray_t *z) {\n  if (!cond || !x || !y || !z || cond->ndim != x->ndim || x->ndim != y->ndim ||\n      y->ndim != z->ndim)\n    return false;\n  if (cond->ndim == 0) return true;\n\n  // Check C-contiguous layout\n  int expected_stride = 1;\n  for (int i = cond->ndim - 1; i >= 0; i--) {\n    if (cond->strides[i] != expected_stride ||\n        x->strides[i] != expected_stride || y->strides[i] != expected_stride ||\n        z->strides[i] != expected_stride) {\n      return false;\n    }\n    expected_stride *= cond->shape[i];\n  }\n  return 
true;\n}\n\nstatic inline void nd_iterator_init_ternary(nd_iterator_ternary_t *it,\n                                            const ndarray_t *cond,\n                                            const ndarray_t *x,\n                                            const ndarray_t *y,\n                                            const ndarray_t *z) {\n  if (!it || !cond || !x || !y || !z) {\n    fprintf(stderr, \"nx: nd_iterator_init_ternary: null pointer\\n\");\n    abort();\n  }\n  if (cond->ndim != x->ndim || x->ndim != y->ndim || y->ndim != z->ndim) {\n    fprintf(stderr, \"nx: nd_iterator_init_ternary: dimension mismatch\\n\");\n    abort();\n  }\n  it->ndim = cond->ndim;\n  it->shape = cond->shape;\n  it->coords = (int *)calloc(cond->ndim, sizeof(int));\n  it->cond_strides = cond->strides;\n  it->x_strides = x->strides;\n  it->y_strides = y->strides;\n  it->z_strides = z->strides;\n  if (!it->coords) {\n    fprintf(stderr, \"nx: nd_iterator_init_ternary: allocation failed\\n\");\n    abort();\n  }\n}\n\nstatic inline void nd_iterator_get_offsets_ternary(\n    const nd_iterator_ternary_t *it, long *cond_off, long *x_off, long *y_off,\n    long *z_off) {\n  *cond_off = 0;\n  *x_off = 0;\n  *y_off = 0;\n  *z_off = 0;\n  for (int i = 0; i < it->ndim; i++) {\n    *cond_off += it->coords[i] * it->cond_strides[i];\n    *x_off += it->coords[i] * it->x_strides[i];\n    *y_off += it->coords[i] * it->y_strides[i];\n    *z_off += it->coords[i] * it->z_strides[i];\n  }\n}\n\nstatic inline bool nd_iterator_next_ternary(nd_iterator_ternary_t *it) {\n  for (int i = it->ndim - 1; i >= 0; i--) {\n    it->coords[i]++;\n    if (it->coords[i] < it->shape[i]) {\n      return true;\n    }\n    it->coords[i] = 0;\n  }\n  return false;\n}\n\nstatic inline void nd_iterator_destroy_ternary(nd_iterator_ternary_t *it) {\n  if (it && it->coords) {\n    free(it->coords);\n    it->coords = NULL;\n  }\n}\n\n// Macro to generate all standard type variants for an operation\n#define 
GENERATE_TERNARY_OP(name)                     \\\n  TERNARY_OP_FOR_TYPE(name, int8_t, i8)               \\\n  TERNARY_OP_FOR_TYPE(name, uint8_t, u8)              \\\n  TERNARY_OP_FOR_TYPE(name, int16_t, i16)             \\\n  TERNARY_OP_FOR_TYPE(name, uint16_t, u16)            \\\n  TERNARY_OP_FOR_TYPE(name, int32_t, i32)             \\\n  TERNARY_OP_FOR_TYPE(name, int64_t, i64)             \\\n  TERNARY_OP_FOR_TYPE(name, uint32_t, u32)            \\\n  TERNARY_OP_FOR_TYPE(name, uint64_t, u64)            \\\n  TERNARY_OP_FOR_TYPE(name, intnat, inat)             \\\n  TERNARY_OP_FOR_TYPE(name, float, f32)               \\\n  TERNARY_OP_FOR_TYPE(name, double, f64)              \\\n  TERNARY_OP_FOR_TYPE(name, complex32, c32)           \\\n  TERNARY_OP_FOR_TYPE(name, complex64, c64)           \\\n  TERNARY_OP_FOR_TYPE(name, uint16_t, f16)            \\\n  TERNARY_OP_FOR_TYPE(name, caml_ba_bfloat16, bf16)   \\\n  TERNARY_OP_FOR_TYPE(name, caml_ba_fp8_e4m3, f8e4m3) \\\n  TERNARY_OP_FOR_TYPE(name, caml_ba_fp8_e5m2, f8e5m2) \\\n  TERNARY_OP_FOR_TYPE(name, caml_ba_bool, bool_)\n\n// Macro to build dispatch table\n#define BUILD_DISPATCH_TABLE(name)               \\\n  static const ternary_op_table name##_table = { \\\n      .i8 = nx_c_##name##_i8,                    \\\n      .u8 = nx_c_##name##_u8,                    \\\n      .i16 = nx_c_##name##_i16,                  \\\n      .u16 = nx_c_##name##_u16,                  \\\n      .i32 = nx_c_##name##_i32,                  \\\n      .i64 = nx_c_##name##_i64,                  \\\n      .u32 = nx_c_##name##_u32,                  \\\n      .u64 = nx_c_##name##_u64,                  \\\n      .inat = nx_c_##name##_inat,                \\\n      .f16 = nx_c_##name##_f16,                  \\\n      .f32 = nx_c_##name##_f32,                  \\\n      .f64 = nx_c_##name##_f64,                  \\\n      .c32 = nx_c_##name##_c32,                  \\\n      .c64 = nx_c_##name##_c64,                  \\\n      .bf16 = 
nx_c_##name##_bf16,                \\\n      .bool_ = nx_c_##name##_bool_,              \\\n      .i4 = nx_c_##name##_i4,                    \\\n      .u4 = nx_c_##name##_u4,                    \\\n      .f8e4m3 = nx_c_##name##_f8e4m3,            \\\n      .f8e5m2 = nx_c_##name##_f8e5m2}\n\n// Helper to iterate over inner dimensions with a kernel function for ternary\n// operations\ntypedef void (*kernel_fn)(void *, void *, void *, void *, long, long, long,\n                          long);\n\nstatic inline void iterate_inner_dims_ternary(\n    const ndarray_t *cond, const ndarray_t *x, const ndarray_t *y,\n    const ndarray_t *z, long outer_idx, kernel_fn kernel, void *cond_data,\n    void *x_data, void *y_data, void *z_data) {\n  if (x->ndim <= 1) {\n    kernel(cond_data, x_data, y_data, z_data,\n           cond->offset + outer_idx * cond->strides[0],\n           x->offset + outer_idx * x->strides[0],\n           y->offset + outer_idx * y->strides[0],\n           z->offset + outer_idx * z->strides[0]);\n    return;\n  }\n\n  long cond_base = cond->offset + outer_idx * cond->strides[0];\n  long x_base = x->offset + outer_idx * x->strides[0];\n  long y_base = y->offset + outer_idx * y->strides[0];\n  long z_base = z->offset + outer_idx * z->strides[0];\n\n  // Create temporary iterator for inner dimensions\n  int inner_ndim = x->ndim - 1;\n  int *coords = (int *)calloc(inner_ndim, sizeof(int));\n  if (!coords) {\n    fprintf(stderr, \"nx: iterate_inner_dims_ternary: allocation failed\\n\");\n    abort();\n  }\n\n  // Iterate over inner dimensions\n  bool done = false;\n  while (!done) {\n    long cond_off = cond_base;\n    long x_off = x_base;\n    long y_off = y_base;\n    long z_off = z_base;\n\n    for (int i = 0; i < inner_ndim; i++) {\n      cond_off += coords[i] * cond->strides[i + 1];\n      x_off += coords[i] * x->strides[i + 1];\n      y_off += coords[i] * y->strides[i + 1];\n      z_off += coords[i] * z->strides[i + 1];\n    }\n\n    kernel(cond_data, 
x_data, y_data, z_data, cond_off, x_off, y_off, z_off);\n\n    // Advance to next position\n    done = true;\n    for (int i = inner_ndim - 1; i >= 0; i--) {\n      coords[i]++;\n      if (coords[i] < x->shape[i + 1]) {\n        done = false;\n        break;\n      }\n      coords[i] = 0;\n    }\n  }\n\n  free(coords);\n}\n\n// Generic ternary operation kernel\n#define TERNARY_OP_KERNEL(name, T, suffix)                       \\\n  static void nx_c_##name##_##suffix##_kernel(                   \\\n      void *cond_data, void *x_data, void *y_data, void *z_data, \\\n      long cond_off, long x_off, long y_off, long z_off) {       \\\n    uint8_t *cond = (uint8_t *)cond_data;                        \\\n    T *x = (T *)x_data;                                          \\\n    T *y = (T *)y_data;                                          \\\n    T *z = (T *)z_data;                                          \\\n    z[z_off] = cond[cond_off] ? x[x_off] : y[y_off];             \\\n  }\n\n// Generic ternary operation implementation\n#define TERNARY_OP_IMPL(name, T, suffix)                                       \\\n  static void nx_c_##name##_##suffix(const ndarray_t *cond,                    \\\n                                     const ndarray_t *x, const ndarray_t *y,   \\\n                                     ndarray_t *z) {                           \\\n    if (!cond || !x || !y || !z) {                                             \\\n      fprintf(stderr, \"nx: nx_c_\" #name \"_\" #suffix \": null pointer\\n\");       \\\n      abort();                                                                 \\\n    }                                                                          \\\n    long total = total_elements_safe(x);                                       \\\n    if (total == 0) return;                                                    \\\n                                                                               \\\n    if (is_fully_contiguous_ternary(cond, 
x, y, z)) {                          \\\n      _Pragma(\"omp parallel for simd if(total > 1000)\") for (long i = 0;       \\\n                                                             i < total; i++) { \\\n        nx_c_##name##_##suffix##_kernel(cond->data, x->data, y->data, z->data, \\\n                                        cond->offset + i, x->offset + i,       \\\n                                        y->offset + i, z->offset + i);         \\\n      }                                                                        \\\n    } else if (x->shape[0] > 1 && total / x->shape[0] > 50) {                  \\\n      _Pragma(\"omp parallel for if(x->shape[0] > 4)\") for (long i = 0;         \\\n                                                           i < x->shape[0];    \\\n                                                           i++) {              \\\n        iterate_inner_dims_ternary(cond, x, y, z, i,                           \\\n                                   nx_c_##name##_##suffix##_kernel,            \\\n                                   cond->data, x->data, y->data, z->data);     \\\n      }                                                                        \\\n    } else {                                                                   \\\n      nd_iterator_ternary_t it;                                                \\\n      nd_iterator_init_ternary(&it, cond, x, y, z);                            \\\n      do {                                                                     \\\n        long cond_off, x_off, y_off, z_off;                                    \\\n        nd_iterator_get_offsets_ternary(&it, &cond_off, &x_off, &y_off,        \\\n                                        &z_off);                               \\\n        nx_c_##name##_##suffix##_kernel(                                       \\\n            cond->data, x->data, y->data, z->data, cond->offset + cond_off,    \\\n            x->offset + x_off, y->offset + 
y_off, z->offset + z_off);          \\\n      } while (nd_iterator_next_ternary(&it));                                 \\\n      nd_iterator_destroy_ternary(&it);                                        \\\n    }                                                                          \\\n  }\n\n// Macro to generate both kernel and implementation for an operation\n#define TERNARY_OP_FOR_TYPE(name, T, suffix) \\\n  TERNARY_OP_KERNEL(name, T, suffix)         \\\n  TERNARY_OP_IMPL(name, T, suffix)\n\n// Special implementation for int4 (packed, unpack/select/pack)\n#define INT4_WHERE_IMPL(signedness, suffix)                                    \\\n  static void nx_c_where_##suffix##_kernel(                                    \\\n      void *cond_data, void *x_data, void *y_data, void *z_data,               \\\n      long cond_off, long x_off, long y_off, long z_off) {                     \\\n    uint8_t *cond = (uint8_t *)cond_data;                                      \\\n    uint8_t *x = (uint8_t *)x_data;                                            \\\n    uint8_t *y = (uint8_t *)y_data;                                            \\\n    uint8_t *z = (uint8_t *)z_data;                                            \\\n    long byte_off = z_off / 2;                                                 \\\n    int nib_off = z_off % 2;                                                   \\\n    uint8_t *src = cond[cond_off] ? x : y;                                     \\\n    long src_byte_off = (cond[cond_off] ? x_off : y_off) / 2;                  \\\n    int src_nib_off = (cond[cond_off] ? x_off : y_off) % 2;                    \\\n    int a = src_nib_off                                                        \\\n                ? (signedness ? (int8_t)(src[src_byte_off] >> 4)               \\\n                              : (src[src_byte_off] >> 4) & 0x0F)               \\\n                : (signedness ? 
(int8_t)((src[src_byte_off] & 0x0F) << 4) >> 4 \\\n                              : src[src_byte_off] & 0x0F);                     \\\n    uint8_t nib = (uint8_t)a & 0x0F;                                           \\\n    if (nib_off) {                                                             \\\n      z[byte_off] = (z[byte_off] & 0x0F) | (nib << 4);                         \\\n    } else {                                                                   \\\n      z[byte_off] = (z[byte_off] & 0xF0) | nib;                                \\\n    }                                                                          \\\n  }                                                                            \\\n  static void nx_c_where_##suffix(const ndarray_t *cond, const ndarray_t *x,   \\\n                                  const ndarray_t *y, ndarray_t *z) {          \\\n    if (!cond || !x || !y || !z) {                                             \\\n      fprintf(stderr, \"nx: nx_c_where_\" #suffix \": null pointer\\n\");           \\\n      abort();                                                                 \\\n    }                                                                          \\\n    long total = total_elements_safe(x);                                       \\\n    if (total == 0) return;                                                    \\\n                                                                               \\\n    if (is_fully_contiguous_ternary(cond, x, y, z)) {                          \\\n      /* Pass base pointers plus logical (nibble) offsets, as in the strided  \\\n         path below, so the kernel's byte/nibble split stays correct for      \\\n         views with non-zero offsets. */                                       \\\n      _Pragma(\"omp parallel for if(total > 10000)\") for (long i = 0;           \\\n                                                         i < total; i++) {     \\\n        nx_c_where_##suffix##_kernel(cond->data, x->data, y->data, z->data,    \\\n                                     cond->offset + i, x->offset + i,          \\\n                                     y->offset + i, z->offset + i);            \\\n      }                                                                        \\\n    } else {                                                                   \\\n      nd_iterator_ternary_t it;                                                \\\n      nd_iterator_init_ternary(&it, cond, x, y, z);                            \\\n      void *cond_data = cond->data;                                            \\\n      void *x_data = x->data;                                                  \\\n      void *y_data = y->data;                                                  \\\n      void *z_data = z->data;                                                  \\\n      do {                                                                     \\\n        long cond_off, x_off, y_off, z_off;                                    \\\n        nd_iterator_get_offsets_ternary(&it, &cond_off, &x_off, &y_off,        \\\n                                        &z_off);                               \\\n        nx_c_where_##suffix##_kernel(                                          \\\n            cond_data, x_data, y_data, z_data, cond->offset + cond_off,        \\\n            x->offset + x_off, y->offset + y_off, z->offset + z_off);          \\\n      } while (nd_iterator_next_ternary(&it));                                 \\\n      nd_iterator_destroy_ternary(&it);                                        \\\n    }                                                                          \\\n  }\n\n// Generate for where\nGENERATE_TERNARY_OP(where)\nINT4_WHERE_IMPL(1, i4)\nINT4_WHERE_IMPL(0, u4)\nBUILD_DISPATCH_TABLE(where);\n\n// Generic dispatch function for ternary operations\nstatic void dispatch_ternary_op(value v_cond, value v_x, value v_y, value v_z,\n                   
             const ternary_op_table *table,\n                                const char *op_name) {\n  // Extract ndarrays from FFI tensors\n  ndarray_t cond = extract_ndarray(v_cond);\n  ndarray_t x = extract_ndarray(v_x);\n  ndarray_t y = extract_ndarray(v_y);\n  ndarray_t z = extract_ndarray(v_z);\n\n  // Check shapes match\n  if (cond.ndim != x.ndim || cond.ndim != y.ndim || cond.ndim != z.ndim) {\n    cleanup_ndarray(&cond);\n    cleanup_ndarray(&x);\n    cleanup_ndarray(&y);\n    cleanup_ndarray(&z);\n    caml_failwith(\"shape mismatch\");\n  }\n  for (int i = 0; i < cond.ndim; i++) {\n    if (cond.shape[i] != x.shape[i] || cond.shape[i] != y.shape[i] ||\n        cond.shape[i] != z.shape[i]) {\n      cleanup_ndarray(&cond);\n      cleanup_ndarray(&x);\n      cleanup_ndarray(&y);\n      cleanup_ndarray(&z);\n      caml_failwith(\"shape mismatch\");\n    }\n  }\n\n  // Get bigarray kind from the data fields\n  value v_cond_data = Field(v_cond, FFI_TENSOR_DATA);\n  value v_x_data = Field(v_x, FFI_TENSOR_DATA);\n  value v_y_data = Field(v_y, FFI_TENSOR_DATA);\n  value v_z_data = Field(v_z, FFI_TENSOR_DATA);\n\n  struct caml_ba_array *ba_cond = Caml_ba_array_val(v_cond_data);\n  int kind_cond = nx_buffer_get_kind(ba_cond);\n\n  // Assume condition is bool or uint8\n  if (kind_cond != NX_BA_BOOL && kind_cond != CAML_BA_UINT8) {\n    cleanup_ndarray(&cond);\n    cleanup_ndarray(&x);\n    cleanup_ndarray(&y);\n    cleanup_ndarray(&z);\n    caml_failwith(\"condition must be bool or uint8\");\n  }\n\n  struct caml_ba_array *ba_x = Caml_ba_array_val(v_x_data);\n  int kind = nx_buffer_get_kind(ba_x);\n\n  // Check kinds match for x, y, z\n  int kind_y = nx_buffer_get_kind(Caml_ba_array_val(v_y_data));\n  int kind_z = nx_buffer_get_kind(Caml_ba_array_val(v_z_data));\n  if (kind != kind_y || kind != kind_z) {\n    cleanup_ndarray(&cond);\n    cleanup_ndarray(&x);\n    cleanup_ndarray(&y);\n    cleanup_ndarray(&z);\n    caml_failwith(\"dtype mismatch\");\n  }\n\n  // Select 
operation based on dtype\n  ternary_op_t op = NULL;\n  switch (kind) {\n    case CAML_BA_SINT8:\n      op = table->i8;\n      break;\n    case CAML_BA_UINT8:\n      op = table->u8;\n      break;\n    case CAML_BA_SINT16:\n      op = table->i16;\n      break;\n    case CAML_BA_UINT16:\n      op = table->u16;\n      break;\n    case CAML_BA_INT32:\n      op = table->i32;\n      break;\n    case CAML_BA_INT64:\n      op = table->i64;\n      break;\n    case NX_BA_UINT32:\n      op = table->u32;\n      break;\n    case NX_BA_UINT64:\n      op = table->u64;\n      break;\n    case CAML_BA_CAML_INT:\n    case CAML_BA_NATIVE_INT:\n      op = table->inat;\n      break;\n    case CAML_BA_FLOAT16:\n      op = table->f16;\n      break;\n    case CAML_BA_FLOAT32:\n      op = table->f32;\n      break;\n    case CAML_BA_FLOAT64:\n      op = table->f64;\n      break;\n    case CAML_BA_COMPLEX32:\n      op = table->c32;\n      break;\n    case CAML_BA_COMPLEX64:\n      op = table->c64;\n      break;\n    case NX_BA_BFLOAT16:\n      op = table->bf16;\n      break;\n    case NX_BA_BOOL:\n      op = table->bool_;\n      break;\n    case NX_BA_INT4:\n      op = table->i4;\n      break;\n    case NX_BA_UINT4:\n      op = table->u4;\n      break;\n    case NX_BA_FP8_E4M3:\n      op = table->f8e4m3;\n      break;\n    case NX_BA_FP8_E5M2:\n      op = table->f8e5m2;\n      break;\n    default:\n      cleanup_ndarray(&cond);\n      cleanup_ndarray(&x);\n      cleanup_ndarray(&y);\n      cleanup_ndarray(&z);\n      caml_failwith(\"dispatch_ternary_op: unsupported dtype\");\n  }\n\n  if (!op) {\n    char msg[256];\n    snprintf(msg, sizeof(msg), \"%s: operation not supported for dtype\",\n             op_name);\n    cleanup_ndarray(&cond);\n    cleanup_ndarray(&x);\n    cleanup_ndarray(&y);\n    cleanup_ndarray(&z);\n    caml_failwith(msg);\n  }\n\n  // Enter blocking section for potentially long computation\n  caml_enter_blocking_section();\n  op(&cond, &x, &y, &z);\n  
caml_leave_blocking_section();\n\n  // Clean up if heap allocated\n  cleanup_ndarray(&cond);\n  cleanup_ndarray(&x);\n  cleanup_ndarray(&y);\n  cleanup_ndarray(&z);\n}\n\n// ============================================================================\n// OCaml FFI Stubs\n// ============================================================================\n\nCAMLprim value caml_nx_where(value v_cond, value v_x, value v_y, value v_z) {\n  CAMLparam4(v_cond, v_x, v_y, v_z);\n  dispatch_ternary_op(v_cond, v_x, v_y, v_z, &where_table, \"where\");\n  CAMLreturn(Val_unit);\n}\n"
  },
  {
    "path": "packages/nx/lib/backend_c/nx_c_unary.c",
    "content": "/*---------------------------------------------------------------------------\n   Copyright (c) 2026 The Raven authors. All rights reserved.\n   SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*/\n\n// Unary operations for nx C backend\n\n#include <caml/alloc.h>\n#include <caml/bigarray.h>\n#include <caml/custom.h>\n#include <caml/fail.h>\n#include <caml/memory.h>\n#include <caml/threads.h>\n#include <complex.h>\n#include <math.h>\n\n#include \"nx_c_shared.h\"\n\n// Type definitions for unary operations\ntypedef void (*unary_op_t)(const ndarray_t *, ndarray_t *);\n\n// Dispatch table for each type\ntypedef struct {\n  unary_op_t i8, u8, i16, u16, i32, i64, u32, u64, inat;\n  unary_op_t f16, f32, f64;\n  unary_op_t c32, c64;\n  unary_op_t bf16, bool_, i4, u4, f8e4m3, f8e5m2;\n} unary_op_table;\n\n// Macro to generate all standard type variants for an operation (ints and\n// floats) Note: float16, bfloat16, fp8 types need special handling with\n// conversion\n#define GENERATE_UNARY_OP(name, OP_EXPR)               \\\n  UNARY_OP_FOR_TYPE(name, int8_t, i8, OP_EXPR)         \\\n  UNARY_OP_FOR_TYPE(name, uint8_t, u8, OP_EXPR)        \\\n  UNARY_OP_FOR_TYPE(name, int16_t, i16, OP_EXPR)       \\\n  UNARY_OP_FOR_TYPE(name, uint16_t, u16, OP_EXPR)      \\\n  UNARY_OP_FOR_TYPE(name, int32_t, i32, OP_EXPR)       \\\n  UNARY_OP_FOR_TYPE(name, int64_t, i64, OP_EXPR)       \\\n  UNARY_OP_FOR_TYPE(name, uint32_t, u32, OP_EXPR)      \\\n  UNARY_OP_FOR_TYPE(name, uint64_t, u64, OP_EXPR)      \\\n  UNARY_OP_FOR_TYPE(name, intnat, inat, OP_EXPR)       \\\n  UNARY_OP_FOR_TYPE(name, float, f32, OP_EXPR)         \\\n  UNARY_OP_FOR_TYPE(name, double, f64, OP_EXPR)\n\n// Macro to generate floating-point only variants\n#define GENERATE_UNARY_FLOAT_OP(name, OP_FLOAT, OP_DOUBLE) \\\n  UNARY_OP_FOR_TYPE(name, float, f32, OP_FLOAT)            \\\n  UNARY_OP_FOR_TYPE(name, double, f64, OP_DOUBLE)\n\n// Macro to build 
dispatch table\n#define BUILD_DISPATCH_TABLE(name)                                            \\\n  static const unary_op_table name##_table = {.i8 = nx_c_##name##_i8,         \\\n                                              .u8 = nx_c_##name##_u8,         \\\n                                              .i16 = nx_c_##name##_i16,       \\\n                                              .u16 = nx_c_##name##_u16,       \\\n                                              .i32 = nx_c_##name##_i32,       \\\n                                              .i64 = nx_c_##name##_i64,       \\\n                                              .u32 = nx_c_##name##_u32,       \\\n                                              .u64 = nx_c_##name##_u64,       \\\n                                              .inat = nx_c_##name##_inat,     \\\n                                              .f16 = nx_c_##name##_f16,       \\\n                                              .f32 = nx_c_##name##_f32,       \\\n                                              .f64 = nx_c_##name##_f64,       \\\n                                              .c32 = nx_c_##name##_c32,       \\\n                                              .c64 = nx_c_##name##_c64,       \\\n                                              .bf16 = nx_c_##name##_bf16,     \\\n                                              .bool_ = nx_c_##name##_bool_,   \\\n                                              .i4 = nx_c_##name##_i4,         \\\n                                              .u4 = nx_c_##name##_u4,         \\\n                                              .f8e4m3 = nx_c_##name##_f8e4m3, \\\n                                              .f8e5m2 = nx_c_##name##_f8e5m2}\n\n// Helper function to iterate over inner dimensions for unary operations\ntypedef void (*unary_kernel_fn)(void *, void *, long, long);\n\nstatic inline void iterate_inner_dims_unary(const ndarray_t *x,\n                                            const ndarray_t 
*z, long outer_idx,\n                                            unary_kernel_fn kernel,\n                                            void *x_data, void *z_data) {\n  if (x->ndim <= 1) {\n    kernel(x_data, z_data, x->offset + outer_idx * x->strides[0],\n           z->offset + outer_idx * z->strides[0]);\n    return;\n  }\n\n  long x_base = x->offset + outer_idx * x->strides[0];\n  long z_base = z->offset + outer_idx * z->strides[0];\n\n  // Create temporary iterator for inner dimensions\n  int inner_ndim = x->ndim - 1;\n  int *coords = (int *)calloc(inner_ndim, sizeof(int));\n  if (!coords) {\n    fprintf(stderr, \"nx: iterate_inner_dims_unary: allocation failed\\n\");\n    abort();\n  }\n\n  // Iterate over inner dimensions\n  bool done = false;\n  while (!done) {\n    long x_off = x_base;\n    long z_off = z_base;\n\n    for (int i = 0; i < inner_ndim; i++) {\n      x_off += coords[i] * x->strides[i + 1];\n      z_off += coords[i] * z->strides[i + 1];\n    }\n\n    kernel(x_data, z_data, x_off, z_off);\n\n    // Advance to next position\n    done = true;\n    for (int i = inner_ndim - 1; i >= 0; i--) {\n      coords[i]++;\n      if (coords[i] < x->shape[i + 1]) {\n        done = false;\n        break;\n      }\n      coords[i] = 0;\n    }\n  }\n\n  free(coords);\n}\n\n// Generic unary operation kernel\n#define UNARY_OP_KERNEL(name, T, suffix, OP)                              \\\n  static inline void nx_c_##name##_##suffix##_kernel(void *x_data, void *z_data, \\\n                                              long x_off, long z_off) {   \\\n    T *x = (T *)x_data;                                                   \\\n    T *z = (T *)z_data;                                                   \\\n    z[z_off] = OP(x[x_off]);                                              \\\n  }\n\n// Generic unary operation implementation\n#define UNARY_OP_IMPL(name, T, suffix)                                         \\\n  static void nx_c_##name##_##suffix(const ndarray_t *x, 
ndarray_t *z) {       \\\n    if (!x || !z) {                                                            \\\n      fprintf(stderr, \"nx: nx_c_\" #name \"_\" #suffix \": null pointer\\n\");       \\\n      abort();                                                                 \\\n    }                                                                          \\\n    long total = total_elements_safe(x);                                       \\\n    if (total == 0) return;                                                    \\\n                                                                               \\\n    if (is_contiguous(x) && is_contiguous(z)) {                                \\\n      T *restrict xs = (T *)x->data + x->offset;                               \\\n      T *restrict zs = (T *)z->data + z->offset;                               \\\n      _Pragma(\"omp parallel for simd if(total > 1000)\") for (long i = 0;       \\\n                                                             i < total; i++) { \\\n        nx_c_##name##_##suffix##_kernel(xs, zs, i, i);                         \\\n      }                                                                        \\\n    } else if (x->ndim > 0 && x->shape[0] > 1 && total / x->shape[0] > 50) {   \\\n      _Pragma(\"omp parallel for if(x->shape[0] > 4)\") for (long i = 0;         \\\n                                                           i < x->shape[0];   \\\n                                                           i++) {             \\\n        iterate_inner_dims_unary(x, z, i, nx_c_##name##_##suffix##_kernel,     \\\n                                 x->data, z->data);                            \\\n      }                                                                        \\\n    } else {                                                                   \\\n      nd_copy_iterator_t it;                                                   \\\n      nd_copy_iterator_init(&it, x, z);               
                         \\\n      do {                                                                     \\\n        long x_off, z_off;                                                     \\\n        nd_copy_iterator_get_offsets(&it, &x_off, &z_off);                     \\\n        nx_c_##name##_##suffix##_kernel(x->data, z->data, x->offset + x_off,   \\\n                                        z->offset + z_off);                    \\\n      } while (nd_copy_iterator_next(&it));                                    \\\n      nd_copy_iterator_destroy(&it);                                           \\\n    }                                                                          \\\n  }\n\n// Macro to generate both kernel and implementation for an operation\n#define UNARY_OP_FOR_TYPE(name, T, suffix, OP) \\\n  UNARY_OP_KERNEL(name, T, suffix, OP)         \\\n  UNARY_OP_IMPL(name, T, suffix)\n\n// Low-precision float kernel (convert to float for op)\n#define LOW_PREC_OP_KERNEL(name, T, suffix, OP, TO_FLOAT, FROM_FLOAT)     \\\n  static void nx_c_##name##_##suffix##_kernel(void *x_data, void *z_data, \\\n                                              long x_off, long z_off) {   \\\n    T *x = (T *)x_data;                                                   \\\n    T *z = (T *)z_data;                                                   \\\n    float a = TO_FLOAT(x[x_off]);                                         \\\n    z[z_off] = FROM_FLOAT(OP(a));                                         \\\n  }\n\n// For low-precision, use the impl with the special kernel\n#define LOW_PREC_OP_IMPL(name, T, suffix) UNARY_OP_IMPL(name, T, suffix)\n\n// Helper macros for int4 saturation\n#define CLAMP_I4(x) ((x) < -8 ? -8 : ((x) > 7 ? 7 : (x)))\n#define CLAMP_U4(x) ((x) < 0 ? 0 : ((x) > 15 ? 
15 : (x)))\n\n// Special implementation for int4 (packed, unpack/op/pack with saturation)\n#define INT4_UNARY_IMPL(name, signedness, suffix, OP)                        \\\n  static void nx_c_##name##_##suffix##_kernel(void *x_data, void *z_data,    \\\n                                              long x_off, long z_off) {      \\\n    uint8_t *x = (uint8_t *)x_data;                                          \\\n    uint8_t *z = (uint8_t *)z_data;                                          \\\n    long x_byte = x_off / 2;                                                 \\\n    int x_nib = x_off % 2;                                                   \\\n    long z_byte = z_off / 2;                                                 \\\n    int z_nib = z_off % 2;                                                   \\\n    /* Mask, then arithmetic-shift, so the signed high nibble is */          \\\n    /* properly sign-extended (mirrors the low-nibble path below). */        \\\n    int a = x_nib ? (signedness ? (int8_t)(x[x_byte] & 0xF0) >> 4            \\\n                                : (x[x_byte] >> 4) & 0x0F)                   \\\n                  : (signedness ? (int8_t)((x[x_byte] & 0x0F) << 4) >> 4     \\\n                                : x[x_byte] & 0x0F);                         \\\n    int res = OP(a);                                                         \\\n    /* Saturate to 4-bit range */                                            \\\n    res = signedness ? CLAMP_I4(res) : CLAMP_U4(res);                        \\\n    uint8_t nib = (uint8_t)res & 0x0F;                                       \\\n    /* Write through the *output* offset: x_off and z_off can differ */      \\\n    /* on strided copies, so z gets its own byte/nibble position. */         \\\n    if (z_nib) {                                                             \\\n      z[z_byte] = (z[z_byte] & 0x0F) | (nib << 4);                           \\\n    } else {                                                                 \\\n      z[z_byte] = (z[z_byte] & 0xF0) | nib;                                  \\\n    }                                                                        \\\n  }                                                                          \\\n  static void nx_c_##name##_##suffix(const ndarray_t *x, ndarray_t *z) {     \\\n    long total = total_elements_safe(x);                                     \\\n    if (total == 0) return;                                                  \\\n    if (is_contiguous(x) && is_contiguous(z)) {                              \\\n      /* Offsets count elements (nibbles); the kernel maps them to */        \\\n      /* bytes, so pass element offsets rather than byte-adjusting */        \\\n      /* the data pointers (void* arithmetic is also non-standard). */       \\\n      void *x_data = x->data;                                                \\\n      void *z_data = z->data;                                                \\\n      _Pragma(\"omp parallel for if(total > 10000)\") for (long i = 0;         \\\n                                                         i < total; i++) {   \\\n        nx_c_##name##_##suffix##_kernel(x_data, z_data, x->offset + i,       \\\n                                        z->offset + i);                      \\\n      }                                                                      \\\n    } else {                                                                 \\\n      nd_copy_iterator_t it;                                                 \\\n      nd_copy_iterator_init(&it, x, z);                                      \\\n      void *x_data = x->data;                                                \\\n      void *z_data = z->data;                                                \\\n      do {                                                                   \\\n        long x_off, z_off;                                                   \\\n        nd_copy_iterator_get_offsets(&it, &x_off, &z_off);                   
\\\n        nx_c_##name##_##suffix##_kernel(x_data, z_data, x_off + x->offset,   \\\n                                        z_off + z->offset);                  \\\n      } while (nd_copy_iterator_next(&it));                                  \\\n      nd_copy_iterator_destroy(&it);                                         \\\n    }                                                                        \\\n  }\n\n// Generate for all ops\n// Negation\n#define NEG_OP(x) (-(x))\n#define NEG_BOOL_OP(x) (!(x))\nGENERATE_UNARY_OP(neg, NEG_OP)\n\n// Float16, BFloat16, FP8 variants need conversion\nLOW_PREC_OP_KERNEL(neg, uint16_t, f16, NEG_OP, half_to_float, float_to_half)\nLOW_PREC_OP_IMPL(neg, uint16_t, f16)\nLOW_PREC_OP_KERNEL(neg, caml_ba_bfloat16, bf16, NEG_OP, bfloat16_to_float,\n                   float_to_bfloat16)\nLOW_PREC_OP_IMPL(neg, caml_ba_bfloat16, bf16)\nLOW_PREC_OP_KERNEL(neg, caml_ba_fp8_e4m3, f8e4m3, NEG_OP, fp8_e4m3_to_float,\n                   float_to_fp8_e4m3)\nLOW_PREC_OP_IMPL(neg, caml_ba_fp8_e4m3, f8e4m3)\nLOW_PREC_OP_KERNEL(neg, caml_ba_fp8_e5m2, f8e5m2, NEG_OP, fp8_e5m2_to_float,\n                   float_to_fp8_e5m2)\nLOW_PREC_OP_IMPL(neg, caml_ba_fp8_e5m2, f8e5m2)\n\nUNARY_OP_FOR_TYPE(neg, complex32, c32, NEG_OP)\nUNARY_OP_FOR_TYPE(neg, complex64, c64, NEG_OP)\n\nINT4_UNARY_IMPL(neg, 1, i4, NEG_OP)\nINT4_UNARY_IMPL(neg, 0, u4, NEG_OP)\nUNARY_OP_FOR_TYPE(neg, caml_ba_bool, bool_, NEG_BOOL_OP)\nBUILD_DISPATCH_TABLE(neg);\n\n// Sin (floating-point and complex only)\n#define SIN_FLOAT_OP(x) (sinf(x))\n#define SIN_DOUBLE_OP(x) (sin(x))\n#define COMPLEX32_SIN_OP(x) (csinf(x))\n#define COMPLEX64_SIN_OP(x) (csin(x))\nGENERATE_UNARY_FLOAT_OP(sin, SIN_FLOAT_OP, SIN_DOUBLE_OP)\n\n// Float16, BFloat16, FP8 variants need conversion\nLOW_PREC_OP_KERNEL(sin, uint16_t, f16, SIN_FLOAT_OP, half_to_float,\n                   float_to_half)\nLOW_PREC_OP_IMPL(sin, uint16_t, f16)\nLOW_PREC_OP_KERNEL(sin, caml_ba_bfloat16, bf16, SIN_FLOAT_OP, bfloat16_to_float,\n   
                float_to_bfloat16)\nLOW_PREC_OP_IMPL(sin, caml_ba_bfloat16, bf16)\nLOW_PREC_OP_KERNEL(sin, caml_ba_fp8_e4m3, f8e4m3, SIN_FLOAT_OP,\n                   fp8_e4m3_to_float, float_to_fp8_e4m3)\nLOW_PREC_OP_IMPL(sin, caml_ba_fp8_e4m3, f8e4m3)\nLOW_PREC_OP_KERNEL(sin, caml_ba_fp8_e5m2, f8e5m2, SIN_FLOAT_OP,\n                   fp8_e5m2_to_float, float_to_fp8_e5m2)\nLOW_PREC_OP_IMPL(sin, caml_ba_fp8_e5m2, f8e5m2)\n\nUNARY_OP_FOR_TYPE(sin, complex32, c32, COMPLEX32_SIN_OP)\nUNARY_OP_FOR_TYPE(sin, complex64, c64, COMPLEX64_SIN_OP)\n\n// Build dispatch table with only float types (integers not supported)\nstatic const unary_op_table sin_table = {.i8 = NULL,\n                                         .u8 = NULL,\n                                         .i16 = NULL,\n                                         .u16 = NULL,\n                                         .i32 = NULL,\n                                         .i64 = NULL,\n                                         .inat = NULL,\n                                         .f16 = nx_c_sin_f16,\n                                         .f32 = nx_c_sin_f32,\n                                         .f64 = nx_c_sin_f64,\n                                         .c32 = nx_c_sin_c32,\n                                         .c64 = nx_c_sin_c64,\n                                         .bf16 = nx_c_sin_bf16,\n                                         .bool_ = NULL,\n                                         .i4 = NULL,\n                                         .u4 = NULL,\n                                         .f8e4m3 = nx_c_sin_f8e4m3,\n                                         .f8e5m2 = nx_c_sin_f8e5m2};\n\n// Sqrt (floating-point and complex only)\n#define SQRT_FLOAT_OP(x) (sqrtf(x))\n#define SQRT_DOUBLE_OP(x) (sqrt(x))\n#define COMPLEX32_SQRT_OP(x) (csqrtf(x))\n#define COMPLEX64_SQRT_OP(x) (csqrt(x))\nGENERATE_UNARY_FLOAT_OP(sqrt, SQRT_FLOAT_OP, SQRT_DOUBLE_OP)\n\n// Float16, BFloat16, FP8 variants need 
conversion\nLOW_PREC_OP_KERNEL(sqrt, uint16_t, f16, SQRT_FLOAT_OP, half_to_float,\n                   float_to_half)\nLOW_PREC_OP_IMPL(sqrt, uint16_t, f16)\nLOW_PREC_OP_KERNEL(sqrt, caml_ba_bfloat16, bf16, SQRT_FLOAT_OP,\n                   bfloat16_to_float, float_to_bfloat16)\nLOW_PREC_OP_IMPL(sqrt, caml_ba_bfloat16, bf16)\nLOW_PREC_OP_KERNEL(sqrt, caml_ba_fp8_e4m3, f8e4m3, SQRT_FLOAT_OP,\n                   fp8_e4m3_to_float, float_to_fp8_e4m3)\nLOW_PREC_OP_IMPL(sqrt, caml_ba_fp8_e4m3, f8e4m3)\nLOW_PREC_OP_KERNEL(sqrt, caml_ba_fp8_e5m2, f8e5m2, SQRT_FLOAT_OP,\n                   fp8_e5m2_to_float, float_to_fp8_e5m2)\nLOW_PREC_OP_IMPL(sqrt, caml_ba_fp8_e5m2, f8e5m2)\n\nUNARY_OP_FOR_TYPE(sqrt, complex32, c32, COMPLEX32_SQRT_OP)\nUNARY_OP_FOR_TYPE(sqrt, complex64, c64, COMPLEX64_SQRT_OP)\n\n// Build dispatch table with only float types (integers not supported)\nstatic const unary_op_table sqrt_table = {.i8 = NULL,\n                                          .u8 = NULL,\n                                          .i16 = NULL,\n                                          .u16 = NULL,\n                                          .i32 = NULL,\n                                          .i64 = NULL,\n                                          .inat = NULL,\n                                          .f16 = nx_c_sqrt_f16,\n                                          .f32 = nx_c_sqrt_f32,\n                                          .f64 = nx_c_sqrt_f64,\n                                          .c32 = nx_c_sqrt_c32,\n                                          .c64 = nx_c_sqrt_c64,\n                                          .bf16 = nx_c_sqrt_bf16,\n                                          .bool_ = NULL,\n                                          .i4 = NULL,\n                                          .u4 = NULL,\n                                          .f8e4m3 = nx_c_sqrt_f8e4m3,\n                                          .f8e5m2 = nx_c_sqrt_f8e5m2};\n\n// Reciprocal - separate 
handling for integers (check zero) vs floats (IEEE 754)\n#define INT_RECIP_OP(x) \\\n  ((x) == 0 ? (fprintf(stderr, \"nx: division by zero\\n\"), abort(), (x)) : (1 / (x)))\n#define FLOAT_RECIP_OP(x) (1 / (x))\n#define COMPLEX32_RECIP_OP(x) (1.0f / (x))\n#define COMPLEX64_RECIP_OP(x) (1.0 / (x))\n\n// Integer types need zero check\nUNARY_OP_FOR_TYPE(recip, int8_t, i8, INT_RECIP_OP)\nUNARY_OP_FOR_TYPE(recip, uint8_t, u8, INT_RECIP_OP)\nUNARY_OP_FOR_TYPE(recip, int16_t, i16, INT_RECIP_OP)\nUNARY_OP_FOR_TYPE(recip, uint16_t, u16, INT_RECIP_OP)\nUNARY_OP_FOR_TYPE(recip, int32_t, i32, INT_RECIP_OP)\nUNARY_OP_FOR_TYPE(recip, int64_t, i64, INT_RECIP_OP)\nUNARY_OP_FOR_TYPE(recip, uint32_t, u32, INT_RECIP_OP)\nUNARY_OP_FOR_TYPE(recip, uint64_t, u64, INT_RECIP_OP)\nUNARY_OP_FOR_TYPE(recip, intnat, inat, INT_RECIP_OP)\n\n// Floating-point types use IEEE 754 semantics (no zero check)\nUNARY_OP_FOR_TYPE(recip, float, f32, FLOAT_RECIP_OP)\nUNARY_OP_FOR_TYPE(recip, double, f64, FLOAT_RECIP_OP)\n\n// Float16, BFloat16, FP8 variants - no zero check, let IEEE semantics apply\nLOW_PREC_OP_KERNEL(recip, uint16_t, f16, FLOAT_RECIP_OP, half_to_float,\n                   float_to_half)\nLOW_PREC_OP_IMPL(recip, uint16_t, f16)\nLOW_PREC_OP_KERNEL(recip, caml_ba_bfloat16, bf16, FLOAT_RECIP_OP,\n                   bfloat16_to_float, float_to_bfloat16)\nLOW_PREC_OP_IMPL(recip, caml_ba_bfloat16, bf16)\nLOW_PREC_OP_KERNEL(recip, caml_ba_fp8_e4m3, f8e4m3, FLOAT_RECIP_OP,\n                   fp8_e4m3_to_float, float_to_fp8_e4m3)\nLOW_PREC_OP_IMPL(recip, caml_ba_fp8_e4m3, f8e4m3)\nLOW_PREC_OP_KERNEL(recip, caml_ba_fp8_e5m2, f8e5m2, FLOAT_RECIP_OP,\n                   fp8_e5m2_to_float, float_to_fp8_e5m2)\nLOW_PREC_OP_IMPL(recip, caml_ba_fp8_e5m2, f8e5m2)\n\nUNARY_OP_FOR_TYPE(recip, complex32, c32, COMPLEX32_RECIP_OP)\nUNARY_OP_FOR_TYPE(recip, complex64, c64, COMPLEX64_RECIP_OP)\n\nINT4_UNARY_IMPL(recip, 1, i4, INT_RECIP_OP)\nINT4_UNARY_IMPL(recip, 0, u4, INT_RECIP_OP)\nUNARY_OP_FOR_TYPE(recip, 
caml_ba_bool, bool_, INT_RECIP_OP)\nBUILD_DISPATCH_TABLE(recip);\n\n// Cos (floating-point and complex only)\n#define COS_FLOAT_OP(x) (cosf(x))\n#define COS_DOUBLE_OP(x) (cos(x))\n#define COMPLEX32_COS_OP(x) (ccosf(x))\n#define COMPLEX64_COS_OP(x) (ccos(x))\nGENERATE_UNARY_FLOAT_OP(cos, COS_FLOAT_OP, COS_DOUBLE_OP)\n\nLOW_PREC_OP_KERNEL(cos, uint16_t, f16, COS_FLOAT_OP, half_to_float, float_to_half)\nLOW_PREC_OP_IMPL(cos, uint16_t, f16)\nLOW_PREC_OP_KERNEL(cos, caml_ba_bfloat16, bf16, COS_FLOAT_OP, bfloat16_to_float,\n                   float_to_bfloat16)\nLOW_PREC_OP_IMPL(cos, caml_ba_bfloat16, bf16)\nLOW_PREC_OP_KERNEL(cos, caml_ba_fp8_e4m3, f8e4m3, COS_FLOAT_OP,\n                   fp8_e4m3_to_float, float_to_fp8_e4m3)\nLOW_PREC_OP_IMPL(cos, caml_ba_fp8_e4m3, f8e4m3)\nLOW_PREC_OP_KERNEL(cos, caml_ba_fp8_e5m2, f8e5m2, COS_FLOAT_OP,\n                   fp8_e5m2_to_float, float_to_fp8_e5m2)\nLOW_PREC_OP_IMPL(cos, caml_ba_fp8_e5m2, f8e5m2)\n\nUNARY_OP_FOR_TYPE(cos, complex32, c32, COMPLEX32_COS_OP)\nUNARY_OP_FOR_TYPE(cos, complex64, c64, COMPLEX64_COS_OP)\n\n\nstatic const unary_op_table cos_table = {.i8 = NULL,\n                                         .u8 = NULL,\n                                         .i16 = NULL,\n                                         .u16 = NULL,\n                                         .i32 = NULL,\n                                         .i64 = NULL,\n                                         .inat = NULL,\n                                         .f16 = nx_c_cos_f16,\n                                         .f32 = nx_c_cos_f32,\n                                         .f64 = nx_c_cos_f64,\n                                         .c32 = nx_c_cos_c32,\n                                         .c64 = nx_c_cos_c64,\n                                         .bf16 = nx_c_cos_bf16,\n                                         .bool_ = NULL,\n                                         .i4 = NULL,\n                                         .u4 = 
NULL,\n                                         .f8e4m3 = nx_c_cos_f8e4m3,\n                                         .f8e5m2 = nx_c_cos_f8e5m2};\n\n// Log (natural logarithm, floating-point and complex only)\n#define LOG_FLOAT_OP(x) (logf(x))\n#define LOG_DOUBLE_OP(x) (log(x))\n#define COMPLEX32_LOG_OP(x) (clogf(x))\n#define COMPLEX64_LOG_OP(x) (clog(x))\nGENERATE_UNARY_FLOAT_OP(log, LOG_FLOAT_OP, LOG_DOUBLE_OP)\n\nLOW_PREC_OP_KERNEL(log, uint16_t, f16, LOG_FLOAT_OP, half_to_float, float_to_half)\nLOW_PREC_OP_IMPL(log, uint16_t, f16)\nLOW_PREC_OP_KERNEL(log, caml_ba_bfloat16, bf16, LOG_FLOAT_OP, bfloat16_to_float,\n                   float_to_bfloat16)\nLOW_PREC_OP_IMPL(log, caml_ba_bfloat16, bf16)\nLOW_PREC_OP_KERNEL(log, caml_ba_fp8_e4m3, f8e4m3, LOG_FLOAT_OP,\n                   fp8_e4m3_to_float, float_to_fp8_e4m3)\nLOW_PREC_OP_IMPL(log, caml_ba_fp8_e4m3, f8e4m3)\nLOW_PREC_OP_KERNEL(log, caml_ba_fp8_e5m2, f8e5m2, LOG_FLOAT_OP,\n                   fp8_e5m2_to_float, float_to_fp8_e5m2)\nLOW_PREC_OP_IMPL(log, caml_ba_fp8_e5m2, f8e5m2)\n\nUNARY_OP_FOR_TYPE(log, complex32, c32, COMPLEX32_LOG_OP)\nUNARY_OP_FOR_TYPE(log, complex64, c64, COMPLEX64_LOG_OP)\n\n\nstatic const unary_op_table log_table = {.i8 = NULL,\n                                         .u8 = NULL,\n                                         .i16 = NULL,\n                                         .u16 = NULL,\n                                         .i32 = NULL,\n                                         .i64 = NULL,\n                                         .inat = NULL,\n                                         .f16 = nx_c_log_f16,\n                                         .f32 = nx_c_log_f32,\n                                         .f64 = nx_c_log_f64,\n                                         .c32 = nx_c_log_c32,\n                                         .c64 = nx_c_log_c64,\n                                         .bf16 = nx_c_log_bf16,\n                                         .bool_ = NULL,\n    
                                     .i4 = NULL,\n                                         .u4 = NULL,\n                                         .f8e4m3 = nx_c_log_f8e4m3,\n                                         .f8e5m2 = nx_c_log_f8e5m2};\n\n// Exp (natural exponential, floating-point and complex only)\n#define EXP_FLOAT_OP(x) (expf(x))\n#define EXP_DOUBLE_OP(x) (exp(x))\n#define COMPLEX32_EXP_OP(x) (cexpf(x))\n#define COMPLEX64_EXP_OP(x) (cexp(x))\nGENERATE_UNARY_FLOAT_OP(exp, EXP_FLOAT_OP, EXP_DOUBLE_OP)\n\nLOW_PREC_OP_KERNEL(exp, uint16_t, f16, EXP_FLOAT_OP, half_to_float, float_to_half)\nLOW_PREC_OP_IMPL(exp, uint16_t, f16)\nLOW_PREC_OP_KERNEL(exp, caml_ba_bfloat16, bf16, EXP_FLOAT_OP, bfloat16_to_float,\n                   float_to_bfloat16)\nLOW_PREC_OP_IMPL(exp, caml_ba_bfloat16, bf16)\nLOW_PREC_OP_KERNEL(exp, caml_ba_fp8_e4m3, f8e4m3, EXP_FLOAT_OP,\n                   fp8_e4m3_to_float, float_to_fp8_e4m3)\nLOW_PREC_OP_IMPL(exp, caml_ba_fp8_e4m3, f8e4m3)\nLOW_PREC_OP_KERNEL(exp, caml_ba_fp8_e5m2, f8e5m2, EXP_FLOAT_OP,\n                   fp8_e5m2_to_float, float_to_fp8_e5m2)\nLOW_PREC_OP_IMPL(exp, caml_ba_fp8_e5m2, f8e5m2)\n\nUNARY_OP_FOR_TYPE(exp, complex32, c32, COMPLEX32_EXP_OP)\nUNARY_OP_FOR_TYPE(exp, complex64, c64, COMPLEX64_EXP_OP)\n\n\nstatic const unary_op_table exp_table = {.i8 = NULL,\n                                         .u8 = NULL,\n                                         .i16 = NULL,\n                                         .u16 = NULL,\n                                         .i32 = NULL,\n                                         .i64 = NULL,\n                                         .inat = NULL,\n                                         .f16 = nx_c_exp_f16,\n                                         .f32 = nx_c_exp_f32,\n                                         .f64 = nx_c_exp_f64,\n                                         .c32 = nx_c_exp_c32,\n                                         .c64 = nx_c_exp_c64,\n                           
              .bf16 = nx_c_exp_bf16,\n                                         .bool_ = NULL,\n                                         .i4 = NULL,\n                                         .u4 = NULL,\n                                         .f8e4m3 = nx_c_exp_f8e4m3,\n                                         .f8e5m2 = nx_c_exp_f8e5m2};\n\n// Abs (absolute value, works on all numeric types)\n#define INT_ABS_OP(x) ((x) < 0 ? -(x) : (x))\n#define FLOAT_ABS_OP(x) (fabsf(x))\n#define DOUBLE_ABS_OP(x) (fabs(x))\n#define COMPLEX32_ABS_OP(x) (cabsf(x))\n#define COMPLEX64_ABS_OP(x) (cabs(x))\n\nUNARY_OP_FOR_TYPE(abs, int8_t, i8, INT_ABS_OP)\nUNARY_OP_FOR_TYPE(abs, uint8_t, u8, INT_ABS_OP)\nUNARY_OP_FOR_TYPE(abs, int16_t, i16, INT_ABS_OP)\nUNARY_OP_FOR_TYPE(abs, uint16_t, u16, INT_ABS_OP)\nUNARY_OP_FOR_TYPE(abs, int32_t, i32, INT_ABS_OP)\nUNARY_OP_FOR_TYPE(abs, int64_t, i64, INT_ABS_OP)\nUNARY_OP_FOR_TYPE(abs, uint32_t, u32, INT_ABS_OP)\nUNARY_OP_FOR_TYPE(abs, uint64_t, u64, INT_ABS_OP)\nUNARY_OP_FOR_TYPE(abs, intnat, inat, INT_ABS_OP)\nUNARY_OP_FOR_TYPE(abs, float, f32, FLOAT_ABS_OP)\nUNARY_OP_FOR_TYPE(abs, double, f64, DOUBLE_ABS_OP)\n\nLOW_PREC_OP_KERNEL(abs, uint16_t, f16, FLOAT_ABS_OP, half_to_float, float_to_half)\nLOW_PREC_OP_IMPL(abs, uint16_t, f16)\nLOW_PREC_OP_KERNEL(abs, caml_ba_bfloat16, bf16, FLOAT_ABS_OP, bfloat16_to_float,\n                   float_to_bfloat16)\nLOW_PREC_OP_IMPL(abs, caml_ba_bfloat16, bf16)\nLOW_PREC_OP_KERNEL(abs, caml_ba_fp8_e4m3, f8e4m3, FLOAT_ABS_OP,\n                   fp8_e4m3_to_float, float_to_fp8_e4m3)\nLOW_PREC_OP_IMPL(abs, caml_ba_fp8_e4m3, f8e4m3)\nLOW_PREC_OP_KERNEL(abs, caml_ba_fp8_e5m2, f8e5m2, FLOAT_ABS_OP,\n                   fp8_e5m2_to_float, float_to_fp8_e5m2)\nLOW_PREC_OP_IMPL(abs, caml_ba_fp8_e5m2, f8e5m2)\n\n// Complex abs returns real magnitude - handled separately\n// For now, use float result for complex types (stores in complex buffer as real part)\nUNARY_OP_FOR_TYPE(abs, complex32, c32, 
COMPLEX32_ABS_OP)\nUNARY_OP_FOR_TYPE(abs, complex64, c64, COMPLEX64_ABS_OP)\n\nINT4_UNARY_IMPL(abs, 1, i4, INT_ABS_OP)\nINT4_UNARY_IMPL(abs, 0, u4, INT_ABS_OP)\nUNARY_OP_FOR_TYPE(abs, caml_ba_bool, bool_, INT_ABS_OP)\nBUILD_DISPATCH_TABLE(abs);\n\n// Sign\n#define SIGNED_SIGN_OP(x) (((x) > 0) ? 1 : (((x) < 0) ? -1 : 0))\n#define UNSIGNED_SIGN_OP(x) (((x) == 0) ? 0 : 1)\n#define FLOAT_SIGN_OP(x) (isnan(x) ? (x) : (((x) > 0) ? 1.0f : (((x) < 0) ? -1.0f : 0.0f)))\n#define DOUBLE_SIGN_OP(x) (isnan(x) ? (x) : (((x) > 0) ? 1.0 : (((x) < 0) ? -1.0 : 0.0)))\n\nUNARY_OP_FOR_TYPE(sign, int8_t, i8, SIGNED_SIGN_OP)\nUNARY_OP_FOR_TYPE(sign, uint8_t, u8, UNSIGNED_SIGN_OP)\nUNARY_OP_FOR_TYPE(sign, int16_t, i16, SIGNED_SIGN_OP)\nUNARY_OP_FOR_TYPE(sign, uint16_t, u16, UNSIGNED_SIGN_OP)\nUNARY_OP_FOR_TYPE(sign, int32_t, i32, SIGNED_SIGN_OP)\nUNARY_OP_FOR_TYPE(sign, int64_t, i64, SIGNED_SIGN_OP)\nUNARY_OP_FOR_TYPE(sign, uint32_t, u32, UNSIGNED_SIGN_OP)\nUNARY_OP_FOR_TYPE(sign, uint64_t, u64, UNSIGNED_SIGN_OP)\nUNARY_OP_FOR_TYPE(sign, intnat, inat, SIGNED_SIGN_OP)\nUNARY_OP_FOR_TYPE(sign, float, f32, FLOAT_SIGN_OP)\nUNARY_OP_FOR_TYPE(sign, double, f64, DOUBLE_SIGN_OP)\n\nLOW_PREC_OP_KERNEL(sign, uint16_t, f16, FLOAT_SIGN_OP, half_to_float, float_to_half)\nLOW_PREC_OP_IMPL(sign, uint16_t, f16)\nLOW_PREC_OP_KERNEL(sign, caml_ba_bfloat16, bf16, FLOAT_SIGN_OP,\n                   bfloat16_to_float, float_to_bfloat16)\nLOW_PREC_OP_IMPL(sign, caml_ba_bfloat16, bf16)\nLOW_PREC_OP_KERNEL(sign, caml_ba_fp8_e4m3, f8e4m3, FLOAT_SIGN_OP,\n                   fp8_e4m3_to_float, float_to_fp8_e4m3)\nLOW_PREC_OP_IMPL(sign, caml_ba_fp8_e4m3, f8e4m3)\nLOW_PREC_OP_KERNEL(sign, caml_ba_fp8_e5m2, f8e5m2, FLOAT_SIGN_OP,\n                   fp8_e5m2_to_float, float_to_fp8_e5m2)\nLOW_PREC_OP_IMPL(sign, caml_ba_fp8_e5m2, f8e5m2)\n\nINT4_UNARY_IMPL(sign, 1, i4, SIGNED_SIGN_OP)\nINT4_UNARY_IMPL(sign, 0, u4, UNSIGNED_SIGN_OP)\nUNARY_OP_FOR_TYPE(sign, caml_ba_bool, bool_, UNSIGNED_SIGN_OP)\n\nstatic const 
unary_op_table sign_table = {.i8 = nx_c_sign_i8,\n                                          .u8 = nx_c_sign_u8,\n                                          .i16 = nx_c_sign_i16,\n                                          .u16 = nx_c_sign_u16,\n                                          .i32 = nx_c_sign_i32,\n                                          .i64 = nx_c_sign_i64,\n                                          .u32 = nx_c_sign_u32,\n                                          .u64 = nx_c_sign_u64,\n                                          .inat = nx_c_sign_inat,\n                                          .f16 = nx_c_sign_f16,\n                                          .f32 = nx_c_sign_f32,\n                                          .f64 = nx_c_sign_f64,\n                                          .c32 = NULL,\n                                          .c64 = NULL,\n                                          .bf16 = nx_c_sign_bf16,\n                                          .bool_ = nx_c_sign_bool_,\n                                          .i4 = nx_c_sign_i4,\n                                          .u4 = nx_c_sign_u4,\n                                          .f8e4m3 = nx_c_sign_f8e4m3,\n                                          .f8e5m2 = nx_c_sign_f8e5m2};\n\n// Tan\n#define TAN_FLOAT_OP(x) (tanf(x))\n#define TAN_DOUBLE_OP(x) (tan(x))\nGENERATE_UNARY_FLOAT_OP(tan, TAN_FLOAT_OP, TAN_DOUBLE_OP)\n\nLOW_PREC_OP_KERNEL(tan, uint16_t, f16, TAN_FLOAT_OP, half_to_float, float_to_half)\nLOW_PREC_OP_IMPL(tan, uint16_t, f16)\nLOW_PREC_OP_KERNEL(tan, caml_ba_bfloat16, bf16, TAN_FLOAT_OP,\n                   bfloat16_to_float, float_to_bfloat16)\nLOW_PREC_OP_IMPL(tan, caml_ba_bfloat16, bf16)\nLOW_PREC_OP_KERNEL(tan, caml_ba_fp8_e4m3, f8e4m3, TAN_FLOAT_OP,\n                   fp8_e4m3_to_float, float_to_fp8_e4m3)\nLOW_PREC_OP_IMPL(tan, caml_ba_fp8_e4m3, f8e4m3)\nLOW_PREC_OP_KERNEL(tan, caml_ba_fp8_e5m2, f8e5m2, TAN_FLOAT_OP,\n                   fp8_e5m2_to_float, 
float_to_fp8_e5m2)\nLOW_PREC_OP_IMPL(tan, caml_ba_fp8_e5m2, f8e5m2)\n\nstatic const unary_op_table tan_table = {.f16 = nx_c_tan_f16,\n                                         .f32 = nx_c_tan_f32,\n                                         .f64 = nx_c_tan_f64,\n                                         .bf16 = nx_c_tan_bf16,\n                                         .f8e4m3 = nx_c_tan_f8e4m3,\n                                         .f8e5m2 = nx_c_tan_f8e5m2};\n\n// Asin\n#define ASIN_FLOAT_OP(x) (asinf(x))\n#define ASIN_DOUBLE_OP(x) (asin(x))\nGENERATE_UNARY_FLOAT_OP(asin, ASIN_FLOAT_OP, ASIN_DOUBLE_OP)\n\nLOW_PREC_OP_KERNEL(asin, uint16_t, f16, ASIN_FLOAT_OP, half_to_float, float_to_half)\nLOW_PREC_OP_IMPL(asin, uint16_t, f16)\nLOW_PREC_OP_KERNEL(asin, caml_ba_bfloat16, bf16, ASIN_FLOAT_OP,\n                   bfloat16_to_float, float_to_bfloat16)\nLOW_PREC_OP_IMPL(asin, caml_ba_bfloat16, bf16)\nLOW_PREC_OP_KERNEL(asin, caml_ba_fp8_e4m3, f8e4m3, ASIN_FLOAT_OP,\n                   fp8_e4m3_to_float, float_to_fp8_e4m3)\nLOW_PREC_OP_IMPL(asin, caml_ba_fp8_e4m3, f8e4m3)\nLOW_PREC_OP_KERNEL(asin, caml_ba_fp8_e5m2, f8e5m2, ASIN_FLOAT_OP,\n                   fp8_e5m2_to_float, float_to_fp8_e5m2)\nLOW_PREC_OP_IMPL(asin, caml_ba_fp8_e5m2, f8e5m2)\n\nstatic const unary_op_table asin_table = {.f16 = nx_c_asin_f16,\n                                          .f32 = nx_c_asin_f32,\n                                          .f64 = nx_c_asin_f64,\n                                          .bf16 = nx_c_asin_bf16,\n                                          .f8e4m3 = nx_c_asin_f8e4m3,\n                                          .f8e5m2 = nx_c_asin_f8e5m2};\n\n// Acos\n#define ACOS_FLOAT_OP(x) (acosf(x))\n#define ACOS_DOUBLE_OP(x) (acos(x))\nGENERATE_UNARY_FLOAT_OP(acos, ACOS_FLOAT_OP, ACOS_DOUBLE_OP)\n\nLOW_PREC_OP_KERNEL(acos, uint16_t, f16, ACOS_FLOAT_OP, half_to_float, float_to_half)\nLOW_PREC_OP_IMPL(acos, uint16_t, f16)\nLOW_PREC_OP_KERNEL(acos, caml_ba_bfloat16, bf16, 
ACOS_FLOAT_OP,\n                   bfloat16_to_float, float_to_bfloat16)\nLOW_PREC_OP_IMPL(acos, caml_ba_bfloat16, bf16)\nLOW_PREC_OP_KERNEL(acos, caml_ba_fp8_e4m3, f8e4m3, ACOS_FLOAT_OP,\n                   fp8_e4m3_to_float, float_to_fp8_e4m3)\nLOW_PREC_OP_IMPL(acos, caml_ba_fp8_e4m3, f8e4m3)\nLOW_PREC_OP_KERNEL(acos, caml_ba_fp8_e5m2, f8e5m2, ACOS_FLOAT_OP,\n                   fp8_e5m2_to_float, float_to_fp8_e5m2)\nLOW_PREC_OP_IMPL(acos, caml_ba_fp8_e5m2, f8e5m2)\n\nstatic const unary_op_table acos_table = {.f16 = nx_c_acos_f16,\n                                          .f32 = nx_c_acos_f32,\n                                          .f64 = nx_c_acos_f64,\n                                          .bf16 = nx_c_acos_bf16,\n                                          .f8e4m3 = nx_c_acos_f8e4m3,\n                                          .f8e5m2 = nx_c_acos_f8e5m2};\n\n// Atan\n#define ATAN_FLOAT_OP(x) (atanf(x))\n#define ATAN_DOUBLE_OP(x) (atan(x))\nGENERATE_UNARY_FLOAT_OP(atan, ATAN_FLOAT_OP, ATAN_DOUBLE_OP)\n\nLOW_PREC_OP_KERNEL(atan, uint16_t, f16, ATAN_FLOAT_OP, half_to_float, float_to_half)\nLOW_PREC_OP_IMPL(atan, uint16_t, f16)\nLOW_PREC_OP_KERNEL(atan, caml_ba_bfloat16, bf16, ATAN_FLOAT_OP,\n                   bfloat16_to_float, float_to_bfloat16)\nLOW_PREC_OP_IMPL(atan, caml_ba_bfloat16, bf16)\nLOW_PREC_OP_KERNEL(atan, caml_ba_fp8_e4m3, f8e4m3, ATAN_FLOAT_OP,\n                   fp8_e4m3_to_float, float_to_fp8_e4m3)\nLOW_PREC_OP_IMPL(atan, caml_ba_fp8_e4m3, f8e4m3)\nLOW_PREC_OP_KERNEL(atan, caml_ba_fp8_e5m2, f8e5m2, ATAN_FLOAT_OP,\n                   fp8_e5m2_to_float, float_to_fp8_e5m2)\nLOW_PREC_OP_IMPL(atan, caml_ba_fp8_e5m2, f8e5m2)\n\nstatic const unary_op_table atan_table = {.f16 = nx_c_atan_f16,\n                                          .f32 = nx_c_atan_f32,\n                                          .f64 = nx_c_atan_f64,\n                                          .bf16 = nx_c_atan_bf16,\n                                          .f8e4m3 = 
nx_c_atan_f8e4m3,\n                                          .f8e5m2 = nx_c_atan_f8e5m2};\n\n// Sinh\n#define SINH_FLOAT_OP(x) (sinhf(x))\n#define SINH_DOUBLE_OP(x) (sinh(x))\nGENERATE_UNARY_FLOAT_OP(sinh, SINH_FLOAT_OP, SINH_DOUBLE_OP)\n\nLOW_PREC_OP_KERNEL(sinh, uint16_t, f16, SINH_FLOAT_OP, half_to_float, float_to_half)\nLOW_PREC_OP_IMPL(sinh, uint16_t, f16)\nLOW_PREC_OP_KERNEL(sinh, caml_ba_bfloat16, bf16, SINH_FLOAT_OP,\n                   bfloat16_to_float, float_to_bfloat16)\nLOW_PREC_OP_IMPL(sinh, caml_ba_bfloat16, bf16)\nLOW_PREC_OP_KERNEL(sinh, caml_ba_fp8_e4m3, f8e4m3, SINH_FLOAT_OP,\n                   fp8_e4m3_to_float, float_to_fp8_e4m3)\nLOW_PREC_OP_IMPL(sinh, caml_ba_fp8_e4m3, f8e4m3)\nLOW_PREC_OP_KERNEL(sinh, caml_ba_fp8_e5m2, f8e5m2, SINH_FLOAT_OP,\n                   fp8_e5m2_to_float, float_to_fp8_e5m2)\nLOW_PREC_OP_IMPL(sinh, caml_ba_fp8_e5m2, f8e5m2)\n\nstatic const unary_op_table sinh_table = {.f16 = nx_c_sinh_f16,\n                                          .f32 = nx_c_sinh_f32,\n                                          .f64 = nx_c_sinh_f64,\n                                          .bf16 = nx_c_sinh_bf16,\n                                          .f8e4m3 = nx_c_sinh_f8e4m3,\n                                          .f8e5m2 = nx_c_sinh_f8e5m2};\n\n// Cosh\n#define COSH_FLOAT_OP(x) (coshf(x))\n#define COSH_DOUBLE_OP(x) (cosh(x))\nGENERATE_UNARY_FLOAT_OP(cosh, COSH_FLOAT_OP, COSH_DOUBLE_OP)\n\nLOW_PREC_OP_KERNEL(cosh, uint16_t, f16, COSH_FLOAT_OP, half_to_float, float_to_half)\nLOW_PREC_OP_IMPL(cosh, uint16_t, f16)\nLOW_PREC_OP_KERNEL(cosh, caml_ba_bfloat16, bf16, COSH_FLOAT_OP,\n                   bfloat16_to_float, float_to_bfloat16)\nLOW_PREC_OP_IMPL(cosh, caml_ba_bfloat16, bf16)\nLOW_PREC_OP_KERNEL(cosh, caml_ba_fp8_e4m3, f8e4m3, COSH_FLOAT_OP,\n                   fp8_e4m3_to_float, float_to_fp8_e4m3)\nLOW_PREC_OP_IMPL(cosh, caml_ba_fp8_e4m3, f8e4m3)\nLOW_PREC_OP_KERNEL(cosh, caml_ba_fp8_e5m2, f8e5m2, COSH_FLOAT_OP,\n                   
fp8_e5m2_to_float, float_to_fp8_e5m2)\nLOW_PREC_OP_IMPL(cosh, caml_ba_fp8_e5m2, f8e5m2)\n\nstatic const unary_op_table cosh_table = {.f16 = nx_c_cosh_f16,\n                                          .f32 = nx_c_cosh_f32,\n                                          .f64 = nx_c_cosh_f64,\n                                          .bf16 = nx_c_cosh_bf16,\n                                          .f8e4m3 = nx_c_cosh_f8e4m3,\n                                          .f8e5m2 = nx_c_cosh_f8e5m2};\n\n// Tanh\n#define TANH_FLOAT_OP(x) (tanhf(x))\n#define TANH_DOUBLE_OP(x) (tanh(x))\nGENERATE_UNARY_FLOAT_OP(tanh, TANH_FLOAT_OP, TANH_DOUBLE_OP)\n\nLOW_PREC_OP_KERNEL(tanh, uint16_t, f16, TANH_FLOAT_OP, half_to_float, float_to_half)\nLOW_PREC_OP_IMPL(tanh, uint16_t, f16)\nLOW_PREC_OP_KERNEL(tanh, caml_ba_bfloat16, bf16, TANH_FLOAT_OP,\n                   bfloat16_to_float, float_to_bfloat16)\nLOW_PREC_OP_IMPL(tanh, caml_ba_bfloat16, bf16)\nLOW_PREC_OP_KERNEL(tanh, caml_ba_fp8_e4m3, f8e4m3, TANH_FLOAT_OP,\n                   fp8_e4m3_to_float, float_to_fp8_e4m3)\nLOW_PREC_OP_IMPL(tanh, caml_ba_fp8_e4m3, f8e4m3)\nLOW_PREC_OP_KERNEL(tanh, caml_ba_fp8_e5m2, f8e5m2, TANH_FLOAT_OP,\n                   fp8_e5m2_to_float, float_to_fp8_e5m2)\nLOW_PREC_OP_IMPL(tanh, caml_ba_fp8_e5m2, f8e5m2)\n\nstatic const unary_op_table tanh_table = {.f16 = nx_c_tanh_f16,\n                                          .f32 = nx_c_tanh_f32,\n                                          .f64 = nx_c_tanh_f64,\n                                          .bf16 = nx_c_tanh_bf16,\n                                          .f8e4m3 = nx_c_tanh_f8e4m3,\n                                          .f8e5m2 = nx_c_tanh_f8e5m2};\n\n// Rounding ops: float apply op, non-float are identity\n#define IDENTITY_OP(x) (x)\n#define TRUNC_FLOAT_OP(x) (truncf(x))\n#define TRUNC_DOUBLE_OP(x) (trunc(x))\n#define CEIL_FLOAT_OP(x) (ceilf(x))\n#define CEIL_DOUBLE_OP(x) (ceil(x))\n#define FLOOR_FLOAT_OP(x) (floorf(x))\n#define 
FLOOR_DOUBLE_OP(x) (floor(x))\n#define ROUND_FLOAT_OP(x) (roundf(x))\n#define ROUND_DOUBLE_OP(x) (round(x))\n\n#define GENERATE_UNARY_IDENTITY_NONFLOAT(name) \\\n  UNARY_OP_FOR_TYPE(name, int8_t, i8, IDENTITY_OP) \\\n  UNARY_OP_FOR_TYPE(name, uint8_t, u8, IDENTITY_OP) \\\n  UNARY_OP_FOR_TYPE(name, int16_t, i16, IDENTITY_OP) \\\n  UNARY_OP_FOR_TYPE(name, uint16_t, u16, IDENTITY_OP) \\\n  UNARY_OP_FOR_TYPE(name, int32_t, i32, IDENTITY_OP) \\\n  UNARY_OP_FOR_TYPE(name, int64_t, i64, IDENTITY_OP) \\\n  UNARY_OP_FOR_TYPE(name, uint32_t, u32, IDENTITY_OP) \\\n  UNARY_OP_FOR_TYPE(name, uint64_t, u64, IDENTITY_OP) \\\n  UNARY_OP_FOR_TYPE(name, intnat, inat, IDENTITY_OP) \\\n  UNARY_OP_FOR_TYPE(name, complex32, c32, IDENTITY_OP) \\\n  UNARY_OP_FOR_TYPE(name, complex64, c64, IDENTITY_OP) \\\n  UNARY_OP_FOR_TYPE(name, caml_ba_bool, bool_, IDENTITY_OP) \\\n  INT4_UNARY_IMPL(name, 1, i4, IDENTITY_OP) \\\n  INT4_UNARY_IMPL(name, 0, u4, IDENTITY_OP)\n\nGENERATE_UNARY_IDENTITY_NONFLOAT(trunc)\nUNARY_OP_FOR_TYPE(trunc, float, f32, TRUNC_FLOAT_OP)\nUNARY_OP_FOR_TYPE(trunc, double, f64, TRUNC_DOUBLE_OP)\nLOW_PREC_OP_KERNEL(trunc, uint16_t, f16, TRUNC_FLOAT_OP, half_to_float, float_to_half)\nLOW_PREC_OP_IMPL(trunc, uint16_t, f16)\nLOW_PREC_OP_KERNEL(trunc, caml_ba_bfloat16, bf16, TRUNC_FLOAT_OP,\n                   bfloat16_to_float, float_to_bfloat16)\nLOW_PREC_OP_IMPL(trunc, caml_ba_bfloat16, bf16)\nLOW_PREC_OP_KERNEL(trunc, caml_ba_fp8_e4m3, f8e4m3, TRUNC_FLOAT_OP,\n                   fp8_e4m3_to_float, float_to_fp8_e4m3)\nLOW_PREC_OP_IMPL(trunc, caml_ba_fp8_e4m3, f8e4m3)\nLOW_PREC_OP_KERNEL(trunc, caml_ba_fp8_e5m2, f8e5m2, TRUNC_FLOAT_OP,\n                   fp8_e5m2_to_float, float_to_fp8_e5m2)\nLOW_PREC_OP_IMPL(trunc, caml_ba_fp8_e5m2, f8e5m2)\nBUILD_DISPATCH_TABLE(trunc);\n\nGENERATE_UNARY_IDENTITY_NONFLOAT(ceil)\nUNARY_OP_FOR_TYPE(ceil, float, f32, CEIL_FLOAT_OP)\nUNARY_OP_FOR_TYPE(ceil, double, f64, CEIL_DOUBLE_OP)\nLOW_PREC_OP_KERNEL(ceil, uint16_t, f16, CEIL_FLOAT_OP, 
half_to_float, float_to_half)\nLOW_PREC_OP_IMPL(ceil, uint16_t, f16)\nLOW_PREC_OP_KERNEL(ceil, caml_ba_bfloat16, bf16, CEIL_FLOAT_OP,\n                   bfloat16_to_float, float_to_bfloat16)\nLOW_PREC_OP_IMPL(ceil, caml_ba_bfloat16, bf16)\nLOW_PREC_OP_KERNEL(ceil, caml_ba_fp8_e4m3, f8e4m3, CEIL_FLOAT_OP,\n                   fp8_e4m3_to_float, float_to_fp8_e4m3)\nLOW_PREC_OP_IMPL(ceil, caml_ba_fp8_e4m3, f8e4m3)\nLOW_PREC_OP_KERNEL(ceil, caml_ba_fp8_e5m2, f8e5m2, CEIL_FLOAT_OP,\n                   fp8_e5m2_to_float, float_to_fp8_e5m2)\nLOW_PREC_OP_IMPL(ceil, caml_ba_fp8_e5m2, f8e5m2)\nBUILD_DISPATCH_TABLE(ceil);\n\nGENERATE_UNARY_IDENTITY_NONFLOAT(floor)\nUNARY_OP_FOR_TYPE(floor, float, f32, FLOOR_FLOAT_OP)\nUNARY_OP_FOR_TYPE(floor, double, f64, FLOOR_DOUBLE_OP)\nLOW_PREC_OP_KERNEL(floor, uint16_t, f16, FLOOR_FLOAT_OP, half_to_float, float_to_half)\nLOW_PREC_OP_IMPL(floor, uint16_t, f16)\nLOW_PREC_OP_KERNEL(floor, caml_ba_bfloat16, bf16, FLOOR_FLOAT_OP,\n                   bfloat16_to_float, float_to_bfloat16)\nLOW_PREC_OP_IMPL(floor, caml_ba_bfloat16, bf16)\nLOW_PREC_OP_KERNEL(floor, caml_ba_fp8_e4m3, f8e4m3, FLOOR_FLOAT_OP,\n                   fp8_e4m3_to_float, float_to_fp8_e4m3)\nLOW_PREC_OP_IMPL(floor, caml_ba_fp8_e4m3, f8e4m3)\nLOW_PREC_OP_KERNEL(floor, caml_ba_fp8_e5m2, f8e5m2, FLOOR_FLOAT_OP,\n                   fp8_e5m2_to_float, float_to_fp8_e5m2)\nLOW_PREC_OP_IMPL(floor, caml_ba_fp8_e5m2, f8e5m2)\nBUILD_DISPATCH_TABLE(floor);\n\nGENERATE_UNARY_IDENTITY_NONFLOAT(round)\nUNARY_OP_FOR_TYPE(round, float, f32, ROUND_FLOAT_OP)\nUNARY_OP_FOR_TYPE(round, double, f64, ROUND_DOUBLE_OP)\nLOW_PREC_OP_KERNEL(round, uint16_t, f16, ROUND_FLOAT_OP, half_to_float, float_to_half)\nLOW_PREC_OP_IMPL(round, uint16_t, f16)\nLOW_PREC_OP_KERNEL(round, caml_ba_bfloat16, bf16, ROUND_FLOAT_OP,\n                   bfloat16_to_float, float_to_bfloat16)\nLOW_PREC_OP_IMPL(round, caml_ba_bfloat16, bf16)\nLOW_PREC_OP_KERNEL(round, caml_ba_fp8_e4m3, f8e4m3, ROUND_FLOAT_OP,\n               
    fp8_e4m3_to_float, float_to_fp8_e4m3)\nLOW_PREC_OP_IMPL(round, caml_ba_fp8_e4m3, f8e4m3)\nLOW_PREC_OP_KERNEL(round, caml_ba_fp8_e5m2, f8e5m2, ROUND_FLOAT_OP,\n                   fp8_e5m2_to_float, float_to_fp8_e5m2)\nLOW_PREC_OP_IMPL(round, caml_ba_fp8_e5m2, f8e5m2)\nBUILD_DISPATCH_TABLE(round);\n\n// Erf\n#define ERF_FLOAT_OP(x) (erff(x))\n#define ERF_DOUBLE_OP(x) (erf(x))\nGENERATE_UNARY_FLOAT_OP(erf, ERF_FLOAT_OP, ERF_DOUBLE_OP)\n\nLOW_PREC_OP_KERNEL(erf, uint16_t, f16, ERF_FLOAT_OP, half_to_float, float_to_half)\nLOW_PREC_OP_IMPL(erf, uint16_t, f16)\nLOW_PREC_OP_KERNEL(erf, caml_ba_bfloat16, bf16, ERF_FLOAT_OP,\n                   bfloat16_to_float, float_to_bfloat16)\nLOW_PREC_OP_IMPL(erf, caml_ba_bfloat16, bf16)\nLOW_PREC_OP_KERNEL(erf, caml_ba_fp8_e4m3, f8e4m3, ERF_FLOAT_OP,\n                   fp8_e4m3_to_float, float_to_fp8_e4m3)\nLOW_PREC_OP_IMPL(erf, caml_ba_fp8_e4m3, f8e4m3)\nLOW_PREC_OP_KERNEL(erf, caml_ba_fp8_e5m2, f8e5m2, ERF_FLOAT_OP,\n                   fp8_e5m2_to_float, float_to_fp8_e5m2)\nLOW_PREC_OP_IMPL(erf, caml_ba_fp8_e5m2, f8e5m2)\n\nstatic const unary_op_table erf_table = {.f16 = nx_c_erf_f16,\n                                         .f32 = nx_c_erf_f32,\n                                         .f64 = nx_c_erf_f64,\n                                         .bf16 = nx_c_erf_bf16,\n                                         .f8e4m3 = nx_c_erf_f8e4m3,\n                                         .f8e5m2 = nx_c_erf_f8e5m2};\n\n// Shared dispatch infrastructure\n\n// Generic dispatch function for unary operations\nstatic void dispatch_unary_op(value v_x, value v_z, const unary_op_table *table,\n                              const char *op_name) {\n  // Extract ndarrays from FFI tensors\n  ndarray_t x = extract_ndarray(v_x);\n  ndarray_t z = extract_ndarray(v_z);\n\n  // Check shapes match\n  if (x.ndim != z.ndim) {\n    cleanup_ndarray(&x);\n    cleanup_ndarray(&z);\n    caml_failwith(\"shape mismatch\");\n  }\n  for (int i = 0; i < x.ndim; 
i++) {\n    if (x.shape[i] != z.shape[i]) {\n      cleanup_ndarray(&x);\n      cleanup_ndarray(&z);\n      caml_failwith(\"shape mismatch\");\n    }\n  }\n\n  // Get bigarray kind from the data field\n  value v_x_data = Field(v_x, FFI_TENSOR_DATA);\n  value v_z_data = Field(v_z, FFI_TENSOR_DATA);\n\n  struct caml_ba_array *ba = Caml_ba_array_val(v_x_data);\n  int kind = nx_buffer_get_kind(ba);\n\n  // Check kinds match for z\n  int kind_z = nx_buffer_get_kind(Caml_ba_array_val(v_z_data));\n  if (kind != kind_z) {\n    cleanup_ndarray(&x);\n    cleanup_ndarray(&z);\n    caml_failwith(\"dtype mismatch\");\n  }\n\n  // Select operation based on dtype\n  unary_op_t op = NULL;\n  switch (kind) {\n    case CAML_BA_SINT8:\n      op = table->i8;\n      break;\n    case CAML_BA_UINT8:\n      op = table->u8;\n      break;\n    case CAML_BA_SINT16:\n      op = table->i16;\n      break;\n    case CAML_BA_UINT16:\n      op = table->u16;\n      break;\n    case CAML_BA_INT32:\n      op = table->i32;\n      break;\n    case CAML_BA_INT64:\n      op = table->i64;\n      break;\n    case NX_BA_UINT32:\n      op = table->u32;\n      break;\n    case NX_BA_UINT64:\n      op = table->u64;\n      break;\n    case CAML_BA_CAML_INT:\n    case CAML_BA_NATIVE_INT:\n      op = table->inat;\n      break;\n    case CAML_BA_FLOAT16:\n      op = table->f16;\n      break;\n    case CAML_BA_FLOAT32:\n      op = table->f32;\n      break;\n    case CAML_BA_FLOAT64:\n      op = table->f64;\n      break;\n    case CAML_BA_COMPLEX32:\n      op = table->c32;\n      break;\n    case CAML_BA_COMPLEX64:\n      op = table->c64;\n      break;\n    case NX_BA_BFLOAT16:\n      op = table->bf16;\n      break;\n    case NX_BA_BOOL:\n      op = table->bool_;\n      break;\n    case NX_BA_INT4:\n      op = table->i4;\n      break;\n    case NX_BA_UINT4:\n      op = table->u4;\n      break;\n    case NX_BA_FP8_E4M3:\n      op = table->f8e4m3;\n      break;\n    case NX_BA_FP8_E5M2:\n      op = table->f8e5m2;\n     
 break;\n    default:\n      cleanup_ndarray(&x);\n      cleanup_ndarray(&z);\n      caml_failwith(\"dispatch_unary_op: unsupported dtype\");\n  }\n\n  if (!op) {\n    char msg[256];\n    snprintf(msg, sizeof(msg), \"%s: operation not supported for dtype\",\n             op_name);\n    cleanup_ndarray(&x);\n    cleanup_ndarray(&z);\n    caml_failwith(msg);\n  }\n\n  // Enter blocking section for potentially long computation\n  caml_enter_blocking_section();\n  op(&x, &z);\n  caml_leave_blocking_section();\n\n  // Clean up if heap allocated\n  cleanup_ndarray(&x);\n  cleanup_ndarray(&z);\n}\n\n// ============================================================================\n// OCaml FFI Stubs\n// ============================================================================\n\n// Macro to define FFI stub for each operation\n#define DEFINE_FFI_STUB(name)                           \\\n  CAMLprim value caml_nx_##name(value v_x, value v_z) { \\\n    CAMLparam2(v_x, v_z);                               \\\n    dispatch_unary_op(v_x, v_z, &name##_table, #name);  \\\n    CAMLreturn(Val_unit);                               \\\n  }\n\nDEFINE_FFI_STUB(neg)\nDEFINE_FFI_STUB(log)\nDEFINE_FFI_STUB(exp)\nDEFINE_FFI_STUB(sin)\nDEFINE_FFI_STUB(cos)\nDEFINE_FFI_STUB(sqrt)\nDEFINE_FFI_STUB(abs)\nDEFINE_FFI_STUB(recip)\nDEFINE_FFI_STUB(sign)\nDEFINE_FFI_STUB(tan)\nDEFINE_FFI_STUB(asin)\nDEFINE_FFI_STUB(acos)\nDEFINE_FFI_STUB(atan)\nDEFINE_FFI_STUB(sinh)\nDEFINE_FFI_STUB(cosh)\nDEFINE_FFI_STUB(tanh)\nDEFINE_FFI_STUB(trunc)\nDEFINE_FFI_STUB(ceil)\nDEFINE_FFI_STUB(floor)\nDEFINE_FFI_STUB(round)\nDEFINE_FFI_STUB(erf)\n"
  },
  {
    "path": "packages/nx/lib/backend_c/nx_c_window.c",
    "content": "/*---------------------------------------------------------------------------\n   Copyright (c) 2026 The Raven authors. All rights reserved.\n   SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*/\n\n// Window operations for nx C backend (unfold/fold)\n//\n// Generalized sliding-window extraction and its inverse.\n// The last K dimensions of the input are treated as spatial; all preceding\n// dimensions (\"leading\") are preserved as-is.\n//\n// unfold: (*leading, *spatial_K) -> (*leading, prod(kernel_size), L)\n// fold:   (*leading, prod(kernel_size), L) -> (*leading, *output_size)\n\n#include <caml/alloc.h>\n#include <caml/bigarray.h>\n#include <caml/custom.h>\n#include <caml/fail.h>\n#include <caml/memory.h>\n#include <caml/threads.h>\n\n#include \"nx_c_shared.h\"\n\n// Max supported spatial dimensions (for stack arrays)\n#define MAX_SPATIAL_DIMS 32\n\n// Type operations for element-wise copy, zero, add\ntypedef void (*elem_add_t)(void*, long, void*, long);\ntypedef void (*elem_copy_t)(void*, long, void*, long);\ntypedef void (*elem_zero_t)(void*, long);\n\n// Table for type-specific operations\ntypedef struct {\n  elem_add_t add;\n  elem_copy_t copy;\n  elem_zero_t zero;\n} type_ops_t;\n\n// Dispatch table for each type\ntypedef struct {\n  type_ops_t i8, u8, i16, u16, i32, i64, u32, u64, inat;\n  type_ops_t f16, f32, f64;\n  type_ops_t c32, c64;\n  type_ops_t bf16, bool_, i4, u4, f8e4m3, f8e5m2;\n} type_ops_table_t;\n\n// Macros for standard types\n#define STANDARD_ADD(T, suffix)                                                \\\n  static void add_elem_##suffix(void* out, long o_off, void* in, long i_off) { \\\n    T* ot = (T*)out;                                                           \\\n    T* it = (T*)in;                                                            \\\n    ot[o_off] += it[i_off];                                                    \\\n  }\n\n#define STANDARD_COPY(T, 
suffix)                                  \\\n  static void copy_elem_##suffix(void* out, long o_off, void* in, \\\n                                 long i_off) {                    \\\n    T* ot = (T*)out;                                              \\\n    T* it = (T*)in;                                               \\\n    ot[o_off] = it[i_off];                                        \\\n  }\n\n#define STANDARD_ZERO(T, suffix)                          \\\n  static void zero_elem_##suffix(void* out, long o_off) { \\\n    T* ot = (T*)out;                                      \\\n    ot[o_off] = (T)0;                                     \\\n  }\n\n// Generate for standard types\n#define GENERATE_STANDARD_OPS(T, suffix) \\\n  STANDARD_ADD(T, suffix)                \\\n  STANDARD_COPY(T, suffix)               \\\n  STANDARD_ZERO(T, suffix)\n\nGENERATE_STANDARD_OPS(int8_t, i8)\nGENERATE_STANDARD_OPS(uint8_t, u8)\nGENERATE_STANDARD_OPS(int16_t, i16)\nGENERATE_STANDARD_OPS(uint16_t, u16)\nGENERATE_STANDARD_OPS(int32_t, i32)\nGENERATE_STANDARD_OPS(int64_t, i64)\nGENERATE_STANDARD_OPS(uint32_t, u32)\nGENERATE_STANDARD_OPS(uint64_t, u64)\nGENERATE_STANDARD_OPS(intnat, inat)\nGENERATE_STANDARD_OPS(float, f32)\nGENERATE_STANDARD_OPS(double, f64)\nGENERATE_STANDARD_OPS(complex32, c32)\nGENERATE_STANDARD_OPS(complex64, c64)\nGENERATE_STANDARD_OPS(caml_ba_bool, bool_)\n\n// For low-precision floats: add requires conversion, copy/zero are direct\nSTANDARD_COPY(uint16_t, f16)\nSTANDARD_ZERO(uint16_t, f16)\nstatic void add_elem_f16(void* out, long o_off, void* in, long i_off) {\n  uint16_t* ot = (uint16_t*)out;\n  uint16_t* it = (uint16_t*)in;\n  float a = half_to_float(ot[o_off]);\n  float b = half_to_float(it[i_off]);\n  ot[o_off] = float_to_half(a + b);\n}\n\nSTANDARD_COPY(caml_ba_bfloat16, bf16)\nSTANDARD_ZERO(caml_ba_bfloat16, bf16)\nstatic void add_elem_bf16(void* out, long o_off, void* in, long i_off) {\n  caml_ba_bfloat16* ot = (caml_ba_bfloat16*)out;\n  caml_ba_bfloat16* 
it = (caml_ba_bfloat16*)in;\n  float a = bfloat16_to_float(ot[o_off]);\n  float b = bfloat16_to_float(it[i_off]);\n  ot[o_off] = float_to_bfloat16(a + b);\n}\n\nSTANDARD_COPY(caml_ba_fp8_e4m3, f8e4m3)\nSTANDARD_ZERO(caml_ba_fp8_e4m3, f8e4m3)\nstatic void add_elem_f8e4m3(void* out, long o_off, void* in, long i_off) {\n  caml_ba_fp8_e4m3* ot = (caml_ba_fp8_e4m3*)out;\n  caml_ba_fp8_e4m3* it = (caml_ba_fp8_e4m3*)in;\n  float a = fp8_e4m3_to_float(ot[o_off]);\n  float b = fp8_e4m3_to_float(it[i_off]);\n  ot[o_off] = float_to_fp8_e4m3(a + b);\n}\n\nSTANDARD_COPY(caml_ba_fp8_e5m2, f8e5m2)\nSTANDARD_ZERO(caml_ba_fp8_e5m2, f8e5m2)\nstatic void add_elem_f8e5m2(void* out, long o_off, void* in, long i_off) {\n  caml_ba_fp8_e5m2* ot = (caml_ba_fp8_e5m2*)out;\n  caml_ba_fp8_e5m2* it = (caml_ba_fp8_e5m2*)in;\n  float a = fp8_e5m2_to_float(ot[o_off]);\n  float b = fp8_e5m2_to_float(it[i_off]);\n  ot[o_off] = float_to_fp8_e5m2(a + b);\n}\n\n// For int4/uint4 (packed nibbles)\nstatic void zero_elem_i4(void* out, long o_off) {\n  uint8_t* ot = (uint8_t*)out;\n  long byte_off = o_off / 2;\n  int nib_off = o_off % 2;\n  if (nib_off) {\n    ot[byte_off] &= 0x0F;\n  } else {\n    ot[byte_off] &= 0xF0;\n  }\n}\n\nstatic void copy_elem_i4(void* out, long o_off, void* in, long i_off) {\n  uint8_t* oi = (uint8_t*)in;\n  uint8_t* oo = (uint8_t*)out;\n  long byte_i = i_off / 2;\n  int nib_i = i_off % 2;\n  long byte_o = o_off / 2;\n  int nib_o = o_off % 2;\n  int8_t val = nib_i ? (oi[byte_i] >> 4) : ((oi[byte_i] & 0x0F) << 4) >> 4;\n  uint8_t nib = (uint8_t)val & 0x0F;\n  if (nib_o) {\n    oo[byte_o] = (oo[byte_o] & 0x0F) | (nib << 4);\n  } else {\n    oo[byte_o] = (oo[byte_o] & 0xF0) | nib;\n  }\n}\n\nstatic void add_elem_i4(void* out, long o_off, void* in, long i_off) {\n  uint8_t* od = (uint8_t*)out;\n  uint8_t* id = (uint8_t*)in;\n  long byte_o = o_off / 2;\n  int nib_o = o_off % 2;\n  long byte_i = i_off / 2;\n  int nib_i = i_off % 2;\n  int8_t a = nib_o ? 
(int8_t)(od[byte_o] & 0xF0) >> 4 : (int8_t)((od[byte_o] & 0x0F) << 4) >> 4;\n  int8_t b = nib_i ? (int8_t)(id[byte_i] & 0xF0) >> 4 : (int8_t)((id[byte_i] & 0x0F) << 4) >> 4;\n  // The int8_t casts sign-extend each nibble so negative int4 values are\n  // added and clamped correctly.\n  int res = (int)a + (int)b;\n  res = CLAMP_I4(res);\n  uint8_t nib = (uint8_t)res & 0x0F;\n  if (nib_o) {\n    od[byte_o] = (od[byte_o] & 0x0F) | (nib << 4);\n  } else {\n    od[byte_o] = (od[byte_o] & 0xF0) | nib;\n  }\n}\n\nstatic void zero_elem_u4(void* out, long o_off) {\n  uint8_t* ot = (uint8_t*)out;\n  long byte_off = o_off / 2;\n  int nib_off = o_off % 2;\n  if (nib_off) {\n    ot[byte_off] &= 0x0F;\n  } else {\n    ot[byte_off] &= 0xF0;\n  }\n}\n\nstatic void copy_elem_u4(void* out, long o_off, void* in, long i_off) {\n  uint8_t* oi = (uint8_t*)in;\n  uint8_t* oo = (uint8_t*)out;\n  long byte_i = i_off / 2;\n  int nib_i = i_off % 2;\n  long byte_o = o_off / 2;\n  int nib_o = o_off % 2;\n  uint8_t val = nib_i ? (oi[byte_i] >> 4) & 0x0F : oi[byte_i] & 0x0F;\n  uint8_t nib = val & 0x0F;\n  if (nib_o) {\n    oo[byte_o] = (oo[byte_o] & 0x0F) | (nib << 4);\n  } else {\n    oo[byte_o] = (oo[byte_o] & 0xF0) | nib;\n  }\n}\n\nstatic void add_elem_u4(void* out, long o_off, void* in, long i_off) {\n  uint8_t* od = (uint8_t*)out;\n  uint8_t* id = (uint8_t*)in;\n  long byte_o = o_off / 2;\n  int nib_o = o_off % 2;\n  long byte_i = i_off / 2;\n  int nib_i = i_off % 2;\n  uint8_t a = nib_o ? (od[byte_o] >> 4) & 0x0F : od[byte_o] & 0x0F;\n  uint8_t b = nib_i ?
(id[byte_i] >> 4) & 0x0F : id[byte_i] & 0x0F;\n  int res = (int)a + (int)b;\n  res = CLAMP_U4(res);\n  uint8_t nib = (uint8_t)res & 0x0F;\n  if (nib_o) {\n    od[byte_o] = (od[byte_o] & 0x0F) | (nib << 4);\n  } else {\n    od[byte_o] = (od[byte_o] & 0xF0) | nib;\n  }\n}\n\n// Build dispatch table\nstatic const type_ops_table_t type_ops_table = {\n    .i8 = {add_elem_i8, copy_elem_i8, zero_elem_i8},\n    .u8 = {add_elem_u8, copy_elem_u8, zero_elem_u8},\n    .i16 = {add_elem_i16, copy_elem_i16, zero_elem_i16},\n    .u16 = {add_elem_u16, copy_elem_u16, zero_elem_u16},\n    .i32 = {add_elem_i32, copy_elem_i32, zero_elem_i32},\n    .i64 = {add_elem_i64, copy_elem_i64, zero_elem_i64},\n    .u32 = {add_elem_u32, copy_elem_u32, zero_elem_u32},\n    .u64 = {add_elem_u64, copy_elem_u64, zero_elem_u64},\n    .inat = {add_elem_inat, copy_elem_inat, zero_elem_inat},\n    .f16 = {add_elem_f16, copy_elem_f16, zero_elem_f16},\n    .f32 = {add_elem_f32, copy_elem_f32, zero_elem_f32},\n    .f64 = {add_elem_f64, copy_elem_f64, zero_elem_f64},\n    .c32 = {add_elem_c32, copy_elem_c32, zero_elem_c32},\n    .c64 = {add_elem_c64, copy_elem_c64, zero_elem_c64},\n    .bf16 = {add_elem_bf16, copy_elem_bf16, zero_elem_bf16},\n    .bool_ = {add_elem_bool_, copy_elem_bool_, zero_elem_bool_},\n    .i4 = {add_elem_i4, copy_elem_i4, zero_elem_i4},\n    .u4 = {add_elem_u4, copy_elem_u4, zero_elem_u4},\n    .f8e4m3 = {add_elem_f8e4m3, copy_elem_f8e4m3, zero_elem_f8e4m3},\n    .f8e5m2 = {add_elem_f8e5m2, copy_elem_f8e5m2, zero_elem_f8e5m2}};\n\n// Helper to get elem_size (for memset, etc.)\nstatic size_t get_elem_size(int kind) {\n  switch (kind) {\n    case CAML_BA_SINT8:\n    case CAML_BA_UINT8:\n    case NX_BA_BOOL:\n    case NX_BA_FP8_E4M3:\n    case NX_BA_FP8_E5M2:\n      return 1;\n    case CAML_BA_SINT16:\n    case CAML_BA_UINT16:\n    case CAML_BA_FLOAT16:\n    case NX_BA_BFLOAT16:\n      return 2;\n    case CAML_BA_INT32:\n    case CAML_BA_FLOAT32:\n      return 4;\n    case 
CAML_BA_INT64:\n    case CAML_BA_NATIVE_INT:\n    case CAML_BA_CAML_INT:\n    case CAML_BA_FLOAT64:\n    case CAML_BA_COMPLEX32:\n      return 8;\n    case CAML_BA_COMPLEX64:\n      return 16;\n    case NX_BA_INT4:\n    case NX_BA_UINT4:\n      return 0;  // Special handling\n    default:\n      return 0;\n  }\n}\n\n// Compute the flat offset into a tensor for a given leading index.\n// leading_idx is a flat index into the collapsed leading dims.\n// Returns the strided offset for that leading position.\nstatic long leading_offset(const ndarray_t* t, int leading_ndim,\n                           long leading_idx) {\n  long off = 0;\n  long rem = leading_idx;\n  for (int d = leading_ndim - 1; d >= 0; d--) {\n    long coord = rem % t->shape[d];\n    rem /= t->shape[d];\n    off += coord * t->strides[d];\n  }\n  return off;\n}\n\n// Implementation for unfold\n// input:  (*leading, *spatial_K)\n// output: (*leading, kernel_prod, L)\nstatic void nx_c_unfold_impl(const ndarray_t* in, ndarray_t* out,\n                             int K, const long* kernel_size,\n                             const long* stride_arr, const long* dilation_arr,\n                             const long* pad_before, const long* pad_after,\n                             int leading_ndim, const type_ops_t* ops,\n                             size_t elem_size) {\n  long* out_spatial = (long*)calloc(K, sizeof(long));\n\n  // Compute leading_size = product of all leading dimensions\n  long leading_size = 1;\n  for (int d = 0; d < leading_ndim; d++) leading_size *= in->shape[d];\n\n  long kernel_prod = 1;\n  bool no_padding = true;\n  for (int d = 0; d < K; d++) {\n    long effective_ker = dilation_arr[d] * (kernel_size[d] - 1) + 1;\n    long padded = in->shape[leading_ndim + d] + pad_before[d] + pad_after[d];\n    long diff = padded - effective_ker;\n    out_spatial[d] = (diff / stride_arr[d]) + 1;\n    kernel_prod *= kernel_size[d];\n    if (pad_before[d] != 0 || pad_after[d] != 0) no_padding = 
false;\n  }\n  if (kernel_prod == 0) no_padding = false;\n  long L = 1;\n  for (int d = 0; d < K; d++) L *= out_spatial[d];\n\n  // Output shape: (*leading, kernel_prod, L)\n  // out dims: leading_ndim + 2\n  // out->strides[leading_ndim] is stride for kernel_prod axis\n  // out->strides[leading_ndim + 1] is stride for L axis\n\n  long* out_cumprod = (long*)calloc(K, sizeof(long));\n  if (K > 0) {\n    out_cumprod[K - 1] = 1;\n    for (int i = K - 2; i >= 0; i--)\n      out_cumprod[i] = out_cumprod[i + 1] * out_spatial[i + 1];\n  }\n\n  // Pre-compute kernel offsets for the no-padding fast path\n  long* kernel_offsets = NULL;\n  if (no_padding && kernel_prod > 0) {\n    kernel_offsets = (long*)malloc(kernel_prod * sizeof(long));\n    if (kernel_offsets) {\n      long coords[MAX_SPATIAL_DIMS] = {0};\n      for (long kf = 0; kf < kernel_prod; ++kf) {\n        long offset = 0;\n        for (int d = 0; d < K; ++d)\n          offset += coords[d] * dilation_arr[d] * in->strides[leading_ndim + d];\n        kernel_offsets[kf] = offset;\n        for (int d = K - 1; d >= 0; --d) {\n          coords[d]++;\n          if (coords[d] < kernel_size[d]) break;\n          coords[d] = 0;\n        }\n      }\n    } else {\n      no_padding = false;\n    }\n  }\n\n  if (no_padding) {\n    long stride_steps[MAX_SPATIAL_DIMS];\n    for (int d = 0; d < K; ++d)\n      stride_steps[d] = stride_arr[d] * in->strides[leading_ndim + d];\n\n    for (long lead = 0; lead < leading_size; ++lead) {\n      long base_in = leading_offset(in, leading_ndim, lead);\n      long base_out = leading_offset(out, leading_ndim, lead);\n      long block_coords[MAX_SPATIAL_DIMS] = {0};\n      long block_offset = 0;\n      for (long l = 0; l < L; ++l) {\n        long out_l_base = base_out + l * out->strides[leading_ndim + 1];\n        long in_block_base = base_in + block_offset;\n        for (long kf = 0; kf < kernel_prod; ++kf) {\n          long out_off = out_l_base + kf * out->strides[leading_ndim];\n          
long in_off = in_block_base + kernel_offsets[kf];\n          ops->copy(out->data, out->offset + out_off, in->data,\n                    in->offset + in_off);\n        }\n        for (int d = K - 1; d >= 0; --d) {\n          block_coords[d]++;\n          block_offset += stride_steps[d];\n          if (block_coords[d] < out_spatial[d]) break;\n          block_offset -= out_spatial[d] * stride_steps[d];\n          block_coords[d] = 0;\n        }\n      }\n    }\n    goto cleanup;\n  }\n\n  // General path with padding\n#pragma omp parallel for collapse(2) if (leading_size * L > 1000)\n  for (long lead = 0; lead < leading_size; lead++) {\n    for (long l = 0; l < L; l++) {\n      long base_in = leading_offset(in, leading_ndim, lead);\n      long base_out = leading_offset(out, leading_ndim, lead);\n      long temp = l;\n      long block_pos[MAX_SPATIAL_DIMS];\n      for (int d = 0; d < K; d++) {\n        block_pos[d] = temp / out_cumprod[d];\n        temp %= out_cumprod[d];\n      }\n      for (long kf = 0; kf < kernel_prod; kf++) {\n        long k_temp = kf;\n        long k_pos[MAX_SPATIAL_DIMS];\n        for (int d = K - 1; d >= 0; d--) {\n          k_pos[d] = k_temp % kernel_size[d];\n          k_temp /= kernel_size[d];\n        }\n        long in_off = base_in;\n        bool valid = true;\n        for (int d = 0; d < K; d++) {\n          long sp =\n              block_pos[d] * stride_arr[d] + k_pos[d] * dilation_arr[d] - pad_before[d];\n          if (sp < 0 || sp >= in->shape[leading_ndim + d]) {\n            valid = false;\n            break;\n          }\n          in_off += sp * in->strides[leading_ndim + d];\n        }\n        long out_off = base_out + kf * out->strides[leading_ndim] +\n                       l * out->strides[leading_ndim + 1];\n        if (valid) {\n          ops->copy(out->data, out->offset + out_off, in->data,\n                    in->offset + in_off);\n        } else {\n          ops->zero(out->data, out->offset + out_off);\n        }\n     
 }\n    }\n  }\n\ncleanup:\n  free(out_spatial);\n  free(out_cumprod);\n  if (kernel_offsets) free(kernel_offsets);\n}\n\n// Implementation for fold\n// input:  (*leading, kernel_prod, L)\n// output: (*leading, *output_size)\nstatic void nx_c_fold_impl(const ndarray_t* in, ndarray_t* out,\n                           int K, const long* output_size,\n                           const long* kernel_size, const long* stride_arr,\n                           const long* dilation_arr, const long* pad_before,\n                           const long* pad_after, int leading_ndim,\n                           const type_ops_t* ops, size_t elem_size) {\n\n  // Compute leading_size = product of all leading dimensions\n  long leading_size = 1;\n  for (int d = 0; d < leading_ndim; d++) leading_size *= in->shape[d];\n\n  long kernel_prod = in->shape[leading_ndim];\n  long L = in->shape[leading_ndim + 1];\n\n  long expected_block[MAX_SPATIAL_DIMS];\n  long expected_L = 1;\n  for (int d = 0; d < K; d++) {\n    long effective_ker = dilation_arr[d] * (kernel_size[d] - 1) + 1;\n    long padded = output_size[d] + pad_before[d] + pad_after[d];\n    long diff = padded - effective_ker;\n    expected_block[d] = (diff / stride_arr[d]) + 1;\n    expected_L *= expected_block[d];\n  }\n\n  long* out_cumprod = (long*)calloc(K, sizeof(long));\n  if (K > 0) {\n    out_cumprod[K - 1] = 1;\n    for (int i = K - 2; i >= 0; i--)\n      out_cumprod[i] = out_cumprod[i + 1] * expected_block[i + 1];\n  }\n\n  // Zero the output before accumulating. A byte memset works for every dtype\n  // except packed int4/uint4, where elem_size is 0; fall back to per-element\n  // zeroing there so both nibbles of each byte are cleared.\n  long total_out = total_elements_safe(out);\n  if (elem_size > 0) {\n    memset((char*)out->data + out->offset * elem_size, 0,\n           total_out * elem_size);\n  } else {\n    for (long i = 0; i < total_out; i++) ops->zero(out->data, out->offset + i);\n  }\n\n#pragma omp parallel for collapse(2) if (leading_size * L > 1000)\n  for (long lead = 0; lead < leading_size; lead++) {\n    for (long l = 0; l < L; l++) {\n      long base_in = leading_offset(in, leading_ndim, lead);\n      long base_out = leading_offset(out, leading_ndim, lead);\n      long temp = l;\n      long\n
block_pos[MAX_SPATIAL_DIMS];\n      for (int d = 0; d < K; d++) {\n        block_pos[d] = temp / out_cumprod[d];\n        temp %= out_cumprod[d];\n      }\n      for (long kf = 0; kf < kernel_prod; kf++) {\n        long k_temp = kf;\n        long k_pos[MAX_SPATIAL_DIMS];\n        for (int d = K - 1; d >= 0; d--) {\n          k_pos[d] = k_temp % kernel_size[d];\n          k_temp /= kernel_size[d];\n        }\n        long out_off = base_out;\n        bool valid = true;\n        for (int d = 0; d < K; d++) {\n          long sp =\n              block_pos[d] * stride_arr[d] + k_pos[d] * dilation_arr[d] - pad_before[d];\n          if (sp < 0 || sp >= out->shape[leading_ndim + d]) {\n            valid = false;\n            break;\n          }\n          out_off += sp * out->strides[leading_ndim + d];\n        }\n        long in_off = base_in + kf * in->strides[leading_ndim] +\n                      l * in->strides[leading_ndim + 1];\n        if (valid) {\n          ops->add(out->data, out->offset + out_off, in->data,\n                   in->offset + in_off);\n        }\n      }\n    }\n  }\n\n  free(out_cumprod);\n}\n\n// Dispatch helper\nstatic const type_ops_t* get_type_ops(int kind) {\n  switch (kind) {\n    case CAML_BA_SINT8:\n      return &type_ops_table.i8;\n    case CAML_BA_UINT8:\n      return &type_ops_table.u8;\n    case CAML_BA_SINT16:\n      return &type_ops_table.i16;\n    case CAML_BA_UINT16:\n      return &type_ops_table.u16;\n    case CAML_BA_INT32:\n      return &type_ops_table.i32;\n    case CAML_BA_INT64:\n      return &type_ops_table.i64;\n    case NX_BA_UINT32:\n      return &type_ops_table.u32;\n    case NX_BA_UINT64:\n      return &type_ops_table.u64;\n    case CAML_BA_CAML_INT:\n    case CAML_BA_NATIVE_INT:\n      return &type_ops_table.inat;\n    case CAML_BA_FLOAT16:\n      return &type_ops_table.f16;\n    case CAML_BA_FLOAT32:\n      return &type_ops_table.f32;\n    case CAML_BA_FLOAT64:\n      return &type_ops_table.f64;\n    case 
CAML_BA_COMPLEX32:\n      return &type_ops_table.c32;\n    case CAML_BA_COMPLEX64:\n      return &type_ops_table.c64;\n    case NX_BA_BFLOAT16:\n      return &type_ops_table.bf16;\n    case NX_BA_BOOL:\n      return &type_ops_table.bool_;\n    case NX_BA_INT4:\n      return &type_ops_table.i4;\n    case NX_BA_UINT4:\n      return &type_ops_table.u4;\n    case NX_BA_FP8_E4M3:\n      return &type_ops_table.f8e4m3;\n    case NX_BA_FP8_E5M2:\n      return &type_ops_table.f8e5m2;\n    default:\n      return NULL;\n  }\n}\n\n// OCaml FFI Stubs\n\nCAMLprim value caml_nx_op_unfold(value v_in, value v_kernel_size,\n                                 value v_stride, value v_dilation,\n                                 value v_padding, value v_out) {\n  CAMLparam5(v_in, v_kernel_size, v_stride, v_dilation, v_padding);\n  CAMLxparam1(v_out);\n\n  ndarray_t input = extract_ndarray(v_in);\n  ndarray_t output = extract_ndarray(v_out);\n\n  value v_in_data = Field(v_in, FFI_TENSOR_DATA);\n  struct caml_ba_array* ba_in = Caml_ba_array_val(v_in_data);\n  int kind = nx_buffer_get_kind(ba_in);\n\n  value v_out_data = Field(v_out, FFI_TENSOR_DATA);\n  int kind_out = nx_buffer_get_kind(Caml_ba_array_val(v_out_data));\n  if (kind != kind_out) {\n    cleanup_ndarray(&input);\n    cleanup_ndarray(&output);\n    caml_failwith(\"dtype mismatch\");\n  }\n\n  const type_ops_t* ops = get_type_ops(kind);\n  if (!ops) {\n    cleanup_ndarray(&input);\n    cleanup_ndarray(&output);\n    caml_failwith(\"unsupported dtype\");\n  }\n\n  size_t elem_size = get_elem_size(kind);\n\n  // Validate parameters before entering blocking section\n  int K = Wosize_val(v_kernel_size);\n  if (K > MAX_SPATIAL_DIMS) {\n    cleanup_ndarray(&input);\n    cleanup_ndarray(&output);\n    caml_failwith(\"too many spatial dimensions\");\n  }\n  if (Wosize_val(v_stride) != K || Wosize_val(v_dilation) != K ||\n      Wosize_val(v_padding) != 2 * K) {\n    cleanup_ndarray(&input);\n    cleanup_ndarray(&output);\n    caml_failwith(\"parameter length mismatch\");\n  }\n  if (input.ndim < K) {\n    cleanup_ndarray(&input);\n    cleanup_ndarray(&output);\n
caml_failwith(\"unfold: input must have at least K dimensions\");\n  }\n\n  int leading_ndim = input.ndim - K;\n\n  // Validate output ndim: should be leading_ndim + 2 (kernel_prod, L)\n  if (output.ndim != leading_ndim + 2) {\n    cleanup_ndarray(&input);\n    cleanup_ndarray(&output);\n    caml_failwith(\"unfold: output ndim mismatch\");\n  }\n\n  // Validate leading dims match\n  for (int d = 0; d < leading_ndim; d++) {\n    if (output.shape[d] != input.shape[d]) {\n      cleanup_ndarray(&input);\n      cleanup_ndarray(&output);\n      caml_failwith(\"unfold: leading dimension mismatch\");\n    }\n  }\n\n  // Extract OCaml arrays into C arrays BEFORE releasing the runtime lock.\n  long* c_kernel_size = (long*)calloc(K, sizeof(long));\n  long* c_stride = (long*)calloc(K, sizeof(long));\n  long* c_dilation = (long*)calloc(K, sizeof(long));\n  long* c_pad_before = (long*)calloc(K, sizeof(long));\n  long* c_pad_after = (long*)calloc(K, sizeof(long));\n  for (int d = 0; d < K; d++) {\n    c_kernel_size[d] = Long_val(Field(v_kernel_size, d));\n    c_stride[d] = Long_val(Field(v_stride, d));\n    c_dilation[d] = Long_val(Field(v_dilation, d));\n    c_pad_before[d] = Long_val(Field(v_padding, 2 * d));\n    c_pad_after[d] = Long_val(Field(v_padding, 2 * d + 1));\n  }\n\n  caml_enter_blocking_section();\n  nx_c_unfold_impl(&input, &output, K, c_kernel_size, c_stride, c_dilation,\n                   c_pad_before, c_pad_after, leading_ndim, ops, elem_size);\n  caml_leave_blocking_section();\n\n  free(c_kernel_size);\n  free(c_stride);\n  free(c_dilation);\n  free(c_pad_before);\n  free(c_pad_after);\n\n  cleanup_ndarray(&input);\n  cleanup_ndarray(&output);\n\n  CAMLreturn(Val_unit);\n}\n\nCAMLprim value caml_nx_op_fold(value v_in, value v_output_size,\n                               value v_kernel_size, value v_stride,\n                               value v_dilation, value v_padding, value v_out) {\n  CAMLparam5(v_in, v_output_size, v_kernel_size, v_stride, v_dilation);\n 
 CAMLxparam2(v_padding, v_out);\n\n  ndarray_t input = extract_ndarray(v_in);\n  ndarray_t output = extract_ndarray(v_out);\n\n  value v_in_data = Field(v_in, FFI_TENSOR_DATA);\n  struct caml_ba_array* ba_in = Caml_ba_array_val(v_in_data);\n  int kind = nx_buffer_get_kind(ba_in);\n\n  value v_out_data = Field(v_out, FFI_TENSOR_DATA);\n  int kind_out = nx_buffer_get_kind(Caml_ba_array_val(v_out_data));\n  if (kind != kind_out) {\n    cleanup_ndarray(&input);\n    cleanup_ndarray(&output);\n    caml_failwith(\"dtype mismatch\");\n  }\n\n  const type_ops_t* ops = get_type_ops(kind);\n  if (!ops) {\n    cleanup_ndarray(&input);\n    cleanup_ndarray(&output);\n    caml_failwith(\"unsupported dtype\");\n  }\n\n  size_t elem_size = get_elem_size(kind);\n\n  // Validate parameters before entering blocking section\n  int K = Wosize_val(v_kernel_size);\n  if (K > MAX_SPATIAL_DIMS) {\n    cleanup_ndarray(&input);\n    cleanup_ndarray(&output);\n    caml_failwith(\"too many spatial dimensions\");\n  }\n  if (Wosize_val(v_output_size) != K || Wosize_val(v_stride) != K ||\n      Wosize_val(v_dilation) != K || Wosize_val(v_padding) != 2 * K) {\n    cleanup_ndarray(&input);\n    cleanup_ndarray(&output);\n    caml_failwith(\"parameter length mismatch\");\n  }\n\n  // Input must have at least 2 dims (kernel_prod, L) plus optional leading\n  if (input.ndim < 2) {\n    cleanup_ndarray(&input);\n    cleanup_ndarray(&output);\n    caml_failwith(\"fold: input must have at least 2 dimensions\");\n  }\n\n  int leading_ndim = input.ndim - 2;\n\n  // Output must have leading_ndim + K dims\n  if (output.ndim != leading_ndim + K) {\n    cleanup_ndarray(&input);\n    cleanup_ndarray(&output);\n    caml_failwith(\"fold: output ndim mismatch\");\n  }\n\n  // Validate leading dims match\n  for (int d = 0; d < leading_ndim; d++) {\n    if (output.shape[d] != input.shape[d]) {\n      cleanup_ndarray(&input);\n      cleanup_ndarray(&output);\n      caml_failwith(\"fold: leading dimension 
mismatch\");\n    }\n  }\n\n  // Validate spatial dimensions match output_size\n  for (int d = 0; d < K; d++) {\n    if (output.shape[leading_ndim + d] != Long_val(Field(v_output_size, d))) {\n      cleanup_ndarray(&input);\n      cleanup_ndarray(&output);\n      caml_failwith(\"fold: output spatial dimension mismatch\");\n    }\n  }\n\n  // Extract OCaml arrays into C arrays BEFORE releasing the runtime lock.\n  long* c_output_size = (long*)calloc(K, sizeof(long));\n  long* c_kernel_size = (long*)calloc(K, sizeof(long));\n  long* c_stride = (long*)calloc(K, sizeof(long));\n  long* c_dilation = (long*)calloc(K, sizeof(long));\n  long* c_pad_before = (long*)calloc(K, sizeof(long));\n  long* c_pad_after = (long*)calloc(K, sizeof(long));\n  for (int d = 0; d < K; d++) {\n    c_output_size[d] = Long_val(Field(v_output_size, d));\n    c_kernel_size[d] = Long_val(Field(v_kernel_size, d));\n    c_stride[d] = Long_val(Field(v_stride, d));\n    c_dilation[d] = Long_val(Field(v_dilation, d));\n    c_pad_before[d] = Long_val(Field(v_padding, 2 * d));\n    c_pad_after[d] = Long_val(Field(v_padding, 2 * d + 1));\n  }\n\n  caml_enter_blocking_section();\n  nx_c_fold_impl(&input, &output, K, c_output_size, c_kernel_size, c_stride,\n                 c_dilation, c_pad_before, c_pad_after, leading_ndim, ops,\n                 elem_size);\n  caml_leave_blocking_section();\n\n  free(c_output_size);\n  free(c_kernel_size);\n  free(c_stride);\n  free(c_dilation);\n  free(c_pad_before);\n  free(c_pad_after);\n\n  cleanup_ndarray(&input);\n  cleanup_ndarray(&output);\n\n  CAMLreturn(Val_unit);\n}\n\n// Bytecode wrappers for functions with >5 arguments\n// These forward to the native versions and let them manage GC roots.\n// OCaml expects these when the external is declared with two names\n// (bytecode stub first, native stub second).\n//\n// unfold: 6 arguments\nCAMLprim value caml_nx_op_unfold_bc(value* argv, int argn) {\n  CAMLparam0();\n  (void)argn;\n  value ret =\n      
caml_nx_op_unfold(argv[0], argv[1], argv[2], argv[3], argv[4], argv[5]);\n  CAMLreturn(ret);\n}\n\n// fold: 7 arguments\nCAMLprim value caml_nx_op_fold_bc(value* argv, int argn) {\n  CAMLparam0();\n  (void)argn;\n  value ret = caml_nx_op_fold(argv[0], argv[1], argv[2], argv[3], argv[4],\n                              argv[5], argv[6]);\n  CAMLreturn(ret);\n}\n"
  },
  {
    "path": "packages/nx/lib/buffer/dune",
    "content": "(library\n (name nx_buffer)\n (public_name nx.buffer)\n (install_c_headers nx_buffer_stubs)\n (foreign_stubs\n  (language c)\n  (names nx_buffer_stubs))\n (js_of_ocaml\n  (javascript_files nx_buffer_stubs.js))\n (libraries))\n"
  },
  {
    "path": "packages/nx/lib/buffer/nx_buffer.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Element types *)\n\ntype float16_elt = Bigarray.float16_elt\ntype float32_elt = Bigarray.float32_elt\ntype float64_elt = Bigarray.float64_elt\ntype int8_signed_elt = Bigarray.int8_signed_elt\ntype int8_unsigned_elt = Bigarray.int8_unsigned_elt\ntype int16_signed_elt = Bigarray.int16_signed_elt\ntype int16_unsigned_elt = Bigarray.int16_unsigned_elt\ntype int32_elt = Bigarray.int32_elt\ntype int64_elt = Bigarray.int64_elt\ntype complex32_elt = Bigarray.complex32_elt\ntype complex64_elt = Bigarray.complex64_elt\ntype bfloat16_elt = |\ntype bool_elt = |\ntype int4_signed_elt = |\ntype int4_unsigned_elt = |\ntype float8_e4m3_elt = |\ntype float8_e5m2_elt = |\ntype uint32_elt = |\ntype uint64_elt = |\n\n(* Kind GADT *)\n\ntype ('a, 'b) kind =\n  | Float16 : (float, float16_elt) kind\n  | Float32 : (float, float32_elt) kind\n  | Float64 : (float, float64_elt) kind\n  | Bfloat16 : (float, bfloat16_elt) kind\n  | Float8_e4m3 : (float, float8_e4m3_elt) kind\n  | Float8_e5m2 : (float, float8_e5m2_elt) kind\n  | Int8_signed : (int, int8_signed_elt) kind\n  | Int8_unsigned : (int, int8_unsigned_elt) kind\n  | Int16_signed : (int, int16_signed_elt) kind\n  | Int16_unsigned : (int, int16_unsigned_elt) kind\n  | Int32 : (int32, int32_elt) kind\n  | Uint32 : (int32, uint32_elt) kind\n  | Int64 : (int64, int64_elt) kind\n  | Uint64 : (int64, uint64_elt) kind\n  | Int4_signed : (int, int4_signed_elt) kind\n  | Int4_unsigned : (int, int4_unsigned_elt) kind\n  | Complex32 : (Complex.t, complex32_elt) kind\n  | Complex64 : (Complex.t, complex64_elt) kind\n  | Bool : (bool, bool_elt) kind\n\n(* Kind values *)\n\nlet float16 = Float16\nlet float32 = Float32\nlet float64 = Float64\nlet bfloat16 = Bfloat16\nlet 
float8_e4m3 = Float8_e4m3\nlet float8_e5m2 = Float8_e5m2\nlet int8_signed = Int8_signed\nlet int8_unsigned = Int8_unsigned\nlet int16_signed = Int16_signed\nlet int16_unsigned = Int16_unsigned\nlet int32 = Int32\nlet uint32 = Uint32\nlet int64 = Int64\nlet uint64 = Uint64\nlet int4_signed = Int4_signed\nlet int4_unsigned = Int4_unsigned\nlet complex32 = Complex32\nlet complex64 = Complex64\nlet bool = Bool\n\n(* Kind properties *)\n\nlet kind_size_in_bytes : type a b. (a, b) kind -> int = function\n  | Float16 -> 2\n  | Float32 -> 4\n  | Float64 -> 8\n  | Bfloat16 -> 2\n  | Float8_e4m3 -> 1\n  | Float8_e5m2 -> 1\n  | Int8_signed -> 1\n  | Int8_unsigned -> 1\n  | Int16_signed -> 2\n  | Int16_unsigned -> 2\n  | Int32 -> 4\n  | Uint32 -> 4\n  | Int64 -> 8\n  | Uint64 -> 8\n  | Int4_signed -> 1\n  | Int4_unsigned -> 1\n  | Complex32 -> 8\n  | Complex64 -> 16\n  | Bool -> 1\n\nlet to_stdlib_kind : type a b. (a, b) kind -> (a, b) Bigarray.kind option =\n  function\n  | Float16 -> Some Bigarray.Float16\n  | Float32 -> Some Bigarray.Float32\n  | Float64 -> Some Bigarray.Float64\n  | Int8_signed -> Some Bigarray.Int8_signed\n  | Int8_unsigned -> Some Bigarray.Int8_unsigned\n  | Int16_signed -> Some Bigarray.Int16_signed\n  | Int16_unsigned -> Some Bigarray.Int16_unsigned\n  | Int32 -> Some Bigarray.Int32\n  | Int64 -> Some Bigarray.Int64\n  | Complex32 -> Some Bigarray.Complex32\n  | Complex64 -> Some Bigarray.Complex64\n  | Bfloat16 -> None\n  | Bool -> None\n  | Int4_signed -> None\n  | Int4_unsigned -> None\n  | Float8_e4m3 -> None\n  | Float8_e5m2 -> None\n  | Uint32 -> None\n  | Uint64 -> None\n\n(* Buffer type *)\n\ntype ('a, 'b) t = ('a, 'b, Bigarray.c_layout) Bigarray.Array1.t\n\n(* Genarray externals *)\n\nexternal create_bfloat16_genarray :\n  'c Bigarray.layout -> int array -> ('a, 'b, 'c) Bigarray.Genarray.t\n  = \"caml_nx_buffer_create_bfloat16\"\n\nexternal create_bool_genarray :\n  'c Bigarray.layout -> int array -> ('a, 'b, 'c) Bigarray.Genarray.t\n  = 
\"caml_nx_buffer_create_bool\"\n\nexternal create_int4_signed_genarray :\n  'c Bigarray.layout -> int array -> ('a, 'b, 'c) Bigarray.Genarray.t\n  = \"caml_nx_buffer_create_int4_signed\"\n\nexternal create_int4_unsigned_genarray :\n  'c Bigarray.layout -> int array -> ('a, 'b, 'c) Bigarray.Genarray.t\n  = \"caml_nx_buffer_create_int4_unsigned\"\n\nexternal create_float8_e4m3_genarray :\n  'c Bigarray.layout -> int array -> ('a, 'b, 'c) Bigarray.Genarray.t\n  = \"caml_nx_buffer_create_float8_e4m3\"\n\nexternal create_float8_e5m2_genarray :\n  'c Bigarray.layout -> int array -> ('a, 'b, 'c) Bigarray.Genarray.t\n  = \"caml_nx_buffer_create_float8_e5m2\"\n\nexternal create_uint32_genarray :\n  'c Bigarray.layout -> int array -> ('a, 'b, 'c) Bigarray.Genarray.t\n  = \"caml_nx_buffer_create_uint32\"\n\nexternal create_uint64_genarray :\n  'c Bigarray.layout -> int array -> ('a, 'b, 'c) Bigarray.Genarray.t\n  = \"caml_nx_buffer_create_uint64\"\n\n(* Extended-kind genarray creation *)\n\nlet genarray_create : type a b c.\n    (a, b) kind ->\n    c Bigarray.layout ->\n    int array ->\n    (a, b, c) Bigarray.Genarray.t =\n fun kind layout dims ->\n  match kind with\n  | Bfloat16 -> create_bfloat16_genarray layout dims\n  | Bool -> create_bool_genarray layout dims\n  | Int4_signed -> create_int4_signed_genarray layout dims\n  | Int4_unsigned -> create_int4_unsigned_genarray layout dims\n  | Float8_e4m3 -> create_float8_e4m3_genarray layout dims\n  | Float8_e5m2 -> create_float8_e5m2_genarray layout dims\n  | Uint32 -> create_uint32_genarray layout dims\n  | Uint64 -> create_uint64_genarray layout dims\n  | _ -> (\n      match to_stdlib_kind kind with\n      | Some k -> Bigarray.Genarray.create k layout dims\n      | None -> assert false)\n\n(* Genarray externals *)\n\nexternal genarray_get : ('a, 'b, 'c) Bigarray.Genarray.t -> int array -> 'a\n  = \"caml_nx_buffer_get\"\n\nexternal genarray_set :\n  ('a, 'b, 'c) Bigarray.Genarray.t -> int array -> 'a -> unit\n  = 
\"caml_nx_buffer_set\"\n\nexternal genarray_kind_ext : ('a, 'b, 'c) Bigarray.Genarray.t -> ('a, 'b) kind\n  = \"caml_nx_buffer_kind\"\n[@@noalloc]\n\nexternal genarray_blit_ext :\n  ('a, 'b, 'c) Bigarray.Genarray.t -> ('a, 'b, 'c) Bigarray.Genarray.t -> unit\n  = \"caml_nx_buffer_blit\"\n\nexternal genarray_fill_ext : ('a, 'b, 'c) Bigarray.Genarray.t -> 'a -> unit\n  = \"caml_nx_buffer_fill\"\n\nexternal unsafe_blit_from_bytes :\n  bytes -> int -> ('a, 'b, 'c) Bigarray.Genarray.t -> int -> int -> unit\n  = \"caml_nx_buffer_blit_from_bytes\"\n[@@noalloc]\n\nexternal unsafe_blit_to_bytes :\n  ('a, 'b, 'c) Bigarray.Genarray.t -> int -> bytes -> int -> int -> unit\n  = \"caml_nx_buffer_blit_to_bytes\"\n[@@noalloc]\n\n(* Buffer creation *)\n\nlet create kind n =\n  Bigarray.reshape_1 (genarray_create kind Bigarray.c_layout [| n |]) n\n\n(* Buffer properties *)\n\nlet kind buf = genarray_kind_ext (Bigarray.genarray_of_array1 buf)\nlet length buf = Bigarray.Array1.dim buf\n\n(* Element access *)\n\nlet get buf i = genarray_get (Bigarray.genarray_of_array1 buf) [| i |]\nlet set buf i v = genarray_set (Bigarray.genarray_of_array1 buf) [| i |] v\n\nexternal unsafe_get : ('a, 'b, Bigarray.c_layout) Bigarray.Array1.t -> int -> 'a\n  = \"caml_nx_buffer_unsafe_get\"\n\nexternal unsafe_set :\n  ('a, 'b, Bigarray.c_layout) Bigarray.Array1.t -> int -> 'a -> unit\n  = \"caml_nx_buffer_unsafe_set\"\n\n(* Byte count for a span of elements, accounting for int4 packing *)\nlet elts_to_bytes : type a b. 
(a, b) kind -> int -> int =\n fun k n ->\n  match k with\n  | Int4_signed -> (n + 1) / 2\n  | Int4_unsigned -> (n + 1) / 2\n  | _ -> n * kind_size_in_bytes k\n\n(* Bulk operations *)\n\nlet fill buf v = genarray_fill_ext (Bigarray.genarray_of_array1 buf) v\n\nlet blit ~src ~dst =\n  genarray_blit_ext\n    (Bigarray.genarray_of_array1 src)\n    (Bigarray.genarray_of_array1 dst)\n\nlet blit_from_bytes ?(src_off = 0) ?(dst_off = 0) ?len bytes buf =\n  let k = kind buf in\n  let buf_len = length buf in\n  let len = match len with Some l -> l | None -> buf_len - dst_off in\n  if src_off < 0 then invalid_arg \"blit_from_bytes: negative src_off\";\n  if dst_off < 0 then invalid_arg \"blit_from_bytes: negative dst_off\";\n  if len < 0 then invalid_arg \"blit_from_bytes: negative length\";\n  if dst_off + len > buf_len then\n    invalid_arg \"blit_from_bytes: dst_off + len > buffer length\";\n  let byte_len = elts_to_bytes k len in\n  (* Element offsets convert via [elts_to_bytes] so int4 packing is honored\n     on both sides of the copy. *)\n  let src_byte_off = elts_to_bytes k src_off in\n  if src_byte_off + byte_len > Bytes.length bytes then\n    invalid_arg \"blit_from_bytes: src_off + len > bytes length\";\n  let dst_byte_off = elts_to_bytes k dst_off in\n  unsafe_blit_from_bytes bytes src_byte_off\n    (Bigarray.genarray_of_array1 buf)\n    dst_byte_off byte_len\n\nlet blit_to_bytes ?(src_off = 0) ?(dst_off = 0) ?len buf bytes =\n  let k = kind buf in\n  let buf_len = length buf in\n  let len = match len with Some l -> l | None -> buf_len - src_off in\n  if src_off < 0 then invalid_arg \"blit_to_bytes: negative src_off\";\n  if dst_off < 0 then invalid_arg \"blit_to_bytes: negative dst_off\";\n  if len < 0 then invalid_arg \"blit_to_bytes: negative length\";\n  if src_off + len > buf_len then\n    invalid_arg \"blit_to_bytes: src_off + len > buffer length\";\n  let byte_len = elts_to_bytes k len in\n  let dst_byte_off = elts_to_bytes k dst_off in\n  if dst_byte_off + byte_len > Bytes.length bytes then\n    invalid_arg \"blit_to_bytes: dst_off + len > bytes 
length\";\n  let src_byte_off = elts_to_bytes k src_off in\n  unsafe_blit_to_bytes\n    (Bigarray.genarray_of_array1 buf)\n    src_byte_off bytes dst_byte_off byte_len\n\n(* Bigarray conversions *)\n\nlet of_bigarray1 buf = buf\nlet to_bigarray1 buf = buf\n\nlet to_genarray buf shape =\n  Bigarray.reshape (Bigarray.genarray_of_array1 buf) shape\n\nlet of_genarray ga =\n  let size = Array.fold_left ( * ) 1 (Bigarray.Genarray.dims ga) in\n  Bigarray.array1_of_genarray (Bigarray.reshape ga [| size |])\n\n(* Genarray utilities *)\n\nlet genarray_kind : type a b c. (a, b, c) Bigarray.Genarray.t -> (a, b) kind =\n fun ga -> genarray_kind_ext ga\n\nlet genarray_dims ga = Bigarray.Genarray.dims ga\n\nlet genarray_blit : type a b c.\n    (a, b, c) Bigarray.Genarray.t -> (a, b, c) Bigarray.Genarray.t -> unit =\n fun src dst -> genarray_blit_ext src dst\n\nlet genarray_change_layout = Bigarray.Genarray.change_layout\n"
  },
  {
    "path": "packages/nx/lib/buffer/nx_buffer.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Flat buffers for tensor storage.\n\n    Flat, C-layout, one-dimensional buffers with support for both standard\n    Bigarray element types and extended types (bfloat16, bool, int4, float8,\n    uint32, uint64).\n\n    The buffer type {!t} is abstract in this interface. Conversions to and from\n    {!Bigarray} are explicit via {!of_bigarray1}, {!to_bigarray1},\n    {!of_genarray}, and {!to_genarray}. *)\n\n(** {1:elt Element types}\n\n    Standard element types are aliases from {!Bigarray}. Extended types are\n    defined here. *)\n\ntype float16_elt = Bigarray.float16_elt\ntype float32_elt = Bigarray.float32_elt\ntype float64_elt = Bigarray.float64_elt\ntype int8_signed_elt = Bigarray.int8_signed_elt\ntype int8_unsigned_elt = Bigarray.int8_unsigned_elt\ntype int16_signed_elt = Bigarray.int16_signed_elt\ntype int16_unsigned_elt = Bigarray.int16_unsigned_elt\ntype int32_elt = Bigarray.int32_elt\ntype int64_elt = Bigarray.int64_elt\ntype complex32_elt = Bigarray.complex32_elt\ntype complex64_elt = Bigarray.complex64_elt\n\ntype bfloat16_elt\n(** Brain floating-point 16-bit. *)\n\ntype bool_elt\n(** Boolean stored as a byte. *)\n\ntype int4_signed_elt\n(** Signed 4-bit integer (two values packed per byte). *)\n\ntype int4_unsigned_elt\n(** Unsigned 4-bit integer (two values packed per byte). *)\n\ntype float8_e4m3_elt\n(** 8-bit float with 4 exponent and 3 mantissa bits. *)\n\ntype float8_e5m2_elt\n(** 8-bit float with 5 exponent and 2 mantissa bits. *)\n\ntype uint32_elt\n(** Unsigned 32-bit integer. *)\n\ntype uint64_elt\n(** Unsigned 64-bit integer. 
*)\n\n(** {1:kind Kind GADT} *)\n\ntype ('a, 'b) kind =\n  | Float16 : (float, float16_elt) kind\n  | Float32 : (float, float32_elt) kind\n  | Float64 : (float, float64_elt) kind\n  | Bfloat16 : (float, bfloat16_elt) kind\n  | Float8_e4m3 : (float, float8_e4m3_elt) kind\n  | Float8_e5m2 : (float, float8_e5m2_elt) kind\n  | Int8_signed : (int, int8_signed_elt) kind\n  | Int8_unsigned : (int, int8_unsigned_elt) kind\n  | Int16_signed : (int, int16_signed_elt) kind\n  | Int16_unsigned : (int, int16_unsigned_elt) kind\n  | Int32 : (int32, int32_elt) kind\n  | Uint32 : (int32, uint32_elt) kind\n  | Int64 : (int64, int64_elt) kind\n  | Uint64 : (int64, uint64_elt) kind\n  | Int4_signed : (int, int4_signed_elt) kind\n  | Int4_unsigned : (int, int4_unsigned_elt) kind\n  | Complex32 : (Complex.t, complex32_elt) kind\n  | Complex64 : (Complex.t, complex64_elt) kind\n  | Bool : (bool, bool_elt) kind\n      (** The type for element kinds. Nineteen constructors covering standard\n          Bigarray kinds and extended types. 
*)\n\n(** {2:kind_values Kind values} *)\n\nval float16 : (float, float16_elt) kind\nval float32 : (float, float32_elt) kind\nval float64 : (float, float64_elt) kind\nval bfloat16 : (float, bfloat16_elt) kind\nval float8_e4m3 : (float, float8_e4m3_elt) kind\nval float8_e5m2 : (float, float8_e5m2_elt) kind\nval int8_signed : (int, int8_signed_elt) kind\nval int8_unsigned : (int, int8_unsigned_elt) kind\nval int16_signed : (int, int16_signed_elt) kind\nval int16_unsigned : (int, int16_unsigned_elt) kind\nval int32 : (int32, int32_elt) kind\nval uint32 : (int32, uint32_elt) kind\nval int64 : (int64, int64_elt) kind\nval uint64 : (int64, uint64_elt) kind\nval int4_signed : (int, int4_signed_elt) kind\nval int4_unsigned : (int, int4_unsigned_elt) kind\nval complex32 : (Complex.t, complex32_elt) kind\nval complex64 : (Complex.t, complex64_elt) kind\nval bool : (bool, bool_elt) kind\n\n(** {2:kind_props Kind properties} *)\n\nval kind_size_in_bytes : ('a, 'b) kind -> int\n(** [kind_size_in_bytes k] is the storage size in bytes per element for kind\n    [k]. For [Int4_signed] and [Int4_unsigned] this is [1] (two values packed\n    per byte). *)\n\nval to_stdlib_kind : ('a, 'b) kind -> ('a, 'b) Bigarray.kind option\n(** [to_stdlib_kind k] is the standard {!Bigarray.kind} for [k], or [None] for\n    extended types. *)\n\n(** {1:buf Buffer type and operations} *)\n\ntype ('a, 'b) t\n(** [('a, 'b) t] is a flat, C-layout, one-dimensional buffer. *)\n\n(** {2:create Creation} *)\n\nval create : ('a, 'b) kind -> int -> ('a, 'b) t\n(** [create kind n] allocates a zero-initialized buffer of [n] elements. *)\n\n(** {2:props Properties} *)\n\nval kind : ('a, 'b) t -> ('a, 'b) kind\n(** [kind buf] is the element kind of [buf]. *)\n\nval length : ('a, 'b) t -> int\n(** [length buf] is the number of elements in [buf]. 
*)\n\n(** {2:access Element access} *)\n\nval get : ('a, 'b) t -> int -> 'a\n(** [get buf i] is the element at index [i].\n\n    Raises [Invalid_argument] if [i] is out of bounds. *)\n\nval set : ('a, 'b) t -> int -> 'a -> unit\n(** [set buf i v] sets the element at index [i] to [v].\n\n    Raises [Invalid_argument] if [i] is out of bounds. *)\n\nval unsafe_get : ('a, 'b) t -> int -> 'a\n(** [unsafe_get buf i] is like {!get} without bounds checking. *)\n\nval unsafe_set : ('a, 'b) t -> int -> 'a -> unit\n(** [unsafe_set buf i v] is like {!set} without bounds checking. *)\n\n(** {2:bulk Bulk operations} *)\n\nval fill : ('a, 'b) t -> 'a -> unit\n(** [fill buf v] sets every element of [buf] to [v]. *)\n\nval blit : src:('a, 'b) t -> dst:('a, 'b) t -> unit\n(** [blit ~src ~dst] copies all elements from [src] to [dst].\n\n    Raises [Invalid_argument] if dimensions differ. *)\n\nval blit_from_bytes :\n  ?src_off:int -> ?dst_off:int -> ?len:int -> bytes -> ('a, 'b) t -> unit\n(** [blit_from_bytes ?src_off ?dst_off ?len bytes buf] copies [len] elements\n    from [bytes] into [buf]. Offsets and length are in elements. [src_off] and\n    [dst_off] default to [0]. [len] defaults to [length buf - dst_off]. *)\n\nval blit_to_bytes :\n  ?src_off:int -> ?dst_off:int -> ?len:int -> ('a, 'b) t -> bytes -> unit\n(** [blit_to_bytes ?src_off ?dst_off ?len buf bytes] copies [len] elements from\n    [buf] into [bytes]. Offsets and length are in elements. [src_off] and\n    [dst_off] default to [0]. [len] defaults to [length buf - src_off]. *)\n\n(** {1:ba Bigarray conversions} *)\n\nval of_bigarray1 : ('a, 'b, Bigarray.c_layout) Bigarray.Array1.t -> ('a, 'b) t\n(** [of_bigarray1 ba] is [ba] viewed as a buffer. Zero-copy for standard kinds.\n*)\n\nval to_bigarray1 : ('a, 'b) t -> ('a, 'b, Bigarray.c_layout) Bigarray.Array1.t\n(** [to_bigarray1 buf] is [buf] viewed as a one-dimensional bigarray. 
Zero-copy.\n*)\n\nval to_genarray :\n  ('a, 'b) t -> int array -> ('a, 'b, Bigarray.c_layout) Bigarray.Genarray.t\n(** [to_genarray buf shape] reshapes [buf] into a genarray with [shape]. The\n    product of [shape] must equal [length buf]. *)\n\nval of_genarray : ('a, 'b, Bigarray.c_layout) Bigarray.Genarray.t -> ('a, 'b) t\n(** [of_genarray ga] flattens [ga] into a one-dimensional buffer. *)\n\n(** {1:ga Genarray utilities}\n\n    Operations on {!Bigarray.Genarray.t} that handle extended kinds. Used by I/O\n    modules (npy, safetensors, images). *)\n\nval genarray_create :\n  ('a, 'b) kind ->\n  'c Bigarray.layout ->\n  int array ->\n  ('a, 'b, 'c) Bigarray.Genarray.t\n(** [genarray_create kind layout dims] allocates a genarray. Handles both\n    standard and extended kinds. *)\n\nval genarray_kind : ('a, 'b, 'c) Bigarray.Genarray.t -> ('a, 'b) kind\n(** [genarray_kind ga] is the kind of [ga], including extended kinds. *)\n\nval genarray_dims : ('a, 'b, 'c) Bigarray.Genarray.t -> int array\n(** [genarray_dims ga] is the dimensions of [ga]. *)\n\nval genarray_blit :\n  ('a, 'b, 'c) Bigarray.Genarray.t -> ('a, 'b, 'c) Bigarray.Genarray.t -> unit\n(** [genarray_blit src dst] copies [src] to [dst]. Handles extended kinds. *)\n\nval genarray_change_layout :\n  ('a, 'b, 'c) Bigarray.Genarray.t ->\n  'd Bigarray.layout ->\n  ('a, 'b, 'd) Bigarray.Genarray.t\n(** [genarray_change_layout ga layout] changes the layout of [ga]. *)\n"
  },
  {
    "path": "packages/nx/lib/buffer/nx_buffer_stubs.c",
    "content": "/*---------------------------------------------------------------------------\n   Copyright (c) 2026 The Raven authors. All rights reserved.\n   SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*/\n\n#include \"nx_buffer_stubs.h\"\n\n#include <caml/fail.h>\n#include <stdlib.h>\n#include <string.h>\n\n/* External declarations for standard bigarray functions */\nextern value caml_ba_get_N(value vb, value *vind, int nind);\nextern value caml_ba_set_N(value vb, value *vind, int nargs);\nextern value caml_ba_blit(value vsrc, value vdst);\nextern CAMLprim value caml_ba_fill(value vb, value vinit);\n\n/*---------------------------------------------------------------------------\n   Helpers\n  ---------------------------------------------------------------------------*/\n\nstatic int nx_buffer_base_kind(int kind) {\n  switch (kind) {\n    case NX_BA_BFLOAT16:    return CAML_BA_FLOAT16;\n    case NX_BA_BOOL:        return CAML_BA_UINT8;\n    case NX_BA_INT4:        return CAML_BA_UINT8;\n    case NX_BA_UINT4:       return CAML_BA_UINT8;\n    case NX_BA_FP8_E4M3:   return CAML_BA_UINT8;\n    case NX_BA_FP8_E5M2:   return CAML_BA_UINT8;\n    case NX_BA_UINT32:      return CAML_BA_INT32;\n    case NX_BA_UINT64:      return CAML_BA_INT64;\n    default:                return -1;\n  }\n}\n\n/* Byte size per element for extended kinds. Returns 0 for int4/uint4\n   (which pack 2 per byte) and -1 for unknown kinds. 
*/\nstatic int nx_buffer_element_byte_size(int kind) {\n  switch (kind) {\n    case NX_BA_BFLOAT16:    return 2;\n    case NX_BA_BOOL:        return 1;\n    case NX_BA_FP8_E4M3:   return 1;\n    case NX_BA_FP8_E5M2:   return 1;\n    case NX_BA_UINT32:      return 4;\n    case NX_BA_UINT64:      return 8;\n    case NX_BA_INT4:        return 0;\n    case NX_BA_UINT4:       return 0;\n    default:                return -1;\n  }\n}\n\nstatic value nx_buffer_alloc_with_kind(int kind, int layout_flag, int num_dims,\n                                       intnat *dim, void *data) {\n  int base_kind = nx_buffer_is_extended_kind(kind)\n                      ? nx_buffer_base_kind(kind)\n                      : kind;\n  if (base_kind < 0) caml_failwith(\"Unknown extended bigarray kind\");\n  int flags = base_kind | layout_flag | CAML_BA_MANAGED;\n  value res = caml_ba_alloc(flags, num_dims, data, dim);\n  struct caml_ba_array *ba = Caml_ba_array_val(res);\n  ba->flags = nx_buffer_store_extended_kind(ba->flags, kind);\n  return res;\n}\n\n/* Overflow-safe multiplication */\nstatic int umul_overflow(uintnat a, uintnat b, uintnat *res) {\n  if (b != 0 && a > (uintnat)(-1) / b) return 1;\n  *res = a * b;\n  return 0;\n}\n\nstatic uintnat nx_buffer_num_elts_from_dims(int num_dims, intnat *dim) {\n  uintnat num_elts = 1;\n  for (int i = 0; i < num_dims; i++) {\n    if (umul_overflow(num_elts, dim[i], &num_elts))\n      caml_raise_out_of_memory();\n  }\n  return num_elts;\n}\n\nstatic uintnat nx_buffer_num_elts(struct caml_ba_array *b) {\n  uintnat num_elts = 1;\n  for (int i = 0; i < b->num_dims; i++)\n    num_elts *= b->dim[i];\n  return num_elts;\n}\n\nstatic intnat nx_buffer_offset(struct caml_ba_array *b, intnat *index) {\n  intnat offset = 0;\n  switch ((enum caml_ba_layout)(b->flags & CAML_BA_LAYOUT_MASK)) {\n    case CAML_BA_C_LAYOUT:\n      for (int i = 0; i < b->num_dims; i++) {\n        if ((uintnat)index[i] >= (uintnat)b->dim[i])\n          caml_array_bound_error();\n    
    offset = offset * b->dim[i] + index[i];\n      }\n      break;\n    case CAML_BA_FORTRAN_LAYOUT:\n      for (int i = b->num_dims - 1; i >= 0; i--) {\n        if ((uintnat)(index[i] - 1) >= (uintnat)b->dim[i])\n          caml_array_bound_error();\n        offset = offset * b->dim[i] + (index[i] - 1);\n      }\n      break;\n  }\n  return offset;\n}\n\n/*---------------------------------------------------------------------------\n   Creation\n  ---------------------------------------------------------------------------*/\n\n#define CREATE_BA_FUNCTION(name, type_enum, bytes_per_elem)                    \\\n  CAMLprim value caml_nx_buffer_create_##name(value vlayout, value vdim) {     \\\n    CAMLparam2(vlayout, vdim);                                                 \\\n    CAMLlocal1(res);                                                           \\\n                                                                               \\\n    int num_dims = Wosize_val(vdim);                                           \\\n    intnat dim[CAML_BA_MAX_NUM_DIMS];                                          \\\n    for (int i = 0; i < num_dims; i++)                                         \\\n      dim[i] = Long_val(Field(vdim, i));                                       \\\n                                                                               \\\n    uintnat num_elts = nx_buffer_num_elts_from_dims(num_dims, dim);            \\\n    uintnat size;                                                              \\\n    if (umul_overflow(num_elts, (bytes_per_elem), &size))                      \\\n      caml_raise_out_of_memory();                                              \\\n                                                                               \\\n    void *data = calloc(1, size);                                              \\\n    if (data == NULL && size != 0) caml_raise_out_of_memory();                 \\\n                                                         
                      \\\n    int layout_flag = Caml_ba_layout_val(vlayout);                             \\\n    res = nx_buffer_alloc_with_kind((type_enum), layout_flag, num_dims, dim,   \\\n                                    data);                                     \\\n    CAMLreturn(res);                                                           \\\n  }\n\nCREATE_BA_FUNCTION(bfloat16, NX_BA_BFLOAT16, 2)\nCREATE_BA_FUNCTION(bool, NX_BA_BOOL, 1)\nCREATE_BA_FUNCTION(float8_e4m3, NX_BA_FP8_E4M3, 1)\nCREATE_BA_FUNCTION(float8_e5m2, NX_BA_FP8_E5M2, 1)\nCREATE_BA_FUNCTION(uint32, NX_BA_UINT32, 4)\nCREATE_BA_FUNCTION(uint64, NX_BA_UINT64, 8)\n\n/* Int4/uint4 pack 2 values per byte */\nstatic value nx_buffer_create_int4(int kind, value vlayout, value vdim) {\n  CAMLparam2(vlayout, vdim);\n  CAMLlocal1(res);\n  int num_dims = Wosize_val(vdim);\n  intnat dim[CAML_BA_MAX_NUM_DIMS];\n  for (int i = 0; i < num_dims; i++)\n    dim[i] = Long_val(Field(vdim, i));\n  uintnat num_elts = nx_buffer_num_elts_from_dims(num_dims, dim);\n  uintnat size = (num_elts + 1) / 2;\n  void *data = calloc(1, size);\n  if (data == NULL && size != 0) caml_raise_out_of_memory();\n  int layout_flag = Caml_ba_layout_val(vlayout);\n  res = nx_buffer_alloc_with_kind(kind, layout_flag, num_dims, dim, data);\n  CAMLreturn(res);\n}\n\nCAMLprim value caml_nx_buffer_create_int4_signed(value vlayout, value vdim) {\n  return nx_buffer_create_int4(NX_BA_INT4, vlayout, vdim);\n}\n\nCAMLprim value caml_nx_buffer_create_int4_unsigned(value vlayout, value vdim) {\n  return nx_buffer_create_int4(NX_BA_UINT4, vlayout, vdim);\n}\n\n/*---------------------------------------------------------------------------\n   Element access\n  ---------------------------------------------------------------------------*/\n\nCAMLprim value caml_nx_buffer_get(value vb, value vind) {\n  CAMLparam2(vb, vind);\n  CAMLlocal1(res);\n  struct caml_ba_array *b = Caml_ba_array_val(vb);\n  int num_dims = Wosize_val(vind);\n  if (num_dims 
!= b->num_dims)\n    caml_invalid_argument(\"Bigarray.get: wrong number of indices\");\n\n  intnat index[CAML_BA_MAX_NUM_DIMS];\n  for (int i = 0; i < num_dims; i++)\n    index[i] = Long_val(Field(vind, i));\n  intnat offset = nx_buffer_offset(b, index);\n\n  int kind = nx_buffer_get_kind(b);\n  if (kind < CAML_BA_FIRST_UNIMPLEMENTED_KIND) {\n    value args[CAML_BA_MAX_NUM_DIMS];\n    for (int i = 0; i < num_dims; i++)\n      args[i] = Field(vind, i);\n    CAMLreturn(caml_ba_get_N(vb, args, num_dims));\n  }\n\n  switch (kind) {\n    case NX_BA_BFLOAT16:\n      res = caml_copy_double(\n          (double)bfloat16_to_float(((uint16_t *)b->data)[offset]));\n      break;\n    case NX_BA_BOOL:\n      res = Val_bool(((uint8_t *)b->data)[offset]);\n      break;\n    case NX_BA_INT4: {\n      uint8_t byte = ((uint8_t *)b->data)[offset / 2];\n      int val;\n      if (offset % 2 == 0)\n        val = (int8_t)((byte & 0x0F) << 4) >> 4; /* Sign extend lower nibble */\n      else\n        val = (int8_t)(byte & 0xF0) >> 4; /* Sign extend upper nibble */\n      res = Val_int(val);\n      break;\n    }\n    case NX_BA_UINT4: {\n      uint8_t byte = ((uint8_t *)b->data)[offset / 2];\n      int val;\n      if (offset % 2 == 0)\n        val = byte & 0x0F;\n      else\n        val = (byte >> 4) & 0x0F;\n      res = Val_int(val);\n      break;\n    }\n    case NX_BA_FP8_E4M3:\n      res = caml_copy_double(\n          (double)fp8_e4m3_to_float(((uint8_t *)b->data)[offset]));\n      break;\n    case NX_BA_FP8_E5M2:\n      res = caml_copy_double(\n          (double)fp8_e5m2_to_float(((uint8_t *)b->data)[offset]));\n      break;\n    case NX_BA_UINT32:\n      res = caml_copy_int32(((uint32_t *)b->data)[offset]);\n      break;\n    case NX_BA_UINT64:\n      res = caml_copy_int64(((uint64_t *)b->data)[offset]);\n      break;\n    default:\n      caml_failwith(\"Unsupported bigarray kind\");\n  }\n  CAMLreturn(res);\n}\n\nCAMLprim value caml_nx_buffer_set(value vb, value vind, value newval) 
{\n  CAMLparam3(vb, vind, newval);\n  struct caml_ba_array *b = Caml_ba_array_val(vb);\n  int num_dims = Wosize_val(vind);\n  if (num_dims != b->num_dims)\n    caml_invalid_argument(\"Bigarray.set: wrong number of indices\");\n\n  intnat index[CAML_BA_MAX_NUM_DIMS];\n  for (int i = 0; i < num_dims; i++)\n    index[i] = Long_val(Field(vind, i));\n  intnat offset = nx_buffer_offset(b, index);\n\n  int kind = nx_buffer_get_kind(b);\n  if (kind < CAML_BA_FIRST_UNIMPLEMENTED_KIND) {\n    value args[CAML_BA_MAX_NUM_DIMS + 1];\n    for (int i = 0; i < num_dims; i++)\n      args[i] = Field(vind, i);\n    args[num_dims] = newval;\n    caml_ba_set_N(vb, args, num_dims + 1);\n    CAMLreturn(Val_unit);\n  }\n\n  switch (kind) {\n    case NX_BA_BFLOAT16:\n      ((uint16_t *)b->data)[offset] =\n          float_to_bfloat16((float)Double_val(newval));\n      break;\n    case NX_BA_BOOL:\n      ((uint8_t *)b->data)[offset] = Bool_val(newval);\n      break;\n    case NX_BA_INT4: {\n      int val = Int_val(newval);\n      if (val > 7) val = 7;\n      if (val < -8) val = -8;\n      uint8_t nibble = val & 0x0F;\n      uint8_t *byte_ptr = &((uint8_t *)b->data)[offset / 2];\n      if (offset % 2 == 0)\n        *byte_ptr = (*byte_ptr & 0xF0) | nibble;\n      else\n        *byte_ptr = (*byte_ptr & 0x0F) | (nibble << 4);\n      break;\n    }\n    case NX_BA_UINT4: {\n      int val = Int_val(newval);\n      if (val > 15) val = 15;\n      if (val < 0) val = 0;\n      uint8_t nibble = val & 0x0F;\n      uint8_t *byte_ptr = &((uint8_t *)b->data)[offset / 2];\n      if (offset % 2 == 0)\n        *byte_ptr = (*byte_ptr & 0xF0) | nibble;\n      else\n        *byte_ptr = (*byte_ptr & 0x0F) | (nibble << 4);\n      break;\n    }\n    case NX_BA_FP8_E4M3:\n      ((uint8_t *)b->data)[offset] =\n          float_to_fp8_e4m3((float)Double_val(newval));\n      break;\n    case NX_BA_FP8_E5M2:\n      ((uint8_t *)b->data)[offset] =\n          float_to_fp8_e5m2((float)Double_val(newval));\n      break;\n    
case NX_BA_UINT32:\n      ((uint32_t *)b->data)[offset] = Int32_val(newval);\n      break;\n    case NX_BA_UINT64:\n      ((uint64_t *)b->data)[offset] = Int64_val(newval);\n      break;\n    default:\n      caml_failwith(\"Unsupported bigarray kind\");\n  }\n  CAMLreturn(Val_unit);\n}\n\n/* Unsafe 1D get — flat offset, no bounds check */\nstatic value nx_buffer_unsafe_get_ext(struct caml_ba_array *b, intnat offset) {\n  int kind = nx_buffer_get_kind(b);\n  switch (kind) {\n    case NX_BA_BFLOAT16:\n      return caml_copy_double(\n          (double)bfloat16_to_float(((uint16_t *)b->data)[offset]));\n    case NX_BA_BOOL:\n      return Val_bool(((uint8_t *)b->data)[offset]);\n    case NX_BA_INT4: {\n      uint8_t byte = ((uint8_t *)b->data)[offset / 2];\n      int val;\n      if (offset % 2 == 0)\n        val = (int8_t)((byte & 0x0F) << 4) >> 4;\n      else\n        val = (int8_t)(byte & 0xF0) >> 4;\n      return Val_int(val);\n    }\n    case NX_BA_UINT4: {\n      uint8_t byte = ((uint8_t *)b->data)[offset / 2];\n      int val;\n      if (offset % 2 == 0)\n        val = byte & 0x0F;\n      else\n        val = (byte >> 4) & 0x0F;\n      return Val_int(val);\n    }\n    case NX_BA_FP8_E4M3:\n      return caml_copy_double(\n          (double)fp8_e4m3_to_float(((uint8_t *)b->data)[offset]));\n    case NX_BA_FP8_E5M2:\n      return caml_copy_double(\n          (double)fp8_e5m2_to_float(((uint8_t *)b->data)[offset]));\n    case NX_BA_UINT32:\n      return caml_copy_int32(((uint32_t *)b->data)[offset]);\n    case NX_BA_UINT64:\n      return caml_copy_int64(((uint64_t *)b->data)[offset]);\n    default:\n      caml_failwith(\"Unsupported bigarray kind\");\n  }\n}\n\nCAMLprim value caml_nx_buffer_unsafe_get(value vb, value vi) {\n  CAMLparam1(vb);\n  CAMLlocal1(res);\n  struct caml_ba_array *b = Caml_ba_array_val(vb);\n  intnat i = Long_val(vi);\n  int kind = nx_buffer_get_kind(b);\n  if (kind < CAML_BA_FIRST_UNIMPLEMENTED_KIND) {\n    /* For standard kinds, use 
caml_ba_get_1; note that it still\n       performs a bounds check on the index */\n    extern value caml_ba_get_1(value vb, value vind);\n    CAMLreturn(caml_ba_get_1(vb, vi));\n  }\n  res = nx_buffer_unsafe_get_ext(b, i);\n  CAMLreturn(res);\n}\n\n/* Unsafe 1D set — flat offset, no bounds check */\nstatic void nx_buffer_unsafe_set_ext(struct caml_ba_array *b, intnat offset,\n                                     value newval) {\n  int kind = nx_buffer_get_kind(b);\n  switch (kind) {\n    case NX_BA_BFLOAT16:\n      ((uint16_t *)b->data)[offset] =\n          float_to_bfloat16((float)Double_val(newval));\n      break;\n    case NX_BA_BOOL:\n      ((uint8_t *)b->data)[offset] = Bool_val(newval);\n      break;\n    case NX_BA_INT4: {\n      int val = Int_val(newval);\n      if (val > 7) val = 7;\n      if (val < -8) val = -8;\n      uint8_t nibble = val & 0x0F;\n      uint8_t *byte_ptr = &((uint8_t *)b->data)[offset / 2];\n      if (offset % 2 == 0)\n        *byte_ptr = (*byte_ptr & 0xF0) | nibble;\n      else\n        *byte_ptr = (*byte_ptr & 0x0F) | (nibble << 4);\n      break;\n    }\n    case NX_BA_UINT4: {\n      int val = Int_val(newval);\n      if (val > 15) val = 15;\n      if (val < 0) val = 0;\n      uint8_t nibble = val & 0x0F;\n      uint8_t *byte_ptr = &((uint8_t *)b->data)[offset / 2];\n      if (offset % 2 == 0)\n        *byte_ptr = (*byte_ptr & 0xF0) | nibble;\n      else\n        *byte_ptr = (*byte_ptr & 0x0F) | (nibble << 4);\n      break;\n    }\n    case NX_BA_FP8_E4M3:\n      ((uint8_t *)b->data)[offset] =\n          float_to_fp8_e4m3((float)Double_val(newval));\n      break;\n    case NX_BA_FP8_E5M2:\n      ((uint8_t *)b->data)[offset] =\n          float_to_fp8_e5m2((float)Double_val(newval));\n      break;\n    case NX_BA_UINT32:\n      ((uint32_t *)b->data)[offset] = Int32_val(newval);\n      break;\n    case NX_BA_UINT64:\n      ((uint64_t *)b->data)[offset] = Int64_val(newval);\n      break;\n    default:\n      
caml_failwith(\"Unsupported bigarray kind\");\n  }\n}\n\nCAMLprim value caml_nx_buffer_unsafe_set(value vb, value vi, value newval) {\n  CAMLparam2(vb, newval);\n  struct caml_ba_array *b = Caml_ba_array_val(vb);\n  intnat i = Long_val(vi);\n  int kind = nx_buffer_get_kind(b);\n  if (kind < CAML_BA_FIRST_UNIMPLEMENTED_KIND) {\n    extern value caml_ba_set_1(value vb, value vind, value newval);\n    caml_ba_set_1(vb, vi, newval);\n    CAMLreturn(Val_unit);\n  }\n  nx_buffer_unsafe_set_ext(b, i, newval);\n  CAMLreturn(Val_unit);\n}\n\n/*---------------------------------------------------------------------------\n   Kind query\n  ---------------------------------------------------------------------------*/\n\nCAMLprim value caml_nx_buffer_kind(value vb) {\n  struct caml_ba_array *b = Caml_ba_array_val(vb);\n  int kind = nx_buffer_get_kind(b);\n\n  /* Map to GADT constructor index (19 constructors) */\n  switch (kind) {\n    case CAML_BA_FLOAT16:    return Val_int(0);\n    case CAML_BA_FLOAT32:    return Val_int(1);\n    case CAML_BA_FLOAT64:    return Val_int(2);\n    case NX_BA_BFLOAT16:     return Val_int(3);\n    case NX_BA_FP8_E4M3:    return Val_int(4);\n    case NX_BA_FP8_E5M2:    return Val_int(5);\n    case CAML_BA_SINT8:      return Val_int(6);\n    case CAML_BA_UINT8:      return Val_int(7);\n    case CAML_BA_SINT16:     return Val_int(8);\n    case CAML_BA_UINT16:     return Val_int(9);\n    case CAML_BA_INT32:      return Val_int(10);\n    case NX_BA_UINT32:       return Val_int(11);\n    case CAML_BA_INT64:      return Val_int(12);\n    case NX_BA_UINT64:       return Val_int(13);\n    case NX_BA_INT4:         return Val_int(14);\n    case NX_BA_UINT4:        return Val_int(15);\n    case CAML_BA_COMPLEX32:  return Val_int(16);\n    case CAML_BA_COMPLEX64:  return Val_int(17);\n    case NX_BA_BOOL:         return Val_int(18);\n    default:\n      caml_failwith(\"Unknown bigarray kind\");\n  
}\n}\n\n/*---------------------------------------------------------------------------\n   Bulk operations\n  ---------------------------------------------------------------------------*/\n\nCAMLprim value caml_nx_buffer_blit(value vsrc, value vdst) {\n  CAMLparam2(vsrc, vdst);\n  struct caml_ba_array *src = Caml_ba_array_val(vsrc);\n  struct caml_ba_array *dst = Caml_ba_array_val(vdst);\n\n  int src_kind = nx_buffer_get_kind(src);\n  int dst_kind = nx_buffer_get_kind(dst);\n  if (src_kind != dst_kind)\n    caml_invalid_argument(\"Nx_buffer.blit: arrays have different kinds\");\n  if (src->num_dims != dst->num_dims)\n    caml_invalid_argument(\"Nx_buffer.blit: arrays have different dimensions\");\n\n  uintnat num_elts = 1;\n  for (int i = 0; i < src->num_dims; i++) {\n    if (src->dim[i] != dst->dim[i])\n      caml_invalid_argument(\"Nx_buffer.blit: arrays have different dimensions\");\n    num_elts *= src->dim[i];\n  }\n\n  if (src_kind >= CAML_BA_FIRST_UNIMPLEMENTED_KIND) {\n    int elem_size = nx_buffer_element_byte_size(src_kind);\n    size_t byte_size;\n    if (elem_size > 0)\n      byte_size = num_elts * elem_size;\n    else\n      byte_size = (num_elts + 1) / 2; /* int4/uint4 */\n    memcpy(dst->data, src->data, byte_size);\n  } else {\n    caml_ba_blit(vsrc, vdst);\n  }\n\n  CAMLreturn(Val_unit);\n}\n\nCAMLprim value caml_nx_buffer_fill(value vb, value vinit) {\n  CAMLparam2(vb, vinit);\n  struct caml_ba_array *b = Caml_ba_array_val(vb);\n  int kind = nx_buffer_get_kind(b);\n\n  if (kind < CAML_BA_FIRST_UNIMPLEMENTED_KIND) {\n    caml_ba_fill(vb, vinit);\n    CAMLreturn(Val_unit);\n  }\n\n  uintnat num_elts = nx_buffer_num_elts(b);\n\n  switch (kind) {\n    case NX_BA_BFLOAT16: {\n      uint16_t init = float_to_bfloat16((float)Double_val(vinit));\n      uint16_t *p = (uint16_t *)b->data;\n      for (uintnat i = 0; i < num_elts; i++)\n        p[i] = init;\n      break;\n    }\n    case NX_BA_BOOL: {\n      uint8_t init = Bool_val(vinit);\n      
memset(b->data, init, num_elts);\n      break;\n    }\n    case NX_BA_INT4: {\n      int val = Int_val(vinit);\n      val = val < -8 ? -8 : val > 7 ? 7 : val;\n      uint8_t nibble = (uint8_t)val & 0x0F;\n      uint8_t packed = (nibble << 4) | nibble;\n      memset(b->data, packed, (num_elts + 1) / 2);\n      break;\n    }\n    case NX_BA_UINT4: {\n      int val = Int_val(vinit);\n      val = val < 0 ? 0 : val > 15 ? 15 : val;\n      uint8_t nibble = (uint8_t)val & 0x0F;\n      uint8_t packed = (nibble << 4) | nibble;\n      memset(b->data, packed, (num_elts + 1) / 2);\n      break;\n    }\n    case NX_BA_FP8_E4M3: {\n      uint8_t init = float_to_fp8_e4m3((float)Double_val(vinit));\n      memset(b->data, init, num_elts);\n      break;\n    }\n    case NX_BA_FP8_E5M2: {\n      uint8_t init = float_to_fp8_e5m2((float)Double_val(vinit));\n      memset(b->data, init, num_elts);\n      break;\n    }\n    case NX_BA_UINT32: {\n      uint32_t init = Int32_val(vinit);\n      uint32_t *p = (uint32_t *)b->data;\n      for (uintnat i = 0; i < num_elts; i++)\n        p[i] = init;\n      break;\n    }\n    case NX_BA_UINT64: {\n      uint64_t init = Int64_val(vinit);\n      uint64_t *p = (uint64_t *)b->data;\n      for (uintnat i = 0; i < num_elts; i++)\n        p[i] = init;\n      break;\n    }\n    default:\n      caml_failwith(\"Unknown extended bigarray kind in fill\");\n  }\n\n  CAMLreturn(Val_unit);\n}\n\n/*---------------------------------------------------------------------------\n   Bytes blit ([@@noalloc] — no OCaml allocation, no exceptions)\n  ---------------------------------------------------------------------------*/\n\nCAMLprim value caml_nx_buffer_blit_from_bytes(value vbytes, value vsrc_off,\n                                              value vdst, value vdst_off,\n                                              value vlen) {\n  struct caml_ba_array *dst = Caml_ba_array_val(vdst);\n  size_t len = (size_t)Long_val(vlen);\n  uint8_t *dst_ptr = (uint8_t 
*)dst->data + (size_t)Long_val(vdst_off);\n  const uint8_t *src_ptr =\n      (const uint8_t *)Bytes_val(vbytes) + (size_t)Long_val(vsrc_off);\n  memcpy(dst_ptr, src_ptr, len);\n  return Val_unit;\n}\n\nCAMLprim value caml_nx_buffer_blit_to_bytes(value vsrc, value vsrc_off,\n                                            value vbytes, value vdst_off,\n                                            value vlen) {\n  struct caml_ba_array *src = Caml_ba_array_val(vsrc);\n  size_t len = (size_t)Long_val(vlen);\n  const uint8_t *src_ptr =\n      (const uint8_t *)src->data + (size_t)Long_val(vsrc_off);\n  uint8_t *dst_ptr = (uint8_t *)Bytes_val(vbytes) + (size_t)Long_val(vdst_off);\n  memcpy(dst_ptr, src_ptr, len);\n  return Val_unit;\n}\n"
  },
  {
    "path": "packages/nx/lib/buffer/nx_buffer_stubs.h",
    "content": "/*---------------------------------------------------------------------------\n   Copyright (c) 2026 The Raven authors. All rights reserved.\n   SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*/\n\n#ifndef NX_BUFFER_STUBS_H\n#define NX_BUFFER_STUBS_H\n\n#include <caml/alloc.h>\n#include <caml/bigarray.h>\n#include <caml/custom.h>\n#include <caml/memory.h>\n#include <caml/mlvalues.h>\n#include <math.h>\n#include <stdbool.h>\n#include <stdint.h>\n\n/* Additional types not in standard bigarray, following stdlib naming convention */\ntypedef uint16_t caml_ba_bfloat16;   /* BFloat16 */\ntypedef uint8_t caml_ba_fp8_e4m3;    /* 8-bit float: 1 sign, 4 exponent, 3 mantissa */\ntypedef uint8_t caml_ba_fp8_e5m2;    /* 8-bit float: 1 sign, 5 exponent, 2 mantissa */\ntypedef uint8_t caml_ba_bool;        /* Bool as byte (0/1) */\n/* Note: int4/uint4 pack 2 values per byte — no single-element typedef */\ntypedef uint32_t caml_ba_uint32;     /* Unsigned 32-bit */\ntypedef uint64_t caml_ba_uint64;     /* Unsigned 64-bit */\n\n/* Extended kind enumeration that continues from OCaml's bigarray kinds */\nenum nx_ba_extended_kind {\n  NX_BA_BFLOAT16 = CAML_BA_FIRST_UNIMPLEMENTED_KIND,\n  NX_BA_BOOL,\n  NX_BA_INT4,\n  NX_BA_UINT4,\n  NX_BA_FP8_E4M3,\n  NX_BA_FP8_E5M2,\n  NX_BA_UINT32,\n  NX_BA_UINT64,\n  NX_BA_LAST_KIND\n};\n\n#define NX_BA_EXTENDED_KIND_SHIFT 16\n#define NX_BA_EXTENDED_KIND_FIELD(kind) \\\n  ((int)((kind) << NX_BA_EXTENDED_KIND_SHIFT))\n#define NX_BA_EXTENDED_KIND_MASK NX_BA_EXTENDED_KIND_FIELD(0xFF)\n\nstatic inline bool nx_buffer_is_extended_kind(int kind) {\n  return kind >= NX_BA_BFLOAT16 && kind < NX_BA_LAST_KIND;\n}\n\nstatic inline int nx_buffer_get_stored_extended_kind(int flags) {\n  return (flags & NX_BA_EXTENDED_KIND_MASK) >> NX_BA_EXTENDED_KIND_SHIFT;\n}\n\nstatic inline int nx_buffer_store_extended_kind(int flags, int kind) {\n  flags &= ~NX_BA_EXTENDED_KIND_MASK;\n  if 
(nx_buffer_is_extended_kind(kind))\n    flags |= NX_BA_EXTENDED_KIND_FIELD(kind);\n  return flags;\n}\n\nstatic inline int nx_buffer_get_kind_from_flags(int flags) {\n  int stored = nx_buffer_get_stored_extended_kind(flags);\n  if (stored != 0) return stored;\n  return flags & CAML_BA_KIND_MASK;\n}\n\nstatic inline int nx_buffer_get_kind(const struct caml_ba_array *b) {\n  return nx_buffer_get_kind_from_flags(b->flags);\n}\n\n/* Conversion functions for extended types */\n\n/* BFloat16 conversions */\nstatic inline uint16_t float_to_bfloat16(float f) {\n  union {\n    float f;\n    uint32_t i;\n  } u = {.f = f};\n  /* Round to nearest even */\n  uint32_t rounding_bias = ((u.i >> 16) & 1) + 0x7FFF;\n  return (u.i + rounding_bias) >> 16;\n}\n\nstatic inline float bfloat16_to_float(uint16_t bf16) {\n  union {\n    float f;\n    uint32_t i;\n  } u;\n  u.i = ((uint32_t)bf16) << 16;\n  return u.f;\n}\n\n/* Float16 (IEEE 754 half-precision) conversions */\nstatic inline uint16_t float_to_half(float f) {\n  union {\n    float f;\n    uint32_t i;\n  } u = {.f = f};\n  uint32_t i = u.i;\n  uint16_t h_sgn = (uint16_t)((i & 0x80000000u) >> 16);\n  uint32_t f_m = i & 0x007FFFFFu; /* 23-bit mantissa only */\n  uint32_t f_e = (i & 0x7F800000u) >> 23;\n\n  if (f_e == 0xFF) { /* Inf or NaN */\n    h_sgn |= 0x7C00u;\n    h_sgn |= (f_m != 0); /* NaN if mantissa != 0 */\n    return h_sgn;\n  }\n  if (f_e == 0) { /* Denormal or zero */\n    return h_sgn; /* Flush to zero */\n  }\n  int exp = (int)f_e - 127 + 15;\n  if (exp >= 31) return h_sgn | 0x7C00u; /* Inf */\n  if (exp <= 0) return h_sgn;            /* Underflow */\n\n  uint32_t mant = f_m >> 13;\n  uint32_t round = (f_m >> 12) & 1;\n  if (round) {\n    mant += 1;\n    if (mant >= (1u << 10)) {\n      mant = 0;\n      exp += 1;\n    }\n  }\n  return h_sgn | (exp << 10) | (mant & 0x3FFu);\n}\n\nstatic inline float half_to_float(uint16_t h) {\n  uint32_t sign = ((uint32_t)(h & 0x8000u)) << 16;\n  uint32_t exp = (h & 0x7C00u) >> 10;\n  uint32_t mant = h & 
0x3FFu;\n\n  if (exp == 0x1F) { /* Inf/NaN */\n    exp = 0xFFu << 23;\n    mant = (mant != 0) ? (mant << 13) | 0x400000u : 0;\n  } else if (exp == 0) { /* Denorm or zero */\n    if (mant == 0) {\n      exp = 0;\n    } else { /* Denorm */\n      exp = 1;\n      while ((mant & 0x400u) == 0) {\n        mant <<= 1;\n        exp--;\n      }\n      mant &= 0x3FFu;\n      exp = (exp + 112) << 23;\n      mant <<= 13;\n    }\n  } else { /* Normal */\n    exp = (exp + 112) << 23;\n    mant <<= 13;\n  }\n\n  union {\n    float f;\n    uint32_t i;\n  } u;\n  u.i = sign | exp | mant;\n  return u.f;\n}\n\n/* FP8 E4M3 conversions (OCP MX spec: no infinity, 0x7F is NaN) */\nstatic inline uint8_t float_to_fp8_e4m3(float f) {\n  if (isnan(f)) return 0x7F; /* NaN */\n  /* E4M3 has no infinity — clamp to max finite */\n  if (isinf(f)) return signbit(f) ? 0xFE : 0x7E;\n\n  union {\n    float f;\n    uint32_t i;\n  } u = {.f = f};\n  uint32_t sign = (u.i >> 31) << 7;\n  int exp = ((u.i >> 23) & 0xFF) - 127;\n  uint32_t mant = u.i & 0x7FFFFF;\n\n  if (exp > 8) return sign | 0x7E;  /* Above max finite (448.0): clamp */\n  if (exp < -10) return sign;       /* Below half of min subnormal: +/-0 */\n\n  /* Normalize mantissa for rounding (add implicit 1) */\n  mant |= (1 << 23);\n\n  bool subnormal = (exp < -6);\n  if (subnormal) {\n    /* Denormalize: E4M3 subnormals are mant/8 * 2^-6 */\n    mant >>= (-6 - exp);\n  }\n\n  /* Shift to 3-bit mantissa position (23 - 3 = 20 bits shift) */\n  uint32_t mant_shifted = mant >> 20;\n  uint32_t round_bit = (mant >> 19) & 1;\n  uint32_t sticky_bits = mant & ((1 << 19) - 1);\n\n  /* Round to nearest, ties to even */\n  if (round_bit && (sticky_bits || (mant_shifted & 1))) {\n    mant_shifted += 1;\n    if (subnormal && mant_shifted >= (1 << 3)) { /* Rounded up to min normal */\n      subnormal = false;\n      exp = -6;\n      mant_shifted = 1 << 3;\n    } else if (!subnormal && mant_shifted >= (1 << 4)) { /* Overflow from rounding */\n      mant_shifted >>= 1;\n      exp += 1;\n      if (exp > 8) return sign | 0x7E;\n    }\n  }\n\n  uint8_t exp_bits = subnormal ? 0 : ((exp + 7) & 0xF); /* Bias 7 for E4M3 */\n  uint8_t mant_bits = mant_shifted & 0x7;\n  if (exp_bits == 0xF && mant_bits == 0x7) return sign | 0x7E; /* Avoid NaN pattern */\n\n  return sign | (exp_bits << 3) | mant_bits;\n}\n\nstatic inline float fp8_e4m3_to_float(uint8_t fp8) {\n  uint32_t sign = (fp8 >> 7) ? 
0x80000000 : 0;\n  uint32_t exp = (fp8 >> 3) & 0xF;\n  uint32_t mant = fp8 & 0x7;\n\n  /* Only S.1111.111 is NaN in E4M3; exp=15 with mant<7 encodes normal values */\n  if (exp == 0xF && mant == 0x7) return NAN;\n  if (exp == 0) {\n    if (mant == 0) return sign ? -0.0f : 0.0f;\n    /* Subnormal: mant/8 * 2^-6 */\n    float sub = ldexpf((float)mant / 8.0f, -6);\n    return sign ? -sub : sub;\n  }\n  exp = exp - 7 + 127;\n  exp <<= 23;\n  mant <<= 20;\n\n  union {\n    float f;\n    uint32_t i;\n  } u;\n  u.i = sign | exp | mant;\n  return u.f;\n}\n\n/* FP8 E5M2 conversions (IEEE-like: has infinity and subnormals) */\nstatic inline uint8_t float_to_fp8_e5m2(float f) {\n  if (isnan(f)) return 0x7F;                       /* NaN */\n  if (isinf(f)) return signbit(f) ? 0xFC : 0x7C;  /* +/-Inf */\n\n  union {\n    float f;\n    uint32_t i;\n  } u = {.f = f};\n  uint32_t sign = (u.i >> 31) << 7;\n  int exp = ((u.i >> 23) & 0xFF) - 127;\n  uint32_t mant = u.i & 0x7FFFFF;\n\n  if (exp > 15) return sign | 0x7C;   /* Clamp to Inf */\n  if (exp < -25) return sign;         /* Underflow to 0 */\n\n  bool subnormal = (exp < -14);\n  if (subnormal) {\n    /* Denormalize */\n    mant |= (1 << 23); /* Implicit 1 */\n    int shift = -14 - exp;\n    mant >>= shift;\n    exp = 0;\n  } else {\n    mant |= (1 << 23);\n  }\n\n  /* Round to 2-bit mantissa (shift 21 bits) */\n  uint32_t mant_shifted = mant >> 21;\n  uint32_t round_bit = (mant >> 20) & 1;\n  uint32_t sticky_bits = mant & ((1 << 20) - 1);\n\n  /* Round nearest, ties even */\n  if (round_bit && (sticky_bits || (mant_shifted & 1))) {\n    mant_shifted += 1;\n    if (subnormal && mant_shifted >= (1 << 2)) { /* Rounded up to min normal */\n      subnormal = false;\n      exp = -14;\n      mant_shifted = 1 << 2;\n    } else if (!subnormal && mant_shifted >= (1 << 3)) { /* Overflow */\n      mant_shifted = 0;\n      exp += 1;\n      if (exp > 15) return sign | 0x7C; /* To Inf */\n    }\n  }\n\n  uint8_t exp_bits = subnormal ? 
0 : ((exp + 15) & 0x1F); /* Bias 15 */\n  uint8_t mant_bits = mant_shifted & 0x3;\n\n  return sign | (exp_bits << 2) | mant_bits;\n}\n\nstatic inline float fp8_e5m2_to_float(uint8_t fp8) {\n  bool negative = (fp8 & 0x80) != 0;\n  uint32_t exp = (fp8 >> 2) & 0x1F;\n  uint32_t mant = fp8 & 0x3;\n\n  if (exp == 0x1F) { /* Inf/NaN */\n    if (mant == 0) return negative ? -INFINITY : INFINITY;\n    return NAN;\n  }\n\n  float value;\n  if (exp == 0) {\n    if (mant == 0) return negative ? -0.0f : 0.0f;\n    /* Subnormal: mantissa has no implicit leading 1 */\n    float frac = (float)mant / 4.0f;\n    value = ldexpf(frac, 1 - 15); /* 2^(1-bias) */\n  } else {\n    float frac = 1.0f + (float)mant / 4.0f;\n    value = ldexpf(frac, (int)exp - 15);\n  }\n\n  return negative ? -value : value;\n}\n\n#endif /* NX_BUFFER_STUBS_H */\n"
  },
  {
    "path": "packages/nx/lib/buffer/nx_buffer_stubs.js",
    "content": "/*---------------------------------------------------------------------------\n   Copyright (c) 2026 The Raven authors. All rights reserved.\n   SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*/\n\n/* JavaScript stubs for extended bigarray types.\n   Extends the standard js_of_ocaml bigarray implementation.\n\n   Supported extended types:\n   - bfloat16: Brain floating-point (16-bit)\n   - bool: Boolean values (8-bit)\n   - int4_signed/unsigned: 4-bit integers (packed 2 per byte)\n   - float8_e4m3/e5m2: 8-bit floating-point formats\n   - uint32/uint64: Unsigned 32/64-bit integers\n\n   The implementation extends the standard Ml_Bigarray class with\n   Ml_Nx_buffer to handle get/set/fill operations for these types. */\n\n//Provides: caml_unpackBfloat16\nfunction caml_unpackBfloat16(bytes) {\n  /* bfloat16 is the upper 16 bits of a float32 */\n  var buffer = new ArrayBuffer(4);\n  var view = new DataView(buffer);\n  view.setUint16(2, bytes, false); /* big-endian, upper 16 bits */\n  view.setUint16(0, 0, false);    /* lower 16 bits are zero */\n  return view.getFloat32(0, false);\n}\n\n//Provides: caml_packBfloat16\nfunction caml_packBfloat16(num) {\n  /* Convert to float32 and take upper 16 bits */\n  var buffer = new ArrayBuffer(4);\n  var view = new DataView(buffer);\n  view.setFloat32(0, num, false);\n  return view.getUint16(2, false);\n}\n\n//Provides: caml_unpackFp8_e4m3\nfunction caml_unpackFp8_e4m3(byte) {\n  var sign = (byte >> 7) & 0x1;\n  var exp = (byte >> 3) & 0xf;\n  var mantissa = byte & 0x7;\n\n  if (exp === 0 && mantissa === 0) return sign ? 
-0.0 : 0.0;\n\n  if (exp === 15 && mantissa === 7) return NaN; /* S.1111.111 is NaN */\n  if (exp === 0) {\n    /* Subnormal: mantissa/8 * 2^-6 */\n    var sub = (mantissa / 8) * Math.pow(2, -6);\n    return sign ? -sub : sub;\n  }\n\n  /* Convert to float32 format */\n  exp = exp - 7 + 127; /* Remove E4M3 bias, add float32 bias */\n  var bits = (sign << 31) | (exp << 23) | (mantissa << 20);\n\n  var buffer = new ArrayBuffer(4);\n  var view = new DataView(buffer);\n  view.setUint32(0, bits, false);\n  return view.getFloat32(0, false);\n}\n\n//Provides: caml_packFp8_e4m3\nfunction caml_packFp8_e4m3(num) {\n  /* Mantissa is truncated rather than rounded to nearest */\n  if (num !== num) return 0x7f; /* NaN -> S.1111.111 */\n  var sign = num < 0 || (num === 0 && 1 / num < 0) ? 0x80 : 0;\n  var a = Math.abs(num);\n  if (a >= 448) return sign | 0x7e; /* No infinity in E4M3: clamp to max finite */\n  if (a < Math.pow(2, -6)) {\n    /* Subnormal: value = mantissa/8 * 2^-6, i.e. steps of 2^-9 */\n    return sign | (Math.floor(a * 512) & 0x7);\n  }\n  var buffer = new ArrayBuffer(4);\n  var view = new DataView(buffer);\n  view.setFloat32(0, a, false);\n  var bits = view.getUint32(0, false);\n\n  var exp = ((bits >> 23) & 0xff) - 127; /* Extract and unbias exponent */\n  var mantissa = (bits >> 20) & 0x7;     /* Take top 3 bits of mantissa */\n\n  return sign | (((exp + 7) & 0xf) << 3) | mantissa; /* Bias 7 for E4M3 */\n}\n\n//Provides: caml_unpackFp8_e5m2\nfunction caml_unpackFp8_e5m2(byte) {\n  var sign = (byte >> 7) & 0x1;\n  var exp = (byte >> 2) & 0x1f;\n  var mantissa = byte & 0x3;\n\n  if (exp === 0 && mantissa === 0) return sign ? 
-0.0 : 0.0;\n\n  if (exp === 31) {\n    /* Inf / NaN */\n    if (mantissa === 0) return sign ? -Infinity : Infinity;\n    return NaN;\n  }\n  if (exp === 0) {\n    /* Subnormal: mantissa/4 * 2^-14 */\n    var sub = (mantissa / 4) * Math.pow(2, -14);\n    return sign ? -sub : sub;\n  }\n\n  /* Convert to float32 format */\n  exp = exp - 15 + 127; /* Remove E5M2 bias, add float32 bias */\n  var bits = (sign << 31) | (exp << 23) | (mantissa << 21);\n\n  var buffer = new ArrayBuffer(4);\n  var view = new DataView(buffer);\n  view.setUint32(0, bits, false);\n  return view.getFloat32(0, false);\n}\n\n//Provides: caml_packFp8_e5m2\nfunction caml_packFp8_e5m2(num) {\n  /* Mantissa is truncated rather than rounded to nearest */\n  if (num !== num) return 0x7f; /* NaN */\n  var sign = num < 0 || (num === 0 && 1 / num < 0) ? 0x80 : 0;\n  var a = Math.abs(num);\n  if (a >= 65536) return sign | 0x7c; /* Overflow to Inf (max finite is 57344) */\n  if (a < Math.pow(2, -14)) {\n    /* Subnormal: value = mantissa/4 * 2^-14, i.e. steps of 2^-16 */\n    return sign | (Math.floor(a * 65536) & 0x3);\n  }\n  var buffer = new ArrayBuffer(4);\n  var view = new DataView(buffer);\n  view.setFloat32(0, a, false);\n  var bits = view.getUint32(0, false);\n\n  var exp = ((bits >> 23) & 0xff) - 127; /* Extract and unbias exponent */\n  var mantissa = (bits >> 21) & 0x3;     /* Take top 2 bits of mantissa */\n\n  return sign | (((exp + 15) & 0x1f) << 2) | mantissa; /* Bias 15 for E5M2 */\n}\n\n/* Extended kind enumeration matching our C implementation */\nvar NX_BA_BFLOAT16 = 14;\nvar NX_BA_BOOL = 15;\nvar NX_BA_INT4 = 16;\nvar NX_BA_UINT4 = 17;\nvar NX_BA_FP8_E4M3 = 18;\nvar NX_BA_FP8_E5M2 = 19;\nvar NX_BA_UINT32 = 20;\nvar NX_BA_UINT64 = 21;\n\n//Provides: caml_nx_buffer_size_per_element\n//Requires: caml_ba_get_size_per_element\nfunction caml_nx_buffer_size_per_element(kind) {\n  /* Handle standard types first */\n  if (kind < 14) {\n    return caml_ba_get_size_per_element(kind);\n  }\n\n  /* Handle extended types */\n  switch (kind) {\n    case 14: /* NX_BA_BFLOAT16 */\n      return 2;\n    case 15: /* NX_BA_BOOL */\n      return 1;\n    case 16: /* NX_BA_INT4 */\n    case 17: /* NX_BA_UINT4 */\n      return 1; /* Packed 2 per byte */\n    case 18: /* NX_BA_FP8_E4M3 */\n    case 19: /* NX_BA_FP8_E5M2 */\n      return 1;\n    case 20: /* NX_BA_UINT32 */\n      return 4;\n    case 21: /* NX_BA_UINT64 */\n      return 8;\n    default:\n      return 1;\n  }\n}\n\n//Provides: caml_nx_buffer_create_data\n//Requires: 
caml_ba_create_buffer, caml_nx_buffer_size_per_element\n//Requires: caml_invalid_argument\nfunction caml_nx_buffer_create_data(kind, size) {\n  /* Handle standard types */\n  if (kind < 14) {\n    return caml_ba_create_buffer(kind, size);\n  }\n\n  /* For extended types, use appropriate typed arrays */\n  var view;\n  switch (kind) {\n    case 14: /* NX_BA_BFLOAT16 */\n      view = Uint16Array;\n      break;\n    case 15: /* NX_BA_BOOL */\n      view = Uint8Array;\n      break;\n    case 16: /* NX_BA_INT4 */\n    case 17: /* NX_BA_UINT4 */\n      /* Pack 2 values per byte */\n      view = Uint8Array;\n      size = Math.ceil(size / 2);\n      break;\n    case 18: /* NX_BA_FP8_E4M3 */\n    case 19: /* NX_BA_FP8_E5M2 */\n      view = Uint8Array;\n      break;\n    case 20: /* NX_BA_UINT32 */\n      view = Uint32Array;\n      break;\n    case 21: /* NX_BA_UINT64 */\n      if (typeof BigUint64Array === \"undefined\") {\n        caml_invalid_argument(\"Bigarray.create: uint64 not supported\");\n      }\n      view = BigUint64Array;\n      break;\n    default:\n      caml_invalid_argument(\"Bigarray.create: unsupported extended kind\");\n  }\n\n  return new view(size);\n}\n\n//Provides: Ml_Nx_buffer\n//Requires: Ml_Bigarray, caml_invalid_argument\n//Requires: caml_unpackBfloat16, caml_packBfloat16\n//Requires: caml_unpackFp8_e4m3, caml_packFp8_e4m3\n//Requires: caml_unpackFp8_e5m2, caml_packFp8_e5m2\nclass Ml_Nx_buffer extends Ml_Bigarray {\n  get(ofs) {\n    /* Handle standard types */\n    if (this.kind < 14) {\n      return super.get(ofs);\n    }\n\n    /* Handle extended types */\n    switch (this.kind) {\n      case 14: /* NX_BA_BFLOAT16 */\n        return caml_unpackBfloat16(this.data[ofs]);\n      case 15: /* NX_BA_BOOL */\n        return this.data[ofs] ? 
1 : 0;\n      case 16: { /* NX_BA_INT4 */\n        var byte = this.data[Math.floor(ofs / 2)];\n        var val;\n        if (ofs % 2 === 0) {\n          val = (byte & 0x0f);\n          /* Sign extend */\n          if (val & 0x08) val |= 0xfffffff0;\n        } else {\n          val = (byte >> 4) & 0x0f;\n          /* Sign extend */\n          if (val & 0x08) val |= 0xfffffff0;\n        }\n        return val;\n      }\n      case 17: { /* NX_BA_UINT4 */\n        var byte = this.data[Math.floor(ofs / 2)];\n        if (ofs % 2 === 0) {\n          return byte & 0x0f;\n        } else {\n          return (byte >> 4) & 0x0f;\n        }\n      }\n      case 18: /* NX_BA_FP8_E4M3 */\n        return caml_unpackFp8_e4m3(this.data[ofs]);\n      case 19: /* NX_BA_FP8_E5M2 */\n        return caml_unpackFp8_e5m2(this.data[ofs]);\n      case 20: /* NX_BA_UINT32 */\n        return this.data[ofs] | 0;\n      case 21: /* NX_BA_UINT64 */\n        return BigInt.asIntN(64, this.data[ofs]);\n      default:\n        return this.data[ofs];\n    }\n  }\n\n  set(ofs, v) {\n    /* Handle standard types */\n    if (this.kind < 14) {\n      return super.set(ofs, v);\n    }\n\n    /* Handle extended types */\n    switch (this.kind) {\n      case 14: /* NX_BA_BFLOAT16 */\n        this.data[ofs] = caml_packBfloat16(v);\n        break;\n      case 15: /* NX_BA_BOOL */\n        this.data[ofs] = v ? 
1 : 0;\n        break;\n      case 16: { /* NX_BA_INT4 */\n        if (v < -8 || v > 7) {\n          caml_invalid_argument(\"Bigarray.set: int4 value out of range [-8, 7]\");\n        }\n        var byte_idx = Math.floor(ofs / 2);\n        var byte = this.data[byte_idx];\n        if (ofs % 2 === 0) {\n          this.data[byte_idx] = (byte & 0xf0) | (v & 0x0f);\n        } else {\n          this.data[byte_idx] = (byte & 0x0f) | ((v & 0x0f) << 4);\n        }\n        break;\n      }\n      case 17: { /* NX_BA_UINT4 */\n        if (v < 0 || v > 15) {\n          caml_invalid_argument(\"Bigarray.set: uint4 value out of range [0, 15]\");\n        }\n        var byte_idx = Math.floor(ofs / 2);\n        var byte = this.data[byte_idx];\n        if (ofs % 2 === 0) {\n          this.data[byte_idx] = (byte & 0xf0) | (v & 0x0f);\n        } else {\n          this.data[byte_idx] = (byte & 0x0f) | ((v & 0x0f) << 4);\n        }\n        break;\n      }\n      case 18: /* NX_BA_FP8_E4M3 */\n        this.data[ofs] = caml_packFp8_e4m3(v);\n        break;\n      case 19: /* NX_BA_FP8_E5M2 */\n        this.data[ofs] = caml_packFp8_e5m2(v);\n        break;\n      case 20: /* NX_BA_UINT32 */\n        this.data[ofs] = v >>> 0;\n        break;\n      case 21: /* NX_BA_UINT64 */\n        this.data[ofs] = BigInt.asUintN(64, v);\n        break;\n      default:\n        this.data[ofs] = v;\n        break;\n    }\n    return 0;\n  }\n\n  fill(v) {\n    /* Handle standard types */\n    if (this.kind < 14) {\n      return super.fill(v);\n    }\n\n    /* Handle extended types */\n    switch (this.kind) {\n      case 14: /* NX_BA_BFLOAT16 */\n        this.data.fill(caml_packBfloat16(v));\n        break;\n      case 15: /* NX_BA_BOOL */\n        this.data.fill(v ? 
1 : 0);\n        break;\n      case 16: /* NX_BA_INT4 */\n      case 17: /* NX_BA_UINT4 */\n        /* For int4/uint4, pack 2 values per byte */\n        var packed = (v & 0x0f) | ((v & 0x0f) << 4);\n        this.data.fill(packed);\n        break;\n      case 18: /* NX_BA_FP8_E4M3 */\n        this.data.fill(caml_packFp8_e4m3(v));\n        break;\n      case 19: /* NX_BA_FP8_E5M2 */\n        this.data.fill(caml_packFp8_e5m2(v));\n        break;\n      case 20: /* NX_BA_UINT32 */\n        this.data.fill(v >>> 0);\n        break;\n      case 21: /* NX_BA_UINT64 */\n        this.data.fill(BigInt.asUintN(64, v));\n        break;\n      default:\n        this.data.fill(v);\n        break;\n    }\n  }\n}\n\n//Provides: caml_nx_buffer_create_unsafe\n//Requires: Ml_Nx_buffer, Ml_Bigarray_c_1_1, Ml_Bigarray\n//Requires: caml_ba_get_size, caml_nx_buffer_size_per_element\n//Requires: caml_invalid_argument\nfunction caml_nx_buffer_create_unsafe(kind, layout, dims, data) {\n  var size_per_element = caml_nx_buffer_size_per_element(kind);\n\n  /* For int4/uint4, adjust size calculation */\n  if (kind === 16 || kind === 17) {\n    var num_elts = caml_ba_get_size(dims);\n    if (Math.ceil(num_elts / 2) !== data.length) {\n      caml_invalid_argument(\"length doesn't match dims (int4/uint4)\");\n    }\n  } else if (caml_ba_get_size(dims) * size_per_element !== data.length) {\n    caml_invalid_argument(\"length doesn't match dims\");\n  }\n\n  /* Use extended class for extended types */\n  if (kind >= 14) {\n    return new Ml_Nx_buffer(kind, layout, dims, data);\n  }\n\n  /* Use standard classes for standard types */\n  if (\n    layout === 0 && /* c_layout */\n    dims.length === 1 && /* Array1 */\n    size_per_element === 1 &&\n    kind !== 13 /* float16 */\n  ) {\n    return new Ml_Bigarray_c_1_1(kind, layout, dims, data);\n  }\n  return new Ml_Bigarray(kind, layout, dims, data);\n}\n\n//Provides: caml_nx_buffer_create_internal\n//Requires: caml_js_from_array\n//Requires: 
caml_ba_get_size, caml_nx_buffer_create_unsafe\n//Requires: caml_nx_buffer_create_data\nfunction caml_nx_buffer_create_internal(kind, layout, dims_ml) {\n  var dims = caml_js_from_array(dims_ml);\n  var data = caml_nx_buffer_create_data(kind, caml_ba_get_size(dims));\n  return caml_nx_buffer_create_unsafe(kind, layout, dims, data);\n}\n\n/* Creation functions for each extended type */\n//Provides: caml_nx_buffer_create_bfloat16\n//Requires: caml_nx_buffer_create_internal\nfunction caml_nx_buffer_create_bfloat16(layout, dims) {\n  return caml_nx_buffer_create_internal(14, layout, dims);\n}\n\n//Provides: caml_nx_buffer_create_bool\n//Requires: caml_nx_buffer_create_internal\nfunction caml_nx_buffer_create_bool(layout, dims) {\n  return caml_nx_buffer_create_internal(15, layout, dims);\n}\n\n//Provides: caml_nx_buffer_create_int4_signed\n//Requires: caml_nx_buffer_create_internal\nfunction caml_nx_buffer_create_int4_signed(layout, dims) {\n  return caml_nx_buffer_create_internal(16, layout, dims);\n}\n\n//Provides: caml_nx_buffer_create_int4_unsigned\n//Requires: caml_nx_buffer_create_internal\nfunction caml_nx_buffer_create_int4_unsigned(layout, dims) {\n  return caml_nx_buffer_create_internal(17, layout, dims);\n}\n\n//Provides: caml_nx_buffer_create_float8_e4m3\n//Requires: caml_nx_buffer_create_internal\nfunction caml_nx_buffer_create_float8_e4m3(layout, dims) {\n  return caml_nx_buffer_create_internal(18, layout, dims);\n}\n\n//Provides: caml_nx_buffer_create_float8_e5m2\n//Requires: caml_nx_buffer_create_internal\nfunction caml_nx_buffer_create_float8_e5m2(layout, dims) {\n  return caml_nx_buffer_create_internal(19, layout, dims);\n}\n\n//Provides: caml_nx_buffer_create_uint32\n//Requires: caml_nx_buffer_create_internal\nfunction caml_nx_buffer_create_uint32(layout, dims) {\n  return caml_nx_buffer_create_internal(20, layout, dims);\n}\n\n//Provides: caml_nx_buffer_create_uint64\n//Requires: caml_nx_buffer_create_internal\nfunction 
caml_nx_buffer_create_uint64(layout, dims) {\n  return caml_nx_buffer_create_internal(21, layout, dims);\n}\n\n//Provides: caml_nx_buffer_get\n//Requires: caml_js_from_array, caml_ba_get_generic\nfunction caml_nx_buffer_get(ba, i) {\n  /* If it's an extended bigarray, use its get method */\n  if (ba.kind >= 14) {\n    var ofs = ba.offset(caml_js_from_array(i));\n    return ba.get(ofs);\n  }\n  /* Otherwise use standard implementation */\n  return caml_ba_get_generic(ba, i);\n}\n\n//Provides: caml_nx_buffer_set\n//Requires: caml_js_from_array, caml_ba_set_generic\nfunction caml_nx_buffer_set(ba, i, v) {\n  /* If it's an extended bigarray, use its set method */\n  if (ba.kind >= 14) {\n    ba.set(ba.offset(caml_js_from_array(i)), v);\n    return 0;\n  }\n  /* Otherwise use standard implementation */\n  return caml_ba_set_generic(ba, i, v);\n}\n\n//Provides: caml_nx_buffer_unsafe_get\n//Requires: caml_ba_get_1\nfunction caml_nx_buffer_unsafe_get(ba, i) {\n  if (ba.kind >= 14) {\n    return ba.get(i);\n  }\n  return caml_ba_get_1(ba, i);\n}\n\n//Provides: caml_nx_buffer_unsafe_set\n//Requires: caml_ba_set_1\nfunction caml_nx_buffer_unsafe_set(ba, i, v) {\n  if (ba.kind >= 14) {\n    ba.set(i, v);\n    return 0;\n  }\n  return caml_ba_set_1(ba, i, v);\n}\n\n//Provides: caml_nx_buffer_kind\n//Requires: Ml_Nx_buffer\nfunction caml_nx_buffer_kind(ba) {\n  /* Map bigarray kind to our extended kind enum values.\n     These must match the OCaml type constructor order. 
*/\n  switch (ba.kind) {\n    case 1: return 0;   /* Float32 */\n    case 0: return 1;   /* Float64 */\n    case 2: return 2;   /* Int8_signed */\n    case 3: return 3;   /* Int8_unsigned */\n    case 4: return 4;   /* Int16_signed */\n    case 5: return 5;   /* Int16_unsigned */\n    case 8: return 6;   /* Int32 */\n    case 9: return 7;   /* Int64 */\n    case 10: return 8;  /* Int */\n    case 11: return 9;  /* Nativeint */\n    case 6: return 10;  /* Complex32 */\n    case 7: return 11;  /* Complex64 */\n    case 12: return 12; /* Char */\n    case 13: return 13; /* Float16 */\n    /* Extended types */\n    case 14: return 14; /* Bfloat16 */\n    case 15: return 15; /* Bool */\n    case 16: return 16; /* Int4_signed */\n    case 17: return 17; /* Int4_unsigned */\n    case 18: return 18; /* Float8_e4m3 */\n    case 19: return 19; /* Float8_e5m2 */\n    case 20: return 20; /* Uint32 */\n    case 21: return 21; /* Uint64 */\n    default:\n      throw new Error(\"Unknown bigarray kind: \" + ba.kind);\n  }\n}\n\n//Provides: caml_nx_buffer_blit\n//Requires: caml_ba_blit\nfunction caml_nx_buffer_blit(src, dst) {\n  if (src.kind >= 14 && dst.kind >= 14 && src.kind === dst.kind) {\n    /* For extended types, raw data copy */\n    dst.data.set(src.data);\n    return 0;\n  }\n  return caml_ba_blit(src, dst);\n}\n\n//Provides: caml_nx_buffer_fill\n//Requires: caml_ba_fill\nfunction caml_nx_buffer_fill(ba, v) {\n  if (ba.kind >= 14) {\n    ba.fill(v);\n    return 0;\n  }\n  return caml_ba_fill(ba, v);\n}\n\n//Provides: caml_nx_buffer_blit_from_bytes\nfunction caml_nx_buffer_blit_from_bytes(bytes, src_off, dst, dst_off, len) {\n  var dst_data = new Uint8Array(dst.data.buffer, dst.data.byteOffset);\n  for (var i = 0; i < len; i++) {\n    dst_data[dst_off + i] = bytes[src_off + i];\n  }\n  return 0;\n}\n\n//Provides: caml_nx_buffer_blit_to_bytes\nfunction caml_nx_buffer_blit_to_bytes(src, src_off, bytes, dst_off, len) {\n  var src_data = new Uint8Array(src.data.buffer, 
src.data.byteOffset);\n  for (var i = 0; i < len; i++) {\n    bytes[dst_off + i] = src_data[src_off + i];\n  }\n  return 0;\n}\n"
  },
  {
    "path": "packages/nx/lib/buffer/test/dune",
    "content": "(test\n (name test_nx_buffer)\n (package nx)\n (libraries nx_buffer windtrap))\n"
  },
  {
    "path": "packages/nx/lib/buffer/test/test_nx_buffer.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Nx_buffer\nopen Windtrap\n\n(* Test creation of different buffer types *)\nlet test_create_bfloat16 () =\n  let buf = create bfloat16 10 in\n  equal ~msg:\"bfloat16 buffer size\" int 10 (length buf);\n  set buf 0 1.0;\n  set buf 5 2.5;\n  equal ~msg:\"bfloat16 get\" (float 0.1) 1.0 (get buf 0);\n  equal ~msg:\"bfloat16 get\" (float 0.1) 2.5 (get buf 5)\n\nlet test_create_bool () =\n  let buf = create Nx_buffer.bool 8 in\n  equal ~msg:\"bool buffer size\" int 8 (length buf);\n  set buf 0 true;\n  set buf 1 false;\n  set buf 7 true;\n  equal ~msg:\"bool get\" bool true (get buf 0);\n  equal ~msg:\"bool get\" bool false (get buf 1);\n  equal ~msg:\"bool get\" bool true (get buf 7)\n\nlet test_create_int4_signed () =\n  let buf = create int4_signed 16 in\n  equal ~msg:\"int4_signed buffer size\" int 16 (length buf);\n  set buf 0 (-8);\n  set buf 1 7;\n  set buf 2 0;\n  equal ~msg:\"int4_signed get\" int (-8) (get buf 0);\n  equal ~msg:\"int4_signed get\" int 7 (get buf 1);\n  equal ~msg:\"int4_signed get\" int 0 (get buf 2)\n\nlet test_create_int4_unsigned () =\n  let buf = create int4_unsigned 16 in\n  equal ~msg:\"int4_unsigned buffer size\" int 16 (length buf);\n  set buf 0 0;\n  set buf 1 15;\n  set buf 2 8;\n  equal ~msg:\"int4_unsigned get\" int 0 (get buf 0);\n  equal ~msg:\"int4_unsigned get\" int 15 (get buf 1);\n  equal ~msg:\"int4_unsigned get\" int 8 (get buf 2)\n\nlet test_create_float8_e4m3 () =\n  let buf = create float8_e4m3 10 in\n  equal ~msg:\"float8_e4m3 buffer size\" int 10 (length buf);\n  set buf 0 0.0;\n  set buf 1 1.0;\n  set buf 2 (-1.5);\n  equal ~msg:\"float8_e4m3 get\" (float 0.1) 0.0 (get buf 0);\n  equal ~msg:\"float8_e4m3 get\" (float 0.1) 1.0 (get buf 1);\n  equal 
~msg:\"float8_e4m3 get\" (float 0.1) (-1.5) (get buf 2)\n\nlet test_create_float8_e5m2 () =\n  let buf = create float8_e5m2 10 in\n  equal ~msg:\"float8_e5m2 buffer size\" int 10 (length buf);\n  set buf 0 0.0;\n  set buf 1 2.0;\n  set buf 2 (-0.5);\n  equal ~msg:\"float8_e5m2 get\" (float 0.1) 0.0 (get buf 0);\n  equal ~msg:\"float8_e5m2 get\" (float 0.1) 2.0 (get buf 1);\n  equal ~msg:\"float8_e5m2 get\" (float 0.1) (-0.5) (get buf 2)\n\n(* Test genarray creation *)\nlet test_genarray_creation () =\n  let dims = [| 2; 3; 4 |] in\n  let ga_bf16 = genarray_create bfloat16 Bigarray.c_layout dims in\n  let ga_bool = genarray_create Nx_buffer.bool Bigarray.c_layout dims in\n  let ga_fp8 = genarray_create float8_e4m3 Bigarray.c_layout dims in\n  equal ~msg:\"Genarray bfloat16 dims\" int 3\n    (Array.length (Bigarray.Genarray.dims ga_bf16));\n  equal ~msg:\"Genarray bool dims\" int 3\n    (Array.length (Bigarray.Genarray.dims ga_bool));\n  equal ~msg:\"Genarray float8 dims\" int 3\n    (Array.length (Bigarray.Genarray.dims ga_fp8));\n  equal ~msg:\"Genarray dim 0\" int 2 (Bigarray.Genarray.nth_dim ga_bf16 0);\n  equal ~msg:\"Genarray dim 1\" int 3 (Bigarray.Genarray.nth_dim ga_bf16 1);\n  equal ~msg:\"Genarray dim 2\" int 4 (Bigarray.Genarray.nth_dim ga_bf16 2)\n\n(* Test kind_size_in_bytes *)\nlet test_kind_sizes () =\n  equal ~msg:\"bfloat16 size\" int 2 (kind_size_in_bytes bfloat16);\n  equal ~msg:\"bool size\" int 1 (kind_size_in_bytes Nx_buffer.bool);\n  equal ~msg:\"int4_signed size\" int 1 (kind_size_in_bytes int4_signed);\n  equal ~msg:\"int4_unsigned size\" int 1 (kind_size_in_bytes int4_unsigned);\n  equal ~msg:\"float8_e4m3 size\" int 1 (kind_size_in_bytes float8_e4m3);\n  equal ~msg:\"float8_e5m2 size\" int 1 (kind_size_in_bytes float8_e5m2);\n  equal ~msg:\"uint32 size\" int 4 (kind_size_in_bytes uint32);\n  equal ~msg:\"uint64 size\" int 8 (kind_size_in_bytes uint64);\n  equal ~msg:\"float32 size\" int 4 (kind_size_in_bytes float32);\n  equal 
~msg:\"float64 size\" int 8 (kind_size_in_bytes float64);\n  equal ~msg:\"int32 size\" int 4 (kind_size_in_bytes Nx_buffer.int32)\n\n(* Test blit *)\nlet test_blit () =\n  let src = create float32 4 in\n  let dst = create float32 4 in\n  set src 0 1.0;\n  set src 1 2.0;\n  set src 2 3.0;\n  set src 3 4.0;\n  blit ~src ~dst;\n  equal ~msg:\"blit[0]\" (float 1e-6) 1.0 (get dst 0);\n  equal ~msg:\"blit[3]\" (float 1e-6) 4.0 (get dst 3)\n\n(* Test fill *)\nlet test_fill () =\n  let buf = create float32 4 in\n  fill buf 7.0;\n  equal ~msg:\"fill[0]\" (float 1e-6) 7.0 (get buf 0);\n  equal ~msg:\"fill[3]\" (float 1e-6) 7.0 (get buf 3)\n\n(* Test bigarray conversions *)\nlet test_bigarray_roundtrip () =\n  let buf = create float32 3 in\n  set buf 0 1.0;\n  set buf 1 2.0;\n  set buf 2 3.0;\n  let ba1 = to_bigarray1 buf in\n  equal ~msg:\"to_bigarray1 dim\" int 3 (Bigarray.Array1.dim ba1);\n  let buf2 = of_bigarray1 ba1 in\n  equal ~msg:\"roundtrip[0]\" (float 1e-6) 1.0 (get buf2 0);\n  equal ~msg:\"roundtrip[2]\" (float 1e-6) 3.0 (get buf2 2)\n\nlet test_genarray_roundtrip () =\n  let buf = create float32 6 in\n  for i = 0 to 5 do\n    set buf i (float_of_int i)\n  done;\n  let ga = to_genarray buf [| 2; 3 |] in\n  equal ~msg:\"genarray dims\" (array int) [| 2; 3 |] (Bigarray.Genarray.dims ga);\n  let buf2 = of_genarray ga in\n  equal ~msg:\"genarray roundtrip length\" int 6 (length buf2);\n  equal ~msg:\"genarray roundtrip[0]\" (float 1e-6) 0.0 (get buf2 0);\n  equal ~msg:\"genarray roundtrip[5]\" (float 1e-6) 5.0 (get buf2 5)\n\n(* Test suite *)\nlet () =\n  run \"Nx_buffer tests\"\n    [\n      group \"creation\"\n        [\n          test \"create bfloat16\" test_create_bfloat16;\n          test \"create bool\" test_create_bool;\n          test \"create int4_signed\" test_create_int4_signed;\n          test \"create int4_unsigned\" test_create_int4_unsigned;\n          test \"create float8_e4m3\" test_create_float8_e4m3;\n          test \"create float8_e5m2\" 
test_create_float8_e5m2;\n        ];\n      group \"genarray\" [ test \"genarray creation\" test_genarray_creation ];\n      group \"properties\" [ test \"kind sizes\" test_kind_sizes ];\n      group \"operations\" [ test \"blit\" test_blit; test \"fill\" test_fill ];\n      group \"conversions\"\n        [\n          test \"bigarray roundtrip\" test_bigarray_roundtrip;\n          test \"genarray roundtrip\" test_genarray_roundtrip;\n        ];\n    ]\n"
  },
  {
    "path": "packages/nx/lib/core/backend_intf.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Backend interface for Nx tensor operations.\n\n    This module type defines the contract between Nx's frontend and its\n    pluggable backends. Backends may execute operations eagerly (C backend),\n    raise effects for JIT compilation (Rune), build computation graphs, or\n    implement other execution strategies.\n\n    {1 Design Philosophy}\n\n    Operations exist at the level of C standard library functions: every\n    operation that maps to a C stdlib call is a backend primitive, avoiding the\n    overhead of composing multiple operations in eager mode. Rune's JIT pipeline\n    can decompose these into lower primitives when building computation graphs.\n\n    {1 Frontend/Backend Contract}\n\n    The frontend is responsible for:\n    - Broadcasting inputs to matching shapes before calling binary operations.\n    - Promoting dtypes to compatible types before calling operations.\n    - Validating parameters (axes in range, shapes compatible, etc.).\n\n    The backend can assume all inputs are well-formed. It is responsible for:\n    - Executing the operation correctly for all supported dtypes.\n    - Handling strided (non-contiguous) inputs via the view metadata.\n    - Returning tensors with correct view metadata.\n\n    {1 Conventions}\n\n    - All compute operations allocate and return their result. The frontend\n      passes pre-broadcasted, pre-validated inputs and receives the result\n      tensor.\n    - Movement operations manipulate view metadata (shape, strides, offset)\n      without copying data when possible. *)\nmodule type S = sig\n  (** {1 Types} *)\n\n  type ('a, 'b) t\n  (** ['a] is the OCaml element type (e.g., [float], [int32]). 
['b] is a phantom\n      type that tags the dtype for type safety. *)\n\n  type context\n  (** Backend execution context.\n\n      Carries backend-specific state such as memory pools, device handles,\n      command queues, or computation graphs. *)\n\n  (** {1 Tensor Properties} *)\n\n  val view : ('a, 'b) t -> View.t\n  (** [view t] returns the strided view metadata describing [t]'s logical layout\n      (shape, strides, offset) over its underlying buffer. *)\n\n  val dtype : ('a, 'b) t -> ('a, 'b) Dtype.t\n  (** [dtype t] returns the element type of [t]. *)\n\n  val context : ('a, 'b) t -> context\n  (** [context t] returns the execution context that owns [t]. *)\n\n  val to_host : ('a, 'b) t -> ('a, 'b) Nx_buffer.t\n  (** [to_host t] returns [t]'s data as a flat, C-contiguous host buffer.\n\n      Use {!view} to interpret the logical structure. CPU backends may return a\n      direct reference (zero-copy); GPU backends copy from device to host. *)\n\n  (** {1 Tensor Creation} *)\n\n  val buffer : context -> ('a, 'b) Dtype.t -> int array -> ('a, 'b) t\n  (** [buffer ctx dtype shape] allocates an uninitialized tensor.\n\n      Contents are undefined. Used internally by backends to allocate output\n      tensors.\n\n      {b Backend must:} return a tensor with the given shape and dtype whose\n      view is C-contiguous. *)\n\n  val full : context -> ('a, 'b) Dtype.t -> int array -> 'a -> ('a, 'b) t\n  (** [full ctx dtype shape value] creates a tensor where every element is\n      [value].\n\n      For scalars, [shape] is [[||]]. Subsumes zeros, ones, and constant fill.\n\n      {b Backend must:} return a C-contiguous tensor of the given shape and\n      dtype with all elements set to [value]. *)\n\n  val from_host : context -> ('a, 'b) Nx_buffer.t -> ('a, 'b) t\n  (** [from_host ctx buf] creates a tensor from a flat, C-contiguous host\n      buffer.\n\n      CPU backends may share the buffer directly (zero-copy). 
GPU backends copy\n      from host to device.\n\n      {b Frontend guarantees:} [buf] is C-contiguous. *)\n\n  (** {1 Element-wise Binary Operations}\n\n      {b Frontend guarantees:} [a] and [b] have identical shapes (after\n      broadcasting) and compatible dtypes (after promotion).\n\n      {b Backend must:} allocate a C-contiguous output tensor with the correct\n      shape and write the result.\n\n      {2 Arithmetic} *)\n\n  val add : ('a, 'b) t -> ('a, 'b) t -> ('a, 'b) t\n  (** [add a b] is the element-wise sum of [a] and [b]. *)\n\n  val sub : ('a, 'b) t -> ('a, 'b) t -> ('a, 'b) t\n  (** [sub a b] is the element-wise difference of [a] and [b]. *)\n\n  val mul : ('a, 'b) t -> ('a, 'b) t -> ('a, 'b) t\n  (** [mul a b] is the element-wise product of [a] and [b]. *)\n\n  val div : ('a, 'b) t -> ('a, 'b) t -> ('a, 'b) t\n  (** [div a b] is the element-wise quotient of [a] and [b].\n\n      Integer dtypes use truncation toward zero (C division). Floating-point\n      dtypes use IEEE 754 division. *)\n\n  val mod_ : ('a, 'b) t -> ('a, 'b) t -> ('a, 'b) t\n  (** [mod_ a b] is the element-wise remainder of [a / b].\n\n      Integers use C's [%] operator (truncated division). Floats use [fmod]. The\n      sign of the result follows the dividend [a]. *)\n\n  val pow : ('a, 'b) t -> ('a, 'b) t -> ('a, 'b) t\n  (** [pow base exponent] is the element-wise power [base ^ exponent]. *)\n\n  val atan2 : ('a, 'b) t -> ('a, 'b) t -> ('a, 'b) t\n  (** [atan2 y x] is the element-wise arc tangent of [y / x].\n\n      Returns the angle in radians in [(-π, π\\]], handling all quadrants. *)\n\n  (** {2 Comparison}\n\n      Comparison operations produce boolean tensors. *)\n\n  val cmpeq : ('a, 'b) t -> ('a, 'b) t -> (bool, Dtype.bool_elt) t\n  (** [cmpeq a b] is the element-wise equality test of [a] and [b]. *)\n\n  val cmpne : ('a, 'b) t -> ('a, 'b) t -> (bool, Dtype.bool_elt) t\n  (** [cmpne a b] is the element-wise inequality test of [a] and [b]. 
*)\n\n  val cmplt : ('a, 'b) t -> ('a, 'b) t -> (bool, Dtype.bool_elt) t\n  (** [cmplt a b] is the element-wise less-than test of [a] and [b]. *)\n\n  val cmple : ('a, 'b) t -> ('a, 'b) t -> (bool, Dtype.bool_elt) t\n  (** [cmple a b] is the element-wise less-or-equal test of [a] and [b]. *)\n\n  (** {2 Min/Max} *)\n\n  val max : ('a, 'b) t -> ('a, 'b) t -> ('a, 'b) t\n  (** [max a b] is the element-wise maximum of [a] and [b]. *)\n\n  val min : ('a, 'b) t -> ('a, 'b) t -> ('a, 'b) t\n  (** [min a b] is the element-wise minimum of [a] and [b]. *)\n\n  (** {2 Bitwise}\n\n      Operate on the binary representation of integer and boolean dtypes. For\n      booleans, these are equivalent to logical AND/OR/XOR. *)\n\n  val xor : ('a, 'b) t -> ('a, 'b) t -> ('a, 'b) t\n  (** [xor a b] is the element-wise bitwise XOR of [a] and [b]. *)\n\n  val or_ : ('a, 'b) t -> ('a, 'b) t -> ('a, 'b) t\n  (** [or_ a b] is the element-wise bitwise OR of [a] and [b]. *)\n\n  val and_ : ('a, 'b) t -> ('a, 'b) t -> ('a, 'b) t\n  (** [and_ a b] is the element-wise bitwise AND of [a] and [b]. *)\n\n  (** {1 Element-wise Unary Operations}\n\n      {b Frontend guarantees:} [x] has compatible dtype.\n\n      {b Backend must:} allocate a C-contiguous output tensor with the correct\n      shape and write the result.\n\n      {2 Arithmetic} *)\n\n  val neg : ('a, 'b) t -> ('a, 'b) t\n  (** [neg x] is the element-wise negation of [x]. *)\n\n  val recip : ('a, 'b) t -> ('a, 'b) t\n  (** [recip x] is the element-wise reciprocal of [x]. *)\n\n  val abs : ('a, 'b) t -> ('a, 'b) t\n  (** [abs x] is the element-wise absolute value of [x]. *)\n\n  val sqrt : ('a, 'b) t -> ('a, 'b) t\n  (** [sqrt x] is the element-wise square root of [x]. *)\n\n  val sign : ('a, 'b) t -> ('a, 'b) t\n  (** [sign x] is the element-wise sign of [x]: [-1] for negative, [0] for zero,\n      [1] for positive. Returns NaN for floating-point NaN inputs. 
*)\n\n  (** {2 Exponential and Logarithm} *)\n\n  val exp : ('a, 'b) t -> ('a, 'b) t\n  (** [exp x] is the element-wise exponential of [x]. *)\n\n  val log : ('a, 'b) t -> ('a, 'b) t\n  (** [log x] is the element-wise natural logarithm of [x]. *)\n\n  (** {2 Trigonometric}\n\n      All inputs are in radians. *)\n\n  val sin : ('a, 'b) t -> ('a, 'b) t\n  (** [sin x] is the element-wise sine of [x]. *)\n\n  val cos : ('a, 'b) t -> ('a, 'b) t\n  (** [cos x] is the element-wise cosine of [x]. *)\n\n  val tan : ('a, 'b) t -> ('a, 'b) t\n  (** [tan x] is the element-wise tangent of [x]. *)\n\n  val asin : ('a, 'b) t -> ('a, 'b) t\n  (** [asin x] is the element-wise arc sine of [x].\n\n      Returns values in [[-π/2, π/2]]. *)\n\n  val acos : ('a, 'b) t -> ('a, 'b) t\n  (** [acos x] is the element-wise arc cosine of [x].\n\n      Returns values in [[0, π]]. *)\n\n  val atan : ('a, 'b) t -> ('a, 'b) t\n  (** [atan x] is the element-wise arc tangent of [x].\n\n      Returns values in [[-π/2, π/2]]. *)\n\n  (** {2 Hyperbolic} *)\n\n  val sinh : ('a, 'b) t -> ('a, 'b) t\n  (** [sinh x] is the element-wise hyperbolic sine of [x]. *)\n\n  val cosh : ('a, 'b) t -> ('a, 'b) t\n  (** [cosh x] is the element-wise hyperbolic cosine of [x]. *)\n\n  val tanh : ('a, 'b) t -> ('a, 'b) t\n  (** [tanh x] is the element-wise hyperbolic tangent of [x]. *)\n\n  (** {2 Rounding}\n\n      For integer dtypes, all rounding operations are the identity. *)\n\n  val trunc : ('a, 'b) t -> ('a, 'b) t\n  (** [trunc x] rounds each element toward zero. *)\n\n  val ceil : ('a, 'b) t -> ('a, 'b) t\n  (** [ceil x] rounds each element toward positive infinity. *)\n\n  val floor : ('a, 'b) t -> ('a, 'b) t\n  (** [floor x] rounds each element toward negative infinity. *)\n\n  val round : ('a, 'b) t -> ('a, 'b) t\n  (** [round x] rounds each element to nearest integer, half away from zero (C's\n      [round]). 
*)\n\n  (** {2 Special Functions} *)\n\n  val erf : ('a, 'b) t -> ('a, 'b) t\n  (** [erf x] computes the error function [erf(x) = 2/√π ∫₀ˣ e^(-t²) dt]. *)\n\n  (** {1 Ternary Operations} *)\n\n  val where : (bool, Dtype.bool_elt) t -> ('a, 'b) t -> ('a, 'b) t -> ('a, 'b) t\n  (** [where cond if_true if_false] selects elements: [if_true.{i}] where\n      [cond.{i}] is true, [if_false.{i}] otherwise.\n\n      {b Frontend guarantees:} all three input tensors have identical shapes.\n      [cond] is boolean. [if_true] and [if_false] share the same dtype. *)\n\n  (** {1 Reduction Operations}\n\n      Reductions aggregate values along one or more axes.\n\n      {b Frontend guarantees:} [axes] contains valid, non-negative, deduplicated\n      axis indices. *)\n\n  val reduce_sum : axes:int array -> keepdims:bool -> ('a, 'b) t -> ('a, 'b) t\n  (** [reduce_sum ~axes ~keepdims x] sums elements of [x] along [axes]. *)\n\n  val reduce_prod : axes:int array -> keepdims:bool -> ('a, 'b) t -> ('a, 'b) t\n  (** [reduce_prod ~axes ~keepdims x] multiplies elements of [x] along [axes].\n  *)\n\n  val reduce_max : axes:int array -> keepdims:bool -> ('a, 'b) t -> ('a, 'b) t\n  (** [reduce_max ~axes ~keepdims x] finds the maximum of [x] along [axes]. *)\n\n  val reduce_min : axes:int array -> keepdims:bool -> ('a, 'b) t -> ('a, 'b) t\n  (** [reduce_min ~axes ~keepdims x] finds the minimum of [x] along [axes]. *)\n\n  val argmax :\n    axis:int -> keepdims:bool -> ('a, 'b) t -> (int32, Dtype.int32_elt) t\n  (** [argmax ~axis ~keepdims x] returns int32 indices of maximum values of [x]\n      along [axis]. For ties, returns the first occurrence.\n\n      {b Frontend guarantees:} [axis] is valid and non-negative. *)\n\n  val argmin :\n    axis:int -> keepdims:bool -> ('a, 'b) t -> (int32, Dtype.int32_elt) t\n  (** [argmin ~axis ~keepdims x] returns int32 indices of minimum values of [x]\n      along [axis]. 
For ties, returns the first occurrence.\n\n      {b Frontend guarantees:} [axis] is valid and non-negative. *)\n\n  val associative_scan :\n    axis:int -> op:[ `Sum | `Prod | `Max | `Min ] -> ('a, 'b) t -> ('a, 'b) t\n  (** [associative_scan ~axis ~op x] computes an inclusive prefix scan of [x]\n      along [axis]. [`Sum] for cumulative sum, [`Prod] for cumulative product,\n      [`Max]/[`Min] for running max/min.\n\n      {b Frontend guarantees:} [axis] is valid and non-negative. *)\n\n  (** {1 Sort Operations}\n\n      {b Frontend guarantees:} [axis] is valid and non-negative. *)\n\n  val sort : axis:int -> descending:bool -> ('a, 'b) t -> ('a, 'b) t\n  (** [sort ~axis ~descending x] sorts elements of [x] along [axis]. NaN values\n      are placed at the end regardless of sort direction. *)\n\n  val argsort :\n    axis:int -> descending:bool -> ('a, 'b) t -> (int32, Dtype.int32_elt) t\n  (** [argsort ~axis ~descending x] returns int32 indices that would sort\n      elements of [x] along [axis]. *)\n\n  (** {1 Movement Operations}\n\n      Movement operations manipulate view metadata (shape, strides, offset)\n      without copying data when possible. They return new tensor handles sharing\n      the underlying buffer.\n\n      {b Frontend guarantees:} all parameters are validated (axes in range,\n      shapes compatible, bounds within limits).\n\n      {b Backend must:} return a tensor with the correct view metadata. May\n      share the underlying buffer (zero-copy) or allocate if necessary. *)\n\n  val expand : ('a, 'b) t -> int array -> ('a, 'b) t\n  (** [expand t shape] broadcasts dimensions of size 1 to match [shape] by\n      setting their stride to 0. Non-singleton dimensions must already match.\n      Zero-copy. *)\n\n  val reshape : ('a, 'b) t -> int array -> ('a, 'b) t\n  (** [reshape t shape] changes the logical shape, preserving element count.\n\n      Zero-copy when [t] is C-contiguous or the reshape is compatible with the\n      current strides. 
May copy if [t] is non-contiguous. *)\n\n  val permute : ('a, 'b) t -> int array -> ('a, 'b) t\n  (** [permute t axes] reorders dimensions according to [axes], which must be a\n      permutation of [[0, ..., ndim-1]]. Zero-copy. *)\n\n  val shrink : ('a, 'b) t -> (int * int) array -> ('a, 'b) t\n  (** [shrink t ranges] extracts a contiguous slice. [ranges.(i)] is\n      [(start, stop)] with exclusive [stop]. Zero-copy (adjusts offset and\n      shape). *)\n\n  val flip : ('a, 'b) t -> bool array -> ('a, 'b) t\n  (** [flip t axes] reverses dimensions where [axes.(i) = true] by negating\n      strides. Zero-copy. *)\n\n  val pad : ('a, 'b) t -> (int * int) array -> 'a -> ('a, 'b) t\n  (** [pad t padding fill_value] extends [t] with [fill_value]. [padding.(i)] is\n      [(before, after)] for dimension [i].\n\n      {b Backend must:} allocate a new buffer and copy data. *)\n\n  val cat : ('a, 'b) t list -> axis:int -> ('a, 'b) t\n  (** [cat tensors ~axis] concatenates [tensors] along [axis].\n\n      {b Frontend guarantees:} all tensors have the same shape except along\n      [axis]. [axis] is valid. The list is non-empty. *)\n\n  (** {1 Type Conversion and Memory} *)\n\n  val cast : dtype:('c, 'd) Dtype.t -> ('a, 'b) t -> ('c, 'd) t\n  (** [cast ~dtype x] converts elements of [x] to [dtype].\n\n      Float-to-int truncates toward zero. Int-to-float may lose precision for\n      large values. *)\n\n  val contiguous : ('a, 'b) t -> ('a, 'b) t\n  (** [contiguous t] returns a C-contiguous version of [t].\n\n      May return [t] unchanged if already C-contiguous. Otherwise allocates and\n      copies.\n\n      {b Backend must:} return a C-contiguous tensor with the same data. *)\n\n  val copy : ('a, 'b) t -> ('a, 'b) t\n  (** [copy t] creates an independent copy with its own buffer.\n\n      {b Backend must:} always allocate a new buffer, even if [t] is already\n      contiguous. 
*)\n\n  val assign : ('a, 'b) t -> ('a, 'b) t -> unit\n  (** [assign dst src] copies elements from [src] into [dst] in-place.\n\n      {b Frontend guarantees:} [dst] and [src] have matching shapes and dtypes.\n\n      {b Backend must:} write [src]'s data into [dst]'s buffer, respecting both\n      tensors' strides. *)\n\n  (** {1 Random Number Generation} *)\n\n  val threefry :\n    (int32, Dtype.int32_elt) t ->\n    (int32, Dtype.int32_elt) t ->\n    (int32, Dtype.int32_elt) t\n  (** [threefry key counter] applies the Threefry-2x32 hash function.\n\n      {b Frontend guarantees:} [key] and [counter] are int32 tensors with\n      compatible shapes. *)\n\n  (** {1 Indexed Access Operations} *)\n\n  val gather :\n    ('a, 'b) t -> (int32, Dtype.int32_elt) t -> axis:int -> ('a, 'b) t\n  (** [gather data indices ~axis] selects elements from [data] along [axis]\n      using [indices].\n\n      {b Frontend guarantees:} [rank data = rank indices]. [axis] is valid.\n      Index values are in range for [data]'s size along [axis]. *)\n\n  val scatter :\n    ?mode:[ `Set | `Add ] ->\n    ?unique_indices:bool ->\n    ('a, 'b) t ->\n    indices:(int32, Dtype.int32_elt) t ->\n    updates:('a, 'b) t ->\n    axis:int ->\n    ('a, 'b) t\n  (** [scatter ?mode ?unique_indices template ~indices ~updates ~axis] places\n      [updates] into a tensor shaped like [template] along [axis].\n\n      [`Set] (default) uses the last update for duplicate indices. [`Add]\n      accumulates. [unique_indices = true] hints that indices are unique.\n\n      {b Frontend guarantees:} [rank indices = rank updates]. [axis] is valid.\n      [template] has the desired output shape.\n\n      {b Backend must:} allocate and return the result tensor, initialized from\n      [template]'s data. *)\n\n  (** {1 Window Operations}\n\n      Sliding-window extraction and its inverse. Used to implement convolution\n      as [unfold + reshape + matmul] and pooling as [unfold + reduce]. 
*)\n\n  val unfold :\n    ('a, 'b) t ->\n    kernel_size:int array ->\n    stride:int array ->\n    dilation:int array ->\n    padding:(int * int) array ->\n    ('a, 'b) t\n  (** [unfold t ~kernel_size ~stride ~dilation ~padding] extracts sliding\n      windows from the last [K] spatial dimensions, where\n      [K = Array.length kernel_size].\n\n      Input shape [(leading..., spatial...)] produces\n      [(leading..., prod(kernel_size), L)] where [L] is the number of windows.\n      All dimensions before the last [K] are preserved as-is.\n\n      {b Frontend guarantees:} all array parameters have length [K]. Values are\n      positive. Input has at least [K] dimensions.\n\n      {b Backend must:} allocate and return the result tensor. *)\n\n  val fold :\n    ('a, 'b) t ->\n    output_size:int array ->\n    kernel_size:int array ->\n    stride:int array ->\n    dilation:int array ->\n    padding:(int * int) array ->\n    ('a, 'b) t\n  (** [fold t ~output_size ~kernel_size ~stride ~dilation ~padding] combines\n      sliding windows (inverse of {!unfold}). Overlapping values are summed.\n\n      Input shape [(leading..., prod(kernel_size), L)] produces\n      [(leading..., output_size...)].\n\n      {b Frontend guarantees:} parameters are consistent with a valid unfold\n      configuration.\n\n      {b Backend must:} allocate and return the result tensor. *)\n\n  (** {1 Matrix Operations} *)\n\n  val matmul : ('a, 'b) t -> ('a, 'b) t -> ('a, 'b) t\n  (** [matmul a b] computes matrix multiplication [a × b].\n\n      For 2D inputs: standard matrix multiply. For higher dimensions: batched\n      multiply on the last two dimensions, with broadcasting via strides.\n\n      {b Frontend guarantees:} [a]'s last dim equals [b]'s second-to-last dim.\n\n      {b Backend must:} allocate and return the result. May use BLAS for\n      performance. [a] and [b] may be non-contiguous. 
*)\n\n  (** {1 Fourier Transforms}\n\n      {b Frontend guarantees:} [axes] contains valid, non-negative axis indices.\n      Input tensors have compatible complex or real dtypes. *)\n\n  val fft :\n    ?out:(Complex.t, 'b) t ->\n    (Complex.t, 'b) t ->\n    axes:int array ->\n    (Complex.t, 'b) t\n  (** [fft ?out t ~axes] computes the forward DFT along [axes]. *)\n\n  val ifft :\n    ?out:(Complex.t, 'b) t ->\n    (Complex.t, 'b) t ->\n    axes:int array ->\n    (Complex.t, 'b) t\n  (** [ifft ?out t ~axes] computes the inverse DFT along [axes]. *)\n\n  val rfft :\n    ?out:(Complex.t, 'b) t ->\n    (float, 'a) t ->\n    dtype:(Complex.t, 'b) Dtype.t ->\n    axes:int array ->\n    (Complex.t, 'b) t\n  (** [rfft ?out t ~dtype ~axes] computes the real-input DFT along [axes].\n\n      Exploits conjugate symmetry to return only the non-redundant half of the\n      spectrum along the last transformed axis. *)\n\n  val irfft :\n    ?out:(float, 'b) t ->\n    ?s:int array ->\n    (Complex.t, 'a) t ->\n    dtype:(float, 'b) Dtype.t ->\n    axes:int array ->\n    (float, 'b) t\n  (** [irfft ?out ?s t ~dtype ~axes] computes the inverse real-input DFT along\n      [axes].\n\n      Takes conjugate-symmetric complex input, returns real output. [s]\n      specifies output sizes along the transformed axes; [None] infers sizes\n      from the input. *)\n\n  (** {1 Linear Algebra}\n\n      All linalg operations support batching: the last two dimensions are the\n      matrix dimensions, earlier dimensions are batch dimensions.\n\n      {b Frontend guarantees:} input matrices have compatible shapes (square\n      where required, matching dimensions for solves).\n\n      {b Backend must:} allocate and return result tensors. Typically delegates\n      to LAPACK. *)\n\n  val cholesky : upper:bool -> ('a, 'b) t -> ('a, 'b) t\n  (** [cholesky ~upper t] computes the Cholesky factorization of a\n      positive-definite matrix. 
Returns [L] (lower) or [U] (upper) such that\n      [A = L·Lᵀ] or [A = Uᵀ·U].\n\n      @raise Failure if not positive-definite. *)\n\n  val qr : reduced:bool -> ('a, 'b) t -> ('a, 'b) t * ('a, 'b) t\n  (** [qr ~reduced t] returns [(Q, R)] where [Q] is orthogonal and [R] is upper\n      triangular. [reduced = true] returns economy-size factorization. *)\n\n  val svd :\n    full_matrices:bool ->\n    ('a, 'b) t ->\n    ('a, 'b) t * (float, Dtype.float64_elt) t * ('a, 'b) t\n  (** [svd ~full_matrices t] returns [(U, S, Vᴴ)]. [S] is a 1D float64 vector of\n      singular values in descending order. [full_matrices = false] returns thin\n      SVD. *)\n\n  val eig :\n    vectors:bool ->\n    ('a, 'b) t ->\n    (Complex.t, Dtype.complex64_elt) t\n    * (Complex.t, Dtype.complex64_elt) t option\n  (** [eig ~vectors t] computes eigenvalues (and optionally eigenvectors) of a\n      square matrix. Returns complex64 results. *)\n\n  val eigh :\n    vectors:bool ->\n    ('a, 'b) t ->\n    (float, Dtype.float64_elt) t * ('a, 'b) t option\n  (** [eigh ~vectors t] computes eigenvalues (and optionally eigenvectors) of a\n      symmetric/Hermitian matrix. Eigenvalues are float64. *)\n\n  val triangular_solve :\n    upper:bool ->\n    transpose:bool ->\n    unit_diag:bool ->\n    ('a, 'b) t ->\n    ('a, 'b) t ->\n    ('a, 'b) t\n  (** [triangular_solve ~upper ~transpose ~unit_diag a b] solves [A·x = b] or\n      [Aᵀ·x = b] where [A] is triangular.\n\n      [upper]: [A] is upper triangular. [transpose]: solve [Aᵀ·x = b].\n      [unit_diag]: assume diagonal is all ones. *)\nend\n"
  },
  {
    "path": "packages/nx/lib/core/dtype.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* ───── Element Types ───── *)\n\ntype float16_elt = Nx_buffer.float16_elt\ntype float32_elt = Nx_buffer.float32_elt\ntype float64_elt = Nx_buffer.float64_elt\ntype bfloat16_elt = Nx_buffer.bfloat16_elt\ntype float8_e4m3_elt = Nx_buffer.float8_e4m3_elt\ntype float8_e5m2_elt = Nx_buffer.float8_e5m2_elt\ntype int4_elt = Nx_buffer.int4_signed_elt\ntype uint4_elt = Nx_buffer.int4_unsigned_elt\ntype int8_elt = Nx_buffer.int8_signed_elt\ntype uint8_elt = Nx_buffer.int8_unsigned_elt\ntype int16_elt = Nx_buffer.int16_signed_elt\ntype uint16_elt = Nx_buffer.int16_unsigned_elt\ntype int32_elt = Nx_buffer.int32_elt\ntype uint32_elt = Nx_buffer.uint32_elt\ntype int64_elt = Nx_buffer.int64_elt\ntype uint64_elt = Nx_buffer.uint64_elt\ntype complex32_elt = Nx_buffer.complex32_elt\ntype complex64_elt = Nx_buffer.complex64_elt\ntype bool_elt = Nx_buffer.bool_elt\n\n(* ───── Dtype GADT ───── *)\n\ntype ('a, 'b) t =\n  | Float16 : (float, float16_elt) t\n  | Float32 : (float, float32_elt) t\n  | Float64 : (float, float64_elt) t\n  | BFloat16 : (float, bfloat16_elt) t\n  | Float8_e4m3 : (float, float8_e4m3_elt) t\n  | Float8_e5m2 : (float, float8_e5m2_elt) t\n  | Int4 : (int, int4_elt) t\n  | UInt4 : (int, uint4_elt) t\n  | Int8 : (int, int8_elt) t\n  | UInt8 : (int, uint8_elt) t\n  | Int16 : (int, int16_elt) t\n  | UInt16 : (int, uint16_elt) t\n  | Int32 : (int32, int32_elt) t\n  | UInt32 : (int32, uint32_elt) t\n  | Int64 : (int64, int64_elt) t\n  | UInt64 : (int64, uint64_elt) t\n  | Complex64 : (Complex.t, complex32_elt) t\n  | Complex128 : (Complex.t, complex64_elt) t\n  | Bool : (bool, bool_elt) t\n\n(* ───── Constructor Shortcuts ───── *)\n\nlet float16 = Float16\nlet float32 = Float32\nlet float64 = 
Float64\nlet bfloat16 = BFloat16\nlet float8_e4m3 = Float8_e4m3\nlet float8_e5m2 = Float8_e5m2\nlet int4 = Int4\nlet uint4 = UInt4\nlet int8 = Int8\nlet uint8 = UInt8\nlet int16 = Int16\nlet uint16 = UInt16\nlet int32 = Int32\nlet uint32 = UInt32\nlet int64 = Int64\nlet uint64 = UInt64\nlet complex64 = Complex64\nlet complex128 = Complex128\nlet bool = Bool\n\n(* ───── String Conversion ───── *)\n\nlet to_string : type a b. (a, b) t -> string = function\n  | Float16 -> \"float16\"\n  | Float32 -> \"float32\"\n  | Float64 -> \"float64\"\n  | BFloat16 -> \"bfloat16\"\n  | Float8_e4m3 -> \"float8_e4m3\"\n  | Float8_e5m2 -> \"float8_e5m2\"\n  | Int4 -> \"int4\"\n  | UInt4 -> \"uint4\"\n  | Int8 -> \"int8\"\n  | UInt8 -> \"uint8\"\n  | Int16 -> \"int16\"\n  | UInt16 -> \"uint16\"\n  | Int32 -> \"int32\"\n  | UInt32 -> \"uint32\"\n  | Int64 -> \"int64\"\n  | UInt64 -> \"uint64\"\n  | Complex64 -> \"complex64\"\n  | Complex128 -> \"complex128\"\n  | Bool -> \"bool\"\n\nlet pp fmt dtype = Format.fprintf fmt \"%s\" (to_string dtype)\n\n(* ───── Properties ───── *)\n\nlet itemsize : type a b. (a, b) t -> int = function\n  | Float16 -> 2\n  | Float32 -> 4\n  | Float64 -> 8\n  | BFloat16 -> 2\n  | Float8_e4m3 -> 1\n  | Float8_e5m2 -> 1\n  | Int4 -> 1 (* stored as 1 byte; packing is caller's responsibility *)\n  | UInt4 -> 1 (* stored as 1 byte; packing is caller's responsibility *)\n  | Int8 -> 1\n  | UInt8 -> 1\n  | Int16 -> 2\n  | UInt16 -> 2\n  | Int32 -> 4\n  | UInt32 -> 4\n  | Int64 -> 8\n  | UInt64 -> 8\n  | Complex64 -> 8\n  | Complex128 -> 16\n  | Bool -> 1\n\nlet bits : type a b. 
(a, b) t -> int = function\n  | Float16 -> 16\n  | Float32 -> 32\n  | Float64 -> 64\n  | BFloat16 -> 16\n  | Float8_e4m3 -> 8\n  | Float8_e5m2 -> 8\n  | Int4 -> 4\n  | UInt4 -> 4\n  | Int8 -> 8\n  | UInt8 -> 8\n  | Int16 -> 16\n  | UInt16 -> 16\n  | Int32 -> 32\n  | UInt32 -> 32\n  | Int64 -> 64\n  | UInt64 -> 64\n  | Complex64 -> 64\n  | Complex128 -> 128\n  | Bool -> 8\n\n(* ───── Type Predicates ───── *)\n\nlet is_float (type a b) (dt : (a, b) t) : bool =\n  match dt with\n  | Float16 | Float32 | Float64 | BFloat16 | Float8_e4m3 | Float8_e5m2 -> true\n  | _ -> false\n\nlet is_complex (type a b) (dt : (a, b) t) : bool =\n  match dt with Complex64 | Complex128 -> true | _ -> false\n\nlet is_int (type a b) (dt : (a, b) t) : bool =\n  match dt with\n  | Int4 | UInt4 | Int8 | UInt8 | Int16 | UInt16 | Int32 | UInt32 | Int64\n  | UInt64 ->\n      true\n  | _ -> false\n\nlet is_uint (type a b) (dt : (a, b) t) : bool =\n  match dt with UInt4 | UInt8 | UInt16 | UInt32 | UInt64 -> true | _ -> false\n\n(* ───── Constants ───── *)\n\nlet zero : type a b. (a, b) t -> a = function\n  | Float16 -> 0.0\n  | Float32 -> 0.0\n  | Float64 -> 0.0\n  | BFloat16 -> 0.0\n  | Float8_e4m3 -> 0.0\n  | Float8_e5m2 -> 0.0\n  | Int4 -> 0\n  | UInt4 -> 0\n  | Int8 -> 0\n  | UInt8 -> 0\n  | Int16 -> 0\n  | UInt16 -> 0\n  | Int32 -> 0l\n  | UInt32 -> 0l\n  | Int64 -> 0L\n  | UInt64 -> 0L\n  | Complex64 -> Complex.zero\n  | Complex128 -> Complex.zero\n  | Bool -> false\n\nlet one : type a b. (a, b) t -> a = function\n  | Float16 -> 1.0\n  | Float32 -> 1.0\n  | Float64 -> 1.0\n  | BFloat16 -> 1.0\n  | Float8_e4m3 -> 1.0\n  | Float8_e5m2 -> 1.0\n  | Int4 -> 1\n  | UInt4 -> 1\n  | Int8 -> 1\n  | UInt8 -> 1\n  | Int16 -> 1\n  | UInt16 -> 1\n  | Int32 -> 1l\n  | UInt32 -> 1l\n  | Int64 -> 1L\n  | UInt64 -> 1L\n  | Complex64 -> Complex.one\n  | Complex128 -> Complex.one\n  | Bool -> true\n\nlet minus_one : type a b. 
(a, b) t -> a = function\n  | Float16 -> -1.0\n  | Float32 -> -1.0\n  | Float64 -> -1.0\n  | BFloat16 -> -1.0\n  | Float8_e4m3 -> -1.0\n  | Float8_e5m2 -> -1.0\n  | Int4 -> -1\n  | UInt4 -> 15 (* all bits set *)\n  | Int8 -> -1\n  | UInt8 -> 255 (* all bits set *)\n  | Int16 -> -1\n  | UInt16 -> 65535 (* all bits set *)\n  | Int32 -> -1l\n  | UInt32 -> Int32.lognot 0l\n  | Int64 -> -1L\n  | UInt64 -> Int64.lognot 0L\n  | Complex64 -> Complex.{ re = -1.0; im = 0.0 }\n  | Complex128 -> Complex.{ re = -1.0; im = 0.0 }\n  | Bool -> true (* all bits set *)\n\nlet two : type a b. (a, b) t -> a = function\n  | Float16 -> 2.0\n  | Float32 -> 2.0\n  | Float64 -> 2.0\n  | BFloat16 -> 2.0\n  | Float8_e4m3 -> 2.0\n  | Float8_e5m2 -> 2.0\n  | Int4 -> 2\n  | UInt4 -> 2\n  | Int8 -> 2\n  | UInt8 -> 2\n  | Int16 -> 2\n  | UInt16 -> 2\n  | Int32 -> 2l\n  | UInt32 -> 2l\n  | Int64 -> 2L\n  | UInt64 -> 2L\n  | Complex64 -> Complex.{ re = 2.0; im = 0.0 }\n  | Complex128 -> Complex.{ re = 2.0; im = 0.0 }\n  | Bool -> true (* saturates to max *)\n\n(* ───── Bounds ───── *)\n\nlet min_value : type a b. (a, b) t -> a = function\n  | Float16 -> Float.neg_infinity\n  | Float32 -> Float.neg_infinity\n  | Float64 -> Float.neg_infinity\n  | BFloat16 -> Float.neg_infinity\n  | Float8_e4m3 -> Float.neg_infinity\n  | Float8_e5m2 -> Float.neg_infinity\n  | Int4 -> -8\n  | UInt4 -> 0\n  | Int8 -> -128\n  | UInt8 -> 0\n  | Int16 -> -32768\n  | UInt16 -> 0\n  | Int32 -> Int32.min_int\n  | UInt32 -> 0l\n  | Int64 -> Int64.min_int\n  | UInt64 -> 0L\n  | Complex64 -> invalid_arg \"Dtype.min_value: complex numbers are not ordered\"\n  | Complex128 -> invalid_arg \"Dtype.min_value: complex numbers are not ordered\"\n  | Bool -> false\n\nlet max_value : type a b. 
(a, b) t -> a = function\n  | Float16 -> Float.infinity\n  | Float32 -> Float.infinity\n  | Float64 -> Float.infinity\n  | BFloat16 -> Float.infinity\n  | Float8_e4m3 -> Float.infinity\n  | Float8_e5m2 -> Float.infinity\n  | Int4 -> 7\n  | UInt4 -> 15\n  | Int8 -> 127\n  | UInt8 -> 255\n  | Int16 -> 32767\n  | UInt16 -> 65535\n  | Int32 -> Int32.max_int\n  | UInt32 -> Int32.lognot 0l\n  | Int64 -> Int64.max_int\n  | UInt64 -> Int64.lognot 0L\n  | Complex64 -> invalid_arg \"Dtype.max_value: complex numbers are not ordered\"\n  | Complex128 -> invalid_arg \"Dtype.max_value: complex numbers are not ordered\"\n  | Bool -> true\n\n(* ───── Conversion ───── *)\n\nlet of_float (type a b) (dtype : (a, b) t) (v : float) : a =\n  match dtype with\n  | Float16 -> v\n  | Float32 -> v\n  | Float64 -> v\n  | BFloat16 -> v\n  | Float8_e4m3 -> v\n  | Float8_e5m2 -> v\n  | Int4 -> int_of_float v\n  | UInt4 ->\n      int_of_float (if v <= 0.0 then 0.0 else if v >= 15.0 then 15.0 else v)\n  | Int8 -> int_of_float v\n  | UInt8 ->\n      int_of_float (if v <= 0.0 then 0.0 else if v >= 255.0 then 255.0 else v)\n  | Int16 -> int_of_float v\n  | UInt16 ->\n      int_of_float\n        (if v <= 0.0 then 0.0 else if v >= 65535.0 then 65535.0 else v)\n  | Int32 -> Int32.of_float v\n  | UInt32 ->\n      Int64.to_int32\n        (Int64.of_float\n           (if v <= 0.0 then 0.0\n            else if v >= 4294967295.0 then 4294967295.0\n            else v))\n  | Int64 -> Int64.of_float v\n  | UInt64 ->\n      let max_u64 = 18446744073709551615.0 in\n      let max_i64 = Int64.to_float Int64.max_int in\n      if v <= 0.0 then 0L\n      else if v >= max_u64 then Int64.lognot 0L\n      else if v <= max_i64 then Int64.of_float v\n      else Int64.of_float (v -. 18446744073709551616.0)\n  | Complex64 -> Complex.{ re = v; im = 0. }\n  | Complex128 -> Complex.{ re = v; im = 0. }\n  | Bool -> v <> 0.0\n\n(* ───── Buffer/Bigarray Conversions ───── *)\n\nlet of_buffer_kind : type a b. 
(a, b) Nx_buffer.kind -> (a, b) t = function\n  | Nx_buffer.Float16 -> Float16\n  | Nx_buffer.Float32 -> Float32\n  | Nx_buffer.Float64 -> Float64\n  | Nx_buffer.Bfloat16 -> BFloat16\n  | Nx_buffer.Float8_e4m3 -> Float8_e4m3\n  | Nx_buffer.Float8_e5m2 -> Float8_e5m2\n  | Nx_buffer.Int4_signed -> Int4\n  | Nx_buffer.Int4_unsigned -> UInt4\n  | Nx_buffer.Int8_signed -> Int8\n  | Nx_buffer.Int8_unsigned -> UInt8\n  | Nx_buffer.Int16_signed -> Int16\n  | Nx_buffer.Int16_unsigned -> UInt16\n  | Nx_buffer.Int32 -> Int32\n  | Nx_buffer.Uint32 -> UInt32\n  | Nx_buffer.Int64 -> Int64\n  | Nx_buffer.Uint64 -> UInt64\n  | Nx_buffer.Complex32 -> Complex64\n  | Nx_buffer.Complex64 -> Complex128\n  | Nx_buffer.Bool -> Bool\n\nlet to_buffer_kind : type a b. (a, b) t -> (a, b) Nx_buffer.kind = function\n  | Float16 -> Nx_buffer.Float16\n  | Float32 -> Nx_buffer.Float32\n  | Float64 -> Nx_buffer.Float64\n  | BFloat16 -> Nx_buffer.Bfloat16\n  | Float8_e4m3 -> Nx_buffer.Float8_e4m3\n  | Float8_e5m2 -> Nx_buffer.Float8_e5m2\n  | Int4 -> Nx_buffer.Int4_signed\n  | UInt4 -> Nx_buffer.Int4_unsigned\n  | Int8 -> Nx_buffer.Int8_signed\n  | UInt8 -> Nx_buffer.Int8_unsigned\n  | Int16 -> Nx_buffer.Int16_signed\n  | UInt16 -> Nx_buffer.Int16_unsigned\n  | Int32 -> Nx_buffer.Int32\n  | UInt32 -> Nx_buffer.Uint32\n  | Int64 -> Nx_buffer.Int64\n  | UInt64 -> Nx_buffer.Uint64\n  | Complex64 -> Nx_buffer.Complex32\n  | Complex128 -> Nx_buffer.Complex64\n  | Bool -> Nx_buffer.Bool\n\nlet of_bigarray_kind : type a b. 
(a, b) Bigarray.kind -> (a, b) t = function\n  | Bigarray.Float16 -> Float16\n  | Bigarray.Float32 -> Float32\n  | Bigarray.Float64 -> Float64\n  | Bigarray.Int8_signed -> Int8\n  | Bigarray.Int8_unsigned -> UInt8\n  | Bigarray.Int16_signed -> Int16\n  | Bigarray.Int16_unsigned -> UInt16\n  | Bigarray.Int32 -> Int32\n  | Bigarray.Int64 -> Int64\n  | Bigarray.Complex32 -> Complex64\n  | Bigarray.Complex64 -> Complex128\n  | _ -> invalid_arg \"Dtype.of_bigarray_kind: unsupported bigarray kind\"\n\nlet to_bigarray_kind : type a b. (a, b) t -> (a, b) Bigarray.kind = function\n  | Float16 -> Bigarray.Float16\n  | Float32 -> Bigarray.Float32\n  | Float64 -> Bigarray.Float64\n  | Int8 -> Bigarray.Int8_signed\n  | UInt8 -> Bigarray.Int8_unsigned\n  | Int16 -> Bigarray.Int16_signed\n  | UInt16 -> Bigarray.Int16_unsigned\n  | Int32 -> Bigarray.Int32\n  | Int64 -> Bigarray.Int64\n  | Complex64 -> Bigarray.Complex32\n  | Complex128 -> Bigarray.Complex64\n  | BFloat16 | Float8_e4m3 | Float8_e5m2 | Int4 | UInt4 | UInt32 | UInt64 | Bool\n    ->\n      invalid_arg\n        \"Dtype.to_bigarray_kind: extended type not supported by Bigarray\"\n\n(* ───── Equality ───── *)\n\nlet equal (type a b c d) (dt1 : (a, b) t) (dt2 : (c, d) t) : bool =\n  match (dt1, dt2) with\n  | Float16, Float16 -> true\n  | Float32, Float32 -> true\n  | Float64, Float64 -> true\n  | BFloat16, BFloat16 -> true\n  | Float8_e4m3, Float8_e4m3 -> true\n  | Float8_e5m2, Float8_e5m2 -> true\n  | Int4, Int4 -> true\n  | UInt4, UInt4 -> true\n  | Int8, Int8 -> true\n  | UInt8, UInt8 -> true\n  | Int16, Int16 -> true\n  | UInt16, UInt16 -> true\n  | Int32, Int32 -> true\n  | UInt32, UInt32 -> true\n  | Int64, Int64 -> true\n  | UInt64, UInt64 -> true\n  | Complex64, Complex64 -> true\n  | Complex128, Complex128 -> true\n  | Bool, Bool -> true\n  | _ -> false\n\nlet equal_witness (type a b c d) (dt1 : (a, b) t) (dt2 : (c, d) t) :\n    ((a, b) t, (c, d) t) Type.eq option =\n  match (dt1, dt2) with\n  | Float16, Float16 
-> Some Type.Equal\n  | Float32, Float32 -> Some Type.Equal\n  | Float64, Float64 -> Some Type.Equal\n  | BFloat16, BFloat16 -> Some Type.Equal\n  | Float8_e4m3, Float8_e4m3 -> Some Type.Equal\n  | Float8_e5m2, Float8_e5m2 -> Some Type.Equal\n  | Int4, Int4 -> Some Type.Equal\n  | UInt4, UInt4 -> Some Type.Equal\n  | Int8, Int8 -> Some Type.Equal\n  | UInt8, UInt8 -> Some Type.Equal\n  | Int16, Int16 -> Some Type.Equal\n  | UInt16, UInt16 -> Some Type.Equal\n  | Int32, Int32 -> Some Type.Equal\n  | UInt32, UInt32 -> Some Type.Equal\n  | Int64, Int64 -> Some Type.Equal\n  | UInt64, UInt64 -> Some Type.Equal\n  | Complex64, Complex64 -> Some Type.Equal\n  | Complex128, Complex128 -> Some Type.Equal\n  | Bool, Bool -> Some Type.Equal\n  | _ -> None\n\n(* ───── Packed ───── *)\n\ntype packed = Pack : ('a, 'b) t -> packed\n\nlet pack (type a b) (dt : (a, b) t) : packed = Pack dt\n\nmodule Packed = struct\n  type t = packed\n\n  let all : t list =\n    [\n      Pack Float16;\n      Pack Float32;\n      Pack Float64;\n      Pack BFloat16;\n      Pack Float8_e4m3;\n      Pack Float8_e5m2;\n      Pack Int4;\n      Pack UInt4;\n      Pack Int8;\n      Pack UInt8;\n      Pack Int16;\n      Pack UInt16;\n      Pack Int32;\n      Pack UInt32;\n      Pack Int64;\n      Pack UInt64;\n      Pack Complex64;\n      Pack Complex128;\n      Pack Bool;\n    ]\n\n  let to_string (Pack dt) = to_string dt\n  let pp fmt t = Format.fprintf fmt \"%s\" (to_string t)\n\n  let of_string (s : string) : t option =\n    List.find_map\n      (fun packed ->\n        if String.equal (to_string packed) s then Some packed else None)\n      all\n\n  let equal (Pack dt1) (Pack dt2) : bool = equal dt1 dt2\n\n  let tag : t -> int = function\n    | Pack Float16 -> 0\n    | Pack Float32 -> 1\n    | Pack Float64 -> 2\n    | Pack BFloat16 -> 3\n    | Pack Float8_e4m3 -> 4\n    | Pack Float8_e5m2 -> 5\n    | Pack Int4 -> 6\n    | Pack UInt4 -> 7\n    | Pack Int8 -> 8\n    | Pack UInt8 -> 9\n    | Pack Int16 -> 
10\n    | Pack UInt16 -> 11\n    | Pack Int32 -> 12\n    | Pack UInt32 -> 13\n    | Pack Int64 -> 14\n    | Pack UInt64 -> 15\n    | Pack Complex64 -> 16\n    | Pack Complex128 -> 17\n    | Pack Bool -> 18\n\n  let compare a b = Int.compare (tag a) (tag b)\n  let hash t = tag t\nend\n\n(* ───── Narrow Integer Wrapping ───── *)\n\nlet wrap_uint4 x = x land 0xF\nlet wrap_uint8 x = x land 0xFF\nlet wrap_uint16 x = x land 0xFFFF\n\nlet wrap_int4 x =\n  let masked = x land 0xF in\n  if masked land 0x8 <> 0 then masked lor lnot 0xF else masked\n\nlet wrap_int8 x =\n  let masked = x land 0xFF in\n  if masked land 0x80 <> 0 then masked lor lnot 0xFF else masked\n\nlet wrap_int16 x =\n  let masked = x land 0xFFFF in\n  if masked land 0x8000 <> 0 then masked lor lnot 0xFFFF else masked\n\n(* ───── Arithmetic Operations ───── *)\n\nlet add (type a b) (dt : (a, b) t) (x : a) (y : a) : a =\n  match dt with\n  | Float16 -> x +. y\n  | Float32 -> x +. y\n  | Float64 -> x +. y\n  | BFloat16 -> x +. y\n  | Float8_e4m3 -> x +. y\n  | Float8_e5m2 -> x +. y\n  | Int4 -> wrap_int4 (x + y)\n  | UInt4 -> wrap_uint4 (x + y)\n  | Int8 -> wrap_int8 (x + y)\n  | UInt8 -> wrap_uint8 (x + y)\n  | Int16 -> wrap_int16 (x + y)\n  | UInt16 -> wrap_uint16 (x + y)\n  | Int32 -> Int32.add x y\n  | UInt32 -> Int32.add x y\n  | Int64 -> Int64.add x y\n  | UInt64 -> Int64.add x y\n  | Complex64 -> Complex.add x y\n  | Complex128 -> Complex.add x y\n  | Bool -> x || y\n\nlet sub (type a b) (dt : (a, b) t) (x : a) (y : a) : a =\n  match dt with\n  | Float16 -> x -. y\n  | Float32 -> x -. y\n  | Float64 -> x -. y\n  | BFloat16 -> x -. y\n  | Float8_e4m3 -> x -. y\n  | Float8_e5m2 -> x -. 
y\n  | Int4 -> wrap_int4 (x - y)\n  | UInt4 -> wrap_uint4 (x - y)\n  | Int8 -> wrap_int8 (x - y)\n  | UInt8 -> wrap_uint8 (x - y)\n  | Int16 -> wrap_int16 (x - y)\n  | UInt16 -> wrap_uint16 (x - y)\n  | Int32 -> Int32.sub x y\n  | UInt32 -> Int32.sub x y\n  | Int64 -> Int64.sub x y\n  | UInt64 -> Int64.sub x y\n  | Complex64 -> Complex.sub x y\n  | Complex128 -> Complex.sub x y\n  | Bool -> invalid_arg \"Dtype.sub: undefined for bool\"\n\nlet mul (type a b) (dt : (a, b) t) (x : a) (y : a) : a =\n  match dt with\n  | Float16 -> x *. y\n  | Float32 -> x *. y\n  | Float64 -> x *. y\n  | BFloat16 -> x *. y\n  | Float8_e4m3 -> x *. y\n  | Float8_e5m2 -> x *. y\n  | Int4 -> wrap_int4 (x * y)\n  | UInt4 -> wrap_uint4 (x * y)\n  | Int8 -> wrap_int8 (x * y)\n  | UInt8 -> wrap_uint8 (x * y)\n  | Int16 -> wrap_int16 (x * y)\n  | UInt16 -> wrap_uint16 (x * y)\n  | Int32 -> Int32.mul x y\n  | UInt32 -> Int32.mul x y\n  | Int64 -> Int64.mul x y\n  | UInt64 -> Int64.mul x y\n  | Complex64 -> Complex.mul x y\n  | Complex128 -> Complex.mul x y\n  | Bool -> x && y\n\nlet uint64_compare a b =\n  Int64.compare (Int64.logxor a Int64.min_int) (Int64.logxor b Int64.min_int)\n\nlet uint32_div x y =\n  let ux = Int64.logand (Int64.of_int32 x) 0xFFFFFFFFL in\n  let uy = Int64.logand (Int64.of_int32 y) 0xFFFFFFFFL in\n  if uy = 0L then raise Division_by_zero;\n  Int32.of_int (Int64.to_int (Int64.div ux uy))\n\nlet uint64_div x y =\n  if y = 0L then raise Division_by_zero;\n  let open Int64 in\n  let rec loop i rem quot =\n    if i < 0 then quot\n    else\n      let bit = logand (shift_right_logical x i) 1L in\n      let rem' = logor (shift_left rem 1) bit in\n      if uint64_compare rem' y >= 0 then\n        loop (i - 1) (sub rem' y) (logor quot (shift_left 1L i))\n      else loop (i - 1) rem' quot\n  in\n  loop 63 0L 0L\n\nlet div (type a b) (dt : (a, b) t) (x : a) (y : a) : a =\n  match dt with\n  | Float16 -> x /. y\n  | Float32 -> x /. y\n  | Float64 -> x /. y\n  | BFloat16 -> x /. 
y\n  | Float8_e4m3 -> x /. y\n  | Float8_e5m2 -> x /. y\n  | Int4 -> wrap_int4 (x / y)\n  | UInt4 -> wrap_uint4 (x / y)\n  | Int8 -> wrap_int8 (x / y)\n  | UInt8 -> wrap_uint8 (x / y)\n  | Int16 -> wrap_int16 (x / y)\n  | UInt16 -> wrap_uint16 (x / y)\n  | Int32 -> Int32.div x y\n  | UInt32 -> uint32_div x y\n  | Int64 -> Int64.div x y\n  | UInt64 -> uint64_div x y\n  | Complex64 -> Complex.div x y\n  | Complex128 -> Complex.div x y\n  | Bool -> invalid_arg \"Dtype.div: undefined for bool\"\n"
  },
  {
    "path": "packages/nx/lib/core/dtype.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Tensor element types.\n\n    A dtype value describes both the OCaml value representation and the\n    underlying buffer element kind used by [nx]. *)\n\n(** {1:elements Element kinds} *)\n\ntype float16_elt = Nx_buffer.float16_elt\n(** The element kind for IEEE 754 binary16 values. *)\n\ntype float32_elt = Nx_buffer.float32_elt\n(** The element kind for IEEE 754 binary32 values. *)\n\ntype float64_elt = Nx_buffer.float64_elt\n(** The element kind for IEEE 754 binary64 values. *)\n\ntype bfloat16_elt = Nx_buffer.bfloat16_elt\n(** The element kind for bfloat16 values. *)\n\ntype float8_e4m3_elt = Nx_buffer.float8_e4m3_elt\n(** The element kind for float8 e4m3 values. *)\n\ntype float8_e5m2_elt = Nx_buffer.float8_e5m2_elt\n(** The element kind for float8 e5m2 values. *)\n\ntype int4_elt = Nx_buffer.int4_signed_elt\n(** The element kind for signed 4-bit integers. *)\n\ntype uint4_elt = Nx_buffer.int4_unsigned_elt\n(** The element kind for unsigned 4-bit integers. *)\n\ntype int8_elt = Nx_buffer.int8_signed_elt\n(** The element kind for signed 8-bit integers. *)\n\ntype uint8_elt = Nx_buffer.int8_unsigned_elt\n(** The element kind for unsigned 8-bit integers. *)\n\ntype int16_elt = Nx_buffer.int16_signed_elt\n(** The element kind for signed 16-bit integers. *)\n\ntype uint16_elt = Nx_buffer.int16_unsigned_elt\n(** The element kind for unsigned 16-bit integers. *)\n\ntype int32_elt = Nx_buffer.int32_elt\n(** The element kind for signed 32-bit integers. *)\n\ntype uint32_elt = Nx_buffer.uint32_elt\n(** The element kind for unsigned 32-bit integers. *)\n\ntype int64_elt = Nx_buffer.int64_elt\n(** The element kind for signed 64-bit integers. 
*)\n\ntype uint64_elt = Nx_buffer.uint64_elt\n(** The element kind for unsigned 64-bit integers. *)\n\ntype complex32_elt = Nx_buffer.complex32_elt\n(** The element kind for complex values with float32 components. *)\n\ntype complex64_elt = Nx_buffer.complex64_elt\n(** The element kind for complex values with float64 components. *)\n\ntype bool_elt = Nx_buffer.bool_elt\n(** The element kind for boolean values. *)\n\n(** {1:types Dtypes} *)\n\n(** The type for dtypes.\n\n    The first parameter is the OCaml value type and the second parameter is the\n    buffer element kind. *)\ntype ('a, 'b) t =\n  | Float16 : (float, float16_elt) t  (** 16-bit float. *)\n  | Float32 : (float, float32_elt) t  (** 32-bit float. *)\n  | Float64 : (float, float64_elt) t  (** 64-bit float. *)\n  | BFloat16 : (float, bfloat16_elt) t  (** bfloat16. *)\n  | Float8_e4m3 : (float, float8_e4m3_elt) t  (** float8 e4m3. *)\n  | Float8_e5m2 : (float, float8_e5m2_elt) t  (** float8 e5m2. *)\n  | Int4 : (int, int4_elt) t  (** Signed 4-bit integer carried in [int]. *)\n  | UInt4 : (int, uint4_elt) t  (** Unsigned 4-bit integer carried in [int]. *)\n  | Int8 : (int, int8_elt) t  (** Signed 8-bit integer carried in [int]. *)\n  | UInt8 : (int, uint8_elt) t  (** Unsigned 8-bit integer carried in [int]. *)\n  | Int16 : (int, int16_elt) t  (** Signed 16-bit integer carried in [int]. *)\n  | UInt16 : (int, uint16_elt) t\n      (** Unsigned 16-bit integer carried in [int]. *)\n  | Int32 : (int32, int32_elt) t  (** Signed 32-bit integer. *)\n  | UInt32 : (int32, uint32_elt) t\n      (** Unsigned 32-bit integer carried in [int32]. *)\n  | Int64 : (int64, int64_elt) t  (** Signed 64-bit integer. *)\n  | UInt64 : (int64, uint64_elt) t\n      (** Unsigned 64-bit integer carried in [int64]. *)\n  | Complex64 : (Complex.t, complex32_elt) t\n      (** Complex values with float32 components. *)\n  | Complex128 : (Complex.t, complex64_elt) t\n      (** Complex values with float64 components. 
*)\n  | Bool : (bool, bool_elt) t  (** Boolean values. *)\n\n(** {1:constructors Constructor values} *)\n\nval float16 : (float, float16_elt) t\n(** [float16] is {!Float16}. *)\n\nval float32 : (float, float32_elt) t\n(** [float32] is {!Float32}. *)\n\nval float64 : (float, float64_elt) t\n(** [float64] is {!Float64}. *)\n\nval bfloat16 : (float, bfloat16_elt) t\n(** [bfloat16] is {!BFloat16}. *)\n\nval float8_e4m3 : (float, float8_e4m3_elt) t\n(** [float8_e4m3] is {!Float8_e4m3}. *)\n\nval float8_e5m2 : (float, float8_e5m2_elt) t\n(** [float8_e5m2] is {!Float8_e5m2}. *)\n\nval int4 : (int, int4_elt) t\n(** [int4] is {!Int4}. *)\n\nval uint4 : (int, uint4_elt) t\n(** [uint4] is {!UInt4}. *)\n\nval int8 : (int, int8_elt) t\n(** [int8] is {!Int8}. *)\n\nval uint8 : (int, uint8_elt) t\n(** [uint8] is {!UInt8}. *)\n\nval int16 : (int, int16_elt) t\n(** [int16] is {!Int16}. *)\n\nval uint16 : (int, uint16_elt) t\n(** [uint16] is {!UInt16}. *)\n\nval int32 : (int32, int32_elt) t\n(** [int32] is {!Int32}. *)\n\nval uint32 : (int32, uint32_elt) t\n(** [uint32] is {!UInt32}. *)\n\nval int64 : (int64, int64_elt) t\n(** [int64] is {!Int64}. *)\n\nval uint64 : (int64, uint64_elt) t\n(** [uint64] is {!UInt64}. *)\n\nval complex64 : (Complex.t, complex32_elt) t\n(** [complex64] is {!Complex64}. *)\n\nval complex128 : (Complex.t, complex64_elt) t\n(** [complex128] is {!Complex128}. *)\n\nval bool : (bool, bool_elt) t\n(** [bool] is {!Bool}. *)\n\n(** {1:preds Predicates and properties} *)\n\nval to_string : ('a, 'b) t -> string\n(** [to_string d] is the stable lowercase name of [d]. *)\n\nval pp : Format.formatter -> ('a, 'b) t -> unit\n(** [pp] formats dtypes with [to_string]. *)\n\nval itemsize : ('a, 'b) t -> int\n(** [itemsize d] is the storage quantum in bytes for [d].\n\n    For [Int4] and [UInt4], this is [1] even though values are 4-bit. Use\n    {!bits} to get logical bit width. *)\n\nval bits : ('a, 'b) t -> int\n(** [bits d] is the logical bit width of elements of [d]. 
*)\n\nval is_float : ('a, 'b) t -> bool\n(** [is_float d] is [true] iff [d] is one of the float dtypes. *)\n\nval is_complex : ('a, 'b) t -> bool\n(** [is_complex d] is [true] iff [d] is one of the complex dtypes. *)\n\nval is_int : ('a, 'b) t -> bool\n(** [is_int d] is [true] iff [d] is an integer dtype, signed or unsigned. *)\n\nval is_uint : ('a, 'b) t -> bool\n(** [is_uint d] is [true] iff [d] is an unsigned integer dtype. *)\n\n(** {1:constants Constants} *)\n\nval zero : ('a, 'b) t -> 'a\n(** [zero d] is the additive identity value for [d]. *)\n\nval one : ('a, 'b) t -> 'a\n(** [one d] is the multiplicative identity value for [d]. *)\n\nval two : ('a, 'b) t -> 'a\n(** [two d] is the value [2] represented in [d].\n\n    For [Bool], [two Bool] is [true]. *)\n\nval minus_one : ('a, 'b) t -> 'a\n(** [minus_one d] is the value [-1] represented in [d].\n\n    For unsigned integer and bool dtypes this is the all-ones bit pattern. *)\n\nval min_value : ('a, 'b) t -> 'a\n(** [min_value d] is the minimum value used by [d].\n\n    For floating dtypes this is [-infinity].\n\n    Raises [Invalid_argument] for complex dtypes. *)\n\nval max_value : ('a, 'b) t -> 'a\n(** [max_value d] is the maximum value used by [d].\n\n    For floating dtypes this is [+infinity].\n\n    Raises [Invalid_argument] for complex dtypes. *)\n\n(** {1:converting Converting} *)\n\nval of_float : ('a, 'b) t -> float -> 'a\n(** [of_float d x] converts [x] to dtype [d].\n\n    Unsigned integer conversions clamp to their representable range. *)\n\nval of_buffer_kind : ('a, 'b) Nx_buffer.kind -> ('a, 'b) t\n(** [of_buffer_kind k] is the dtype corresponding to [k].\n\n    Raises [Invalid_argument] if [k] is unsupported. *)\n\nval to_buffer_kind : ('a, 'b) t -> ('a, 'b) Nx_buffer.kind\n(** [to_buffer_kind d] is the [Nx_buffer] kind corresponding to [d]. 
*)\n\nval of_bigarray_kind : ('a, 'b) Bigarray.kind -> ('a, 'b) t\n(** [of_bigarray_kind k] is the dtype corresponding to [k].\n\n    Raises [Invalid_argument] if [k] is unsupported. *)\n\nval to_bigarray_kind : ('a, 'b) t -> ('a, 'b) Bigarray.kind\n(** [to_bigarray_kind d] is the standard [Bigarray] kind for [d].\n\n    Raises [Invalid_argument] for extended dtypes that standard [Bigarray]\n    cannot represent ([BFloat16], [Float8_e4m3], [Float8_e5m2], [Int4], [UInt4],\n    [UInt32], [UInt64], [Bool]). *)\n\n(** {1:equality Equality} *)\n\nval equal : ('a, 'b) t -> ('c, 'd) t -> bool\n(** [equal d0 d1] is [true] iff [d0] and [d1] denote the same dtype constructor.\n*)\n\nval equal_witness :\n  ('a, 'b) t -> ('c, 'd) t -> (('a, 'b) t, ('c, 'd) t) Type.eq option\n(** [equal_witness d0 d1] is [Some Type.Equal] iff [equal d0 d1] is [true], and\n    [None] otherwise. *)\n\n(** {1:packed Packed dtypes} *)\n\ntype packed =\n  | Pack : ('a, 'b) t -> packed  (** Existential wrapper over dtypes. *)\n\nval pack : ('a, 'b) t -> packed\n(** [pack d] is [Pack d]. *)\n\nmodule Packed : sig\n  (** Operations on [packed]. *)\n\n  type t = packed\n  (** The type for packed dtypes. *)\n\n  val all : t list\n  (** [all] lists all supported dtypes. *)\n\n  val of_string : string -> t option\n  (** [of_string s] is the dtype named [s], if any. *)\n\n  val to_string : t -> string\n  (** [to_string t] is the lowercase name of [t]. *)\n\n  val pp : Format.formatter -> t -> unit\n  (** [pp] formats packed dtypes with [to_string]. *)\n\n  val equal : t -> t -> bool\n  (** [equal d0 d1] is [true] iff [d0] and [d1] are the same dtype. *)\n\n  val compare : t -> t -> int\n  (** [compare] orders dtypes by a stable internal tag. *)\n\n  val hash : t -> int\n  (** [hash t] is a hash derived from [tag]. *)\n\n  val tag : t -> int\n  (** [tag t] is the stable integer tag used by [compare] and [hash]. 
*)\nend\n\n(** {1:ops Scalar operations} *)\n\nval add : ('a, 'b) t -> 'a -> 'a -> 'a\n(** [add d x y] adds [x] and [y] with dtype semantics of [d].\n\n    Narrow integer dtypes wrap to their bit width. For [Bool], this is boolean\n    disjunction. *)\n\nval sub : ('a, 'b) t -> 'a -> 'a -> 'a\n(** [sub d x y] subtracts [y] from [x] with dtype semantics of [d].\n\n    Narrow integer dtypes wrap to their bit width.\n\n    Raises [Invalid_argument] for [Bool]. *)\n\nval mul : ('a, 'b) t -> 'a -> 'a -> 'a\n(** [mul d x y] multiplies [x] and [y] with dtype semantics of [d].\n\n    Narrow integer dtypes wrap to their bit width. For [Bool], this is boolean\n    conjunction. *)\n\nval div : ('a, 'b) t -> 'a -> 'a -> 'a\n(** [div d x y] divides [x] by [y] with dtype semantics of [d].\n\n    Narrow integer dtypes wrap to their bit width.\n\n    Raises [Division_by_zero] for integer division by zero. Raises\n    [Invalid_argument] for [Bool]. *)\n"
  },
  {
    "path": "packages/nx/lib/core/dune",
    "content": "(library\n (name nx_core)\n (public_name nx.core)\n (libraries str nx_buffer)\n (instrumentation\n  (backend landmarks)))\n"
  },
  {
    "path": "packages/nx/lib/core/frontend.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nmodule Make (B : Backend_intf.S) = struct\n  module B = B\n\n  let err op fmt =\n    Printf.ksprintf (fun msg -> invalid_arg (op ^ \": \" ^ msg)) fmt\n\n  (* ───── Core Types ───── *)\n\n  type ('a, 'b) t = ('a, 'b) B.t\n  type context = B.context\n  type float16_elt = Nx_buffer.float16_elt\n  type float32_elt = Nx_buffer.float32_elt\n  type float64_elt = Nx_buffer.float64_elt\n  type bfloat16_elt = Nx_buffer.bfloat16_elt\n  type float8_e4m3_elt = Nx_buffer.float8_e4m3_elt\n  type float8_e5m2_elt = Nx_buffer.float8_e5m2_elt\n  type int4_elt = Nx_buffer.int4_signed_elt\n  type uint4_elt = Nx_buffer.int4_unsigned_elt\n  type int8_elt = Nx_buffer.int8_signed_elt\n  type uint8_elt = Nx_buffer.int8_unsigned_elt\n  type int16_elt = Nx_buffer.int16_signed_elt\n  type uint16_elt = Nx_buffer.int16_unsigned_elt\n  type int32_elt = Nx_buffer.int32_elt\n  type uint32_elt = Nx_buffer.uint32_elt\n  type int64_elt = Nx_buffer.int64_elt\n  type uint64_elt = Nx_buffer.uint64_elt\n  type complex32_elt = Nx_buffer.complex32_elt\n  type complex64_elt = Nx_buffer.complex64_elt\n  type bool_elt = Nx_buffer.bool_elt\n\n  type ('a, 'b) dtype = ('a, 'b) Dtype.t =\n    | Float16 : (float, float16_elt) dtype\n    | Float32 : (float, float32_elt) dtype\n    | Float64 : (float, float64_elt) dtype\n    | BFloat16 : (float, bfloat16_elt) dtype\n    | Float8_e4m3 : (float, float8_e4m3_elt) dtype\n    | Float8_e5m2 : (float, float8_e5m2_elt) dtype\n    | Int4 : (int, int4_elt) dtype\n    | UInt4 : (int, uint4_elt) dtype\n    | Int8 : (int, int8_elt) dtype\n    | UInt8 : (int, uint8_elt) dtype\n    | Int16 : (int, int16_elt) dtype\n    | UInt16 : (int, uint16_elt) dtype\n    | Int32 : (int32, int32_elt) dtype\n    | UInt32 : 
(int32, uint32_elt) dtype\n    | Int64 : (int64, int64_elt) dtype\n    | UInt64 : (int64, uint64_elt) dtype\n    | Complex64 : (Complex.t, complex32_elt) dtype\n    | Complex128 : (Complex.t, complex64_elt) dtype\n    | Bool : (bool, bool_elt) dtype\n\n  type float16_t = (float, float16_elt) t\n  type float32_t = (float, float32_elt) t\n  type float64_t = (float, float64_elt) t\n  type int8_t = (int, int8_elt) t\n  type uint8_t = (int, uint8_elt) t\n  type int16_t = (int, int16_elt) t\n  type uint16_t = (int, uint16_elt) t\n  type int32_t = (int32, int32_elt) t\n  type int64_t = (int64, int64_elt) t\n  type uint32_t = (int32, uint32_elt) t\n  type uint64_t = (int64, uint64_elt) t\n  type complex64_t = (Complex.t, complex32_elt) t\n  type complex128_t = (Complex.t, complex64_elt) t\n  type bool_t = (bool, bool_elt) t\n\n  let float16 = Float16\n  let float32 = Float32\n  let float64 = Float64\n  let bfloat16 = BFloat16\n  let float8_e4m3 = Float8_e4m3\n  let float8_e5m2 = Float8_e5m2\n  let int4 = Int4\n  let uint4 = UInt4\n  let int8 = Int8\n  let uint8 = UInt8\n  let int16 = Int16\n  let uint16 = UInt16\n  let int32 = Int32\n  let uint32 = UInt32\n  let int64 = Int64\n  let uint64 = UInt64\n  let complex64 = Complex64\n  let complex128 = Complex128\n  let bool = Bool\n\n  type index =\n    | I of int\n    | L of int list\n    | R of int * int\n    | Rs of int * int * int\n    | A\n    | M of (bool, bool_elt) t\n    | N\n\n  (* ───── Tensor Properties ───── *)\n\n  let data x = B.to_host x\n  let shape x = View.shape (B.view x)\n  let dtype x = B.dtype x\n  let itemsize x = Dtype.itemsize (B.dtype x)\n\n  let strides x =\n    let view = B.view x in\n    let itemsize = itemsize x in\n    match View.strides_opt view with\n    | Some elem_strides -> Array.map (fun s -> s * itemsize) elem_strides\n    | None ->\n        err \"strides\"\n          \"view has non-materializable layout, call contiguous() to get a \\\n           standard layout\"\n\n  let stride i x =\n    
let view = B.view x in\n    let itemsize = itemsize x in\n    match View.strides_opt view with\n    | Some elem_strides ->\n        let ndim = View.ndim view in\n        let i = if i < 0 then i + ndim else i in\n        if i < 0 || i >= ndim then\n          err \"stride\" \"axis %d out of bounds for %dD tensor\" i ndim\n        else elem_strides.(i) * itemsize\n    | None ->\n        err \"stride\"\n          \"stride for dimension %d, tensor does not have defined strides, call \\\n           contiguous() first or check has_strides()\"\n          i\n\n  let dims x = View.shape (B.view x)\n\n  let dim i x =\n    let shape = View.shape (B.view x) in\n    let ndim = Array.length shape in\n    let i = if i < 0 then i + ndim else i in\n    if i < 0 || i >= ndim then\n      err \"dim\" \"axis %d out of bounds for %dD tensor\" i ndim\n    else shape.(i)\n\n  let ndim x = View.ndim (B.view x)\n  let size x = View.numel (B.view x)\n  let numel x = size x\n  let nbytes x = numel x * itemsize x\n  let offset x = View.offset (B.view x)\n  let is_c_contiguous x = View.is_c_contiguous (B.view x)\n\n  (* ───── Internal Utilities ───── *)\n\n  let array_prod arr = Array.fold_left ( * ) 1 arr\n\n  module IntSet = Set.Make (Int)\n\n  (* 2^shift_val for integer dtypes, used by lshift/rshift. *)\n  let power_of_two : type a b. 
(a, b) Dtype.t -> int -> a =\n   fun dtype shift_val ->\n    if shift_val < 0 then\n      err \"power_of_two\" \"shift_val must be >= 0, got %d\" shift_val;\n    match dtype with\n    | Int8 -> 1 lsl shift_val\n    | UInt8 -> (1 lsl shift_val) land 0xFF\n    | Int16 -> 1 lsl shift_val\n    | UInt16 -> (1 lsl shift_val) land 0xFFFF\n    | Int32 -> Int32.shift_left Int32.one shift_val\n    | UInt32 -> Int32.shift_left Int32.one shift_val\n    | Int64 -> Int64.shift_left Int64.one shift_val\n    | UInt64 -> Int64.shift_left Int64.one shift_val\n    | _ ->\n        err \"power_of_two\" \"dtype %s, not an integer type\"\n          (Dtype.to_string dtype)\n\n  let ensure_float_dtype fname x =\n    if not (Dtype.is_float (dtype x)) then\n      err fname \"dtype %s, expected float type (Float16, Float32, or Float64)\"\n        (Dtype.to_string (dtype x))\n\n  let ensure_int_dtype fname x =\n    if not (Dtype.is_int (dtype x)) then\n      invalid_arg (fname ^ \": dtype must be an integer type\")\n\n  let resolve_axis ?ndim_opt x (axis_opt : int option) =\n    let ndim = match ndim_opt with Some n -> n | None -> ndim x in\n    match axis_opt with\n    | None -> Array.init ndim Fun.id\n    | Some a ->\n        let resolved_a = if a < 0 then a + ndim else a in\n        [| resolved_a |]\n\n  let resolve_single_axis ?ndim_opt x axis : int =\n    let ndim = match ndim_opt with Some n -> n | None -> ndim x in\n    if axis < 0 then axis + ndim else axis\n\n  (* Normalize negative axes, validate bounds, sort, and deduplicate. *)\n  let normalize_and_dedup_axes ~op ndim axes =\n    let normalized =\n      List.map\n        (fun ax ->\n          let axis = if ax < 0 then ndim + ax else ax in\n          if axis < 0 || axis >= ndim then\n            err op \"axis %d out of bounds for %dD tensor\" ax ndim;\n          axis)\n        axes\n    in\n    List.sort_uniq compare normalized\n\n  (* Count elements across reduction axes. 
*)\n  let reduction_element_count input_shape ?axes () =\n    let rank = Array.length input_shape in\n    let axes_arr =\n      match axes with\n      | None -> Array.init rank Fun.id\n      | Some ax_list ->\n          Array.of_list\n            (List.map (fun ax -> if ax < 0 then ax + rank else ax) ax_list)\n    in\n    if Array.length axes_arr = 0 then 1\n    else array_prod (Array.map (fun ax -> input_shape.(ax)) axes_arr)\n\n  (* Placeholder for future [?out] support; currently returns [result]\n     unchanged. *)\n  let copy_to_out result = result\n\n  (* ───── Shape Manipulation Helpers ───── *)\n\n  let reshape shape_spec x =\n    let current_shape = shape x in\n    (* Resolve -1 dimensions *)\n    let infer_count = ref 0 in\n    Array.iter (fun d -> if d = -1 then incr infer_count) shape_spec;\n    if !infer_count > 1 then\n      invalid_arg\n        \"reshape: shape specification, multiple -1 dimensions, can only \\\n         specify one unknown dimension\";\n    let target_shape =\n      if !infer_count = 0 then shape_spec\n      else\n        let old_numel = array_prod current_shape in\n        let known_numel = ref 1 in\n        Array.iter\n          (fun d -> if d <> -1 then known_numel := !known_numel * d)\n          shape_spec;\n        if !known_numel = 0 || old_numel mod !known_numel <> 0 then\n          err \"reshape\" \"cannot infer dimension: %d elements into shape %s\"\n            old_numel\n            (Shape.to_string shape_spec);\n        let inferred = old_numel / !known_numel in\n        Array.map (fun d -> if d = -1 then inferred else d) shape_spec\n    in\n    Array.iter\n      (fun d ->\n        if d < 0 then err \"reshape\" \"shape specification, dimension %d < -1\" d)\n      target_shape;\n    if current_shape = target_shape then x else B.reshape x target_shape\n\n  let broadcast_shapes shape_a shape_b =\n    let rank_a = Array.length shape_a in\n    let rank_b = Array.length shape_b in\n    let rank_out = max rank_a rank_b in\n    let result = 
Array.make rank_out 1 in\n    for i = 0 to rank_out - 1 do\n      let idx_a = rank_a - rank_out + i in\n      let idx_b = rank_b - rank_out + i in\n      let dim_a = if idx_a >= 0 then shape_a.(idx_a) else 1 in\n      let dim_b = if idx_b >= 0 then shape_b.(idx_b) else 1 in\n      result.(i) <-\n        (if dim_a = dim_b then dim_a\n         else if dim_a = 1 then dim_b\n         else if dim_b = 1 then dim_a\n         else\n           err \"broadcast\"\n             \"cannot broadcast %s with %s (dim %d: %d≠%d)\"\n             (Shape.to_string shape_a) (Shape.to_string shape_b) i dim_a dim_b)\n    done;\n    result\n\n  let broadcast_to new_shape x =\n    Array.iter\n      (fun dim ->\n        if dim < 0 then err \"broadcast_to\" \"target shape, dimension %d < 0\" dim)\n      new_shape;\n    let current_shape = shape x in\n    if current_shape = new_shape then x\n    else\n      let rank_current = Array.length current_shape in\n      let rank_target = Array.length new_shape in\n      if rank_current > rank_target then\n        err \"broadcast_to\"\n          \"rank mismatch: source rank %d exceeds target rank %d, target shape \\\n           must have at least as many dimensions as source\"\n          rank_current rank_target\n      else\n        let pad_count = rank_target - rank_current in\n        let padded_shape =\n          if pad_count <= 0 then current_shape\n          else\n            let arr = Array.make rank_target 1 in\n            Array.blit current_shape 0 arr pad_count rank_current;\n            arr\n        in\n        for i = 0 to rank_target - 1 do\n          let curr_dim = padded_shape.(i) in\n          let target_dim = new_shape.(i) in\n          if curr_dim <> target_dim && curr_dim <> 1 then\n            err \"broadcast_to\"\n              \"cannot broadcast %s to %s (dim %d: %d≠%d)\"\n              (Shape.to_string padded_shape)\n              (Shape.to_string new_shape)\n              i curr_dim target_dim\n       
 done;\n        let x_aligned =\n          if pad_count <= 0 then x else B.reshape x padded_shape\n        in\n        if shape x_aligned = new_shape then x_aligned\n        else B.expand x_aligned new_shape\n\n  let broadcasted ?(reverse = false) x y =\n    let a, b = if reverse then (y, x) else (x, y) in\n    let broadcast_shape = broadcast_shapes (shape a) (shape b) in\n    (broadcast_to broadcast_shape a, broadcast_to broadcast_shape b)\n\n  (* Like [broadcast_to] but [-1] keeps the original dimension. *)\n  let expand shape_spec x =\n    let current_shape = shape x in\n    let rank_current = Array.length current_shape in\n    let rank_spec = Array.length shape_spec in\n    let rank_new = max rank_current rank_spec in\n    let current_aligned =\n      if rank_current = rank_new then current_shape\n      else\n        let arr = Array.make rank_new 1 in\n        Array.blit current_shape 0 arr (rank_new - rank_current) rank_current;\n        arr\n    in\n    let target_shape =\n      Array.init rank_new (fun i ->\n          let spec_idx = i - (rank_new - rank_spec) in\n          let spec_dim = if spec_idx < 0 then -1 else shape_spec.(spec_idx) in\n          if spec_dim = -1 then current_aligned.(i)\n          else if spec_dim < -1 then\n            err \"expand\" \"dimension %d, negative size %d\" i spec_dim\n          else spec_dim)\n    in\n    broadcast_to target_shape x\n\n  (* ───── Type Conversion and Tensor Creation ───── *)\n\n  let cast (type a b c d) (dt : (c, d) Dtype.t) (x : (a, b) t) : (c, d) t =\n    match Dtype.equal_witness (dtype x) dt with\n    | Some Equal -> B.copy x\n    | None -> B.cast ~dtype:dt x\n\n  let astype dt x = cast dt x\n  let contiguous x = B.contiguous x\n  let copy x = B.copy x\n\n  let blit src dst =\n    let ss = shape src and ds = shape dst in\n    if ss <> ds then\n      err \"blit\"\n        \"shape mismatch %s vs %s, source and destination must have identical \\\n         shapes\"\n        (Shape.to_string ss) 
(Shape.to_string ds);\n    B.assign dst src\n\n  let create ctx dtype shape arr =\n    let n = Array.fold_left ( * ) 1 shape in\n    if Array.length arr <> n then\n      err \"create\" \"array size, got %d elements, expected %d\" (Array.length arr)\n        n;\n    let kind = Dtype.to_buffer_kind dtype in\n    let bigarray = Nx_buffer.create kind n in\n    for i = 0 to n - 1 do\n      Nx_buffer.unsafe_set bigarray i arr.(i)\n    done;\n    let tensor_1d = B.from_host ctx bigarray in\n    if Array.length shape = 1 && shape.(0) = n then tensor_1d\n    else B.reshape tensor_1d shape\n\n  let init ctx dtype shape f =\n    let size = Array.fold_left ( * ) 1 shape in\n    let arr = Array.init size (fun i -> f (Shape.unravel_index i shape)) in\n    create ctx dtype shape arr\n\n  let scalar ctx dt value = B.full ctx dt [||] value\n  let scalar_like x_ref value = scalar (B.context x_ref) (B.dtype x_ref) value\n\n  let fill value x =\n    let copied = B.copy x in\n    B.assign copied (broadcast_to (shape copied) (scalar_like copied value));\n    copied\n\n  let empty ctx dtype shape_arr = B.buffer ctx dtype shape_arr\n  let zeros ctx dtype shape_arr = B.full ctx dtype shape_arr (Dtype.zero dtype)\n  let ones ctx dtype shape_arr = B.full ctx dtype shape_arr (Dtype.one dtype)\n\n  let full ctx dt target_shape fill_value =\n    B.full ctx dt target_shape fill_value\n\n  let create_like x_ref fill_fn =\n    fill_fn (B.context x_ref) (B.dtype x_ref) (shape x_ref)\n\n  let empty_like x_ref = create_like x_ref empty\n\n  let full_like x_ref fill_value =\n    create_like x_ref (fun ctx dt sh -> full ctx dt sh fill_value)\n\n  let zeros_like x = full_like x (Dtype.zero (B.dtype x))\n  let ones_like x = full_like x (Dtype.one (B.dtype x))\n\n  let to_buffer x =\n    let t =\n      let t = if is_c_contiguous x && offset x = 0 then x else contiguous x in\n      let buffer = data t in\n      if Nx_buffer.length buffer = numel t then t else copy t\n    in\n    data t\n\n  let to_bigarray 
x =\n    let buf = to_buffer x in\n    let _ = Dtype.to_bigarray_kind (B.dtype x) in\n    let ga = Nx_buffer.to_genarray buf (shape x) in\n    (Obj.magic ga : ('a, 'b, Bigarray.c_layout) Bigarray.Genarray.t)\n\n  let of_buffer ctx ~shape buf = reshape shape (B.from_host ctx buf)\n\n  let of_bigarray ctx ba =\n    let ga_ext : ('a, 'b, Bigarray.c_layout) Bigarray.Genarray.t =\n      Obj.magic ba\n    in\n    of_buffer ctx\n      ~shape:(Bigarray.Genarray.dims ga_ext)\n      (Nx_buffer.of_genarray ga_ext)\n\n  let to_array x =\n    let ba = data (contiguous x) in\n    let n = numel x in\n    Array.init n (fun i -> Nx_buffer.get ba i)\n\n  (* ───── Element-wise Binary Operations ───── *)\n\n  let binop op a b =\n    let a', b' = broadcasted a b in\n    op a' b'\n\n  let cmpop op a b =\n    let a', b' = broadcasted a b in\n    op a' b'\n\n  let add a b = binop B.add a b\n  let add_s t s = add t (scalar_like t s)\n  let radd_s s t = add (scalar_like t s) t\n  let sub a b = binop B.sub a b\n  let sub_s t s = sub t (scalar_like t s)\n  let rsub_s s t = sub (scalar_like t s) t\n  let mul a b = binop B.mul a b\n  let mul_s t s = mul t (scalar_like t s)\n  let rmul_s s t = mul (scalar_like t s) t\n  let div a b = binop B.div a b\n  let div_s t s = div t (scalar_like t s)\n  let rdiv_s s t = div (scalar_like t s) t\n  let pow a b = binop B.pow a b\n  let pow_s t s = pow t (scalar_like t s)\n  let rpow_s s t = pow (scalar_like t s) t\n  let maximum a b = binop B.max a b\n  let maximum_s t s = maximum t (scalar_like t s)\n  let rmaximum_s s t = maximum (scalar_like t s) t\n  let minimum a b = binop B.min a b\n  let minimum_s t s = minimum t (scalar_like t s)\n  let rminimum_s s t = minimum (scalar_like t s) t\n  let mod_ a b = binop B.mod_ a b\n  let mod_s t s = mod_ t (scalar_like t s)\n  let rmod_s s t = mod_ (scalar_like t s) t\n  let bitwise_xor a b = binop B.xor a b\n  let bitwise_or a b = binop B.or_ a b\n  let bitwise_and a b = binop B.and_ a b\n\n  (* ───── Logical and 
Comparison Operations ───── *)\n\n  let logical_and a b = binop B.and_ a b\n  let logical_or a b = binop B.or_ a b\n  let logical_xor a b = binop B.xor a b\n\n  let logical_not (type a b) (x : (a, b) t) : (a, b) t =\n    let dt = dtype x in\n    let one = full (B.context x) dt (shape x) (Dtype.one dt) in\n    match dt with\n    | Dtype.UInt8 | Dtype.Bool | Dtype.UInt4 -> binop B.xor x one\n    | _ -> sub one x\n\n  let cmpeq a b = cmpop B.cmpeq a b\n  let cmpne a b = cmpop B.cmpne a b\n  let cmplt a b = cmpop B.cmplt a b\n  let cmple a b = cmpop B.cmple a b\n  let cmpgt a b = cmplt b a\n  let cmpge a b = cmple b a\n  let less = cmplt\n  let less_equal = cmple\n  let greater = cmpgt\n  let greater_equal = cmpge\n  let equal = cmpeq\n  let not_equal = cmpne\n  let equal_s a s = equal a (scalar_like a s)\n  let not_equal_s a s = not_equal a (scalar_like a s)\n  let less_s a s = less a (scalar_like a s)\n  let greater_s a s = greater a (scalar_like a s)\n  let less_equal_s a s = less_equal a (scalar_like a s)\n  let greater_equal_s a s = greater_equal a (scalar_like a s)\n\n  (* ───── Element-wise Unary Operations ───── *)\n\n  let unaryop op x = op x\n  let neg x = unaryop B.neg x\n\n  let bitwise_not x =\n    let dt = dtype x in\n    binop B.xor x\n      (broadcast_to (shape x)\n         (B.full (B.context x) dt [||] (Dtype.minus_one dt)))\n\n  let invert x = bitwise_not x\n  let sin x = unaryop B.sin x\n  let cos x = unaryop B.cos x\n  let sqrt x = unaryop B.sqrt x\n  let recip x = unaryop B.recip x\n  let log x = unaryop B.log x\n  let exp x = unaryop B.exp x\n  let abs x = unaryop B.abs x\n\n  let log2 x =\n    mul (log x)\n      (broadcast_to (shape x)\n         (scalar (B.context x) (dtype x)\n            (Dtype.of_float (dtype x) (1.0 /. 
Stdlib.log 2.0))))\n\n  let exp2 x =\n    exp\n      (mul x\n         (broadcast_to (shape x)\n            (scalar (B.context x) (dtype x)\n               (Dtype.of_float (dtype x) (Stdlib.log 2.0)))))\n\n  let tan x = unaryop B.tan x\n  let square x = mul x x\n  let sign x = unaryop B.sign x\n  let relu x = maximum x (zeros_like x)\n\n  let sigmoid x =\n    let dt = dtype x in\n    let neg_one_over_log2 =\n      B.full (B.context x) dt [||] (Dtype.of_float dt (-1.0 /. Stdlib.log 2.0))\n    in\n    recip (add (ones_like x) (exp2 (mul x neg_one_over_log2)))\n\n  let rsqrt x = recip (sqrt x)\n  let asin x = unaryop B.asin x\n  let acos x = unaryop B.acos x\n  let atan x = unaryop B.atan x\n  let sinh x = unaryop B.sinh x\n  let cosh x = unaryop B.cosh x\n  let tanh x = unaryop B.tanh x\n\n  let asinh x =\n    let dt = dtype x in\n    let one_x = full (B.context x) dt (shape x) (Dtype.one dt) in\n    log (add x (sqrt (add (square x) one_x)))\n\n  let acosh x =\n    let dt = dtype x in\n    let one_x = full (B.context x) dt (shape x) (Dtype.one dt) in\n    log (add x (sqrt (sub (square x) one_x)))\n\n  let atanh x =\n    let dt = dtype x in\n    let one_x = full (B.context x) dt (shape x) (Dtype.one dt) in\n    let two_x = full (B.context x) dt (shape x) (Dtype.two dt) in\n    div (log (div (add one_x x) (sub one_x x))) two_x\n\n  let trunc x = unaryop B.trunc x\n  let ceil x = unaryop B.ceil x\n  let floor x = unaryop B.floor x\n  let round x = unaryop B.round x\n\n  let isinf x =\n    if not (Dtype.is_float (dtype x)) then\n      copy_to_out (zeros (B.context x) Dtype.bool (shape x))\n    else\n      let dt = dtype x in\n      let pos_inf =\n        broadcast_to (shape x)\n          (B.full (B.context x) dt [||] (Dtype.of_float dt Float.infinity))\n      in\n      let neg_inf =\n        broadcast_to (shape x)\n          (B.full (B.context x) dt [||] (Dtype.of_float dt Float.neg_infinity))\n      in\n      logical_or (cmpeq x pos_inf) (cmpeq x neg_inf)\n\n  let isnan 
x =\n    if not (Dtype.is_float (dtype x)) then\n      copy_to_out (zeros (B.context x) Dtype.bool (shape x))\n    else cmpne x x\n\n  let isfinite x =\n    if not (Dtype.is_float (dtype x)) then\n      copy_to_out (ones (B.context x) Dtype.bool (shape x))\n    else logical_not (logical_or (isinf x) (isnan x))\n\n  let lerp start_tensor end_tensor weight =\n    add start_tensor (mul (sub end_tensor start_tensor) weight)\n\n  let lerp_scalar_weight start_tensor end_tensor weight_val =\n    lerp start_tensor end_tensor\n      (full (B.context start_tensor) (dtype start_tensor) (shape start_tensor)\n         weight_val)\n\n  let shift_op ~op ~apply x shift_val =\n    let dt = dtype x in\n    if not (Dtype.is_int dt) then\n      err op \"dtype %s, expected integer type\" (Dtype.to_string dt);\n    if shift_val < 0 then err op \"shift_val must be >= 0, got %d\" shift_val;\n    if shift_val = 0 then copy_to_out x\n    else\n      apply x\n        (broadcast_to (shape x)\n           (B.full (B.context x) dt [||] (power_of_two dt shift_val)))\n\n  let lshift x shift_val = shift_op ~op:\"lshift\" ~apply:mul x shift_val\n\n  let rshift x shift_val =\n    shift_op ~op:\"rshift\" ~apply:(fun a b -> binop B.div a b) x shift_val\n\n  let clamp ?min ?max x =\n    let x =\n      match min with None -> x | Some min_v -> maximum x (full_like x min_v)\n    in\n    match max with\n    | None -> copy_to_out x\n    | Some max_v -> minimum x (full_like x max_v)\n\n  let clip = clamp\n\n  (* ───── Ternary Operations ───── *)\n\n  let where cond if_true if_false =\n    let target = Shape.broadcast (shape if_true) (shape if_false) in\n    let target = Shape.broadcast target (shape cond) in\n    let cond_b = broadcast_to target cond in\n    let if_true_b = broadcast_to target if_true in\n    let if_false_b = broadcast_to target if_false in\n    B.where cond_b if_true_b if_false_b\n\n  (* ───── Binary Mathematical Functions ───── *)\n\n  let atan2 y x = binop B.atan2 y x\n\n  (* sqrt(x² + y²) 
with overflow protection via max * sqrt(1 + (min/max)²) *)\n  let hypot x y =\n    let x', y' = broadcasted x y in\n    let x_abs = abs x' in\n    let y_abs = abs y' in\n    let max_val = maximum x_abs y_abs in\n    let min_val = minimum x_abs y_abs in\n    let both_zero =\n      logical_and\n        (cmpeq x_abs (zeros_like x_abs))\n        (cmpeq y_abs (zeros_like y_abs))\n    in\n    let ratio = where both_zero (zeros_like min_val) (div min_val max_val) in\n    let result = mul max_val (sqrt (add (ones_like ratio) (square ratio))) in\n    where both_zero (zeros_like result) result\n\n  (* ───── Reduction Operations ───── *)\n\n  let reduce_op backend_op ?axes ?(keepdims = false) x =\n    let input_shape = shape x in\n    let rank = Array.length input_shape in\n    let axes_to_reduce =\n      match axes with\n      | None -> Array.init rank Fun.id\n      | Some ax_list ->\n          Array.of_list\n            (List.map (fun ax -> if ax < 0 then ax + rank else ax) ax_list)\n    in\n    Array.iter\n      (fun ax ->\n        if ax < 0 || ax >= rank then\n          err \"reduce\" \"axis %d out of bounds for %dD tensor\" ax rank)\n      axes_to_reduce;\n    backend_op ~axes:axes_to_reduce ~keepdims x\n\n  let sum ?axes ?(keepdims = false) x = reduce_op B.reduce_sum ?axes ~keepdims x\n  let max ?axes ?(keepdims = false) x = reduce_op B.reduce_max ?axes ~keepdims x\n  let min ?axes ?(keepdims = false) x = reduce_op B.reduce_min ?axes ~keepdims x\n\n  let prod ?axes ?(keepdims = false) x =\n    reduce_op B.reduce_prod ?axes ~keepdims x\n\n  let associative_scan ~axis op x =\n    let x_shape = shape x in\n    let rank = Array.length x_shape in\n    if rank = 0 then\n      let a = if axis < 0 then axis + 1 else axis in\n      if a = 0 then x\n      else\n        err \"associative_scan\"\n          \"axis %d out of bounds for rank 0 tensor (only axis 0 valid)\" axis\n    else\n      let a = if axis < 0 then axis + rank else axis in\n      if a < 0 || a >= rank then\n        
err \"associative_scan\" \"axis %d out of bounds for %dD tensor\" axis rank\n      else B.associative_scan ~axis:a ~op x\n\n  let cumulative_scan ?axis op x =\n    let orig_shape = shape x in\n    match axis with\n    | Some axis -> associative_scan ~axis op x\n    | None ->\n        let flat = reshape [| array_prod orig_shape |] x in\n        let scanned = associative_scan ~axis:0 op flat in\n        if Array.length orig_shape = 0 then reshape [||] scanned\n        else reshape orig_shape scanned\n\n  let cumsum ?axis x = cumulative_scan ?axis `Sum x\n  let cumprod ?axis x = cumulative_scan ?axis `Prod x\n  let cummax ?axis x = cumulative_scan ?axis `Max x\n  let cummin ?axis x = cumulative_scan ?axis `Min x\n\n  let mean ?axes ?(keepdims = false) x =\n    let dt = B.dtype x in\n    let s = sum ?axes ~keepdims x in\n    let n = reduction_element_count (shape x) ?axes () in\n    let divisor =\n      broadcast_to (shape s)\n        (scalar (B.context x) dt\n           (Dtype.of_float dt (float_of_int (Stdlib.max 1 n))))\n    in\n    div s divisor\n\n  let var ?axes ?(keepdims = false) ?(ddof = 0) x =\n    let dt = B.dtype x in\n    let mean_x = mean ?axes ~keepdims:true x in\n    let sum_sq = sum ?axes ~keepdims (square (sub x mean_x)) in\n    let n = reduction_element_count (shape x) ?axes () in\n    let n_corr = float_of_int (Stdlib.max 0 (n - ddof)) in\n    let divisor =\n      broadcast_to (shape sum_sq)\n        (scalar (B.context x) dt (Dtype.of_float dt n_corr))\n    in\n    div sum_sq divisor\n\n  let std ?axes ?(keepdims = false) ?(ddof = 0) x =\n    sqrt (var ?axes ~keepdims ~ddof x)\n\n  let all ?axes ?(keepdims = false) x =\n    let bool_t = cmpne x (full_like x (Dtype.zero (dtype x))) in\n    prod ?axes ~keepdims bool_t\n\n  let any ?axes ?(keepdims = false) x =\n    let bool_t = cmpne x (full_like x (Dtype.zero (dtype x))) in\n    max ?axes ~keepdims bool_t\n\n  let array_equal x y =\n    let can_broadcast =\n      try\n        ignore (Shape.broadcast 
(shape x) (shape y));\n        true\n      with _ -> false\n    in\n    if not can_broadcast then zeros (B.context x) Dtype.bool [||]\n    else all (equal x y)\n\n  (* ───── Shape Manipulation ───── *)\n\n  let pad padding_config fill_value x =\n    Array.iter\n      (fun (before, after) ->\n        if before < 0 || after < 0 then\n          invalid_arg\n            \"pad: padding values, negative values not allowed, use shrink or \\\n             slice to remove elements\")\n      padding_config;\n    B.pad x padding_config fill_value\n\n  let shrink shrink_args x = B.shrink x shrink_args\n\n  let flatten ?(start_dim = 0) ?(end_dim = -1) x =\n    let sh = shape x in\n    let r = Array.length sh in\n    let s = if start_dim < 0 then start_dim + r else start_dim in\n    let e = if end_dim < 0 then end_dim + r else end_dim in\n    if\n      not\n        ((s >= 0 && s < r && e >= 0 && e < r)\n        || (r = 0 && (s = 0 || start_dim = 0) && (e = -1 || end_dim = -1)))\n    then\n      err \"flatten\" \"start_dim %d or end_dim %d, out of bounds for rank %d\"\n        start_dim end_dim r;\n    (* For rank 0, [s > e] always holds ([s = 0], [e = -1]), so only check\n       ordering for positive ranks. *)\n    if r > 0 && s > e then\n      invalid_arg \"flatten: dimensions, start_dim must be <= end_dim\";\n    if r = 0 then reshape [| 1 |] x\n    else if s = 0 && e = r - 1 then reshape [| array_prod sh |] x\n    else\n      let pre = Array.to_list (Array.sub sh 0 s) in\n      let mid = array_prod (Array.sub sh s (e - s + 1)) in\n      let post = Array.to_list (Array.sub sh (e + 1) (r - (e + 1))) in\n      reshape (Array.of_list (pre @ [ mid ] @ post)) x\n\n  let unflatten dim sizes x =\n    let dim = resolve_single_axis x dim in\n    let current_shape = shape x in\n    let dim_size = current_shape.(dim) in\n    let sizes = Array.copy sizes in\n    let neg_one_count =\n      Array.fold_left (fun acc s -> if s = -1 then acc + 1 else acc) 0 sizes\n    in\n    if neg_one_count > 1 then\n      invalid_arg\n        \"unflatten: sizes, can only specify one unknown dimension (using -1)\";\n    if neg_one_count = 1 then begin\n      let known_product =\n        Array.fold_left (fun acc s -> if s = -1 then acc else acc * s) 1 sizes\n      in\n      if known_product = 0 || dim_size mod known_product <> 0 then\n        err \"unflatten\"\n          \"cannot infer dimension from total size %d to known product %d, %d \\\n           not divisible by %d, ensure total size is divisible by product of \\\n           known dimensions\"\n          dim_size known_product dim_size known_product;\n      let inferred = dim_size / known_product in\n      Array.iteri (fun i s -> if s = -1 then sizes.(i) <- inferred) sizes\n    end;\n    let sizes_product = Array.fold_left ( * ) 1 sizes in\n    if sizes_product <> dim_size then\n      err \"unflatten\" \"sizes, product %d does not match dimension size %d\"\n        sizes_product dim_size;\n    reshape\n      (Array.concat\n         [\n           Array.sub current_shape 0 dim;\n           sizes;\n           Array.sub current_shape (dim + 1)\n             (Array.length current_shape - dim - 1);\n         ])\n      x\n\n  let ravel x = flatten x\n\n  let squeeze ?axes x =\n    let sh = shape x in\n    let r = Array.length sh in\n    let reshape_or_id new_sh =\n      if Array.length new_sh = 0 && r > 0 then reshape [||] x\n      else if Array.length new_sh = 0 then x\n      else reshape new_sh x\n    in\n    match axes with\n    | None ->\n        reshape_or_id\n          (Array.of_list (List.filter (( <> ) 1) (Array.to_list sh)))\n    | Some axes_list ->\n        if r = 0 then x\n        else\n          let normalized =\n            List.map (fun ax -> if ax < 0 then ax + r else ax) axes_list\n          in\n          let seen = Array.make r false in\n          List.iter\n            (fun ax ->\n              if ax < 0 || ax >= r then\n                err \"squeeze\" \"axis %d out of bounds for %dD tensor\" ax r;\n              if seen.(ax) then err \"squeeze\" \"axis %d, duplicate axis\" ax;\n              seen.(ax) <- true)\n     
       normalized;\n          List.iter\n            (fun ax ->\n              if sh.(ax) <> 1 then\n                err \"squeeze\"\n                  \"cannot remove dimension at axis %d, size %d≠1\" ax sh.(ax))\n            normalized;\n          let axes_set =\n            List.fold_left (fun s ax -> IntSet.add ax s) IntSet.empty normalized\n          in\n          reshape_or_id\n            (Array.of_list\n               (List.filteri\n                  (fun i _ -> not (IntSet.mem i axes_set))\n                  (Array.to_list sh)))\n\n  let unsqueeze ?axes x =\n    let sh = shape x in\n    let r = Array.length sh in\n    let axes_list =\n      match axes with\n      | None -> invalid_arg \"unsqueeze: axes must be specified\"\n      | Some lst -> lst\n    in\n    if List.length axes_list = 0 then x\n    else\n      let output_rank = r + List.length axes_list in\n      let normalized =\n        List.map (fun ax -> if ax < 0 then ax + output_rank else ax) axes_list\n      in\n      let seen = Array.make output_rank false in\n      List.iter\n        (fun ax ->\n          if ax < 0 || ax >= output_rank then\n            err \"unsqueeze\"\n              \"axis %d, out of bounds for output rank %d, valid range is [%d, \\\n               %d)\"\n              ax output_rank (-output_rank) output_rank;\n          if seen.(ax) then err \"unsqueeze\" \"axis %d, duplicate axis\" ax;\n          seen.(ax) <- true)\n        normalized;\n      let axes_set =\n        List.fold_left (fun s ax -> IntSet.add ax s) IntSet.empty normalized\n      in\n      let new_shape = ref [] in\n      let input_idx = ref 0 in\n      for output_idx = 0 to output_rank - 1 do\n        if IntSet.mem output_idx axes_set then new_shape := 1 :: !new_shape\n        else if !input_idx < r then begin\n          new_shape := sh.(!input_idx) :: !new_shape;\n          incr input_idx\n        end\n      done;\n      reshape (Array.of_list (List.rev !new_shape)) x\n\n  
let squeeze_axis axis x = squeeze ~axes:[ axis ] x\n  let unsqueeze_axis axis x = unsqueeze ~axes:[ axis ] x\n  let expand_dims axes x = unsqueeze ~axes x\n\n  let transpose ?axes x =\n    let r = ndim x in\n    let resolved =\n      match axes with\n      | None -> Array.init r (fun i -> r - 1 - i)\n      | Some ax_list ->\n          if List.length ax_list <> r then\n            err \"transpose\"\n              \"axes (length %d), does not match tensor rank %d, provide \\\n               exactly one axis per dimension\"\n              (List.length ax_list) r;\n          let seen = Array.make r false in\n          List.iter\n            (fun ax_val ->\n              let ax = if ax_val < 0 then ax_val + r else ax_val in\n              if ax < 0 || ax >= r then\n                err \"transpose\" \"axis %d out of bounds for %dD tensor\" ax_val r;\n              if seen.(ax) then err \"transpose\" \"axis %d, repeated\" ax_val;\n              seen.(ax) <- true)\n            ax_list;\n          if not (Array.for_all Fun.id seen) then\n            invalid_arg \"transpose: axes do not form a permutation\";\n          Array.of_list (List.map (fun v -> if v < 0 then v + r else v) ax_list)\n    in\n    B.permute x resolved\n\n  let flip ?axes x =\n    let r = ndim x in\n    let flip_bools = Array.make r false in\n    (match axes with\n    | None -> Array.fill flip_bools 0 r true\n    | Some ax_list ->\n        List.iter\n          (fun ax_val ->\n            let ax = if ax_val < 0 then ax_val + r else ax_val in\n            if ax < 0 || ax >= r then\n              err \"flip\" \"axis %d out of bounds for %dD tensor\" ax_val r;\n            flip_bools.(ax) <- true)\n          ax_list);\n    B.flip x flip_bools\n\n  let moveaxis src dst x =\n    let r = ndim x in\n    let s = if src < 0 then src + r else src in\n    let d = if dst < 0 then dst + r else dst in\n    if s < 0 || s >= r || d < 0 || d >= r then\n      err \"moveaxis\" \"source %d or destination %d, 
out of bounds for shape %s\"\n        src dst\n        (Shape.to_string (shape x));\n    if s = d then x\n    else\n      let axes = Array.to_list (Array.init r Fun.id) in\n      let without = List.filter (( <> ) s) axes in\n      let rec insert_at idx item = function\n        | [] -> [ item ]\n        | hd :: tl ->\n            if idx = 0 then item :: hd :: tl\n            else hd :: insert_at (idx - 1) item tl\n      in\n      B.permute x (Array.of_list (insert_at d s without))\n\n  let swapaxes axis1 axis2 x =\n    let r = ndim x in\n    let a1 = if axis1 < 0 then axis1 + r else axis1 in\n    let a2 = if axis2 < 0 then axis2 + r else axis2 in\n    if a1 < 0 || a1 >= r || a2 < 0 || a2 >= r then\n      err \"swapaxes\" \"axes (%d, %d), out of bounds for shape %s\" axis1 axis2\n        (Shape.to_string (shape x));\n    if a1 = a2 then x\n    else\n      let axes = Array.init r Fun.id in\n      axes.(a1) <- a2;\n      axes.(a2) <- a1;\n      B.permute x axes\n\n  let cat_tensors ~axis tensors =\n    match tensors with\n    | [] ->\n        invalid_arg\n          \"concatenate: tensor list cannot be empty, provide at least one \\\n           tensor\"\n    | _ -> B.cat tensors ~axis\n\n  let roll ?axis shift x =\n    let original_shape = shape x in\n    let x, ax_idx =\n      match axis with\n      | None -> (flatten x, 0)\n      | Some a ->\n          let r = ndim x in\n          let norm = if a < 0 then a + r else a in\n          if norm < 0 || norm >= r then\n            err \"roll\" \"axis %d out of bounds for %dD tensor\" a r;\n          (x, norm)\n    in\n    let sh = shape x in\n    let r = ndim x in\n    if r = 0 then x\n    else\n      let dim_size = sh.(ax_idx) in\n      (* Zero-length axis and zero net shift must still restore the original\n         shape when the tensor was flattened for axis:None *)\n      if dim_size = 0 then (if axis = None then reshape original_shape x else x)\n      else\n        let s = shift mod dim_size in\n        let actual = if s < 0 then s + dim_size else s in\n        if actual = 0 then if axis = None then reshape original_shape x else x\n        else\n          let ranges_p1 =\n            Array.mapi\n              (fun 
i d -> if i = ax_idx then (dim_size - actual, d) else (0, d))\n              sh\n          in\n          let ranges_p2 =\n            Array.mapi\n              (fun i d -> if i = ax_idx then (0, dim_size - actual) else (0, d))\n              sh\n          in\n          let rolled =\n            cat_tensors ~axis:ax_idx [ shrink ranges_p1 x; shrink ranges_p2 x ]\n          in\n          if axis = None then reshape original_shape rolled else rolled\n\n  let tile reps x =\n    let t_shape = shape x in\n    let t_ndim = ndim x in\n    let reps_len = Array.length reps in\n    if reps_len < t_ndim then\n      invalid_arg \"tile: reps length must be >= tensor rank\";\n    let x_promoted, promoted_shape =\n      if reps_len > t_ndim then (\n        let new_shape = Array.make reps_len 1 in\n        Array.blit t_shape 0 new_shape (reps_len - t_ndim) t_ndim;\n        (reshape new_shape x, new_shape))\n      else (x, t_shape)\n    in\n    Array.iteri\n      (fun i r ->\n        if r < 0 then\n          err \"tile\"\n            \"reps[%d], negative (%d<0), use positive integers (or 0 for empty \\\n             result)\"\n            i r)\n      reps;\n    if Array.for_all (( = ) 1) reps then B.copy x_promoted\n    else if Array.exists (( = ) 0) reps || Array.exists (( = ) 0) promoted_shape\n    then\n      empty (B.context x) (dtype x)\n        (Array.mapi (fun i s -> s * reps.(i)) promoted_shape)\n    else\n      let rec tile_axis curr axis =\n        if axis >= reps_len then curr\n        else if reps.(axis) = 1 then tile_axis curr (axis + 1)\n        else\n          tile_axis\n            (cat_tensors ~axis (List.init reps.(axis) (fun _ -> curr)))\n            (axis + 1)\n      in\n      tile_axis x_promoted 0\n\n  let repeat ?axis count x =\n    if count < 0 then err \"repeat\" \"count must be >= 0, got %d\" count;\n    let x, ax_idx =\n      match axis with\n      | None -> (flatten x, 0)\n      | Some a ->\n          let r = ndim x in\n          let norm = if a < 0 then 
a + r else a in\n          if norm < 0 || norm >= r then\n            err \"repeat\" \"axis %d out of bounds for %dD tensor\" a r;\n          (x, norm)\n    in\n    let t_shape = shape x in\n    let t_ndim = ndim x in\n    if count = 0 then begin\n      let s = Array.copy t_shape in\n      if t_ndim > 0 then s.(ax_idx) <- 0;\n      empty (B.context x) (dtype x) (if axis = None then [| 0 |] else s)\n    end\n    else if count = 1 then B.copy x\n    else if t_ndim = 0 then\n      let repeated = expand [| count |] (reshape [| 1 |] x) in\n      if axis = None then repeated else reshape (shape x) repeated\n    else\n      let axis_size = t_shape.(ax_idx) in\n      let slices = ref [] in\n      for i = axis_size - 1 downto 0 do\n        let slice =\n          Array.init t_ndim (fun dim ->\n              if dim = ax_idx then (i, i + 1) else (0, t_shape.(dim)))\n        in\n        let sv = B.shrink x slice in\n        for _ = 1 to count do\n          slices := sv :: !slices\n        done\n      done;\n      cat_tensors ~axis:ax_idx !slices\n\n  (* ───── Concatenation and Stacking ───── *)\n\n  let check_dtypes_match ~op ts =\n    let first_dtype = dtype (List.hd ts) in\n    List.iter\n      (fun x ->\n        let d = dtype x in\n        if not (Dtype.equal first_dtype d) then\n          err op \"expected dtype %s, got %s\"\n            (Dtype.to_string first_dtype)\n            (Dtype.to_string d))\n      (List.tl ts)\n\n  let concatenate ?axis ts =\n    match ts with\n    | [] ->\n        invalid_arg\n          \"concatenate: tensor list cannot be empty, provide at least one \\\n           tensor\"\n    | [ x ] -> copy x\n    | _ -> (\n        check_dtypes_match ~op:\"concatenate\" ts;\n        match axis with\n        | None -> cat_tensors ~axis:0 (List.map flatten ts)\n        | Some a ->\n            let first = List.hd ts in\n            let first_ndim = ndim first in\n            let axis = resolve_single_axis ~ndim_opt:first_ndim first a in\n            if not 
(List.for_all (fun x -> ndim x = first_ndim) ts) then\n              invalid_arg\n                \"concatenate: arrays must have same number of dimensions\";\n            let first_shape = shape first in\n            List.iter\n              (fun x ->\n                let s = shape x in\n                Array.iteri\n                  (fun i d ->\n                    if i <> axis && d <> first_shape.(i) then\n                      err \"concatenate\" \"dimension %d, size %d≠%d\" i d\n                        first_shape.(i))\n                  s)\n              (List.tl ts);\n            cat_tensors ~axis ts)\n\n  let stack ?axis ts =\n    match ts with\n    | [] -> invalid_arg \"stack: tensor list cannot be empty\"\n    | _ ->\n        let first_ndim = Array.length (shape (List.hd ts)) in\n        let axis =\n          match axis with\n          | None -> 0\n          | Some a ->\n              let a = if a < 0 then a + first_ndim + 1 else a in\n              if a < 0 || a > first_ndim then\n                err \"stack\" \"axis %d out of bounds for %dD tensor\" a first_ndim;\n              a\n        in\n        concatenate ~axis (List.map (fun x -> unsqueeze ~axes:[ axis ] x) ts)\n\n  let ensure_ndim n x =\n    let s = shape x in\n    let nd = Array.length s in\n    if nd >= n then x\n    else\n      let new_shape = Array.make n 1 in\n      Array.blit s 0 new_shape 0 nd;\n      reshape new_shape x\n\n  let vstack ts =\n    match ts with\n    | [] -> invalid_arg \"vstack: tensor list cannot be empty\"\n    | _ ->\n        concatenate ~axis:0\n          (List.map\n             (fun x ->\n               if ndim x = 0 then reshape [| 1; 1 |] x\n               else if ndim x = 1 then reshape [| 1; numel x |] x\n               else x)\n             ts)\n\n  let hstack ts =\n    match ts with\n    | [] -> invalid_arg \"hstack: tensor list cannot be empty\"\n    | _ ->\n        if List.for_all (fun x -> ndim x <= 1) ts then\n          concatenate ~axis:0\n            
(List.map (fun x -> if ndim x = 0 then reshape [| 1 |] x else x) ts)\n        else\n          concatenate ~axis:1\n            (List.map\n               (fun x ->\n                 if ndim x = 0 then reshape [| 1; 1 |] x\n                 else if ndim x = 1 then reshape [| numel x; 1 |] x\n                 else x)\n               ts)\n\n  let dstack ts =\n    match ts with\n    | [] -> invalid_arg \"dstack: tensor list cannot be empty\"\n    | _ ->\n        concatenate ~axis:2\n          (List.map\n             (fun x ->\n               let s = shape x in\n               let nd = Array.length s in\n               if nd = 0 then reshape [| 1; 1; 1 |] x\n               else if nd = 1 then reshape [| 1; s.(0); 1 |] x\n               else if nd = 2 then reshape [| s.(0); s.(1); 1 |] x\n               else x)\n             ts)\n\n  let broadcast_arrays ts =\n    match ts with\n    | [] -> []\n    | [ x ] -> [ x ]\n    | _ ->\n        let target =\n          List.fold_left\n            (fun acc x -> Shape.broadcast acc (shape x))\n            (shape (List.hd ts))\n            (List.tl ts)\n        in\n        List.map (fun x -> broadcast_to target x) ts\n\n  (* ───── Array Creation ───── *)\n\n  let eye ctx ?m ?k dtype n =\n    let rows = match m with Some v -> v | None -> n in\n    let cols = n in\n    let k_val = match k with Some v -> v | None -> 0 in\n    if rows <= 0 || cols <= 0 || k_val >= cols || k_val <= -rows then\n      zeros ctx dtype [| rows; cols |]\n    else\n      let arr = Array.make (rows * cols) (Dtype.zero dtype) in\n      let one = Dtype.one dtype in\n      for i = 0 to Stdlib.min rows cols - 1 do\n        let col = i + k_val in\n        if col >= 0 && col < cols then arr.((i * cols) + col) <- one\n      done;\n      create ctx dtype [| rows; cols |] arr\n\n  let identity ctx dtype n = eye ctx ~m:n ~k:0 dtype n\n\n  let diag ?(k = 0) v =\n    let v_shape = shape v in\n    let v_ndim = Array.length v_shape in\n    if v_ndim = 1 then\n      let n = 
v_shape.(0) in\n      let size = n + Int.abs k in\n      let v_arr = to_array v in\n      init (B.context v) (dtype v) [| size; size |] (fun indices ->\n          let row = indices.(0) in\n          let col = indices.(1) in\n          let diag_idx =\n            if k >= 0 then\n              if col = row + k && row >= 0 && row < n then row else -1\n            else if row = col - k && col >= 0 && col < n then col\n            else -1\n          in\n          if diag_idx >= 0 && diag_idx < n then v_arr.(diag_idx)\n          else Dtype.zero (dtype v))\n    else if v_ndim = 2 then\n      let rows = v_shape.(0) in\n      let cols = v_shape.(1) in\n      let diag_len =\n        Stdlib.max 0\n          (if k >= 0 then Int.min rows (cols - k) else Int.min (rows + k) cols)\n      in\n      if diag_len = 0 then empty (B.context v) (dtype v) [| 0 |]\n      else\n        let v_arr = to_array v in\n        init (B.context v) (dtype v) [| diag_len |] (fun indices ->\n            let i = indices.(0) in\n            let row = if k >= 0 then i else i - k in\n            let col = if k >= 0 then i + k else i in\n            v_arr.((row * cols) + col))\n    else err \"diag\" \"input, expected 1D or 2D array, got %dD\" v_ndim\n\n  let arange (type a b) ctx (dtype : (a, b) Dtype.t) start stop step =\n    if start >= stop && step > 0 then\n      err \"arange\"\n        \"range [%d, %d), empty with step=%d, ensure start < stop for positive \\\n         step, or start > stop for negative step\"\n        start stop step;\n    if step = 0 then invalid_arg \"arange: step cannot be zero\";\n    let num_elements =\n      if step > 0 then\n        if start >= stop then 0 else (stop - start + step - 1) / step\n      else if start <= stop then 0\n      else (start - stop + -step - 1) / -step\n    in\n    if num_elements <= 0 then empty ctx dtype [| 0 |]\n    else\n      let float_at i =\n        float_of_int start +. (float_of_int i *. 
float_of_int step)\n      in\n      let int_at i = start + (i * step) in\n      let f_init idx_arr : a =\n        let i = idx_arr.(0) in\n        match dtype with\n        | Dtype.Float16 -> float_at i\n        | Dtype.Float32 -> float_at i\n        | Dtype.Float64 -> float_at i\n        | Dtype.BFloat16 -> float_at i\n        | Dtype.Float8_e4m3 -> float_at i\n        | Dtype.Float8_e5m2 -> float_at i\n        | Dtype.Int8 -> int_at i\n        | Dtype.UInt8 -> int_at i\n        | Dtype.Int16 -> int_at i\n        | Dtype.UInt16 -> int_at i\n        | Dtype.Int4 -> int_at i\n        | Dtype.UInt4 -> int_at i\n        | Dtype.Bool -> i <> 0\n        | Dtype.Int32 ->\n            Int32.(add (of_int start) (mul (of_int i) (of_int step)))\n        | Dtype.UInt32 ->\n            Int32.(add (of_int start) (mul (of_int i) (of_int step)))\n        | Dtype.Int64 ->\n            Int64.(add (of_int start) (mul (of_int i) (of_int step)))\n        | Dtype.UInt64 ->\n            Int64.(add (of_int start) (mul (of_int i) (of_int step)))\n        | Dtype.Complex64 -> { Complex.re = float_at i; im = 0. }\n        | Dtype.Complex128 -> { Complex.re = float_at i; im = 0. }\n      in\n      init ctx dtype [| num_elements |] f_init\n\n  let arange_f ctx dtype start_f stop_f step_f =\n    if step_f = 0. then invalid_arg \"arange_f: step cannot be zero\";\n    let num_exact_steps = (stop_f -. start_f) /. step_f in\n    let eps = 1e-9 in\n    let num_elements =\n      if\n        (step_f > 0. && stop_f <= start_f +. (eps *. Float.abs step_f))\n        || (step_f < 0. && stop_f >= start_f +. (eps *. Float.abs step_f))\n        || (Float.abs num_exact_steps < eps && num_exact_steps <= 0.)\n      then 0\n      else\n        let corrected =\n          num_exact_steps -. Float.copy_sign eps num_exact_steps\n        in\n        int_of_float (Float.floor corrected +. 
1.)\n    in\n    let n = Stdlib.max 0 num_elements in\n    if n <= 0 then empty ctx dtype [| 0 |]\n    else\n      init ctx dtype [| n |] (fun idx ->\n          start_f +. (float_of_int idx.(0) *. step_f))\n\n  let linspace ctx dtype ?(endpoint = true) start_f stop_f count =\n    if count < 0 then err \"linspace\" \"count must be >= 0, got %d\" count;\n    if count = 0 then empty ctx dtype [| 0 |]\n    else if count = 1 then full ctx dtype [| 1 |] (Dtype.of_float dtype start_f)\n    else\n      let div_factor = float_of_int (if endpoint then count - 1 else count) in\n      let step = (stop_f -. start_f) /. div_factor in\n      init ctx dtype [| count |] (fun idx ->\n          Dtype.of_float dtype (start_f +. (float_of_int idx.(0) *. step)))\n\n  let logspace ctx dtype ?(endpoint = true) ?(base = 10.0) start_exp stop_exp\n      count =\n    if count < 0 then err \"logspace\" \"count must be >= 0, got %d\" count;\n    if count = 0 then empty ctx dtype [| 0 |]\n    else\n      let exponents = linspace ctx dtype ~endpoint start_exp stop_exp count in\n      if base = Float.exp 1.0 then exp exponents\n      else if base = 2.0 then exp2 exponents\n      else\n        let log2_base = Stdlib.log base /. Stdlib.log 2.0 in\n        let log2_base_t =\n          broadcast_to (shape exponents) (scalar ctx dtype log2_base)\n        in\n        exp2 (mul exponents log2_base_t)\n\n  let geomspace ctx dtype ?(endpoint = true) start_f stop_f count =\n    if start_f <= 0. || stop_f <= 0. then\n      err \"geomspace\"\n        \"%s, must be positive (>0), geomspace requires positive values for \\\n         logarithmic spacing\"\n        (if start_f <= 0. 
then Printf.sprintf \"start %g\" start_f\n         else Printf.sprintf \"stop %g\" stop_f);\n    if count < 0 then err \"geomspace\" \"count must be >= 0, got %d\" count;\n    if count = 0 then empty ctx dtype [| 0 |]\n    else if count = 1 then full ctx dtype [| 1 |] start_f\n    else\n      exp\n        (linspace ctx dtype ~endpoint (Stdlib.log start_f) (Stdlib.log stop_f)\n           count)\n\n  let meshgrid ?(indexing = `xy) x y =\n    let x_shape = shape x in\n    let y_shape = shape y in\n    if Array.length x_shape <> 1 then invalid_arg \"meshgrid: x must be 1D\";\n    if Array.length y_shape <> 1 then invalid_arg \"meshgrid: y must be 1D\";\n    let nx = x_shape.(0) in\n    let ny = y_shape.(0) in\n    match indexing with\n    | `xy ->\n        ( broadcast_to [| ny; nx |] (reshape [| 1; nx |] x),\n          broadcast_to [| ny; nx |] (reshape [| ny; 1 |] y) )\n    | `ij ->\n        ( broadcast_to [| nx; ny |] (reshape [| nx; 1 |] x),\n          broadcast_to [| nx; ny |] (reshape [| 1; ny |] y) )\n\n  (* Triangular mask: tril uses (>=), triu uses (<=) *)\n  let triangular_mask ~op ~cmp ?k x =\n    let k_val = match k with Some v -> v | None -> 0 in\n    let sh = shape x in\n    let nd = Array.length sh in\n    if nd < 2 then err op \"input requires at least 2D tensor\";\n    let rows = sh.(nd - 2) in\n    let cols = sh.(nd - 1) in\n    let row_idx = reshape [| rows; 1 |] (arange (B.context x) int32 0 rows 1) in\n    let col_idx = reshape [| 1; cols |] (arange (B.context x) int32 0 cols 1) in\n    let k_offset =\n      sub col_idx (scalar (B.context x) int32 (Int32.of_int k_val))\n    in\n    let mask = cmp row_idx k_offset in\n    let mask =\n      if nd > 2 then\n        broadcast_to\n          (Array.concat [ Array.sub sh 0 (nd - 2); [| rows; cols |] ])\n          mask\n      else mask\n    in\n    where mask x (zeros_like x)\n\n  let tril ?k x = triangular_mask ~op:\"tril\" ~cmp:greater_equal ?k x\n  let triu ?k x = triangular_mask ~op:\"triu\" 
~cmp:less_equal ?k x\n\n  (* ───── Take Operations ───── *)\n\n  let apply_index_mode ~mode ~n ctx indices =\n    match mode with\n    | `raise -> indices\n    | `wrap -> mod_ indices (scalar (B.context indices) Int32 (Int32.of_int n))\n    | `clip ->\n        let s = shape indices in\n        minimum\n          (maximum indices (zeros ctx Int32 s))\n          (full ctx Int32 s (Int32.of_int (n - 1)))\n\n  let take ?axis ?(mode = `raise) indices t =\n    let ctx = B.context t in\n    match axis with\n    | None ->\n        let t_flat = reshape [| numel t |] t in\n        let idx = apply_index_mode ~mode ~n:(numel t) ctx indices in\n        B.gather t_flat idx ~axis:0\n    | Some axis ->\n        let t_shape = shape t in\n        let axis = resolve_single_axis t axis in\n        let idx = apply_index_mode ~mode ~n:t_shape.(axis) ctx indices in\n        let n_idx = numel idx in\n        (* Reshape indices for broadcasting: [1,...,1,n_idx,1,...,1] *)\n        let expanded_shape =\n          Array.init (Array.length t_shape) (fun i ->\n              if i = axis then n_idx else 1)\n        in\n        let broadcast_shape = Array.copy t_shape in\n        broadcast_shape.(axis) <- n_idx;\n        let idx_broadcast =\n          broadcast_to broadcast_shape (reshape expanded_shape idx)\n        in\n        let out = B.gather t idx_broadcast ~axis in\n        let out_shape = Array.copy t_shape in\n        out_shape.(axis) <- n_idx;\n        reshape out_shape out\n\n  let take_along_axis ~axis indices t =\n    let axis = resolve_single_axis t axis in\n    let t_shape = shape t in\n    let idx_shape = shape indices in\n    if Array.length t_shape <> Array.length idx_shape then\n      err \"take_along_axis\" \"cannot reshape %s to %s\"\n        (Shape.to_string idx_shape)\n        (Shape.to_string t_shape);\n    Array.iteri\n      (fun i dim ->\n        if i <> axis && dim <> idx_shape.(i) then\n          err \"take_along_axis\"\n            \"shape, dimension %d: indices has 
%d but tensor has %d\" i\n            idx_shape.(i) dim)\n      t_shape;\n    B.gather t indices ~axis\n\n  (* ───── Indexing and Slicing ───── *)\n\n  let normalize_index dim_size idx = if idx < 0 then dim_size + idx else idx\n\n  let normalize_and_check_index ~op dim_size idx =\n    let idx' = if idx < 0 then dim_size + idx else idx in\n    if idx' < 0 || idx' >= dim_size then\n      err op \"index %d out of bounds [0, %d)\" idx dim_size;\n    idx'\n\n  type dim_op =\n    | View of { start : int; stop : int; step : int; dim_len : int }\n    | Squeeze of { idx : int }\n    | Gather of int array\n    | New_axis\n\n  let normalize_slice_spec dim_size = function\n    | I idx ->\n        Squeeze { idx = normalize_and_check_index ~op:\"slice\" dim_size idx }\n    | A -> View { start = 0; stop = dim_size; step = 1; dim_len = dim_size }\n    | R (start, stop) ->\n        let s = Int.max 0 (Int.min (normalize_index dim_size start) dim_size) in\n        let e = Int.max 0 (Int.min (normalize_index dim_size stop) dim_size) in\n        View { start = s; stop = e; step = 1; dim_len = Int.max 0 (e - s) }\n    | Rs (start, stop, step) ->\n        if step = 0 then\n          invalid_arg\n            \"slice: step cannot be zero, use positive step for forward slicing \\\n             or negative for reverse\";\n        let s = normalize_index dim_size start in\n        let e = normalize_index dim_size stop in\n        let len, actual_stop =\n          if step > 0 then\n            let s = Int.max 0 (Int.min s dim_size) in\n            let e = Int.max 0 (Int.min e dim_size) in\n            ((if s >= e then 0 else ((e - 1 - s) / step) + 1), e)\n          else\n            let s = Int.min (dim_size - 1) (Int.max (-1) s) in\n            let e = Int.min (dim_size - 1) (Int.max (-1) e) in\n            ((if s <= e then 0 else ((s - e - 1) / -step) + 1), e)\n        in\n        View { start = s; stop = actual_stop; step; dim_len = len }\n    | L indices ->\n        Gather\n          
(Array.map\n             (normalize_and_check_index ~op:\"slice\" dim_size)\n             (Array.of_list indices))\n    | N -> New_axis\n    | M _ -> invalid_arg \"slice: mask slicing not supported\"\n\n  let slice_internal specs x =\n    let input_shape = shape x in\n    let ndim_in = Array.length input_shape in\n    (* Parse specs, then pad with A for unspecified trailing dimensions *)\n    let ops, consumed =\n      List.fold_left\n        (fun (acc, dim) spec ->\n          match spec with\n          | N -> (New_axis :: acc, dim)\n          | _ ->\n              if dim >= ndim_in then invalid_arg \"slice: too many indices\";\n              (normalize_slice_spec input_shape.(dim) spec :: acc, dim + 1))\n        ([], 0) specs\n    in\n    let rec pad_trailing acc dim =\n      if dim >= ndim_in then List.rev acc\n      else\n        pad_trailing (normalize_slice_spec input_shape.(dim) A :: acc) (dim + 1)\n    in\n    let ops = pad_trailing ops consumed in\n    let gather_axis axis indices t =\n      let idx_t =\n        init (B.context t) Dtype.int32\n          [| Array.length indices |]\n          (fun i -> Int32.of_int indices.(i.(0)))\n      in\n      take ~axis idx_t t\n    in\n    let shrink_axis axis start stop t =\n      if start < stop then\n        B.shrink t\n          (Array.mapi\n             (fun i dim -> if i = axis then (start, stop) else (0, dim))\n             (shape t))\n      else take ~axis (empty (B.context t) Dtype.int32 [| 0 |]) t\n    in\n    let rec apply current axis sq_axes = function\n      | [] -> (current, sq_axes)\n      | New_axis :: rest ->\n          apply (unsqueeze ~axes:[ axis ] current) (axis + 1) sq_axes rest\n      | Squeeze { idx } :: rest ->\n          apply\n            (shrink_axis axis idx (idx + 1) current)\n            (axis + 1) (axis :: sq_axes) rest\n      | Gather indices :: rest ->\n          apply (gather_axis axis indices current) (axis + 1) sq_axes rest\n      | View { start; step; dim_len; _ } :: rest ->\n     
     let current' =\n            if step = 1 then shrink_axis axis start (start + dim_len) current\n            else if step = -1 then (\n              if dim_len = 0 then shrink_axis axis 0 0 current\n              else\n                let sliced =\n                  shrink_axis axis (start - dim_len + 1) (start + 1) current\n                in\n                let fb = Array.make (ndim sliced) false in\n                fb.(axis) <- true;\n                B.flip sliced fb)\n            else\n              gather_axis axis\n                (Array.init dim_len (fun i -> start + (i * step)))\n                current\n          in\n          apply current' (axis + 1) sq_axes rest\n    in\n    let result, sq_axes = apply x 0 [] ops in\n    match List.sort_uniq compare sq_axes with\n    | [] -> result\n    | axes -> squeeze ~axes result\n\n  let set_slice_internal specs x y =\n    let x_shape = shape x in\n    let nd = Array.length x_shape in\n    let full_specs =\n      if List.length specs < nd then\n        specs @ List.init (nd - List.length specs) (fun _ -> A)\n      else specs\n    in\n    (* Fast path: contiguous view — just assign *)\n    let is_view_compatible =\n      List.for_all\n        (function\n          | L _ | M _ -> false | Rs (_, _, s) -> Int.abs s = 1 | _ -> true)\n        full_specs\n    in\n    if is_view_compatible then\n      let target = slice_internal full_specs x in\n      B.assign target (broadcast_to (shape target) y)\n    else begin\n      (* Slow path: scatter for fancy indexing *)\n      let strides = Array.make nd 1 in\n      for i = nd - 2 downto 0 do\n        strides.(i) <- strides.(i + 1) * x_shape.(i + 1)\n      done;\n      let ctx = B.context x in\n      let dims_info =\n        List.mapi\n          (fun i spec ->\n            match normalize_slice_spec x_shape.(i) spec with\n            | Squeeze { idx } ->\n                (true, scalar ctx Dtype.int32 (Int32.of_int idx))\n            | View { start; stop; step; _ } ->\n        
        (false, arange ctx Dtype.int32 start stop step)\n            | Gather indices ->\n                ( false,\n                  init ctx Dtype.int32\n                    [| Array.length indices |]\n                    (fun k -> Int32.of_int indices.(k.(0))) )\n            | New_axis -> invalid_arg \"set_slice: New_axis not supported\")\n          full_specs\n      in\n      let target_shape =\n        Array.of_list\n          (List.filter_map\n             (fun (sq, t) -> if sq then None else Some (numel t))\n             dims_info)\n      in\n      let target_rank = Array.length target_shape in\n      let flat_idx = ref (scalar ctx Dtype.int32 0l) in\n      let tdim = ref 0 in\n      List.iteri\n        (fun i (squeezed, idx_t) ->\n          let stride = Int32.of_int strides.(i) in\n          let weighted =\n            if stride = 1l then idx_t\n            else mul idx_t (scalar ctx Dtype.int32 stride)\n          in\n          if squeezed then flat_idx := add !flat_idx weighted\n          else begin\n            let rs = Array.make target_rank 1 in\n            rs.(!tdim) <- numel idx_t;\n            flat_idx := add !flat_idx (reshape rs weighted);\n            incr tdim\n          end)\n        dims_info;\n      let x_flat = reshape [| numel x |] x in\n      let y_flat =\n        reshape\n          [| numel (broadcast_to target_shape y) |]\n          (broadcast_to target_shape y)\n      in\n      let result =\n        B.scatter ~mode:`Set ~unique_indices:false x_flat\n          ~indices:(reshape [| numel !flat_idx |] !flat_idx)\n          ~updates:y_flat ~axis:0\n      in\n      B.assign x (reshape x_shape result)\n    end\n\n  let get indices x =\n    let x_shape = shape x in\n    let checked =\n      List.mapi\n        (fun dim idx ->\n          if dim >= Array.length x_shape then\n            err \"get\" \"indices, too many for shape %s\" (Shape.to_string x_shape);\n          let idx' = normalize_index x_shape.(dim) idx in\n          if idx' < 0 || 
idx' >= x_shape.(dim) then\n            err \"get\"\n              \"index [%s] out of bounds for shape %s, index %d at dim %d: %d \\\n               not in [0, %d)\"\n              (String.concat \",\" (List.map string_of_int indices))\n              (Shape.to_string x_shape) dim dim idx' x_shape.(dim);\n          idx')\n        indices\n    in\n    slice_internal (List.map (fun i -> I i) checked) x\n\n  let set indices x value =\n    let x_shape = shape x in\n    let checked =\n      List.mapi\n        (fun dim idx ->\n          if dim >= Array.length x_shape then\n            err \"set\" \"indices, too many for shape %s\" (Shape.to_string x_shape);\n          let idx' = normalize_index x_shape.(dim) idx in\n          if idx' < 0 || idx' >= x_shape.(dim) then\n            err \"set\"\n              \"index %d at dimension %d, out of bounds for shape %s, index %d \\\n               at dim %d: %d not in [0, %d)\"\n              idx dim (Shape.to_string x_shape) dim dim idx' x_shape.(dim);\n          idx')\n        indices\n    in\n    set_slice_internal (List.map (fun i -> I i) checked) x value\n\n  let unsafe_get indices x =\n    let t = get indices x in\n    let ba = data t in\n    if numel t <> 1 then\n      err \"unsafe_get\" \"expected scalar result, got %d elements\" (numel t);\n    match View.strides_opt (B.view t) with\n    | Some _ -> Nx_buffer.get ba (offset t)\n    | None ->\n        if Nx_buffer.length ba = 1 then Nx_buffer.get ba 0\n        else\n          invalid_arg \"unsafe_get: cannot read from non-composable scalar view\"\n\n  let unsafe_set indices value x =\n    set indices x (scalar (B.context x) (dtype x) value)\n\n  let slice specs t = slice_internal specs t\n  let set_slice specs t value = set_slice_internal specs t value\n\n  let item indices t =\n    let s = shape t in\n    if List.length indices <> Array.length s then\n      invalid_arg\n        (Printf.sprintf \"item: need %d indices for %d-d tensor, got %d\"\n           (Array.length s) 
(Array.length s) (List.length indices));\n    unsafe_get [] (get indices t)\n\n  let set_item indices value t =\n    let s = shape t in\n    if List.length indices <> Array.length s then\n      invalid_arg\n        (Printf.sprintf \"set_item: need %d indices for %dD tensor, got %d\"\n           (Array.length s) (Array.length s) (List.length indices));\n    unsafe_set indices value t\n\n  let put ?axis ~indices ~values ?(mode = `raise) t =\n    let indices =\n      if dtype indices = Int32 then indices else astype Int32 indices\n    in\n    let ctx = B.context t in\n    match axis with\n    | None ->\n        let orig_shape = shape t in\n        let t_flat = reshape [| numel t |] t in\n        let idx = apply_index_mode ~mode ~n:(numel t) ctx indices in\n        let result =\n          B.scatter ~mode:`Set ~unique_indices:false t_flat\n            ~indices:(reshape [| numel indices |] idx)\n            ~updates:(reshape [| numel values |] values)\n            ~axis:0\n        in\n        blit (reshape orig_shape result) t\n    | Some axis ->\n        let axis = resolve_single_axis t axis in\n        let idx = apply_index_mode ~mode ~n:(dim axis t) ctx indices in\n        let result =\n          B.scatter ~mode:`Set ~unique_indices:false t ~indices:idx\n            ~updates:values ~axis\n        in\n        blit result t\n\n  let index_put ~indices ~values ?(mode = `raise) t =\n    let ctx = B.context t in\n    let t_shape = shape t in\n    let nd = Array.length t_shape in\n    if nd = 0 then\n      invalid_arg \"index_put: tensor rank, cannot index into scalar tensor\";\n    if Array.length indices <> nd then\n      err \"index_put\" \"indices, expected %d index tensors, got %d\" nd\n        (Array.length indices);\n    let indices_bc =\n      Array.map\n        (fun idx -> if dtype idx = Int32 then idx else astype Int32 idx)\n        indices\n      |> Array.to_list |> broadcast_arrays |> Array.of_list\n    in\n    let indices_processed =\n      Array.mapi\n        
(fun axis idx ->\n          let n = t_shape.(axis) in\n          if n = 0 && numel idx <> 0 then\n            err \"index_put\" \"axis %d, cannot index into zero-sized dimension\"\n              axis;\n          if numel idx = 0 then idx\n          else\n            match mode with\n            | `raise -> idx\n            | `wrap ->\n                let m =\n                  broadcast_to (shape idx) (scalar ctx Int32 (Int32.of_int n))\n                in\n                let wrapped = mod_ idx m in\n                let z = zeros ctx Int32 (shape idx) in\n                where (cmplt wrapped z) (add wrapped m) wrapped\n            | `clip ->\n                minimum\n                  (maximum idx (zeros ctx Int32 (shape idx)))\n                  (full ctx Int32 (shape idx) (Int32.of_int (n - 1))))\n        indices_bc\n    in\n    let target_shape = shape indices_processed.(0) in\n    if array_prod target_shape = 0 then ()\n    else\n      let values =\n        if shape values = target_shape then values\n        else broadcast_to target_shape values\n      in\n      let strides = Shape.c_contiguous_strides t_shape in\n      let flat_indices =\n        let acc = ref (zeros ctx Int32 target_shape) in\n        for axis = 0 to nd - 1 do\n          let idx = indices_processed.(axis) in\n          let s = strides.(axis) in\n          let contribution =\n            if s = 0 || s = 1 then idx\n            else mul idx (full ctx Int32 target_shape (Int32.of_int s))\n          in\n          acc := add !acc contribution\n        done;\n        !acc\n      in\n      put ~indices:flat_indices ~values ~mode:`raise t\n\n  let put_along_axis ~axis ~indices ~values t =\n    let axis = resolve_single_axis t axis in\n    let t_shape = shape t in\n    let idx_shape = shape indices in\n    if Array.length t_shape <> Array.length idx_shape then\n      err \"put_along_axis\"\n        \"indices shape %s must have the same rank as tensor shape %s\"\n        (Shape.to_string idx_shape)\n        (Shape.to_string t_shape);\n    let 
values =\n      if shape values = idx_shape then values else broadcast_to idx_shape values\n    in\n    blit\n      (B.scatter ~mode:`Set ~unique_indices:false t ~indices ~updates:values\n         ~axis)\n      t\n\n  (* Data-dependent output shapes — not differentiable *)\n\n  let nonzero_indices_only (condition : (bool, bool_elt) t) =\n    let total = numel condition in\n    let cond_flat = reshape [| total |] condition in\n    let n =\n      sum (astype Int32 cond_flat) |> squeeze |> unsafe_get [] |> Int32.to_int\n    in\n    if n = 0 then [| empty (B.context condition) Int32 [| 0 |] |]\n    else\n      let result =\n        create (B.context condition) Int32 [| n |] (Array.make n 0l)\n      in\n      let idx = ref 0 in\n      for i = 0 to total - 1 do\n        if unsafe_get [ i ] cond_flat then begin\n          set_item [ !idx ] (Int32.of_int i) result;\n          incr idx\n        end\n      done;\n      [| result |]\n\n  let compress ?axis ~(condition : (bool, bool_elt) t) t =\n    match axis with\n    | None ->\n        let t_flat = flatten t in\n        let cond_flat = flatten condition in\n        let n =\n          sum ~axes:[ 0 ] (astype Int32 cond_flat)\n          |> squeeze |> unsafe_get [] |> Int32.to_int\n        in\n        if n = 0 then empty (B.context t) (dtype t) [| 0 |]\n        else take (nonzero_indices_only cond_flat).(0) t_flat\n    | Some axis ->\n        let axis = resolve_single_axis t axis in\n        let axis_size = dim axis t in\n        if numel condition <> axis_size then\n          invalid_arg\n            (Printf.sprintf \"compress: length %d doesn't match axis %d size %d\"\n               (numel condition) axis axis_size);\n        let cond_1d = reshape [| axis_size |] condition in\n        let true_idx = nonzero_indices_only cond_1d in\n        if numel true_idx.(0) = 0 then begin\n          let s = Array.copy (shape t) in\n          s.(axis) <- 0;\n          empty (B.context t) (dtype t) s\n        end\n        else take ~axis 
true_idx.(0) t\n\n  let extract ~condition t =\n    if shape condition <> shape t then invalid_arg \"extract: shape mismatch\";\n    compress ~condition (flatten t)\n\n  let nonzero (type a b) (t : (a, b) t) =\n    let t_shape = shape t in\n    let nd = Array.length t_shape in\n    let mask =\n      not_equal t (broadcast_to t_shape (zeros (B.context t) (dtype t) [| 1 |]))\n    in\n    let mask_flat = reshape [| numel mask |] mask in\n    let n =\n      sum (astype Int32 mask_flat) |> squeeze |> unsafe_get [] |> Int32.to_int\n    in\n    if n = 0 then Array.init nd (fun _ -> empty (B.context t) Int32 [| 0 |])\n    else\n      let coords =\n        Array.init nd (fun _ ->\n            create (B.context t) Int32 [| n |] (Array.make n 0l))\n      in\n      let idx = ref 0 in\n      let pos = Array.make nd 0 in\n      let rec walk dim =\n        if dim = nd then begin\n          let elem = get (Array.to_list pos) t in\n          let z = zeros (B.context t) (dtype t) (shape elem) in\n          if unsafe_get [] (not_equal elem z) <> false then begin\n            for d = 0 to nd - 1 do\n              set_item [ !idx ] (Int32.of_int pos.(d)) coords.(d)\n            done;\n            incr idx\n          end\n        end\n        else\n          for i = 0 to t_shape.(dim) - 1 do\n            pos.(dim) <- i;\n            walk (dim + 1)\n          done\n      in\n      walk 0;\n      Array.map (fun c -> slice [ Rs (0, !idx, 1) ] c) coords\n\n  let argwhere t =\n    let coords = nonzero t in\n    if Array.length coords = 0 then empty (B.context t) Int32 [| 0; 0 |]\n    else\n      let n = dim 0 coords.(0) in\n      let nd = Array.length coords in\n      if n = 0 then empty (B.context t) Int32 [| 0; nd |]\n      else\n        let result = zeros (B.context t) Int32 [| n; nd |] in\n        for i = 0 to nd - 1 do\n          blit (flatten coords.(i)) (slice_internal [ A; I i ] result)\n        done;\n        result\n\n  (* ───── Splitting ───── *)\n\n  let array_split ~axis 
sections x =\n    let nd = ndim x in\n    let axis = resolve_single_axis x axis in\n    let axis_size = dim axis x in\n    let make_slice start stop =\n      if start < stop then\n        slice_internal\n          (List.init nd (fun j -> if j = axis then R (start, stop) else A))\n          x\n      else\n        let s = Array.copy (shape x) in\n        s.(axis) <- 0;\n        empty (B.context x) (dtype x) s\n    in\n    match sections with\n    | `Indices indices ->\n        let idx = Array.of_list indices in\n        let n = Array.length idx + 1 in\n        let bounds = Array.make (n + 1) 0 in\n        Array.iteri (fun i v -> bounds.(i + 1) <- v) idx;\n        bounds.(n) <- axis_size;\n        Array.to_list\n          (Array.init n (fun i -> make_slice bounds.(i) bounds.(i + 1)))\n    | `Count n ->\n        if n <= 0 then err \"array_split\" \"sections must be >= 1, got %d\" n;\n        let base = axis_size / n in\n        let rem = axis_size mod n in\n        let splits = Array.make n x in\n        let start = ref 0 in\n        for i = 0 to n - 1 do\n          let sz = base + if i < rem then 1 else 0 in\n          splits.(i) <- make_slice !start (!start + sz);\n          start := !start + sz\n        done;\n        Array.to_list splits\n\n  let split ~axis sections x =\n    let axis = resolve_single_axis x axis in\n    let axis_size = dim axis x in\n    if axis_size mod sections <> 0 then\n      err \"split\"\n        \"cannot split axis %d (size %d) evenly into %d sections, %d %% %d = %d; \\\n         use array_split for uneven division\"\n        axis axis_size sections axis_size sections (axis_size mod sections);\n    array_split ~axis (`Count sections) x\n\n  (* ───── Sorting and Searching ───── *)\n\n  let sort (type a b) ?(descending = false) ?(axis = -1) (x : (a, b) t) =\n    if ndim x = 0 then (x, scalar (B.context x) Dtype.int32 0l)\n    else\n      let r = ndim x in\n      let axis = if axis < 0 then axis + r else axis in\n      if axis < 0 || axis >= r 
then\n        err \"sort\" \"axis %d out of bounds for %dD tensor\" axis r;\n      let out_sorted = B.sort ~axis ~descending x in\n      let out_indices = B.argsort ~axis ~descending x in\n      (out_sorted, out_indices)\n\n  let argsort ?(descending = false) ?(axis = -1) x =\n    snd (sort ~descending ~axis x)\n\n  let argmax ?axis ?(keepdims = false) x =\n    let x', axis =\n      match axis with\n      | None -> (flatten x, 0)\n      | Some a ->\n          let r = ndim x in\n          let a = resolve_single_axis ~ndim_opt:r x a in\n          if a < 0 || a >= r then\n            err \"argmax\" \"axis %d out of bounds for %dD tensor\" a r;\n          (x, a)\n    in\n    B.argmax ~axis ~keepdims x'\n\n  let argmin (type a b) ?axis ?(keepdims = false) (x : (a, b) t) :\n      (int32, Dtype.int32_elt) t =\n    let x', axis =\n      match axis with\n      | None -> (flatten x, 0)\n      | Some a ->\n          let r = ndim x in\n          let a = resolve_single_axis ~ndim_opt:r x a in\n          if a < 0 || a >= r then\n            err \"argmin\" \"axis %d out of bounds for %dD tensor\" a r;\n          (x, a)\n    in\n    B.argmin ~axis ~keepdims x'\n\n  (* ───── Random Number Generation ───── *)\n\n  let validate_random_float_params op dtype shape =\n    if not (Dtype.is_float dtype) then\n      err op\n        \"dtype %s, not a float type, rand/randn only support Float16, Float32, \\\n         Float64\"\n        (Dtype.to_string dtype);\n    if Array.exists (fun x -> x < 0) shape then\n      err op \"invalid shape %s, dimensions must be non-negative\"\n        (Shape.to_string shape)\n\n  let rand ctx dtype shape =\n    validate_random_float_params \"rand\" dtype shape;\n    let key = Rng.next_key () in\n    let n = array_prod shape in\n    if n = 0 then zeros ctx dtype shape\n    else\n      (* Threefry: each value needs 2 int32s for key and counter *)\n      let key_t =\n        create ctx Dtype.int32 [| n; 2 |]\n          (Array.init (n * 2) (fun i -> Int32.of_int 
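(* Worked mapping (illustrative comment, not part of the module): a signed\n          int32 value b of raw random bits maps to (b + 2^31) / 2^32 below, so\n          0l -> 0.5, Int32.min_int -> 0.0, and Int32.max_int -> just under 1.0\n          (1.0 minus 2^-32), giving uniform samples in [0, 1). *) 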
(Rng.fold_in key i)))\n      in\n      let counter =\n        create ctx Dtype.int32 [| n; 2 |]\n          (Array.init (n * 2) (fun i -> Int32.of_int i))\n      in\n      let bits = B.threefry key_t counter in\n      let bits_flat = flatten bits in\n      let bits_needed =\n        if n < size bits_flat then shrink [| (0, n) |] bits_flat else bits_flat\n      in\n      (* Signed int32 → [0, 1): add 2^31 then divide by 2^32 *)\n      let f32 = cast Dtype.float32 bits_needed in\n      let normalized =\n        div\n          (add f32 (scalar ctx Dtype.float32 2147483648.0))\n          (scalar ctx Dtype.float32 4294967296.0)\n      in\n      reshape shape (cast dtype normalized)\n\n  let randn ctx dtype shape =\n    validate_random_float_params \"randn\" dtype shape;\n    if array_prod shape = 0 then zeros ctx dtype shape\n    else\n      (* Box-Muller: z = cos(2π u1) · sqrt(-2 ln(u2)) *)\n      let u1 = rand ctx Dtype.float32 shape in\n      let u2 = rand ctx Dtype.float32 shape in\n      let angle = mul u1 (scalar ctx Dtype.float32 (2.0 *. 
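(* Worked check of the Box-Muller step (illustrative comment, not part of\n          the module): u2_safe clamps 1 - u2 to at least 1e-7, so log u2_safe\n          is finite and non-positive and the radicand -2 log u2_safe is >= 0.\n          E.g. u1 = 0.25, u2 = 0.5: angle = pi / 2, radius = sqrt (2 ln 2),\n          roughly 1.177, and z = cos (pi / 2) times radius = 0. *) 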
Float.pi)) in\n      let u2_safe =\n        maximum (sub (ones_like u2) u2) (scalar ctx Dtype.float32 1e-7)\n      in\n      let result =\n        mul (cos angle)\n          (sqrt (mul (scalar ctx Dtype.float32 (-2.0)) (log u2_safe)))\n      in\n      cast dtype result\n\n  let randint ctx dtype ?(high = 10) shape low =\n    if low >= high then err \"randint\" \"range, low=%d >= high=%d\" low high;\n    if not (Dtype.is_int dtype) then\n      invalid_arg \"randint: dtype, only integer dtypes supported\";\n    let u = rand ctx Dtype.float32 shape in\n    astype dtype\n      (add\n         (mul u (scalar ctx Dtype.float32 (float_of_int (high - low))))\n         (scalar ctx Dtype.float32 (float_of_int low)))\n\n  let bernoulli ctx ~p shape =\n    if p < 0.0 || p > 1.0 then invalid_arg \"bernoulli: p must be in [0, 1]\";\n    if Array.exists (fun x -> x < 0) shape then\n      err \"bernoulli\" \"invalid shape %s, dimensions must be non-negative\"\n        (Shape.to_string shape);\n    cmplt (rand ctx Dtype.float32 shape) (scalar ctx Dtype.float32 p)\n\n  let permutation ctx n =\n    if n <= 0 then invalid_arg \"permutation: n must be positive\";\n    argsort (rand ctx Dtype.float32 [| n |]) ~axis:0 ~descending:false\n\n  let shuffle ctx x =\n    let s = shape x in\n    if Array.length s = 0 then x else take ~axis:0 (permutation ctx s.(0)) x\n\n  let categorical (type a b) ctx ?(axis = -1) ?shape:(batch_shape = [||])\n      (logits : (a, b) t) =\n    let logits_dtype = dtype logits in\n    let logits_shape = shape logits in\n    if not (Dtype.is_float logits_dtype) then\n      invalid_arg \"categorical: logits requires floating point dtype\";\n    let nd = Array.length logits_shape in\n    let axis = if axis < 0 then nd + axis else axis in\n    if axis < 0 || axis >= nd then\n      err \"categorical\" \"axis %d out of bounds for %dD tensor\" axis nd;\n    let full_shape = Array.append batch_shape logits_shape in\n    (* Gumbel-max trick: argmax(logits + Gumbel noise) 
*)\n    let run_float float_dtype eps =\n      let u =\n        clip (rand ctx float_dtype full_shape) ~min:eps ~max:(1. -. eps)\n      in\n      let neg_one = scalar ctx float_dtype (-1.0) in\n      let gumbel =\n        mul (log (mul (log u) neg_one)) neg_one |> astype logits_dtype\n      in\n      astype Dtype.int32\n        (argmax (add logits gumbel)\n           ~axis:(axis + Array.length batch_shape)\n           ~keepdims:false)\n    in\n    match logits_dtype with\n    | Float64 -> run_float Dtype.float64 1e-12\n    | Float32 -> run_float Dtype.float32 1e-6\n    | Float16 -> run_float Dtype.float32 1e-3\n    | BFloat16 -> run_float Dtype.float32 1e-2\n    | Float8_e4m3 | Float8_e5m2 ->\n        invalid_arg \"categorical: logits, float8 logits not supported\"\n    | _ -> invalid_arg \"categorical: logits requires floating point dtype\"\n\n  let truncated_normal (type a b) ctx (dtype : (a, b) Dtype.t) ~lower ~upper\n      shape =\n    if lower >= upper then\n      invalid_arg \"truncated_normal: bounds, lower must be less than upper\";\n    (match dtype with\n    | Float16 | Float32 | Float64 | BFloat16 -> ()\n    | _ -> invalid_arg \"truncated_normal: dtype must be floating point\");\n    let lo = scalar ctx Dtype.float64 lower |> astype dtype in\n    let hi = scalar ctx Dtype.float64 upper |> astype dtype in\n    let has_remaining mask =\n      match to_array (any mask) with [| v |] -> v | _ -> false\n    in\n    let initial = randn ctx dtype shape in\n    let accepted =\n      logical_and (greater_equal initial lo) (less_equal initial hi)\n    in\n    let remaining = logical_not accepted in\n    let rec fill acc remaining attempt =\n      if not (has_remaining remaining) then acc\n      else if attempt > 1000 then\n        invalid_arg\n          \"truncated_normal: generation, failed to find samples within bounds \\\n           after 1000 tries\"\n      else\n        let c = randn ctx dtype shape in\n        let within = logical_and (greater_equal c lo) 
(less_equal c hi) in\n        let take_new = logical_and remaining within in\n        fill (where take_new c acc)\n          (logical_and remaining (logical_not within))\n          (attempt + 1)\n    in\n    fill initial remaining 1\n\n  (* ───── Linear Algebra ───── *)\n\n  let matmul_with_alloc a b = B.matmul a b\n\n  let dot x w =\n    if not (ndim x > 0 && ndim w > 0) then\n      invalid_arg \"dot: tensors, both must be at least 1D\";\n    match (ndim x, ndim w) with\n    | 1, 1 -> sum (mul x w)\n    | 1, _ ->\n        let r = matmul_with_alloc (unsqueeze ~axes:[ 0 ] x) w in\n        copy_to_out (squeeze ~axes:[ ndim r - 2 ] r)\n    | _, 1 ->\n        let r = matmul_with_alloc x (unsqueeze ~axes:[ 1 ] w) in\n        copy_to_out (squeeze ~axes:[ ndim r - 1 ] r)\n    | _ -> matmul_with_alloc x w\n\n  let matmul a_orig b_orig =\n    if ndim a_orig = 0 || ndim b_orig = 0 then\n      invalid_arg \"matmul: inputs cannot be 0-D (scalars)\";\n    if ndim a_orig >= 2 && ndim b_orig >= 2 then matmul_with_alloc a_orig b_orig\n    else\n      let a, b =\n        match (ndim a_orig, ndim b_orig) with\n        | 1, 1 -> (unsqueeze ~axes:[ 0 ] a_orig, unsqueeze ~axes:[ 1 ] b_orig)\n        | 1, _ -> (unsqueeze ~axes:[ 0 ] a_orig, b_orig)\n        | _ -> (a_orig, unsqueeze ~axes:[ 1 ] b_orig)\n      in\n      let r = matmul_with_alloc a b in\n      if ndim a_orig = 1 && ndim b_orig = 1 then squeeze r\n      else if ndim a_orig = 1 then squeeze ~axes:[ ndim r - 2 ] r\n      else squeeze ~axes:[ ndim r - 1 ] r\n\n  let diagonal ?(offset = 0) ?axis1 ?axis2 x =\n    let nd = ndim x in\n    let ax1 =\n      let a = Option.value axis1 ~default:(nd - 2) in\n      if a < 0 then nd + a else a\n    in\n    let ax2 =\n      let a = Option.value axis2 ~default:(nd - 1) in\n      if a < 0 then nd + a else a\n    in\n    if ax1 = ax2 then invalid_arg \"diagonal: axes must be different\";\n    let perm =\n      let others =\n        List.filter (fun a -> a <> ax1 && a <> ax2) (List.init nd 
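(* Illustrative note (not part of the module): the permutation being built\n          here moves ax1 and ax2 to the last two positions. E.g. for nd = 4,\n          ax1 = 0, ax2 = 2: others = [1; 3] and perm = [1; 3; 0; 2], so the\n          diagonal is taken over the trailing two axes of the transpose. *) 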
Fun.id)\n      in\n      others @ [ ax1; ax2 ]\n    in\n    let x_trans = transpose ~axes:perm x in\n    let d1 = dim (nd - 2) x_trans in\n    let d2 = dim (nd - 1) x_trans in\n    let diag_len =\n      if offset >= 0 then Stdlib.max 0 (Stdlib.min d1 (d2 - offset))\n      else Stdlib.max 0 (Stdlib.min (d1 + offset) d2)\n    in\n    if diag_len = 0 then\n      empty (B.context x) (dtype x)\n        (Array.append (Array.sub (shape x_trans) 0 (nd - 2)) [| 0 |])\n    else\n      let prefix = Array.sub (shape x_trans) 0 (nd - 2) in\n      let x_flat =\n        reshape (Array.append prefix [| d1 * d2 |]) (contiguous x_trans)\n      in\n      (* Diagonal indices: start + i*(d2+1) for i in 0..diag_len-1 *)\n      let start = if offset >= 0 then offset else -offset * d2 in\n      let step = d2 + 1 in\n      let ctx = B.context x in\n      let idx =\n        add\n          (mul\n             (arange ctx Dtype.int32 0 diag_len 1)\n             (scalar ctx Dtype.int32 (Int32.of_int step)))\n          (scalar ctx Dtype.int32 (Int32.of_int start))\n      in\n      take ~axis:(nd - 2) idx x_flat\n\n  let matrix_transpose x =\n    let nd = ndim x in\n    if nd < 2 then x else swapaxes (nd - 2) (nd - 1) x\n\n  (* ───── Complex ───── *)\n\n  let extract_complex_part (type a b) ~op ~field (x : (a, b) t) =\n    let extract (type c d e f) (x : (Complex.t, c) t) (out_dt : (d, e) Dtype.t)\n        (get : Complex.t -> d) : (f, _) t =\n      let s = shape x in\n      let size = array_prod s in\n      let data =\n        Array.init size (fun i ->\n            let idx = Shape.unravel_index i s |> Array.to_list in\n            get (unsafe_get idx x))\n      in\n      Obj.magic (create (B.context x) out_dt s data)\n    in\n    match dtype x with\n    | Complex64 ->\n        extract\n          (x : (Complex.t, complex32_elt) t)\n          Dtype.float32\n          (fun c -> field c)\n    | Complex128 ->\n        extract\n          (x : (Complex.t, complex64_elt) t)\n          Dtype.float64\n     
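(* Illustrative example (not part of the module): Shape.unravel_index maps\n            a flat row-major position back to coordinates, e.g. for\n            s = [| 2; 3 |], position 5 unravels to [| 1; 2 |] and position 3\n            to [| 1; 0 |]. *) 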
     (fun c -> field c)\n    | _ -> err op \"dtype, input must be complex64 or complex128\"\n\n  let complex (type a b) ~(real : (a, b) t) ~(imag : (a, b) t) =\n    let s = shape real in\n    if s <> shape imag then\n      err \"complex\" \"imag shape %s does not match real shape %s\"\n        (Shape.to_string (shape imag))\n        (Shape.to_string s);\n    let size = array_prod s in\n    match dtype real with\n    | Float32 ->\n        let real = (real : (float, float32_elt) t) in\n        let imag = (imag : (float, float32_elt) t) in\n        let data =\n          Array.init size (fun i ->\n              let idx = Shape.unravel_index i s |> Array.to_list in\n              Complex.{ re = unsafe_get idx real; im = unsafe_get idx imag })\n        in\n        Obj.magic (create (B.context real) Dtype.complex64 s data)\n    | Float64 ->\n        let real = (real : (float, float64_elt) t) in\n        let imag = (imag : (float, float64_elt) t) in\n        let data =\n          Array.init size (fun i ->\n              let idx = Shape.unravel_index i s |> Array.to_list in\n              Complex.{ re = unsafe_get idx real; im = unsafe_get idx imag })\n        in\n        Obj.magic (create (B.context real) Dtype.complex128 s data)\n    | _ ->\n        invalid_arg \"complex: dtype, real and imag must be float32 or float64\"\n\n  let real (type a b) (x : (a, b) t) =\n    extract_complex_part ~op:\"real\" ~field:(fun c -> c.Complex.re) x\n\n  let imag (type a b) (x : (a, b) t) =\n    extract_complex_part ~op:\"imag\" ~field:(fun c -> c.Complex.im) x\n\n  let conjugate (type a b) (x : (a, b) t) =\n    match dtype x with\n    | Complex64 | Complex128 -> complex ~real:(real x) ~imag:(neg (imag x))\n    | _ -> x\n\n  (* ───── Dot Products and Tensor Contractions ───── *)\n\n  let vdot (type a b) (a : (a, b) t) (b : (a, b) t) =\n    let a', b' =\n      try\n        let bc = broadcast_arrays [ a; b ] in\n        (contiguous (List.nth bc 0), contiguous (List.nth bc 1))\n      with _ -> (a, b)\n    in\n 
   let fa = flatten a' in\n    let fb = flatten b' in\n    if numel fa <> numel fb then\n      invalid_arg \"vdot: different number of elements\";\n    match dtype a with\n    | (Complex64 | Complex128) when dtype a = dtype b ->\n        sum (mul (conjugate fa) fb)\n    | _ -> sum (mul fa fb)\n\n  let vecdot ?axis x1 x2 =\n    let ax =\n      match axis with\n      | None -> ndim x1 - 1\n      | Some a -> if a < 0 then ndim x1 + a else a\n    in\n    sum ~axes:[ ax ] ~keepdims:false (mul x1 x2)\n\n  let inner a b =\n    if (shape a).(ndim a - 1) <> (shape b).(ndim b - 1) then\n      invalid_arg \"inner: last dimensions differ\";\n    vecdot ~axis:(-1) a b\n\n  let outer a b =\n    let fa = if ndim a = 0 then reshape [| 1 |] a else flatten a in\n    let fb = if ndim b = 0 then reshape [| 1 |] b else flatten b in\n    let r =\n      matmul (reshape [| numel fa; 1 |] fa) (reshape [| 1; numel fb |] fb)\n    in\n    let r = if ndim a = 0 then squeeze ~axes:[ 0 ] r else r in\n    if ndim b = 0 then squeeze ~axes:[ (if ndim a = 0 then 0 else 1) ] r else r\n\n  let tensordot ?axes a b =\n    match axes with\n    | None -> matmul a b\n    | Some (axes_a, axes_b) ->\n        let n_axes = List.length axes_a in\n        if n_axes <> List.length axes_b then\n          invalid_arg \"tensordot: axes lists must have same length\";\n        let ndim_a = ndim a in\n        let ndim_b = ndim b in\n        let axes_a =\n          Array.of_list\n            (List.map (fun ax -> if ax < 0 then ndim_a + ax else ax) axes_a)\n        in\n        let axes_b =\n          Array.of_list\n            (List.map (fun ax -> if ax < 0 then ndim_b + ax else ax) axes_b)\n        in\n        let sa = shape a in\n        let sb = shape b in\n        Array.iter2\n          (fun ax_a ax_b ->\n            if sa.(ax_a) <> sb.(ax_b) then\n              invalid_arg \"tensordot: axes have different sizes\")\n          axes_a axes_b;\n        let axes_a_set =\n          Array.fold_left (fun s x -> IntSet.add x 
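(* Worked shape example (illustrative, not part of the module): for\n          a : [|2; 3; 4|], b : [|4; 5|] and axes ([ 2 ], [ 0 ]), the contracted\n          axis has size 4 on both sides; free_a = [0; 1], free_b = [1], so the\n          contraction is a 6x4 by 4x5 matmul reshaped to [|2; 3; 5|]. *) 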
s) IntSet.empty axes_a\n        in\n        let axes_b_set =\n          Array.fold_left (fun s x -> IntSet.add x s) IntSet.empty axes_b\n        in\n        let free_a =\n          Array.of_list\n            (List.filter\n               (fun i -> not (IntSet.mem i axes_a_set))\n               (List.init ndim_a Fun.id))\n        in\n        let free_b =\n          Array.of_list\n            (List.filter\n               (fun i -> not (IntSet.mem i axes_b_set))\n               (List.init ndim_b Fun.id))\n        in\n        let perm_a = Array.append free_a axes_a in\n        let perm_b = Array.append axes_b free_b in\n        let do_transpose perm t =\n          if Array.length perm > 1 then\n            contiguous (transpose ~axes:(Array.to_list perm) t)\n          else t\n        in\n        let at = do_transpose perm_a a in\n        let bt = do_transpose perm_b b in\n        let sat = shape at in\n        let sbt = shape bt in\n        let nfa = Array.length free_a in\n        let nfb = Array.length free_b in\n        let prod arr = Array.fold_left ( * ) 1 arr in\n        let free_size_a = if nfa = 0 then 1 else prod (Array.sub sat 0 nfa) in\n        let free_size_b =\n          if nfb = 0 then 1 else prod (Array.sub sbt n_axes (ndim_b - n_axes))\n        in\n        let contract_size = prod (Array.sub sat nfa n_axes) in\n        let r =\n          matmul\n            (reshape [| free_size_a; contract_size |] at)\n            (reshape [| contract_size; free_size_b |] bt)\n        in\n        let result_shape =\n          Array.append\n            (if nfa = 0 then [||] else Array.sub sat 0 nfa)\n            (if nfb = 0 then [||] else Array.sub sbt n_axes (ndim_b - n_axes))\n        in\n        if Array.length result_shape = 0 then squeeze r\n        else reshape result_shape r\n\n  module Einsum = struct\n    type token = Axis of char | Ellipsis\n\n    let parse_operand str =\n      let len = String.length str in\n      if len = 0 then []\n      else\n        let 
rec loop idx acc ell =\n          if idx >= len then List.rev acc\n          else\n            match str.[idx] with\n            | '.' ->\n                if\n                  idx + 2 >= len || str.[idx + 1] <> '.' || str.[idx + 2] <> '.'\n                then invalid_arg \"einsum: ellipsis must be '...'\";\n                if ell then invalid_arg \"einsum: multiple ellipsis in operand\";\n                loop (idx + 3) (Ellipsis :: acc) true\n            | c\n              when (c >= 'a' && c <= 'z')\n                   || (c >= 'A' && c <= 'Z')\n                   || (c >= '0' && c <= '9')\n                   || c = '_' ->\n                loop (idx + 1) (Axis c :: acc) ell\n            | c ->\n                invalid_arg (Printf.sprintf \"einsum: invalid character '%c'\" c)\n        in\n        loop 0 [] false\n\n    let parse_equation subscripts =\n      let parts = String.split_on_char '-' subscripts in\n      match parts with\n      | [ lhs; rhs ] when String.length rhs > 0 && rhs.[0] = '>' ->\n          let inputs =\n            String.split_on_char ',' lhs\n            |> List.map String.trim\n            |> List.filter (( <> ) \"\")\n          in\n          let output = String.trim (String.sub rhs 1 (String.length rhs - 1)) in\n          ( Array.of_list (List.map parse_operand inputs),\n            Some (parse_operand output) )\n      | [ lhs ] ->\n          let inputs =\n            String.split_on_char ',' lhs\n            |> List.map String.trim\n            |> List.filter (( <> ) \"\")\n          in\n          (Array.of_list (List.map parse_operand inputs), None)\n      | _ -> invalid_arg \"einsum: invalid format, expected inputs->output\"\n\n    let handle_repeated_indices tensor tokens =\n      let rec find_dups acc idx = function\n        | [] -> None\n        | Axis c :: rest -> (\n            match List.find_opt (fun (ch, _) -> ch = c) acc with\n            | Some (_, prev) -> Some (prev, idx, c)\n            | None -> find_dups ((c, idx) :: acc) 
(idx + 1) rest)\n        | Ellipsis :: rest -> find_dups acc (idx + 1) rest\n      in\n      let rec process t toks =\n        match find_dups [] 0 toks with\n        | None -> (t, toks)\n        | Some (ax1, ax2, c) ->\n            let s = shape t in\n            if s.(ax1) <> s.(ax2) then\n              invalid_arg\n                (Printf.sprintf\n                   \"einsum: index var '%c' must have consistent dimensions (%d \\\n                    vs %d)\"\n                   c s.(ax1) s.(ax2));\n            let t' = diagonal ~axis1:ax1 ~axis2:ax2 t in\n            let rec remove_at i = function\n              | [] -> []\n              | _ :: xs when i = 0 -> xs\n              | x :: xs -> x :: remove_at (i - 1) xs\n            in\n            process t' (remove_at ax2 toks)\n      in\n      process tensor tokens\n\n    type tensor_info = { id : int; shape : int array; axis_labels : char list }\n\n    type contraction_path =\n      | Leaf of int\n      | Node of contraction_path * contraction_path * tensor_info\n\n    let estimate_cost (t1 : tensor_info) (t2 : tensor_info) common_chars =\n      let dim_map = Hashtbl.create 16 in\n      List.iteri\n        (fun i c -> Hashtbl.replace dim_map c t1.shape.(i))\n        t1.axis_labels;\n      List.iteri\n        (fun i c -> Hashtbl.replace dim_map c t2.shape.(i))\n        t2.axis_labels;\n      let all = List.sort_uniq Char.compare (t1.axis_labels @ t2.axis_labels) in\n      let output_size =\n        List.fold_left\n          (fun acc c ->\n            if List.mem c common_chars then acc\n            else acc * Hashtbl.find dim_map c)\n          1 all\n      in\n      let op_cost =\n        List.fold_left (fun acc c -> acc * Hashtbl.find dim_map c) 1 all\n      in\n      (float_of_int op_cost, float_of_int output_size)\n\n    let optimize_path inputs output_chars =\n      let workset = ref (List.mapi (fun i t -> (Leaf i, t)) inputs) in\n      let contract_info (p1, t1) (p2, t2) =\n        let common =\n          
List.filter (fun c -> List.mem c t2.axis_labels) t1.axis_labels\n        in\n        let new_labels =\n          let all =\n            List.sort_uniq Char.compare (t1.axis_labels @ t2.axis_labels)\n          in\n          List.filter\n            (fun c -> (not (List.mem c common)) || List.mem c output_chars)\n            all\n        in\n        let find_index x lst =\n          let rec aux i = function\n            | [] -> raise Not_found\n            | h :: _ when h = x -> i\n            | _ :: t -> aux (i + 1) t\n          in\n          aux 0 lst\n        in\n        let get_dim c =\n          if List.mem c t1.axis_labels then\n            t1.shape.(find_index c t1.axis_labels)\n          else t2.shape.(find_index c t2.axis_labels)\n        in\n        let new_shape = Array.of_list (List.map get_dim new_labels) in\n        let info = { id = -1; shape = new_shape; axis_labels = new_labels } in\n        let cost, size =\n          estimate_cost t1 t2\n            (List.filter (fun c -> not (List.mem c new_labels)) common)\n        in\n        (cost, size, Node (p1, p2, info), info)\n      in\n      while List.length !workset > 1 do\n        let items = !workset in\n        let best = ref None in\n        let min_cost = ref Float.infinity in\n        let rec iter_pairs = function\n          | [] -> ()\n          | x :: rest ->\n              List.iter\n                (fun y ->\n                  let cost, _, path, info = contract_info x y in\n                  if cost < !min_cost then (\n                    min_cost := cost;\n                    best := Some (x, y, path, info)))\n                rest;\n              iter_pairs rest\n        in\n        iter_pairs items;\n        match !best with\n        | None -> failwith \"einsum: could not find valid contraction\"\n        | Some (i1, i2, new_path, new_info) ->\n            workset :=\n              (new_path, new_info)\n              :: List.filter (fun x -> x != i1 && x != i2) items\n      done;\n      
match !workset with\n      | [ (p, _) ] -> p\n      | _ -> failwith \"einsum: optimization failed\"\n\n    let contract_pair op_a str_a op_b str_b result_str =\n      let sa = shape op_a in\n      let sb = shape op_b in\n      let chars_a = String.to_seq str_a |> List.of_seq in\n      let chars_b = String.to_seq str_b |> List.of_seq in\n      let chars_out = String.to_seq result_str |> List.of_seq in\n      let batch_chars =\n        List.filter\n          (fun c -> List.mem c chars_b && List.mem c chars_out)\n          chars_a\n      in\n      let contract_chars =\n        List.filter\n          (fun c -> List.mem c chars_b && not (List.mem c chars_out))\n          chars_a\n      in\n      let a_free = List.filter (fun c -> not (List.mem c chars_b)) chars_a in\n      let b_free = List.filter (fun c -> not (List.mem c chars_a)) chars_b in\n      let get_axes source target =\n        List.map\n          (fun c ->\n            let rec find i = function\n              | [] -> failwith \"char not found\"\n              | x :: _ when x = c -> i\n              | _ :: xs -> find (i + 1) xs\n            in\n            find 0 source)\n          target\n      in\n      let perm_a = get_axes chars_a (batch_chars @ a_free @ contract_chars) in\n      let perm_b = get_axes chars_b (batch_chars @ contract_chars @ b_free) in\n      let is_identity perm n =\n        let rec check i = function\n          | [] -> i = n\n          | x :: xs -> x = i && check (i + 1) xs\n        in\n        check 0 perm\n      in\n      let at =\n        if is_identity perm_a (String.length str_a) then op_a\n        else contiguous (transpose ~axes:perm_a op_a)\n      in\n      let bt =\n        if is_identity perm_b (String.length str_b) then op_b\n        else contiguous (transpose ~axes:perm_b op_b)\n      in\n      let prod dims = Array.fold_left ( * ) 1 dims in\n      let pa = Array.of_list perm_a in\n      let pb = Array.of_list perm_b in\n      let nb = List.length batch_chars in\n      let naf 
= List.length a_free in\n      let nc = List.length contract_chars in\n      let nbf = List.length b_free in\n      let batch_dims =\n        Array.init nb (fun i ->\n            let da = sa.(pa.(i)) in\n            let db = sb.(pb.(i)) in\n            if da = db then da\n            else if da = 1 then db\n            else if db = 1 then da\n            else\n              invalid_arg\n                (Printf.sprintf\n                   \"einsum: incompatible broadcast dimensions (%d vs %d)\" da db))\n      in\n      let a_free_dims = Array.init naf (fun i -> sa.(pa.(nb + i))) in\n      let contract_dims = Array.init nc (fun i -> sa.(pa.(nb + naf + i))) in\n      let b_free_dims = Array.init nbf (fun i -> sb.(pb.(nb + nc + i))) in\n      let bs = prod batch_dims in\n      let m = prod a_free_dims in\n      let k = prod contract_dims in\n      let n = prod b_free_dims in\n      let broadcast_batch tensor parr src_shape =\n        if nb = 0 then tensor\n        else\n          let needs = ref false in\n          let target =\n            Array.init (ndim tensor) (fun i ->\n                if i < nb then (\n                  let src = src_shape.(parr.(i)) in\n                  let tgt = batch_dims.(i) in\n                  if src <> tgt then needs := true;\n                  tgt)\n                else src_shape.(parr.(i)))\n          in\n          if !needs then broadcast_to target tensor else tensor\n      in\n      let at = broadcast_batch at pa sa in\n      let bt = broadcast_batch bt pb sb in\n      let r = matmul (reshape [| bs; m; k |] at) (reshape [| bs; k; n |] bt) in\n      let intermediate =\n        reshape (Array.concat [ batch_dims; a_free_dims; b_free_dims ]) r\n      in\n      let inter_chars = batch_chars @ a_free @ b_free in\n      if inter_chars = chars_out then intermediate\n      else transpose ~axes:(get_axes inter_chars chars_out) intermediate\n\n    let calculate subscripts operands =\n      let n_ops = Array.length operands in\n      if n_ops 
= 0 then invalid_arg \"einsum: no input operands\";\n      match (subscripts, n_ops) with\n      | \"i,i->\", 2 -> sum (mul operands.(0) operands.(1))\n      | \"ij,jk->ik\", 2 -> matmul operands.(0) operands.(1)\n      | \"ij->ji\", 1 -> transpose operands.(0)\n      | _ ->\n          let input_tokens, output_opt = parse_equation subscripts in\n          if Array.length input_tokens <> n_ops then\n            invalid_arg \"einsum: number of inputs must equal number of operands\";\n          let ell_rank =\n            let max_rank = ref 0 in\n            for i = 0 to n_ops - 1 do\n              let n_named =\n                List.length\n                  (List.filter\n                     (function Axis _ -> true | _ -> false)\n                     input_tokens.(i))\n              in\n              let r = ndim operands.(i) - n_named in\n              if r < 0 then\n                invalid_arg \"einsum: operand rank too small for subscripts\";\n              if r > !max_rank then max_rank := r\n            done;\n            !max_rank\n          in\n          let get_ell_char i = char_of_int (200 + i) in\n          let normalized_inputs =\n            Array.mapi\n              (fun i tokens ->\n                let op = operands.(i) in\n                let n_named =\n                  List.length\n                    (List.filter\n                       (function Axis _ -> true | _ -> false)\n                       tokens)\n                in\n                let ell_dim = ndim op - n_named in\n                let expanded =\n                  List.concat_map\n                    (function\n                      | Axis c -> [ Axis c ]\n                      | Ellipsis ->\n                          List.init ell_dim (fun k ->\n                              Axis (get_ell_char (ell_rank - ell_dim + k))))\n                    tokens\n                in\n                let op_diag, final = handle_repeated_indices op expanded in\n                let chars =\n           
       List.map (function Axis c -> c | _ -> assert false) final\n                in\n                ({ id = i; shape = shape op_diag; axis_labels = chars }, op_diag))\n              input_tokens\n          in\n          let ops_info = Array.map fst normalized_inputs in\n          let ops_tensors = Array.map snd normalized_inputs in\n          (* Validate dimension consistency *)\n          let char_dims = Hashtbl.create 16 in\n          Array.iter\n            (fun info ->\n              List.iteri\n                (fun idx c ->\n                  let d = info.shape.(idx) in\n                  match Hashtbl.find_opt char_dims c with\n                  | None -> Hashtbl.add char_dims c d\n                  | Some prev ->\n                      if prev <> d && prev <> 1 && d <> 1 then\n                        invalid_arg\n                          (Printf.sprintf\n                             \"einsum: index var '%c' must have consistent \\\n                              dimensions (%d vs %d)\"\n                             c prev d)\n                      else if d > prev then Hashtbl.replace char_dims c d)\n                info.axis_labels)\n            ops_info;\n          let inputs_have_ell =\n            Array.exists\n              (fun toks -> List.exists (( = ) Ellipsis) toks)\n              input_tokens\n          in\n          let target_chars =\n            match output_opt with\n            | Some tokens ->\n                if List.exists (( = ) Ellipsis) tokens && not inputs_have_ell\n                then\n                  invalid_arg\n                    \"einsum: output ellipsis requires ellipsis in inputs\";\n                List.concat_map\n                  (function\n                    | Axis c -> [ c ]\n                    | Ellipsis -> List.init ell_rank (fun k -> get_ell_char k))\n                  tokens\n            | None ->\n                let all_chars =\n                  List.concat\n                    (Array.to_list\n               
        (Array.map\n                          (fun toks ->\n                            List.filter_map\n                              (function Axis c -> Some c | Ellipsis -> None)\n                              toks)\n                          input_tokens))\n                in\n                let counts = Hashtbl.create 16 in\n                List.iter\n                  (fun c ->\n                    Hashtbl.replace counts c\n                      (1\n                      + (Hashtbl.find_opt counts c |> Option.value ~default:0)))\n                  all_chars;\n                let ell_chars = List.init ell_rank (fun k -> get_ell_char k) in\n                let named =\n                  List.filter (fun c -> int_of_char c < 200) all_chars\n                  |> List.sort_uniq Char.compare\n                  |> List.filter (fun c -> Hashtbl.find counts c = 1)\n                in\n                ell_chars @ named\n          in\n          let all_input_chars =\n            Array.fold_left (fun acc info -> acc @ info.axis_labels) [] ops_info\n          in\n          List.iter\n            (fun c ->\n              if not (List.mem c all_input_chars) then\n                invalid_arg\n                  (Printf.sprintf\n                     \"einsum: output index '%c' not found in inputs\" c))\n            target_chars;\n          (* Pre-reduce single-operand axes absent from output *)\n          Array.iteri\n            (fun i info ->\n              let reduce_axes = ref [] in\n              let new_labels = ref [] in\n              let char_count = Hashtbl.create 16 in\n              Array.iter\n                (fun inf ->\n                  List.iter\n                    (fun c ->\n                      Hashtbl.replace char_count c\n                        (1\n                        + (Hashtbl.find_opt char_count c\n                          |> Option.value ~default:0)))\n                    inf.axis_labels)\n                ops_info;\n              List.iteri\n  
              (fun axis_idx c ->\n                  if\n                    Hashtbl.find char_count c = 1\n                    && not (List.mem c target_chars)\n                  then reduce_axes := axis_idx :: !reduce_axes\n                  else new_labels := c :: !new_labels)\n                info.axis_labels;\n              match !reduce_axes with\n              | [] -> ()\n              | axes ->\n                  ops_tensors.(i) <- sum ~axes:(List.rev axes) ops_tensors.(i);\n                  ops_info.(i) <-\n                    {\n                      info with\n                      shape = shape ops_tensors.(i);\n                      axis_labels = List.rev !new_labels;\n                    })\n            ops_info;\n          let finalize result current_chars =\n            let reduce =\n              List.filter_map\n                (fun (i, c) ->\n                  if not (List.mem c target_chars) then Some i else None)\n                (List.mapi (fun i c -> (i, c)) current_chars)\n            in\n            let result =\n              if reduce = [] then result else sum ~axes:reduce result\n            in\n            let final =\n              List.filter (fun c -> List.mem c target_chars) current_chars\n            in\n            if final = target_chars then result\n            else\n              let perm =\n                List.map\n                  (fun c ->\n                    let rec find i = function\n                      | [] -> 0\n                      | x :: xs -> if x = c then i else find (i + 1) xs\n                    in\n                    find 0 final)\n                  target_chars\n              in\n              transpose ~axes:perm result\n          in\n          if n_ops = 1 then finalize ops_tensors.(0) ops_info.(0).axis_labels\n          else if n_ops = 2 then\n            let ia = ops_info.(0) in\n            let ib = ops_info.(1) in\n            let stra = ia.axis_labels |> List.to_seq |> String.of_seq in\n            
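(* Worked example (illustrative): for \"ij,jk->ik\" the shared label 'j'\n               is absent from the output, so it is contracted away;\n               [result_labels] is then ['i'; 'k'] and the [contract_pair]\n               call below reduces to a single batched matmul. *)\n            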
let strb = ib.axis_labels |> List.to_seq |> String.of_seq in\n            let common =\n              List.filter (fun c -> List.mem c ib.axis_labels) ia.axis_labels\n            in\n            let result_labels =\n              List.sort_uniq Char.compare (ia.axis_labels @ ib.axis_labels)\n              |> List.filter (fun c ->\n                  (not (List.mem c common)) || List.mem c target_chars)\n            in\n            let str_out = result_labels |> List.to_seq |> String.of_seq in\n            finalize\n              (contract_pair ops_tensors.(0) stra ops_tensors.(1) strb str_out)\n              result_labels\n          else\n            let plan = optimize_path (Array.to_list ops_info) target_chars in\n            let rec execute = function\n              | Leaf idx ->\n                  ( ops_tensors.(idx),\n                    ops_info.(idx).axis_labels |> List.to_seq |> String.of_seq\n                  )\n              | Node (left, right, info) ->\n                  let ra, sa = execute left in\n                  let rb, sb = execute right in\n                  let so = info.axis_labels |> List.to_seq |> String.of_seq in\n                  (contract_pair ra sa rb sb so, so)\n            in\n            let result, rstr = execute plan in\n            finalize result (String.to_seq rstr |> List.of_seq)\n  end\n\n  let einsum subscripts operands = Einsum.calculate subscripts operands\n\n  let kron a b =\n    let sa = shape a in\n    let sb = shape b in\n    let a2 = if ndim a = 1 then reshape [| sa.(0); 1 |] a else a in\n    let b2 = if ndim b = 1 then reshape [| sb.(0); 1 |] b else b in\n    let sa2 = shape a2 in\n    let sb2 = shape b2 in\n    let r =\n      mul\n        (reshape [| sa2.(0); 1; sa2.(1); 1 |] a2)\n        (reshape [| 1; sb2.(0); 1; sb2.(1) |] b2)\n    in\n    let flat = reshape [| sa2.(0) * sb2.(0); sa2.(1) * sb2.(1) |] r in\n    if ndim a = 1 && ndim b = 1 then flatten flat else flat\n\n  let multi_dot arrays =\n    match arrays 
with\n    | [||] -> invalid_arg \"multi_dot: empty array\"\n    | [| arr |] -> arr\n    | _ ->\n        let n = Array.length arrays in\n        let dims = Array.make (n + 1) 0 in\n        let matrix_dims idx =\n          let t = arrays.(idx) in\n          match ndim t with\n          | 1 ->\n              let len = (shape t).(0) in\n              if idx = 0 then (1, len)\n              else if idx = n - 1 then (len, 1)\n              else\n                invalid_arg\n                  \"multi_dot: only first and last arguments may be 1D vectors\"\n          | 2 ->\n              let s = shape t in\n              (s.(0), s.(1))\n          | _ ->\n              invalid_arg\n                (Printf.sprintf\n                   \"multi_dot: argument %d must be 1D (endpoints) or 2D matrix\"\n                   idx)\n        in\n        for i = 0 to n - 1 do\n          let rows, cols = matrix_dims i in\n          if i = 0 then dims.(0) <- rows\n          else if dims.(i) <> rows then\n            invalid_arg\n              (Printf.sprintf\n                 \"multi_dot: shapes not aligned between arguments %d and %d \\\n                  (%d <> %d)\"\n                 (i - 1) i dims.(i) rows);\n          dims.(i + 1) <- cols\n        done;\n        (* MCM dynamic programming *)\n        let d64 = Array.map Int64.of_int dims in\n        let cost = Array.make_matrix n n Int64.zero in\n        let split = Array.make_matrix n n 0 in\n        for len = 2 to n do\n          for i = 0 to n - len do\n            let j = i + len - 1 in\n            let best_c = ref Int64.max_int in\n            let best_s = ref i in\n            for k = i to j - 1 do\n              let c =\n                Int64.(\n                  add\n                    cost.(i).(k)\n                    (add\n                       cost.(k + 1).(j)\n                       (mul d64.(i) (mul d64.(k + 1) d64.(j + 1)))))\n              in\n              if c < !best_c then (\n                best_c := c;\n         
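(* Textbook matrix-chain example: for dims [| 10; 100; 5; 50 |],\n                   splitting at k = 1 costs 10*100*5 + 10*5*50 = 7500 flops\n                   versus 75000 for k = 0, so split.(0).(2) records 1. *)\n         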
       best_s := k)\n            done;\n            cost.(i).(j) <- !best_c;\n            split.(i).(j) <- !best_s\n          done\n        done;\n        let memo = Array.init n (fun _ -> Array.make n None) in\n        let rec compute i j =\n          match memo.(i).(j) with\n          | Some t -> t\n          | None ->\n              let r =\n                if i = j then arrays.(i)\n                else\n                  matmul\n                    (compute i split.(i).(j))\n                    (compute (split.(i).(j) + 1) j)\n              in\n              memo.(i).(j) <- Some r;\n              r\n        in\n        compute 0 (n - 1)\n\n  let cross ?axis a b =\n    let axis =\n      let ax = Option.value axis ~default:(-1) in\n      if ax < 0 then ndim a + ax else ax\n    in\n    if axis >= ndim a then invalid_arg \"cross: axis out of bounds\";\n    if (shape a).(axis) <> 3 then invalid_arg \"cross: axis dim not 3\";\n    if (shape b).(axis) <> 3 then invalid_arg \"cross: axis dim not 3\";\n    let at i t =\n      squeeze ~axes:[ axis ]\n        (slice_internal\n           (Array.to_list\n              (Array.init (ndim t) (fun j ->\n                   if j = axis then R (i, i + 1) else A)))\n           t)\n    in\n    let c1 = sub (mul (at 1 a) (at 2 b)) (mul (at 2 a) (at 1 b)) in\n    let c2 = sub (mul (at 2 a) (at 0 b)) (mul (at 0 a) (at 2 b)) in\n    let c3 = sub (mul (at 0 a) (at 1 b)) (mul (at 1 a) (at 0 b)) in\n    stack ~axis [ c1; c2; c3 ]\n\n  (* ───── Matrix Decompositions and Solving ───── *)\n\n  let check_square ~op a =\n    let sh = shape a in\n    let n = Array.length sh in\n    if n < 2 then err op \"input requires at least 2D array\";\n    if sh.(n - 1) <> sh.(n - 2) then\n      invalid_arg (Printf.sprintf \"%s: coefficient matrix must be square\" op)\n\n  let check_float_or_complex (type a b) ~op (a : (a, b) t) =\n    match dtype a with\n    | Float16 | Float32 | Float64 | Complex64 | Complex128 -> ()\n    | _ -> err op \"dtype must be 
float or complex\"\n\n  let check_real (type a b) ~op (a : (a, b) t) =\n    match dtype a with\n    | Float16 | Float32 | Float64 -> ()\n    | _ -> err op \"dtype must be real (float)\"\n\n  let cholesky ?upper a =\n    check_square ~op:\"cholesky\" a;\n    check_float_or_complex ~op:\"cholesky\" a;\n    B.cholesky ~upper:(Option.value upper ~default:false) a\n\n  let qr ?mode a =\n    check_float_or_complex ~op:\"qr\" a;\n    let reduced =\n      match mode with Some `Reduced -> true | None | Some `Complete -> false\n    in\n    B.qr ~reduced a\n\n  let svd ?full_matrices a =\n    check_float_or_complex ~op:\"svd\" a;\n    B.svd ~full_matrices:(Option.value full_matrices ~default:false) a\n\n  let svdvals a =\n    check_float_or_complex ~op:\"svdvals\" a;\n    let _, s, _ = B.svd ~full_matrices:false a in\n    s\n\n  let eig a =\n    check_square ~op:\"eig\" a;\n    check_float_or_complex ~op:\"eig\" a;\n    match B.eig ~vectors:true a with\n    | vals, Some vecs -> (vals, vecs)\n    | _ -> invalid_arg \"eig: result, expected eigenvectors\"\n\n  let eigh ?uplo a =\n    check_square ~op:\"eigh\" a;\n    check_real ~op:\"eigh\" a;\n    let _ = uplo in\n    match B.eigh ~vectors:true a with\n    | vals, Some vecs -> (vals, vecs)\n    | _ -> invalid_arg \"eigh: result, expected eigenvectors\"\n\n  let eigvals a =\n    check_square ~op:\"eigvals\" a;\n    check_float_or_complex ~op:\"eigvals\" a;\n    fst (B.eig ~vectors:false a)\n\n  let eigvalsh ?uplo a =\n    check_square ~op:\"eigvalsh\" a;\n    check_real ~op:\"eigvalsh\" a;\n    let _ = uplo in\n    fst (B.eigh ~vectors:false a)\n\n  let norm (type a b) ?ord ?axes ?keepdims (x : (a, b) t) =\n    let keepdims = Option.value keepdims ~default:false in\n    match (ord, axes) with\n    | None, None -> sqrt (sum (square (abs x)) ~keepdims)\n    | None, Some _ | Some `Fro, _ -> sqrt (sum (square (abs x)) ?axes ~keepdims)\n    | Some `One, None ->\n        max (sum (abs x) ~axes:[ ndim x - 2 ] ~keepdims) ~keepdims\n    
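(* Example (sketch): `One is the maximum absolute column sum, e.g.\n       for [ [ 1.; -2. ]; [ 3.; 4. ] ] the column sums are 4. and 6., so\n       the 1-norm is 6.; dually, `Inf below takes the maximum absolute\n       row sum. *)\n    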
| Some `NegOne, None ->\n        if ndim x = 1 then min (abs x) ~keepdims\n        else min (sum (abs x) ~axes:[ ndim x - 2 ] ~keepdims) ~keepdims\n    | Some `Two, None -> max (svdvals x |> cast (dtype x)) ~keepdims\n    | Some `NegTwo, None -> min (svdvals x |> cast (dtype x)) ~keepdims\n    | Some `Inf, None ->\n        if ndim x = 1 then max (abs x) ~keepdims\n        else max (sum (abs x) ~axes:[ ndim x - 1 ] ~keepdims) ~keepdims\n    | Some `NegInf, None ->\n        if ndim x = 1 then min (abs x) ~keepdims\n        else min (sum (abs x) ~axes:[ ndim x - 1 ] ~keepdims) ~keepdims\n    | Some `Nuc, None ->\n        if ndim x < 2 then\n          invalid_arg \"norm: input, nuclear norm defined for matrices\";\n        sum (svdvals x |> cast (dtype x)) ~keepdims\n    | Some `NegOne, _ | Some `NegTwo, _ | Some `NegInf, _ | Some `Nuc, _ ->\n        invalid_arg \"norm: this combination of ord and axis not implemented\"\n    | Some (`P p), _ ->\n        if p = 1.0 && axes = None && ndim x = 2 then\n          max (sum (abs x) ~axes:[ ndim x - 2 ] ~keepdims) ~keepdims\n        else\n          let p_t =\n            full (B.context x) (dtype x) [||] (Dtype.of_float (dtype x) p)\n          in\n          let inv_p =\n            div (full (B.context x) (dtype x) [||] (Dtype.one (dtype x))) p_t\n          in\n          pow (sum (pow (abs x) p_t) ?axes ~keepdims) inv_p\n    | _ -> invalid_arg \"norm: this combination of ord and axis not implemented\"\n\n  let rec slogdet a =\n    check_square ~op:\"slogdet\" a;\n    check_float_or_complex ~op:\"slogdet\" a;\n    let dtype_a = dtype a in\n    let is_complex =\n      Dtype.equal dtype_a Dtype.complex64\n      || Dtype.equal dtype_a Dtype.complex128\n    in\n    let sh = shape a in\n    let rank = Array.length sh in\n    if (not is_complex) && sh.(rank - 1) = 2 && sh.(rank - 2) = 2 then\n      (* 2x2 fast path *)\n      let prefix = List.init (Stdlib.max 0 (rank - 2)) (fun _ -> A) in\n      let a11 = slice_internal (prefix @ [ I 0; I 0 ]) a in\n    
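(* Sketch of the fast path: det = a11*a22 - a12*a21, returned as a\n         (sign, log |det|) pair to avoid under/overflow; e.g. a diagonal\n         matrix [[2; 0]; [0; 3]] yields (1., log 6.), while a singular\n         matrix yields (0., neg_infinity). *)\n      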
  let a12 = slice_internal (prefix @ [ I 0; I 1 ]) a in\n      let a21 = slice_internal (prefix @ [ I 1; I 0 ]) a in\n      let a22 = slice_internal (prefix @ [ I 1; I 1 ]) a in\n      let det64 = sub (mul a11 a22) (mul a12 a21) |> cast Dtype.float64 in\n      let z = zeros (B.context det64) Dtype.float64 (shape det64) in\n      let sign_float =\n        sub\n          (cast Dtype.float32 (cast Dtype.float64 (greater det64 z)))\n          (cast Dtype.float32 (cast Dtype.float64 (less det64 z)))\n      in\n      let abs_det = abs det64 in\n      let logdet =\n        cast Dtype.float32\n          (where (cmpeq abs_det z)\n             (full (B.context det64) Dtype.float64 (shape det64)\n                Float.neg_infinity)\n             (log abs_det))\n      in\n      (sign_float, logdet)\n    else\n      let _q, r = B.qr ~reduced:false a in\n      let r_diag = diagonal r in\n      let sign_det =\n        let signs = sign r_diag in\n        if ndim signs > 1 then prod signs ~axes:[ -1 ] ~keepdims:false\n        else prod signs\n      in\n      let sign_float = cast Dtype.float32 (cast Dtype.float64 sign_det) in\n      let abs_f64 = cast Dtype.float64 (abs r_diag) in\n      let z = zeros (B.context abs_f64) Dtype.float64 (shape abs_f64) in\n      let log_abs =\n        where (cmpeq abs_f64 z)\n          (full (B.context abs_f64) Dtype.float64 (shape abs_f64)\n             Float.neg_infinity)\n          (log abs_f64)\n      in\n      let logdet64 =\n        if ndim log_abs > 1 then sum log_abs ~axes:[ -1 ] ~keepdims:false\n        else sum log_abs\n      in\n      (sign_float, cast Dtype.float32 logdet64)\n\n  and det a =\n    check_square ~op:\"det\" a;\n    check_float_or_complex ~op:\"det\" a;\n    let sign, logabs = slogdet a in\n    mul (cast (dtype a) sign) (exp logabs |> cast (dtype a))\n\n  let matrix_rank ?tol ?rtol ?hermitian a =\n    check_float_or_complex ~op:\"matrix_rank\" a;\n    let s =\n      match hermitian with\n      | Some true -> abs (fst (B.eigh 
~vectors:false a))\n      | _ -> svdvals a\n    in\n    let max_s = max s |> unsafe_get [] in\n    let sh = shape a in\n    let m = sh.(Array.length sh - 2) in\n    let n = sh.(Array.length sh - 1) in\n    let eps =\n      let dt = dtype a in\n      if Dtype.equal dt Dtype.float32 || Dtype.equal dt Dtype.complex64 then\n        1.2e-7\n      else if Dtype.equal dt Dtype.float64 || Dtype.equal dt Dtype.complex128\n      then 2.2e-16\n      else 1e-15\n    in\n    let tol =\n      match (tol, rtol) with\n      | Some t, _ -> t\n      | None, Some r -> r *. max_s\n      | None, None -> float_of_int (Stdlib.max m n) *. eps *. max_s\n    in\n    let mask = greater s (scalar (B.context a) (dtype s) tol) in\n    int_of_float (Float.round (sum (cast (dtype s) mask) |> unsafe_get []))\n\n  let trace ?offset a =\n    if ndim a < 2 then invalid_arg \"trace: input requires at least 2D array\";\n    sum\n      (diagonal ~offset:(Option.value offset ~default:0) a)\n      ~axes:[ -1 ] ~keepdims:false\n\n  let solve a b =\n    check_square ~op:\"solve\" a;\n    check_float_or_complex ~op:\"solve\" a;\n    check_float_or_complex ~op:\"solve\" b;\n    let b_expanded =\n      if ndim a > 2 && ndim b = 2 then\n        let sa = shape a in\n        let sb = shape b in\n        let batch = array_prod (Array.sub sa 0 (ndim a - 2)) in\n        if sb.(0) = batch && sb.(1) = sa.(ndim a - 2) then expand_dims [ -1 ] b\n        else b\n      else b\n    in\n    let q, r = B.qr ~reduced:true a in\n    let r_diag = diagonal r |> cast Dtype.float64 in\n    let m = dim (-2) a in\n    let eps = if Dtype.equal (dtype a) Dtype.float32 then 1e-6 else 1e-12 in\n    let tol_t =\n      full (B.context r_diag) Dtype.float64 (shape r_diag)\n        (eps *. 
float_of_int m)\n    in\n    if sum (cast Dtype.float64 (less (abs r_diag) tol_t)) |> unsafe_get [] > 0.\n    then invalid_arg \"solve: matrix is singular\";\n    let y = matmul (matrix_transpose q) b_expanded in\n    let result =\n      B.triangular_solve ~upper:true ~transpose:false ~unit_diag:false r y\n    in\n    if b_expanded != b then squeeze ~axes:[ ndim result - 1 ] result else result\n\n  let pinv (type a b) ?rtol ?hermitian (a : (a, b) t) =\n    check_float_or_complex ~op:\"pinv\" a;\n    let sh = shape a in\n    let m = sh.(Array.length sh - 2) in\n    let n = sh.(Array.length sh - 1) in\n    let dtype_a = dtype a in\n    let eps =\n      if\n        Dtype.equal dtype_a Dtype.float32 || Dtype.equal dtype_a Dtype.complex64\n      then 1.2e-7\n      else if\n        Dtype.equal dtype_a Dtype.float64\n        || Dtype.equal dtype_a Dtype.complex128\n      then 2.2e-16\n      else 1e-15\n    in\n    let max_dim = float_of_int (Stdlib.max m n) in\n    let cutoff ~max_s =\n      match rtol with\n      | Some r -> r *. max_s *. max_dim\n      | None -> max_dim *. eps *. 
max_s\n    in\n    let pinv_from_factors u s vh =\n      let max_s = max s |> unsafe_get [] in\n      let cutoff = cutoff ~max_s in\n      let ones_s = ones (B.context s) (dtype s) (shape s) in\n      let threshold = scalar (B.context s) (dtype s) cutoff in\n      let mask = greater s threshold in\n      let s_inv =\n        mul (div ones_s (where mask s ones_s)) (cast (dtype s) mask)\n        |> cast dtype_a\n      in\n      let v = matrix_transpose vh in\n      let vs = mul v (unsqueeze ~axes:[ 0 ] s_inv) in\n      if Dtype.is_complex dtype_a then\n        matmul vs (matrix_transpose (conjugate u))\n      else matmul vs (matrix_transpose u)\n    in\n    let pinv_via_svd () =\n      let u, s, vh = B.svd ~full_matrices:false a in\n      pinv_from_factors u s vh\n    in\n    match hermitian with\n    | Some true -> (\n        match B.eigh ~vectors:true a with\n        | vals, Some vecs ->\n            let abs_vals = abs vals in\n            let sign_vals = sign vals in\n            let o = ones (B.context vals) (dtype vals) (shape vals) in\n            let z = zeros (B.context vals) (dtype vals) (shape vals) in\n            let sign_fixed = where (cmpeq sign_vals z) o sign_vals in\n            let vh =\n              mul\n                (expand_dims [ -1 ] (cast dtype_a sign_fixed))\n                (matrix_transpose vecs)\n            in\n            pinv_from_factors vecs abs_vals vh\n        | _ -> pinv_via_svd ())\n    | _ -> pinv_via_svd ()\n\n  let lstsq ?rcond a b =\n    check_float_or_complex ~op:\"lstsq\" a;\n    check_float_or_complex ~op:\"lstsq\" b;\n    let sh = shape a in\n    let m = sh.(Array.length sh - 2) in\n    let n = sh.(Array.length sh - 1) in\n    let rcond_value =\n      match rcond with\n      | Some v -> v\n      | None ->\n          let eps =\n            if Dtype.equal (dtype a) Dtype.float32 then 1.2e-7\n            else if Dtype.equal (dtype a) Dtype.float64 then 2.2e-16\n            else 1e-15\n          in\n          float_of_int 
(Stdlib.max m n)\n          *. eps\n          *. (max (svdvals a) |> unsafe_get [])\n    in\n    let x =\n      if m >= n then\n        let q, r = B.qr ~reduced:true a in\n        let y = matmul (matrix_transpose q) b in\n        let r_sq =\n          if ndim r = 2 then slice_internal [ R (0, n); R (0, n) ] r\n          else slice_internal [ A; R (0, n); R (0, n) ] r\n        in\n        let y_top =\n          if ndim y = 2 then slice_internal [ R (0, n); A ] y\n          else if ndim y = 1 then slice_internal [ R (0, n) ] y\n          else slice_internal [ A; R (0, n); A ] y\n        in\n        B.triangular_solve ~upper:true ~transpose:false ~unit_diag:false r_sq\n          y_top\n      else matmul (pinv a ~rtol:rcond_value) b\n    in\n    let residuals =\n      if m > n then\n        let res = sub b (matmul a x) in\n        sum (square res) ~axes:[ ndim res - 2 ] ~keepdims:false\n      else zeros (B.context a) (dtype b) [||]\n    in\n    (x, residuals, matrix_rank a, svdvals a)\n\n  let inv a =\n    check_square ~op:\"inv\" a;\n    check_float_or_complex ~op:\"inv\" a;\n    let sh = shape a in\n    let n = sh.(Array.length sh - 1) in\n    let batch = Array.sub sh 0 (Array.length sh - 2) in\n    let i =\n      broadcast_to\n        (Array.append batch [| n; n |])\n        (eye (B.context a) (dtype a) n)\n    in\n    try solve a i\n    with Invalid_argument msg when String.sub msg 0 5 = \"solve\" ->\n      invalid_arg (\"inv\" ^ String.sub msg 5 (String.length msg - 5))\n\n  let matrix_power a n =\n    let sh = shape a in\n    let rank = Array.length sh in\n    if rank < 2 then\n      invalid_arg \"matrix_power: input requires at least 2D array\";\n    if sh.(rank - 2) <> sh.(rank - 1) then\n      err \"matrix_power\" \"matrix must be square, got %dx%d\"\n        sh.(rank - 2)\n        sh.(rank - 1);\n    let rec power acc base exp =\n      if exp = 0 then acc\n      else if exp mod 2 = 0 then power acc (matmul base base) (exp / 2)\n      else power (matmul acc 
base) (matmul base base) (exp / 2)\n    in\n    if n = 0 then eye (B.context a) (dtype a) sh.(rank - 1)\n    else if n > 0 then power a a (n - 1)\n    else\n      try\n        let ia = inv a in\n        if -n = 1 then ia else power ia ia (-n - 1)\n      with Invalid_argument _ ->\n        invalid_arg \"matrix_power: singular for negative exponent\"\n\n  let cond ?p x =\n    check_square ~op:\"cond\" x;\n    check_float_or_complex ~op:\"cond\" x;\n    match p with\n    | None | Some `Two ->\n        let s = svdvals x in\n        let ds = dtype s in\n        let mx = max s in\n        let max_v = mx |> unsafe_get [] in\n        let eps =\n          if Dtype.equal ds Dtype.float32 then 1.2e-7\n          else if Dtype.equal ds Dtype.float64 then 2.2e-16\n          else 1e-15\n        in\n        let tol_t = scalar (B.context x) ds (eps *. max_v) in\n        let safe_s = where (greater s tol_t) s tol_t in\n        let mn =\n          if ndim safe_s > 1 then min safe_s ~axes:[ -1 ] ~keepdims:false\n          else min safe_s\n        in\n        cast (dtype x) (div mx mn)\n    | Some `One -> mul (norm ~ord:`One x) (norm ~ord:`One (inv x))\n    | Some `Inf -> mul (norm ~ord:`Inf x) (norm ~ord:`Inf (inv x))\n    | _ -> invalid_arg \"cond: unsupported norm\"\n\n  let tensorsolve ?axes a b =\n    check_float_or_complex ~op:\"tensorsolve\" a;\n    check_float_or_complex ~op:\"tensorsolve\" b;\n    let sa = shape a in\n    let sb = shape b in\n    let ra = Array.length sa in\n    let rb = Array.length sb in\n    if rb = 0 then invalid_arg \"tensorsolve: b must have at least one dimension\";\n    if ra < rb then invalid_arg \"tensorsolve: a, rank must be >= rank of b\";\n    let axes_for_b =\n      match axes with\n      | None -> Array.init rb (fun i -> ra - rb + i)\n      | Some axes ->\n          if List.length axes <> rb then\n            err \"tensorsolve\" \"axes, expected %d entries, got %d\" rb\n              (List.length axes);\n          let seen = Array.make ra false 
in\n          Array.map\n            (fun ax ->\n              let axis = if ax < 0 then ax + ra else ax in\n              if axis < 0 || axis >= ra then\n                err \"tensorsolve\" \"axis %d out of bounds for %dD tensor\" ax ra;\n              if seen.(axis) then err \"tensorsolve\" \"axis %d, repeated\" ax;\n              seen.(axis) <- true;\n              axis)\n            (Array.of_list axes)\n    in\n    let selected = Array.make ra false in\n    Array.iter (fun ax -> selected.(ax) <- true) axes_for_b;\n    let free =\n      Array.of_list\n        (List.filter (fun ax -> not selected.(ax)) (List.init ra Fun.id))\n    in\n    let perm = Array.append free axes_for_b in\n    let a_perm =\n      let rec is_id i =\n        if i = ra then true else if perm.(i) <> i then false else is_id (i + 1)\n      in\n      if is_id 0 then a else transpose ~axes:(Array.to_list perm) a\n    in\n    let ps = shape a_perm in\n    let nf = Array.length free in\n    let free_shape = Array.sub ps 0 nf in\n    let rhs_shape = Array.sub ps nf rb in\n    if rhs_shape <> sb then\n      err \"tensorsolve\" \"cannot reshape %s to %s\"\n        (Shape.to_string rhs_shape)\n        (Shape.to_string sb);\n    let rows = array_prod free_shape in\n    let cols = array_prod rhs_shape in\n    if rows <> cols then\n      invalid_arg\n        \"tensorsolve: a, leading dimensions must match trailing dimensions\";\n    let a_mat = reshape [| rows; cols |] a_perm in\n    let b_vec = reshape [| rows |] b in\n    let solution =\n      try solve a_mat b_vec\n      with Invalid_argument _ ->\n        let x_col = matmul (pinv a_mat) (reshape [| rows; 1 |] b_vec) in\n        reshape [| cols |] x_col\n    in\n    reshape free_shape solution\n\n  let tensorinv ?ind a =\n    check_float_or_complex ~op:\"tensorinv\" a;\n    let sh = shape a in\n    let rank = Array.length sh in\n    if rank = 0 then\n      invalid_arg \"tensorinv: input must have at least one dimension\";\n    let ind = Option.value 
ind ~default:(rank / 2) in\n    if ind <= 0 || ind >= rank then\n      invalid_arg\n        \"tensorinv: ind must split dimensions into two non-empty groups\";\n    let left = Array.sub sh 0 ind in\n    let right = Array.sub sh ind (rank - ind) in\n    let ls = array_prod left in\n    let rs = array_prod right in\n    if ls <> rs then\n      invalid_arg\n        \"tensorinv: input, leading and trailing dimensions must have equal \\\n         product\";\n    let inv_mat =\n      try inv (reshape [| ls; rs |] a)\n      with Invalid_argument _ -> pinv (reshape [| ls; rs |] a)\n    in\n    reshape (Array.append right left) inv_mat\n\n  (* ───── FFT ───── *)\n\n  type fft_norm = [ `Backward | `Forward | `Ortho ]\n\n  let pad_or_truncate_for_fft x axes s =\n    match s with\n    | None -> x\n    | Some sizes ->\n        let s_arr = Array.of_list sizes in\n        let acc = ref x in\n        List.iteri\n          (fun i ax ->\n            let ax = if ax < 0 then ndim !acc + ax else ax in\n            let cur = dim ax !acc in\n            let target = s_arr.(i) in\n            if target > cur then (\n              let pad_config = Array.make (ndim !acc) (0, 0) in\n              pad_config.(ax) <- (0, target - cur);\n              acc := B.pad !acc pad_config (Dtype.zero (dtype !acc)))\n            else if target < cur then\n              acc :=\n                B.shrink !acc\n                  (Array.init (ndim !acc) (fun idx ->\n                       if idx = ax then (0, target) else (0, dim idx !acc))))\n          axes;\n        !acc\n\n  let fft_norm_scale norm axes_list x =\n    match norm with\n    | `Backward -> 1.0\n    | `Forward ->\n        let n = List.fold_left (fun acc ax -> acc * dim ax x) 1 axes_list in\n        1.0 /. float_of_int n\n    | `Ortho ->\n        let n = List.fold_left (fun acc ax -> acc * dim ax x) 1 axes_list in\n        1.0 /. 
Stdlib.sqrt (float_of_int n)\n\n  (* Inverse: Backward↔Forward swapped, Ortho unchanged *)\n  let ifft_norm_scale norm axes_list x =\n    match norm with\n    | `Backward ->\n        let n = List.fold_left (fun acc ax -> acc * dim ax x) 1 axes_list in\n        1.0 /. float_of_int n\n    | `Forward -> 1.0\n    | `Ortho ->\n        let n = List.fold_left (fun acc ax -> acc * dim ax x) 1 axes_list in\n        1.0 /. Stdlib.sqrt (float_of_int n)\n\n  let apply_fft_scale (type a) scale (result : (Complex.t, a) t) :\n      (Complex.t, a) t =\n    if scale <> 1.0 then\n      let sv =\n        match B.dtype result with\n        | Complex64 | Complex128 -> Complex.{ re = scale; im = 0.0 }\n      in\n      mul result (scalar (B.context result) (B.dtype result) sv)\n    else copy_to_out result\n\n  let fftn (type a) ?axes ?s ?(norm = `Backward) (x : (Complex.t, a) t) :\n      (Complex.t, a) t =\n    let nd = ndim x in\n    let axes_list =\n      match axes with\n      | None -> List.init nd Fun.id\n      | Some a -> List.map (fun ax -> if ax < 0 then nd + ax else ax) a\n    in\n    (match s with\n    | Some sizes when List.length sizes <> List.length axes_list ->\n        invalid_arg \"fft: s parameter must have same length as axes\"\n    | _ -> ());\n    let xp = pad_or_truncate_for_fft x axes_list s in\n    let scale = fft_norm_scale norm axes_list xp in\n    let r = B.fft xp ~axes:(Array.of_list axes_list) in\n    apply_fft_scale scale r\n\n  let ifftn (type a) ?axes ?s ?(norm = `Backward) (x : (Complex.t, a) t) :\n      (Complex.t, a) t =\n    let nd = ndim x in\n    let axes_list =\n      match axes with\n      | None -> List.init nd Fun.id\n      | Some a -> List.map (fun ax -> if ax < 0 then nd + ax else ax) a\n    in\n    (match s with\n    | Some sizes when List.length sizes <> List.length axes_list ->\n        invalid_arg \"ifft: s parameter must have same length as axes\"\n    | _ -> ());\n    let xp = pad_or_truncate_for_fft x axes_list s in\n    let scale = 
ifft_norm_scale norm axes_list xp in\n    let r = B.ifft xp ~axes:(Array.of_list axes_list) in\n    apply_fft_scale scale r\n\n  let rfftn ?axes ?s ?(norm = `Backward) x =\n    let nd = ndim x in\n    let axes_list = match axes with None -> [ nd - 1 ] | Some ax -> ax in\n    let xp = pad_or_truncate_for_fft x axes_list s in\n    let scale = fft_norm_scale norm axes_list xp in\n    let r = B.rfft xp ~dtype:Dtype.Complex128 ~axes:(Array.of_list axes_list) in\n    apply_fft_scale scale r\n\n  let irfftn ?axes ?s ?(norm = `Backward) x =\n    let nd = ndim x in\n    let axes_list = match axes with None -> [ nd - 1 ] | Some ax -> ax in\n    let input_shape = shape x in\n    let output_sizes =\n      match s with\n      | Some sizes -> sizes\n      | None ->\n          List.mapi\n            (fun i axis ->\n              let axis = if axis < 0 then nd + axis else axis in\n              if i = List.length axes_list - 1 then (input_shape.(axis) - 1) * 2\n              else input_shape.(axis))\n            axes_list\n    in\n    let norm_scale =\n      let n = List.fold_left ( * ) 1 output_sizes in\n      match norm with\n      | `Backward -> 1.0 /. float_of_int n\n      | `Forward -> 1.0\n      | `Ortho -> 1.0 /. 
Stdlib.sqrt (float_of_int n)\n    in\n    let s_param =\n      match s with None -> None | Some _ -> Some (Array.of_list output_sizes)\n    in\n    let r =\n      B.irfft ?s:s_param x ~dtype:Dtype.Float64 ~axes:(Array.of_list axes_list)\n    in\n    if norm_scale <> 1.0 then\n      mul r (scalar (B.context r) (B.dtype r) norm_scale)\n    else copy_to_out r\n\n  (* 1D FFT convenience *)\n  let fft ?(axis = -1) ?n ?(norm = `Backward) x =\n    let s = match n with None -> None | Some sz -> Some [ sz ] in\n    fftn x ~axes:[ axis ] ?s ~norm\n\n  let ifft ?(axis = -1) ?n ?(norm = `Backward) x =\n    let s = match n with None -> None | Some sz -> Some [ sz ] in\n    ifftn x ~axes:[ axis ] ?s ~norm\n\n  let rfft ?(axis = -1) ?n ?(norm = `Backward) x =\n    let s = match n with None -> None | Some sz -> Some [ sz ] in\n    rfftn x ~axes:[ axis ] ?s ~norm\n\n  let irfft ?(axis = -1) ?n ?(norm = `Backward) x =\n    let s = match n with None -> None | Some sz -> Some [ sz ] in\n    irfftn x ~axes:[ axis ] ?s ~norm\n\n  (* 2D FFT *)\n\n  let check_fft2 ~op x axes =\n    let n = ndim x in\n    if n < 2 then err op \"input requires at least 2D array, got %dD\" n;\n    let axes_list =\n      match axes with None -> [ n - 2; n - 1 ] | Some ax -> ax\n    in\n    if List.length axes_list <> 2 then err op \"axes must specify exactly 2 axes\";\n    axes_list\n\n  let fft2 ?axes ?s ?(norm = `Backward) x =\n    let axes_list = check_fft2 ~op:\"fft2\" x axes in\n    fftn x ~axes:axes_list ?s ~norm\n\n  let ifft2 ?axes ?s ?(norm = `Backward) x =\n    let axes_list = check_fft2 ~op:\"ifft2\" x axes in\n    ifftn x ~axes:axes_list ?s ~norm\n\n  (* N-dimensional FFT public wrappers *)\n  let fftn ?axes ?s ?(norm = `Backward) x =\n    fftn x\n      ~axes:\n        (match axes with None -> List.init (ndim x) Fun.id | Some ax -> ax)\n      ?s ~norm\n\n  let ifftn ?axes ?s ?(norm = `Backward) x =\n    ifftn x\n      ~axes:\n        (match axes with None -> List.init (ndim x) Fun.id | Some ax -> 
ax)\n      ?s ~norm\n\n  let rfft2 ?axes ?s ?(norm = `Backward) x =\n    let axes_list = check_fft2 ~op:\"rfft2\" x axes in\n    rfftn x ~axes:axes_list ?s ~norm\n\n  let irfft2 ?axes ?s ?(norm = `Backward) x =\n    let axes_list = check_fft2 ~op:\"irfft2\" x axes in\n    irfftn x ~axes:axes_list ?s ~norm\n\n  let rfftn ?axes ?s ?(norm = `Backward) x =\n    rfftn x\n      ~axes:\n        (match axes with None -> List.init (ndim x) Fun.id | Some ax -> ax)\n      ?s ~norm\n\n  let irfftn ?axes ?s ?(norm = `Backward) x =\n    irfftn x\n      ~axes:\n        (match axes with None -> List.init (ndim x) Fun.id | Some ax -> ax)\n      ?s ~norm\n\n  (* Hermitian FFT *)\n  let hfft ?(axis = -1) ?n ?norm x =\n    let n = match n with None -> 2 * (dim axis x - 1) | Some n -> n in\n    let axis = resolve_single_axis x axis in\n    irfftn x ~axes:[ axis ] ~s:[ n ] ?norm\n\n  let ihfft ?(axis = -1) ?n ?norm x =\n    let n = match n with None -> dim axis x | Some n -> n in\n    let axis = resolve_single_axis x axis in\n    rfftn x ~axes:[ axis ] ~s:[ n ] ?norm\n\n  (* FFT helpers *)\n  let fftfreq ctx ?(d = 1.0) n =\n    let dt = Dtype.float64 in\n    let v = 1.0 /. (float_of_int n *. d) in\n    let freqs =\n      if n mod 2 = 0 then\n        concatenate ~axis:0\n          [\n            cast dt (arange ctx Dtype.int32 0 (n / 2) 1);\n            cast dt (arange ctx Dtype.int32 (-(n / 2)) 0 1);\n          ]\n      else\n        concatenate ~axis:0\n          [\n            cast dt (arange ctx Dtype.int32 0 ((n + 1) / 2) 1);\n            cast dt (arange ctx Dtype.int32 (-((n - 1) / 2)) 0 1);\n          ]\n    in\n    mul_s freqs v\n\n  let rfftfreq ctx ?(d = 1.0) n =\n    let dt = Dtype.float64 in\n    let v = 1.0 /. (float_of_int n *. 
d) in\n    mul (cast dt (arange ctx Dtype.int32 0 ((n / 2) + 1) 1)) (scalar ctx dt v)\n\n  let fftshift ?axes x =\n    let sh = shape x in\n    let axes_list =\n      match axes with\n      | None -> List.init (Array.length sh) Fun.id\n      | Some ax -> ax\n    in\n    List.fold_left\n      (fun acc axis ->\n        let axis = resolve_single_axis acc axis in\n        roll (sh.(axis) / 2) acc ~axis)\n      x axes_list\n\n  let ifftshift ?axes x =\n    let sh = shape x in\n    let axes_list =\n      match axes with\n      | None -> List.init (Array.length sh) Fun.id\n      | Some ax -> ax\n    in\n    List.fold_left\n      (fun acc axis ->\n        let axis = resolve_single_axis acc axis in\n        roll (-(sh.(axis) / 2)) acc ~axis)\n      x axes_list\n\n  (* ───── Neural Network Operations ───── *)\n\n  let softmax ?(axes = [ -1 ]) ?(scale = 1.0) x =\n    let nd = Array.length (shape x) in\n    let axes_norm = List.map (fun ax -> if ax < 0 then nd + ax else ax) axes in\n    let max_x = max x ~axes:axes_norm ~keepdims:true in\n    let dt = dtype x in\n    let shifted =\n      if scale = 1.0 then sub x max_x\n      else mul (scalar_like x (Dtype.of_float dt scale)) (sub x max_x)\n    in\n    let e = exp shifted in\n    div e (sum e ~axes:axes_norm ~keepdims:true)\n\n  let log_softmax ?(axes = [ -1 ]) ?(scale = 1.0) x =\n    let axes_norm = normalize_and_dedup_axes ~op:\"log_softmax\" (ndim x) axes in\n    if axes_norm = [] then copy_to_out (zeros_like x)\n    else\n      let max_x = max x ~axes:axes_norm ~keepdims:true in\n      let shifted = sub x max_x in\n      let dt = dtype x in\n      let scaled =\n        if scale = 1.0 then shifted\n        else mul (scalar_like shifted (Dtype.of_float dt scale)) shifted\n      in\n      let log_den = log (sum (exp scaled) ~axes:axes_norm ~keepdims:true) in\n      sub scaled log_den\n\n  let logsumexp ?axes ?(keepdims = false) x =\n    let axes_norm =\n      match axes with\n      | None -> List.init (ndim x) Fun.id\n      | 
Some lst -> normalize_and_dedup_axes ~op:\"logsumexp\" (ndim x) lst\n    in\n    if axes_norm = [] then copy_to_out x\n    else\n      let max_x = max x ~axes:axes_norm ~keepdims:true in\n      let log_sum =\n        add (log (sum (exp (sub x max_x)) ~axes:axes_norm ~keepdims:true)) max_x\n      in\n      if keepdims then copy_to_out log_sum\n      else copy_to_out (squeeze ~axes:(List.rev axes_norm) log_sum)\n\n  let logmeanexp ?axes ?(keepdims = false) x =\n    let axes_norm =\n      match axes with\n      | None -> List.init (ndim x) Fun.id\n      | Some lst -> normalize_and_dedup_axes ~op:\"logmeanexp\" (ndim x) lst\n    in\n    if axes_norm = [] then copy_to_out x\n    else\n      let log_sum = logsumexp ~axes:axes_norm ~keepdims:true x in\n      let count = List.fold_left (fun acc ax -> acc * dim ax x) 1 axes_norm in\n      let log_mean =\n        sub log_sum\n          (log\n             (scalar_like log_sum\n                (Dtype.of_float (dtype x) (float_of_int count))))\n      in\n      if keepdims then copy_to_out log_mean\n      else copy_to_out (squeeze ~axes:(List.rev axes_norm) log_mean)\n\n  let standardize ?axes ?mean:mean_param ?variance:variance_param\n      ?(epsilon = 1e-5) x =\n    let nd = ndim x in\n    let axes_norm =\n      match axes with\n      | None -> List.init nd Fun.id\n      | Some lst -> normalize_and_dedup_axes ~op:\"standardize\" nd lst\n    in\n    let x_shape = shape x in\n    let keep_shape =\n      Array.mapi\n        (fun idx d -> if List.exists (( = ) idx) axes_norm then 1 else d)\n        x_shape\n    in\n    let unaffected =\n      List.filter\n        (fun idx -> not (List.exists (( = ) idx) axes_norm))\n        (List.init nd Fun.id)\n    in\n    let core_shape =\n      Array.of_list (List.map (fun idx -> x_shape.(idx)) unaffected)\n    in\n    let broadcast_param name param =\n      let ps = shape param in\n      if ps = x_shape || ps = keep_shape then param\n      else if ps = core_shape then reshape keep_shape 
param\n      else err \"standardize\" \"%s, shape must match normalized axes\" name\n    in\n    let mean_tensor =\n      match mean_param with\n      | Some m -> broadcast_param \"mean\" m\n      | None ->\n          if axes_norm = [] then x else mean x ~axes:axes_norm ~keepdims:true\n    in\n    let variance_tensor =\n      match variance_param with\n      | Some v -> broadcast_param \"variance\" v\n      | None ->\n          if axes_norm = [] then zeros_like x\n          else var x ~axes:axes_norm ~keepdims:true\n    in\n    div (sub x mean_tensor)\n      (sqrt\n         (add variance_tensor\n            (scalar_like x (Dtype.of_float (dtype x) epsilon))))\n\n  let erf x = unaryop B.erf x\n\n  let extract_patches ~kernel_size ~stride ~dilation ~padding x =\n    B.unfold x ~kernel_size ~stride ~dilation ~padding\n\n  let combine_patches ~output_size ~kernel_size ~stride ~dilation ~padding x =\n    B.fold x ~output_size ~kernel_size ~stride ~dilation ~padding\n\n  (* Correlation and convolution *)\n\n  let correlate_padding ~mode input_spatial k_shape =\n    let k = Array.length k_shape in\n    match mode with\n    | `Valid -> Array.make k (0, 0)\n    | `Full ->\n        Array.init k (fun i ->\n            let p = k_shape.(i) - 1 in\n            (p, p))\n    | `Same ->\n        Array.init k (fun i ->\n            let total = k_shape.(i) - 1 in\n            (total / 2, total - (total / 2)))\n\n  let correlate ?(padding = `Valid) x kernel =\n    let kr = ndim kernel in\n    let xr = ndim x in\n    if xr < kr then err \"correlate\" \"input rank %d < kernel rank %d\" xr kr;\n    let ks = shape kernel in\n    let input_spatial = Array.sub (shape x) (xr - kr) kr in\n    let pad_pairs = correlate_padding ~mode:padding input_spatial ks in\n    let ones_arr = Array.make kr 1 in\n    let x_unf =\n      B.unfold x ~kernel_size:ks ~stride:ones_arr ~dilation:ones_arr\n        ~padding:pad_pairs\n    in\n    let und = ndim x_unf in\n    let kp = (shape x_unf).(und - 2) in\n    
let result =\n      sum (mul x_unf (reshape [| kp; 1 |] kernel)) ~axes:[ und - 2 ]\n    in\n    let leading = Array.sub (shape x) 0 (xr - kr) in\n    let out_spatial =\n      Array.init kr (fun i ->\n          input_spatial.(i) + fst pad_pairs.(i) + snd pad_pairs.(i) - ks.(i) + 1)\n    in\n    reshape (Array.concat [ leading; out_spatial ]) result\n\n  let convolve ?(padding = `Valid) x kernel =\n    correlate ~padding x (flip ~axes:(List.init (ndim kernel) Fun.id) kernel)\n\n  (* Sliding window filters *)\n\n  let sliding_filter ~reduce_fn ~kernel_size ?stride x =\n    let kr = Array.length kernel_size in\n    let stride = match stride with Some s -> s | None -> kernel_size in\n    let ones_arr = Array.make kr 1 in\n    let zeros_arr = Array.make kr (0, 0) in\n    let x_unf =\n      B.unfold x ~kernel_size ~stride ~dilation:ones_arr ~padding:zeros_arr\n    in\n    let und = ndim x_unf in\n    let reduced = reduce_fn x_unf ~axes:[ und - 2 ] ~keepdims:false in\n    let xr = ndim x in\n    let leading = Array.sub (shape x) 0 (xr - kr) in\n    let input_spatial = Array.sub (shape x) (xr - kr) kr in\n    let out_spatial =\n      Array.init kr (fun i ->\n          ((input_spatial.(i) - kernel_size.(i)) / stride.(i)) + 1)\n    in\n    reshape (Array.concat [ leading; out_spatial ]) reduced\n\n  let maximum_filter ~kernel_size ?stride x =\n    sliding_filter\n      ~reduce_fn:(fun x ~axes ~keepdims -> max x ~axes ~keepdims)\n      ~kernel_size ?stride x\n\n  let minimum_filter ~kernel_size ?stride x =\n    sliding_filter\n      ~reduce_fn:(fun x ~axes ~keepdims -> min x ~axes ~keepdims)\n      ~kernel_size ?stride x\n\n  let uniform_filter ~kernel_size ?stride x =\n    sliding_filter\n      ~reduce_fn:(fun x ~axes ~keepdims:_ -> mean x ~axes)\n      ~kernel_size ?stride x\n\n  let one_hot ~num_classes index_tensor =\n    let dt = dtype index_tensor in\n    if not (Dtype.is_int dt || Dtype.is_uint dt) then\n      err \"one_hot\" \"dtype %s, indices must be integer type\"\n 
       (Dtype.to_string dt);\n    let idx_exp = unsqueeze index_tensor ~axes:[ ndim index_tensor ] in\n    let nd_exp = ndim idx_exp in\n    let s = Array.make nd_exp 1 in\n    s.(nd_exp - 1) <- num_classes;\n    let arange_b =\n      reshape s (arange (B.context index_tensor) dt 0 num_classes 1)\n    in\n    cast Dtype.uint8 (cmpeq idx_exp arange_b)\n\n  (* ───── Display and Formatting ───── *)\n\n  let pp_data (type a b) fmt (x : (a, b) t) =\n    let open Format in\n    let view = B.view x in\n    let buffer = B.to_host x in\n    let dtype = dtype x in\n    let shape = View.shape view in\n    let ndim = Array.length shape in\n    let sz = View.numel view in\n    let pp_element fmt (elt : a) =\n      match dtype with\n      | Float16 -> fprintf fmt \"%g\" elt\n      | Float32 -> fprintf fmt \"%g\" elt\n      | Float64 -> fprintf fmt \"%g\" elt\n      | BFloat16 -> fprintf fmt \"%g\" elt\n      | Float8_e4m3 -> fprintf fmt \"%g\" elt\n      | Float8_e5m2 -> fprintf fmt \"%g\" elt\n      | Int8 -> fprintf fmt \"%d\" elt\n      | Int16 -> fprintf fmt \"%d\" elt\n      | Int32 -> fprintf fmt \"%ld\" elt\n      | Int64 -> fprintf fmt \"%Ld\" elt\n      | UInt8 -> fprintf fmt \"%d\" elt\n      | UInt16 -> fprintf fmt \"%d\" elt\n      | UInt32 -> fprintf fmt \"%ld\" elt\n      | UInt64 -> fprintf fmt \"%Ld\" elt\n      | Int4 -> fprintf fmt \"%d\" elt\n      | UInt4 -> fprintf fmt \"%d\" elt\n      | Bool -> fprintf fmt \"%b\" elt\n      | Complex64 -> fprintf fmt \"(%g+%gi)\" elt.re elt.im\n      | Complex128 -> fprintf fmt \"(%g+%gi)\" elt.re elt.im\n    in\n    let edge = 2 in\n    if sz = 0 && ndim > 0 then fprintf fmt \"[]\"\n    else if ndim = 0 then\n      if sz > 0 then\n        pp_element fmt (Nx_buffer.unsafe_get buffer (View.offset view))\n      else fprintf fmt \"<empty scalar>\"\n    else\n      let strides =\n        match View.strides_opt view with\n        | Some s -> s\n        | None ->\n            invalid_arg\n              \"pp_data: cannot print 
tensor with non-materializable view\"\n      in\n      let base_offset = View.offset view in\n      let sep fmt axis first =\n        if not first then (\n          fprintf fmt \",\";\n          if axis = ndim - 1 then fprintf fmt \" \" else pp_print_cut fmt ())\n      in\n      let rec pp_slice fmt indices =\n        let depth = List.length indices in\n        if depth = ndim then\n          let md_index = Array.of_list indices in\n          let offset = Shape.ravel_index md_index strides + base_offset in\n          if offset < 0 || offset >= Nx_buffer.length buffer then\n            fprintf fmt \"<OOB:%d/%d>\" offset (Nx_buffer.length buffer)\n          else pp_element fmt (Nx_buffer.unsafe_get buffer offset)\n        else\n          let axis = depth in\n          let dim_size = shape.(axis) in\n          let truncate = dim_size > edge * 2 in\n          fprintf fmt \"[\";\n          if dim_size > 0 then (\n            if axis < ndim - 1 then pp_open_vbox fmt 0 else pp_open_hbox fmt ();\n            if truncate then (\n              for i = 0 to edge - 1 do\n                sep fmt axis (i = 0);\n                pp_slice fmt (indices @ [ i ])\n              done;\n              fprintf fmt \",\";\n              if axis = ndim - 1 then fprintf fmt \" ..., \"\n              else (\n                pp_print_cut fmt ();\n                fprintf fmt \"...\";\n                pp_print_cut fmt ());\n              for i = dim_size - edge to dim_size - 1 do\n                sep fmt axis (i = dim_size - edge);\n                pp_slice fmt (indices @ [ i ])\n              done)\n            else\n              for i = 0 to dim_size - 1 do\n                sep fmt axis (i = 0);\n                pp_slice fmt (indices @ [ i ])\n              done;\n            pp_close_box fmt ());\n          fprintf fmt \"]\"\n      in\n      (* Print shape and dtype header for non-trivial tensors *)\n      if ndim > 1 || sz > edge * 2 then (\n        fprintf fmt \"%s [%s] \" (Dtype.to_string 
dtype)\n          (Array.to_list shape |> List.map string_of_int |> String.concat \"; \");\n        pp_print_cut fmt ());\n      if sz > 0 then pp_slice fmt [] else fprintf fmt \"[]\"\n\n  let format_to_string pp x =\n    let buf = Stdlib.Buffer.create 1024 in\n    let fmt = Format.formatter_of_buffer buf in\n    pp fmt x;\n    Format.pp_print_flush fmt ();\n    Stdlib.Buffer.contents buf\n\n  let print_with_formatter pp x =\n    pp Format.std_formatter x;\n    Format.pp_print_newline Format.std_formatter ();\n    Format.pp_print_flush Format.std_formatter ()\n\n  let data_to_string x = format_to_string pp_data x\n  let print_data x = print_with_formatter pp_data x\n  let pp_dtype fmt dtype = Format.fprintf fmt \"%s\" (Dtype.to_string dtype)\n  let dtype_to_string dtype = Dtype.to_string dtype\n\n  let shape_to_string shape =\n    Printf.sprintf \"[%s]\"\n      (Array.map string_of_int shape |> Array.to_list |> String.concat \"x\")\n\n  let pp_shape fmt shape = Format.fprintf fmt \"%s\" (shape_to_string shape)\n\n  let pp fmt x =\n    let open Format in\n    let view = B.view x in\n    fprintf fmt \"@[<v 0>\";\n    fprintf fmt \"Nx Info:@,\";\n    fprintf fmt \"  Shape: %s@,\" (Shape.to_string (View.shape view));\n    fprintf fmt \"  Dtype: %a@,\" pp_dtype (dtype x);\n    fprintf fmt \"  Strides: %s@,\"\n      (match View.strides_opt view with\n      | Some s ->\n          \"[\"\n          ^ String.concat \"; \" (Array.to_list (Array.map string_of_int s))\n          ^ \"]\"\n      | None -> \"<non-materializable>\");\n    fprintf fmt \"  Offset: %d@,\" (View.offset view);\n    fprintf fmt \"  Size: %d@,\" (View.numel view);\n    fprintf fmt \"  Data: %a@,\" pp_data x\n\n  let print x = print_with_formatter pp x\n  let to_string x = format_to_string pp x\n\n  (* ───── Higher-order Functions ───── *)\n\n  let map_item f x =\n    let dt = dtype x in\n    let sh = shape x in\n    let result = empty (B.context x) dt sh in\n    let src = data (contiguous x) in\n    let 
dst = data result in\n    let sz = size x in\n    for i = 0 to sz - 1 do\n      Nx_buffer.unsafe_set dst i (f (Nx_buffer.unsafe_get src i))\n    done;\n    result\n\n  let iter_item f x =\n    let src = data (contiguous x) in\n    let sz = size x in\n    for i = 0 to sz - 1 do\n      f (Nx_buffer.unsafe_get src i)\n    done\n\n  let fold_item f init x =\n    let src = data (contiguous x) in\n    let sz = size x in\n    let acc = ref init in\n    for i = 0 to sz - 1 do\n      acc := f !acc (Nx_buffer.unsafe_get src i)\n    done;\n    !acc\n\n  let map f x =\n    let dt = dtype x in\n    let sh = shape x in\n    let result = empty (B.context x) dt sh in\n    let total = size x in\n    for i = 0 to total - 1 do\n      let idx = Shape.unravel_index i sh |> Array.to_list in\n      set idx result (f (get idx x))\n    done;\n    result\n\n  let iter f x =\n    let sh = shape x in\n    let total = size x in\n    for i = 0 to total - 1 do\n      f (get (Shape.unravel_index i sh |> Array.to_list) x)\n    done\n\n  let fold f init x =\n    let sh = shape x in\n    let total = size x in\n    let acc = ref init in\n    for i = 0 to total - 1 do\n      acc := f !acc (get (Shape.unravel_index i sh |> Array.to_list) x)\n    done;\n    !acc\n\n  (* ───── Infix Operators ───── *)\n\n  module Infix = struct\n    let ( + ) a b = add a b\n    let ( +$ ) a s = add_s a s\n    let ( - ) a b = sub a b\n    let ( -$ ) a s = sub_s a s\n    let ( * ) a b = mul a b\n    let ( *$ ) a s = mul_s a s\n    let ( / ) a b = div a b\n    let ( /$ ) a s = div_s a s\n    let ( ** ) a b = pow a b\n    let ( **$ ) a s = pow_s a s\n    let ( % ) a b = mod_ a b\n    let ( mod ) a b = mod_ a b\n    let ( %$ ) a s = mod_s a s\n    let ( lxor ) a b = bitwise_xor a b\n    let ( lor ) a b = bitwise_or a b\n    let ( land ) a b = bitwise_and a b\n    let ( ^ ) a b = logical_xor a b\n    let ( && ) a b = logical_and a b\n    let ( || ) a b = logical_or a b\n    let ( ~- ) x = logical_not x\n    let ( < ) a b = 
less a b\n    let ( <$ ) a b = less_s a b\n    let ( <> ) a b = not_equal a b\n    let ( <>$ ) a b = not_equal_s a b\n    let ( = ) a b = equal a b\n    let ( =$ ) a b = equal_s a b\n    let ( > ) a b = greater a b\n    let ( >$ ) a b = greater_s a b\n    let ( <= ) a b = less_equal a b\n    let ( <=$ ) a b = less_equal_s a b\n    let ( >= ) a b = greater_equal a b\n    let ( >=$ ) a b = greater_equal_s a b\n    let ( @@ ) a b = matmul a b\n    let ( /@ ) = solve\n    let ( **@ ) = matrix_power\n    let ( <.> ) = dot\n    let ( @= ) a b = concatenate ~axis:0 [ a; b ]\n    let ( @|| ) a b = concatenate ~axis:1 [ a; b ]\n    let ( .%{} ) x indices = get indices x\n    let ( .%{}<- ) x indices value = set indices x value\n    let ( .${} ) x slice_def = slice slice_def x\n    let ( .${}<- ) x slice_def value = set_slice slice_def x value\n  end\nend\n"
  },
  {
    "path": "packages/nx/lib/core/nx_core.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nmodule Dtype = Dtype\nmodule Shape = Shape\nmodule View = View\nmodule Backend_intf = Backend_intf\nmodule Rng = Rng\nmodule Make_frontend = Frontend.Make\n"
  },
  {
    "path": "packages/nx/lib/core/nx_core.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Core modules for [nx].\n\n    This module re-exports core building blocks used by backends and the\n    high-level [Nx] frontend. *)\n\nmodule Dtype = Dtype\n(** Tensor element dtypes. *)\n\nmodule Shape = Shape\n(** Concrete shape operations. *)\n\nmodule View = View\n(** Strided tensor views. *)\n\nmodule Backend_intf = Backend_intf\n(** Backend interface used by frontend functors. *)\n\nmodule Rng = Rng\n(** RNG key utilities. *)\n\nmodule Make_frontend = Frontend.Make\n(** Frontend functor parameterized by a backend implementation. *)\n"
  },
  {
    "path": "packages/nx/lib/core/rng.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Keys *)\n\ntype key = int\n\nlet key seed = Stdlib.abs seed land 0x7FFFFFFF\n\nlet hash_int x =\n  let open Int32 in\n  let x = of_int x in\n  let x = logxor x (shift_right_logical x 16) in\n  let x = mul x 0x85ebca6bl in\n  let x = logxor x (shift_right_logical x 13) in\n  let x = mul x 0xc2b2ae35l in\n  let x = logxor x (shift_right_logical x 16) in\n  to_int (logand x 0x7FFFFFFFl)\n\nlet split ?(n = 2) k = Array.init n (fun i -> hash_int ((k * (n + 1)) + i + 1))\nlet fold_in k data = hash_int (k lxor data)\nlet to_int k = k\n\n(* Implicit key management *)\n\ntype _ Effect.t += E_next_key : key Effect.t\n\nlet make_handler root =\n  let counter = ref 0 in\n  let open Effect.Deep in\n  {\n    retc = Fun.id;\n    exnc = raise;\n    effc =\n      (fun (type a) (eff : a Effect.t) ->\n        match eff with\n        | E_next_key ->\n            Some\n              (fun (k : (a, _) continuation) ->\n                let i = !counter in\n                incr counter;\n                continue k (fold_in root i))\n        | _ -> None);\n  }\n\nlet run ~seed f = Effect.Deep.match_with f () (make_handler (key seed))\nlet with_key k f = Effect.Deep.match_with f () (make_handler k)\nlet fallback_key = Domain.DLS.new_key (fun () -> ref (key (Random.bits ())))\n\nlet next_key () =\n  try Effect.perform E_next_key\n  with Effect.Unhandled _ ->\n    let state = Domain.DLS.get fallback_key in\n    let keys = split !state in\n    state := keys.(0);\n    keys.(1)\n"
  },
  {
    "path": "packages/nx/lib/core/rng.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Splittable RNG keys and implicit key management.\n\n    Keys are deterministic integers that can be split to derive independent\n    subkeys. {!run} and {!with_key} install an effect handler that provides\n    implicit key threading via {!next_key}; outside any handler a domain-local\n    auto-seeded generator is used as a convenient fallback. *)\n\n(** {1:keys Keys} *)\n\ntype key = int\n(** The type for RNG keys. *)\n\nval key : int -> key\n(** [key seed] is a normalized 31-bit non-negative key derived from [seed]. *)\n\nval split : ?n:int -> key -> key array\n(** [split ?n k] deterministically derives [n] subkeys from [k].\n\n    [n] defaults to [2]. *)\n\nval fold_in : key -> int -> key\n(** [fold_in k data] mixes [data] into [k] and returns the derived key. *)\n\nval to_int : key -> int\n(** [to_int k] is [k] as an integer. *)\n\n(** {1:implicit Implicit key management} *)\n\nval next_key : unit -> key\n(** [next_key ()] returns a fresh subkey from the current RNG scope.\n\n    Inside a {!run} or {!with_key} block, each call returns a deterministically\n    derived key. Outside any scope, falls back to a domain-local auto-seeded\n    generator (convenient but non-reproducible).\n\n    Two calls to [next_key ()] always return different keys. *)\n\nval run : seed:int -> (unit -> 'a) -> 'a\n(** [run ~seed f] executes [f] in an RNG scope seeded by [seed].\n\n    Every {!next_key} call within [f] returns a deterministically derived key.\n    The same [seed] and the same sequence of [next_key] calls produce the same\n    keys. 
Scopes nest: an inner [run] replaces the outer scope for its duration.\n*)\n\nval with_key : key -> (unit -> 'a) -> 'a\n(** [with_key k f] executes [f] in an RNG scope initialized from [k].\n\n    This is the explicit-key equivalent of [run]: useful when you have an\n    existing key from a split and want to establish a scope for a\n    sub-computation (e.g. in layer composition). *)\n"
  },
  {
    "path": "packages/nx/lib/core/shape.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\ntype t = int array\n\nlet err op fmt = Printf.ksprintf (fun msg -> invalid_arg (op ^ \": \" ^ msg)) fmt\n\nlet to_string shape =\n  let shape_str =\n    Array.map string_of_int shape |> Array.to_list |> String.concat \",\"\n  in\n  Printf.sprintf \"[%s]\" shape_str\n\nlet numel shape =\n  let n = Array.length shape in\n  if n = 0 then 1 else Array.fold_left ( * ) 1 shape\n\nlet equal = ( = )\n\nlet c_contiguous_strides shape =\n  let n = Array.length shape in\n  if n = 0 then [||]\n  else\n    let strides = Array.make n 0 in\n    strides.(n - 1) <- (if shape.(n - 1) = 0 then 0 else 1);\n    for i = n - 2 downto 0 do\n      strides.(i) <-\n        (if shape.(i) = 0 then 0 else strides.(i + 1) * max 1 shape.(i + 1))\n    done;\n    strides\n\nlet ravel_index indices strides =\n  if Array.length indices <> Array.length strides then\n    err \"ravel_index\" \"indices[%d] vs strides[%d], dimensions must match\"\n      (Array.length indices) (Array.length strides);\n  let o = ref 0 in\n  Array.iteri (fun i v -> o := !o + (v * strides.(i))) indices;\n  !o\n\nlet unravel_index k shape =\n  let n = Array.length shape in\n  if n = 0 then\n    if k = 0 then [||]\n    else err \"unravel_index\" \"k=%d out of bounds for scalar\" k\n  else if Array.exists (( = ) 0) shape then\n    (* zero-size tensor; only k=0 is allowed *)\n    if k = 0 then Array.make n 0\n    else err \"unravel_index\" \"k=%d out of bounds for zero-size shape\" k\n  else\n    let total_elements = numel shape in\n    if k < 0 || k >= total_elements then\n      err \"unravel_index\" \"k=%d out of bounds for shape (size %d)\" k\n        total_elements;\n\n    let idx = Array.make n 0 in\n    let temp_k = ref k in\n    for i = n - 1 downto 1 do\n 
     let dim_size = shape.(i) in\n      idx.(i) <- !temp_k mod dim_size;\n      temp_k := !temp_k / dim_size\n    done;\n    idx.(0) <- !temp_k;\n\n    (* sanity check for the leftmost index *)\n    if idx.(0) >= shape.(0) then\n      err \"unravel_index\" \"idx.(0)=%d out of bounds for shape.(0)=%d\" idx.(0)\n        shape.(0);\n    idx\n\nlet unravel_index_into k shape result =\n  let n = Array.length shape in\n  if n = 0 then (\n    if k <> 0 then err \"unravel_index_into\" \"k=%d out of bounds for scalar\" k\n      (* else: k=0 for scalar, result stays empty *))\n  else if Array.exists (( = ) 0) shape then\n    if\n      (* zero-size tensor; only k=0 is allowed *)\n      k = 0\n    then\n      for i = 0 to n - 1 do\n        result.(i) <- 0\n      done\n    else err \"unravel_index_into\" \"k=%d out of bounds for zero-size shape\" k\n  else\n    let total_elements = numel shape in\n    if k < 0 || k >= total_elements then\n      err \"unravel_index_into\" \"k=%d out of bounds for shape (size %d)\" k\n        total_elements\n    else\n      let temp_k = ref k in\n      for i = n - 1 downto 1 do\n        let dim_size = shape.(i) in\n        result.(i) <- !temp_k mod dim_size;\n        temp_k := !temp_k / dim_size\n      done;\n      result.(0) <- !temp_k;\n\n      (* sanity check for the leftmost index *)\n      if result.(0) >= shape.(0) then\n        err \"unravel_index_into\" \"result.(0)=%d out of bounds for shape.(0)=%d\"\n          result.(0) shape.(0)\n\nlet resolve_neg_one current_shape new_shape_spec =\n  let new_shape_spec_l = Array.to_list new_shape_spec in\n  let current_numel = numel current_shape in\n  let neg_one_count =\n    new_shape_spec_l |> List.filter (( = ) (-1)) |> List.length\n  in\n  if neg_one_count > 1 then\n    invalid_arg \"reshape: multiple -1 dimensions, can only infer one\"\n  else if neg_one_count = 0 then new_shape_spec\n  else\n    let specified_numel =\n      List.filter (( <> ) (-1)) new_shape_spec_l |> Array.of_list |> numel\n 
   in\n    (* when shape_spec includes zero dimensions *)\n    if specified_numel = 0 then\n      if current_numel = 0 then\n        Array.map (fun x -> if x = -1 then 0 else x) new_shape_spec\n      else\n        invalid_arg \"reshape: cannot infer -1 from shape with 0-size dimensions\"\n    else if current_numel mod specified_numel <> 0 then\n      err \"reshape\" \"cannot reshape %d elements into shape with %d elements\"\n        current_numel specified_numel\n    else\n      let inferred_dim = current_numel / specified_numel in\n      Array.map (fun s -> if s = -1 then inferred_dim else s) new_shape_spec\n\nlet broadcast shape_a shape_b =\n  let rank_a = Array.length shape_a and rank_b = Array.length shape_b in\n  let rank_out = max rank_a rank_b in\n  let out_shape = Array.make rank_out 1 in\n  for i = 0 to rank_out - 1 do\n    let dim_a =\n      if i < rank_out - rank_a then 1 else shape_a.(i - (rank_out - rank_a))\n    in\n    let dim_b =\n      if i < rank_out - rank_b then 1 else shape_b.(i - (rank_out - rank_b))\n    in\n    if dim_a = dim_b then out_shape.(i) <- dim_a\n    else if dim_a = 1 then out_shape.(i) <- dim_b\n    else if dim_b = 1 then out_shape.(i) <- dim_a\n    else\n      err \"broadcast\" \"cannot broadcast %s with %s (dim %d: %d\\xe2\\x89\\xa0%d)\"\n        (to_string shape_a) (to_string shape_b) i dim_a dim_b\n  done;\n  out_shape\n\nlet broadcast_index target_multi_idx source_shape =\n  let target_ndim = Array.length target_multi_idx in\n  let source_ndim = Array.length source_shape in\n  let source_multi_idx = Array.make source_ndim 0 in\n  for i = 0 to source_ndim - 1 do\n    let target_idx_pos = target_ndim - source_ndim + i in\n    let source_idx_pos = i in\n    if source_idx_pos < 0 || target_idx_pos < 0 then ()\n    else if source_shape.(source_idx_pos) = 1 then\n      source_multi_idx.(source_idx_pos) <- 0\n    else source_multi_idx.(source_idx_pos) <- target_multi_idx.(target_idx_pos)\n  done;\n  source_multi_idx\n\nlet 
broadcast_index_into target_multi_idx source_shape result =\n  let target_ndim = Array.length target_multi_idx in\n  let source_ndim = Array.length source_shape in\n  for i = 0 to source_ndim - 1 do\n    let target_idx_pos = target_ndim - source_ndim + i in\n    let source_idx_pos = i in\n    if source_idx_pos < 0 || target_idx_pos < 0 then ()\n    else if source_shape.(source_idx_pos) = 1 then result.(source_idx_pos) <- 0\n    else result.(source_idx_pos) <- target_multi_idx.(target_idx_pos)\n  done\n\nlet reduce_output_shape input_shape axes keepdims =\n  if keepdims then\n    Array.mapi\n      (fun i dim -> if Array.exists (( = ) i) axes then 1 else dim)\n      input_shape\n  else\n    let filtered = ref [] in\n    Array.iteri\n      (fun i dim ->\n        if not (Array.exists (( = ) i) axes) then filtered := dim :: !filtered)\n      input_shape;\n    Array.of_list (List.rev !filtered)\n\nlet pp fmt shape = Format.fprintf fmt \"%s\" (to_string shape)\n"
  },
  {
    "path": "packages/nx/lib/core/shape.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Concrete tensor shapes.\n\n    A shape is an array of non-negative dimension sizes in row-major order. *)\n\ntype t = int array\n(** The type for concrete shapes. *)\n\n(** {1:basic Basic operations} *)\n\nval numel : t -> int\n(** [numel shape] is the product of dimensions in [shape].\n\n    [numel [||]] is [1]. *)\n\nval equal : t -> t -> bool\n(** [equal s0 s1] is [true] iff [s0] and [s1] are structurally equal. *)\n\n(** {1:strides Strides} *)\n\nval c_contiguous_strides : t -> int array\n(** [c_contiguous_strides shape] is the row-major stride vector of [shape].\n\n    For any zero-size dimension, strides to its left are propagated with zero\n    according to the implementation's canonical rule. *)\n\n(** {1:indexing Index conversion} *)\n\nval ravel_index : int array -> int array -> int\n(** [ravel_index indices strides] is the linear offset\n    [sum_i (indices.(i) * strides.(i))].\n\n    Raises [Invalid_argument] if the array lengths differ.\n\n    {b Note.} This function does not perform bounds checks on [indices]. *)\n\nval unravel_index : int -> t -> int array\n(** [unravel_index k shape] is the multi-index of [k] in a C-contiguous layout\n    of [shape].\n\n    For [shape = [||]], [k] must be [0].\n\n    For zero-size shapes, only [k = 0] is accepted and the result is an array of\n    zeros with the same rank as [shape].\n\n    Raises [Invalid_argument] if [k] is out of bounds for [shape]. 
*)\n\nval unravel_index_into : int -> t -> int array -> unit\n(** [unravel_index_into k shape dst] is like {!unravel_index} but writes indices\n    into [dst].\n\n    [dst] must have length [Array.length shape].\n\n    Raises [Invalid_argument] if [k] is out of bounds for [shape].\n\n    {b Warning.} If [dst] has the wrong length, array access may raise\n    [Invalid_argument] via OCaml's bounds checks. *)\n\n(** {1:transform Shape transformations} *)\n\nval resolve_neg_one : t -> int array -> t\n(** [resolve_neg_one current_shape new_spec] resolves a single [-1] entry in\n    [new_spec] using [numel current_shape].\n\n    Raises [Invalid_argument] if:\n    - [new_spec] contains more than one [-1].\n    - The inferred size is not integral with the specified dimensions.\n    - The specification is incompatible with zero-size inference rules. *)\n\nval broadcast : t -> t -> t\n(** [broadcast a b] is the broadcasted shape of [a] and [b] using NumPy rules\n    (right alignment; dimensions are compatible iff equal or one is [1]).\n\n    Raises [Invalid_argument] if the shapes are not broadcast-compatible. *)\n\nval broadcast_index : int array -> t -> int array\n(** [broadcast_index target_idx source_shape] maps a target index to the\n    corresponding index in [source_shape] under broadcasting.\n\n    Dimensions of [source_shape] equal to [1] map to index [0]. *)\n\nval broadcast_index_into : int array -> t -> int array -> unit\n(** [broadcast_index_into target_idx source_shape dst] is like\n    {!broadcast_index} but writes into [dst].\n\n    [dst] must have length [Array.length source_shape]. *)\n\nval reduce_output_shape : t -> int array -> bool -> t\n(** [reduce_output_shape shape axes keepdims] is the output shape after reducing\n    [axes]. Reduced axes are removed when [keepdims] is [false] or replaced by\n    [1] when [true]. *)\n\n(** {1:format Formatting} *)\n\nval pp : Format.formatter -> t -> unit\n(** [pp] formats shapes with the same syntax as [to_string]. 
*)\n\nval to_string : t -> string\n(** [to_string shape] formats [shape] as a bracketed comma-separated list, for\n    example [[2,3,4]]. *)\n"
  },
  {
    "path": "packages/nx/lib/core/view.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Lightweight view of tensor layout and helpers for reshaping. *)\n\nlet err op fmt = Printf.ksprintf (fun msg -> invalid_arg (op ^ \": \" ^ msg)) fmt\n\ntype layout = C_contiguous | Strided\n\ntype t = {\n  shape : int array;\n  strides : int array;\n  offset : int;\n  mask : (int * int) array option;\n  layout : layout;\n}\n\n(* ───── Helpers ───── *)\n\nlet prod arr = Array.fold_left ( * ) 1 arr\n\n(* compute C-contiguous strides for concrete shape *)\nlet compute_strides shape_array =\n  let n = Array.length shape_array in\n  if n = 0 then [||]\n  else\n    let strides = Array.make n 0 in\n    strides.(n - 1) <- (if shape_array.(n - 1) = 0 then 0 else 1);\n    for i = n - 2 downto 0 do\n      strides.(i) <-\n        (if shape_array.(i) = 0 then 0\n         else strides.(i + 1) * max 1 shape_array.(i + 1))\n    done;\n    strides\n\n(* canonicalize strides - keep original strides, don't force stride 0 for size\n   1 *)\nlet canonicalize_strides _shape_array strides = strides\n\n(* Check if strides represent a contiguous layout *)\nlet is_c_contiguous_strides shape_arr strides mask =\n  mask = None\n  &&\n  let expected = compute_strides shape_arr in\n  let expected_canonical = canonicalize_strides shape_arr expected in\n  Array.length strides = Array.length expected_canonical\n  && Array.for_all2 ( = ) strides expected_canonical\n\n(* ───── Accessors ───── *)\n\nlet shape v = v.shape\nlet strides v = v.strides\n\nlet stride axis v =\n  let ndim = Array.length v.shape in\n  if axis < 0 || axis >= ndim then\n    err \"stride\" \"axis %d out of bounds for %dD tensor\" axis ndim;\n  Array.unsafe_get v.strides axis\n\nlet offset v = v.offset\nlet mask v = v.mask\nlet is_c_contiguous v = v.layout = 
C_contiguous\n\nlet dim axis v =\n  let ndim = Array.length v.shape in\n  if axis < 0 || axis >= ndim then\n    err \"dim\" \"axis %d out of bounds for %dD tensor\" axis ndim;\n  v.shape.(axis)\n\nlet ndim v = Array.length v.shape\nlet numel v = prod v.shape\n\n(* ───── View Creation ───── *)\n\nlet create ?(offset = 0) ?strides ?mask shape =\n  let is_zero_size = Array.exists (( = ) 0) shape in\n  let current_shape =\n    if is_zero_size then Array.map (fun s -> max s 0) shape else shape\n  in\n  let current_strides =\n    match strides with\n    | Some s -> canonicalize_strides current_shape s\n    | None -> compute_strides current_shape\n  in\n  let current_offset = if is_zero_size then 0 else offset in\n  let current_mask =\n    if is_zero_size then None\n    else\n      match mask with\n      | Some m\n        when Array.for_all2 (fun (b, e) s -> b = 0 && e = s) m current_shape ->\n          None\n      | _ -> mask\n  in\n  let new_layout =\n    if is_c_contiguous_strides current_shape current_strides current_mask then\n      C_contiguous\n    else Strided\n  in\n  {\n    shape = current_shape;\n    strides = current_strides;\n    offset = current_offset;\n    mask = current_mask;\n    layout = new_layout;\n  }\n\n(* ───── Offset & Validation ───── *)\n\nlet linear_index view indices =\n  let ndim = Array.length view.shape in\n  if Array.length indices <> ndim then\n    err \"linear_index\" \"rank mismatch: indices[%d] vs ndim %d\"\n      (Array.length indices) ndim;\n  let physical_offset = ref view.offset in\n  Array.iteri\n    (fun i idx ->\n      physical_offset := !physical_offset + (idx * view.strides.(i)))\n    indices;\n  !physical_offset\n\nlet is_valid view indices =\n  match view.mask with\n  | None -> true\n  | Some mask_array ->\n      if Array.length indices <> Array.length mask_array then false\n      else\n        Array.for_all2\n          (fun idx (b, e) -> idx >= b && idx < e)\n          indices mask_array\n\n(* ───── View Manipulation ───── 
*)\n\nlet expand view new_shape =\n  let old_ndim = Array.length view.shape in\n  let new_ndim = Array.length new_shape in\n  (* Allow expanding a scalar to any shape *)\n  if old_ndim = 0 then\n    let strides = Array.make new_ndim 0 in\n    { view with shape = new_shape; strides }\n  else if new_ndim <> old_ndim then\n    err \"expand\" \"rank mismatch: %d vs %d\" new_ndim old_ndim\n  else\n    let old_arr = view.shape in\n    let new_arr = new_shape in\n    if Array.exists (( = ) 0) old_arr then create new_shape\n    else\n      let strides =\n        Array.mapi\n          (fun i ns ->\n            let s = old_arr.(i) in\n            if s = ns then view.strides.(i)\n            else if s = 1 then 0\n            else\n              err \"expand\"\n                \"dimension %d (size %d) cannot expand to size %d, only \\\n                 singletons expand\"\n                i s ns)\n          new_arr\n      in\n      let mask =\n        match view.mask with\n        | None -> None\n        | Some m ->\n            Some\n              (Array.mapi\n                 (fun i (b, e) ->\n                   if old_arr.(i) = 1 && new_arr.(i) <> 1 then\n                     if b = 0 && e = 1 then (0, new_arr.(i))\n                     else\n                       err \"expand\"\n                         \"masked singleton bounds [%d,%d] incompatible with \\\n                          expansion\"\n                         b e\n                   else (b, e))\n                 m)\n      in\n      create ~offset:view.offset ?mask ~strides new_shape\n\nlet permute view axes =\n  let n = ndim view in\n  if Array.length axes <> n then\n    err \"permute\" \"axes length %d != ndim %d\" (Array.length axes) n;\n\n  (* Validate permutation *)\n  let seen = Array.make n false in\n  Array.iter\n    (fun ax ->\n      if ax < 0 || ax >= n then\n        err \"permute\" \"axis %d out of bounds for %dD tensor\" ax n;\n      if seen.(ax) then err \"permute\" \"duplicate axis %d\" ax;\n     
 seen.(ax) <- true)\n    axes;\n\n  let new_shape = Array.init n (fun i -> view.shape.(axes.(i))) in\n  let new_strides = Array.init n (fun i -> view.strides.(axes.(i))) in\n  let new_mask =\n    Option.map (fun m -> Array.init n (fun i -> m.(axes.(i)))) view.mask\n  in\n  create ~offset:view.offset ?mask:new_mask ~strides:new_strides new_shape\n\nlet reshape view new_shape =\n  (* Early return if shapes are identical *)\n  if view.shape = new_shape then view\n  else\n    let old_arr = view.shape in\n    let new_arr = new_shape in\n    let old_numel = prod old_arr in\n    let new_numel = prod new_arr in\n\n    (* Check size compatibility *)\n    if old_numel <> new_numel && old_numel <> 0 && new_numel <> 0 then\n      err \"reshape\" \"cannot reshape %s to %s\" (Shape.to_string old_arr)\n        (Shape.to_string new_arr)\n    else if Array.exists (( = ) 0) old_arr || Array.exists (( = ) 0) new_arr\n    then create ~offset:0 new_shape\n      (* Check for masks - these complicate reshape *)\n    else if view.mask <> None then\n      invalid_arg\n        \"reshape: cannot reshape views with masks, call contiguous() first\"\n      (* Fast path for C-contiguous views *)\n    else if view.layout = C_contiguous then create ~offset:view.offset new_shape\n    else if\n      (* Special case: reshaping to/from scalar *)\n      Array.length new_shape = 0\n    then create ~offset:view.offset new_shape\n      (* Special case: all strides are 0 (broadcast from scalar) *)\n    else if Array.for_all (( = ) 0) view.strides then\n      let new_strides = Array.make (Array.length new_shape) 0 in\n      create ~offset:view.offset ~strides:new_strides new_shape\n    (* Special case: only expanding/squeezing size-1 dimensions *)\n      else\n      let try_squeeze_unsqueeze () =\n        let old_non_one = Array.to_list old_arr |> List.filter (( <> ) 1) in\n        let new_non_one = Array.to_list new_arr |> List.filter (( <> ) 1) in\n\n        if old_non_one = new_non_one then\n          
let old_idx = ref 0 in\n          let new_strides =\n            Array.map\n              (fun dim ->\n                if dim = 1 then 0\n                else (\n                  while\n                    !old_idx < Array.length old_arr && old_arr.(!old_idx) = 1\n                  do\n                    incr old_idx\n                  done;\n                  let stride = view.strides.(!old_idx) in\n                  incr old_idx;\n                  stride))\n              new_arr\n          in\n          Some new_strides\n        else None\n      in\n\n      let try_merge_split () =\n        let old_dims = ref [] in\n        let new_dims = ref [] in\n\n        for i = 0 to Array.length old_arr - 1 do\n          if old_arr.(i) > 1 then\n            old_dims := (old_arr.(i), view.strides.(i)) :: !old_dims\n        done;\n        old_dims := List.rev !old_dims;\n\n        for i = 0 to Array.length new_arr - 1 do\n          if new_arr.(i) > 1 then new_dims := new_arr.(i) :: !new_dims\n        done;\n        new_dims := List.rev !new_dims;\n\n        let rec match_dims old_dims new_dims =\n          match (old_dims, new_dims) with\n          | [], [] -> Some []\n          | [], _ | _, [] -> None\n          | (old_size, old_stride) :: old_rest, new_size :: new_rest ->\n              if old_size = new_size then\n                match match_dims old_rest new_rest with\n                | Some rest_strides ->\n                    Some ((new_size, old_stride) :: rest_strides)\n                | None -> None\n              else if old_size > new_size && old_size mod new_size = 0 then\n                let remaining_size = old_size / new_size in\n                let first_stride = old_stride * remaining_size in\n                let remaining_dims = (remaining_size, old_stride) :: old_rest in\n                match match_dims remaining_dims new_rest with\n                | Some rest_strides ->\n                    Some ((new_size, first_stride) :: rest_strides)\n              
  | None -> None\n              else if new_size > old_size then\n                let rec collect_merge size stride dims needed =\n                  if size = needed then Some (dims, stride)\n                  else if size > needed then None\n                  else\n                    match dims with\n                    | [] -> None\n                    | (next_size, next_stride) :: rest ->\n                        if stride = next_stride * next_size then\n                          collect_merge (size * next_size) next_stride rest\n                            needed\n                        else None\n                in\n                match collect_merge old_size old_stride old_rest new_size with\n                | Some (remaining, first_stride) -> (\n                    match match_dims remaining new_rest with\n                    | Some rest_strides ->\n                        Some ((new_size, first_stride) :: rest_strides)\n                    | None -> None)\n                | None -> None\n              else None\n        in\n\n        match match_dims !old_dims !new_dims with\n        | None -> None\n        | Some stride_map ->\n            let stride_map_arr = Array.of_list stride_map in\n            let new_strides = Array.make (Array.length new_arr) 0 in\n            let map_idx = ref 0 in\n\n            for i = 0 to Array.length new_arr - 1 do\n              if new_arr.(i) = 1 then new_strides.(i) <- 0\n              else\n                let _, stride = stride_map_arr.(!map_idx) in\n                new_strides.(i) <- stride;\n                incr map_idx\n            done;\n\n            Some new_strides\n      in\n      (* Try reshape strategies in order *)\n      match try_squeeze_unsqueeze () with\n      | Some new_strides ->\n          create ~offset:view.offset ~strides:new_strides new_shape\n      | None -> (\n          match try_merge_split () with\n          | Some new_strides ->\n              create ~offset:view.offset ~strides:new_strides 
new_shape\n          | None ->\n              let expected_strides = compute_strides new_arr in\n              let stride_str =\n                \"[\"\n                ^ String.concat \",\"\n                    (Array.to_list (Array.map string_of_int view.strides))\n                ^ \"]\"\n              in\n              let expected_str =\n                \"[\"\n                ^ String.concat \",\"\n                    (Array.to_list (Array.map string_of_int expected_strides))\n                ^ \"]\"\n              in\n              err \"reshape\"\n                \"cannot reshape %s to %s, incompatible strides %s (expected \\\n                 %s), call contiguous() first\"\n                (Shape.to_string old_arr) (Shape.to_string new_arr) stride_str\n                expected_str)\n\n(* helper used by [pad] and [shrink] *)\nlet unsafe_resize view arg new_mask_opt =\n  let ndim = Array.length view.shape in\n  if Array.length arg <> ndim then\n    err \"unsafe_resize\" \"argument length %d != ndim %d\" (Array.length arg) ndim;\n\n  let strides = view.strides in\n\n  let new_shape = Array.map (fun (a, b) -> b - a) arg in\n  let new_offset = ref view.offset in\n  Array.iteri\n    (fun i (a, _) -> new_offset := !new_offset + (a * strides.(i)))\n    arg;\n\n  let final_mask =\n    let shift_and_combine_mask old_mask_dim_bounds new_mask_dim_bounds\n        offset_for_dim =\n      let old_b, old_e = old_mask_dim_bounds in\n      let new_b, new_e = new_mask_dim_bounds in\n      let shifted_old_b = max 0 (old_b - offset_for_dim) in\n      let shifted_old_e = max 0 (old_e - offset_for_dim) in\n      (max shifted_old_b new_b, min shifted_old_e new_e)\n    in\n    match (view.mask, new_mask_opt) with\n    | None, None -> None\n    | Some old_m, None ->\n        Some\n          (Array.mapi\n             (fun i (old_b, old_e) ->\n               let a, _ = arg.(i) in\n               let new_dim_size = new_shape.(i) in\n               (max 0 (old_b - a), min new_dim_size 
(old_e - a)))\n             old_m)\n    | None, Some new_m -> Some new_m\n    | Some old_m, Some new_m ->\n        Some\n          (Array.mapi\n             (fun i (old_b_i, old_e_i) ->\n               let new_m_b_i, new_m_e_i = new_m.(i) in\n               let a_i, _ = arg.(i) in\n               shift_and_combine_mask (old_b_i, old_e_i) (new_m_b_i, new_m_e_i)\n                 a_i)\n             old_m)\n  in\n  create ~offset:!new_offset ?mask:final_mask ~strides new_shape\n\nlet pad view arg =\n  let ndim = Array.length view.shape in\n  if Array.length arg <> ndim then\n    err \"pad\" \"padding length %d != ndim %d\" (Array.length arg) ndim;\n  if Array.for_all (fun (b, e) -> b = 0 && e = 0) arg then view\n  else if Array.exists (fun (b, e) -> b < 0 || e < 0) arg then\n    invalid_arg \"pad: negative padding values, use shrink or slice instead\"\n  else\n    let shape_arr = view.shape in\n    let zvarg =\n      Array.mapi\n        (fun i s ->\n          let pad_before, pad_after = arg.(i) in\n          (-pad_before, s + pad_after))\n        shape_arr\n    in\n    let mask_for_pad =\n      Array.mapi\n        (fun i s_old ->\n          let pad_before, _pad_after = arg.(i) in\n          (pad_before, pad_before + s_old))\n        shape_arr\n    in\n    unsafe_resize view zvarg (Some mask_for_pad)\n\nlet shrink view arg =\n  let ndim = Array.length view.shape in\n  if Array.length arg <> ndim then\n    err \"shrink\" \"bounds length %d != ndim %d\" (Array.length arg) ndim;\n  let shape_arr = view.shape in\n  if Array.for_all2 (fun (b, e) s -> b = 0 && e = s) arg shape_arr then view\n  else if\n    Array.exists2\n      (fun (b, e) s -> b < 0 || e < 0 || b > s || e > s || b >= e)\n      arg shape_arr\n  then invalid_arg \"shrink: bounds must be within shape and start < end\"\n  else unsafe_resize view arg None\n\nlet flip view flip_axes_bools =\n  let ndim = Array.length view.shape in\n  if Array.length flip_axes_bools <> ndim then\n    err \"flip\" \"boolean array 
length %d != ndim %d\"\n      (Array.length flip_axes_bools)\n      ndim;\n\n  let shape_arr = view.shape in\n  let strides = view.strides in\n\n  let new_offset = ref view.offset in\n  let new_strides = Array.copy strides in\n  let new_mask =\n    match view.mask with Some m -> Some (Array.copy m) | None -> None\n  in\n  Array.iteri\n    (fun i do_flip ->\n      if do_flip then\n        let s_i = shape_arr.(i) in\n        if s_i > 0 then (\n          new_offset := !new_offset + ((s_i - 1) * strides.(i));\n          new_strides.(i) <- -new_strides.(i);\n          match new_mask with\n          | Some m_arr ->\n              let b, e = m_arr.(i) in\n              m_arr.(i) <- (s_i - e, s_i - b)\n          | None -> ()))\n    flip_axes_bools;\n  create ~offset:!new_offset ?mask:new_mask ~strides:new_strides view.shape\n\nlet simplify view =\n  (* Only simplify things that don't change the user-visible shape *)\n\n  (* 1. Canonicalize mask that covers entire dimensions *)\n  let mask =\n    match view.mask with\n    | Some m when Array.for_all2 (fun (b, e) s -> b = 0 && e = s) m view.shape\n      ->\n        None (* Mask covers everything, remove it *)\n    | m -> m\n  in\n\n  (* Just return with simplified mask if changed *)\n  if mask <> view.mask then\n    let new_layout =\n      if mask = None && is_c_contiguous_strides view.shape view.strides mask\n      then C_contiguous\n      else Strided\n    in\n    { view with mask; layout = new_layout }\n  else view\n\nlet can_get_strides_simplified simplified =\n  match simplified.mask with\n  | None -> true\n  | Some mask_array ->\n      Array.for_all2\n        (fun (b, e) s -> b = 0 && e = s)\n        mask_array simplified.shape\n\nlet can_get_strides view = simplify view |> can_get_strides_simplified\n\nlet strides_opt view =\n  let simplified = simplify view in\n  if can_get_strides_simplified simplified then Some (strides simplified)\n  else None\n\nlet is_materializable view =\n  let simplified = simplify view in\n  
can_get_strides_simplified simplified\n"
  },
  {
    "path": "packages/nx/lib/core/view.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Strided tensor views.\n\n    A view describes how a linear buffer is interpreted as an n-dimensional\n    tensor through shape, strides, offset, and an optional validity mask. View\n    operations are metadata transformations: they do not copy element storage.\n*)\n\ntype t\n(** The type for tensor views. *)\n\n(** {1:constructors Construction} *)\n\nval create :\n  ?offset:int -> ?strides:int array -> ?mask:(int * int) array -> int array -> t\n(** [create ?offset ?strides ?mask shape] is a view over [shape].\n\n    Defaults:\n    - [offset] defaults to [0].\n    - [strides] defaults to C-contiguous strides derived from [shape].\n    - [mask] defaults to [None] (all indices valid).\n\n    Mask bounds are half-open intervals [(start, end)] per dimension.\n\n    If [shape] has a zero-size dimension, the resulting view has [offset = 0]\n    and no mask.\n\n    {b Warning.} If explicit [strides] or [mask] lengths do not match\n    [Array.length shape], downstream array checks may raise [Invalid_argument].\n*)\n\n(** {1:accessors Accessors} *)\n\nval shape : t -> int array\n(** [shape v] is [v]'s shape. *)\n\nval strides : t -> int array\n(** [strides v] is [v]'s stride vector. *)\n\nval offset : t -> int\n(** [offset v] is [v]'s linear base offset. *)\n\nval ndim : t -> int\n(** [ndim v] is [Array.length (shape v)]. *)\n\nval numel : t -> int\n(** [numel v] is the product of dimensions in [shape v].\n\n    [numel] of a scalar ([ndim v = 0]) is [1]. *)\n\nval dim : int -> t -> int\n(** [dim axis v] is dimension [axis] of [v].\n\n    Raises [Invalid_argument] if [axis] is outside [[0; ndim v - 1]]. 
*)\n\nval stride : int -> t -> int\n(** [stride axis v] is stride [axis] of [v].\n\n    Raises [Invalid_argument] if [axis] is outside [[0; ndim v - 1]]. *)\n\nval mask : t -> (int * int) array option\n(** [mask v] is [v]'s optional validity mask.\n\n    A mask entry [(b, e)] means [b <= index < e] on the corresponding axis. *)\n\nval is_c_contiguous : t -> bool\n(** [is_c_contiguous v] is [true] iff [v] is recognized as C-contiguous. *)\n\nval strides_opt : t -> int array option\n(** [strides_opt v] is [Some s] if [v] can be represented as a standard strided\n    view without partial masking, and [None] otherwise. *)\n\nval can_get_strides : t -> bool\n(** [can_get_strides v] is [true] iff [strides_opt v] is [Some _]. *)\n\nval is_materializable : t -> bool\n(** [is_materializable v] is [true] iff [can_get_strides v] is [true]. *)\n\n(** {1:indexing Indexing} *)\n\nval linear_index : t -> int array -> int\n(** [linear_index v idx] is [offset v + sum_i (idx.(i) * strides v.(i))].\n\n    Raises [Invalid_argument] if [Array.length idx <> ndim v].\n\n    {b Note.} This function does not validate index bounds or masks. *)\n\nval is_valid : t -> int array -> bool\n(** [is_valid v idx] is [true] iff [idx] is valid with respect to [mask v].\n\n    If [mask v = None], the result is [true] for any [idx].\n\n    If [mask v = Some m], [idx] must have the same rank and satisfy each masked\n    interval bound. 
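\n\n    For example, padding a length-[n] vector with [pad v [| (1, 1) |]] yields a\n    view of shape [[| n + 2 |]] in which [is_valid] is [false] exactly at\n    indices [[| 0 |]] and [[| n + 1 |]]. 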
*)\n\n(** {1:transform Transformations} *)\n\nval reshape : t -> int array -> t\n(** [reshape v new_shape] returns a view over the same storage with [new_shape]\n    when stride-compatible.\n\n    Supported cases include:\n    - C-contiguous reshape.\n    - Reshape by adding/removing singleton dimensions.\n    - Certain merge/split patterns on compatible strided layouts.\n    - All-zero-stride broadcast layouts.\n\n    Raises [Invalid_argument] if reshape cannot be represented, including size\n    mismatches (except zero-size special cases), masked views, or incompatible\n    stride patterns. *)\n\nval expand : t -> int array -> t\n(** [expand v new_shape] broadcasts singleton dimensions to [new_shape] by\n    setting corresponding strides to [0].\n\n    Scalars ([ndim v = 0]) may expand to any rank.\n\n    Raises [Invalid_argument] if ranks are incompatible for non-scalars, or if a\n    non-singleton dimension would need expansion. *)\n\nval permute : t -> int array -> t\n(** [permute v axes] reorders dimensions according to [axes].\n\n    Raises [Invalid_argument] if [axes] is not a valid permutation of\n    [[0; ndim v - 1]]. *)\n\nval shrink : t -> (int * int) array -> t\n(** [shrink v bounds] restricts [v] to per-axis half-open intervals\n    [(start, end)].\n\n    Bounds must satisfy [0 <= start < end <= size] for each dimension.\n\n    Raises [Invalid_argument] if bounds are malformed or the rank of [bounds]\n    does not match [ndim v]. *)\n\nval pad : t -> (int * int) array -> t\n(** [pad v padding] adds virtual padding [(before, after)] per axis.\n\n    The resulting view keeps data in place and records valid original regions\n    via a mask.\n\n    Raises [Invalid_argument] if:\n    - The rank of [padding] does not match [ndim v].\n    - A padding component is negative. *)\n\nval flip : t -> bool array -> t\n(** [flip v axes_to_flip] reverses selected axes by negating strides and\n    shifting offset.\n\n    Raises [Invalid_argument] if the length of [axes_to_flip] does not match\n    [ndim v]. *)\n"
  },
  {
    "path": "packages/nx/lib/dune",
    "content": "(library\n (name nx)\n (public_name nx)\n (modules :standard \\ prelude)\n (libraries nx_core nx_backend nx_buffer nx_effect))\n\n(mdx\n (package nx)\n (files nx.mli)\n (preludes prelude.ml)\n (libraries nx nx_buffer))\n"
  },
  {
    "path": "packages/nx/lib/effect/dune",
    "content": "(library\n (name nx_effect)\n (public_name nx.effect)\n (libraries nx_core nx_backend nx_buffer))\n"
  },
  {
    "path": "packages/nx/lib/effect/nx_effect.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Nx_core\n\n(* Types *)\n\ntype context = Nx_backend.context\n\n(* OCaml extensible GADT constructors (the [E_add], [E_mul], ... below) require\n   that type variables in the payload be deducible from the return type. With a\n   transparent alias [type ('a,'b) t = ('a,'b) Nx_backend.t], the compiler sees\n   through to the concrete record and concludes that ['a] and ['b] are not\n   injective — so every effect definition fails with \"type variable cannot be\n   deduced\". Wrapping in a single-constructor GADT restores injectivity: [T] is\n   a fresh constructor whose parameters are, by definition, determined by the\n   return type. At runtime this is a zero-cost box (single-field\n   constructor). *)\ntype ('a, 'b) t = T : ('a, 'b) Nx_backend.t -> ('a, 'b) t\n\n(* Effects *)\n\ntype _ Effect.t +=\n  | E_view : ('a, 'b) t -> View.t Effect.t\n  | E_buffer : {\n      context : context;\n      dtype : ('a, 'b) Dtype.t;\n      size_in_elements : int;\n    }\n      -> ('a, 'b) t Effect.t\n  | E_const_scalar : {\n      context : context;\n      value : 'a;\n      dtype : ('a, 'b) Dtype.t;\n    }\n      -> ('a, 'b) t Effect.t\n  | E_from_host : {\n      context : context;\n      array : ('a, 'b) Nx_buffer.t;\n    }\n      -> ('a, 'b) t Effect.t\n  | E_add : { a : ('a, 'b) t; b : ('a, 'b) t } -> ('a, 'b) t Effect.t\n  | E_sub : { a : ('a, 'b) t; b : ('a, 'b) t } -> ('a, 'b) t Effect.t\n  | E_mul : { a : ('a, 'b) t; b : ('a, 'b) t } -> ('a, 'b) t Effect.t\n  | E_idiv : { a : ('a, 'b) t; b : ('a, 'b) t } -> ('a, 'b) t Effect.t\n  | E_fdiv : { a : ('a, 'b) t; b : ('a, 'b) t } -> ('a, 'b) t Effect.t\n  | E_max : { a : ('a, 'b) t; b : ('a, 'b) t } -> ('a, 'b) t Effect.t\n  | E_min : { a : ('a, 'b) t; b : 
('a, 'b) t } -> ('a, 'b) t Effect.t\n  | E_mod : { a : ('a, 'b) t; b : ('a, 'b) t } -> ('a, 'b) t Effect.t\n  | E_pow : { a : ('a, 'b) t; b : ('a, 'b) t } -> ('a, 'b) t Effect.t\n  | E_xor : { a : ('a, 'b) t; b : ('a, 'b) t } -> ('a, 'b) t Effect.t\n  | E_or : { a : ('a, 'b) t; b : ('a, 'b) t } -> ('a, 'b) t Effect.t\n  | E_and : { a : ('a, 'b) t; b : ('a, 'b) t } -> ('a, 'b) t Effect.t\n  | E_atan2 : { a : ('a, 'b) t; b : ('a, 'b) t } -> ('a, 'b) t Effect.t\n  | E_cmpeq : {\n      a : ('a, 'b) t;\n      b : ('a, 'b) t;\n    }\n      -> (bool, Dtype.bool_elt) t Effect.t\n  | E_cmpne : {\n      a : ('a, 'b) t;\n      b : ('a, 'b) t;\n    }\n      -> (bool, Dtype.bool_elt) t Effect.t\n  | E_cmplt : {\n      a : ('a, 'b) t;\n      b : ('a, 'b) t;\n    }\n      -> (bool, Dtype.bool_elt) t Effect.t\n  | E_cmple : {\n      a : ('a, 'b) t;\n      b : ('a, 'b) t;\n    }\n      -> (bool, Dtype.bool_elt) t Effect.t\n  | E_neg : { t_in : ('a, 'b) t } -> ('a, 'b) t Effect.t\n  | E_sin : { t_in : ('a, 'b) t } -> ('a, 'b) t Effect.t\n  | E_sqrt : { t_in : ('a, 'b) t } -> ('a, 'b) t Effect.t\n  | E_recip : { t_in : ('a, 'b) t } -> ('a, 'b) t Effect.t\n  | E_log : { t_in : ('a, 'b) t } -> ('a, 'b) t Effect.t\n  | E_exp : { t_in : ('a, 'b) t } -> ('a, 'b) t Effect.t\n  | E_cos : { t_in : ('a, 'b) t } -> ('a, 'b) t Effect.t\n  | E_abs : { t_in : ('a, 'b) t } -> ('a, 'b) t Effect.t\n  | E_sign : { t_in : ('a, 'b) t } -> ('a, 'b) t Effect.t\n  | E_tan : { t_in : ('a, 'b) t } -> ('a, 'b) t Effect.t\n  | E_asin : { t_in : ('a, 'b) t } -> ('a, 'b) t Effect.t\n  | E_acos : { t_in : ('a, 'b) t } -> ('a, 'b) t Effect.t\n  | E_atan : { t_in : ('a, 'b) t } -> ('a, 'b) t Effect.t\n  | E_sinh : { t_in : ('a, 'b) t } -> ('a, 'b) t Effect.t\n  | E_cosh : { t_in : ('a, 'b) t } -> ('a, 'b) t Effect.t\n  | E_tanh : { t_in : ('a, 'b) t } -> ('a, 'b) t Effect.t\n  | E_trunc : { t_in : ('a, 'b) t } -> ('a, 'b) t Effect.t\n  | E_ceil : { t_in : ('a, 'b) t } -> ('a, 'b) t Effect.t\n  | E_floor : { t_in : 
('a, 'b) t } -> ('a, 'b) t Effect.t\n  | E_round : { t_in : ('a, 'b) t } -> ('a, 'b) t Effect.t\n  | E_erf : { t_in : ('a, 'b) t } -> ('a, 'b) t Effect.t\n  | E_where : {\n      condition : (bool, Dtype.bool_elt) t;\n      if_true : ('a, 'b) t;\n      if_false : ('a, 'b) t;\n    }\n      -> ('a, 'b) t Effect.t\n  | E_reduce_sum : {\n      t_in : ('a, 'b) t;\n      axes : int array;\n      keepdims : bool;\n    }\n      -> ('a, 'b) t Effect.t\n  | E_reduce_max : {\n      t_in : ('a, 'b) t;\n      axes : int array;\n      keepdims : bool;\n    }\n      -> ('a, 'b) t Effect.t\n  | E_reduce_min : {\n      t_in : ('a, 'b) t;\n      axes : int array;\n      keepdims : bool;\n    }\n      -> ('a, 'b) t Effect.t\n  | E_reduce_prod : {\n      t_in : ('a, 'b) t;\n      axes : int array;\n      keepdims : bool;\n    }\n      -> ('a, 'b) t Effect.t\n  | E_argmax : {\n      t_in : ('a, 'b) t;\n      axis : int;\n      keepdims : bool;\n    }\n      -> (int32, Dtype.int32_elt) t Effect.t\n  | E_argmin : {\n      t_in : ('a, 'b) t;\n      axis : int;\n      keepdims : bool;\n    }\n      -> (int32, Dtype.int32_elt) t Effect.t\n  | E_sort : {\n      t_in : ('a, 'b) t;\n      axis : int;\n      descending : bool;\n    }\n      -> ('a, 'b) t Effect.t\n  | E_argsort : {\n      t_in : ('a, 'b) t;\n      axis : int;\n      descending : bool;\n    }\n      -> (int32, Dtype.int32_elt) t Effect.t\n  | E_associative_scan : {\n      t_in : ('a, 'b) t;\n      axis : int;\n      op : [ `Sum | `Prod | `Max | `Min ];\n    }\n      -> ('a, 'b) t Effect.t\n  | E_permute : { t_in : ('a, 'b) t; axes : int array } -> ('a, 'b) t Effect.t\n  | E_reshape : {\n      t_in : ('a, 'b) t;\n      new_shape : int array;\n    }\n      -> ('a, 'b) t Effect.t\n  | E_expand : {\n      t_in : ('a, 'b) t;\n      new_target_shape : int array;\n    }\n      -> ('a, 'b) t Effect.t\n  | E_pad : {\n      t_in : ('a, 'b) t;\n      padding_config : (int * int) array;\n      fill_value : 'a;\n    }\n      -> ('a, 'b) t 
Effect.t\n  | E_shrink : {\n      t_in : ('a, 'b) t;\n      limits : (int * int) array;\n    }\n      -> ('a, 'b) t Effect.t\n  | E_flip : {\n      t_in : ('a, 'b) t;\n      dims_to_flip : bool array;\n    }\n      -> ('a, 'b) t Effect.t\n  | E_cat : { t_list : ('a, 'b) t list; axis : int } -> ('a, 'b) t Effect.t\n  | E_cast : {\n      t_in : ('a, 'b) t;\n      target_dtype : ('c, 'd) Dtype.t;\n    }\n      -> ('c, 'd) t Effect.t\n  | E_contiguous : { t_in : ('a, 'b) t } -> ('a, 'b) t Effect.t\n  | E_copy : { t_in : ('a, 'b) t } -> ('a, 'b) t Effect.t\n  | E_assign : { dst : ('a, 'b) t; src : ('a, 'b) t } -> unit Effect.t\n  | E_threefry : {\n      key : (int32, Dtype.int32_elt) t;\n      ctr : (int32, Dtype.int32_elt) t;\n    }\n      -> (int32, Dtype.int32_elt) t Effect.t\n  | E_gather : {\n      data : ('a, 'b) t;\n      indices : (int32, Dtype.int32_elt) t;\n      axis : int;\n    }\n      -> ('a, 'b) t Effect.t\n  | E_scatter : {\n      data_template : ('a, 'b) t;\n      indices : (int32, Dtype.int32_elt) t;\n      updates : ('a, 'b) t;\n      axis : int;\n    }\n      -> ('a, 'b) t Effect.t\n  | E_to_device : {\n      context : context;\n      t_in : ('a, 'b) t;\n    }\n      -> ('a, 'b) t Effect.t\n  | E_unfold : {\n      t_in : ('a, 'b) t;\n      kernel_size : int array;\n      stride : int array;\n      dilation : int array;\n      padding : (int * int) array;\n    }\n      -> ('a, 'b) t Effect.t\n  | E_fold : {\n      t_in : ('a, 'b) t;\n      output_size : int array;\n      kernel_size : int array;\n      stride : int array;\n      dilation : int array;\n      padding : (int * int) array;\n    }\n      -> ('a, 'b) t Effect.t\n  | E_matmul : { a : ('a, 'b) t; b : ('a, 'b) t } -> ('a, 'b) t Effect.t\n  | E_fft : {\n      t : (Complex.t, 'b) t;\n      axes : int array;\n    }\n      -> (Complex.t, 'b) t Effect.t\n  | E_ifft : {\n      t : (Complex.t, 'b) t;\n      axes : int array;\n    }\n      -> (Complex.t, 'b) t Effect.t\n  | E_rfft : {\n      t : 
(float, 'b) t;\n      axes : int array;\n    }\n      -> (Complex.t, Dtype.complex64_elt) t Effect.t\n  | E_irfft : {\n      t : (Complex.t, 'b) t;\n      axes : int array;\n      s : int array option;\n    }\n      -> (float, Dtype.float64_elt) t Effect.t\n  | E_psum : { t_in : ('a, 'b) t } -> ('a, 'b) t Effect.t\n  | E_cholesky : { t_in : ('a, 'b) t; upper : bool } -> ('a, 'b) t Effect.t\n  | E_qr : {\n      t_in : ('a, 'b) t;\n      reduced : bool;\n    }\n      -> (('a, 'b) t * ('a, 'b) t) Effect.t\n  | E_svd : {\n      t_in : ('a, 'b) t;\n      full_matrices : bool;\n    }\n      -> (('a, 'b) t * (float, Dtype.float64_elt) t * ('a, 'b) t) Effect.t\n  | E_eig : {\n      t_in : ('a, 'b) t;\n      vectors : bool;\n    }\n      -> ((Complex.t, Dtype.complex64_elt) t\n         * (Complex.t, Dtype.complex64_elt) t option)\n         Effect.t\n  | E_eigh : {\n      t_in : ('a, 'b) t;\n      vectors : bool;\n    }\n      -> ((float, Dtype.float64_elt) t * ('a, 'b) t option) Effect.t\n  | E_triangular_solve : {\n      a : ('a, 'b) t;\n      b : ('a, 'b) t;\n      upper : bool;\n      transpose : bool;\n      unit_diag : bool;\n    }\n      -> ('a, 'b) t Effect.t\n\n(* Unwrap *)\n\nlet unwrap (T t) = t\n\n(* Lenses *)\n\nlet create_context () : context = Nx_backend.create_context ()\nlet context (type a b) (T t : (a, b) t) = Nx_backend.context t\nlet to_device (_ctx : context) (t : ('a, 'b) t) : ('a, 'b) t = t\n\nlet view (type a b) (x : (a, b) t) : View.t =\n  try Effect.perform (E_view x)\n  with Effect.Unhandled _ -> Nx_backend.view (unwrap x)\n\nlet dtype (type a b) (T t : (a, b) t) = Nx_backend.dtype t\nlet to_host (type a b) (T t : (a, b) t) = Nx_backend.to_host t\n\n(* Fallback dispatch helpers.\n\n   Each helper performs an effect. When no handler is installed, it falls back\n   to the C backend. The pattern is uniform: try the effect, on [Unhandled]\n   unwrap the [T] and call [Nx_backend]. 
*)\n\nlet binary_op eff cpu_op a b =\n  try Effect.perform (eff ())\n  with Effect.Unhandled _ -> T (cpu_op (unwrap a) (unwrap b))\n\nlet unary_op eff cpu_op t_in =\n  try Effect.perform (eff ())\n  with Effect.Unhandled _ -> T (cpu_op (unwrap t_in))\n\nlet reduce_op eff cpu_op ~axes ~keepdims t_in =\n  try Effect.perform (eff ())\n  with Effect.Unhandled _ -> T (cpu_op ~axes ~keepdims (unwrap t_in))\n\nlet movement_op eff cpu_op t_in arg =\n  try Effect.perform (eff ())\n  with Effect.Unhandled _ -> T (cpu_op (unwrap t_in) arg)\n\nlet assign dst src =\n  try Effect.perform (E_assign { dst; src })\n  with Effect.Unhandled _ -> Nx_backend.assign (unwrap dst) (unwrap src)\n\n(* Binary operations *)\n\nlet add a b = binary_op (fun () -> E_add { a; b }) Nx_backend.add a b\nlet sub a b = binary_op (fun () -> E_sub { a; b }) Nx_backend.sub a b\nlet mul a b = binary_op (fun () -> E_mul { a; b }) Nx_backend.mul a b\nlet max a b = binary_op (fun () -> E_max { a; b }) Nx_backend.max a b\nlet min a b = binary_op (fun () -> E_min { a; b }) Nx_backend.min a b\nlet mod_ a b = binary_op (fun () -> E_mod { a; b }) Nx_backend.mod_ a b\nlet pow a b = binary_op (fun () -> E_pow { a; b }) Nx_backend.pow a b\nlet xor a b = binary_op (fun () -> E_xor { a; b }) Nx_backend.xor a b\nlet or_ a b = binary_op (fun () -> E_or { a; b }) Nx_backend.or_ a b\nlet and_ a b = binary_op (fun () -> E_and { a; b }) Nx_backend.and_ a b\nlet atan2 a b = binary_op (fun () -> E_atan2 { a; b }) Nx_backend.atan2 a b\n\nlet div a b =\n  let dt = dtype a in\n  if Dtype.is_int dt || Dtype.is_uint dt then\n    binary_op (fun () -> E_idiv { a; b }) Nx_backend.div a b\n  else binary_op (fun () -> E_fdiv { a; b }) Nx_backend.div a b\n\n(* Comparison operations *)\n\nlet cmpeq a b =\n  try Effect.perform (E_cmpeq { a; b })\n  with Effect.Unhandled _ -> T (Nx_backend.cmpeq (unwrap a) (unwrap b))\n\nlet cmpne a b =\n  try Effect.perform (E_cmpne { a; b })\n  with Effect.Unhandled _ -> T (Nx_backend.cmpne (unwrap a) 
(unwrap b))\n\nlet cmplt a b =\n  try Effect.perform (E_cmplt { a; b })\n  with Effect.Unhandled _ -> T (Nx_backend.cmplt (unwrap a) (unwrap b))\n\nlet cmple a b =\n  try Effect.perform (E_cmple { a; b })\n  with Effect.Unhandled _ -> T (Nx_backend.cmple (unwrap a) (unwrap b))\n\n(* Unary operations *)\n\nlet neg t = unary_op (fun () -> E_neg { t_in = t }) Nx_backend.neg t\nlet sin t = unary_op (fun () -> E_sin { t_in = t }) Nx_backend.sin t\nlet sqrt t = unary_op (fun () -> E_sqrt { t_in = t }) Nx_backend.sqrt t\nlet recip t = unary_op (fun () -> E_recip { t_in = t }) Nx_backend.recip t\nlet log t = unary_op (fun () -> E_log { t_in = t }) Nx_backend.log t\nlet exp t = unary_op (fun () -> E_exp { t_in = t }) Nx_backend.exp t\nlet cos t = unary_op (fun () -> E_cos { t_in = t }) Nx_backend.cos t\nlet abs t = unary_op (fun () -> E_abs { t_in = t }) Nx_backend.abs t\nlet sign t = unary_op (fun () -> E_sign { t_in = t }) Nx_backend.sign t\nlet tan t = unary_op (fun () -> E_tan { t_in = t }) Nx_backend.tan t\nlet asin t = unary_op (fun () -> E_asin { t_in = t }) Nx_backend.asin t\nlet acos t = unary_op (fun () -> E_acos { t_in = t }) Nx_backend.acos t\nlet atan t = unary_op (fun () -> E_atan { t_in = t }) Nx_backend.atan t\nlet sinh t = unary_op (fun () -> E_sinh { t_in = t }) Nx_backend.sinh t\nlet cosh t = unary_op (fun () -> E_cosh { t_in = t }) Nx_backend.cosh t\nlet tanh t = unary_op (fun () -> E_tanh { t_in = t }) Nx_backend.tanh t\nlet trunc t = unary_op (fun () -> E_trunc { t_in = t }) Nx_backend.trunc t\nlet ceil t = unary_op (fun () -> E_ceil { t_in = t }) Nx_backend.ceil t\nlet floor t = unary_op (fun () -> E_floor { t_in = t }) Nx_backend.floor t\nlet round t = unary_op (fun () -> E_round { t_in = t }) Nx_backend.round t\nlet erf t = unary_op (fun () -> E_erf { t_in = t }) Nx_backend.erf t\n\nlet op_psum t_in =\n  try Effect.perform (E_psum { t_in })\n  with Effect.Unhandled _ -> failwith \"psum must be used under vmap\"\n\n(* Reduction operations *)\n\nlet 
reduce_sum ~axes ~keepdims t_in =\n  reduce_op\n    (fun () -> E_reduce_sum { t_in; axes; keepdims })\n    Nx_backend.reduce_sum ~axes ~keepdims t_in\n\nlet reduce_max ~axes ~keepdims t_in =\n  reduce_op\n    (fun () -> E_reduce_max { t_in; axes; keepdims })\n    Nx_backend.reduce_max ~axes ~keepdims t_in\n\nlet reduce_min ~axes ~keepdims t_in =\n  reduce_op\n    (fun () -> E_reduce_min { t_in; axes; keepdims })\n    Nx_backend.reduce_min ~axes ~keepdims t_in\n\nlet reduce_prod ~axes ~keepdims t_in =\n  reduce_op\n    (fun () -> E_reduce_prod { t_in; axes; keepdims })\n    Nx_backend.reduce_prod ~axes ~keepdims t_in\n\nlet argmax ~axis ~keepdims t_in =\n  try Effect.perform (E_argmax { t_in; axis; keepdims })\n  with Effect.Unhandled _ ->\n    T (Nx_backend.argmax ~axis ~keepdims (unwrap t_in))\n\nlet argmin ~axis ~keepdims t_in =\n  try Effect.perform (E_argmin { t_in; axis; keepdims })\n  with Effect.Unhandled _ ->\n    T (Nx_backend.argmin ~axis ~keepdims (unwrap t_in))\n\nlet associative_scan ~axis ~op t_in =\n  try Effect.perform (E_associative_scan { t_in; axis; op })\n  with Effect.Unhandled _ ->\n    T (Nx_backend.associative_scan ~axis ~op (unwrap t_in))\n\nlet sort ~axis ~descending t_in =\n  try Effect.perform (E_sort { t_in; axis; descending })\n  with Effect.Unhandled _ ->\n    T (Nx_backend.sort ~axis ~descending (unwrap t_in))\n\nlet argsort ~axis ~descending t_in =\n  try Effect.perform (E_argsort { t_in; axis; descending })\n  with Effect.Unhandled _ ->\n    T (Nx_backend.argsort ~axis ~descending (unwrap t_in))\n\n(* Movement operations *)\n\nlet reshape t_in new_shape =\n  movement_op\n    (fun () -> E_reshape { t_in; new_shape })\n    Nx_backend.reshape t_in new_shape\n\nlet expand t_in new_target_shape =\n  movement_op\n    (fun () -> E_expand { t_in; new_target_shape })\n    Nx_backend.expand t_in new_target_shape\n\nlet permute t_in axes =\n  movement_op (fun () -> E_permute { t_in; axes }) Nx_backend.permute t_in axes\n\nlet shrink t_in 
limits =\n  movement_op\n    (fun () -> E_shrink { t_in; limits })\n    Nx_backend.shrink t_in limits\n\nlet flip t_in dims_to_flip =\n  movement_op\n    (fun () -> E_flip { t_in; dims_to_flip })\n    Nx_backend.flip t_in dims_to_flip\n\nlet pad t_in padding_config fill_value =\n  try Effect.perform (E_pad { t_in; padding_config; fill_value })\n  with Effect.Unhandled _ ->\n    T (Nx_backend.pad (unwrap t_in) padding_config fill_value)\n\n(* Creation operations *)\n\nlet buffer ctx dtype shape_arr =\n  let size_in_elements = Array.fold_left ( * ) 1 shape_arr in\n  let flat =\n    try Effect.perform (E_buffer { context = ctx; dtype; size_in_elements })\n    with Effect.Unhandled _ -> T (Nx_backend.buffer ctx dtype shape_arr)\n  in\n  reshape flat shape_arr\n\nlet const_scalar ctx value dtype =\n  try Effect.perform (E_const_scalar { context = ctx; value; dtype })\n  with Effect.Unhandled _ -> T (Nx_backend.full ctx dtype [||] value)\n\nlet full ctx dtype shape_arr value =\n  T (Nx_backend.full ctx dtype shape_arr value)\n\nlet from_host ctx array =\n  try Effect.perform (E_from_host { context = ctx; array })\n  with Effect.Unhandled _ -> T (Nx_backend.from_host ctx array)\n\n(* Copy operations *)\n\nlet contiguous t_in =\n  try Effect.perform (E_contiguous { t_in })\n  with Effect.Unhandled _ -> T (Nx_backend.contiguous (unwrap t_in))\n\nlet copy t_in =\n  try Effect.perform (E_copy { t_in })\n  with Effect.Unhandled _ -> T (Nx_backend.copy (unwrap t_in))\n\n(* Ternary operations *)\n\nlet where condition if_true if_false =\n  try Effect.perform (E_where { condition; if_true; if_false })\n  with Effect.Unhandled _ ->\n    T (Nx_backend.where (unwrap condition) (unwrap if_true) (unwrap if_false))\n\n(* Cat *)\n\nlet cat t_list ~axis =\n  try Effect.perform (E_cat { t_list; axis })\n  with Effect.Unhandled _ -> T (Nx_backend.cat (List.map unwrap t_list) ~axis)\n\n(* Cast *)\n\nlet cast (type a b c d) ~(dtype : (c, d) Dtype.t) (t_in : (a, b) t) : (c, d) t =\n  let 
target_dtype = dtype in\n  try Effect.perform (E_cast { t_in; target_dtype })\n  with Effect.Unhandled _ ->\n    T (Nx_backend.cast ~dtype:target_dtype (unwrap t_in))\n\n(* Indexed access *)\n\nlet gather data indices ~axis =\n  try Effect.perform (E_gather { data; indices; axis })\n  with Effect.Unhandled _ ->\n    T (Nx_backend.gather (unwrap data) (unwrap indices) ~axis)\n\nlet scatter ?(mode = `Set) ?(unique_indices = false) data_template ~indices\n    ~updates ~axis =\n  try Effect.perform (E_scatter { data_template; indices; updates; axis })\n  with Effect.Unhandled _ ->\n    T\n      (Nx_backend.scatter ~mode ~unique_indices (unwrap data_template)\n         ~indices:(unwrap indices) ~updates:(unwrap updates) ~axis)\n\n(* Random *)\n\nlet threefry key ctr =\n  try Effect.perform (E_threefry { key; ctr })\n  with Effect.Unhandled _ -> T (Nx_backend.threefry (unwrap key) (unwrap ctr))\n\n(* Window operations *)\n\nlet unfold t_in ~kernel_size ~stride ~dilation ~padding =\n  try Effect.perform (E_unfold { t_in; kernel_size; stride; dilation; padding })\n  with Effect.Unhandled _ ->\n    T (Nx_backend.unfold (unwrap t_in) ~kernel_size ~stride ~dilation ~padding)\n\nlet fold t_in ~output_size ~kernel_size ~stride ~dilation ~padding =\n  try\n    Effect.perform\n      (E_fold { t_in; output_size; kernel_size; stride; dilation; padding })\n  with Effect.Unhandled _ ->\n    T\n      (Nx_backend.fold (unwrap t_in) ~output_size ~kernel_size ~stride ~dilation\n         ~padding)\n\n(* Matrix operations *)\n\nlet matmul a b =\n  try Effect.perform (E_matmul { a; b })\n  with Effect.Unhandled _ -> T (Nx_backend.matmul (unwrap a) (unwrap b))\n\n(* FFT operations *)\n\nlet fft ?out t ~axes =\n  try Effect.perform (E_fft { t; axes })\n  with Effect.Unhandled _ ->\n    T (Nx_backend.fft ?out:(Option.map unwrap out) (unwrap t) ~axes)\n\nlet ifft ?out t ~axes =\n  try Effect.perform (E_ifft { t; axes })\n  with Effect.Unhandled _ ->\n    T (Nx_backend.ifft ?out:(Option.map 
unwrap out) (unwrap t) ~axes)\n\nlet rfft (type a c) ?out (t : (float, a) t) ~(dtype : (Complex.t, c) Dtype.t)\n    ~axes : (Complex.t, c) t =\n  let result =\n    Nx_backend.rfft ?out:(Option.map unwrap out) (unwrap t) ~dtype ~axes\n  in\n  (T result : (Complex.t, c) t)\n\nlet irfft (type a c) ?out ?s (t : (Complex.t, a) t)\n    ~(dtype : (float, c) Dtype.t) ~axes : (float, c) t =\n  let result =\n    Nx_backend.irfft ?out:(Option.map unwrap out) ?s (unwrap t) ~dtype ~axes\n  in\n  (T result : (float, c) t)\n\n(* Linear algebra *)\n\nlet cholesky ~upper t_in =\n  try Effect.perform (E_cholesky { t_in; upper })\n  with Effect.Unhandled _ -> T (Nx_backend.cholesky ~upper (unwrap t_in))\n\nlet qr ~reduced t_in =\n  try Effect.perform (E_qr { t_in; reduced })\n  with Effect.Unhandled _ ->\n    let q, r = Nx_backend.qr ~reduced (unwrap t_in) in\n    (T q, T r)\n\nlet svd ~full_matrices t_in =\n  try Effect.perform (E_svd { t_in; full_matrices })\n  with Effect.Unhandled _ ->\n    let u, s, vt = Nx_backend.svd ~full_matrices (unwrap t_in) in\n    (T u, T s, T vt)\n\nlet eig ~vectors t_in =\n  try Effect.perform (E_eig { t_in; vectors })\n  with Effect.Unhandled _ ->\n    let vals, vecs_opt = Nx_backend.eig ~vectors (unwrap t_in) in\n    (T vals, Option.map (fun v -> T v) vecs_opt)\n\nlet eigh ~vectors t_in =\n  try Effect.perform (E_eigh { t_in; vectors })\n  with Effect.Unhandled _ ->\n    let vals, vecs_opt = Nx_backend.eigh ~vectors (unwrap t_in) in\n    (T vals, Option.map (fun v -> T v) vecs_opt)\n\nlet triangular_solve ~upper ~transpose ~unit_diag a b =\n  try Effect.perform (E_triangular_solve { a; b; upper; transpose; unit_diag })\n  with Effect.Unhandled _ ->\n    T\n      (Nx_backend.triangular_solve ~upper ~transpose ~unit_diag (unwrap a)\n         (unwrap b))\n"
  },
  {
    "path": "packages/nx/lib/io/dune",
    "content": "(library\n (name nx_io)\n (public_name nx.io)\n (libraries nx_buffer nx nx_core unix zip stb_image stb_image_write))\n"
  },
  {
    "path": "packages/nx/lib/io/error.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\ntype t =\n  | Io_error of string\n  | Format_error of string\n  | Unsupported_dtype\n  | Unsupported_shape\n  | Missing_entry of string\n  | Other of string\n\nlet to_string = function\n  | Io_error msg -> Printf.sprintf \"I/O error: %s\" msg\n  | Format_error msg -> Printf.sprintf \"Format error: %s\" msg\n  | Unsupported_dtype -> \"Unsupported dtype\"\n  | Unsupported_shape -> \"Unsupported shape\"\n  | Missing_entry name -> Printf.sprintf \"Missing entry: %s\" name\n  | Other msg -> msg\n\nlet fail_msg fmt = Printf.ksprintf failwith fmt\n"
  },
  {
    "path": "packages/nx/lib/io/npy.ml",
    "content": "(*---------------------------------------------------------------------------\n  NumPy .npy and .npz file format reader/writer.\n\n  Based on ocaml-npy by Laurent Mazare.\n  Original: https://github.com/LaurentMazare/ocaml-npy\n  SPDX-License-Identifier: Apache-2.0\n\n  Copyright 2018 Laurent Mazare\n  Copyright 2026 The Raven authors (modifications)\n  ---------------------------------------------------------------------------*)\n\nlet strf = Printf.sprintf\n\n(* Errors *)\n\nexception Read_error of string\n\nlet read_error fmt = Printf.ksprintf (fun s -> raise (Read_error s)) fmt\n\n(* Constants *)\n\nlet magic = \"\\147NUMPY\"\nlet magic_len = String.length magic\n\n(* Byte-level genarray I/O for extended kinds (no C stubs needed) *)\n\nlet really_write_fd fd buf off len =\n  let rec loop off remaining =\n    if remaining > 0 then\n      let w = Unix.write fd buf off remaining in\n      loop (off + w) (remaining - w)\n  in\n  loop off len\n\nlet as_flat_c ga =\n  let n = Array.fold_left ( * ) 1 (Nx_buffer.genarray_dims ga) in\n  let ga = Nx_buffer.genarray_change_layout ga Bigarray.C_layout in\n  (n, Nx_buffer.of_genarray (Bigarray.reshape ga [| n |]))\n\nlet write_genarray_to_fd fd ga =\n  let n, buf = as_flat_c ga in\n  let byte_size =\n    n * Nx_buffer.kind_size_in_bytes (Nx_buffer.genarray_kind ga)\n  in\n  let bytes = Bytes.create byte_size in\n  Nx_buffer.blit_to_bytes ~src_off:0 ~dst_off:0 ~len:n buf bytes;\n  really_write_fd fd bytes 0 byte_size\n\nlet read_fd_to_genarray fd ga =\n  let n, buf = as_flat_c ga in\n  let byte_size =\n    n * Nx_buffer.kind_size_in_bytes (Nx_buffer.genarray_kind ga)\n  in\n  let bytes = Bytes.create byte_size in\n  let rec loop off =\n    if off < byte_size then (\n      let r = Unix.read fd bytes off (byte_size - off) in\n      if r = 0 then read_error \"unexpected eof reading tensor data\";\n      loop (off + r))\n  in\n  loop 0;\n  Nx_buffer.blit_from_bytes ~src_off:0 ~dst_off:0 ~len:n bytes buf\n\n(* Dtype 
string encoding *)\n\ntype packed_kind = K : (_, _) Nx_buffer.kind -> packed_kind\n\nlet dtype_string (K kind) =\n  let endian =\n    match kind with\n    | Nx_buffer.Int8_signed | Int8_unsigned | Bool -> \"|\"\n    | _ -> if Sys.big_endian then \">\" else \"<\"\n  in\n  let descr =\n    match kind with\n    | Nx_buffer.Float16 -> \"f2\"\n    | Float32 -> \"f4\"\n    | Float64 -> \"f8\"\n    | Bfloat16 -> \"f2\"\n    | Float8_e4m3 -> \"f1\"\n    | Float8_e5m2 -> \"f1\"\n    | Int8_signed -> \"i1\"\n    | Int8_unsigned -> \"u1\"\n    | Int16_signed -> \"i2\"\n    | Int16_unsigned -> \"u2\"\n    | Int32 -> \"i4\"\n    | Int64 -> \"i8\"\n    | Uint32 -> \"u4\"\n    | Uint64 -> \"u8\"\n    | Int4_signed -> \"i1\"\n    | Int4_unsigned -> \"u1\"\n    | Complex32 -> \"c8\"\n    | Complex64 -> \"c16\"\n    | Bool -> \"b1\"\n  in\n  endian ^ descr\n\nlet kind_of_descr = function\n  | \"f4\" -> K Float32\n  | \"f8\" -> K Float64\n  | \"i4\" -> K Int32\n  | \"i8\" -> K Int64\n  | \"u4\" -> K Uint32\n  | \"u8\" -> K Uint64\n  | \"u1\" -> K Int8_unsigned\n  | \"i1\" -> K Int8_signed\n  | \"u2\" -> K Int16_unsigned\n  | \"i2\" -> K Int16_signed\n  | \"c8\" -> K Complex32\n  | \"c16\" -> K Complex64\n  | \"b1\" -> K Bool\n  | s -> read_error \"unsupported dtype descriptor %s\" s\n\n(* Header parsing *)\n\n(* Split a string on [on], respecting parentheses depth *)\nlet header_split str ~on =\n  let parens = ref 0 in\n  let cuts = ref [] in\n  for i = 0 to String.length str - 1 do\n    match str.[i] with\n    | '(' -> incr parens\n    | ')' -> decr parens\n    | c when !parens = 0 && c = on -> cuts := i :: !cuts\n    | _ -> ()\n  done;\n  List.fold_left\n    (fun (prev, acc) i -> (i, String.sub str (i + 1) (prev - i - 1) :: acc))\n    (String.length str, [])\n    !cuts\n  |> fun (first, acc) -> String.sub str 0 first :: acc\n\n(* Trim characters from both ends *)\nlet header_trim str ~on =\n  let len = String.length str in\n  let rec scan_left i =\n    if i >= len then i else if 
List.mem str.[i] on then scan_left (i + 1) else i\n  in\n  let rec scan_right j =\n    if j <= 0 then j\n    else if List.mem str.[j - 1] on then scan_right (j - 1)\n    else j\n  in\n  let l = scan_left 0 in\n  let r = scan_right len in\n  if l >= r then \"\" else String.sub str l (r - l)\n\ntype header = { kind : packed_kind; fortran_order : bool; shape : int array }\n\nlet parse_header s =\n  let s = header_trim s ~on:[ '{'; ' '; '}'; '\\n' ] in\n  let fields =\n    header_split s ~on:',' |> List.map String.trim\n    |> List.filter (fun s -> String.length s > 0)\n    |> List.map (fun field ->\n        match header_split field ~on:':' with\n        | [ name; value ] ->\n            ( header_trim name ~on:[ '\\''; ' ' ],\n              header_trim value ~on:[ '\\''; ' '; '('; ')' ] )\n        | _ -> read_error \"unable to parse header field %s\" field)\n  in\n  let find name =\n    try List.assoc name fields\n    with Not_found -> read_error \"missing header field %s\" name\n  in\n  let kind =\n    let descr = find \"descr\" in\n    (match descr.[0] with\n    | '|' | '=' -> ()\n    | '>' ->\n        if not Sys.big_endian then\n          read_error \"big endian data on little endian arch\"\n    | '<' ->\n        if Sys.big_endian then\n          read_error \"little endian data on big endian arch\"\n    | c -> read_error \"unknown endianness marker %c\" c);\n    kind_of_descr (String.sub descr 1 (String.length descr - 1))\n  in\n  let fortran_order =\n    match find \"fortran_order\" with\n    | \"False\" -> false\n    | \"True\" -> true\n    | s -> read_error \"invalid fortran_order %s\" s\n  in\n  let shape =\n    find \"shape\" |> header_split ~on:',' |> List.map String.trim\n    |> List.filter (fun s -> String.length s > 0)\n    |> List.map int_of_string |> Array.of_list\n  in\n  { kind; fortran_order; shape }\n\n(* Header writing *)\n\nlet shape_string dims =\n  match dims with\n  | [| n |] -> strf \"%d,\" n\n  | _ -> Array.to_list dims |> List.map 
string_of_int |> String.concat \", \"\n\nlet fortran_string (type a) (layout : a Bigarray.layout) =\n  match layout with\n  | Bigarray.C_layout -> \"False\"\n  | Bigarray.Fortran_layout -> \"True\"\n\nlet encode_header ~layout ~packed_kind ~dims =\n  let header =\n    strf \"{'descr': '%s', 'fortran_order': %s, 'shape': (%s), }\"\n      (dtype_string packed_kind) (fortran_string layout) (shape_string dims)\n  in\n  let total_len = String.length header + magic_len + 4 + 1 in\n  let pad = if total_len mod 16 = 0 then 0 else 16 - (total_len mod 16) in\n  let header_len = String.length header + pad + 1 in\n  strf \"%s\\001\\000%c%c%s%s\\n\" magic\n    (header_len mod 256 |> Char.chr)\n    (header_len / 256 |> Char.chr)\n    header (String.make pad ' ')\n\n(* Low-level I/O *)\n\nlet with_fd path flags perm f =\n  let fd = Unix.openfile path flags perm in\n  Fun.protect ~finally:(fun () -> Unix.close fd) (fun () -> f fd)\n\nlet really_read_fd fd n =\n  let buf = Bytes.create n in\n  let rec loop off =\n    if off >= n then ()\n    else\n      let r = Unix.read fd buf off (n - off) in\n      if r = 0 then read_error \"unexpected eof\";\n      loop (off + r)\n  in\n  loop 0;\n  Bytes.to_string buf\n\n(* Create a genarray backed by the file, or allocate + read for extended\n   kinds *)\nlet map_or_read fd ~pos kind layout shape =\n  let is_scalar = Array.length shape = 0 in\n  let actual = if is_scalar then [| 1 |] else shape in\n  let ga =\n    match Nx_buffer.to_stdlib_kind kind with\n    | Some std_kind -> Unix.map_file fd ~pos std_kind layout false actual\n    | None ->\n        let ga = Nx_buffer.genarray_create kind layout actual in\n        ignore (Unix.lseek fd (Int64.to_int pos) Unix.SEEK_SET);\n        read_fd_to_genarray fd ga;\n        ga\n  in\n  if is_scalar then Bigarray.reshape ga [||] else ga\n\n(* Npy read/write *)\n\ntype packed = P : (_, _, _) Bigarray.Genarray.t -> packed\n\nlet read_copy path =\n  with_fd path [ O_RDONLY ] 0 @@ fun fd ->\n  let magic' 
= really_read_fd fd magic_len in\n  if magic <> magic' then read_error \"not a .npy file (bad magic)\";\n  let version = Char.code (really_read_fd fd 2).[0] in\n  let hdr_len_bytes =\n    match version with\n    | 1 -> 2\n    | 2 -> 4\n    | v -> read_error \"unsupported npy version %d\" v\n  in\n  let hdr_len_str = really_read_fd fd hdr_len_bytes in\n  let hdr_len = ref 0 in\n  for i = String.length hdr_len_str - 1 downto 0 do\n    hdr_len := (256 * !hdr_len) + Char.code hdr_len_str.[i]\n  done;\n  let hdr = parse_header (really_read_fd fd !hdr_len) in\n  let pos = Int64.of_int (!hdr_len + hdr_len_bytes + magic_len + 2) in\n  let (K kind) = hdr.kind in\n  let build layout =\n    let src = map_or_read fd ~pos kind layout hdr.shape in\n    let dst =\n      Nx_buffer.genarray_create kind layout (Nx_buffer.genarray_dims src)\n    in\n    Nx_buffer.genarray_blit src dst;\n    P dst\n  in\n  if hdr.fortran_order then build Bigarray.Fortran_layout\n  else build Bigarray.C_layout\n\nlet write ga path =\n  with_fd path [ O_CREAT; O_TRUNC; O_RDWR ] 0o640 @@ fun fd ->\n  let kind = Nx_buffer.genarray_kind ga in\n  let dims = Nx_buffer.genarray_dims ga in\n  let layout = Bigarray.Genarray.layout ga in\n  let hdr = encode_header ~layout ~packed_kind:(K kind) ~dims in\n  let hdr_len = String.length hdr in\n  if Unix.write_substring fd hdr 0 hdr_len <> hdr_len then\n    failwith \"npy: incomplete header write\";\n  match Nx_buffer.to_stdlib_kind kind with\n  | Some std_kind ->\n      let dst =\n        Unix.map_file fd ~pos:(Int64.of_int hdr_len) std_kind layout true dims\n      in\n      Bigarray.Genarray.blit ga dst\n  | None ->\n      ignore (Unix.lseek fd hdr_len Unix.SEEK_SET);\n      write_genarray_to_fd fd ga\n\n(* Npz read/write (via camlzip) *)\n\nmodule Npz = struct\n  let npy_suffix = \".npy\"\n\n  type in_file = Zip.in_file\n  type out_file = Zip.out_file\n\n  let open_in = Zip.open_in\n  let close_in = Zip.close_in\n  let open_out = Zip.open_out\n  let close_out = 
Zip.close_out\n\n  let entries t =\n    List.map\n      (fun (entry : Zip.entry) ->\n        let name = entry.Zip.filename in\n        let suf_len = String.length npy_suffix in\n        if\n          String.length name >= suf_len\n          && String.sub name (String.length name - suf_len) suf_len = npy_suffix\n        then String.sub name 0 (String.length name - suf_len)\n        else name)\n      (Zip.entries t)\n\n  let read t name =\n    let entry_name = name ^ npy_suffix in\n    let entry = Zip.find_entry t entry_name in\n    let tmp = Filename.temp_file \"npy\" \".tmp\" in\n    Fun.protect ~finally:(fun () -> Sys.remove tmp) @@ fun () ->\n    Zip.copy_entry_to_file t entry tmp;\n    read_copy tmp\n\n  let write t name ga =\n    let entry_name = name ^ npy_suffix in\n    let tmp = Filename.temp_file \"npy\" \".tmp\" in\n    Fun.protect ~finally:(fun () -> Sys.remove tmp) @@ fun () ->\n    write ga tmp;\n    Zip.copy_file_to_entry tmp t entry_name\nend\n"
  },
  {
    "path": "packages/nx/lib/io/nx_io.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nlet strf = Printf.sprintf\n\n(* Errors *)\n\nlet err_file_exists path = strf \"file already exists: %s\" path\nlet err_unsupported_ext ext = strf \"unsupported image format: %s\" ext\nlet err_bad_dims n s = strf \"expected 2 or 3 dimensions, got %d (%s)\" n s\n\n(* Packed tensors *)\n\ntype packed = Packed_nx.t = P : ('a, 'b) Nx.t -> packed\ntype archive = (string, packed) Hashtbl.t\ntype packed_dtype = Dtype : ('a, 'b) Nx.dtype -> packed_dtype\n\nlet to_typed dtype packed = Packed_nx.to_typed dtype packed\nlet packed_dtype (P nx) = Dtype (Nx.dtype nx)\nlet packed_shape (P nx) = Nx.shape nx\n\n(* Result unwrapping *)\n\nlet unwrap = function Ok v -> v | Error err -> failwith (Error.to_string err)\n\n(* Images *)\n\nlet load_image ?(grayscale = false) path =\n  let channels = if grayscale then 1 else 3 in\n  match Stb_image.load ~channels path with\n  | Error (`Msg msg) -> failwith msg\n  | Ok img ->\n      let h = Stb_image.height img in\n      let w = Stb_image.width img in\n      let c = Stb_image.channels img in\n      let buf = Nx_buffer.of_bigarray1 (Stb_image.data img) in\n      let n = Nx_buffer.length buf in\n      let t = Nx.of_buffer buf ~shape:[| n |] in\n      let shape = if c = 1 then [| h; w |] else [| h; w; c |] in\n      Nx.reshape shape t\n\nlet save_image ?(overwrite = true) path img =\n  if (not overwrite) && Sys.file_exists path then\n    failwith (err_file_exists path);\n  let h, w, c =\n    match Nx.shape img with\n    | [| h; w |] -> (h, w, 1)\n    | [| h; w; c |] -> (h, w, c)\n    | s ->\n        let dims =\n          Array.to_list s |> List.map string_of_int |> String.concat \"x\"\n        in\n        failwith (err_bad_dims (Array.length s) dims)\n  in\n  let buf = 
Nx.to_buffer img in\n  let data =\n    match Nx_buffer.kind buf with\n    | Int8_unsigned -> Nx_buffer.to_bigarray1 buf\n    | _ -> failwith \"save_image: expected uint8 tensor\"\n  in\n  let ext = String.lowercase_ascii (Filename.extension path) in\n  match ext with\n  | \".png\" -> Stb_image_write.png path ~w ~h ~c data\n  | \".bmp\" -> Stb_image_write.bmp path ~w ~h ~c data\n  | \".tga\" -> Stb_image_write.tga path ~w ~h ~c data\n  | \".jpg\" | \".jpeg\" -> Stb_image_write.jpg path ~w ~h ~c ~quality:90 data\n  | _ -> failwith (err_unsupported_ext ext)\n\n(* NumPy *)\n\nlet load_npy path = Nx_npy.load_npy path |> unwrap\nlet save_npy ?overwrite path arr = Nx_npy.save_npy ?overwrite path arr |> unwrap\nlet load_npz path = Nx_npy.load_npz path |> unwrap\nlet load_npz_entry ~name path = Nx_npy.load_npz_entry ~name path |> unwrap\n\nlet save_npz ?overwrite path items =\n  Nx_npy.save_npz ?overwrite path items |> unwrap\n\n(* SafeTensors *)\n\nlet load_safetensors path = Nx_safetensors.load_safetensors path |> unwrap\n\nlet save_safetensors ?overwrite path items =\n  Nx_safetensors.save_safetensors ?overwrite path items |> unwrap\n\n(* Text *)\n\nlet save_txt ?sep ?append ?newline ?header ?footer ?comments path arr =\n  Nx_txt.save ?sep ?append ?newline ?header ?footer ?comments ~out:path arr\n  |> unwrap\n\nlet load_txt ?sep ?comments ?skiprows ?max_rows path dtype =\n  Nx_txt.load ?sep ?comments ?skiprows ?max_rows dtype path |> unwrap\n"
  },
  {
    "path": "packages/nx/lib/io/nx_io.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Tensor I/O.\n\n    Load and save {!Nx} tensors in common formats: images (PNG, JPEG, BMP, TGA),\n    NumPy (.npy, .npz), SafeTensors, and delimited text.\n\n    All functions raise [Failure] on errors. *)\n\n(** {1:packed Packed tensors} *)\n\ntype packed =\n  | P : ('a, 'b) Nx.t -> packed\n      (** An existentially packed tensor. Use {!to_typed} to recover a typed\n          tensor. *)\n\ntype archive = (string, packed) Hashtbl.t\n(** Named tensors. Returned by {!load_npz} and {!load_safetensors}. *)\n\ntype packed_dtype =\n  | Dtype : ('a, 'b) Nx.dtype -> packed_dtype\n      (** An existentially packed dtype. *)\n\nval to_typed : ('a, 'b) Nx.dtype -> packed -> ('a, 'b) Nx.t\n(** [to_typed dtype p] is the tensor in [p] with [dtype].\n\n    Raises [Failure] if the packed tensor has a different dtype. *)\n\nval packed_dtype : packed -> packed_dtype\n(** [packed_dtype p] is the dtype of [p]. *)\n\nval packed_shape : packed -> int array\n(** [packed_shape p] is the shape of [p]. *)\n\n(** {1:image Images} *)\n\nval load_image : ?grayscale:bool -> string -> (int, Nx.uint8_elt) Nx.t\n(** [load_image ?grayscale path] loads an image as a uint8 tensor.\n\n    [grayscale] defaults to [false]. Shape is [[h; w]] when [grayscale] is\n    [true], [[h; w; c]] otherwise.\n\n    Raises [Failure] on I/O or decoding errors. *)\n\nval save_image : ?overwrite:bool -> string -> (int, Nx.uint8_elt) Nx.t -> unit\n(** [save_image ?overwrite path t] writes [t] to [path].\n\n    Format is inferred from extension (.png, .jpg, .bmp, .tga). Accepted shapes\n    are [[h; w]], [[h; w; 1]], [[h; w; 3]], and [[h; w; 4]]. 
[overwrite]\n    defaults to [true].\n\n    Raises [Failure] on unsupported shape, extension, or I/O errors. *)\n\n(** {1:numpy NumPy formats} *)\n\nval load_npy : string -> packed\n(** [load_npy path] loads a tensor from a [.npy] file.\n\n    Raises [Failure] on I/O or format errors. *)\n\nval save_npy : ?overwrite:bool -> string -> ('a, 'b) Nx.t -> unit\n(** [save_npy ?overwrite path t] writes [t] to a [.npy] file.\n\n    [overwrite] defaults to [true].\n\n    Raises [Failure] on I/O errors. *)\n\nval load_npz : string -> archive\n(** [load_npz path] loads all tensors from an [.npz] archive.\n\n    Raises [Failure] on I/O or format errors. *)\n\nval load_npz_entry : name:string -> string -> packed\n(** [load_npz_entry ~name path] loads a single entry from an [.npz] archive.\n\n    Raises [Failure] if [name] is missing or the archive is invalid. *)\n\nval save_npz : ?overwrite:bool -> string -> (string * packed) list -> unit\n(** [save_npz ?overwrite path entries] writes named tensors to an [.npz]\n    archive.\n\n    [overwrite] defaults to [true].\n\n    Raises [Failure] on I/O errors. *)\n\n(** {1:safetensors SafeTensors} *)\n\nval load_safetensors : string -> archive\n(** [load_safetensors path] loads all tensors from a SafeTensors file.\n\n    Raises [Failure] on I/O or format errors. *)\n\nval save_safetensors :\n  ?overwrite:bool -> string -> (string * packed) list -> unit\n(** [save_safetensors ?overwrite path entries] writes named tensors to a\n    SafeTensors file.\n\n    [overwrite] defaults to [true].\n\n    Raises [Failure] on I/O errors. *)\n\n(** {1:text Text format} *)\n\nval load_txt :\n  ?sep:string ->\n  ?comments:string ->\n  ?skiprows:int ->\n  ?max_rows:int ->\n  string ->\n  ('a, 'b) Nx.dtype ->\n  ('a, 'b) Nx.t\n(** [load_txt ?sep ?comments ?skiprows ?max_rows path dtype] parses delimited\n    text into a tensor.\n\n    [sep] defaults to [\" \"]. [comments] defaults to [\"#\"]. [skiprows] defaults\n    to [0]. 
The result is 1D or 2D depending on parsed data.\n\n    Raises [Failure] on I/O or parse errors. *)\n\nval save_txt :\n  ?sep:string ->\n  ?append:bool ->\n  ?newline:string ->\n  ?header:string ->\n  ?footer:string ->\n  ?comments:string ->\n  string ->\n  ('a, 'b) Nx.t ->\n  unit\n(** [save_txt ?sep ?append ?newline ?header ?footer ?comments path t] writes a\n    scalar, vector, or matrix tensor to delimited text.\n\n    [sep] defaults to [\" \"]. [append] defaults to [false]. [newline] defaults to\n    [\"\\n\"]. [comments] defaults to [\"# \"].\n\n    Raises [Failure] on unsupported dtype/shape or I/O errors. *)\n"
  },
  {
    "path": "packages/nx/lib/io/nx_npy.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Error\nopen Packed_nx\n\nlet strf = Printf.sprintf\n\n(* Convert genarray from Npy (fortran layout) to Nx (c layout) *)\nlet npy_to_nx (Npy.P ga) =\n  let ga = Nx_buffer.genarray_change_layout ga Bigarray.C_layout in\n  let shape = Nx_buffer.genarray_dims ga in\n  P (Nx.of_buffer (Nx_buffer.of_genarray ga) ~shape)\n\n(* Uniform exception-to-result conversion *)\nlet wrap_exn f =\n  try f () with\n  | Npy.Read_error msg -> Error (Format_error msg)\n  | Zip.Error (name, func, msg) ->\n      Error (Io_error (strf \"zip: %s in %s: %s\" name func msg))\n  | Unix.Unix_error (e, _, _) -> Error (Io_error (Unix.error_message e))\n  | Sys_error msg -> Error (Io_error msg)\n  | Failure msg -> Error (Format_error msg)\n  | ex -> Error (Other (Printexc.to_string ex))\n\nlet check_overwrite overwrite path =\n  if (not overwrite) && Sys.file_exists path then\n    failwith (strf \"file already exists: %s\" path)\n\n(* Npy *)\n\nlet load_npy path = wrap_exn @@ fun () -> Ok (npy_to_nx (Npy.read_copy path))\n\nlet save_npy ?(overwrite = true) path arr =\n  wrap_exn @@ fun () ->\n  check_overwrite overwrite path;\n  let buf = Nx.to_buffer arr in\n  let shape = Nx.shape arr in\n  Npy.write (Nx_buffer.to_genarray buf shape) path;\n  Ok ()\n\n(* Npz *)\n\nlet load_npz path =\n  wrap_exn @@ fun () ->\n  let zi = Npy.Npz.open_in path in\n  Fun.protect ~finally:(fun () -> Npy.Npz.close_in zi) @@ fun () ->\n  let entries = Npy.Npz.entries zi in\n  let archive = Hashtbl.create (List.length entries) in\n  List.iter\n    (fun name -> Hashtbl.add archive name (npy_to_nx (Npy.Npz.read zi name)))\n    entries;\n  Ok archive\n\nlet load_npz_entry ~name path =\n  wrap_exn @@ fun () ->\n  let zi = Npy.Npz.open_in path in\n  
Fun.protect ~finally:(fun () -> Npy.Npz.close_in zi) @@ fun () ->\n  match Npy.Npz.read zi name with\n  | packed -> Ok (npy_to_nx packed)\n  | exception Not_found -> Error (Missing_entry name)\n\nlet save_npz ?(overwrite = true) path items =\n  wrap_exn @@ fun () ->\n  check_overwrite overwrite path;\n  let zo = Npy.Npz.open_out path in\n  Fun.protect ~finally:(fun () -> Npy.Npz.close_out zo) @@ fun () ->\n  List.iter\n    (fun (name, P nx) ->\n      let buf = Nx.to_buffer nx in\n      Npy.Npz.write zo name (Nx_buffer.to_genarray buf (Nx.shape nx)))\n    items;\n  Ok ()\n"
  },
  {
    "path": "packages/nx/lib/io/nx_safetensors.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Error\nopen Packed_nx\n\nlet strf = Printf.sprintf\n\n(* Little-endian byte encoding/decoding *)\n\nlet read_i32_le s off =\n  let b0 = Char.code s.[off] in\n  let b1 = Char.code s.[off + 1] in\n  let b2 = Char.code s.[off + 2] in\n  let b3 = Char.code s.[off + 3] in\n  Int32.(\n    logor\n      (shift_left (of_int b3) 24)\n      (logor\n         (shift_left (of_int b2) 16)\n         (logor (shift_left (of_int b1) 8) (of_int b0))))\n\nlet write_i32_le bytes off v =\n  Bytes.set bytes off (Char.chr (Int32.to_int (Int32.logand v 0xffl)));\n  Bytes.set bytes (off + 1)\n    (Char.chr (Int32.to_int (Int32.logand (Int32.shift_right v 8) 0xffl)));\n  Bytes.set bytes (off + 2)\n    (Char.chr (Int32.to_int (Int32.logand (Int32.shift_right v 16) 0xffl)));\n  Bytes.set bytes (off + 3)\n    (Char.chr (Int32.to_int (Int32.logand (Int32.shift_right v 24) 0xffl)))\n\n(* Error conversion *)\n\nlet wrap_exn f =\n  try f () with\n  | Sys_error msg -> Error (Io_error msg)\n  | ex -> Error (Other (Printexc.to_string ex))\n\nlet check_overwrite overwrite path =\n  if (not overwrite) && Sys.file_exists path then\n    failwith (strf \"file already exists: %s\" path)\n\n(* Tensor construction helpers *)\n\nlet make_tensor kind shape n f =\n  let ba = Nx_buffer.create kind n in\n  for i = 0 to n - 1 do\n    Nx_buffer.unsafe_set ba i (f i)\n  done;\n  Nx.reshape shape (Nx.of_buffer ba ~shape:[| n |])\n\n(* Byte-swap 16-bit elements in [buf] from native to little-endian or back *)\nlet swap_16 buf n =\n  for i = 0 to n - 1 do\n    let pos = i * 2 in\n    let b0 = Bytes.get buf pos in\n    Bytes.set buf pos (Bytes.get buf (pos + 1));\n    Bytes.set buf (pos + 1) b0\n  done\n\n(* Load 16-bit LE data into a tensor, 
byte-swapping on big-endian *)\nlet blit_tensor_16le kind shape n data offset =\n  let byte_len = n * 2 in\n  let ba = Nx_buffer.create kind n in\n  let tmp = Bytes.create byte_len in\n  if Sys.big_endian then begin\n    for i = 0 to n - 1 do\n      let src = offset + (i * 2) in\n      let dst = i * 2 in\n      Bytes.set tmp dst data.[src + 1];\n      Bytes.set tmp (dst + 1) data.[src]\n    done\n  end\n  else Bytes.blit_string data offset tmp 0 byte_len;\n  Nx_buffer.blit_from_bytes ~src_off:0 ~dst_off:0 ~len:n tmp ba;\n  Nx.reshape shape (Nx.of_buffer ba ~shape:[| n |])\n\n(* Loading *)\n\nlet load_tensor (view : Safetensors.tensor_view) =\n  let shape = Array.of_list view.shape in\n  let n = Array.fold_left ( * ) 1 shape in\n  match view.dtype with\n  | F32 ->\n      let f i =\n        Int32.float_of_bits (read_i32_le view.data (view.offset + (i * 4)))\n      in\n      Some (P (make_tensor Float32 shape n f))\n  | F64 ->\n      let f i =\n        Int64.float_of_bits\n          (Safetensors.read_u64_le view.data (view.offset + (i * 8)))\n      in\n      Some (P (make_tensor Float64 shape n f))\n  | I32 ->\n      let f i = read_i32_le view.data (view.offset + (i * 4)) in\n      Some (P (make_tensor Int32 shape n f))\n  | F16 ->\n      if view.offset land 1 <> 0 then\n        fail_msg \"unaligned float16 tensor offset: %d\" view.offset;\n      Some (P (blit_tensor_16le Float16 shape n view.data view.offset))\n  | BF16 ->\n      if view.offset land 1 <> 0 then\n        fail_msg \"unaligned bfloat16 tensor offset: %d\" view.offset;\n      Some (P (blit_tensor_16le Bfloat16 shape n view.data view.offset))\n  | _ -> None\n\nlet load_safetensors path =\n  wrap_exn @@ fun () ->\n  let ic = open_in_bin path in\n  let buf =\n    Fun.protect ~finally:(fun () -> close_in ic) @@ fun () ->\n    let len = in_channel_length ic in\n    really_input_string ic len\n  in\n  match Safetensors.deserialize buf with\n  | Error err -> Error (Format_error (Safetensors.string_of_error 
err))\n  | Ok st ->\n      let tensors = Safetensors.tensors st in\n      let result = Hashtbl.create (List.length tensors) in\n      List.iter\n        (fun (name, view) ->\n          match load_tensor view with\n          | Some packed -> Hashtbl.add result name packed\n          | None ->\n              Printf.eprintf\n                \"warning: skipping tensor '%s' with unsupported dtype %s\\n\" name\n                (Safetensors.dtype_to_string view.dtype))\n        tensors;\n      Ok result\n\n(* Saving *)\n\nlet tensor_to_bytes (type a b) (arr : (a, b) Nx.t) =\n  let n = Array.fold_left ( * ) 1 (Nx.shape arr) in\n  let buf = Nx.to_buffer (Nx.flatten arr) in\n  match Nx_buffer.kind buf with\n  | Float32 ->\n      let bytes = Bytes.create (n * 4) in\n      for i = 0 to n - 1 do\n        write_i32_le bytes (i * 4)\n          (Int32.bits_of_float (Nx_buffer.unsafe_get buf i))\n      done;\n      (Safetensors.F32, Bytes.unsafe_to_string bytes)\n  | Float64 ->\n      let bytes = Bytes.create (n * 8) in\n      for i = 0 to n - 1 do\n        Safetensors.write_u64_le bytes (i * 8)\n          (Int64.bits_of_float (Nx_buffer.unsafe_get buf i))\n      done;\n      (Safetensors.F64, Bytes.unsafe_to_string bytes)\n  | Int32 ->\n      let bytes = Bytes.create (n * 4) in\n      for i = 0 to n - 1 do\n        write_i32_le bytes (i * 4) (Nx_buffer.unsafe_get buf i)\n      done;\n      (Safetensors.I32, Bytes.unsafe_to_string bytes)\n  | Float16 | Bfloat16 ->\n      let tag =\n        match Nx_buffer.kind buf with\n        | Float16 -> Safetensors.F16\n        | _ -> Safetensors.BF16\n      in\n      let bytes = Bytes.create (n * 2) in\n      Nx_buffer.blit_to_bytes ~src_off:0 ~dst_off:0 ~len:n buf bytes;\n      if Sys.big_endian then swap_16 bytes n;\n      (tag, Bytes.unsafe_to_string bytes)\n  | _ ->\n      fail_msg \"unsupported dtype for safetensors: %s\"\n        (Nx_core.Dtype.of_buffer_kind (Nx_buffer.kind buf)\n        |> Nx_core.Dtype.to_string)\n\nlet 
save_safetensors ?(overwrite = true) path items =\n  wrap_exn @@ fun () ->\n  check_overwrite overwrite path;\n  let tensor_views =\n    List.map\n      (fun (name, P arr) ->\n        let shape = Array.to_list (Nx.shape arr) in\n        let dtype, data = tensor_to_bytes arr in\n        match Safetensors.tensor_view_new ~dtype ~shape ~data with\n        | Ok view -> (name, view)\n        | Error err ->\n            fail_msg \"failed to create tensor view for '%s': %s\" name\n              (Safetensors.string_of_error err))\n      items\n  in\n  match Safetensors.serialize_to_file tensor_views None path with\n  | Ok () -> Ok ()\n  | Error err -> Error (Format_error (Safetensors.string_of_error err))\n"
  },
  {
    "path": "packages/nx/lib/io/nx_txt.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\ntype error = Error.t =\n  | Io_error of string\n  | Format_error of string\n  | Unsupported_dtype\n  | Unsupported_shape\n  | Missing_entry of string\n  | Other of string\n\nlet strf = Printf.sprintf\n\n(* Errors *)\n\nlet err_invalid_literal dtype token =\n  Format_error (strf \"invalid %s literal: %S\" dtype (String.trim token))\n\nlet err_out_of_range dtype token =\n  Format_error\n    (strf \"value %S is out of range for %s\" (String.trim token) dtype)\n\nlet err_skiprows_negative = Format_error \"skiprows must be non-negative\"\nlet err_max_rows_nonpos = Format_error \"max_rows must be strictly positive\"\nlet err_no_data = Format_error \"no data found\"\nlet err_inconsistent_cols = Format_error \"inconsistent number of columns\"\n\n(* Parsing helpers *)\n\nlet try_parse f s = try Some (f s) with _ -> None\nlet float_of_string_opt = try_parse float_of_string\nlet int_of_string_opt = try_parse int_of_string\nlet int32_of_string_opt = try_parse Int32.of_string\nlet int64_of_string_opt = try_parse Int64.of_string\n\nlet parse_float dtype token =\n  match float_of_string_opt token with\n  | Some v -> Ok v\n  | None -> Error (err_invalid_literal dtype token)\n\nlet parse_bool token =\n  let s = String.lowercase_ascii (String.trim token) in\n  match s with\n  | \"true\" | \"t\" | \"yes\" | \"y\" -> Ok true\n  | \"false\" | \"f\" | \"no\" | \"n\" -> Ok false\n  | _ -> (\n      match int_of_string_opt s with\n      | Some 0 -> Ok false\n      | Some _ -> Ok true\n      | None -> (\n          match float_of_string_opt s with\n          | Some f -> Ok (abs_float f > 0.0)\n          | None -> Error (err_invalid_literal \"bool\" token)))\n\nlet parse_int_with_bounds dtype token ~min ~max =\n  match 
int_of_string_opt token with\n  | Some v when v >= min && v <= max -> Ok v\n  | Some _ -> Error (err_out_of_range dtype token)\n  | None -> Error (err_invalid_literal dtype token)\n\n(*---------------------------------------------------------------------------\n  Dtype-specific spec\n  ---------------------------------------------------------------------------*)\n\nmodule type SPEC = sig\n  type elt\n  type kind\n\n  val kind : (elt, kind) Nx_buffer.kind\n  val print : out_channel -> elt -> unit\n  val parse : string -> (elt, error) result\nend\n\nlet print_float oc v = Printf.fprintf oc \"%.18e\" v\nlet print_int oc v = output_string oc (string_of_int v)\nlet print_int32 oc v = output_string oc (Int32.to_string v)\nlet print_int64 oc v = output_string oc (Int64.to_string v)\nlet print_bool oc v = output_string oc (if v then \"1\" else \"0\")\n\nlet parse_i32 name token =\n  match int32_of_string_opt token with\n  | Some v -> Ok v\n  | None -> Error (err_invalid_literal name token)\n\nlet parse_i64 name token =\n  match int64_of_string_opt token with\n  | Some v -> Ok v\n  | None -> Error (err_invalid_literal name token)\n\n(* Each GADT arm must be inlined so the type equalities are visible. 
*)\n\nlet spec_of_dtype (type a b) (dtype : (a, b) Nx.dtype) :\n    (module SPEC with type elt = a and type kind = b) option =\n  let name = Nx_core.Dtype.to_string dtype in\n  let kind = Nx_core.Dtype.to_buffer_kind dtype in\n  let open Nx_core.Dtype in\n  match dtype with\n  | Float16 ->\n      Some\n        (module struct\n          type elt = float\n          type kind = b\n\n          let kind = kind\n          let print = print_float\n          let parse t = parse_float name t\n        end)\n  | Float32 ->\n      Some\n        (module struct\n          type elt = float\n          type kind = b\n\n          let kind = kind\n          let print = print_float\n          let parse t = parse_float name t\n        end)\n  | Float64 ->\n      Some\n        (module struct\n          type elt = float\n          type kind = b\n\n          let kind = kind\n          let print = print_float\n          let parse t = parse_float name t\n        end)\n  | BFloat16 ->\n      Some\n        (module struct\n          type elt = float\n          type kind = b\n\n          let kind = kind\n          let print = print_float\n          let parse t = parse_float name t\n        end)\n  | Int8 ->\n      Some\n        (module struct\n          type elt = int\n          type kind = b\n\n          let kind = kind\n          let print = print_int\n          let parse t = parse_int_with_bounds name t ~min:(-128) ~max:127\n        end)\n  | UInt8 ->\n      Some\n        (module struct\n          type elt = int\n          type kind = b\n\n          let kind = kind\n          let print = print_int\n          let parse t = parse_int_with_bounds name t ~min:0 ~max:255\n        end)\n  | Int16 ->\n      Some\n        (module struct\n          type elt = int\n          type kind = b\n\n          let kind = kind\n          let print = print_int\n          let parse t = parse_int_with_bounds name t ~min:(-32768) ~max:32767\n        end)\n  | UInt16 ->\n      Some\n        (module struct\n          
type elt = int\n          type kind = b\n\n          let kind = kind\n          let print = print_int\n          let parse t = parse_int_with_bounds name t ~min:0 ~max:65535\n        end)\n  | Int32 ->\n      Some\n        (module struct\n          type elt = int32\n          type kind = b\n\n          let kind = kind\n          let print = print_int32\n          let parse t = parse_i32 name t\n        end)\n  | UInt32 ->\n      Some\n        (module struct\n          type elt = int32\n          type kind = b\n\n          let kind = kind\n          let print = print_int32\n          let parse t = parse_i32 name t\n        end)\n  | Int64 ->\n      Some\n        (module struct\n          type elt = int64\n          type kind = b\n\n          let kind = kind\n          let print = print_int64\n          let parse t = parse_i64 name t\n        end)\n  | UInt64 ->\n      Some\n        (module struct\n          type elt = int64\n          type kind = b\n\n          let kind = kind\n          let print = print_int64\n          let parse t = parse_i64 name t\n        end)\n  | Bool ->\n      Some\n        (module struct\n          type elt = bool\n          type kind = b\n\n          let kind = kind\n          let print = print_bool\n          let parse = parse_bool\n        end)\n  | _ -> None\n\n(*---------------------------------------------------------------------------\n  Text field splitting\n  ---------------------------------------------------------------------------*)\n\nlet split_fields sep line =\n  let s = String.trim line in\n  if s = \"\" then [||]\n  else if sep = \"\" then [| s |]\n  else if String.length sep = 1 then\n    s\n    |> String.split_on_char sep.[0]\n    |> List.filter (fun t -> t <> \"\")\n    |> Array.of_list\n  else\n    let sep_len = String.length sep in\n    let len = String.length s in\n    let rec loop acc start =\n      if start >= len then List.rev acc\n      else\n        match String.index_from_opt s start sep.[0] with\n        | 
None ->\n            let part = String.sub s start (len - start) in\n            if part = \"\" then List.rev acc else List.rev (part :: acc)\n        | Some idx ->\n            if idx + sep_len <= len && String.sub s idx sep_len = sep then\n              let part = String.sub s start (idx - start) in\n              let acc = if part = \"\" then acc else part :: acc in\n              loop acc (idx + sep_len)\n            else loop acc (idx + 1)\n    in\n    loop [] 0 |> Array.of_list\n\n(*---------------------------------------------------------------------------\n  Save\n  ---------------------------------------------------------------------------*)\n\nlet write_comment_lines oc comments newline = function\n  | None -> ()\n  | Some text ->\n      List.iter\n        (fun line ->\n          if comments <> \"\" then output_string oc comments;\n          output_string oc line;\n          output_string oc newline)\n        (String.split_on_char '\\n' text)\n\nlet save ?(sep = \" \") ?(append = false) ?(newline = \"\\n\") ?header ?footer\n    ?(comments = \"# \") ~out (type a b) (arr : (a, b) Nx.t) =\n  let shape = Nx.shape arr in\n  let ndim = Array.length shape in\n  if ndim > 2 then Error Unsupported_shape\n  else\n    match spec_of_dtype (Nx.dtype arr) with\n    | None -> Error Unsupported_dtype\n    | Some (module S : SPEC with type elt = a and type kind = b) -> (\n        let perm = 0o666 in\n        let flags =\n          if append then [ Open_wronly; Open_creat; Open_append; Open_text ]\n          else [ Open_wronly; Open_creat; Open_trunc; Open_text ]\n        in\n        try\n          let oc = open_out_gen flags perm out in\n          Fun.protect ~finally:(fun () -> close_out oc) @@ fun () ->\n          write_comment_lines oc comments newline header;\n          let buf = Nx.to_buffer arr in\n          (match ndim with\n          | 0 ->\n              S.print oc (Nx_buffer.get buf 0);\n              output_string oc newline\n          | 1 ->\n              let 
n = shape.(0) in\n              for j = 0 to n - 1 do\n                if j > 0 then output_string oc sep;\n                S.print oc (Nx_buffer.unsafe_get buf j)\n              done;\n              output_string oc newline\n          | _ ->\n              let rows = shape.(0) and cols = shape.(1) in\n              for i = 0 to rows - 1 do\n                for j = 0 to cols - 1 do\n                  if j > 0 then output_string oc sep;\n                  S.print oc (Nx_buffer.unsafe_get buf ((i * cols) + j))\n                done;\n                output_string oc newline\n              done);\n          write_comment_lines oc comments newline footer;\n          Ok ()\n        with\n        | Sys_error msg -> Error (Io_error msg)\n        | Unix.Unix_error (e, _, _) -> Error (Io_error (Unix.error_message e)))\n\n(*---------------------------------------------------------------------------\n  Load\n  ---------------------------------------------------------------------------*)\n\nexception Parse_error of error\n\nlet load ?(sep = \" \") ?(comments = \"#\") ?(skiprows = 0) ?max_rows (type a b)\n    (dtype : (a, b) Nx.dtype) path =\n  if skiprows < 0 then Error err_skiprows_negative\n  else if match max_rows with Some n -> n <= 0 | None -> false then\n    Error err_max_rows_nonpos\n  else\n    match spec_of_dtype dtype with\n    | None -> Error Unsupported_dtype\n    | Some (module S : SPEC with type elt = a and type kind = b) -> (\n        try\n          let ic = open_in path in\n          Fun.protect ~finally:(fun () -> close_in ic) @@ fun () ->\n          let comment_prefix = String.trim comments in\n          let is_comment line =\n            if comment_prefix = \"\" then false\n            else\n              let t = String.trim line in\n              let n = String.length comment_prefix in\n              String.length t >= n && String.sub t 0 n = comment_prefix\n          in\n          (* Read data rows *)\n          let rows_rev = ref [] in\n          let 
col_count = ref (-1) in\n          let rows_read = ref 0 in\n          let rec read skip =\n            if match max_rows with Some n -> !rows_read >= n | None -> false\n            then ()\n            else\n              match input_line ic with\n              | exception End_of_file -> ()\n              | line ->\n                  if skip > 0 then read (skip - 1)\n                  else if is_comment line then read 0\n                  else\n                    let fields = split_fields sep line in\n                    let n = Array.length fields in\n                    if n = 0 then read 0\n                    else begin\n                      if !col_count < 0 then col_count := n\n                      else if n <> !col_count then\n                        raise_notrace (Parse_error err_inconsistent_cols);\n                      rows_rev := fields :: !rows_rev;\n                      incr rows_read;\n                      read 0\n                    end\n          in\n          read skiprows;\n          if !col_count < 0 then Error err_no_data\n          else\n            let cols = !col_count in\n            let rows = Array.of_list (List.rev !rows_rev) in\n            let row_count = Array.length rows in\n            let n = row_count * cols in\n            let buf = Nx_buffer.create S.kind n in\n            for i = 0 to row_count - 1 do\n              let row = rows.(i) in\n              for j = 0 to cols - 1 do\n                match S.parse row.(j) with\n                | Ok v -> Nx_buffer.set buf ((i * cols) + j) v\n                | Error err -> raise_notrace (Parse_error err)\n              done\n            done;\n            let t = Nx.of_buffer buf ~shape:[| row_count; cols |] in\n            let result =\n              if row_count = 1 then Nx.reshape [| cols |] t\n              else if cols = 1 then Nx.reshape [| row_count |] t\n              else t\n            in\n            Ok result\n        with\n        | Parse_error err -> Error err\n      
  | Sys_error msg -> Error (Io_error msg)\n        | Unix.Unix_error (e, _, _) -> Error (Io_error (Unix.error_message e)))\n"
  },
  {
    "path": "packages/nx/lib/io/packed_nx.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nlet strf = Printf.sprintf\n\ntype t = P : ('a, 'b) Nx.t -> t\ntype archive = (string, t) Hashtbl.t\n\nlet err_dtype_mismatch ~expected ~got =\n  strf \"dtype mismatch: expected %s, got %s\" expected got\n\nlet to_typed : type a b. (a, b) Nx.dtype -> t -> (a, b) Nx.t =\n fun target (P nx) ->\n  let source = Nx.dtype nx in\n  match Nx_core.Dtype.equal_witness source target with\n  | Some Type.Equal -> (nx : (a, b) Nx.t)\n  | None ->\n      let expected = Nx_core.Dtype.to_string target in\n      let got = Nx_core.Dtype.to_string source in\n      failwith (err_dtype_mismatch ~expected ~got)\n\nlet packed_shape (P nx) = Nx.shape nx\n"
  },
  {
    "path": "packages/nx/lib/io/safetensors.ml",
    "content": "(*---------------------------------------------------------------------------\n  Safetensors format reader/writer.\n\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nlet strf = Printf.sprintf\n\n(* Result monad *)\n\nlet ( let* ) x f = match x with Ok v -> f v | Error _ as e -> e\n\n(*---------------------------------------------------------------------------\n  Minimal JSON codec (subset needed for safetensors headers)\n  ---------------------------------------------------------------------------*)\n\nmodule Json = struct\n  type t =\n    [ `Assoc of (string * t) list\n    | `String of string\n    | `Int of int\n    | `List of t list ]\n\n  exception Parse_error of string\n\n  (* Serialization *)\n\n  let rec to_string = function\n    | `String s -> strf \"\\\"%s\\\"\" (String.escaped s)\n    | `Int i -> string_of_int i\n    | `List l -> \"[\" ^ String.concat \", \" (List.map to_string l) ^ \"]\"\n    | `Assoc kv ->\n        let pair (k, v) = strf \"\\\"%s\\\": %s\" (String.escaped k) (to_string v) in\n        \"{\" ^ String.concat \", \" (List.map pair kv) ^ \"}\"\n\n  (* Parsing *)\n\n  type parser = { input : string; mutable pos : int }\n\n  let peek p =\n    if p.pos < String.length p.input then Some p.input.[p.pos] else None\n\n  let advance p = p.pos <- p.pos + 1\n\n  let skip_ws p =\n    while\n      p.pos < String.length p.input\n      &&\n      match p.input.[p.pos] with\n      | ' ' | '\\t' | '\\n' | '\\r' -> true\n      | _ -> false\n    do\n      advance p\n    done\n\n  let expect p c =\n    skip_ws p;\n    match peek p with\n    | Some ch when ch = c -> advance p\n    | Some ch -> raise (Parse_error (strf \"expected '%c' got '%c'\" c ch))\n    | None -> raise (Parse_error (strf \"expected '%c' got EOF\" c))\n\n  let parse_string p =\n    expect p '\"';\n    let buf = Buffer.create 16 in\n    let rec loop () 
=\n      match peek p with\n      | None -> raise (Parse_error \"unterminated string\")\n      | Some '\"' ->\n          advance p;\n          Buffer.contents buf\n      | Some '\\\\' ->\n          advance p;\n          (match peek p with\n          | None -> raise (Parse_error \"unterminated escape\")\n          | Some 'n' -> Buffer.add_char buf '\\n'\n          | Some 'r' -> Buffer.add_char buf '\\r'\n          | Some 't' -> Buffer.add_char buf '\\t'\n          | Some (('\"' | '\\\\') as c) -> Buffer.add_char buf c\n          | Some c -> Buffer.add_char buf c);\n          advance p;\n          loop ()\n      | Some c ->\n          Buffer.add_char buf c;\n          advance p;\n          loop ()\n    in\n    loop ()\n\n  let parse_int p =\n    skip_ws p;\n    let start = p.pos in\n    (match peek p with Some '-' -> advance p | _ -> ());\n    while\n      p.pos < String.length p.input\n      && match p.input.[p.pos] with '0' .. '9' -> true | _ -> false\n    do\n      advance p\n    done;\n    let s = String.sub p.input start (p.pos - start) in\n    try int_of_string s with _ -> raise (Parse_error (\"invalid number: \" ^ s))\n\n  let rec parse_value p =\n    skip_ws p;\n    match peek p with\n    | None -> raise (Parse_error \"unexpected EOF\")\n    | Some '\"' -> `String (parse_string p)\n    | Some '{' -> parse_object p\n    | Some '[' -> parse_list p\n    | Some ('-' | '0' .. 
'9') -> `Int (parse_int p)\n    | Some c -> raise (Parse_error (strf \"unexpected char: '%c'\" c))\n\n  and parse_list p =\n    expect p '[';\n    skip_ws p;\n    if peek p = Some ']' then (\n      advance p;\n      `List [])\n    else\n      let rec loop acc =\n        let v = parse_value p in\n        skip_ws p;\n        match peek p with\n        | Some ',' ->\n            advance p;\n            loop (v :: acc)\n        | Some ']' ->\n            advance p;\n            `List (List.rev (v :: acc))\n        | _ -> raise (Parse_error \"expected ',' or ']'\")\n      in\n      loop []\n\n  and parse_object p =\n    expect p '{';\n    skip_ws p;\n    if peek p = Some '}' then (\n      advance p;\n      `Assoc [])\n    else\n      let rec loop acc =\n        skip_ws p;\n        let key = parse_string p in\n        skip_ws p;\n        expect p ':';\n        let value = parse_value p in\n        skip_ws p;\n        match peek p with\n        | Some ',' ->\n            advance p;\n            loop ((key, value) :: acc)\n        | Some '}' ->\n            advance p;\n            `Assoc (List.rev ((key, value) :: acc))\n        | _ -> raise (Parse_error \"expected ',' or '}'\")\n      in\n      loop []\n\n  let from_string s =\n    let p = { input = s; pos = 0 } in\n    try\n      let v = parse_value p in\n      skip_ws p;\n      if p.pos < String.length s then\n        raise (Parse_error \"trailing characters after JSON\");\n      v\n    with Parse_error msg ->\n      raise (Parse_error (strf \"at position %d: %s\" p.pos msg))\n\n  (* Accessors *)\n\n  let to_assoc = function\n    | `Assoc kv -> kv\n    | _ -> raise (Parse_error \"expected object\")\n\n  let to_string_val = function\n    | `String s -> s\n    | _ -> raise (Parse_error \"expected string\")\n\n  let to_int_val = function\n    | `Int i -> i\n    | _ -> raise (Parse_error \"expected integer\")\n\n  let to_list_val = function\n    | `List l -> l\n    | _ -> raise (Parse_error \"expected array\")\n\n  let 
member key = function\n    | `Assoc kv -> List.assoc key kv\n    | _ -> raise (Parse_error \"expected object\")\nend\n\n(*---------------------------------------------------------------------------\n  Safetensors format\n  ---------------------------------------------------------------------------*)\n\n(* Errors *)\n\ntype error =\n  | Invalid_header of string\n  | Invalid_header_deserialization of string\n  | Header_too_large\n  | Header_too_small\n  | Invalid_header_length\n  | Tensor_not_found of string\n  | Tensor_invalid_info\n  | Invalid_offset of string\n  | Io_error of string\n  | Invalid_tensor_view of string * int list * int\n  | Metadata_incomplete_buffer\n  | Validation_overflow\n  | Misaligned_slice\n\nlet string_of_error = function\n  | Invalid_header e -> \"invalid UTF-8 in header: \" ^ e\n  | Invalid_header_deserialization e -> \"invalid JSON in header: \" ^ e\n  | Header_too_large -> \"header too large\"\n  | Header_too_small -> \"header too small\"\n  | Invalid_header_length -> \"invalid header length\"\n  | Tensor_not_found n -> strf \"tensor '%s' not found\" n\n  | Tensor_invalid_info -> \"invalid shape, dtype, or offset for tensor\"\n  | Invalid_offset n -> strf \"invalid offset for tensor '%s'\" n\n  | Io_error e -> \"I/O error: \" ^ e\n  | Invalid_tensor_view (dt, shape, n) ->\n      let dims = List.map string_of_int shape |> String.concat \", \" in\n      strf \"tensor of type %s and shape (%s) can't be created from %d bytes\" dt\n        dims n\n  | Metadata_incomplete_buffer -> \"incomplete metadata, file not fully covered\"\n  | Validation_overflow -> \"overflow computing buffer size\"\n  | Misaligned_slice -> \"slice does not end at a byte boundary\"\n\n(* Dtype *)\n\ntype dtype =\n  | BOOL\n  | F4\n  | F6_E2M3\n  | F6_E3M2\n  | U8\n  | I8\n  | F8_E5M2\n  | F8_E4M3\n  | F8_E8M0\n  | I16\n  | U16\n  | F16\n  | BF16\n  | I32\n  | U32\n  | F32\n  | F64\n  | I64\n  | U64\n\nlet dtype_to_string = function\n  | BOOL -> \"BOOL\"\n  | F4 -> 
\"F4\"\n  | F6_E2M3 -> \"F6_E2M3\"\n  | F6_E3M2 -> \"F6_E3M2\"\n  | U8 -> \"U8\"\n  | I8 -> \"I8\"\n  | F8_E5M2 -> \"F8_E5M2\"\n  | F8_E4M3 -> \"F8_E4M3\"\n  | F8_E8M0 -> \"F8_E8M0\"\n  | I16 -> \"I16\"\n  | U16 -> \"U16\"\n  | F16 -> \"F16\"\n  | BF16 -> \"BF16\"\n  | I32 -> \"I32\"\n  | U32 -> \"U32\"\n  | F32 -> \"F32\"\n  | F64 -> \"F64\"\n  | I64 -> \"I64\"\n  | U64 -> \"U64\"\n\nlet dtype_of_string = function\n  | \"BOOL\" -> Some BOOL\n  | \"F4\" -> Some F4\n  | \"F6_E2M3\" -> Some F6_E2M3\n  | \"F6_E3M2\" -> Some F6_E3M2\n  | \"U8\" -> Some U8\n  | \"I8\" -> Some I8\n  | \"F8_E5M2\" -> Some F8_E5M2\n  | \"F8_E4M3\" -> Some F8_E4M3\n  | \"F8_E8M0\" -> Some F8_E8M0\n  | \"I16\" -> Some I16\n  | \"U16\" -> Some U16\n  | \"F16\" -> Some F16\n  | \"BF16\" -> Some BF16\n  | \"I32\" -> Some I32\n  | \"U32\" -> Some U32\n  | \"F32\" -> Some F32\n  | \"F64\" -> Some F64\n  | \"I64\" -> Some I64\n  | \"U64\" -> Some U64\n  | _ -> None\n\nlet bitsize = function\n  | F4 -> 4\n  | F6_E3M2 | F6_E2M3 -> 6\n  | BOOL | U8 | I8 | F8_E5M2 | F8_E4M3 | F8_E8M0 -> 8\n  | I16 | U16 | F16 | BF16 -> 16\n  | I32 | U32 | F32 -> 32\n  | I64 | U64 | F64 -> 64\n\n(* Alignment rank for serialization ordering (ascending alignment) *)\nlet dtype_rank = function\n  | BOOL -> 0\n  | F4 -> 1\n  | F6_E2M3 -> 2\n  | F6_E3M2 -> 3\n  | U8 -> 4\n  | I8 -> 5\n  | F8_E5M2 -> 6\n  | F8_E4M3 -> 7\n  | F8_E8M0 -> 8\n  | I16 -> 9\n  | U16 -> 10\n  | F16 -> 11\n  | BF16 -> 12\n  | I32 -> 13\n  | U32 -> 14\n  | F32 -> 15\n  | F64 -> 16\n  | I64 -> 17\n  | U64 -> 18\n\n(* Tensor model *)\n\ntype tensor_info = { dtype : dtype; shape : int list; data_offsets : int * int }\n\ntype metadata = {\n  metadata_kv : (string * string) list option;\n  tensors : tensor_info array;\n  index_map : (string, int) Hashtbl.t;\n}\n\n(* UTF-8 validation *)\n\nlet is_valid_utf8 s =\n  let n = String.length s in\n  let i = ref 0 in\n  let ok = ref true in\n  while !ok && !i < n do\n    let c = Char.code s.[!i] in\n    if c land 
0x80 = 0 then incr i\n    else if c land 0xE0 = 0xC0 && !i + 1 < n then begin\n      let c1 = Char.code s.[!i + 1] in\n      if c1 land 0xC0 <> 0x80 || c < 0xC2 then ok := false else i := !i + 2\n    end\n    else if c land 0xF0 = 0xE0 && !i + 2 < n then begin\n      let c1 = Char.code s.[!i + 1] in\n      let c2 = Char.code s.[!i + 2] in\n      if c1 land 0xC0 <> 0x80 || c2 land 0xC0 <> 0x80 then ok := false\n      else if c = 0xE0 && c1 < 0xA0 then ok := false\n      else if c = 0xED && c1 >= 0xA0 then ok := false\n      else i := !i + 3\n    end\n    else if c land 0xF8 = 0xF0 && !i + 3 < n then begin\n      let c1 = Char.code s.[!i + 1] in\n      let c2 = Char.code s.[!i + 2] in\n      let c3 = Char.code s.[!i + 3] in\n      if c1 land 0xC0 <> 0x80 || c2 land 0xC0 <> 0x80 || c3 land 0xC0 <> 0x80\n      then ok := false\n      else if c = 0xF0 && c1 < 0x90 then ok := false\n      else if c = 0xF4 && c1 >= 0x90 then ok := false\n      else if c > 0xF4 then ok := false\n      else i := !i + 4\n    end\n    else ok := false\n  done;\n  !ok\n\n(* Arithmetic with overflow checking *)\n\nlet int64_mul_checked a b =\n  if a = 0L || b = 0L then Ok 0L\n  else if a > Int64.div Int64.max_int b then Error ()\n  else Ok (Int64.mul a b)\n\n(* Validation *)\n\nexception Validate_error of error\n\nlet validate m =\n  let start = ref 0 in\n  let buffer_end = ref 0 in\n  try\n    Array.iteri\n      (fun i info ->\n        let s, e = info.data_offsets in\n        if s <> !start || e < s then begin\n          let name = ref \"unknown\" in\n          Hashtbl.iter (fun k idx -> if idx = i then name := k) m.index_map;\n          raise_notrace (Validate_error (Invalid_offset !name))\n        end;\n        start := e;\n        let ne =\n          List.fold_left\n            (fun acc d ->\n              if d < 0 then raise_notrace (Validate_error Validation_overflow);\n              match int64_mul_checked acc (Int64.of_int d) with\n              | Ok v -> v\n              | Error () -> 
raise_notrace (Validate_error Validation_overflow))\n            1L info.shape\n        in\n        let nbits =\n          match int64_mul_checked ne (Int64.of_int (bitsize info.dtype)) with\n          | Ok v -> v\n          | Error () -> raise_notrace (Validate_error Validation_overflow)\n        in\n        if Int64.rem nbits 8L <> 0L then\n          raise_notrace (Validate_error Misaligned_slice);\n        let size = Int64.to_int (Int64.div nbits 8L) in\n        if e - s <> size then raise_notrace (Validate_error Tensor_invalid_info);\n        buffer_end := e)\n      m.tensors;\n    Ok !buffer_end\n  with Validate_error e -> Error e\n\n(* Little-endian I/O *)\n\nlet read_u64_le s off =\n  let get i = Int64.of_int (Char.code s.[off + i]) in\n  Int64.(\n    logor (get 0)\n      (logor\n         (shift_left (get 1) 8)\n         (logor\n            (shift_left (get 2) 16)\n            (logor\n               (shift_left (get 3) 24)\n               (logor\n                  (shift_left (get 4) 32)\n                  (logor\n                     (shift_left (get 5) 40)\n                     (logor (shift_left (get 6) 48) (shift_left (get 7) 56))))))))\n\nlet write_u64_le b off v =\n  for i = 0 to 7 do\n    Bytes.set b (off + i)\n      (Char.chr\n         (Int64.to_int (Int64.logand (Int64.shift_right v (8 * i)) 0xFFL)))\n  done\n\n(* JSON ↔ metadata *)\n\nlet metadata_to_json m =\n  let names = Array.make (Array.length m.tensors) \"\" in\n  Hashtbl.iter (fun name idx -> names.(idx) <- name) m.index_map;\n  let base =\n    Array.to_list\n      (Array.mapi\n         (fun i ti ->\n           let shape = `List (List.map (fun d -> `Int d) ti.shape) in\n           let s, e = ti.data_offsets in\n           let offs = `List [ `Int s; `Int e ] in\n           ( names.(i),\n             `Assoc\n               [\n                 (\"dtype\", `String (dtype_to_string ti.dtype));\n                 (\"shape\", shape);\n                 (\"data_offsets\", offs);\n               ] ))\n 
        m.tensors)\n  in\n  let kv =\n    match m.metadata_kv with\n    | None -> base\n    | Some md ->\n        let obj = `Assoc (List.map (fun (k, v) -> (k, `String v)) md) in\n        (\"__metadata__\", obj) :: base\n  in\n  Json.to_string (`Assoc kv)\n\nlet parse_tensor_info (name, j) =\n  try\n    let dt_str = j |> Json.member \"dtype\" |> Json.to_string_val in\n    let dt =\n      match dtype_of_string dt_str with\n      | Some d -> d\n      | None -> failwith \"bad dtype\"\n    in\n    let shape =\n      j |> Json.member \"shape\" |> Json.to_list_val |> List.map Json.to_int_val\n    in\n    let s, e =\n      match\n        j |> Json.member \"data_offsets\" |> Json.to_list_val\n        |> List.map Json.to_int_val\n      with\n      | [ s; e ] -> (s, e)\n      | _ -> failwith \"bad offsets\"\n    in\n    Ok (name, { dtype = dt; shape; data_offsets = (s, e) })\n  with e -> Error (Printexc.to_string e)\n\nlet json_to_metadata j : (metadata, error) result =\n  let parse () =\n    let obj = Json.to_assoc j in\n    let md =\n      match List.assoc_opt \"__metadata__\" obj with\n      | None -> Ok None\n      | Some jmd ->\n          let kv = Json.to_assoc jmd in\n          Ok (Some (List.map (fun (k, v) -> (k, Json.to_string_val v)) kv))\n    in\n    let kv_no_md = List.filter (fun (k, _) -> k <> \"__metadata__\") obj in\n    let rec parse_tensors acc = function\n      | [] -> Ok (List.rev acc)\n      | entry :: rest ->\n          let* ti = parse_tensor_info entry in\n          parse_tensors (ti :: acc) rest\n    in\n    let* md = md in\n    let* ts = parse_tensors [] kv_no_md in\n    let ts =\n      List.sort (fun (_, a) (_, b) -> compare a.data_offsets b.data_offsets) ts\n    in\n    let index_map = Hashtbl.create (List.length ts) in\n    let tensors =\n      Array.of_list\n        (List.mapi\n           (fun i (name, t) ->\n             Hashtbl.add index_map name i;\n             t)\n           ts)\n    in\n    Ok { metadata_kv = md; tensors; index_map }\n  
in\n  match parse () with\n  | Error e -> Error (Invalid_header_deserialization e)\n  | Ok m ->\n      let* _ = validate m in\n      Ok m\n\n(* Tensor views *)\n\ntype tensor_view = {\n  dtype : dtype;\n  shape : int list;\n  data : string;\n  offset : int;\n  length : int;\n}\n\nlet tensor_view_new ~dtype ~shape ~data =\n  let nbits =\n    let ne =\n      List.fold_left\n        (fun acc d ->\n          match acc with\n          | Error _ as e -> e\n          | Ok a -> (\n              if d < 0 then Error Validation_overflow\n              else\n                match int64_mul_checked a (Int64.of_int d) with\n                | Ok v -> Ok v\n                | Error () -> Error Validation_overflow))\n        (Ok 1L) shape\n    in\n    match ne with\n    | Error e -> Error e\n    | Ok ne -> (\n        match int64_mul_checked ne (Int64.of_int (bitsize dtype)) with\n        | Ok v -> Ok v\n        | Error () -> Error Validation_overflow)\n  in\n  match nbits with\n  | Error e -> Error e\n  | Ok nb ->\n      if Int64.rem nb 8L <> 0L then Error Misaligned_slice\n      else\n        let size = Int64.to_int (Int64.div nb 8L) in\n        if String.length data <> size then\n          Error\n            (Invalid_tensor_view\n               (dtype_to_string dtype, shape, String.length data))\n        else Ok { dtype; shape; data; offset = 0; length = size }\n\n(* Container *)\n\ntype t = { metadata : metadata; data : string }\n\nlet max_header_size = 100_000_000\nlet header_len_bytes = 8\n\nlet next_multiple_of x k =\n  if k <= 0 || x mod k = 0 then x else x + (k - (x mod k))\n\n(* Deserialization *)\n\nlet deserialize buffer =\n  let len = String.length buffer in\n  if len < header_len_bytes then Error Header_too_small\n  else\n    let n = read_u64_le buffer 0 in\n    if n > Int64.of_int max_header_size then Error Header_too_large\n    else\n      let n_int = Int64.to_int n in\n      let stop =\n        match Int64.to_int (Int64.add n (Int64.of_int header_len_bytes)) with\n   
     | exception _ -> -1\n        | v -> v\n      in\n      if stop < 0 || stop > len then Error Invalid_header_length\n      else\n        let header = String.sub buffer header_len_bytes n_int in\n        if not (is_valid_utf8 header) then Error (Invalid_header \"bad utf8\")\n        else\n          try\n            let j = Json.from_string header in\n            let* m = json_to_metadata j in\n            let* buffer_end = validate m in\n            if buffer_end + header_len_bytes + n_int <> len then\n              Error Metadata_incomplete_buffer\n            else\n              let data =\n                String.sub buffer (header_len_bytes + n_int)\n                  (len - (header_len_bytes + n_int))\n              in\n              Ok { metadata = m; data }\n          with Json.Parse_error e -> Error (Invalid_header_deserialization e)\n\nlet tensors st =\n  let names = ref [] in\n  Hashtbl.iter (fun name _ -> names := name :: !names) st.metadata.index_map;\n  List.map\n    (fun name ->\n      let idx = Hashtbl.find st.metadata.index_map name in\n      let info = st.metadata.tensors.(idx) in\n      let s, e = info.data_offsets in\n      ( name,\n        {\n          dtype = info.dtype;\n          shape = info.shape;\n          data = st.data;\n          offset = s;\n          length = e - s;\n        } ))\n    !names\n\n(* Serialization *)\n\nlet prepare data data_info =\n  let sorted =\n    List.sort\n      (fun (ln, lt) (rn, rt) ->\n        let cmp = compare (dtype_rank rt.dtype) (dtype_rank lt.dtype) in\n        if cmp <> 0 then cmp else compare ln rn)\n      data\n  in\n  let offset = ref 0 in\n  let hmetadata = ref [] in\n  let tensors = ref [] in\n  List.iter\n    (fun (name, t) ->\n      let n = t.length in\n      let ti =\n        {\n          dtype = t.dtype;\n          shape = t.shape;\n          data_offsets = (!offset, !offset + n);\n        }\n      in\n      offset := !offset + n;\n      hmetadata := (name, ti) :: !hmetadata;\n      tensors := 
t :: !tensors)\n    sorted;\n  let hmetadata = List.rev !hmetadata in\n  let index_map = Hashtbl.create (List.length hmetadata) in\n  let tensors_arr =\n    Array.of_list\n      (List.mapi\n         (fun i (name, ti) ->\n           Hashtbl.add index_map name i;\n           ti)\n         hmetadata)\n  in\n  let meta = { metadata_kv = data_info; tensors = tensors_arr; index_map } in\n  let* _ = validate meta in\n  let json = metadata_to_json meta in\n  let n_aligned = next_multiple_of (String.length json) header_len_bytes in\n  let header_bytes =\n    if n_aligned = String.length json then json\n    else\n      let b = Bytes.make n_aligned ' ' in\n      Bytes.blit_string json 0 b 0 (String.length json);\n      Bytes.to_string b\n  in\n  Ok (n_aligned, header_bytes, !offset, List.rev !tensors)\n\nlet serialize_to_file data data_info filename =\n  let* n_aligned, header_bytes, total_data_len, tensors =\n    prepare data data_info\n  in\n  let total = header_len_bytes + n_aligned + total_data_len in\n  let b = Bytes.create total in\n  write_u64_le b 0 (Int64.of_int n_aligned);\n  Bytes.blit_string header_bytes 0 b header_len_bytes n_aligned;\n  let pos = ref (header_len_bytes + n_aligned) in\n  List.iter\n    (fun (tv : tensor_view) ->\n      Bytes.blit_string tv.data tv.offset b !pos tv.length;\n      pos := !pos + tv.length)\n    tensors;\n  try\n    let oc = open_out_bin filename in\n    Fun.protect ~finally:(fun () -> close_out oc) (fun () -> output_bytes oc b);\n    Ok ()\n  with e -> Error (Io_error (Printexc.to_string e))\n"
  },
  {
    "path": "packages/nx/lib/nx.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nmodule F = Nx_core.Make_frontend (Nx_effect)\ninclude F\n\nlet context = Lazy.from_fun Nx_effect.create_context\n\nmodule Rng = struct\n  include Nx_core.Rng\nend\n\n(* Re-export extended type aliases *)\ntype bfloat16_t = (float, Nx_buffer.bfloat16_elt) t\ntype bool_t = (bool, Nx_buffer.bool_elt) t\ntype int4_t = (int, Nx_buffer.int4_signed_elt) t\ntype uint4_t = (int, Nx_buffer.int4_unsigned_elt) t\ntype float8_e4m3_t = (float, Nx_buffer.float8_e4m3_elt) t\ntype float8_e5m2_t = (float, Nx_buffer.float8_e5m2_elt) t\n\n(* Re-export extended dtype value constructors *)\nlet bfloat16 = Nx_core.Dtype.bfloat16\nlet bool = Nx_core.Dtype.bool\nlet int4 = Nx_core.Dtype.int4\nlet uint4 = Nx_core.Dtype.uint4\nlet float8_e4m3 = Nx_core.Dtype.float8_e4m3\nlet float8_e5m2 = Nx_core.Dtype.float8_e5m2\n\n(* ───── Overriding Functions With Default Context ───── *)\n\nlet create dtype shape arr = F.create (Lazy.force context) dtype shape arr\nlet init dtype shape f = F.init (Lazy.force context) dtype shape f\nlet empty dtype shape = F.empty (Lazy.force context) dtype shape\nlet full dtype shape value = F.full (Lazy.force context) dtype shape value\nlet ones dtype shape = F.ones (Lazy.force context) dtype shape\nlet zeros dtype shape = F.zeros (Lazy.force context) dtype shape\nlet scalar dtype v = F.scalar (Lazy.force context) dtype v\nlet eye ?m ?k dtype n = F.eye (Lazy.force context) ?m ?k dtype n\nlet identity dtype n = F.identity (Lazy.force context) dtype n\n\nlet arange dtype start stop step =\n  F.arange (Lazy.force context) dtype start stop step\n\nlet arange_f dtype start stop step =\n  F.arange_f (Lazy.force context) dtype start stop step\n\nlet linspace dtype ?endpoint start stop num =\n  F.linspace 
(Lazy.force context) dtype ?endpoint start stop num\n\nlet logspace dtype ?endpoint ?base start stop num =\n  F.logspace (Lazy.force context) dtype ?endpoint ?base start stop num\n\nlet geomspace dtype ?endpoint start stop num =\n  F.geomspace (Lazy.force context) dtype ?endpoint start stop num\n\nlet of_bigarray ba = F.of_bigarray (Lazy.force context) ba\nlet of_buffer ba ~shape = F.of_buffer (Lazy.force context) ~shape ba\nlet to_bigarray = F.to_bigarray\nlet to_buffer = F.to_buffer\nlet rand dtype shape = F.rand (Lazy.force context) dtype shape\nlet randn dtype shape = F.randn (Lazy.force context) dtype shape\n\nlet randint dtype ?high shape low =\n  F.randint (Lazy.force context) dtype ?high shape low\n\nlet bernoulli ~p shape = F.bernoulli (Lazy.force context) ~p shape\nlet permutation n = F.permutation (Lazy.force context) n\nlet shuffle x = F.shuffle (Lazy.force context) x\n\nlet categorical ?axis ?shape logits =\n  F.categorical (Lazy.force context) ?axis ?shape logits\n\nlet truncated_normal dtype ~lower ~upper shape =\n  F.truncated_normal (Lazy.force context) dtype ~lower ~upper shape\n\n(* ───── FFT ───── *)\n\nlet fftfreq ?d n = F.fftfreq (Lazy.force context) ?d n\nlet rfftfreq ?d n = F.rfftfreq (Lazy.force context) ?d n\n"
  },
  {
    "path": "packages/nx/lib/nx.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** N-dimensional arrays.\n\n    [Nx] provides n-dimensional arrays (tensors) with NumPy-like semantics. A\n    tensor [('a, 'b) t] holds elements of OCaml type ['a] stored in a buffer\n    with element kind ['b].\n\n    {b Tensors, views, and contiguity.} A tensor is a {e view} over a flat\n    buffer described by a shape, strides, and an offset. Operations that only\n    rearrange metadata ({!reshape}, {!transpose}, {!val-slice}, …) return views\n    in O(1) without copying data. Use {!is_c_contiguous} to test whether\n    elements are laid out contiguously in row-major order, and {!contiguous} to\n    obtain a contiguous copy when needed.\n\n    {b Broadcasting.} Binary operations automatically broadcast operands whose\n    shapes differ: dimensions are aligned from the right and each pair must be\n    equal or one of them must be 1.\n\n    {b Immutable tensors.} All operations return freshly allocated tensors. *)\n\n(** {1:types Types} *)\n\ntype ('a, 'b) t = ('a, 'b) Nx_effect.t\n(** The type for tensors with OCaml element type ['a] and buffer element kind\n    ['b]. *)\n\n(** {2:elt_kinds Element kinds}\n\n    Witnesses for the buffer element representation. Used as the second type\n    parameter of {!type-t}. 
*)\n\ntype float16_elt = Nx_buffer.float16_elt\ntype float32_elt = Nx_buffer.float32_elt\ntype float64_elt = Nx_buffer.float64_elt\ntype bfloat16_elt = Nx_buffer.bfloat16_elt\ntype float8_e4m3_elt = Nx_buffer.float8_e4m3_elt\ntype float8_e5m2_elt = Nx_buffer.float8_e5m2_elt\ntype int4_elt = Nx_buffer.int4_signed_elt\ntype uint4_elt = Nx_buffer.int4_unsigned_elt\ntype int8_elt = Nx_buffer.int8_signed_elt\ntype uint8_elt = Nx_buffer.int8_unsigned_elt\ntype int16_elt = Nx_buffer.int16_signed_elt\ntype uint16_elt = Nx_buffer.int16_unsigned_elt\ntype int32_elt = Nx_buffer.int32_elt\ntype uint32_elt = Nx_buffer.uint32_elt\ntype int64_elt = Nx_buffer.int64_elt\ntype uint64_elt = Nx_buffer.uint64_elt\ntype complex32_elt = Nx_buffer.complex32_elt\ntype complex64_elt = Nx_buffer.complex64_elt\ntype bool_elt = Nx_buffer.bool_elt\n\n(** {2:dtype Data types} *)\n\ntype ('a, 'b) dtype = ('a, 'b) Nx_core.Dtype.t =\n  | Float16 : (float, float16_elt) dtype\n  | Float32 : (float, float32_elt) dtype\n  | Float64 : (float, float64_elt) dtype\n  | BFloat16 : (float, bfloat16_elt) dtype\n  | Float8_e4m3 : (float, float8_e4m3_elt) dtype\n  | Float8_e5m2 : (float, float8_e5m2_elt) dtype\n  | Int4 : (int, int4_elt) dtype\n  | UInt4 : (int, uint4_elt) dtype\n  | Int8 : (int, int8_elt) dtype\n  | UInt8 : (int, uint8_elt) dtype\n  | Int16 : (int, int16_elt) dtype\n  | UInt16 : (int, uint16_elt) dtype\n  | Int32 : (int32, int32_elt) dtype\n  | UInt32 : (int32, uint32_elt) dtype\n  | Int64 : (int64, int64_elt) dtype\n  | UInt64 : (int64, uint64_elt) dtype\n  | Complex64 : (Complex.t, complex32_elt) dtype\n  | Complex128 : (Complex.t, complex64_elt) dtype\n  | Bool : (bool, bool_elt) dtype\n      (** The type for data type descriptors. A [('a, 'b) dtype] links the OCaml\n          element type ['a] to its buffer representation ['b]. 
*)\n\n(** {2:tensor_aliases Tensor aliases} *)\n\ntype float16_t = (float, float16_elt) t\ntype float32_t = (float, float32_elt) t\ntype float64_t = (float, float64_elt) t\ntype bfloat16_t = (float, bfloat16_elt) t\ntype float8_e4m3_t = (float, float8_e4m3_elt) t\ntype float8_e5m2_t = (float, float8_e5m2_elt) t\ntype int4_t = (int, int4_elt) t\ntype uint4_t = (int, uint4_elt) t\ntype int8_t = (int, int8_elt) t\ntype uint8_t = (int, uint8_elt) t\ntype int16_t = (int, int16_elt) t\ntype uint16_t = (int, uint16_elt) t\ntype int32_t = (int32, int32_elt) t\ntype uint32_t = (int32, uint32_elt) t\ntype int64_t = (int64, int64_elt) t\ntype uint64_t = (int64, uint64_elt) t\ntype complex64_t = (Complex.t, complex32_elt) t\ntype complex128_t = (Complex.t, complex64_elt) t\ntype bool_t = (bool, bool_elt) t\n\n(** {2:dtype_vals Data type values} *)\n\nval float16 : (float, float16_elt) dtype\nval float32 : (float, float32_elt) dtype\nval float64 : (float, float64_elt) dtype\nval bfloat16 : (float, bfloat16_elt) dtype\nval float8_e4m3 : (float, float8_e4m3_elt) dtype\nval float8_e5m2 : (float, float8_e5m2_elt) dtype\nval int4 : (int, int4_elt) dtype\nval uint4 : (int, uint4_elt) dtype\nval int8 : (int, int8_elt) dtype\nval uint8 : (int, uint8_elt) dtype\nval int16 : (int, int16_elt) dtype\nval uint16 : (int, uint16_elt) dtype\nval int32 : (int32, int32_elt) dtype\nval uint32 : (int32, uint32_elt) dtype\nval int64 : (int64, int64_elt) dtype\nval uint64 : (int64, uint64_elt) dtype\nval complex64 : (Complex.t, complex32_elt) dtype\nval complex128 : (Complex.t, complex64_elt) dtype\nval bool : (bool, bool_elt) dtype\n\n(** {2:index Index specifications} *)\n\n(** The type for index specifications used by {!val-slice} and {!set_slice}. *)\ntype index =\n  | I of int  (** [I i] selects a single index, reducing the dimension. *)\n  | L of int list  (** [L [i0; i1; …]] gathers the listed indices. 
*)\n  | R of int * int\n      (** [R (start, stop)] selects the half-open range \\[[start], [stop]). *)\n  | Rs of int * int * int\n      (** [Rs (start, stop, step)] selects a strided range. *)\n  | A\n      (** [A] selects the entire axis. This is the default for axes not covered\n          by a {!val-slice} specification. *)\n  | M of (bool, bool_elt) t\n      (** [M mask] selects positions where [mask] is [true]. *)\n  | N  (** [N] inserts a new axis of size 1 (does not consume an input axis). *)\n\n(** {1:properties Properties} *)\n\nval data : ('a, 'b) t -> ('a, 'b) Nx_buffer.t\n(** [data t] is the underlying flat buffer of [t].\n\n    The buffer is shared: mutations through the buffer are visible through [t]\n    and vice-versa. The buffer may be larger than the tensor's logical extent\n    when [t] is a strided view. *)\n\nval shape : ('a, 'b) t -> int array\n(** [shape t] is the dimensions of [t]. A scalar tensor has shape [|\\||]. *)\n\nval dtype : ('a, 'b) t -> ('a, 'b) dtype\n(** [dtype t] is the data type of [t]. *)\n\nval strides : ('a, 'b) t -> int array\n(** [strides t] is the byte stride for each dimension of [t].\n\n    Raises [Invalid_argument] if [t] does not have computable strides (e.g.\n    after certain non-contiguous view operations). Use {!is_c_contiguous} or\n    call {!contiguous} first.\n\n    See also {!stride}. *)\n\nval stride : int -> ('a, 'b) t -> int\n(** [stride i t] is the byte stride of dimension [i].\n\n    Raises [Invalid_argument] if [i] is out of bounds or [t] does not have\n    computable strides.\n\n    See also {!strides}. *)\n\nval dims : ('a, 'b) t -> int array\n(** [dims t] is {!shape}. *)\n\nval dim : int -> ('a, 'b) t -> int\n(** [dim i t] is the size of dimension [i].\n\n    Raises [Invalid_argument] if [i] is out of bounds. *)\n\nval ndim : ('a, 'b) t -> int\n(** [ndim t] is the number of dimensions of [t]. *)\n\nval itemsize : ('a, 'b) t -> int\n(** [itemsize t] is the number of bytes per element. 
*)\n\nval size : ('a, 'b) t -> int\n(** [size t] is the total number of elements. *)\n\nval numel : ('a, 'b) t -> int\n(** [numel t] is {!size}. *)\n\nval nbytes : ('a, 'b) t -> int\n(** [nbytes t] is [size t * itemsize t]. *)\n\nval offset : ('a, 'b) t -> int\n(** [offset t] is the element offset of [t] in its underlying buffer. *)\n\nval is_c_contiguous : ('a, 'b) t -> bool\n(** [is_c_contiguous t] is [true] iff [t]'s elements are laid out contiguously\n    in row-major (C) order.\n\n    See also {!contiguous}. *)\n\nval to_bigarray : ('a, 'b) t -> ('a, 'b, Bigarray.c_layout) Bigarray.Genarray.t\n(** [to_bigarray t] is a contiguous bigarray with the same shape and data as\n    [t]. Always copies.\n\n    Raises [Invalid_argument] if [t]'s dtype is an extended type not supported\n    by [Bigarray].\n\n    See also {!of_bigarray}. *)\n\nval to_buffer : ('a, 'b) t -> ('a, 'b) Nx_buffer.t\n(** [to_buffer t] is a flat, contiguous buffer of [t]'s data.\n\n    Returns the underlying buffer directly when [t] is already contiguous with\n    zero offset and matching size; copies otherwise. *)\n\nval to_array : ('a, 'b) t -> 'a array\n(** [to_array t] is a fresh OCaml array containing the elements of [t] in\n    row-major order. Always copies.\n\n    {@ocaml[\n      # let t =\n          create int32 [| 2; 2 |] [| 1l; 2l; 3l; 4l |]\n        in\n        to_array t\n      - : int32 array = [|1l; 2l; 3l; 4l|]\n    ]} *)\n\n(** {1:creation Creation} *)\n\nval create : ('a, 'b) dtype -> int array -> 'a array -> ('a, 'b) t\n(** [create dtype shape data] is a tensor of the given [dtype] and [shape]\n    initialised from [data] in row-major order.\n\n    Raises [Invalid_argument] if [Array.length data] does not equal the product\n    of [shape].\n\n    {@ocaml[\n      # create float32 [| 2; 3 |]\n          [| 1.; 2.; 3.; 4.; 5.; 6. 
|]\n      - : (float, float32_elt) t = float32 [2; 3] [[1, 2, 3],\n                                                   [4, 5, 6]]\n    ]} *)\n\nval init : ('a, 'b) dtype -> int array -> (int array -> 'a) -> ('a, 'b) t\n(** [init dtype shape f] is a tensor where the element at multi-index [i] is\n    [f i].\n\n    {@ocaml[\n      # init int32 [| 2; 3 |]\n          (fun i -> Int32.of_int (i.(0) + i.(1)))\n      - : (int32, int32_elt) t = int32 [2; 3] [[0, 1, 2],\n                                               [1, 2, 3]]\n    ]} *)\n\nval empty : ('a, 'b) dtype -> int array -> ('a, 'b) t\n(** [empty dtype shape] is an uninitialized tensor.\n\n    {b Warning.} Elements contain arbitrary values until written. *)\n\nval full : ('a, 'b) dtype -> int array -> 'a -> ('a, 'b) t\n(** [full dtype shape v] is a tensor filled with [v].\n\n    {@ocaml[\n      # full float32 [| 2; 3 |] 3.14\n      - : (float, float32_elt) t = float32 [2; 3]\n      [[3.14, 3.14, 3.14],\n       [3.14, 3.14, 3.14]]\n    ]} *)\n\nval ones : ('a, 'b) dtype -> int array -> ('a, 'b) t\n(** [ones dtype shape] is a tensor filled with ones. *)\n\nval zeros : ('a, 'b) dtype -> int array -> ('a, 'b) t\n(** [zeros dtype shape] is a tensor filled with zeros. *)\n\nval scalar : ('a, 'b) dtype -> 'a -> ('a, 'b) t\n(** [scalar dtype v] is a 0-dimensional tensor containing [v]. The result has\n    shape [|\\||]. *)\n\nval empty_like : ('a, 'b) t -> ('a, 'b) t\n(** [empty_like t] is {!empty} with the same dtype and shape as [t]. *)\n\nval full_like : ('a, 'b) t -> 'a -> ('a, 'b) t\n(** [full_like t v] is {!full} with the same dtype and shape as [t]. *)\n\nval ones_like : ('a, 'b) t -> ('a, 'b) t\n(** [ones_like t] is {!ones} with the same dtype and shape as [t]. *)\n\nval zeros_like : ('a, 'b) t -> ('a, 'b) t\n(** [zeros_like t] is {!zeros} with the same dtype and shape as [t]. *)\n\nval scalar_like : ('a, 'b) t -> 'a -> ('a, 'b) t\n(** [scalar_like t v] is {!scalar} with the same dtype as [t]. 
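
    The result is 0-dimensional, like {!scalar}:

    {@ocaml[
      # scalar_like (zeros float32 [| 2; 2 |]) 1. |> ndim
      - : int = 0
    ]}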
*)\n\nval eye : ?m:int -> ?k:int -> ('a, 'b) dtype -> int -> ('a, 'b) t\n(** [eye ?m ?k dtype n] is an [n × m] matrix with ones on the [k]-th diagonal\n    and zeros elsewhere. [m] defaults to [n]. [k] defaults to [0] (main\n    diagonal); positive [k] selects an upper diagonal, negative [k] a lower one.\n\n    {@ocaml[\n      # eye int32 3\n      - : (int32, int32_elt) t = int32 [3; 3] [[1, 0, 0],\n                                               [0, 1, 0],\n                                               [0, 0, 1]]\n      # eye ~k:1 int32 3\n      - : (int32, int32_elt) t = int32 [3; 3] [[0, 1, 0],\n                                               [0, 0, 1],\n                                               [0, 0, 0]]\n    ]}\n\n    See also {!identity}, {!diag}. *)\n\nval identity : ('a, 'b) dtype -> int -> ('a, 'b) t\n(** [identity dtype n] is [eye dtype n]. *)\n\nval diag : ?k:int -> ('a, 'b) t -> ('a, 'b) t\n(** [diag ?k v] extracts or constructs a diagonal.\n\n    When [v] is 1-D, returns a 2-D tensor with [v] on the [k]-th diagonal. When\n    [v] is 2-D, returns the [k]-th diagonal as a 1-D tensor. [k] defaults to\n    [0].\n\n    Raises [Invalid_argument] if [v] is not 1-D or 2-D.\n\n    {@ocaml[\n      # let v = create int32 [| 3 |] [| 1l; 2l; 3l |] in\n        diag v\n      - : (int32, int32_elt) t = int32 [3; 3] [[1, 0, 0],\n                                               [0, 2, 0],\n                                               [0, 0, 3]]\n      # let x =\n          arange int32 0 9 1 |> reshape [| 3; 3 |]\n        in\n        diag x\n      - : (int32, int32_elt) t = [0, 4, 8]\n    ]}\n\n    See also {!eye}, {!diagonal}. 
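
    With a non-zero [k], the constructed matrix grows to fit the offset
    diagonal (size [n + abs k], following the usual convention):

    {@ocaml[
      # diag ~k:1 (create int32 [| 2 |] [| 1l; 2l |]) |> shape
      - : int array = [|3; 3|]
    ]}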
*)\n\nval arange : ('a, 'b) dtype -> int -> int -> int -> ('a, 'b) t\n(** [arange dtype start stop step] is a 1-D tensor of values from [start]\n    (inclusive) to [stop] (exclusive) with stride [step].\n\n    Raises [Invalid_argument] if [step = 0].\n\n    {@ocaml[\n      # arange int32 0 10 2\n      - : (int32, int32_elt) t = int32 [5] [0, 2, ..., 6, 8]\n      # arange int32 5 0 (-1)\n      - : (int32, int32_elt) t = int32 [5] [5, 4, ..., 2, 1]\n    ]}\n\n    See also {!arange_f}, {!linspace}. *)\n\nval arange_f : (float, 'a) dtype -> float -> float -> float -> (float, 'a) t\n(** [arange_f dtype start stop step] is like {!arange} for floating-point\n    ranges.\n\n    Raises [Invalid_argument] if [step = 0.0].\n\n    {@ocaml[\n      # arange_f float32 0. 1. 0.2\n      - : (float, float32_elt) t = float32 [5] [0, 0.2, ..., 0.6, 0.8]\n    ]}\n\n    See also {!arange}, {!linspace}. *)\n\nval linspace :\n  ('a, 'b) dtype -> ?endpoint:bool -> float -> float -> int -> ('a, 'b) t\n(** [linspace dtype ?endpoint start stop n] is [n] values evenly spaced from\n    [start] to [stop]. [endpoint] defaults to [true] (include [stop]).\n\n    Raises [Invalid_argument] if [n] is negative.\n\n    {@ocaml[\n      # linspace float32 0. 10. 5\n      - : (float, float32_elt) t = float32 [5] [0, 2.5, ..., 7.5, 10]\n      # linspace float32 ~endpoint:false 0. 10. 5\n      - : (float, float32_elt) t = float32 [5] [0, 2, ..., 6, 8]\n    ]}\n\n    See also {!logspace}, {!geomspace}. *)\n\nval logspace :\n  (float, 'a) dtype ->\n  ?endpoint:bool ->\n  ?base:float ->\n  float ->\n  float ->\n  int ->\n  (float, 'a) t\n(** [logspace dtype ?endpoint ?base start stop n] is [n] values evenly spaced on\n    a logarithmic scale: [base{^x}] where [x] ranges from [start] to [stop].\n    [endpoint] defaults to [true]. [base] defaults to [10.0].\n\n    Raises [Invalid_argument] if [n] is negative.\n\n    {@ocaml[\n      # logspace float32 0. 2. 
3\n      - : (float, float32_elt) t = [1, 10, 100]\n      # logspace float32 ~base:2.0 0. 3. 4\n      - : (float, float32_elt) t = [1, 2, 4, 8]\n    ]}\n\n    See also {!linspace}, {!geomspace}. *)\n\nval geomspace :\n  (float, 'a) dtype -> ?endpoint:bool -> float -> float -> int -> (float, 'a) t\n(** [geomspace dtype ?endpoint start stop n] is [n] values evenly spaced on a\n    geometric (multiplicative) scale. [endpoint] defaults to [true].\n\n    Raises [Invalid_argument] if [start] or [stop] is not positive.\n\n    {@ocaml[\n      # geomspace float32 1. 1000. 4\n      - : (float, float32_elt) t = [1, 10, 100, 1000]\n    ]}\n\n    See also {!linspace}, {!logspace}. *)\n\nval meshgrid :\n  ?indexing:[ `xy | `ij ] -> ('a, 'b) t -> ('a, 'b) t -> ('a, 'b) t * ('a, 'b) t\n(** [meshgrid ?indexing x y] is a pair of 2-D coordinate grids built from 1-D\n    arrays [x] and [y]. [indexing] defaults to [`xy] (Cartesian: X varies along\n    columns, Y along rows). With [`ij] (matrix), X varies along rows, Y along\n    columns.\n\n    Raises [Invalid_argument] if [x] or [y] is not 1-D.\n\n    {@ocaml[\n      # let x = linspace float32 0. 2. 3 in\n        let y = linspace float32 0. 1. 2 in\n        meshgrid x y\n      - : (float, float32_elt) t * (float, float32_elt) t =\n      (float32 [2; 3] [[0, 1, 2],\n                       [0, 1, 2]], float32 [2; 3] [[0, 0, 0],\n                                                   [1, 1, 1]])\n    ]} *)\n\nval tril : ?k:int -> ('a, 'b) t -> ('a, 'b) t\n(** [tril ?k x] is the lower-triangular part of [x] with elements above the\n    [k]-th diagonal set to zero. [k] defaults to [0].\n\n    Raises [Invalid_argument] if [x] has fewer than 2 dimensions.\n\n    See also {!triu}. *)\n\nval triu : ?k:int -> ('a, 'b) t -> ('a, 'b) t\n(** [triu ?k x] is the upper-triangular part of [x] with elements below the\n    [k]-th diagonal set to zero. 
[k] defaults to [0].\n\n    Raises [Invalid_argument] if [x] has fewer than 2 dimensions.\n\n    See also {!tril}. *)\n\nval of_bigarray : ('a, 'b, Bigarray.c_layout) Bigarray.Genarray.t -> ('a, 'b) t\n(** [of_bigarray ba] is a tensor sharing memory with [ba].\n\n    Zero-copy: mutations through either are visible to both.\n\n    See also {!to_bigarray}. *)\n\nval of_buffer : ('a, 'b) Nx_buffer.t -> shape:int array -> ('a, 'b) t\n(** [of_buffer buf ~shape] is a tensor viewing [buf] with the given [shape]. The\n    product of [shape] must equal the buffer length. *)\n\nval one_hot : num_classes:int -> ('a, 'b) t -> (int, uint8_elt) t\n(** [one_hot ~num_classes indices] is a one-hot encoded tensor.\n\n    Appends a new trailing dimension of size [num_classes]. Values in [indices]\n    must lie in \\[[0], [num_classes]). Out-of-range indices produce all-zero\n    rows.\n\n    Raises [Invalid_argument] if [indices] is not an integer dtype or\n    [num_classes <= 0].\n\n    {@ocaml[\n      # let idx =\n          create int32 [| 3 |] [| 0l; 1l; 3l |]\n        in\n        one_hot ~num_classes:4 idx\n      - : (int, uint8_elt) t = uint8 [3; 4]\n      [[1, 0, 0, 0],\n       [0, 1, 0, 0],\n       [0, 0, 0, 1]]\n    ]} *)\n\n(** {1:rng Random number generation}\n\n    Sampling functions use the implicit RNG state managed by {!module-Rng}. Wrap\n    calls in {!Rng.run} for reproducibility:\n\n    {v   Rng.run ~seed:42 (fun () -> rand float32 [| 3 |]) v} *)\n\nmodule Rng : sig\n  (** Splittable RNG keys and implicit key management.\n\n      Keys are deterministic integers that can be split to derive independent\n      subkeys. {!run} and {!with_key} install an effect handler that provides\n      implicit key threading via {!next_key}; outside any handler a domain-local\n      auto-seeded generator is used as a convenient fallback. *)\n\n  (** {1:keys Keys} *)\n\n  type key = int\n  (** The type for RNG keys. 
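
      A key can be split into any number of independent subkeys:

      {@ocaml[
        # Array.length (split ~n:3 (key 42))
        - : int = 3
      ]}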
*)\n\n  val key : int -> key\n  (** [key seed] is a normalized 31-bit non-negative key derived from [seed]. *)\n\n  val split : ?n:int -> key -> key array\n  (** [split ?n k] deterministically derives [n] subkeys from [k].\n\n      [n] defaults to [2]. *)\n\n  val fold_in : key -> int -> key\n  (** [fold_in k data] mixes [data] into [k] and returns the derived key. *)\n\n  val to_int : key -> int\n  (** [to_int k] is [k] as an integer. *)\n\n  (** {1:implicit Implicit key management} *)\n\n  val next_key : unit -> key\n  (** [next_key ()] returns a fresh subkey from the current RNG scope.\n\n      Inside a {!run} or {!with_key} block, each call returns a\n      deterministically derived key. Outside any scope, falls back to a\n      domain-local auto-seeded generator (convenient but non-reproducible).\n\n      Two calls to [next_key ()] always return different keys. *)\n\n  val run : seed:int -> (unit -> 'a) -> 'a\n  (** [run ~seed f] executes [f] in an RNG scope seeded by [seed].\n\n      Every {!next_key} call within [f] returns a deterministically derived key.\n      The same [seed] and the same sequence of [next_key] calls produce the same\n      keys. Scopes nest: an inner [run] replaces the outer scope for its\n      duration. *)\n\n  val with_key : key -> (unit -> 'a) -> 'a\n  (** [with_key k f] executes [f] in an RNG scope initialized from [k].\n\n      This is the explicit-key equivalent of [run]: useful when you have an\n      existing key from a split and want to establish a scope for a\n      sub-computation (e.g. in layer composition). *)\nend\n\nval rand : ('a, 'b) dtype -> int array -> ('a, 'b) t\n(** [rand dtype shape] samples uniformly from \\[[0], [1]).\n\n    Raises [Invalid_argument] if [dtype] is not a float type. 
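
    The result has the requested shape; wrap the call in {!Rng.run} for
    reproducible draws:

    {@ocaml[
      # Rng.run ~seed:42 (fun () -> rand float32 [| 2; 3 |]) |> shape
      - : int array = [|2; 3|]
    ]}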
*)\n\nval randn : ('a, 'b) dtype -> int array -> ('a, 'b) t\n(** [randn dtype shape] samples from the standard normal distribution (mean 0,\n    variance 1) via the Box–Muller transform.\n\n    Raises [Invalid_argument] if [dtype] is not a float type. *)\n\nval randint : ('a, 'b) dtype -> ?high:int -> int array -> int -> ('a, 'b) t\n(** [randint dtype ?high shape low] samples integers uniformly from \\[[low],\n    [high]). [high] defaults to [10].\n\n    Raises [Invalid_argument] if [dtype] is not an integer type or\n    [low >= high]. *)\n\nval bernoulli : p:float -> int array -> bool_t\n(** [bernoulli ~p shape] samples booleans that are [true] with probability [p].\n\n    Raises [Invalid_argument] if [p] is not in \\[[0], [1]\\]. *)\n\nval permutation : int -> int32_t\n(** [permutation n] is a random permutation of \\[[0], [n-1]\\].\n\n    Raises [Invalid_argument] if [n <= 0]. *)\n\nval shuffle : ('a, 'b) t -> ('a, 'b) t\n(** [shuffle t] is a copy of [t] with the first axis randomly permuted. No-op on\n    scalars. *)\n\nval categorical : ?axis:int -> ?shape:int array -> (float, 'a) t -> int32_t\n(** [categorical ?axis ?shape logits] samples category indices from unnormalised\n    log-probabilities using the Gumbel-max trick. [axis] defaults to [-1] (last\n    axis). [shape] prepends extra batch dimensions.\n\n    Raises [Invalid_argument] if [logits] is not a float type or [axis] is out\n    of bounds. *)\n\nval truncated_normal :\n  ('a, 'b) dtype -> lower:float -> upper:float -> int array -> ('a, 'b) t\n(** [truncated_normal dtype ~lower ~upper shape] samples from a standard normal\n    distribution truncated to \\[[lower], [upper]\\].\n\n    Raises [Invalid_argument] if [dtype] is not a float type or\n    [lower >= upper]. 
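
    Values are random, so the example below checks only the shape:

    {@ocaml[
      # truncated_normal float32 ~lower:(-2.) ~upper:2. [| 4 |] |> shape
      - : int array = [|4|]
    ]}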
*)\n\n(** {1:shape Shape manipulation} *)\n\nval reshape : int array -> ('a, 'b) t -> ('a, 'b) t\n(** [reshape shape t] is a view of [t] with the given [shape].\n\n    At most one dimension may be [-1]; it is inferred from the total number of\n    elements. The product of [shape] must equal {!size} [t].\n\n    Raises [Invalid_argument] if [shape] is incompatible or contains more than\n    one [-1].\n\n    {@ocaml[\n      # create int32 [| 6 |] [| 1l; 2l; 3l; 4l; 5l; 6l |]\n        |> reshape [| 2; 3 |]\n      - : (int32, int32_elt) t = int32 [2; 3] [[1, 2, 3],\n                                               [4, 5, 6]]\n      # create int32 [| 6 |] [| 1l; 2l; 3l; 4l; 5l; 6l |]\n        |> reshape [| 3; -1 |]\n      - : (int32, int32_elt) t = int32 [3; 2] [[1, 2],\n                                               [3, 4],\n                                               [5, 6]]\n    ]}\n\n    See also {!flatten}, {!unflatten}, {!ravel}. *)\n\nval broadcast_to : int array -> ('a, 'b) t -> ('a, 'b) t\n(** [broadcast_to shape t] is a view of [t] broadcast to [shape].\n\n    Dimensions are aligned from the right; each dimension of [t] must be [1] or\n    equal to the corresponding target dimension. Broadcast dimensions have zero\n    byte-stride (no copy).\n\n    Raises [Invalid_argument] if the shapes are incompatible.\n\n    {@ocaml[\n      # create int32 [| 1; 3 |] [| 1l; 2l; 3l |]\n        |> broadcast_to [| 3; 3 |]\n      - : (int32, int32_elt) t = int32 [3; 3] [[1, 2, 3],\n                                               [1, 2, 3],\n                                               [1, 2, 3]]\n    ]}\n\n    See also {!broadcasted}, {!expand}. *)\n\nval broadcasted :\n  ?reverse:bool -> ('a, 'b) t -> ('a, 'b) t -> ('a, 'b) t * ('a, 'b) t\n(** [broadcasted ?reverse t1 t2] is [(t1', t2')] where both are broadcast to\n    their common shape. 
When [reverse] is [true] (default [false]), returns\n    [(t2', t1')].\n\n    Raises [Invalid_argument] if the shapes are incompatible.\n\n    See also {!broadcast_to}, {!broadcast_arrays}. *)\n\nval expand : int array -> ('a, 'b) t -> ('a, 'b) t\n(** [expand shape t] is like {!broadcast_to} but [-1] in [shape] preserves the\n    corresponding dimension of [t].\n\n    Raises [Invalid_argument] if any dimension in [shape] is negative (other\n    than [-1]).\n\n    {@ocaml[\n      # ones float32 [| 1; 4; 1 |]\n        |> expand [| 3; -1; 5 |] |> shape\n      - : int array = [|3; 4; 5|]\n    ]}\n\n    See also {!broadcast_to}. *)\n\nval flatten : ?start_dim:int -> ?end_dim:int -> ('a, 'b) t -> ('a, 'b) t\n(** [flatten ?start_dim ?end_dim t] collapses dimensions [start_dim] through\n    [end_dim] (inclusive) into a single dimension. [start_dim] defaults to [0].\n    [end_dim] defaults to [-1] (last). Negative indices count from the end.\n\n    Raises [Invalid_argument] if indices are out of bounds.\n\n    {@ocaml[\n      # zeros float32 [| 2; 3; 4 |] |> flatten |> shape\n      - : int array = [|24|]\n      # zeros float32 [| 2; 3; 4; 5 |]\n        |> flatten ~start_dim:1 ~end_dim:2 |> shape\n      - : int array = [|2; 12; 5|]\n    ]}\n\n    See also {!unflatten}, {!ravel}. *)\n\nval unflatten : int -> int array -> ('a, 'b) t -> ('a, 'b) t\n(** [unflatten dim sizes t] expands dimension [dim] into multiple dimensions\n    given by [sizes]. At most one element of [sizes] may be [-1] (inferred). The\n    product of [sizes] must equal the size of dimension [dim].\n\n    Raises [Invalid_argument] if the product mismatches or [dim] is out of\n    bounds.\n\n    {@ocaml[\n      # zeros float32 [| 2; 12; 5 |]\n        |> unflatten 1 [| 3; 4 |] |> shape\n      - : int array = [|2; 3; 4; 5|]\n    ]}\n\n    See also {!flatten}. *)\n\nval ravel : ('a, 'b) t -> ('a, 'b) t\n(** [ravel t] is [t] reshaped to 1-D. 
Returns a view when possible.\n\n    Raises [Invalid_argument] if [t] cannot be flattened without copying; call\n    {!contiguous} first.\n\n    See also {!flatten}, {!contiguous}. *)\n\nval squeeze : ?axes:int list -> ('a, 'b) t -> ('a, 'b) t\n(** [squeeze ?axes t] removes dimensions of size 1. When [axes] is given, only\n    those axes are removed. Negative indices count from the end.\n\n    Raises [Invalid_argument] if a specified axis does not have size 1.\n\n    {@ocaml[\n      # ones float32 [| 1; 3; 1; 4 |]\n        |> squeeze |> shape\n      - : int array = [|3; 4|]\n      # ones float32 [| 1; 3; 1; 4 |]\n        |> squeeze ~axes:[ 0 ] |> shape\n      - : int array = [|3; 1; 4|]\n    ]}\n\n    See also {!unsqueeze}. *)\n\nval unsqueeze : ?axes:int list -> ('a, 'b) t -> ('a, 'b) t\n(** [unsqueeze ?axes t] inserts dimensions of size 1 at the positions listed in\n    [axes]. Positions refer to the result tensor.\n\n    Raises [Invalid_argument] if [axes] is not specified, contains duplicates,\n    or values are out of bounds.\n\n    {@ocaml[\n      # create float32 [| 3 |] [| 1.; 2.; 3. |]\n        |> unsqueeze ~axes:[ 0; 2 ] |> shape\n      - : int array = [|1; 3; 1|]\n    ]}\n\n    See also {!squeeze}, {!expand_dims}. *)\n\nval squeeze_axis : int -> ('a, 'b) t -> ('a, 'b) t\n(** [squeeze_axis i t] removes dimension [i] if its size is 1.\n\n    Raises [Invalid_argument] if dimension [i] is not 1.\n\n    See also {!squeeze}. *)\n\nval unsqueeze_axis : int -> ('a, 'b) t -> ('a, 'b) t\n(** [unsqueeze_axis i t] inserts a dimension of size 1 at position [i].\n\n    See also {!unsqueeze}. *)\n\nval expand_dims : int list -> ('a, 'b) t -> ('a, 'b) t\n(** [expand_dims axes t] is {!unsqueeze} [~axes t]. *)\n\nval transpose : ?axes:int list -> ('a, 'b) t -> ('a, 'b) t\n(** [transpose ?axes t] permutes the dimensions of [t].\n\n    [axes] must be a permutation of [[0; …; ndim t - 1]]. When omitted, reverses\n    all dimensions. 
Returns a view (no copy).\n\n    Raises [Invalid_argument] if [axes] is not a valid permutation.\n\n    {@ocaml[\n      # create int32 [| 2; 3 |] [| 1l; 2l; 3l; 4l; 5l; 6l |]\n        |> transpose\n      - : (int32, int32_elt) t = int32 [3; 2] [[1, 4],\n                                               [2, 5],\n                                               [3, 6]]\n    ]}\n\n    See also {!matrix_transpose}, {!moveaxis}, {!swapaxes}. *)\n\nval flip : ?axes:int list -> ('a, 'b) t -> ('a, 'b) t\n(** [flip ?axes t] reverses elements along the given [axes]. When omitted, flips\n    all dimensions.\n\n    Raises [Invalid_argument] if any axis is out of bounds.\n\n    {@ocaml[\n      # create int32 [| 2; 3 |] [| 1l; 2l; 3l; 4l; 5l; 6l |]\n        |> flip ~axes:[ 1 ]\n      - : (int32, int32_elt) t = int32 [2; 3] [[3, 2, 1],\n                                               [6, 5, 4]]\n    ]} *)\n\nval moveaxis : int -> int -> ('a, 'b) t -> ('a, 'b) t\n(** [moveaxis src dst t] moves dimension [src] to position [dst].\n\n    Raises [Invalid_argument] if either index is out of bounds.\n\n    See also {!transpose}, {!swapaxes}. *)\n\nval swapaxes : int -> int -> ('a, 'b) t -> ('a, 'b) t\n(** [swapaxes a1 a2 t] exchanges dimensions [a1] and [a2].\n\n    Raises [Invalid_argument] if either index is out of bounds.\n\n    See also {!transpose}, {!moveaxis}. *)\n\nval roll : ?axis:int -> int -> ('a, 'b) t -> ('a, 'b) t\n(** [roll ?axis shift t] shifts elements along [axis] by [shift] positions,\n    wrapping around. When [axis] is omitted, operates on the flattened tensor.\n    Negative [shift] rolls backward.\n\n    Raises [Invalid_argument] if [axis] is out of bounds.\n\n    {@ocaml[\n      # create int32 [| 5 |] [| 1l; 2l; 3l; 4l; 5l |]\n        |> roll 2\n      - : (int32, int32_elt) t = int32 [5] [4, 5, ..., 2, 3]\n    ]} *)\n\nval pad : (int * int) array -> 'a -> ('a, 'b) t -> ('a, 'b) t\n(** [pad widths value t] pads [t] with [value]. 
[widths.(i)] is\n    [(before, after)] for dimension [i].\n\n    Raises [Invalid_argument] if [Array.length widths] does not match {!ndim}\n    [t] or any width is negative.\n\n    {@ocaml[\n      # create float32 [| 2; 2 |] [| 1.; 2.; 3.; 4. |]\n        |> pad [| (1, 1); (1, 1) |] 0. |> shape\n      - : int array = [|4; 4|]\n    ]}\n\n    See also {!shrink}. *)\n\nval shrink : (int * int) array -> ('a, 'b) t -> ('a, 'b) t\n(** [shrink ranges t] extracts a slice where [ranges.(i)] is [(start, stop)]\n    (exclusive) for dimension [i]. Returns a view.\n\n    {@ocaml[\n      # create int32 [| 3; 3 |]\n          [| 1l; 2l; 3l; 4l; 5l; 6l; 7l; 8l; 9l |]\n        |> shrink [| (1, 3); (0, 2) |]\n      - : (int32, int32_elt) t = int32 [2; 2] [[4, 5],\n                                               [7, 8]]\n    ]}\n\n    See also {!pad}. *)\n\nval tile : int array -> ('a, 'b) t -> ('a, 'b) t\n(** [tile reps t] is [t] repeated according to [reps]. [reps.(i)] gives the\n    repetition count along dimension [i]. If [reps] is longer than {!ndim} [t],\n    dimensions are prepended.\n\n    Raises [Invalid_argument] if any repetition count is negative.\n\n    {@ocaml[\n      # create int32 [| 1; 2 |] [| 1l; 2l |]\n        |> tile [| 2; 3 |]\n      - : (int32, int32_elt) t = int32 [2; 6] [[1, 2, ..., 1, 2],\n                                               [1, 2, ..., 1, 2]]\n    ]}\n\n    See also {!repeat}. *)\n\nval repeat : ?axis:int -> int -> ('a, 'b) t -> ('a, 'b) t\n(** [repeat ?axis n t] repeats each element [n] times along [axis]. When [axis]\n    is omitted, operates on the flattened tensor.\n\n    Raises [Invalid_argument] if [n] is negative or [axis] is out of bounds.\n\n    {@ocaml[\n      # create int32 [| 3 |] [| 1l; 2l; 3l |]\n        |> repeat 2\n      - : (int32, int32_elt) t = int32 [6] [1, 1, ..., 3, 3]\n    ]}\n\n    See also {!tile}. 
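
    With [axis], only that dimension grows:

    {@ocaml[
      # ones float32 [| 2; 3 |] |> repeat ~axis:0 2 |> shape
      - : int array = [|4; 3|]
    ]}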
*)\n\n(** {1:combine Combining and splitting} *)\n\nval concatenate : ?axis:int -> ('a, 'b) t list -> ('a, 'b) t\n(** [concatenate ?axis ts] joins tensors along an existing axis. All tensors\n    must have the same shape except on the concatenation axis. When [axis] is\n    omitted, every tensor is flattened first. Always copies.\n\n    Raises [Invalid_argument] if the list is empty or shapes are incompatible.\n\n    {@ocaml[\n      # let a = create int32 [| 2; 2 |] [| 1l; 2l; 3l; 4l |] in\n        let b = create int32 [| 1; 2 |] [| 5l; 6l |] in\n        concatenate ~axis:0 [ a; b ]\n      - : (int32, int32_elt) t = int32 [3; 2] [[1, 2],\n                                               [3, 4],\n                                               [5, 6]]\n    ]}\n\n    See also {!stack}, {!vstack}, {!hstack}. *)\n\nval stack : ?axis:int -> ('a, 'b) t list -> ('a, 'b) t\n(** [stack ?axis ts] joins tensors along a {e new} axis. All tensors must have\n    identical shape. [axis] defaults to [0]. Negative values count from the end\n    of the result shape.\n\n    Raises [Invalid_argument] if the list is empty, shapes differ, or [axis] is\n    out of bounds.\n\n    {@ocaml[\n      # let a = create int32 [| 2 |] [| 1l; 2l |] in\n        let b = create int32 [| 2 |] [| 3l; 4l |] in\n        stack [ a; b ]\n      - : (int32, int32_elt) t = int32 [2; 2] [[1, 2],\n                                               [3, 4]]\n      # let a = create int32 [| 2 |] [| 1l; 2l |] in\n        let b = create int32 [| 2 |] [| 3l; 4l |] in\n        stack ~axis:1 [ a; b ]\n      - : (int32, int32_elt) t = int32 [2; 2] [[1, 3],\n                                               [2, 4]]\n    ]}\n\n    See also {!concatenate}. *)\n\nval vstack : ('a, 'b) t list -> ('a, 'b) t\n(** [vstack ts] stacks vertically (along axis 0). 
1-D tensors are treated as row\n    vectors (shape [[1; n]]).\n\n    Raises [Invalid_argument] if shapes are incompatible.\n\n    {@ocaml[\n      # let a = create int32 [| 3 |] [| 1l; 2l; 3l |] in\n        let b = create int32 [| 3 |] [| 4l; 5l; 6l |] in\n        vstack [ a; b ]\n      - : (int32, int32_elt) t = int32 [2; 3] [[1, 2, 3],\n                                               [4, 5, 6]]\n    ]}\n\n    See also {!hstack}, {!dstack}, {!concatenate}. *)\n\nval hstack : ('a, 'b) t list -> ('a, 'b) t\n(** [hstack ts] stacks horizontally. 1-D tensors are concatenated directly;\n    higher-D tensors concatenate along axis 1.\n\n    Raises [Invalid_argument] if shapes are incompatible.\n\n    {@ocaml[\n      # let a = create int32 [| 2; 1 |] [| 1l; 2l |] in\n        let b = create int32 [| 2; 1 |] [| 3l; 4l |] in\n        hstack [ a; b ]\n      - : (int32, int32_elt) t = int32 [2; 2] [[1, 3],\n                                               [2, 4]]\n    ]}\n\n    See also {!vstack}, {!dstack}, {!concatenate}. *)\n\nval dstack : ('a, 'b) t list -> ('a, 'b) t\n(** [dstack ts] stacks depth-wise (along axis 2). Tensors are reshaped to at\n    least 3-D before concatenation: 1-D [[n]] → [[1; n; 1]], 2-D [[m; n]] →\n    [[m; n; 1]].\n\n    Raises [Invalid_argument] if the resulting shapes are incompatible.\n\n    See also {!vstack}, {!hstack}, {!concatenate}. *)\n\nval broadcast_arrays : ('a, 'b) t list -> ('a, 'b) t list\n(** [broadcast_arrays ts] broadcasts every tensor to their common shape. Returns\n    views (no copies).\n\n    Raises [Invalid_argument] if shapes are incompatible.\n\n    See also {!broadcast_to}, {!broadcasted}. *)\n\nval array_split :\n  axis:int ->\n  [< `Count of int | `Indices of int list ] ->\n  ('a, 'b) t ->\n  ('a, 'b) t list\n(** [array_split ~axis spec t] splits [t] into sub-tensors.\n\n    With [`Count n], divides as evenly as possible (first sections absorb extra\n    elements). 
With [`Indices [i0; i1; …]], splits at the given indices\n    producing [\\[0, i0)], [\\[i0, i1)], …, [\\[ik, end)].\n\n    Raises [Invalid_argument] if [axis] is out of bounds or [spec] is invalid.\n\n    {@ocaml[\n      # create int32 [| 5 |] [| 1l; 2l; 3l; 4l; 5l |]\n        |> array_split ~axis:0 (`Count 3)\n      - : (int32, int32_elt) t list = [[1, 2]; [3, 4]; [5]]\n    ]}\n\n    See also {!split}. *)\n\nval split : axis:int -> int -> ('a, 'b) t -> ('a, 'b) t list\n(** [split ~axis n t] splits [t] into [n] equal parts along [axis].\n\n    Raises [Invalid_argument] if the axis size is not divisible by [n].\n\n    See also {!array_split}. *)\n\n(** {1:conversion Type conversion and copying} *)\n\nval cast : ('c, 'd) dtype -> ('a, 'b) t -> ('c, 'd) t\n(** [cast dtype t] is a copy of [t] with elements converted to [dtype].\n\n    {@ocaml[\n      # create float32 [| 3 |] [| 1.5; 2.7; 3.1 |]\n        |> cast int32\n      - : (int32, int32_elt) t = [1, 2, 3]\n    ]}\n\n    See also {!contiguous}, {!copy}. *)\n\nval astype : ('a, 'b) dtype -> ('c, 'd) t -> ('a, 'b) t\n(** [astype dtype t] is {!cast}. *)\n\nval contiguous : ('a, 'b) t -> ('a, 'b) t\n(** [contiguous t] is [t] if it is already C-contiguous, or a fresh contiguous\n    copy otherwise.\n\n    See also {!is_c_contiguous}, {!copy}. *)\n\nval copy : ('a, 'b) t -> ('a, 'b) t\n(** [copy t] is a deep copy of [t]. Always allocates new memory; the result is\n    contiguous.\n\n    {@ocaml[\n      # let x = create float32 [| 3 |] [| 1.; 2.; 3. |] in\n        let y = copy x in\n        set_item [ 0 ] 999. y;\n        x, y\n      - : (float, float32_elt) t * (float, float32_elt) t =\n      ([1, 2, 3], [999, 2, 3])\n    ]}\n\n    See also {!contiguous}. *)\n\nval blit : ('a, 'b) t -> ('a, 'b) t -> unit\n(** [blit src dst] copies the elements of [src] into [dst] in-place. Shapes must\n    match exactly.\n\n    Raises [Invalid_argument] if shapes differ. 
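
    For example, copying into a zero-initialised destination:

    {@ocaml[
      # let src = create float32 [| 3 |] [| 1.; 2.; 3. |] in
        let dst = zeros float32 [| 3 |] in
        blit src dst;
        dst
      - : (float, float32_elt) t = [1, 2, 3]
    ]}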
*)\n\nval fill : 'a -> ('a, 'b) t -> ('a, 'b) t\n(** [fill v t] is a fresh copy of [t] with every element set to [v]. Does not\n    mutate [t]. *)\n\n(** {1:indexing Indexing and slicing} *)\n\nval get : int list -> ('a, 'b) t -> ('a, 'b) t\n(** [get indices t] is the sub-tensor at [indices], indexing from the outermost\n    dimension inward. Returns a scalar tensor when all dimensions are indexed;\n    otherwise a view of the remaining dimensions. Negative indices count from\n    the end.\n\n    Raises [Invalid_argument] if any index is out of bounds.\n\n    {@ocaml[\n      # let x =\n          create int32 [| 2; 3 |]\n            [| 1l; 2l; 3l; 4l; 5l; 6l |]\n        in\n        get [ 1 ] x\n      - : (int32, int32_elt) t = [4, 5, 6]\n    ]}\n\n    See also {!item}, {!val-slice}. *)\n\nval set : int list -> ('a, 'b) t -> ('a, 'b) t -> unit\n(** [set indices t v] writes [v] at the position given by [indices].\n\n    Raises [Invalid_argument] if indices are out of bounds. *)\n\nval slice : index list -> ('a, 'b) t -> ('a, 'b) t\n(** [slice specs t] extracts a sub-tensor using advanced indexing.\n\n    Each element of [specs] addresses one axis from left to right:\n    - [I i] — single index (reduces dimension; negative from end).\n    - [L [i0; i1; …]] — gather listed indices.\n    - [R (start, stop)] — half-open range \\[[start], [stop]).\n    - [Rs (start, stop, step)] — strided range.\n    - [A] — full axis (default for trailing axes).\n    - [M mask] — boolean mask selecting positions where [mask] is [true].\n    - [N] — insert a new axis of size 1.\n\n    Returns a view when possible.\n\n    Raises [Invalid_argument] if specs are out of bounds, if step is zero, or if\n    a mask spec is used (not yet supported).\n\n    {@ocaml[\n      # let x =\n          create int32 [| 3; 3 |]\n            [| 1l; 2l; 3l; 4l; 5l; 6l; 7l; 8l; 9l |]\n        in\n        slice [ R (0, 2); L [ 0; 2 ] ] x\n      - : (int32, int32_elt) t = int32 [2; 2] [[1, 3],\n                    
                           [4, 6]]\n    ]}\n\n    See also {!get}, {!set_slice}. *)\n\nval set_slice : index list -> ('a, 'b) t -> ('a, 'b) t -> unit\n(** [set_slice specs t v] writes [v] into the region of [t] selected by [specs].\n    [v] is broadcast if needed.\n\n    Raises [Invalid_argument] if [N] (new-axis) specs are used (not supported\n    for writes).\n\n    See also {!val-slice}. *)\n\nval item : int list -> ('a, 'b) t -> 'a\n(** [item indices t] is the scalar value at [indices]. Indices must cover all\n    dimensions.\n\n    Raises [Invalid_argument] if the number of indices is wrong or any index is\n    out of bounds.\n\n    See also {!get}, {!set_item}. *)\n\nval set_item : int list -> 'a -> ('a, 'b) t -> unit\n(** [set_item indices v t] sets the element at [indices] to [v] in-place.\n    Indices must cover all dimensions.\n\n    Raises [Invalid_argument] if the number of indices is wrong or any index is\n    out of bounds.\n\n    See also {!item}. *)\n\nval take :\n  ?axis:int ->\n  ?mode:[ `raise | `wrap | `clip ] ->\n  (int32, int32_elt) t ->\n  ('a, 'b) t ->\n  ('a, 'b) t\n(** [take ?axis ?mode indices t] gathers elements from [t] at [indices] along\n    [axis]. When [axis] is omitted, [t] is flattened first. [mode] controls\n    out-of-bounds indices: [`raise] (default) raises, [`wrap] uses modular\n    indexing, [`clip] clamps to bounds.\n\n    Raises [Invalid_argument] if [mode] is [`raise] and any index is out of\n    bounds.\n\n    {@ocaml[\n      # let x =\n          create int32 [| 5 |]\n            [| 0l; 1l; 2l; 3l; 4l |]\n        in\n        take\n          (create int32 [| 3 |] [| 1l; 3l; 0l |])\n          x\n      - : (int32, int32_elt) t = [1, 3, 0]\n    ]}\n\n    See also {!put}, {!take_along_axis}. *)\n\nval take_along_axis :\n  axis:int -> (int32, int32_elt) t -> ('a, 'b) t -> ('a, 'b) t\n(** [take_along_axis ~axis indices t] gathers values from [t] along [axis] using\n    [indices]. 
[indices] must match [t]'s shape except along [axis]. Useful for\n    gathering from {!argmax}/{!argmin} results.\n\n    Raises [Invalid_argument] if shapes are incompatible.\n\n    {@ocaml[\n      # let x =\n          create float32 [| 2; 3 |]\n            [| 4.; 1.; 2.; 3.; 5.; 6. |]\n        in\n        let idx =\n          create int32 [| 2; 1 |] [| 1l; 0l |]\n        in\n        take_along_axis ~axis:1 idx x\n      - : (float, float32_elt) t = float32 [2; 1] [[1],\n                                                   [3]]\n    ]}\n\n    See also {!take}, {!put_along_axis}. *)\n\nval put :\n  ?axis:int ->\n  indices:(int32, int32_elt) t ->\n  values:('a, 'b) t ->\n  ?mode:[ `raise | `wrap | `clip ] ->\n  ('a, 'b) t ->\n  unit\n(** [put ?axis ~indices ~values ?mode t] writes [values] into [t] at positions\n    given by [indices]. When [axis] is omitted, [t] is flattened first. [mode]\n    defaults to [`raise]. Modifies [t] in-place.\n\n    Raises [Invalid_argument] if [mode] is [`raise] and any index is out of\n    bounds.\n\n    See also {!take}, {!put_along_axis}, {!index_put}. *)\n\nval index_put :\n  indices:(int32, int32_elt) t array ->\n  values:('a, 'b) t ->\n  ?mode:[ `raise | `wrap | `clip ] ->\n  ('a, 'b) t ->\n  unit\n(** [index_put ~indices ~values ?mode t] writes [values] into [t] at the\n    coordinates given by [indices].\n\n    [indices] contains one index tensor per axis of [t]; they are broadcast to a\n    common shape that determines the number of updates. [values] is broadcast to\n    the same shape. Duplicate coordinates overwrite. 
[mode] defaults to\n    [`raise].\n\n    Raises [Invalid_argument] if the number of index tensors does not match\n    {!ndim} [t].\n\n    {@ocaml[\n      # let t = zeros float32 [| 3; 3 |] in\n        let rows =\n          create int32 [| 3 |] [| 0l; 2l; 1l |]\n        in\n        let cols =\n          create int32 [| 3 |] [| 1l; 0l; 2l |]\n        in\n        index_put ~indices:[| rows; cols |]\n          ~values:(create float32 [| 3 |]\n                     [| 10.; 20.; 30. |])\n          t;\n        t\n      - : (float, float32_elt) t = float32 [3; 3]\n      [[0, 10, 0],\n       [0, 0, 30],\n       [20, 0, 0]]\n    ]}\n\n    See also {!put}. *)\n\nval put_along_axis :\n  axis:int ->\n  indices:(int32, int32_elt) t ->\n  values:('a, 'b) t ->\n  ('a, 'b) t ->\n  unit\n(** [put_along_axis ~axis ~indices ~values t] writes [values] into [t] at\n    positions selected by [indices] along [axis]. Modifies [t] in-place.\n\n    Raises [Invalid_argument] if shapes are incompatible.\n\n    See also {!take_along_axis}, {!put}. *)\n\nval compress :\n  ?axis:int -> condition:(bool, bool_elt) t -> ('a, 'b) t -> ('a, 'b) t\n(** [compress ?axis ~condition t] selects elements where [condition] is [true]\n    along [axis]. [condition] must be 1-D. When [axis] is omitted, [t] is\n    flattened first.\n\n    Raises [Invalid_argument] if the condition length is incompatible.\n\n    {@ocaml[\n      # let x =\n          create int32 [| 5 |]\n            [| 1l; 2l; 3l; 4l; 5l |]\n        in\n        compress\n          ~condition:(create bool [| 5 |]\n            [| true; false; true; false; true |])\n          x\n      - : (int32, int32_elt) t = [1, 3, 5]\n    ]}\n\n    See also {!extract}, {!nonzero}. *)\n\nval extract : condition:(bool, bool_elt) t -> ('a, 'b) t -> ('a, 'b) t\n(** [extract ~condition t] is the 1-D tensor of elements of [t] where\n    [condition] is [true]. 
Both are flattened before comparison.\n\n    Raises [Invalid_argument] if sizes differ.\n\n    See also {!compress}, {!nonzero}. *)\n\nval nonzero : ('a, 'b) t -> (int32, int32_elt) t array\n(** [nonzero t] is an array of 1-D index tensors, one per dimension, giving the\n    coordinates of non-zero elements.\n\n    {@ocaml[\n      # let x =\n          create int32 [| 3; 3 |]\n            [| 0l; 1l; 0l;\n               2l; 0l; 3l;\n               0l; 0l; 4l |]\n        in\n        let idx = nonzero x in\n        idx.(0), idx.(1)\n      - : (int32, int32_elt) t * (int32, int32_elt) t =\n      ([0, 1, 1, 2], [1, 0, 2, 2])\n    ]}\n\n    See also {!argwhere}. *)\n\nval argwhere : ('a, 'b) t -> (int32, int32_elt) t\n(** [argwhere t] is a 2-D tensor of shape [[k; ndim t]] whose rows are the\n    coordinates of the [k] non-zero elements.\n\n    See also {!nonzero}. *)\n\n(** {1:arithmetic Arithmetic}\n\n    Element-wise arithmetic with broadcasting. Each operation [op] has variants:\n    - [op_s t s] — tensor-scalar.\n    - [rop_s s t] — scalar-tensor (reversed operands). *)\n\nval add : ('a, 'b) t -> ('a, 'b) t -> ('a, 'b) t\n(** [add a b] is the element-wise sum of [a] and [b]. *)\n\nval add_s : ('a, 'b) t -> 'a -> ('a, 'b) t\n(** [add_s t s] adds scalar [s] to each element of [t]. *)\n\nval radd_s : 'a -> ('a, 'b) t -> ('a, 'b) t\n(** [radd_s s t] is [add_s t s]. *)\n\nval sub : ('a, 'b) t -> ('a, 'b) t -> ('a, 'b) t\n(** [sub a b] is the element-wise difference [a - b]. *)\n\nval sub_s : ('a, 'b) t -> 'a -> ('a, 'b) t\n(** [sub_s t s] subtracts scalar [s] from each element. *)\n\nval rsub_s : 'a -> ('a, 'b) t -> ('a, 'b) t\n(** [rsub_s s t] is [s - t] element-wise. *)\n\nval mul : ('a, 'b) t -> ('a, 'b) t -> ('a, 'b) t\n(** [mul a b] is the element-wise product of [a] and [b]. *)\n\nval mul_s : ('a, 'b) t -> 'a -> ('a, 'b) t\n(** [mul_s t s] multiplies each element by scalar [s]. *)\n\nval rmul_s : 'a -> ('a, 'b) t -> ('a, 'b) t\n(** [rmul_s s t] is [mul_s t s]. 
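\n\n    For example, scaling a vector by a scalar on the left:\n\n    {@ocaml[\n      # rmul_s 2. (create float32 [| 3 |] [| 1.; 2.; 3. |])\n      - : (float, float32_elt) t = [2, 4, 6]\n    ]} 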
*)\n\nval div : ('a, 'b) t -> ('a, 'b) t -> ('a, 'b) t\n(** [div a b] is the element-wise quotient [a / b].\n\n    Float dtypes use true division. Integer dtypes truncate toward zero.\n\n    {@ocaml[\n      # let x = create int32 [| 2 |] [| -7l; 8l |] in\n        let y = create int32 [| 2 |] [| 2l; 2l |] in\n        div x y\n      - : (int32, int32_elt) t = [-3, 4]\n    ]} *)\n\nval div_s : ('a, 'b) t -> 'a -> ('a, 'b) t\n(** [div_s t s] divides each element by scalar [s]. *)\n\nval rdiv_s : 'a -> ('a, 'b) t -> ('a, 'b) t\n(** [rdiv_s s t] is [s / t] element-wise. *)\n\nval pow : ('a, 'b) t -> ('a, 'b) t -> ('a, 'b) t\n(** [pow base exp] is [base] raised to [exp] element-wise. *)\n\nval pow_s : ('a, 'b) t -> 'a -> ('a, 'b) t\n(** [pow_s t s] raises each element to scalar power [s]. *)\n\nval rpow_s : 'a -> ('a, 'b) t -> ('a, 'b) t\n(** [rpow_s s t] is [s{^t}] element-wise. *)\n\nval mod_ : ('a, 'b) t -> ('a, 'b) t -> ('a, 'b) t\n(** [mod_ a b] is the element-wise remainder of [a / b]. *)\n\nval mod_s : ('a, 'b) t -> 'a -> ('a, 'b) t\n(** [mod_s t s] is the remainder of each element divided by scalar [s]. *)\n\nval rmod_s : 'a -> ('a, 'b) t -> ('a, 'b) t\n(** [rmod_s s t] is [s mod t] element-wise. *)\n\nval neg : ('a, 'b) t -> ('a, 'b) t\n(** [neg t] is the element-wise negation of [t]. *)\n\nval conjugate : ('a, 'b) t -> ('a, 'b) t\n(** [conjugate t] is the complex conjugate of [t]. For complex dtypes, negates\n    the imaginary part. For real dtypes, returns [t] unchanged. *)\n\n(** {1:math Mathematical functions} *)\n\n(** {2:math_basic Basic} *)\n\nval abs : ('a, 'b) t -> ('a, 'b) t\n(** [abs t] is the element-wise absolute value. *)\n\nval sign : ('a, 'b) t -> ('a, 'b) t\n(** [sign t] is [-1], [0], or [1] according to the sign of each element. 
For\n    unsigned types, returns [1] for non-zero, [0] for zero.\n\n    {@ocaml[\n      # create float32 [| 3 |] [| -2.; 0.; 3.5 |]\n        |> sign\n      - : (float, float32_elt) t = [-1, 0, 1]\n    ]} *)\n\nval square : ('a, 'b) t -> ('a, 'b) t\n(** [square t] is the element-wise square. *)\n\nval sqrt : ('a, 'b) t -> ('a, 'b) t\n(** [sqrt t] is the element-wise square root. *)\n\nval rsqrt : ('a, 'b) t -> ('a, 'b) t\n(** [rsqrt t] is the element-wise reciprocal square root ([1 / sqrt t]). *)\n\nval recip : ('a, 'b) t -> ('a, 'b) t\n(** [recip t] is the element-wise reciprocal ([1 / t]). *)\n\n(** {2:math_exp Exponential and logarithmic} *)\n\nval log : ('a, 'b) t -> ('a, 'b) t\n(** [log t] is the element-wise natural logarithm. *)\n\nval log2 : ('a, 'b) t -> ('a, 'b) t\n(** [log2 t] is the element-wise base-2 logarithm. *)\n\nval exp : ('a, 'b) t -> ('a, 'b) t\n(** [exp t] is the element-wise exponential. *)\n\nval exp2 : ('a, 'b) t -> ('a, 'b) t\n(** [exp2 t] is [2{^t}] element-wise. *)\n\n(** {2:math_trig Trigonometric} *)\n\nval sin : ('a, 'b) t -> ('a, 'b) t\n(** [sin t] is the element-wise sine. *)\n\nval cos : ('a, 'b) t -> ('a, 'b) t\n(** [cos t] is the element-wise cosine. *)\n\nval tan : ('a, 'b) t -> ('a, 'b) t\n(** [tan t] is the element-wise tangent. *)\n\nval asin : ('a, 'b) t -> ('a, 'b) t\n(** [asin t] is the element-wise arcsine. *)\n\nval acos : ('a, 'b) t -> ('a, 'b) t\n(** [acos t] is the element-wise arccosine. *)\n\nval atan : ('a, 'b) t -> ('a, 'b) t\n(** [atan t] is the element-wise arctangent. *)\n\nval atan2 : ('a, 'b) t -> ('a, 'b) t -> ('a, 'b) t\n(** [atan2 y x] is the element-wise two-argument arctangent, returning angles in\n    \\[[-π], [π]\\]. *)\n\n(** {2:math_hyp Hyperbolic} *)\n\nval sinh : ('a, 'b) t -> ('a, 'b) t\n(** [sinh t] is the element-wise hyperbolic sine. *)\n\nval cosh : ('a, 'b) t -> ('a, 'b) t\n(** [cosh t] is the element-wise hyperbolic cosine. 
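\n\n    For example:\n\n    {@ocaml[\n      # cosh (scalar float32 0.) |> item []\n      - : float = 1.\n    ]} 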
*)\n\nval tanh : ('a, 'b) t -> ('a, 'b) t\n(** [tanh t] is the element-wise hyperbolic tangent. *)\n\nval asinh : ('a, 'b) t -> ('a, 'b) t\n(** [asinh t] is the element-wise inverse hyperbolic sine. *)\n\nval acosh : ('a, 'b) t -> ('a, 'b) t\n(** [acosh t] is the element-wise inverse hyperbolic cosine. *)\n\nval atanh : ('a, 'b) t -> ('a, 'b) t\n(** [atanh t] is the element-wise inverse hyperbolic tangent. *)\n\n(** {2:math_round Rounding} *)\n\nval trunc : ('a, 'b) t -> ('a, 'b) t\n(** [trunc t] rounds each element toward zero. *)\n\nval ceil : ('a, 'b) t -> ('a, 'b) t\n(** [ceil t] rounds each element toward positive infinity. *)\n\nval floor : ('a, 'b) t -> ('a, 'b) t\n(** [floor t] rounds each element toward negative infinity. *)\n\nval round : ('a, 'b) t -> ('a, 'b) t\n(** [round t] rounds each element to the nearest integer. Ties round away from\n    zero (not banker's rounding).\n\n    {@ocaml[\n      # create float32 [| 4 |] [| 2.5; 3.5; -2.5; -3.5 |]\n        |> round\n      - : (float, float32_elt) t = [3, 4, -3, -4]\n    ]} *)\n\n(** {2:math_misc Other} *)\n\nval hypot : ('a, 'b) t -> ('a, 'b) t -> ('a, 'b) t\n(** [hypot x y] is [sqrt(x² + y²)] computed without intermediate overflow.\n\n    {@ocaml[\n      # hypot (scalar float32 3.) (scalar float32 4.)\n        |> item []\n      - : float = 5.\n    ]} *)\n\nval lerp : ('a, 'b) t -> ('a, 'b) t -> ('a, 'b) t -> ('a, 'b) t\n(** [lerp a b w] is the linear interpolation [a + w * (b - a)]. [w] is typically\n    in \\[[0], [1]\\].\n\n    {@ocaml[\n      # let a = create float32 [| 2 |] [| 1.; 2. |] in\n        let b = create float32 [| 2 |] [| 5.; 8. |] in\n        lerp a b (scalar float32 0.25)\n      - : (float, float32_elt) t = [2, 3.5]\n    ]} *)\n\nval lerp_scalar_weight : ('a, 'b) t -> ('a, 'b) t -> 'a -> ('a, 'b) t\n(** [lerp_scalar_weight a b w] is like {!lerp} with a scalar weight. 
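\n\n    For example, interpolating halfway between two vectors:\n\n    {@ocaml[\n      # let a = create float32 [| 2 |] [| 0.; 10. |] in\n        let b = create float32 [| 2 |] [| 4.; 20. |] in\n        lerp_scalar_weight a b 0.5\n      - : (float, float32_elt) t = [2, 15]\n    ]} 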
*)\n\nval isinf : ('a, 'b) t -> (bool, bool_elt) t\n(** [isinf t] is [true] where [t] is positive or negative infinity, [false]\n    elsewhere. Non-float dtypes always return all [false].\n\n    {@ocaml[\n      # create float32 [| 4 |]\n          [| 1.; Float.infinity;\n             Float.neg_infinity; Float.nan |]\n        |> isinf\n      - : (bool, bool_elt) t = [false, true, true, false]\n    ]}\n\n    See also {!isnan}, {!isfinite}. *)\n\nval isnan : ('a, 'b) t -> (bool, bool_elt) t\n(** [isnan t] is [true] where [t] is NaN, [false] elsewhere. Non-float dtypes\n    always return all [false].\n\n    See also {!isinf}, {!isfinite}. *)\n\nval isfinite : ('a, 'b) t -> (bool, bool_elt) t\n(** [isfinite t] is [true] where [t] is neither infinite nor NaN. Non-float\n    dtypes always return all [true].\n\n    See also {!isinf}, {!isnan}. *)\n\n(** {1:comparison Comparison and logic} *)\n\nval cmplt : ('a, 'b) t -> ('a, 'b) t -> (bool, bool_elt) t\n(** [cmplt a b] is [true] where [a < b], [false] elsewhere. *)\n\nval less : ('a, 'b) t -> ('a, 'b) t -> (bool, bool_elt) t\n(** [less a b] is {!cmplt}. *)\n\nval less_s : ('a, 'b) t -> 'a -> (bool, bool_elt) t\n(** [less_s t s] is [true] where [t < s]. *)\n\nval cmpne : ('a, 'b) t -> ('a, 'b) t -> (bool, bool_elt) t\n(** [cmpne a b] is [true] where [a ≠ b], [false] elsewhere. *)\n\nval not_equal : ('a, 'b) t -> ('a, 'b) t -> (bool, bool_elt) t\n(** [not_equal a b] is {!cmpne}. *)\n\nval not_equal_s : ('a, 'b) t -> 'a -> (bool, bool_elt) t\n(** [not_equal_s t s] is [true] where [t ≠ s]. *)\n\nval cmpeq : ('a, 'b) t -> ('a, 'b) t -> (bool, bool_elt) t\n(** [cmpeq a b] is [true] where [a = b], [false] elsewhere. *)\n\nval equal : ('a, 'b) t -> ('a, 'b) t -> (bool, bool_elt) t\n(** [equal a b] is {!cmpeq}. *)\n\nval equal_s : ('a, 'b) t -> 'a -> (bool, bool_elt) t\n(** [equal_s t s] is [true] where [t = s]. 
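\n\n    For example:\n\n    {@ocaml[\n      # create int32 [| 4 |] [| 1l; 2l; 2l; 3l |]\n        |> Fun.flip equal_s 2l\n      - : (bool, bool_elt) t = [false, true, true, false]\n    ]} 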
*)\n\nval cmpgt : ('a, 'b) t -> ('a, 'b) t -> (bool, bool_elt) t\n(** [cmpgt a b] is [true] where [a > b], [false] elsewhere. *)\n\nval greater : ('a, 'b) t -> ('a, 'b) t -> (bool, bool_elt) t\n(** [greater a b] is {!cmpgt}. *)\n\nval greater_s : ('a, 'b) t -> 'a -> (bool, bool_elt) t\n(** [greater_s t s] is [true] where [t > s]. *)\n\nval cmple : ('a, 'b) t -> ('a, 'b) t -> (bool, bool_elt) t\n(** [cmple a b] is [true] where [a ≤ b], [false] elsewhere. *)\n\nval less_equal : ('a, 'b) t -> ('a, 'b) t -> (bool, bool_elt) t\n(** [less_equal a b] is {!cmple}. *)\n\nval less_equal_s : ('a, 'b) t -> 'a -> (bool, bool_elt) t\n(** [less_equal_s t s] is [true] where [t ≤ s]. *)\n\nval cmpge : ('a, 'b) t -> ('a, 'b) t -> (bool, bool_elt) t\n(** [cmpge a b] is [true] where [a ≥ b], [false] elsewhere. *)\n\nval greater_equal : ('a, 'b) t -> ('a, 'b) t -> (bool, bool_elt) t\n(** [greater_equal a b] is {!cmpge}. *)\n\nval greater_equal_s : ('a, 'b) t -> 'a -> (bool, bool_elt) t\n(** [greater_equal_s t s] is [true] where [t ≥ s]. *)\n\nval array_equal : ('a, 'b) t -> ('a, 'b) t -> (bool, bool_elt) t\n(** [array_equal a b] is a scalar [true] iff all elements of [a] and [b] are\n    equal. Returns [false] if shapes differ.\n\n    {@ocaml[\n      # let a = create int32 [| 3 |] [| 1l; 2l; 3l |] in\n        let b = create int32 [| 3 |] [| 1l; 2l; 3l |] in\n        array_equal a b |> item []\n      - : bool = true\n    ]} *)\n\nval maximum : ('a, 'b) t -> ('a, 'b) t -> ('a, 'b) t\n(** [maximum a b] is the element-wise maximum of [a] and [b]. *)\n\nval maximum_s : ('a, 'b) t -> 'a -> ('a, 'b) t\n(** [maximum_s t s] is the element-wise maximum of [t] and scalar [s]. *)\n\nval rmaximum_s : 'a -> ('a, 'b) t -> ('a, 'b) t\n(** [rmaximum_s s t] is [maximum_s t s]. *)\n\nval minimum : ('a, 'b) t -> ('a, 'b) t -> ('a, 'b) t\n(** [minimum a b] is the element-wise minimum of [a] and [b]. 
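\n\n    For example:\n\n    {@ocaml[\n      # let a = create int32 [| 3 |] [| 1l; 5l; 3l |] in\n        let b = create int32 [| 3 |] [| 4l; 2l; 3l |] in\n        minimum a b\n      - : (int32, int32_elt) t = [1, 2, 3]\n    ]} 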
*)\n\nval minimum_s : ('a, 'b) t -> 'a -> ('a, 'b) t\n(** [minimum_s t s] is the element-wise minimum of [t] and scalar [s]. *)\n\nval rminimum_s : 'a -> ('a, 'b) t -> ('a, 'b) t\n(** [rminimum_s s t] is [minimum_s t s]. *)\n\nval logical_and : ('a, 'b) t -> ('a, 'b) t -> ('a, 'b) t\n(** [logical_and a b] is the element-wise logical AND. Non-zero is [true]. *)\n\nval logical_or : ('a, 'b) t -> ('a, 'b) t -> ('a, 'b) t\n(** [logical_or a b] is the element-wise logical OR. *)\n\nval logical_xor : ('a, 'b) t -> ('a, 'b) t -> ('a, 'b) t\n(** [logical_xor a b] is the element-wise logical XOR. *)\n\nval logical_not : ('a, 'b) t -> ('a, 'b) t\n(** [logical_not t] is the element-wise logical NOT: non-zero becomes [0], zero\n    becomes [1]. *)\n\nval where : (bool, bool_elt) t -> ('a, 'b) t -> ('a, 'b) t -> ('a, 'b) t\n(** [where cond if_true if_false] selects elements from [if_true] where [cond]\n    is [true] and from [if_false] elsewhere. All three inputs broadcast to a\n    common shape.\n\n    {@ocaml[\n      # let x =\n          create float32 [| 4 |] [| -1.; 2.; -3.; 4. |]\n        in\n        where\n          (cmpgt x (scalar float32 0.))\n          x (scalar float32 0.)\n      - : (float, float32_elt) t = [0, 2, 0, 4]\n    ]} *)\n\nval clamp : ?min:'a -> ?max:'a -> ('a, 'b) t -> ('a, 'b) t\n(** [clamp ?min ?max t] clamps elements to \\[[min], [max]\\]. Either bound may be\n    omitted.\n\n    See also {!clip}. *)\n\nval clip : ?min:'a -> ?max:'a -> ('a, 'b) t -> ('a, 'b) t\n(** [clip ?min ?max t] is {!clamp}. *)\n\n(** {1:bitwise Bitwise operations} *)\n\nval bitwise_xor : ('a, 'b) t -> ('a, 'b) t -> ('a, 'b) t\n(** [bitwise_xor a b] is the element-wise bitwise XOR. *)\n\nval bitwise_or : ('a, 'b) t -> ('a, 'b) t -> ('a, 'b) t\n(** [bitwise_or a b] is the element-wise bitwise OR. *)\n\nval bitwise_and : ('a, 'b) t -> ('a, 'b) t -> ('a, 'b) t\n(** [bitwise_and a b] is the element-wise bitwise AND. 
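\n\n    For example ([12 land 10 = 8] and [10 land 3 = 2]):\n\n    {@ocaml[\n      # let a = create int32 [| 2 |] [| 12l; 10l |] in\n        let b = create int32 [| 2 |] [| 10l; 3l |] in\n        bitwise_and a b\n      - : (int32, int32_elt) t = [8, 2]\n    ]} 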
*)\n\nval bitwise_not : ('a, 'b) t -> ('a, 'b) t\n(** [bitwise_not t] is the element-wise bitwise NOT. *)\n\nval invert : ('a, 'b) t -> ('a, 'b) t\n(** [invert t] is {!bitwise_not}. *)\n\nval lshift : ('a, 'b) t -> int -> ('a, 'b) t\n(** [lshift t n] left-shifts each element by [n] bits.\n\n    Raises [Invalid_argument] if [n] is negative or the dtype is not an integer\n    type.\n\n    {@ocaml[\n      # create int32 [| 3 |] [| 1l; 2l; 3l |]\n        |> Fun.flip lshift 2\n      - : (int32, int32_elt) t = [4, 8, 12]\n    ]}\n\n    See also {!rshift}. *)\n\nval rshift : ('a, 'b) t -> int -> ('a, 'b) t\n(** [rshift t n] right-shifts each element by [n] bits.\n\n    Raises [Invalid_argument] if [n] is negative or the dtype is not an integer\n    type.\n\n    See also {!lshift}. *)\n\n(** {1:infix Infix operators} *)\n\nmodule Infix : sig\n  (** {2:infix_arith Element-wise arithmetic} *)\n\n  val ( + ) : ('a, 'b) t -> ('a, 'b) t -> ('a, 'b) t\n  (** [a + b] is {!add} [a b]. *)\n\n  val ( - ) : ('a, 'b) t -> ('a, 'b) t -> ('a, 'b) t\n  (** [a - b] is {!sub} [a b]. *)\n\n  val ( * ) : ('a, 'b) t -> ('a, 'b) t -> ('a, 'b) t\n  (** [a * b] is {!mul} [a b]. *)\n\n  val ( / ) : ('a, 'b) t -> ('a, 'b) t -> ('a, 'b) t\n  (** [a / b] is {!div} [a b]. *)\n\n  val ( ** ) : ('a, 'b) t -> ('a, 'b) t -> ('a, 'b) t\n  (** [a ** b] is {!pow} [a b]. *)\n\n  (** {2:infix_scalar Scalar arithmetic} *)\n\n  val ( +$ ) : ('a, 'b) t -> 'a -> ('a, 'b) t\n  (** [t +$ s] is {!add_s} [t s]. *)\n\n  val ( -$ ) : ('a, 'b) t -> 'a -> ('a, 'b) t\n  (** [t -$ s] is {!sub_s} [t s]. *)\n\n  val ( *$ ) : ('a, 'b) t -> 'a -> ('a, 'b) t\n  (** [t *$ s] is {!mul_s} [t s]. *)\n\n  val ( /$ ) : ('a, 'b) t -> 'a -> ('a, 'b) t\n  (** [t /$ s] is {!div_s} [t s]. *)\n\n  val ( **$ ) : ('a, 'b) t -> 'a -> ('a, 'b) t\n  (** [t **$ s] is {!pow_s} [t s]. *)\n\n  (** {2:infix_cmp Comparisons} *)\n\n  val ( < ) : ('a, 'b) t -> ('a, 'b) t -> (bool, bool_elt) t\n  (** [a < b] is {!cmplt} [a b]. 
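\n\n      For example:\n\n      {@ocaml[\n        # let open Infix in\n          create int32 [| 3 |] [| 1l; 5l; 3l |]\n          < create int32 [| 3 |] [| 2l; 4l; 3l |]\n        - : (bool, bool_elt) t = [true, false, false]\n      ]} 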
*)\n\n  val ( <> ) : ('a, 'b) t -> ('a, 'b) t -> (bool, bool_elt) t\n  (** [a <> b] is {!cmpne} [a b]. *)\n\n  val ( = ) : ('a, 'b) t -> ('a, 'b) t -> (bool, bool_elt) t\n  (** [a = b] is {!cmpeq} [a b]. *)\n\n  val ( > ) : ('a, 'b) t -> ('a, 'b) t -> (bool, bool_elt) t\n  (** [a > b] is {!cmpgt} [a b]. *)\n\n  val ( <= ) : ('a, 'b) t -> ('a, 'b) t -> (bool, bool_elt) t\n  (** [a <= b] is {!cmple} [a b]. *)\n\n  val ( >= ) : ('a, 'b) t -> ('a, 'b) t -> (bool, bool_elt) t\n  (** [a >= b] is {!cmpge} [a b]. *)\n\n  (** {2:infix_scalar_cmp Scalar comparisons} *)\n\n  val ( =$ ) : ('a, 'b) t -> 'a -> (bool, bool_elt) t\n  (** [t =$ s] is {!equal_s} [t s]. *)\n\n  val ( <>$ ) : ('a, 'b) t -> 'a -> (bool, bool_elt) t\n  (** [t <>$ s] is {!not_equal_s} [t s]. *)\n\n  val ( <$ ) : ('a, 'b) t -> 'a -> (bool, bool_elt) t\n  (** [t <$ s] is {!less_s} [t s]. *)\n\n  val ( >$ ) : ('a, 'b) t -> 'a -> (bool, bool_elt) t\n  (** [t >$ s] is {!greater_s} [t s]. *)\n\n  val ( <=$ ) : ('a, 'b) t -> 'a -> (bool, bool_elt) t\n  (** [t <=$ s] is {!less_equal_s} [t s]. *)\n\n  val ( >=$ ) : ('a, 'b) t -> 'a -> (bool, bool_elt) t\n  (** [t >=$ s] is {!greater_equal_s} [t s]. *)\n\n  (** {2:infix_bitwise Bitwise} *)\n\n  val ( lxor ) : ('a, 'b) t -> ('a, 'b) t -> ('a, 'b) t\n  (** [a lxor b] is {!bitwise_xor} [a b]. *)\n\n  val ( lor ) : ('a, 'b) t -> ('a, 'b) t -> ('a, 'b) t\n  (** [a lor b] is {!bitwise_or} [a b]. *)\n\n  val ( land ) : ('a, 'b) t -> ('a, 'b) t -> ('a, 'b) t\n  (** [a land b] is {!bitwise_and} [a b]. *)\n\n  (** {2:infix_mod Modulo} *)\n\n  val ( % ) : ('a, 'b) t -> ('a, 'b) t -> ('a, 'b) t\n  (** [a % b] is {!mod_} [a b]. *)\n\n  val ( mod ) : ('a, 'b) t -> ('a, 'b) t -> ('a, 'b) t\n  (** [a mod b] is {!mod_} [a b]. *)\n\n  val ( %$ ) : ('a, 'b) t -> 'a -> ('a, 'b) t\n  (** [t %$ s] is {!mod_s} [t s]. *)\n\n  (** {2:infix_logic Logical} *)\n\n  val ( ^ ) : ('a, 'b) t -> ('a, 'b) t -> ('a, 'b) t\n  (** [a ^ b] is {!logical_xor} [a b]. 
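\n\n      For example, on [0]/[1]-valued tensors:\n\n      {@ocaml[\n        # let open Infix in\n          create int32 [| 2 |] [| 1l; 0l |]\n          ^ create int32 [| 2 |] [| 1l; 1l |]\n        - : (int32, int32_elt) t = [0, 1]\n      ]} 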
*)\n\n  val ( && ) : ('a, 'b) t -> ('a, 'b) t -> ('a, 'b) t\n  (** [a && b] is {!logical_and} [a b]. *)\n\n  val ( || ) : ('a, 'b) t -> ('a, 'b) t -> ('a, 'b) t\n  (** [a || b] is {!logical_or} [a b]. *)\n\n  val ( ~- ) : ('a, 'b) t -> ('a, 'b) t\n  (** [~-t] is {!logical_not} [t]. *)\n\n  (** {2:infix_linalg Linear algebra} *)\n\n  val ( @@ ) : ('a, 'b) t -> ('a, 'b) t -> ('a, 'b) t\n  (** [a @@ b] is {!matmul} [a b]. *)\n\n  val ( /@ ) : ('a, 'b) t -> ('a, 'b) t -> ('a, 'b) t\n  (** [a /@ b] is {!solve} [a b]. *)\n\n  val ( **@ ) : ('a, 'b) t -> int -> ('a, 'b) t\n  (** [t **@ n] is {!matrix_power} [t n]. *)\n\n  val ( <.> ) : ('a, 'b) t -> ('a, 'b) t -> ('a, 'b) t\n  (** [a <.> b] is {!dot} [a b]. *)\n\n  (** {2:infix_concat Concatenation} *)\n\n  val ( @= ) : ('a, 'b) t -> ('a, 'b) t -> ('a, 'b) t\n  (** [a @= b] is {!vstack} [[a; b]]. *)\n\n  val ( @|| ) : ('a, 'b) t -> ('a, 'b) t -> ('a, 'b) t\n  (** [a @|| b] is {!hstack} [[a; b]]. *)\n\n  (** {2:infix_index Indexing} *)\n\n  val ( .%{} ) : ('a, 'b) t -> int list -> ('a, 'b) t\n  (** [t.%\\{i\\}] is {!get} [i t]. *)\n\n  val ( .%{}<- ) : ('a, 'b) t -> int list -> ('a, 'b) t -> unit\n  (** [t.%\\{i\\} <- v] is {!set} [i t v]. *)\n\n  val ( .${} ) : ('a, 'b) t -> index list -> ('a, 'b) t\n  (** [t.$\\{s\\}] is {!val-slice} [s t]. *)\n\n  val ( .${}<- ) : ('a, 'b) t -> index list -> ('a, 'b) t -> unit\n  (** [t.$\\{s\\} <- v] is {!set_slice} [s t v]. *)\nend\n\n(** {1:reduction Reductions} *)\n\nval sum : ?axes:int list -> ?keepdims:bool -> ('a, 'b) t -> ('a, 'b) t\n(** [sum ?axes ?keepdims t] sums elements along [axes]. When [axes] is omitted,\n    reduces all axes (returns a scalar). When [keepdims] is [true], reduced axes\n    are kept with size 1. [keepdims] defaults to [false]. Negative axes count\n    from the end.\n\n    {@ocaml[\n      # create float32 [| 2; 2 |] [| 1.; 2.; 3.; 4. |]\n        |> sum |> item []\n      - : float = 10.\n      # create float32 [| 2; 2 |] [| 1.; 2.; 3.; 4. 
|]\n        |> sum ~axes:[ 0 ]\n      - : (float, float32_elt) t = [4, 6]\n      # create float32 [| 1; 2 |] [| 1.; 2. |]\n        |> sum ~axes:[ 1 ] ~keepdims:true\n      - : (float, float32_elt) t = float32 [1; 1] [[3]]\n    ]} *)\n\nval max : ?axes:int list -> ?keepdims:bool -> ('a, 'b) t -> ('a, 'b) t\n(** [max ?axes ?keepdims t] is the maximum along [axes]. NaN propagates.\n    [keepdims] defaults to [false].\n\n    {@ocaml[\n      # create float32 [| 2; 3 |]\n          [| 1.; 2.; 3.; 4.; 5.; 6. |]\n        |> max |> item []\n      - : float = 6.\n    ]} *)\n\nval min : ?axes:int list -> ?keepdims:bool -> ('a, 'b) t -> ('a, 'b) t\n(** [min ?axes ?keepdims t] is the minimum along [axes]. NaN propagates.\n    [keepdims] defaults to [false]. *)\n\nval prod : ?axes:int list -> ?keepdims:bool -> ('a, 'b) t -> ('a, 'b) t\n(** [prod ?axes ?keepdims t] is the product along [axes]. [keepdims] defaults to\n    [false].\n\n    {@ocaml[\n      # create int32 [| 3 |] [| 2l; 3l; 4l |]\n        |> prod |> item []\n      - : int32 = 24l\n    ]} *)\n\nval cumsum : ?axis:int -> ('a, 'b) t -> ('a, 'b) t\n(** [cumsum ?axis t] is the inclusive cumulative sum along [axis]. When [axis]\n    is omitted, operates on the flattened tensor.\n\n    See also {!cumprod}. *)\n\nval cumprod : ?axis:int -> ('a, 'b) t -> ('a, 'b) t\n(** [cumprod ?axis t] is the inclusive cumulative product along [axis]. When\n    [axis] is omitted, operates on the flattened tensor.\n\n    See also {!cumsum}. *)\n\nval cummax : ?axis:int -> ('a, 'b) t -> ('a, 'b) t\n(** [cummax ?axis t] is the inclusive cumulative maximum along [axis]. NaN\n    propagates for floating-point dtypes. When [axis] is omitted, operates on\n    the flattened tensor.\n\n    See also {!cummin}. *)\n\nval cummin : ?axis:int -> ('a, 'b) t -> ('a, 'b) t\n(** [cummin ?axis t] is the inclusive cumulative minimum along [axis]. NaN\n    propagates for floating-point dtypes. 
When [axis] is omitted, operates on\n    the flattened tensor.\n\n    See also {!cummax}. *)\n\nval mean : ?axes:int list -> ?keepdims:bool -> ('a, 'b) t -> ('a, 'b) t\n(** [mean ?axes ?keepdims t] is the arithmetic mean along [axes]. NaN\n    propagates. [keepdims] defaults to [false].\n\n    {@ocaml[\n      # create float32 [| 4 |] [| 1.; 2.; 3.; 4. |]\n        |> mean |> item []\n      - : float = 2.5\n    ]} *)\n\nval var :\n  ?axes:int list -> ?keepdims:bool -> ?ddof:int -> ('a, 'b) t -> ('a, 'b) t\n(** [var ?axes ?keepdims ?ddof t] is the variance along [axes]. [ddof] (delta\n    degrees of freedom) defaults to [0] (population variance); use [1] for\n    sample variance. Computed as [Σ(x - mean)² / (N - ddof)]. [keepdims]\n    defaults to [false].\n\n    Raises [Invalid_argument] if [ddof >= N].\n\n    {@ocaml[\n      # create float32 [| 5 |] [| 1.; 2.; 3.; 4.; 5. |]\n        |> var |> item []\n      - : float = 2.\n      # create float32 [| 5 |] [| 1.; 2.; 3.; 4.; 5. |]\n        |> var ~ddof:1 |> item []\n      - : float = 2.5\n    ]}\n\n    See also {!std}. *)\n\nval std :\n  ?axes:int list -> ?keepdims:bool -> ?ddof:int -> ('a, 'b) t -> ('a, 'b) t\n(** [std ?axes ?keepdims ?ddof t] is the standard deviation:\n    [sqrt (var ~ddof t)]. [ddof] defaults to [0]. [keepdims] defaults to\n    [false].\n\n    See also {!var}. *)\n\nval all : ?axes:int list -> ?keepdims:bool -> ('a, 'b) t -> (bool, bool_elt) t\n(** [all ?axes ?keepdims t] is [true] iff every element along [axes] is\n    non-zero. [keepdims] defaults to [false].\n\n    {@ocaml[\n      # create int32 [| 3 |] [| 1l; 2l; 3l |]\n        |> all |> item []\n      - : bool = true\n      # create int32 [| 3 |] [| 1l; 0l; 3l |]\n        |> all |> item []\n      - : bool = false\n    ]}\n\n    See also {!any}. *)\n\nval any : ?axes:int list -> ?keepdims:bool -> ('a, 'b) t -> (bool, bool_elt) t\n(** [any ?axes ?keepdims t] is [true] iff at least one element along [axes] is\n    non-zero. 
[keepdims] defaults to [false].\n\n    See also {!all}. *)\n\nval argmax : ?axis:int -> ?keepdims:bool -> ('a, 'b) t -> (int32, int32_elt) t\n(** [argmax ?axis ?keepdims t] is the index of the maximum along [axis]. Returns\n    the first occurrence for ties. When [axis] is omitted, operates on the\n    flattened tensor. [keepdims] defaults to [false].\n\n    Raises [Invalid_argument] if [axis] is out of bounds.\n\n    {@ocaml[\n      # create int32 [| 5 |] [| 3l; 1l; 4l; 1l; 5l |]\n        |> argmax |> item []\n      - : int32 = 4l\n    ]}\n\n    See also {!argmin}. *)\n\nval argmin : ?axis:int -> ?keepdims:bool -> ('a, 'b) t -> (int32, int32_elt) t\n(** [argmin ?axis ?keepdims t] is the index of the minimum along [axis]. Returns\n    the first occurrence for ties. When [axis] is omitted, operates on the\n    flattened tensor. [keepdims] defaults to [false].\n\n    Raises [Invalid_argument] if [axis] is out of bounds.\n\n    See also {!argmax}. *)\n\n(** {1:sorting Sorting and searching} *)\n\nval sort :\n  ?descending:bool ->\n  ?axis:int ->\n  ('a, 'b) t ->\n  ('a, 'b) t * (int32, int32_elt) t\n(** [sort ?descending ?axis t] sorts elements along [axis] and returns\n    [(sorted, indices)] where [indices] maps sorted positions back to originals.\n    [descending] defaults to [false]. [axis] defaults to [-1] (last).\n\n    The sort is stable (equal elements preserve their relative order). NaN sorts\n    to the end in ascending order and to the beginning in descending order.\n\n    Raises [Invalid_argument] if [axis] is out of bounds.\n\n    {@ocaml[\n      # create int32 [| 5 |] [| 3l; 1l; 4l; 1l; 5l |]\n        |> sort\n      - : (int32, int32_elt) t * (int32, int32_elt) t =\n      (int32 [5] [1, 1, ..., 4, 5], int32 [5] [1, 3, ..., 2, 4])\n    ]}\n\n    See also {!argsort}. *)\n\nval argsort :\n  ?descending:bool -> ?axis:int -> ('a, 'b) t -> (int32, int32_elt) t\n(** [argsort ?descending ?axis t] is [snd (sort ?descending ?axis t)].\n\n    See also {!sort}. 
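\n\n    For example:\n\n    {@ocaml[\n      # create int32 [| 4 |] [| 3l; 1l; 2l; 0l |]\n        |> argsort\n      - : (int32, int32_elt) t = [3, 1, 2, 0]\n    ]} 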
*)\n\n(** {1:linalg Linear algebra} *)\n\n(** {2:linalg_products Products} *)\n\nval dot : ('a, 'b) t -> ('a, 'b) t -> ('a, 'b) t\n(** [dot a b] is the generalised dot product.\n\n    Contracts the last axis of [a] with:\n    - the only axis of [b] when [b] is 1-D,\n    - the second-to-last axis of [b] otherwise.\n\n    Dimension rules:\n    - 1-D × 1-D → scalar (inner product).\n    - 2-D × 2-D → matrix multiplication.\n    - N-D × M-D → contraction; output axes are the non-contracted axes of [a]\n      followed by those of [b].\n\n    {b Note.} Unlike {!matmul}, [dot] does {e not} broadcast batch dimensions—it\n    concatenates them.\n\n    Raises [Invalid_argument] if contraction axes differ in size or either input\n    is 0-D.\n\n    {@ocaml[\n      # let a = create float32 [| 2 |] [| 1.; 2. |] in\n        let b = create float32 [| 2 |] [| 3.; 4. |] in\n        dot a b |> item []\n      - : float = 11.\n      # dot (ones float32 [| 3; 4; 5 |])\n            (ones float32 [| 5; 6 |]) |> shape\n      - : int array = [|3; 4; 6|]\n    ]}\n\n    See also {!matmul}, {!vdot}, {!vecdot}. *)\n\nval matmul : ('a, 'b) t -> ('a, 'b) t -> ('a, 'b) t\n(** [matmul a b] is the matrix product of [a] and [b] with batch broadcasting.\n\n    Dimension rules:\n    - 1-D × 1-D → scalar (inner product).\n    - 1-D × N-D → [a] is treated as a row vector.\n    - N-D × 1-D → [b] is treated as a column vector.\n    - N-D × M-D → matrix multiply on last two axes; leading axes are broadcast.\n\n    Raises [Invalid_argument] if inputs are 0-D or inner dimensions mismatch.\n\n    {@ocaml[\n      # let a =\n          create float32 [| 2; 2 |] [| 1.; 2.; 3.; 4. |]\n        in\n        let b = create float32 [| 2 |] [| 5.; 6. |] in\n        matmul a b\n      - : (float, float32_elt) t = [17, 39]\n      # matmul (ones float32 [| 1; 3; 4 |])\n               (ones float32 [| 5; 4; 2 |]) |> shape\n      - : int array = [|5; 3; 2|]\n    ]}\n\n    See also {!dot}, {!multi_dot}. 
*)\n\nval diagonal :\n  ?offset:int -> ?axis1:int -> ?axis2:int -> ('a, 'b) t -> ('a, 'b) t\n(** [diagonal ?offset ?axis1 ?axis2 t] extracts diagonals from 2-D planes\n    defined by [axis1] and [axis2]. [offset] defaults to [0]. [axis1] and\n    [axis2] default to the last two axes.\n\n    Raises [Invalid_argument] if [axis1 = axis2] or either is out of bounds.\n\n    See also {!diag}, {!trace}. *)\n\nval matrix_transpose : ('a, 'b) t -> ('a, 'b) t\n(** [matrix_transpose t] swaps the last two axes: [[…; m; n]] → [[…; n; m]]. For\n    1-D tensors, returns [t] unchanged.\n\n    See also {!transpose}. *)\n\nval vdot : ('a, 'b) t -> ('a, 'b) t -> ('a, 'b) t\n(** [vdot a b] is the dot product of two vectors. Both inputs are flattened; for\n    complex dtypes, [a] is conjugated first. Always returns a scalar.\n\n    Raises [Invalid_argument] if the inputs have different numbers of elements.\n\n    See also {!dot}, {!vecdot}. *)\n\nval vecdot : ?axis:int -> ('a, 'b) t -> ('a, 'b) t -> ('a, 'b) t\n(** [vecdot ?axis a b] is the dot product of [a] and [b] along [axis] with\n    broadcasting. [axis] defaults to [-1].\n\n    Raises [Invalid_argument] if the specified axis dimensions differ.\n\n    See also {!vdot}, {!dot}. *)\n\nval inner : ('a, 'b) t -> ('a, 'b) t -> ('a, 'b) t\n(** [inner a b] is the inner product over the last axes of [a] and [b].\n\n    Raises [Invalid_argument] if the last dimensions differ.\n\n    See also {!dot}, {!outer}. *)\n\nval outer : ('a, 'b) t -> ('a, 'b) t -> ('a, 'b) t\n(** [outer a b] is the outer product. Inputs are flattened to 1-D; the result\n    has shape [[numel a; numel b]].\n\n    See also {!inner}. 
*)\n\nval tensordot :\n  ?axes:int list * int list -> ('a, 'b) t -> ('a, 'b) t -> ('a, 'b) t\n(** [tensordot ?axes a b] contracts [a] and [b] along the specified axis pairs.\n    [axes] defaults to contracting the last axis of [a] with the first axis of\n    [b].\n\n    Raises [Invalid_argument] if the contracted axes have different sizes. *)\n\nval einsum : string -> ('a, 'b) t array -> ('a, 'b) t\n(** [einsum subscripts operands] evaluates Einstein summation.\n\n    {@ocaml[\n      # let a =\n          create float32 [| 2; 3 |]\n            [| 1.; 2.; 3.; 4.; 5.; 6. |]\n        in\n        let b =\n          create float32 [| 3; 2 |]\n            [| 1.; 2.; 3.; 4.; 5.; 6. |]\n        in\n        einsum \"ij,jk->ik\" [| a; b |] |> shape\n      - : int array = [|2; 2|]\n    ]}\n\n    See also {!matmul}, {!tensordot}. *)\n\nval kron : ('a, 'b) t -> ('a, 'b) t -> ('a, 'b) t\n(** [kron a b] is the Kronecker product. The result has shape\n    [[a.shape.(i) * b.shape.(i)]] for each [i]. *)\n\nval multi_dot : ('a, 'b) t array -> ('a, 'b) t\n(** [multi_dot ts] is the chained matrix product of [ts], automatically choosing\n    the association order that minimises computation.\n\n    Raises [Invalid_argument] if the array is empty, shapes are incompatible, or\n    dtypes are not floating-point or complex.\n\n    See also {!matmul}. *)\n\nval matrix_power : ('a, 'b) t -> int -> ('a, 'b) t\n(** [matrix_power t n] raises square matrix [t] to integer power [n]. [n = 0]\n    returns the identity; [n < 0] uses the inverse.\n\n    Raises [Invalid_argument] if [t] is not square, the dtype is not\n    floating-point or complex, or [n < 0] and [t] is singular. *)\n\nval cross : ?axis:int -> ('a, 'b) t -> ('a, 'b) t -> ('a, 'b) t\n(** [cross ?axis a b] is the cross product of 3-element vectors along [axis].\n    [axis] defaults to [-1].\n\n    Raises [Invalid_argument] if the axis dimension is not 3. 
*)\n\n(** {2:linalg_decomp Decompositions} *)\n\nval cholesky : ?upper:bool -> ('a, 'b) t -> ('a, 'b) t\n(** [cholesky ?upper a] is the Cholesky factor of positive-definite matrix [a].\n    When [upper] is [true], returns the upper-triangular factor [U] such that\n    [a = Uᵀ U]; otherwise (default) returns the lower-triangular factor [L] such\n    that [a = L Lᵀ].\n\n    Raises [Invalid_argument] if [a] is not positive-definite or the dtype is\n    not floating-point or complex.\n\n    See also {!solve}. *)\n\nval qr : ?mode:[ `Complete | `Reduced ] -> ('a, 'b) t -> ('a, 'b) t * ('a, 'b) t\n(** [qr ?mode a] is [(Q, R)] where [a = Q R], [Q] is orthogonal, and [R] is\n    upper-triangular. [mode] defaults to [`Reduced].\n\n    Raises [Invalid_argument] if the dtype is not floating-point or complex.\n\n    See also {!svd}. *)\n\nval svd :\n  ?full_matrices:bool ->\n  ('a, 'b) t ->\n  ('a, 'b) t * (float, float64_elt) t * ('a, 'b) t\n(** [svd ?full_matrices a] is [(U, S, Vh)] where [a = U diag(S) Vh]. [S]\n    contains the singular values in descending order. [full_matrices] defaults\n    to [false] (economy decomposition).\n\n    Raises [Invalid_argument] if the dtype is not floating-point or complex.\n\n    See also {!svdvals}, {!qr}. *)\n\nval svdvals : ('a, 'b) t -> (float, float64_elt) t\n(** [svdvals a] is the singular values of [a] in descending order. More\n    efficient than {!svd} when only the values are needed.\n\n    Raises [Invalid_argument] if the dtype is not floating-point or complex. *)\n\n(** {2:linalg_eig Eigenvalues and eigenvectors} *)\n\nval eig :\n  ('a, 'b) t -> (Complex.t, complex64_elt) t * (Complex.t, complex64_elt) t\n(** [eig a] is [(eigenvalues, eigenvectors)] of general square matrix [a].\n    Results are complex since real matrices may have complex eigenvalues.\n\n    Raises [Invalid_argument] if [a] is not square or the dtype is not\n    floating-point or complex.\n\n    See also {!eigh}, {!eigvals}. 
*)\n\nval eigh :\n  ?uplo:[ `U | `L ] -> ('a, 'b) t -> (float, float64_elt) t * ('a, 'b) t\n(** [eigh ?uplo a] is [(eigenvalues, eigenvectors)] of symmetric / Hermitian\n    matrix [a] in ascending eigenvalue order. [uplo] defaults to [`L]. More\n    efficient than {!eig} for symmetric matrices.\n\n    Raises [Invalid_argument] if [a] is not square or the dtype is not\n    floating-point or complex.\n\n    See also {!eig}, {!eigvalsh}. *)\n\nval eigvals : ('a, 'b) t -> (Complex.t, complex64_elt) t\n(** [eigvals a] is the eigenvalues of general square matrix [a]. More efficient\n    than {!eig} when eigenvectors are not needed.\n\n    Raises [Invalid_argument] if [a] is not square or the dtype is not\n    floating-point or complex.\n\n    See also {!eig}, {!eigvalsh}. *)\n\nval eigvalsh : ?uplo:[ `U | `L ] -> ('a, 'b) t -> (float, float64_elt) t\n(** [eigvalsh ?uplo a] is the eigenvalues of symmetric / Hermitian matrix [a] in\n    ascending order. [uplo] defaults to [`L].\n\n    Raises [Invalid_argument] if [a] is not square or the dtype is not\n    floating-point or complex.\n\n    See also {!eigh}, {!eigvals}. *)\n\n(** {2:linalg_norms Norms and invariants} *)\n\nval norm :\n  ?ord:\n    [ `Fro\n    | `Nuc\n    | `One\n    | `Two\n    | `Inf\n    | `NegOne\n    | `NegTwo\n    | `NegInf\n    | `P of float ] ->\n  ?axes:int list ->\n  ?keepdims:bool ->\n  ('a, 'b) t ->\n  ('a, 'b) t\n(** [norm ?ord ?axes ?keepdims t] is the matrix or vector norm. [ord] defaults\n    to Frobenius for matrices, 2-norm for vectors. 
[keepdims] defaults to\n    [false].\n\n    - [`Fro] — Frobenius norm.\n    - [`Nuc] — nuclear norm (sum of singular values).\n    - [`One] — max absolute column sum (matrix) or 1-norm (vector).\n    - [`Two] — largest singular value (matrix) or 2-norm (vector).\n    - [`Inf] — max absolute row sum (matrix) or ∞-norm (vector).\n    - [`P p] — p-norm (vectors only).\n    - [`NegOne], [`NegTwo], [`NegInf] — corresponding minimum norms.\n\n    Raises [Invalid_argument] if [ord] requires a floating-point or complex\n    dtype. *)\n\nval cond :\n  ?p:[ `One | `Two | `Inf | `NegOne | `NegTwo | `NegInf | `Fro ] ->\n  ('a, 'b) t ->\n  ('a, 'b) t\n(** [cond ?p a] is the condition number of [a] in the [p]-norm. [p] defaults to\n    [`Two].\n\n    Raises [Invalid_argument] if the dtype is not floating-point or complex. *)\n\nval det : ('a, 'b) t -> ('a, 'b) t\n(** [det a] is the determinant of square matrix [a].\n\n    Raises [Invalid_argument] if [a] is not square or the dtype is not\n    floating-point or complex. *)\n\nval slogdet : ('a, 'b) t -> (float, float32_elt) t * (float, float32_elt) t\n(** [slogdet a] is [(sign, log_abs_det)] where\n    [det a = sign * exp(log_abs_det)]. More numerically stable than {!det} for\n    matrices with very large or small determinants.\n\n    Raises [Invalid_argument] if [a] is not square or the dtype is not\n    floating-point or complex. *)\n\nval matrix_rank :\n  ?tol:float -> ?rtol:float -> ?hermitian:bool -> ('a, 'b) t -> int\n(** [matrix_rank ?tol ?rtol ?hermitian a] is the rank of [a], counting singular\n    values above the tolerance. [rtol] defaults to [max(M, N) * ε * σ_max]. When\n    [hermitian] is [true] (default [false]), uses a more efficient\n    eigenvalue-based algorithm.\n\n    Raises [Invalid_argument] if the dtype is not floating-point or complex. *)\n\nval trace : ?offset:int -> ('a, 'b) t -> ('a, 'b) t\n(** [trace ?offset t] is the sum along the [offset]-th diagonal. 
[offset]\n    defaults to [0].\n\n    Raises [Invalid_argument] if [t] has fewer than 2 dimensions.\n\n    See also {!diagonal}. *)\n\n(** {2:linalg_solve Solving} *)\n\nval solve : ('a, 'b) t -> ('a, 'b) t -> ('a, 'b) t\n(** [solve a b] is [x] such that [a @@ x = b].\n\n    Raises [Invalid_argument] if [a] is singular or the dtype is not\n    floating-point or complex.\n\n    See also {!lstsq}, {!inv}. *)\n\nval lstsq :\n  ?rcond:float ->\n  ('a, 'b) t ->\n  ('a, 'b) t ->\n  ('a, 'b) t * ('a, 'b) t * int * (float, float64_elt) t\n(** [lstsq ?rcond a b] is [(x, residuals, rank, sv)] — the least-squares\n    solution to [a @@ x ≈ b]. [rcond] defaults to machine precision.\n\n    Raises [Invalid_argument] if the dtype is not floating-point or complex.\n\n    See also {!solve}. *)\n\nval inv : ('a, 'b) t -> ('a, 'b) t\n(** [inv a] is the inverse of square matrix [a].\n\n    Raises [Invalid_argument] if [a] is singular, not square, or the dtype is\n    not floating-point or complex.\n\n    See also {!pinv}, {!solve}. *)\n\nval pinv : ?rtol:float -> ?hermitian:bool -> ('a, 'b) t -> ('a, 'b) t\n(** [pinv ?rtol ?hermitian a] is the Moore–Penrose pseudoinverse of [a]. Handles\n    non-square and singular matrices. [hermitian] defaults to [false].\n\n    Raises [Invalid_argument] if the dtype is not floating-point or complex.\n\n    See also {!inv}. *)\n\nval tensorsolve : ?axes:int list -> ('a, 'b) t -> ('a, 'b) t -> ('a, 'b) t\n(** [tensorsolve ?axes a b] solves the tensor equation [tensordot a x axes = b]\n    for [x].\n\n    Raises [Invalid_argument] if shapes are incompatible or the dtype is not\n    floating-point or complex. *)\n\nval tensorinv : ?ind:int -> ('a, 'b) t -> ('a, 'b) t\n(** [tensorinv ?ind a] is the tensor inverse such that\n    [tensordot a (tensorinv a) ind] is the identity. [ind] defaults to [2].\n\n    Raises [Invalid_argument] if the result is not square in the specified\n    dimensions or the dtype is not floating-point or complex. 
*)\n\n(** {1:fft Fourier transforms} *)\n\ntype fft_norm = [ `Backward | `Forward | `Ortho ]\n(** FFT normalisation mode.\n    - [`Backward] — normalise by [1/n] on the inverse (default).\n    - [`Forward] — normalise by [1/n] on the forward.\n    - [`Ortho] — normalise by [1/√n] on both. *)\n\nval fft :\n  ?axis:int ->\n  ?n:int ->\n  ?norm:fft_norm ->\n  (Complex.t, 'a) t ->\n  (Complex.t, 'a) t\n(** [fft ?axis ?n ?norm x] is the 1-D discrete Fourier transform along [axis].\n    [axis] defaults to [-1]. [n] truncates or zero-pads the input. [norm]\n    defaults to [`Backward].\n\n    See also {!ifft}, {!rfft}. *)\n\nval ifft :\n  ?axis:int ->\n  ?n:int ->\n  ?norm:fft_norm ->\n  (Complex.t, 'a) t ->\n  (Complex.t, 'a) t\n(** [ifft ?axis ?n ?norm x] is the inverse of {!fft}.\n\n    See also {!fft}, {!irfft}. *)\n\nval fft2 :\n  ?axes:int list ->\n  ?s:int list ->\n  ?norm:fft_norm ->\n  (Complex.t, 'a) t ->\n  (Complex.t, 'a) t\n(** [fft2 ?axes ?s ?norm x] is the 2-D FFT. [axes] defaults to the last two.\n\n    Raises [Invalid_argument] if the input has fewer than 2 dimensions.\n\n    See also {!ifft2}, {!fft}. *)\n\nval ifft2 :\n  ?axes:int list ->\n  ?s:int list ->\n  ?norm:fft_norm ->\n  (Complex.t, 'a) t ->\n  (Complex.t, 'a) t\n(** [ifft2 ?axes ?s ?norm x] is the inverse of {!fft2}. *)\n\nval fftn :\n  ?axes:int list ->\n  ?s:int list ->\n  ?norm:fft_norm ->\n  (Complex.t, 'a) t ->\n  (Complex.t, 'a) t\n(** [fftn ?axes ?s ?norm x] is the N-D FFT. [axes] defaults to all.\n\n    See also {!ifftn}. *)\n\nval ifftn :\n  ?axes:int list ->\n  ?s:int list ->\n  ?norm:fft_norm ->\n  (Complex.t, 'a) t ->\n  (Complex.t, 'a) t\n(** [ifftn ?axes ?s ?norm x] is the inverse of {!fftn}. *)\n\nval rfft :\n  ?axis:int ->\n  ?n:int ->\n  ?norm:fft_norm ->\n  (float, 'a) t ->\n  (Complex.t, complex64_elt) t\n(** [rfft ?axis ?n ?norm x] is the 1-D FFT of real input. 
Returns only the\n    non-redundant positive frequencies; the output size along the transformed\n    axis is [n/2 + 1].\n\n    {@ocaml[\n      # create float64 [| 4 |] [| 0.; 1.; 2.; 3. |]\n        |> rfft |> shape\n      - : int array = [|3|]\n    ]}\n\n    See also {!irfft}, {!fft}. *)\n\nval irfft :\n  ?axis:int ->\n  ?n:int ->\n  ?norm:fft_norm ->\n  (Complex.t, 'a) t ->\n  (float, float64_elt) t\n(** [irfft ?axis ?n ?norm x] is the inverse of {!rfft}, producing real output.\n    Assumes Hermitian symmetry.\n\n    See also {!rfft}. *)\n\nval rfft2 :\n  ?axes:int list ->\n  ?s:int list ->\n  ?norm:fft_norm ->\n  (float, 'a) t ->\n  (Complex.t, complex64_elt) t\n(** [rfft2 ?axes ?s ?norm x] is the 2-D FFT of real input.\n\n    See also {!irfft2}, {!rfft}. *)\n\nval irfft2 :\n  ?axes:int list ->\n  ?s:int list ->\n  ?norm:fft_norm ->\n  (Complex.t, 'a) t ->\n  (float, float64_elt) t\n(** [irfft2 ?axes ?s ?norm x] is the inverse of {!rfft2}. *)\n\nval rfftn :\n  ?axes:int list ->\n  ?s:int list ->\n  ?norm:fft_norm ->\n  (float, 'a) t ->\n  (Complex.t, complex64_elt) t\n(** [rfftn ?axes ?s ?norm x] is the N-D FFT of real input.\n\n    See also {!irfftn}, {!rfft}. *)\n\nval irfftn :\n  ?axes:int list ->\n  ?s:int list ->\n  ?norm:fft_norm ->\n  (Complex.t, 'a) t ->\n  (float, float64_elt) t\n(** [irfftn ?axes ?s ?norm x] is the inverse of {!rfftn}. *)\n\nval hfft :\n  ?axis:int ->\n  ?n:int ->\n  ?norm:fft_norm ->\n  (Complex.t, 'a) t ->\n  (float, float64_elt) t\n(** [hfft ?axis ?n ?norm x] is the FFT of a signal with Hermitian symmetry,\n    producing real output. *)\n\nval ihfft :\n  ?axis:int ->\n  ?n:int ->\n  ?norm:fft_norm ->\n  (float, 'a) t ->\n  (Complex.t, complex64_elt) t\n(** [ihfft ?axis ?n ?norm x] is the inverse of {!hfft}. 
*)\n\nval fftfreq : ?d:float -> int -> (float, float64_elt) t\n(** [fftfreq ?d n] is the DFT sample frequencies for window length [n] and\n    sample spacing [d] (default [1.0]).\n\n    {@ocaml[\n      # fftfreq 4\n      - : (float, float64_elt) t = [0, 0.25, -0.5, -0.25]\n    ]}\n\n    See also {!rfftfreq}. *)\n\nval rfftfreq : ?d:float -> int -> (float, float64_elt) t\n(** [rfftfreq ?d n] is the positive DFT sample frequencies:\n    [[0, 1, …, n/2] / (d * n)].\n\n    See also {!fftfreq}. *)\n\nval fftshift : ?axes:int list -> ('a, 'b) t -> ('a, 'b) t\n(** [fftshift ?axes t] shifts the zero-frequency component to the centre. [axes]\n    defaults to all.\n\n    {@ocaml[\n      # fftfreq 5 |> fftshift\n      - : (float, float64_elt) t = float64 [5] [-0.4, -0.2, ..., 0.2, 0.4]\n    ]}\n\n    See also {!ifftshift}. *)\n\nval ifftshift : ?axes:int list -> ('a, 'b) t -> ('a, 'b) t\n(** [ifftshift ?axes t] is the inverse of {!fftshift}. *)\n\n(** {1:activation Activation functions} *)\n\nval relu : ('a, 'b) t -> ('a, 'b) t\n(** [relu t] is [max(0, t)] element-wise.\n\n    {@ocaml[\n      # create float32 [| 5 |]\n          [| -2.; -1.; 0.; 1.; 2. |]\n        |> relu\n      - : (float, float32_elt) t = float32 [5] [0, 0, ..., 1, 2]\n    ]} *)\n\nval sigmoid : ('a, 'b) t -> ('a, 'b) t\n(** [sigmoid t] is [1 / (1 + exp(-t))] element-wise. Output in [(0, 1)].\n\n    {@ocaml[\n      # sigmoid (scalar float32 0.) |> item []\n      - : float = 0.5\n    ]} *)\n\nval softmax : ?axes:int list -> ?scale:float -> ('a, 'b) t -> ('a, 'b) t\n(** [softmax ?axes ?scale t] is the softmax normalisation\n    [exp(scale * (t - max t)) / Σ exp(scale * (t - max t))]. [axes] defaults to\n    [[-1]]. [scale] defaults to [1.0]. Output sums to [1] along the specified\n    axes.\n\n    {@ocaml[\n      # create float32 [| 3 |] [| 1.; 2.; 3. |]\n        |> softmax |> sum |> item []\n      - : float = 1.\n    ]}\n\n    See also {!log_softmax}. 
*)\n\nval log_softmax : ?axes:int list -> ?scale:float -> ('a, 'b) t -> ('a, 'b) t\n(** [log_softmax ?axes ?scale t] is the natural logarithm of {!softmax}. Same\n    defaults as {!softmax}.\n\n    See also {!softmax}, {!logsumexp}. *)\n\nval logsumexp : ?axes:int list -> ?keepdims:bool -> ('a, 'b) t -> ('a, 'b) t\n(** [logsumexp ?axes ?keepdims t] is [log(Σ exp(t))] computed in a numerically\n    stable way. [axes] defaults to all. [keepdims] defaults to [false].\n\n    See also {!logmeanexp}, {!log_softmax}. *)\n\nval logmeanexp : ?axes:int list -> ?keepdims:bool -> ('a, 'b) t -> ('a, 'b) t\n(** [logmeanexp ?axes ?keepdims t] is [log(mean(exp(t)))]: {!logsumexp} minus\n    [log N]. [axes] defaults to all. [keepdims] defaults to [false].\n\n    See also {!logsumexp}. *)\n\nval standardize :\n  ?axes:int list ->\n  ?mean:('a, 'b) t ->\n  ?variance:('a, 'b) t ->\n  ?epsilon:float ->\n  ('a, 'b) t ->\n  ('a, 'b) t\n(** [standardize ?axes ?mean ?variance ?epsilon t] is\n    [(t - mean) / sqrt(variance + epsilon)]. When [mean] or [variance] are\n    omitted, they are computed along [axes] (default all). [epsilon] defaults to\n    [1e-5]. *)\n\nval erf : ('a, 'b) t -> ('a, 'b) t\n(** [erf t] is the error function [erf(x) = (2/√π) ∫₀ˣ e^{-u²} du].\n\n    {@ocaml[\n      # erf (scalar float32 0.) |> item []\n      - : float = 0.\n    ]} *)\n\n(** {1:windows Sliding windows} *)\n\n(** {2:patches Patches} *)\n\nval extract_patches :\n  kernel_size:int array ->\n  stride:int array ->\n  dilation:int array ->\n  padding:(int * int) array ->\n  ('a, 'b) t ->\n  ('a, 'b) t\n(** [extract_patches ~kernel_size ~stride ~dilation ~padding t] extracts sliding\n    windows from the last [K] spatial dimensions where\n    [K = Array.length kernel_size].\n\n    Input: [[leading…; spatial…]]. Output: [[leading…; prod(kernel_size); L]].\n\n    {@ocaml[\n      # arange_f float32 0. 16. 
1.\n        |> reshape [| 1; 1; 4; 4 |]\n        |> extract_patches\n             ~kernel_size:[| 2; 2 |]\n             ~stride:[| 1; 1 |]\n             ~dilation:[| 1; 1 |]\n             ~padding:[| (0, 0); (0, 0) |]\n        |> shape\n      - : int array = [|1; 1; 4; 9|]\n    ]}\n\n    See also {!combine_patches}. *)\n\nval combine_patches :\n  output_size:int array ->\n  kernel_size:int array ->\n  stride:int array ->\n  dilation:int array ->\n  padding:(int * int) array ->\n  ('a, 'b) t ->\n  ('a, 'b) t\n(** [combine_patches ~output_size ~kernel_size ~stride ~dilation ~padding t] is\n    the inverse of {!extract_patches}. Overlapping values are summed.\n\n    See also {!extract_patches}. *)\n\n(** {2:correlate Cross-correlation and convolution} *)\n\nval correlate :\n  ?padding:[ `Full | `Same | `Valid ] -> ('a, 'b) t -> ('a, 'b) t -> ('a, 'b) t\n(** [correlate ?padding x kernel] is the N-D cross-correlation (no kernel flip).\n    Spatial dimensions [K = ndim kernel]. Leading dimensions of [x] beyond [K]\n    are batch dimensions. [padding] defaults to [`Valid].\n\n    See also {!convolve}. *)\n\nval convolve :\n  ?padding:[ `Full | `Same | `Valid ] -> ('a, 'b) t -> ('a, 'b) t -> ('a, 'b) t\n(** [convolve ?padding x kernel] is like {!correlate} but flips the kernel along\n    all spatial axes before correlating.\n\n    See also {!correlate}. *)\n\n(** {2:filters Filters} *)\n\nval maximum_filter :\n  kernel_size:int array -> ?stride:int array -> ('a, 'b) t -> ('a, 'b) t\n(** [maximum_filter ~kernel_size ?stride t] is the sliding-window maximum over\n    the last [K] dimensions. [stride] defaults to [kernel_size].\n\n    See also {!minimum_filter}, {!uniform_filter}. *)\n\nval minimum_filter :\n  kernel_size:int array -> ?stride:int array -> ('a, 'b) t -> ('a, 'b) t\n(** [minimum_filter ~kernel_size ?stride t] is the sliding-window minimum over\n    the last [K] dimensions. [stride] defaults to [kernel_size].\n\n    See also {!maximum_filter}. 
*)\n\nval uniform_filter :\n  kernel_size:int array -> ?stride:int array -> (float, 'b) t -> (float, 'b) t\n(** [uniform_filter ~kernel_size ?stride t] is the sliding-window mean over the\n    last [K] dimensions. [stride] defaults to [kernel_size].\n\n    See also {!maximum_filter}, {!minimum_filter}. *)\n\n(** {1:iteration Iteration} *)\n\nval map_item : ('a -> 'a) -> ('a, 'b) t -> ('a, 'b) t\n(** [map_item f t] applies [f] to each scalar element of [t] and returns a fresh\n    tensor of the results. *)\n\nval iter_item : ('a -> unit) -> ('a, 'b) t -> unit\n(** [iter_item f t] applies [f] to each scalar element of [t] for its side\n    effects. *)\n\nval fold_item : ('a -> 'b -> 'a) -> 'a -> ('b, 'c) t -> 'a\n(** [fold_item f init t] folds [f] over the scalar elements of [t] in row-major\n    order, starting with [init]. *)\n\nval map : (('a, 'b) t -> ('a, 'b) t) -> ('a, 'b) t -> ('a, 'b) t\n(** [map f t] applies tensor function [f] to each element of [t], presented as a\n    scalar tensor.\n\n    See also {!map_item}. *)\n\nval iter : (('a, 'b) t -> unit) -> ('a, 'b) t -> unit\n(** [iter f t] applies tensor function [f] to each element of [t], presented as\n    a scalar tensor.\n\n    See also {!iter_item}. *)\n\nval fold : ('a -> ('b, 'c) t -> 'a) -> 'a -> ('b, 'c) t -> 'a\n(** [fold f init t] folds tensor function [f] over the elements of [t], each\n    presented as a scalar tensor.\n\n    See also {!fold_item}. *)\n\n(** {1:pp Formatting} *)\n\nval pp_data : Format.formatter -> ('a, 'b) t -> unit\n(** [pp_data fmt t] formats the data of [t]. *)\n\nval format_to_string : (Format.formatter -> 'a -> unit) -> 'a -> string\n(** [format_to_string pp x] is the string produced by [pp]. *)\n\nval print_with_formatter : (Format.formatter -> 'a -> unit) -> 'a -> unit\n(** [print_with_formatter pp x] prints [x] to stdout using [pp]. *)\n\nval data_to_string : ('a, 'b) t -> string\n(** [data_to_string t] is the data of [t] as a string. 
*)\n\nval print_data : ('a, 'b) t -> unit\n(** [print_data t] prints the data of [t] to stdout. *)\n\nval pp_dtype : Format.formatter -> ('a, 'b) dtype -> unit\n(** [pp_dtype fmt dt] formats [dt]. *)\n\nval dtype_to_string : ('a, 'b) dtype -> string\n(** [dtype_to_string dt] is [dt] as a string. *)\n\nval shape_to_string : int array -> string\n(** [shape_to_string s] formats [s] as [\"[2x3x4]\"]. *)\n\nval pp_shape : Format.formatter -> int array -> unit\n(** [pp_shape fmt s] formats shape [s]. *)\n\nval pp : Format.formatter -> ('a, 'b) t -> unit\n(** [pp fmt t] formats [t] for debugging (dtype, shape, and data). *)\n\nval print : ('a, 'b) t -> unit\n(** [print t] prints [t] to stdout. *)\n\nval to_string : ('a, 'b) t -> string\n(** [to_string t] is [t] formatted as a string (dtype, shape, and data). *)\n"
  },
  {
    "path": "packages/nx/lib/prelude.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n#install_printer Nx.pp_data;;\n\nopen Nx\n"
  },
  {
    "path": "packages/nx/test/dune",
    "content": "(library\n (name test_nx_support)\n (modules test_nx_support)\n (libraries nx.core nx windtrap))\n\n(tests\n (names\n  test_nx_indexing\n  test_nx_sanity\n  test_nx_linalg\n  test_nx_sorting\n  test_nx_basics\n  test_nx_manipulation\n  test_nx_extended_dtypes\n  test_nx_fft\n  test_nx_ops\n  test_nx_rng)\n (package nx)\n (modules :standard \\ test_nx_support test_nx_io)\n (libraries nx.buffer nx nx.core windtrap test_nx_support))\n\n(test\n (name test_nx_io)\n (package nx)\n (modules test_nx_io)\n (libraries nx nx.io windtrap)\n (deps\n  (glob_files fixtures/*)))\n"
  },
  {
    "path": "packages/nx/test/failing/bug_blit_overlapping.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Nx\n\nlet () =\n  let t = create float32 [| 5 |] [| 1.; 2.; 3.; 4.; 5. |] in\n  Printf.printf \"Original array: [\";\n  Array.iter (Printf.printf \"%.0f \") (to_array t);\n  Printf.printf \"]\\n\\n\";\n\n  let view1 = slice [ Nx.R (0, 3) ] t in\n  Printf.printf \"view1 = slice [0] [3] (indices 0-2): [\";\n  Array.iter (Printf.printf \"%.0f \") (to_array view1);\n  Printf.printf \"]\\n\";\n\n  let view2 = slice [ Nx.R (2, 5) ] t in\n  Printf.printf \"view2 = slice [2] [5] (indices 2-4): [\";\n  Array.iter (Printf.printf \"%.0f \") (to_array view2);\n  Printf.printf \"]\\n\\n\";\n\n  Printf.printf \"Attempting blit view1 -> view2...\\n\";\n  try\n    blit view1 view2;\n    Printf.printf \"Result: [\";\n    Array.iter (Printf.printf \"%.0f \") (to_array t);\n    Printf.printf \"]\\n\";\n    Printf.printf \"Expected: [1 2 1 2 3]\\n\"\n  with e -> Printf.printf \"Error: %s\\n\" (Printexc.to_string e)\n"
  },
  {
    "path": "packages/nx/test/failing/dune",
    "content": "(executables\n (names bug_blit_overlapping)\n (libraries nx))\n"
  },
  {
    "path": "packages/nx/test/fixtures/generate.py",
    "content": "#!/usr/bin/env python3\n\"\"\"Generate safetensors test fixtures.\n\nRequires: pip install safetensors numpy torch\n\nUsage:\n    cd nx/test/fixtures\n    python generate.py\n\"\"\"\n\nimport struct\nimport numpy as np\nfrom safetensors.numpy import save_file\nfrom safetensors.torch import save_file as save_torch\nimport torch\n\n\ndef main():\n    # F16 fixture: specific bit patterns\n    # [+0, smallest subnormal, 1.0, +inf, NaN]\n    f16_bits = [0x0000, 0x0001, 0x3C00, 0x7C00, 0x7E01]\n    f16_bytes = struct.pack(\"<\" + \"H\" * len(f16_bits), *f16_bits)\n    f16 = np.frombuffer(f16_bytes, dtype=np.float16)\n    save_file({\"f16_tensor\": f16}, \"f16_bit_exact.safetensors\")\n    print(\"wrote f16_bit_exact.safetensors\")\n\n    # BF16 fixture: specific bit patterns (numpy lacks bfloat16, use torch)\n    # [+0, smallest subnormal, 1.0, +inf, NaN]\n    bf16_bits = [0x0000, 0x0001, 0x3F80, 0x7F80, 0x7FC1]\n    bf16 = torch.tensor(bf16_bits, dtype=torch.int16).view(torch.bfloat16)\n    save_torch({\"bf16_tensor\": bf16}, \"bf16_bit_exact.safetensors\")\n    print(\"wrote bf16_bit_exact.safetensors\")\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "packages/nx/test/props/dune",
    "content": "(library\n (name test_nx_props_support)\n (modules test_nx_props_support)\n (libraries nx_core nx windtrap))\n\n(tests\n (names test_nx_props)\n (package nx)\n (modules :standard \\ test_nx_props_support)\n (libraries nx nx_core windtrap windtrap.prop test_nx_props_support))\n"
  },
  {
    "path": "packages/nx/test/props/test_nx_props.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Property-based tests for Nx operations.\n\n   Each property verifies an algebraic law or invariant over randomly generated\n   tensors. These complement the unit tests which cover edge cases, error\n   conditions, and NaN/Inf behavior. *)\n\nopen Windtrap\nopen Test_nx_props_support\n\n(* ── Arithmetic Properties ── *)\n\nlet arithmetic_props =\n  [\n    (* Addition *)\n    prop \"add commutative (f32)\" f32_pair (fun (a, b) ->\n        approx_equal (Nx.add a b) (Nx.add b a));\n    prop \"add commutative (i32)\" i32_pair (fun (a, b) ->\n        exact_equal (Nx.add a b) (Nx.add b a));\n    prop \"add identity (f32)\" f32_any (fun a ->\n        approx_equal (Nx.add a (Nx.zeros_like a)) a);\n    prop \"add identity (i32)\" i32_any (fun a ->\n        exact_equal (Nx.add a (Nx.zeros_like a)) a);\n    prop \"add inverse (f32)\" f32_any (fun a ->\n        let z = Nx.add a (Nx.neg a) in\n        approx_equal z (Nx.zeros_like a));\n    prop \"sub is add neg (f32)\" f32_pair (fun (a, b) ->\n        approx_equal (Nx.sub a b) (Nx.add a (Nx.neg b)));\n    prop \"sub is add neg (i32)\" i32_pair (fun (a, b) ->\n        exact_equal (Nx.sub a b) (Nx.add a (Nx.neg b)));\n    (* Multiplication *)\n    prop \"mul commutative (f32)\" f32_pair (fun (a, b) ->\n        approx_equal (Nx.mul a b) (Nx.mul b a));\n    prop \"mul commutative (i32)\" i32_pair (fun (a, b) ->\n        exact_equal (Nx.mul a b) (Nx.mul b a));\n    prop \"mul identity (f32)\" f32_any (fun a ->\n        approx_equal (Nx.mul a (Nx.ones_like a)) a);\n    prop \"mul identity (i32)\" i32_any (fun a ->\n        exact_equal (Nx.mul a (Nx.ones_like a)) a);\n    prop \"mul zero (f32)\" f32_any (fun a ->\n        approx_equal (Nx.mul a (Nx.zeros_like a)) 
(Nx.zeros_like a));\n    prop \"mul zero (i32)\" i32_any (fun a ->\n        exact_equal (Nx.mul a (Nx.zeros_like a)) (Nx.zeros_like a));\n    prop \"distributive (i32)\" i32_triple (fun (a, b, c) ->\n        exact_equal (Nx.mul a (Nx.add b c)) (Nx.add (Nx.mul a b) (Nx.mul a c)));\n    (* Division / Modulo *)\n    prop \"div inverse of mul (f32)\" f32_pair (fun (a, b) ->\n        assume (all_nonzero_f32 b);\n        allclose ~atol:1e-3 ~rtol:1e-3 (Nx.div (Nx.mul a b) b) a);\n    prop \"div self = ones (f32)\" f32_any (fun a ->\n        assume (all_nonzero_f32 a);\n        approx_equal (Nx.div a a) (Nx.ones_like a));\n    prop \"int div/mod relation (i32)\" i32_pair_b_nonzero (fun (a, b) ->\n        exact_equal (Nx.add (Nx.mul (Nx.div a b) b) (Nx.mod_ a b)) a);\n    (* Negation *)\n    prop \"neg involution (f32)\" f32_any (fun a ->\n        approx_equal (Nx.neg (Nx.neg a)) a);\n    prop \"neg involution (i32)\" i32_any (fun a ->\n        exact_equal (Nx.neg (Nx.neg a)) a);\n    (* Min / Max *)\n    prop \"maximum commutative (f32)\" f32_pair (fun (a, b) ->\n        assume (no_nan a && no_nan b);\n        approx_equal (Nx.maximum a b) (Nx.maximum b a));\n    prop \"minimum commutative (f32)\" f32_pair (fun (a, b) ->\n        assume (no_nan a && no_nan b);\n        approx_equal (Nx.minimum a b) (Nx.minimum b a));\n    prop \"maximum idempotent (f32)\" f32_any (fun a ->\n        assume (no_nan a);\n        approx_equal (Nx.maximum a a) a);\n  ]\n\n(* ── Shape Manipulation Properties ── *)\n\nlet shape_props =\n  [\n    prop \"reshape roundtrip (f32)\" f32_any (fun t ->\n        let flat = Nx.flatten t in\n        approx_equal (Nx.reshape (Nx.shape t) flat) t);\n    prop \"flatten preserves data (f32)\" f32_any (fun t ->\n        Nx.to_array (Nx.flatten t) = Nx.to_array t);\n    prop \"transpose involution (2d f32)\" f32_2d (fun t ->\n        approx_equal (Nx.transpose (Nx.transpose t)) t);\n    prop \"transpose shape (2d f32)\" f32_2d (fun t ->\n        let s = 
Nx.shape t in\n        let ts = Nx.shape (Nx.transpose t) in\n        ts = [| s.(1); s.(0) |]);\n    prop \"flip involution (f32)\" f32_any (fun t ->\n        approx_equal (Nx.flip (Nx.flip t)) t);\n    prop \"copy preserves data (f32)\" f32_any (fun t ->\n        approx_equal (Nx.copy t) t);\n    prop \"copy independence (f32)\" f32_any (fun t ->\n        assume (Nx.numel t > 0);\n        let c = Nx.copy t in\n        let orig_first = Nx.item [ 0 ] (Nx.flatten t) in\n        Nx.set_item [ 0 ] 99999.0 (Nx.flatten c);\n        let after_first = Nx.item [ 0 ] (Nx.flatten t) in\n        Float.equal orig_first after_first);\n    prop \"contiguous is contiguous (f32)\" f32_any (fun t ->\n        Nx.is_c_contiguous (Nx.contiguous t));\n    prop \"contiguous preserves data (f32)\" f32_any (fun t ->\n        approx_equal (Nx.contiguous t) t);\n    prop \"reshape preserves numel (f32)\" f32_any (fun t ->\n        Nx.numel (Nx.flatten t) = Nx.numel t);\n  ]\n\n(* ── Comparison Properties ── *)\n\nlet comparison_props =\n  [\n    prop \"equal reflexive (f32)\" f32_any (fun a ->\n        assume (no_nan a);\n        all_true (Nx.equal a a));\n    prop \"less irreflexive (f32)\" f32_any (fun a ->\n        all_true (Nx.logical_not (Nx.less a a)));\n    prop \"less/greater complement (f32)\" f32_pair (fun (a, b) ->\n        all_true (Nx.array_equal (Nx.less a b) (Nx.greater b a)));\n    prop \"less_equal from less|equal (f32)\" f32_pair (fun (a, b) ->\n        assume (no_nan a && no_nan b);\n        all_true\n          (Nx.array_equal (Nx.less_equal a b)\n             (Nx.logical_or (Nx.less a b) (Nx.equal a b))));\n    prop \"not_equal complement of equal (f32)\" f32_pair (fun (a, b) ->\n        assume (no_nan a && no_nan b);\n        all_true\n          (Nx.array_equal (Nx.not_equal a b) (Nx.logical_not (Nx.equal a b))));\n  ]\n\n(* ── Logical & Bitwise Properties ── *)\n\nlet logical_bitwise_props =\n  [\n    prop \"bitwise_not involution (i32)\" i32_any (fun a ->\n        
exact_equal (Nx.bitwise_not (Nx.bitwise_not a)) a);\n    prop \"bitwise_and commutative (i32)\" i32_pair (fun (a, b) ->\n        exact_equal (Nx.bitwise_and a b) (Nx.bitwise_and b a));\n    prop \"bitwise_or commutative (i32)\" i32_pair (fun (a, b) ->\n        exact_equal (Nx.bitwise_or a b) (Nx.bitwise_or b a));\n    prop \"bitwise_xor self = zeros (i32)\" i32_any (fun a ->\n        exact_equal (Nx.bitwise_xor a a) (Nx.zeros_like a));\n    prop \"de morgan and (i32)\" i32_pair (fun (a, b) ->\n        exact_equal\n          (Nx.bitwise_not (Nx.bitwise_and a b))\n          (Nx.bitwise_or (Nx.bitwise_not a) (Nx.bitwise_not b)));\n    prop \"de morgan or (i32)\" i32_pair (fun (a, b) ->\n        exact_equal\n          (Nx.bitwise_not (Nx.bitwise_or a b))\n          (Nx.bitwise_and (Nx.bitwise_not a) (Nx.bitwise_not b)));\n  ]\n\n(* ── Rounding Properties ── *)\n\nlet rounding_props =\n  let open Nx in\n  [\n    prop \"floor <= input (f32)\" f32_any (fun x ->\n        assume (all_finite x);\n        all_true (less_equal (floor x) x));\n    prop \"ceil >= input (f32)\" f32_any (fun x ->\n        assume (all_finite x);\n        all_true (greater_equal (ceil x) x));\n    prop \"floor idempotent (f32)\" f32_any (fun x ->\n        assume (all_finite x);\n        approx_equal (floor (floor x)) (floor x));\n    prop \"ceil idempotent (f32)\" f32_any (fun x ->\n        assume (all_finite x);\n        approx_equal (ceil (ceil x)) (ceil x));\n    prop \"round idempotent (f32)\" f32_any (fun x ->\n        assume (all_finite x);\n        approx_equal (round (round x)) (round x));\n  ]\n\n(* ── Sorting Properties ── *)\n\nlet sorting_props =\n  [\n    prop \"sort is sorted (f32 1d)\" f32_1d (fun x ->\n        assume (no_nan x);\n        let sorted, _indices = Nx.sort x in\n        let n = Nx.numel sorted in\n        let rec check i =\n          if i >= n then true\n          else Nx.item [ i - 1 ] sorted <= Nx.item [ i ] sorted && check (i + 1)\n        in\n        n <= 1 || check 
1);\n    prop \"sort idempotent (f32 1d)\" f32_1d (fun x ->\n        assume (no_nan x);\n        let s1, _ = Nx.sort x in\n        let s2, _ = Nx.sort s1 in\n        approx_equal s1 s2);\n    prop \"sort preserves shape (f32 1d)\" f32_1d (fun x ->\n        let sorted, _ = Nx.sort x in\n        Nx.shape sorted = Nx.shape x);\n    prop \"argsort valid indices (f32 1d)\" f32_1d (fun x ->\n        let _, indices = Nx.sort x in\n        let n = Nx.numel x in\n        let valid = ref true in\n        for i = 0 to n - 1 do\n          let idx = Int32.to_int (Nx.item [ i ] indices) in\n          if idx < 0 || idx >= n then valid := false\n        done;\n        !valid);\n    prop \"sort preserves elements (i32 1d)\" i32_1d (fun x ->\n        let sorted, _ = Nx.sort x in\n        let a = Array.copy (Nx.to_array x) in\n        let b = Array.copy (Nx.to_array sorted) in\n        Array.sort Int32.compare a;\n        Array.sort Int32.compare b;\n        a = b);\n  ]\n\n(* ── Math Function Properties ── *)\n\nlet math_function_props =\n  let mk_f32_constrained gen_val =\n    let gen =\n      let open Gen in\n      let* shape = gen_shape ~max_ndim:3 ~max_dim:4 in\n      gen_tensor_with_values Nx.float32 gen_val shape\n    in\n    mk_testable_f32 gen\n  in\n  let f32_small = mk_f32_constrained gen_float_small in\n  let f32_positive = mk_f32_constrained gen_float_positive in\n  let f32_unit = mk_f32_constrained gen_float_unit in\n  let f32_trig = mk_f32_constrained gen_float_trig in\n  let f32_recip = mk_f32_constrained (Gen.float_range 0.1 10.) 
in\n  [\n    prop \"exp/log inverse (f32)\" f32_small (fun x ->\n        assume (all_finite x);\n        allclose ~atol:1e-4 ~rtol:1e-4 (Nx.log (Nx.exp x)) x);\n    prop \"log/exp inverse (f32)\" f32_positive (fun x ->\n        allclose ~atol:1e-4 ~rtol:1e-4 (Nx.exp (Nx.log x)) x);\n    prop \"sin^2 + cos^2 = 1 (f32)\" f32_trig (fun x ->\n        let sum = Nx.add (Nx.square (Nx.sin x)) (Nx.square (Nx.cos x)) in\n        allclose ~atol:1e-4 ~rtol:0. sum (Nx.ones_like x));\n    prop \"sqrt(square(x)) = abs(x) (f32)\" f32_any (fun x ->\n        assume (all_finite x);\n        allclose ~atol:1e-4 ~rtol:1e-4 (Nx.sqrt (Nx.square x)) (Nx.abs x));\n    prop \"abs idempotent (f32)\" f32_any (fun x ->\n        approx_equal (Nx.abs (Nx.abs x)) (Nx.abs x));\n    prop \"sign * abs = x (f32)\" f32_any (fun x ->\n        assume (all_finite x && all_nonzero_f32 x);\n        approx_equal (Nx.mul (Nx.sign x) (Nx.abs x)) x);\n    prop \"tanh range (f32)\" f32_any (fun x ->\n        assume (all_finite x);\n        all_true (Nx.less_equal (Nx.abs (Nx.tanh x)) (Nx.ones_like x)));\n    prop \"recip involution (f32)\" f32_recip (fun x ->\n        allclose ~atol:1e-3 ~rtol:1e-3 (Nx.recip (Nx.recip x)) x);\n    prop \"square = mul self (f32)\" f32_any (fun x ->\n        approx_equal (Nx.square x) (Nx.mul x x));\n    prop \"asin(sin(x)) = x (f32)\" f32_unit (fun x ->\n        (* asin(sin(x)) = x only when x in [-pi/2, pi/2]; use values in (-1,1)\n           which are well within that range when interpreted as radians *)\n        allclose ~atol:1e-4 ~rtol:1e-4 (Nx.asin (Nx.sin x)) x);\n  ]\n\n(* ── Reduction Properties ── *)\n\nlet reduction_props =\n  [\n    prop \"sum of ones = numel (f32)\" f32_any (fun t ->\n        let ones = Nx.ones_like t in\n        let s = Nx.item [] (Nx.sum ones) in\n        Float.abs (s -. Float.of_int (Nx.numel t)) < 1e-5);\n    prop \"prod of ones = 1 (f32)\" f32_any (fun t ->\n        let ones = Nx.ones_like t in\n        Float.abs (Nx.item [] (Nx.prod ones) -. 
1.0) < 1e-5);\n    prop \"mean = sum / numel (f32)\" f32_any (fun t ->\n        assume (Nx.numel t > 0);\n        let m = Nx.item [] (Nx.mean t) in\n        let s = Nx.item [] (Nx.sum t) in\n        let n = Float.of_int (Nx.numel t) in\n        Float.abs (m -. (s /. n)) < 1e-4);\n    prop \"max >= all elements (f32)\" f32_any (fun t ->\n        assume (no_nan t && Nx.numel t > 0);\n        let mx = Nx.max t in\n        all_true (Nx.less_equal t (Nx.broadcast_to (Nx.shape t) mx)));\n    prop \"min <= all elements (f32)\" f32_any (fun t ->\n        assume (no_nan t && Nx.numel t > 0);\n        let mn = Nx.min t in\n        all_true (Nx.greater_equal t (Nx.broadcast_to (Nx.shape t) mn)));\n    prop \"var >= 0 (f32)\" f32_any (fun t ->\n        assume (Nx.numel t > 0);\n        Nx.item [] (Nx.var t) >= 0.0);\n    prop \"sum linearity (f32)\" f32_pair (fun (a, b) ->\n        let lhs = Nx.item [] (Nx.sum (Nx.add a b)) in\n        let rhs = Nx.item [] (Nx.sum a) +. Nx.item [] (Nx.sum b) in\n        Float.abs (lhs -. rhs) < 1e-2);\n    prop \"cumsum last = sum (f32 1d)\" f32_1d (fun t ->\n        assume (all_finite t && Nx.numel t > 0);\n        let cs = Nx.cumsum t in\n        let last = Nx.item [ Nx.numel t - 1 ] cs in\n        let total = Nx.item [] (Nx.sum t) in\n        Float.abs (last -. total) < 1e-3);\n  ]\n\n(* ── Linear Algebra Properties ── *)\n\nlet linalg_props =\n  [\n    prop \"matmul identity (f64)\" square_f64 (fun a ->\n        let n = (Nx.shape a).(0) in\n        let eye = Nx.identity Nx.float64 n in\n        approx_equal ~epsilon:1e-10 (Nx.matmul a eye) a);\n    prop \"transpose matmul (f64)\"\n      (let gen =\n         let open Gen in\n         let* a = gen_square_f64 ~max_n:4 in\n         let n = (Nx.shape a).(0) in\n         let+ b =\n           gen_tensor_with_values Nx.float64 (Gen.float_range (-5.) 
5.)\n             [| n; n |]\n         in\n         (a, b)\n       in\n       Testable.make\n         ~pp:(pp_pair pp_tensor pp_tensor)\n         ~equal:(fun (a1, b1) (a2, b2) ->\n           approx_equal ~epsilon:1e-10 a1 a2\n           && approx_equal ~epsilon:1e-10 b1 b2)\n         ~gen ())\n      (fun (a, b) ->\n        let lhs = Nx.transpose (Nx.matmul a b) in\n        let rhs = Nx.matmul (Nx.transpose b) (Nx.transpose a) in\n        approx_equal ~epsilon:1e-8 lhs rhs);\n    prop \"trace = sum diagonal (f64)\" square_f64 (fun a ->\n        let tr = Nx.item [] (Nx.trace a) in\n        let diag_sum = Nx.item [] (Nx.sum (Nx.diagonal a)) in\n        Float.abs (tr -. diag_sum) < 1e-10);\n    prop \"inv roundtrip (f64 posdef)\" posdef_f64 (fun a ->\n        let n = (Nx.shape a).(0) in\n        let eye = Nx.identity Nx.float64 n in\n        let inv_a = Nx.inv a in\n        allclose ~atol:1e-6 ~rtol:1e-6 (Nx.matmul inv_a a) eye);\n    prop \"qr reconstruction (f64)\" square_f64 (fun a ->\n        let q, r = Nx.qr a in\n        allclose ~atol:1e-6 ~rtol:1e-6 (Nx.matmul q r) a);\n    prop \"svd reconstruction (f64)\" square_f64 (fun a ->\n        let u, s, vh = Nx.svd a in\n        let n = (Nx.shape a).(0) in\n        let s_diag =\n          Nx.mul (Nx.identity Nx.float64 n) (Nx.reshape [| 1; n |] s)\n        in\n        let reconstructed = Nx.matmul (Nx.matmul u s_diag) vh in\n        allclose ~atol:1e-6 ~rtol:1e-6 reconstructed a);\n    prop \"cholesky reconstruction (f64 posdef)\" posdef_f64 (fun a ->\n        let l = Nx.cholesky a in\n        let reconstructed = Nx.matmul l (Nx.transpose l) in\n        allclose ~atol:1e-6 ~rtol:1e-6 reconstructed a);\n    prop \"det of identity = 1\"\n      (Testable.make ~pp:Format.pp_print_int ~equal:Int.equal\n         ~gen:(Gen.int_range 1 6) ())\n      (fun n ->\n        let eye = Nx.identity Nx.float64 n in\n        Float.abs (Nx.item [] (Nx.det eye) -. 
1.0) < 1e-10);\n  ]\n\n(* ── Concatenation Properties ── *)\n\nlet concat_props =\n  [\n    prop \"concat single = identity (f32)\" f32_any (fun t ->\n        approx_equal (Nx.concatenate ~axis:0 [ t ]) t);\n    prop \"concat shape (f32)\" f32_pair (fun (a, b) ->\n        let sa = Nx.shape a and sb = Nx.shape b in\n        assume\n          (Array.length sa = Array.length sb\n          && Array.length sa > 0\n          && Array.sub sa 1 (Array.length sa - 1)\n             = Array.sub sb 1 (Array.length sb - 1));\n        let c = Nx.concatenate ~axis:0 [ a; b ] in\n        (Nx.shape c).(0) = sa.(0) + sb.(0));\n    prop \"stack creates axis (f32)\" f32_pair (fun (a, b) ->\n        assume (Nx.shape a = Nx.shape b);\n        let s = Nx.stack ~axis:0 [ a; b ] in\n        Nx.ndim s = Nx.ndim a + 1 && (Nx.shape s).(0) = 2);\n    prop \"concat/split roundtrip (f32 1d)\" f32_1d (fun t ->\n        let n = Nx.numel t in\n        assume (n >= 2 && n mod 2 = 0);\n        let parts = Nx.split ~axis:0 2 t in\n        approx_equal (Nx.concatenate ~axis:0 parts) t);\n  ]\n\n(* ── Indexing Properties ── *)\n\nlet indexing_props =\n  [\n    prop \"item/set_item roundtrip (f32)\" f32_with_index (fun (t, indices) ->\n        let c = Nx.copy t in\n        let v = 42.0 in\n        Nx.set_item indices v c;\n        Float.equal (Nx.item indices c) v);\n    prop \"get/set roundtrip (f32)\" f32_any (fun t ->\n        assume (Nx.ndim t >= 1);\n        let c = Nx.copy t in\n        let idx = [ 0 ] in\n        let sub = Nx.get idx t in\n        Nx.set idx c sub;\n        approx_equal (Nx.get idx c) sub);\n    prop \"slice A is identity (f32)\" f32_any (fun t ->\n        let spec = List.init (Nx.ndim t) (fun _ -> Nx.A) in\n        approx_equal (Nx.slice spec t) t);\n    prop \"slice full range = identity (f32 1d)\" f32_1d (fun t ->\n        let n = Nx.numel t in\n        approx_equal (Nx.slice [ Nx.R (0, n) ] t) t);\n    prop \"take all indices = identity (f32 1d)\" f32_1d (fun t ->\n        let 
n = Nx.numel t in\n        let indices = Nx.arange Nx.int32 0 n 1 in\n        approx_equal (Nx.take indices t) t);\n    prop \"take indices valid (f32 1d)\" f32_1d_with_take_indices\n      (fun (t, indices) ->\n        let taken = Nx.take indices t in\n        let n_idx = Nx.numel indices in\n        let ok = ref true in\n        for i = 0 to n_idx - 1 do\n          let idx = Int32.to_int (Nx.item [ i ] indices) in\n          let expected = Nx.item [ idx ] t in\n          let actual = Nx.item [ i ] taken in\n          if not (Float.equal expected actual) then ok := false\n        done;\n        !ok);\n    prop \"take_along_axis with argsort = sort (f32 1d)\" f32_1d (fun t ->\n        assume (no_nan t);\n        let sorted, _ = Nx.sort t in\n        let arg_indices = Nx.argsort t in\n        let gathered = Nx.take_along_axis ~axis:0 arg_indices t in\n        approx_equal gathered sorted);\n    prop \"extract preserves count (f32)\" f32_with_mask (fun (t, mask) ->\n        let extracted = Nx.extract ~condition:mask t in\n        let n_true =\n          let flat = Nx.flatten mask in\n          let count = ref 0 in\n          for i = 0 to Nx.numel flat - 1 do\n            if Nx.item [ i ] flat then incr count\n          done;\n          !count\n        in\n        Nx.numel extracted = n_true);\n    prop \"set_slice/slice roundtrip (f32)\" f32_any (fun t ->\n        assume (Nx.ndim t >= 1 && (Nx.shape t).(0) >= 1);\n        let spec = [ Nx.R (0, 1) ] in\n        let sub = Nx.slice spec t in\n        let c = Nx.copy t in\n        Nx.set_slice spec c sub;\n        approx_equal c t);\n    prop \"nonzero indices are valid (i32 1d)\" i32_1d (fun t ->\n        let nz = Nx.nonzero t in\n        let indices = nz.(0) in\n        let n = Nx.numel t in\n        let ok = ref true in\n        for i = 0 to Nx.numel indices - 1 do\n          let idx = Int32.to_int (Nx.item [ i ] indices) in\n          if idx < 0 || idx >= n then ok := false\n          else if Int32.equal (Nx.item [ 
idx ] t) 0l then ok := false\n        done;\n        !ok);\n  ]\n\n(* ── Broadcasting Properties ── *)\n\nlet broadcasting_props =\n  [\n    prop \"broadcast_to idempotent (f32)\" f32_with_broadcast_shape\n      (fun (t, target) ->\n        let b = Nx.broadcast_to target t in\n        approx_equal (Nx.broadcast_to target b) b);\n    prop \"broadcast_to preserves values (f32)\" f32_with_broadcast_shape\n      (fun (t, target) ->\n        let b = Nx.broadcast_to target t in\n        (* Every element in broadcast result must exist in original *)\n        let orig_vals = Nx.to_array (Nx.flatten (Nx.contiguous t)) in\n        let bc_vals = Nx.to_array (Nx.flatten (Nx.contiguous b)) in\n        Array.for_all\n          (fun v -> Array.exists (fun o -> Float.equal v o) orig_vals)\n          bc_vals);\n    prop \"broadcasted common shape (f32)\" f32_broadcastable_pair (fun (a, b) ->\n        let a', b' = Nx.broadcasted a b in\n        Nx.shape a' = Nx.shape b');\n    prop \"broadcasted symmetric shape (f32)\" f32_broadcastable_pair\n      (fun (a, b) ->\n        let a1, _ = Nx.broadcasted a b in\n        let _, b2 = Nx.broadcasted b a in\n        Nx.shape a1 = Nx.shape b2);\n    prop \"broadcast scalar to any shape (f32)\" f32_any (fun t ->\n        let v = 3.0 in\n        let s = Nx.scalar Nx.float32 v in\n        let b = Nx.broadcast_to (Nx.shape t) s in\n        Nx.shape b = Nx.shape t && all_true (Nx.equal b (Nx.full_like t v)));\n    prop \"add with broadcast = add after broadcast (f32)\" f32_broadcastable_pair\n      (fun (a, b) ->\n        let result = Nx.add a b in\n        let a', b' = Nx.broadcasted a b in\n        let result2 = Nx.add a' b' in\n        approx_equal result result2);\n    prop \"expand_dims/squeeze roundtrip (f32)\" f32_any (fun t ->\n        let expanded = Nx.expand_dims [ 0 ] t in\n        let squeezed = Nx.squeeze ~axes:[ 0 ] expanded in\n        approx_equal squeezed t);\n    prop \"broadcast_arrays consistent with broadcasted (f32)\"\n      
f32_broadcastable_pair (fun (a, b) ->\n        let arr = Nx.broadcast_arrays [ a; b ] in\n        let a', b' = Nx.broadcasted a b in\n        approx_equal (List.nth arr 0) a' && approx_equal (List.nth arr 1) b');\n  ]\n\n(* ── Einsum Equivalence Properties ── *)\n\nlet einsum_props =\n  let mk_f32_matmul_pair =\n    let gen =\n      let open Gen in\n      let* m = int_range 1 6 in\n      let* n = int_range 1 6 in\n      let* k = int_range 1 6 in\n      let+ a = gen_f32 [| m; n |] and+ b = gen_f32 [| n; k |] in\n      (a, b)\n    in\n    Testable.make\n      ~pp:(pp_pair pp_tensor pp_tensor)\n      ~equal:(fun (a1, b1) (a2, b2) -> approx_equal a1 a2 && approx_equal b1 b2)\n      ~gen ()\n  in\n  let mk_f32_1d_pair =\n    let gen =\n      let open Gen in\n      let* n = int_range 1 10 in\n      let+ a = gen_f32 [| n |] and+ b = gen_f32 [| n |] in\n      (a, b)\n    in\n    Testable.make\n      ~pp:(pp_pair pp_tensor pp_tensor)\n      ~equal:(fun (a1, b1) (a2, b2) -> approx_equal a1 a2 && approx_equal b1 b2)\n      ~gen ()\n  in\n  let mk_f32_outer_pair =\n    let gen =\n      let open Gen in\n      let* m = int_range 1 8 in\n      let* n = int_range 1 8 in\n      let+ a = gen_f32 [| m |] and+ b = gen_f32 [| n |] in\n      (a, b)\n    in\n    Testable.make\n      ~pp:(pp_pair pp_tensor pp_tensor)\n      ~equal:(fun (a1, b1) (a2, b2) -> approx_equal a1 a2 && approx_equal b1 b2)\n      ~gen ()\n  in\n  [\n    (* einsum matmul = Nx.matmul *)\n    prop \"einsum ij,jk->ik = matmul\" mk_f32_matmul_pair (fun (a, b) ->\n        let via_einsum = Nx.einsum \"ij,jk->ik\" [| a; b |] in\n        let via_matmul = Nx.matmul a b in\n        allclose ~atol:1e-4 ~rtol:1e-4 via_einsum via_matmul);\n    (* einsum transpose = Nx.transpose *)\n    prop \"einsum ij->ji = transpose\" f32_2d (fun a ->\n        let via_einsum = Nx.einsum \"ij->ji\" [| a |] in\n        let via_transpose = Nx.transpose a in\n        approx_equal via_einsum via_transpose);\n    (* einsum trace = Nx.trace *)\n    
prop \"einsum ii-> = trace\" square_f64 (fun a ->\n        let via_einsum = Nx.item [] (Nx.einsum \"ii->\" [| a |]) in\n        let via_trace = Nx.item [] (Nx.trace a) in\n        Float.abs (via_einsum -. via_trace) < 1e-10);\n    (* einsum diagonal = Nx.diagonal *)\n    prop \"einsum ii->i = diagonal\" square_f64 (fun a ->\n        let via_einsum = Nx.einsum \"ii->i\" [| a |] in\n        let via_diagonal = Nx.diagonal a in\n        approx_equal ~epsilon:1e-10 via_einsum via_diagonal);\n    (* einsum dot product = sum of elementwise mul *)\n    prop \"einsum i,i-> = dot\" mk_f32_1d_pair (fun (a, b) ->\n        let via_einsum = Nx.item [] (Nx.einsum \"i,i->\" [| a; b |]) in\n        let via_sum_mul = Nx.item [] (Nx.sum (Nx.mul a b)) in\n        Float.abs (via_einsum -. via_sum_mul) < 1e-3);\n    (* einsum outer product *)\n    prop \"einsum i,j->ij = outer\" mk_f32_outer_pair (fun (a, b) ->\n        let via_einsum = Nx.einsum \"i,j->ij\" [| a; b |] in\n        let via_outer =\n          Nx.mul\n            (Nx.reshape [| Nx.numel a; 1 |] a)\n            (Nx.reshape [| 1; Nx.numel b |] b)\n        in\n        allclose ~atol:1e-4 ~rtol:1e-4 via_einsum via_outer);\n    (* einsum total sum = Nx.sum *)\n    prop \"einsum ij-> = sum\" f32_2d (fun a ->\n        let via_einsum = Nx.item [] (Nx.einsum \"ij->\" [| a |]) in\n        let via_sum = Nx.item [] (Nx.sum a) in\n        Float.abs (via_einsum -. 
via_sum) < 1e-3);\n    (* einsum row sum = sum axis 1 *)\n    prop \"einsum ij->i = sum axis 1\" f32_2d (fun a ->\n        let via_einsum = Nx.einsum \"ij->i\" [| a |] in\n        let via_sum = Nx.sum ~axes:[ 1 ] a in\n        allclose ~atol:1e-4 ~rtol:1e-4 via_einsum via_sum);\n    (* einsum col sum = sum axis 0 *)\n    prop \"einsum ij->j = sum axis 0\" f32_2d (fun a ->\n        let via_einsum = Nx.einsum \"ij->j\" [| a |] in\n        let via_sum = Nx.sum ~axes:[ 0 ] a in\n        allclose ~atol:1e-4 ~rtol:1e-4 via_einsum via_sum);\n    (* einsum hadamard = elementwise mul *)\n    prop \"einsum i,i->i = mul\" mk_f32_1d_pair (fun (a, b) ->\n        let via_einsum = Nx.einsum \"i,i->i\" [| a; b |] in\n        let via_mul = Nx.mul a b in\n        approx_equal via_einsum via_mul);\n    (* einsum Frobenius inner product *)\n    prop \"einsum ij,ij-> = sum(mul)\"\n      (let gen =\n         let open Gen in\n         let* shape = gen_shape_2d ~max_dim:5 in\n         let+ a = gen_f32 shape and+ b = gen_f32 shape in\n         (a, b)\n       in\n       Testable.make\n         ~pp:(pp_pair pp_tensor pp_tensor)\n         ~equal:(fun (a1, b1) (a2, b2) ->\n           approx_equal a1 a2 && approx_equal b1 b2)\n         ~gen ())\n      (fun (a, b) ->\n        let via_einsum = Nx.item [] (Nx.einsum \"ij,ij->\" [| a; b |]) in\n        let via_sum_mul = Nx.item [] (Nx.sum (Nx.mul a b)) in\n        Float.abs (via_einsum -. 
via_sum_mul) < 1e-3);\n    (* einsum matvec = matmul with reshaped vector *)\n    prop \"einsum ij,j->i = matvec\"\n      (let gen =\n         let open Gen in\n         let* m = int_range 1 6 in\n         let* n = int_range 1 6 in\n         let+ a = gen_f32 [| m; n |] and+ b = gen_f32 [| n |] in\n         (a, b)\n       in\n       Testable.make\n         ~pp:(pp_pair pp_tensor pp_tensor)\n         ~equal:(fun (a1, b1) (a2, b2) ->\n           approx_equal a1 a2 && approx_equal b1 b2)\n         ~gen ())\n      (fun (a, b) ->\n        let via_einsum = Nx.einsum \"ij,j->i\" [| a; b |] in\n        let via_matmul =\n          Nx.reshape\n            [| (Nx.shape a).(0) |]\n            (Nx.matmul a (Nx.reshape [| Nx.numel b; 1 |] b))\n        in\n        allclose ~atol:1e-4 ~rtol:1e-4 via_einsum via_matmul);\n  ]\n\n(* ── Stress Tests: Strided Views, Non-Contiguous Ops, High Rank ── *)\n\nlet stress_config =\n  { Windtrap_prop.Prop.default_config with count = 500; max_gen = 1500 }\n\nlet stress_props =\n  [\n    (* Transpose then slice, verify data integrity *)\n    prop ~config:stress_config \"transpose+slice preserves data (f32)\"\n      f32_2d_plus (fun t ->\n        let tr = Nx.transpose t in\n        let spec = List.init (Nx.ndim tr) (fun _ -> Nx.A) in\n        let sliced = Nx.slice spec tr in\n        approx_equal (Nx.contiguous sliced) (Nx.contiguous tr));\n    (* Transpose+slice then flatten vs direct flatten of transpose *)\n    prop ~config:stress_config\n      \"transpose+contiguous = contiguous+transpose data (f32)\" f32_2d_plus\n      (fun t ->\n        let a = Nx.to_array (Nx.contiguous (Nx.transpose t)) in\n        let b = Nx.to_array (Nx.transpose t |> Nx.contiguous) in\n        a = b);\n    (* Slice a non-trivial range after transpose, check item access *)\n    prop ~config:stress_config \"item on transposed view (f32)\" f32_2d_plus\n      (fun t ->\n        let s = Nx.shape t in\n        let tr = Nx.transpose t in\n        let ts = Nx.shape tr in\n       
 (* item [0, ..., 0] of transpose should equal item [0, ..., 0] of\n           original since both index the same element *)\n        let zeros_orig = List.init (Array.length s) (fun _ -> 0) in\n        let zeros_tr = List.init (Array.length ts) (fun _ -> 0) in\n        Float.equal (Nx.item zeros_orig t) (Nx.item zeros_tr tr));\n    (* Flip + slice: flip is a strided view, slicing it compounds strides *)\n    prop ~config:stress_config \"flip+slice data integrity (f32)\" f32_2d_plus\n      (fun t ->\n        let flipped = Nx.flip t in\n        let spec = [ Nx.R (0, (Nx.shape flipped).(0)) ] in\n        let sliced = Nx.slice spec flipped in\n        approx_equal (Nx.contiguous sliced) (Nx.contiguous flipped));\n    (* Double transpose on high-rank tensor *)\n    prop ~config:stress_config \"double transpose high rank (f32)\" f32_stress\n      (fun t ->\n        assume (Nx.ndim t >= 2);\n        approx_equal (Nx.transpose (Nx.transpose t)) t);\n    (* Contiguous on strided views: transpose then contiguous should equal copy\n       of transpose *)\n    prop ~config:stress_config \"contiguous of strided view (f32)\" f32_2d_plus\n      (fun t ->\n        let tr = Nx.transpose t in\n        let c = Nx.contiguous tr in\n        Nx.is_c_contiguous c && approx_equal c tr);\n    (* Arithmetic on non-contiguous views *)\n    prop ~config:stress_config \"add on transposed views (f32)\" f32_stress_pair\n      (fun (a, b) ->\n        assume (Nx.ndim a >= 2);\n        let at = Nx.transpose a in\n        let bt = Nx.transpose b in\n        let sum_then_transpose = Nx.transpose (Nx.add a b) in\n        let transpose_then_sum = Nx.add at bt in\n        approx_equal sum_then_transpose transpose_then_sum);\n    (* Reduction on transposed view *)\n    prop ~config:stress_config \"sum of transpose = sum of original (f32)\"\n      f32_stress (fun t ->\n        assume (all_finite t);\n        let s1 = Nx.item [] (Nx.sum t) in\n        let s2 = Nx.item [] (Nx.sum (Nx.transpose t)) in\n     
   Float.abs (s1 -. s2) < 1e-2);\n    (* Broadcasting + arithmetic on high-rank tensors *)\n    prop ~config:stress_config \"mul broadcast high rank (f32)\"\n      f32_broadcastable_stress (fun (a, b) ->\n        let result = Nx.mul a b in\n        let a', b' = Nx.broadcasted a b in\n        approx_equal result (Nx.mul a' b'));\n    (* Slice with step on high-rank tensor *)\n    prop ~config:stress_config \"slice with step roundtrip (f32)\" f32_stress\n      (fun t ->\n        assume (Nx.ndim t >= 1 && (Nx.shape t).(0) >= 2);\n        let dim0 = (Nx.shape t).(0) in\n        let sliced = Nx.slice [ Nx.Rs (0, dim0, 2) ] t in\n        let expected_len = (dim0 + 1) / 2 in\n        (Nx.shape sliced).(0) = expected_len && Nx.ndim sliced = Nx.ndim t);\n    (* Copy of a strided view preserves data *)\n    prop ~config:stress_config \"copy strided view (f32)\" f32_2d_plus (fun t ->\n        let tr = Nx.transpose t in\n        let c = Nx.copy tr in\n        approx_equal c tr && Nx.is_c_contiguous c);\n    (* Reshape after contiguous on strided view *)\n    prop ~config:stress_config \"reshape contiguous strided (f32)\" f32_2d_plus\n      (fun t ->\n        let tr = Nx.contiguous (Nx.transpose t) in\n        let flat = Nx.reshape [| Nx.numel t |] tr in\n        Nx.numel flat = Nx.numel t && Nx.to_array flat = Nx.to_array tr);\n  ]\n\n(* ── Suite ── *)\n\nlet () =\n  run \"Nx Properties\"\n    [\n      group \"Arithmetic\" arithmetic_props;\n      group \"Shape\" shape_props;\n      group \"Comparison\" comparison_props;\n      group \"Logical & Bitwise\" logical_bitwise_props;\n      group \"Rounding\" rounding_props;\n      group \"Sorting\" sorting_props;\n      group \"Math Functions\" math_function_props;\n      group \"Reductions\" reduction_props;\n      group \"Linear Algebra\" linalg_props;\n      group \"Concatenation\" concat_props;\n      group \"Indexing\" indexing_props;\n      group \"Broadcasting\" broadcasting_props;\n      group \"Einsum\" einsum_props;\n     
 group \"Stress Tests\" stress_props;\n    ]\n"
  },
  {
    "path": "packages/nx/test/props/test_nx_props_support.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Generator infrastructure and helpers for Nx property tests. *)\n\nopen Windtrap\nmodule Gen = Windtrap.Gen\n\n(* ── Shape Generators ── *)\n\nlet gen_shape ~max_ndim ~max_dim =\n  let open Gen in\n  let* ndim = int_range 1 max_ndim in\n  let+ dims = list_size (pure ndim) (int_range 1 max_dim) in\n  Array.of_list dims\n\nlet gen_shape_2d ~max_dim =\n  let open Gen in\n  let+ r = int_range 1 max_dim and+ c = int_range 1 max_dim in\n  [| r; c |]\n\n(* ── Scalar Value Generators ── *)\n\nlet gen_float_safe = Gen.float_range (-10.) 10.\nlet gen_float_positive = Gen.float_range 0.01 10.\nlet gen_float_unit = Gen.float_range (-0.999) 0.999\nlet gen_float_trig = Gen.float_range (-.Float.pi) Float.pi\nlet gen_float_small = Gen.float_range (-5.) 
5.\n\nlet gen_int32_safe =\n  let open Gen in\n  let+ n = int_range (-100) 100 in\n  Int32.of_int n\n\nlet gen_int32_nonzero =\n  Gen.oneof\n    [\n      Gen.map Int32.of_int (Gen.int_range (-100) (-1));\n      Gen.map Int32.of_int (Gen.int_range 1 100);\n    ]\n\n(* ── Tensor Generators ── *)\n\nlet gen_tensor_with_values (type a b) (dtype : (a, b) Nx.dtype)\n    (gen_val : a Gen.t) (shape : int array) =\n  let size = Array.fold_left ( * ) 1 shape in\n  let open Gen in\n  let+ data = list_size (pure size) gen_val in\n  Nx.create dtype shape (Array.of_list data)\n\nlet gen_f32 shape = gen_tensor_with_values Nx.float32 gen_float_safe shape\n\nlet gen_f32_positive shape =\n  gen_tensor_with_values Nx.float32 gen_float_positive shape\n\nlet gen_f32_unit shape = gen_tensor_with_values Nx.float32 gen_float_unit shape\nlet gen_f32_trig shape = gen_tensor_with_values Nx.float32 gen_float_trig shape\n\nlet gen_f32_small shape =\n  gen_tensor_with_values Nx.float32 gen_float_small shape\n\nlet gen_i32 shape = gen_tensor_with_values Nx.int32 gen_int32_safe shape\n\nlet gen_i32_nonzero shape =\n  gen_tensor_with_values Nx.int32 gen_int32_nonzero shape\n\n(* Tensor with random shape *)\nlet gen_f32_any =\n  let open Gen in\n  let* shape = gen_shape ~max_ndim:3 ~max_dim:5 in\n  gen_f32 shape\n\nlet gen_i32_any =\n  let open Gen in\n  let* shape = gen_shape ~max_ndim:3 ~max_dim:5 in\n  gen_i32 shape\n\n(* 2D tensor *)\nlet gen_f32_2d =\n  let open Gen in\n  let* shape = gen_shape_2d ~max_dim:5 in\n  gen_f32 shape\n\n(* Same-shape pairs *)\nlet gen_f32_pair =\n  let open Gen in\n  let* shape = gen_shape ~max_ndim:3 ~max_dim:4 in\n  let+ a = gen_f32 shape and+ b = gen_f32 shape in\n  (a, b)\n\nlet gen_i32_pair =\n  let open Gen in\n  let* shape = gen_shape ~max_ndim:3 ~max_dim:4 in\n  let+ a = gen_i32 shape and+ b = gen_i32 shape in\n  (a, b)\n\nlet gen_i32_pair_b_nonzero =\n  let open Gen in\n  let* shape = gen_shape ~max_ndim:3 ~max_dim:4 in\n  let+ a = gen_i32 shape and+ b = 
gen_i32_nonzero shape in\n  (a, b)\n\n(* Same-shape triples *)\nlet gen_f32_triple =\n  let open Gen in\n  let* shape = gen_shape ~max_ndim:3 ~max_dim:3 in\n  let+ a = gen_f32 shape and+ b = gen_f32 shape and+ c = gen_f32 shape in\n  (a, b, c)\n\nlet gen_i32_triple =\n  let open Gen in\n  let* shape = gen_shape ~max_ndim:3 ~max_dim:3 in\n  let+ a = gen_i32 shape and+ b = gen_i32 shape and+ c = gen_i32 shape in\n  (a, b, c)\n\n(* Square matrices (float64 for linalg stability) *)\nlet gen_square_f64 ~max_n =\n  let open Gen in\n  let* n = int_range 2 max_n in\n  gen_tensor_with_values Nx.float64 (Gen.float_range (-5.) 5.) [| n; n |]\n\n(* Positive definite matrix via A^T A + εI *)\nlet gen_posdef_f64 ~max_n =\n  let open Gen in\n  let+ a = gen_square_f64 ~max_n in\n  let n = (Nx.shape a).(0) in\n  let at = Nx.transpose a in\n  let ata = Nx.matmul at a in\n  let eps_i = Nx.mul_s (Nx.identity Nx.float64 n) 0.1 in\n  Nx.add ata eps_i\n\n(* 1D float32 for sorting *)\nlet gen_f32_1d =\n  let open Gen in\n  let* len = int_range 1 20 in\n  gen_f32 [| len |]\n\nlet gen_i32_1d =\n  let open Gen in\n  let* len = int_range 1 20 in\n  gen_i32 [| len |]\n\n(* ── Testable Wrappers ── *)\n\nlet pp_tensor fmt t = Format.fprintf fmt \"%s\" (Nx.to_string t)\n\nlet approx_equal (type b) ?(epsilon = 1e-5) (a : (float, b) Nx.t)\n    (b : (float, b) Nx.t) =\n  if Nx.shape a <> Nx.shape b then false\n  else\n    let diff = Nx.sub a b in\n    let abs_diff = Nx.abs diff in\n    let max_diff = Nx.item [] (Nx.max abs_diff) in\n    max_diff < epsilon\n\nlet exact_equal (type a b) (x : (a, b) Nx.t) (y : (a, b) Nx.t) =\n  Nx.shape x = Nx.shape y && Nx.item [] (Nx.array_equal x y)\n\nlet mk_testable_f32 gen =\n  Testable.make ~pp:pp_tensor\n    ~equal:(fun a b -> approx_equal ~epsilon:1e-5 a b)\n    ~gen ()\n\nlet mk_testable_f32_tol ~epsilon gen =\n  Testable.make ~pp:pp_tensor\n    ~equal:(fun a b -> approx_equal ~epsilon a b)\n    ~gen ()\n\nlet mk_testable_i32 gen = Testable.make ~pp:pp_tensor 
~equal:exact_equal ~gen ()\n\nlet mk_testable_f64 gen =\n  Testable.make ~pp:pp_tensor\n    ~equal:(fun a b -> approx_equal ~epsilon:1e-10 a b)\n    ~gen ()\n\n(* Single tensor testables *)\nlet f32_any = mk_testable_f32 gen_f32_any\nlet i32_any = mk_testable_i32 gen_i32_any\nlet f32_2d = mk_testable_f32 gen_f32_2d\n\n(* Pair testables *)\nlet pp_pair pp1 pp2 fmt (a, b) = Format.fprintf fmt \"(%a, %a)\" pp1 a pp2 b\n\nlet pp_triple pp1 pp2 pp3 fmt (a, b, c) =\n  Format.fprintf fmt \"(%a, %a, %a)\" pp1 a pp2 b pp3 c\n\nlet f32_pair =\n  Testable.make\n    ~pp:(pp_pair pp_tensor pp_tensor)\n    ~equal:(fun (a1, b1) (a2, b2) -> approx_equal a1 a2 && approx_equal b1 b2)\n    ~gen:gen_f32_pair ()\n\nlet i32_pair =\n  Testable.make\n    ~pp:(pp_pair pp_tensor pp_tensor)\n    ~equal:(fun (a1, b1) (a2, b2) -> exact_equal a1 a2 && exact_equal b1 b2)\n    ~gen:gen_i32_pair ()\n\nlet i32_pair_b_nonzero =\n  Testable.make\n    ~pp:(pp_pair pp_tensor pp_tensor)\n    ~equal:(fun (a1, b1) (a2, b2) -> exact_equal a1 a2 && exact_equal b1 b2)\n    ~gen:gen_i32_pair_b_nonzero ()\n\nlet i32_triple =\n  Testable.make\n    ~pp:(pp_triple pp_tensor pp_tensor pp_tensor)\n    ~equal:(fun (a1, b1, c1) (a2, b2, c2) ->\n      exact_equal a1 a2 && exact_equal b1 b2 && exact_equal c1 c2)\n    ~gen:gen_i32_triple ()\n\nlet f32_1d = mk_testable_f32 gen_f32_1d\nlet i32_1d = mk_testable_i32 gen_i32_1d\nlet square_f64 = mk_testable_f64 (gen_square_f64 ~max_n:4)\nlet posdef_f64 = mk_testable_f64 (gen_posdef_f64 ~max_n:4)\n\n(* ── Indexing Generators ── *)\n\n(* Tensor + valid item indices (one per dimension) *)\nlet gen_f32_with_index =\n  let open Gen in\n  let* shape = gen_shape ~max_ndim:3 ~max_dim:5 in\n  let* t = gen_f32 shape in\n  let ndim = Array.length shape in\n  let rec gen_indices i acc =\n    if i >= ndim then pure (List.rev acc)\n    else\n      let* idx = int_range 0 (shape.(i) - 1) in\n      gen_indices (i + 1) (idx :: acc)\n  in\n  let+ indices = gen_indices 0 [] in\n  (t, 
indices)\n\n(* 1D f32 tensor + i32 index tensor with valid indices *)\nlet gen_f32_1d_with_take_indices =\n  let open Gen in\n  let* len = int_range 1 10 in\n  let* t = gen_f32 [| len |] in\n  let* num_indices = int_range 1 8 in\n  let+ idx_list =\n    list_size (pure num_indices) (map Int32.of_int (int_range 0 (len - 1)))\n  in\n  let indices = Nx.create Nx.int32 [| num_indices |] (Array.of_list idx_list) in\n  (t, indices)\n\n(* Tensor + boolean mask of same shape *)\nlet gen_f32_with_mask =\n  let open Gen in\n  let* shape = gen_shape ~max_ndim:3 ~max_dim:4 in\n  let size = Array.fold_left ( * ) 1 shape in\n  let* t = gen_f32 shape in\n  let+ bools = list_size (pure size) bool in\n  let mask = Nx.create Nx.bool shape (Array.of_list bools) in\n  (t, mask)\n\n(* ── Broadcasting Generators ── *)\n\n(* Generate a broadcastable shape pair. Strategy: generate a \"result\" shape,\n   then for each dim, choose whether it comes from a (b gets 1), from b (a gets\n   1), or both (same value). *)\nlet gen_broadcastable_shapes =\n  let open Gen in\n  let* ndim = int_range 1 3 in\n  let* dims = list_size (pure ndim) (int_range 1 5) in\n  let result_shape = Array.of_list dims in\n  let+ choices = list_size (pure ndim) (int_range 0 2) in\n  let shape_a = Array.copy result_shape in\n  let shape_b = Array.copy result_shape in\n  List.iteri\n    (fun i choice ->\n      match choice with\n      | 0 -> shape_b.(i) <- 1 (* a has the dim, b broadcasts *)\n      | 1 -> shape_a.(i) <- 1 (* b has the dim, a broadcasts *)\n      | _ -> () (* both have the dim *))\n    choices;\n  (shape_a, shape_b)\n\n(* Two tensors with broadcastable shapes *)\nlet gen_f32_broadcastable_pair =\n  let open Gen in\n  let* shape_a, shape_b = gen_broadcastable_shapes in\n  let+ a = gen_f32 shape_a and+ b = gen_f32 shape_b in\n  (a, b)\n\n(* Tensor with some dims of size 1 + valid broadcast target shape *)\nlet gen_f32_with_broadcast_shape =\n  let open Gen in\n  let* ndim = int_range 1 3 in\n  let* dims = 
list_size (pure ndim) (int_range 1 5) in\n  let target = Array.of_list dims in\n  (* Build source shape: randomly set some dims to 1 *)\n  let* which_ones = list_size (pure ndim) bool in\n  let source =\n    Array.mapi (fun i d -> if List.nth which_ones i then 1 else d) target\n  in\n  let+ t = gen_f32 source in\n  (t, target)\n\n(* ── Indexing & Broadcasting Testables ── *)\n\nlet pp_int_list fmt l =\n  Format.fprintf fmt \"[%s]\" (String.concat \"; \" (List.map string_of_int l))\n\nlet pp_int_array fmt a =\n  Format.fprintf fmt \"[|%s|]\"\n    (String.concat \"; \" (Array.to_list (Array.map string_of_int a)))\n\nlet f32_with_index =\n  Testable.make\n    ~pp:(pp_pair pp_tensor pp_int_list)\n    ~equal:(fun (t1, i1) (t2, i2) -> approx_equal t1 t2 && i1 = i2)\n    ~gen:gen_f32_with_index ()\n\nlet f32_1d_with_take_indices =\n  Testable.make\n    ~pp:(pp_pair pp_tensor pp_tensor)\n    ~equal:(fun (t1, i1) (t2, i2) -> approx_equal t1 t2 && exact_equal i1 i2)\n    ~gen:gen_f32_1d_with_take_indices ()\n\nlet f32_with_mask =\n  Testable.make\n    ~pp:(pp_pair pp_tensor pp_tensor)\n    ~equal:(fun (t1, m1) (t2, m2) ->\n      approx_equal t1 t2 && Nx.shape m1 = Nx.shape m2)\n    ~gen:gen_f32_with_mask ()\n\nlet f32_broadcastable_pair =\n  Testable.make\n    ~pp:(pp_pair pp_tensor pp_tensor)\n    ~equal:(fun (a1, b1) (a2, b2) -> approx_equal a1 a2 && approx_equal b1 b2)\n    ~gen:gen_f32_broadcastable_pair ()\n\nlet f32_with_broadcast_shape =\n  Testable.make\n    ~pp:(pp_pair pp_tensor pp_int_array)\n    ~equal:(fun (t1, s1) (t2, s2) -> approx_equal t1 t2 && s1 = s2)\n    ~gen:gen_f32_with_broadcast_shape ()\n\n(* ── Stress-Test Generators ── *)\n\n(* Higher-rank tensors with larger dims for stress testing *)\nlet gen_f32_stress =\n  let open Gen in\n  let* shape = gen_shape ~max_ndim:5 ~max_dim:8 in\n  gen_f32 shape\n\nlet f32_stress = mk_testable_f32 gen_f32_stress\n\nlet gen_f32_stress_pair =\n  let open Gen in\n  let* shape = gen_shape ~max_ndim:4 ~max_dim:6 in\n  
let+ a = gen_f32 shape and+ b = gen_f32 shape in\n  (a, b)\n\nlet f32_stress_pair =\n  Testable.make\n    ~pp:(pp_pair pp_tensor pp_tensor)\n    ~equal:(fun (a1, b1) (a2, b2) -> approx_equal a1 a2 && approx_equal b1 b2)\n    ~gen:gen_f32_stress_pair ()\n\n(* 2D+ tensor for transpose+slice combos *)\nlet gen_f32_2d_plus =\n  let open Gen in\n  let* ndim = int_range 2 4 in\n  let* dims = list_size (pure ndim) (int_range 2 6) in\n  gen_f32 (Array.of_list dims)\n\nlet f32_2d_plus = mk_testable_f32 gen_f32_2d_plus\n\n(* Broadcastable pair with higher ranks *)\nlet gen_broadcastable_shapes_stress =\n  let open Gen in\n  let* ndim = int_range 2 5 in\n  let* dims = list_size (pure ndim) (int_range 1 6) in\n  let result_shape = Array.of_list dims in\n  let+ choices = list_size (pure ndim) (int_range 0 2) in\n  let shape_a = Array.copy result_shape in\n  let shape_b = Array.copy result_shape in\n  List.iteri\n    (fun i choice ->\n      match choice with\n      | 0 -> shape_b.(i) <- 1\n      | 1 -> shape_a.(i) <- 1\n      | _ -> ())\n    choices;\n  (shape_a, shape_b)\n\nlet gen_f32_broadcastable_stress =\n  let open Gen in\n  let* shape_a, shape_b = gen_broadcastable_shapes_stress in\n  let+ a = gen_f32 shape_a and+ b = gen_f32 shape_b in\n  (a, b)\n\nlet f32_broadcastable_stress =\n  Testable.make\n    ~pp:(pp_pair pp_tensor pp_tensor)\n    ~equal:(fun (a1, b1) (a2, b2) -> approx_equal a1 a2 && approx_equal b1 b2)\n    ~gen:gen_f32_broadcastable_stress ()\n\n(* ── Helper Predicates ── *)\n\nlet no_nan (type b) (t : (float, b) Nx.t) =\n  not (Nx.item [] (Nx.any (Nx.isnan t)))\n\nlet all_finite (type b) (t : (float, b) Nx.t) =\n  Nx.item [] (Nx.all (Nx.isfinite t))\n\nlet all_nonzero_f32 (type b) (t : (float, b) Nx.t) =\n  let zeros = Nx.zeros_like t in\n  not (Nx.item [] (Nx.any (Nx.equal t zeros)))\n\nlet allclose (type b) ?(atol = 1e-5) ?(rtol = 1e-5) (a : (float, b) Nx.t)\n    (b : (float, b) Nx.t) =\n  if Nx.shape a <> Nx.shape b then false\n  else\n    let diff = 
Nx.abs (Nx.sub a b) in\n    let tol = Nx.add_s (Nx.mul_s (Nx.abs b) rtol) atol in\n    Nx.item [] (Nx.all (Nx.less_equal diff tol))\n\nlet all_true (type b) (t : (bool, b) Nx.t) = Nx.item [] (Nx.all t)\n"
  },
  {
    "path": "packages/nx/test/test_nx_basics.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Core functionality tests for Nx - creation, indexing, properties *)\n\nopen Windtrap\nopen Test_nx_support\n\n(* ───── Creation Edge Case Tests ───── *)\n\nlet test_create_1d_int32 () =\n  let t = Nx.create Nx.int32 [| 3 |] [| 1l; 2l; 3l |] in\n  check_t \"create 1D int32\" [| 3 |] [| 1l; 2l; 3l |] t\n\nlet test_create_empty_float32 () =\n  let t = Nx.create Nx.float32 [| 0 |] [||] in\n  check_shape \"empty shape\" [| 0 |] t\n\nlet test_create_2x2x2_float32 () =\n  let t = Nx.create Nx.float32 [| 2; 2; 2 |] (Array.init 8 float_of_int) in\n  check_t \"create 2x2x2\" [| 2; 2; 2 |] [| 0.; 1.; 2.; 3.; 4.; 5.; 6.; 7. |] t\n\nlet test_scalar_float32 () =\n  let t = Nx.scalar Nx.float32 42.0 in\n  check_t \"scalar float32\" [||] [| 42.0 |] t\n\nlet test_scalar_int64 () =\n  let t = Nx.scalar Nx.int64 100L in\n  check_t \"scalar int64\" [||] [| 100L |] t\n\nlet test_create_int16 () =\n  let t = Nx.create Nx.int16 [| 4 |] [| 1; 2; 3; 4 |] in\n  check_t \"create int16\" [| 4 |] [| 1; 2; 3; 4 |] t\n\nlet test_create_empty_shapes () =\n  (* Empty 1D array *)\n  let t1 = Nx.create Nx.float32 [| 0 |] [||] in\n  check_shape \"empty 1D shape\" [| 0 |] t1;\n\n  (* Empty multi-dimensional arrays *)\n  let t2 = Nx.create Nx.float32 [| 0; 5 |] [||] in\n  check_shape \"empty 2D shape [0,5]\" [| 0; 5 |] t2;\n\n  let t3 = Nx.create Nx.float32 [| 5; 0; 3 |] [||] in\n  check_shape \"empty 3D shape [5,0,3]\" [| 5; 0; 3 |] t3\n\nlet test_create_max_rank () =\n  (* Create array with many dimensions but small total size *)\n  (* Use shape like [1, 1, 1, ..., 2, 2, 2] to keep total size manageable *)\n  let shape = Array.init 32 (fun i -> if i < 29 then 1 else 2) in\n  let data_size = Array.fold_left ( * ) 1 shape in\n  (* = 
8 total elements *)\n  let data = Array.init data_size float_of_int in\n  let t = Nx.create Nx.float32 shape data in\n  equal ~msg:\"ndim of 32D array\" int 32 (Nx.ndim t);\n  check_shape \"32D shape\" shape t\n\nlet test_create_wrong_data_size () =\n  check_invalid_arg \"data size mismatch\"\n    \"create: array size, got 3 elements, expected 6\" (fun () ->\n      ignore (Nx.create Nx.float32 [| 2; 3 |] [| 1.0; 2.0; 3.0 |]))\n\nlet test_create_negative_shape () =\n  check_invalid_arg \"negative dimension\"\n    \"create: array size, got 2 elements, expected -6\" (fun () ->\n      ignore (Nx.create Nx.float32 [| 2; -3 |] [| 1.0; 2.0 |]))\n\n(* ───── Special Creation Function Tests ───── *)\n\nlet test_empty_float32 () =\n  let t = Nx.empty Nx.float32 [| 2; 2 |] in\n  check_shape \"empty shape\" [| 2; 2 |] t\n\nlet test_full_float32 () =\n  let t = Nx.full Nx.float32 [| 2; 3 |] 5.5 in\n  check_t \"full\" [| 2; 3 |] [| 5.5; 5.5; 5.5; 5.5; 5.5; 5.5 |] t\n\nlet test_full_like_int32 () =\n  let ref_t = Nx.create Nx.int32 [| 2; 2 |] [| 1l; 2l; 3l; 4l |] in\n  let t = Nx.full_like ref_t 10l in\n  check_t \"full_like\" [| 2; 2 |] [| 10l; 10l; 10l; 10l |] t\n\nlet test_empty_like_float32 () =\n  let ref_t = Nx.create Nx.float32 [| 2; 2 |] [| 1.0; 2.0; 3.0; 4.0 |] in\n  let t = Nx.empty_like ref_t in\n  check_shape \"empty_like shape\" [| 2; 2 |] t\n\nlet test_zeros_like_float32 () =\n  let ref_t = Nx.create Nx.float32 [| 2; 2 |] [| 1.0; 2.0; 3.0; 4.0 |] in\n  let t = Nx.zeros_like ref_t in\n  check_t \"zeros_like\" [| 2; 2 |] [| 0.; 0.; 0.; 0. 
|] t\n\nlet test_ones_like_int32 () =\n  let ref_t = Nx.create Nx.int32 [| 2; 2 |] [| 1l; 2l; 3l; 4l |] in\n  let t = Nx.ones_like ref_t in\n  check_t \"ones_like\" [| 2; 2 |] [| 1l; 1l; 1l; 1l |] t\n\nlet test_zeros_max_size () =\n  let t = Nx.zeros Nx.float32 [| 256; 256; 16 |] in\n  check_shape \"large zeros shape\" [| 256; 256; 16 |] t;\n  equal ~msg:\"zeros[0,0,0]\" (float 1e-6) 0.0 (Nx.item [ 0; 0; 0 ] t)\n\n(* ───── Eye Identity Tests ───── *)\n\nlet test_identity_1x1_int32 () =\n  let t = Nx.identity Nx.int32 1 in\n  check_t \"identity 1x1\" [| 1; 1 |] [| 1l |] t\n\nlet test_eye_3x4_float32 () =\n  let t = Nx.eye ~m:3 Nx.float32 4 in\n  check_t \"eye 3x4\" [| 3; 4 |]\n    [| 1.; 0.; 0.; 0.; 0.; 1.; 0.; 0.; 0.; 0.; 1.; 0. |]\n    t\n\nlet test_eye_4x3_k1_float32 () =\n  let t = Nx.eye ~m:4 ~k:1 Nx.float32 3 in\n  check_t \"eye 4x3 k=1\" [| 4; 3 |]\n    [| 0.; 1.; 0.; 0.; 0.; 1.; 0.; 0.; 0.; 0.; 0.; 0. |]\n    t\n\nlet test_eye_3x3_km1_int32 () =\n  let t = Nx.eye ~k:(-1) Nx.int32 3 in\n  check_t \"eye 3x3 k=-1\" [| 3; 3 |] [| 0l; 0l; 0l; 1l; 0l; 0l; 0l; 1l; 0l |] t\n\nlet test_eye_0x0 () =\n  let t = Nx.eye Nx.float32 0 in\n  check_shape \"0x0 eye shape\" [| 0; 0 |] t\n\nlet test_eye_k_out_of_range () =\n  (* k offset larger than matrix dimensions *)\n  let t1 = Nx.eye Nx.float32 ~k:10 3 in\n  check_t \"eye with k=10\" [| 3; 3 |] [| 0.; 0.; 0.; 0.; 0.; 0.; 0.; 0.; 0. |] t1;\n\n  let t2 = Nx.eye Nx.float32 ~k:(-10) 3 in\n  check_t \"eye with k=-10\" [| 3; 3 |]\n    [| 0.; 0.; 0.; 0.; 0.; 0.; 0.; 0.; 0. 
|]\n    t2\n\nlet test_diag_extract () =\n  let x = Nx.arange Nx.int32 0 9 1 |> Nx.reshape [| 3; 3 |] in\n  check_t \"diag main\" [| 3 |] [| 0l; 4l; 8l |] (Nx.diag x);\n  check_t \"diag k=1\" [| 2 |] [| 1l; 5l |] (Nx.diag ~k:1 x);\n  check_t \"diag k=-1\" [| 2 |] [| 3l; 7l |] (Nx.diag ~k:(-1) x)\n\nlet test_diag_construct () =\n  let v = Nx.create Nx.int32 [| 3 |] [| 1l; 2l; 3l |] in\n  check_t \"diag 1D\" [| 3; 3 |]\n    [| 1l; 0l; 0l; 0l; 2l; 0l; 0l; 0l; 3l |]\n    (Nx.diag v);\n  check_t \"diag k=1\" [| 4; 4 |]\n    [| 0l; 1l; 0l; 0l; 0l; 0l; 2l; 0l; 0l; 0l; 0l; 3l; 0l; 0l; 0l; 0l |]\n    (Nx.diag ~k:1 v)\n\n(* ───── Range Generation Tests ───── *)\n\nlet test_arange_empty () =\n  check_invalid_arg \"arange empty\"\n    \"arange: range [0, 0), empty with step=1, ensure start < stop for positive \\\n     step, or start > stop for negative step\" (fun () ->\n      Nx.arange Nx.int32 0 0 1)\n\nlet test_arange_negative_step () =\n  let t = Nx.arange Nx.int32 10 0 (-2) in\n  check_t \"arange negative step\" [| 5 |] [| 10l; 8l; 6l; 4l; 2l |] t\n\nlet test_arange_wrong_direction () =\n  let t = Nx.arange Nx.int32 0 10 (-1) in\n  check_shape \"arange wrong direction shape\" [| 0 |] t\n\nlet test_linspace_no_endpoint_float32 () =\n  let t = Nx.linspace ~endpoint:false Nx.float32 0.0 4.0 5 in\n  check_t ~eps:1e-6 \"linspace no endpoint\" [| 5 |]\n    [| 0.0; 0.8; 1.6; 2.4; 3.2 |]\n    t\n\nlet test_linspace_single_point () =\n  let t = Nx.linspace Nx.float32 5.0 5.0 1 in\n  check_t \"linspace single point\" [| 1 |] [| 5.0 |] t\n\nlet test_linspace_zero_points () =\n  (* linspace with 0 points returns empty array, doesn't raise error *)\n  let t = Nx.linspace Nx.float32 0.0 1.0 0 in\n  check_shape \"linspace 0 points\" [| 0 |] t\n\nlet test_logspace_base10_float32 () =\n  let t = Nx.logspace ~base:10.0 Nx.float32 2.0 3.0 4 in\n  check_t ~eps:1e-3 \"logspace base 10\" [| 4 |]\n    [| 100.0; 215.443469003188454; 464.158883361277731; 1000.0 |]\n    t\n\nlet 
test_logspace_base2_no_endpoint_float32 () =\n  let t = Nx.logspace ~endpoint:false ~base:2.0 Nx.float32 0.0 4.0 5 in\n  check_t ~eps:1e-6 \"logspace base 2 no endpoint\" [| 5 |]\n    [|\n      1.0;\n      1.741101126592248;\n      3.031433133020796;\n      5.278031643091579;\n      9.189586839976281;\n    |]\n    t\n\nlet test_geomspace_no_endpoint_float32 () =\n  let t = Nx.geomspace ~endpoint:false Nx.float32 1.0 256.0 9 in\n  check_t ~eps:1e-4 \"geomspace no endpoint\" [| 9 |]\n    [|\n      1.0;\n      1.851749424574581;\n      3.428975931412292;\n      6.349604207872799;\n      11.757875938204792;\n      21.772640002790030;\n      40.317473596635956;\n      74.657858532871487;\n      138.247646578215210;\n    |]\n    t\n\n(* ───── Property Access Tests ───── *)\n\nlet test_shape_2x3 () =\n  let t = Nx.create Nx.float32 [| 2; 3 |] (Array.init 6 float_of_int) in\n  check_shape \"shape 2x3\" [| 2; 3 |] t\n\nlet test_strides_2x3_float32 () =\n  let t = Nx.create Nx.float32 [| 2; 3 |] (Array.init 6 float_of_int) in\n  equal ~msg:\"strides\" (array int) [| 12; 4 |] (Nx.strides t)\n\nlet test_stride_dim0_2x3_float32 () =\n  let t = Nx.create Nx.float32 [| 2; 3 |] (Array.init 6 float_of_int) in\n  equal ~msg:\"stride dim 0\" int 12 (Nx.stride 0 t)\n\nlet test_stride_dim1_2x3_float32 () =\n  let t = Nx.create Nx.float32 [| 2; 3 |] (Array.init 6 float_of_int) in\n  equal ~msg:\"stride dim 1\" int 4 (Nx.stride 1 t)\n\nlet test_strides_2x3_int64 () =\n  let t = Nx.create Nx.int64 [| 2; 3 |] (Array.init 6 Int64.of_int) in\n  equal ~msg:\"strides int64\" (array int) [| 24; 8 |] (Nx.strides t)\n\nlet test_itemsize_float32 () =\n  let t = Nx.create Nx.float32 [| 2; 2 |] [| 1.0; 2.0; 3.0; 4.0 |] in\n  equal ~msg:\"itemsize float32\" int 4 (Nx.itemsize t)\n\nlet test_itemsize_int64 () =\n  let t = Nx.create Nx.int64 [| 2; 2 |] [| 1L; 2L; 3L; 4L |] in\n  equal ~msg:\"itemsize int64\" int 8 (Nx.itemsize t)\n\nlet test_ndim_scalar () =\n  let t = Nx.scalar Nx.float32 1.0 in\n  
equal ~msg:\"ndim scalar\" int 0 (Nx.ndim t)\n\nlet test_ndim_2x2 () =\n  let t = Nx.create Nx.float32 [| 2; 2 |] [| 1.0; 2.0; 3.0; 4.0 |] in\n  equal ~msg:\"ndim 2x2\" int 2 (Nx.ndim t)\n\nlet test_dim_0_2x3 () =\n  let t = Nx.create Nx.float32 [| 2; 3 |] [| 1.0; 2.0; 3.0; 4.0; 5.0; 6.0 |] in\n  equal ~msg:\"dim 0\" int 2 (Nx.dim 0 t)\n\nlet test_dims_2x3 () =\n  let t = Nx.create Nx.float32 [| 2; 3 |] (Array.init 6 float_of_int) in\n  equal ~msg:\"dims\" (array int) [| 2; 3 |] (Nx.dims t)\n\nlet test_nbytes_float32 () =\n  let t = Nx.create Nx.float32 [| 2; 2 |] [| 1.0; 2.0; 3.0; 4.0 |] in\n  equal ~msg:\"nbytes float32\" int 16 (Nx.nbytes t)\n\nlet test_nbytes_int64 () =\n  let t = Nx.create Nx.int64 [| 2; 3 |] (Array.init 6 Int64.of_int) in\n  equal ~msg:\"nbytes int64\" int 48 (Nx.nbytes t)\n\nlet test_nbytes_empty () =\n  let t = Nx.create Nx.float32 [| 0 |] [||] in\n  equal ~msg:\"nbytes empty\" int 0 (Nx.nbytes t)\n\nlet test_size_2x3 () =\n  let t = Nx.create Nx.float32 [| 2; 3 |] [| 1.0; 2.0; 3.0; 4.0; 5.0; 6.0 |] in\n  equal ~msg:\"size 2x3\" int 6 (Nx.size t)\n\nlet test_size_scalar () =\n  let t = Nx.scalar Nx.float32 10.0 in\n  equal ~msg:\"size scalar\" int 1 (Nx.size t)\n\nlet test_offset_basic () =\n  let t = Nx.create Nx.float32 [| 2; 2 |] [| 1.0; 2.0; 3.0; 4.0 |] in\n  equal ~msg:\"offset basic\" int 0 (Nx.offset t)\n\nlet test_offset_slice () =\n  let t = Nx.create Nx.float32 [| 3; 3 |] (Array.init 9 float_of_int) in\n  let s = Nx.slice [ Nx.R (1, -1); Nx.R (1, -1) ] t in\n  equal ~msg:\"offset slice\" int 4 (Nx.offset s)\n\n(* ───── Element Access And Indexing Tests ───── *)\n\nlet test_get_item_2x2 () =\n  let t = Nx.create Nx.float32 [| 2; 2 |] [| 1.0; 2.0; 3.0; 4.0 |] in\n  equal ~msg:\"get [0,1]\" (float 1e-6) 2.0 (Nx.item [ 0; 1 ] t);\n  equal ~msg:\"get [1,0]\" (float 1e-6) 3.0 (Nx.item [ 1; 0 ] t)\n\nlet test_set_item_2x2 () =\n  let t = Nx.create Nx.float32 [| 2; 2 |] [| 1.0; 2.0; 3.0; 4.0 |] in\n  Nx.set_item [ 1; 0 ] 5.0 t;\n  equal 
~msg:\"set [1,0]\" (float 1e-6) 5.0 (Nx.item [ 1; 0 ] t)\n\nlet test_get_item_out_of_bounds () =\n  let t = Nx.create Nx.float32 [| 2; 2 |] [| 1.0; 2.0; 3.0; 4.0 |] in\n  check_invalid_arg \"out of bounds get\"\n    \"get: index [2,0] out of bounds for shape [2,2], index 0 at dim 0: 2 not \\\n     in [0, 2)\" (fun () -> Nx.item [ 2; 0 ] t)\n\nlet test_set_item_out_of_bounds () =\n  let t = Nx.create Nx.float32 [| 2; 2 |] [| 1.0; 2.0; 3.0; 4.0 |] in\n  check_invalid_arg \"out of bounds set\"\n    \"set: index 2 at dimension 1, out of bounds for shape [2,2], index 1 at \\\n     dim 1: 2 not in [0, 2)\" (fun () -> Nx.set_item [ 0; 2 ] 5.0 t)\n\nlet test_set_item_type_safety () =\n  let t = Nx.create Nx.int32 [| 2; 2 |] [| 1l; 2l; 3l; 4l |] in\n  Nx.set_item [ 0; 0 ] 5l t;\n  equal ~msg:\"set int32\" int32 5l (Nx.item [ 0; 0 ] t)\n\nlet test_get_scalar_from_0d () =\n  let t = Nx.scalar Nx.float32 42.0 in\n  equal ~msg:\"get scalar\" (float 1e-6) 42.0 (Nx.item [] t)\n\nlet test_set_scalar_in_0d () =\n  let t = Nx.scalar Nx.float32 42.0 in\n  Nx.set_item [] 99.0 t;\n  equal ~msg:\"set scalar\" (float 1e-6) 99.0 (Nx.item [] t)\n\nlet test_get_view_row () =\n  let t = Nx.create Nx.int32 [| 2; 2 |] [| 1l; 2l; 3l; 4l |] in\n  let row = Nx.get [ 0 ] t in\n  check_t \"get row 0\" [| 2 |] [| 1l; 2l |] row\n\nlet test_get_scalar () =\n  let t = Nx.create Nx.int32 [| 2; 2 |] [| 1l; 2l; 3l; 4l |] in\n  let scalar = Nx.get [ 1; 1 ] t in\n  check_t \"get scalar [1,1]\" [||] [| 4l |] scalar\n\nlet test_set_view_row () =\n  let t = Nx.create Nx.int32 [| 2; 2 |] [| 1l; 2l; 3l; 4l |] in\n  let v = Nx.create Nx.int32 [| 2 |] [| 8l; 9l |] in\n  Nx.set_slice [ Nx.I 0 ] t v;\n  check_t \"set row 0\" [| 2; 2 |] [| 8l; 9l; 3l; 4l |] t\n\nlet test_set_scalar () =\n  let t = Nx.create Nx.int32 [| 2; 2 |] [| 1l; 2l; 3l; 4l |] in\n  Nx.set_item [ 1; 0 ] 99l t;\n  check_t \"set scalar [1,0]\" [| 2; 2 |] [| 1l; 2l; 99l; 4l |] t\n\n(* ───── Slicing Tests ───── *)\n\nlet test_slice_3x4 () =\n  let t 
= Nx.create Nx.float32 [| 3; 4 |] (Array.init 12 float_of_int) in\n  let s = Nx.slice [ Nx.R (1, 3); Nx.R (0, 4) ] t in\n  check_t \"slice [1:3, 0:4]\" [| 2; 4 |] [| 4.; 5.; 6.; 7.; 8.; 9.; 10.; 11. |] s\n\nlet test_slice_with_steps () =\n  let t = Nx.create Nx.float32 [| 3; 4 |] (Array.init 12 float_of_int) in\n  let s = Nx.slice [ Nx.Rs (0, 3, 2); Nx.Rs (0, 4, 2) ] t in\n  check_t \"slice with steps\" [| 2; 2 |] [| 0.; 2.; 8.; 10. |] s\n\nlet test_slice_view () =\n  let t = Nx.create Nx.float32 [| 3; 2 |] [| 1.0; 2.0; 3.0; 4.0; 5.0; 6.0 |] in\n  let s = Nx.slice [ Nx.R (1, 2); Nx.R (0, 2) ] t in\n  Nx.set_item [ 1; 0 ] 99.0 t;\n  equal ~msg:\"slice view modified\" (float 1e-6) 99.0 (Nx.item [ 0; 0 ] s)\n\nlet test_slice_negative_indices () =\n  let t = Nx.create Nx.float32 [| 5 |] [| 1.; 2.; 3.; 4.; 5. |] in\n  let sliced = Nx.slice [ Nx.R (-3, -1) ] t in\n  check_t \"slice negative indices\" [| 2 |] [| 3.; 4. |] sliced\n\nlet test_slice_empty_range () =\n  let t = Nx.create Nx.float32 [| 5 |] [| 1.; 2.; 3.; 4.; 5. |] in\n  let sliced = Nx.slice [ Nx.R (2, 1) ] t in\n  check_shape \"empty slice shape\" [| 0 |] sliced\n\nlet test_slice_step_zero () =\n  let t = Nx.create Nx.float32 [| 5 |] [| 1.; 2.; 3.; 4.; 5. |] in\n  check_invalid_arg \"slice step zero\"\n    \"slice: step cannot be zero, use positive step for forward slicing or \\\n     negative for reverse\" (fun () -> ignore (Nx.slice [ Nx.Rs (0, 5, 0) ] t))\n\nlet test_slice_negative_step () =\n  let t = Nx.create Nx.float32 [| 5 |] [| 1.; 2.; 3.; 4.; 5. |] in\n  let sliced = Nx.slice [ Nx.Rs (4, 0, -1) ] t in\n  check_t \"slice negative step\" [| 4 |] [| 5.; 4.; 3.; 2. 
|] sliced\n\n(* ───── Memory And View Tests ───── *)\n\nlet test_data_buffer_view () =\n  let t = Nx.create Nx.float32 [| 3 |] [| 1.0; 2.0; 3.0 |] in\n  let d = Nx.data t in\n  Nx_buffer.set d 0 99.0;\n  equal ~msg:\"data buffer view\" (float 1e-6) 99.0 (Nx.item [ 0 ] t)\n\nlet test_strides_after_transpose () =\n  let t = Nx.create Nx.float32 [| 2; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6. |] in\n  let original_strides = Nx.strides t in\n  let transposed = Nx.transpose t in\n  let new_strides = Nx.strides transposed in\n  equal ~msg:\"transposed strides\" (array int)\n    [| original_strides.(1); original_strides.(0) |]\n    new_strides\n\nlet test_strides_after_slice () =\n  let t = Nx.create Nx.float32 [| 10 |] (Array.init 10 float_of_int) in\n  let sliced = Nx.slice [ Nx.Rs (0, 10, 2) ] t in\n  let strides = Nx.strides sliced in\n  (* step!=1 slices are materialized via gather and are contiguous *)\n  equal ~msg:\"slice stride\" int 4 strides.(0)\n\nlet test_is_c_contiguous_basic () =\n  let t = Nx.create Nx.float32 [| 2; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6. |] in\n  equal ~msg:\"fresh array is contiguous\" bool true (Nx.is_c_contiguous t)\n\nlet test_is_c_contiguous_after_transpose () =\n  let t = Nx.create Nx.float32 [| 2; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6. 
|] in\n  let transposed = Nx.transpose t in\n  equal ~msg:\"transposed not contiguous\" bool false\n    (Nx.is_c_contiguous transposed)\n\nlet test_is_c_contiguous_after_slice () =\n  let t = Nx.create Nx.float32 [| 10 |] (Array.init 10 float_of_int) in\n  (* step!=1 slices are materialized via gather *)\n  let sliced = Nx.slice [ Nx.Rs (0, 10, 2) ] t in\n  equal ~msg:\"slice step=2 is contiguous\" bool true (Nx.is_c_contiguous sliced);\n  (* step=1 slice is contiguous *)\n  let sliced_step1 = Nx.slice [ Nx.Rs (0, 5, 1) ] t in\n  equal ~msg:\"slice step=1 is contiguous\" bool true\n    (Nx.is_c_contiguous sliced_step1)\n\nlet test_is_c_contiguous_after_double_transpose () =\n  let t = Nx.create Nx.float32 [| 2; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6. |] in\n  let transposed = Nx.transpose t in\n  let restored = Nx.transpose transposed in\n  equal ~msg:\"transpose twice restores contiguous layout\" bool true\n    (Nx.is_c_contiguous restored)\n\nlet test_offset_after_multiple_slices () =\n  let t = Nx.create Nx.float32 [| 5; 5 |] (Array.init 25 float_of_int) in\n  let slice1 = Nx.slice [ Nx.R (1, 3); Nx.R (0, 5) ] t in\n  let slice2 = Nx.slice [ Nx.R (0, 1); Nx.R (0, 5) ] slice1 in\n  equal ~msg:\"accumulated offset value\" (float 1e-6) 5.0\n    (Nx.item [ 0; 0 ] slice2)\n\n(* ───── Utility Operation Tests ───── *)\n\nlet test_to_bigarray () =\n  let t = Nx.create Nx.float32 [| 2; 2 |] [| 1.0; 2.0; 3.0; 4.0 |] in\n  let ba = Nx.to_bigarray t in\n  equal ~msg:\"initial [0,0]\" (float 1e-6) 1.0\n    (Bigarray.Genarray.get ba [| 0; 0 |]);\n  Nx.set_item [ 0; 0 ] 55.0 t;\n  equal ~msg:\"after set [0,0]\" (float 1e-6) 55.0\n    (Bigarray.Genarray.get ba [| 0; 0 |])\n\nlet test_to_bigarray_partial_slice () =\n  let base = Nx.arange Nx.float32 0 5 1 |> Nx.reshape [| 5; 1 |] in\n  let slice = Nx.slice [ Nx.R (0, 4); Nx.I 0 ] base in\n  let ba = Nx.to_bigarray slice in\n  equal ~msg:\"slice dims\" (array int) [| 4 |] (Bigarray.Genarray.dims ba);\n  let expected = [| 0.0; 1.0; 2.0; 
3.0 |] in\n  Array.iteri\n    (fun i value ->\n      equal\n        ~msg:(Printf.sprintf \"slice[%d]\" i)\n        (float 1e-6) value\n        (Bigarray.Genarray.get ba [| i |]))\n    expected\n\nlet test_copy () =\n  let original = Nx.create Nx.float32 [| 3 |] [| 1.0; 2.0; 3.0 |] in\n  let copy_arr = Nx.copy original in\n  Nx.set_item [ 0 ] 10.0 original;\n  equal ~msg:\"original [0]\" (float 1e-6) 10.0 (Nx.item [ 0 ] original);\n  equal ~msg:\"copy [0]\" (float 1e-6) 1.0 (Nx.item [ 0 ] copy_arr)\n\nlet test_blit_incompatible () =\n  let src = Nx.create Nx.float32 [| 2 |] [| 1.0; 2.0 |] in\n  let dst = Nx.zeros Nx.float32 [| 3 |] in\n  raises ~msg:\"incompatible shapes\"\n    (Invalid_argument\n       \"blit: shape mismatch [2] vs [3], source and destination must have \\\n        identical shapes\") (fun () -> Nx.blit src dst)\n\nlet test_fill_returns_copy () =\n  let t = Nx.zeros Nx.float32 [| 2; 2 |] in\n  let filled = Nx.fill 7.0 t in\n  equal ~msg:\"fill copy result\" (float 1e-6) 7.0 (Nx.item [ 0; 0 ] filled);\n  equal ~msg:\"fill copy leaves source intact\" (float 1e-6) 0.0\n    (Nx.item [ 0; 0 ] t)\n\nlet test_blit_self () =\n  let t = Nx.create Nx.float32 [| 3 |] [| 1.; 2.; 3. |] in\n  Nx.blit t t;\n  check_t \"blit self\" [| 3 |] [| 1.; 2.; 3. |] t\n\n(* TODO: This test is currently failing due to overlapping memory regions in\n   blit. See nx/test/failing/bug_blit_overlapping.ml for details. Uncomment when\n   overlapping blit is properly handled (e.g., using\n   https://github.com/dinosaure/overlap).\n\n   let test_blit_overlapping_views () = let t = Nx.create Nx.float32 [| 5 |] [|\n   1.; 2.; 3.; 4.; 5. |] in let view1 = Nx.slice [ Nx.R (0, 3) ] t in let view2\n   = Nx.slice [ Nx.R (2, 5) ] t in Nx.blit view1 view2; check_t \"blit\n   overlapping views\" [| 5 |] [| 1.; 2.; 1.; 2.; 3. 
|] t *)\n\n(* ───── Type Conversion Tests ───── *)\n\nlet test_to_array () =\n  let t = Nx.create Nx.int32 [| 3 |] [| 1l; 2l; 3l |] in\n  let a = Nx.to_array t in\n  equal ~msg:\"to_array\" (array int32) [| 1l; 2l; 3l |] a\n\nlet test_astype_float32_to_int32 () =\n  let t = Nx.create Nx.float32 [| 3 |] [| 1.1; 2.9; -3.3 |] in\n  let u = Nx.astype Nx.int32 t in\n  check_t \"astype to int32\" [| 3 |] [| 1l; 2l; -3l |] u\n\nlet test_astype_int32_to_float32 () =\n  let t = Nx.create Nx.int32 [| 3 |] [| 1l; 2l; 3l |] in\n  let u = Nx.astype Nx.float32 t in\n  check_t \"astype to float32\" [| 3 |] [| 1.0; 2.0; 3.0 |] u\n\nlet test_astype_float32_to_int16 () =\n  let t = Nx.create Nx.float32 [| 4 |] [| 1.0; 2.5; 3.9; 255.0 |] in\n  let u = Nx.astype Nx.int16 t in\n  check_t \"astype to int16\" [| 4 |] [| 1; 2; 3; 255 |] u\n\nlet test_astype_int64_to_float32 () =\n  let t = Nx.create Nx.int64 [| 3 |] [| 1000L; 2000L; 3000L |] in\n  let u = Nx.astype Nx.float32 t in\n  check_t \"astype int64 to float32\" [| 3 |] [| 1000.0; 2000.0; 3000.0 |] u\n\n(* Test Suite Organization *)\n\nlet creation_edge_cases =\n  [\n    test \"create 1D int32\" test_create_1d_int32;\n    test \"create empty float32\" test_create_empty_float32;\n    test \"create 2x2x2 float32\" test_create_2x2x2_float32;\n    test \"scalar float32\" test_scalar_float32;\n    test \"scalar int64\" test_scalar_int64;\n    test \"create int16\" test_create_int16;\n    test \"create empty shapes\" test_create_empty_shapes;\n    test \"create max rank\" test_create_max_rank;\n    test \"create wrong data size\" test_create_wrong_data_size;\n    test \"create negative shape\" test_create_negative_shape;\n  ]\n\nlet special_creation =\n  [\n    test \"empty float32\" test_empty_float32;\n    test \"full float32\" test_full_float32;\n    test \"full_like int32\" test_full_like_int32;\n    test \"empty_like float32\" test_empty_like_float32;\n    test \"zeros_like float32\" test_zeros_like_float32;\n    test \"ones_like 
int32\" test_ones_like_int32;\n    test \"zeros max size\" test_zeros_max_size;\n  ]\n\nlet eye_identity_tests =\n  [\n    test \"identity 1x1 int32\" test_identity_1x1_int32;\n    test \"eye 3x4 float32\" test_eye_3x4_float32;\n    test \"eye 4x3 k=1 float32\" test_eye_4x3_k1_float32;\n    test \"eye 3x3 k=-1 int32\" test_eye_3x3_km1_int32;\n    test \"eye 0x0\" test_eye_0x0;\n    test \"eye k out of range\" test_eye_k_out_of_range;\n    test \"diag extract\" test_diag_extract;\n    test \"diag construct\" test_diag_construct;\n  ]\n\nlet range_generation =\n  [\n    test \"arange empty\" test_arange_empty;\n    test \"arange negative step\" test_arange_negative_step;\n    test \"arange wrong direction\" test_arange_wrong_direction;\n    test \"linspace no endpoint float32\" test_linspace_no_endpoint_float32;\n    test \"linspace single point\" test_linspace_single_point;\n    test \"linspace zero points\" test_linspace_zero_points;\n    test \"logspace base 10 float32\" test_logspace_base10_float32;\n    test \"logspace base 2 no endpoint float32\"\n      test_logspace_base2_no_endpoint_float32;\n    test \"geomspace no endpoint float32\" test_geomspace_no_endpoint_float32;\n  ]\n\nlet property_access =\n  [\n    test \"shape 2x3\" test_shape_2x3;\n    test \"strides 2x3 float32\" test_strides_2x3_float32;\n    test \"stride dim 0 2x3 float32\" test_stride_dim0_2x3_float32;\n    test \"stride dim 1 2x3 float32\" test_stride_dim1_2x3_float32;\n    test \"strides 2x3 int64\" test_strides_2x3_int64;\n    test \"itemsize float32\" test_itemsize_float32;\n    test \"itemsize int64\" test_itemsize_int64;\n    test \"ndim scalar\" test_ndim_scalar;\n    test \"ndim 2x2\" test_ndim_2x2;\n    test \"dim 0 2x3\" test_dim_0_2x3;\n    test \"dims 2x3\" test_dims_2x3;\n    test \"nbytes float32\" test_nbytes_float32;\n    test \"nbytes int64\" test_nbytes_int64;\n    test \"nbytes empty\" test_nbytes_empty;\n    test \"size 2x3\" test_size_2x3;\n    test \"size scalar\" 
test_size_scalar;\n    test \"offset basic\" test_offset_basic;\n    test \"offset slice\" test_offset_slice;\n  ]\n\nlet element_access_indexing =\n  [\n    test \"get item 2x2\" test_get_item_2x2;\n    test \"set item 2x2\" test_set_item_2x2;\n    test \"get item out of bounds\" test_get_item_out_of_bounds;\n    test \"set item out of bounds\" test_set_item_out_of_bounds;\n    test \"set item type safety\" test_set_item_type_safety;\n    test \"get scalar from 0d\" test_get_scalar_from_0d;\n    test \"set scalar in 0d\" test_set_scalar_in_0d;\n    test \"get view row\" test_get_view_row;\n    test \"get scalar\" test_get_scalar;\n    test \"set view row\" test_set_view_row;\n    test \"set scalar\" test_set_scalar;\n  ]\n\nlet slicing =\n  [\n    test \"slice 3x4\" test_slice_3x4;\n    test \"slice with steps\" test_slice_with_steps;\n    test \"slice view\" test_slice_view;\n    test \"slice negative indices\" test_slice_negative_indices;\n    test \"slice empty range\" test_slice_empty_range;\n    test \"slice step zero\" test_slice_step_zero;\n    test \"slice negative step\" test_slice_negative_step;\n  ]\n\nlet memory_and_views =\n  [\n    test \"data buffer view\" test_data_buffer_view;\n    test \"strides after transpose\" test_strides_after_transpose;\n    test \"strides after slice\" test_strides_after_slice;\n    test \"is contiguous basic\" test_is_c_contiguous_basic;\n    test \"is contiguous after transpose\" test_is_c_contiguous_after_transpose;\n    test \"is contiguous after slice\" test_is_c_contiguous_after_slice;\n    test \"is contiguous after double transpose\"\n      test_is_c_contiguous_after_double_transpose;\n    test \"offset after multiple slices\" test_offset_after_multiple_slices;\n  ]\n\nlet utility_operations =\n  [\n    test \"to bigarray\" test_to_bigarray;\n    test \"to bigarray partial slice\" test_to_bigarray_partial_slice;\n    test \"copy\" test_copy;\n    test \"blit incompatible\" test_blit_incompatible;\n    test \"fill 
returns copy\" test_fill_returns_copy;\n    test \"blit self\" test_blit_self;\n    (* (\"blit overlapping views\", `Quick, test_blit_overlapping_views ); *)\n  ]\n\nlet type_conversion =\n  [\n    test \"to array\" test_to_array;\n    test \"astype float32 to int32\" test_astype_float32_to_int32;\n    test \"astype int32 to float32\" test_astype_int32_to_float32;\n    test \"astype float32 to int16\" test_astype_float32_to_int16;\n    test \"astype int64 to float32\" test_astype_int64_to_float32;\n  ]\n\nlet suite =\n  [\n    group \"Creation Edge Cases\" creation_edge_cases;\n    group \"Special Creation Functions\" special_creation;\n    group \"Eye and Identity\" eye_identity_tests;\n    group \"Range Generation\" range_generation;\n    group \"Property Access\" property_access;\n    group \"Element Access and Indexing\" element_access_indexing;\n    group \"Slicing\" slicing;\n    group \"Memory and Views\" memory_and_views;\n    group \"Utility Operations\" utility_operations;\n    group \"Type Conversion\" type_conversion;\n  ]\n\nlet () = run \"Nx Basics\" suite\n"
  },
  {
    "path": "packages/nx/test/test_nx_extended_dtypes.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Tests for extended bigarray dtypes *)\n\nopen Windtrap\nopen Test_nx_support\n\n(* ───── BFloat16 Tests ───── *)\n\nlet test_create_bfloat16 () =\n  let t = Nx.create Nx_core.Dtype.bfloat16 [| 3 |] [| 1.0; 2.0; 3.0 |] in\n  check_t ~eps:0.01 \"create bfloat16\" [| 3 |] [| 1.0; 2.0; 3.0 |] t\n\nlet test_scalar_bfloat16 () =\n  let t = Nx.scalar Nx_core.Dtype.bfloat16 42.5 in\n  check_t ~eps:0.01 \"scalar bfloat16\" [||] [| 42.5 |] t\n\nlet test_zeros_bfloat16 () =\n  let t = Nx.zeros Nx_core.Dtype.bfloat16 [| 2; 2 |] in\n  check_t ~eps:0.01 \"zeros bfloat16\" [| 2; 2 |] [| 0.0; 0.0; 0.0; 0.0 |] t\n\nlet test_ones_bfloat16 () =\n  let t = Nx.ones Nx_core.Dtype.bfloat16 [| 2; 2 |] in\n  check_t ~eps:0.01 \"ones bfloat16\" [| 2; 2 |] [| 1.0; 1.0; 1.0; 1.0 |] t\n\nlet test_arange_bfloat16 () =\n  let t = Nx.arange Nx_core.Dtype.bfloat16 0 5 1 in\n  check_t ~eps:0.01 \"arange bfloat16\" [| 5 |] [| 0.0; 1.0; 2.0; 3.0; 4.0 |] t\n\n(* ───── Bool Tests ───── *)\n\nlet test_create_bool () =\n  let t = Nx.create Nx_core.Dtype.bool [| 4 |] [| false; true; false; true |] in\n  check_t \"create bool\" [| 4 |] [| false; true; false; true |] t\n\nlet test_scalar_bool () =\n  let t = Nx.scalar Nx_core.Dtype.bool true in\n  check_t \"scalar bool\" [||] [| true |] t\n\nlet test_zeros_bool () =\n  let t = Nx.zeros Nx_core.Dtype.bool [| 2; 2 |] in\n  check_t \"zeros bool\" [| 2; 2 |] [| false; false; false; false |] t\n\nlet test_ones_bool () =\n  let t = Nx.ones Nx_core.Dtype.bool [| 2; 2 |] in\n  check_t \"ones bool\" [| 2; 2 |] [| true; true; true; true |] t\n\n(* ───── Int4 Tests ───── *)\n\nlet test_create_int4 () =\n  let t = Nx.create Nx_core.Dtype.int4 [| 4 |] [| -8; -1; 0; 7 |] in\n  check_t \"create int4\" 
[| 4 |] [| -8; -1; 0; 7 |] t\n\nlet test_scalar_int4 () =\n  let t = Nx.scalar Nx_core.Dtype.int4 5 in\n  check_t \"scalar int4\" [||] [| 5 |] t\n\nlet test_zeros_int4 () =\n  let t = Nx.zeros Nx_core.Dtype.int4 [| 2; 2 |] in\n  check_t \"zeros int4\" [| 2; 2 |] [| 0; 0; 0; 0 |] t\n\nlet test_ones_int4 () =\n  let t = Nx.ones Nx_core.Dtype.int4 [| 2; 2 |] in\n  check_t \"ones int4\" [| 2; 2 |] [| 1; 1; 1; 1 |] t\n\nlet test_arange_int4 () =\n  let t = Nx.arange Nx_core.Dtype.int4 (-3) 4 1 in\n  check_t \"arange int4\" [| 7 |] [| -3; -2; -1; 0; 1; 2; 3 |] t\n\n(* ───── UInt4 Tests ───── *)\n\nlet test_create_uint4 () =\n  let t = Nx.create Nx_core.Dtype.uint4 [| 4 |] [| 0; 5; 10; 15 |] in\n  check_t \"create uint4\" [| 4 |] [| 0; 5; 10; 15 |] t\n\nlet test_scalar_uint4 () =\n  let t = Nx.scalar Nx_core.Dtype.uint4 12 in\n  check_t \"scalar uint4\" [||] [| 12 |] t\n\nlet test_zeros_uint4 () =\n  let t = Nx.zeros Nx_core.Dtype.uint4 [| 2; 2 |] in\n  check_t \"zeros uint4\" [| 2; 2 |] [| 0; 0; 0; 0 |] t\n\nlet test_ones_uint4 () =\n  let t = Nx.ones Nx_core.Dtype.uint4 [| 2; 2 |] in\n  check_t \"ones uint4\" [| 2; 2 |] [| 1; 1; 1; 1 |] t\n\nlet test_arange_uint4 () =\n  let t = Nx.arange Nx_core.Dtype.uint4 0 8 2 in\n  check_t \"arange uint4\" [| 4 |] [| 0; 2; 4; 6 |] t\n\n(* ───── UInt32 Tests ───── *)\n\nlet test_create_uint32 () =\n  let t = Nx.create Nx_core.Dtype.uint32 [| 3 |] [| 0l; 1l; 42l |] in\n  check_t \"create uint32\" [| 3 |] [| 0l; 1l; 42l |] t\n\nlet test_scalar_uint32 () =\n  let t = Nx.scalar Nx_core.Dtype.uint32 7l in\n  check_t \"scalar uint32\" [||] [| 7l |] t\n\nlet test_zeros_uint32 () =\n  let t = Nx.zeros Nx_core.Dtype.uint32 [| 2; 2 |] in\n  check_t \"zeros uint32\" [| 2; 2 |] [| 0l; 0l; 0l; 0l |] t\n\nlet test_ones_uint32 () =\n  let t = Nx.ones Nx_core.Dtype.uint32 [| 2; 2 |] in\n  check_t \"ones uint32\" [| 2; 2 |] [| 1l; 1l; 1l; 1l |] t\n\n(* ───── UInt64 Tests ───── *)\n\nlet test_create_uint64 () =\n  let t = Nx.create 
Nx_core.Dtype.uint64 [| 3 |] [| 0L; 1L; 42L |] in\n  check_t \"create uint64\" [| 3 |] [| 0L; 1L; 42L |] t\n\nlet test_scalar_uint64 () =\n  let t = Nx.scalar Nx_core.Dtype.uint64 7L in\n  check_t \"scalar uint64\" [||] [| 7L |] t\n\nlet test_zeros_uint64 () =\n  let t = Nx.zeros Nx_core.Dtype.uint64 [| 2; 2 |] in\n  check_t \"zeros uint64\" [| 2; 2 |] [| 0L; 0L; 0L; 0L |] t\n\nlet test_ones_uint64 () =\n  let t = Nx.ones Nx_core.Dtype.uint64 [| 2; 2 |] in\n  check_t \"ones uint64\" [| 2; 2 |] [| 1L; 1L; 1L; 1L |] t\n\n(* ───── Float8_e4m3 Tests ───── *)\n\nlet test_create_float8_e4m3 () =\n  let t = Nx.create Nx_core.Dtype.float8_e4m3 [| 3 |] [| 1.0; 2.0; 3.0 |] in\n  check_t ~eps:0.1 \"create float8_e4m3\" [| 3 |] [| 1.0; 2.0; 3.0 |] t\n\nlet test_scalar_float8_e4m3 () =\n  (* Test with a value that can be exactly represented in Float8_e4m3. With a\n     3-bit mantissa, we can represent 1.000 through 1.111 in binary. For\n     example: 11.0 = 1.011 × 2^3 is exactly representable. *)\n  let t = Nx.scalar Nx_core.Dtype.float8_e4m3 11.0 in\n  check_t ~eps:0.1 \"scalar float8_e4m3\" [||] [| 11.0 |] t\n\nlet test_zeros_float8_e4m3 () =\n  let t = Nx.zeros Nx_core.Dtype.float8_e4m3 [| 2; 2 |] in\n  check_t ~eps:0.1 \"zeros float8_e4m3\" [| 2; 2 |] [| 0.0; 0.0; 0.0; 0.0 |] t\n\nlet test_ones_float8_e4m3 () =\n  let t = Nx.ones Nx_core.Dtype.float8_e4m3 [| 2; 2 |] in\n  check_t ~eps:0.1 \"ones float8_e4m3\" [| 2; 2 |] [| 1.0; 1.0; 1.0; 1.0 |] t\n\n(* ───── Float8_e5m2 Tests ───── *)\n\nlet test_create_float8_e5m2 () =\n  let t = Nx.create Nx_core.Dtype.float8_e5m2 [| 3 |] [| 1.0; 2.0; 3.0 |] in\n  check_t ~eps:0.1 \"create float8_e5m2\" [| 3 |] [| 1.0; 2.0; 3.0 |] t\n\nlet test_scalar_float8_e5m2 () =\n  let t = Nx.scalar Nx_core.Dtype.float8_e5m2 20.0 in\n  check_t ~eps:0.1 \"scalar float8_e5m2\" [||] [| 20.0 |] t\n\nlet test_zeros_float8_e5m2 () =\n  let t = Nx.zeros Nx_core.Dtype.float8_e5m2 [| 2; 2 |] in\n  check_t ~eps:0.1 \"zeros float8_e5m2\" [| 2; 2 |] [| 0.0; 
0.0; 0.0; 0.0 |] t\n\nlet test_ones_float8_e5m2 () =\n  let t = Nx.ones Nx_core.Dtype.float8_e5m2 [| 2; 2 |] in\n  check_t ~eps:0.1 \"ones float8_e5m2\" [| 2; 2 |] [| 1.0; 1.0; 1.0; 1.0 |] t\n\n(* ───── Dtype Property Tests ───── *)\n\nlet test_dtype_properties () =\n  (* Test is_float *)\n  equal ~msg:\"bfloat16 is_float\" bool true\n    (Nx_core.Dtype.is_float Nx_core.Dtype.bfloat16);\n  equal ~msg:\"float8_e4m3 is_float\" bool true\n    (Nx_core.Dtype.is_float Nx_core.Dtype.float8_e4m3);\n  equal ~msg:\"float8_e5m2 is_float\" bool true\n    (Nx_core.Dtype.is_float Nx_core.Dtype.float8_e5m2);\n  equal ~msg:\"bool is_float\" bool false\n    (Nx_core.Dtype.is_float Nx_core.Dtype.bool);\n\n  (* Test is_complex *)\n  equal ~msg:\"complex64 is_complex\" bool true\n    (Nx_core.Dtype.is_complex Nx_core.Dtype.complex64);\n  equal ~msg:\"complex128 is_complex\" bool true\n    (Nx_core.Dtype.is_complex Nx_core.Dtype.complex128);\n  equal ~msg:\"bfloat16 is_complex\" bool false\n    (Nx_core.Dtype.is_complex Nx_core.Dtype.bfloat16);\n\n  (* Test is_int *)\n  equal ~msg:\"int4 is_int\" bool true (Nx_core.Dtype.is_int Nx_core.Dtype.int4);\n  equal ~msg:\"uint4 is_int\" bool true (Nx_core.Dtype.is_int Nx_core.Dtype.uint4);\n  equal ~msg:\"uint32 is_int\" bool true\n    (Nx_core.Dtype.is_int Nx_core.Dtype.uint32);\n  equal ~msg:\"uint64 is_int\" bool true\n    (Nx_core.Dtype.is_int Nx_core.Dtype.uint64);\n  equal ~msg:\"bool is_int\" bool false (Nx_core.Dtype.is_int Nx_core.Dtype.bool);\n\n  (* Test is_uint *)\n  equal ~msg:\"uint4 is_uint\" bool true\n    (Nx_core.Dtype.is_uint Nx_core.Dtype.uint4);\n  equal ~msg:\"uint32 is_uint\" bool true\n    (Nx_core.Dtype.is_uint Nx_core.Dtype.uint32);\n  equal ~msg:\"uint64 is_uint\" bool true\n    (Nx_core.Dtype.is_uint Nx_core.Dtype.uint64);\n  equal ~msg:\"int4 is_uint\" bool false\n    (Nx_core.Dtype.is_uint Nx_core.Dtype.int4);\n\n  (* Test itemsize *)\n  equal ~msg:\"bfloat16 itemsize\" int 2\n    (Nx_core.Dtype.itemsize 
Nx_core.Dtype.bfloat16);\n  equal ~msg:\"bool itemsize\" int 1 (Nx_core.Dtype.itemsize Nx_core.Dtype.bool);\n  equal ~msg:\"int4 itemsize\" int 1 (Nx_core.Dtype.itemsize Nx_core.Dtype.int4);\n  equal ~msg:\"uint4 itemsize\" int 1 (Nx_core.Dtype.itemsize Nx_core.Dtype.uint4);\n  equal ~msg:\"float8_e4m3 itemsize\" int 1\n    (Nx_core.Dtype.itemsize Nx_core.Dtype.float8_e4m3);\n  equal ~msg:\"float8_e5m2 itemsize\" int 1\n    (Nx_core.Dtype.itemsize Nx_core.Dtype.float8_e5m2);\n  equal ~msg:\"uint32 itemsize\" int 4\n    (Nx_core.Dtype.itemsize Nx_core.Dtype.uint32);\n  equal ~msg:\"uint64 itemsize\" int 8\n    (Nx_core.Dtype.itemsize Nx_core.Dtype.uint64);\n  equal ~msg:\"complex64 itemsize\" int 8\n    (Nx_core.Dtype.itemsize Nx_core.Dtype.complex64);\n  equal ~msg:\"complex128 itemsize\" int 16\n    (Nx_core.Dtype.itemsize Nx_core.Dtype.complex128);\n\n  (* Test to_string *)\n  equal ~msg:\"bfloat16 to_string\" string \"bfloat16\"\n    (Nx_core.Dtype.to_string Nx_core.Dtype.bfloat16);\n  equal ~msg:\"bool to_string\" string \"bool\"\n    (Nx_core.Dtype.to_string Nx_core.Dtype.bool);\n  equal ~msg:\"int4 to_string\" string \"int4\"\n    (Nx_core.Dtype.to_string Nx_core.Dtype.int4);\n  equal ~msg:\"uint4 to_string\" string \"uint4\"\n    (Nx_core.Dtype.to_string Nx_core.Dtype.uint4);\n  equal ~msg:\"float8_e4m3 to_string\" string \"float8_e4m3\"\n    (Nx_core.Dtype.to_string Nx_core.Dtype.float8_e4m3);\n  equal ~msg:\"float8_e5m2 to_string\" string \"float8_e5m2\"\n    (Nx_core.Dtype.to_string Nx_core.Dtype.float8_e5m2);\n  equal ~msg:\"uint32 to_string\" string \"uint32\"\n    (Nx_core.Dtype.to_string Nx_core.Dtype.uint32);\n  equal ~msg:\"uint64 to_string\" string \"uint64\"\n    (Nx_core.Dtype.to_string Nx_core.Dtype.uint64);\n  equal ~msg:\"complex64 to_string\" string \"complex64\"\n    (Nx_core.Dtype.to_string Nx_core.Dtype.complex64);\n  equal ~msg:\"complex128 to_string\" string \"complex128\"\n    (Nx_core.Dtype.to_string Nx_core.Dtype.complex128)\n\nlet 
test_dtype_min_max_values () =\n  (* Test min_value *)\n  equal ~msg:\"int4 min_value\" int (-8)\n    (Nx_core.Dtype.min_value Nx_core.Dtype.int4);\n  equal ~msg:\"uint4 min_value\" int 0\n    (Nx_core.Dtype.min_value Nx_core.Dtype.uint4);\n  equal ~msg:\"bool min_value\" bool false\n    (Nx_core.Dtype.min_value Nx_core.Dtype.bool);\n  equal ~msg:\"uint32 min_value\" int32 0l\n    (Nx_core.Dtype.min_value Nx_core.Dtype.uint32);\n  equal ~msg:\"uint64 min_value\" int64 0L\n    (Nx_core.Dtype.min_value Nx_core.Dtype.uint64);\n\n  (* Test max_value *)\n  equal ~msg:\"int4 max_value\" int 7 (Nx_core.Dtype.max_value Nx_core.Dtype.int4);\n  equal ~msg:\"uint4 max_value\" int 15\n    (Nx_core.Dtype.max_value Nx_core.Dtype.uint4);\n  equal ~msg:\"bool max_value\" bool true\n    (Nx_core.Dtype.max_value Nx_core.Dtype.bool);\n  equal ~msg:\"uint32 max_value\" int32 (Int32.lognot 0l)\n    (Nx_core.Dtype.max_value Nx_core.Dtype.uint32);\n  equal ~msg:\"uint64 max_value\" int64 (Int64.lognot 0L)\n    (Nx_core.Dtype.max_value Nx_core.Dtype.uint64)\n\n(* ───── Test Suite Setup ───── *)\n\nlet suite =\n  [\n    group \"Extended Dtypes\"\n      [\n        (* BFloat16 tests - supported by Metal *)\n        test \"create bfloat16\" test_create_bfloat16;\n        test \"scalar bfloat16\" test_scalar_bfloat16;\n        test \"zeros bfloat16\" test_zeros_bfloat16;\n        test \"ones bfloat16\" test_ones_bfloat16;\n        test \"arange bfloat16\" test_arange_bfloat16;\n        (* Bool tests - supported by Metal *)\n        test \"create bool\" test_create_bool;\n        test \"scalar bool\" test_scalar_bool;\n        test \"zeros bool\" test_zeros_bool;\n        test \"ones bool\" test_ones_bool;\n        (* Int4 tests - NOT supported by Metal *)\n        test \"create int4\" test_create_int4;\n        test \"scalar int4\" test_scalar_int4;\n        test \"zeros int4\" test_zeros_int4;\n        test \"ones int4\" test_ones_int4;\n        test \"arange int4\" test_arange_int4;\n        (* UInt4 
tests - NOT supported by Metal *)\n        test \"create uint4\" test_create_uint4;\n        test \"scalar uint4\" test_scalar_uint4;\n        test \"zeros uint4\" test_zeros_uint4;\n        test \"ones uint4\" test_ones_uint4;\n        test \"arange uint4\" test_arange_uint4;\n        (* UInt32 tests - supported by Metal *)\n        test \"create uint32\" test_create_uint32;\n        test \"scalar uint32\" test_scalar_uint32;\n        test \"zeros uint32\" test_zeros_uint32;\n        test \"ones uint32\" test_ones_uint32;\n        (* UInt64 tests - supported by Metal *)\n        test \"create uint64\" test_create_uint64;\n        test \"scalar uint64\" test_scalar_uint64;\n        test \"zeros uint64\" test_zeros_uint64;\n        test \"ones uint64\" test_ones_uint64;\n        (* Float8_e4m3 tests - NOT supported by Metal *)\n        test \"create float8_e4m3\" test_create_float8_e4m3;\n        test \"scalar float8_e4m3\" test_scalar_float8_e4m3;\n        test \"zeros float8_e4m3\" test_zeros_float8_e4m3;\n        test \"ones float8_e4m3\" test_ones_float8_e4m3;\n        (* Float8_e5m2 tests - NOT supported by Metal *)\n        test \"create float8_e5m2\" test_create_float8_e5m2;\n        test \"scalar float8_e5m2\" test_scalar_float8_e5m2;\n        test \"zeros float8_e5m2\" test_zeros_float8_e5m2;\n        test \"ones float8_e5m2\" test_ones_float8_e5m2;\n        (* Dtype property tests - always included *)\n        test \"dtype properties\" test_dtype_properties;\n        test \"dtype min/max values\" test_dtype_min_max_values;\n      ];\n  ]\n\nlet () = run \"Nx Extended Dtypes\" suite\n"
  },
  {
    "path": "packages/nx/test/test_nx_fft.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Windtrap\nopen Test_nx_support\n\nlet pi = 4.0 *. atan 1.0\nlet two_pi = 2.0 *. pi\n\n(* Test standard FFT/IFFT *)\nlet test_fft_ifft () =\n  (* 1D even length *)\n  let shape = [| 8 |] in\n  let input_data =\n    [|\n      Complex.{ re = 1.0; im = 0.5 };\n      Complex.{ re = 2.0; im = -0.5 };\n      Complex.{ re = 3.0; im = 0.2 };\n      Complex.{ re = 4.0; im = -0.2 };\n      Complex.{ re = 0.0; im = 0.0 };\n      Complex.{ re = 0.0; im = 0.0 };\n      Complex.{ re = 0.0; im = 0.0 };\n      Complex.{ re = 0.0; im = 0.0 };\n    |]\n  in\n  let input = Nx.create Nx.complex128 shape input_data in\n  let fft_out = Nx.fft input in\n  let ifft_out = Nx.ifft fft_out in\n  check_t \"1D even fft/ifft\" shape input_data ifft_out;\n\n  (* 1D odd length *)\n  let n_odd = 7 in\n  let shape_odd = [| n_odd |] in\n  let input_data_odd =\n    Array.init n_odd (fun (i : int) ->\n        { Complex.re = Float.of_int i; im = Float.of_int (i - 3) *. 0.1 })\n  in\n  let input_odd = Nx.create Nx.complex128 shape_odd input_data_odd in\n  let fft_odd = Nx.fft input_odd in\n  let ifft_odd = Nx.ifft fft_odd in\n  check_t \"1D odd fft/ifft\" shape_odd input_data_odd ifft_odd;\n\n  (* 2D *)\n  let m, n = (4, 6) in\n  let shape_2d = [| m; n |] in\n  let input_data_2d =\n    Array.init (m * n) (fun i ->\n        {\n          Complex.re = Float.of_int (i mod 10);\n          im = Float.of_int (i mod 5) *. 
0.1;\n        })\n  in\n  let input_2d = Nx.create Nx.complex128 shape_2d input_data_2d in\n  let fft_2d = Nx.fft2 input_2d in\n  let ifft_2d = Nx.ifft2 fft_2d in\n  check_t \"2D fft/ifft\" shape_2d input_data_2d ifft_2d;\n\n  (* ND *)\n  let shape_nd = [| 2; 3; 4 |] in\n  let size_nd = 2 * 3 * 4 in\n  let input_data_nd =\n    Array.init size_nd (fun i -> { Complex.re = Float.of_int i; im = 0.0 })\n  in\n  let input_nd = Nx.create Nx.complex128 shape_nd input_data_nd in\n  let fft_nd = Nx.fftn input_nd in\n  let ifft_nd = Nx.ifftn fft_nd in\n  check_t \"ND fft/ifft\" shape_nd input_data_nd ifft_nd\n\nlet test_fft_axes () =\n  let shape = [| 4; 6 |] in\n  let size = 4 * 6 in\n  let input_data =\n    Array.init size (fun i ->\n        { Complex.re = Float.of_int i; im = Float.of_int (i mod 7) *. 0.1 })\n  in\n  let input = Nx.create Nx.complex128 shape input_data in\n\n  (* Specific axes *)\n  let fft_axis0 = Nx.fft input ~axis:0 in\n  let ifft_axis0 = Nx.ifft fft_axis0 ~axis:0 in\n  check_t \"fft axis 0\" shape input_data ifft_axis0;\n\n  let fft_axis1 = Nx.fft input ~axis:1 in\n  let ifft_axis1 = Nx.ifft fft_axis1 ~axis:1 in\n  check_t \"fft axis 1\" shape input_data ifft_axis1;\n\n  (* Negative axes *)\n  let fft_neg_axis = Nx.fft input ~axis:(-2) in\n  let ifft_neg_axis = Nx.ifft fft_neg_axis ~axis:(-2) in\n  check_t \"fft axis -2\" shape input_data ifft_neg_axis\n\nlet test_fft_size () =\n  let n = 8 in\n  let shape = [| n |] in\n  let input_data =\n    Array.init n (fun i ->\n        let angle = two_pi *. Float.of_int i /. 
Float.of_int n in\n        { Complex.re = sin angle; im = cos angle })\n  in\n  let input = Nx.create Nx.complex128 shape input_data in\n\n  (* Pad to larger size *)\n  let pad_size = 16 in\n  let fft_padded = Nx.fft input ~n:pad_size in\n  equal ~msg:\"fft padded shape\" (array int) [| pad_size |] (Nx.shape fft_padded);\n  let ifft_padded = Nx.ifft fft_padded ~n in\n  (* Note: fft(x, n=16) -> ifft(X, n=8) does NOT give back the original signal.\n     This is expected behavior that matches NumPy. *)\n  equal ~msg:\"fft pad reconstruct shape\" (array int) shape\n    (Nx.shape ifft_padded);\n  (* Check the actual values match NumPy's output *)\n  let expected_padded_complex =\n    [|\n      Complex.{ re = 0.270598050073098; im = 0.500000000000000 };\n      Complex.{ re = 0.038060233744356; im = 0.191341716182545 };\n      Complex.{ re = -0.000000000000000; im = 0.153281482438188 };\n      Complex.{ re = -0.038060233744357; im = 0.191341716182545 };\n      Complex.{ re = -0.270598050073098; im = -0.500000000000000 };\n      Complex.{ re = -0.038060233744357; im = -0.191341716182545 };\n      Complex.{ re = 0.000000000000000; im = -0.153281482438188 };\n      Complex.{ re = 0.038060233744357; im = -0.191341716182545 };\n    |]\n  in\n  check_t ~eps:1e-6 \"fft pad reconstruct values\" shape expected_padded_complex\n    ifft_padded;\n\n  (* Truncate to smaller size *)\n  let trunc_size = 4 in\n  let fft_trunc = Nx.fft input ~n:trunc_size in\n  equal ~msg:\"fft trunc shape\" (array int) [| trunc_size |] (Nx.shape fft_trunc)\n\nlet test_fft_norm () =\n  let n = 4 in\n  let shape = [| n |] in\n  let input_data =\n    [|\n      Complex.{ re = 1.0; im = -1.0 };\n      Complex.{ re = 2.0; im = -2.0 };\n      Complex.{ re = 3.0; im = 3.0 };\n      Complex.{ re = 4.0; im = 4.0 };\n    |]\n  in\n  let input = Nx.create Nx.complex128 shape input_data in\n\n  (* Backward norm (default) *)\n  let fft_backward = Nx.fft input ~norm:`Backward in\n  let ifft_backward = Nx.ifft 
fft_backward ~norm:`Backward in\n  check_t \"backward norm\" shape input_data ifft_backward;\n\n  (* Forward norm *)\n  let fft_forward = Nx.fft input ~norm:`Forward in\n  let ifft_forward = Nx.ifft fft_forward ~norm:`Forward in\n  check_t \"forward norm\" shape input_data ifft_forward;\n\n  (* Ortho norm *)\n  let fft_ortho = Nx.fft input ~norm:`Ortho in\n  let ifft_ortho = Nx.ifft fft_ortho ~norm:`Ortho in\n  check_t \"ortho norm\" shape input_data ifft_ortho\n\nlet test_fft_edge_cases () =\n  (* Empty tensor *)\n  let empty = Nx.empty Nx.complex128 [| 0 |] in\n  let fft_empty = Nx.fft empty in\n  equal ~msg:\"fft empty\" (array int) [| 0 |] (Nx.shape fft_empty);\n\n  (* Size 1 *)\n  let shape = [| 1 |] in\n  let input_data = [| Complex.{ re = 5.0; im = -3.0 } |] in\n  let single = Nx.create Nx.complex128 shape input_data in\n  let fft_single = Nx.fft single in\n  check_t \"fft size 1\" shape input_data fft_single;\n\n  (* Non-power of 2 *)\n  let n = 5 in\n  let shape_non_pow2 = [| n |] in\n  let input_data_non_pow2 =\n    Array.init n (fun i -> { Complex.re = Float.of_int i; im = 0.0 })\n  in\n  let input = Nx.create Nx.complex128 shape_non_pow2 input_data_non_pow2 in\n  let fft_out = Nx.fft input in\n  let ifft_out = Nx.ifft fft_out in\n  check_t \"non-pow2\" shape_non_pow2 input_data_non_pow2 ifft_out\n\n(* Test real FFT/IFFT *)\nlet test_rfft_irfft () =\n  (* 1D even *)\n  let n_even = 8 in\n  let shape_even = [| n_even |] in\n  let signal_even =\n    Array.init n_even (fun i ->\n        sin (two_pi *. Float.of_int i /. 
Float.of_int n_even))\n  in\n  let input_even = Nx.create Nx.float64 shape_even signal_even in\n  let rfft_even = Nx.rfft input_even in\n  equal ~msg:\"rfft even shape\" (array int)\n    [| (n_even / 2) + 1 |]\n    (Nx.shape rfft_even);\n  let irfft_even = Nx.irfft rfft_even ~n:n_even in\n  check_t ~eps:1e-10 \"rfft even reconstruct\" shape_even signal_even irfft_even;\n\n  (* 1D odd *)\n  let n_odd = 7 in\n  let shape_odd = [| n_odd |] in\n  let signal_odd = Array.init n_odd (fun i -> Float.of_int i) in\n  let input_odd = Nx.create Nx.float64 shape_odd signal_odd in\n  let rfft_odd = Nx.rfft input_odd in\n  equal ~msg:\"rfft odd shape\" (array int)\n    [| (n_odd / 2) + 1 |]\n    (Nx.shape rfft_odd);\n  let irfft_odd = Nx.irfft rfft_odd ~n:n_odd in\n  check_t ~eps:1e-10 \"rfft odd reconstruct\" shape_odd signal_odd irfft_odd;\n\n  (* 2D *)\n  let m, n = (4, 6) in\n  let shape_2d = [| m; n |] in\n  let signal_2d = Array.init (m * n) Float.of_int in\n  let input_2d = Nx.create Nx.float64 shape_2d signal_2d in\n  let rfft_2d = Nx.rfft2 input_2d in\n  equal ~msg:\"rfft2 shape\" (array int) [| m; (n / 2) + 1 |] (Nx.shape rfft_2d);\n  let irfft_2d = Nx.irfft2 rfft_2d ~s:[ m; n ] in\n  check_t \"rfft2 reconstruct\" shape_2d signal_2d irfft_2d;\n\n  (* ND last axis transform *)\n  let shape_nd = [| 2; 3; 8 |] in\n  let size_nd = 2 * 3 * 8 in\n  let signal_nd = Array.init size_nd (fun i -> Float.of_int i) in\n  let input_nd = Nx.create Nx.float64 shape_nd signal_nd in\n  let rfft_nd = Nx.rfftn input_nd ~axes:[ 2 ] in\n  equal ~msg:\"rfftn last axis shape\" (array int) [| 2; 3; 5 |]\n    (Nx.shape rfft_nd);\n  let irfft_nd = Nx.irfftn rfft_nd ~axes:[ 2 ] ~s:[ 8 ] in\n  check_t \"rfftn last axis reconstruct\" shape_nd signal_nd irfft_nd\n\nlet test_rfft_axes () =\n  let shape = [| 4; 6; 8 |] in\n  let size = 4 * 6 * 8 in\n  let signal = Array.init size (fun i -> Float.of_int i) in\n  let input = Nx.create Nx.float64 shape signal in\n\n  (* Specific axis *)\n  let rfft_axis1 
= Nx.rfftn input ~axes:[ 1 ] in\n  equal ~msg:\"rfft axis 1\" (array int) [| 4; 4; 8 |] (Nx.shape rfft_axis1);\n\n  (* Multiple axes, last is halved *)\n  let rfft_axes_01 = Nx.rfftn input ~axes:[ 0; 1 ] in\n  equal ~msg:\"rfft axes [0;1]\" (array int) [| 4; 4; 8 |] (Nx.shape rfft_axes_01);\n\n  (* Negative axis *)\n  let rfft_neg1 = Nx.rfftn input ~axes:[ -1 ] in\n  equal ~msg:\"rfft axis -1\" (array int) [| 4; 6; 5 |] (Nx.shape rfft_neg1)\n\nlet test_rfft_size () =\n  let n = 8 in\n  let shape = [| n |] in\n  let signal =\n    Array.init n (fun i -> sin (two_pi *. Float.of_int i /. Float.of_int n))\n  in\n  let input = Nx.create Nx.float64 shape signal in\n\n  (* Pad last axis *)\n  let pad_size = 16 in\n  let rfft_padded = Nx.rfft input ~n:pad_size in\n  equal ~msg:\"rfft padded\" (array int)\n    [| (pad_size / 2) + 1 |]\n    (Nx.shape rfft_padded);\n  let irfft_padded = Nx.irfft rfft_padded ~n in\n  (* Note: rfft(x, n=16) -> irfft(X, n=8) does NOT give back the original\n     signal. This is expected behavior that matches NumPy. 
*)\n  equal ~msg:\"rfft pad reconstruct shape\" (array int) shape\n    (Nx.shape irfft_padded);\n  (* Check the actual values match NumPy's output *)\n  let expected_padded =\n    [|\n      0.270598050073098;\n      1.961939766255643;\n      0.000000000000000;\n      -1.961939766255643;\n      -0.270598050073099;\n      0.038060233744357;\n      -0.000000000000000;\n      -0.038060233744357;\n    |]\n  in\n  check_t ~eps:1e-6 \"rfft pad reconstruct values\" shape expected_padded\n    irfft_padded;\n\n  (* Truncate *)\n  let trunc_size = 4 in\n  let rfft_trunc = Nx.rfft input ~n:trunc_size in\n  equal ~msg:\"rfft trunc\" (array int)\n    [| (trunc_size / 2) + 1 |]\n    (Nx.shape rfft_trunc)\n\nlet test_rfft_norm () =\n  let n = 4 in\n  let shape = [| n |] in\n  let signal = [| 1.0; 2.0; 3.0; 4.0 |] in\n  let input = Nx.create Nx.float64 shape signal in\n\n  (* Backward *)\n  let rfft_backward = Nx.rfft input ~norm:`Backward in\n  let irfft_backward = Nx.irfft rfft_backward ~n ~norm:`Backward in\n  check_t \"rfft backward\" shape signal irfft_backward;\n\n  (* Forward *)\n  let rfft_forward = Nx.rfft input ~norm:`Forward in\n  let irfft_forward = Nx.irfft rfft_forward ~n ~norm:`Forward in\n  check_t \"rfft forward\" shape signal irfft_forward;\n\n  (* Ortho *)\n  let rfft_ortho = Nx.rfft input ~norm:`Ortho in\n  let irfft_ortho = Nx.irfft rfft_ortho ~n ~norm:`Ortho in\n  check_t \"rfft ortho\" shape signal irfft_ortho\n\nlet test_rfft_edge_cases () =\n  (* Empty - NumPy raises an error for empty arrays, so we skip this test let\n     empty = Nx.empty Nx.float64 [| 0 |] in let rfft_empty = Nx.rfft empty in\n     Alcotest.(check (array int)) \"rfft empty\" [| 1 |] (Nx.shape rfft_empty); *)\n\n  (* Size 1 *)\n  let shape = [| 1 |] in\n  let signal_data = [| 5.0 |] in\n  let single = Nx.create Nx.float64 shape signal_data in\n  let rfft_single = Nx.rfft single in\n  equal ~msg:\"rfft size 1 shape\" (array int) [| 1 |] (Nx.shape rfft_single);\n  let irfft_single = 
Nx.irfft rfft_single ~n:1 in\n  check_t \"rfft size 1 reconstruct\" shape signal_data irfft_single\n\n(* Test Hermitian FFT *)\nlet test_hfft_ihfft () =\n  let n = 8 in\n  let shape = [| n |] in\n  let signal =\n    Array.init n (fun i -> sin (two_pi *. Float.of_int i /. Float.of_int n))\n  in\n  let input = Nx.create Nx.float64 shape signal in\n  let ihfft_out = Nx.ihfft input ~n in\n  equal ~msg:\"ihfft shape\" (array int) [| (n / 2) + 1 |] (Nx.shape ihfft_out);\n  let hfft_out = Nx.hfft ihfft_out ~n in\n  check_t \"hfft/ihfft\" shape signal hfft_out\n\n(* Test helper routines *)\nlet test_fftfreq () =\n  let n = 5 in\n  let shape = [| n |] in\n  let freq = Nx.fftfreq n in\n  let expected_data = [| 0.0; 0.2; 0.4; -0.4; -0.2 |] in\n  check_t \"fftfreq odd\" shape expected_data freq;\n\n  let n_even = 4 in\n  let shape_even = [| n_even |] in\n  let freq_even = Nx.fftfreq n_even ~d:0.5 in\n  let expected_even_data = [| 0.0; 0.5; -1.0; -0.5 |] in\n  check_t \"fftfreq even\" shape_even expected_even_data freq_even\n\nlet test_rfftfreq () =\n  let n = 8 in\n  let shape = [| (n / 2) + 1 |] in\n  let freq = Nx.rfftfreq n in\n  let expected_data = [| 0.0; 0.125; 0.25; 0.375; 0.5 |] in\n  check_t \"rfftfreq even\" shape expected_data freq;\n\n  let n_odd = 9 in\n  let shape_odd = [| (n_odd / 2) + 1 |] in\n  let freq_odd = Nx.rfftfreq n_odd ~d:2.0 in\n  let expected_odd_data =\n    [| 0.0; 0.055555555555; 0.111111111111; 0.166666666666; 0.222222222222 |]\n  in\n  check_t ~eps:1e-8 \"rfftfreq odd\" shape_odd expected_odd_data freq_odd\n\nlet test_fftshift () =\n  let x_shape = [| 4 |] in\n  let x_data = [| 0.0; 1.0; 2.0; 3.0 |] in\n  let x = Nx.create Nx.float64 x_shape x_data in\n  let shifted = Nx.fftshift x in\n  let expected_shifted_data = [| 2.0; 3.0; 0.0; 1.0 |] in\n  check_t \"fftshift 1D\" x_shape expected_shifted_data shifted;\n\n  let x2d_shape = [| 3; 3 |] in\n  let x2d_data = Array.init 9 Float.of_int in\n  let x2d = Nx.create Nx.float64 x2d_shape x2d_data in\n  
let shifted2d = Nx.fftshift x2d ~axes:[ 0; 1 ] in\n  let expected2d_data = [| 8.0; 6.0; 7.0; 2.0; 0.0; 1.0; 5.0; 3.0; 4.0 |] in\n  check_t \"fftshift 2D\" x2d_shape expected2d_data shifted2d;\n\n  let unshifted = Nx.ifftshift shifted in\n  check_t \"ifftshift 1D\" x_shape x_data unshifted\n\nlet suite =\n  [\n    group \"fft/ifft\"\n      [\n        test \"basic\" test_fft_ifft;\n        test \"axes\" test_fft_axes;\n        test \"size\" test_fft_size;\n        test \"norm\" test_fft_norm;\n        test \"edge_cases\" test_fft_edge_cases;\n      ];\n    group \"rfft/irfft\"\n      [\n        test \"basic\" test_rfft_irfft;\n        test \"axes\" test_rfft_axes;\n        test \"size\" test_rfft_size;\n        test \"norm\" test_rfft_norm;\n        test \"edge_cases\" test_rfft_edge_cases;\n      ];\n    group \"hfft/ihfft\" [ test \"basic\" test_hfft_ihfft ];\n    group \"helpers\"\n      [\n        test \"fftfreq\" test_fftfreq;\n        test \"rfftfreq\" test_rfftfreq;\n        test \"shifts\" test_fftshift;\n      ];\n  ]\n\nlet () = run \"Nx FFT\" suite\n"
  },
  {
    "path": "packages/nx/test/test_nx_indexing.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Comprehensive indexing and slicing tests for Nx *)\n\nopen Windtrap\nopen Test_nx_support\n\n(* ───── Basic Slicing Tests (slice function) ───── *)\n\nlet test_slice_basic () =\n  let t = Nx.create Nx.float32 [| 5 |] [| 1.; 2.; 3.; 4.; 5. |] in\n  let sliced = Nx.slice [ Nx.R (1, 4) ] t in\n  check_t \"slice [1:4]\" [| 3 |] [| 2.; 3.; 4. |] sliced\n\nlet test_slice_with_step () =\n  let t = Nx.create Nx.float32 [| 10 |] (Array.init 10 float_of_int) in\n  let sliced = Nx.slice [ Nx.Rs (1, 8, 2) ] t in\n  check_t \"slice [1:8:2]\" [| 4 |] [| 1.; 3.; 5.; 7. |] sliced\n\nlet test_slice_negative_indices () =\n  let t = Nx.create Nx.float32 [| 5 |] [| 1.; 2.; 3.; 4.; 5. |] in\n  let sliced = Nx.slice [ Nx.R (-3, -1) ] t in\n  check_t \"slice [-3:-1]\" [| 2 |] [| 3.; 4. |] sliced\n\nlet test_slice_2d () =\n  let t =\n    Nx.create Nx.float32 [| 3; 4 |]\n      [| 1.; 2.; 3.; 4.; 5.; 6.; 7.; 8.; 9.; 10.; 11.; 12. |]\n  in\n  let sliced = Nx.slice [ Nx.R (1, 3); Nx.R (1, 3) ] t in\n  check_t \"slice 2d [1:3, 1:3]\" [| 2; 2 |] [| 6.; 7.; 10.; 11. |] sliced\n\nlet test_slice_empty () =\n  let t = Nx.create Nx.float32 [| 5 |] [| 1.; 2.; 3.; 4.; 5. 
|] in\n  let sliced = Nx.slice [ Nx.R (3, 3) ] t in\n  check_shape \"empty slice\" [| 0 |] sliced\n\n(* ───── Advanced Indexing Tests (index function) ───── *)\n\nlet test_index_all () =\n  let t = Nx.create Nx.float32 [| 3; 4 |] (Array.init 12 float_of_int) in\n  let indexed = Nx.slice [ Nx.A; Nx.A ] t in\n  check_t \"index all\" [| 3; 4 |] (Array.init 12 float_of_int) indexed\n\nlet test_index_at () =\n  let t = Nx.create Nx.float32 [| 3; 4 |] (Array.init 12 float_of_int) in\n  let indexed = Nx.slice [ Nx.I 1 ] t in\n  check_t \"index at\" [| 4 |] [| 4.; 5.; 6.; 7. |] indexed\n\nlet test_index_at_negative () =\n  let t = Nx.create Nx.float32 [| 3; 4 |] (Array.init 12 float_of_int) in\n  let indexed = Nx.slice [ Nx.I (-1) ] t in\n  check_t \"index at negative\" [| 4 |] [| 8.; 9.; 10.; 11. |] indexed\n\nlet test_index_rng () =\n  let t = Nx.create Nx.float32 [| 5 |] [| 1.; 2.; 3.; 4.; 5. |] in\n  let indexed = Nx.slice [ Nx.R (1, 3) ] t in\n  check_t \"index rng\" [| 2 |] [| 2.; 3. |] indexed\n\nlet test_index_rngs () =\n  let t = Nx.create Nx.float32 [| 10 |] (Array.init 10 float_of_int) in\n  let indexed = Nx.slice [ Nx.Rs (1, 8, 2) ] t in\n  check_t \"index rngs with step\" [| 4 |] [| 1.; 3.; 5.; 7. |] indexed\n\nlet test_index_idx () =\n  let t = Nx.create Nx.float32 [| 5 |] [| 10.; 20.; 30.; 40.; 50. |] in\n  let indexed = Nx.slice [ Nx.L [ 0; 2; 4 ] ] t in\n  check_t \"index idx\" [| 3 |] [| 10.; 30.; 50. |] indexed\n\nlet test_index_idx_repeated () =\n  let t = Nx.create Nx.float32 [| 3 |] [| 10.; 20.; 30. |] in\n  let indexed = Nx.slice [ Nx.L [ 0; 1; 1; 0; 2 ] ] t in\n  check_t \"index idx repeated\" [| 5 |] [| 10.; 20.; 20.; 10.; 30. |] indexed\n\n(* Regression test: fancy indexing should reorder even when length matches dim\n   size *)\nlet test_index_idx_reorder () =\n  let t = Nx.create Nx.float32 [| 3; 2 |] [| 1.; 2.; 3.; 4.; 5.; 6. 
|] in\n  (* L [1; 2; 0] should reorder rows, not return unchanged *)\n  let indexed = Nx.slice [ Nx.L [ 1; 2; 0 ]; Nx.A ] t in\n  check_t \"index idx reorder\" [| 3; 2 |] [| 3.; 4.; 5.; 6.; 1.; 2. |] indexed\n\nlet test_index_mixed () =\n  let t = Nx.create Nx.float32 [| 3; 4; 5 |] (Array.init 60 float_of_int) in\n  (* Select row 1, columns 0 and 2, all in last dimension *)\n  let indexed = Nx.slice [ Nx.I 1; Nx.L [ 0; 2 ]; Nx.A ] t in\n  check_t \"index mixed\" [| 2; 5 |]\n    [| 20.; 21.; 22.; 23.; 24.; 30.; 31.; 32.; 33.; 34. |]\n    indexed\n\n(* Note: new-axis (Nx.N) and mask (Nx.M) indexing require implementation *)\n(* let test_index_new_axis  () =\n    let t = Nx.create  Nx.float32 [| 3; 4 |] (Array.init 12 float_of_int) in\n    let indexed = Nx.slice [ Nx.A; Nx.N; Nx.A ] t in\n    check_shape \"index new axis\" [| 3; 1; 4 |] indexed\n\n  let test_index_mask  () =\n    let t = Nx.create  Nx.float32 [| 5 |] [| 1.; 2.; 3.; 4.; 5. |] in\n    let mask = Nx.greater_s t 2.5 in\n    let indexed = Nx.slice [ Nx.M mask ] t in\n    check_t \"index mask\" [| 3 |] [| 3.; 4.; 5. |] indexed *)\n\n(* ───── Set_slice Tests ───── *)\n\nlet test_set_slice_at () =\n  let t = Nx.zeros Nx.float32 [| 3; 4 |] in\n  let value = Nx.ones Nx.float32 [| 4 |] in\n  Nx.set_slice [ Nx.I 1 ] t value;\n  equal ~msg:\"set_slice at [1,2]\" (float 1e-6) 1.0 (Nx.item [ 1; 2 ] t)\n\nlet test_set_slice_rng () =\n  let t = Nx.zeros Nx.float32 [| 5 |] in\n  let value = Nx.create Nx.float32 [| 2 |] [| 10.; 20. |] in\n  Nx.set_slice [ Nx.R (1, 3) ] t value;\n  check_t \"set_slice rng\" [| 5 |] [| 0.; 10.; 20.; 0.; 0. |] t\n\nlet test_set_slice_idx () =\n  let t = Nx.zeros Nx.float32 [| 5 |] in\n  let value = Nx.create Nx.float32 [| 3 |] [| 10.; 20.; 30. |] in\n  Nx.set_slice [ Nx.L [ 0; 2; 4 ] ] t value;\n  check_t \"set_slice idx\" [| 5 |] [| 10.; 0.; 20.; 0.; 30. |] t\n\n(* ───── Item and Set_item Tests ───── *)\n\nlet test_item () =\n  let t = Nx.create Nx.float32 [| 2; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6. 
|] in\n  let value = Nx.item [ 1; 2 ] t in\n  equal ~msg:\"item [1,2]\" (float 1e-6) 6.0 value\n\nlet test_item_negative_indices () =\n  let t = Nx.create Nx.float32 [| 3; 3 |] (Array.init 9 float_of_int) in\n  let value = Nx.item [ -1; -1 ] t in\n  equal ~msg:\"item negative indices\" (float 1e-6) 8.0 value\n\nlet test_set_item () =\n  let t = Nx.zeros Nx.float32 [| 2; 3 |] in\n  Nx.set_item [ 1; 2 ] 99.0 t;\n  equal ~msg:\"set_item\" (float 1e-6) 99.0 (Nx.item [ 1; 2 ] t)\n\n(* ───── Take Tests ───── *)\n\nlet test_take_basic () =\n  let t = Nx.create Nx.float32 [| 5 |] [| 10.; 20.; 30.; 40.; 50. |] in\n  let indices = Nx.create Nx.int32 [| 3 |] [| 0l; 2l; 4l |] in\n  let result = Nx.take indices t in\n  check_t \"take basic\" [| 3 |] [| 10.; 30.; 50. |] result\n\nlet test_take_with_axis () =\n  let t =\n    Nx.create Nx.float32 [| 3; 4 |]\n      [| 1.; 2.; 3.; 4.; 5.; 6.; 7.; 8.; 9.; 10.; 11.; 12. |]\n  in\n  let indices = Nx.create Nx.int32 [| 2 |] [| 0l; 2l |] in\n  let result = Nx.take ~axis:1 indices t in\n  check_t \"take with axis\" [| 3; 2 |] [| 1.; 3.; 5.; 7.; 9.; 11. |] result\n\nlet test_take_mode_wrap () =\n  let t = Nx.create Nx.float32 [| 3 |] [| 10.; 20.; 30. |] in\n  let indices = Nx.create Nx.int32 [| 4 |] [| 0l; 1l; 2l; 3l |] in\n  let result = Nx.take ~mode:`wrap indices t in\n  check_t \"take mode wrap\" [| 4 |] [| 10.; 20.; 30.; 10. |] result\n\nlet test_take_mode_clip () =\n  let t = Nx.create Nx.float32 [| 3 |] [| 10.; 20.; 30. |] in\n  let indices = Nx.create Nx.int32 [| 4 |] [| -1l; 0l; 2l; 5l |] in\n  let result = Nx.take ~mode:`clip indices t in\n  check_t \"take mode clip\" [| 4 |] [| 10.; 10.; 30.; 30. |] result\n\nlet test_take_negative_indices () =\n  let t = Nx.create Nx.float32 [| 5 |] [| 1.; 2.; 3.; 4.; 5. |] in\n  let indices = Nx.create Nx.int32 [| 2 |] [| -1l; -2l |] in\n  let result = Nx.take ~mode:`wrap indices t in\n  check_t \"take negative indices\" [| 2 |] [| 5.; 4. 
|] result\n\n(* ───── Take_along_axis Tests ───── *)\n\nlet test_take_along_axis_1d () =\n  let t = Nx.create Nx.float32 [| 5 |] [| 3.; 1.; 4.; 1.; 5. |] in\n  let indices = Nx.argsort ~axis:0 t in\n  let sorted = Nx.take_along_axis ~axis:0 indices t in\n  check_t \"take_along_axis 1d\" [| 5 |] [| 1.; 1.; 3.; 4.; 5. |] sorted\n\nlet test_take_along_axis_2d () =\n  let t = Nx.create Nx.float32 [| 2; 3 |] [| 4.; 1.; 2.; 3.; 5.; 6. |] in\n  (* Get argmax along axis 1 *)\n  let indices = Nx.argmax ~axis:1 ~keepdims:true t in\n  let maxvals = Nx.take_along_axis ~axis:1 indices t in\n  check_t \"take_along_axis 2d\" [| 2; 1 |] [| 4.; 6. |] maxvals\n\n(* ───── Put Tests ───── *)\n\nlet test_put_basic () =\n  let t = Nx.zeros Nx.float32 [| 5 |] in\n  let indices = Nx.create Nx.int32 [| 3 |] [| 0l; 2l; 4l |] in\n  let values = Nx.create Nx.float32 [| 3 |] [| 10.; 20.; 30. |] in\n  Nx.put ~indices ~values t;\n  check_t \"put basic\" [| 5 |] [| 10.; 0.; 20.; 0.; 30. |] t\n\nlet test_put_with_axis () =\n  let t = Nx.zeros Nx.float32 [| 3; 4 |] in\n  let indices = Nx.create Nx.int32 [| 3; 2 |] [| 0l; 2l; 0l; 2l; 0l; 2l |] in\n  let values = Nx.ones Nx.float32 [| 3; 2 |] in\n  Nx.put ~axis:1 ~indices ~values t;\n  let expected = [| 1.; 0.; 1.; 0.; 1.; 0.; 1.; 0.; 1.; 0.; 1.; 0. |] in\n  check_t \"put with axis\" [| 3; 4 |] expected t\n\nlet test_put_mode_wrap () =\n  let t = Nx.zeros Nx.float32 [| 3 |] in\n  let indices = Nx.create Nx.int32 [| 4 |] [| 0l; 1l; 2l; 3l |] in\n  let values = Nx.create Nx.float32 [| 4 |] [| 1.; 2.; 3.; 4. |] in\n  Nx.put ~indices ~values ~mode:`wrap t;\n  check_t \"put mode wrap\" [| 3 |] [| 4.; 2.; 3. |] t\n\nlet test_put_mode_clip () =\n  let t = Nx.zeros Nx.float32 [| 3 |] in\n  let indices = Nx.create Nx.int32 [| 4 |] [| -1l; 0l; 2l; 5l |] in\n  let values = Nx.create Nx.float32 [| 4 |] [| 1.; 2.; 3.; 4. |] in\n  Nx.put ~indices ~values ~mode:`clip t;\n  check_t \"put mode clip\" [| 3 |] [| 2.; 0.; 4. 
|] t\n\nlet test_index_put_basic () =\n  let t = Nx.zeros Nx.float32 [| 3; 3 |] in\n  let rows = Nx.create Nx.int32 [| 4 |] [| 0l; 2l; 1l; 2l |] in\n  let cols = Nx.create Nx.int32 [| 4 |] [| 1l; 0l; 2l; 2l |] in\n  let values = Nx.arange_f Nx.float32 10. 14. 1. in\n  Nx.index_put ~indices:[| rows; cols |] ~values t;\n  check_t \"index_put basic\" [| 3; 3 |]\n    [| 0.; 10.; 0.; 0.; 0.; 12.; 11.; 0.; 13. |]\n    t\n\nlet test_index_put_mode_wrap () =\n  let t = Nx.zeros Nx.float32 [| 2; 2 |] in\n  let rows = Nx.create Nx.int32 [| 3 |] [| -1l; 0l; 1l |] in\n  let cols = Nx.create Nx.int32 [| 3 |] [| 0l; -1l; 1l |] in\n  let values = Nx.create Nx.float32 [| 3 |] [| 1.; 2.; 3. |] in\n  Nx.index_put ~indices:[| rows; cols |] ~values ~mode:`wrap t;\n  check_t \"index_put mode wrap\" [| 2; 2 |] [| 0.; 2.; 1.; 3. |] t\n\n(* ───── Put_along_axis Tests ───── *)\n\nlet test_put_along_axis () =\n  let t = Nx.zeros Nx.float32 [| 2; 3 |] in\n  let indices = Nx.create Nx.int32 [| 2; 1 |] [| 1l; 0l |] in\n  let values = Nx.create Nx.float32 [| 2; 1 |] [| 10.; 20. |] in\n  Nx.put_along_axis ~axis:1 ~indices ~values t;\n  check_t \"put_along_axis\" [| 2; 3 |] [| 0.; 10.; 0.; 20.; 0.; 0. |] t\n\n(* ───── Compress Tests ───── *)\n\nlet test_compress_no_axis () =\n  let t = Nx.create Nx.float32 [| 5 |] [| 1.; 2.; 3.; 4.; 5. |] in\n  let condition =\n    Nx.create Nx.bool [| 5 |] [| true; false; true; false; true |]\n  in\n  let result = Nx.compress ~condition t in\n  check_t \"compress no axis\" [| 3 |] [| 1.; 3.; 5. |] result\n\nlet test_compress_with_axis () =\n  let t =\n    Nx.create Nx.float32 [| 3; 4 |]\n      [| 1.; 2.; 3.; 4.; 5.; 6.; 7.; 8.; 9.; 10.; 11.; 12. |]\n  in\n  let condition = Nx.create Nx.bool [| 3 |] [| false; true; true |] in\n  let result = Nx.compress ~axis:0 ~condition t in\n  check_t \"compress with axis\" [| 2; 4 |]\n    [| 5.; 6.; 7.; 8.; 9.; 10.; 11.; 12. 
|]\n    result\n\nlet test_compress_empty_result () =\n  let t = Nx.create Nx.float32 [| 3 |] [| 1.; 2.; 3. |] in\n  let condition = Nx.create Nx.bool [| 3 |] [| false; false; false |] in\n  let result = Nx.compress ~condition t in\n  check_shape \"compress empty result\" [| 0 |] result\n\n(* ───── Extract Tests ───── *)\n\nlet test_extract_basic () =\n  let t = Nx.create Nx.float32 [| 2; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6. |] in\n  let condition =\n    Nx.create Nx.bool [| 2; 3 |] [| true; false; true; false; true; false |]\n  in\n  let result = Nx.extract ~condition t in\n  check_t \"extract basic\" [| 3 |] [| 1.; 3.; 5. |] result\n\nlet test_extract_from_comparison () =\n  let t = Nx.create Nx.float32 [| 3; 3 |] (Array.init 9 float_of_int) in\n  let condition = Nx.greater_s t 4. in\n  let result = Nx.extract ~condition t in\n  check_t \"extract from comparison\" [| 4 |] [| 5.; 6.; 7.; 8. |] result\n\n(* ───── Nonzero Tests ───── *)\n\nlet test_nonzero_1d () =\n  let t = Nx.create Nx.float32 [| 5 |] [| 0.; 1.; 0.; 3.; 0. |] in\n  let indices = Nx.nonzero t in\n  equal ~msg:\"nonzero 1d length\" int 1 (Array.length indices);\n  let expected = [| 1.; 3. |] in\n  check_t \"nonzero 1d indices\" [| 2 |] expected\n    (Nx.astype Nx.float32 indices.(0))\n\nlet test_nonzero_2d () =\n  let t =\n    Nx.create Nx.float32 [| 3; 3 |] [| 0.; 1.; 0.; 2.; 0.; 3.; 0.; 0.; 4. |]\n  in\n  let indices = Nx.nonzero t in\n  equal ~msg:\"nonzero 2d length\" int 2 (Array.length indices);\n  (* Row indices *)\n  let expected_rows = [| 0.; 1.; 1.; 2. |] in\n  check_t \"nonzero 2d rows\" [| 4 |] expected_rows\n    (Nx.astype Nx.float32 indices.(0));\n  (* Column indices *)\n  let expected_cols = [| 1.; 0.; 2.; 2. 
|] in\n  check_t \"nonzero 2d cols\" [| 4 |] expected_cols\n    (Nx.astype Nx.float32 indices.(1))\n\nlet test_nonzero_empty () =\n  let t = Nx.zeros Nx.float32 [| 3; 3 |] in\n  let indices = Nx.nonzero t in\n  equal ~msg:\"nonzero empty length\" int 2 (Array.length indices);\n  Array.iter (fun idx -> check_shape \"nonzero empty shape\" [| 0 |] idx) indices\n\n(* ───── Argwhere Tests ───── *)\n\nlet test_argwhere_basic () =\n  let t =\n    Nx.create Nx.float32 [| 3; 3 |] [| 0.; 1.; 0.; 2.; 0.; 3.; 0.; 0.; 4. |]\n  in\n  let coords = Nx.argwhere t in\n  check_shape \"argwhere shape\" [| 4; 2 |] coords;\n  let expected = [| 0.; 1.; 1.; 0.; 1.; 2.; 2.; 2. |] in\n  check_t \"argwhere coords\" [| 4; 2 |] expected (Nx.astype Nx.float32 coords)\n\nlet test_argwhere_empty () =\n  let t = Nx.zeros Nx.float32 [| 3; 3 |] in\n  let coords = Nx.argwhere t in\n  check_shape \"argwhere empty\" [| 0; 2 |] coords\n\nlet test_argwhere_1d () =\n  let t = Nx.create Nx.float32 [| 5 |] [| 0.; 1.; 0.; 3.; 0. |] in\n  let coords = Nx.argwhere t in\n  check_shape \"argwhere 1d shape\" [| 2; 1 |] coords;\n  let expected = [| 1.; 3. 
|] in\n  check_t \"argwhere 1d coords\" [| 2; 1 |] expected (Nx.astype Nx.float32 coords)\n\n(* ───── Edge Cases and Error Tests ───── *)\n\nlet test_item_wrong_indices () =\n  let t = Nx.create Nx.float32 [| 2; 3 |] (Array.init 6 float_of_int) in\n  raises ~msg:\"item wrong number of indices\"\n    (Invalid_argument \"item: need 2 indices for 2-d tensor, got 1\") (fun () ->\n      ignore (Nx.item [ 1 ] t))\n\nlet test_set_slice_broadcast () =\n  let t = Nx.zeros Nx.float32 [| 3; 4 |] in\n  let value = Nx.ones Nx.float32 [| 1 |] in\n  Nx.set_slice [ Nx.R (1, 2) ] t value;\n  (* Value should be broadcast to shape [1, 4] *)\n  equal ~msg:\"set_slice broadcast\" (float 1e-6) 1.0 (Nx.item [ 1; 2 ] t)\n\nlet test_index_chained () =\n  let t = Nx.create Nx.float32 [| 4; 5; 6 |] (Array.init 120 float_of_int) in\n  (* Chain multiple index operations *)\n  let indexed1 = Nx.slice [ Nx.R (1, 3); Nx.A; Nx.A ] t in\n  let indexed2 = Nx.slice [ Nx.A; Nx.L [ 0; 2; 4 ]; Nx.A ] indexed1 in\n  let indexed3 = Nx.slice [ Nx.I 1; Nx.I 1; Nx.R (2, 5) ] indexed2 in\n  check_shape \"index chained shape\" [| 3 |] indexed3\n\nlet test_take_empty_indices () =\n  let t = Nx.create Nx.float32 [| 5 |] [| 1.; 2.; 3.; 4.; 5. |] in\n  let indices = Nx.create Nx.int32 [| 0 |] [||] in\n  let result = Nx.take indices t in\n  check_shape \"take empty indices\" [| 0 |] result\n\nlet test_compress_condition_mismatch () =\n  let t = Nx.create Nx.float32 [| 5 |] [| 1.; 2.; 3.; 4.; 5. 
|] in\n  let condition = Nx.create Nx.bool [| 3 |] [| true; false; true |] in\n  raises ~msg:\"compress condition mismatch\"\n    (Invalid_argument \"compress: length 3 doesn't match axis 0 size 5\")\n    (fun () -> ignore (Nx.compress ~axis:0 ~condition t))\n\nlet test_extract_shape_mismatch () =\n  let t = Nx.create Nx.float32 [| 2; 3 |] (Array.init 6 float_of_int) in\n  let condition = Nx.create Nx.bool [| 2; 2 |] [| true; false; true; false |] in\n  raises ~msg:\"extract shape mismatch\"\n    (Invalid_argument \"extract: shape mismatch\") (fun () ->\n      ignore (Nx.extract ~condition t))\n\n(* ───── Test Suite Organization ───── *)\n\nlet slice_tests =\n  [\n    test \"slice basic\" test_slice_basic;\n    test \"slice with step\" test_slice_with_step;\n    test \"slice negative indices\" test_slice_negative_indices;\n    test \"slice 2d\" test_slice_2d;\n    test \"slice empty\" test_slice_empty;\n  ]\n\nlet index_tests =\n  [\n    test \"index all\" test_index_all;\n    test \"index at\" test_index_at;\n    test \"index at negative\" test_index_at_negative;\n    test \"index rng\" test_index_rng;\n    test \"index rngs\" test_index_rngs;\n    test \"index idx\" test_index_idx;\n    test \"index idx repeated\" test_index_idx_repeated;\n    test \"index idx reorder\" test_index_idx_reorder;\n    test \"index mixed\" test_index_mixed;\n    test \"set_slice at\" test_set_slice_at;\n    test \"set_slice rng\" test_set_slice_rng;\n    test \"set_slice idx\" test_set_slice_idx;\n  ]\n\nlet item_tests =\n  [\n    test \"item\" test_item;\n    test \"item negative indices\" test_item_negative_indices;\n    test \"set_item\" test_set_item;\n    test \"item wrong indices\" test_item_wrong_indices;\n  ]\n\nlet take_tests =\n  [\n    test \"take basic\" test_take_basic;\n    test \"take with axis\" test_take_with_axis;\n    test \"take mode wrap\" test_take_mode_wrap;\n    test \"take mode clip\" test_take_mode_clip;\n    test \"take negative indices\" 
test_take_negative_indices;\n    test \"take_along_axis 1d\" test_take_along_axis_1d;\n    test \"take_along_axis 2d\" test_take_along_axis_2d;\n    test \"take empty indices\" test_take_empty_indices;\n  ]\n\nlet put_tests =\n  [\n    test \"put basic\" test_put_basic;\n    test \"put with axis\" test_put_with_axis;\n    test \"put mode wrap\" test_put_mode_wrap;\n    test \"put mode clip\" test_put_mode_clip;\n    test \"index_put basic\" test_index_put_basic;\n    test \"index_put mode wrap\" test_index_put_mode_wrap;\n    test \"put_along_axis\" test_put_along_axis;\n  ]\n\nlet compress_extract_tests =\n  [\n    test \"compress no axis\" test_compress_no_axis;\n    test \"compress with axis\" test_compress_with_axis;\n    test \"compress empty result\" test_compress_empty_result;\n    test \"extract basic\" test_extract_basic;\n    test \"extract from comparison\" test_extract_from_comparison;\n    test \"compress condition mismatch\" test_compress_condition_mismatch;\n    test \"extract shape mismatch\" test_extract_shape_mismatch;\n  ]\n\nlet nonzero_argwhere_tests =\n  [\n    test \"nonzero 1d\" test_nonzero_1d;\n    test \"nonzero 2d\" test_nonzero_2d;\n    test \"nonzero empty\" test_nonzero_empty;\n    test \"argwhere basic\" test_argwhere_basic;\n    test \"argwhere empty\" test_argwhere_empty;\n    test \"argwhere 1d\" test_argwhere_1d;\n  ]\n\nlet edge_case_tests =\n  [\n    test \"set_slice broadcast\" test_set_slice_broadcast;\n    test \"index chained\" test_index_chained;\n  ]\n\nlet suite =\n  [\n    group \"slice\" slice_tests;\n    group \"index\" index_tests;\n    group \"item\" item_tests;\n    group \"take\" take_tests;\n    group \"put\" put_tests;\n    group \"compress/extract\" compress_extract_tests;\n    group \"nonzero/argwhere\" nonzero_argwhere_tests;\n    group \"edge cases\" edge_case_tests;\n  ]\n\nlet () = run \"Nx Indexing\" suite\n"
  },
  {
    "path": "packages/nx/test/test_nx_io.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Windtrap\n\n(* Helper functions *)\nlet temp_file prefix suffix = Filename.temp_file prefix suffix\n\nlet read_file path =\n  let ic = open_in path in\n  Fun.protect\n    ~finally:(fun () -> close_in ic)\n    (fun () ->\n      let len = in_channel_length ic in\n      really_input_string ic len)\n\nlet numpy_savetxt_float64_fixture =\n  \"##alpha beta gamma\\r\\n\"\n  ^ \"1.234567889999999890e+00,2.345678910000000172e+00,3.456789119999999826e+00\\r\\n\"\n  ^ \"4.567891229999999858e+00,5.678912340000000114e+00,6.789123449999999949e+00\\r\\n\"\n  ^ \"##generated by numpy\\r\\n\"\n\nlet read_file_bytes path =\n  let ic = open_in_bin path in\n  Fun.protect\n    ~finally:(fun () -> close_in ic)\n    (fun () ->\n      let len = in_channel_length ic in\n      really_input_string ic len)\n\nlet array_approx_equal ?(eps = 1e-6) a b =\n  try\n    let a_flat = Nx.flatten a in\n    let b_flat = Nx.flatten b in\n    let diff = Nx.sub a_flat b_flat in\n    let abs_diff = Nx.abs diff in\n    (* Get maximum value - reshape to scalar and extract *)\n    let max_diff = Nx.max abs_diff ~axes:[ 0 ] ~keepdims:false in\n    let max_val = Nx.item [] max_diff in\n    max_val < eps\n  with _ -> false\n\nlet check_array_approx msg ?(eps = 1e-6) expected actual =\n  if not (array_approx_equal ~eps expected actual) then\n    fail (Printf.sprintf \"%s: arrays not approximately equal\" msg)\n\n(* Test NPY format *)\nlet test_npy_save_load_float32 () =\n  let test_data = Nx.arange Nx.float32 0 12 1 |> Nx.reshape [| 3; 4 |] in\n  let path = temp_file \"test_npy_\" \".npy\" in\n\n  (* Save the data *)\n  Nx_io.save_npy path test_data;\n\n  (* Load it back *)\n  let loaded = Nx_io.load_npy path in\n  let loaded_f32 = 
Nx_io.to_typed Nx.float32 loaded in\n\n  (* Check shape and values *)\n  equal ~msg:\"loaded shape\" (array int) [| 3; 4 |] (Nx.shape loaded_f32);\n  check_array_approx \"loaded values\" test_data loaded_f32;\n\n  (* Clean up *)\n  Sys.remove path\n\nlet test_npy_save_load_int64 () =\n  let test_data = Nx.arange Nx.int64 0 20 2 |> Nx.reshape [| 2; 5 |] in\n  let path = temp_file \"test_npy_\" \".npy\" in\n\n  (* Save the data *)\n  Nx_io.save_npy path test_data;\n\n  (* Load it back *)\n  let loaded = Nx_io.load_npy path in\n  let loaded_i64 = Nx_io.to_typed Nx.int64 loaded in\n\n  (* Check shape *)\n  equal ~msg:\"loaded shape\" (array int) [| 2; 5 |] (Nx.shape loaded_i64);\n\n  (* Check values *)\n  for i = 0 to 1 do\n    for j = 0 to 4 do\n      let expected = (i * 10) + (j * 2) in\n      let actual = Nx.item [ i; j ] loaded_i64 |> Int64.to_int in\n      equal ~msg:(Printf.sprintf \"value at [%d, %d]\" i j) int expected actual\n    done\n  done;\n\n  (* Clean up *)\n  Sys.remove path\n\nlet test_npy_overwrite_protection () =\n  let test_data = Nx.ones Nx.float32 [| 2; 2 |] in\n  let path = temp_file \"test_npy_\" \".npy\" in\n\n  (* Save initial file *)\n  Nx_io.save_npy path test_data;\n\n  (* Try to save with overwrite=false - should fail *)\n  (try\n     Nx_io.save_npy ~overwrite:false path test_data;\n     fail \"expected Failure for overwrite protection\"\n   with Failure _ -> ());\n\n  (* Save with overwrite=true - should succeed *)\n  Nx_io.save_npy ~overwrite:true path test_data;\n\n  (* Clean up *)\n  Sys.remove path\n\n(* Test NPZ format *)\nlet test_npz_save_load_multiple () =\n  let weights = Nx.Rng.run ~seed:0 (fun () -> Nx.randn Nx.float32 [| 5; 3 |]) in\n  let bias = Nx.zeros Nx.float32 [| 3 |] in\n  let scale = Nx.ones Nx.float64 [| 3 |] in\n  let path = temp_file \"test_npz_\" \".npz\" in\n\n  (* Save multiple arrays *)\n  Nx_io.save_npz path\n    [\n      (\"weights\", Nx_io.P weights);\n      (\"bias\", Nx_io.P bias);\n      (\"scale\", 
Nx_io.P scale);\n    ];\n\n  (* Load all back *)\n  let archive = Nx_io.load_npz path in\n\n  (* Check we got all arrays *)\n  equal ~msg:\"number of arrays\" int 3 (Hashtbl.length archive);\n  equal ~msg:\"has weights\" bool true (Hashtbl.mem archive \"weights\");\n  equal ~msg:\"has bias\" bool true (Hashtbl.mem archive \"bias\");\n  equal ~msg:\"has scale\" bool true (Hashtbl.mem archive \"scale\");\n\n  (* Check shapes *)\n  let loaded_weights =\n    Hashtbl.find archive \"weights\" |> Nx_io.to_typed Nx.float32\n  in\n  let loaded_bias = Hashtbl.find archive \"bias\" |> Nx_io.to_typed Nx.float32 in\n  let loaded_scale =\n    Hashtbl.find archive \"scale\" |> Nx_io.to_typed Nx.float64\n  in\n\n  equal ~msg:\"weights shape\" (array int) [| 5; 3 |] (Nx.shape loaded_weights);\n  equal ~msg:\"bias shape\" (array int) [| 3 |] (Nx.shape loaded_bias);\n  equal ~msg:\"scale shape\" (array int) [| 3 |] (Nx.shape loaded_scale);\n\n  (* Clean up *)\n  Sys.remove path\n\nlet test_npz_load_entry () =\n  let array1 = Nx.arange Nx.float32 0 10 1 in\n  let array2 = Nx.arange Nx.int32 10 20 1 in\n  let array3 = Nx.ones Nx.float64 [| 2; 3 |] in\n  let path = temp_file \"test_npz_\" \".npz\" in\n\n  (* Save arrays *)\n  Nx_io.save_npz path\n    [\n      (\"array1\", Nx_io.P array1);\n      (\"array2\", Nx_io.P array2);\n      (\"array3\", Nx_io.P array3);\n    ];\n\n  (* Load specific entries *)\n  let loaded1 =\n    Nx_io.load_npz_entry ~name:\"array1\" path |> Nx_io.to_typed Nx.float32\n  in\n  let loaded2 =\n    Nx_io.load_npz_entry ~name:\"array2\" path |> Nx_io.to_typed Nx.int32\n  in\n  let loaded3 =\n    Nx_io.load_npz_entry ~name:\"array3\" path |> Nx_io.to_typed Nx.float64\n  in\n\n  equal ~msg:\"array1 shape\" (array int) [| 10 |] (Nx.shape loaded1);\n  equal ~msg:\"array2 shape\" (array int) [| 10 |] (Nx.shape loaded2);\n  equal ~msg:\"array3 shape\" (array int) [| 2; 3 |] (Nx.shape loaded3);\n\n  (* Test loading non-existent entry *)\n  (try\n     ignore 
(Nx_io.load_npz_entry ~name:\"nonexistent\" path);\n     fail \"expected Failure for missing entry\"\n   with Failure _ -> ());\n\n  (* Clean up *)\n  Sys.remove path\n\n(* Test TXT format *)\n\nlet test_txt_save_load_float32 () =\n  let data = Nx.reshape [| 2; 3 |] (Nx.arange Nx.float32 0 6 1) in\n  let path = temp_file \"test_txt_\" \".txt\" in\n  Fun.protect\n    ~finally:(fun () -> Sys.remove path)\n    (fun () ->\n      Nx_io.save_txt path data;\n      let loaded = Nx_io.load_txt path Nx.float32 in\n      equal ~msg:\"shape\" (array int) [| 2; 3 |] (Nx.shape loaded);\n      check_array_approx \"values\" data loaded)\n\nlet test_txt_save_load_int64 () =\n  let data = Nx.reshape [| 3; 2 |] (Nx.arange Nx.int64 0 6 1) in\n  let path = temp_file \"test_txt_\" \".txt\" in\n  Fun.protect\n    ~finally:(fun () -> Sys.remove path)\n    (fun () ->\n      Nx_io.save_txt path data;\n      let loaded = Nx_io.load_txt path Nx.int64 in\n      equal ~msg:\"shape\" (array int) [| 3; 2 |] (Nx.shape loaded);\n      for i = 0 to 2 do\n        for j = 0 to 1 do\n          let expected = Nx.item [ i; j ] data |> Int64.to_int in\n          let actual = Nx.item [ i; j ] loaded |> Int64.to_int in\n          equal ~msg:(Printf.sprintf \"[%d,%d]\" i j) int expected actual\n        done\n      done)\n\nlet test_txt_float_precision () =\n  let value = 3.141592653589793 in\n  let data = Nx.full Nx.float64 [| 1 |] value in\n  let path = temp_file \"test_txt_precision_\" \".txt\" in\n  Fun.protect\n    ~finally:(fun () -> Sys.remove path)\n    (fun () ->\n      Nx_io.save_txt path data;\n      let expected = Printf.sprintf \"%.18e\" value in\n      let content = read_file path |> String.trim in\n      equal ~msg:\"formatted value\" string expected content;\n      let loaded = Nx_io.load_txt path Nx.float64 in\n      let loaded_value = Nx.item [ 0 ] loaded in\n      equal ~msg:\"round-trip\" (float 1e-15) value loaded_value)\n\nlet test_txt_bool_roundtrip () =\n  let data =\n    
Nx.create Nx.bool [| 2; 3 |] [| true; false; true; false; true; false |]\n  in\n  let path = temp_file \"test_txt_bool_\" \".txt\" in\n  Fun.protect\n    ~finally:(fun () -> Sys.remove path)\n    (fun () ->\n      Nx_io.save_txt path data;\n      let loaded = Nx_io.load_txt path Nx.bool in\n      equal ~msg:\"shape\" (array int) [| 2; 3 |] (Nx.shape loaded);\n      let expected = Nx.to_array data in\n      let actual = Nx.to_array loaded in\n      equal ~msg:\"values\" (array bool) expected actual;\n      let lines =\n        read_file path |> String.split_on_char '\\n'\n        |> List.filter_map (fun line ->\n            let trimmed = String.trim line in\n            if trimmed = \"\" then None else Some trimmed)\n      in\n      match lines with\n      | [ first; second ] ->\n          equal ~msg:\"row 1\" string \"1 0 1\" first;\n          equal ~msg:\"row 2\" string \"0 1 0\" second\n      | _ -> fail \"Unexpected boolean txt contents\")\n\nlet test_txt_skiprows_max_rows () =\n  let data = Nx.reshape [| 3; 2 |] (Nx.arange Nx.float32 0 6 1) in\n  let path = temp_file \"test_txt_skip_\" \".txt\" in\n  Fun.protect\n    ~finally:(fun () -> Sys.remove path)\n    (fun () ->\n      Nx_io.save_txt ~header:\"generated by nx\" path data;\n      let loaded = Nx_io.load_txt path Nx.float32 in\n      equal ~msg:\"shape\" (array int) [| 3; 2 |] (Nx.shape loaded);\n      check_array_approx \"full load\" data loaded;\n      let subset = Nx_io.load_txt ~skiprows:2 ~max_rows:1 path Nx.float32 in\n      equal ~msg:\"subset shape\" (array int) [| 2 |] (Nx.shape subset);\n      let expected = Nx.create Nx.float32 [| 2 |] [| 2.0; 3.0 |] in\n      check_array_approx \"subset values\" expected subset)\n\nlet test_txt_load_numpy_fixture () =\n  let path = temp_file \"numpy_fixture\" \".txt\" in\n  Fun.protect\n    ~finally:(fun () -> Sys.remove path)\n    (fun () ->\n      let oc = open_out_bin path in\n      Fun.protect\n        ~finally:(fun () -> close_out oc)\n        (fun () -> 
output_string oc numpy_savetxt_float64_fixture);\n      let loaded = Nx_io.load_txt ~sep:\",\" ~comments:\"##\" path Nx.float64 in\n      equal ~msg:\"shape\" (array int) [| 2; 3 |] (Nx.shape loaded);\n      let expected =\n        Nx.create Nx.float64 [| 2; 3 |]\n          [|\n            1.23456789;\n            2.34567891;\n            3.45678912;\n            4.56789123;\n            5.67891234;\n            6.78912345;\n          |]\n      in\n      check_array_approx ~eps:1e-12 \"values\" expected loaded)\n\nlet test_txt_save_numpy_compat () =\n  let data =\n    Nx.create Nx.float64 [| 2; 3 |]\n      [|\n        1.23456789; 2.34567891; 3.45678912; 4.56789123; 5.67891234; 6.78912345;\n      |]\n  in\n  let path = temp_file \"numpy_save_fixture\" \".txt\" in\n  Fun.protect\n    ~finally:(fun () -> Sys.remove path)\n    (fun () ->\n      Nx_io.save_txt ~sep:\",\" ~newline:\"\\r\\n\" ~comments:\"##\"\n        ~header:\"alpha beta gamma\" ~footer:\"generated by numpy\" path data;\n      let contents = read_file path in\n      equal ~msg:\"numpy compatible output\" string numpy_savetxt_float64_fixture\n        contents)\n\nlet txt_tests =\n  [\n    test \"Save/load txt float32\" test_txt_save_load_float32;\n    test \"Save/load txt int64\" test_txt_save_load_int64;\n    test \"Float precision formatting\" test_txt_float_precision;\n    test \"Bool round-trip\" test_txt_bool_roundtrip;\n    test \"Skip rows and max_rows\" test_txt_skiprows_max_rows;\n    test \"Save numpy-compatible file\" test_txt_save_numpy_compat;\n    test \"Load numpy-generated file\" test_txt_load_numpy_fixture;\n  ]\n\nlet test_safetensors_save_load () =\n  let weights, embeddings =\n    Nx.Rng.run ~seed:10 (fun () ->\n        let w = Nx.randn Nx.float32 [| 10; 5 |] in\n        let e = Nx.randn Nx.float32 [| 100; 64 |] in\n        (w, e))\n  in\n  let bias = Nx.zeros Nx.float32 [| 5 |] in\n  let path = temp_file \"test_safetensors_\" \".safetensors\" in\n\n  (* Save tensors *)\n  
Nx_io.save_safetensors path\n    [\n      (\"model.weights\", Nx_io.P weights);\n      (\"model.bias\", Nx_io.P bias);\n      (\"embeddings\", Nx_io.P embeddings);\n    ];\n\n  (* Load back *)\n  let archive = Nx_io.load_safetensors path in\n\n  (* Check we got all tensors *)\n  equal ~msg:\"number of tensors\" int 3 (Hashtbl.length archive);\n  equal ~msg:\"has weights\" bool true (Hashtbl.mem archive \"model.weights\");\n  equal ~msg:\"has bias\" bool true (Hashtbl.mem archive \"model.bias\");\n  equal ~msg:\"has embeddings\" bool true (Hashtbl.mem archive \"embeddings\");\n\n  (* Check shapes *)\n  let loaded_weights =\n    Hashtbl.find archive \"model.weights\" |> Nx_io.to_typed Nx.float32\n  in\n  let loaded_bias =\n    Hashtbl.find archive \"model.bias\" |> Nx_io.to_typed Nx.float32\n  in\n  let loaded_embeddings =\n    Hashtbl.find archive \"embeddings\" |> Nx_io.to_typed Nx.float32\n  in\n\n  equal ~msg:\"weights shape\" (array int) [| 10; 5 |] (Nx.shape loaded_weights);\n  equal ~msg:\"bias shape\" (array int) [| 5 |] (Nx.shape loaded_bias);\n  equal ~msg:\"embeddings shape\" (array int) [| 100; 64 |]\n    (Nx.shape loaded_embeddings);\n\n  (* Check values are preserved *)\n  check_array_approx \"weights values\" weights loaded_weights;\n  check_array_approx \"bias values\" bias loaded_bias;\n  check_array_approx \"embeddings values\" embeddings loaded_embeddings;\n\n  (* Clean up *)\n  Sys.remove path\n\nlet test_safetensors_different_dtypes () =\n  let path = temp_file \"test_safetensors_dtypes_\" \".safetensors\" in\n\n  (* Create arrays of different types *)\n  let f32_data = Nx.arange Nx.float32 0 10 1 in\n  let f64_data = Nx.arange Nx.float64 10 20 1 in\n  let i32_data = Nx.arange Nx.int32 20 30 1 in\n\n  (* Save *)\n  Nx_io.save_safetensors path\n    [\n      (\"float32_array\", Nx_io.P f32_data);\n      (\"float64_array\", Nx_io.P f64_data);\n      (\"int32_array\", Nx_io.P i32_data);\n    ];\n\n  (* Load and verify *)\n  let archive = 
Nx_io.load_safetensors path in\n\n  let loaded_f32 =\n    Hashtbl.find archive \"float32_array\" |> Nx_io.to_typed Nx.float32\n  in\n  let loaded_f64 =\n    Hashtbl.find archive \"float64_array\" |> Nx_io.to_typed Nx.float64\n  in\n  let loaded_i32 =\n    Hashtbl.find archive \"int32_array\" |> Nx_io.to_typed Nx.int32\n  in\n\n  check_array_approx \"float32 values\" f32_data loaded_f32;\n  check_array_approx \"float64 values\" ~eps:1e-10 f64_data loaded_f64;\n\n  (* Check int32 values *)\n  for i = 0 to 9 do\n    let expected = 20 + i in\n    let actual = Nx.item [ i ] loaded_i32 |> Int32.to_int in\n    equal ~msg:(Printf.sprintf \"int32 value at [%d]\" i) int expected actual\n  done;\n\n  (* Clean up *)\n  Sys.remove path\n\n(* Test dtype conversions *)\nlet test_dtype_conversions () =\n  (* Create test data *)\n  let original = Nx.arange Nx.float32 0 10 1 in\n  let path = temp_file \"test_dtype_\" \".npy\" in\n\n  (* Save and load *)\n  Nx_io.save_npy path original;\n  let loaded = Nx_io.load_npy path in\n\n  (* Test successful conversion *)\n  let as_f32 = Nx_io.to_typed Nx.float32 loaded in\n  check_array_approx \"float32 conversion\" original as_f32;\n\n  (* Test failing conversion (wrong dtype) *)\n  (try\n     ignore (Nx_io.to_typed Nx.int32 loaded);\n     fail \"expected Failure for wrong dtype\"\n   with Failure _ -> ());\n\n  (* Clean up *)\n  Sys.remove path\n\n(* Test edge cases *)\nlet test_empty_arrays () =\n  (* Empty array *)\n  let empty = Nx.zeros Nx.float32 [| 0 |] in\n  let path = temp_file \"test_empty_\" \".npy\" in\n\n  Nx_io.save_npy path empty;\n  let loaded = Nx_io.load_npy path in\n  let loaded_f32 = Nx_io.to_typed Nx.float32 loaded in\n\n  equal ~msg:\"empty array shape\" (array int) [| 0 |] (Nx.shape loaded_f32);\n\n  (* Clean up *)\n  Sys.remove path\n\nlet test_large_arrays () =\n  (* Large array (but not too large for tests) *)\n  let large = Nx.ones Nx.float32 [| 100; 100 |] in\n  let path = temp_file \"test_large_\" \".npy\" in\n\n 
 Nx_io.save_npy path large;\n  let loaded = Nx_io.load_npy path in\n  let loaded_f32 = Nx_io.to_typed Nx.float32 loaded in\n\n  equal ~msg:\"large array shape\" (array int) [| 100; 100 |]\n    (Nx.shape loaded_f32);\n\n  (* Verify all values are 1 - sum and check *)\n  let sum = Nx.sum loaded_f32 ~axes:[ 0; 1 ] ~keepdims:false in\n  let sum_val = Nx.item [] sum in\n  equal ~msg:\"large array sum\" (float 1e-3) 10000.0 sum_val;\n\n  (* Clean up *)\n  Sys.remove path\n\nlet test_high_dimensional_arrays () =\n  (* 5D array *)\n  let high_dim =\n    Nx.arange Nx.float32 0 120 1 |> Nx.reshape [| 2; 3; 4; 5; 1 |]\n  in\n  let path = temp_file \"test_highdim_\" \".npy\" in\n\n  Nx_io.save_npy path high_dim;\n  let loaded = Nx_io.load_npy path in\n  let loaded_f32 = Nx_io.to_typed Nx.float32 loaded in\n\n  equal ~msg:\"5D array shape\" (array int) [| 2; 3; 4; 5; 1 |]\n    (Nx.shape loaded_f32);\n  check_array_approx \"5D array values\" high_dim loaded_f32;\n\n  (* Clean up *)\n  Sys.remove path\n\nlet fixture_dir = \"fixtures\"\n\n(* Extract raw tensor payload from a safetensors file *)\nlet safetensors_payload path =\n  let buf = read_file_bytes path in\n  let hdr_len =\n    let get i = Int64.of_int (Char.code buf.[i]) in\n    Int64.(\n      to_int\n        (logor (get 0)\n           (logor\n              (shift_left (get 1) 8)\n              (logor\n                 (shift_left (get 2) 16)\n                 (logor\n                    (shift_left (get 3) 24)\n                    (logor\n                       (shift_left (get 4) 32)\n                       (logor\n                          (shift_left (get 5) 40)\n                          (logor\n                             (shift_left (get 6) 48)\n                             (shift_left (get 7) 56)))))))))\n  in\n  let start = 8 + hdr_len in\n  String.sub buf start (String.length buf - start)\n\n(* Test SafeTensors with float16 and bfloat16 round-trip *)\nlet test_safetensors_float16_roundtrip () =\n  let test_data = 
Nx.full Nx.float16 [| 2; 3 |] 1.5 in\n  let path = temp_file \"test_safetensors_f16_\" \".safetensors\" in\n\n  (* Save the data *)\n  Nx_io.save_safetensors path [ (\"test_f16\", Nx_io.P test_data) ];\n\n  (* Load it back *)\n  let archive = Nx_io.load_safetensors path in\n  let loaded = Hashtbl.find archive \"test_f16\" |> Nx_io.to_typed Nx.float16 in\n\n  (* Check shape and values *)\n  equal ~msg:\"float16 shape\" (array int) [| 2; 3 |] (Nx.shape loaded);\n  check_array_approx \"float16 values\" ~eps:1e-3 test_data loaded;\n\n  (* Clean up *)\n  Sys.remove path\n\nlet test_safetensors_float16_bit_exact () =\n  (* Fixture generated by Python: F16 bits [0x0000, 0x0001, 0x3C00, 0x7C00,\n     0x7E01] = [+0, smallest subnormal, 1.0, +inf, NaN] *)\n  let fixture = Filename.concat fixture_dir \"f16_bit_exact.safetensors\" in\n  let archive = Nx_io.load_safetensors fixture in\n  let packed = Hashtbl.find archive \"f16_tensor\" in\n  let values = packed |> Nx_io.to_typed Nx.float16 |> Nx.to_array in\n  equal ~msg:\"subnormal preserved\" bool true (values.(1) <> 0.0);\n  equal ~msg:\"nan preserved\" bool true (Float.is_nan values.(4));\n  (* Round-trip: save and check raw payload is identical *)\n  let path_out = temp_file \"test_f16_rt_\" \".safetensors\" in\n  Fun.protect\n    ~finally:(fun () -> Sys.remove path_out)\n    (fun () ->\n      Nx_io.save_safetensors path_out [ (\"f16_tensor\", packed) ];\n      let payload_in = safetensors_payload fixture in\n      let payload_out = safetensors_payload path_out in\n      equal ~msg:\"float16 payload round-trip\" string payload_in payload_out)\n\nlet test_safetensors_bfloat16_roundtrip () =\n  let test_data = Nx.full Nx.bfloat16 [| 2; 3 |] 1.5 in\n  let path = temp_file \"test_safetensors_bf16_\" \".safetensors\" in\n\n  (* Save the data *)\n  Nx_io.save_safetensors path [ (\"test_bf16\", Nx_io.P test_data) ];\n\n  (* Load it back *)\n  let archive = Nx_io.load_safetensors path in\n  let loaded = Hashtbl.find archive 
\"test_bf16\" |> Nx_io.to_typed Nx.bfloat16 in\n\n  (* Check shape and values *)\n  equal ~msg:\"bfloat16 shape\" (array int) [| 2; 3 |] (Nx.shape loaded);\n  check_array_approx \"bfloat16 values\" ~eps:1e-3 test_data loaded;\n\n  (* Clean up *)\n  Sys.remove path\n\nlet test_safetensors_bfloat16_bit_exact () =\n  (* Fixture generated by Python: BF16 bits [0x0000, 0x0001, 0x3F80, 0x7F80,\n     0x7FC1] = [+0, smallest subnormal, 1.0, +inf, NaN] *)\n  let fixture = Filename.concat fixture_dir \"bf16_bit_exact.safetensors\" in\n  let archive = Nx_io.load_safetensors fixture in\n  let packed = Hashtbl.find archive \"bf16_tensor\" in\n  let values = packed |> Nx_io.to_typed Nx.bfloat16 |> Nx.to_array in\n  equal ~msg:\"bf16 subnormal preserved\" bool true (values.(1) <> 0.0);\n  equal ~msg:\"bf16 nan preserved\" bool true (Float.is_nan values.(4));\n  (* Round-trip: save and check raw payload is identical *)\n  let path_out = temp_file \"test_bf16_rt_\" \".safetensors\" in\n  Fun.protect\n    ~finally:(fun () -> Sys.remove path_out)\n    (fun () ->\n      Nx_io.save_safetensors path_out [ (\"bf16_tensor\", packed) ];\n      let payload_in = safetensors_payload fixture in\n      let payload_out = safetensors_payload path_out in\n      equal ~msg:\"bfloat16 payload round-trip\" string payload_in payload_out)\n\nlet () =\n  run \"Nx_io comprehensive tests\"\n    [\n      group \"npy\"\n        [\n          test \"Save/load float32\" test_npy_save_load_float32;\n          test \"Save/load int64\" test_npy_save_load_int64;\n          test \"Overwrite protection\" test_npy_overwrite_protection;\n        ];\n      group \"txt\" txt_tests;\n      group \"npz\"\n        [\n          test \"Save/load multiple arrays\" test_npz_save_load_multiple;\n          test \"Load specific entry\" test_npz_load_entry;\n        ];\n      group \"safetensors\"\n        [\n          test \"Save/load tensors\" test_safetensors_save_load;\n          test \"Different dtypes\" 
test_safetensors_different_dtypes;\n          test \"Float16 round-trip\" test_safetensors_float16_roundtrip;\n          test \"Float16 bit exact\" test_safetensors_float16_bit_exact;\n          test \"Bfloat16 round-trip\" test_safetensors_bfloat16_roundtrip;\n          test \"Bfloat16 bit exact\" test_safetensors_bfloat16_bit_exact;\n        ];\n      group \"dtype_conversions\"\n        [ test \"Basic conversions\" test_dtype_conversions ];\n      group \"edge_cases\"\n        [\n          test \"Empty arrays\" test_empty_arrays;\n          test \"Large arrays\" test_large_arrays;\n          test \"High dimensional arrays\" test_high_dimensional_arrays;\n        ];\n    ]\n"
  },
  {
    "path": "packages/nx/test/test_nx_linalg.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Linear algebra tests for Nx *)\n\nopen Windtrap\nopen Test_nx_support\n\n(* ───── Matrix Multiply Tests ───── *)\n\nlet test_matmul_1d_1d () =\n  let a = Nx.create Nx.float32 [| 3 |] [| 1.; 2.; 3. |] in\n  let b = Nx.create Nx.float32 [| 3 |] [| 4.; 5.; 6. |] in\n  let result = Nx.matmul a b in\n  check_t \"matmul 1d x 1d\" [||] [| 32.0 |] result\n\nlet test_matmul_1d_2d () =\n  let a = Nx.create Nx.float32 [| 3 |] [| 1.; 2.; 3. |] in\n  let b = Nx.create Nx.float32 [| 3; 4 |] (Array.init 12 float_of_int) in\n  let result = Nx.matmul a b in\n  check_t \"matmul 1d x 2d\" [| 4 |] [| 32.; 38.; 44.; 50. |] result\n\nlet test_matmul_2d_1d () =\n  let a = Nx.create Nx.float32 [| 3; 4 |] (Array.init 12 float_of_int) in\n  let b = Nx.create Nx.float32 [| 4 |] [| 1.; 2.; 3.; 4. |] in\n  let result = Nx.matmul a b in\n  check_t \"matmul 2d x 1d\" [| 3 |] [| 20.; 60.; 100. 
|] result\n\nlet test_matmul_batch () =\n  let a = Nx.create Nx.float32 [| 2; 3; 4 |] (Array.init 24 float_of_int) in\n  let b = Nx.create Nx.float32 [| 2; 4; 2 |] (Array.init 16 float_of_int) in\n  let result = Nx.matmul a b in\n  check_shape \"matmul batch shape\" [| 2; 3; 2 |] result;\n  (* Check first batch *)\n  equal ~msg:\"batch[0,0,0]\" (float 1e-6) 28.0 (Nx.item [ 0; 0; 0 ] result);\n  equal ~msg:\"batch[0,0,1]\" (float 1e-6) 34.0 (Nx.item [ 0; 0; 1 ] result)\n\nlet test_matmul_broadcast_batch () =\n  let a = Nx.create Nx.float32 [| 1; 3; 4 |] (Array.init 12 float_of_int) in\n  let b = Nx.create Nx.float32 [| 5; 4; 2 |] (Array.init 40 float_of_int) in\n  let result = Nx.matmul a b in\n  check_shape \"matmul broadcast batch shape\" [| 5; 3; 2 |] result\n\nlet test_matmul_2d_3d_broadcast () =\n  (*\n   * Test case: A (2D) @ B (3D)\n   * A shape: (2, 3) - to be broadcasted\n   * B shape: (4, 3, 2) - batched tensor\n   * Expected output shape: (4, 2, 2)\n   *)\n\n  (* A is a single 2x3 matrix *)\n  let a = Nx.create Nx.float32 [| 2; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6. 
|] in\n\n  (* B is a batch of four 3x2 matrices *)\n  let b =\n    Nx.create Nx.float32 [| 4; 3; 2 |]\n      [|\n        (* Batch 0 *)\n        1.;\n        2.;\n        3.;\n        4.;\n        5.;\n        6.;\n        (* Batch 1 *)\n        7.;\n        8.;\n        9.;\n        10.;\n        11.;\n        12.;\n        (* Batch 2 *)\n        1.;\n        0.;\n        0.;\n        1.;\n        1.;\n        0.;\n        (* Batch 3 *)\n        0.;\n        1.;\n        1.;\n        0.;\n        0.;\n        1.;\n      |]\n  in\n\n  (* Perform the matmul *)\n  let result = Nx.matmul a b in\n\n  (* Check shape *)\n  check_shape \"matmul 2d @ 3d shape\" [| 4; 2; 2 |] result;\n\n  (*\n   * Manually calculate the expected result:\n   *\n   * A = [[1, 2, 3],\n   *      [4, 5, 6]]\n   *\n   * B[0] = [[1, 2], [3, 4], [5, 6]]\n   * A @ B[0] = [[22, 28], [49, 64]]\n   *\n   * B[1] = [[7, 8], [9, 10], [11, 12]]\n   * A @ B[1] = [[58, 64], [139, 154]]\n   *\n   * B[2] = [[1, 0], [0, 1], [1, 0]]\n   * A @ B[2] = [[4, 2], [10, 5]]\n   *\n   * B[3] = [[0, 1], [1, 0], [0, 1]]\n   * A @ B[3] = [[2, 4], [5, 10]]\n   *)\n\n  (* Check batch 0 *)\n  equal ~msg:\"batch 0 [0,0]\" (float 1e-6) 22. (Nx.item [ 0; 0; 0 ] result);\n  equal ~msg:\"batch 0 [0,1]\" (float 1e-6) 28. (Nx.item [ 0; 0; 1 ] result);\n  equal ~msg:\"batch 0 [1,0]\" (float 1e-6) 49. (Nx.item [ 0; 1; 0 ] result);\n  equal ~msg:\"batch 0 [1,1]\" (float 1e-6) 64. (Nx.item [ 0; 1; 1 ] result);\n\n  (* Check batch 1 *)\n  equal ~msg:\"batch 1 [0,0]\" (float 1e-6) 58. (Nx.item [ 1; 0; 0 ] result);\n  equal ~msg:\"batch 1 [0,1]\" (float 1e-6) 64. (Nx.item [ 1; 0; 1 ] result);\n  equal ~msg:\"batch 1 [1,0]\" (float 1e-6) 139. (Nx.item [ 1; 1; 0 ] result);\n  equal ~msg:\"batch 1 [1,1]\" (float 1e-6) 154. (Nx.item [ 1; 1; 1 ] result);\n\n  (* Check batch 2 *)\n  equal ~msg:\"batch 2 [0,0]\" (float 1e-6) 4. (Nx.item [ 2; 0; 0 ] result);\n  equal ~msg:\"batch 2 [0,1]\" (float 1e-6) 2. 
(Nx.item [ 2; 0; 1 ] result);\n  equal ~msg:\"batch 2 [1,0]\" (float 1e-6) 10. (Nx.item [ 2; 1; 0 ] result);\n  equal ~msg:\"batch 2 [1,1]\" (float 1e-6) 5. (Nx.item [ 2; 1; 1 ] result);\n\n  (* Check batch 3 *)\n  equal ~msg:\"batch 3 [0,0]\" (float 1e-6) 2. (Nx.item [ 3; 0; 0 ] result);\n  equal ~msg:\"batch 3 [0,1]\" (float 1e-6) 4. (Nx.item [ 3; 0; 1 ] result);\n  equal ~msg:\"batch 3 [1,0]\" (float 1e-6) 5. (Nx.item [ 3; 1; 0 ] result);\n  equal ~msg:\"batch 3 [1,1]\" (float 1e-6) 10. (Nx.item [ 3; 1; 1 ] result)\n\nlet test_matmul_shape_error () =\n  let a = Nx.create Nx.float32 [| 3; 4 |] (Array.init 12 float_of_int) in\n  let b = Nx.create Nx.float32 [| 5; 6 |] (Array.init 30 float_of_int) in\n  raises ~msg:\"matmul shape error\"\n    (Invalid_argument\n       \"dot: cannot contract [3,4] (last axis: 4) to [5,6] (axis 0: 5) (size \\\n        4\\226\\137\\1605)\") (fun () -> ignore (Nx.matmul a b))\n\nlet test_matmul_empty () =\n  let a = Nx.create Nx.float32 [| 0; 5 |] [||] in\n  let b = Nx.create Nx.float32 [| 5; 3 |] (Array.init 15 float_of_int) in\n  let result = Nx.matmul a b in\n  check_shape \"matmul empty shape\" [| 0; 3 |] result\n\nlet test_matmul_transpose_optimization () =\n  (* Test that matmul handles transposed inputs efficiently *)\n  let a = Nx.create Nx.float32 [| 3; 4 |] (Array.init 12 float_of_int) in\n  let b = Nx.create Nx.float32 [| 5; 4 |] (Array.init 20 float_of_int) in\n  let bt = Nx.transpose b in\n  let result = Nx.matmul a bt in\n  check_shape \"matmul with transpose\" [| 3; 5 |] result\n\n(* ───── Dot Product Tests ───── *)\n\nlet test_dot_1d_1d () =\n  let a = Nx.create Nx.float32 [| 3 |] [| 1.; 2.; 3. |] in\n  let b = Nx.create Nx.float32 [| 3 |] [| 4.; 5.; 6. |] in\n  let result = Nx.dot a b in\n  check_t \"dot 1d x 1d\" [||] [| 32.0 |] result\n\nlet test_dot_2d_1d () =\n  let a = Nx.create Nx.float32 [| 2; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6. |] in\n  let b = Nx.create Nx.float32 [| 3 |] [| 7.; 8.; 9. 
|] in\n  let result = Nx.dot a b in\n  check_t \"dot 2d x 1d\" [| 2 |] [| 50.; 122. |] result\n\nlet test_dot_2d_2d () =\n  let a = Nx.create Nx.float32 [| 2; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6. |] in\n  let b = Nx.create Nx.float32 [| 3; 2 |] [| 7.; 8.; 9.; 10.; 11.; 12. |] in\n  let result = Nx.dot a b in\n  check_t \"dot 2d x 2d\" [| 2; 2 |] [| 58.; 64.; 139.; 154. |] result\n\nlet test_dot_higher_d () =\n  let a = Nx.create Nx.float32 [| 2; 2; 3 |] (Array.init 12 float_of_int) in\n  let b = Nx.create Nx.float32 [| 3; 2 |] (Array.init 6 float_of_int) in\n  let result = Nx.dot a b in\n  check_t \"dot higher-d\" [| 2; 2; 2 |]\n    [| 10.; 13.; 28.; 40.; 46.; 67.; 64.; 94. |]\n    result\n\nlet test_dot_scalar_result () =\n  (* Ensure dot product of 1D arrays returns proper scalar *)\n  let a = Nx.create Nx.float32 [| 3 |] [| 1.; 2.; 3. |] in\n  let b = Nx.create Nx.float32 [| 3 |] [| 4.; 5.; 6. |] in\n  let result = Nx.dot a b in\n  check_shape \"dot scalar shape\" [||] result;\n  equal ~msg:\"dot scalar value\" (float 1e-6) 32.0 (Nx.item [] result)\n\n(* ───── Solve Inverse Tests ───── *)\n\nlet test_solve_identity () =\n  let identity = Nx.eye Nx.float32 3 in\n  let b = Nx.create Nx.float32 [| 3 |] [| 1.; 2.; 3. |] in\n  let x = Nx.solve identity b in\n  check_t \"solve identity\" [| 3 |] [| 1.; 2.; 3. |] x\n\nlet test_solve_simple () =\n  let a = Nx.create Nx.float32 [| 2; 2 |] [| 3.; 1.; 1.; 2. |] in\n  let b = Nx.create Nx.float32 [| 2 |] [| 9.; 8. |] in\n  let x = Nx.solve a b in\n  let result = Nx.dot a x in\n  check_nx ~epsilon:1e-5 \"solve simple\" b result\n\nlet test_solve_batch () =\n  let a =\n    Nx.create Nx.float32 [| 2; 3; 3 |]\n      [|\n        1.; 0.; 0.; 0.; 1.; 0.; 0.; 0.; 1.; 2.; 0.; 0.; 0.; 2.; 0.; 0.; 0.; 2.;\n      |]\n  in\n  let b = Nx.create Nx.float32 [| 2; 3 |] [| 1.; 2.; 3.; 2.; 4.; 6. 
|] in\n  let x = Nx.solve a b in\n  check_shape \"solve batch shape\" [| 2; 3 |] x\n\nlet test_solve_singular () =\n  let a = Nx.create Nx.float32 [| 2; 2 |] [| 1.; 2.; 2.; 4. |] in\n  let b = Nx.create Nx.float32 [| 2 |] [| 1.; 2. |] in\n  check_invalid_arg \"solve singular\" \"solve: matrix is singular\" (fun () ->\n      ignore (Nx.solve a b))\n\nlet test_solve_non_square () =\n  let a = Nx.create Nx.float32 [| 2; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6. |] in\n  let b = Nx.create Nx.float32 [| 2 |] [| 1.; 2. |] in\n  check_invalid_arg \"solve non-square\"\n    \"solve: coefficient matrix must be square\" (fun () -> ignore (Nx.solve a b))\n\nlet test_inv_identity () =\n  let identity = Nx.eye Nx.float32 3 in\n  let inv = Nx.inv identity in\n  check_nx \"inv identity\" identity inv\n\nlet test_inv_inverse () =\n  let a = Nx.create Nx.float32 [| 2; 2 |] [| 2.; 1.; 1.; 2. |] in\n  let inv_a = Nx.inv a in\n  let inv_inv_a = Nx.inv inv_a in\n  check_nx \"inv inverse\" a inv_inv_a\n\nlet test_inv_singular () =\n  let a = Nx.create Nx.float32 [| 2; 2 |] [| 1.; 2.; 2.; 4. |] in\n  check_invalid_arg \"inv singular\" \"inv: matrix is singular\" (fun () ->\n      ignore (Nx.inv a))\n\n(* ───── Decomposition Tests ───── *)\n\nlet test_qr_shape () =\n  let a = Nx.create Nx.float32 [| 4; 3 |] (Array.init 12 float_of_int) in\n  let q, r = Nx.qr a in\n  check_shape \"qr q shape\" [| 4; 4 |] q;\n  check_shape \"qr r shape\" [| 4; 3 |] r\n\nlet test_qr_orthogonal () =\n  let a =\n    Nx.create Nx.float32 [| 3; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6.; 7.; 8.; 10. 
|]\n  in\n  let q, _ = Nx.qr a in\n  let qt_q = Nx.matmul (Nx.transpose q) q in\n  let identity = Nx.eye Nx.float32 3 in\n  check_nx \"qr orthogonal\" identity qt_q\n\nlet test_svd_shape () =\n  let a = Nx.create Nx.float32 [| 3; 4 |] (Array.init 12 float_of_int) in\n  let u, s, vt = Nx.svd a in\n  check_shape \"svd u shape\" [| 3; 3 |] u;\n  check_shape \"svd s shape\" [| 3 |] s;\n  check_shape \"svd vt shape (V^H)\" [| 3; 4 |] vt\n\nlet test_cholesky_posdef () =\n  let a =\n    Nx.create Nx.float32 [| 3; 3 |] [| 1.; 0.; 0.; 1.; 1.; 0.; 1.; 1.; 1. |]\n  in\n  let posdef = Nx.matmul (Nx.transpose a) a in\n  let l = Nx.cholesky posdef in\n  check_shape \"cholesky shape\" [| 3; 3 |] l\n\nlet test_eig_shape () =\n  let a =\n    Nx.create Nx.float32 [| 3; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6.; 7.; 8.; 10. |]\n  in\n  let eigenvalues, eigenvectors = Nx.eig a in\n  check_shape \"eig eigenvalues shape\" [| 3 |] eigenvalues;\n  check_shape \"eig eigenvectors shape\" [| 3; 3 |] eigenvectors\n\nlet test_eig_property () =\n  let a = Nx.create Nx.float32 [| 2; 2 |] [| 2.; 1.; 1.; 2. |] in\n  let eigenvalues, eigenvectors = Nx.eig a in\n  (* Cast to float32 to match a's type *)\n  let eigenvalues_f32 = Nx.cast Nx.float32 eigenvalues in\n  let eigenvectors_f32 = Nx.cast Nx.float32 eigenvectors in\n  let v1 = Nx.slice [ Nx.R (0, 2); Nx.I 0 ] eigenvectors_f32 in\n  let lambda1 = Nx.item [ 0 ] eigenvalues_f32 in\n  let av1 = Nx.dot a v1 in\n  let lambda1_scalar = Nx.scalar Nx.float32 lambda1 in\n  let lambda_v1 = Nx.mul lambda1_scalar v1 in\n  check_nx \"eig property\" av1 lambda_v1\n\n(* ───── Norm Tests ───── *)\n\nlet test_norm_vector_1 () =\n  let v = Nx.create Nx.float32 [| 4 |] [| -1.; 2.; -3.; 4. |] in\n  let result = Nx.norm ~ord:(`P 1.) v in\n  check_t \"norm L1\" [||] [| 10.0 |] result\n\nlet test_norm_vector_2 () =\n  let v = Nx.create Nx.float32 [| 3 |] [| 3.; 4.; 0. 
|] in\n  let result = Nx.norm v in\n  check_t \"norm L2\" [||] [| 5.0 |] result\n\nlet test_norm_vector_inf () =\n  let v = Nx.create Nx.float32 [| 4 |] [| -1.; 2.; -5.; 4. |] in\n  let result = Nx.norm ~ord:`Inf v in\n  check_t \"norm Linf\" [||] [| 5.0 |] result\n\nlet test_norm_matrix_fro () =\n  let m = Nx.create Nx.float32 [| 2; 2 |] [| 1.; 2.; 3.; 4. |] in\n  let result = Nx.norm ~ord:`Fro m in\n  check_t ~eps:1e-5 \"norm Frobenius\" [||] [| 5.477226 |] result\n\nlet test_norm_matrix_1 () =\n  let m = Nx.create Nx.float32 [| 2; 2 |] [| 1.; -2.; 3.; 4. |] in\n  let result = Nx.norm ~ord:(`P 1.) m in\n  check_t \"norm matrix L1\" [||] [| 6.0 |] result\n\nlet test_norm_axis () =\n  let m = Nx.create Nx.float32 [| 2; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6. |] in\n  let result = Nx.norm ~axes:[ 1 ] m in\n  check_t ~eps:1e-5 \"norm along axis\" [| 2 |] [| 3.741657; 8.774964 |] result\n\nlet test_norm_empty () =\n  let v = Nx.create Nx.float32 [| 0 |] [||] in\n  let result = Nx.norm v in\n  check_t \"norm empty\" [||] [| 0.0 |] result\n\n(* ───── Linear Algebra Utilities ───── *)\n\nlet test_det_2x2 () =\n  let a = Nx.create Nx.float32 [| 2; 2 |] [| 3.; 8.; 4.; 6. |] in\n  let det = Nx.det a in\n  check_t \"det 2x2\" [||] [| -14.0 |] det\n\nlet test_det_singular () =\n  let a = Nx.create Nx.float32 [| 2; 2 |] [| 1.; 2.; 2.; 4. |] in\n  let det = Nx.det a in\n  check_t ~eps:1e-6 \"det singular\" [||] [| 0.0 |] det\n\nlet test_diag_extract () =\n  let a =\n    Nx.create Nx.float32 [| 3; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6.; 7.; 8.; 9. |]\n  in\n  let diag = Nx.diagonal a in\n  check_t \"diag extract\" [| 3 |] [| 1.; 5.; 9. |] diag\n\n(* ───── Additional Utility Tests ───── *)\n\nlet test_diagonal () =\n  let a = Nx.create Nx.float32 [| 3; 3 |] (Array.init 9 float_of_int) in\n  let d = Nx.diagonal a in\n  check_t \"diagonal main\" [| 3 |] [| 0.; 4.; 8. |] d;\n  let d_offset = Nx.diagonal ~offset:1 a in\n  check_t \"diagonal offset 1\" [| 2 |] [| 1.; 5. 
|] d_offset;\n  let a_higher =\n    Nx.create Nx.float32 [| 2; 3; 3 |] (Array.init 18 float_of_int)\n  in\n  let d_higher = Nx.diagonal a_higher in\n  check_shape \"diagonal higher dim\" [| 2; 3 |] d_higher\n\nlet test_diagonal_edge () =\n  let a_empty = Nx.create Nx.float32 [| 0; 0 |] [||] in\n  let d_empty = Nx.diagonal a_empty in\n  check_shape \"diagonal empty\" [| 0 |] d_empty;\n  raises ~msg:\"diagonal invalid axes\"\n    (Invalid_argument \"diagonal: axes must be different\") (fun () ->\n      ignore (Nx.diagonal ~axis1:0 ~axis2:0 a_empty))\n\nlet test_matrix_transpose () =\n  let a = Nx.create Nx.float32 [| 2; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6. |] in\n  let t = Nx.matrix_transpose a in\n  check_shape \"matrix transpose shape\" [| 3; 2 |] t;\n  check_t \"matrix transpose values\" [| 3; 2 |] [| 1.; 4.; 2.; 5.; 3.; 6. |] t;\n  let a1d = Nx.create Nx.float32 [| 3 |] [| 1.; 2.; 3. |] in\n  let t1d = Nx.matrix_transpose a1d in\n  check_t \"matrix transpose 1d unchanged\" [| 3 |] [| 1.; 2.; 3. |] t1d\n\nlet test_trace_offset () =\n  let a =\n    Nx.create Nx.float32 [| 3; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6.; 7.; 8.; 9. |]\n  in\n  let tr_offset = Nx.trace ~offset:1 a in\n  check_t \"trace offset 1\" [||] [| 8. |] tr_offset\n\nlet test_det_batch () =\n  let a =\n    Nx.create Nx.float32 [| 2; 2; 2 |] [| 1.; 2.; 3.; 4.; 5.; 6.; 7.; 8. |]\n  in\n  let d = Nx.det a in\n  check_shape \"det batch\" [| 2 |] d;\n  check_t \"det batch values\" [| 2 |] [| -2.; -2. |] d\n\nlet test_slogdet () =\n  let a = Nx.create Nx.float32 [| 2; 2 |] [| 3.; 8.; 4.; 6. |] in\n  let sign, logdet = Nx.slogdet a in\n  check_t \"slogdet sign\" [||] [| -1. |] sign;\n  equal ~msg:\"slogdet logdet\" (float 1e-5) (log 14.) (Nx.item [] logdet)\n\nlet test_slogdet_singular () =\n  let a = Nx.create Nx.float32 [| 2; 2 |] [| 1.; 2.; 2.; 4. |] in\n  let sign, logdet = Nx.slogdet a in\n  check_t \"slogdet singular sign\" [||] [| 0. 
|] sign;\n  equal ~msg:\"slogdet singular logdet\" (float 1e-5) neg_infinity\n    (Nx.item [] logdet)\n\nlet test_matrix_rank () =\n  let a = Nx.create Nx.float32 [| 3; 3 |] (Array.init 9 float_of_int) in\n  let r = Nx.matrix_rank a in\n  equal ~msg:\"matrix rank full\" int 2 r;\n  let a_low =\n    Nx.create Nx.float32 [| 3; 3 |] [| 1.; 2.; 3.; 2.; 4.; 6.; 3.; 6.; 9. |]\n  in\n  let r_low = Nx.matrix_rank a_low in\n  equal ~msg:\"matrix_rank low\" int 1 r_low\n\nlet test_matrix_rank_tol () =\n  let a = Nx.create Nx.float32 [| 2; 2 |] [| 1.; 0.; 0.; 1e-10 |] in\n  let r = Nx.matrix_rank ~tol:1e-8 a in\n  equal ~msg:\"matrix_rank with tol\" int 1 r\n\nlet test_matrix_rank_hermitian () =\n  (* Create a symmetric matrix with known rank *)\n  let a =\n    Nx.create Nx.float32 [| 3; 3 |] [| 2.; 1.; 0.; 1.; 2.; 0.; 0.; 0.; 0. |]\n  in\n  let r = Nx.matrix_rank ~hermitian:true a in\n  equal ~msg:\"matrix_rank hermitian\" int 2 r;\n  (* Test that hermitian flag is actually used by checking it works on a non-square matrix *)\n  (* This will fail if hermitian flag is ignored because eigh requires square matrices *)\n  let non_square =\n    Nx.create Nx.float32 [| 2; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6. |]\n  in\n  raises ~msg:\"matrix_rank hermitian non-square\"\n    (Failure \"eig: input must be square matrix\") (fun () ->\n      ignore (Nx.matrix_rank ~hermitian:true non_square))\n\nlet test_matrix_rank_hermitian_negative () =\n  (* Test negative-definite matrix *)\n  let a = Nx.create Nx.float32 [| 2; 2 |] [| -2.; 0.; 0.; -1. |] in\n  let r = Nx.matrix_rank ~hermitian:true a in\n  equal ~msg:\"matrix_rank hermitian negative\" int 2 r;\n  (* Compare with non-hermitian version *)\n  let r_svd = Nx.matrix_rank a in\n  equal ~msg:\"matrix_rank hermitian negative vs svd\" int r_svd r\n\nlet test_matrix_rank_hermitian_complex () =\n  (* Complex Hermitian matrix with full rank *)\n  let a =\n    Nx.create Nx.complex128 [| 2; 2 |]\n      [|\n        Complex.{ re = 2.; im = 0. 
};\n        Complex.{ re = 0.; im = 1.5 };\n        Complex.{ re = 0.; im = -1.5 };\n        Complex.{ re = 3.; im = 0. };\n      |]\n  in\n  let r = Nx.matrix_rank ~hermitian:true a in\n  equal ~msg:\"matrix_rank hermitian complex\" int 2 r;\n  let r_svd = Nx.matrix_rank a in\n  equal ~msg:\"matrix_rank hermitian complex vs svd\" int r_svd r\n\nlet test_pinv_hermitian () =\n  (* Create a symmetric matrix *)\n  let a = Nx.create Nx.float32 [| 2; 2 |] [| 2.; 1.; 1.; 2. |] in\n  let pinv_a = Nx.pinv ~hermitian:true a in\n  (* Check that a @ pinv_a @ a ≈ a (pseudoinverse property) *)\n  let recon = Nx.matmul a (Nx.matmul pinv_a a) in\n  check_nx ~epsilon:1e-5 \"pinv hermitian recon\" a recon;\n  (* Verify the hermitian flag is actually used by checking that it rejects a non-square matrix *)\n  (* This will fail if hermitian flag is ignored because eigh requires square matrices *)\n  let non_square =\n    Nx.create Nx.float32 [| 2; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6. |]\n  in\n  raises ~msg:\"pinv hermitian non-square\"\n    (Failure \"eig: input must be square matrix\") (fun () ->\n      ignore (Nx.pinv ~hermitian:true non_square))\n\nlet test_pinv_hermitian_negative () =\n  (* Test negative-definite matrix *)\n  let a = Nx.create Nx.float32 [| 2; 2 |] [| -2.; 0.; 0.; -1. |] in\n  let pinv_a = Nx.pinv ~hermitian:true a in\n  (* Check that a @ pinv_a @ a ≈ a (pseudoinverse property) *)\n  let recon = Nx.matmul a (Nx.matmul pinv_a a) in\n  check_nx ~epsilon:1e-5 \"pinv hermitian negative recon\" a recon;\n  (* Compare with non-hermitian version *)\n  let pinv_svd = Nx.pinv a in\n  check_nx ~epsilon:1e-5 \"pinv hermitian negative vs svd\" pinv_svd pinv_a\n\nlet test_pinv_hermitian_complex () =\n  (* Complex Hermitian matrix *)\n  let a =\n    Nx.create Nx.complex128 [| 2; 2 |]\n      [|\n        Complex.{ re = 4.; im = 0. };\n        Complex.{ re = 1.; im = 2. };\n        Complex.{ re = 1.; im = -2. };\n        Complex.{ re = 5.; im = 0. 
};\n      |]\n  in\n  let pinv_a = Nx.pinv ~hermitian:true a in\n  let identity = Nx.identity Nx.complex128 2 in\n  let product = Nx.matmul a pinv_a in\n  check_nx ~epsilon:1e-5 \"pinv hermitian complex identity\" identity product;\n  let recon = Nx.matmul a (Nx.matmul pinv_a a) in\n  check_nx ~epsilon:1e-5 \"pinv hermitian complex recon\" a recon;\n  let pinv_svd = Nx.pinv a in\n  check_nx ~epsilon:1e-5 \"pinv hermitian complex vs svd\" pinv_svd pinv_a\n\n(* ───── Product Ops Tests ───── *)\n\nlet test_vdot () =\n  let a = Nx.create Nx.float32 [| 3 |] [| 1.; 2.; 3. |] in\n  let b = Nx.create Nx.float32 [| 3 |] [| 4.; 5.; 6. |] in\n  let res = Nx.vdot a b in\n  check_t \"vdot 1d\" [||] [| 32. |] res;\n  let a2 = Nx.create Nx.float32 [| 2; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6. |] in\n  let res2 = Nx.vdot a2 b in\n  check_t \"vdot flatten\" [||] [| 4. +. 10. +. 18. +. 16. +. 25. +. 36. |] res2\n\nlet test_vdot_complex () =\n  (* Test complex vdot with conjugation *)\n  let a =\n    Nx.create Nx.complex64 [| 2 |]\n      [| Complex.{ re = 1.; im = 2. }; Complex.{ re = 3.; im = 4. } |]\n  in\n  let b =\n    Nx.create Nx.complex64 [| 2 |]\n      [| Complex.{ re = 5.; im = 6. }; Complex.{ re = 7.; im = 8. } |]\n  in\n  let result = Nx.vdot a b in\n  (* Expected: sum of conj(a) * b = (1-2i)(5+6i) + (3-4i)(7+8i) = (17-4i) +\n     (53-4i) = 70-8i *)\n  let expected = Complex.{ re = 70.; im = -8. } in\n  let actual = Nx.item [] result in\n  equal ~msg:\"vdot complex real part\" (float 1e-6) expected.re actual.re;\n  equal ~msg:\"vdot complex imag part\" (float 1e-6) expected.im actual.im\n\nlet test_conjugate () =\n  (* Test complex conjugate *)\n  let x =\n    Nx.create Nx.complex64 [| 2 |]\n      [| Complex.{ re = 1.; im = 2. }; Complex.{ re = 3.; im = 4. } |]\n  in\n  let conj_x = Nx.conjugate x in\n  let expected =\n    [| Complex.{ re = 1.; im = -2. }; Complex.{ re = 3.; im = -4. 
} |]\n  in\n  let actual = Nx.to_array conj_x in\n  equal ~msg:\"conjugate[0] real\" (float 1e-6) expected.(0).re actual.(0).re;\n  equal ~msg:\"conjugate[0] imag\" (float 1e-6) expected.(0).im actual.(0).im;\n  equal ~msg:\"conjugate[1] real\" (float 1e-6) expected.(1).re actual.(1).re;\n  equal ~msg:\"conjugate[1] imag\" (float 1e-6) expected.(1).im actual.(1).im;\n  (* Test that real tensors are unchanged *)\n  let real_x = Nx.create Nx.float32 [| 2 |] [| 1.; 2. |] in\n  let conj_real = Nx.conjugate real_x in\n  check_nx \"conjugate real unchanged\" real_x conj_real\n\nlet test_vdot_mismatch () =\n  let a = Nx.create Nx.float32 [| 3 |] [| 1.; 2.; 3. |] in\n  let b = Nx.create Nx.float32 [| 4 |] [| 4.; 5.; 6.; 7. |] in\n  raises ~msg:\"vdot mismatch\"\n    (Invalid_argument \"vdot: different number of elements\") (fun () ->\n      ignore (Nx.vdot a b))\n\nlet test_vecdot () =\n  let a = Nx.create Nx.float32 [| 2; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6. |] in\n  let b = Nx.create Nx.float32 [| 2; 3 |] [| 7.; 8.; 9.; 10.; 11.; 12. |] in\n  let res = Nx.vecdot a b in\n  check_t \"vecdot default axis\" [| 2 |] [| 50.; 167. |] res;\n  let res_axis0 = Nx.vecdot ~axis:0 a b in\n  check_t \"vecdot axis 0\" [| 3 |] [| 47.; 71.; 99. |] res_axis0\n\nlet test_inner () =\n  let a = Nx.create Nx.float32 [| 3 |] [| 1.; 2.; 3. |] in\n  let b = Nx.create Nx.float32 [| 3 |] [| 4.; 5.; 6. |] in\n  let res = Nx.inner a b in\n  check_t \"inner 1d\" [||] [| 32. |] res;\n  let a2 = Nx.create Nx.float32 [| 2; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6. |] in\n  let res2 = Nx.inner a2 a in\n  check_t \"inner higher\" [| 2 |] [| 14.; 32. |] res2\n\nlet test_inner_mismatch () =\n  let a = Nx.create Nx.float32 [| 3 |] [| 1.; 2.; 3. |] in\n  let b = Nx.create Nx.float32 [| 4 |] [| 4.; 5.; 6.; 7. |] in\n  raises ~msg:\"inner mismatch\"\n    (Invalid_argument \"inner: last dimensions differ\") (fun () ->\n      ignore (Nx.inner a b))\n\nlet test_outer () =\n  let a = Nx.create Nx.float32 [| 2 |] [| 1.; 2. 
|] in\n  let b = Nx.create Nx.float32 [| 3 |] [| 3.; 4.; 5. |] in\n  let res = Nx.outer a b in\n  check_t \"outer\" [| 2; 3 |] [| 3.; 4.; 5.; 6.; 8.; 10. |] res;\n  let a_scalar = Nx.create Nx.float32 [||] [| 2. |] in\n  let res_scalar = Nx.outer a_scalar b in\n  check_t \"outer scalar\" [| 3 |] [| 6.; 8.; 10. |] res_scalar\n\nlet test_tensordot () =\n  let a = Nx.create Nx.float32 [| 2; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6. |] in\n  let b = Nx.create Nx.float32 [| 3; 2 |] [| 7.; 8.; 9.; 10.; 11.; 12. |] in\n  let res = Nx.tensordot a b in\n  check_t \"tensordot default\" [| 2; 2 |] [| 58.; 64.; 139.; 154. |] res;\n  let res_axes = Nx.tensordot ~axes:([ 0 ], [ 1 ]) a b in\n  check_shape \"tensordot custom axes\" [| 3; 3 |] res_axes\n\nlet test_tensordot_mismatch () =\n  let a = Nx.create Nx.float32 [| 2; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6. |] in\n  let b = Nx.create Nx.float32 [| 4; 2 |] (Array.init 8 float_of_int) in\n  raises ~msg:\"tensordot mismatch\"\n    (Invalid_argument \"tensordot: axes have different sizes\") (fun () ->\n      ignore (Nx.tensordot ~axes:([ 1 ], [ 0 ]) a b))\n\nlet test_einsum_error () =\n  let a = Nx.create Nx.float32 [| 2; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6. |] in\n  let b = Nx.create Nx.float32 [| 3; 2 |] [| 7.; 8.; 9.; 10.; 11.; 12. 
|] in\n  raises ~msg:\"einsum no input operands\"\n    (Invalid_argument \"einsum: no input operands\") (fun () ->\n      ignore (Nx.einsum \"\" [||]));\n  raises ~msg:\"einsum bad format\"\n    (Invalid_argument \"einsum: invalid format, expected inputs->output\")\n    (fun () -> ignore (Nx.einsum \"IJ,JK-IK\" [| a; b |]));\n  raises ~msg:\"einsum wrong inputs\"\n    (Invalid_argument \"einsum: number of inputs must equal number of operands\")\n    (fun () -> ignore (Nx.einsum \"ij->ij\" [| a; b |]));\n  raises ~msg:\"einsum rectangular diagonal\"\n    (Invalid_argument\n       \"einsum: index var 'i' must have consistent dimensions (2 vs 3)\")\n    (fun () -> ignore (Nx.einsum \"ii->i\" [| a |]));\n  raises ~msg:\"einsum mismatched rank\"\n    (Invalid_argument \"einsum: operand rank too small for subscripts\")\n    (fun () -> ignore (Nx.einsum \"ijl,jk->ik\" [| a; b |]));\n  raises ~msg:\"einsum contracted vars mismatch\"\n    (Invalid_argument \"einsum: output index 'k' not found in inputs\") (fun () ->\n      ignore (Nx.einsum \"ij,jl->ki\" [| a; b |]));\n  raises ~msg:\"einsum dimension mismatch\"\n    (Invalid_argument\n       \"einsum: index var 'j' must have consistent dimensions (3 vs 2)\")\n    (fun () -> ignore (Nx.einsum \"ij,kj->ik\" [| a; b |]));\n\n  raises ~msg:\"einsum output ell without input\"\n    (Invalid_argument \"einsum: output ellipsis requires ellipsis in inputs\")\n    (fun () -> ignore (Nx.einsum \"ij->...\" [| a |]));\n  raises ~msg:\"einsum multi ellipsis\"\n    (Invalid_argument \"einsum: multiple ellipsis in operand\") (fun () ->\n      ignore (Nx.einsum \"i...j...->ij\" [| a |]))\n\n(* Weighted broadcast dot retained from legacy spec *)\nlet einsum_weighted_broadcast () =\n  let a = Nx.create Nx.float32 [| 2; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6. |] in\n  let vec = Nx.create Nx.float32 [| 3 |] [| 1.; 2.; 3. 
|] in\n  let got = Nx.einsum \"...i,i->...\" [| a; vec |] in\n  let expected =\n    let mul = Nx.mul a (Nx.reshape [| 1; 3 |] vec) in\n    Nx.sum ~axes:[ 1 ] mul\n  in\n  check_nx \"einsum weighted broadcast ...i,i->...\" expected got\n\nlet einsum_complex_fro_inner () =\n  let open Complex in\n  let a =\n    Nx.create Nx.complex128 [| 2; 2 |]\n      [|\n        { re = 1.; im = 2. };\n        { re = 3.; im = 4. };\n        { re = -1.; im = 0. };\n        { re = 0.5; im = -1.5 };\n      |]\n  in\n  let b =\n    Nx.create Nx.complex128 [| 2; 2 |]\n      [|\n        { re = -2.; im = 1. };\n        { re = 0.; im = 1. };\n        { re = 2.; im = -1. };\n        { re = -0.5; im = 2. };\n      |]\n  in\n  let got = Nx.einsum \"ij,ij->\" [| a; b |] in\n  let expected = Nx.sum (Nx.mul a b) in\n  check_nx \"einsum complex fro inner ij,ij->\" expected got\n\nlet einsum_int_dot_scalar () =\n  let a = Nx.create Nx.int32 [| 4 |] [| 1l; 2l; 3l; 4l |] in\n  let b = Nx.create Nx.int32 [| 4 |] [| 5l; 6l; 7l; 8l |] in\n  let got = Nx.einsum \"i,i->\" [| a; b |] in\n  let expected = Nx.sum (Nx.mul a b) in\n  check_nx \"einsum int dot scalar i,i->\" expected got\n\nlet test_einsum_regression_axis_order () =\n  (* Case 1: i,jk->jki should order as j, k, i *)\n  let a1, b1, a2, b2 =\n    Nx.Rng.run ~seed:0 (fun () ->\n        let a1 = Nx.randn Nx.float32 [| 5 |] in\n        let b1 = Nx.randn Nx.float32 [| 7; 7 |] in\n        let a2 = Nx.randn Nx.float32 [| 5; 5 |] in\n        let b2 = Nx.randn Nx.float32 [| 3; 7; 5 |] in\n        (a1, b1, a2, b2))\n  in\n  let r1 = Nx.einsum \"i,jk->jki\" [| a1; b1 |] in\n  check_shape \"einsum axis order i,jk->jki\" [| 7; 7; 5 |] r1;\n  let r2 = Nx.einsum \"ij,klj->kli\" [| a2; b2 |] in\n  check_shape \"einsum axis order ij,klj->kli\" [| 3; 7; 5 |] r2\n\nlet einsum_dot_scalar () =\n  let a0 = Nx.create Nx.float32 [| 5 |] [| 1.; 2.; 3.; 4.; 5. |] in\n  let a1 = Nx.create Nx.float32 [| 5 |] [| 1.; 2.; 3.; 4.; 5. 
|] in\n  let got = Nx.einsum \"i,i->\" [| a0; a1 |] in\n  let expected = Nx.create Nx.float32 [||] [| 55. |] in\n  check_nx \"einsum_dot_scalar i,i->\" expected got\n\nlet einsum_matmul () =\n  let a0 = Nx.create Nx.float32 [| 2; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6. |] in\n  let a1 = Nx.create Nx.float32 [| 3; 2 |] [| 1.; 2.; 3.; 4.; 5.; 6. |] in\n  let got = Nx.einsum \"ij,jk->ik\" [| a0; a1 |] in\n  let expected = Nx.create Nx.float32 [| 2; 2 |] [| 22.; 28.; 49.; 64. |] in\n  check_nx \"einsum_matmul ij,jk->ik\" expected got\n\nlet einsum_transpose () =\n  let a0 = Nx.create Nx.float32 [| 2; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6. |] in\n  let got = Nx.einsum \"ij->ji\" [| a0 |] in\n  let expected = Nx.create Nx.float32 [| 3; 2 |] [| 1.; 4.; 2.; 5.; 3.; 6. |] in\n  check_nx \"einsum_transpose ij->ji\" expected got\n\nlet einsum_outer () =\n  let a0 = Nx.create Nx.float32 [| 2 |] [| 1.; 2. |] in\n  let a1 = Nx.create Nx.float32 [| 3 |] [| 1.; 2.; 3. |] in\n  let got = Nx.einsum \"i,j->ij\" [| a0; a1 |] in\n  let expected = Nx.create Nx.float32 [| 2; 3 |] [| 1.; 2.; 3.; 2.; 4.; 6. |] in\n  check_nx \"einsum_outer i,j->ij\" expected got\n\nlet einsum_total_sum () =\n  let a0 = Nx.create Nx.float32 [| 2; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6. |] in\n  let got = Nx.einsum \"ij->\" [| a0 |] in\n  let expected = Nx.create Nx.float32 [||] [| 21. |] in\n  check_nx \"einsum_total_sum ij->\" expected got\n\nlet einsum_diag_extract () =\n  let a0 =\n    Nx.create Nx.float32 [| 3; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6.; 7.; 8.; 9. |]\n  in\n  let got = Nx.einsum \"ii->i\" [| a0 |] in\n  let expected = Nx.create Nx.float32 [| 3 |] [| 1.; 5.; 9. 
|] in\n  check_nx \"einsum_diag_extract ii->i\" expected got\n\nlet einsum_batched_diag () =\n  let a0 =\n    Nx.create Nx.float32 [| 2; 3; 3 |]\n      [|\n        1.;\n        2.;\n        3.;\n        4.;\n        5.;\n        6.;\n        7.;\n        8.;\n        9.;\n        10.;\n        11.;\n        12.;\n        13.;\n        14.;\n        15.;\n        16.;\n        17.;\n        18.;\n      |]\n  in\n  let got = Nx.einsum \"...ii->...i\" [| a0 |] in\n  let expected =\n    Nx.create Nx.float32 [| 2; 3 |] [| 1.; 5.; 9.; 10.; 14.; 18. |]\n  in\n  check_nx \"einsum_batched_diag ...ii->...i\" expected got\n\nlet einsum_batched_matmul () =\n  let a0 =\n    Nx.create Nx.float32 [| 2; 3; 4 |]\n      [|\n        1.;\n        2.;\n        3.;\n        4.;\n        5.;\n        6.;\n        7.;\n        8.;\n        9.;\n        10.;\n        11.;\n        12.;\n        13.;\n        14.;\n        15.;\n        16.;\n        17.;\n        18.;\n        19.;\n        20.;\n        21.;\n        22.;\n        23.;\n        24.;\n      |]\n  in\n  let a1 =\n    Nx.create Nx.float32 [| 1; 4; 2 |] [| 1.; 2.; 3.; 4.; 5.; 6.; 7.; 8. |]\n  in\n  let got = Nx.einsum \"...ij,...jk->...ik\" [| a0; a1 |] in\n  let expected =\n    Nx.create Nx.float32 [| 2; 3; 2 |]\n      [| 50.; 60.; 114.; 140.; 178.; 220.; 242.; 300.; 306.; 380.; 370.; 460. |]\n  in\n  check_nx \"einsum_batched_matmul ...ij,...jk->...ik\" expected got\n\nlet einsum_free_order1 () =\n  let a0 = Nx.create Nx.float32 [| 3 |] [| 1.; 2.; 3. |] in\n  let a1 = Nx.create Nx.float32 [| 2; 2 |] [| 1.; 2.; 3.; 4. |] in\n  let got = Nx.einsum \"i,jk->jki\" [| a0; a1 |] in\n  let expected =\n    Nx.create Nx.float32 [| 2; 2; 3 |]\n      [| 1.; 2.; 3.; 2.; 4.; 6.; 3.; 6.; 9.; 4.; 8.; 12. |]\n  in\n  check_nx \"einsum_free_order1 i,jk->jki\" expected got\n\nlet einsum_free_order2 () =\n  let a0 = Nx.create Nx.float32 [| 2; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6. 
|] in\n  let a1 =\n    Nx.create Nx.float32 [| 4; 5; 3 |]\n      [|\n        1.;\n        2.;\n        3.;\n        4.;\n        5.;\n        6.;\n        7.;\n        8.;\n        9.;\n        10.;\n        11.;\n        12.;\n        13.;\n        14.;\n        15.;\n        16.;\n        17.;\n        18.;\n        19.;\n        20.;\n        21.;\n        22.;\n        23.;\n        24.;\n        25.;\n        26.;\n        27.;\n        28.;\n        29.;\n        30.;\n        31.;\n        32.;\n        33.;\n        34.;\n        35.;\n        36.;\n        37.;\n        38.;\n        39.;\n        40.;\n        41.;\n        42.;\n        43.;\n        44.;\n        45.;\n        46.;\n        47.;\n        48.;\n        49.;\n        50.;\n        51.;\n        52.;\n        53.;\n        54.;\n        55.;\n        56.;\n        57.;\n        58.;\n        59.;\n        60.;\n      |]\n  in\n  let got = Nx.einsum \"ij,klj->kli\" [| a0; a1 |] in\n  let expected =\n    Nx.create Nx.float32 [| 4; 5; 2 |]\n      [|\n        14.;\n        32.;\n        32.;\n        77.;\n        50.;\n        122.;\n        68.;\n        167.;\n        86.;\n        212.;\n        104.;\n        257.;\n        122.;\n        302.;\n        140.;\n        347.;\n        158.;\n        392.;\n        176.;\n        437.;\n        194.;\n        482.;\n        212.;\n        527.;\n        230.;\n        572.;\n        248.;\n        617.;\n        266.;\n        662.;\n        284.;\n        707.;\n        302.;\n        752.;\n        320.;\n        797.;\n        338.;\n        842.;\n        356.;\n        887.;\n      |]\n  in\n  check_nx \"einsum_free_order2 ij,klj->kli\" expected got\n\nlet einsum_mix_reorder () =\n  let a0 =\n    Nx.create Nx.float32 [| 2; 3; 2 |]\n      [| 1.; 2.; 3.; 4.; 5.; 6.; 7.; 8.; 9.; 10.; 11.; 12. |]\n  in\n  let a1 = Nx.create Nx.float32 [| 3; 2 |] [| 1.; 2.; 3.; 4.; 5.; 6. 
|] in\n  let got = Nx.einsum \"abc,bd->dac\" [| a0; a1 |] in\n  let expected =\n    Nx.create Nx.float32 [| 2; 2; 2 |]\n      [| 35.; 44.; 89.; 98.; 44.; 56.; 116.; 128. |]\n  in\n  check_nx \"einsum_mix_reorder abc,bd->dac\" expected got\n\nlet einsum_chain () =\n  let a0 = Nx.create Nx.float32 [| 2; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6. |] in\n  let a1 = Nx.create Nx.float32 [| 3; 2 |] [| 1.; 2.; 3.; 4.; 5.; 6. |] in\n  let a2 = Nx.create Nx.float32 [| 2; 2 |] [| 1.; 2.; 3.; 4. |] in\n  let got = Nx.einsum \"ab,bc,cd->ad\" [| a0; a1; a2 |] in\n  let expected = Nx.create Nx.float32 [| 2; 2 |] [| 106.; 156.; 241.; 354. |] in\n  check_nx \"einsum_chain ab,bc,cd->ad\" expected got\n\nlet einsum_diag_sum () =\n  let a0 =\n    Nx.create Nx.float32 [| 3; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6.; 7.; 8.; 9. |]\n  in\n  let got = Nx.einsum \"ii\" [| a0 |] in\n  let expected = Nx.create Nx.float32 [||] [| 15. |] in\n  check_nx \"einsum_diag_sum ii\" expected got\n\nlet einsum_hadamard_vec () =\n  let a0 = Nx.create Nx.float32 [| 4 |] [| 1.; 2.; 3.; 4. |] in\n  let a1 = Nx.create Nx.float32 [| 4 |] [| 1.; 2.; 3.; 4. |] in\n  let got = Nx.einsum \"i,i->i\" [| a0; a1 |] in\n  let expected = Nx.create Nx.float32 [| 4 |] [| 1.; 4.; 9.; 16. |] in\n  check_nx \"einsum_hadamard_vec i,i->i\" expected got\n\nlet einsum_fro_inner () =\n  let a0 = Nx.create Nx.float32 [| 2; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6. |] in\n  let a1 = Nx.create Nx.float32 [| 2; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6. |] in\n  let got = Nx.einsum \"ij,ij->\" [| a0; a1 |] in\n  let expected = Nx.create Nx.float32 [||] [| 91. 
|] in\n  check_nx \"einsum_fro_inner ij,ij->\" expected got\n\nlet einsum_contract_last () =\n  let a0 =\n    Nx.create Nx.float32 [| 2; 3; 4 |]\n      [|\n        1.;\n        2.;\n        3.;\n        4.;\n        5.;\n        6.;\n        7.;\n        8.;\n        9.;\n        10.;\n        11.;\n        12.;\n        13.;\n        14.;\n        15.;\n        16.;\n        17.;\n        18.;\n        19.;\n        20.;\n        21.;\n        22.;\n        23.;\n        24.;\n      |]\n  in\n  let a1 = Nx.create Nx.float32 [| 4 |] [| 1.; 2.; 3.; 4. |] in\n  let got = Nx.einsum \"ijk,k->ij\" [| a0; a1 |] in\n  let expected =\n    Nx.create Nx.float32 [| 2; 3 |] [| 30.; 70.; 110.; 150.; 190.; 230. |]\n  in\n  check_nx \"einsum_contract_last ijk,k->ij\" expected got\n\nlet einsum_matvec () =\n  let a0 =\n    Nx.create Nx.float32 [| 3; 4 |]\n      [| 1.; 2.; 3.; 4.; 5.; 6.; 7.; 8.; 9.; 10.; 11.; 12. |]\n  in\n  let a1 = Nx.create Nx.float32 [| 4 |] [| 1.; 2.; 3.; 4. |] in\n  let got = Nx.einsum \"ab,b->a\" [| a0; a1 |] in\n  let expected = Nx.create Nx.float32 [| 3 |] [| 30.; 70.; 110. |] in\n  check_nx \"einsum_matvec ab,b->a\" expected got\n\nlet einsum_contract_3d_vec () =\n  let a0 =\n    Nx.create Nx.float32 [| 2; 3; 4 |]\n      [|\n        1.;\n        2.;\n        3.;\n        4.;\n        5.;\n        6.;\n        7.;\n        8.;\n        9.;\n        10.;\n        11.;\n        12.;\n        13.;\n        14.;\n        15.;\n        16.;\n        17.;\n        18.;\n        19.;\n        20.;\n        21.;\n        22.;\n        23.;\n        24.;\n      |]\n  in\n  let a1 = Nx.create Nx.float32 [| 4 |] [| 1.; 2.; 3.; 4. |] in\n  let got = Nx.einsum \"abc,c->ab\" [| a0; a1 |] in\n  let expected =\n    Nx.create Nx.float32 [| 2; 3 |] [| 30.; 70.; 110.; 150.; 190.; 230. 
|]\n  in\n  check_nx \"einsum_contract_3d_vec abc,c->ab\" expected got\n\nlet einsum_broadcast_last_dot () =\n  let a0 =\n    Nx.create Nx.float32 [| 2; 3; 4 |]\n      [|\n        1.;\n        2.;\n        3.;\n        4.;\n        5.;\n        6.;\n        7.;\n        8.;\n        9.;\n        10.;\n        11.;\n        12.;\n        13.;\n        14.;\n        15.;\n        16.;\n        17.;\n        18.;\n        19.;\n        20.;\n        21.;\n        22.;\n        23.;\n        24.;\n      |]\n  in\n  let a1 = Nx.create Nx.float32 [| 1; 1; 4 |] [| 1.; 2.; 3.; 4. |] in\n  let got = Nx.einsum \"...i,...i->...\" [| a0; a1 |] in\n  let expected =\n    Nx.create Nx.float32 [| 2; 3 |] [| 30.; 70.; 110.; 150.; 190.; 230. |]\n  in\n  check_nx \"einsum_broadcast_last_dot ...i,...i->...\" expected got\n\nlet einsum_move_first_axis_to_last () =\n  let a0 =\n    Nx.create Nx.float32 [| 2; 3; 4 |]\n      [|\n        1.;\n        2.;\n        3.;\n        4.;\n        5.;\n        6.;\n        7.;\n        8.;\n        9.;\n        10.;\n        11.;\n        12.;\n        13.;\n        14.;\n        15.;\n        16.;\n        17.;\n        18.;\n        19.;\n        20.;\n        21.;\n        22.;\n        23.;\n        24.;\n      |]\n  in\n  let got = Nx.einsum \"i...->...i\" [| a0 |] in\n  let expected =\n    Nx.create Nx.float32 [| 3; 4; 2 |]\n      [|\n        1.;\n        13.;\n        2.;\n        14.;\n        3.;\n        15.;\n        4.;\n        16.;\n        5.;\n        17.;\n        6.;\n        18.;\n        7.;\n        19.;\n        8.;\n        20.;\n        9.;\n        21.;\n        10.;\n        22.;\n        11.;\n        23.;\n        12.;\n        24.;\n      |]\n  in\n  check_nx \"einsum_move_first_axis_to_last i...->...i\" expected got\n\nlet einsum_rowwise_dot () =\n  let a0 =\n    Nx.create Nx.float32 [| 3; 4 |]\n      [| 1.; 2.; 3.; 4.; 5.; 6.; 7.; 8.; 9.; 10.; 11.; 12. 
|]\n  in\n  let a1 = Nx.create Nx.float32 [| 4 |] [| 1.; 2.; 3.; 4. |] in\n  let got = Nx.einsum \"ij,j->i\" [| a0; a1 |] in\n  let expected = Nx.create Nx.float32 [| 3 |] [| 30.; 70.; 110. |] in\n  check_nx \"einsum_rowwise_dot ij,j->i\" expected got\n\nlet einsum_independent_sum () =\n  (* \"ab,cd->\" with no shared axes: should pre-reduce to scalar * scalar *)\n  let a0 =\n    Nx.create Nx.float32 [| 3; 4 |]\n      [| 1.; 2.; 3.; 4.; 5.; 6.; 7.; 8.; 9.; 10.; 11.; 12. |]\n  in\n  let a1 =\n    Nx.create Nx.float32 [| 2; 5 |]\n      [| 1.; 2.; 3.; 4.; 5.; 6.; 7.; 8.; 9.; 10. |]\n  in\n  let got = Nx.einsum \"ab,cd->\" [| a0; a1 |] in\n  (* sum(A) = 78, sum(B) = 55, result = 78 * 55 = 4290 *)\n  let expected = Nx.create Nx.float32 [||] [| 4290. |] in\n  check_nx \"einsum_independent_sum ab,cd->\" expected got\n\nlet einsum_partial_prereduction () =\n  (* \"ij,kj->\": pre-reduce i from op0, k from op1, then dot over j *)\n  let a0 = Nx.create Nx.float32 [| 2; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6. |] in\n  let a1 = Nx.create Nx.float32 [| 2; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6. |] in\n  let got = Nx.einsum \"ij,kj->\" [| a0; a1 |] in\n  (* sum_i(A) = [5,7,9], sum_k(B) = [5,7,9], dot = 25+49+81 = 155 *)\n  let expected = Nx.create Nx.float32 [||] [| 155. |] in\n  check_nx \"einsum_partial_prereduction ij,kj->\" expected got\n\nlet einsum_no_shared_with_output () =\n  (* \"ab,cd->ac\": pre-reduce b,d but keep a,c *)\n  let a0 =\n    Nx.create Nx.float32 [| 3; 4 |]\n      [| 1.; 2.; 3.; 4.; 5.; 6.; 7.; 8.; 9.; 10.; 11.; 12. |]\n  in\n  let a1 =\n    Nx.create Nx.float32 [| 2; 5 |]\n      [| 1.; 2.; 3.; 4.; 5.; 6.; 7.; 8.; 9.; 10. |]\n  in\n  let got = Nx.einsum \"ab,cd->ac\" [| a0; a1 |] in\n  (* sum_b(A) = [10, 26, 42], sum_d(B) = [15, 40] *)\n  (* outer = [[150, 400], [390, 1040], [630, 1680]] *)\n  let expected =\n    Nx.create Nx.float32 [| 3; 2 |] [| 150.; 400.; 390.; 1040.; 630.; 1680. 
|]\n  in\n  check_nx \"einsum_no_shared_with_output ab,cd->ac\" expected got\n\nlet test_kron () =\n  let a = Nx.create Nx.float32 [| 2; 2 |] [| 1.; 2.; 3.; 4. |] in\n  let b = Nx.create Nx.float32 [| 2; 2 |] [| 5.; 6.; 7.; 8. |] in\n  let res = Nx.kron a b in\n  check_t \"kron\" [| 4; 4 |]\n    [|\n      5.; 6.; 10.; 12.; 7.; 8.; 14.; 16.; 15.; 18.; 20.; 24.; 21.; 24.; 28.; 32.;\n    |]\n    res\n\nlet test_multi_dot () =\n  let a = Nx.create Nx.float32 [| 2; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6. |] in\n  let b = Nx.create Nx.float32 [| 3; 4 |] (Array.init 12 float_of_int) in\n  let c =\n    Nx.create Nx.float32 [| 4; 2 |] [| 1.; 2.; 3.; 4.; 5.; 6.; 7.; 8. |]\n  in\n  let res = Nx.multi_dot [| a; b; c |] in\n  let manual = Nx.matmul a (Nx.matmul b c) in\n  check_nx \"multi_dot\" manual res\n\nlet test_multi_dot_empty () =\n  raises ~msg:\"multi_dot empty\" (Invalid_argument \"multi_dot: empty array\")\n    (fun () -> ignore (Nx.multi_dot [||]))\n\nlet test_matrix_power () =\n  let a = Nx.create Nx.float32 [| 2; 2 |] [| 1.; 1.; 1.; 0. |] in\n  let pow3 = Nx.matrix_power a 3 in\n  check_t \"matrix_power positive\" [| 2; 2 |] [| 3.; 2.; 2.; 1. |] pow3;\n  let pow0 = Nx.matrix_power a 0 in\n  let id = Nx.eye Nx.float32 2 in\n  check_nx \"matrix_power zero\" id pow0;\n  let pow_neg2 = Nx.matrix_power a (-2) in\n  let inv = Nx.inv a in\n  let inv2 = Nx.matmul inv inv in\n  check_nx \"matrix_power negative\" inv2 pow_neg2\n\nlet test_matrix_power_singular () =\n  let a = Nx.create Nx.float32 [| 2; 2 |] [| 1.; 2.; 2.; 4. |] in\n  raises ~msg:\"matrix_power singular negative\"\n    (Invalid_argument \"matrix_power: singular for negative exponent\") (fun () ->\n      ignore (Nx.matrix_power a (-1)))\n\nlet test_cross () =\n  let a = Nx.create Nx.float32 [| 3 |] [| 1.; 2.; 3. |] in\n  let b = Nx.create Nx.float32 [| 3 |] [| 4.; 5.; 6. |] in\n  let res = Nx.cross a b in\n  check_t \"cross 3d\" [| 3 |] [| -3.; 6.; -3. 
|] res;\n  let a_batch = Nx.create Nx.float32 [| 2; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6. |] in\n  let b_batch =\n    Nx.create Nx.float32 [| 2; 3 |] [| 7.; 8.; 9.; 10.; 11.; 12. |]\n  in\n  let res_batch = Nx.cross ~axis:1 a_batch b_batch in\n  check_shape \"cross batch\" [| 2; 3 |] res_batch\n\nlet test_cross_invalid () =\n  let a = Nx.create Nx.float32 [| 4 |] [| 1.; 2.; 3.; 4. |] in\n  let b = Nx.create Nx.float32 [| 4 |] [| 5.; 6.; 7.; 8. |] in\n  raises ~msg:\"cross invalid dim\" (Invalid_argument \"cross: axis dim not 3\")\n    (fun () -> ignore (Nx.cross a b))\n\n(* ───── Advanced Decomposition Tests ───── *)\n\nlet test_cholesky_upper () =\n  let a = Nx.create Nx.float32 [| 2; 2 |] [| 2.; 1.; 1.; 2. |] in\n  let u = Nx.cholesky ~upper:true a in\n  let recon = Nx.matmul (Nx.transpose u) u in\n  check_nx \"cholesky upper\" a recon\n\nlet test_cholesky_non_posdef () =\n  let a = Nx.create Nx.float32 [| 2; 2 |] [| 1.; 2.; 3.; 4. |] in\n  raises ~msg:\"cholesky non posdef\"\n    (Invalid_argument \"cholesky: not positive-definite\") (fun () ->\n      ignore (Nx.cholesky a))\n\nlet test_qr_mode () =\n  let a = Nx.create Nx.float32 [| 4; 3 |] (Array.init 12 float_of_int) in\n  let q_red, r_red = Nx.qr ~mode:`Reduced a in\n  check_shape \"qr reduced q\" [| 4; 3 |] q_red;\n  check_shape \"qr reduced r\" [| 3; 3 |] r_red;\n  let q_comp, r_comp = Nx.qr ~mode:`Complete a in\n  check_shape \"qr complete q\" [| 4; 4 |] q_comp;\n  check_shape \"qr complete r\" [| 4; 3 |] r_comp\n\nlet test_svd_full_matrices () =\n  let a = Nx.create Nx.float32 [| 3; 4 |] (Array.init 12 float_of_int) in\n  let u, s, vh = Nx.svd ~full_matrices:true a in\n  check_shape \"svd full u\" [| 3; 3 |] u;\n  check_shape \"svd full vh\" [| 4; 4 |] vh;\n  let u_econ, s_econ, vh_econ = Nx.svd ~full_matrices:false a in\n  check_shape \"svd econ u\" [| 3; 3 |] u_econ;\n  check_shape \"svd econ vh\" [| 3; 4 |] vh_econ;\n  check_nx \"svd s equal\" s s_econ\n\nlet test_svdvals () =\n  let a =\n    Nx.create 
Nx.float32 [| 3; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6.; 7.; 8.; 10. |]\n  in\n  let s = Nx.svdvals a in\n  check_shape \"svdvals shape\" [| 3 |] s;\n  let _, s_full, _ = Nx.svd a in\n  check_nx \"svdvals match svd\" s s_full\n\n(* ───── Eigen Tests ───── *)\n\nlet test_eigh () =\n  let a = Nx.create Nx.float32 [| 2; 2 |] [| 2.; 1.; 1.; 2. |] in\n  let vals, vecs = Nx.eigh a in\n  check_t ~eps:1e-5 \"eigh vals\" [| 2 |] [| 1.; 3. |] vals;\n  let diag_vals =\n    let zeros = Nx.zeros Nx.float32 [| 2; 2 |] in\n    let z_with_diag = Nx.copy zeros in\n    Nx.set_item [ 0; 0 ] (Nx.item [ 0 ] vals) z_with_diag;\n    Nx.set_item [ 1; 1 ] (Nx.item [ 1 ] vals) z_with_diag;\n    z_with_diag\n  in\n  let recon = Nx.matmul vecs (Nx.matmul diag_vals (Nx.transpose vecs)) in\n  check_nx \"eigh recon\" a recon\n\nlet test_eigh_uplo () =\n  let a =\n    Nx.create Nx.float32 [| 3; 3 |] [| 1.; 2.; 3.; 2.; 4.; 5.; 3.; 5.; 6. |]\n  in\n  let vals_l = Nx.eigh ~uplo:`L a |> fst in\n  let vals_u = Nx.eigh ~uplo:`U a |> fst in\n  check_nx \"eigh uplo L=U\" vals_l vals_u\n\nlet test_eigvals () =\n  let a = Nx.create Nx.float32 [| 2; 2 |] [| 2.; 1.; 1.; 2. |] in\n  let vals = Nx.eigvals a in\n  let vals_full, _ = Nx.eig a in\n  check_nx \"eigvals match eig\" vals vals_full\n\nlet test_eigvalsh () =\n  let a = Nx.create Nx.float32 [| 2; 2 |] [| 2.; 1.; 1.; 2. |] in\n  let vals = Nx.eigvalsh a in\n  let vals_full, _ = Nx.eigh a in\n  check_nx \"eigvalsh match eigh\" vals vals_full\n\n(* ───── Advanced Norm Tests ───── *)\n\nlet test_norm_ord () =\n  let m = Nx.create Nx.float32 [| 2; 2 |] [| 1.; 3.; 2.; 4. 
|] in\n  let n_nuc = Nx.norm ~ord:`Nuc m in\n  equal ~msg:\"norm nuclear\" (float 1e-3) 5.83095 (Nx.item [] n_nuc);\n  let n_two = Nx.norm ~ord:`Two m in\n  equal ~msg:\"norm two\" (float 1e-3) 5.46499 (Nx.item [] n_two);\n  let n_neg_two = Nx.norm ~ord:`NegTwo m in\n  equal ~msg:\"norm neg two\" (float 1e-3) 0.36597 (Nx.item [] n_neg_two)\n\nlet test_norm_keepdims () =\n  let v = Nx.create Nx.float32 [| 3 |] [| 3.; 4.; 0. |] in\n  let n = Nx.norm ~keepdims:true v in\n  check_shape \"norm keepdims\" [| 1 |] n;\n  check_t \"norm keepdims value\" [| 1 |] [| 5. |] n\n\nlet test_cond () =\n  let a = Nx.create Nx.float32 [| 2; 2 |] [| 1.; 0.; 0.; 1. |] in\n  let c = Nx.cond a in\n  check_t \"cond default\" [||] [| 1. |] c;\n  let c_inf = Nx.cond ~p:`Inf a in\n  check_t \"cond inf\" [||] [| 1. |] c_inf\n\n(* ───── Advanced Solve Tests ───── *)\n\nlet test_lstsq () =\n  let a = Nx.create Nx.float32 [| 3; 2 |] [| 1.; 1.; 1.; 2.; 1.; 3. |] in\n  let b = Nx.create Nx.float32 [| 3 |] [| 3.; 6.; 9. |] in\n  let x, _res, rank, _s = Nx.lstsq a b in\n  check_shape \"lstsq x\" [| 2 |] x;\n  equal ~msg:\"lstsq rank\" int 2 rank;\n  let approx_b = Nx.matmul a x in\n  check_nx ~epsilon:1e-5 \"lstsq approx\" b approx_b\n\nlet test_lstsq_rcond () =\n  let a = Nx.create Nx.float32 [| 2; 2 |] [| 1.; 0.; 0.; 1e-10 |] in\n  let b = Nx.create Nx.float32 [| 2 |] [| 1.; 0. |] in\n  let _, _, rank, _ = Nx.lstsq ~rcond:1e-8 a b in\n  equal ~msg:\"lstsq rcond rank\" int 1 rank\n\nlet test_lstsq_underdetermined () =\n  let a = Nx.create Nx.float32 [| 2; 3 |] [| 1.; 0.; 2.; 3.; 2.; 4. |] in\n  let b = Nx.create Nx.float32 [| 2 |] [| 1.; 0. 
|] in\n\n  let x, _res, rank, _s = Nx.lstsq ~rcond:1e-8 a b in\n  check_shape \"lstsq x underdetermined\" [| 3 |] x;\n  equal ~msg:\"lstsq rank underdetermined\" int 2 rank;\n  let approx_b_underdetermined = Nx.matmul a x in\n  check_nx \"lstsq approx underdetermined\" b approx_b_underdetermined\n\nlet test_pinv () =\n  let a = Nx.create Nx.float32 [| 2; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6. |] in\n  let pinv = Nx.pinv a in\n  check_shape \"pinv shape\" [| 3; 2 |] pinv;\n  let recon = Nx.matmul a (Nx.matmul pinv a) in\n  check_nx ~epsilon:1e-5 \"pinv recon\" a recon\n\nlet test_pinv_singular () =\n  let a = Nx.create Nx.float32 [| 2; 2 |] [| 1.; 2.; 2.; 4. |] in\n  let pinv = Nx.pinv a in\n  let recon = Nx.matmul a (Nx.matmul pinv a) in\n  check_nx ~epsilon:1e-5 \"pinv singular recon\" a recon\n\nlet test_tensorsolve () =\n  let a = Nx.create Nx.float32 [| 2; 2; 2; 2 |] (Array.init 16 float_of_int) in\n  let b = Nx.create Nx.float32 [| 2; 2 |] [| 1.; 2.; 3.; 4. |] in\n  let x = Nx.tensorsolve a b in\n  check_shape \"tensorsolve shape\" [| 2; 2 |] x;\n  let recon = Nx.tensordot a x ~axes:([ 2; 3 ], [ 0; 1 ]) in\n  check_nx ~epsilon:1e-5 \"tensorsolve recon\" b recon\n\nlet test_tensorsolve_axes () =\n  let a =\n    Nx.create Nx.float32 [| 3; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6.; 7.; 8.; 9. |]\n  in\n  let b = Nx.create Nx.float32 [| 3 |] [| 14.; 32.; 50. |] in\n  let x = Nx.tensorsolve ~axes:[ 1 ] a b in\n  (* Matrix is singular, so we get minimum norm solution via pinv *)\n  check_t ~eps:1e-5 \"tensorsolve axes\" [| 3 |] [| 1.; 2.; 3. 
|] x\n\nlet test_tensorinv () =\n  (* Use an invertible tensor *)\n  let a =\n    Nx.create Nx.float32 [| 2; 2; 2; 2 |]\n      [|\n        0.49671414;\n        -0.1382643;\n        0.64768857;\n        1.5230298;\n        -0.23415338;\n        -0.23413695;\n        1.5792128;\n        0.7674347;\n        -0.46947438;\n        0.54256004;\n        -0.46341768;\n        -0.46572974;\n        0.24196227;\n        -1.9132802;\n        -1.7249179;\n        -0.5622875;\n      |]\n  in\n  let inv = Nx.tensorinv ~ind:2 a in\n  check_shape \"tensorinv shape\" [| 2; 2; 2; 2 |] inv;\n  let recon = Nx.tensordot a inv ~axes:([ 2; 3 ], [ 0; 1 ]) in\n  let id = Nx.eye Nx.float32 4 |> Nx.reshape [| 2; 2; 2; 2 |] in\n  check_nx ~epsilon:1e-5 \"tensorinv recon\" id recon\n\nlet test_tensorinv_ind () =\n  let a = Nx.create Nx.float32 [| 4; 4 |] (Array.init 16 float_of_int) in\n  let inv = Nx.tensorinv ~ind:1 a in\n  check_shape \"tensorinv ind shape\" [| 4; 4 |] inv\n\n(* Test Suite Organization *)\n\nlet matmul_tests =\n  [\n    test \"matmul 1d x 1d\" test_matmul_1d_1d;\n    test \"matmul 1d x 2d\" test_matmul_1d_2d;\n    test \"matmul 2d x 1d\" test_matmul_2d_1d;\n    test \"matmul batch\" test_matmul_batch;\n    test \"matmul broadcast batch\" test_matmul_broadcast_batch;\n    test \"matmul 2d @ 3d broadcast\" test_matmul_2d_3d_broadcast;\n    test \"matmul shape error\" test_matmul_shape_error;\n    test \"matmul empty\" test_matmul_empty;\n    test \"matmul transpose optimization\" test_matmul_transpose_optimization;\n  ]\n\nlet dot_tests =\n  [\n    test \"dot 1d x 1d\" test_dot_1d_1d;\n    test \"dot 2d x 1d\" test_dot_2d_1d;\n    test \"dot 2d x 2d\" test_dot_2d_2d;\n    test \"dot higher-d\" test_dot_higher_d;\n    test \"dot scalar result\" test_dot_scalar_result;\n  ]\n\nlet solve_inverse_tests =\n  [\n    test \"solve identity\" test_solve_identity;\n    test \"solve simple\" test_solve_simple;\n    test \"solve batch\" test_solve_batch;\n    test \"solve singular\" 
test_solve_singular;\n    test \"solve non-square\" test_solve_non_square;\n    test \"inv identity\" test_inv_identity;\n    test \"inv inverse\" test_inv_inverse;\n    test \"inv singular\" test_inv_singular;\n  ]\n\nlet decomposition_tests =\n  [\n    test \"qr shape\" test_qr_shape;\n    test \"qr orthogonal\" test_qr_orthogonal;\n    test \"svd shape\" test_svd_shape;\n    test \"cholesky posdef\" test_cholesky_posdef;\n    test \"eig shape\" test_eig_shape;\n    test \"eig property\" test_eig_property;\n  ]\n\nlet norm_tests =\n  [\n    test \"norm vector L1\" test_norm_vector_1;\n    test \"norm vector L2\" test_norm_vector_2;\n    test \"norm vector Linf\" test_norm_vector_inf;\n    test \"norm matrix Frobenius\" test_norm_matrix_fro;\n    test \"norm matrix L1\" test_norm_matrix_1;\n    test \"norm axis\" test_norm_axis;\n    test \"norm empty\" test_norm_empty;\n  ]\n\nlet utility_tests =\n  [\n    test \"det 2x2\" test_det_2x2;\n    test \"det singular\" test_det_singular;\n    test \"diag extract\" test_diag_extract;\n  ]\n\nlet advanced_utility_tests =\n  [\n    test \"diagonal\" test_diagonal;\n    test \"diagonal edge\" test_diagonal_edge;\n    test \"matrix transpose\" test_matrix_transpose;\n    test \"trace offset\" test_trace_offset;\n    test \"det batch\" test_det_batch;\n    test \"slogdet\" test_slogdet;\n    test \"slogdet singular\" test_slogdet_singular;\n    test \"matrix rank\" test_matrix_rank;\n    test \"matrix rank tol\" test_matrix_rank_tol;\n    test \"matrix rank hermitian\" test_matrix_rank_hermitian;\n    test \"matrix rank hermitian negative\" test_matrix_rank_hermitian_negative;\n    test \"matrix rank hermitian complex\" test_matrix_rank_hermitian_complex;\n    test \"pinv hermitian\" test_pinv_hermitian;\n    test \"pinv hermitian negative\" test_pinv_hermitian_negative;\n    test \"pinv hermitian complex\" test_pinv_hermitian_complex;\n  ]\n\nlet product_tests =\n  [\n    test \"vdot\" test_vdot;\n    test \"vdot complex\" 
test_vdot_complex;\n    test \"conjugate\" test_conjugate;\n    test \"vdot mismatch\" test_vdot_mismatch;\n    test \"vecdot\" test_vecdot;\n    test \"inner\" test_inner;\n    test \"inner mismatch\" test_inner_mismatch;\n    test \"outer\" test_outer;\n    test \"tensordot\" test_tensordot;\n    test \"tensordot mismatch\" test_tensordot_mismatch;\n    test \"kron\" test_kron;\n    test \"multi dot\" test_multi_dot;\n    test \"multi dot empty\" test_multi_dot_empty;\n    test \"matrix power\" test_matrix_power;\n    test \"matrix power singular\" test_matrix_power_singular;\n    test \"cross\" test_cross;\n    test \"cross invalid\" test_cross_invalid;\n  ]\n\n(* Dedicated suite for einsum; avoids duplication in product_tests *)\nlet einsum_tests =\n  [\n    test \"einsum error cases\" test_einsum_error;\n    test \"einsum weighted broadcast\" einsum_weighted_broadcast;\n    test \"einsum complex fro inner\" einsum_complex_fro_inner;\n    test \"einsum int dot scalar\" einsum_int_dot_scalar;\n    test \"einsum axis order regression\" test_einsum_regression_axis_order;\n    test \"dot scalar i,i->\" einsum_dot_scalar;\n    test \"matmul ij,jk->ik\" einsum_matmul;\n    test \"transpose ij->ji\" einsum_transpose;\n    test \"outer i,j->ij\" einsum_outer;\n    test \"total sum ij->\" einsum_total_sum;\n    test \"diag extract ii->i\" einsum_diag_extract;\n    test \"batched diag ...ii->...i\" einsum_batched_diag;\n    test \"batched matmul ...ij,...jk->...ik\" einsum_batched_matmul;\n    test \"free order1 i,jk->jki\" einsum_free_order1;\n    test \"free order2 ij,klj->kli\" einsum_free_order2;\n    test \"mix reorder abc,bd->dac\" einsum_mix_reorder;\n    test \"chain ab,bc,cd->ad\" einsum_chain;\n    test \"diag sum ii\" einsum_diag_sum;\n    test \"hadamard vec i,i->i\" einsum_hadamard_vec;\n    test \"fro inner ij,ij->\" einsum_fro_inner;\n    test \"contract last ijk,k->ij\" einsum_contract_last;\n    test \"matvec ab,b->a\" einsum_matvec;\n    test \"contract 
3d vec abc,c->ab\" einsum_contract_3d_vec;\n    test \"broadcast last dot ...i,...i->...\" einsum_broadcast_last_dot;\n    test \"move first axis i...->...i\" einsum_move_first_axis_to_last;\n    test \"rowwise dot ij,j->i\" einsum_rowwise_dot;\n    test \"independent sum ab,cd->\" einsum_independent_sum;\n    test \"partial prereduction ij,kj->\" einsum_partial_prereduction;\n    test \"no shared with output ab,cd->ac\" einsum_no_shared_with_output;\n  ]\n\nlet advanced_decomposition_tests =\n  [\n    test \"cholesky upper\" test_cholesky_upper;\n    test \"cholesky non posdef\" test_cholesky_non_posdef;\n    test \"qr mode\" test_qr_mode;\n    test \"svd full matrices\" test_svd_full_matrices;\n    test \"svdvals\" test_svdvals;\n  ]\n\nlet eigen_tests =\n  [\n    test \"eigh\" test_eigh;\n    test \"eigh uplo\" test_eigh_uplo;\n    test \"eigvals\" test_eigvals;\n    test \"eigvalsh\" test_eigvalsh;\n  ]\n\nlet advanced_norm_tests =\n  [\n    test \"norm ord\" test_norm_ord;\n    test \"norm keepdims\" test_norm_keepdims;\n    test \"cond\" test_cond;\n  ]\n\nlet advanced_solve_tests =\n  [\n    test \"lstsq\" test_lstsq;\n    test \"lstsq rcond\" test_lstsq_rcond;\n    test \"lstsq underdetermined\" test_lstsq_underdetermined;\n    test \"pinv\" test_pinv;\n    test \"pinv singular\" test_pinv_singular;\n    test \"tensorsolve\" test_tensorsolve;\n    test \"tensorsolve axes\" test_tensorsolve_axes;\n    test \"tensorinv\" test_tensorinv;\n    test \"tensorinv ind\" test_tensorinv_ind;\n  ]\n\nlet () =\n  run \"Nx Linalg\"\n    [\n      group \"Matrix Multiply\" matmul_tests;\n      group \"Dot Product\" dot_tests;\n      group \"Solve/Inverse\" solve_inverse_tests;\n      group \"Decompositions\" decomposition_tests;\n      group \"Norms\" norm_tests;\n      group \"Utilities\" utility_tests;\n      group \"Advanced Utilities\" advanced_utility_tests;\n      group \"Product Ops\" product_tests;\n      group \"Einsum\" einsum_tests;\n      group \"Advanced 
Decompositions\" advanced_decomposition_tests;\n      group \"Eigen\" eigen_tests;\n      group \"Advanced Norms\" advanced_norm_tests;\n      group \"Advanced Solve\" advanced_solve_tests;\n    ]\n"
  },
  {
    "path": "packages/nx/test/test_nx_manipulation.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Shape manipulation tests for Nx *)\n\nopen Windtrap\nopen Test_nx_support\n\n(* ───── Reshape Tests ───── *)\n\nlet test_reshape_minus_one () =\n  let t = Nx.create Nx.float32 [| 2; 3; 4 |] (Array.init 24 float_of_int) in\n  (* Single -1 inference *)\n  let r1 = Nx.reshape [| -1 |] t in\n  check_shape \"reshape [-1]\" [| 24 |] r1;\n\n  let r2 = Nx.reshape [| 2; -1 |] t in\n  check_shape \"reshape [2,-1]\" [| 2; 12 |] r2;\n\n  let r3 = Nx.reshape [| -1; 6 |] t in\n  check_shape \"reshape [-1,6]\" [| 4; 6 |] r3\n\nlet test_reshape_multiple_minus_one () =\n  let t = Nx.create Nx.float32 [| 2; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6. |] in\n  check_invalid_arg \"multiple -1\"\n    \"reshape: shape specification, multiple -1 dimensions, can only specify \\\n     one unknown dimension\" (fun () -> ignore (Nx.reshape [| -1; -1 |] t))\n\nlet test_reshape_wrong_size () =\n  let t = Nx.create Nx.float32 [| 2; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6. 
|] in\n  check_invalid_arg \"wrong size\" \"reshape: cannot reshape [2,3] to [5]\"\n    (fun () -> ignore (Nx.reshape [| 5 |] t))\n\nlet test_reshape_0d_to_1d () =\n  let t = Nx.scalar Nx.float32 42.0 in\n  let r = Nx.reshape [| 1 |] t in\n  check_t \"reshape scalar to [1]\" [| 1 |] [| 42.0 |] r\n\nlet test_reshape_to_0d () =\n  let t = Nx.create Nx.float32 [| 1 |] [| 42.0 |] in\n  let r = Nx.reshape [||] t in\n  check_t \"reshape [1] to scalar\" [||] [| 42.0 |] r\n\nlet test_reshape_empty () =\n  (* Empty array reshapes *)\n  let t1 = Nx.create Nx.float32 [| 0; 5 |] [||] in\n  let r1 = Nx.reshape [| 0 |] t1 in\n  check_shape \"reshape [0,5] to [0]\" [| 0 |] r1;\n\n  let t2 = Nx.create Nx.float32 [| 0; 5 |] [||] in\n  let r2 = Nx.reshape [| 5; 0 |] t2 in\n  check_shape \"reshape [0,5] to [5,0]\" [| 5; 0 |] r2\n\nlet test_reshape_view_when_contiguous () =\n  let t = Nx.create Nx.float32 [| 4 |] [| 1.0; 2.0; 3.0; 4.0 |] in\n  let r = Nx.reshape [| 2; 2 |] t in\n  Nx.set_item [ 0 ] 77.0 t;\n  equal ~msg:\"reshape view sees source mutations\" (float 1e-6) 77.0\n    (Nx.item [ 0; 0 ] r);\n  Nx.set_item [ 0; 0 ] 42.0 r;\n  equal ~msg:\"reshape view mutates source\" (float 1e-6) 42.0 (Nx.item [ 0 ] t)\n\nlet test_reshape_copy_when_not_contiguous () =\n  let t = Nx.create Nx.float32 [| 2; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6. |] in\n  let transposed = Nx.transpose t in\n  check_invalid_arg \"reshape non-contiguous\"\n    \"reshape: cannot reshape [3,2] to [6], incompatible strides [1,3] \\\n     (expected [1]), call contiguous() first\" (fun () ->\n      Nx.reshape [| 6 |] transposed)\n\n(* ───── Transpose Tests ───── *)\n\nlet test_transpose_1d () =\n  let t = Nx.create Nx.float32 [| 5 |] [| 1.; 2.; 3.; 4.; 5. |] in\n  let tr = Nx.transpose t in\n  check_t \"transpose 1D is no-op\" [| 5 |] [| 1.; 2.; 3.; 4.; 5. 
|] tr\n\nlet test_transpose_0d () =\n  let t = Nx.scalar Nx.float32 42.0 in\n  let tr = Nx.transpose t in\n  check_t \"transpose scalar is no-op\" [||] [| 42.0 |] tr\n\nlet test_transpose_high_d () =\n  let t = Nx.create Nx.float32 [| 2; 3; 4; 5 |] (Array.init 120 float_of_int) in\n  let tr = Nx.transpose t in\n  check_shape \"transpose high-d shape\" [| 5; 4; 3; 2 |] tr;\n  (* Check a few values to ensure correct transpose *)\n  equal ~msg:\"transpose[0,0,0,0]\" (float 1e-6) 0.0 (Nx.item [ 0; 0; 0; 0 ] tr);\n  equal ~msg:\"transpose[0,0,0,1]\" (float 1e-6) 60.0 (Nx.item [ 0; 0; 0; 1 ] tr)\n\nlet test_transpose_axes () =\n  let t = Nx.create Nx.float32 [| 2; 3; 4 |] (Array.init 24 float_of_int) in\n  let tr = Nx.transpose ~axes:[ 1; 2; 0 ] t in\n  check_shape \"transpose custom axes\" [| 3; 4; 2 |] tr;\n  equal ~msg:\"transpose[0,0,0]\" (float 1e-6) 0.0 (Nx.item [ 0; 0; 0 ] tr);\n  equal ~msg:\"transpose[0,0,1]\" (float 1e-6) 12.0 (Nx.item [ 0; 0; 1 ] tr)\n\nlet test_transpose_invalid_axes () =\n  let t = Nx.create Nx.float32 [| 2; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6. |] in\n  check_invalid_arg \"invalid axes length\"\n    \"transpose: axes (length 3), expected rank 2, got 3, provide exactly one \\\n     axis per dimension\" (fun () -> Nx.transpose ~axes:[ 0; 1; 2 ] t)\n\nlet test_transpose_view () =\n  let t = Nx.create Nx.float32 [| 2; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6. |] in\n  let tr = Nx.transpose t in\n  Nx.set_item [ 0; 1 ] 99.0 t;\n  equal ~msg:\"transpose view modified\" (float 1e-6) 99.0 (Nx.item [ 1; 0 ] tr)\n\n(* ───── Concatenate Tests ───── *)\n\nlet test_concat_axis_1 () =\n  let t1 = Nx.create Nx.float32 [| 2; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6. |] in\n  let t2 = Nx.create Nx.float32 [| 2; 2 |] [| 7.; 8.; 9.; 10. |] in\n  let c = Nx.concatenate ~axis:1 [ t1; t2 ] in\n  check_t \"concat axis 1\" [| 2; 5 |]\n    [| 1.; 2.; 3.; 7.; 8.; 4.; 5.; 6.; 9.; 10. 
|]\n    c\n\nlet test_concat_empty_list () =\n  check_invalid_arg \"concat empty list\"\n    \"concatenate: tensor list cannot be empty, provide at least one tensor\"\n    (fun () -> Nx.concatenate [])\n\nlet test_concat_different_dtypes () =\n  (* For now, assuming concatenate requires same dtype - adjust if it\n     promotes *)\n  let t1 = Nx.create Nx.float32 [| 2 |] [| 1.0; 2.0 |] in\n  let t2 = Nx.create Nx.int32 [| 2 |] [| 3l; 4l |] in\n  check_invalid_arg \"concat different dtypes\"\n    \"concatenate: expected dtype float32, got int32\" (fun () ->\n      ignore (Nx.concatenate [ t1; Obj.magic t2 ]))\n\nlet test_concat_with_empty () =\n  let t1 = Nx.create Nx.float32 [| 2; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6. |] in\n  let t2 = Nx.create Nx.float32 [| 0; 3 |] [||] in\n  let c = Nx.concatenate ~axis:0 [ t1; t2 ] in\n  check_t \"concat with empty\" [| 2; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6. |] c\n\nlet test_concat_shape_mismatch () =\n  let t1 = Nx.create Nx.float32 [| 2; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6. |] in\n  let t2 =\n    Nx.create Nx.float32 [| 2; 4 |] [| 1.; 2.; 3.; 4.; 5.; 6.; 7.; 8. |]\n  in\n  check_invalid_arg \"shape mismatch\"\n    \"concatenate: dimension 1, size 4\\226\\137\\1603\" (fun () ->\n      Nx.concatenate ~axis:0 [ t1; t2 ])\n\nlet test_concat_new_array () =\n  let t1 = Nx.create Nx.float32 [| 2 |] [| 1.0; 2.0 |] in\n  let t2 = Nx.create Nx.float32 [| 2 |] [| 3.0; 4.0 |] in\n  let c = Nx.concatenate [ t1; t2 ] in\n  Nx.set_item [ 0 ] 99.0 t1;\n  equal ~msg:\"concat is new array\" (float 1e-6) 1.0 (Nx.item [ 0 ] c)\n\n(* ───── Stack Tests ───── *)\n\nlet test_stack_new_axis () =\n  let t1 = Nx.create Nx.float32 [| 2; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6. |] in\n  let t2 = Nx.create Nx.float32 [| 2; 3 |] [| 7.; 8.; 9.; 10.; 11.; 12. |] in\n  let s = Nx.stack ~axis:1 [ t1; t2 ] in\n  check_shape \"stack axis 1 shape\" [| 2; 2; 3 |] s;\n  check_t \"stack axis 1 values\" [| 2; 2; 3 |]\n    [| 1.; 2.; 3.; 7.; 8.; 9.; 4.; 5.; 6.; 10.; 11.; 12. 
|]\n    s\n\nlet test_stack_shape_mismatch () =\n  let t1 = Nx.create Nx.float32 [| 3 |] [| 1.; 2.; 3. |] in\n  let t2 = Nx.create Nx.float32 [| 4 |] [| 1.; 2.; 3.; 4. |] in\n  check_invalid_arg \"stack shape mismatch\"\n    \"concatenate: dimension 1, size 4\\226\\137\\1603\" (fun () ->\n      Nx.stack ~axis:0 [ t1; t2 ])\n\nlet test_stack_new_array () =\n  let t1 = Nx.create Nx.float32 [| 2 |] [| 1.0; 2.0 |] in\n  let t2 = Nx.create Nx.float32 [| 2 |] [| 3.0; 4.0 |] in\n  let s = Nx.stack ~axis:0 [ t1; t2 ] in\n  Nx.set_item [ 0 ] 99.0 t1;\n  equal ~msg:\"stack is new array\" (float 1e-6) 1.0 (Nx.item [ 0; 0 ] s)\n\n(* ───── Split Tests ───── *)\n\nlet test_split_equal () =\n  let t = Nx.create Nx.float32 [| 12 |] (Array.init 12 float_of_int) in\n  let parts = Nx.split ~axis:0 3 t in\n  equal ~msg:\"split count\" int 3 (List.length parts);\n  check_t \"split part 0\" [| 4 |] [| 0.; 1.; 2.; 3. |] (List.nth parts 0);\n  check_t \"split part 1\" [| 4 |] [| 4.; 5.; 6.; 7. |] (List.nth parts 1);\n  check_t \"split part 2\" [| 4 |] [| 8.; 9.; 10.; 11. |] (List.nth parts 2)\n\nlet test_split_unequal () =\n  let t = Nx.create Nx.float32 [| 10 |] (Array.init 10 float_of_int) in\n  check_invalid_arg \"split unequal\"\n    \"split: cannot divide evenly axis 0 (size 10) to 3 sections, 10 % 3 = 1, \\\n     use array_split for uneven division\" (fun () ->\n      ignore (Nx.split ~axis:0 3 t))\n\nlet test_split_axis () =\n  let t = Nx.create Nx.float32 [| 4; 6 |] (Array.init 24 float_of_int) in\n  let parts = Nx.split ~axis:1 2 t in\n  equal ~msg:\"split axis 1 count\" int 2 (List.length parts);\n  check_shape \"split axis 1 shape\" [| 4; 3 |] (List.nth parts 0)\n\nlet test_split_one () =\n  let t = Nx.create Nx.float32 [| 6 |] [| 1.; 2.; 3.; 4.; 5.; 6. |] in\n  let parts = Nx.split ~axis:0 1 t in\n  equal ~msg:\"split into 1 count\" int 1 (List.length parts);\n  check_t \"split into 1 part\" [| 6 |]\n    [| 1.; 2.; 3.; 4.; 5.; 6. 
|]\n    (List.nth parts 0)\n\nlet test_split_views () =\n  let t = Nx.create Nx.float32 [| 4 |] [| 1.0; 2.0; 3.0; 4.0 |] in\n  let parts = Nx.split ~axis:0 2 t in\n  let p1 = List.nth parts 0 in\n  Nx.set_item [ 0 ] 99.0 p1;\n  equal ~msg:\"split view modified\" (float 1e-6) 99.0 (Nx.item [ 0 ] t)\n\n(* ───── Array Split Tests ───── *)\n\nlet test_array_split_equal () =\n  let t = Nx.create Nx.float32 [| 6 |] [| 1.0; 2.0; 3.0; 4.0; 5.0; 6.0 |] in\n  let parts = Nx.array_split ~axis:0 (`Count 3) t in\n  equal ~msg:\"array_split equal count\" int 3 (List.length parts);\n  check_t \"array_split equal part 0\" [| 2 |] [| 1.0; 2.0 |] (List.nth parts 0)\n\nlet test_array_split_unequal () =\n  let t = Nx.create Nx.float32 [| 5 |] [| 1.0; 2.0; 3.0; 4.0; 5.0 |] in\n  let parts = Nx.array_split ~axis:0 (`Count 3) t in\n  equal ~msg:\"array_split unequal count\" int 3 (List.length parts);\n  check_t \"array_split unequal part 0\" [| 2 |] [| 1.0; 2.0 |] (List.nth parts 0);\n  check_t \"array_split unequal part 1\" [| 2 |] [| 3.0; 4.0 |] (List.nth parts 1);\n  check_t \"array_split unequal part 2\" [| 1 |] [| 5.0 |] (List.nth parts 2)\n\nlet test_array_split_views () =\n  let t = Nx.create Nx.float32 [| 5 |] [| 1.0; 2.0; 3.0; 4.0; 5.0 |] in\n  let parts = Nx.array_split ~axis:0 (`Count 2) t in\n  let p1 = List.nth parts 0 in\n  Nx.set_item [ 0 ] 99.0 p1;\n  equal ~msg:\"array_split view modified\" (float 1e-6) 99.0 (Nx.item [ 0 ] t)\n\n(* ───── Squeeze Expand Tests ───── *)\n\nlet test_squeeze_all () =\n  let t =\n    Nx.create Nx.float32 [| 1; 3; 1; 4; 1 |] (Array.init 12 float_of_int)\n  in\n  let s = Nx.squeeze t in\n  check_shape \"squeeze all\" [| 3; 4 |] s\n\nlet test_squeeze_specific () =\n  let t = Nx.create Nx.float32 [| 1; 3; 1; 4 |] (Array.init 12 float_of_int) in\n  let s = Nx.squeeze ~axes:[ 0; 2 ] t in\n  check_shape \"squeeze specific axes\" [| 3; 4 |] s\n\nlet test_squeeze_no_ones () =\n  let t = Nx.create Nx.float32 [| 2; 3; 4 |] (Array.init 24 float_of_int) 
in\n  let s = Nx.squeeze t in\n  check_shape \"squeeze no ones\" [| 2; 3; 4 |] s\n\nlet test_squeeze_invalid_axis () =\n  let t = Nx.create Nx.float32 [| 2; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6. |] in\n  check_invalid_arg \"squeeze invalid axis\"\n    \"squeeze: cannot remove dimension at axis 1 (size 3), size 3\\226\\137\\1601\"\n    (fun () -> ignore (Nx.squeeze ~axes:[ 1 ] t))\n\nlet test_expand_dims_various () =\n  let t = Nx.create Nx.float32 [| 3 |] [| 1.; 2.; 3. |] in\n\n  (* Add dim at position 0 *)\n  let e0 = Nx.expand_dims [ 0 ] t in\n  check_shape \"expand_dims at 0\" [| 1; 3 |] e0;\n\n  (* Add dim at position -1 (end) *)\n  let e_end = Nx.expand_dims [ -1 ] t in\n  check_shape \"expand_dims at -1\" [| 3; 1 |] e_end;\n\n  (* Add dim in middle *)\n  let t2 = Nx.create Nx.float32 [| 2; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6. |] in\n  let e_mid = Nx.expand_dims [ 1 ] t2 in\n  check_shape \"expand_dims in middle\" [| 2; 1; 3 |] e_mid\n\nlet test_expand_dims_invalid_axis () =\n  let t = Nx.create Nx.float32 [| 3 |] [| 1.0; 2.0; 3.0 |] in\n  check_invalid_arg \"expand_dims invalid axis\"\n    \"unsqueeze: axis 2, out of bounds for output rank 2, valid range is [-2, 2)\"\n    (fun () -> Nx.expand_dims [ 2 ] t)\n\n(* ───── Broadcasting Tests ───── *)\n\nlet test_broadcast_to_valid () =\n  let t = Nx.create Nx.float32 [| 3; 1 |] [| 1.; 2.; 3. |] in\n  let b = Nx.broadcast_to [| 3; 4 |] t in\n  check_shape \"broadcast valid shape\" [| 3; 4 |] b;\n  equal ~msg:\"broadcast[0,0]\" (float 1e-6) 1.0 (Nx.item [ 0; 0 ] b);\n  equal ~msg:\"broadcast[0,3]\" (float 1e-6) 1.0 (Nx.item [ 0; 3 ] b);\n  equal ~msg:\"broadcast[2,2]\" (float 1e-6) 3.0 (Nx.item [ 2; 2 ] b)\n\nlet test_broadcast_to_invalid () =\n  let t = Nx.create Nx.float32 [| 3 |] [| 1.; 2.; 3. 
|] in\n  check_invalid_arg \"broadcast invalid\"\n    \"broadcast_to: cannot broadcast [3] to [4] (dim 0: 3\\226\\137\\1604)\"\n    (fun () -> ignore (Nx.broadcast_to [| 4 |] t))\n\nlet test_broadcast_to_same () =\n  let t = Nx.create Nx.float32 [| 3; 4 |] (Array.init 12 float_of_int) in\n  let b = Nx.broadcast_to [| 3; 4 |] t in\n  check_t \"broadcast to same\" [| 3; 4 |] (Array.init 12 float_of_int) b\n\nlet test_broadcast_scalar () =\n  let t = Nx.scalar Nx.float32 5.0 in\n  let b = Nx.broadcast_to [| 3; 4; 5 |] t in\n  check_shape \"broadcast scalar shape\" [| 3; 4; 5 |] b;\n  equal ~msg:\"broadcast scalar value\" (float 1e-6) 5.0 (Nx.item [ 2; 3; 4 ] b)\n\nlet test_broadcast_arrays_compatible () =\n  let t1 = Nx.create Nx.float32 [| 3; 1 |] [| 1.0; 2.0; 3.0 |] in\n  let t2 = Nx.create Nx.float32 [| 1; 4 |] [| 10.0; 20.0; 30.0; 40.0 |] in\n  let broadcasted = Nx.broadcast_arrays [ t1; t2 ] in\n  equal ~msg:\"broadcast_arrays count\" int 2 (List.length broadcasted);\n  let b1 = List.nth broadcasted 0 in\n  let b2 = List.nth broadcasted 1 in\n  check_shape \"broadcast_arrays shape 1\" [| 3; 4 |] b1;\n  check_shape \"broadcast_arrays shape 2\" [| 3; 4 |] b2\n\nlet test_broadcast_arrays_views () =\n  let t1 = Nx.create Nx.float32 [| 3; 1 |] [| 1.0; 2.0; 3.0 |] in\n  let t2 = Nx.create Nx.float32 [| 1; 1 |] [| 10.0 |] in\n  let broadcasted = Nx.broadcast_arrays [ t1; t2 ] in\n  let b1 = List.nth broadcasted 0 in\n  Nx.set_item [ 0; 0 ] 99.0 t1;\n  equal ~msg:\"broadcast array view modified\" (float 1e-6) 99.0\n    (Nx.item [ 0; 0 ] b1)\n\nlet test_broadcast_arrays_invalid () =\n  let t1 = Nx.create Nx.float32 [| 2 |] [| 1.0; 2.0 |] in\n  let t2 = Nx.create Nx.float32 [| 3 |] [| 1.0; 2.0; 3.0 |] in\n  check_invalid_arg \"broadcast_arrays invalid\"\n    \"broadcast: cannot broadcast [2] with [3] (dim 0: 2\\226\\137\\1603)\"\n    (fun () -> Nx.broadcast_arrays [ t1; t2 ])\n\n(* ───── Tile Repeat Tests ───── *)\n\nlet test_tile_1d () =\n  let t = Nx.create Nx.float32 [| 
3 |] [| 1.; 2.; 3. |] in\n  let tiled = Nx.tile [| 2 |] t in\n  check_t \"tile 1d\" [| 6 |] [| 1.; 2.; 3.; 1.; 2.; 3. |] tiled\n\nlet test_tile_2d () =\n  let t = Nx.create Nx.float32 [| 2; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6. |] in\n  let tiled = Nx.tile [| 2; 1 |] t in\n  check_t \"tile 2d\" [| 4; 3 |]\n    [| 1.; 2.; 3.; 4.; 5.; 6.; 1.; 2.; 3.; 4.; 5.; 6. |]\n    tiled\n\nlet test_tile_broadcast () =\n  let t = Nx.create Nx.float32 [| 3 |] [| 1.; 2.; 3. |] in\n  let tiled = Nx.tile [| 2; 3 |] t in\n  check_shape \"tile broadcast shape\" [| 2; 9 |] tiled;\n  check_t \"tile broadcast\" [| 2; 9 |]\n    [| 1.; 2.; 3.; 1.; 2.; 3.; 1.; 2.; 3.; 1.; 2.; 3.; 1.; 2.; 3.; 1.; 2.; 3. |]\n    tiled\n\nlet test_tile_invalid () =\n  let t = Nx.create Nx.float32 [| 2 |] [| 1.0; 2.0 |] in\n  (* Giving more reps than tensor dimensions is valid: tile promotes the\n     tensor. A negative rep count is the actual invalid case, tested here. *)\n  check_invalid_arg \"tile invalid\"\n    \"tile: reps[0], negative (-1<0), use positive integers (or 0 for empty \\\n     result)\" (fun () -> Nx.tile [| -1 |] t)\n\nlet test_repeat_axis () =\n  let t = Nx.create Nx.float32 [| 2; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6. |] in\n  let r = Nx.repeat ~axis:0 2 t in\n  check_t \"repeat axis 0\" [| 4; 3 |]\n    [| 1.; 2.; 3.; 1.; 2.; 3.; 4.; 5.; 6.; 4.; 5.; 6. |]\n    r\n\nlet test_repeat_no_axis () =\n  let t = Nx.create Nx.float32 [| 2; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6. |] in\n  let r = Nx.repeat 2 t in\n  check_t \"repeat no axis\" [| 12 |]\n    [| 1.; 1.; 2.; 2.; 3.; 3.; 4.; 4.; 5.; 5.; 6.; 6. 
|]\n    r\n\nlet test_repeat_invalid () =\n  let t = Nx.create Nx.float32 [| 2 |] [| 1.0; 2.0 |] in\n  check_invalid_arg \"repeat negative\" \"repeat: count must be >= 0, got -1\"\n    (fun () -> Nx.repeat ~axis:0 (-1) t)\n\n(* ───── Other Shape Manipulation Tests ───── *)\n\nlet test_flatten_view () =\n  let t = Nx.create Nx.float32 [| 2; 2 |] [| 1.0; 2.0; 3.0; 4.0 |] in\n  let flat = Nx.flatten t in\n  Nx.set_item [ 0; 0 ] 99.0 t;\n  equal ~msg:\"flatten view modified\" (float 1e-6) 99.0 (Nx.item [ 0 ] flat)\n\nlet test_ravel_contiguous_view () =\n  let t = Nx.create Nx.float32 [| 2; 2 |] [| 1.0; 2.0; 3.0; 4.0 |] in\n  let r = Nx.ravel t in\n  Nx.set_item [ 0; 0 ] 99.0 t;\n  equal ~msg:\"ravel view modified\" (float 1e-6) 99.0 (Nx.item [ 0 ] r)\n\nlet test_ravel_non_contiguous_copy () =\n  let t = Nx.create Nx.float32 [| 2; 2 |] [| 1.0; 2.0; 3.0; 4.0 |] in\n  let tr = Nx.transpose t in\n  check_invalid_arg \"ravel non-contiguous\"\n    \"reshape: cannot reshape [2,2] to [4], incompatible strides [1,2] \\\n     (expected [1]), call contiguous() first\" (fun () -> Nx.ravel tr)\n\nlet test_pad_2d () =\n  let t = Nx.create Nx.float32 [| 2; 2 |] [| 1.0; 2.0; 3.0; 4.0 |] in\n  let p = Nx.pad [| (1, 1); (0, 1) |] 0.0 t in\n  check_t \"pad 2d\" [| 4; 3 |]\n    [| 0.0; 0.0; 0.0; 1.0; 2.0; 0.0; 3.0; 4.0; 0.0; 0.0; 0.0; 0.0 |]\n    p\n\nlet test_pad_invalid () =\n  let t = Nx.create Nx.float32 [| 2 |] [| 1.0; 2.0 |] in\n  check_invalid_arg \"pad negative\"\n    \"pad: padding values, negative values not allowed, use shrink or slice to \\\n     remove elements\" (fun () -> Nx.pad [| (-1, 2) |] 0.0 t)\n\nlet test_flip_axis () =\n  let t = Nx.create Nx.float32 [| 2; 3 |] [| 1.0; 2.0; 3.0; 4.0; 5.0; 6.0 |] in\n  let f = Nx.flip ~axes:[ 1 ] t in\n  check_t \"flip axis 1\" [| 2; 3 |] [| 3.0; 2.0; 1.0; 6.0; 5.0; 4.0 |] f\n\nlet test_flip_view () =\n  let t = Nx.create Nx.float32 [| 2; 2 |] [| 1.0; 2.0; 3.0; 4.0 |] in\n  let f = Nx.flip t in\n  Nx.set_item [ 0; 0 ] 99.0 t;\n  equal 
~msg:\"flip view modified\" (float 1e-6) 99.0 (Nx.item [ 1; 1 ] f)\n\nlet test_roll_no_axis () =\n  let t = Nx.create Nx.float32 [| 2; 2 |] [| 1.0; 2.0; 3.0; 4.0 |] in\n  let r = Nx.roll 1 t in\n  check_t \"roll no axis\" [| 2; 2 |] [| 4.0; 1.0; 2.0; 3.0 |] r\n\nlet test_roll_negative () =\n  let t = Nx.create Nx.float32 [| 3 |] [| 1.0; 2.0; 3.0 |] in\n  let r = Nx.roll ~axis:0 (-1) t in\n  check_t \"roll negative\" [| 3 |] [| 2.0; 3.0; 1.0 |] r\n\nlet test_moveaxis_view () =\n  let t = Nx.create Nx.float32 [| 2; 2 |] [| 1.0; 2.0; 3.0; 4.0 |] in\n  let m = Nx.moveaxis 0 1 t in\n  Nx.set_item [ 0; 0 ] 99.0 t;\n  equal ~msg:\"moveaxis view modified\" (float 1e-6) 99.0 (Nx.item [ 0; 0 ] m)\n\nlet test_moveaxis_invalid () =\n  let t = Nx.create Nx.float32 [| 2; 2 |] [| 1.0; 2.0; 3.0; 4.0 |] in\n  check_invalid_arg \"moveaxis invalid\"\n    \"moveaxis: source 2 or destination 0, out of bounds for shape [2,2]\"\n    (fun () -> Nx.moveaxis 2 0 t)\n\nlet test_swapaxes_view () =\n  let t = Nx.create Nx.float32 [| 2; 2 |] [| 1.0; 2.0; 3.0; 4.0 |] in\n  let s = Nx.swapaxes 0 1 t in\n  Nx.set_item [ 0; 1 ] 99.0 t;\n  equal ~msg:\"swapaxes view modified\" (float 1e-6) 99.0 (Nx.item [ 1; 0 ] s)\n\nlet test_swapaxes_invalid () =\n  let t = Nx.create Nx.float32 [| 2; 2 |] [| 1.0; 2.0; 3.0; 4.0 |] in\n  check_invalid_arg \"swapaxes invalid\"\n    \"swapaxes: axes (2, 0), out of bounds for shape [2,2]\" (fun () ->\n      Nx.swapaxes 2 0 t)\n\nlet test_dstack_1d () =\n  let t1 = Nx.create Nx.float32 [| 2 |] [| 1.0; 2.0 |] in\n  let t2 = Nx.create Nx.float32 [| 2 |] [| 3.0; 4.0 |] in\n  let d = Nx.dstack [ t1; t2 ] in\n  check_t \"dstack 1d\" [| 1; 2; 2 |] [| 1.0; 3.0; 2.0; 4.0 |] d\n\nlet test_vstack_invalid () =\n  let t1 = Nx.create Nx.float32 [| 2; 2 |] [| 1.0; 2.0; 3.0; 4.0 |] in\n  let t2 = Nx.create Nx.float32 [| 2; 3 |] [| 1.0; 2.0; 3.0; 4.0; 5.0; 6.0 |] in\n  check_invalid_arg \"vstack invalid\"\n    \"concatenate: dimension 1, size 3\\226\\137\\1602\" (fun () ->\n      
Nx.vstack [ t1; t2 ])\n\nlet test_hstack_invalid () =\n  let t1 = Nx.create Nx.float32 [| 2; 2 |] [| 1.0; 2.0; 3.0; 4.0 |] in\n  let t2 = Nx.create Nx.float32 [| 3; 2 |] [| 1.0; 2.0; 3.0; 4.0; 5.0; 6.0 |] in\n  check_invalid_arg \"hstack invalid\"\n    \"concatenate: dimension 0, size 3\\226\\137\\1602\" (fun () ->\n      Nx.hstack [ t1; t2 ])\n\nlet test_dstack_invalid () =\n  let t1 = Nx.create Nx.float32 [| 2; 2 |] [| 1.0; 2.0; 3.0; 4.0 |] in\n  let t2 = Nx.create Nx.float32 [| 2; 3 |] [| 1.0; 2.0; 3.0; 4.0; 5.0; 6.0 |] in\n  check_invalid_arg \"dstack invalid\"\n    \"concatenate: dimension 1, size 3\\226\\137\\1602\" (fun () ->\n      Nx.dstack [ t1; t2 ])\n\n(* Test Suite Organization *)\n\nlet reshape_tests =\n  [\n    test \"reshape minus one\" test_reshape_minus_one;\n    test \"reshape multiple minus one\" test_reshape_multiple_minus_one;\n    test \"reshape wrong size\" test_reshape_wrong_size;\n    test \"reshape 0d to 1d\" test_reshape_0d_to_1d;\n    test \"reshape to 0d\" test_reshape_to_0d;\n    test \"reshape empty\" test_reshape_empty;\n    test \"reshape view when contiguous\" test_reshape_view_when_contiguous;\n    test \"reshape copy when not contiguous\"\n      test_reshape_copy_when_not_contiguous;\n  ]\n\nlet transpose_tests =\n  [\n    test \"transpose 1d\" test_transpose_1d;\n    test \"transpose 0d\" test_transpose_0d;\n    test \"transpose high d\" test_transpose_high_d;\n    test \"transpose axes\" test_transpose_axes;\n    test \"transpose invalid axes\" test_transpose_invalid_axes;\n    test \"transpose view\" test_transpose_view;\n  ]\n\nlet concatenate_tests =\n  [\n    test \"concat axis 1\" test_concat_axis_1;\n    test \"concat empty list\" test_concat_empty_list;\n    test \"concat different dtypes\" test_concat_different_dtypes;\n    test \"concat with empty\" test_concat_with_empty;\n    test \"concat shape mismatch\" test_concat_shape_mismatch;\n    test \"concat new array\" test_concat_new_array;\n  ]\n\nlet stack_tests =\n  
[\n    test \"stack new axis\" test_stack_new_axis;\n    test \"stack shape mismatch\" test_stack_shape_mismatch;\n    test \"stack new array\" test_stack_new_array;\n  ]\n\nlet split_tests =\n  [\n    test \"split equal\" test_split_equal;\n    test \"split unequal\" test_split_unequal;\n    test \"split axis\" test_split_axis;\n    test \"split one\" test_split_one;\n    test \"split views\" test_split_views;\n    test \"array split equal\" test_array_split_equal;\n    test \"array split unequal\" test_array_split_unequal;\n    test \"array split views\" test_array_split_views;\n  ]\n\nlet squeeze_expand_tests =\n  [\n    test \"squeeze all\" test_squeeze_all;\n    test \"squeeze specific\" test_squeeze_specific;\n    test \"squeeze no ones\" test_squeeze_no_ones;\n    test \"squeeze invalid axis\" test_squeeze_invalid_axis;\n    test \"expand dims various\" test_expand_dims_various;\n    test \"expand dims invalid axis\" test_expand_dims_invalid_axis;\n  ]\n\nlet broadcast_tests =\n  [\n    test \"broadcast to valid\" test_broadcast_to_valid;\n    test \"broadcast to invalid\" test_broadcast_to_invalid;\n    test \"broadcast to same\" test_broadcast_to_same;\n    test \"broadcast scalar\" test_broadcast_scalar;\n    test \"broadcast arrays compatible\" test_broadcast_arrays_compatible;\n    test \"broadcast arrays views\" test_broadcast_arrays_views;\n    test \"broadcast arrays invalid\" test_broadcast_arrays_invalid;\n  ]\n\nlet tile_repeat_tests =\n  [\n    test \"tile 1d\" test_tile_1d;\n    test \"tile 2d\" test_tile_2d;\n    test \"tile broadcast\" test_tile_broadcast;\n    test \"tile invalid\" test_tile_invalid;\n    test \"repeat axis\" test_repeat_axis;\n    test \"repeat no axis\" test_repeat_no_axis;\n    test \"repeat invalid\" test_repeat_invalid;\n  ]\n\nlet other_manipulation_tests =\n  [\n    test \"flatten view\" test_flatten_view;\n    test \"ravel contiguous view\" test_ravel_contiguous_view;\n    test \"ravel non-contiguous copy\" 
test_ravel_non_contiguous_copy;\n    test \"pad 2d\" test_pad_2d;\n    test \"pad invalid\" test_pad_invalid;\n    test \"flip axis\" test_flip_axis;\n    test \"flip view\" test_flip_view;\n    test \"roll no axis\" test_roll_no_axis;\n    test \"roll negative\" test_roll_negative;\n    test \"moveaxis view\" test_moveaxis_view;\n    test \"moveaxis invalid\" test_moveaxis_invalid;\n    test \"swapaxes view\" test_swapaxes_view;\n    test \"swapaxes invalid\" test_swapaxes_invalid;\n    test \"dstack 1d\" test_dstack_1d;\n    test \"vstack invalid\" test_vstack_invalid;\n    test \"hstack invalid\" test_hstack_invalid;\n    test \"dstack invalid\" test_dstack_invalid;\n  ]\n\nlet () =\n  run \"Nx Manipulation\"\n    [\n      group \"Reshape\" reshape_tests;\n      group \"Transpose\" transpose_tests;\n      group \"Concatenate\" concatenate_tests;\n      group \"Stack\" stack_tests;\n      group \"Split\" split_tests;\n      group \"Squeeze/Expand\" squeeze_expand_tests;\n      group \"Broadcasting\" broadcast_tests;\n      group \"Tile/Repeat\" tile_repeat_tests;\n      group \"Other Manipulation\" other_manipulation_tests;\n    ]\n"
  },
  {
    "path": "packages/nx/test/test_nx_ops.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Edge case and special-value tests for Nx operations.\n\n   Algebraic properties (commutativity, identity, inverse, etc.) are covered by\n   property-based tests in nx/test/props/test_nx_props.ml. This file retains\n   tests for NaN/Inf behavior, error conditions, and specific numerical accuracy\n   checks. *)\n\nopen Windtrap\nopen Test_nx_support\n\n(* ───── Generic Test Helpers ───── *)\n\nlet test_broadcast_error ~op ~op_name ~dtype ~a_shape ~b_shape () =\n  let a = Nx.zeros dtype a_shape in\n  let b = Nx.zeros dtype b_shape in\n  check_invalid_arg\n    (Printf.sprintf \"%s incompatible broadcast\" op_name)\n    (Printf.sprintf\n       \"broadcast: cannot broadcast %s with %s (dim 0: 3\\226\\137\\1604)\"\n       (Nx.shape_to_string a_shape)\n       (Nx.shape_to_string b_shape))\n    (fun () -> ignore (op a b))\n\nlet test_nan_propagation ~op ~op_name () =\n  let a = Nx.create Nx.float32 [| 3 |] [| Float.nan; 1.0; 2.0 |] in\n  let b = Nx.create Nx.float32 [| 3 |] [| 5.0; Float.nan; 3.0 |] in\n  let result = op a b in\n  equal\n    ~msg:(Printf.sprintf \"%s nan[0]\" op_name)\n    bool true\n    (Float.is_nan (Nx.item [ 0 ] result));\n  equal\n    ~msg:(Printf.sprintf \"%s nan[1]\" op_name)\n    bool true\n    (Float.is_nan (Nx.item [ 1 ] result))\n\nlet test_unary_op ~op ~op_name ~dtype ~shape ~input ~expected () =\n  let t = Nx.create dtype shape input in\n  let result = op t in\n  check_t op_name shape expected result\n\n(* ───── Add Edge Cases ───── *)\n\nlet add_edge_cases =\n  [\n    test \"broadcast error\"\n      (test_broadcast_error ~op:Nx.add ~op_name:\"add\" ~dtype:Nx.float32\n         ~a_shape:[| 3 |] ~b_shape:[| 4 |]);\n    test \"nan propagation\" (test_nan_propagation ~op:Nx.add 
~op_name:\"add\");\n    test \"inf arithmetic\" (fun () ->\n        let a =\n          Nx.create Nx.float32 [| 2 |] [| Float.infinity; Float.neg_infinity |]\n        in\n        let b = Nx.create Nx.float32 [| 2 |] [| 5.0; 10.0 |] in\n        let result = Nx.add a b in\n        equal ~msg:\"inf + 5\" (float 1e-6) Float.infinity (Nx.item [ 0 ] result);\n        equal ~msg:\"-inf + 10\" (float 1e-6) Float.neg_infinity\n          (Nx.item [ 1 ] result));\n    test \"inf + inf\" (fun () ->\n        let a =\n          Nx.create Nx.float32 [| 2 |] [| Float.infinity; Float.infinity |]\n        in\n        let b =\n          Nx.create Nx.float32 [| 2 |] [| Float.infinity; Float.neg_infinity |]\n        in\n        let result = Nx.add a b in\n        equal ~msg:\"inf + inf\" (float 1e-6) Float.infinity\n          (Nx.item [ 0 ] result);\n        equal ~msg:\"inf + -inf\" bool true (Float.is_nan (Nx.item [ 1 ] result)));\n  ]\n\n(* ───── Sub Edge Cases ───── *)\n\nlet sub_edge_cases =\n  [\n    test \"inf - inf\" (fun () ->\n        let a =\n          Nx.create Nx.float32 [| 2 |] [| Float.infinity; Float.infinity |]\n        in\n        let b =\n          Nx.create Nx.float32 [| 2 |] [| Float.infinity; Float.neg_infinity |]\n        in\n        let result = Nx.sub a b in\n        equal ~msg:\"inf - inf\" bool true (Float.is_nan (Nx.item [ 0 ] result));\n        equal ~msg:\"inf - -inf\" (float 1e-6) Float.infinity\n          (Nx.item [ 1 ] result));\n  ]\n\n(* ───── Div Edge Cases ───── *)\n\nlet div_edge_cases =\n  [\n    test \"div by zero float\" (fun () ->\n        let a = Nx.create Nx.float32 [| 3 |] [| 1.0; -1.0; 0.0 |] in\n        let b = Nx.create Nx.float32 [| 3 |] [| 0.0; 0.0; 0.0 |] in\n        let result = Nx.div a b in\n        equal ~msg:\"1/0\" (float 1e-6) Float.infinity (Nx.item [ 0 ] result);\n        equal ~msg:\"-1/0\" (float 1e-6) Float.neg_infinity (Nx.item [ 1 ] result);\n        equal ~msg:\"0/0\" bool true (Float.is_nan (Nx.item [ 2 ] result)));\n  
]\n\n(* ───── Pow Edge Cases ───── *)\n\nlet pow_edge_cases =\n  [\n    test \"zero^zero\" (fun () ->\n        let a = Nx.create Nx.float32 [| 1 |] [| 0.0 |] in\n        let b = Nx.create Nx.float32 [| 1 |] [| 0.0 |] in\n        let result = Nx.pow a b in\n        equal ~msg:\"0^0\" (float 1e-6) 1.0 (Nx.item [ 0 ] result));\n    test \"negative base fractional exp\" (fun () ->\n        let a = Nx.create Nx.float32 [| 1 |] [| -2.0 |] in\n        let b = Nx.create Nx.float32 [| 1 |] [| 0.5 |] in\n        let result = Nx.pow a b in\n        equal ~msg:\"(-2)^0.5\" bool true (Float.is_nan (Nx.item [ 0 ] result)));\n    test \"pow overflow\" (fun () ->\n        let a = Nx.create Nx.float32 [| 1 |] [| 10.0 |] in\n        let b = Nx.create Nx.float32 [| 1 |] [| 100.0 |] in\n        let result = Nx.pow a b in\n        equal ~msg:\"10^100\" (float 1e-6) Float.infinity (Nx.item [ 0 ] result));\n  ]\n\n(* ───── Math Function Edge Cases ───── *)\n\nlet math_edge_cases =\n  [\n    test \"exp overflow\" (fun () ->\n        let t = Nx.create Nx.float32 [| 1 |] [| 1000.0 |] in\n        let result = Nx.exp t in\n        equal ~msg:\"exp(1000)\" (float 1e-6) Float.infinity\n          (Nx.item [ 0 ] result));\n    test \"exp underflow\" (fun () ->\n        let t = Nx.create Nx.float32 [| 1 |] [| -1000.0 |] in\n        let result = Nx.exp t in\n        equal ~msg:\"exp(-1000)\" (float 1e-6) 0.0 (Nx.item [ 0 ] result));\n    test \"log negative\" (fun () ->\n        let t = Nx.create Nx.float32 [| 1 |] [| -1.0 |] in\n        let result = Nx.log t in\n        equal ~msg:\"log(-1)\" bool true (Float.is_nan (Nx.item [ 0 ] result)));\n    test \"log zero\" (fun () ->\n        let t = Nx.create Nx.float32 [| 1 |] [| 0.0 |] in\n        let result = Nx.log t in\n        equal ~msg:\"log(0)\" (float 1e-6) Float.neg_infinity\n          (Nx.item [ 0 ] result));\n    test \"sqrt negative\" (fun () ->\n        let t = Nx.create Nx.float32 [| 1 |] [| -1.0 |] in\n        let result = Nx.sqrt t in\n  
      equal ~msg:\"sqrt(-1)\" bool true (Float.is_nan (Nx.item [ 0 ] result)));\n    test \"asin out of domain\" (fun () ->\n        let t = Nx.create Nx.float32 [| 1 |] [| 2.0 |] in\n        let result = Nx.asin t in\n        equal ~msg:\"asin(2)\" bool true (Float.is_nan (Nx.item [ 0 ] result)));\n  ]\n\n(* ───── Comparison Edge Cases ───── *)\n\nlet comparison_edge_cases =\n  [\n    test \"nan comparisons\" (fun () ->\n        let t1 = Nx.create Nx.float32 [| 3 |] [| Float.nan; 1.; Float.nan |] in\n        let t2 = Nx.create Nx.float32 [| 3 |] [| Float.nan; Float.nan; 1. |] in\n        let eq_result = Nx.equal t1 t2 in\n        let ne_result = Nx.not_equal t1 t2 in\n        check_t \"nan equal\" [| 3 |] [| false; false; false |] eq_result;\n        check_t \"nan not_equal\" [| 3 |] [| true; true; true |] ne_result);\n  ]\n\n(* ───── Reduction Edge Cases ───── *)\n\nlet reduction_edge_cases =\n  [\n    test \"sum axis=1 keepdims\" (fun () ->\n        let t = Nx.create Nx.float32 [| 2; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6. |] in\n        let result = Nx.sum ~axes:[ 1 ] ~keepdims:true t in\n        check_t \"sum axis=1 keepdims\" [| 2; 1 |] [| 6.; 15. |] result);\n    test \"empty array mean\" (fun () ->\n        let _t = Nx.create Nx.float32 [| 0 |] [||] in\n        (* Skip for now - mean of empty array behavior needs investigation *)\n        ());\n    test \"min/max with nan\" (fun () ->\n        let t = Nx.create Nx.float32 [| 3 |] [| 1.; Float.nan; 3. |] in\n        let min_result = Nx.min t in\n        let max_result = Nx.max t in\n        equal ~msg:\"min with nan\" bool true\n          (Float.is_nan (Nx.item [] min_result));\n        equal ~msg:\"max with nan\" bool true\n          (Float.is_nan (Nx.item [] max_result)));\n  ]\n\n(* ───── Rounding Edge Cases ───── *)\n\nlet rounding_edge_cases =\n  [\n    test \"clip\" (fun () ->\n        let t = Nx.create Nx.float32 [| 5 |] [| -1.; 2.; 5.; 8.; 10. |] in\n        let result = Nx.clip ~min:0. ~max:7. 
t in\n        check_t \"clip\" [| 5 |] [| 0.; 2.; 5.; 7.; 7. |] result);\n  ]\n\n(* ───── Cumulative Tests ───── *)\n\nlet cumulative_tests =\n  [\n    test \"cumsum default axis\" (fun () ->\n        let t = Nx.create Nx.float32 [| 2; 2 |] [| 1.; 2.; 3.; 4. |] in\n        let result = Nx.cumsum t in\n        check_t ~eps:1e-6 \"cumsum flatten\" [| 2; 2 |] [| 1.; 3.; 6.; 10. |]\n          result);\n    test \"cumsum axis=1\" (fun () ->\n        let t = Nx.create Nx.float32 [| 2; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6. |] in\n        let result = Nx.cumsum ~axis:1 t in\n        check_t ~eps:1e-6 \"cumsum axis=1\" [| 2; 3 |]\n          [| 1.; 3.; 6.; 4.; 9.; 15. |]\n          result);\n    test \"cumprod axis=-1\" (fun () ->\n        let t = Nx.create Nx.int32 [| 2; 3 |] [| 1l; 2l; 3l; 2l; 2l; 2l |] in\n        let result = Nx.cumprod ~axis:(-1) t in\n        check_t \"cumprod axis=-1\" [| 2; 3 |] [| 1l; 2l; 6l; 2l; 4l; 8l |] result);\n    test \"cummax nan propagation\" (fun () ->\n        let t = Nx.create Nx.float32 [| 4 |] [| 1.; Float.nan; 2.; 3. |] in\n        let result = Nx.cummax t in\n        equal ~msg:\"cummax nan[1]\" bool true\n          (Float.is_nan (Nx.item [ 1 ] result));\n        equal ~msg:\"cummax nan[2]\" bool true\n          (Float.is_nan (Nx.item [ 2 ] result)));\n    test \"cummin axis option\" (fun () ->\n        let t =\n          Nx.create Nx.int32 [| 2; 4 |] [| 4l; 2l; 3l; 1l; 5l; 6l; 2l; 7l |]\n        in\n        let result = Nx.cummin ~axis:0 t in\n        check_t \"cummin axis=0\" [| 2; 4 |]\n          [| 4l; 2l; 3l; 1l; 4l; 2l; 2l; 1l |]\n          result);\n  ]\n\n(* ───── Bitwise Edge Cases ───── *)\n\nlet bitwise_edge_cases =\n  [\n    test \"invert\"\n      (test_unary_op ~op:Nx.invert ~op_name:\"invert\" ~dtype:Nx.int32\n         ~shape:[| 3 |] ~input:[| 5l; 0l; 7l |] ~expected:[| -6l; -1l; -8l |]);\n  ]\n\n(* ───── Log/Standardize Tests ───── *)\n\nlet test_log_softmax_basic () =\n  let input = Nx.create Nx.float32 [| 3 |] [| 1.; 2.; 3. 
|] in\n  let result = Nx.log_softmax input in\n  let data = [| 1.; 2.; 3. |] in\n  let max_x = Array.fold_left Float.max data.(0) data in\n  let denom =\n    Array.fold_left (fun acc v -> acc +. Float.exp (v -. max_x)) 0. data\n  in\n  let log_den = Float.log denom in\n  let expected = Array.map (fun v -> v -. max_x -. log_den) data in\n  check_t ~eps:1e-6 \"log_softmax basic\" [| 3 |] expected result\n\nlet test_log_softmax_with_scale () =\n  let input = Nx.create Nx.float32 [| 3 |] [| 0.; 1.; 2. |] in\n  let scale = 0.5 in\n  let result = Nx.log_softmax ~scale input in\n  let data = [| 0.; 1.; 2. |] in\n  let max_x = Array.fold_left Float.max data.(0) data in\n  let denom =\n    Array.fold_left\n      (fun acc v -> acc +. Float.exp (scale *. (v -. max_x)))\n      0. data\n  in\n  let log_den = Float.log denom in\n  let expected = Array.map (fun v -> (scale *. (v -. max_x)) -. log_den) data in\n  check_t ~eps:1e-6 \"log_softmax scale\" [| 3 |] expected result\n\nlet test_logsumexp_basic () =\n  let input = Nx.create Nx.float32 [| 3 |] [| 1.; 2.; 3. |] in\n  let result = Nx.logsumexp input in\n  let data = [| 1.; 2.; 3. |] in\n  let max_x = Array.fold_left Float.max data.(0) data in\n  let denom =\n    Array.fold_left (fun acc v -> acc +. Float.exp (v -. max_x)) 0. data\n  in\n  let expected = Float.log denom +. max_x in\n  check_t ~eps:1e-6 \"logsumexp basic\" [||] [| expected |] result\n\nlet test_logsumexp_axis () =\n  let input = Nx.create Nx.float32 [| 2; 2 |] [| 1.; 2.; 3.; 4. |] in\n  let result = Nx.logsumexp ~axes:[ 1 ] input in\n  let rows = [| [| 1.; 2. |]; [| 3.; 4. |] |] in\n  let expected =\n    Array.map\n      (fun row ->\n        let max_x = Array.fold_left Float.max row.(0) row in\n        let denom =\n          Array.fold_left (fun acc v -> acc +. Float.exp (v -. max_x)) 0. row\n        in\n        Float.log denom +. 
max_x)\n      rows\n  in\n  check_t ~eps:1e-6 \"logsumexp axis\" [| 2 |] expected result\n\nlet test_logmeanexp_basic () =\n  let input = Nx.create Nx.float32 [| 3 |] [| 1.; 2.; 3. |] in\n  let result = Nx.logmeanexp input in\n  let data = [| 1.; 2.; 3. |] in\n  let max_x = Array.fold_left Float.max data.(0) data in\n  let denom =\n    Array.fold_left (fun acc v -> acc +. Float.exp (v -. max_x)) 0. data\n  in\n  let log_sum = Float.log denom +. max_x in\n  let expected = log_sum -. Float.log (float_of_int (Array.length data)) in\n  check_t ~eps:1e-6 \"logmeanexp basic\" [||] [| expected |] result\n\nlet test_logmeanexp_axis () =\n  let input = Nx.create Nx.float32 [| 2; 2 |] [| 1.; 2.; 3.; 4. |] in\n  let result = Nx.logmeanexp ~axes:[ 1 ] input in\n  let rows = [| [| 1.; 2. |]; [| 3.; 4. |] |] in\n  let expected =\n    Array.map\n      (fun row ->\n        let max_x = Array.fold_left Float.max row.(0) row in\n        let denom =\n          Array.fold_left (fun acc v -> acc +. Float.exp (v -. max_x)) 0. row\n        in\n        let log_sum = Float.log denom +. max_x in\n        log_sum -. Float.log (float_of_int (Array.length row)))\n      rows\n  in\n  check_t ~eps:1e-6 \"logmeanexp axis\" [| 2 |] expected result\n\nlet test_standardize_global () =\n  let input = Nx.create Nx.float32 [| 2; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6. |] in\n  let standardized = Nx.standardize input in\n  let mean = Nx.item [] (Nx.mean standardized) in\n  let variance = Nx.item [] (Nx.var standardized) in\n  equal ~msg:\"standardize mean ~ 0\" (float 1e-5) 0. mean;\n  equal ~msg:\"standardize var ~ 1\" (float 1e-4) 1. variance\n\nlet test_standardize_axes_with_params () =\n  let input = Nx.create Nx.float32 [| 2; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6. 
|] in\n  let axes = [ 0 ] in\n  let mean = Nx.mean ~axes ~keepdims:true input in\n  let variance = Nx.var ~axes ~keepdims:true input in\n  let expected =\n    let eps = 1e-5 in\n    let denom = Nx.sqrt (Nx.add variance (Nx.scalar Nx.float32 eps)) in\n    Nx.div (Nx.sub input mean) denom\n  in\n  let result = Nx.standardize ~axes ~mean ~variance input in\n  check_nx ~epsilon:1e-6 \"standardize with params\" expected result;\n  let auto = Nx.standardize ~axes input in\n  check_nx ~epsilon:1e-6 \"standardize axes\" expected auto\n\nlet log_tests =\n  [\n    test \"log_softmax basic\" test_log_softmax_basic;\n    test \"log_softmax scale\" test_log_softmax_with_scale;\n    test \"logsumexp basic\" test_logsumexp_basic;\n    test \"logsumexp axis\" test_logsumexp_axis;\n    test \"logmeanexp basic\" test_logmeanexp_basic;\n    test \"logmeanexp axis\" test_logmeanexp_axis;\n  ]\n\nlet standardize_tests =\n  [\n    test \"standardize global\" test_standardize_global;\n    test \"standardize axes with params\" test_standardize_axes_with_params;\n  ]\n\n(* Test Suite Organization *)\n\nlet suite =\n  [\n    group \"Add Edge Cases\" add_edge_cases;\n    group \"Sub Edge Cases\" sub_edge_cases;\n    group \"Div Edge Cases\" div_edge_cases;\n    group \"Pow Edge Cases\" pow_edge_cases;\n    group \"Math Edge Cases\" math_edge_cases;\n    group \"Comparison Edge Cases\" comparison_edge_cases;\n    group \"Reduction Edge Cases\" reduction_edge_cases;\n    group \"Rounding Edge Cases\" rounding_edge_cases;\n    group \"Cumulative\" cumulative_tests;\n    group \"Bitwise Edge Cases\" bitwise_edge_cases;\n    group \"Log\" log_tests;\n    group \"Standardize\" standardize_tests;\n  ]\n\nlet () = run \"Nx Ops\" suite\n"
  },
  {
    "path": "packages/nx/test/test_nx_rng.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Nx\nopen Windtrap\n\nlet test_key_creation () =\n  let key1 = Rng.key 42 in\n  let key2 = Rng.key 42 in\n  let key3 = Rng.key 43 in\n  equal ~msg:\"same seed produces same key\" int (Rng.to_int key1)\n    (Rng.to_int key2);\n  equal ~msg:\"different seeds produce different keys\" bool true\n    (Rng.to_int key1 <> Rng.to_int key3)\n\nlet test_key_splitting () =\n  let key = Rng.key 42 in\n  let keys = Rng.split key in\n  equal ~msg:\"default split produces 2 keys\" int 2 (Array.length keys);\n\n  let keys3 = Rng.split ~n:3 key in\n  equal ~msg:\"split with n=3 produces 3 keys\" int 3 (Array.length keys3);\n\n  (* Check keys are different *)\n  equal ~msg:\"split keys are different\" bool true\n    (Rng.to_int keys.(0) <> Rng.to_int keys.(1));\n\n  (* Check deterministic *)\n  let keys2 = Rng.split key in\n  equal ~msg:\"split is deterministic\" int\n    (Rng.to_int keys.(0))\n    (Rng.to_int keys2.(0))\n\nlet test_fold_in () =\n  let key = Rng.key 42 in\n  let key1 = Rng.fold_in key 1 in\n  let key2 = Rng.fold_in key 2 in\n  let key1_again = Rng.fold_in key 1 in\n\n  equal ~msg:\"fold_in with different data produces different keys\" bool true\n    (Rng.to_int key1 <> Rng.to_int key2);\n  equal ~msg:\"fold_in is deterministic\" int (Rng.to_int key1)\n    (Rng.to_int key1_again)\n\nlet test_rand () =\n  let shape = [| 3; 4 |] in\n  let t = Rng.run ~seed:42 (fun () -> rand float32 shape) in\n\n  equal ~msg:\"rand produces correct shape\" (array int) shape (Nx.shape t);\n\n  (* Check values are in [0, 1) *)\n  let values = Nx.to_array (Nx.reshape [| 12 |] t) in\n  Array.iter\n    (fun v -> equal ~msg:\"rand values in [0, 1)\" bool true (v >= 0. 
&& v < 1.))\n    values;\n\n  (* Check deterministic *)\n  let t2 = Rng.run ~seed:42 (fun () -> rand float32 shape) in\n  let is_equal = Nx.all (Nx.equal t t2) in\n  let is_equal_val = Nx.to_array is_equal in\n  equal ~msg:\"rand is deterministic\" bool true is_equal_val.(0)\n\nlet test_randn () =\n  let shape = [| 100 |] in\n  let t = Rng.run ~seed:42 (fun () -> randn float32 shape) in\n\n  equal ~msg:\"randn produces correct shape\" (array int) shape (Nx.shape t);\n\n  (* Check roughly normal distribution (mean ~0, std ~1) *)\n  let values = Nx.to_array t in\n  let mean =\n    Array.fold_left ( +. ) 0. values /. float_of_int (Array.length values)\n  in\n  let variance =\n    Array.fold_left (fun acc v -> acc +. ((v -. mean) ** 2.)) 0. values\n    /. float_of_int (Array.length values)\n  in\n  let std = Stdlib.sqrt variance in\n\n  equal ~msg:\"randn mean ~0\" (float 0.2) 0. mean;\n  equal ~msg:\"randn std ~1\" (float 0.3) 1. std\n\nlet test_randint () =\n  let shape = [| 10 |] in\n  let t = Rng.run ~seed:42 (fun () -> randint Nx.int32 ~high:15 shape 5) in\n\n  equal ~msg:\"randint produces correct shape\" (array int) shape (Nx.shape t);\n\n  (* Check values are in [min, max) *)\n  let values = Nx.to_array t in\n  Array.iter\n    (fun v ->\n      let v = Int32.to_int v in\n      equal ~msg:\"randint values in [5, 15)\" bool true (v >= 5 && v < 15))\n    values\n\nlet test_bernoulli () =\n  let shape = [| 1000 |] in\n  let p = 0.3 in\n  let t = Rng.run ~seed:42 (fun () -> bernoulli ~p shape) in\n\n  equal ~msg:\"bernoulli produces correct shape\" (array int) shape (Nx.shape t);\n  let t_int = astype uint8 t in\n  (* Check proportion roughly matches p *)\n  let values = Nx.to_array t_int in\n  let ones =\n    Array.fold_left (fun acc v -> acc + if v > 0 then 1 else 0) 0 values\n  in\n  let prop = float_of_int ones /. 
float_of_int (Array.length values) in\n  equal ~msg:\"bernoulli proportion ~p\" (float 0.05) p prop\n\nlet test_shuffle_preserves_shape () =\n  let shape = [| 6; 4 |] in\n  let data =\n    Array.init (shape.(0) * shape.(1)) (fun i -> float_of_int (i + 1))\n  in\n  let x = Nx.create float32 shape data in\n  let shuffled = Rng.run ~seed:7 (fun () -> shuffle x) in\n\n  equal ~msg:\"shuffle preserves leading axis\" (array int) shape\n    (Nx.shape shuffled);\n\n  let flatten t =\n    let dims = Nx.shape t in\n    let total = Array.fold_left ( * ) 1 dims in\n    let reshaped = Nx.reshape [| total |] t in\n    Nx.to_array reshaped\n  in\n  let orig_flat = flatten x in\n  let shuffled_flat = flatten shuffled in\n\n  let sorted_orig = Array.copy orig_flat in\n  let sorted_shuffled = Array.copy shuffled_flat in\n  Array.sort compare sorted_orig;\n  Array.sort compare sorted_shuffled;\n  equal ~msg:\"shuffle preserves multiset\"\n    (array (float 0.0))\n    sorted_orig sorted_shuffled;\n\n  let shuffled_again = Rng.run ~seed:7 (fun () -> shuffle x) in\n  let equality = Nx.equal shuffled shuffled_again |> Nx.all |> Nx.to_array in\n  equal ~msg:\"shuffle deterministic with same seed\" bool true equality.(0)\n\nlet test_truncated_normal () =\n  let shape = [| 100 |] in\n  let lower = -1.5 in\n  let upper = 2.0 in\n  let t =\n    Rng.run ~seed:42 (fun () -> truncated_normal float32 ~lower ~upper shape)\n  in\n\n  equal ~msg:\"truncated_normal produces correct shape\" (array int) shape\n    (Nx.shape t);\n\n  (* Check all values are within bounds *)\n  let values = Nx.to_array t in\n  Array.iter\n    (fun v ->\n      equal\n        ~msg:\n          (Printf.sprintf \"truncated_normal values in [%.1f, %.1f]: %.3f\" lower\n             upper v)\n        bool true\n        (v >= lower && v <= upper))\n    values\n\nlet test_truncated_normal_distribution () =\n  let shape = [| 20_000 |] in\n  let lower = -0.75 in\n  let upper = 1.25 in\n  let samples =\n    Rng.run ~seed:123 (fun () 
-> truncated_normal float32 ~lower ~upper shape)\n  in\n\n  equal ~msg:\"truncated_normal produces correct shape\" (array int) shape\n    (Nx.shape samples);\n\n  let values = Nx.to_array samples in\n  let total = Array.length values in\n  let boundary_hits =\n    Array.fold_left\n      (fun acc v ->\n        if Float.abs (v -. lower) < 1e-6 || Float.abs (v -. upper) < 1e-6 then\n          acc + 1\n        else acc)\n      0 values\n  in\n\n  equal\n    ~msg:\n      (Printf.sprintf\n         \"truncated normal rarely clips to bounds (%d / %d clipped)\"\n         boundary_hits total)\n    bool true\n    (boundary_hits < total / 1000);\n\n  let mean = Array.fold_left ( +. ) 0. values /. float_of_int total in\n  equal ~msg:\"truncated normal mean lies within interval\" bool true\n    (mean > lower && mean < upper)\n\nlet test_categorical () =\n  (* Test with simple 1D logits: [0.0, 1.0, 2.0] *)\n  (* Expected probabilities after softmax: [0.090, 0.245, 0.665] approximately *)\n  let logits = Nx.create float32 [| 3 |] [| 0.0; 1.0; 2.0 |] in\n  let samples = Rng.run ~seed:42 (fun () -> categorical logits) in\n\n  (* Check output shape *)\n  let output_shape = Nx.shape samples in\n  equal ~msg:\"categorical produces correct shape\" (array int) [||] output_shape;\n\n  (* Check that output is a scalar int32 *)\n  let sample_val = Nx.to_array samples in\n  equal ~msg:\"categorical produces single value\" int 1 (Array.length sample_val);\n\n  (* Check value is in valid range [0, 2] *)\n  let sample_idx = Int32.to_int sample_val.(0) in\n  equal ~msg:\"categorical value in valid range\" bool true\n    (sample_idx >= 0 && sample_idx <= 2);\n\n  (* Test determinism *)\n  let samples2 = Rng.run ~seed:42 (fun () -> categorical logits) in\n  let is_equal = Nx.all (Nx.equal samples samples2) in\n  let is_equal_val = Nx.to_array is_equal in\n  equal ~msg:\"categorical is deterministic\" bool true is_equal_val.(0);\n\n  (* Test with Float64 *)\n  let logits64 = Nx.create float64 [| 3 
|] [| 0.0; 1.0; 2.0 |] in\n  let samples64 = Rng.run ~seed:42 (fun () -> categorical logits64) in\n  let is_equal64 = Nx.all (Nx.equal samples samples64) in\n  let is_equal_val64 = Nx.to_array is_equal64 in\n  equal ~msg:\"categorical is type agnostic\" bool true is_equal_val64.(0)\n\nlet test_categorical_2d () =\n  (* Test with 2D logits: [[0.0, 1.0], [2.0, 0.0]] *)\n  (* Expected probabilities after softmax: [[0.269, 0.731], [0.881, 0.119]] approximately *)\n  let logits = Nx.create float32 [| 2; 2 |] [| 0.0; 1.0; 2.0; 0.0 |] in\n  let samples = Rng.run ~seed:42 (fun () -> categorical logits) in\n\n  (* Check output shape (should be [2] - one sample per row) *)\n  let output_shape = Nx.shape samples in\n  equal ~msg:\"categorical 2D produces correct shape\" (array int) [| 2 |]\n    output_shape;\n\n  (* Check values are in valid range [0, 1] for each row *)\n  let sample_vals = Nx.to_array samples in\n  equal ~msg:\"categorical 2D produces 2 values\" int 2 (Array.length sample_vals);\n\n  Array.iter\n    (fun v ->\n      let idx = Int32.to_int v in\n      equal ~msg:\"categorical 2D value in valid range\" bool true\n        (idx >= 0 && idx <= 1))\n    sample_vals\n\nlet test_categorical_axis_handling () =\n  (* 2D logits: shape [2; 3] Row 0 -> [0.0, 1.0, 2.0] Row 1 -> [2.0, 0.5, -1.0]\n     This ensures all probabilities differ. 
*)\n  let logits =\n    Nx.create float32 [| 2; 3 |] [| 0.0; 1.0; 2.0; 2.0; 0.5; -1.0 |]\n  in\n\n  (* axis=1 -> sample across columns for each row -> shape [2] *)\n  let samples_axis_1 =\n    Rng.run ~seed:42 (fun () -> categorical ~axis:1 logits)\n  in\n\n  (* axis=-1 -> equivalent to axis=1 -> shape [2] *)\n  let samples_axis_neg_1 =\n    Rng.run ~seed:42 (fun () -> categorical ~axis:(-1) logits)\n  in\n\n  (* axis=0 -> sample across rows for each column -> shape [3] *)\n  let samples_axis_0 =\n    Rng.run ~seed:42 (fun () -> categorical ~axis:0 logits)\n  in\n\n  (* Check shape for axis=1 *)\n  let shape_axis_1 = Nx.shape samples_axis_1 in\n  equal ~msg:\"categorical axis=1 produces correct shape\" (array int) [| 2 |]\n    shape_axis_1;\n\n  (* Check shape for axis=-1 (should match axis=1) *)\n  let shape_axis_neg_1 = Nx.shape samples_axis_neg_1 in\n  equal ~msg:\"categorical axis=-1 matches axis=1 shape\" (array int) [| 2 |]\n    shape_axis_neg_1;\n\n  (* Check shape for axis=0 *)\n  let shape_axis_0 = Nx.shape samples_axis_0 in\n  equal ~msg:\"categorical axis=0 produces correct shape\" (array int) [| 3 |]\n    shape_axis_0;\n\n  (* Check that axis=1 and axis=-1 give identical results *)\n  let is_equal = Nx.all (Nx.equal samples_axis_1 samples_axis_neg_1) in\n  let is_equal_val = Nx.to_array is_equal in\n  equal ~msg:\"categorical axis=-1 behaves like axis=1\" bool true\n    is_equal_val.(0);\n\n  (* Sanity check: ensure sampled indices are in valid range *)\n  let vals_axis_0 = Nx.to_array samples_axis_0 in\n  Array.iter\n    (fun i ->\n      equal ~msg:\"axis=0 value in valid range\" bool true\n        (Int32.to_int i >= 0 && Int32.to_int i < 2))\n    vals_axis_0;\n\n  let vals_axis_1 = Nx.to_array samples_axis_1 in\n  Array.iter\n    (fun i ->\n      equal ~msg:\"axis=1 value in valid range\" bool true\n        (Int32.to_int i >= 0 && Int32.to_int i < 3))\n    vals_axis_1\n\nlet test_categorical_shape_prefix_axis () =\n  let logits =\n    Nx.create 
float64 [| 2; 3; 4 |]\n      [|\n        0.0;\n        0.5;\n        1.0;\n        1.5;\n        2.0;\n        2.5;\n        3.0;\n        -0.5;\n        0.25;\n        1.25;\n        -1.0;\n        0.75;\n        -0.25;\n        0.4;\n        1.8;\n        -1.5;\n        0.2;\n        1.1;\n        0.3;\n        -0.8;\n        0.6;\n        1.4;\n        -0.2;\n        0.9;\n      |]\n  in\n\n  let prefix_shape = [| 5; 6 |] in\n  let samples =\n    Rng.run ~seed:314 (fun () ->\n        categorical ~shape:prefix_shape ~axis:(-2) logits)\n  in\n\n  let expected_shape = [| 5; 6; 2; 4 |] in\n  equal ~msg:\"categorical shape prefix keeps axis semantics\" (array int)\n    expected_shape (Nx.shape samples);\n\n  let values = Nx.to_array samples |> Array.map Int32.to_int in\n  Array.iter\n    (fun v ->\n      equal ~msg:\"categorical indices within axis range\" bool true\n        (v >= 0 && v < 3))\n    values\n\nlet test_categorical_distribution () =\n  let logits = Nx.create float32 [| 3 |] [| 0.0; 1.0; 2.0 |] in\n\n  let n_samples = 20000 in\n  let inds =\n    Rng.run ~seed:123 (fun () -> categorical ~shape:[| n_samples |] logits)\n  in\n\n  equal ~msg:\"categorical produces correct shape\" (array int) [| n_samples |]\n    (Nx.shape inds);\n\n  let values = Nx.to_array inds |> Array.map Int32.to_int in\n\n  (* Histogram counts *)\n  let n_classes = 3 in\n  let counts = Array.make n_classes 0 in\n  Array.iter (fun v -> counts.(v) <- counts.(v) + 1) values;\n\n  (* Compute softmax probabilities from logits_arr *)\n  let logits_arr = [| 0.0; 1.0; 2.0 |] in\n  let max_logit =\n    Array.fold_left\n      (fun acc x -> if x > acc then x else acc)\n      neg_infinity logits_arr\n  in\n  let exps = Array.map (fun x -> Stdlib.exp (x -. max_logit)) logits_arr in\n  let sum_exps = Array.fold_left ( +. ) 0. exps in\n  let probs = Array.map (fun e -> e /. 
sum_exps) exps in\n\n  (* Check each bucket is within a reasonable statistical tolerance *)\n  Array.iteri\n    (fun i p ->\n      let prop = float_of_int counts.(i) /. float_of_int n_samples in\n      let se = Stdlib.sqrt (p *. (1. -. p) /. float_of_int n_samples) in\n      let tol = Stdlib.max (4. *. se) 0.01 in\n      equal\n        ~msg:(Printf.sprintf \"categorical bucket %d ~ p\" i)\n        (float tol) p prop)\n    probs\n\nlet () =\n  run \"Nx.Rng\"\n    [\n      group \"key\"\n        [\n          test \"creation\" test_key_creation;\n          test \"splitting\" test_key_splitting;\n          test \"fold_in\" test_fold_in;\n        ];\n      group \"sampling\"\n        [\n          test \"rand\" test_rand;\n          test \"randn\" test_randn;\n          test \"randint\" test_randint;\n          test \"bernoulli\" test_bernoulli;\n          test \"shuffle_preserves_shape\" test_shuffle_preserves_shape;\n          test \"truncated_normal\" test_truncated_normal;\n          test \"truncated_normal_distribution\"\n            test_truncated_normal_distribution;\n          test \"categorical\" test_categorical;\n          test \"categorical_2d\" test_categorical_2d;\n          test \"categorical_axis_handling\" test_categorical_axis_handling;\n          test \"categorical_shape_prefix_axis\"\n            test_categorical_shape_prefix_axis;\n          test \"categorical_distribution\" test_categorical_distribution;\n        ];\n    ]\n"
  },
  {
    "path": "packages/nx/test/test_nx_sanity.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Sanity tests for Nx - quick smoke test for every API function *)\n\nopen Windtrap\nopen Test_nx_support\n\n(* Test helper to create simple test data *)\nlet test_array = [| 1.; 2.; 3.; 4.; 5.; 6. |]\nlet shape_2x3 = [| 2; 3 |]\n\nlet creation_tests =\n  [\n    test \"create\" (fun () ->\n        Nx.create Nx.float32 shape_2x3 test_array\n        |> check_t \"create\" shape_2x3 test_array);\n    test \"init\" (fun () ->\n        let t =\n          Nx.init Nx.float32 [| 2; 2 |] (fun indices ->\n              float_of_int (indices.(0) + indices.(1)))\n        in\n        check_t \"init\" [| 2; 2 |] [| 0.; 1.; 1.; 2. |] t);\n    test \"empty\" (fun () ->\n        Nx.empty Nx.float32 shape_2x3 |> check_shape \"empty shape\" shape_2x3);\n    test \"full\" (fun () ->\n        Nx.full Nx.float32 shape_2x3 7.0\n        |> check_t \"full\" shape_2x3 [| 7.; 7.; 7.; 7.; 7.; 7. |]);\n    test \"ones\" (fun () ->\n        Nx.ones Nx.float32 shape_2x3\n        |> check_t \"ones\" shape_2x3 [| 1.; 1.; 1.; 1.; 1.; 1. |]);\n    test \"zeros\" (fun () ->\n        Nx.zeros Nx.float32 shape_2x3\n        |> check_t \"zeros\" shape_2x3 [| 0.; 0.; 0.; 0.; 0.; 0. |]);\n    test \"ones_like\" (fun () ->\n        let ref_t = Nx.create Nx.float32 shape_2x3 test_array in\n        Nx.ones_like ref_t\n        |> check_t \"ones_like\" shape_2x3 [| 1.; 1.; 1.; 1.; 1.; 1. |]);\n    test \"zeros_like\" (fun () ->\n        let ref_t = Nx.create Nx.float32 shape_2x3 test_array in\n        Nx.zeros_like ref_t\n        |> check_t \"zeros_like\" shape_2x3 [| 0.; 0.; 0.; 0.; 0.; 0. 
|]);\n    test \"empty_like\" (fun () ->\n        let ref_t = Nx.create Nx.float32 shape_2x3 test_array in\n        Nx.empty_like ref_t |> check_shape \"empty_like shape\" shape_2x3);\n    test \"full_like\" (fun () ->\n        let ref_t = Nx.create Nx.float32 shape_2x3 test_array in\n        Nx.full_like ref_t 9.0\n        |> check_t \"full_like\" shape_2x3 [| 9.; 9.; 9.; 9.; 9.; 9. |]);\n    test \"scalar\" (fun () ->\n        Nx.scalar Nx.float32 42.0 |> check_t \"scalar\" [||] [| 42.0 |]);\n    test \"scalar_like\" (fun () ->\n        let ref_t = Nx.create Nx.float32 shape_2x3 test_array in\n        Nx.scalar_like ref_t 5.0 |> check_t \"scalar_like\" [||] [| 5.0 |]);\n    test \"eye\" (fun () ->\n        Nx.eye Nx.float32 3\n        |> check_t \"eye\" [| 3; 3 |] [| 1.; 0.; 0.; 0.; 1.; 0.; 0.; 0.; 1. |]);\n    test \"identity\" (fun () ->\n        Nx.identity Nx.float32 3\n        |> check_t \"identity\" [| 3; 3 |]\n             [| 1.; 0.; 0.; 0.; 1.; 0.; 0.; 0.; 1. |]);\n    test \"copy\" (fun () ->\n        let t = Nx.create Nx.float32 shape_2x3 test_array in\n        Nx.copy t |> check_t \"copy\" shape_2x3 test_array);\n    test \"contiguous\" (fun () ->\n        let t = Nx.create Nx.float32 shape_2x3 test_array in\n        Nx.contiguous t |> check_t \"contiguous\" shape_2x3 test_array);\n  ]\n\nlet range_generation_tests =\n  [\n    test \"arange\" (fun () ->\n        let t = Nx.arange Nx.int32 0 5 1 in\n        check_t \"arange\" [| 5 |] [| 0l; 1l; 2l; 3l; 4l |] t);\n    test \"arange_f\" (fun () ->\n        Nx.arange_f Nx.float32 0.0 1.0 0.25\n        |> check_t \"arange_f\" [| 4 |] [| 0.0; 0.25; 0.5; 0.75 |]);\n    test \"linspace\" (fun () ->\n        let t = Nx.linspace Nx.float32 0.0 1.0 5 in\n        check_t ~eps:1e-6 \"linspace\" [| 5 |] [| 0.0; 0.25; 0.5; 0.75; 1.0 |] t);\n    test \"logspace\" (fun () ->\n        let t = Nx.logspace Nx.float32 0.0 2.0 3 in\n        check_t ~eps:1e-4 \"logspace\" [| 3 |] [| 1.0; 10.0; 100.0 |] t);\n    test 
\"geomspace\" (fun () ->\n        let t = Nx.geomspace Nx.float32 1.0 100.0 3 in\n        check_t ~eps:1e-4 \"geomspace\" [| 3 |] [| 1.0; 10.0; 100.0 |] t);\n  ]\n\nlet property_access_tests =\n  [\n    test \"data\" (fun () ->\n        let t = Nx.create Nx.float32 shape_2x3 test_array in\n        let data = Nx.data t in\n        equal ~msg:\"data[0]\" (float 1e-6) 1.0 (Nx_buffer.get data 0);\n        equal ~msg:\"data[5]\" (float 1e-6) 6.0 (Nx_buffer.get data 5));\n    test \"shape\" (fun () ->\n        let t = Nx.create Nx.float32 shape_2x3 test_array in\n        equal ~msg:\"shape\" (array int) shape_2x3 (Nx.shape t));\n    test \"dtype\" (fun () ->\n        let t = Nx.create Nx.float32 shape_2x3 test_array in\n        equal ~msg:\"dtype is float32\" bool true (Nx.dtype t = Nx.float32));\n    test \"strides\" (fun () ->\n        let t = Nx.create Nx.float32 shape_2x3 test_array in\n        let strides = Nx.strides t in\n        equal ~msg:\"strides length\" int 2 (Array.length strides);\n        equal ~msg:\"stride 0\" int 12 strides.(0);\n        equal ~msg:\"stride 1\" int 4 strides.(1));\n    test \"stride\" (fun () ->\n        let t = Nx.create Nx.float32 shape_2x3 test_array in\n        equal ~msg:\"stride 0\" int 12 (Nx.stride 0 t);\n        equal ~msg:\"stride 1\" int 4 (Nx.stride 1 t));\n    test \"dims\" (fun () ->\n        let t = Nx.create Nx.float32 shape_2x3 test_array in\n        let d = Nx.dims t in\n        equal ~msg:\"dims length\" int 2 (Array.length d);\n        equal ~msg:\"dims[0]\" int 2 d.(0);\n        equal ~msg:\"dims[1]\" int 3 d.(1));\n    test \"dim\" (fun () ->\n        let t = Nx.create Nx.float32 shape_2x3 test_array in\n        equal ~msg:\"dim 0\" int 2 (Nx.dim 0 t);\n        equal ~msg:\"dim 1\" int 3 (Nx.dim 1 t));\n    test \"ndim\" (fun () ->\n        let t = Nx.create Nx.float32 shape_2x3 test_array in\n        equal ~msg:\"ndim\" int 2 (Nx.ndim t));\n    test \"itemsize\" (fun () ->\n        let t = Nx.create Nx.float32 
shape_2x3 test_array in\n        equal ~msg:\"itemsize\" int 4 (Nx.itemsize t));\n    test \"size\" (fun () ->\n        let t = Nx.create Nx.float32 shape_2x3 test_array in\n        equal ~msg:\"size\" int 6 (Nx.size t));\n    test \"numel\" (fun () ->\n        let t = Nx.create Nx.float32 shape_2x3 test_array in\n        equal ~msg:\"numel\" int 6 (Nx.numel t));\n    test \"nbytes\" (fun () ->\n        let t = Nx.create Nx.float32 shape_2x3 test_array in\n        equal ~msg:\"nbytes\" int 24 (Nx.nbytes t));\n    test \"offset\" (fun () ->\n        let t = Nx.create Nx.float32 shape_2x3 test_array in\n        equal ~msg:\"offset\" int 0 (Nx.offset t));\n  ]\n\nlet data_manipulation_tests =\n  [\n    test \"blit\" (fun () ->\n        let src = Nx.ones Nx.float32 shape_2x3 in\n        let dst = Nx.zeros Nx.float32 shape_2x3 in\n        Nx.blit src dst;\n        check_t \"blit\" shape_2x3 [| 1.; 1.; 1.; 1.; 1.; 1. |] dst);\n    test \"fill copy\" (fun () ->\n        let t = Nx.zeros Nx.float32 shape_2x3 in\n        let filled = Nx.fill 5.0 t in\n        check_t \"fill copy\" shape_2x3 [| 5.; 5.; 5.; 5.; 5.; 5. |] filled;\n        check_t \"fill leaves source\" shape_2x3 [| 0.; 0.; 0.; 0.; 0.; 0. |] t);\n  ]\n\nlet element_wise_binary_tests =\n  [\n    test \"add\" (fun () ->\n        let a = Nx.full Nx.float32 shape_2x3 3.0 in\n        let b = Nx.full Nx.float32 shape_2x3 2.0 in\n        Nx.add a b |> check_t \"add\" shape_2x3 [| 5.; 5.; 5.; 5.; 5.; 5. |]);\n    test \"add_s\" (fun () ->\n        let a = Nx.full Nx.float32 shape_2x3 3.0 in\n        Nx.add_s a 5.0 |> check_t \"add_s\" shape_2x3 [| 8.; 8.; 8.; 8.; 8.; 8. |]);\n    test \"radd_s\" (fun () ->\n        let a = Nx.full Nx.float32 shape_2x3 3.0 in\n        Nx.radd_s 5.0 a\n        |> check_t \"radd_s\" shape_2x3 [| 8.; 8.; 8.; 8.; 8.; 8. 
|]);\n    test \"sub\" (fun () ->\n        let a = Nx.full Nx.float32 shape_2x3 5.0 in\n        let b = Nx.full Nx.float32 shape_2x3 2.0 in\n        Nx.sub a b |> check_t \"sub\" shape_2x3 [| 3.; 3.; 3.; 3.; 3.; 3. |]);\n    test \"sub_s\" (fun () ->\n        let a = Nx.full Nx.float32 shape_2x3 10.0 in\n        Nx.sub_s a 3.0 |> check_t \"sub_s\" shape_2x3 [| 7.; 7.; 7.; 7.; 7.; 7. |]);\n    test \"rsub_s\" (fun () ->\n        let a = Nx.full Nx.float32 shape_2x3 3.0 in\n        Nx.rsub_s 10.0 a\n        |> check_t \"rsub_s\" shape_2x3 [| 7.; 7.; 7.; 7.; 7.; 7. |]);\n    test \"mul\" (fun () ->\n        let a = Nx.full Nx.float32 shape_2x3 3.0 in\n        let b = Nx.full Nx.float32 shape_2x3 2.0 in\n        Nx.mul a b |> check_t \"mul\" shape_2x3 [| 6.; 6.; 6.; 6.; 6.; 6. |]);\n    test \"mul_s\" (fun () ->\n        let a = Nx.full Nx.float32 shape_2x3 4.0 in\n        Nx.mul_s a 3.0\n        |> check_t \"mul_s\" shape_2x3 [| 12.; 12.; 12.; 12.; 12.; 12. |]);\n    test \"rmul_s\" (fun () ->\n        let a = Nx.full Nx.float32 shape_2x3 4.0 in\n        Nx.rmul_s 3.0 a\n        |> check_t \"rmul_s\" shape_2x3 [| 12.; 12.; 12.; 12.; 12.; 12. |]);\n    test \"div\" (fun () ->\n        let a = Nx.full Nx.float32 shape_2x3 6.0 in\n        let b = Nx.full Nx.float32 shape_2x3 2.0 in\n        Nx.div a b |> check_t \"div\" shape_2x3 [| 3.; 3.; 3.; 3.; 3.; 3. |]);\n    test \"div_s\" (fun () ->\n        let a = Nx.full Nx.float32 shape_2x3 12.0 in\n        Nx.div_s a 3.0 |> check_t \"div_s\" shape_2x3 [| 4.; 4.; 4.; 4.; 4.; 4. |]);\n    test \"rdiv_s\" (fun () ->\n        let a = Nx.full Nx.float32 shape_2x3 2.0 in\n        Nx.rdiv_s 6.0 a\n        |> check_t \"rdiv_s\" shape_2x3 [| 3.; 3.; 3.; 3.; 3.; 3. |]);\n    test \"pow\" (fun () ->\n        let a = Nx.full Nx.float32 shape_2x3 2.0 in\n        let b = Nx.full Nx.float32 shape_2x3 3.0 in\n        Nx.pow a b |> check_t \"pow\" shape_2x3 [| 8.; 8.; 8.; 8.; 8.; 8. 
|]);\n    test \"pow_s\" (fun () ->\n        let a = Nx.full Nx.float32 shape_2x3 2.0 in\n        Nx.pow_s a 3.0 |> check_t \"pow_s\" shape_2x3 [| 8.; 8.; 8.; 8.; 8.; 8. |]);\n    test \"rpow_s\" (fun () ->\n        let a = Nx.full Nx.float32 shape_2x3 3.0 in\n        Nx.rpow_s 2.0 a\n        |> check_t \"rpow_s\" shape_2x3 [| 8.; 8.; 8.; 8.; 8.; 8. |]);\n    test \"mod\" (fun () ->\n        let a = Nx.full Nx.float32 shape_2x3 7.0 in\n        let b = Nx.full Nx.float32 shape_2x3 3.0 in\n        Nx.mod_ a b |> check_t \"mod\" shape_2x3 [| 1.; 1.; 1.; 1.; 1.; 1. |]);\n    test \"mod_s\" (fun () ->\n        let a = Nx.full Nx.float32 shape_2x3 7.0 in\n        Nx.mod_s a 3.0 |> check_t \"mod_s\" shape_2x3 [| 1.; 1.; 1.; 1.; 1.; 1. |]);\n    test \"rmod_s\" (fun () ->\n        let a = Nx.full Nx.float32 shape_2x3 3.0 in\n        Nx.rmod_s 7.0 a\n        |> check_t \"rmod_s\" shape_2x3 [| 1.; 1.; 1.; 1.; 1.; 1. |]);\n    test \"maximum\" (fun () ->\n        let a = Nx.full Nx.float32 shape_2x3 3.0 in\n        let b = Nx.full Nx.float32 shape_2x3 5.0 in\n        Nx.maximum a b\n        |> check_t \"maximum\" shape_2x3 [| 5.; 5.; 5.; 5.; 5.; 5. |]);\n    test \"maximum_s\" (fun () ->\n        let a = Nx.full Nx.float32 shape_2x3 3.0 in\n        Nx.maximum_s a 5.0\n        |> check_t \"maximum_s\" shape_2x3 [| 5.; 5.; 5.; 5.; 5.; 5. |]);\n    test \"rmaximum_s\" (fun () ->\n        let a = Nx.full Nx.float32 shape_2x3 3.0 in\n        Nx.rmaximum_s 5.0 a\n        |> check_t \"rmaximum_s\" shape_2x3 [| 5.; 5.; 5.; 5.; 5.; 5. |]);\n    test \"minimum\" (fun () ->\n        let a = Nx.full Nx.float32 shape_2x3 3.0 in\n        let b = Nx.full Nx.float32 shape_2x3 5.0 in\n        Nx.minimum a b\n        |> check_t \"minimum\" shape_2x3 [| 3.; 3.; 3.; 3.; 3.; 3. |]);\n    test \"minimum_s\" (fun () ->\n        let a = Nx.full Nx.float32 shape_2x3 5.0 in\n        Nx.minimum_s a 3.0\n        |> check_t \"minimum_s\" shape_2x3 [| 3.; 3.; 3.; 3.; 3.; 3. 
|]);\n    test \"rminimum_s\" (fun () ->\n        let a = Nx.full Nx.float32 shape_2x3 5.0 in\n        Nx.rminimum_s 3.0 a\n        |> check_t \"rminimum_s\" shape_2x3 [| 3.; 3.; 3.; 3.; 3.; 3. |]);\n  ]\n\nlet comparison_tests =\n  [\n    test \"equal\" (fun () ->\n        let a = Nx.create Nx.float32 [| 3 |] [| 1.; 2.; 1. |] in\n        let b = Nx.create Nx.float32 [| 3 |] [| 1.; 3.; 1. |] in\n        Nx.equal a b |> check_t \"equal\" [| 3 |] [| true; false; true |]);\n    test \"not_equal\" (fun () ->\n        let a = Nx.create Nx.float32 [| 3 |] [| 1.; 2.; 1. |] in\n        let b = Nx.create Nx.float32 [| 3 |] [| 1.; 3.; 1. |] in\n        Nx.not_equal a b |> check_t \"not_equal\" [| 3 |] [| false; true; false |]);\n    test \"greater\" (fun () ->\n        let a = Nx.create Nx.float32 [| 3 |] [| 5.; 3.; 1. |] in\n        let b = Nx.create Nx.float32 [| 3 |] [| 3.; 3.; 2. |] in\n        Nx.greater a b |> check_t \"greater\" [| 3 |] [| true; false; false |]);\n    test \"greater_equal\" (fun () ->\n        let a = Nx.create Nx.float32 [| 3 |] [| 5.; 3.; 1. |] in\n        let b = Nx.create Nx.float32 [| 3 |] [| 3.; 3.; 2. |] in\n        Nx.greater_equal a b\n        |> check_t \"greater_equal\" [| 3 |] [| true; true; false |]);\n    test \"less\" (fun () ->\n        let a = Nx.create Nx.float32 [| 3 |] [| 1.; 3.; 5. |] in\n        let b = Nx.create Nx.float32 [| 3 |] [| 3.; 3.; 2. |] in\n        Nx.less a b |> check_t \"less\" [| 3 |] [| true; false; false |]);\n    test \"less_equal\" (fun () ->\n        let a = Nx.create Nx.float32 [| 3 |] [| 1.; 3.; 5. |] in\n        let b = Nx.create Nx.float32 [| 3 |] [| 3.; 3.; 2. |] in\n        Nx.less_equal a b\n        |> check_t \"less_equal\" [| 3 |] [| true; true; false |]);\n  ]\n\nlet element_wise_unary_tests =\n  [\n    test \"neg\" (fun () ->\n        let a = Nx.create Nx.float32 [| 3 |] [| 1.; -2.; 3. |] in\n        Nx.neg a |> check_t \"neg\" [| 3 |] [| -1.; 2.; -3. 
|]);\n    test \"abs\" (fun () ->\n        let a = Nx.create Nx.float32 [| 3 |] [| -3.; 0.; 5. |] in\n        Nx.abs a |> check_t \"abs\" [| 3 |] [| 3.; 0.; 5. |]);\n    test \"sign\" (fun () ->\n        let a = Nx.create Nx.float32 [| 4 |] [| -5.; 0.; 3.; -0. |] in\n        Nx.sign a |> check_t \"sign\" [| 4 |] [| -1.; 0.; 1.; 0. |]);\n    test \"square\" (fun () ->\n        let a = Nx.create Nx.float32 [| 3 |] [| -2.; 3.; 4. |] in\n        Nx.square a |> check_t \"square\" [| 3 |] [| 4.; 9.; 16. |]);\n    test \"sqrt\" (fun () ->\n        let a = Nx.create Nx.float32 [| 3 |] [| 4.; 9.; 16. |] in\n        Nx.sqrt a |> check_t \"sqrt\" [| 3 |] [| 2.; 3.; 4. |]);\n    test \"rsqrt\" (fun () ->\n        let a = Nx.create Nx.float32 [| 3 |] [| 1.; 4.; 16. |] in\n        Nx.rsqrt a |> check_t \"rsqrt\" [| 3 |] [| 1.0; 0.5; 0.25 |]);\n    test \"recip\" (fun () ->\n        let a = Nx.create Nx.float32 [| 3 |] [| 1.; 2.; 4. |] in\n        Nx.recip a |> check_t \"recip\" [| 3 |] [| 1.0; 0.5; 0.25 |]);\n    test \"exp\" (fun () ->\n        let a = Nx.create Nx.float32 [| 3 |] [| 0.; 1.; 2. |] in\n        Nx.exp a\n        |> check_t ~eps:1e-6 \"exp\" [| 3 |] [| 1.0; 2.718282; 7.389056 |]);\n    test \"exp2\" (fun () ->\n        let a = Nx.create Nx.float32 [| 3 |] [| 0.; 1.; 3. |] in\n        Nx.exp2 a |> check_t \"exp2\" [| 3 |] [| 1.; 2.; 8. |]);\n    test \"log\" (fun () ->\n        let a = Nx.create Nx.float32 [| 2 |] [| 1.; 2.718282 |] in\n        Nx.log a |> check_t ~eps:1e-6 \"log\" [| 2 |] [| 0.0; 1.0 |]);\n    test \"log2\" (fun () ->\n        let a = Nx.create Nx.float32 [| 3 |] [| 1.; 2.; 8. |] in\n        Nx.log2 a |> check_t \"log2\" [| 3 |] [| 0.; 1.; 3. |]);\n    test \"sin\" (fun () ->\n        let pi = 3.14159265359 in\n        let a = Nx.create Nx.float32 [| 3 |] [| 0.; pi /. 
2.; pi |] in\n        Nx.sin a |> check_t ~eps:1e-6 \"sin\" [| 3 |] [| 0.0; 1.0; 0.0 |]);\n    test \"cos\" (fun () ->\n        let pi = 3.14159265359 in\n        let a = Nx.create Nx.float32 [| 3 |] [| 0.; pi /. 2.; pi |] in\n        Nx.cos a |> check_t ~eps:1e-6 \"cos\" [| 3 |] [| 1.0; 0.0; -1.0 |]);\n    test \"tan\" (fun () ->\n        let pi = 3.14159265359 in\n        let a = Nx.create Nx.float32 [| 2 |] [| 0.; pi /. 4. |] in\n        Nx.tan a |> check_t ~eps:1e-6 \"tan\" [| 2 |] [| 0.0; 1.0 |]);\n    test \"asin\" (fun () ->\n        let a = Nx.create Nx.float32 [| 3 |] [| 0.; 0.5; 1. |] in\n        Nx.asin a\n        |> check_t ~eps:1e-6 \"asin\" [| 3 |] [| 0.0; 0.523599; 1.570796 |]);\n    test \"acos\" (fun () ->\n        let a = Nx.create Nx.float32 [| 3 |] [| 1.; 0.5; 0. |] in\n        Nx.acos a\n        |> check_t ~eps:1e-6 \"acos\" [| 3 |] [| 0.0; 1.047198; 1.570796 |]);\n    test \"atan\" (fun () ->\n        let a = Nx.create Nx.float32 [| 3 |] [| 0.; 1.; -1. |] in\n        Nx.atan a\n        |> check_t ~eps:1e-6 \"atan\" [| 3 |] [| 0.0; 0.785398; -0.785398 |]);\n    test \"sinh\" (fun () ->\n        let a = Nx.create Nx.float32 [| 2 |] [| 0.; 1. |] in\n        Nx.sinh a |> check_t ~eps:1e-6 \"sinh\" [| 2 |] [| 0.0; 1.175201 |]);\n    test \"cosh\" (fun () ->\n        let a = Nx.create Nx.float32 [| 2 |] [| 0.; 1. |] in\n        Nx.cosh a |> check_t ~eps:1e-6 \"cosh\" [| 2 |] [| 1.0; 1.543081 |]);\n    test \"tanh\" (fun () ->\n        let a = Nx.create Nx.float32 [| 3 |] [| 0.; 1.; -1. |] in\n        Nx.tanh a\n        |> check_t ~eps:1e-6 \"tanh\" [| 3 |] [| 0.0; 0.761594; -0.761594 |]);\n    test \"asinh\" (fun () ->\n        let a = Nx.create Nx.float32 [| 2 |] [| 0.; 1. |] in\n        Nx.asinh a |> check_t ~eps:1e-6 \"asinh\" [| 2 |] [| 0.0; 0.881374 |]);\n    test \"acosh\" (fun () ->\n        let a = Nx.create Nx.float32 [| 2 |] [| 1.; 2. 
|] in\n        Nx.acosh a |> check_t ~eps:1e-6 \"acosh\" [| 2 |] [| 0.0; 1.316958 |]);\n    test \"atanh\" (fun () ->\n        let a = Nx.create Nx.float32 [| 3 |] [| 0.; 0.5; -0.5 |] in\n        Nx.atanh a\n        |> check_t ~eps:1e-6 \"atanh\" [| 3 |] [| 0.0; 0.549306; -0.549306 |]);\n    test \"round\" (fun () ->\n        let a = Nx.create Nx.float32 [| 4 |] [| 3.2; 3.7; -3.2; -3.7 |] in\n        Nx.round a |> check_t \"round\" [| 4 |] [| 3.; 4.; -3.; -4. |]);\n    test \"floor\" (fun () ->\n        let a = Nx.create Nx.float32 [| 4 |] [| 3.2; 3.7; -3.2; -3.7 |] in\n        Nx.floor a |> check_t \"floor\" [| 4 |] [| 3.; 3.; -4.; -4. |]);\n    test \"ceil\" (fun () ->\n        let a = Nx.create Nx.float32 [| 4 |] [| 3.2; 3.7; -3.2; -3.7 |] in\n        Nx.ceil a |> check_t \"ceil\" [| 4 |] [| 4.; 4.; -3.; -3. |]);\n    test \"trunc\" (fun () ->\n        let a = Nx.create Nx.float32 [| 4 |] [| 3.2; 3.7; -3.2; -3.7 |] in\n        Nx.trunc a |> check_t \"trunc\" [| 4 |] [| 3.; 3.; -3.; -3. |]);\n    test \"clip\" (fun () ->\n        let a = Nx.create Nx.float32 [| 5 |] [| -2.; 0.; 5.; 10.; 12. |] in\n        Nx.clip ~min:2.0 ~max:8.0 a\n        |> check_t \"clip\" [| 5 |] [| 2.0; 2.0; 5.0; 8.0; 8.0 |]);\n    test \"clamp\" (fun () ->\n        let a = Nx.create Nx.float32 [| 5 |] [| -2.; 0.; 5.; 10.; 12. 
|] in\n        Nx.clamp ~min:2.0 ~max:8.0 a\n        |> check_t \"clamp\" [| 5 |] [| 2.0; 2.0; 5.0; 8.0; 8.0 |]);\n    test \"lerp\" (fun () ->\n        let start_t = Nx.zeros Nx.float32 [| 3 |] in\n        let end_t = Nx.full Nx.float32 [| 3 |] 10.0 in\n        let weight = Nx.create Nx.float32 [| 3 |] [| 0.0; 0.5; 1.0 |] in\n        Nx.lerp start_t end_t weight\n        |> check_t \"lerp\" [| 3 |] [| 0.0; 5.0; 10.0 |]);\n    test \"lerp_scalar_weight\" (fun () ->\n        let start_t = Nx.zeros Nx.float32 [| 3 |] in\n        let end_t = Nx.full Nx.float32 [| 3 |] 10.0 in\n        Nx.lerp_scalar_weight start_t end_t 0.3\n        |> check_t \"lerp_scalar_weight\" [| 3 |] [| 3.0; 3.0; 3.0 |]);\n  ]\n\nlet bitwise_tests =\n  [\n    test \"bitwise_and\" (fun () ->\n        let a = Nx.create Nx.int32 [| 3 |] [| 7l; 12l; 15l |] in\n        let b = Nx.create Nx.int32 [| 3 |] [| 3l; 10l; 7l |] in\n        Nx.bitwise_and a b |> check_t \"bitwise_and\" [| 3 |] [| 3l; 8l; 7l |]);\n    test \"bitwise_or\" (fun () ->\n        let a = Nx.create Nx.int32 [| 3 |] [| 1l; 4l; 8l |] in\n        let b = Nx.create Nx.int32 [| 3 |] [| 2l; 2l; 7l |] in\n        Nx.bitwise_or a b |> check_t \"bitwise_or\" [| 3 |] [| 3l; 6l; 15l |]);\n    test \"bitwise_xor\" (fun () ->\n        let a = Nx.create Nx.int32 [| 3 |] [| 7l; 12l; 15l |] in\n        let b = Nx.create Nx.int32 [| 3 |] [| 3l; 10l; 7l |] in\n        Nx.bitwise_xor a b |> check_t \"bitwise_xor\" [| 3 |] [| 4l; 6l; 8l |]);\n    test \"bitwise_not\" (fun () ->\n        let a = Nx.create Nx.int32 [| 3 |] [| 0l; 1l; -1l |] in\n        Nx.bitwise_not a |> check_t \"bitwise_not\" [| 3 |] [| -1l; -2l; 0l |]);\n    test \"invert\" (fun () ->\n        let a = Nx.create Nx.int32 [| 3 |] [| 0l; 1l; -1l |] in\n        Nx.invert a |> check_t \"invert\" [| 3 |] [| -1l; -2l; 0l |]);\n    test \"lshift\" (fun () ->\n        let a = Nx.create Nx.int32 [| 3 |] [| 1l; 2l; 4l |] in\n        Nx.lshift a 2 |> check_t \"lshift\" [| 3 |] [| 4l; 8l; 16l 
|]);\n    test \"rshift\" (fun () ->\n        let a = Nx.create Nx.int32 [| 3 |] [| 4l; 8l; 16l |] in\n        Nx.rshift a 2 |> check_t \"rshift\" [| 3 |] [| 1l; 2l; 4l |]);\n  ]\n\nlet logical_tests =\n  [\n    test \"logical_and\" (fun () ->\n        let a = Nx.create Nx.int32 [| 4 |] [| 0l; 1l; 1l; 0l |] in\n        let b = Nx.create Nx.int32 [| 4 |] [| 0l; 0l; 1l; 1l |] in\n        Nx.logical_and a b |> check_t \"logical_and\" [| 4 |] [| 0l; 0l; 1l; 0l |]);\n    test \"logical_or\" (fun () ->\n        let a = Nx.create Nx.int32 [| 4 |] [| 0l; 1l; 1l; 0l |] in\n        let b = Nx.create Nx.int32 [| 4 |] [| 0l; 0l; 1l; 1l |] in\n        Nx.logical_or a b |> check_t \"logical_or\" [| 4 |] [| 0l; 1l; 1l; 1l |]);\n    test \"logical_xor\" (fun () ->\n        let a = Nx.create Nx.int32 [| 4 |] [| 0l; 1l; 1l; 0l |] in\n        let b = Nx.create Nx.int32 [| 4 |] [| 0l; 0l; 1l; 1l |] in\n        Nx.logical_xor a b |> check_t \"logical_xor\" [| 4 |] [| 0l; 1l; 0l; 1l |]);\n    test \"logical_not\" (fun () ->\n        let a = Nx.create Nx.int32 [| 3 |] [| 0l; 1l; 1l |] in\n        Nx.logical_not a |> check_t \"logical_not\" [| 3 |] [| 1l; 0l; 0l |]);\n  ]\n\nlet special_value_tests =\n  [\n    test \"isnan\" (fun () ->\n        let a = Nx.create Nx.float32 [| 3 |] [| 1.0; nan; 0.0 |] in\n        Nx.isnan a |> check_t \"isnan\" [| 3 |] [| false; true; false |]);\n    test \"isinf\" (fun () ->\n        let a =\n          Nx.create Nx.float32 [| 4 |] [| 1.0; infinity; neg_infinity; 0.0 |]\n        in\n        Nx.isinf a |> check_t \"isinf\" [| 4 |] [| false; true; true; false |]);\n    test \"isfinite\" (fun () ->\n        let a = Nx.create Nx.float32 [| 4 |] [| 1.0; infinity; nan; 0.0 |] in\n        Nx.isfinite a\n        |> check_t \"isfinite\" [| 4 |] [| true; false; false; true |]);\n  ]\n\nlet ternary_tests =\n  [\n    test \"where\" (fun () ->\n        let cond =\n          Nx.create Nx.bool [| 5 |] [| true; false; true; false; true |]\n        in\n        let x = 
Nx.create Nx.float32 [| 5 |] [| 1.; 2.; 3.; 4.; 5. |] in\n        let y = Nx.create Nx.float32 [| 5 |] [| 10.; 20.; 30.; 40.; 50. |] in\n        Nx.where cond x y |> check_t \"where\" [| 5 |] [| 1.; 20.; 3.; 40.; 5. |]);\n  ]\n\nlet reduction_tests =\n  [\n    test \"sum\" (fun () ->\n        let a = Nx.create Nx.float32 [| 3 |] [| 1.; 2.; 3. |] in\n        Nx.sum a |> check_t \"sum\" [||] [| 6.0 |]);\n    test \"prod\" (fun () ->\n        let a = Nx.create Nx.float32 [| 4 |] [| 1.; 2.; 3.; 4. |] in\n        Nx.prod a |> check_t \"prod\" [||] [| 24.0 |]);\n    test \"max\" (fun () ->\n        let a = Nx.create Nx.float32 [| 5 |] [| 1.; 5.; 3.; 2.; 4. |] in\n        Nx.max a |> check_t \"max\" [||] [| 5.0 |]);\n    test \"min\" (fun () ->\n        let a = Nx.create Nx.float32 [| 5 |] [| 5.; 1.; 3.; 2.; 4. |] in\n        Nx.min a |> check_t \"min\" [||] [| 1.0 |]);\n    test \"mean\" (fun () ->\n        let a = Nx.create Nx.float32 [| 4 |] [| 1.; 2.; 3.; 4. |] in\n        Nx.mean a |> check_t \"mean\" [||] [| 2.5 |]);\n    test \"var\" (fun () ->\n        let a = Nx.create Nx.float32 [| 4 |] [| 1.; 2.; 3.; 4. |] in\n        Nx.var a |> check_t \"var\" [||] [| 1.25 |]);\n    test \"std\" (fun () ->\n        let a = Nx.create Nx.float32 [| 4 |] [| 1.; 3.; 5.; 7. 
|] in\n        Nx.std a |> check_t ~eps:1e-6 \"std\" [||] [| 2.236068 |]);\n    test \"all\" (fun () ->\n        let a = Nx.create Nx.int32 [| 4 |] [| 1l; 1l; 0l; 1l |] in\n        Nx.all a |> check_t \"all with zero\" [||] [| false |];\n        let c = Nx.create Nx.int32 [| 3 |] [| 1l; 1l; 1l |] in\n        Nx.all c |> check_t \"all without zero\" [||] [| true |]);\n    test \"any\" (fun () ->\n        let a = Nx.create Nx.int32 [| 4 |] [| 0l; 0l; 1l; 0l |] in\n        Nx.any a |> check_t \"any with one\" [||] [| true |];\n        let c = Nx.create Nx.int32 [| 3 |] [| 0l; 0l; 0l |] in\n        Nx.any c |> check_t \"any all zeros\" [||] [| false |]);\n    test \"array_equal\" (fun () ->\n        let a = Nx.create Nx.float32 [| 3 |] [| 1.; 2.; 3. |] in\n        let b = Nx.create Nx.float32 [| 3 |] [| 1.; 2.; 3. |] in\n        let eq1 = Nx.array_equal a b in\n        equal ~msg:\"array_equal same\" bool true (Nx.item [] eq1);\n        let d = Nx.create Nx.float32 [| 3 |] [| 1.; 2.; 4. |] in\n        let eq2 = Nx.array_equal a d in\n        equal ~msg:\"array_equal different\" bool false (Nx.item [] eq2));\n  ]\n\nlet shape_manipulation_tests =\n  [\n    test \"reshape\" (fun () ->\n        let a = Nx.create Nx.float32 shape_2x3 test_array in\n        Nx.reshape [| 3; 2 |] a |> check_t \"reshape\" [| 3; 2 |] test_array);\n    test \"flatten\" (fun () ->\n        let a = Nx.create Nx.float32 shape_2x3 test_array in\n        Nx.flatten a |> check_t \"flatten\" [| 6 |] test_array);\n    test \"unflatten\" (fun () ->\n        let a = Nx.create Nx.float32 [| 6 |] test_array in\n        Nx.unflatten 0 [| 2; 3 |] a |> check_t \"unflatten\" [| 2; 3 |] test_array);\n    test \"ravel\" (fun () ->\n        let a = Nx.create Nx.float32 shape_2x3 test_array in\n        Nx.ravel a |> check_t \"ravel\" [| 6 |] test_array);\n    test \"squeeze\" (fun () ->\n        let a = Nx.ones Nx.float32 [| 1; 3; 1 |] in\n        Nx.squeeze a |> check_t \"squeeze\" [| 3 |] [| 1.; 1.; 1. 
|]);\n    test \"squeeze_axis\" (fun () ->\n        let a = Nx.ones Nx.float32 [| 1; 3; 1 |] in\n        Nx.squeeze_axis 0 a\n        |> check_t \"squeeze_axis\" [| 3; 1 |] [| 1.; 1.; 1. |]);\n    test \"unsqueeze\" (fun () ->\n        let a = Nx.ones Nx.float32 [| 3 |] in\n        Nx.unsqueeze ~axes:[ 0; 2 ] a\n        |> check_t \"unsqueeze\" [| 1; 3; 1 |] [| 1.; 1.; 1. |]);\n    test \"unsqueeze_axis\" (fun () ->\n        let a = Nx.ones Nx.float32 [| 3 |] in\n        Nx.unsqueeze_axis 0 a\n        |> check_t \"unsqueeze_axis\" [| 1; 3 |] [| 1.; 1.; 1. |]);\n    test \"expand_dims\" (fun () ->\n        let a = Nx.ones Nx.float32 [| 3 |] in\n        Nx.expand_dims [ 0 ] a\n        |> check_t \"expand_dims\" [| 1; 3 |] [| 1.; 1.; 1. |]);\n    test \"transpose\" (fun () ->\n        let a = Nx.create Nx.float32 shape_2x3 test_array in\n        Nx.transpose a\n        |> check_t \"transpose\" [| 3; 2 |] [| 1.; 4.; 2.; 5.; 3.; 6. |]);\n    test \"moveaxis\" (fun () ->\n        let a =\n          Nx.create Nx.float32 [| 2; 3; 4 |]\n            (Array.init 24 (fun i -> float_of_int i))\n        in\n        let b = Nx.moveaxis 0 2 a in\n        check_shape \"moveaxis shape\" [| 3; 4; 2 |] b;\n        (* Check a few values to ensure proper axis movement *)\n        let expected =\n          Nx.create Nx.float32 [| 3; 4; 2 |]\n            [|\n              0.;\n              12.;\n              1.;\n              13.;\n              2.;\n              14.;\n              3.;\n              15.;\n              4.;\n              16.;\n              5.;\n              17.;\n              6.;\n              18.;\n              7.;\n              19.;\n              8.;\n              20.;\n              9.;\n              21.;\n              10.;\n              22.;\n              11.;\n              23.;\n            |]\n        in\n        check_t \"moveaxis values\" [| 3; 4; 2 |] (Nx.to_array expected) b);\n    test \"swapaxes\" (fun () ->\n        let a =\n          
Nx.create Nx.float32 [| 2; 3; 4 |]\n            (Array.init 24 (fun i -> float_of_int i))\n        in\n        let b = Nx.swapaxes 0 2 a in\n        check_shape \"swapaxes shape\" [| 4; 3; 2 |] b;\n        (* Check a few values to ensure proper axis swapping *)\n        let expected =\n          Nx.create Nx.float32 [| 4; 3; 2 |]\n            [|\n              0.;\n              12.;\n              4.;\n              16.;\n              8.;\n              20.;\n              1.;\n              13.;\n              5.;\n              17.;\n              9.;\n              21.;\n              2.;\n              14.;\n              6.;\n              18.;\n              10.;\n              22.;\n              3.;\n              15.;\n              7.;\n              19.;\n              11.;\n              23.;\n            |]\n        in\n        check_t \"swapaxes values\" [| 4; 3; 2 |] (Nx.to_array expected) b);\n    test \"flip\" (fun () ->\n        let a = Nx.create Nx.float32 [| 5 |] [| 1.; 2.; 3.; 4.; 5. |] in\n        Nx.flip a |> check_t \"flip\" [| 5 |] [| 5.; 4.; 3.; 2.; 1. |]);\n    test \"roll\" (fun () ->\n        let a = Nx.create Nx.float32 [| 5 |] [| 1.; 2.; 3.; 4.; 5. |] in\n        Nx.roll 2 a |> check_t \"roll\" [| 5 |] [| 4.; 5.; 1.; 2.; 3. |]);\n    test \"pad\" (fun () ->\n        let a = Nx.create Nx.float32 [| 2; 2 |] [| 1.; 2.; 3.; 4. |] in\n        let b = Nx.pad [| (1, 1); (1, 1) |] 0.0 a in\n        let expected =\n          [| 0.; 0.; 0.; 0.; 0.; 1.; 2.; 0.; 0.; 3.; 4.; 0.; 0.; 0.; 0.; 0. |]\n        in\n        check_t \"pad values\" [| 4; 4 |] expected b);\n    test \"shrink\" (fun () ->\n        let a =\n          Nx.create Nx.float32 [| 4; 4 |]\n            (Array.init 16 (fun i -> float_of_int i))\n        in\n        let b = Nx.shrink [| (1, 3); (1, 3) |] a in\n        check_t \"shrink values\" [| 2; 2 |] [| 5.; 6.; 9.; 10. 
|] b);\n    test \"expand\" (fun () ->\n        let a = Nx.ones Nx.float32 [| 1; 3 |] in\n        let b = Nx.expand [| 2; -1 |] a in\n        check_t \"expand values\" [| 2; 3 |] [| 1.; 1.; 1.; 1.; 1.; 1. |] b);\n    test \"broadcast_to\" (fun () ->\n        let a = Nx.ones Nx.float32 [| 1; 3 |] in\n        Nx.broadcast_to [| 2; 3 |] a\n        |> check_t \"broadcast_to\" [| 2; 3 |] [| 1.; 1.; 1.; 1.; 1.; 1. |]);\n    test \"broadcast_arrays\" (fun () ->\n        let a = Nx.ones Nx.float32 [| 1; 3 |] in\n        let b = Nx.full Nx.float32 [| 2; 1 |] 2.0 in\n        let cs = Nx.broadcast_arrays [ a; b ] in\n        equal ~msg:\"broadcast_arrays count\" int 2 (List.length cs);\n        List.iter\n          (fun c -> check_shape \"broadcast_arrays shape\" [| 2; 3 |] c)\n          cs);\n    test \"tile\" (fun () ->\n        let a = Nx.create Nx.float32 [| 2; 2 |] [| 1.; 2.; 3.; 4. |] in\n        Nx.tile [| 2; 1 |] a\n        |> check_t \"tile\" [| 4; 2 |] [| 1.; 2.; 3.; 4.; 1.; 2.; 3.; 4. |]);\n    test \"repeat\" (fun () ->\n        let a = Nx.create Nx.float32 [| 3 |] [| 1.; 2.; 3. |] in\n        Nx.repeat 2 a |> check_t \"repeat\" [| 6 |] [| 1.; 1.; 2.; 2.; 3.; 3. |]);\n  ]\n\nlet array_combination_tests =\n  [\n    test \"concatenate\" (fun () ->\n        let a = Nx.create Nx.float32 [| 2; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6. |] in\n        let b = Nx.create Nx.float32 [| 1; 3 |] [| 7.; 8.; 9. |] in\n        let c = Nx.concatenate ~axis:0 [ a; b ] in\n        check_t \"concatenate values\" [| 3; 3 |]\n          [| 1.; 2.; 3.; 4.; 5.; 6.; 7.; 8.; 9. |]\n          c);\n    test \"stack\" (fun () ->\n        let a = Nx.create Nx.float32 [| 2 |] [| 1.; 2. |] in\n        let b = Nx.create Nx.float32 [| 2 |] [| 3.; 4. |] in\n        let c = Nx.stack ~axis:0 [ a; b ] in\n        check_t \"stack values\" [| 2; 2 |] [| 1.; 2.; 3.; 4. |] c);\n    test \"vstack\" (fun () ->\n        let a = Nx.create Nx.float32 [| 2; 2 |] [| 1.; 2.; 3.; 4. 
|] in\n        let b = Nx.create Nx.float32 [| 1; 2 |] [| 5.; 6. |] in\n        let c = Nx.vstack [ a; b ] in\n        check_t \"vstack values\" [| 3; 2 |] [| 1.; 2.; 3.; 4.; 5.; 6. |] c);\n    test \"hstack\" (fun () ->\n        let a = Nx.create Nx.float32 [| 2; 2 |] [| 1.; 2.; 3.; 4. |] in\n        let b = Nx.create Nx.float32 [| 2; 1 |] [| 5.; 6. |] in\n        let c = Nx.hstack [ a; b ] in\n        check_t \"hstack values\" [| 2; 3 |] [| 1.; 2.; 5.; 3.; 4.; 6. |] c);\n    test \"dstack\" (fun () ->\n        let a = Nx.create Nx.float32 [| 2; 2 |] [| 1.; 2.; 3.; 4. |] in\n        let b = Nx.create Nx.float32 [| 2; 2 |] [| 5.; 6.; 7.; 8. |] in\n        let c = Nx.dstack [ a; b ] in\n        check_t \"dstack values\" [| 2; 2; 2 |]\n          [| 1.; 5.; 2.; 6.; 3.; 7.; 4.; 8. |]\n          c);\n    test \"array_split\" (fun () ->\n        let a = Nx.create Nx.float32 [| 6 |] [| 1.; 2.; 3.; 4.; 5.; 6. |] in\n        let splits = Nx.array_split ~axis:0 (`Count 3) a in\n        equal ~msg:\"array_split count\" int 3 (List.length splits);\n        check_t \"split 0 values\" [| 2 |] [| 1.; 2. |] (List.nth splits 0);\n        check_t \"split 1 values\" [| 2 |] [| 3.; 4. |] (List.nth splits 1);\n        check_t \"split 2 values\" [| 2 |] [| 5.; 6. |] (List.nth splits 2));\n    test \"split\" (fun () ->\n        let a = Nx.create Nx.float32 [| 6 |] [| 1.; 2.; 3.; 4.; 5.; 6. 
|] in\n        let splits = Nx.split ~axis:0 3 a in\n        equal ~msg:\"split count\" int 3 (List.length splits);\n        List.iter (fun s -> check_shape \"split shape\" [| 2 |] s) splits);\n  ]\n\nlet type_conversion_tests =\n  [\n    test \"cast\" (fun () ->\n        let a = Nx.create Nx.float32 [| 3 |] [| 1.7; 2.3; 3.9 |] in\n        let b = Nx.cast Nx.int32 a in\n        check_t \"cast values\" [| 3 |] [| 1l; 2l; 3l |] b);\n    test \"astype\" (fun () ->\n        let a = Nx.create Nx.float32 [| 2 |] [| 3.14; 2.71 |] in\n        let b = Nx.astype Nx.int32 a in\n        equal ~msg:\"astype dtype\" bool true (Nx.dtype b = Nx.int32));\n    test \"to_bigarray\" (fun () ->\n        let a = Nx.create Nx.float32 [| 3 |] [| 1.; 2.; 3. |] in\n        let ba = Nx.to_bigarray a in\n        equal ~msg:\"bigarray dims\" int 1 (Bigarray.Genarray.num_dims ba);\n        equal ~msg:\"bigarray value\" (float 1e-6) 2.0\n          (Bigarray.Genarray.get ba [| 1 |]));\n    test \"of_bigarray\" (fun () ->\n        let ba =\n          Bigarray.Genarray.create Bigarray.float32 Bigarray.c_layout [| 3 |]\n        in\n        Bigarray.Genarray.set ba [| 0 |] 4.0;\n        Bigarray.Genarray.set ba [| 1 |] 5.0;\n        Bigarray.Genarray.set ba [| 2 |] 6.0;\n        Nx.of_bigarray ba |> check_t \"of_bigarray\" [| 3 |] [| 4.; 5.; 6. |]);\n    test \"to_array\" (fun () ->\n        let a = Nx.create Nx.float32 [| 3 |] [| 7.; 8.; 9. |] in\n        let arr = Nx.to_array a in\n        equal ~msg:\"to_array length\" int 3 (Array.length arr);\n        equal ~msg:\"to_array value\" (float 1e-6) 8.0 arr.(1));\n  ]\n\nlet indexing_slicing_tests =\n  [\n    test \"get\" (fun () ->\n        let a = Nx.create Nx.float32 shape_2x3 test_array in\n        Nx.get [ 0 ] a |> check_t \"get row 0\" [| 3 |] [| 1.; 2.; 3. |];\n        Nx.get [ 1 ] a |> check_t \"get row 1\" [| 3 |] [| 4.; 5.; 6. 
|]);\n    test \"set\" (fun () ->\n        let a = Nx.zeros Nx.float32 shape_2x3 in\n        let value = Nx.create Nx.float32 [| 3 |] [| 7.; 8.; 9. |] in\n        Nx.set [ 1 ] a value;\n        check_t \"set\" shape_2x3 [| 0.; 0.; 0.; 7.; 8.; 9. |] a);\n    test \"item\" (fun () ->\n        let a = Nx.create Nx.float32 shape_2x3 test_array in\n        equal ~msg:\"item [0,0]\" (float 1e-6) 1.0 (Nx.item [ 0; 0 ] a);\n        equal ~msg:\"item [1,2]\" (float 1e-6) 6.0 (Nx.item [ 1; 2 ] a));\n    test \"set_item\" (fun () ->\n        let a = Nx.zeros Nx.float32 shape_2x3 in\n        Nx.set_item [ 0; 1 ] 42.0 a;\n        Nx.set_item [ 1; 2 ] 99.0 a;\n        equal ~msg:\"set_item [0,1]\" (float 1e-6) 42.0 (Nx.item [ 0; 1 ] a);\n        equal ~msg:\"set_item [1,2]\" (float 1e-6) 99.0 (Nx.item [ 1; 2 ] a));\n    test \"slice\" (fun () ->\n        let a = Nx.create Nx.float32 [| 5 |] [| 1.; 2.; 3.; 4.; 5. |] in\n        Nx.slice [ Nx.R (1, 4) ] a |> check_t \"slice\" [| 3 |] [| 2.; 3.; 4. |]);\n    test \"set_slice\" (fun () ->\n        let a = Nx.zeros Nx.float32 [| 5 |] in\n        let value = Nx.create Nx.float32 [| 2 |] [| 10.; 20. |] in\n        Nx.set_slice [ Nx.R (2, 4) ] a value;\n        check_t \"set_slice\" [| 5 |] [| 0.; 0.; 10.; 20.; 0. |] a);\n  ]\n\nlet linear_algebra_tests =\n  [\n    test \"dot\" (fun () ->\n        let a = Nx.create Nx.float32 [| 3 |] [| 1.; 2.; 3. |] in\n        let b = Nx.create Nx.float32 [| 3 |] [| 4.; 5.; 6. |] in\n        Nx.dot a b |> check_t \"dot\" [||] [| 32.0 |]);\n    test \"matmul\" (fun () ->\n        let a = Nx.create Nx.float32 [| 2; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6. |] in\n        let b = Nx.create Nx.float32 [| 3; 2 |] [| 1.; 4.; 2.; 5.; 3.; 6. |] in\n        Nx.matmul a b |> check_t \"matmul\" [| 2; 2 |] [| 14.; 32.; 32.; 77. |]);\n  ]\n\nlet neural_network_tests =\n  [\n    test \"relu\" (fun () ->\n        let a = Nx.create Nx.float32 [| 5 |] [| -2.; -1.; 0.; 1.; 2. 
|] in\n        Nx.relu a |> check_t \"relu\" [| 5 |] [| 0.; 0.; 0.; 1.; 2. |]);\n    test \"sigmoid\" (fun () ->\n        let a = Nx.create Nx.float32 [| 3 |] [| -10.; 0.; 10. |] in\n        Nx.sigmoid a\n        |> check_t ~eps:1e-6 \"sigmoid\" [| 3 |] [| 0.0000454; 0.5; 0.9999546 |]);\n    test \"one_hot\" (fun () ->\n        let a = Nx.create Nx.int32 [| 3 |] [| 0l; 2l; 1l |] in\n        let b = Nx.one_hot a ~num_classes:3 in\n        check_t \"one_hot values\" [| 3; 3 |] [| 1; 0; 0; 0; 0; 1; 0; 1; 0 |] b);\n  ]\n\nlet random_tests =\n  [\n    test \"rand\" (fun () ->\n        let t = Nx.Rng.run ~seed:0 (fun () -> Nx.rand Nx.float32 shape_2x3) in\n        check_shape \"rand shape\" shape_2x3 t;\n        let vals = Nx.to_array t in\n        Array.iter\n          (fun v -> equal ~msg:\"rand in range\" bool true (v >= 0.0 && v < 1.0))\n          vals);\n    test \"randn\" (fun () ->\n        let t = Nx.Rng.run ~seed:1 (fun () -> Nx.randn Nx.float32 [| 100 |]) in\n        check_shape \"randn shape\" [| 100 |] t;\n        (* Check that values are roughly normally distributed *)\n        let vals = Nx.to_array t in\n        let mean = Array.fold_left ( +. ) 0.0 vals /. 100.0 in\n        equal ~msg:\"randn mean\" bool true (abs_float mean < 0.5));\n    test \"randint\" (fun () ->\n        let t =\n          Nx.Rng.run ~seed:2 (fun () ->\n              Nx.randint Nx.int32 shape_2x3 0 ~high:10)\n        in\n        check_shape \"randint shape\" shape_2x3 t;\n        (* Check all values are in range *)\n        for i = 0 to 1 do\n          for j = 0 to 2 do\n            let v = Nx.item [ i; j ] t in\n            equal ~msg:\"randint in range\" bool true (v >= 0l && v < 10l)\n          done\n        done);\n  ]\n\nlet sorting_searching_tests =\n  [\n    test \"sort\" (fun () ->\n        let a = Nx.create Nx.float32 [| 5 |] [| 3.; 1.; 4.; 1.; 5. |] in\n        let sorted, indices = Nx.sort a in\n        check_t \"sort values\" [| 5 |] [| 1.; 1.; 3.; 4.; 5. 
|] sorted;\n        check_shape \"sort indices shape\" [| 5 |] indices);\n    test \"argsort\" (fun () ->\n        let a = Nx.create Nx.float32 [| 5 |] [| 3.; 1.; 4.; 1.; 5. |] in\n        Nx.argsort a |> check_t \"argsort\" [| 5 |] [| 1l; 3l; 0l; 2l; 4l |]);\n    test \"argmax\" (fun () ->\n        let a = Nx.create Nx.float32 [| 5 |] [| 3.; 1.; 5.; 2.; 4. |] in\n        Nx.argmax a |> check_t \"argmax\" [||] [| 2l |]);\n    test \"argmin\" (fun () ->\n        let a = Nx.create Nx.float32 [| 5 |] [| 3.; 1.; 5.; 2.; 4. |] in\n        Nx.argmin a |> check_t \"argmin\" [||] [| 1l |]);\n  ]\n\nlet display_formatting_tests =\n  [\n    test \"pp_data\" (fun () ->\n        let a = Nx.create Nx.float32 [| 2; 2 |] [| 1.; 2.; 3.; 4. |] in\n        let str = Format.asprintf \"%a\" Nx.pp_data a in\n        equal ~msg:\"pp_data not empty\" bool true (String.length str > 0);\n        equal ~msg:\"pp_data contains data\" bool true (String.contains str '1'));\n    test \"data_to_string\" (fun () ->\n        let a = Nx.create Nx.float32 [| 2; 2 |] [| 1.; 2.; 3.; 4. 
|] in\n        let str = Nx.data_to_string a in\n        equal ~msg:\"data_to_string not empty\" bool true (String.length str > 0);\n        equal ~msg:\"data_to_string contains data\" bool true\n          (String.contains str '1'));\n    test \"print_data\" (fun () ->\n        let a = Nx.ones Nx.float32 [| 2; 2 |] in\n        Nx.print_data a);\n    test \"pp\" (fun () ->\n        let a = Nx.ones Nx.float32 [| 2; 2 |] in\n        let str = Format.asprintf \"%a\" Nx.pp a in\n        equal ~msg:\"pp not empty\" bool true (String.length str > 0));\n    test \"to_string\" (fun () ->\n        let a = Nx.ones Nx.float32 [| 2; 2 |] in\n        let str = Nx.to_string a in\n        equal ~msg:\"to_string not empty\" bool true (String.length str > 0));\n    test \"print\" (fun () ->\n        let a = Nx.ones Nx.float32 [| 2; 2 |] in\n        Nx.print a);\n  ]\n\nlet higher_order_tests =\n  [\n    test \"map\" (fun () ->\n        let a = Nx.create Nx.float32 [| 2; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6. |] in\n        let b = Nx.map (fun x -> Nx.mul_s x 2.0) a in\n        check_t \"map double\" [| 2; 3 |] [| 2.; 4.; 6.; 8.; 10.; 12. |] b);\n    test \"map preserves shape\" (fun () ->\n        let a =\n          Nx.create Nx.float32 [| 3; 2; 2 |]\n            [| 1.; 2.; 3.; 4.; 5.; 6.; 7.; 8.; 9.; 10.; 11.; 12. |]\n        in\n        let b = Nx.map (fun x -> Nx.add_s x 1.0) a in\n        check_t \"map values\" [| 3; 2; 2 |]\n          [| 2.; 3.; 4.; 5.; 6.; 7.; 8.; 9.; 10.; 11.; 12.; 13. |]\n          b);\n    test \"iter\" (fun () ->\n        let a = Nx.create Nx.float32 [| 2; 2 |] [| 1.; 2.; 3.; 4. |] in\n        let sum = ref (Nx.scalar Nx.float32 0.0) in\n        Nx.iter (fun x -> sum := Nx.add !sum x) a;\n        equal ~msg:\"iter sum\" (float 0.01) 10.0 (Nx.item [] !sum));\n    test \"fold\" (fun () ->\n        let a = Nx.create Nx.float32 [| 2; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6. 
|] in\n        let sum =\n          Nx.fold (fun acc x -> Nx.add acc x) (Nx.scalar Nx.float32 0.0) a\n        in\n        equal ~msg:\"fold sum\" (float 0.01) 21.0 (Nx.item [] sum));\n    test \"fold product\" (fun () ->\n        let a = Nx.create Nx.float32 [| 2; 2 |] [| 1.; 2.; 3.; 4. |] in\n        let prod =\n          Nx.fold (fun acc x -> Nx.mul acc x) (Nx.scalar Nx.float32 1.0) a\n        in\n        equal ~msg:\"fold product\" (float 0.01) 24.0 (Nx.item [] prod));\n    test \"fold max\" (fun () ->\n        let a = Nx.create Nx.float32 [| 2; 3 |] [| 1.; 5.; 3.; 2.; 6.; 4. |] in\n        let max_val =\n          Nx.fold\n            (fun acc x -> Nx.maximum acc x)\n            (Nx.scalar Nx.float32 neg_infinity)\n            a\n        in\n        equal ~msg:\"fold max\" (float 0.01) 6.0 (Nx.item [] max_val));\n    test \"map_item\" (fun () ->\n        let a = Nx.create Nx.float32 [| 2; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6. |] in\n        let b = Nx.map_item (fun x -> x *. 2.0) a in\n        check_t \"map_item double\" [| 2; 3 |] [| 2.; 4.; 6.; 8.; 10.; 12. |] b);\n    test \"iter_item\" (fun () ->\n        let a = Nx.create Nx.float32 [| 2; 2 |] [| 1.; 2.; 3.; 4. |] in\n        let sum = ref 0.0 in\n        Nx.iter_item (fun x -> sum := !sum +. x) a;\n        equal ~msg:\"iter_item sum\" (float 0.01) 10.0 !sum);\n    test \"fold_item\" (fun () ->\n        let a = Nx.create Nx.float32 [| 2; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6. |] in\n        let sum = Nx.fold_item (fun acc x -> acc +. 
x) 0.0 a in\n        equal ~msg:\"fold_item sum\" (float 0.01) 21.0 sum);\n  ]\n\nlet () =\n  run \"Nx Sanity\"\n    [\n      group \"Creation Functions\" creation_tests;\n      group \"Range Generation\" range_generation_tests;\n      group \"Property Access\" property_access_tests;\n      group \"Data Manipulation\" data_manipulation_tests;\n      group \"Element-wise Binary Operations\" element_wise_binary_tests;\n      group \"Comparison Operations\" comparison_tests;\n      group \"Element-wise Unary Operations\" element_wise_unary_tests;\n      group \"Bitwise Operations\" bitwise_tests;\n      group \"Logical Operations\" logical_tests;\n      group \"Special Value Checks\" special_value_tests;\n      group \"Ternary Operations\" ternary_tests;\n      group \"Reduction Operations\" reduction_tests;\n      group \"Shape Manipulation\" shape_manipulation_tests;\n      group \"Array Combination\" array_combination_tests;\n      group \"Type Conversion\" type_conversion_tests;\n      group \"Indexing and Slicing\" indexing_slicing_tests;\n      group \"Linear Algebra\" linear_algebra_tests;\n      group \"Neural Network\" neural_network_tests;\n      group \"Random Number Generation\" random_tests;\n      group \"Sorting and Searching\" sorting_searching_tests;\n      group \"Display and Formatting\" display_formatting_tests;\n      group \"Higher-order Functions\" higher_order_tests;\n    ]\n"
  },
  {
    "path": "packages/nx/test/test_nx_sorting.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Sorting and searching tests for Nx *)\n\nopen Windtrap\nopen Test_nx_support\n\n(* ───── Where Tests ───── *)\n\nlet test_where_1d () =\n  let mask = Nx.create Nx.bool [| 3 |] [| true; false; true |] in\n  let a = Nx.create Nx.float32 [| 3 |] [| 1.; 2.; 3. |] in\n  let b = Nx.create Nx.float32 [| 3 |] [| 4.; 5.; 6. |] in\n  let result = Nx.where mask a b in\n  check_t \"where 1D\" [| 3 |] [| 1.; 5.; 3. |] result\n\nlet test_where_broadcast () =\n  let mask = Nx.create Nx.bool [| 2; 1 |] [| true; false |] in\n  let a = Nx.create Nx.float32 [| 2; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6. |] in\n  let b = Nx.create Nx.float32 [| 2; 3 |] [| 7.; 8.; 9.; 10.; 11.; 12. |] in\n  let result = Nx.where mask a b in\n  check_t \"where with broadcasting\" [| 2; 3 |]\n    [| 1.; 2.; 3.; 10.; 11.; 12. |]\n    result\n\nlet test_where_scalar_inputs () =\n  let mask =\n    Nx.create Nx.bool [| 2; 3 |] [| true; false; true; false; true; false |]\n  in\n  let a = Nx.scalar Nx.float32 5.0 in\n  let b = Nx.scalar Nx.float32 10.0 in\n  let result = Nx.where mask a b in\n  check_t \"where with scalar inputs\" [| 2; 3 |]\n    [| 5.0; 10.0; 5.0; 10.0; 5.0; 10.0 |]\n    result\n\nlet test_where_invalid_shapes () =\n  let mask = Nx.create Nx.bool [| 2 |] [| true; false |] in\n  let a = Nx.create Nx.float32 [| 3 |] [| 1.; 2.; 3. |] in\n  let b = Nx.create Nx.float32 [| 2 |] [| 4.; 5. |] in\n  raises ~msg:\"where invalid shapes\"\n    (Invalid_argument\n       \"broadcast: cannot broadcast [3] with [2] (dim 0: 3\\226\\137\\1602)\")\n    (fun () -> ignore (Nx.where mask a b))\n\n(* ───── Sort Tests ───── *)\n\nlet test_sort_2d_axis0 () =\n  let t = Nx.create Nx.float32 [| 2; 3 |] [| 4.; 1.; 3.; 2.; 5.; 6. 
|] in\n  let result, indices = Nx.sort ~axis:0 t in\n  check_t \"sort 2D axis 0 values\" [| 2; 3 |] [| 2.; 1.; 3.; 4.; 5.; 6. |] result;\n  check_t \"sort 2D axis 0 indices\" [| 2; 3 |]\n    [| 1l; 0l; 0l; 0l; 1l; 1l |]\n    indices\n\nlet test_sort_2d_axis1 () =\n  let t = Nx.create Nx.float32 [| 2; 3 |] [| 4.; 1.; 3.; 2.; 5.; 6. |] in\n  let result, indices = Nx.sort ~axis:1 t in\n  check_t \"sort 2D axis 1 values\" [| 2; 3 |] [| 1.; 3.; 4.; 2.; 5.; 6. |] result;\n  check_t \"sort 2D axis 1 indices\" [| 2; 3 |]\n    [| 1l; 2l; 0l; 0l; 1l; 2l |]\n    indices\n\nlet test_sort_invalid_axis () =\n  let t = Nx.create Nx.float32 [| 2; 2 |] [| 1.; 2.; 3.; 4. |] in\n  check_invalid_arg \"sort invalid axis\"\n    \"sort: axis 2 out of bounds for 2D tensor\" (fun () -> Nx.sort ~axis:2 t)\n\nlet test_sort_nan_handling () =\n  let t = Nx.create Nx.float32 [| 5 |] [| 3.; nan; 1.; 2.; nan |] in\n  let result, _ = Nx.sort t in\n  (* NaN values should be sorted to the end *)\n  let first_three = Nx.slice [ Nx.R (0, 3) ] result in\n  check_t \"sort NaN handling - non-NaN values\" [| 3 |] [| 1.; 2.; 3. |]\n    first_three;\n  (* Check that last two values are NaN *)\n  equal ~msg:\"sort NaN handling - NaN at end\" bool true\n    (Float.is_nan (Nx.item [ 3 ] result) && Float.is_nan (Nx.item [ 4 ] result))\n\nlet test_sort_stable () =\n  (* Test sort stability with repeated values *)\n  let t = Nx.create Nx.float32 [| 6 |] [| 3.; 1.; 2.; 1.; 3.; 2. |] in\n  let _, indices = Nx.sort t in\n  (* For stable sort, original order should be preserved for equal elements *)\n  check_t \"sort stable indices\" [| 6 |] [| 1l; 3l; 2l; 5l; 0l; 4l |] indices\n\n(* ───── Argsort Tests ───── *)\n\nlet test_argsort_1d () =\n  let t = Nx.create Nx.float32 [| 5 |] [| 3.; 1.; 4.; 1.; 5. |] in\n  let result = Nx.argsort t in\n  check_t \"argsort 1D\" [| 5 |] [| 1l; 3l; 0l; 2l; 4l |] result\n\nlet test_argsort_2d_axis0 () =\n  let t = Nx.create Nx.float32 [| 2; 3 |] [| 4.; 1.; 3.; 2.; 5.; 6. 
|] in\n  let result = Nx.argsort ~axis:0 t in\n  check_t \"argsort 2D axis 0\" [| 2; 3 |] [| 1l; 0l; 0l; 0l; 1l; 1l |] result\n\nlet test_argsort_2d_axis1 () =\n  let t = Nx.create Nx.float32 [| 2; 3 |] [| 4.; 1.; 3.; 2.; 5.; 6. |] in\n  let result = Nx.argsort ~axis:1 t in\n  check_t \"argsort 2D axis 1\" [| 2; 3 |] [| 1l; 2l; 0l; 0l; 1l; 2l |] result\n\nlet test_argsort_empty () =\n  let t = Nx.create Nx.float32 [| 0 |] [||] in\n  let result = Nx.argsort t in\n  check_t \"argsort empty\" [| 0 |] [||] result\n\n(* ───── Argmax Tests ───── *)\n\nlet test_argmax_1d () =\n  let t = Nx.create Nx.float32 [| 5 |] [| 3.; 1.; 4.; 1.; 5. |] in\n  let result = Nx.argmax t in\n  check_t \"argmax 1D\" [||] [| 4l |] result\n\nlet test_argmax_2d_axis0 () =\n  let t = Nx.create Nx.float32 [| 2; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6. |] in\n  let result = Nx.argmax ~axis:0 t in\n  check_t \"argmax 2D axis 0\" [| 3 |] [| 1l; 1l; 1l |] result\n\nlet test_argmax_2d_axis1 () =\n  let t = Nx.create Nx.float32 [| 2; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6. |] in\n  let result = Nx.argmax ~axis:1 t in\n  check_t \"argmax 2D axis 1\" [| 2 |] [| 2l; 2l |] result\n\nlet test_argmax_keepdims () =\n  let t = Nx.create Nx.float32 [| 2; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6. |] in\n  let result = Nx.argmax ~axis:1 ~keepdims:true t in\n  check_shape \"argmax keepdims shape\" [| 2; 1 |] result;\n  check_t \"argmax keepdims values\" [| 2; 1 |] [| 2l; 2l |] result\n\nlet test_argmax_nan () =\n  let t = Nx.create Nx.float32 [| 4 |] [| 1.; nan; 3.; 2. |] in\n  let result = Nx.argmax t in\n  (* NaN handling may vary - just check it doesn't crash *)\n  check_shape \"argmax with NaN\" [||] result\n\n(* ───── Argmin Tests ───── *)\n\nlet test_argmin_1d () =\n  let t = Nx.create Nx.float32 [| 5 |] [| 3.; 1.; 4.; 1.; 5. |] in\n  let result = Nx.argmin t in\n  check_t \"argmin 1D\" [||] [| 1l |] result\n\nlet test_argmin_2d_axis0 () =\n  let t = Nx.create Nx.float32 [| 2; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6. 
|] in\n  let result = Nx.argmin ~axis:0 t in\n  check_t \"argmin 2D axis 0\" [| 3 |] [| 0l; 0l; 0l |] result\n\nlet test_argmin_2d_axis1 () =\n  let t = Nx.create Nx.float32 [| 2; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6. |] in\n  let result = Nx.argmin ~axis:1 t in\n  check_t \"argmin 2D axis 1\" [| 2 |] [| 0l; 0l |] result\n\nlet test_argmin_ties () =\n  let t = Nx.create Nx.float32 [| 5 |] [| 3.; 1.; 2.; 1.; 3. |] in\n  let result = Nx.argmin t in\n  (* Should return first occurrence *)\n  check_t \"argmin ties\" [||] [| 1l |] result\n\n(* ───── Sort Regression Tests ───── *)\n\nlet test_sort_large_1d () =\n  (* Regression: bitonic sort breaks for n >= 129. The sort produces duplicate\n     values instead of a correct permutation. *)\n  let n = 150 in\n  let t = Nx.arange Nx.float32 0 n 1 in\n  (* Reverse so it's not already sorted *)\n  let t = Nx.flip ~axes:[ 0 ] t in\n  let sorted_vals, sorted_indices = Nx.sort t in\n  (* Check sorted values are 0, 1, 2, ..., n-1 *)\n  let expected_vals = Nx.arange Nx.float32 0 n 1 in\n  check_nx \"sort large 1D values\" expected_vals sorted_vals;\n  (* Check indices map back to original positions *)\n  let expected_indices = Nx.arange Nx.int32 (n - 1) (-1) (-1) in\n  check_nx \"sort large 1D indices\" expected_indices sorted_indices\n\nlet test_sort_power_of_two () =\n  (* n=256 is a power of two (no padding needed) but still breaks *)\n  let n = 256 in\n  let t = Nx.arange Nx.float32 0 n 1 in\n  let t = Nx.flip ~axes:[ 0 ] t in\n  let sorted_vals, _ = Nx.sort t in\n  let expected_vals = Nx.arange Nx.float32 0 n 1 in\n  check_nx \"sort power-of-two values\" expected_vals sorted_vals\n\nlet test_sort_128_boundary () =\n  (* n=128 works, n=129 does not *)\n  let t128 = Nx.flip ~axes:[ 0 ] (Nx.arange Nx.float32 0 128 1) in\n  let sorted128, _ = Nx.sort t128 in\n  check_nx \"sort n=128 values\" (Nx.arange Nx.float32 0 128 1) sorted128;\n  let t129 = Nx.flip ~axes:[ 0 ] (Nx.arange Nx.float32 0 129 1) in\n  let sorted129, _ = Nx.sort t129 
in\n  check_nx \"sort n=129 values\" (Nx.arange Nx.float32 0 129 1) sorted129\n\n(* Test Suite Organization *)\n\nlet where_tests =\n  [\n    test \"where 1D\" test_where_1d;\n    test \"where broadcast\" test_where_broadcast;\n    test \"where scalar inputs\" test_where_scalar_inputs;\n    test \"where invalid shapes\" test_where_invalid_shapes;\n  ]\n\nlet sort_tests =\n  [\n    test \"sort 2D axis 0\" test_sort_2d_axis0;\n    test \"sort 2D axis 1\" test_sort_2d_axis1;\n    test \"sort invalid axis\" test_sort_invalid_axis;\n    test \"sort NaN handling\" test_sort_nan_handling;\n    test \"sort stable\" test_sort_stable;\n  ]\n\nlet sort_regression_tests =\n  [\n    test \"sort large 1D (n=150)\" test_sort_large_1d;\n    test \"sort power of two (n=256)\" test_sort_power_of_two;\n    test \"sort 128 boundary\" test_sort_128_boundary;\n  ]\n\nlet argsort_tests =\n  [\n    test \"argsort 1D\" test_argsort_1d;\n    test \"argsort 2D axis 0\" test_argsort_2d_axis0;\n    test \"argsort 2D axis 1\" test_argsort_2d_axis1;\n    test \"argsort empty\" test_argsort_empty;\n  ]\n\nlet argmax_tests =\n  [\n    test \"argmax 1D\" test_argmax_1d;\n    test \"argmax 2D axis 0\" test_argmax_2d_axis0;\n    test \"argmax 2D axis 1\" test_argmax_2d_axis1;\n    test \"argmax keepdims\" test_argmax_keepdims;\n    test \"argmax NaN\" test_argmax_nan;\n  ]\n\nlet argmin_tests =\n  [\n    test \"argmin 1D\" test_argmin_1d;\n    test \"argmin 2D axis 0\" test_argmin_2d_axis0;\n    test \"argmin 2D axis 1\" test_argmin_2d_axis1;\n    test \"argmin ties\" test_argmin_ties;\n  ]\n\nlet suite =\n  [\n    group \"Where\" where_tests;\n    group \"Sort\" sort_tests;\n    group \"Sort Regression\" sort_regression_tests;\n    group \"Argsort\" argsort_tests;\n    group \"Argmax\" argmax_tests;\n    group \"Argmin\" argmin_tests;\n  ]\n\nlet () = run \"Nx Sorting\" suite\n"
  },
  {
    "path": "packages/nx/test/test_nx_support.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Shared test utilities for Nx test suite *)\n\nopen Windtrap\n\nlet check_invalid_arg msg pattern f =\n  raises ~msg (Invalid_argument pattern) (fun () -> ignore (f ()))\n\nlet check_failure msg pattern f = raises ~msg (Failure pattern) f\n\nlet testable_of_dtype (type a b) ?(eps = 1e-6) (dtype : (a, b) Nx.dtype) :\n    a testable =\n  match dtype with\n  | Nx.Float16 -> float eps\n  | Nx.Float32 -> float eps\n  | Nx.Float64 -> float eps\n  | Nx.BFloat16 -> float eps\n  | Nx.Float8_e4m3 -> float eps\n  | Nx.Float8_e5m2 -> float eps\n  | Nx.Int8 -> int\n  | Nx.Int16 -> int\n  | Nx.Int32 -> int32\n  | Nx.Int64 -> int64\n  | Nx.UInt8 -> int\n  | Nx.UInt16 -> int\n  | Nx.UInt32 -> int32\n  | Nx.UInt64 -> int64\n  | Nx.Int4 -> int\n  | Nx.UInt4 -> int\n  | Nx.Bool -> bool\n  | Nx.Complex64 ->\n      Testable.make\n        ~pp:(fun ppf v ->\n          Format.fprintf ppf \"(%f, %f)\" v.Complex.re v.Complex.im)\n        ~equal:(fun a b ->\n          Float.abs (a.re -. b.re) < eps && Float.abs (a.im -. b.im) < eps)\n        ()\n  | Nx.Complex128 ->\n      Testable.make\n        ~pp:(fun ppf v ->\n          Format.fprintf ppf \"(%f, %f)\" v.Complex.re v.Complex.im)\n        ~equal:(fun a b ->\n          Float.abs (a.re -. b.re) < eps && Float.abs (a.im -. 
b.im) < eps)\n        ()\n\n(* Check function to test a tensor against an array *)\nlet check_data (type a b) ?eps msg (expected : a array) (actual : (a, b) Nx.t) =\n  let dt_testable = testable_of_dtype ?eps (Nx.dtype actual) in\n  let actual = Nx.to_array actual in\n  equal ~msg (array dt_testable) expected actual\n\nlet check_shape msg expected_shape tensor =\n  equal ~msg (array int) expected_shape (Nx.shape tensor)\n\nlet check_t ?eps msg shape data actual =\n  check_shape msg shape actual;\n  check_data ?eps msg data actual\n\n(* Approximate equality for floating-point comparisons *)\nlet approx_equal (type b) ?(epsilon = 1e-6) (a : (float, b) Nx.t)\n    (b : (float, b) Nx.t) =\n  if Nx.shape a <> Nx.shape b then false\n  else\n    let diff = Nx.sub a b in\n    let abs_diff = Nx.abs diff in\n    let max_diff = Nx.item [] (Nx.max abs_diff) in\n    max_diff < epsilon\n\n(* Approximate equality for complex numbers *)\nlet approx_equal_complex (type b) ?(epsilon = 1e-6) (a : (Complex.t, b) Nx.t)\n    (b : (Complex.t, b) Nx.t) =\n  if Nx.shape a <> Nx.shape b then false\n  else\n    let a_arr = Nx.to_array a in\n    let b_arr = Nx.to_array b in\n    Array.for_all2\n      (fun x y ->\n        Float.abs (x.Complex.re -. y.Complex.re) < epsilon\n        && Float.abs (x.Complex.im -. 
y.Complex.im) < epsilon)\n      a_arr b_arr\n\n(* Common check functions *)\nlet check_nx (type a b) ?epsilon msg (expected : (a, b) Nx.t)\n    (actual : (a, b) Nx.t) =\n  if Nx.shape expected <> Nx.shape actual then\n    failf \"%s: shapes differ - expected %s, got %s\" msg\n      (String.concat \"x\"\n         (List.map string_of_int (Array.to_list (Nx.shape expected))))\n      (String.concat \"x\"\n         (List.map string_of_int (Array.to_list (Nx.shape actual))))\n  else\n    let test_float expected actual =\n      let approx_equal = approx_equal ?epsilon in\n      if not (approx_equal expected actual) then\n        failf \"%s: tensors not equal\\nExpected:\\n%s\\nActual:\\n%s\" msg\n          (Nx.to_string expected) (Nx.to_string actual)\n    in\n    let test_complex expected actual =\n      let approx_equal_complex = approx_equal_complex ?epsilon in\n      if not (approx_equal_complex expected actual) then\n        failf \"%s: tensors not equal\\nExpected:\\n%s\\nActual:\\n%s\" msg\n          (Nx.to_string expected) (Nx.to_string actual)\n    in\n    match Nx.dtype expected with\n    | Float16 -> test_float expected actual\n    | Float32 -> test_float expected actual\n    | Float64 -> test_float expected actual\n    | Complex64 -> test_complex expected actual\n    | Complex128 -> test_complex expected actual\n    | _ ->\n        let equal = Nx.array_equal expected actual in\n        if Nx.item [] equal = false then\n          failf \"%s: tensors not equal\\nExpected:\\n%s\\nActual:\\n%s\" msg\n            (Nx.to_string expected) (Nx.to_string actual)\n\nlet check_nx_scalar dtype msg expected actual =\n  let expected_t = Nx.scalar dtype expected in\n  let actual_t = Nx.scalar dtype actual in\n  check_nx msg expected_t actual_t\n"
  },
  {
    "path": "packages/nx/top/dune",
    "content": "(library\n (name nx_top)\n (public_name nx.top)\n (modules nx_top)\n (libraries nx nx.c compiler-libs.toplevel)\n (modes byte))\n"
  },
  {
    "path": "packages/nx/top/nx_top.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nlet install_printer name =\n  let phrase =\n    Printf.sprintf \"#install_printer %s;;\" name\n    |> Lexing.from_string\n    |> !Toploop.parse_toplevel_phrase\n  in\n  Toploop.execute_phrase false Format.err_formatter phrase |> ignore\n\nlet () = install_printer \"Nx.pp_data\"\n"
  },
  {
    "path": "packages/nx/vendor/camlzip/LICENSE",
    "content": "This Library is distributed under the terms of the GNU Lesser General\nPublic License (LGPL) version 2.1 or above (included below).\n\nAs a special exception to the GNU Lesser General Public License, you\nmay link, statically or dynamically, a \"work that uses the Library\"\nwith a publicly distributed version of the Library to produce an\nexecutable file containing portions of the Library, and distribute\nthat executable file under terms of your choice, without any of the\nadditional requirements listed in clause 6 of the GNU Lesser General\nPublic License.  By \"a publicly distributed version of the Library\",\nwe mean either the unmodified Library as distributed by INRIA, or a\nmodified version of the Library that is distributed under the\nconditions defined in clause 3 of the GNU Lesser General Public\nLicense.  This exception does not however invalidate any other reasons\nwhy the executable file might be covered by the GNU Lesser General\nPublic License.\n\n----------------------------------------------------------------------\n\n                   GNU LESSER GENERAL PUBLIC LICENSE\n                       Version 2.1, February 1999\n\n Copyright (C) 1991, 1999 Free Software Foundation, Inc.\n 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301  USA\n Everyone is permitted to copy and distribute verbatim copies\n of this license document, but changing it is not allowed.\n\n[This is the first released version of the Lesser GPL.  It also counts\n as the successor of the GNU Library Public License, version 2, hence\n the version number 2.1.]\n\n                            Preamble\n\n  The licenses for most software are designed to take away your\nfreedom to share and change it.  
By contrast, the GNU General Public\nLicenses are intended to guarantee your freedom to share and change\nfree software--to make sure the software is free for all its users.\n\n  This license, the Lesser General Public License, applies to some\nspecially designated software packages--typically libraries--of the\nFree Software Foundation and other authors who decide to use it.  You\ncan use it too, but we suggest you first think carefully about whether\nthis license or the ordinary General Public License is the better\nstrategy to use in any particular case, based on the explanations below.\n\n  When we speak of free software, we are referring to freedom of use,\nnot price.  Our General Public Licenses are designed to make sure that\nyou have the freedom to distribute copies of free software (and charge\nfor this service if you wish); that you receive source code or can get\nit if you want it; that you can change the software and use pieces of\nit in new free programs; and that you are informed that you can do\nthese things.\n\n  To protect your rights, we need to make restrictions that forbid\ndistributors to deny you these rights or to ask you to surrender these\nrights.  These restrictions translate to certain responsibilities for\nyou if you distribute copies of the library or if you modify it.\n\n  For example, if you distribute copies of the library, whether gratis\nor for a fee, you must give the recipients all the rights that we gave\nyou.  You must make sure that they, too, receive or can get the source\ncode.  If you link other code with the library, you must provide\ncomplete object files to the recipients, so that they can relink them\nwith the library after making changes to the library and recompiling\nit.  
And you must show them these terms so they know their rights.\n\n  We protect your rights with a two-step method: (1) we copyright the\nlibrary, and (2) we offer you this license, which gives you legal\npermission to copy, distribute and/or modify the library.\n\n  To protect each distributor, we want to make it very clear that\nthere is no warranty for the free library.  Also, if the library is\nmodified by someone else and passed on, the recipients should know\nthat what they have is not the original version, so that the original\nauthor's reputation will not be affected by problems that might be\nintroduced by others.\n\n  Finally, software patents pose a constant threat to the existence of\nany free program.  We wish to make sure that a company cannot\neffectively restrict the users of a free program by obtaining a\nrestrictive license from a patent holder.  Therefore, we insist that\nany patent license obtained for a version of the library must be\nconsistent with the full freedom of use specified in this license.\n\n  Most GNU software, including some libraries, is covered by the\nordinary GNU General Public License.  This license, the GNU Lesser\nGeneral Public License, applies to certain designated libraries, and\nis quite different from the ordinary General Public License.  We use\nthis license for certain libraries in order to permit linking those\nlibraries into non-free programs.\n\n  When a program is linked with a library, whether statically or using\na shared library, the combination of the two is legally speaking a\ncombined work, a derivative of the original library.  The ordinary\nGeneral Public License therefore permits such linking only if the\nentire combination fits its criteria of freedom.  
The Lesser General\nPublic License permits more lax criteria for linking other code with\nthe library.\n\n  We call this license the \"Lesser\" General Public License because it\ndoes Less to protect the user's freedom than the ordinary General\nPublic License.  It also provides other free software developers Less\nof an advantage over competing non-free programs.  These disadvantages\nare the reason we use the ordinary General Public License for many\nlibraries.  However, the Lesser license provides advantages in certain\nspecial circumstances.\n\n  For example, on rare occasions, there may be a special need to\nencourage the widest possible use of a certain library, so that it becomes\na de-facto standard.  To achieve this, non-free programs must be\nallowed to use the library.  A more frequent case is that a free\nlibrary does the same job as widely used non-free libraries.  In this\ncase, there is little to gain by limiting the free library to free\nsoftware only, so we use the Lesser General Public License.\n\n  In other cases, permission to use a particular library in non-free\nprograms enables a greater number of people to use a large body of\nfree software.  For example, permission to use the GNU C Library in\nnon-free programs enables many more people to use the whole GNU\noperating system, as well as its variant, the GNU/Linux operating\nsystem.\n\n  Although the Lesser General Public License is Less protective of the\nusers' freedom, it does ensure that the user of a program that is\nlinked with the Library has the freedom and the wherewithal to run\nthat program using a modified version of the Library.\n\n  The precise terms and conditions for copying, distribution and\nmodification follow.  Pay close attention to the difference between a\n\"work based on the library\" and a \"work that uses the library\".  
The\nformer contains code derived from the library, whereas the latter must\nbe combined with the library in order to run.\n\n                  GNU LESSER GENERAL PUBLIC LICENSE\n   TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION\n\n  0. This License Agreement applies to any software library or other\nprogram which contains a notice placed by the copyright holder or\nother authorized party saying it may be distributed under the terms of\nthis Lesser General Public License (also called \"this License\").\nEach licensee is addressed as \"you\".\n\n  A \"library\" means a collection of software functions and/or data\nprepared so as to be conveniently linked with application programs\n(which use some of those functions and data) to form executables.\n\n  The \"Library\", below, refers to any such software library or work\nwhich has been distributed under these terms.  A \"work based on the\nLibrary\" means either the Library or any derivative work under\ncopyright law: that is to say, a work containing the Library or a\nportion of it, either verbatim or with modifications and/or translated\nstraightforwardly into another language.  (Hereinafter, translation is\nincluded without limitation in the term \"modification\".)\n\n  \"Source code\" for a work means the preferred form of the work for\nmaking modifications to it.  For a library, complete source code means\nall the source code for all modules it contains, plus any associated\ninterface definition files, plus the scripts used to control compilation\nand installation of the library.\n\n  Activities other than copying, distribution and modification are not\ncovered by this License; they are outside its scope.  The act of\nrunning a program using the Library is not restricted, and output from\nsuch a program is covered only if its contents constitute a work based\non the Library (independent of the use of the Library in a tool for\nwriting it).  
Whether that is true depends on what the Library does\nand what the program that uses the Library does.\n  \n  1. You may copy and distribute verbatim copies of the Library's\ncomplete source code as you receive it, in any medium, provided that\nyou conspicuously and appropriately publish on each copy an\nappropriate copyright notice and disclaimer of warranty; keep intact\nall the notices that refer to this License and to the absence of any\nwarranty; and distribute a copy of this License along with the\nLibrary.\n\n  You may charge a fee for the physical act of transferring a copy,\nand you may at your option offer warranty protection in exchange for a\nfee.\n\n  2. You may modify your copy or copies of the Library or any portion\nof it, thus forming a work based on the Library, and copy and\ndistribute such modifications or work under the terms of Section 1\nabove, provided that you also meet all of these conditions:\n\n    a) The modified work must itself be a software library.\n\n    b) You must cause the files modified to carry prominent notices\n    stating that you changed the files and the date of any change.\n\n    c) You must cause the whole of the work to be licensed at no\n    charge to all third parties under the terms of this License.\n\n    d) If a facility in the modified Library refers to a function or a\n    table of data to be supplied by an application program that uses\n    the facility, other than as an argument passed when the facility\n    is invoked, then you must make a good faith effort to ensure that,\n    in the event an application does not supply such function or\n    table, the facility still operates, and performs whatever part of\n    its purpose remains meaningful.\n\n    (For example, a function in a library to compute square roots has\n    a purpose that is entirely well-defined independent of the\n    application.  
Therefore, Subsection 2d requires that any\n    application-supplied function or table used by this function must\n    be optional: if the application does not supply it, the square\n    root function must still compute square roots.)\n\nThese requirements apply to the modified work as a whole.  If\nidentifiable sections of that work are not derived from the Library,\nand can be reasonably considered independent and separate works in\nthemselves, then this License, and its terms, do not apply to those\nsections when you distribute them as separate works.  But when you\ndistribute the same sections as part of a whole which is a work based\non the Library, the distribution of the whole must be on the terms of\nthis License, whose permissions for other licensees extend to the\nentire whole, and thus to each and every part regardless of who wrote\nit.\n\nThus, it is not the intent of this section to claim rights or contest\nyour rights to work written entirely by you; rather, the intent is to\nexercise the right to control the distribution of derivative or\ncollective works based on the Library.\n\nIn addition, mere aggregation of another work not based on the Library\nwith the Library (or with a work based on the Library) on a volume of\na storage or distribution medium does not bring the other work under\nthe scope of this License.\n\n  3. You may opt to apply the terms of the ordinary GNU General Public\nLicense instead of this License to a given copy of the Library.  To do\nthis, you must alter all the notices that refer to this License, so\nthat they refer to the ordinary GNU General Public License, version 2,\ninstead of to this License.  (If a newer version than version 2 of the\nordinary GNU General Public License has appeared, then you can specify\nthat version instead if you wish.)  
Do not make any other change in\nthese notices.\n\n  Once this change is made in a given copy, it is irreversible for\nthat copy, so the ordinary GNU General Public License applies to all\nsubsequent copies and derivative works made from that copy.\n\n  This option is useful when you wish to copy part of the code of\nthe Library into a program that is not a library.\n\n  4. You may copy and distribute the Library (or a portion or\nderivative of it, under Section 2) in object code or executable form\nunder the terms of Sections 1 and 2 above provided that you accompany\nit with the complete corresponding machine-readable source code, which\nmust be distributed under the terms of Sections 1 and 2 above on a\nmedium customarily used for software interchange.\n\n  If distribution of object code is made by offering access to copy\nfrom a designated place, then offering equivalent access to copy the\nsource code from the same place satisfies the requirement to\ndistribute the source code, even though third parties are not\ncompelled to copy the source along with the object code.\n\n  5. A program that contains no derivative of any portion of the\nLibrary, but is designed to work with the Library by being compiled or\nlinked with it, is called a \"work that uses the Library\".  Such a\nwork, in isolation, is not a derivative work of the Library, and\ntherefore falls outside the scope of this License.\n\n  However, linking a \"work that uses the Library\" with the Library\ncreates an executable that is a derivative of the Library (because it\ncontains portions of the Library), rather than a \"work that uses the\nlibrary\".  
The executable is therefore covered by this License.\nSection 6 states terms for distribution of such executables.\n\n  When a \"work that uses the Library\" uses material from a header file\nthat is part of the Library, the object code for the work may be a\nderivative work of the Library even though the source code is not.\nWhether this is true is especially significant if the work can be\nlinked without the Library, or if the work is itself a library.  The\nthreshold for this to be true is not precisely defined by law.\n\n  If such an object file uses only numerical parameters, data\nstructure layouts and accessors, and small macros and small inline\nfunctions (ten lines or less in length), then the use of the object\nfile is unrestricted, regardless of whether it is legally a derivative\nwork.  (Executables containing this object code plus portions of the\nLibrary will still fall under Section 6.)\n\n  Otherwise, if the work is a derivative of the Library, you may\ndistribute the object code for the work under the terms of Section 6.\nAny executables containing that work also fall under Section 6,\nwhether or not they are linked directly with the Library itself.\n\n  6. As an exception to the Sections above, you may also combine or\nlink a \"work that uses the Library\" with the Library to produce a\nwork containing portions of the Library, and distribute that work\nunder terms of your choice, provided that the terms permit\nmodification of the work for the customer's own use and reverse\nengineering for debugging such modifications.\n\n  You must give prominent notice with each copy of the work that the\nLibrary is used in it and that the Library and its use are covered by\nthis License.  You must supply a copy of this License.  If the work\nduring execution displays copyright notices, you must include the\ncopyright notice for the Library among them, as well as a reference\ndirecting the user to the copy of this License.  
Also, you must do one\nof these things:\n\n    a) Accompany the work with the complete corresponding\n    machine-readable source code for the Library including whatever\n    changes were used in the work (which must be distributed under\n    Sections 1 and 2 above); and, if the work is an executable linked\n    with the Library, with the complete machine-readable \"work that\n    uses the Library\", as object code and/or source code, so that the\n    user can modify the Library and then relink to produce a modified\n    executable containing the modified Library.  (It is understood\n    that the user who changes the contents of definitions files in the\n    Library will not necessarily be able to recompile the application\n    to use the modified definitions.)\n\n    b) Use a suitable shared library mechanism for linking with the\n    Library.  A suitable mechanism is one that (1) uses at run time a\n    copy of the library already present on the user's computer system,\n    rather than copying library functions into the executable, and (2)\n    will operate properly with a modified version of the library, if\n    the user installs one, as long as the modified version is\n    interface-compatible with the version that the work was made with.\n\n    c) Accompany the work with a written offer, valid for at\n    least three years, to give the same user the materials\n    specified in Subsection 6a, above, for a charge no more\n    than the cost of performing this distribution.\n\n    d) If distribution of the work is made by offering access to copy\n    from a designated place, offer equivalent access to copy the above\n    specified materials from the same place.\n\n    e) Verify that the user has already received a copy of these\n    materials or that you have already sent this user a copy.\n\n  For an executable, the required form of the \"work that uses the\nLibrary\" must include any data and utility programs needed for\nreproducing the executable from it.  
However, as a special exception,\nthe materials to be distributed need not include anything that is\nnormally distributed (in either source or binary form) with the major\ncomponents (compiler, kernel, and so on) of the operating system on\nwhich the executable runs, unless that component itself accompanies\nthe executable.\n\n  It may happen that this requirement contradicts the license\nrestrictions of other proprietary libraries that do not normally\naccompany the operating system.  Such a contradiction means you cannot\nuse both them and the Library together in an executable that you\ndistribute.\n\n  7. You may place library facilities that are a work based on the\nLibrary side-by-side in a single library together with other library\nfacilities not covered by this License, and distribute such a combined\nlibrary, provided that the separate distribution of the work based on\nthe Library and of the other library facilities is otherwise\npermitted, and provided that you do these two things:\n\n    a) Accompany the combined library with a copy of the same work\n    based on the Library, uncombined with any other library\n    facilities.  This must be distributed under the terms of the\n    Sections above.\n\n    b) Give prominent notice with the combined library of the fact\n    that part of it is a work based on the Library, and explaining\n    where to find the accompanying uncombined form of the same work.\n\n  8. You may not copy, modify, sublicense, link with, or distribute\nthe Library except as expressly provided under this License.  Any\nattempt otherwise to copy, modify, sublicense, link with, or\ndistribute the Library is void, and will automatically terminate your\nrights under this License.  However, parties who have received copies,\nor rights, from you under this License will not have their licenses\nterminated so long as such parties remain in full compliance.\n\n  9. You are not required to accept this License, since you have not\nsigned it.  
However, nothing else grants you permission to modify or\ndistribute the Library or its derivative works.  These actions are\nprohibited by law if you do not accept this License.  Therefore, by\nmodifying or distributing the Library (or any work based on the\nLibrary), you indicate your acceptance of this License to do so, and\nall its terms and conditions for copying, distributing or modifying\nthe Library or works based on it.\n\n  10. Each time you redistribute the Library (or any work based on the\nLibrary), the recipient automatically receives a license from the\noriginal licensor to copy, distribute, link with or modify the Library\nsubject to these terms and conditions.  You may not impose any further\nrestrictions on the recipients' exercise of the rights granted herein.\nYou are not responsible for enforcing compliance by third parties with\nthis License.\n\n  11. If, as a consequence of a court judgment or allegation of patent\ninfringement or for any other reason (not limited to patent issues),\nconditions are imposed on you (whether by court order, agreement or\notherwise) that contradict the conditions of this License, they do not\nexcuse you from the conditions of this License.  If you cannot\ndistribute so as to satisfy simultaneously your obligations under this\nLicense and any other pertinent obligations, then as a consequence you\nmay not distribute the Library at all.  
For example, if a patent\nlicense would not permit royalty-free redistribution of the Library by\nall those who receive copies directly or indirectly through you, then\nthe only way you could satisfy both it and this License would be to\nrefrain entirely from distribution of the Library.\n\nIf any portion of this section is held invalid or unenforceable under any\nparticular circumstance, the balance of the section is intended to apply,\nand the section as a whole is intended to apply in other circumstances.\n\nIt is not the purpose of this section to induce you to infringe any\npatents or other property right claims or to contest validity of any\nsuch claims; this section has the sole purpose of protecting the\nintegrity of the free software distribution system which is\nimplemented by public license practices.  Many people have made\ngenerous contributions to the wide range of software distributed\nthrough that system in reliance on consistent application of that\nsystem; it is up to the author/donor to decide if he or she is willing\nto distribute software through any other system and a licensee cannot\nimpose that choice.\n\nThis section is intended to make thoroughly clear what is believed to\nbe a consequence of the rest of this License.\n\n  12. If the distribution and/or use of the Library is restricted in\ncertain countries either by patents or by copyrighted interfaces, the\noriginal copyright holder who places the Library under this License may add\nan explicit geographical distribution limitation excluding those countries,\nso that distribution is permitted only in or among countries not thus\nexcluded.  In such case, this License incorporates the limitation as if\nwritten in the body of this License.\n\n  13. 
The Free Software Foundation may publish revised and/or new\nversions of the Lesser General Public License from time to time.\nSuch new versions will be similar in spirit to the present version,\nbut may differ in detail to address new problems or concerns.\n\nEach version is given a distinguishing version number.  If the Library\nspecifies a version number of this License which applies to it and\n\"any later version\", you have the option of following the terms and\nconditions either of that version or of any later version published by\nthe Free Software Foundation.  If the Library does not specify a\nlicense version number, you may choose any version ever published by\nthe Free Software Foundation.\n\n  14. If you wish to incorporate parts of the Library into other free\nprograms whose distribution conditions are incompatible with these,\nwrite to the author to ask for permission.  For software which is\ncopyrighted by the Free Software Foundation, write to the Free\nSoftware Foundation; we sometimes make exceptions for this.  Our\ndecision will be guided by the two goals of preserving the free status\nof all derivatives of our free software and of promoting the sharing\nand reuse of software generally.\n\n                            NO WARRANTY\n\n  15. BECAUSE THE LIBRARY IS LICENSED FREE OF CHARGE, THERE IS NO\nWARRANTY FOR THE LIBRARY, TO THE EXTENT PERMITTED BY APPLICABLE LAW.\nEXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR\nOTHER PARTIES PROVIDE THE LIBRARY \"AS IS\" WITHOUT WARRANTY OF ANY\nKIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE\nIMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR\nPURPOSE.  THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE\nLIBRARY IS WITH YOU.  SHOULD THE LIBRARY PROVE DEFECTIVE, YOU ASSUME\nTHE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.\n\n  16. 
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN\nWRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY\nAND/OR REDISTRIBUTE THE LIBRARY AS PERMITTED ABOVE, BE LIABLE TO YOU\nFOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR\nCONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE\nLIBRARY (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING\nRENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A\nFAILURE OF THE LIBRARY TO OPERATE WITH ANY OTHER SOFTWARE), EVEN IF\nSUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH\nDAMAGES.\n\n                     END OF TERMS AND CONDITIONS\n\n           How to Apply These Terms to Your New Libraries\n\n  If you develop a new library, and you want it to be of the greatest\npossible use to the public, we recommend making it free software that\neveryone can redistribute and change.  You can do so by permitting\nredistribution under these terms (or, alternatively, under the terms of the\nordinary General Public License).\n\n  To apply these terms, attach the following notices to the library.  It is\nsafest to attach them to the start of each source file to most effectively\nconvey the exclusion of warranty; and each file should have at least the\n\"copyright\" line and a pointer to where the full notice is found.\n\n    <one line to give the library's name and a brief idea of what it does.>\n    Copyright (C) <year>  <name of author>\n\n    This library is free software; you can redistribute it and/or\n    modify it under the terms of the GNU Lesser General Public\n    License as published by the Free Software Foundation; either\n    version 2.1 of the License, or (at your option) any later version.\n\n    This library is distributed in the hope that it will be useful,\n    but WITHOUT ANY WARRANTY; without even the implied warranty of\n    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  
See the GNU\n    Lesser General Public License for more details.\n\n    You should have received a copy of the GNU Lesser General Public\n    License along with this library; if not, write to the Free Software\n    Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301  USA\n\nAlso add information on how to contact you by electronic and paper mail.\n\nYou should also get your employer (if you work as a programmer) or your\nschool, if any, to sign a \"copyright disclaimer\" for the library, if\nnecessary.  Here is a sample; alter the names:\n\n  Yoyodyne, Inc., hereby disclaims all copyright interest in the\n  library `Frob' (a library for tweaking knobs) written by James Random Hacker.\n\n  <signature of Ty Coon>, 1 April 1990\n  Ty Coon, President of Vice\n\nThat's all there is to it!\n"
  },
  {
    "path": "packages/nx/vendor/camlzip/config/discover.ml",
    "content": "module C = Configurator.V1\n\nlet () =\n  C.main ~name:\"zip\" (fun c ->\n      (* Fallback configuration used when pkg-config cannot locate zlib:\n         link against -lz with no extra compiler flags. *)\n      let default_zlib : C.Pkg_config.package_conf =\n        { libs = [ \"-lz\" ]; cflags = [] }\n      in\n      let conf =\n        match C.Pkg_config.get c with\n        | None -> C.die \"'pkg-config' missing\"\n        | Some pc -> (\n            match C.Pkg_config.query pc ~package:\"zlib\" with\n            | None -> default_zlib\n            | Some deps -> deps)\n      in\n      (* Add -fPIC on Linux and BSD systems for position-independent code.\n         This is required when building shared libraries on x86-64 Linux to\n         avoid relocation errors such as \"relocation R_X86_64_32 against\n         `.data' can not be used when making a shared object\". *)\n      let cflags =\n        match C.ocaml_config_var c \"system\" with\n        | Some \"linux\" | Some \"freebsd\" | Some \"netbsd\" | Some \"openbsd\"\n        | Some \"dragonfly\" | Some \"gnu\" -> \"-fPIC\" :: conf.cflags\n        | _ -> conf.cflags\n      in\n      C.Flags.write_sexp \"c_flags.sexp\" cflags;\n      C.Flags.write_sexp \"c_library_flags.sexp\" conf.libs)\n"
  },
  {
    "path": "packages/nx/vendor/camlzip/config/dune",
    "content": "(executable\n (name discover)\n (libraries dune-configurator))\n"
  },
  {
    "path": "packages/nx/vendor/camlzip/dune",
    "content": "(library\n (name zip)\n (public_name nx.zip)\n (synopsis \"OCaml ZIP interface\")\n (wrapped false)\n (modules gzip zip zlib)\n (foreign_stubs\n  (language c)\n  (names zlibstubs)\n  (flags\n   (:include c_flags.sexp)))\n (c_library_flags\n  (:include c_library_flags.sexp)))\n\n(rule\n (targets c_flags.sexp c_library_flags.sexp)\n (deps\n  (:discover config/discover.exe))\n (action\n  (run %{discover})))\n"
  },
  {
    "path": "packages/nx/vendor/camlzip/gzip.ml",
    "content": "(***********************************************************************)\n(*                                                                     *)\n(*                         The CamlZip library                         *)\n(*                                                                     *)\n(*            Xavier Leroy, projet Cristal, INRIA Rocquencourt         *)\n(*                                                                     *)\n(*  Copyright 2001 Institut National de Recherche en Informatique et   *)\n(*  en Automatique.  All rights reserved.  This file is distributed    *)\n(*  under the terms of the GNU Library General Public License, with    *)\n(*  the special exception on linking described in file LICENSE.        *)\n(*                                                                     *)\n(***********************************************************************)\n\n(* $Id$ *)\n\n(* Module [Gzip]: reading and writing to/from [gzip] compressed files *)\n\nexception Error of string\n\nlet buffer_size = 1024\n\ntype in_channel =\n  { in_chan: Stdlib.in_channel;\n    in_buffer: bytes;\n    mutable in_pos: int;\n    mutable in_avail: int;\n    mutable in_eof: bool;\n    in_stream: Zlib.stream;\n    mutable in_size: int32;\n    mutable in_crc: int32 }\n\nlet open_in_chan ic =\n  (* Superficial parsing of header *)\n  begin try\n    let id1 = input_byte ic in\n    let id2 = input_byte ic in\n    if id1 <> 0x1F || id2 <> 0x8B then\n      raise(Error(\"bad magic number, not a gzip file\"));\n    let cm = input_byte ic in\n    if cm <> 8 then\n      raise(Error(\"unknown compression method\"));\n    let flags = input_byte ic in\n    if flags land 0xE0 <> 0 then\n      raise(Error(\"bad flags, not a gzip file\"));\n    for i = 1 to 6 do ignore(input_byte ic) done;\n    if flags land 0x04 <> 0 then begin\n      (* Skip extra data *)\n      let len1 = input_byte ic in\n      let len2 = input_byte ic in\n      for i = 1 to len1 + len2 lsl 8 
do ignore(input_byte ic) done\n    end;\n    if flags land 0x08 <> 0 then begin\n      (* Skip original file name *)\n      while input_byte ic <> 0 do () done\n    end;\n    if flags land 0x10 <> 0 then begin\n      (* Skip comment *)\n      while input_byte ic <> 0 do () done\n    end;\n    if flags land 0x02 <> 0 then begin\n      (* Skip header CRC *)\n      ignore(input_byte ic); ignore(input_byte ic)\n    end\n  with End_of_file ->\n    raise(Error(\"premature end of file, not a gzip file\"))\n  end;\n  { in_chan = ic;\n    in_buffer = Bytes.create buffer_size;\n    in_pos = 0;\n    in_avail = 0;\n    in_eof = false;\n    in_stream = Zlib.inflate_init false;\n    in_size = Int32.zero;\n    in_crc = Int32.zero }\n\nlet open_in filename =\n  let ic = Stdlib.open_in_bin filename in\n  try\n    open_in_chan ic\n  with exn ->\n    Stdlib.close_in ic; raise exn\n\nlet read_byte iz =\n  if iz.in_avail = 0 then begin\n    let n = Stdlib.input iz.in_chan iz.in_buffer 0\n                             (Bytes.length iz.in_buffer) in\n    if n = 0 then raise End_of_file;\n    iz.in_pos <- 0;\n    iz.in_avail <- n\n  end;\n  let c = Bytes.get iz.in_buffer iz.in_pos in\n  iz.in_pos <- iz.in_pos + 1;\n  iz.in_avail <- iz.in_avail - 1;\n  Char.code c\n\nlet read_int32 iz =\n  let b1 = read_byte iz in\n  let b2 = read_byte iz in\n  let b3 = read_byte iz in\n  let b4 = read_byte iz in\n  Int32.logor (Int32.of_int b1)\n    (Int32.logor (Int32.shift_left (Int32.of_int b2) 8)\n      (Int32.logor (Int32.shift_left (Int32.of_int b3) 16)\n                   (Int32.shift_left (Int32.of_int b4) 24)))\n\nlet rec input iz buf pos len =\n  if pos < 0 || len < 0 || pos + len > Bytes.length buf then\n    invalid_arg \"Gzip.input\";\n  if iz.in_eof then 0 else begin\n    if iz.in_avail = 0 then begin\n      let n = Stdlib.input iz.in_chan iz.in_buffer 0\n                               (Bytes.length iz.in_buffer) in\n      if n = 0 then raise(Error(\"truncated file\"));\n      iz.in_pos <- 
0;\n      iz.in_avail <- n\n    end;\n    let (finished, used_in, used_out) =\n      try\n        Zlib.inflate iz.in_stream iz.in_buffer iz.in_pos iz.in_avail\n                                   buf pos len Zlib.Z_SYNC_FLUSH\n      with Zlib.Error(_, _) ->\n        raise(Error(\"error during decompression\")) in\n    iz.in_pos <- iz.in_pos + used_in;\n    iz.in_avail <- iz.in_avail - used_in;\n    iz.in_crc <- Zlib.update_crc iz.in_crc buf pos used_out;\n    iz.in_size <- Int32.add iz.in_size (Int32.of_int used_out);\n    if finished then begin\n      try\n        let crc = read_int32 iz in\n        let size = read_int32 iz in\n        if iz.in_crc <> crc then\n          raise(Error(\"CRC mismatch, data corrupted\"));\n        if iz.in_size <> size then\n          raise(Error(\"size mismatch, data corrupted\"));\n        iz.in_eof <- true;\n        used_out\n      with End_of_file ->\n        raise(Error(\"truncated file\"))\n    end\n    else if used_out = 0 then\n      input iz buf pos len\n    else\n      used_out\n  end\n\nlet rec really_input iz buf pos len =\n  if len <= 0 then () else begin\n    let n = input iz buf pos len in\n    if n = 0 then raise End_of_file;\n    really_input iz buf (pos + n) (len - n)\n  end\n\nlet char_buffer = Bytes.create 1\n\nlet input_char iz =\n  if input iz char_buffer 0 1 = 0\n  then raise End_of_file\n  else Bytes.get char_buffer 0\n\nlet input_byte iz =\n  Char.code (input_char iz)\n\nlet dispose iz =\n  iz.in_eof <- true;\n  Zlib.inflate_end iz.in_stream\n\nlet close_in iz =\n  dispose iz;\n  Stdlib.close_in iz.in_chan\n\ntype out_channel =\n  { out_chan: Stdlib.out_channel;\n    out_buffer: bytes;\n    mutable out_pos: int;\n    mutable out_avail: int;\n    out_stream: Zlib.stream;\n    mutable out_size: int32;\n    mutable out_crc: int32 }\n\nlet open_out_chan ?(level = 6) oc =\n  if level < 1 || level > 9 then invalid_arg \"Gzip.open_out: bad level\";\n  (* Write minimal header *)\n  output_byte oc 0x1F;                  
(* ID1 *)\n  output_byte oc 0x8B;                  (* ID2 *)\n  output_byte oc 8;                     (* compression method *)\n  output_byte oc 0;                     (* flags *)\n  for i = 1 to 4 do output_byte oc 0 done; (* mtime *)\n  output_byte oc 0;                     (* xflags *)\n  output_byte oc 0xFF;                  (* OS (unknown) *)\n  { out_chan = oc;\n    out_buffer = Bytes.create buffer_size;\n    out_pos = 0;\n    out_avail = buffer_size;\n    out_stream = Zlib.deflate_init level false;\n    out_size = Int32.zero;\n    out_crc = Int32.zero }\n\nlet open_out ?(level = 6) filename =\n  open_out_chan ~level (Stdlib.open_out_bin filename)\n\nlet flush_and_reset_out_buffer oz =\n  Stdlib.output oz.out_chan oz.out_buffer 0 oz.out_pos;\n  oz.out_pos <- 0;\n  oz.out_avail <- Bytes.length oz.out_buffer\n\nlet rec output oz buf pos len =\n  if pos < 0 || len < 0 || pos + len > Bytes.length buf then\n    invalid_arg \"Gzip.output\";\n  (* If output buffer is full, flush it *)\n  if oz.out_avail = 0 then flush_and_reset_out_buffer oz;\n  (* Patch request #1428: Zlib disallows zero-length writes *)\n  if len > 0 then begin\n    let (_, used_in, used_out) =\n      try\n        Zlib.deflate oz.out_stream buf pos len\n                                   oz.out_buffer oz.out_pos oz.out_avail\n                                   Zlib.Z_NO_FLUSH\n      with Zlib.Error(_, _) ->\n        raise (Error(\"error during compression\")) in\n    oz.out_pos <- oz.out_pos + used_out;\n    oz.out_avail <- oz.out_avail - used_out;\n    oz.out_size <- Int32.add oz.out_size (Int32.of_int used_in);\n    oz.out_crc <- Zlib.update_crc oz.out_crc buf pos used_in;\n    if used_in < len then output oz buf (pos + used_in) (len - used_in)\n  end\n\nlet output_substring oz buf pos len =\n  output oz (Bytes.unsafe_of_string buf) pos len\n\nlet output_char oz c =\n  Bytes.set char_buffer 0 c;\n  output oz char_buffer 0 1\n\nlet output_byte oz b =\n  output_char oz (Char.unsafe_chr b)\n\nlet 
write_int32 oc n =\n  let r = ref n in\n  for i = 1 to 4 do\n    Stdlib.output_byte oc (Int32.to_int !r);\n    r := Int32.shift_right_logical !r 8\n  done\n\nlet flush_to_out_chan ~flush_command oz =\n  let rec do_flush () =\n    (* If output buffer is full, flush it *)\n    if oz.out_avail = 0 then flush_and_reset_out_buffer oz;\n    let (finished, _, used_out) =\n      Zlib.deflate oz.out_stream oz.out_buffer 0 0\n                                 oz.out_buffer oz.out_pos oz.out_avail\n                                 flush_command in\n    oz.out_pos <- oz.out_pos + used_out;\n    oz.out_avail <- oz.out_avail - used_out;\n    (* When we use the Z_FINISH command, we must retry if finished is false. For all other\n     * flush commands, we should retry if we have filled the output buffer *)\n    let continue = (flush_command = Zlib.Z_FINISH && not finished) || oz.out_avail = 0 in\n    if continue then do_flush() in\n  do_flush();\n  (* Final data flush *)\n  if oz.out_pos > 0 then flush_and_reset_out_buffer oz\n\nlet flush_continue oz =\n  (* Flush everything to the underlying file channel, then flush the channel. *)\n  flush_to_out_chan ~flush_command:Zlib.Z_SYNC_FLUSH oz;\n  Stdlib.flush oz.out_chan\n\nlet flush oz =\n  (* Flush everything to the output channel. *)\n  flush_to_out_chan ~flush_command:Zlib.Z_FINISH oz;\n  (* Write CRC and size *)\n  write_int32 oz.out_chan oz.out_crc;\n  write_int32 oz.out_chan oz.out_size;\n  (* Dispose of stream *)\n  Zlib.deflate_end oz.out_stream\n\nlet close_out oz =\n  flush oz;\n  Stdlib.close_out oz.out_chan\n\n"
  },
  {
    "path": "packages/nx/vendor/camlzip/gzip.mli",
    "content": "(***********************************************************************)\n(*                                                                     *)\n(*                         The CamlZip library                         *)\n(*                                                                     *)\n(*            Xavier Leroy, projet Cristal, INRIA Rocquencourt         *)\n(*                                                                     *)\n(*  Copyright 2001 Institut National de Recherche en Informatique et   *)\n(*  en Automatique.  All rights reserved.  This file is distributed    *)\n(*  under the terms of the GNU Library General Public License, with    *)\n(*  the special exception on linking described in file LICENSE.        *)\n(*                                                                     *)\n(***********************************************************************)\n\n(* $Id$ *)\n\n(** Reading and writing to/from [gzip] compressed files\n\n   This module provides functions to read and write compressed data\n   to/from files in [gzip] format. *)\n\n(** {1 Reading from compressed files} *)\n\ntype in_channel\n       (** Abstract type representing a channel opened for reading\n           from a compressed file. *)\nval open_in: string -> in_channel\n       (** Open a compressed file for reading.  The argument is the file\n           name. *)\nval open_in_chan: Stdlib.in_channel -> in_channel\n       (** Open a compressed file for reading.  The argument is a\n           regular file channel already opened on the compressed file. *)\nval input_char: in_channel -> char\n       (** Uncompress one character from the given channel, and return it.\n           Raise [End_of_file] if no more compressed data is available. *)\nval input_byte: in_channel -> int\n       (** Same as [Gzip.input_char], but return the 8-bit integer\n           representing the character.\n           Raise [End_of_file] if no more compressed data is available. 
*)\nval input: in_channel -> bytes -> int -> int -> int\n       (** [input ic buf pos len] uncompresses up to [len] characters\n           from the given channel [ic],\n           storing them in string [buf], starting at character number [pos].\n           It returns the actual number of characters read, between 0 and\n           [len] (inclusive).\n           A return value of 0 means that the end of file was reached.\n           A return value between 0 and [len] exclusive means that\n           not all requested [len] characters were read, either because\n           no more characters were available at that time, or because\n           the implementation found it convenient to do a partial read;\n           [input] must be called again to read the remaining characters,\n           if desired.  (See also [Gzip.really_input] for reading\n           exactly [len] characters.)\n           Exception [Invalid_argument \"Gzip.input\"] is raised if\n           [pos] and [len] do not designate a valid substring of [buf]. *)\nval really_input: in_channel -> bytes -> int -> int -> unit\n       (** [really_input ic buf pos len] uncompresses [len] characters\n           from the given channel, storing them in\n           string [buf], starting at character number [pos].\n           Raise [End_of_file] if fewer than [len] characters can be read.\n           Raise [Invalid_argument \"Gzip.input\"] if\n           [pos] and [len] do not designate a valid substring of [buf]. *)\nval close_in: in_channel -> unit\n       (** Close the given input channel.  If the channel was created with\n           [Gzip.open_in_chan], the underlying regular file channel\n           (of type [Stdlib.in_channel]) is also closed.\n           Do not apply any of the functions above to a closed channel. 
*)\nval dispose: in_channel -> unit\n       (** Same as [Gzip.close_in], but does not close the underlying\n           regular file channel (of type [Stdlib.in_channel]);\n           just dispose of the resources associated with the decompression\n           channel.  This can be useful if e.g. the underlying file channel\n           is a network socket on which more (uncompressed) data\n           is expected. *)\n\n(** {1 Writing to compressed files} *)\n\ntype out_channel\n       (** Abstract type representing a channel opened for writing\n           to a compressed file. *)\nval open_out: ?level:int -> string -> out_channel\n       (** Open a compressed file for writing.  The argument is the file\n           name.  The file is created if it does not exist, or\n           truncated to zero length if it exists.\n           The optional [level] argument (an integer between 1 and 9)\n           indicates the compression level, with 1 being the weakest\n           (but fastest) compression and 9 being the strongest\n           (but slowest) compression.  The default level is 6\n           (medium compression). *)\nval open_out_chan: ?level:int -> Stdlib.out_channel -> out_channel\n       (** Open a compressed file for writing.  The argument is a\n           regular file channel already opened on the compressed file.\n           The optional [level] argument sets the compression level\n           as documented for [Gzip.open_out]. *)\nval output_char: out_channel -> char -> unit\n       (** Output one character to the given compressed channel. *)\nval output_byte: out_channel -> int -> unit\n       (** Same as [Gzip.output_char], but the output character is given\n           by its code.  The given integer is taken modulo 256. 
*)\nval output: out_channel -> bytes -> int -> int -> unit\n       (** [output oc buf pos len] compresses and writes [len] characters\n           from string [buf], starting at offset [pos], and writes the\n           compressed data to the channel [oc].\n           Raise [Invalid_argument \"Gzip.output\"] if\n           [pos] and [len] do not designate a valid substring of [buf]. *)\nval output_substring: out_channel -> string -> int -> int -> unit\n       (** Same as [output], but takes a string as argument instead of\n           a byte sequence.\n          @since 1.06 *)\nval close_out: out_channel -> unit\n       (** Close the given output channel.  If the channel was created with\n           [Gzip.open_out_chan], the underlying regular file channel\n           (of type [Stdlib.out_channel]) is also closed.\n           Do not apply any of the functions above to a closed channel. *)\nval flush: out_channel -> unit\n       (** Same as [Gzip.close_out], but do not close the underlying\n           regular file channel (of type [Stdlib.out_channel]);\n           just flush all pending compressed data and\n           dispose of the resources associated with the compression\n           channel.  This can be useful if e.g. the underlying file channel\n           is a network socket on which more data is to be sent. *)\nval flush_continue: out_channel -> unit\n       (** Flush all pending compressed data through both the compression\n           channel and the underlying regular file channel, but keep both\n           channels open to accept further data. *)\n\n(** {1 Error reporting} *)\n\nexception Error of string\n       (** Exception raised by the functions above to signal errors during\n           compression or decompression, or ill-formed input files. *)\n"
  },
  {
    "path": "packages/nx/vendor/camlzip/zip.ml",
    "content": "(***********************************************************************)\n(*                                                                     *)\n(*                         The CamlZip library                         *)\n(*                                                                     *)\n(*            Xavier Leroy, projet Cristal, INRIA Rocquencourt         *)\n(*                                                                     *)\n(*  Copyright 2001 Institut National de Recherche en Informatique et   *)\n(*  en Automatique.  All rights reserved.  This file is distributed    *)\n(*  under the terms of the GNU Lesser General Public License, with     *)\n(*  the special exception on linking described in file LICENSE.        *)\n(*                                                                     *)\n(***********************************************************************)\n\n(* $Id$ *)\n\n(* Module [Zip]: reading and writing ZIP archives *)\n\nexception Error of string * string * string\n\nlet int64_of_uint32 n =\n  Int64.(logand (of_int32 n) 0xFFFF_FFFFL)\n\nlet read1 = input_byte\nlet read2 ic =\n  let lb = read1 ic in let hb = read1 ic in lb lor (hb lsl 8)\nlet read4 ic =\n  let lw = read2 ic in let hw = read2 ic in\n  Int32.logor (Int32.of_int lw) (Int32.shift_left (Int32.of_int hw) 16)\nlet read8 ic =\n  let ll = read4 ic in let hl = read4 ic in\n  Int64.logor (int64_of_uint32 ll) (Int64.shift_left (int64_of_uint32 hl) 32)\n\nlet readstring ic n =\n  let s = Bytes.create n in\n  really_input ic s 0 n; Bytes.unsafe_to_string s\n\nlet write1 = output_byte\nlet write2 oc n =\n  write1 oc n; write1 oc (n lsr 8)\nlet write4 oc n =\n  write2 oc (Int32.to_int n);\n  write2 oc (Int32.to_int (Int32.shift_right_logical n 16))\nlet write8 oc n =\n  write4 oc (Int64.to_int32 n);\n  write4 oc (Int64.to_int32 (Int64.shift_right_logical n 32))\nlet writestring oc s =\n  output_string oc s\n\ntype compression_method = Stored | Deflated\n\ntype 
entry =\n  { filename: string;\n    comment: string;\n    methd: compression_method;\n    mtime: float;\n    crc: int32;\n    uncompressed_size: int;\n    compressed_size: int;\n    is_directory: bool;\n    file_offset: int64 }\n\ntype in_file =\n  { if_filename: string;\n    if_channel: Stdlib.in_channel;\n    if_entries: entry list;\n    if_directory: (string, entry) Hashtbl.t;\n    if_comment: string }\n\nlet entries ifile = ifile.if_entries\nlet comment ifile = ifile.if_comment\n\ntype out_file =\n  { of_filename: string;\n    of_channel: Stdlib.out_channel;\n    mutable of_entries: entry list;\n    of_comment: string }\n\n(* Return the position of the last occurrence of [pattern] in [buf],\n   or -1 if not found. *)\n\nlet strrstr (pattern: string) (buf: bytes) ofs len =\n  let rec search i j =\n    if i < ofs then -1\n    else if j >= String.length pattern then i\n    else if String.get pattern j = Bytes.get buf (i + j) then search i (j+1)\n    else search (i-1) 0\n  in search (ofs + len - String.length pattern) 0\n\n(* Determine if a file name is a directory (ends with /) *)\n\nlet filename_is_directory name =\n  String.length name > 0 && name.[String.length name - 1] = '/'\n\n(* Convert between Unix dates and DOS dates *)\n\nlet unixtime_of_dostime time date =\n  fst(Unix.mktime\n        { Unix.tm_sec = (time lsl 1) land 0x3e;\n          Unix.tm_min = (time lsr 5) land 0x3f;\n          Unix.tm_hour = (time lsr 11) land 0x1f;\n          Unix.tm_mday = date land 0x1f;\n          Unix.tm_mon = ((date lsr 5) land 0xf) - 1;\n          Unix.tm_year = ((date lsr 9) land 0x7f) + 80;\n          Unix.tm_wday = 0;\n          Unix.tm_yday = 0;\n          Unix.tm_isdst = false })\n\nlet dostime_of_unixtime t =\n  let tm = Unix.localtime t in\n  (tm.Unix.tm_sec lsr 1\n     + (tm.Unix.tm_min lsl 5)\n     + (tm.Unix.tm_hour lsl 11),\n   tm.Unix.tm_mday\n     + (tm.Unix.tm_mon + 1) lsl 5\n     + (tm.Unix.tm_year - 80) lsl 9)\n\n(* Parse the extra fields attached to some 
other structures *)\n\nlet parse_extra_field ef =\n  let rec parse accu pos =\n    if pos + 4 > String.length ef then List.rev accu else begin\n      let id = String.get_uint16_le ef pos in\n      let sz = String.get_uint16_le ef (pos + 2) in\n      let sz = min sz (String.length ef - (pos + 4)) in\n      let data = String.sub ef (pos + 4) sz in\n      parse ((id, data) :: accu) (pos + 4 + sz)\n    end\n  in parse [] 0\n\n(* Locate the end of central directory record *)\n\nlet locate_ecd filename ic =\n  let buf = Bytes.create 256 in\n  let filelen = LargeFile.in_channel_length ic in\n  let rec find_ecd pos len =\n    (* On input, bytes 0 ... len - 1 of buf reflect what is at pos in ic *)\n    if pos <= 0L || Int64.sub filelen pos >= 0x10000L then\n      raise (Error(filename, \"\",\n                   \"end of central directory not found, not a ZIP file\"));\n    let toread = if pos >= 128L then 128 else Int64.to_int pos in\n    (* Make room for \"toread\" extra bytes, and read them *)\n    Bytes.blit buf 0 buf toread (256 - toread);\n    let newpos = Int64.(sub pos (of_int toread)) in\n    LargeFile.seek_in ic newpos;\n    really_input ic buf 0 toread;\n    let newlen = min (toread + len) 256 in\n    (* Search for magic number *)\n    let ofs = strrstr \"PK\\005\\006\" buf 0 newlen in\n    if ofs >= 0 && newlen >= 22 &&\n       (let comment_len = Bytes.get_uint16_le buf (ofs + 20) in\n        Int64.(add newpos (of_int (ofs + 22 + comment_len)))= filelen)\n    then Int64.(add newpos (of_int ofs))\n    else find_ecd newpos newlen\n  in find_ecd filelen 0\n\n(* Read ZIP64 end of central directory record locator *)\n\nlet read_ecd64_locator filename ic ecd_pos =\n  if ecd_pos < 20L then\n    raise(Error(filename, \"\", \"ZIP64 ECD record locator missing\"));\n  let ecd64_locator_pos = Int64.(sub ecd_pos (of_int 20)) in\n  LargeFile.seek_in ic ecd64_locator_pos ;\n  let magic = read4 ic in\n  if magic <> 0x07064b50l then\n    raise(Error(filename, \"\", \"ZIP64 ECD 
record locator missing\"));\n  let disk_no = read4 ic in\n  let ecd64_offset = read8 ic in\n  let n_disks = read4 ic in\n  if disk_no <> 0l || n_disks <> 0l then\n    raise (Error(filename, \"\", \"multi-disk ZIP files not supported\"));\n  ecd64_offset\n\n(* Read ZIP64 end of central directory record *)\n\ntype cd_info = {\n  cd_offset: int64;   (* file position of start of CD *)\n  cd_size: int64;     (* size of CD in bytes *)\n  cd_count: int64;    (* number of CD entries *)\n  ecd_comment: string\n}\n\nlet read_ecd64 filename ic ecd_pos comment =\n  let ecd64_pos = read_ecd64_locator filename ic ecd_pos in\n  LargeFile.seek_in ic ecd64_pos ;\n  let magic = read4 ic in\n  if magic <> 0x06064b50l then\n    raise(Error(filename, \"\", \"ZIP64 ECD record missing\"));\n  let _size = read8 ic in\n  let _version_made_by = read2 ic in\n  let version_needed = read2 ic in\n  let n_disks = read4 ic in\n  let cd_disk_no = read4 ic in\n  let _disk_n_entries = read8 ic in\n  let cd_count = read8 ic in\n  let cd_size = read8 ic in\n  let cd_offset = read8 ic in\n  if version_needed > 45 then\n    raise(Error(filename, \"\", \"unsupported ZIP version\"));\n  if cd_disk_no <> 0l || n_disks <> 0l then\n    raise (Error(filename, \"\", \"multi-disk ZIP files not supported\"));\n  { cd_offset; cd_size; cd_count; ecd_comment = comment }\n\n(* Read end of central directory record *)\n\nlet read_ecd filename ic =\n  let ecd_pos = locate_ecd filename ic in\n  LargeFile.seek_in ic ecd_pos;\n  let magic = read4 ic in\n  let disk_no = read2 ic in\n  let cd_disk_no = read2 ic in\n  let _disk_entries = read2 ic in\n  let cd_entries = read2 ic in\n  let cd_size = read4 ic in\n  let cd_offset = read4 ic in\n  let comment_len = read2 ic in\n  let comment = readstring ic comment_len in\n  assert (magic = Int32.of_int 0x06054b50);\n  if disk_no <> 0 || cd_disk_no <> 0 then\n    raise (Error(filename, \"\", \"multi-disk ZIP files not supported\"));\n  if cd_offset = 0xffff_ffffl || cd_size = 
0xffff_ffffl then\n    read_ecd64 filename ic ecd_pos comment\n  else\n    { cd_offset = int64_of_uint32 cd_offset;\n      cd_size = int64_of_uint32 cd_size;\n      cd_count = Int64.of_int cd_entries;\n      ecd_comment = comment }\n\n(* Fixup sizes from a ZIP64 extended information extra field *)\n\nlet fixup_sizes extra uncompressed_size compressed_size offset =\n  let pos = ref 0 in\n  let process orig =\n    if orig <> 0xFFFF_FFFFl then\n      int64_of_uint32 orig\n    else begin\n      let newval = String.get_int64_le extra !pos in\n      pos := !pos + 8;\n      newval\n    end in\n  let uncompressed_size = process uncompressed_size in\n  let compressed_size = process compressed_size in\n  let offset = process offset in\n  (uncompressed_size, compressed_size, offset)\n\n(* Read central directory entry *)\n\nlet read_directory_entry filename ic =\n  let magic = read4 ic in\n  if magic <> 0x02014b50l then\n    raise (Error(filename, \"\", \"wrong file header in central directory\"));\n  let _version_made_by = read2 ic in\n  let version_needed = read2 ic in\n  let flags = read2 ic in\n  let methd = read2 ic in\n  let lastmod_time = read2 ic in\n  let lastmod_date = read2 ic in\n  let crc = read4 ic in\n  let compr_size = read4 ic in\n  let uncompr_size = read4 ic in\n  let name_len = read2 ic in\n  let extra_len = read2 ic in\n  let comment_len = read2 ic in\n  let _disk_number = read2 ic in\n  let _internal_attr = read2 ic in\n  let _external_attr = read4 ic in\n  let header_offset = read4 ic in\n  let name = readstring ic name_len in\n  let extra = readstring ic extra_len in\n  let comment = readstring ic comment_len in\n  if version_needed > 45 then\n    raise(Error(filename, name, \"unsupported ZIP version\"));\n  if flags land 1 <> 0 then\n    raise (Error(filename, name, \"encrypted entries not supported\"));\n  let (uncompressed_size, compressed_size, file_offset) =\n    if compr_size <> 0xffff_ffffl\n    && uncompr_size <> 0xffff_ffffl\n    && 
header_offset <> 0xffff_ffffl\n    then\n      (int64_of_uint32 uncompr_size,\n       int64_of_uint32 compr_size,\n       int64_of_uint32 header_offset)\n    else begin\n      match List.assoc_opt 1 (parse_extra_field extra) with\n      | None ->\n          raise(Error(filename, name, \"ZIP64 extensible data record missing\"))\n      | Some e ->\n          fixup_sizes e uncompr_size compr_size header_offset\n    end in\n  let int_of_uint64 n =\n    if n >= 0L && n <= Int64.of_int max_int\n    then Int64.to_int n\n    else raise(Error(filename, name, \"size too large to be represented\"))\n  in\n  { filename = name;\n    comment = comment;\n    methd = (match methd with\n             | 0 -> Stored\n             | 8 -> Deflated\n             | _ -> raise (Error(filename, name,\n                                     \"unknown compression method\")));\n    mtime = unixtime_of_dostime lastmod_time lastmod_date;\n    crc = crc;\n    uncompressed_size = int_of_uint64 uncompressed_size;\n    compressed_size = int_of_uint64 compressed_size;\n    is_directory = filename_is_directory name;\n    file_offset\n  }  \n\n(* Read central directory *)\n\nlet read_cd filename ic cdinfo =\n  try\n    LargeFile.seek_in ic cdinfo.cd_offset;\n    let entries = ref [] in\n    let entrycnt = ref Int64.zero in\n    let cd_bound = Int64.add cdinfo.cd_offset cdinfo.cd_size in\n    while LargeFile.pos_in ic < cd_bound do\n      entrycnt := Int64.(add !entrycnt one) ;\n      let e = read_directory_entry filename ic in\n      entries := e :: !entries\n    done;\n    if cd_bound <> LargeFile.pos_in ic\n    || (cdinfo.cd_count <> !entrycnt && cdinfo.cd_count <> 0xFFFFL)\n    then\n      raise(Error(filename, \"\",\n                  \"wrong number of entries in central directory\"));\n    List.rev !entries\n  with End_of_file ->\n    raise (Error(filename, \"\", \"end-of-file while reading central directory\"))\n\n(* Open a ZIP file for reading *)\n\nlet open_in filename =\n  let ic = 
Stdlib.open_in_bin filename in\n  try\n    let cdinfo = read_ecd filename ic in\n    let entries = read_cd filename ic cdinfo in\n    let table_size =\n      match Int64.(div cdinfo.cd_count 3L |> unsigned_to_int) with\n        Some sz -> sz\n      | None -> 65535 in\n    let dir = Hashtbl.create table_size in\n    List.iter (fun e -> Hashtbl.add dir e.filename e) entries;\n    { if_filename = filename;\n      if_channel = ic;\n      if_entries = entries;\n      if_directory = dir;\n      if_comment = cdinfo.ecd_comment }\n  with exn ->\n    Stdlib.close_in ic; raise exn\n\n(* Close a ZIP file opened for reading *)\n\nlet close_in ifile =\n  Stdlib.close_in ifile.if_channel\n\n(* Return the info associated with an entry *)\n\nlet find_entry ifile name =\n  Hashtbl.find ifile.if_directory name\n\n(* Position on an entry *)\n\nlet goto_entry ifile e =\n  try\n    let ic = ifile.if_channel in\n    LargeFile.seek_in ic e.file_offset;\n    let magic = read4 ic in\n    if magic <> 0x04034b50l then\n       raise (Error(ifile.if_filename, e.filename, \"wrong local file header\"));\n    let _version_needed = read2 ic in\n    let _flags = read2 ic in\n    let _methd = read2 ic in\n    let _lastmod_time = read2 ic in\n    let _lastmod_date = read2 ic in\n    let _crc = read4 ic in\n    let _compr_size = read4 ic in\n    let _uncompr_size = read4 ic in\n    let filename_len = read2 ic in\n    let extra_len = read2 ic in\n    (* Could validate information read against directory entry, but\n       what the heck *)\n    LargeFile.seek_in ifile.if_channel\n      (Int64.add e.file_offset (Int64.of_int (30 + filename_len + extra_len)))\n  with End_of_file ->\n    raise (Error(ifile.if_filename, e.filename, \"truncated local file header\"))\n\n(* Read the contents of an entry as a string *)\n\nlet read_entry ifile e =\n  try\n    goto_entry ifile e;\n    let res = Bytes.create e.uncompressed_size in\n    match e.methd with\n      Stored ->\n        if e.compressed_size <> 
e.uncompressed_size then\n          raise (Error(ifile.if_filename, e.filename,\n                       \"wrong size for stored entry\"));\n        really_input ifile.if_channel res 0 e.uncompressed_size;\n        Bytes.unsafe_to_string res\n    | Deflated ->\n        let in_avail = ref e.compressed_size in\n        let out_pos = ref 0 in\n        if e.uncompressed_size = 0 then\n          (* Empty zip entries may be marked as deflated (#44) *)\n          \"\"\n        else begin\n          begin try\n            Zlib.uncompress ~header:false\n              (fun buf ->\n                let read = input ifile.if_channel buf 0\n                                (min !in_avail (Bytes.length buf)) in\n                in_avail := !in_avail - read;\n                read)\n              (fun buf len ->\n                if !out_pos + len > Bytes.length res then\n                  raise (Error(ifile.if_filename, e.filename,\n                              \"wrong size for deflated entry (too much data)\"));\n                Bytes.blit buf 0 res !out_pos len;\n                out_pos := !out_pos + len)\n          with Zlib.Error(_, msg) ->\n            raise (Error(ifile.if_filename, e.filename,\n                         \"decompression error: \" ^ msg))\n          end;\n          if !out_pos <> Bytes.length res then\n            raise (Error(ifile.if_filename, e.filename,\n                        \"wrong size for deflated entry (not enough data)\"));\n          let crc = Zlib.update_crc Int32.zero res 0 (Bytes.length res) in\n          if crc <> e.crc then\n            raise (Error(ifile.if_filename, e.filename, \"CRC mismatch\"));\n          Bytes.unsafe_to_string res\n        end\n  with End_of_file ->\n    raise (Error(ifile.if_filename, e.filename, \"truncated data\"))\n\n(* Write the contents of an entry into an out channel *)\n\nlet copy_entry_to_channel ifile e oc =\n  try\n    goto_entry ifile e;\n    match e.methd with\n      Stored ->\n        if e.compressed_size <> 
e.uncompressed_size then\n          raise (Error(ifile.if_filename, e.filename,\n                       \"wrong size for stored entry\"));\n        let buf = Bytes.create 4096 in\n        let rec copy n =\n          if n > 0 then begin\n            let r = input ifile.if_channel buf 0 (min n (Bytes.length buf)) in\n            output oc buf 0 r;\n            copy (n - r)\n          end in\n        copy e.uncompressed_size\n    | Deflated ->\n        let in_avail = ref e.compressed_size in\n        let crc = ref Int32.zero in\n        begin try\n          Zlib.uncompress ~header:false\n            (fun buf ->\n              let read = input ifile.if_channel buf 0\n                               (min !in_avail (Bytes.length buf)) in\n              in_avail := !in_avail - read;\n              read)\n            (fun buf len ->\n              output oc buf 0 len;\n              crc := Zlib.update_crc !crc buf 0 len)\n        with Zlib.Error(_, msg) ->\n          raise (Error(ifile.if_filename, e.filename,\n                       \"decompression error: \" ^ msg))\n        end;\n        if !crc <> e.crc then\n          raise (Error(ifile.if_filename, e.filename, \"CRC mismatch\"))\n  with End_of_file ->\n    raise (Error(ifile.if_filename, e.filename, \"truncated data\"))\n\n(* Write the contents of an entry to a file *)\n\nlet copy_entry_to_file ifile e outfilename =\n  let oc = open_out_bin outfilename in\n  try\n    copy_entry_to_channel ifile e oc;\n    close_out oc;\n    begin try\n      Unix.utimes outfilename e.mtime e.mtime\n    with Unix.Unix_error(_, _, _) | Invalid_argument _ -> ()\n    end\n  with x ->\n    close_out oc;\n    Sys.remove outfilename;\n    raise x\n\n(* Open a ZIP file for writing *)\n\nlet open_out ?(comment = \"\") filename =\n  if String.length comment >= 0x10000 then\n    raise(Error(filename, \"\", \"comment too long\"));\n  { of_filename = filename;\n    of_channel = Stdlib.open_out_bin filename;\n    of_entries = [];\n    of_comment = 
comment }\n\n(* Open an existing ZIP file for updating *)\n\nlet open_update ?comment filename =\n  let fd =\n    try Unix.openfile filename [Unix.O_RDWR] 0\n    with Unix.Unix_error(code, _, _) ->\n      raise (Sys_error (filename ^ \": \" ^ Unix.error_message code)) in\n  let ic = Unix.in_channel_of_descr fd in\n  try \n    let cdinfo = read_ecd filename ic in\n    let entries = read_cd filename ic cdinfo in\n    Unix.LargeFile.ftruncate fd cdinfo.cd_offset;\n    ignore (Unix.LargeFile.lseek fd 0L Unix.SEEK_END);\n    { of_filename = filename;\n      of_channel = Unix.out_channel_of_descr fd;\n      of_entries = entries;\n      of_comment = Option.value comment ~default:cdinfo.ecd_comment }\n  with exn ->\n    Stdlib.close_in ic; raise exn\n\n(* Reverse list of entries, removing duplicate file names.\n   Keep only the most recent entry for a given name, i.e. the one that occurs\n   first in the input list. *)\n\nmodule StringSet = Set.Make(String)\n\nlet rev_uniq entries =\n  let rec rev accu seen = function\n    | [] -> accu\n    | e :: l ->\n        if StringSet.mem e.filename seen\n        then rev accu seen l\n        else rev (e :: accu) (StringSet.add e.filename seen) l\n  in rev [] StringSet.empty entries\n\n(* Close a ZIP file for writing.  Add central directory and ECD. 
*)\n\nlet write4_cautious oc ov n =\n  write4 oc (if ov then 0xFFFF_FFFFl else Int64.to_int32 n)\n\nlet write_directory_entry oc e =\n  let overflow =\n       e.file_offset > 0xFFFF_FFFFL\n    || Int64.of_int e.compressed_size > 0xFFFF_FFFFL\n    || Int64.of_int e.uncompressed_size > 0xFFFF_FFFFL in\n  write4 oc 0x02014b50l;                (* signature *)\n  let version = match e.methd with Stored -> 10 | Deflated -> 20 in\n  write2 oc version;                    (* version made by *)\n  write2 oc version;                    (* version needed to extract *)\n  write2 oc 8;                          (* flags *)\n  write2 oc (match e.methd with Stored -> 0 | Deflated -> 8); (* method *)\n  let (time, date) = dostime_of_unixtime e.mtime in\n  write2 oc time;                       (* last mod time *)\n  write2 oc date;                       (* last mod date *)\n  write4 oc e.crc;                      (* CRC32 *)\n  write4_cautious oc overflow (Int64.of_int e.compressed_size);\n                                        (* compressed size *)\n  write4_cautious oc overflow (Int64.of_int e.uncompressed_size);\n                                        (* uncompressed size *)\n  write2 oc (String.length e.filename); (* filename length *)\n  write2 oc (if overflow then 28 else 0); (* extra length *)\n  write2 oc (String.length e.comment);  (* comment length *)\n  write2 oc 0;                          (* disk number start *)\n  write2 oc 0;                          (* internal attributes *)\n  write4 oc 0l;                         (* external attributes *)\n  write4_cautious oc overflow e.file_offset;     (* offset of local header *)\n  writestring oc e.filename;            (* filename *)\n  if overflow then begin                (* extra data *)\n    write2 oc 0x0001;   (* header ID *)\n    write2 oc 24;       (* payload size *)\n    write8 oc (Int64.of_int e.uncompressed_size);\n    write8 oc (Int64.of_int e.compressed_size);\n    write8 oc e.file_offset\n  end;\n  writestring oc 
e.comment              (* file comment *)\n\nlet close_out ofile =\n  let oc = ofile.of_channel in\n  let start_cd = LargeFile.pos_out oc in\n  let entries = rev_uniq ofile.of_entries in\n  List.iter (write_directory_entry oc) entries;\n  let start_ecd = LargeFile.pos_out oc in\n  let cd_size = Int64.sub start_ecd start_cd in\n  let num_entries = List.length entries in\n  let overflow =\n       num_entries > 0xFFFF\n    || start_cd > 0xFFFF_FFFFL\n    || cd_size > 0xFFFF_FFFFL in\n  if overflow then begin\n    (* Write ZIP64 end of central directory record *)\n    write4 oc 0x06064b50l;              (* signature *)\n    write8 oc 44L;                      (* size ECD record *)\n    write2 oc 45;                       (* version made *)\n    write2 oc 45;                       (* version needed *)\n    write4 oc 0l;                       (* disk number *)\n    write4 oc 0l;                       (* CD disk number *)\n    let ne = Int64.of_int num_entries in\n    write8 oc ne;                       (* num disk entries *)\n    write8 oc ne;                       (* num entries *)\n    write8 oc cd_size;                  (* size of the CD *)\n    write8 oc start_cd;                 (* start offset for CD *)\n    (* Write ZIP64 end of central directory locator *)\n    write4 oc 0x07064b50l;              (* signature *)\n    write4 oc 0l;                       (* CD disk number *)\n    write8 oc start_ecd;                (* Position of ECD record *)\n    write4 oc 0l                        (* number of disks *)\n  end;\n  (* Write ZIP end of central directory record *)\n  write4 oc 0x06054b50l;                (* signature *)\n  write2 oc 0;                          (* disk number *)\n  write2 oc 0;                          (* number of disk with central dir *)\n  let ne = if overflow then 0xFFFF else num_entries in\n  write2 oc ne;                         (* # entries in this disk *)\n  write2 oc ne;                         (* # entries in central dir *)\n  
write4_cautious oc overflow cd_size;  (* size of central dir *)\n  write4_cautious oc overflow start_cd; (* offset of central dir *)\n  write2 oc (String.length ofile.of_comment); (* length of comment *)\n  writestring oc ofile.of_comment;         (* comment *)\n  Stdlib.close_out oc\n\n(* Write a local file header and return the corresponding entry *)\n\nlet add_entry_header ofile comment level mtime filename =\n  if level < 0 || level > 9 then\n    raise(Error(ofile.of_filename, filename, \"wrong compression level\"));\n  if String.length filename >= 0x10000 then\n    raise(Error(ofile.of_filename, filename, \"filename too long\"));\n  if not (Filename.is_relative filename) then\n    raise(Error(ofile.of_filename, filename, \"file name must not be absolute\"));\n  if String.length comment >= 0x10000 then\n    raise(Error(ofile.of_filename, filename, \"comment too long\"));\n  let filename =\n    if Sys.os_type = \"Win32\"   (* normalize directory separators *)\n    then String.map (function '\\\\' -> '/' | c -> c) filename\n    else filename in\n  let oc = ofile.of_channel in\n  let pos = LargeFile.pos_out oc in\n  write4 oc 0x04034b50l;                (* signature *)\n  let version = if level = 0 then 10 else 20 in\n  write2 oc version;                    (* version needed to extract *)\n  write2 oc 0;                          (* flags *)\n  write2 oc (if level = 0 then 0 else 8); (* method *)\n  let (time, date) = dostime_of_unixtime mtime in\n  write2 oc time;                       (* last mod time *)\n  write2 oc date;                       (* last mod date *)\n  write4 oc 0l;                         (* CRC32 - to be filled later *)\n  write4 oc 0l;                         (* compressed size - later *)\n  write4 oc 0l;                         (* uncompressed size - later *)\n  write2 oc (String.length filename);   (* filename length *)\n  write2 oc 20;                         (* extra length *)\n  writestring oc filename;              (* filename *)\n  write2 
oc 0x0001;                     (* extra data - header ID *)\n  write2 oc 16;                         (* payload size *)\n  write8 oc 0L;                         (* compressed size - later *)\n  write8 oc 0L;                         (* uncompressed size - later *)\n  { filename = filename;\n    comment = comment;\n    methd = (if level = 0 then Stored else Deflated);\n    mtime = mtime;\n    crc = Int32.zero;\n    uncompressed_size = 0;\n    compressed_size = 0;\n    is_directory = filename_is_directory filename;\n    file_offset = pos }\n\n(* Write the correct sizes and CRC in the local file header\n   and update the entry *)\n\nlet update_entry ofile crc compr_size uncompr_size entry =\n  let csz = Int64.of_int compr_size\n  and usz = Int64.of_int uncompr_size in\n  let overflow = csz > 0xFFFF_FFFFL || usz > 0xFFFF_FFFFL in\n  let oc = ofile.of_channel in\n  let cur = LargeFile.pos_out oc in\n  LargeFile.seek_out oc (Int64.add entry.file_offset 14L);\n  write4 oc crc;                        (* CRC *)\n  write4_cautious oc overflow csz;      (* compressed size *)\n  write4_cautious oc overflow usz;      (* uncompressed size *)\n  if overflow then begin\n    LargeFile.seek_out oc\n      Int64.(add entry.file_offset\n                 (of_int (30 + String.length entry.filename + 4)));\n    write8 oc csz;                        (* compressed size *)\n    write8 oc usz                         (* uncompressed size *)\n  end;\n  LargeFile.seek_out oc cur;\n  { entry with crc = crc;\n               uncompressed_size = uncompr_size;\n               compressed_size = compr_size }\n\n(* Add an entry with the contents of a string *)\n\nlet add_entry data ofile ?(comment = \"\")\n                         ?(level = 6) ?(mtime = Unix.time()) name =\n  let e = add_entry_header ofile comment level mtime name in\n  let crc = Zlib.update_crc_string Int32.zero data 0 (String.length data) in\n  let compr_size =\n    match level with\n      0 ->\n        output_substring 
ofile.of_channel data 0 (String.length data);\n        String.length data\n    | _ ->\n        let in_pos = ref 0 in\n        let out_pos = ref 0 in\n        try\n          Zlib.compress ~level ~header:false\n            (fun buf ->\n               let n = min (String.length data - !in_pos)\n                           (Bytes.length buf) in\n               String.blit data !in_pos buf 0 n;\n               in_pos := !in_pos + n;\n               n)\n            (fun buf n ->\n                output ofile.of_channel buf 0 n;\n                out_pos := !out_pos + n);\n          !out_pos\n        with Zlib.Error(_, msg) ->\n          raise (Error(ofile.of_filename, name,\n                       \"compression error: \" ^ msg)) in\n  let e' = update_entry ofile crc compr_size (String.length data) e in\n  ofile.of_entries <- e' :: ofile.of_entries\n\n(* Add an entry with the contents of an in channel *)\n\nlet copy_channel_to_entry ic ofile ?(comment = \"\")\n                                   ?(level = 6) ?(mtime = Unix.time()) name =\n  let e = add_entry_header ofile comment level mtime name in\n  let crc = ref Int32.zero in\n  let (compr_size, uncompr_size) =\n    match level with\n      0 ->\n        let buf = Bytes.create 4096 in\n        let rec copy sz =\n          let r = input ic buf 0 (Bytes.length buf) in\n          if r = 0 then sz else begin\n            crc := Zlib.update_crc !crc buf 0 r;\n            output ofile.of_channel buf 0 r;\n            copy (sz + r)\n          end in\n        let size = copy 0 in\n        (size, size)\n    | _ ->\n        let in_pos = ref 0 in\n        let out_pos = ref 0 in\n        try\n          Zlib.compress ~level ~header:false\n            (fun buf ->\n               let r = input ic buf 0 (Bytes.length buf) in\n               crc := Zlib.update_crc !crc buf 0 r;\n               in_pos := !in_pos + r;\n               r)\n            (fun buf n ->\n               output ofile.of_channel buf 0 n;\n               out_pos := 
!out_pos + n);\n          (!out_pos, !in_pos)\n        with Zlib.Error(_, msg) ->\n          raise (Error(ofile.of_filename, name,\n                       \"compression error: \" ^ msg)) in\n  let e' = update_entry ofile !crc compr_size uncompr_size e in\n  ofile.of_entries <- e' :: ofile.of_entries\n\n(* Add an entry with the contents of a file *)\n\nlet copy_file_to_entry infilename ofile ?(comment = \"\")\n                                        ?(level = 6) ?mtime name =\n  let ic = open_in_bin infilename in\n  let mtime' =\n    match mtime with\n      Some _ -> mtime\n    | None ->\n        try Some((Unix.stat infilename).Unix.st_mtime)\n        with Unix.Unix_error(_,_,_) -> None in\n  try\n    copy_channel_to_entry ic ofile ~comment ~level ?mtime:mtime' name;\n    Stdlib.close_in ic\n  with x ->\n    Stdlib.close_in ic; raise x\n\n\n(* Add an entry whose content will be produced by the caller *)\n\nlet add_entry_generator ofile ?(comment = \"\")\n                              ?(level = 6) ?(mtime = Unix.time()) name =\n  let e = add_entry_header ofile comment level mtime name in\n  let crc = ref Int32.zero in\n  let compr_size = ref 0 in\n  let uncompr_size = ref 0 in\n  let finished = ref false in\n  let check () =\n    if !finished then\n      raise (Error(ofile.of_filename, name, \"entry already finished\"))\n  in\n  let finish () =\n    finished := true;\n    let e' = update_entry ofile !crc !compr_size !uncompr_size e in\n    ofile.of_entries <- e' :: ofile.of_entries\n  in\n  match level with\n  | 0 ->\n      (fun buf pos len ->\n        check ();\n        output ofile.of_channel buf pos len;\n        compr_size := !compr_size + len;\n        uncompr_size := !uncompr_size + len;\n        crc := Zlib.update_crc !crc buf pos len\n      ),\n      (fun () ->\n        check ();\n        finish ()\n      )\n  | _ ->\n      let (send, flush) = Zlib.compress_direct ~level ~header:false\n          (fun buf n ->\n            output ofile.of_channel buf 0 n;\n    
        compr_size := !compr_size + n)\n      in\n      (fun buf pos len ->\n        check ();\n        try\n          send buf pos len;\n          uncompr_size := !uncompr_size + len;\n          crc := Zlib.update_crc !crc buf pos len\n        with Zlib.Error(_, msg) ->\n          raise (Error(ofile.of_filename, name,\n                       \"compression error: \" ^ msg))\n      ),\n      (fun () ->\n        check ();\n        try\n          flush ();\n          finish ()\n        with Zlib.Error(_, msg) ->\n          raise (Error(ofile.of_filename, name,\n                       \"compression error: \" ^ msg))\n      )\n"
  },
  {
    "path": "packages/nx/vendor/camlzip/zip.mli",
    "content": "(***********************************************************************)\n(*                                                                     *)\n(*                         The CamlZip library                         *)\n(*                                                                     *)\n(*            Xavier Leroy, projet Cristal, INRIA Rocquencourt         *)\n(*                                                                     *)\n(*  Copyright 2001 Institut National de Recherche en Informatique et   *)\n(*  en Automatique.  All rights reserved.  This file is distributed    *)\n(*  under the terms of the GNU Lesser General Public License, with     *)\n(*  the special exception on linking described in file LICENSE.        *)\n(*                                                                     *)\n(***********************************************************************)\n\n(* $Id$ *)\n\n(** Reading and writing ZIP archives\n\n    This module provides functions for reading and writing ZIP archive\n    files.  ZIP archives package one or more compressed files into\n    a single ZIP file, along with information about the files,\n    including file name, date and time of last modification, user-provided\n    comments, and a checksum to verify the integrity of each entry.\n    The entries of a ZIP file are not necessarily actual files, and can\n    actually consist of arbitrary data.\n\n    The ZIP file format used in this module is compatible with that\n    implemented by the popular [pkzip] archiver under Windows,\n    and by the Info-ZIP [zip] and [unzip] commands under Unix and Windows.\n    This format is also compatible with the JAR file format used by Java. 
*)\n\n(** {1 Information on ZIP entries} *)\n\ntype compression_method =\n  | Stored                     (** data is stored without compression *)\n  | Deflated                   (** data is compressed with the ``deflate'' algorithm *)\n        (** Indicate whether the data in the entry is compressed or not. *)\n\ntype entry =\n  { filename: string;          (** file name for entry *)\n    comment: string;           (** comment attached to entry *)\n    methd: compression_method; (** compression method *)\n    mtime: float;              (** last modification time (seconds since epoch) *)\n    crc: int32;                (** cyclic redundancy check for data *)\n    uncompressed_size: int;    (** size of original data in bytes *)\n    compressed_size: int;      (** size of compressed data *)\n    is_directory: bool;        (** whether this entry represents a directory *)\n    file_offset: int64         (** for internal use *)\n  }\n          (** Description of an entry in a ZIP file. *)\n\n(** {1 Reading from ZIP files} *)\n\ntype in_file\n          (** Abstract type representing a handle opened for reading from\n              a ZIP file. *)\nval open_in: string -> in_file\n          (** [Zip.open_in zipfilename] opens the ZIP file with the given\n              filename.  The file must already exist.\n              Return a handle opened for reading from this file. *)\nval entries: in_file -> entry list\n          (** Return a list of all entries in the given ZIP file. *)\nval comment: in_file -> string\n          (** Return the comment attached to the given ZIP file, or the\n              empty string if none. *)\nval find_entry: in_file -> string -> entry\n          (** [Zip.find_entry zf filename] returns the description of the\n              entry having name [filename] in the ZIP file [zf].\n              Raises [Not_found] if no such entry exists.\n              The file name must match exactly; in particular, case is\n              significant.  
File names must use [/] (slash) as the directory\n              separator.  The name of a directory must end with a trailing \n              [/] (slash). *)\nval read_entry: in_file -> entry -> string\n          (** [Zip.read_entry zf e] reads and uncompresses the data\n              (file contents) associated with entry [e] of ZIP file [zf].\n              The data is returned as a character string. *)\nval copy_entry_to_channel: in_file -> entry -> out_channel -> unit\n          (** [Zip.copy_entry_to_channel zf e oc] reads and uncompresses\n              the data associated with entry [e] of ZIP file [zf].\n              It then writes this data to the output channel [oc]. *)\nval copy_entry_to_file: in_file -> entry -> string -> unit\n          (** [Zip.copy_entry_to_file zf e destfile] reads and uncompresses\n              the data associated with entry [e] of ZIP file [zf].\n              It then writes this data to the file named [destfile].\n              The file [destfile] is created if it does not exist,\n              and overwritten otherwise.  The last modification date of\n              the file is set to that indicated in the ZIP entry [e],\n              if possible. *)\nval close_in: in_file -> unit\n          (** Close the given ZIP file handle.  The underlying input channel\n              is closed. *)\n\n(** {1 Writing to ZIP files} *)\n\ntype out_file\n          (** Abstract type representing a handle opened for writing to\n              a ZIP file. *)\nval open_out: ?comment: string -> string -> out_file\n          (** [Zip.open_out zipfilename] creates (or truncates to zero length)\n              the ZIP file with the given filename.\n              Return a handle opened for writing to this file.\n              @param comment  comment string attached to the ZIP file as\n                a whole.  Default: empty. 
*)\nval open_update: ?comment: string -> string -> out_file\n          (** [Zip.open_update zipfilename] opens the ZIP file with the\n              given filename, preserving its contents.  The file must already\n              exist.  Return a handle opened for writing to this file.\n              Entries added via this handle will be added to the existing\n              entries.  If an entry is added with the same file name as\n              an existing entry, the old entry becomes inaccessible; only\n              the new entry remains.\n              @param comment  comment string attached to the ZIP file as\n                a whole.  Default: keep the comment that was attached\n                to the original ZIP file. *)\nval add_entry:\n  string -> out_file -> \n    ?comment: string -> ?level: int -> ?mtime: float ->\n    string -> unit\n          (** [Zip.add_entry data zf name] adds a new entry to the \n              ZIP file [zf].  The data (file contents) associated with\n              the entry is taken from the string [data].  It is compressed\n              and written to the ZIP file [zf].  [name] is the file name\n              stored along with this entry. \n\n              Under Windows, backslash characters in the [name] parameter\n              are stored in the ZIP file as forward slashes [/], for\n              compatibility with other operating systems.\n\n              Several optional arguments can be provided to control\n              the format and attached information of the entry:\n              @param comment  attached to the entry (a string).\n                Default: empty.\n              @param level  compression level for the entry.  This is an\n                integer between 0 and 9, with 0 meaning no compression (store\n                as is), 1 lowest compression, 9 highest compression.  
Higher\n                levels result in smaller compressed data, but longer\n                compression times.\n                Default: 6 (moderate compression).\n              @param mtime  last modification time (in seconds since the\n                epoch).\n                Default: the current time. *)\n\nval copy_channel_to_entry:\n  in_channel -> out_file -> \n    ?comment: string -> ?level: int -> ?mtime: float ->\n    string -> unit\n          (** Same as [Zip.add_entry], but the data associated with the\n              entry is read from the input channel given as first argument.\n              The channel is read up to end of file. *)\nval copy_file_to_entry:\n  string -> out_file -> \n    ?comment: string -> ?level: int -> ?mtime: float ->\n    string -> unit\n          (** Same as [Zip.add_entry], but the data associated with the\n              entry is read from the file whose name is given as first\n              argument.  Also, the default value for the [mtime]\n              optional parameter is the time of last modification of the\n              file. *)\nval add_entry_generator:\n  out_file ->\n    ?comment: string -> ?level: int -> ?mtime: float -> \n    string -> (bytes -> int -> int -> unit) * (unit -> unit)\n          (** [Zip.add_entry_generator zf name] returns a pair of functions\n              [(add, finish)].  It adds a new entry to the \n              ZIP file [zf].  The file name stored along with this entry\n              is [name].  
Initially, no data is stored in this entry.\n              To store data in this entry, the program must repeatedly call\n              the [add] function returned by [Zip.add_entry_generator].\n              An invocation [add s ofs len] stores [len] characters of\n              byte sequence [s] starting at offset [ofs] in the ZIP entry.\n              When all the data forming the entry has been sent, the\n              program must call the [finish] function returned by\n              [Zip.add_entry_generator].  [finish] must be called exactly once.\n              The optional arguments to [Zip.add_entry_generator]\n              are as described in {!Zip.add_entry}. *)\nval close_out: out_file -> unit\n          (** Finish writing the ZIP archive by adding the table of\n              contents, and close it. *)\n\n(** {1 Error reporting} *)\n\nexception Error of string * string * string\n          (** Exception raised when an ill-formed ZIP archive is encountered,\n              or illegal parameters are given to the functions in this\n              module.  The exception is of the form\n              [Error(ZIP_name, entry_name, message)] where [ZIP_name]\n              is the name of the ZIP file, [entry_name] the name of\n              the offending entry, and [message] an explanation of the\n              error. *)\n"
  },
  {
    "path": "packages/nx/vendor/camlzip/zlib.ml",
    "content": "(***********************************************************************)\n(*                                                                     *)\n(*                         The CamlZip library                         *)\n(*                                                                     *)\n(*            Xavier Leroy, projet Cristal, INRIA Rocquencourt         *)\n(*                                                                     *)\n(*  Copyright 2001 Institut National de Recherche en Informatique et   *)\n(*  en Automatique.  All rights reserved.  This file is distributed    *)\n(*  under the terms of the GNU Lesser General Public License, with     *)\n(*  the special exception on linking described in file LICENSE.        *)\n(*                                                                     *)\n(***********************************************************************)\n\n(* $Id$ *)\n\nexception Error of string * string\n\nlet _ =\n  Callback.register_exception \"Zlib.Error\" (Error(\"\",\"\"))\n\ntype stream\n\ntype flush_command =\n    Z_NO_FLUSH\n  | Z_SYNC_FLUSH\n  | Z_FULL_FLUSH\n  | Z_FINISH\n\nexternal deflate_init: int -> bool -> stream = \"camlzip_deflateInit\"\nexternal deflate:\n  stream -> bytes -> int -> int -> bytes -> int -> int -> flush_command\n         -> bool * int * int\n  = \"camlzip_deflate_bytecode\" \"camlzip_deflate\"\nexternal deflate_string:\n  stream -> string -> int -> int -> bytes -> int -> int -> flush_command\n         -> bool * int * int\n  = \"camlzip_deflate_bytecode\" \"camlzip_deflate\"\nexternal deflate_end: stream -> unit = \"camlzip_deflateEnd\"\n\nexternal inflate_init: bool -> stream = \"camlzip_inflateInit\"\nexternal inflate:\n  stream -> bytes -> int -> int -> bytes -> int -> int -> flush_command\n         -> bool * int * int\n  = \"camlzip_inflate_bytecode\" \"camlzip_inflate\"\nexternal inflate_string:\n  stream -> string -> int -> int -> bytes -> int -> int -> flush_command\n         
-> bool * int * int\n  = \"camlzip_inflate_bytecode\" \"camlzip_inflate\"\nexternal inflate_end: stream -> unit = \"camlzip_inflateEnd\"\n\nexternal update_crc: int32 -> bytes -> int -> int -> int32\n                   = \"camlzip_update_crc32\"\nexternal update_crc_string: int32 -> string -> int -> int -> int32\n                   = \"camlzip_update_crc32\"\n\nlet buffer_size = 1024\n\nlet compress ?(level = 6) ?(header = true) refill flush =\n  let inbuf = Bytes.create buffer_size\n  and outbuf = Bytes.create buffer_size in\n  let zs = deflate_init level header in\n  let rec compr inpos inavail =\n    if inavail = 0 then begin\n      let incount = refill inbuf in\n      if incount = 0 then compr_finish() else compr 0 incount\n    end else begin\n      let (_, used_in, used_out) =\n        deflate zs inbuf inpos inavail outbuf 0 buffer_size Z_NO_FLUSH in\n      flush outbuf used_out;\n      compr (inpos + used_in) (inavail - used_in)\n    end\n  and compr_finish () =\n    let (finished, _, used_out) =\n       deflate zs inbuf 0 0 outbuf 0 buffer_size Z_FINISH in\n    flush outbuf used_out;\n    if not finished then compr_finish()\n  in\n    compr 0 0;\n    deflate_end zs\n\nlet compress_direct  ?(level = 6) ?(header = true) flush =\n  let outbuf = Bytes.create buffer_size in\n  let zs = deflate_init level header in\n  let rec compr inbuf inpos inavail =\n    if inavail = 0 then ()\n    else begin\n      let (_, used_in, used_out) =\n        deflate zs inbuf inpos inavail outbuf 0 buffer_size Z_NO_FLUSH in\n      flush outbuf used_out;\n      compr inbuf (inpos + used_in) (inavail - used_in)\n    end\n  and compr_finish () =\n    let (finished, _, used_out) =\n      deflate zs (Bytes.unsafe_of_string \"\") 0 0\n                 outbuf 0 buffer_size Z_FINISH in\n    flush outbuf used_out;\n    if not finished then compr_finish()\n    else deflate_end zs\n  in\n  compr, compr_finish\n\nlet uncompress ?(header = true) refill flush =\n  let inbuf = Bytes.create 
buffer_size\n  and outbuf = Bytes.create buffer_size in\n  let zs = inflate_init header in\n  let rec uncompr inpos inavail =\n    if inavail = 0 then begin\n      let incount = refill inbuf in\n      if incount = 0 then uncompr_finish 0 else uncompr 0 incount\n    end else begin\n      let (finished, used_in, used_out) =\n        inflate zs inbuf inpos inavail outbuf 0 buffer_size Z_SYNC_FLUSH in\n      flush outbuf used_out;\n      if not finished then uncompr (inpos + used_in) (inavail - used_in)\n    end\n  and uncompr_finish num_round =\n    (* Gotcha: if there is no header, inflate requires an extra \"dummy\" byte\n       after the compressed stream in order to complete decompression\n       and return finished = true. *)\n    let dummy_byte = if num_round = 0 && not header then 1 else 0 in\n    let (finished, _, used_out) =\n       inflate zs inbuf 0 dummy_byte outbuf 0 buffer_size Z_SYNC_FLUSH in\n    flush outbuf used_out;\n    if finished then ()\n    else if used_out > 0 then uncompr_finish 1\n    else if num_round < 10 then uncompr_finish (num_round + 1)\n    else\n      (* Gotcha: truncated input can cause an infinite loop where\n         [inflate] doesn't produce output and never returns \"finished\".\n         Raise an error after too many calls to [inflate] that produced\n         no output. *)\n      raise(Error(\"Zlib.uncompress\", \"truncated input data\"))\n  in\n    uncompr 0 0;\n    inflate_end zs\n"
  },
  {
    "path": "packages/nx/vendor/camlzip/zlib.mli",
    "content": "(***********************************************************************)\n(*                                                                     *)\n(*                         The CamlZip library                         *)\n(*                                                                     *)\n(*            Xavier Leroy, projet Cristal, INRIA Rocquencourt         *)\n(*                                                                     *)\n(*  Copyright 2001 Institut National de Recherche en Informatique et   *)\n(*  en Automatique.  All rights reserved.  This file is distributed    *)\n(*  under the terms of the GNU Lesser General Public License, with     *)\n(*  the special exception on linking described in file LICENSE.        *)\n(*                                                                     *)\n(***********************************************************************)\n\n(* $Id$ *)\n\nexception Error of string * string\n\nval compress:\n  ?level: int -> ?header: bool -> \n  (bytes -> int) -> (bytes -> int -> unit) -> unit\n\nval compress_direct:\n  ?level: int -> ?header: bool -> (bytes -> int -> unit) ->\n  (bytes -> int -> int -> unit) * (unit -> unit)\n\nval uncompress:\n  ?header: bool -> (bytes -> int) -> (bytes -> int -> unit) -> unit\n\ntype stream\n\ntype flush_command =\n    Z_NO_FLUSH\n  | Z_SYNC_FLUSH\n  | Z_FULL_FLUSH\n  | Z_FINISH\n\nexternal deflate_init: int -> bool -> stream = \"camlzip_deflateInit\"\nexternal deflate:\n  stream -> bytes -> int -> int -> bytes -> int -> int -> flush_command\n         -> bool * int * int\n  = \"camlzip_deflate_bytecode\" \"camlzip_deflate\"\nexternal deflate_string:\n  stream -> string -> int -> int -> bytes -> int -> int -> flush_command\n         -> bool * int * int\n  = \"camlzip_deflate_bytecode\" \"camlzip_deflate\"\nexternal deflate_end: stream -> unit = \"camlzip_deflateEnd\"\n\nexternal inflate_init: bool -> stream = \"camlzip_inflateInit\"\nexternal inflate:\n  stream -> 
bytes -> int -> int -> bytes -> int -> int -> flush_command\n         -> bool * int * int\n  = \"camlzip_inflate_bytecode\" \"camlzip_inflate\"\nexternal inflate_string:\n  stream -> string -> int -> int -> bytes -> int -> int -> flush_command\n         -> bool * int * int\n  = \"camlzip_inflate_bytecode\" \"camlzip_inflate\"\nexternal inflate_end: stream -> unit = \"camlzip_inflateEnd\"\n\nexternal update_crc: int32 -> bytes -> int -> int -> int32\n                   = \"camlzip_update_crc32\"\nexternal update_crc_string: int32 -> string -> int -> int -> int32\n                   = \"camlzip_update_crc32\"\n"
  },
  {
    "path": "packages/nx/vendor/camlzip/zlibstubs.c",
    "content": "/***********************************************************************/\n/*                                                                     */\n/*                      The CamlZip library                            */\n/*                                                                     */\n/*            Xavier Leroy, projet Cristal, INRIA Rocquencourt         */\n/*                                                                     */\n/*  Copyright 2001 Institut National de Recherche en Informatique et   */\n/*  en Automatique.  All rights reserved.  This file is distributed    */\n/*  under the terms of the GNU Lesser General Public License, with     */\n/*  the special exception on linking described in file LICENSE.        */\n/*                                                                     */\n/***********************************************************************/\n\n/* $Id$ */\n\n/* Stub code to interface with Zlib */\n\n#include <stdint.h>\n#include <zlib.h>\n\n#include <caml/mlvalues.h>\n#include <caml/alloc.h>\n#include <caml/callback.h>\n#include <caml/fail.h>\n#include <caml/memory.h>\n#include <caml/custom.h>\n\n#define ZStream_val(v) (*((z_streamp *) Data_custom_val(v)))\n\nstatic const value * camlzip_error_exn = NULL;\n\nstatic void camlzip_error(char * fn, value vzs)\n{\n  char * msg;\n  value s1 = Val_unit, s2 = Val_unit, bucket = Val_unit;\n\n  msg = ZStream_val(vzs)->msg;\n  if (msg == NULL) msg = \"\";\n  if (camlzip_error_exn == NULL) {\n    camlzip_error_exn = caml_named_value(\"Zlib.Error\");\n    if (camlzip_error_exn == NULL)\n      caml_invalid_argument(\"Exception Zlib.Error not initialized\");\n  }\n  Begin_roots3(s1, s2, bucket);\n    s1 = caml_copy_string(fn);\n    s2 = caml_copy_string(msg);\n    bucket = caml_alloc_small(3, 0);\n    Field(bucket, 0) = *camlzip_error_exn;\n    Field(bucket, 1) = s1;\n    Field(bucket, 2) = s2;\n  End_roots();\n  caml_raise(bucket);\n}\n\nstatic void 
camlzip_free_dstream(value vzs)\n{\n  deflateEnd(ZStream_val(vzs));\n  caml_stat_free(ZStream_val(vzs));\n  ZStream_val(vzs) = NULL;\n}\n\nstatic struct custom_operations camlzip_dstream_ops = {\n  \"camlzip_dstream_ops\", &camlzip_free_dstream, NULL, NULL, NULL, NULL\n};\n\nvalue camlzip_deflateInit(value vlevel, value expect_header)\n{\n  value vzs =\n    caml_alloc_custom_mem(&camlzip_dstream_ops,\n                          sizeof(z_streamp), sizeof(z_stream));\n  ZStream_val(vzs) = caml_stat_alloc(sizeof(z_stream));\n  /* Zlib API: the fields zalloc, zfree and opaque must be initialized */\n  ZStream_val(vzs)->zalloc = NULL;\n  ZStream_val(vzs)->zfree = NULL;\n  ZStream_val(vzs)->opaque = NULL;\n  if (deflateInit2(ZStream_val(vzs),\n                   Int_val(vlevel),\n                   Z_DEFLATED,\n                   Bool_val(expect_header) ? MAX_WBITS : -MAX_WBITS,\n                   8,\n                   Z_DEFAULT_STRATEGY) != Z_OK)\n    camlzip_error(\"Zlib.deflateInit\", vzs);\n  return vzs;\n}\n\nstatic int camlzip_flush_table[] = \n{ Z_NO_FLUSH, Z_SYNC_FLUSH, Z_FULL_FLUSH, Z_FINISH };\n\nvalue camlzip_deflate(value vzs, value srcbuf, value srcpos, value srclen,\n                      value dstbuf, value dstpos, value dstlen,\n                      value vflush)\n{\n  z_stream * zs = ZStream_val(vzs);\n  int retcode;\n  long used_in, used_out;\n  value res;\n\n  zs->next_in = &Byte_u(srcbuf, Long_val(srcpos));\n  zs->avail_in = Long_val(srclen);\n  zs->next_out = &Byte_u(dstbuf, Long_val(dstpos));\n  zs->avail_out = Long_val(dstlen);\n  retcode = deflate(zs, camlzip_flush_table[Int_val(vflush)]);\n  if (retcode < 0 && retcode != Z_BUF_ERROR) camlzip_error(\"Zlib.deflate\", vzs);\n  used_in = Long_val(srclen) - zs->avail_in;\n  used_out = Long_val(dstlen) - zs->avail_out;\n  zs->next_in = NULL;         /* not required, but cleaner */\n  zs->next_out = NULL;        /* (avoid dangling pointers into Caml heap) */\n  res = caml_alloc_small(3, 0);\n  
Field(res, 0) = Val_bool(retcode == Z_STREAM_END);\n  Field(res, 1) = Val_int(used_in);\n  Field(res, 2) = Val_int(used_out);\n  return res;\n}\n\nvalue camlzip_deflate_bytecode(value * arg, int nargs)\n{\n  return camlzip_deflate(arg[0], arg[1], arg[2], arg[3],\n                         arg[4], arg[5], arg[6], arg[7]);\n}\n\nvalue camlzip_deflateEnd(value vzs)\n{\n  if (deflateEnd(ZStream_val(vzs)) != Z_OK)\n    camlzip_error(\"Zlib.deflateEnd\", vzs);\n  return Val_unit;\n}\n\nstatic void camlzip_free_istream(value vzs)\n{\n  inflateEnd(ZStream_val(vzs));\n  caml_stat_free(ZStream_val(vzs));\n  ZStream_val(vzs) = NULL;\n}\n\nstatic struct custom_operations camlzip_istream_ops = {\n  \"camlzip_dstream_ops\", &camlzip_free_istream, NULL, NULL, NULL, NULL\n};\n\nvalue camlzip_inflateInit(value expect_header)\n{\n  value vzs =\n    caml_alloc_custom_mem(&camlzip_istream_ops,\n                          sizeof(z_streamp), sizeof(z_stream));\n  /* Zlib API: The fields next_in, avail_in, zalloc, zfree and opaque\n     must be initialized */\n  ZStream_val(vzs) = caml_stat_alloc(sizeof(z_stream));\n  ZStream_val(vzs)->zalloc = NULL;\n  ZStream_val(vzs)->zfree = NULL;\n  ZStream_val(vzs)->opaque = NULL;\n  ZStream_val(vzs)->next_in = NULL;\n  ZStream_val(vzs)->avail_in = 0;\n  if (inflateInit2(ZStream_val(vzs),\n                   Bool_val(expect_header) ? 
MAX_WBITS : -MAX_WBITS) != Z_OK)\n    camlzip_error(\"Zlib.inflateInit\", vzs);\n  return vzs;\n}\n\nvalue camlzip_inflate(value vzs, value srcbuf, value srcpos, value srclen,\n                      value dstbuf, value dstpos, value dstlen,\n                      value vflush)\n{\n  z_stream * zs = ZStream_val(vzs);\n  int retcode;\n  long used_in, used_out;\n  value res;\n\n  zs->next_in = &Byte_u(srcbuf, Long_val(srcpos));\n  zs->avail_in = Long_val(srclen);\n  zs->next_out = &Byte_u(dstbuf, Long_val(dstpos));\n  zs->avail_out = Long_val(dstlen);\n  retcode = inflate(zs, camlzip_flush_table[Int_val(vflush)]);\n  if ((retcode < 0 && retcode != Z_BUF_ERROR) || retcode == Z_NEED_DICT)\n    camlzip_error(\"Zlib.inflate\", vzs);\n  used_in = Long_val(srclen) - zs->avail_in;\n  used_out = Long_val(dstlen) - zs->avail_out;\n  zs->next_in = NULL;           /* not required, but cleaner */\n  zs->next_out = NULL;          /* (avoid dangling pointers into Caml heap) */\n  res = caml_alloc_small(3, 0);\n  Field(res, 0) = Val_bool(retcode == Z_STREAM_END);\n  Field(res, 1) = Val_int(used_in);\n  Field(res, 2) = Val_int(used_out);\n  return res;\n}\n\nvalue camlzip_inflate_bytecode(value * arg, int nargs)\n{\n  return camlzip_inflate(arg[0], arg[1], arg[2], arg[3],\n                         arg[4], arg[5], arg[6], arg[7]);\n}\n\nvalue camlzip_inflateEnd(value vzs)\n{\n  if (inflateEnd(ZStream_val(vzs)) != Z_OK)\n    camlzip_error(\"Zlib.inflateEnd\", vzs);\n  return Val_unit;\n}\n\nvalue camlzip_update_crc32(value crc, value buf, value pos, value len)\n{\n  return caml_copy_int32(crc32((uint32_t) Int32_val(crc), \n                          &Byte_u(buf, Long_val(pos)),\n                          Long_val(len)));\n}\n\n"
  },
  {
    "path": "packages/nx/vendor/dune",
    "content": "(vendored_dirs *)\n"
  },
  {
    "path": "packages/nx/vendor/ocaml-pocketfft/config/discover.ml",
    "content": "module C = Configurator.V1\n\nlet () =\n  C.main ~name:\"pocketfft\" (fun c ->\n      let architecture = C.ocaml_config_var_exn c \"architecture\" in\n      let word_size = C.ocaml_config_var_exn c \"word_size\" in\n\n      let arch_flags =\n        match architecture with\n        | \"amd64\" | \"x86_64\" -> [ \"-march=native\"; \"-mtune=native\" ]\n        | \"arm64\" | \"aarch64\" -> [ \"-mcpu=native\" ]\n        | \"power\" | \"ppc\" | \"ppc64\" | \"ppc64le\" -> [ \"-mcpu=native\" ]\n        | \"riscv32\" -> [ \"-march=rv32gc\" ]\n        | \"riscv64\" -> [ \"-march=rv64gc\" ]\n        | \"riscv\" ->\n            if word_size = \"64\" then [ \"-march=rv64gc\" ]\n            else [ \"-march=rv32gc\" ]\n        | \"s390x\" -> [ \"-march=native\" ]\n        | _ -> []\n      in\n\n      let lto_flags =\n        match Sys.getenv_opt \"NX_POCKETFFT_ENABLE_LTO\" with\n        | Some v ->\n            let normalized = String.(lowercase_ascii (trim v)) in\n            if normalized = \"1\" || normalized = \"true\" || normalized = \"yes\" then\n              [ \"-flto\" ]\n            else\n              []\n        | None -> []\n      in\n\n      let cxx_flags =\n        [ \"-O3\" ]\n        @ arch_flags\n        @ lto_flags\n        @ [\n            \"-ffast-math\";\n            \"-DNDEBUG\";\n            \"-funroll-loops\";\n            \"-fomit-frame-pointer\";\n            \"-finline-functions\";\n            \"-fno-rtti\";\n            \"-std=c++17\";\n            \"-I.\";\n            \"-DPOCKETFFT_NO_MULTITHREADING=0\";\n            \"-DPOCKETFFT_CACHE_SIZE=32768\";\n          ]\n      in\n\n      C.Flags.write_sexp \"cxx_flags.sexp\" cxx_flags)\n"
  },
  {
    "path": "packages/nx/vendor/ocaml-pocketfft/config/dune",
    "content": "(executable\n (name discover)\n (libraries dune-configurator))\n"
  },
  {
    "path": "packages/nx/vendor/ocaml-pocketfft/dune",
    "content": "(library\n (public_name nx.pocketfft)\n (name pocketfft)\n (foreign_stubs\n  (language cxx)\n  (names pocketfft_stubs)\n  (flags\n   (:standard\n    (:include cxx_flags.sexp)))\n  (include_dirs pocketfft))\n (c_library_flags (:standard \\ -shared-libgcc)))\n\n(rule\n (targets cxx_flags.sexp)\n (action\n  (run config/discover.exe)))\n"
  },
  {
    "path": "packages/nx/vendor/ocaml-pocketfft/pocketfft/LICENSE",
    "content": "Copyright (C) 2010-2018 Max-Planck-Society\nAll rights reserved.\n\nRedistribution and use in source and binary forms, with or without modification,\nare permitted provided that the following conditions are met:\n\n* Redistributions of source code must retain the above copyright notice, this\n  list of conditions and the following disclaimer.\n* Redistributions in binary form must reproduce the above copyright notice, this\n  list of conditions and the following disclaimer in the documentation and/or\n  other materials provided with the distribution.\n* Neither the name of the copyright holder nor the names of its contributors may\n  be used to endorse or promote products derived from this software without\n  specific prior written permission.\n\nTHIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\" AND\nANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED\nWARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\nDISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR\nANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES\n(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;\nLOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON\nANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS\nSOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n"
  },
  {
    "path": "packages/nx/vendor/ocaml-pocketfft/pocketfft/pocketfft_hdronly.h",
    "content": "/*\nThis file is part of pocketfft.\n\nCopyright (C) 2010-2022 Max-Planck-Society\nCopyright (C) 2019-2020 Peter Bell\n\nFor the odd-sized DCT-IV transforms:\n  Copyright (C) 2003, 2007-14 Matteo Frigo\n  Copyright (C) 2003, 2007-14 Massachusetts Institute of Technology\n\nFor the prev_good_size search:\n  Copyright (C) 2024 Tan Ping Liang, Peter Bell\n\nAuthors: Martin Reinecke, Peter Bell\n\nAll rights reserved.\n\nRedistribution and use in source and binary forms, with or without modification,\nare permitted provided that the following conditions are met:\n\n* Redistributions of source code must retain the above copyright notice, this\n  list of conditions and the following disclaimer.\n* Redistributions in binary form must reproduce the above copyright notice, this\n  list of conditions and the following disclaimer in the documentation and/or\n  other materials provided with the distribution.\n* Neither the name of the copyright holder nor the names of its contributors may\n  be used to endorse or promote products derived from this software without\n  specific prior written permission.\n\nTHIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\" AND\nANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED\nWARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\nDISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR\nANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES\n(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;\nLOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON\nANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS\nSOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n*/\n\n#ifndef POCKETFFT_HDRONLY_H\n#define POCKETFFT_HDRONLY_H\n\n#ifndef __cplusplus\n#error This file is C++ and requires a C++ compiler.\n#endif\n\n#if !(__cplusplus >= 201103L || (defined(_MSVC_LANG) && _MSVC_LANG >= 201103L))\n#error This file requires at least C++11 support.\n#endif\n\n#ifndef POCKETFFT_CACHE_SIZE\n#define POCKETFFT_CACHE_SIZE 0\n#endif\n\n#include <cmath>\n#include <cstdlib>\n#include <stdexcept>\n#include <memory>\n#include <vector>\n#include <complex>\n#include <algorithm>\n#if POCKETFFT_CACHE_SIZE!=0\n#include <array>\n#include <mutex>\n#endif\n\n#ifndef POCKETFFT_NO_MULTITHREADING\n#include <mutex>\n#include <condition_variable>\n#include <thread>\n#include <queue>\n#include <atomic>\n#include <functional>\n#include <new>\n\n#ifdef POCKETFFT_PTHREADS\n#  include <pthread.h>\n#endif\n#endif\n\n#if defined(__GNUC__)\n#define POCKETFFT_NOINLINE __attribute__((noinline))\n#define POCKETFFT_RESTRICT __restrict__\n#elif defined(_MSC_VER)\n#define POCKETFFT_NOINLINE __declspec(noinline)\n#define POCKETFFT_RESTRICT __restrict\n#else\n#define POCKETFFT_NOINLINE\n#define POCKETFFT_RESTRICT\n#endif\n\nnamespace pocketfft {\n\nnamespace detail {\nusing std::size_t;\nusing std::ptrdiff_t;\n\n// Always use std:: for <cmath> functions\ntemplate <typename T> T cos(T) = delete;\ntemplate <typename T> T sin(T) = delete;\ntemplate <typename T> T sqrt(T) = delete;\n\nusing shape_t = std::vector<size_t>;\nusing stride_t = 
std::vector<ptrdiff_t>;\n\nconstexpr bool FORWARD  = true,\n               BACKWARD = false;\n\n// only enable vector support for gcc>=5.0 and clang>=5.0\n#ifndef POCKETFFT_NO_VECTORS\n#define POCKETFFT_NO_VECTORS\n#if defined(__INTEL_COMPILER)\n// do nothing. This is necessary because this compiler also sets __GNUC__.\n#elif defined(__clang__)\n// AppleClang has their own version numbering\n#ifdef __apple_build_version__\n#  if (__clang_major__ > 9) || (__clang_major__ == 9 && __clang_minor__ >= 1)\n#     undef POCKETFFT_NO_VECTORS\n#  endif\n#elif __clang_major__ >= 5\n#  undef POCKETFFT_NO_VECTORS\n#endif\n#elif defined(__GNUC__)\n#if __GNUC__>=5\n#undef POCKETFFT_NO_VECTORS\n#endif\n#endif\n#endif\n\ntemplate<typename T> struct VLEN { static constexpr size_t val=1; };\n\n#ifndef POCKETFFT_NO_VECTORS\n#if (defined(__AVX512F__))\ntemplate<> struct VLEN<float> { static constexpr size_t val=16; };\ntemplate<> struct VLEN<double> { static constexpr size_t val=8; };\n#elif (defined(__AVX__))\ntemplate<> struct VLEN<float> { static constexpr size_t val=8; };\ntemplate<> struct VLEN<double> { static constexpr size_t val=4; };\n#elif (defined(__SSE2__))\ntemplate<> struct VLEN<float> { static constexpr size_t val=4; };\ntemplate<> struct VLEN<double> { static constexpr size_t val=2; };\n#elif (defined(__VSX__))\ntemplate<> struct VLEN<float> { static constexpr size_t val=4; };\ntemplate<> struct VLEN<double> { static constexpr size_t val=2; };\n#elif (defined(__ARM_NEON__) || defined(__ARM_NEON))\ntemplate<> struct VLEN<float> { static constexpr size_t val=4; };\ntemplate<> struct VLEN<double> { static constexpr size_t val=2; };\n#else\n#define POCKETFFT_NO_VECTORS\n#endif\n#endif\n\n// std::aligned_alloc is a bit cursed ... 
it doesn't exist on MacOS < 10.15\n// and in musl, and other OSes seem to have even more peculiarities.\n// Let's unconditionally work around it for now.\n# if 0\n//#if (__cplusplus >= 201703L) && (!defined(__MINGW32__)) && (!defined(_MSC_VER)) && (__MAC_OS_X_VERSION_MIN_REQUIRED >= MAC_OS_X_VERSION_10_15)\ninline void *aligned_alloc(size_t align, size_t size)\n  {\n  // aligned_alloc() requires that the requested size is a multiple of \"align\"\n  void *ptr = ::aligned_alloc(align,(size+align-1)&(~(align-1)));\n  if (!ptr) throw std::bad_alloc();\n  return ptr;\n  }\ninline void aligned_dealloc(void *ptr)\n    { free(ptr); }\n#else // portable emulation\ninline void *aligned_alloc(size_t align, size_t size)\n  {\n  align = std::max(align, alignof(max_align_t));\n  void *ptr = malloc(size+align);\n  if (!ptr) throw std::bad_alloc();\n  void *res = reinterpret_cast<void *>\n    ((reinterpret_cast<uintptr_t>(ptr) & ~(uintptr_t(align-1))) + uintptr_t(align));\n  (reinterpret_cast<void**>(res))[-1] = ptr;\n  return res;\n  }\ninline void aligned_dealloc(void *ptr)\n  { if (ptr) free((reinterpret_cast<void**>(ptr))[-1]); }\n#endif\n\ntemplate<typename T> class arr\n  {\n  private:\n    T *p;\n    size_t sz;\n\n#if defined(POCKETFFT_NO_VECTORS)\n    static T *ralloc(size_t num)\n      {\n      if (num==0) return nullptr;\n      void *res = malloc(num*sizeof(T));\n      if (!res) throw std::bad_alloc();\n      return reinterpret_cast<T *>(res);\n      }\n    static void dealloc(T *ptr)\n      { free(ptr); }\n#else\n    static T *ralloc(size_t num)\n      {\n      if (num==0) return nullptr;\n      void *ptr = aligned_alloc(64, num*sizeof(T));\n      return static_cast<T*>(ptr);\n      }\n    static void dealloc(T *ptr)\n      { aligned_dealloc(ptr); }\n#endif\n\n  public:\n    arr() : p(0), sz(0) {}\n    arr(size_t n) : p(ralloc(n)), sz(n) {}\n    arr(arr &&other)\n      : p(other.p), sz(other.sz)\n      { other.p=nullptr; other.sz=0; }\n    ~arr() { dealloc(p); }\n\n    
void resize(size_t n)\n      {\n      if (n==sz) return;\n      dealloc(p);\n      p = ralloc(n);\n      sz = n;\n      }\n\n    T &operator[](size_t idx) { return p[idx]; }\n    const T &operator[](size_t idx) const { return p[idx]; }\n\n    T *data() { return p; }\n    const T *data() const { return p; }\n\n    size_t size() const { return sz; }\n  };\n\ntemplate<typename T> struct cmplx {\n  T r, i;\n  cmplx() {}\n  cmplx(T r_, T i_) : r(r_), i(i_) {}\n  void Set(T r_, T i_) { r=r_; i=i_; }\n  void Set(T r_) { r=r_; i=T(0); }\n  cmplx &operator+= (const cmplx &other)\n    { r+=other.r; i+=other.i; return *this; }\n  template<typename T2>cmplx &operator*= (T2 other)\n    { r*=other; i*=other; return *this; }\n  template<typename T2>cmplx &operator*= (const cmplx<T2> &other)\n    {\n    T tmp = r*other.r - i*other.i;\n    i = r*other.i + i*other.r;\n    r = tmp;\n    return *this;\n    }\n  template<typename T2>cmplx &operator+= (const cmplx<T2> &other)\n    { r+=other.r; i+=other.i; return *this; }\n  template<typename T2>cmplx &operator-= (const cmplx<T2> &other)\n    { r-=other.r; i-=other.i; return *this; }\n  template<typename T2> auto operator* (const T2 &other) const\n    -> cmplx<decltype(r*other)>\n    { return {r*other, i*other}; }\n  template<typename T2> auto operator+ (const cmplx<T2> &other) const\n    -> cmplx<decltype(r+other.r)>\n    { return {r+other.r, i+other.i}; }\n  template<typename T2> auto operator- (const cmplx<T2> &other) const\n    -> cmplx<decltype(r+other.r)>\n    { return {r-other.r, i-other.i}; }\n  template<typename T2> auto operator* (const cmplx<T2> &other) const\n    -> cmplx<decltype(r+other.r)>\n    { return {r*other.r-i*other.i, r*other.i + i*other.r}; }\n  template<bool fwd, typename T2> auto special_mul (const cmplx<T2> &other) const\n    -> cmplx<decltype(r+other.r)>\n    {\n    using Tres = cmplx<decltype(r+other.r)>;\n    return fwd ? 
Tres(r*other.r+i*other.i, i*other.r-r*other.i)\n               : Tres(r*other.r-i*other.i, r*other.i+i*other.r);\n    }\n};\ntemplate<typename T> inline void PM(T &a, T &b, T c, T d)\n  { a=c+d; b=c-d; }\ntemplate<typename T> inline void PMINPLACE(T &a, T &b)\n  { T t = a; a+=b; b=t-b; }\ntemplate<typename T> inline void MPINPLACE(T &a, T &b)\n  { T t = a; a-=b; b=t+b; }\ntemplate<typename T> cmplx<T> conj(const cmplx<T> &a)\n  { return {a.r, -a.i}; }\ntemplate<bool fwd, typename T, typename T2> void special_mul (const cmplx<T> &v1, const cmplx<T2> &v2, cmplx<T> &res)\n  {\n  res = fwd ? cmplx<T>(v1.r*v2.r+v1.i*v2.i, v1.i*v2.r-v1.r*v2.i)\n            : cmplx<T>(v1.r*v2.r-v1.i*v2.i, v1.r*v2.i+v1.i*v2.r);\n  }\n\ntemplate<typename T> void ROT90(cmplx<T> &a)\n  { auto tmp_=a.r; a.r=-a.i; a.i=tmp_; }\ntemplate<bool fwd, typename T> void ROTX90(cmplx<T> &a)\n  { auto tmp_= fwd ? -a.r : a.r; a.r = fwd ? a.i : -a.i; a.i=tmp_; }\n\n//\n// twiddle factor section\n//\ntemplate<typename T> class sincos_2pibyn\n  {\n  private:\n    using Thigh = typename std::conditional<(sizeof(T)>sizeof(double)), T, double>::type;\n    size_t N, mask, shift;\n    arr<cmplx<Thigh>> v1, v2;\n\n    static cmplx<Thigh> calc(size_t x, size_t n, Thigh ang)\n      {\n      x<<=3;\n      if (x<4*n) // first half\n        {\n        if (x<2*n) // first quadrant\n          {\n          if (x<n) return cmplx<Thigh>(std::cos(Thigh(x)*ang), std::sin(Thigh(x)*ang));\n          return cmplx<Thigh>(std::sin(Thigh(2*n-x)*ang), std::cos(Thigh(2*n-x)*ang));\n          }\n        else // second quadrant\n          {\n          x-=2*n;\n          if (x<n) return cmplx<Thigh>(-std::sin(Thigh(x)*ang), std::cos(Thigh(x)*ang));\n          return cmplx<Thigh>(-std::cos(Thigh(2*n-x)*ang), std::sin(Thigh(2*n-x)*ang));\n          }\n        }\n      else\n        {\n        x=8*n-x;\n        if (x<2*n) // third quadrant\n          {\n          if (x<n) return cmplx<Thigh>(std::cos(Thigh(x)*ang), 
-std::sin(Thigh(x)*ang));\n          return cmplx<Thigh>(std::sin(Thigh(2*n-x)*ang), -std::cos(Thigh(2*n-x)*ang));\n          }\n        else // fourth quadrant\n          {\n          x-=2*n;\n          if (x<n) return cmplx<Thigh>(-std::sin(Thigh(x)*ang), -std::cos(Thigh(x)*ang));\n          return cmplx<Thigh>(-std::cos(Thigh(2*n-x)*ang), -std::sin(Thigh(2*n-x)*ang));\n          }\n        }\n      }\n\n  public:\n    POCKETFFT_NOINLINE sincos_2pibyn(size_t n)\n      : N(n)\n      {\n      constexpr auto pi = 3.141592653589793238462643383279502884197L;\n      Thigh ang = Thigh(0.25L*pi/n);\n      size_t nval = (n+2)/2;\n      shift = 1;\n      while((size_t(1)<<shift)*(size_t(1)<<shift) < nval) ++shift;\n      mask = (size_t(1)<<shift)-1;\n      v1.resize(mask+1);\n      v1[0].Set(Thigh(1), Thigh(0));\n      for (size_t i=1; i<v1.size(); ++i)\n        v1[i]=calc(i,n,ang);\n      v2.resize((nval+mask)/(mask+1));\n      v2[0].Set(Thigh(1), Thigh(0));\n      for (size_t i=1; i<v2.size(); ++i)\n        v2[i]=calc(i*(mask+1),n,ang);\n      }\n\n    cmplx<T> operator[](size_t idx) const\n      {\n      if (2*idx<=N)\n        {\n        auto x1=v1[idx&mask], x2=v2[idx>>shift];\n        return cmplx<T>(T(x1.r*x2.r-x1.i*x2.i), T(x1.r*x2.i+x1.i*x2.r));\n        }\n      idx = N-idx;\n      auto x1=v1[idx&mask], x2=v2[idx>>shift];\n      return cmplx<T>(T(x1.r*x2.r-x1.i*x2.i), -T(x1.r*x2.i+x1.i*x2.r));\n      }\n  };\n\nstruct util // hack to avoid duplicate symbols\n  {\n  static POCKETFFT_NOINLINE size_t largest_prime_factor (size_t n)\n    {\n    size_t res=1;\n    while ((n&1)==0)\n      { res=2; n>>=1; }\n    for (size_t x=3; x*x<=n; x+=2)\n      while ((n%x)==0)\n        { res=x; n/=x; }\n    if (n>1) res=n;\n    return res;\n    }\n\n  static POCKETFFT_NOINLINE double cost_guess (size_t n)\n    {\n    constexpr double lfp=1.1; // penalty for non-hardcoded larger factors\n    size_t ni=n;\n    double result=0.;\n    while ((n&1)==0)\n      { result+=2; n>>=1; }\n    
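// trial-divide the remaining odd part: factors of 3 and 5 cost their own\n    // value, larger primes are scaled by the lfp penalty\n    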
for (size_t x=3; x*x<=n; x+=2)\n      while ((n%x)==0)\n        {\n        result+= (x<=5) ? double(x) : lfp*double(x); // penalize larger prime factors\n        n/=x;\n        }\n    if (n>1) result+=(n<=5) ? double(n) : lfp*double(n);\n    return result*double(ni);\n    }\n\n  /* returns the smallest composite of 2, 3, 5, 7 and 11 which is >= n */\n  static POCKETFFT_NOINLINE size_t good_size_cmplx(size_t n)\n    {\n    if (n<=12) return n;\n\n    size_t bestfac=2*n;\n    for (size_t f11=1; f11<bestfac; f11*=11)\n      for (size_t f117=f11; f117<bestfac; f117*=7)\n        for (size_t f1175=f117; f1175<bestfac; f1175*=5)\n          {\n          size_t x=f1175;\n          while (x<n) x*=2;\n          for (;;)\n            {\n            if (x<n)\n              x*=3;\n            else if (x>n)\n              {\n              if (x<bestfac) bestfac=x;\n              if (x&1) break;\n              x>>=1;\n              }\n            else\n              return n;\n            }\n          }\n    return bestfac;\n    }\n\n  /* returns the smallest composite of 2, 3, 5 which is >= n */\n  static POCKETFFT_NOINLINE size_t good_size_real(size_t n)\n    {\n    if (n<=6) return n;\n\n    size_t bestfac=2*n;\n    for (size_t f5=1; f5<bestfac; f5*=5)\n      {\n      size_t x = f5;\n      while (x<n) x *= 2;\n      for (;;)\n        {\n        if (x<n)\n          x*=3;\n        else if (x>n)\n          {\n          if (x<bestfac) bestfac=x;\n          if (x&1) break;\n          x>>=1;\n          }\n        else\n          return n;\n        }\n      }\n    return bestfac;\n    }\n\n  /* returns the largest composite of 2, 3, 5, 7 and 11 which is <= n */\n  static POCKETFFT_NOINLINE size_t prev_good_size_cmplx(size_t n)\n  {\n    if (n<=12) return n;\n\n    size_t bestfound = 1;\n    for (size_t f11 = 1;f11 <= n; f11 *= 11)\n      for (size_t f117 = f11; f117 <= n; f117 *= 7)\n        for (size_t f1175 = f117; f1175 <= n; f1175 *= 5)\n        {\n          size_t x = f1175;\n    
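      // fill x with factors of 2, then walk the 2^a*3^b lattice below,\n          // keeping the largest candidate that stays <= n\n    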
      while (x*2 <= n) x *= 2;\n          if (x > bestfound) bestfound = x;\n          while (true)\n          {\n            if (x * 3 <= n) x *= 3;\n            else if (x % 2 == 0) x /= 2;\n            else break;\n\n            if (x > bestfound) bestfound = x;\n          }\n        }\n    return bestfound;\n  }\n\n  /* returns the largest composite of 2, 3, 5 which is <= n */\n  static POCKETFFT_NOINLINE size_t prev_good_size_real(size_t n)\n  {\n    if (n<=6) return n;\n\n    size_t bestfound = 1;\n    for (size_t f5 = 1; f5 <= n; f5 *= 5)\n    {\n      size_t x = f5;\n      while (x*2 <= n) x *= 2;\n      if (x > bestfound) bestfound = x;\n      while (true)\n      {\n        if (x * 3 <= n) x *= 3;\n        else if (x % 2 == 0) x /= 2;\n        else break;\n\n        if (x > bestfound) bestfound = x;\n      }\n    }\n    return bestfound;\n  }\n\n  static size_t prod(const shape_t &shape)\n    {\n    size_t res=1;\n    for (auto sz: shape)\n      res*=sz;\n    return res;\n    }\n\n  static POCKETFFT_NOINLINE void sanity_check(const shape_t &shape,\n    const stride_t &stride_in, const stride_t &stride_out, bool inplace)\n    {\n    auto ndim = shape.size();\n    if (ndim<1) throw std::runtime_error(\"ndim must be >= 1\");\n    if ((stride_in.size()!=ndim) || (stride_out.size()!=ndim))\n      throw std::runtime_error(\"stride dimension mismatch\");\n    if (inplace && (stride_in!=stride_out))\n      throw std::runtime_error(\"stride mismatch\");\n    }\n\n  static POCKETFFT_NOINLINE void sanity_check(const shape_t &shape,\n    const stride_t &stride_in, const stride_t &stride_out, bool inplace,\n    const shape_t &axes)\n    {\n    sanity_check(shape, stride_in, stride_out, inplace);\n    auto ndim = shape.size();\n    shape_t tmp(ndim,0);\n    for (auto ax : axes)\n      {\n      if (ax>=ndim) throw std::invalid_argument(\"bad axis number\");\n      if (++tmp[ax]>1) throw std::invalid_argument(\"axis specified repeatedly\");\n      }\n    }\n\n  static 
POCKETFFT_NOINLINE void sanity_check(const shape_t &shape,\n    const stride_t &stride_in, const stride_t &stride_out, bool inplace,\n    size_t axis)\n    {\n    sanity_check(shape, stride_in, stride_out, inplace);\n    if (axis>=shape.size()) throw std::invalid_argument(\"bad axis number\");\n    }\n\n#ifdef POCKETFFT_NO_MULTITHREADING\n  static size_t thread_count (size_t /*nthreads*/, const shape_t &/*shape*/,\n    size_t /*axis*/, size_t /*vlen*/)\n    { return 1; }\n#else\n  static size_t thread_count (size_t nthreads, const shape_t &shape,\n    size_t axis, size_t vlen)\n    {\n    if (nthreads==1) return 1;\n    size_t size = prod(shape);\n    size_t parallel = size / (shape[axis] * vlen);\n    if (shape[axis] < 1000)\n      parallel /= 4;\n    size_t max_threads = nthreads == 0 ?\n      std::thread::hardware_concurrency() : nthreads;\n    return std::max(size_t(1), std::min(parallel, max_threads));\n    }\n#endif\n  };\n\nnamespace threading {\n\n#ifdef POCKETFFT_NO_MULTITHREADING\n\nconstexpr inline size_t thread_id() { return 0; }\nconstexpr inline size_t num_threads() { return 1; }\n\ntemplate <typename Func>\nvoid thread_map(size_t /* nthreads */, Func f)\n  { f(); }\n\n#else\n\ninline size_t &thread_id()\n  {\n  static thread_local size_t thread_id_=0;\n  return thread_id_;\n  }\ninline size_t &num_threads()\n  {\n  static thread_local size_t num_threads_=1;\n  return num_threads_;\n  }\nstatic const size_t max_threads = std::max(1u, std::thread::hardware_concurrency());\n\nclass latch\n  {\n    std::atomic<size_t> num_left_;\n    std::mutex mut_;\n    std::condition_variable completed_;\n    using lock_t = std::unique_lock<std::mutex>;\n\n  public:\n    latch(size_t n): num_left_(n) {}\n\n    void count_down()\n      {\n      lock_t lock(mut_);\n      if (--num_left_)\n        return;\n      completed_.notify_all();\n      }\n\n    void wait()\n      {\n      lock_t lock(mut_);\n      completed_.wait(lock, [this]{ return is_ready(); });\n      }\n    
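// only ever called from wait() with mut_ held, so the check is race-free\n    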
bool is_ready() { return num_left_ == 0; }\n  };\n\ntemplate <typename T> class concurrent_queue\n  {\n    std::queue<T> q_;\n    std::mutex mut_;\n    std::atomic<size_t> size_;\n    using lock_t = std::lock_guard<std::mutex>;\n\n  public:\n\n    void push(T val)\n      {\n      lock_t lock(mut_);\n      ++size_;\n      q_.push(std::move(val));\n      }\n\n    bool try_pop(T &val)\n      {\n      if (size_ == 0) return false;\n      lock_t lock(mut_);\n      // Queue might have been emptied while we acquired the lock\n      if (q_.empty()) return false;\n\n      val = std::move(q_.front());\n      --size_;\n      q_.pop();\n      return true;\n      }\n\n    bool empty() const { return size_==0; }\n  };\n\n// C++ allocator with support for over-aligned types\ntemplate <typename T> struct aligned_allocator\n  {\n  using value_type = T;\n  template <class U>\n  aligned_allocator(const aligned_allocator<U>&) {}\n  aligned_allocator() = default;\n\n  T *allocate(size_t n)\n    {\n    void* mem = aligned_alloc(alignof(T), n*sizeof(T));\n    return static_cast<T*>(mem);\n    }\n\n  void deallocate(T *p, size_t /*n*/)\n    { aligned_dealloc(p); }\n  };\n\nclass thread_pool\n  {\n    // A reasonable guess, probably close enough for most hardware\n    static constexpr size_t cache_line_size = 64;\n    struct alignas(cache_line_size) worker\n      {\n      std::thread thread;\n      std::condition_variable work_ready;\n      std::mutex mut;\n      std::atomic_flag busy_flag = ATOMIC_FLAG_INIT;\n      std::function<void()> work;\n\n      void worker_main(\n        std::atomic<bool> &shutdown_flag,\n        std::atomic<size_t> &unscheduled_tasks,\n        concurrent_queue<std::function<void()>> &overflow_work)\n        {\n        using lock_t = std::unique_lock<std::mutex>;\n        bool expect_work = true;\n        while (!shutdown_flag || expect_work)\n          {\n          std::function<void()> local_work;\n          if (expect_work || unscheduled_tasks == 0)\n            
{\n            lock_t lock(mut);\n            // Wait until there is work to be executed\n            work_ready.wait(lock, [&]{ return (work || shutdown_flag); });\n            local_work.swap(work);\n            expect_work = false;\n            }\n\n          bool marked_busy = false;\n          if (local_work)\n            {\n            marked_busy = true;\n            local_work();\n            }\n\n          if (!overflow_work.empty())\n            {\n            if (!marked_busy && busy_flag.test_and_set())\n              {\n              expect_work = true;\n              continue;\n              }\n            marked_busy = true;\n\n            while (overflow_work.try_pop(local_work))\n              {\n              --unscheduled_tasks;\n              local_work();\n              }\n            }\n\n          if (marked_busy) busy_flag.clear();\n          }\n        }\n      };\n\n    concurrent_queue<std::function<void()>> overflow_work_;\n    std::mutex mut_;\n    std::vector<worker, aligned_allocator<worker>> workers_;\n    std::atomic<bool> shutdown_;\n    std::atomic<size_t> unscheduled_tasks_;\n    using lock_t = std::lock_guard<std::mutex>;\n\n    void create_threads()\n      {\n      lock_t lock(mut_);\n      size_t nthreads=workers_.size();\n      for (size_t i=0; i<nthreads; ++i)\n        {\n        try\n          {\n          auto *worker = &workers_[i];\n          worker->busy_flag.clear();\n          worker->work = nullptr;\n          worker->thread = std::thread([worker, this]\n            {\n            worker->worker_main(shutdown_, unscheduled_tasks_, overflow_work_);\n            });\n          }\n        catch (...)\n          {\n          shutdown_locked();\n          throw;\n          }\n        }\n      }\n\n    void shutdown_locked()\n      {\n      shutdown_ = true;\n      for (auto &worker : workers_)\n        worker.work_ready.notify_all();\n\n      for (auto &worker : workers_)\n        if (worker.thread.joinable())\n          
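// workers whose thread never started (e.g. after a failed create) are skipped\n          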
worker.thread.join();\n      }\n\n  public:\n    explicit thread_pool(size_t nthreads):\n      workers_(nthreads)\n      { create_threads(); }\n\n    thread_pool(): thread_pool(max_threads) {}\n\n    ~thread_pool() { shutdown(); }\n\n    void submit(std::function<void()> work)\n      {\n      lock_t lock(mut_);\n      if (shutdown_)\n        throw std::runtime_error(\"Work item submitted after shutdown\");\n\n      ++unscheduled_tasks_;\n\n      // First check for any idle workers and wake those\n      for (auto &worker : workers_)\n        if (!worker.busy_flag.test_and_set())\n          {\n          --unscheduled_tasks_;\n          {\n          lock_t lock(worker.mut);\n          worker.work = std::move(work);\n          }\n          worker.work_ready.notify_one();\n          return;\n          }\n\n      // If no workers were idle, push onto the overflow queue for later\n      overflow_work_.push(std::move(work));\n      }\n\n    void shutdown()\n      {\n      lock_t lock(mut_);\n      shutdown_locked();\n      }\n\n    void restart()\n      {\n      shutdown_ = false;\n      create_threads();\n      }\n  };\n\ninline thread_pool & get_pool()\n  {\n  static thread_pool pool;\n#ifdef POCKETFFT_PTHREADS\n  static std::once_flag f;\n  std::call_once(f,\n    []{\n    pthread_atfork(\n      +[]{ get_pool().shutdown(); },  // prepare\n      +[]{ get_pool().restart(); },   // parent\n      +[]{ get_pool().restart(); }    // child\n      );\n    });\n#endif\n\n  return pool;\n  }\n\n/** Map a function f over nthreads */\ntemplate <typename Func>\nvoid thread_map(size_t nthreads, Func f)\n  {\n  if (nthreads == 0)\n    nthreads = max_threads;\n\n  if (nthreads == 1)\n    { f(); return; }\n\n  auto & pool = get_pool();\n  latch counter(nthreads);\n  std::exception_ptr ex;\n  std::mutex ex_mut;\n  for (size_t i=0; i<nthreads; ++i)\n    {\n    pool.submit(\n      [&f, &counter, &ex, &ex_mut, i, nthreads] {\n      thread_id() = i;\n      num_threads() = nthreads;\n      try 
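// keep the most recent exception; it is rethrown after counter.wait()\n      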
{ f(); }\n      catch (...)\n        {\n        std::lock_guard<std::mutex> lock(ex_mut);\n        ex = std::current_exception();\n        }\n      counter.count_down();\n      });\n    }\n  counter.wait();\n  if (ex)\n    std::rethrow_exception(ex);\n  }\n\n#endif\n\n}\n\n//\n// complex FFTPACK transforms\n//\n\ntemplate<typename T0> class cfftp\n  {\n  private:\n    struct fctdata\n      {\n      size_t fct;\n      cmplx<T0> *tw, *tws;\n      };\n\n    size_t length;\n    arr<cmplx<T0>> mem;\n    std::vector<fctdata> fact;\n\n    void add_factor(size_t factor)\n      { fact.push_back({factor, nullptr, nullptr}); }\n\ntemplate<bool fwd, typename T> void pass2 (size_t ido, size_t l1,\n  const T * POCKETFFT_RESTRICT cc, T * POCKETFFT_RESTRICT ch,\n  const cmplx<T0> * POCKETFFT_RESTRICT wa) const\n  {\n  auto CH = [ch,ido,l1](size_t a, size_t b, size_t c) -> T&\n    { return ch[a+ido*(b+l1*c)]; };\n  auto CC = [cc,ido](size_t a, size_t b, size_t c) -> const T&\n    { return cc[a+ido*(b+2*c)]; };\n  auto WA = [wa, ido](size_t x, size_t i)\n    { return wa[i-1+x*(ido-1)]; };\n\n  if (ido==1)\n    for (size_t k=0; k<l1; ++k)\n      {\n      CH(0,k,0) = CC(0,0,k)+CC(0,1,k);\n      CH(0,k,1) = CC(0,0,k)-CC(0,1,k);\n      }\n  else\n    for (size_t k=0; k<l1; ++k)\n      {\n      CH(0,k,0) = CC(0,0,k)+CC(0,1,k);\n      CH(0,k,1) = CC(0,0,k)-CC(0,1,k);\n      for (size_t i=1; i<ido; ++i)\n        {\n        CH(i,k,0) = CC(i,0,k)+CC(i,1,k);\n        special_mul<fwd>(CC(i,0,k)-CC(i,1,k),WA(0,i),CH(i,k,1));\n        }\n      }\n  }\n\n#define POCKETFFT_PREP3(idx) \\\n        T t0 = CC(idx,0,k), t1, t2; \\\n        PM (t1,t2,CC(idx,1,k),CC(idx,2,k)); \\\n        CH(idx,k,0)=t0+t1;\n#define POCKETFFT_PARTSTEP3a(u1,u2,twr,twi) \\\n        { \\\n        T ca=t0+t1*twr; \\\n        T cb{-t2.i*twi, t2.r*twi}; \\\n        PM(CH(0,k,u1),CH(0,k,u2),ca,cb) ;\\\n        }\n#define POCKETFFT_PARTSTEP3b(u1,u2,twr,twi) \\\n        { \\\n        T ca=t0+t1*twr; \\\n        T cb{-t2.i*twi, 
t2.r*twi}; \\\n        special_mul<fwd>(ca+cb,WA(u1-1,i),CH(i,k,u1)); \\\n        special_mul<fwd>(ca-cb,WA(u2-1,i),CH(i,k,u2)); \\\n        }\ntemplate<bool fwd, typename T> void pass3 (size_t ido, size_t l1,\n  const T * POCKETFFT_RESTRICT cc, T * POCKETFFT_RESTRICT ch,\n  const cmplx<T0> * POCKETFFT_RESTRICT wa) const\n  {\n  constexpr T0 tw1r=-0.5,\n               tw1i= (fwd ? -1: 1) * T0(0.8660254037844386467637231707529362L);\n\n  auto CH = [ch,ido,l1](size_t a, size_t b, size_t c) -> T&\n    { return ch[a+ido*(b+l1*c)]; };\n  auto CC = [cc,ido](size_t a, size_t b, size_t c) -> const T&\n    { return cc[a+ido*(b+3*c)]; };\n  auto WA = [wa, ido](size_t x, size_t i)\n    { return wa[i-1+x*(ido-1)]; };\n\n  if (ido==1)\n    for (size_t k=0; k<l1; ++k)\n      {\n      POCKETFFT_PREP3(0)\n      POCKETFFT_PARTSTEP3a(1,2,tw1r,tw1i)\n      }\n  else\n    for (size_t k=0; k<l1; ++k)\n      {\n      {\n      POCKETFFT_PREP3(0)\n      POCKETFFT_PARTSTEP3a(1,2,tw1r,tw1i)\n      }\n      for (size_t i=1; i<ido; ++i)\n        {\n        POCKETFFT_PREP3(i)\n        POCKETFFT_PARTSTEP3b(1,2,tw1r,tw1i)\n        }\n      }\n  }\n\n#undef POCKETFFT_PARTSTEP3b\n#undef POCKETFFT_PARTSTEP3a\n#undef POCKETFFT_PREP3\n\ntemplate<bool fwd, typename T> void pass4 (size_t ido, size_t l1,\n  const T * POCKETFFT_RESTRICT cc, T * POCKETFFT_RESTRICT ch,\n  const cmplx<T0> * POCKETFFT_RESTRICT wa) const\n  {\n  auto CH = [ch,ido,l1](size_t a, size_t b, size_t c) -> T&\n    { return ch[a+ido*(b+l1*c)]; };\n  auto CC = [cc,ido](size_t a, size_t b, size_t c) -> const T&\n    { return cc[a+ido*(b+4*c)]; };\n  auto WA = [wa, ido](size_t x, size_t i)\n    { return wa[i-1+x*(ido-1)]; };\n\n  if (ido==1)\n    for (size_t k=0; k<l1; ++k)\n      {\n      T t1, t2, t3, t4;\n      PM(t2,t1,CC(0,0,k),CC(0,2,k));\n      PM(t3,t4,CC(0,1,k),CC(0,3,k));\n      ROTX90<fwd>(t4);\n      PM(CH(0,k,0),CH(0,k,2),t2,t3);\n      PM(CH(0,k,1),CH(0,k,3),t1,t4);\n      }\n  else\n    for (size_t k=0; k<l1; ++k)\n      
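// i==0 column first (its twiddle factors are 1), then the twiddled columns\n      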
{\n      {\n      T t1, t2, t3, t4;\n      PM(t2,t1,CC(0,0,k),CC(0,2,k));\n      PM(t3,t4,CC(0,1,k),CC(0,3,k));\n      ROTX90<fwd>(t4);\n      PM(CH(0,k,0),CH(0,k,2),t2,t3);\n      PM(CH(0,k,1),CH(0,k,3),t1,t4);\n      }\n      for (size_t i=1; i<ido; ++i)\n        {\n        T t1, t2, t3, t4;\n        T cc0=CC(i,0,k), cc1=CC(i,1,k),cc2=CC(i,2,k),cc3=CC(i,3,k);\n        PM(t2,t1,cc0,cc2);\n        PM(t3,t4,cc1,cc3);\n        ROTX90<fwd>(t4);\n        CH(i,k,0) = t2+t3;\n        special_mul<fwd>(t1+t4,WA(0,i),CH(i,k,1));\n        special_mul<fwd>(t2-t3,WA(1,i),CH(i,k,2));\n        special_mul<fwd>(t1-t4,WA(2,i),CH(i,k,3));\n        }\n      }\n  }\n\n#define POCKETFFT_PREP5(idx) \\\n        T t0 = CC(idx,0,k), t1, t2, t3, t4; \\\n        PM (t1,t4,CC(idx,1,k),CC(idx,4,k)); \\\n        PM (t2,t3,CC(idx,2,k),CC(idx,3,k)); \\\n        CH(idx,k,0).r=t0.r+t1.r+t2.r; \\\n        CH(idx,k,0).i=t0.i+t1.i+t2.i;\n\n#define POCKETFFT_PARTSTEP5a(u1,u2,twar,twbr,twai,twbi) \\\n        { \\\n        T ca,cb; \\\n        ca.r=t0.r+twar*t1.r+twbr*t2.r; \\\n        ca.i=t0.i+twar*t1.i+twbr*t2.i; \\\n        cb.i=twai*t4.r twbi*t3.r; \\\n        cb.r=-(twai*t4.i twbi*t3.i); \\\n        PM(CH(0,k,u1),CH(0,k,u2),ca,cb); \\\n        }\n\n#define POCKETFFT_PARTSTEP5b(u1,u2,twar,twbr,twai,twbi) \\\n        { \\\n        T ca,cb,da,db; \\\n        ca.r=t0.r+twar*t1.r+twbr*t2.r; \\\n        ca.i=t0.i+twar*t1.i+twbr*t2.i; \\\n        cb.i=twai*t4.r twbi*t3.r; \\\n        cb.r=-(twai*t4.i twbi*t3.i); \\\n        special_mul<fwd>(ca+cb,WA(u1-1,i),CH(i,k,u1)); \\\n        special_mul<fwd>(ca-cb,WA(u2-1,i),CH(i,k,u2)); \\\n        }\ntemplate<bool fwd, typename T> void pass5 (size_t ido, size_t l1,\n  const T * POCKETFFT_RESTRICT cc, T * POCKETFFT_RESTRICT ch,\n  const cmplx<T0> * POCKETFFT_RESTRICT wa) const\n  {\n  constexpr T0 tw1r= T0(0.3090169943749474241022934171828191L),\n               tw1i= (fwd ? 
-1: 1) * T0(0.9510565162951535721164393333793821L),\n               tw2r= T0(-0.8090169943749474241022934171828191L),\n               tw2i= (fwd ? -1: 1) * T0(0.5877852522924731291687059546390728L);\n\n  auto CH = [ch,ido,l1](size_t a, size_t b, size_t c) -> T&\n    { return ch[a+ido*(b+l1*c)]; };\n  auto CC = [cc,ido](size_t a, size_t b, size_t c) -> const T&\n    { return cc[a+ido*(b+5*c)]; };\n  auto WA = [wa, ido](size_t x, size_t i)\n    { return wa[i-1+x*(ido-1)]; };\n\n  if (ido==1)\n    for (size_t k=0; k<l1; ++k)\n      {\n      POCKETFFT_PREP5(0)\n      POCKETFFT_PARTSTEP5a(1,4,tw1r,tw2r,+tw1i,+tw2i)\n      POCKETFFT_PARTSTEP5a(2,3,tw2r,tw1r,+tw2i,-tw1i)\n      }\n  else\n    for (size_t k=0; k<l1; ++k)\n      {\n      {\n      POCKETFFT_PREP5(0)\n      POCKETFFT_PARTSTEP5a(1,4,tw1r,tw2r,+tw1i,+tw2i)\n      POCKETFFT_PARTSTEP5a(2,3,tw2r,tw1r,+tw2i,-tw1i)\n      }\n      for (size_t i=1; i<ido; ++i)\n        {\n        POCKETFFT_PREP5(i)\n        POCKETFFT_PARTSTEP5b(1,4,tw1r,tw2r,+tw1i,+tw2i)\n        POCKETFFT_PARTSTEP5b(2,3,tw2r,tw1r,+tw2i,-tw1i)\n        }\n      }\n  }\n\n#undef POCKETFFT_PARTSTEP5b\n#undef POCKETFFT_PARTSTEP5a\n#undef POCKETFFT_PREP5\n\n#define POCKETFFT_PREP7(idx) \\\n        T t1 = CC(idx,0,k), t2, t3, t4, t5, t6, t7; \\\n        PM (t2,t7,CC(idx,1,k),CC(idx,6,k)); \\\n        PM (t3,t6,CC(idx,2,k),CC(idx,5,k)); \\\n        PM (t4,t5,CC(idx,3,k),CC(idx,4,k)); \\\n        CH(idx,k,0).r=t1.r+t2.r+t3.r+t4.r; \\\n        CH(idx,k,0).i=t1.i+t2.i+t3.i+t4.i;\n\n#define POCKETFFT_PARTSTEP7a0(u1,u2,x1,x2,x3,y1,y2,y3,out1,out2) \\\n        { \\\n        T ca,cb; \\\n        ca.r=t1.r+x1*t2.r+x2*t3.r+x3*t4.r; \\\n        ca.i=t1.i+x1*t2.i+x2*t3.i+x3*t4.i; \\\n        cb.i=y1*t7.r y2*t6.r y3*t5.r; \\\n        cb.r=-(y1*t7.i y2*t6.i y3*t5.i); \\\n        PM(out1,out2,ca,cb); \\\n        }\n#define POCKETFFT_PARTSTEP7a(u1,u2,x1,x2,x3,y1,y2,y3) \\\n        POCKETFFT_PARTSTEP7a0(u1,u2,x1,x2,x3,y1,y2,y3,CH(0,k,u1),CH(0,k,u2))\n#define 
POCKETFFT_PARTSTEP7(u1,u2,x1,x2,x3,y1,y2,y3) \\\n        { \\\n        T da,db; \\\n        POCKETFFT_PARTSTEP7a0(u1,u2,x1,x2,x3,y1,y2,y3,da,db) \\\n        special_mul<fwd>(da,WA(u1-1,i),CH(i,k,u1)); \\\n        special_mul<fwd>(db,WA(u2-1,i),CH(i,k,u2)); \\\n        }\n\ntemplate<bool fwd, typename T> void pass7(size_t ido, size_t l1,\n  const T * POCKETFFT_RESTRICT cc, T * POCKETFFT_RESTRICT ch,\n  const cmplx<T0> * POCKETFFT_RESTRICT wa) const\n  {\n  constexpr T0 tw1r= T0(0.6234898018587335305250048840042398L),\n               tw1i= (fwd ? -1 : 1) * T0(0.7818314824680298087084445266740578L),\n               tw2r= T0(-0.2225209339563144042889025644967948L),\n               tw2i= (fwd ? -1 : 1) * T0(0.9749279121818236070181316829939312L),\n               tw3r= T0(-0.9009688679024191262361023195074451L),\n               tw3i= (fwd ? -1 : 1) * T0(0.433883739117558120475768332848359L);\n\n  auto CH = [ch,ido,l1](size_t a, size_t b, size_t c) -> T&\n    { return ch[a+ido*(b+l1*c)]; };\n  auto CC = [cc,ido](size_t a, size_t b, size_t c) -> const T&\n    { return cc[a+ido*(b+7*c)]; };\n  auto WA = [wa, ido](size_t x, size_t i)\n    { return wa[i-1+x*(ido-1)]; };\n\n  if (ido==1)\n    for (size_t k=0; k<l1; ++k)\n      {\n      POCKETFFT_PREP7(0)\n      POCKETFFT_PARTSTEP7a(1,6,tw1r,tw2r,tw3r,+tw1i,+tw2i,+tw3i)\n      POCKETFFT_PARTSTEP7a(2,5,tw2r,tw3r,tw1r,+tw2i,-tw3i,-tw1i)\n      POCKETFFT_PARTSTEP7a(3,4,tw3r,tw1r,tw2r,+tw3i,-tw1i,+tw2i)\n      }\n  else\n    for (size_t k=0; k<l1; ++k)\n      {\n      {\n      POCKETFFT_PREP7(0)\n      POCKETFFT_PARTSTEP7a(1,6,tw1r,tw2r,tw3r,+tw1i,+tw2i,+tw3i)\n      POCKETFFT_PARTSTEP7a(2,5,tw2r,tw3r,tw1r,+tw2i,-tw3i,-tw1i)\n      POCKETFFT_PARTSTEP7a(3,4,tw3r,tw1r,tw2r,+tw3i,-tw1i,+tw2i)\n      }\n      for (size_t i=1; i<ido; ++i)\n        {\n        POCKETFFT_PREP7(i)\n        POCKETFFT_PARTSTEP7(1,6,tw1r,tw2r,tw3r,+tw1i,+tw2i,+tw3i)\n        POCKETFFT_PARTSTEP7(2,5,tw2r,tw3r,tw1r,+tw2i,-tw3i,-tw1i)\n        
POCKETFFT_PARTSTEP7(3,4,tw3r,tw1r,tw2r,+tw3i,-tw1i,+tw2i)\n        }\n      }\n  }\n\n#undef POCKETFFT_PARTSTEP7\n#undef POCKETFFT_PARTSTEP7a0\n#undef POCKETFFT_PARTSTEP7a\n#undef POCKETFFT_PREP7\n\ntemplate <bool fwd, typename T> void ROTX45(T &a) const\n  {\n  constexpr T0 hsqt2=T0(0.707106781186547524400844362104849L);\n  if (fwd)\n    { auto tmp_=a.r; a.r=hsqt2*(a.r+a.i); a.i=hsqt2*(a.i-tmp_); }\n  else\n    { auto tmp_=a.r; a.r=hsqt2*(a.r-a.i); a.i=hsqt2*(a.i+tmp_); }\n  }\ntemplate <bool fwd, typename T> void ROTX135(T &a) const\n  {\n  constexpr T0 hsqt2=T0(0.707106781186547524400844362104849L);\n  if (fwd)\n    { auto tmp_=a.r; a.r=hsqt2*(a.i-a.r); a.i=hsqt2*(-tmp_-a.i); }\n  else\n    { auto tmp_=a.r; a.r=hsqt2*(-a.r-a.i); a.i=hsqt2*(tmp_-a.i); }\n  }\n\ntemplate<bool fwd, typename T> void pass8 (size_t ido, size_t l1,\n  const T * POCKETFFT_RESTRICT cc, T * POCKETFFT_RESTRICT ch,\n  const cmplx<T0> * POCKETFFT_RESTRICT wa) const\n  {\n  auto CH = [ch,ido,l1](size_t a, size_t b, size_t c) -> T&\n    { return ch[a+ido*(b+l1*c)]; };\n  auto CC = [cc,ido](size_t a, size_t b, size_t c) -> const T&\n    { return cc[a+ido*(b+8*c)]; };\n  auto WA = [wa, ido](size_t x, size_t i)\n    { return wa[i-1+x*(ido-1)]; };\n\n  if (ido==1)\n    for (size_t k=0; k<l1; ++k)\n      {\n      T a0, a1, a2, a3, a4, a5, a6, a7;\n      PM(a1,a5,CC(0,1,k),CC(0,5,k));\n      PM(a3,a7,CC(0,3,k),CC(0,7,k));\n      PMINPLACE(a1,a3);\n      ROTX90<fwd>(a3);\n\n      ROTX90<fwd>(a7);\n      PMINPLACE(a5,a7);\n      ROTX45<fwd>(a5);\n      ROTX135<fwd>(a7);\n\n      PM(a0,a4,CC(0,0,k),CC(0,4,k));\n      PM(a2,a6,CC(0,2,k),CC(0,6,k));\n      PM(CH(0,k,0),CH(0,k,4),a0+a2,a1);\n      PM(CH(0,k,2),CH(0,k,6),a0-a2,a3);\n      ROTX90<fwd>(a6);\n      PM(CH(0,k,1),CH(0,k,5),a4+a6,a5);\n      PM(CH(0,k,3),CH(0,k,7),a4-a6,a7);\n      }\n  else\n    for (size_t k=0; k<l1; ++k)\n      {\n      {\n      T a0, a1, a2, a3, a4, a5, a6, a7;\n      PM(a1,a5,CC(0,1,k),CC(0,5,k));\n      
PM(a3,a7,CC(0,3,k),CC(0,7,k));\n      PMINPLACE(a1,a3);\n      ROTX90<fwd>(a3);\n\n      ROTX90<fwd>(a7);\n      PMINPLACE(a5,a7);\n      ROTX45<fwd>(a5);\n      ROTX135<fwd>(a7);\n\n      PM(a0,a4,CC(0,0,k),CC(0,4,k));\n      PM(a2,a6,CC(0,2,k),CC(0,6,k));\n      PM(CH(0,k,0),CH(0,k,4),a0+a2,a1);\n      PM(CH(0,k,2),CH(0,k,6),a0-a2,a3);\n      ROTX90<fwd>(a6);\n      PM(CH(0,k,1),CH(0,k,5),a4+a6,a5);\n      PM(CH(0,k,3),CH(0,k,7),a4-a6,a7);\n      }\n      for (size_t i=1; i<ido; ++i)\n        {\n        T a0, a1, a2, a3, a4, a5, a6, a7;\n        PM(a1,a5,CC(i,1,k),CC(i,5,k));\n        PM(a3,a7,CC(i,3,k),CC(i,7,k));\n        ROTX90<fwd>(a7);\n        PMINPLACE(a1,a3);\n        ROTX90<fwd>(a3);\n        PMINPLACE(a5,a7);\n        ROTX45<fwd>(a5);\n        ROTX135<fwd>(a7);\n        PM(a0,a4,CC(i,0,k),CC(i,4,k));\n        PM(a2,a6,CC(i,2,k),CC(i,6,k));\n        PMINPLACE(a0,a2);\n        CH(i,k,0) = a0+a1;\n        special_mul<fwd>(a0-a1,WA(3,i),CH(i,k,4));\n        special_mul<fwd>(a2+a3,WA(1,i),CH(i,k,2));\n        special_mul<fwd>(a2-a3,WA(5,i),CH(i,k,6));\n        ROTX90<fwd>(a6);\n        PMINPLACE(a4,a6);\n        special_mul<fwd>(a4+a5,WA(0,i),CH(i,k,1));\n        special_mul<fwd>(a4-a5,WA(4,i),CH(i,k,5));\n        special_mul<fwd>(a6+a7,WA(2,i),CH(i,k,3));\n        special_mul<fwd>(a6-a7,WA(6,i),CH(i,k,7));\n        }\n      }\n   }\n\n\n#define POCKETFFT_PREP11(idx) \\\n        T t1 = CC(idx,0,k), t2, t3, t4, t5, t6, t7, t8, t9, t10, t11; \\\n        PM (t2,t11,CC(idx,1,k),CC(idx,10,k)); \\\n        PM (t3,t10,CC(idx,2,k),CC(idx, 9,k)); \\\n        PM (t4,t9 ,CC(idx,3,k),CC(idx, 8,k)); \\\n        PM (t5,t8 ,CC(idx,4,k),CC(idx, 7,k)); \\\n        PM (t6,t7 ,CC(idx,5,k),CC(idx, 6,k)); \\\n        CH(idx,k,0).r=t1.r+t2.r+t3.r+t4.r+t5.r+t6.r; \\\n        CH(idx,k,0).i=t1.i+t2.i+t3.i+t4.i+t5.i+t6.i;\n\n#define POCKETFFT_PARTSTEP11a0(u1,u2,x1,x2,x3,x4,x5,y1,y2,y3,y4,y5,out1,out2) \\\n        { \\\n        T ca = t1 + t2*x1 + t3*x2 + t4*x3 + t5*x4 +t6*x5, \\\n    
      cb; \\\n        cb.i=y1*t11.r y2*t10.r y3*t9.r y4*t8.r y5*t7.r; \\\n        cb.r=-(y1*t11.i y2*t10.i y3*t9.i y4*t8.i y5*t7.i ); \\\n        PM(out1,out2,ca,cb); \\\n        }\n#define POCKETFFT_PARTSTEP11a(u1,u2,x1,x2,x3,x4,x5,y1,y2,y3,y4,y5) \\\n        POCKETFFT_PARTSTEP11a0(u1,u2,x1,x2,x3,x4,x5,y1,y2,y3,y4,y5,CH(0,k,u1),CH(0,k,u2))\n#define POCKETFFT_PARTSTEP11(u1,u2,x1,x2,x3,x4,x5,y1,y2,y3,y4,y5) \\\n        { \\\n        T da,db; \\\n        POCKETFFT_PARTSTEP11a0(u1,u2,x1,x2,x3,x4,x5,y1,y2,y3,y4,y5,da,db) \\\n        special_mul<fwd>(da,WA(u1-1,i),CH(i,k,u1)); \\\n        special_mul<fwd>(db,WA(u2-1,i),CH(i,k,u2)); \\\n        }\n\ntemplate<bool fwd, typename T> void pass11 (size_t ido, size_t l1,\n  const T * POCKETFFT_RESTRICT cc, T * POCKETFFT_RESTRICT ch,\n  const cmplx<T0> * POCKETFFT_RESTRICT wa) const\n  {\n  constexpr T0 tw1r= T0(0.8412535328311811688618116489193677L),\n               tw1i= (fwd ? -1 : 1) * T0(0.5406408174555975821076359543186917L),\n               tw2r= T0(0.4154150130018864255292741492296232L),\n               tw2i= (fwd ? -1 : 1) * T0(0.9096319953545183714117153830790285L),\n               tw3r= T0(-0.1423148382732851404437926686163697L),\n               tw3i= (fwd ? -1 : 1) * T0(0.9898214418809327323760920377767188L),\n               tw4r= T0(-0.6548607339452850640569250724662936L),\n               tw4i= (fwd ? -1 : 1) * T0(0.7557495743542582837740358439723444L),\n               tw5r= T0(-0.9594929736144973898903680570663277L),\n               tw5i= (fwd ? 
-1 : 1) * T0(0.2817325568414296977114179153466169L);\n\n  auto CH = [ch,ido,l1](size_t a, size_t b, size_t c) -> T&\n    { return ch[a+ido*(b+l1*c)]; };\n  auto CC = [cc,ido](size_t a, size_t b, size_t c) -> const T&\n    { return cc[a+ido*(b+11*c)]; };\n  auto WA = [wa, ido](size_t x, size_t i)\n    { return wa[i-1+x*(ido-1)]; };\n\n  if (ido==1)\n    for (size_t k=0; k<l1; ++k)\n      {\n      POCKETFFT_PREP11(0)\n      POCKETFFT_PARTSTEP11a(1,10,tw1r,tw2r,tw3r,tw4r,tw5r,+tw1i,+tw2i,+tw3i,+tw4i,+tw5i)\n      POCKETFFT_PARTSTEP11a(2, 9,tw2r,tw4r,tw5r,tw3r,tw1r,+tw2i,+tw4i,-tw5i,-tw3i,-tw1i)\n      POCKETFFT_PARTSTEP11a(3, 8,tw3r,tw5r,tw2r,tw1r,tw4r,+tw3i,-tw5i,-tw2i,+tw1i,+tw4i)\n      POCKETFFT_PARTSTEP11a(4, 7,tw4r,tw3r,tw1r,tw5r,tw2r,+tw4i,-tw3i,+tw1i,+tw5i,-tw2i)\n      POCKETFFT_PARTSTEP11a(5, 6,tw5r,tw1r,tw4r,tw2r,tw3r,+tw5i,-tw1i,+tw4i,-tw2i,+tw3i)\n      }\n  else\n    for (size_t k=0; k<l1; ++k)\n      {\n      {\n      POCKETFFT_PREP11(0)\n      POCKETFFT_PARTSTEP11a(1,10,tw1r,tw2r,tw3r,tw4r,tw5r,+tw1i,+tw2i,+tw3i,+tw4i,+tw5i)\n      POCKETFFT_PARTSTEP11a(2, 9,tw2r,tw4r,tw5r,tw3r,tw1r,+tw2i,+tw4i,-tw5i,-tw3i,-tw1i)\n      POCKETFFT_PARTSTEP11a(3, 8,tw3r,tw5r,tw2r,tw1r,tw4r,+tw3i,-tw5i,-tw2i,+tw1i,+tw4i)\n      POCKETFFT_PARTSTEP11a(4, 7,tw4r,tw3r,tw1r,tw5r,tw2r,+tw4i,-tw3i,+tw1i,+tw5i,-tw2i)\n      POCKETFFT_PARTSTEP11a(5, 6,tw5r,tw1r,tw4r,tw2r,tw3r,+tw5i,-tw1i,+tw4i,-tw2i,+tw3i)\n      }\n      for (size_t i=1; i<ido; ++i)\n        {\n        POCKETFFT_PREP11(i)\n        POCKETFFT_PARTSTEP11(1,10,tw1r,tw2r,tw3r,tw4r,tw5r,+tw1i,+tw2i,+tw3i,+tw4i,+tw5i)\n        POCKETFFT_PARTSTEP11(2, 9,tw2r,tw4r,tw5r,tw3r,tw1r,+tw2i,+tw4i,-tw5i,-tw3i,-tw1i)\n        POCKETFFT_PARTSTEP11(3, 8,tw3r,tw5r,tw2r,tw1r,tw4r,+tw3i,-tw5i,-tw2i,+tw1i,+tw4i)\n        POCKETFFT_PARTSTEP11(4, 7,tw4r,tw3r,tw1r,tw5r,tw2r,+tw4i,-tw3i,+tw1i,+tw5i,-tw2i)\n        POCKETFFT_PARTSTEP11(5, 6,tw5r,tw1r,tw4r,tw2r,tw3r,+tw5i,-tw1i,+tw4i,-tw2i,+tw3i)\n        }\n      }\n  }\n\n#undef 
POCKETFFT_PARTSTEP11\n#undef POCKETFFT_PARTSTEP11a0\n#undef POCKETFFT_PARTSTEP11a\n#undef POCKETFFT_PREP11\n\ntemplate<bool fwd, typename T> void passg (size_t ido, size_t ip,\n  size_t l1, T * POCKETFFT_RESTRICT cc, T * POCKETFFT_RESTRICT ch,\n  const cmplx<T0> * POCKETFFT_RESTRICT wa,\n  const cmplx<T0> * POCKETFFT_RESTRICT csarr) const\n  {\n  const size_t cdim=ip;\n  size_t ipph = (ip+1)/2;\n  size_t idl1 = ido*l1;\n\n  auto CH = [ch,ido,l1](size_t a, size_t b, size_t c) -> T&\n    { return ch[a+ido*(b+l1*c)]; };\n  auto CC = [cc,ido,cdim](size_t a, size_t b, size_t c) -> const T&\n    { return cc[a+ido*(b+cdim*c)]; };\n  auto CX = [cc, ido, l1](size_t a, size_t b, size_t c) -> T&\n    { return cc[a+ido*(b+l1*c)]; };\n  auto CX2 = [cc, idl1](size_t a, size_t b) -> T&\n    { return cc[a+idl1*b]; };\n  auto CH2 = [ch, idl1](size_t a, size_t b) -> const T&\n    { return ch[a+idl1*b]; };\n\n  arr<cmplx<T0>> wal(ip);\n  wal[0] = cmplx<T0>(1., 0.);\n  for (size_t i=1; i<ip; ++i)\n    wal[i]=cmplx<T0>(csarr[i].r,fwd ? 
-csarr[i].i : csarr[i].i);\n\n  for (size_t k=0; k<l1; ++k)\n    for (size_t i=0; i<ido; ++i)\n      CH(i,k,0) = CC(i,0,k);\n  for (size_t j=1, jc=ip-1; j<ipph; ++j, --jc)\n    for (size_t k=0; k<l1; ++k)\n      for (size_t i=0; i<ido; ++i)\n        PM(CH(i,k,j),CH(i,k,jc),CC(i,j,k),CC(i,jc,k));\n  for (size_t k=0; k<l1; ++k)\n    for (size_t i=0; i<ido; ++i)\n      {\n      T tmp = CH(i,k,0);\n      for (size_t j=1; j<ipph; ++j)\n        tmp+=CH(i,k,j);\n      CX(i,k,0) = tmp;\n      }\n  for (size_t l=1, lc=ip-1; l<ipph; ++l, --lc)\n    {\n    // j=0\n    for (size_t ik=0; ik<idl1; ++ik)\n      {\n      CX2(ik,l).r = CH2(ik,0).r+wal[l].r*CH2(ik,1).r+wal[2*l].r*CH2(ik,2).r;\n      CX2(ik,l).i = CH2(ik,0).i+wal[l].r*CH2(ik,1).i+wal[2*l].r*CH2(ik,2).i;\n      CX2(ik,lc).r=-wal[l].i*CH2(ik,ip-1).i-wal[2*l].i*CH2(ik,ip-2).i;\n      CX2(ik,lc).i=wal[l].i*CH2(ik,ip-1).r+wal[2*l].i*CH2(ik,ip-2).r;\n      }\n\n    size_t iwal=2*l;\n    size_t j=3, jc=ip-3;\n    for (; j<ipph-1; j+=2, jc-=2)\n      {\n      iwal+=l; if (iwal>ip) iwal-=ip;\n      cmplx<T0> xwal=wal[iwal];\n      iwal+=l; if (iwal>ip) iwal-=ip;\n      cmplx<T0> xwal2=wal[iwal];\n      for (size_t ik=0; ik<idl1; ++ik)\n        {\n        CX2(ik,l).r += CH2(ik,j).r*xwal.r+CH2(ik,j+1).r*xwal2.r;\n        CX2(ik,l).i += CH2(ik,j).i*xwal.r+CH2(ik,j+1).i*xwal2.r;\n        CX2(ik,lc).r -= CH2(ik,jc).i*xwal.i+CH2(ik,jc-1).i*xwal2.i;\n        CX2(ik,lc).i += CH2(ik,jc).r*xwal.i+CH2(ik,jc-1).r*xwal2.i;\n        }\n      }\n    for (; j<ipph; ++j, --jc)\n      {\n      iwal+=l; if (iwal>ip) iwal-=ip;\n      cmplx<T0> xwal=wal[iwal];\n      for (size_t ik=0; ik<idl1; ++ik)\n        {\n        CX2(ik,l).r += CH2(ik,j).r*xwal.r;\n        CX2(ik,l).i += CH2(ik,j).i*xwal.r;\n        CX2(ik,lc).r -= CH2(ik,jc).i*xwal.i;\n        CX2(ik,lc).i += CH2(ik,jc).r*xwal.i;\n        }\n      }\n    }\n\n  // shuffling and twiddling\n  if (ido==1)\n    for (size_t j=1, jc=ip-1; j<ipph; ++j, --jc)\n      for (size_t ik=0; ik<idl1; 
++ik)\n        {\n        T t1=CX2(ik,j), t2=CX2(ik,jc);\n        PM(CX2(ik,j),CX2(ik,jc),t1,t2);\n        }\n  else\n    {\n    for (size_t j=1, jc=ip-1; j<ipph; ++j,--jc)\n      for (size_t k=0; k<l1; ++k)\n        {\n        T t1=CX(0,k,j), t2=CX(0,k,jc);\n        PM(CX(0,k,j),CX(0,k,jc),t1,t2);\n        for (size_t i=1; i<ido; ++i)\n          {\n          T x1, x2;\n          PM(x1,x2,CX(i,k,j),CX(i,k,jc));\n          size_t idij=(j-1)*(ido-1)+i-1;\n          special_mul<fwd>(x1,wa[idij],CX(i,k,j));\n          idij=(jc-1)*(ido-1)+i-1;\n          special_mul<fwd>(x2,wa[idij],CX(i,k,jc));\n          }\n        }\n    }\n  }\n\ntemplate<bool fwd, typename T> void pass_all(T c[], T0 fct) const\n  {\n  if (length==1) { c[0]*=fct; return; }\n  size_t l1=1;\n  arr<T> ch(length);\n  T *p1=c, *p2=ch.data();\n\n  for(size_t k1=0; k1<fact.size(); k1++)\n    {\n    size_t ip=fact[k1].fct;\n    size_t l2=ip*l1;\n    size_t ido = length/l2;\n    if     (ip==4)\n      pass4<fwd> (ido, l1, p1, p2, fact[k1].tw);\n    else if(ip==8)\n      pass8<fwd>(ido, l1, p1, p2, fact[k1].tw);\n    else if(ip==2)\n      pass2<fwd>(ido, l1, p1, p2, fact[k1].tw);\n    else if(ip==3)\n      pass3<fwd> (ido, l1, p1, p2, fact[k1].tw);\n    else if(ip==5)\n      pass5<fwd> (ido, l1, p1, p2, fact[k1].tw);\n    else if(ip==7)\n      pass7<fwd> (ido, l1, p1, p2, fact[k1].tw);\n    else if(ip==11)\n      pass11<fwd> (ido, l1, p1, p2, fact[k1].tw);\n    else\n      {\n      passg<fwd>(ido, ip, l1, p1, p2, fact[k1].tw, fact[k1].tws);\n      std::swap(p1,p2);\n      }\n    std::swap(p1,p2);\n    l1=l2;\n    }\n  if (p1!=c)\n    {\n    if (fct!=1.)\n      for (size_t i=0; i<length; ++i)\n        c[i] = ch[i]*fct;\n    else\n      std::copy_n (p1, length, c);\n    }\n  else\n    if (fct!=1.)\n      for (size_t i=0; i<length; ++i)\n        c[i] *= fct;\n  }\n\n  public:\n    template<typename T> void exec(T c[], T0 fct, bool fwd) const\n      { fwd ? 
pass_all<true>(c, fct) : pass_all<false>(c, fct); }\n\n  private:\n    POCKETFFT_NOINLINE void factorize()\n      {\n      size_t len=length;\n      while ((len&7)==0)\n        { add_factor(8); len>>=3; }\n      while ((len&3)==0)\n        { add_factor(4); len>>=2; }\n      if ((len&1)==0)\n        {\n        len>>=1;\n        // factor 2 should be at the front of the factor list\n        add_factor(2);\n        std::swap(fact[0].fct, fact.back().fct);\n        }\n      for (size_t divisor=3; divisor*divisor<=len; divisor+=2)\n        while ((len%divisor)==0)\n          {\n          add_factor(divisor);\n          len/=divisor;\n          }\n      if (len>1) add_factor(len);\n      }\n\n    size_t twsize() const\n      {\n      size_t twsize=0, l1=1;\n      for (size_t k=0; k<fact.size(); ++k)\n        {\n        size_t ip=fact[k].fct, ido= length/(l1*ip);\n        twsize+=(ip-1)*(ido-1);\n        if (ip>11)\n          twsize+=ip;\n        l1*=ip;\n        }\n      return twsize;\n      }\n\n    void comp_twiddle()\n      {\n      sincos_2pibyn<T0> twiddle(length);\n      size_t l1=1;\n      size_t memofs=0;\n      for (size_t k=0; k<fact.size(); ++k)\n        {\n        size_t ip=fact[k].fct, ido=length/(l1*ip);\n        fact[k].tw=mem.data()+memofs;\n        memofs+=(ip-1)*(ido-1);\n        for (size_t j=1; j<ip; ++j)\n          for (size_t i=1; i<ido; ++i)\n            fact[k].tw[(j-1)*(ido-1)+i-1] = twiddle[j*l1*i];\n        if (ip>11)\n          {\n          fact[k].tws=mem.data()+memofs;\n          memofs+=ip;\n          for (size_t j=0; j<ip; ++j)\n            fact[k].tws[j] = twiddle[j*l1*ido];\n          }\n        l1*=ip;\n        }\n      }\n\n  public:\n    POCKETFFT_NOINLINE cfftp(size_t length_)\n      : length(length_)\n      {\n      if (length==0) throw std::runtime_error(\"zero-length FFT requested\");\n      if (length==1) return;\n      factorize();\n      mem.resize(twsize());\n      comp_twiddle();\n      }\n  };\n\n//\n// real-valued FFTPACK 
transforms\n//\n\ntemplate<typename T0> class rfftp\n  {\n  private:\n    struct fctdata\n      {\n      size_t fct;\n      T0 *tw, *tws;\n      };\n\n    size_t length;\n    arr<T0> mem;\n    std::vector<fctdata> fact;\n\n    void add_factor(size_t factor)\n      { fact.push_back({factor, nullptr, nullptr}); }\n\n/* (a+ib) = conj(c+id) * (e+if) */\ntemplate<typename T1, typename T2, typename T3> inline void MULPM\n  (T1 &a, T1 &b, T2 c, T2 d, T3 e, T3 f) const\n  {  a=c*e+d*f; b=c*f-d*e; }\n\ntemplate<typename T> void radf2 (size_t ido, size_t l1,\n  const T * POCKETFFT_RESTRICT cc, T * POCKETFFT_RESTRICT ch,\n  const T0 * POCKETFFT_RESTRICT wa) const\n  {\n  auto WA = [wa,ido](size_t x, size_t i) { return wa[i+x*(ido-1)]; };\n  auto CC = [cc,ido,l1](size_t a, size_t b, size_t c) -> const T&\n    { return cc[a+ido*(b+l1*c)]; };\n  auto CH = [ch,ido](size_t a, size_t b, size_t c) -> T&\n    { return ch[a+ido*(b+2*c)]; };\n\n  for (size_t k=0; k<l1; k++)\n    PM (CH(0,0,k),CH(ido-1,1,k),CC(0,k,0),CC(0,k,1));\n  if ((ido&1)==0)\n    for (size_t k=0; k<l1; k++)\n      {\n      CH(    0,1,k) = -CC(ido-1,k,1);\n      CH(ido-1,0,k) =  CC(ido-1,k,0);\n      }\n  if (ido<=2) return;\n  for (size_t k=0; k<l1; k++)\n    for (size_t i=2; i<ido; i+=2)\n      {\n      size_t ic=ido-i;\n      T tr2, ti2;\n      MULPM (tr2,ti2,WA(0,i-2),WA(0,i-1),CC(i-1,k,1),CC(i,k,1));\n      PM (CH(i-1,0,k),CH(ic-1,1,k),CC(i-1,k,0),tr2);\n      PM (CH(i  ,0,k),CH(ic  ,1,k),ti2,CC(i  ,k,0));\n      }\n  }\n\n// a2=a+b; b2=i*(b-a);\n#define POCKETFFT_REARRANGE(rx, ix, ry, iy) \\\n  {\\\n  auto t1=rx+ry, t2=ry-rx, t3=ix+iy, t4=ix-iy; \\\n  rx=t1; ix=t3; ry=t4; iy=t2; \\\n  }\n\ntemplate<typename T> void radf3(size_t ido, size_t l1,\n  const T * POCKETFFT_RESTRICT cc, T * POCKETFFT_RESTRICT ch,\n  const T0 * POCKETFFT_RESTRICT wa) const\n  {\n  constexpr T0 taur=-0.5, taui=T0(0.8660254037844386467637231707529362L);\n\n  auto WA = [wa,ido](size_t x, size_t i) { return wa[i+x*(ido-1)]; };\n  auto CC 
= [cc,ido,l1](size_t a, size_t b, size_t c) -> const T&\n    { return cc[a+ido*(b+l1*c)]; };\n  auto CH = [ch,ido](size_t a, size_t b, size_t c) -> T&\n    { return ch[a+ido*(b+3*c)]; };\n\n  for (size_t k=0; k<l1; k++)\n    {\n    T cr2=CC(0,k,1)+CC(0,k,2);\n    CH(0,0,k) = CC(0,k,0)+cr2;\n    CH(0,2,k) = taui*(CC(0,k,2)-CC(0,k,1));\n    CH(ido-1,1,k) = CC(0,k,0)+taur*cr2;\n    }\n  if (ido==1) return;\n  for (size_t k=0; k<l1; k++)\n    for (size_t i=2; i<ido; i+=2)\n      {\n      size_t ic=ido-i;\n      T di2, di3, dr2, dr3;\n      MULPM (dr2,di2,WA(0,i-2),WA(0,i-1),CC(i-1,k,1),CC(i,k,1)); // d2=conj(WA0)*CC1\n      MULPM (dr3,di3,WA(1,i-2),WA(1,i-1),CC(i-1,k,2),CC(i,k,2)); // d3=conj(WA1)*CC2\n      POCKETFFT_REARRANGE(dr2, di2, dr3, di3);\n      CH(i-1,0,k) = CC(i-1,k,0)+dr2; // c add\n      CH(i  ,0,k) = CC(i  ,k,0)+di2;\n      T tr2 = CC(i-1,k,0)+taur*dr2; // c add\n      T ti2 = CC(i  ,k,0)+taur*di2;\n      T tr3 = taui*dr3;  // t3 = taui*i*(d3-d2)?\n      T ti3 = taui*di3;\n      PM(CH(i-1,2,k),CH(ic-1,1,k),tr2,tr3); // PM(i) = t2+t3\n      PM(CH(i  ,2,k),CH(ic  ,1,k),ti3,ti2); // PM(ic) = conj(t2-t3)\n      }\n  }\n\ntemplate<typename T> void radf4(size_t ido, size_t l1,\n  const T * POCKETFFT_RESTRICT cc, T * POCKETFFT_RESTRICT ch,\n  const T0 * POCKETFFT_RESTRICT wa) const\n  {\n  constexpr T0 hsqt2=T0(0.707106781186547524400844362104849L);\n\n  auto WA = [wa,ido](size_t x, size_t i) { return wa[i+x*(ido-1)]; };\n  auto CC = [cc,ido,l1](size_t a, size_t b, size_t c) -> const T&\n    { return cc[a+ido*(b+l1*c)]; };\n  auto CH = [ch,ido](size_t a, size_t b, size_t c) -> T&\n    { return ch[a+ido*(b+4*c)]; };\n\n  for (size_t k=0; k<l1; k++)\n    {\n    T tr1,tr2;\n    PM (tr1,CH(0,2,k),CC(0,k,3),CC(0,k,1));\n    PM (tr2,CH(ido-1,1,k),CC(0,k,0),CC(0,k,2));\n    PM (CH(0,0,k),CH(ido-1,3,k),tr2,tr1);\n    }\n  if ((ido&1)==0)\n    for (size_t k=0; k<l1; k++)\n      {\n      T ti1=-hsqt2*(CC(ido-1,k,1)+CC(ido-1,k,3));\n      T tr1= 
hsqt2*(CC(ido-1,k,1)-CC(ido-1,k,3));\n      PM (CH(ido-1,0,k),CH(ido-1,2,k),CC(ido-1,k,0),tr1);\n      PM (CH(    0,3,k),CH(    0,1,k),ti1,CC(ido-1,k,2));\n      }\n  if (ido<=2) return;\n  for (size_t k=0; k<l1; k++)\n    for (size_t i=2; i<ido; i+=2)\n      {\n      size_t ic=ido-i;\n      T ci2, ci3, ci4, cr2, cr3, cr4, ti1, ti2, ti3, ti4, tr1, tr2, tr3, tr4;\n      MULPM(cr2,ci2,WA(0,i-2),WA(0,i-1),CC(i-1,k,1),CC(i,k,1));\n      MULPM(cr3,ci3,WA(1,i-2),WA(1,i-1),CC(i-1,k,2),CC(i,k,2));\n      MULPM(cr4,ci4,WA(2,i-2),WA(2,i-1),CC(i-1,k,3),CC(i,k,3));\n      PM(tr1,tr4,cr4,cr2);\n      PM(ti1,ti4,ci2,ci4);\n      PM(tr2,tr3,CC(i-1,k,0),cr3);\n      PM(ti2,ti3,CC(i  ,k,0),ci3);\n      PM(CH(i-1,0,k),CH(ic-1,3,k),tr2,tr1);\n      PM(CH(i  ,0,k),CH(ic  ,3,k),ti1,ti2);\n      PM(CH(i-1,2,k),CH(ic-1,1,k),tr3,ti4);\n      PM(CH(i  ,2,k),CH(ic  ,1,k),tr4,ti3);\n      }\n  }\n\ntemplate<typename T> void radf5(size_t ido, size_t l1,\n  const T * POCKETFFT_RESTRICT cc, T * POCKETFFT_RESTRICT ch,\n  const T0 * POCKETFFT_RESTRICT wa) const\n  {\n  constexpr T0 tr11= T0(0.3090169943749474241022934171828191L),\n               ti11= T0(0.9510565162951535721164393333793821L),\n               tr12= T0(-0.8090169943749474241022934171828191L),\n               ti12= T0(0.5877852522924731291687059546390728L);\n\n  auto WA = [wa,ido](size_t x, size_t i) { return wa[i+x*(ido-1)]; };\n  auto CC = [cc,ido,l1](size_t a, size_t b, size_t c) -> const T&\n    { return cc[a+ido*(b+l1*c)]; };\n  auto CH = [ch,ido](size_t a, size_t b, size_t c) -> T&\n    { return ch[a+ido*(b+5*c)]; };\n\n  for (size_t k=0; k<l1; k++)\n    {\n    T cr2, cr3, ci4, ci5;\n    PM (cr2,ci5,CC(0,k,4),CC(0,k,1));\n    PM (cr3,ci4,CC(0,k,3),CC(0,k,2));\n    CH(0,0,k)=CC(0,k,0)+cr2+cr3;\n    CH(ido-1,1,k)=CC(0,k,0)+tr11*cr2+tr12*cr3;\n    CH(0,2,k)=ti11*ci5+ti12*ci4;\n    CH(ido-1,3,k)=CC(0,k,0)+tr12*cr2+tr11*cr3;\n    CH(0,4,k)=ti12*ci5-ti11*ci4;\n    }\n  if (ido==1) return;\n  for (size_t k=0; k<l1;++k)\n    for 
(size_t i=2, ic=ido-2; i<ido; i+=2, ic-=2)\n      {\n      T di2, di3, di4, di5, dr2, dr3, dr4, dr5;\n      MULPM (dr2,di2,WA(0,i-2),WA(0,i-1),CC(i-1,k,1),CC(i,k,1));\n      MULPM (dr3,di3,WA(1,i-2),WA(1,i-1),CC(i-1,k,2),CC(i,k,2));\n      MULPM (dr4,di4,WA(2,i-2),WA(2,i-1),CC(i-1,k,3),CC(i,k,3));\n      MULPM (dr5,di5,WA(3,i-2),WA(3,i-1),CC(i-1,k,4),CC(i,k,4));\n      POCKETFFT_REARRANGE(dr2, di2, dr5, di5);\n      POCKETFFT_REARRANGE(dr3, di3, dr4, di4);\n      CH(i-1,0,k)=CC(i-1,k,0)+dr2+dr3;\n      CH(i  ,0,k)=CC(i  ,k,0)+di2+di3;\n      T tr2=CC(i-1,k,0)+tr11*dr2+tr12*dr3;\n      T ti2=CC(i  ,k,0)+tr11*di2+tr12*di3;\n      T tr3=CC(i-1,k,0)+tr12*dr2+tr11*dr3;\n      T ti3=CC(i  ,k,0)+tr12*di2+tr11*di3;\n      T tr5 = ti11*dr5 + ti12*dr4;\n      T ti5 = ti11*di5 + ti12*di4;\n      T tr4 = ti12*dr5 - ti11*dr4;\n      T ti4 = ti12*di5 - ti11*di4;\n      PM(CH(i-1,2,k),CH(ic-1,1,k),tr2,tr5);\n      PM(CH(i  ,2,k),CH(ic  ,1,k),ti5,ti2);\n      PM(CH(i-1,4,k),CH(ic-1,3,k),tr3,tr4);\n      PM(CH(i  ,4,k),CH(ic  ,3,k),ti4,ti3);\n      }\n  }\n\n#undef POCKETFFT_REARRANGE\n\ntemplate<typename T> void radfg(size_t ido, size_t ip, size_t l1,\n  T * POCKETFFT_RESTRICT cc, T * POCKETFFT_RESTRICT ch,\n  const T0 * POCKETFFT_RESTRICT wa, const T0 * POCKETFFT_RESTRICT csarr) const\n  {\n  const size_t cdim=ip;\n  size_t ipph=(ip+1)/2;\n  size_t idl1 = ido*l1;\n\n  auto CC = [cc,ido,cdim](size_t a, size_t b, size_t c) -> T&\n    { return cc[a+ido*(b+cdim*c)]; };\n  auto CH = [ch,ido,l1](size_t a, size_t b, size_t c) -> const T&\n    { return ch[a+ido*(b+l1*c)]; };\n  auto C1 = [cc,ido,l1] (size_t a, size_t b, size_t c) -> T&\n    { return cc[a+ido*(b+l1*c)]; };\n  auto C2 = [cc,idl1] (size_t a, size_t b) -> T&\n    { return cc[a+idl1*b]; };\n  auto CH2 = [ch,idl1] (size_t a, size_t b) -> T&\n    { return ch[a+idl1*b]; };\n\n  if (ido>1)\n    {\n    for (size_t j=1, jc=ip-1; j<ipph; ++j,--jc)              // 114\n      {\n      size_t is=(j-1)*(ido-1),\n             
is2=(jc-1)*(ido-1);\n      for (size_t k=0; k<l1; ++k)                            // 113\n        {\n        size_t idij=is;\n        size_t idij2=is2;\n        for (size_t i=1; i<=ido-2; i+=2)                      // 112\n          {\n          T t1=C1(i,k,j ), t2=C1(i+1,k,j ),\n            t3=C1(i,k,jc), t4=C1(i+1,k,jc);\n          T x1=wa[idij]*t1 + wa[idij+1]*t2,\n            x2=wa[idij]*t2 - wa[idij+1]*t1,\n            x3=wa[idij2]*t3 + wa[idij2+1]*t4,\n            x4=wa[idij2]*t4 - wa[idij2+1]*t3;\n          PM(C1(i,k,j),C1(i+1,k,jc),x3,x1);\n          PM(C1(i+1,k,j),C1(i,k,jc),x2,x4);\n          idij+=2;\n          idij2+=2;\n          }\n        }\n      }\n    }\n\n  for (size_t j=1, jc=ip-1; j<ipph; ++j,--jc)                // 123\n    for (size_t k=0; k<l1; ++k)                              // 122\n      MPINPLACE(C1(0,k,jc), C1(0,k,j));\n\n//everything in C\n//memset(ch,0,ip*l1*ido*sizeof(double));\n\n  for (size_t l=1,lc=ip-1; l<ipph; ++l,--lc)                 // 127\n    {\n    for (size_t ik=0; ik<idl1; ++ik)                         // 124\n      {\n      CH2(ik,l ) = C2(ik,0)+csarr[2*l]*C2(ik,1)+csarr[4*l]*C2(ik,2);\n      CH2(ik,lc) = csarr[2*l+1]*C2(ik,ip-1)+csarr[4*l+1]*C2(ik,ip-2);\n      }\n    size_t iang = 2*l;\n    size_t j=3, jc=ip-3;\n    for (; j<ipph-3; j+=4,jc-=4)              // 126\n      {\n      iang+=l; if (iang>=ip) iang-=ip;\n      T0 ar1=csarr[2*iang], ai1=csarr[2*iang+1];\n      iang+=l; if (iang>=ip) iang-=ip;\n      T0 ar2=csarr[2*iang], ai2=csarr[2*iang+1];\n      iang+=l; if (iang>=ip) iang-=ip;\n      T0 ar3=csarr[2*iang], ai3=csarr[2*iang+1];\n      iang+=l; if (iang>=ip) iang-=ip;\n      T0 ar4=csarr[2*iang], ai4=csarr[2*iang+1];\n      for (size_t ik=0; ik<idl1; ++ik)                       // 125\n        {\n        CH2(ik,l ) += ar1*C2(ik,j )+ar2*C2(ik,j +1)\n                     +ar3*C2(ik,j +2)+ar4*C2(ik,j +3);\n        CH2(ik,lc) += ai1*C2(ik,jc)+ai2*C2(ik,jc-1)\n                     
+ai3*C2(ik,jc-2)+ai4*C2(ik,jc-3);\n        }\n      }\n    for (; j<ipph-1; j+=2,jc-=2)              // 126\n      {\n      iang+=l; if (iang>=ip) iang-=ip;\n      T0 ar1=csarr[2*iang], ai1=csarr[2*iang+1];\n      iang+=l; if (iang>=ip) iang-=ip;\n      T0 ar2=csarr[2*iang], ai2=csarr[2*iang+1];\n      for (size_t ik=0; ik<idl1; ++ik)                       // 125\n        {\n        CH2(ik,l ) += ar1*C2(ik,j )+ar2*C2(ik,j +1);\n        CH2(ik,lc) += ai1*C2(ik,jc)+ai2*C2(ik,jc-1);\n        }\n      }\n    for (; j<ipph; ++j,--jc)              // 126\n      {\n      iang+=l; if (iang>=ip) iang-=ip;\n      T0 ar=csarr[2*iang], ai=csarr[2*iang+1];\n      for (size_t ik=0; ik<idl1; ++ik)                       // 125\n        {\n        CH2(ik,l ) += ar*C2(ik,j );\n        CH2(ik,lc) += ai*C2(ik,jc);\n        }\n      }\n    }\n  for (size_t ik=0; ik<idl1; ++ik)                         // 101\n    CH2(ik,0) = C2(ik,0);\n  for (size_t j=1; j<ipph; ++j)                              // 129\n    for (size_t ik=0; ik<idl1; ++ik)                         // 128\n      CH2(ik,0) += C2(ik,j);\n\n// everything in CH at this point!\n//memset(cc,0,ip*l1*ido*sizeof(double));\n\n  for (size_t k=0; k<l1; ++k)                                // 131\n    for (size_t i=0; i<ido; ++i)                             // 130\n      CC(i,0,k) = CH(i,k,0);\n\n  for (size_t j=1, jc=ip-1; j<ipph; ++j,--jc)                // 137\n    {\n    size_t j2=2*j-1;\n    for (size_t k=0; k<l1; ++k)                              // 136\n      {\n      CC(ido-1,j2,k) = CH(0,k,j);\n      CC(0,j2+1,k) = CH(0,k,jc);\n      }\n    }\n\n  if (ido==1) return;\n\n  for (size_t j=1, jc=ip-1; j<ipph; ++j,--jc)                // 140\n    {\n    size_t j2=2*j-1;\n    for(size_t k=0; k<l1; ++k)                               // 139\n      for(size_t i=1, ic=ido-i-2; i<=ido-2; i+=2, ic-=2)      // 138\n        {\n        CC(i   ,j2+1,k) = CH(i  ,k,j )+CH(i  ,k,jc);\n        CC(ic  ,j2  ,k) = CH(i  ,k,j )-CH(i  ,k,jc);\n        
CC(i+1 ,j2+1,k) = CH(i+1,k,j )+CH(i+1,k,jc);\n        CC(ic+1,j2  ,k) = CH(i+1,k,jc)-CH(i+1,k,j );\n        }\n    }\n  }\n\ntemplate<typename T> void radb2(size_t ido, size_t l1,\n  const T * POCKETFFT_RESTRICT cc, T * POCKETFFT_RESTRICT ch,\n  const T0 * POCKETFFT_RESTRICT wa) const\n  {\n  auto WA = [wa,ido](size_t x, size_t i) { return wa[i+x*(ido-1)]; };\n  auto CC = [cc,ido](size_t a, size_t b, size_t c) -> const T&\n    { return cc[a+ido*(b+2*c)]; };\n  auto CH = [ch,ido,l1](size_t a, size_t b, size_t c) -> T&\n    { return ch[a+ido*(b+l1*c)]; };\n\n  for (size_t k=0; k<l1; k++)\n    PM (CH(0,k,0),CH(0,k,1),CC(0,0,k),CC(ido-1,1,k));\n  if ((ido&1)==0)\n    for (size_t k=0; k<l1; k++)\n      {\n      CH(ido-1,k,0) = 2*CC(ido-1,0,k);\n      CH(ido-1,k,1) =-2*CC(0    ,1,k);\n      }\n  if (ido<=2) return;\n  for (size_t k=0; k<l1;++k)\n    for (size_t i=2; i<ido; i+=2)\n      {\n      size_t ic=ido-i;\n      T ti2, tr2;\n      PM (CH(i-1,k,0),tr2,CC(i-1,0,k),CC(ic-1,1,k));\n      PM (ti2,CH(i  ,k,0),CC(i  ,0,k),CC(ic  ,1,k));\n      MULPM (CH(i,k,1),CH(i-1,k,1),WA(0,i-2),WA(0,i-1),ti2,tr2);\n      }\n  }\n\ntemplate<typename T> void radb3(size_t ido, size_t l1,\n  const T * POCKETFFT_RESTRICT cc, T * POCKETFFT_RESTRICT ch,\n  const T0 * POCKETFFT_RESTRICT wa) const\n  {\n  constexpr T0 taur=-0.5, taui=T0(0.8660254037844386467637231707529362L);\n\n  auto WA = [wa,ido](size_t x, size_t i) { return wa[i+x*(ido-1)]; };\n  auto CC = [cc,ido](size_t a, size_t b, size_t c) -> const T&\n    { return cc[a+ido*(b+3*c)]; };\n  auto CH = [ch,ido,l1](size_t a, size_t b, size_t c) -> T&\n    { return ch[a+ido*(b+l1*c)]; };\n\n  for (size_t k=0; k<l1; k++)\n    {\n    T tr2=2*CC(ido-1,1,k);\n    T cr2=CC(0,0,k)+taur*tr2;\n    CH(0,k,0)=CC(0,0,k)+tr2;\n    T ci3=2*taui*CC(0,2,k);\n    PM (CH(0,k,2),CH(0,k,1),cr2,ci3);\n    }\n  if (ido==1) return;\n  for (size_t k=0; k<l1; k++)\n    for (size_t i=2, ic=ido-2; i<ido; i+=2, ic-=2)\n      {\n      T tr2=CC(i-1,2,k)+CC(ic-1,1,k); 
// t2=CC(I) + conj(CC(ic))\n      T ti2=CC(i  ,2,k)-CC(ic  ,1,k);\n      T cr2=CC(i-1,0,k)+taur*tr2;     // c2=CC +taur*t2\n      T ci2=CC(i  ,0,k)+taur*ti2;\n      CH(i-1,k,0)=CC(i-1,0,k)+tr2;         // CH=CC+t2\n      CH(i  ,k,0)=CC(i  ,0,k)+ti2;\n      T cr3=taui*(CC(i-1,2,k)-CC(ic-1,1,k));// c3=taui*(CC(i)-conj(CC(ic)))\n      T ci3=taui*(CC(i  ,2,k)+CC(ic  ,1,k));\n      T di2, di3, dr2, dr3;\n      PM(dr3,dr2,cr2,ci3); // d2= (cr2-ci3, ci2+cr3) = c2+i*c3\n      PM(di2,di3,ci2,cr3); // d3= (cr2+ci3, ci2-cr3) = c2-i*c3\n      MULPM(CH(i,k,1),CH(i-1,k,1),WA(0,i-2),WA(0,i-1),di2,dr2); // ch = WA*d2\n      MULPM(CH(i,k,2),CH(i-1,k,2),WA(1,i-2),WA(1,i-1),di3,dr3);\n      }\n  }\n\ntemplate<typename T> void radb4(size_t ido, size_t l1,\n  const T * POCKETFFT_RESTRICT cc, T * POCKETFFT_RESTRICT ch,\n  const T0 * POCKETFFT_RESTRICT wa) const\n  {\n  constexpr T0 sqrt2=T0(1.414213562373095048801688724209698L);\n\n  auto WA = [wa,ido](size_t x, size_t i) { return wa[i+x*(ido-1)]; };\n  auto CC = [cc,ido](size_t a, size_t b, size_t c) -> const T&\n    { return cc[a+ido*(b+4*c)]; };\n  auto CH = [ch,ido,l1](size_t a, size_t b, size_t c) -> T&\n    { return ch[a+ido*(b+l1*c)]; };\n\n  for (size_t k=0; k<l1; k++)\n    {\n    T tr1, tr2;\n    PM (tr2,tr1,CC(0,0,k),CC(ido-1,3,k));\n    T tr3=2*CC(ido-1,1,k);\n    T tr4=2*CC(0,2,k);\n    PM (CH(0,k,0),CH(0,k,2),tr2,tr3);\n    PM (CH(0,k,3),CH(0,k,1),tr1,tr4);\n    }\n  if ((ido&1)==0)\n    for (size_t k=0; k<l1; k++)\n      {\n      T tr1,tr2,ti1,ti2;\n      PM (ti1,ti2,CC(0    ,3,k),CC(0    ,1,k));\n      PM (tr2,tr1,CC(ido-1,0,k),CC(ido-1,2,k));\n      CH(ido-1,k,0)=tr2+tr2;\n      CH(ido-1,k,1)=sqrt2*(tr1-ti1);\n      CH(ido-1,k,2)=ti2+ti2;\n      CH(ido-1,k,3)=-sqrt2*(tr1+ti1);\n      }\n  if (ido<=2) return;\n  for (size_t k=0; k<l1;++k)\n    for (size_t i=2; i<ido; i+=2)\n      {\n      T ci2, ci3, ci4, cr2, cr3, cr4, ti1, ti2, ti3, ti4, tr1, tr2, tr3, tr4;\n      size_t ic=ido-i;\n      PM 
(tr2,tr1,CC(i-1,0,k),CC(ic-1,3,k));\n      PM (ti1,ti2,CC(i  ,0,k),CC(ic  ,3,k));\n      PM (tr4,ti3,CC(i  ,2,k),CC(ic  ,1,k));\n      PM (tr3,ti4,CC(i-1,2,k),CC(ic-1,1,k));\n      PM (CH(i-1,k,0),cr3,tr2,tr3);\n      PM (CH(i  ,k,0),ci3,ti2,ti3);\n      PM (cr4,cr2,tr1,tr4);\n      PM (ci2,ci4,ti1,ti4);\n      MULPM (CH(i,k,1),CH(i-1,k,1),WA(0,i-2),WA(0,i-1),ci2,cr2);\n      MULPM (CH(i,k,2),CH(i-1,k,2),WA(1,i-2),WA(1,i-1),ci3,cr3);\n      MULPM (CH(i,k,3),CH(i-1,k,3),WA(2,i-2),WA(2,i-1),ci4,cr4);\n      }\n  }\n\ntemplate<typename T> void radb5(size_t ido, size_t l1,\n  const T * POCKETFFT_RESTRICT cc, T * POCKETFFT_RESTRICT ch,\n  const T0 * POCKETFFT_RESTRICT wa) const\n  {\n  constexpr T0 tr11= T0(0.3090169943749474241022934171828191L),\n               ti11= T0(0.9510565162951535721164393333793821L),\n               tr12= T0(-0.8090169943749474241022934171828191L),\n               ti12= T0(0.5877852522924731291687059546390728L);\n\n  auto WA = [wa,ido](size_t x, size_t i) { return wa[i+x*(ido-1)]; };\n  auto CC = [cc,ido](size_t a, size_t b, size_t c) -> const T&\n    { return cc[a+ido*(b+5*c)]; };\n  auto CH = [ch,ido,l1](size_t a, size_t b, size_t c) -> T&\n    { return ch[a+ido*(b+l1*c)]; };\n\n  for (size_t k=0; k<l1; k++)\n    {\n    T ti5=CC(0,2,k)+CC(0,2,k);\n    T ti4=CC(0,4,k)+CC(0,4,k);\n    T tr2=CC(ido-1,1,k)+CC(ido-1,1,k);\n    T tr3=CC(ido-1,3,k)+CC(ido-1,3,k);\n    CH(0,k,0)=CC(0,0,k)+tr2+tr3;\n    T cr2=CC(0,0,k)+tr11*tr2+tr12*tr3;\n    T cr3=CC(0,0,k)+tr12*tr2+tr11*tr3;\n    T ci4, ci5;\n    MULPM(ci5,ci4,ti5,ti4,ti11,ti12);\n    PM(CH(0,k,4),CH(0,k,1),cr2,ci5);\n    PM(CH(0,k,3),CH(0,k,2),cr3,ci4);\n    }\n  if (ido==1) return;\n  for (size_t k=0; k<l1;++k)\n    for (size_t i=2, ic=ido-2; i<ido; i+=2, ic-=2)\n      {\n      T tr2, tr3, tr4, tr5, ti2, ti3, ti4, ti5;\n      PM(tr2,tr5,CC(i-1,2,k),CC(ic-1,1,k));\n      PM(ti5,ti2,CC(i  ,2,k),CC(ic  ,1,k));\n      PM(tr3,tr4,CC(i-1,4,k),CC(ic-1,3,k));\n      PM(ti4,ti3,CC(i  ,4,k),CC(ic  
,3,k));\n      CH(i-1,k,0)=CC(i-1,0,k)+tr2+tr3;\n      CH(i  ,k,0)=CC(i  ,0,k)+ti2+ti3;\n      T cr2=CC(i-1,0,k)+tr11*tr2+tr12*tr3;\n      T ci2=CC(i  ,0,k)+tr11*ti2+tr12*ti3;\n      T cr3=CC(i-1,0,k)+tr12*tr2+tr11*tr3;\n      T ci3=CC(i  ,0,k)+tr12*ti2+tr11*ti3;\n      T ci4, ci5, cr5, cr4;\n      MULPM(cr5,cr4,tr5,tr4,ti11,ti12);\n      MULPM(ci5,ci4,ti5,ti4,ti11,ti12);\n      T dr2, dr3, dr4, dr5, di2, di3, di4, di5;\n      PM(dr4,dr3,cr3,ci4);\n      PM(di3,di4,ci3,cr4);\n      PM(dr5,dr2,cr2,ci5);\n      PM(di2,di5,ci2,cr5);\n      MULPM(CH(i,k,1),CH(i-1,k,1),WA(0,i-2),WA(0,i-1),di2,dr2);\n      MULPM(CH(i,k,2),CH(i-1,k,2),WA(1,i-2),WA(1,i-1),di3,dr3);\n      MULPM(CH(i,k,3),CH(i-1,k,3),WA(2,i-2),WA(2,i-1),di4,dr4);\n      MULPM(CH(i,k,4),CH(i-1,k,4),WA(3,i-2),WA(3,i-1),di5,dr5);\n      }\n  }\n\ntemplate<typename T> void radbg(size_t ido, size_t ip, size_t l1,\n  T * POCKETFFT_RESTRICT cc, T * POCKETFFT_RESTRICT ch,\n  const T0 * POCKETFFT_RESTRICT wa, const T0 * POCKETFFT_RESTRICT csarr) const\n  {\n  const size_t cdim=ip;\n  size_t ipph=(ip+1)/ 2;\n  size_t idl1 = ido*l1;\n\n  auto CC = [cc,ido,cdim](size_t a, size_t b, size_t c) -> const T&\n    { return cc[a+ido*(b+cdim*c)]; };\n  auto CH = [ch,ido,l1](size_t a, size_t b, size_t c) -> T&\n    { return ch[a+ido*(b+l1*c)]; };\n  auto C1 = [cc,ido,l1](size_t a, size_t b, size_t c) -> const T&\n    { return cc[a+ido*(b+l1*c)]; };\n  auto C2 = [cc,idl1](size_t a, size_t b) -> T&\n    { return cc[a+idl1*b]; };\n  auto CH2 = [ch,idl1](size_t a, size_t b) -> T&\n    { return ch[a+idl1*b]; };\n\n  for (size_t k=0; k<l1; ++k)        // 102\n    for (size_t i=0; i<ido; ++i)     // 101\n      CH(i,k,0) = CC(i,0,k);\n  for (size_t j=1, jc=ip-1; j<ipph; ++j, --jc)   // 108\n    {\n    size_t j2=2*j-1;\n    for (size_t k=0; k<l1; ++k)\n      {\n      CH(0,k,j ) = 2*CC(ido-1,j2,k);\n      CH(0,k,jc) = 2*CC(0,j2+1,k);\n      }\n    }\n\n  if (ido!=1)\n    {\n    for (size_t j=1, jc=ip-1; j<ipph; ++j,--jc)   // 111\n      
{\n      size_t j2=2*j-1;\n      for (size_t k=0; k<l1; ++k)\n        for (size_t i=1, ic=ido-i-2; i<=ido-2; i+=2, ic-=2)      // 109\n          {\n          CH(i  ,k,j ) = CC(i  ,j2+1,k)+CC(ic  ,j2,k);\n          CH(i  ,k,jc) = CC(i  ,j2+1,k)-CC(ic  ,j2,k);\n          CH(i+1,k,j ) = CC(i+1,j2+1,k)-CC(ic+1,j2,k);\n          CH(i+1,k,jc) = CC(i+1,j2+1,k)+CC(ic+1,j2,k);\n          }\n      }\n    }\n  for (size_t l=1,lc=ip-1; l<ipph; ++l,--lc)\n    {\n    for (size_t ik=0; ik<idl1; ++ik)\n      {\n      C2(ik,l ) = CH2(ik,0)+csarr[2*l]*CH2(ik,1)+csarr[4*l]*CH2(ik,2);\n      C2(ik,lc) = csarr[2*l+1]*CH2(ik,ip-1)+csarr[4*l+1]*CH2(ik,ip-2);\n      }\n    size_t iang=2*l;\n    size_t j=3,jc=ip-3;\n    for(; j<ipph-3; j+=4,jc-=4)\n      {\n      iang+=l; if(iang>=ip) iang-=ip;\n      T0 ar1=csarr[2*iang], ai1=csarr[2*iang+1];\n      iang+=l; if(iang>=ip) iang-=ip;\n      T0 ar2=csarr[2*iang], ai2=csarr[2*iang+1];\n      iang+=l; if(iang>=ip) iang-=ip;\n      T0 ar3=csarr[2*iang], ai3=csarr[2*iang+1];\n      iang+=l; if(iang>=ip) iang-=ip;\n      T0 ar4=csarr[2*iang], ai4=csarr[2*iang+1];\n      for (size_t ik=0; ik<idl1; ++ik)\n        {\n        C2(ik,l ) += ar1*CH2(ik,j )+ar2*CH2(ik,j +1)\n                    +ar3*CH2(ik,j +2)+ar4*CH2(ik,j +3);\n        C2(ik,lc) += ai1*CH2(ik,jc)+ai2*CH2(ik,jc-1)\n                    +ai3*CH2(ik,jc-2)+ai4*CH2(ik,jc-3);\n        }\n      }\n    for(; j<ipph-1; j+=2,jc-=2)\n      {\n      iang+=l; if(iang>=ip) iang-=ip;\n      T0 ar1=csarr[2*iang], ai1=csarr[2*iang+1];\n      iang+=l; if(iang>=ip) iang-=ip;\n      T0 ar2=csarr[2*iang], ai2=csarr[2*iang+1];\n      for (size_t ik=0; ik<idl1; ++ik)\n        {\n        C2(ik,l ) += ar1*CH2(ik,j )+ar2*CH2(ik,j +1);\n        C2(ik,lc) += ai1*CH2(ik,jc)+ai2*CH2(ik,jc-1);\n        }\n      }\n    for(; j<ipph; ++j,--jc)\n      {\n      iang+=l; if(iang>=ip) iang-=ip;\n      T0 war=csarr[2*iang], wai=csarr[2*iang+1];\n      for (size_t ik=0; ik<idl1; ++ik)\n        {\n        C2(ik,l ) += war*CH2(ik,j 
);\n        C2(ik,lc) += wai*CH2(ik,jc);\n        }\n      }\n    }\n  for (size_t j=1; j<ipph; ++j)\n    for (size_t ik=0; ik<idl1; ++ik)\n      CH2(ik,0) += CH2(ik,j);\n  for (size_t j=1, jc=ip-1; j<ipph; ++j,--jc)   // 124\n    for (size_t k=0; k<l1; ++k)\n      PM(CH(0,k,jc),CH(0,k,j),C1(0,k,j),C1(0,k,jc));\n\n  if (ido==1) return;\n\n  for (size_t j=1, jc=ip-1; j<ipph; ++j, --jc)  // 127\n    for (size_t k=0; k<l1; ++k)\n      for (size_t i=1; i<=ido-2; i+=2)\n        {\n        CH(i  ,k,j ) = C1(i  ,k,j)-C1(i+1,k,jc);\n        CH(i  ,k,jc) = C1(i  ,k,j)+C1(i+1,k,jc);\n        CH(i+1,k,j ) = C1(i+1,k,j)+C1(i  ,k,jc);\n        CH(i+1,k,jc) = C1(i+1,k,j)-C1(i  ,k,jc);\n        }\n\n// All in CH\n\n  for (size_t j=1; j<ip; ++j)\n    {\n    size_t is = (j-1)*(ido-1);\n    for (size_t k=0; k<l1; ++k)\n      {\n      size_t idij = is;\n      for (size_t i=1; i<=ido-2; i+=2)\n        {\n        T t1=CH(i,k,j), t2=CH(i+1,k,j);\n        CH(i  ,k,j) = wa[idij]*t1-wa[idij+1]*t2;\n        CH(i+1,k,j) = wa[idij]*t2+wa[idij+1]*t1;\n        idij+=2;\n        }\n      }\n    }\n  }\n\n    template<typename T> void copy_and_norm(T *c, T *p1, T0 fct) const\n      {\n      if (p1!=c)\n        {\n        if (fct!=1.)\n          for (size_t i=0; i<length; ++i)\n            c[i] = fct*p1[i];\n        else\n          std::copy_n (p1, length, c);\n        }\n      else\n        if (fct!=1.)\n          for (size_t i=0; i<length; ++i)\n            c[i] *= fct;\n      }\n\n  public:\n    template<typename T> void exec(T c[], T0 fct, bool r2hc) const\n      {\n      if (length==1) { c[0]*=fct; return; }\n      size_t nf=fact.size();\n      arr<T> ch(length);\n      T *p1=c, *p2=ch.data();\n\n      if (r2hc)\n        for(size_t k1=0, l1=length; k1<nf;++k1)\n          {\n          size_t k=nf-k1-1;\n          size_t ip=fact[k].fct;\n          size_t ido=length / l1;\n          l1 /= ip;\n          if(ip==4)\n            radf4(ido, l1, p1, p2, fact[k].tw);\n          else if(ip==2)\n        
    radf2(ido, l1, p1, p2, fact[k].tw);\n          else if(ip==3)\n            radf3(ido, l1, p1, p2, fact[k].tw);\n          else if(ip==5)\n            radf5(ido, l1, p1, p2, fact[k].tw);\n          else\n            { radfg(ido, ip, l1, p1, p2, fact[k].tw, fact[k].tws); std::swap (p1,p2); }\n          std::swap (p1,p2);\n          }\n      else\n        for(size_t k=0, l1=1; k<nf; k++)\n          {\n          size_t ip = fact[k].fct,\n                 ido= length/(ip*l1);\n          if(ip==4)\n            radb4(ido, l1, p1, p2, fact[k].tw);\n          else if(ip==2)\n            radb2(ido, l1, p1, p2, fact[k].tw);\n          else if(ip==3)\n            radb3(ido, l1, p1, p2, fact[k].tw);\n          else if(ip==5)\n            radb5(ido, l1, p1, p2, fact[k].tw);\n          else\n            radbg(ido, ip, l1, p1, p2, fact[k].tw, fact[k].tws);\n          std::swap (p1,p2);\n          l1*=ip;\n          }\n\n      copy_and_norm(c,p1,fct);\n      }\n\n  private:\n    void factorize()\n      {\n      size_t len=length;\n      while ((len%4)==0)\n        { add_factor(4); len>>=2; }\n      if ((len%2)==0)\n        {\n        len>>=1;\n        // factor 2 should be at the front of the factor list\n        add_factor(2);\n        std::swap(fact[0].fct, fact.back().fct);\n        }\n      for (size_t divisor=3; divisor*divisor<=len; divisor+=2)\n        while ((len%divisor)==0)\n          {\n          add_factor(divisor);\n          len/=divisor;\n          }\n      if (len>1) add_factor(len);\n      }\n\n    size_t twsize() const\n      {\n      size_t twsz=0, l1=1;\n      for (size_t k=0; k<fact.size(); ++k)\n        {\n        size_t ip=fact[k].fct, ido=length/(l1*ip);\n        twsz+=(ip-1)*(ido-1);\n        if (ip>5) twsz+=2*ip;\n        l1*=ip;\n        }\n      return twsz;\n      }\n\n    void comp_twiddle()\n      {\n      sincos_2pibyn<T0> twid(length);\n      size_t l1=1;\n      T0 *ptr=mem.data();\n      for (size_t k=0; k<fact.size(); ++k)\n        {\n        
size_t ip=fact[k].fct, ido=length/(l1*ip);\n        if (k<fact.size()-1) // last factor doesn't need twiddles\n          {\n          fact[k].tw=ptr; ptr+=(ip-1)*(ido-1);\n          for (size_t j=1; j<ip; ++j)\n            for (size_t i=1; i<=(ido-1)/2; ++i)\n              {\n              fact[k].tw[(j-1)*(ido-1)+2*i-2] = twid[j*l1*i].r;\n              fact[k].tw[(j-1)*(ido-1)+2*i-1] = twid[j*l1*i].i;\n              }\n          }\n        if (ip>5) // special factors required by *g functions\n          {\n          fact[k].tws=ptr; ptr+=2*ip;\n          fact[k].tws[0] = 1.;\n          fact[k].tws[1] = 0.;\n          for (size_t i=2, ic=2*ip-2; i<=ic; i+=2, ic-=2)\n            {\n            fact[k].tws[i  ] = twid[i/2*(length/ip)].r;\n            fact[k].tws[i+1] = twid[i/2*(length/ip)].i;\n            fact[k].tws[ic]   = twid[i/2*(length/ip)].r;\n            fact[k].tws[ic+1] = -twid[i/2*(length/ip)].i;\n            }\n          }\n        l1*=ip;\n        }\n      }\n\n  public:\n    POCKETFFT_NOINLINE rfftp(size_t length_)\n      : length(length_)\n      {\n      if (length==0) throw std::runtime_error(\"zero-length FFT requested\");\n      if (length==1) return;\n      factorize();\n      mem.resize(twsize());\n      comp_twiddle();\n      }\n};\n\n//\n// complex Bluestein transforms\n//\n\ntemplate<typename T0> class fftblue\n  {\n  private:\n    size_t n, n2;\n    cfftp<T0> plan;\n    arr<cmplx<T0>> mem;\n    cmplx<T0> *bk, *bkf;\n\n    template<bool fwd, typename T> void fft(cmplx<T> c[], T0 fct) const\n      {\n      arr<cmplx<T>> akf(n2);\n\n      /* initialize a_k and FFT it */\n      for (size_t m=0; m<n; ++m)\n        special_mul<fwd>(c[m],bk[m],akf[m]);\n      auto zero = akf[0]*T0(0);\n      for (size_t m=n; m<n2; ++m)\n        akf[m]=zero;\n\n      plan.exec (akf.data(),1.,true);\n\n      /* do the convolution */\n      akf[0] = akf[0].template special_mul<!fwd>(bkf[0]);\n      for (size_t m=1; m<(n2+1)/2; ++m)\n        {\n        akf[m] = 
akf[m].template special_mul<!fwd>(bkf[m]);\n        akf[n2-m] = akf[n2-m].template special_mul<!fwd>(bkf[m]);\n        }\n      if ((n2&1)==0)\n        akf[n2/2] = akf[n2/2].template special_mul<!fwd>(bkf[n2/2]);\n\n      /* inverse FFT */\n      plan.exec (akf.data(),1.,false);\n\n      /* multiply by b_k */\n      for (size_t m=0; m<n; ++m)\n        c[m] = akf[m].template special_mul<fwd>(bk[m])*fct;\n      }\n\n  public:\n    POCKETFFT_NOINLINE fftblue(size_t length)\n      : n(length), n2(util::good_size_cmplx(n*2-1)), plan(n2), mem(n+n2/2+1),\n        bk(mem.data()), bkf(mem.data()+n)\n      {\n      /* initialize b_k */\n      sincos_2pibyn<T0> tmp(2*n);\n      bk[0].Set(1, 0);\n\n      size_t coeff=0;\n      for (size_t m=1; m<n; ++m)\n        {\n        coeff+=2*m-1;\n        if (coeff>=2*n) coeff-=2*n;\n        bk[m] = tmp[coeff];\n        }\n\n      /* initialize the zero-padded, Fourier transformed b_k. Add normalisation. */\n      arr<cmplx<T0>> tbkf(n2);\n      T0 xn2 = T0(1)/T0(n2);\n      tbkf[0] = bk[0]*xn2;\n      for (size_t m=1; m<n; ++m)\n        tbkf[m] = tbkf[n2-m] = bk[m]*xn2;\n      for (size_t m=n;m<=(n2-n);++m)\n        tbkf[m].Set(0.,0.);\n      plan.exec(tbkf.data(),1.,true);\n      for (size_t i=0; i<n2/2+1; ++i)\n        bkf[i] = tbkf[i];\n      }\n\n    template<typename T> void exec(cmplx<T> c[], T0 fct, bool fwd) const\n      { fwd ? 
fft<true>(c,fct) : fft<false>(c,fct); }\n\n    template<typename T> void exec_r(T c[], T0 fct, bool fwd)\n      {\n      arr<cmplx<T>> tmp(n);\n      if (fwd)\n        {\n        auto zero = T0(0)*c[0];\n        for (size_t m=0; m<n; ++m)\n          tmp[m].Set(c[m], zero);\n        fft<true>(tmp.data(),fct);\n        c[0] = tmp[0].r;\n        std::copy_n (&tmp[1].r, n-1, &c[1]);\n        }\n      else\n        {\n        tmp[0].Set(c[0],c[0]*0);\n        std::copy_n (c+1, n-1, &tmp[1].r);\n        if ((n&1)==0) tmp[n/2].i=T0(0)*c[0];\n        for (size_t m=1; 2*m<n; ++m)\n          tmp[n-m].Set(tmp[m].r, -tmp[m].i);\n        fft<false>(tmp.data(),fct);\n        for (size_t m=0; m<n; ++m)\n          c[m] = tmp[m].r;\n        }\n      }\n  };\n\n//\n// flexible (FFTPACK/Bluestein) complex 1D transform\n//\n\ntemplate<typename T0> class pocketfft_c\n  {\n  private:\n    std::unique_ptr<cfftp<T0>> packplan;\n    std::unique_ptr<fftblue<T0>> blueplan;\n    size_t len;\n\n  public:\n    POCKETFFT_NOINLINE pocketfft_c(size_t length)\n      : len(length)\n      {\n      if (length==0) throw std::runtime_error(\"zero-length FFT requested\");\n      size_t tmp = (length<50) ? 0 : util::largest_prime_factor(length);\n      if (tmp*tmp <= length)\n        {\n        packplan=std::unique_ptr<cfftp<T0>>(new cfftp<T0>(length));\n        return;\n        }\n      double comp1 = util::cost_guess(length);\n      double comp2 = 2*util::cost_guess(util::good_size_cmplx(2*length-1));\n      comp2*=1.5; /* fudge factor that appears to give good overall performance */\n      if (comp2<comp1) // use Bluestein\n        blueplan=std::unique_ptr<fftblue<T0>>(new fftblue<T0>(length));\n      else\n        packplan=std::unique_ptr<cfftp<T0>>(new cfftp<T0>(length));\n      }\n\n    template<typename T> POCKETFFT_NOINLINE void exec(cmplx<T> c[], T0 fct, bool fwd) const\n      { packplan ? 
packplan->exec(c,fct,fwd) : blueplan->exec(c,fct,fwd); }\n\n    size_t length() const { return len; }\n  };\n\n//\n// flexible (FFTPACK/Bluestein) real-valued 1D transform\n//\n\ntemplate<typename T0> class pocketfft_r\n  {\n  private:\n    std::unique_ptr<rfftp<T0>> packplan;\n    std::unique_ptr<fftblue<T0>> blueplan;\n    size_t len;\n\n  public:\n    POCKETFFT_NOINLINE pocketfft_r(size_t length)\n      : len(length)\n      {\n      if (length==0) throw std::runtime_error(\"zero-length FFT requested\");\n      size_t tmp = (length<50) ? 0 : util::largest_prime_factor(length);\n      if (tmp*tmp <= length)\n        {\n        packplan=std::unique_ptr<rfftp<T0>>(new rfftp<T0>(length));\n        return;\n        }\n      double comp1 = 0.5*util::cost_guess(length);\n      double comp2 = 2*util::cost_guess(util::good_size_cmplx(2*length-1));\n      comp2*=1.5; /* fudge factor that appears to give good overall performance */\n      if (comp2<comp1) // use Bluestein\n        blueplan=std::unique_ptr<fftblue<T0>>(new fftblue<T0>(length));\n      else\n        packplan=std::unique_ptr<rfftp<T0>>(new rfftp<T0>(length));\n      }\n\n    template<typename T> POCKETFFT_NOINLINE void exec(T c[], T0 fct, bool fwd) const\n      { packplan ? 
packplan->exec(c,fct,fwd) : blueplan->exec_r(c,fct,fwd); }\n\n    size_t length() const { return len; }\n  };\n\n\n//\n// sine/cosine transforms\n//\n\ntemplate<typename T0> class T_dct1\n  {\n  private:\n    pocketfft_r<T0> fftplan;\n\n  public:\n    POCKETFFT_NOINLINE T_dct1(size_t length)\n      : fftplan(2*(length-1)) {}\n\n    template<typename T> POCKETFFT_NOINLINE void exec(T c[], T0 fct, bool ortho,\n      int /*type*/, bool /*cosine*/) const\n      {\n      constexpr T0 sqrt2=T0(1.414213562373095048801688724209698L);\n      size_t N=fftplan.length(), n=N/2+1;\n      if (ortho)\n        { c[0]*=sqrt2; c[n-1]*=sqrt2; }\n      arr<T> tmp(N);\n      tmp[0] = c[0];\n      for (size_t i=1; i<n; ++i)\n        tmp[i] = tmp[N-i] = c[i];\n      fftplan.exec(tmp.data(), fct, true);\n      c[0] = tmp[0];\n      for (size_t i=1; i<n; ++i)\n        c[i] = tmp[2*i-1];\n      if (ortho)\n        { c[0]*=sqrt2*T0(0.5); c[n-1]*=sqrt2*T0(0.5); }\n      }\n\n    size_t length() const { return fftplan.length()/2+1; }\n  };\n\ntemplate<typename T0> class T_dst1\n  {\n  private:\n    pocketfft_r<T0> fftplan;\n\n  public:\n    POCKETFFT_NOINLINE T_dst1(size_t length)\n      : fftplan(2*(length+1)) {}\n\n    template<typename T> POCKETFFT_NOINLINE void exec(T c[], T0 fct,\n      bool /*ortho*/, int /*type*/, bool /*cosine*/) const\n      {\n      size_t N=fftplan.length(), n=N/2-1;\n      arr<T> tmp(N);\n      tmp[0] = tmp[n+1] = c[0]*0;\n      for (size_t i=0; i<n; ++i)\n        { tmp[i+1]=c[i]; tmp[N-1-i]=-c[i]; }\n      fftplan.exec(tmp.data(), fct, true);\n      for (size_t i=0; i<n; ++i)\n        c[i] = -tmp[2*i+2];\n      }\n\n    size_t length() const { return fftplan.length()/2-1; }\n  };\n\ntemplate<typename T0> class T_dcst23\n  {\n  private:\n    pocketfft_r<T0> fftplan;\n    std::vector<T0> twiddle;\n\n  public:\n    POCKETFFT_NOINLINE T_dcst23(size_t length)\n      : fftplan(length), twiddle(length)\n      {\n      sincos_2pibyn<T0> tw(4*length);\n      for (size_t 
i=0; i<length; ++i)\n        twiddle[i] = tw[i+1].r;\n      }\n\n    template<typename T> POCKETFFT_NOINLINE void exec(T c[], T0 fct, bool ortho,\n      int type, bool cosine) const\n      {\n      constexpr T0 sqrt2=T0(1.414213562373095048801688724209698L);\n      size_t N=length();\n      size_t NS2 = (N+1)/2;\n      if (type==2)\n        {\n        if (!cosine)\n          for (size_t k=1; k<N; k+=2)\n            c[k] = -c[k];\n        c[0] *= 2;\n        if ((N&1)==0) c[N-1]*=2;\n        for (size_t k=1; k<N-1; k+=2)\n          MPINPLACE(c[k+1], c[k]);\n        fftplan.exec(c, fct, false);\n        for (size_t k=1, kc=N-1; k<NS2; ++k, --kc)\n          {\n          T t1 = twiddle[k-1]*c[kc]+twiddle[kc-1]*c[k];\n          T t2 = twiddle[k-1]*c[k]-twiddle[kc-1]*c[kc];\n          c[k] = T0(0.5)*(t1+t2); c[kc]=T0(0.5)*(t1-t2);\n          }\n        if ((N&1)==0)\n          c[NS2] *= twiddle[NS2-1];\n        if (!cosine)\n          for (size_t k=0, kc=N-1; k<kc; ++k, --kc)\n            std::swap(c[k], c[kc]);\n        if (ortho)\n          cosine ? c[0]*=sqrt2*T0(0.5) : c[N-1]*=sqrt2*T0(0.5);\n        }\n      else\n        {\n        if (ortho)\n          cosine ? 
c[0]*=sqrt2 : c[N-1]*=sqrt2;\n        if (!cosine)\n          for (size_t k=0, kc=N-1; k<NS2; ++k, --kc)\n            std::swap(c[k], c[kc]);\n        for (size_t k=1, kc=N-1; k<NS2; ++k, --kc)\n          {\n          T t1=c[k]+c[kc], t2=c[k]-c[kc];\n          c[k] = twiddle[k-1]*t2+twiddle[kc-1]*t1;\n          c[kc]= twiddle[k-1]*t1-twiddle[kc-1]*t2;\n          }\n        if ((N&1)==0)\n          c[NS2] *= 2*twiddle[NS2-1];\n        fftplan.exec(c, fct, true);\n        for (size_t k=1; k<N-1; k+=2)\n          MPINPLACE(c[k], c[k+1]);\n        if (!cosine)\n          for (size_t k=1; k<N; k+=2)\n            c[k] = -c[k];\n        }\n      }\n\n    size_t length() const { return fftplan.length(); }\n  };\n\ntemplate<typename T0> class T_dcst4\n  {\n  private:\n    size_t N;\n    std::unique_ptr<pocketfft_c<T0>> fft;\n    std::unique_ptr<pocketfft_r<T0>> rfft;\n    arr<cmplx<T0>> C2;\n\n  public:\n    POCKETFFT_NOINLINE T_dcst4(size_t length)\n      : N(length),\n        fft((N&1) ? nullptr : new pocketfft_c<T0>(N/2)),\n        rfft((N&1)? new pocketfft_r<T0>(N) : nullptr),\n        C2((N&1) ? 0 : N/2)\n      {\n      if ((N&1)==0)\n        {\n        sincos_2pibyn<T0> tw(16*N);\n        for (size_t i=0; i<N/2; ++i)\n          C2[i] = conj(tw[8*i+1]);\n        }\n      }\n\n    template<typename T> POCKETFFT_NOINLINE void exec(T c[], T0 fct,\n      bool /*ortho*/, int /*type*/, bool cosine) const\n      {\n      size_t n2 = N/2;\n      if (!cosine)\n        for (size_t k=0, kc=N-1; k<n2; ++k, --kc)\n          std::swap(c[k], c[kc]);\n      if (N&1)\n        {\n        // The following code is derived from the FFTW3 function apply_re11()\n        // and is released under the 3-clause BSD license with friendly\n        // permission of Matteo Frigo and Steven G. 
Johnson.\n\n        arr<T> y(N);\n        {\n        size_t i=0, m=n2;\n        for (; m<N; ++i, m+=4)\n          y[i] = c[m];\n        for (; m<2*N; ++i, m+=4)\n          y[i] = -c[2*N-m-1];\n        for (; m<3*N; ++i, m+=4)\n          y[i] = -c[m-2*N];\n        for (; m<4*N; ++i, m+=4)\n          y[i] = c[4*N-m-1];\n        for (; i<N; ++i, m+=4)\n          y[i] = c[m-4*N];\n        }\n        rfft->exec(y.data(), fct, true);\n        {\n        auto SGN = [](size_t i)\n           {\n           constexpr T0 sqrt2=T0(1.414213562373095048801688724209698L);\n           return (i&2) ? -sqrt2 : sqrt2;\n           };\n        c[n2] = y[0]*SGN(n2+1);\n        size_t i=0, i1=1, k=1;\n        for (; k<n2; ++i, ++i1, k+=2)\n          {\n          c[i    ] = y[2*k-1]*SGN(i1)     + y[2*k  ]*SGN(i);\n          c[N -i1] = y[2*k-1]*SGN(N -i)   - y[2*k  ]*SGN(N -i1);\n          c[n2-i1] = y[2*k+1]*SGN(n2-i)   - y[2*k+2]*SGN(n2-i1);\n          c[n2+i1] = y[2*k+1]*SGN(n2+i+2) + y[2*k+2]*SGN(n2+i1);\n          }\n        if (k == n2)\n          {\n          c[i   ] = y[2*k-1]*SGN(i+1) + y[2*k]*SGN(i);\n          c[N-i1] = y[2*k-1]*SGN(i+2) + y[2*k]*SGN(i1);\n          }\n        }\n\n        // FFTW-derived code ends here\n        }\n      else\n        {\n        // even length algorithm from\n        // https://www.appletonaudio.com/blog/2013/derivation-of-fast-dct-4-algorithm-based-on-dft/\n        arr<cmplx<T>> y(n2);\n        for(size_t i=0; i<n2; ++i)\n          {\n          y[i].Set(c[2*i],c[N-1-2*i]);\n          y[i] *= C2[i];\n          }\n        fft->exec(y.data(), fct, true);\n        for(size_t i=0, ic=n2-1; i<n2; ++i, --ic)\n          {\n          c[2*i  ] =  2*(y[i ].r*C2[i ].r-y[i ].i*C2[i ].i);\n          c[2*i+1] = -2*(y[ic].i*C2[ic].r+y[ic].r*C2[ic].i);\n          }\n        }\n      if (!cosine)\n        for (size_t k=1; k<N; k+=2)\n          c[k] = -c[k];\n      }\n\n    size_t length() const { return N; }\n  };\n\n\n//\n// multi-D 
infrastructure\n//\n\ntemplate<typename T> std::shared_ptr<T> get_plan(size_t length)\n  {\n#if POCKETFFT_CACHE_SIZE==0\n  return std::make_shared<T>(length);\n#else\n  constexpr size_t nmax=POCKETFFT_CACHE_SIZE;\n  static std::array<std::shared_ptr<T>, nmax> cache;\n  static std::array<size_t, nmax> last_access{{0}};\n  static size_t access_counter = 0;\n  static std::mutex mut;\n\n  auto find_in_cache = [&]() -> std::shared_ptr<T>\n    {\n    for (size_t i=0; i<nmax; ++i)\n      if (cache[i] && (cache[i]->length()==length))\n        {\n        // no need to update if this is already the most recent entry\n        if (last_access[i]!=access_counter)\n          {\n          last_access[i] = ++access_counter;\n          // Guard against overflow\n          if (access_counter == 0)\n            last_access.fill(0);\n          }\n        return cache[i];\n        }\n\n    return nullptr;\n    };\n\n  {\n  std::lock_guard<std::mutex> lock(mut);\n  auto p = find_in_cache();\n  if (p) return p;\n  }\n  auto plan = std::make_shared<T>(length);\n  {\n  std::lock_guard<std::mutex> lock(mut);\n  auto p = find_in_cache();\n  if (p) return p;\n\n  size_t lru = 0;\n  for (size_t i=1; i<nmax; ++i)\n    if (last_access[i] < last_access[lru])\n      lru = i;\n\n  cache[lru] = plan;\n  last_access[lru] = ++access_counter;\n  }\n  return plan;\n#endif\n  }\n\nclass arr_info\n  {\n  protected:\n    shape_t shp;\n    stride_t str;\n\n  public:\n    arr_info(const shape_t &shape_, const stride_t &stride_)\n      : shp(shape_), str(stride_) {}\n    size_t ndim() const { return shp.size(); }\n    size_t size() const { return util::prod(shp); }\n    const shape_t &shape() const { return shp; }\n    size_t shape(size_t i) const { return shp[i]; }\n    const stride_t &stride() const { return str; }\n    const ptrdiff_t &stride(size_t i) const { return str[i]; }\n  };\n\ntemplate<typename T> class cndarr: public arr_info\n  {\n  protected:\n    const char *d;\n\n  public:\n    cndarr(const 
void *data_, const shape_t &shape_, const stride_t &stride_)\n      : arr_info(shape_, stride_),\n        d(reinterpret_cast<const char *>(data_)) {}\n    const T &operator[](ptrdiff_t ofs) const\n      { return *reinterpret_cast<const T *>(d+ofs); }\n  };\n\ntemplate<typename T> class ndarr: public cndarr<T>\n  {\n  public:\n    ndarr(void *data_, const shape_t &shape_, const stride_t &stride_)\n      : cndarr<T>::cndarr(const_cast<const void *>(data_), shape_, stride_)\n      {}\n    T &operator[](ptrdiff_t ofs)\n      { return *reinterpret_cast<T *>(const_cast<char *>(cndarr<T>::d+ofs)); }\n  };\n\ntemplate<size_t N> class multi_iter\n  {\n  private:\n    shape_t pos;\n    const arr_info &iarr, &oarr;\n    ptrdiff_t p_ii, p_i[N], str_i, p_oi, p_o[N], str_o;\n    size_t idim, rem;\n\n    void advance_i()\n      {\n      for (int i_=int(pos.size())-1; i_>=0; --i_)\n        {\n        auto i = size_t(i_);\n        if (i==idim) continue;\n        p_ii += iarr.stride(i);\n        p_oi += oarr.stride(i);\n        if (++pos[i] < iarr.shape(i))\n          return;\n        pos[i] = 0;\n        p_ii -= ptrdiff_t(iarr.shape(i))*iarr.stride(i);\n        p_oi -= ptrdiff_t(oarr.shape(i))*oarr.stride(i);\n        }\n      }\n\n  public:\n    multi_iter(const arr_info &iarr_, const arr_info &oarr_, size_t idim_)\n      : pos(iarr_.ndim(), 0), iarr(iarr_), oarr(oarr_), p_ii(0),\n        str_i(iarr.stride(idim_)), p_oi(0), str_o(oarr.stride(idim_)),\n        idim(idim_), rem(iarr.size()/iarr.shape(idim))\n      {\n      auto nshares = threading::num_threads();\n      if (nshares==1) return;\n      if (nshares==0) throw std::runtime_error(\"can't run with zero threads\");\n      auto myshare = threading::thread_id();\n      if (myshare>=nshares) throw std::runtime_error(\"impossible share requested\");\n      size_t nbase = rem/nshares;\n      size_t additional = rem%nshares;\n      size_t lo = myshare*nbase + ((myshare<additional) ? 
myshare : additional);\n      size_t hi = lo+nbase+(myshare<additional);\n      size_t todo = hi-lo;\n\n      size_t chunk = rem;\n      for (size_t i=0; i<pos.size(); ++i)\n        {\n        if (i==idim) continue;\n        chunk /= iarr.shape(i);\n        size_t n_advance = lo/chunk;\n        pos[i] += n_advance;\n        p_ii += ptrdiff_t(n_advance)*iarr.stride(i);\n        p_oi += ptrdiff_t(n_advance)*oarr.stride(i);\n        lo -= n_advance*chunk;\n        }\n      rem = todo;\n      }\n    void advance(size_t n)\n      {\n      if (rem<n) throw std::runtime_error(\"underrun\");\n      for (size_t i=0; i<n; ++i)\n        {\n        p_i[i] = p_ii;\n        p_o[i] = p_oi;\n        advance_i();\n        }\n      rem -= n;\n      }\n    ptrdiff_t iofs(size_t i) const { return p_i[0] + ptrdiff_t(i)*str_i; }\n    ptrdiff_t iofs(size_t j, size_t i) const { return p_i[j] + ptrdiff_t(i)*str_i; }\n    ptrdiff_t oofs(size_t i) const { return p_o[0] + ptrdiff_t(i)*str_o; }\n    ptrdiff_t oofs(size_t j, size_t i) const { return p_o[j] + ptrdiff_t(i)*str_o; }\n    size_t length_in() const { return iarr.shape(idim); }\n    size_t length_out() const { return oarr.shape(idim); }\n    ptrdiff_t stride_in() const { return str_i; }\n    ptrdiff_t stride_out() const { return str_o; }\n    size_t remaining() const { return rem; }\n  };\n\nclass simple_iter\n  {\n  private:\n    shape_t pos;\n    const arr_info &arr;\n    ptrdiff_t p;\n    size_t rem;\n\n  public:\n    simple_iter(const arr_info &arr_)\n      : pos(arr_.ndim(), 0), arr(arr_), p(0), rem(arr_.size()) {}\n    void advance()\n      {\n      --rem;\n      for (int i_=int(pos.size())-1; i_>=0; --i_)\n        {\n        auto i = size_t(i_);\n        p += arr.stride(i);\n        if (++pos[i] < arr.shape(i))\n          return;\n        pos[i] = 0;\n        p -= ptrdiff_t(arr.shape(i))*arr.stride(i);\n        }\n      }\n    ptrdiff_t ofs() const { return p; }\n    size_t remaining() const { return rem; }\n  };\n\nclass 
rev_iter\n  {\n  private:\n    shape_t pos;\n    const arr_info &arr;\n    std::vector<char> rev_axis;\n    std::vector<char> rev_jump;\n    size_t last_axis, last_size;\n    shape_t shp;\n    ptrdiff_t p, rp;\n    size_t rem;\n\n  public:\n    rev_iter(const arr_info &arr_, const shape_t &axes)\n      : pos(arr_.ndim(), 0), arr(arr_), rev_axis(arr_.ndim(), 0),\n        rev_jump(arr_.ndim(), 1), p(0), rp(0)\n      {\n      for (auto ax: axes)\n        rev_axis[ax]=1;\n      last_axis = axes.back();\n      last_size = arr.shape(last_axis)/2 + 1;\n      shp = arr.shape();\n      shp[last_axis] = last_size;\n      rem=1;\n      for (auto i: shp)\n        rem *= i;\n      }\n    void advance()\n      {\n      --rem;\n      for (int i_=int(pos.size())-1; i_>=0; --i_)\n        {\n        auto i = size_t(i_);\n        p += arr.stride(i);\n        if (!rev_axis[i])\n          rp += arr.stride(i);\n        else\n          {\n          rp -= arr.stride(i);\n          if (rev_jump[i])\n            {\n            rp += ptrdiff_t(arr.shape(i))*arr.stride(i);\n            rev_jump[i] = 0;\n            }\n          }\n        if (++pos[i] < shp[i])\n          return;\n        pos[i] = 0;\n        p -= ptrdiff_t(shp[i])*arr.stride(i);\n        if (rev_axis[i])\n          {\n          rp -= ptrdiff_t(arr.shape(i)-shp[i])*arr.stride(i);\n          rev_jump[i] = 1;\n          }\n        else\n          rp -= ptrdiff_t(shp[i])*arr.stride(i);\n        }\n      }\n    ptrdiff_t ofs() const { return p; }\n    ptrdiff_t rev_ofs() const { return rp; }\n    size_t remaining() const { return rem; }\n  };\n\ntemplate<typename T> struct VTYPE {};\ntemplate <typename T> using vtype_t = typename VTYPE<T>::type;\n\n#ifndef POCKETFFT_NO_VECTORS\ntemplate<> struct VTYPE<float>\n  {\n  using type = float __attribute__ ((vector_size (VLEN<float>::val*sizeof(float))));\n  };\ntemplate<> struct VTYPE<double>\n  {\n  using type = double __attribute__ ((vector_size (VLEN<double>::val*sizeof(double))));\n 
 };\ntemplate<> struct VTYPE<long double>\n  {\n  using type = long double __attribute__ ((vector_size (VLEN<long double>::val*sizeof(long double))));\n  };\n#endif\n\ntemplate<typename T> arr<char> alloc_tmp(const shape_t &shape,\n  size_t axsize, size_t elemsize)\n  {\n  auto othersize = util::prod(shape)/axsize;\n  auto tmpsize = axsize*((othersize>=VLEN<T>::val) ? VLEN<T>::val : 1);\n  return arr<char>(tmpsize*elemsize);\n  }\ntemplate<typename T> arr<char> alloc_tmp(const shape_t &shape,\n  const shape_t &axes, size_t elemsize)\n  {\n  size_t fullsize=util::prod(shape);\n  size_t tmpsize=0;\n  for (size_t i=0; i<axes.size(); ++i)\n    {\n    auto axsize = shape[axes[i]];\n    auto othersize = fullsize/axsize;\n    auto sz = axsize*((othersize>=VLEN<T>::val) ? VLEN<T>::val : 1);\n    if (sz>tmpsize) tmpsize=sz;\n    }\n  return arr<char>(tmpsize*elemsize);\n  }\n\ntemplate <typename T, size_t vlen> void copy_input(const multi_iter<vlen> &it,\n  const cndarr<cmplx<T>> &src, cmplx<vtype_t<T>> *POCKETFFT_RESTRICT dst)\n  {\n  for (size_t i=0; i<it.length_in(); ++i)\n    for (size_t j=0; j<vlen; ++j)\n      {\n      dst[i].r[j] = src[it.iofs(j,i)].r;\n      dst[i].i[j] = src[it.iofs(j,i)].i;\n      }\n  }\n\ntemplate <typename T, size_t vlen> void copy_input(const multi_iter<vlen> &it,\n  const cndarr<T> &src, vtype_t<T> *POCKETFFT_RESTRICT dst)\n  {\n  for (size_t i=0; i<it.length_in(); ++i)\n    for (size_t j=0; j<vlen; ++j)\n      dst[i][j] = src[it.iofs(j,i)];\n  }\n\ntemplate <typename T, size_t vlen> void copy_input(const multi_iter<vlen> &it,\n  const cndarr<T> &src, T *POCKETFFT_RESTRICT dst)\n  {\n  if (dst == &src[it.iofs(0)]) return;  // in-place\n  for (size_t i=0; i<it.length_in(); ++i)\n    dst[i] = src[it.iofs(i)];\n  }\n\ntemplate<typename T, size_t vlen> void copy_output(const multi_iter<vlen> &it,\n  const cmplx<vtype_t<T>> *POCKETFFT_RESTRICT src, ndarr<cmplx<T>> &dst)\n  {\n  for (size_t i=0; i<it.length_out(); ++i)\n    for (size_t j=0; j<vlen; 
++j)\n      dst[it.oofs(j,i)].Set(src[i].r[j],src[i].i[j]);\n  }\n\ntemplate<typename T, size_t vlen> void copy_output(const multi_iter<vlen> &it,\n  const vtype_t<T> *POCKETFFT_RESTRICT src, ndarr<T> &dst)\n  {\n  for (size_t i=0; i<it.length_out(); ++i)\n    for (size_t j=0; j<vlen; ++j)\n      dst[it.oofs(j,i)] = src[i][j];\n  }\n\ntemplate<typename T, size_t vlen> void copy_output(const multi_iter<vlen> &it,\n  const T *POCKETFFT_RESTRICT src, ndarr<T> &dst)\n  {\n  if (src == &dst[it.oofs(0)]) return;  // in-place\n  for (size_t i=0; i<it.length_out(); ++i)\n    dst[it.oofs(i)] = src[i];\n  }\n\ntemplate <typename T> struct add_vec { using type = vtype_t<T>; };\ntemplate <typename T> struct add_vec<cmplx<T>>\n  { using type = cmplx<vtype_t<T>>; };\ntemplate <typename T> using add_vec_t = typename add_vec<T>::type;\n\ntemplate<typename Tplan, typename T, typename T0, typename Exec>\nPOCKETFFT_NOINLINE void general_nd(const cndarr<T> &in, ndarr<T> &out,\n  const shape_t &axes, T0 fct, size_t nthreads, const Exec & exec,\n  const bool allow_inplace=true)\n  {\n  std::shared_ptr<Tplan> plan;\n\n  for (size_t iax=0; iax<axes.size(); ++iax)\n    {\n    size_t len=in.shape(axes[iax]);\n    if ((!plan) || (len!=plan->length()))\n      plan = get_plan<Tplan>(len);\n\n    threading::thread_map(\n      util::thread_count(nthreads, in.shape(), axes[iax], VLEN<T>::val),\n      [&] {\n        constexpr auto vlen = VLEN<T0>::val;\n        auto storage = alloc_tmp<T0>(in.shape(), len, sizeof(T));\n        const auto &tin(iax==0? 
in : out);\n        multi_iter<vlen> it(tin, out, axes[iax]);\n#ifndef POCKETFFT_NO_VECTORS\n        if (vlen>1)\n          while (it.remaining()>=vlen)\n            {\n            it.advance(vlen);\n            auto tdatav = reinterpret_cast<add_vec_t<T> *>(storage.data());\n            exec(it, tin, out, tdatav, *plan, fct);\n            }\n#endif\n        while (it.remaining()>0)\n          {\n          it.advance(1);\n          auto buf = allow_inplace && it.stride_out() == sizeof(T) ?\n            &out[it.oofs(0)] : reinterpret_cast<T *>(storage.data());\n          exec(it, tin, out, buf, *plan, fct);\n          }\n      });  // end of parallel region\n    fct = T0(1); // factor has been applied, use 1 for remaining axes\n    }\n  }\n\nstruct ExecC2C\n  {\n  bool forward;\n\n  template <typename T0, typename T, size_t vlen> void operator () (\n    const multi_iter<vlen> &it, const cndarr<cmplx<T0>> &in,\n    ndarr<cmplx<T0>> &out, T * buf, const pocketfft_c<T0> &plan, T0 fct) const\n    {\n    copy_input(it, in, buf);\n    plan.exec(buf, fct, forward);\n    copy_output(it, buf, out);\n    }\n  };\n\ntemplate <typename T, size_t vlen> void copy_hartley(const multi_iter<vlen> &it,\n  const vtype_t<T> *POCKETFFT_RESTRICT src, ndarr<T> &dst)\n  {\n  for (size_t j=0; j<vlen; ++j)\n    dst[it.oofs(j,0)] = src[0][j];\n  size_t i=1, i1=1, i2=it.length_out()-1;\n  for (i=1; i<it.length_out()-1; i+=2, ++i1, --i2)\n    for (size_t j=0; j<vlen; ++j)\n      {\n        dst[it.oofs(j,i1)] = src[i][j]+src[i+1][j];\n        dst[it.oofs(j,i2)] = src[i][j]-src[i+1][j];\n      }\n  if (i<it.length_out())\n    for (size_t j=0; j<vlen; ++j)\n      dst[it.oofs(j,i1)] = src[i][j];\n  }\n\ntemplate <typename T, size_t vlen> void copy_hartley(const multi_iter<vlen> &it,\n  const T *POCKETFFT_RESTRICT src, ndarr<T> &dst)\n  {\n  dst[it.oofs(0)] = src[0];\n  size_t i=1, i1=1, i2=it.length_out()-1;\n  for (i=1; i<it.length_out()-1; i+=2, ++i1, --i2)\n    {\n    dst[it.oofs(i1)] = 
src[i]+src[i+1];\n    dst[it.oofs(i2)] = src[i]-src[i+1];\n    }\n  if (i<it.length_out())\n    dst[it.oofs(i1)] = src[i];\n  }\n\nstruct ExecHartley\n  {\n  template <typename T0, typename T, size_t vlen> void operator () (\n    const multi_iter<vlen> &it, const cndarr<T0> &in, ndarr<T0> &out,\n    T * buf, const pocketfft_r<T0> &plan, T0 fct) const\n    {\n    copy_input(it, in, buf);\n    plan.exec(buf, fct, true);\n    copy_hartley(it, buf, out);\n    }\n  };\n\nstruct ExecDcst\n  {\n  bool ortho;\n  int type;\n  bool cosine;\n\n  template <typename T0, typename T, typename Tplan, size_t vlen>\n  void operator () (const multi_iter<vlen> &it, const cndarr<T0> &in,\n    ndarr<T0> &out, T * buf, const Tplan &plan, T0 fct) const\n    {\n    copy_input(it, in, buf);\n    plan.exec(buf, fct, ortho, type, cosine);\n    copy_output(it, buf, out);\n    }\n  };\n\ntemplate<typename T> POCKETFFT_NOINLINE void general_r2c(\n  const cndarr<T> &in, ndarr<cmplx<T>> &out, size_t axis, bool forward, T fct,\n  size_t nthreads)\n  {\n  auto plan = get_plan<pocketfft_r<T>>(in.shape(axis));\n  size_t len=in.shape(axis);\n  threading::thread_map(\n    util::thread_count(nthreads, in.shape(), axis, VLEN<T>::val),\n    [&] {\n    constexpr auto vlen = VLEN<T>::val;\n    auto storage = alloc_tmp<T>(in.shape(), len, sizeof(T));\n    multi_iter<vlen> it(in, out, axis);\n#ifndef POCKETFFT_NO_VECTORS\n    if (vlen>1)\n      while (it.remaining()>=vlen)\n        {\n        it.advance(vlen);\n        auto tdatav = reinterpret_cast<vtype_t<T> *>(storage.data());\n        copy_input(it, in, tdatav);\n        plan->exec(tdatav, fct, true);\n        for (size_t j=0; j<vlen; ++j)\n          out[it.oofs(j,0)].Set(tdatav[0][j]);\n        size_t i=1, ii=1;\n        if (forward)\n          for (; i<len-1; i+=2, ++ii)\n            for (size_t j=0; j<vlen; ++j)\n              out[it.oofs(j,ii)].Set(tdatav[i][j], tdatav[i+1][j]);\n        else\n          for (; i<len-1; i+=2, ++ii)\n            for 
(size_t j=0; j<vlen; ++j)\n              out[it.oofs(j,ii)].Set(tdatav[i][j], -tdatav[i+1][j]);\n        if (i<len)\n          for (size_t j=0; j<vlen; ++j)\n            out[it.oofs(j,ii)].Set(tdatav[i][j]);\n        }\n#endif\n    while (it.remaining()>0)\n      {\n      it.advance(1);\n      auto tdata = reinterpret_cast<T *>(storage.data());\n      copy_input(it, in, tdata);\n      plan->exec(tdata, fct, true);\n      out[it.oofs(0)].Set(tdata[0]);\n      size_t i=1, ii=1;\n      if (forward)\n        for (; i<len-1; i+=2, ++ii)\n          out[it.oofs(ii)].Set(tdata[i], tdata[i+1]);\n      else\n        for (; i<len-1; i+=2, ++ii)\n          out[it.oofs(ii)].Set(tdata[i], -tdata[i+1]);\n      if (i<len)\n        out[it.oofs(ii)].Set(tdata[i]);\n      }\n    });  // end of parallel region\n  }\ntemplate<typename T> POCKETFFT_NOINLINE void general_c2r(\n  const cndarr<cmplx<T>> &in, ndarr<T> &out, size_t axis, bool forward, T fct,\n  size_t nthreads)\n  {\n  auto plan = get_plan<pocketfft_r<T>>(out.shape(axis));\n  size_t len=out.shape(axis);\n  threading::thread_map(\n    util::thread_count(nthreads, in.shape(), axis, VLEN<T>::val),\n    [&] {\n      constexpr auto vlen = VLEN<T>::val;\n      auto storage = alloc_tmp<T>(out.shape(), len, sizeof(T));\n      multi_iter<vlen> it(in, out, axis);\n#ifndef POCKETFFT_NO_VECTORS\n      if (vlen>1)\n        while (it.remaining()>=vlen)\n          {\n          it.advance(vlen);\n          auto tdatav = reinterpret_cast<vtype_t<T> *>(storage.data());\n          for (size_t j=0; j<vlen; ++j)\n            tdatav[0][j]=in[it.iofs(j,0)].r;\n          {\n          size_t i=1, ii=1;\n          if (forward)\n            for (; i<len-1; i+=2, ++ii)\n              for (size_t j=0; j<vlen; ++j)\n                {\n                tdatav[i  ][j] =  in[it.iofs(j,ii)].r;\n                tdatav[i+1][j] = -in[it.iofs(j,ii)].i;\n                }\n          else\n            for (; i<len-1; i+=2, ++ii)\n              for (size_t j=0; 
j<vlen; ++j)\n                {\n                tdatav[i  ][j] = in[it.iofs(j,ii)].r;\n                tdatav[i+1][j] = in[it.iofs(j,ii)].i;\n                }\n          if (i<len)\n            for (size_t j=0; j<vlen; ++j)\n              tdatav[i][j] = in[it.iofs(j,ii)].r;\n          }\n          plan->exec(tdatav, fct, false);\n          copy_output(it, tdatav, out);\n          }\n#endif\n      while (it.remaining()>0)\n        {\n        it.advance(1);\n        auto tdata = reinterpret_cast<T *>(storage.data());\n        tdata[0]=in[it.iofs(0)].r;\n        {\n        size_t i=1, ii=1;\n        if (forward)\n          for (; i<len-1; i+=2, ++ii)\n            {\n            tdata[i  ] =  in[it.iofs(ii)].r;\n            tdata[i+1] = -in[it.iofs(ii)].i;\n            }\n        else\n          for (; i<len-1; i+=2, ++ii)\n            {\n            tdata[i  ] = in[it.iofs(ii)].r;\n            tdata[i+1] = in[it.iofs(ii)].i;\n            }\n        if (i<len)\n          tdata[i] = in[it.iofs(ii)].r;\n        }\n        plan->exec(tdata, fct, false);\n        copy_output(it, tdata, out);\n        }\n    });  // end of parallel region\n  }\n\nstruct ExecR2R\n  {\n  bool r2h, forward;\n\n  template <typename T0, typename T, size_t vlen> void operator () (\n    const multi_iter<vlen> &it, const cndarr<T0> &in, ndarr<T0> &out, T * buf,\n    const pocketfft_r<T0> &plan, T0 fct) const\n    {\n    copy_input(it, in, buf);\n    if ((!r2h) && forward)\n      for (size_t i=2; i<it.length_out(); i+=2)\n        buf[i] = -buf[i];\n    plan.exec(buf, fct, r2h);\n    if (r2h && (!forward))\n      for (size_t i=2; i<it.length_out(); i+=2)\n        buf[i] = -buf[i];\n    copy_output(it, buf, out);\n    }\n  };\n\ntemplate<typename T> void c2c(const shape_t &shape, const stride_t &stride_in,\n  const stride_t &stride_out, const shape_t &axes, bool forward,\n  const std::complex<T> *data_in, std::complex<T> *data_out, T fct,\n  size_t nthreads=1)\n  {\n  if (util::prod(shape)==0) 
return;\n  util::sanity_check(shape, stride_in, stride_out, data_in==data_out, axes);\n  cndarr<cmplx<T>> ain(data_in, shape, stride_in);\n  ndarr<cmplx<T>> aout(data_out, shape, stride_out);\n  general_nd<pocketfft_c<T>>(ain, aout, axes, fct, nthreads, ExecC2C{forward});\n  }\n\ntemplate<typename T> void dct(const shape_t &shape,\n  const stride_t &stride_in, const stride_t &stride_out, const shape_t &axes,\n  int type, const T *data_in, T *data_out, T fct, bool ortho, size_t nthreads=1)\n  {\n  if ((type<1) || (type>4)) throw std::invalid_argument(\"invalid DCT type\");\n  if (util::prod(shape)==0) return;\n  util::sanity_check(shape, stride_in, stride_out, data_in==data_out, axes);\n  cndarr<T> ain(data_in, shape, stride_in);\n  ndarr<T> aout(data_out, shape, stride_out);\n  const ExecDcst exec{ortho, type, true};\n  if (type==1)\n    general_nd<T_dct1<T>>(ain, aout, axes, fct, nthreads, exec);\n  else if (type==4)\n    general_nd<T_dcst4<T>>(ain, aout, axes, fct, nthreads, exec);\n  else\n    general_nd<T_dcst23<T>>(ain, aout, axes, fct, nthreads, exec);\n  }\n\ntemplate<typename T> void dst(const shape_t &shape,\n  const stride_t &stride_in, const stride_t &stride_out, const shape_t &axes,\n  int type, const T *data_in, T *data_out, T fct, bool ortho, size_t nthreads=1)\n  {\n  if ((type<1) || (type>4)) throw std::invalid_argument(\"invalid DST type\");\n  if (util::prod(shape)==0) return;\n  util::sanity_check(shape, stride_in, stride_out, data_in==data_out, axes);\n  cndarr<T> ain(data_in, shape, stride_in);\n  ndarr<T> aout(data_out, shape, stride_out);\n  const ExecDcst exec{ortho, type, false};\n  if (type==1)\n    general_nd<T_dst1<T>>(ain, aout, axes, fct, nthreads, exec);\n  else if (type==4)\n    general_nd<T_dcst4<T>>(ain, aout, axes, fct, nthreads, exec);\n  else\n    general_nd<T_dcst23<T>>(ain, aout, axes, fct, nthreads, exec);\n  }\n\ntemplate<typename T> void r2c(const shape_t &shape_in,\n  const stride_t &stride_in, const stride_t &stride_out, 
size_t axis,\n  bool forward, const T *data_in, std::complex<T> *data_out, T fct,\n  size_t nthreads=1)\n  {\n  if (util::prod(shape_in)==0) return;\n  util::sanity_check(shape_in, stride_in, stride_out, false, axis);\n  cndarr<T> ain(data_in, shape_in, stride_in);\n  shape_t shape_out(shape_in);\n  shape_out[axis] = shape_in[axis]/2 + 1;\n  ndarr<cmplx<T>> aout(data_out, shape_out, stride_out);\n  general_r2c(ain, aout, axis, forward, fct, nthreads);\n  }\n\ntemplate<typename T> void r2c(const shape_t &shape_in,\n  const stride_t &stride_in, const stride_t &stride_out, const shape_t &axes,\n  bool forward, const T *data_in, std::complex<T> *data_out, T fct,\n  size_t nthreads=1)\n  {\n  if (util::prod(shape_in)==0) return;\n  util::sanity_check(shape_in, stride_in, stride_out, false, axes);\n  r2c(shape_in, stride_in, stride_out, axes.back(), forward, data_in, data_out,\n    fct, nthreads);\n  if (axes.size()==1) return;\n\n  shape_t shape_out(shape_in);\n  shape_out[axes.back()] = shape_in[axes.back()]/2 + 1;\n  auto newaxes = shape_t{axes.begin(), --axes.end()};\n  c2c(shape_out, stride_out, stride_out, newaxes, forward, data_out, data_out,\n    T(1), nthreads);\n  }\n\ntemplate<typename T> void c2r(const shape_t &shape_out,\n  const stride_t &stride_in, const stride_t &stride_out, size_t axis,\n  bool forward, const std::complex<T> *data_in, T *data_out, T fct,\n  size_t nthreads=1)\n  {\n  if (util::prod(shape_out)==0) return;\n  util::sanity_check(shape_out, stride_in, stride_out, false, axis);\n  shape_t shape_in(shape_out);\n  shape_in[axis] = shape_out[axis]/2 + 1;\n  cndarr<cmplx<T>> ain(data_in, shape_in, stride_in);\n  ndarr<T> aout(data_out, shape_out, stride_out);\n  general_c2r(ain, aout, axis, forward, fct, nthreads);\n  }\n\ntemplate<typename T> void c2r(const shape_t &shape_out,\n  const stride_t &stride_in, const stride_t &stride_out, const shape_t &axes,\n  bool forward, const std::complex<T> *data_in, T *data_out, T fct,\n  size_t nthreads=1)\n 
 {\n  if (util::prod(shape_out)==0) return;\n  if (axes.size()==1)\n    return c2r(shape_out, stride_in, stride_out, axes[0], forward,\n      data_in, data_out, fct, nthreads);\n  util::sanity_check(shape_out, stride_in, stride_out, false, axes);\n  auto shape_in = shape_out;\n  shape_in[axes.back()] = shape_out[axes.back()]/2 + 1;\n  auto nval = util::prod(shape_in);\n  stride_t stride_inter(shape_in.size());\n  stride_inter.back() = sizeof(cmplx<T>);\n  for (int i=int(shape_in.size())-2; i>=0; --i)\n    stride_inter[size_t(i)] =\n      stride_inter[size_t(i+1)]*ptrdiff_t(shape_in[size_t(i+1)]);\n  arr<std::complex<T>> tmp(nval);\n  auto newaxes = shape_t{axes.begin(), --axes.end()};\n  c2c(shape_in, stride_in, stride_inter, newaxes, forward, data_in, tmp.data(),\n    T(1), nthreads);\n  c2r(shape_out, stride_inter, stride_out, axes.back(), forward,\n    tmp.data(), data_out, fct, nthreads);\n  }\n\ntemplate<typename T> void r2r_fftpack(const shape_t &shape,\n  const stride_t &stride_in, const stride_t &stride_out, const shape_t &axes,\n  bool real2hermitian, bool forward, const T *data_in, T *data_out, T fct,\n  size_t nthreads=1)\n  {\n  if (util::prod(shape)==0) return;\n  util::sanity_check(shape, stride_in, stride_out, data_in==data_out, axes);\n  cndarr<T> ain(data_in, shape, stride_in);\n  ndarr<T> aout(data_out, shape, stride_out);\n  general_nd<pocketfft_r<T>>(ain, aout, axes, fct, nthreads,\n    ExecR2R{real2hermitian, forward});\n  }\n\ntemplate<typename T> void r2r_separable_hartley(const shape_t &shape,\n  const stride_t &stride_in, const stride_t &stride_out, const shape_t &axes,\n  const T *data_in, T *data_out, T fct, size_t nthreads=1)\n  {\n  if (util::prod(shape)==0) return;\n  util::sanity_check(shape, stride_in, stride_out, data_in==data_out, axes);\n  cndarr<T> ain(data_in, shape, stride_in);\n  ndarr<T> aout(data_out, shape, stride_out);\n  general_nd<pocketfft_r<T>>(ain, aout, axes, fct, nthreads, ExecHartley{},\n    false);\n  
}\n\ntemplate<typename T> void r2r_genuine_hartley(const shape_t &shape,\n  const stride_t &stride_in, const stride_t &stride_out, const shape_t &axes,\n  const T *data_in, T *data_out, T fct, size_t nthreads=1)\n  {\n  if (util::prod(shape)==0) return;\n  if (axes.size()==1)\n    return r2r_separable_hartley(shape, stride_in, stride_out, axes, data_in,\n      data_out, fct, nthreads);\n  util::sanity_check(shape, stride_in, stride_out, data_in==data_out, axes);\n  shape_t tshp(shape);\n  tshp[axes.back()] = tshp[axes.back()]/2+1;\n  arr<std::complex<T>> tdata(util::prod(tshp));\n  stride_t tstride(shape.size());\n  tstride.back()=sizeof(std::complex<T>);\n  for (size_t i=tstride.size()-1; i>0; --i)\n    tstride[i-1]=tstride[i]*ptrdiff_t(tshp[i]);\n  r2c(shape, stride_in, tstride, axes, true, data_in, tdata.data(), fct, nthreads);\n  cndarr<cmplx<T>> atmp(tdata.data(), tshp, tstride);\n  ndarr<T> aout(data_out, shape, stride_out);\n  simple_iter iin(atmp);\n  rev_iter iout(aout, axes);\n  while(iin.remaining()>0)\n    {\n    auto v = atmp[iin.ofs()];\n    aout[iout.ofs()] = v.r+v.i;\n    aout[iout.rev_ofs()] = v.r-v.i;\n    iin.advance(); iout.advance();\n    }\n  }\n\n} // namespace detail\n\nusing detail::FORWARD;\nusing detail::BACKWARD;\nusing detail::shape_t;\nusing detail::stride_t;\nusing detail::c2c;\nusing detail::c2r;\nusing detail::r2c;\nusing detail::r2r_fftpack;\nusing detail::r2r_separable_hartley;\nusing detail::r2r_genuine_hartley;\nusing detail::dct;\nusing detail::dst;\n\n} // namespace pocketfft\n\n#undef POCKETFFT_NOINLINE\n#undef POCKETFFT_RESTRICT\n\n#endif // POCKETFFT_HDRONLY_H\n"
  },
  {
    "path": "packages/nx/vendor/ocaml-pocketfft/pocketfft.ml",
    "content": "(** PocketFFT bindings.\n\n    These externals release the OCaml runtime lock and may raise\n    [Failure] on invalid arguments, so they must not be marked [@@noalloc]. *)\n\nexternal c2c_f32 :\n     shape:int array\n  -> stride_in:int array\n  -> stride_out:int array\n  -> axes:int array\n  -> forward:bool\n  -> fct:float\n  -> data_in:(Complex.t, Bigarray.complex32_elt, Bigarray.c_layout) Bigarray.Array1.t\n  -> data_out:(Complex.t, Bigarray.complex32_elt, Bigarray.c_layout) Bigarray.Array1.t\n  -> nthreads:int\n  -> unit = \"caml_pocketfft_c2c_f32_bytecode\" \"caml_pocketfft_c2c_f32\"\n\nexternal r2c_f32 :\n     shape_in:int array\n  -> stride_in:int array\n  -> stride_out:int array\n  -> axes:int array\n  -> forward:bool\n  -> fct:float\n  -> data_in:(float, Bigarray.float32_elt, Bigarray.c_layout) Bigarray.Array1.t\n  -> data_out:(Complex.t, Bigarray.complex32_elt, Bigarray.c_layout) Bigarray.Array1.t\n  -> nthreads:int\n  -> unit = \"caml_pocketfft_r2c_f32_bytecode\" \"caml_pocketfft_r2c_f32\"\n\nexternal c2r_f32 :\n     shape_out:int array\n  -> stride_in:int array\n  -> stride_out:int array\n  -> axes:int array\n  -> forward:bool\n  -> fct:float\n  -> data_in:(Complex.t, Bigarray.complex32_elt, Bigarray.c_layout) Bigarray.Array1.t\n  -> data_out:(float, Bigarray.float32_elt, Bigarray.c_layout) Bigarray.Array1.t\n  -> nthreads:int\n  -> unit = \"caml_pocketfft_c2r_f32_bytecode\" \"caml_pocketfft_c2r_f32\"\n\nexternal dct_f32 :\n     shape:int array\n  -> stride_in:int array\n  -> stride_out:int array\n  -> axes:int array\n  -> dct_type:int\n  -> ortho:bool\n  -> fct:float\n  -> data_in:(float, Bigarray.float32_elt, Bigarray.c_layout) Bigarray.Array1.t\n  -> data_out:(float, Bigarray.float32_elt, Bigarray.c_layout) Bigarray.Array1.t\n  -> nthreads:int\n  -> unit = \"caml_pocketfft_dct_f32_bytecode\" \"caml_pocketfft_dct_f32\"\n\nexternal dst_f32 :\n     shape:int array\n  -> stride_in:int array\n  -> stride_out:int array\n  -> axes:int array\n  -> dct_type:int\n  -> ortho:bool\n  -> fct:float\n  -> data_in:(float, Bigarray.float32_elt, 
Bigarray.c_layout) Bigarray.Array1.t\n  -> data_out:(float, Bigarray.float32_elt, Bigarray.c_layout) Bigarray.Array1.t\n  -> nthreads:int\n  -> unit = \"caml_pocketfft_dst_f32_bytecode\" \"caml_pocketfft_dst_f32\"\n\nexternal c2c_f64 :\n     shape:int array\n  -> stride_in:int array\n  -> stride_out:int array\n  -> axes:int array\n  -> forward:bool\n  -> fct:float\n  -> data_in:(Complex.t, Bigarray.complex64_elt, Bigarray.c_layout) Bigarray.Array1.t\n  -> data_out:(Complex.t, Bigarray.complex64_elt, Bigarray.c_layout) Bigarray.Array1.t\n  -> nthreads:int\n  -> unit = \"caml_pocketfft_c2c_f64_bytecode\" \"caml_pocketfft_c2c_f64\"\n\nexternal r2c_f64 :\n     shape_in:int array\n  -> stride_in:int array\n  -> stride_out:int array\n  -> axes:int array\n  -> forward:bool\n  -> fct:float\n  -> data_in:(float, Bigarray.float64_elt, Bigarray.c_layout) Bigarray.Array1.t\n  -> data_out:(Complex.t, Bigarray.complex64_elt, Bigarray.c_layout) Bigarray.Array1.t\n  -> nthreads:int\n  -> unit = \"caml_pocketfft_r2c_f64_bytecode\" \"caml_pocketfft_r2c_f64\"\n\nexternal c2r_f64 :\n     shape_out:int array\n  -> stride_in:int array\n  -> stride_out:int array\n  -> axes:int array\n  -> forward:bool\n  -> fct:float\n  -> data_in:(Complex.t, Bigarray.complex64_elt, Bigarray.c_layout) Bigarray.Array1.t\n  -> data_out:(float, Bigarray.float64_elt, Bigarray.c_layout) Bigarray.Array1.t\n  -> nthreads:int\n  -> unit = \"caml_pocketfft_c2r_f64_bytecode\" \"caml_pocketfft_c2r_f64\"\n\nexternal dct_f64 :\n     shape:int array\n  -> stride_in:int array\n  -> stride_out:int array\n  -> axes:int array\n  -> dct_type:int\n  -> ortho:bool\n  -> fct:float\n  -> data_in:(float, Bigarray.float64_elt, Bigarray.c_layout) Bigarray.Array1.t\n  -> data_out:(float, Bigarray.float64_elt, Bigarray.c_layout) Bigarray.Array1.t\n  -> nthreads:int\n  -> unit = \"caml_pocketfft_dct_f64_bytecode\" \"caml_pocketfft_dct_f64\"\n\nexternal dst_f64 :\n     
shape:int array\n  -> stride_in:int array\n  -> stride_out:int array\n  -> axes:int array\n  -> dct_type:int\n  -> ortho:bool\n  -> fct:float\n  -> data_in:(float, Bigarray.float64_elt, Bigarray.c_layout) Bigarray.Array1.t\n  -> data_out:(float, Bigarray.float64_elt, Bigarray.c_layout) Bigarray.Array1.t\n  -> nthreads:int\n  -> unit = \"caml_pocketfft_dst_f64_bytecode\" \"caml_pocketfft_dst_f64\"\n"
  },
  {
    "path": "packages/nx/vendor/ocaml-pocketfft/pocketfft_stubs.cpp",
    "content": "/*****************************************************************************/\n/*                                                                           */\n/*                                                                           */\n/*  OCaml PocketFFT Bindings                                                 */\n/*                                                                           */\n/*                                                                           */\n/*  Licensed under the Apache License, Version 2.0 (the \"License\");          */\n/*  you may not use this file except in compliance with the License.         */\n/*  You may obtain a copy of the License at                                  */\n/*                                                                           */\n/*    http://www.apache.org/licenses/LICENSE-2.0                             */\n/*                                                                           */\n/*  Unless required by applicable law or agreed to in writing, software      */\n/*  distributed under the License is distributed on an \"AS IS\" BASIS,        */\n/*  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. */\n/*  See the License for the specific language governing permissions and      */\n/*  limitations under the License.                                           
*/\n/*                                                                           */\n/*****************************************************************************/\n\n#include \"pocketfft/pocketfft_hdronly.h\"\n\nextern \"C\" {\n#include <caml/bigarray.h>\n#include <caml/fail.h>\n#include <caml/mlvalues.h>\n#include <caml/threads.h>\n}\n\n#if defined(__GNUC__) || defined(__clang__)\n#define POCKETFFT_RESTRICT __restrict__\n#define POCKETFFT_INLINE inline __attribute__((used))\n#define POCKETFFT_HOT __attribute__((hot))\n#define POCKETFFT_CONST __attribute__((const))\n#define POCKETFFT_PURE __attribute__((pure))\n#elif defined(_MSC_VER)\n#define POCKETFFT_RESTRICT __restrict\n#define POCKETFFT_INLINE __forceinline\n#define POCKETFFT_HOT\n#define POCKETFFT_CONST\n#define POCKETFFT_PURE\n#else\n#define POCKETFFT_RESTRICT\n#define POCKETFFT_INLINE inline\n#define POCKETFFT_HOT\n#define POCKETFFT_CONST\n#define POCKETFFT_PURE\n#endif\n\n#if defined(__GNUC__) || defined(__clang__)\n#define ASSUME_ALIGNED(ptr, alignment) __builtin_assume_aligned(ptr, alignment)\n#elif defined(__cpp_lib_assume_aligned)\n#define ASSUME_ALIGNED(ptr, alignment) std::assume_aligned<alignment>(ptr)\n#else\n#define ASSUME_ALIGNED(ptr, alignment) (ptr)\n#endif\n\n#define EXTRACT_SHAPE_STACK(v_shape, shape_var)                                \\\n    size_t shape_var##_len = Wosize_val(v_shape);                              \\\n    std::vector<size_t> shape_var##_data(shape_var##_len);                     \\\n    for (size_t i = 0; i < shape_var##_len; i++) {                             \\\n        shape_var##_data[i] = Long_val(Field(v_shape, i));                     \\\n    }                                                                          \\\n    pocketfft::shape_t shape_var(shape_var##_data.begin(),                     \\\n                                 shape_var##_data.end());\n\n#define EXTRACT_STRIDE_STACK(v_stride, stride_var)                             \\\n    size_t 
stride_var##_len = Wosize_val(v_stride);                            \\\n    std::vector<ptrdiff_t> stride_var##_data(stride_var##_len);                \\\n    for (size_t i = 0; i < stride_var##_len; i++) {                            \\\n        stride_var##_data[i] = Long_val(Field(v_stride, i));                   \\\n    }                                                                          \\\n    pocketfft::stride_t stride_var(stride_var##_data.begin(),                  \\\n                                   stride_var##_data.end());\n\nextern \"C\" {\n\n// Float32 Complex-to-Complex FFT\nPOCKETFFT_HOT POCKETFFT_INLINE value\ncaml_pocketfft_c2c_f32(value v_shape, value v_stride_in, value v_stride_out,\n                       value v_axes, value v_forward, value v_fct,\n                       value v_data_in, value v_data_out, value v_nthreads) {\n    try {\n        EXTRACT_SHAPE_STACK(v_shape, shape);\n        EXTRACT_STRIDE_STACK(v_stride_in, stride_in);\n        EXTRACT_STRIDE_STACK(v_stride_out, stride_out);\n        EXTRACT_SHAPE_STACK(v_axes, axes);\n\n        bool forward = Bool_val(v_forward);\n        float fct = Double_val(v_fct);\n        size_t nthreads = Long_val(v_nthreads);\n\n        auto* POCKETFFT_RESTRICT data_in = static_cast<std::complex<float>*>(\n            ASSUME_ALIGNED(Caml_ba_data_val(v_data_in), 32));\n        auto* POCKETFFT_RESTRICT data_out = static_cast<std::complex<float>*>(\n            ASSUME_ALIGNED(Caml_ba_data_val(v_data_out), 32));\n\n        caml_release_runtime_system();\n        pocketfft::c2c(shape, stride_in, stride_out, axes, forward, data_in,\n                       data_out, fct, nthreads);\n        caml_acquire_runtime_system();\n\n    } catch (const std::exception& e) {\n        caml_acquire_runtime_system();\n        caml_failwith(e.what());\n    }\n    return Val_unit;\n}\n\nvalue caml_pocketfft_c2c_f32_bytecode(value* argv, int argn) {\n    return caml_pocketfft_c2c_f32(argv[0], argv[1], argv[2], argv[3], 
argv[4],\n                                  argv[5], argv[6], argv[7], argv[8]);\n}\n\n// Float32 Real-to-Complex FFT\nPOCKETFFT_HOT POCKETFFT_INLINE value\ncaml_pocketfft_r2c_f32(value v_shape_in, value v_stride_in, value v_stride_out,\n                       value v_axes, value v_forward, value v_fct,\n                       value v_data_in, value v_data_out, value v_nthreads) {\n    try {\n        EXTRACT_SHAPE_STACK(v_shape_in, shape_in);\n        EXTRACT_STRIDE_STACK(v_stride_in, stride_in);\n        EXTRACT_STRIDE_STACK(v_stride_out, stride_out);\n        EXTRACT_SHAPE_STACK(v_axes, axes);\n\n        bool forward = Bool_val(v_forward);\n        float fct = Double_val(v_fct);\n        size_t nthreads = Long_val(v_nthreads);\n\n        auto* POCKETFFT_RESTRICT data_in = static_cast<float*>(\n            ASSUME_ALIGNED(Caml_ba_data_val(v_data_in), 32));\n        auto* POCKETFFT_RESTRICT data_out = static_cast<std::complex<float>*>(\n            ASSUME_ALIGNED(Caml_ba_data_val(v_data_out), 32));\n\n        caml_release_runtime_system();\n        pocketfft::r2c(shape_in, stride_in, stride_out, axes, forward, data_in,\n                       data_out, fct, nthreads);\n        caml_acquire_runtime_system();\n\n    } catch (const std::exception& e) {\n        caml_acquire_runtime_system();\n        caml_failwith(e.what());\n    }\n    return Val_unit;\n}\n\nvalue caml_pocketfft_r2c_f32_bytecode(value* argv, int argn) {\n    return caml_pocketfft_r2c_f32(argv[0], argv[1], argv[2], argv[3], argv[4],\n                                  argv[5], argv[6], argv[7], argv[8]);\n}\n\n// Float32 Complex-to-Real FFT\nPOCKETFFT_HOT POCKETFFT_INLINE value\ncaml_pocketfft_c2r_f32(value v_shape_out, value v_stride_in, value v_stride_out,\n                       value v_axes, value v_forward, value v_fct,\n                       value v_data_in, value v_data_out, value v_nthreads) {\n    try {\n        EXTRACT_SHAPE_STACK(v_shape_out, shape_out);\n        
EXTRACT_STRIDE_STACK(v_stride_in, stride_in);\n        EXTRACT_STRIDE_STACK(v_stride_out, stride_out);\n        EXTRACT_SHAPE_STACK(v_axes, axes);\n\n        bool forward = Bool_val(v_forward);\n        float fct = Double_val(v_fct);\n        size_t nthreads = Long_val(v_nthreads);\n\n        auto* POCKETFFT_RESTRICT data_in = static_cast<std::complex<float>*>(\n            ASSUME_ALIGNED(Caml_ba_data_val(v_data_in), 32));\n        auto* POCKETFFT_RESTRICT data_out = static_cast<float*>(\n            ASSUME_ALIGNED(Caml_ba_data_val(v_data_out), 32));\n\n        caml_release_runtime_system();\n        pocketfft::c2r(shape_out, stride_in, stride_out, axes, forward, data_in,\n                       data_out, fct, nthreads);\n        caml_acquire_runtime_system();\n\n    } catch (const std::exception& e) {\n        caml_acquire_runtime_system();\n        caml_failwith(e.what());\n    }\n    return Val_unit;\n}\n\nvalue caml_pocketfft_c2r_f32_bytecode(value* argv, int argn) {\n    return caml_pocketfft_c2r_f32(argv[0], argv[1], argv[2], argv[3], argv[4],\n                                  argv[5], argv[6], argv[7], argv[8]);\n}\n\n// Float32 DCT\nPOCKETFFT_HOT POCKETFFT_INLINE value caml_pocketfft_dct_f32(\n    value v_shape, value v_stride_in, value v_stride_out, value v_axes,\n    value v_dct_type, value v_ortho, value v_fct, value v_data_in,\n    value v_data_out, value v_nthreads) {\n    try {\n        EXTRACT_SHAPE_STACK(v_shape, shape);\n        EXTRACT_STRIDE_STACK(v_stride_in, stride_in);\n        EXTRACT_STRIDE_STACK(v_stride_out, stride_out);\n        EXTRACT_SHAPE_STACK(v_axes, axes);\n\n        int dct_type = Long_val(v_dct_type);\n        bool ortho = Bool_val(v_ortho);\n        float fct = Double_val(v_fct);\n        size_t nthreads = Long_val(v_nthreads);\n\n        auto* POCKETFFT_RESTRICT data_in = static_cast<float*>(\n            ASSUME_ALIGNED(Caml_ba_data_val(v_data_in), 32));\n        auto* POCKETFFT_RESTRICT data_out = static_cast<float*>(\n       
     ASSUME_ALIGNED(Caml_ba_data_val(v_data_out), 32));\n\n        caml_release_runtime_system();\n        pocketfft::dct(shape, stride_in, stride_out, axes, dct_type, data_in,\n                       data_out, fct, ortho, nthreads);\n        caml_acquire_runtime_system();\n\n    } catch (const std::exception& e) {\n        caml_acquire_runtime_system();\n        caml_failwith(e.what());\n    }\n    return Val_unit;\n}\n\nvalue caml_pocketfft_dct_f32_bytecode(value* argv, int argn) {\n    return caml_pocketfft_dct_f32(argv[0], argv[1], argv[2], argv[3], argv[4],\n                                  argv[5], argv[6], argv[7], argv[8], argv[9]);\n}\n\n// Float32 DST\nPOCKETFFT_HOT POCKETFFT_INLINE value caml_pocketfft_dst_f32(\n    value v_shape, value v_stride_in, value v_stride_out, value v_axes,\n    value v_dct_type, value v_ortho, value v_fct, value v_data_in,\n    value v_data_out, value v_nthreads) {\n    try {\n        EXTRACT_SHAPE_STACK(v_shape, shape);\n        EXTRACT_STRIDE_STACK(v_stride_in, stride_in);\n        EXTRACT_STRIDE_STACK(v_stride_out, stride_out);\n        EXTRACT_SHAPE_STACK(v_axes, axes);\n\n        int dct_type = Long_val(v_dct_type);\n        bool ortho = Bool_val(v_ortho);\n        float fct = Double_val(v_fct);\n        size_t nthreads = Long_val(v_nthreads);\n\n        auto* POCKETFFT_RESTRICT data_in = static_cast<float*>(\n            ASSUME_ALIGNED(Caml_ba_data_val(v_data_in), 32));\n        auto* POCKETFFT_RESTRICT data_out = static_cast<float*>(\n            ASSUME_ALIGNED(Caml_ba_data_val(v_data_out), 32));\n\n        caml_release_runtime_system();\n        pocketfft::dst(shape, stride_in, stride_out, axes, dct_type, data_in,\n                       data_out, fct, ortho, nthreads);\n        caml_acquire_runtime_system();\n\n    } catch (const std::exception& e) {\n        caml_acquire_runtime_system();\n        caml_failwith(e.what());\n    }\n    return Val_unit;\n}\n\nvalue caml_pocketfft_dst_f32_bytecode(value* argv, int argn) 
{\n    return caml_pocketfft_dst_f32(argv[0], argv[1], argv[2], argv[3], argv[4],\n                                  argv[5], argv[6], argv[7], argv[8], argv[9]);\n}\n\n// Float64 Complex-to-Complex FFT\nPOCKETFFT_HOT POCKETFFT_INLINE value\ncaml_pocketfft_c2c_f64(value v_shape, value v_stride_in, value v_stride_out,\n                       value v_axes, value v_forward, value v_fct,\n                       value v_data_in, value v_data_out, value v_nthreads) {\n    try {\n        EXTRACT_SHAPE_STACK(v_shape, shape);\n        EXTRACT_STRIDE_STACK(v_stride_in, stride_in);\n        EXTRACT_STRIDE_STACK(v_stride_out, stride_out);\n        EXTRACT_SHAPE_STACK(v_axes, axes);\n\n        bool forward = Bool_val(v_forward);\n        double fct = Double_val(v_fct);\n        size_t nthreads = Long_val(v_nthreads);\n\n        auto* POCKETFFT_RESTRICT data_in = static_cast<std::complex<double>*>(\n            ASSUME_ALIGNED(Caml_ba_data_val(v_data_in), 32));\n        auto* POCKETFFT_RESTRICT data_out = static_cast<std::complex<double>*>(\n            ASSUME_ALIGNED(Caml_ba_data_val(v_data_out), 32));\n\n        caml_release_runtime_system();\n        pocketfft::c2c(shape, stride_in, stride_out, axes, forward, data_in,\n                       data_out, fct, nthreads);\n        caml_acquire_runtime_system();\n\n    } catch (const std::exception& e) {\n        caml_acquire_runtime_system();\n        caml_failwith(e.what());\n    }\n    return Val_unit;\n}\n\nvalue caml_pocketfft_c2c_f64_bytecode(value* argv, int argn) {\n    return caml_pocketfft_c2c_f64(argv[0], argv[1], argv[2], argv[3], argv[4],\n                                  argv[5], argv[6], argv[7], argv[8]);\n}\n\n// Float64 Real-to-Complex FFT\nPOCKETFFT_HOT POCKETFFT_INLINE value\ncaml_pocketfft_r2c_f64(value v_shape_in, value v_stride_in, value v_stride_out,\n                       value v_axes, value v_forward, value v_fct,\n                       value v_data_in, value v_data_out, value v_nthreads) {\n    try {\n  
      EXTRACT_SHAPE_STACK(v_shape_in, shape_in);\n        EXTRACT_STRIDE_STACK(v_stride_in, stride_in);\n        EXTRACT_STRIDE_STACK(v_stride_out, stride_out);\n        EXTRACT_SHAPE_STACK(v_axes, axes);\n\n        bool forward = Bool_val(v_forward);\n        double fct = Double_val(v_fct);\n        size_t nthreads = Long_val(v_nthreads);\n\n        auto* POCKETFFT_RESTRICT data_in = static_cast<double*>(\n            ASSUME_ALIGNED(Caml_ba_data_val(v_data_in), 32));\n        auto* POCKETFFT_RESTRICT data_out = static_cast<std::complex<double>*>(\n            ASSUME_ALIGNED(Caml_ba_data_val(v_data_out), 32));\n\n        caml_release_runtime_system();\n        pocketfft::r2c(shape_in, stride_in, stride_out, axes, forward, data_in,\n                       data_out, fct, nthreads);\n        caml_acquire_runtime_system();\n\n    } catch (const std::exception& e) {\n        caml_acquire_runtime_system();\n        caml_failwith(e.what());\n    }\n    return Val_unit;\n}\n\nvalue caml_pocketfft_r2c_f64_bytecode(value* argv, int argn) {\n    return caml_pocketfft_r2c_f64(argv[0], argv[1], argv[2], argv[3], argv[4],\n                                  argv[5], argv[6], argv[7], argv[8]);\n}\n\n// Float64 Complex-to-Real FFT\nPOCKETFFT_HOT POCKETFFT_INLINE value\ncaml_pocketfft_c2r_f64(value v_shape_out, value v_stride_in, value v_stride_out,\n                       value v_axes, value v_forward, value v_fct,\n                       value v_data_in, value v_data_out, value v_nthreads) {\n    try {\n        EXTRACT_SHAPE_STACK(v_shape_out, shape_out);\n        EXTRACT_STRIDE_STACK(v_stride_in, stride_in);\n        EXTRACT_STRIDE_STACK(v_stride_out, stride_out);\n        EXTRACT_SHAPE_STACK(v_axes, axes);\n\n        bool forward = Bool_val(v_forward);\n        double fct = Double_val(v_fct);\n        size_t nthreads = Long_val(v_nthreads);\n\n        auto* POCKETFFT_RESTRICT data_in = static_cast<std::complex<double>*>(\n            ASSUME_ALIGNED(Caml_ba_data_val(v_data_in), 
32));\n        auto* POCKETFFT_RESTRICT data_out = static_cast<double*>(\n            ASSUME_ALIGNED(Caml_ba_data_val(v_data_out), 32));\n\n        caml_release_runtime_system();\n        pocketfft::c2r(shape_out, stride_in, stride_out, axes, forward, data_in,\n                       data_out, fct, nthreads);\n        caml_acquire_runtime_system();\n\n    } catch (const std::exception& e) {\n        caml_acquire_runtime_system();\n        caml_failwith(e.what());\n    }\n    return Val_unit;\n}\n\nvalue caml_pocketfft_c2r_f64_bytecode(value* argv, int argn) {\n    return caml_pocketfft_c2r_f64(argv[0], argv[1], argv[2], argv[3], argv[4],\n                                  argv[5], argv[6], argv[7], argv[8]);\n}\n\n// Float64 DCT\nPOCKETFFT_HOT POCKETFFT_INLINE value caml_pocketfft_dct_f64(\n    value v_shape, value v_stride_in, value v_stride_out, value v_axes,\n    value v_dct_type, value v_ortho, value v_fct, value v_data_in,\n    value v_data_out, value v_nthreads) {\n    try {\n        EXTRACT_SHAPE_STACK(v_shape, shape);\n        EXTRACT_STRIDE_STACK(v_stride_in, stride_in);\n        EXTRACT_STRIDE_STACK(v_stride_out, stride_out);\n        EXTRACT_SHAPE_STACK(v_axes, axes);\n\n        int dct_type = Long_val(v_dct_type);\n        bool ortho = Bool_val(v_ortho);\n        double fct = Double_val(v_fct);\n        size_t nthreads = Long_val(v_nthreads);\n\n        auto* POCKETFFT_RESTRICT data_in = static_cast<double*>(\n            ASSUME_ALIGNED(Caml_ba_data_val(v_data_in), 32));\n        auto* POCKETFFT_RESTRICT data_out = static_cast<double*>(\n            ASSUME_ALIGNED(Caml_ba_data_val(v_data_out), 32));\n\n        caml_release_runtime_system();\n        pocketfft::dct(shape, stride_in, stride_out, axes, dct_type, data_in,\n                       data_out, fct, ortho, nthreads);\n        caml_acquire_runtime_system();\n\n    } catch (const std::exception& e) {\n        caml_acquire_runtime_system();\n        caml_failwith(e.what());\n    }\n    return 
Val_unit;\n}\n\nvalue caml_pocketfft_dct_f64_bytecode(value* argv, int argn) {\n    return caml_pocketfft_dct_f64(argv[0], argv[1], argv[2], argv[3], argv[4],\n                                  argv[5], argv[6], argv[7], argv[8], argv[9]);\n}\n\n// Float64 DST\nPOCKETFFT_HOT POCKETFFT_INLINE value caml_pocketfft_dst_f64(\n    value v_shape, value v_stride_in, value v_stride_out, value v_axes,\n    value v_dct_type, value v_ortho, value v_fct, value v_data_in,\n    value v_data_out, value v_nthreads) {\n    try {\n        EXTRACT_SHAPE_STACK(v_shape, shape);\n        EXTRACT_STRIDE_STACK(v_stride_in, stride_in);\n        EXTRACT_STRIDE_STACK(v_stride_out, stride_out);\n        EXTRACT_SHAPE_STACK(v_axes, axes);\n\n        int dct_type = Long_val(v_dct_type);\n        bool ortho = Bool_val(v_ortho);\n        double fct = Double_val(v_fct);\n        size_t nthreads = Long_val(v_nthreads);\n\n        auto* POCKETFFT_RESTRICT data_in = static_cast<double*>(\n            ASSUME_ALIGNED(Caml_ba_data_val(v_data_in), 32));\n        auto* POCKETFFT_RESTRICT data_out = static_cast<double*>(\n            ASSUME_ALIGNED(Caml_ba_data_val(v_data_out), 32));\n\n        caml_release_runtime_system();\n        pocketfft::dst(shape, stride_in, stride_out, axes, dct_type, data_in,\n                       data_out, fct, ortho, nthreads);\n        caml_acquire_runtime_system();\n\n    } catch (const std::exception& e) {\n        caml_acquire_runtime_system();\n        caml_failwith(e.what());\n    }\n    return Val_unit;\n}\n\nvalue caml_pocketfft_dst_f64_bytecode(value* argv, int argn) {\n    return caml_pocketfft_dst_f64(argv[0], argv[1], argv[2], argv[3], argv[4],\n                                  argv[5], argv[6], argv[7], argv[8], argv[9]);\n}\n}\n"
  },
  {
    "path": "packages/nx/vendor/stb_image/dune",
    "content": "(library\n (name stb_image)\n (public_name nx.io.stb_image)\n (libraries bigarray)\n (foreign_stubs\n  (language c)\n  (names ml_stb_image)))\n"
  },
  {
    "path": "packages/nx/vendor/stb_image/ml_stb_image.c",
    "content": "#include <assert.h>\n#include <stdio.h>\n#include <caml/mlvalues.h>\n#include <caml/memory.h>\n#include <caml/alloc.h>\n#include <caml/bigarray.h>\n\n#define STB_IMAGE_IMPLEMENTATION\n#include \"stb_image.h\"\n\nstatic int Channels_val(value channel)\n{\n  CAMLparam1(channel);\n  int ret = 0;\n  if (channel != Val_unit)\n    ret = Long_val(Field(channel, 0));\n  CAMLreturnT(int, ret);\n}\n\nstatic value return_image(void *data, int ty, int x, int y, int n)\n{\n  CAMLparam0();\n  CAMLlocal3(ret, tup, ba);\n\n  /* variadic dims must be intnat, not int */\n  ba = caml_ba_alloc_dims(ty | CAML_BA_C_LAYOUT, 1, data, (intnat)(x * y * n));\n\n  tup = caml_alloc(6, 0);\n  Store_field(tup, 0, Val_long(x));\n  Store_field(tup, 1, Val_long(y));\n  Store_field(tup, 2, Val_long(n));\n  Store_field(tup, 3, Val_long(0));\n  Store_field(tup, 4, Val_long(x * n));\n  Store_field(tup, 5, ba);\n\n  /* Result.Ok tup */\n  ret = caml_alloc(1, 0);\n  Store_field(ret, 0, tup);\n\n  CAMLreturn(ret);\n}\n\nstatic value return_failure(void)\n{\n  CAMLparam0();\n  CAMLlocal3(ret, str, err);\n\n  str = caml_copy_string(stbi_failure_reason());\n\n  /* `Msg \"str\" */\n  err = caml_alloc(2, 0);\n  Store_field(err, 0, Val_long(3854881));\n  Store_field(err, 1, str);\n\n  /* Result.Error (`Msg \"str\") */\n  ret = caml_alloc(1, 1);\n  Store_field(ret, 0, err);\n\n  CAMLreturn(ret);\n}\n\nCAMLprim value ml_stbi_load(value channels, value filename)\n{\n  CAMLparam2(channels, filename);\n  CAMLlocal1(ret);\n\n  int x, y, n, n0;\n  n0 = Channels_val(channels);\n  unsigned char* image_data = stbi_load(String_val(filename), &x, &y, &n, n0);\n  if (n0 != 0) n = n0;\n\n  if (image_data)\n    ret = return_image(image_data, CAML_BA_UINT8, x, y, n);\n  else\n    ret = return_failure();\n\n  CAMLreturn(ret);\n}\n\nCAMLprim value ml_stbi_loadf(value channels, value filename)\n{\n  CAMLparam2(channels, filename);\n  CAMLlocal1(ret);\n\n  int x, y, n, n0;\n  n0 = Channels_val(channels);\n  float* image_data =\n    stbi_loadf(String_val(filename), &x, &y, &n, 
n0);\n  if (n0 != 0) n = n0;\n\n  if (image_data)\n    ret = return_image(image_data, CAML_BA_FLOAT32, x, y, n);\n  else\n    ret = return_failure();\n\n  CAMLreturn(ret);\n}\n\nCAMLprim value ml_stbi_load_mem(value channels, value mem)\n{\n  CAMLparam2(channels, mem);\n  CAMLlocal1(ret);\n\n  int x, y, n, n0;\n  n0 = Channels_val(channels);\n  unsigned char* image_data =\n    stbi_load_from_memory(Caml_ba_data_val(mem),\n        caml_ba_byte_size(Caml_ba_array_val(mem)),\n        &x, &y, &n, n0);\n  if (n0 != 0) n = n0;\n\n  if (image_data)\n    ret = return_image(image_data, CAML_BA_UINT8, x, y, n);\n  else\n    ret = return_failure();\n\n  CAMLreturn(ret);\n}\n\nCAMLprim value ml_stbi_loadf_mem(value channels, value mem)\n{\n  CAMLparam2(channels, mem);\n  CAMLlocal1(ret);\n\n  int x, y, n, n0;\n  n0 = Channels_val(channels);\n  float* image_data =\n    stbi_loadf_from_memory(Caml_ba_data_val(mem),\n        caml_ba_byte_size(Caml_ba_array_val(mem)),\n        &x, &y, &n, n0);\n  if (n0 != 0) n = n0;\n\n  if (image_data)\n    ret = return_image(image_data, CAML_BA_FLOAT32, x, y, n);\n  else\n    ret = return_failure();\n\n  CAMLreturn(ret);\n}\n\nCAMLprim value ml_stbi_image_free(value ba)\n{\n  CAMLparam1(ba);\n  void *data = Caml_ba_data_val(ba);\n\n  assert (data);\n  stbi_image_free(data);\n  Caml_ba_data_val(ba) = NULL;\n\n  CAMLreturn(Val_unit);\n}\n\n/* 2x2 box filter of channel x for an n-channel pixel: average the\n   top-left, top-right, bottom-left and bottom-right samples. */\n#define POUT(x,n) pout[x] = (pin[x] + pin[n + x] + pin[w * n + x] + pin[w * n + n + x]) / 4\n#define POUTf(x,n) pout[x] = (pin[x] + pin[n + x] + pin[w * n + x] + pin[w * n + n + x]) / 4.0f\n\n/* Walk the output row by row; each output row consumes two input rows,\n   and pin/pout are (re)positioned at the start of every row, including\n   the first. */\n#define LOOP(w,h,n) \\\n  for (unsigned int y = 0, w2 = (w) / 2, h2 = (h) / 2; \\\n       y < h2; ++y, pin0 += 2 * sin, pout0 += sout) \\\n    for (unsigned int x = (pin = pin0, pout = pout0, 0); x < w2; ++x, pin += 2 * n, pout += n)\n\nCAMLprim value ml_stbi_mipmap(value img_in, value img_out)\n{\n  CAMLparam2(img_in, img_out);\n  unsigned char *pin = NULL, *pout = NULL,\n    *pin0 = Caml_ba_data_val(Field(img_in, 5)),\n    *pout0 = 
Caml_ba_data_val(Field(img_out, 5));\n  assert (pin0 && pout0);\n\n  pin0 += Long_val(Field(img_in, 3));\n  pout0 += Long_val(Field(img_out, 3));\n\n  unsigned int\n    sin = Long_val(Field(img_in, 4)),\n    sout = Long_val(Field(img_out, 4)),\n    w = Long_val(Field(img_in, 0)),\n    h = Long_val(Field(img_in, 1));\n\n  switch (Long_val(Field(img_in, 2))) {\n    case 1:\n      LOOP(w, h, 1) { POUT(0, 1); }\n      break;\n    case 2:\n      LOOP(w, h, 2) { POUT(0, 2); POUT(1, 2); }\n      break;\n    case 3:\n      LOOP(w, h, 3) { POUT(0, 3); POUT(1, 3); POUT(2, 3); }\n      break;\n    case 4:\n      LOOP(w, h, 4) { POUT(0, 4); POUT(1, 4); POUT(2, 4); POUT(3, 4); }\n      break;\n  }\n\n  CAMLreturn(Val_unit);\n}\n\nCAMLprim value ml_stbi_mipmapf(value img_in, value img_out)\n{\n  CAMLparam2(img_in, img_out);\n  float *pin = NULL, *pout = NULL,\n    *pin0 = Caml_ba_data_val(Field(img_in, 5)),\n    *pout0 = Caml_ba_data_val(Field(img_out, 5));\n  assert (pin0 && pout0);\n\n  pin0 += Long_val(Field(img_in, 3));\n  pout0 += Long_val(Field(img_out, 3));\n\n  unsigned int\n    sin = Long_val(Field(img_in, 4)),\n    sout = Long_val(Field(img_out, 4)),\n    w = Long_val(Field(img_in, 0)),\n    h = Long_val(Field(img_in, 1));\n\n  switch (Long_val(Field(img_in, 2))) {\n    case 1:\n      LOOP(w, h, 1) { POUTf(0, 1); }\n      break;\n    case 2:\n      LOOP(w, h, 2) { POUTf(0, 2); POUTf(1, 2); }\n      break;\n    case 3:\n      LOOP(w, h, 3) { POUTf(0, 3); POUTf(1, 3); POUTf(2, 3); }\n      break;\n    case 4:\n      LOOP(w, h, 4) { POUTf(0, 4); POUTf(1, 4); POUTf(2, 4); POUTf(3, 4); }\n      break;\n  }\n\n  CAMLreturn(Val_unit);\n}\n\nstatic void memswap(void *i0, void *i1, size_t count)\n{\n  unsigned char *p0 = i0, *p1 = i1;\n  for (size_t i = 0; i < count; ++i)\n  {\n    unsigned char tmp = p0[i];\n    p0[i] = p1[i];\n    p1[i] = tmp;\n  }\n}\n\nCAMLprim value ml_stbi_vflip(value img)\n{\n  CAMLparam1(img);\n  unsigned char *ptop = Caml_ba_data_val(Field(img, 5));\n  
assert (ptop);\n  ptop += Long_val(Field(img, 3));\n\n  unsigned int\n    w = Long_val(Field(img, 0)),\n    h = Long_val(Field(img, 1)),\n    n = Long_val(Field(img, 2)),\n    stride = Long_val(Field(img, 4)),\n    row = w * n;\n\n  unsigned char *pbot = ptop + (stride * h - stride);\n\n  /* Swap rows inward; stop at the midpoint or a second pass would undo the flip. */\n  for (unsigned int y = 0; y < h / 2; y++)\n  {\n    memswap(ptop, pbot, row);\n    ptop += stride;\n    pbot -= stride;\n  }\n\n  CAMLreturn(Val_unit);\n}\n\nCAMLprim value ml_stbi_vflipf(value img)\n{\n  CAMLparam1(img);\n  float *ptop = Caml_ba_data_val(Field(img, 5));\n  assert (ptop);\n  ptop += Long_val(Field(img, 3));\n\n  unsigned int\n    w = Long_val(Field(img, 0)),\n    h = Long_val(Field(img, 1)),\n    n = Long_val(Field(img, 2)),\n    stride = Long_val(Field(img, 4)),\n    row = w * n * sizeof(float);\n\n  float *pbot = ptop + (stride * h - stride);\n\n  for (unsigned int y = 0; y < h / 2; y++)\n  {\n    memswap(ptop, pbot, row);\n    ptop += stride;\n    pbot -= stride;\n  }\n\n  CAMLreturn(Val_unit);\n}\n\n// Based on Exponential blur, Jani Huhtanen, 2006,\n// and fontstash (https://github.com/memononen/fontstash), Mikko Mononen, 2014\n\n#define APREC 16\n#define ZPREC 7\n\n#define APPROX(alpha, reg, acc) \\\n  ((alpha * (((int)(reg) << ZPREC) - acc)) >> APREC)\n\n#define BLUR0(reg, acc) int acc = (int)(reg) << ZPREC\n\n#define BLUR(reg, acc) \\\n  do { \\\n    acc += APPROX(alpha, reg, acc); \\\n    reg = (unsigned char)(acc >> ZPREC); \\\n  } while (0)\n\n#define OUTERLOOP(var, ptr, bound, stride) \\\n  for (unsigned char *_limit = ptr + bound * stride, *var = ptr; var < _limit; var += stride)\n\n#define INNERLOOP(var, bound, stride, BODY) \\\n  do { \\\n    int var; \\\n    for (var = stride; var < bound * stride; var += stride) BODY; \\\n    for (var = (bound - 2) * stride; var >= 0; var -= stride) BODY; \\\n    for (var = stride; var < bound * stride; var += stride) BODY; \\\n    for (var = (bound - 2) * stride; var >= 0; var -= stride) BODY; 
\\\n  } while (0)\n\nstatic void expblur4(unsigned char* ptr, int w, int h, int stride, int alpha)\n{\n  OUTERLOOP(dst, ptr, h, stride)\n  {\n    BLUR0(dst[0], acc0);\n    BLUR0(dst[1], acc1);\n    BLUR0(dst[2], acc2);\n    BLUR0(dst[3], acc3);\n    INNERLOOP(x, w, 4,\n        {\n         BLUR(dst[x+0], acc0);\n         BLUR(dst[x+1], acc1);\n         BLUR(dst[x+2], acc2);\n         BLUR(dst[x+3], acc3);\n        });\n  }\n\n  OUTERLOOP(dst, ptr, w, 4)\n  {\n    BLUR0(dst[0], acc0);\n    BLUR0(dst[1], acc1);\n    BLUR0(dst[2], acc2);\n    BLUR0(dst[3], acc3);\n    INNERLOOP(y, h, stride,\n        {\n         BLUR(dst[y+0], acc0);\n         BLUR(dst[y+1], acc1);\n         BLUR(dst[y+2], acc2);\n         BLUR(dst[y+3], acc3);\n        });\n  }\n}\n\nstatic void expblur3(unsigned char* ptr, int w, int h, int stride, int alpha)\n{\n  OUTERLOOP(dst, ptr, h, stride)\n  {\n    BLUR0(dst[0], acc0);\n    BLUR0(dst[1], acc1);\n    BLUR0(dst[2], acc2);\n    INNERLOOP(x, w, 3,\n        {\n         BLUR(dst[x+0], acc0);\n         BLUR(dst[x+1], acc1);\n         BLUR(dst[x+2], acc2);\n        });\n  }\n\n  OUTERLOOP(dst, ptr, w, 3)\n  {\n    BLUR0(dst[0], acc0);\n    BLUR0(dst[1], acc1);\n    BLUR0(dst[2], acc2);\n    INNERLOOP(y, h, stride,\n        {\n         BLUR(dst[y+0], acc0);\n         BLUR(dst[y+1], acc1);\n         BLUR(dst[y+2], acc2);\n        });\n  }\n}\n\nstatic void expblur2(unsigned char* ptr, int w, int h, int stride, int alpha)\n{\n  OUTERLOOP(dst, ptr, h, stride)\n  {\n    BLUR0(dst[0], acc0);\n    BLUR0(dst[1], acc1);\n    INNERLOOP(x, w, 2,\n        {\n         BLUR(dst[x+0], acc0);\n         BLUR(dst[x+1], acc1);\n        });\n  }\n\n  OUTERLOOP(dst, ptr, w, 2)\n  {\n    BLUR0(dst[0], acc0);\n    BLUR0(dst[1], acc1);\n    INNERLOOP(y, h, stride,\n        {\n         BLUR(dst[y+0], acc0);\n         BLUR(dst[y+1], acc1);\n        });\n  }\n}\n\nstatic void expblur1(unsigned char* ptr, int w, int h, int stride, int alpha)\n{\n  OUTERLOOP(dst, ptr, h, 
stride)\n  {\n    BLUR0(dst[0], acc0);\n    INNERLOOP(x, w, 1,\n        {\n         BLUR(dst[x+0], acc0);\n        });\n  }\n\n  OUTERLOOP(dst, ptr, w, 1)\n  {\n    BLUR0(dst[0], acc0);\n    INNERLOOP(y, h, stride,\n        {\n         BLUR(dst[y+0], acc0);\n        });\n  }\n}\n\nstatic void expblur(unsigned char* ptr, int w, int h, int channels, int stride, float radius)\n{\n  int alpha;\n  float sigma;\n\n  if (radius < 0.01f) return;\n\n  // Calculate the alpha such that 90% of the kernel is within the radius\n  // (the kernel extends to infinity).\n  sigma = radius * 0.57735f; // 1 / sqrt(3)\n\n  // Improve blur quality by doing two passes:\n  // blur(sigma1) o blur(sigma2) = blur(sqrt(sqr(sigma1) + sqr(sigma2)))\n  sigma = sigma * 0.707106f; // 1 / sqrt(2)\n\n  alpha = (int)((1<<APREC) * (1.0f - expf(-2.3f / (sigma + 1.0f))));\n\n  switch (channels)\n  {\n    case 1: expblur1(ptr, w, h, stride, alpha); break;\n    case 2: expblur2(ptr, w, h, stride, alpha); break;\n    case 3: expblur3(ptr, w, h, stride, alpha); break;\n    case 4: expblur4(ptr, w, h, stride, alpha); break;\n    default: abort();\n  }\n}\n\nCAMLprim value ml_stbi_expblur(value img, value radius)\n{\n  CAMLparam2(img, radius);\n\n  unsigned char *ptr = Caml_ba_data_val(Field(img, 5));\n  assert (ptr);\n  ptr += Long_val(Field(img, 3));\n\n  unsigned int\n    w = Long_val(Field(img, 0)),\n    h = Long_val(Field(img, 1)),\n    n = Long_val(Field(img, 2)),\n    stride = Long_val(Field(img, 4));\n\n  expblur(ptr, w, h, n, stride, Double_val(radius));\n  CAMLreturn(Val_unit);\n}"
  },
  {
    "path": "packages/nx/vendor/stb_image/stb_image.h",
    "content": "/* stb_image - v2.30 - public domain image loader - http://nothings.org/stb\n                                  no warranty implied; use at your own risk\n\n   Do this:\n      #define STB_IMAGE_IMPLEMENTATION\n   before you include this file in *one* C or C++ file to create the implementation.\n\n   // i.e. it should look like this:\n   #include ...\n   #include ...\n   #include ...\n   #define STB_IMAGE_IMPLEMENTATION\n   #include \"stb_image.h\"\n\n   You can #define STBI_ASSERT(x) before the #include to avoid using assert.h.\n   And #define STBI_MALLOC, STBI_REALLOC, and STBI_FREE to avoid using malloc,realloc,free\n\n\n   QUICK NOTES:\n      Primarily of interest to game developers and other people who can\n          avoid problematic images and only need the trivial interface\n\n      JPEG baseline & progressive (12 bpc/arithmetic not supported, same as stock IJG lib)\n      PNG 1/2/4/8/16-bit-per-channel\n\n      TGA (not sure what subset, if a subset)\n      BMP non-1bpp, non-RLE\n      PSD (composited view only, no extra channels, 8/16 bit-per-channel)\n\n      GIF (*comp always reports as 4-channel)\n      HDR (radiance rgbE format)\n      PIC (Softimage PIC)\n      PNM (PPM and PGM binary only)\n\n      Animated GIF still needs a proper API, but here's one way to do it:\n          http://gist.github.com/urraka/685d9a6340b26b830d49\n\n      - decode from memory or through FILE (define STBI_NO_STDIO to remove code)\n      - decode from arbitrary I/O callbacks\n      - SIMD acceleration on x86/x64 (SSE2) and ARM (NEON)\n\n   Full documentation under \"DOCUMENTATION\" below.\n\n\nLICENSE\n\n  See end of file for license information.\n\nRECENT REVISION HISTORY:\n\n      2.30  (2024-05-31) avoid erroneous gcc warning\n      2.29  (2023-05-xx) optimizations\n      2.28  (2023-01-29) many error fixes, security errors, just tons of stuff\n      2.27  (2021-07-11) document stbi_info better, 16-bit PNM support, bug fixes\n      2.26  (2020-07-13) many 
minor fixes\n      2.25  (2020-02-02) fix warnings\n      2.24  (2020-02-02) fix warnings; thread-local failure_reason and flip_vertically\n      2.23  (2019-08-11) fix clang static analysis warning\n      2.22  (2019-03-04) gif fixes, fix warnings\n      2.21  (2019-02-25) fix typo in comment\n      2.20  (2019-02-07) support utf8 filenames in Windows; fix warnings and platform ifdefs\n      2.19  (2018-02-11) fix warning\n      2.18  (2018-01-30) fix warnings\n      2.17  (2018-01-29) bugfix, 1-bit BMP, 16-bitness query, fix warnings\n      2.16  (2017-07-23) all functions have 16-bit variants; optimizations; bugfixes\n      2.15  (2017-03-18) fix png-1,2,4; all Imagenet JPGs; no runtime SSE detection on GCC\n      2.14  (2017-03-03) remove deprecated STBI_JPEG_OLD; fixes for Imagenet JPGs\n      2.13  (2016-12-04) experimental 16-bit API, only for PNG so far; fixes\n      2.12  (2016-04-02) fix typo in 2.11 PSD fix that caused crashes\n      2.11  (2016-04-02) 16-bit PNGS; enable SSE2 in non-gcc x64\n                         RGB-format JPEG; remove white matting in PSD;\n                         allocate large structures on the stack;\n                         correct channel count for PNG & BMP\n      2.10  (2016-01-22) avoid warning introduced in 2.09\n      2.09  (2016-01-16) 16-bit TGA; comments in PNM files; STBI_REALLOC_SIZED\n\n   See end of file for full revision history.\n\n\n ============================    Contributors    =========================\n\n Image formats                          Extensions, features\n    Sean Barrett (jpeg, png, bmp)          Jetro Lauha (stbi_info)\n    Nicolas Schulz (hdr, psd)              Martin \"SpartanJ\" Golini (stbi_info)\n    Jonathan Dummer (tga)                  James \"moose2000\" Brown (iPhone PNG)\n    Jean-Marc Lienher (gif)                Ben \"Disch\" Wenger (io callbacks)\n    Tom Seddon (pic)                       Omar Cornut (1/2/4-bit PNG)\n    Thatcher Ulrich (psd)                  Nicolas Guillemot 
(vertical flip)\n    Ken Miller (pgm, ppm)                  Richard Mitton (16-bit PSD)\n    github:urraka (animated gif)           Junggon Kim (PNM comments)\n    Christopher Forseth (animated gif)     Daniel Gibson (16-bit TGA)\n                                           socks-the-fox (16-bit PNG)\n                                           Jeremy Sawicki (handle all ImageNet JPGs)\n Optimizations & bugfixes                  Mikhail Morozov (1-bit BMP)\n    Fabian \"ryg\" Giesen                    Anael Seghezzi (is-16-bit query)\n    Arseny Kapoulkine                      Simon Breuss (16-bit PNM)\n    John-Mark Allen\n    Carmelo J Fdez-Aguera\n\n Bug & warning fixes\n    Marc LeBlanc            David Woo          Guillaume George     Martins Mozeiko\n    Christpher Lloyd        Jerry Jansson      Joseph Thomson       Blazej Dariusz Roszkowski\n    Phil Jordan                                Dave Moore           Roy Eltham\n    Hayaki Saito            Nathan Reed        Won Chun\n    Luke Graham             Johan Duparc       Nick Verigakis       the Horde3D community\n    Thomas Ruf              Ronny Chevalier                         github:rlyeh\n    Janez Zemva             John Bartholomew   Michal Cichon        github:romigrou\n    Jonathan Blow           Ken Hamada         Tero Hanninen        github:svdijk\n    Eugene Golushkov        Laurent Gomila     Cort Stratton        github:snagar\n    Aruelien Pocheville     Sergio Gonzalez    Thibault Reuille     github:Zelex\n    Cass Everitt            Ryamond Barbiero                        github:grim210\n    Paul Du Bois            Engin Manap        Aldo Culquicondor    github:sammyhw\n    Philipp Wiesemann       Dale Weiler        Oriol Ferrer Mesia   github:phprus\n    Josh Tobin              Neil Bickford      Matthew Gregan       github:poppolopoppo\n    Julian Raschke          Gregory Mullen     Christian Floisand   github:darealshinji\n    Baldur Karlsson         Kevin Schmidt      JR Smith             
github:Michaelangel007\n                            Brad Weinberger    Matvey Cherevko      github:mosra\n    Luca Sas                Alexander Veselov  Zack Middleton       [reserved]\n    Ryan C. Gordon          [reserved]                              [reserved]\n                     DO NOT ADD YOUR NAME HERE\n\n                     Jacko Dirks\n\n  To add your name to the credits, pick a random blank space in the middle and fill it.\n  80% of merge conflicts on stb PRs are due to people adding their name at the end\n  of the credits.\n*/\n\n#ifndef STBI_INCLUDE_STB_IMAGE_H\n#define STBI_INCLUDE_STB_IMAGE_H\n\n// DOCUMENTATION\n//\n// Limitations:\n//    - no 12-bit-per-channel JPEG\n//    - no JPEGs with arithmetic coding\n//    - GIF always returns *comp=4\n//\n// Basic usage (see HDR discussion below for HDR usage):\n//    int x,y,n;\n//    unsigned char *data = stbi_load(filename, &x, &y, &n, 0);\n//    // ... process data if not NULL ...\n//    // ... x = width, y = height, n = # 8-bit components per pixel ...\n//    // ... replace '0' with '1'..'4' to force that many components per pixel\n//    // ... but 'n' will always be the number that it would have been if you said 0\n//    stbi_image_free(data);\n//\n// Standard parameters:\n//    int *x                 -- outputs image width in pixels\n//    int *y                 -- outputs image height in pixels\n//    int *channels_in_file  -- outputs # of image components in image file\n//    int desired_channels   -- if non-zero, # of image components requested in result\n//\n// The return value from an image loader is an 'unsigned char *' which points\n// to the pixel data, or NULL on an allocation failure or if the image is\n// corrupt or invalid. The pixel data consists of *y scanlines of *x pixels,\n// with each pixel consisting of N interleaved 8-bit components; the first\n// pixel pointed to is top-left-most in the image. 
There is no padding between\n// image scanlines or between pixels, regardless of format. The number of\n// components N is 'desired_channels' if desired_channels is non-zero, or\n// *channels_in_file otherwise. If desired_channels is non-zero,\n// *channels_in_file has the number of components that _would_ have been\n// output otherwise. E.g. if you set desired_channels to 4, you will always\n// get RGBA output, but you can check *channels_in_file to see if it's trivially\n// opaque because e.g. there were only 3 channels in the source image.\n//\n// An output image with N components has the following components interleaved\n// in this order in each pixel:\n//\n//     N=#comp     components\n//       1           grey\n//       2           grey, alpha\n//       3           red, green, blue\n//       4           red, green, blue, alpha\n//\n// If image loading fails for any reason, the return value will be NULL,\n// and *x, *y, *channels_in_file will be unchanged. The function\n// stbi_failure_reason() can be queried for an extremely brief, end-user\n// unfriendly explanation of why the load failed. Define STBI_NO_FAILURE_STRINGS\n// to avoid compiling these strings at all, and STBI_FAILURE_USERMSG to get slightly\n// more user-friendly ones.\n//\n// Paletted PNG, BMP, GIF, and PIC images are automatically depalettized.\n//\n// To query the width, height and component count of an image without having to\n// decode the full file, you can use the stbi_info family of functions:\n//\n//   int x,y,n,ok;\n//   ok = stbi_info(filename, &x, &y, &n);\n//   // returns ok=1 and sets x, y, n if image is a supported format,\n//   // 0 otherwise.\n//\n// Note that stb_image pervasively uses ints in its public API for sizes,\n// including sizes of memory buffers. This is now part of the API and thus\n// hard to change without causing breakage. 
As a result, the various image\n// loaders all have certain limits on image size; these differ somewhat\n// by format but generally boil down to either just under 2GB or just under\n// 1GB. When the decoded image would be larger than this, stb_image decoding\n// will fail.\n//\n// Additionally, stb_image will reject image files that have any of their\n// dimensions set to a larger value than the configurable STBI_MAX_DIMENSIONS,\n// which defaults to 2**24 = 16777216 pixels. Due to the above memory limit,\n// the only way to have an image with such dimensions load correctly\n// is for it to have a rather extreme aspect ratio. Either way, the\n// assumption here is that such larger images are likely to be malformed\n// or malicious. If you do need to load an image with individual dimensions\n// larger than that, and it still fits in the overall size limit, you can\n// #define STBI_MAX_DIMENSIONS on your own to be something larger.\n//\n// ===========================================================================\n//\n// UNICODE:\n//\n//   If compiling for Windows and you wish to use Unicode filenames, compile\n//   with\n//       #define STBI_WINDOWS_UTF8\n//   and pass utf8-encoded filenames. Call stbi_convert_wchar_to_utf8 to convert\n//   Windows wchar_t filenames to utf8.\n//\n// ===========================================================================\n//\n// Philosophy\n//\n// stb libraries are designed with the following priorities:\n//\n//    1. easy to use\n//    2. easy to maintain\n//    3. good performance\n//\n// Sometimes I let \"good performance\" creep up in priority over \"easy to maintain\",\n// and for best performance I may provide less-easy-to-use APIs that give higher\n// performance, in addition to the easy-to-use ones. 
Nevertheless, it's important\n// to keep in mind that from the standpoint of you, a client of this library,\n// all you care about is #1 and #3, and stb libraries DO NOT emphasize #3 above all.\n//\n// Some secondary priorities arise directly from the first two, some of which\n// provide more explicit reasons why performance can't be emphasized.\n//\n//    - Portable (\"ease of use\")\n//    - Small source code footprint (\"easy to maintain\")\n//    - No dependencies (\"ease of use\")\n//\n// ===========================================================================\n//\n// I/O callbacks\n//\n// I/O callbacks allow you to read from arbitrary sources, like packaged\n// files or some other source. Data read from callbacks are processed\n// through a small internal buffer (currently 128 bytes) to try to reduce\n// overhead.\n//\n// The three functions you must define are \"read\" (reads some bytes of data),\n// \"skip\" (skips some bytes of data), \"eof\" (reports if the stream is at the end).\n//\n// ===========================================================================\n//\n// SIMD support\n//\n// The JPEG decoder will try to automatically use SIMD kernels on x86 when\n// supported by the compiler. For ARM Neon support, you must explicitly\n// request it.\n//\n// (The old do-it-yourself SIMD API is no longer supported in the current\n// code.)\n//\n// On x86, SSE2 will automatically be used when available based on a run-time\n// test; if not, the generic C versions are used as a fall-back. On ARM targets,\n// the typical path is to have separate builds for NEON and non-NEON devices\n// (at least this is true for iOS and Android). 
Therefore, the NEON support is\n// toggled by a build flag: define STBI_NEON to get NEON loops.\n//\n// If for some reason you do not want to use any of SIMD code, or if\n// you have issues compiling it, you can disable it entirely by\n// defining STBI_NO_SIMD.\n//\n// ===========================================================================\n//\n// HDR image support   (disable by defining STBI_NO_HDR)\n//\n// stb_image supports loading HDR images in general, and currently the Radiance\n// .HDR file format specifically. You can still load any file through the existing\n// interface; if you attempt to load an HDR file, it will be automatically remapped\n// to LDR, assuming gamma 2.2 and an arbitrary scale factor defaulting to 1;\n// both of these constants can be reconfigured through this interface:\n//\n//     stbi_hdr_to_ldr_gamma(2.2f);\n//     stbi_hdr_to_ldr_scale(1.0f);\n//\n// (note, do not use _inverse_ constants; stbi_image will invert them\n// appropriately).\n//\n// Additionally, there is a new, parallel interface for loading files as\n// (linear) floats to preserve the full dynamic range:\n//\n//    float *data = stbi_loadf(filename, &x, &y, &n, 0);\n//\n// If you load LDR images through this interface, those images will\n// be promoted to floating point values, run through the inverse of\n// constants corresponding to the above:\n//\n//     stbi_ldr_to_hdr_scale(1.0f);\n//     stbi_ldr_to_hdr_gamma(2.2f);\n//\n// Finally, given a filename (or an open file or memory block--see header\n// file for details) containing image data, you can query for the \"most\n// appropriate\" interface to use (that is, whether the image is HDR or\n// not), using:\n//\n//     stbi_is_hdr(char *filename);\n//\n// ===========================================================================\n//\n// iPhone PNG support:\n//\n// We optionally support converting iPhone-formatted PNGs (which store\n// premultiplied BGRA) back to RGB, even though they're internally encoded\n// 
differently. To enable this conversion, call\n// stbi_convert_iphone_png_to_rgb(1).\n//\n// Call stbi_set_unpremultiply_on_load(1) as well to force a divide per\n// pixel to remove any premultiplied alpha *only* if the image file explicitly\n// says there's premultiplied data (currently only happens in iPhone images,\n// and only if iPhone convert-to-rgb processing is on).\n//\n// ===========================================================================\n//\n// ADDITIONAL CONFIGURATION\n//\n//  - You can suppress implementation of any of the decoders to reduce\n//    your code footprint by #defining one or more of the following\n//    symbols before creating the implementation.\n//\n//        STBI_NO_JPEG\n//        STBI_NO_PNG\n//        STBI_NO_BMP\n//        STBI_NO_PSD\n//        STBI_NO_TGA\n//        STBI_NO_GIF\n//        STBI_NO_HDR\n//        STBI_NO_PIC\n//        STBI_NO_PNM   (.ppm and .pgm)\n//\n//  - You can request *only* certain decoders and suppress all other ones\n//    (this will be more forward-compatible, as addition of new decoders\n//    doesn't require you to disable them explicitly):\n//\n//        STBI_ONLY_JPEG\n//        STBI_ONLY_PNG\n//        STBI_ONLY_BMP\n//        STBI_ONLY_PSD\n//        STBI_ONLY_TGA\n//        STBI_ONLY_GIF\n//        STBI_ONLY_HDR\n//        STBI_ONLY_PIC\n//        STBI_ONLY_PNM   (.ppm and .pgm)\n//\n//   - If you use STBI_NO_PNG (or _ONLY_ without PNG), and you still\n//     want the zlib decoder to be available, #define STBI_SUPPORT_ZLIB\n//\n//  - If you define STBI_MAX_DIMENSIONS, stb_image will reject images greater\n//    than that size (in either width or height) without further processing.\n//    This is to let programs in the wild set an upper bound to prevent\n//    denial-of-service attacks on untrusted data, as one could generate a\n//    valid image of gigantic dimensions and force stb_image to allocate a\n//    huge block of memory and spend disproportionate time decoding it. 
By\n//    default this is set to (1 << 24), which is 16777216, but that's still\n//    very big.\n\n#ifndef STBI_NO_STDIO\n#include <stdio.h>\n#endif // STBI_NO_STDIO\n\n#define STBI_VERSION 1\n\nenum\n{\n   STBI_default = 0, // only used for desired_channels\n\n   STBI_grey       = 1,\n   STBI_grey_alpha = 2,\n   STBI_rgb        = 3,\n   STBI_rgb_alpha  = 4\n};\n\n#include <stdlib.h>\ntypedef unsigned char stbi_uc;\ntypedef unsigned short stbi_us;\n\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n#ifndef STBIDEF\n#ifdef STB_IMAGE_STATIC\n#define STBIDEF static\n#else\n#define STBIDEF extern\n#endif\n#endif\n\n//////////////////////////////////////////////////////////////////////////////\n//\n// PRIMARY API - works on images of any type\n//\n\n//\n// load image by filename, open file, or memory buffer\n//\n\ntypedef struct\n{\n   int      (*read)  (void *user,char *data,int size);   // fill 'data' with 'size' bytes.  return number of bytes actually read\n   void     (*skip)  (void *user,int n);                 // skip the next 'n' bytes, or 'unget' the last -n bytes if negative\n   int      (*eof)   (void *user);                       // returns nonzero if we are at end of file/data\n} stbi_io_callbacks;\n\n////////////////////////////////////\n//\n// 8-bits-per-channel interface\n//\n\nSTBIDEF stbi_uc *stbi_load_from_memory   (stbi_uc           const *buffer, int len   , int *x, int *y, int *channels_in_file, int desired_channels);\nSTBIDEF stbi_uc *stbi_load_from_callbacks(stbi_io_callbacks const *clbk  , void *user, int *x, int *y, int *channels_in_file, int desired_channels);\n\n#ifndef STBI_NO_STDIO\nSTBIDEF stbi_uc *stbi_load            (char const *filename, int *x, int *y, int *channels_in_file, int desired_channels);\nSTBIDEF stbi_uc *stbi_load_from_file  (FILE *f, int *x, int *y, int *channels_in_file, int desired_channels);\n// for stbi_load_from_file, file pointer is left pointing immediately after image\n#endif\n\n#ifndef STBI_NO_GIF\nSTBIDEF stbi_uc 
*stbi_load_gif_from_memory(stbi_uc const *buffer, int len, int **delays, int *x, int *y, int *z, int *comp, int req_comp);\n#endif\n\n#ifdef STBI_WINDOWS_UTF8\nSTBIDEF int stbi_convert_wchar_to_utf8(char *buffer, size_t bufferlen, const wchar_t* input);\n#endif\n\n////////////////////////////////////\n//\n// 16-bits-per-channel interface\n//\n\nSTBIDEF stbi_us *stbi_load_16_from_memory   (stbi_uc const *buffer, int len, int *x, int *y, int *channels_in_file, int desired_channels);\nSTBIDEF stbi_us *stbi_load_16_from_callbacks(stbi_io_callbacks const *clbk, void *user, int *x, int *y, int *channels_in_file, int desired_channels);\n\n#ifndef STBI_NO_STDIO\nSTBIDEF stbi_us *stbi_load_16          (char const *filename, int *x, int *y, int *channels_in_file, int desired_channels);\nSTBIDEF stbi_us *stbi_load_from_file_16(FILE *f, int *x, int *y, int *channels_in_file, int desired_channels);\n#endif\n\n////////////////////////////////////\n//\n// float-per-channel interface\n//\n#ifndef STBI_NO_LINEAR\n   STBIDEF float *stbi_loadf_from_memory     (stbi_uc const *buffer, int len, int *x, int *y, int *channels_in_file, int desired_channels);\n   STBIDEF float *stbi_loadf_from_callbacks  (stbi_io_callbacks const *clbk, void *user, int *x, int *y,  int *channels_in_file, int desired_channels);\n\n   #ifndef STBI_NO_STDIO\n   STBIDEF float *stbi_loadf            (char const *filename, int *x, int *y, int *channels_in_file, int desired_channels);\n   STBIDEF float *stbi_loadf_from_file  (FILE *f, int *x, int *y, int *channels_in_file, int desired_channels);\n   #endif\n#endif\n\n#ifndef STBI_NO_HDR\n   STBIDEF void   stbi_hdr_to_ldr_gamma(float gamma);\n   STBIDEF void   stbi_hdr_to_ldr_scale(float scale);\n#endif // STBI_NO_HDR\n\n#ifndef STBI_NO_LINEAR\n   STBIDEF void   stbi_ldr_to_hdr_gamma(float gamma);\n   STBIDEF void   stbi_ldr_to_hdr_scale(float scale);\n#endif // STBI_NO_LINEAR\n\n// stbi_is_hdr is always defined, but always returns false if STBI_NO_HDR\nSTBIDEF int  
  stbi_is_hdr_from_callbacks(stbi_io_callbacks const *clbk, void *user);\nSTBIDEF int    stbi_is_hdr_from_memory(stbi_uc const *buffer, int len);\n#ifndef STBI_NO_STDIO\nSTBIDEF int      stbi_is_hdr          (char const *filename);\nSTBIDEF int      stbi_is_hdr_from_file(FILE *f);\n#endif // STBI_NO_STDIO\n\n\n// get a VERY brief reason for failure\n// on most compilers (and ALL modern mainstream compilers) this is threadsafe\nSTBIDEF const char *stbi_failure_reason  (void);\n\n// free the loaded image -- this is just free()\nSTBIDEF void     stbi_image_free      (void *retval_from_stbi_load);\n\n// get image dimensions & components without fully decoding\nSTBIDEF int      stbi_info_from_memory(stbi_uc const *buffer, int len, int *x, int *y, int *comp);\nSTBIDEF int      stbi_info_from_callbacks(stbi_io_callbacks const *clbk, void *user, int *x, int *y, int *comp);\nSTBIDEF int      stbi_is_16_bit_from_memory(stbi_uc const *buffer, int len);\nSTBIDEF int      stbi_is_16_bit_from_callbacks(stbi_io_callbacks const *clbk, void *user);\n\n#ifndef STBI_NO_STDIO\nSTBIDEF int      stbi_info               (char const *filename,     int *x, int *y, int *comp);\nSTBIDEF int      stbi_info_from_file     (FILE *f,                  int *x, int *y, int *comp);\nSTBIDEF int      stbi_is_16_bit          (char const *filename);\nSTBIDEF int      stbi_is_16_bit_from_file(FILE *f);\n#endif\n\n\n\n// for image formats that explicitly notate that they have premultiplied alpha,\n// we just return the colors as stored in the file. set this flag to force\n// unpremultiplication. 
results are undefined if the unpremultiply overflow.\nSTBIDEF void stbi_set_unpremultiply_on_load(int flag_true_if_should_unpremultiply);\n\n// indicate whether we should process iphone images back to canonical format,\n// or just pass them through \"as-is\"\nSTBIDEF void stbi_convert_iphone_png_to_rgb(int flag_true_if_should_convert);\n\n// flip the image vertically, so the first pixel in the output array is the bottom left\nSTBIDEF void stbi_set_flip_vertically_on_load(int flag_true_if_should_flip);\n\n// as above, but only applies to images loaded on the thread that calls the function\n// this function is only available if your compiler supports thread-local variables;\n// calling it will fail to link if your compiler doesn't\nSTBIDEF void stbi_set_unpremultiply_on_load_thread(int flag_true_if_should_unpremultiply);\nSTBIDEF void stbi_convert_iphone_png_to_rgb_thread(int flag_true_if_should_convert);\nSTBIDEF void stbi_set_flip_vertically_on_load_thread(int flag_true_if_should_flip);\n\n// ZLIB client - used by PNG, available for other purposes\n\nSTBIDEF char *stbi_zlib_decode_malloc_guesssize(const char *buffer, int len, int initial_size, int *outlen);\nSTBIDEF char *stbi_zlib_decode_malloc_guesssize_headerflag(const char *buffer, int len, int initial_size, int *outlen, int parse_header);\nSTBIDEF char *stbi_zlib_decode_malloc(const char *buffer, int len, int *outlen);\nSTBIDEF int   stbi_zlib_decode_buffer(char *obuffer, int olen, const char *ibuffer, int ilen);\n\nSTBIDEF char *stbi_zlib_decode_noheader_malloc(const char *buffer, int len, int *outlen);\nSTBIDEF int   stbi_zlib_decode_noheader_buffer(char *obuffer, int olen, const char *ibuffer, int ilen);\n\n\n#ifdef __cplusplus\n}\n#endif\n\n//\n//\n////   end header file   /////////////////////////////////////////////////////\n#endif // STBI_INCLUDE_STB_IMAGE_H\n\n#ifdef STB_IMAGE_IMPLEMENTATION\n\n#if defined(STBI_ONLY_JPEG) || defined(STBI_ONLY_PNG) || defined(STBI_ONLY_BMP) \\\n  || 
defined(STBI_ONLY_TGA) || defined(STBI_ONLY_GIF) || defined(STBI_ONLY_PSD) \\\n  || defined(STBI_ONLY_HDR) || defined(STBI_ONLY_PIC) || defined(STBI_ONLY_PNM) \\\n  || defined(STBI_ONLY_ZLIB)\n   #ifndef STBI_ONLY_JPEG\n   #define STBI_NO_JPEG\n   #endif\n   #ifndef STBI_ONLY_PNG\n   #define STBI_NO_PNG\n   #endif\n   #ifndef STBI_ONLY_BMP\n   #define STBI_NO_BMP\n   #endif\n   #ifndef STBI_ONLY_PSD\n   #define STBI_NO_PSD\n   #endif\n   #ifndef STBI_ONLY_TGA\n   #define STBI_NO_TGA\n   #endif\n   #ifndef STBI_ONLY_GIF\n   #define STBI_NO_GIF\n   #endif\n   #ifndef STBI_ONLY_HDR\n   #define STBI_NO_HDR\n   #endif\n   #ifndef STBI_ONLY_PIC\n   #define STBI_NO_PIC\n   #endif\n   #ifndef STBI_ONLY_PNM\n   #define STBI_NO_PNM\n   #endif\n#endif\n\n#if defined(STBI_NO_PNG) && !defined(STBI_SUPPORT_ZLIB) && !defined(STBI_NO_ZLIB)\n#define STBI_NO_ZLIB\n#endif\n\n\n#include <stdarg.h>\n#include <stddef.h> // ptrdiff_t on osx\n#include <stdlib.h>\n#include <string.h>\n#include <limits.h>\n\n#if !defined(STBI_NO_LINEAR) || !defined(STBI_NO_HDR)\n#include <math.h>  // ldexp, pow\n#endif\n\n#ifndef STBI_NO_STDIO\n#include <stdio.h>\n#endif\n\n#ifndef STBI_ASSERT\n#include <assert.h>\n#define STBI_ASSERT(x) assert(x)\n#endif\n\n#ifdef __cplusplus\n#define STBI_EXTERN extern \"C\"\n#else\n#define STBI_EXTERN extern\n#endif\n\n\n#ifndef _MSC_VER\n   #ifdef __cplusplus\n   #define stbi_inline inline\n   #else\n   #define stbi_inline\n   #endif\n#else\n   #define stbi_inline __forceinline\n#endif\n\n#ifndef STBI_NO_THREAD_LOCALS\n   #if defined(__cplusplus) &&  __cplusplus >= 201103L\n      #define STBI_THREAD_LOCAL       thread_local\n   #elif defined(__GNUC__) && __GNUC__ < 5\n      #define STBI_THREAD_LOCAL       __thread\n   #elif defined(_MSC_VER)\n      #define STBI_THREAD_LOCAL       __declspec(thread)\n   #elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 201112L && !defined(__STDC_NO_THREADS__)\n      #define STBI_THREAD_LOCAL       _Thread_local\n   #endif\n\n   
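\n/*\n   Illustrative sketch (editor's note, not part of the library): once this\n   detection block completes, STBI_THREAD_LOCAL either expands to the\n   compiler's thread-local storage keyword or remains undefined. A per-thread\n   variable declaration inside the implementation then reduces to:\n\n      #ifdef STBI_THREAD_LOCAL\n      static STBI_THREAD_LOCAL int stbi__example_flag;   // one copy per thread\n      #else\n      static int stbi__example_flag;                     // shared global fallback\n      #endif\n\n   stbi__example_flag is a hypothetical name used only for illustration.\n*/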
#ifndef STBI_THREAD_LOCAL\n      #if defined(__GNUC__)\n        #define STBI_THREAD_LOCAL       __thread\n      #endif\n   #endif\n#endif\n\n#if defined(_MSC_VER) || defined(__SYMBIAN32__)\ntypedef unsigned short stbi__uint16;\ntypedef   signed short stbi__int16;\ntypedef unsigned int   stbi__uint32;\ntypedef   signed int   stbi__int32;\n#else\n#include <stdint.h>\ntypedef uint16_t stbi__uint16;\ntypedef int16_t  stbi__int16;\ntypedef uint32_t stbi__uint32;\ntypedef int32_t  stbi__int32;\n#endif\n\n// should produce compiler error if size is wrong\ntypedef unsigned char validate_uint32[sizeof(stbi__uint32)==4 ? 1 : -1];\n\n#ifdef _MSC_VER\n#define STBI_NOTUSED(v)  (void)(v)\n#else\n#define STBI_NOTUSED(v)  (void)sizeof(v)\n#endif\n\n#ifdef _MSC_VER\n#define STBI_HAS_LROTL\n#endif\n\n#ifdef STBI_HAS_LROTL\n   #define stbi_lrot(x,y)  _lrotl(x,y)\n#else\n   #define stbi_lrot(x,y)  (((x) << (y)) | ((x) >> (-(y) & 31)))\n#endif\n\n#if defined(STBI_MALLOC) && defined(STBI_FREE) && (defined(STBI_REALLOC) || defined(STBI_REALLOC_SIZED))\n// ok\n#elif !defined(STBI_MALLOC) && !defined(STBI_FREE) && !defined(STBI_REALLOC) && !defined(STBI_REALLOC_SIZED)\n// ok\n#else\n#error \"Must define all or none of STBI_MALLOC, STBI_FREE, and STBI_REALLOC (or STBI_REALLOC_SIZED).\"\n#endif\n\n#ifndef STBI_MALLOC\n#define STBI_MALLOC(sz)           malloc(sz)\n#define STBI_REALLOC(p,newsz)     realloc(p,newsz)\n#define STBI_FREE(p)              free(p)\n#endif\n\n#ifndef STBI_REALLOC_SIZED\n#define STBI_REALLOC_SIZED(p,oldsz,newsz) STBI_REALLOC(p,newsz)\n#endif\n\n// x86/x64 detection\n#if defined(__x86_64__) || defined(_M_X64)\n#define STBI__X64_TARGET\n#elif defined(__i386) || defined(_M_IX86)\n#define STBI__X86_TARGET\n#endif\n\n#if defined(__GNUC__) && defined(STBI__X86_TARGET) && !defined(__SSE2__) && !defined(STBI_NO_SIMD)\n// gcc doesn't support sse2 intrinsics unless you compile with -msse2,\n// which in turn means it gets to use SSE2 everywhere. 
This is unfortunate,\n// but previous attempts to provide the SSE2 functions with runtime\n// detection caused numerous issues. The way architecture extensions are\n// exposed in GCC/Clang is, sadly, not really suited for one-file libs.\n// New behavior: if compiled with -msse2, we use SSE2 without any\n// detection; if not, we don't use it at all.\n#define STBI_NO_SIMD\n#endif\n\n#if defined(__MINGW32__) && defined(STBI__X86_TARGET) && !defined(STBI_MINGW_ENABLE_SSE2) && !defined(STBI_NO_SIMD)\n// Note that __MINGW32__ doesn't actually mean 32-bit, so we have to avoid STBI__X64_TARGET\n//\n// 32-bit MinGW wants ESP to be 16-byte aligned, but this is not in the\n// Windows ABI and VC++ as well as Windows DLLs don't maintain that invariant.\n// As a result, enabling SSE2 on 32-bit MinGW is dangerous when not\n// simultaneously enabling \"-mstackrealign\".\n//\n// See https://github.com/nothings/stb/issues/81 for more information.\n//\n// So default to no SSE2 on 32-bit MinGW. If you've read this far and added\n// -mstackrealign to your build settings, feel free to #define STBI_MINGW_ENABLE_SSE2.\n#define STBI_NO_SIMD\n#endif\n\n#if !defined(STBI_NO_SIMD) && (defined(STBI__X86_TARGET) || defined(STBI__X64_TARGET))\n#define STBI_SSE2\n#include <emmintrin.h>\n\n#ifdef _MSC_VER\n\n#if _MSC_VER >= 1400  // not VC6\n#include <intrin.h> // __cpuid\nstatic int stbi__cpuid3(void)\n{\n   int info[4];\n   __cpuid(info,1);\n   return info[3];\n}\n#else\nstatic int stbi__cpuid3(void)\n{\n   int res;\n   __asm {\n      mov  eax,1\n      cpuid\n      mov  res,edx\n   }\n   return res;\n}\n#endif\n\n#define STBI_SIMD_ALIGN(type, name) __declspec(align(16)) type name\n\n#if !defined(STBI_NO_JPEG) && defined(STBI_SSE2)\nstatic int stbi__sse2_available(void)\n{\n   int info3 = stbi__cpuid3();\n   return ((info3 >> 26) & 1) != 0;\n}\n#endif\n\n#else // assume GCC-style if not VC++\n#define STBI_SIMD_ALIGN(type, name) type name __attribute__((aligned(16)))\n\n#if !defined(STBI_NO_JPEG) 
&& defined(STBI_SSE2)\nstatic int stbi__sse2_available(void)\n{\n   // If we're even attempting to compile this on GCC/Clang, that means\n   // -msse2 is on, which means the compiler is allowed to use SSE2\n   // instructions at will, and so are we.\n   return 1;\n}\n#endif\n\n#endif\n#endif\n\n// ARM NEON\n#if defined(STBI_NO_SIMD) && defined(STBI_NEON)\n#undef STBI_NEON\n#endif\n\n#ifdef STBI_NEON\n#include <arm_neon.h>\n#ifdef _MSC_VER\n#define STBI_SIMD_ALIGN(type, name) __declspec(align(16)) type name\n#else\n#define STBI_SIMD_ALIGN(type, name) type name __attribute__((aligned(16)))\n#endif\n#endif\n\n#ifndef STBI_SIMD_ALIGN\n#define STBI_SIMD_ALIGN(type, name) type name\n#endif\n\n#ifndef STBI_MAX_DIMENSIONS\n#define STBI_MAX_DIMENSIONS (1 << 24)\n#endif\n\n///////////////////////////////////////////////\n//\n//  stbi__context struct and start_xxx functions\n\n// stbi__context structure is our basic context used by all images, so it\n// contains all the IO context, plus some basic image information\ntypedef struct\n{\n   stbi__uint32 img_x, img_y;\n   int img_n, img_out_n;\n\n   stbi_io_callbacks io;\n   void *io_user_data;\n\n   int read_from_callbacks;\n   int buflen;\n   stbi_uc buffer_start[128];\n   int callback_already_read;\n\n   stbi_uc *img_buffer, *img_buffer_end;\n   stbi_uc *img_buffer_original, *img_buffer_original_end;\n} stbi__context;\n\n\nstatic void stbi__refill_buffer(stbi__context *s);\n\n// initialize a memory-decode context\nstatic void stbi__start_mem(stbi__context *s, stbi_uc const *buffer, int len)\n{\n   s->io.read = NULL;\n   s->read_from_callbacks = 0;\n   s->callback_already_read = 0;\n   s->img_buffer = s->img_buffer_original = (stbi_uc *) buffer;\n   s->img_buffer_end = s->img_buffer_original_end = (stbi_uc *) buffer+len;\n}\n\n// initialize a callback-based context\nstatic void stbi__start_callbacks(stbi__context *s, stbi_io_callbacks *c, void *user)\n{\n   s->io = *c;\n   s->io_user_data = user;\n   s->buflen = 
sizeof(s->buffer_start);\n   s->read_from_callbacks = 1;\n   s->callback_already_read = 0;\n   s->img_buffer = s->img_buffer_original = s->buffer_start;\n   stbi__refill_buffer(s);\n   s->img_buffer_original_end = s->img_buffer_end;\n}\n\n#ifndef STBI_NO_STDIO\n\nstatic int stbi__stdio_read(void *user, char *data, int size)\n{\n   return (int) fread(data,1,size,(FILE*) user);\n}\n\nstatic void stbi__stdio_skip(void *user, int n)\n{\n   int ch;\n   fseek((FILE*) user, n, SEEK_CUR);\n   ch = fgetc((FILE*) user);  /* have to read a byte to reset feof()'s flag */\n   if (ch != EOF) {\n      ungetc(ch, (FILE *) user);  /* push byte back onto stream if valid. */\n   }\n}\n\nstatic int stbi__stdio_eof(void *user)\n{\n   return feof((FILE*) user) || ferror((FILE *) user);\n}\n\nstatic stbi_io_callbacks stbi__stdio_callbacks =\n{\n   stbi__stdio_read,\n   stbi__stdio_skip,\n   stbi__stdio_eof,\n};\n\nstatic void stbi__start_file(stbi__context *s, FILE *f)\n{\n   stbi__start_callbacks(s, &stbi__stdio_callbacks, (void *) f);\n}\n\n//static void stop_file(stbi__context *s) { }\n\n#endif // !STBI_NO_STDIO\n\nstatic void stbi__rewind(stbi__context *s)\n{\n   // conceptually rewind SHOULD rewind to the beginning of the stream,\n   // but we just rewind to the beginning of the initial buffer, because\n   // we only use it after doing 'test', which only ever looks at at most 92 bytes\n   s->img_buffer = s->img_buffer_original;\n   s->img_buffer_end = s->img_buffer_original_end;\n}\n\nenum\n{\n   STBI_ORDER_RGB,\n   STBI_ORDER_BGR\n};\n\ntypedef struct\n{\n   int bits_per_channel;\n   int num_channels;\n   int channel_order;\n} stbi__result_info;\n\n#ifndef STBI_NO_JPEG\nstatic int      stbi__jpeg_test(stbi__context *s);\nstatic void    *stbi__jpeg_load(stbi__context *s, int *x, int *y, int *comp, int req_comp, stbi__result_info *ri);\nstatic int      stbi__jpeg_info(stbi__context *s, int *x, int *y, int *comp);\n#endif\n\n#ifndef STBI_NO_PNG\nstatic int      
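\n/*\n   Illustrative sketch (editor's note, not part of the library): every enabled\n   format supplies the same trio of entry points -- xxx_test (cheap header\n   probe), xxx_load (full decode), xxx_info (dimensions/channels only) -- so\n   the dispatcher in stbi__load_main below reduces to a chain of probes of\n   the form:\n\n      if (stbi__xxx_test(s)) return stbi__xxx_load(s, x, y, comp, req_comp, ri);\n\n   Each xxx_test examines only the first few header bytes and rewinds the\n   stream (stbi__rewind) before returning, so the next probe sees the data\n   from the start.\n*/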
stbi__png_test(stbi__context *s);\nstatic void    *stbi__png_load(stbi__context *s, int *x, int *y, int *comp, int req_comp, stbi__result_info *ri);\nstatic int      stbi__png_info(stbi__context *s, int *x, int *y, int *comp);\nstatic int      stbi__png_is16(stbi__context *s);\n#endif\n\n#ifndef STBI_NO_BMP\nstatic int      stbi__bmp_test(stbi__context *s);\nstatic void    *stbi__bmp_load(stbi__context *s, int *x, int *y, int *comp, int req_comp, stbi__result_info *ri);\nstatic int      stbi__bmp_info(stbi__context *s, int *x, int *y, int *comp);\n#endif\n\n#ifndef STBI_NO_TGA\nstatic int      stbi__tga_test(stbi__context *s);\nstatic void    *stbi__tga_load(stbi__context *s, int *x, int *y, int *comp, int req_comp, stbi__result_info *ri);\nstatic int      stbi__tga_info(stbi__context *s, int *x, int *y, int *comp);\n#endif\n\n#ifndef STBI_NO_PSD\nstatic int      stbi__psd_test(stbi__context *s);\nstatic void    *stbi__psd_load(stbi__context *s, int *x, int *y, int *comp, int req_comp, stbi__result_info *ri, int bpc);\nstatic int      stbi__psd_info(stbi__context *s, int *x, int *y, int *comp);\nstatic int      stbi__psd_is16(stbi__context *s);\n#endif\n\n#ifndef STBI_NO_HDR\nstatic int      stbi__hdr_test(stbi__context *s);\nstatic float   *stbi__hdr_load(stbi__context *s, int *x, int *y, int *comp, int req_comp, stbi__result_info *ri);\nstatic int      stbi__hdr_info(stbi__context *s, int *x, int *y, int *comp);\n#endif\n\n#ifndef STBI_NO_PIC\nstatic int      stbi__pic_test(stbi__context *s);\nstatic void    *stbi__pic_load(stbi__context *s, int *x, int *y, int *comp, int req_comp, stbi__result_info *ri);\nstatic int      stbi__pic_info(stbi__context *s, int *x, int *y, int *comp);\n#endif\n\n#ifndef STBI_NO_GIF\nstatic int      stbi__gif_test(stbi__context *s);\nstatic void    *stbi__gif_load(stbi__context *s, int *x, int *y, int *comp, int req_comp, stbi__result_info *ri);\nstatic void    *stbi__load_gif_main(stbi__context *s, int **delays, int *x, int *y, int 
*z, int *comp, int req_comp);\nstatic int      stbi__gif_info(stbi__context *s, int *x, int *y, int *comp);\n#endif\n\n#ifndef STBI_NO_PNM\nstatic int      stbi__pnm_test(stbi__context *s);\nstatic void    *stbi__pnm_load(stbi__context *s, int *x, int *y, int *comp, int req_comp, stbi__result_info *ri);\nstatic int      stbi__pnm_info(stbi__context *s, int *x, int *y, int *comp);\nstatic int      stbi__pnm_is16(stbi__context *s);\n#endif\n\nstatic\n#ifdef STBI_THREAD_LOCAL\nSTBI_THREAD_LOCAL\n#endif\nconst char *stbi__g_failure_reason;\n\nSTBIDEF const char *stbi_failure_reason(void)\n{\n   return stbi__g_failure_reason;\n}\n\n#ifndef STBI_NO_FAILURE_STRINGS\nstatic int stbi__err(const char *str)\n{\n   stbi__g_failure_reason = str;\n   return 0;\n}\n#endif\n\nstatic void *stbi__malloc(size_t size)\n{\n    return STBI_MALLOC(size);\n}\n\n// stb_image uses ints pervasively, including for offset calculations.\n// therefore the largest decoded image size we can support with the\n// current code, even on 64-bit targets, is INT_MAX. this is not a\n// significant limitation for the intended use case.\n//\n// we do, however, need to make sure our size calculations don't\n// overflow. 
hence a few helper functions for size calculations that\n// multiply integers together, making sure that they're non-negative\n// and no overflow occurs.\n\n// returns 1 if the sum is valid, 0 on overflow.\n// negative terms are considered invalid.\nstatic int stbi__addsizes_valid(int a, int b)\n{\n   if (b < 0) return 0;\n   // now 0 <= b <= INT_MAX, hence also\n   // 0 <= INT_MAX - b <= INT_MAX.\n   // And \"a + b <= INT_MAX\" (which might overflow) is the\n   // same as a <= INT_MAX - b (no overflow)\n   return a <= INT_MAX - b;\n}\n\n// returns 1 if the product is valid, 0 on overflow.\n// negative factors are considered invalid.\nstatic int stbi__mul2sizes_valid(int a, int b)\n{\n   if (a < 0 || b < 0) return 0;\n   if (b == 0) return 1; // mul-by-0 is always safe\n   // portable way to check for no overflows in a*b\n   return a <= INT_MAX/b;\n}\n\n#if !defined(STBI_NO_JPEG) || !defined(STBI_NO_PNG) || !defined(STBI_NO_TGA) || !defined(STBI_NO_HDR)\n// returns 1 if \"a*b + add\" has no negative terms/factors and doesn't overflow\nstatic int stbi__mad2sizes_valid(int a, int b, int add)\n{\n   return stbi__mul2sizes_valid(a, b) && stbi__addsizes_valid(a*b, add);\n}\n#endif\n\n// returns 1 if \"a*b*c + add\" has no negative terms/factors and doesn't overflow\nstatic int stbi__mad3sizes_valid(int a, int b, int c, int add)\n{\n   return stbi__mul2sizes_valid(a, b) && stbi__mul2sizes_valid(a*b, c) &&\n      stbi__addsizes_valid(a*b*c, add);\n}\n\n// returns 1 if \"a*b*c*d + add\" has no negative terms/factors and doesn't overflow\n#if !defined(STBI_NO_LINEAR) || !defined(STBI_NO_HDR) || !defined(STBI_NO_PNM)\nstatic int stbi__mad4sizes_valid(int a, int b, int c, int d, int add)\n{\n   return stbi__mul2sizes_valid(a, b) && stbi__mul2sizes_valid(a*b, c) &&\n      stbi__mul2sizes_valid(a*b*c, d) && stbi__addsizes_valid(a*b*c*d, add);\n}\n#endif\n\n#if !defined(STBI_NO_JPEG) || !defined(STBI_NO_PNG) || !defined(STBI_NO_TGA) || !defined(STBI_NO_HDR)\n// mallocs with size 
overflow checking\nstatic void *stbi__malloc_mad2(int a, int b, int add)\n{\n   if (!stbi__mad2sizes_valid(a, b, add)) return NULL;\n   return stbi__malloc(a*b + add);\n}\n#endif\n\nstatic void *stbi__malloc_mad3(int a, int b, int c, int add)\n{\n   if (!stbi__mad3sizes_valid(a, b, c, add)) return NULL;\n   return stbi__malloc(a*b*c + add);\n}\n\n#if !defined(STBI_NO_LINEAR) || !defined(STBI_NO_HDR) || !defined(STBI_NO_PNM)\nstatic void *stbi__malloc_mad4(int a, int b, int c, int d, int add)\n{\n   if (!stbi__mad4sizes_valid(a, b, c, d, add)) return NULL;\n   return stbi__malloc(a*b*c*d + add);\n}\n#endif\n\n// returns 1 if the sum of two signed ints is valid (between -2^31 and 2^31-1 inclusive), 0 on overflow.\nstatic int stbi__addints_valid(int a, int b)\n{\n   if ((a >= 0) != (b >= 0)) return 1; // a and b have different signs, so no overflow\n   if (a < 0 && b < 0) return a >= INT_MIN - b; // same as a + b >= INT_MIN; INT_MIN - b cannot overflow since b < 0.\n   return a <= INT_MAX - b;\n}\n\n// returns 1 if the product of two ints fits in a signed short, 0 on overflow.\nstatic int stbi__mul2shorts_valid(int a, int b)\n{\n   if (b == 0 || b == -1) return 1; // multiplication by 0 is always 0; check for -1 so SHRT_MIN/b doesn't overflow\n   if ((a >= 0) == (b >= 0)) return a <= SHRT_MAX/b; // product is positive, so similar to mul2sizes_valid\n   if (b < 0) return a <= SHRT_MIN / b; // same as a * b >= SHRT_MIN\n   return a >= SHRT_MIN / b;\n}\n\n// stbi__err - error\n// stbi__errpf - error returning pointer to float\n// stbi__errpuc - error returning pointer to unsigned char\n\n#ifdef STBI_NO_FAILURE_STRINGS\n   #define stbi__err(x,y)  0\n#elif defined(STBI_FAILURE_USERMSG)\n   #define stbi__err(x,y)  stbi__err(y)\n#else\n   #define stbi__err(x,y)  stbi__err(x)\n#endif\n\n#define stbi__errpf(x,y)   ((float *)(size_t) (stbi__err(x,y)?NULL:NULL))\n#define stbi__errpuc(x,y)  ((unsigned char *)(size_t) (stbi__err(x,y)?NULL:NULL))\n\nSTBIDEF void 
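\n/*\n   Illustrative sketch (editor's note, not part of the library): how the\n   checked allocators above compose. To allocate pixel storage for a w x h\n   image with comp channels:\n\n      stbi_uc *buf = (stbi_uc *) stbi__malloc_mad3(w, h, comp, 0);\n      if (buf == NULL) return stbi__errpuc(\"outofmem\", \"Out of memory\");\n\n   stbi__mad3sizes_valid rejects negative inputs and any product exceeding\n   INT_MAX before the multiplication is performed, so w*h*comp is computed\n   only once it is known not to overflow.\n*/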
stbi_image_free(void *retval_from_stbi_load)\n{\n   STBI_FREE(retval_from_stbi_load);\n}\n\n#ifndef STBI_NO_LINEAR\nstatic float   *stbi__ldr_to_hdr(stbi_uc *data, int x, int y, int comp);\n#endif\n\n#ifndef STBI_NO_HDR\nstatic stbi_uc *stbi__hdr_to_ldr(float   *data, int x, int y, int comp);\n#endif\n\nstatic int stbi__vertically_flip_on_load_global = 0;\n\nSTBIDEF void stbi_set_flip_vertically_on_load(int flag_true_if_should_flip)\n{\n   stbi__vertically_flip_on_load_global = flag_true_if_should_flip;\n}\n\n#ifndef STBI_THREAD_LOCAL\n#define stbi__vertically_flip_on_load  stbi__vertically_flip_on_load_global\n#else\nstatic STBI_THREAD_LOCAL int stbi__vertically_flip_on_load_local, stbi__vertically_flip_on_load_set;\n\nSTBIDEF void stbi_set_flip_vertically_on_load_thread(int flag_true_if_should_flip)\n{\n   stbi__vertically_flip_on_load_local = flag_true_if_should_flip;\n   stbi__vertically_flip_on_load_set = 1;\n}\n\n#define stbi__vertically_flip_on_load  (stbi__vertically_flip_on_load_set       \\\n                                         ? 
stbi__vertically_flip_on_load_local  \\\n                                         : stbi__vertically_flip_on_load_global)\n#endif // STBI_THREAD_LOCAL\n\nstatic void *stbi__load_main(stbi__context *s, int *x, int *y, int *comp, int req_comp, stbi__result_info *ri, int bpc)\n{\n   memset(ri, 0, sizeof(*ri)); // make sure it's initialized if we add new fields\n   ri->bits_per_channel = 8; // default is 8 so most paths don't have to be changed\n   ri->channel_order = STBI_ORDER_RGB; // all current input & output are this, but this is here so we can add BGR order\n   ri->num_channels = 0;\n\n   // test the formats with a very explicit header first (at least a FOURCC\n   // or distinctive magic number first)\n   #ifndef STBI_NO_PNG\n   if (stbi__png_test(s))  return stbi__png_load(s,x,y,comp,req_comp, ri);\n   #endif\n   #ifndef STBI_NO_BMP\n   if (stbi__bmp_test(s))  return stbi__bmp_load(s,x,y,comp,req_comp, ri);\n   #endif\n   #ifndef STBI_NO_GIF\n   if (stbi__gif_test(s))  return stbi__gif_load(s,x,y,comp,req_comp, ri);\n   #endif\n   #ifndef STBI_NO_PSD\n   if (stbi__psd_test(s))  return stbi__psd_load(s,x,y,comp,req_comp, ri, bpc);\n   #else\n   STBI_NOTUSED(bpc);\n   #endif\n   #ifndef STBI_NO_PIC\n   if (stbi__pic_test(s))  return stbi__pic_load(s,x,y,comp,req_comp, ri);\n   #endif\n\n   // then the formats that can end up attempting to load with just 1 or 2\n   // bytes matching expectations; these are prone to false positives, so\n   // try them later\n   #ifndef STBI_NO_JPEG\n   if (stbi__jpeg_test(s)) return stbi__jpeg_load(s,x,y,comp,req_comp, ri);\n   #endif\n   #ifndef STBI_NO_PNM\n   if (stbi__pnm_test(s))  return stbi__pnm_load(s,x,y,comp,req_comp, ri);\n   #endif\n\n   #ifndef STBI_NO_HDR\n   if (stbi__hdr_test(s)) {\n      float *hdr = stbi__hdr_load(s, x,y,comp,req_comp, ri);\n      return stbi__hdr_to_ldr(hdr, *x, *y, req_comp ? 
req_comp : *comp);\n   }\n   #endif\n\n   #ifndef STBI_NO_TGA\n   // test tga last because it's a crappy test!\n   if (stbi__tga_test(s))\n      return stbi__tga_load(s,x,y,comp,req_comp, ri);\n   #endif\n\n   return stbi__errpuc(\"unknown image type\", \"Image not of any known type, or corrupt\");\n}\n\nstatic stbi_uc *stbi__convert_16_to_8(stbi__uint16 *orig, int w, int h, int channels)\n{\n   int i;\n   int img_len = w * h * channels;\n   stbi_uc *reduced;\n\n   reduced = (stbi_uc *) stbi__malloc(img_len);\n   if (reduced == NULL) return stbi__errpuc(\"outofmem\", \"Out of memory\");\n\n   for (i = 0; i < img_len; ++i)\n      reduced[i] = (stbi_uc)((orig[i] >> 8) & 0xFF); // the top byte of each 16-bit value is a sufficient approximation of 16->8 bit scaling\n\n   STBI_FREE(orig);\n   return reduced;\n}\n\nstatic stbi__uint16 *stbi__convert_8_to_16(stbi_uc *orig, int w, int h, int channels)\n{\n   int i;\n   int img_len = w * h * channels;\n   stbi__uint16 *enlarged;\n\n   enlarged = (stbi__uint16 *) stbi__malloc(img_len*2);\n   if (enlarged == NULL) return (stbi__uint16 *) stbi__errpuc(\"outofmem\", \"Out of memory\");\n\n   for (i = 0; i < img_len; ++i)\n      enlarged[i] = (stbi__uint16)((orig[i] << 8) + orig[i]); // replicate to high and low byte, maps 0->0, 255->0xffff\n\n   STBI_FREE(orig);\n   return enlarged;\n}\n\nstatic void stbi__vertical_flip(void *image, int w, int h, int bytes_per_pixel)\n{\n   int row;\n   size_t bytes_per_row = (size_t)w * bytes_per_pixel;\n   stbi_uc temp[2048];\n   stbi_uc *bytes = (stbi_uc *)image;\n\n   for (row = 0; row < (h>>1); row++) {\n      stbi_uc *row0 = bytes + row*bytes_per_row;\n      stbi_uc *row1 = bytes + (h - row - 1)*bytes_per_row;\n      // swap row0 with row1\n      size_t bytes_left = bytes_per_row;\n      while (bytes_left) {\n         size_t bytes_copy = (bytes_left < sizeof(temp)) ? 
bytes_left : sizeof(temp);\n         memcpy(temp, row0, bytes_copy);\n         memcpy(row0, row1, bytes_copy);\n         memcpy(row1, temp, bytes_copy);\n         row0 += bytes_copy;\n         row1 += bytes_copy;\n         bytes_left -= bytes_copy;\n      }\n   }\n}\n\n#ifndef STBI_NO_GIF\nstatic void stbi__vertical_flip_slices(void *image, int w, int h, int z, int bytes_per_pixel)\n{\n   int slice;\n   int slice_size = w * h * bytes_per_pixel;\n\n   stbi_uc *bytes = (stbi_uc *)image;\n   for (slice = 0; slice < z; ++slice) {\n      stbi__vertical_flip(bytes, w, h, bytes_per_pixel);\n      bytes += slice_size;\n   }\n}\n#endif\n\nstatic unsigned char *stbi__load_and_postprocess_8bit(stbi__context *s, int *x, int *y, int *comp, int req_comp)\n{\n   stbi__result_info ri;\n   void *result = stbi__load_main(s, x, y, comp, req_comp, &ri, 8);\n\n   if (result == NULL)\n      return NULL;\n\n   // it is the responsibility of the loaders to make sure we get either 8 or 16 bit.\n   STBI_ASSERT(ri.bits_per_channel == 8 || ri.bits_per_channel == 16);\n\n   if (ri.bits_per_channel != 8) {\n      result = stbi__convert_16_to_8((stbi__uint16 *) result, *x, *y, req_comp == 0 ? *comp : req_comp);\n      ri.bits_per_channel = 8;\n   }\n\n   // @TODO: move stbi__convert_format to here\n\n   if (stbi__vertically_flip_on_load) {\n      int channels = req_comp ? 
req_comp : *comp;\n      stbi__vertical_flip(result, *x, *y, channels * sizeof(stbi_uc));\n   }\n\n   return (unsigned char *) result;\n}\n\nstatic stbi__uint16 *stbi__load_and_postprocess_16bit(stbi__context *s, int *x, int *y, int *comp, int req_comp)\n{\n   stbi__result_info ri;\n   void *result = stbi__load_main(s, x, y, comp, req_comp, &ri, 16);\n\n   if (result == NULL)\n      return NULL;\n\n   // it is the responsibility of the loaders to make sure we get either 8 or 16 bit.\n   STBI_ASSERT(ri.bits_per_channel == 8 || ri.bits_per_channel == 16);\n\n   if (ri.bits_per_channel != 16) {\n      result = stbi__convert_8_to_16((stbi_uc *) result, *x, *y, req_comp == 0 ? *comp : req_comp);\n      ri.bits_per_channel = 16;\n   }\n\n   // @TODO: move stbi__convert_format16 to here\n   // @TODO: special case RGB-to-Y (and RGBA-to-YA) for 8-bit-to-16-bit case to keep more precision\n\n   if (stbi__vertically_flip_on_load) {\n      int channels = req_comp ? req_comp : *comp;\n      stbi__vertical_flip(result, *x, *y, channels * sizeof(stbi__uint16));\n   }\n\n   return (stbi__uint16 *) result;\n}\n\n#if !defined(STBI_NO_HDR) && !defined(STBI_NO_LINEAR)\nstatic void stbi__float_postprocess(float *result, int *x, int *y, int *comp, int req_comp)\n{\n   if (stbi__vertically_flip_on_load && result != NULL) {\n      int channels = req_comp ? 
req_comp : *comp;\n      stbi__vertical_flip(result, *x, *y, channels * sizeof(float));\n   }\n}\n#endif\n\n#ifndef STBI_NO_STDIO\n\n#if defined(_WIN32) && defined(STBI_WINDOWS_UTF8)\nSTBI_EXTERN __declspec(dllimport) int __stdcall MultiByteToWideChar(unsigned int cp, unsigned long flags, const char *str, int cbmb, wchar_t *widestr, int cchwide);\nSTBI_EXTERN __declspec(dllimport) int __stdcall WideCharToMultiByte(unsigned int cp, unsigned long flags, const wchar_t *widestr, int cchwide, char *str, int cbmb, const char *defchar, int *used_default);\n#endif\n\n#if defined(_WIN32) && defined(STBI_WINDOWS_UTF8)\nSTBIDEF int stbi_convert_wchar_to_utf8(char *buffer, size_t bufferlen, const wchar_t* input)\n{\n\treturn WideCharToMultiByte(65001 /* UTF8 */, 0, input, -1, buffer, (int) bufferlen, NULL, NULL);\n}\n#endif\n\nstatic FILE *stbi__fopen(char const *filename, char const *mode)\n{\n   FILE *f;\n#if defined(_WIN32) && defined(STBI_WINDOWS_UTF8)\n   wchar_t wMode[64];\n   wchar_t wFilename[1024];\n\tif (0 == MultiByteToWideChar(65001 /* UTF8 */, 0, filename, -1, wFilename, sizeof(wFilename)/sizeof(*wFilename)))\n      return 0;\n\n\tif (0 == MultiByteToWideChar(65001 /* UTF8 */, 0, mode, -1, wMode, sizeof(wMode)/sizeof(*wMode)))\n      return 0;\n\n#if defined(_MSC_VER) && _MSC_VER >= 1400\n\tif (0 != _wfopen_s(&f, wFilename, wMode))\n\t\tf = 0;\n#else\n   f = _wfopen(wFilename, wMode);\n#endif\n\n#elif defined(_MSC_VER) && _MSC_VER >= 1400\n   if (0 != fopen_s(&f, filename, mode))\n      f=0;\n#else\n   f = fopen(filename, mode);\n#endif\n   return f;\n}\n\n\nSTBIDEF stbi_uc *stbi_load(char const *filename, int *x, int *y, int *comp, int req_comp)\n{\n   FILE *f = stbi__fopen(filename, \"rb\");\n   unsigned char *result;\n   if (!f) return stbi__errpuc(\"can't fopen\", \"Unable to open file\");\n   result = stbi_load_from_file(f,x,y,comp,req_comp);\n   fclose(f);\n   return result;\n}\n\nSTBIDEF stbi_uc *stbi_load_from_file(FILE *f, int *x, int *y, int *comp, int 
req_comp)\n{\n   unsigned char *result;\n   stbi__context s;\n   stbi__start_file(&s,f);\n   result = stbi__load_and_postprocess_8bit(&s,x,y,comp,req_comp);\n   if (result) {\n      // need to 'unget' all the characters in the IO buffer\n      fseek(f, - (int) (s.img_buffer_end - s.img_buffer), SEEK_CUR);\n   }\n   return result;\n}\n\nSTBIDEF stbi__uint16 *stbi_load_from_file_16(FILE *f, int *x, int *y, int *comp, int req_comp)\n{\n   stbi__uint16 *result;\n   stbi__context s;\n   stbi__start_file(&s,f);\n   result = stbi__load_and_postprocess_16bit(&s,x,y,comp,req_comp);\n   if (result) {\n      // need to 'unget' all the characters in the IO buffer\n      fseek(f, - (int) (s.img_buffer_end - s.img_buffer), SEEK_CUR);\n   }\n   return result;\n}\n\nSTBIDEF stbi_us *stbi_load_16(char const *filename, int *x, int *y, int *comp, int req_comp)\n{\n   FILE *f = stbi__fopen(filename, \"rb\");\n   stbi__uint16 *result;\n   if (!f) return (stbi_us *) stbi__errpuc(\"can't fopen\", \"Unable to open file\");\n   result = stbi_load_from_file_16(f,x,y,comp,req_comp);\n   fclose(f);\n   return result;\n}\n\n\n#endif //!STBI_NO_STDIO\n\nSTBIDEF stbi_us *stbi_load_16_from_memory(stbi_uc const *buffer, int len, int *x, int *y, int *channels_in_file, int desired_channels)\n{\n   stbi__context s;\n   stbi__start_mem(&s,buffer,len);\n   return stbi__load_and_postprocess_16bit(&s,x,y,channels_in_file,desired_channels);\n}\n\nSTBIDEF stbi_us *stbi_load_16_from_callbacks(stbi_io_callbacks const *clbk, void *user, int *x, int *y, int *channels_in_file, int desired_channels)\n{\n   stbi__context s;\n   stbi__start_callbacks(&s, (stbi_io_callbacks *)clbk, user);\n   return stbi__load_and_postprocess_16bit(&s,x,y,channels_in_file,desired_channels);\n}\n\nSTBIDEF stbi_uc *stbi_load_from_memory(stbi_uc const *buffer, int len, int *x, int *y, int *comp, int req_comp)\n{\n   stbi__context s;\n   stbi__start_mem(&s,buffer,len);\n   return 
stbi__load_and_postprocess_8bit(&s,x,y,comp,req_comp);\n}\n\nSTBIDEF stbi_uc *stbi_load_from_callbacks(stbi_io_callbacks const *clbk, void *user, int *x, int *y, int *comp, int req_comp)\n{\n   stbi__context s;\n   stbi__start_callbacks(&s, (stbi_io_callbacks *) clbk, user);\n   return stbi__load_and_postprocess_8bit(&s,x,y,comp,req_comp);\n}\n\n#ifndef STBI_NO_GIF\nSTBIDEF stbi_uc *stbi_load_gif_from_memory(stbi_uc const *buffer, int len, int **delays, int *x, int *y, int *z, int *comp, int req_comp)\n{\n   unsigned char *result;\n   stbi__context s;\n   stbi__start_mem(&s,buffer,len);\n\n   result = (unsigned char*) stbi__load_gif_main(&s, delays, x, y, z, comp, req_comp);\n   if (stbi__vertically_flip_on_load) {\n      stbi__vertical_flip_slices( result, *x, *y, *z, *comp );\n   }\n\n   return result;\n}\n#endif\n\n#ifndef STBI_NO_LINEAR\nstatic float *stbi__loadf_main(stbi__context *s, int *x, int *y, int *comp, int req_comp)\n{\n   unsigned char *data;\n   #ifndef STBI_NO_HDR\n   if (stbi__hdr_test(s)) {\n      stbi__result_info ri;\n      float *hdr_data = stbi__hdr_load(s,x,y,comp,req_comp, &ri);\n      if (hdr_data)\n         stbi__float_postprocess(hdr_data,x,y,comp,req_comp);\n      return hdr_data;\n   }\n   #endif\n   data = stbi__load_and_postprocess_8bit(s, x, y, comp, req_comp);\n   if (data)\n      return stbi__ldr_to_hdr(data, *x, *y, req_comp ? 
req_comp : *comp);\n   return stbi__errpf(\"unknown image type\", \"Image not of any known type, or corrupt\");\n}\n\nSTBIDEF float *stbi_loadf_from_memory(stbi_uc const *buffer, int len, int *x, int *y, int *comp, int req_comp)\n{\n   stbi__context s;\n   stbi__start_mem(&s,buffer,len);\n   return stbi__loadf_main(&s,x,y,comp,req_comp);\n}\n\nSTBIDEF float *stbi_loadf_from_callbacks(stbi_io_callbacks const *clbk, void *user, int *x, int *y, int *comp, int req_comp)\n{\n   stbi__context s;\n   stbi__start_callbacks(&s, (stbi_io_callbacks *) clbk, user);\n   return stbi__loadf_main(&s,x,y,comp,req_comp);\n}\n\n#ifndef STBI_NO_STDIO\nSTBIDEF float *stbi_loadf(char const *filename, int *x, int *y, int *comp, int req_comp)\n{\n   float *result;\n   FILE *f = stbi__fopen(filename, \"rb\");\n   if (!f) return stbi__errpf(\"can't fopen\", \"Unable to open file\");\n   result = stbi_loadf_from_file(f,x,y,comp,req_comp);\n   fclose(f);\n   return result;\n}\n\nSTBIDEF float *stbi_loadf_from_file(FILE *f, int *x, int *y, int *comp, int req_comp)\n{\n   stbi__context s;\n   stbi__start_file(&s,f);\n   return stbi__loadf_main(&s,x,y,comp,req_comp);\n}\n#endif // !STBI_NO_STDIO\n\n#endif // !STBI_NO_LINEAR\n\n// these is-hdr-or-not queries are defined independent of whether STBI_NO_LINEAR is\n// defined, for API simplicity; if STBI_NO_HDR is defined, they always\n// report false!\n\nSTBIDEF int stbi_is_hdr_from_memory(stbi_uc const *buffer, int len)\n{\n   #ifndef STBI_NO_HDR\n   stbi__context s;\n   stbi__start_mem(&s,buffer,len);\n   return stbi__hdr_test(&s);\n   #else\n   STBI_NOTUSED(buffer);\n   STBI_NOTUSED(len);\n   return 0;\n   #endif\n}\n\n#ifndef STBI_NO_STDIO\nSTBIDEF int      stbi_is_hdr          (char const *filename)\n{\n   FILE *f = stbi__fopen(filename, \"rb\");\n   int result=0;\n   if (f) {\n      result = stbi_is_hdr_from_file(f);\n      fclose(f);\n   }\n   return result;\n}\n\nSTBIDEF int stbi_is_hdr_from_file(FILE *f)\n{\n   #ifndef STBI_NO_HDR\n   long pos = 
ftell(f);\n   int res;\n   stbi__context s;\n   stbi__start_file(&s,f);\n   res = stbi__hdr_test(&s);\n   fseek(f, pos, SEEK_SET);\n   return res;\n   #else\n   STBI_NOTUSED(f);\n   return 0;\n   #endif\n}\n#endif // !STBI_NO_STDIO\n\nSTBIDEF int      stbi_is_hdr_from_callbacks(stbi_io_callbacks const *clbk, void *user)\n{\n   #ifndef STBI_NO_HDR\n   stbi__context s;\n   stbi__start_callbacks(&s, (stbi_io_callbacks *) clbk, user);\n   return stbi__hdr_test(&s);\n   #else\n   STBI_NOTUSED(clbk);\n   STBI_NOTUSED(user);\n   return 0;\n   #endif\n}\n\n#ifndef STBI_NO_LINEAR\nstatic float stbi__l2h_gamma=2.2f, stbi__l2h_scale=1.0f;\n\nSTBIDEF void   stbi_ldr_to_hdr_gamma(float gamma) { stbi__l2h_gamma = gamma; }\nSTBIDEF void   stbi_ldr_to_hdr_scale(float scale) { stbi__l2h_scale = scale; }\n#endif\n\nstatic float stbi__h2l_gamma_i=1.0f/2.2f, stbi__h2l_scale_i=1.0f;\n\nSTBIDEF void   stbi_hdr_to_ldr_gamma(float gamma) { stbi__h2l_gamma_i = 1/gamma; }\nSTBIDEF void   stbi_hdr_to_ldr_scale(float scale) { stbi__h2l_scale_i = 1/scale; }\n\n\n//////////////////////////////////////////////////////////////////////////////\n//\n// Common code used by all image loaders\n//\n\nenum\n{\n   STBI__SCAN_load=0,\n   STBI__SCAN_type,\n   STBI__SCAN_header\n};\n\nstatic void stbi__refill_buffer(stbi__context *s)\n{\n   int n = (s->io.read)(s->io_user_data,(char*)s->buffer_start,s->buflen);\n   s->callback_already_read += (int) (s->img_buffer - s->img_buffer_original);\n   if (n == 0) {\n      // at end of file, treat same as if from memory, but need to handle case\n      // where s->img_buffer isn't pointing to safe memory, e.g. 
0-byte file\n      s->read_from_callbacks = 0;\n      s->img_buffer = s->buffer_start;\n      s->img_buffer_end = s->buffer_start+1;\n      *s->img_buffer = 0;\n   } else {\n      s->img_buffer = s->buffer_start;\n      s->img_buffer_end = s->buffer_start + n;\n   }\n}\n\nstbi_inline static stbi_uc stbi__get8(stbi__context *s)\n{\n   if (s->img_buffer < s->img_buffer_end)\n      return *s->img_buffer++;\n   if (s->read_from_callbacks) {\n      stbi__refill_buffer(s);\n      return *s->img_buffer++;\n   }\n   return 0;\n}\n\n#if defined(STBI_NO_JPEG) && defined(STBI_NO_HDR) && defined(STBI_NO_PIC) && defined(STBI_NO_PNM)\n// nothing\n#else\nstbi_inline static int stbi__at_eof(stbi__context *s)\n{\n   if (s->io.read) {\n      if (!(s->io.eof)(s->io_user_data)) return 0;\n      // if feof() is true, check if buffer = end\n      // special case: we've only got the special 0 character at the end\n      if (s->read_from_callbacks == 0) return 1;\n   }\n\n   return s->img_buffer >= s->img_buffer_end;\n}\n#endif\n\n#if defined(STBI_NO_JPEG) && defined(STBI_NO_PNG) && defined(STBI_NO_BMP) && defined(STBI_NO_PSD) && defined(STBI_NO_TGA) && defined(STBI_NO_GIF) && defined(STBI_NO_PIC)\n// nothing\n#else\nstatic void stbi__skip(stbi__context *s, int n)\n{\n   if (n == 0) return;  // already there!\n   if (n < 0) {\n      s->img_buffer = s->img_buffer_end;\n      return;\n   }\n   if (s->io.read) {\n      int blen = (int) (s->img_buffer_end - s->img_buffer);\n      if (blen < n) {\n         s->img_buffer = s->img_buffer_end;\n         (s->io.skip)(s->io_user_data, n - blen);\n         return;\n      }\n   }\n   s->img_buffer += n;\n}\n#endif\n\n#if defined(STBI_NO_PNG) && defined(STBI_NO_TGA) && defined(STBI_NO_HDR) && defined(STBI_NO_PNM)\n// nothing\n#else\nstatic int stbi__getn(stbi__context *s, stbi_uc *buffer, int n)\n{\n   if (s->io.read) {\n      int blen = (int) (s->img_buffer_end - s->img_buffer);\n      if (blen < n) {\n         int res, count;\n\n         
memcpy(buffer, s->img_buffer, blen);\n\n         count = (s->io.read)(s->io_user_data, (char*) buffer + blen, n - blen);\n         res = (count == (n-blen));\n         s->img_buffer = s->img_buffer_end;\n         return res;\n      }\n   }\n\n   if (s->img_buffer+n <= s->img_buffer_end) {\n      memcpy(buffer, s->img_buffer, n);\n      s->img_buffer += n;\n      return 1;\n   } else\n      return 0;\n}\n#endif\n\n#if defined(STBI_NO_JPEG) && defined(STBI_NO_PNG) && defined(STBI_NO_PSD) && defined(STBI_NO_PIC)\n// nothing\n#else\nstatic int stbi__get16be(stbi__context *s)\n{\n   int z = stbi__get8(s);\n   return (z << 8) + stbi__get8(s);\n}\n#endif\n\n#if defined(STBI_NO_PNG) && defined(STBI_NO_PSD) && defined(STBI_NO_PIC)\n// nothing\n#else\nstatic stbi__uint32 stbi__get32be(stbi__context *s)\n{\n   stbi__uint32 z = stbi__get16be(s);\n   return (z << 16) + stbi__get16be(s);\n}\n#endif\n\n#if defined(STBI_NO_BMP) && defined(STBI_NO_TGA) && defined(STBI_NO_GIF)\n// nothing\n#else\nstatic int stbi__get16le(stbi__context *s)\n{\n   int z = stbi__get8(s);\n   return z + (stbi__get8(s) << 8);\n}\n#endif\n\n#ifndef STBI_NO_BMP\nstatic stbi__uint32 stbi__get32le(stbi__context *s)\n{\n   stbi__uint32 z = stbi__get16le(s);\n   z += (stbi__uint32)stbi__get16le(s) << 16;\n   return z;\n}\n#endif\n\n#define STBI__BYTECAST(x)  ((stbi_uc) ((x) & 255))  // truncate int to byte without warnings\n\n#if defined(STBI_NO_JPEG) && defined(STBI_NO_PNG) && defined(STBI_NO_BMP) && defined(STBI_NO_PSD) && defined(STBI_NO_TGA) && defined(STBI_NO_GIF) && defined(STBI_NO_PIC) && defined(STBI_NO_PNM)\n// nothing\n#else\n//////////////////////////////////////////////////////////////////////////////\n//\n//  generic converter from built-in img_n to req_comp\n//    individual types do this automatically as much as possible (e.g. jpeg\n//    does all cases internally since it needs to colorspace convert anyway,\n//    and it never has alpha, so very few cases ). 
png can automatically\n//    interleave an alpha=255 channel, but falls back to this for other cases\n//\n//  assume data buffer is malloced, so malloc a new one and free that one\n//  only failure mode is malloc failing\n\nstatic stbi_uc stbi__compute_y(int r, int g, int b)\n{\n   return (stbi_uc) (((r*77) + (g*150) +  (29*b)) >> 8);\n}\n#endif\n\n#if defined(STBI_NO_PNG) && defined(STBI_NO_BMP) && defined(STBI_NO_PSD) && defined(STBI_NO_TGA) && defined(STBI_NO_GIF) && defined(STBI_NO_PIC) && defined(STBI_NO_PNM)\n// nothing\n#else\nstatic unsigned char *stbi__convert_format(unsigned char *data, int img_n, int req_comp, unsigned int x, unsigned int y)\n{\n   int i,j;\n   unsigned char *good;\n\n   if (req_comp == img_n) return data;\n   STBI_ASSERT(req_comp >= 1 && req_comp <= 4);\n\n   good = (unsigned char *) stbi__malloc_mad3(req_comp, x, y, 0);\n   if (good == NULL) {\n      STBI_FREE(data);\n      return stbi__errpuc(\"outofmem\", \"Out of memory\");\n   }\n\n   for (j=0; j < (int) y; ++j) {\n      unsigned char *src  = data + j * x * img_n   ;\n      unsigned char *dest = good + j * x * req_comp;\n\n      #define STBI__COMBO(a,b)  ((a)*8+(b))\n      #define STBI__CASE(a,b)   case STBI__COMBO(a,b): for(i=x-1; i >= 0; --i, src += a, dest += b)\n      // convert source image with img_n components to one with req_comp components;\n      // avoid switch per pixel, so use switch per scanline and massive macros\n      switch (STBI__COMBO(img_n, req_comp)) {\n         STBI__CASE(1,2) { dest[0]=src[0]; dest[1]=255;                                     } break;\n         STBI__CASE(1,3) { dest[0]=dest[1]=dest[2]=src[0];                                  } break;\n         STBI__CASE(1,4) { dest[0]=dest[1]=dest[2]=src[0]; dest[3]=255;                     } break;\n         STBI__CASE(2,1) { dest[0]=src[0];                                                  } break;\n         STBI__CASE(2,3) { dest[0]=dest[1]=dest[2]=src[0];                                  } break;\n       
  STBI__CASE(2,4) { dest[0]=dest[1]=dest[2]=src[0]; dest[3]=src[1];                  } break;\n         STBI__CASE(3,4) { dest[0]=src[0];dest[1]=src[1];dest[2]=src[2];dest[3]=255;        } break;\n         STBI__CASE(3,1) { dest[0]=stbi__compute_y(src[0],src[1],src[2]);                   } break;\n         STBI__CASE(3,2) { dest[0]=stbi__compute_y(src[0],src[1],src[2]); dest[1] = 255;    } break;\n         STBI__CASE(4,1) { dest[0]=stbi__compute_y(src[0],src[1],src[2]);                   } break;\n         STBI__CASE(4,2) { dest[0]=stbi__compute_y(src[0],src[1],src[2]); dest[1] = src[3]; } break;\n         STBI__CASE(4,3) { dest[0]=src[0];dest[1]=src[1];dest[2]=src[2];                    } break;\n         default: STBI_ASSERT(0); STBI_FREE(data); STBI_FREE(good); return stbi__errpuc(\"unsupported\", \"Unsupported format conversion\");\n      }\n      #undef STBI__CASE\n   }\n\n   STBI_FREE(data);\n   return good;\n}\n#endif\n\n#if defined(STBI_NO_PNG) && defined(STBI_NO_PSD)\n// nothing\n#else\nstatic stbi__uint16 stbi__compute_y_16(int r, int g, int b)\n{\n   return (stbi__uint16) (((r*77) + (g*150) +  (29*b)) >> 8);\n}\n#endif\n\n#if defined(STBI_NO_PNG) && defined(STBI_NO_PSD)\n// nothing\n#else\nstatic stbi__uint16 *stbi__convert_format16(stbi__uint16 *data, int img_n, int req_comp, unsigned int x, unsigned int y)\n{\n   int i,j;\n   stbi__uint16 *good;\n\n   if (req_comp == img_n) return data;\n   STBI_ASSERT(req_comp >= 1 && req_comp <= 4);\n\n   good = (stbi__uint16 *) stbi__malloc(req_comp * x * y * 2);\n   if (good == NULL) {\n      STBI_FREE(data);\n      return (stbi__uint16 *) stbi__errpuc(\"outofmem\", \"Out of memory\");\n   }\n\n   for (j=0; j < (int) y; ++j) {\n      stbi__uint16 *src  = data + j * x * img_n   ;\n      stbi__uint16 *dest = good + j * x * req_comp;\n\n      #define STBI__COMBO(a,b)  ((a)*8+(b))\n      #define STBI__CASE(a,b)   case STBI__COMBO(a,b): for(i=x-1; i >= 0; --i, src += a, dest += b)\n      // convert source image with 
img_n components to one with req_comp components;\n      // avoid switch per pixel, so use switch per scanline and massive macros\n      switch (STBI__COMBO(img_n, req_comp)) {\n         STBI__CASE(1,2) { dest[0]=src[0]; dest[1]=0xffff;                                     } break;\n         STBI__CASE(1,3) { dest[0]=dest[1]=dest[2]=src[0];                                     } break;\n         STBI__CASE(1,4) { dest[0]=dest[1]=dest[2]=src[0]; dest[3]=0xffff;                     } break;\n         STBI__CASE(2,1) { dest[0]=src[0];                                                     } break;\n         STBI__CASE(2,3) { dest[0]=dest[1]=dest[2]=src[0];                                     } break;\n         STBI__CASE(2,4) { dest[0]=dest[1]=dest[2]=src[0]; dest[3]=src[1];                     } break;\n         STBI__CASE(3,4) { dest[0]=src[0];dest[1]=src[1];dest[2]=src[2];dest[3]=0xffff;        } break;\n         STBI__CASE(3,1) { dest[0]=stbi__compute_y_16(src[0],src[1],src[2]);                   } break;\n         STBI__CASE(3,2) { dest[0]=stbi__compute_y_16(src[0],src[1],src[2]); dest[1] = 0xffff; } break;\n         STBI__CASE(4,1) { dest[0]=stbi__compute_y_16(src[0],src[1],src[2]);                   } break;\n         STBI__CASE(4,2) { dest[0]=stbi__compute_y_16(src[0],src[1],src[2]); dest[1] = src[3]; } break;\n         STBI__CASE(4,3) { dest[0]=src[0];dest[1]=src[1];dest[2]=src[2];                       } break;\n         default: STBI_ASSERT(0); STBI_FREE(data); STBI_FREE(good); return (stbi__uint16*) stbi__errpuc(\"unsupported\", \"Unsupported format conversion\");\n      }\n      #undef STBI__CASE\n   }\n\n   STBI_FREE(data);\n   return good;\n}\n#endif\n\n#ifndef STBI_NO_LINEAR\nstatic float   *stbi__ldr_to_hdr(stbi_uc *data, int x, int y, int comp)\n{\n   int i,k,n;\n   float *output;\n   if (!data) return NULL;\n   output = (float *) stbi__malloc_mad4(x, y, comp, sizeof(float), 0);\n   if (output == NULL) { STBI_FREE(data); return stbi__errpf(\"outofmem\", 
\"Out of memory\"); }\n   // compute number of non-alpha components\n   if (comp & 1) n = comp; else n = comp-1;\n   for (i=0; i < x*y; ++i) {\n      for (k=0; k < n; ++k) {\n         output[i*comp + k] = (float) (pow(data[i*comp+k]/255.0f, stbi__l2h_gamma) * stbi__l2h_scale);\n      }\n   }\n   if (n < comp) {\n      for (i=0; i < x*y; ++i) {\n         output[i*comp + n] = data[i*comp + n]/255.0f;\n      }\n   }\n   STBI_FREE(data);\n   return output;\n}\n#endif\n\n#ifndef STBI_NO_HDR\n#define stbi__float2int(x)   ((int) (x))\nstatic stbi_uc *stbi__hdr_to_ldr(float   *data, int x, int y, int comp)\n{\n   int i,k,n;\n   stbi_uc *output;\n   if (!data) return NULL;\n   output = (stbi_uc *) stbi__malloc_mad3(x, y, comp, 0);\n   if (output == NULL) { STBI_FREE(data); return stbi__errpuc(\"outofmem\", \"Out of memory\"); }\n   // compute number of non-alpha components\n   if (comp & 1) n = comp; else n = comp-1;\n   for (i=0; i < x*y; ++i) {\n      for (k=0; k < n; ++k) {\n         float z = (float) pow(data[i*comp+k]*stbi__h2l_scale_i, stbi__h2l_gamma_i) * 255 + 0.5f;\n         if (z < 0) z = 0;\n         if (z > 255) z = 255;\n         output[i*comp + k] = (stbi_uc) stbi__float2int(z);\n      }\n      if (k < comp) {\n         float z = data[i*comp+k] * 255 + 0.5f;\n         if (z < 0) z = 0;\n         if (z > 255) z = 255;\n         output[i*comp + k] = (stbi_uc) stbi__float2int(z);\n      }\n   }\n   STBI_FREE(data);\n   return output;\n}\n#endif\n\n//////////////////////////////////////////////////////////////////////////////\n//\n//  \"baseline\" JPEG/JFIF decoder\n//\n//    simple implementation\n//      - doesn't support delayed output of y-dimension\n//      - simple interface (only one output format: 8-bit interleaved RGB)\n//      - doesn't try to recover corrupt jpegs\n//      - doesn't allow partial loading, loading multiple at once\n//      - still fast on x86 (copying globals into locals doesn't help x86)\n//      - allocates lots of intermediate memory 
(full size of all components)\n//        - non-interleaved case requires this anyway\n//        - allows good upsampling (see next)\n//    high-quality\n//      - upsampled channels are bilinearly interpolated, even across blocks\n//      - quality integer IDCT derived from IJG's 'slow'\n//    performance\n//      - fast huffman; reasonable integer IDCT\n//      - some SIMD kernels for common paths on targets with SSE2/NEON\n//      - uses a lot of intermediate memory, could cache poorly\n\n#ifndef STBI_NO_JPEG\n\n// huffman decoding acceleration\n#define FAST_BITS   9  // larger handles more cases; smaller stomps less cache\n\ntypedef struct\n{\n   stbi_uc  fast[1 << FAST_BITS];\n   // weirdly, repacking this into AoS is a 10% speed loss, instead of a win\n   stbi__uint16 code[256];\n   stbi_uc  values[256];\n   stbi_uc  size[257];\n   unsigned int maxcode[18];\n   int    delta[17];   // old 'firstsymbol' - old 'firstcode'\n} stbi__huffman;\n\ntypedef struct\n{\n   stbi__context *s;\n   stbi__huffman huff_dc[4];\n   stbi__huffman huff_ac[4];\n   stbi__uint16 dequant[4][64];\n   stbi__int16 fast_ac[4][1 << FAST_BITS];\n\n// sizes for components, interleaved MCUs\n   int img_h_max, img_v_max;\n   int img_mcu_x, img_mcu_y;\n   int img_mcu_w, img_mcu_h;\n\n// definition of jpeg image component\n   struct\n   {\n      int id;\n      int h,v;\n      int tq;\n      int hd,ha;\n      int dc_pred;\n\n      int x,y,w2,h2;\n      stbi_uc *data;\n      void *raw_data, *raw_coeff;\n      stbi_uc *linebuf;\n      short   *coeff;   // progressive only\n      int      coeff_w, coeff_h; // number of 8x8 coefficient blocks\n   } img_comp[4];\n\n   stbi__uint32   code_buffer; // jpeg entropy-coded buffer\n   int            code_bits;   // number of valid bits\n   unsigned char  marker;      // marker seen while filling entropy buffer\n   int            nomore;      // flag if we saw a marker so must stop\n\n   int            progressive;\n   int            spec_start;\n   int        
    spec_end;\n   int            succ_high;\n   int            succ_low;\n   int            eob_run;\n   int            jfif;\n   int            app14_color_transform; // Adobe APP14 tag\n   int            rgb;\n\n   int scan_n, order[4];\n   int restart_interval, todo;\n\n// kernels\n   void (*idct_block_kernel)(stbi_uc *out, int out_stride, short data[64]);\n   void (*YCbCr_to_RGB_kernel)(stbi_uc *out, const stbi_uc *y, const stbi_uc *pcb, const stbi_uc *pcr, int count, int step);\n   stbi_uc *(*resample_row_hv_2_kernel)(stbi_uc *out, stbi_uc *in_near, stbi_uc *in_far, int w, int hs);\n} stbi__jpeg;\n\nstatic int stbi__build_huffman(stbi__huffman *h, int *count)\n{\n   int i,j,k=0;\n   unsigned int code;\n   // build size list for each symbol (from JPEG spec)\n   for (i=0; i < 16; ++i) {\n      for (j=0; j < count[i]; ++j) {\n         h->size[k++] = (stbi_uc) (i+1);\n         if(k >= 257) return stbi__err(\"bad size list\",\"Corrupt JPEG\");\n      }\n   }\n   h->size[k] = 0;\n\n   // compute actual symbols (from jpeg spec)\n   code = 0;\n   k = 0;\n   for(j=1; j <= 16; ++j) {\n      // compute delta to add to code to compute symbol id\n      h->delta[j] = k - code;\n      if (h->size[k] == j) {\n         while (h->size[k] == j)\n            h->code[k++] = (stbi__uint16) (code++);\n         if (code-1 >= (1u << j)) return stbi__err(\"bad code lengths\",\"Corrupt JPEG\");\n      }\n      // compute largest code + 1 for this size, preshifted as needed later\n      h->maxcode[j] = code << (16-j);\n      code <<= 1;\n   }\n   h->maxcode[j] = 0xffffffff;\n\n   // build non-spec acceleration table; 255 is flag for not-accelerated\n   memset(h->fast, 255, 1 << FAST_BITS);\n   for (i=0; i < k; ++i) {\n      int s = h->size[i];\n      if (s <= FAST_BITS) {\n         int c = h->code[i] << (FAST_BITS-s);\n         int m = 1 << (FAST_BITS-s);\n         for (j=0; j < m; ++j) {\n            h->fast[c+j] = (stbi_uc) i;\n         }\n      }\n   }\n   return 1;\n}\n\n// build a 
table that decodes both magnitude and value of small ACs in\n// one go.\nstatic void stbi__build_fast_ac(stbi__int16 *fast_ac, stbi__huffman *h)\n{\n   int i;\n   for (i=0; i < (1 << FAST_BITS); ++i) {\n      stbi_uc fast = h->fast[i];\n      fast_ac[i] = 0;\n      if (fast < 255) {\n         int rs = h->values[fast];\n         int run = (rs >> 4) & 15;\n         int magbits = rs & 15;\n         int len = h->size[fast];\n\n         if (magbits && len + magbits <= FAST_BITS) {\n            // magnitude code followed by receive_extend code\n            int k = ((i << len) & ((1 << FAST_BITS) - 1)) >> (FAST_BITS - magbits);\n            int m = 1 << (magbits - 1);\n            if (k < m) k += (~0U << magbits) + 1;\n            // if the result is small enough, we can fit it in fast_ac table\n            if (k >= -128 && k <= 127)\n               fast_ac[i] = (stbi__int16) ((k * 256) + (run * 16) + (len + magbits));\n         }\n      }\n   }\n}\n\nstatic void stbi__grow_buffer_unsafe(stbi__jpeg *j)\n{\n   do {\n      unsigned int b = j->nomore ? 
0 : stbi__get8(j->s);\n      if (b == 0xff) {\n         int c = stbi__get8(j->s);\n         while (c == 0xff) c = stbi__get8(j->s); // consume fill bytes\n         if (c != 0) {\n            j->marker = (unsigned char) c;\n            j->nomore = 1;\n            return;\n         }\n      }\n      j->code_buffer |= b << (24 - j->code_bits);\n      j->code_bits += 8;\n   } while (j->code_bits <= 24);\n}\n\n// (1 << n) - 1\nstatic const stbi__uint32 stbi__bmask[17]={0,1,3,7,15,31,63,127,255,511,1023,2047,4095,8191,16383,32767,65535};\n\n// decode a jpeg huffman value from the bitstream\nstbi_inline static int stbi__jpeg_huff_decode(stbi__jpeg *j, stbi__huffman *h)\n{\n   unsigned int temp;\n   int c,k;\n\n   if (j->code_bits < 16) stbi__grow_buffer_unsafe(j);\n\n   // look at the top FAST_BITS and determine what symbol ID it is,\n   // if the code is <= FAST_BITS\n   c = (j->code_buffer >> (32 - FAST_BITS)) & ((1 << FAST_BITS)-1);\n   k = h->fast[c];\n   if (k < 255) {\n      int s = h->size[k];\n      if (s > j->code_bits)\n         return -1;\n      j->code_buffer <<= s;\n      j->code_bits -= s;\n      return h->values[k];\n   }\n\n   // naive test is to shift the code_buffer down so k bits are\n   // valid, then test against maxcode. To speed this up, we've\n   // preshifted maxcode left so that it has (16-k) 0s at the\n   // end; in other words, regardless of the number of bits, it\n   // wants to be compared against something shifted to have 16;\n   // that way we don't need to shift inside the loop.\n   temp = j->code_buffer >> 16;\n   for (k=FAST_BITS+1 ; ; ++k)\n      if (temp < h->maxcode[k])\n         break;\n   if (k == 17) {\n      // error! 
code not found\n      j->code_bits -= 16;\n      return -1;\n   }\n\n   if (k > j->code_bits)\n      return -1;\n\n   // convert the huffman code to the symbol id\n   c = ((j->code_buffer >> (32 - k)) & stbi__bmask[k]) + h->delta[k];\n   if(c < 0 || c >= 256) // symbol id out of bounds!\n       return -1;\n   STBI_ASSERT((((j->code_buffer) >> (32 - h->size[c])) & stbi__bmask[h->size[c]]) == h->code[c]);\n\n   // convert the id to a symbol\n   j->code_bits -= k;\n   j->code_buffer <<= k;\n   return h->values[c];\n}\n\n// bias[n] = (-1<<n) + 1\nstatic const int stbi__jbias[16] = {0,-1,-3,-7,-15,-31,-63,-127,-255,-511,-1023,-2047,-4095,-8191,-16383,-32767};\n\n// combined JPEG 'receive' and JPEG 'extend', since baseline\n// always extends everything it receives.\nstbi_inline static int stbi__extend_receive(stbi__jpeg *j, int n)\n{\n   unsigned int k;\n   int sgn;\n   if (j->code_bits < n) stbi__grow_buffer_unsafe(j);\n   if (j->code_bits < n) return 0; // ran out of bits from stream, return 0s instead of continuing\n\n   sgn = j->code_buffer >> 31; // sign bit always in MSB; 0 if MSB clear (positive), 1 if MSB set (negative)\n   k = stbi_lrot(j->code_buffer, n);\n   j->code_buffer = k & ~stbi__bmask[n];\n   k &= stbi__bmask[n];\n   j->code_bits -= n;\n   return k + (stbi__jbias[n] & (sgn - 1));\n}\n\n// get some unsigned bits\nstbi_inline static int stbi__jpeg_get_bits(stbi__jpeg *j, int n)\n{\n   unsigned int k;\n   if (j->code_bits < n) stbi__grow_buffer_unsafe(j);\n   if (j->code_bits < n) return 0; // ran out of bits from stream, return 0s instead of continuing\n   k = stbi_lrot(j->code_buffer, n);\n   j->code_buffer = k & ~stbi__bmask[n];\n   k &= stbi__bmask[n];\n   j->code_bits -= n;\n   return k;\n}\n\nstbi_inline static int stbi__jpeg_get_bit(stbi__jpeg *j)\n{\n   unsigned int k;\n   if (j->code_bits < 1) stbi__grow_buffer_unsafe(j);\n   if (j->code_bits < 1) return 0; // ran out of bits from stream, return 0s instead of continuing\n   k = j->code_buffer;\n   
j->code_buffer <<= 1;\n   --j->code_bits;\n   return k & 0x80000000;\n}\n\n// given a value that's at position X in the zigzag stream,\n// where does it appear in the 8x8 matrix coded as row-major?\nstatic const stbi_uc stbi__jpeg_dezigzag[64+15] =\n{\n    0,  1,  8, 16,  9,  2,  3, 10,\n   17, 24, 32, 25, 18, 11,  4,  5,\n   12, 19, 26, 33, 40, 48, 41, 34,\n   27, 20, 13,  6,  7, 14, 21, 28,\n   35, 42, 49, 56, 57, 50, 43, 36,\n   29, 22, 15, 23, 30, 37, 44, 51,\n   58, 59, 52, 45, 38, 31, 39, 46,\n   53, 60, 61, 54, 47, 55, 62, 63,\n   // let corrupt input sample past end\n   63, 63, 63, 63, 63, 63, 63, 63,\n   63, 63, 63, 63, 63, 63, 63\n};\n\n// decode one 64-entry block--\nstatic int stbi__jpeg_decode_block(stbi__jpeg *j, short data[64], stbi__huffman *hdc, stbi__huffman *hac, stbi__int16 *fac, int b, stbi__uint16 *dequant)\n{\n   int diff,dc,k;\n   int t;\n\n   if (j->code_bits < 16) stbi__grow_buffer_unsafe(j);\n   t = stbi__jpeg_huff_decode(j, hdc);\n   if (t < 0 || t > 15) return stbi__err(\"bad huffman code\",\"Corrupt JPEG\");\n\n   // 0 all the ac values now so we can do it 32-bits at a time\n   memset(data,0,64*sizeof(data[0]));\n\n   diff = t ? 
stbi__extend_receive(j, t) : 0;\n   if (!stbi__addints_valid(j->img_comp[b].dc_pred, diff)) return stbi__err(\"bad delta\",\"Corrupt JPEG\");\n   dc = j->img_comp[b].dc_pred + diff;\n   j->img_comp[b].dc_pred = dc;\n   if (!stbi__mul2shorts_valid(dc, dequant[0])) return stbi__err(\"can't merge dc and ac\", \"Corrupt JPEG\");\n   data[0] = (short) (dc * dequant[0]);\n\n   // decode AC components, see JPEG spec\n   k = 1;\n   do {\n      unsigned int zig;\n      int c,r,s;\n      if (j->code_bits < 16) stbi__grow_buffer_unsafe(j);\n      c = (j->code_buffer >> (32 - FAST_BITS)) & ((1 << FAST_BITS)-1);\n      r = fac[c];\n      if (r) { // fast-AC path\n         k += (r >> 4) & 15; // run\n         s = r & 15; // combined length\n         if (s > j->code_bits) return stbi__err(\"bad huffman code\", \"Combined length longer than code bits available\");\n         j->code_buffer <<= s;\n         j->code_bits -= s;\n         // decode into unzigzag'd location\n         zig = stbi__jpeg_dezigzag[k++];\n         data[zig] = (short) ((r >> 8) * dequant[zig]);\n      } else {\n         int rs = stbi__jpeg_huff_decode(j, hac);\n         if (rs < 0) return stbi__err(\"bad huffman code\",\"Corrupt JPEG\");\n         s = rs & 15;\n         r = rs >> 4;\n         if (s == 0) {\n            if (rs != 0xf0) break; // end block\n            k += 16;\n         } else {\n            k += r;\n            // decode into unzigzag'd location\n            zig = stbi__jpeg_dezigzag[k++];\n            data[zig] = (short) (stbi__extend_receive(j,s) * dequant[zig]);\n         }\n      }\n   } while (k < 64);\n   return 1;\n}\n\nstatic int stbi__jpeg_decode_block_prog_dc(stbi__jpeg *j, short data[64], stbi__huffman *hdc, int b)\n{\n   int diff,dc;\n   int t;\n   if (j->spec_end != 0) return stbi__err(\"can't merge dc and ac\", \"Corrupt JPEG\");\n\n   if (j->code_bits < 16) stbi__grow_buffer_unsafe(j);\n\n   if (j->succ_high == 0) {\n      // first scan for DC coefficient, must be first\n      
memset(data,0,64*sizeof(data[0])); // 0 all the ac values now\n      t = stbi__jpeg_huff_decode(j, hdc);\n      if (t < 0 || t > 15) return stbi__err(\"can't merge dc and ac\", \"Corrupt JPEG\");\n      diff = t ? stbi__extend_receive(j, t) : 0;\n\n      if (!stbi__addints_valid(j->img_comp[b].dc_pred, diff)) return stbi__err(\"bad delta\", \"Corrupt JPEG\");\n      dc = j->img_comp[b].dc_pred + diff;\n      j->img_comp[b].dc_pred = dc;\n      if (!stbi__mul2shorts_valid(dc, 1 << j->succ_low)) return stbi__err(\"can't merge dc and ac\", \"Corrupt JPEG\");\n      data[0] = (short) (dc * (1 << j->succ_low));\n   } else {\n      // refinement scan for DC coefficient\n      if (stbi__jpeg_get_bit(j))\n         data[0] += (short) (1 << j->succ_low);\n   }\n   return 1;\n}\n\n// @OPTIMIZE: store non-zigzagged during the decode passes,\n// and only de-zigzag when dequantizing\nstatic int stbi__jpeg_decode_block_prog_ac(stbi__jpeg *j, short data[64], stbi__huffman *hac, stbi__int16 *fac)\n{\n   int k;\n   if (j->spec_start == 0) return stbi__err(\"can't merge dc and ac\", \"Corrupt JPEG\");\n\n   if (j->succ_high == 0) {\n      int shift = j->succ_low;\n\n      if (j->eob_run) {\n         --j->eob_run;\n         return 1;\n      }\n\n      k = j->spec_start;\n      do {\n         unsigned int zig;\n         int c,r,s;\n         if (j->code_bits < 16) stbi__grow_buffer_unsafe(j);\n         c = (j->code_buffer >> (32 - FAST_BITS)) & ((1 << FAST_BITS)-1);\n         r = fac[c];\n         if (r) { // fast-AC path\n            k += (r >> 4) & 15; // run\n            s = r & 15; // combined length\n            if (s > j->code_bits) return stbi__err(\"bad huffman code\", \"Combined length longer than code bits available\");\n            j->code_buffer <<= s;\n            j->code_bits -= s;\n            zig = stbi__jpeg_dezigzag[k++];\n            data[zig] = (short) ((r >> 8) * (1 << shift));\n         } else {\n            int rs = stbi__jpeg_huff_decode(j, hac);\n            if 
(rs < 0) return stbi__err(\"bad huffman code\",\"Corrupt JPEG\");\n            s = rs & 15;\n            r = rs >> 4;\n            if (s == 0) {\n               if (r < 15) {\n                  j->eob_run = (1 << r);\n                  if (r)\n                     j->eob_run += stbi__jpeg_get_bits(j, r);\n                  --j->eob_run;\n                  break;\n               }\n               k += 16;\n            } else {\n               k += r;\n               zig = stbi__jpeg_dezigzag[k++];\n               data[zig] = (short) (stbi__extend_receive(j,s) * (1 << shift));\n            }\n         }\n      } while (k <= j->spec_end);\n   } else {\n      // refinement scan for these AC coefficients\n\n      short bit = (short) (1 << j->succ_low);\n\n      if (j->eob_run) {\n         --j->eob_run;\n         for (k = j->spec_start; k <= j->spec_end; ++k) {\n            short *p = &data[stbi__jpeg_dezigzag[k]];\n            if (*p != 0)\n               if (stbi__jpeg_get_bit(j))\n                  if ((*p & bit)==0) {\n                     if (*p > 0)\n                        *p += bit;\n                     else\n                        *p -= bit;\n                  }\n         }\n      } else {\n         k = j->spec_start;\n         do {\n            int r,s;\n            int rs = stbi__jpeg_huff_decode(j, hac); // @OPTIMIZE see if we can use the fast path here, advance-by-r is so slow, eh\n            if (rs < 0) return stbi__err(\"bad huffman code\",\"Corrupt JPEG\");\n            s = rs & 15;\n            r = rs >> 4;\n            if (s == 0) {\n               if (r < 15) {\n                  j->eob_run = (1 << r) - 1;\n                  if (r)\n                     j->eob_run += stbi__jpeg_get_bits(j, r);\n                  r = 64; // force end of block\n               } else {\n                  // r=15 s=0 should write 16 0s, so we just do\n                  // a run of 15 0s and then write s (which is 0),\n                  // so we don't have to do anything 
special here\n               }\n            } else {\n               if (s != 1) return stbi__err(\"bad huffman code\", \"Corrupt JPEG\");\n               // sign bit\n               if (stbi__jpeg_get_bit(j))\n                  s = bit;\n               else\n                  s = -bit;\n            }\n\n            // advance by r\n            while (k <= j->spec_end) {\n               short *p = &data[stbi__jpeg_dezigzag[k++]];\n               if (*p != 0) {\n                  if (stbi__jpeg_get_bit(j))\n                     if ((*p & bit)==0) {\n                        if (*p > 0)\n                           *p += bit;\n                        else\n                           *p -= bit;\n                     }\n               } else {\n                  if (r == 0) {\n                     *p = (short) s;\n                     break;\n                  }\n                  --r;\n               }\n            }\n         } while (k <= j->spec_end);\n      }\n   }\n   return 1;\n}\n\n// take a -128..127 value and stbi__clamp it and convert to 0..255\nstbi_inline static stbi_uc stbi__clamp(int x)\n{\n   // trick to use a single test to catch both cases\n   if ((unsigned int) x > 255) {\n      if (x < 0) return 0;\n      if (x > 255) return 255;\n   }\n   return (stbi_uc) x;\n}\n\n#define stbi__f2f(x)  ((int) (((x) * 4096 + 0.5)))\n#define stbi__fsh(x)  ((x) * 4096)\n\n// derived from jidctint -- DCT_ISLOW\n#define STBI__IDCT_1D(s0,s1,s2,s3,s4,s5,s6,s7) \\\n   int t0,t1,t2,t3,p1,p2,p3,p4,p5,x0,x1,x2,x3; \\\n   p2 = s2;                                    \\\n   p3 = s6;                                    \\\n   p1 = (p2+p3) * stbi__f2f(0.5411961f);       \\\n   t2 = p1 + p3*stbi__f2f(-1.847759065f);      \\\n   t3 = p1 + p2*stbi__f2f( 0.765366865f);      \\\n   p2 = s0;                                    \\\n   p3 = s4;                                    \\\n   t0 = stbi__fsh(p2+p3);                      \\\n   t1 = stbi__fsh(p2-p3);                      \\\n   x0 = 
t0+t3;                                 \\\n   x3 = t0-t3;                                 \\\n   x1 = t1+t2;                                 \\\n   x2 = t1-t2;                                 \\\n   t0 = s7;                                    \\\n   t1 = s5;                                    \\\n   t2 = s3;                                    \\\n   t3 = s1;                                    \\\n   p3 = t0+t2;                                 \\\n   p4 = t1+t3;                                 \\\n   p1 = t0+t3;                                 \\\n   p2 = t1+t2;                                 \\\n   p5 = (p3+p4)*stbi__f2f( 1.175875602f);      \\\n   t0 = t0*stbi__f2f( 0.298631336f);           \\\n   t1 = t1*stbi__f2f( 2.053119869f);           \\\n   t2 = t2*stbi__f2f( 3.072711026f);           \\\n   t3 = t3*stbi__f2f( 1.501321110f);           \\\n   p1 = p5 + p1*stbi__f2f(-0.899976223f);      \\\n   p2 = p5 + p2*stbi__f2f(-2.562915447f);      \\\n   p3 = p3*stbi__f2f(-1.961570560f);           \\\n   p4 = p4*stbi__f2f(-0.390180644f);           \\\n   t3 += p1+p4;                                \\\n   t2 += p2+p3;                                \\\n   t1 += p2+p4;                                \\\n   t0 += p1+p3;\n\nstatic void stbi__idct_block(stbi_uc *out, int out_stride, short data[64])\n{\n   int i,val[64],*v=val;\n   stbi_uc *o;\n   short *d = data;\n\n   // columns\n   for (i=0; i < 8; ++i,++d, ++v) {\n      // if all zeroes, shortcut -- this avoids dequantizing 0s and IDCTing\n      if (d[ 8]==0 && d[16]==0 && d[24]==0 && d[32]==0\n           && d[40]==0 && d[48]==0 && d[56]==0) {\n         //    no shortcut                 0     seconds\n         //    (1|2|3|4|5|6|7)==0          0     seconds\n         //    all separate               -0.047 seconds\n         //    1 && 2|3 && 4|5 && 6|7:    -0.047 seconds\n         int dcterm = d[0]*4;\n         v[0] = v[8] = v[16] = v[24] = v[32] = v[40] = v[48] = v[56] = dcterm;\n      } else {\n         STBI__IDCT_1D(d[ 
0],d[ 8],d[16],d[24],d[32],d[40],d[48],d[56])\n         // constants scaled things up by 1<<12; let's bring them back\n         // down, but keep 2 extra bits of precision\n         x0 += 512; x1 += 512; x2 += 512; x3 += 512;\n         v[ 0] = (x0+t3) >> 10;\n         v[56] = (x0-t3) >> 10;\n         v[ 8] = (x1+t2) >> 10;\n         v[48] = (x1-t2) >> 10;\n         v[16] = (x2+t1) >> 10;\n         v[40] = (x2-t1) >> 10;\n         v[24] = (x3+t0) >> 10;\n         v[32] = (x3-t0) >> 10;\n      }\n   }\n\n   for (i=0, v=val, o=out; i < 8; ++i,v+=8,o+=out_stride) {\n      // no fast case since the first 1D IDCT spread components out\n      STBI__IDCT_1D(v[0],v[1],v[2],v[3],v[4],v[5],v[6],v[7])\n      // constants scaled things up by 1<<12, plus we had 1<<2 from first\n      // loop, plus horizontal and vertical each scale by sqrt(8) so together\n      // we've got an extra 1<<3, so 1<<17 total we need to remove.\n      // so we want to round that, which means adding 0.5 * 1<<17,\n      // aka 65536. Also, we'll end up with -128 to 127 that we want\n      // to encode as 0..255 by adding 128, so we'll add that before the shift\n      x0 += 65536 + (128<<17);\n      x1 += 65536 + (128<<17);\n      x2 += 65536 + (128<<17);\n      x3 += 65536 + (128<<17);\n      // tried computing the shifts into temps, or'ing the temps to see\n      // if any were out of range, but that was slower\n      o[0] = stbi__clamp((x0+t3) >> 17);\n      o[7] = stbi__clamp((x0-t3) >> 17);\n      o[1] = stbi__clamp((x1+t2) >> 17);\n      o[6] = stbi__clamp((x1-t2) >> 17);\n      o[2] = stbi__clamp((x2+t1) >> 17);\n      o[5] = stbi__clamp((x2-t1) >> 17);\n      o[3] = stbi__clamp((x3+t0) >> 17);\n      o[4] = stbi__clamp((x3-t0) >> 17);\n   }\n}\n\n#ifdef STBI_SSE2\n// sse2 integer IDCT. 
not the fastest possible implementation but it\n// produces bit-identical results to the generic C version so it's\n// fully \"transparent\".\nstatic void stbi__idct_simd(stbi_uc *out, int out_stride, short data[64])\n{\n   // This is constructed to match our regular (generic) integer IDCT exactly.\n   __m128i row0, row1, row2, row3, row4, row5, row6, row7;\n   __m128i tmp;\n\n   // dot product constant: even elems=x, odd elems=y\n   #define dct_const(x,y)  _mm_setr_epi16((x),(y),(x),(y),(x),(y),(x),(y))\n\n   // out(0) = c0[even]*x + c0[odd]*y   (c0, x, y 16-bit, out 32-bit)\n   // out(1) = c1[even]*x + c1[odd]*y\n   #define dct_rot(out0,out1, x,y,c0,c1) \\\n      __m128i c0##lo = _mm_unpacklo_epi16((x),(y)); \\\n      __m128i c0##hi = _mm_unpackhi_epi16((x),(y)); \\\n      __m128i out0##_l = _mm_madd_epi16(c0##lo, c0); \\\n      __m128i out0##_h = _mm_madd_epi16(c0##hi, c0); \\\n      __m128i out1##_l = _mm_madd_epi16(c0##lo, c1); \\\n      __m128i out1##_h = _mm_madd_epi16(c0##hi, c1)\n\n   // out = in << 12  (in 16-bit, out 32-bit)\n   #define dct_widen(out, in) \\\n      __m128i out##_l = _mm_srai_epi32(_mm_unpacklo_epi16(_mm_setzero_si128(), (in)), 4); \\\n      __m128i out##_h = _mm_srai_epi32(_mm_unpackhi_epi16(_mm_setzero_si128(), (in)), 4)\n\n   // wide add\n   #define dct_wadd(out, a, b) \\\n      __m128i out##_l = _mm_add_epi32(a##_l, b##_l); \\\n      __m128i out##_h = _mm_add_epi32(a##_h, b##_h)\n\n   // wide sub\n   #define dct_wsub(out, a, b) \\\n      __m128i out##_l = _mm_sub_epi32(a##_l, b##_l); \\\n      __m128i out##_h = _mm_sub_epi32(a##_h, b##_h)\n\n   // butterfly a/b, add bias, then shift by \"s\" and pack\n   #define dct_bfly32o(out0, out1, a,b,bias,s) \\\n      { \\\n         __m128i abiased_l = _mm_add_epi32(a##_l, bias); \\\n         __m128i abiased_h = _mm_add_epi32(a##_h, bias); \\\n         dct_wadd(sum, abiased, b); \\\n         dct_wsub(dif, abiased, b); \\\n         out0 = _mm_packs_epi32(_mm_srai_epi32(sum_l, s), 
_mm_srai_epi32(sum_h, s)); \\\n         out1 = _mm_packs_epi32(_mm_srai_epi32(dif_l, s), _mm_srai_epi32(dif_h, s)); \\\n      }\n\n   // 8-bit interleave step (for transposes)\n   #define dct_interleave8(a, b) \\\n      tmp = a; \\\n      a = _mm_unpacklo_epi8(a, b); \\\n      b = _mm_unpackhi_epi8(tmp, b)\n\n   // 16-bit interleave step (for transposes)\n   #define dct_interleave16(a, b) \\\n      tmp = a; \\\n      a = _mm_unpacklo_epi16(a, b); \\\n      b = _mm_unpackhi_epi16(tmp, b)\n\n   #define dct_pass(bias,shift) \\\n      { \\\n         /* even part */ \\\n         dct_rot(t2e,t3e, row2,row6, rot0_0,rot0_1); \\\n         __m128i sum04 = _mm_add_epi16(row0, row4); \\\n         __m128i dif04 = _mm_sub_epi16(row0, row4); \\\n         dct_widen(t0e, sum04); \\\n         dct_widen(t1e, dif04); \\\n         dct_wadd(x0, t0e, t3e); \\\n         dct_wsub(x3, t0e, t3e); \\\n         dct_wadd(x1, t1e, t2e); \\\n         dct_wsub(x2, t1e, t2e); \\\n         /* odd part */ \\\n         dct_rot(y0o,y2o, row7,row3, rot2_0,rot2_1); \\\n         dct_rot(y1o,y3o, row5,row1, rot3_0,rot3_1); \\\n         __m128i sum17 = _mm_add_epi16(row1, row7); \\\n         __m128i sum35 = _mm_add_epi16(row3, row5); \\\n         dct_rot(y4o,y5o, sum17,sum35, rot1_0,rot1_1); \\\n         dct_wadd(x4, y0o, y4o); \\\n         dct_wadd(x5, y1o, y5o); \\\n         dct_wadd(x6, y2o, y5o); \\\n         dct_wadd(x7, y3o, y4o); \\\n         dct_bfly32o(row0,row7, x0,x7,bias,shift); \\\n         dct_bfly32o(row1,row6, x1,x6,bias,shift); \\\n         dct_bfly32o(row2,row5, x2,x5,bias,shift); \\\n         dct_bfly32o(row3,row4, x3,x4,bias,shift); \\\n      }\n\n   __m128i rot0_0 = dct_const(stbi__f2f(0.5411961f), stbi__f2f(0.5411961f) + stbi__f2f(-1.847759065f));\n   __m128i rot0_1 = dct_const(stbi__f2f(0.5411961f) + stbi__f2f( 0.765366865f), stbi__f2f(0.5411961f));\n   __m128i rot1_0 = dct_const(stbi__f2f(1.175875602f) + stbi__f2f(-0.899976223f), stbi__f2f(1.175875602f));\n   __m128i rot1_1 = 
dct_const(stbi__f2f(1.175875602f), stbi__f2f(1.175875602f) + stbi__f2f(-2.562915447f));\n   __m128i rot2_0 = dct_const(stbi__f2f(-1.961570560f) + stbi__f2f( 0.298631336f), stbi__f2f(-1.961570560f));\n   __m128i rot2_1 = dct_const(stbi__f2f(-1.961570560f), stbi__f2f(-1.961570560f) + stbi__f2f( 3.072711026f));\n   __m128i rot3_0 = dct_const(stbi__f2f(-0.390180644f) + stbi__f2f( 2.053119869f), stbi__f2f(-0.390180644f));\n   __m128i rot3_1 = dct_const(stbi__f2f(-0.390180644f), stbi__f2f(-0.390180644f) + stbi__f2f( 1.501321110f));\n\n   // rounding biases in column/row passes, see stbi__idct_block for explanation.\n   __m128i bias_0 = _mm_set1_epi32(512);\n   __m128i bias_1 = _mm_set1_epi32(65536 + (128<<17));\n\n   // load\n   row0 = _mm_load_si128((const __m128i *) (data + 0*8));\n   row1 = _mm_load_si128((const __m128i *) (data + 1*8));\n   row2 = _mm_load_si128((const __m128i *) (data + 2*8));\n   row3 = _mm_load_si128((const __m128i *) (data + 3*8));\n   row4 = _mm_load_si128((const __m128i *) (data + 4*8));\n   row5 = _mm_load_si128((const __m128i *) (data + 5*8));\n   row6 = _mm_load_si128((const __m128i *) (data + 6*8));\n   row7 = _mm_load_si128((const __m128i *) (data + 7*8));\n\n   // column pass\n   dct_pass(bias_0, 10);\n\n   {\n      // 16bit 8x8 transpose pass 1\n      dct_interleave16(row0, row4);\n      dct_interleave16(row1, row5);\n      dct_interleave16(row2, row6);\n      dct_interleave16(row3, row7);\n\n      // transpose pass 2\n      dct_interleave16(row0, row2);\n      dct_interleave16(row1, row3);\n      dct_interleave16(row4, row6);\n      dct_interleave16(row5, row7);\n\n      // transpose pass 3\n      dct_interleave16(row0, row1);\n      dct_interleave16(row2, row3);\n      dct_interleave16(row4, row5);\n      dct_interleave16(row6, row7);\n   }\n\n   // row pass\n   dct_pass(bias_1, 17);\n\n   {\n      // pack\n      __m128i p0 = _mm_packus_epi16(row0, row1); // a0a1a2a3...a7b0b1b2b3...b7\n      __m128i p1 = _mm_packus_epi16(row2, row3);\n 
     __m128i p2 = _mm_packus_epi16(row4, row5);\n      __m128i p3 = _mm_packus_epi16(row6, row7);\n\n      // 8bit 8x8 transpose pass 1\n      dct_interleave8(p0, p2); // a0e0a1e1...\n      dct_interleave8(p1, p3); // c0g0c1g1...\n\n      // transpose pass 2\n      dct_interleave8(p0, p1); // a0c0e0g0...\n      dct_interleave8(p2, p3); // b0d0f0h0...\n\n      // transpose pass 3\n      dct_interleave8(p0, p2); // a0b0c0d0...\n      dct_interleave8(p1, p3); // a4b4c4d4...\n\n      // store\n      _mm_storel_epi64((__m128i *) out, p0); out += out_stride;\n      _mm_storel_epi64((__m128i *) out, _mm_shuffle_epi32(p0, 0x4e)); out += out_stride;\n      _mm_storel_epi64((__m128i *) out, p2); out += out_stride;\n      _mm_storel_epi64((__m128i *) out, _mm_shuffle_epi32(p2, 0x4e)); out += out_stride;\n      _mm_storel_epi64((__m128i *) out, p1); out += out_stride;\n      _mm_storel_epi64((__m128i *) out, _mm_shuffle_epi32(p1, 0x4e)); out += out_stride;\n      _mm_storel_epi64((__m128i *) out, p3); out += out_stride;\n      _mm_storel_epi64((__m128i *) out, _mm_shuffle_epi32(p3, 0x4e));\n   }\n\n#undef dct_const\n#undef dct_rot\n#undef dct_widen\n#undef dct_wadd\n#undef dct_wsub\n#undef dct_bfly32o\n#undef dct_interleave8\n#undef dct_interleave16\n#undef dct_pass\n}\n\n#endif // STBI_SSE2\n\n#ifdef STBI_NEON\n\n// NEON integer IDCT. 
should produce bit-identical\n// results to the generic C version.\nstatic void stbi__idct_simd(stbi_uc *out, int out_stride, short data[64])\n{\n   int16x8_t row0, row1, row2, row3, row4, row5, row6, row7;\n\n   int16x4_t rot0_0 = vdup_n_s16(stbi__f2f(0.5411961f));\n   int16x4_t rot0_1 = vdup_n_s16(stbi__f2f(-1.847759065f));\n   int16x4_t rot0_2 = vdup_n_s16(stbi__f2f( 0.765366865f));\n   int16x4_t rot1_0 = vdup_n_s16(stbi__f2f( 1.175875602f));\n   int16x4_t rot1_1 = vdup_n_s16(stbi__f2f(-0.899976223f));\n   int16x4_t rot1_2 = vdup_n_s16(stbi__f2f(-2.562915447f));\n   int16x4_t rot2_0 = vdup_n_s16(stbi__f2f(-1.961570560f));\n   int16x4_t rot2_1 = vdup_n_s16(stbi__f2f(-0.390180644f));\n   int16x4_t rot3_0 = vdup_n_s16(stbi__f2f( 0.298631336f));\n   int16x4_t rot3_1 = vdup_n_s16(stbi__f2f( 2.053119869f));\n   int16x4_t rot3_2 = vdup_n_s16(stbi__f2f( 3.072711026f));\n   int16x4_t rot3_3 = vdup_n_s16(stbi__f2f( 1.501321110f));\n\n#define dct_long_mul(out, inq, coeff) \\\n   int32x4_t out##_l = vmull_s16(vget_low_s16(inq), coeff); \\\n   int32x4_t out##_h = vmull_s16(vget_high_s16(inq), coeff)\n\n#define dct_long_mac(out, acc, inq, coeff) \\\n   int32x4_t out##_l = vmlal_s16(acc##_l, vget_low_s16(inq), coeff); \\\n   int32x4_t out##_h = vmlal_s16(acc##_h, vget_high_s16(inq), coeff)\n\n#define dct_widen(out, inq) \\\n   int32x4_t out##_l = vshll_n_s16(vget_low_s16(inq), 12); \\\n   int32x4_t out##_h = vshll_n_s16(vget_high_s16(inq), 12)\n\n// wide add\n#define dct_wadd(out, a, b) \\\n   int32x4_t out##_l = vaddq_s32(a##_l, b##_l); \\\n   int32x4_t out##_h = vaddq_s32(a##_h, b##_h)\n\n// wide sub\n#define dct_wsub(out, a, b) \\\n   int32x4_t out##_l = vsubq_s32(a##_l, b##_l); \\\n   int32x4_t out##_h = vsubq_s32(a##_h, b##_h)\n\n// butterfly a/b, then shift using \"shiftop\" by \"s\" and pack\n#define dct_bfly32o(out0,out1, a,b,shiftop,s) \\\n   { \\\n      dct_wadd(sum, a, b); \\\n      dct_wsub(dif, a, b); \\\n      out0 = vcombine_s16(shiftop(sum_l, s), shiftop(sum_h, 
s)); \\\n      out1 = vcombine_s16(shiftop(dif_l, s), shiftop(dif_h, s)); \\\n   }\n\n#define dct_pass(shiftop, shift) \\\n   { \\\n      /* even part */ \\\n      int16x8_t sum26 = vaddq_s16(row2, row6); \\\n      dct_long_mul(p1e, sum26, rot0_0); \\\n      dct_long_mac(t2e, p1e, row6, rot0_1); \\\n      dct_long_mac(t3e, p1e, row2, rot0_2); \\\n      int16x8_t sum04 = vaddq_s16(row0, row4); \\\n      int16x8_t dif04 = vsubq_s16(row0, row4); \\\n      dct_widen(t0e, sum04); \\\n      dct_widen(t1e, dif04); \\\n      dct_wadd(x0, t0e, t3e); \\\n      dct_wsub(x3, t0e, t3e); \\\n      dct_wadd(x1, t1e, t2e); \\\n      dct_wsub(x2, t1e, t2e); \\\n      /* odd part */ \\\n      int16x8_t sum15 = vaddq_s16(row1, row5); \\\n      int16x8_t sum17 = vaddq_s16(row1, row7); \\\n      int16x8_t sum35 = vaddq_s16(row3, row5); \\\n      int16x8_t sum37 = vaddq_s16(row3, row7); \\\n      int16x8_t sumodd = vaddq_s16(sum17, sum35); \\\n      dct_long_mul(p5o, sumodd, rot1_0); \\\n      dct_long_mac(p1o, p5o, sum17, rot1_1); \\\n      dct_long_mac(p2o, p5o, sum35, rot1_2); \\\n      dct_long_mul(p3o, sum37, rot2_0); \\\n      dct_long_mul(p4o, sum15, rot2_1); \\\n      dct_wadd(sump13o, p1o, p3o); \\\n      dct_wadd(sump24o, p2o, p4o); \\\n      dct_wadd(sump23o, p2o, p3o); \\\n      dct_wadd(sump14o, p1o, p4o); \\\n      dct_long_mac(x4, sump13o, row7, rot3_0); \\\n      dct_long_mac(x5, sump24o, row5, rot3_1); \\\n      dct_long_mac(x6, sump23o, row3, rot3_2); \\\n      dct_long_mac(x7, sump14o, row1, rot3_3); \\\n      dct_bfly32o(row0,row7, x0,x7,shiftop,shift); \\\n      dct_bfly32o(row1,row6, x1,x6,shiftop,shift); \\\n      dct_bfly32o(row2,row5, x2,x5,shiftop,shift); \\\n      dct_bfly32o(row3,row4, x3,x4,shiftop,shift); \\\n   }\n\n   // load\n   row0 = vld1q_s16(data + 0*8);\n   row1 = vld1q_s16(data + 1*8);\n   row2 = vld1q_s16(data + 2*8);\n   row3 = vld1q_s16(data + 3*8);\n   row4 = vld1q_s16(data + 4*8);\n   row5 = vld1q_s16(data + 5*8);\n   row6 = vld1q_s16(data + 
6*8);\n   row7 = vld1q_s16(data + 7*8);\n\n   // add DC bias\n   row0 = vaddq_s16(row0, vsetq_lane_s16(1024, vdupq_n_s16(0), 0));\n\n   // column pass\n   dct_pass(vrshrn_n_s32, 10);\n\n   // 16bit 8x8 transpose\n   {\n// these three map to a single VTRN.16, VTRN.32, and VSWP, respectively.\n// whether compilers actually get this is another story, sadly.\n#define dct_trn16(x, y) { int16x8x2_t t = vtrnq_s16(x, y); x = t.val[0]; y = t.val[1]; }\n#define dct_trn32(x, y) { int32x4x2_t t = vtrnq_s32(vreinterpretq_s32_s16(x), vreinterpretq_s32_s16(y)); x = vreinterpretq_s16_s32(t.val[0]); y = vreinterpretq_s16_s32(t.val[1]); }\n#define dct_trn64(x, y) { int16x8_t x0 = x; int16x8_t y0 = y; x = vcombine_s16(vget_low_s16(x0), vget_low_s16(y0)); y = vcombine_s16(vget_high_s16(x0), vget_high_s16(y0)); }\n\n      // pass 1\n      dct_trn16(row0, row1); // a0b0a2b2a4b4a6b6\n      dct_trn16(row2, row3);\n      dct_trn16(row4, row5);\n      dct_trn16(row6, row7);\n\n      // pass 2\n      dct_trn32(row0, row2); // a0b0c0d0a4b4c4d4\n      dct_trn32(row1, row3);\n      dct_trn32(row4, row6);\n      dct_trn32(row5, row7);\n\n      // pass 3\n      dct_trn64(row0, row4); // a0b0c0d0e0f0g0h0\n      dct_trn64(row1, row5);\n      dct_trn64(row2, row6);\n      dct_trn64(row3, row7);\n\n#undef dct_trn16\n#undef dct_trn32\n#undef dct_trn64\n   }\n\n   // row pass\n   // vrshrn_n_s32 only supports shifts up to 16, we need\n   // 17. 
so do a non-rounding shift of 16 first then follow\n   // up with a rounding shift by 1.\n   dct_pass(vshrn_n_s32, 16);\n\n   {\n      // pack and round\n      uint8x8_t p0 = vqrshrun_n_s16(row0, 1);\n      uint8x8_t p1 = vqrshrun_n_s16(row1, 1);\n      uint8x8_t p2 = vqrshrun_n_s16(row2, 1);\n      uint8x8_t p3 = vqrshrun_n_s16(row3, 1);\n      uint8x8_t p4 = vqrshrun_n_s16(row4, 1);\n      uint8x8_t p5 = vqrshrun_n_s16(row5, 1);\n      uint8x8_t p6 = vqrshrun_n_s16(row6, 1);\n      uint8x8_t p7 = vqrshrun_n_s16(row7, 1);\n\n      // again, these can translate into one instruction, but often don't.\n#define dct_trn8_8(x, y) { uint8x8x2_t t = vtrn_u8(x, y); x = t.val[0]; y = t.val[1]; }\n#define dct_trn8_16(x, y) { uint16x4x2_t t = vtrn_u16(vreinterpret_u16_u8(x), vreinterpret_u16_u8(y)); x = vreinterpret_u8_u16(t.val[0]); y = vreinterpret_u8_u16(t.val[1]); }\n#define dct_trn8_32(x, y) { uint32x2x2_t t = vtrn_u32(vreinterpret_u32_u8(x), vreinterpret_u32_u8(y)); x = vreinterpret_u8_u32(t.val[0]); y = vreinterpret_u8_u32(t.val[1]); }\n\n      // sadly can't use interleaved stores here since we only write\n      // 8 bytes to each scan line!\n\n      // 8x8 8-bit transpose pass 1\n      dct_trn8_8(p0, p1);\n      dct_trn8_8(p2, p3);\n      dct_trn8_8(p4, p5);\n      dct_trn8_8(p6, p7);\n\n      // pass 2\n      dct_trn8_16(p0, p2);\n      dct_trn8_16(p1, p3);\n      dct_trn8_16(p4, p6);\n      dct_trn8_16(p5, p7);\n\n      // pass 3\n      dct_trn8_32(p0, p4);\n      dct_trn8_32(p1, p5);\n      dct_trn8_32(p2, p6);\n      dct_trn8_32(p3, p7);\n\n      // store\n      vst1_u8(out, p0); out += out_stride;\n      vst1_u8(out, p1); out += out_stride;\n      vst1_u8(out, p2); out += out_stride;\n      vst1_u8(out, p3); out += out_stride;\n      vst1_u8(out, p4); out += out_stride;\n      vst1_u8(out, p5); out += out_stride;\n      vst1_u8(out, p6); out += out_stride;\n      vst1_u8(out, p7);\n\n#undef dct_trn8_8\n#undef dct_trn8_16\n#undef dct_trn8_32\n   }\n\n#undef 
dct_long_mul\n#undef dct_long_mac\n#undef dct_widen\n#undef dct_wadd\n#undef dct_wsub\n#undef dct_bfly32o\n#undef dct_pass\n}\n\n#endif // STBI_NEON\n\n#define STBI__MARKER_none  0xff\n// if there's a pending marker from the entropy stream, return that\n// otherwise, fetch from the stream and get a marker. if there's no\n// marker, return 0xff, which is never a valid marker value\nstatic stbi_uc stbi__get_marker(stbi__jpeg *j)\n{\n   stbi_uc x;\n   if (j->marker != STBI__MARKER_none) { x = j->marker; j->marker = STBI__MARKER_none; return x; }\n   x = stbi__get8(j->s);\n   if (x != 0xff) return STBI__MARKER_none;\n   while (x == 0xff)\n      x = stbi__get8(j->s); // consume repeated 0xff fill bytes\n   return x;\n}\n\n// in each scan, we'll have scan_n components, and the order\n// of the components is specified by order[]\n#define STBI__RESTART(x)     ((x) >= 0xd0 && (x) <= 0xd7)\n\n// after a restart interval, stbi__jpeg_reset the entropy decoder and\n// the dc prediction\nstatic void stbi__jpeg_reset(stbi__jpeg *j)\n{\n   j->code_bits = 0;\n   j->code_buffer = 0;\n   j->nomore = 0;\n   j->img_comp[0].dc_pred = j->img_comp[1].dc_pred = j->img_comp[2].dc_pred = j->img_comp[3].dc_pred = 0;\n   j->marker = STBI__MARKER_none;\n   j->todo = j->restart_interval ? j->restart_interval : 0x7fffffff;\n   j->eob_run = 0;\n   // no more than 1<<31 MCUs if no restart_interval? 
that's plenty safe,\n   // since we don't even allow 1<<30 pixels\n}\n\nstatic int stbi__parse_entropy_coded_data(stbi__jpeg *z)\n{\n   stbi__jpeg_reset(z);\n   if (!z->progressive) {\n      if (z->scan_n == 1) {\n         int i,j;\n         STBI_SIMD_ALIGN(short, data[64]);\n         int n = z->order[0];\n         // non-interleaved data, we just need to process one block at a time,\n         // in trivial scanline order\n         // number of blocks to do just depends on how many actual \"pixels\" this\n         // component has, independent of interleaved MCU blocking and such\n         int w = (z->img_comp[n].x+7) >> 3;\n         int h = (z->img_comp[n].y+7) >> 3;\n         for (j=0; j < h; ++j) {\n            for (i=0; i < w; ++i) {\n               int ha = z->img_comp[n].ha;\n               if (!stbi__jpeg_decode_block(z, data, z->huff_dc+z->img_comp[n].hd, z->huff_ac+ha, z->fast_ac[ha], n, z->dequant[z->img_comp[n].tq])) return 0;\n               z->idct_block_kernel(z->img_comp[n].data+z->img_comp[n].w2*j*8+i*8, z->img_comp[n].w2, data);\n               // every data block is an MCU, so countdown the restart interval\n               if (--z->todo <= 0) {\n                  if (z->code_bits < 24) stbi__grow_buffer_unsafe(z);\n                  // if it's NOT a restart, then just bail, so we get corrupt data\n                  // rather than no data\n                  if (!STBI__RESTART(z->marker)) return 1;\n                  stbi__jpeg_reset(z);\n               }\n            }\n         }\n         return 1;\n      } else { // interleaved\n         int i,j,k,x,y;\n         STBI_SIMD_ALIGN(short, data[64]);\n         for (j=0; j < z->img_mcu_y; ++j) {\n            for (i=0; i < z->img_mcu_x; ++i) {\n               // scan an interleaved mcu... 
process scan_n components in order\n               for (k=0; k < z->scan_n; ++k) {\n                  int n = z->order[k];\n                  // scan out an mcu's worth of this component; that's just determined\n                  // by the basic H and V specified for the component\n                  for (y=0; y < z->img_comp[n].v; ++y) {\n                     for (x=0; x < z->img_comp[n].h; ++x) {\n                        int x2 = (i*z->img_comp[n].h + x)*8;\n                        int y2 = (j*z->img_comp[n].v + y)*8;\n                        int ha = z->img_comp[n].ha;\n                        if (!stbi__jpeg_decode_block(z, data, z->huff_dc+z->img_comp[n].hd, z->huff_ac+ha, z->fast_ac[ha], n, z->dequant[z->img_comp[n].tq])) return 0;\n                        z->idct_block_kernel(z->img_comp[n].data+z->img_comp[n].w2*y2+x2, z->img_comp[n].w2, data);\n                     }\n                  }\n               }\n               // after all interleaved components, that's an interleaved MCU,\n               // so now count down the restart interval\n               if (--z->todo <= 0) {\n                  if (z->code_bits < 24) stbi__grow_buffer_unsafe(z);\n                  if (!STBI__RESTART(z->marker)) return 1;\n                  stbi__jpeg_reset(z);\n               }\n            }\n         }\n         return 1;\n      }\n   } else {\n      if (z->scan_n == 1) {\n         int i,j;\n         int n = z->order[0];\n         // non-interleaved data, we just need to process one block at a time,\n         // in trivial scanline order\n         // number of blocks to do just depends on how many actual \"pixels\" this\n         // component has, independent of interleaved MCU blocking and such\n         int w = (z->img_comp[n].x+7) >> 3;\n         int h = (z->img_comp[n].y+7) >> 3;\n         for (j=0; j < h; ++j) {\n            for (i=0; i < w; ++i) {\n               short *data = z->img_comp[n].coeff + 64 * (i + j * z->img_comp[n].coeff_w);\n               if 
(z->spec_start == 0) {\n                  if (!stbi__jpeg_decode_block_prog_dc(z, data, &z->huff_dc[z->img_comp[n].hd], n))\n                     return 0;\n               } else {\n                  int ha = z->img_comp[n].ha;\n                  if (!stbi__jpeg_decode_block_prog_ac(z, data, &z->huff_ac[ha], z->fast_ac[ha]))\n                     return 0;\n               }\n               // every data block is an MCU, so countdown the restart interval\n               if (--z->todo <= 0) {\n                  if (z->code_bits < 24) stbi__grow_buffer_unsafe(z);\n                  if (!STBI__RESTART(z->marker)) return 1;\n                  stbi__jpeg_reset(z);\n               }\n            }\n         }\n         return 1;\n      } else { // interleaved\n         int i,j,k,x,y;\n         for (j=0; j < z->img_mcu_y; ++j) {\n            for (i=0; i < z->img_mcu_x; ++i) {\n               // scan an interleaved mcu... process scan_n components in order\n               for (k=0; k < z->scan_n; ++k) {\n                  int n = z->order[k];\n                  // scan out an mcu's worth of this component; that's just determined\n                  // by the basic H and V specified for the component\n                  for (y=0; y < z->img_comp[n].v; ++y) {\n                     for (x=0; x < z->img_comp[n].h; ++x) {\n                        int x2 = (i*z->img_comp[n].h + x);\n                        int y2 = (j*z->img_comp[n].v + y);\n                        short *data = z->img_comp[n].coeff + 64 * (x2 + y2 * z->img_comp[n].coeff_w);\n                        if (!stbi__jpeg_decode_block_prog_dc(z, data, &z->huff_dc[z->img_comp[n].hd], n))\n                           return 0;\n                     }\n                  }\n               }\n               // after all interleaved components, that's an interleaved MCU,\n               // so now count down the restart interval\n               if (--z->todo <= 0) {\n                  if (z->code_bits < 24) 
stbi__grow_buffer_unsafe(z);\n                  if (!STBI__RESTART(z->marker)) return 1;\n                  stbi__jpeg_reset(z);\n               }\n            }\n         }\n         return 1;\n      }\n   }\n}\n\nstatic void stbi__jpeg_dequantize(short *data, stbi__uint16 *dequant)\n{\n   int i;\n   for (i=0; i < 64; ++i)\n      data[i] *= dequant[i];\n}\n\nstatic void stbi__jpeg_finish(stbi__jpeg *z)\n{\n   if (z->progressive) {\n      // dequantize and idct the data\n      int i,j,n;\n      for (n=0; n < z->s->img_n; ++n) {\n         int w = (z->img_comp[n].x+7) >> 3;\n         int h = (z->img_comp[n].y+7) >> 3;\n         for (j=0; j < h; ++j) {\n            for (i=0; i < w; ++i) {\n               short *data = z->img_comp[n].coeff + 64 * (i + j * z->img_comp[n].coeff_w);\n               stbi__jpeg_dequantize(data, z->dequant[z->img_comp[n].tq]);\n               z->idct_block_kernel(z->img_comp[n].data+z->img_comp[n].w2*j*8+i*8, z->img_comp[n].w2, data);\n            }\n         }\n      }\n   }\n}\n\nstatic int stbi__process_marker(stbi__jpeg *z, int m)\n{\n   int L;\n   switch (m) {\n      case STBI__MARKER_none: // no marker found\n         return stbi__err(\"expected marker\",\"Corrupt JPEG\");\n\n      case 0xDD: // DRI - specify restart interval\n         if (stbi__get16be(z->s) != 4) return stbi__err(\"bad DRI len\",\"Corrupt JPEG\");\n         z->restart_interval = stbi__get16be(z->s);\n         return 1;\n\n      case 0xDB: // DQT - define quantization table\n         L = stbi__get16be(z->s)-2;\n         while (L > 0) {\n            int q = stbi__get8(z->s);\n            int p = q >> 4, sixteen = (p != 0);\n            int t = q & 15,i;\n            if (p != 0 && p != 1) return stbi__err(\"bad DQT type\",\"Corrupt JPEG\");\n            if (t > 3) return stbi__err(\"bad DQT table\",\"Corrupt JPEG\");\n\n            for (i=0; i < 64; ++i)\n               z->dequant[t][stbi__jpeg_dezigzag[i]] = (stbi__uint16)(sixteen ? 
stbi__get16be(z->s) : stbi__get8(z->s));\n            L -= (sixteen ? 129 : 65);\n         }\n         return L==0;\n\n      case 0xC4: // DHT - define huffman table\n         L = stbi__get16be(z->s)-2;\n         while (L > 0) {\n            stbi_uc *v;\n            int sizes[16],i,n=0;\n            int q = stbi__get8(z->s);\n            int tc = q >> 4;\n            int th = q & 15;\n            if (tc > 1 || th > 3) return stbi__err(\"bad DHT header\",\"Corrupt JPEG\");\n            for (i=0; i < 16; ++i) {\n               sizes[i] = stbi__get8(z->s);\n               n += sizes[i];\n            }\n            if(n > 256) return stbi__err(\"bad DHT header\",\"Corrupt JPEG\"); // Loop over i < n would write past end of values!\n            L -= 17;\n            if (tc == 0) {\n               if (!stbi__build_huffman(z->huff_dc+th, sizes)) return 0;\n               v = z->huff_dc[th].values;\n            } else {\n               if (!stbi__build_huffman(z->huff_ac+th, sizes)) return 0;\n               v = z->huff_ac[th].values;\n            }\n            for (i=0; i < n; ++i)\n               v[i] = stbi__get8(z->s);\n            if (tc != 0)\n               stbi__build_fast_ac(z->fast_ac[th], z->huff_ac + th);\n            L -= n;\n         }\n         return L==0;\n   }\n\n   // check for comment block or APP blocks\n   if ((m >= 0xE0 && m <= 0xEF) || m == 0xFE) {\n      L = stbi__get16be(z->s);\n      if (L < 2) {\n         if (m == 0xFE)\n            return stbi__err(\"bad COM len\",\"Corrupt JPEG\");\n         else\n            return stbi__err(\"bad APP len\",\"Corrupt JPEG\");\n      }\n      L -= 2;\n\n      if (m == 0xE0 && L >= 5) { // JFIF APP0 segment\n         static const unsigned char tag[5] = {'J','F','I','F','\\0'};\n         int ok = 1;\n         int i;\n         for (i=0; i < 5; ++i)\n            if (stbi__get8(z->s) != tag[i])\n               ok = 0;\n         L -= 5;\n         if (ok)\n            z->jfif = 1;\n      } else if (m == 0xEE && L >= 
12) { // Adobe APP14 segment\n         static const unsigned char tag[6] = {'A','d','o','b','e','\\0'};\n         int ok = 1;\n         int i;\n         for (i=0; i < 6; ++i)\n            if (stbi__get8(z->s) != tag[i])\n               ok = 0;\n         L -= 6;\n         if (ok) {\n            stbi__get8(z->s); // version\n            stbi__get16be(z->s); // flags0\n            stbi__get16be(z->s); // flags1\n            z->app14_color_transform = stbi__get8(z->s); // color transform\n            L -= 6;\n         }\n      }\n\n      stbi__skip(z->s, L);\n      return 1;\n   }\n\n   return stbi__err(\"unknown marker\",\"Corrupt JPEG\");\n}\n\n// after we see SOS\nstatic int stbi__process_scan_header(stbi__jpeg *z)\n{\n   int i;\n   int Ls = stbi__get16be(z->s);\n   z->scan_n = stbi__get8(z->s);\n   if (z->scan_n < 1 || z->scan_n > 4 || z->scan_n > (int) z->s->img_n) return stbi__err(\"bad SOS component count\",\"Corrupt JPEG\");\n   if (Ls != 6+2*z->scan_n) return stbi__err(\"bad SOS len\",\"Corrupt JPEG\");\n   for (i=0; i < z->scan_n; ++i) {\n      int id = stbi__get8(z->s), which;\n      int q = stbi__get8(z->s);\n      for (which = 0; which < z->s->img_n; ++which)\n         if (z->img_comp[which].id == id)\n            break;\n      if (which == z->s->img_n) return 0; // no match\n      z->img_comp[which].hd = q >> 4;   if (z->img_comp[which].hd > 3) return stbi__err(\"bad DC huff\",\"Corrupt JPEG\");\n      z->img_comp[which].ha = q & 15;   if (z->img_comp[which].ha > 3) return stbi__err(\"bad AC huff\",\"Corrupt JPEG\");\n      z->order[i] = which;\n   }\n\n   {\n      int aa;\n      z->spec_start = stbi__get8(z->s);\n      z->spec_end   = stbi__get8(z->s); // should be 63, but might be 0\n      aa = stbi__get8(z->s);\n      z->succ_high = (aa >> 4);\n      z->succ_low  = (aa & 15);\n      if (z->progressive) {\n         if (z->spec_start > 63 || z->spec_end > 63  || z->spec_start > z->spec_end || z->succ_high > 13 || z->succ_low > 13)\n            return 
stbi__err(\"bad SOS\", \"Corrupt JPEG\");\n      } else {\n         if (z->spec_start != 0) return stbi__err(\"bad SOS\",\"Corrupt JPEG\");\n         if (z->succ_high != 0 || z->succ_low != 0) return stbi__err(\"bad SOS\",\"Corrupt JPEG\");\n         z->spec_end = 63;\n      }\n   }\n\n   return 1;\n}\n\nstatic int stbi__free_jpeg_components(stbi__jpeg *z, int ncomp, int why)\n{\n   int i;\n   for (i=0; i < ncomp; ++i) {\n      if (z->img_comp[i].raw_data) {\n         STBI_FREE(z->img_comp[i].raw_data);\n         z->img_comp[i].raw_data = NULL;\n         z->img_comp[i].data = NULL;\n      }\n      if (z->img_comp[i].raw_coeff) {\n         STBI_FREE(z->img_comp[i].raw_coeff);\n         z->img_comp[i].raw_coeff = 0;\n         z->img_comp[i].coeff = 0;\n      }\n      if (z->img_comp[i].linebuf) {\n         STBI_FREE(z->img_comp[i].linebuf);\n         z->img_comp[i].linebuf = NULL;\n      }\n   }\n   return why;\n}\n\nstatic int stbi__process_frame_header(stbi__jpeg *z, int scan)\n{\n   stbi__context *s = z->s;\n   int Lf,p,i,q, h_max=1,v_max=1,c;\n   Lf = stbi__get16be(s);         if (Lf < 11) return stbi__err(\"bad SOF len\",\"Corrupt JPEG\"); // JPEG\n   p  = stbi__get8(s);            if (p != 8) return stbi__err(\"only 8-bit\",\"JPEG format not supported: 8-bit only\"); // JPEG baseline\n   s->img_y = stbi__get16be(s);   if (s->img_y == 0) return stbi__err(\"no header height\", \"JPEG format not supported: delayed height\"); // Legal, but we don't handle it--but neither does IJG\n   s->img_x = stbi__get16be(s);   if (s->img_x == 0) return stbi__err(\"0 width\",\"Corrupt JPEG\"); // JPEG requires\n   if (s->img_y > STBI_MAX_DIMENSIONS) return stbi__err(\"too large\",\"Very large image (corrupt?)\");\n   if (s->img_x > STBI_MAX_DIMENSIONS) return stbi__err(\"too large\",\"Very large image (corrupt?)\");\n   c = stbi__get8(s);\n   if (c != 3 && c != 1 && c != 4) return stbi__err(\"bad component count\",\"Corrupt JPEG\");\n   s->img_n = c;\n   for (i=0; i < c; ++i) 
{\n      z->img_comp[i].data = NULL;\n      z->img_comp[i].linebuf = NULL;\n   }\n\n   if (Lf != 8+3*s->img_n) return stbi__err(\"bad SOF len\",\"Corrupt JPEG\");\n\n   z->rgb = 0;\n   for (i=0; i < s->img_n; ++i) {\n      static const unsigned char rgb[3] = { 'R', 'G', 'B' };\n      z->img_comp[i].id = stbi__get8(s);\n      if (s->img_n == 3 && z->img_comp[i].id == rgb[i])\n         ++z->rgb;\n      q = stbi__get8(s);\n      z->img_comp[i].h = (q >> 4);  if (!z->img_comp[i].h || z->img_comp[i].h > 4) return stbi__err(\"bad H\",\"Corrupt JPEG\");\n      z->img_comp[i].v = q & 15;    if (!z->img_comp[i].v || z->img_comp[i].v > 4) return stbi__err(\"bad V\",\"Corrupt JPEG\");\n      z->img_comp[i].tq = stbi__get8(s);  if (z->img_comp[i].tq > 3) return stbi__err(\"bad TQ\",\"Corrupt JPEG\");\n   }\n\n   if (scan != STBI__SCAN_load) return 1;\n\n   if (!stbi__mad3sizes_valid(s->img_x, s->img_y, s->img_n, 0)) return stbi__err(\"too large\", \"Image too large to decode\");\n\n   for (i=0; i < s->img_n; ++i) {\n      if (z->img_comp[i].h > h_max) h_max = z->img_comp[i].h;\n      if (z->img_comp[i].v > v_max) v_max = z->img_comp[i].v;\n   }\n\n   // check that plane subsampling factors are integer ratios; our resamplers can't deal with fractional ratios\n   // and I've never seen a non-corrupted JPEG file actually use them\n   for (i=0; i < s->img_n; ++i) {\n      if (h_max % z->img_comp[i].h != 0) return stbi__err(\"bad H\",\"Corrupt JPEG\");\n      if (v_max % z->img_comp[i].v != 0) return stbi__err(\"bad V\",\"Corrupt JPEG\");\n   }\n\n   // compute interleaved mcu info\n   z->img_h_max = h_max;\n   z->img_v_max = v_max;\n   z->img_mcu_w = h_max * 8;\n   z->img_mcu_h = v_max * 8;\n   // these sizes can't be more than 17 bits\n   z->img_mcu_x = (s->img_x + z->img_mcu_w-1) / z->img_mcu_w;\n   z->img_mcu_y = (s->img_y + z->img_mcu_h-1) / z->img_mcu_h;\n\n   for (i=0; i < s->img_n; ++i) {\n      // number of effective pixels (e.g. 
for non-interleaved MCU)\n      z->img_comp[i].x = (s->img_x * z->img_comp[i].h + h_max-1) / h_max;\n      z->img_comp[i].y = (s->img_y * z->img_comp[i].v + v_max-1) / v_max;\n      // to simplify generation, we'll allocate enough memory to decode\n      // the bogus oversized data from using interleaved MCUs and their\n      // big blocks (e.g. a 16x16 iMCU on an image of width 33); we won't\n      // discard the extra data until colorspace conversion\n      //\n      // img_mcu_x, img_mcu_y: <=17 bits; comp[i].h and .v are <=4 (checked earlier)\n      // so these muls can't overflow with 32-bit ints (which we require)\n      z->img_comp[i].w2 = z->img_mcu_x * z->img_comp[i].h * 8;\n      z->img_comp[i].h2 = z->img_mcu_y * z->img_comp[i].v * 8;\n      z->img_comp[i].coeff = 0;\n      z->img_comp[i].raw_coeff = 0;\n      z->img_comp[i].linebuf = NULL;\n      z->img_comp[i].raw_data = stbi__malloc_mad2(z->img_comp[i].w2, z->img_comp[i].h2, 15);\n      if (z->img_comp[i].raw_data == NULL)\n         return stbi__free_jpeg_components(z, i+1, stbi__err(\"outofmem\", \"Out of memory\"));\n      // align blocks for idct using mmx/sse\n      z->img_comp[i].data = (stbi_uc*) (((size_t) z->img_comp[i].raw_data + 15) & ~15);\n      if (z->progressive) {\n         // w2, h2 are multiples of 8 (see above)\n         z->img_comp[i].coeff_w = z->img_comp[i].w2 / 8;\n         z->img_comp[i].coeff_h = z->img_comp[i].h2 / 8;\n         z->img_comp[i].raw_coeff = stbi__malloc_mad3(z->img_comp[i].w2, z->img_comp[i].h2, sizeof(short), 15);\n         if (z->img_comp[i].raw_coeff == NULL)\n            return stbi__free_jpeg_components(z, i+1, stbi__err(\"outofmem\", \"Out of memory\"));\n         z->img_comp[i].coeff = (short*) (((size_t) z->img_comp[i].raw_coeff + 15) & ~15);\n      }\n   }\n\n   return 1;\n}\n\n// use comparisons since in some cases we handle more than one case (e.g. 
SOF)\n#define stbi__DNL(x)         ((x) == 0xdc)\n#define stbi__SOI(x)         ((x) == 0xd8)\n#define stbi__EOI(x)         ((x) == 0xd9)\n#define stbi__SOF(x)         ((x) == 0xc0 || (x) == 0xc1 || (x) == 0xc2)\n#define stbi__SOS(x)         ((x) == 0xda)\n\n#define stbi__SOF_progressive(x)   ((x) == 0xc2)\n\nstatic int stbi__decode_jpeg_header(stbi__jpeg *z, int scan)\n{\n   int m;\n   z->jfif = 0;\n   z->app14_color_transform = -1; // valid values are 0,1,2\n   z->marker = STBI__MARKER_none; // initialize cached marker to empty\n   m = stbi__get_marker(z);\n   if (!stbi__SOI(m)) return stbi__err(\"no SOI\",\"Corrupt JPEG\");\n   if (scan == STBI__SCAN_type) return 1;\n   m = stbi__get_marker(z);\n   while (!stbi__SOF(m)) {\n      if (!stbi__process_marker(z,m)) return 0;\n      m = stbi__get_marker(z);\n      while (m == STBI__MARKER_none) {\n         // some files have extra padding after their blocks, so ok, we'll scan\n         if (stbi__at_eof(z->s)) return stbi__err(\"no SOF\", \"Corrupt JPEG\");\n         m = stbi__get_marker(z);\n      }\n   }\n   z->progressive = stbi__SOF_progressive(m);\n   if (!stbi__process_frame_header(z, scan)) return 0;\n   return 1;\n}\n\nstatic stbi_uc stbi__skip_jpeg_junk_at_end(stbi__jpeg *j)\n{\n   // some JPEGs have junk at end, skip over it but if we find what looks\n   // like a valid marker, resume there\n   while (!stbi__at_eof(j->s)) {\n      stbi_uc x = stbi__get8(j->s);\n      while (x == 0xff) { // might be a marker\n         if (stbi__at_eof(j->s)) return STBI__MARKER_none;\n         x = stbi__get8(j->s);\n         if (x != 0x00 && x != 0xff) {\n            // not a stuffed zero or lead-in to another marker, looks\n            // like an actual marker, return it\n            return x;\n         }\n         // stuffed zero has x=0 now which ends the loop, meaning we go\n         // back to regular scan loop.\n         // repeated 0xff keeps trying to read the next byte of the marker.\n      }\n   }\n   return 
STBI__MARKER_none;\n}\n\n// decode image to YCbCr format\nstatic int stbi__decode_jpeg_image(stbi__jpeg *j)\n{\n   int m;\n   for (m = 0; m < 4; m++) {\n      j->img_comp[m].raw_data = NULL;\n      j->img_comp[m].raw_coeff = NULL;\n   }\n   j->restart_interval = 0;\n   if (!stbi__decode_jpeg_header(j, STBI__SCAN_load)) return 0;\n   m = stbi__get_marker(j);\n   while (!stbi__EOI(m)) {\n      if (stbi__SOS(m)) {\n         if (!stbi__process_scan_header(j)) return 0;\n         if (!stbi__parse_entropy_coded_data(j)) return 0;\n         if (j->marker == STBI__MARKER_none) {\n            j->marker = stbi__skip_jpeg_junk_at_end(j);\n            // if we reach eof without hitting a marker, stbi__get_marker() below will fail and we'll eventually return 0\n         }\n         m = stbi__get_marker(j);\n         if (STBI__RESTART(m))\n            m = stbi__get_marker(j);\n      } else if (stbi__DNL(m)) {\n         int Ld = stbi__get16be(j->s);\n         stbi__uint32 NL = stbi__get16be(j->s);\n         if (Ld != 4) return stbi__err(\"bad DNL len\", \"Corrupt JPEG\");\n         if (NL != j->s->img_y) return stbi__err(\"bad DNL height\", \"Corrupt JPEG\");\n         m = stbi__get_marker(j);\n      } else {\n         if (!stbi__process_marker(j, m)) return 1;\n         m = stbi__get_marker(j);\n      }\n   }\n   if (j->progressive)\n      stbi__jpeg_finish(j);\n   return 1;\n}\n\n// static jfif-centered resampling (across block boundaries)\n\ntypedef stbi_uc *(*resample_row_func)(stbi_uc *out, stbi_uc *in0, stbi_uc *in1,\n                                    int w, int hs);\n\n#define stbi__div4(x) ((stbi_uc) ((x) >> 2))\n\nstatic stbi_uc *resample_row_1(stbi_uc *out, stbi_uc *in_near, stbi_uc *in_far, int w, int hs)\n{\n   STBI_NOTUSED(out);\n   STBI_NOTUSED(in_far);\n   STBI_NOTUSED(w);\n   STBI_NOTUSED(hs);\n   return in_near;\n}\n\nstatic stbi_uc* stbi__resample_row_v_2(stbi_uc *out, stbi_uc *in_near, stbi_uc *in_far, int w, int hs)\n{\n   // need to generate two samples 
vertically for every one in input\n   int i;\n   STBI_NOTUSED(hs);\n   for (i=0; i < w; ++i)\n      out[i] = stbi__div4(3*in_near[i] + in_far[i] + 2);\n   return out;\n}\n\nstatic stbi_uc*  stbi__resample_row_h_2(stbi_uc *out, stbi_uc *in_near, stbi_uc *in_far, int w, int hs)\n{\n   // need to generate two samples horizontally for every one in input\n   int i;\n   stbi_uc *input = in_near;\n\n   if (w == 1) {\n      // if only one sample, can't do any interpolation\n      out[0] = out[1] = input[0];\n      return out;\n   }\n\n   out[0] = input[0];\n   out[1] = stbi__div4(input[0]*3 + input[1] + 2);\n   for (i=1; i < w-1; ++i) {\n      int n = 3*input[i]+2;\n      out[i*2+0] = stbi__div4(n+input[i-1]);\n      out[i*2+1] = stbi__div4(n+input[i+1]);\n   }\n   out[i*2+0] = stbi__div4(input[w-2]*3 + input[w-1] + 2);\n   out[i*2+1] = input[w-1];\n\n   STBI_NOTUSED(in_far);\n   STBI_NOTUSED(hs);\n\n   return out;\n}\n\n#define stbi__div16(x) ((stbi_uc) ((x) >> 4))\n\nstatic stbi_uc *stbi__resample_row_hv_2(stbi_uc *out, stbi_uc *in_near, stbi_uc *in_far, int w, int hs)\n{\n   // need to generate 2x2 samples for every one in input\n   int i,t0,t1;\n   if (w == 1) {\n      out[0] = out[1] = stbi__div4(3*in_near[0] + in_far[0] + 2);\n      return out;\n   }\n\n   t1 = 3*in_near[0] + in_far[0];\n   out[0] = stbi__div4(t1+2);\n   for (i=1; i < w; ++i) {\n      t0 = t1;\n      t1 = 3*in_near[i]+in_far[i];\n      out[i*2-1] = stbi__div16(3*t0 + t1 + 8);\n      out[i*2  ] = stbi__div16(3*t1 + t0 + 8);\n   }\n   out[w*2-1] = stbi__div4(t1+2);\n\n   STBI_NOTUSED(hs);\n\n   return out;\n}\n\n#if defined(STBI_SSE2) || defined(STBI_NEON)\nstatic stbi_uc *stbi__resample_row_hv_2_simd(stbi_uc *out, stbi_uc *in_near, stbi_uc *in_far, int w, int hs)\n{\n   // need to generate 2x2 samples for every one in input\n   int i=0,t0,t1;\n\n   if (w == 1) {\n      out[0] = out[1] = stbi__div4(3*in_near[0] + in_far[0] + 2);\n      return out;\n   }\n\n   t1 = 3*in_near[0] + in_far[0];\n   // 
process groups of 8 pixels for as long as we can.\n   // note we can't handle the last pixel in a row in this loop\n   // because we need to handle the filter boundary conditions.\n   for (; i < ((w-1) & ~7); i += 8) {\n#if defined(STBI_SSE2)\n      // load and perform the vertical filtering pass\n      // this uses 3*x + y = 4*x + (y - x)\n      __m128i zero  = _mm_setzero_si128();\n      __m128i farb  = _mm_loadl_epi64((__m128i *) (in_far + i));\n      __m128i nearb = _mm_loadl_epi64((__m128i *) (in_near + i));\n      __m128i farw  = _mm_unpacklo_epi8(farb, zero);\n      __m128i nearw = _mm_unpacklo_epi8(nearb, zero);\n      __m128i diff  = _mm_sub_epi16(farw, nearw);\n      __m128i nears = _mm_slli_epi16(nearw, 2);\n      __m128i curr  = _mm_add_epi16(nears, diff); // current row\n\n      // horizontal filter works the same based on shifted vers of current\n      // row. \"prev\" is current row shifted right by 1 pixel; we need to\n      // insert the previous pixel value (from t1).\n      // \"next\" is current row shifted left by 1 pixel, with first pixel\n      // of next block of 8 pixels added in.\n      __m128i prv0 = _mm_slli_si128(curr, 2);\n      __m128i nxt0 = _mm_srli_si128(curr, 2);\n      __m128i prev = _mm_insert_epi16(prv0, t1, 0);\n      __m128i next = _mm_insert_epi16(nxt0, 3*in_near[i+8] + in_far[i+8], 7);\n\n      // horizontal filter, polyphase implementation since it's convenient:\n      // even pixels = 3*cur + prev = cur*4 + (prev - cur)\n      // odd  pixels = 3*cur + next = cur*4 + (next - cur)\n      // note the shared term.\n      __m128i bias  = _mm_set1_epi16(8);\n      __m128i curs = _mm_slli_epi16(curr, 2);\n      __m128i prvd = _mm_sub_epi16(prev, curr);\n      __m128i nxtd = _mm_sub_epi16(next, curr);\n      __m128i curb = _mm_add_epi16(curs, bias);\n      __m128i even = _mm_add_epi16(prvd, curb);\n      __m128i odd  = _mm_add_epi16(nxtd, curb);\n\n      // interleave even and odd pixels, then undo scaling.\n      __m128i int0 = 
_mm_unpacklo_epi16(even, odd);\n      __m128i int1 = _mm_unpackhi_epi16(even, odd);\n      __m128i de0  = _mm_srli_epi16(int0, 4);\n      __m128i de1  = _mm_srli_epi16(int1, 4);\n\n      // pack and write output\n      __m128i outv = _mm_packus_epi16(de0, de1);\n      _mm_storeu_si128((__m128i *) (out + i*2), outv);\n#elif defined(STBI_NEON)\n      // load and perform the vertical filtering pass\n      // this uses 3*x + y = 4*x + (y - x)\n      uint8x8_t farb  = vld1_u8(in_far + i);\n      uint8x8_t nearb = vld1_u8(in_near + i);\n      int16x8_t diff  = vreinterpretq_s16_u16(vsubl_u8(farb, nearb));\n      int16x8_t nears = vreinterpretq_s16_u16(vshll_n_u8(nearb, 2));\n      int16x8_t curr  = vaddq_s16(nears, diff); // current row\n\n      // horizontal filter works the same based on shifted vers of current\n      // row. \"prev\" is current row shifted right by 1 pixel; we need to\n      // insert the previous pixel value (from t1).\n      // \"next\" is current row shifted left by 1 pixel, with first pixel\n      // of next block of 8 pixels added in.\n      int16x8_t prv0 = vextq_s16(curr, curr, 7);\n      int16x8_t nxt0 = vextq_s16(curr, curr, 1);\n      int16x8_t prev = vsetq_lane_s16(t1, prv0, 0);\n      int16x8_t next = vsetq_lane_s16(3*in_near[i+8] + in_far[i+8], nxt0, 7);\n\n      // horizontal filter, polyphase implementation since it's convenient:\n      // even pixels = 3*cur + prev = cur*4 + (prev - cur)\n      // odd  pixels = 3*cur + next = cur*4 + (next - cur)\n      // note the shared term.\n      int16x8_t curs = vshlq_n_s16(curr, 2);\n      int16x8_t prvd = vsubq_s16(prev, curr);\n      int16x8_t nxtd = vsubq_s16(next, curr);\n      int16x8_t even = vaddq_s16(curs, prvd);\n      int16x8_t odd  = vaddq_s16(curs, nxtd);\n\n      // undo scaling and round, then store with even/odd phases interleaved\n      uint8x8x2_t o;\n      o.val[0] = vqrshrun_n_s16(even, 4);\n      o.val[1] = vqrshrun_n_s16(odd,  4);\n      vst2_u8(out + i*2, o);\n#endif\n\n    
  // \"previous\" value for next iter\n      t1 = 3*in_near[i+7] + in_far[i+7];\n   }\n\n   t0 = t1;\n   t1 = 3*in_near[i] + in_far[i];\n   out[i*2] = stbi__div16(3*t1 + t0 + 8);\n\n   for (++i; i < w; ++i) {\n      t0 = t1;\n      t1 = 3*in_near[i]+in_far[i];\n      out[i*2-1] = stbi__div16(3*t0 + t1 + 8);\n      out[i*2  ] = stbi__div16(3*t1 + t0 + 8);\n   }\n   out[w*2-1] = stbi__div4(t1+2);\n\n   STBI_NOTUSED(hs);\n\n   return out;\n}\n#endif\n\nstatic stbi_uc *stbi__resample_row_generic(stbi_uc *out, stbi_uc *in_near, stbi_uc *in_far, int w, int hs)\n{\n   // resample with nearest-neighbor\n   int i,j;\n   STBI_NOTUSED(in_far);\n   for (i=0; i < w; ++i)\n      for (j=0; j < hs; ++j)\n         out[i*hs+j] = in_near[i];\n   return out;\n}\n\n// this is a reduced-precision calculation of YCbCr-to-RGB introduced\n// to make sure the code produces the same results in both SIMD and scalar\n#define stbi__float2fixed(x)  (((int) ((x) * 4096.0f + 0.5f)) << 8)\nstatic void stbi__YCbCr_to_RGB_row(stbi_uc *out, const stbi_uc *y, const stbi_uc *pcb, const stbi_uc *pcr, int count, int step)\n{\n   int i;\n   for (i=0; i < count; ++i) {\n      int y_fixed = (y[i] << 20) + (1<<19); // rounding\n      int r,g,b;\n      int cr = pcr[i] - 128;\n      int cb = pcb[i] - 128;\n      r = y_fixed +  cr* stbi__float2fixed(1.40200f);\n      g = y_fixed + (cr*-stbi__float2fixed(0.71414f)) + ((cb*-stbi__float2fixed(0.34414f)) & 0xffff0000);\n      b = y_fixed                                     +   cb* stbi__float2fixed(1.77200f);\n      r >>= 20;\n      g >>= 20;\n      b >>= 20;\n      if ((unsigned) r > 255) { if (r < 0) r = 0; else r = 255; }\n      if ((unsigned) g > 255) { if (g < 0) g = 0; else g = 255; }\n      if ((unsigned) b > 255) { if (b < 0) b = 0; else b = 255; }\n      out[0] = (stbi_uc)r;\n      out[1] = (stbi_uc)g;\n      out[2] = (stbi_uc)b;\n      out[3] = 255;\n      out += step;\n   }\n}\n\n#if defined(STBI_SSE2) || defined(STBI_NEON)\nstatic void 
stbi__YCbCr_to_RGB_simd(stbi_uc *out, stbi_uc const *y, stbi_uc const *pcb, stbi_uc const *pcr, int count, int step)\n{\n   int i = 0;\n\n#ifdef STBI_SSE2\n   // step == 3 is pretty ugly on the final interleave, and i'm not convinced\n   // it's useful in practice (you wouldn't use it for textures, for example).\n   // so just accelerate step == 4 case.\n   if (step == 4) {\n      // this is a fairly straightforward implementation and not super-optimized.\n      __m128i signflip  = _mm_set1_epi8(-0x80);\n      __m128i cr_const0 = _mm_set1_epi16(   (short) ( 1.40200f*4096.0f+0.5f));\n      __m128i cr_const1 = _mm_set1_epi16( - (short) ( 0.71414f*4096.0f+0.5f));\n      __m128i cb_const0 = _mm_set1_epi16( - (short) ( 0.34414f*4096.0f+0.5f));\n      __m128i cb_const1 = _mm_set1_epi16(   (short) ( 1.77200f*4096.0f+0.5f));\n      __m128i y_bias = _mm_set1_epi8((char) (unsigned char) 128);\n      __m128i xw = _mm_set1_epi16(255); // alpha channel\n\n      for (; i+7 < count; i += 8) {\n         // load\n         __m128i y_bytes = _mm_loadl_epi64((__m128i *) (y+i));\n         __m128i cr_bytes = _mm_loadl_epi64((__m128i *) (pcr+i));\n         __m128i cb_bytes = _mm_loadl_epi64((__m128i *) (pcb+i));\n         __m128i cr_biased = _mm_xor_si128(cr_bytes, signflip); // -128\n         __m128i cb_biased = _mm_xor_si128(cb_bytes, signflip); // -128\n\n         // unpack to short (and left-shift cr, cb by 8)\n         __m128i yw  = _mm_unpacklo_epi8(y_bias, y_bytes);\n         __m128i crw = _mm_unpacklo_epi8(_mm_setzero_si128(), cr_biased);\n         __m128i cbw = _mm_unpacklo_epi8(_mm_setzero_si128(), cb_biased);\n\n         // color transform\n         __m128i yws = _mm_srli_epi16(yw, 4);\n         __m128i cr0 = _mm_mulhi_epi16(cr_const0, crw);\n         __m128i cb0 = _mm_mulhi_epi16(cb_const0, cbw);\n         __m128i cb1 = _mm_mulhi_epi16(cbw, cb_const1);\n         __m128i cr1 = _mm_mulhi_epi16(crw, cr_const1);\n         __m128i rws = _mm_add_epi16(cr0, yws);\n         __m128i 
gwt = _mm_add_epi16(cb0, yws);\n         __m128i bws = _mm_add_epi16(yws, cb1);\n         __m128i gws = _mm_add_epi16(gwt, cr1);\n\n         // descale\n         __m128i rw = _mm_srai_epi16(rws, 4);\n         __m128i bw = _mm_srai_epi16(bws, 4);\n         __m128i gw = _mm_srai_epi16(gws, 4);\n\n         // back to byte, set up for transpose\n         __m128i brb = _mm_packus_epi16(rw, bw);\n         __m128i gxb = _mm_packus_epi16(gw, xw);\n\n         // transpose to interleave channels\n         __m128i t0 = _mm_unpacklo_epi8(brb, gxb);\n         __m128i t1 = _mm_unpackhi_epi8(brb, gxb);\n         __m128i o0 = _mm_unpacklo_epi16(t0, t1);\n         __m128i o1 = _mm_unpackhi_epi16(t0, t1);\n\n         // store\n         _mm_storeu_si128((__m128i *) (out + 0), o0);\n         _mm_storeu_si128((__m128i *) (out + 16), o1);\n         out += 32;\n      }\n   }\n#endif\n\n#ifdef STBI_NEON\n   // in this version, step=3 support would be easy to add. but is there demand?\n   if (step == 4) {\n      // this is a fairly straightforward implementation and not super-optimized.\n      uint8x8_t signflip = vdup_n_u8(0x80);\n      int16x8_t cr_const0 = vdupq_n_s16(   (short) ( 1.40200f*4096.0f+0.5f));\n      int16x8_t cr_const1 = vdupq_n_s16( - (short) ( 0.71414f*4096.0f+0.5f));\n      int16x8_t cb_const0 = vdupq_n_s16( - (short) ( 0.34414f*4096.0f+0.5f));\n      int16x8_t cb_const1 = vdupq_n_s16(   (short) ( 1.77200f*4096.0f+0.5f));\n\n      for (; i+7 < count; i += 8) {\n         // load\n         uint8x8_t y_bytes  = vld1_u8(y + i);\n         uint8x8_t cr_bytes = vld1_u8(pcr + i);\n         uint8x8_t cb_bytes = vld1_u8(pcb + i);\n         int8x8_t cr_biased = vreinterpret_s8_u8(vsub_u8(cr_bytes, signflip));\n         int8x8_t cb_biased = vreinterpret_s8_u8(vsub_u8(cb_bytes, signflip));\n\n         // expand to s16\n         int16x8_t yws = vreinterpretq_s16_u16(vshll_n_u8(y_bytes, 4));\n         int16x8_t crw = vshll_n_s8(cr_biased, 7);\n         int16x8_t cbw = 
vshll_n_s8(cb_biased, 7);\n\n         // color transform\n         int16x8_t cr0 = vqdmulhq_s16(crw, cr_const0);\n         int16x8_t cb0 = vqdmulhq_s16(cbw, cb_const0);\n         int16x8_t cr1 = vqdmulhq_s16(crw, cr_const1);\n         int16x8_t cb1 = vqdmulhq_s16(cbw, cb_const1);\n         int16x8_t rws = vaddq_s16(yws, cr0);\n         int16x8_t gws = vaddq_s16(vaddq_s16(yws, cb0), cr1);\n         int16x8_t bws = vaddq_s16(yws, cb1);\n\n         // undo scaling, round, convert to byte\n         uint8x8x4_t o;\n         o.val[0] = vqrshrun_n_s16(rws, 4);\n         o.val[1] = vqrshrun_n_s16(gws, 4);\n         o.val[2] = vqrshrun_n_s16(bws, 4);\n         o.val[3] = vdup_n_u8(255);\n\n         // store, interleaving r/g/b/a\n         vst4_u8(out, o);\n         out += 8*4;\n      }\n   }\n#endif\n\n   for (; i < count; ++i) {\n      int y_fixed = (y[i] << 20) + (1<<19); // rounding\n      int r,g,b;\n      int cr = pcr[i] - 128;\n      int cb = pcb[i] - 128;\n      r = y_fixed + cr* stbi__float2fixed(1.40200f);\n      g = y_fixed + cr*-stbi__float2fixed(0.71414f) + ((cb*-stbi__float2fixed(0.34414f)) & 0xffff0000);\n      b = y_fixed                                   +   cb* stbi__float2fixed(1.77200f);\n      r >>= 20;\n      g >>= 20;\n      b >>= 20;\n      if ((unsigned) r > 255) { if (r < 0) r = 0; else r = 255; }\n      if ((unsigned) g > 255) { if (g < 0) g = 0; else g = 255; }\n      if ((unsigned) b > 255) { if (b < 0) b = 0; else b = 255; }\n      out[0] = (stbi_uc)r;\n      out[1] = (stbi_uc)g;\n      out[2] = (stbi_uc)b;\n      out[3] = 255;\n      out += step;\n   }\n}\n#endif\n\n// set up the kernels\nstatic void stbi__setup_jpeg(stbi__jpeg *j)\n{\n   j->idct_block_kernel = stbi__idct_block;\n   j->YCbCr_to_RGB_kernel = stbi__YCbCr_to_RGB_row;\n   j->resample_row_hv_2_kernel = stbi__resample_row_hv_2;\n\n#ifdef STBI_SSE2\n   if (stbi__sse2_available()) {\n      j->idct_block_kernel = stbi__idct_simd;\n      j->YCbCr_to_RGB_kernel = 
stbi__YCbCr_to_RGB_simd;\n      j->resample_row_hv_2_kernel = stbi__resample_row_hv_2_simd;\n   }\n#endif\n\n#ifdef STBI_NEON\n   j->idct_block_kernel = stbi__idct_simd;\n   j->YCbCr_to_RGB_kernel = stbi__YCbCr_to_RGB_simd;\n   j->resample_row_hv_2_kernel = stbi__resample_row_hv_2_simd;\n#endif\n}\n\n// clean up the temporary component buffers\nstatic void stbi__cleanup_jpeg(stbi__jpeg *j)\n{\n   stbi__free_jpeg_components(j, j->s->img_n, 0);\n}\n\ntypedef struct\n{\n   resample_row_func resample;\n   stbi_uc *line0,*line1;\n   int hs,vs;   // expansion factor in each axis\n   int w_lores; // horizontal pixels pre-expansion\n   int ystep;   // how far through vertical expansion we are\n   int ypos;    // which pre-expansion row we're on\n} stbi__resample;\n\n// fast 0..255 * 0..255 => 0..255 rounded multiplication\nstatic stbi_uc stbi__blinn_8x8(stbi_uc x, stbi_uc y)\n{\n   unsigned int t = x*y + 128;\n   return (stbi_uc) ((t + (t >>8)) >> 8);\n}\n\nstatic stbi_uc *load_jpeg_image(stbi__jpeg *z, int *out_x, int *out_y, int *comp, int req_comp)\n{\n   int n, decode_n, is_rgb;\n   z->s->img_n = 0; // make stbi__cleanup_jpeg safe\n\n   // validate req_comp\n   if (req_comp < 0 || req_comp > 4) return stbi__errpuc(\"bad req_comp\", \"Internal error\");\n\n   // load a jpeg image from whichever source, but leave in YCbCr format\n   if (!stbi__decode_jpeg_image(z)) { stbi__cleanup_jpeg(z); return NULL; }\n\n   // determine actual number of components to generate\n   n = req_comp ? req_comp : z->s->img_n >= 3 ? 
3 : 1;\n\n   is_rgb = z->s->img_n == 3 && (z->rgb == 3 || (z->app14_color_transform == 0 && !z->jfif));\n\n   if (z->s->img_n == 3 && n < 3 && !is_rgb)\n      decode_n = 1;\n   else\n      decode_n = z->s->img_n;\n\n   // nothing to do if no components requested; check this now to avoid\n   // accessing uninitialized coutput[0] later\n   if (decode_n <= 0) { stbi__cleanup_jpeg(z); return NULL; }\n\n   // resample and color-convert\n   {\n      int k;\n      unsigned int i,j;\n      stbi_uc *output;\n      stbi_uc *coutput[4] = { NULL, NULL, NULL, NULL };\n\n      stbi__resample res_comp[4];\n\n      for (k=0; k < decode_n; ++k) {\n         stbi__resample *r = &res_comp[k];\n\n         // allocate line buffer big enough for upsampling off the edges\n         // with upsample factor of 4\n         z->img_comp[k].linebuf = (stbi_uc *) stbi__malloc(z->s->img_x + 3);\n         if (!z->img_comp[k].linebuf) { stbi__cleanup_jpeg(z); return stbi__errpuc(\"outofmem\", \"Out of memory\"); }\n\n         r->hs      = z->img_h_max / z->img_comp[k].h;\n         r->vs      = z->img_v_max / z->img_comp[k].v;\n         r->ystep   = r->vs >> 1;\n         r->w_lores = (z->s->img_x + r->hs-1) / r->hs;\n         r->ypos    = 0;\n         r->line0   = r->line1 = z->img_comp[k].data;\n\n         if      (r->hs == 1 && r->vs == 1) r->resample = resample_row_1;\n         else if (r->hs == 1 && r->vs == 2) r->resample = stbi__resample_row_v_2;\n         else if (r->hs == 2 && r->vs == 1) r->resample = stbi__resample_row_h_2;\n         else if (r->hs == 2 && r->vs == 2) r->resample = z->resample_row_hv_2_kernel;\n         else                               r->resample = stbi__resample_row_generic;\n      }\n\n      // can't error after this, so this is safe\n      output = (stbi_uc *) stbi__malloc_mad3(n, z->s->img_x, z->s->img_y, 1);\n      if (!output) { stbi__cleanup_jpeg(z); return stbi__errpuc(\"outofmem\", \"Out of memory\"); }\n\n      // now go ahead and resample\n      for (j=0; j < 
z->s->img_y; ++j) {\n         stbi_uc *out = output + n * z->s->img_x * j;\n         for (k=0; k < decode_n; ++k) {\n            stbi__resample *r = &res_comp[k];\n            int y_bot = r->ystep >= (r->vs >> 1);\n            coutput[k] = r->resample(z->img_comp[k].linebuf,\n                                     y_bot ? r->line1 : r->line0,\n                                     y_bot ? r->line0 : r->line1,\n                                     r->w_lores, r->hs);\n            if (++r->ystep >= r->vs) {\n               r->ystep = 0;\n               r->line0 = r->line1;\n               if (++r->ypos < z->img_comp[k].y)\n                  r->line1 += z->img_comp[k].w2;\n            }\n         }\n         if (n >= 3) {\n            stbi_uc *y = coutput[0];\n            if (z->s->img_n == 3) {\n               if (is_rgb) {\n                  for (i=0; i < z->s->img_x; ++i) {\n                     out[0] = y[i];\n                     out[1] = coutput[1][i];\n                     out[2] = coutput[2][i];\n                     out[3] = 255;\n                     out += n;\n                  }\n               } else {\n                  z->YCbCr_to_RGB_kernel(out, y, coutput[1], coutput[2], z->s->img_x, n);\n               }\n            } else if (z->s->img_n == 4) {\n               if (z->app14_color_transform == 0) { // CMYK\n                  for (i=0; i < z->s->img_x; ++i) {\n                     stbi_uc m = coutput[3][i];\n                     out[0] = stbi__blinn_8x8(coutput[0][i], m);\n                     out[1] = stbi__blinn_8x8(coutput[1][i], m);\n                     out[2] = stbi__blinn_8x8(coutput[2][i], m);\n                     out[3] = 255;\n                     out += n;\n                  }\n               } else if (z->app14_color_transform == 2) { // YCCK\n                  z->YCbCr_to_RGB_kernel(out, y, coutput[1], coutput[2], z->s->img_x, n);\n                  for (i=0; i < z->s->img_x; ++i) {\n                     stbi_uc m = coutput[3][i];\n        
             out[0] = stbi__blinn_8x8(255 - out[0], m);\n                     out[1] = stbi__blinn_8x8(255 - out[1], m);\n                     out[2] = stbi__blinn_8x8(255 - out[2], m);\n                     out += n;\n                  }\n               } else { // YCbCr + alpha?  Ignore the fourth channel for now\n                  z->YCbCr_to_RGB_kernel(out, y, coutput[1], coutput[2], z->s->img_x, n);\n               }\n            } else\n               for (i=0; i < z->s->img_x; ++i) {\n                  out[0] = out[1] = out[2] = y[i];\n                  out[3] = 255; // not used if n==3\n                  out += n;\n               }\n         } else {\n            if (is_rgb) {\n               if (n == 1)\n                  for (i=0; i < z->s->img_x; ++i)\n                     *out++ = stbi__compute_y(coutput[0][i], coutput[1][i], coutput[2][i]);\n               else {\n                  for (i=0; i < z->s->img_x; ++i, out += 2) {\n                     out[0] = stbi__compute_y(coutput[0][i], coutput[1][i], coutput[2][i]);\n                     out[1] = 255;\n                  }\n               }\n            } else if (z->s->img_n == 4 && z->app14_color_transform == 0) {\n               for (i=0; i < z->s->img_x; ++i) {\n                  stbi_uc m = coutput[3][i];\n                  stbi_uc r = stbi__blinn_8x8(coutput[0][i], m);\n                  stbi_uc g = stbi__blinn_8x8(coutput[1][i], m);\n                  stbi_uc b = stbi__blinn_8x8(coutput[2][i], m);\n                  out[0] = stbi__compute_y(r, g, b);\n                  out[1] = 255;\n                  out += n;\n               }\n            } else if (z->s->img_n == 4 && z->app14_color_transform == 2) {\n               for (i=0; i < z->s->img_x; ++i) {\n                  out[0] = stbi__blinn_8x8(255 - coutput[0][i], coutput[3][i]);\n                  out[1] = 255;\n                  out += n;\n               }\n            } else {\n               stbi_uc *y = coutput[0];\n               if (n 
== 1)\n                  for (i=0; i < z->s->img_x; ++i) out[i] = y[i];\n               else\n                  for (i=0; i < z->s->img_x; ++i) { *out++ = y[i]; *out++ = 255; }\n            }\n         }\n      }\n      stbi__cleanup_jpeg(z);\n      *out_x = z->s->img_x;\n      *out_y = z->s->img_y;\n      if (comp) *comp = z->s->img_n >= 3 ? 3 : 1; // report original components, not output\n      return output;\n   }\n}\n\nstatic void *stbi__jpeg_load(stbi__context *s, int *x, int *y, int *comp, int req_comp, stbi__result_info *ri)\n{\n   unsigned char* result;\n   stbi__jpeg* j = (stbi__jpeg*) stbi__malloc(sizeof(stbi__jpeg));\n   if (!j) return stbi__errpuc(\"outofmem\", \"Out of memory\");\n   memset(j, 0, sizeof(stbi__jpeg));\n   STBI_NOTUSED(ri);\n   j->s = s;\n   stbi__setup_jpeg(j);\n   result = load_jpeg_image(j, x,y,comp,req_comp);\n   STBI_FREE(j);\n   return result;\n}\n\nstatic int stbi__jpeg_test(stbi__context *s)\n{\n   int r;\n   stbi__jpeg* j = (stbi__jpeg*)stbi__malloc(sizeof(stbi__jpeg));\n   if (!j) return stbi__err(\"outofmem\", \"Out of memory\");\n   memset(j, 0, sizeof(stbi__jpeg));\n   j->s = s;\n   stbi__setup_jpeg(j);\n   r = stbi__decode_jpeg_header(j, STBI__SCAN_type);\n   stbi__rewind(s);\n   STBI_FREE(j);\n   return r;\n}\n\nstatic int stbi__jpeg_info_raw(stbi__jpeg *j, int *x, int *y, int *comp)\n{\n   if (!stbi__decode_jpeg_header(j, STBI__SCAN_header)) {\n      stbi__rewind( j->s );\n      return 0;\n   }\n   if (x) *x = j->s->img_x;\n   if (y) *y = j->s->img_y;\n   if (comp) *comp = j->s->img_n >= 3 ? 
3 : 1;\n   return 1;\n}\n\nstatic int stbi__jpeg_info(stbi__context *s, int *x, int *y, int *comp)\n{\n   int result;\n   stbi__jpeg* j = (stbi__jpeg*) (stbi__malloc(sizeof(stbi__jpeg)));\n   if (!j) return stbi__err(\"outofmem\", \"Out of memory\");\n   memset(j, 0, sizeof(stbi__jpeg));\n   j->s = s;\n   result = stbi__jpeg_info_raw(j, x, y, comp);\n   STBI_FREE(j);\n   return result;\n}\n#endif\n\n// public domain zlib decode    v0.2  Sean Barrett 2006-11-18\n//    simple implementation\n//      - all input must be provided in an upfront buffer\n//      - all output is written to a single output buffer (can malloc/realloc)\n//    performance\n//      - fast huffman\n\n#ifndef STBI_NO_ZLIB\n\n// fast-way is faster to check than jpeg huffman, but slow way is slower\n#define STBI__ZFAST_BITS  9 // accelerate all cases in default tables\n#define STBI__ZFAST_MASK  ((1 << STBI__ZFAST_BITS) - 1)\n#define STBI__ZNSYMS 288 // number of symbols in literal/length alphabet\n\n// zlib-style huffman encoding\n// (jpeg packs from left, zlib from right, so can't share code)\ntypedef struct\n{\n   stbi__uint16 fast[1 << STBI__ZFAST_BITS];\n   stbi__uint16 firstcode[16];\n   int maxcode[17];\n   stbi__uint16 firstsymbol[16];\n   stbi_uc  size[STBI__ZNSYMS];\n   stbi__uint16 value[STBI__ZNSYMS];\n} stbi__zhuffman;\n\nstbi_inline static int stbi__bitreverse16(int n)\n{\n  n = ((n & 0xAAAA) >>  1) | ((n & 0x5555) << 1);\n  n = ((n & 0xCCCC) >>  2) | ((n & 0x3333) << 2);\n  n = ((n & 0xF0F0) >>  4) | ((n & 0x0F0F) << 4);\n  n = ((n & 0xFF00) >>  8) | ((n & 0x00FF) << 8);\n  return n;\n}\n\nstbi_inline static int stbi__bit_reverse(int v, int bits)\n{\n   STBI_ASSERT(bits <= 16);\n   // to bit reverse n bits, reverse 16 and shift\n   // e.g. 
11 bits, bit reverse and shift away 5\n   return stbi__bitreverse16(v) >> (16-bits);\n}\n\nstatic int stbi__zbuild_huffman(stbi__zhuffman *z, const stbi_uc *sizelist, int num)\n{\n   int i,k=0;\n   int code, next_code[16], sizes[17];\n\n   // DEFLATE spec for generating codes\n   memset(sizes, 0, sizeof(sizes));\n   memset(z->fast, 0, sizeof(z->fast));\n   for (i=0; i < num; ++i)\n      ++sizes[sizelist[i]];\n   sizes[0] = 0;\n   for (i=1; i < 16; ++i)\n      if (sizes[i] > (1 << i))\n         return stbi__err(\"bad sizes\", \"Corrupt PNG\");\n   code = 0;\n   for (i=1; i < 16; ++i) {\n      next_code[i] = code;\n      z->firstcode[i] = (stbi__uint16) code;\n      z->firstsymbol[i] = (stbi__uint16) k;\n      code = (code + sizes[i]);\n      if (sizes[i])\n         if (code-1 >= (1 << i)) return stbi__err(\"bad codelengths\",\"Corrupt PNG\");\n      z->maxcode[i] = code << (16-i); // preshift for inner loop\n      code <<= 1;\n      k += sizes[i];\n   }\n   z->maxcode[16] = 0x10000; // sentinel\n   for (i=0; i < num; ++i) {\n      int s = sizelist[i];\n      if (s) {\n         int c = next_code[s] - z->firstcode[s] + z->firstsymbol[s];\n         stbi__uint16 fastv = (stbi__uint16) ((s << 9) | i);\n         z->size [c] = (stbi_uc     ) s;\n         z->value[c] = (stbi__uint16) i;\n         if (s <= STBI__ZFAST_BITS) {\n            int j = stbi__bit_reverse(next_code[s],s);\n            while (j < (1 << STBI__ZFAST_BITS)) {\n               z->fast[j] = fastv;\n               j += (1 << s);\n            }\n         }\n         ++next_code[s];\n      }\n   }\n   return 1;\n}\n\n// zlib-from-memory implementation for PNG reading\n//    because PNG allows splitting the zlib stream arbitrarily,\n//    and it's annoying structurally to have PNG call ZLIB call PNG,\n//    we require PNG read all the IDATs and combine them into a single\n//    memory buffer\n\ntypedef struct\n{\n   stbi_uc *zbuffer, *zbuffer_end;\n   int num_bits;\n   int hit_zeof_once;\n   stbi__uint32 
code_buffer;\n\n   char *zout;\n   char *zout_start;\n   char *zout_end;\n   int   z_expandable;\n\n   stbi__zhuffman z_length, z_distance;\n} stbi__zbuf;\n\nstbi_inline static int stbi__zeof(stbi__zbuf *z)\n{\n   return (z->zbuffer >= z->zbuffer_end);\n}\n\nstbi_inline static stbi_uc stbi__zget8(stbi__zbuf *z)\n{\n   return stbi__zeof(z) ? 0 : *z->zbuffer++;\n}\n\nstatic void stbi__fill_bits(stbi__zbuf *z)\n{\n   do {\n      if (z->code_buffer >= (1U << z->num_bits)) {\n        z->zbuffer = z->zbuffer_end;  /* treat this as EOF so we fail. */\n        return;\n      }\n      z->code_buffer |= (unsigned int) stbi__zget8(z) << z->num_bits;\n      z->num_bits += 8;\n   } while (z->num_bits <= 24);\n}\n\nstbi_inline static unsigned int stbi__zreceive(stbi__zbuf *z, int n)\n{\n   unsigned int k;\n   if (z->num_bits < n) stbi__fill_bits(z);\n   k = z->code_buffer & ((1 << n) - 1);\n   z->code_buffer >>= n;\n   z->num_bits -= n;\n   return k;\n}\n\nstatic int stbi__zhuffman_decode_slowpath(stbi__zbuf *a, stbi__zhuffman *z)\n{\n   int b,s,k;\n   // not resolved by fast table, so compute it the slow way\n   // use jpeg approach, which requires MSbits at top\n   k = stbi__bit_reverse(a->code_buffer, 16);\n   for (s=STBI__ZFAST_BITS+1; ; ++s)\n      if (k < z->maxcode[s])\n         break;\n   if (s >= 16) return -1; // invalid code!\n   // code size is s, so:\n   b = (k >> (16-s)) - z->firstcode[s] + z->firstsymbol[s];\n   if (b >= STBI__ZNSYMS) return -1; // some data was corrupt somewhere!\n   if (z->size[b] != s) return -1;  // was originally an assert, but report failure instead.\n   a->code_buffer >>= s;\n   a->num_bits -= s;\n   return z->value[b];\n}\n\nstbi_inline static int stbi__zhuffman_decode(stbi__zbuf *a, stbi__zhuffman *z)\n{\n   int b,s;\n   if (a->num_bits < 16) {\n      if (stbi__zeof(a)) {\n         if (!a->hit_zeof_once) {\n            // This is the first time we hit eof, insert 16 extra padding bits\n            // to allow us to keep going; if we 
actually consume any of them\n            // though, that is invalid data. This is caught later.\n            a->hit_zeof_once = 1;\n            a->num_bits += 16; // add 16 implicit zero bits\n         } else {\n            // We already inserted our extra 16 padding bits and are again\n            // out, this stream is actually prematurely terminated.\n            return -1;\n         }\n      } else {\n         stbi__fill_bits(a);\n      }\n   }\n   b = z->fast[a->code_buffer & STBI__ZFAST_MASK];\n   if (b) {\n      s = b >> 9;\n      a->code_buffer >>= s;\n      a->num_bits -= s;\n      return b & 511;\n   }\n   return stbi__zhuffman_decode_slowpath(a, z);\n}\n\nstatic int stbi__zexpand(stbi__zbuf *z, char *zout, int n)  // need to make room for n bytes\n{\n   char *q;\n   unsigned int cur, limit, old_limit;\n   z->zout = zout;\n   if (!z->z_expandable) return stbi__err(\"output buffer limit\",\"Corrupt PNG\");\n   cur   = (unsigned int) (z->zout - z->zout_start);\n   limit = old_limit = (unsigned) (z->zout_end - z->zout_start);\n   if (UINT_MAX - cur < (unsigned) n) return stbi__err(\"outofmem\", \"Out of memory\");\n   while (cur + n > limit) {\n      if(limit > UINT_MAX / 2) return stbi__err(\"outofmem\", \"Out of memory\");\n      limit *= 2;\n   }\n   q = (char *) STBI_REALLOC_SIZED(z->zout_start, old_limit, limit);\n   STBI_NOTUSED(old_limit);\n   if (q == NULL) return stbi__err(\"outofmem\", \"Out of memory\");\n   z->zout_start = q;\n   z->zout       = q + cur;\n   z->zout_end   = q + limit;\n   return 1;\n}\n\nstatic const int stbi__zlength_base[31] = {\n   3,4,5,6,7,8,9,10,11,13,\n   15,17,19,23,27,31,35,43,51,59,\n   67,83,99,115,131,163,195,227,258,0,0 };\n\nstatic const int stbi__zlength_extra[31]=\n{ 0,0,0,0,0,0,0,0,1,1,1,1,2,2,2,2,3,3,3,3,4,4,4,4,5,5,5,5,0,0,0 };\n\nstatic const int stbi__zdist_base[32] = { 1,2,3,4,5,7,9,13,17,25,33,49,65,97,129,193,\n257,385,513,769,1025,1537,2049,3073,4097,6145,8193,12289,16385,24577,0,0};\n\nstatic const int 
stbi__zdist_extra[32] =\n{ 0,0,0,0,1,1,2,2,3,3,4,4,5,5,6,6,7,7,8,8,9,9,10,10,11,11,12,12,13,13};\n\nstatic int stbi__parse_huffman_block(stbi__zbuf *a)\n{\n   char *zout = a->zout;\n   for(;;) {\n      int z = stbi__zhuffman_decode(a, &a->z_length);\n      if (z < 256) {\n         if (z < 0) return stbi__err(\"bad huffman code\",\"Corrupt PNG\"); // error in huffman codes\n         if (zout >= a->zout_end) {\n            if (!stbi__zexpand(a, zout, 1)) return 0;\n            zout = a->zout;\n         }\n         *zout++ = (char) z;\n      } else {\n         stbi_uc *p;\n         int len,dist;\n         if (z == 256) {\n            a->zout = zout;\n            if (a->hit_zeof_once && a->num_bits < 16) {\n               // The first time we hit zeof, we inserted 16 extra zero bits into our bit\n               // buffer so the decoder can just do its speculative decoding. But if we\n               // actually consumed any of those bits (which is the case when num_bits < 16),\n               // the stream actually read past the end so it is malformed.\n               return stbi__err(\"unexpected end\",\"Corrupt PNG\");\n            }\n            return 1;\n         }\n         if (z >= 286) return stbi__err(\"bad huffman code\",\"Corrupt PNG\"); // per DEFLATE, length codes 286 and 287 must not appear in compressed data\n         z -= 257;\n         len = stbi__zlength_base[z];\n         if (stbi__zlength_extra[z]) len += stbi__zreceive(a, stbi__zlength_extra[z]);\n         z = stbi__zhuffman_decode(a, &a->z_distance);\n         if (z < 0 || z >= 30) return stbi__err(\"bad huffman code\",\"Corrupt PNG\"); // per DEFLATE, distance codes 30 and 31 must not appear in compressed data\n         dist = stbi__zdist_base[z];\n         if (stbi__zdist_extra[z]) dist += stbi__zreceive(a, stbi__zdist_extra[z]);\n         if (zout - a->zout_start < dist) return stbi__err(\"bad dist\",\"Corrupt PNG\");\n         if (len > a->zout_end - zout) {\n            if (!stbi__zexpand(a, 
zout, len)) return 0;\n            zout = a->zout;\n         }\n         p = (stbi_uc *) (zout - dist);\n         if (dist == 1) { // run of one byte; common in images.\n            stbi_uc v = *p;\n            if (len) { do *zout++ = v; while (--len); }\n         } else {\n            if (len) { do *zout++ = *p++; while (--len); }\n         }\n      }\n   }\n}\n\nstatic int stbi__compute_huffman_codes(stbi__zbuf *a)\n{\n   static const stbi_uc length_dezigzag[19] = { 16,17,18,0,8,7,9,6,10,5,11,4,12,3,13,2,14,1,15 };\n   stbi__zhuffman z_codelength;\n   stbi_uc lencodes[286+32+137];//padding for maximum single op\n   stbi_uc codelength_sizes[19];\n   int i,n;\n\n   int hlit  = stbi__zreceive(a,5) + 257;\n   int hdist = stbi__zreceive(a,5) + 1;\n   int hclen = stbi__zreceive(a,4) + 4;\n   int ntot  = hlit + hdist;\n\n   memset(codelength_sizes, 0, sizeof(codelength_sizes));\n   for (i=0; i < hclen; ++i) {\n      int s = stbi__zreceive(a,3);\n      codelength_sizes[length_dezigzag[i]] = (stbi_uc) s;\n   }\n   if (!stbi__zbuild_huffman(&z_codelength, codelength_sizes, 19)) return 0;\n\n   n = 0;\n   while (n < ntot) {\n      int c = stbi__zhuffman_decode(a, &z_codelength);\n      if (c < 0 || c >= 19) return stbi__err(\"bad codelengths\", \"Corrupt PNG\");\n      if (c < 16)\n         lencodes[n++] = (stbi_uc) c;\n      else {\n         stbi_uc fill = 0;\n         if (c == 16) {\n            c = stbi__zreceive(a,2)+3;\n            if (n == 0) return stbi__err(\"bad codelengths\", \"Corrupt PNG\");\n            fill = lencodes[n-1];\n         } else if (c == 17) {\n            c = stbi__zreceive(a,3)+3;\n         } else if (c == 18) {\n            c = stbi__zreceive(a,7)+11;\n         } else {\n            return stbi__err(\"bad codelengths\", \"Corrupt PNG\");\n         }\n         if (ntot - n < c) return stbi__err(\"bad codelengths\", \"Corrupt PNG\");\n         memset(lencodes+n, fill, c);\n         n += c;\n      }\n   }\n   if (n != ntot) return stbi__err(\"bad 
codelengths\",\"Corrupt PNG\");\n   if (!stbi__zbuild_huffman(&a->z_length, lencodes, hlit)) return 0;\n   if (!stbi__zbuild_huffman(&a->z_distance, lencodes+hlit, hdist)) return 0;\n   return 1;\n}\n\nstatic int stbi__parse_uncompressed_block(stbi__zbuf *a)\n{\n   stbi_uc header[4];\n   int len,nlen,k;\n   if (a->num_bits & 7)\n      stbi__zreceive(a, a->num_bits & 7); // discard\n   // drain the bit-packed data into header\n   k = 0;\n   while (a->num_bits > 0) {\n      header[k++] = (stbi_uc) (a->code_buffer & 255); // suppress MSVC run-time check\n      a->code_buffer >>= 8;\n      a->num_bits -= 8;\n   }\n   if (a->num_bits < 0) return stbi__err(\"zlib corrupt\",\"Corrupt PNG\");\n   // now fill header the normal way\n   while (k < 4)\n      header[k++] = stbi__zget8(a);\n   len  = header[1] * 256 + header[0];\n   nlen = header[3] * 256 + header[2];\n   if (nlen != (len ^ 0xffff)) return stbi__err(\"zlib corrupt\",\"Corrupt PNG\");\n   if (a->zbuffer + len > a->zbuffer_end) return stbi__err(\"read past buffer\",\"Corrupt PNG\");\n   if (a->zout + len > a->zout_end)\n      if (!stbi__zexpand(a, a->zout, len)) return 0;\n   memcpy(a->zout, a->zbuffer, len);\n   a->zbuffer += len;\n   a->zout += len;\n   return 1;\n}\n\nstatic int stbi__parse_zlib_header(stbi__zbuf *a)\n{\n   int cmf   = stbi__zget8(a);\n   int cm    = cmf & 15;\n   /* int cinfo = cmf >> 4; */\n   int flg   = stbi__zget8(a);\n   if (stbi__zeof(a)) return stbi__err(\"bad zlib header\",\"Corrupt PNG\"); // zlib spec\n   if ((cmf*256+flg) % 31 != 0) return stbi__err(\"bad zlib header\",\"Corrupt PNG\"); // zlib spec\n   if (flg & 32) return stbi__err(\"no preset dict\",\"Corrupt PNG\"); // preset dictionary not allowed in png\n   if (cm != 8) return stbi__err(\"bad compression\",\"Corrupt PNG\"); // DEFLATE required for png\n   // window = 1 << (8 + cinfo)... 
but who cares, we fully buffer output\n   return 1;\n}\n\nstatic const stbi_uc stbi__zdefault_length[STBI__ZNSYMS] =\n{\n   8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8, 8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,\n   8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8, 8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,\n   8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8, 8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,\n   8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8, 8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,\n   8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8, 9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,\n   9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9, 9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,\n   9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9, 9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,\n   9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9, 9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,\n   7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7, 7,7,7,7,7,7,7,7,8,8,8,8,8,8,8,8\n};\nstatic const stbi_uc stbi__zdefault_distance[32] =\n{\n   5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5\n};\n/*\nInit algorithm:\n{\n   int i;   // use <= to match clearly with spec\n   for (i=0; i <= 143; ++i)     stbi__zdefault_length[i]   = 8;\n   for (   ; i <= 255; ++i)     stbi__zdefault_length[i]   = 9;\n   for (   ; i <= 279; ++i)     stbi__zdefault_length[i]   = 7;\n   for (   ; i <= 287; ++i)     stbi__zdefault_length[i]   = 8;\n\n   for (i=0; i <=  31; ++i)     stbi__zdefault_distance[i] = 5;\n}\n*/\n\nstatic int stbi__parse_zlib(stbi__zbuf *a, int parse_header)\n{\n   int final, type;\n   if (parse_header)\n      if (!stbi__parse_zlib_header(a)) return 0;\n   a->num_bits = 0;\n   a->code_buffer = 0;\n   a->hit_zeof_once = 0;\n   do {\n      final = stbi__zreceive(a,1);\n      type = stbi__zreceive(a,2);\n      if (type == 0) {\n         if (!stbi__parse_uncompressed_block(a)) return 0;\n      } else if (type == 3) {\n         return 0;\n      } else {\n         if (type == 1) {\n            // use fixed code lengths\n            if (!stbi__zbuild_huffman(&a->z_length  , stbi__zdefault_length  , STBI__ZNSYMS)) return 0;\n            if (!stbi__zbuild_huffman(&a->z_distance, 
stbi__zdefault_distance,  32)) return 0;\n         } else {\n            if (!stbi__compute_huffman_codes(a)) return 0;\n         }\n         if (!stbi__parse_huffman_block(a)) return 0;\n      }\n   } while (!final);\n   return 1;\n}\n\nstatic int stbi__do_zlib(stbi__zbuf *a, char *obuf, int olen, int exp, int parse_header)\n{\n   a->zout_start = obuf;\n   a->zout       = obuf;\n   a->zout_end   = obuf + olen;\n   a->z_expandable = exp;\n\n   return stbi__parse_zlib(a, parse_header);\n}\n\nSTBIDEF char *stbi_zlib_decode_malloc_guesssize(const char *buffer, int len, int initial_size, int *outlen)\n{\n   stbi__zbuf a;\n   char *p = (char *) stbi__malloc(initial_size);\n   if (p == NULL) return NULL;\n   a.zbuffer = (stbi_uc *) buffer;\n   a.zbuffer_end = (stbi_uc *) buffer + len;\n   if (stbi__do_zlib(&a, p, initial_size, 1, 1)) {\n      if (outlen) *outlen = (int) (a.zout - a.zout_start);\n      return a.zout_start;\n   } else {\n      STBI_FREE(a.zout_start);\n      return NULL;\n   }\n}\n\nSTBIDEF char *stbi_zlib_decode_malloc(char const *buffer, int len, int *outlen)\n{\n   return stbi_zlib_decode_malloc_guesssize(buffer, len, 16384, outlen);\n}\n\nSTBIDEF char *stbi_zlib_decode_malloc_guesssize_headerflag(const char *buffer, int len, int initial_size, int *outlen, int parse_header)\n{\n   stbi__zbuf a;\n   char *p = (char *) stbi__malloc(initial_size);\n   if (p == NULL) return NULL;\n   a.zbuffer = (stbi_uc *) buffer;\n   a.zbuffer_end = (stbi_uc *) buffer + len;\n   if (stbi__do_zlib(&a, p, initial_size, 1, parse_header)) {\n      if (outlen) *outlen = (int) (a.zout - a.zout_start);\n      return a.zout_start;\n   } else {\n      STBI_FREE(a.zout_start);\n      return NULL;\n   }\n}\n\nSTBIDEF int stbi_zlib_decode_buffer(char *obuffer, int olen, char const *ibuffer, int ilen)\n{\n   stbi__zbuf a;\n   a.zbuffer = (stbi_uc *) ibuffer;\n   a.zbuffer_end = (stbi_uc *) ibuffer + ilen;\n   if (stbi__do_zlib(&a, obuffer, olen, 0, 1))\n      return (int) (a.zout - 
a.zout_start);\n   else\n      return -1;\n}\n\nSTBIDEF char *stbi_zlib_decode_noheader_malloc(char const *buffer, int len, int *outlen)\n{\n   stbi__zbuf a;\n   char *p = (char *) stbi__malloc(16384);\n   if (p == NULL) return NULL;\n   a.zbuffer = (stbi_uc *) buffer;\n   a.zbuffer_end = (stbi_uc *) buffer+len;\n   if (stbi__do_zlib(&a, p, 16384, 1, 0)) {\n      if (outlen) *outlen = (int) (a.zout - a.zout_start);\n      return a.zout_start;\n   } else {\n      STBI_FREE(a.zout_start);\n      return NULL;\n   }\n}\n\nSTBIDEF int stbi_zlib_decode_noheader_buffer(char *obuffer, int olen, const char *ibuffer, int ilen)\n{\n   stbi__zbuf a;\n   a.zbuffer = (stbi_uc *) ibuffer;\n   a.zbuffer_end = (stbi_uc *) ibuffer + ilen;\n   if (stbi__do_zlib(&a, obuffer, olen, 0, 0))\n      return (int) (a.zout - a.zout_start);\n   else\n      return -1;\n}\n#endif\n\n// public domain \"baseline\" PNG decoder   v0.10  Sean Barrett 2006-11-18\n//    simple implementation\n//      - only 8-bit samples\n//      - no CRC checking\n//      - allocates lots of intermediate memory\n//        - avoids problem of streaming data between subsystems\n//        - avoids explicit window management\n//    performance\n//      - uses stb_zlib, a PD zlib implementation with fast huffman decoding\n\n#ifndef STBI_NO_PNG\ntypedef struct\n{\n   stbi__uint32 length;\n   stbi__uint32 type;\n} stbi__pngchunk;\n\nstatic stbi__pngchunk stbi__get_chunk_header(stbi__context *s)\n{\n   stbi__pngchunk c;\n   c.length = stbi__get32be(s);\n   c.type   = stbi__get32be(s);\n   return c;\n}\n\nstatic int stbi__check_png_header(stbi__context *s)\n{\n   static const stbi_uc png_sig[8] = { 137,80,78,71,13,10,26,10 };\n   int i;\n   for (i=0; i < 8; ++i)\n      if (stbi__get8(s) != png_sig[i]) return stbi__err(\"bad png sig\",\"Not a PNG\");\n   return 1;\n}\n\ntypedef struct\n{\n   stbi__context *s;\n   stbi_uc *idata, *expanded, *out;\n   int depth;\n} stbi__png;\n\n\nenum {\n   STBI__F_none=0,\n   STBI__F_sub=1,\n   
STBI__F_up=2,\n   STBI__F_avg=3,\n   STBI__F_paeth=4,\n   // synthetic filter used for first scanline to avoid needing a dummy row of 0s\n   STBI__F_avg_first\n};\n\nstatic stbi_uc first_row_filter[5] =\n{\n   STBI__F_none,\n   STBI__F_sub,\n   STBI__F_none,\n   STBI__F_avg_first,\n   STBI__F_sub // Paeth with b=c=0 turns out to be equivalent to sub\n};\n\nstatic int stbi__paeth(int a, int b, int c)\n{\n   // This formulation looks very different from the reference in the PNG spec, but is\n   // actually equivalent and has favorable data dependencies and admits straightforward\n   // generation of branch-free code, which helps performance significantly.\n   int thresh = c*3 - (a + b);\n   int lo = a < b ? a : b;\n   int hi = a < b ? b : a;\n   int t0 = (hi <= thresh) ? lo : c;\n   int t1 = (thresh <= lo) ? hi : t0;\n   return t1;\n}\n\nstatic const stbi_uc stbi__depth_scale_table[9] = { 0, 0xff, 0x55, 0, 0x11, 0,0,0, 0x01 };\n\n// adds an extra all-255 alpha channel\n// dest == src is legal\n// img_n must be 1 or 3\nstatic void stbi__create_png_alpha_expand8(stbi_uc *dest, stbi_uc *src, stbi__uint32 x, int img_n)\n{\n   int i;\n   // must process data backwards since we allow dest==src\n   if (img_n == 1) {\n      for (i=x-1; i >= 0; --i) {\n         dest[i*2+1] = 255;\n         dest[i*2+0] = src[i];\n      }\n   } else {\n      STBI_ASSERT(img_n == 3);\n      for (i=x-1; i >= 0; --i) {\n         dest[i*4+3] = 255;\n         dest[i*4+2] = src[i*3+2];\n         dest[i*4+1] = src[i*3+1];\n         dest[i*4+0] = src[i*3+0];\n      }\n   }\n}\n\n// create the png data from post-deflated data\nstatic int stbi__create_png_image_raw(stbi__png *a, stbi_uc *raw, stbi__uint32 raw_len, int out_n, stbi__uint32 x, stbi__uint32 y, int depth, int color)\n{\n   int bytes = (depth == 16 ? 
2 : 1);\n   stbi__context *s = a->s;\n   stbi__uint32 i,j,stride = x*out_n*bytes;\n   stbi__uint32 img_len, img_width_bytes;\n   stbi_uc *filter_buf;\n   int all_ok = 1;\n   int k;\n   int img_n = s->img_n; // copy it into a local for later\n\n   int output_bytes = out_n*bytes;\n   int filter_bytes = img_n*bytes;\n   int width = x;\n\n   STBI_ASSERT(out_n == s->img_n || out_n == s->img_n+1);\n   a->out = (stbi_uc *) stbi__malloc_mad3(x, y, output_bytes, 0); // extra bytes to write off the end into\n   if (!a->out) return stbi__err(\"outofmem\", \"Out of memory\");\n\n   // note: error exits here don't need to clean up a->out individually,\n   // stbi__do_png always does on error.\n   if (!stbi__mad3sizes_valid(img_n, x, depth, 7)) return stbi__err(\"too large\", \"Corrupt PNG\");\n   img_width_bytes = (((img_n * x * depth) + 7) >> 3);\n   if (!stbi__mad2sizes_valid(img_width_bytes, y, img_width_bytes)) return stbi__err(\"too large\", \"Corrupt PNG\");\n   img_len = (img_width_bytes + 1) * y;\n\n   // we used to check for exact match between raw_len and img_len on non-interlaced PNGs,\n   // but issue #276 reported a PNG in the wild that had extra data at the end (all zeros),\n   // so just check for raw_len < img_len always.\n   if (raw_len < img_len) return stbi__err(\"not enough pixels\",\"Corrupt PNG\");\n\n   // Allocate two scan lines worth of filter workspace buffer.\n   filter_buf = (stbi_uc *) stbi__malloc_mad2(img_width_bytes, 2, 0);\n   if (!filter_buf) return stbi__err(\"outofmem\", \"Out of memory\");\n\n   // Filtering for low-bit-depth images\n   if (depth < 8) {\n      filter_bytes = 1;\n      width = img_width_bytes;\n   }\n\n   for (j=0; j < y; ++j) {\n      // cur/prior filter buffers alternate\n      stbi_uc *cur = filter_buf + (j & 1)*img_width_bytes;\n      stbi_uc *prior = filter_buf + (~j & 1)*img_width_bytes;\n      stbi_uc *dest = a->out + stride*j;\n      int nk = width * filter_bytes;\n      int filter = *raw++;\n\n      // check filter 
type\n      if (filter > 4) {\n         all_ok = stbi__err(\"invalid filter\",\"Corrupt PNG\");\n         break;\n      }\n\n      // if first row, use special filter that doesn't sample previous row\n      if (j == 0) filter = first_row_filter[filter];\n\n      // perform actual filtering\n      switch (filter) {\n      case STBI__F_none:\n         memcpy(cur, raw, nk);\n         break;\n      case STBI__F_sub:\n         memcpy(cur, raw, filter_bytes);\n         for (k = filter_bytes; k < nk; ++k)\n            cur[k] = STBI__BYTECAST(raw[k] + cur[k-filter_bytes]);\n         break;\n      case STBI__F_up:\n         for (k = 0; k < nk; ++k)\n            cur[k] = STBI__BYTECAST(raw[k] + prior[k]);\n         break;\n      case STBI__F_avg:\n         for (k = 0; k < filter_bytes; ++k)\n            cur[k] = STBI__BYTECAST(raw[k] + (prior[k]>>1));\n         for (k = filter_bytes; k < nk; ++k)\n            cur[k] = STBI__BYTECAST(raw[k] + ((prior[k] + cur[k-filter_bytes])>>1));\n         break;\n      case STBI__F_paeth:\n         for (k = 0; k < filter_bytes; ++k)\n            cur[k] = STBI__BYTECAST(raw[k] + prior[k]); // prior[k] == stbi__paeth(0,prior[k],0)\n         for (k = filter_bytes; k < nk; ++k)\n            cur[k] = STBI__BYTECAST(raw[k] + stbi__paeth(cur[k-filter_bytes], prior[k], prior[k-filter_bytes]));\n         break;\n      case STBI__F_avg_first:\n         memcpy(cur, raw, filter_bytes);\n         for (k = filter_bytes; k < nk; ++k)\n            cur[k] = STBI__BYTECAST(raw[k] + (cur[k-filter_bytes] >> 1));\n         break;\n      }\n\n      raw += nk;\n\n      // expand decoded bits in cur to dest, also adding an extra alpha channel if desired\n      if (depth < 8) {\n         stbi_uc scale = (color == 0) ? 
stbi__depth_scale_table[depth] : 1; // scale grayscale values to 0..255 range\n         stbi_uc *in = cur;\n         stbi_uc *out = dest;\n         stbi_uc inb = 0;\n         stbi__uint32 nsmp = x*img_n;\n\n         // expand bits to bytes first\n         if (depth == 4) {\n            for (i=0; i < nsmp; ++i) {\n               if ((i & 1) == 0) inb = *in++;\n               *out++ = scale * (inb >> 4);\n               inb <<= 4;\n            }\n         } else if (depth == 2) {\n            for (i=0; i < nsmp; ++i) {\n               if ((i & 3) == 0) inb = *in++;\n               *out++ = scale * (inb >> 6);\n               inb <<= 2;\n            }\n         } else {\n            STBI_ASSERT(depth == 1);\n            for (i=0; i < nsmp; ++i) {\n               if ((i & 7) == 0) inb = *in++;\n               *out++ = scale * (inb >> 7);\n               inb <<= 1;\n            }\n         }\n\n         // insert alpha=255 values if desired\n         if (img_n != out_n)\n            stbi__create_png_alpha_expand8(dest, dest, x, img_n);\n      } else if (depth == 8) {\n         if (img_n == out_n)\n            memcpy(dest, cur, x*img_n);\n         else\n            stbi__create_png_alpha_expand8(dest, cur, x, img_n);\n      } else if (depth == 16) {\n         // convert the image data from big-endian to platform-native\n         stbi__uint16 *dest16 = (stbi__uint16*)dest;\n         stbi__uint32 nsmp = x*img_n;\n\n         if (img_n == out_n) {\n            for (i = 0; i < nsmp; ++i, ++dest16, cur += 2)\n               *dest16 = (cur[0] << 8) | cur[1];\n         } else {\n            STBI_ASSERT(img_n+1 == out_n);\n            if (img_n == 1) {\n               for (i = 0; i < x; ++i, dest16 += 2, cur += 2) {\n                  dest16[0] = (cur[0] << 8) | cur[1];\n                  dest16[1] = 0xffff;\n               }\n            } else {\n               STBI_ASSERT(img_n == 3);\n               for (i = 0; i < x; ++i, dest16 += 4, cur += 6) {\n                  dest16[0] 
= (cur[0] << 8) | cur[1];\n                  dest16[1] = (cur[2] << 8) | cur[3];\n                  dest16[2] = (cur[4] << 8) | cur[5];\n                  dest16[3] = 0xffff;\n               }\n            }\n         }\n      }\n   }\n\n   STBI_FREE(filter_buf);\n   if (!all_ok) return 0;\n\n   return 1;\n}\n\nstatic int stbi__create_png_image(stbi__png *a, stbi_uc *image_data, stbi__uint32 image_data_len, int out_n, int depth, int color, int interlaced)\n{\n   int bytes = (depth == 16 ? 2 : 1);\n   int out_bytes = out_n * bytes;\n   stbi_uc *final;\n   int p;\n   if (!interlaced)\n      return stbi__create_png_image_raw(a, image_data, image_data_len, out_n, a->s->img_x, a->s->img_y, depth, color);\n\n   // de-interlacing\n   final = (stbi_uc *) stbi__malloc_mad3(a->s->img_x, a->s->img_y, out_bytes, 0);\n   if (!final) return stbi__err(\"outofmem\", \"Out of memory\");\n   for (p=0; p < 7; ++p) {\n      int xorig[] = { 0,4,0,2,0,1,0 };\n      int yorig[] = { 0,0,4,0,2,0,1 };\n      int xspc[]  = { 8,8,4,4,2,2,1 };\n      int yspc[]  = { 8,8,8,4,4,2,2 };\n      int i,j,x,y;\n      // pass1_x[4] = 0, pass1_x[5] = 1, pass1_x[12] = 1\n      x = (a->s->img_x - xorig[p] + xspc[p]-1) / xspc[p];\n      y = (a->s->img_y - yorig[p] + yspc[p]-1) / yspc[p];\n      if (x && y) {\n         stbi__uint32 img_len = ((((a->s->img_n * x * depth) + 7) >> 3) + 1) * y;\n         if (!stbi__create_png_image_raw(a, image_data, image_data_len, out_n, x, y, depth, color)) {\n            STBI_FREE(final);\n            return 0;\n         }\n         for (j=0; j < y; ++j) {\n            for (i=0; i < x; ++i) {\n               int out_y = j*yspc[p]+yorig[p];\n               int out_x = i*xspc[p]+xorig[p];\n               memcpy(final + out_y*a->s->img_x*out_bytes + out_x*out_bytes,\n                      a->out + (j*x+i)*out_bytes, out_bytes);\n            }\n         }\n         STBI_FREE(a->out);\n         image_data += img_len;\n         image_data_len -= img_len;\n      }\n   }\n   a->out 
= final;\n\n   return 1;\n}\n\nstatic int stbi__compute_transparency(stbi__png *z, stbi_uc tc[3], int out_n)\n{\n   stbi__context *s = z->s;\n   stbi__uint32 i, pixel_count = s->img_x * s->img_y;\n   stbi_uc *p = z->out;\n\n   // compute color-based transparency, assuming we've\n   // already got 255 as the alpha value in the output\n   STBI_ASSERT(out_n == 2 || out_n == 4);\n\n   if (out_n == 2) {\n      for (i=0; i < pixel_count; ++i) {\n         p[1] = (p[0] == tc[0] ? 0 : 255);\n         p += 2;\n      }\n   } else {\n      for (i=0; i < pixel_count; ++i) {\n         if (p[0] == tc[0] && p[1] == tc[1] && p[2] == tc[2])\n            p[3] = 0;\n         p += 4;\n      }\n   }\n   return 1;\n}\n\nstatic int stbi__compute_transparency16(stbi__png *z, stbi__uint16 tc[3], int out_n)\n{\n   stbi__context *s = z->s;\n   stbi__uint32 i, pixel_count = s->img_x * s->img_y;\n   stbi__uint16 *p = (stbi__uint16*) z->out;\n\n   // compute color-based transparency, assuming we've\n   // already got 65535 as the alpha value in the output\n   STBI_ASSERT(out_n == 2 || out_n == 4);\n\n   if (out_n == 2) {\n      for (i = 0; i < pixel_count; ++i) {\n         p[1] = (p[0] == tc[0] ? 
0 : 65535);\n         p += 2;\n      }\n   } else {\n      for (i = 0; i < pixel_count; ++i) {\n         if (p[0] == tc[0] && p[1] == tc[1] && p[2] == tc[2])\n            p[3] = 0;\n         p += 4;\n      }\n   }\n   return 1;\n}\n\nstatic int stbi__expand_png_palette(stbi__png *a, stbi_uc *palette, int len, int pal_img_n)\n{\n   stbi__uint32 i, pixel_count = a->s->img_x * a->s->img_y;\n   stbi_uc *p, *temp_out, *orig = a->out;\n\n   p = (stbi_uc *) stbi__malloc_mad2(pixel_count, pal_img_n, 0);\n   if (p == NULL) return stbi__err(\"outofmem\", \"Out of memory\");\n\n   // between here and free(out) below, exiting would leak\n   temp_out = p;\n\n   if (pal_img_n == 3) {\n      for (i=0; i < pixel_count; ++i) {\n         int n = orig[i]*4;\n         p[0] = palette[n  ];\n         p[1] = palette[n+1];\n         p[2] = palette[n+2];\n         p += 3;\n      }\n   } else {\n      for (i=0; i < pixel_count; ++i) {\n         int n = orig[i]*4;\n         p[0] = palette[n  ];\n         p[1] = palette[n+1];\n         p[2] = palette[n+2];\n         p[3] = palette[n+3];\n         p += 4;\n      }\n   }\n   STBI_FREE(a->out);\n   a->out = temp_out;\n\n   STBI_NOTUSED(len);\n\n   return 1;\n}\n\nstatic int stbi__unpremultiply_on_load_global = 0;\nstatic int stbi__de_iphone_flag_global = 0;\n\nSTBIDEF void stbi_set_unpremultiply_on_load(int flag_true_if_should_unpremultiply)\n{\n   stbi__unpremultiply_on_load_global = flag_true_if_should_unpremultiply;\n}\n\nSTBIDEF void stbi_convert_iphone_png_to_rgb(int flag_true_if_should_convert)\n{\n   stbi__de_iphone_flag_global = flag_true_if_should_convert;\n}\n\n#ifndef STBI_THREAD_LOCAL\n#define stbi__unpremultiply_on_load  stbi__unpremultiply_on_load_global\n#define stbi__de_iphone_flag  stbi__de_iphone_flag_global\n#else\nstatic STBI_THREAD_LOCAL int stbi__unpremultiply_on_load_local, stbi__unpremultiply_on_load_set;\nstatic STBI_THREAD_LOCAL int stbi__de_iphone_flag_local, stbi__de_iphone_flag_set;\n\nSTBIDEF void 
stbi_set_unpremultiply_on_load_thread(int flag_true_if_should_unpremultiply)\n{\n   stbi__unpremultiply_on_load_local = flag_true_if_should_unpremultiply;\n   stbi__unpremultiply_on_load_set = 1;\n}\n\nSTBIDEF void stbi_convert_iphone_png_to_rgb_thread(int flag_true_if_should_convert)\n{\n   stbi__de_iphone_flag_local = flag_true_if_should_convert;\n   stbi__de_iphone_flag_set = 1;\n}\n\n#define stbi__unpremultiply_on_load  (stbi__unpremultiply_on_load_set           \\\n                                       ? stbi__unpremultiply_on_load_local      \\\n                                       : stbi__unpremultiply_on_load_global)\n#define stbi__de_iphone_flag  (stbi__de_iphone_flag_set                         \\\n                                ? stbi__de_iphone_flag_local                    \\\n                                : stbi__de_iphone_flag_global)\n#endif // STBI_THREAD_LOCAL\n\nstatic void stbi__de_iphone(stbi__png *z)\n{\n   stbi__context *s = z->s;\n   stbi__uint32 i, pixel_count = s->img_x * s->img_y;\n   stbi_uc *p = z->out;\n\n   if (s->img_out_n == 3) {  // convert bgr to rgb\n      for (i=0; i < pixel_count; ++i) {\n         stbi_uc t = p[0];\n         p[0] = p[2];\n         p[2] = t;\n         p += 3;\n      }\n   } else {\n      STBI_ASSERT(s->img_out_n == 4);\n      if (stbi__unpremultiply_on_load) {\n         // convert bgr to rgb and unpremultiply\n         for (i=0; i < pixel_count; ++i) {\n            stbi_uc a = p[3];\n            stbi_uc t = p[0];\n            if (a) {\n               stbi_uc half = a / 2;\n               p[0] = (p[2] * 255 + half) / a;\n               p[1] = (p[1] * 255 + half) / a;\n               p[2] = ( t   * 255 + half) / a;\n            } else {\n               p[0] = p[2];\n               p[2] = t;\n            }\n            p += 4;\n         }\n      } else {\n         // convert bgr to rgb\n         for (i=0; i < pixel_count; ++i) {\n            stbi_uc t = p[0];\n            p[0] = p[2];\n            p[2] = t;\n 
           p += 4;\n         }\n      }\n   }\n}\n\n#define STBI__PNG_TYPE(a,b,c,d)  (((unsigned) (a) << 24) + ((unsigned) (b) << 16) + ((unsigned) (c) << 8) + (unsigned) (d))\n\nstatic int stbi__parse_png_file(stbi__png *z, int scan, int req_comp)\n{\n   stbi_uc palette[1024], pal_img_n=0;\n   stbi_uc has_trans=0, tc[3]={0};\n   stbi__uint16 tc16[3];\n   stbi__uint32 ioff=0, idata_limit=0, i, pal_len=0;\n   int first=1,k,interlace=0, color=0, is_iphone=0;\n   stbi__context *s = z->s;\n\n   z->expanded = NULL;\n   z->idata = NULL;\n   z->out = NULL;\n\n   if (!stbi__check_png_header(s)) return 0;\n\n   if (scan == STBI__SCAN_type) return 1;\n\n   for (;;) {\n      stbi__pngchunk c = stbi__get_chunk_header(s);\n      switch (c.type) {\n         case STBI__PNG_TYPE('C','g','B','I'):\n            is_iphone = 1;\n            stbi__skip(s, c.length);\n            break;\n         case STBI__PNG_TYPE('I','H','D','R'): {\n            int comp,filter;\n            if (!first) return stbi__err(\"multiple IHDR\",\"Corrupt PNG\");\n            first = 0;\n            if (c.length != 13) return stbi__err(\"bad IHDR len\",\"Corrupt PNG\");\n            s->img_x = stbi__get32be(s);\n            s->img_y = stbi__get32be(s);\n            if (s->img_y > STBI_MAX_DIMENSIONS) return stbi__err(\"too large\",\"Very large image (corrupt?)\");\n            if (s->img_x > STBI_MAX_DIMENSIONS) return stbi__err(\"too large\",\"Very large image (corrupt?)\");\n            z->depth = stbi__get8(s);  if (z->depth != 1 && z->depth != 2 && z->depth != 4 && z->depth != 8 && z->depth != 16)  return stbi__err(\"1/2/4/8/16-bit only\",\"PNG not supported: 1/2/4/8/16-bit only\");\n            color = stbi__get8(s);  if (color > 6)         return stbi__err(\"bad ctype\",\"Corrupt PNG\");\n            if (color == 3 && z->depth == 16)                  return stbi__err(\"bad ctype\",\"Corrupt PNG\");\n            if (color == 3) pal_img_n = 3; else if (color & 1) return stbi__err(\"bad ctype\",\"Corrupt 
PNG\");\n            comp  = stbi__get8(s);  if (comp) return stbi__err(\"bad comp method\",\"Corrupt PNG\");\n            filter= stbi__get8(s);  if (filter) return stbi__err(\"bad filter method\",\"Corrupt PNG\");\n            interlace = stbi__get8(s); if (interlace>1) return stbi__err(\"bad interlace method\",\"Corrupt PNG\");\n            if (!s->img_x || !s->img_y) return stbi__err(\"0-pixel image\",\"Corrupt PNG\");\n            if (!pal_img_n) {\n               s->img_n = (color & 2 ? 3 : 1) + (color & 4 ? 1 : 0);\n               if ((1 << 30) / s->img_x / s->img_n < s->img_y) return stbi__err(\"too large\", \"Image too large to decode\");\n            } else {\n               // if paletted, then pal_n is our final components, and\n               // img_n is # components to decompress/filter.\n               s->img_n = 1;\n               if ((1 << 30) / s->img_x / 4 < s->img_y) return stbi__err(\"too large\",\"Corrupt PNG\");\n            }\n            // even with SCAN_header, have to scan to see if we have a tRNS\n            break;\n         }\n\n         case STBI__PNG_TYPE('P','L','T','E'):  {\n            if (first) return stbi__err(\"first not IHDR\", \"Corrupt PNG\");\n            if (c.length > 256*3) return stbi__err(\"invalid PLTE\",\"Corrupt PNG\");\n            pal_len = c.length / 3;\n            if (pal_len * 3 != c.length) return stbi__err(\"invalid PLTE\",\"Corrupt PNG\");\n            for (i=0; i < pal_len; ++i) {\n               palette[i*4+0] = stbi__get8(s);\n               palette[i*4+1] = stbi__get8(s);\n               palette[i*4+2] = stbi__get8(s);\n               palette[i*4+3] = 255;\n            }\n            break;\n         }\n\n         case STBI__PNG_TYPE('t','R','N','S'): {\n            if (first) return stbi__err(\"first not IHDR\", \"Corrupt PNG\");\n            if (z->idata) return stbi__err(\"tRNS after IDAT\",\"Corrupt PNG\");\n            if (pal_img_n) {\n               if (scan == STBI__SCAN_header) { s->img_n = 
4; return 1; }\n               if (pal_len == 0) return stbi__err(\"tRNS before PLTE\",\"Corrupt PNG\");\n               if (c.length > pal_len) return stbi__err(\"bad tRNS len\",\"Corrupt PNG\");\n               pal_img_n = 4;\n               for (i=0; i < c.length; ++i)\n                  palette[i*4+3] = stbi__get8(s);\n            } else {\n               if (!(s->img_n & 1)) return stbi__err(\"tRNS with alpha\",\"Corrupt PNG\");\n               if (c.length != (stbi__uint32) s->img_n*2) return stbi__err(\"bad tRNS len\",\"Corrupt PNG\");\n               has_trans = 1;\n               // non-paletted with tRNS = constant alpha. if header-scanning, we can stop now.\n               if (scan == STBI__SCAN_header) { ++s->img_n; return 1; }\n               if (z->depth == 16) {\n                  for (k = 0; k < s->img_n && k < 3; ++k) // extra loop test to suppress false GCC warning\n                     tc16[k] = (stbi__uint16)stbi__get16be(s); // copy the values as-is\n               } else {\n                  for (k = 0; k < s->img_n && k < 3; ++k)\n                     tc[k] = (stbi_uc)(stbi__get16be(s) & 255) * stbi__depth_scale_table[z->depth]; // non 8-bit images will be larger\n               }\n            }\n            break;\n         }\n\n         case STBI__PNG_TYPE('I','D','A','T'): {\n            if (first) return stbi__err(\"first not IHDR\", \"Corrupt PNG\");\n            if (pal_img_n && !pal_len) return stbi__err(\"no PLTE\",\"Corrupt PNG\");\n            if (scan == STBI__SCAN_header) {\n               // header scan definitely stops at first IDAT\n               if (pal_img_n)\n                  s->img_n = pal_img_n;\n               return 1;\n            }\n            if (c.length > (1u << 30)) return stbi__err(\"IDAT size limit\", \"IDAT section larger than 2^30 bytes\");\n            if ((int)(ioff + c.length) < (int)ioff) return 0;\n            if (ioff + c.length > idata_limit) {\n               stbi__uint32 idata_limit_old = 
idata_limit;\n               stbi_uc *p;\n               if (idata_limit == 0) idata_limit = c.length > 4096 ? c.length : 4096;\n               while (ioff + c.length > idata_limit)\n                  idata_limit *= 2;\n               STBI_NOTUSED(idata_limit_old);\n               p = (stbi_uc *) STBI_REALLOC_SIZED(z->idata, idata_limit_old, idata_limit); if (p == NULL) return stbi__err(\"outofmem\", \"Out of memory\");\n               z->idata = p;\n            }\n            if (!stbi__getn(s, z->idata+ioff,c.length)) return stbi__err(\"outofdata\",\"Corrupt PNG\");\n            ioff += c.length;\n            break;\n         }\n\n         case STBI__PNG_TYPE('I','E','N','D'): {\n            stbi__uint32 raw_len, bpl;\n            if (first) return stbi__err(\"first not IHDR\", \"Corrupt PNG\");\n            if (scan != STBI__SCAN_load) return 1;\n            if (z->idata == NULL) return stbi__err(\"no IDAT\",\"Corrupt PNG\");\n            // initial guess for decoded data size to avoid unnecessary reallocs\n            bpl = (s->img_x * z->depth + 7) / 8; // bytes per line, per component\n            raw_len = bpl * s->img_y * s->img_n /* pixels */ + s->img_y /* filter mode per row */;\n            z->expanded = (stbi_uc *) stbi_zlib_decode_malloc_guesssize_headerflag((char *) z->idata, ioff, raw_len, (int *) &raw_len, !is_iphone);\n            if (z->expanded == NULL) return 0; // zlib should set error\n            STBI_FREE(z->idata); z->idata = NULL;\n            if ((req_comp == s->img_n+1 && req_comp != 3 && !pal_img_n) || has_trans)\n               s->img_out_n = s->img_n+1;\n            else\n               s->img_out_n = s->img_n;\n            if (!stbi__create_png_image(z, z->expanded, raw_len, s->img_out_n, z->depth, color, interlace)) return 0;\n            if (has_trans) {\n               if (z->depth == 16) {\n                  if (!stbi__compute_transparency16(z, tc16, s->img_out_n)) return 0;\n               } else {\n                  if 
(!stbi__compute_transparency(z, tc, s->img_out_n)) return 0;\n               }\n            }\n            if (is_iphone && stbi__de_iphone_flag && s->img_out_n > 2)\n               stbi__de_iphone(z);\n            if (pal_img_n) {\n               // pal_img_n == 3 or 4\n               s->img_n = pal_img_n; // record the actual colors we had\n               s->img_out_n = pal_img_n;\n               if (req_comp >= 3) s->img_out_n = req_comp;\n               if (!stbi__expand_png_palette(z, palette, pal_len, s->img_out_n))\n                  return 0;\n            } else if (has_trans) {\n               // non-paletted image with tRNS -> source image has (constant) alpha\n               ++s->img_n;\n            }\n            STBI_FREE(z->expanded); z->expanded = NULL;\n            // end of PNG chunk, read and skip CRC\n            stbi__get32be(s);\n            return 1;\n         }\n\n         default:\n            // if critical, fail\n            if (first) return stbi__err(\"first not IHDR\", \"Corrupt PNG\");\n            if ((c.type & (1 << 29)) == 0) {\n               #ifndef STBI_NO_FAILURE_STRINGS\n               // not threadsafe\n               static char invalid_chunk[] = \"XXXX PNG chunk not known\";\n               invalid_chunk[0] = STBI__BYTECAST(c.type >> 24);\n               invalid_chunk[1] = STBI__BYTECAST(c.type >> 16);\n               invalid_chunk[2] = STBI__BYTECAST(c.type >>  8);\n               invalid_chunk[3] = STBI__BYTECAST(c.type >>  0);\n               #endif\n               return stbi__err(invalid_chunk, \"PNG not supported: unknown PNG chunk type\");\n            }\n            stbi__skip(s, c.length);\n            break;\n      }\n      // end of PNG chunk, read and skip CRC\n      stbi__get32be(s);\n   }\n}\n\nstatic void *stbi__do_png(stbi__png *p, int *x, int *y, int *n, int req_comp, stbi__result_info *ri)\n{\n   void *result=NULL;\n   if (req_comp < 0 || req_comp > 4) return stbi__errpuc(\"bad req_comp\", \"Internal 
error\");\n   if (stbi__parse_png_file(p, STBI__SCAN_load, req_comp)) {\n      if (p->depth <= 8)\n         ri->bits_per_channel = 8;\n      else if (p->depth == 16)\n         ri->bits_per_channel = 16;\n      else\n         return stbi__errpuc(\"bad bits_per_channel\", \"PNG not supported: unsupported color depth\");\n      result = p->out;\n      p->out = NULL;\n      if (req_comp && req_comp != p->s->img_out_n) {\n         if (ri->bits_per_channel == 8)\n            result = stbi__convert_format((unsigned char *) result, p->s->img_out_n, req_comp, p->s->img_x, p->s->img_y);\n         else\n            result = stbi__convert_format16((stbi__uint16 *) result, p->s->img_out_n, req_comp, p->s->img_x, p->s->img_y);\n         p->s->img_out_n = req_comp;\n         if (result == NULL) return result;\n      }\n      *x = p->s->img_x;\n      *y = p->s->img_y;\n      if (n) *n = p->s->img_n;\n   }\n   STBI_FREE(p->out);      p->out      = NULL;\n   STBI_FREE(p->expanded); p->expanded = NULL;\n   STBI_FREE(p->idata);    p->idata    = NULL;\n\n   return result;\n}\n\nstatic void *stbi__png_load(stbi__context *s, int *x, int *y, int *comp, int req_comp, stbi__result_info *ri)\n{\n   stbi__png p;\n   p.s = s;\n   return stbi__do_png(&p, x,y,comp,req_comp, ri);\n}\n\nstatic int stbi__png_test(stbi__context *s)\n{\n   int r;\n   r = stbi__check_png_header(s);\n   stbi__rewind(s);\n   return r;\n}\n\nstatic int stbi__png_info_raw(stbi__png *p, int *x, int *y, int *comp)\n{\n   if (!stbi__parse_png_file(p, STBI__SCAN_header, 0)) {\n      stbi__rewind( p->s );\n      return 0;\n   }\n   if (x) *x = p->s->img_x;\n   if (y) *y = p->s->img_y;\n   if (comp) *comp = p->s->img_n;\n   return 1;\n}\n\nstatic int stbi__png_info(stbi__context *s, int *x, int *y, int *comp)\n{\n   stbi__png p;\n   p.s = s;\n   return stbi__png_info_raw(&p, x, y, comp);\n}\n\nstatic int stbi__png_is16(stbi__context *s)\n{\n   stbi__png p;\n   p.s = s;\n   if (!stbi__png_info_raw(&p, NULL, NULL, NULL))\n\t   
return 0;\n   if (p.depth != 16) {\n      stbi__rewind(p.s);\n      return 0;\n   }\n   return 1;\n}\n#endif\n\n// Microsoft/Windows BMP image\n\n#ifndef STBI_NO_BMP\nstatic int stbi__bmp_test_raw(stbi__context *s)\n{\n   int r;\n   int sz;\n   if (stbi__get8(s) != 'B') return 0;\n   if (stbi__get8(s) != 'M') return 0;\n   stbi__get32le(s); // discard filesize\n   stbi__get16le(s); // discard reserved\n   stbi__get16le(s); // discard reserved\n   stbi__get32le(s); // discard data offset\n   sz = stbi__get32le(s);\n   r = (sz == 12 || sz == 40 || sz == 56 || sz == 108 || sz == 124);\n   return r;\n}\n\nstatic int stbi__bmp_test(stbi__context *s)\n{\n   int r = stbi__bmp_test_raw(s);\n   stbi__rewind(s);\n   return r;\n}\n\n\n// returns 0..31 for the highest set bit\nstatic int stbi__high_bit(unsigned int z)\n{\n   int n=0;\n   if (z == 0) return -1;\n   if (z >= 0x10000) { n += 16; z >>= 16; }\n   if (z >= 0x00100) { n +=  8; z >>=  8; }\n   if (z >= 0x00010) { n +=  4; z >>=  4; }\n   if (z >= 0x00004) { n +=  2; z >>=  2; }\n   if (z >= 0x00002) { n +=  1;/* >>=  1;*/ }\n   return n;\n}\n\nstatic int stbi__bitcount(unsigned int a)\n{\n   a = (a & 0x55555555) + ((a >>  1) & 0x55555555); // max 2\n   a = (a & 0x33333333) + ((a >>  2) & 0x33333333); // max 4\n   a = (a + (a >> 4)) & 0x0f0f0f0f; // max 8 per 4, now 8 bits\n   a = (a + (a >> 8)); // max 16 per 8 bits\n   a = (a + (a >> 16)); // max 32 per 8 bits\n   return a & 0xff;\n}\n\n// extract an arbitrarily-aligned N-bit value (N=bits)\n// from v, and then make it 8-bits long and fractionally\n// extend it to full range.\nstatic int stbi__shiftsigned(unsigned int v, int shift, int bits)\n{\n   static unsigned int mul_table[9] = {\n      0,\n      0xff/*0b11111111*/, 0x55/*0b01010101*/, 0x49/*0b01001001*/, 0x11/*0b00010001*/,\n      0x21/*0b00100001*/, 0x41/*0b01000001*/, 0x81/*0b10000001*/, 0x01/*0b00000001*/,\n   };\n   static unsigned int shift_table[9] = {\n      0, 0,0,1,0,2,4,6,0,\n   };\n   if (shift 
< 0)\n      v <<= -shift;\n   else\n      v >>= shift;\n   STBI_ASSERT(v < 256);\n   v >>= (8-bits);\n   STBI_ASSERT(bits >= 0 && bits <= 8);\n   return (int) ((unsigned) v * mul_table[bits]) >> shift_table[bits];\n}\n\ntypedef struct\n{\n   int bpp, offset, hsz;\n   unsigned int mr,mg,mb,ma, all_a;\n   int extra_read;\n} stbi__bmp_data;\n\nstatic int stbi__bmp_set_mask_defaults(stbi__bmp_data *info, int compress)\n{\n   // BI_BITFIELDS specifies masks explicitly, don't override\n   if (compress == 3)\n      return 1;\n\n   if (compress == 0) {\n      if (info->bpp == 16) {\n         info->mr = 31u << 10;\n         info->mg = 31u <<  5;\n         info->mb = 31u <<  0;\n      } else if (info->bpp == 32) {\n         info->mr = 0xffu << 16;\n         info->mg = 0xffu <<  8;\n         info->mb = 0xffu <<  0;\n         info->ma = 0xffu << 24;\n         info->all_a = 0; // if all_a is 0 at end, then we loaded alpha channel but it was all 0\n      } else {\n         // otherwise, use defaults, which is all-0\n         info->mr = info->mg = info->mb = info->ma = 0;\n      }\n      return 1;\n   }\n   return 0; // error\n}\n\nstatic void *stbi__bmp_parse_header(stbi__context *s, stbi__bmp_data *info)\n{\n   int hsz;\n   if (stbi__get8(s) != 'B' || stbi__get8(s) != 'M') return stbi__errpuc(\"not BMP\", \"Corrupt BMP\");\n   stbi__get32le(s); // discard filesize\n   stbi__get16le(s); // discard reserved\n   stbi__get16le(s); // discard reserved\n   info->offset = stbi__get32le(s);\n   info->hsz = hsz = stbi__get32le(s);\n   info->mr = info->mg = info->mb = info->ma = 0;\n   info->extra_read = 14;\n\n   if (info->offset < 0) return stbi__errpuc(\"bad BMP\", \"bad BMP\");\n\n   if (hsz != 12 && hsz != 40 && hsz != 56 && hsz != 108 && hsz != 124) return stbi__errpuc(\"unknown BMP\", \"BMP type not supported: unknown\");\n   if (hsz == 12) {\n      s->img_x = stbi__get16le(s);\n      s->img_y = stbi__get16le(s);\n   } else {\n      s->img_x = stbi__get32le(s);\n      s->img_y = 
stbi__get32le(s);\n   }\n   if (stbi__get16le(s) != 1) return stbi__errpuc(\"bad BMP\", \"bad BMP\");\n   info->bpp = stbi__get16le(s);\n   if (hsz != 12) {\n      int compress = stbi__get32le(s);\n      if (compress == 1 || compress == 2) return stbi__errpuc(\"BMP RLE\", \"BMP type not supported: RLE\");\n      if (compress >= 4) return stbi__errpuc(\"BMP JPEG/PNG\", \"BMP type not supported: unsupported compression\"); // this includes PNG/JPEG modes\n      if (compress == 3 && info->bpp != 16 && info->bpp != 32) return stbi__errpuc(\"bad BMP\", \"bad BMP\"); // bitfields requires 16 or 32 bits/pixel\n      stbi__get32le(s); // discard sizeof\n      stbi__get32le(s); // discard hres\n      stbi__get32le(s); // discard vres\n      stbi__get32le(s); // discard colorsused\n      stbi__get32le(s); // discard max important\n      if (hsz == 40 || hsz == 56) {\n         if (hsz == 56) {\n            stbi__get32le(s);\n            stbi__get32le(s);\n            stbi__get32le(s);\n            stbi__get32le(s);\n         }\n         if (info->bpp == 16 || info->bpp == 32) {\n            if (compress == 0) {\n               stbi__bmp_set_mask_defaults(info, compress);\n            } else if (compress == 3) {\n               info->mr = stbi__get32le(s);\n               info->mg = stbi__get32le(s);\n               info->mb = stbi__get32le(s);\n               info->extra_read += 12;\n               // not documented, but generated by photoshop and handled by mspaint\n               if (info->mr == info->mg && info->mg == info->mb) {\n                  // ?!?!?\n                  return stbi__errpuc(\"bad BMP\", \"bad BMP\");\n               }\n            } else\n               return stbi__errpuc(\"bad BMP\", \"bad BMP\");\n         }\n      } else {\n         // V4/V5 header\n         int i;\n         if (hsz != 108 && hsz != 124)\n            return stbi__errpuc(\"bad BMP\", \"bad BMP\");\n         info->mr = stbi__get32le(s);\n         info->mg = stbi__get32le(s);\n       
  info->mb = stbi__get32le(s);\n         info->ma = stbi__get32le(s);\n         if (compress != 3) // override mr/mg/mb unless in BI_BITFIELDS mode, as per docs\n            stbi__bmp_set_mask_defaults(info, compress);\n         stbi__get32le(s); // discard color space\n         for (i=0; i < 12; ++i)\n            stbi__get32le(s); // discard color space parameters\n         if (hsz == 124) {\n            stbi__get32le(s); // discard rendering intent\n            stbi__get32le(s); // discard offset of profile data\n            stbi__get32le(s); // discard size of profile data\n            stbi__get32le(s); // discard reserved\n         }\n      }\n   }\n   return (void *) 1;\n}\n\n\nstatic void *stbi__bmp_load(stbi__context *s, int *x, int *y, int *comp, int req_comp, stbi__result_info *ri)\n{\n   stbi_uc *out;\n   unsigned int mr=0,mg=0,mb=0,ma=0, all_a;\n   stbi_uc pal[256][4];\n   int psize=0,i,j,width;\n   int flip_vertically, pad, target;\n   stbi__bmp_data info;\n   STBI_NOTUSED(ri);\n\n   info.all_a = 255;\n   if (stbi__bmp_parse_header(s, &info) == NULL)\n      return NULL; // error code already set\n\n   flip_vertically = ((int) s->img_y) > 0;\n   s->img_y = abs((int) s->img_y);\n\n   if (s->img_y > STBI_MAX_DIMENSIONS) return stbi__errpuc(\"too large\",\"Very large image (corrupt?)\");\n   if (s->img_x > STBI_MAX_DIMENSIONS) return stbi__errpuc(\"too large\",\"Very large image (corrupt?)\");\n\n   mr = info.mr;\n   mg = info.mg;\n   mb = info.mb;\n   ma = info.ma;\n   all_a = info.all_a;\n\n   if (info.hsz == 12) {\n      if (info.bpp < 24)\n         psize = (info.offset - info.extra_read - 24) / 3;\n   } else {\n      if (info.bpp < 16)\n         psize = (info.offset - info.extra_read - info.hsz) >> 2;\n   }\n   if (psize == 0) {\n      // accept some number of extra bytes after the header, but if the offset points either to before\n      // the header ends or implies a large amount of extra data, reject the file as malformed\n      int bytes_read_so_far 
= s->callback_already_read + (int)(s->img_buffer - s->img_buffer_original);\n      int header_limit = 1024; // max we actually read is below 256 bytes currently.\n      int extra_data_limit = 256*4; // what ordinarily goes here is a palette; 256 entries*4 bytes is its max size.\n      if (bytes_read_so_far <= 0 || bytes_read_so_far > header_limit) {\n         return stbi__errpuc(\"bad header\", \"Corrupt BMP\");\n      }\n      // we established that bytes_read_so_far is positive and sensible.\n      // the first half of this test rejects offsets that are either too small positives, or\n      // negative, and guarantees that info.offset >= bytes_read_so_far > 0. this in turn\n      // ensures the number computed in the second half of the test can't overflow.\n      if (info.offset < bytes_read_so_far || info.offset - bytes_read_so_far > extra_data_limit) {\n         return stbi__errpuc(\"bad offset\", \"Corrupt BMP\");\n      } else {\n         stbi__skip(s, info.offset - bytes_read_so_far);\n      }\n   }\n\n   if (info.bpp == 24 && ma == 0xff000000)\n      s->img_n = 3;\n   else\n      s->img_n = ma ? 
4 : 3;\n   if (req_comp && req_comp >= 3) // we can directly decode 3 or 4\n      target = req_comp;\n   else\n      target = s->img_n; // if they want monochrome, we'll post-convert\n\n   // sanity-check size\n   if (!stbi__mad3sizes_valid(target, s->img_x, s->img_y, 0))\n      return stbi__errpuc(\"too large\", \"Corrupt BMP\");\n\n   out = (stbi_uc *) stbi__malloc_mad3(target, s->img_x, s->img_y, 0);\n   if (!out) return stbi__errpuc(\"outofmem\", \"Out of memory\");\n   if (info.bpp < 16) {\n      int z=0;\n      if (psize == 0 || psize > 256) { STBI_FREE(out); return stbi__errpuc(\"invalid\", \"Corrupt BMP\"); }\n      for (i=0; i < psize; ++i) {\n         pal[i][2] = stbi__get8(s);\n         pal[i][1] = stbi__get8(s);\n         pal[i][0] = stbi__get8(s);\n         if (info.hsz != 12) stbi__get8(s);\n         pal[i][3] = 255;\n      }\n      stbi__skip(s, info.offset - info.extra_read - info.hsz - psize * (info.hsz == 12 ? 3 : 4));\n      if (info.bpp == 1) width = (s->img_x + 7) >> 3;\n      else if (info.bpp == 4) width = (s->img_x + 1) >> 1;\n      else if (info.bpp == 8) width = s->img_x;\n      else { STBI_FREE(out); return stbi__errpuc(\"bad bpp\", \"Corrupt BMP\"); }\n      pad = (-width)&3;\n      if (info.bpp == 1) {\n         for (j=0; j < (int) s->img_y; ++j) {\n            int bit_offset = 7, v = stbi__get8(s);\n            for (i=0; i < (int) s->img_x; ++i) {\n               int color = (v>>bit_offset)&0x1;\n               out[z++] = pal[color][0];\n               out[z++] = pal[color][1];\n               out[z++] = pal[color][2];\n               if (target == 4) out[z++] = 255;\n               if (i+1 == (int) s->img_x) break;\n               if((--bit_offset) < 0) {\n                  bit_offset = 7;\n                  v = stbi__get8(s);\n               }\n            }\n            stbi__skip(s, pad);\n         }\n      } else {\n         for (j=0; j < (int) s->img_y; ++j) {\n            for (i=0; i < (int) s->img_x; i += 2) {\n               
int v=stbi__get8(s),v2=0;\n               if (info.bpp == 4) {\n                  v2 = v & 15;\n                  v >>= 4;\n               }\n               out[z++] = pal[v][0];\n               out[z++] = pal[v][1];\n               out[z++] = pal[v][2];\n               if (target == 4) out[z++] = 255;\n               if (i+1 == (int) s->img_x) break;\n               v = (info.bpp == 8) ? stbi__get8(s) : v2;\n               out[z++] = pal[v][0];\n               out[z++] = pal[v][1];\n               out[z++] = pal[v][2];\n               if (target == 4) out[z++] = 255;\n            }\n            stbi__skip(s, pad);\n         }\n      }\n   } else {\n      int rshift=0,gshift=0,bshift=0,ashift=0,rcount=0,gcount=0,bcount=0,acount=0;\n      int z = 0;\n      int easy=0;\n      stbi__skip(s, info.offset - info.extra_read - info.hsz);\n      if (info.bpp == 24) width = 3 * s->img_x;\n      else if (info.bpp == 16) width = 2*s->img_x;\n      else /* bpp = 32 and pad = 0 */ width=0;\n      pad = (-width) & 3;\n      if (info.bpp == 24) {\n         easy = 1;\n      } else if (info.bpp == 32) {\n         if (mb == 0xff && mg == 0xff00 && mr == 0x00ff0000 && ma == 0xff000000)\n            easy = 2;\n      }\n      if (!easy) {\n         if (!mr || !mg || !mb) { STBI_FREE(out); return stbi__errpuc(\"bad masks\", \"Corrupt BMP\"); }\n         // right shift amt to put high bit in position #7\n         rshift = stbi__high_bit(mr)-7; rcount = stbi__bitcount(mr);\n         gshift = stbi__high_bit(mg)-7; gcount = stbi__bitcount(mg);\n         bshift = stbi__high_bit(mb)-7; bcount = stbi__bitcount(mb);\n         ashift = stbi__high_bit(ma)-7; acount = stbi__bitcount(ma);\n         if (rcount > 8 || gcount > 8 || bcount > 8 || acount > 8) { STBI_FREE(out); return stbi__errpuc(\"bad masks\", \"Corrupt BMP\"); }\n      }\n      for (j=0; j < (int) s->img_y; ++j) {\n         if (easy) {\n            for (i=0; i < (int) s->img_x; ++i) {\n               unsigned char a;\n               
out[z+2] = stbi__get8(s);\n               out[z+1] = stbi__get8(s);\n               out[z+0] = stbi__get8(s);\n               z += 3;\n               a = (easy == 2 ? stbi__get8(s) : 255);\n               all_a |= a;\n               if (target == 4) out[z++] = a;\n            }\n         } else {\n            int bpp = info.bpp;\n            for (i=0; i < (int) s->img_x; ++i) {\n               stbi__uint32 v = (bpp == 16 ? (stbi__uint32) stbi__get16le(s) : stbi__get32le(s));\n               unsigned int a;\n               out[z++] = STBI__BYTECAST(stbi__shiftsigned(v & mr, rshift, rcount));\n               out[z++] = STBI__BYTECAST(stbi__shiftsigned(v & mg, gshift, gcount));\n               out[z++] = STBI__BYTECAST(stbi__shiftsigned(v & mb, bshift, bcount));\n               a = (ma ? stbi__shiftsigned(v & ma, ashift, acount) : 255);\n               all_a |= a;\n               if (target == 4) out[z++] = STBI__BYTECAST(a);\n            }\n         }\n         stbi__skip(s, pad);\n      }\n   }\n\n   // if alpha channel is all 0s, replace with all 255s\n   if (target == 4 && all_a == 0)\n      for (i=4*s->img_x*s->img_y-1; i >= 0; i -= 4)\n         out[i] = 255;\n\n   if (flip_vertically) {\n      stbi_uc t;\n      for (j=0; j < (int) s->img_y>>1; ++j) {\n         stbi_uc *p1 = out +      j     *s->img_x*target;\n         stbi_uc *p2 = out + (s->img_y-1-j)*s->img_x*target;\n         for (i=0; i < (int) s->img_x*target; ++i) {\n            t = p1[i]; p1[i] = p2[i]; p2[i] = t;\n         }\n      }\n   }\n\n   if (req_comp && req_comp != target) {\n      out = stbi__convert_format(out, target, req_comp, s->img_x, s->img_y);\n      if (out == NULL) return out; // stbi__convert_format frees input on failure\n   }\n\n   *x = s->img_x;\n   *y = s->img_y;\n   if (comp) *comp = s->img_n;\n   return out;\n}\n#endif\n\n// Targa Truevision - TGA\n// by Jonathan Dummer\n#ifndef STBI_NO_TGA\n// returns STBI_rgb or whatever, 0 on error\nstatic int stbi__tga_get_comp(int 
bits_per_pixel, int is_grey, int* is_rgb16)\n{\n   // only RGB or RGBA (incl. 16bit) or grey allowed\n   if (is_rgb16) *is_rgb16 = 0;\n   switch(bits_per_pixel) {\n      case 8:  return STBI_grey;\n      case 16: if(is_grey) return STBI_grey_alpha;\n               // fallthrough\n      case 15: if(is_rgb16) *is_rgb16 = 1;\n               return STBI_rgb;\n      case 24: // fallthrough\n      case 32: return bits_per_pixel/8;\n      default: return 0;\n   }\n}\n\nstatic int stbi__tga_info(stbi__context *s, int *x, int *y, int *comp)\n{\n    int tga_w, tga_h, tga_comp, tga_image_type, tga_bits_per_pixel, tga_colormap_bpp;\n    int sz, tga_colormap_type;\n    stbi__get8(s);                   // discard Offset\n    tga_colormap_type = stbi__get8(s); // colormap type\n    if( tga_colormap_type > 1 ) {\n        stbi__rewind(s);\n        return 0;      // only RGB or indexed allowed\n    }\n    tga_image_type = stbi__get8(s); // image type\n    if ( tga_colormap_type == 1 ) { // colormapped (paletted) image\n        if (tga_image_type != 1 && tga_image_type != 9) {\n            stbi__rewind(s);\n            return 0;\n        }\n        stbi__skip(s,4);       // skip index of first colormap entry and number of entries\n        sz = stbi__get8(s);    //   check bits per palette color entry\n        if ( (sz != 8) && (sz != 15) && (sz != 16) && (sz != 24) && (sz != 32) ) {\n            stbi__rewind(s);\n            return 0;\n        }\n        stbi__skip(s,4);       // skip image x and y origin\n        tga_colormap_bpp = sz;\n    } else { // \"normal\" image w/o colormap - only RGB or grey allowed, +/- RLE\n        if ( (tga_image_type != 2) && (tga_image_type != 3) && (tga_image_type != 10) && (tga_image_type != 11) ) {\n            stbi__rewind(s);\n            return 0; // only RGB or grey allowed, +/- RLE\n        }\n        stbi__skip(s,9); // skip colormap specification and image x/y origin\n        tga_colormap_bpp = 0;\n    }\n    tga_w = stbi__get16le(s);\n    
if( tga_w < 1 ) {\n        stbi__rewind(s);\n        return 0;   // test width\n    }\n    tga_h = stbi__get16le(s);\n    if( tga_h < 1 ) {\n        stbi__rewind(s);\n        return 0;   // test height\n    }\n    tga_bits_per_pixel = stbi__get8(s); // bits per pixel\n    stbi__get8(s); // ignore alpha bits\n    if (tga_colormap_bpp != 0) {\n        if((tga_bits_per_pixel != 8) && (tga_bits_per_pixel != 16)) {\n            // when using a colormap, tga_bits_per_pixel is the size of the indexes\n            // I don't think anything but 8 or 16bit indexes makes sense\n            stbi__rewind(s);\n            return 0;\n        }\n        tga_comp = stbi__tga_get_comp(tga_colormap_bpp, 0, NULL);\n    } else {\n        tga_comp = stbi__tga_get_comp(tga_bits_per_pixel, (tga_image_type == 3) || (tga_image_type == 11), NULL);\n    }\n    if(!tga_comp) {\n      stbi__rewind(s);\n      return 0;\n    }\n    if (x) *x = tga_w;\n    if (y) *y = tga_h;\n    if (comp) *comp = tga_comp;\n    return 1;                   // seems to have passed everything\n}\n\nstatic int stbi__tga_test(stbi__context *s)\n{\n   int res = 0;\n   int sz, tga_color_type;\n   stbi__get8(s);      //   discard Offset\n   tga_color_type = stbi__get8(s);   //   color type\n   if ( tga_color_type > 1 ) goto errorEnd;   //   only RGB or indexed allowed\n   sz = stbi__get8(s);   //   image type\n   if ( tga_color_type == 1 ) { // colormapped (paletted) image\n      if (sz != 1 && sz != 9) goto errorEnd; // colortype 1 demands image type 1 or 9\n      stbi__skip(s,4);       // skip index of first colormap entry and number of entries\n      sz = stbi__get8(s);    //   check bits per palette color entry\n      if ( (sz != 8) && (sz != 15) && (sz != 16) && (sz != 24) && (sz != 32) ) goto errorEnd;\n      stbi__skip(s,4);       // skip image x and y origin\n   } else { // \"normal\" image w/o colormap\n      if ( (sz != 2) && (sz != 3) && (sz != 10) && (sz != 11) ) goto errorEnd; // only RGB or grey allowed, 
+/- RLE\n      stbi__skip(s,9); // skip colormap specification and image x/y origin\n   }\n   if ( stbi__get16le(s) < 1 ) goto errorEnd;      //   test width\n   if ( stbi__get16le(s) < 1 ) goto errorEnd;      //   test height\n   sz = stbi__get8(s);   //   bits per pixel\n   if ( (tga_color_type == 1) && (sz != 8) && (sz != 16) ) goto errorEnd; // for colormapped images, bpp is size of an index\n   if ( (sz != 8) && (sz != 15) && (sz != 16) && (sz != 24) && (sz != 32) ) goto errorEnd;\n\n   res = 1; // if we got this far, everything's good and we can return 1 instead of 0\n\nerrorEnd:\n   stbi__rewind(s);\n   return res;\n}\n\n// read 16bit value and convert to 24bit RGB\nstatic void stbi__tga_read_rgb16(stbi__context *s, stbi_uc* out)\n{\n   stbi__uint16 px = (stbi__uint16)stbi__get16le(s);\n   stbi__uint16 fiveBitMask = 31;\n   // we have 3 channels with 5bits each\n   int r = (px >> 10) & fiveBitMask;\n   int g = (px >> 5) & fiveBitMask;\n   int b = px & fiveBitMask;\n   // Note that this saves the data in RGB(A) order, so it doesn't need to be swapped later\n   out[0] = (stbi_uc)((r * 255)/31);\n   out[1] = (stbi_uc)((g * 255)/31);\n   out[2] = (stbi_uc)((b * 255)/31);\n\n   // some people claim that the most significant bit might be used for alpha\n   // (possibly if an alpha-bit is set in the \"image descriptor byte\")\n   // but that only made 16bit test images completely translucent..\n   // so let's treat all 15 and 16bit TGAs as RGB with no alpha.\n}\n\nstatic void *stbi__tga_load(stbi__context *s, int *x, int *y, int *comp, int req_comp, stbi__result_info *ri)\n{\n   //   read in the TGA header stuff\n   int tga_offset = stbi__get8(s);\n   int tga_indexed = stbi__get8(s);\n   int tga_image_type = stbi__get8(s);\n   int tga_is_RLE = 0;\n   int tga_palette_start = stbi__get16le(s);\n   int tga_palette_len = stbi__get16le(s);\n   int tga_palette_bits = stbi__get8(s);\n   int tga_x_origin = stbi__get16le(s);\n   int tga_y_origin = stbi__get16le(s);\n   int 
tga_width = stbi__get16le(s);\n   int tga_height = stbi__get16le(s);\n   int tga_bits_per_pixel = stbi__get8(s);\n   int tga_comp, tga_rgb16=0;\n   int tga_inverted = stbi__get8(s);\n   // int tga_alpha_bits = tga_inverted & 15; // the 4 lowest bits - unused (useless?)\n   //   image data\n   unsigned char *tga_data;\n   unsigned char *tga_palette = NULL;\n   int i, j;\n   unsigned char raw_data[4] = {0};\n   int RLE_count = 0;\n   int RLE_repeating = 0;\n   int read_next_pixel = 1;\n   STBI_NOTUSED(ri);\n   STBI_NOTUSED(tga_x_origin); // @TODO\n   STBI_NOTUSED(tga_y_origin); // @TODO\n\n   if (tga_height > STBI_MAX_DIMENSIONS) return stbi__errpuc(\"too large\",\"Very large image (corrupt?)\");\n   if (tga_width > STBI_MAX_DIMENSIONS) return stbi__errpuc(\"too large\",\"Very large image (corrupt?)\");\n\n   //   do a tiny bit of preprocessing\n   if ( tga_image_type >= 8 )\n   {\n      tga_image_type -= 8;\n      tga_is_RLE = 1;\n   }\n   tga_inverted = 1 - ((tga_inverted >> 5) & 1);\n\n   //   If I'm paletted, then I'll use the number of bits from the palette\n   if ( tga_indexed ) tga_comp = stbi__tga_get_comp(tga_palette_bits, 0, &tga_rgb16);\n   else tga_comp = stbi__tga_get_comp(tga_bits_per_pixel, (tga_image_type == 3), &tga_rgb16);\n\n   if(!tga_comp) // shouldn't really happen, stbi__tga_test() should have ensured basic consistency\n      return stbi__errpuc(\"bad format\", \"Can't find out TGA pixelformat\");\n\n   //   tga info\n   *x = tga_width;\n   *y = tga_height;\n   if (comp) *comp = tga_comp;\n\n   if (!stbi__mad3sizes_valid(tga_width, tga_height, tga_comp, 0))\n      return stbi__errpuc(\"too large\", \"Corrupt TGA\");\n\n   tga_data = (unsigned char*)stbi__malloc_mad3(tga_width, tga_height, tga_comp, 0);\n   if (!tga_data) return stbi__errpuc(\"outofmem\", \"Out of memory\");\n\n   // skip to the data's starting position (offset usually = 0)\n   stbi__skip(s, tga_offset );\n\n   if ( !tga_indexed && !tga_is_RLE && !tga_rgb16 ) {\n      for (i=0; i < 
tga_height; ++i) {\n         int row = tga_inverted ? tga_height -i - 1 : i;\n         stbi_uc *tga_row = tga_data + row*tga_width*tga_comp;\n         stbi__getn(s, tga_row, tga_width * tga_comp);\n      }\n   } else  {\n      //   do I need to load a palette?\n      if ( tga_indexed)\n      {\n         if (tga_palette_len == 0) {  /* you have to have at least one entry! */\n            STBI_FREE(tga_data);\n            return stbi__errpuc(\"bad palette\", \"Corrupt TGA\");\n         }\n\n         //   any data to skip? (offset usually = 0)\n         stbi__skip(s, tga_palette_start );\n         //   load the palette\n         tga_palette = (unsigned char*)stbi__malloc_mad2(tga_palette_len, tga_comp, 0);\n         if (!tga_palette) {\n            STBI_FREE(tga_data);\n            return stbi__errpuc(\"outofmem\", \"Out of memory\");\n         }\n         if (tga_rgb16) {\n            stbi_uc *pal_entry = tga_palette;\n            STBI_ASSERT(tga_comp == STBI_rgb);\n            for (i=0; i < tga_palette_len; ++i) {\n               stbi__tga_read_rgb16(s, pal_entry);\n               pal_entry += tga_comp;\n            }\n         } else if (!stbi__getn(s, tga_palette, tga_palette_len * tga_comp)) {\n               STBI_FREE(tga_data);\n               STBI_FREE(tga_palette);\n               return stbi__errpuc(\"bad palette\", \"Corrupt TGA\");\n         }\n      }\n      //   load the data\n      for (i=0; i < tga_width * tga_height; ++i)\n      {\n         //   if I'm in RLE mode, do I need to get a RLE stbi__pngchunk?\n         if ( tga_is_RLE )\n         {\n            if ( RLE_count == 0 )\n            {\n               //   yep, get the next byte as a RLE command\n               int RLE_cmd = stbi__get8(s);\n               RLE_count = 1 + (RLE_cmd & 127);\n               RLE_repeating = RLE_cmd >> 7;\n               read_next_pixel = 1;\n            } else if ( !RLE_repeating )\n            {\n               read_next_pixel = 1;\n            }\n         } else\n  
       {\n            read_next_pixel = 1;\n         }\n         //   OK, if I need to read a pixel, do it now\n         if ( read_next_pixel )\n         {\n            //   load however much data we did have\n            if ( tga_indexed )\n            {\n               // read in index, then perform the lookup\n               int pal_idx = (tga_bits_per_pixel == 8) ? stbi__get8(s) : stbi__get16le(s);\n               if ( pal_idx >= tga_palette_len ) {\n                  // invalid index\n                  pal_idx = 0;\n               }\n               pal_idx *= tga_comp;\n               for (j = 0; j < tga_comp; ++j) {\n                  raw_data[j] = tga_palette[pal_idx+j];\n               }\n            } else if(tga_rgb16) {\n               STBI_ASSERT(tga_comp == STBI_rgb);\n               stbi__tga_read_rgb16(s, raw_data);\n            } else {\n               //   read in the data raw\n               for (j = 0; j < tga_comp; ++j) {\n                  raw_data[j] = stbi__get8(s);\n               }\n            }\n            //   clear the reading flag for the next pixel\n            read_next_pixel = 0;\n         } // end of reading a pixel\n\n         // copy data\n         for (j = 0; j < tga_comp; ++j)\n           tga_data[i*tga_comp+j] = raw_data[j];\n\n         //   in case we're in RLE mode, keep counting down\n         --RLE_count;\n      }\n      //   do I need to invert the image?\n      if ( tga_inverted )\n      {\n         for (j = 0; j*2 < tga_height; ++j)\n         {\n            int index1 = j * tga_width * tga_comp;\n            int index2 = (tga_height - 1 - j) * tga_width * tga_comp;\n            for (i = tga_width * tga_comp; i > 0; --i)\n            {\n               unsigned char temp = tga_data[index1];\n               tga_data[index1] = tga_data[index2];\n               tga_data[index2] = temp;\n               ++index1;\n               ++index2;\n            }\n         }\n      }\n      //   clear my palette, if I had one\n      if 
( tga_palette != NULL )\n      {\n         STBI_FREE( tga_palette );\n      }\n   }\n\n   // swap RGB - if the source data was RGB16, it already is in the right order\n   if (tga_comp >= 3 && !tga_rgb16)\n   {\n      unsigned char* tga_pixel = tga_data;\n      for (i=0; i < tga_width * tga_height; ++i)\n      {\n         unsigned char temp = tga_pixel[0];\n         tga_pixel[0] = tga_pixel[2];\n         tga_pixel[2] = temp;\n         tga_pixel += tga_comp;\n      }\n   }\n\n   // convert to target component count\n   if (req_comp && req_comp != tga_comp)\n      tga_data = stbi__convert_format(tga_data, tga_comp, req_comp, tga_width, tga_height);\n\n   //   the things I do to get rid of an error message, and yet keep\n   //   Microsoft's C compilers happy... [8^(\n   tga_palette_start = tga_palette_len = tga_palette_bits =\n         tga_x_origin = tga_y_origin = 0;\n   STBI_NOTUSED(tga_palette_start);\n   //   OK, done\n   return tga_data;\n}\n#endif\n\n// *************************************************************************************************\n// Photoshop PSD loader -- PD by Thatcher Ulrich, integration by Nicolas Schulz, tweaked by STB\n\n#ifndef STBI_NO_PSD\nstatic int stbi__psd_test(stbi__context *s)\n{\n   int r = (stbi__get32be(s) == 0x38425053);\n   stbi__rewind(s);\n   return r;\n}\n\nstatic int stbi__psd_decode_rle(stbi__context *s, stbi_uc *p, int pixelCount)\n{\n   int count, nleft, len;\n\n   count = 0;\n   while ((nleft = pixelCount - count) > 0) {\n      len = stbi__get8(s);\n      if (len == 128) {\n         // No-op.\n      } else if (len < 128) {\n         // Copy next len+1 bytes literally.\n         len++;\n         if (len > nleft) return 0; // corrupt data\n         count += len;\n         while (len) {\n            *p = stbi__get8(s);\n            p += 4;\n            len--;\n         }\n      } else if (len > 128) {\n         stbi_uc   val;\n         // Next -len+1 bytes in the dest are replicated from next source byte.\n         // 
(Interpret len as a negative 8-bit int.)\n         len = 257 - len;\n         if (len > nleft) return 0; // corrupt data\n         val = stbi__get8(s);\n         count += len;\n         while (len) {\n            *p = val;\n            p += 4;\n            len--;\n         }\n      }\n   }\n\n   return 1;\n}\n\nstatic void *stbi__psd_load(stbi__context *s, int *x, int *y, int *comp, int req_comp, stbi__result_info *ri, int bpc)\n{\n   int pixelCount;\n   int channelCount, compression;\n   int channel, i;\n   int bitdepth;\n   int w,h;\n   stbi_uc *out;\n   STBI_NOTUSED(ri);\n\n   // Check identifier\n   if (stbi__get32be(s) != 0x38425053)   // \"8BPS\"\n      return stbi__errpuc(\"not PSD\", \"Corrupt PSD image\");\n\n   // Check file type version.\n   if (stbi__get16be(s) != 1)\n      return stbi__errpuc(\"wrong version\", \"Unsupported version of PSD image\");\n\n   // Skip 6 reserved bytes.\n   stbi__skip(s, 6 );\n\n   // Read the number of channels (R, G, B, A, etc).\n   channelCount = stbi__get16be(s);\n   if (channelCount < 0 || channelCount > 16)\n      return stbi__errpuc(\"wrong channel count\", \"Unsupported number of channels in PSD image\");\n\n   // Read the rows and columns of the image.\n   h = stbi__get32be(s);\n   w = stbi__get32be(s);\n\n   if (h > STBI_MAX_DIMENSIONS) return stbi__errpuc(\"too large\",\"Very large image (corrupt?)\");\n   if (w > STBI_MAX_DIMENSIONS) return stbi__errpuc(\"too large\",\"Very large image (corrupt?)\");\n\n   // Make sure the depth is 8 bits.\n   bitdepth = stbi__get16be(s);\n   if (bitdepth != 8 && bitdepth != 16)\n      return stbi__errpuc(\"unsupported bit depth\", \"PSD bit depth is not 8 or 16 bit\");\n\n   // Make sure the color mode is RGB.\n   // Valid options are:\n   //   0: Bitmap\n   //   1: Grayscale\n   //   2: Indexed color\n   //   3: RGB color\n   //   4: CMYK color\n   //   7: Multichannel\n   //   8: Duotone\n   //   9: Lab color\n   if (stbi__get16be(s) != 3)\n      return stbi__errpuc(\"wrong 
color format\", \"PSD is not in RGB color format\");\n\n   // Skip the Mode Data.  (It's the palette for indexed color; other info for other modes.)\n   stbi__skip(s,stbi__get32be(s) );\n\n   // Skip the image resources.  (resolution, pen tool paths, etc)\n   stbi__skip(s, stbi__get32be(s) );\n\n   // Skip the reserved data.\n   stbi__skip(s, stbi__get32be(s) );\n\n   // Find out if the data is compressed.\n   // Known values:\n   //   0: no compression\n   //   1: RLE compressed\n   compression = stbi__get16be(s);\n   if (compression > 1)\n      return stbi__errpuc(\"bad compression\", \"PSD has an unknown compression format\");\n\n   // Check size\n   if (!stbi__mad3sizes_valid(4, w, h, 0))\n      return stbi__errpuc(\"too large\", \"Corrupt PSD\");\n\n   // Create the destination image.\n\n   if (!compression && bitdepth == 16 && bpc == 16) {\n      out = (stbi_uc *) stbi__malloc_mad3(8, w, h, 0);\n      ri->bits_per_channel = 16;\n   } else\n      out = (stbi_uc *) stbi__malloc(4 * w*h);\n\n   if (!out) return stbi__errpuc(\"outofmem\", \"Out of memory\");\n   pixelCount = w*h;\n\n   // Initialize the data to zero.\n   //memset( out, 0, pixelCount * 4 );\n\n   // Finally, the image data.\n   if (compression) {\n      // RLE as used by .PSD and .TIFF\n      // Loop until you get the number of unpacked bytes you are expecting:\n      //     Read the next source byte into n.\n      //     If n is between 0 and 127 inclusive, copy the next n+1 bytes literally.\n      //     Else if n is between -127 and -1 inclusive, copy the next byte -n+1 times.\n      //     Else if n is 128, noop.\n      // Endloop\n\n      // The RLE-compressed data is preceded by a 2-byte data count for each row in the data,\n      // which we're going to just skip.\n      stbi__skip(s, h * channelCount * 2 );\n\n      // Read the RLE data by channel.\n      for (channel = 0; channel < 4; channel++) {\n         stbi_uc *p;\n\n         p = out+channel;\n         if (channel >= channelCount) 
{\n            // Fill this channel with default data.\n            for (i = 0; i < pixelCount; i++, p += 4)\n               *p = (channel == 3 ? 255 : 0);\n         } else {\n            // Read the RLE data.\n            if (!stbi__psd_decode_rle(s, p, pixelCount)) {\n               STBI_FREE(out);\n               return stbi__errpuc(\"corrupt\", \"bad RLE data\");\n            }\n         }\n      }\n\n   } else {\n      // We're at the raw image data.  It's each channel in order (Red, Green, Blue, Alpha, ...)\n      // where each channel consists of an 8-bit (or 16-bit) value for each pixel in the image.\n\n      // Read the data by channel.\n      for (channel = 0; channel < 4; channel++) {\n         if (channel >= channelCount) {\n            // Fill this channel with default data.\n            if (bitdepth == 16 && bpc == 16) {\n               stbi__uint16 *q = ((stbi__uint16 *) out) + channel;\n               stbi__uint16 val = channel == 3 ? 65535 : 0;\n               for (i = 0; i < pixelCount; i++, q += 4)\n                  *q = val;\n            } else {\n               stbi_uc *p = out+channel;\n               stbi_uc val = channel == 3 ? 
255 : 0;\n               for (i = 0; i < pixelCount; i++, p += 4)\n                  *p = val;\n            }\n         } else {\n            if (ri->bits_per_channel == 16) {    // output bpc\n               stbi__uint16 *q = ((stbi__uint16 *) out) + channel;\n               for (i = 0; i < pixelCount; i++, q += 4)\n                  *q = (stbi__uint16) stbi__get16be(s);\n            } else {\n               stbi_uc *p = out+channel;\n               if (bitdepth == 16) {  // input bpc\n                  for (i = 0; i < pixelCount; i++, p += 4)\n                     *p = (stbi_uc) (stbi__get16be(s) >> 8);\n               } else {\n                  for (i = 0; i < pixelCount; i++, p += 4)\n                     *p = stbi__get8(s);\n               }\n            }\n         }\n      }\n   }\n\n   // remove weird white matte from PSD\n   if (channelCount >= 4) {\n      if (ri->bits_per_channel == 16) {\n         for (i=0; i < w*h; ++i) {\n            stbi__uint16 *pixel = (stbi__uint16 *) out + 4*i;\n            if (pixel[3] != 0 && pixel[3] != 65535) {\n               float a = pixel[3] / 65535.0f;\n               float ra = 1.0f / a;\n               float inv_a = 65535.0f * (1 - ra);\n               pixel[0] = (stbi__uint16) (pixel[0]*ra + inv_a);\n               pixel[1] = (stbi__uint16) (pixel[1]*ra + inv_a);\n               pixel[2] = (stbi__uint16) (pixel[2]*ra + inv_a);\n            }\n         }\n      } else {\n         for (i=0; i < w*h; ++i) {\n            unsigned char *pixel = out + 4*i;\n            if (pixel[3] != 0 && pixel[3] != 255) {\n               float a = pixel[3] / 255.0f;\n               float ra = 1.0f / a;\n               float inv_a = 255.0f * (1 - ra);\n               pixel[0] = (unsigned char) (pixel[0]*ra + inv_a);\n               pixel[1] = (unsigned char) (pixel[1]*ra + inv_a);\n               pixel[2] = (unsigned char) (pixel[2]*ra + inv_a);\n            }\n         }\n      }\n   }\n\n   // convert to desired output format\n   if 
(req_comp && req_comp != 4) {\n      if (ri->bits_per_channel == 16)\n         out = (stbi_uc *) stbi__convert_format16((stbi__uint16 *) out, 4, req_comp, w, h);\n      else\n         out = stbi__convert_format(out, 4, req_comp, w, h);\n      if (out == NULL) return out; // stbi__convert_format frees input on failure\n   }\n\n   if (comp) *comp = 4;\n   *y = h;\n   *x = w;\n\n   return out;\n}\n#endif\n\n// *************************************************************************************************\n// Softimage PIC loader\n// by Tom Seddon\n//\n// See http://softimage.wiki.softimage.com/index.php/INFO:_PIC_file_format\n// See http://ozviz.wasp.uwa.edu.au/~pbourke/dataformats/softimagepic/\n\n#ifndef STBI_NO_PIC\nstatic int stbi__pic_is4(stbi__context *s,const char *str)\n{\n   int i;\n   for (i=0; i<4; ++i)\n      if (stbi__get8(s) != (stbi_uc)str[i])\n         return 0;\n\n   return 1;\n}\n\nstatic int stbi__pic_test_core(stbi__context *s)\n{\n   int i;\n\n   if (!stbi__pic_is4(s,\"\\x53\\x80\\xF6\\x34\"))\n      return 0;\n\n   for(i=0;i<84;++i)\n      stbi__get8(s);\n\n   if (!stbi__pic_is4(s,\"PICT\"))\n      return 0;\n\n   return 1;\n}\n\ntypedef struct\n{\n   stbi_uc size,type,channel;\n} stbi__pic_packet;\n\nstatic stbi_uc *stbi__readval(stbi__context *s, int channel, stbi_uc *dest)\n{\n   int mask=0x80, i;\n\n   for (i=0; i<4; ++i, mask>>=1) {\n      if (channel & mask) {\n         if (stbi__at_eof(s)) return stbi__errpuc(\"bad file\",\"PIC file too short\");\n         dest[i]=stbi__get8(s);\n      }\n   }\n\n   return dest;\n}\n\nstatic void stbi__copyval(int channel,stbi_uc *dest,const stbi_uc *src)\n{\n   int mask=0x80,i;\n\n   for (i=0;i<4; ++i, mask>>=1)\n      if (channel&mask)\n         dest[i]=src[i];\n}\n\nstatic stbi_uc *stbi__pic_load_core(stbi__context *s,int width,int height,int *comp, stbi_uc *result)\n{\n   int act_comp=0,num_packets=0,y,chained;\n   stbi__pic_packet packets[10];\n\n   // this will (should...) 
cater for even some bizarre stuff like having data\n    // for the same channel in multiple packets.\n   do {\n      stbi__pic_packet *packet;\n\n      if (num_packets==sizeof(packets)/sizeof(packets[0]))\n         return stbi__errpuc(\"bad format\",\"too many packets\");\n\n      packet = &packets[num_packets++];\n\n      chained = stbi__get8(s);\n      packet->size    = stbi__get8(s);\n      packet->type    = stbi__get8(s);\n      packet->channel = stbi__get8(s);\n\n      act_comp |= packet->channel;\n\n      if (stbi__at_eof(s))          return stbi__errpuc(\"bad file\",\"file too short (reading packets)\");\n      if (packet->size != 8)  return stbi__errpuc(\"bad format\",\"packet isn't 8bpp\");\n   } while (chained);\n\n   *comp = (act_comp & 0x10 ? 4 : 3); // has alpha channel?\n\n   for(y=0; y<height; ++y) {\n      int packet_idx;\n\n      for(packet_idx=0; packet_idx < num_packets; ++packet_idx) {\n         stbi__pic_packet *packet = &packets[packet_idx];\n         stbi_uc *dest = result+y*width*4;\n\n         switch (packet->type) {\n            default:\n               return stbi__errpuc(\"bad format\",\"packet has bad compression type\");\n\n            case 0: {//uncompressed\n               int x;\n\n               for(x=0;x<width;++x, dest+=4)\n                  if (!stbi__readval(s,packet->channel,dest))\n                     return 0;\n               break;\n            }\n\n            case 1://Pure RLE\n               {\n                  int left=width, i;\n\n                  while (left>0) {\n                     stbi_uc count,value[4];\n\n                     count=stbi__get8(s);\n                     if (stbi__at_eof(s))   return stbi__errpuc(\"bad file\",\"file too short (pure read count)\");\n\n                     if (count > left)\n                        count = (stbi_uc) left;\n\n                     if (!stbi__readval(s,packet->channel,value))  return 0;\n\n                     for(i=0; i<count; ++i,dest+=4)\n                        
stbi__copyval(packet->channel,dest,value);\n                     left -= count;\n                  }\n               }\n               break;\n\n            case 2: {//Mixed RLE\n               int left=width;\n               while (left>0) {\n                  int count = stbi__get8(s), i;\n                  if (stbi__at_eof(s))  return stbi__errpuc(\"bad file\",\"file too short (mixed read count)\");\n\n                  if (count >= 128) { // Repeated\n                     stbi_uc value[4];\n\n                     if (count==128)\n                        count = stbi__get16be(s);\n                     else\n                        count -= 127;\n                     if (count > left)\n                        return stbi__errpuc(\"bad file\",\"scanline overrun\");\n\n                     if (!stbi__readval(s,packet->channel,value))\n                        return 0;\n\n                     for(i=0;i<count;++i, dest += 4)\n                        stbi__copyval(packet->channel,dest,value);\n                  } else { // Raw\n                     ++count;\n                     if (count>left) return stbi__errpuc(\"bad file\",\"scanline overrun\");\n\n                     for(i=0;i<count;++i, dest+=4)\n                        if (!stbi__readval(s,packet->channel,dest))\n                           return 0;\n                  }\n                  left-=count;\n               }\n               break;\n            }\n         }\n      }\n   }\n\n   return result;\n}\n\nstatic void *stbi__pic_load(stbi__context *s,int *px,int *py,int *comp,int req_comp, stbi__result_info *ri)\n{\n   stbi_uc *result;\n   int i, x,y, internal_comp;\n   STBI_NOTUSED(ri);\n\n   if (!comp) comp = &internal_comp;\n\n   for (i=0; i<92; ++i)\n      stbi__get8(s);\n\n   x = stbi__get16be(s);\n   y = stbi__get16be(s);\n\n   if (y > STBI_MAX_DIMENSIONS) return stbi__errpuc(\"too large\",\"Very large image (corrupt?)\");\n   if (x > STBI_MAX_DIMENSIONS) return stbi__errpuc(\"too large\",\"Very large 
image (corrupt?)\");\n\n   if (stbi__at_eof(s))  return stbi__errpuc(\"bad file\",\"file too short (pic header)\");\n   if (!stbi__mad3sizes_valid(x, y, 4, 0)) return stbi__errpuc(\"too large\", \"PIC image too large to decode\");\n\n   stbi__get32be(s); //skip `ratio'\n   stbi__get16be(s); //skip `fields'\n   stbi__get16be(s); //skip `pad'\n\n   // intermediate buffer is RGBA\n   result = (stbi_uc *) stbi__malloc_mad3(x, y, 4, 0);\n   if (!result) return stbi__errpuc(\"outofmem\", \"Out of memory\");\n   memset(result, 0xff, x*y*4);\n\n   if (!stbi__pic_load_core(s,x,y,comp, result)) {\n      STBI_FREE(result);\n      result=0;\n   }\n   *px = x;\n   *py = y;\n   if (req_comp == 0) req_comp = *comp;\n   result=stbi__convert_format(result,4,req_comp,x,y);\n\n   return result;\n}\n\nstatic int stbi__pic_test(stbi__context *s)\n{\n   int r = stbi__pic_test_core(s);\n   stbi__rewind(s);\n   return r;\n}\n#endif\n\n// *************************************************************************************************\n// GIF loader -- public domain by Jean-Marc Lienher -- simplified/shrunk by stb\n\n#ifndef STBI_NO_GIF\ntypedef struct\n{\n   stbi__int16 prefix;\n   stbi_uc first;\n   stbi_uc suffix;\n} stbi__gif_lzw;\n\ntypedef struct\n{\n   int w,h;\n   stbi_uc *out;                 // output buffer (always 4 components)\n   stbi_uc *background;          // The current \"background\" as far as a gif is concerned\n   stbi_uc *history;\n   int flags, bgindex, ratio, transparent, eflags;\n   stbi_uc  pal[256][4];\n   stbi_uc lpal[256][4];\n   stbi__gif_lzw codes[8192];\n   stbi_uc *color_table;\n   int parse, step;\n   int lflags;\n   int start_x, start_y;\n   int max_x, max_y;\n   int cur_x, cur_y;\n   int line_size;\n   int delay;\n} stbi__gif;\n\nstatic int stbi__gif_test_raw(stbi__context *s)\n{\n   int sz;\n   if (stbi__get8(s) != 'G' || stbi__get8(s) != 'I' || stbi__get8(s) != 'F' || stbi__get8(s) != '8') return 0;\n   sz = stbi__get8(s);\n   if (sz != '9' && sz != 
'7') return 0;\n   if (stbi__get8(s) != 'a') return 0;\n   return 1;\n}\n\nstatic int stbi__gif_test(stbi__context *s)\n{\n   int r = stbi__gif_test_raw(s);\n   stbi__rewind(s);\n   return r;\n}\n\nstatic void stbi__gif_parse_colortable(stbi__context *s, stbi_uc pal[256][4], int num_entries, int transp)\n{\n   int i;\n   for (i=0; i < num_entries; ++i) {\n      pal[i][2] = stbi__get8(s);\n      pal[i][1] = stbi__get8(s);\n      pal[i][0] = stbi__get8(s);\n      pal[i][3] = transp == i ? 0 : 255;\n   }\n}\n\nstatic int stbi__gif_header(stbi__context *s, stbi__gif *g, int *comp, int is_info)\n{\n   stbi_uc version;\n   if (stbi__get8(s) != 'G' || stbi__get8(s) != 'I' || stbi__get8(s) != 'F' || stbi__get8(s) != '8')\n      return stbi__err(\"not GIF\", \"Corrupt GIF\");\n\n   version = stbi__get8(s);\n   if (version != '7' && version != '9')    return stbi__err(\"not GIF\", \"Corrupt GIF\");\n   if (stbi__get8(s) != 'a')                return stbi__err(\"not GIF\", \"Corrupt GIF\");\n\n   stbi__g_failure_reason = \"\";\n   g->w = stbi__get16le(s);\n   g->h = stbi__get16le(s);\n   g->flags = stbi__get8(s);\n   g->bgindex = stbi__get8(s);\n   g->ratio = stbi__get8(s);\n   g->transparent = -1;\n\n   if (g->w > STBI_MAX_DIMENSIONS) return stbi__err(\"too large\",\"Very large image (corrupt?)\");\n   if (g->h > STBI_MAX_DIMENSIONS) return stbi__err(\"too large\",\"Very large image (corrupt?)\");\n\n   if (comp != 0) *comp = 4;  // can't actually tell whether it's 3 or 4 until we parse the comments\n\n   if (is_info) return 1;\n\n   if (g->flags & 0x80)\n      stbi__gif_parse_colortable(s,g->pal, 2 << (g->flags & 7), -1);\n\n   return 1;\n}\n\nstatic int stbi__gif_info_raw(stbi__context *s, int *x, int *y, int *comp)\n{\n   stbi__gif* g = (stbi__gif*) stbi__malloc(sizeof(stbi__gif));\n   if (!g) return stbi__err(\"outofmem\", \"Out of memory\");\n   if (!stbi__gif_header(s, g, comp, 1)) {\n      STBI_FREE(g);\n      stbi__rewind( s );\n      return 0;\n   }\n   if (x) *x = 
g->w;\n   if (y) *y = g->h;\n   STBI_FREE(g);\n   return 1;\n}\n\nstatic void stbi__out_gif_code(stbi__gif *g, stbi__uint16 code)\n{\n   stbi_uc *p, *c;\n   int idx;\n\n   // recurse to decode the prefixes, since the linked-list is backwards,\n   // and working backwards through an interleaved image would be nasty\n   if (g->codes[code].prefix >= 0)\n      stbi__out_gif_code(g, g->codes[code].prefix);\n\n   if (g->cur_y >= g->max_y) return;\n\n   idx = g->cur_x + g->cur_y;\n   p = &g->out[idx];\n   g->history[idx / 4] = 1;\n\n   c = &g->color_table[g->codes[code].suffix * 4];\n   if (c[3] > 128) { // don't render transparent pixels;\n      p[0] = c[2];\n      p[1] = c[1];\n      p[2] = c[0];\n      p[3] = c[3];\n   }\n   g->cur_x += 4;\n\n   if (g->cur_x >= g->max_x) {\n      g->cur_x = g->start_x;\n      g->cur_y += g->step;\n\n      while (g->cur_y >= g->max_y && g->parse > 0) {\n         g->step = (1 << g->parse) * g->line_size;\n         g->cur_y = g->start_y + (g->step >> 1);\n         --g->parse;\n      }\n   }\n}\n\nstatic stbi_uc *stbi__process_gif_raster(stbi__context *s, stbi__gif *g)\n{\n   stbi_uc lzw_cs;\n   stbi__int32 len, init_code;\n   stbi__uint32 first;\n   stbi__int32 codesize, codemask, avail, oldcode, bits, valid_bits, clear;\n   stbi__gif_lzw *p;\n\n   lzw_cs = stbi__get8(s);\n   if (lzw_cs > 12) return NULL;\n   clear = 1 << lzw_cs;\n   first = 1;\n   codesize = lzw_cs + 1;\n   codemask = (1 << codesize) - 1;\n   bits = 0;\n   valid_bits = 0;\n   for (init_code = 0; init_code < clear; init_code++) {\n      g->codes[init_code].prefix = -1;\n      g->codes[init_code].first = (stbi_uc) init_code;\n      g->codes[init_code].suffix = (stbi_uc) init_code;\n   }\n\n   // support no starting clear code\n   avail = clear+2;\n   oldcode = -1;\n\n   len = 0;\n   for(;;) {\n      if (valid_bits < codesize) {\n         if (len == 0) {\n            len = stbi__get8(s); // start new block\n            if (len == 0)\n               return g->out;\n         
}\n         --len;\n         bits |= (stbi__int32) stbi__get8(s) << valid_bits;\n         valid_bits += 8;\n      } else {\n         stbi__int32 code = bits & codemask;\n         bits >>= codesize;\n         valid_bits -= codesize;\n         // @OPTIMIZE: is there some way we can accelerate the non-clear path?\n         if (code == clear) {  // clear code\n            codesize = lzw_cs + 1;\n            codemask = (1 << codesize) - 1;\n            avail = clear + 2;\n            oldcode = -1;\n            first = 0;\n         } else if (code == clear + 1) { // end of stream code\n            stbi__skip(s, len);\n            while ((len = stbi__get8(s)) > 0)\n               stbi__skip(s,len);\n            return g->out;\n         } else if (code <= avail) {\n            if (first) {\n               return stbi__errpuc(\"no clear code\", \"Corrupt GIF\");\n            }\n\n            if (oldcode >= 0) {\n               p = &g->codes[avail++];\n               if (avail > 8192) {\n                  return stbi__errpuc(\"too many codes\", \"Corrupt GIF\");\n               }\n\n               p->prefix = (stbi__int16) oldcode;\n               p->first = g->codes[oldcode].first;\n               p->suffix = (code == avail) ? 
p->first : g->codes[code].first;\n            } else if (code == avail)\n               return stbi__errpuc(\"illegal code in raster\", \"Corrupt GIF\");\n\n            stbi__out_gif_code(g, (stbi__uint16) code);\n\n            if ((avail & codemask) == 0 && avail <= 0x0FFF) {\n               codesize++;\n               codemask = (1 << codesize) - 1;\n            }\n\n            oldcode = code;\n         } else {\n            return stbi__errpuc(\"illegal code in raster\", \"Corrupt GIF\");\n         }\n      }\n   }\n}\n\n// this function is designed to support animated gifs, although stb_image doesn't support it\n// two back is the image from two frames ago, used for a very specific disposal format\nstatic stbi_uc *stbi__gif_load_next(stbi__context *s, stbi__gif *g, int *comp, int req_comp, stbi_uc *two_back)\n{\n   int dispose;\n   int first_frame;\n   int pi;\n   int pcount;\n   STBI_NOTUSED(req_comp);\n\n   // on first frame, any non-written pixels get the background colour (non-transparent)\n   first_frame = 0;\n   if (g->out == 0) {\n      if (!stbi__gif_header(s, g, comp,0)) return 0; // stbi__g_failure_reason set by stbi__gif_header\n      if (!stbi__mad3sizes_valid(4, g->w, g->h, 0))\n         return stbi__errpuc(\"too large\", \"GIF image is too large\");\n      pcount = g->w * g->h;\n      g->out = (stbi_uc *) stbi__malloc(4 * pcount);\n      g->background = (stbi_uc *) stbi__malloc(4 * pcount);\n      g->history = (stbi_uc *) stbi__malloc(pcount);\n      if (!g->out || !g->background || !g->history)\n         return stbi__errpuc(\"outofmem\", \"Out of memory\");\n\n      // image is treated as \"transparent\" at the start - ie, nothing overwrites the current background;\n      // background colour is only used for pixels that are not rendered first frame, after that \"background\"\n      // color refers to the color that was there the previous frame.\n      memset(g->out, 0x00, 4 * pcount);\n      memset(g->background, 0x00, 4 * pcount); // state of 
the background (starts transparent)\n      memset(g->history, 0x00, pcount);        // pixels that were affected previous frame\n      first_frame = 1;\n   } else {\n      // second frame - how do we dispose of the previous one?\n      dispose = (g->eflags & 0x1C) >> 2;\n      pcount = g->w * g->h;\n\n      if ((dispose == 3) && (two_back == 0)) {\n         dispose = 2; // if I don't have an image to revert back to, default to the old background\n      }\n\n      if (dispose == 3) { // use previous graphic\n         for (pi = 0; pi < pcount; ++pi) {\n            if (g->history[pi]) {\n               memcpy( &g->out[pi * 4], &two_back[pi * 4], 4 );\n            }\n         }\n      } else if (dispose == 2) {\n         // restore what was changed last frame to background before that frame;\n         for (pi = 0; pi < pcount; ++pi) {\n            if (g->history[pi]) {\n               memcpy( &g->out[pi * 4], &g->background[pi * 4], 4 );\n            }\n         }\n      } else {\n         // This is a non-disposal case either way, so just\n         // leave the pixels as is, and they will become the new background\n         // 1: do not dispose\n         // 0: not specified.\n      }\n\n      // background is what out is after the undoing of the previous frame;\n      memcpy( g->background, g->out, 4 * g->w * g->h );\n   }\n\n   // clear my history;\n   memset( g->history, 0x00, g->w * g->h );        // pixels that were affected previous frame\n\n   for (;;) {\n      int tag = stbi__get8(s);\n      switch (tag) {\n         case 0x2C: /* Image Descriptor */\n         {\n            stbi__int32 x, y, w, h;\n            stbi_uc *o;\n\n            x = stbi__get16le(s);\n            y = stbi__get16le(s);\n            w = stbi__get16le(s);\n            h = stbi__get16le(s);\n            if (((x + w) > (g->w)) || ((y + h) > (g->h)))\n               return stbi__errpuc(\"bad Image Descriptor\", \"Corrupt GIF\");\n\n            g->line_size = g->w * 4;\n            g->start_x = 
x * 4;\n            g->start_y = y * g->line_size;\n            g->max_x   = g->start_x + w * 4;\n            g->max_y   = g->start_y + h * g->line_size;\n            g->cur_x   = g->start_x;\n            g->cur_y   = g->start_y;\n\n            // if the width of the specified rectangle is 0, that means\n            // we may not see *any* pixels or the image is malformed;\n            // to make sure this is caught, move the current y down to\n            // max_y (which is what out_gif_code checks).\n            if (w == 0)\n               g->cur_y = g->max_y;\n\n            g->lflags = stbi__get8(s);\n\n            if (g->lflags & 0x40) {\n               g->step = 8 * g->line_size; // first interlaced spacing\n               g->parse = 3;\n            } else {\n               g->step = g->line_size;\n               g->parse = 0;\n            }\n\n            if (g->lflags & 0x80) {\n               stbi__gif_parse_colortable(s,g->lpal, 2 << (g->lflags & 7), g->eflags & 0x01 ? g->transparent : -1);\n               g->color_table = (stbi_uc *) g->lpal;\n            } else if (g->flags & 0x80) {\n               g->color_table = (stbi_uc *) g->pal;\n            } else\n               return stbi__errpuc(\"missing color table\", \"Corrupt GIF\");\n\n            o = stbi__process_gif_raster(s, g);\n            if (!o) return NULL;\n\n            // if this was the first frame,\n            pcount = g->w * g->h;\n            if (first_frame && (g->bgindex > 0)) {\n               // if first frame, any pixel not drawn to gets the background color\n               for (pi = 0; pi < pcount; ++pi) {\n                  if (g->history[pi] == 0) {\n                     g->pal[g->bgindex][3] = 255; // just in case it was made transparent, undo that; It will be reset next frame if need be;\n                     memcpy( &g->out[pi * 4], &g->pal[g->bgindex], 4 );\n                  }\n               }\n            }\n\n            return o;\n         }\n\n         case 0x21: // 
Comment Extension.\n         {\n            int len;\n            int ext = stbi__get8(s);\n            if (ext == 0xF9) { // Graphic Control Extension.\n               len = stbi__get8(s);\n               if (len == 4) {\n                  g->eflags = stbi__get8(s);\n                  g->delay = 10 * stbi__get16le(s); // delay - 1/100th of a second, saving as 1/1000ths.\n\n                  // unset old transparent\n                  if (g->transparent >= 0) {\n                     g->pal[g->transparent][3] = 255;\n                  }\n                  if (g->eflags & 0x01) {\n                     g->transparent = stbi__get8(s);\n                     if (g->transparent >= 0) {\n                        g->pal[g->transparent][3] = 0;\n                     }\n                  } else {\n                     // don't need transparent\n                     stbi__skip(s, 1);\n                     g->transparent = -1;\n                  }\n               } else {\n                  stbi__skip(s, len);\n                  break;\n               }\n            }\n            while ((len = stbi__get8(s)) != 0) {\n               stbi__skip(s, len);\n            }\n            break;\n         }\n\n         case 0x3B: // gif stream termination code\n            return (stbi_uc *) s; // using '1' causes warning on some compilers\n\n         default:\n            return stbi__errpuc(\"unknown code\", \"Corrupt GIF\");\n      }\n   }\n}\n\nstatic void *stbi__load_gif_main_outofmem(stbi__gif *g, stbi_uc *out, int **delays)\n{\n   STBI_FREE(g->out);\n   STBI_FREE(g->history);\n   STBI_FREE(g->background);\n\n   if (out) STBI_FREE(out);\n   if (delays && *delays) STBI_FREE(*delays);\n   return stbi__errpuc(\"outofmem\", \"Out of memory\");\n}\n\nstatic void *stbi__load_gif_main(stbi__context *s, int **delays, int *x, int *y, int *z, int *comp, int req_comp)\n{\n   if (stbi__gif_test(s)) {\n      int layers = 0;\n      stbi_uc *u = 0;\n      stbi_uc *out = 0;\n      stbi_uc 
*two_back = 0;\n      stbi__gif g;\n      int stride;\n      int out_size = 0;\n      int delays_size = 0;\n\n      STBI_NOTUSED(out_size);\n      STBI_NOTUSED(delays_size);\n\n      memset(&g, 0, sizeof(g));\n      if (delays) {\n         *delays = 0;\n      }\n\n      do {\n         u = stbi__gif_load_next(s, &g, comp, req_comp, two_back);\n         if (u == (stbi_uc *) s) u = 0;  // end of animated gif marker\n\n         if (u) {\n            *x = g.w;\n            *y = g.h;\n            ++layers;\n            stride = g.w * g.h * 4;\n\n            if (out) {\n               void *tmp = (stbi_uc*) STBI_REALLOC_SIZED( out, out_size, layers * stride );\n               if (!tmp)\n                  return stbi__load_gif_main_outofmem(&g, out, delays);\n               else {\n                   out = (stbi_uc*) tmp;\n                   out_size = layers * stride;\n               }\n\n               if (delays) {\n                  int *new_delays = (int*) STBI_REALLOC_SIZED( *delays, delays_size, sizeof(int) * layers );\n                  if (!new_delays)\n                     return stbi__load_gif_main_outofmem(&g, out, delays);\n                  *delays = new_delays;\n                  delays_size = layers * sizeof(int);\n               }\n            } else {\n               out = (stbi_uc*)stbi__malloc( layers * stride );\n               if (!out)\n                  return stbi__load_gif_main_outofmem(&g, out, delays);\n               out_size = layers * stride;\n               if (delays) {\n                  *delays = (int*) stbi__malloc( layers * sizeof(int) );\n                  if (!*delays)\n                     return stbi__load_gif_main_outofmem(&g, out, delays);\n                  delays_size = layers * sizeof(int);\n               }\n            }\n            memcpy( out + ((layers - 1) * stride), u, stride );\n            if (layers >= 2) {\n               two_back = out + ((layers - 2) * stride); // second-to-last frame, for \"restore to previous\" disposal\n            }\n\n            if (delays) {\n               
(*delays)[layers - 1U] = g.delay;\n            }\n         }\n      } while (u != 0);\n\n      // free temp buffer;\n      STBI_FREE(g.out);\n      STBI_FREE(g.history);\n      STBI_FREE(g.background);\n\n      // do the final conversion after loading everything;\n      if (req_comp && req_comp != 4)\n         out = stbi__convert_format(out, 4, req_comp, layers * g.w, g.h);\n\n      *z = layers;\n      return out;\n   } else {\n      return stbi__errpuc(\"not GIF\", \"Image was not a GIF type.\");\n   }\n}\n\nstatic void *stbi__gif_load(stbi__context *s, int *x, int *y, int *comp, int req_comp, stbi__result_info *ri)\n{\n   stbi_uc *u = 0;\n   stbi__gif g;\n   memset(&g, 0, sizeof(g));\n   STBI_NOTUSED(ri);\n\n   u = stbi__gif_load_next(s, &g, comp, req_comp, 0);\n   if (u == (stbi_uc *) s) u = 0;  // end of animated gif marker\n   if (u) {\n      *x = g.w;\n      *y = g.h;\n\n      // moved conversion to after successful load so that the same\n      // can be done for multiple frames.\n      if (req_comp && req_comp != 4)\n         u = stbi__convert_format(u, 4, req_comp, g.w, g.h);\n   } else if (g.out) {\n      // if there was an error and we allocated an image buffer, free it!\n      STBI_FREE(g.out);\n   }\n\n   // free buffers needed for multiple frame loading;\n   STBI_FREE(g.history);\n   STBI_FREE(g.background);\n\n   return u;\n}\n\nstatic int stbi__gif_info(stbi__context *s, int *x, int *y, int *comp)\n{\n   return stbi__gif_info_raw(s,x,y,comp);\n}\n#endif\n\n// *************************************************************************************************\n// Radiance RGBE HDR loader\n// originally by Nicolas Schulz\n#ifndef STBI_NO_HDR\nstatic int stbi__hdr_test_core(stbi__context *s, const char *signature)\n{\n   int i;\n   for (i=0; signature[i]; ++i)\n      if (stbi__get8(s) != signature[i])\n          return 0;\n   stbi__rewind(s);\n   return 1;\n}\n\nstatic int stbi__hdr_test(stbi__context* s)\n{\n   int r = stbi__hdr_test_core(s, 
\"#?RADIANCE\\n\");\n   stbi__rewind(s);\n   if(!r) {\n       r = stbi__hdr_test_core(s, \"#?RGBE\\n\");\n       stbi__rewind(s);\n   }\n   return r;\n}\n\n#define STBI__HDR_BUFLEN  1024\nstatic char *stbi__hdr_gettoken(stbi__context *z, char *buffer)\n{\n   int len=0;\n   char c = '\\0';\n\n   c = (char) stbi__get8(z);\n\n   while (!stbi__at_eof(z) && c != '\\n') {\n      buffer[len++] = c;\n      if (len == STBI__HDR_BUFLEN-1) {\n         // flush to end of line\n         while (!stbi__at_eof(z) && stbi__get8(z) != '\\n')\n            ;\n         break;\n      }\n      c = (char) stbi__get8(z);\n   }\n\n   buffer[len] = 0;\n   return buffer;\n}\n\nstatic void stbi__hdr_convert(float *output, stbi_uc *input, int req_comp)\n{\n   if ( input[3] != 0 ) {\n      float f1;\n      // Exponent\n      f1 = (float) ldexp(1.0f, input[3] - (int)(128 + 8));\n      if (req_comp <= 2)\n         output[0] = (input[0] + input[1] + input[2]) * f1 / 3;\n      else {\n         output[0] = input[0] * f1;\n         output[1] = input[1] * f1;\n         output[2] = input[2] * f1;\n      }\n      if (req_comp == 2) output[1] = 1;\n      if (req_comp == 4) output[3] = 1;\n   } else {\n      switch (req_comp) {\n         case 4: output[3] = 1; /* fallthrough */\n         case 3: output[0] = output[1] = output[2] = 0;\n                 break;\n         case 2: output[1] = 1; /* fallthrough */\n         case 1: output[0] = 0;\n                 break;\n      }\n   }\n}\n\nstatic float *stbi__hdr_load(stbi__context *s, int *x, int *y, int *comp, int req_comp, stbi__result_info *ri)\n{\n   char buffer[STBI__HDR_BUFLEN];\n   char *token;\n   int valid = 0;\n   int width, height;\n   stbi_uc *scanline;\n   float *hdr_data;\n   int len;\n   unsigned char count, value;\n   int i, j, k, c1,c2, z;\n   const char *headerToken;\n   STBI_NOTUSED(ri);\n\n   // Check identifier\n   headerToken = stbi__hdr_gettoken(s,buffer);\n   if (strcmp(headerToken, \"#?RADIANCE\") != 0 && strcmp(headerToken, 
\"#?RGBE\") != 0)\n      return stbi__errpf(\"not HDR\", \"Corrupt HDR image\");\n\n   // Parse header\n   for(;;) {\n      token = stbi__hdr_gettoken(s,buffer);\n      if (token[0] == 0) break;\n      if (strcmp(token, \"FORMAT=32-bit_rle_rgbe\") == 0) valid = 1;\n   }\n\n   if (!valid)    return stbi__errpf(\"unsupported format\", \"Unsupported HDR format\");\n\n   // Parse width and height\n   // can't use sscanf() if we're not using stdio!\n   token = stbi__hdr_gettoken(s,buffer);\n   if (strncmp(token, \"-Y \", 3))  return stbi__errpf(\"unsupported data layout\", \"Unsupported HDR format\");\n   token += 3;\n   height = (int) strtol(token, &token, 10);\n   while (*token == ' ') ++token;\n   if (strncmp(token, \"+X \", 3))  return stbi__errpf(\"unsupported data layout\", \"Unsupported HDR format\");\n   token += 3;\n   width = (int) strtol(token, NULL, 10);\n\n   if (height > STBI_MAX_DIMENSIONS) return stbi__errpf(\"too large\",\"Very large image (corrupt?)\");\n   if (width > STBI_MAX_DIMENSIONS) return stbi__errpf(\"too large\",\"Very large image (corrupt?)\");\n\n   *x = width;\n   *y = height;\n\n   if (comp) *comp = 3;\n   if (req_comp == 0) req_comp = 3;\n\n   if (!stbi__mad4sizes_valid(width, height, req_comp, sizeof(float), 0))\n      return stbi__errpf(\"too large\", \"HDR image is too large\");\n\n   // Read data\n   hdr_data = (float *) stbi__malloc_mad4(width, height, req_comp, sizeof(float), 0);\n   if (!hdr_data)\n      return stbi__errpf(\"outofmem\", \"Out of memory\");\n\n   // Load image data\n   // image data is stored as some number of scanlines\n   if ( width < 8 || width >= 32768) {\n      // Read flat data\n      for (j=0; j < height; ++j) {\n         for (i=0; i < width; ++i) {\n            stbi_uc rgbe[4];\n           main_decode_loop:\n            stbi__getn(s, rgbe, 4);\n            stbi__hdr_convert(hdr_data + j * width * req_comp + i * req_comp, rgbe, req_comp);\n         }\n      }\n   } else {\n      // Read RLE-encoded data\n      
scanline = NULL;\n\n      for (j = 0; j < height; ++j) {\n         c1 = stbi__get8(s);\n         c2 = stbi__get8(s);\n         len = stbi__get8(s);\n         if (c1 != 2 || c2 != 2 || (len & 0x80)) {\n            // not run-length encoded, so we have to actually use THIS data as a decoded\n            // pixel (note this can't be a valid pixel--one of RGB must be >= 128)\n            stbi_uc rgbe[4];\n            rgbe[0] = (stbi_uc) c1;\n            rgbe[1] = (stbi_uc) c2;\n            rgbe[2] = (stbi_uc) len;\n            rgbe[3] = (stbi_uc) stbi__get8(s);\n            stbi__hdr_convert(hdr_data, rgbe, req_comp);\n            i = 1;\n            j = 0;\n            STBI_FREE(scanline);\n            goto main_decode_loop; // yes, this makes no sense\n         }\n         len <<= 8;\n         len |= stbi__get8(s);\n         if (len != width) { STBI_FREE(hdr_data); STBI_FREE(scanline); return stbi__errpf(\"invalid decoded scanline length\", \"corrupt HDR\"); }\n         if (scanline == NULL) {\n            scanline = (stbi_uc *) stbi__malloc_mad2(width, 4, 0);\n            if (!scanline) {\n               STBI_FREE(hdr_data);\n               return stbi__errpf(\"outofmem\", \"Out of memory\");\n            }\n         }\n\n         for (k = 0; k < 4; ++k) {\n            int nleft;\n            i = 0;\n            while ((nleft = width - i) > 0) {\n               count = stbi__get8(s);\n               if (count > 128) {\n                  // Run\n                  value = stbi__get8(s);\n                  count -= 128;\n                  if ((count == 0) || (count > nleft)) { STBI_FREE(hdr_data); STBI_FREE(scanline); return stbi__errpf(\"corrupt\", \"bad RLE data in HDR\"); }\n                  for (z = 0; z < count; ++z)\n                     scanline[i++ * 4 + k] = value;\n               } else {\n                  // Dump\n                  if ((count == 0) || (count > nleft)) { STBI_FREE(hdr_data); STBI_FREE(scanline); return stbi__errpf(\"corrupt\", \"bad RLE 
data in HDR\"); }\n                  for (z = 0; z < count; ++z)\n                     scanline[i++ * 4 + k] = stbi__get8(s);\n               }\n            }\n         }\n         for (i=0; i < width; ++i)\n            stbi__hdr_convert(hdr_data+(j*width + i)*req_comp, scanline + i*4, req_comp);\n      }\n      if (scanline)\n         STBI_FREE(scanline);\n   }\n\n   return hdr_data;\n}\n\nstatic int stbi__hdr_info(stbi__context *s, int *x, int *y, int *comp)\n{\n   char buffer[STBI__HDR_BUFLEN];\n   char *token;\n   int valid = 0;\n   int dummy;\n\n   if (!x) x = &dummy;\n   if (!y) y = &dummy;\n   if (!comp) comp = &dummy;\n\n   if (stbi__hdr_test(s) == 0) {\n       stbi__rewind( s );\n       return 0;\n   }\n\n   for(;;) {\n      token = stbi__hdr_gettoken(s,buffer);\n      if (token[0] == 0) break;\n      if (strcmp(token, \"FORMAT=32-bit_rle_rgbe\") == 0) valid = 1;\n   }\n\n   if (!valid) {\n       stbi__rewind( s );\n       return 0;\n   }\n   token = stbi__hdr_gettoken(s,buffer);\n   if (strncmp(token, \"-Y \", 3)) {\n       stbi__rewind( s );\n       return 0;\n   }\n   token += 3;\n   *y = (int) strtol(token, &token, 10);\n   while (*token == ' ') ++token;\n   if (strncmp(token, \"+X \", 3)) {\n       stbi__rewind( s );\n       return 0;\n   }\n   token += 3;\n   *x = (int) strtol(token, NULL, 10);\n   *comp = 3;\n   return 1;\n}\n#endif // STBI_NO_HDR\n\n#ifndef STBI_NO_BMP\nstatic int stbi__bmp_info(stbi__context *s, int *x, int *y, int *comp)\n{\n   void *p;\n   stbi__bmp_data info;\n\n   info.all_a = 255;\n   p = stbi__bmp_parse_header(s, &info);\n   if (p == NULL) {\n      stbi__rewind( s );\n      return 0;\n   }\n   if (x) *x = s->img_x;\n   if (y) *y = s->img_y;\n   if (comp) {\n      if (info.bpp == 24 && info.ma == 0xff000000)\n         *comp = 3;\n      else\n         *comp = info.ma ? 
4 : 3;\n   }\n   return 1;\n}\n#endif\n\n#ifndef STBI_NO_PSD\nstatic int stbi__psd_info(stbi__context *s, int *x, int *y, int *comp)\n{\n   int channelCount, dummy, depth;\n   if (!x) x = &dummy;\n   if (!y) y = &dummy;\n   if (!comp) comp = &dummy;\n   if (stbi__get32be(s) != 0x38425053) {\n       stbi__rewind( s );\n       return 0;\n   }\n   if (stbi__get16be(s) != 1) {\n       stbi__rewind( s );\n       return 0;\n   }\n   stbi__skip(s, 6);\n   channelCount = stbi__get16be(s);\n   if (channelCount < 0 || channelCount > 16) {\n       stbi__rewind( s );\n       return 0;\n   }\n   *y = stbi__get32be(s);\n   *x = stbi__get32be(s);\n   depth = stbi__get16be(s);\n   if (depth != 8 && depth != 16) {\n       stbi__rewind( s );\n       return 0;\n   }\n   if (stbi__get16be(s) != 3) {\n       stbi__rewind( s );\n       return 0;\n   }\n   *comp = 4;\n   return 1;\n}\n\nstatic int stbi__psd_is16(stbi__context *s)\n{\n   int channelCount, depth;\n   if (stbi__get32be(s) != 0x38425053) {\n       stbi__rewind( s );\n       return 0;\n   }\n   if (stbi__get16be(s) != 1) {\n       stbi__rewind( s );\n       return 0;\n   }\n   stbi__skip(s, 6);\n   channelCount = stbi__get16be(s);\n   if (channelCount < 0 || channelCount > 16) {\n       stbi__rewind( s );\n       return 0;\n   }\n   STBI_NOTUSED(stbi__get32be(s));\n   STBI_NOTUSED(stbi__get32be(s));\n   depth = stbi__get16be(s);\n   if (depth != 16) {\n       stbi__rewind( s );\n       return 0;\n   }\n   return 1;\n}\n#endif\n\n#ifndef STBI_NO_PIC\nstatic int stbi__pic_info(stbi__context *s, int *x, int *y, int *comp)\n{\n   int act_comp=0,num_packets=0,chained,dummy;\n   stbi__pic_packet packets[10];\n\n   if (!x) x = &dummy;\n   if (!y) y = &dummy;\n   if (!comp) comp = &dummy;\n\n   if (!stbi__pic_is4(s,\"\\x53\\x80\\xF6\\x34\")) {\n      stbi__rewind(s);\n      return 0;\n   }\n\n   stbi__skip(s, 88);\n\n   *x = stbi__get16be(s);\n   *y = stbi__get16be(s);\n   if (stbi__at_eof(s)) {\n      stbi__rewind( s);\n      return 
0;\n   }\n   if ( (*x) != 0 && (1 << 28) / (*x) < (*y)) {\n      stbi__rewind( s );\n      return 0;\n   }\n\n   stbi__skip(s, 8);\n\n   do {\n      stbi__pic_packet *packet;\n\n      if (num_packets==sizeof(packets)/sizeof(packets[0]))\n         return 0;\n\n      packet = &packets[num_packets++];\n      chained = stbi__get8(s);\n      packet->size    = stbi__get8(s);\n      packet->type    = stbi__get8(s);\n      packet->channel = stbi__get8(s);\n      act_comp |= packet->channel;\n\n      if (stbi__at_eof(s)) {\n          stbi__rewind( s );\n          return 0;\n      }\n      if (packet->size != 8) {\n          stbi__rewind( s );\n          return 0;\n      }\n   } while (chained);\n\n   *comp = (act_comp & 0x10 ? 4 : 3);\n\n   return 1;\n}\n#endif\n\n// *************************************************************************************************\n// Portable Gray Map and Portable Pixel Map loader\n// by Ken Miller\n//\n// PGM: http://netpbm.sourceforge.net/doc/pgm.html\n// PPM: http://netpbm.sourceforge.net/doc/ppm.html\n//\n// Known limitations:\n//    Does not support ASCII image data (formats P2 and P3)\n\n#ifndef STBI_NO_PNM\n\nstatic int      stbi__pnm_test(stbi__context *s)\n{\n   char p, t;\n   p = (char) stbi__get8(s);\n   t = (char) stbi__get8(s);\n   if (p != 'P' || (t != '5' && t != '6')) {\n       stbi__rewind( s );\n       return 0;\n   }\n   return 1;\n}\n\nstatic void *stbi__pnm_load(stbi__context *s, int *x, int *y, int *comp, int req_comp, stbi__result_info *ri)\n{\n   stbi_uc *out;\n   STBI_NOTUSED(ri);\n\n   ri->bits_per_channel = stbi__pnm_info(s, (int *)&s->img_x, (int *)&s->img_y, (int *)&s->img_n);\n   if (ri->bits_per_channel == 0)\n      return 0;\n\n   if (s->img_y > STBI_MAX_DIMENSIONS) return stbi__errpuc(\"too large\",\"Very large image (corrupt?)\");\n   if (s->img_x > STBI_MAX_DIMENSIONS) return stbi__errpuc(\"too large\",\"Very large image (corrupt?)\");\n\n   *x = 
s->img_x;\n   *y = s->img_y;\n   if (comp) *comp = s->img_n;\n\n   if (!stbi__mad4sizes_valid(s->img_n, s->img_x, s->img_y, ri->bits_per_channel / 8, 0))\n      return stbi__errpuc(\"too large\", \"PNM too large\");\n\n   out = (stbi_uc *) stbi__malloc_mad4(s->img_n, s->img_x, s->img_y, ri->bits_per_channel / 8, 0);\n   if (!out) return stbi__errpuc(\"outofmem\", \"Out of memory\");\n   if (!stbi__getn(s, out, s->img_n * s->img_x * s->img_y * (ri->bits_per_channel / 8))) {\n      STBI_FREE(out);\n      return stbi__errpuc(\"bad PNM\", \"PNM file truncated\");\n   }\n\n   if (req_comp && req_comp != s->img_n) {\n      if (ri->bits_per_channel == 16) {\n         out = (stbi_uc *) stbi__convert_format16((stbi__uint16 *) out, s->img_n, req_comp, s->img_x, s->img_y);\n      } else {\n         out = stbi__convert_format(out, s->img_n, req_comp, s->img_x, s->img_y);\n      }\n      if (out == NULL) return out; // stbi__convert_format frees input on failure\n   }\n   return out;\n}\n\nstatic int      stbi__pnm_isspace(char c)\n{\n   return c == ' ' || c == '\\t' || c == '\\n' || c == '\\v' || c == '\\f' || c == '\\r';\n}\n\nstatic void     stbi__pnm_skip_whitespace(stbi__context *s, char *c)\n{\n   for (;;) {\n      while (!stbi__at_eof(s) && stbi__pnm_isspace(*c))\n         *c = (char) stbi__get8(s);\n\n      if (stbi__at_eof(s) || *c != '#')\n         break;\n\n      while (!stbi__at_eof(s) && *c != '\\n' && *c != '\\r' )\n         *c = (char) stbi__get8(s);\n   }\n}\n\nstatic int      stbi__pnm_isdigit(char c)\n{\n   return c >= '0' && c <= '9';\n}\n\nstatic int      stbi__pnm_getinteger(stbi__context *s, char *c)\n{\n   int value = 0;\n\n   while (!stbi__at_eof(s) && stbi__pnm_isdigit(*c)) {\n      value = value*10 + (*c - '0');\n      *c = (char) stbi__get8(s);\n      if((value > 214748364) || (value == 214748364 && *c > '7'))\n          return stbi__err(\"integer parse overflow\", \"Parsing an integer in the PPM header overflowed a 32-bit int\");\n   }\n\n   return 
value;\n}\n\nstatic int      stbi__pnm_info(stbi__context *s, int *x, int *y, int *comp)\n{\n   int maxv, dummy;\n   char c, p, t;\n\n   if (!x) x = &dummy;\n   if (!y) y = &dummy;\n   if (!comp) comp = &dummy;\n\n   stbi__rewind(s);\n\n   // Get identifier\n   p = (char) stbi__get8(s);\n   t = (char) stbi__get8(s);\n   if (p != 'P' || (t != '5' && t != '6')) {\n       stbi__rewind(s);\n       return 0;\n   }\n\n   *comp = (t == '6') ? 3 : 1;  // '5' is 1-component .pgm; '6' is 3-component .ppm\n\n   c = (char) stbi__get8(s);\n   stbi__pnm_skip_whitespace(s, &c);\n\n   *x = stbi__pnm_getinteger(s, &c); // read width\n   if(*x == 0)\n       return stbi__err(\"invalid width\", \"PPM image header had zero or overflowing width\");\n   stbi__pnm_skip_whitespace(s, &c);\n\n   *y = stbi__pnm_getinteger(s, &c); // read height\n   if (*y == 0)\n       return stbi__err(\"invalid height\", \"PPM image header had zero or overflowing height\");\n   stbi__pnm_skip_whitespace(s, &c);\n\n   maxv = stbi__pnm_getinteger(s, &c);  // read max value\n   if (maxv > 65535)\n      return stbi__err(\"max value > 65535\", \"PPM image supports only 8-bit and 16-bit images\");\n   else if (maxv > 255)\n      return 16;\n   else\n      return 8;\n}\n\nstatic int stbi__pnm_is16(stbi__context *s)\n{\n   if (stbi__pnm_info(s, NULL, NULL, NULL) == 16)\n\t   return 1;\n   return 0;\n}\n#endif\n\nstatic int stbi__info_main(stbi__context *s, int *x, int *y, int *comp)\n{\n   #ifndef STBI_NO_JPEG\n   if (stbi__jpeg_info(s, x, y, comp)) return 1;\n   #endif\n\n   #ifndef STBI_NO_PNG\n   if (stbi__png_info(s, x, y, comp))  return 1;\n   #endif\n\n   #ifndef STBI_NO_GIF\n   if (stbi__gif_info(s, x, y, comp))  return 1;\n   #endif\n\n   #ifndef STBI_NO_BMP\n   if (stbi__bmp_info(s, x, y, comp))  return 1;\n   #endif\n\n   #ifndef STBI_NO_PSD\n   if (stbi__psd_info(s, x, y, comp))  return 1;\n   #endif\n\n   #ifndef STBI_NO_PIC\n   if (stbi__pic_info(s, x, y, comp))  return 1;\n   #endif\n\n   #ifndef 
STBI_NO_PNM\n   if (stbi__pnm_info(s, x, y, comp))  return 1;\n   #endif\n\n   #ifndef STBI_NO_HDR\n   if (stbi__hdr_info(s, x, y, comp))  return 1;\n   #endif\n\n   // test tga last because it's a crappy test!\n   #ifndef STBI_NO_TGA\n   if (stbi__tga_info(s, x, y, comp))\n       return 1;\n   #endif\n   return stbi__err(\"unknown image type\", \"Image not of any known type, or corrupt\");\n}\n\nstatic int stbi__is_16_main(stbi__context *s)\n{\n   #ifndef STBI_NO_PNG\n   if (stbi__png_is16(s))  return 1;\n   #endif\n\n   #ifndef STBI_NO_PSD\n   if (stbi__psd_is16(s))  return 1;\n   #endif\n\n   #ifndef STBI_NO_PNM\n   if (stbi__pnm_is16(s))  return 1;\n   #endif\n   return 0;\n}\n\n#ifndef STBI_NO_STDIO\nSTBIDEF int stbi_info(char const *filename, int *x, int *y, int *comp)\n{\n    FILE *f = stbi__fopen(filename, \"rb\");\n    int result;\n    if (!f) return stbi__err(\"can't fopen\", \"Unable to open file\");\n    result = stbi_info_from_file(f, x, y, comp);\n    fclose(f);\n    return result;\n}\n\nSTBIDEF int stbi_info_from_file(FILE *f, int *x, int *y, int *comp)\n{\n   int r;\n   stbi__context s;\n   long pos = ftell(f);\n   stbi__start_file(&s, f);\n   r = stbi__info_main(&s,x,y,comp);\n   fseek(f,pos,SEEK_SET);\n   return r;\n}\n\nSTBIDEF int stbi_is_16_bit(char const *filename)\n{\n    FILE *f = stbi__fopen(filename, \"rb\");\n    int result;\n    if (!f) return stbi__err(\"can't fopen\", \"Unable to open file\");\n    result = stbi_is_16_bit_from_file(f);\n    fclose(f);\n    return result;\n}\n\nSTBIDEF int stbi_is_16_bit_from_file(FILE *f)\n{\n   int r;\n   stbi__context s;\n   long pos = ftell(f);\n   stbi__start_file(&s, f);\n   r = stbi__is_16_main(&s);\n   fseek(f,pos,SEEK_SET);\n   return r;\n}\n#endif // !STBI_NO_STDIO\n\nSTBIDEF int stbi_info_from_memory(stbi_uc const *buffer, int len, int *x, int *y, int *comp)\n{\n   stbi__context s;\n   stbi__start_mem(&s,buffer,len);\n   return stbi__info_main(&s,x,y,comp);\n}\n\nSTBIDEF int 
stbi_info_from_callbacks(stbi_io_callbacks const *c, void *user, int *x, int *y, int *comp)\n{\n   stbi__context s;\n   stbi__start_callbacks(&s, (stbi_io_callbacks *) c, user);\n   return stbi__info_main(&s,x,y,comp);\n}\n\nSTBIDEF int stbi_is_16_bit_from_memory(stbi_uc const *buffer, int len)\n{\n   stbi__context s;\n   stbi__start_mem(&s,buffer,len);\n   return stbi__is_16_main(&s);\n}\n\nSTBIDEF int stbi_is_16_bit_from_callbacks(stbi_io_callbacks const *c, void *user)\n{\n   stbi__context s;\n   stbi__start_callbacks(&s, (stbi_io_callbacks *) c, user);\n   return stbi__is_16_main(&s);\n}\n\n#endif // STB_IMAGE_IMPLEMENTATION\n\n/*\n   revision history:\n      2.20  (2019-02-07) support utf8 filenames in Windows; fix warnings and platform ifdefs\n      2.19  (2018-02-11) fix warning\n      2.18  (2018-01-30) fix warnings\n      2.17  (2018-01-29) change sbti__shiftsigned to avoid clang -O2 bug\n                         1-bit BMP\n                         *_is_16_bit api\n                         avoid warnings\n      2.16  (2017-07-23) all functions have 16-bit variants;\n                         STBI_NO_STDIO works again;\n                         compilation fixes;\n                         fix rounding in unpremultiply;\n                         optimize vertical flip;\n                         disable raw_len validation;\n                         documentation fixes\n      2.15  (2017-03-18) fix png-1,2,4 bug; now all Imagenet JPGs decode;\n                         warning fixes; disable run-time SSE detection on gcc;\n                         uniform handling of optional \"return\" values;\n                         thread-safe initialization of zlib tables\n      2.14  (2017-03-03) remove deprecated STBI_JPEG_OLD; fixes for Imagenet JPGs\n      2.13  (2016-11-29) add 16-bit API, only supported for PNG right now\n      2.12  (2016-04-02) fix typo in 2.11 PSD fix that caused crashes\n      2.11  (2016-04-02) allocate large structures on the stack\n            
             remove white matting for transparent PSD\n                         fix reported channel count for PNG & BMP\n                         re-enable SSE2 in non-gcc 64-bit\n                         support RGB-formatted JPEG\n                         read 16-bit PNGs (only as 8-bit)\n      2.10  (2016-01-22) avoid warning introduced in 2.09 by STBI_REALLOC_SIZED\n      2.09  (2016-01-16) allow comments in PNM files\n                         16-bit-per-pixel TGA (not bit-per-component)\n                         info() for TGA could break due to .hdr handling\n                         info() for BMP now shares code instead of sloppy parse\n                         can use STBI_REALLOC_SIZED if allocator doesn't support realloc\n                         code cleanup\n      2.08  (2015-09-13) fix to 2.07 cleanup, reading RGB PSD as RGBA\n      2.07  (2015-09-13) fix compiler warnings\n                         partial animated GIF support\n                         limited 16-bpc PSD support\n                         #ifdef unused functions\n                         bug with < 92 byte PIC,PNM,HDR,TGA\n      2.06  (2015-04-19) fix bug where PSD returns wrong '*comp' value\n      2.05  (2015-04-19) fix bug in progressive JPEG handling, fix warning\n      2.04  (2015-04-15) try to re-enable SIMD on MinGW 64-bit\n      2.03  (2015-04-12) extra corruption checking (mmozeiko)\n                         stbi_set_flip_vertically_on_load (nguillemot)\n                         fix NEON support; fix mingw support\n      2.02  (2015-01-19) fix incorrect assert, fix warning\n      2.01  (2015-01-17) fix various warnings; suppress SIMD on gcc 32-bit without -msse2\n      2.00b (2014-12-25) fix STBI_MALLOC in progressive JPEG\n      2.00  (2014-12-25) optimize JPG, including x86 SSE2 & NEON SIMD (ryg)\n                         progressive JPEG (stb)\n                         PGM/PPM support (Ken Miller)\n                         STBI_MALLOC,STBI_REALLOC,STBI_FREE\n                
         GIF bugfix -- seemingly never worked\n                         STBI_NO_*, STBI_ONLY_*\n      1.48  (2014-12-14) fix incorrectly-named assert()\n      1.47  (2014-12-14) 1/2/4-bit PNG support, both direct and paletted (Omar Cornut & stb)\n                         optimize PNG (ryg)\n                         fix bug in interlaced PNG with user-specified channel count (stb)\n      1.46  (2014-08-26)\n              fix broken tRNS chunk (colorkey-style transparency) in non-paletted PNG\n      1.45  (2014-08-16)\n              fix MSVC-ARM internal compiler error by wrapping malloc\n      1.44  (2014-08-07)\n              various warning fixes from Ronny Chevalier\n      1.43  (2014-07-15)\n              fix MSVC-only compiler problem in code changed in 1.42\n      1.42  (2014-07-09)\n              don't define _CRT_SECURE_NO_WARNINGS (affects user code)\n              fixes to stbi__cleanup_jpeg path\n              added STBI_ASSERT to avoid requiring assert.h\n      1.41  (2014-06-25)\n              fix search&replace from 1.36 that messed up comments/error messages\n      1.40  (2014-06-22)\n              fix gcc struct-initialization warning\n      1.39  (2014-06-15)\n              fix to TGA optimization when req_comp != number of components in TGA;\n              fix to GIF loading because BMP wasn't rewinding (whoops, no GIFs in my test suite)\n              add support for BMP version 5 (more ignored fields)\n      1.38  (2014-06-06)\n              suppress MSVC warnings on integer casts truncating values\n              fix accidental rename of 'skip' field of I/O\n      1.37  (2014-06-04)\n              remove duplicate typedef\n      1.36  (2014-06-03)\n              convert to header file single-file library\n              if de-iphone isn't set, load iphone images color-swapped instead of returning NULL\n      1.35  (2014-05-27)\n              various warnings\n              fix broken STBI_SIMD path\n              fix bug where stbi_load_from_file 
no longer left file pointer in correct place\n              fix broken non-easy path for 32-bit BMP (possibly never used)\n              TGA optimization by Arseny Kapoulkine\n      1.34  (unknown)\n              use STBI_NOTUSED in stbi__resample_row_generic(), fix one more leak in tga failure case\n      1.33  (2011-07-14)\n              make stbi_is_hdr work in STBI_NO_HDR (as specified), minor compiler-friendly improvements\n      1.32  (2011-07-13)\n              support for \"info\" function for all supported filetypes (SpartanJ)\n      1.31  (2011-06-20)\n              a few more leak fixes, bug in PNG handling (SpartanJ)\n      1.30  (2011-06-11)\n              added ability to load files via callbacks to accommodate custom input streams (Ben Wenger)\n              removed deprecated format-specific test/load functions\n              removed support for installable file formats (stbi_loader) -- would have been broken for IO callbacks anyway\n              error cases in bmp and tga give messages and don't leak (Raymond Barbiero, grisha)\n              fix inefficiency in decoding 32-bit BMP (David Woo)\n      1.29  (2010-08-16)\n              various warning fixes from Aurelien Pocheville\n      1.28  (2010-08-01)\n              fix bug in GIF palette transparency (SpartanJ)\n      1.27  (2010-08-01)\n              cast-to-stbi_uc to fix warnings\n      1.26  (2010-07-24)\n              fix bug in file buffering for PNG reported by SpartanJ\n      1.25  (2010-07-17)\n              refix trans_data warning (Won Chun)\n      1.24  (2010-07-12)\n              perf improvements reading from files on platforms with lock-heavy fgetc()\n              minor perf improvements for jpeg\n              deprecated type-specific functions so we'll get feedback if they're needed\n              attempt to fix trans_data warning (Won Chun)\n      1.23    fixed bug in iPhone support\n      1.22  (2010-07-10)\n              removed image *writing* support\n              
stbi_info support from Jetro Lauha\n              GIF support from Jean-Marc Lienher\n              iPhone PNG-extensions from James Brown\n              warning-fixes from Nicolas Schulz and Janez Zemva (i.e. Janez Žemva)\n      1.21    fix use of 'stbi_uc' in header (reported by jon blow)\n      1.20    added support for Softimage PIC, by Tom Seddon\n      1.19    bug in interlaced PNG corruption check (found by ryg)\n      1.18  (2008-08-02)\n              fix a threading bug (local mutable static)\n      1.17    support interlaced PNG\n      1.16    major bugfix - stbi__convert_format converted one too many pixels\n      1.15    initialize some fields for thread safety\n      1.14    fix threadsafe conversion bug\n              header-file-only version (#define STBI_HEADER_FILE_ONLY before including)\n      1.13    threadsafe\n      1.12    const qualifiers in the API\n      1.11    Support installable IDCT, colorspace conversion routines\n      1.10    Fixes for 64-bit (don't use \"unsigned long\")\n              optimized upsampling by Fabian \"ryg\" Giesen\n      1.09    Fix format-conversion for PSD code (bad global variables!)\n      1.08    Thatcher Ulrich's PSD code integrated by Nicolas Schulz\n      1.07    attempt to fix C++ warning/errors again\n      1.06    attempt to fix C++ warning/errors again\n      1.05    fix TGA loading to return correct *comp and use good luminance calc\n      1.04    default float alpha is 1, not 255; use 'void *' for stbi_image_free\n      1.03    bugfixes to STBI_NO_STDIO, STBI_NO_HDR\n      1.02    support for (subset of) HDR files, float interface for preferred access to them\n      1.01    fix bug: possible bug in handling right-side up bmps... 
not sure\n              fix bug: the stbi__bmp_load() and stbi__tga_load() functions didn't work at all\n      1.00    interface to zlib that skips zlib header\n      0.99    correct handling of alpha in palette\n      0.98    TGA loader by lonesock; dynamically add loaders (untested)\n      0.97    jpeg errors on too large a file; also catch another malloc failure\n      0.96    fix detection of invalid v value - particleman@mollyrocket forum\n      0.95    during header scan, seek to markers in case of padding\n      0.94    STBI_NO_STDIO to disable stdio usage; rename all #defines the same\n      0.93    handle jpegtran output; verbose errors\n      0.92    read 4,8,16,24,32-bit BMP files of several formats\n      0.91    output 24-bit Windows 3.0 BMP files\n      0.90    fix a few more warnings; bump version number to approach 1.0\n      0.61    bugfixes due to Marc LeBlanc, Christopher Lloyd\n      0.60    fix compiling as c++\n      0.59    fix warnings: merge Dave Moore's -Wall fixes\n      0.58    fix bug: zlib uncompressed mode len/nlen was wrong endian\n      0.57    fix bug: jpg last huffman symbol before marker was >9 bits but less than 16 available\n      0.56    fix bug: zlib uncompressed mode len vs. 
nlen\n      0.55    fix bug: restart_interval not initialized to 0\n      0.54    allow NULL for 'int *comp'\n      0.53    fix bug in png 3->4; speedup png decoding\n      0.52    png handles req_comp=3,4 directly; minor cleanup; jpeg comments\n      0.51    obey req_comp requests, 1-component jpegs return as 1-component,\n              on 'test' only check type, not whether we support this variant\n      0.50  (2006-11-19)\n              first released version\n*/\n\n\n/*\n------------------------------------------------------------------------------\nThis software is available under 2 licenses -- choose whichever you prefer.\n------------------------------------------------------------------------------\nALTERNATIVE A - MIT License\nCopyright (c) 2017 Sean Barrett\nPermission is hereby granted, free of charge, to any person obtaining a copy of\nthis software and associated documentation files (the \"Software\"), to deal in\nthe Software without restriction, including without limitation the rights to\nuse, copy, modify, merge, publish, distribute, sublicense, and/or sell copies\nof the Software, and to permit persons to whom the Software is furnished to do\nso, subject to the following conditions:\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n------------------------------------------------------------------------------\nALTERNATIVE B - Public Domain (www.unlicense.org)\nThis is free and unencumbered software released into the public domain.\nAnyone is free to copy, modify, publish, use, compile, sell, or distribute this\nsoftware, either in source code form or as a compiled binary, for any purpose,\ncommercial or non-commercial, and by any means.\nIn jurisdictions that recognize copyright laws, the author or authors of this\nsoftware dedicate any and all copyright interest in the software to the public\ndomain. We make this dedication for the benefit of the public at large and to\nthe detriment of our heirs and successors. We intend this dedication to be an\novert act of relinquishment in perpetuity of all present and future rights to\nthis software under copyright law.\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN\nACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION\nWITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n------------------------------------------------------------------------------\n*/"
  },
  {
    "path": "packages/nx/vendor/stb_image/stb_image.ml",
    "content": "open Bigarray\n\ntype 'a result = ('a, [ `Msg of string ]) Result.t\ntype 'kind buffer = ('a, 'b, c_layout) Array1.t constraint 'kind = ('a, 'b) kind\n\ntype 'kind t = {\n  width : int;\n  height : int;\n  channels : int;\n  offset : int;\n  stride : int;\n  data : 'kind buffer;\n}\n\ntype float32 = (float, float32_elt) kind\ntype int8 = (int, int8_unsigned_elt) kind\n\nexternal load_unmanaged : ?channels:int -> string -> int8 t result\n  = \"ml_stbi_load\"\n\nexternal loadf_unmanaged : ?channels:int -> string -> float32 t result\n  = \"ml_stbi_loadf\"\n\nexternal decode_unmanaged : ?channels:int -> _ buffer -> int8 t result\n  = \"ml_stbi_load_mem\"\n\nexternal decodef_unmanaged : ?channels:int -> _ buffer -> float32 t result\n  = \"ml_stbi_loadf_mem\"\n\nexternal ml_stbi_image_free : _ buffer -> unit = \"ml_stbi_image_free\"\n\nlet free_unmanaged image = ml_stbi_image_free image.data\n\nlet clone buf =\n  let buf' = Array1.create (Array1.kind buf) c_layout (Array1.dim buf) in\n  Array1.blit buf buf';\n  buf'\n\nlet manage f ?channels filename =\n  match f ?channels filename with\n  | Result.Error _ as err -> err\n  | Result.Ok image ->\n      let managed = { image with data = clone image.data } in\n      free_unmanaged image;\n      Result.Ok managed\n\nlet load ?channels filename = manage load_unmanaged ?channels filename\nlet loadf ?channels filename = manage loadf_unmanaged ?channels filename\nlet decode ?channels filename = manage decode_unmanaged ?channels filename\nlet decodef ?channels filename = manage decodef_unmanaged ?channels filename\n\nlet image ~width ~height ~channels ?(offset = 0) ?(stride = width * channels)\n    data =\n  let size = Array1.dim data in\n  if width < 0 then Result.Error (`Msg \"width should be positive\")\n  else if height < 0 then Result.Error (`Msg \"height should be positive\")\n  else if channels < 1 || channels > 4 then\n    Result.Error (`Msg \"channels should be between 1 and 4\")\n  else if offset < 0 then 
Result.Error (`Msg \"offset should be positive\")\n  else if offset + (stride * height) > size then\n    Result.Error (`Msg \"image does not fit in buffer\")\n  else Result.Ok { width; height; channels; offset; stride; data }\n\nlet width t = t.width\nlet height t = t.height\nlet channels t = t.channels\nlet data t = t.data\n\nlet validate_mipmap t1 t2 =\n  if t1.channels <> t2.channels then\n    invalid_arg \"mipmap: images have different number of channels\";\n  if t1.width / 2 <> t2.width || t1.height / 2 <> t2.height then\n    invalid_arg\n      \"mipmap: second image size should exactly be half of first image\"\n\nexternal mipmap : int8 t -> int8 t -> unit = \"ml_stbi_mipmap\"\nexternal mipmapf : float32 t -> float32 t -> unit = \"ml_stbi_mipmapf\"\n\nlet mipmap t1 t2 =\n  validate_mipmap t1 t2;\n  mipmap t1 t2\n\nlet mipmapf t1 t2 =\n  validate_mipmap t1 t2;\n  mipmapf t1 t2\n\nexternal vflip : int8 t -> unit = \"ml_stbi_vflip\"\nexternal vflipf : float32 t -> unit = \"ml_stbi_vflipf\"\n\nexternal expblur : int8 t -> radius:float -> unit = \"ml_stbi_expblur\"\n(** Blur the image *)\n"
  },
  {
    "path": "packages/nx/vendor/stb_image/stb_image.mli",
    "content": "(* Stb_image for OCaml by Frédéric Bour <frederic.bour(_)lakaban.net> To the\n   extent possible under law, the person who associated CC0 with Stb_image for\n   OCaml has waived all copyright and related or neighboring rights to Stb_image\n   for OCaml.\n\n   You should have received a copy of the CC0 legalcode along with this work. If\n   not, see <http://creativecommons.org/publicdomain/zero/1.0/>.\n\n   Website: https://github.com/let-def/stb_image stb_image is a public domain\n   library by Sean Barrett, http://nothings.org/ Version 0.1, September 2015 *)\nopen Bigarray\n\ntype 'a result = ('a, [ `Msg of string ]) Result.t\n\n(*##############################*)\n(** {1 Image representation} *)\n\ntype 'kind buffer = ('a, 'b, c_layout) Array1.t constraint 'kind = ('a, 'b) kind\n(** [buffer] simply is an alias to a bigarray with c_layout. The [buffer] type\n    serves two purposes:\n    - representing input files,\n    - representing the raw pixels of an image.\n\n    Two kind of pixel buffers are manipulated:\n    - int8 for images with 8-bit channels\n    - float32 for images with floating point channels *)\n\ntype float32 = (float, float32_elt) kind\ntype int8 = (int, int8_unsigned_elt) kind\n\ntype 'kind t = private {\n  width : int;\n  height : int;\n  channels : int;\n  offset : int;\n  stride : int;\n  data : 'kind buffer;\n}\n(** A record describing an image. The buffer contains\n    [channels * width * height] items, in this order:\n    - channels are interleaved\n    - each pixel is made of [channels] items\n    - each line is made of [width] pixels\n    - image is made of [height] lines\n    - there is [stride] items between two lines.\n\n    The pixel at coordinates (x, y) and in channel c is thus stored at index\n    [y * stride + x * channels + c] in the buffer. 
*)\n\n(** {2 Creating image} *)\n\nval image :\n  width:int ->\n  height:int ->\n  channels:int ->\n  ?offset:int ->\n  ?stride:int ->\n  'kind buffer ->\n  'kind t result\n\n(** {2 Image accessors} *)\n\nval width : _ t -> int\nval height : _ t -> int\nval channels : _ t -> int\nval data : 'kind t -> 'kind buffer\n\n(** {1 Image decoding} *)\n\nval load : ?channels:int -> string -> int8 t result\n(** Load an 8-bit per channel image from a filename. If [channels] is specified,\n    it has to be between 1 and 4 and the decoded image will be processed to have\n    the requested number of channels. *)\n\nval loadf : ?channels:int -> string -> float32 t result\n(** Load a floating point channel image from a filename. See [load] for\n    [channels] parameter. *)\n\nval decode : ?channels:int -> _ buffer -> int8 t result\n(** Decode an 8-bit per channel image from a buffer. See [load] for [channels]\n    parameter. *)\n\nval decodef : ?channels:int -> _ buffer -> float32 t result\n(** Decode a floating point channel image from a buffer. See [load] for\n    [channels] parameter. *)\n\n(** {2 Low-level interface}\n\n    Functions are similar to the above one, except memory is not managed by\n    OCaml GC. It has to be released explicitly with [free_unmanaged] function.\n\n    You get slightly faster load times, more deterministic memory use and more\n    responsibility. Use at your own risk! *)\n\nval load_unmanaged : ?channels:int -> string -> int8 t result\nval loadf_unmanaged : ?channels:int -> string -> float32 t result\nval decode_unmanaged : ?channels:int -> _ buffer -> int8 t result\nval decodef_unmanaged : ?channels:int -> _ buffer -> float32 t result\nval free_unmanaged : _ t -> unit\n\n(** {2 Image filtering} *)\n\nval mipmap : int8 t -> int8 t -> unit\n(** Generate one level of mipmap: downsample image half in each dimension. 
In\n    [mipmap imgin imgout]:\n    - imgout.channels must be imgin.channels\n    - imgout.width must be imgin.width / 2\n    - imgout.height must be imgin.height / 2\n    - imgout.data will be filled with downsampled imgin.data *)\n\nval mipmapf : float32 t -> float32 t -> unit\n(** Downsample floating point images. See [mipmap]. *)\n\nval vflip : int8 t -> unit\n(** Flip the image vertically *)\n\nval vflipf : float32 t -> unit\n(** Flip the image vertically *)\n\nval expblur : int8 t -> radius:float -> unit\n(** Blur the image *)\n"
  },
  {
    "path": "packages/nx/vendor/stb_image_write/dune",
    "content": "(library\n (name stb_image_write)\n (public_name nx.io.stb_image_write)\n (libraries bigarray)\n (foreign_stubs\n  (language c)\n  (names ml_stb_image_write)))\n"
  },
  {
    "path": "packages/nx/vendor/stb_image_write/ml_stb_image_write.c",
    "content": "#include <assert.h>\n#include <stdio.h>\n#include <caml/mlvalues.h>\n#include <caml/memory.h>\n#include <caml/alloc.h>\n#include <caml/bigarray.h>\n\n#define STB_IMAGE_WRITE_IMPLEMENTATION\n#include \"stb_image_write.h\"\n\nstatic int validate_dim(value ba, value w, value h, value comp, int byte)\n{\n  size_t sz = caml_ba_byte_size(Caml_ba_array_val(ba));\n  size_t expected = Int_val(w) * Int_val(h) * Int_val(comp) * byte;\n  return (expected <= sz);\n}\n\nCAMLprim value ml_stbi_write_png(value filename, value w, value h, value comp, value ba)\n{\n  CAMLparam5(filename, w, h, comp, ba);\n  int result;\n\n  if (validate_dim(ba, w, h, comp, 1))\n    result = stbi_write_png(String_val(filename), Int_val(w), Int_val(h),\n        Int_val(comp), Caml_ba_data_val(ba), 0);\n  else\n    result = 0;\n\n  CAMLreturn(Val_int(result));\n}\n\nCAMLprim value ml_stbi_write_bmp(value filename, value w, value h, value comp, value ba)\n{\n  CAMLparam5(filename, w, h, comp, ba);\n  int result;\n\n  if (validate_dim(ba, w, h, comp, 1))\n    result = stbi_write_bmp(String_val(filename), Int_val(w), Int_val(h),\n        Int_val(comp), Caml_ba_data_val(ba));\n  else\n    result = 0;\n\n  CAMLreturn(Val_int(result));\n}\n\nCAMLprim value ml_stbi_write_tga(value filename, value w, value h, value comp, value ba)\n{\n  CAMLparam5(filename, w, h, comp, ba);\n  int result;\n\n  if (validate_dim(ba, w, h, comp, 1))\n    result = stbi_write_tga(String_val(filename), Int_val(w), Int_val(h),\n        Int_val(comp), Caml_ba_data_val(ba));\n  else\n    result = 0;\n\n  CAMLreturn(Val_int(result));\n}\n\nCAMLprim value ml_stbi_write_hdr(value filename, value w, value h, value comp, value ba)\n{\n  CAMLparam5(filename, w, h, comp, ba);\n  int result;\n\n  if (validate_dim(ba, w, h, comp, 4))\n    result = stbi_write_hdr(String_val(filename), Int_val(w), Int_val(h),\n        Int_val(comp), Caml_ba_data_val(ba));\n  else\n    result = 0;\n\n  CAMLreturn(Val_int(result));\n}\n\nCAMLprim 
value ml_stbi_write_jpg_native(value filename, value w, value h, value comp, value q, value ba)\n{\n  CAMLparam5(filename, w, h, comp, q);\n  CAMLxparam1(ba);\n  int result;\n\n  if (validate_dim(ba, w, h, comp, 1))\n    result = stbi_write_jpg(String_val(filename), Int_val(w), Int_val(h),\n        Int_val(comp), Caml_ba_data_val(ba), Int_val(q));\n  else\n    result = 0;\n\n  CAMLreturn(Val_int(result));\n}\n\nCAMLprim value ml_stbi_write_jpg_bytecode(value * argv, int nargs)\n{\n    return ml_stbi_write_jpg_native(argv[0], argv[1], argv[2], argv[3], argv[4], argv[5]);\n}"
  },
  {
    "path": "packages/nx/vendor/stb_image_write/stb_image_write.h",
    "content": "/* stb_image_write - v1.16 - public domain - http://nothings.org/stb\n   writes out PNG/BMP/TGA/JPEG/HDR images to C stdio - Sean Barrett 2010-2015\n                                     no warranty implied; use at your own risk\n\n   Before #including,\n\n       #define STB_IMAGE_WRITE_IMPLEMENTATION\n\n   in the file that you want to have the implementation.\n\n   Will probably not work correctly with strict-aliasing optimizations.\n\nABOUT:\n\n   This header file is a library for writing images to C stdio or a callback.\n\n   The PNG output is not optimal; it is 20-50% larger than the file\n   written by a decent optimizing implementation; though providing a custom\n   zlib compress function (see STBIW_ZLIB_COMPRESS) can mitigate that.\n   This library is designed for source code compactness and simplicity,\n   not optimal image file size or run-time performance.\n\nBUILDING:\n\n   You can #define STBIW_ASSERT(x) before the #include to avoid using assert.h.\n   You can #define STBIW_MALLOC(), STBIW_REALLOC(), and STBIW_FREE() to replace\n   malloc,realloc,free.\n   You can #define STBIW_MEMMOVE() to replace memmove()\n   You can #define STBIW_ZLIB_COMPRESS to use a custom zlib-style compress function\n   for PNG compression (instead of the builtin one), it must have the following signature:\n   unsigned char * my_compress(unsigned char *data, int data_len, int *out_len, int quality);\n   The returned data will be freed with STBIW_FREE() (free() by default),\n   so it must be heap allocated with STBIW_MALLOC() (malloc() by default),\n\nUNICODE:\n\n   If compiling for Windows and you wish to use Unicode filenames, compile\n   with\n       #define STBIW_WINDOWS_UTF8\n   and pass utf8-encoded filenames. 
Call stbiw_convert_wchar_to_utf8 to convert\n   Windows wchar_t filenames to utf8.\n\nUSAGE:\n\n   There are five functions, one for each image file format:\n\n     int stbi_write_png(char const *filename, int w, int h, int comp, const void *data, int stride_in_bytes);\n     int stbi_write_bmp(char const *filename, int w, int h, int comp, const void *data);\n     int stbi_write_tga(char const *filename, int w, int h, int comp, const void *data);\n     int stbi_write_jpg(char const *filename, int w, int h, int comp, const void *data, int quality);\n     int stbi_write_hdr(char const *filename, int w, int h, int comp, const float *data);\n\n     void stbi_flip_vertically_on_write(int flag); // flag is non-zero to flip data vertically\n\n   There are also five equivalent functions that use an arbitrary write function. You are\n   expected to open/close your file-equivalent before and after calling these:\n\n     int stbi_write_png_to_func(stbi_write_func *func, void *context, int w, int h, int comp, const void  *data, int stride_in_bytes);\n     int stbi_write_bmp_to_func(stbi_write_func *func, void *context, int w, int h, int comp, const void  *data);\n     int stbi_write_tga_to_func(stbi_write_func *func, void *context, int w, int h, int comp, const void  *data);\n     int stbi_write_hdr_to_func(stbi_write_func *func, void *context, int w, int h, int comp, const float *data);\n     int stbi_write_jpg_to_func(stbi_write_func *func, void *context, int x, int y, int comp, const void *data, int quality);\n\n   where the callback is:\n      void stbi_write_func(void *context, void *data, int size);\n\n   You can configure it with these global variables:\n      int stbi_write_tga_with_rle;             // defaults to true; set to 0 to disable RLE\n      int stbi_write_png_compression_level;    // defaults to 8; set to higher for more compression\n      int stbi_write_force_png_filter;         // defaults to -1; set to 0..5 to force a filter mode\n\n\n   You can define 
STBI_WRITE_NO_STDIO to disable the file variant of these\n   functions, so the library will not use stdio.h at all. However, this will\n   also disable HDR writing, because it requires stdio for formatted output.\n\n   Each function returns 0 on failure and non-0 on success.\n\n   The functions create an image file defined by the parameters. The image\n   is a rectangle of pixels stored from left-to-right, top-to-bottom.\n   Each pixel contains 'comp' channels of data stored interleaved with 8-bits\n   per channel, in the following order: 1=Y, 2=YA, 3=RGB, 4=RGBA. (Y is\n   monochrome color.) The rectangle is 'w' pixels wide and 'h' pixels tall.\n   The *data pointer points to the first byte of the top-left-most pixel.\n   For PNG, \"stride_in_bytes\" is the distance in bytes from the first byte of\n   a row of pixels to the first byte of the next row of pixels.\n\n   PNG creates output files with the same number of components as the input.\n   The BMP format expands Y to RGB in the file format and does not\n   output alpha.\n\n   PNG supports writing rectangles of data even when the bytes storing rows of\n   data are not consecutive in memory (e.g. sub-rectangles of a larger image),\n   by supplying the stride between the beginning of adjacent rows. The other\n   formats do not. (Thus you cannot write a native-format BMP through the BMP\n   writer, both because it is in BGR order and because it may have padding\n   at the end of the line.)\n\n   PNG allows you to set the deflate compression level by setting the global\n   variable 'stbi_write_png_compression_level' (it defaults to 8).\n\n   HDR expects linear float data. Since the format is always 32-bit rgb(e)\n   data, alpha (if provided) is discarded, and for monochrome data it is\n   replicated across all three channels.\n\n   TGA supports RLE or non-RLE compressed data. 
To use non-RLE-compressed\n   data, set the global variable 'stbi_write_tga_with_rle' to 0.\n\n   JPEG does ignore alpha channels in input data; quality is between 1 and 100.\n   Higher quality looks better but results in a bigger image.\n   JPEG baseline (no JPEG progressive).\n\nCREDITS:\n\n\n   Sean Barrett           -    PNG/BMP/TGA\n   Baldur Karlsson        -    HDR\n   Jean-Sebastien Guay    -    TGA monochrome\n   Tim Kelsey             -    misc enhancements\n   Alan Hickman           -    TGA RLE\n   Emmanuel Julien        -    initial file IO callback implementation\n   Jon Olick              -    original jo_jpeg.cpp code\n   Daniel Gibson          -    integrate JPEG, allow external zlib\n   Aarni Koskela          -    allow choosing PNG filter\n\n   bugfixes:\n      github:Chribba\n      Guillaume Chereau\n      github:jry2\n      github:romigrou\n      Sergio Gonzalez\n      Jonas Karlsson\n      Filip Wasil\n      Thatcher Ulrich\n      github:poppolopoppo\n      Patrick Boettcher\n      github:xeekworx\n      Cap Petschulat\n      Simon Rodriguez\n      Ivan Tikhonov\n      github:ignotion\n      Adam Schackart\n      Andrew Kensler\n\nLICENSE\n\n  See end of file for license information.\n\n*/\n\n#ifndef INCLUDE_STB_IMAGE_WRITE_H\n#define INCLUDE_STB_IMAGE_WRITE_H\n\n#include <stdlib.h>\n\n// if STB_IMAGE_WRITE_STATIC causes problems, try defining STBIWDEF to 'inline' or 'static inline'\n#ifndef STBIWDEF\n#ifdef STB_IMAGE_WRITE_STATIC\n#define STBIWDEF  static\n#else\n#ifdef __cplusplus\n#define STBIWDEF  extern \"C\"\n#else\n#define STBIWDEF  extern\n#endif\n#endif\n#endif\n\n#ifndef STB_IMAGE_WRITE_STATIC  // C++ forbids static forward declarations\nSTBIWDEF int stbi_write_tga_with_rle;\nSTBIWDEF int stbi_write_png_compression_level;\nSTBIWDEF int stbi_write_force_png_filter;\n#endif\n\n#ifndef STBI_WRITE_NO_STDIO\nSTBIWDEF int stbi_write_png(char const *filename, int w, int h, int comp, const void  *data, int stride_in_bytes);\nSTBIWDEF int 
stbi_write_bmp(char const *filename, int w, int h, int comp, const void  *data);\nSTBIWDEF int stbi_write_tga(char const *filename, int w, int h, int comp, const void  *data);\nSTBIWDEF int stbi_write_hdr(char const *filename, int w, int h, int comp, const float *data);\nSTBIWDEF int stbi_write_jpg(char const *filename, int x, int y, int comp, const void  *data, int quality);\n\n#ifdef STBIW_WINDOWS_UTF8\nSTBIWDEF int stbiw_convert_wchar_to_utf8(char *buffer, size_t bufferlen, const wchar_t* input);\n#endif\n#endif\n\ntypedef void stbi_write_func(void *context, void *data, int size);\n\nSTBIWDEF int stbi_write_png_to_func(stbi_write_func *func, void *context, int w, int h, int comp, const void  *data, int stride_in_bytes);\nSTBIWDEF int stbi_write_bmp_to_func(stbi_write_func *func, void *context, int w, int h, int comp, const void  *data);\nSTBIWDEF int stbi_write_tga_to_func(stbi_write_func *func, void *context, int w, int h, int comp, const void  *data);\nSTBIWDEF int stbi_write_hdr_to_func(stbi_write_func *func, void *context, int w, int h, int comp, const float *data);\nSTBIWDEF int stbi_write_jpg_to_func(stbi_write_func *func, void *context, int x, int y, int comp, const void  *data, int quality);\n\nSTBIWDEF void stbi_flip_vertically_on_write(int flip_boolean);\n\n#endif//INCLUDE_STB_IMAGE_WRITE_H\n\n#ifdef STB_IMAGE_WRITE_IMPLEMENTATION\n\n#ifdef _WIN32\n   #ifndef _CRT_SECURE_NO_WARNINGS\n   #define _CRT_SECURE_NO_WARNINGS\n   #endif\n   #ifndef _CRT_NONSTDC_NO_DEPRECATE\n   #define _CRT_NONSTDC_NO_DEPRECATE\n   #endif\n#endif\n\n#ifndef STBI_WRITE_NO_STDIO\n#include <stdio.h>\n#endif // STBI_WRITE_NO_STDIO\n\n#include <stdarg.h>\n#include <stdlib.h>\n#include <string.h>\n#include <math.h>\n\n#if defined(STBIW_MALLOC) && defined(STBIW_FREE) && (defined(STBIW_REALLOC) || defined(STBIW_REALLOC_SIZED))\n// ok\n#elif !defined(STBIW_MALLOC) && !defined(STBIW_FREE) && !defined(STBIW_REALLOC) && !defined(STBIW_REALLOC_SIZED)\n// ok\n#else\n#error \"Must define all 
or none of STBIW_MALLOC, STBIW_FREE, and STBIW_REALLOC (or STBIW_REALLOC_SIZED).\"\n#endif\n\n#ifndef STBIW_MALLOC\n#define STBIW_MALLOC(sz)        malloc(sz)\n#define STBIW_REALLOC(p,newsz)  realloc(p,newsz)\n#define STBIW_FREE(p)           free(p)\n#endif\n\n#ifndef STBIW_REALLOC_SIZED\n#define STBIW_REALLOC_SIZED(p,oldsz,newsz) STBIW_REALLOC(p,newsz)\n#endif\n\n\n#ifndef STBIW_MEMMOVE\n#define STBIW_MEMMOVE(a,b,sz) memmove(a,b,sz)\n#endif\n\n\n#ifndef STBIW_ASSERT\n#include <assert.h>\n#define STBIW_ASSERT(x) assert(x)\n#endif\n\n#define STBIW_UCHAR(x) (unsigned char) ((x) & 0xff)\n\n#ifdef STB_IMAGE_WRITE_STATIC\nstatic int stbi_write_png_compression_level = 8;\nstatic int stbi_write_tga_with_rle = 1;\nstatic int stbi_write_force_png_filter = -1;\n#else\nint stbi_write_png_compression_level = 8;\nint stbi_write_tga_with_rle = 1;\nint stbi_write_force_png_filter = -1;\n#endif\n\nstatic int stbi__flip_vertically_on_write = 0;\n\nSTBIWDEF void stbi_flip_vertically_on_write(int flag)\n{\n   stbi__flip_vertically_on_write = flag;\n}\n\ntypedef struct\n{\n   stbi_write_func *func;\n   void *context;\n   unsigned char buffer[64];\n   int buf_used;\n} stbi__write_context;\n\n// initialize a callback-based context\nstatic void stbi__start_write_callbacks(stbi__write_context *s, stbi_write_func *c, void *context)\n{\n   s->func    = c;\n   s->context = context;\n}\n\n#ifndef STBI_WRITE_NO_STDIO\n\nstatic void stbi__stdio_write(void *context, void *data, int size)\n{\n   fwrite(data,1,size,(FILE*) context);\n}\n\n#if defined(_WIN32) && defined(STBIW_WINDOWS_UTF8)\n#ifdef __cplusplus\n#define STBIW_EXTERN extern \"C\"\n#else\n#define STBIW_EXTERN extern\n#endif\nSTBIW_EXTERN __declspec(dllimport) int __stdcall MultiByteToWideChar(unsigned int cp, unsigned long flags, const char *str, int cbmb, wchar_t *widestr, int cchwide);\nSTBIW_EXTERN __declspec(dllimport) int __stdcall WideCharToMultiByte(unsigned int cp, unsigned long flags, const wchar_t *widestr, int cchwide, char 
*str, int cbmb, const char *defchar, int *used_default);\n\nSTBIWDEF int stbiw_convert_wchar_to_utf8(char *buffer, size_t bufferlen, const wchar_t* input)\n{\n   return WideCharToMultiByte(65001 /* UTF8 */, 0, input, -1, buffer, (int) bufferlen, NULL, NULL);\n}\n#endif\n\nstatic FILE *stbiw__fopen(char const *filename, char const *mode)\n{\n   FILE *f;\n#if defined(_WIN32) && defined(STBIW_WINDOWS_UTF8)\n   wchar_t wMode[64];\n   wchar_t wFilename[1024];\n   if (0 == MultiByteToWideChar(65001 /* UTF8 */, 0, filename, -1, wFilename, sizeof(wFilename)/sizeof(*wFilename)))\n      return 0;\n\n   if (0 == MultiByteToWideChar(65001 /* UTF8 */, 0, mode, -1, wMode, sizeof(wMode)/sizeof(*wMode)))\n      return 0;\n\n#if defined(_MSC_VER) && _MSC_VER >= 1400\n   if (0 != _wfopen_s(&f, wFilename, wMode))\n      f = 0;\n#else\n   f = _wfopen(wFilename, wMode);\n#endif\n\n#elif defined(_MSC_VER) && _MSC_VER >= 1400\n   if (0 != fopen_s(&f, filename, mode))\n      f=0;\n#else\n   f = fopen(filename, mode);\n#endif\n   return f;\n}\n\nstatic int stbi__start_write_file(stbi__write_context *s, const char *filename)\n{\n   FILE *f = stbiw__fopen(filename, \"wb\");\n   stbi__start_write_callbacks(s, stbi__stdio_write, (void *) f);\n   return f != NULL;\n}\n\nstatic void stbi__end_write_file(stbi__write_context *s)\n{\n   fclose((FILE *)s->context);\n}\n\n#endif // !STBI_WRITE_NO_STDIO\n\ntypedef unsigned int stbiw_uint32;\ntypedef int stb_image_write_test[sizeof(stbiw_uint32)==4 ? 
1 : -1];\n\nstatic void stbiw__writefv(stbi__write_context *s, const char *fmt, va_list v)\n{\n   while (*fmt) {\n      switch (*fmt++) {\n         case ' ': break;\n         case '1': { unsigned char x = STBIW_UCHAR(va_arg(v, int));\n                     s->func(s->context,&x,1);\n                     break; }\n         case '2': { int x = va_arg(v,int);\n                     unsigned char b[2];\n                     b[0] = STBIW_UCHAR(x);\n                     b[1] = STBIW_UCHAR(x>>8);\n                     s->func(s->context,b,2);\n                     break; }\n         case '4': { stbiw_uint32 x = va_arg(v,int);\n                     unsigned char b[4];\n                     b[0]=STBIW_UCHAR(x);\n                     b[1]=STBIW_UCHAR(x>>8);\n                     b[2]=STBIW_UCHAR(x>>16);\n                     b[3]=STBIW_UCHAR(x>>24);\n                     s->func(s->context,b,4);\n                     break; }\n         default:\n            STBIW_ASSERT(0);\n            return;\n      }\n   }\n}\n\nstatic void stbiw__writef(stbi__write_context *s, const char *fmt, ...)\n{\n   va_list v;\n   va_start(v, fmt);\n   stbiw__writefv(s, fmt, v);\n   va_end(v);\n}\n\nstatic void stbiw__write_flush(stbi__write_context *s)\n{\n   if (s->buf_used) {\n      s->func(s->context, &s->buffer, s->buf_used);\n      s->buf_used = 0;\n   }\n}\n\nstatic void stbiw__putc(stbi__write_context *s, unsigned char c)\n{\n   s->func(s->context, &c, 1);\n}\n\nstatic void stbiw__write1(stbi__write_context *s, unsigned char a)\n{\n   if ((size_t)s->buf_used + 1 > sizeof(s->buffer))\n      stbiw__write_flush(s);\n   s->buffer[s->buf_used++] = a;\n}\n\nstatic void stbiw__write3(stbi__write_context *s, unsigned char a, unsigned char b, unsigned char c)\n{\n   int n;\n   if ((size_t)s->buf_used + 3 > sizeof(s->buffer))\n      stbiw__write_flush(s);\n   n = s->buf_used;\n   s->buf_used = n+3;\n   s->buffer[n+0] = a;\n   s->buffer[n+1] = b;\n   s->buffer[n+2] = c;\n}\n\nstatic void 
stbiw__write_pixel(stbi__write_context *s, int rgb_dir, int comp, int write_alpha, int expand_mono, unsigned char *d)\n{\n   unsigned char bg[3] = { 255, 0, 255}, px[3];\n   int k;\n\n   if (write_alpha < 0)\n      stbiw__write1(s, d[comp - 1]);\n\n   switch (comp) {\n      case 2: // 2 pixels = mono + alpha, alpha is written separately, so same as 1-channel case\n      case 1:\n         if (expand_mono)\n            stbiw__write3(s, d[0], d[0], d[0]); // monochrome bmp\n         else\n            stbiw__write1(s, d[0]);  // monochrome TGA\n         break;\n      case 4:\n         if (!write_alpha) {\n            // composite against pink background\n            for (k = 0; k < 3; ++k)\n               px[k] = bg[k] + ((d[k] - bg[k]) * d[3]) / 255;\n            stbiw__write3(s, px[1 - rgb_dir], px[1], px[1 + rgb_dir]);\n            break;\n         }\n         /* FALLTHROUGH */\n      case 3:\n         stbiw__write3(s, d[1 - rgb_dir], d[1], d[1 + rgb_dir]);\n         break;\n   }\n   if (write_alpha > 0)\n      stbiw__write1(s, d[comp - 1]);\n}\n\nstatic void stbiw__write_pixels(stbi__write_context *s, int rgb_dir, int vdir, int x, int y, int comp, void *data, int write_alpha, int scanline_pad, int expand_mono)\n{\n   stbiw_uint32 zero = 0;\n   int i,j, j_end;\n\n   if (y <= 0)\n      return;\n\n   if (stbi__flip_vertically_on_write)\n      vdir *= -1;\n\n   if (vdir < 0) {\n      j_end = -1; j = y-1;\n   } else {\n      j_end =  y; j = 0;\n   }\n\n   for (; j != j_end; j += vdir) {\n      for (i=0; i < x; ++i) {\n         unsigned char *d = (unsigned char *) data + (j*x+i)*comp;\n         stbiw__write_pixel(s, rgb_dir, comp, write_alpha, expand_mono, d);\n      }\n      stbiw__write_flush(s);\n      s->func(s->context, &zero, scanline_pad);\n   }\n}\n\nstatic int stbiw__outfile(stbi__write_context *s, int rgb_dir, int vdir, int x, int y, int comp, int expand_mono, void *data, int alpha, int pad, const char *fmt, ...)\n{\n   if (y < 0 || x < 0) {\n      return 0;\n  
 } else {\n      va_list v;\n      va_start(v, fmt);\n      stbiw__writefv(s, fmt, v);\n      va_end(v);\n      stbiw__write_pixels(s,rgb_dir,vdir,x,y,comp,data,alpha,pad, expand_mono);\n      return 1;\n   }\n}\n\nstatic int stbi_write_bmp_core(stbi__write_context *s, int x, int y, int comp, const void *data)\n{\n   if (comp != 4) {\n      // write RGB bitmap\n      int pad = (-x*3) & 3;\n      return stbiw__outfile(s,-1,-1,x,y,comp,1,(void *) data,0,pad,\n              \"11 4 22 4\" \"4 44 22 444444\",\n              'B', 'M', 14+40+(x*3+pad)*y, 0,0, 14+40,  // file header\n               40, x,y, 1,24, 0,0,0,0,0,0);             // bitmap header\n   } else {\n      // RGBA bitmaps need a v4 header\n      // use BI_BITFIELDS mode with 32bpp and alpha mask\n      // (straight BI_RGB with alpha mask doesn't work in most readers)\n      return stbiw__outfile(s,-1,-1,x,y,comp,1,(void *)data,1,0,\n         \"11 4 22 4\" \"4 44 22 444444 4444 4 444 444 444 444\",\n         'B', 'M', 14+108+x*y*4, 0, 0, 14+108, // file header\n         108, x,y, 1,32, 3,0,0,0,0,0, 0xff0000,0xff00,0xff,0xff000000u, 0, 0,0,0, 0,0,0, 0,0,0, 0,0,0); // bitmap V4 header\n   }\n}\n\nSTBIWDEF int stbi_write_bmp_to_func(stbi_write_func *func, void *context, int x, int y, int comp, const void *data)\n{\n   stbi__write_context s = { 0 };\n   stbi__start_write_callbacks(&s, func, context);\n   return stbi_write_bmp_core(&s, x, y, comp, data);\n}\n\n#ifndef STBI_WRITE_NO_STDIO\nSTBIWDEF int stbi_write_bmp(char const *filename, int x, int y, int comp, const void *data)\n{\n   stbi__write_context s = { 0 };\n   if (stbi__start_write_file(&s,filename)) {\n      int r = stbi_write_bmp_core(&s, x, y, comp, data);\n      stbi__end_write_file(&s);\n      return r;\n   } else\n      return 0;\n}\n#endif //!STBI_WRITE_NO_STDIO\n\nstatic int stbi_write_tga_core(stbi__write_context *s, int x, int y, int comp, void *data)\n{\n   int has_alpha = (comp == 2 || comp == 4);\n   int colorbytes = has_alpha ? 
comp-1 : comp;\n   int format = colorbytes < 2 ? 3 : 2; // 3 color channels (RGB/RGBA) = 2, 1 color channel (Y/YA) = 3\n\n   if (y < 0 || x < 0)\n      return 0;\n\n   if (!stbi_write_tga_with_rle) {\n      return stbiw__outfile(s, -1, -1, x, y, comp, 0, (void *) data, has_alpha, 0,\n         \"111 221 2222 11\", 0, 0, format, 0, 0, 0, 0, 0, x, y, (colorbytes + has_alpha) * 8, has_alpha * 8);\n   } else {\n      int i,j,k;\n      int jend, jdir;\n\n      stbiw__writef(s, \"111 221 2222 11\", 0,0,format+8, 0,0,0, 0,0,x,y, (colorbytes + has_alpha) * 8, has_alpha * 8);\n\n      if (stbi__flip_vertically_on_write) {\n         j = 0;\n         jend = y;\n         jdir = 1;\n      } else {\n         j = y-1;\n         jend = -1;\n         jdir = -1;\n      }\n      for (; j != jend; j += jdir) {\n         unsigned char *row = (unsigned char *) data + j * x * comp;\n         int len;\n\n         for (i = 0; i < x; i += len) {\n            unsigned char *begin = row + i * comp;\n            int diff = 1;\n            len = 1;\n\n            if (i < x - 1) {\n               ++len;\n               diff = memcmp(begin, row + (i + 1) * comp, comp);\n               if (diff) {\n                  const unsigned char *prev = begin;\n                  for (k = i + 2; k < x && len < 128; ++k) {\n                     if (memcmp(prev, row + k * comp, comp)) {\n                        prev += comp;\n                        ++len;\n                     } else {\n                        --len;\n                        break;\n                     }\n                  }\n               } else {\n                  for (k = i + 2; k < x && len < 128; ++k) {\n                     if (!memcmp(begin, row + k * comp, comp)) {\n                        ++len;\n                     } else {\n                        break;\n                     }\n                  }\n               }\n            }\n\n            if (diff) {\n               unsigned char header = STBIW_UCHAR(len - 1);\n           
    stbiw__write1(s, header);\n               for (k = 0; k < len; ++k) {\n                  stbiw__write_pixel(s, -1, comp, has_alpha, 0, begin + k * comp);\n               }\n            } else {\n               unsigned char header = STBIW_UCHAR(len - 129);\n               stbiw__write1(s, header);\n               stbiw__write_pixel(s, -1, comp, has_alpha, 0, begin);\n            }\n         }\n      }\n      stbiw__write_flush(s);\n   }\n   return 1;\n}\n\nSTBIWDEF int stbi_write_tga_to_func(stbi_write_func *func, void *context, int x, int y, int comp, const void *data)\n{\n   stbi__write_context s = { 0 };\n   stbi__start_write_callbacks(&s, func, context);\n   return stbi_write_tga_core(&s, x, y, comp, (void *) data);\n}\n\n#ifndef STBI_WRITE_NO_STDIO\nSTBIWDEF int stbi_write_tga(char const *filename, int x, int y, int comp, const void *data)\n{\n   stbi__write_context s = { 0 };\n   if (stbi__start_write_file(&s,filename)) {\n      int r = stbi_write_tga_core(&s, x, y, comp, (void *) data);\n      stbi__end_write_file(&s);\n      return r;\n   } else\n      return 0;\n}\n#endif\n\n// *************************************************************************************************\n// Radiance RGBE HDR writer\n// by Baldur Karlsson\n\n#define stbiw__max(a, b)  ((a) > (b) ? 
(a) : (b))\n\n#ifndef STBI_WRITE_NO_STDIO\n\nstatic void stbiw__linear_to_rgbe(unsigned char *rgbe, float *linear)\n{\n   int exponent;\n   float maxcomp = stbiw__max(linear[0], stbiw__max(linear[1], linear[2]));\n\n   if (maxcomp < 1e-32f) {\n      rgbe[0] = rgbe[1] = rgbe[2] = rgbe[3] = 0;\n   } else {\n      float normalize = (float) frexp(maxcomp, &exponent) * 256.0f/maxcomp;\n\n      rgbe[0] = (unsigned char)(linear[0] * normalize);\n      rgbe[1] = (unsigned char)(linear[1] * normalize);\n      rgbe[2] = (unsigned char)(linear[2] * normalize);\n      rgbe[3] = (unsigned char)(exponent + 128);\n   }\n}\n\nstatic void stbiw__write_run_data(stbi__write_context *s, int length, unsigned char databyte)\n{\n   unsigned char lengthbyte = STBIW_UCHAR(length+128);\n   STBIW_ASSERT(length+128 <= 255);\n   s->func(s->context, &lengthbyte, 1);\n   s->func(s->context, &databyte, 1);\n}\n\nstatic void stbiw__write_dump_data(stbi__write_context *s, int length, unsigned char *data)\n{\n   unsigned char lengthbyte = STBIW_UCHAR(length);\n   STBIW_ASSERT(length <= 128); // inconsistent with spec but consistent with official code\n   s->func(s->context, &lengthbyte, 1);\n   s->func(s->context, data, length);\n}\n\nstatic void stbiw__write_hdr_scanline(stbi__write_context *s, int width, int ncomp, unsigned char *scratch, float *scanline)\n{\n   unsigned char scanlineheader[4] = { 2, 2, 0, 0 };\n   unsigned char rgbe[4];\n   float linear[3];\n   int x;\n\n   scanlineheader[2] = (width&0xff00)>>8;\n   scanlineheader[3] = (width&0x00ff);\n\n   /* skip RLE for images too small or large */\n   if (width < 8 || width >= 32768) {\n      for (x=0; x < width; x++) {\n         switch (ncomp) {\n            case 4: /* fallthrough */\n            case 3: linear[2] = scanline[x*ncomp + 2];\n                    linear[1] = scanline[x*ncomp + 1];\n                    linear[0] = scanline[x*ncomp + 0];\n                    break;\n            default:\n                    linear[0] = linear[1] = 
linear[2] = scanline[x*ncomp + 0];
                    break;
         }
         stbiw__linear_to_rgbe(rgbe, linear);
         s->func(s->context, rgbe, 4);
      }
   } else {
      int c,r;
      /* encode into scratch buffer */
      for (x=0; x < width; x++) {
         switch(ncomp) {
            case 4: /* fallthrough */
            case 3: linear[2] = scanline[x*ncomp + 2];
                    linear[1] = scanline[x*ncomp + 1];
                    linear[0] = scanline[x*ncomp + 0];
                    break;
            default:
                    linear[0] = linear[1] = linear[2] = scanline[x*ncomp + 0];
                    break;
         }
         stbiw__linear_to_rgbe(rgbe, linear);
         scratch[x + width*0] = rgbe[0];
         scratch[x + width*1] = rgbe[1];
         scratch[x + width*2] = rgbe[2];
         scratch[x + width*3] = rgbe[3];
      }

      s->func(s->context, scanlineheader, 4);

      /* RLE each component separately */
      for (c=0; c < 4; c++) {
         unsigned char *comp = &scratch[width*c];

         x = 0;
         while (x < width) {
            // find first run
            r = x;
            while (r+2 < width) {
               if (comp[r] == comp[r+1] && comp[r] == comp[r+2])
                  break;
               ++r;
            }
            if (r+2 >= width)
               r = width;
            // dump up to first run
            while (x < r) {
               int len = r-x;
               if (len > 128) len = 128;
               stbiw__write_dump_data(s, len, &comp[x]);
               x += len;
            }
            // if there's a run, output it
            if (r+2 < width) { // same test as what we break out of in search loop, so only true if we broke out early
               // find next byte after run
               while (r < width && comp[r] == comp[x])
                  ++r;
               // output run up to r
               while (x < r) {
            
      int len = r-x;
                  if (len > 127) len = 127;
                  stbiw__write_run_data(s, len, comp[x]);
                  x += len;
               }
            }
         }
      }
   }
}

static int stbi_write_hdr_core(stbi__write_context *s, int x, int y, int comp, float *data)
{
   if (y <= 0 || x <= 0 || data == NULL)
      return 0;
   else {
      // Each component is stored separately. Allocate scratch space for full output scanline.
      unsigned char *scratch = (unsigned char *) STBIW_MALLOC(x*4);
      int i, len;
      char buffer[128];
      char header[] = "#?RADIANCE\n# Written by stb_image_write.h\nFORMAT=32-bit_rle_rgbe\n";
      if (scratch == NULL)
         return 0;
      s->func(s->context, header, sizeof(header)-1);

#ifdef __STDC_LIB_EXT1__
      len = sprintf_s(buffer, sizeof(buffer), "EXPOSURE=          1.0000000000000\n\n-Y %d +X %d\n", y, x);
#else
      len = sprintf(buffer, "EXPOSURE=          1.0000000000000\n\n-Y %d +X %d\n", y, x);
#endif
      s->func(s->context, buffer, len);

      for(i=0; i < y; i++)
         stbiw__write_hdr_scanline(s, x, comp, scratch, data + comp*x*(stbi__flip_vertically_on_write ? 
y-1-i : i));\n      STBIW_FREE(scratch);\n      return 1;\n   }\n}\n\nSTBIWDEF int stbi_write_hdr_to_func(stbi_write_func *func, void *context, int x, int y, int comp, const float *data)\n{\n   stbi__write_context s = { 0 };\n   stbi__start_write_callbacks(&s, func, context);\n   return stbi_write_hdr_core(&s, x, y, comp, (float *) data);\n}\n\nSTBIWDEF int stbi_write_hdr(char const *filename, int x, int y, int comp, const float *data)\n{\n   stbi__write_context s = { 0 };\n   if (stbi__start_write_file(&s,filename)) {\n      int r = stbi_write_hdr_core(&s, x, y, comp, (float *) data);\n      stbi__end_write_file(&s);\n      return r;\n   } else\n      return 0;\n}\n#endif // STBI_WRITE_NO_STDIO\n\n\n//////////////////////////////////////////////////////////////////////////////\n//\n// PNG writer\n//\n\n#ifndef STBIW_ZLIB_COMPRESS\n// stretchy buffer; stbiw__sbpush() == vector<>::push_back() -- stbiw__sbcount() == vector<>::size()\n#define stbiw__sbraw(a) ((int *) (void *) (a) - 2)\n#define stbiw__sbm(a)   stbiw__sbraw(a)[0]\n#define stbiw__sbn(a)   stbiw__sbraw(a)[1]\n\n#define stbiw__sbneedgrow(a,n)  ((a)==0 || stbiw__sbn(a)+n >= stbiw__sbm(a))\n#define stbiw__sbmaybegrow(a,n) (stbiw__sbneedgrow(a,(n)) ? stbiw__sbgrow(a,n) : 0)\n#define stbiw__sbgrow(a,n)  stbiw__sbgrowf((void **) &(a), (n), sizeof(*(a)))\n\n#define stbiw__sbpush(a, v)      (stbiw__sbmaybegrow(a,1), (a)[stbiw__sbn(a)++] = (v))\n#define stbiw__sbcount(a)        ((a) ? stbiw__sbn(a) : 0)\n#define stbiw__sbfree(a)         ((a) ? STBIW_FREE(stbiw__sbraw(a)),0 : 0)\n\nstatic void *stbiw__sbgrowf(void **arr, int increment, int itemsize)\n{\n   int m = *arr ? 2*stbiw__sbm(*arr)+increment : increment+1;\n   void *p = STBIW_REALLOC_SIZED(*arr ? stbiw__sbraw(*arr) : 0, *arr ? 
(stbiw__sbm(*arr)*itemsize + sizeof(int)*2) : 0, itemsize * m + sizeof(int)*2);\n   STBIW_ASSERT(p);\n   if (p) {\n      if (!*arr) ((int *) p)[1] = 0;\n      *arr = (void *) ((int *) p + 2);\n      stbiw__sbm(*arr) = m;\n   }\n   return *arr;\n}\n\nstatic unsigned char *stbiw__zlib_flushf(unsigned char *data, unsigned int *bitbuffer, int *bitcount)\n{\n   while (*bitcount >= 8) {\n      stbiw__sbpush(data, STBIW_UCHAR(*bitbuffer));\n      *bitbuffer >>= 8;\n      *bitcount -= 8;\n   }\n   return data;\n}\n\nstatic int stbiw__zlib_bitrev(int code, int codebits)\n{\n   int res=0;\n   while (codebits--) {\n      res = (res << 1) | (code & 1);\n      code >>= 1;\n   }\n   return res;\n}\n\nstatic unsigned int stbiw__zlib_countm(unsigned char *a, unsigned char *b, int limit)\n{\n   int i;\n   for (i=0; i < limit && i < 258; ++i)\n      if (a[i] != b[i]) break;\n   return i;\n}\n\nstatic unsigned int stbiw__zhash(unsigned char *data)\n{\n   stbiw_uint32 hash = data[0] + (data[1] << 8) + (data[2] << 16);\n   hash ^= hash << 3;\n   hash += hash >> 5;\n   hash ^= hash << 4;\n   hash += hash >> 17;\n   hash ^= hash << 25;\n   hash += hash >> 6;\n   return hash;\n}\n\n#define stbiw__zlib_flush() (out = stbiw__zlib_flushf(out, &bitbuf, &bitcount))\n#define stbiw__zlib_add(code,codebits) \\\n      (bitbuf |= (code) << bitcount, bitcount += (codebits), stbiw__zlib_flush())\n#define stbiw__zlib_huffa(b,c)  stbiw__zlib_add(stbiw__zlib_bitrev(b,c),c)\n// default huffman tables\n#define stbiw__zlib_huff1(n)  stbiw__zlib_huffa(0x30 + (n), 8)\n#define stbiw__zlib_huff2(n)  stbiw__zlib_huffa(0x190 + (n)-144, 9)\n#define stbiw__zlib_huff3(n)  stbiw__zlib_huffa(0 + (n)-256,7)\n#define stbiw__zlib_huff4(n)  stbiw__zlib_huffa(0xc0 + (n)-280,8)\n#define stbiw__zlib_huff(n)  ((n) <= 143 ? stbiw__zlib_huff1(n) : (n) <= 255 ? stbiw__zlib_huff2(n) : (n) <= 279 ? stbiw__zlib_huff3(n) : stbiw__zlib_huff4(n))\n#define stbiw__zlib_huffb(n) ((n) <= 143 ? 
stbiw__zlib_huff1(n) : stbiw__zlib_huff2(n))\n\n#define stbiw__ZHASH   16384\n\n#endif // STBIW_ZLIB_COMPRESS\n\nSTBIWDEF unsigned char * stbi_zlib_compress(unsigned char *data, int data_len, int *out_len, int quality)\n{\n#ifdef STBIW_ZLIB_COMPRESS\n   // user provided a zlib compress implementation, use that\n   return STBIW_ZLIB_COMPRESS(data, data_len, out_len, quality);\n#else // use builtin\n   static unsigned short lengthc[] = { 3,4,5,6,7,8,9,10,11,13,15,17,19,23,27,31,35,43,51,59,67,83,99,115,131,163,195,227,258, 259 };\n   static unsigned char  lengtheb[]= { 0,0,0,0,0,0,0, 0, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 4, 4, 4,  4,  5,  5,  5,  5,  0 };\n   static unsigned short distc[]   = { 1,2,3,4,5,7,9,13,17,25,33,49,65,97,129,193,257,385,513,769,1025,1537,2049,3073,4097,6145,8193,12289,16385,24577, 32768 };\n   static unsigned char  disteb[]  = { 0,0,0,0,1,1,2,2,3,3,4,4,5,5,6,6,7,7,8,8,9,9,10,10,11,11,12,12,13,13 };\n   unsigned int bitbuf=0;\n   int i,j, bitcount=0;\n   unsigned char *out = NULL;\n   unsigned char ***hash_table = (unsigned char***) STBIW_MALLOC(stbiw__ZHASH * sizeof(unsigned char**));\n   if (hash_table == NULL)\n      return NULL;\n   if (quality < 5) quality = 5;\n\n   stbiw__sbpush(out, 0x78);   // DEFLATE 32K window\n   stbiw__sbpush(out, 0x5e);   // FLEVEL = 1\n   stbiw__zlib_add(1,1);  // BFINAL = 1\n   stbiw__zlib_add(1,2);  // BTYPE = 1 -- fixed huffman\n\n   for (i=0; i < stbiw__ZHASH; ++i)\n      hash_table[i] = NULL;\n\n   i=0;\n   while (i < data_len-3) {\n      // hash next 3 bytes of data to be compressed\n      int h = stbiw__zhash(data+i)&(stbiw__ZHASH-1), best=3;\n      unsigned char *bestloc = 0;\n      unsigned char **hlist = hash_table[h];\n      int n = stbiw__sbcount(hlist);\n      for (j=0; j < n; ++j) {\n         if (hlist[j]-data > i-32768) { // if entry lies within window\n            int d = stbiw__zlib_countm(hlist[j], data+i, data_len-i);\n            if (d >= best) { best=d; bestloc=hlist[j]; }\n         }\n     
 }\n      // when hash table entry is too long, delete half the entries\n      if (hash_table[h] && stbiw__sbn(hash_table[h]) == 2*quality) {\n         STBIW_MEMMOVE(hash_table[h], hash_table[h]+quality, sizeof(hash_table[h][0])*quality);\n         stbiw__sbn(hash_table[h]) = quality;\n      }\n      stbiw__sbpush(hash_table[h],data+i);\n\n      if (bestloc) {\n         // \"lazy matching\" - check match at *next* byte, and if it's better, do cur byte as literal\n         h = stbiw__zhash(data+i+1)&(stbiw__ZHASH-1);\n         hlist = hash_table[h];\n         n = stbiw__sbcount(hlist);\n         for (j=0; j < n; ++j) {\n            if (hlist[j]-data > i-32767) {\n               int e = stbiw__zlib_countm(hlist[j], data+i+1, data_len-i-1);\n               if (e > best) { // if next match is better, bail on current match\n                  bestloc = NULL;\n                  break;\n               }\n            }\n         }\n      }\n\n      if (bestloc) {\n         int d = (int) (data+i - bestloc); // distance back\n         STBIW_ASSERT(d <= 32767 && best <= 258);\n         for (j=0; best > lengthc[j+1]-1; ++j);\n         stbiw__zlib_huff(j+257);\n         if (lengtheb[j]) stbiw__zlib_add(best - lengthc[j], lengtheb[j]);\n         for (j=0; d > distc[j+1]-1; ++j);\n         stbiw__zlib_add(stbiw__zlib_bitrev(j,5),5);\n         if (disteb[j]) stbiw__zlib_add(d - distc[j], disteb[j]);\n         i += best;\n      } else {\n         stbiw__zlib_huffb(data[i]);\n         ++i;\n      }\n   }\n   // write out final bytes\n   for (;i < data_len; ++i)\n      stbiw__zlib_huffb(data[i]);\n   stbiw__zlib_huff(256); // end of block\n   // pad with 0 bits to byte boundary\n   while (bitcount)\n      stbiw__zlib_add(0,1);\n\n   for (i=0; i < stbiw__ZHASH; ++i)\n      (void) stbiw__sbfree(hash_table[i]);\n   STBIW_FREE(hash_table);\n\n   // store uncompressed instead if compression was worse\n   if (stbiw__sbn(out) > data_len + 2 + ((data_len+32766)/32767)*5) {\n      
stbiw__sbn(out) = 2;  // truncate to DEFLATE 32K window and FLEVEL = 1\n      for (j = 0; j < data_len;) {\n         int blocklen = data_len - j;\n         if (blocklen > 32767) blocklen = 32767;\n         stbiw__sbpush(out, data_len - j == blocklen); // BFINAL = ?, BTYPE = 0 -- no compression\n         stbiw__sbpush(out, STBIW_UCHAR(blocklen)); // LEN\n         stbiw__sbpush(out, STBIW_UCHAR(blocklen >> 8));\n         stbiw__sbpush(out, STBIW_UCHAR(~blocklen)); // NLEN\n         stbiw__sbpush(out, STBIW_UCHAR(~blocklen >> 8));\n         memcpy(out+stbiw__sbn(out), data+j, blocklen);\n         stbiw__sbn(out) += blocklen;\n         j += blocklen;\n      }\n   }\n\n   {\n      // compute adler32 on input\n      unsigned int s1=1, s2=0;\n      int blocklen = (int) (data_len % 5552);\n      j=0;\n      while (j < data_len) {\n         for (i=0; i < blocklen; ++i) { s1 += data[j+i]; s2 += s1; }\n         s1 %= 65521; s2 %= 65521;\n         j += blocklen;\n         blocklen = 5552;\n      }\n      stbiw__sbpush(out, STBIW_UCHAR(s2 >> 8));\n      stbiw__sbpush(out, STBIW_UCHAR(s2));\n      stbiw__sbpush(out, STBIW_UCHAR(s1 >> 8));\n      stbiw__sbpush(out, STBIW_UCHAR(s1));\n   }\n   *out_len = stbiw__sbn(out);\n   // make returned pointer freeable\n   STBIW_MEMMOVE(stbiw__sbraw(out), out, *out_len);\n   return (unsigned char *) stbiw__sbraw(out);\n#endif // STBIW_ZLIB_COMPRESS\n}\n\nstatic unsigned int stbiw__crc32(unsigned char *buffer, int len)\n{\n#ifdef STBIW_CRC32\n    return STBIW_CRC32(buffer, len);\n#else\n   static unsigned int crc_table[256] =\n   {\n      0x00000000, 0x77073096, 0xEE0E612C, 0x990951BA, 0x076DC419, 0x706AF48F, 0xE963A535, 0x9E6495A3,\n      0x0eDB8832, 0x79DCB8A4, 0xE0D5E91E, 0x97D2D988, 0x09B64C2B, 0x7EB17CBD, 0xE7B82D07, 0x90BF1D91,\n      0x1DB71064, 0x6AB020F2, 0xF3B97148, 0x84BE41DE, 0x1ADAD47D, 0x6DDDE4EB, 0xF4D4B551, 0x83D385C7,\n      0x136C9856, 0x646BA8C0, 0xFD62F97A, 0x8A65C9EC, 0x14015C4F, 0x63066CD9, 0xFA0F3D63, 0x8D080DF5,\n      
0x3B6E20C8, 0x4C69105E, 0xD56041E4, 0xA2677172, 0x3C03E4D1, 0x4B04D447, 0xD20D85FD, 0xA50AB56B,\n      0x35B5A8FA, 0x42B2986C, 0xDBBBC9D6, 0xACBCF940, 0x32D86CE3, 0x45DF5C75, 0xDCD60DCF, 0xABD13D59,\n      0x26D930AC, 0x51DE003A, 0xC8D75180, 0xBFD06116, 0x21B4F4B5, 0x56B3C423, 0xCFBA9599, 0xB8BDA50F,\n      0x2802B89E, 0x5F058808, 0xC60CD9B2, 0xB10BE924, 0x2F6F7C87, 0x58684C11, 0xC1611DAB, 0xB6662D3D,\n      0x76DC4190, 0x01DB7106, 0x98D220BC, 0xEFD5102A, 0x71B18589, 0x06B6B51F, 0x9FBFE4A5, 0xE8B8D433,\n      0x7807C9A2, 0x0F00F934, 0x9609A88E, 0xE10E9818, 0x7F6A0DBB, 0x086D3D2D, 0x91646C97, 0xE6635C01,\n      0x6B6B51F4, 0x1C6C6162, 0x856530D8, 0xF262004E, 0x6C0695ED, 0x1B01A57B, 0x8208F4C1, 0xF50FC457,\n      0x65B0D9C6, 0x12B7E950, 0x8BBEB8EA, 0xFCB9887C, 0x62DD1DDF, 0x15DA2D49, 0x8CD37CF3, 0xFBD44C65,\n      0x4DB26158, 0x3AB551CE, 0xA3BC0074, 0xD4BB30E2, 0x4ADFA541, 0x3DD895D7, 0xA4D1C46D, 0xD3D6F4FB,\n      0x4369E96A, 0x346ED9FC, 0xAD678846, 0xDA60B8D0, 0x44042D73, 0x33031DE5, 0xAA0A4C5F, 0xDD0D7CC9,\n      0x5005713C, 0x270241AA, 0xBE0B1010, 0xC90C2086, 0x5768B525, 0x206F85B3, 0xB966D409, 0xCE61E49F,\n      0x5EDEF90E, 0x29D9C998, 0xB0D09822, 0xC7D7A8B4, 0x59B33D17, 0x2EB40D81, 0xB7BD5C3B, 0xC0BA6CAD,\n      0xEDB88320, 0x9ABFB3B6, 0x03B6E20C, 0x74B1D29A, 0xEAD54739, 0x9DD277AF, 0x04DB2615, 0x73DC1683,\n      0xE3630B12, 0x94643B84, 0x0D6D6A3E, 0x7A6A5AA8, 0xE40ECF0B, 0x9309FF9D, 0x0A00AE27, 0x7D079EB1,\n      0xF00F9344, 0x8708A3D2, 0x1E01F268, 0x6906C2FE, 0xF762575D, 0x806567CB, 0x196C3671, 0x6E6B06E7,\n      0xFED41B76, 0x89D32BE0, 0x10DA7A5A, 0x67DD4ACC, 0xF9B9DF6F, 0x8EBEEFF9, 0x17B7BE43, 0x60B08ED5,\n      0xD6D6A3E8, 0xA1D1937E, 0x38D8C2C4, 0x4FDFF252, 0xD1BB67F1, 0xA6BC5767, 0x3FB506DD, 0x48B2364B,\n      0xD80D2BDA, 0xAF0A1B4C, 0x36034AF6, 0x41047A60, 0xDF60EFC3, 0xA867DF55, 0x316E8EEF, 0x4669BE79,\n      0xCB61B38C, 0xBC66831A, 0x256FD2A0, 0x5268E236, 0xCC0C7795, 0xBB0B4703, 0x220216B9, 0x5505262F,\n      0xC5BA3BBE, 0xB2BD0B28, 0x2BB45A92, 
0x5CB36A04, 0xC2D7FFA7, 0xB5D0CF31, 0x2CD99E8B, 0x5BDEAE1D,\n      0x9B64C2B0, 0xEC63F226, 0x756AA39C, 0x026D930A, 0x9C0906A9, 0xEB0E363F, 0x72076785, 0x05005713,\n      0x95BF4A82, 0xE2B87A14, 0x7BB12BAE, 0x0CB61B38, 0x92D28E9B, 0xE5D5BE0D, 0x7CDCEFB7, 0x0BDBDF21,\n      0x86D3D2D4, 0xF1D4E242, 0x68DDB3F8, 0x1FDA836E, 0x81BE16CD, 0xF6B9265B, 0x6FB077E1, 0x18B74777,\n      0x88085AE6, 0xFF0F6A70, 0x66063BCA, 0x11010B5C, 0x8F659EFF, 0xF862AE69, 0x616BFFD3, 0x166CCF45,\n      0xA00AE278, 0xD70DD2EE, 0x4E048354, 0x3903B3C2, 0xA7672661, 0xD06016F7, 0x4969474D, 0x3E6E77DB,\n      0xAED16A4A, 0xD9D65ADC, 0x40DF0B66, 0x37D83BF0, 0xA9BCAE53, 0xDEBB9EC5, 0x47B2CF7F, 0x30B5FFE9,\n      0xBDBDF21C, 0xCABAC28A, 0x53B39330, 0x24B4A3A6, 0xBAD03605, 0xCDD70693, 0x54DE5729, 0x23D967BF,\n      0xB3667A2E, 0xC4614AB8, 0x5D681B02, 0x2A6F2B94, 0xB40BBE37, 0xC30C8EA1, 0x5A05DF1B, 0x2D02EF8D\n   };\n\n   unsigned int crc = ~0u;\n   int i;\n   for (i=0; i < len; ++i)\n      crc = (crc >> 8) ^ crc_table[buffer[i] ^ (crc & 0xff)];\n   return ~crc;\n#endif\n}\n\n#define stbiw__wpng4(o,a,b,c,d) ((o)[0]=STBIW_UCHAR(a),(o)[1]=STBIW_UCHAR(b),(o)[2]=STBIW_UCHAR(c),(o)[3]=STBIW_UCHAR(d),(o)+=4)\n#define stbiw__wp32(data,v) stbiw__wpng4(data, (v)>>24,(v)>>16,(v)>>8,(v));\n#define stbiw__wptag(data,s) stbiw__wpng4(data, s[0],s[1],s[2],s[3])\n\nstatic void stbiw__wpcrc(unsigned char **data, int len)\n{\n   unsigned int crc = stbiw__crc32(*data - len - 4, len+4);\n   stbiw__wp32(*data, crc);\n}\n\nstatic unsigned char stbiw__paeth(int a, int b, int c)\n{\n   int p = a + b - c, pa = abs(p-a), pb = abs(p-b), pc = abs(p-c);\n   if (pa <= pb && pa <= pc) return STBIW_UCHAR(a);\n   if (pb <= pc) return STBIW_UCHAR(b);\n   return STBIW_UCHAR(c);\n}\n\n// @OPTIMIZE: provide an option that always forces left-predict or paeth predict\nstatic void stbiw__encode_png_line(unsigned char *pixels, int stride_bytes, int width, int height, int y, int n, int filter_type, signed char *line_buffer)\n{\n   static int 
mapping[] = { 0,1,2,3,4 };\n   static int firstmap[] = { 0,1,0,5,6 };\n   int *mymap = (y != 0) ? mapping : firstmap;\n   int i;\n   int type = mymap[filter_type];\n   unsigned char *z = pixels + stride_bytes * (stbi__flip_vertically_on_write ? height-1-y : y);\n   int signed_stride = stbi__flip_vertically_on_write ? -stride_bytes : stride_bytes;\n\n   if (type==0) {\n      memcpy(line_buffer, z, width*n);\n      return;\n   }\n\n   // first loop isn't optimized since it's just one pixel\n   for (i = 0; i < n; ++i) {\n      switch (type) {\n         case 1: line_buffer[i] = z[i]; break;\n         case 2: line_buffer[i] = z[i] - z[i-signed_stride]; break;\n         case 3: line_buffer[i] = z[i] - (z[i-signed_stride]>>1); break;\n         case 4: line_buffer[i] = (signed char) (z[i] - stbiw__paeth(0,z[i-signed_stride],0)); break;\n         case 5: line_buffer[i] = z[i]; break;\n         case 6: line_buffer[i] = z[i]; break;\n      }\n   }\n   switch (type) {\n      case 1: for (i=n; i < width*n; ++i) line_buffer[i] = z[i] - z[i-n]; break;\n      case 2: for (i=n; i < width*n; ++i) line_buffer[i] = z[i] - z[i-signed_stride]; break;\n      case 3: for (i=n; i < width*n; ++i) line_buffer[i] = z[i] - ((z[i-n] + z[i-signed_stride])>>1); break;\n      case 4: for (i=n; i < width*n; ++i) line_buffer[i] = z[i] - stbiw__paeth(z[i-n], z[i-signed_stride], z[i-signed_stride-n]); break;\n      case 5: for (i=n; i < width*n; ++i) line_buffer[i] = z[i] - (z[i-n]>>1); break;\n      case 6: for (i=n; i < width*n; ++i) line_buffer[i] = z[i] - stbiw__paeth(z[i-n], 0,0); break;\n   }\n}\n\nSTBIWDEF unsigned char *stbi_write_png_to_mem(const unsigned char *pixels, int stride_bytes, int x, int y, int n, int *out_len)\n{\n   int force_filter = stbi_write_force_png_filter;\n   int ctype[5] = { -1, 0, 4, 2, 6 };\n   unsigned char sig[8] = { 137,80,78,71,13,10,26,10 };\n   unsigned char *out,*o, *filt, *zlib;\n   signed char *line_buffer;\n   int j,zlen;\n\n   if (stride_bytes == 0)\n      
stride_bytes = x * n;\n\n   if (force_filter >= 5) {\n      force_filter = -1;\n   }\n\n   filt = (unsigned char *) STBIW_MALLOC((x*n+1) * y); if (!filt) return 0;\n   line_buffer = (signed char *) STBIW_MALLOC(x * n); if (!line_buffer) { STBIW_FREE(filt); return 0; }\n   for (j=0; j < y; ++j) {\n      int filter_type;\n      if (force_filter > -1) {\n         filter_type = force_filter;\n         stbiw__encode_png_line((unsigned char*)(pixels), stride_bytes, x, y, j, n, force_filter, line_buffer);\n      } else { // Estimate the best filter by running through all of them:\n         int best_filter = 0, best_filter_val = 0x7fffffff, est, i;\n         for (filter_type = 0; filter_type < 5; filter_type++) {\n            stbiw__encode_png_line((unsigned char*)(pixels), stride_bytes, x, y, j, n, filter_type, line_buffer);\n\n            // Estimate the entropy of the line using this filter; the less, the better.\n            est = 0;\n            for (i = 0; i < x*n; ++i) {\n               est += abs((signed char) line_buffer[i]);\n            }\n            if (est < best_filter_val) {\n               best_filter_val = est;\n               best_filter = filter_type;\n            }\n         }\n         if (filter_type != best_filter) {  // If the last iteration already got us the best filter, don't redo it\n            stbiw__encode_png_line((unsigned char*)(pixels), stride_bytes, x, y, j, n, best_filter, line_buffer);\n            filter_type = best_filter;\n         }\n      }\n      // when we get here, filter_type contains the filter type, and line_buffer contains the data\n      filt[j*(x*n+1)] = (unsigned char) filter_type;\n      STBIW_MEMMOVE(filt+j*(x*n+1)+1, line_buffer, x*n);\n   }\n   STBIW_FREE(line_buffer);\n   zlib = stbi_zlib_compress(filt, y*( x*n+1), &zlen, stbi_write_png_compression_level);\n   STBIW_FREE(filt);\n   if (!zlib) return 0;\n\n   // each tag requires 12 bytes of overhead\n   out = (unsigned char *) STBIW_MALLOC(8 + 12+13 + 12+zlen + 
12);\n   if (!out) return 0;\n   *out_len = 8 + 12+13 + 12+zlen + 12;\n\n   o=out;\n   STBIW_MEMMOVE(o,sig,8); o+= 8;\n   stbiw__wp32(o, 13); // header length\n   stbiw__wptag(o, \"IHDR\");\n   stbiw__wp32(o, x);\n   stbiw__wp32(o, y);\n   *o++ = 8;\n   *o++ = STBIW_UCHAR(ctype[n]);\n   *o++ = 0;\n   *o++ = 0;\n   *o++ = 0;\n   stbiw__wpcrc(&o,13);\n\n   stbiw__wp32(o, zlen);\n   stbiw__wptag(o, \"IDAT\");\n   STBIW_MEMMOVE(o, zlib, zlen);\n   o += zlen;\n   STBIW_FREE(zlib);\n   stbiw__wpcrc(&o, zlen);\n\n   stbiw__wp32(o,0);\n   stbiw__wptag(o, \"IEND\");\n   stbiw__wpcrc(&o,0);\n\n   STBIW_ASSERT(o == out + *out_len);\n\n   return out;\n}\n\n#ifndef STBI_WRITE_NO_STDIO\nSTBIWDEF int stbi_write_png(char const *filename, int x, int y, int comp, const void *data, int stride_bytes)\n{\n   FILE *f;\n   int len;\n   unsigned char *png = stbi_write_png_to_mem((const unsigned char *) data, stride_bytes, x, y, comp, &len);\n   if (png == NULL) return 0;\n\n   f = stbiw__fopen(filename, \"wb\");\n   if (!f) { STBIW_FREE(png); return 0; }\n   fwrite(png, 1, len, f);\n   fclose(f);\n   STBIW_FREE(png);\n   return 1;\n}\n#endif\n\nSTBIWDEF int stbi_write_png_to_func(stbi_write_func *func, void *context, int x, int y, int comp, const void *data, int stride_bytes)\n{\n   int len;\n   unsigned char *png = stbi_write_png_to_mem((const unsigned char *) data, stride_bytes, x, y, comp, &len);\n   if (png == NULL) return 0;\n   func(context, png, len);\n   STBIW_FREE(png);\n   return 1;\n}\n\n\n/* ***************************************************************************\n *\n * JPEG writer\n *\n * This is based on Jon Olick's jo_jpeg.cpp:\n * public domain Simple, Minimalistic JPEG writer - http://www.jonolick.com/code.html\n */\n\nstatic const unsigned char stbiw__jpg_ZigZag[] = { 0,1,5,6,14,15,27,28,2,4,7,13,16,26,29,42,3,8,12,17,25,30,41,43,9,11,18,\n      24,31,40,44,53,10,19,23,32,39,45,52,54,20,22,33,38,46,51,55,60,21,34,37,47,50,56,59,61,35,36,48,49,57,58,62,63 };\n\nstatic 
void stbiw__jpg_writeBits(stbi__write_context *s, int *bitBufP, int *bitCntP, const unsigned short *bs) {\n   int bitBuf = *bitBufP, bitCnt = *bitCntP;\n   bitCnt += bs[1];\n   bitBuf |= bs[0] << (24 - bitCnt);\n   while(bitCnt >= 8) {\n      unsigned char c = (bitBuf >> 16) & 255;\n      stbiw__putc(s, c);\n      if(c == 255) {\n         stbiw__putc(s, 0);\n      }\n      bitBuf <<= 8;\n      bitCnt -= 8;\n   }\n   *bitBufP = bitBuf;\n   *bitCntP = bitCnt;\n}\n\nstatic void stbiw__jpg_DCT(float *d0p, float *d1p, float *d2p, float *d3p, float *d4p, float *d5p, float *d6p, float *d7p) {\n   float d0 = *d0p, d1 = *d1p, d2 = *d2p, d3 = *d3p, d4 = *d4p, d5 = *d5p, d6 = *d6p, d7 = *d7p;\n   float z1, z2, z3, z4, z5, z11, z13;\n\n   float tmp0 = d0 + d7;\n   float tmp7 = d0 - d7;\n   float tmp1 = d1 + d6;\n   float tmp6 = d1 - d6;\n   float tmp2 = d2 + d5;\n   float tmp5 = d2 - d5;\n   float tmp3 = d3 + d4;\n   float tmp4 = d3 - d4;\n\n   // Even part\n   float tmp10 = tmp0 + tmp3;   // phase 2\n   float tmp13 = tmp0 - tmp3;\n   float tmp11 = tmp1 + tmp2;\n   float tmp12 = tmp1 - tmp2;\n\n   d0 = tmp10 + tmp11;       // phase 3\n   d4 = tmp10 - tmp11;\n\n   z1 = (tmp12 + tmp13) * 0.707106781f; // c4\n   d2 = tmp13 + z1;       // phase 5\n   d6 = tmp13 - z1;\n\n   // Odd part\n   tmp10 = tmp4 + tmp5;       // phase 2\n   tmp11 = tmp5 + tmp6;\n   tmp12 = tmp6 + tmp7;\n\n   // The rotator is modified from fig 4-8 to avoid extra negations.\n   z5 = (tmp10 - tmp12) * 0.382683433f; // c6\n   z2 = tmp10 * 0.541196100f + z5; // c2-c6\n   z4 = tmp12 * 1.306562965f + z5; // c2+c6\n   z3 = tmp11 * 0.707106781f; // c4\n\n   z11 = tmp7 + z3;      // phase 5\n   z13 = tmp7 - z3;\n\n   *d5p = z13 + z2;         // phase 6\n   *d3p = z13 - z2;\n   *d1p = z11 + z4;\n   *d7p = z11 - z4;\n\n   *d0p = d0;  *d2p = d2;  *d4p = d4;  *d6p = d6;\n}\n\nstatic void stbiw__jpg_calcBits(int val, unsigned short bits[2]) {\n   int tmp1 = val < 0 ? -val : val;\n   val = val < 0 ? 
val-1 : val;\n   bits[1] = 1;\n   while(tmp1 >>= 1) {\n      ++bits[1];\n   }\n   bits[0] = val & ((1<<bits[1])-1);\n}\n\nstatic int stbiw__jpg_processDU(stbi__write_context *s, int *bitBuf, int *bitCnt, float *CDU, int du_stride, float *fdtbl, int DC, const unsigned short HTDC[256][2], const unsigned short HTAC[256][2]) {\n   const unsigned short EOB[2] = { HTAC[0x00][0], HTAC[0x00][1] };\n   const unsigned short M16zeroes[2] = { HTAC[0xF0][0], HTAC[0xF0][1] };\n   int dataOff, i, j, n, diff, end0pos, x, y;\n   int DU[64];\n\n   // DCT rows\n   for(dataOff=0, n=du_stride*8; dataOff<n; dataOff+=du_stride) {\n      stbiw__jpg_DCT(&CDU[dataOff], &CDU[dataOff+1], &CDU[dataOff+2], &CDU[dataOff+3], &CDU[dataOff+4], &CDU[dataOff+5], &CDU[dataOff+6], &CDU[dataOff+7]);\n   }\n   // DCT columns\n   for(dataOff=0; dataOff<8; ++dataOff) {\n      stbiw__jpg_DCT(&CDU[dataOff], &CDU[dataOff+du_stride], &CDU[dataOff+du_stride*2], &CDU[dataOff+du_stride*3], &CDU[dataOff+du_stride*4],\n                     &CDU[dataOff+du_stride*5], &CDU[dataOff+du_stride*6], &CDU[dataOff+du_stride*7]);\n   }\n   // Quantize/descale/zigzag the coefficients\n   for(y = 0, j=0; y < 8; ++y) {\n      for(x = 0; x < 8; ++x,++j) {\n         float v;\n         i = y*du_stride+x;\n         v = CDU[i]*fdtbl[j];\n         // DU[stbiw__jpg_ZigZag[j]] = (int)(v < 0 ? ceilf(v - 0.5f) : floorf(v + 0.5f));\n         // ceilf() and floorf() are C99, not C89, but I /think/ they're not needed here anyway?\n         DU[stbiw__jpg_ZigZag[j]] = (int)(v < 0 ? 
v - 0.5f : v + 0.5f);\n      }\n   }\n\n   // Encode DC\n   diff = DU[0] - DC;\n   if (diff == 0) {\n      stbiw__jpg_writeBits(s, bitBuf, bitCnt, HTDC[0]);\n   } else {\n      unsigned short bits[2];\n      stbiw__jpg_calcBits(diff, bits);\n      stbiw__jpg_writeBits(s, bitBuf, bitCnt, HTDC[bits[1]]);\n      stbiw__jpg_writeBits(s, bitBuf, bitCnt, bits);\n   }\n   // Encode ACs\n   end0pos = 63;\n   for(; (end0pos>0)&&(DU[end0pos]==0); --end0pos) {\n   }\n   // end0pos = first element in reverse order !=0\n   if(end0pos == 0) {\n      stbiw__jpg_writeBits(s, bitBuf, bitCnt, EOB);\n      return DU[0];\n   }\n   for(i = 1; i <= end0pos; ++i) {\n      int startpos = i;\n      int nrzeroes;\n      unsigned short bits[2];\n      for (; DU[i]==0 && i<=end0pos; ++i) {\n      }\n      nrzeroes = i-startpos;\n      if ( nrzeroes >= 16 ) {\n         int lng = nrzeroes>>4;\n         int nrmarker;\n         for (nrmarker=1; nrmarker <= lng; ++nrmarker)\n            stbiw__jpg_writeBits(s, bitBuf, bitCnt, M16zeroes);\n         nrzeroes &= 15;\n      }\n      stbiw__jpg_calcBits(DU[i], bits);\n      stbiw__jpg_writeBits(s, bitBuf, bitCnt, HTAC[(nrzeroes<<4)+bits[1]]);\n      stbiw__jpg_writeBits(s, bitBuf, bitCnt, bits);\n   }\n   if(end0pos != 63) {\n      stbiw__jpg_writeBits(s, bitBuf, bitCnt, EOB);\n   }\n   return DU[0];\n}\n\nstatic int stbi_write_jpg_core(stbi__write_context *s, int width, int height, int comp, const void* data, int quality) {\n   // Constants that don't pollute global namespace\n   static const unsigned char std_dc_luminance_nrcodes[] = {0,0,1,5,1,1,1,1,1,1,0,0,0,0,0,0,0};\n   static const unsigned char std_dc_luminance_values[] = {0,1,2,3,4,5,6,7,8,9,10,11};\n   static const unsigned char std_ac_luminance_nrcodes[] = {0,0,2,1,3,3,2,4,3,5,5,4,4,0,0,1,0x7d};\n   static const unsigned char std_ac_luminance_values[] = {\n      0x01,0x02,0x03,0x00,0x04,0x11,0x05,0x12,0x21,0x31,0x41,0x06,0x13,0x51,0x61,0x07,0x22,0x71,0x14,0x32,0x81,0x91,0xa1,0x08,\n      
0x23,0x42,0xb1,0xc1,0x15,0x52,0xd1,0xf0,0x24,0x33,0x62,0x72,0x82,0x09,0x0a,0x16,0x17,0x18,0x19,0x1a,0x25,0x26,0x27,0x28,\n      0x29,0x2a,0x34,0x35,0x36,0x37,0x38,0x39,0x3a,0x43,0x44,0x45,0x46,0x47,0x48,0x49,0x4a,0x53,0x54,0x55,0x56,0x57,0x58,0x59,\n      0x5a,0x63,0x64,0x65,0x66,0x67,0x68,0x69,0x6a,0x73,0x74,0x75,0x76,0x77,0x78,0x79,0x7a,0x83,0x84,0x85,0x86,0x87,0x88,0x89,\n      0x8a,0x92,0x93,0x94,0x95,0x96,0x97,0x98,0x99,0x9a,0xa2,0xa3,0xa4,0xa5,0xa6,0xa7,0xa8,0xa9,0xaa,0xb2,0xb3,0xb4,0xb5,0xb6,\n      0xb7,0xb8,0xb9,0xba,0xc2,0xc3,0xc4,0xc5,0xc6,0xc7,0xc8,0xc9,0xca,0xd2,0xd3,0xd4,0xd5,0xd6,0xd7,0xd8,0xd9,0xda,0xe1,0xe2,\n      0xe3,0xe4,0xe5,0xe6,0xe7,0xe8,0xe9,0xea,0xf1,0xf2,0xf3,0xf4,0xf5,0xf6,0xf7,0xf8,0xf9,0xfa\n   };\n   static const unsigned char std_dc_chrominance_nrcodes[] = {0,0,3,1,1,1,1,1,1,1,1,1,0,0,0,0,0};\n   static const unsigned char std_dc_chrominance_values[] = {0,1,2,3,4,5,6,7,8,9,10,11};\n   static const unsigned char std_ac_chrominance_nrcodes[] = {0,0,2,1,2,4,4,3,4,7,5,4,4,0,1,2,0x77};\n   static const unsigned char std_ac_chrominance_values[] = {\n      0x00,0x01,0x02,0x03,0x11,0x04,0x05,0x21,0x31,0x06,0x12,0x41,0x51,0x07,0x61,0x71,0x13,0x22,0x32,0x81,0x08,0x14,0x42,0x91,\n      0xa1,0xb1,0xc1,0x09,0x23,0x33,0x52,0xf0,0x15,0x62,0x72,0xd1,0x0a,0x16,0x24,0x34,0xe1,0x25,0xf1,0x17,0x18,0x19,0x1a,0x26,\n      0x27,0x28,0x29,0x2a,0x35,0x36,0x37,0x38,0x39,0x3a,0x43,0x44,0x45,0x46,0x47,0x48,0x49,0x4a,0x53,0x54,0x55,0x56,0x57,0x58,\n      0x59,0x5a,0x63,0x64,0x65,0x66,0x67,0x68,0x69,0x6a,0x73,0x74,0x75,0x76,0x77,0x78,0x79,0x7a,0x82,0x83,0x84,0x85,0x86,0x87,\n      0x88,0x89,0x8a,0x92,0x93,0x94,0x95,0x96,0x97,0x98,0x99,0x9a,0xa2,0xa3,0xa4,0xa5,0xa6,0xa7,0xa8,0xa9,0xaa,0xb2,0xb3,0xb4,\n      0xb5,0xb6,0xb7,0xb8,0xb9,0xba,0xc2,0xc3,0xc4,0xc5,0xc6,0xc7,0xc8,0xc9,0xca,0xd2,0xd3,0xd4,0xd5,0xd6,0xd7,0xd8,0xd9,0xda,\n      0xe2,0xe3,0xe4,0xe5,0xe6,0xe7,0xe8,0xe9,0xea,0xf2,0xf3,0xf4,0xf5,0xf6,0xf7,0xf8,0xf9,0xfa\n   };\n   // Huffman tables\n   static 
const unsigned short YDC_HT[256][2] = { {0,2},{2,3},{3,3},{4,3},{5,3},{6,3},{14,4},{30,5},{62,6},{126,7},{254,8},{510,9}};\n   static const unsigned short UVDC_HT[256][2] = { {0,2},{1,2},{2,2},{6,3},{14,4},{30,5},{62,6},{126,7},{254,8},{510,9},{1022,10},{2046,11}};\n   static const unsigned short YAC_HT[256][2] = {\n      {10,4},{0,2},{1,2},{4,3},{11,4},{26,5},{120,7},{248,8},{1014,10},{65410,16},{65411,16},{0,0},{0,0},{0,0},{0,0},{0,0},{0,0},\n      {12,4},{27,5},{121,7},{502,9},{2038,11},{65412,16},{65413,16},{65414,16},{65415,16},{65416,16},{0,0},{0,0},{0,0},{0,0},{0,0},{0,0},\n      {28,5},{249,8},{1015,10},{4084,12},{65417,16},{65418,16},{65419,16},{65420,16},{65421,16},{65422,16},{0,0},{0,0},{0,0},{0,0},{0,0},{0,0},\n      {58,6},{503,9},{4085,12},{65423,16},{65424,16},{65425,16},{65426,16},{65427,16},{65428,16},{65429,16},{0,0},{0,0},{0,0},{0,0},{0,0},{0,0},\n      {59,6},{1016,10},{65430,16},{65431,16},{65432,16},{65433,16},{65434,16},{65435,16},{65436,16},{65437,16},{0,0},{0,0},{0,0},{0,0},{0,0},{0,0},\n      {122,7},{2039,11},{65438,16},{65439,16},{65440,16},{65441,16},{65442,16},{65443,16},{65444,16},{65445,16},{0,0},{0,0},{0,0},{0,0},{0,0},{0,0},\n      {123,7},{4086,12},{65446,16},{65447,16},{65448,16},{65449,16},{65450,16},{65451,16},{65452,16},{65453,16},{0,0},{0,0},{0,0},{0,0},{0,0},{0,0},\n      {250,8},{4087,12},{65454,16},{65455,16},{65456,16},{65457,16},{65458,16},{65459,16},{65460,16},{65461,16},{0,0},{0,0},{0,0},{0,0},{0,0},{0,0},\n      {504,9},{32704,15},{65462,16},{65463,16},{65464,16},{65465,16},{65466,16},{65467,16},{65468,16},{65469,16},{0,0},{0,0},{0,0},{0,0},{0,0},{0,0},\n      {505,9},{65470,16},{65471,16},{65472,16},{65473,16},{65474,16},{65475,16},{65476,16},{65477,16},{65478,16},{0,0},{0,0},{0,0},{0,0},{0,0},{0,0},\n      {506,9},{65479,16},{65480,16},{65481,16},{65482,16},{65483,16},{65484,16},{65485,16},{65486,16},{65487,16},{0,0},{0,0},{0,0},{0,0},{0,0},{0,0},\n      
{1017,10},{65488,16},{65489,16},{65490,16},{65491,16},{65492,16},{65493,16},{65494,16},{65495,16},{65496,16},{0,0},{0,0},{0,0},{0,0},{0,0},{0,0},\n      {1018,10},{65497,16},{65498,16},{65499,16},{65500,16},{65501,16},{65502,16},{65503,16},{65504,16},{65505,16},{0,0},{0,0},{0,0},{0,0},{0,0},{0,0},\n      {2040,11},{65506,16},{65507,16},{65508,16},{65509,16},{65510,16},{65511,16},{65512,16},{65513,16},{65514,16},{0,0},{0,0},{0,0},{0,0},{0,0},{0,0},\n      {65515,16},{65516,16},{65517,16},{65518,16},{65519,16},{65520,16},{65521,16},{65522,16},{65523,16},{65524,16},{0,0},{0,0},{0,0},{0,0},{0,0},\n      {2041,11},{65525,16},{65526,16},{65527,16},{65528,16},{65529,16},{65530,16},{65531,16},{65532,16},{65533,16},{65534,16},{0,0},{0,0},{0,0},{0,0},{0,0}\n   };\n   static const unsigned short UVAC_HT[256][2] = {\n      {0,2},{1,2},{4,3},{10,4},{24,5},{25,5},{56,6},{120,7},{500,9},{1014,10},{4084,12},{0,0},{0,0},{0,0},{0,0},{0,0},{0,0},\n      {11,4},{57,6},{246,8},{501,9},{2038,11},{4085,12},{65416,16},{65417,16},{65418,16},{65419,16},{0,0},{0,0},{0,0},{0,0},{0,0},{0,0},\n      {26,5},{247,8},{1015,10},{4086,12},{32706,15},{65420,16},{65421,16},{65422,16},{65423,16},{65424,16},{0,0},{0,0},{0,0},{0,0},{0,0},{0,0},\n      {27,5},{248,8},{1016,10},{4087,12},{65425,16},{65426,16},{65427,16},{65428,16},{65429,16},{65430,16},{0,0},{0,0},{0,0},{0,0},{0,0},{0,0},\n      {58,6},{502,9},{65431,16},{65432,16},{65433,16},{65434,16},{65435,16},{65436,16},{65437,16},{65438,16},{0,0},{0,0},{0,0},{0,0},{0,0},{0,0},\n      {59,6},{1017,10},{65439,16},{65440,16},{65441,16},{65442,16},{65443,16},{65444,16},{65445,16},{65446,16},{0,0},{0,0},{0,0},{0,0},{0,0},{0,0},\n      {121,7},{2039,11},{65447,16},{65448,16},{65449,16},{65450,16},{65451,16},{65452,16},{65453,16},{65454,16},{0,0},{0,0},{0,0},{0,0},{0,0},{0,0},\n      {122,7},{2040,11},{65455,16},{65456,16},{65457,16},{65458,16},{65459,16},{65460,16},{65461,16},{65462,16},{0,0},{0,0},{0,0},{0,0},{0,0},{0,0},\n      
{249,8},{65463,16},{65464,16},{65465,16},{65466,16},{65467,16},{65468,16},{65469,16},{65470,16},{65471,16},{0,0},{0,0},{0,0},{0,0},{0,0},{0,0},\n      {503,9},{65472,16},{65473,16},{65474,16},{65475,16},{65476,16},{65477,16},{65478,16},{65479,16},{65480,16},{0,0},{0,0},{0,0},{0,0},{0,0},{0,0},\n      {504,9},{65481,16},{65482,16},{65483,16},{65484,16},{65485,16},{65486,16},{65487,16},{65488,16},{65489,16},{0,0},{0,0},{0,0},{0,0},{0,0},{0,0},\n      {505,9},{65490,16},{65491,16},{65492,16},{65493,16},{65494,16},{65495,16},{65496,16},{65497,16},{65498,16},{0,0},{0,0},{0,0},{0,0},{0,0},{0,0},\n      {506,9},{65499,16},{65500,16},{65501,16},{65502,16},{65503,16},{65504,16},{65505,16},{65506,16},{65507,16},{0,0},{0,0},{0,0},{0,0},{0,0},{0,0},\n      {2041,11},{65508,16},{65509,16},{65510,16},{65511,16},{65512,16},{65513,16},{65514,16},{65515,16},{65516,16},{0,0},{0,0},{0,0},{0,0},{0,0},{0,0},\n      {16352,14},{65517,16},{65518,16},{65519,16},{65520,16},{65521,16},{65522,16},{65523,16},{65524,16},{65525,16},{0,0},{0,0},{0,0},{0,0},{0,0},\n      {1018,10},{32707,15},{65526,16},{65527,16},{65528,16},{65529,16},{65530,16},{65531,16},{65532,16},{65533,16},{65534,16},{0,0},{0,0},{0,0},{0,0},{0,0}\n   };\n   static const int YQT[] = {16,11,10,16,24,40,51,61,12,12,14,19,26,58,60,55,14,13,16,24,40,57,69,56,14,17,22,29,51,87,80,62,18,22,\n                             37,56,68,109,103,77,24,35,55,64,81,104,113,92,49,64,78,87,103,121,120,101,72,92,95,98,112,100,103,99};\n   static const int UVQT[] = {17,18,24,47,99,99,99,99,18,21,26,66,99,99,99,99,24,26,56,99,99,99,99,99,47,66,99,99,99,99,99,99,\n                              99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99};\n   static const float aasf[] = { 1.0f * 2.828427125f, 1.387039845f * 2.828427125f, 1.306562965f * 2.828427125f, 1.175875602f * 2.828427125f,\n                                 1.0f * 2.828427125f, 0.785694958f * 2.828427125f, 0.541196100f * 2.828427125f, 
0.275899379f * 2.828427125f };\n\n   int row, col, i, k, subsample;\n   float fdtbl_Y[64], fdtbl_UV[64];\n   unsigned char YTable[64], UVTable[64];\n\n   if(!data || !width || !height || comp > 4 || comp < 1) {\n      return 0;\n   }\n\n   quality = quality ? quality : 90;\n   subsample = quality <= 90 ? 1 : 0;\n   quality = quality < 1 ? 1 : quality > 100 ? 100 : quality;\n   quality = quality < 50 ? 5000 / quality : 200 - quality * 2;\n\n   for(i = 0; i < 64; ++i) {\n      int uvti, yti = (YQT[i]*quality+50)/100;\n      YTable[stbiw__jpg_ZigZag[i]] = (unsigned char) (yti < 1 ? 1 : yti > 255 ? 255 : yti);\n      uvti = (UVQT[i]*quality+50)/100;\n      UVTable[stbiw__jpg_ZigZag[i]] = (unsigned char) (uvti < 1 ? 1 : uvti > 255 ? 255 : uvti);\n   }\n\n   for(row = 0, k = 0; row < 8; ++row) {\n      for(col = 0; col < 8; ++col, ++k) {\n         fdtbl_Y[k]  = 1 / (YTable [stbiw__jpg_ZigZag[k]] * aasf[row] * aasf[col]);\n         fdtbl_UV[k] = 1 / (UVTable[stbiw__jpg_ZigZag[k]] * aasf[row] * aasf[col]);\n      }\n   }\n\n   // Write Headers\n   {\n      static const unsigned char head0[] = { 0xFF,0xD8,0xFF,0xE0,0,0x10,'J','F','I','F',0,1,1,0,0,1,0,1,0,0,0xFF,0xDB,0,0x84,0 };\n      static const unsigned char head2[] = { 0xFF,0xDA,0,0xC,3,1,0,2,0x11,3,0x11,0,0x3F,0 };\n      const unsigned char head1[] = { 0xFF,0xC0,0,0x11,8,(unsigned char)(height>>8),STBIW_UCHAR(height),(unsigned char)(width>>8),STBIW_UCHAR(width),\n                                      3,1,(unsigned char)(subsample?0x22:0x11),0,2,0x11,1,3,0x11,1,0xFF,0xC4,0x01,0xA2,0 };\n      s->func(s->context, (void*)head0, sizeof(head0));\n      s->func(s->context, (void*)YTable, sizeof(YTable));\n      stbiw__putc(s, 1);\n      s->func(s->context, UVTable, sizeof(UVTable));\n      s->func(s->context, (void*)head1, sizeof(head1));\n      s->func(s->context, (void*)(std_dc_luminance_nrcodes+1), sizeof(std_dc_luminance_nrcodes)-1);\n      s->func(s->context, (void*)std_dc_luminance_values, 
sizeof(std_dc_luminance_values));\n      stbiw__putc(s, 0x10); // HTYACinfo\n      s->func(s->context, (void*)(std_ac_luminance_nrcodes+1), sizeof(std_ac_luminance_nrcodes)-1);\n      s->func(s->context, (void*)std_ac_luminance_values, sizeof(std_ac_luminance_values));\n      stbiw__putc(s, 1); // HTUDCinfo\n      s->func(s->context, (void*)(std_dc_chrominance_nrcodes+1), sizeof(std_dc_chrominance_nrcodes)-1);\n      s->func(s->context, (void*)std_dc_chrominance_values, sizeof(std_dc_chrominance_values));\n      stbiw__putc(s, 0x11); // HTUACinfo\n      s->func(s->context, (void*)(std_ac_chrominance_nrcodes+1), sizeof(std_ac_chrominance_nrcodes)-1);\n      s->func(s->context, (void*)std_ac_chrominance_values, sizeof(std_ac_chrominance_values));\n      s->func(s->context, (void*)head2, sizeof(head2));\n   }\n\n   // Encode 8x8 macroblocks\n   {\n      static const unsigned short fillBits[] = {0x7F, 7};\n      int DCY=0, DCU=0, DCV=0;\n      int bitBuf=0, bitCnt=0;\n      // comp == 2 is grey+alpha (alpha is ignored)\n      int ofsG = comp > 2 ? 1 : 0, ofsB = comp > 2 ? 2 : 0;\n      const unsigned char *dataR = (const unsigned char *)data;\n      const unsigned char *dataG = dataR + ofsG;\n      const unsigned char *dataB = dataR + ofsB;\n      int x, y, pos;\n      if(subsample) {\n         for(y = 0; y < height; y += 16) {\n            for(x = 0; x < width; x += 16) {\n               float Y[256], U[256], V[256];\n               for(row = y, pos = 0; row < y+16; ++row) {\n                  // row >= height => use last input row\n                  int clamped_row = (row < height) ? row : height - 1;\n                  int base_p = (stbi__flip_vertically_on_write ? (height-1-clamped_row) : clamped_row)*width*comp;\n                  for(col = x; col < x+16; ++col, ++pos) {\n                     // if col >= width => use pixel from last input column\n                     int p = base_p + ((col < width) ? 
col : (width-1))*comp;\n                     float r = dataR[p], g = dataG[p], b = dataB[p];\n                     Y[pos]= +0.29900f*r + 0.58700f*g + 0.11400f*b - 128;\n                     U[pos]= -0.16874f*r - 0.33126f*g + 0.50000f*b;\n                     V[pos]= +0.50000f*r - 0.41869f*g - 0.08131f*b;\n                  }\n               }\n               DCY = stbiw__jpg_processDU(s, &bitBuf, &bitCnt, Y+0,   16, fdtbl_Y, DCY, YDC_HT, YAC_HT);\n               DCY = stbiw__jpg_processDU(s, &bitBuf, &bitCnt, Y+8,   16, fdtbl_Y, DCY, YDC_HT, YAC_HT);\n               DCY = stbiw__jpg_processDU(s, &bitBuf, &bitCnt, Y+128, 16, fdtbl_Y, DCY, YDC_HT, YAC_HT);\n               DCY = stbiw__jpg_processDU(s, &bitBuf, &bitCnt, Y+136, 16, fdtbl_Y, DCY, YDC_HT, YAC_HT);\n\n               // subsample U,V\n               {\n                  float subU[64], subV[64];\n                  int yy, xx;\n                  for(yy = 0, pos = 0; yy < 8; ++yy) {\n                     for(xx = 0; xx < 8; ++xx, ++pos) {\n                        int j = yy*32+xx*2;\n                        subU[pos] = (U[j+0] + U[j+1] + U[j+16] + U[j+17]) * 0.25f;\n                        subV[pos] = (V[j+0] + V[j+1] + V[j+16] + V[j+17]) * 0.25f;\n                     }\n                  }\n                  DCU = stbiw__jpg_processDU(s, &bitBuf, &bitCnt, subU, 8, fdtbl_UV, DCU, UVDC_HT, UVAC_HT);\n                  DCV = stbiw__jpg_processDU(s, &bitBuf, &bitCnt, subV, 8, fdtbl_UV, DCV, UVDC_HT, UVAC_HT);\n               }\n            }\n         }\n      } else {\n         for(y = 0; y < height; y += 8) {\n            for(x = 0; x < width; x += 8) {\n               float Y[64], U[64], V[64];\n               for(row = y, pos = 0; row < y+8; ++row) {\n                  // row >= height => use last input row\n                  int clamped_row = (row < height) ? row : height - 1;\n                  int base_p = (stbi__flip_vertically_on_write ? 
(height-1-clamped_row) : clamped_row)*width*comp;\n                  for(col = x; col < x+8; ++col, ++pos) {\n                     // if col >= width => use pixel from last input column\n                     int p = base_p + ((col < width) ? col : (width-1))*comp;\n                     float r = dataR[p], g = dataG[p], b = dataB[p];\n                     Y[pos]= +0.29900f*r + 0.58700f*g + 0.11400f*b - 128;\n                     U[pos]= -0.16874f*r - 0.33126f*g + 0.50000f*b;\n                     V[pos]= +0.50000f*r - 0.41869f*g - 0.08131f*b;\n                  }\n               }\n\n               DCY = stbiw__jpg_processDU(s, &bitBuf, &bitCnt, Y, 8, fdtbl_Y,  DCY, YDC_HT, YAC_HT);\n               DCU = stbiw__jpg_processDU(s, &bitBuf, &bitCnt, U, 8, fdtbl_UV, DCU, UVDC_HT, UVAC_HT);\n               DCV = stbiw__jpg_processDU(s, &bitBuf, &bitCnt, V, 8, fdtbl_UV, DCV, UVDC_HT, UVAC_HT);\n            }\n         }\n      }\n\n      // Do the bit alignment of the EOI marker\n      stbiw__jpg_writeBits(s, &bitBuf, &bitCnt, fillBits);\n   }\n\n   // EOI\n   stbiw__putc(s, 0xFF);\n   stbiw__putc(s, 0xD9);\n\n   return 1;\n}\n\nSTBIWDEF int stbi_write_jpg_to_func(stbi_write_func *func, void *context, int x, int y, int comp, const void *data, int quality)\n{\n   stbi__write_context s = { 0 };\n   stbi__start_write_callbacks(&s, func, context);\n   return stbi_write_jpg_core(&s, x, y, comp, (void *) data, quality);\n}\n\n\n#ifndef STBI_WRITE_NO_STDIO\nSTBIWDEF int stbi_write_jpg(char const *filename, int x, int y, int comp, const void *data, int quality)\n{\n   stbi__write_context s = { 0 };\n   if (stbi__start_write_file(&s,filename)) {\n      int r = stbi_write_jpg_core(&s, x, y, comp, data, quality);\n      stbi__end_write_file(&s);\n      return r;\n   } else\n      return 0;\n}\n#endif\n\n#endif // STB_IMAGE_WRITE_IMPLEMENTATION\n\n/* Revision history\n      1.16  (2021-07-11)\n             make Deflate code emit uncompressed blocks when it would otherwise expand\n     
        support writing BMPs with alpha channel\n      1.15  (2020-07-13) unknown\n      1.14  (2020-02-02) updated JPEG writer to downsample chroma channels\n      1.13\n      1.12\n      1.11  (2019-08-11)\n\n      1.10  (2019-02-07)\n             support utf8 filenames in Windows; fix warnings and platform ifdefs\n      1.09  (2018-02-11)\n             fix typo in zlib quality API, improve STBIW_STATIC in C++\n      1.08  (2018-01-29)\n             add stbi__flip_vertically_on_write, external zlib, zlib quality, choose PNG filter\n      1.07  (2017-07-24)\n             doc fix\n      1.06 (2017-07-23)\n             writing JPEG (using Jon Olick's code)\n      1.05   ???\n      1.04 (2017-03-03)\n             monochrome BMP expansion\n      1.03   ???\n      1.02 (2016-04-02)\n             avoid allocating large structures on the stack\n      1.01 (2016-01-16)\n             STBIW_REALLOC_SIZED: support allocators with no realloc support\n             avoid race-condition in crc initialization\n             minor compile issues\n      1.00 (2015-09-14)\n             installable file IO function\n      0.99 (2015-09-13)\n             warning fixes; TGA rle support\n      0.98 (2015-04-08)\n             added STBIW_MALLOC, STBIW_ASSERT etc\n      0.97 (2015-01-18)\n             fixed HDR asserts, rewrote HDR rle logic\n      0.96 (2015-01-17)\n             add HDR output\n             fix monochrome BMP\n      0.95 (2014-08-17)\n             add monochrome TGA output\n      0.94 (2014-05-31)\n             rename private functions to avoid conflicts with stb_image.h\n      0.93 (2014-05-27)\n             warning fixes\n      0.92 (2010-08-01)\n             casts to unsigned char to fix warnings\n      0.91 (2010-07-17)\n             first public release\n      0.90   first internal release\n*/\n\n/*\n------------------------------------------------------------------------------\nThis software is available under 2 licenses -- choose whichever you 
prefer.\n------------------------------------------------------------------------------\nALTERNATIVE A - MIT License\nCopyright (c) 2017 Sean Barrett\nPermission is hereby granted, free of charge, to any person obtaining a copy of\nthis software and associated documentation files (the \"Software\"), to deal in\nthe Software without restriction, including without limitation the rights to\nuse, copy, modify, merge, publish, distribute, sublicense, and/or sell copies\nof the Software, and to permit persons to whom the Software is furnished to do\nso, subject to the following conditions:\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n------------------------------------------------------------------------------\nALTERNATIVE B - Public Domain (www.unlicense.org)\nThis is free and unencumbered software released into the public domain.\nAnyone is free to copy, modify, publish, use, compile, sell, or distribute this\nsoftware, either in source code form or as a compiled binary, for any purpose,\ncommercial or non-commercial, and by any means.\nIn jurisdictions that recognize copyright laws, the author or authors of this\nsoftware dedicate any and all copyright interest in the software to the public\ndomain. We make this dedication for the benefit of the public at large and to\nthe detriment of our heirs and successors. 
We intend this dedication to be an\novert act of relinquishment in perpetuity of all present and future rights to\nthis software under copyright law.\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN\nACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION\nWITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n------------------------------------------------------------------------------\n*/"
  },
  {
    "path": "packages/nx/vendor/stb_image_write/stb_image_write.ml",
    "content": "open Bigarray\n\ntype 'kind buffer = ('a, 'b, c_layout) Array1.t constraint 'kind = ('a, 'b) kind\ntype float32 = (float, float32_elt) kind\ntype int8 = (int, int8_unsigned_elt) kind\n\nexternal png : string -> w:int -> h:int -> c:int -> int8 buffer -> unit\n  = \"ml_stbi_write_png\"\n\nexternal bmp : string -> w:int -> h:int -> c:int -> int8 buffer -> unit\n  = \"ml_stbi_write_bmp\"\n\nexternal tga : string -> w:int -> h:int -> c:int -> int8 buffer -> unit\n  = \"ml_stbi_write_tga\"\n\nexternal hdr : string -> w:int -> h:int -> c:int -> float32 buffer -> unit\n  = \"ml_stbi_write_hdr\"\n\nexternal jpg :\n  string -> w:int -> h:int -> c:int -> quality:int -> int8 buffer -> unit\n  = \"ml_stbi_write_jpg_bytecode\" \"ml_stbi_write_jpg_native\"\n"
  },
  {
    "path": "packages/nx/vendor/stb_image_write/stb_image_write.mli",
    "content": "(* Stb_image_write for OCaml by Frédéric Bour <frederic.bour(_)lakaban.net> To\n   the extent possible under law, the person who associated CC0 with\n   Stb_image_write for OCaml has waived all copyright and related or neighboring\n   rights to Stb_image_write for OCaml.\n\n   You should have received a copy of the CC0 legalcode along with this work. If\n   not, see <http://creativecommons.org/publicdomain/zero/1.0/>.\n\n   Website: https://github.com/let-def/stb_image_write stb_image_write is a\n   public domain library by Sean Barrett, http://nothings.org/ Version 0.1,\n   September 2015 *)\nopen Bigarray\n\n(*####################*)\n(** {1 Image writing} *)\n\n(** [buffer] simply is an alias to a bigarray with c_layout. Two kind of pixel\n    buffers are manipulated:\n    - int8 for images with 8-bit channels\n    - float32 for images with floating point channels\n\n    Content of an image with [c] channels of width [w] and height [h] is\n    represented as a contiguous sequence of items such that:\n    - channels are interleaved\n    - each pixel is made of [c] items\n    - each line is made of [w] pixels\n    - image is made of [h] lines *)\n\ntype 'kind buffer = ('a, 'b, c_layout) Array1.t constraint 'kind = ('a, 'b) kind\ntype float32 = (float, float32_elt) kind\ntype int8 = (int, int8_unsigned_elt) kind\n\nval png : string -> w:int -> h:int -> c:int -> int8 buffer -> unit\nval bmp : string -> w:int -> h:int -> c:int -> int8 buffer -> unit\nval tga : string -> w:int -> h:int -> c:int -> int8 buffer -> unit\nval hdr : string -> w:int -> h:int -> c:int -> float32 buffer -> unit\n\nval jpg :\n  string -> w:int -> h:int -> c:int -> quality:int -> int8 buffer -> unit\n"
  },
  {
    "path": "packages/nx-oxcaml/.ocamlformat",
    "content": "# OCamlFormat configuration file\n\n# Pin the version of OCamlFormat to ensure consistent formatting across different environments.\n# Uncomment and update this line to specify a version:\n# version = 0.26.2\n\n# The formatting style to use. Options include 'default', 'ocamlformat', and 'janestreet'.\n# 'default' is a good starting point for most projects.\nprofile = default\n\n# Parse and format comments in docstrings\nparse-docstrings = true\n\n# Wrap comments and docstrings to fit within the 'max-width'\nwrap-comments = true"
  },
  {
    "path": "packages/nx-oxcaml/AGENTS.md",
    "content": "# nx-oxcaml\n\nnx-oxcaml is a high-performance nx backend using oxcaml's unboxed types and SIMD intrinsics.\n\nit is part of the raven ecosystem. see the root [AGENTS.md](../AGENTS.md) for overall project philosophy and guidelines.\n\n## project structure\n\n- `lib/` - main library (`nx_oxcaml` / `nx-oxcaml`)\n- `test/` - test suite (`test_nx_oxcaml`)\n- `bench/` - benchmarks (`bench_nx_oxcaml`)\n- `vendor/` - vendored dependencies\n\n## build instructions\n\nall dune commands MUST be run from the `nx-oxcaml/` directory with `--root .` to get an isolated build that does not conflict with the parent raven project.\n\n```sh\n# build\ndune build --root .\n\n# run tests\ndune test --root .\n\n# run benchmarks\ndune exec --root . bench/bench_nx_oxcaml.exe\n\n# watch mode\ndune build --root . --watch\n```\n\n## important rules\n\n- ALWAYS use `--root .` with every dune command\n- ALWAYS run dune commands from the `nx-oxcaml/` directory\n- NEVER run dune commands from the raven root — this will cause conflicts with the parent project\n- NEVER stage or commit changes unless explicitly requested\n- NEVER run `dune clean`\n- NEVER use the `--force` argument\n- NEVER try to remove the dune lock file or kill dune when running in watch mode\n- NEVER hide warnings and NEVER hide unused variables by adding an underscore\n"
  },
  {
    "path": "packages/nx-oxcaml/README.md",
    "content": "# Nx OxCaml Backend\n\nAn experimental high-performance backend for Nx that leverages OxCaml's unboxed types.\n\n## Overview\n\nThis backend implements the Nx backend interface using OxCaml's unboxed types for improved performance:\n\n- **Unboxed arithmetic**: Uses `float#`, `int32#`, `int64#` for zero-allocation numeric operations\n- **Parallel execution**: Built-in support for parallel operations (currently sequential, Domain support planned)\n- **Memory efficiency**: Reduces GC pressure by avoiding boxing/unboxing overhead\n\n## Building\n\n```bash\ncd dev/nx-oxcaml\ndune pkg lock --root . \ndune build --root . \n```\n\n## Benchmark Results\n\nSee [bench/](bench/README.md) for comparative benchmark results against the C backend.\n"
  },
  {
    "path": "packages/nx-oxcaml/bench/README.md",
    "content": "# Nx OxCaml Benchmarks\n\nThis directory contains benchmarks comparing the Nx OxCaml backend against the Nx C backend.\n\n### Float64 Performance\n\nHere is a comparison of the wall time for adding two Float64 matrices of varying sizes using both backends:\n\n```mermaid\nxychart-beta\n    title \"Add f64 Wall Time (μs)\"\n    x-axis [\"50x50\", \"200x200\", \"500x500\", \"1000x1000\", \"2000x2000\"]\n    y-axis \"Wall Time (μs)\" 0 --> 850\n    line \"Nx (C)\" [1.35, 42.56, 63.99, 146.03, 836.12]\n    line \"Nx (OxCaml)\" [1.19, 17.00, 64.15, 181.18, 723.55]\n```\n\nOverall, we achieve comparable or better performance with the OxCaml backend after SIMD vectorization with 4x loop unrolling:\n- **Small matrices (50x50)**: OxCaml is **2x faster** for f32 due to lower FFI overhead and SIMD\n- **Medium matrices (200x200)**: OxCaml is **2.5-5x faster** - SIMD loop unrolling shows significant gains\n- **Large matrices (500x500+)**: Performance is comparable between backends, both scale similarly\n- **Very large matrices (2000x2000)**: OxCaml f64 is **15% faster** than C backend\n\n## Results\n\n```\n┌─────────────────────────────────────────┬──────────┬──────────┬─────────┬─────────┬────────────┐\n│ Name                                    │ Wall/Run │  CPU/Run │ mWd/Run │ Speedup │ vs Fastest │\n├─────────────────────────────────────────┼──────────┼──────────┼─────────┼─────────┼────────────┤\n│ Add 500x500 f32 (Nx (C))                │  61.36μs │ 150.41μs │ 462.00w │   0.36x │       277% │\n│ Add 500x500 f32 (Nx (OxCaml))           │  58.72μs │ 205.55μs │ 166.00w │   0.38x │       265% │\n│ Sub 500x500 f32 (Nx (C))                │  59.24μs │ 126.79μs │ 462.00w │   0.37x │       268% │\n│ Sub 500x500 f32 (Nx (OxCaml))           │  57.96μs │ 200.45μs │ 166.00w │   0.38x │       262% │\n│ Mul 500x500 f32 (Nx (C))                │  54.27μs │ 130.21μs │ 462.00w │   0.41x │       245% │\n│ Mul 500x500 f32 (Nx (OxCaml))           │  60.43μs │ 212.87μs │ 166.00w 
│   0.37x │       273% │\n│ Div 500x500 f32 (Nx (C))                │  56.82μs │ 132.29μs │ 470.00w │   0.39x │       257% │\n│ Div 500x500 f32 (Nx (OxCaml))           │  66.67μs │ 231.87μs │ 174.00w │   0.33x │       301% │\n│ Mod 500x500 f32 (Nx (C))                │ 557.00μs │   2.77ms │ 462.00w │   0.04x │      2516% │\n│ Mod 500x500 f32 (Nx (OxCaml))           │ 167.60μs │ 780.53μs │ 166.00w │   0.13x │       757% │\n│ Pow 500x500 f32 (Nx (C))                │ 237.58μs │   1.13ms │ 462.00w │   0.09x │      1073% │\n│ Pow 500x500 f32 (Nx (OxCaml))           │ 180.03μs │ 782.76μs │ 166.00w │   0.12x │       813% │\n│ Max 500x500 f32 (Nx (C))                │  61.00μs │ 128.29μs │ 462.00w │   0.36x │       276% │\n│ Max 500x500 f32 (Nx (OxCaml))           │  61.06μs │ 224.67μs │ 166.00w │   0.36x │       276% │\n│ Min 500x500 f32 (Nx (C))                │  59.89μs │ 125.08μs │ 462.00w │   0.37x │       271% │\n│ Min 500x500 f32 (Nx (OxCaml))           │  59.95μs │ 200.56μs │ 166.00w │   0.37x │       271% │\n│ Neg 500x500 f32 (Nx (C))                │  57.88μs │ 114.88μs │ 251.00w │   0.38x │       261% │\n│ Neg 500x500 f32 (Nx (OxCaml))           │  75.16μs │ 276.47μs │ 138.00w │   0.29x │       340% │\n│ Abs 500x500 f32 (Nx (C))                │  58.40μs │ 117.60μs │ 251.00w │   0.38x │       264% │\n│ Abs 500x500 f32 (Nx (OxCaml))           │  77.14μs │ 283.22μs │ 138.00w │   0.29x │       348% │\n│ Sqrt 500x500 f32 (Nx (C))               │  62.45μs │ 139.13μs │ 251.00w │   0.35x │       282% │\n│ Sqrt 500x500 f32 (Nx (OxCaml))          │  60.38μs │ 208.74μs │ 138.00w │   0.37x │       273% │\n│ Exp 500x500 f32 (Nx (C))                │ 130.54μs │ 516.47μs │ 251.00w │   0.17x │       590% │\n│ Exp 500x500 f32 (Nx (OxCaml))           │ 131.77μs │ 650.60μs │ 138.00w │   0.17x │       595% │\n│ Log 500x500 f32 (Nx (C))                │ 163.99μs │ 698.78μs │ 251.00w │   0.13x │       741% │\n│ Log 500x500 f32 (Nx (OxCaml))           │ 173.22μs │ 906.04μs │ 138.00w 
│   0.13x │       783% │\n│ Sin 500x500 f32 (Nx (C))                │ 219.64μs │ 992.90μs │ 251.00w │   0.10x │       992% │\n│ Sin 500x500 f32 (Nx (OxCaml))           │ 192.34μs │   1.03ms │ 138.00w │   0.12x │       869% │\n│ Cos 500x500 f32 (Nx (C))                │ 227.13μs │   1.07ms │ 251.00w │   0.10x │      1026% │\n│ Cos 500x500 f32 (Nx (OxCaml))           │ 189.97μs │   1.01ms │ 138.00w │   0.12x │       858% │\n│ Reduce_sum 500x500 f32 (Nx (C))         │ 146.36μs │ 438.04μs │ 252.00w │   0.15x │       661% │\n│ Reduce_sum 500x500 f32 (Nx (OxCaml))    │  22.14μs │  22.22μs │  53.00w │   1.00x │       100% │\n│ Reduce_prod 500x500 f32 (Nx (C))        │ 708.62μs │ 708.48μs │ 252.00w │   0.03x │      3201% │\n│ Reduce_prod 500x500 f32 (Nx (OxCaml))   │  22.14μs │  22.30μs │  53.00w │   1.00x │       100% │\n│ Reduce_max 500x500 f32 (Nx (C))         │ 778.82μs │ 777.59μs │ 252.00w │   0.03x │      3518% │\n│ Reduce_max 500x500 f32 (Nx (OxCaml))    │  22.30μs │  22.26μs │  53.00w │   0.99x │       101% │\n│ Reduce_min 500x500 f32 (Nx (C))         │ 789.01μs │ 788.46μs │ 252.00w │   0.03x │      3565% │\n│ Reduce_min 500x500 f32 (Nx (OxCaml))    │  22.22μs │  22.22μs │  53.00w │   1.00x │       100% │\n│ Add 500x500 f64 (Nx (C))                │  67.92μs │ 175.73μs │ 462.00w │   0.33x │       307% │\n│ Add 500x500 f64 (Nx (OxCaml))           │  70.31μs │ 254.80μs │ 166.00w │   0.31x │       318% │\n│ Sub 500x500 f64 (Nx (C))                │  66.49μs │ 167.75μs │ 462.00w │   0.33x │       300% │\n│ Sub 500x500 f64 (Nx (OxCaml))           │  70.09μs │ 281.07μs │ 166.00w │   0.32x │       317% │\n│ Mul 500x500 f64 (Nx (C))                │  66.98μs │ 171.22μs │ 462.00w │   0.33x │       303% │\n│ Mul 500x500 f64 (Nx (OxCaml))           │  69.77μs │ 260.14μs │ 166.00w │   0.32x │       315% │\n│ Div 500x500 f64 (Nx (C))                │  66.11μs │ 175.33μs │ 470.00w │   0.33x │       299% │\n│ Div 500x500 f64 (Nx (OxCaml))           │  79.08μs │ 338.32μs │ 174.00w 
│   0.28x │       357% │\n│ Mod 500x500 f64 (Nx (C))                │ 523.08μs │   2.74ms │ 462.00w │   0.04x │      2363% │\n│ Mod 500x500 f64 (Nx (OxCaml))           │ 146.25μs │ 736.75μs │ 166.00w │   0.15x │       661% │\n│ Pow 500x500 f64 (Nx (C))                │ 446.33μs │   2.35ms │ 462.00w │   0.05x │      2016% │\n│ Pow 500x500 f64 (Nx (OxCaml))           │ 167.71μs │ 775.31μs │ 166.00w │   0.13x │       758% │\n│ Max 500x500 f64 (Nx (C))                │  66.57μs │ 172.09μs │ 462.00w │   0.33x │       301% │\n│ Max 500x500 f64 (Nx (OxCaml))           │  81.05μs │ 324.25μs │ 166.00w │   0.27x │       366% │\n│ Min 500x500 f64 (Nx (C))                │  46.07μs │ 183.69μs │ 462.00w │   0.48x │       208% │\n│ Min 500x500 f64 (Nx (OxCaml))           │  63.10μs │ 296.07μs │ 166.00w │   0.35x │       285% │\n│ Neg 500x500 f64 (Nx (C))                │  48.19μs │ 182.14μs │ 251.00w │   0.46x │       218% │\n│ Neg 500x500 f64 (Nx (OxCaml))           │  86.61μs │ 441.74μs │ 138.00w │   0.26x │       391% │\n│ Abs 500x500 f64 (Nx (C))                │  49.16μs │ 179.88μs │ 251.00w │   0.45x │       222% │\n│ Abs 500x500 f64 (Nx (OxCaml))           │  85.48μs │ 438.08μs │ 138.00w │   0.26x │       386% │\n│ Sqrt 500x500 f64 (Nx (C))               │  50.34μs │ 202.63μs │ 251.00w │   0.44x │       227% │\n│ Sqrt 500x500 f64 (Nx (OxCaml))          │  64.40μs │ 321.38μs │ 138.00w │   0.34x │       291% │\n│ Exp 500x500 f64 (Nx (C))                │ 151.92μs │ 700.39μs │ 251.00w │   0.15x │       686% │\n│ Exp 500x500 f64 (Nx (OxCaml))           │ 181.28μs │ 957.84μs │ 138.00w │   0.12x │       819% │\n│ Log 500x500 f64 (Nx (C))                │ 443.62μs │   1.42ms │ 251.00w │   0.05x │      2004% │\n│ Log 500x500 f64 (Nx (OxCaml))           │ 124.17μs │ 656.37μs │ 138.00w │   0.18x │       561% │\n│ Sin 500x500 f64 (Nx (C))                │ 243.21μs │   1.19ms │ 251.00w │   0.09x │      1099% │\n│ Sin 500x500 f64 (Nx (OxCaml))           │ 106.87μs │ 556.18μs │ 138.00w 
│   0.21x │       483% │\n│ Cos 500x500 f64 (Nx (C))                │ 238.71μs │   1.19ms │ 251.00w │   0.09x │      1078% │\n│ Cos 500x500 f64 (Nx (OxCaml))           │ 158.79μs │ 834.74μs │ 138.00w │   0.14x │       717% │\n│ Reduce_sum 500x500 f64 (Nx (C))         │ 123.71μs │ 484.77μs │ 252.00w │   0.18x │       559% │\n│ Reduce_sum 500x500 f64 (Nx (OxCaml))    │  44.50μs │  44.89μs │  53.00w │   0.50x │       201% │\n│ Reduce_prod 500x500 f64 (Nx (C))        │ 711.25μs │ 709.47μs │ 252.00w │   0.03x │      3213% │\n│ Reduce_prod 500x500 f64 (Nx (OxCaml))   │  44.01μs │  43.85μs │  53.00w │   0.50x │       199% │\n│ Reduce_max 500x500 f64 (Nx (C))         │ 779.10μs │ 777.41μs │ 252.00w │   0.03x │      3520% │\n│ Reduce_max 500x500 f64 (Nx (OxCaml))    │  43.90μs │  43.96μs │  53.00w │   0.50x │       198% │\n│ Reduce_min 500x500 f64 (Nx (C))         │ 780.50μs │ 779.36μs │ 252.00w │   0.03x │      3526% │\n│ Reduce_min 500x500 f64 (Nx (OxCaml))    │  44.47μs │  44.38μs │  53.00w │   0.50x │       201% │\n│ Add 1000x1000 f32 (Nx (C))              │  79.39μs │ 293.15μs │ 462.00w │   0.28x │       359% │\n│ Add 1000x1000 f32 (Nx (OxCaml))         │  98.17μs │ 417.84μs │ 166.00w │   0.23x │       444% │\n│ Sub 1000x1000 f32 (Nx (C))              │  84.05μs │ 275.53μs │ 462.00w │   0.26x │       380% │\n│ Sub 1000x1000 f32 (Nx (OxCaml))         │  96.02μs │ 407.21μs │ 166.00w │   0.23x │       434% │\n│ Mul 1000x1000 f32 (Nx (C))              │  87.29μs │ 267.21μs │ 462.00w │   0.25x │       394% │\n│ Mul 1000x1000 f32 (Nx (OxCaml))         │  95.80μs │ 412.65μs │ 166.00w │   0.23x │       433% │\n│ Div 1000x1000 f32 (Nx (C))              │  86.76μs │ 268.92μs │ 470.00w │   0.26x │       392% │\n│ Div 1000x1000 f32 (Nx (OxCaml))         │ 115.02μs │ 517.98μs │ 174.00w │   0.19x │       520% │\n│ Mod 1000x1000 f32 (Nx (C))              │   1.93ms │  10.74ms │ 462.00w │   0.01x │      8735% │\n│ Mod 1000x1000 f32 (Nx (OxCaml))         │ 466.79μs │   2.77ms │ 166.00w 
│   0.05x │      2109% │\n│ Pow 1000x1000 f32 (Nx (C))              │ 797.45μs │   4.49ms │ 462.00w │   0.03x │      3603% │\n│ Pow 1000x1000 f32 (Nx (OxCaml))         │ 472.48μs │   2.84ms │ 166.00w │   0.05x │      2135% │\n│ Max 1000x1000 f32 (Nx (C))              │  83.15μs │ 271.90μs │ 462.00w │   0.27x │       376% │\n│ Max 1000x1000 f32 (Nx (OxCaml))         │  96.39μs │ 409.00μs │ 166.00w │   0.23x │       435% │\n│ Min 1000x1000 f32 (Nx (C))              │  86.76μs │ 266.46μs │ 462.00w │   0.26x │       392% │\n│ Min 1000x1000 f32 (Nx (OxCaml))         │  94.88μs │ 406.41μs │ 166.00w │   0.23x │       429% │\n│ Neg 1000x1000 f32 (Nx (C))              │  84.45μs │ 248.04μs │ 251.00w │   0.26x │       382% │\n│ Neg 1000x1000 f32 (Nx (OxCaml))         │ 150.24μs │ 729.42μs │ 138.00w │   0.15x │       679% │\n│ Abs 1000x1000 f32 (Nx (C))              │  76.43μs │ 233.95μs │ 251.00w │   0.29x │       345% │\n│ Abs 1000x1000 f32 (Nx (OxCaml))         │ 145.87μs │ 717.71μs │ 138.00w │   0.15x │       659% │\n│ Sqrt 1000x1000 f32 (Nx (C))             │  85.35μs │ 288.38μs │ 251.00w │   0.26x │       386% │\n│ Sqrt 1000x1000 f32 (Nx (OxCaml))        │  98.74μs │ 430.70μs │ 138.00w │   0.22x │       446% │\n│ Exp 1000x1000 f32 (Nx (C))              │ 362.78μs │   1.94ms │ 251.00w │   0.06x │      1639% │\n│ Exp 1000x1000 f32 (Nx (OxCaml))         │ 462.83μs │   2.76ms │ 138.00w │   0.05x │      2091% │\n│ Log 1000x1000 f32 (Nx (C))              │ 493.49μs │   2.72ms │ 251.00w │   0.04x │      2229% │\n│ Log 1000x1000 f32 (Nx (OxCaml))         │ 392.15μs │   2.35ms │ 138.00w │   0.06x │      1772% │\n│ Sin 1000x1000 f32 (Nx (C))              │ 796.76μs │   4.05ms │ 251.00w │   0.03x │      3600% │\n│ Sin 1000x1000 f32 (Nx (OxCaml))         │ 393.62μs │   2.36ms │ 138.00w │   0.06x │      1778% │\n│ Cos 1000x1000 f32 (Nx (C))              │ 807.11μs │   4.22ms │ 251.00w │   0.03x │      3646% │\n│ Cos 1000x1000 f32 (Nx (OxCaml))         │ 407.78μs │   2.29ms │ 138.00w 
│   0.05x │      1842% │\n│ Reduce_sum 1000x1000 f32 (Nx (C))       │ 291.83μs │   1.22ms │ 252.00w │   0.08x │      1318% │\n│ Reduce_sum 1000x1000 f32 (Nx (OxCaml))  │  89.58μs │  89.17μs │  53.00w │   0.25x │       405% │\n│ Reduce_prod 1000x1000 f32 (Nx (C))      │   2.93ms │   2.86ms │ 252.00w │   0.01x │     13237% │\n│ Reduce_prod 1000x1000 f32 (Nx (OxCaml)) │  89.32μs │  89.15μs │  53.00w │   0.25x │       404% │\n│ Reduce_max 1000x1000 f32 (Nx (C))       │   3.11ms │   3.11ms │ 252.00w │   0.01x │     14041% │\n│ Reduce_max 1000x1000 f32 (Nx (OxCaml))  │  91.41μs │  90.35μs │  53.00w │   0.24x │       413% │\n│ Reduce_min 1000x1000 f32 (Nx (C))       │   3.12ms │   3.12ms │ 252.00w │   0.01x │     14102% │\n│ Reduce_min 1000x1000 f32 (Nx (OxCaml))  │  91.76μs │  90.69μs │  53.00w │   0.24x │       415% │\n│ Add 1000x1000 f64 (Nx (C))              │ 157.87μs │ 740.54μs │ 462.00w │   0.14x │       713% │\n│ Add 1000x1000 f64 (Nx (OxCaml))         │ 195.23μs │ 981.89μs │ 166.00w │   0.11x │       882% │\n│ Sub 1000x1000 f64 (Nx (C))              │ 175.50μs │ 809.93μs │ 462.00w │   0.13x │       793% │\n│ Sub 1000x1000 f64 (Nx (OxCaml))         │ 221.59μs │   1.07ms │ 166.00w │   0.10x │      1001% │\n│ Mul 1000x1000 f64 (Nx (C))              │ 171.58μs │ 749.02μs │ 462.00w │   0.13x │       775% │\n│ Mul 1000x1000 f64 (Nx (OxCaml))         │ 181.17μs │ 972.61μs │ 166.00w │   0.12x │       818% │\n│ Div 1000x1000 f64 (Nx (C))              │ 186.54μs │ 726.85μs │ 470.00w │   0.12x │       843% │\n│ Div 1000x1000 f64 (Nx (OxCaml))         │ 204.95μs │   1.12ms │ 174.00w │   0.11x │       926% │\n│ Mod 1000x1000 f64 (Nx (C))              │   2.07ms │  10.87ms │ 462.00w │   0.01x │      9354% │\n│ Mod 1000x1000 f64 (Nx (OxCaml))         │  72.10ms │ 522.65ms │ 166.00w │   0.00x │    325706% │\n│ Pow 1000x1000 f64 (Nx (C))              │   1.63ms │   9.32ms │ 462.00w │   0.01x │      7361% │\n│ Pow 1000x1000 f64 (Nx (OxCaml))         │   3.50ms │  19.72ms │ 166.00w 
│   0.01x │     15790% │\n│ Max 1000x1000 f64 (Nx (C))              │ 184.49μs │ 799.52μs │ 462.00w │   0.12x │       833% │\n│ Max 1000x1000 f64 (Nx (OxCaml))         │ 184.99μs │   1.01ms │ 166.00w │   0.12x │       836% │\n│ Min 1000x1000 f64 (Nx (C))              │ 155.90μs │ 682.97μs │ 462.00w │   0.14x │       704% │\n│ Min 1000x1000 f64 (Nx (OxCaml))         │ 179.69μs │ 962.78μs │ 166.00w │   0.12x │       812% │\n│ Neg 1000x1000 f64 (Nx (C))              │ 114.30μs │ 437.55μs │ 251.00w │   0.19x │       516% │\n│ Neg 1000x1000 f64 (Nx (OxCaml))         │ 255.21μs │   1.38ms │ 138.00w │   0.09x │      1153% │\n│ Abs 1000x1000 f64 (Nx (C))              │ 109.34μs │ 445.23μs │ 251.00w │   0.20x │       494% │\n│ Abs 1000x1000 f64 (Nx (OxCaml))         │ 269.10μs │   1.40ms │ 138.00w │   0.08x │      1216% │\n│ Sqrt 1000x1000 f64 (Nx (C))             │ 123.37μs │ 509.52μs │ 251.00w │   0.18x │       557% │\n│ Sqrt 1000x1000 f64 (Nx (OxCaml))        │ 148.50μs │ 730.49μs │ 138.00w │   0.15x │       671% │\n│ Exp 1000x1000 f64 (Nx (C))              │ 511.82μs │   2.79ms │ 251.00w │   0.04x │      2312% │\n│ Exp 1000x1000 f64 (Nx (OxCaml))         │   1.28ms │   7.62ms │ 138.00w │   0.02x │      5781% │\n│ Log 1000x1000 f64 (Nx (C))              │ 952.15μs │   5.19ms │ 251.00w │   0.02x │      4302% │\n│ Log 1000x1000 f64 (Nx (OxCaml))         │ 716.59μs │   4.14ms │ 138.00w │   0.03x │      3237% │\n│ Sin 1000x1000 f64 (Nx (C))              │ 853.96μs │   4.74ms │ 251.00w │   0.03x │      3858% │\n│ Sin 1000x1000 f64 (Nx (OxCaml))         │   2.37ms │  14.14ms │ 138.00w │   0.01x │     10726% │\n│ Cos 1000x1000 f64 (Nx (C))              │ 857.64μs │   4.81ms │ 251.00w │   0.03x │      3875% │\n│ Cos 1000x1000 f64 (Nx (OxCaml))         │   2.35ms │  14.17ms │ 138.00w │   0.01x │     10620% │\n│ Reduce_sum 1000x1000 f64 (Nx (C))       │ 279.41μs │   1.24ms │ 252.00w │   0.08x │      1262% │\n│ Reduce_sum 1000x1000 f64 (Nx (OxCaml))  │ 182.97μs │ 183.16μs │  53.00w 
│   0.12x │       827% │\n│ Reduce_prod 1000x1000 f64 (Nx (C))      │   2.84ms │   2.83ms │ 252.00w │   0.01x │     12816% │\n│ Reduce_prod 1000x1000 f64 (Nx (OxCaml)) │ 184.18μs │ 184.10μs │  53.00w │   0.12x │       832% │\n│ Reduce_max 1000x1000 f64 (Nx (C))       │   3.12ms │   3.11ms │ 252.00w │   0.01x │     14098% │\n│ Reduce_max 1000x1000 f64 (Nx (OxCaml))  │ 184.97μs │ 184.31μs │  53.00w │   0.12x │       836% │\n│ Reduce_min 1000x1000 f64 (Nx (C))       │   3.13ms │   3.12ms │ 252.00w │   0.01x │     14158% │\n│ Reduce_min 1000x1000 f64 (Nx (OxCaml))  │ 196.13μs │ 192.41μs │  53.00w │   0.11x │       886% │\n└─────────────────────────────────────────┴──────────┴──────────┴─────────┴─────────┴────────────┘\n```\n"
  },
  {
    "path": "packages/nx-oxcaml/bench/bench_nx_c.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nlet () = Thumper.run \"nx_c\" (Bench_nx_common.benchmarks ())\n"
  },
  {
    "path": "packages/nx-oxcaml/bench/bench_nx_common.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nlet sizes = [ 500; 1000 ]\n\nlet bench_name op size dtype =\n  Printf.sprintf \"%s %dx%d %s\" op size size dtype\n\nlet ops_f32 ~size =\n  let shape = [| size; size |] in\n  let a = Nx.rand Nx.Float32 shape in\n  let b = Nx.rand Nx.Float32 shape in\n  [\n    (\"Add\", fun () -> (Nx.add a b));\n    (\"Matmul\", fun () -> (Nx.matmul a b));\n  ]\n\nlet ops_f64 ~size =\n  let shape = [| size; size |] in\n  let a = Nx.rand Nx.Float64 shape in\n  let b = Nx.rand Nx.Float64 shape in\n  [\n    (\"Add\", fun () -> (Nx.add a b));\n    (\"Matmul\", fun () -> (Nx.matmul a b));\n  ]\n\nlet benchmarks () =\n  List.concat_map\n    (fun size ->\n      let f32 =\n        List.map\n          (fun (op, fn) -> Thumper.bench (bench_name op size \"f32\") fn)\n          (ops_f32 ~size)\n      in\n      let f64 =\n        List.map\n          (fun (op, fn) -> Thumper.bench (bench_name op size \"f64\") fn)\n          (ops_f64 ~size)\n      in\n      f32 @ f64)\n    sizes\n"
  },
  {
    "path": "packages/nx-oxcaml/bench/bench_nx_oxcaml.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nlet () = Thumper.run \"nx_oxcaml\" (Bench_nx_common.benchmarks ())\n"
  },
  {
    "path": "packages/nx-oxcaml/bench/dune",
    "content": "(library\n (name bench_nx_common)\n (modules bench_nx_common)\n (libraries nx thumper))\n\n(executable\n (name bench_nx_c)\n (modules bench_nx_c)\n (libraries bench_nx_common nx.c))\n\n(executable\n (name bench_nx_oxcaml)\n (modules bench_nx_oxcaml)\n (libraries bench_nx_common nx_oxcaml))\n\n(rule\n (alias runtest)\n (action\n  (progn\n   (run %{exe:bench_nx_c.exe} -q)\n   (diff? nx_c.thumper nx_c.thumper.corrected))))\n\n(rule\n (alias runtest)\n (action\n  (progn\n   (run %{exe:bench_nx_oxcaml.exe} -q)\n   (diff? nx_oxcaml.thumper nx_oxcaml.thumper.corrected))))\n"
  },
  {
    "path": "packages/nx-oxcaml/dune-project",
    "content": "(lang dune 3.21)\n\n(name nx-oxcaml)\n\n(generate_opam_files true)\n\n(using oxcaml 0.1)\n\n(source\n (github raven-ml/raven))\n\n(authors \"Thibaut Mattio\")\n\n(maintainers \"Thibaut Mattio <thibaut.mattio@gmail.com>\")\n\n(license ISC)\n\n(pin\n (url ../../)\n (package\n  (name nx)))\n\n(pin\n (url git+https://github.com/nirnayroy/ocamlfind)\n (package\n  (name ocamlfind)))\n\n(pin\n (url \"git+https://github.com/Sudha247/ocamlbuild#oxcaml+dune\")\n (package\n   (name ocamlbuild)\n   (version 0.15.0+ox)))\n\n(package\n (name nx-oxcaml)\n (synopsis \"High-performance Nx backend using OxCaml's unboxed types\")\n (description\n  \"An experimental backend for Nx that leverages OxCaml's unboxed types for improved performance.\")\n (depends\n  (ocaml-variants\n   (= 5.2.0+ox))\n  dune\n  nx))\n"
  },
  {
    "path": "packages/nx-oxcaml/dune-workspace",
    "content": "(lang dune 3.20)\n\n(repository\n (name oxcaml)\n (url git+https://github.com/oxcaml/opam-repository))\n\n(lock_dir\n (repositories upstream overlay oxcaml))\n"
  },
  {
    "path": "packages/nx-oxcaml/lib/binary_ops/op_add.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Import\n\nlet add_float64 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n8 = n - 7 in\n    while !i < n8 do\n      let idx = !i in\n      let a0 = Float64x2.Array.unsafe_get a_arr ~idx:(a_base + idx) in\n      let b0 = Float64x2.Array.unsafe_get b_arr ~idx:(b_base + idx) in\n      let a1 = Float64x2.Array.unsafe_get a_arr ~idx:(a_base + idx + 2) in\n      let b1 = Float64x2.Array.unsafe_get b_arr ~idx:(b_base + idx + 2) in\n      let a2 = Float64x2.Array.unsafe_get a_arr ~idx:(a_base + idx + 4) in\n      let b2 = Float64x2.Array.unsafe_get b_arr ~idx:(b_base + idx + 4) in\n      let a3 = Float64x2.Array.unsafe_get a_arr ~idx:(a_base + idx + 6) in\n      let b3 = Float64x2.Array.unsafe_get b_arr ~idx:(b_base + idx + 6) in\n      Float64x2.Array.unsafe_set out_arr ~idx:(out_base + idx) (Float64x2.add a0 b0);\n      Float64x2.Array.unsafe_set out_arr ~idx:(out_base + idx + 2) (Float64x2.add a1 b1);\n      Float64x2.Array.unsafe_set out_arr ~idx:(out_base + idx + 4) (Float64x2.add a2 b2);\n      Float64x2.Array.unsafe_set out_arr ~idx:(out_base + idx + 6) (Float64x2.add a3 b3);\n      i := idx + 8\n    done;\n    let n2 = n - 1 in\n    while !i < n2 do\n      let idx = !i in\n      let a_vec = Float64x2.Array.unsafe_get a_arr ~idx:(a_base + idx) in\n      let b_vec = Float64x2.Array.unsafe_get b_arr ~idx:(b_base + idx) in\n      Float64x2.Array.unsafe_set out_arr ~idx:(out_base + 
idx) (Float64x2.add a_vec b_vec);\n      i := idx + 2\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Float_u.add a_val b_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Float_u.add a_val b_val)\n    done\n\nlet add_float32 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n16 = n - 15 in\n    while !i < n16 do\n      let idx = !i in\n      let a0 = Float32x4.Array.unsafe_get a_arr ~idx:(a_base + idx) in\n      let b0 = Float32x4.Array.unsafe_get b_arr ~idx:(b_base + idx) in\n      let a1 = Float32x4.Array.unsafe_get a_arr ~idx:(a_base + idx + 4) 
in\n      let b1 = Float32x4.Array.unsafe_get b_arr ~idx:(b_base + idx + 4) in\n      let a2 = Float32x4.Array.unsafe_get a_arr ~idx:(a_base + idx + 8) in\n      let b2 = Float32x4.Array.unsafe_get b_arr ~idx:(b_base + idx + 8) in\n      let a3 = Float32x4.Array.unsafe_get a_arr ~idx:(a_base + idx + 12) in\n      let b3 = Float32x4.Array.unsafe_get b_arr ~idx:(b_base + idx + 12) in\n      Float32x4.Array.unsafe_set out_arr ~idx:(out_base + idx) (Float32x4.add a0 b0);\n      Float32x4.Array.unsafe_set out_arr ~idx:(out_base + idx + 4) (Float32x4.add a1 b1);\n      Float32x4.Array.unsafe_set out_arr ~idx:(out_base + idx + 8) (Float32x4.add a2 b2);\n      Float32x4.Array.unsafe_set out_arr ~idx:(out_base + idx + 12) (Float32x4.add a3 b3);\n      i := idx + 16\n    done;\n    let n4 = n - 3 in\n    while !i < n4 do\n      let idx = !i in\n      let a_vec = Float32x4.Array.unsafe_get a_arr ~idx:(a_base + idx) in\n      let b_vec = Float32x4.Array.unsafe_get b_arr ~idx:(b_base + idx) in\n      Float32x4.Array.unsafe_set out_arr ~idx:(out_base + idx) (Float32x4.add a_vec b_vec);\n      i := idx + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Float32_u.add a_val b_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx 
a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Float32_u.add a_val b_val)\n    done\n\nlet add_int8 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let b0 = Array.unsafe_get b_arr (b_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let b1 = Array.unsafe_get b_arr (b_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let b2 = Array.unsafe_get b_arr (b_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      let b3 = Array.unsafe_get b_arr (b_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Int8_u.add a0 b0);\n      Array.unsafe_set out_arr (out_base + i1) (Int8_u.add a1 b1);\n      Array.unsafe_set out_arr (out_base + i2) (Int8_u.add a2 b2);\n      Array.unsafe_set out_arr (out_base + i3) (Int8_u.add a3 b3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Int8_u.add a_val b_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape 
va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Int8_u.add a_val b_val)\n    done\n\nlet add_int16 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let b0 = Array.unsafe_get b_arr (b_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let b1 = Array.unsafe_get b_arr (b_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let b2 = Array.unsafe_get b_arr (b_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      let b3 = Array.unsafe_get b_arr (b_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Int16_u.add a0 b0);\n      Array.unsafe_set 
out_arr (out_base + i1) (Int16_u.add a1 b1);\n      Array.unsafe_set out_arr (out_base + i2) (Int16_u.add a2 b2);\n      Array.unsafe_set out_arr (out_base + i3) (Int16_u.add a3 b3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Int16_u.add a_val b_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Int16_u.add a_val b_val)\n    done\n\nlet add_int32 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n16 = n - 15 in\n    while !i < n16 do\n      let idx = !i in\n      let a0 = Int32x4.Array.unsafe_get a_arr ~idx:(a_base + idx) in\n  
    let b0 = Int32x4.Array.unsafe_get b_arr ~idx:(b_base + idx) in\n      let a1 = Int32x4.Array.unsafe_get a_arr ~idx:(a_base + idx + 4) in\n      let b1 = Int32x4.Array.unsafe_get b_arr ~idx:(b_base + idx + 4) in\n      let a2 = Int32x4.Array.unsafe_get a_arr ~idx:(a_base + idx + 8) in\n      let b2 = Int32x4.Array.unsafe_get b_arr ~idx:(b_base + idx + 8) in\n      let a3 = Int32x4.Array.unsafe_get a_arr ~idx:(a_base + idx + 12) in\n      let b3 = Int32x4.Array.unsafe_get b_arr ~idx:(b_base + idx + 12) in\n      Int32x4.Array.unsafe_set out_arr ~idx:(out_base + idx) (Int32x4.add a0 b0);\n      Int32x4.Array.unsafe_set out_arr ~idx:(out_base + idx + 4) (Int32x4.add a1 b1);\n      Int32x4.Array.unsafe_set out_arr ~idx:(out_base + idx + 8) (Int32x4.add a2 b2);\n      Int32x4.Array.unsafe_set out_arr ~idx:(out_base + idx + 12) (Int32x4.add a3 b3);\n      i := idx + 16\n    done;\n    let n4 = n - 3 in\n    while !i < n4 do\n      let idx = !i in\n      let a_vec = Int32x4.Array.unsafe_get a_arr ~idx:(a_base + idx) in\n      let b_vec = Int32x4.Array.unsafe_get b_arr ~idx:(b_base + idx) in\n      Int32x4.Array.unsafe_set out_arr ~idx:(out_base + idx) (Int32x4.add a_vec b_vec);\n      i := idx + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Int32_u.add a_val b_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 
1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Int32_u.add a_val b_val)\n    done\n\nlet add_int64 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n8 = n - 7 in\n    while !i < n8 do\n      let idx = !i in\n      let a0 = Int64x2.Array.unsafe_get a_arr ~idx:(a_base + idx) in\n      let b0 = Int64x2.Array.unsafe_get b_arr ~idx:(b_base + idx) in\n      let a1 = Int64x2.Array.unsafe_get a_arr ~idx:(a_base + idx + 2) in\n      let b1 = Int64x2.Array.unsafe_get b_arr ~idx:(b_base + idx + 2) in\n      let a2 = Int64x2.Array.unsafe_get a_arr ~idx:(a_base + idx + 4) in\n      let b2 = Int64x2.Array.unsafe_get b_arr ~idx:(b_base + idx + 4) in\n      let a3 = Int64x2.Array.unsafe_get a_arr ~idx:(a_base + idx + 6) in\n      let b3 = Int64x2.Array.unsafe_get b_arr ~idx:(b_base + idx + 6) in\n      Int64x2.Array.unsafe_set out_arr ~idx:(out_base + idx) (Int64x2.add a0 b0);\n      Int64x2.Array.unsafe_set out_arr ~idx:(out_base + idx + 2) (Int64x2.add a1 b1);\n      Int64x2.Array.unsafe_set out_arr ~idx:(out_base + idx + 4) (Int64x2.add a2 b2);\n      Int64x2.Array.unsafe_set out_arr ~idx:(out_base + idx + 6) (Int64x2.add a3 b3);\n      i := idx + 8\n    done;\n    let n2 = n - 1 in\n    while !i < n2 do\n      let idx = !i in\n      let a_vec = 
Int64x2.Array.unsafe_get a_arr ~idx:(a_base + idx) in\n      let b_vec = Int64x2.Array.unsafe_get b_arr ~idx:(b_base + idx) in\n      Int64x2.Array.unsafe_set out_arr ~idx:(out_base + idx) (Int64x2.add a_vec b_vec);\n      i := idx + 2\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Int64_u.add a_val b_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Int64_u.add a_val b_val)\n    done\n"
  },
  {
    "path": "packages/nx-oxcaml/lib/binary_ops/op_atan2.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Import\n\nlet atan2_float64 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let b0 = Array.unsafe_get b_arr (b_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let b1 = Array.unsafe_get b_arr (b_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let b2 = Array.unsafe_get b_arr (b_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      let b3 = Array.unsafe_get b_arr (b_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Float_u.atan2 a0 b0);\n      Array.unsafe_set out_arr (out_base + i1) (Float_u.atan2 a1 b1);\n      Array.unsafe_set out_arr (out_base + i2) (Float_u.atan2 a2 b2);\n      Array.unsafe_set out_arr (out_base + i3) (Float_u.atan2 a3 b3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Float_u.atan2 a_val b_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides 
= View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Float_u.atan2 a_val b_val)\n    done\n\nlet atan2_float32 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let b0 = Array.unsafe_get b_arr (b_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let b1 = Array.unsafe_get b_arr (b_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let b2 = Array.unsafe_get b_arr (b_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      let b3 = Array.unsafe_get b_arr (b_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Float32_u.atan2 a0 b0);\n      Array.unsafe_set out_arr (out_base + i1) (Float32_u.atan2 a1 b1);\n      Array.unsafe_set out_arr 
(out_base + i2) (Float32_u.atan2 a2 b2);\n      Array.unsafe_set out_arr (out_base + i3) (Float32_u.atan2 a3 b3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Float32_u.atan2 a_val b_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Float32_u.atan2 a_val b_val)\n    done\n"
  },
  {
    "path": "packages/nx-oxcaml/lib/binary_ops/op_fdiv.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Import\n\nlet fdiv_float64 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n8 = n - 7 in\n    while !i < n8 do\n      let idx = !i in\n      let a0 = Float64x2.Array.unsafe_get a_arr ~idx:(a_base + idx) in\n      let b0 = Float64x2.Array.unsafe_get b_arr ~idx:(b_base + idx) in\n      let a1 = Float64x2.Array.unsafe_get a_arr ~idx:(a_base + idx + 2) in\n      let b1 = Float64x2.Array.unsafe_get b_arr ~idx:(b_base + idx + 2) in\n      let a2 = Float64x2.Array.unsafe_get a_arr ~idx:(a_base + idx + 4) in\n      let b2 = Float64x2.Array.unsafe_get b_arr ~idx:(b_base + idx + 4) in\n      let a3 = Float64x2.Array.unsafe_get a_arr ~idx:(a_base + idx + 6) in\n      let b3 = Float64x2.Array.unsafe_get b_arr ~idx:(b_base + idx + 6) in\n      Float64x2.Array.unsafe_set out_arr ~idx:(out_base + idx) (Float64x2.div a0 b0);\n      Float64x2.Array.unsafe_set out_arr ~idx:(out_base + idx + 2) (Float64x2.div a1 b1);\n      Float64x2.Array.unsafe_set out_arr ~idx:(out_base + idx + 4) (Float64x2.div a2 b2);\n      Float64x2.Array.unsafe_set out_arr ~idx:(out_base + idx + 6) (Float64x2.div a3 b3);\n      i := idx + 8\n    done;\n    let n2 = n - 1 in\n    while !i < n2 do\n      let idx = !i in\n      let a_vec = Float64x2.Array.unsafe_get a_arr ~idx:(a_base + idx) in\n      let b_vec = Float64x2.Array.unsafe_get b_arr ~idx:(b_base + idx) in\n      Float64x2.Array.unsafe_set out_arr ~idx:(out_base + 
idx) (Float64x2.div a_vec b_vec);\n      i := idx + 2\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Float_u.div a_val b_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Float_u.div a_val b_val)\n    done\n\nlet fdiv_float32 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n16 = n - 15 in\n    while !i < n16 do\n      let idx = !i in\n      let a0 = Float32x4.Array.unsafe_get a_arr ~idx:(a_base + idx) in\n      let b0 = Float32x4.Array.unsafe_get b_arr ~idx:(b_base + idx) in\n      let a1 = Float32x4.Array.unsafe_get a_arr ~idx:(a_base + idx + 4) 
in\n      let b1 = Float32x4.Array.unsafe_get b_arr ~idx:(b_base + idx + 4) in\n      let a2 = Float32x4.Array.unsafe_get a_arr ~idx:(a_base + idx + 8) in\n      let b2 = Float32x4.Array.unsafe_get b_arr ~idx:(b_base + idx + 8) in\n      let a3 = Float32x4.Array.unsafe_get a_arr ~idx:(a_base + idx + 12) in\n      let b3 = Float32x4.Array.unsafe_get b_arr ~idx:(b_base + idx + 12) in\n      Float32x4.Array.unsafe_set out_arr ~idx:(out_base + idx) (Float32x4.div a0 b0);\n      Float32x4.Array.unsafe_set out_arr ~idx:(out_base + idx + 4) (Float32x4.div a1 b1);\n      Float32x4.Array.unsafe_set out_arr ~idx:(out_base + idx + 8) (Float32x4.div a2 b2);\n      Float32x4.Array.unsafe_set out_arr ~idx:(out_base + idx + 12) (Float32x4.div a3 b3);\n      i := idx + 16\n    done;\n    let n4 = n - 3 in\n    while !i < n4 do\n      let idx = !i in\n      let a_vec = Float32x4.Array.unsafe_get a_arr ~idx:(a_base + idx) in\n      let b_vec = Float32x4.Array.unsafe_get b_arr ~idx:(b_base + idx) in\n      Float32x4.Array.unsafe_set out_arr ~idx:(out_base + idx) (Float32x4.div a_vec b_vec);\n      i := idx + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Float32_u.div a_val b_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx 
a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Float32_u.div a_val b_val)\n    done\n\nlet fdiv_int8 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let b0 = Array.unsafe_get b_arr (b_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let b1 = Array.unsafe_get b_arr (b_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let b2 = Array.unsafe_get b_arr (b_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      let b3 = Array.unsafe_get b_arr (b_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Int8_u.div a0 b0);\n      Array.unsafe_set out_arr (out_base + i1) (Int8_u.div a1 b1);\n      Array.unsafe_set out_arr (out_base + i2) (Int8_u.div a2 b2);\n      Array.unsafe_set out_arr (out_base + i3) (Int8_u.div a3 b3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Int8_u.div a_val b_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape 
va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Int8_u.div a_val b_val)\n    done\n\nlet fdiv_int16 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let b0 = Array.unsafe_get b_arr (b_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let b1 = Array.unsafe_get b_arr (b_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let b2 = Array.unsafe_get b_arr (b_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      let b3 = Array.unsafe_get b_arr (b_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Int16_u.div a0 b0);\n      Array.unsafe_set 
out_arr (out_base + i1) (Int16_u.div a1 b1);\n      Array.unsafe_set out_arr (out_base + i2) (Int16_u.div a2 b2);\n      Array.unsafe_set out_arr (out_base + i3) (Int16_u.div a3 b3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Int16_u.div a_val b_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Int16_u.div a_val b_val)\n    done\n\nlet fdiv_int32 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 
in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let b0 = Array.unsafe_get b_arr (b_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let b1 = Array.unsafe_get b_arr (b_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let b2 = Array.unsafe_get b_arr (b_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      let b3 = Array.unsafe_get b_arr (b_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Int32_u.div a0 b0);\n      Array.unsafe_set out_arr (out_base + i1) (Int32_u.div a1 b1);\n      Array.unsafe_set out_arr (out_base + i2) (Int32_u.div a2 b2);\n      Array.unsafe_set out_arr (out_base + i3) (Int32_u.div a3 b3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Int32_u.div a_val b_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k) 
(Int32_u.div a_val b_val)\n    done\n\nlet fdiv_int64 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let b0 = Array.unsafe_get b_arr (b_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let b1 = Array.unsafe_get b_arr (b_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let b2 = Array.unsafe_get b_arr (b_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      let b3 = Array.unsafe_get b_arr (b_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Int64_u.div a0 b0);\n      Array.unsafe_set out_arr (out_base + i1) (Int64_u.div a1 b1);\n      Array.unsafe_set out_arr (out_base + i2) (Int64_u.div a2 b2);\n      Array.unsafe_set out_arr (out_base + i3) (Int64_u.div a3 b3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Int64_u.div a_val b_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    
let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Int64_u.div a_val b_val)\n    done\n"
  },
  {
    "path": "packages/nx-oxcaml/lib/binary_ops/op_idiv.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Import\n\nlet idiv_float64 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let b0 = Array.unsafe_get b_arr (b_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let b1 = Array.unsafe_get b_arr (b_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let b2 = Array.unsafe_get b_arr (b_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      let b3 = Array.unsafe_get b_arr (b_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0)\n        (Float_u.of_int (Float_u.to_int (Float_u.div a0 b0)));\n      Array.unsafe_set out_arr (out_base + i1)\n        (Float_u.of_int (Float_u.to_int (Float_u.div a1 b1)));\n      Array.unsafe_set out_arr (out_base + i2)\n        (Float_u.of_int (Float_u.to_int (Float_u.div a2 b2)));\n      Array.unsafe_set out_arr (out_base + i3)\n        (Float_u.of_int (Float_u.to_int (Float_u.div a3 b3)));\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx)\n        (Float_u.of_int (Float_u.to_int (Float_u.div 
a_val b_val)));\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k)\n        (Float_u.of_int (Float_u.to_int (Float_u.div a_val b_val)))\n    done\n\nlet idiv_float32 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let b0 = Array.unsafe_get b_arr (b_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let b1 = Array.unsafe_get b_arr (b_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let b2 = Array.unsafe_get b_arr (b_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) 
in\n      let b3 = Array.unsafe_get b_arr (b_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0)\n        (Float32_u.of_int (Float32_u.to_int (Float32_u.div a0 b0)));\n      Array.unsafe_set out_arr (out_base + i1)\n        (Float32_u.of_int (Float32_u.to_int (Float32_u.div a1 b1)));\n      Array.unsafe_set out_arr (out_base + i2)\n        (Float32_u.of_int (Float32_u.to_int (Float32_u.div a2 b2)));\n      Array.unsafe_set out_arr (out_base + i3)\n        (Float32_u.of_int (Float32_u.to_int (Float32_u.div a3 b3)));\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx)\n        (Float32_u.of_int (Float32_u.to_int (Float32_u.div a_val b_val)));\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k)\n        (Float32_u.of_int (Float32_u.to_int (Float32_u.div a_val b_val)))\n    done\n\nlet idiv_int8 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  
let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let b0 = Array.unsafe_get b_arr (b_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let b1 = Array.unsafe_get b_arr (b_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let b2 = Array.unsafe_get b_arr (b_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      let b3 = Array.unsafe_get b_arr (b_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Int8_u.div a0 b0);\n      Array.unsafe_set out_arr (out_base + i1) (Int8_u.div a1 b1);\n      Array.unsafe_set out_arr (out_base + i2) (Int8_u.div a2 b2);\n      Array.unsafe_set out_arr (out_base + i3) (Int8_u.div a3 b3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Int8_u.div a_val b_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      
Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Int8_u.div a_val b_val)\n    done\n\nlet idiv_int16 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let b0 = Array.unsafe_get b_arr (b_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let b1 = Array.unsafe_get b_arr (b_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let b2 = Array.unsafe_get b_arr (b_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      let b3 = Array.unsafe_get b_arr (b_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Int16_u.div a0 b0);\n      Array.unsafe_set out_arr (out_base + i1) (Int16_u.div a1 b1);\n      Array.unsafe_set out_arr (out_base + i2) (Int16_u.div a2 b2);\n      Array.unsafe_set out_arr (out_base + i3) (Int16_u.div a3 b3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Int16_u.div a_val b_val);\n    
  incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Int16_u.div a_val b_val)\n    done\n\nlet idiv_int32 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let b0 = Array.unsafe_get b_arr (b_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let b1 = Array.unsafe_get b_arr (b_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let b2 = Array.unsafe_get b_arr (b_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      let b3 = Array.unsafe_get b_arr (b_base + i3) in\n      
Array.unsafe_set out_arr (out_base + i0) (Int32_u.div a0 b0);\n      Array.unsafe_set out_arr (out_base + i1) (Int32_u.div a1 b1);\n      Array.unsafe_set out_arr (out_base + i2) (Int32_u.div a2 b2);\n      Array.unsafe_set out_arr (out_base + i3) (Int32_u.div a3 b3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Int32_u.div a_val b_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Int32_u.div a_val b_val)\n    done\n\nlet idiv_int64 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let 
i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let b0 = Array.unsafe_get b_arr (b_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let b1 = Array.unsafe_get b_arr (b_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let b2 = Array.unsafe_get b_arr (b_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      let b3 = Array.unsafe_get b_arr (b_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Int64_u.div a0 b0);\n      Array.unsafe_set out_arr (out_base + i1) (Int64_u.div a1 b1);\n      Array.unsafe_set out_arr (out_base + i2) (Int64_u.div a2 b2);\n      Array.unsafe_set out_arr (out_base + i3) (Int64_u.div a3 b3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Int64_u.div a_val b_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get 
b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Int64_u.div a_val b_val)\n    done\n"
  },
  {
    "path": "packages/nx-oxcaml/lib/binary_ops/op_max.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Import\n\nlet max_float64 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n8 = n - 7 in\n    while !i < n8 do\n      let idx = !i in\n      let a0 = Float64x2.Array.unsafe_get a_arr ~idx:(a_base + idx) in\n      let b0 = Float64x2.Array.unsafe_get b_arr ~idx:(b_base + idx) in\n      let a1 = Float64x2.Array.unsafe_get a_arr ~idx:(a_base + idx + 2) in\n      let b1 = Float64x2.Array.unsafe_get b_arr ~idx:(b_base + idx + 2) in\n      let a2 = Float64x2.Array.unsafe_get a_arr ~idx:(a_base + idx + 4) in\n      let b2 = Float64x2.Array.unsafe_get b_arr ~idx:(b_base + idx + 4) in\n      let a3 = Float64x2.Array.unsafe_get a_arr ~idx:(a_base + idx + 6) in\n      let b3 = Float64x2.Array.unsafe_get b_arr ~idx:(b_base + idx + 6) in\n      Float64x2.Array.unsafe_set out_arr ~idx:(out_base + idx) (Float64x2.max a0 b0);\n      Float64x2.Array.unsafe_set out_arr ~idx:(out_base + idx + 2) (Float64x2.max a1 b1);\n      Float64x2.Array.unsafe_set out_arr ~idx:(out_base + idx + 4) (Float64x2.max a2 b2);\n      Float64x2.Array.unsafe_set out_arr ~idx:(out_base + idx + 6) (Float64x2.max a3 b3);\n      i := idx + 8\n    done;\n    let n2 = n - 1 in\n    while !i < n2 do\n      let idx = !i in\n      let a_vec = Float64x2.Array.unsafe_get a_arr ~idx:(a_base + idx) in\n      let b_vec = Float64x2.Array.unsafe_get b_arr ~idx:(b_base + idx) in\n      Float64x2.Array.unsafe_set out_arr ~idx:(out_base + 
idx) (Float64x2.max a_vec b_vec);\n      i := idx + 2\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Float_u.max a_val b_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Float_u.max a_val b_val)\n    done\n\nlet max_float32 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n16 = n - 15 in\n    while !i < n16 do\n      let idx = !i in\n      let a0 = Float32x4.Array.unsafe_get a_arr ~idx:(a_base + idx) in\n      let b0 = Float32x4.Array.unsafe_get b_arr ~idx:(b_base + idx) in\n      let a1 = Float32x4.Array.unsafe_get a_arr ~idx:(a_base + idx + 4) 
in\n      let b1 = Float32x4.Array.unsafe_get b_arr ~idx:(b_base + idx + 4) in\n      let a2 = Float32x4.Array.unsafe_get a_arr ~idx:(a_base + idx + 8) in\n      let b2 = Float32x4.Array.unsafe_get b_arr ~idx:(b_base + idx + 8) in\n      let a3 = Float32x4.Array.unsafe_get a_arr ~idx:(a_base + idx + 12) in\n      let b3 = Float32x4.Array.unsafe_get b_arr ~idx:(b_base + idx + 12) in\n      Float32x4.Array.unsafe_set out_arr ~idx:(out_base + idx) (Float32x4.max a0 b0);\n      Float32x4.Array.unsafe_set out_arr ~idx:(out_base + idx + 4) (Float32x4.max a1 b1);\n      Float32x4.Array.unsafe_set out_arr ~idx:(out_base + idx + 8) (Float32x4.max a2 b2);\n      Float32x4.Array.unsafe_set out_arr ~idx:(out_base + idx + 12) (Float32x4.max a3 b3);\n      i := idx + 16\n    done;\n    let n4 = n - 3 in\n    while !i < n4 do\n      let idx = !i in\n      let a_vec = Float32x4.Array.unsafe_get a_arr ~idx:(a_base + idx) in\n      let b_vec = Float32x4.Array.unsafe_get b_arr ~idx:(b_base + idx) in\n      Float32x4.Array.unsafe_set out_arr ~idx:(out_base + idx) (Float32x4.max a_vec b_vec);\n      i := idx + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Float32_u.max a_val b_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx 
a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Float32_u.max a_val b_val)\n    done\n\nlet max_int8 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let b0 = Array.unsafe_get b_arr (b_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let b1 = Array.unsafe_get b_arr (b_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let b2 = Array.unsafe_get b_arr (b_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      let b3 = Array.unsafe_get b_arr (b_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Int8_u.max a0 b0);\n      Array.unsafe_set out_arr (out_base + i1) (Int8_u.max a1 b1);\n      Array.unsafe_set out_arr (out_base + i2) (Int8_u.max a2 b2);\n      Array.unsafe_set out_arr (out_base + i3) (Int8_u.max a3 b3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Int8_u.max a_val b_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape 
va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Int8_u.max a_val b_val)\n    done\n\nlet max_int16 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let b0 = Array.unsafe_get b_arr (b_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let b1 = Array.unsafe_get b_arr (b_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let b2 = Array.unsafe_get b_arr (b_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      let b3 = Array.unsafe_get b_arr (b_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Int16_u.max a0 b0);\n      Array.unsafe_set 
out_arr (out_base + i1) (Int16_u.max a1 b1);\n      Array.unsafe_set out_arr (out_base + i2) (Int16_u.max a2 b2);\n      Array.unsafe_set out_arr (out_base + i3) (Int16_u.max a3 b3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Int16_u.max a_val b_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Int16_u.max a_val b_val)\n    done\n\nlet max_int32 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n16 = n - 15 in\n    while !i < n16 do\n      let idx = !i in\n      let a0 = Int32x4.Array.unsafe_get a_arr ~idx:(a_base + idx) in\n  
    let b0 = Int32x4.Array.unsafe_get b_arr ~idx:(b_base + idx) in\n      let a1 = Int32x4.Array.unsafe_get a_arr ~idx:(a_base + idx + 4) in\n      let b1 = Int32x4.Array.unsafe_get b_arr ~idx:(b_base + idx + 4) in\n      let a2 = Int32x4.Array.unsafe_get a_arr ~idx:(a_base + idx + 8) in\n      let b2 = Int32x4.Array.unsafe_get b_arr ~idx:(b_base + idx + 8) in\n      let a3 = Int32x4.Array.unsafe_get a_arr ~idx:(a_base + idx + 12) in\n      let b3 = Int32x4.Array.unsafe_get b_arr ~idx:(b_base + idx + 12) in\n      Int32x4.Array.unsafe_set out_arr ~idx:(out_base + idx) (Int32x4.max a0 b0);\n      Int32x4.Array.unsafe_set out_arr ~idx:(out_base + idx + 4) (Int32x4.max a1 b1);\n      Int32x4.Array.unsafe_set out_arr ~idx:(out_base + idx + 8) (Int32x4.max a2 b2);\n      Int32x4.Array.unsafe_set out_arr ~idx:(out_base + idx + 12) (Int32x4.max a3 b3);\n      i := idx + 16\n    done;\n    let n4 = n - 3 in\n    while !i < n4 do\n      let idx = !i in\n      let a_vec = Int32x4.Array.unsafe_get a_arr ~idx:(a_base + idx) in\n      let b_vec = Int32x4.Array.unsafe_get b_arr ~idx:(b_base + idx) in\n      Int32x4.Array.unsafe_set out_arr ~idx:(out_base + idx) (Int32x4.max a_vec b_vec);\n      i := idx + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Int32_u.max a_val b_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 
1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Int32_u.max a_val b_val)\n    done\n\n(* Neither NEON nor SSE have native int64x2 max; emulate via cmpgt + blendv *)\nlet[@inline] int64x2_max a b =\n  let mask = Int64x2.cmpgt a b in\n  Int64x2.blendv b a mask\n\nlet max_int64 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n8 = n - 7 in\n    while !i < n8 do\n      let idx = !i in\n      let a0 = Int64x2.Array.unsafe_get a_arr ~idx:(a_base + idx) in\n      let b0 = Int64x2.Array.unsafe_get b_arr ~idx:(b_base + idx) in\n      let a1 = Int64x2.Array.unsafe_get a_arr ~idx:(a_base + idx + 2) in\n      let b1 = Int64x2.Array.unsafe_get b_arr ~idx:(b_base + idx + 2) in\n      let a2 = Int64x2.Array.unsafe_get a_arr ~idx:(a_base + idx + 4) in\n      let b2 = Int64x2.Array.unsafe_get b_arr ~idx:(b_base + idx + 4) in\n      let a3 = Int64x2.Array.unsafe_get a_arr ~idx:(a_base + idx + 6) in\n      let b3 = Int64x2.Array.unsafe_get b_arr ~idx:(b_base + idx + 6) in\n      Int64x2.Array.unsafe_set out_arr ~idx:(out_base + idx) (int64x2_max a0 b0);\n      Int64x2.Array.unsafe_set out_arr ~idx:(out_base + idx + 2) (int64x2_max a1 b1);\n      Int64x2.Array.unsafe_set out_arr ~idx:(out_base + idx + 4) (int64x2_max a2 b2);\n      Int64x2.Array.unsafe_set out_arr 
~idx:(out_base + idx + 6) (int64x2_max a3 b3);\n      i := idx + 8\n    done;\n    let n2 = n - 1 in\n    while !i < n2 do\n      let idx = !i in\n      let a_vec = Int64x2.Array.unsafe_get a_arr ~idx:(a_base + idx) in\n      let b_vec = Int64x2.Array.unsafe_get b_arr ~idx:(b_base + idx) in\n      Int64x2.Array.unsafe_set out_arr ~idx:(out_base + idx) (int64x2_max a_vec b_vec);\n      i := idx + 2\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Int64_u.max a_val b_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Int64_u.max a_val b_val)\n    done\n"
  },
  {
    "path": "packages/nx-oxcaml/lib/binary_ops/op_min.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Import\n\nlet min_float64 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n8 = n - 7 in\n    while !i < n8 do\n      let idx = !i in\n      let a0 = Float64x2.Array.unsafe_get a_arr ~idx:(a_base + idx) in\n      let b0 = Float64x2.Array.unsafe_get b_arr ~idx:(b_base + idx) in\n      let a1 = Float64x2.Array.unsafe_get a_arr ~idx:(a_base + idx + 2) in\n      let b1 = Float64x2.Array.unsafe_get b_arr ~idx:(b_base + idx + 2) in\n      let a2 = Float64x2.Array.unsafe_get a_arr ~idx:(a_base + idx + 4) in\n      let b2 = Float64x2.Array.unsafe_get b_arr ~idx:(b_base + idx + 4) in\n      let a3 = Float64x2.Array.unsafe_get a_arr ~idx:(a_base + idx + 6) in\n      let b3 = Float64x2.Array.unsafe_get b_arr ~idx:(b_base + idx + 6) in\n      Float64x2.Array.unsafe_set out_arr ~idx:(out_base + idx) (Float64x2.min a0 b0);\n      Float64x2.Array.unsafe_set out_arr ~idx:(out_base + idx + 2) (Float64x2.min a1 b1);\n      Float64x2.Array.unsafe_set out_arr ~idx:(out_base + idx + 4) (Float64x2.min a2 b2);\n      Float64x2.Array.unsafe_set out_arr ~idx:(out_base + idx + 6) (Float64x2.min a3 b3);\n      i := idx + 8\n    done;\n    let n2 = n - 1 in\n    while !i < n2 do\n      let idx = !i in\n      let a_vec = Float64x2.Array.unsafe_get a_arr ~idx:(a_base + idx) in\n      let b_vec = Float64x2.Array.unsafe_get b_arr ~idx:(b_base + idx) in\n      Float64x2.Array.unsafe_set out_arr ~idx:(out_base + 
idx) (Float64x2.min a_vec b_vec);\n      i := idx + 2\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Float_u.min a_val b_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Float_u.min a_val b_val)\n    done\n\nlet min_float32 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n16 = n - 15 in\n    while !i < n16 do\n      let idx = !i in\n      let a0 = Float32x4.Array.unsafe_get a_arr ~idx:(a_base + idx) in\n      let b0 = Float32x4.Array.unsafe_get b_arr ~idx:(b_base + idx) in\n      let a1 = Float32x4.Array.unsafe_get a_arr ~idx:(a_base + idx + 4) 
in\n      let b1 = Float32x4.Array.unsafe_get b_arr ~idx:(b_base + idx + 4) in\n      let a2 = Float32x4.Array.unsafe_get a_arr ~idx:(a_base + idx + 8) in\n      let b2 = Float32x4.Array.unsafe_get b_arr ~idx:(b_base + idx + 8) in\n      let a3 = Float32x4.Array.unsafe_get a_arr ~idx:(a_base + idx + 12) in\n      let b3 = Float32x4.Array.unsafe_get b_arr ~idx:(b_base + idx + 12) in\n      Float32x4.Array.unsafe_set out_arr ~idx:(out_base + idx) (Float32x4.min a0 b0);\n      Float32x4.Array.unsafe_set out_arr ~idx:(out_base + idx + 4) (Float32x4.min a1 b1);\n      Float32x4.Array.unsafe_set out_arr ~idx:(out_base + idx + 8) (Float32x4.min a2 b2);\n      Float32x4.Array.unsafe_set out_arr ~idx:(out_base + idx + 12) (Float32x4.min a3 b3);\n      i := idx + 16\n    done;\n    let n4 = n - 3 in\n    while !i < n4 do\n      let idx = !i in\n      let a_vec = Float32x4.Array.unsafe_get a_arr ~idx:(a_base + idx) in\n      let b_vec = Float32x4.Array.unsafe_get b_arr ~idx:(b_base + idx) in\n      Float32x4.Array.unsafe_set out_arr ~idx:(out_base + idx) (Float32x4.min a_vec b_vec);\n      i := idx + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Float32_u.min a_val b_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx 
a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Float32_u.min a_val b_val)\n    done\n\nlet min_int8 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let b0 = Array.unsafe_get b_arr (b_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let b1 = Array.unsafe_get b_arr (b_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let b2 = Array.unsafe_get b_arr (b_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      let b3 = Array.unsafe_get b_arr (b_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Int8_u.min a0 b0);\n      Array.unsafe_set out_arr (out_base + i1) (Int8_u.min a1 b1);\n      Array.unsafe_set out_arr (out_base + i2) (Int8_u.min a2 b2);\n      Array.unsafe_set out_arr (out_base + i3) (Int8_u.min a3 b3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Int8_u.min a_val b_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape 
va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Int8_u.min a_val b_val)\n    done\n\nlet min_int16 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let b0 = Array.unsafe_get b_arr (b_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let b1 = Array.unsafe_get b_arr (b_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let b2 = Array.unsafe_get b_arr (b_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      let b3 = Array.unsafe_get b_arr (b_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Int16_u.min a0 b0);\n      Array.unsafe_set 
out_arr (out_base + i1) (Int16_u.min a1 b1);\n      Array.unsafe_set out_arr (out_base + i2) (Int16_u.min a2 b2);\n      Array.unsafe_set out_arr (out_base + i3) (Int16_u.min a3 b3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Int16_u.min a_val b_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Int16_u.min a_val b_val)\n    done\n\nlet min_int32 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n16 = n - 15 in\n    while !i < n16 do\n      let idx = !i in\n      let a0 = Int32x4.Array.unsafe_get a_arr ~idx:(a_base + idx) in\n  
    let b0 = Int32x4.Array.unsafe_get b_arr ~idx:(b_base + idx) in\n      let a1 = Int32x4.Array.unsafe_get a_arr ~idx:(a_base + idx + 4) in\n      let b1 = Int32x4.Array.unsafe_get b_arr ~idx:(b_base + idx + 4) in\n      let a2 = Int32x4.Array.unsafe_get a_arr ~idx:(a_base + idx + 8) in\n      let b2 = Int32x4.Array.unsafe_get b_arr ~idx:(b_base + idx + 8) in\n      let a3 = Int32x4.Array.unsafe_get a_arr ~idx:(a_base + idx + 12) in\n      let b3 = Int32x4.Array.unsafe_get b_arr ~idx:(b_base + idx + 12) in\n      Int32x4.Array.unsafe_set out_arr ~idx:(out_base + idx) (Int32x4.min a0 b0);\n      Int32x4.Array.unsafe_set out_arr ~idx:(out_base + idx + 4) (Int32x4.min a1 b1);\n      Int32x4.Array.unsafe_set out_arr ~idx:(out_base + idx + 8) (Int32x4.min a2 b2);\n      Int32x4.Array.unsafe_set out_arr ~idx:(out_base + idx + 12) (Int32x4.min a3 b3);\n      i := idx + 16\n    done;\n    let n4 = n - 3 in\n    while !i < n4 do\n      let idx = !i in\n      let a_vec = Int32x4.Array.unsafe_get a_arr ~idx:(a_base + idx) in\n      let b_vec = Int32x4.Array.unsafe_get b_arr ~idx:(b_base + idx) in\n      Int32x4.Array.unsafe_set out_arr ~idx:(out_base + idx) (Int32x4.min a_vec b_vec);\n      i := idx + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Int32_u.min a_val b_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 
1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Int32_u.min a_val b_val)\n    done\n\n(* Neither NEON nor SSE have native int64x2 min; emulate via cmpgt + blendv *)\nlet[@inline] int64x2_min a b =\n  let mask = Int64x2.cmpgt a b in\n  Int64x2.blendv a b mask\n\nlet min_int64 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n8 = n - 7 in\n    while !i < n8 do\n      let idx = !i in\n      let a0 = Int64x2.Array.unsafe_get a_arr ~idx:(a_base + idx) in\n      let b0 = Int64x2.Array.unsafe_get b_arr ~idx:(b_base + idx) in\n      let a1 = Int64x2.Array.unsafe_get a_arr ~idx:(a_base + idx + 2) in\n      let b1 = Int64x2.Array.unsafe_get b_arr ~idx:(b_base + idx + 2) in\n      let a2 = Int64x2.Array.unsafe_get a_arr ~idx:(a_base + idx + 4) in\n      let b2 = Int64x2.Array.unsafe_get b_arr ~idx:(b_base + idx + 4) in\n      let a3 = Int64x2.Array.unsafe_get a_arr ~idx:(a_base + idx + 6) in\n      let b3 = Int64x2.Array.unsafe_get b_arr ~idx:(b_base + idx + 6) in\n      Int64x2.Array.unsafe_set out_arr ~idx:(out_base + idx) (int64x2_min a0 b0);\n      Int64x2.Array.unsafe_set out_arr ~idx:(out_base + idx + 2) (int64x2_min a1 b1);\n      Int64x2.Array.unsafe_set out_arr ~idx:(out_base + idx + 4) (int64x2_min a2 b2);\n      Int64x2.Array.unsafe_set out_arr 
~idx:(out_base + idx + 6) (int64x2_min a3 b3);\n      i := idx + 8\n    done;\n    let n2 = n - 1 in\n    while !i < n2 do\n      let idx = !i in\n      let a_vec = Int64x2.Array.unsafe_get a_arr ~idx:(a_base + idx) in\n      let b_vec = Int64x2.Array.unsafe_get b_arr ~idx:(b_base + idx) in\n      Int64x2.Array.unsafe_set out_arr ~idx:(out_base + idx) (int64x2_min a_vec b_vec);\n      i := idx + 2\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Int64_u.min a_val b_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Int64_u.min a_val b_val)\n    done\n"
  },
  {
    "path": "packages/nx-oxcaml/lib/binary_ops/op_mod.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Import\n\nlet mod_float64 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let b0 = Array.unsafe_get b_arr (b_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let b1 = Array.unsafe_get b_arr (b_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let b2 = Array.unsafe_get b_arr (b_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      let b3 = Array.unsafe_get b_arr (b_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Float_u.rem a0 b0);\n      Array.unsafe_set out_arr (out_base + i1) (Float_u.rem a1 b1);\n      Array.unsafe_set out_arr (out_base + i2) (Float_u.rem a2 b2);\n      Array.unsafe_set out_arr (out_base + i3) (Float_u.rem a3 b3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Float_u.rem a_val b_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = 
View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Float_u.rem a_val b_val)\n    done\n\nlet mod_float32 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let b0 = Array.unsafe_get b_arr (b_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let b1 = Array.unsafe_get b_arr (b_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let b2 = Array.unsafe_get b_arr (b_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      let b3 = Array.unsafe_get b_arr (b_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Float32_u.rem a0 b0);\n      Array.unsafe_set out_arr (out_base + i1) (Float32_u.rem a1 b1);\n      Array.unsafe_set out_arr (out_base + i2) 
(Float32_u.rem a2 b2);\n      Array.unsafe_set out_arr (out_base + i3) (Float32_u.rem a3 b3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Float32_u.rem a_val b_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Float32_u.rem a_val b_val)\n    done\n\nlet mod_int8 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let b0 = 
Array.unsafe_get b_arr (b_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let b1 = Array.unsafe_get b_arr (b_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let b2 = Array.unsafe_get b_arr (b_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      let b3 = Array.unsafe_get b_arr (b_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Int8_u.rem a0 b0);\n      Array.unsafe_set out_arr (out_base + i1) (Int8_u.rem a1 b1);\n      Array.unsafe_set out_arr (out_base + i2) (Int8_u.rem a2 b2);\n      Array.unsafe_set out_arr (out_base + i3) (Int8_u.rem a3 b3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Int8_u.rem a_val b_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Int8_u.rem a_val b_val)\n    done\n\nlet mod_int16 a_arr b_arr out_arr va vb vout 
start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let b0 = Array.unsafe_get b_arr (b_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let b1 = Array.unsafe_get b_arr (b_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let b2 = Array.unsafe_get b_arr (b_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      let b3 = Array.unsafe_get b_arr (b_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Int16_u.rem a0 b0);\n      Array.unsafe_set out_arr (out_base + i1) (Int16_u.rem a1 b1);\n      Array.unsafe_set out_arr (out_base + i2) (Int16_u.rem a2 b2);\n      Array.unsafe_set out_arr (out_base + i3) (Int16_u.rem a3 b3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Int16_u.rem a_val b_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 
1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Int16_u.rem a_val b_val)\n    done\n\nlet mod_int32 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let b0 = Array.unsafe_get b_arr (b_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let b1 = Array.unsafe_get b_arr (b_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let b2 = Array.unsafe_get b_arr (b_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      let b3 = Array.unsafe_get b_arr (b_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Int32_u.rem a0 b0);\n      Array.unsafe_set out_arr (out_base + i1) (Int32_u.rem a1 b1);\n      Array.unsafe_set out_arr (out_base + i2) (Int32_u.rem a2 b2);\n      Array.unsafe_set out_arr (out_base + i3) (Int32_u.rem a3 b3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Int32_u.rem a_val 
b_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Int32_u.rem a_val b_val)\n    done\n\nlet mod_int64 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let b0 = Array.unsafe_get b_arr (b_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let b1 = Array.unsafe_get b_arr (b_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let b2 = Array.unsafe_get b_arr (b_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      let b3 = Array.unsafe_get b_arr (b_base + 
i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Int64_u.rem a0 b0);\n      Array.unsafe_set out_arr (out_base + i1) (Int64_u.rem a1 b1);\n      Array.unsafe_set out_arr (out_base + i2) (Int64_u.rem a2 b2);\n      Array.unsafe_set out_arr (out_base + i3) (Int64_u.rem a3 b3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Int64_u.rem a_val b_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Int64_u.rem a_val b_val)\n    done\n"
  },
  {
    "path": "packages/nx-oxcaml/lib/binary_ops/op_mul.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Import\n\nlet mul_float64 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n8 = n - 7 in\n    while !i < n8 do\n      let idx = !i in\n      let a0 = Float64x2.Array.unsafe_get a_arr ~idx:(a_base + idx) in\n      let b0 = Float64x2.Array.unsafe_get b_arr ~idx:(b_base + idx) in\n      let a1 = Float64x2.Array.unsafe_get a_arr ~idx:(a_base + idx + 2) in\n      let b1 = Float64x2.Array.unsafe_get b_arr ~idx:(b_base + idx + 2) in\n      let a2 = Float64x2.Array.unsafe_get a_arr ~idx:(a_base + idx + 4) in\n      let b2 = Float64x2.Array.unsafe_get b_arr ~idx:(b_base + idx + 4) in\n      let a3 = Float64x2.Array.unsafe_get a_arr ~idx:(a_base + idx + 6) in\n      let b3 = Float64x2.Array.unsafe_get b_arr ~idx:(b_base + idx + 6) in\n      Float64x2.Array.unsafe_set out_arr ~idx:(out_base + idx) (Float64x2.mul a0 b0);\n      Float64x2.Array.unsafe_set out_arr ~idx:(out_base + idx + 2) (Float64x2.mul a1 b1);\n      Float64x2.Array.unsafe_set out_arr ~idx:(out_base + idx + 4) (Float64x2.mul a2 b2);\n      Float64x2.Array.unsafe_set out_arr ~idx:(out_base + idx + 6) (Float64x2.mul a3 b3);\n      i := idx + 8\n    done;\n    let n2 = n - 1 in\n    while !i < n2 do\n      let idx = !i in\n      let a_vec = Float64x2.Array.unsafe_get a_arr ~idx:(a_base + idx) in\n      let b_vec = Float64x2.Array.unsafe_get b_arr ~idx:(b_base + idx) in\n      Float64x2.Array.unsafe_set out_arr ~idx:(out_base + 
idx) (Float64x2.mul a_vec b_vec);\n      i := idx + 2\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Float_u.mul a_val b_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Float_u.mul a_val b_val)\n    done\n\nlet mul_float32 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n16 = n - 15 in\n    while !i < n16 do\n      let idx = !i in\n      let a0 = Float32x4.Array.unsafe_get a_arr ~idx:(a_base + idx) in\n      let b0 = Float32x4.Array.unsafe_get b_arr ~idx:(b_base + idx) in\n      let a1 = Float32x4.Array.unsafe_get a_arr ~idx:(a_base + idx + 4) 
in\n      let b1 = Float32x4.Array.unsafe_get b_arr ~idx:(b_base + idx + 4) in\n      let a2 = Float32x4.Array.unsafe_get a_arr ~idx:(a_base + idx + 8) in\n      let b2 = Float32x4.Array.unsafe_get b_arr ~idx:(b_base + idx + 8) in\n      let a3 = Float32x4.Array.unsafe_get a_arr ~idx:(a_base + idx + 12) in\n      let b3 = Float32x4.Array.unsafe_get b_arr ~idx:(b_base + idx + 12) in\n      Float32x4.Array.unsafe_set out_arr ~idx:(out_base + idx) (Float32x4.mul a0 b0);\n      Float32x4.Array.unsafe_set out_arr ~idx:(out_base + idx + 4) (Float32x4.mul a1 b1);\n      Float32x4.Array.unsafe_set out_arr ~idx:(out_base + idx + 8) (Float32x4.mul a2 b2);\n      Float32x4.Array.unsafe_set out_arr ~idx:(out_base + idx + 12) (Float32x4.mul a3 b3);\n      i := idx + 16\n    done;\n    let n4 = n - 3 in\n    while !i < n4 do\n      let idx = !i in\n      let a_vec = Float32x4.Array.unsafe_get a_arr ~idx:(a_base + idx) in\n      let b_vec = Float32x4.Array.unsafe_get b_arr ~idx:(b_base + idx) in\n      Float32x4.Array.unsafe_set out_arr ~idx:(out_base + idx) (Float32x4.mul a_vec b_vec);\n      i := idx + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Float32_u.mul a_val b_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx 
a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Float32_u.mul a_val b_val)\n    done\n\nlet mul_int8 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let b0 = Array.unsafe_get b_arr (b_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let b1 = Array.unsafe_get b_arr (b_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let b2 = Array.unsafe_get b_arr (b_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      let b3 = Array.unsafe_get b_arr (b_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Int8_u.mul a0 b0);\n      Array.unsafe_set out_arr (out_base + i1) (Int8_u.mul a1 b1);\n      Array.unsafe_set out_arr (out_base + i2) (Int8_u.mul a2 b2);\n      Array.unsafe_set out_arr (out_base + i3) (Int8_u.mul a3 b3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Int8_u.mul a_val b_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape 
va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Int8_u.mul a_val b_val)\n    done\n\nlet mul_int16 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let b0 = Array.unsafe_get b_arr (b_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let b1 = Array.unsafe_get b_arr (b_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let b2 = Array.unsafe_get b_arr (b_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      let b3 = Array.unsafe_get b_arr (b_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Int16_u.mul a0 b0);\n      Array.unsafe_set 
out_arr (out_base + i1) (Int16_u.mul a1 b1);\n      Array.unsafe_set out_arr (out_base + i2) (Int16_u.mul a2 b2);\n      Array.unsafe_set out_arr (out_base + i3) (Int16_u.mul a3 b3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Int16_u.mul a_val b_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Int16_u.mul a_val b_val)\n    done\n\nlet mul_int32 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 
in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let b0 = Array.unsafe_get b_arr (b_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let b1 = Array.unsafe_get b_arr (b_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let b2 = Array.unsafe_get b_arr (b_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      let b3 = Array.unsafe_get b_arr (b_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Int32_u.mul a0 b0);\n      Array.unsafe_set out_arr (out_base + i1) (Int32_u.mul a1 b1);\n      Array.unsafe_set out_arr (out_base + i2) (Int32_u.mul a2 b2);\n      Array.unsafe_set out_arr (out_base + i3) (Int32_u.mul a3 b3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Int32_u.mul a_val b_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k) 
(Int32_u.mul a_val b_val)\n    done\n\nlet mul_int64 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let b0 = Array.unsafe_get b_arr (b_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let b1 = Array.unsafe_get b_arr (b_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let b2 = Array.unsafe_get b_arr (b_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      let b3 = Array.unsafe_get b_arr (b_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Int64_u.mul a0 b0);\n      Array.unsafe_set out_arr (out_base + i1) (Int64_u.mul a1 b1);\n      Array.unsafe_set out_arr (out_base + i2) (Int64_u.mul a2 b2);\n      Array.unsafe_set out_arr (out_base + i3) (Int64_u.mul a3 b3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Int64_u.mul a_val b_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    
let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Int64_u.mul a_val b_val)\n    done\n"
  },
  {
    "path": "packages/nx-oxcaml/lib/binary_ops/op_pow.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Import\n\nlet pow_float64 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let b0 = Array.unsafe_get b_arr (b_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let b1 = Array.unsafe_get b_arr (b_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let b2 = Array.unsafe_get b_arr (b_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      let b3 = Array.unsafe_get b_arr (b_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Float_u.pow a0 b0);\n      Array.unsafe_set out_arr (out_base + i1) (Float_u.pow a1 b1);\n      Array.unsafe_set out_arr (out_base + i2) (Float_u.pow a2 b2);\n      Array.unsafe_set out_arr (out_base + i3) (Float_u.pow a3 b3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Float_u.pow a_val b_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = 
View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Float_u.pow a_val b_val)\n    done\n\nlet pow_float32 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let b0 = Array.unsafe_get b_arr (b_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let b1 = Array.unsafe_get b_arr (b_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let b2 = Array.unsafe_get b_arr (b_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      let b3 = Array.unsafe_get b_arr (b_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Float32_u.pow a0 b0);\n      Array.unsafe_set out_arr (out_base + i1) (Float32_u.pow a1 b1);\n      Array.unsafe_set out_arr (out_base + i2) 
(Float32_u.pow a2 b2);\n      Array.unsafe_set out_arr (out_base + i3) (Float32_u.pow a3 b3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Float32_u.pow a_val b_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Float32_u.pow a_val b_val)\n    done\n\nlet pow_int8\n    (a_arr : int8# array)\n    (b_arr : int8# array)\n    (out_arr : int8# array)\n    va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = 
Float_u.of_int (Int8_u.to_int (Array.unsafe_get a_arr (a_base + i0))) in\n      let b0 = Float_u.of_int (Int8_u.to_int (Array.unsafe_get b_arr (b_base + i0))) in\n      let a1 = Float_u.of_int (Int8_u.to_int (Array.unsafe_get a_arr (a_base + i1))) in\n      let b1 = Float_u.of_int (Int8_u.to_int (Array.unsafe_get b_arr (b_base + i1))) in\n      let a2 = Float_u.of_int (Int8_u.to_int (Array.unsafe_get a_arr (a_base + i2))) in\n      let b2 = Float_u.of_int (Int8_u.to_int (Array.unsafe_get b_arr (b_base + i2))) in\n      let a3 = Float_u.of_int (Int8_u.to_int (Array.unsafe_get a_arr (a_base + i3))) in\n      let b3 = Float_u.of_int (Int8_u.to_int (Array.unsafe_get b_arr (b_base + i3))) in\n      Array.unsafe_set out_arr (out_base + i0)\n        (Int8_u.of_int (Float_u.to_int (Float_u.pow a0 b0)));\n      Array.unsafe_set out_arr (out_base + i1)\n        (Int8_u.of_int (Float_u.to_int (Float_u.pow a1 b1)));\n      Array.unsafe_set out_arr (out_base + i2)\n        (Int8_u.of_int (Float_u.to_int (Float_u.pow a2 b2)));\n      Array.unsafe_set out_arr (out_base + i3)\n        (Int8_u.of_int (Float_u.to_int (Float_u.pow a3 b3)));\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Float_u.of_int (Int8_u.to_int (Array.unsafe_get a_arr (a_base + idx))) in\n      let b_val = Float_u.of_int (Int8_u.to_int (Array.unsafe_get b_arr (b_base + idx))) in\n      Array.unsafe_set out_arr (out_base + idx)\n        (Int8_u.of_int (Float_u.to_int (Float_u.pow a_val b_val)));\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make 
(Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Float_u.of_int (Int8_u.to_int (Array.unsafe_get a_arr (a_offset + a_lin))) in\n      let b_val = Float_u.of_int (Int8_u.to_int (Array.unsafe_get b_arr (b_offset + b_lin))) in\n      Array.unsafe_set out_arr (out_offset + k)\n        (Int8_u.of_int (Float_u.to_int (Float_u.pow a_val b_val)))\n    done\n\nlet pow_int16\n    (a_arr : int16# array)\n    (b_arr : int16# array)\n    (out_arr : int16# array)\n    va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Float_u.of_int (Int16_u.to_int (Array.unsafe_get a_arr (a_base + i0))) in\n      let b0 = Float_u.of_int (Int16_u.to_int (Array.unsafe_get b_arr (b_base + i0))) in\n      let a1 = Float_u.of_int (Int16_u.to_int (Array.unsafe_get a_arr (a_base + i1))) in\n      let b1 = Float_u.of_int (Int16_u.to_int (Array.unsafe_get b_arr (b_base + i1))) in\n      let a2 = Float_u.of_int (Int16_u.to_int (Array.unsafe_get a_arr (a_base + i2))) in\n      let b2 = Float_u.of_int (Int16_u.to_int (Array.unsafe_get b_arr (b_base + i2))) in\n      let a3 = Float_u.of_int (Int16_u.to_int (Array.unsafe_get a_arr (a_base + i3))) in\n      let b3 = Float_u.of_int (Int16_u.to_int (Array.unsafe_get b_arr (b_base + i3))) in\n      Array.unsafe_set 
out_arr (out_base + i0)\n        (Int16_u.of_int (Float_u.to_int (Float_u.pow a0 b0)));\n      Array.unsafe_set out_arr (out_base + i1)\n        (Int16_u.of_int (Float_u.to_int (Float_u.pow a1 b1)));\n      Array.unsafe_set out_arr (out_base + i2)\n        (Int16_u.of_int (Float_u.to_int (Float_u.pow a2 b2)));\n      Array.unsafe_set out_arr (out_base + i3)\n        (Int16_u.of_int (Float_u.to_int (Float_u.pow a3 b3)));\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Float_u.of_int (Int16_u.to_int (Array.unsafe_get a_arr (a_base + idx))) in\n      let b_val = Float_u.of_int (Int16_u.to_int (Array.unsafe_get b_arr (b_base + idx))) in\n      Array.unsafe_set out_arr (out_base + idx)\n        (Int16_u.of_int (Float_u.to_int (Float_u.pow a_val b_val)));\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Float_u.of_int (Int16_u.to_int (Array.unsafe_get a_arr (a_offset + a_lin))) in\n      let b_val = Float_u.of_int (Int16_u.to_int (Array.unsafe_get b_arr (b_offset + b_lin))) in\n      Array.unsafe_set out_arr (out_offset + k)\n        (Int16_u.of_int (Float_u.to_int (Float_u.pow a_val b_val)))\n    done\n\nlet pow_int32 a_arr b_arr out_arr va vb vout 
start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 =\n        Float_u.of_int (Int32_u.to_int (Array.unsafe_get a_arr (a_base + i0)))\n      in\n      let b0 =\n        Float_u.of_int (Int32_u.to_int (Array.unsafe_get b_arr (b_base + i0)))\n      in\n      let a1 =\n        Float_u.of_int (Int32_u.to_int (Array.unsafe_get a_arr (a_base + i1)))\n      in\n      let b1 =\n        Float_u.of_int (Int32_u.to_int (Array.unsafe_get b_arr (b_base + i1)))\n      in\n      let a2 =\n        Float_u.of_int (Int32_u.to_int (Array.unsafe_get a_arr (a_base + i2)))\n      in\n      let b2 =\n        Float_u.of_int (Int32_u.to_int (Array.unsafe_get b_arr (b_base + i2)))\n      in\n      let a3 =\n        Float_u.of_int (Int32_u.to_int (Array.unsafe_get a_arr (a_base + i3)))\n      in\n      let b3 =\n        Float_u.of_int (Int32_u.to_int (Array.unsafe_get b_arr (b_base + i3)))\n      in\n      Array.unsafe_set out_arr (out_base + i0)\n        (Int32_u.of_int (Float_u.to_int (Float_u.pow a0 b0)));\n      Array.unsafe_set out_arr (out_base + i1)\n        (Int32_u.of_int (Float_u.to_int (Float_u.pow a1 b1)));\n      Array.unsafe_set out_arr (out_base + i2)\n        (Int32_u.of_int (Float_u.to_int (Float_u.pow a2 b2)));\n      Array.unsafe_set out_arr (out_base + i3)\n        (Int32_u.of_int (Float_u.to_int (Float_u.pow a3 b3)));\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val =\n        Float_u.of_int (Int32_u.to_int (Array.unsafe_get a_arr (a_base + idx)))\n      in\n      let b_val =\n        Float_u.of_int (Int32_u.to_int (Array.unsafe_get 
b_arr (b_base + idx)))\n      in\n      Array.unsafe_set out_arr (out_base + idx)\n        (Int32_u.of_int (Float_u.to_int (Float_u.pow a_val b_val)));\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val =\n        Float_u.of_int\n          (Int32_u.to_int (Array.unsafe_get a_arr (a_offset + a_lin)))\n      in\n      let b_val =\n        Float_u.of_int\n          (Int32_u.to_int (Array.unsafe_get b_arr (b_offset + b_lin)))\n      in\n      Array.unsafe_set out_arr (out_offset + k)\n        (Int32_u.of_int (Float_u.to_int (Float_u.pow a_val b_val)))\n    done\n\nlet pow_int64 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Float_u.of_int (Int64_u.to_int (Array.unsafe_get a_arr (a_base + i0))) in\n      let b0 = Float_u.of_int (Int64_u.to_int (Array.unsafe_get b_arr (b_base + i0))) 
in\n      let a1 = Float_u.of_int (Int64_u.to_int (Array.unsafe_get a_arr (a_base + i1))) in\n      let b1 = Float_u.of_int (Int64_u.to_int (Array.unsafe_get b_arr (b_base + i1))) in\n      let a2 = Float_u.of_int (Int64_u.to_int (Array.unsafe_get a_arr (a_base + i2))) in\n      let b2 = Float_u.of_int (Int64_u.to_int (Array.unsafe_get b_arr (b_base + i2))) in\n      let a3 = Float_u.of_int (Int64_u.to_int (Array.unsafe_get a_arr (a_base + i3))) in\n      let b3 = Float_u.of_int (Int64_u.to_int (Array.unsafe_get b_arr (b_base + i3))) in\n      Array.unsafe_set out_arr (out_base + i0)\n        (Int64_u.of_int (Float_u.to_int (Float_u.pow a0 b0)));\n      Array.unsafe_set out_arr (out_base + i1)\n        (Int64_u.of_int (Float_u.to_int (Float_u.pow a1 b1)));\n      Array.unsafe_set out_arr (out_base + i2)\n        (Int64_u.of_int (Float_u.to_int (Float_u.pow a2 b2)));\n      Array.unsafe_set out_arr (out_base + i3)\n        (Int64_u.of_int (Float_u.to_int (Float_u.pow a3 b3)));\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Float_u.of_int (Int64_u.to_int (Array.unsafe_get a_arr (a_base + idx))) in\n      let b_val = Float_u.of_int (Int64_u.to_int (Array.unsafe_get b_arr (b_base + idx))) in\n      Array.unsafe_set out_arr (out_base + idx)\n        (Int64_u.of_int (Float_u.to_int (Float_u.pow a_val b_val)));\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let 
b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Float_u.of_int (Int64_u.to_int (Array.unsafe_get a_arr (a_offset + a_lin))) in\n      let b_val = Float_u.of_int (Int64_u.to_int (Array.unsafe_get b_arr (b_offset + b_lin))) in\n      Array.unsafe_set out_arr (out_offset + k)\n        (Int64_u.of_int (Float_u.to_int (Float_u.pow a_val b_val)))\n    done\n
  },
  {
    "path": "packages/nx-oxcaml/lib/binary_ops/op_sub.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Import\n\nlet sub_float64 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n8 = n - 7 in\n    while !i < n8 do\n      let idx = !i in\n      let a0 = Float64x2.Array.unsafe_get a_arr ~idx:(a_base + idx) in\n      let b0 = Float64x2.Array.unsafe_get b_arr ~idx:(b_base + idx) in\n      let a1 = Float64x2.Array.unsafe_get a_arr ~idx:(a_base + idx + 2) in\n      let b1 = Float64x2.Array.unsafe_get b_arr ~idx:(b_base + idx + 2) in\n      let a2 = Float64x2.Array.unsafe_get a_arr ~idx:(a_base + idx + 4) in\n      let b2 = Float64x2.Array.unsafe_get b_arr ~idx:(b_base + idx + 4) in\n      let a3 = Float64x2.Array.unsafe_get a_arr ~idx:(a_base + idx + 6) in\n      let b3 = Float64x2.Array.unsafe_get b_arr ~idx:(b_base + idx + 6) in\n      Float64x2.Array.unsafe_set out_arr ~idx:(out_base + idx) (Float64x2.sub a0 b0);\n      Float64x2.Array.unsafe_set out_arr ~idx:(out_base + idx + 2) (Float64x2.sub a1 b1);\n      Float64x2.Array.unsafe_set out_arr ~idx:(out_base + idx + 4) (Float64x2.sub a2 b2);\n      Float64x2.Array.unsafe_set out_arr ~idx:(out_base + idx + 6) (Float64x2.sub a3 b3);\n      i := idx + 8\n    done;\n    let n2 = n - 1 in\n    while !i < n2 do\n      let idx = !i in\n      let a_vec = Float64x2.Array.unsafe_get a_arr ~idx:(a_base + idx) in\n      let b_vec = Float64x2.Array.unsafe_get b_arr ~idx:(b_base + idx) in\n      Float64x2.Array.unsafe_set out_arr ~idx:(out_base + 
idx) (Float64x2.sub a_vec b_vec);\n      i := idx + 2\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Float_u.sub a_val b_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Float_u.sub a_val b_val)\n    done\n\nlet sub_float32 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n16 = n - 15 in\n    while !i < n16 do\n      let idx = !i in\n      let a0 = Float32x4.Array.unsafe_get a_arr ~idx:(a_base + idx) in\n      let b0 = Float32x4.Array.unsafe_get b_arr ~idx:(b_base + idx) in\n      let a1 = Float32x4.Array.unsafe_get a_arr ~idx:(a_base + idx + 4) 
in\n      let b1 = Float32x4.Array.unsafe_get b_arr ~idx:(b_base + idx + 4) in\n      let a2 = Float32x4.Array.unsafe_get a_arr ~idx:(a_base + idx + 8) in\n      let b2 = Float32x4.Array.unsafe_get b_arr ~idx:(b_base + idx + 8) in\n      let a3 = Float32x4.Array.unsafe_get a_arr ~idx:(a_base + idx + 12) in\n      let b3 = Float32x4.Array.unsafe_get b_arr ~idx:(b_base + idx + 12) in\n      Float32x4.Array.unsafe_set out_arr ~idx:(out_base + idx) (Float32x4.sub a0 b0);\n      Float32x4.Array.unsafe_set out_arr ~idx:(out_base + idx + 4) (Float32x4.sub a1 b1);\n      Float32x4.Array.unsafe_set out_arr ~idx:(out_base + idx + 8) (Float32x4.sub a2 b2);\n      Float32x4.Array.unsafe_set out_arr ~idx:(out_base + idx + 12) (Float32x4.sub a3 b3);\n      i := idx + 16\n    done;\n    let n4 = n - 3 in\n    while !i < n4 do\n      let idx = !i in\n      let a_vec = Float32x4.Array.unsafe_get a_arr ~idx:(a_base + idx) in\n      let b_vec = Float32x4.Array.unsafe_get b_arr ~idx:(b_base + idx) in\n      Float32x4.Array.unsafe_set out_arr ~idx:(out_base + idx) (Float32x4.sub a_vec b_vec);\n      i := idx + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Float32_u.sub a_val b_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx 
a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Float32_u.sub a_val b_val)\n    done\n\nlet sub_int8 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let b0 = Array.unsafe_get b_arr (b_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let b1 = Array.unsafe_get b_arr (b_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let b2 = Array.unsafe_get b_arr (b_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      let b3 = Array.unsafe_get b_arr (b_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Int8_u.sub a0 b0);\n      Array.unsafe_set out_arr (out_base + i1) (Int8_u.sub a1 b1);\n      Array.unsafe_set out_arr (out_base + i2) (Int8_u.sub a2 b2);\n      Array.unsafe_set out_arr (out_base + i3) (Int8_u.sub a3 b3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Int8_u.sub a_val b_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape 
va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Int8_u.sub a_val b_val)\n    done\n\nlet sub_int16 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let b0 = Array.unsafe_get b_arr (b_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let b1 = Array.unsafe_get b_arr (b_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let b2 = Array.unsafe_get b_arr (b_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      let b3 = Array.unsafe_get b_arr (b_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Int16_u.sub a0 b0);\n      Array.unsafe_set 
out_arr (out_base + i1) (Int16_u.sub a1 b1);\n      Array.unsafe_set out_arr (out_base + i2) (Int16_u.sub a2 b2);\n      Array.unsafe_set out_arr (out_base + i3) (Int16_u.sub a3 b3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Int16_u.sub a_val b_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Int16_u.sub a_val b_val)\n    done\n\nlet sub_int32 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n16 = n - 15 in\n    while !i < n16 do\n      let idx = !i in\n      let a0 = Int32x4.Array.unsafe_get a_arr ~idx:(a_base + idx) in\n  
    let b0 = Int32x4.Array.unsafe_get b_arr ~idx:(b_base + idx) in\n      let a1 = Int32x4.Array.unsafe_get a_arr ~idx:(a_base + idx + 4) in\n      let b1 = Int32x4.Array.unsafe_get b_arr ~idx:(b_base + idx + 4) in\n      let a2 = Int32x4.Array.unsafe_get a_arr ~idx:(a_base + idx + 8) in\n      let b2 = Int32x4.Array.unsafe_get b_arr ~idx:(b_base + idx + 8) in\n      let a3 = Int32x4.Array.unsafe_get a_arr ~idx:(a_base + idx + 12) in\n      let b3 = Int32x4.Array.unsafe_get b_arr ~idx:(b_base + idx + 12) in\n      Int32x4.Array.unsafe_set out_arr ~idx:(out_base + idx) (Int32x4.sub a0 b0);\n      Int32x4.Array.unsafe_set out_arr ~idx:(out_base + idx + 4) (Int32x4.sub a1 b1);\n      Int32x4.Array.unsafe_set out_arr ~idx:(out_base + idx + 8) (Int32x4.sub a2 b2);\n      Int32x4.Array.unsafe_set out_arr ~idx:(out_base + idx + 12) (Int32x4.sub a3 b3);\n      i := idx + 16\n    done;\n    let n4 = n - 3 in\n    while !i < n4 do\n      let idx = !i in\n      let a_vec = Int32x4.Array.unsafe_get a_arr ~idx:(a_base + idx) in\n      let b_vec = Int32x4.Array.unsafe_get b_arr ~idx:(b_base + idx) in\n      Int32x4.Array.unsafe_set out_arr ~idx:(out_base + idx) (Int32x4.sub a_vec b_vec);\n      i := idx + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Int32_u.sub a_val b_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 
1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Int32_u.sub a_val b_val)\n    done\n\nlet sub_int64 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n8 = n - 7 in\n    while !i < n8 do\n      let idx = !i in\n      let a0 = Int64x2.Array.unsafe_get a_arr ~idx:(a_base + idx) in\n      let b0 = Int64x2.Array.unsafe_get b_arr ~idx:(b_base + idx) in\n      let a1 = Int64x2.Array.unsafe_get a_arr ~idx:(a_base + idx + 2) in\n      let b1 = Int64x2.Array.unsafe_get b_arr ~idx:(b_base + idx + 2) in\n      let a2 = Int64x2.Array.unsafe_get a_arr ~idx:(a_base + idx + 4) in\n      let b2 = Int64x2.Array.unsafe_get b_arr ~idx:(b_base + idx + 4) in\n      let a3 = Int64x2.Array.unsafe_get a_arr ~idx:(a_base + idx + 6) in\n      let b3 = Int64x2.Array.unsafe_get b_arr ~idx:(b_base + idx + 6) in\n      Int64x2.Array.unsafe_set out_arr ~idx:(out_base + idx) (Int64x2.sub a0 b0);\n      Int64x2.Array.unsafe_set out_arr ~idx:(out_base + idx + 2) (Int64x2.sub a1 b1);\n      Int64x2.Array.unsafe_set out_arr ~idx:(out_base + idx + 4) (Int64x2.sub a2 b2);\n      Int64x2.Array.unsafe_set out_arr ~idx:(out_base + idx + 6) (Int64x2.sub a3 b3);\n      i := idx + 8\n    done;\n    let n2 = n - 1 in\n    while !i < n2 do\n      let idx = !i in\n      let a_vec = 
Int64x2.Array.unsafe_get a_arr ~idx:(a_base + idx) in\n      let b_vec = Int64x2.Array.unsafe_get b_arr ~idx:(b_base + idx) in\n      Int64x2.Array.unsafe_set out_arr ~idx:(out_base + idx) (Int64x2.sub a_vec b_vec);\n      i := idx + 2\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Int64_u.sub a_val b_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Int64_u.sub a_val b_val)\n    done\n"
  },
  {
    "path": "packages/nx-oxcaml/lib/comparison_ops/op_cmpeq.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Import\n\nlet cmpeq_float64 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let b0 = Array.unsafe_get b_arr (b_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let b1 = Array.unsafe_get b_arr (b_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let b2 = Array.unsafe_get b_arr (b_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      let b3 = Array.unsafe_get b_arr (b_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Float_u.equal a0 b0);\n      Array.unsafe_set out_arr (out_base + i1) (Float_u.equal a1 b1);\n      Array.unsafe_set out_arr (out_base + i2) (Float_u.equal a2 b2);\n      Array.unsafe_set out_arr (out_base + i3) (Float_u.equal a3 b3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Float_u.equal a_val b_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides 
= View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Float_u.equal a_val b_val)\n    done\n\nlet cmpeq_float32 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let b0 = Array.unsafe_get b_arr (b_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let b1 = Array.unsafe_get b_arr (b_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let b2 = Array.unsafe_get b_arr (b_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      let b3 = Array.unsafe_get b_arr (b_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Float32_u.equal a0 b0);\n      Array.unsafe_set out_arr (out_base + i1) (Float32_u.equal a1 b1);\n      Array.unsafe_set out_arr 
(out_base + i2) (Float32_u.equal a2 b2);\n      Array.unsafe_set out_arr (out_base + i3) (Float32_u.equal a3 b3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Float32_u.equal a_val b_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Float32_u.equal a_val b_val)\n    done\n\nlet cmpeq_int8 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n  
    let b0 = Array.unsafe_get b_arr (b_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let b1 = Array.unsafe_get b_arr (b_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let b2 = Array.unsafe_get b_arr (b_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      let b3 = Array.unsafe_get b_arr (b_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Int8_u.equal a0 b0);\n      Array.unsafe_set out_arr (out_base + i1) (Int8_u.equal a1 b1);\n      Array.unsafe_set out_arr (out_base + i2) (Int8_u.equal a2 b2);\n      Array.unsafe_set out_arr (out_base + i3) (Int8_u.equal a3 b3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Int8_u.equal a_val b_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Int8_u.equal a_val b_val)\n    done\n\nlet cmpeq_int16 a_arr 
b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let b0 = Array.unsafe_get b_arr (b_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let b1 = Array.unsafe_get b_arr (b_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let b2 = Array.unsafe_get b_arr (b_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      let b3 = Array.unsafe_get b_arr (b_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Int16_u.equal a0 b0);\n      Array.unsafe_set out_arr (out_base + i1) (Int16_u.equal a1 b1);\n      Array.unsafe_set out_arr (out_base + i2) (Int16_u.equal a2 b2);\n      Array.unsafe_set out_arr (out_base + i3) (Int16_u.equal a3 b3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Int16_u.equal a_val b_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 
in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Int16_u.equal a_val b_val)\n    done\n\nlet cmpeq_int32 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let b0 = Array.unsafe_get b_arr (b_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let b1 = Array.unsafe_get b_arr (b_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let b2 = Array.unsafe_get b_arr (b_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      let b3 = Array.unsafe_get b_arr (b_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Int32_u.equal a0 b0);\n      Array.unsafe_set out_arr (out_base + i1) (Int32_u.equal a1 b1);\n      Array.unsafe_set out_arr (out_base + i2) (Int32_u.equal a2 b2);\n      Array.unsafe_set out_arr (out_base + i3) (Int32_u.equal a3 b3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      
Array.unsafe_set out_arr (out_base + idx) (Int32_u.equal a_val b_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Int32_u.equal a_val b_val)\n    done\n\nlet cmpeq_int64 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let b0 = Array.unsafe_get b_arr (b_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let b1 = Array.unsafe_get b_arr (b_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let b2 = Array.unsafe_get b_arr (b_base + i2) in\n      let a3 = Array.unsafe_get a_arr 
(a_base + i3) in\n      let b3 = Array.unsafe_get b_arr (b_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Int64_u.equal a0 b0);\n      Array.unsafe_set out_arr (out_base + i1) (Int64_u.equal a1 b1);\n      Array.unsafe_set out_arr (out_base + i2) (Int64_u.equal a2 b2);\n      Array.unsafe_set out_arr (out_base + i3) (Int64_u.equal a3 b3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Int64_u.equal a_val b_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Int64_u.equal a_val b_val)\n    done\n"
  },
  {
    "path": "packages/nx-oxcaml/lib/comparison_ops/op_cmple.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Import\n\nlet cmple_float64 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let b0 = Array.unsafe_get b_arr (b_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let b1 = Array.unsafe_get b_arr (b_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let b2 = Array.unsafe_get b_arr (b_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      let b3 = Array.unsafe_get b_arr (b_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0)\n        (Float_u.equal a0 a0 && Float_u.equal b0 b0\n        && Float_u.compare a0 b0 <= 0);\n      Array.unsafe_set out_arr (out_base + i1)\n        (Float_u.equal a1 a1 && Float_u.equal b1 b1\n        && Float_u.compare a1 b1 <= 0);\n      Array.unsafe_set out_arr (out_base + i2)\n        (Float_u.equal a2 a2 && Float_u.equal b2 b2\n        && Float_u.compare a2 b2 <= 0);\n      Array.unsafe_set out_arr (out_base + i3)\n        (Float_u.equal a3 a3 && Float_u.equal b3 b3\n        && Float_u.compare a3 b3 <= 0);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr 
(b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx)\n        (Float_u.equal a_val a_val && Float_u.equal b_val b_val\n        && Float_u.compare a_val b_val <= 0);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k)\n        (Float_u.equal a_val a_val && Float_u.equal b_val b_val\n        && Float_u.compare a_val b_val <= 0)\n    done\n\nlet cmple_float32 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let b0 = Array.unsafe_get b_arr (b_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let b1 = 
Array.unsafe_get b_arr (b_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let b2 = Array.unsafe_get b_arr (b_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      let b3 = Array.unsafe_get b_arr (b_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0)\n        (Float32_u.equal a0 a0 && Float32_u.equal b0 b0\n        && Float32_u.compare a0 b0 <= 0);\n      Array.unsafe_set out_arr (out_base + i1)\n        (Float32_u.equal a1 a1 && Float32_u.equal b1 b1\n        && Float32_u.compare a1 b1 <= 0);\n      Array.unsafe_set out_arr (out_base + i2)\n        (Float32_u.equal a2 a2 && Float32_u.equal b2 b2\n        && Float32_u.compare a2 b2 <= 0);\n      Array.unsafe_set out_arr (out_base + i3)\n        (Float32_u.equal a3 a3 && Float32_u.equal b3 b3\n        && Float32_u.compare a3 b3 <= 0);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx)\n        (Float32_u.equal a_val a_val && Float32_u.equal b_val b_val\n        && Float32_u.compare a_val b_val <= 0);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = 
Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k)\n        (Float32_u.equal a_val a_val && Float32_u.equal b_val b_val\n        && Float32_u.compare a_val b_val <= 0)\n    done\n\nlet cmple_int8 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let b0 = Array.unsafe_get b_arr (b_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let b1 = Array.unsafe_get b_arr (b_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let b2 = Array.unsafe_get b_arr (b_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      let b3 = Array.unsafe_get b_arr (b_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Int8_u.compare a0 b0 <= 0);\n      Array.unsafe_set out_arr (out_base + i1) (Int8_u.compare a1 b1 <= 0);\n      Array.unsafe_set out_arr (out_base + i2) (Int8_u.compare a2 b2 <= 0);\n      Array.unsafe_set out_arr (out_base + i3) (Int8_u.compare a3 b3 <= 0);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Int8_u.compare a_val b_val <= 0);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    
let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Int8_u.compare a_val b_val <= 0)\n    done\n\nlet cmple_int16 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let b0 = Array.unsafe_get b_arr (b_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let b1 = Array.unsafe_get b_arr (b_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let b2 = Array.unsafe_get b_arr (b_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      let b3 = Array.unsafe_get b_arr (b_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Int16_u.compare a0 b0 <= 0);\n      
Array.unsafe_set out_arr (out_base + i1) (Int16_u.compare a1 b1 <= 0);\n      Array.unsafe_set out_arr (out_base + i2) (Int16_u.compare a2 b2 <= 0);\n      Array.unsafe_set out_arr (out_base + i3) (Int16_u.compare a3 b3 <= 0);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Int16_u.compare a_val b_val <= 0);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Int16_u.compare a_val b_val <= 0)\n    done\n\nlet cmple_int32 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let 
i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let b0 = Array.unsafe_get b_arr (b_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let b1 = Array.unsafe_get b_arr (b_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let b2 = Array.unsafe_get b_arr (b_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      let b3 = Array.unsafe_get b_arr (b_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Int32_u.compare a0 b0 <= 0);\n      Array.unsafe_set out_arr (out_base + i1) (Int32_u.compare a1 b1 <= 0);\n      Array.unsafe_set out_arr (out_base + i2) (Int32_u.compare a2 b2 <= 0);\n      Array.unsafe_set out_arr (out_base + i3) (Int32_u.compare a3 b3 <= 0);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Int32_u.compare a_val b_val <= 0);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let 
b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Int32_u.compare a_val b_val <= 0)\n    done\n\nlet cmple_int64 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let b0 = Array.unsafe_get b_arr (b_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let b1 = Array.unsafe_get b_arr (b_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let b2 = Array.unsafe_get b_arr (b_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      let b3 = Array.unsafe_get b_arr (b_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Int64_u.compare a0 b0 <= 0);\n      Array.unsafe_set out_arr (out_base + i1) (Int64_u.compare a1 b1 <= 0);\n      Array.unsafe_set out_arr (out_base + i2) (Int64_u.compare a2 b2 <= 0);\n      Array.unsafe_set out_arr (out_base + i3) (Int64_u.compare a3 b3 <= 0);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Int64_u.compare a_val b_val <= 0);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    
let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k)\n        (Int64_u.compare a_val b_val <= 0)\n    done\n"
  },
  {
    "path": "packages/nx-oxcaml/lib/comparison_ops/op_cmplt.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Import\n\nlet cmplt_float64 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let b0 = Array.unsafe_get b_arr (b_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let b1 = Array.unsafe_get b_arr (b_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let b2 = Array.unsafe_get b_arr (b_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      let b3 = Array.unsafe_get b_arr (b_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0)\n        (Float_u.equal a0 a0 && Float_u.equal b0 b0\n        && Float_u.compare a0 b0 < 0);\n      Array.unsafe_set out_arr (out_base + i1)\n        (Float_u.equal a1 a1 && Float_u.equal b1 b1\n        && Float_u.compare a1 b1 < 0);\n      Array.unsafe_set out_arr (out_base + i2)\n        (Float_u.equal a2 a2 && Float_u.equal b2 b2\n        && Float_u.compare a2 b2 < 0);\n      Array.unsafe_set out_arr (out_base + i3)\n        (Float_u.equal a3 a3 && Float_u.equal b3 b3\n        && Float_u.compare a3 b3 < 0);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr 
(b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx)\n        (Float_u.equal a_val a_val && Float_u.equal b_val b_val\n        && Float_u.compare a_val b_val < 0);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k)\n        (Float_u.equal a_val a_val && Float_u.equal b_val b_val\n        && Float_u.compare a_val b_val < 0)\n    done\n\nlet cmplt_float32 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let b0 = Array.unsafe_get b_arr (b_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let b1 = 
Array.unsafe_get b_arr (b_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let b2 = Array.unsafe_get b_arr (b_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      let b3 = Array.unsafe_get b_arr (b_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0)\n        (Float32_u.equal a0 a0 && Float32_u.equal b0 b0\n        && Float32_u.compare a0 b0 < 0);\n      Array.unsafe_set out_arr (out_base + i1)\n        (Float32_u.equal a1 a1 && Float32_u.equal b1 b1\n        && Float32_u.compare a1 b1 < 0);\n      Array.unsafe_set out_arr (out_base + i2)\n        (Float32_u.equal a2 a2 && Float32_u.equal b2 b2\n        && Float32_u.compare a2 b2 < 0);\n      Array.unsafe_set out_arr (out_base + i3)\n        (Float32_u.equal a3 a3 && Float32_u.equal b3 b3\n        && Float32_u.compare a3 b3 < 0);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx)\n        (Float32_u.equal a_val a_val && Float32_u.equal b_val b_val\n        && Float32_u.compare a_val b_val < 0);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = 
Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k)\n        (Float32_u.equal a_val a_val && Float32_u.equal b_val b_val\n        && Float32_u.compare a_val b_val < 0)\n    done\n\nlet cmplt_int8 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let b0 = Array.unsafe_get b_arr (b_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let b1 = Array.unsafe_get b_arr (b_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let b2 = Array.unsafe_get b_arr (b_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      let b3 = Array.unsafe_get b_arr (b_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Int8_u.compare a0 b0 < 0);\n      Array.unsafe_set out_arr (out_base + i1) (Int8_u.compare a1 b1 < 0);\n      Array.unsafe_set out_arr (out_base + i2) (Int8_u.compare a2 b2 < 0);\n      Array.unsafe_set out_arr (out_base + i3) (Int8_u.compare a3 b3 < 0);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Int8_u.compare a_val b_val < 0);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let 
b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Int8_u.compare a_val b_val < 0)\n    done\n\nlet cmplt_int16 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let b0 = Array.unsafe_get b_arr (b_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let b1 = Array.unsafe_get b_arr (b_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let b2 = Array.unsafe_get b_arr (b_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      let b3 = Array.unsafe_get b_arr (b_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Int16_u.compare a0 b0 < 0);\n      Array.unsafe_set 
out_arr (out_base + i1) (Int16_u.compare a1 b1 < 0);\n      Array.unsafe_set out_arr (out_base + i2) (Int16_u.compare a2 b2 < 0);\n      Array.unsafe_set out_arr (out_base + i3) (Int16_u.compare a3 b3 < 0);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Int16_u.compare a_val b_val < 0);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Int16_u.compare a_val b_val < 0)\n    done\n\nlet cmplt_int32 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      
let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let b0 = Array.unsafe_get b_arr (b_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let b1 = Array.unsafe_get b_arr (b_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let b2 = Array.unsafe_get b_arr (b_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      let b3 = Array.unsafe_get b_arr (b_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Int32_u.compare a0 b0 < 0);\n      Array.unsafe_set out_arr (out_base + i1) (Int32_u.compare a1 b1 < 0);\n      Array.unsafe_set out_arr (out_base + i2) (Int32_u.compare a2 b2 < 0);\n      Array.unsafe_set out_arr (out_base + i3) (Int32_u.compare a3 b3 < 0);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Int32_u.compare a_val b_val < 0);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get 
b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Int32_u.compare a_val b_val < 0)\n    done\n\nlet cmplt_int64 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let b0 = Array.unsafe_get b_arr (b_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let b1 = Array.unsafe_get b_arr (b_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let b2 = Array.unsafe_get b_arr (b_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      let b3 = Array.unsafe_get b_arr (b_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Int64_u.compare a0 b0 < 0);\n      Array.unsafe_set out_arr (out_base + i1) (Int64_u.compare a1 b1 < 0);\n      Array.unsafe_set out_arr (out_base + i2) (Int64_u.compare a2 b2 < 0);\n      Array.unsafe_set out_arr (out_base + i3) (Int64_u.compare a3 b3 < 0);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Int64_u.compare a_val b_val < 0);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset 
vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Int64_u.compare a_val b_val < 0)\n    done\n"
  },
  {
    "path": "packages/nx-oxcaml/lib/comparison_ops/op_cmpne.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Import\n\nlet cmpne_float64 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let b0 = Array.unsafe_get b_arr (b_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let b1 = Array.unsafe_get b_arr (b_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let b2 = Array.unsafe_get b_arr (b_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      let b3 = Array.unsafe_get b_arr (b_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (not (Float_u.equal a0 b0));\n      Array.unsafe_set out_arr (out_base + i1) (not (Float_u.equal a1 b1));\n      Array.unsafe_set out_arr (out_base + i2) (not (Float_u.equal a2 b2));\n      Array.unsafe_set out_arr (out_base + i3) (not (Float_u.equal a3 b3));\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (not (Float_u.equal a_val b_val));\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = 
View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (not (Float_u.equal a_val b_val))\n    done\n\nlet cmpne_float32 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let b0 = Array.unsafe_get b_arr (b_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let b1 = Array.unsafe_get b_arr (b_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let b2 = Array.unsafe_get b_arr (b_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      let b3 = Array.unsafe_get b_arr (b_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (not (Float32_u.equal a0 b0));\n      Array.unsafe_set out_arr (out_base + i1) (not 
(Float32_u.equal a1 b1));\n      Array.unsafe_set out_arr (out_base + i2) (not (Float32_u.equal a2 b2));\n      Array.unsafe_set out_arr (out_base + i3) (not (Float32_u.equal a3 b3));\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (not (Float32_u.equal a_val b_val));\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (not (Float32_u.equal a_val b_val))\n    done\n\nlet cmpne_int8 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n  
    let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let b0 = Array.unsafe_get b_arr (b_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let b1 = Array.unsafe_get b_arr (b_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let b2 = Array.unsafe_get b_arr (b_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      let b3 = Array.unsafe_get b_arr (b_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (not (Int8_u.equal a0 b0));\n      Array.unsafe_set out_arr (out_base + i1) (not (Int8_u.equal a1 b1));\n      Array.unsafe_set out_arr (out_base + i2) (not (Int8_u.equal a2 b2));\n      Array.unsafe_set out_arr (out_base + i3) (not (Int8_u.equal a3 b3));\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (not (Int8_u.equal a_val b_val));\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n   
   Array.unsafe_set out_arr (out_offset + k) (not (Int8_u.equal a_val b_val))\n    done\n\nlet cmpne_int16 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let b0 = Array.unsafe_get b_arr (b_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let b1 = Array.unsafe_get b_arr (b_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let b2 = Array.unsafe_get b_arr (b_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      let b3 = Array.unsafe_get b_arr (b_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (not (Int16_u.equal a0 b0));\n      Array.unsafe_set out_arr (out_base + i1) (not (Int16_u.equal a1 b1));\n      Array.unsafe_set out_arr (out_base + i2) (not (Int16_u.equal a2 b2));\n      Array.unsafe_set out_arr (out_base + i3) (not (Int16_u.equal a3 b3));\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (not (Int16_u.equal a_val b_val));\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make 
(Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (not (Int16_u.equal a_val b_val))\n    done\n\nlet cmpne_int32 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let b0 = Array.unsafe_get b_arr (b_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let b1 = Array.unsafe_get b_arr (b_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let b2 = Array.unsafe_get b_arr (b_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      let b3 = Array.unsafe_get b_arr (b_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (not (Int32_u.equal a0 b0));\n      Array.unsafe_set out_arr (out_base + i1) (not (Int32_u.equal a1 b1));\n      Array.unsafe_set out_arr (out_base + i2) (not (Int32_u.equal a2 b2));\n      Array.unsafe_set out_arr (out_base + i3) (not (Int32_u.equal a3 b3));\n      i := i0 + 4\n    done;\n    while 
!i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (not (Int32_u.equal a_val b_val));\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (not (Int32_u.equal a_val b_val))\n    done\n\nlet cmpne_int64 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let b0 = Array.unsafe_get b_arr (b_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let b1 = Array.unsafe_get 
b_arr (b_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let b2 = Array.unsafe_get b_arr (b_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      let b3 = Array.unsafe_get b_arr (b_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (not (Int64_u.equal a0 b0));\n      Array.unsafe_set out_arr (out_base + i1) (not (Int64_u.equal a1 b1));\n      Array.unsafe_set out_arr (out_base + i2) (not (Int64_u.equal a2 b2));\n      Array.unsafe_set out_arr (out_base + i3) (not (Int64_u.equal a3 b3));\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (not (Int64_u.equal a_val b_val));\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (not (Int64_u.equal a_val b_val))\n    done\n"
  },
  {
    "path": "packages/nx-oxcaml/lib/dune",
    "content": "(include_subdirs unqualified)\n\n; Copy architecture-specific SIMD module\n\n(rule\n (enabled_if\n  (= %{architecture} arm64))\n (targets simd.ml)\n (action\n  (copy simd_neon.ml simd.ml)))\n\n(rule\n (enabled_if\n  (= %{architecture} amd64))\n (targets simd.ml)\n (action\n  (copy simd_sse.ml simd.ml)))\n\n(library\n (name nx_oxcaml)\n (public_name nx-oxcaml)\n (implements nx.backend)\n (modules\n  (:standard \\ simd_neon simd_sse))\n (foreign_stubs\n  (language c)\n  (names nx_oxcaml_stubs simd_stubs))\n (libraries nx.core nx.buffer stdlib_stable stdlib_upstream_compatible))\n"
  },
  {
    "path": "packages/nx-oxcaml/lib/import.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nmodule Dtype = Nx_core.Dtype\nmodule View = Nx_core.View\nmodule Shape = Nx_core.Shape\nmodule Array1 = Bigarray.Array1\nmodule Parallel = Parallel\nmodule Float_u = Stdlib_upstream_compatible.Float_u\nmodule Float32_u = Stdlib_stable.Float32_u\nmodule Int32_u = Stdlib_upstream_compatible.Int32_u\nmodule Int64_u = Stdlib_upstream_compatible.Int64_u\nmodule Int8_u = Stdlib_stable.Int8_u\nmodule Int16_u = Stdlib_stable.Int16_u\nmodule Float32x4 = Simd.Float32x4\nmodule Float64x2 = Simd.Float64x2\nmodule Int32x4 = Simd.Int32x4\nmodule Int64x2 = Simd.Int64x2\nmodule Array = struct\n  include Stdlib.Array\n\n  external get : ('a : any mod non_null separable). 'a array -> int -> 'a\n    = \"%array_safe_get\"\n  [@@layout_poly]\n\n  external set :\n    ('a : any mod non_null separable). 'a array -> int -> 'a -> unit\n    = \"%array_safe_set\"\n  [@@layout_poly]\n\n  external unsafe_get : ('a : any mod non_null separable). 'a array -> int -> 'a\n    = \"%array_unsafe_get\"\n  [@@layout_poly]\n\n  external unsafe_set :\n    ('a : any mod non_null separable). 'a array -> int -> 'a -> unit\n    = \"%array_unsafe_set\"\n  [@@layout_poly]\n\n  external length : ('a : any mod non_null separable). 
'a array -> int\n    = \"%array_length\"\n  [@@layout_poly]\n\n  external make_float64 : int -> float# array = \"caml_make_unboxed_float64_vect\"\n\n  external make_float32 : int -> float32# array\n    = \"caml_make_unboxed_float32_vect\"\n\n  external make_int32 : int -> int32# array = \"caml_make_unboxed_int32_vect\"\n  external make_int64 : int -> int64# array = \"caml_make_unboxed_int64_vect\"\n  external make_int8 : int -> int8# array = \"caml_make_untagged_int8_vect\"\n  external make_int16 : int -> int16# array\n    = \"caml_make_untagged_int16_vect\"\n\n  external ba_to_unboxed_float_array\n  : (float, Bigarray.float64_elt, Bigarray.c_layout) Bigarray.Array1.t\n  -> float# array\n  = \"caml_ba_to_unboxed_float64_array\"\n\n  external ba_to_unboxed_float32_array\n  : (float, Bigarray.float32_elt, Bigarray.c_layout) Bigarray.Array1.t\n  -> float32# array\n  = \"caml_ba_to_unboxed_float32_array\"\n\n  external ba_to_unboxed_int64_array\n  : (int64, Bigarray.int64_elt, Bigarray.c_layout) Bigarray.Array1.t\n  -> int64# array\n  = \"caml_ba_to_unboxed_int64_array\"\n\n  external ba_to_unboxed_int32_array\n  : (int32, Bigarray.int32_elt, Bigarray.c_layout) Bigarray.Array1.t\n  -> int32# array\n  = \"caml_ba_to_unboxed_int32_array\"\n\n  external ba_to_unboxed_int8_array\n  : (int, Bigarray.int8_signed_elt, Bigarray.c_layout) Bigarray.Array1.t\n  -> int8# array\n  = \"caml_ba_to_unboxed_int8_array\"\n\n  external ba_to_unboxed_int16_array\n  : (int, Bigarray.int16_signed_elt, Bigarray.c_layout) Bigarray.Array1.t\n  -> int16# array\n  = \"caml_ba_to_unboxed_int16_array\"\n\n  external unboxed_float64_to_ba\n  : float# array -> int\n  -> (float, Bigarray.float64_elt, Bigarray.c_layout) Bigarray.Array1.t\n  = \"caml_unboxed_float64_array_to_ba\"\n\n  external unboxed_float32_to_ba\n  : float32# array -> int\n  -> (float, Bigarray.float32_elt, Bigarray.c_layout) Bigarray.Array1.t\n  = \"caml_unboxed_float32_array_to_ba\"\n\n  external unboxed_int64_to_ba\n  : int64# 
array -> int\n  -> (int64, Bigarray.int64_elt, Bigarray.c_layout) Bigarray.Array1.t\n  = \"caml_unboxed_int64_array_to_ba\"\n\n  external unboxed_int32_to_ba\n  : int32# array -> int\n  -> (int32, Bigarray.int32_elt, Bigarray.c_layout) Bigarray.Array1.t\n  = \"caml_unboxed_int32_array_to_ba\"\n\n  external unboxed_int8_to_ba\n  : int8# array -> int\n  -> (int, Bigarray.int8_signed_elt, Bigarray.c_layout) Bigarray.Array1.t\n  = \"caml_unboxed_int8_array_to_ba\"\n\n  external unboxed_int16_to_ba\n  : int16# array -> int\n  -> (int, Bigarray.int16_signed_elt, Bigarray.c_layout) Bigarray.Array1.t\n  = \"caml_unboxed_int16_array_to_ba\"\nend\n\nlet shape (v : View.t) : int array = View.shape v\nlet numel (v : View.t) : int = View.numel v\n"
  },
  {
    "path": "packages/nx-oxcaml/lib/logical_ops/op_and.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Import\n\nlet and_int8 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let b0 = Array.unsafe_get b_arr (b_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let b1 = Array.unsafe_get b_arr (b_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let b2 = Array.unsafe_get b_arr (b_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      let b3 = Array.unsafe_get b_arr (b_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Int8_u.logand a0 b0);\n      Array.unsafe_set out_arr (out_base + i1) (Int8_u.logand a1 b1);\n      Array.unsafe_set out_arr (out_base + i2) (Int8_u.logand a2 b2);\n      Array.unsafe_set out_arr (out_base + i3) (Int8_u.logand a3 b3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Int8_u.logand a_val b_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = 
View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Int8_u.logand a_val b_val)\n    done\n\nlet and_int16 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let b0 = Array.unsafe_get b_arr (b_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let b1 = Array.unsafe_get b_arr (b_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let b2 = Array.unsafe_get b_arr (b_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      let b3 = Array.unsafe_get b_arr (b_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Int16_u.logand a0 b0);\n      Array.unsafe_set out_arr (out_base + i1) (Int16_u.logand a1 b1);\n      Array.unsafe_set out_arr (out_base + i2) 
(Int16_u.logand a2 b2);\n      Array.unsafe_set out_arr (out_base + i3) (Int16_u.logand a3 b3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Int16_u.logand a_val b_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Int16_u.logand a_val b_val)\n    done\n\nlet and_int32 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n16 = n - 15 in\n    while !i < n16 do\n      let idx = !i in\n      let a0 = Int32x4.Array.unsafe_get a_arr ~idx:(a_base + idx) in\n      let b0 = Int32x4.Array.unsafe_get b_arr ~idx:(b_base + idx) in\n      let a1 
= Int32x4.Array.unsafe_get a_arr ~idx:(a_base + idx + 4) in\n      let b1 = Int32x4.Array.unsafe_get b_arr ~idx:(b_base + idx + 4) in\n      let a2 = Int32x4.Array.unsafe_get a_arr ~idx:(a_base + idx + 8) in\n      let b2 = Int32x4.Array.unsafe_get b_arr ~idx:(b_base + idx + 8) in\n      let a3 = Int32x4.Array.unsafe_get a_arr ~idx:(a_base + idx + 12) in\n      let b3 = Int32x4.Array.unsafe_get b_arr ~idx:(b_base + idx + 12) in\n      Int32x4.Array.unsafe_set out_arr ~idx:(out_base + idx) Int32x4.(a0 land b0);\n      Int32x4.Array.unsafe_set out_arr ~idx:(out_base + idx + 4) Int32x4.(a1 land b1);\n      Int32x4.Array.unsafe_set out_arr ~idx:(out_base + idx + 8) Int32x4.(a2 land b2);\n      Int32x4.Array.unsafe_set out_arr ~idx:(out_base + idx + 12) Int32x4.(a3 land b3);\n      i := idx + 16\n    done;\n    let n4 = n - 3 in\n    while !i < n4 do\n      let idx = !i in\n      let a_vec = Int32x4.Array.unsafe_get a_arr ~idx:(a_base + idx) in\n      let b_vec = Int32x4.Array.unsafe_get b_arr ~idx:(b_base + idx) in\n      Int32x4.Array.unsafe_set out_arr ~idx:(out_base + idx) Int32x4.(a_vec land b_vec);\n      i := idx + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Int32_u.logand a_val b_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      
Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Int32_u.logand a_val b_val)\n    done\n\nlet and_int64 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n8 = n - 7 in\n    while !i < n8 do\n      let idx = !i in\n      let a0 = Int64x2.Array.unsafe_get a_arr ~idx:(a_base + idx) in\n      let b0 = Int64x2.Array.unsafe_get b_arr ~idx:(b_base + idx) in\n      let a1 = Int64x2.Array.unsafe_get a_arr ~idx:(a_base + idx + 2) in\n      let b1 = Int64x2.Array.unsafe_get b_arr ~idx:(b_base + idx + 2) in\n      let a2 = Int64x2.Array.unsafe_get a_arr ~idx:(a_base + idx + 4) in\n      let b2 = Int64x2.Array.unsafe_get b_arr ~idx:(b_base + idx + 4) in\n      let a3 = Int64x2.Array.unsafe_get a_arr ~idx:(a_base + idx + 6) in\n      let b3 = Int64x2.Array.unsafe_get b_arr ~idx:(b_base + idx + 6) in\n      Int64x2.Array.unsafe_set out_arr ~idx:(out_base + idx) Int64x2.(a0 land b0);\n      Int64x2.Array.unsafe_set out_arr ~idx:(out_base + idx + 2) Int64x2.(a1 land b1);\n      Int64x2.Array.unsafe_set out_arr ~idx:(out_base + idx + 4) Int64x2.(a2 land b2);\n      Int64x2.Array.unsafe_set out_arr ~idx:(out_base + idx + 6) Int64x2.(a3 land b3);\n      i := idx + 8\n    done;\n    let n2 = n - 1 in\n    while !i < n2 do\n      let idx = !i in\n      let a_vec = Int64x2.Array.unsafe_get a_arr ~idx:(a_base + idx) in\n      let b_vec = 
Int64x2.Array.unsafe_get b_arr ~idx:(b_base + idx) in\n      Int64x2.Array.unsafe_set out_arr ~idx:(out_base + idx) Int64x2.(a_vec land b_vec);\n      i := idx + 2\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Int64_u.logand a_val b_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Int64_u.logand a_val b_val)\n    done\n"
  },
  {
    "path": "packages/nx-oxcaml/lib/logical_ops/op_or.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Import\n\nlet or_int8 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let b0 = Array.unsafe_get b_arr (b_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let b1 = Array.unsafe_get b_arr (b_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let b2 = Array.unsafe_get b_arr (b_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      let b3 = Array.unsafe_get b_arr (b_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Int8_u.logor a0 b0);\n      Array.unsafe_set out_arr (out_base + i1) (Int8_u.logor a1 b1);\n      Array.unsafe_set out_arr (out_base + i2) (Int8_u.logor a2 b2);\n      Array.unsafe_set out_arr (out_base + i3) (Int8_u.logor a3 b3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Int8_u.logor a_val b_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = 
View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Int8_u.logor a_val b_val)\n    done\n\nlet or_int16 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let b0 = Array.unsafe_get b_arr (b_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let b1 = Array.unsafe_get b_arr (b_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let b2 = Array.unsafe_get b_arr (b_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      let b3 = Array.unsafe_get b_arr (b_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Int16_u.logor a0 b0);\n      Array.unsafe_set out_arr (out_base + i1) (Int16_u.logor a1 b1);\n      Array.unsafe_set out_arr (out_base + i2) 
(Int16_u.logor a2 b2);\n      Array.unsafe_set out_arr (out_base + i3) (Int16_u.logor a3 b3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Int16_u.logor a_val b_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Int16_u.logor a_val b_val)\n    done\n\nlet or_int32 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n16 = n - 15 in\n    while !i < n16 do\n      let idx = !i in\n      let a0 = Int32x4.Array.unsafe_get a_arr ~idx:(a_base + idx) in\n      let b0 = Int32x4.Array.unsafe_get b_arr ~idx:(b_base + idx) in\n      let a1 = 
Int32x4.Array.unsafe_get a_arr ~idx:(a_base + idx + 4) in\n      let b1 = Int32x4.Array.unsafe_get b_arr ~idx:(b_base + idx + 4) in\n      let a2 = Int32x4.Array.unsafe_get a_arr ~idx:(a_base + idx + 8) in\n      let b2 = Int32x4.Array.unsafe_get b_arr ~idx:(b_base + idx + 8) in\n      let a3 = Int32x4.Array.unsafe_get a_arr ~idx:(a_base + idx + 12) in\n      let b3 = Int32x4.Array.unsafe_get b_arr ~idx:(b_base + idx + 12) in\n      Int32x4.Array.unsafe_set out_arr ~idx:(out_base + idx) Int32x4.(a0 lor b0);\n      Int32x4.Array.unsafe_set out_arr ~idx:(out_base + idx + 4) Int32x4.(a1 lor b1);\n      Int32x4.Array.unsafe_set out_arr ~idx:(out_base + idx + 8) Int32x4.(a2 lor b2);\n      Int32x4.Array.unsafe_set out_arr ~idx:(out_base + idx + 12) Int32x4.(a3 lor b3);\n      i := idx + 16\n    done;\n    let n4 = n - 3 in\n    while !i < n4 do\n      let idx = !i in\n      let a_vec = Int32x4.Array.unsafe_get a_arr ~idx:(a_base + idx) in\n      let b_vec = Int32x4.Array.unsafe_get b_arr ~idx:(b_base + idx) in\n      Int32x4.Array.unsafe_set out_arr ~idx:(out_base + idx) Int32x4.(a_vec lor b_vec);\n      i := idx + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Int32_u.logor a_val b_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      
Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Int32_u.logor a_val b_val)\n    done\n\nlet or_int64 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n8 = n - 7 in\n    while !i < n8 do\n      let idx = !i in\n      let a0 = Int64x2.Array.unsafe_get a_arr ~idx:(a_base + idx) in\n      let b0 = Int64x2.Array.unsafe_get b_arr ~idx:(b_base + idx) in\n      let a1 = Int64x2.Array.unsafe_get a_arr ~idx:(a_base + idx + 2) in\n      let b1 = Int64x2.Array.unsafe_get b_arr ~idx:(b_base + idx + 2) in\n      let a2 = Int64x2.Array.unsafe_get a_arr ~idx:(a_base + idx + 4) in\n      let b2 = Int64x2.Array.unsafe_get b_arr ~idx:(b_base + idx + 4) in\n      let a3 = Int64x2.Array.unsafe_get a_arr ~idx:(a_base + idx + 6) in\n      let b3 = Int64x2.Array.unsafe_get b_arr ~idx:(b_base + idx + 6) in\n      Int64x2.Array.unsafe_set out_arr ~idx:(out_base + idx) Int64x2.(a0 lor b0);\n      Int64x2.Array.unsafe_set out_arr ~idx:(out_base + idx + 2) Int64x2.(a1 lor b1);\n      Int64x2.Array.unsafe_set out_arr ~idx:(out_base + idx + 4) Int64x2.(a2 lor b2);\n      Int64x2.Array.unsafe_set out_arr ~idx:(out_base + idx + 6) Int64x2.(a3 lor b3);\n      i := idx + 8\n    done;\n    let n2 = n - 1 in\n    while !i < n2 do\n      let idx = !i in\n      let a_vec = Int64x2.Array.unsafe_get a_arr ~idx:(a_base + idx) in\n      let b_vec = 
Int64x2.Array.unsafe_get b_arr ~idx:(b_base + idx) in\n      Int64x2.Array.unsafe_set out_arr ~idx:(out_base + idx) Int64x2.(a_vec lor b_vec);\n      i := idx + 2\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Int64_u.logor a_val b_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Int64_u.logor a_val b_val)\n    done\n"
  },
  {
    "path": "packages/nx-oxcaml/lib/logical_ops/op_xor.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Import\n\nlet xor_int8 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let b0 = Array.unsafe_get b_arr (b_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let b1 = Array.unsafe_get b_arr (b_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let b2 = Array.unsafe_get b_arr (b_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      let b3 = Array.unsafe_get b_arr (b_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Int8_u.logxor a0 b0);\n      Array.unsafe_set out_arr (out_base + i1) (Int8_u.logxor a1 b1);\n      Array.unsafe_set out_arr (out_base + i2) (Int8_u.logxor a2 b2);\n      Array.unsafe_set out_arr (out_base + i3) (Int8_u.logxor a3 b3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Int8_u.logxor a_val b_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = 
View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Int8_u.logxor a_val b_val)\n    done\n\nlet xor_int16 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let b0 = Array.unsafe_get b_arr (b_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let b1 = Array.unsafe_get b_arr (b_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let b2 = Array.unsafe_get b_arr (b_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      let b3 = Array.unsafe_get b_arr (b_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Int16_u.logxor a0 b0);\n      Array.unsafe_set out_arr (out_base + i1) (Int16_u.logxor a1 b1);\n      Array.unsafe_set out_arr (out_base + i2) 
(Int16_u.logxor a2 b2);\n      Array.unsafe_set out_arr (out_base + i3) (Int16_u.logxor a3 b3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Int16_u.logxor a_val b_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Int16_u.logxor a_val b_val)\n    done\n\nlet xor_int32 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n16 = n - 15 in\n    while !i < n16 do\n      let idx = !i in\n      let a0 = Int32x4.Array.unsafe_get a_arr ~idx:(a_base + idx) in\n      let b0 = Int32x4.Array.unsafe_get b_arr ~idx:(b_base + idx) in\n      let a1 
= Int32x4.Array.unsafe_get a_arr ~idx:(a_base + idx + 4) in\n      let b1 = Int32x4.Array.unsafe_get b_arr ~idx:(b_base + idx + 4) in\n      let a2 = Int32x4.Array.unsafe_get a_arr ~idx:(a_base + idx + 8) in\n      let b2 = Int32x4.Array.unsafe_get b_arr ~idx:(b_base + idx + 8) in\n      let a3 = Int32x4.Array.unsafe_get a_arr ~idx:(a_base + idx + 12) in\n      let b3 = Int32x4.Array.unsafe_get b_arr ~idx:(b_base + idx + 12) in\n      Int32x4.Array.unsafe_set out_arr ~idx:(out_base + idx) Int32x4.(a0 lxor b0);\n      Int32x4.Array.unsafe_set out_arr ~idx:(out_base + idx + 4) Int32x4.(a1 lxor b1);\n      Int32x4.Array.unsafe_set out_arr ~idx:(out_base + idx + 8) Int32x4.(a2 lxor b2);\n      Int32x4.Array.unsafe_set out_arr ~idx:(out_base + idx + 12) Int32x4.(a3 lxor b3);\n      i := idx + 16\n    done;\n    let n4 = n - 3 in\n    while !i < n4 do\n      let idx = !i in\n      let a_vec = Int32x4.Array.unsafe_get a_arr ~idx:(a_base + idx) in\n      let b_vec = Int32x4.Array.unsafe_get b_arr ~idx:(b_base + idx) in\n      Int32x4.Array.unsafe_set out_arr ~idx:(out_base + idx) Int32x4.(a_vec lxor b_vec);\n      i := idx + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Int32_u.logxor a_val b_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      
Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Int32_u.logxor a_val b_val)\n    done\n\nlet xor_int64 a_arr b_arr out_arr va vb vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  let b_base = View.offset vb + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous va\n    && View.is_c_contiguous vb\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n8 = n - 7 in\n    while !i < n8 do\n      let idx = !i in\n      let a0 = Int64x2.Array.unsafe_get a_arr ~idx:(a_base + idx) in\n      let b0 = Int64x2.Array.unsafe_get b_arr ~idx:(b_base + idx) in\n      let a1 = Int64x2.Array.unsafe_get a_arr ~idx:(a_base + idx + 2) in\n      let b1 = Int64x2.Array.unsafe_get b_arr ~idx:(b_base + idx + 2) in\n      let a2 = Int64x2.Array.unsafe_get a_arr ~idx:(a_base + idx + 4) in\n      let b2 = Int64x2.Array.unsafe_get b_arr ~idx:(b_base + idx + 4) in\n      let a3 = Int64x2.Array.unsafe_get a_arr ~idx:(a_base + idx + 6) in\n      let b3 = Int64x2.Array.unsafe_get b_arr ~idx:(b_base + idx + 6) in\n      Int64x2.Array.unsafe_set out_arr ~idx:(out_base + idx) Int64x2.(a0 lxor b0);\n      Int64x2.Array.unsafe_set out_arr ~idx:(out_base + idx + 2) Int64x2.(a1 lxor b1);\n      Int64x2.Array.unsafe_set out_arr ~idx:(out_base + idx + 4) Int64x2.(a2 lxor b2);\n      Int64x2.Array.unsafe_set out_arr ~idx:(out_base + idx + 6) Int64x2.(a3 lxor b3);\n      i := idx + 8\n    done;\n    let n2 = n - 1 in\n    while !i < n2 do\n      let idx = !i in\n      let a_vec = Int64x2.Array.unsafe_get a_arr ~idx:(a_base + idx) in\n      let b_vec = 
Int64x2.Array.unsafe_get b_arr ~idx:(b_base + idx) in\n      Int64x2.Array.unsafe_set out_arr ~idx:(out_base + idx) Int64x2.(a_vec lxor b_vec);\n      i := idx + 2\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      let b_val = Array.unsafe_get b_arr (b_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Int64_u.logxor a_val b_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let b_shape = shape vb in\n    let a_strides = View.strides va in\n    let b_strides = View.strides vb in\n    let a_offset = View.offset va in\n    let b_offset = View.offset vb in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get b_arr (b_offset + b_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Int64_u.logxor a_val b_val)\n    done\n"
  },
  {
    "path": "packages/nx-oxcaml/lib/nx_backend.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Import\nopen Nx_buffer\n\nlet err op fmt = Printf.ksprintf (fun msg -> invalid_arg (op ^ \": \" ^ msg)) fmt\n\ntype context = { pool : Parallel.pool }\n\nlet create_context () = { pool = Parallel.get_or_setup_pool () }\n\ntype 'b buffer =\n  | Float64 : float# array -> Dtype.float64_elt buffer\n  | Float32 : float32# array -> Dtype.float32_elt buffer\n  | Int8 : int8# array -> Dtype.int8_elt buffer\n  | Int16 : int16# array -> Dtype.int16_elt buffer\n  | Int32 : int32# array -> Dtype.int32_elt buffer\n  | Int64 : int64# array -> Dtype.int64_elt buffer\n  | Bool : bool array -> Dtype.bool_elt buffer\n\ntype ('a, 'b) t = {\n  dtype : ('a, 'b) Dtype.t;\n  buffer : 'b buffer;\n  view : View.t;\n  context : context;\n}\n\nlet view t = t.view\nlet dtype t = t.dtype\nlet context t = t.context\n\nlet parallel_threshold = 62500\n\n(* Run [f start end] either in parallel or sequentially depending on [vol]. 
*)\nlet par pool vol f =\n  if vol > parallel_threshold then\n    Parallel.parallel_for pool 0 (vol - 1) f\n  else f 0 vol\n\nlet to_host (type a b) (t : (a, b) t) :\n    (a, b) Nx_buffer.t =\n  let n = numel t.view in\n  match t.dtype with\n  | Dtype.Float64 ->\n    (match t.buffer with\n     | Float64 arr -> of_bigarray1 (Array.unboxed_float64_to_ba arr n)\n     | _ -> assert false)\n  | Dtype.Float32 ->\n    (match t.buffer with\n     | Float32 arr -> of_bigarray1 (Array.unboxed_float32_to_ba arr n)\n     | _ -> assert false)\n  | Dtype.Int64 ->\n    (match t.buffer with\n     | Int64 arr -> of_bigarray1 (Array.unboxed_int64_to_ba arr n)\n     | _ -> assert false)\n  | Dtype.Int32 ->\n    (match t.buffer with\n     | Int32 arr -> of_bigarray1 (Array.unboxed_int32_to_ba arr n)\n     | _ -> assert false)\n  | Dtype.Int8 ->\n    (match t.buffer with\n     | Int8 arr -> of_bigarray1 (Array.unboxed_int8_to_ba arr n)\n     | _ -> assert false)\n  | Dtype.Int16 ->\n    (match t.buffer with\n     | Int16 arr -> of_bigarray1 (Array.unboxed_int16_to_ba arr n)\n     | _ -> assert false)\n  | Dtype.Bool ->\n    (match t.buffer with\n     | Bool arr ->\n       let ba = Nx_buffer.create Nx_buffer.Bool n in\n       for i = 0 to n - 1 do\n         Nx_buffer.unsafe_set ba i arr.(i)\n       done;\n       ba\n     | _ -> assert false)\n  | _ -> invalid_arg \"to_host: unsupported dtype\"\n\nlet buffer (type a b) context (dtype : (a, b) Dtype.t) (shape_arr : int array) :\n    (a, b) t =\n  let size = Stdlib.Array.fold_left ( * ) 1 shape_arr in\n  let view = View.create shape_arr in\n  match dtype with\n  | Dtype.Float64 ->\n      let buffer = Array.make_float64 size in\n      { dtype; buffer = Float64 buffer; view; context }\n  | Dtype.Float32 ->\n      let buffer = Array.make_float32 size in\n      { dtype; buffer = Float32 buffer; view; context }\n  | Dtype.Int8 ->\n      let buffer = Array.make_int8 size in\n      { dtype; buffer = Int8 buffer; view; context }\n  | Dtype.Int16 
->\n      let buffer = Array.make_int16 size in\n      { dtype; buffer = Int16 buffer; view; context }\n  | Dtype.Int32 ->\n      let buffer = Array.make_int32 size in\n      { dtype; buffer = Int32 buffer; view; context }\n  | Dtype.Int64 ->\n      let buffer = Array.make_int64 size in\n      { dtype; buffer = Int64 buffer; view; context }\n  | Dtype.Bool ->\n      let buffer = Array.make size false in\n      { dtype; buffer = Bool buffer; view; context }\n  | _ -> invalid_arg \"buffer: unsupported dtype\"\n\nlet full (type a b) context (dtype : (a, b) Dtype.t) (shape_arr : int array)\n    (value : a) : (a, b) t =\n  let t = buffer context dtype shape_arr in\n  let size = Stdlib.Array.fold_left ( * ) 1 shape_arr in\n  (match (dtype : (a, b) Dtype.t) with\n  | Dtype.Float64 ->\n      (match t.buffer with\n       | Float64 arr ->\n         let v = Float_u.of_float value in\n         for i = 0 to size - 1 do\n           Array.unsafe_set arr i v\n         done\n       | _ -> assert false)\n  | Dtype.Float32 ->\n      (match t.buffer with\n       | Float32 arr ->\n         let v = Float32_u.of_float (Float_u.of_float value) in\n         for i = 0 to size - 1 do\n           Array.unsafe_set arr i v\n         done\n       | _ -> assert false)\n  | Dtype.Int8 ->\n      (match t.buffer with\n       | Int8 arr ->\n         let v = Int8_u.of_int value in\n         for i = 0 to size - 1 do\n           Array.unsafe_set arr i v\n         done\n       | _ -> assert false)\n  | Dtype.Int16 ->\n      (match t.buffer with\n       | Int16 arr ->\n         let v = Int16_u.of_int value in\n         for i = 0 to size - 1 do\n           Array.unsafe_set arr i v\n         done\n       | _ -> assert false)\n  | Dtype.Int32 ->\n      (match t.buffer with\n       | Int32 arr ->\n         let v = Int32_u.of_int32 value in\n         for i = 0 to size - 1 do\n           Array.unsafe_set arr i v\n         done\n       | _ -> assert false)\n  | Dtype.Int64 ->\n      (match t.buffer with\n       
| Int64 arr ->\n         let v = Int64_u.of_int64 value in\n         for i = 0 to size - 1 do\n           Array.unsafe_set arr i v\n         done\n       | _ -> assert false)\n  | Dtype.Bool ->\n      (match t.buffer with\n       | Bool arr ->\n         for i = 0 to size - 1 do\n           Stdlib.Array.unsafe_set arr i value\n         done\n       | _ -> assert false)\n  | _ -> invalid_arg \"full: unsupported dtype\");\n  t\n\nlet add (type a b) (a : (a, b) t) (b : (a, b) t) : (a, b) t =\n  let out = buffer a.context a.dtype (shape a.view) in\n  let vout = out.view in\n  let va = a.view in\n  let vb = b.view in\n  let vol = numel vout in\n  (match (out.buffer, a.buffer, b.buffer) with\n  | Float64 out_arr, Float64 a_arr, Float64 b_arr ->\n      par out.context.pool vol (fun s e -> Op_add.add_float64 a_arr b_arr out_arr va vb vout s e)\n  | Float32 out_arr, Float32 a_arr, Float32 b_arr ->\n      par out.context.pool vol (fun s e -> Op_add.add_float32 a_arr b_arr out_arr va vb vout s e)\n  | Int32 out_arr, Int32 a_arr, Int32 b_arr ->\n      par out.context.pool vol (fun s e -> Op_add.add_int32 a_arr b_arr out_arr va vb vout s e)\n  | Int64 out_arr, Int64 a_arr, Int64 b_arr ->\n      par out.context.pool vol (fun s e -> Op_add.add_int64 a_arr b_arr out_arr va vb vout s e)\n  | _ -> invalid_arg \"add: unsupported dtype\");\n  out\n\nlet sub (type a b) (a : (a, b) t) (b : (a, b) t) : (a, b) t =\n  let out = buffer a.context a.dtype (shape a.view) in\n  let vout = out.view in\n  let va = a.view in\n  let vb = b.view in\n  let vol = numel vout in\n  (match (out.buffer, a.buffer, b.buffer) with\n  | Float64 out_arr, Float64 a_arr, Float64 b_arr ->\n      par out.context.pool vol (fun s e -> Op_sub.sub_float64 a_arr b_arr out_arr va vb vout s e)\n  | Float32 out_arr, Float32 a_arr, Float32 b_arr ->\n      par out.context.pool vol (fun s e -> Op_sub.sub_float32 a_arr b_arr out_arr va vb vout s e)\n  | Int32 out_arr, Int32 a_arr, Int32 b_arr ->\n      par out.context.pool 
vol (fun s e -> Op_sub.sub_int32 a_arr b_arr out_arr va vb vout s e)\n  | Int64 out_arr, Int64 a_arr, Int64 b_arr ->\n      par out.context.pool vol (fun s e -> Op_sub.sub_int64 a_arr b_arr out_arr va vb vout s e)\n  | _ -> invalid_arg \"sub: unsupported dtype\");\n  out\n\nlet mul (type a b) (a : (a, b) t) (b : (a, b) t) : (a, b) t =\n  let out = buffer a.context a.dtype (shape a.view) in\n  let vout = out.view in\n  let va = a.view in\n  let vb = b.view in\n  let vol = numel vout in\n  (match (out.buffer, a.buffer, b.buffer) with\n  | Float64 out_arr, Float64 a_arr, Float64 b_arr ->\n      par out.context.pool vol (fun s e -> Op_mul.mul_float64 a_arr b_arr out_arr va vb vout s e)\n  | Float32 out_arr, Float32 a_arr, Float32 b_arr ->\n      par out.context.pool vol (fun s e -> Op_mul.mul_float32 a_arr b_arr out_arr va vb vout s e)\n  | Int32 out_arr, Int32 a_arr, Int32 b_arr ->\n      par out.context.pool vol (fun s e -> Op_mul.mul_int32 a_arr b_arr out_arr va vb vout s e)\n  | Int64 out_arr, Int64 a_arr, Int64 b_arr ->\n      par out.context.pool vol (fun s e -> Op_mul.mul_int64 a_arr b_arr out_arr va vb vout s e)\n  | _ -> invalid_arg \"mul: unsupported dtype\");\n  out\n\nlet idiv (type a b) (a : (a, b) t) (b : (a, b) t) : (a, b) t =\n  let out = buffer a.context a.dtype (shape a.view) in\n  let vout = out.view in\n  let va = a.view in\n  let vb = b.view in\n  let vol = numel vout in\n  (match (out.buffer, a.buffer, b.buffer) with\n  | Float64 out_arr, Float64 a_arr, Float64 b_arr ->\n      par out.context.pool vol (fun s e -> Op_idiv.idiv_float64 a_arr b_arr out_arr va vb vout s e)\n  | Float32 out_arr, Float32 a_arr, Float32 b_arr ->\n      par out.context.pool vol (fun s e -> Op_idiv.idiv_float32 a_arr b_arr out_arr va vb vout s e)\n  | Int32 out_arr, Int32 a_arr, Int32 b_arr ->\n      par out.context.pool vol (fun s e -> Op_idiv.idiv_int32 a_arr b_arr out_arr va vb vout s e)\n  | Int64 out_arr, Int64 a_arr, Int64 b_arr ->\n      par out.context.pool 
vol (fun s e -> Op_idiv.idiv_int64 a_arr b_arr out_arr va vb vout s e)\n  | _ -> invalid_arg \"idiv: unsupported dtype\");\n  out\n\nlet fdiv (type a b) (a : (a, b) t) (b : (a, b) t) : (a, b) t =\n  let out = buffer a.context a.dtype (shape a.view) in\n  let vout = out.view in\n  let va = a.view in\n  let vb = b.view in\n  let vol = numel vout in\n  (match (out.buffer, a.buffer, b.buffer) with\n  | Float64 out_arr, Float64 a_arr, Float64 b_arr ->\n      par out.context.pool vol (fun s e -> Op_fdiv.fdiv_float64 a_arr b_arr out_arr va vb vout s e)\n  | Float32 out_arr, Float32 a_arr, Float32 b_arr ->\n      par out.context.pool vol (fun s e -> Op_fdiv.fdiv_float32 a_arr b_arr out_arr va vb vout s e)\n  | Int32 out_arr, Int32 a_arr, Int32 b_arr ->\n      par out.context.pool vol (fun s e -> Op_fdiv.fdiv_int32 a_arr b_arr out_arr va vb vout s e)\n  | Int64 out_arr, Int64 a_arr, Int64 b_arr ->\n      par out.context.pool vol (fun s e -> Op_fdiv.fdiv_int64 a_arr b_arr out_arr va vb vout s e)\n  | _ -> invalid_arg \"fdiv: unsupported dtype\");\n  out\n\nlet div x y =\n  let dt = dtype x in\n  if Dtype.is_int dt || Dtype.is_uint dt then idiv x y\n  else fdiv x y\n\nlet mod_ (type a b) (a : (a, b) t) (b : (a, b) t) : (a, b) t =\n  let out = buffer a.context a.dtype (shape a.view) in\n  let vout = out.view in\n  let va = a.view in\n  let vb = b.view in\n  let vol = numel vout in\n  (match (out.buffer, a.buffer, b.buffer) with\n  | Float64 out_arr, Float64 a_arr, Float64 b_arr ->\n      par out.context.pool vol (fun s e -> Op_mod.mod_float64 a_arr b_arr out_arr va vb vout s e)\n  | Float32 out_arr, Float32 a_arr, Float32 b_arr ->\n      par out.context.pool vol (fun s e -> Op_mod.mod_float32 a_arr b_arr out_arr va vb vout s e)\n  | Int32 out_arr, Int32 a_arr, Int32 b_arr ->\n      par out.context.pool vol (fun s e -> Op_mod.mod_int32 a_arr b_arr out_arr va vb vout s e)\n  | Int64 out_arr, Int64 a_arr, Int64 b_arr ->\n      par out.context.pool vol (fun s e -> 
Op_mod.mod_int64 a_arr b_arr out_arr va vb vout s e)\n  | _ -> invalid_arg \"mod_: unsupported dtype\");\n  out\n\nlet pow (type a b) (a : (a, b) t) (b : (a, b) t) : (a, b) t =\n  let out = buffer a.context a.dtype (shape a.view) in\n  let vout = out.view in\n  let va = a.view in\n  let vb = b.view in\n  let vol = numel vout in\n  (match (out.buffer, a.buffer, b.buffer) with\n  | Float64 out_arr, Float64 a_arr, Float64 b_arr ->\n      par out.context.pool vol (fun s e -> Op_pow.pow_float64 a_arr b_arr out_arr va vb vout s e)\n  | Float32 out_arr, Float32 a_arr, Float32 b_arr ->\n      par out.context.pool vol (fun s e -> Op_pow.pow_float32 a_arr b_arr out_arr va vb vout s e)\n  | _ ->\n      invalid_arg \"pow: not implemented for unboxed ints\");\n  out\n\nlet cmpeq (type a b) (a : (a, b) t) (b : (a, b) t) : (bool, Nx_buffer.bool_elt) t =\n  let out = buffer a.context Dtype.Bool (shape a.view) in\n  let vout = out.view in\n  let va = a.view in\n  let vb = b.view in\n  let vol = numel vout in\n  (match (out.buffer, a.buffer, b.buffer) with\n  | Bool out_arr, Float64 a_arr, Float64 b_arr ->\n      par out.context.pool vol (fun s e -> Op_cmpeq.cmpeq_float64 a_arr b_arr out_arr va vb vout s e)\n  | Bool out_arr, Float32 a_arr, Float32 b_arr ->\n      par out.context.pool vol (fun s e -> Op_cmpeq.cmpeq_float32 a_arr b_arr out_arr va vb vout s e)\n  | Bool out_arr, Int32 a_arr, Int32 b_arr ->\n      par out.context.pool vol (fun s e -> Op_cmpeq.cmpeq_int32 a_arr b_arr out_arr va vb vout s e)\n  | Bool out_arr, Int64 a_arr, Int64 b_arr ->\n      par out.context.pool vol (fun s e -> Op_cmpeq.cmpeq_int64 a_arr b_arr out_arr va vb vout s e)\n  | _ -> invalid_arg \"cmpeq: unsupported dtype\");\n  out\n\nlet cmpne (type a b) (a : (a, b) t) (b : (a, b) t) : (bool, Nx_buffer.bool_elt) t =\n  let out = buffer a.context Dtype.Bool (shape a.view) in\n  let vout = out.view in\n  let va = a.view in\n  let vb = b.view in\n  let vol = numel vout in\n  (match (out.buffer, a.buffer, 
b.buffer) with\n  | Bool out_arr, Float64 a_arr, Float64 b_arr ->\n      par out.context.pool vol (fun s e -> Op_cmpne.cmpne_float64 a_arr b_arr out_arr va vb vout s e)\n  | Bool out_arr, Float32 a_arr, Float32 b_arr ->\n      par out.context.pool vol (fun s e -> Op_cmpne.cmpne_float32 a_arr b_arr out_arr va vb vout s e)\n  | Bool out_arr, Int32 a_arr, Int32 b_arr ->\n      par out.context.pool vol (fun s e -> Op_cmpne.cmpne_int32 a_arr b_arr out_arr va vb vout s e)\n  | Bool out_arr, Int64 a_arr, Int64 b_arr ->\n      par out.context.pool vol (fun s e -> Op_cmpne.cmpne_int64 a_arr b_arr out_arr va vb vout s e)\n  | _ -> invalid_arg \"cmpne: unsupported dtype\");\n  out\n\nlet cmplt (type a b) (a : (a, b) t) (b : (a, b) t) : (bool, Nx_buffer.bool_elt) t =\n  let out = buffer a.context Dtype.Bool (shape a.view) in\n  let vout = out.view in\n  let va = a.view in\n  let vb = b.view in\n  let vol = numel vout in\n  (match (out.buffer, a.buffer, b.buffer) with\n  | Bool out_arr, Float64 a_arr, Float64 b_arr ->\n      par out.context.pool vol (fun s e -> Op_cmplt.cmplt_float64 a_arr b_arr out_arr va vb vout s e)\n  | Bool out_arr, Float32 a_arr, Float32 b_arr ->\n      par out.context.pool vol (fun s e -> Op_cmplt.cmplt_float32 a_arr b_arr out_arr va vb vout s e)\n  | Bool out_arr, Int32 a_arr, Int32 b_arr ->\n      par out.context.pool vol (fun s e -> Op_cmplt.cmplt_int32 a_arr b_arr out_arr va vb vout s e)\n  | Bool out_arr, Int64 a_arr, Int64 b_arr ->\n      par out.context.pool vol (fun s e -> Op_cmplt.cmplt_int64 a_arr b_arr out_arr va vb vout s e)\n  | _ -> invalid_arg \"cmplt: unsupported dtype\");\n  out\n\nlet cmple (type a b) (a : (a, b) t) (b : (a, b) t) : (bool, Nx_buffer.bool_elt) t =\n  let out = buffer a.context Dtype.Bool (shape a.view) in\n  let vout = out.view in\n  let va = a.view in\n  let vb = b.view in\n  let vol = numel vout in\n  (match (out.buffer, a.buffer, b.buffer) with\n  | Bool out_arr, Float64 a_arr, Float64 b_arr ->\n      par 
out.context.pool vol (fun s e -> Op_cmple.cmple_float64 a_arr b_arr out_arr va vb vout s e)\n  | Bool out_arr, Float32 a_arr, Float32 b_arr ->\n      par out.context.pool vol (fun s e -> Op_cmple.cmple_float32 a_arr b_arr out_arr va vb vout s e)\n  | Bool out_arr, Int32 a_arr, Int32 b_arr ->\n      par out.context.pool vol (fun s e -> Op_cmple.cmple_int32 a_arr b_arr out_arr va vb vout s e)\n  | Bool out_arr, Int64 a_arr, Int64 b_arr ->\n      par out.context.pool vol (fun s e -> Op_cmple.cmple_int64 a_arr b_arr out_arr va vb vout s e)\n  | _ -> invalid_arg \"cmple: unsupported dtype\");\n  out\n\nlet max (type a b) (a : (a, b) t) (b : (a, b) t) : (a, b) t =\n  let out = buffer a.context a.dtype (shape a.view) in\n  let vout = out.view in\n  let va = a.view in\n  let vb = b.view in\n  let vol = numel vout in\n  (match (out.buffer, a.buffer, b.buffer) with\n  | Float64 out_arr, Float64 a_arr, Float64 b_arr ->\n      par out.context.pool vol (fun s e -> Op_max.max_float64 a_arr b_arr out_arr va vb vout s e)\n  | Float32 out_arr, Float32 a_arr, Float32 b_arr ->\n      par out.context.pool vol (fun s e -> Op_max.max_float32 a_arr b_arr out_arr va vb vout s e)\n  | Int32 out_arr, Int32 a_arr, Int32 b_arr ->\n      par out.context.pool vol (fun s e -> Op_max.max_int32 a_arr b_arr out_arr va vb vout s e)\n  | Int64 out_arr, Int64 a_arr, Int64 b_arr ->\n      par out.context.pool vol (fun s e -> Op_max.max_int64 a_arr b_arr out_arr va vb vout s e)\n  | _ -> invalid_arg \"max: unsupported dtype\");\n  out\n\nlet min (type a b) (a : (a, b) t) (b : (a, b) t) : (a, b) t =\n  let out = buffer a.context a.dtype (shape a.view) in\n  let vout = out.view in\n  let va = a.view in\n  let vb = b.view in\n  let vol = numel vout in\n  (match (out.buffer, a.buffer, b.buffer) with\n  | Float64 out_arr, Float64 a_arr, Float64 b_arr ->\n      par out.context.pool vol (fun s e -> Op_min.min_float64 a_arr b_arr out_arr va vb vout s e)\n  | Float32 out_arr, Float32 a_arr, Float32 b_arr ->\n   
   par out.context.pool vol (fun s e -> Op_min.min_float32 a_arr b_arr out_arr va vb vout s e)\n  | Int32 out_arr, Int32 a_arr, Int32 b_arr ->\n      par out.context.pool vol (fun s e -> Op_min.min_int32 a_arr b_arr out_arr va vb vout s e)\n  | Int64 out_arr, Int64 a_arr, Int64 b_arr ->\n      par out.context.pool vol (fun s e -> Op_min.min_int64 a_arr b_arr out_arr va vb vout s e)\n  | _ -> invalid_arg \"min: unsupported dtype\");\n  out\n\nlet xor (type a b) (a : (a, b) t) (b : (a, b) t) : (a, b) t =\n  let out = buffer a.context a.dtype (shape a.view) in\n  let vout = out.view in\n  let va = a.view in\n  let vb = b.view in\n  let vol = numel vout in\n  (match (out.buffer, a.buffer, b.buffer) with\n  | Int32 out_arr, Int32 a_arr, Int32 b_arr ->\n      par out.context.pool vol (fun s e -> Op_xor.xor_int32 a_arr b_arr out_arr va vb vout s e)\n  | Int64 out_arr, Int64 a_arr, Int64 b_arr ->\n      par out.context.pool vol (fun s e -> Op_xor.xor_int64 a_arr b_arr out_arr va vb vout s e)\n  | _ -> invalid_arg \"xor: unsupported dtype\");\n  out\n\nlet or_ (type a b) (a : (a, b) t) (b : (a, b) t) : (a, b) t =\n  let out = buffer a.context a.dtype (shape a.view) in\n  let vout = out.view in\n  let va = a.view in\n  let vb = b.view in\n  let vol = numel vout in\n  (match (out.buffer, a.buffer, b.buffer) with\n  | Int32 out_arr, Int32 a_arr, Int32 b_arr ->\n      par out.context.pool vol (fun s e -> Op_or.or_int32 a_arr b_arr out_arr va vb vout s e)\n  | Int64 out_arr, Int64 a_arr, Int64 b_arr ->\n      par out.context.pool vol (fun s e -> Op_or.or_int64 a_arr b_arr out_arr va vb vout s e)\n  | _ -> invalid_arg \"or_: unsupported dtype\");\n  out\n\nlet and_ (type a b) (a : (a, b) t) (b : (a, b) t) : (a, b) t =\n  let out = buffer a.context a.dtype (shape a.view) in\n  let vout = out.view in\n  let va = a.view in\n  let vb = b.view in\n  let vol = numel vout in\n  (match (out.buffer, a.buffer, b.buffer) with\n  | Int32 out_arr, Int32 a_arr, 
Int32 b_arr ->\n      par out.context.pool vol (fun s e -> Op_and.and_int32 a_arr b_arr out_arr va vb vout s e)\n  | Int64 out_arr, Int64 a_arr, Int64 b_arr ->\n      par out.context.pool vol (fun s e -> Op_and.and_int64 a_arr b_arr out_arr va vb vout s e)\n  | _ -> invalid_arg \"and_: unsupported dtype\");\n  out\n\nlet neg (type a b) (a : (a, b) t) : (a, b) t =\n  let out = buffer a.context a.dtype (shape a.view) in\n  let vout = out.view in\n  let va = a.view in\n  let vol = numel vout in\n  (match (out.buffer, a.buffer) with\n  | Float64 out_arr, Float64 a_arr ->\n      par out.context.pool vol (fun s e -> Op_neg.neg_float64 a_arr out_arr va vout s e)\n  | Float32 out_arr, Float32 a_arr ->\n      par out.context.pool vol (fun s e -> Op_neg.neg_float32 a_arr out_arr va vout s e)\n  | Int8 out_arr, Int8 a_arr ->\n      par out.context.pool vol (fun s e -> Op_neg.neg_int8 a_arr out_arr va vout s e)\n  | Int16 out_arr, Int16 a_arr ->\n      par out.context.pool vol (fun s e -> Op_neg.neg_int16 a_arr out_arr va vout s e)\n  | Int32 out_arr, Int32 a_arr ->\n      par out.context.pool vol (fun s e -> Op_neg.neg_int32 a_arr out_arr va vout s e)\n  | Int64 out_arr, Int64 a_arr ->\n      par out.context.pool vol (fun s e -> Op_neg.neg_int64 a_arr out_arr va vout s e)\n  | _ -> invalid_arg \"neg: unsupported dtype\");\n  out\n\nlet recip (type a b) (a : (a, b) t) : (a, b) t =\n  let out = buffer a.context a.dtype (shape a.view) in\n  let vout = out.view in\n  let va = a.view in\n  let vol = numel vout in\n  (match (out.buffer, a.buffer) with\n  | Float64 out_arr, Float64 a_arr ->\n      par out.context.pool vol (fun s e -> Op_recip.recip_float64 a_arr out_arr va vout s e)\n  | Float32 out_arr, Float32 a_arr ->\n      par out.context.pool vol (fun s e -> Op_recip.recip_float32 a_arr out_arr va vout s e)\n  | Int8 out_arr, Int8 a_arr ->\n      par out.context.pool vol (fun s e -> Op_recip.recip_int8 a_arr out_arr va vout s e)\n  | Int16 out_arr, Int16 
a_arr ->\n      par out.context.pool vol (fun s e -> Op_recip.recip_int16 a_arr out_arr va vout s e)\n  | Int32 out_arr, Int32 a_arr ->\n      par out.context.pool vol (fun s e -> Op_recip.recip_int32 a_arr out_arr va vout s e)\n  | Int64 out_arr, Int64 a_arr ->\n      par out.context.pool vol (fun s e -> Op_recip.recip_int64 a_arr out_arr va vout s e)\n  | _ -> invalid_arg \"recip: unsupported dtype\");\n  out\n\nlet abs (type a b) (a : (a, b) t) : (a, b) t =\n  let out = buffer a.context a.dtype (shape a.view) in\n  let vout = out.view in\n  let va = a.view in\n  let vol = numel vout in\n  (match (out.buffer, a.buffer) with\n  | Float64 out_arr, Float64 a_arr ->\n      par out.context.pool vol (fun s e -> Op_abs.abs_float64 a_arr out_arr va vout s e)\n  | Float32 out_arr, Float32 a_arr ->\n      par out.context.pool vol (fun s e -> Op_abs.abs_float32 a_arr out_arr va vout s e)\n  | Int8 out_arr, Int8 a_arr ->\n      par out.context.pool vol (fun s e -> Op_abs.abs_int8 a_arr out_arr va vout s e)\n  | Int16 out_arr, Int16 a_arr ->\n      par out.context.pool vol (fun s e -> Op_abs.abs_int16 a_arr out_arr va vout s e)\n  | Int32 out_arr, Int32 a_arr ->\n      par out.context.pool vol (fun s e -> Op_abs.abs_int32 a_arr out_arr va vout s e)\n  | Int64 out_arr, Int64 a_arr ->\n      par out.context.pool vol (fun s e -> Op_abs.abs_int64 a_arr out_arr va vout s e)\n  | _ -> invalid_arg \"abs: unsupported dtype\");\n  out\n\nlet sqrt (type a b) (a : (a, b) t) : (a, b) t =\n  let out = buffer a.context a.dtype (shape a.view) in\n  let vout = out.view in\n  let va = a.view in\n  let vol = numel vout in\n  (match (out.buffer, a.buffer) with\n  | Float64 out_arr, Float64 a_arr ->\n      par out.context.pool vol (fun s e -> Op_sqrt.sqrt_float64 a_arr out_arr va vout s e)\n  | Float32 out_arr, Float32 a_arr ->\n      par out.context.pool vol (fun s e -> Op_sqrt.sqrt_float32 a_arr out_arr va vout s e)\n  | _ ->\n      invalid_arg \"sqrt: not implemented for unboxed 
ints\");\n  out\n\nlet exp (type a b) (a : (a, b) t) : (a, b) t =\n  let out = buffer a.context a.dtype (shape a.view) in\n  let vout = out.view in\n  let va = a.view in\n  let vol = numel vout in\n  (match (out.buffer, a.buffer) with\n  | Float64 out_arr, Float64 a_arr ->\n      par out.context.pool vol (fun s e -> Op_exp.exp_float64 a_arr out_arr va vout s e)\n  | Float32 out_arr, Float32 a_arr ->\n      par out.context.pool vol (fun s e -> Op_exp.exp_float32 a_arr out_arr va vout s e)\n  | _ -> invalid_arg \"exp: not implemented for unboxed ints\");\n  out\n\nlet log (type a b) (a : (a, b) t) : (a, b) t =\n  let out = buffer a.context a.dtype (shape a.view) in\n  let vout = out.view in\n  let va = a.view in\n  let vol = numel vout in\n  (match (out.buffer, a.buffer) with\n  | Float64 out_arr, Float64 a_arr ->\n      par out.context.pool vol (fun s e -> Op_log.log_float64 a_arr out_arr va vout s e)\n  | Float32 out_arr, Float32 a_arr ->\n      par out.context.pool vol (fun s e -> Op_log.log_float32 a_arr out_arr va vout s e)\n  | _ -> invalid_arg \"log: not implemented for unboxed ints\");\n  out\n\nlet sin (type a b) (a : (a, b) t) : (a, b) t =\n  let out = buffer a.context a.dtype (shape a.view) in\n  let vout = out.view in\n  let va = a.view in\n  let vol = numel vout in\n  (match (out.buffer, a.buffer) with\n  | Float64 out_arr, Float64 a_arr ->\n      par out.context.pool vol (fun s e -> Op_sin.sin_float64 a_arr out_arr va vout s e)\n  | Float32 out_arr, Float32 a_arr ->\n      par out.context.pool vol (fun s e -> Op_sin.sin_float32 a_arr out_arr va vout s e)\n  | _ -> invalid_arg \"sin: not implemented for unboxed ints\");\n  out\n\nlet cos (type a b) (a : (a, b) t) : (a, b) t =\n  let out = buffer a.context a.dtype (shape a.view) in\n  let vout = out.view in\n  let va = a.view in\n  let vol = numel vout in\n  (match (out.buffer, a.buffer) with\n  | Float64 out_arr, Float64 a_arr ->\n      par out.context.pool vol (fun s e -> Op_cos.cos_float64 a_arr 
out_arr va vout s e)\n  | Float32 out_arr, Float32 a_arr ->\n      par out.context.pool vol (fun s e -> Op_cos.cos_float32 a_arr out_arr va vout s e)\n  | _ -> invalid_arg \"cos: not implemented for unboxed ints\");\n  out\n\nlet sign (type a b) (a : (a, b) t) : (a, b) t =\n  let out = buffer a.context a.dtype (shape a.view) in\n  let vout = out.view in\n  let va = a.view in\n  let vol = numel vout in\n  (match (out.buffer, a.buffer) with\n  | Float64 out_arr, Float64 a_arr ->\n      par out.context.pool vol (fun s e -> Op_sign.sign_float64 a_arr out_arr va vout s e)\n  | Float32 out_arr, Float32 a_arr ->\n      par out.context.pool vol (fun s e -> Op_sign.sign_float32 a_arr out_arr va vout s e)\n  | Int8 out_arr, Int8 a_arr ->\n      par out.context.pool vol (fun s e -> Op_sign.sign_int8 a_arr out_arr va vout s e)\n  | Int16 out_arr, Int16 a_arr ->\n      par out.context.pool vol (fun s e -> Op_sign.sign_int16 a_arr out_arr va vout s e)\n  | Int32 out_arr, Int32 a_arr ->\n      par out.context.pool vol (fun s e -> Op_sign.sign_int32 a_arr out_arr va vout s e)\n  | Int64 out_arr, Int64 a_arr ->\n      par out.context.pool vol (fun s e -> Op_sign.sign_int64 a_arr out_arr va vout s e)\n  | Bool out_arr, Bool a_arr ->\n      par out.context.pool vol (fun s e -> Op_sign.sign_bool a_arr out_arr va vout s e)\n  | _ -> assert false);\n  out\n\nlet tan (type a b) (a : (a, b) t) : (a, b) t =\n  let out = buffer a.context a.dtype (shape a.view) in\n  let vout = out.view in\n  let va = a.view in\n  let vol = numel vout in\n  (match (out.buffer, a.buffer) with\n  | Float64 out_arr, Float64 a_arr ->\n      par out.context.pool vol (fun s e -> Op_tan.tan_float64 a_arr out_arr va vout s e)\n  | Float32 out_arr, Float32 a_arr ->\n      par out.context.pool vol (fun s e -> Op_tan.tan_float32 a_arr out_arr va vout s e)\n  | _ -> invalid_arg \"tan: not implemented for unboxed ints\");\n  out\n\nlet asin (type a b) (a : (a, b) t) : (a, b) t =\n  let out = buffer a.context a.dtype 
(shape a.view) in\n  let vout = out.view in\n  let va = a.view in\n  let vol = numel vout in\n  (match (out.buffer, a.buffer) with\n  | Float64 out_arr, Float64 a_arr ->\n      par out.context.pool vol (fun s e -> Op_asin.asin_float64 a_arr out_arr va vout s e)\n  | Float32 out_arr, Float32 a_arr ->\n      par out.context.pool vol (fun s e -> Op_asin.asin_float32 a_arr out_arr va vout s e)\n  | _ -> invalid_arg \"asin: not implemented for unboxed ints\");\n  out\n\nlet acos (type a b) (a : (a, b) t) : (a, b) t =\n  let out = buffer a.context a.dtype (shape a.view) in\n  let vout = out.view in\n  let va = a.view in\n  let vol = numel vout in\n  (match (out.buffer, a.buffer) with\n  | Float64 out_arr, Float64 a_arr ->\n      par out.context.pool vol (fun s e -> Op_acos.acos_float64 a_arr out_arr va vout s e)\n  | Float32 out_arr, Float32 a_arr ->\n      par out.context.pool vol (fun s e -> Op_acos.acos_float32 a_arr out_arr va vout s e)\n  | _ -> invalid_arg \"acos: not implemented for unboxed ints\");\n  out\n\nlet atan (type a b) (a : (a, b) t) : (a, b) t =\n  let out = buffer a.context a.dtype (shape a.view) in\n  let vout = out.view in\n  let va = a.view in\n  let vol = numel vout in\n  (match (out.buffer, a.buffer) with\n  | Float64 out_arr, Float64 a_arr ->\n      par out.context.pool vol (fun s e -> Op_atan.atan_float64 a_arr out_arr va vout s e)\n  | Float32 out_arr, Float32 a_arr ->\n      par out.context.pool vol (fun s e -> Op_atan.atan_float32 a_arr out_arr va vout s e)\n  | _ -> invalid_arg \"atan: not implemented for unboxed ints\");\n  out\n\nlet atan2 (type a b) (a : (a, b) t) (b : (a, b) t) : (a, b) t =\n  let out = buffer a.context a.dtype (shape a.view) in\n  let vout = out.view in\n  let va = a.view in\n  let vb = b.view in\n  let vol = numel vout in\n  (match (out.buffer, a.buffer, b.buffer) with\n  | Float64 out_arr, Float64 a_arr, Float64 b_arr ->\n      par out.context.pool vol (fun s e ->\n          Op_atan2.atan2_float64 a_arr b_arr out_arr 
va vb vout s e)\n  | Float32 out_arr, Float32 a_arr, Float32 b_arr ->\n      par out.context.pool vol (fun s e ->\n          Op_atan2.atan2_float32 a_arr b_arr out_arr va vb vout s e)\n  | _ -> invalid_arg \"atan2: not implemented for unboxed ints\");\n  out\n\nlet sinh (type a b) (a : (a, b) t) : (a, b) t =\n  let out = buffer a.context a.dtype (shape a.view) in\n  let vout = out.view in\n  let va = a.view in\n  let vol = numel vout in\n  (match (out.buffer, a.buffer) with\n  | Float64 out_arr, Float64 a_arr ->\n      par out.context.pool vol (fun s e -> Op_sinh.sinh_float64 a_arr out_arr va vout s e)\n  | Float32 out_arr, Float32 a_arr ->\n      par out.context.pool vol (fun s e -> Op_sinh.sinh_float32 a_arr out_arr va vout s e)\n  | _ -> invalid_arg \"sinh: not implemented for unboxed ints\");\n  out\n\nlet cosh (type a b) (a : (a, b) t) : (a, b) t =\n  let out = buffer a.context a.dtype (shape a.view) in\n  let vout = out.view in\n  let va = a.view in\n  let vol = numel vout in\n  (match (out.buffer, a.buffer) with\n  | Float64 out_arr, Float64 a_arr ->\n      par out.context.pool vol (fun s e -> Op_cosh.cosh_float64 a_arr out_arr va vout s e)\n  | Float32 out_arr, Float32 a_arr ->\n      par out.context.pool vol (fun s e -> Op_cosh.cosh_float32 a_arr out_arr va vout s e)\n  | _ -> invalid_arg \"cosh: not implemented for unboxed ints\");\n  out\n\nlet tanh (type a b) (a : (a, b) t) : (a, b) t =\n  let out = buffer a.context a.dtype (shape a.view) in\n  let vout = out.view in\n  let va = a.view in\n  let vol = numel vout in\n  (match (out.buffer, a.buffer) with\n  | Float64 out_arr, Float64 a_arr ->\n      par out.context.pool vol (fun s e -> Op_tanh.tanh_float64 a_arr out_arr va vout s e)\n  | Float32 out_arr, Float32 a_arr ->\n      par out.context.pool vol (fun s e -> Op_tanh.tanh_float32 a_arr out_arr va vout s e)\n  | _ -> invalid_arg \"tanh: not implemented for unboxed ints\");\n  out\n\nlet trunc (type a b) (a : (a, b) t) : (a, b) t =\n  let out = buffer 
a.context a.dtype (shape a.view) in\n  let vout = out.view in\n  let va = a.view in\n  let vol = numel vout in\n  (match (out.buffer, a.buffer) with\n  | Float64 out_arr, Float64 a_arr ->\n      par out.context.pool vol (fun s e -> Op_trunc.trunc_float64 a_arr out_arr va vout s e)\n  | Float32 out_arr, Float32 a_arr ->\n      par out.context.pool vol (fun s e -> Op_trunc.trunc_float32 a_arr out_arr va vout s e)\n  | Int8 out_arr, Int8 a_arr ->\n      par out.context.pool vol (fun s e -> Op_trunc.trunc_int8 a_arr out_arr va vout s e)\n  | Int16 out_arr, Int16 a_arr ->\n      par out.context.pool vol (fun s e -> Op_trunc.trunc_int16 a_arr out_arr va vout s e)\n  | Int32 out_arr, Int32 a_arr ->\n      par out.context.pool vol (fun s e -> Op_trunc.trunc_int32 a_arr out_arr va vout s e)\n  | Int64 out_arr, Int64 a_arr ->\n      par out.context.pool vol (fun s e -> Op_trunc.trunc_int64 a_arr out_arr va vout s e)\n  | Bool out_arr, Bool a_arr ->\n      par out.context.pool vol (fun s e -> Op_trunc.trunc_bool a_arr out_arr va vout s e)\n  | _ -> assert false);\n  out\n\nlet ceil (type a b) (a : (a, b) t) : (a, b) t =\n  let out = buffer a.context a.dtype (shape a.view) in\n  let vout = out.view in\n  let va = a.view in\n  let vol = numel vout in\n  (match (out.buffer, a.buffer) with\n  | Float64 out_arr, Float64 a_arr ->\n      par out.context.pool vol (fun s e -> Op_ceil.ceil_float64 a_arr out_arr va vout s e)\n  | Float32 out_arr, Float32 a_arr ->\n      par out.context.pool vol (fun s e -> Op_ceil.ceil_float32 a_arr out_arr va vout s e)\n  | Int8 out_arr, Int8 a_arr ->\n      par out.context.pool vol (fun s e -> Op_ceil.ceil_int8 a_arr out_arr va vout s e)\n  | Int16 out_arr, Int16 a_arr ->\n      par out.context.pool vol (fun s e -> Op_ceil.ceil_int16 a_arr out_arr va vout s e)\n  | Int32 out_arr, Int32 a_arr ->\n      par out.context.pool vol (fun s e -> Op_ceil.ceil_int32 a_arr out_arr va vout s e)\n  | Int64 out_arr, Int64 a_arr ->\n      par out.context.pool vol 
(fun s e -> Op_ceil.ceil_int64 a_arr out_arr va vout s e)\n  | Bool out_arr, Bool a_arr ->\n      par out.context.pool vol (fun s e -> Op_ceil.ceil_bool a_arr out_arr va vout s e)\n  | _ -> assert false);\n  out\n\nlet floor (type a b) (a : (a, b) t) : (a, b) t =\n  let out = buffer a.context a.dtype (shape a.view) in\n  let vout = out.view in\n  let va = a.view in\n  let vol = numel vout in\n  (match (out.buffer, a.buffer) with\n  | Float64 out_arr, Float64 a_arr ->\n      par out.context.pool vol (fun s e -> Op_floor.floor_float64 a_arr out_arr va vout s e)\n  | Float32 out_arr, Float32 a_arr ->\n      par out.context.pool vol (fun s e -> Op_floor.floor_float32 a_arr out_arr va vout s e)\n  | Int8 out_arr, Int8 a_arr ->\n      par out.context.pool vol (fun s e -> Op_floor.floor_int8 a_arr out_arr va vout s e)\n  | Int16 out_arr, Int16 a_arr ->\n      par out.context.pool vol (fun s e -> Op_floor.floor_int16 a_arr out_arr va vout s e)\n  | Int32 out_arr, Int32 a_arr ->\n      par out.context.pool vol (fun s e -> Op_floor.floor_int32 a_arr out_arr va vout s e)\n  | Int64 out_arr, Int64 a_arr ->\n      par out.context.pool vol (fun s e -> Op_floor.floor_int64 a_arr out_arr va vout s e)\n  | Bool out_arr, Bool a_arr ->\n      par out.context.pool vol (fun s e -> Op_floor.floor_bool a_arr out_arr va vout s e)\n  | _ -> assert false);\n  out\n\nlet round (type a b) (a : (a, b) t) : (a, b) t =\n  let out = buffer a.context a.dtype (shape a.view) in\n  let vout = out.view in\n  let va = a.view in\n  let vol = numel vout in\n  (match (out.buffer, a.buffer) with\n  | Float64 out_arr, Float64 a_arr ->\n      par out.context.pool vol (fun s e -> Op_round.round_float64 a_arr out_arr va vout s e)\n  | Float32 out_arr, Float32 a_arr ->\n      par out.context.pool vol (fun s e -> Op_round.round_float32 a_arr out_arr va vout s e)\n  | Int8 out_arr, Int8 a_arr ->\n      par out.context.pool vol (fun s e -> Op_round.round_int8 a_arr out_arr va vout s e)\n  | Int16 out_arr, Int16 
a_arr ->\n      par out.context.pool vol (fun s e -> Op_round.round_int16 a_arr out_arr va vout s e)\n  | Int32 out_arr, Int32 a_arr ->\n      par out.context.pool vol (fun s e -> Op_round.round_int32 a_arr out_arr va vout s e)\n  | Int64 out_arr, Int64 a_arr ->\n      par out.context.pool vol (fun s e -> Op_round.round_int64 a_arr out_arr va vout s e)\n  | Bool out_arr, Bool a_arr ->\n      par out.context.pool vol (fun s e -> Op_round.round_bool a_arr out_arr va vout s e)\n  | _ -> assert false);\n  out\n\nlet erf (type a b) (a : (a, b) t) : (a, b) t =\n  let out = buffer a.context a.dtype (shape a.view) in\n  let vout = out.view in\n  let va = a.view in\n  let vol = numel vout in\n  (match (out.buffer, a.buffer) with\n  | Float64 out_arr, Float64 a_arr ->\n      par out.context.pool vol (fun s e -> Op_erf.erf_float64 a_arr out_arr va vout s e)\n  | Float32 out_arr, Float32 a_arr ->\n      par out.context.pool vol (fun s e -> Op_erf.erf_float32 a_arr out_arr va vout s e)\n  | _ -> invalid_arg \"erf: not implemented for unboxed ints\");\n  out\n\nlet where (type a b) (cond : (bool, Nx_buffer.bool_elt) t)\n    (if_true : (a, b) t) (if_false : (a, b) t) : (a, b) t =\n  let out = buffer if_true.context if_true.dtype (shape if_true.view) in\n  let vout = out.view in\n  let vtrue = if_true.view in\n  let vfalse = if_false.view in\n  let vcond = cond.view in\n  let vol = numel vout in\n  (match (out.buffer, cond.buffer, if_true.buffer, if_false.buffer) with\n  | Float64 out_arr, Bool cond_arr, Float64 true_arr, Float64 false_arr ->\n      par out.context.pool vol (fun s e ->\n          Op_where.where_float64 cond_arr true_arr false_arr out_arr vcond vtrue vfalse vout s e)\n  | Float32 out_arr, Bool cond_arr, Float32 true_arr, Float32 false_arr ->\n      par out.context.pool vol (fun s e ->\n          Op_where.where_float32 cond_arr true_arr false_arr out_arr vcond vtrue vfalse vout s e)\n  | Int64 out_arr, Bool cond_arr, Int64 true_arr, Int64 false_arr ->\n      par 
out.context.pool vol (fun s e ->\n          Op_where.where_int64 cond_arr true_arr false_arr out_arr vcond vtrue vfalse vout s e)\n  | Int32 out_arr, Bool cond_arr, Int32 true_arr, Int32 false_arr ->\n      par out.context.pool vol (fun s e ->\n          Op_where.where_int32 cond_arr true_arr false_arr out_arr vcond vtrue vfalse vout s e)\n  | Int8 out_arr, Bool cond_arr, Int8 true_arr, Int8 false_arr ->\n      par out.context.pool vol (fun s e ->\n          Op_where.where_int8 cond_arr true_arr false_arr out_arr vcond vtrue vfalse vout s e)\n  | Int16 out_arr, Bool cond_arr, Int16 true_arr, Int16 false_arr ->\n      par out.context.pool vol (fun s e ->\n          Op_where.where_int16 cond_arr true_arr false_arr out_arr vcond vtrue vfalse vout s e)\n  | _ -> invalid_arg \"where: not implemented for this dtype\");\n  out\n\nlet reduce_sum (type a b) ~axes ~keepdims (a : (a, b) t) : (a, b) t =\n  let out_shape = Shape.reduce_output_shape (shape a.view) axes keepdims in\n  let out = buffer a.context a.dtype out_shape in\n  let vout = out.view in\n  let va = a.view in\n  (match (out.buffer, a.buffer) with\n  | Float64 out_arr, Float64 a_arr ->\n      Reduce_ops.reduce_sum_float64 out.context.pool ~out_arr ~a_arr ~va ~vout\n        ~axes ~keepdims\n  | Float32 out_arr, Float32 a_arr ->\n      Reduce_ops.reduce_sum_float32 out.context.pool ~out_arr ~a_arr ~va ~vout\n        ~axes ~keepdims\n  | Int32 out_arr, Int32 a_arr ->\n      Reduce_ops.reduce_sum_int32 out.context.pool ~out_arr ~a_arr ~va ~vout\n        ~axes ~keepdims\n  | Int64 out_arr, Int64 a_arr ->\n      Reduce_ops.reduce_sum_int64 out.context.pool ~out_arr ~a_arr ~va ~vout\n        ~axes ~keepdims\n  | _ -> invalid_arg \"reduce_sum: unsupported dtype\");\n  out\n\nlet reduce_prod (type a b) ~axes ~keepdims (a : (a, b) t) : (a, b) t =\n  let out_shape = Shape.reduce_output_shape (shape a.view) axes keepdims in\n  let out = buffer a.context a.dtype out_shape in\n  let vout = out.view in\n  let va = a.view in\n  
(match (out.buffer, a.buffer) with\n  | Float64 out_arr, Float64 a_arr ->\n      Reduce_ops.reduce_prod_float64 out.context.pool ~out_arr ~a_arr ~va ~vout\n        ~axes ~keepdims\n  | Float32 out_arr, Float32 a_arr ->\n      Reduce_ops.reduce_prod_float32 out.context.pool ~out_arr ~a_arr ~va ~vout\n        ~axes ~keepdims\n  | Int32 out_arr, Int32 a_arr ->\n      Reduce_ops.reduce_prod_int32 out.context.pool ~out_arr ~a_arr ~va ~vout\n        ~axes ~keepdims\n  | Int64 out_arr, Int64 a_arr ->\n      Reduce_ops.reduce_prod_int64 out.context.pool ~out_arr ~a_arr ~va ~vout\n        ~axes ~keepdims\n  | _ -> invalid_arg \"reduce_prod: unsupported dtype\");\n  out\n\nlet reduce_max (type a b) ~axes ~keepdims (a : (a, b) t) : (a, b) t =\n  let out_shape = Shape.reduce_output_shape (shape a.view) axes keepdims in\n  let out = buffer a.context a.dtype out_shape in\n  let vout = out.view in\n  let va = a.view in\n  (match (out.buffer, a.buffer) with\n  | Float64 out_arr, Float64 a_arr ->\n      Reduce_ops.reduce_max_float64 out.context.pool ~out_arr ~a_arr ~va ~vout\n        ~axes ~keepdims\n  | Float32 out_arr, Float32 a_arr ->\n      Reduce_ops.reduce_max_float32 out.context.pool ~out_arr ~a_arr ~va ~vout\n        ~axes ~keepdims\n  | Int32 out_arr, Int32 a_arr ->\n      Reduce_ops.reduce_max_int32 out.context.pool ~out_arr ~a_arr ~va ~vout\n        ~axes ~keepdims\n  | Int64 out_arr, Int64 a_arr ->\n      Reduce_ops.reduce_max_int64 out.context.pool ~out_arr ~a_arr ~va ~vout\n        ~axes ~keepdims\n  | _ -> invalid_arg \"reduce_max: unsupported dtype\");\n  out\n\nlet reduce_min (type a b) ~axes ~keepdims (a : (a, b) t) : (a, b) t =\n  let out_shape = Shape.reduce_output_shape (shape a.view) axes keepdims in\n  let out = buffer a.context a.dtype out_shape in\n  let vout = out.view in\n  let va = a.view in\n  (match (out.buffer, a.buffer) with\n  | Float64 out_arr, Float64 a_arr ->\n      Reduce_ops.reduce_min_float64 out.context.pool ~out_arr ~a_arr ~va ~vout\n        ~axes 
~keepdims\n  | Float32 out_arr, Float32 a_arr ->\n      Reduce_ops.reduce_min_float32 out.context.pool ~out_arr ~a_arr ~va ~vout\n        ~axes ~keepdims\n  | Int32 out_arr, Int32 a_arr ->\n      Reduce_ops.reduce_min_int32 out.context.pool ~out_arr ~a_arr ~va ~vout\n        ~axes ~keepdims\n  | Int64 out_arr, Int64 a_arr ->\n      Reduce_ops.reduce_min_int64 out.context.pool ~out_arr ~a_arr ~va ~vout\n        ~axes ~keepdims\n  | _ -> invalid_arg \"reduce_min: unsupported dtype\");\n  out\n\nlet associative_scan (type a b) ~(axis : int)\n    ~(op : [ `Sum | `Prod | `Max | `Min ]) (x : (a, b) t) : (a, b) t =\n  let in_shape = shape x.view in\n  let out = buffer x.context x.dtype in_shape in\n  let rank = Array.length in_shape in\n  if rank = 0 then\n    invalid_arg \"associative_scan: rank-0 tensor, requires rank >= 1\";\n  if axis < 0 || axis >= rank then\n    err \"associative_scan\" \"axis %d out of bounds for %dD tensor\" axis rank;\n  let op_name =\n    match op with `Sum -> \"sum\" | `Prod -> \"prod\" | `Max -> \"max\" | `Min -> \"min\"\n  in\n  let unsupported_for_dtype dtype =\n    err \"associative_scan\" \"%s not supported for dtype %s\" op_name\n      (Dtype.to_string dtype)\n  in\n  (match (out.buffer, x.buffer) with\n  | Float64 out_arr, Float64 in_arr ->\n      Op_associative_scan.scan_float64 out.context.pool ~out_arr ~in_arr\n        ~shape:in_shape ~axis ~in_view:x.view ~out_view:out.view ~op\n  | Float32 out_arr, Float32 in_arr ->\n      Op_associative_scan.scan_float32 out.context.pool ~out_arr ~in_arr\n        ~shape:in_shape ~axis ~in_view:x.view ~out_view:out.view ~op\n  | Int8 out_arr, Int8 in_arr ->\n      Op_associative_scan.scan_int8 out.context.pool ~out_arr ~in_arr\n        ~shape:in_shape ~axis ~in_view:x.view ~out_view:out.view ~op\n  | Int16 out_arr, Int16 in_arr ->\n      Op_associative_scan.scan_int16 out.context.pool ~out_arr ~in_arr\n        ~shape:in_shape ~axis ~in_view:x.view ~out_view:out.view ~op\n  | Int32 out_arr, Int32 in_arr ->\n    
  Op_associative_scan.scan_int32 out.context.pool ~out_arr ~in_arr\n        ~shape:in_shape ~axis ~in_view:x.view ~out_view:out.view ~op\n  | Int64 out_arr, Int64 in_arr ->\n      Op_associative_scan.scan_int64 out.context.pool ~out_arr ~in_arr\n        ~shape:in_shape ~axis ~in_view:x.view ~out_view:out.view ~op\n  | Bool _, Bool _ -> unsupported_for_dtype out.dtype\n  | _ -> invalid_arg \"associative_scan: unsupported dtype\");\n  out\n\nlet argmax (type a b) ~axis ~keepdims (x : (a, b) t) : (int32, Dtype.int32_elt) t =\n  let out_shape = Shape.reduce_output_shape (shape x.view) [| axis |] keepdims in\n  let out = buffer x.context Dtype.Int32 out_shape in\n  let vout = out.view in\n  let va = x.view in\n  (match (out.buffer, x.buffer) with\n  | Int32 out_arr, Float64 a_arr ->\n      Op_argmax.argmax_float64 out.context.pool ~out_arr ~a_arr ~va ~vout ~axis\n        ~keepdims\n  | Int32 out_arr, Float32 a_arr ->\n      Op_argmax.argmax_float32 out.context.pool ~out_arr ~a_arr ~va ~vout ~axis\n        ~keepdims\n  | Int32 out_arr, Int32 a_arr ->\n      Op_argmax.argmax_int32 out.context.pool ~out_arr ~a_arr ~va ~vout ~axis\n        ~keepdims\n  | Int32 out_arr, Int64 a_arr ->\n      Op_argmax.argmax_int64 out.context.pool ~out_arr ~a_arr ~va ~vout ~axis\n        ~keepdims\n  | _ -> invalid_arg \"argmax: unsupported dtype\");\n  out\n\nlet argmin (type a b) ~axis ~keepdims (x : (a, b) t) : (int32, Dtype.int32_elt) t =\n  let out_shape = Shape.reduce_output_shape (shape x.view) [| axis |] keepdims in\n  let out = buffer x.context Dtype.Int32 out_shape in\n  let vout = out.view in\n  let va = x.view in\n  (match (out.buffer, x.buffer) with\n  | Int32 out_arr, Float64 a_arr ->\n      Op_argmax.argmin_float64 out.context.pool ~out_arr ~a_arr ~va ~vout ~axis\n        ~keepdims\n  | Int32 out_arr, Float32 a_arr ->\n      Op_argmax.argmin_float32 out.context.pool ~out_arr ~a_arr ~va ~vout ~axis\n        ~keepdims\n  | Int32 out_arr, Int32 a_arr ->\n      
Op_argmax.argmin_int32 out.context.pool ~out_arr ~a_arr ~va ~vout ~axis\n        ~keepdims\n  | Int32 out_arr, Int64 a_arr ->\n      Op_argmax.argmin_int64 out.context.pool ~out_arr ~a_arr ~va ~vout ~axis\n        ~keepdims\n  | _ -> invalid_arg \"argmin: unsupported dtype\");\n  out\n\nlet sort (type a b) ~axis ~descending (x : (a, b) t) : (a, b) t =\n  let out = buffer x.context x.dtype (shape x.view) in\n  (match (out.buffer, x.buffer) with\n  | Float64 out_arr, Float64 a_arr ->\n      Op_sort.sort_float64 out.context.pool ~out_arr ~a_arr ~va:x.view\n        ~vout:out.view ~axis ~descending\n  | Float32 out_arr, Float32 a_arr ->\n      Op_sort.sort_float32 out.context.pool ~out_arr ~a_arr ~va:x.view\n        ~vout:out.view ~axis ~descending\n  | Int32 out_arr, Int32 a_arr ->\n      Op_sort.sort_int32 out.context.pool ~out_arr ~a_arr ~va:x.view\n        ~vout:out.view ~axis ~descending\n  | Int64 out_arr, Int64 a_arr ->\n      Op_sort.sort_int64 out.context.pool ~out_arr ~a_arr ~va:x.view\n        ~vout:out.view ~axis ~descending\n  | _ -> invalid_arg \"sort: unsupported dtype\");\n  out\n\nlet argsort (type a b) ~axis ~descending (x : (a, b) t) : (int32, Dtype.int32_elt) t =\n  let out = buffer x.context Dtype.Int32 (shape x.view) in\n  (match (out.buffer, x.buffer) with\n  | Int32 out_arr, Float64 a_arr ->\n      Op_sort.argsort_float64 out.context.pool ~out_arr ~a_arr ~va:x.view\n        ~vout:out.view ~axis ~descending\n  | Int32 out_arr, Float32 a_arr ->\n      Op_sort.argsort_float32 out.context.pool ~out_arr ~a_arr ~va:x.view\n        ~vout:out.view ~axis ~descending\n  | Int32 out_arr, Int32 a_arr ->\n      Op_sort.argsort_int32 out.context.pool ~out_arr ~a_arr ~va:x.view\n        ~vout:out.view ~axis ~descending\n  | Int32 out_arr, Int64 a_arr ->\n      Op_sort.argsort_int64 out.context.pool ~out_arr ~a_arr ~va:x.view\n        ~vout:out.view ~axis ~descending\n  | _ -> invalid_arg \"argsort: unsupported dtype\");\n  out\n\nlet from_host (type a b) ctx 
(array : (a, b) Nx_buffer.t) :\n    (a, b) t =\n  let dtype = Dtype.of_buffer_kind (Nx_buffer.kind array) in\n  let size = Nx_buffer.length array in\n  let view = View.create [| size |] in\n  let ba = Nx_buffer.to_bigarray1 array in\n  match dtype with\n  | Dtype.Float64 ->\n    let unboxed_array = Array.ba_to_unboxed_float_array ba in\n    { context = ctx; dtype; buffer = Float64 unboxed_array; view }\n  | Dtype.Float32 ->\n    let unboxed_array = Array.ba_to_unboxed_float32_array ba in\n    { context = ctx; dtype; buffer = Float32 unboxed_array; view }\n  | Dtype.Int64 ->\n    let unboxed_array = Array.ba_to_unboxed_int64_array ba in\n    { context = ctx; dtype; buffer = Int64 unboxed_array; view }\n  | Dtype.Int32 ->\n    let unboxed_array = Array.ba_to_unboxed_int32_array ba in\n    { context = ctx; dtype; buffer = Int32 unboxed_array; view }\n  | Dtype.Int8 ->\n    let unboxed_array = Array.ba_to_unboxed_int8_array ba in\n    { context = ctx; dtype; buffer = Int8 unboxed_array; view }\n  | Dtype.Int16 ->\n    let unboxed_array = Array.ba_to_unboxed_int16_array ba in\n    { context = ctx; dtype; buffer = Int16 unboxed_array; view }\n  | Dtype.Bool ->\n    let unboxed_array = Array.make size false in\n    for i = 0 to size - 1 do\n      unboxed_array.(i) <- Nx_buffer.unsafe_get array i\n    done;\n    { context = ctx; dtype; buffer = Bool unboxed_array; view }\n  | _ -> invalid_arg \"from_host: unsupported dtype\"\n\nlet expand x shape = { x with view = View.expand x.view shape }\nlet reshape x shape = { x with view = View.reshape x.view shape }\nlet permute x axes = { x with view = View.permute x.view axes }\nlet shrink x bounds = { x with view = View.shrink x.view bounds }\nlet flip x axes = { x with view = View.flip x.view axes }\n\nlet pad (type a b) (x : (a, b) t) (padding : (int * int) array)\n    (fill_value : a) : (a, b) t =\n  let in_view = x.view in\n  let in_shape = shape in_view in\n  let ndim = Array.length in_shape in\n  if Array.length padding <> 
ndim then\n    invalid_arg \"pad: padding rank mismatch\";\n  let out_shape =\n    Array.init ndim (fun i ->\n        let before, after = padding.(i) in\n        if before < 0 || after < 0 then\n          invalid_arg \"pad: padding values must be non-negative\";\n        in_shape.(i) + before + after)\n  in\n  let out_view = View.create out_shape in\n  let in_numel = numel in_view in\n  let out_numel = numel out_view in\n  let in_offset = View.offset in_view in\n  let out_offset = View.offset out_view in\n  let in_strides = View.strides in_view in\n  let out_strides = View.strides out_view in\n  match x with\n  | { dtype = Dtype.Float64; buffer = Float64 in_arr; context; _ } ->\n    let fill_value = Float_u.of_float fill_value in\n    let out_arr = Array.make_float64 out_numel in\n    for i = 0 to out_numel - 1 do\n      Array.unsafe_set out_arr i fill_value\n    done;\n    Op_pad.pad_float64 in_arr out_arr in_shape padding in_offset out_offset\n      in_strides out_strides in_numel;\n    { dtype = Dtype.Float64; buffer = Float64 out_arr; view = out_view; context }\n  | { dtype = Dtype.Float32; buffer = Float32 in_arr; context; _ } ->\n    let fill_value = Float32_u.of_float (Float_u.of_float fill_value) in\n    let out_arr = Array.make_float32 out_numel in\n    for i = 0 to out_numel - 1 do\n      Array.unsafe_set out_arr i fill_value\n    done;\n    Op_pad.pad_float32 in_arr out_arr in_shape padding in_offset out_offset\n      in_strides out_strides in_numel;\n    { dtype = Dtype.Float32; buffer = Float32 out_arr; view = out_view; context }\n  | { dtype = Dtype.Int8; buffer = Int8 in_arr; context; _ } ->\n    let fill_value = Int8_u.of_int fill_value in\n    let out_arr = Array.make_int8 out_numel in\n    for i = 0 to out_numel - 1 do\n      Array.unsafe_set out_arr i fill_value\n    done;\n    Op_pad.pad_int8 in_arr out_arr in_shape padding in_offset out_offset\n      in_strides out_strides in_numel;\n    { dtype = Dtype.Int8; buffer = Int8 out_arr; view = 
out_view; context }\n  | { dtype = Dtype.Int16; buffer = Int16 in_arr; context; _ } ->\n    let fill_value = Int16_u.of_int fill_value in\n    let out_arr = Array.make_int16 out_numel in\n    for i = 0 to out_numel - 1 do\n      Array.unsafe_set out_arr i fill_value\n    done;\n    Op_pad.pad_int16 in_arr out_arr in_shape padding in_offset out_offset\n      in_strides out_strides in_numel;\n    { dtype = Dtype.Int16; buffer = Int16 out_arr; view = out_view; context }\n  | { dtype = Dtype.Int32; buffer = Int32 in_arr; context; _ } ->\n    let fill_value = Int32_u.of_int32 fill_value in\n    let out_arr = Array.make_int32 out_numel in\n    for i = 0 to out_numel - 1 do\n      Array.unsafe_set out_arr i fill_value\n    done;\n    Op_pad.pad_int32 in_arr out_arr in_shape padding in_offset out_offset\n      in_strides out_strides in_numel;\n    { dtype = Dtype.Int32; buffer = Int32 out_arr; view = out_view; context }\n  | { dtype = Dtype.Int64; buffer = Int64 in_arr; context; _ } ->\n    let fill_value = Int64_u.of_int64 fill_value in\n    let out_arr = Array.make_int64 out_numel in\n    for i = 0 to out_numel - 1 do\n      Array.unsafe_set out_arr i fill_value\n    done;\n    Op_pad.pad_int64 in_arr out_arr in_shape padding in_offset out_offset\n      in_strides out_strides in_numel;\n    { dtype = Dtype.Int64; buffer = Int64 out_arr; view = out_view; context }\n  | { dtype = Dtype.Bool; buffer = Bool in_arr; context; _ } ->\n    let out_arr = Array.make out_numel fill_value in\n    Op_pad.pad_bool in_arr out_arr in_shape padding in_offset out_offset\n      in_strides out_strides in_numel;\n    { dtype = Dtype.Bool; buffer = Bool out_arr; view = out_view; context }\n  | _ -> assert false\n\nlet cat (type a b) (xs : (a, b) t list) ~(axis : int) : (a, b) t =\n  match xs with\n  | [] -> invalid_arg \"cat: empty input list\"\n  | x0 :: _ ->\n    let first_shape = shape x0.view in\n    let rank = Array.length first_shape in\n    let axis = if axis < 0 then rank + axis else 
axis in\n    if axis < 0 || axis >= rank then\n      err \"cat\" \"axis %d out of bounds for %dD tensor\" axis rank;\n    let total_axis_size =\n      List.fold_left (fun acc t -> acc + (shape t.view).(axis)) 0 xs\n    in\n    let out_shape = Array.copy first_shape in\n    out_shape.(axis) <- total_axis_size;\n    let out = buffer x0.context x0.dtype out_shape in\n    let out_offset = View.offset out.view in\n    let out_strides = View.strides out.view in\n    (match (x0, out) with\n    | { buffer = Float64 _; _ }, { buffer = Float64 out_arr; _ } ->\n      let srcs =\n        List.map\n          (fun x -> match x.buffer with Float64 a -> (a, x.view) | _ -> assert false)\n          xs\n      in\n      Op_cat.cat_float64 srcs out_arr rank axis out_offset out_strides\n    | { buffer = Float32 _; _ }, { buffer = Float32 out_arr; _ } ->\n      let srcs =\n        List.map\n          (fun x -> match x.buffer with Float32 a -> (a, x.view) | _ -> assert false)\n          xs\n      in\n      Op_cat.cat_float32 srcs out_arr rank axis out_offset out_strides\n    | { buffer = Int8 _; _ }, { buffer = Int8 out_arr; _ } ->\n      let srcs =\n        List.map\n          (fun x -> match x.buffer with Int8 a -> (a, x.view) | _ -> assert false)\n          xs\n      in\n      Op_cat.cat_int8 srcs out_arr rank axis out_offset out_strides\n    | { buffer = Int16 _; _ }, { buffer = Int16 out_arr; _ } ->\n      let srcs =\n        List.map\n          (fun x -> match x.buffer with Int16 a -> (a, x.view) | _ -> assert false)\n          xs\n      in\n      Op_cat.cat_int16 srcs out_arr rank axis out_offset out_strides\n    | { buffer = Int32 _; _ }, { buffer = Int32 out_arr; _ } ->\n      let srcs =\n        List.map\n          (fun x -> match x.buffer with Int32 a -> (a, x.view) | _ -> assert false)\n          xs\n      in\n      Op_cat.cat_int32 srcs out_arr rank axis out_offset out_strides\n    | { buffer = Int64 _; _ }, { buffer = Int64 out_arr; _ } ->\n      let srcs =\n        
List.map\n          (fun x -> match x.buffer with Int64 a -> (a, x.view) | _ -> assert false)\n          xs\n      in\n      Op_cat.cat_int64 srcs out_arr rank axis out_offset out_strides\n    | { buffer = Bool _; _ }, { buffer = Bool out_arr; _ } ->\n      let srcs =\n        List.map\n          (fun x -> match x.buffer with Bool a -> (a, x.view) | _ -> assert false)\n          xs\n      in\n      Op_cat.cat_bool srcs out_arr rank axis out_offset out_strides\n    | _ -> assert false);\n    out\n\nlet cast (type a b c d) ~(dtype : (c, d) Dtype.t) (x : (a, b) t) : (c, d) t =\n  let in_view = x.view in\n  let in_shape = shape in_view in\n  let n = numel in_view in\n  let out =\n    let t = buffer x.context dtype [|n|] in\n    { t with view = View.reshape t.view in_shape }\n  in\n  let in_offset = View.offset in_view in\n  let in_strides = View.strides in_view in\n  let out_offset = View.offset out.view in\n  let out_strides = View.strides out.view in\n  (match (x.buffer, out.buffer) with\n  | Float64 src, Float32 dst ->\n      Op_cast.cast_float64_float32 src dst n in_shape in_offset in_strides\n        out_offset out_strides;\n  | Float64 src, Int8 dst ->\n      Op_cast.cast_float64_int8 src dst n in_shape in_offset in_strides\n        out_offset out_strides;\n  | Float64 src, Int16 dst ->\n      Op_cast.cast_float64_int16 src dst n in_shape in_offset in_strides\n        out_offset out_strides;\n  | Float64 src, Int32 dst ->\n      Op_cast.cast_float64_int32 src dst n in_shape in_offset in_strides\n        out_offset out_strides;\n  | Float64 src, Int64 dst ->\n      Op_cast.cast_float64_int64 src dst n in_shape in_offset in_strides\n        out_offset out_strides;\n  | Float64 src, Bool dst ->\n      Op_cast.cast_float64_bool src dst n in_shape in_offset in_strides\n        out_offset out_strides;\n  | Float32 src, Float64 dst ->\n      Op_cast.cast_float32_float64 src dst n in_shape in_offset in_strides\n        out_offset out_strides;\n  | Float32 src, Int8 dst 
->\n      Op_cast.cast_float32_int8 src dst n in_shape in_offset in_strides\n        out_offset out_strides;\n  | Float32 src, Int16 dst ->\n      Op_cast.cast_float32_int16 src dst n in_shape in_offset in_strides\n        out_offset out_strides;\n  | Float32 src, Int32 dst ->\n      Op_cast.cast_float32_int32 src dst n in_shape in_offset in_strides\n        out_offset out_strides;\n  | Float32 src, Int64 dst ->\n      Op_cast.cast_float32_int64 src dst n in_shape in_offset in_strides\n        out_offset out_strides;\n  | Float32 src, Bool dst ->\n      Op_cast.cast_float32_bool src dst n in_shape in_offset in_strides\n        out_offset out_strides;\n  | Int8 src, Float64 dst ->\n      Op_cast.cast_int8_float64 src dst n in_shape in_offset in_strides\n        out_offset out_strides;\n  | Int8 src, Float32 dst ->\n      Op_cast.cast_int8_float32 src dst n in_shape in_offset in_strides\n        out_offset out_strides;\n  | Int8 src, Int16 dst ->\n      Op_cast.cast_int8_int16 src dst n in_shape in_offset in_strides\n        out_offset out_strides;\n  | Int8 src, Int32 dst ->\n      Op_cast.cast_int8_int32 src dst n in_shape in_offset in_strides\n        out_offset out_strides;\n  | Int8 src, Int64 dst ->\n      Op_cast.cast_int8_int64 src dst n in_shape in_offset in_strides\n        out_offset out_strides;\n  | Int8 src, Bool dst ->\n      Op_cast.cast_int8_bool src dst n in_shape in_offset in_strides out_offset\n        out_strides;\n  | Int16 src, Float64 dst ->\n      Op_cast.cast_int16_float64 src dst n in_shape in_offset in_strides\n        out_offset out_strides;\n  | Int16 src, Float32 dst ->\n      Op_cast.cast_int16_float32 src dst n in_shape in_offset in_strides\n        out_offset out_strides;\n  | Int16 src, Int8 dst ->\n      Op_cast.cast_int16_int8 src dst n in_shape in_offset in_strides\n        out_offset out_strides;\n  | Int16 src, Int32 dst ->\n      Op_cast.cast_int16_int32 src dst n in_shape in_offset in_strides\n        out_offset 
out_strides;\n  | Int16 src, Int64 dst ->\n      Op_cast.cast_int16_int64 src dst n in_shape in_offset in_strides\n        out_offset out_strides;\n  | Int16 src, Bool dst ->\n      Op_cast.cast_int16_bool src dst n in_shape in_offset in_strides\n        out_offset out_strides;\n  | Int32 src, Float64 dst ->\n      Op_cast.cast_int32_float64 src dst n in_shape in_offset in_strides\n        out_offset out_strides;\n  | Int32 src, Float32 dst ->\n      Op_cast.cast_int32_float32 src dst n in_shape in_offset in_strides\n        out_offset out_strides;\n  | Int32 src, Int8 dst ->\n      Op_cast.cast_int32_int8 src dst n in_shape in_offset in_strides\n        out_offset out_strides;\n  | Int32 src, Int16 dst ->\n      Op_cast.cast_int32_int16 src dst n in_shape in_offset in_strides\n        out_offset out_strides;\n  | Int32 src, Int64 dst ->\n      Op_cast.cast_int32_int64 src dst n in_shape in_offset in_strides\n        out_offset out_strides;\n  | Int32 src, Bool dst ->\n      Op_cast.cast_int32_bool src dst n in_shape in_offset in_strides\n        out_offset out_strides;\n  | Int64 src, Float64 dst ->\n      Op_cast.cast_int64_float64 src dst n in_shape in_offset in_strides\n        out_offset out_strides;\n  | Int64 src, Float32 dst ->\n      Op_cast.cast_int64_float32 src dst n in_shape in_offset in_strides\n        out_offset out_strides;\n  | Int64 src, Int8 dst ->\n      Op_cast.cast_int64_int8 src dst n in_shape in_offset in_strides\n        out_offset out_strides;\n  | Int64 src, Int16 dst ->\n      Op_cast.cast_int64_int16 src dst n in_shape in_offset in_strides\n        out_offset out_strides;\n  | Int64 src, Int32 dst ->\n      Op_cast.cast_int64_int32 src dst n in_shape in_offset in_strides\n        out_offset out_strides;\n  | Int64 src, Bool dst ->\n      Op_cast.cast_int64_bool src dst n in_shape in_offset in_strides\n        out_offset out_strides;\n  | Bool src, Float64 dst ->\n      Op_cast.cast_bool_float64 src dst n in_shape in_offset in_strides\n 
       out_offset out_strides;\n  | Bool src, Float32 dst ->\n      Op_cast.cast_bool_float32 src dst n in_shape in_offset in_strides\n        out_offset out_strides;\n  | Bool src, Int8 dst ->\n      Op_cast.cast_bool_int8 src dst n in_shape in_offset in_strides out_offset\n        out_strides;\n  | Bool src, Int16 dst ->\n      Op_cast.cast_bool_int16 src dst n in_shape in_offset in_strides\n        out_offset out_strides;\n  | Bool src, Int32 dst ->\n      Op_cast.cast_bool_int32 src dst n in_shape in_offset in_strides\n        out_offset out_strides;\n  | Bool src, Int64 dst ->\n      Op_cast.cast_bool_int64 src dst n in_shape in_offset in_strides\n        out_offset out_strides;\n  | _ -> invalid_arg \"unsupported cast\");\n  out\n\nlet contiguous (type a b) (t : (a, b) t) : (a, b) t =\n  let v = t.view in\n  if View.is_c_contiguous v && View.offset v = 0 then t\n  else\n    let shape_arr = shape v in\n    let ndim = Stdlib.Array.length shape_arr in\n    let n = numel v in\n    let strides = View.strides v in\n    let off = View.offset v in\n    let out = buffer t.context t.dtype shape_arr in\n    let indices = Stdlib.Array.make ndim 0 in\n    (* Compute flat source index from multi-dimensional indices. *)\n    let src_flat () =\n      let f = ref off in\n      for d = 0 to ndim - 1 do\n        f := !f + indices.(d) * strides.(d)\n      done;\n      !f\n    in\n    (* Advance the multi-dimensional index by one element. 
*)\n    let advance () =\n      let d = ref (ndim - 1) in\n      while !d >= 0 do\n        indices.(!d) <- indices.(!d) + 1;\n        if indices.(!d) < shape_arr.(!d) then d := -1\n        else (indices.(!d) <- 0; d := !d - 1)\n      done\n    in\n    (match (t.buffer, out.buffer) with\n    | Float64 src, Float64 dst ->\n      for i = 0 to n - 1 do\n        Array.unsafe_set dst i (Array.unsafe_get src (src_flat ()));\n        advance ()\n      done\n    | Float32 src, Float32 dst ->\n      for i = 0 to n - 1 do\n        Array.unsafe_set dst i (Array.unsafe_get src (src_flat ()));\n        advance ()\n      done\n    | Int32 src, Int32 dst ->\n      for i = 0 to n - 1 do\n        Array.unsafe_set dst i (Array.unsafe_get src (src_flat ()));\n        advance ()\n      done\n    | Int64 src, Int64 dst ->\n      for i = 0 to n - 1 do\n        Array.unsafe_set dst i (Array.unsafe_get src (src_flat ()));\n        advance ()\n      done\n    | Int8 src, Int8 dst ->\n      for i = 0 to n - 1 do\n        Array.unsafe_set dst i (Array.unsafe_get src (src_flat ()));\n        advance ()\n      done\n    | Int16 src, Int16 dst ->\n      for i = 0 to n - 1 do\n        Array.unsafe_set dst i (Array.unsafe_get src (src_flat ()));\n        advance ()\n      done\n    | Bool src, Bool dst ->\n      for i = 0 to n - 1 do\n        dst.(i) <- src.(src_flat ());\n        advance ()\n      done\n    | _ -> invalid_arg \"contiguous: unsupported dtype\");\n    out\n\nlet copy (type a b) (t : (a, b) t) : (a, b) t =\n  let c = contiguous t in\n  let shape_arr = shape c.view in\n  let n = numel c.view in\n  let out = buffer t.context t.dtype shape_arr in\n  (match (c.buffer, out.buffer) with\n  | Float64 src, Float64 dst ->\n    for i = 0 to n - 1 do Array.unsafe_set dst i (Array.unsafe_get src i) done\n  | Float32 src, Float32 dst ->\n    for i = 0 to n - 1 do Array.unsafe_set dst i (Array.unsafe_get src i) done\n  | Int32 src, Int32 dst ->\n    for i = 0 to n - 1 do Array.unsafe_set dst i 
(Array.unsafe_get src i) done\n  | Int64 src, Int64 dst ->\n    for i = 0 to n - 1 do Array.unsafe_set dst i (Array.unsafe_get src i) done\n  | Int8 src, Int8 dst ->\n    for i = 0 to n - 1 do Array.unsafe_set dst i (Array.unsafe_get src i) done\n  | Int16 src, Int16 dst ->\n    for i = 0 to n - 1 do Array.unsafe_set dst i (Array.unsafe_get src i) done\n  | Bool src, Bool dst ->\n    for i = 0 to n - 1 do dst.(i) <- src.(i) done\n  | _ -> invalid_arg \"copy: unsupported dtype\");\n  out\n\nlet assign (type a b) (dst : (a, b) t) (src : (a, b) t) : unit =\n  let src_c = contiguous src in\n  let n = numel dst.view in\n  match (src_c.buffer, dst.buffer) with\n  | Float64 s, Float64 d ->\n    for i = 0 to n - 1 do Array.unsafe_set d i (Array.unsafe_get s i) done\n  | Float32 s, Float32 d ->\n    for i = 0 to n - 1 do Array.unsafe_set d i (Array.unsafe_get s i) done\n  | Int32 s, Int32 d ->\n    for i = 0 to n - 1 do Array.unsafe_set d i (Array.unsafe_get s i) done\n  | Int64 s, Int64 d ->\n    for i = 0 to n - 1 do Array.unsafe_set d i (Array.unsafe_get s i) done\n  | Int8 s, Int8 d ->\n    for i = 0 to n - 1 do Array.unsafe_set d i (Array.unsafe_get s i) done\n  | Int16 s, Int16 d ->\n    for i = 0 to n - 1 do Array.unsafe_set d i (Array.unsafe_get s i) done\n  | Bool s, Bool d ->\n    for i = 0 to n - 1 do d.(i) <- s.(i) done\n  | _ -> invalid_arg \"assign: unsupported dtype\"\n\nlet threefry (key : (int32, Dtype.int32_elt) t)\n    (counter : (int32, Dtype.int32_elt) t) : (int32, Dtype.int32_elt) t =\n  let key_shape = shape key.view in\n  let ctr_shape = shape counter.view in\n  if key_shape <> ctr_shape then\n    err \"threefry\" \"shape mismatch: expected %s, got %s\"\n      (Shape.to_string key_shape) (Shape.to_string ctr_shape);\n  let rank = Array.length key_shape in\n  if rank = 0 then\n    invalid_arg \"threefry: tensor, requires rank >= 1 with last dimension size 2\";\n  let last_dim = rank - 1 in\n  if key_shape.(last_dim) <> 2 then\n    invalid_arg 
\"threefry: shape, last dimension must be 2 for Threefry2x32\";\n  let out = buffer key.context Dtype.Int32 key_shape in\n  (match (out.buffer, key.buffer, counter.buffer) with\n  | Int32 out_arr, Int32 key_arr, Int32 ctr_arr ->\n      Op_threefry.threefry_int32 out.context.pool ~out_arr ~key_arr ~ctr_arr\n        ~shape:key_shape ~key_view:key.view ~ctr_view:counter.view\n        ~out_view:out.view\n  | _ -> assert false);\n  out\n\nlet gather (type a b) (data : (a, b) t)\n    (indices : (int32, Dtype.int32_elt) t) ~(axis : int) : (a, b) t =\n  let dshape = shape data.view in\n  let ishape = shape indices.view in\n  if Array.length dshape <> Array.length ishape then\n    invalid_arg \"gather: rank mismatch\";\n  let rank = Array.length dshape in\n  let axis = if axis < 0 then rank + axis else axis in\n  if axis < 0 || axis >= rank then\n    err \"gather\" \"axis %d out of bounds for %dD tensor\" axis rank;\n  let out = buffer data.context data.dtype ishape in\n  let n = numel indices.view in\n  let data_offset = View.offset data.view in\n  let data_strides = View.strides data.view in\n  let idx_offset = View.offset indices.view in\n  let idx_strides = View.strides indices.view in\n  let out_offset = View.offset out.view in\n  let out_strides = View.strides out.view in\n  let idx_arr =\n    match indices.buffer with Int32 a -> a | _ -> assert false\n  in\n  let run f = par out.context.pool n (fun s e ->\n    f ishape dshape axis idx_arr data_offset data_strides idx_offset\n      idx_strides out_offset out_strides s e)\n  in\n  (match (data.buffer, out.buffer) with\n  | Float64 src, Float64 dst ->\n      run (Op_gather.gather_float64 src dst)\n  | Float32 src, Float32 dst ->\n      run (Op_gather.gather_float32 src dst)\n  | Int8 src, Int8 dst ->\n      run (Op_gather.gather_int8 src dst)\n  | Int16 src, Int16 dst ->\n      run (Op_gather.gather_int16 src dst)\n  | Int32 src, Int32 dst ->\n      run (Op_gather.gather_int32 src dst)\n  | Int64 src, Int64 dst ->\n     
 run (Op_gather.gather_int64 src dst)\n  | Bool src, Bool dst ->\n      run (Op_gather.gather_bool src dst)\n  | _ -> invalid_arg \"gather: unsupported dtype\");\n  out\n\nlet scatter ?(mode = `Set) ?(unique_indices = false) (type a b)\n    (data_template : (a, b) t)\n    ~(indices : (int32, Dtype.int32_elt) t)\n    ~(updates : (a, b) t)\n    ~(axis : int) : (a, b) t =\n  let tshape = shape data_template.view in\n  let ishape = shape indices.view in\n  let ushape = shape updates.view in\n  if Array.length tshape <> Array.length ishape then\n    invalid_arg \"scatter: rank mismatch\";\n  if ishape <> ushape then\n    invalid_arg \"scatter: indices/updates shape mismatch\";\n  let rank = Array.length tshape in\n  let axis = if axis < 0 then rank + axis else axis in\n  if axis < 0 || axis >= rank then\n    err \"scatter\" \"axis %d out of bounds for %dD tensor\" axis rank;\n  let out = copy data_template in\n  let n = numel indices.view in\n  let idx_offset = View.offset indices.view in\n  let idx_strides = View.strides indices.view in\n  let upd_offset = View.offset updates.view in\n  let upd_strides = View.strides updates.view in\n  let out_offset = View.offset out.view in\n  let out_strides = View.strides out.view in\n  let idx_arr =\n    match indices.buffer with Int32 a -> a | _ -> assert false\n  in\n  (* Scatter with Set mode and unique indices is safe to parallelize since\n     each output position is written at most once. Add mode or non-unique\n     indices require sequential execution to avoid write conflicts. 
*)\n  let run f =\n    if unique_indices && mode = `Set then\n      par out.context.pool n (fun s e ->\n        f ishape tshape axis idx_arr upd_offset upd_strides\n          idx_offset idx_strides out_offset out_strides s e)\n    else\n      f ishape tshape axis idx_arr upd_offset upd_strides\n        idx_offset idx_strides out_offset out_strides 0 n\n  in\n  (match (updates.buffer, out.buffer) with\n  | Float64 src_arr, Float64 out_arr ->\n      run (Op_scatter.scatter_float64 mode src_arr out_arr)\n  | Float32 src_arr, Float32 out_arr ->\n      run (Op_scatter.scatter_float32 mode src_arr out_arr)\n  | Int8 src_arr, Int8 out_arr ->\n      run (Op_scatter.scatter_int8 mode src_arr out_arr)\n  | Int16 src_arr, Int16 out_arr ->\n      run (Op_scatter.scatter_int16 mode src_arr out_arr)\n  | Int32 src_arr, Int32 out_arr ->\n      run (Op_scatter.scatter_int32 mode src_arr out_arr)\n  | Int64 src_arr, Int64 out_arr ->\n      run (Op_scatter.scatter_int64 mode src_arr out_arr)\n  | Bool src_arr, Bool out_arr ->\n      run (Op_scatter.scatter_bool mode src_arr out_arr)\n  | _ -> invalid_arg \"scatter: unsupported dtype\");\n  out\n\nlet unfold :\n  type a b.\n  (a, b) t ->\n  kernel_size:int array ->\n  stride:int array ->\n  dilation:int array ->\n  padding:(int * int) array ->\n  (a, b) t =\n  fun (x : (a, b) t) ~kernel_size ~stride ~dilation ~padding ->\n  let in_view = x.view in\n  let in_shape = shape in_view in\n  if Array.length in_shape < 3 then\n    invalid_arg \"unfold: input rank, expected input shape [N, C, ...spatial_dims]\";\n\n  let spatial_ndim = Array.length in_shape - 2 in\n  if\n    not\n      (Array.length kernel_size = spatial_ndim\n      && Array.length stride = spatial_ndim\n      && Array.length dilation = spatial_ndim\n      && Array.length padding = spatial_ndim)\n  then\n    invalid_arg \"unfold: parameter lengths, kernel_size/stride/dilation/padding must match spatial rank\";\n\n  let n = in_shape.(0) in\n  let channels = in_shape.(1) in\n  
let input_spatial = Array.sub in_shape 2 spatial_ndim in\n\n  let kernel_elems = ref 1 in\n  for i = 0 to spatial_ndim - 1 do\n    if kernel_size.(i) <= 0 then\n      invalid_arg \"unfold: all kernel dimensions must be positive\";\n    if stride.(i) <= 0 then\n      invalid_arg \"unfold: all stride dimensions must be positive\";\n    if dilation.(i) <= 0 then\n      invalid_arg \"unfold: all dilation dimensions must be positive\";\n    let pad_before, pad_after = padding.(i) in\n    if pad_before < 0 || pad_after < 0 then\n      invalid_arg \"unfold: padding must be non-negative\";\n    kernel_elems := !kernel_elems * kernel_size.(i)\n  done;\n\n  let out_spatial = Array.make spatial_ndim 0 in\n  for i = 0 to spatial_ndim - 1 do\n    let pad_before, pad_after = padding.(i) in\n    let padded = input_spatial.(i) + pad_before + pad_after in\n    let kernel_extent = (dilation.(i) * (kernel_size.(i) - 1)) + 1 in\n    let diff = padded - kernel_extent in\n    if diff < 0 then\n      invalid_arg \"unfold: kernel size larger than padded input\";\n    out_spatial.(i) <- (diff / stride.(i)) + 1\n  done;\n\n  let num_blocks = Shape.numel out_spatial in\n  let out_shape = [| n; channels * !kernel_elems; num_blocks |] in\n  let out_numel = Shape.numel out_shape in\n  let out_t : (a, b) t =\n    let out_t : (a, b) t = buffer x.context x.dtype [|out_numel|] in\n    { out_t with view = View.reshape out_t.view out_shape }\n  in\n\n  let in_offset = View.offset in_view in\n  let in_strides = View.strides in_view in\n  let out_view = out_t.view in\n  let out_offset = View.offset out_view in\n  let out_strides = View.strides out_view in\n  let kernel_elems = !kernel_elems in\n  let run_batches f =\n    if n <= 0 then ()\n    else if n = 1 then f 0 1\n    else Parallel.parallel_for x.context.pool 0 (n - 1) f\n  in\n  match (x.buffer, out_t.buffer) with\n  | Float64 in_arr, Float64 out_arr ->\n      run_batches (fun n_start n_end ->\n          Op_unfold.unfold_float64 in_arr out_arr 
~n_start ~n_end ~channels\n            ~input_spatial ~kernel_elems ~num_blocks ~spatial_ndim ~out_spatial\n            ~kernel_size ~stride ~dilation ~padding ~in_offset ~in_strides\n            ~out_offset ~out_strides);\n      out_t\n  | Float32 in_arr, Float32 out_arr ->\n      run_batches (fun n_start n_end ->\n          Op_unfold.unfold_float32 in_arr out_arr ~n_start ~n_end ~channels\n            ~input_spatial ~kernel_elems ~num_blocks ~spatial_ndim ~out_spatial\n            ~kernel_size ~stride ~dilation ~padding ~in_offset ~in_strides\n            ~out_offset ~out_strides);\n      out_t\n  | Int8 in_arr, Int8 out_arr ->\n      run_batches (fun n_start n_end ->\n          Op_unfold.unfold_int8 in_arr out_arr ~n_start ~n_end ~channels\n            ~input_spatial ~kernel_elems ~num_blocks ~spatial_ndim ~out_spatial\n            ~kernel_size ~stride ~dilation ~padding ~in_offset ~in_strides\n            ~out_offset ~out_strides);\n      out_t\n  | Int16 in_arr, Int16 out_arr ->\n      run_batches (fun n_start n_end ->\n          Op_unfold.unfold_int16 in_arr out_arr ~n_start ~n_end ~channels\n            ~input_spatial ~kernel_elems ~num_blocks ~spatial_ndim ~out_spatial\n            ~kernel_size ~stride ~dilation ~padding ~in_offset ~in_strides\n            ~out_offset ~out_strides);\n      out_t\n  | Int32 in_arr, Int32 out_arr ->\n      run_batches (fun n_start n_end ->\n          Op_unfold.unfold_int32 in_arr out_arr ~n_start ~n_end ~channels\n            ~input_spatial ~kernel_elems ~num_blocks ~spatial_ndim ~out_spatial\n            ~kernel_size ~stride ~dilation ~padding ~in_offset ~in_strides\n            ~out_offset ~out_strides);\n      out_t\n  | Int64 in_arr, Int64 out_arr ->\n      run_batches (fun n_start n_end ->\n          Op_unfold.unfold_int64 in_arr out_arr ~n_start ~n_end ~channels\n            ~input_spatial ~kernel_elems ~num_blocks ~spatial_ndim ~out_spatial\n            ~kernel_size ~stride ~dilation ~padding ~in_offset ~in_strides\n   
         ~out_offset ~out_strides);\n      out_t\n  | Bool in_arr, Bool out_arr ->\n      run_batches (fun n_start n_end ->\n          Op_unfold.unfold_bool in_arr out_arr ~n_start ~n_end ~channels\n            ~input_spatial ~kernel_elems ~num_blocks ~spatial_ndim ~out_spatial\n            ~kernel_size ~stride ~dilation ~padding ~in_offset ~in_strides\n            ~out_offset ~out_strides);\n      out_t\n  | _ -> invalid_arg \"unfold: unsupported dtype\"\n\nlet fold :\n  type a b.\n  (a, b) t ->\n  output_size:int array ->\n  kernel_size:int array ->\n  stride:int array ->\n  dilation:int array ->\n  padding:(int * int) array ->\n  (a, b) t =\n  fun (x : (a, b) t) ~output_size ~kernel_size ~stride ~dilation ~padding ->\n  let in_view = x.view in\n  let in_shape = shape in_view in\n  if Array.length in_shape <> 3 then\n    invalid_arg \"fold: input rank, expected input shape [N, C * prod(kernel_size), L]\";\n\n  let spatial_ndim = Array.length output_size in\n  if spatial_ndim = 0 then\n    invalid_arg \"fold: output_size must contain at least one spatial dimension\";\n  if\n    not\n      (Array.length kernel_size = spatial_ndim\n      && Array.length stride = spatial_ndim\n      && Array.length dilation = spatial_ndim\n      && Array.length padding = spatial_ndim)\n  then\n    invalid_arg \"fold: parameter lengths, output_size/kernel_size/stride/dilation/padding must match\";\n\n  let n = in_shape.(0) in\n  let c_times_k = in_shape.(1) in\n  let num_blocks = in_shape.(2) in\n\n  let kernel_elems = ref 1 in\n  for i = 0 to spatial_ndim - 1 do\n    if output_size.(i) <= 0 then\n      invalid_arg \"fold: all output dimensions must be positive\";\n    if kernel_size.(i) <= 0 then\n      invalid_arg \"fold: all kernel dimensions must be positive\";\n    if stride.(i) <= 0 then\n      invalid_arg \"fold: all stride dimensions must be positive\";\n    if dilation.(i) <= 0 then\n      invalid_arg \"fold: all dilation dimensions must be positive\";\n    let pad_before, 
pad_after = padding.(i) in\n    if pad_before < 0 || pad_after < 0 then\n      invalid_arg \"fold: padding must be non-negative\";\n    kernel_elems := !kernel_elems * kernel_size.(i)\n  done;\n\n  if c_times_k mod !kernel_elems <> 0 then\n    invalid_arg \"fold: input shape, C * prod(kernel_size) dimension mismatch\";\n  let channels = c_times_k / !kernel_elems in\n\n  let blocks_shape = Array.make spatial_ndim 0 in\n  for i = 0 to spatial_ndim - 1 do\n    let pad_before, pad_after = padding.(i) in\n    let eff_kernel = (dilation.(i) * (kernel_size.(i) - 1)) + 1 in\n    let numer = output_size.(i) + pad_before + pad_after - eff_kernel in\n    if numer < 0 then\n      invalid_arg \"fold: effective kernel does not fit output spatial dimension\";\n    blocks_shape.(i) <- (numer / stride.(i)) + 1\n  done;\n  let expected_blocks = Shape.numel blocks_shape in\n  if expected_blocks <> num_blocks then\n    invalid_arg \"fold: input shape, L dimension does not match computed number of sliding blocks\";\n\n  let out_shape = Array.append [| n; channels |] output_size in\n  let out_numel = Shape.numel out_shape in\n  let out_t : (a, b) t =\n    let out_t : (a, b) t = buffer x.context x.dtype [|out_numel|] in\n    { out_t with view = View.reshape out_t.view out_shape }\n  in\n\n  let in_offset = View.offset in_view in\n  let in_strides = View.strides in_view in\n  let out_view = out_t.view in\n  let out_offset = View.offset out_view in\n  let out_strides = View.strides out_view in\n\n  let kernel_elems = !kernel_elems in\n  let run_batches f =\n    if n <= 0 then ()\n    else if n = 1 then f 0 1\n    else Parallel.parallel_for x.context.pool 0 (n - 1) f\n  in\n  match (x.buffer, out_t.buffer) with\n  | Float64 in_arr, Float64 out_arr ->\n      run_batches (fun n_start n_end ->\n          Op_fold.fold_float64 in_arr out_arr ~n_start ~n_end ~channels\n            ~num_blocks ~kernel_elems ~spatial_ndim ~blocks_shape ~kernel_size\n            ~output_size ~stride ~dilation 
~padding ~in_offset ~in_strides\n            ~out_offset ~out_strides);\n      out_t\n  | Float32 in_arr, Float32 out_arr ->\n      run_batches (fun n_start n_end ->\n          Op_fold.fold_float32 in_arr out_arr ~n_start ~n_end ~channels\n            ~num_blocks ~kernel_elems ~spatial_ndim ~blocks_shape ~kernel_size\n            ~output_size ~stride ~dilation ~padding ~in_offset ~in_strides\n            ~out_offset ~out_strides);\n      out_t\n  | Int8 in_arr, Int8 out_arr ->\n      run_batches (fun n_start n_end ->\n          Op_fold.fold_int8 in_arr out_arr ~n_start ~n_end ~channels ~num_blocks\n            ~kernel_elems ~spatial_ndim ~blocks_shape ~kernel_size ~output_size\n            ~stride ~dilation ~padding ~in_offset ~in_strides ~out_offset\n            ~out_strides);\n      out_t\n  | Int16 in_arr, Int16 out_arr ->\n      run_batches (fun n_start n_end ->\n          Op_fold.fold_int16 in_arr out_arr ~n_start ~n_end ~channels ~num_blocks\n            ~kernel_elems ~spatial_ndim ~blocks_shape ~kernel_size ~output_size\n            ~stride ~dilation ~padding ~in_offset ~in_strides ~out_offset\n            ~out_strides);\n      out_t\n  | Int32 in_arr, Int32 out_arr ->\n      run_batches (fun n_start n_end ->\n          Op_fold.fold_int32 in_arr out_arr ~n_start ~n_end ~channels ~num_blocks\n            ~kernel_elems ~spatial_ndim ~blocks_shape ~kernel_size ~output_size\n            ~stride ~dilation ~padding ~in_offset ~in_strides ~out_offset\n            ~out_strides);\n      out_t\n  | Int64 in_arr, Int64 out_arr ->\n      run_batches (fun n_start n_end ->\n          Op_fold.fold_int64 in_arr out_arr ~n_start ~n_end ~channels ~num_blocks\n            ~kernel_elems ~spatial_ndim ~blocks_shape ~kernel_size ~output_size\n            ~stride ~dilation ~padding ~in_offset ~in_strides ~out_offset\n            ~out_strides);\n      out_t\n  | Bool _, _ ->\n      invalid_arg \"fold: unsupported dtype, bool fold is undefined because overlaps are summed\"\n  | _ -> 
invalid_arg \"fold: unsupported dtype\"\n\nlet matmul (type a b) (a : (a, b) t) (b : (a, b) t) : (a, b) t =\n  let a_shape = shape a.view in\n  let b_shape = shape b.view in\n  let x_ndim = Array.length a_shape in\n  let y_ndim = Array.length b_shape in\n  let m = a_shape.(x_ndim - 2) in\n  let n = b_shape.(y_ndim - 1) in\n  let batch_dims = Array.sub a_shape 0 (x_ndim - 2) in\n  let out_shape = Array.append batch_dims [| m; n |] in\n  let out = buffer a.context a.dtype out_shape in\n  let va = a.view and vb = b.view and vout = out.view in\n  let nd_out = Array.length (shape vout) in\n  let batch_shape = Array.sub (shape vout) 0 (Stdlib.max 0 (nd_out - 2)) in\n  let batch_sz =\n    if Array.length batch_shape = 0 then 1 else Shape.numel batch_shape\n  in\n  let total_units = batch_sz * m in\n  (match (out.buffer, a.buffer, b.buffer) with\n  | Float64 c, Float64 a, Float64 b ->\n      if\n        View.is_c_contiguous va && View.is_c_contiguous vb\n        && Array.length (shape va) = 2\n        && Array.length (shape vb) = 2\n      then\n        let n = (shape vout).(nd_out - 1) in\n        let k = (shape va).(Array.length (shape va) - 1) in\n        Op_matmul.Gemm_f64.gemm ~pool:out.context.pool a b c ~m ~n ~k\n          ~a_off:(View.offset va) ~b_off:(View.offset vb)\n          ~c_off:(View.offset vout) ~ldc:n ()\n      else\n        Parallel.parallel_for out.context.pool 0 (total_units - 1) (fun s e ->\n            Op_matmul.matmul_float64_slow a b c va vb vout s e)\n  | Float32 c, Float32 a, Float32 b ->\n      if\n        View.is_c_contiguous va && View.is_c_contiguous vb\n        && Array.length (shape va) = 2\n        && Array.length (shape vb) = 2\n      then\n        let n = (shape vout).(nd_out - 1) in\n        let k = (shape va).(Array.length (shape va) - 1) in\n        Op_matmul.Gemm_f32.gemm ~pool:out.context.pool a b c ~m ~n ~k\n          ~a_off:(View.offset va) ~b_off:(View.offset vb)\n          ~c_off:(View.offset vout) ~ldc:n ()\n      else\n        
Parallel.parallel_for out.context.pool 0 (total_units - 1) (fun s e ->\n            Op_matmul.matmul_float32_slow a b c va vb vout s e)\n  | Int64 c, Int64 a, Int64 b ->\n      if\n        View.is_c_contiguous va && View.is_c_contiguous vb\n        && View.offset va = 0\n        && View.offset vb = 0\n        && Array.length (shape va) = 2\n        && Array.length (shape vb) = 2\n      then\n        Parallel.parallel_for out.context.pool 0 (m - 1) (fun s e ->\n            Op_matmul.matmul_int64_fast a b c va vb vout s e)\n      else\n        Parallel.parallel_for out.context.pool 0 (total_units - 1) (fun s e ->\n            Op_matmul.matmul_int64_slow a b c va vb vout s e)\n  | Int32 c, Int32 a, Int32 b ->\n      if\n        View.is_c_contiguous va && View.is_c_contiguous vb\n        && View.offset va = 0\n        && View.offset vb = 0\n        && Array.length (shape va) = 2\n        && Array.length (shape vb) = 2\n      then\n        Parallel.parallel_for out.context.pool 0 (m - 1) (fun s e ->\n            Op_matmul.matmul_int32_fast a b c va vb vout s e)\n      else\n        Parallel.parallel_for out.context.pool 0 (total_units - 1) (fun s e ->\n            Op_matmul.matmul_int32_slow a b c va vb vout s e)\n  | _ ->\n      invalid_arg \"matmul: not implemented for small unboxed ints\");\n  out\n\nlet fft ?out:_ _ ~axes:_ =\n  invalid_arg \"fft: not implemented\"\n\nlet ifft ?out:_ _ ~axes:_ =\n  invalid_arg \"ifft: not implemented\"\n\nlet rfft ?out:_ _ ~dtype:_ ~axes:_ =\n  invalid_arg \"rfft: not implemented\"\n\nlet irfft ?out:_ ?s:_ _ ~dtype:_ ~axes:_ =\n  invalid_arg \"irfft: not implemented\"\n\nlet cholesky ~upper:_ _ =\n  invalid_arg \"cholesky: not implemented\"\n\nlet qr ~reduced:_ _ = invalid_arg \"qr: not implemented\"\n\nlet svd ~full_matrices:_ _ =\n  invalid_arg \"svd: not implemented\"\n\nlet eig ~vectors:_ _ = invalid_arg \"eig: not implemented\"\n\nlet eigh ~vectors:_ _ =\n  invalid_arg \"eigh: not implemented\"\n\nlet triangular_solve ~upper:_ 
~transpose:_ ~unit_diag:_ _ _ =\n  invalid_arg \"triangular_solve: not implemented\"\n"
  },
  {
    "path": "packages/nx-oxcaml/lib/nx_oxcaml_stubs.c",
    "content": "#include <caml/mlvalues.h>\n#include <caml/memory.h>\n#include <caml/fail.h>\n#include <caml/bigarray.h>\n#include <string.h>\n\nCAMLprim value caml_make_unboxed_float64_vect(value len);\nCAMLprim value caml_make_unboxed_float32_vect(value len);\nCAMLprim value caml_make_unboxed_int64_vect(value len);\nCAMLprim value caml_make_unboxed_int32_vect(value len);\nCAMLprim value caml_make_untagged_int8_vect(value len);\nCAMLprim value caml_make_untagged_int16_vect(value len);\n\n\nCAMLprim value caml_ba_to_unboxed_float64_array(value v_ba)\n{\n  CAMLparam1(v_ba);\n\n  struct caml_ba_array *ba = Caml_ba_array_val(v_ba);\n\n  if (ba->num_dims != 1)\n    caml_invalid_argument(\"Bigarray must be 1D\");\n\n  if ((ba->flags & CAML_BA_KIND_MASK) != CAML_BA_FLOAT64)\n    caml_invalid_argument(\"Bigarray must be float64\");\n\n  mlsize_t len = ba->dim[0];\n  void *data = ba->data;\n\n  value arr = caml_make_unboxed_float64_vect(Val_long(len));\n\n  memcpy((double *)arr, (double *)data, len * sizeof(double));\n\n  CAMLreturn(arr);\n}\n\nCAMLprim value caml_ba_to_unboxed_float32_array(value v_ba)\n{\n  CAMLparam1(v_ba);\n\n  struct caml_ba_array *ba = Caml_ba_array_val(v_ba);\n\n  if (ba->num_dims != 1)\n    caml_invalid_argument(\"Bigarray must be 1D\");\n\n  if ((ba->flags & CAML_BA_KIND_MASK) != CAML_BA_FLOAT32)\n    caml_invalid_argument(\"Bigarray must be float32\");\n\n  mlsize_t len = ba->dim[0];\n  void *data = ba->data;\n\n  value arr = caml_make_unboxed_float32_vect(Val_long(len));\n\n  memcpy((float *)arr, (float *)data, len * sizeof(float));\n\n  CAMLreturn(arr);\n}\n\nCAMLprim value caml_ba_to_unboxed_int64_array(value v_ba)\n{\n  CAMLparam1(v_ba);\n\n  struct caml_ba_array *ba = Caml_ba_array_val(v_ba);\n\n  if (ba->num_dims != 1)\n    caml_invalid_argument(\"Bigarray must be 1D\");\n\n  if ((ba->flags & CAML_BA_KIND_MASK) != CAML_BA_INT64)\n    caml_invalid_argument(\"Bigarray must be int64\");\n\n  mlsize_t len = ba->dim[0];\n  void *data = 
ba->data;\n\n  value arr = caml_make_unboxed_int64_vect(Val_long(len));\n\n  memcpy((int64_t *)arr, (int64_t *)data, len * sizeof(int64_t));\n\n  CAMLreturn(arr);\n}\n\nCAMLprim value caml_ba_to_unboxed_int32_array(value v_ba)\n{\n  CAMLparam1(v_ba);\n\n  struct caml_ba_array *ba = Caml_ba_array_val(v_ba);\n\n  if (ba->num_dims != 1)\n    caml_invalid_argument(\"Bigarray must be 1D\");\n\n  if ((ba->flags & CAML_BA_KIND_MASK) != CAML_BA_INT32)\n    caml_invalid_argument(\"Bigarray must be int32\");\n\n  mlsize_t len = ba->dim[0];\n  void *data = ba->data;\n\n  value arr = caml_make_unboxed_int32_vect(Val_long(len));\n\n  memcpy((int32_t *)arr, (int32_t *)data, len * sizeof(int32_t));\n\n  CAMLreturn(arr);\n}\n\nCAMLprim value caml_ba_to_unboxed_int8_array(value v_ba)\n{\n  CAMLparam1(v_ba);\n\n  struct caml_ba_array *ba = Caml_ba_array_val(v_ba);\n\n  if (ba->num_dims != 1)\n    caml_invalid_argument(\"Bigarray must be 1D\");\n\n  if ((ba->flags & CAML_BA_KIND_MASK) != CAML_BA_SINT8)\n    caml_invalid_argument(\"Bigarray must be int8\");\n\n  mlsize_t len = ba->dim[0];\n  void *data = ba->data;\n\n  value arr = caml_make_untagged_int8_vect(Val_long(len));\n\n  memcpy((int8_t *)arr, (int8_t *)data, len * sizeof(int8_t));\n\n  CAMLreturn(arr);\n}\n\nCAMLprim value caml_ba_to_unboxed_int16_array(value v_ba)\n{\n  CAMLparam1(v_ba);\n\n  struct caml_ba_array *ba = Caml_ba_array_val(v_ba);\n\n  if (ba->num_dims != 1)\n    caml_invalid_argument(\"Bigarray must be 1D\");\n\n  if ((ba->flags & CAML_BA_KIND_MASK) != CAML_BA_SINT16)\n    caml_invalid_argument(\"Bigarray must be int16\");\n\n  mlsize_t len = ba->dim[0];\n  void *data = ba->data;\n\n  value arr = caml_make_untagged_int16_vect(Val_long(len));\n\n  memcpy((int16_t *)arr, (int16_t *)data, len * sizeof(int16_t));\n\n  CAMLreturn(arr);\n}\n\n/* ── Unboxed array → Bigarray (to_host direction) ── */\n\nCAMLprim value caml_unboxed_float64_array_to_ba(value v_arr, value v_len)\n{\n  CAMLparam1(v_arr);\n  mlsize_t len = 
Long_val(v_len);\n\n  intnat dims[1] = { (intnat)len };\n  value ba = caml_ba_alloc(CAML_BA_FLOAT64 | CAML_BA_C_LAYOUT, 1, NULL, dims);\n\n  /* Read the source pointer only after caml_ba_alloc: the allocation may\n     trigger a GC that moves v_arr; CAMLparam1 keeps v_arr itself updated,\n     but not a raw pointer captured before the allocation. */\n  void *src = (void *)v_arr;\n  memcpy(Caml_ba_data_val(ba), src, len * sizeof(double));\n\n  CAMLreturn(ba);\n}\n\nCAMLprim value caml_unboxed_float32_array_to_ba(value v_arr, value v_len)\n{\n  CAMLparam1(v_arr);\n  mlsize_t len = Long_val(v_len);\n\n  intnat dims[1] = { (intnat)len };\n  value ba = caml_ba_alloc(CAML_BA_FLOAT32 | CAML_BA_C_LAYOUT, 1, NULL, dims);\n\n  void *src = (void *)v_arr;\n  memcpy(Caml_ba_data_val(ba), src, len * sizeof(float));\n\n  CAMLreturn(ba);\n}\n\nCAMLprim value caml_unboxed_int64_array_to_ba(value v_arr, value v_len)\n{\n  CAMLparam1(v_arr);\n  mlsize_t len = Long_val(v_len);\n\n  intnat dims[1] = { (intnat)len };\n  value ba = caml_ba_alloc(CAML_BA_INT64 | CAML_BA_C_LAYOUT, 1, NULL, dims);\n\n  void *src = (void *)v_arr;\n  memcpy(Caml_ba_data_val(ba), src, len * sizeof(int64_t));\n\n  CAMLreturn(ba);\n}\n\nCAMLprim value caml_unboxed_int32_array_to_ba(value v_arr, value v_len)\n{\n  CAMLparam1(v_arr);\n  mlsize_t len = Long_val(v_len);\n\n  intnat dims[1] = { (intnat)len };\n  value ba = caml_ba_alloc(CAML_BA_INT32 | CAML_BA_C_LAYOUT, 1, NULL, dims);\n\n  void *src = (void *)v_arr;\n  memcpy(Caml_ba_data_val(ba), src, len * sizeof(int32_t));\n\n  CAMLreturn(ba);\n}\n\nCAMLprim value caml_unboxed_int8_array_to_ba(value v_arr, value v_len)\n{\n  CAMLparam1(v_arr);\n  mlsize_t len = Long_val(v_len);\n\n  intnat dims[1] = { (intnat)len };\n  value ba = caml_ba_alloc(CAML_BA_SINT8 | CAML_BA_C_LAYOUT, 1, NULL, dims);\n\n  void *src = (void *)v_arr;\n  memcpy(Caml_ba_data_val(ba), src, len * sizeof(int8_t));\n\n  CAMLreturn(ba);\n}\n\nCAMLprim value caml_unboxed_int16_array_to_ba(value v_arr, value v_len)\n{\n  CAMLparam1(v_arr);\n  mlsize_t len = Long_val(v_len);\n\n  intnat dims[1] = { (intnat)len };\n  value ba = caml_ba_alloc(CAML_BA_SINT16 | CAML_BA_C_LAYOUT, 1, 
NULL, dims);\n\n  void *src = (void *)v_arr;\n  memcpy(Caml_ba_data_val(ba), src, len * sizeof(int16_t));\n\n  CAMLreturn(ba);\n}\n"
  },
  {
    "path": "packages/nx-oxcaml/lib/op_argmax.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Import\n\nlet parallel_threshold = 62500\n\n(* --- argmax --- *)\n\nlet argmax_all_float64 (out_arr : int32# array) out_offset a_arr va in_numel =\n  if in_numel = 0 then\n    invalid_arg \"argmax: empty input\";\n  let a_offset = View.offset va in\n  if View.is_c_contiguous va then (\n    let best_idx = ref 0 in\n    let acc = Array.make_float64 1 in\n    Array.unsafe_set acc 0 (Array.unsafe_get a_arr a_offset);\n    for i = 1 to in_numel - 1 do\n      let v = Array.unsafe_get a_arr (a_offset + i) in\n      if Float_u.compare v (Array.unsafe_get acc 0) > 0 then (\n        Array.unsafe_set acc 0 v;\n        best_idx := i)\n    done;\n    Array.unsafe_set out_arr out_offset (Int32_u.of_int !best_idx))\n  else\n    let in_shape = shape va in\n    let in_strides = View.strides va in\n    let md_idx = Array.make (Array.length in_shape) 0 in\n    let best_idx = ref 0 in\n    let acc = Array.make_float64 1 in\n    Array.unsafe_set acc 0 (Array.unsafe_get a_arr a_offset);\n    for i = 1 to in_numel - 1 do\n      Shape.unravel_index_into i in_shape md_idx;\n      let lin = Shape.ravel_index md_idx in_strides in\n      let v = Array.unsafe_get a_arr (a_offset + lin) in\n      if Float_u.compare v (Array.unsafe_get acc 0) > 0 then (\n        Array.unsafe_set acc 0 v;\n        best_idx := i)\n    done;\n    Array.unsafe_set out_arr out_offset (Int32_u.of_int !best_idx)\n\nlet argmax_all_float32 (out_arr : int32# array) out_offset a_arr va in_numel =\n  if in_numel = 0 then\n    invalid_arg \"argmax: empty input\";\n  let a_offset = View.offset va in\n  if View.is_c_contiguous va then (\n    let best_idx = ref 0 in\n    let acc = Array.make_float32 1 in\n    Array.unsafe_set acc 0 (Array.unsafe_get a_arr 
a_offset);\n    for i = 1 to in_numel - 1 do\n      let v = Array.unsafe_get a_arr (a_offset + i) in\n      if Float32_u.compare v (Array.unsafe_get acc 0) > 0 then (\n        Array.unsafe_set acc 0 v;\n        best_idx := i)\n    done;\n    Array.unsafe_set out_arr out_offset (Int32_u.of_int !best_idx))\n  else\n    let in_shape = shape va in\n    let in_strides = View.strides va in\n    let md_idx = Array.make (Array.length in_shape) 0 in\n    let best_idx = ref 0 in\n    let acc = Array.make_float32 1 in\n    Array.unsafe_set acc 0 (Array.unsafe_get a_arr a_offset);\n    for i = 1 to in_numel - 1 do\n      Shape.unravel_index_into i in_shape md_idx;\n      let lin = Shape.ravel_index md_idx in_strides in\n      let v = Array.unsafe_get a_arr (a_offset + lin) in\n      if Float32_u.compare v (Array.unsafe_get acc 0) > 0 then (\n        Array.unsafe_set acc 0 v;\n        best_idx := i)\n    done;\n    Array.unsafe_set out_arr out_offset (Int32_u.of_int !best_idx)\n\nlet argmax_all_int32 (out_arr : int32# array) out_offset (a_arr : int32# array)\n    va in_numel =\n  if in_numel = 0 then\n    invalid_arg \"argmax: empty input\";\n  let a_offset = View.offset va in\n  if View.is_c_contiguous va then (\n    let best_idx = ref 0 in\n    let acc = Array.make_int32 1 in\n    Array.unsafe_set acc 0 (Array.unsafe_get a_arr a_offset);\n    for i = 1 to in_numel - 1 do\n      let v = Array.unsafe_get a_arr (a_offset + i) in\n      if Int32_u.compare v (Array.unsafe_get acc 0) > 0 then (\n        Array.unsafe_set acc 0 v;\n        best_idx := i)\n    done;\n    Array.unsafe_set out_arr out_offset (Int32_u.of_int !best_idx))\n  else\n    let in_shape = shape va in\n    let in_strides = View.strides va in\n    let md_idx = Array.make (Array.length in_shape) 0 in\n    let best_idx = ref 0 in\n    let acc = Array.make_int32 1 in\n    Array.unsafe_set acc 0 (Array.unsafe_get a_arr a_offset);\n    for i = 1 to in_numel - 1 do\n      Shape.unravel_index_into i in_shape md_idx;\n    
  let lin = Shape.ravel_index md_idx in_strides in\n      let v = Array.unsafe_get a_arr (a_offset + lin) in\n      if Int32_u.compare v (Array.unsafe_get acc 0) > 0 then (\n        Array.unsafe_set acc 0 v;\n        best_idx := i)\n    done;\n    Array.unsafe_set out_arr out_offset (Int32_u.of_int !best_idx)\n\nlet argmax_all_int64 (out_arr : int32# array) out_offset (a_arr : int64# array)\n    va in_numel =\n  if in_numel = 0 then\n    invalid_arg \"argmax: empty input\";\n  let a_offset = View.offset va in\n  if View.is_c_contiguous va then (\n    let best_idx = ref 0 in\n    let acc = Array.make_int64 1 in\n    Array.unsafe_set acc 0 (Array.unsafe_get a_arr a_offset);\n    for i = 1 to in_numel - 1 do\n      let v = Array.unsafe_get a_arr (a_offset + i) in\n      if Int64_u.compare v (Array.unsafe_get acc 0) > 0 then (\n        Array.unsafe_set acc 0 v;\n        best_idx := i)\n    done;\n    Array.unsafe_set out_arr out_offset (Int32_u.of_int !best_idx))\n  else\n    let in_shape = shape va in\n    let in_strides = View.strides va in\n    let md_idx = Array.make (Array.length in_shape) 0 in\n    let best_idx = ref 0 in\n    let acc = Array.make_int64 1 in\n    Array.unsafe_set acc 0 (Array.unsafe_get a_arr a_offset);\n    for i = 1 to in_numel - 1 do\n      Shape.unravel_index_into i in_shape md_idx;\n      let lin = Shape.ravel_index md_idx in_strides in\n      let v = Array.unsafe_get a_arr (a_offset + lin) in\n      if Int64_u.compare v (Array.unsafe_get acc 0) > 0 then (\n        Array.unsafe_set acc 0 v;\n        best_idx := i)\n    done;\n    Array.unsafe_set out_arr out_offset (Int32_u.of_int !best_idx)\n\n(* Axis-based argmax *)\n\nlet argmax_axis_float64 (out_arr : int32# array) a_arr va vout axis keepdims\n    start_idx end_idx =\n  let plan = Reduce_ops.make_plan [| axis |] keepdims va vout in\n  let out_md_index = Array.make plan.out_rank 0 in\n  let in_md_index = Array.make plan.rank 0 in\n  let acc = Array.make_float64 1 in\n  for k = start_idx 
to end_idx - 1 do\n    Shape.unravel_index_into k plan.out_shape out_md_index;\n    Reduce_ops.init_input_index plan out_md_index in_md_index;\n    let a_lin = Shape.ravel_index in_md_index plan.in_strides in\n    Array.unsafe_set acc 0 (Array.unsafe_get a_arr (plan.in_offset + a_lin));\n    let best_idx = ref 0 in\n    let idx = ref 1 in\n    let continue = ref (Reduce_ops.increment_input_index plan in_md_index) in\n    while !continue do\n      let a_lin = Shape.ravel_index in_md_index plan.in_strides in\n      let v = Array.unsafe_get a_arr (plan.in_offset + a_lin) in\n      if Float_u.compare v (Array.unsafe_get acc 0) > 0 then (\n        Array.unsafe_set acc 0 v;\n        best_idx := !idx);\n      incr idx;\n      continue := Reduce_ops.increment_input_index plan in_md_index\n    done;\n    Array.unsafe_set out_arr (plan.out_offset + k) (Int32_u.of_int !best_idx)\n  done\n\nlet argmax_axis_float32 (out_arr : int32# array) a_arr va vout axis keepdims\n    start_idx end_idx =\n  let plan = Reduce_ops.make_plan [| axis |] keepdims va vout in\n  let out_md_index = Array.make plan.out_rank 0 in\n  let in_md_index = Array.make plan.rank 0 in\n  let acc = Array.make_float32 1 in\n  for k = start_idx to end_idx - 1 do\n    Shape.unravel_index_into k plan.out_shape out_md_index;\n    Reduce_ops.init_input_index plan out_md_index in_md_index;\n    let a_lin = Shape.ravel_index in_md_index plan.in_strides in\n    Array.unsafe_set acc 0 (Array.unsafe_get a_arr (plan.in_offset + a_lin));\n    let best_idx = ref 0 in\n    let idx = ref 1 in\n    let continue = ref (Reduce_ops.increment_input_index plan in_md_index) in\n    while !continue do\n      let a_lin = Shape.ravel_index in_md_index plan.in_strides in\n      let v = Array.unsafe_get a_arr (plan.in_offset + a_lin) in\n      if Float32_u.compare v (Array.unsafe_get acc 0) > 0 then (\n        Array.unsafe_set acc 0 v;\n        best_idx := !idx);\n      incr idx;\n      continue := Reduce_ops.increment_input_index plan 
in_md_index\n    done;\n    Array.unsafe_set out_arr (plan.out_offset + k) (Int32_u.of_int !best_idx)\n  done\n\nlet argmax_axis_int32 (out_arr : int32# array) (a_arr : int32# array) va vout\n    axis keepdims start_idx end_idx =\n  let plan = Reduce_ops.make_plan [| axis |] keepdims va vout in\n  let out_md_index = Array.make plan.out_rank 0 in\n  let in_md_index = Array.make plan.rank 0 in\n  let acc = Array.make_int32 1 in\n  for k = start_idx to end_idx - 1 do\n    Shape.unravel_index_into k plan.out_shape out_md_index;\n    Reduce_ops.init_input_index plan out_md_index in_md_index;\n    let a_lin = Shape.ravel_index in_md_index plan.in_strides in\n    Array.unsafe_set acc 0 (Array.unsafe_get a_arr (plan.in_offset + a_lin));\n    let best_idx = ref 0 in\n    let idx = ref 1 in\n    let continue = ref (Reduce_ops.increment_input_index plan in_md_index) in\n    while !continue do\n      let a_lin = Shape.ravel_index in_md_index plan.in_strides in\n      let v = Array.unsafe_get a_arr (plan.in_offset + a_lin) in\n      if Int32_u.compare v (Array.unsafe_get acc 0) > 0 then (\n        Array.unsafe_set acc 0 v;\n        best_idx := !idx);\n      incr idx;\n      continue := Reduce_ops.increment_input_index plan in_md_index\n    done;\n    Array.unsafe_set out_arr (plan.out_offset + k) (Int32_u.of_int !best_idx)\n  done\n\nlet argmax_axis_int64 (out_arr : int32# array) (a_arr : int64# array) va vout\n    axis keepdims start_idx end_idx =\n  let plan = Reduce_ops.make_plan [| axis |] keepdims va vout in\n  let out_md_index = Array.make plan.out_rank 0 in\n  let in_md_index = Array.make plan.rank 0 in\n  let acc = Array.make_int64 1 in\n  for k = start_idx to end_idx - 1 do\n    Shape.unravel_index_into k plan.out_shape out_md_index;\n    Reduce_ops.init_input_index plan out_md_index in_md_index;\n    let a_lin = Shape.ravel_index in_md_index plan.in_strides in\n    Array.unsafe_set acc 0 (Array.unsafe_get a_arr (plan.in_offset + a_lin));\n    let best_idx = ref 0 in\n 
   let idx = ref 1 in\n    let continue = ref (Reduce_ops.increment_input_index plan in_md_index) in\n    while !continue do\n      let a_lin = Shape.ravel_index in_md_index plan.in_strides in\n      let v = Array.unsafe_get a_arr (plan.in_offset + a_lin) in\n      if Int64_u.compare v (Array.unsafe_get acc 0) > 0 then (\n        Array.unsafe_set acc 0 v;\n        best_idx := !idx);\n      incr idx;\n      continue := Reduce_ops.increment_input_index plan in_md_index\n    done;\n    Array.unsafe_set out_arr (plan.out_offset + k) (Int32_u.of_int !best_idx)\n  done\n\n(* Entry points *)\n\nlet argmax_float64 pool ~(out_arr : int32# array) ~a_arr ~va ~vout ~axis\n    ~keepdims =\n  let in_numel = numel va in\n  let out_numel = numel vout in\n  if in_numel = 0 then\n    invalid_arg \"argmax: empty input\"\n  else if out_numel = 1 then\n    argmax_all_float64 out_arr (View.offset vout) a_arr va in_numel\n  else if out_numel > parallel_threshold then\n    Parallel.parallel_for pool 0 (out_numel - 1) (fun s e ->\n        argmax_axis_float64 out_arr a_arr va vout axis keepdims s e)\n  else argmax_axis_float64 out_arr a_arr va vout axis keepdims 0 out_numel\n\nlet argmax_float32 pool ~(out_arr : int32# array) ~a_arr ~va ~vout ~axis\n    ~keepdims =\n  let in_numel = numel va in\n  let out_numel = numel vout in\n  if in_numel = 0 then\n    invalid_arg \"argmax: empty input\"\n  else if out_numel = 1 then\n    argmax_all_float32 out_arr (View.offset vout) a_arr va in_numel\n  else if out_numel > parallel_threshold then\n    Parallel.parallel_for pool 0 (out_numel - 1) (fun s e ->\n        argmax_axis_float32 out_arr a_arr va vout axis keepdims s e)\n  else argmax_axis_float32 out_arr a_arr va vout axis keepdims 0 out_numel\n\nlet argmax_int32 pool ~(out_arr : int32# array) ~a_arr ~va ~vout ~axis\n    ~keepdims =\n  let in_numel = numel va in\n  let out_numel = numel vout in\n  if in_numel = 0 then\n    invalid_arg \"argmax: empty input\"\n  else if out_numel = 1 then\n    
argmax_all_int32 out_arr (View.offset vout) a_arr va in_numel\n  else if out_numel > parallel_threshold then\n    Parallel.parallel_for pool 0 (out_numel - 1) (fun s e ->\n        argmax_axis_int32 out_arr a_arr va vout axis keepdims s e)\n  else argmax_axis_int32 out_arr a_arr va vout axis keepdims 0 out_numel\n\nlet argmax_int64 pool ~(out_arr : int32# array) ~a_arr ~va ~vout ~axis\n    ~keepdims =\n  let in_numel = numel va in\n  let out_numel = numel vout in\n  if in_numel = 0 then\n    invalid_arg \"argmax: empty input\"\n  else if out_numel = 1 then\n    argmax_all_int64 out_arr (View.offset vout) a_arr va in_numel\n  else if out_numel > parallel_threshold then\n    Parallel.parallel_for pool 0 (out_numel - 1) (fun s e ->\n        argmax_axis_int64 out_arr a_arr va vout axis keepdims s e)\n  else argmax_axis_int64 out_arr a_arr va vout axis keepdims 0 out_numel\n\n(* --- argmin --- *)\n\nlet argmin_all_float64 (out_arr : int32# array) out_offset a_arr va in_numel =\n  if in_numel = 0 then\n    invalid_arg \"argmin: empty input\";\n  let a_offset = View.offset va in\n  if View.is_c_contiguous va then (\n    let best_idx = ref 0 in\n    let acc = Array.make_float64 1 in\n    Array.unsafe_set acc 0 (Array.unsafe_get a_arr a_offset);\n    for i = 1 to in_numel - 1 do\n      let v = Array.unsafe_get a_arr (a_offset + i) in\n      if Float_u.compare v (Array.unsafe_get acc 0) < 0 then (\n        Array.unsafe_set acc 0 v;\n        best_idx := i)\n    done;\n    Array.unsafe_set out_arr out_offset (Int32_u.of_int !best_idx))\n  else\n    let in_shape = shape va in\n    let in_strides = View.strides va in\n    let md_idx = Array.make (Array.length in_shape) 0 in\n    let best_idx = ref 0 in\n    let acc = Array.make_float64 1 in\n    Array.unsafe_set acc 0 (Array.unsafe_get a_arr a_offset);\n    for i = 1 to in_numel - 1 do\n      Shape.unravel_index_into i in_shape md_idx;\n      let lin = Shape.ravel_index md_idx in_strides in\n      let v = Array.unsafe_get a_arr 
(a_offset + lin) in\n      if Float_u.compare v (Array.unsafe_get acc 0) < 0 then (\n        Array.unsafe_set acc 0 v;\n        best_idx := i)\n    done;\n    Array.unsafe_set out_arr out_offset (Int32_u.of_int !best_idx)\n\nlet argmin_all_float32 (out_arr : int32# array) out_offset a_arr va in_numel =\n  if in_numel = 0 then\n    invalid_arg \"argmin: empty input\";\n  let a_offset = View.offset va in\n  if View.is_c_contiguous va then (\n    let best_idx = ref 0 in\n    let acc = Array.make_float32 1 in\n    Array.unsafe_set acc 0 (Array.unsafe_get a_arr a_offset);\n    for i = 1 to in_numel - 1 do\n      let v = Array.unsafe_get a_arr (a_offset + i) in\n      if Float32_u.compare v (Array.unsafe_get acc 0) < 0 then (\n        Array.unsafe_set acc 0 v;\n        best_idx := i)\n    done;\n    Array.unsafe_set out_arr out_offset (Int32_u.of_int !best_idx))\n  else\n    let in_shape = shape va in\n    let in_strides = View.strides va in\n    let md_idx = Array.make (Array.length in_shape) 0 in\n    let best_idx = ref 0 in\n    let acc = Array.make_float32 1 in\n    Array.unsafe_set acc 0 (Array.unsafe_get a_arr a_offset);\n    for i = 1 to in_numel - 1 do\n      Shape.unravel_index_into i in_shape md_idx;\n      let lin = Shape.ravel_index md_idx in_strides in\n      let v = Array.unsafe_get a_arr (a_offset + lin) in\n      if Float32_u.compare v (Array.unsafe_get acc 0) < 0 then (\n        Array.unsafe_set acc 0 v;\n        best_idx := i)\n    done;\n    Array.unsafe_set out_arr out_offset (Int32_u.of_int !best_idx)\n\nlet argmin_all_int32 (out_arr : int32# array) out_offset (a_arr : int32# array)\n    va in_numel =\n  if in_numel = 0 then\n    invalid_arg \"argmin: empty input\";\n  let a_offset = View.offset va in\n  if View.is_c_contiguous va then (\n    let best_idx = ref 0 in\n    let acc = Array.make_int32 1 in\n    Array.unsafe_set acc 0 (Array.unsafe_get a_arr a_offset);\n    for i = 1 to in_numel - 1 do\n      let v = Array.unsafe_get a_arr (a_offset + i) 
in\n      if Int32_u.compare v (Array.unsafe_get acc 0) < 0 then (\n        Array.unsafe_set acc 0 v;\n        best_idx := i)\n    done;\n    Array.unsafe_set out_arr out_offset (Int32_u.of_int !best_idx))\n  else\n    let in_shape = shape va in\n    let in_strides = View.strides va in\n    let md_idx = Array.make (Array.length in_shape) 0 in\n    let best_idx = ref 0 in\n    let acc = Array.make_int32 1 in\n    Array.unsafe_set acc 0 (Array.unsafe_get a_arr a_offset);\n    for i = 1 to in_numel - 1 do\n      Shape.unravel_index_into i in_shape md_idx;\n      let lin = Shape.ravel_index md_idx in_strides in\n      let v = Array.unsafe_get a_arr (a_offset + lin) in\n      if Int32_u.compare v (Array.unsafe_get acc 0) < 0 then (\n        Array.unsafe_set acc 0 v;\n        best_idx := i)\n    done;\n    Array.unsafe_set out_arr out_offset (Int32_u.of_int !best_idx)\n\nlet argmin_all_int64 (out_arr : int32# array) out_offset (a_arr : int64# array)\n    va in_numel =\n  if in_numel = 0 then\n    invalid_arg \"argmin: empty input\";\n  let a_offset = View.offset va in\n  if View.is_c_contiguous va then (\n    let best_idx = ref 0 in\n    let acc = Array.make_int64 1 in\n    Array.unsafe_set acc 0 (Array.unsafe_get a_arr a_offset);\n    for i = 1 to in_numel - 1 do\n      let v = Array.unsafe_get a_arr (a_offset + i) in\n      if Int64_u.compare v (Array.unsafe_get acc 0) < 0 then (\n        Array.unsafe_set acc 0 v;\n        best_idx := i)\n    done;\n    Array.unsafe_set out_arr out_offset (Int32_u.of_int !best_idx))\n  else\n    let in_shape = shape va in\n    let in_strides = View.strides va in\n    let md_idx = Array.make (Array.length in_shape) 0 in\n    let best_idx = ref 0 in\n    let acc = Array.make_int64 1 in\n    Array.unsafe_set acc 0 (Array.unsafe_get a_arr a_offset);\n    for i = 1 to in_numel - 1 do\n      Shape.unravel_index_into i in_shape md_idx;\n      let lin = Shape.ravel_index md_idx in_strides in\n      let v = Array.unsafe_get a_arr (a_offset + 
lin) in\n      if Int64_u.compare v (Array.unsafe_get acc 0) < 0 then (\n        Array.unsafe_set acc 0 v;\n        best_idx := i)\n    done;\n    Array.unsafe_set out_arr out_offset (Int32_u.of_int !best_idx)\n\n(* Axis-based argmin *)\n\nlet argmin_axis_float64 (out_arr : int32# array) a_arr va vout axis keepdims\n    start_idx end_idx =\n  let plan = Reduce_ops.make_plan [| axis |] keepdims va vout in\n  let out_md_index = Array.make plan.out_rank 0 in\n  let in_md_index = Array.make plan.rank 0 in\n  let acc = Array.make_float64 1 in\n  for k = start_idx to end_idx - 1 do\n    Shape.unravel_index_into k plan.out_shape out_md_index;\n    Reduce_ops.init_input_index plan out_md_index in_md_index;\n    let a_lin = Shape.ravel_index in_md_index plan.in_strides in\n    Array.unsafe_set acc 0 (Array.unsafe_get a_arr (plan.in_offset + a_lin));\n    let best_idx = ref 0 in\n    let idx = ref 1 in\n    let continue = ref (Reduce_ops.increment_input_index plan in_md_index) in\n    while !continue do\n      let a_lin = Shape.ravel_index in_md_index plan.in_strides in\n      let v = Array.unsafe_get a_arr (plan.in_offset + a_lin) in\n      if Float_u.compare v (Array.unsafe_get acc 0) < 0 then (\n        Array.unsafe_set acc 0 v;\n        best_idx := !idx);\n      incr idx;\n      continue := Reduce_ops.increment_input_index plan in_md_index\n    done;\n    Array.unsafe_set out_arr (plan.out_offset + k) (Int32_u.of_int !best_idx)\n  done\n\nlet argmin_axis_float32 (out_arr : int32# array) a_arr va vout axis keepdims\n    start_idx end_idx =\n  let plan = Reduce_ops.make_plan [| axis |] keepdims va vout in\n  let out_md_index = Array.make plan.out_rank 0 in\n  let in_md_index = Array.make plan.rank 0 in\n  let acc = Array.make_float32 1 in\n  for k = start_idx to end_idx - 1 do\n    Shape.unravel_index_into k plan.out_shape out_md_index;\n    Reduce_ops.init_input_index plan out_md_index in_md_index;\n    let a_lin = Shape.ravel_index in_md_index plan.in_strides in\n    
Array.unsafe_set acc 0 (Array.unsafe_get a_arr (plan.in_offset + a_lin));\n    let best_idx = ref 0 in\n    let idx = ref 1 in\n    let continue = ref (Reduce_ops.increment_input_index plan in_md_index) in\n    while !continue do\n      let a_lin = Shape.ravel_index in_md_index plan.in_strides in\n      let v = Array.unsafe_get a_arr (plan.in_offset + a_lin) in\n      if Float32_u.compare v (Array.unsafe_get acc 0) < 0 then (\n        Array.unsafe_set acc 0 v;\n        best_idx := !idx);\n      incr idx;\n      continue := Reduce_ops.increment_input_index plan in_md_index\n    done;\n    Array.unsafe_set out_arr (plan.out_offset + k) (Int32_u.of_int !best_idx)\n  done\n\nlet argmin_axis_int32 (out_arr : int32# array) (a_arr : int32# array) va vout\n    axis keepdims start_idx end_idx =\n  let plan = Reduce_ops.make_plan [| axis |] keepdims va vout in\n  let out_md_index = Array.make plan.out_rank 0 in\n  let in_md_index = Array.make plan.rank 0 in\n  let acc = Array.make_int32 1 in\n  for k = start_idx to end_idx - 1 do\n    Shape.unravel_index_into k plan.out_shape out_md_index;\n    Reduce_ops.init_input_index plan out_md_index in_md_index;\n    let a_lin = Shape.ravel_index in_md_index plan.in_strides in\n    Array.unsafe_set acc 0 (Array.unsafe_get a_arr (plan.in_offset + a_lin));\n    let best_idx = ref 0 in\n    let idx = ref 1 in\n    let continue = ref (Reduce_ops.increment_input_index plan in_md_index) in\n    while !continue do\n      let a_lin = Shape.ravel_index in_md_index plan.in_strides in\n      let v = Array.unsafe_get a_arr (plan.in_offset + a_lin) in\n      if Int32_u.compare v (Array.unsafe_get acc 0) < 0 then (\n        Array.unsafe_set acc 0 v;\n        best_idx := !idx);\n      incr idx;\n      continue := Reduce_ops.increment_input_index plan in_md_index\n    done;\n    Array.unsafe_set out_arr (plan.out_offset + k) (Int32_u.of_int !best_idx)\n  done\n\nlet argmin_axis_int64 (out_arr : int32# array) (a_arr : int64# array) va vout\n    axis 
keepdims start_idx end_idx =\n  let plan = Reduce_ops.make_plan [| axis |] keepdims va vout in\n  let out_md_index = Array.make plan.out_rank 0 in\n  let in_md_index = Array.make plan.rank 0 in\n  let acc = Array.make_int64 1 in\n  for k = start_idx to end_idx - 1 do\n    Shape.unravel_index_into k plan.out_shape out_md_index;\n    Reduce_ops.init_input_index plan out_md_index in_md_index;\n    let a_lin = Shape.ravel_index in_md_index plan.in_strides in\n    Array.unsafe_set acc 0 (Array.unsafe_get a_arr (plan.in_offset + a_lin));\n    let best_idx = ref 0 in\n    let idx = ref 1 in\n    let continue = ref (Reduce_ops.increment_input_index plan in_md_index) in\n    while !continue do\n      let a_lin = Shape.ravel_index in_md_index plan.in_strides in\n      let v = Array.unsafe_get a_arr (plan.in_offset + a_lin) in\n      if Int64_u.compare v (Array.unsafe_get acc 0) < 0 then (\n        Array.unsafe_set acc 0 v;\n        best_idx := !idx);\n      incr idx;\n      continue := Reduce_ops.increment_input_index plan in_md_index\n    done;\n    Array.unsafe_set out_arr (plan.out_offset + k) (Int32_u.of_int !best_idx)\n  done\n\n(* Entry points *)\n\nlet argmin_float64 pool ~(out_arr : int32# array) ~a_arr ~va ~vout ~axis\n    ~keepdims =\n  let in_numel = numel va in\n  let out_numel = numel vout in\n  if in_numel = 0 then\n    invalid_arg \"argmin: empty input\"\n  else if out_numel = 1 then\n    argmin_all_float64 out_arr (View.offset vout) a_arr va in_numel\n  else if out_numel > parallel_threshold then\n    Parallel.parallel_for pool 0 (out_numel - 1) (fun s e ->\n        argmin_axis_float64 out_arr a_arr va vout axis keepdims s e)\n  else argmin_axis_float64 out_arr a_arr va vout axis keepdims 0 out_numel\n\nlet argmin_float32 pool ~(out_arr : int32# array) ~a_arr ~va ~vout ~axis\n    ~keepdims =\n  let in_numel = numel va in\n  let out_numel = numel vout in\n  if in_numel = 0 then\n    invalid_arg \"argmin: empty input\"\n  else if out_numel = 1 then\n    
argmin_all_float32 out_arr (View.offset vout) a_arr va in_numel\n  else if out_numel > parallel_threshold then\n    Parallel.parallel_for pool 0 (out_numel - 1) (fun s e ->\n        argmin_axis_float32 out_arr a_arr va vout axis keepdims s e)\n  else argmin_axis_float32 out_arr a_arr va vout axis keepdims 0 out_numel\n\nlet argmin_int32 pool ~(out_arr : int32# array) ~a_arr ~va ~vout ~axis\n    ~keepdims =\n  let in_numel = numel va in\n  let out_numel = numel vout in\n  if in_numel = 0 then\n    invalid_arg \"argmin: empty input\"\n  else if out_numel = 1 then\n    argmin_all_int32 out_arr (View.offset vout) a_arr va in_numel\n  else if out_numel > parallel_threshold then\n    Parallel.parallel_for pool 0 (out_numel - 1) (fun s e ->\n        argmin_axis_int32 out_arr a_arr va vout axis keepdims s e)\n  else argmin_axis_int32 out_arr a_arr va vout axis keepdims 0 out_numel\n\nlet argmin_int64 pool ~(out_arr : int32# array) ~a_arr ~va ~vout ~axis\n    ~keepdims =\n  let in_numel = numel va in\n  let out_numel = numel vout in\n  if in_numel = 0 then\n    invalid_arg \"argmin: empty input\"\n  else if out_numel = 1 then\n    argmin_all_int64 out_arr (View.offset vout) a_arr va in_numel\n  else if out_numel > parallel_threshold then\n    Parallel.parallel_for pool 0 (out_numel - 1) (fun s e ->\n        argmin_axis_int64 out_arr a_arr va vout axis keepdims s e)\n  else argmin_axis_int64 out_arr a_arr va vout axis keepdims 0 out_numel\n"
  },
  {
    "path": "packages/nx-oxcaml/lib/op_associative_scan.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Import\n\ntype op = [ `Sum | `Prod | `Max | `Min ]\n\nlet product_range arr start_idx end_idx =\n  let p = ref 1 in\n  for i = start_idx to end_idx - 1 do\n    p := !p * arr.(i)\n  done;\n  !p\n\nlet run_scan ~pool ~shape ~axis ~in_view ~out_view ~scan_slice =\n  let rank = Array.length shape in\n  let axis_len = shape.(axis) in\n  if axis_len = 0 then ()\n  else\n    let inner_size = product_range shape (axis + 1) rank in\n    let outer_size = product_range shape 0 axis in\n    let total_slices = outer_size * inner_size in\n    if total_slices = 0 then ()\n    else\n      let in_strides = View.strides in_view in\n      let out_strides = View.strides out_view in\n      let in_offset = View.offset in_view in\n      let out_offset = View.offset out_view in\n      let in_axis_stride = in_strides.(axis) in\n      let out_axis_stride = out_strides.(axis) in\n\n      let contiguous =\n        View.is_c_contiguous in_view\n        && View.is_c_contiguous out_view\n        && axis = rank - 1\n      in\n\n      if contiguous then (\n        let process_rows start_row end_row =\n          for row = start_row to end_row - 1 do\n            let base_in = in_offset + (row * axis_len) in\n            let base_out = out_offset + (row * axis_len) in\n            scan_slice base_in base_out 1 1 axis_len\n          done\n        in\n        if outer_size > 8192 then\n          Parallel.parallel_for pool 0 (outer_size - 1) process_rows\n        else process_rows 0 outer_size\n      )\n\n      else (\n        (* Build dims/strides excluding axis *)\n        let slice_rank = rank - 1 in\n        let dims = Array.make slice_rank 0 in\n        let in_str = Array.make slice_rank 0 in\n        let out_str = Array.make 
slice_rank 0 in\n\n        let idx = ref 0 in\n        for d = 0 to rank - 1 do\n          if d <> axis then (\n            dims.(!idx) <- shape.(d);\n            in_str.(!idx) <- in_strides.(d);\n            out_str.(!idx) <- out_strides.(d);\n            incr idx)\n        done;\n\n        let process_chunk start_slice end_slice =\n          (* Initialize coordinate state for this chunk *)\n          let coords = Array.make slice_rank 0 in\n          let in_base = ref in_offset in\n          let out_base = ref out_offset in\n\n          (* Compute coordinates for start_slice directly *)\n          let rem = ref start_slice in\n          for d = 0 to slice_rank - 1 do\n            let block = ref 1 in\n            for d' = d + 1 to slice_rank - 1 do\n              block := !block * dims.(d')\n            done;\n            let c = !rem / !block in\n            rem := !rem mod !block;\n            coords.(d) <- c;\n            in_base := !in_base + (c * in_str.(d));\n            out_base := !out_base + (c * out_str.(d))\n          done;\n\n          for _ = start_slice to end_slice - 1 do\n            scan_slice !in_base !out_base\n              in_axis_stride out_axis_stride axis_len;\n\n            (* Increment slice coordinates *)\n            let rec carry d =\n              if d >= 0 then\n                let next = coords.(d) + 1 in\n                if next < dims.(d) then (\n                  coords.(d) <- next;\n                  in_base := !in_base + in_str.(d);\n                  out_base := !out_base + out_str.(d)\n                ) else (\n                  coords.(d) <- 0;\n                  in_base := !in_base - (dims.(d) - 1) * in_str.(d);\n                  out_base := !out_base - (dims.(d) - 1) * out_str.(d);\n                  carry (d - 1)\n                )\n            in\n            carry (slice_rank - 1)\n          done\n        in\n\n        let parallel_threshold = 62500 in\n        if total_slices > parallel_threshold then\n          
Parallel.parallel_for pool 0 (total_slices - 1) process_chunk\n        else process_chunk 0 total_slices\n      )\n\nlet scan_float64 pool ~(out_arr : float# array) ~(in_arr : float# array) ~shape\n    ~axis ~in_view ~out_view ~op =\n  let scan_slice =\n    match op with\n    | `Sum ->\n        fun in_base out_base in_step out_step axis_len ->\n          let first = Array.unsafe_get in_arr in_base in\n          Array.unsafe_set out_arr out_base first;\n          let rec loop i acc in_idx out_idx =\n            if i < axis_len then\n              let next = Float_u.add acc (Array.unsafe_get in_arr in_idx) in\n              Array.unsafe_set out_arr out_idx next;\n              loop (i + 1) next (in_idx + in_step) (out_idx + out_step)\n          in\n          loop 1 first (in_base + in_step) (out_base + out_step)\n    | `Prod ->\n        fun in_base out_base in_step out_step axis_len ->\n          let first = Array.unsafe_get in_arr in_base in\n          Array.unsafe_set out_arr out_base first;\n          let rec loop i acc in_idx out_idx =\n            if i < axis_len then\n              let next = Float_u.mul acc (Array.unsafe_get in_arr in_idx) in\n              Array.unsafe_set out_arr out_idx next;\n              loop (i + 1) next (in_idx + in_step) (out_idx + out_step)\n          in\n          loop 1 first (in_base + in_step) (out_base + out_step)\n    | `Max ->\n        fun in_base out_base in_step out_step axis_len ->\n          let first = Array.unsafe_get in_arr in_base in\n          Array.unsafe_set out_arr out_base first;\n          let rec loop i acc in_idx out_idx =\n            if i < axis_len then\n              let next = Float_u.max acc (Array.unsafe_get in_arr in_idx) in\n              Array.unsafe_set out_arr out_idx next;\n              loop (i + 1) next (in_idx + in_step) (out_idx + out_step)\n          in\n          loop 1 first (in_base + in_step) (out_base + out_step)\n    | `Min ->\n        fun in_base out_base in_step out_step axis_len ->\n  
        let first = Array.unsafe_get in_arr in_base in\n          Array.unsafe_set out_arr out_base first;\n          let rec loop i acc in_idx out_idx =\n            if i < axis_len then\n              let next = Float_u.min acc (Array.unsafe_get in_arr in_idx) in\n              Array.unsafe_set out_arr out_idx next;\n              loop (i + 1) next (in_idx + in_step) (out_idx + out_step)\n          in\n          loop 1 first (in_base + in_step) (out_base + out_step)\n  in\n  run_scan ~pool ~shape ~axis ~in_view ~out_view ~scan_slice\n\nlet scan_float32 pool ~(out_arr : float32# array) ~(in_arr : float32# array)\n    ~shape ~axis ~in_view ~out_view ~op =\n  let scan_slice =\n    match op with\n    | `Sum ->\n        fun in_base out_base in_step out_step axis_len ->\n          let first = Array.unsafe_get in_arr in_base in\n          Array.unsafe_set out_arr out_base first;\n          let rec loop i acc in_idx out_idx =\n            if i < axis_len then\n              let next = Float32_u.add acc (Array.unsafe_get in_arr in_idx) in\n              Array.unsafe_set out_arr out_idx next;\n              loop (i + 1) next (in_idx + in_step) (out_idx + out_step)\n          in\n          loop 1 first (in_base + in_step) (out_base + out_step)\n    | `Prod ->\n        fun in_base out_base in_step out_step axis_len ->\n          let first = Array.unsafe_get in_arr in_base in\n          Array.unsafe_set out_arr out_base first;\n          let rec loop i acc in_idx out_idx =\n            if i < axis_len then\n              let next = Float32_u.mul acc (Array.unsafe_get in_arr in_idx) in\n              Array.unsafe_set out_arr out_idx next;\n              loop (i + 1) next (in_idx + in_step) (out_idx + out_step)\n          in\n          loop 1 first (in_base + in_step) (out_base + out_step)\n    | `Max ->\n        fun in_base out_base in_step out_step axis_len ->\n          let first = Array.unsafe_get in_arr in_base in\n          Array.unsafe_set out_arr out_base first;\n       
   let rec loop i acc in_idx out_idx =\n            if i < axis_len then\n              let next = Float32_u.max acc (Array.unsafe_get in_arr in_idx) in\n              Array.unsafe_set out_arr out_idx next;\n              loop (i + 1) next (in_idx + in_step) (out_idx + out_step)\n          in\n          loop 1 first (in_base + in_step) (out_base + out_step)\n    | `Min ->\n        fun in_base out_base in_step out_step axis_len ->\n          let first = Array.unsafe_get in_arr in_base in\n          Array.unsafe_set out_arr out_base first;\n          let rec loop i acc in_idx out_idx =\n            if i < axis_len then\n              let next = Float32_u.min acc (Array.unsafe_get in_arr in_idx) in\n              Array.unsafe_set out_arr out_idx next;\n              loop (i + 1) next (in_idx + in_step) (out_idx + out_step)\n          in\n          loop 1 first (in_base + in_step) (out_base + out_step)\n  in\n  run_scan ~pool ~shape ~axis ~in_view ~out_view ~scan_slice\n\nlet scan_int8 pool ~(out_arr : int8# array) ~(in_arr : int8# array) ~shape ~axis\n    ~in_view ~out_view ~op =\n  let scan_slice =\n    match op with\n    | `Sum ->\n        fun in_base out_base in_step out_step axis_len ->\n          let first = Array.unsafe_get in_arr in_base in\n          Array.unsafe_set out_arr out_base first;\n          let rec loop i acc in_idx out_idx =\n            if i < axis_len then\n              let next = Int8_u.add acc (Array.unsafe_get in_arr in_idx) in\n              Array.unsafe_set out_arr out_idx next;\n              loop (i + 1) next (in_idx + in_step) (out_idx + out_step)\n          in\n          loop 1 first (in_base + in_step) (out_base + out_step)\n    | `Prod ->\n        fun in_base out_base in_step out_step axis_len ->\n          let first = Array.unsafe_get in_arr in_base in\n          Array.unsafe_set out_arr out_base first;\n          let rec loop i acc in_idx out_idx =\n            if i < axis_len then\n              let next = Int8_u.mul acc 
(Array.unsafe_get in_arr in_idx) in\n              Array.unsafe_set out_arr out_idx next;\n              loop (i + 1) next (in_idx + in_step) (out_idx + out_step)\n          in\n          loop 1 first (in_base + in_step) (out_base + out_step)\n    | `Max ->\n        fun in_base out_base in_step out_step axis_len ->\n          let first = Array.unsafe_get in_arr in_base in\n          Array.unsafe_set out_arr out_base first;\n          let rec loop i acc in_idx out_idx =\n            if i < axis_len then\n              let next = Int8_u.max acc (Array.unsafe_get in_arr in_idx) in\n              Array.unsafe_set out_arr out_idx next;\n              loop (i + 1) next (in_idx + in_step) (out_idx + out_step)\n          in\n          loop 1 first (in_base + in_step) (out_base + out_step)\n    | `Min ->\n        fun in_base out_base in_step out_step axis_len ->\n          let first = Array.unsafe_get in_arr in_base in\n          Array.unsafe_set out_arr out_base first;\n          let rec loop i acc in_idx out_idx =\n            if i < axis_len then\n              let next = Int8_u.min acc (Array.unsafe_get in_arr in_idx) in\n              Array.unsafe_set out_arr out_idx next;\n              loop (i + 1) next (in_idx + in_step) (out_idx + out_step)\n          in\n          loop 1 first (in_base + in_step) (out_base + out_step)\n  in\n  run_scan ~pool ~shape ~axis ~in_view ~out_view ~scan_slice\n\nlet scan_int16 pool ~(out_arr : int16# array) ~(in_arr : int16# array) ~shape\n    ~axis ~in_view ~out_view ~op =\n  let scan_slice =\n    match op with\n    | `Sum ->\n        fun in_base out_base in_step out_step axis_len ->\n          let first = Array.unsafe_get in_arr in_base in\n          Array.unsafe_set out_arr out_base first;\n          let rec loop i acc in_idx out_idx =\n            if i < axis_len then\n              let next = Int16_u.add acc (Array.unsafe_get in_arr in_idx) in\n              Array.unsafe_set out_arr out_idx next;\n              loop (i + 1) next 
(in_idx + in_step) (out_idx + out_step)\n          in\n          loop 1 first (in_base + in_step) (out_base + out_step)\n    | `Prod ->\n        fun in_base out_base in_step out_step axis_len ->\n          let first = Array.unsafe_get in_arr in_base in\n          Array.unsafe_set out_arr out_base first;\n          let rec loop i acc in_idx out_idx =\n            if i < axis_len then\n              let next = Int16_u.mul acc (Array.unsafe_get in_arr in_idx) in\n              Array.unsafe_set out_arr out_idx next;\n              loop (i + 1) next (in_idx + in_step) (out_idx + out_step)\n          in\n          loop 1 first (in_base + in_step) (out_base + out_step)\n    | `Max ->\n        fun in_base out_base in_step out_step axis_len ->\n          let first = Array.unsafe_get in_arr in_base in\n          Array.unsafe_set out_arr out_base first;\n          let rec loop i acc in_idx out_idx =\n            if i < axis_len then\n              let next = Int16_u.max acc (Array.unsafe_get in_arr in_idx) in\n              Array.unsafe_set out_arr out_idx next;\n              loop (i + 1) next (in_idx + in_step) (out_idx + out_step)\n          in\n          loop 1 first (in_base + in_step) (out_base + out_step)\n    | `Min ->\n        fun in_base out_base in_step out_step axis_len ->\n          let first = Array.unsafe_get in_arr in_base in\n          Array.unsafe_set out_arr out_base first;\n          let rec loop i acc in_idx out_idx =\n            if i < axis_len then\n              let next = Int16_u.min acc (Array.unsafe_get in_arr in_idx) in\n              Array.unsafe_set out_arr out_idx next;\n              loop (i + 1) next (in_idx + in_step) (out_idx + out_step)\n          in\n          loop 1 first (in_base + in_step) (out_base + out_step)\n  in\n  run_scan ~pool ~shape ~axis ~in_view ~out_view ~scan_slice\n\nlet scan_int32 pool ~(out_arr : int32# array) ~(in_arr : int32# array) ~shape\n    ~axis ~in_view ~out_view ~op =\n  let scan_slice =\n    match op with\n    
| `Sum ->\n        fun in_base out_base in_step out_step axis_len ->\n          let first = Array.unsafe_get in_arr in_base in\n          Array.unsafe_set out_arr out_base first;\n          let rec loop i acc in_idx out_idx =\n            if i < axis_len then\n              let next = Int32_u.add acc (Array.unsafe_get in_arr in_idx) in\n              Array.unsafe_set out_arr out_idx next;\n              loop (i + 1) next (in_idx + in_step) (out_idx + out_step)\n          in\n          loop 1 first (in_base + in_step) (out_base + out_step)\n    | `Prod ->\n        fun in_base out_base in_step out_step axis_len ->\n          let first = Array.unsafe_get in_arr in_base in\n          Array.unsafe_set out_arr out_base first;\n          let rec loop i acc in_idx out_idx =\n            if i < axis_len then\n              let next = Int32_u.mul acc (Array.unsafe_get in_arr in_idx) in\n              Array.unsafe_set out_arr out_idx next;\n              loop (i + 1) next (in_idx + in_step) (out_idx + out_step)\n          in\n          loop 1 first (in_base + in_step) (out_base + out_step)\n    | `Max ->\n        fun in_base out_base in_step out_step axis_len ->\n          let first = Array.unsafe_get in_arr in_base in\n          Array.unsafe_set out_arr out_base first;\n          let rec loop i acc in_idx out_idx =\n            if i < axis_len then\n              let next = Int32_u.max acc (Array.unsafe_get in_arr in_idx) in\n              Array.unsafe_set out_arr out_idx next;\n              loop (i + 1) next (in_idx + in_step) (out_idx + out_step)\n          in\n          loop 1 first (in_base + in_step) (out_base + out_step)\n    | `Min ->\n        fun in_base out_base in_step out_step axis_len ->\n          let first = Array.unsafe_get in_arr in_base in\n          Array.unsafe_set out_arr out_base first;\n          let rec loop i acc in_idx out_idx =\n            if i < axis_len then\n              let next = Int32_u.min acc (Array.unsafe_get in_arr in_idx) in\n          
    Array.unsafe_set out_arr out_idx next;\n              loop (i + 1) next (in_idx + in_step) (out_idx + out_step)\n          in\n          loop 1 first (in_base + in_step) (out_base + out_step)\n  in\n  run_scan ~pool ~shape ~axis ~in_view ~out_view ~scan_slice\n\nlet scan_int64 pool ~(out_arr : int64# array) ~(in_arr : int64# array) ~shape\n    ~axis ~in_view ~out_view ~op =\n  let scan_slice =\n    match op with\n    | `Sum ->\n        fun in_base out_base in_step out_step axis_len ->\n          let first = Array.unsafe_get in_arr in_base in\n          Array.unsafe_set out_arr out_base first;\n          let rec loop i acc in_idx out_idx =\n            if i < axis_len then\n              let next = Int64_u.add acc (Array.unsafe_get in_arr in_idx) in\n              Array.unsafe_set out_arr out_idx next;\n              loop (i + 1) next (in_idx + in_step) (out_idx + out_step)\n          in\n          loop 1 first (in_base + in_step) (out_base + out_step)\n    | `Prod ->\n        fun in_base out_base in_step out_step axis_len ->\n          let first = Array.unsafe_get in_arr in_base in\n          Array.unsafe_set out_arr out_base first;\n          let rec loop i acc in_idx out_idx =\n            if i < axis_len then\n              let next = Int64_u.mul acc (Array.unsafe_get in_arr in_idx) in\n              Array.unsafe_set out_arr out_idx next;\n              loop (i + 1) next (in_idx + in_step) (out_idx + out_step)\n          in\n          loop 1 first (in_base + in_step) (out_base + out_step)\n    | `Max ->\n        fun in_base out_base in_step out_step axis_len ->\n          let first = Array.unsafe_get in_arr in_base in\n          Array.unsafe_set out_arr out_base first;\n          let rec loop i acc in_idx out_idx =\n            if i < axis_len then\n              let next = Int64_u.max acc (Array.unsafe_get in_arr in_idx) in\n              Array.unsafe_set out_arr out_idx next;\n              loop (i + 1) next (in_idx + in_step) (out_idx + out_step)\n        
  in\n          loop 1 first (in_base + in_step) (out_base + out_step)\n    | `Min ->\n        fun in_base out_base in_step out_step axis_len ->\n          let first = Array.unsafe_get in_arr in_base in\n          Array.unsafe_set out_arr out_base first;\n          let rec loop i acc in_idx out_idx =\n            if i < axis_len then\n              let next = Int64_u.min acc (Array.unsafe_get in_arr in_idx) in\n              Array.unsafe_set out_arr out_idx next;\n              loop (i + 1) next (in_idx + in_step) (out_idx + out_step)\n          in\n          loop 1 first (in_base + in_step) (out_base + out_step)\n  in\n  run_scan ~pool ~shape ~axis ~in_view ~out_view ~scan_slice\n"
  },
  {
    "path": "packages/nx-oxcaml/lib/op_cast.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Import\n\n(* --- Float64 --- *)\n\nlet cast_float64_float32 (src : float# array) (dst : float32# array) n in_shape\n    in_offset in_strides out_offset out_strides =\n  let md_index = Array.make (Array.length in_shape) 0 in\n  for k = 0 to n - 1 do\n    Shape.unravel_index_into k in_shape md_index;\n    let src_lin = in_offset + Shape.ravel_index md_index in_strides in\n    let dst_lin = out_offset + Shape.ravel_index md_index out_strides in\n    Array.unsafe_set dst dst_lin\n      (Float32_u.of_float (Array.unsafe_get src src_lin))\n  done\n\nlet cast_float64_int8 (src : float# array) (dst : int8# array) n in_shape\n    in_offset in_strides out_offset out_strides =\n  let md_index = Array.make (Array.length in_shape) 0 in\n  for k = 0 to n - 1 do\n    Shape.unravel_index_into k in_shape md_index;\n    let src_lin = in_offset + Shape.ravel_index md_index in_strides in\n    let dst_lin = out_offset + Shape.ravel_index md_index out_strides in\n    Array.unsafe_set dst dst_lin\n      (Int8_u.of_int (Float_u.to_int (Array.unsafe_get src src_lin)))\n  done\n\nlet cast_float64_int16 (src : float# array) (dst : int16# array) n in_shape\n    in_offset in_strides out_offset out_strides =\n  let md_index = Array.make (Array.length in_shape) 0 in\n  for k = 0 to n - 1 do\n    Shape.unravel_index_into k in_shape md_index;\n    let src_lin = in_offset + Shape.ravel_index md_index in_strides in\n    let dst_lin = out_offset + Shape.ravel_index md_index out_strides in\n    Array.unsafe_set dst dst_lin\n      (Int16_u.of_int (Float_u.to_int (Array.unsafe_get src src_lin)))\n  done\n\nlet cast_float64_int32 (src : float# array) (dst : int32# array) n in_shape\n    in_offset in_strides out_offset out_strides 
=\n  let md_index = Array.make (Array.length in_shape) 0 in\n  for k = 0 to n - 1 do\n    Shape.unravel_index_into k in_shape md_index;\n    let src_lin = in_offset + Shape.ravel_index md_index in_strides in\n    let dst_lin = out_offset + Shape.ravel_index md_index out_strides in\n    Array.unsafe_set dst dst_lin\n      (Int32_u.of_int32\n          (Int32.of_int (Float_u.to_int (Array.unsafe_get src src_lin))))\n  done\n\nlet cast_float64_int64 (src : float# array) (dst : int64# array) n in_shape\n    in_offset in_strides out_offset out_strides =\n  let md_index = Array.make (Array.length in_shape) 0 in\n  for k = 0 to n - 1 do\n    Shape.unravel_index_into k in_shape md_index;\n    let src_lin = in_offset + Shape.ravel_index md_index in_strides in\n    let dst_lin = out_offset + Shape.ravel_index md_index out_strides in\n    Array.unsafe_set dst dst_lin\n      (Int64_u.of_int64\n          (Int64.of_int (Float_u.to_int (Array.unsafe_get src src_lin))))\n  done\n\nlet cast_float64_bool (src : float# array) (dst : bool array) n in_shape\n    in_offset in_strides out_offset out_strides =\n  let md_index = Array.make (Array.length in_shape) 0 in\n  for k = 0 to n - 1 do\n    Shape.unravel_index_into k in_shape md_index;\n    let src_lin = in_offset + Shape.ravel_index md_index in_strides in\n    let dst_lin = out_offset + Shape.ravel_index md_index out_strides in\n    Array.unsafe_set dst dst_lin\n      (Float_u.to_float (Array.unsafe_get src src_lin) <> 0.0)\n  done\n\n(* --- Float32 --- *)\n\nlet cast_float32_float64 (src : float32# array) (dst : float# array) n in_shape\n    in_offset in_strides out_offset out_strides =\n  let md_index = Array.make (Array.length in_shape) 0 in\n  for k = 0 to n - 1 do\n    Shape.unravel_index_into k in_shape md_index;\n    let src_lin = in_offset + Shape.ravel_index md_index in_strides in\n    let dst_lin = out_offset + Shape.ravel_index md_index out_strides in\n    Array.unsafe_set dst dst_lin\n      (Float32_u.to_float 
(Array.unsafe_get src src_lin))\n  done\n\nlet cast_float32_int8 (src : float32# array) (dst : int8# array) n in_shape\n    in_offset in_strides out_offset out_strides =\n  let md_index = Array.make (Array.length in_shape) 0 in\n  for k = 0 to n - 1 do\n    Shape.unravel_index_into k in_shape md_index;\n    let src_lin = in_offset + Shape.ravel_index md_index in_strides in\n    let dst_lin = out_offset + Shape.ravel_index md_index out_strides in\n    Array.unsafe_set dst dst_lin\n      (Int8_u.of_int (Float32_u.to_int (Array.unsafe_get src src_lin)))\n  done\n\nlet cast_float32_int16 (src : float32# array) (dst : int16# array) n in_shape\n    in_offset in_strides out_offset out_strides =\n  let md_index = Array.make (Array.length in_shape) 0 in\n  for k = 0 to n - 1 do\n    Shape.unravel_index_into k in_shape md_index;\n    let src_lin = in_offset + Shape.ravel_index md_index in_strides in\n    let dst_lin = out_offset + Shape.ravel_index md_index out_strides in\n    Array.unsafe_set dst dst_lin\n      (Int16_u.of_int (Float32_u.to_int (Array.unsafe_get src src_lin)))\n  done\n\nlet cast_float32_int32 (src : float32# array) (dst : int32# array) n in_shape\n    in_offset in_strides out_offset out_strides =\n  let md_index = Array.make (Array.length in_shape) 0 in\n  for k = 0 to n - 1 do\n    Shape.unravel_index_into k in_shape md_index;\n    let src_lin = in_offset + Shape.ravel_index md_index in_strides in\n    let dst_lin = out_offset + Shape.ravel_index md_index out_strides in\n    Array.unsafe_set dst dst_lin\n      (Int32_u.of_int32\n          (Int32.of_int (Float32_u.to_int (Array.unsafe_get src src_lin))))\n  done\n\nlet cast_float32_int64 (src : float32# array) (dst : int64# array) n in_shape\n    in_offset in_strides out_offset out_strides =\n  let md_index = Array.make (Array.length in_shape) 0 in\n  for k = 0 to n - 1 do\n    Shape.unravel_index_into k in_shape md_index;\n    let src_lin = in_offset + Shape.ravel_index md_index in_strides in\n    let 
dst_lin = out_offset + Shape.ravel_index md_index out_strides in\n    Array.unsafe_set dst dst_lin\n      (Int64_u.of_int64\n          (Int64.of_int (Float32_u.to_int (Array.unsafe_get src src_lin))))\n  done\n\nlet cast_float32_bool (src : float32# array) (dst : bool array) n in_shape\n    in_offset in_strides out_offset out_strides =\n  let md_index = Array.make (Array.length in_shape) 0 in\n  for k = 0 to n - 1 do\n    Shape.unravel_index_into k in_shape md_index;\n    let src_lin = in_offset + Shape.ravel_index md_index in_strides in\n    let dst_lin = out_offset + Shape.ravel_index md_index out_strides in\n    Array.unsafe_set dst dst_lin\n      (Float_u.to_float (Float32_u.to_float (Array.unsafe_get src src_lin))\n      <> 0.0)\n  done\n\n(* --- Int8 --- *)\n\nlet cast_int8_float64 (src : int8# array) (dst : float# array) n in_shape\n    in_offset in_strides out_offset out_strides =\n  let md_index = Array.make (Array.length in_shape) 0 in\n  for k = 0 to n - 1 do\n    Shape.unravel_index_into k in_shape md_index;\n    let src_lin = in_offset + Shape.ravel_index md_index in_strides in\n    let dst_lin = out_offset + Shape.ravel_index md_index out_strides in\n    Array.unsafe_set dst dst_lin\n      (Float_u.of_int (Int8_u.to_int (Array.unsafe_get src src_lin)))\n  done\n\nlet cast_int8_float32 (src : int8# array) (dst : float32# array) n in_shape\n    in_offset in_strides out_offset out_strides =\n  let md_index = Array.make (Array.length in_shape) 0 in\n  for k = 0 to n - 1 do\n    Shape.unravel_index_into k in_shape md_index;\n    let src_lin = in_offset + Shape.ravel_index md_index in_strides in\n    let dst_lin = out_offset + Shape.ravel_index md_index out_strides in\n    Array.unsafe_set dst dst_lin\n      (Float32_u.of_int (Int8_u.to_int (Array.unsafe_get src src_lin)))\n  done\n\nlet cast_int8_int16 (src : int8# array) (dst : int16# array) n in_shape\n    in_offset in_strides out_offset out_strides =\n  let md_index = Array.make (Array.length in_shape) 
0 in\n  for k = 0 to n - 1 do\n    Shape.unravel_index_into k in_shape md_index;\n    let src_lin = in_offset + Shape.ravel_index md_index in_strides in\n    let dst_lin = out_offset + Shape.ravel_index md_index out_strides in\n    Array.unsafe_set dst dst_lin\n      (Int16_u.of_int (Int8_u.to_int (Array.unsafe_get src src_lin)))\n  done\n\nlet cast_int8_int32 (src : int8# array) (dst : int32# array) n in_shape\n    in_offset in_strides out_offset out_strides =\n  let md_index = Array.make (Array.length in_shape) 0 in\n  for k = 0 to n - 1 do\n    Shape.unravel_index_into k in_shape md_index;\n    let src_lin = in_offset + Shape.ravel_index md_index in_strides in\n    let dst_lin = out_offset + Shape.ravel_index md_index out_strides in\n    Array.unsafe_set dst dst_lin\n      (Int32_u.of_int32\n          (Int32.of_int (Int8_u.to_int (Array.unsafe_get src src_lin))))\n  done\n\nlet cast_int8_int64 (src : int8# array) (dst : int64# array) n in_shape\n    in_offset in_strides out_offset out_strides =\n  let md_index = Array.make (Array.length in_shape) 0 in\n  for k = 0 to n - 1 do\n    Shape.unravel_index_into k in_shape md_index;\n    let src_lin = in_offset + Shape.ravel_index md_index in_strides in\n    let dst_lin = out_offset + Shape.ravel_index md_index out_strides in\n    Array.unsafe_set dst dst_lin\n      (Int64_u.of_int64\n          (Int64.of_int (Int8_u.to_int (Array.unsafe_get src src_lin))))\n  done\n\nlet cast_int8_bool (src : int8# array) (dst : bool array) n in_shape in_offset\n    in_strides out_offset out_strides =\n  let md_index = Array.make (Array.length in_shape) 0 in\n  for k = 0 to n - 1 do\n    Shape.unravel_index_into k in_shape md_index;\n    let src_lin = in_offset + Shape.ravel_index md_index in_strides in\n    let dst_lin = out_offset + Shape.ravel_index md_index out_strides in\n    Array.unsafe_set dst dst_lin\n      (Int8_u.to_int (Array.unsafe_get src src_lin) <> 0)\n  done\n\n(* --- Int16 --- *)\n\nlet cast_int16_float64 (src : 
int16# array) (dst : float# array) n in_shape\n    in_offset in_strides out_offset out_strides =\n  let md_index = Array.make (Array.length in_shape) 0 in\n  for k = 0 to n - 1 do\n    Shape.unravel_index_into k in_shape md_index;\n    let src_lin = in_offset + Shape.ravel_index md_index in_strides in\n    let dst_lin = out_offset + Shape.ravel_index md_index out_strides in\n    Array.unsafe_set dst dst_lin\n      (Float_u.of_int (Int16_u.to_int (Array.unsafe_get src src_lin)))\n  done\n\nlet cast_int16_float32 (src : int16# array) (dst : float32# array) n in_shape\n    in_offset in_strides out_offset out_strides =\n  let md_index = Array.make (Array.length in_shape) 0 in\n  for k = 0 to n - 1 do\n    Shape.unravel_index_into k in_shape md_index;\n    let src_lin = in_offset + Shape.ravel_index md_index in_strides in\n    let dst_lin = out_offset + Shape.ravel_index md_index out_strides in\n    Array.unsafe_set dst dst_lin\n      (Float32_u.of_int (Int16_u.to_int (Array.unsafe_get src src_lin)))\n  done\n\nlet cast_int16_int8 (src : int16# array) (dst : int8# array) n in_shape\n    in_offset in_strides out_offset out_strides =\n  let md_index = Array.make (Array.length in_shape) 0 in\n  for k = 0 to n - 1 do\n    Shape.unravel_index_into k in_shape md_index;\n    let src_lin = in_offset + Shape.ravel_index md_index in_strides in\n    let dst_lin = out_offset + Shape.ravel_index md_index out_strides in\n    Array.unsafe_set dst dst_lin\n      (Int8_u.of_int (Int16_u.to_int (Array.unsafe_get src src_lin)))\n  done\n\nlet cast_int16_int32 (src : int16# array) (dst : int32# array) n in_shape\n    in_offset in_strides out_offset out_strides =\n  let md_index = Array.make (Array.length in_shape) 0 in\n  for k = 0 to n - 1 do\n    Shape.unravel_index_into k in_shape md_index;\n    let src_lin = in_offset + Shape.ravel_index md_index in_strides in\n    let dst_lin = out_offset + Shape.ravel_index md_index out_strides in\n    Array.unsafe_set dst dst_lin\n      
(Int32_u.of_int32\n          (Int32.of_int (Int16_u.to_int (Array.unsafe_get src src_lin))))\n  done\n\nlet cast_int16_int64 (src : int16# array) (dst : int64# array) n in_shape\n    in_offset in_strides out_offset out_strides =\n  let md_index = Array.make (Array.length in_shape) 0 in\n  for k = 0 to n - 1 do\n    Shape.unravel_index_into k in_shape md_index;\n    let src_lin = in_offset + Shape.ravel_index md_index in_strides in\n    let dst_lin = out_offset + Shape.ravel_index md_index out_strides in\n    Array.unsafe_set dst dst_lin\n      (Int64_u.of_int64\n          (Int64.of_int (Int16_u.to_int (Array.unsafe_get src src_lin))))\n  done\n\nlet cast_int16_bool (src : int16# array) (dst : bool array) n in_shape in_offset\n    in_strides out_offset out_strides =\n  let md_index = Array.make (Array.length in_shape) 0 in\n  for k = 0 to n - 1 do\n    Shape.unravel_index_into k in_shape md_index;\n    let src_lin = in_offset + Shape.ravel_index md_index in_strides in\n    let dst_lin = out_offset + Shape.ravel_index md_index out_strides in\n    Array.unsafe_set dst dst_lin\n      (Int16_u.to_int (Array.unsafe_get src src_lin) <> 0)\n  done\n\n(* --- Int32 --- *)\n\nlet cast_int32_float64 (src : int32# array) (dst : float# array) n in_shape\n    in_offset in_strides out_offset out_strides =\n  let md_index = Array.make (Array.length in_shape) 0 in\n  for k = 0 to n - 1 do\n    Shape.unravel_index_into k in_shape md_index;\n    let src_lin = in_offset + Shape.ravel_index md_index in_strides in\n    let dst_lin = out_offset + Shape.ravel_index md_index out_strides in\n    Array.unsafe_set dst dst_lin\n      (Float_u.of_int\n          (Int32.to_int (Int32_u.to_int32 (Array.unsafe_get src src_lin))))\n  done\n\nlet cast_int32_float32 (src : int32# array) (dst : float32# array) n in_shape\n    in_offset in_strides out_offset out_strides =\n  let md_index = Array.make (Array.length in_shape) 0 in\n  for k = 0 to n - 1 do\n    Shape.unravel_index_into k in_shape 
md_index;\n    let src_lin = in_offset + Shape.ravel_index md_index in_strides in\n    let dst_lin = out_offset + Shape.ravel_index md_index out_strides in\n    Array.unsafe_set dst dst_lin\n      (Float32_u.of_int\n          (Int32.to_int (Int32_u.to_int32 (Array.unsafe_get src src_lin))))\n  done\n\nlet cast_int32_int8 (src : int32# array) (dst : int8# array) n in_shape\n    in_offset in_strides out_offset out_strides =\n  let md_index = Array.make (Array.length in_shape) 0 in\n  for k = 0 to n - 1 do\n    Shape.unravel_index_into k in_shape md_index;\n    let src_lin = in_offset + Shape.ravel_index md_index in_strides in\n    let dst_lin = out_offset + Shape.ravel_index md_index out_strides in\n    Array.unsafe_set dst dst_lin\n      (Int8_u.of_int\n          (Int32.to_int (Int32_u.to_int32 (Array.unsafe_get src src_lin))))\n  done\n\nlet cast_int32_int16 (src : int32# array) (dst : int16# array) n in_shape\n    in_offset in_strides out_offset out_strides =\n  let md_index = Array.make (Array.length in_shape) 0 in\n  for k = 0 to n - 1 do\n    Shape.unravel_index_into k in_shape md_index;\n    let src_lin = in_offset + Shape.ravel_index md_index in_strides in\n    let dst_lin = out_offset + Shape.ravel_index md_index out_strides in\n    Array.unsafe_set dst dst_lin\n      (Int16_u.of_int\n          (Int32.to_int (Int32_u.to_int32 (Array.unsafe_get src src_lin))))\n  done\n\nlet cast_int32_int64 (src : int32# array) (dst : int64# array) n in_shape\n    in_offset in_strides out_offset out_strides =\n  let md_index = Array.make (Array.length in_shape) 0 in\n  for k = 0 to n - 1 do\n    Shape.unravel_index_into k in_shape md_index;\n    let src_lin = in_offset + Shape.ravel_index md_index in_strides in\n    let dst_lin = out_offset + Shape.ravel_index md_index out_strides in\n    Array.unsafe_set dst dst_lin\n      (Int64_u.of_int64\n          (Int64.of_int32 (Int32_u.to_int32 (Array.unsafe_get src src_lin))))\n  done\n\nlet cast_int32_bool (src : int32# array) (dst 
: bool array) n in_shape in_offset\n    in_strides out_offset out_strides =\n  let md_index = Array.make (Array.length in_shape) 0 in\n  for k = 0 to n - 1 do\n    Shape.unravel_index_into k in_shape md_index;\n    let src_lin = in_offset + Shape.ravel_index md_index in_strides in\n    let dst_lin = out_offset + Shape.ravel_index md_index out_strides in\n    Array.unsafe_set dst dst_lin\n      (Int32_u.to_int32 (Array.unsafe_get src src_lin) <> 0l)\n  done\n\n(* --- Int64 --- *)\n\nlet cast_int64_float64 (src : int64# array) (dst : float# array) n in_shape\n    in_offset in_strides out_offset out_strides =\n  let md_index = Array.make (Array.length in_shape) 0 in\n  for k = 0 to n - 1 do\n    Shape.unravel_index_into k in_shape md_index;\n    let src_lin = in_offset + Shape.ravel_index md_index in_strides in\n    let dst_lin = out_offset + Shape.ravel_index md_index out_strides in\n    Array.unsafe_set dst dst_lin\n      (Float_u.of_int\n          (Int64.to_int (Int64_u.to_int64 (Array.unsafe_get src src_lin))))\n  done\n\nlet cast_int64_float32 (src : int64# array) (dst : float32# array) n in_shape\n    in_offset in_strides out_offset out_strides =\n  let md_index = Array.make (Array.length in_shape) 0 in\n  for k = 0 to n - 1 do\n    Shape.unravel_index_into k in_shape md_index;\n    let src_lin = in_offset + Shape.ravel_index md_index in_strides in\n    let dst_lin = out_offset + Shape.ravel_index md_index out_strides in\n    Array.unsafe_set dst dst_lin\n      (Float32_u.of_int\n          (Int64.to_int (Int64_u.to_int64 (Array.unsafe_get src src_lin))))\n  done\n\nlet cast_int64_int8 (src : int64# array) (dst : int8# array) n in_shape\n    in_offset in_strides out_offset out_strides =\n  let md_index = Array.make (Array.length in_shape) 0 in\n  for k = 0 to n - 1 do\n    Shape.unravel_index_into k in_shape md_index;\n    let src_lin = in_offset + Shape.ravel_index md_index in_strides in\n    let dst_lin = out_offset + Shape.ravel_index md_index out_strides in\n 
   Array.unsafe_set dst dst_lin\n      (Int8_u.of_int\n          (Int64.to_int (Int64_u.to_int64 (Array.unsafe_get src src_lin))))\n  done\n\nlet cast_int64_int16 (src : int64# array) (dst : int16# array) n in_shape\n    in_offset in_strides out_offset out_strides =\n  let md_index = Array.make (Array.length in_shape) 0 in\n  for k = 0 to n - 1 do\n    Shape.unravel_index_into k in_shape md_index;\n    let src_lin = in_offset + Shape.ravel_index md_index in_strides in\n    let dst_lin = out_offset + Shape.ravel_index md_index out_strides in\n    Array.unsafe_set dst dst_lin\n      (Int16_u.of_int\n          (Int64.to_int (Int64_u.to_int64 (Array.unsafe_get src src_lin))))\n  done\n\nlet cast_int64_int32 (src : int64# array) (dst : int32# array) n in_shape\n    in_offset in_strides out_offset out_strides =\n  let md_index = Array.make (Array.length in_shape) 0 in\n  for k = 0 to n - 1 do\n    Shape.unravel_index_into k in_shape md_index;\n    let src_lin = in_offset + Shape.ravel_index md_index in_strides in\n    let dst_lin = out_offset + Shape.ravel_index md_index out_strides in\n    Array.unsafe_set dst dst_lin\n      (Int32_u.of_int32\n          (Int64.to_int32 (Int64_u.to_int64 (Array.unsafe_get src src_lin))))\n  done\n\nlet cast_int64_bool (src : int64# array) (dst : bool array) n in_shape in_offset\n    in_strides out_offset out_strides =\n  let md_index = Array.make (Array.length in_shape) 0 in\n  for k = 0 to n - 1 do\n    Shape.unravel_index_into k in_shape md_index;\n    let src_lin = in_offset + Shape.ravel_index md_index in_strides in\n    let dst_lin = out_offset + Shape.ravel_index md_index out_strides in\n    Array.unsafe_set dst dst_lin\n      (Int64_u.to_int64 (Array.unsafe_get src src_lin) <> 0L)\n  done\n\n(* --- Bool --- *)\n\nlet cast_bool_float64 (src : bool array) (dst : float# array) n in_shape\n    in_offset in_strides out_offset out_strides =\n  let md_index = Array.make (Array.length in_shape) 0 in\n  for k = 0 to n - 1 do\n    
Shape.unravel_index_into k in_shape md_index;\n    let src_lin = in_offset + Shape.ravel_index md_index in_strides in\n    let dst_lin = out_offset + Shape.ravel_index md_index out_strides in\n    Array.unsafe_set dst dst_lin\n      (Float_u.of_float (if Array.unsafe_get src src_lin then 1.0 else 0.0))\n  done\n\nlet cast_bool_float32 (src : bool array) (dst : float32# array) n in_shape\n    in_offset in_strides out_offset out_strides =\n  let md_index = Array.make (Array.length in_shape) 0 in\n  for k = 0 to n - 1 do\n    Shape.unravel_index_into k in_shape md_index;\n    let src_lin = in_offset + Shape.ravel_index md_index in_strides in\n    let dst_lin = out_offset + Shape.ravel_index md_index out_strides in\n    Array.unsafe_set dst dst_lin\n      (Float32_u.of_int (if Array.unsafe_get src src_lin then 1 else 0))\n  done\n\nlet cast_bool_int8 (src : bool array) (dst : int8# array) n in_shape in_offset\n    in_strides out_offset out_strides =\n  let md_index = Array.make (Array.length in_shape) 0 in\n  for k = 0 to n - 1 do\n    Shape.unravel_index_into k in_shape md_index;\n    let src_lin = in_offset + Shape.ravel_index md_index in_strides in\n    let dst_lin = out_offset + Shape.ravel_index md_index out_strides in\n    Array.unsafe_set dst dst_lin\n      (Int8_u.of_int (if Array.unsafe_get src src_lin then 1 else 0))\n  done\n\nlet cast_bool_int16 (src : bool array) (dst : int16# array) n in_shape in_offset\n    in_strides out_offset out_strides =\n  let md_index = Array.make (Array.length in_shape) 0 in\n  for k = 0 to n - 1 do\n    Shape.unravel_index_into k in_shape md_index;\n    let src_lin = in_offset + Shape.ravel_index md_index in_strides in\n    let dst_lin = out_offset + Shape.ravel_index md_index out_strides in\n    Array.unsafe_set dst dst_lin\n      (Int16_u.of_int (if Array.unsafe_get src src_lin then 1 else 0))\n  done\n\nlet cast_bool_int32 (src : bool array) (dst : int32# array) n in_shape in_offset\n    in_strides out_offset out_strides =\n  
let md_index = Array.make (Array.length in_shape) 0 in\n  for k = 0 to n - 1 do\n    Shape.unravel_index_into k in_shape md_index;\n    let src_lin = in_offset + Shape.ravel_index md_index in_strides in\n    let dst_lin = out_offset + Shape.ravel_index md_index out_strides in\n    Array.unsafe_set dst dst_lin\n      (Int32_u.of_int32 (if Array.unsafe_get src src_lin then 1l else 0l))\n  done\n\nlet cast_bool_int64 (src : bool array) (dst : int64# array) n in_shape in_offset\n    in_strides out_offset out_strides =\n  let md_index = Array.make (Array.length in_shape) 0 in\n  for k = 0 to n - 1 do\n    Shape.unravel_index_into k in_shape md_index;\n    let src_lin = in_offset + Shape.ravel_index md_index in_strides in\n    let dst_lin = out_offset + Shape.ravel_index md_index out_strides in\n    Array.unsafe_set dst dst_lin\n      (Int64_u.of_int64 (if Array.unsafe_get src src_lin then 1L else 0L))\n  done"
  },
  {
    "path": "packages/nx-oxcaml/lib/op_cat.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Import\n\nlet cat_float64 (srcs : (float# array * View.t) list) (dst : float# array)\n    (rank : int) (axis : int) (out_offset : int) (out_strides : int array) =\n  let axis_base = ref 0 in\n  List.iter\n    (fun (src, view) ->\n      let in_shape = shape view in\n      let in_offset = View.offset view in\n      let in_strides = View.strides view in\n      let n = numel view in\n      let md_index = Array.make rank 0 in\n      let dst_index = Array.make rank 0 in\n      for k = 0 to n - 1 do\n        Shape.unravel_index_into k in_shape md_index;\n        Array.blit md_index 0 dst_index 0 rank;\n        dst_index.(axis) <- dst_index.(axis) + !axis_base;\n        let src_lin = in_offset + Shape.ravel_index md_index in_strides in\n        let dst_lin = out_offset + Shape.ravel_index dst_index out_strides in\n        Array.unsafe_set dst dst_lin (Array.unsafe_get src src_lin)\n      done;\n      axis_base := !axis_base + in_shape.(axis))\n    srcs\n\nlet cat_float32 (srcs : (float32# array * View.t) list) (dst : float32# array)\n    (rank : int) (axis : int) (out_offset : int) (out_strides : int array) =\n  let axis_base = ref 0 in\n  List.iter\n    (fun (src, view) ->\n      let in_shape = shape view in\n      let in_offset = View.offset view in\n      let in_strides = View.strides view in\n      let n = numel view in\n      let md_index = Array.make rank 0 in\n      let dst_index = Array.make rank 0 in\n      for k = 0 to n - 1 do\n        Shape.unravel_index_into k in_shape md_index;\n        Array.blit md_index 0 dst_index 0 rank;\n        dst_index.(axis) <- dst_index.(axis) + !axis_base;\n        let src_lin = in_offset + Shape.ravel_index md_index in_strides in\n        let dst_lin = 
out_offset + Shape.ravel_index dst_index out_strides in\n        Array.unsafe_set dst dst_lin (Array.unsafe_get src src_lin)\n      done;\n      axis_base := !axis_base + in_shape.(axis))\n    srcs\n\nlet cat_int8 (srcs : (int8# array * View.t) list) (dst : int8# array)\n    (rank : int) (axis : int) (out_offset : int) (out_strides : int array) =\n  let axis_base = ref 0 in\n  List.iter\n    (fun (src, view) ->\n      let in_shape = shape view in\n      let in_offset = View.offset view in\n      let in_strides = View.strides view in\n      let n = numel view in\n      let md_index = Array.make rank 0 in\n      let dst_index = Array.make rank 0 in\n      for k = 0 to n - 1 do\n        Shape.unravel_index_into k in_shape md_index;\n        Array.blit md_index 0 dst_index 0 rank;\n        dst_index.(axis) <- dst_index.(axis) + !axis_base;\n        let src_lin = in_offset + Shape.ravel_index md_index in_strides in\n        let dst_lin = out_offset + Shape.ravel_index dst_index out_strides in\n        Array.unsafe_set dst dst_lin (Array.unsafe_get src src_lin)\n      done;\n      axis_base := !axis_base + in_shape.(axis))\n    srcs\n\nlet cat_int16 (srcs : (int16# array * View.t) list) (dst : int16# array)\n    (rank : int) (axis : int) (out_offset : int) (out_strides : int array) =\n  let axis_base = ref 0 in\n  List.iter\n    (fun (src, view) ->\n      let in_shape = shape view in\n      let in_offset = View.offset view in\n      let in_strides = View.strides view in\n      let n = numel view in\n      let md_index = Array.make rank 0 in\n      let dst_index = Array.make rank 0 in\n      for k = 0 to n - 1 do\n        Shape.unravel_index_into k in_shape md_index;\n        Array.blit md_index 0 dst_index 0 rank;\n        dst_index.(axis) <- dst_index.(axis) + !axis_base;\n        let src_lin = in_offset + Shape.ravel_index md_index in_strides in\n        let dst_lin = out_offset + Shape.ravel_index dst_index out_strides in\n        Array.unsafe_set dst dst_lin 
(Array.unsafe_get src src_lin)\n      done;\n      axis_base := !axis_base + in_shape.(axis))\n    srcs\n\nlet cat_int32 (srcs : (int32# array * View.t) list) (dst : int32# array)\n    (rank : int) (axis : int) (out_offset : int) (out_strides : int array) =\n  let axis_base = ref 0 in\n  List.iter\n    (fun (src, view) ->\n      let in_shape = shape view in\n      let in_offset = View.offset view in\n      let in_strides = View.strides view in\n      let n = numel view in\n      let md_index = Array.make rank 0 in\n      let dst_index = Array.make rank 0 in\n      for k = 0 to n - 1 do\n        Shape.unravel_index_into k in_shape md_index;\n        Array.blit md_index 0 dst_index 0 rank;\n        dst_index.(axis) <- dst_index.(axis) + !axis_base;\n        let src_lin = in_offset + Shape.ravel_index md_index in_strides in\n        let dst_lin = out_offset + Shape.ravel_index dst_index out_strides in\n        Array.unsafe_set dst dst_lin (Array.unsafe_get src src_lin)\n      done;\n      axis_base := !axis_base + in_shape.(axis))\n    srcs\n\nlet cat_int64 (srcs : (int64# array * View.t) list) (dst : int64# array)\n    (rank : int) (axis : int) (out_offset : int) (out_strides : int array) =\n  let axis_base = ref 0 in\n  List.iter\n    (fun (src, view) ->\n      let in_shape = shape view in\n      let in_offset = View.offset view in\n      let in_strides = View.strides view in\n      let n = numel view in\n      let md_index = Array.make rank 0 in\n      let dst_index = Array.make rank 0 in\n      for k = 0 to n - 1 do\n        Shape.unravel_index_into k in_shape md_index;\n        Array.blit md_index 0 dst_index 0 rank;\n        dst_index.(axis) <- dst_index.(axis) + !axis_base;\n        let src_lin = in_offset + Shape.ravel_index md_index in_strides in\n        let dst_lin = out_offset + Shape.ravel_index dst_index out_strides in\n        Array.unsafe_set dst dst_lin (Array.unsafe_get src src_lin)\n      done;\n      axis_base := !axis_base + in_shape.(axis))\n    
srcs\n\nlet cat_bool (srcs : (bool array * View.t) list) (dst : bool array) (rank : int)\n    (axis : int) (out_offset : int) (out_strides : int array) =\n  let axis_base = ref 0 in\n  List.iter\n    (fun (src, view) ->\n      let in_shape = shape view in\n      let in_offset = View.offset view in\n      let in_strides = View.strides view in\n      let n = numel view in\n      let md_index = Array.make rank 0 in\n      let dst_index = Array.make rank 0 in\n      for k = 0 to n - 1 do\n        Shape.unravel_index_into k in_shape md_index;\n        Array.blit md_index 0 dst_index 0 rank;\n        dst_index.(axis) <- dst_index.(axis) + !axis_base;\n        let src_lin = in_offset + Shape.ravel_index md_index in_strides in\n        let dst_lin = out_offset + Shape.ravel_index dst_index out_strides in\n        Array.unsafe_set dst dst_lin (Array.unsafe_get src src_lin)\n      done;\n      axis_base := !axis_base + in_shape.(axis))\n    srcs\n"
  },
  {
    "path": "packages/nx-oxcaml/lib/op_fold.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Import\n\nlet is_identity_window ~spatial_ndim ~kernel_elems ~kernel_size ~stride ~dilation\n    ~padding =\n  if kernel_elems <> 1 then false\n  else\n    let ok = ref true in\n    for d = 0 to spatial_ndim - 1 do\n      let pad_before, pad_after = padding.(d) in\n      if\n        kernel_size.(d) <> 1 || stride.(d) <> 1 || dilation.(d) <> 1\n        || pad_before <> 0 || pad_after <> 0\n      then ok := false\n    done;\n    !ok\n\nlet is_c_contiguous_spatial_tail spatial strides =\n  let expected = ref 1 in\n  let ok = ref true in\n  for d = Array.length spatial - 1 downto 0 do\n    if strides.(d + 2) <> !expected then ok := false;\n    expected := !expected * spatial.(d)\n  done;\n  !ok\n\nlet fold_float64 in_arr out_arr ~n_start ~n_end ~channels ~num_blocks ~kernel_elems\n    ~spatial_ndim ~blocks_shape ~kernel_size ~output_size ~stride ~dilation\n    ~padding ~in_offset ~in_strides ~out_offset ~out_strides =\n  if\n    is_identity_window ~spatial_ndim ~kernel_elems ~kernel_size ~stride\n      ~dilation ~padding\n    && num_blocks = Shape.numel output_size\n    && in_strides.(2) = 1\n    && is_c_contiguous_spatial_tail output_size out_strides\n  then (\n    for n_idx = n_start to n_end - 1 do\n      for c_idx = 0 to channels - 1 do\n        let src_base =\n          in_offset + (n_idx * in_strides.(0)) + (c_idx * in_strides.(1))\n        in\n        let dst_base =\n          out_offset + (n_idx * out_strides.(0)) + (c_idx * out_strides.(1))\n        in\n        if in_strides.(2) = 1 then (\n          let i = ref 0 in\n          let n = num_blocks in\n          let n8 = n - 7 in\n          while !i < n8 do\n            let idx = !i in\n            let a0 = Float64x2.Array.unsafe_get in_arr 
~idx:(src_base + idx) in\n            let a1 =\n              Float64x2.Array.unsafe_get in_arr ~idx:(src_base + idx + 2)\n            in\n            let a2 =\n              Float64x2.Array.unsafe_get in_arr ~idx:(src_base + idx + 4)\n            in\n            let a3 =\n              Float64x2.Array.unsafe_get in_arr ~idx:(src_base + idx + 6)\n            in\n            Float64x2.Array.unsafe_set out_arr ~idx:(dst_base + idx) a0;\n            Float64x2.Array.unsafe_set out_arr ~idx:(dst_base + idx + 2) a1;\n            Float64x2.Array.unsafe_set out_arr ~idx:(dst_base + idx + 4) a2;\n            Float64x2.Array.unsafe_set out_arr ~idx:(dst_base + idx + 6) a3;\n            i := idx + 8\n          done;\n          let n2 = n - 1 in\n          while !i < n2 do\n            let idx = !i in\n            let a = Float64x2.Array.unsafe_get in_arr ~idx:(src_base + idx) in\n            Float64x2.Array.unsafe_set out_arr ~idx:(dst_base + idx) a;\n            i := idx + 2\n          done;\n          while !i < n do\n            let idx = !i in\n            Array.unsafe_set out_arr (dst_base + idx)\n              (Array.unsafe_get in_arr (src_base + idx));\n            incr i\n          done)\n        else\n          for b_idx = 0 to num_blocks - 1 do\n            let src_lin = src_base + (b_idx * in_strides.(2)) in\n            let dst_lin = dst_base + (b_idx * out_strides.(2)) in\n            Array.unsafe_set out_arr dst_lin (Array.unsafe_get in_arr src_lin)\n          done\n      done\n    done)\n  else (\n  let block_coords = Array.make spatial_ndim 0 in\n  let kernel_coords = Array.make spatial_ndim 0 in\n  let out_spatial = Array.make spatial_ndim 0 in\n  let zero = Float_u.of_float 0.0 in\n  let batch_numel = Array.fold_left ( * ) channels output_size in\n  let zero_start = n_start * batch_numel in\n  let zero_end = (n_end * batch_numel) - 1 in\n  for i = zero_start to zero_end do\n    Array.unsafe_set out_arr i zero\n  done;\n  for n_idx = n_start to n_end - 1 do\n 
   for b_idx = 0 to num_blocks - 1 do\n      Shape.unravel_index_into b_idx blocks_shape block_coords;\n      for c_idx = 0 to channels - 1 do\n        for k_idx = 0 to kernel_elems - 1 do\n          Shape.unravel_index_into k_idx kernel_size kernel_coords;\n          let valid = ref true in\n          for d = 0 to spatial_ndim - 1 do\n            let pad_before, _ = padding.(d) in\n            let pos =\n              (block_coords.(d) * stride.(d))\n              - pad_before\n              + (kernel_coords.(d) * dilation.(d))\n            in\n            out_spatial.(d) <- pos;\n            if pos < 0 || pos >= output_size.(d) then valid := false\n          done;\n          if !valid then (\n            let src_ch = (c_idx * kernel_elems) + k_idx in\n            let src_lin =\n              in_offset\n              + (n_idx * in_strides.(0))\n              + (src_ch * in_strides.(1))\n              + (b_idx * in_strides.(2))\n            in\n            let dst_lin =\n              ref (out_offset + (n_idx * out_strides.(0)) + (c_idx * out_strides.(1)))\n            in\n            for d = 0 to spatial_ndim - 1 do\n              dst_lin := !dst_lin + (out_spatial.(d) * out_strides.(d + 2))\n            done;\n            let prev = Array.unsafe_get out_arr !dst_lin in\n            let v = Array.unsafe_get in_arr src_lin in\n            Array.unsafe_set out_arr !dst_lin (Float_u.add prev v))\n        done\n      done\n    done\n  done)\n\nlet fold_float32 in_arr out_arr ~n_start ~n_end ~channels ~num_blocks ~kernel_elems\n    ~spatial_ndim ~blocks_shape ~kernel_size ~output_size ~stride ~dilation\n    ~padding ~in_offset ~in_strides ~out_offset ~out_strides =\n  if\n    is_identity_window ~spatial_ndim ~kernel_elems ~kernel_size ~stride\n      ~dilation ~padding\n    && num_blocks = Shape.numel output_size\n    && in_strides.(2) = 1\n    && is_c_contiguous_spatial_tail output_size out_strides\n  then (\n    for n_idx = n_start to n_end - 1 do\n      for c_idx = 0 
to channels - 1 do\n        let src_base =\n          in_offset + (n_idx * in_strides.(0)) + (c_idx * in_strides.(1))\n        in\n        let dst_base =\n          out_offset + (n_idx * out_strides.(0)) + (c_idx * out_strides.(1))\n        in\n        if in_strides.(2) = 1 then (\n          let i = ref 0 in\n          let n = num_blocks in\n          let n16 = n - 15 in\n          while !i < n16 do\n            let idx = !i in\n            let a0 = Float32x4.Array.unsafe_get in_arr ~idx:(src_base + idx) in\n            let a1 =\n              Float32x4.Array.unsafe_get in_arr ~idx:(src_base + idx + 4)\n            in\n            let a2 =\n              Float32x4.Array.unsafe_get in_arr ~idx:(src_base + idx + 8)\n            in\n            let a3 =\n              Float32x4.Array.unsafe_get in_arr ~idx:(src_base + idx + 12)\n            in\n            Float32x4.Array.unsafe_set out_arr ~idx:(dst_base + idx) a0;\n            Float32x4.Array.unsafe_set out_arr ~idx:(dst_base + idx + 4) a1;\n            Float32x4.Array.unsafe_set out_arr ~idx:(dst_base + idx + 8) a2;\n            Float32x4.Array.unsafe_set out_arr ~idx:(dst_base + idx + 12) a3;\n            i := idx + 16\n          done;\n          let n4 = n - 3 in\n          while !i < n4 do\n            let idx = !i in\n            let a = Float32x4.Array.unsafe_get in_arr ~idx:(src_base + idx) in\n            Float32x4.Array.unsafe_set out_arr ~idx:(dst_base + idx) a;\n            i := idx + 4\n          done;\n          while !i < n do\n            let idx = !i in\n            Array.unsafe_set out_arr (dst_base + idx)\n              (Array.unsafe_get in_arr (src_base + idx));\n            incr i\n          done)\n        else\n          for b_idx = 0 to num_blocks - 1 do\n            let src_lin = src_base + (b_idx * in_strides.(2)) in\n            let dst_lin = dst_base + (b_idx * out_strides.(2)) in\n            Array.unsafe_set out_arr dst_lin (Array.unsafe_get in_arr src_lin)\n          done\n      done\n   
 done)\n  else (\n  let block_coords = Array.make spatial_ndim 0 in\n  let kernel_coords = Array.make spatial_ndim 0 in\n  let out_spatial = Array.make spatial_ndim 0 in\n  let zero = Float32_u.of_int 0 in\n  let batch_numel = Array.fold_left ( * ) channels output_size in\n  let zero_start = n_start * batch_numel in\n  let zero_end = (n_end * batch_numel) - 1 in\n  for i = zero_start to zero_end do\n    Array.unsafe_set out_arr i zero\n  done;\n  for n_idx = n_start to n_end - 1 do\n    for b_idx = 0 to num_blocks - 1 do\n      Shape.unravel_index_into b_idx blocks_shape block_coords;\n      for c_idx = 0 to channels - 1 do\n        for k_idx = 0 to kernel_elems - 1 do\n          Shape.unravel_index_into k_idx kernel_size kernel_coords;\n          let valid = ref true in\n          for d = 0 to spatial_ndim - 1 do\n            let pad_before, _ = padding.(d) in\n            let pos =\n              (block_coords.(d) * stride.(d))\n              - pad_before\n              + (kernel_coords.(d) * dilation.(d))\n            in\n            out_spatial.(d) <- pos;\n            if pos < 0 || pos >= output_size.(d) then valid := false\n          done;\n          if !valid then (\n            let src_ch = (c_idx * kernel_elems) + k_idx in\n            let src_lin =\n              in_offset\n              + (n_idx * in_strides.(0))\n              + (src_ch * in_strides.(1))\n              + (b_idx * in_strides.(2))\n            in\n            let dst_lin =\n              ref (out_offset + (n_idx * out_strides.(0)) + (c_idx * out_strides.(1)))\n            in\n            for d = 0 to spatial_ndim - 1 do\n              dst_lin := !dst_lin + (out_spatial.(d) * out_strides.(d + 2))\n            done;\n            let prev = Array.unsafe_get out_arr !dst_lin in\n            let v = Array.unsafe_get in_arr src_lin in\n            Array.unsafe_set out_arr !dst_lin (Float32_u.add prev v))\n        done\n      done\n    done\n  done)\n\nlet fold_int8 in_arr out_arr ~n_start 
~n_end ~channels ~num_blocks ~kernel_elems\n    ~spatial_ndim ~blocks_shape ~kernel_size ~output_size ~stride ~dilation\n    ~padding ~in_offset ~in_strides ~out_offset ~out_strides =\n  let block_coords = Array.make spatial_ndim 0 in\n  let kernel_coords = Array.make spatial_ndim 0 in\n  let out_spatial = Array.make spatial_ndim 0 in\n  let zero = Int8_u.of_int 0 in\n  let batch_numel = Array.fold_left ( * ) channels output_size in\n  let zero_start = n_start * batch_numel in\n  let zero_end = (n_end * batch_numel) - 1 in\n  for i = zero_start to zero_end do\n    Array.unsafe_set out_arr i zero\n  done;\n  for n_idx = n_start to n_end - 1 do\n    for b_idx = 0 to num_blocks - 1 do\n      Shape.unravel_index_into b_idx blocks_shape block_coords;\n      for c_idx = 0 to channels - 1 do\n        for k_idx = 0 to kernel_elems - 1 do\n          Shape.unravel_index_into k_idx kernel_size kernel_coords;\n          let valid = ref true in\n          for d = 0 to spatial_ndim - 1 do\n            let pad_before, _ = padding.(d) in\n            let pos =\n              (block_coords.(d) * stride.(d))\n              - pad_before\n              + (kernel_coords.(d) * dilation.(d))\n            in\n            out_spatial.(d) <- pos;\n            if pos < 0 || pos >= output_size.(d) then valid := false\n          done;\n          if !valid then (\n            let src_ch = (c_idx * kernel_elems) + k_idx in\n            let src_lin =\n              in_offset\n              + (n_idx * in_strides.(0))\n              + (src_ch * in_strides.(1))\n              + (b_idx * in_strides.(2))\n            in\n            let dst_lin =\n              ref (out_offset + (n_idx * out_strides.(0)) + (c_idx * out_strides.(1)))\n            in\n            for d = 0 to spatial_ndim - 1 do\n              dst_lin := !dst_lin + (out_spatial.(d) * out_strides.(d + 2))\n            done;\n            let prev = Array.unsafe_get out_arr !dst_lin in\n            let v = Array.unsafe_get in_arr src_lin 
in\n            Array.unsafe_set out_arr !dst_lin (Int8_u.add prev v))\n        done\n      done\n    done\n  done\n\nlet fold_int16 in_arr out_arr ~n_start ~n_end ~channels ~num_blocks ~kernel_elems\n    ~spatial_ndim ~blocks_shape ~kernel_size ~output_size ~stride ~dilation\n    ~padding ~in_offset ~in_strides ~out_offset ~out_strides =\n  let block_coords = Array.make spatial_ndim 0 in\n  let kernel_coords = Array.make spatial_ndim 0 in\n  let out_spatial = Array.make spatial_ndim 0 in\n  let zero = Int16_u.of_int 0 in\n  let batch_numel = Array.fold_left ( * ) channels output_size in\n  let zero_start = n_start * batch_numel in\n  let zero_end = (n_end * batch_numel) - 1 in\n  for i = zero_start to zero_end do\n    Array.unsafe_set out_arr i zero\n  done;\n  for n_idx = n_start to n_end - 1 do\n    for b_idx = 0 to num_blocks - 1 do\n      Shape.unravel_index_into b_idx blocks_shape block_coords;\n      for c_idx = 0 to channels - 1 do\n        for k_idx = 0 to kernel_elems - 1 do\n          Shape.unravel_index_into k_idx kernel_size kernel_coords;\n          let valid = ref true in\n          for d = 0 to spatial_ndim - 1 do\n            let pad_before, _ = padding.(d) in\n            let pos =\n              (block_coords.(d) * stride.(d))\n              - pad_before\n              + (kernel_coords.(d) * dilation.(d))\n            in\n            out_spatial.(d) <- pos;\n            if pos < 0 || pos >= output_size.(d) then valid := false\n          done;\n          if !valid then (\n            let src_ch = (c_idx * kernel_elems) + k_idx in\n            let src_lin =\n              in_offset\n              + (n_idx * in_strides.(0))\n              + (src_ch * in_strides.(1))\n              + (b_idx * in_strides.(2))\n            in\n            let dst_lin =\n              ref (out_offset + (n_idx * out_strides.(0)) + (c_idx * out_strides.(1)))\n            in\n            for d = 0 to spatial_ndim - 1 do\n              dst_lin := !dst_lin + (out_spatial.(d) 
* out_strides.(d + 2))\n            done;\n            let prev = Array.unsafe_get out_arr !dst_lin in\n            let v = Array.unsafe_get in_arr src_lin in\n            Array.unsafe_set out_arr !dst_lin (Int16_u.add prev v))\n        done\n      done\n    done\n  done\n\nlet fold_int32 in_arr out_arr ~n_start ~n_end ~channels ~num_blocks ~kernel_elems\n    ~spatial_ndim ~blocks_shape ~kernel_size ~output_size ~stride ~dilation\n    ~padding ~in_offset ~in_strides ~out_offset ~out_strides =\n  if\n    is_identity_window ~spatial_ndim ~kernel_elems ~kernel_size ~stride\n      ~dilation ~padding\n    && num_blocks = Shape.numel output_size\n    && in_strides.(2) = 1\n    && is_c_contiguous_spatial_tail output_size out_strides\n  then (\n    for n_idx = n_start to n_end - 1 do\n      for c_idx = 0 to channels - 1 do\n        let src_base =\n          in_offset + (n_idx * in_strides.(0)) + (c_idx * in_strides.(1))\n        in\n        let dst_base =\n          out_offset + (n_idx * out_strides.(0)) + (c_idx * out_strides.(1))\n        in\n        let i = ref 0 in\n        let n = num_blocks in\n        let n16 = n - 15 in\n        while !i < n16 do\n          let idx = !i in\n          let a0 = Int32x4.Array.unsafe_get in_arr ~idx:(src_base + idx) in\n          let a1 = Int32x4.Array.unsafe_get in_arr ~idx:(src_base + idx + 4) in\n          let a2 = Int32x4.Array.unsafe_get in_arr ~idx:(src_base + idx + 8) in\n          let a3 = Int32x4.Array.unsafe_get in_arr ~idx:(src_base + idx + 12) in\n          Int32x4.Array.unsafe_set out_arr ~idx:(dst_base + idx) a0;\n          Int32x4.Array.unsafe_set out_arr ~idx:(dst_base + idx + 4) a1;\n          Int32x4.Array.unsafe_set out_arr ~idx:(dst_base + idx + 8) a2;\n          Int32x4.Array.unsafe_set out_arr ~idx:(dst_base + idx + 12) a3;\n          i := idx + 16\n        done;\n        let n4 = n - 3 in\n        while !i < n4 do\n          let idx = !i in\n          let a = Int32x4.Array.unsafe_get in_arr ~idx:(src_base + idx) 
in\n          Int32x4.Array.unsafe_set out_arr ~idx:(dst_base + idx) a;\n          i := idx + 4\n        done;\n        while !i < n do\n          let idx = !i in\n          Array.unsafe_set out_arr (dst_base + idx)\n            (Array.unsafe_get in_arr (src_base + idx));\n          incr i\n        done\n      done\n    done)\n  else (\n  let block_coords = Array.make spatial_ndim 0 in\n  let kernel_coords = Array.make spatial_ndim 0 in\n  let out_spatial = Array.make spatial_ndim 0 in\n  let zero = Int32_u.of_int32 0l in\n  let batch_numel = Array.fold_left ( * ) channels output_size in\n  let zero_start = n_start * batch_numel in\n  let zero_end = (n_end * batch_numel) - 1 in\n  for i = zero_start to zero_end do\n    Array.unsafe_set out_arr i zero\n  done;\n  for n_idx = n_start to n_end - 1 do\n    for b_idx = 0 to num_blocks - 1 do\n      Shape.unravel_index_into b_idx blocks_shape block_coords;\n      for c_idx = 0 to channels - 1 do\n        for k_idx = 0 to kernel_elems - 1 do\n          Shape.unravel_index_into k_idx kernel_size kernel_coords;\n          let valid = ref true in\n          for d = 0 to spatial_ndim - 1 do\n            let pad_before, _ = padding.(d) in\n            let pos =\n              (block_coords.(d) * stride.(d))\n              - pad_before\n              + (kernel_coords.(d) * dilation.(d))\n            in\n            out_spatial.(d) <- pos;\n            if pos < 0 || pos >= output_size.(d) then valid := false\n          done;\n          if !valid then (\n            let src_ch = (c_idx * kernel_elems) + k_idx in\n            let src_lin =\n              in_offset\n              + (n_idx * in_strides.(0))\n              + (src_ch * in_strides.(1))\n              + (b_idx * in_strides.(2))\n            in\n            let dst_lin =\n              ref (out_offset + (n_idx * out_strides.(0)) + (c_idx * out_strides.(1)))\n            in\n            for d = 0 to spatial_ndim - 1 do\n              dst_lin := !dst_lin + (out_spatial.(d) 
* out_strides.(d + 2))\n            done;\n            let prev = Array.unsafe_get out_arr !dst_lin in\n            let v = Array.unsafe_get in_arr src_lin in\n            Array.unsafe_set out_arr !dst_lin (Int32_u.add prev v))\n        done\n      done\n    done\n  done)\n\nlet fold_int64 in_arr out_arr ~n_start ~n_end ~channels ~num_blocks ~kernel_elems\n    ~spatial_ndim ~blocks_shape ~kernel_size ~output_size ~stride ~dilation\n    ~padding ~in_offset ~in_strides ~out_offset ~out_strides =\n  if\n    is_identity_window ~spatial_ndim ~kernel_elems ~kernel_size ~stride\n      ~dilation ~padding\n    && num_blocks = Shape.numel output_size\n    && in_strides.(2) = 1\n    && is_c_contiguous_spatial_tail output_size out_strides\n  then (\n    for n_idx = n_start to n_end - 1 do\n      for c_idx = 0 to channels - 1 do\n        let src_base =\n          in_offset + (n_idx * in_strides.(0)) + (c_idx * in_strides.(1))\n        in\n        let dst_base =\n          out_offset + (n_idx * out_strides.(0)) + (c_idx * out_strides.(1))\n        in\n        let i = ref 0 in\n        let n = num_blocks in\n        let n8 = n - 7 in\n        while !i < n8 do\n          let idx = !i in\n          let a0 = Int64x2.Array.unsafe_get in_arr ~idx:(src_base + idx) in\n          let a1 = Int64x2.Array.unsafe_get in_arr ~idx:(src_base + idx + 2) in\n          let a2 = Int64x2.Array.unsafe_get in_arr ~idx:(src_base + idx + 4) in\n          let a3 = Int64x2.Array.unsafe_get in_arr ~idx:(src_base + idx + 6) in\n          Int64x2.Array.unsafe_set out_arr ~idx:(dst_base + idx) a0;\n          Int64x2.Array.unsafe_set out_arr ~idx:(dst_base + idx + 2) a1;\n          Int64x2.Array.unsafe_set out_arr ~idx:(dst_base + idx + 4) a2;\n          Int64x2.Array.unsafe_set out_arr ~idx:(dst_base + idx + 6) a3;\n          i := idx + 8\n        done;\n        let n2 = n - 1 in\n        while !i < n2 do\n          let idx = !i in\n          let a = Int64x2.Array.unsafe_get in_arr ~idx:(src_base + idx) in\n  
        Int64x2.Array.unsafe_set out_arr ~idx:(dst_base + idx) a;\n          i := idx + 2\n        done;\n        while !i < n do\n          let idx = !i in\n          Array.unsafe_set out_arr (dst_base + idx)\n            (Array.unsafe_get in_arr (src_base + idx));\n          incr i\n        done\n      done\n    done)\n  else (\n  let block_coords = Array.make spatial_ndim 0 in\n  let kernel_coords = Array.make spatial_ndim 0 in\n  let out_spatial = Array.make spatial_ndim 0 in\n  let zero = Int64_u.of_int64 0L in\n  let batch_numel = Array.fold_left ( * ) channels output_size in\n  let zero_start = n_start * batch_numel in\n  let zero_end = (n_end * batch_numel) - 1 in\n  for i = zero_start to zero_end do\n    Array.unsafe_set out_arr i zero\n  done;\n  for n_idx = n_start to n_end - 1 do\n    for b_idx = 0 to num_blocks - 1 do\n      Shape.unravel_index_into b_idx blocks_shape block_coords;\n      for c_idx = 0 to channels - 1 do\n        for k_idx = 0 to kernel_elems - 1 do\n          Shape.unravel_index_into k_idx kernel_size kernel_coords;\n          let valid = ref true in\n          for d = 0 to spatial_ndim - 1 do\n            let pad_before, _ = padding.(d) in\n            let pos =\n              (block_coords.(d) * stride.(d))\n              - pad_before\n              + (kernel_coords.(d) * dilation.(d))\n            in\n            out_spatial.(d) <- pos;\n            if pos < 0 || pos >= output_size.(d) then valid := false\n          done;\n          if !valid then (\n            let src_ch = (c_idx * kernel_elems) + k_idx in\n            let src_lin =\n              in_offset\n              + (n_idx * in_strides.(0))\n              + (src_ch * in_strides.(1))\n              + (b_idx * in_strides.(2))\n            in\n            let dst_lin =\n              ref (out_offset + (n_idx * out_strides.(0)) + (c_idx * out_strides.(1)))\n            in\n            for d = 0 to spatial_ndim - 1 do\n              dst_lin := !dst_lin + (out_spatial.(d) * 
out_strides.(d + 2))\n            done;\n            let prev = Array.unsafe_get out_arr !dst_lin in\n            let v = Array.unsafe_get in_arr src_lin in\n            Array.unsafe_set out_arr !dst_lin (Int64_u.add prev v))\n        done\n      done\n    done\n  done)\n"
  },
  {
    "path": "packages/nx-oxcaml/lib/op_gather.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Import\n\nlet init_state ishape idx_str out_str dshape data_strides axis start_idx\n    idx_offset out_offset data_offset =\n  let rank = Array.length dshape in\n  let md_index = Array.make rank 0 in\n  if start_idx <> 0 then Shape.unravel_index_into start_idx ishape md_index;\n  let idx_lin = ref (idx_offset + Shape.ravel_index md_index idx_str) in\n  let out_lin = ref (out_offset + Shape.ravel_index md_index out_str) in\n  let src_base = ref data_offset in\n  for d = 0 to rank - 1 do\n    if d <> axis then\n      src_base :=\n        !src_base\n        + (Array.unsafe_get md_index d * Array.unsafe_get data_strides d)\n  done;\n  (md_index, idx_lin, out_lin, src_base)\n\nlet advance_state md_index ishape idx_str out_str data_strides axis idx_lin\n    out_lin src_base =\n  let d = ref (Array.length ishape - 1) in\n  while !d >= 0 do\n    let dim = !d in\n    let cur = Array.unsafe_get md_index dim in\n    let next = cur + 1 in\n    if next < Array.unsafe_get ishape dim then (\n      Array.unsafe_set md_index dim next;\n      idx_lin := !idx_lin + Array.unsafe_get idx_str dim;\n      out_lin := !out_lin + Array.unsafe_get out_str dim;\n      if dim <> axis then\n        src_base := !src_base + Array.unsafe_get data_strides dim;\n      d := -1)\n    else (\n      Array.unsafe_set md_index dim 0;\n      idx_lin := !idx_lin - (cur * Array.unsafe_get idx_str dim);\n      out_lin := !out_lin - (cur * Array.unsafe_get out_str dim);\n      if dim <> axis then\n        src_base := !src_base - (cur * Array.unsafe_get data_strides dim);\n      d := dim - 1)\n  done\n\nlet gather_float64 (src : float# array) (dst : float# array) ishape dshape axis\n    (idx_arr : int32# array) data_offset data_strides 
idx_offset idx_str\n    out_offset out_strides start_idx end_idx =\n  if start_idx >= end_idx then ()\n  else (\n    let rank = Array.length dshape in\n    let axis_stride = Array.unsafe_get data_strides axis in\n    if\n      rank = 1 && axis = 0\n      && Array.unsafe_get data_strides 0 = 1\n      && Array.unsafe_get idx_str 0 = 1\n      && Array.unsafe_get out_strides 0 = 1\n    then (\n      let i = ref start_idx in\n      let n2 = end_idx - 1 in\n      while !i < n2 do\n        let k0 = !i in\n        let k1 = k0 + 1 in\n        let idx0 =\n          Int32.to_int (Int32_u.to_int32 (Array.unsafe_get idx_arr (idx_offset + k0)))\n        in\n        let idx1 =\n          Int32.to_int (Int32_u.to_int32 (Array.unsafe_get idx_arr (idx_offset + k1)))\n        in\n        if idx0 < 0 || idx0 >= Array.unsafe_get dshape 0 || idx1 < 0\n           || idx1 >= Array.unsafe_get dshape 0\n        then invalid_arg \"gather: index out of bounds\";\n        let v0 = Array.unsafe_get src (data_offset + idx0) in\n        let v1 = Array.unsafe_get src (data_offset + idx1) in\n        let vec = Float64x2.set v0 v1 in\n        Float64x2.Array.unsafe_set dst ~idx:(out_offset + k0) vec;\n        i := k0 + 2\n      done;\n      while !i < end_idx do\n        let k = !i in\n        let idx =\n          Int32.to_int (Int32_u.to_int32 (Array.unsafe_get idx_arr (idx_offset + k)))\n        in\n        if idx < 0 || idx >= Array.unsafe_get dshape 0 then\n          invalid_arg \"gather: index out of bounds\";\n        Array.unsafe_set dst (out_offset + k) (Array.unsafe_get src (data_offset + idx));\n        incr i\n      done)\n    else\n      let md_index, idx_lin, out_lin, src_base =\n        init_state ishape idx_str out_strides dshape data_strides axis start_idx\n          idx_offset out_offset data_offset\n      in\n      for k = start_idx to end_idx - 1 do\n        let idx =\n          Int32.to_int (Int32_u.to_int32 (Array.unsafe_get idx_arr !idx_lin))\n        in\n        if idx < 0 || 
idx >= Array.unsafe_get dshape axis then\n          invalid_arg \"gather: index out of bounds\";\n        let src_lin = !src_base + (idx * axis_stride) in\n        Array.unsafe_set dst !out_lin (Array.unsafe_get src src_lin);\n        if k + 1 < end_idx then\n          advance_state md_index ishape idx_str out_strides data_strides axis\n            idx_lin out_lin src_base\n      done)\n\nlet gather_float32 (src : float32# array) (dst : float32# array) ishape dshape\n    axis (idx_arr : int32# array) data_offset data_strides idx_offset idx_str\n    out_offset out_strides start_idx end_idx =\n  if start_idx >= end_idx then ()\n  else (\n    let rank = Array.length dshape in\n    let axis_stride = Array.unsafe_get data_strides axis in\n    if\n      rank = 1 && axis = 0\n      && Array.unsafe_get data_strides 0 = 1\n      && Array.unsafe_get idx_str 0 = 1\n      && Array.unsafe_get out_strides 0 = 1\n    then (\n      let i = ref start_idx in\n      let n4 = end_idx - 3 in\n      while !i < n4 do\n        let k0 = !i in\n        let k1 = k0 + 1 in\n        let k2 = k0 + 2 in\n        let k3 = k0 + 3 in\n        let idx0 =\n          Int32.to_int (Int32_u.to_int32 (Array.unsafe_get idx_arr (idx_offset + k0)))\n        in\n        let idx1 =\n          Int32.to_int (Int32_u.to_int32 (Array.unsafe_get idx_arr (idx_offset + k1)))\n        in\n        let idx2 =\n          Int32.to_int (Int32_u.to_int32 (Array.unsafe_get idx_arr (idx_offset + k2)))\n        in\n        let idx3 =\n          Int32.to_int (Int32_u.to_int32 (Array.unsafe_get idx_arr (idx_offset + k3)))\n        in\n        if idx0 < 0 || idx0 >= Array.unsafe_get dshape 0 || idx1 < 0\n           || idx1 >= Array.unsafe_get dshape 0 || idx2 < 0\n           || idx2 >= Array.unsafe_get dshape 0 || idx3 < 0\n           || idx3 >= Array.unsafe_get dshape 0\n        then invalid_arg \"gather: index out of bounds\";\n        let v0 = Array.unsafe_get src (data_offset + idx0) in\n        let v1 = Array.unsafe_get src 
(data_offset + idx1) in\n        let v2 = Array.unsafe_get src (data_offset + idx2) in\n        let v3 = Array.unsafe_get src (data_offset + idx3) in\n        let vec = Float32x4.set v0 v1 v2 v3 in\n        Float32x4.Array.unsafe_set dst ~idx:(out_offset + k0) vec;\n        i := k0 + 4\n      done;\n      while !i < end_idx do\n        let k = !i in\n        let idx =\n          Int32.to_int (Int32_u.to_int32 (Array.unsafe_get idx_arr (idx_offset + k)))\n        in\n        if idx < 0 || idx >= Array.unsafe_get dshape 0 then\n          invalid_arg \"gather: index out of bounds\";\n        Array.unsafe_set dst (out_offset + k) (Array.unsafe_get src (data_offset + idx));\n        incr i\n      done)\n    else\n      let md_index, idx_lin, out_lin, src_base =\n        init_state ishape idx_str out_strides dshape data_strides axis start_idx\n          idx_offset out_offset data_offset\n      in\n      for k = start_idx to end_idx - 1 do\n        let idx =\n          Int32.to_int (Int32_u.to_int32 (Array.unsafe_get idx_arr !idx_lin))\n        in\n        if idx < 0 || idx >= Array.unsafe_get dshape axis then\n          invalid_arg \"gather: index out of bounds\";\n        let src_lin = !src_base + (idx * axis_stride) in\n        Array.unsafe_set dst !out_lin (Array.unsafe_get src src_lin);\n        if k + 1 < end_idx then\n          advance_state md_index ishape idx_str out_strides data_strides axis\n            idx_lin out_lin src_base\n      done)\n\nlet gather_int8 (src : int8# array) (dst : int8# array) ishape dshape axis\n    (idx_arr : int32# array) data_offset data_strides idx_offset idx_str\n    out_offset out_strides start_idx end_idx =\n  if start_idx >= end_idx then ()\n  else\n    let axis_stride = Array.unsafe_get data_strides axis in\n    let md_index, idx_lin, out_lin, src_base =\n      init_state ishape idx_str out_strides dshape data_strides axis start_idx\n        idx_offset out_offset data_offset\n    in\n    for k = start_idx to end_idx - 1 do\n      
let idx =\n        Int32.to_int (Int32_u.to_int32 (Array.unsafe_get idx_arr !idx_lin))\n      in\n      if idx < 0 || idx >= Array.unsafe_get dshape axis then\n        invalid_arg \"gather: index out of bounds\";\n      let src_lin = !src_base + (idx * axis_stride) in\n      Array.unsafe_set dst !out_lin (Array.unsafe_get src src_lin);\n      if k + 1 < end_idx then\n        advance_state md_index ishape idx_str out_strides data_strides axis\n          idx_lin out_lin src_base\n    done\n\nlet gather_int16 (src : int16# array) (dst : int16# array) ishape dshape axis\n    (idx_arr : int32# array) data_offset data_strides idx_offset idx_str\n    out_offset out_strides start_idx end_idx =\n  if start_idx >= end_idx then ()\n  else\n    let axis_stride = Array.unsafe_get data_strides axis in\n    let md_index, idx_lin, out_lin, src_base =\n      init_state ishape idx_str out_strides dshape data_strides axis start_idx\n        idx_offset out_offset data_offset\n    in\n    for k = start_idx to end_idx - 1 do\n      let idx =\n        Int32.to_int (Int32_u.to_int32 (Array.unsafe_get idx_arr !idx_lin))\n      in\n      if idx < 0 || idx >= Array.unsafe_get dshape axis then\n        invalid_arg \"gather: index out of bounds\";\n      let src_lin = !src_base + (idx * axis_stride) in\n      Array.unsafe_set dst !out_lin (Array.unsafe_get src src_lin);\n      if k + 1 < end_idx then\n        advance_state md_index ishape idx_str out_strides data_strides axis\n          idx_lin out_lin src_base\n    done\n\nlet gather_int32 (src : int32# array) (dst : int32# array) ishape dshape axis\n    (idx_arr : int32# array) data_offset data_strides idx_offset idx_str\n    out_offset out_strides start_idx end_idx =\n  if start_idx >= end_idx then ()\n  else (\n    let rank = Array.length dshape in\n    let axis_stride = Array.unsafe_get data_strides axis in\n    if\n      rank = 1 && axis = 0\n      && Array.unsafe_get data_strides 0 = 1\n      && Array.unsafe_get idx_str 0 = 1\n      && 
Array.unsafe_get out_strides 0 = 1\n    then (\n      let i = ref start_idx in\n      let n4 = end_idx - 3 in\n      while !i < n4 do\n        let k0 = !i in\n        let k1 = k0 + 1 in\n        let k2 = k0 + 2 in\n        let k3 = k0 + 3 in\n        let idx0 =\n          Int32.to_int (Int32_u.to_int32 (Array.unsafe_get idx_arr (idx_offset + k0)))\n        in\n        let idx1 =\n          Int32.to_int (Int32_u.to_int32 (Array.unsafe_get idx_arr (idx_offset + k1)))\n        in\n        let idx2 =\n          Int32.to_int (Int32_u.to_int32 (Array.unsafe_get idx_arr (idx_offset + k2)))\n        in\n        let idx3 =\n          Int32.to_int (Int32_u.to_int32 (Array.unsafe_get idx_arr (idx_offset + k3)))\n        in\n        if idx0 < 0 || idx0 >= Array.unsafe_get dshape 0 || idx1 < 0\n           || idx1 >= Array.unsafe_get dshape 0 || idx2 < 0\n           || idx2 >= Array.unsafe_get dshape 0 || idx3 < 0\n           || idx3 >= Array.unsafe_get dshape 0\n        then invalid_arg \"gather: index out of bounds\";\n        let v0 = Array.unsafe_get src (data_offset + idx0) in\n        let v1 = Array.unsafe_get src (data_offset + idx1) in\n        let v2 = Array.unsafe_get src (data_offset + idx2) in\n        let v3 = Array.unsafe_get src (data_offset + idx3) in\n        let vec = Int32x4.set v0 v1 v2 v3 in\n        Int32x4.Array.unsafe_set dst ~idx:(out_offset + k0) vec;\n        i := k0 + 4\n      done;\n      while !i < end_idx do\n        let k = !i in\n        let idx =\n          Int32.to_int (Int32_u.to_int32 (Array.unsafe_get idx_arr (idx_offset + k)))\n        in\n        if idx < 0 || idx >= Array.unsafe_get dshape 0 then\n          invalid_arg \"gather: index out of bounds\";\n        Array.unsafe_set dst (out_offset + k) (Array.unsafe_get src (data_offset + idx));\n        incr i\n      done)\n    else\n      let md_index, idx_lin, out_lin, src_base =\n        init_state ishape idx_str out_strides dshape data_strides axis start_idx\n          idx_offset 
out_offset data_offset\n      in\n      for k = start_idx to end_idx - 1 do\n        let idx =\n          Int32.to_int (Int32_u.to_int32 (Array.unsafe_get idx_arr !idx_lin))\n        in\n        if idx < 0 || idx >= Array.unsafe_get dshape axis then\n          invalid_arg \"gather: index out of bounds\";\n        let src_lin = !src_base + (idx * axis_stride) in\n        Array.unsafe_set dst !out_lin (Array.unsafe_get src src_lin);\n        if k + 1 < end_idx then\n          advance_state md_index ishape idx_str out_strides data_strides axis\n            idx_lin out_lin src_base\n      done)\n\nlet gather_int64 (src : int64# array) (dst : int64# array) ishape dshape axis\n    (idx_arr : int32# array) data_offset data_strides idx_offset idx_str\n    out_offset out_strides start_idx end_idx =\n  if start_idx >= end_idx then ()\n  else (\n    let rank = Array.length dshape in\n    let axis_stride = Array.unsafe_get data_strides axis in\n    if\n      rank = 1 && axis = 0\n      && Array.unsafe_get data_strides 0 = 1\n      && Array.unsafe_get idx_str 0 = 1\n      && Array.unsafe_get out_strides 0 = 1\n    then (\n      let i = ref start_idx in\n      let n2 = end_idx - 1 in\n      while !i < n2 do\n        let k0 = !i in\n        let k1 = k0 + 1 in\n        let idx0 =\n          Int32.to_int (Int32_u.to_int32 (Array.unsafe_get idx_arr (idx_offset + k0)))\n        in\n        let idx1 =\n          Int32.to_int (Int32_u.to_int32 (Array.unsafe_get idx_arr (idx_offset + k1)))\n        in\n        if idx0 < 0 || idx0 >= Array.unsafe_get dshape 0 || idx1 < 0\n           || idx1 >= Array.unsafe_get dshape 0\n        then invalid_arg \"gather: index out of bounds\";\n        let v0 = Array.unsafe_get src (data_offset + idx0) in\n        let v1 = Array.unsafe_get src (data_offset + idx1) in\n        let vec = Int64x2.set v0 v1 in\n        Int64x2.Array.unsafe_set dst ~idx:(out_offset + k0) vec;\n        i := k0 + 2\n      done;\n      while !i < end_idx do\n        let k = !i 
in\n        let idx =\n          Int32.to_int (Int32_u.to_int32 (Array.unsafe_get idx_arr (idx_offset + k)))\n        in\n        if idx < 0 || idx >= Array.unsafe_get dshape 0 then\n          invalid_arg \"gather: index out of bounds\";\n        Array.unsafe_set dst (out_offset + k) (Array.unsafe_get src (data_offset + idx));\n        incr i\n      done)\n    else\n      let md_index, idx_lin, out_lin, src_base =\n        init_state ishape idx_str out_strides dshape data_strides axis start_idx\n          idx_offset out_offset data_offset\n      in\n      for k = start_idx to end_idx - 1 do\n        let idx =\n          Int32.to_int (Int32_u.to_int32 (Array.unsafe_get idx_arr !idx_lin))\n        in\n        if idx < 0 || idx >= Array.unsafe_get dshape axis then\n          invalid_arg \"gather: index out of bounds\";\n        let src_lin = !src_base + (idx * axis_stride) in\n        Array.unsafe_set dst !out_lin (Array.unsafe_get src src_lin);\n        if k + 1 < end_idx then\n          advance_state md_index ishape idx_str out_strides data_strides axis\n            idx_lin out_lin src_base\n      done)\n\nlet gather_bool (src : bool array) (dst : bool array) ishape dshape axis\n    (idx_arr : int32# array) data_offset data_strides idx_offset idx_str\n    out_offset out_strides start_idx end_idx =\n  if start_idx >= end_idx then ()\n  else\n    let axis_stride = Array.unsafe_get data_strides axis in\n    let md_index, idx_lin, out_lin, src_base =\n      init_state ishape idx_str out_strides dshape data_strides axis start_idx\n        idx_offset out_offset data_offset\n    in\n    for k = start_idx to end_idx - 1 do\n      let idx =\n        Int32.to_int (Int32_u.to_int32 (Array.unsafe_get idx_arr !idx_lin))\n      in\n      if idx < 0 || idx >= Array.unsafe_get dshape axis then\n        invalid_arg \"gather: index out of bounds\";\n      let src_lin = !src_base + (idx * axis_stride) in\n      Array.unsafe_set dst !out_lin (Array.unsafe_get src src_lin);\n      if k 
+ 1 < end_idx then\n        advance_state md_index ishape idx_str out_strides data_strides axis\n          idx_lin out_lin src_base\n    done\n"
  },
  {
    "path": "packages/nx-oxcaml/lib/op_matmul.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Import\n\n(* ---------------------------------------------------------------------------\n   BLIS-style GEMM implementation\n   ---------------------------------------------------------------------------\n\n   We use a BLIS-style blocked GEMM with three levels of tiling (jc, pc, ic)\n   and explicit packing of A and B panels into contiguous buffers (pack_a,\n   pack_b) so that the microkernel streams over cache-friendly memory.\n\n   Microkernel design (ARM64 NEON, 128-bit vectors):\n   - f64: MR=4, NR=4 → 8 Float64x2 accumulators (4×2 tile = 4×4 scalars)\n   - f32: MR=6, NR=8 → 12 Float32x4 accumulators (6×2 tile = 6×8 scalars)\n\n   Blocking parameters (tuned for Apple Silicon L1/L2):\n   - f64: KC=128, MC=384, NC=256\n   - f32: KC=256, MC=240, NC=640\n\n   The microkernel is a recursive kloop (f64_kloop / f32_kloop) defined at\n   module level, with all SIMD accumulators passed as function arguments so\n   they stay in registers across the entire k-iteration. kloop must be at\n   module level — not nested inside kernel_zero/kernel_accum — to avoid\n   per-call closure allocations.\n\n   Threading: the ic-loop is parallelized via Parallel.parallel_for. Each\n   domain gets its own ap/bp scratch buffers allocated inside the closure.\n\n   Known limitations and next optimizations\n   ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n   A pure-C BLIS implementation can match BLAS (see Salykov, \"Advanced Matrix\n   Multiplication Optimization on Modern Multi-Core Processors\"). The ~8–31×\n   gap vs. the C backend is closeable. In priority order:\n\n   - No FMA: mul_add compiles to fmul + fadd. 
NEON fmla exists in simd_neon\n     but is not a [@@builtin] external, so it hits the same cross-module\n     inlining issue (see SIMD wrappers comment below). Needs upstream OxCaml.\n\n   - Edge tiles run the full SIMD kernel into a padded temp buffer, then\n     copy back valid elements. This avoids the scalar fallback (~40% faster\n     for f32 at 500–1000). Further gains possible with NEON masked stores.\n\n   - Kernel size is constrained by the recursive kloop approach. Although\n     ARM64 NEON has 32 registers, the OCaml calling convention passes at most\n     8 SIMD values in registers (v0-v7). A MR=8,NR=4 kernel (16 accumulators)\n     regressed ~38% due to stack spills on every recursive call. The current\n     MR=4,NR=4 (f64) and MR=6,NR=8 (f32) with 8/12 accumulators are near the\n     sweet spot. A loop-based microkernel could bypass this limit.\n\n   - Cache blocking parameters (KC, MC, NC): increasing them for Apple\n     Silicon's large caches gave marginal improvement at 1000×1000 but\n     regressed smaller sizes. Current values are reasonable.\n\n   - Pack B is redundantly done per domain. Restructuring to pack once per\n     (jc, pc) block regressed due to effect-handler overhead in\n     Parallel.run — fixing the parallel primitives would unlock this.\n\n   - Parallelization strategy: we parallelize the ic-loop (3rd loop). BLIS\n     literature suggests parallelizing the jr/ir loops (1st/2nd) around the\n     microkernel can be more efficient, as it avoids redundant packing and\n     gives finer-grained work distribution.\n   --------------------------------------------------------------------------- *)\n\n(* ---------------------------- Helpers ------------------------------------ *)\n\nlet[@inline] min_int a b = if a < b then a else b\nlet[@inline] round_up x m = ((x + m - 1) / m) * m\n\n(* Local wrappers that call [@@builtin] externals directly.\n   Wrappers defined in other modules (e.g. 
mul_add, set1 in Simd) are not\n   inlined into this compilation unit — even when both modules are in the\n   same library. One hypothesis is dune's -opaque flag preventing flambda2\n   from exporting function bodies, but moving Simd into the same library\n   did not help, so the root cause may lie elsewhere (flambda2 inlining\n   heuristics, or how [@@builtin] externals bypass the optimizer while\n   regular wrappers do not). Defining them here works around the issue.\n\n   TODO: mul_add uses separate mul+add instead of a true FMA instruction.\n   OxCaml has NEON FMA via simd_neon, but the emulated fma is not a\n   [@@builtin] external and suffers from the same inlining issue described\n   above. Upstreaming NEON fmla/fmls as [@@builtin] in OxCaml would let\n   us replace these with single-instruction FMA. *)\nlet[@inline always] f64_mul_add a b c =\n  Float64x2.add (Float64x2.mul a b) c\n\nlet[@inline always] f64_set1 a =\n  Float64x2.of_int64x2 (Int64x2.dup (Int64x2.of_float64x2 (Float64x2.low_of a)))\n\nlet[@inline always] f32_mul_add a b c =\n  Float32x4.add (Float32x4.mul a b) c\n\nlet[@inline always] f32_set1 a =\n  Float32x4.of_int32x4 (Int32x4.dup (Int32x4.of_float32x4 (Float32x4.low_of a)))\n\nmodule Gemm_f64 = struct\n  let mr = 4\n  let nr = 4\n  let kc_blk = 128\n  let mc_blk = 384\n  let nc_blk = 256\n\n  let pack_a a ~a_off ~lda ~ic ~pc ~mc ~kc ap =\n    let dst = ref 0 in\n    let i = ref 0 in\n    while !i + mr <= mc do\n      for p = 0 to kc - 1 do\n        let src_base = a_off + (ic + !i) * lda + pc + p in\n        for ii = 0 to mr - 1 do\n          Array.unsafe_set ap (!dst + ii)\n            (Array.unsafe_get a (src_base + ii * lda))\n        done;\n        dst := !dst + mr\n      done;\n      i := !i + mr\n    done;\n    if !i < mc then begin\n      let mr_rem = mc - !i in\n      for p = 0 to kc - 1 do\n        let src_base = a_off + (ic + !i) * lda + pc + p in\n        for ii = 0 to mr_rem - 1 do\n          Array.unsafe_set ap (!dst + ii)\n   
         (Array.unsafe_get a (src_base + ii * lda))\n        done;\n        for ii = mr_rem to mr - 1 do\n          Array.unsafe_set ap (!dst + ii) #0.\n        done;\n        dst := !dst + mr\n      done\n    end\n\n  let pack_b b ~b_off ~ldb ~pc ~jc ~kc ~nc bp =\n    let dst = ref 0 in\n    let j = ref 0 in\n    while !j + nr <= nc do\n      for p = 0 to kc - 1 do\n        let src = b_off + (pc + p) * ldb + jc + !j in\n        for jj = 0 to nr - 1 do\n          Array.unsafe_set bp (!dst + jj) (Array.unsafe_get b (src + jj))\n        done;\n        dst := !dst + nr\n      done;\n      j := !j + nr\n    done;\n    if !j < nc then begin\n      let nr_rem = nc - !j in\n      for p = 0 to kc - 1 do\n        let src = b_off + (pc + p) * ldb + jc + !j in\n        for jj = 0 to nr_rem - 1 do\n          Array.unsafe_set bp (!dst + jj) (Array.unsafe_get b (src + jj))\n        done;\n        for jj = nr_rem to nr - 1 do\n          Array.unsafe_set bp (!dst + jj) #0.\n        done;\n        dst := !dst + nr\n      done\n    end\n\n  let rec f64_kloop ap ap_off bp bp_off c_buf c_off ldc kc p\n      c00 c01 c10 c11 c20 c21 c30 c31 =\n    if p = kc then begin\n      Float64x2.Array.unsafe_set c_buf ~idx:c_off c00;\n      Float64x2.Array.unsafe_set c_buf ~idx:(c_off + 2) c01;\n      let r1 = c_off + ldc in\n      Float64x2.Array.unsafe_set c_buf ~idx:r1 c10;\n      Float64x2.Array.unsafe_set c_buf ~idx:(r1 + 2) c11;\n      let r2 = c_off + 2 * ldc in\n      Float64x2.Array.unsafe_set c_buf ~idx:r2 c20;\n      Float64x2.Array.unsafe_set c_buf ~idx:(r2 + 2) c21;\n      let r3 = c_off + 3 * ldc in\n      Float64x2.Array.unsafe_set c_buf ~idx:r3 c30;\n      Float64x2.Array.unsafe_set c_buf ~idx:(r3 + 2) c31\n    end\n    else\n      let ab = ap_off + p * 4 in\n      let bb = bp_off + p * 4 in\n      let a0 = f64_set1 (Array.unsafe_get ap ab) in\n      let a1 = f64_set1 (Array.unsafe_get ap (ab + 1)) in\n      let a2 = f64_set1 (Array.unsafe_get ap (ab + 2)) in\n      let a3 = 
f64_set1 (Array.unsafe_get ap (ab + 3)) in\n      let b0 = Float64x2.Array.unsafe_get bp ~idx:bb in\n      let b1 = Float64x2.Array.unsafe_get bp ~idx:(bb + 2) in\n      f64_kloop ap ap_off bp bp_off c_buf c_off ldc kc (p + 1)\n        (f64_mul_add a0 b0 c00) (f64_mul_add a0 b1 c01)\n        (f64_mul_add a1 b0 c10) (f64_mul_add a1 b1 c11)\n        (f64_mul_add a2 b0 c20) (f64_mul_add a2 b1 c21)\n        (f64_mul_add a3 b0 c30) (f64_mul_add a3 b1 c31)\n\n  let kernel_zero ap ~ap_off bp ~bp_off c_buf ~c_off ~ldc ~kc =\n    let z = f64_set1 #0. in\n    f64_kloop ap ap_off bp bp_off c_buf c_off ldc kc 0\n      z z z z z z z z\n\n  let kernel_accum ap ~ap_off bp ~bp_off c_buf ~c_off ~ldc ~kc =\n    let r1 = c_off + ldc in\n    let r2 = c_off + 2 * ldc in\n    let r3 = c_off + 3 * ldc in\n    f64_kloop ap ap_off bp bp_off c_buf c_off ldc kc 0\n      (Float64x2.Array.unsafe_get c_buf ~idx:c_off)\n      (Float64x2.Array.unsafe_get c_buf ~idx:(c_off + 2))\n      (Float64x2.Array.unsafe_get c_buf ~idx:r1)\n      (Float64x2.Array.unsafe_get c_buf ~idx:(r1 + 2))\n      (Float64x2.Array.unsafe_get c_buf ~idx:r2)\n      (Float64x2.Array.unsafe_get c_buf ~idx:(r2 + 2))\n      (Float64x2.Array.unsafe_get c_buf ~idx:r3)\n      (Float64x2.Array.unsafe_get c_buf ~idx:(r3 + 2))\n\n  let macro_kernel ap bp c_buf ~c_off ~ldc ~mc ~nc ~kc ~first tmp =\n    let ir = ref 0 in\n    while !ir < mc do\n      let mr_eff = min_int mr (mc - !ir) in\n      let ap_off = (!ir / mr) * mr * kc in\n      let jr = ref 0 in\n      while !jr < nc do\n        let nr_eff = min_int nr (nc - !jr) in\n        let bp_off = (!jr / nr) * nr * kc in\n        let c_tile = c_off + (!ir * ldc) + !jr in\n        if mr_eff = mr && nr_eff = nr then begin\n          if first then\n            kernel_zero ap ~ap_off bp ~bp_off c_buf\n              ~c_off:c_tile ~ldc ~kc\n          else\n            kernel_accum ap ~ap_off bp ~bp_off c_buf\n              ~c_off:c_tile ~ldc ~kc\n        end\n        else begin\n          (* 
Edge tile: run full SIMD kernel into tmp buffer, copy valid part *)\n          if first then\n            kernel_zero ap ~ap_off bp ~bp_off tmp\n              ~c_off:0 ~ldc:nr ~kc\n          else begin\n            (* Load current C values into tmp before accumulating *)\n            for i = 0 to mr_eff - 1 do\n              for j = 0 to nr_eff - 1 do\n                Array.unsafe_set tmp (i * nr + j)\n                  (Array.unsafe_get c_buf (c_tile + i * ldc + j))\n              done\n            done;\n            kernel_accum ap ~ap_off bp ~bp_off tmp\n              ~c_off:0 ~ldc:nr ~kc\n          end;\n          for i = 0 to mr_eff - 1 do\n            for j = 0 to nr_eff - 1 do\n              Array.unsafe_set c_buf (c_tile + i * ldc + j)\n                (Array.unsafe_get tmp (i * nr + j))\n            done\n          done\n        end;\n        jr := !jr + nr\n      done;\n      ir := !ir + mr\n    done\n\n  let gemm ~pool a_buf b_buf c_buf ~m ~n ~k ~a_off ~b_off ~c_off ~ldc () =\n    let lda = k and ldb = n in\n    let mc = mc_blk and nc = nc_blk and kc = kc_blk in\n    let rec jc_loop jc =\n      if jc >= n then ()\n      else\n        let nc' = min_int nc (n - jc) in\n        Parallel.parallel_for pool 0 (m - 1) (fun start_row end_row ->\n            let bp = Array.make_float64 (round_up nc' nr * kc) in\n            let ap = Array.make_float64 (round_up mc mr * kc) in\n            let tmp = Array.make_float64 (mr * nr) in\n            let rec pc_loop pc =\n              if pc >= k then ()\n              else\n                let kc' = min_int kc (k - pc) in\n                let first = pc = 0 in\n                pack_b b_buf ~b_off ~ldb ~pc ~jc ~kc:kc' ~nc:nc' bp;\n                let rec ic_loop ic =\n                  if ic >= end_row then ()\n                  else\n                    let mc' = min_int mc (end_row - ic) in\n                    pack_a a_buf ~a_off ~lda ~ic ~pc ~mc:mc' ~kc:kc' ap;\n                    macro_kernel ap bp c_buf\n          
            ~c_off:(c_off + ic * ldc + jc)\n                      ~ldc ~mc:mc' ~nc:nc' ~kc:kc' ~first tmp;\n                    ic_loop (ic + mc')\n                in\n                ic_loop start_row;\n                pc_loop (pc + kc')\n            in\n            pc_loop 0);\n        jc_loop (jc + nc')\n    in\n    jc_loop 0\nend\n\nlet matmul_float64_slow a_buf b_buf c_buf va vb vout start_idx end_idx =\n  let nd_a = Array.length (shape va)\n  and nd_b = Array.length (shape vb)\n  and nd_out = Array.length (shape vout) in\n\n  let rank = nd_out in\n  let m = (shape vout).(rank - 2) in\n  let n = (shape vout).(rank - 1) in\n  let k = (shape va).(rank - 1) in\n\n  let a_idx0 = Array.make nd_a 0\n  and a_idx1 = Array.make nd_a 0\n  and b_idx = Array.make nd_b 0\n  and out_idx0 = Array.make nd_out 0\n  and out_idx1 = Array.make nd_out 0 in\n\n  let a_str = View.strides va\n  and b_str = View.strides vb\n  and out_str = View.strides vout in\n\n  let batch_shape = Array.sub (shape vout) 0 (max 0 (nd_out - 2)) in\n  let batch_sz =\n    if Array.length batch_shape = 0 then 1 else Shape.numel batch_shape\n  in\n\n  let work = ref start_idx in\n  while !work < end_idx do\n    let i0 = !work mod m in\n    let batch = !work / m in\n    let has_row1 = (i0 + 1 < m) && (!work + 1 < end_idx) in\n\n    if has_row1 then\n      begin\n        if batch_sz <> 1 then begin\n          Shape.unravel_index_into batch batch_shape out_idx0;\n          Shape.unravel_index_into batch batch_shape out_idx1;\n        end;\n\n        Shape.broadcast_index_into out_idx0 (shape va) a_idx0;\n        Shape.broadcast_index_into out_idx0 (shape vb) b_idx;\n        Shape.broadcast_index_into out_idx0 (shape va) a_idx1;\n\n        out_idx0.(nd_out - 2) <- i0;\n        a_idx0.(nd_a - 2) <- i0;\n\n        out_idx1.(nd_out - 2) <- i0 + 1;\n        a_idx1.(nd_a - 2) <- i0 + 1;\n\n        let j = ref 0 in\n        while !j + 1 < n do\n      
    out_idx0.(nd_out - 1) <- !j;\n          b_idx.(nd_b - 1) <- !j;\n    \n          out_idx1.(nd_out - 1) <- !j;\n    \n          let rec kloop l acc0 acc1 =\n            if l = k then #(acc0, acc1)\n            else begin\n              a_idx0.(nd_a - 1) <- l;\n              a_idx1.(nd_a - 1) <- l;\n              b_idx.(nd_b - 2) <- l;\n              let bv =\n                Float64x2.Array.unsafe_get b_buf\n                  ~idx:(View.offset vb + Shape.ravel_index b_idx b_str)\n              in\n              let av0 =\n                Array.unsafe_get a_buf\n                  (View.offset va + Shape.ravel_index a_idx0 a_str)\n              in\n              let a0v = f64_set1 av0 in\n              let av1 =\n                Array.unsafe_get a_buf\n                  (View.offset va + Shape.ravel_index a_idx1 a_str)\n              in\n              let a1v = f64_set1 av1 in\n              kloop (l + 1)\n                (f64_mul_add a0v bv acc0)\n                (f64_mul_add a1v bv acc1)\n            end\n          in\n          let #(acc0, acc1) = kloop 0 (f64_set1 #0.0) (f64_set1 #0.0) in\n          let out_off0 =\n            View.offset vout + Shape.ravel_index out_idx0 out_str\n          in\n          Float64x2.Array.unsafe_set c_buf ~idx:out_off0 acc0;\n          let out_off1 =\n            View.offset vout + Shape.ravel_index out_idx1 out_str\n          in\n          Float64x2.Array.unsafe_set c_buf ~idx:out_off1 acc1;\n    \n          j := !j + 2\n        done;\n    \n        while !j < n do\n          out_idx0.(nd_out - 1) <- !j;\n          b_idx.(nd_b - 1) <- !j;\n    \n          let rec scalar l acc0 acc1 =\n            if l = k then #(acc0, acc1)\n            else begin\n              a_idx0.(nd_a - 1) <- l;\n              a_idx1.(nd_a - 1) <- l;\n              b_idx.(nd_b - 2) <- l;\n              let av0 =\n                Array.unsafe_get a_buf\n                  (View.offset va + Shape.ravel_index a_idx0 a_str)\n              in\n              let av1 =\n                Array.unsafe_get a_buf\n                  (View.offset va 
+ Shape.ravel_index a_idx1 a_str)\n              in\n              let bv =\n                Array.unsafe_get b_buf\n                  (View.offset vb + Shape.ravel_index b_idx b_str)\n              in\n              scalar (l + 1) (Float_u.fma av0 bv acc0) (Float_u.fma av1 bv acc1)\n            end\n          in\n    \n          let #(sum0, sum1) = scalar 0 #0.0 #0.0 in\n          let out0 =\n            View.offset vout + Shape.ravel_index out_idx0 out_str\n          in\n          Array.unsafe_set c_buf out0 sum0;\n\n          out_idx1.(nd_out - 1) <- !j;\n          let out1 =\n            View.offset vout + Shape.ravel_index out_idx1 out_str\n          in\n          Array.unsafe_set c_buf out1 sum1;\n\n          j := !j + 1\n        done;\n    \n        work := !work + 2\n    end else \n      begin\n        if batch_sz <> 1 then begin\n          Shape.unravel_index_into batch batch_shape out_idx0;\n        end;\n    \n        Shape.broadcast_index_into out_idx0 (shape va) a_idx0;\n        Shape.broadcast_index_into out_idx0 (shape vb) b_idx;\n    \n        out_idx0.(nd_out - 2) <- i0;\n        a_idx0.(nd_a - 2) <- i0;\n    \n    \n        let j = ref 0 in\n        while !j + 1 < n do\n          out_idx0.(nd_out - 1) <- !j;\n          b_idx.(nd_b - 1) <- !j;\n    \n    \n          let rec kloop l acc0 =\n            if l = k then acc0\n            else begin\n              a_idx0.(nd_a - 1) <- l;\n              b_idx.(nd_b - 2) <- l;\n              let av0 =\n                Array.unsafe_get a_buf\n                  (View.offset va + Shape.ravel_index a_idx0 a_str)\n              in\n              let bv =\n                Float64x2.Array.unsafe_get b_buf\n                  ~idx:(View.offset vb + Shape.ravel_index b_idx b_str)\n              in\n              let a0v = f64_set1 av0 in\n              kloop (l + 1) (f64_mul_add a0v bv acc0)\n            end\n          in\n          let acc0 = kloop 0 (f64_set1 #0.0) in\n          let out_off0 =\n            
View.offset vout + Shape.ravel_index out_idx0 out_str\n          in\n          Float64x2.Array.unsafe_set c_buf ~idx:out_off0 acc0;\n    \n          j := !j + 2\n        done;\n    \n        while !j < n do\n          out_idx0.(nd_out - 1) <- !j;\n          b_idx.(nd_b - 1) <- !j;\n    \n          let rec scalar l acc =\n            if l = k then acc\n            else begin\n              a_idx0.(nd_a - 1) <- l;\n              b_idx.(nd_b - 2) <- l;\n    \n              let av =\n                Array.unsafe_get a_buf\n                  (View.offset va + Shape.ravel_index a_idx0 a_str)\n              in\n              let bv =\n                Array.unsafe_get b_buf\n                  (View.offset vb + Shape.ravel_index b_idx b_str)\n              in\n              scalar (l + 1) (Float_u.fma av bv acc)\n            end\n          in\n    \n          let sum0 = scalar 0 #0.0 in\n          let out0 =\n            View.offset vout + Shape.ravel_index out_idx0 out_str\n          in\n          Array.unsafe_set c_buf out0 sum0;\n    \n          j := !j + 1\n        done;\n    \n        work := !work + 1\n    end;\n\n  done\n\nmodule Gemm_f32 = struct\n  let mr = 6\n  let nr = 8\n  let kc_blk = 256\n  let mc_blk = 240\n  let nc_blk = 640\n\n  let pack_a a ~a_off ~lda ~ic ~pc ~mc ~kc ap =\n    let dst = ref 0 in\n    let i = ref 0 in\n    while !i + mr <= mc do\n      for p = 0 to kc - 1 do\n        let src_base = a_off + (ic + !i) * lda + pc + p in\n        for ii = 0 to mr - 1 do\n          Array.unsafe_set ap (!dst + ii)\n            (Array.unsafe_get a (src_base + ii * lda))\n        done;\n        dst := !dst + mr\n      done;\n      i := !i + mr\n    done;\n    if !i < mc then begin\n      let mr_rem = mc - !i in\n      for p = 0 to kc - 1 do\n        let src_base = a_off + (ic + !i) * lda + pc + p in\n        for ii = 0 to mr_rem - 1 do\n          Array.unsafe_set ap (!dst + ii)\n            (Array.unsafe_get a (src_base + ii * lda))\n        done;\n        for ii 
= mr_rem to mr - 1 do\n          Array.unsafe_set ap (!dst + ii) #0.0s\n        done;\n        dst := !dst + mr\n      done\n    end\n\n  let pack_b b ~b_off ~ldb ~pc ~jc ~kc ~nc bp =\n    let dst = ref 0 in\n    let j = ref 0 in\n    while !j + nr <= nc do\n      for p = 0 to kc - 1 do\n        let src = b_off + (pc + p) * ldb + jc + !j in\n        for jj = 0 to nr - 1 do\n          Array.unsafe_set bp (!dst + jj) (Array.unsafe_get b (src + jj))\n        done;\n        dst := !dst + nr\n      done;\n      j := !j + nr\n    done;\n    if !j < nc then begin\n      let nr_rem = nc - !j in\n      for p = 0 to kc - 1 do\n        let src = b_off + (pc + p) * ldb + jc + !j in\n        for jj = 0 to nr_rem - 1 do\n          Array.unsafe_set bp (!dst + jj) (Array.unsafe_get b (src + jj))\n        done;\n        for jj = nr_rem to nr - 1 do\n          Array.unsafe_set bp (!dst + jj) #0.0s\n        done;\n        dst := !dst + nr\n      done\n    end\n\n  let rec f32_kloop ap ap_off bp bp_off c_buf c_off ldc kc p\n      c00 c01 c10 c11 c20 c21 c30 c31 c40 c41 c50 c51 =\n    if p = kc then begin\n      Float32x4.Array.unsafe_set c_buf ~idx:c_off c00;\n      Float32x4.Array.unsafe_set c_buf ~idx:(c_off + 4) c01;\n      let r1 = c_off + ldc in\n      Float32x4.Array.unsafe_set c_buf ~idx:r1 c10;\n      Float32x4.Array.unsafe_set c_buf ~idx:(r1 + 4) c11;\n      let r2 = c_off + 2 * ldc in\n      Float32x4.Array.unsafe_set c_buf ~idx:r2 c20;\n      Float32x4.Array.unsafe_set c_buf ~idx:(r2 + 4) c21;\n      let r3 = c_off + 3 * ldc in\n      Float32x4.Array.unsafe_set c_buf ~idx:r3 c30;\n      Float32x4.Array.unsafe_set c_buf ~idx:(r3 + 4) c31;\n      let r4 = c_off + 4 * ldc in\n      Float32x4.Array.unsafe_set c_buf ~idx:r4 c40;\n      Float32x4.Array.unsafe_set c_buf ~idx:(r4 + 4) c41;\n      let r5 = c_off + 5 * ldc in\n      Float32x4.Array.unsafe_set c_buf ~idx:r5 c50;\n      Float32x4.Array.unsafe_set c_buf ~idx:(r5 + 4) c51\n    end\n    else\n      let ab = ap_off + p * 6 
in\n      let bb = bp_off + p * 8 in\n      let a0 = f32_set1 (Array.unsafe_get ap ab) in\n      let a1 = f32_set1 (Array.unsafe_get ap (ab + 1)) in\n      let a2 = f32_set1 (Array.unsafe_get ap (ab + 2)) in\n      let a3 = f32_set1 (Array.unsafe_get ap (ab + 3)) in\n      let a4 = f32_set1 (Array.unsafe_get ap (ab + 4)) in\n      let a5 = f32_set1 (Array.unsafe_get ap (ab + 5)) in\n      let b0 = Float32x4.Array.unsafe_get bp ~idx:bb in\n      let b1 = Float32x4.Array.unsafe_get bp ~idx:(bb + 4) in\n      f32_kloop ap ap_off bp bp_off c_buf c_off ldc kc (p + 1)\n        (f32_mul_add a0 b0 c00) (f32_mul_add a0 b1 c01)\n        (f32_mul_add a1 b0 c10) (f32_mul_add a1 b1 c11)\n        (f32_mul_add a2 b0 c20) (f32_mul_add a2 b1 c21)\n        (f32_mul_add a3 b0 c30) (f32_mul_add a3 b1 c31)\n        (f32_mul_add a4 b0 c40) (f32_mul_add a4 b1 c41)\n        (f32_mul_add a5 b0 c50) (f32_mul_add a5 b1 c51)\n\n  let kernel_zero ap ~ap_off bp ~bp_off c_buf ~c_off ~ldc ~kc =\n    let z = f32_set1 #0.0s in\n    f32_kloop ap ap_off bp bp_off c_buf c_off ldc kc 0\n      z z z z z z z z z z z z\n\n  let kernel_accum ap ~ap_off bp ~bp_off c_buf ~c_off ~ldc ~kc =\n    let r1 = c_off + ldc in\n    let r2 = c_off + 2 * ldc in\n    let r3 = c_off + 3 * ldc in\n    let r4 = c_off + 4 * ldc in\n    let r5 = c_off + 5 * ldc in\n    f32_kloop ap ap_off bp bp_off c_buf c_off ldc kc 0\n      (Float32x4.Array.unsafe_get c_buf ~idx:c_off)\n      (Float32x4.Array.unsafe_get c_buf ~idx:(c_off + 4))\n      (Float32x4.Array.unsafe_get c_buf ~idx:r1)\n      (Float32x4.Array.unsafe_get c_buf ~idx:(r1 + 4))\n      (Float32x4.Array.unsafe_get c_buf ~idx:r2)\n      (Float32x4.Array.unsafe_get c_buf ~idx:(r2 + 4))\n      (Float32x4.Array.unsafe_get c_buf ~idx:r3)\n      (Float32x4.Array.unsafe_get c_buf ~idx:(r3 + 4))\n      (Float32x4.Array.unsafe_get c_buf ~idx:r4)\n      (Float32x4.Array.unsafe_get c_buf ~idx:(r4 + 4))\n      (Float32x4.Array.unsafe_get c_buf ~idx:r5)\n      
(Float32x4.Array.unsafe_get c_buf ~idx:(r5 + 4))\n\n  let macro_kernel ap bp c_buf ~c_off ~ldc ~mc ~nc ~kc ~first tmp =\n    let ir = ref 0 in\n    while !ir < mc do\n      let mr_eff = min_int mr (mc - !ir) in\n      let ap_off = (!ir / mr) * mr * kc in\n      let jr = ref 0 in\n      while !jr < nc do\n        let nr_eff = min_int nr (nc - !jr) in\n        let bp_off = (!jr / nr) * nr * kc in\n        let c_tile = c_off + (!ir * ldc) + !jr in\n        if mr_eff = mr && nr_eff = nr then begin\n          if first then\n            kernel_zero ap ~ap_off bp ~bp_off c_buf\n              ~c_off:c_tile ~ldc ~kc\n          else\n            kernel_accum ap ~ap_off bp ~bp_off c_buf\n              ~c_off:c_tile ~ldc ~kc\n        end\n        else begin\n          if first then\n            kernel_zero ap ~ap_off bp ~bp_off tmp\n              ~c_off:0 ~ldc:nr ~kc\n          else begin\n            for i = 0 to mr_eff - 1 do\n              for j = 0 to nr_eff - 1 do\n                Array.unsafe_set tmp (i * nr + j)\n                  (Array.unsafe_get c_buf (c_tile + i * ldc + j))\n              done\n            done;\n            kernel_accum ap ~ap_off bp ~bp_off tmp\n              ~c_off:0 ~ldc:nr ~kc\n          end;\n          for i = 0 to mr_eff - 1 do\n            for j = 0 to nr_eff - 1 do\n              Array.unsafe_set c_buf (c_tile + i * ldc + j)\n                (Array.unsafe_get tmp (i * nr + j))\n            done\n          done\n        end;\n        jr := !jr + nr\n      done;\n      ir := !ir + mr\n    done\n\n  let gemm ~pool a_buf b_buf c_buf ~m ~n ~k ~a_off ~b_off ~c_off ~ldc () =\n    let lda = k and ldb = n in\n    let mc = mc_blk and nc = nc_blk and kc = kc_blk in\n    let rec jc_loop jc =\n      if jc >= n then ()\n      else\n        let nc' = min_int nc (n - jc) in\n        Parallel.parallel_for pool 0 (m - 1) (fun start_row end_row ->\n            let bp = Array.make_float32 (round_up nc' nr * kc) in\n            let ap = Array.make_float32 
(round_up mc mr * kc) in\n            let tmp = Array.make_float32 (mr * nr) in\n            let rec pc_loop pc =\n              if pc >= k then ()\n              else\n                let kc' = min_int kc (k - pc) in\n                let first = pc = 0 in\n                pack_b b_buf ~b_off ~ldb ~pc ~jc ~kc:kc' ~nc:nc' bp;\n                let rec ic_loop ic =\n                  if ic >= end_row then ()\n                  else\n                    let mc' = min_int mc (end_row - ic) in\n                    pack_a a_buf ~a_off ~lda ~ic ~pc ~mc:mc' ~kc:kc' ap;\n                    macro_kernel ap bp c_buf\n                      ~c_off:(c_off + ic * ldc + jc)\n                      ~ldc ~mc:mc' ~nc:nc' ~kc:kc' ~first tmp;\n                    ic_loop (ic + mc')\n                in\n                ic_loop start_row;\n                pc_loop (pc + kc')\n            in\n            pc_loop 0);\n        jc_loop (jc + nc')\n    in\n    jc_loop 0\nend\n\nlet matmul_float32_slow a_buf b_buf c_buf va vb vout start_idx end_idx =\n  let nd_a = Array.length (shape va)\n  and nd_b = Array.length (shape vb)\n  and nd_out = Array.length (shape vout) in\n\n  let rank = nd_out in\n  let m = (shape vout).(rank - 2) in\n  let n = (shape vout).(rank - 1) in\n  let k = (shape va).(rank - 1) in\n\n  let a_idx0 = Array.make nd_a 0\n  and a_idx1 = Array.make nd_a 0\n  and b_idx = Array.make nd_b 0\n  and out_idx0 = Array.make nd_out 0\n  and out_idx1 = Array.make nd_out 0 in\n\n  let a_str = View.strides va\n  and b_str = View.strides vb\n  and out_str = View.strides vout in\n\n  let batch_shape = Array.sub (shape vout) 0 (max 0 (nd_out - 2)) in\n  let batch_sz =\n    if Array.length batch_shape = 0 then 1 else Shape.numel batch_shape\n  in\n\n  let work = ref start_idx in\n  while !work < end_idx do\n    let i0 = !work mod m in\n    let batch = !work / m in\n    let has_row1 = (i0 + 1 < m) && (!work + 1 < end_idx) in\n\n    if has_row1 then \n      begin\n        if batch_sz <> 1 then 
begin\n          Shape.unravel_index_into batch batch_shape out_idx0;\n          Shape.unravel_index_into batch batch_shape out_idx1;\n        end;\n    \n        Shape.broadcast_index_into out_idx0 (shape va) a_idx0;\n        Shape.broadcast_index_into out_idx0 (shape vb) b_idx;\n        Shape.broadcast_index_into out_idx0 (shape va) a_idx1;\n    \n        out_idx0.(nd_out - 2) <- i0;\n        a_idx0.(nd_a - 2) <- i0;\n    \n        out_idx1.(nd_out - 2) <- i0 + 1;\n        a_idx1.(nd_a - 2) <- i0 + 1;\n    \n        let j = ref 0 in\n        while !j + 3 < n do\n          out_idx0.(nd_out - 1) <- !j;\n          b_idx.(nd_b - 1) <- !j;\n    \n          out_idx1.(nd_out - 1) <- !j;\n    \n          let rec kloop l acc0 acc1 =\n            if l = k then #(acc0, acc1)\n            else begin\n              a_idx0.(nd_a - 1) <- l;\n              a_idx1.(nd_a - 1) <- l;\n              b_idx.(nd_b - 2) <- l;\n              let av0 =\n                Array.unsafe_get a_buf\n                  (View.offset va + Shape.ravel_index a_idx0 a_str)\n              in\n              let av1 =\n                Array.unsafe_get a_buf\n                  (View.offset va + Shape.ravel_index a_idx1 a_str)\n              in\n              let bv =\n                Float32x4.Array.unsafe_get b_buf\n                  ~idx:(View.offset vb + Shape.ravel_index b_idx b_str)\n              in\n              let a0v = f32_set1 av0 in\n              let a1v = f32_set1 av1 in\n              kloop (l + 1)\n                (f32_mul_add a0v bv acc0)\n                (f32_mul_add a1v bv acc1)\n            end\n          in\n          let #(acc0, acc1) = kloop 0 (f32_set1 #0.0s) (f32_set1 #0.0s) in\n          let out_off0 =\n            View.offset vout + Shape.ravel_index out_idx0 out_str\n          in\n          Float32x4.Array.unsafe_set c_buf ~idx:out_off0 acc0;\n          let out_off1 =\n            View.offset vout + Shape.ravel_index out_idx1 out_str\n          in\n          
Float32x4.Array.unsafe_set c_buf ~idx:out_off1 acc1;\n    \n          j := !j + 4\n        done;\n    \n        while !j < n do\n          out_idx0.(nd_out - 1) <- !j;\n          b_idx.(nd_b - 1) <- !j;\n    \n          let rec scalar l acc0 acc1 =\n            if l = k then #(acc0, acc1)\n            else begin\n              a_idx0.(nd_a - 1) <- l;\n              a_idx1.(nd_a - 1) <- l;\n              b_idx.(nd_b - 2) <- l;\n    \n              let av0 =\n                Array.unsafe_get a_buf\n                  (View.offset va + Shape.ravel_index a_idx0 a_str)\n              in\n              let av1 =\n                Array.unsafe_get a_buf\n                  (View.offset va + Shape.ravel_index a_idx1 a_str)\n              in\n              let bv =\n                Array.unsafe_get b_buf\n                  (View.offset vb + Shape.ravel_index b_idx b_str)\n              in\n              scalar (l + 1) (Float32_u.fma av0 bv acc0) (Float32_u.fma av1 bv acc1)\n            end\n          in\n    \n          let #(sum0, sum1) = scalar 0 #0.0s #0.0s in\n          let out0 =\n            View.offset vout + Shape.ravel_index out_idx0 out_str\n          in\n          Array.unsafe_set c_buf out0 sum0;\n\n          out_idx1.(nd_out - 1) <- !j;\n          let out1 =\n            View.offset vout + Shape.ravel_index out_idx1 out_str\n          in\n          Array.unsafe_set c_buf out1 sum1;\n\n          j := !j + 1\n        done;\n    \n        work := !work + 2\n    end else\n      begin\n        if batch_sz <> 1 then begin\n          Shape.unravel_index_into batch batch_shape out_idx0;\n        end;\n    \n        Shape.broadcast_index_into out_idx0 (shape va) a_idx0;\n        Shape.broadcast_index_into out_idx0 (shape vb) b_idx;\n    \n        out_idx0.(nd_out - 2) <- i0;\n        a_idx0.(nd_a - 2) <- i0;\n    \n        let j = ref 0 in\n        (* 8 columns per iteration via two Float32x4 accumulators; like the\n           other SIMD loads in this path, this assumes the last dims of B and\n           C have unit stride *)\n        while !j + 7 < n do\n          out_idx0.(nd_out - 1) <- !j;\n          b_idx.(nd_b - 1) <- !j;\n    \n          let rec kloop_r0 l acc0 acc1 =\n            if l = k then #(acc0, acc1)\n            else begin\n              a_idx0.(nd_a - 1) <- l;\n              b_idx.(nd_b - 2) <- l;\n              let av0 =\n                Array.unsafe_get a_buf\n                  (View.offset va + Shape.ravel_index a_idx0 a_str)\n              in\n              let b_off = View.offset vb + Shape.ravel_index b_idx b_str in\n              let bv0 = Float32x4.Array.unsafe_get b_buf ~idx:b_off in\n              let bv1 = Float32x4.Array.unsafe_get b_buf ~idx:(b_off + 4) in\n              let a0v = f32_set1 av0 in\n              kloop_r0 (l + 1)\n                (f32_mul_add a0v bv0 acc0)\n                (f32_mul_add a0v bv1 acc1)\n            end\n          in\n          let #(acc0, acc1) = kloop_r0 0 (f32_set1 #0.0s) (f32_set1 #0.0s) in\n          let out_off0 =\n            View.offset vout + Shape.ravel_index out_idx0 out_str\n          in\n          Float32x4.Array.unsafe_set c_buf ~idx:out_off0 acc0;\n          Float32x4.Array.unsafe_set c_buf ~idx:(out_off0 + 4) acc1;\n    \n          j := !j + 8\n        done;\n\n        while !j + 3 < n do\n          out_idx0.(nd_out - 1) <- !j;\n          b_idx.(nd_b - 1) <- !j;\n    \n          let rec kloop l acc0 =\n            if l = k then acc0\n            else begin\n              a_idx0.(nd_a - 1) <- l;\n              b_idx.(nd_b - 2) <- l;\n              let av0 =\n                Array.unsafe_get a_buf\n                  (View.offset va + Shape.ravel_index a_idx0 a_str)\n              in\n              let bv =\n                Float32x4.Array.unsafe_get b_buf\n                  ~idx:(View.offset vb + Shape.ravel_index b_idx b_str)\n              in\n              let a0v = f32_set1 av0 in\n              kloop (l + 1) (f32_mul_add a0v bv acc0)\n            end\n          in\n          let acc0 = kloop 0 (f32_set1 #0.0s) in\n          let out_off0 =\n            View.offset vout + Shape.ravel_index out_idx0 out_str\n          in\n          Float32x4.Array.unsafe_set c_buf ~idx:out_off0 acc0;\n    \n          j := !j + 4\n        done;\n    \n        while !j < n do\n          out_idx0.(nd_out - 1) <- !j;\n          b_idx.(nd_b - 1) <- !j;\n    \n          let rec scalar l acc =\n            if l = k then acc\n            else begin\n              a_idx0.(nd_a - 1) <- l;\n              b_idx.(nd_b - 2) <- l;\n    \n              let av =\n                
Array.unsafe_get a_buf\n                  (View.offset va + Shape.ravel_index a_idx0 a_str)\n              in\n              let bv =\n                Array.unsafe_get b_buf\n                  (View.offset vb + Shape.ravel_index b_idx b_str)\n              in\n              scalar (l + 1) (Float32_u.fma av bv acc)\n            end\n          in\n    \n          let sum0 = scalar 0 #0.0s in\n          let out0 =\n            View.offset vout + Shape.ravel_index out_idx0 out_str\n          in\n          Array.unsafe_set c_buf out0 sum0;\n    \n          j := !j + 1\n        done;\n    \n        work := !work + 1\n    end;\n\n  done\n\nlet matmul_int64_fast a_buf b_buf c_buf va vb vout start_idx end_idx =\n  let mc = 128 in\n  let nc = 128 in\n  let kc = 64 in\n  let rank = Array.length (shape vout) in\n  let n = (shape vout).(rank - 1) in\n  let k = (shape va).(rank - 1) in\n\n  let a_rs = k and b_rs = n and c_rs = n in\n  let a0 = View.offset va and b0 = View.offset vb and c0 = View.offset vout in\n\n  let rec jc_loop jc =\n    if jc >= n then ()\n    else\n      let nc' = min nc (n - jc) in\n      let rec pc_loop pc =\n        if pc >= k then ()\n        else\n          let kc' = min kc (k - pc) in\n          let rec ic_loop ic =\n            if ic >= end_idx then ()\n            else\n              let mc' = min mc (end_idx - ic) in\n              for i = ic to ic + mc' - 1 do\n                let a_row = a0 + (i * a_rs) + pc\n                and c_row = c0 + (i * c_rs) + jc in\n                for j = jc to jc + nc' - 1 do\n                  let a_idx0 = a_row in\n                  let b_idx0 = b0 + (pc * b_rs) + j in\n\n                  let rec loop p a_idx b_idx acc =\n                    if p = kc' then\n                      acc\n                    else\n                      let av = Array.unsafe_get a_buf a_idx in\n                      let bv = Array.unsafe_get b_buf b_idx in\n                      loop (p + 1) (a_idx + 1) (b_idx + b_rs) (Int64_u.add (Int64_u.mul av bv) acc)\n                  in\n                  let sum = loop 0 a_idx0 b_idx0 #0L in\n                  Array.unsafe_set c_buf (c_row + j - jc) sum\n                done\n              done;\n              ic_loop (ic + mc')\n          in\n          ic_loop start_idx;\n          pc_loop (pc + kc')\n      in\n      pc_loop 0;\n      jc_loop (jc + nc')\n  in\n  jc_loop 0\n\nlet matmul_int64_slow a_buf b_buf c_buf va vb vout start_idx end_idx =\n  let nd_a, nd_b, nd_out =\n    (Array.length (shape va), Array.length (shape vb), Array.length (shape vout))\n  in\n  let rank = Array.length (shape vout) in\n  let m = (shape vout).(rank - 2) in\n  let n = (shape vout).(rank - 1) in\n  let k = (shape va).(rank - 1) in\n  let a_idx = Array.make nd_a 0\n  and b_idx = Array.make nd_b 0\n  and out_idx = Array.make nd_out 0 in\n  let a_str = View.strides va\n  and b_str = View.strides vb\n  and out_str = View.strides vout in\n  let batch_shape = Array.sub (shape vout) 0 (max 0 (nd_out - 2)) in\n  let batch_sz =\n    if Array.length batch_shape = 0 then 1 else Shape.numel batch_shape\n  in\n\n  for work = start_idx to end_idx - 1 do\n    let batch = work / m and i = work mod m in\n    (* unravel batch index into leading dims of C *)\n    if batch_sz <> 1 then Shape.unravel_index_into batch batch_shape out_idx;\n    (* broadcast batch into a_idx / b_idx *)\n    Shape.broadcast_index_into out_idx (shape va) a_idx;\n    Shape.broadcast_index_into out_idx (shape vb) b_idx;\n    (* set row index *)\n    out_idx.(nd_out - 2) <- i;\n    a_idx.(nd_a - 2) <- i;\n    for j = 0 to n - 1 do\n      out_idx.(nd_out - 1) <- j;\n      b_idx.(nd_b - 1) <- j;\n      let rec loop l acc =\n        if l = k then\n          acc\n        else (\n          a_idx.(nd_a - 1) <- l;\n          b_idx.(nd_b - 2) <- l;\n\n          let av =\n            Array.unsafe_get a_buf 
(View.offset va + Shape.ravel_index a_idx a_str)\n          in\n          let bv =\n            Array.unsafe_get b_buf (View.offset vb + Shape.ravel_index b_idx b_str)\n          in\n      \n          loop (l + 1) (Int64_u.add (Int64_u.mul av bv) acc)\n        )\n      in\n      let sum = loop 0 #0L in\n      \n      let out_off = View.offset vout + Shape.ravel_index out_idx out_str in\n      Array.unsafe_set c_buf out_off sum\n    done\n  done\n\nlet matmul_int32_fast a_buf b_buf c_buf va vb vout start_idx end_idx = \n  let mc = 128 in\n  let nc = 128 in\n  let kc = 64 in\n  let rank = Array.length (shape vout) in\n  let n = (shape vout).(rank - 1) in\n  let k = (shape va).(rank - 1) in\n\n  let a_rs = k and b_rs = n and c_rs = n in\n  let a0 = View.offset va and b0 = View.offset vb and c0 = View.offset vout in\n\n\n  let rec jc_loop jc =\n    if jc >= n then ()\n    else\n      let nc' = min nc (n - jc) in\n      let rec pc_loop pc =\n        if pc >= k then ()\n        else\n          let kc' = min kc (k - pc) in\n          let rec ic_loop ic =\n            if ic >= end_idx then ()\n            else\n              let mc' = min mc (end_idx - ic) in\n              for i = ic to ic + mc' - 1 do\n                let a_row = a0 + (i * a_rs) + pc\n                and c_row = c0 + (i * c_rs) + jc in\n                for j = jc to jc + nc' - 1 do\n                  let a_idx0 = a_row in\n                  let b_idx0 = b0 + (pc * b_rs) + j in\n                  \n                  let rec loop p a_idx b_idx acc =\n                    if p = kc' then\n                      acc\n                    else\n                      let av = Array.unsafe_get a_buf a_idx in\n                      let bv = Array.unsafe_get b_buf b_idx in\n                      loop (p + 1) (a_idx + 1) (b_idx + b_rs) (Int32_u.add (Int32_u.mul av bv) acc)\n                  in\n                  let sum = loop 0 a_idx0 b_idx0 #0l in\n                  Array.unsafe_set c_buf (c_row + j - jc) sum\n    
            done\n              done;\n              ic_loop (ic + mc')\n          in\n          ic_loop start_idx;\n          pc_loop (pc + kc')\n      in\n      pc_loop 0;\n      jc_loop (jc + nc')\n  in\n  jc_loop 0\n\nlet matmul_int32_slow a_buf b_buf c_buf va vb vout start_idx end_idx =\n  let nd_a, nd_b, nd_out =\n    (Array.length (shape va), Array.length (shape vb), Array.length (shape vout))\n  in\n  let rank = Array.length (shape vout) in\n  let m = (shape vout).(rank - 2) in\n  let n = (shape vout).(rank - 1) in\n  let k = (shape va).(rank - 1) in\n  let a_idx = Array.make nd_a 0\n  and b_idx = Array.make nd_b 0\n  and out_idx = Array.make nd_out 0 in\n  let a_str = View.strides va\n  and b_str = View.strides vb\n  and out_str = View.strides vout in\n  let batch_shape = Array.sub (shape vout) 0 (max 0 (nd_out - 2)) in\n  let batch_sz =\n    if Array.length batch_shape = 0 then 1 else Shape.numel batch_shape\n  in\n\n  for work = start_idx to end_idx - 1 do\n    let batch = work / m and i = work mod m in\n    (* unravel batch index into leading dims of C *)\n    if batch_sz <> 1 then Shape.unravel_index_into batch batch_shape out_idx;\n    (* broadcast batch into a_idx / b_idx *)\n    Shape.broadcast_index_into out_idx (shape va) a_idx;\n    Shape.broadcast_index_into out_idx (shape vb) b_idx;\n    (* set row index *)\n    out_idx.(nd_out - 2) <- i;\n    a_idx.(nd_a - 2) <- i;\n    for j = 0 to n - 1 do\n      out_idx.(nd_out - 1) <- j;\n      b_idx.(nd_b - 1) <- j;\n      let rec loop l acc =\n        if l = k then\n          acc\n        else (\n          a_idx.(nd_a - 1) <- l;\n          b_idx.(nd_b - 2) <- l;\n\n          let av =\n            Array.unsafe_get a_buf (View.offset va + Shape.ravel_index a_idx a_str)\n          in\n          let bv =\n            Array.unsafe_get b_buf (View.offset vb + Shape.ravel_index b_idx b_str)\n          in\n\n          loop (l + 1) (Int32_u.add (Int32_u.mul av bv) 
acc)\n        )\n      in\n      let sum = loop 0 #0l in\n      \n      let out_off = View.offset vout + Shape.ravel_index out_idx out_str in\n      Array.unsafe_set c_buf out_off sum\n    done\n  done\n"
  },
  {
    "path": "packages/nx-oxcaml/lib/op_pad.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Import\n\nlet pad_float64 (in_arr : float# array) (out_arr : float# array) in_shape padding\n    in_offset out_offset in_strides out_strides in_numel =\n  let ndim = Array.length in_shape in\n  let md_index = Array.make ndim 0 in\n  for k = 0 to in_numel - 1 do\n    Shape.unravel_index_into k in_shape md_index;\n    let src_lin = in_offset + Shape.ravel_index md_index in_strides in\n    let v = Array.unsafe_get in_arr src_lin in\n    for d = 0 to ndim - 1 do\n      let before, _ = padding.(d) in\n      md_index.(d) <- md_index.(d) + before\n    done;\n    let dst_lin = out_offset + Shape.ravel_index md_index out_strides in\n    Array.unsafe_set out_arr dst_lin v\n  done\n\nlet pad_float32 (in_arr : float32# array) (out_arr : float32# array) in_shape\n    padding in_offset out_offset in_strides out_strides in_numel =\n  let ndim = Array.length in_shape in\n  let md_index = Array.make ndim 0 in\n  for k = 0 to in_numel - 1 do\n    Shape.unravel_index_into k in_shape md_index;\n    let src_lin = in_offset + Shape.ravel_index md_index in_strides in\n    let v = Array.unsafe_get in_arr src_lin in\n    for d = 0 to ndim - 1 do\n      let before, _ = padding.(d) in\n      md_index.(d) <- md_index.(d) + before\n    done;\n    let dst_lin = out_offset + Shape.ravel_index md_index out_strides in\n    Array.unsafe_set out_arr dst_lin v\n  done\n\nlet pad_int8 (in_arr : int8# array) (out_arr : int8# array) in_shape padding\n    in_offset out_offset in_strides out_strides in_numel =\n  let ndim = Array.length in_shape in\n  let md_index = Array.make ndim 0 in\n  for k = 0 to in_numel - 1 do\n    Shape.unravel_index_into k in_shape md_index;\n    let src_lin = in_offset + Shape.ravel_index md_index 
in_strides in\n    let v = Array.unsafe_get in_arr src_lin in\n    for d = 0 to ndim - 1 do\n      let before, _ = padding.(d) in\n      md_index.(d) <- md_index.(d) + before\n    done;\n    let dst_lin = out_offset + Shape.ravel_index md_index out_strides in\n    Array.unsafe_set out_arr dst_lin v\n  done\n\nlet pad_int16 (in_arr : int16# array) (out_arr : int16# array) in_shape padding\n    in_offset out_offset in_strides out_strides in_numel =\n  let ndim = Array.length in_shape in\n  let md_index = Array.make ndim 0 in\n  for k = 0 to in_numel - 1 do\n    Shape.unravel_index_into k in_shape md_index;\n    let src_lin = in_offset + Shape.ravel_index md_index in_strides in\n    let v = Array.unsafe_get in_arr src_lin in\n    for d = 0 to ndim - 1 do\n      let before, _ = padding.(d) in\n      md_index.(d) <- md_index.(d) + before\n    done;\n    let dst_lin = out_offset + Shape.ravel_index md_index out_strides in\n    Array.unsafe_set out_arr dst_lin v\n  done\n\nlet pad_int32 (in_arr : int32# array) (out_arr : int32# array) in_shape padding\n    in_offset out_offset in_strides out_strides in_numel =\n  let ndim = Array.length in_shape in\n  let md_index = Array.make ndim 0 in\n  for k = 0 to in_numel - 1 do\n    Shape.unravel_index_into k in_shape md_index;\n    let src_lin = in_offset + Shape.ravel_index md_index in_strides in\n    let v = Array.unsafe_get in_arr src_lin in\n    for d = 0 to ndim - 1 do\n      let before, _ = padding.(d) in\n      md_index.(d) <- md_index.(d) + before\n    done;\n    let dst_lin = out_offset + Shape.ravel_index md_index out_strides in\n    Array.unsafe_set out_arr dst_lin v\n  done\n\nlet pad_int64 (in_arr : int64# array) (out_arr : int64# array) in_shape padding\n    in_offset out_offset in_strides out_strides in_numel =\n  let ndim = Array.length in_shape in\n  let md_index = Array.make ndim 0 in\n  for k = 0 to in_numel - 1 do\n    Shape.unravel_index_into k in_shape md_index;\n    let src_lin = in_offset + 
Shape.ravel_index md_index in_strides in\n    let v = Array.unsafe_get in_arr src_lin in\n    for d = 0 to ndim - 1 do\n      let before, _ = padding.(d) in\n      md_index.(d) <- md_index.(d) + before\n    done;\n    let dst_lin = out_offset + Shape.ravel_index md_index out_strides in\n    Array.unsafe_set out_arr dst_lin v\n  done\n\nlet pad_bool (in_arr : bool array) (out_arr : bool array) in_shape padding\n    in_offset out_offset in_strides out_strides in_numel =\n  let ndim = Array.length in_shape in\n  let md_index = Array.make ndim 0 in\n  for k = 0 to in_numel - 1 do\n    Shape.unravel_index_into k in_shape md_index;\n    let src_lin = in_offset + Shape.ravel_index md_index in_strides in\n    let v = Array.unsafe_get in_arr src_lin in\n    for d = 0 to ndim - 1 do\n      let before, _ = padding.(d) in\n      md_index.(d) <- md_index.(d) + before\n    done;\n    let dst_lin = out_offset + Shape.ravel_index md_index out_strides in\n    Array.unsafe_set out_arr dst_lin v\n  done\n"
  },
  {
    "path": "packages/nx-oxcaml/lib/op_scatter.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Import\n\nlet init_state ishape idx_strides src_strides out_strides dshape axis start_idx\n    idx_offset src_offset out_offset =\n  let rank = Array.length dshape in\n  let md_index = Array.make rank 0 in\n  if start_idx <> 0 then Shape.unravel_index_into start_idx ishape md_index;\n  let idx_lin = ref (idx_offset + Shape.ravel_index md_index idx_strides) in\n  let src_lin = ref (src_offset + Shape.ravel_index md_index src_strides) in\n  let dst_base = ref out_offset in\n  for d = 0 to rank - 1 do\n    if d <> axis then\n      dst_base :=\n        !dst_base + (Array.unsafe_get md_index d * Array.unsafe_get out_strides d)\n  done;\n  (md_index, idx_lin, src_lin, dst_base)\n\nlet advance_state md_index ishape idx_strides src_strides out_strides axis idx_lin\n    src_lin dst_base =\n  let d = ref (Array.length ishape - 1) in\n  while !d >= 0 do\n    let dim = !d in\n    let cur = Array.unsafe_get md_index dim in\n    let next = cur + 1 in\n    if next < Array.unsafe_get ishape dim then (\n      Array.unsafe_set md_index dim next;\n      idx_lin := !idx_lin + Array.unsafe_get idx_strides dim;\n      src_lin := !src_lin + Array.unsafe_get src_strides dim;\n      if dim <> axis then\n        dst_base := !dst_base + Array.unsafe_get out_strides dim;\n      d := -1)\n    else (\n      Array.unsafe_set md_index dim 0;\n      idx_lin := !idx_lin - (cur * Array.unsafe_get idx_strides dim);\n      src_lin := !src_lin - (cur * Array.unsafe_get src_strides dim);\n      if dim <> axis then\n        dst_base := !dst_base - (cur * Array.unsafe_get out_strides dim);\n      d := dim - 1)\n  done\n\nlet scatter_float64 mode (src : float# array) (dst : float# array) ishape dshape\n    axis (idx_arr : int32# 
array) src_offset src_strides idx_offset idx_strides\n    out_offset out_strides start_idx end_idx =\n  if start_idx >= end_idx then ()\n  else\n    let axis_stride = Array.unsafe_get out_strides axis in\n    let md_index, idx_lin, src_lin, dst_base =\n      init_state ishape idx_strides src_strides out_strides dshape axis start_idx\n        idx_offset src_offset out_offset\n    in\n    let step () =\n      let idx =\n        Int32.to_int (Int32_u.to_int32 (Array.unsafe_get idx_arr !idx_lin))\n      in\n      if idx < 0 || idx >= Array.unsafe_get dshape axis then\n        invalid_arg \"scatter: index out of bounds\";\n      let dst_lin = !dst_base + (idx * axis_stride) in\n      let v = Array.unsafe_get src !src_lin in\n      (match mode with\n      | `Set -> Array.unsafe_set dst dst_lin v\n      | `Add ->\n          Array.unsafe_set dst dst_lin\n            (Float_u.add (Array.unsafe_get dst dst_lin) v))\n    in\n    let i = ref start_idx in\n    let n4 = end_idx - 3 in\n    while !i < n4 do\n      step ();\n      advance_state md_index ishape idx_strides src_strides out_strides axis\n        idx_lin src_lin dst_base;\n      step ();\n      advance_state md_index ishape idx_strides src_strides out_strides axis\n        idx_lin src_lin dst_base;\n      step ();\n      advance_state md_index ishape idx_strides src_strides out_strides axis\n        idx_lin src_lin dst_base;\n      step ();\n      i := !i + 4;\n      if !i < end_idx then\n        advance_state md_index ishape idx_strides src_strides out_strides axis\n          idx_lin src_lin dst_base\n    done;\n    while !i < end_idx do\n      step ();\n      incr i;\n      if !i < end_idx then\n        advance_state md_index ishape idx_strides src_strides out_strides axis\n          idx_lin src_lin dst_base\n    done\n\nlet scatter_float32 mode (src : float32# array) (dst : float32# array) ishape\n    dshape axis (idx_arr : int32# array) src_offset src_strides idx_offset\n    idx_strides out_offset out_strides 
start_idx end_idx =\n  if start_idx >= end_idx then ()\n  else\n    let axis_stride = Array.unsafe_get out_strides axis in\n    let md_index, idx_lin, src_lin, dst_base =\n      init_state ishape idx_strides src_strides out_strides dshape axis start_idx\n        idx_offset src_offset out_offset\n    in\n    let step () =\n      let idx =\n        Int32.to_int (Int32_u.to_int32 (Array.unsafe_get idx_arr !idx_lin))\n      in\n      if idx < 0 || idx >= Array.unsafe_get dshape axis then\n        invalid_arg \"scatter: index out of bounds\";\n      let dst_lin = !dst_base + (idx * axis_stride) in\n      let v = Array.unsafe_get src !src_lin in\n      (match mode with\n      | `Set -> Array.unsafe_set dst dst_lin v\n      | `Add ->\n          Array.unsafe_set dst dst_lin\n            (Float32_u.add (Array.unsafe_get dst dst_lin) v))\n    in\n    let i = ref start_idx in\n    let n4 = end_idx - 3 in\n    while !i < n4 do\n      step ();\n      advance_state md_index ishape idx_strides src_strides out_strides axis\n        idx_lin src_lin dst_base;\n      step ();\n      advance_state md_index ishape idx_strides src_strides out_strides axis\n        idx_lin src_lin dst_base;\n      step ();\n      advance_state md_index ishape idx_strides src_strides out_strides axis\n        idx_lin src_lin dst_base;\n      step ();\n      i := !i + 4;\n      if !i < end_idx then\n        advance_state md_index ishape idx_strides src_strides out_strides axis\n          idx_lin src_lin dst_base\n    done;\n    while !i < end_idx do\n      step ();\n      incr i;\n      if !i < end_idx then\n        advance_state md_index ishape idx_strides src_strides out_strides axis\n          idx_lin src_lin dst_base\n    done\n\nlet scatter_int8 mode (src : int8# array) (dst : int8# array) ishape dshape axis\n    (idx_arr : int32# array) src_offset src_strides idx_offset idx_strides\n    out_offset out_strides start_idx end_idx =\n  if start_idx >= end_idx then ()\n  else\n    let axis_stride = 
Array.unsafe_get out_strides axis in\n    let md_index, idx_lin, src_lin, dst_base =\n      init_state ishape idx_strides src_strides out_strides dshape axis start_idx\n        idx_offset src_offset out_offset\n    in\n    let step () =\n      let idx =\n        Int32.to_int (Int32_u.to_int32 (Array.unsafe_get idx_arr !idx_lin))\n      in\n      if idx < 0 || idx >= Array.unsafe_get dshape axis then\n        invalid_arg \"scatter: index out of bounds\";\n      let dst_lin = !dst_base + (idx * axis_stride) in\n      let v = Array.unsafe_get src !src_lin in\n      (match mode with\n      | `Set -> Array.unsafe_set dst dst_lin v\n      | `Add ->\n          Array.unsafe_set dst dst_lin\n            (Int8_u.add (Array.unsafe_get dst dst_lin) v))\n    in\n    let i = ref start_idx in\n    let n4 = end_idx - 3 in\n    while !i < n4 do\n      step ();\n      advance_state md_index ishape idx_strides src_strides out_strides axis\n        idx_lin src_lin dst_base;\n      step ();\n      advance_state md_index ishape idx_strides src_strides out_strides axis\n        idx_lin src_lin dst_base;\n      step ();\n      advance_state md_index ishape idx_strides src_strides out_strides axis\n        idx_lin src_lin dst_base;\n      step ();\n      i := !i + 4;\n      if !i < end_idx then\n        advance_state md_index ishape idx_strides src_strides out_strides axis\n          idx_lin src_lin dst_base\n    done;\n    while !i < end_idx do\n      step ();\n      incr i;\n      if !i < end_idx then\n        advance_state md_index ishape idx_strides src_strides out_strides axis\n          idx_lin src_lin dst_base\n    done\n\nlet scatter_int16 mode (src : int16# array) (dst : int16# array) ishape dshape\n    axis (idx_arr : int32# array) src_offset src_strides idx_offset idx_strides\n    out_offset out_strides start_idx end_idx =\n  if start_idx >= end_idx then ()\n  else\n    let axis_stride = Array.unsafe_get out_strides axis in\n    let md_index, idx_lin, src_lin, dst_base =\n      
init_state ishape idx_strides src_strides out_strides dshape axis start_idx\n        idx_offset src_offset out_offset\n    in\n    let step () =\n      let idx =\n        Int32.to_int (Int32_u.to_int32 (Array.unsafe_get idx_arr !idx_lin))\n      in\n      if idx < 0 || idx >= Array.unsafe_get dshape axis then\n        invalid_arg \"scatter: index out of bounds\";\n      let dst_lin = !dst_base + (idx * axis_stride) in\n      let v = Array.unsafe_get src !src_lin in\n      (match mode with\n      | `Set -> Array.unsafe_set dst dst_lin v\n      | `Add ->\n          Array.unsafe_set dst dst_lin\n            (Int16_u.add (Array.unsafe_get dst dst_lin) v))\n    in\n    let i = ref start_idx in\n    let n4 = end_idx - 3 in\n    while !i < n4 do\n      step ();\n      advance_state md_index ishape idx_strides src_strides out_strides axis\n        idx_lin src_lin dst_base;\n      step ();\n      advance_state md_index ishape idx_strides src_strides out_strides axis\n        idx_lin src_lin dst_base;\n      step ();\n      advance_state md_index ishape idx_strides src_strides out_strides axis\n        idx_lin src_lin dst_base;\n      step ();\n      i := !i + 4;\n      if !i < end_idx then\n        advance_state md_index ishape idx_strides src_strides out_strides axis\n          idx_lin src_lin dst_base\n    done;\n    while !i < end_idx do\n      step ();\n      incr i;\n      if !i < end_idx then\n        advance_state md_index ishape idx_strides src_strides out_strides axis\n          idx_lin src_lin dst_base\n    done\n\nlet scatter_int32 mode (src : int32# array) (dst : int32# array) ishape dshape\n    axis (idx_arr : int32# array) src_offset src_strides idx_offset idx_strides\n    out_offset out_strides start_idx end_idx =\n  if start_idx >= end_idx then ()\n  else\n    let axis_stride = Array.unsafe_get out_strides axis in\n    let md_index, idx_lin, src_lin, dst_base =\n      init_state ishape idx_strides src_strides out_strides dshape axis start_idx\n        
idx_offset src_offset out_offset\n    in\n    let step () =\n      let idx =\n        Int32.to_int (Int32_u.to_int32 (Array.unsafe_get idx_arr !idx_lin))\n      in\n      if idx < 0 || idx >= Array.unsafe_get dshape axis then\n        invalid_arg \"scatter: index out of bounds\";\n      let dst_lin = !dst_base + (idx * axis_stride) in\n      let v = Array.unsafe_get src !src_lin in\n      (match mode with\n      | `Set -> Array.unsafe_set dst dst_lin v\n      | `Add ->\n          Array.unsafe_set dst dst_lin\n            (Int32_u.add (Array.unsafe_get dst dst_lin) v))\n    in\n    let i = ref start_idx in\n    let n4 = end_idx - 3 in\n    while !i < n4 do\n      step ();\n      advance_state md_index ishape idx_strides src_strides out_strides axis\n        idx_lin src_lin dst_base;\n      step ();\n      advance_state md_index ishape idx_strides src_strides out_strides axis\n        idx_lin src_lin dst_base;\n      step ();\n      advance_state md_index ishape idx_strides src_strides out_strides axis\n        idx_lin src_lin dst_base;\n      step ();\n      i := !i + 4;\n      if !i < end_idx then\n        advance_state md_index ishape idx_strides src_strides out_strides axis\n          idx_lin src_lin dst_base\n    done;\n    while !i < end_idx do\n      step ();\n      incr i;\n      if !i < end_idx then\n        advance_state md_index ishape idx_strides src_strides out_strides axis\n          idx_lin src_lin dst_base\n    done\n\nlet scatter_int64 mode (src : int64# array) (dst : int64# array) ishape dshape\n    axis (idx_arr : int32# array) src_offset src_strides idx_offset idx_strides\n    out_offset out_strides start_idx end_idx =\n  if start_idx >= end_idx then ()\n  else\n    let axis_stride = Array.unsafe_get out_strides axis in\n    let md_index, idx_lin, src_lin, dst_base =\n      init_state ishape idx_strides src_strides out_strides dshape axis start_idx\n        idx_offset src_offset out_offset\n    in\n    let step () =\n      let idx =\n        
Int32.to_int (Int32_u.to_int32 (Array.unsafe_get idx_arr !idx_lin))\n      in\n      if idx < 0 || idx >= Array.unsafe_get dshape axis then\n        invalid_arg \"scatter: index out of bounds\";\n      let dst_lin = !dst_base + (idx * axis_stride) in\n      let v = Array.unsafe_get src !src_lin in\n      (match mode with\n      | `Set -> Array.unsafe_set dst dst_lin v\n      | `Add ->\n          Array.unsafe_set dst dst_lin\n            (Int64_u.add (Array.unsafe_get dst dst_lin) v))\n    in\n    let i = ref start_idx in\n    let n4 = end_idx - 3 in\n    while !i < n4 do\n      step ();\n      advance_state md_index ishape idx_strides src_strides out_strides axis\n        idx_lin src_lin dst_base;\n      step ();\n      advance_state md_index ishape idx_strides src_strides out_strides axis\n        idx_lin src_lin dst_base;\n      step ();\n      advance_state md_index ishape idx_strides src_strides out_strides axis\n        idx_lin src_lin dst_base;\n      step ();\n      i := !i + 4;\n      if !i < end_idx then\n        advance_state md_index ishape idx_strides src_strides out_strides axis\n          idx_lin src_lin dst_base\n    done;\n    while !i < end_idx do\n      step ();\n      incr i;\n      if !i < end_idx then\n        advance_state md_index ishape idx_strides src_strides out_strides axis\n          idx_lin src_lin dst_base\n    done\n\nlet scatter_bool mode (src : bool array) (dst : bool array) ishape dshape axis\n    (idx_arr : int32# array) src_offset src_strides idx_offset idx_strides\n    out_offset out_strides start_idx end_idx =\n  if start_idx >= end_idx then ()\n  else\n    let axis_stride = Array.unsafe_get out_strides axis in\n    let md_index, idx_lin, src_lin, dst_base =\n      init_state ishape idx_strides src_strides out_strides dshape axis start_idx\n        idx_offset src_offset out_offset\n    in\n    let step () =\n      let idx =\n        Int32.to_int (Int32_u.to_int32 (Array.unsafe_get idx_arr !idx_lin))\n      in\n      if idx < 0 
|| idx >= Array.unsafe_get dshape axis then\n        invalid_arg \"scatter: index out of bounds\";\n      let dst_lin = !dst_base + (idx * axis_stride) in\n      let v = Array.unsafe_get src !src_lin in\n      (match mode with\n      | `Set -> Array.unsafe_set dst dst_lin v\n      | `Add -> Array.unsafe_set dst dst_lin (Array.unsafe_get dst dst_lin || v))\n    in\n    let i = ref start_idx in\n    let n4 = end_idx - 3 in\n    while !i < n4 do\n      step ();\n      advance_state md_index ishape idx_strides src_strides out_strides axis\n        idx_lin src_lin dst_base;\n      step ();\n      advance_state md_index ishape idx_strides src_strides out_strides axis\n        idx_lin src_lin dst_base;\n      step ();\n      advance_state md_index ishape idx_strides src_strides out_strides axis\n        idx_lin src_lin dst_base;\n      step ();\n      i := !i + 4;\n      if !i < end_idx then\n        advance_state md_index ishape idx_strides src_strides out_strides axis\n          idx_lin src_lin dst_base\n    done;\n    while !i < end_idx do\n      step ();\n      incr i;\n      if !i < end_idx then\n        advance_state md_index ishape idx_strides src_strides out_strides axis\n          idx_lin src_lin dst_base\n    done\n"
  },
  {
    "path": "packages/nx-oxcaml/lib/op_sort.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Import\n\nlet parallel_threshold = 64\n\n(* Stable merge sort on indices. Sorts [indices[0..n-1]] using the\n   comparison function [cmp]. [tmp] is a pre-allocated scratch buffer. *)\nlet merge_sort_indices (indices : int array) (tmp : int array) n\n    (cmp : int -> int -> int) =\n  (* Bottom-up merge sort *)\n  let width = ref 1 in\n  while !width < n do\n    let w = !width in\n    let i = ref 0 in\n    while !i < n do\n      let left = !i in\n      let mid = min (left + w) n in\n      let right = min (left + 2 * w) n in\n      (* Merge [left..mid) and [mid..right) into tmp *)\n      let l = ref left in\n      let r = ref mid in\n      for k = left to right - 1 do\n        if !l < mid && (!r >= right || cmp indices.(!l) indices.(!r) <= 0) then (\n          tmp.(k) <- indices.(!l);\n          incr l)\n        else (\n          tmp.(k) <- indices.(!r);\n          incr r)\n      done;\n      (* Copy back *)\n      Array.blit tmp left indices left (right - left);\n      i := !i + 2 * w\n    done;\n    width := w * 2\n  done\n\n(* --- sort --- *)\n\nlet sort_float64 pool ~(out_arr : float# array) ~a_arr ~va ~vout ~axis\n    ~descending =\n  let in_shape = shape va in\n  let rank = Array.length in_shape in\n  let axis_size = in_shape.(axis) in\n  if axis_size <= 1 then (\n    (* Nothing to sort, just copy *)\n    let n = numel vout in\n    let out_offset = View.offset vout in\n    let a_offset = View.offset va in\n    if View.is_c_contiguous vout && View.is_c_contiguous va then\n      for i = 0 to n - 1 do\n        Array.unsafe_set out_arr (out_offset + i)\n          (Array.unsafe_get a_arr (a_offset + i))\n      done\n    else\n      let out_shape = shape vout in\n      let out_strides = 
View.strides vout in\n      let a_strides = View.strides va in\n      let md_idx = Array.make rank 0 in\n      for i = 0 to n - 1 do\n        Shape.unravel_index_into i out_shape md_idx;\n        let a_lin = Shape.ravel_index md_idx a_strides in\n        let o_lin = Shape.ravel_index md_idx out_strides in\n        Array.unsafe_set out_arr (out_offset + o_lin)\n          (Array.unsafe_get a_arr (a_offset + a_lin))\n      done)\n  else\n    let outer =\n      let p = ref 1 in\n      for d = 0 to axis - 1 do\n        p := !p * in_shape.(d)\n      done;\n      !p\n    in\n    let inner =\n      let p = ref 1 in\n      for d = axis + 1 to rank - 1 do\n        p := !p * in_shape.(d)\n      done;\n      !p\n    in\n    let groups = outer * inner in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_strides = View.strides vout in\n    let out_offset = View.offset vout in\n    let a_axis_stride = a_strides.(axis) in\n    let out_axis_stride = out_strides.(axis) in\n    let work_on_group g =\n      let o = g / inner in\n      let i = g mod inner in\n      (* Compute base offset for this lane in input *)\n      let a_base =\n        let off = ref a_offset in\n        let rem = ref o in\n        for d = axis - 1 downto 0 do\n          let s = in_shape.(d) in\n          off := !off + (!rem mod s) * a_strides.(d);\n          rem := !rem / s\n        done;\n        let rem2 = ref i in\n        for d = rank - 1 downto axis + 1 do\n          let s = in_shape.(d) in\n          off := !off + (!rem2 mod s) * a_strides.(d);\n          rem2 := !rem2 / s\n        done;\n        !off\n      in\n      let out_base =\n        let off = ref out_offset in\n        let rem = ref o in\n        for d = axis - 1 downto 0 do\n          let s = in_shape.(d) in\n          off := !off + (!rem mod s) * out_strides.(d);\n          rem := !rem / s\n        done;\n        let rem2 = ref i in\n        for d = rank - 1 downto axis + 1 do\n          let s = 
in_shape.(d) in\n          off := !off + (!rem2 mod s) * out_strides.(d);\n          rem2 := !rem2 / s\n        done;\n        !off\n      in\n      let indices = Array.init axis_size Fun.id in\n      let tmp = Array.make axis_size 0 in\n      let acc = Array.make_float64 2 in\n      let cmp a_idx b_idx =\n        Array.unsafe_set acc 0\n          (Array.unsafe_get a_arr (a_base + (a_idx * a_axis_stride)));\n        Array.unsafe_set acc 1\n          (Array.unsafe_get a_arr (a_base + (b_idx * a_axis_stride)));\n        let c = Float_u.compare (Array.unsafe_get acc 0) (Array.unsafe_get acc 1) in\n        if descending then -c else c\n      in\n      merge_sort_indices indices tmp axis_size cmp;\n      for j = 0 to axis_size - 1 do\n        let src_idx = indices.(j) in\n        Array.unsafe_set out_arr\n          (out_base + (j * out_axis_stride))\n          (Array.unsafe_get a_arr (a_base + (src_idx * a_axis_stride)))\n      done\n    in\n    if groups > parallel_threshold then\n      Parallel.parallel_for pool 0 (groups - 1) (fun s e ->\n          for g = s to e - 1 do\n            work_on_group g\n          done)\n    else\n      for g = 0 to groups - 1 do\n        work_on_group g\n      done\n\nlet sort_float32 pool ~(out_arr : float32# array) ~a_arr ~va ~vout ~axis\n    ~descending =\n  let in_shape = shape va in\n  let rank = Array.length in_shape in\n  let axis_size = in_shape.(axis) in\n  if axis_size <= 1 then (\n    let n = numel vout in\n    let out_offset = View.offset vout in\n    let a_offset = View.offset va in\n    if View.is_c_contiguous vout && View.is_c_contiguous va then\n      for i = 0 to n - 1 do\n        Array.unsafe_set out_arr (out_offset + i)\n          (Array.unsafe_get a_arr (a_offset + i))\n      done\n    else\n      let out_shape = shape vout in\n      let out_strides = View.strides vout in\n      let a_strides = View.strides va in\n      let md_idx = Array.make rank 0 in\n      for i = 0 to n - 1 do\n        Shape.unravel_index_into i 
out_shape md_idx;\n        let a_lin = Shape.ravel_index md_idx a_strides in\n        let o_lin = Shape.ravel_index md_idx out_strides in\n        Array.unsafe_set out_arr (out_offset + o_lin)\n          (Array.unsafe_get a_arr (a_offset + a_lin))\n      done)\n  else\n    let outer =\n      let p = ref 1 in\n      for d = 0 to axis - 1 do\n        p := !p * in_shape.(d)\n      done;\n      !p\n    in\n    let inner =\n      let p = ref 1 in\n      for d = axis + 1 to rank - 1 do\n        p := !p * in_shape.(d)\n      done;\n      !p\n    in\n    let groups = outer * inner in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_strides = View.strides vout in\n    let out_offset = View.offset vout in\n    let a_axis_stride = a_strides.(axis) in\n    let out_axis_stride = out_strides.(axis) in\n    let work_on_group g =\n      let o = g / inner in\n      let i = g mod inner in\n      let a_base =\n        let off = ref a_offset in\n        let rem = ref o in\n        for d = axis - 1 downto 0 do\n          let s = in_shape.(d) in\n          off := !off + (!rem mod s) * a_strides.(d);\n          rem := !rem / s\n        done;\n        let rem2 = ref i in\n        for d = rank - 1 downto axis + 1 do\n          let s = in_shape.(d) in\n          off := !off + (!rem2 mod s) * a_strides.(d);\n          rem2 := !rem2 / s\n        done;\n        !off\n      in\n      let out_base =\n        let off = ref out_offset in\n        let rem = ref o in\n        for d = axis - 1 downto 0 do\n          let s = in_shape.(d) in\n          off := !off + (!rem mod s) * out_strides.(d);\n          rem := !rem / s\n        done;\n        let rem2 = ref i in\n        for d = rank - 1 downto axis + 1 do\n          let s = in_shape.(d) in\n          off := !off + (!rem2 mod s) * out_strides.(d);\n          rem2 := !rem2 / s\n        done;\n        !off\n      in\n      let indices = Array.init axis_size Fun.id in\n      let tmp = Array.make axis_size 0 
in\n      let acc = Array.make_float32 2 in\n      let cmp a_idx b_idx =\n        Array.unsafe_set acc 0\n          (Array.unsafe_get a_arr (a_base + (a_idx * a_axis_stride)));\n        Array.unsafe_set acc 1\n          (Array.unsafe_get a_arr (a_base + (b_idx * a_axis_stride)));\n        let c =\n          Float32_u.compare (Array.unsafe_get acc 0) (Array.unsafe_get acc 1)\n        in\n        if descending then -c else c\n      in\n      merge_sort_indices indices tmp axis_size cmp;\n      for j = 0 to axis_size - 1 do\n        let src_idx = indices.(j) in\n        Array.unsafe_set out_arr\n          (out_base + (j * out_axis_stride))\n          (Array.unsafe_get a_arr (a_base + (src_idx * a_axis_stride)))\n      done\n    in\n    if groups > parallel_threshold then\n      Parallel.parallel_for pool 0 (groups - 1) (fun s e ->\n          for g = s to e - 1 do\n            work_on_group g\n          done)\n    else\n      for g = 0 to groups - 1 do\n        work_on_group g\n      done\n\nlet sort_int32 pool ~(out_arr : int32# array) ~(a_arr : int32# array) ~va ~vout\n    ~axis ~descending =\n  let in_shape = shape va in\n  let rank = Array.length in_shape in\n  let axis_size = in_shape.(axis) in\n  if axis_size <= 1 then (\n    let n = numel vout in\n    let out_offset = View.offset vout in\n    let a_offset = View.offset va in\n    if View.is_c_contiguous vout && View.is_c_contiguous va then\n      for i = 0 to n - 1 do\n        Array.unsafe_set out_arr (out_offset + i)\n          (Array.unsafe_get a_arr (a_offset + i))\n      done\n    else\n      let out_shape = shape vout in\n      let out_strides = View.strides vout in\n      let a_strides = View.strides va in\n      let md_idx = Array.make rank 0 in\n      for i = 0 to n - 1 do\n        Shape.unravel_index_into i out_shape md_idx;\n        let a_lin = Shape.ravel_index md_idx a_strides in\n        let o_lin = Shape.ravel_index md_idx out_strides in\n        Array.unsafe_set out_arr (out_offset + o_lin)\n      
    (Array.unsafe_get a_arr (a_offset + a_lin))\n      done)\n  else\n    let outer =\n      let p = ref 1 in\n      for d = 0 to axis - 1 do\n        p := !p * in_shape.(d)\n      done;\n      !p\n    in\n    let inner =\n      let p = ref 1 in\n      for d = axis + 1 to rank - 1 do\n        p := !p * in_shape.(d)\n      done;\n      !p\n    in\n    let groups = outer * inner in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_strides = View.strides vout in\n    let out_offset = View.offset vout in\n    let a_axis_stride = a_strides.(axis) in\n    let out_axis_stride = out_strides.(axis) in\n    let work_on_group g =\n      let o = g / inner in\n      let i = g mod inner in\n      let a_base =\n        let off = ref a_offset in\n        let rem = ref o in\n        for d = axis - 1 downto 0 do\n          let s = in_shape.(d) in\n          off := !off + (!rem mod s) * a_strides.(d);\n          rem := !rem / s\n        done;\n        let rem2 = ref i in\n        for d = rank - 1 downto axis + 1 do\n          let s = in_shape.(d) in\n          off := !off + (!rem2 mod s) * a_strides.(d);\n          rem2 := !rem2 / s\n        done;\n        !off\n      in\n      let out_base =\n        let off = ref out_offset in\n        let rem = ref o in\n        for d = axis - 1 downto 0 do\n          let s = in_shape.(d) in\n          off := !off + (!rem mod s) * out_strides.(d);\n          rem := !rem / s\n        done;\n        let rem2 = ref i in\n        for d = rank - 1 downto axis + 1 do\n          let s = in_shape.(d) in\n          off := !off + (!rem2 mod s) * out_strides.(d);\n          rem2 := !rem2 / s\n        done;\n        !off\n      in\n      let indices = Array.init axis_size Fun.id in\n      let tmp = Array.make axis_size 0 in\n      let acc = Array.make_int32 2 in\n      let cmp a_idx b_idx =\n        Array.unsafe_set acc 0\n          (Array.unsafe_get a_arr (a_base + (a_idx * a_axis_stride)));\n        Array.unsafe_set 
acc 1\n          (Array.unsafe_get a_arr (a_base + (b_idx * a_axis_stride)));\n        let c =\n          Int32_u.compare (Array.unsafe_get acc 0) (Array.unsafe_get acc 1)\n        in\n        if descending then -c else c\n      in\n      merge_sort_indices indices tmp axis_size cmp;\n      for j = 0 to axis_size - 1 do\n        let src_idx = indices.(j) in\n        Array.unsafe_set out_arr\n          (out_base + (j * out_axis_stride))\n          (Array.unsafe_get a_arr (a_base + (src_idx * a_axis_stride)))\n      done\n    in\n    if groups > parallel_threshold then\n      Parallel.parallel_for pool 0 (groups - 1) (fun s e ->\n          for g = s to e - 1 do\n            work_on_group g\n          done)\n    else\n      for g = 0 to groups - 1 do\n        work_on_group g\n      done\n\nlet sort_int64 pool ~(out_arr : int64# array) ~(a_arr : int64# array) ~va ~vout\n    ~axis ~descending =\n  let in_shape = shape va in\n  let rank = Array.length in_shape in\n  let axis_size = in_shape.(axis) in\n  if axis_size <= 1 then (\n    let n = numel vout in\n    let out_offset = View.offset vout in\n    let a_offset = View.offset va in\n    if View.is_c_contiguous vout && View.is_c_contiguous va then\n      for i = 0 to n - 1 do\n        Array.unsafe_set out_arr (out_offset + i)\n          (Array.unsafe_get a_arr (a_offset + i))\n      done\n    else\n      let out_shape = shape vout in\n      let out_strides = View.strides vout in\n      let a_strides = View.strides va in\n      let md_idx = Array.make rank 0 in\n      for i = 0 to n - 1 do\n        Shape.unravel_index_into i out_shape md_idx;\n        let a_lin = Shape.ravel_index md_idx a_strides in\n        let o_lin = Shape.ravel_index md_idx out_strides in\n        Array.unsafe_set out_arr (out_offset + o_lin)\n          (Array.unsafe_get a_arr (a_offset + a_lin))\n      done)\n  else\n    let outer =\n      let p = ref 1 in\n      for d = 0 to axis - 1 do\n        p := !p * in_shape.(d)\n      done;\n      !p\n    
in\n    let inner =\n      let p = ref 1 in\n      for d = axis + 1 to rank - 1 do\n        p := !p * in_shape.(d)\n      done;\n      !p\n    in\n    let groups = outer * inner in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_strides = View.strides vout in\n    let out_offset = View.offset vout in\n    let a_axis_stride = a_strides.(axis) in\n    let out_axis_stride = out_strides.(axis) in\n    let work_on_group g =\n      let o = g / inner in\n      let i = g mod inner in\n      let a_base =\n        let off = ref a_offset in\n        let rem = ref o in\n        for d = axis - 1 downto 0 do\n          let s = in_shape.(d) in\n          off := !off + (!rem mod s) * a_strides.(d);\n          rem := !rem / s\n        done;\n        let rem2 = ref i in\n        for d = rank - 1 downto axis + 1 do\n          let s = in_shape.(d) in\n          off := !off + (!rem2 mod s) * a_strides.(d);\n          rem2 := !rem2 / s\n        done;\n        !off\n      in\n      let out_base =\n        let off = ref out_offset in\n        let rem = ref o in\n        for d = axis - 1 downto 0 do\n          let s = in_shape.(d) in\n          off := !off + (!rem mod s) * out_strides.(d);\n          rem := !rem / s\n        done;\n        let rem2 = ref i in\n        for d = rank - 1 downto axis + 1 do\n          let s = in_shape.(d) in\n          off := !off + (!rem2 mod s) * out_strides.(d);\n          rem2 := !rem2 / s\n        done;\n        !off\n      in\n      let indices = Array.init axis_size Fun.id in\n      let tmp = Array.make axis_size 0 in\n      let acc = Array.make_int64 2 in\n      let cmp a_idx b_idx =\n        Array.unsafe_set acc 0\n          (Array.unsafe_get a_arr (a_base + (a_idx * a_axis_stride)));\n        Array.unsafe_set acc 1\n          (Array.unsafe_get a_arr (a_base + (b_idx * a_axis_stride)));\n        let c =\n          Int64_u.compare (Array.unsafe_get acc 0) (Array.unsafe_get acc 1)\n        in\n        if 
descending then -c else c\n      in\n      merge_sort_indices indices tmp axis_size cmp;\n      for j = 0 to axis_size - 1 do\n        let src_idx = indices.(j) in\n        Array.unsafe_set out_arr\n          (out_base + (j * out_axis_stride))\n          (Array.unsafe_get a_arr (a_base + (src_idx * a_axis_stride)))\n      done\n    in\n    if groups > parallel_threshold then\n      Parallel.parallel_for pool 0 (groups - 1) (fun s e ->\n          for g = s to e - 1 do\n            work_on_group g\n          done)\n    else\n      for g = 0 to groups - 1 do\n        work_on_group g\n      done\n\n(* --- argsort --- *)\n\nlet argsort_float64 pool ~(out_arr : int32# array) ~a_arr ~va ~vout ~axis\n    ~descending =\n  let in_shape = shape va in\n  let rank = Array.length in_shape in\n  let axis_size = in_shape.(axis) in\n  if axis_size <= 1 then (\n    let n = numel vout in\n    let out_offset = View.offset vout in\n    for i = 0 to n - 1 do\n      Array.unsafe_set out_arr (out_offset + i) (Int32_u.of_int 0)\n    done)\n  else\n    let outer =\n      let p = ref 1 in\n      for d = 0 to axis - 1 do\n        p := !p * in_shape.(d)\n      done;\n      !p\n    in\n    let inner =\n      let p = ref 1 in\n      for d = axis + 1 to rank - 1 do\n        p := !p * in_shape.(d)\n      done;\n      !p\n    in\n    let groups = outer * inner in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_strides = View.strides vout in\n    let out_offset = View.offset vout in\n    let a_axis_stride = a_strides.(axis) in\n    let out_axis_stride = out_strides.(axis) in\n    let work_on_group g =\n      let o = g / inner in\n      let i = g mod inner in\n      let a_base =\n        let off = ref a_offset in\n        let rem = ref o in\n        for d = axis - 1 downto 0 do\n          let s = in_shape.(d) in\n          off := !off + (!rem mod s) * a_strides.(d);\n          rem := !rem / s\n        done;\n        let rem2 = ref i in\n        for d = rank 
- 1 downto axis + 1 do\n          let s = in_shape.(d) in\n          off := !off + (!rem2 mod s) * a_strides.(d);\n          rem2 := !rem2 / s\n        done;\n        !off\n      in\n      let out_base =\n        let off = ref out_offset in\n        let rem = ref o in\n        for d = axis - 1 downto 0 do\n          let s = in_shape.(d) in\n          off := !off + (!rem mod s) * out_strides.(d);\n          rem := !rem / s\n        done;\n        let rem2 = ref i in\n        for d = rank - 1 downto axis + 1 do\n          let s = in_shape.(d) in\n          off := !off + (!rem2 mod s) * out_strides.(d);\n          rem2 := !rem2 / s\n        done;\n        !off\n      in\n      let indices = Array.init axis_size Fun.id in\n      let tmp = Array.make axis_size 0 in\n      let acc = Array.make_float64 2 in\n      let cmp a_idx b_idx =\n        Array.unsafe_set acc 0\n          (Array.unsafe_get a_arr (a_base + (a_idx * a_axis_stride)));\n        Array.unsafe_set acc 1\n          (Array.unsafe_get a_arr (a_base + (b_idx * a_axis_stride)));\n        let c =\n          Float_u.compare (Array.unsafe_get acc 0) (Array.unsafe_get acc 1)\n        in\n        if descending then -c else c\n      in\n      merge_sort_indices indices tmp axis_size cmp;\n      for j = 0 to axis_size - 1 do\n        Array.unsafe_set out_arr\n          (out_base + (j * out_axis_stride))\n          (Int32_u.of_int indices.(j))\n      done\n    in\n    if groups > parallel_threshold then\n      Parallel.parallel_for pool 0 (groups - 1) (fun s e ->\n          for g = s to e - 1 do\n            work_on_group g\n          done)\n    else\n      for g = 0 to groups - 1 do\n        work_on_group g\n      done\n\nlet argsort_float32 pool ~(out_arr : int32# array) ~a_arr ~va ~vout ~axis\n    ~descending =\n  let in_shape = shape va in\n  let rank = Array.length in_shape in\n  let axis_size = in_shape.(axis) in\n  if axis_size <= 1 then (\n    let n = numel vout in\n    let out_offset = View.offset vout in\n    
for i = 0 to n - 1 do\n      Array.unsafe_set out_arr (out_offset + i) (Int32_u.of_int 0)\n    done)\n  else\n    let outer =\n      let p = ref 1 in\n      for d = 0 to axis - 1 do\n        p := !p * in_shape.(d)\n      done;\n      !p\n    in\n    let inner =\n      let p = ref 1 in\n      for d = axis + 1 to rank - 1 do\n        p := !p * in_shape.(d)\n      done;\n      !p\n    in\n    let groups = outer * inner in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_strides = View.strides vout in\n    let out_offset = View.offset vout in\n    let a_axis_stride = a_strides.(axis) in\n    let out_axis_stride = out_strides.(axis) in\n    let work_on_group g =\n      let o = g / inner in\n      let i = g mod inner in\n      let a_base =\n        let off = ref a_offset in\n        let rem = ref o in\n        for d = axis - 1 downto 0 do\n          let s = in_shape.(d) in\n          off := !off + (!rem mod s) * a_strides.(d);\n          rem := !rem / s\n        done;\n        let rem2 = ref i in\n        for d = rank - 1 downto axis + 1 do\n          let s = in_shape.(d) in\n          off := !off + (!rem2 mod s) * a_strides.(d);\n          rem2 := !rem2 / s\n        done;\n        !off\n      in\n      let out_base =\n        let off = ref out_offset in\n        let rem = ref o in\n        for d = axis - 1 downto 0 do\n          let s = in_shape.(d) in\n          off := !off + (!rem mod s) * out_strides.(d);\n          rem := !rem / s\n        done;\n        let rem2 = ref i in\n        for d = rank - 1 downto axis + 1 do\n          let s = in_shape.(d) in\n          off := !off + (!rem2 mod s) * out_strides.(d);\n          rem2 := !rem2 / s\n        done;\n        !off\n      in\n      let indices = Array.init axis_size Fun.id in\n      let tmp = Array.make axis_size 0 in\n      let acc = Array.make_float32 2 in\n      let cmp a_idx b_idx =\n        Array.unsafe_set acc 0\n          (Array.unsafe_get a_arr (a_base + (a_idx * 
a_axis_stride)));\n        Array.unsafe_set acc 1\n          (Array.unsafe_get a_arr (a_base + (b_idx * a_axis_stride)));\n        let c =\n          Float32_u.compare (Array.unsafe_get acc 0) (Array.unsafe_get acc 1)\n        in\n        if descending then -c else c\n      in\n      merge_sort_indices indices tmp axis_size cmp;\n      for j = 0 to axis_size - 1 do\n        Array.unsafe_set out_arr\n          (out_base + (j * out_axis_stride))\n          (Int32_u.of_int indices.(j))\n      done\n    in\n    if groups > parallel_threshold then\n      Parallel.parallel_for pool 0 (groups - 1) (fun s e ->\n          for g = s to e - 1 do\n            work_on_group g\n          done)\n    else\n      for g = 0 to groups - 1 do\n        work_on_group g\n      done\n\nlet argsort_int32 pool ~(out_arr : int32# array) ~(a_arr : int32# array) ~va\n    ~vout ~axis ~descending =\n  let in_shape = shape va in\n  let rank = Array.length in_shape in\n  let axis_size = in_shape.(axis) in\n  if axis_size <= 1 then (\n    let n = numel vout in\n    let out_offset = View.offset vout in\n    for i = 0 to n - 1 do\n      Array.unsafe_set out_arr (out_offset + i) (Int32_u.of_int 0)\n    done)\n  else\n    let outer =\n      let p = ref 1 in\n      for d = 0 to axis - 1 do\n        p := !p * in_shape.(d)\n      done;\n      !p\n    in\n    let inner =\n      let p = ref 1 in\n      for d = axis + 1 to rank - 1 do\n        p := !p * in_shape.(d)\n      done;\n      !p\n    in\n    let groups = outer * inner in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_strides = View.strides vout in\n    let out_offset = View.offset vout in\n    let a_axis_stride = a_strides.(axis) in\n    let out_axis_stride = out_strides.(axis) in\n    let work_on_group g =\n      let o = g / inner in\n      let i = g mod inner in\n      let a_base =\n        let off = ref a_offset in\n        let rem = ref o in\n        for d = axis - 1 downto 0 do\n          let s = 
in_shape.(d) in\n          off := !off + (!rem mod s) * a_strides.(d);\n          rem := !rem / s\n        done;\n        let rem2 = ref i in\n        for d = rank - 1 downto axis + 1 do\n          let s = in_shape.(d) in\n          off := !off + (!rem2 mod s) * a_strides.(d);\n          rem2 := !rem2 / s\n        done;\n        !off\n      in\n      let out_base =\n        let off = ref out_offset in\n        let rem = ref o in\n        for d = axis - 1 downto 0 do\n          let s = in_shape.(d) in\n          off := !off + (!rem mod s) * out_strides.(d);\n          rem := !rem / s\n        done;\n        let rem2 = ref i in\n        for d = rank - 1 downto axis + 1 do\n          let s = in_shape.(d) in\n          off := !off + (!rem2 mod s) * out_strides.(d);\n          rem2 := !rem2 / s\n        done;\n        !off\n      in\n      let indices = Array.init axis_size Fun.id in\n      let tmp = Array.make axis_size 0 in\n      let acc = Array.make_int32 2 in\n      let cmp a_idx b_idx =\n        Array.unsafe_set acc 0\n          (Array.unsafe_get a_arr (a_base + (a_idx * a_axis_stride)));\n        Array.unsafe_set acc 1\n          (Array.unsafe_get a_arr (a_base + (b_idx * a_axis_stride)));\n        let c =\n          Int32_u.compare (Array.unsafe_get acc 0) (Array.unsafe_get acc 1)\n        in\n        if descending then -c else c\n      in\n      merge_sort_indices indices tmp axis_size cmp;\n      for j = 0 to axis_size - 1 do\n        Array.unsafe_set out_arr\n          (out_base + (j * out_axis_stride))\n          (Int32_u.of_int indices.(j))\n      done\n    in\n    if groups > parallel_threshold then\n      Parallel.parallel_for pool 0 (groups - 1) (fun s e ->\n          for g = s to e - 1 do\n            work_on_group g\n          done)\n    else\n      for g = 0 to groups - 1 do\n        work_on_group g\n      done\n\nlet argsort_int64 pool ~(out_arr : int32# array) ~(a_arr : int64# array) ~va\n    ~vout ~axis ~descending =\n  let in_shape = shape va in\n 
 let rank = Array.length in_shape in\n  let axis_size = in_shape.(axis) in\n  if axis_size <= 1 then (\n    let n = numel vout in\n    let out_offset = View.offset vout in\n    for i = 0 to n - 1 do\n      Array.unsafe_set out_arr (out_offset + i) (Int32_u.of_int 0)\n    done)\n  else\n    let outer =\n      let p = ref 1 in\n      for d = 0 to axis - 1 do\n        p := !p * in_shape.(d)\n      done;\n      !p\n    in\n    let inner =\n      let p = ref 1 in\n      for d = axis + 1 to rank - 1 do\n        p := !p * in_shape.(d)\n      done;\n      !p\n    in\n    let groups = outer * inner in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_strides = View.strides vout in\n    let out_offset = View.offset vout in\n    let a_axis_stride = a_strides.(axis) in\n    let out_axis_stride = out_strides.(axis) in\n    let work_on_group g =\n      let o = g / inner in\n      let i = g mod inner in\n      let a_base =\n        let off = ref a_offset in\n        let rem = ref o in\n        for d = axis - 1 downto 0 do\n          let s = in_shape.(d) in\n          off := !off + (!rem mod s) * a_strides.(d);\n          rem := !rem / s\n        done;\n        let rem2 = ref i in\n        for d = rank - 1 downto axis + 1 do\n          let s = in_shape.(d) in\n          off := !off + (!rem2 mod s) * a_strides.(d);\n          rem2 := !rem2 / s\n        done;\n        !off\n      in\n      let out_base =\n        let off = ref out_offset in\n        let rem = ref o in\n        for d = axis - 1 downto 0 do\n          let s = in_shape.(d) in\n          off := !off + (!rem mod s) * out_strides.(d);\n          rem := !rem / s\n        done;\n        let rem2 = ref i in\n        for d = rank - 1 downto axis + 1 do\n          let s = in_shape.(d) in\n          off := !off + (!rem2 mod s) * out_strides.(d);\n          rem2 := !rem2 / s\n        done;\n        !off\n      in\n      let indices = Array.init axis_size Fun.id in\n      let tmp = 
Array.make axis_size 0 in\n      let acc = Array.make_int64 2 in\n      let cmp a_idx b_idx =\n        Array.unsafe_set acc 0\n          (Array.unsafe_get a_arr (a_base + (a_idx * a_axis_stride)));\n        Array.unsafe_set acc 1\n          (Array.unsafe_get a_arr (a_base + (b_idx * a_axis_stride)));\n        let c =\n          Int64_u.compare (Array.unsafe_get acc 0) (Array.unsafe_get acc 1)\n        in\n        if descending then -c else c\n      in\n      merge_sort_indices indices tmp axis_size cmp;\n      for j = 0 to axis_size - 1 do\n        Array.unsafe_set out_arr\n          (out_base + (j * out_axis_stride))\n          (Int32_u.of_int indices.(j))\n      done\n    in\n    if groups > parallel_threshold then\n      Parallel.parallel_for pool 0 (groups - 1) (fun s e ->\n          for g = s to e - 1 do\n            work_on_group g\n          done)\n    else\n      for g = 0 to groups - 1 do\n        work_on_group g\n      done\n"
  },
  {
    "path": "packages/nx-oxcaml/lib/op_threefry.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Import\n\nlet threefry_parity = Int32_u.of_int32 0x1BD11BDAl\nlet s1 = Int32_u.of_int32 1l\nlet s2 = Int32_u.of_int32 2l\nlet s3 = Int32_u.of_int32 3l\nlet s4 = Int32_u.of_int32 4l\nlet s5 = Int32_u.of_int32 5l\n\nlet[@inline] rotl32 x r =\n  Int32_u.logor (Int32_u.shift_left x r) (Int32_u.shift_right_logical x (32 - r))\n\n(* Random123 Threefry2x32: 8 rotation constants, 20 rounds, key injection\n   every 4 rounds. Reference: D.E. Shaw Research, Random123 library. *)\nlet[@inline] threefry2x32 ks0 ks1 ks2 c0 c1 k =\n  let rec round r x0 x1 =\n    if r = 20 then k x0 x1\n    else\n      let rot =\n        match r land 7 with\n        | 0 -> 13\n        | 1 -> 15\n        | 2 -> 26\n        | 3 -> 6\n        | 4 -> 17\n        | 5 -> 29\n        | 6 -> 16\n        | _ -> 24\n      in\n      let x0 = Int32_u.add x0 x1 in\n      let x1 = Int32_u.logxor (rotl32 x1 rot) x0 in\n      let r' = r + 1 in\n      if r' land 3 = 0 then\n        (match r' asr 2 with\n        | 1 -> round r' (Int32_u.add x0 ks1) (Int32_u.add x1 (Int32_u.add ks2 s1))\n        | 2 -> round r' (Int32_u.add x0 ks2) (Int32_u.add x1 (Int32_u.add ks0 s2))\n        | 3 -> round r' (Int32_u.add x0 ks0) (Int32_u.add x1 (Int32_u.add ks1 s3))\n        | 4 -> round r' (Int32_u.add x0 ks1) (Int32_u.add x1 (Int32_u.add ks2 s4))\n        | _ -> round r' (Int32_u.add x0 ks2) (Int32_u.add x1 (Int32_u.add ks0 s5)))\n      else round r' x0 x1\n  in\n  round 0 (Int32_u.add c0 ks0) (Int32_u.add c1 ks1)\n\nlet[@inline] lane1 v = Int32x4.low_to (Int32x4.dup_lane 1 v)\nlet[@inline] lane2 v = Int32x4.low_to (Int32x4.dup_lane 2 v)\nlet[@inline] lane3 v = Int32x4.low_to (Int32x4.dup_lane 3 v)\n\nlet[@inline] threefry_pair ~(key_arr : int32# array) 
~(ctr_arr : int32# array)\n    ~(out_arr : int32# array) ~kb ~cb ~ob ~kl ~cl ~ol =\n  let ks0 = Array.unsafe_get key_arr kb in\n  let ks1 = Array.unsafe_get key_arr (kb + kl) in\n  let ks2 = Int32_u.logxor threefry_parity (Int32_u.logxor ks0 ks1) in\n  let c0 = Array.unsafe_get ctr_arr cb in\n  let c1 = Array.unsafe_get ctr_arr (cb + cl) in\n  threefry2x32 ks0 ks1 ks2 c0 c1 (fun r0 r1 ->\n      Array.unsafe_set out_arr ob r0;\n      Array.unsafe_set out_arr (ob + ol) r1)\n\nlet threefry_int32 pool ~(out_arr : int32# array) ~(key_arr : int32# array)\n    ~(ctr_arr : int32# array) ~shape ~key_view ~ctr_view ~out_view =\n  let rank = Array.length shape in\n  let last_dim = rank - 1 in\n  let total_vectors =\n    let p = ref 1 in\n    for i = 0 to last_dim - 1 do\n      p := !p * shape.(i)\n    done;\n    !p\n  in\n  if total_vectors = 0 then ()\n  else\n    let key_strides = View.strides key_view in\n    let ctr_strides = View.strides ctr_view in\n    let out_strides = View.strides out_view in\n    let key_offset = View.offset key_view in\n    let ctr_offset = View.offset ctr_view in\n    let out_offset = View.offset out_view in\n    let contiguous =\n      View.is_c_contiguous key_view\n      && View.is_c_contiguous ctr_view\n      && View.is_c_contiguous out_view\n    in\n    let process_chunk start_idx end_idx =\n      if contiguous then (\n        let kb = ref (key_offset + (start_idx lsl 1)) in\n        let cb = ref (ctr_offset + (start_idx lsl 1)) in\n        let ob = ref (out_offset + (start_idx lsl 1)) in\n        let stop = key_offset + (end_idx lsl 1) in\n        let stop_simd = stop - (((stop - !kb) land 3)) in\n        while !kb < stop_simd do\n          let key_v = Int32x4.Array.unsafe_get key_arr ~idx:!kb in\n          let ctr_v = Int32x4.Array.unsafe_get ctr_arr ~idx:!cb in\n          let k0a = Int32x4.low_to key_v in\n          let k1a = lane1 key_v in\n          let k0b = lane2 key_v in\n          let k1b = lane3 key_v in\n          let c0a = 
Int32x4.low_to ctr_v in\n          let c1a = lane1 ctr_v in\n          let c0b = lane2 ctr_v in\n          let c1b = lane3 ctr_v in\n          let ks2a = Int32_u.logxor threefry_parity (Int32_u.logxor k0a k1a) in\n          let ks2b = Int32_u.logxor threefry_parity (Int32_u.logxor k0b k1b) in\n          threefry2x32 k0a k1a ks2a c0a c1a (fun r0a r1a ->\n              threefry2x32 k0b k1b ks2b c0b c1b (fun r0b r1b ->\n                  let out_v = Int32x4.set r0a r1a r0b r1b in\n                  Int32x4.Array.unsafe_set out_arr ~idx:!ob out_v));\n          kb := !kb + 4;\n          cb := !cb + 4;\n          ob := !ob + 4\n        done;\n        while !kb < stop do\n          threefry_pair ~key_arr ~ctr_arr ~out_arr ~kb:!kb ~cb:!cb ~ob:!ob ~kl:1\n            ~cl:1 ~ol:1;\n          kb := !kb + 2;\n          cb := !cb + 2;\n          ob := !ob + 2\n        done)\n      else (\n        let key_last = key_strides.(last_dim) in\n        let ctr_last = ctr_strides.(last_dim) in\n        let out_last = out_strides.(last_dim) in\n        let slice_rank = rank - 1 in\n        let dims = Array.make slice_rank 0 in\n        let key_str = Array.make slice_rank 0 in\n        let ctr_str = Array.make slice_rank 0 in\n        let out_str = Array.make slice_rank 0 in\n        let j = ref 0 in\n        for d = 0 to rank - 1 do\n          if d <> last_dim then (\n            dims.(!j) <- shape.(d);\n            key_str.(!j) <- key_strides.(d);\n            ctr_str.(!j) <- ctr_strides.(d);\n            out_str.(!j) <- out_strides.(d);\n            incr j)\n        done;\n        let coords = Array.make slice_rank 0 in\n        let kb = ref key_offset in\n        let cb = ref ctr_offset in\n        let ob = ref out_offset in\n        let rem = ref start_idx in\n        for d = 0 to slice_rank - 1 do\n          let block = ref 1 in\n          for d' = d + 1 to slice_rank - 1 do\n            block := !block * dims.(d')\n          done;\n          let c = !rem / !block in\n          rem 
:= !rem mod !block;\n          coords.(d) <- c;\n          kb := !kb + (c * key_str.(d));\n          cb := !cb + (c * ctr_str.(d));\n          ob := !ob + (c * out_str.(d))\n        done;\n        let rec carry d =\n          if d >= 0 then\n            let next = coords.(d) + 1 in\n            if next < dims.(d) then (\n              coords.(d) <- next;\n              kb := !kb + key_str.(d);\n              cb := !cb + ctr_str.(d);\n              ob := !ob + out_str.(d))\n            else (\n              coords.(d) <- 0;\n              kb := !kb - ((dims.(d) - 1) * key_str.(d));\n              cb := !cb - ((dims.(d) - 1) * ctr_str.(d));\n              ob := !ob - ((dims.(d) - 1) * out_str.(d));\n              carry (d - 1))\n        in\n        for _ = start_idx to end_idx - 1 do\n          threefry_pair ~key_arr ~ctr_arr ~out_arr ~kb:!kb ~cb:!cb ~ob:!ob\n            ~kl:key_last ~cl:ctr_last ~ol:out_last;\n          carry (slice_rank - 1)\n        done)\n    in\n    let parallel_threshold = 62500 in\n    if total_vectors > parallel_threshold then\n      Parallel.parallel_for pool 0 (total_vectors - 1) process_chunk\n    else process_chunk 0 total_vectors\n"
  },
  {
    "path": "packages/nx-oxcaml/lib/op_unfold.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Import\n\nlet is_identity_window ~spatial_ndim ~kernel_elems ~kernel_size ~stride ~dilation\n    ~padding =\n  if kernel_elems <> 1 then false\n  else\n    let ok = ref true in\n    for d = 0 to spatial_ndim - 1 do\n      let pad_before, pad_after = padding.(d) in\n      if\n        kernel_size.(d) <> 1 || stride.(d) <> 1 || dilation.(d) <> 1\n        || pad_before <> 0 || pad_after <> 0\n      then ok := false\n    done;\n    !ok\n\nlet is_c_contiguous_spatial_tail spatial strides =\n  let expected = ref 1 in\n  let ok = ref true in\n  for d = Array.length spatial - 1 downto 0 do\n    if strides.(d + 2) <> !expected then ok := false;\n    expected := !expected * spatial.(d)\n  done;\n  !ok\n\nlet unfold_float64 in_arr out_arr ~n_start ~n_end ~channels ~input_spatial ~kernel_elems\n    ~num_blocks ~spatial_ndim ~out_spatial ~kernel_size ~stride ~dilation\n    ~padding ~in_offset ~in_strides ~out_offset ~out_strides =\n  if\n    is_identity_window ~spatial_ndim ~kernel_elems ~kernel_size ~stride\n      ~dilation ~padding\n    && num_blocks = Shape.numel input_spatial\n    && is_c_contiguous_spatial_tail input_spatial in_strides\n    && out_strides.(2) = 1\n  then (\n    for n_idx = n_start to n_end - 1 do\n      for c_idx = 0 to channels - 1 do\n        let src_base =\n          in_offset + (n_idx * in_strides.(0)) + (c_idx * in_strides.(1))\n        in\n        let dst_base =\n          out_offset + (n_idx * out_strides.(0)) + (c_idx * out_strides.(1))\n        in\n        if out_strides.(2) = 1 then (\n          let i = ref 0 in\n          let n = num_blocks in\n          let n8 = n - 7 in\n          while !i < n8 do\n            let idx = !i in\n            let a0 = 
Float64x2.Array.unsafe_get in_arr ~idx:(src_base + idx) in\n            let a1 =\n              Float64x2.Array.unsafe_get in_arr ~idx:(src_base + idx + 2)\n            in\n            let a2 =\n              Float64x2.Array.unsafe_get in_arr ~idx:(src_base + idx + 4)\n            in\n            let a3 =\n              Float64x2.Array.unsafe_get in_arr ~idx:(src_base + idx + 6)\n            in\n            Float64x2.Array.unsafe_set out_arr ~idx:(dst_base + idx) a0;\n            Float64x2.Array.unsafe_set out_arr ~idx:(dst_base + idx + 2) a1;\n            Float64x2.Array.unsafe_set out_arr ~idx:(dst_base + idx + 4) a2;\n            Float64x2.Array.unsafe_set out_arr ~idx:(dst_base + idx + 6) a3;\n            i := idx + 8\n          done;\n          let n2 = n - 1 in\n          while !i < n2 do\n            let idx = !i in\n            let a = Float64x2.Array.unsafe_get in_arr ~idx:(src_base + idx) in\n            Float64x2.Array.unsafe_set out_arr ~idx:(dst_base + idx) a;\n            i := idx + 2\n          done;\n          while !i < n do\n            let idx = !i in\n            Array.unsafe_set out_arr (dst_base + idx)\n              (Array.unsafe_get in_arr (src_base + idx));\n            incr i\n          done)\n        else\n          for b_idx = 0 to num_blocks - 1 do\n            let src_lin = src_base + (b_idx * in_strides.(2)) in\n            let dst_lin = dst_base + (b_idx * out_strides.(2)) in\n            Array.unsafe_set out_arr dst_lin (Array.unsafe_get in_arr src_lin)\n          done\n      done\n    done)\n  else (\n  let block_coords = Array.make spatial_ndim 0 in\n  let kernel_coords = Array.make spatial_ndim 0 in\n  let in_spatial = Array.make spatial_ndim 0 in\n  let zero = Float_u.of_float 0.0 in\n  for n_idx = n_start to n_end - 1 do\n    for b_idx = 0 to num_blocks - 1 do\n      Shape.unravel_index_into b_idx out_spatial block_coords;\n      for c_idx = 0 to channels - 1 do\n        for k_idx = 0 to kernel_elems - 1 do\n          
Shape.unravel_index_into k_idx kernel_size kernel_coords;\n          let valid = ref true in\n          for d = 0 to spatial_ndim - 1 do\n            let pad_before, _ = padding.(d) in\n            let pos =\n              (block_coords.(d) * stride.(d))\n              - pad_before\n              + (kernel_coords.(d) * dilation.(d))\n            in\n            in_spatial.(d) <- pos;\n            if pos < 0 || pos >= input_spatial.(d) then valid := false\n          done;\n          let v =\n            if !valid then\n              let src_lin =\n                ref (in_offset + (n_idx * in_strides.(0)) + (c_idx * in_strides.(1)))\n              in\n              for d = 0 to spatial_ndim - 1 do\n                src_lin := !src_lin + (in_spatial.(d) * in_strides.(d + 2))\n              done;\n              Array.unsafe_get in_arr !src_lin\n            else zero\n          in\n          let dst_ch = (c_idx * kernel_elems) + k_idx in\n          let dst_lin =\n            out_offset\n            + (n_idx * out_strides.(0))\n            + (dst_ch * out_strides.(1))\n            + (b_idx * out_strides.(2))\n          in\n          Array.unsafe_set out_arr dst_lin v\n        done\n      done\n    done\n  done)\n\nlet unfold_float32 in_arr out_arr ~n_start ~n_end ~channels ~input_spatial ~kernel_elems\n    ~num_blocks ~spatial_ndim ~out_spatial ~kernel_size ~stride ~dilation\n    ~padding ~in_offset ~in_strides ~out_offset ~out_strides =\n  if\n    is_identity_window ~spatial_ndim ~kernel_elems ~kernel_size ~stride\n      ~dilation ~padding\n    && num_blocks = Shape.numel input_spatial\n    && is_c_contiguous_spatial_tail input_spatial in_strides\n    && out_strides.(2) = 1\n  then (\n    for n_idx = n_start to n_end - 1 do\n      for c_idx = 0 to channels - 1 do\n        let src_base =\n          in_offset + (n_idx * in_strides.(0)) + (c_idx * in_strides.(1))\n        in\n        let dst_base =\n          out_offset + (n_idx * out_strides.(0)) + (c_idx * 
out_strides.(1))\n        in\n        if out_strides.(2) = 1 then (\n          let i = ref 0 in\n          let n = num_blocks in\n          let n16 = n - 15 in\n          while !i < n16 do\n            let idx = !i in\n            let a0 = Float32x4.Array.unsafe_get in_arr ~idx:(src_base + idx) in\n            let a1 =\n              Float32x4.Array.unsafe_get in_arr ~idx:(src_base + idx + 4)\n            in\n            let a2 =\n              Float32x4.Array.unsafe_get in_arr ~idx:(src_base + idx + 8)\n            in\n            let a3 =\n              Float32x4.Array.unsafe_get in_arr ~idx:(src_base + idx + 12)\n            in\n            Float32x4.Array.unsafe_set out_arr ~idx:(dst_base + idx) a0;\n            Float32x4.Array.unsafe_set out_arr ~idx:(dst_base + idx + 4) a1;\n            Float32x4.Array.unsafe_set out_arr ~idx:(dst_base + idx + 8) a2;\n            Float32x4.Array.unsafe_set out_arr ~idx:(dst_base + idx + 12) a3;\n            i := idx + 16\n          done;\n          let n4 = n - 3 in\n          while !i < n4 do\n            let idx = !i in\n            let a = Float32x4.Array.unsafe_get in_arr ~idx:(src_base + idx) in\n            Float32x4.Array.unsafe_set out_arr ~idx:(dst_base + idx) a;\n            i := idx + 4\n          done;\n          while !i < n do\n            let idx = !i in\n            Array.unsafe_set out_arr (dst_base + idx)\n              (Array.unsafe_get in_arr (src_base + idx));\n            incr i\n          done)\n        else\n          for b_idx = 0 to num_blocks - 1 do\n            let src_lin = src_base + (b_idx * in_strides.(2)) in\n            let dst_lin = dst_base + (b_idx * out_strides.(2)) in\n            Array.unsafe_set out_arr dst_lin (Array.unsafe_get in_arr src_lin)\n          done\n      done\n    done)\n  else (\n  let block_coords = Array.make spatial_ndim 0 in\n  let kernel_coords = Array.make spatial_ndim 0 in\n  let in_spatial = Array.make spatial_ndim 0 in\n  let zero = Float32_u.of_int 0 in\n  for 
n_idx = n_start to n_end - 1 do\n    for b_idx = 0 to num_blocks - 1 do\n      Shape.unravel_index_into b_idx out_spatial block_coords;\n      for c_idx = 0 to channels - 1 do\n        for k_idx = 0 to kernel_elems - 1 do\n          Shape.unravel_index_into k_idx kernel_size kernel_coords;\n          let valid = ref true in\n          for d = 0 to spatial_ndim - 1 do\n            let pad_before, _ = padding.(d) in\n            let pos =\n              (block_coords.(d) * stride.(d))\n              - pad_before\n              + (kernel_coords.(d) * dilation.(d))\n            in\n            in_spatial.(d) <- pos;\n            if pos < 0 || pos >= input_spatial.(d) then valid := false\n          done;\n          let v =\n            if !valid then\n              let src_lin =\n                ref (in_offset + (n_idx * in_strides.(0)) + (c_idx * in_strides.(1)))\n              in\n              for d = 0 to spatial_ndim - 1 do\n                src_lin := !src_lin + (in_spatial.(d) * in_strides.(d + 2))\n              done;\n              Array.unsafe_get in_arr !src_lin\n            else zero\n          in\n          let dst_ch = (c_idx * kernel_elems) + k_idx in\n          let dst_lin =\n            out_offset\n            + (n_idx * out_strides.(0))\n            + (dst_ch * out_strides.(1))\n            + (b_idx * out_strides.(2))\n          in\n          Array.unsafe_set out_arr dst_lin v\n        done\n      done\n    done\n  done)\n\nlet unfold_int8 in_arr out_arr ~n_start ~n_end ~channels ~input_spatial ~kernel_elems\n    ~num_blocks ~spatial_ndim ~out_spatial ~kernel_size ~stride ~dilation\n    ~padding ~in_offset ~in_strides ~out_offset ~out_strides =\n  let block_coords = Array.make spatial_ndim 0 in\n  let kernel_coords = Array.make spatial_ndim 0 in\n  let in_spatial = Array.make spatial_ndim 0 in\n  let zero = Int8_u.of_int 0 in\n  for n_idx = n_start to n_end - 1 do\n    for b_idx = 0 to num_blocks - 1 do\n      Shape.unravel_index_into b_idx out_spatial 
block_coords;\n      for c_idx = 0 to channels - 1 do\n        for k_idx = 0 to kernel_elems - 1 do\n          Shape.unravel_index_into k_idx kernel_size kernel_coords;\n          let valid = ref true in\n          for d = 0 to spatial_ndim - 1 do\n            let pad_before, _ = padding.(d) in\n            let pos =\n              (block_coords.(d) * stride.(d))\n              - pad_before\n              + (kernel_coords.(d) * dilation.(d))\n            in\n            in_spatial.(d) <- pos;\n            if pos < 0 || pos >= input_spatial.(d) then valid := false\n          done;\n          let v =\n            if !valid then\n              let src_lin =\n                ref (in_offset + (n_idx * in_strides.(0)) + (c_idx * in_strides.(1)))\n              in\n              for d = 0 to spatial_ndim - 1 do\n                src_lin := !src_lin + (in_spatial.(d) * in_strides.(d + 2))\n              done;\n              Array.unsafe_get in_arr !src_lin\n            else zero\n          in\n          let dst_ch = (c_idx * kernel_elems) + k_idx in\n          let dst_lin =\n            out_offset\n            + (n_idx * out_strides.(0))\n            + (dst_ch * out_strides.(1))\n            + (b_idx * out_strides.(2))\n          in\n          Array.unsafe_set out_arr dst_lin v\n        done\n      done\n    done\n  done\n\nlet unfold_int16 in_arr out_arr ~n_start ~n_end ~channels ~input_spatial ~kernel_elems\n    ~num_blocks ~spatial_ndim ~out_spatial ~kernel_size ~stride ~dilation\n    ~padding ~in_offset ~in_strides ~out_offset ~out_strides =\n  let block_coords = Array.make spatial_ndim 0 in\n  let kernel_coords = Array.make spatial_ndim 0 in\n  let in_spatial = Array.make spatial_ndim 0 in\n  let zero = Int16_u.of_int 0 in\n  for n_idx = n_start to n_end - 1 do\n    for b_idx = 0 to num_blocks - 1 do\n      Shape.unravel_index_into b_idx out_spatial block_coords;\n      for c_idx = 0 to channels - 1 do\n        for k_idx = 0 to kernel_elems - 1 do\n          
Shape.unravel_index_into k_idx kernel_size kernel_coords;\n          let valid = ref true in\n          for d = 0 to spatial_ndim - 1 do\n            let pad_before, _ = padding.(d) in\n            let pos =\n              (block_coords.(d) * stride.(d))\n              - pad_before\n              + (kernel_coords.(d) * dilation.(d))\n            in\n            in_spatial.(d) <- pos;\n            if pos < 0 || pos >= input_spatial.(d) then valid := false\n          done;\n          let v =\n            if !valid then\n              let src_lin =\n                ref (in_offset + (n_idx * in_strides.(0)) + (c_idx * in_strides.(1)))\n              in\n              for d = 0 to spatial_ndim - 1 do\n                src_lin := !src_lin + (in_spatial.(d) * in_strides.(d + 2))\n              done;\n              Array.unsafe_get in_arr !src_lin\n            else zero\n          in\n          let dst_ch = (c_idx * kernel_elems) + k_idx in\n          let dst_lin =\n            out_offset\n            + (n_idx * out_strides.(0))\n            + (dst_ch * out_strides.(1))\n            + (b_idx * out_strides.(2))\n          in\n          Array.unsafe_set out_arr dst_lin v\n        done\n      done\n    done\n  done\n\nlet unfold_int32 in_arr out_arr ~n_start ~n_end ~channels ~input_spatial ~kernel_elems\n    ~num_blocks ~spatial_ndim ~out_spatial ~kernel_size ~stride ~dilation\n    ~padding ~in_offset ~in_strides ~out_offset ~out_strides =\n  if\n    is_identity_window ~spatial_ndim ~kernel_elems ~kernel_size ~stride\n      ~dilation ~padding\n    && num_blocks = Shape.numel input_spatial\n    && is_c_contiguous_spatial_tail input_spatial in_strides\n    && out_strides.(2) = 1\n  then (\n    for n_idx = n_start to n_end - 1 do\n      for c_idx = 0 to channels - 1 do\n        let src_base =\n          in_offset + (n_idx * in_strides.(0)) + (c_idx * in_strides.(1))\n        in\n        let dst_base =\n          out_offset + (n_idx * out_strides.(0)) + (c_idx * out_strides.(1))\n 
       in\n        let i = ref 0 in\n        let n = num_blocks in\n        let n16 = n - 15 in\n        while !i < n16 do\n          let idx = !i in\n          let a0 = Int32x4.Array.unsafe_get in_arr ~idx:(src_base + idx) in\n          let a1 = Int32x4.Array.unsafe_get in_arr ~idx:(src_base + idx + 4) in\n          let a2 = Int32x4.Array.unsafe_get in_arr ~idx:(src_base + idx + 8) in\n          let a3 = Int32x4.Array.unsafe_get in_arr ~idx:(src_base + idx + 12) in\n          Int32x4.Array.unsafe_set out_arr ~idx:(dst_base + idx) a0;\n          Int32x4.Array.unsafe_set out_arr ~idx:(dst_base + idx + 4) a1;\n          Int32x4.Array.unsafe_set out_arr ~idx:(dst_base + idx + 8) a2;\n          Int32x4.Array.unsafe_set out_arr ~idx:(dst_base + idx + 12) a3;\n          i := idx + 16\n        done;\n        let n4 = n - 3 in\n        while !i < n4 do\n          let idx = !i in\n          let a = Int32x4.Array.unsafe_get in_arr ~idx:(src_base + idx) in\n          Int32x4.Array.unsafe_set out_arr ~idx:(dst_base + idx) a;\n          i := idx + 4\n        done;\n        while !i < n do\n          let idx = !i in\n          Array.unsafe_set out_arr (dst_base + idx)\n            (Array.unsafe_get in_arr (src_base + idx));\n          incr i\n        done\n      done\n    done)\n  else (\n  let block_coords = Array.make spatial_ndim 0 in\n  let kernel_coords = Array.make spatial_ndim 0 in\n  let in_spatial = Array.make spatial_ndim 0 in\n  let zero = Int32_u.of_int32 0l in\n  for n_idx = n_start to n_end - 1 do\n    for b_idx = 0 to num_blocks - 1 do\n      Shape.unravel_index_into b_idx out_spatial block_coords;\n      for c_idx = 0 to channels - 1 do\n        for k_idx = 0 to kernel_elems - 1 do\n          Shape.unravel_index_into k_idx kernel_size kernel_coords;\n          let valid = ref true in\n          for d = 0 to spatial_ndim - 1 do\n            let pad_before, _ = padding.(d) in\n            let pos =\n              (block_coords.(d) * stride.(d))\n              - 
pad_before\n              + (kernel_coords.(d) * dilation.(d))\n            in\n            in_spatial.(d) <- pos;\n            if pos < 0 || pos >= input_spatial.(d) then valid := false\n          done;\n          let v =\n            if !valid then\n              let src_lin =\n                ref (in_offset + (n_idx * in_strides.(0)) + (c_idx * in_strides.(1)))\n              in\n              for d = 0 to spatial_ndim - 1 do\n                src_lin := !src_lin + (in_spatial.(d) * in_strides.(d + 2))\n              done;\n              Array.unsafe_get in_arr !src_lin\n            else zero\n          in\n          let dst_ch = (c_idx * kernel_elems) + k_idx in\n          let dst_lin =\n            out_offset\n            + (n_idx * out_strides.(0))\n            + (dst_ch * out_strides.(1))\n            + (b_idx * out_strides.(2))\n          in\n          Array.unsafe_set out_arr dst_lin v\n        done\n      done\n    done\n  done)\n\nlet unfold_int64 in_arr out_arr ~n_start ~n_end ~channels ~input_spatial ~kernel_elems\n    ~num_blocks ~spatial_ndim ~out_spatial ~kernel_size ~stride ~dilation\n    ~padding ~in_offset ~in_strides ~out_offset ~out_strides =\n  if\n    is_identity_window ~spatial_ndim ~kernel_elems ~kernel_size ~stride\n      ~dilation ~padding\n    && num_blocks = Shape.numel input_spatial\n    && is_c_contiguous_spatial_tail input_spatial in_strides\n    && out_strides.(2) = 1\n  then (\n    for n_idx = n_start to n_end - 1 do\n      for c_idx = 0 to channels - 1 do\n        let src_base =\n          in_offset + (n_idx * in_strides.(0)) + (c_idx * in_strides.(1))\n        in\n        let dst_base =\n          out_offset + (n_idx * out_strides.(0)) + (c_idx * out_strides.(1))\n        in\n        let i = ref 0 in\n        let n = num_blocks in\n        let n8 = n - 7 in\n        while !i < n8 do\n          let idx = !i in\n          let a0 = Int64x2.Array.unsafe_get in_arr ~idx:(src_base + idx) in\n          let a1 = Int64x2.Array.unsafe_get 
in_arr ~idx:(src_base + idx + 2) in\n          let a2 = Int64x2.Array.unsafe_get in_arr ~idx:(src_base + idx + 4) in\n          let a3 = Int64x2.Array.unsafe_get in_arr ~idx:(src_base + idx + 6) in\n          Int64x2.Array.unsafe_set out_arr ~idx:(dst_base + idx) a0;\n          Int64x2.Array.unsafe_set out_arr ~idx:(dst_base + idx + 2) a1;\n          Int64x2.Array.unsafe_set out_arr ~idx:(dst_base + idx + 4) a2;\n          Int64x2.Array.unsafe_set out_arr ~idx:(dst_base + idx + 6) a3;\n          i := idx + 8\n        done;\n        let n2 = n - 1 in\n        while !i < n2 do\n          let idx = !i in\n          let a = Int64x2.Array.unsafe_get in_arr ~idx:(src_base + idx) in\n          Int64x2.Array.unsafe_set out_arr ~idx:(dst_base + idx) a;\n          i := idx + 2\n        done;\n        while !i < n do\n          let idx = !i in\n          Array.unsafe_set out_arr (dst_base + idx)\n            (Array.unsafe_get in_arr (src_base + idx));\n          incr i\n        done\n      done\n    done)\n  else (\n  let block_coords = Array.make spatial_ndim 0 in\n  let kernel_coords = Array.make spatial_ndim 0 in\n  let in_spatial = Array.make spatial_ndim 0 in\n  let zero = Int64_u.of_int64 0L in\n  for n_idx = n_start to n_end - 1 do\n    for b_idx = 0 to num_blocks - 1 do\n      Shape.unravel_index_into b_idx out_spatial block_coords;\n      for c_idx = 0 to channels - 1 do\n        for k_idx = 0 to kernel_elems - 1 do\n          Shape.unravel_index_into k_idx kernel_size kernel_coords;\n          let valid = ref true in\n          for d = 0 to spatial_ndim - 1 do\n            let pad_before, _ = padding.(d) in\n            let pos =\n              (block_coords.(d) * stride.(d))\n              - pad_before\n              + (kernel_coords.(d) * dilation.(d))\n            in\n            in_spatial.(d) <- pos;\n            if pos < 0 || pos >= input_spatial.(d) then valid := false\n          done;\n          let v =\n            if !valid then\n              let src_lin 
=\n                ref (in_offset + (n_idx * in_strides.(0)) + (c_idx * in_strides.(1)))\n              in\n              for d = 0 to spatial_ndim - 1 do\n                src_lin := !src_lin + (in_spatial.(d) * in_strides.(d + 2))\n              done;\n              Array.unsafe_get in_arr !src_lin\n            else zero\n          in\n          let dst_ch = (c_idx * kernel_elems) + k_idx in\n          let dst_lin =\n            out_offset\n            + (n_idx * out_strides.(0))\n            + (dst_ch * out_strides.(1))\n            + (b_idx * out_strides.(2))\n          in\n          Array.unsafe_set out_arr dst_lin v\n        done\n      done\n    done\n  done)\n\nlet unfold_bool in_arr out_arr ~n_start ~n_end ~channels ~input_spatial ~kernel_elems\n    ~num_blocks ~spatial_ndim ~out_spatial ~kernel_size ~stride ~dilation\n    ~padding ~in_offset ~in_strides ~out_offset ~out_strides =\n  let block_coords = Array.make spatial_ndim 0 in\n  let kernel_coords = Array.make spatial_ndim 0 in\n  let in_spatial = Array.make spatial_ndim 0 in\n  let zero = false in\n  for n_idx = n_start to n_end - 1 do\n    for b_idx = 0 to num_blocks - 1 do\n      Shape.unravel_index_into b_idx out_spatial block_coords;\n      for c_idx = 0 to channels - 1 do\n        for k_idx = 0 to kernel_elems - 1 do\n          Shape.unravel_index_into k_idx kernel_size kernel_coords;\n          let valid = ref true in\n          for d = 0 to spatial_ndim - 1 do\n            let pad_before, _ = padding.(d) in\n            let pos =\n              (block_coords.(d) * stride.(d))\n              - pad_before\n              + (kernel_coords.(d) * dilation.(d))\n            in\n            in_spatial.(d) <- pos;\n            if pos < 0 || pos >= input_spatial.(d) then valid := false\n          done;\n          let v =\n            if !valid then\n              let src_lin =\n                ref (in_offset + (n_idx * in_strides.(0)) + (c_idx * in_strides.(1)))\n              in\n              for d = 0 
to spatial_ndim - 1 do\n                src_lin := !src_lin + (in_spatial.(d) * in_strides.(d + 2))\n              done;\n              Array.unsafe_get in_arr !src_lin\n            else zero\n          in\n          let dst_ch = (c_idx * kernel_elems) + k_idx in\n          let dst_lin =\n            out_offset\n            + (n_idx * out_strides.(0))\n            + (dst_ch * out_strides.(1))\n            + (b_idx * out_strides.(2))\n          in\n          Array.unsafe_set out_arr dst_lin v\n        done\n      done\n    done\n  done\n"
  },
  {
    "path": "packages/nx-oxcaml/lib/parallel.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\ntype task = { start_idx : int; end_idx : int; compute : int -> int -> unit }\ntype _ Effect.t += WaitCompletion : int -> unit Effect.t\n\ntype pool = {\n  num_workers : int;\n  task_assignments : task option array;\n  completed : int Atomic.t;\n  generation : int Atomic.t;\n  mutex : Mutex.t;\n  work_available : Condition.t;\n}\n[@@contended]\n\nlet current_pool = ref None\n\nlet setup_pool () =\n  let num_workers = Domain.recommended_domain_count () - 1 in\n  let task_assignments = Array.make num_workers None in\n  let completed = Atomic.make 0 in\n  let generation = Atomic.make 0 in\n  let mutex = Mutex.create () in\n  let work_available = Condition.create () in\n  let (pool : pool) =\n    {\n      num_workers;\n      task_assignments;\n      completed;\n      generation;\n      mutex;\n      work_available;\n    }\n  in\n  let worker id =\n    let last_gen = ref (-1) in\n    while true do\n      Mutex.lock pool.mutex;\n      let current_gen = Atomic.get pool.generation in\n      while pool.task_assignments.(id) = None && !last_gen = current_gen do\n        Condition.wait pool.work_available pool.mutex\n      done;\n      let current_gen = Atomic.get pool.generation in\n      if pool.task_assignments.(id) <> None then (\n        let task = Option.get pool.task_assignments.(id) in\n        pool.task_assignments.(id) <- None;\n        last_gen := current_gen;\n        Mutex.unlock pool.mutex;\n        (try task.compute task.start_idx task.end_idx\n         with exn ->\n           Printf.eprintf \"Worker %d: Exception in task: %s\\n\" id\n             (Printexc.to_string exn);\n           flush stderr);\n        Atomic.incr pool.completed)\n      else (\n        (* New generation without task for 
us, loop back *)\n        last_gen := current_gen;\n        Mutex.unlock pool.mutex)\n    done\n  in\n  for i = 0 to num_workers - 1 do\n    ignore (Domain.spawn (fun () -> worker i))\n  done;\n  pool\n\nlet get_or_setup_pool () =\n  match !current_pool with\n  | Some pool -> pool\n  | None ->\n      let pool = setup_pool () in\n      current_pool := Some pool;\n      pool\n\nlet get_num_domains pool = pool.num_workers + 1\n\nlet run pool f =\n  let open Effect.Deep in\n  try_with f ()\n    Effect.\n      {\n        effc =\n          (fun (type a) (e : a t) ->\n            match e with\n            | WaitCompletion target ->\n                Some\n                  (fun (k : (a, unit) continuation) ->\n                    let rec wait () =\n                      if Atomic.get pool.completed >= target then continue k ()\n                      else (\n                        Domain.cpu_relax ();\n                        wait ())\n                    in\n                    wait ())\n            | _ -> None);\n      }\n\nlet parallel_execute pool tasks =\n  run pool (fun () ->\n      let num_tasks = Array.length tasks in\n      if num_tasks <> get_num_domains pool then\n        invalid_arg\n          \"parallel_execute: number of tasks must equal num_workers + 1\";\n      Atomic.set pool.completed 0;\n      Mutex.lock pool.mutex;\n      Atomic.incr pool.generation;\n      for i = 0 to pool.num_workers - 1 do\n        pool.task_assignments.(i) <- Some tasks.(i)\n      done;\n      Condition.broadcast pool.work_available;\n      Mutex.unlock pool.mutex;\n      let main_task = tasks.(pool.num_workers) in\n      main_task.compute main_task.start_idx main_task.end_idx;\n      Effect.perform (WaitCompletion pool.num_workers))\n\nlet parallel_for pool start end_ compute_chunk =\n  let total_iterations = end_ - start + 1 in\n  if total_iterations <= 0 then ()\n  else if total_iterations <= 1 then compute_chunk start (start + 1)\n  else\n    let total_domains = get_num_domains 
pool in\n    let chunk_size = total_iterations / total_domains in\n    let remainder = total_iterations mod total_domains in\n    let tasks =\n      Array.init total_domains (fun d ->\n          let start_idx = start + (d * chunk_size) + min d remainder in\n          let len = chunk_size + if d < remainder then 1 else 0 in\n          let end_idx = start_idx + len in\n          { start_idx; end_idx; compute = compute_chunk })\n    in\n    parallel_execute pool tasks\n\nlet parallel_for_reduce (pool @ portable) start end_ body reduce init =\n  let total_domains = get_num_domains pool in\n  let results = Array.make total_domains init in\n  let chunk_size = (end_ - start + 1) / total_domains in\n  let remainder = (end_ - start + 1) mod total_domains in\n  let tasks =\n    Array.init total_domains (fun d ->\n        let start_idx = start + (d * chunk_size) + min d remainder in\n        let len = chunk_size + if d < remainder then 1 else 0 in\n        let end_idx = start_idx + len in\n        let compute _ _ =\n          (* Ignore args since start_idx and end_idx are captured *)\n          let partial_result = body start_idx end_idx in\n          results.(d) <- partial_result\n        in\n        { start_idx; end_idx; compute })\n  in\n  parallel_execute pool tasks;\n  let final_result = ref init in\n  for i = 0 to total_domains - 1 do\n    final_result := reduce !final_result results.(i)\n  done;\n  !final_result\n"
  },
  {
    "path": "packages/nx-oxcaml/lib/reduce_ops.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Import\n\ntype plan = {\n  axes_mask : bool array;\n  in_shape : int array;\n  in_strides : int array;\n  in_offset : int;\n  out_shape : int array;\n  out_offset : int;\n  rank : int;\n  out_rank : int;\n  keepdims : bool;\n}\n\nlet make_plan axes keepdims va vout =\n  let in_shape = shape va in\n  let rank = Array.length in_shape in\n  let axes_mask = Array.make rank false in\n  Array.iter\n    (fun ax ->\n      let ax' = if ax < 0 then ax + rank else ax in\n      axes_mask.(ax') <- true)\n    axes;\n  let out_shape = shape vout in\n  {\n    axes_mask;\n    in_shape;\n    in_strides = View.strides va;\n    in_offset = View.offset va;\n    out_shape;\n    out_offset = View.offset vout;\n    rank;\n    out_rank = Array.length out_shape;\n    keepdims;\n  }\n\nlet init_input_index plan out_md_index in_md_index =\n  if plan.keepdims then\n    for d = 0 to plan.rank - 1 do\n      if plan.axes_mask.(d) then in_md_index.(d) <- 0\n      else in_md_index.(d) <- out_md_index.(d)\n    done\n  else\n    let out_pos = ref 0 in\n    for d = 0 to plan.rank - 1 do\n      if plan.axes_mask.(d) then in_md_index.(d) <- 0\n      else (\n        in_md_index.(d) <- out_md_index.(!out_pos);\n        incr out_pos)\n    done\n\nlet increment_input_index plan in_md_index =\n  let rec carry d =\n    if d < 0 then false\n    else if not plan.axes_mask.(d) then carry (d - 1)\n    else\n      let next = in_md_index.(d) + 1 in\n      if next < plan.in_shape.(d) then (\n        in_md_index.(d) <- next;\n        true)\n      else (\n        in_md_index.(d) <- 0;\n        carry (d - 1))\n  in\n  carry (plan.rank - 1)\n\nlet parallel_threshold = 62500\n\nlet copy_float64 a_arr out_arr va vout start_idx end_idx =\n  let 
out_offset = View.offset vout in\n  let in_offset = View.offset va in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let out_base = out_offset + start_idx in\n    let in_base = in_offset + start_idx in\n    let n = end_idx - start_idx in\n    for i = 0 to n - 1 do\n      Array.unsafe_set out_arr (out_base + i)\n        (Array.unsafe_get a_arr (in_base + i))\n    done)\n  else\n    let out_shape = shape vout in\n    let a_strides = View.strides va in\n    let md_index = Array.make (Array.length out_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_index;\n      let a_lin = Shape.ravel_index md_index a_strides in\n      let v = Array.unsafe_get a_arr (in_offset + a_lin) in\n      Array.unsafe_set out_arr (out_offset + k) v\n    done\n\nlet copy_float32 a_arr out_arr va vout start_idx end_idx =\n  let out_offset = View.offset vout in\n  let in_offset = View.offset va in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let out_base = out_offset + start_idx in\n    let in_base = in_offset + start_idx in\n    let n = end_idx - start_idx in\n    for i = 0 to n - 1 do\n      Array.unsafe_set out_arr (out_base + i)\n        (Array.unsafe_get a_arr (in_base + i))\n    done)\n  else\n    let out_shape = shape vout in\n    let a_strides = View.strides va in\n    let md_index = Array.make (Array.length out_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_index;\n      let a_lin = Shape.ravel_index md_index a_strides in\n      let v = Array.unsafe_get a_arr (in_offset + a_lin) in\n      Array.unsafe_set out_arr (out_offset + k) v\n    done\n\nlet copy_int8 a_arr out_arr va vout start_idx end_idx =\n  let out_offset = View.offset vout in\n  let in_offset = View.offset va in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let out_base = out_offset + start_idx in\n    let in_base = in_offset + start_idx in\n    let 
n = end_idx - start_idx in\n    for i = 0 to n - 1 do\n      Array.unsafe_set out_arr (out_base + i)\n        (Array.unsafe_get a_arr (in_base + i))\n    done)\n  else\n    let out_shape = shape vout in\n    let a_strides = View.strides va in\n    let md_index = Array.make (Array.length out_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_index;\n      let a_lin = Shape.ravel_index md_index a_strides in\n      let v = Array.unsafe_get a_arr (in_offset + a_lin) in\n      Array.unsafe_set out_arr (out_offset + k) v\n    done\n\nlet copy_int16 a_arr out_arr va vout start_idx end_idx =\n  let out_offset = View.offset vout in\n  let in_offset = View.offset va in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let out_base = out_offset + start_idx in\n    let in_base = in_offset + start_idx in\n    let n = end_idx - start_idx in\n    for i = 0 to n - 1 do\n      Array.unsafe_set out_arr (out_base + i)\n        (Array.unsafe_get a_arr (in_base + i))\n    done)\n  else\n    let out_shape = shape vout in\n    let a_strides = View.strides va in\n    let md_index = Array.make (Array.length out_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_index;\n      let a_lin = Shape.ravel_index md_index a_strides in\n      let v = Array.unsafe_get a_arr (in_offset + a_lin) in\n      Array.unsafe_set out_arr (out_offset + k) v\n    done\n\nlet copy_int32 a_arr out_arr va vout start_idx end_idx =\n  let out_offset = View.offset vout in\n  let in_offset = View.offset va in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let out_base = out_offset + start_idx in\n    let in_base = in_offset + start_idx in\n    let n = end_idx - start_idx in\n    for i = 0 to n - 1 do\n      Array.unsafe_set out_arr (out_base + i)\n        (Array.unsafe_get a_arr (in_base + i))\n    done)\n  else\n    let out_shape = shape vout in\n    let a_strides = 
View.strides va in\n    let md_index = Array.make (Array.length out_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_index;\n      let a_lin = Shape.ravel_index md_index a_strides in\n      let v = Array.unsafe_get a_arr (in_offset + a_lin) in\n      Array.unsafe_set out_arr (out_offset + k) v\n    done\n\nlet copy_int64 a_arr out_arr va vout start_idx end_idx =\n  let out_offset = View.offset vout in\n  let in_offset = View.offset va in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let out_base = out_offset + start_idx in\n    let in_base = in_offset + start_idx in\n    let n = end_idx - start_idx in\n    for i = 0 to n - 1 do\n      Array.unsafe_set out_arr (out_base + i)\n        (Array.unsafe_get a_arr (in_base + i))\n    done)\n  else\n    let out_shape = shape vout in\n    let a_strides = View.strides va in\n    let md_index = Array.make (Array.length out_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_index;\n      let a_lin = Shape.ravel_index md_index a_strides in\n      let v = Array.unsafe_get a_arr (in_offset + a_lin) in\n      Array.unsafe_set out_arr (out_offset + k) v\n    done\n\nlet fill_float64 out_arr vout value =\n  let out_offset = View.offset vout in\n  let out_numel = numel vout in\n  for i = 0 to out_numel - 1 do\n    Array.unsafe_set out_arr (out_offset + i) value\n  done\n\nlet fill_float32 out_arr vout value =\n  let out_offset = View.offset vout in\n  let out_numel = numel vout in\n  for i = 0 to out_numel - 1 do\n    Array.unsafe_set out_arr (out_offset + i) value\n  done\n\nlet fill_int8 out_arr vout value =\n  let out_offset = View.offset vout in\n  let out_numel = numel vout in\n  for i = 0 to out_numel - 1 do\n    Array.unsafe_set out_arr (out_offset + i) value\n  done\n\nlet fill_int16 out_arr vout value =\n  let out_offset = View.offset vout in\n  let out_numel = numel vout in\n  for i = 0 to out_numel - 
1 do\n    Array.unsafe_set out_arr (out_offset + i) value\n  done\n\nlet fill_int32 out_arr vout value =\n  let out_offset = View.offset vout in\n  let out_numel = numel vout in\n  for i = 0 to out_numel - 1 do\n    Array.unsafe_set out_arr (out_offset + i) value\n  done\n\nlet fill_int64 out_arr vout value =\n  let out_offset = View.offset vout in\n  let out_numel = numel vout in\n  for i = 0 to out_numel - 1 do\n    Array.unsafe_set out_arr (out_offset + i) value\n  done\n\nlet sum_axis_float64 a_arr out_arr va vout axes keepdims start_idx end_idx =\n  let plan = make_plan axes keepdims va vout in\n  let out_md_index = Array.make plan.out_rank 0 in\n  let in_md_index = Array.make plan.rank 0 in\n  let acc = Array.make_float64 1 in\n  for k = start_idx to end_idx - 1 do\n    Shape.unravel_index_into k plan.out_shape out_md_index;\n    init_input_index plan out_md_index in_md_index;\n    Array.unsafe_set acc 0 (Float_u.of_int 0);\n    let continue = ref true in\n    while !continue do\n      let a_lin = Shape.ravel_index in_md_index plan.in_strides in\n      let v = Array.unsafe_get a_arr (plan.in_offset + a_lin) in\n      let cur = Array.unsafe_get acc 0 in\n      Array.unsafe_set acc 0 (Float_u.add cur v);\n      continue := increment_input_index plan in_md_index\n    done;\n    Array.unsafe_set out_arr (plan.out_offset + k) (Array.unsafe_get acc 0)\n  done\n\nlet sum_axis_float32 a_arr out_arr va vout axes keepdims start_idx end_idx =\n  let plan = make_plan axes keepdims va vout in\n  let out_md_index = Array.make plan.out_rank 0 in\n  let in_md_index = Array.make plan.rank 0 in\n  let acc = Array.make_float32 1 in\n  for k = start_idx to end_idx - 1 do\n    Shape.unravel_index_into k plan.out_shape out_md_index;\n    init_input_index plan out_md_index in_md_index;\n    Array.unsafe_set acc 0 (Float32_u.of_int 0);\n    let continue = ref true in\n    while !continue do\n      let a_lin = Shape.ravel_index in_md_index plan.in_strides in\n      let v = 
Array.unsafe_get a_arr (plan.in_offset + a_lin) in\n      let cur = Array.unsafe_get acc 0 in\n      Array.unsafe_set acc 0 (Float32_u.add cur v);\n      continue := increment_input_index plan in_md_index\n    done;\n    Array.unsafe_set out_arr (plan.out_offset + k) (Array.unsafe_get acc 0)\n  done\n\nlet sum_axis_int8 a_arr out_arr va vout axes keepdims start_idx end_idx =\n  let plan = make_plan axes keepdims va vout in\n  let out_md_index = Array.make plan.out_rank 0 in\n  let in_md_index = Array.make plan.rank 0 in\n  let acc = Array.make_int8 1 in\n  for k = start_idx to end_idx - 1 do\n    Shape.unravel_index_into k plan.out_shape out_md_index;\n    init_input_index plan out_md_index in_md_index;\n    Array.unsafe_set acc 0 #0s;\n    let continue = ref true in\n    while !continue do\n      let a_lin = Shape.ravel_index in_md_index plan.in_strides in\n      let v = Array.unsafe_get a_arr (plan.in_offset + a_lin) in\n      let cur = Array.unsafe_get acc 0 in\n      Array.unsafe_set acc 0 (Int8_u.add cur v);\n      continue := increment_input_index plan in_md_index\n    done;\n    Array.unsafe_set out_arr (plan.out_offset + k) (Array.unsafe_get acc 0)\n  done\n\nlet sum_axis_int16 a_arr out_arr va vout axes keepdims start_idx end_idx =\n  let plan = make_plan axes keepdims va vout in\n  let out_md_index = Array.make plan.out_rank 0 in\n  let in_md_index = Array.make plan.rank 0 in\n  let acc = Array.make_int16 1 in\n  for k = start_idx to end_idx - 1 do\n    Shape.unravel_index_into k plan.out_shape out_md_index;\n    init_input_index plan out_md_index in_md_index;\n    Array.unsafe_set acc 0 #0S;\n    let continue = ref true in\n    while !continue do\n      let a_lin = Shape.ravel_index in_md_index plan.in_strides in\n      let v = Array.unsafe_get a_arr (plan.in_offset + a_lin) in\n      let cur = Array.unsafe_get acc 0 in\n      Array.unsafe_set acc 0 (Int16_u.add cur v);\n      continue := increment_input_index plan in_md_index\n    done;\n    
Array.unsafe_set out_arr (plan.out_offset + k) (Array.unsafe_get acc 0)\n  done\n\nlet sum_axis_int32 a_arr out_arr va vout axes keepdims start_idx end_idx =\n  let plan = make_plan axes keepdims va vout in\n  let out_md_index = Array.make plan.out_rank 0 in\n  let in_md_index = Array.make plan.rank 0 in\n  let acc = Array.make_int32 1 in\n  for k = start_idx to end_idx - 1 do\n    Shape.unravel_index_into k plan.out_shape out_md_index;\n    init_input_index plan out_md_index in_md_index;\n    Array.unsafe_set acc 0 #0l;\n    let continue = ref true in\n    while !continue do\n      let a_lin = Shape.ravel_index in_md_index plan.in_strides in\n      let v = Array.unsafe_get a_arr (plan.in_offset + a_lin) in\n      let cur = Array.unsafe_get acc 0 in\n      Array.unsafe_set acc 0 (Int32_u.add cur v);\n      continue := increment_input_index plan in_md_index\n    done;\n    Array.unsafe_set out_arr (plan.out_offset + k) (Array.unsafe_get acc 0)\n  done\n\nlet sum_axis_int64 a_arr out_arr va vout axes keepdims start_idx end_idx =\n  let plan = make_plan axes keepdims va vout in\n  let out_md_index = Array.make plan.out_rank 0 in\n  let in_md_index = Array.make plan.rank 0 in\n  let acc = Array.make_int64 1 in\n  for k = start_idx to end_idx - 1 do\n    Shape.unravel_index_into k plan.out_shape out_md_index;\n    init_input_index plan out_md_index in_md_index;\n    Array.unsafe_set acc 0 #0L;\n    let continue = ref true in\n    while !continue do\n      let a_lin = Shape.ravel_index in_md_index plan.in_strides in\n      let v = Array.unsafe_get a_arr (plan.in_offset + a_lin) in\n      let cur = Array.unsafe_get acc 0 in\n      Array.unsafe_set acc 0 (Int64_u.add cur v);\n      continue := increment_input_index plan in_md_index\n    done;\n    Array.unsafe_set out_arr (plan.out_offset + k) (Array.unsafe_get acc 0)\n  done\n\nlet sum_all_partial_float64 a_arr va start_idx end_idx =\n  if start_idx >= end_idx then Float_u.of_int 0\n  else if View.is_c_contiguous va then 
(\n    let base = View.offset va + start_idx in\n    let n = end_idx - start_idx in\n    let n8 = n - 7 in\n    let rec unrolled_loop i (acc0 : float64x2#) (acc1 : float64x2#)\n        (acc2 : float64x2#) (acc3 : float64x2#) =\n      if i < n8 then\n        let v0 = Float64x2.Array.unsafe_get a_arr ~idx:(base + i) in\n        let v1 = Float64x2.Array.unsafe_get a_arr ~idx:(base + i + 2) in\n        let v2 = Float64x2.Array.unsafe_get a_arr ~idx:(base + i + 4) in\n        let v3 = Float64x2.Array.unsafe_get a_arr ~idx:(base + i + 6) in\n        unrolled_loop (i + 8) (Float64x2.add acc0 v0) (Float64x2.add acc1 v1)\n          (Float64x2.add acc2 v2) (Float64x2.add acc3 v3)\n      else #(acc0, acc1, acc2, acc3, i)\n    in\n    let #(acc0, acc1, acc2, acc3, i) =\n      unrolled_loop 0 (Float64x2.zero ()) (Float64x2.zero ()) (Float64x2.zero ())\n        (Float64x2.zero ())\n    in\n    let acc01 = Float64x2.add acc0 acc1 in\n    let acc23 = Float64x2.add acc2 acc3 in\n    let acc_vec = Float64x2.add acc01 acc23 in\n    let n2 = n - 1 in\n    let rec simd_loop j (acc : float64x2#) =\n      if j < n2 then\n        let vec = Float64x2.Array.unsafe_get a_arr ~idx:(base + j) in\n        simd_loop (j + 2) (Float64x2.add acc vec)\n      else acc\n    in\n    let acc_vec = simd_loop i acc_vec in\n    let h = Float64x2.horizontal_add acc_vec acc_vec in\n    let simd_result = Float64x2.extract0 h in\n    let start_remainder = (n / 2) * 2 in\n    let rec scalar_loop k (acc : float#) =\n      if k < n then\n        scalar_loop (k + 1) (Float_u.add acc (Array.unsafe_get a_arr (base + k)))\n      else acc\n    in\n    scalar_loop start_remainder simd_result)\n  else\n    let acc = Array.make_float64 1 in\n    Array.unsafe_set acc 0 (Float_u.of_int 0);\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let md_index = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      
Shape.unravel_index_into k a_shape md_index;\n      let a_lin = Shape.ravel_index md_index a_strides in\n      let v = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let cur = Array.unsafe_get acc 0 in\n      Array.unsafe_set acc 0 (Float_u.add cur v)\n    done;\n    Array.unsafe_get acc 0\n\nlet sum_all_partial_float32 a_arr va start_idx end_idx =\n  if start_idx >= end_idx then Float32_u.of_int 0\n  else if View.is_c_contiguous va then (\n    let base = View.offset va + start_idx in\n    let n = end_idx - start_idx in\n    let n16 = n - 15 in\n    let rec unrolled_loop i (acc0 : float32x4#) (acc1 : float32x4#)\n        (acc2 : float32x4#) (acc3 : float32x4#) =\n      if i < n16 then\n        let v0 = Float32x4.Array.unsafe_get a_arr ~idx:(base + i) in\n        let v1 = Float32x4.Array.unsafe_get a_arr ~idx:(base + i + 4) in\n        let v2 = Float32x4.Array.unsafe_get a_arr ~idx:(base + i + 8) in\n        let v3 = Float32x4.Array.unsafe_get a_arr ~idx:(base + i + 12) in\n        unrolled_loop (i + 16) (Float32x4.add acc0 v0) (Float32x4.add acc1 v1)\n          (Float32x4.add acc2 v2) (Float32x4.add acc3 v3)\n      else #(acc0, acc1, acc2, acc3, i)\n    in\n    let #(acc0, acc1, acc2, acc3, i) =\n      unrolled_loop 0 (Float32x4.zero ()) (Float32x4.zero ()) (Float32x4.zero ())\n        (Float32x4.zero ())\n    in\n    let acc01 = Float32x4.add acc0 acc1 in\n    let acc23 = Float32x4.add acc2 acc3 in\n    let acc_vec = Float32x4.add acc01 acc23 in\n    let n4 = n - 3 in\n    let rec simd_loop j (acc : float32x4#) =\n      if j < n4 then\n        let vec = Float32x4.Array.unsafe_get a_arr ~idx:(base + j) in\n        simd_loop (j + 4) (Float32x4.add acc vec)\n      else acc\n    in\n    let acc_vec = simd_loop i acc_vec in\n    let h1 = Float32x4.horizontal_add acc_vec acc_vec in\n    let h2 = Float32x4.horizontal_add h1 h1 in\n    let simd_result = Float32x4.extract0 h2 in\n    let start_remainder = (n / 4) * 4 in\n    let rec scalar_loop k (acc : float32#) =\n  
    if k < n then\n        scalar_loop (k + 1) (Float32_u.add acc (Array.unsafe_get a_arr (base + k)))\n      else acc\n    in\n    scalar_loop start_remainder simd_result)\n  else\n    let acc = Array.make_float32 1 in\n    Array.unsafe_set acc 0 (Float32_u.of_int 0);\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let md_index = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k a_shape md_index;\n      let a_lin = Shape.ravel_index md_index a_strides in\n      let v = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let cur = Array.unsafe_get acc 0 in\n      Array.unsafe_set acc 0 (Float32_u.add cur v)\n    done;\n    Array.unsafe_get acc 0\n\nlet sum_all_partial_int8 a_arr va start_idx end_idx =\n  if start_idx >= end_idx then #0s\n  else\n    let acc = Array.make_int8 1 in\n    Array.unsafe_set acc 0 #0s;\n    if View.is_c_contiguous va then (\n      let base = View.offset va + start_idx in\n      let last = View.offset va + end_idx in\n      for i = base to last - 1 do\n        let cur = Array.unsafe_get acc 0 in\n        Array.unsafe_set acc 0 (Int8_u.add cur (Array.unsafe_get a_arr i))\n      done;\n      Array.unsafe_get acc 0)\n    else\n      let a_shape = shape va in\n      let a_strides = View.strides va in\n      let a_offset = View.offset va in\n      let md_index = Array.make (Array.length a_shape) 0 in\n      for k = start_idx to end_idx - 1 do\n        Shape.unravel_index_into k a_shape md_index;\n        let a_lin = Shape.ravel_index md_index a_strides in\n        let v = Array.unsafe_get a_arr (a_offset + a_lin) in\n        let cur = Array.unsafe_get acc 0 in\n        Array.unsafe_set acc 0 (Int8_u.add cur v)\n      done;\n      Array.unsafe_get acc 0\n\nlet sum_all_partial_int16 a_arr va start_idx end_idx =\n  if start_idx >= end_idx then #0S\n  else\n    let acc = Array.make_int16 1 in\n    Array.unsafe_set acc 0 
#0S;\n    if View.is_c_contiguous va then (\n      let base = View.offset va + start_idx in\n      let last = View.offset va + end_idx in\n      for i = base to last - 1 do\n        let cur = Array.unsafe_get acc 0 in\n        Array.unsafe_set acc 0 (Int16_u.add cur (Array.unsafe_get a_arr i))\n      done;\n      Array.unsafe_get acc 0)\n    else\n      let a_shape = shape va in\n      let a_strides = View.strides va in\n      let a_offset = View.offset va in\n      let md_index = Array.make (Array.length a_shape) 0 in\n      for k = start_idx to end_idx - 1 do\n        Shape.unravel_index_into k a_shape md_index;\n        let a_lin = Shape.ravel_index md_index a_strides in\n        let v = Array.unsafe_get a_arr (a_offset + a_lin) in\n        let cur = Array.unsafe_get acc 0 in\n        Array.unsafe_set acc 0 (Int16_u.add cur v)\n      done;\n      Array.unsafe_get acc 0\n\nlet sum_all_partial_int32 a_arr va start_idx end_idx =\n  if start_idx >= end_idx then #0l\n  else\n    let acc = Array.make_int32 1 in\n    Array.unsafe_set acc 0 #0l;\n    if View.is_c_contiguous va then (\n      let base = View.offset va + start_idx in\n      let last = View.offset va + end_idx in\n      for i = base to last - 1 do\n        let cur = Array.unsafe_get acc 0 in\n        Array.unsafe_set acc 0 (Int32_u.add cur (Array.unsafe_get a_arr i))\n      done;\n      Array.unsafe_get acc 0)\n    else\n      let a_shape = shape va in\n      let a_strides = View.strides va in\n      let a_offset = View.offset va in\n      let md_index = Array.make (Array.length a_shape) 0 in\n      for k = start_idx to end_idx - 1 do\n        Shape.unravel_index_into k a_shape md_index;\n        let a_lin = Shape.ravel_index md_index a_strides in\n        let v = Array.unsafe_get a_arr (a_offset + a_lin) in\n        let cur = Array.unsafe_get acc 0 in\n        Array.unsafe_set acc 0 (Int32_u.add cur v)\n      done;\n      Array.unsafe_get acc 0\n\nlet sum_all_partial_int64 a_arr va start_idx end_idx =\n  
if start_idx >= end_idx then #0L\n  else\n    let acc = Array.make_int64 1 in\n    Array.unsafe_set acc 0 #0L;\n    if View.is_c_contiguous va then (\n      let base = View.offset va + start_idx in\n      let last = View.offset va + end_idx in\n      for i = base to last - 1 do\n        let cur = Array.unsafe_get acc 0 in\n        Array.unsafe_set acc 0 (Int64_u.add cur (Array.unsafe_get a_arr i))\n      done;\n      Array.unsafe_get acc 0)\n    else\n      let a_shape = shape va in\n      let a_strides = View.strides va in\n      let a_offset = View.offset va in\n      let md_index = Array.make (Array.length a_shape) 0 in\n      for k = start_idx to end_idx - 1 do\n        Shape.unravel_index_into k a_shape md_index;\n        let a_lin = Shape.ravel_index md_index a_strides in\n        let v = Array.unsafe_get a_arr (a_offset + a_lin) in\n        let cur = Array.unsafe_get acc 0 in\n        Array.unsafe_set acc 0 (Int64_u.add cur v)\n      done;\n      Array.unsafe_get acc 0\n\nlet prod_axis_float64 a_arr out_arr va vout axes keepdims start_idx end_idx =\n  let plan = make_plan axes keepdims va vout in\n  let out_md_index = Array.make plan.out_rank 0 in\n  let in_md_index = Array.make plan.rank 0 in\n  let acc = Array.make_float64 1 in\n  for k = start_idx to end_idx - 1 do\n    Shape.unravel_index_into k plan.out_shape out_md_index;\n    init_input_index plan out_md_index in_md_index;\n    Array.unsafe_set acc 0 (Float_u.of_int 1);\n    let continue = ref true in\n    while !continue do\n      let a_lin = Shape.ravel_index in_md_index plan.in_strides in\n      let v = Array.unsafe_get a_arr (plan.in_offset + a_lin) in\n      let cur = Array.unsafe_get acc 0 in\n      Array.unsafe_set acc 0 (Float_u.mul cur v);\n      continue := increment_input_index plan in_md_index\n    done;\n    Array.unsafe_set out_arr (plan.out_offset + k) (Array.unsafe_get acc 0)\n  done\n\nlet prod_axis_float32 a_arr out_arr va vout axes keepdims start_idx end_idx =\n  let plan = 
make_plan axes keepdims va vout in\n  let out_md_index = Array.make plan.out_rank 0 in\n  let in_md_index = Array.make plan.rank 0 in\n  let acc = Array.make_float32 1 in\n  for k = start_idx to end_idx - 1 do\n    Shape.unravel_index_into k plan.out_shape out_md_index;\n    init_input_index plan out_md_index in_md_index;\n    Array.unsafe_set acc 0 (Float32_u.of_int 1);\n    let continue = ref true in\n    while !continue do\n      let a_lin = Shape.ravel_index in_md_index plan.in_strides in\n      let v = Array.unsafe_get a_arr (plan.in_offset + a_lin) in\n      let cur = Array.unsafe_get acc 0 in\n      Array.unsafe_set acc 0 (Float32_u.mul cur v);\n      continue := increment_input_index plan in_md_index\n    done;\n    Array.unsafe_set out_arr (plan.out_offset + k) (Array.unsafe_get acc 0)\n  done\n\nlet prod_axis_int8 a_arr out_arr va vout axes keepdims start_idx end_idx =\n  let plan = make_plan axes keepdims va vout in\n  let out_md_index = Array.make plan.out_rank 0 in\n  let in_md_index = Array.make plan.rank 0 in\n  let acc = Array.make_int8 1 in\n  for k = start_idx to end_idx - 1 do\n    Shape.unravel_index_into k plan.out_shape out_md_index;\n    init_input_index plan out_md_index in_md_index;\n    Array.unsafe_set acc 0 #1s;\n    let continue = ref true in\n    while !continue do\n      let a_lin = Shape.ravel_index in_md_index plan.in_strides in\n      let v = Array.unsafe_get a_arr (plan.in_offset + a_lin) in\n      let cur = Array.unsafe_get acc 0 in\n      Array.unsafe_set acc 0 (Int8_u.mul cur v);\n      continue := increment_input_index plan in_md_index\n    done;\n    Array.unsafe_set out_arr (plan.out_offset + k) (Array.unsafe_get acc 0)\n  done\n\nlet prod_axis_int16 a_arr out_arr va vout axes keepdims start_idx end_idx =\n  let plan = make_plan axes keepdims va vout in\n  let out_md_index = Array.make plan.out_rank 0 in\n  let in_md_index = Array.make plan.rank 0 in\n  let acc = Array.make_int16 1 in\n  for k = start_idx to end_idx - 1 do\n 
   Shape.unravel_index_into k plan.out_shape out_md_index;\n    init_input_index plan out_md_index in_md_index;\n    Array.unsafe_set acc 0 #1S;\n    let continue = ref true in\n    while !continue do\n      let a_lin = Shape.ravel_index in_md_index plan.in_strides in\n      let v = Array.unsafe_get a_arr (plan.in_offset + a_lin) in\n      let cur = Array.unsafe_get acc 0 in\n      Array.unsafe_set acc 0 (Int16_u.mul cur v);\n      continue := increment_input_index plan in_md_index\n    done;\n    Array.unsafe_set out_arr (plan.out_offset + k) (Array.unsafe_get acc 0)\n  done\n\nlet prod_axis_int32 a_arr out_arr va vout axes keepdims start_idx end_idx =\n  let plan = make_plan axes keepdims va vout in\n  let out_md_index = Array.make plan.out_rank 0 in\n  let in_md_index = Array.make plan.rank 0 in\n  let acc = Array.make_int32 1 in\n  for k = start_idx to end_idx - 1 do\n    Shape.unravel_index_into k plan.out_shape out_md_index;\n    init_input_index plan out_md_index in_md_index;\n    Array.unsafe_set acc 0 #1l;\n    let continue = ref true in\n    while !continue do\n      let a_lin = Shape.ravel_index in_md_index plan.in_strides in\n      let v = Array.unsafe_get a_arr (plan.in_offset + a_lin) in\n      let cur = Array.unsafe_get acc 0 in\n      Array.unsafe_set acc 0 (Int32_u.mul cur v);\n      continue := increment_input_index plan in_md_index\n    done;\n    Array.unsafe_set out_arr (plan.out_offset + k) (Array.unsafe_get acc 0)\n  done\n\nlet prod_axis_int64 a_arr out_arr va vout axes keepdims start_idx end_idx =\n  let plan = make_plan axes keepdims va vout in\n  let out_md_index = Array.make plan.out_rank 0 in\n  let in_md_index = Array.make plan.rank 0 in\n  let acc = Array.make_int64 1 in\n  for k = start_idx to end_idx - 1 do\n    Shape.unravel_index_into k plan.out_shape out_md_index;\n    init_input_index plan out_md_index in_md_index;\n    Array.unsafe_set acc 0 #1L;\n    let continue = ref true in\n    while !continue do\n      let a_lin = 
Shape.ravel_index in_md_index plan.in_strides in\n      let v = Array.unsafe_get a_arr (plan.in_offset + a_lin) in\n      let cur = Array.unsafe_get acc 0 in\n      Array.unsafe_set acc 0 (Int64_u.mul cur v);\n      continue := increment_input_index plan in_md_index\n    done;\n    Array.unsafe_set out_arr (plan.out_offset + k) (Array.unsafe_get acc 0)\n  done\n\nlet prod_all_partial_float64 a_arr va start_idx end_idx =\n  if start_idx >= end_idx then Float_u.of_int 1\n  else if View.is_c_contiguous va then (\n    let base = View.offset va + start_idx in\n    let n = end_idx - start_idx in\n    let n8 = n - 7 in\n    let rec unrolled_loop i (acc0 : float64x2#) (acc1 : float64x2#)\n        (acc2 : float64x2#) (acc3 : float64x2#) =\n      if i < n8 then\n        let v0 = Float64x2.Array.unsafe_get a_arr ~idx:(base + i) in\n        let v1 = Float64x2.Array.unsafe_get a_arr ~idx:(base + i + 2) in\n        let v2 = Float64x2.Array.unsafe_get a_arr ~idx:(base + i + 4) in\n        let v3 = Float64x2.Array.unsafe_get a_arr ~idx:(base + i + 6) in\n        unrolled_loop (i + 8) (Float64x2.mul acc0 v0) (Float64x2.mul acc1 v1)\n          (Float64x2.mul acc2 v2) (Float64x2.mul acc3 v3)\n      else #(acc0, acc1, acc2, acc3, i)\n    in\n    let #(acc0, acc1, acc2, acc3, i) =\n      unrolled_loop 0 (Float64x2.one ()) (Float64x2.one ()) (Float64x2.one ())\n        (Float64x2.one ())\n    in\n    let acc01 = Float64x2.mul acc0 acc1 in\n    let acc23 = Float64x2.mul acc2 acc3 in\n    let acc_vec = Float64x2.mul acc01 acc23 in\n    let n2 = n - 1 in\n    let rec simd_loop j (acc : float64x2#) =\n      if j < n2 then\n        let vec = Float64x2.Array.unsafe_get a_arr ~idx:(base + j) in\n        simd_loop (j + 2) (Float64x2.mul acc vec)\n      else acc\n    in\n    let acc_vec = simd_loop i acc_vec in\n    let #(v0, v1) = Float64x2.splat acc_vec in\n    let simd_result = Float_u.mul v0 v1 in\n    let start_remainder = (n / 2) * 2 in\n    let rec scalar_loop k (acc : float#) =\n      if 
k < n then\n        scalar_loop (k + 1) (Float_u.mul acc (Array.unsafe_get a_arr (base + k)))\n      else acc\n    in\n    scalar_loop start_remainder simd_result)\n  else\n    let acc = Array.make_float64 1 in\n    Array.unsafe_set acc 0 (Float_u.of_int 1);\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let md_index = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k a_shape md_index;\n      let a_lin = Shape.ravel_index md_index a_strides in\n      let v = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let cur = Array.unsafe_get acc 0 in\n      Array.unsafe_set acc 0 (Float_u.mul cur v)\n    done;\n    Array.unsafe_get acc 0\n\nlet prod_all_partial_float32 a_arr va start_idx end_idx =\n  if start_idx >= end_idx then Float32_u.of_int 1\n  else if View.is_c_contiguous va then (\n    let base = View.offset va + start_idx in\n    let n = end_idx - start_idx in\n    let n16 = n - 15 in\n    let rec unrolled_loop i (acc0 : float32x4#) (acc1 : float32x4#)\n        (acc2 : float32x4#) (acc3 : float32x4#) =\n      if i < n16 then\n        let v0 = Float32x4.Array.unsafe_get a_arr ~idx:(base + i) in\n        let v1 = Float32x4.Array.unsafe_get a_arr ~idx:(base + i + 4) in\n        let v2 = Float32x4.Array.unsafe_get a_arr ~idx:(base + i + 8) in\n        let v3 = Float32x4.Array.unsafe_get a_arr ~idx:(base + i + 12) in\n        unrolled_loop (i + 16) (Float32x4.mul acc0 v0) (Float32x4.mul acc1 v1)\n          (Float32x4.mul acc2 v2) (Float32x4.mul acc3 v3)\n      else #(acc0, acc1, acc2, acc3, i)\n    in\n    let #(acc0, acc1, acc2, acc3, i) =\n      unrolled_loop 0 (Float32x4.one ()) (Float32x4.one ()) (Float32x4.one ())\n        (Float32x4.one ())\n    in\n    let acc01 = Float32x4.mul acc0 acc1 in\n    let acc23 = Float32x4.mul acc2 acc3 in\n    let acc_vec = Float32x4.mul acc01 acc23 in\n    let n4 = n - 3 in\n    let rec simd_loop j 
(acc : float32x4#) =\n      if j < n4 then\n        let vec = Float32x4.Array.unsafe_get a_arr ~idx:(base + j) in\n        simd_loop (j + 4) (Float32x4.mul acc vec)\n      else acc\n    in\n    let acc_vec = simd_loop i acc_vec in\n    let #(v0, v1, v2, v3) = Float32x4.splat acc_vec in\n    let simd_result = Float32_u.mul (Float32_u.mul v0 v1) (Float32_u.mul v2 v3) in\n    let start_remainder = (n / 4) * 4 in\n    let rec scalar_loop k (acc : float32#) =\n      if k < n then\n        scalar_loop (k + 1) (Float32_u.mul acc (Array.unsafe_get a_arr (base + k)))\n      else acc\n    in\n    scalar_loop start_remainder simd_result)\n  else\n    let acc = Array.make_float32 1 in\n    Array.unsafe_set acc 0 (Float32_u.of_int 1);\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let md_index = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k a_shape md_index;\n      let a_lin = Shape.ravel_index md_index a_strides in\n      let v = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let cur = Array.unsafe_get acc 0 in\n      Array.unsafe_set acc 0 (Float32_u.mul cur v)\n    done;\n    Array.unsafe_get acc 0\n\nlet prod_all_partial_int8 a_arr va start_idx end_idx =\n  if start_idx >= end_idx then #1s\n  else\n    let acc = Array.make_int8 1 in\n    Array.unsafe_set acc 0 #1s;\n    if View.is_c_contiguous va then (\n      let base = View.offset va + start_idx in\n      let last = View.offset va + end_idx in\n      for i = base to last - 1 do\n        let cur = Array.unsafe_get acc 0 in\n        Array.unsafe_set acc 0 (Int8_u.mul cur (Array.unsafe_get a_arr i))\n      done;\n      Array.unsafe_get acc 0)\n    else\n      let a_shape = shape va in\n      let a_strides = View.strides va in\n      let a_offset = View.offset va in\n      let md_index = Array.make (Array.length a_shape) 0 in\n      for k = start_idx to end_idx - 1 do\n        
Shape.unravel_index_into k a_shape md_index;\n        let a_lin = Shape.ravel_index md_index a_strides in\n        let v = Array.unsafe_get a_arr (a_offset + a_lin) in\n        let cur = Array.unsafe_get acc 0 in\n        Array.unsafe_set acc 0 (Int8_u.mul cur v)\n      done;\n      Array.unsafe_get acc 0\n\nlet prod_all_partial_int16 a_arr va start_idx end_idx =\n  if start_idx >= end_idx then #1S\n  else\n    let acc = Array.make_int16 1 in\n    Array.unsafe_set acc 0 #1S;\n    if View.is_c_contiguous va then (\n      let base = View.offset va + start_idx in\n      let last = View.offset va + end_idx in\n      for i = base to last - 1 do\n        let cur = Array.unsafe_get acc 0 in\n        Array.unsafe_set acc 0 (Int16_u.mul cur (Array.unsafe_get a_arr i))\n      done;\n      Array.unsafe_get acc 0)\n    else\n      let a_shape = shape va in\n      let a_strides = View.strides va in\n      let a_offset = View.offset va in\n      let md_index = Array.make (Array.length a_shape) 0 in\n      for k = start_idx to end_idx - 1 do\n        Shape.unravel_index_into k a_shape md_index;\n        let a_lin = Shape.ravel_index md_index a_strides in\n        let v = Array.unsafe_get a_arr (a_offset + a_lin) in\n        let cur = Array.unsafe_get acc 0 in\n        Array.unsafe_set acc 0 (Int16_u.mul cur v)\n      done;\n      Array.unsafe_get acc 0\n\nlet prod_all_partial_int32 a_arr va start_idx end_idx =\n  if start_idx >= end_idx then #1l\n  else\n    let acc = Array.make_int32 1 in\n    Array.unsafe_set acc 0 #1l;\n    if View.is_c_contiguous va then (\n      let base = View.offset va + start_idx in\n      let last = View.offset va + end_idx in\n      for i = base to last - 1 do\n        let cur = Array.unsafe_get acc 0 in\n        Array.unsafe_set acc 0 (Int32_u.mul cur (Array.unsafe_get a_arr i))\n      done;\n      Array.unsafe_get acc 0)\n    else\n      let a_shape = shape va in\n      let a_strides = View.strides va in\n      let a_offset = View.offset va in\n      
let md_index = Array.make (Array.length a_shape) 0 in\n      for k = start_idx to end_idx - 1 do\n        Shape.unravel_index_into k a_shape md_index;\n        let a_lin = Shape.ravel_index md_index a_strides in\n        let v = Array.unsafe_get a_arr (a_offset + a_lin) in\n        let cur = Array.unsafe_get acc 0 in\n        Array.unsafe_set acc 0 (Int32_u.mul cur v)\n      done;\n      Array.unsafe_get acc 0\n\nlet prod_all_partial_int64 a_arr va start_idx end_idx =\n  if start_idx >= end_idx then #1L\n  else\n    let acc = Array.make_int64 1 in\n    Array.unsafe_set acc 0 #1L;\n    if View.is_c_contiguous va then (\n      let base = View.offset va + start_idx in\n      let last = View.offset va + end_idx in\n      for i = base to last - 1 do\n        let cur = Array.unsafe_get acc 0 in\n        Array.unsafe_set acc 0 (Int64_u.mul cur (Array.unsafe_get a_arr i))\n      done;\n      Array.unsafe_get acc 0)\n    else\n      let a_shape = shape va in\n      let a_strides = View.strides va in\n      let a_offset = View.offset va in\n      let md_index = Array.make (Array.length a_shape) 0 in\n      for k = start_idx to end_idx - 1 do\n        Shape.unravel_index_into k a_shape md_index;\n        let a_lin = Shape.ravel_index md_index a_strides in\n        let v = Array.unsafe_get a_arr (a_offset + a_lin) in\n        let cur = Array.unsafe_get acc 0 in\n        Array.unsafe_set acc 0 (Int64_u.mul cur v)\n      done;\n      Array.unsafe_get acc 0\n\nlet min_axis_float64 a_arr out_arr va vout axes keepdims start_idx end_idx =\n  let plan = make_plan axes keepdims va vout in\n  let out_md_index = Array.make plan.out_rank 0 in\n  let in_md_index = Array.make plan.rank 0 in\n  let acc = Array.make_float64 1 in\n  for k = start_idx to end_idx - 1 do\n    Shape.unravel_index_into k plan.out_shape out_md_index;\n    init_input_index plan out_md_index in_md_index;\n    let a_lin = Shape.ravel_index in_md_index plan.in_strides in\n    Array.unsafe_set acc 0 (Array.unsafe_get 
a_arr (plan.in_offset + a_lin));\n    let continue = ref (increment_input_index plan in_md_index) in\n    while !continue do\n      let a_lin = Shape.ravel_index in_md_index plan.in_strides in\n      let v = Array.unsafe_get a_arr (plan.in_offset + a_lin) in\n      let cur = Array.unsafe_get acc 0 in\n      Array.unsafe_set acc 0 (Float_u.min cur v);\n      continue := increment_input_index plan in_md_index\n    done;\n    Array.unsafe_set out_arr (plan.out_offset + k) (Array.unsafe_get acc 0)\n  done\n\nlet min_axis_float32 a_arr out_arr va vout axes keepdims start_idx end_idx =\n  let plan = make_plan axes keepdims va vout in\n  let out_md_index = Array.make plan.out_rank 0 in\n  let in_md_index = Array.make plan.rank 0 in\n  let acc = Array.make_float32 1 in\n  for k = start_idx to end_idx - 1 do\n    Shape.unravel_index_into k plan.out_shape out_md_index;\n    init_input_index plan out_md_index in_md_index;\n    let a_lin = Shape.ravel_index in_md_index plan.in_strides in\n    Array.unsafe_set acc 0 (Array.unsafe_get a_arr (plan.in_offset + a_lin));\n    let continue = ref (increment_input_index plan in_md_index) in\n    while !continue do\n      let a_lin = Shape.ravel_index in_md_index plan.in_strides in\n      let v = Array.unsafe_get a_arr (plan.in_offset + a_lin) in\n      let cur = Array.unsafe_get acc 0 in\n      Array.unsafe_set acc 0 (Float32_u.min cur v);\n      continue := increment_input_index plan in_md_index\n    done;\n    Array.unsafe_set out_arr (plan.out_offset + k) (Array.unsafe_get acc 0)\n  done\n\nlet min_axis_int8 a_arr out_arr va vout axes keepdims start_idx end_idx =\n  let plan = make_plan axes keepdims va vout in\n  let out_md_index = Array.make plan.out_rank 0 in\n  let in_md_index = Array.make plan.rank 0 in\n  let acc = Array.make_int8 1 in\n  for k = start_idx to end_idx - 1 do\n    Shape.unravel_index_into k plan.out_shape out_md_index;\n    init_input_index plan out_md_index in_md_index;\n    let a_lin = Shape.ravel_index 
in_md_index plan.in_strides in\n    Array.unsafe_set acc 0 (Array.unsafe_get a_arr (plan.in_offset + a_lin));\n    let continue = ref (increment_input_index plan in_md_index) in\n    while !continue do\n      let a_lin = Shape.ravel_index in_md_index plan.in_strides in\n      let v = Array.unsafe_get a_arr (plan.in_offset + a_lin) in\n      let cur = Array.unsafe_get acc 0 in\n      Array.unsafe_set acc 0 (Int8_u.min cur v);\n      continue := increment_input_index plan in_md_index\n    done;\n    Array.unsafe_set out_arr (plan.out_offset + k) (Array.unsafe_get acc 0)\n  done\n\nlet min_axis_int16 a_arr out_arr va vout axes keepdims start_idx end_idx =\n  let plan = make_plan axes keepdims va vout in\n  let out_md_index = Array.make plan.out_rank 0 in\n  let in_md_index = Array.make plan.rank 0 in\n  let acc = Array.make_int16 1 in\n  for k = start_idx to end_idx - 1 do\n    Shape.unravel_index_into k plan.out_shape out_md_index;\n    init_input_index plan out_md_index in_md_index;\n    let a_lin = Shape.ravel_index in_md_index plan.in_strides in\n    Array.unsafe_set acc 0 (Array.unsafe_get a_arr (plan.in_offset + a_lin));\n    let continue = ref (increment_input_index plan in_md_index) in\n    while !continue do\n      let a_lin = Shape.ravel_index in_md_index plan.in_strides in\n      let v = Array.unsafe_get a_arr (plan.in_offset + a_lin) in\n      let cur = Array.unsafe_get acc 0 in\n      Array.unsafe_set acc 0 (Int16_u.min cur v);\n      continue := increment_input_index plan in_md_index\n    done;\n    Array.unsafe_set out_arr (plan.out_offset + k) (Array.unsafe_get acc 0)\n  done\n\nlet min_axis_int32 a_arr out_arr va vout axes keepdims start_idx end_idx =\n  let plan = make_plan axes keepdims va vout in\n  let out_md_index = Array.make plan.out_rank 0 in\n  let in_md_index = Array.make plan.rank 0 in\n  let acc = Array.make_int32 1 in\n  for k = start_idx to end_idx - 1 do\n    Shape.unravel_index_into k plan.out_shape out_md_index;\n    init_input_index 
plan out_md_index in_md_index;\n    let a_lin = Shape.ravel_index in_md_index plan.in_strides in\n    Array.unsafe_set acc 0 (Array.unsafe_get a_arr (plan.in_offset + a_lin));\n    let continue = ref (increment_input_index plan in_md_index) in\n    while !continue do\n      let a_lin = Shape.ravel_index in_md_index plan.in_strides in\n      let v = Array.unsafe_get a_arr (plan.in_offset + a_lin) in\n      let cur = Array.unsafe_get acc 0 in\n      Array.unsafe_set acc 0 (Int32_u.min cur v);\n      continue := increment_input_index plan in_md_index\n    done;\n    Array.unsafe_set out_arr (plan.out_offset + k) (Array.unsafe_get acc 0)\n  done\n\nlet min_axis_int64 a_arr out_arr va vout axes keepdims start_idx end_idx =\n  let plan = make_plan axes keepdims va vout in\n  let out_md_index = Array.make plan.out_rank 0 in\n  let in_md_index = Array.make plan.rank 0 in\n  let acc = Array.make_int64 1 in\n  for k = start_idx to end_idx - 1 do\n    Shape.unravel_index_into k plan.out_shape out_md_index;\n    init_input_index plan out_md_index in_md_index;\n    let a_lin = Shape.ravel_index in_md_index plan.in_strides in\n    Array.unsafe_set acc 0 (Array.unsafe_get a_arr (plan.in_offset + a_lin));\n    let continue = ref (increment_input_index plan in_md_index) in\n    while !continue do\n      let a_lin = Shape.ravel_index in_md_index plan.in_strides in\n      let v = Array.unsafe_get a_arr (plan.in_offset + a_lin) in\n      let cur = Array.unsafe_get acc 0 in\n      Array.unsafe_set acc 0 (Int64_u.min cur v);\n      continue := increment_input_index plan in_md_index\n    done;\n    Array.unsafe_set out_arr (plan.out_offset + k) (Array.unsafe_get acc 0)\n  done\n\nlet min_all_float64 a_arr va start_idx end_idx =\n  if View.is_c_contiguous va then (\n    let base = View.offset va + start_idx in\n    let n = end_idx - start_idx in\n    if n < 2 then Array.unsafe_get a_arr base\n    else\n      let n8 = n - 7 in\n      let first_vec = Float64x2.Array.unsafe_get a_arr 
~idx:base in\n      (* Seed every accumulator with the first vector rather than an identity\n         constant; min is idempotent, so the duplicated seed cannot skew the\n         result. The unrolled loop therefore starts at i = 2. *)\n      let rec unrolled_loop i (acc0 : float64x2#) (acc1 : float64x2#)\n          (acc2 : float64x2#) (acc3 : float64x2#) =\n        if i < n8 then\n          let v0 = Float64x2.Array.unsafe_get a_arr ~idx:(base + i) in\n          let v1 = Float64x2.Array.unsafe_get a_arr ~idx:(base + i + 2) in\n          let v2 = Float64x2.Array.unsafe_get a_arr ~idx:(base + i + 4) in\n          let v3 = Float64x2.Array.unsafe_get a_arr ~idx:(base + i + 6) in\n          unrolled_loop (i + 8) (Float64x2.min acc0 v0) (Float64x2.min acc1 v1)\n            (Float64x2.min acc2 v2) (Float64x2.min acc3 v3)\n        else #(acc0, acc1, acc2, acc3, i)\n      in\n      let #(acc0, acc1, acc2, acc3, i) =\n        unrolled_loop 2 first_vec first_vec first_vec first_vec\n      in\n      let acc01 = Float64x2.min acc0 acc1 in\n      let acc23 = Float64x2.min acc2 acc3 in\n      let acc_vec = Float64x2.min acc01 acc23 in\n      let n2 = n - 1 in\n      let rec simd_loop j (acc : float64x2#) =\n        if j < n2 then\n          let vec = Float64x2.Array.unsafe_get a_arr ~idx:(base + j) in\n          simd_loop (j + 2) (Float64x2.min acc vec)\n        else acc\n      in\n      let acc_vec = simd_loop i acc_vec in\n      let #(v0, v1) = Float64x2.splat acc_vec in\n      let simd_result = Float_u.min v0 v1 in\n      let start_remainder = (n / 2) * 2 in\n      let rec scalar_loop k (acc : float#) =\n        if k < n then\n          scalar_loop (k + 1) (Float_u.min acc (Array.unsafe_get a_arr (base + k)))\n        else acc\n      in\n      scalar_loop start_remainder simd_result)\n  else\n    (* Strided fallback: seed with the first element, then fold min over the\n       rest of the flat index range. *)\n    let acc = Array.make_float64 1 in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let md_index = Array.make (Array.length a_shape) 0 in\n    Shape.unravel_index_into start_idx a_shape md_index;\n    let first_lin = Shape.ravel_index md_index a_strides in\n    Array.unsafe_set acc 0 (Array.unsafe_get a_arr (a_offset + first_lin));\n    for 
k = start_idx + 1 to end_idx - 1 do\n      Shape.unravel_index_into k a_shape md_index;\n      let a_lin = Shape.ravel_index md_index a_strides in\n      let v = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let cur = Array.unsafe_get acc 0 in\n      Array.unsafe_set acc 0 (Float_u.min cur v)\n    done;\n    Array.unsafe_get acc 0\n\nlet min_all_float32 a_arr va start_idx end_idx =\n  if View.is_c_contiguous va then (\n    let base = View.offset va + start_idx in\n    let n = end_idx - start_idx in\n    if n < 4 then (\n      let rec scalar_loop i (acc : float32#) =\n        if i < n then\n          scalar_loop (i + 1) (Float32_u.min acc (Array.unsafe_get a_arr (base + i)))\n        else acc\n      in\n      scalar_loop 1 (Array.unsafe_get a_arr base))\n    else\n      let n16 = n - 15 in\n      let first_vec = Float32x4.Array.unsafe_get a_arr ~idx:base in\n      let rec unrolled_loop i (acc0 : float32x4#) (acc1 : float32x4#)\n          (acc2 : float32x4#) (acc3 : float32x4#) =\n        if i < n16 then\n          let v0 = Float32x4.Array.unsafe_get a_arr ~idx:(base + i) in\n          let v1 = Float32x4.Array.unsafe_get a_arr ~idx:(base + i + 4) in\n          let v2 = Float32x4.Array.unsafe_get a_arr ~idx:(base + i + 8) in\n          let v3 = Float32x4.Array.unsafe_get a_arr ~idx:(base + i + 12) in\n          unrolled_loop (i + 16) (Float32x4.min acc0 v0) (Float32x4.min acc1 v1)\n            (Float32x4.min acc2 v2) (Float32x4.min acc3 v3)\n        else #(acc0, acc1, acc2, acc3, i)\n      in\n      let #(acc0, acc1, acc2, acc3, i) =\n        unrolled_loop 4 first_vec first_vec first_vec first_vec\n      in\n      let acc01 = Float32x4.min acc0 acc1 in\n      let acc23 = Float32x4.min acc2 acc3 in\n      let acc_vec = Float32x4.min acc01 acc23 in\n      let n4 = n - 3 in\n      let rec simd_loop j (acc : float32x4#) =\n        if j < n4 then\n          let vec = Float32x4.Array.unsafe_get a_arr ~idx:(base + j) in\n          simd_loop (j + 4) (Float32x4.min acc 
vec)\n        else acc\n      in\n      let acc_vec = simd_loop i acc_vec in\n      let #(v0, v1, v2, v3) = Float32x4.splat acc_vec in\n      let simd_result = Float32_u.min (Float32_u.min v0 v1) (Float32_u.min v2 v3) in\n      let start_remainder = (n / 4) * 4 in\n      let rec scalar_loop k (acc : float32#) =\n        if k < n then\n          scalar_loop (k + 1) (Float32_u.min acc (Array.unsafe_get a_arr (base + k)))\n        else acc\n      in\n      scalar_loop start_remainder simd_result)\n  else\n    let acc = Array.make_float32 1 in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let md_index = Array.make (Array.length a_shape) 0 in\n    Shape.unravel_index_into start_idx a_shape md_index;\n    let first_lin = Shape.ravel_index md_index a_strides in\n    Array.unsafe_set acc 0 (Array.unsafe_get a_arr (a_offset + first_lin));\n    for k = start_idx + 1 to end_idx - 1 do\n      Shape.unravel_index_into k a_shape md_index;\n      let a_lin = Shape.ravel_index md_index a_strides in\n      let v = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let cur = Array.unsafe_get acc 0 in\n      Array.unsafe_set acc 0 (Float32_u.min cur v)\n    done;\n    Array.unsafe_get acc 0\n\nlet min_all_int8 a_arr va start_idx end_idx =\n  let acc = Array.make_int8 1 in\n  if View.is_c_contiguous va then (\n    let base = View.offset va + start_idx in\n    let last = View.offset va + end_idx in\n    Array.unsafe_set acc 0 (Array.unsafe_get a_arr base);\n    for i = base + 1 to last - 1 do\n      let cur = Array.unsafe_get acc 0 in\n      Array.unsafe_set acc 0 (Int8_u.min cur (Array.unsafe_get a_arr i))\n    done;\n    Array.unsafe_get acc 0)\n  else\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let md_index = Array.make (Array.length a_shape) 0 in\n    Shape.unravel_index_into start_idx a_shape md_index;\n    let first_lin = Shape.ravel_index 
md_index a_strides in\n    Array.unsafe_set acc 0 (Array.unsafe_get a_arr (a_offset + first_lin));\n    for k = start_idx + 1 to end_idx - 1 do\n      Shape.unravel_index_into k a_shape md_index;\n      let a_lin = Shape.ravel_index md_index a_strides in\n      let v = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let cur = Array.unsafe_get acc 0 in\n      Array.unsafe_set acc 0 (Int8_u.min cur v)\n    done;\n    Array.unsafe_get acc 0\n\nlet min_all_int16 a_arr va start_idx end_idx =\n  let acc = Array.make_int16 1 in\n  if View.is_c_contiguous va then (\n    let base = View.offset va + start_idx in\n    let last = View.offset va + end_idx in\n    Array.unsafe_set acc 0 (Array.unsafe_get a_arr base);\n    for i = base + 1 to last - 1 do\n      let cur = Array.unsafe_get acc 0 in\n      Array.unsafe_set acc 0 (Int16_u.min cur (Array.unsafe_get a_arr i))\n    done;\n    Array.unsafe_get acc 0)\n  else\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let md_index = Array.make (Array.length a_shape) 0 in\n    Shape.unravel_index_into start_idx a_shape md_index;\n    let first_lin = Shape.ravel_index md_index a_strides in\n    Array.unsafe_set acc 0 (Array.unsafe_get a_arr (a_offset + first_lin));\n    for k = start_idx + 1 to end_idx - 1 do\n      Shape.unravel_index_into k a_shape md_index;\n      let a_lin = Shape.ravel_index md_index a_strides in\n      let v = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let cur = Array.unsafe_get acc 0 in\n      Array.unsafe_set acc 0 (Int16_u.min cur v)\n    done;\n    Array.unsafe_get acc 0\n\nlet min_all_int32 a_arr va start_idx end_idx =\n  let acc = Array.make_int32 1 in\n  if View.is_c_contiguous va then (\n    let base = View.offset va + start_idx in\n    let last = View.offset va + end_idx in\n    Array.unsafe_set acc 0 (Array.unsafe_get a_arr base);\n    for i = base + 1 to last - 1 do\n      let cur = Array.unsafe_get acc 0 in\n      
Array.unsafe_set acc 0 (Int32_u.min cur (Array.unsafe_get a_arr i))\n    done;\n    Array.unsafe_get acc 0)\n  else\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let md_index = Array.make (Array.length a_shape) 0 in\n    Shape.unravel_index_into start_idx a_shape md_index;\n    let first_lin = Shape.ravel_index md_index a_strides in\n    Array.unsafe_set acc 0 (Array.unsafe_get a_arr (a_offset + first_lin));\n    for k = start_idx + 1 to end_idx - 1 do\n      Shape.unravel_index_into k a_shape md_index;\n      let a_lin = Shape.ravel_index md_index a_strides in\n      let v = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let cur = Array.unsafe_get acc 0 in\n      Array.unsafe_set acc 0 (Int32_u.min cur v)\n    done;\n    Array.unsafe_get acc 0\n\nlet min_all_int64 a_arr va start_idx end_idx =\n  let acc = Array.make_int64 1 in\n  if View.is_c_contiguous va then (\n    let base = View.offset va + start_idx in\n    let last = View.offset va + end_idx in\n    Array.unsafe_set acc 0 (Array.unsafe_get a_arr base);\n    for i = base + 1 to last - 1 do\n      let cur = Array.unsafe_get acc 0 in\n      Array.unsafe_set acc 0 (Int64_u.min cur (Array.unsafe_get a_arr i))\n    done;\n    Array.unsafe_get acc 0)\n  else\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let md_index = Array.make (Array.length a_shape) 0 in\n    Shape.unravel_index_into start_idx a_shape md_index;\n    let first_lin = Shape.ravel_index md_index a_strides in\n    Array.unsafe_set acc 0 (Array.unsafe_get a_arr (a_offset + first_lin));\n    for k = start_idx + 1 to end_idx - 1 do\n      Shape.unravel_index_into k a_shape md_index;\n      let a_lin = Shape.ravel_index md_index a_strides in\n      let v = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let cur = Array.unsafe_get acc 0 in\n      Array.unsafe_set acc 0 (Int64_u.min cur v)\n    done;\n    
Array.unsafe_get acc 0\n\nlet max_axis_float64 a_arr out_arr va vout axes keepdims start_idx end_idx =\n  let plan = make_plan axes keepdims va vout in\n  let out_md_index = Array.make plan.out_rank 0 in\n  let in_md_index = Array.make plan.rank 0 in\n  let acc = Array.make_float64 1 in\n  for k = start_idx to end_idx - 1 do\n    Shape.unravel_index_into k plan.out_shape out_md_index;\n    init_input_index plan out_md_index in_md_index;\n    let a_lin = Shape.ravel_index in_md_index plan.in_strides in\n    Array.unsafe_set acc 0 (Array.unsafe_get a_arr (plan.in_offset + a_lin));\n    let continue = ref (increment_input_index plan in_md_index) in\n    while !continue do\n      let a_lin = Shape.ravel_index in_md_index plan.in_strides in\n      let v = Array.unsafe_get a_arr (plan.in_offset + a_lin) in\n      let cur = Array.unsafe_get acc 0 in\n      Array.unsafe_set acc 0 (Float_u.max cur v);\n      continue := increment_input_index plan in_md_index\n    done;\n    Array.unsafe_set out_arr (plan.out_offset + k) (Array.unsafe_get acc 0)\n  done\n\nlet max_axis_float32 a_arr out_arr va vout axes keepdims start_idx end_idx =\n  let plan = make_plan axes keepdims va vout in\n  let out_md_index = Array.make plan.out_rank 0 in\n  let in_md_index = Array.make plan.rank 0 in\n  let acc = Array.make_float32 1 in\n  for k = start_idx to end_idx - 1 do\n    Shape.unravel_index_into k plan.out_shape out_md_index;\n    init_input_index plan out_md_index in_md_index;\n    let a_lin = Shape.ravel_index in_md_index plan.in_strides in\n    Array.unsafe_set acc 0 (Array.unsafe_get a_arr (plan.in_offset + a_lin));\n    let continue = ref (increment_input_index plan in_md_index) in\n    while !continue do\n      let a_lin = Shape.ravel_index in_md_index plan.in_strides in\n      let v = Array.unsafe_get a_arr (plan.in_offset + a_lin) in\n      let cur = Array.unsafe_get acc 0 in\n      Array.unsafe_set acc 0 (Float32_u.max cur v);\n      continue := increment_input_index plan 
in_md_index\n    done;\n    Array.unsafe_set out_arr (plan.out_offset + k) (Array.unsafe_get acc 0)\n  done\n\nlet max_axis_int8 a_arr out_arr va vout axes keepdims start_idx end_idx =\n  let plan = make_plan axes keepdims va vout in\n  let out_md_index = Array.make plan.out_rank 0 in\n  let in_md_index = Array.make plan.rank 0 in\n  let acc = Array.make_int8 1 in\n  for k = start_idx to end_idx - 1 do\n    Shape.unravel_index_into k plan.out_shape out_md_index;\n    init_input_index plan out_md_index in_md_index;\n    let a_lin = Shape.ravel_index in_md_index plan.in_strides in\n    Array.unsafe_set acc 0 (Array.unsafe_get a_arr (plan.in_offset + a_lin));\n    let continue = ref (increment_input_index plan in_md_index) in\n    while !continue do\n      let a_lin = Shape.ravel_index in_md_index plan.in_strides in\n      let v = Array.unsafe_get a_arr (plan.in_offset + a_lin) in\n      let cur = Array.unsafe_get acc 0 in\n      Array.unsafe_set acc 0 (Int8_u.max cur v);\n      continue := increment_input_index plan in_md_index\n    done;\n    Array.unsafe_set out_arr (plan.out_offset + k) (Array.unsafe_get acc 0)\n  done\n\nlet max_axis_int16 a_arr out_arr va vout axes keepdims start_idx end_idx =\n  let plan = make_plan axes keepdims va vout in\n  let out_md_index = Array.make plan.out_rank 0 in\n  let in_md_index = Array.make plan.rank 0 in\n  let acc = Array.make_int16 1 in\n  for k = start_idx to end_idx - 1 do\n    Shape.unravel_index_into k plan.out_shape out_md_index;\n    init_input_index plan out_md_index in_md_index;\n    let a_lin = Shape.ravel_index in_md_index plan.in_strides in\n    Array.unsafe_set acc 0 (Array.unsafe_get a_arr (plan.in_offset + a_lin));\n    let continue = ref (increment_input_index plan in_md_index) in\n    while !continue do\n      let a_lin = Shape.ravel_index in_md_index plan.in_strides in\n      let v = Array.unsafe_get a_arr (plan.in_offset + a_lin) in\n      let cur = Array.unsafe_get acc 0 in\n      Array.unsafe_set acc 0 
(Int16_u.max cur v);\n      continue := increment_input_index plan in_md_index\n    done;\n    Array.unsafe_set out_arr (plan.out_offset + k) (Array.unsafe_get acc 0)\n  done\n\nlet max_axis_int32 a_arr out_arr va vout axes keepdims start_idx end_idx =\n  let plan = make_plan axes keepdims va vout in\n  let out_md_index = Array.make plan.out_rank 0 in\n  let in_md_index = Array.make plan.rank 0 in\n  let acc = Array.make_int32 1 in\n  for k = start_idx to end_idx - 1 do\n    Shape.unravel_index_into k plan.out_shape out_md_index;\n    init_input_index plan out_md_index in_md_index;\n    let a_lin = Shape.ravel_index in_md_index plan.in_strides in\n    Array.unsafe_set acc 0 (Array.unsafe_get a_arr (plan.in_offset + a_lin));\n    let continue = ref (increment_input_index plan in_md_index) in\n    while !continue do\n      let a_lin = Shape.ravel_index in_md_index plan.in_strides in\n      let v = Array.unsafe_get a_arr (plan.in_offset + a_lin) in\n      let cur = Array.unsafe_get acc 0 in\n      Array.unsafe_set acc 0 (Int32_u.max cur v);\n      continue := increment_input_index plan in_md_index\n    done;\n    Array.unsafe_set out_arr (plan.out_offset + k) (Array.unsafe_get acc 0)\n  done\n\nlet max_axis_int64 a_arr out_arr va vout axes keepdims start_idx end_idx =\n  let plan = make_plan axes keepdims va vout in\n  let out_md_index = Array.make plan.out_rank 0 in\n  let in_md_index = Array.make plan.rank 0 in\n  let acc = Array.make_int64 1 in\n  for k = start_idx to end_idx - 1 do\n    Shape.unravel_index_into k plan.out_shape out_md_index;\n    init_input_index plan out_md_index in_md_index;\n    let a_lin = Shape.ravel_index in_md_index plan.in_strides in\n    Array.unsafe_set acc 0 (Array.unsafe_get a_arr (plan.in_offset + a_lin));\n    let continue = ref (increment_input_index plan in_md_index) in\n    while !continue do\n      let a_lin = Shape.ravel_index in_md_index plan.in_strides in\n      let v = Array.unsafe_get a_arr (plan.in_offset + a_lin) in\n      
let cur = Array.unsafe_get acc 0 in\n      Array.unsafe_set acc 0 (Int64_u.max cur v);\n      continue := increment_input_index plan in_md_index\n    done;\n    Array.unsafe_set out_arr (plan.out_offset + k) (Array.unsafe_get acc 0)\n  done\n\n(* Whole-array max over [start_idx, end_idx). Contiguous inputs use four\n   unrolled Float64x2 accumulators, a single-vector cleanup loop, and a\n   scalar tail; strided inputs fall back to index unravelling. *)\nlet max_all_float64 a_arr va start_idx end_idx =\n  if View.is_c_contiguous va then (\n    let base = View.offset va + start_idx in\n    let n = end_idx - start_idx in\n    if n < 2 then Array.unsafe_get a_arr base\n    else\n      let n8 = n - 7 in\n      let first_vec = Float64x2.Array.unsafe_get a_arr ~idx:base in\n      let rec unrolled_loop i (acc0 : float64x2#) (acc1 : float64x2#)\n          (acc2 : float64x2#) (acc3 : float64x2#) =\n        if i < n8 then\n          let v0 = Float64x2.Array.unsafe_get a_arr ~idx:(base + i) in\n          let v1 = Float64x2.Array.unsafe_get a_arr ~idx:(base + i + 2) in\n          let v2 = Float64x2.Array.unsafe_get a_arr ~idx:(base + i + 4) in\n          let v3 = Float64x2.Array.unsafe_get a_arr ~idx:(base + i + 6) in\n          unrolled_loop (i + 8) (Float64x2.max acc0 v0) (Float64x2.max acc1 v1)\n            (Float64x2.max acc2 v2) (Float64x2.max acc3 v3)\n        else #(acc0, acc1, acc2, acc3, i)\n      in\n      let #(acc0, acc1, acc2, acc3, i) =\n        unrolled_loop 2 first_vec first_vec first_vec first_vec\n      in\n      let acc01 = Float64x2.max acc0 acc1 in\n      let acc23 = Float64x2.max acc2 acc3 in\n      let acc_vec = Float64x2.max acc01 acc23 in\n      let n2 = n - 1 in\n      let rec simd_loop j (acc : float64x2#) =\n        if j < n2 then\n          let vec = Float64x2.Array.unsafe_get a_arr ~idx:(base + j) in\n          simd_loop (j + 2) (Float64x2.max acc vec)\n        else acc\n      in\n      let acc_vec = simd_loop i acc_vec in\n      let #(v0, v1) = Float64x2.splat acc_vec in\n      let simd_result = Float_u.max v0 v1 in\n      let start_remainder = (n / 2) * 2 in\n      let rec scalar_loop k (acc : float#) =\n        if k < n then\n          scalar_loop (k + 1) 
(Float_u.max acc (Array.unsafe_get a_arr (base + k)))\n        else acc\n      in\n      scalar_loop start_remainder simd_result)\n  else\n    let acc = Array.make_float64 1 in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let md_index = Array.make (Array.length a_shape) 0 in\n    Shape.unravel_index_into start_idx a_shape md_index;\n    let first_lin = Shape.ravel_index md_index a_strides in\n    Array.unsafe_set acc 0 (Array.unsafe_get a_arr (a_offset + first_lin));\n    for k = start_idx + 1 to end_idx - 1 do\n      Shape.unravel_index_into k a_shape md_index;\n      let a_lin = Shape.ravel_index md_index a_strides in\n      let v = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let cur = Array.unsafe_get acc 0 in\n      Array.unsafe_set acc 0 (Float_u.max cur v)\n    done;\n    Array.unsafe_get acc 0\n\nlet max_all_float32 a_arr va start_idx end_idx =\n  if View.is_c_contiguous va then (\n    let base = View.offset va + start_idx in\n    let n = end_idx - start_idx in\n    if n < 4 then (\n      let rec scalar_loop i (acc : float32#) =\n        if i < n then\n          scalar_loop (i + 1) (Float32_u.max acc (Array.unsafe_get a_arr (base + i)))\n        else acc\n      in\n      scalar_loop 1 (Array.unsafe_get a_arr base))\n    else\n      let n16 = n - 15 in\n      let first_vec = Float32x4.Array.unsafe_get a_arr ~idx:base in\n      let rec unrolled_loop i (acc0 : float32x4#) (acc1 : float32x4#)\n          (acc2 : float32x4#) (acc3 : float32x4#) =\n        if i < n16 then\n          let v0 = Float32x4.Array.unsafe_get a_arr ~idx:(base + i) in\n          let v1 = Float32x4.Array.unsafe_get a_arr ~idx:(base + i + 4) in\n          let v2 = Float32x4.Array.unsafe_get a_arr ~idx:(base + i + 8) in\n          let v3 = Float32x4.Array.unsafe_get a_arr ~idx:(base + i + 12) in\n          unrolled_loop (i + 16) (Float32x4.max acc0 v0) (Float32x4.max acc1 v1)\n            (Float32x4.max acc2 v2) 
(Float32x4.max acc3 v3)\n        else #(acc0, acc1, acc2, acc3, i)\n      in\n      let #(acc0, acc1, acc2, acc3, i) =\n        unrolled_loop 4 first_vec first_vec first_vec first_vec\n      in\n      let acc01 = Float32x4.max acc0 acc1 in\n      let acc23 = Float32x4.max acc2 acc3 in\n      let acc_vec = Float32x4.max acc01 acc23 in\n      let n4 = n - 3 in\n      let rec simd_loop j (acc : float32x4#) =\n        if j < n4 then\n          let vec = Float32x4.Array.unsafe_get a_arr ~idx:(base + j) in\n          simd_loop (j + 4) (Float32x4.max acc vec)\n        else acc\n      in\n      let acc_vec = simd_loop i acc_vec in\n      let #(v0, v1, v2, v3) = Float32x4.splat acc_vec in\n      let simd_result = Float32_u.max (Float32_u.max v0 v1) (Float32_u.max v2 v3) in\n      let start_remainder = (n / 4) * 4 in\n      let rec scalar_loop k (acc : float32#) =\n        if k < n then\n          scalar_loop (k + 1) (Float32_u.max acc (Array.unsafe_get a_arr (base + k)))\n        else acc\n      in\n      scalar_loop start_remainder simd_result)\n  else\n    let acc = Array.make_float32 1 in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let md_index = Array.make (Array.length a_shape) 0 in\n    Shape.unravel_index_into start_idx a_shape md_index;\n    let first_lin = Shape.ravel_index md_index a_strides in\n    Array.unsafe_set acc 0 (Array.unsafe_get a_arr (a_offset + first_lin));\n    for k = start_idx + 1 to end_idx - 1 do\n      Shape.unravel_index_into k a_shape md_index;\n      let a_lin = Shape.ravel_index md_index a_strides in\n      let v = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let cur = Array.unsafe_get acc 0 in\n      Array.unsafe_set acc 0 (Float32_u.max cur v)\n    done;\n    Array.unsafe_get acc 0\n\nlet max_all_int8 a_arr va start_idx end_idx =\n  let acc = Array.make_int8 1 in\n  if View.is_c_contiguous va then (\n    let base = View.offset va + start_idx in\n    let last = 
View.offset va + end_idx in\n    Array.unsafe_set acc 0 (Array.unsafe_get a_arr base);\n    for i = base + 1 to last - 1 do\n      let cur = Array.unsafe_get acc 0 in\n      Array.unsafe_set acc 0 (Int8_u.max cur (Array.unsafe_get a_arr i))\n    done;\n    Array.unsafe_get acc 0)\n  else\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let md_index = Array.make (Array.length a_shape) 0 in\n    Shape.unravel_index_into start_idx a_shape md_index;\n    let first_lin = Shape.ravel_index md_index a_strides in\n    Array.unsafe_set acc 0 (Array.unsafe_get a_arr (a_offset + first_lin));\n    for k = start_idx + 1 to end_idx - 1 do\n      Shape.unravel_index_into k a_shape md_index;\n      let a_lin = Shape.ravel_index md_index a_strides in\n      let v = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let cur = Array.unsafe_get acc 0 in\n      Array.unsafe_set acc 0 (Int8_u.max cur v)\n    done;\n    Array.unsafe_get acc 0\n\nlet max_all_int16 a_arr va start_idx end_idx =\n  let acc = Array.make_int16 1 in\n  if View.is_c_contiguous va then (\n    let base = View.offset va + start_idx in\n    let last = View.offset va + end_idx in\n    Array.unsafe_set acc 0 (Array.unsafe_get a_arr base);\n    for i = base + 1 to last - 1 do\n      let cur = Array.unsafe_get acc 0 in\n      Array.unsafe_set acc 0 (Int16_u.max cur (Array.unsafe_get a_arr i))\n    done;\n    Array.unsafe_get acc 0)\n  else\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let md_index = Array.make (Array.length a_shape) 0 in\n    Shape.unravel_index_into start_idx a_shape md_index;\n    let first_lin = Shape.ravel_index md_index a_strides in\n    Array.unsafe_set acc 0 (Array.unsafe_get a_arr (a_offset + first_lin));\n    for k = start_idx + 1 to end_idx - 1 do\n      Shape.unravel_index_into k a_shape md_index;\n      let a_lin = Shape.ravel_index md_index a_strides in\n     
 let v = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let cur = Array.unsafe_get acc 0 in\n      Array.unsafe_set acc 0 (Int16_u.max cur v)\n    done;\n    Array.unsafe_get acc 0\n\nlet max_all_int32 a_arr va start_idx end_idx =\n  let acc = Array.make_int32 1 in\n  if View.is_c_contiguous va then (\n    let base = View.offset va + start_idx in\n    let last = View.offset va + end_idx in\n    Array.unsafe_set acc 0 (Array.unsafe_get a_arr base);\n    for i = base + 1 to last - 1 do\n      let cur = Array.unsafe_get acc 0 in\n      Array.unsafe_set acc 0 (Int32_u.max cur (Array.unsafe_get a_arr i))\n    done;\n    Array.unsafe_get acc 0)\n  else\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let md_index = Array.make (Array.length a_shape) 0 in\n    Shape.unravel_index_into start_idx a_shape md_index;\n    let first_lin = Shape.ravel_index md_index a_strides in\n    Array.unsafe_set acc 0 (Array.unsafe_get a_arr (a_offset + first_lin));\n    for k = start_idx + 1 to end_idx - 1 do\n      Shape.unravel_index_into k a_shape md_index;\n      let a_lin = Shape.ravel_index md_index a_strides in\n      let v = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let cur = Array.unsafe_get acc 0 in\n      Array.unsafe_set acc 0 (Int32_u.max cur v)\n    done;\n    Array.unsafe_get acc 0\n\nlet max_all_int64 a_arr va start_idx end_idx =\n  let acc = Array.make_int64 1 in\n  if View.is_c_contiguous va then (\n    let base = View.offset va + start_idx in\n    let last = View.offset va + end_idx in\n    Array.unsafe_set acc 0 (Array.unsafe_get a_arr base);\n    for i = base + 1 to last - 1 do\n      let cur = Array.unsafe_get acc 0 in\n      Array.unsafe_set acc 0 (Int64_u.max cur (Array.unsafe_get a_arr i))\n    done;\n    Array.unsafe_get acc 0)\n  else\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let md_index = Array.make (Array.length 
a_shape) 0 in\n    Shape.unravel_index_into start_idx a_shape md_index;\n    let first_lin = Shape.ravel_index md_index a_strides in\n    Array.unsafe_set acc 0 (Array.unsafe_get a_arr (a_offset + first_lin));\n    for k = start_idx + 1 to end_idx - 1 do\n      Shape.unravel_index_into k a_shape md_index;\n      let a_lin = Shape.ravel_index md_index a_strides in\n      let v = Array.unsafe_get a_arr (a_offset + a_lin) in\n      let cur = Array.unsafe_get acc 0 in\n      Array.unsafe_set acc 0 (Int64_u.max cur v)\n    done;\n    Array.unsafe_get acc 0\n\n(* Reduction entry points. Empty [axes] degenerates to a copy, an empty\n   input is filled with the reduction identity, a scalar output takes the\n   whole-array fast path, and outputs above [parallel_threshold] are split\n   across the pool. *)\nlet reduce_sum_float64 pool ~out_arr ~a_arr ~va ~vout ~axes ~keepdims =\n  let in_numel = numel va in\n  let out_numel = numel vout in\n  if Array.length axes = 0 then\n    if out_numel = 0 then ()\n    else if out_numel > parallel_threshold then\n      Parallel.parallel_for pool 0 (out_numel - 1) (fun s e ->\n          copy_float64 a_arr out_arr va vout s e)\n    else copy_float64 a_arr out_arr va vout 0 out_numel\n  else if in_numel = 0 then (\n    if out_numel > 0 then fill_float64 out_arr vout (Float_u.of_int 0))\n  else if out_numel = 1 then\n    let total = sum_all_partial_float64 a_arr va 0 in_numel in\n    fill_float64 out_arr vout total\n  else if out_numel > parallel_threshold then\n    Parallel.parallel_for pool 0 (out_numel - 1) (fun s e ->\n        sum_axis_float64 a_arr out_arr va vout axes keepdims s e)\n  else sum_axis_float64 a_arr out_arr va vout axes keepdims 0 out_numel\n\nlet reduce_sum_float32 pool ~out_arr ~a_arr ~va ~vout ~axes ~keepdims =\n  let in_numel = numel va in\n  let out_numel = numel vout in\n  if Array.length axes = 0 then\n    if out_numel = 0 then ()\n    else if out_numel > parallel_threshold then\n      Parallel.parallel_for pool 0 (out_numel - 1) (fun s e ->\n          copy_float32 a_arr out_arr va vout s e)\n    else copy_float32 a_arr out_arr va vout 0 out_numel\n  else if in_numel = 0 then (\n    if out_numel > 0 then fill_float32 out_arr vout (Float32_u.of_int 0))\n  else 
if out_numel = 1 then\n    let total = sum_all_partial_float32 a_arr va 0 in_numel in\n    fill_float32 out_arr vout total\n  else if out_numel > parallel_threshold then\n    Parallel.parallel_for pool 0 (out_numel - 1) (fun s e ->\n        sum_axis_float32 a_arr out_arr va vout axes keepdims s e)\n  else sum_axis_float32 a_arr out_arr va vout axes keepdims 0 out_numel\n\nlet reduce_sum_int8 pool ~out_arr ~a_arr ~va ~vout ~axes ~keepdims =\n  let in_numel = numel va in\n  let out_numel = numel vout in\n  if Array.length axes = 0 then\n    if out_numel = 0 then ()\n    else if out_numel > parallel_threshold then\n      Parallel.parallel_for pool 0 (out_numel - 1) (fun s e ->\n          copy_int8 a_arr out_arr va vout s e)\n    else copy_int8 a_arr out_arr va vout 0 out_numel\n  else if in_numel = 0 then (if out_numel > 0 then fill_int8 out_arr vout #0s)\n  else if out_numel = 1 then\n    let total = sum_all_partial_int8 a_arr va 0 in_numel in\n    fill_int8 out_arr vout total\n  else if out_numel > parallel_threshold then\n    Parallel.parallel_for pool 0 (out_numel - 1) (fun s e ->\n        sum_axis_int8 a_arr out_arr va vout axes keepdims s e)\n  else sum_axis_int8 a_arr out_arr va vout axes keepdims 0 out_numel\n\nlet reduce_sum_int16 pool ~out_arr ~a_arr ~va ~vout ~axes ~keepdims =\n  let in_numel = numel va in\n  let out_numel = numel vout in\n  if Array.length axes = 0 then\n    if out_numel = 0 then ()\n    else if out_numel > parallel_threshold then\n      Parallel.parallel_for pool 0 (out_numel - 1) (fun s e ->\n          copy_int16 a_arr out_arr va vout s e)\n    else copy_int16 a_arr out_arr va vout 0 out_numel\n  else if in_numel = 0 then (if out_numel > 0 then fill_int16 out_arr vout #0S)\n  else if out_numel = 1 then\n    let total = sum_all_partial_int16 a_arr va 0 in_numel in\n    fill_int16 out_arr vout total\n  else if out_numel > parallel_threshold then\n    Parallel.parallel_for pool 0 (out_numel - 1) (fun s e ->\n        sum_axis_int16 a_arr 
out_arr va vout axes keepdims s e)\n  else sum_axis_int16 a_arr out_arr va vout axes keepdims 0 out_numel\n\nlet reduce_sum_int32 pool ~out_arr ~a_arr ~va ~vout ~axes ~keepdims =\n  let in_numel = numel va in\n  let out_numel = numel vout in\n  if Array.length axes = 0 then\n    if out_numel = 0 then ()\n    else if out_numel > parallel_threshold then\n      Parallel.parallel_for pool 0 (out_numel - 1) (fun s e ->\n          copy_int32 a_arr out_arr va vout s e)\n    else copy_int32 a_arr out_arr va vout 0 out_numel\n  else if in_numel = 0 then (if out_numel > 0 then fill_int32 out_arr vout #0l)\n  else if out_numel = 1 then\n    let total = sum_all_partial_int32 a_arr va 0 in_numel in\n    fill_int32 out_arr vout total\n  else if out_numel > parallel_threshold then\n    Parallel.parallel_for pool 0 (out_numel - 1) (fun s e ->\n        sum_axis_int32 a_arr out_arr va vout axes keepdims s e)\n  else sum_axis_int32 a_arr out_arr va vout axes keepdims 0 out_numel\n\nlet reduce_sum_int64 pool ~out_arr ~a_arr ~va ~vout ~axes ~keepdims =\n  let in_numel = numel va in\n  let out_numel = numel vout in\n  if Array.length axes = 0 then\n    if out_numel = 0 then ()\n    else if out_numel > parallel_threshold then\n      Parallel.parallel_for pool 0 (out_numel - 1) (fun s e ->\n          copy_int64 a_arr out_arr va vout s e)\n    else copy_int64 a_arr out_arr va vout 0 out_numel\n  else if in_numel = 0 then (if out_numel > 0 then fill_int64 out_arr vout #0L)\n  else if out_numel = 1 then\n    let total = sum_all_partial_int64 a_arr va 0 in_numel in\n    fill_int64 out_arr vout total\n  else if out_numel > parallel_threshold then\n    Parallel.parallel_for pool 0 (out_numel - 1) (fun s e ->\n        sum_axis_int64 a_arr out_arr va vout axes keepdims s e)\n  else sum_axis_int64 a_arr out_arr va vout axes keepdims 0 out_numel\n\nlet reduce_prod_float64 pool ~out_arr ~a_arr ~va ~vout ~axes ~keepdims =\n  let in_numel = numel va in\n  let out_numel = numel vout in\n  if 
Array.length axes = 0 then\n    if out_numel = 0 then ()\n    else if out_numel > parallel_threshold then\n      Parallel.parallel_for pool 0 (out_numel - 1) (fun s e ->\n          copy_float64 a_arr out_arr va vout s e)\n    else copy_float64 a_arr out_arr va vout 0 out_numel\n  else if in_numel = 0 then (\n    if out_numel > 0 then fill_float64 out_arr vout (Float_u.of_int 1))\n  else if out_numel = 1 then\n    let total = prod_all_partial_float64 a_arr va 0 in_numel in\n    fill_float64 out_arr vout total\n  else if out_numel > parallel_threshold then\n    Parallel.parallel_for pool 0 (out_numel - 1) (fun s e ->\n        prod_axis_float64 a_arr out_arr va vout axes keepdims s e)\n  else prod_axis_float64 a_arr out_arr va vout axes keepdims 0 out_numel\n\nlet reduce_prod_float32 pool ~out_arr ~a_arr ~va ~vout ~axes ~keepdims =\n  let in_numel = numel va in\n  let out_numel = numel vout in\n  if Array.length axes = 0 then\n    if out_numel = 0 then ()\n    else if out_numel > parallel_threshold then\n      Parallel.parallel_for pool 0 (out_numel - 1) (fun s e ->\n          copy_float32 a_arr out_arr va vout s e)\n    else copy_float32 a_arr out_arr va vout 0 out_numel\n  else if in_numel = 0 then (\n    if out_numel > 0 then fill_float32 out_arr vout (Float32_u.of_int 1))\n  else if out_numel = 1 then\n    let total = prod_all_partial_float32 a_arr va 0 in_numel in\n    fill_float32 out_arr vout total\n  else if out_numel > parallel_threshold then\n    Parallel.parallel_for pool 0 (out_numel - 1) (fun s e ->\n        prod_axis_float32 a_arr out_arr va vout axes keepdims s e)\n  else prod_axis_float32 a_arr out_arr va vout axes keepdims 0 out_numel\n\nlet reduce_prod_int8 pool ~out_arr ~a_arr ~va ~vout ~axes ~keepdims =\n  let in_numel = numel va in\n  let out_numel = numel vout in\n  if Array.length axes = 0 then\n    if out_numel = 0 then ()\n    else if out_numel > parallel_threshold then\n      Parallel.parallel_for pool 0 (out_numel - 1) (fun s e ->\n          
copy_int8 a_arr out_arr va vout s e)\n    else copy_int8 a_arr out_arr va vout 0 out_numel\n  else if in_numel = 0 then (if out_numel > 0 then fill_int8 out_arr vout #1s)\n  else if out_numel = 1 then\n    let total = prod_all_partial_int8 a_arr va 0 in_numel in\n    fill_int8 out_arr vout total\n  else if out_numel > parallel_threshold then\n    Parallel.parallel_for pool 0 (out_numel - 1) (fun s e ->\n        prod_axis_int8 a_arr out_arr va vout axes keepdims s e)\n  else prod_axis_int8 a_arr out_arr va vout axes keepdims 0 out_numel\n\nlet reduce_prod_int16 pool ~out_arr ~a_arr ~va ~vout ~axes ~keepdims =\n  let in_numel = numel va in\n  let out_numel = numel vout in\n  if Array.length axes = 0 then\n    if out_numel = 0 then ()\n    else if out_numel > parallel_threshold then\n      Parallel.parallel_for pool 0 (out_numel - 1) (fun s e ->\n          copy_int16 a_arr out_arr va vout s e)\n    else copy_int16 a_arr out_arr va vout 0 out_numel\n  else if in_numel = 0 then (if out_numel > 0 then fill_int16 out_arr vout #1S)\n  else if out_numel = 1 then\n    let total = prod_all_partial_int16 a_arr va 0 in_numel in\n    fill_int16 out_arr vout total\n  else if out_numel > parallel_threshold then\n    Parallel.parallel_for pool 0 (out_numel - 1) (fun s e ->\n        prod_axis_int16 a_arr out_arr va vout axes keepdims s e)\n  else prod_axis_int16 a_arr out_arr va vout axes keepdims 0 out_numel\n\nlet reduce_prod_int32 pool ~out_arr ~a_arr ~va ~vout ~axes ~keepdims =\n  let in_numel = numel va in\n  let out_numel = numel vout in\n  if Array.length axes = 0 then\n    if out_numel = 0 then ()\n    else if out_numel > parallel_threshold then\n      Parallel.parallel_for pool 0 (out_numel - 1) (fun s e ->\n          copy_int32 a_arr out_arr va vout s e)\n    else copy_int32 a_arr out_arr va vout 0 out_numel\n  else if in_numel = 0 then (if out_numel > 0 then fill_int32 out_arr vout #1l)\n  else if out_numel = 1 then\n    let total = prod_all_partial_int32 a_arr va 0 
in_numel in\n    fill_int32 out_arr vout total\n  else if out_numel > parallel_threshold then\n    Parallel.parallel_for pool 0 (out_numel - 1) (fun s e ->\n        prod_axis_int32 a_arr out_arr va vout axes keepdims s e)\n  else prod_axis_int32 a_arr out_arr va vout axes keepdims 0 out_numel\n\nlet reduce_prod_int64 pool ~out_arr ~a_arr ~va ~vout ~axes ~keepdims =\n  let in_numel = numel va in\n  let out_numel = numel vout in\n  if Array.length axes = 0 then\n    if out_numel = 0 then ()\n    else if out_numel > parallel_threshold then\n      Parallel.parallel_for pool 0 (out_numel - 1) (fun s e ->\n          copy_int64 a_arr out_arr va vout s e)\n    else copy_int64 a_arr out_arr va vout 0 out_numel\n  else if in_numel = 0 then (if out_numel > 0 then fill_int64 out_arr vout #1L)\n  else if out_numel = 1 then\n    let total = prod_all_partial_int64 a_arr va 0 in_numel in\n    fill_int64 out_arr vout total\n  else if out_numel > parallel_threshold then\n    Parallel.parallel_for pool 0 (out_numel - 1) (fun s e ->\n        prod_axis_int64 a_arr out_arr va vout axes keepdims s e)\n  else prod_axis_int64 a_arr out_arr va vout axes keepdims 0 out_numel\n\n(* Unlike sum and prod, min has no identity element, so an empty input is\n   rejected with [invalid_arg] instead of an identity fill. *)\nlet reduce_min_float64 pool ~out_arr ~a_arr ~va ~vout ~axes ~keepdims =\n  let in_numel = numel va in\n  let out_numel = numel vout in\n  if Array.length axes = 0 then\n    if out_numel = 0 then ()\n    else if out_numel > parallel_threshold then\n      Parallel.parallel_for pool 0 (out_numel - 1) (fun s e ->\n          copy_float64 a_arr out_arr va vout s e)\n    else copy_float64 a_arr out_arr va vout 0 out_numel\n  else if in_numel = 0 then\n    invalid_arg \"reduce_min: empty input\"\n  else if out_numel = 1 then\n    let total = min_all_float64 a_arr va 0 in_numel in\n    fill_float64 out_arr vout total\n  else if out_numel > parallel_threshold then\n    Parallel.parallel_for pool 0 (out_numel - 1) (fun s e ->\n        min_axis_float64 a_arr out_arr va vout axes keepdims s e)\n  else min_axis_float64 a_arr 
out_arr va vout axes keepdims 0 out_numel\n\nlet reduce_min_float32 pool ~out_arr ~a_arr ~va ~vout ~axes ~keepdims =\n  let in_numel = numel va in\n  let out_numel = numel vout in\n  if Array.length axes = 0 then\n    if out_numel = 0 then ()\n    else if out_numel > parallel_threshold then\n      Parallel.parallel_for pool 0 (out_numel - 1) (fun s e ->\n          copy_float32 a_arr out_arr va vout s e)\n    else copy_float32 a_arr out_arr va vout 0 out_numel\n  else if in_numel = 0 then\n    invalid_arg \"reduce_min: empty input\"\n  else if out_numel = 1 then\n    let total = min_all_float32 a_arr va 0 in_numel in\n    fill_float32 out_arr vout total\n  else if out_numel > parallel_threshold then\n    Parallel.parallel_for pool 0 (out_numel - 1) (fun s e ->\n        min_axis_float32 a_arr out_arr va vout axes keepdims s e)\n  else min_axis_float32 a_arr out_arr va vout axes keepdims 0 out_numel\n\nlet reduce_min_int8 pool ~out_arr ~a_arr ~va ~vout ~axes ~keepdims =\n  let in_numel = numel va in\n  let out_numel = numel vout in\n  if Array.length axes = 0 then\n    if out_numel = 0 then ()\n    else if out_numel > parallel_threshold then\n      Parallel.parallel_for pool 0 (out_numel - 1) (fun s e ->\n          copy_int8 a_arr out_arr va vout s e)\n    else copy_int8 a_arr out_arr va vout 0 out_numel\n  else if in_numel = 0 then\n    invalid_arg \"reduce_min: empty input\"\n  else if out_numel = 1 then\n    let total = min_all_int8 a_arr va 0 in_numel in\n    fill_int8 out_arr vout total\n  else if out_numel > parallel_threshold then\n    Parallel.parallel_for pool 0 (out_numel - 1) (fun s e ->\n        min_axis_int8 a_arr out_arr va vout axes keepdims s e)\n  else min_axis_int8 a_arr out_arr va vout axes keepdims 0 out_numel\n\nlet reduce_min_int16 pool ~out_arr ~a_arr ~va ~vout ~axes ~keepdims =\n  let in_numel = numel va in\n  let out_numel = numel vout in\n  if Array.length axes = 0 then\n    if out_numel = 0 then ()\n    else if out_numel > parallel_threshold 
then\n      Parallel.parallel_for pool 0 (out_numel - 1) (fun s e ->\n          copy_int16 a_arr out_arr va vout s e)\n    else copy_int16 a_arr out_arr va vout 0 out_numel\n  else if in_numel = 0 then\n    invalid_arg \"reduce_min: empty input\"\n  else if out_numel = 1 then\n    let total = min_all_int16 a_arr va 0 in_numel in\n    fill_int16 out_arr vout total\n  else if out_numel > parallel_threshold then\n    Parallel.parallel_for pool 0 (out_numel - 1) (fun s e ->\n        min_axis_int16 a_arr out_arr va vout axes keepdims s e)\n  else min_axis_int16 a_arr out_arr va vout axes keepdims 0 out_numel\n\nlet reduce_min_int32 pool ~out_arr ~a_arr ~va ~vout ~axes ~keepdims =\n  let in_numel = numel va in\n  let out_numel = numel vout in\n  if Array.length axes = 0 then\n    if out_numel = 0 then ()\n    else if out_numel > parallel_threshold then\n      Parallel.parallel_for pool 0 (out_numel - 1) (fun s e ->\n          copy_int32 a_arr out_arr va vout s e)\n    else copy_int32 a_arr out_arr va vout 0 out_numel\n  else if in_numel = 0 then\n    invalid_arg \"reduce_min: empty input\"\n  else if out_numel = 1 then\n    let total = min_all_int32 a_arr va 0 in_numel in\n    fill_int32 out_arr vout total\n  else if out_numel > parallel_threshold then\n    Parallel.parallel_for pool 0 (out_numel - 1) (fun s e ->\n        min_axis_int32 a_arr out_arr va vout axes keepdims s e)\n  else min_axis_int32 a_arr out_arr va vout axes keepdims 0 out_numel\n\nlet reduce_min_int64 pool ~out_arr ~a_arr ~va ~vout ~axes ~keepdims =\n  let in_numel = numel va in\n  let out_numel = numel vout in\n  if Array.length axes = 0 then\n    if out_numel = 0 then ()\n    else if out_numel > parallel_threshold then\n      Parallel.parallel_for pool 0 (out_numel - 1) (fun s e ->\n          copy_int64 a_arr out_arr va vout s e)\n    else copy_int64 a_arr out_arr va vout 0 out_numel\n  else if in_numel = 0 then\n    invalid_arg \"reduce_min: empty input\"\n  else if out_numel = 1 then\n    let total 
= min_all_int64 a_arr va 0 in_numel in\n    fill_int64 out_arr vout total\n  else if out_numel > parallel_threshold then\n    Parallel.parallel_for pool 0 (out_numel - 1) (fun s e ->\n        min_axis_int64 a_arr out_arr va vout axes keepdims s e)\n  else min_axis_int64 a_arr out_arr va vout axes keepdims 0 out_numel\n\nlet reduce_max_float64 pool ~out_arr ~a_arr ~va ~vout ~axes ~keepdims =\n  let in_numel = numel va in\n  let out_numel = numel vout in\n  if Array.length axes = 0 then\n    if out_numel = 0 then ()\n    else if out_numel > parallel_threshold then\n      Parallel.parallel_for pool 0 (out_numel - 1) (fun s e ->\n          copy_float64 a_arr out_arr va vout s e)\n    else copy_float64 a_arr out_arr va vout 0 out_numel\n  else if in_numel = 0 then\n    invalid_arg \"reduce_max: empty input\"\n  else if out_numel = 1 then\n    let total = max_all_float64 a_arr va 0 in_numel in\n    fill_float64 out_arr vout total\n  else if out_numel > parallel_threshold then\n    Parallel.parallel_for pool 0 (out_numel - 1) (fun s e ->\n        max_axis_float64 a_arr out_arr va vout axes keepdims s e)\n  else max_axis_float64 a_arr out_arr va vout axes keepdims 0 out_numel\n\nlet reduce_max_float32 pool ~out_arr ~a_arr ~va ~vout ~axes ~keepdims =\n  let in_numel = numel va in\n  let out_numel = numel vout in\n  if Array.length axes = 0 then\n    if out_numel = 0 then ()\n    else if out_numel > parallel_threshold then\n      Parallel.parallel_for pool 0 (out_numel - 1) (fun s e ->\n          copy_float32 a_arr out_arr va vout s e)\n    else copy_float32 a_arr out_arr va vout 0 out_numel\n  else if in_numel = 0 then\n    invalid_arg \"reduce_max: empty input\"\n  else if out_numel = 1 then\n    let total = max_all_float32 a_arr va 0 in_numel in\n    fill_float32 out_arr vout total\n  else if out_numel > parallel_threshold then\n    Parallel.parallel_for pool 0 (out_numel - 1) (fun s e ->\n        max_axis_float32 a_arr out_arr va vout axes keepdims s e)\n  else 
max_axis_float32 a_arr out_arr va vout axes keepdims 0 out_numel\n\nlet reduce_max_int8 pool ~out_arr ~a_arr ~va ~vout ~axes ~keepdims =\n  let in_numel = numel va in\n  let out_numel = numel vout in\n  if Array.length axes = 0 then\n    if out_numel = 0 then ()\n    else if out_numel > parallel_threshold then\n      Parallel.parallel_for pool 0 (out_numel - 1) (fun s e ->\n          copy_int8 a_arr out_arr va vout s e)\n    else copy_int8 a_arr out_arr va vout 0 out_numel\n  else if in_numel = 0 then\n    invalid_arg \"reduce_max: empty input\"\n  else if out_numel = 1 then\n    let total = max_all_int8 a_arr va 0 in_numel in\n    fill_int8 out_arr vout total\n  else if out_numel > parallel_threshold then\n    Parallel.parallel_for pool 0 (out_numel - 1) (fun s e ->\n        max_axis_int8 a_arr out_arr va vout axes keepdims s e)\n  else max_axis_int8 a_arr out_arr va vout axes keepdims 0 out_numel\n\nlet reduce_max_int16 pool ~out_arr ~a_arr ~va ~vout ~axes ~keepdims =\n  let in_numel = numel va in\n  let out_numel = numel vout in\n  if Array.length axes = 0 then\n    if out_numel = 0 then ()\n    else if out_numel > parallel_threshold then\n      Parallel.parallel_for pool 0 (out_numel - 1) (fun s e ->\n          copy_int16 a_arr out_arr va vout s e)\n    else copy_int16 a_arr out_arr va vout 0 out_numel\n  else if in_numel = 0 then\n    invalid_arg \"reduce_max: empty input\"\n  else if out_numel = 1 then\n    let total = max_all_int16 a_arr va 0 in_numel in\n    fill_int16 out_arr vout total\n  else if out_numel > parallel_threshold then\n    Parallel.parallel_for pool 0 (out_numel - 1) (fun s e ->\n        max_axis_int16 a_arr out_arr va vout axes keepdims s e)\n  else max_axis_int16 a_arr out_arr va vout axes keepdims 0 out_numel\n\nlet reduce_max_int32 pool ~out_arr ~a_arr ~va ~vout ~axes ~keepdims =\n  let in_numel = numel va in\n  let out_numel = numel vout in\n  if Array.length axes = 0 then\n    if out_numel = 0 then ()\n    else if out_numel > 
parallel_threshold then\n      Parallel.parallel_for pool 0 (out_numel - 1) (fun s e ->\n          copy_int32 a_arr out_arr va vout s e)\n    else copy_int32 a_arr out_arr va vout 0 out_numel\n  else if in_numel = 0 then\n    invalid_arg \"reduce_max: empty input\"\n  else if out_numel = 1 then\n    let total = max_all_int32 a_arr va 0 in_numel in\n    fill_int32 out_arr vout total\n  else if out_numel > parallel_threshold then\n    Parallel.parallel_for pool 0 (out_numel - 1) (fun s e ->\n        max_axis_int32 a_arr out_arr va vout axes keepdims s e)\n  else max_axis_int32 a_arr out_arr va vout axes keepdims 0 out_numel\n\nlet reduce_max_int64 pool ~out_arr ~a_arr ~va ~vout ~axes ~keepdims =\n  let in_numel = numel va in\n  let out_numel = numel vout in\n  if Array.length axes = 0 then\n    if out_numel = 0 then ()\n    else if out_numel > parallel_threshold then\n      Parallel.parallel_for pool 0 (out_numel - 1) (fun s e ->\n          copy_int64 a_arr out_arr va vout s e)\n    else copy_int64 a_arr out_arr va vout 0 out_numel\n  else if in_numel = 0 then\n    invalid_arg \"reduce_max: empty input\"\n  else if out_numel = 1 then\n    let total = max_all_int64 a_arr va 0 in_numel in\n    fill_int64 out_arr vout total\n  else if out_numel > parallel_threshold then\n    Parallel.parallel_for pool 0 (out_numel - 1) (fun s e ->\n        max_axis_int64 a_arr out_arr va vout axes keepdims s e)\n  else max_axis_int64 a_arr out_arr va vout axes keepdims 0 out_numel\n"
  },
  {
    "path": "packages/nx-oxcaml/lib/simd_neon.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  Distributed under the ISC license, see terms at the end of the file.\n\n  Based on ocaml_simd (https://github.com/janestreet/ocaml_simd)\n  Copyright (c) 2025-2026 Jane Street Group, LLC\n  Released under the MIT license.\n  ---------------------------------------------------------------------------*)\n\n(* ARM64 NEON SIMD — Prefetch, Int64x2, Int32x4, Float64x2, Float32x4 *)\n\nmodule Prefetch = struct\n  external read : 'a -> (int[@untagged]) -> unit\n    = \"caml_prefetch_ignore\" \"caml_prefetch_read_high_val_offset_untagged\"\n    [@@noalloc] [@@builtin]\nend\n\nmodule Int64x2 = struct\n  type t = int64x2#\n\n  (* ───── Arithmetic ───── *)\n\n  external add : t -> t -> t @@ portable\n    = \"caml_vec128_unreachable\" \"caml_neon_int64x2_add\"\n    [@@noalloc] [@@builtin]\n\n  external sub : t -> t -> t @@ portable\n    = \"caml_vec128_unreachable\" \"caml_neon_int64x2_sub\"\n    [@@noalloc] [@@builtin]\n\n  external neg : t -> t @@ portable\n    = \"caml_vec128_unreachable\" \"caml_neon_int64x2_neg\"\n    [@@noalloc] [@@builtin]\n\n  (* ───── Bitwise ───── *)\n\n  external bitwise_and : t -> t -> t @@ portable\n    = \"caml_vec128_unreachable\" \"caml_neon_int64x2_bitwise_and\"\n    [@@noalloc] [@@builtin]\n\n  external bitwise_or : t -> t -> t @@ portable\n    = \"caml_vec128_unreachable\" \"caml_neon_int64x2_bitwise_or\"\n    [@@noalloc] [@@builtin]\n\n  external bitwise_xor : t -> t -> t @@ portable\n    = \"caml_vec128_unreachable\" \"caml_neon_int64x2_bitwise_xor\"\n    [@@noalloc] [@@builtin]\n\n  external bitwise_not : t -> t @@ portable\n    = \"caml_vec128_unreachable\" \"caml_neon_int64x2_bitwise_not\"\n    [@@noalloc] [@@builtin]\n\n  let[@inline always] ( land ) x y = bitwise_and x y\n  let[@inline always] ( lor ) x y = bitwise_or x y\n  let[@inline always] ( lxor ) x y = bitwise_xor x y\n\n  
(* ───── Comparison ───── *)\n\n  external cmpgt : t -> t -> t @@ portable\n    = \"caml_vec128_unreachable\" \"caml_neon_int64x2_cmpgt\"\n    [@@noalloc] [@@builtin]\n\n  (* ───── Blend ───── *)\n\n  let[@inline always] blendv a b mask =\n    bitwise_or (bitwise_and b mask) (bitwise_and a (bitwise_not mask))\n\n  (* ───── Constants ───── *)\n\n  external const1 : int64# -> t @@ portable\n    = \"caml_vec128_unreachable\" \"caml_int64x2_const1\"\n    [@@noalloc] [@@builtin]\n\n  let[@inline always] zero () = const1 #0L\n  let[@inline always] one () = const1 #1L\n  let[@inline always] all_ones () = const1 #0xffffffffffffffffL\n\n  (* ───── Lanes ───── *)\n\n  external low_of : int64# -> t @@ portable\n    = \"caml_vec128_unreachable\" \"caml_int64x2_low_of_int64\"\n    [@@noalloc] [@@builtin]\n\n  external low_to : t -> int64# @@ portable\n    = \"caml_vec128_unreachable\" \"caml_int64x2_low_to_int64\"\n    [@@noalloc] [@@builtin]\n\n  external dup : t -> t @@ portable\n    = \"caml_vec128_unreachable\" \"caml_neon_int64x2_dup\"\n    [@@noalloc] [@@builtin]\n\n  external low_64_to_high_64 : t -> t -> t @@ portable\n    = \"caml_vec128_unreachable\" \"caml_simd_vec128_low_64_to_high_64\"\n    [@@noalloc] [@@builtin]\n\n  let[@inline always] set1 a = dup (low_of a)\n  let[@inline always] set a b = low_64_to_high_64 (low_of a) (low_of b)\n\n  (* ───── Casts ───── *)\n\n  external of_float64x2 : float64x2# -> t @@ portable\n    = \"caml_vec128_unreachable\" \"caml_vec128_cast\"\n    [@@noalloc] [@@builtin]\n\n  (* ───── Array ───── *)\n\n  module Array = struct\n    external unsafe_get : (int64# array[@local_opt]) @ read -> idx:int -> t\n      = \"%caml_unboxed_int64_array_get128u#\"\n\n    external unsafe_set : (int64# array[@local_opt]) -> idx:int -> t -> unit\n      = \"%caml_unboxed_int64_array_set128u#\"\n  end\nend\n\nmodule Int32x4 = struct\n  type t = int32x4#\n\n  (* ───── Arithmetic ───── *)\n\n  external add : t -> t -> t @@ portable\n    = 
\"caml_vec128_unreachable\" \"caml_neon_int32x4_add\"\n    [@@noalloc] [@@builtin]\n\n  external sub : t -> t -> t @@ portable\n    = \"caml_vec128_unreachable\" \"caml_neon_int32x4_sub\"\n    [@@noalloc] [@@builtin]\n\n  external neg : t -> t @@ portable\n    = \"caml_vec128_unreachable\" \"caml_neon_int32x4_neg\"\n    [@@noalloc] [@@builtin]\n\n  external abs : t -> t @@ portable\n    = \"caml_vec128_unreachable\" \"caml_neon_int32x4_abs\"\n    [@@noalloc] [@@builtin]\n\n  (* ───── Min/Max ───── *)\n\n  external min : t -> t -> t @@ portable\n    = \"caml_vec128_unreachable\" \"caml_neon_int32x4_min\"\n    [@@noalloc] [@@builtin]\n\n  external max : t -> t -> t @@ portable\n    = \"caml_vec128_unreachable\" \"caml_neon_int32x4_max\"\n    [@@noalloc] [@@builtin]\n\n  (* ───── Bitwise ───── *)\n\n  external bitwise_and : t -> t -> t @@ portable\n    = \"caml_vec128_unreachable\" \"caml_neon_int32x4_bitwise_and\"\n    [@@noalloc] [@@builtin]\n\n  external bitwise_or : t -> t -> t @@ portable\n    = \"caml_vec128_unreachable\" \"caml_neon_int32x4_bitwise_or\"\n    [@@noalloc] [@@builtin]\n\n  external bitwise_xor : t -> t -> t @@ portable\n    = \"caml_vec128_unreachable\" \"caml_neon_int32x4_bitwise_xor\"\n    [@@noalloc] [@@builtin]\n\n  external bitwise_not : t -> t @@ portable\n    = \"caml_vec128_unreachable\" \"caml_neon_int32x4_bitwise_not\"\n    [@@noalloc] [@@builtin]\n\n  let[@inline always] ( land ) x y = bitwise_and x y\n  let[@inline always] ( lor ) x y = bitwise_or x y\n  let[@inline always] ( lxor ) x y = bitwise_xor x y\n\n  (* ───── Constants ───── *)\n\n  external const1 : int32# -> t @@ portable\n    = \"caml_vec128_unreachable\" \"caml_int32x4_const1\"\n    [@@noalloc] [@@builtin]\n\n  let[@inline always] zero () = const1 #0l\n  let[@inline always] one () = const1 #1l\n\n  (* ───── Lanes ───── *)\n\n  external low_of : int32# -> t @@ portable\n    = \"caml_vec128_unreachable\" \"caml_int32x4_low_of_int32\"\n    [@@noalloc] [@@builtin]\n\n  
external low_to : t -> int32# @@ portable\n    = \"caml_vec128_unreachable\" \"caml_int32x4_low_to_int32\"\n    [@@noalloc] [@@builtin]\n\n  external dup : t -> t @@ portable\n    = \"caml_vec128_unreachable\" \"caml_neon_int32x4_dup\"\n    [@@noalloc] [@@builtin]\n\n  external interleave_low_32 : t -> t -> t @@ portable\n    = \"caml_vec128_unreachable\" \"caml_simd_vec128_interleave_low_32\"\n    [@@noalloc] [@@builtin]\n\n  external interleave_low_64 : t -> t -> t @@ portable\n    = \"caml_vec128_unreachable\" \"caml_simd_vec128_interleave_low_64\"\n    [@@noalloc] [@@builtin]\n\n  external dup_lane : (int[@untagged]) -> (t[@unboxed]) -> (t[@unboxed]) @@ portable\n    = \"caml_vec128_unreachable\" \"caml_neon_int32x4_dup_lane\"\n    [@@noalloc] [@@builtin]\n\n  let[@inline always] set1 a = dup (low_of a)\n\n  let[@inline always] set a b c d =\n    let a = low_of a in\n    let b = low_of b in\n    let c = low_of c in\n    let d = low_of d in\n    let ba = interleave_low_32 a b in\n    let dc = interleave_low_32 c d in\n    interleave_low_64 ba dc\n\n  (* ───── Casts ───── *)\n\n  external of_float32x4 : float32x4# -> t @@ portable\n    = \"caml_vec128_unreachable\" \"caml_vec128_cast\"\n    [@@noalloc] [@@builtin]\n\n  (* ───── Array ───── *)\n\n  module Array = struct\n    external unsafe_get : (int32# array[@local_opt]) @ read -> idx:int -> t\n      = \"%caml_unboxed_int32_array_get128u#\"\n\n    external unsafe_set : (int32# array[@local_opt]) -> idx:int -> t -> unit\n      = \"%caml_unboxed_int32_array_set128u#\"\n  end\nend\n\nmodule Float64x2 = struct\n  type t = float64x2#\n\n  (* ───── Arithmetic ───── *)\n\n  external add : t -> t -> t @@ portable\n    = \"caml_vec128_unreachable\" \"caml_neon_float64x2_add\"\n    [@@noalloc] [@@builtin]\n\n  external sub : t -> t -> t @@ portable\n    = \"caml_vec128_unreachable\" \"caml_neon_float64x2_sub\"\n    [@@noalloc] [@@builtin]\n\n  external mul : t -> t -> t @@ portable\n    = \"caml_vec128_unreachable\" 
\"caml_neon_float64x2_mul\"\n    [@@noalloc] [@@builtin]\n\n  external div : t -> t -> t @@ portable\n    = \"caml_vec128_unreachable\" \"caml_neon_float64x2_div\"\n    [@@noalloc] [@@builtin]\n\n  external sqrt : t -> t @@ portable\n    = \"caml_vec128_unreachable\" \"caml_neon_float64x2_sqrt\"\n    [@@noalloc] [@@builtin]\n\n  let[@inline always] mul_add a b c = add (mul a b) c\n\n  external hadd : t -> t -> t @@ portable\n    = \"caml_vec128_unreachable\" \"caml_neon_float64x2_hadd\"\n    [@@noalloc] [@@builtin]\n\n  let[@inline always] horizontal_add x y = hadd x y\n\n  (* ───── Min/Max ───── *)\n\n  external min : t -> t -> t @@ portable\n    = \"caml_vec128_unreachable\" \"caml_neon_float64x2_min\"\n    [@@noalloc] [@@builtin]\n\n  external max : t -> t -> t @@ portable\n    = \"caml_vec128_unreachable\" \"caml_neon_float64x2_max\"\n    [@@noalloc] [@@builtin]\n\n  (* ───── Constants ───── *)\n\n  external const1 : float# -> t @@ portable\n    = \"caml_vec128_unreachable\" \"caml_float64x2_const1\"\n    [@@noalloc] [@@builtin]\n\n  let[@inline always] zero () = const1 #0.\n  let[@inline always] one () = const1 #1.\n\n  (* ───── Bitwise (for neg/abs) ───── *)\n\n  external of_int64x2 : int64x2# -> t @@ portable\n    = \"caml_vec128_unreachable\" \"caml_vec128_cast\"\n    [@@noalloc] [@@builtin]\n\n  let[@inline always] neg x =\n    Int64x2.(bitwise_xor (const1 #0x8000000000000000L) (of_float64x2 x))\n    |> of_int64x2\n\n  let[@inline always] abs x =\n    Int64x2.(bitwise_and (const1 #0x7fffffffffffffffL) (of_float64x2 x))\n    |> of_int64x2\n\n  (* ───── Lanes ───── *)\n\n  external low_of : float# -> t @@ portable\n    = \"caml_vec128_unreachable\" \"caml_float64x2_low_of_float\"\n    [@@noalloc] [@@builtin]\n\n  external low_to : t -> float# @@ portable\n    = \"caml_vec128_unreachable\" \"caml_float64x2_low_to_float\"\n    [@@noalloc] [@@builtin]\n\n  external low_64_to_high_64 : t -> t -> t @@ portable\n    = \"caml_vec128_unreachable\" 
\"caml_simd_vec128_low_64_to_high_64\"\n    [@@noalloc] [@@builtin]\n\n  external high_64_to_low_64 : t -> t -> t @@ portable\n    = \"caml_vec128_unreachable\" \"caml_simd_vec128_high_64_to_low_64\"\n    [@@noalloc] [@@builtin]\n\n  let[@inline always] set1 a =\n    let a = low_of a in\n    Int64x2.dup (Int64x2.of_float64x2 a) |> of_int64x2\n\n  let[@inline always] set a b = low_64_to_high_64 (low_of a) (low_of b)\n  let[@inline always] extract0 x = low_to x\n  let[@inline always] splat x = #(low_to x, low_to (high_64_to_low_64 x x))\n\n  (* ───── Array ───── *)\n\n  module Array = struct\n    external unsafe_get : (float# array[@local_opt]) @ read -> idx:int -> t\n      = \"%caml_unboxed_float_array_get128u#\"\n\n    external unsafe_set : (float# array[@local_opt]) -> idx:int -> t -> unit\n      = \"%caml_unboxed_float_array_set128u#\"\n  end\nend\n\nmodule Float32x4 = struct\n  type t = float32x4#\n\n  (* ───── Arithmetic ───── *)\n\n  external add : t -> t -> t @@ portable\n    = \"caml_vec128_unreachable\" \"caml_neon_float32x4_add\"\n    [@@noalloc] [@@builtin]\n\n  external sub : t -> t -> t @@ portable\n    = \"caml_vec128_unreachable\" \"caml_neon_float32x4_sub\"\n    [@@noalloc] [@@builtin]\n\n  external mul : t -> t -> t @@ portable\n    = \"caml_vec128_unreachable\" \"caml_neon_float32x4_mul\"\n    [@@noalloc] [@@builtin]\n\n  external div : t -> t -> t @@ portable\n    = \"caml_vec128_unreachable\" \"caml_neon_float32x4_div\"\n    [@@noalloc] [@@builtin]\n\n  external sqrt : t -> t @@ portable\n    = \"caml_vec128_unreachable\" \"caml_neon_float32x4_sqrt\"\n    [@@noalloc] [@@builtin]\n\n  let[@inline always] mul_add a b c = add (mul a b) c\n\n  external hadd : t -> t -> t @@ portable\n    = \"caml_vec128_unreachable\" \"caml_neon_float32x4_hadd\"\n    [@@noalloc] [@@builtin]\n\n  let[@inline always] horizontal_add x y = hadd x y\n\n  (* ───── Min/Max ───── *)\n\n  external min : t -> t -> t @@ portable\n    = \"caml_vec128_unreachable\" 
\"caml_neon_float32x4_min\"\n    [@@noalloc] [@@builtin]\n\n  external max : t -> t -> t @@ portable\n    = \"caml_vec128_unreachable\" \"caml_neon_float32x4_max\"\n    [@@noalloc] [@@builtin]\n\n  (* ───── Constants ───── *)\n\n  external const1 : float32# -> t @@ portable\n    = \"caml_vec128_unreachable\" \"caml_float32x4_const1\"\n    [@@noalloc] [@@builtin]\n\n  let[@inline always] zero () = const1 #0.0s\n  let[@inline always] one () = const1 #1.0s\n\n  (* ───── Bitwise (for neg/abs) ───── *)\n\n  external of_int32x4 : int32x4# -> t @@ portable\n    = \"caml_vec128_unreachable\" \"caml_vec128_cast\"\n    [@@noalloc] [@@builtin]\n\n  let[@inline always] neg x =\n    Int32x4.(bitwise_xor (const1 #0x80000000l) (of_float32x4 x)) |> of_int32x4\n\n  let[@inline always] abs x =\n    Int32x4.(bitwise_and (const1 #0x7fffffffl) (of_float32x4 x)) |> of_int32x4\n\n  (* ───── Lanes ───── *)\n\n  external low_of : float32# -> t @@ portable\n    = \"caml_vec128_unreachable\" \"caml_float32x4_low_of_float32\"\n    [@@noalloc] [@@builtin]\n\n  external low_to : t -> float32# @@ portable\n    = \"caml_vec128_unreachable\" \"caml_float32x4_low_to_float32\"\n    [@@noalloc] [@@builtin]\n\n  let[@inline always] set1 a =\n    let a = low_of a in\n    Int32x4.dup (Int32x4.of_float32x4 a) |> of_int32x4\n\n  let[@inline always] set a b c d =\n    let a = Int32x4.of_float32x4 (low_of a) in\n    let b = Int32x4.of_float32x4 (low_of b) in\n    let c = Int32x4.of_float32x4 (low_of c) in\n    let d = Int32x4.of_float32x4 (low_of d) in\n    let ba = Int32x4.interleave_low_32 a b in\n    let dc = Int32x4.interleave_low_32 c d in\n    Int32x4.interleave_low_64 ba dc |> of_int32x4\n\n  let[@inline always] extract0 x = low_to x\n\n  let[@inline always] splat x =\n    let as_i = Int32x4.of_float32x4 x in\n    let lane1 = Int32x4.dup_lane 1 as_i |> of_int32x4 |> low_to in\n    let lane2 = Int32x4.dup_lane 2 as_i |> of_int32x4 |> low_to in\n    let lane3 = Int32x4.dup_lane 3 as_i |> of_int32x4 |> 
low_to in\n    #(low_to x, lane1, lane2, lane3)\n\n  (* ───── Array ───── *)\n\n  module Array = struct\n    external unsafe_get : (float32# array[@local_opt]) @ read -> idx:int -> t\n      = \"%caml_unboxed_float32_array_get128u#\"\n\n    external unsafe_set : (float32# array[@local_opt]) -> idx:int -> t -> unit\n      = \"%caml_unboxed_float32_array_set128u#\"\n  end\nend\n"
  },
  {
    "path": "packages/nx-oxcaml/lib/simd_sse.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  Distributed under the ISC license, see terms at the end of the file.\n\n  Based on ocaml_simd (https://github.com/janestreet/ocaml_simd)\n  Copyright (c) 2025-2026 Jane Street Group, LLC\n  Released under the MIT license.\n  ---------------------------------------------------------------------------*)\n\n(* x86-64 SSE SIMD — Prefetch, Int64x2, Int32x4, Float64x2, Float32x4 *)\n\nmodule Prefetch = struct\n  external read : 'a -> (int[@untagged]) -> unit\n    = \"caml_prefetch_ignore\" \"caml_prefetch_read_high_val_offset_untagged\"\n    [@@noalloc] [@@builtin]\nend\n\nmodule Int64x2 = struct\n  type t = int64x2#\n\n  (* ───── Arithmetic ───── *)\n\n  external add : t -> t -> t @@ portable\n    = \"caml_sse2_unreachable\" \"caml_sse2_int64x2_add\"\n    [@@noalloc] [@@unboxed] [@@builtin]\n\n  external sub : t -> t -> t @@ portable\n    = \"caml_sse2_unreachable\" \"caml_sse2_int64x2_sub\"\n    [@@noalloc] [@@unboxed] [@@builtin]\n\n  external neg : t -> t @@ portable\n    = \"caml_sse2_unreachable\" \"caml_sse2_int64x2_neg\"\n    [@@noalloc] [@@unboxed] [@@builtin]\n\n  (* ───── Bitwise ───── *)\n\n  external bitwise_and : t -> t -> t @@ portable\n    = \"caml_sse2_unreachable\" \"caml_sse2_vec128_and\"\n    [@@noalloc] [@@unboxed] [@@builtin]\n\n  external bitwise_or : t -> t -> t @@ portable\n    = \"caml_sse2_unreachable\" \"caml_sse2_vec128_or\"\n    [@@noalloc] [@@unboxed] [@@builtin]\n\n  external bitwise_xor : t -> t -> t @@ portable\n    = \"caml_sse2_unreachable\" \"caml_sse2_vec128_xor\"\n    [@@noalloc] [@@unboxed] [@@builtin]\n\n  external bitwise_not : t -> t @@ portable\n    = \"caml_sse2_unreachable\" \"caml_sse2_vec128_not\"\n    [@@noalloc] [@@unboxed] [@@builtin]\n\n  let[@inline always] ( land ) x y = bitwise_and x y\n  let[@inline always] ( lor ) x y = bitwise_or x y\n  let[@inline always] ( 
lxor ) x y = bitwise_xor x y\n\n  (* ───── Comparison ───── *)\n\n  external cmpgt : t -> t -> t @@ portable\n    = \"caml_sse2_unreachable\" \"caml_sse42_int64x2_cmpgt\"\n    [@@noalloc] [@@unboxed] [@@builtin]\n\n  (* ───── Blend ───── *)\n\n  let[@inline always] blendv a b mask =\n    bitwise_or (bitwise_and b mask) (bitwise_and a (bitwise_not mask))\n\n  (* ───── Constants ───── *)\n\n  external const1 : int64# -> t @@ portable\n    = \"caml_sse2_unreachable\" \"caml_int64x2_const1\"\n    [@@noalloc] [@@builtin]\n\n  let[@inline always] zero () = const1 #0L\n  let[@inline always] one () = const1 #1L\n  let[@inline always] all_ones () = const1 #0xffffffffffffffffL\n\n  (* ───── Lanes ───── *)\n\n  external low_of : int64# -> t @@ portable\n    = \"caml_sse2_unreachable\" \"caml_int64x2_low_of_int64\"\n    [@@noalloc] [@@builtin]\n\n  external low_to : t -> int64# @@ portable\n    = \"caml_sse2_unreachable\" \"caml_int64x2_low_to_int64\"\n    [@@noalloc] [@@builtin]\n\n  external dup : t -> t @@ portable\n    = \"caml_sse2_unreachable\" \"caml_sse2_int64x2_dup\"\n    [@@noalloc] [@@unboxed] [@@builtin]\n\n  external low_64_to_high_64 : t -> t -> t @@ portable\n    = \"caml_sse2_unreachable\" \"caml_simd_vec128_low_64_to_high_64\"\n    [@@noalloc] [@@unboxed] [@@builtin]\n\n  let[@inline always] set1 a = dup (low_of a)\n  let[@inline always] set a b = low_64_to_high_64 (low_of a) (low_of b)\n\n  (* ───── Casts ───── *)\n\n  external of_float64x2 : float64x2# -> t @@ portable\n    = \"caml_sse2_unreachable\" \"caml_vec128_cast\"\n    [@@noalloc] [@@builtin]\n\n  (* ───── Array ───── *)\n\n  module Array = struct\n    external unsafe_get : (int64# array[@local_opt]) @ read -> idx:int -> t\n      = \"%caml_unboxed_int64_array_get128u#\"\n\n    external unsafe_set : (int64# array[@local_opt]) -> idx:int -> t -> unit\n      = \"%caml_unboxed_int64_array_set128u#\"\n  end\nend\n\nmodule Int32x4 = struct\n  type t = int32x4#\n\n  (* ───── Arithmetic ───── *)\n\n  
external add : t -> t -> t @@ portable\n    = \"caml_sse2_unreachable\" \"caml_sse2_int32x4_add\"\n    [@@noalloc] [@@unboxed] [@@builtin]\n\n  external sub : t -> t -> t @@ portable\n    = \"caml_sse2_unreachable\" \"caml_sse2_int32x4_sub\"\n    [@@noalloc] [@@unboxed] [@@builtin]\n\n  external neg : t -> t @@ portable\n    = \"caml_sse2_unreachable\" \"caml_sse2_int32x4_neg\"\n    [@@noalloc] [@@unboxed] [@@builtin]\n\n  external abs : t -> t @@ portable\n    = \"caml_sse2_unreachable\" \"caml_ssse3_int32x4_abs\"\n    [@@noalloc] [@@unboxed] [@@builtin]\n\n  (* ───── Min/Max ───── *)\n\n  external min : t -> t -> t @@ portable\n    = \"caml_sse2_unreachable\" \"caml_sse41_int32x4_min\"\n    [@@noalloc] [@@unboxed] [@@builtin]\n\n  external max : t -> t -> t @@ portable\n    = \"caml_sse2_unreachable\" \"caml_sse41_int32x4_max\"\n    [@@noalloc] [@@unboxed] [@@builtin]\n\n  (* ───── Bitwise ───── *)\n\n  external bitwise_and : t -> t -> t @@ portable\n    = \"caml_sse2_unreachable\" \"caml_sse2_vec128_and\"\n    [@@noalloc] [@@unboxed] [@@builtin]\n\n  external bitwise_or : t -> t -> t @@ portable\n    = \"caml_sse2_unreachable\" \"caml_sse2_vec128_or\"\n    [@@noalloc] [@@unboxed] [@@builtin]\n\n  external bitwise_xor : t -> t -> t @@ portable\n    = \"caml_sse2_unreachable\" \"caml_sse2_vec128_xor\"\n    [@@noalloc] [@@unboxed] [@@builtin]\n\n  external bitwise_not : t -> t @@ portable\n    = \"caml_sse2_unreachable\" \"caml_sse2_vec128_not\"\n    [@@noalloc] [@@unboxed] [@@builtin]\n\n  let[@inline always] ( land ) x y = bitwise_and x y\n  let[@inline always] ( lor ) x y = bitwise_or x y\n  let[@inline always] ( lxor ) x y = bitwise_xor x y\n\n  (* ───── Constants ───── *)\n\n  external const1 : int32# -> t @@ portable\n    = \"caml_sse2_unreachable\" \"caml_int32x4_const1\"\n    [@@noalloc] [@@builtin]\n\n  let[@inline always] zero () = const1 #0l\n  let[@inline always] one () = const1 #1l\n\n  (* ───── Lanes ───── *)\n\n  external low_of : int32# -> t @@ 
portable\n    = \"caml_sse2_unreachable\" \"caml_int32x4_low_of_int32\"\n    [@@noalloc] [@@builtin]\n\n  external low_to : t -> int32# @@ portable\n    = \"caml_sse2_unreachable\" \"caml_int32x4_low_to_int32\"\n    [@@noalloc] [@@builtin]\n\n  external dup : t -> t @@ portable\n    = \"caml_sse2_unreachable\" \"caml_sse2_int32x4_shuffle_0000\"\n    [@@noalloc] [@@unboxed] [@@builtin]\n\n  external interleave_low_32 : t -> t -> t @@ portable\n    = \"caml_sse2_unreachable\" \"caml_simd_vec128_interleave_low_32\"\n    [@@noalloc] [@@unboxed] [@@builtin]\n\n  external interleave_low_64 : t -> t -> t @@ portable\n    = \"caml_sse2_unreachable\" \"caml_simd_vec128_interleave_low_64\"\n    [@@noalloc] [@@unboxed] [@@builtin]\n\n  external dup_lane : (int[@untagged]) -> (t[@unboxed]) -> (t[@unboxed]) @@ portable\n    = \"caml_sse2_unreachable\" \"caml_sse2_int32x4_dup_lane\"\n    [@@noalloc] [@@builtin]\n\n  let[@inline always] set1 a = dup (low_of a)\n\n  let[@inline always] set a b c d =\n    let a = low_of a in\n    let b = low_of b in\n    let c = low_of c in\n    let d = low_of d in\n    let ba = interleave_low_32 a b in\n    let dc = interleave_low_32 c d in\n    interleave_low_64 ba dc\n\n  (* ───── Casts ───── *)\n\n  external of_float32x4 : float32x4# -> t @@ portable\n    = \"caml_sse2_unreachable\" \"caml_vec128_cast\"\n    [@@noalloc] [@@builtin]\n\n  (* ───── Array ───── *)\n\n  module Array = struct\n    external unsafe_get : (int32# array[@local_opt]) @ read -> idx:int -> t\n      = \"%caml_unboxed_int32_array_get128u#\"\n\n    external unsafe_set : (int32# array[@local_opt]) -> idx:int -> t -> unit\n      = \"%caml_unboxed_int32_array_set128u#\"\n  end\nend\n\nmodule Float64x2 = struct\n  type t = float64x2#\n\n  (* ───── Arithmetic ───── *)\n\n  external add : t -> t -> t @@ portable\n    = \"caml_sse2_unreachable\" \"caml_sse2_float64x2_add\"\n    [@@noalloc] [@@unboxed] [@@builtin]\n\n  external sub : t -> t -> t @@ portable\n    = 
\"caml_sse2_unreachable\" \"caml_sse2_float64x2_sub\"\n    [@@noalloc] [@@unboxed] [@@builtin]\n\n  external mul : t -> t -> t @@ portable\n    = \"caml_sse2_unreachable\" \"caml_sse2_float64x2_mul\"\n    [@@noalloc] [@@unboxed] [@@builtin]\n\n  external div : t -> t -> t @@ portable\n    = \"caml_sse2_unreachable\" \"caml_sse2_float64x2_div\"\n    [@@noalloc] [@@unboxed] [@@builtin]\n\n  external sqrt : t -> t @@ portable\n    = \"caml_sse2_unreachable\" \"caml_sse2_float64x2_sqrt\"\n    [@@noalloc] [@@unboxed] [@@builtin]\n\n  external mul_add : (t[@unboxed]) -> (t[@unboxed]) -> (t[@unboxed]) -> (t[@unboxed]) @@ portable\n    = \"caml_sse2_unreachable\" \"caml_fma_float64x2_fmadd\"\n    [@@noalloc]\n\n  external hadd : t -> t -> t @@ portable\n    = \"caml_sse2_unreachable\" \"caml_sse3_float64x2_hadd\"\n    [@@noalloc] [@@unboxed] [@@builtin]\n\n  let[@inline always] horizontal_add x y = hadd x y\n\n  (* ───── Min/Max ───── *)\n\n  external min : t -> t -> t @@ portable\n    = \"caml_sse2_unreachable\" \"caml_sse2_float64x2_min\"\n    [@@noalloc] [@@unboxed] [@@builtin]\n\n  external max : t -> t -> t @@ portable\n    = \"caml_sse2_unreachable\" \"caml_sse2_float64x2_max\"\n    [@@noalloc] [@@unboxed] [@@builtin]\n\n  (* ───── Constants ───── *)\n\n  external const1 : float# -> t @@ portable\n    = \"caml_sse2_unreachable\" \"caml_float64x2_const1\"\n    [@@noalloc] [@@builtin]\n\n  let[@inline always] zero () = const1 #0.\n  let[@inline always] one () = const1 #1.\n\n  (* ───── Bitwise (for neg/abs) ───── *)\n\n  external of_int64x2 : int64x2# -> t @@ portable\n    = \"caml_sse2_unreachable\" \"caml_vec128_cast\"\n    [@@noalloc] [@@builtin]\n\n  let[@inline always] neg x =\n    Int64x2.(bitwise_xor (const1 #0x8000000000000000L) (of_float64x2 x))\n    |> of_int64x2\n\n  let[@inline always] abs x =\n    Int64x2.(bitwise_and (const1 #0x7fffffffffffffffL) (of_float64x2 x))\n    |> of_int64x2\n\n  (* ───── Lanes ───── *)\n\n  external low_of : float# -> t @@ 
portable\n    = \"caml_sse2_unreachable\" \"caml_float64x2_low_of_float\"\n    [@@noalloc] [@@builtin]\n\n  external low_to : t -> float# @@ portable\n    = \"caml_sse2_unreachable\" \"caml_float64x2_low_to_float\"\n    [@@noalloc] [@@builtin]\n\n  external low_64_to_high_64 : t -> t -> t @@ portable\n    = \"caml_sse2_unreachable\" \"caml_simd_vec128_low_64_to_high_64\"\n    [@@noalloc] [@@unboxed] [@@builtin]\n\n  external high_64_to_low_64 : t -> t -> t @@ portable\n    = \"caml_sse2_unreachable\" \"caml_simd_vec128_high_64_to_low_64\"\n    [@@noalloc] [@@unboxed] [@@builtin]\n\n  let[@inline always] set1 a =\n    let a = low_of a in\n    Int64x2.dup (Int64x2.of_float64x2 a) |> of_int64x2\n\n  let[@inline always] set a b = low_64_to_high_64 (low_of a) (low_of b)\n  let[@inline always] extract0 x = low_to x\n  let[@inline always] splat x = #(low_to x, low_to (high_64_to_low_64 x x))\n\n  (* ───── Array ───── *)\n\n  module Array = struct\n    external unsafe_get : (float# array[@local_opt]) @ read -> idx:int -> t\n      = \"%caml_unboxed_float_array_get128u#\"\n\n    external unsafe_set : (float# array[@local_opt]) -> idx:int -> t -> unit\n      = \"%caml_unboxed_float_array_set128u#\"\n  end\nend\n\nmodule Float32x4 = struct\n  type t = float32x4#\n\n  (* ───── Arithmetic ───── *)\n\n  external add : t -> t -> t @@ portable\n    = \"caml_sse2_unreachable\" \"caml_sse_float32x4_add\"\n    [@@noalloc] [@@unboxed] [@@builtin]\n\n  external sub : t -> t -> t @@ portable\n    = \"caml_sse2_unreachable\" \"caml_sse_float32x4_sub\"\n    [@@noalloc] [@@unboxed] [@@builtin]\n\n  external mul : t -> t -> t @@ portable\n    = \"caml_sse2_unreachable\" \"caml_sse_float32x4_mul\"\n    [@@noalloc] [@@unboxed] [@@builtin]\n\n  external div : t -> t -> t @@ portable\n    = \"caml_sse2_unreachable\" \"caml_sse_float32x4_div\"\n    [@@noalloc] [@@unboxed] [@@builtin]\n\n  external sqrt : t -> t @@ portable\n    = \"caml_sse2_unreachable\" \"caml_sse_float32x4_sqrt\"\n    
[@@noalloc] [@@unboxed] [@@builtin]\n\n  external mul_add : (t[@unboxed]) -> (t[@unboxed]) -> (t[@unboxed]) -> (t[@unboxed]) @@ portable\n    = \"caml_sse2_unreachable\" \"caml_fma_float32x4_fmadd\"\n    [@@noalloc]\n\n  external hadd : t -> t -> t @@ portable\n    = \"caml_sse2_unreachable\" \"caml_sse3_float32x4_hadd\"\n    [@@noalloc] [@@unboxed] [@@builtin]\n\n  let[@inline always] horizontal_add x y = hadd x y\n\n  (* ───── Min/Max ───── *)\n\n  external min : t -> t -> t @@ portable\n    = \"caml_sse2_unreachable\" \"caml_sse_float32x4_min\"\n    [@@noalloc] [@@unboxed] [@@builtin]\n\n  external max : t -> t -> t @@ portable\n    = \"caml_sse2_unreachable\" \"caml_sse_float32x4_max\"\n    [@@noalloc] [@@unboxed] [@@builtin]\n\n  (* ───── Constants ───── *)\n\n  external const1 : float32# -> t @@ portable\n    = \"caml_sse2_unreachable\" \"caml_float32x4_const1\"\n    [@@noalloc] [@@builtin]\n\n  let[@inline always] zero () = const1 #0.0s\n  let[@inline always] one () = const1 #1.0s\n\n  (* ───── Bitwise (for neg/abs) ───── *)\n\n  external of_int32x4 : int32x4# -> t @@ portable\n    = \"caml_sse2_unreachable\" \"caml_vec128_cast\"\n    [@@noalloc] [@@builtin]\n\n  let[@inline always] neg x =\n    Int32x4.(bitwise_xor (const1 #0x80000000l) (of_float32x4 x)) |> of_int32x4\n\n  let[@inline always] abs x =\n    Int32x4.(bitwise_and (const1 #0x7fffffffl) (of_float32x4 x)) |> of_int32x4\n\n  (* ───── Lanes ───── *)\n\n  external low_of : float32# -> t @@ portable\n    = \"caml_sse2_unreachable\" \"caml_float32x4_low_of_float32\"\n    [@@noalloc] [@@builtin]\n\n  external low_to : t -> float32# @@ portable\n    = \"caml_sse2_unreachable\" \"caml_float32x4_low_to_float32\"\n    [@@noalloc] [@@builtin]\n\n  let[@inline always] set1 a =\n    let a = low_of a in\n    Int32x4.dup (Int32x4.of_float32x4 a) |> of_int32x4\n\n  let[@inline always] set a b c d =\n    let a = Int32x4.of_float32x4 (low_of a) in\n    let b = Int32x4.of_float32x4 (low_of b) in\n    let c = 
Int32x4.of_float32x4 (low_of c) in\n    let d = Int32x4.of_float32x4 (low_of d) in\n    let ba = Int32x4.interleave_low_32 a b in\n    let dc = Int32x4.interleave_low_32 c d in\n    Int32x4.interleave_low_64 ba dc |> of_int32x4\n\n  let[@inline always] extract0 x = low_to x\n\n  let[@inline always] splat x =\n    let as_i = Int32x4.of_float32x4 x in\n    let lane1 = Int32x4.dup_lane 1 as_i |> of_int32x4 |> low_to in\n    let lane2 = Int32x4.dup_lane 2 as_i |> of_int32x4 |> low_to in\n    let lane3 = Int32x4.dup_lane 3 as_i |> of_int32x4 |> low_to in\n    #(low_to x, lane1, lane2, lane3)\n\n  (* ───── Array ───── *)\n\n  module Array = struct\n    external unsafe_get : (float32# array[@local_opt]) @ read -> idx:int -> t\n      = \"%caml_unboxed_float32_array_get128u#\"\n\n    external unsafe_set : (float32# array[@local_opt]) -> idx:int -> t -> unit\n      = \"%caml_unboxed_float32_array_set128u#\"\n  end\nend\n"
  },
  {
    "path": "packages/nx-oxcaml/lib/simd_stubs.c",
    "content": "/*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  Distributed under the ISC license, see terms at the end of the file.\n\n  Based on ocaml_simd (https://github.com/janestreet/ocaml_simd)\n  Copyright (c) 2025-2026 Jane Street Group, LLC\n  Released under the MIT license.\n  ---------------------------------------------------------------------------*/\n\n#include <assert.h>\n\n#define BUILTIN(name) void name() { assert(!\"Didn't use [@@builtin] intrinsic.\"); }\n\n/* Prefetch (no-op in bytecode; native compiler inlines the instruction) */\nvoid caml_prefetch_ignore() { }\n\n/* Shared builtins (architecture-independent symbol names) */\nBUILTIN(caml_vec128_cast)\nBUILTIN(caml_int64x2_const1)\nBUILTIN(caml_int64x2_low_of_int64)\nBUILTIN(caml_int64x2_low_to_int64)\nBUILTIN(caml_simd_vec128_low_64_to_high_64)\nBUILTIN(caml_simd_vec128_high_64_to_low_64)\nBUILTIN(caml_int32x4_const1)\nBUILTIN(caml_int32x4_low_of_int32)\nBUILTIN(caml_int32x4_low_to_int32)\nBUILTIN(caml_simd_vec128_interleave_low_32)\nBUILTIN(caml_simd_vec128_interleave_low_64)\nBUILTIN(caml_float64x2_const1)\nBUILTIN(caml_float64x2_low_of_float)\nBUILTIN(caml_float64x2_low_to_float)\nBUILTIN(caml_float32x4_const1)\nBUILTIN(caml_float32x4_low_of_float32)\nBUILTIN(caml_float32x4_low_to_float32)\n\n#if defined(__aarch64__) || defined(_M_ARM64)\n\nBUILTIN(caml_vec128_unreachable)\n\n/* Int64x2 - NEON */\nBUILTIN(caml_neon_int64x2_add)\nBUILTIN(caml_neon_int64x2_sub)\nBUILTIN(caml_neon_int64x2_neg)\nBUILTIN(caml_neon_int64x2_bitwise_and)\nBUILTIN(caml_neon_int64x2_bitwise_or)\nBUILTIN(caml_neon_int64x2_bitwise_xor)\nBUILTIN(caml_neon_int64x2_bitwise_not)\nBUILTIN(caml_neon_int64x2_cmpgt)\nBUILTIN(caml_neon_int64x2_dup)\n\n/* Int32x4 - NEON 
*/\nBUILTIN(caml_neon_int32x4_add)\nBUILTIN(caml_neon_int32x4_sub)\nBUILTIN(caml_neon_int32x4_neg)\nBUILTIN(caml_neon_int32x4_abs)\nBUILTIN(caml_neon_int32x4_min)\nBUILTIN(caml_neon_int32x4_max)\nBUILTIN(caml_neon_int32x4_bitwise_and)\nBUILTIN(caml_neon_int32x4_bitwise_or)\nBUILTIN(caml_neon_int32x4_bitwise_xor)\nBUILTIN(caml_neon_int32x4_bitwise_not)\nBUILTIN(caml_neon_int32x4_dup)\nBUILTIN(caml_neon_int32x4_dup_lane)\n\n/* Float64x2 - NEON */\nBUILTIN(caml_neon_float64x2_add)\nBUILTIN(caml_neon_float64x2_sub)\nBUILTIN(caml_neon_float64x2_mul)\nBUILTIN(caml_neon_float64x2_div)\nBUILTIN(caml_neon_float64x2_sqrt)\nBUILTIN(caml_neon_float64x2_hadd)\nBUILTIN(caml_neon_float64x2_min)\nBUILTIN(caml_neon_float64x2_max)\n\n/* Float32x4 - NEON */\nBUILTIN(caml_neon_float32x4_add)\nBUILTIN(caml_neon_float32x4_sub)\nBUILTIN(caml_neon_float32x4_mul)\nBUILTIN(caml_neon_float32x4_div)\nBUILTIN(caml_neon_float32x4_sqrt)\nBUILTIN(caml_neon_float32x4_hadd)\nBUILTIN(caml_neon_float32x4_min)\nBUILTIN(caml_neon_float32x4_max)\n\n/* The ARM64 OxCaml backend does not support Cprefetch (operation_supported\n   returns false), so [@@builtin] is not inlined. Provide a real C stub that\n   emits the PRFM instruction via __builtin_prefetch. 
*/\n\n#include <caml/mlvalues.h>\n\nvoid caml_prefetch_read_high_val_offset_untagged(value v, intnat offset) {\n    __builtin_prefetch((char *)v + offset, 0, 3);\n}\n\n#elif defined(__x86_64__) || defined(_M_X64)\n\nBUILTIN(caml_sse2_unreachable)\nBUILTIN(caml_prefetch_read_high_val_offset_untagged)\n\n/* Int64x2 - SSE */\nBUILTIN(caml_sse2_int64x2_add)\nBUILTIN(caml_sse2_int64x2_sub)\nBUILTIN(caml_sse2_int64x2_neg)\nBUILTIN(caml_sse2_vec128_and)\nBUILTIN(caml_sse2_vec128_or)\nBUILTIN(caml_sse2_vec128_xor)\nBUILTIN(caml_sse2_vec128_not)\nBUILTIN(caml_sse42_int64x2_cmpgt)\nBUILTIN(caml_sse2_int64x2_dup)\n\n/* Int32x4 - SSE */\nBUILTIN(caml_sse2_int32x4_add)\nBUILTIN(caml_sse2_int32x4_sub)\nBUILTIN(caml_sse2_int32x4_neg)\nBUILTIN(caml_ssse3_int32x4_abs)\nBUILTIN(caml_sse41_int32x4_min)\nBUILTIN(caml_sse41_int32x4_max)\nBUILTIN(caml_sse2_int32x4_shuffle_0000)\nBUILTIN(caml_sse2_int32x4_dup_lane)\n\n/* Float64x2 - SSE/FMA */\nBUILTIN(caml_sse2_float64x2_add)\nBUILTIN(caml_sse2_float64x2_sub)\nBUILTIN(caml_sse2_float64x2_mul)\nBUILTIN(caml_sse2_float64x2_div)\nBUILTIN(caml_sse2_float64x2_sqrt)\nBUILTIN(caml_fma_float64x2_fmadd)\nBUILTIN(caml_sse3_float64x2_hadd)\nBUILTIN(caml_sse2_float64x2_min)\nBUILTIN(caml_sse2_float64x2_max)\n\n/* Float32x4 - SSE/FMA */\nBUILTIN(caml_sse_float32x4_add)\nBUILTIN(caml_sse_float32x4_sub)\nBUILTIN(caml_sse_float32x4_mul)\nBUILTIN(caml_sse_float32x4_div)\nBUILTIN(caml_sse_float32x4_sqrt)\nBUILTIN(caml_fma_float32x4_fmadd)\nBUILTIN(caml_sse3_float32x4_hadd)\nBUILTIN(caml_sse_float32x4_min)\nBUILTIN(caml_sse_float32x4_max)\n\n#else\n#error \"Unsupported architecture: expected arm64 or x86_64\"\n#endif\n"
  },
  {
    "path": "packages/nx-oxcaml/lib/ternary_ops/op_where.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Import\n\nlet where_float64\n    cond_arr true_arr false_arr out_arr\n    vcond vtrue vfalse vout\n    start_idx end_idx =\n  let[@inline] select_f64x2 (mask : Int64x2.t) (a : Float64x2.t) (b : Float64x2.t) =\n    let ai = Int64x2.of_float64x2 a in\n    let bi = Int64x2.of_float64x2 b in\n    let r =\n      Int64x2.bitwise_or\n        (Int64x2.bitwise_and mask ai)\n        (Int64x2.bitwise_and (Int64x2.bitwise_not mask) bi)\n    in\n    Float64x2.of_int64x2 r\n  in\n  let[@inline] mask2 cond_arr base i =\n    let m0 =\n      if Array.unsafe_get cond_arr (base + i)\n      then (-#1L)\n      else #0L\n    in\n    let m1 =\n      if Array.unsafe_get cond_arr (base + i + 1)\n      then (-#1L)\n      else #0L\n    in\n    Int64x2.set m0 m1\n  in\n  let cond_base = View.offset vcond + start_idx in\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset vtrue + start_idx in\n  let b_base = View.offset vfalse + start_idx in\n  if\n    View.is_c_contiguous vout &&\n    View.is_c_contiguous vtrue &&\n    View.is_c_contiguous vfalse &&\n    View.is_c_contiguous vcond\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n8 = n - 7 in\n    while !i < n8 do\n      let idx = !i in\n      let a0 = Float64x2.Array.unsafe_get true_arr ~idx:(a_base + idx) in\n      let b0 = Float64x2.Array.unsafe_get false_arr ~idx:(b_base + idx) in\n      let m0 = mask2 cond_arr cond_base idx in\n      let a1 = Float64x2.Array.unsafe_get true_arr ~idx:(a_base + idx + 2) in\n      let b1 = Float64x2.Array.unsafe_get false_arr ~idx:(b_base + idx + 2) in\n      let m1 = mask2 cond_arr cond_base (idx + 2) in\n      let a2 = Float64x2.Array.unsafe_get true_arr ~idx:(a_base + idx + 4) 
in\n      let b2 = Float64x2.Array.unsafe_get false_arr ~idx:(b_base + idx + 4) in\n      let m2 = mask2 cond_arr cond_base (idx + 4) in\n      let a3 = Float64x2.Array.unsafe_get true_arr ~idx:(a_base + idx + 6) in\n      let b3 = Float64x2.Array.unsafe_get false_arr ~idx:(b_base + idx + 6) in\n      let m3 = mask2 cond_arr cond_base (idx + 6) in\n      Float64x2.Array.unsafe_set out_arr ~idx:(out_base + idx)\n        (select_f64x2 m0 a0 b0);\n      Float64x2.Array.unsafe_set out_arr ~idx:(out_base + idx + 2)\n        (select_f64x2 m1 a1 b1);\n      Float64x2.Array.unsafe_set out_arr ~idx:(out_base + idx + 4)\n        (select_f64x2 m2 a2 b2);\n      Float64x2.Array.unsafe_set out_arr ~idx:(out_base + idx + 6)\n        (select_f64x2 m3 a3 b3);\n      i := idx + 8\n    done;\n    let n2 = n - 1 in\n    while !i < n2 do\n      let idx = !i in\n      let a = Float64x2.Array.unsafe_get true_arr ~idx:(a_base + idx) in\n      let b = Float64x2.Array.unsafe_get false_arr ~idx:(b_base + idx) in\n      let m = mask2 cond_arr cond_base idx in\n      Float64x2.Array.unsafe_set out_arr ~idx:(out_base + idx)\n        (select_f64x2 m a b);\n      i := idx + 2\n    done;\n    while !i < n do\n      let idx = !i in\n      let a = Array.unsafe_get true_arr (a_base + idx) in\n      let b = Array.unsafe_get false_arr (b_base + idx) in\n      let c = Array.unsafe_get cond_arr (cond_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (if c then a else b);\n      incr i\n    done\n  )\n  else\n    let out_shape = shape vout in\n    let out_strides = View.strides vout in\n    let a_shape = shape vtrue in\n    let b_shape = shape vfalse in\n    let c_shape = shape vcond in\n    let a_strides = View.strides vtrue in\n    let b_strides = View.strides vfalse in\n    let c_strides = View.strides vcond in\n    let a_offset = View.offset vtrue in\n    let b_offset = View.offset vfalse in\n    let c_offset = View.offset vcond in\n    let out_offset = View.offset vout in\n    let 
md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    let c_idx = Array.make (Array.length c_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      Shape.broadcast_index_into md_idx c_shape c_idx;\n      let c_lin = Shape.ravel_index c_idx c_strides in\n      let out_lin = Shape.ravel_index md_idx out_strides in\n      let a = Array.unsafe_get true_arr (a_offset + a_lin) in\n      let b = Array.unsafe_get false_arr (b_offset + b_lin) in\n      let c = Array.unsafe_get cond_arr (c_offset + c_lin) in\n      Array.unsafe_set out_arr (out_offset + out_lin) (if c then a else b)\n    done\n\nlet where_float32\n    cond_arr true_arr false_arr out_arr\n    vcond vtrue vfalse vout\n    start_idx end_idx =\n  let[@inline] select_f32x4\n      (mask : Int32x4.t)\n      (a : Float32x4.t)\n      (b : Float32x4.t) =\n    let ai = Int32x4.of_float32x4 a in\n    let bi = Int32x4.of_float32x4 b in\n    let r =\n      Int32x4.bitwise_or\n        (Int32x4.bitwise_and mask ai)\n        (Int32x4.bitwise_and (Int32x4.bitwise_not mask) bi)\n    in\n    Float32x4.of_int32x4 r\n  in\n  let[@inline] mask4 cond_arr base i =\n    let m0 =\n      if Array.unsafe_get cond_arr (base + i)\n      then (-#1l) else #0l\n    in\n    let m1 =\n      if Array.unsafe_get cond_arr (base + i + 1)\n      then (-#1l) else #0l\n    in\n    let m2 =\n      if Array.unsafe_get cond_arr (base + i + 2)\n      then (-#1l) else #0l\n    in\n    let m3 =\n      if Array.unsafe_get cond_arr (base + i + 3)\n      then (-#1l) else #0l\n    in\n    Int32x4.set m0 m1 m2 m3\n  in\n  let cond_base = View.offset vcond + start_idx in\n  
let out_base = View.offset vout + start_idx in\n  let a_base = View.offset vtrue + start_idx in\n  let b_base = View.offset vfalse + start_idx in\n  if\n    View.is_c_contiguous vout &&\n    View.is_c_contiguous vtrue &&\n    View.is_c_contiguous vfalse &&\n    View.is_c_contiguous vcond\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n16 = n - 15 in\n    while !i < n16 do\n      let idx = !i in\n      let a0 = Float32x4.Array.unsafe_get true_arr ~idx:(a_base + idx) in\n      let b0 = Float32x4.Array.unsafe_get false_arr ~idx:(b_base + idx) in\n      let m0 = mask4 cond_arr cond_base idx in\n      let a1 = Float32x4.Array.unsafe_get true_arr ~idx:(a_base + idx + 4) in\n      let b1 = Float32x4.Array.unsafe_get false_arr ~idx:(b_base + idx + 4) in\n      let m1 = mask4 cond_arr cond_base (idx + 4) in\n      let a2 = Float32x4.Array.unsafe_get true_arr ~idx:(a_base + idx + 8) in\n      let b2 = Float32x4.Array.unsafe_get false_arr ~idx:(b_base + idx + 8) in\n      let m2 = mask4 cond_arr cond_base (idx + 8) in\n      let a3 = Float32x4.Array.unsafe_get true_arr ~idx:(a_base + idx + 12) in\n      let b3 = Float32x4.Array.unsafe_get false_arr ~idx:(b_base + idx + 12) in\n      let m3 = mask4 cond_arr cond_base (idx + 12) in\n      Float32x4.Array.unsafe_set out_arr ~idx:(out_base + idx)\n        (select_f32x4 m0 a0 b0);\n      Float32x4.Array.unsafe_set out_arr ~idx:(out_base + idx + 4)\n        (select_f32x4 m1 a1 b1);\n      Float32x4.Array.unsafe_set out_arr ~idx:(out_base + idx + 8)\n        (select_f32x4 m2 a2 b2);\n      Float32x4.Array.unsafe_set out_arr ~idx:(out_base + idx + 12)\n        (select_f32x4 m3 a3 b3);\n      i := idx + 16\n    done;\n    let n4 = n - 3 in\n    while !i < n4 do\n      let idx = !i in\n      let a = Float32x4.Array.unsafe_get true_arr ~idx:(a_base + idx) in\n      let b = Float32x4.Array.unsafe_get false_arr ~idx:(b_base + idx) in\n      let m = mask4 cond_arr cond_base idx in\n      
Float32x4.Array.unsafe_set out_arr ~idx:(out_base + idx)\n        (select_f32x4 m a b);\n      i := idx + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a = Array.unsafe_get true_arr (a_base + idx) in\n      let b = Array.unsafe_get false_arr (b_base + idx) in\n      let c = Array.unsafe_get cond_arr (cond_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (if c then a else b);\n      incr i\n    done\n  )\n  else\n    let out_shape = shape vout in\n    let out_strides = View.strides vout in\n    let a_shape = shape vtrue in\n    let b_shape = shape vfalse in\n    let c_shape = shape vcond in\n    let a_strides = View.strides vtrue in\n    let b_strides = View.strides vfalse in\n    let c_strides = View.strides vcond in\n    let a_offset = View.offset vtrue in\n    let b_offset = View.offset vfalse in\n    let c_offset = View.offset vcond in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    let c_idx = Array.make (Array.length c_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      Shape.broadcast_index_into md_idx c_shape c_idx;\n      let c_lin = Shape.ravel_index c_idx c_strides in\n      let out_lin = Shape.ravel_index md_idx out_strides in\n      let a = Array.unsafe_get true_arr (a_offset + a_lin) in\n      let b = Array.unsafe_get false_arr (b_offset + b_lin) in\n      let c = Array.unsafe_get cond_arr (c_offset + c_lin) in\n      Array.unsafe_set out_arr (out_offset + out_lin) (if c then a else b)\n    done\n\nlet where_int8\n    (cond_arr : bool array)\n    (true_arr : int8# 
array)\n    (false_arr : int8# array)\n    (out_arr : int8# array)\n    vcond vtrue vfalse vout\n    start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset vtrue + start_idx in\n  let b_base = View.offset vfalse + start_idx in\n  let c_base = View.offset vcond + start_idx in\n  if\n    View.is_c_contiguous vout && View.is_c_contiguous vtrue\n    && View.is_c_contiguous vfalse && View.is_c_contiguous vcond\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get true_arr (a_base + i0) in\n      let b0 = Array.unsafe_get false_arr (b_base + i0) in\n      let c0 = Array.unsafe_get cond_arr (c_base + i0) in\n      let a1 = Array.unsafe_get true_arr (a_base + i1) in\n      let b1 = Array.unsafe_get false_arr (b_base + i1) in\n      let c1 = Array.unsafe_get cond_arr (c_base + i1) in\n      let a2 = Array.unsafe_get true_arr (a_base + i2) in\n      let b2 = Array.unsafe_get false_arr (b_base + i2) in\n      let c2 = Array.unsafe_get cond_arr (c_base + i2) in\n      let a3 = Array.unsafe_get true_arr (a_base + i3) in\n      let b3 = Array.unsafe_get false_arr (b_base + i3) in\n      let c3 = Array.unsafe_get cond_arr (c_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (if c0 then a0 else b0);\n      Array.unsafe_set out_arr (out_base + i1) (if c1 then a1 else b1);\n      Array.unsafe_set out_arr (out_base + i2) (if c2 then a2 else b2);\n      Array.unsafe_set out_arr (out_base + i3) (if c3 then a3 else b3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get true_arr (a_base + idx) in\n      let b_val = Array.unsafe_get false_arr (b_base + idx) in\n      let c_val = Array.unsafe_get cond_arr (c_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (if c_val then 
a_val else b_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let out_strides = View.strides vout in\n    let a_shape = shape vtrue in\n    let b_shape = shape vfalse in\n    let c_shape = shape vcond in\n    let a_strides = View.strides vtrue in\n    let b_strides = View.strides vfalse in\n    let c_strides = View.strides vcond in\n    let a_offset = View.offset vtrue in\n    let b_offset = View.offset vfalse in\n    let c_offset = View.offset vcond in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    let c_idx = Array.make (Array.length c_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      Shape.broadcast_index_into md_idx c_shape c_idx;\n      let c_lin = Shape.ravel_index c_idx c_strides in\n      let out_lin = Shape.ravel_index md_idx out_strides in\n      let a_val = Array.unsafe_get true_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get false_arr (b_offset + b_lin) in\n      let c_val = Array.unsafe_get cond_arr (c_offset + c_lin) in\n      Array.unsafe_set out_arr (out_offset + out_lin) (if c_val then a_val else b_val)\n    done\n\nlet where_int16\n    (cond_arr : bool array)\n    (true_arr : int16# array)\n    (false_arr : int16# array)\n    (out_arr : int16# array)\n    vcond vtrue vfalse vout\n    start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset vtrue + start_idx in\n  let b_base = View.offset vfalse + start_idx in\n  let c_base = View.offset vcond + start_idx in\n  if\n    View.is_c_contiguous vout && 
View.is_c_contiguous vtrue\n    && View.is_c_contiguous vfalse && View.is_c_contiguous vcond\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get true_arr (a_base + i0) in\n      let b0 = Array.unsafe_get false_arr (b_base + i0) in\n      let c0 = Array.unsafe_get cond_arr (c_base + i0) in\n      let a1 = Array.unsafe_get true_arr (a_base + i1) in\n      let b1 = Array.unsafe_get false_arr (b_base + i1) in\n      let c1 = Array.unsafe_get cond_arr (c_base + i1) in\n      let a2 = Array.unsafe_get true_arr (a_base + i2) in\n      let b2 = Array.unsafe_get false_arr (b_base + i2) in\n      let c2 = Array.unsafe_get cond_arr (c_base + i2) in\n      let a3 = Array.unsafe_get true_arr (a_base + i3) in\n      let b3 = Array.unsafe_get false_arr (b_base + i3) in\n      let c3 = Array.unsafe_get cond_arr (c_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (if c0 then a0 else b0);\n      Array.unsafe_set out_arr (out_base + i1) (if c1 then a1 else b1);\n      Array.unsafe_set out_arr (out_base + i2) (if c2 then a2 else b2);\n      Array.unsafe_set out_arr (out_base + i3) (if c3 then a3 else b3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get true_arr (a_base + idx) in\n      let b_val = Array.unsafe_get false_arr (b_base + idx) in\n      let c_val = Array.unsafe_get cond_arr (c_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (if c_val then a_val else b_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let out_strides = View.strides vout in\n    let a_shape = shape vtrue in\n    let b_shape = shape vfalse in\n    let c_shape = shape vcond in\n    let a_strides = View.strides vtrue in\n    let b_strides = View.strides vfalse in\n    let c_strides = View.strides vcond 
in\n    let a_offset = View.offset vtrue in\n    let b_offset = View.offset vfalse in\n    let c_offset = View.offset vcond in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    let c_idx = Array.make (Array.length c_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      Shape.broadcast_index_into md_idx c_shape c_idx;\n      let c_lin = Shape.ravel_index c_idx c_strides in\n      let out_lin = Shape.ravel_index md_idx out_strides in\n      let a_val = Array.unsafe_get true_arr (a_offset + a_lin) in\n      let b_val = Array.unsafe_get false_arr (b_offset + b_lin) in\n      let c_val = Array.unsafe_get cond_arr (c_offset + c_lin) in\n      Array.unsafe_set out_arr (out_offset + out_lin) (if c_val then a_val else b_val)\n    done\n\nlet where_int32\n    cond_arr true_arr false_arr out_arr\n    vcond vtrue vfalse vout\n    start_idx end_idx =\n  let[@inline] select_i32x4\n      (mask : Int32x4.t)\n      (a : Int32x4.t)\n      (b : Int32x4.t) =\n    Int32x4.bitwise_or\n      (Int32x4.bitwise_and mask a)\n      (Int32x4.bitwise_and (Int32x4.bitwise_not mask) b)\n  in\n  let[@inline] mask4 cond_arr base i =\n    let m0 = if Array.unsafe_get cond_arr (base + i) then -#1l else #0l in\n    let m1 = if Array.unsafe_get cond_arr (base + i + 1) then -#1l else #0l in\n    let m2 = if Array.unsafe_get cond_arr (base + i + 2) then -#1l else #0l in\n    let m3 = if Array.unsafe_get cond_arr (base + i + 3) then -#1l else #0l in\n    Int32x4.set m0 m1 m2 m3\n  in\n  let cond_base = View.offset vcond + start_idx in\n  let out_base = View.offset 
vout + start_idx in\n  let a_base = View.offset vtrue + start_idx in\n  let b_base = View.offset vfalse + start_idx in\n  if\n    View.is_c_contiguous vout &&\n    View.is_c_contiguous vtrue &&\n    View.is_c_contiguous vfalse &&\n    View.is_c_contiguous vcond\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n16 = n - 15 in\n    while !i < n16 do\n      let idx = !i in\n      let a0 = Int32x4.Array.unsafe_get true_arr ~idx:(a_base + idx) in\n      let b0 = Int32x4.Array.unsafe_get false_arr ~idx:(b_base + idx) in\n      let m0 = mask4 cond_arr cond_base idx in\n      let a1 = Int32x4.Array.unsafe_get true_arr ~idx:(a_base + idx + 4) in\n      let b1 = Int32x4.Array.unsafe_get false_arr ~idx:(b_base + idx + 4) in\n      let m1 = mask4 cond_arr cond_base (idx + 4) in\n      let a2 = Int32x4.Array.unsafe_get true_arr ~idx:(a_base + idx + 8) in\n      let b2 = Int32x4.Array.unsafe_get false_arr ~idx:(b_base + idx + 8) in\n      let m2 = mask4 cond_arr cond_base (idx + 8) in\n      let a3 = Int32x4.Array.unsafe_get true_arr ~idx:(a_base + idx + 12) in\n      let b3 = Int32x4.Array.unsafe_get false_arr ~idx:(b_base + idx + 12) in\n      let m3 = mask4 cond_arr cond_base (idx + 12) in\n      Int32x4.Array.unsafe_set out_arr ~idx:(out_base + idx)\n        (select_i32x4 m0 a0 b0);\n      Int32x4.Array.unsafe_set out_arr ~idx:(out_base + idx + 4)\n        (select_i32x4 m1 a1 b1);\n      Int32x4.Array.unsafe_set out_arr ~idx:(out_base + idx + 8)\n        (select_i32x4 m2 a2 b2);\n      Int32x4.Array.unsafe_set out_arr ~idx:(out_base + idx + 12)\n        (select_i32x4 m3 a3 b3);\n      i := idx + 16\n    done;\n    let n4 = n - 3 in\n    while !i < n4 do\n      let idx = !i in\n      let a = Int32x4.Array.unsafe_get true_arr ~idx:(a_base + idx) in\n      let b = Int32x4.Array.unsafe_get false_arr ~idx:(b_base + idx) in\n      let m = mask4 cond_arr cond_base idx in\n      Int32x4.Array.unsafe_set out_arr ~idx:(out_base + idx)\n        (select_i32x4 
m a b);\n      i := idx + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a = Array.unsafe_get true_arr (a_base + idx) in\n      let b = Array.unsafe_get false_arr (b_base + idx) in\n      let c = Array.unsafe_get cond_arr (cond_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (if c then a else b);\n      incr i\n    done\n  )\n  else\n    let out_shape = shape vout in\n    let out_strides = View.strides vout in\n    let a_shape = shape vtrue in\n    let b_shape = shape vfalse in\n    let c_shape = shape vcond in\n    let a_strides = View.strides vtrue in\n    let b_strides = View.strides vfalse in\n    let c_strides = View.strides vcond in\n    let a_offset = View.offset vtrue in\n    let b_offset = View.offset vfalse in\n    let c_offset = View.offset vcond in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    let c_idx = Array.make (Array.length c_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      Shape.broadcast_index_into md_idx c_shape c_idx;\n      let c_lin = Shape.ravel_index c_idx c_strides in\n      let out_lin = Shape.ravel_index md_idx out_strides in\n      let a = Array.unsafe_get true_arr (a_offset + a_lin) in\n      let b = Array.unsafe_get false_arr (b_offset + b_lin) in\n      let c = Array.unsafe_get cond_arr (c_offset + c_lin) in\n      Array.unsafe_set out_arr (out_offset + out_lin) (if c then a else b)\n    done\n\nlet where_int64\n    cond_arr true_arr false_arr out_arr\n    vcond vtrue vfalse vout\n    start_idx end_idx =\n  let[@inline] select_i64x2\n      (mask 
: Int64x2.t)\n      (a : Int64x2.t)\n      (b : Int64x2.t) =\n    Int64x2.bitwise_or\n      (Int64x2.bitwise_and mask a)\n      (Int64x2.bitwise_and (Int64x2.bitwise_not mask) b)\n  in\n  let[@inline] mask2 cond_arr base i =\n    let m0 = if Array.unsafe_get cond_arr (base + i) then -#1L else #0L in\n    let m1 = if Array.unsafe_get cond_arr (base + i + 1) then -#1L else #0L in\n    Int64x2.set m0 m1\n  in\n  let cond_base = View.offset vcond + start_idx in\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset vtrue + start_idx in\n  let b_base = View.offset vfalse + start_idx in\n  if\n    View.is_c_contiguous vout &&\n    View.is_c_contiguous vtrue &&\n    View.is_c_contiguous vfalse &&\n    View.is_c_contiguous vcond\n  then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n8 = n - 7 in\n    while !i < n8 do\n      let idx = !i in\n      let a0 = Int64x2.Array.unsafe_get true_arr ~idx:(a_base + idx) in\n      let b0 = Int64x2.Array.unsafe_get false_arr ~idx:(b_base + idx) in\n      let m0 = mask2 cond_arr cond_base idx in\n      let a1 = Int64x2.Array.unsafe_get true_arr ~idx:(a_base + idx + 2) in\n      let b1 = Int64x2.Array.unsafe_get false_arr ~idx:(b_base + idx + 2) in\n      let m1 = mask2 cond_arr cond_base (idx + 2) in\n      let a2 = Int64x2.Array.unsafe_get true_arr ~idx:(a_base + idx + 4) in\n      let b2 = Int64x2.Array.unsafe_get false_arr ~idx:(b_base + idx + 4) in\n      let m2 = mask2 cond_arr cond_base (idx + 4) in\n      let a3 = Int64x2.Array.unsafe_get true_arr ~idx:(a_base + idx + 6) in\n      let b3 = Int64x2.Array.unsafe_get false_arr ~idx:(b_base + idx + 6) in\n      let m3 = mask2 cond_arr cond_base (idx + 6) in\n      Int64x2.Array.unsafe_set out_arr ~idx:(out_base + idx)\n        (select_i64x2 m0 a0 b0);\n      Int64x2.Array.unsafe_set out_arr ~idx:(out_base + idx + 2)\n        (select_i64x2 m1 a1 b1);\n      Int64x2.Array.unsafe_set out_arr ~idx:(out_base + idx + 4)\n        (select_i64x2 m2 
a2 b2);\n      Int64x2.Array.unsafe_set out_arr ~idx:(out_base + idx + 6)\n        (select_i64x2 m3 a3 b3);\n      i := idx + 8\n    done;\n    let n2 = n - 1 in\n    while !i < n2 do\n      let idx = !i in\n      let a = Int64x2.Array.unsafe_get true_arr ~idx:(a_base + idx) in\n      let b = Int64x2.Array.unsafe_get false_arr ~idx:(b_base + idx) in\n      let m = mask2 cond_arr cond_base idx in\n      Int64x2.Array.unsafe_set out_arr ~idx:(out_base + idx)\n        (select_i64x2 m a b);\n      i := idx + 2\n    done;\n    while !i < n do\n      let idx = !i in\n      let a = Array.unsafe_get true_arr (a_base + idx) in\n      let b = Array.unsafe_get false_arr (b_base + idx) in\n      let c = Array.unsafe_get cond_arr (cond_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (if c then a else b);\n      incr i\n    done\n  )\n  else\n    let out_shape = shape vout in\n    let out_strides = View.strides vout in\n    let a_shape = shape vtrue in\n    let b_shape = shape vfalse in\n    let c_shape = shape vcond in\n    let a_strides = View.strides vtrue in\n    let b_strides = View.strides vfalse in\n    let c_strides = View.strides vcond in\n    let a_offset = View.offset vtrue in\n    let b_offset = View.offset vfalse in\n    let c_offset = View.offset vcond in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    let b_idx = Array.make (Array.length b_shape) 0 in\n    let c_idx = Array.make (Array.length c_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Shape.broadcast_index_into md_idx b_shape b_idx;\n      let b_lin = Shape.ravel_index b_idx b_strides in\n      Shape.broadcast_index_into md_idx c_shape c_idx;\n      let c_lin = Shape.ravel_index c_idx c_strides in\n      
let out_lin = Shape.ravel_index md_idx out_strides in\n      let a = Array.unsafe_get true_arr (a_offset + a_lin) in\n      let b = Array.unsafe_get false_arr (b_offset + b_lin) in\n      let c = Array.unsafe_get cond_arr (c_offset + c_lin) in\n      Array.unsafe_set out_arr (out_offset + out_lin) (if c then a else b)\n    done\n"
  },
  {
    "path": "packages/nx-oxcaml/lib/unary_ops/op_abs.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Import\n\nlet abs_float64 a_arr out_arr va vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n2 = n - 1 in\n    while !i < n2 do\n      let idx = !i in\n      let a_vec = Float64x2.Array.unsafe_get a_arr ~idx:(a_base + idx) in\n      let out_vec = Float64x2.abs a_vec in\n      Float64x2.Array.unsafe_set out_arr ~idx:(out_base + idx) out_vec;\n      i := idx + 2\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Float_u.abs a_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Float_u.abs a_val)\n    done\n\nlet abs_float32 a_arr out_arr va vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    
let n4 = n - 3 in\n    while !i < n4 do\n      let idx = !i in\n      let a_vec = Float32x4.Array.unsafe_get a_arr ~idx:(a_base + idx) in\n      let out_vec = Float32x4.abs a_vec in\n      Float32x4.Array.unsafe_set out_arr ~idx:(out_base + idx) out_vec;\n      i := idx + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Float32_u.abs a_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Float32_u.abs a_val)\n    done\n\nlet abs_int8 a_arr out_arr va vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Int8_u.abs a0);\n      Array.unsafe_set out_arr (out_base + i1) (Int8_u.abs a1);\n      Array.unsafe_set out_arr (out_base + i2) (Int8_u.abs a2);\n      
Array.unsafe_set out_arr (out_base + i3) (Int8_u.abs a3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Int8_u.abs a_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Int8_u.abs a_val)\n    done\n\nlet abs_int16 a_arr out_arr va vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Int16_u.abs a0);\n      Array.unsafe_set out_arr (out_base + i1) (Int16_u.abs a1);\n      Array.unsafe_set out_arr (out_base + i2) (Int16_u.abs a2);\n      Array.unsafe_set out_arr (out_base + i3) (Int16_u.abs a3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      
Array.unsafe_set out_arr (out_base + idx) (Int16_u.abs a_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Int16_u.abs a_val)\n    done\n\nlet abs_int32 a_arr out_arr va vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let idx = !i in\n      let a_vec = Int32x4.Array.unsafe_get a_arr ~idx:(a_base + idx) in\n      let out_vec = Int32x4.abs a_vec in\n      Int32x4.Array.unsafe_set out_arr ~idx:(out_base + idx) out_vec;\n      i := idx + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Int32_u.abs a_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = 
Shape.ravel_index a_idx a_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Int32_u.abs a_val)\n    done\n\nlet abs_int64 a_arr out_arr va vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Int64_u.abs a0);\n      Array.unsafe_set out_arr (out_base + i1) (Int64_u.abs a1);\n      Array.unsafe_set out_arr (out_base + i2) (Int64_u.abs a2);\n      Array.unsafe_set out_arr (out_base + i3) (Int64_u.abs a3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Int64_u.abs a_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Int64_u.abs a_val)\n    done\n"
  },
  {
    "path": "packages/nx-oxcaml/lib/unary_ops/op_acos.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Import\n\nlet acos_float64 a_arr out_arr va vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Float_u.acos a0);\n      Array.unsafe_set out_arr (out_base + i1) (Float_u.acos a1);\n      Array.unsafe_set out_arr (out_base + i2) (Float_u.acos a2);\n      Array.unsafe_set out_arr (out_base + i3) (Float_u.acos a3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Float_u.acos a_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset 
+ a_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Float_u.acos a_val)\n    done\n\nlet acos_float32 a_arr out_arr va vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Float32_u.acos a0);\n      Array.unsafe_set out_arr (out_base + i1) (Float32_u.acos a1);\n      Array.unsafe_set out_arr (out_base + i2) (Float32_u.acos a2);\n      Array.unsafe_set out_arr (out_base + i3) (Float32_u.acos a3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Float32_u.acos a_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Float32_u.acos a_val)\n    done\n"
  },
  {
    "path": "packages/nx-oxcaml/lib/unary_ops/op_asin.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Import\n\nlet asin_float64 a_arr out_arr va vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Float_u.asin a0);\n      Array.unsafe_set out_arr (out_base + i1) (Float_u.asin a1);\n      Array.unsafe_set out_arr (out_base + i2) (Float_u.asin a2);\n      Array.unsafe_set out_arr (out_base + i3) (Float_u.asin a3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Float_u.asin a_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset 
+ a_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Float_u.asin a_val)\n    done\n\nlet asin_float32 a_arr out_arr va vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Float32_u.asin a0);\n      Array.unsafe_set out_arr (out_base + i1) (Float32_u.asin a1);\n      Array.unsafe_set out_arr (out_base + i2) (Float32_u.asin a2);\n      Array.unsafe_set out_arr (out_base + i3) (Float32_u.asin a3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Float32_u.asin a_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Float32_u.asin a_val)\n    done\n"
  },
  {
    "path": "packages/nx-oxcaml/lib/unary_ops/op_atan.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Import\n\nlet atan_float64 a_arr out_arr va vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Float_u.atan a0);\n      Array.unsafe_set out_arr (out_base + i1) (Float_u.atan a1);\n      Array.unsafe_set out_arr (out_base + i2) (Float_u.atan a2);\n      Array.unsafe_set out_arr (out_base + i3) (Float_u.atan a3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Float_u.atan a_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset 
+ a_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Float_u.atan a_val)\n    done\n\nlet atan_float32 a_arr out_arr va vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Float32_u.atan a0);\n      Array.unsafe_set out_arr (out_base + i1) (Float32_u.atan a1);\n      Array.unsafe_set out_arr (out_base + i2) (Float32_u.atan a2);\n      Array.unsafe_set out_arr (out_base + i3) (Float32_u.atan a3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Float32_u.atan a_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Float32_u.atan a_val)\n    done\n"
  },
  {
    "path": "packages/nx-oxcaml/lib/unary_ops/op_ceil.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Import\n\nlet ceil_float64 a_arr out_arr va vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Float_u.ceil a0);\n      Array.unsafe_set out_arr (out_base + i1) (Float_u.ceil a1);\n      Array.unsafe_set out_arr (out_base + i2) (Float_u.ceil a2);\n      Array.unsafe_set out_arr (out_base + i3) (Float_u.ceil a3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Float_u.ceil a_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset 
+ a_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Float_u.ceil a_val)\n    done\n\nlet ceil_float32 a_arr out_arr va vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Float32_u.ceil a0);\n      Array.unsafe_set out_arr (out_base + i1) (Float32_u.ceil a1);\n      Array.unsafe_set out_arr (out_base + i2) (Float32_u.ceil a2);\n      Array.unsafe_set out_arr (out_base + i3) (Float32_u.ceil a3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Float32_u.ceil a_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Float32_u.ceil a_val)\n    done\n\nlet ceil_int8 (a_arr : int8# array) (out_arr : int8# array) va vout start_idx end_idx 
=\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      Array.unsafe_set out_arr (out_base + i0) (Array.unsafe_get a_arr (a_base + i0));\n      Array.unsafe_set out_arr (out_base + i1) (Array.unsafe_get a_arr (a_base + i1));\n      Array.unsafe_set out_arr (out_base + i2) (Array.unsafe_get a_arr (a_base + i2));\n      Array.unsafe_set out_arr (out_base + i3) (Array.unsafe_get a_arr (a_base + i3));\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      Array.unsafe_set out_arr (out_base + idx) (Array.unsafe_get a_arr (a_base + idx));\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Array.unsafe_set out_arr (out_offset + k) (Array.unsafe_get a_arr (a_offset + a_lin))\n    done\n\nlet ceil_int16 (a_arr : int16# array) (out_arr : int16# array) va vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      
Array.unsafe_set out_arr (out_base + i0) (Array.unsafe_get a_arr (a_base + i0));\n      Array.unsafe_set out_arr (out_base + i1) (Array.unsafe_get a_arr (a_base + i1));\n      Array.unsafe_set out_arr (out_base + i2) (Array.unsafe_get a_arr (a_base + i2));\n      Array.unsafe_set out_arr (out_base + i3) (Array.unsafe_get a_arr (a_base + i3));\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      Array.unsafe_set out_arr (out_base + idx) (Array.unsafe_get a_arr (a_base + idx));\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Array.unsafe_set out_arr (out_offset + k) (Array.unsafe_get a_arr (a_offset + a_lin))\n    done\n\nlet ceil_int32 (a_arr : int32# array) (out_arr : int32# array) va vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      Array.unsafe_set out_arr (out_base + i0) (Array.unsafe_get a_arr (a_base + i0));\n      Array.unsafe_set out_arr (out_base + i1) (Array.unsafe_get a_arr (a_base + i1));\n      Array.unsafe_set out_arr (out_base + i2) (Array.unsafe_get a_arr (a_base + i2));\n      Array.unsafe_set out_arr (out_base + i3) (Array.unsafe_get a_arr (a_base + i3));\n      i := i0 + 4\n    done;\n    
while !i < n do\n      let idx = !i in\n      Array.unsafe_set out_arr (out_base + idx) (Array.unsafe_get a_arr (a_base + idx));\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Array.unsafe_set out_arr (out_offset + k) (Array.unsafe_get a_arr (a_offset + a_lin))\n    done\n\nlet ceil_int64 (a_arr : int64# array) (out_arr : int64# array) va vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      Array.unsafe_set out_arr (out_base + i0) (Array.unsafe_get a_arr (a_base + i0));\n      Array.unsafe_set out_arr (out_base + i1) (Array.unsafe_get a_arr (a_base + i1));\n      Array.unsafe_set out_arr (out_base + i2) (Array.unsafe_get a_arr (a_base + i2));\n      Array.unsafe_set out_arr (out_base + i3) (Array.unsafe_get a_arr (a_base + i3));\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      Array.unsafe_set out_arr (out_base + idx) (Array.unsafe_get a_arr (a_base + idx));\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make 
(Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Array.unsafe_set out_arr (out_offset + k) (Array.unsafe_get a_arr (a_offset + a_lin))\n    done\n\nlet ceil_bool (a_arr : bool array) (out_arr : bool array) va vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      Array.unsafe_set out_arr (out_base + i0) (Array.unsafe_get a_arr (a_base + i0));\n      Array.unsafe_set out_arr (out_base + i1) (Array.unsafe_get a_arr (a_base + i1));\n      Array.unsafe_set out_arr (out_base + i2) (Array.unsafe_get a_arr (a_base + i2));\n      Array.unsafe_set out_arr (out_base + i3) (Array.unsafe_get a_arr (a_base + i3));\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      Array.unsafe_set out_arr (out_base + idx) (Array.unsafe_get a_arr (a_base + idx));\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Array.unsafe_set out_arr (out_offset + k) (Array.unsafe_get a_arr (a_offset + a_lin))\n   
 done\n"
  },
  {
    "path": "packages/nx-oxcaml/lib/unary_ops/op_cos.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Import\n\nlet cos_float64 a_arr out_arr va vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Float_u.cos a0);\n      Array.unsafe_set out_arr (out_base + i1) (Float_u.cos a1);\n      Array.unsafe_set out_arr (out_base + i2) (Float_u.cos a2);\n      Array.unsafe_set out_arr (out_base + i3) (Float_u.cos a3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Float_u.cos a_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + 
a_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Float_u.cos a_val)\n    done\n\nlet cos_float32 a_arr out_arr va vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Float32_u.cos a0);\n      Array.unsafe_set out_arr (out_base + i1) (Float32_u.cos a1);\n      Array.unsafe_set out_arr (out_base + i2) (Float32_u.cos a2);\n      Array.unsafe_set out_arr (out_base + i3) (Float32_u.cos a3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Float32_u.cos a_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Float32_u.cos a_val)\n    done\n"
  },
  {
    "path": "packages/nx-oxcaml/lib/unary_ops/op_cosh.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Import\n\nlet cosh_float64 a_arr out_arr va vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Float_u.cosh a0);\n      Array.unsafe_set out_arr (out_base + i1) (Float_u.cosh a1);\n      Array.unsafe_set out_arr (out_base + i2) (Float_u.cosh a2);\n      Array.unsafe_set out_arr (out_base + i3) (Float_u.cosh a3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Float_u.cosh a_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset 
+ a_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Float_u.cosh a_val)\n    done\n\nlet cosh_float32 a_arr out_arr va vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Float32_u.cosh a0);\n      Array.unsafe_set out_arr (out_base + i1) (Float32_u.cosh a1);\n      Array.unsafe_set out_arr (out_base + i2) (Float32_u.cosh a2);\n      Array.unsafe_set out_arr (out_base + i3) (Float32_u.cosh a3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Float32_u.cosh a_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Float32_u.cosh a_val)\n    done\n"
  },
  {
    "path": "packages/nx-oxcaml/lib/unary_ops/op_erf.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Import\n\nlet erf_float64 a_arr out_arr va vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Float_u.erf a0);\n      Array.unsafe_set out_arr (out_base + i1) (Float_u.erf a1);\n      Array.unsafe_set out_arr (out_base + i2) (Float_u.erf a2);\n      Array.unsafe_set out_arr (out_base + i3) (Float_u.erf a3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Float_u.erf a_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + 
a_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Float_u.erf a_val)\n    done\n\nlet erf_float32 a_arr out_arr va vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Float32_u.erf a0);\n      Array.unsafe_set out_arr (out_base + i1) (Float32_u.erf a1);\n      Array.unsafe_set out_arr (out_base + i2) (Float32_u.erf a2);\n      Array.unsafe_set out_arr (out_base + i3) (Float32_u.erf a3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Float32_u.erf a_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Float32_u.erf a_val)\n    done\n"
  },
  {
    "path": "packages/nx-oxcaml/lib/unary_ops/op_exp.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Import\n\nlet exp_float64 a_arr out_arr va vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Float_u.exp a0);\n      Array.unsafe_set out_arr (out_base + i1) (Float_u.exp a1);\n      Array.unsafe_set out_arr (out_base + i2) (Float_u.exp a2);\n      Array.unsafe_set out_arr (out_base + i3) (Float_u.exp a3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Float_u.exp a_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + 
a_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Float_u.exp a_val)\n    done\n\nlet exp_float32 a_arr out_arr va vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Float32_u.exp a0);\n      Array.unsafe_set out_arr (out_base + i1) (Float32_u.exp a1);\n      Array.unsafe_set out_arr (out_base + i2) (Float32_u.exp a2);\n      Array.unsafe_set out_arr (out_base + i3) (Float32_u.exp a3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Float32_u.exp a_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Float32_u.exp a_val)\n    done\n"
  },
  {
    "path": "packages/nx-oxcaml/lib/unary_ops/op_floor.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Import\n\nlet floor_float64 a_arr out_arr va vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Float_u.floor a0);\n      Array.unsafe_set out_arr (out_base + i1) (Float_u.floor a1);\n      Array.unsafe_set out_arr (out_base + i2) (Float_u.floor a2);\n      Array.unsafe_set out_arr (out_base + i3) (Float_u.floor a3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Float_u.floor a_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      let a_val = Array.unsafe_get a_arr 
(a_offset + a_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Float_u.floor a_val)\n    done\n\nlet floor_float32 a_arr out_arr va vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Float32_u.floor a0);\n      Array.unsafe_set out_arr (out_base + i1) (Float32_u.floor a1);\n      Array.unsafe_set out_arr (out_base + i2) (Float32_u.floor a2);\n      Array.unsafe_set out_arr (out_base + i3) (Float32_u.floor a3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Float32_u.floor a_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Float32_u.floor a_val)\n    done\n\nlet floor_int8 (a_arr : int8# array) (out_arr : int8# array) va 
vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      Array.unsafe_set out_arr (out_base + i0) (Array.unsafe_get a_arr (a_base + i0));\n      Array.unsafe_set out_arr (out_base + i1) (Array.unsafe_get a_arr (a_base + i1));\n      Array.unsafe_set out_arr (out_base + i2) (Array.unsafe_get a_arr (a_base + i2));\n      Array.unsafe_set out_arr (out_base + i3) (Array.unsafe_get a_arr (a_base + i3));\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      Array.unsafe_set out_arr (out_base + idx) (Array.unsafe_get a_arr (a_base + idx));\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Array.unsafe_set out_arr (out_offset + k) (Array.unsafe_get a_arr (a_offset + a_lin))\n    done\n\nlet floor_int16 (a_arr : int16# array) (out_arr : int16# array) va vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = 
i0 + 3 in\n      Array.unsafe_set out_arr (out_base + i0) (Array.unsafe_get a_arr (a_base + i0));\n      Array.unsafe_set out_arr (out_base + i1) (Array.unsafe_get a_arr (a_base + i1));\n      Array.unsafe_set out_arr (out_base + i2) (Array.unsafe_get a_arr (a_base + i2));\n      Array.unsafe_set out_arr (out_base + i3) (Array.unsafe_get a_arr (a_base + i3));\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      Array.unsafe_set out_arr (out_base + idx) (Array.unsafe_get a_arr (a_base + idx));\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Array.unsafe_set out_arr (out_offset + k) (Array.unsafe_get a_arr (a_offset + a_lin))\n    done\n\nlet floor_int32 (a_arr : int32# array) (out_arr : int32# array) va vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      Array.unsafe_set out_arr (out_base + i0) (Array.unsafe_get a_arr (a_base + i0));\n      Array.unsafe_set out_arr (out_base + i1) (Array.unsafe_get a_arr (a_base + i1));\n      Array.unsafe_set out_arr (out_base + i2) (Array.unsafe_get a_arr (a_base + i2));\n      Array.unsafe_set out_arr (out_base + i3) (Array.unsafe_get a_arr (a_base + i3));\n      i := i0 + 
4\n    done;\n    while !i < n do\n      let idx = !i in\n      Array.unsafe_set out_arr (out_base + idx) (Array.unsafe_get a_arr (a_base + idx));\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Array.unsafe_set out_arr (out_offset + k) (Array.unsafe_get a_arr (a_offset + a_lin))\n    done\n\nlet floor_int64 (a_arr : int64# array) (out_arr : int64# array) va vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      Array.unsafe_set out_arr (out_base + i0) (Array.unsafe_get a_arr (a_base + i0));\n      Array.unsafe_set out_arr (out_base + i1) (Array.unsafe_get a_arr (a_base + i1));\n      Array.unsafe_set out_arr (out_base + i2) (Array.unsafe_get a_arr (a_base + i2));\n      Array.unsafe_set out_arr (out_base + i3) (Array.unsafe_get a_arr (a_base + i3));\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      Array.unsafe_set out_arr (out_base + idx) (Array.unsafe_get a_arr (a_base + idx));\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_offset = View.offset vout in\n    let md_idx 
= Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Array.unsafe_set out_arr (out_offset + k) (Array.unsafe_get a_arr (a_offset + a_lin))\n    done\n\nlet floor_bool (a_arr : bool array) (out_arr : bool array) va vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      Array.unsafe_set out_arr (out_base + i0) (Array.unsafe_get a_arr (a_base + i0));\n      Array.unsafe_set out_arr (out_base + i1) (Array.unsafe_get a_arr (a_base + i1));\n      Array.unsafe_set out_arr (out_base + i2) (Array.unsafe_get a_arr (a_base + i2));\n      Array.unsafe_set out_arr (out_base + i3) (Array.unsafe_get a_arr (a_base + i3));\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      Array.unsafe_set out_arr (out_base + idx) (Array.unsafe_get a_arr (a_base + idx));\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Array.unsafe_set out_arr (out_offset + k) (Array.unsafe_get a_arr (a_offset 
+ a_lin))\n    done\n"
  },
  {
    "path": "packages/nx-oxcaml/lib/unary_ops/op_log.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Import\n\nlet log_float64 a_arr out_arr va vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Float_u.log a0);\n      Array.unsafe_set out_arr (out_base + i1) (Float_u.log a1);\n      Array.unsafe_set out_arr (out_base + i2) (Float_u.log a2);\n      Array.unsafe_set out_arr (out_base + i3) (Float_u.log a3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Float_u.log a_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + 
a_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Float_u.log a_val)\n    done\n\nlet log_float32 a_arr out_arr va vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Float32_u.log a0);\n      Array.unsafe_set out_arr (out_base + i1) (Float32_u.log a1);\n      Array.unsafe_set out_arr (out_base + i2) (Float32_u.log a2);\n      Array.unsafe_set out_arr (out_base + i3) (Float32_u.log a3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Float32_u.log a_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Float32_u.log a_val)\n    done\n"
  },
  {
    "path": "packages/nx-oxcaml/lib/unary_ops/op_neg.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Import\n\nlet neg_float64 a_arr out_arr va vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n2 = n - 1 in\n    while !i < n2 do\n      let idx = !i in\n      let a_vec = Float64x2.Array.unsafe_get a_arr ~idx:(a_base + idx) in\n      let out_vec = Float64x2.neg a_vec in\n      Float64x2.Array.unsafe_set out_arr ~idx:(out_base + idx) out_vec;\n      i := idx + 2\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Float_u.neg a_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Float_u.neg a_val)\n    done\n\nlet neg_float32 a_arr out_arr va vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    
let n4 = n - 3 in\n    while !i < n4 do\n      let idx = !i in\n      let a_vec = Float32x4.Array.unsafe_get a_arr ~idx:(a_base + idx) in\n      let out_vec = Float32x4.neg a_vec in\n      Float32x4.Array.unsafe_set out_arr ~idx:(out_base + idx) out_vec;\n      i := idx + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Float32_u.neg a_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Float32_u.neg a_val)\n    done\n\nlet neg_int8 a_arr out_arr va vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Int8_u.neg a0);\n      Array.unsafe_set out_arr (out_base + i1) (Int8_u.neg a1);\n      Array.unsafe_set out_arr (out_base + i2) (Int8_u.neg a2);\n      
Array.unsafe_set out_arr (out_base + i3) (Int8_u.neg a3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Int8_u.neg a_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Int8_u.neg a_val)\n    done\n\nlet neg_int16 a_arr out_arr va vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Int16_u.neg a0);\n      Array.unsafe_set out_arr (out_base + i1) (Int16_u.neg a1);\n      Array.unsafe_set out_arr (out_base + i2) (Int16_u.neg a2);\n      Array.unsafe_set out_arr (out_base + i3) (Int16_u.neg a3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      
Array.unsafe_set out_arr (out_base + idx) (Int16_u.neg a_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Int16_u.neg a_val)\n    done\n\nlet neg_int32 a_arr out_arr va vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let idx = !i in\n      let a_vec = Int32x4.Array.unsafe_get a_arr ~idx:(a_base + idx) in\n      let out_vec = Int32x4.neg a_vec in\n      Int32x4.Array.unsafe_set out_arr ~idx:(out_base + idx) out_vec;\n      i := idx + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Int32_u.neg a_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = 
Shape.ravel_index a_idx a_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Int32_u.neg a_val)\n    done\n\nlet neg_int64 a_arr out_arr va vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n2 = n - 1 in\n    while !i < n2 do\n      let idx = !i in\n      let a_vec = Int64x2.Array.unsafe_get a_arr ~idx:(a_base + idx) in\n      let out_vec = Int64x2.neg a_vec in\n      Int64x2.Array.unsafe_set out_arr ~idx:(out_base + idx) out_vec;\n      i := idx + 2\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Int64_u.neg a_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Int64_u.neg a_val)\n    done\n"
  },
  {
    "path": "packages/nx-oxcaml/lib/unary_ops/op_recip.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Import\n\nlet recip_float64 a_arr out_arr va vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n2 = n - 1 in\n    let ones = Float64x2.const1 #1.0 in\n    while !i < n2 do\n      let idx = !i in\n      let a_vec = Float64x2.Array.unsafe_get a_arr ~idx:(a_base + idx) in\n      let out_vec = Float64x2.div ones a_vec in\n      Float64x2.Array.unsafe_set out_arr ~idx:(out_base + idx) out_vec;\n      i := idx + 2\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Float_u.div #1.0 a_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Float_u.div #1.0 a_val)\n    done\n\nlet recip_float32 a_arr out_arr va vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    
let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    let ones = Float32x4.const1 #1.0s in\n    while !i < n4 do\n      let idx = !i in\n      let a_vec = Float32x4.Array.unsafe_get a_arr ~idx:(a_base + idx) in\n      let out_vec = Float32x4.div ones a_vec in\n      Float32x4.Array.unsafe_set out_arr ~idx:(out_base + idx) out_vec;\n      i := idx + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Float32_u.div #1.0s a_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Float32_u.div #1.0s a_val)\n    done\n\nlet recip_int8 a_arr out_arr va vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Int8_u.div #1s a0);\n      
Array.unsafe_set out_arr (out_base + i1) (Int8_u.div #1s a1);\n      Array.unsafe_set out_arr (out_base + i2) (Int8_u.div #1s a2);\n      Array.unsafe_set out_arr (out_base + i3) (Int8_u.div #1s a3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Int8_u.div #1s a_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Int8_u.div #1s a_val)\n    done\n\nlet recip_int16 a_arr out_arr va vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Int16_u.div #1S a0);\n      Array.unsafe_set out_arr (out_base + i1) (Int16_u.div #1S a1);\n      Array.unsafe_set out_arr (out_base + i2) (Int16_u.div #1S a2);\n      Array.unsafe_set out_arr (out_base + i3) 
(Int16_u.div #1S a3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Int16_u.div #1S a_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Int16_u.div #1S a_val)\n    done\n\nlet recip_int32 a_arr out_arr va vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Int32_u.div #1l a0);\n      Array.unsafe_set out_arr (out_base + i1) (Int32_u.div #1l a1);\n      Array.unsafe_set out_arr (out_base + i2) (Int32_u.div #1l a2);\n      Array.unsafe_set out_arr (out_base + i3) (Int32_u.div #1l a3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      
Array.unsafe_set out_arr (out_base + idx) (Int32_u.div #1l a_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Int32_u.div #1l a_val)\n    done\n\nlet recip_int64 a_arr out_arr va vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Int64_u.div #1L a0);\n      Array.unsafe_set out_arr (out_base + i1) (Int64_u.div #1L a1);\n      Array.unsafe_set out_arr (out_base + i2) (Int64_u.div #1L a2);\n      Array.unsafe_set out_arr (out_base + i3) (Int64_u.div #1L a3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Int64_u.div #1L a_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let 
a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Int64_u.div #1L a_val)\n    done\n"
  },
  {
    "path": "packages/nx-oxcaml/lib/unary_ops/op_round.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Import\n\nlet round_float64 a_arr out_arr va vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Float_u.round a0);\n      Array.unsafe_set out_arr (out_base + i1) (Float_u.round a1);\n      Array.unsafe_set out_arr (out_base + i2) (Float_u.round a2);\n      Array.unsafe_set out_arr (out_base + i3) (Float_u.round a3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Float_u.round a_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      let a_val = Array.unsafe_get a_arr 
(a_offset + a_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Float_u.round a_val)\n    done\n\nlet round_float32 a_arr out_arr va vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Float32_u.round a0);\n      Array.unsafe_set out_arr (out_base + i1) (Float32_u.round a1);\n      Array.unsafe_set out_arr (out_base + i2) (Float32_u.round a2);\n      Array.unsafe_set out_arr (out_base + i3) (Float32_u.round a3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Float32_u.round a_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Float32_u.round a_val)\n    done\n\nlet round_int8 (a_arr : int8# array) (out_arr : int8# array) va 
vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      Array.unsafe_set out_arr (out_base + i0) (Array.unsafe_get a_arr (a_base + i0));\n      Array.unsafe_set out_arr (out_base + i1) (Array.unsafe_get a_arr (a_base + i1));\n      Array.unsafe_set out_arr (out_base + i2) (Array.unsafe_get a_arr (a_base + i2));\n      Array.unsafe_set out_arr (out_base + i3) (Array.unsafe_get a_arr (a_base + i3));\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      Array.unsafe_set out_arr (out_base + idx) (Array.unsafe_get a_arr (a_base + idx));\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Array.unsafe_set out_arr (out_offset + k) (Array.unsafe_get a_arr (a_offset + a_lin))\n    done\n\nlet round_int16 (a_arr : int16# array) (out_arr : int16# array) va vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = 
i0 + 3 in\n      Array.unsafe_set out_arr (out_base + i0) (Array.unsafe_get a_arr (a_base + i0));\n      Array.unsafe_set out_arr (out_base + i1) (Array.unsafe_get a_arr (a_base + i1));\n      Array.unsafe_set out_arr (out_base + i2) (Array.unsafe_get a_arr (a_base + i2));\n      Array.unsafe_set out_arr (out_base + i3) (Array.unsafe_get a_arr (a_base + i3));\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      Array.unsafe_set out_arr (out_base + idx) (Array.unsafe_get a_arr (a_base + idx));\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Array.unsafe_set out_arr (out_offset + k) (Array.unsafe_get a_arr (a_offset + a_lin))\n    done\n\nlet round_int32 (a_arr : int32# array) (out_arr : int32# array) va vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      Array.unsafe_set out_arr (out_base + i0) (Array.unsafe_get a_arr (a_base + i0));\n      Array.unsafe_set out_arr (out_base + i1) (Array.unsafe_get a_arr (a_base + i1));\n      Array.unsafe_set out_arr (out_base + i2) (Array.unsafe_get a_arr (a_base + i2));\n      Array.unsafe_set out_arr (out_base + i3) (Array.unsafe_get a_arr (a_base + i3));\n      i := i0 + 
4\n    done;\n    while !i < n do\n      let idx = !i in\n      Array.unsafe_set out_arr (out_base + idx) (Array.unsafe_get a_arr (a_base + idx));\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Array.unsafe_set out_arr (out_offset + k) (Array.unsafe_get a_arr (a_offset + a_lin))\n    done\n\nlet round_int64 (a_arr : int64# array) (out_arr : int64# array) va vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      Array.unsafe_set out_arr (out_base + i0) (Array.unsafe_get a_arr (a_base + i0));\n      Array.unsafe_set out_arr (out_base + i1) (Array.unsafe_get a_arr (a_base + i1));\n      Array.unsafe_set out_arr (out_base + i2) (Array.unsafe_get a_arr (a_base + i2));\n      Array.unsafe_set out_arr (out_base + i3) (Array.unsafe_get a_arr (a_base + i3));\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      Array.unsafe_set out_arr (out_base + idx) (Array.unsafe_get a_arr (a_base + idx));\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_offset = View.offset vout in\n    let md_idx 
= Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Array.unsafe_set out_arr (out_offset + k) (Array.unsafe_get a_arr (a_offset + a_lin))\n    done\n\nlet round_bool (a_arr : bool array) (out_arr : bool array) va vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      Array.unsafe_set out_arr (out_base + i0) (Array.unsafe_get a_arr (a_base + i0));\n      Array.unsafe_set out_arr (out_base + i1) (Array.unsafe_get a_arr (a_base + i1));\n      Array.unsafe_set out_arr (out_base + i2) (Array.unsafe_get a_arr (a_base + i2));\n      Array.unsafe_set out_arr (out_base + i3) (Array.unsafe_get a_arr (a_base + i3));\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      Array.unsafe_set out_arr (out_base + idx) (Array.unsafe_get a_arr (a_base + idx));\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Array.unsafe_set out_arr (out_offset + k) (Array.unsafe_get a_arr (a_offset 
+ a_lin))\n    done\n"
  },
  {
    "path": "packages/nx-oxcaml/lib/unary_ops/op_sign.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Import\n\nlet[@inline] sign_float64_scalar x =\n  if Float_u.is_nan x then x\n  else\n    let c = Float_u.compare x #0.0 in\n    if c > 0 then #1.0 else if c < 0 then -#1.0 else #0.0\n\nlet[@inline] sign_float32_scalar x =\n  if Float32_u.is_nan x then x\n  else\n    let c = Float32_u.compare x #0.0s in\n    if c > 0 then #1.0s else if c < 0 then -#1.0s else #0.0s\n\nlet[@inline] sign_int8_scalar x =\n  let c = Int8_u.compare x #0s in\n  if c > 0 then #1s else if c < 0 then -#1s else #0s\n\nlet[@inline] sign_int16_scalar x =\n  let c = Int16_u.compare x #0S in\n  if c > 0 then #1S else if c < 0 then -#1S else #0S\n\nlet[@inline] sign_int32_scalar x =\n  let c = Int32_u.compare x #0l in\n  if c > 0 then #1l else if c < 0 then -#1l else #0l\n\nlet[@inline] sign_int64_scalar x =\n  let c = Int64_u.compare x #0L in\n  if c > 0 then #1L else if c < 0 then -#1L else #0L\n\nlet sign_float64 a_arr out_arr va vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (sign_float64_scalar a0);\n      Array.unsafe_set out_arr (out_base + i1) (sign_float64_scalar a1);\n      Array.unsafe_set out_arr (out_base + 
i2) (sign_float64_scalar a2);\n      Array.unsafe_set out_arr (out_base + i3) (sign_float64_scalar a3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (sign_float64_scalar a_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (sign_float64_scalar a_val)\n    done\n\nlet sign_float32 a_arr out_arr va vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (sign_float32_scalar a0);\n      Array.unsafe_set out_arr (out_base + i1) (sign_float32_scalar a1);\n      Array.unsafe_set out_arr (out_base + i2) (sign_float32_scalar a2);\n      Array.unsafe_set out_arr (out_base + i3) (sign_float32_scalar a3);\n      i := i0 + 4\n    done;\n    while !i < 
n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (sign_float32_scalar a_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (sign_float32_scalar a_val)\n    done\n\nlet sign_int8 a_arr out_arr va vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (sign_int8_scalar a0);\n      Array.unsafe_set out_arr (out_base + i1) (sign_int8_scalar a1);\n      Array.unsafe_set out_arr (out_base + i2) (sign_int8_scalar a2);\n      Array.unsafe_set out_arr (out_base + i3) (sign_int8_scalar a3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (sign_int8_scalar a_val);\n  
    incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (sign_int8_scalar a_val)\n    done\n\nlet sign_int16 a_arr out_arr va vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (sign_int16_scalar a0);\n      Array.unsafe_set out_arr (out_base + i1) (sign_int16_scalar a1);\n      Array.unsafe_set out_arr (out_base + i2) (sign_int16_scalar a2);\n      Array.unsafe_set out_arr (out_base + i3) (sign_int16_scalar a3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (sign_int16_scalar a_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = 
View.offset va in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (sign_int16_scalar a_val)\n    done\n\nlet sign_int32 a_arr out_arr va vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (sign_int32_scalar a0);\n      Array.unsafe_set out_arr (out_base + i1) (sign_int32_scalar a1);\n      Array.unsafe_set out_arr (out_base + i2) (sign_int32_scalar a2);\n      Array.unsafe_set out_arr (out_base + i3) (sign_int32_scalar a3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (sign_int32_scalar a_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length 
a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (sign_int32_scalar a_val)\n    done\n\nlet sign_int64 a_arr out_arr va vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (sign_int64_scalar a0);\n      Array.unsafe_set out_arr (out_base + i1) (sign_int64_scalar a1);\n      Array.unsafe_set out_arr (out_base + i2) (sign_int64_scalar a2);\n      Array.unsafe_set out_arr (out_base + i3) (sign_int64_scalar a3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (sign_int64_scalar a_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n 
     let a_lin = Shape.ravel_index a_idx a_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (sign_int64_scalar a_val)\n    done\n\nlet sign_bool a_arr out_arr va vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      Array.unsafe_set out_arr (out_base + i0) (Array.unsafe_get a_arr (a_base + i0));\n      Array.unsafe_set out_arr (out_base + i1) (Array.unsafe_get a_arr (a_base + i1));\n      Array.unsafe_set out_arr (out_base + i2) (Array.unsafe_get a_arr (a_base + i2));\n      Array.unsafe_set out_arr (out_base + i3) (Array.unsafe_get a_arr (a_base + i3));\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      Array.unsafe_set out_arr (out_base + idx) (Array.unsafe_get a_arr (a_base + idx));\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Array.unsafe_set out_arr (out_offset + k) (Array.unsafe_get a_arr (a_offset + a_lin))\n    done\n"
  },
  {
    "path": "packages/nx-oxcaml/lib/unary_ops/op_sin.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Import\n\nlet sin_float64 a_arr out_arr va vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Float_u.sin a0);\n      Array.unsafe_set out_arr (out_base + i1) (Float_u.sin a1);\n      Array.unsafe_set out_arr (out_base + i2) (Float_u.sin a2);\n      Array.unsafe_set out_arr (out_base + i3) (Float_u.sin a3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Float_u.sin a_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + 
a_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Float_u.sin a_val)\n    done\n\nlet sin_float32 a_arr out_arr va vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Float32_u.sin a0);\n      Array.unsafe_set out_arr (out_base + i1) (Float32_u.sin a1);\n      Array.unsafe_set out_arr (out_base + i2) (Float32_u.sin a2);\n      Array.unsafe_set out_arr (out_base + i3) (Float32_u.sin a3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Float32_u.sin a_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Float32_u.sin a_val)\n    done\n"
  },
  {
    "path": "packages/nx-oxcaml/lib/unary_ops/op_sinh.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Import\n\nlet sinh_float64 a_arr out_arr va vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Float_u.sinh a0);\n      Array.unsafe_set out_arr (out_base + i1) (Float_u.sinh a1);\n      Array.unsafe_set out_arr (out_base + i2) (Float_u.sinh a2);\n      Array.unsafe_set out_arr (out_base + i3) (Float_u.sinh a3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Float_u.sinh a_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset 
+ a_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Float_u.sinh a_val)\n    done\n\nlet sinh_float32 a_arr out_arr va vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Float32_u.sinh a0);\n      Array.unsafe_set out_arr (out_base + i1) (Float32_u.sinh a1);\n      Array.unsafe_set out_arr (out_base + i2) (Float32_u.sinh a2);\n      Array.unsafe_set out_arr (out_base + i3) (Float32_u.sinh a3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Float32_u.sinh a_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Float32_u.sinh a_val)\n    done\n"
  },
  {
    "path": "packages/nx-oxcaml/lib/unary_ops/op_sqrt.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Import\n\nlet sqrt_float64 a_arr out_arr va vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n2 = n - 1 in\n    while !i < n2 do\n      let idx = !i in\n      let a_vec = Float64x2.Array.unsafe_get a_arr ~idx:(a_base + idx) in\n      let out_vec = Float64x2.sqrt a_vec in\n      Float64x2.Array.unsafe_set out_arr ~idx:(out_base + idx) out_vec;\n      i := idx + 2\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Float_u.sqrt a_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Float_u.sqrt a_val)\n    done\n\nlet sqrt_float32 a_arr out_arr va vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n 
   let n4 = n - 3 in\n    while !i < n4 do\n      let idx = !i in\n      let a_vec = Float32x4.Array.unsafe_get a_arr ~idx:(a_base + idx) in\n      let out_vec = Float32x4.sqrt a_vec in\n      Float32x4.Array.unsafe_set out_arr ~idx:(out_base + idx) out_vec;\n      i := idx + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Float32_u.sqrt a_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Float32_u.sqrt a_val)\n    done\n"
  },
  {
    "path": "packages/nx-oxcaml/lib/unary_ops/op_tan.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Import\n\nlet tan_float64 a_arr out_arr va vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Float_u.tan a0);\n      Array.unsafe_set out_arr (out_base + i1) (Float_u.tan a1);\n      Array.unsafe_set out_arr (out_base + i2) (Float_u.tan a2);\n      Array.unsafe_set out_arr (out_base + i3) (Float_u.tan a3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Float_u.tan a_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + 
a_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Float_u.tan a_val)\n    done\n\nlet tan_float32 a_arr out_arr va vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Float32_u.tan a0);\n      Array.unsafe_set out_arr (out_base + i1) (Float32_u.tan a1);\n      Array.unsafe_set out_arr (out_base + i2) (Float32_u.tan a2);\n      Array.unsafe_set out_arr (out_base + i3) (Float32_u.tan a3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Float32_u.tan a_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Float32_u.tan a_val)\n    done\n"
  },
  {
    "path": "packages/nx-oxcaml/lib/unary_ops/op_tanh.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Import\n\nlet tanh_float64 a_arr out_arr va vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Float_u.tanh a0);\n      Array.unsafe_set out_arr (out_base + i1) (Float_u.tanh a1);\n      Array.unsafe_set out_arr (out_base + i2) (Float_u.tanh a2);\n      Array.unsafe_set out_arr (out_base + i3) (Float_u.tanh a3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Float_u.tanh a_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset 
+ a_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Float_u.tanh a_val)\n    done\n\nlet tanh_float32 a_arr out_arr va vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Float32_u.tanh a0);\n      Array.unsafe_set out_arr (out_base + i1) (Float32_u.tanh a1);\n      Array.unsafe_set out_arr (out_base + i2) (Float32_u.tanh a2);\n      Array.unsafe_set out_arr (out_base + i3) (Float32_u.tanh a3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Float32_u.tanh a_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Float32_u.tanh a_val)\n    done\n"
  },
  {
    "path": "packages/nx-oxcaml/lib/unary_ops/op_trunc.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Import\n\nlet trunc_float64 a_arr out_arr va vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Float_u.trunc a0);\n      Array.unsafe_set out_arr (out_base + i1) (Float_u.trunc a1);\n      Array.unsafe_set out_arr (out_base + i2) (Float_u.trunc a2);\n      Array.unsafe_set out_arr (out_base + i3) (Float_u.trunc a3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Float_u.trunc a_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      let a_val = Array.unsafe_get a_arr 
(a_offset + a_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Float_u.trunc a_val)\n    done\n\nlet trunc_float32 a_arr out_arr va vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      let a0 = Array.unsafe_get a_arr (a_base + i0) in\n      let a1 = Array.unsafe_get a_arr (a_base + i1) in\n      let a2 = Array.unsafe_get a_arr (a_base + i2) in\n      let a3 = Array.unsafe_get a_arr (a_base + i3) in\n      Array.unsafe_set out_arr (out_base + i0) (Float32_u.trunc a0);\n      Array.unsafe_set out_arr (out_base + i1) (Float32_u.trunc a1);\n      Array.unsafe_set out_arr (out_base + i2) (Float32_u.trunc a2);\n      Array.unsafe_set out_arr (out_base + i3) (Float32_u.trunc a3);\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      let a_val = Array.unsafe_get a_arr (a_base + idx) in\n      Array.unsafe_set out_arr (out_base + idx) (Float32_u.trunc a_val);\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      let a_val = Array.unsafe_get a_arr (a_offset + a_lin) in\n      Array.unsafe_set out_arr (out_offset + k) (Float32_u.trunc a_val)\n    done\n\nlet trunc_int8 (a_arr : int8# array) (out_arr : int8# array) va 
vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      Array.unsafe_set out_arr (out_base + i0) (Array.unsafe_get a_arr (a_base + i0));\n      Array.unsafe_set out_arr (out_base + i1) (Array.unsafe_get a_arr (a_base + i1));\n      Array.unsafe_set out_arr (out_base + i2) (Array.unsafe_get a_arr (a_base + i2));\n      Array.unsafe_set out_arr (out_base + i3) (Array.unsafe_get a_arr (a_base + i3));\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      Array.unsafe_set out_arr (out_base + idx) (Array.unsafe_get a_arr (a_base + idx));\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Array.unsafe_set out_arr (out_offset + k) (Array.unsafe_get a_arr (a_offset + a_lin))\n    done\n\nlet trunc_int16 (a_arr : int16# array) (out_arr : int16# array) va vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = 
i0 + 3 in\n      Array.unsafe_set out_arr (out_base + i0) (Array.unsafe_get a_arr (a_base + i0));\n      Array.unsafe_set out_arr (out_base + i1) (Array.unsafe_get a_arr (a_base + i1));\n      Array.unsafe_set out_arr (out_base + i2) (Array.unsafe_get a_arr (a_base + i2));\n      Array.unsafe_set out_arr (out_base + i3) (Array.unsafe_get a_arr (a_base + i3));\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      Array.unsafe_set out_arr (out_base + idx) (Array.unsafe_get a_arr (a_base + idx));\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Array.unsafe_set out_arr (out_offset + k) (Array.unsafe_get a_arr (a_offset + a_lin))\n    done\n\nlet trunc_int32 (a_arr : int32# array) (out_arr : int32# array) va vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      Array.unsafe_set out_arr (out_base + i0) (Array.unsafe_get a_arr (a_base + i0));\n      Array.unsafe_set out_arr (out_base + i1) (Array.unsafe_get a_arr (a_base + i1));\n      Array.unsafe_set out_arr (out_base + i2) (Array.unsafe_get a_arr (a_base + i2));\n      Array.unsafe_set out_arr (out_base + i3) (Array.unsafe_get a_arr (a_base + i3));\n      i := i0 + 
4\n    done;\n    while !i < n do\n      let idx = !i in\n      Array.unsafe_set out_arr (out_base + idx) (Array.unsafe_get a_arr (a_base + idx));\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Array.unsafe_set out_arr (out_offset + k) (Array.unsafe_get a_arr (a_offset + a_lin))\n    done\n\nlet trunc_int64 (a_arr : int64# array) (out_arr : int64# array) va vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      Array.unsafe_set out_arr (out_base + i0) (Array.unsafe_get a_arr (a_base + i0));\n      Array.unsafe_set out_arr (out_base + i1) (Array.unsafe_get a_arr (a_base + i1));\n      Array.unsafe_set out_arr (out_base + i2) (Array.unsafe_get a_arr (a_base + i2));\n      Array.unsafe_set out_arr (out_base + i3) (Array.unsafe_get a_arr (a_base + i3));\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      Array.unsafe_set out_arr (out_base + idx) (Array.unsafe_get a_arr (a_base + idx));\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_offset = View.offset vout in\n    let md_idx 
= Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Array.unsafe_set out_arr (out_offset + k) (Array.unsafe_get a_arr (a_offset + a_lin))\n    done\n\nlet trunc_bool (a_arr : bool array) (out_arr : bool array) va vout start_idx end_idx =\n  let out_base = View.offset vout + start_idx in\n  let a_base = View.offset va + start_idx in\n  if View.is_c_contiguous vout && View.is_c_contiguous va then (\n    let i = ref 0 in\n    let n = end_idx - start_idx in\n    let n4 = n - 3 in\n    while !i < n4 do\n      let i0 = !i in\n      let i1 = i0 + 1 in\n      let i2 = i0 + 2 in\n      let i3 = i0 + 3 in\n      Array.unsafe_set out_arr (out_base + i0) (Array.unsafe_get a_arr (a_base + i0));\n      Array.unsafe_set out_arr (out_base + i1) (Array.unsafe_get a_arr (a_base + i1));\n      Array.unsafe_set out_arr (out_base + i2) (Array.unsafe_get a_arr (a_base + i2));\n      Array.unsafe_set out_arr (out_base + i3) (Array.unsafe_get a_arr (a_base + i3));\n      i := i0 + 4\n    done;\n    while !i < n do\n      let idx = !i in\n      Array.unsafe_set out_arr (out_base + idx) (Array.unsafe_get a_arr (a_base + idx));\n      incr i\n    done)\n  else\n    let out_shape = shape vout in\n    let a_shape = shape va in\n    let a_strides = View.strides va in\n    let a_offset = View.offset va in\n    let out_offset = View.offset vout in\n    let md_idx = Array.make (Array.length out_shape) 0 in\n    let a_idx = Array.make (Array.length a_shape) 0 in\n    for k = start_idx to end_idx - 1 do\n      Shape.unravel_index_into k out_shape md_idx;\n      Shape.broadcast_index_into md_idx a_shape a_idx;\n      let a_lin = Shape.ravel_index a_idx a_strides in\n      Array.unsafe_set out_arr (out_offset + k) (Array.unsafe_get a_arr (a_offset 
+ a_lin))\n    done\n"
  },
  {
    "path": "packages/nx-oxcaml/nx-oxcaml.opam",
    "content": "# This file is generated by dune, edit dune-project instead\nopam-version: \"2.0\"\nsynopsis: \"High-performance Nx backend using OxCaml's unboxed types\"\ndescription:\n  \"An experimental backend for Nx that leverages OxCaml's unboxed types for improved performance.\"\nmaintainer: [\"Thibaut Mattio <thibaut.mattio@gmail.com>\"]\nauthors: [\"Thibaut Mattio\"]\nlicense: \"ISC\"\nhomepage: \"https://github.com/raven-ml/raven\"\nbug-reports: \"https://github.com/raven-ml/raven/issues\"\ndepends: [\n  \"ocaml-variants\" {= \"5.2.0+ox\"}\n  \"dune\" {>= \"3.21\"}\n  \"nx\"\n  \"odoc\" {with-doc}\n]\nbuild: [\n  [\"dune\" \"subst\"] {dev}\n  [\n    \"dune\"\n    \"build\"\n    \"-p\"\n    name\n    \"-j\"\n    jobs\n    \"@install\"\n    \"@runtest\" {with-test}\n    \"@doc\" {with-doc}\n  ]\n]\ndev-repo: \"git+https://github.com/raven-ml/raven.git\"\nx-maintenance-intent: [\"(latest)\"]\n"
  },
  {
    "path": "packages/nx-oxcaml/test/dune",
    "content": "(tests\n (names test_nx_oxcaml)\n (libraries nx_oxcaml nx.core))\n"
  },
  {
    "path": "packages/nx-oxcaml/test/test_nx_oxcaml.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Unit tests for Nx_backend operations *)\n\nmodule Dtype = Nx_core.Dtype\nmodule View = Nx_core.View\nmodule Nx_ox = Nx_core.Make_frontend (Nx_backend)\nlet failed = ref 0\nlet passed = ref 0\n\nlet check name cond =\n  if cond then incr passed\n  else (\n    incr failed;\n    Printf.printf \"FAIL: %s\\n%!\" name)\n\nlet check_float name ~eps exp act =\n  check name (Float.abs (exp -. act) < eps)\n\nlet check_int32 name exp act = check name (Int32.equal exp act)\nlet check_int64 name exp act = check name (Int64.equal exp act)\nlet check_int name exp act = check name (exp = act)\nlet check_bool name exp act = check name (exp = act)\n\nlet numel v = View.numel v\n\nlet test_buffer_float64 () =\n  let t = Nx_ox.empty (Nx_backend.create_context ()) Dtype.Float64 [| 5 |] in\n  check \"buffer_float64: dtype\" (Nx_backend.dtype t = Dtype.Float64);\n  check \"buffer_float64: size\" (numel (Nx_backend.view t) = 5)\n\nlet test_buffer_float32 () =\n  let t = Nx_ox.empty (Nx_backend.create_context ()) Dtype.Float32 [| 3 |] in\n  check \"buffer_float32: dtype\" (Nx_backend.dtype t = Dtype.Float32);\n  check \"buffer_float32: size\" (numel (Nx_backend.view t) = 3)\n\nlet test_buffer_int32 () =\n  let t = Nx_ox.empty (Nx_backend.create_context ()) Dtype.Int32 [| 4 |] in\n  check \"buffer_int32: dtype\" (Nx_backend.dtype t = Dtype.Int32);\n  check \"buffer_int32: size\" (numel (Nx_backend.view t) = 4)\n\nlet test_buffer_int64 () =\n  let t = Nx_ox.empty (Nx_backend.create_context ()) Dtype.Int64 [| 2 |] in\n  check \"buffer_int64: dtype\" (Nx_backend.dtype t = Dtype.Int64);\n  check \"buffer_int64: size\" (numel (Nx_backend.view t) = 2)\n\nlet test_add_float64 () =\n  let ctx = Nx_backend.create_context 
() in\n  let a = Nx_ox.create ctx Dtype.Float64 [| 3 |] [| 1.0; 2.0; 3.0 |] in\n  let b = Nx_ox.create ctx Dtype.Float64 [| 3 |] [| 10.0; 20.0; 30.0 |] in\n  let out = Nx_backend.add a b in\n  let d = Nx_ox.to_array out in\n  check_float \"add_float64[0]\" ~eps:1e-9 11.0 d.(0);\n  check_float \"add_float64[1]\" ~eps:1e-9 22.0 d.(1);\n  check_float \"add_float64[2]\" ~eps:1e-9 33.0 d.(2)\n\nlet test_add_float32 () =\n  let ctx = Nx_backend.create_context () in\n  let a = Nx_ox.create ctx Dtype.Float32 [| 3 |] [| 1.5; 2.5; 3.5 |] in\n  let b = Nx_ox.create ctx Dtype.Float32 [| 3 |] [| 0.5; 0.5; 0.5 |] in\n  let out = Nx_backend.add a b in\n  let d = Nx_ox.to_array out in\n  check_float \"add_float32[0]\" ~eps:1e-6 2.0 d.(0);\n  check_float \"add_float32[1]\" ~eps:1e-6 3.0 d.(1);\n  check_float \"add_float32[2]\" ~eps:1e-6 4.0 d.(2)\n\nlet test_add_int32 () =\n  let ctx = Nx_backend.create_context () in\n  let a = Nx_ox.create ctx Dtype.Int32 [| 3 |] [| 1l; 2l; 3l |] in\n  let b = Nx_ox.create ctx Dtype.Int32 [| 3 |] [| 100l; 200l; 300l |] in\n  let out = Nx_backend.add a b in\n  let d = Nx_ox.to_array out in\n  check_int32 \"add_int32[0]\" 101l d.(0);\n  check_int32 \"add_int32[1]\" 202l d.(1);\n  check_int32 \"add_int32[2]\" 303l d.(2)\n\nlet test_add_int64 () =\n  let ctx = Nx_backend.create_context () in\n  let a = Nx_ox.create ctx Dtype.Int64 [| 3 |] [| 1000L; 2000L; 3000L |] in\n  let b = Nx_ox.create ctx Dtype.Int64 [| 3 |] [| 1L; 2L; 3L |] in\n  let out = Nx_backend.add a b in\n  let d = Nx_ox.to_array out in\n  check_int64 \"add_int64[0]\" 1001L d.(0);\n  check_int64 \"add_int64[1]\" 2002L d.(1);\n  check_int64 \"add_int64[2]\" 3003L d.(2)\n\nlet test_sub_float64 () =\n  let ctx = Nx_backend.create_context () in\n  let a = Nx_ox.create ctx Dtype.Float64 [| 3 |] [| 10.0; 20.0; 30.0 |] in\n  let b = Nx_ox.create ctx Dtype.Float64 [| 3 |] [| 1.0; 2.0; 3.0 |] in\n  let out = Nx_backend.sub a b in\n  let d = Nx_ox.to_array out in\n  check_float \"sub_float64[0]\" 
~eps:1e-9 9.0 d.(0);\n  check_float \"sub_float64[1]\" ~eps:1e-9 18.0 d.(1);\n  check_float \"sub_float64[2]\" ~eps:1e-9 27.0 d.(2)\n\nlet test_sub_float32 () =\n  let ctx = Nx_backend.create_context () in\n  let a = Nx_ox.create ctx Dtype.Float32 [| 3 |] [| 5.0; 10.0; 15.0 |] in\n  let b = Nx_ox.create ctx Dtype.Float32 [| 3 |] [| 1.0; 2.0; 3.0 |] in\n  let out = Nx_backend.sub a b in\n  let d = Nx_ox.to_array out in\n  check_float \"sub_float32[0]\" ~eps:1e-6 4.0 d.(0);\n  check_float \"sub_float32[1]\" ~eps:1e-6 8.0 d.(1);\n  check_float \"sub_float32[2]\" ~eps:1e-6 12.0 d.(2)\n\nlet test_sub_int32 () =\n  let ctx = Nx_backend.create_context () in\n  let a = Nx_ox.create ctx Dtype.Int32 [| 3 |] [| 100l; 200l; 300l |] in\n  let b = Nx_ox.create ctx Dtype.Int32 [| 3 |] [| 1l; 2l; 3l |] in\n  let out = Nx_backend.sub a b in\n  let d = Nx_ox.to_array out in\n  check_int32 \"sub_int32[0]\" 99l d.(0);\n  check_int32 \"sub_int32[1]\" 198l d.(1);\n  check_int32 \"sub_int32[2]\" 297l d.(2)\n\nlet test_sub_int64 () =\n  let ctx = Nx_backend.create_context () in\n  let a = Nx_ox.create ctx Dtype.Int64 [| 3 |] [| 1000L; 2000L; 3000L |] in\n  let b = Nx_ox.create ctx Dtype.Int64 [| 3 |] [| 1L; 2L; 3L |] in\n  let out = Nx_backend.sub a b in\n  let d = Nx_ox.to_array out in\n  check_int64 \"sub_int64[0]\" 999L d.(0);\n  check_int64 \"sub_int64[1]\" 1998L d.(1);\n  check_int64 \"sub_int64[2]\" 2997L d.(2)\n\nlet test_add_single_element () =\n  let ctx = Nx_backend.create_context () in\n  let a = Nx_ox.create ctx Dtype.Float64 [| 1 |] [| 42.0 |] in\n  let b = Nx_ox.create ctx Dtype.Float64 [| 1 |] [| 8.0 |] in\n  let out = Nx_backend.add a b in\n  let d = Nx_ox.to_array out in\n  check_float \"add_single[0]\" ~eps:1e-9 50.0 d.(0)\n\nlet test_add_negative_values () =\n  let ctx = Nx_backend.create_context () in\n  let a = Nx_ox.create ctx Dtype.Float64 [| 2 |] [| -5.0; 10.0 |] in\n  let b = Nx_ox.create ctx Dtype.Float64 [| 2 |] [| -3.0; -7.0 |] in\n  let out = Nx_backend.add a 
b in\n  let d = Nx_ox.to_array out in\n  check_float \"add_neg[0]\" ~eps:1e-9 (-8.0) d.(0);\n  check_float \"add_neg[1]\" ~eps:1e-9 3.0 d.(1)\n\nlet test_sub_to_zero () =\n  let ctx = Nx_backend.create_context () in\n  let a = Nx_ox.create ctx Dtype.Int32 [| 2 |] [| 5l; 10l |] in\n  let b = Nx_ox.create ctx Dtype.Int32 [| 2 |] [| 5l; 10l |] in\n  let out = Nx_backend.sub a b in\n  let d = Nx_ox.to_array out in\n  check_int32 \"sub_zero[0]\" 0l d.(0);\n  check_int32 \"sub_zero[1]\" 0l d.(1)\n\nlet test_in_place_add () =\n  let ctx = Nx_backend.create_context () in\n  let a = Nx_ox.create ctx Dtype.Float64 [| 3 |] [| 1.0; 2.0; 3.0 |] in\n  let b = Nx_ox.create ctx Dtype.Float64 [| 3 |] [| 10.0; 20.0; 30.0 |] in\n  let a = Nx_backend.add a b in\n  let d = Nx_ox.to_array a in\n  check_float \"inplace_add[0]\" ~eps:1e-9 11.0 d.(0);\n  check_float \"inplace_add[1]\" ~eps:1e-9 22.0 d.(1);\n  check_float \"inplace_add[2]\" ~eps:1e-9 33.0 d.(2)\n\nlet test_mul_float64 () =\n  let ctx = Nx_backend.create_context () in\n  let a = Nx_ox.create ctx Dtype.Float64 [| 3 |] [| 1.0; 2.0; 3.0 |] in\n  let b = Nx_ox.create ctx Dtype.Float64 [| 3 |] [| 10.0; 20.0; 30.0 |] in\n  let out = Nx_backend.mul a b in\n  let d = Nx_ox.to_array out in\n  check_float \"mul_float64[0]\" ~eps:1e-9 10.0 d.(0);\n  check_float \"mul_float64[1]\" ~eps:1e-9 40.0 d.(1);\n  check_float \"mul_float64[2]\" ~eps:1e-9 90.0 d.(2)\n\nlet test_mul_float32 () =\n  let ctx = Nx_backend.create_context () in\n  let a = Nx_ox.create ctx Dtype.Float32 [| 3 |] [| 1.5; 2.5; 3.5 |] in\n  let b = Nx_ox.create ctx Dtype.Float32 [| 3 |] [| 0.5; 0.5; 2.0 |] in\n  let out = Nx_backend.mul a b in\n  let d = Nx_ox.to_array out in\n  check_float \"mul_float32[0]\" ~eps:1e-6 0.75 d.(0);\n  check_float \"mul_float32[1]\" ~eps:1e-6 1.25 d.(1);\n  check_float \"mul_float32[2]\" ~eps:1e-6 7.0 d.(2)\n\nlet test_mul_int32 () =\n  let ctx = Nx_backend.create_context () in\n  let a = Nx_ox.create ctx Dtype.Int32 [| 3 |] [| 1l; 2l; 3l |] 
in\n  let b = Nx_ox.create ctx Dtype.Int32 [| 3 |] [| 100l; 200l; 300l |] in\n  let out = Nx_backend.mul a b in\n  let d = Nx_ox.to_array out in\n  check_int32 \"mul_int32[0]\" 100l d.(0);\n  check_int32 \"mul_int32[1]\" 400l d.(1);\n  check_int32 \"mul_int32[2]\" 900l d.(2)\n\nlet test_mul_int64 () =\n  let ctx = Nx_backend.create_context () in\n  let a = Nx_ox.create ctx Dtype.Int64 [| 3 |] [| 1000L; 2000L; 3000L |] in\n  let b = Nx_ox.create ctx Dtype.Int64 [| 3 |] [| 1L; 2L; 3L |] in\n  let out = Nx_backend.mul a b in\n  let d = Nx_ox.to_array out in\n  check_int64 \"mul_int64[0]\" 1000L d.(0);\n  check_int64 \"mul_int64[1]\" 4000L d.(1);\n  check_int64 \"mul_int64[2]\" 9000L d.(2)\n\nlet test_fdiv_float64 () =\n  let ctx = Nx_backend.create_context () in\n  let a = Nx_ox.create ctx Dtype.Float64 [| 3 |] [| 1.0; 2.0; 2.0 |] in\n  let b = Nx_ox.create ctx Dtype.Float64 [| 3 |] [| 0.0; 2.0; 3.0 |] in\n  let out = Nx_backend.div b a in\n  let d = Nx_ox.to_array out in\n  check_float \"fdiv_float64[0]\" ~eps:1e-9 0.0 d.(0);\n  check_float \"fdiv_float64[1]\" ~eps:1e-9 1.0 d.(1);\n  check_float \"fdiv_float64[2]\" ~eps:1e-9 1.5 d.(2)\n\nlet test_fdiv_float32 () =\n  let ctx = Nx_backend.create_context () in\n  let a = Nx_ox.create ctx Dtype.Float32 [| 3 |] [| 1.5; 2.5; 7.0 |] in\n  let b = Nx_ox.create ctx Dtype.Float32 [| 3 |] [| 0.5; 0.5; 2.0 |] in\n  let out = Nx_backend.div a b in\n  let d = Nx_ox.to_array out in\n  check_float \"fdiv_float32[0]\" ~eps:1e-6 3.0 d.(0);\n  check_float \"fdiv_float32[1]\" ~eps:1e-6 5.0 d.(1);\n  check_float \"fdiv_float32[2]\" ~eps:1e-6 3.5 d.(2)\n\nlet test_fdiv_int32 () =\n  let ctx = Nx_backend.create_context () in\n  let a = Nx_ox.create ctx Dtype.Int32 [| 3 |] [| 1l; 2l; 3l |] in\n  let b = Nx_ox.create ctx Dtype.Int32 [| 3 |] [| 100l; 1l; 2l |] in\n  let out = Nx_backend.div a b in\n  let d = Nx_ox.to_array out in\n  check_int32 \"fdiv_int32[0]\" 0l d.(0);\n  check_int32 \"fdiv_int32[1]\" 2l d.(1);\n  check_int32 
\"fdiv_int32[2]\" 1l d.(2)\n\nlet test_fdiv_int64 () =\n  let ctx = Nx_backend.create_context () in\n  let a = Nx_ox.create ctx Dtype.Int64 [| 3 |] [| 1000L; 2000L; 3000L |] in\n  let b = Nx_ox.create ctx Dtype.Int64 [| 3 |] [| 1L; 2L; 3L |] in\n  let out = Nx_backend.div a b in\n  let d = Nx_ox.to_array out in\n  check_int64 \"fdiv_int64[0]\" 1000L d.(0);\n  check_int64 \"fdiv_int64[1]\" 1000L d.(1);\n  check_int64 \"fdiv_int64[2]\" 1000L d.(2)\n\nlet test_idiv_int32 () =\n  let ctx = Nx_backend.create_context () in\n  let a = Nx_ox.create ctx Dtype.Int32 [| 3 |] [| 1l; 2l; 3l |] in\n  let b = Nx_ox.create ctx Dtype.Int32 [| 3 |] [| 100l; 1l; 2l |] in\n  let out = Nx_backend.div a b in\n  let d = Nx_ox.to_array out in\n  check_int32 \"idiv_int32[0]\" 0l d.(0);\n  check_int32 \"idiv_int32[1]\" 2l d.(1);\n  check_int32 \"idiv_int32[2]\" 1l d.(2)\n\nlet test_idiv_int64 () =\n  let ctx = Nx_backend.create_context () in\n  let a = Nx_ox.create ctx Dtype.Int64 [| 3 |] [| 1000L; 2000L; 3000L |] in\n  let b = Nx_ox.create ctx Dtype.Int64 [| 3 |] [| 1L; 2L; 3L |] in\n  let out = Nx_backend.div a b in\n  let d = Nx_ox.to_array out in\n  check_int64 \"idiv_int64[0]\" 1000L d.(0);\n  check_int64 \"idiv_int64[1]\" 1000L d.(1);\n  check_int64 \"idiv_int64[2]\" 1000L d.(2)\n\nlet test_mod_float64 () =\n  let ctx = Nx_backend.create_context () in\n  let a = Nx_ox.create ctx Dtype.Float64 [| 3 |] [| 1.0; 2.0; 2.0 |] in\n  let b = Nx_ox.create ctx Dtype.Float64 [| 3 |] [| 0.0; 2.0; 3.0 |] in\n  let out = Nx_backend.mod_ b a in\n  let d = Nx_ox.to_array out in\n  check_float \"mod_float64[0]\" ~eps:1e-9 0.0 d.(0);\n  (* 0 mod 1 = 0 *)\n  check_float \"mod_float64[1]\" ~eps:1e-9 0.0 d.(1);\n  (* 2 mod 2 = 0 *)\n  check_float \"mod_float64[2]\" ~eps:1e-9 1.0 d.(2)\n(* 3 mod 2 = 1 *)\n\nlet test_mod_float32 () =\n  let ctx = Nx_backend.create_context () in\n  let a = Nx_ox.create ctx Dtype.Float32 [| 3 |] [| 1.5; 2.5; 7.0 |] in\n  let b = Nx_ox.create ctx Dtype.Float32 [| 3 |] [| 0.5; 
0.5; 2.0 |] in\n  let out = Nx_backend.mod_ a b in\n  let d = Nx_ox.to_array out in\n  check_float \"mod_float32[0]\" ~eps:1e-6 0.0 d.(0);\n  (* 1.5 mod 0.5 = 0 *)\n  check_float \"mod_float32[1]\" ~eps:1e-6 0.0 d.(1);\n  (* 2.5 mod 0.5 = 0 *)\n  check_float \"mod_float32[2]\" ~eps:1e-6 1.0 d.(2)\n(* 7 mod 2 = 1 *)\n\nlet test_mod_int32 () =\n  let ctx = Nx_backend.create_context () in\n  let a = Nx_ox.create ctx Dtype.Int32 [| 3 |] [| 1l; 2l; 3l |] in\n  let b = Nx_ox.create ctx Dtype.Int32 [| 3 |] [| 100l; 1l; 2l |] in\n  let out = Nx_backend.mod_ a b in\n  let d = Nx_ox.to_array out in\n  check_int32 \"mod_int32[0]\" 1l d.(0);\n  (* 1 mod 100 = 1 *)\n  check_int32 \"mod_int32[1]\" 0l d.(1);\n  (* 2 mod 1 = 0 *)\n  check_int32 \"mod_int32[2]\" 1l d.(2)\n(* 3 mod 2 = 1 *)\n\nlet test_mod_int64 () =\n  let ctx = Nx_backend.create_context () in\n  let a = Nx_ox.create ctx Dtype.Int64 [| 3 |] [| 1000L; 2000L; 3000L |] in\n  let b = Nx_ox.create ctx Dtype.Int64 [| 3 |] [| 1L; 2L; 3L |] in\n  let out = Nx_backend.mod_ a b in\n  let d = Nx_ox.to_array out in\n  check_int64 \"mod_int64[0]\" 0L d.(0);\n  (* 1000 mod 1 = 0 *)\n  check_int64 \"mod_int64[1]\" 0L d.(1);\n  (* 2000 mod 2 = 0 *)\n  check_int64 \"mod_int64[2]\" 0L d.(2)\n(* 3000 mod 3 = 0 *)\n\nlet test_max_float64 () =\n  let ctx = Nx_backend.create_context () in\n  let a = Nx_ox.create ctx Dtype.Float64 [| 3 |] [| 1.0; 2.0; 2.0 |] in\n  let b = Nx_ox.create ctx Dtype.Float64 [| 3 |] [| 0.0; 2.5; 1.5 |] in\n  let out = Nx_backend.max a b in\n  let d = Nx_ox.to_array out in\n  check_float \"max_float64[0]\" ~eps:1e-9 1.0 d.(0);\n  check_float \"max_float64[1]\" ~eps:1e-9 2.5 d.(1);\n  check_float \"max_float64[2]\" ~eps:1e-9 2.0 d.(2)\n\nlet test_max_float32 () =\n  let ctx = Nx_backend.create_context () in\n  let a = Nx_ox.create ctx Dtype.Float32 [| 3 |] [| 1.5; 2.5; 7.0 |] in\n  let b = Nx_ox.create ctx Dtype.Float32 [| 3 |] [| 2.0; 2.0; 3.0 |] in\n  let out = Nx_backend.max a b in\n  let d = Nx_ox.to_array 
out in\n  check_float \"max_float32[0]\" ~eps:1e-6 2.0 d.(0);\n  check_float \"max_float32[1]\" ~eps:1e-6 2.5 d.(1);\n  check_float \"max_float32[2]\" ~eps:1e-6 7.0 d.(2)\n\nlet test_max_int32 () =\n  let ctx = Nx_backend.create_context () in\n  let a = Nx_ox.create ctx Dtype.Int32 [| 3 |] [| 1l; 2l; 3l |] in\n  let b = Nx_ox.create ctx Dtype.Int32 [| 3 |] [| 0l; 3l; 2l |] in\n  let out = Nx_backend.max a b in\n  let d = Nx_ox.to_array out in\n  check_int32 \"max_int32[0]\" 1l d.(0);\n  check_int32 \"max_int32[1]\" 3l d.(1);\n  check_int32 \"max_int32[2]\" 3l d.(2)\n\nlet test_max_int64 () =\n  let ctx = Nx_backend.create_context () in\n  let a = Nx_ox.create ctx Dtype.Int64 [| 3 |] [| 1000L; 2000L; 3000L |] in\n  let b = Nx_ox.create ctx Dtype.Int64 [| 3 |] [| 1500L; 1500L; 1000L |] in\n  let out = Nx_backend.max a b in\n  let d = Nx_ox.to_array out in\n  check_int64 \"max_int64[0]\" 1500L d.(0);\n  check_int64 \"max_int64[1]\" 2000L d.(1);\n  check_int64 \"max_int64[2]\" 3000L d.(2)\n\nlet test_min_float64 () =\n  let ctx = Nx_backend.create_context () in\n  let a = Nx_ox.create ctx Dtype.Float64 [| 3 |] [| 1.0; 2.0; 2.0 |] in\n  let b = Nx_ox.create ctx Dtype.Float64 [| 3 |] [| 0.0; 2.5; 1.5 |] in\n  let out = Nx_backend.min a b in\n  let d = Nx_ox.to_array out in\n  check_float \"min_float64[0]\" ~eps:1e-9 0.0 d.(0);\n  check_float \"min_float64[1]\" ~eps:1e-9 2.0 d.(1);\n  check_float \"min_float64[2]\" ~eps:1e-9 1.5 d.(2)\n\nlet test_min_float32 () =\n  let ctx = Nx_backend.create_context () in\n  let a = Nx_ox.create ctx Dtype.Float32 [| 3 |] [| 1.5; 2.5; 7.0 |] in\n  let b = Nx_ox.create ctx Dtype.Float32 [| 3 |] [| 2.0; 2.0; 3.0 |] in\n  let out = Nx_backend.min a b in\n  let d = Nx_ox.to_array out in\n  check_float \"min_float32[0]\" ~eps:1e-6 1.5 d.(0);\n  check_float \"min_float32[1]\" ~eps:1e-6 2.0 d.(1);\n  check_float \"min_float32[2]\" ~eps:1e-6 3.0 d.(2)\n\nlet test_min_int32 () =\n  let ctx = Nx_backend.create_context () in\n  let a = Nx_ox.create 
ctx Dtype.Int32 [| 3 |] [| 1l; 2l; 3l |] in\n  let b = Nx_ox.create ctx Dtype.Int32 [| 3 |] [| 0l; 3l; 2l |] in\n  let out = Nx_backend.min a b in\n  let d = Nx_ox.to_array out in\n  check_int32 \"min_int32[0]\" 0l d.(0);\n  check_int32 \"min_int32[1]\" 2l d.(1);\n  check_int32 \"min_int32[2]\" 2l d.(2)\n\nlet test_min_int64 () =\n  let ctx = Nx_backend.create_context () in\n  let a = Nx_ox.create ctx Dtype.Int64 [| 3 |] [| 1000L; 2000L; 3000L |] in\n  let b = Nx_ox.create ctx Dtype.Int64 [| 3 |] [| 1500L; 1500L; 1000L |] in\n  let out = Nx_backend.min a b in\n  let d = Nx_ox.to_array out in\n  check_int64 \"min_int64[0]\" 1000L d.(0);\n  check_int64 \"min_int64[1]\" 1500L d.(1);\n  check_int64 \"min_int64[2]\" 1000L d.(2)\n\nlet test_pow_float64 () =\n  let ctx = Nx_backend.create_context () in\n  let a = Nx_ox.create ctx Dtype.Float64 [| 3 |] [| 2.0; 3.0; 4.0 |] in\n  let b = Nx_ox.create ctx Dtype.Float64 [| 3 |] [| 3.0; 2.0; 0.5 |] in\n  let out = Nx_backend.pow a b in\n  let d = Nx_ox.to_array out in\n  check_float \"pow_float64[0]\" ~eps:1e-9 8.0 d.(0);\n  (* 2^3 *)\n  check_float \"pow_float64[1]\" ~eps:1e-9 9.0 d.(1);\n  (* 3^2 *)\n  check_float \"pow_float64[2]\" ~eps:1e-9 2.0 d.(2)\n(* 4^0.5 *)\n\nlet test_pow_float32 () =\n  let ctx = Nx_backend.create_context () in\n  let a = Nx_ox.create ctx Dtype.Float32 [| 3 |] [| 2.0; 5.0; 9.0 |] in\n  let b = Nx_ox.create ctx Dtype.Float32 [| 3 |] [| 3.0; 1.0; 0.5 |] in\n  let out = Nx_backend.pow a b in\n  let d = Nx_ox.to_array out in\n  check_float \"pow_float32[0]\" ~eps:1e-6 8.0 d.(0);\n  (* 2^3 *)\n  check_float \"pow_float32[1]\" ~eps:1e-6 5.0 d.(1);\n  (* 5^1 *)\n  check_float \"pow_float32[2]\" ~eps:1e-6 3.0 d.(2)\n(* 9^0.5 *)\n\nlet test_and_int32 () =\n  let ctx = Nx_backend.create_context () in\n  let a = Nx_ox.create ctx Dtype.Int32 [| 3 |] [| 0b1101l; 0b1010l; 0b1111l |] in\n  let b = Nx_ox.create ctx Dtype.Int32 [| 3 |] [| 0b1011l; 0b1100l; 0b0101l |] in\n  let out = Nx_backend.and_ a b in\n  let d = 
Nx_ox.to_array out in\n  check_int32 \"and_int32[0]\" 0b1001l d.(0);\n  (* 1101 & 1011 *)\n  check_int32 \"and_int32[1]\" 0b1000l d.(1);\n  (* 1010 & 1100 *)\n  check_int32 \"and_int32[2]\" 0b0101l d.(2)\n(* 1111 & 0101 *)\n\nlet test_and_int64 () =\n  let ctx = Nx_backend.create_context () in\n  let a = Nx_ox.create ctx Dtype.Int64 [| 3 |] [| 0b1101L; 0b1010L; 0b1111L |] in\n  let b = Nx_ox.create ctx Dtype.Int64 [| 3 |] [| 0b1011L; 0b1100L; 0b0101L |] in\n  let out = Nx_backend.and_ a b in\n  let d = Nx_ox.to_array out in\n  check_int64 \"and_int64[0]\" 0b1001L d.(0);\n  check_int64 \"and_int64[1]\" 0b1000L d.(1);\n  check_int64 \"and_int64[2]\" 0b0101L d.(2)\n\nlet test_or_int32 () =\n  let ctx = Nx_backend.create_context () in\n  let a = Nx_ox.create ctx Dtype.Int32 [| 3 |] [| 0b1101l; 0b1010l; 0b1111l |] in\n  let b = Nx_ox.create ctx Dtype.Int32 [| 3 |] [| 0b1011l; 0b1100l; 0b0101l |] in\n  let out = Nx_backend.or_ a b in\n  let d = Nx_ox.to_array out in\n  check_int32 \"or_int32[0]\" 0b1111l d.(0);\n  (* 1101 | 1011 *)\n  check_int32 \"or_int32[1]\" 0b1110l d.(1);\n  (* 1010 | 1100 *)\n  check_int32 \"or_int32[2]\" 0b1111l d.(2)\n(* 1111 | 0101 *)\n\nlet test_or_int64 () =\n  let ctx = Nx_backend.create_context () in\n  let a = Nx_ox.create ctx Dtype.Int64 [| 3 |] [| 0b1101L; 0b1010L; 0b1111L |] in\n  let b = Nx_ox.create ctx Dtype.Int64 [| 3 |] [| 0b1011L; 0b1100L; 0b0101L |] in\n  let out = Nx_backend.or_ a b in\n  let d = Nx_ox.to_array out in\n  check_int64 \"or_int64[0]\" 0b1111L d.(0);\n  check_int64 \"or_int64[1]\" 0b1110L d.(1);\n  check_int64 \"or_int64[2]\" 0b1111L d.(2)\n\nlet test_xor_int32 () =\n  let ctx = Nx_backend.create_context () in\n  let a = Nx_ox.create ctx Dtype.Int32 [| 3 |] [| 0b1101l; 0b1010l; 0b1111l |] in\n  let b = Nx_ox.create ctx Dtype.Int32 [| 3 |] [| 0b1011l; 0b1100l; 0b0101l |] in\n  let out = Nx_backend.xor a b in\n  let d = Nx_ox.to_array out in\n  check_int32 \"xor_int32[0]\" 0b0110l d.(0);\n  (* 1101 ^ 1011 *)\n  
check_int32 \"xor_int32[1]\" 0b0110l d.(1);\n  (* 1010 ^ 1100 *)\n  check_int32 \"xor_int32[2]\" 0b1010l d.(2)\n(* 1111 ^ 0101 *)\n\nlet test_xor_int64 () =\n  let ctx = Nx_backend.create_context () in\n  let a = Nx_ox.create ctx Dtype.Int64 [| 3 |] [| 0b1101L; 0b1010L; 0b1111L |] in\n  let b = Nx_ox.create ctx Dtype.Int64 [| 3 |] [| 0b1011L; 0b1100L; 0b0101L |] in\n  let out = Nx_backend.xor a b in\n  let d = Nx_ox.to_array out in\n  check_int64 \"xor_int64[0]\" 0b0110L d.(0);\n  check_int64 \"xor_int64[1]\" 0b0110L d.(1);\n  check_int64 \"xor_int64[2]\" 0b1010L d.(2)\n\nlet test_neg_float64 () =\n  let ctx = Nx_backend.create_context () in\n  let a = Nx_ox.create ctx Dtype.Float64 [| 3 |] [| 1.0; -2.5; 0.0 |] in\n  let out = Nx_backend.neg a in\n  let d = Nx_ox.to_array out in\n  check_float \"neg_float64[0]\" ~eps:1e-9 (-1.0) d.(0);\n  check_float \"neg_float64[1]\" ~eps:1e-9 2.5 d.(1);\n  check_float \"neg_float64[2]\" ~eps:1e-9 0.0 d.(2)\n\nlet test_neg_float32 () =\n  let ctx = Nx_backend.create_context () in\n  let a = Nx_ox.create ctx Dtype.Float32 [| 3 |] [| 1.5; -3.0; 0.0 |] in\n  let out = Nx_backend.neg a in\n  let d = Nx_ox.to_array out in\n  check_float \"neg_float32[0]\" ~eps:1e-6 (-1.5) d.(0);\n  check_float \"neg_float32[1]\" ~eps:1e-6 3.0 d.(1);\n  check_float \"neg_float32[2]\" ~eps:1e-6 0.0 d.(2)\n\nlet test_neg_int32 () =\n  let ctx = Nx_backend.create_context () in\n  let a = Nx_ox.create ctx Dtype.Int32 [| 3 |] [| 1l; (-2l); 0l |] in\n  let out = Nx_backend.neg a in\n  let d = Nx_ox.to_array out in\n  check_int32 \"neg_int32[0]\" (-1l) d.(0);\n  check_int32 \"neg_int32[1]\" 2l d.(1);\n  check_int32 \"neg_int32[2]\" 0l d.(2)\n\nlet test_neg_int64 () =\n  let ctx = Nx_backend.create_context () in\n  let a = Nx_ox.create ctx Dtype.Int64 [| 3 |] [| 10L; (-20L); 0L |] in\n  let out = Nx_backend.neg a in\n  let d = Nx_ox.to_array out in\n  check_int64 \"neg_int64[0]\" (-10L) d.(0);\n  check_int64 \"neg_int64[1]\" 20L d.(1);\n  check_int64 
\"neg_int64[2]\" 0L d.(2)\n\nlet test_abs_float64 () =\n  let ctx = Nx_backend.create_context () in\n  let a = Nx_ox.create ctx Dtype.Float64 [| 3 |] [| -1.0; 2.5; -0.0 |] in\n  let out = Nx_backend.abs a in\n  let d = Nx_ox.to_array out in\n  check_float \"abs_float64[0]\" ~eps:1e-9 1.0 d.(0);\n  check_float \"abs_float64[1]\" ~eps:1e-9 2.5 d.(1);\n  check_float \"abs_float64[2]\" ~eps:1e-9 0.0 d.(2)\n\nlet test_abs_float32 () =\n  let ctx = Nx_backend.create_context () in\n  let a = Nx_ox.create ctx Dtype.Float32 [| 3 |] [| -1.5; 3.0; 0.0 |] in\n  let out = Nx_backend.abs a in\n  let d = Nx_ox.to_array out in\n  check_float \"abs_float32[0]\" ~eps:1e-6 1.5 d.(0);\n  check_float \"abs_float32[1]\" ~eps:1e-6 3.0 d.(1);\n  check_float \"abs_float32[2]\" ~eps:1e-6 0.0 d.(2)\n\nlet test_abs_int32 () =\n  let ctx = Nx_backend.create_context () in\n  let a = Nx_ox.create ctx Dtype.Int32 [| 3 |] [| (-1l); 2l; 0l |] in\n  let out = Nx_backend.abs a in\n  let d = Nx_ox.to_array out in\n  check_int32 \"abs_int32[0]\" 1l d.(0);\n  check_int32 \"abs_int32[1]\" 2l d.(1);\n  check_int32 \"abs_int32[2]\" 0l d.(2)\n\nlet test_abs_int64 () =\n  let ctx = Nx_backend.create_context () in\n  let a = Nx_ox.create ctx Dtype.Int64 [| 3 |] [| (-10L); 20L; 0L |] in\n  let out = Nx_backend.abs a in\n  let d = Nx_ox.to_array out in\n  check_int64 \"abs_int64[0]\" 10L d.(0);\n  check_int64 \"abs_int64[1]\" 20L d.(1);\n  check_int64 \"abs_int64[2]\" 0L d.(2)\n\nlet test_log_float64 () =\n  let ctx = Nx_backend.create_context () in\n  let a = Nx_ox.create ctx Dtype.Float64 [| 3 |] [| 1.0; 2.718281828459045; 10.0 |] in\n  let out = Nx_backend.log a in\n  let d = Nx_ox.to_array out in\n  check_float \"log_float64[0]\" ~eps:1e-9 0.0 d.(0);\n  check_float \"log_float64[1]\" ~eps:1e-9 1.0 d.(1);\n  check_float \"log_float64[2]\" ~eps:1e-9 2.302585092994046 d.(2)\n\nlet test_log_float32 () =\n  let ctx = Nx_backend.create_context () in\n  let a = Nx_ox.create ctx Dtype.Float32 [| 3 |] [| 1.0; 
2.7182817; 10.0 |] in\n  let out = Nx_backend.log a in\n  let d = Nx_ox.to_array out in\n  check_float \"log_float32[0]\" ~eps:1e-6 0.0 d.(0);\n  check_float \"log_float32[1]\" ~eps:1e-6 1.0 d.(1);\n  check_float \"log_float32[2]\" ~eps:1e-6 2.3025851 d.(2)\n\nlet test_exp_float64 () =\n  let ctx = Nx_backend.create_context () in\n  let a = Nx_ox.create ctx Dtype.Float64 [| 3 |] [| 0.0; 1.0; 2.0 |] in\n  let out = Nx_backend.exp a in\n  let d = Nx_ox.to_array out in\n  check_float \"exp_float64[0]\" ~eps:1e-9 1.0 d.(0);\n  check_float \"exp_float64[1]\" ~eps:1e-9 2.718281828459045 d.(1);\n  check_float \"exp_float64[2]\" ~eps:1e-9 7.38905609893065 d.(2)\n\nlet test_exp_float32 () =\n  let ctx = Nx_backend.create_context () in\n  let a = Nx_ox.create ctx Dtype.Float32 [| 3 |] [| 0.0; 1.0; 2.0 |] in\n  let out = Nx_backend.exp a in\n  let d = Nx_ox.to_array out in\n  check_float \"exp_float32[0]\" ~eps:1e-6 1.0 d.(0);\n  check_float \"exp_float32[1]\" ~eps:1e-6 2.7182817 d.(1);\n  check_float \"exp_float32[2]\" ~eps:1e-6 7.389056 d.(2)\n\nlet test_sin_float64 () =\n  let ctx = Nx_backend.create_context () in\n  let a =\n    Nx_ox.create ctx Dtype.Float64 [| 3 |]\n      [| 0.0; 1.5707963267948966; 3.141592653589793 |]\n  in\n  let out = Nx_backend.sin a in\n  let d = Nx_ox.to_array out in\n  check_float \"sin_float64[0]\" ~eps:1e-9 0.0 d.(0);\n  check_float \"sin_float64[1]\" ~eps:1e-9 1.0 d.(1);\n  check_float \"sin_float64[2]\" ~eps:1e-9 0.0 d.(2)\n\nlet test_sin_float32 () =\n  let ctx = Nx_backend.create_context () in\n  let a = Nx_ox.create ctx Dtype.Float32 [| 3 |] [| 0.0; 1.5707964; 3.1415927 |] in\n  let out = Nx_backend.sin a in\n  let d = Nx_ox.to_array out in\n  check_float \"sin_float32[0]\" ~eps:1e-6 0.0 d.(0);\n  check_float \"sin_float32[1]\" ~eps:1e-6 1.0 d.(1);\n  check_float \"sin_float32[2]\" ~eps:1e-6 0.0 d.(2)\n\nlet test_cos_float64 () =\n  let ctx = Nx_backend.create_context () in\n  let a =\n    Nx_ox.create ctx Dtype.Float64 [| 3 |]\n      [| 
0.0; 1.5707963267948966; 3.141592653589793 |]\n  in\n  let out = Nx_backend.cos a in\n  let d = Nx_ox.to_array out in\n  check_float \"cos_float64[0]\" ~eps:1e-9 1.0 d.(0);\n  check_float \"cos_float64[1]\" ~eps:1e-9 0.0 d.(1);\n  check_float \"cos_float64[2]\" ~eps:1e-9 (-1.0) d.(2)\n\nlet test_cos_float32 () =\n  let ctx = Nx_backend.create_context () in\n  let a = Nx_ox.create ctx Dtype.Float32 [| 3 |] [| 0.0; 1.5707964; 3.1415927 |] in\n  let out = Nx_backend.cos a in\n  let d = Nx_ox.to_array out in\n  check_float \"cos_float32[0]\" ~eps:1e-6 1.0 d.(0);\n  check_float \"cos_float32[1]\" ~eps:1e-6 0.0 d.(1);\n  check_float \"cos_float32[2]\" ~eps:1e-6 (-1.0) d.(2)\n\nlet test_sqrt_float64 () =\n  let ctx = Nx_backend.create_context () in\n  let a = Nx_ox.create ctx Dtype.Float64 [| 3 |] [| 0.0; 4.0; 9.0 |] in\n  let out = Nx_backend.sqrt a in\n  let d = Nx_ox.to_array out in\n  check_float \"sqrt_float64[0]\" ~eps:1e-9 0.0 d.(0);\n  check_float \"sqrt_float64[1]\" ~eps:1e-9 2.0 d.(1);\n  check_float \"sqrt_float64[2]\" ~eps:1e-9 3.0 d.(2)\n\nlet test_sqrt_float32 () =\n  let ctx = Nx_backend.create_context () in\n  let a = Nx_ox.create ctx Dtype.Float32 [| 3 |] [| 0.0; 4.0; 9.0 |] in\n  let out = Nx_backend.sqrt a in\n  let d = Nx_ox.to_array out in\n  check_float \"sqrt_float32[0]\" ~eps:1e-6 0.0 d.(0);\n  check_float \"sqrt_float32[1]\" ~eps:1e-6 2.0 d.(1);\n  check_float \"sqrt_float32[2]\" ~eps:1e-6 3.0 d.(2)\n\nlet test_recip_float64 () =\n  let ctx = Nx_backend.create_context () in\n  let a = Nx_ox.create ctx Dtype.Float64 [| 3 |] [| 0.5; 0.25; 0.125 |] in\n  let out = Nx_backend.recip a in\n  let d = Nx_ox.to_array out in\n  check_float \"recip_float64[0]\" ~eps:1e-9 2.0 d.(0);\n  check_float \"recip_float64[1]\" ~eps:1e-9 4.0 d.(1);\n  check_float \"recip_float64[2]\" ~eps:1e-9 8.0 d.(2)\n\nlet test_cmpeq_int64 () =\n  let ctx = Nx_backend.create_context () in\n  let a = Nx_ox.create ctx Dtype.Int64 [| 3 |] [| 1L; 2L; 3L |] in\n  let b = Nx_ox.create 
ctx Dtype.Int64 [| 3 |] [| 1L; 2L; 4L |] in\n  let out = Nx_backend.cmpeq a b in\n  let d = Nx_ox.to_array out in\n  check_bool \"cmpeq_int64[0]\" true d.(0);\n  check_bool \"cmpeq_int64[1]\" true d.(1);\n  check_bool \"cmpeq_int64[2]\" false d.(2)\n\nlet test_cmpeq_float64 () =\n  let ctx = Nx_backend.create_context () in\n  let a = Nx_ox.create ctx Dtype.Float64 [| 3 |] [| 1.0; 2.0; 3.0 |] in\n  let b = Nx_ox.create ctx Dtype.Float64 [| 3 |] [| 1.0; 2.0; 4.0 |] in\n  let out = Nx_backend.cmpeq a b in\n  let d = Nx_ox.to_array out in\n  check_bool \"cmpeq_float64[0]\" true d.(0);\n  check_bool \"cmpeq_float64[1]\" true d.(1);\n  check_bool \"cmpeq_float64[2]\" false d.(2)\n\nlet test_cmpne_int64 () =\n  let ctx = Nx_backend.create_context () in\n  let a = Nx_ox.create ctx Dtype.Int64 [| 3 |] [| 1L; 2L; 3L |] in\n  let b = Nx_ox.create ctx Dtype.Int64 [| 3 |] [| 1L; 2L; 4L |] in\n  let out = Nx_backend.cmpne a b in\n  let d = Nx_ox.to_array out in\n  check_bool \"cmpne_int64[0]\" false d.(0);\n  check_bool \"cmpne_int64[1]\" false d.(1);\n  check_bool \"cmpne_int64[2]\" true d.(2)\n\nlet test_cmpne_float64 () =\n  let ctx = Nx_backend.create_context () in\n  let a = Nx_ox.create ctx Dtype.Float64 [| 3 |] [| 1.0; 2.0; 3.0 |] in\n  let b = Nx_ox.create ctx Dtype.Float64 [| 3 |] [| 1.0; 2.0; 4.0 |] in\n  let out = Nx_backend.cmpne a b in\n  let d = Nx_ox.to_array out in\n  check_bool \"cmpne_float64[0]\" false d.(0);\n  check_bool \"cmpne_float64[1]\" false d.(1);\n  check_bool \"cmpne_float64[2]\" true d.(2)\n\nlet test_cmplt_float64 () =\n  let ctx = Nx_backend.create_context () in\n  let a = Nx_ox.create ctx Dtype.Float64 [| 3 |] [| 0.5; 1.0; 2.0 |] in\n  let b = Nx_ox.create ctx Dtype.Float64 [| 3 |] [| 1.0; 1.0; 1.0 |] in\n  let out = Nx_backend.cmplt a b in\n  let d = Nx_ox.to_array out in\n  check_bool \"cmplt_float64[0]\" true d.(0);\n  check_bool \"cmplt_float64[1]\" false d.(1);\n  check_bool \"cmplt_float64[2]\" false d.(2)\n\nlet test_cmplt_int64 () =\n  let ctx = 
Nx_backend.create_context () in\n  let a = Nx_ox.create ctx Dtype.Int64 [| 3 |] [| 0L; 1L; 2L |] in\n  let b = Nx_ox.create ctx Dtype.Int64 [| 3 |] [| 1L; 1L; 1L |] in\n  let out = Nx_backend.cmplt a b in\n  let d = Nx_ox.to_array out in\n  check_bool \"cmplt_int64[0]\" true d.(0);\n  check_bool \"cmplt_int64[1]\" false d.(1);\n  check_bool \"cmplt_int64[2]\" false d.(2)\n\nlet test_cmple_float64 () =\n  let ctx = Nx_backend.create_context () in\n  let a = Nx_ox.create ctx Dtype.Float64 [| 3 |] [| 0.5; 1.0; 2.0 |] in\n  let b = Nx_ox.create ctx Dtype.Float64 [| 3 |] [| 1.0; 1.0; 1.0 |] in\n  let out = Nx_backend.cmple a b in\n  let d = Nx_ox.to_array out in\n  check_bool \"cmple_float64[0]\" true d.(0);\n  check_bool \"cmple_float64[1]\" true d.(1);\n  check_bool \"cmple_float64[2]\" false d.(2)\n\nlet test_cmple_int64 () =\n  let ctx = Nx_backend.create_context () in\n  let a = Nx_ox.create ctx Dtype.Int64 [| 3 |] [| 0L; 1L; 2L |] in\n  let b = Nx_ox.create ctx Dtype.Int64 [| 3 |] [| 1L; 1L; 1L |] in\n  let out = Nx_backend.cmple a b in\n  let d = Nx_ox.to_array out in\n  check_bool \"cmple_int64[0]\" true d.(0);\n  check_bool \"cmple_int64[1]\" true d.(1);\n  check_bool \"cmple_int64[2]\" false d.(2)\n\nlet test_where_float64_basic () =\n  let ctx = Nx_backend.create_context () in\n  let cond = Nx_ox.create ctx Dtype.Bool [| 4 |] [| true; false; true; false |] in\n  let if_true = Nx_ox.create ctx Dtype.Float64 [| 4 |] [| 1.0; 2.0; 3.0; 4.0 |] in\n  let if_false =\n    Nx_ox.create ctx Dtype.Float64 [| 4 |] [| 10.0; 20.0; 30.0; 40.0 |]\n  in\n  let out = Nx_backend.where cond if_true if_false in\n  let d = Nx_ox.to_array out in\n  check_float \"where_basic[0]\" ~eps:1e-9 1.0 d.(0);\n  check_float \"where_basic[1]\" ~eps:1e-9 20.0 d.(1);\n  check_float \"where_basic[2]\" ~eps:1e-9 3.0 d.(2);\n  check_float \"where_basic[3]\" ~eps:1e-9 40.0 d.(3)\n\nlet test_where_float32_basic () =\n  let ctx = Nx_backend.create_context () in\n  let cond = Nx_ox.create ctx Dtype.Bool [| 4 |] [| 
true; false; true; false |] in\n  let if_true = Nx_ox.create ctx Dtype.Float32 [| 4 |] [| 1.0; 2.0; 3.0; 4.0 |] in\n  let if_false =\n    Nx_ox.create ctx Dtype.Float32 [| 4 |] [| 10.0; 20.0; 30.0; 40.0 |]\n  in\n  let out = Nx_backend.where cond if_true if_false in\n  let d = Nx_ox.to_array out in\n  check_float \"where_float32[0]\" ~eps:1e-6 1.0 d.(0);\n  check_float \"where_float32[1]\" ~eps:1e-6 20.0 d.(1);\n  check_float \"where_float32[2]\" ~eps:1e-6 3.0 d.(2);\n  check_float \"where_float32[3]\" ~eps:1e-6 40.0 d.(3)\n\nlet test_where_int32_basic () =\n  let ctx = Nx_backend.create_context () in\n  let cond = Nx_ox.create ctx Dtype.Bool [| 4 |] [| true; false; true; false |] in\n  let if_true = Nx_ox.create ctx Dtype.Int32 [| 4 |] [| 1l; 2l; 3l; 4l |] in\n  let if_false = Nx_ox.create ctx Dtype.Int32 [| 4 |] [| 10l; 20l; 30l; 40l |] in\n  let out = Nx_backend.where cond if_true if_false in\n  let d = Nx_ox.to_array out in\n  check_int32 \"where_int32[0]\" 1l d.(0);\n  check_int32 \"where_int32[1]\" 20l d.(1);\n  check_int32 \"where_int32[2]\" 3l d.(2);\n  check_int32 \"where_int32[3]\" 40l d.(3)\n\nlet test_where_int32_zero_negative () =\n  let ctx = Nx_backend.create_context () in\n  let cond = Nx_ox.create ctx Dtype.Bool [| 4 |] [| true; false; false; true |] in\n  let if_true = Nx_ox.create ctx Dtype.Int32 [| 4 |] [| 0l; (-1l); (-2l); 3l |] in\n  let if_false = Nx_ox.create ctx Dtype.Int32 [| 4 |] [| 5l; 6l; 7l; 8l |] in\n  let out = Nx_backend.where cond if_true if_false in\n  let d = Nx_ox.to_array out in\n  check_int32 \"where_int32_zero_neg[0]\" 0l d.(0);\n  check_int32 \"where_int32_zero_neg[1]\" 6l d.(1);\n  check_int32 \"where_int32_zero_neg[2]\" 7l d.(2);\n  check_int32 \"where_int32_zero_neg[3]\" 3l d.(3)\n\nlet test_where_int64_zero_negative () =\n  let ctx = Nx_backend.create_context () in\n  let cond = Nx_ox.create ctx Dtype.Bool [| 4 |] [| true; false; false; true |] in\n  let if_true = Nx_ox.create ctx Dtype.Int64 [| 4 |] [| 0L; (-1L); (-2L); 
3L |] in\n  let if_false = Nx_ox.create ctx Dtype.Int64 [| 4 |] [| 5L; 6L; 7L; 8L |] in\n  let out = Nx_backend.where cond if_true if_false in\n  let d = Nx_ox.to_array out in\n  check_int64 \"where_int64_zero_neg[0]\" 0L d.(0);\n  check_int64 \"where_int64_zero_neg[1]\" 6L d.(1);\n  check_int64 \"where_int64_zero_neg[2]\" 7L d.(2);\n  check_int64 \"where_int64_zero_neg[3]\" 3L d.(3)\n\nlet test_where_int8_basic () =\n  let ctx = Nx_backend.create_context () in\n  let cond = Nx_ox.create ctx Dtype.Bool [| 4 |] [| true; false; true; false |] in\n  let if_true = Nx_ox.create ctx Dtype.Int8 [| 4 |] [| 1; 2; 3; 4 |] in\n  let if_false = Nx_ox.create ctx Dtype.Int8 [| 4 |] [| 10; 20; 30; 40 |] in\n  let out = Nx_backend.where cond if_true if_false in\n  let d = Nx_ox.to_array out in\n  check_int \"where_int8[0]\" 1 d.(0);\n  check_int \"where_int8[1]\" 20 d.(1);\n  check_int \"where_int8[2]\" 3 d.(2);\n  check_int \"where_int8[3]\" 40 d.(3)\n\nlet test_where_int16_zero_negative () =\n  let ctx = Nx_backend.create_context () in\n  let cond = Nx_ox.create ctx Dtype.Bool [| 4 |] [| true; false; false; true |] in\n  let if_true = Nx_ox.create ctx Dtype.Int16 [| 4 |] [| 0; (-1); (-2); 3 |] in\n  let if_false = Nx_ox.create ctx Dtype.Int16 [| 4 |] [| 5; 6; 7; 8 |] in\n  let out = Nx_backend.where cond if_true if_false in\n  let d = Nx_ox.to_array out in\n  check_int \"where_int16_zero_neg[0]\" 0 d.(0);\n  check_int \"where_int16_zero_neg[1]\" 6 d.(1);\n  check_int \"where_int16_zero_neg[2]\" 7 d.(2);\n  check_int \"where_int16_zero_neg[3]\" 3 d.(3)\n\nlet test_matmul_2d () =\n  let ctx = Nx_backend.create_context () in\n  let a = Nx_ox.create ctx Dtype.Float64 [| 2; 2 |] [| 1.; 1.; 1.; 1. |] in\n  let b = Nx_ox.create ctx Dtype.Float64 [| 2; 2 |] [| 1.; 1.; 1.; 1. 
|] in\n  let out = Nx_backend.matmul a b in\n  let d = Nx_ox.to_array out in\n  check_float \"mm[0,0]\" ~eps:1e-9 2.0 d.(0);\n  check_float \"mm[0,1]\" ~eps:1e-9 2.0 d.(1);\n  check_float \"mm[1,0]\" ~eps:1e-9 2.0 d.(2);\n  check_float \"mm[1,1]\" ~eps:1e-9 2.0 d.(3)\n\nlet test_matmul_identity () =\n  let ctx = Nx_backend.create_context () in\n  let a =\n    Nx_ox.create ctx Dtype.Float64 [| 2; 3 |]\n      [| 1.; 2.; 3.; 4.; 5.; 6. |]\n  in\n  let id =\n    Nx_ox.create ctx Dtype.Float64 [| 3; 3 |]\n      [| 1.; 0.; 0.; 0.; 1.; 0.; 0.; 0.; 1. |]\n  in\n  let out = Nx_backend.matmul a id in\n  let d = Nx_ox.to_array out in\n  check_float \"id@0\" ~eps:1e-9 1. d.(0);\n  check_float \"id@1\" ~eps:1e-9 2. d.(1);\n  check_float \"id@2\" ~eps:1e-9 3. d.(2);\n  check_float \"id@3\" ~eps:1e-9 4. d.(3);\n  check_float \"id@4\" ~eps:1e-9 5. d.(4);\n  check_float \"id@5\" ~eps:1e-9 6. d.(5)\n\nlet test_matmul_rectangular () =\n  let ctx = Nx_backend.create_context () in\n  let a =\n    Nx_ox.create ctx Dtype.Float64 [| 2; 3 |]\n      [| 1.; 2.; 3.; 4.; 5.; 6. |]\n  in\n  let b =\n    Nx_ox.create ctx Dtype.Float64 [| 3; 4 |]\n      [| 7.; 8.; 9.; 10.; 11.; 12.; 13.; 14.; 15.; 16.; 17.; 18. |]\n  in\n  let out = Nx_backend.matmul a b in\n  let d = Nx_ox.to_array out in\n  (* row 0 *)\n  check_float \"rect[0,0]\" ~eps:1e-9 74. d.(0);\n  check_float \"rect[0,1]\" ~eps:1e-9 80. d.(1);\n  check_float \"rect[0,2]\" ~eps:1e-9 86. d.(2);\n  check_float \"rect[0,3]\" ~eps:1e-9 92. d.(3);\n  (* row 1 *)\n  check_float \"rect[1,0]\" ~eps:1e-9 173. d.(4);\n  check_float \"rect[1,1]\" ~eps:1e-9 188. d.(5);\n  check_float \"rect[1,2]\" ~eps:1e-9 203. d.(6);\n  check_float \"rect[1,3]\" ~eps:1e-9 218. d.(7)\n\nlet test_matmul_batched () =\n  let ctx = Nx_backend.create_context () in\n  let a =\n    Nx_ox.create ctx Dtype.Float64 [| 2; 2; 2 |]\n      [| 1.; 0.; 0.; 1.; 2.; 0.; 0.; 2. 
|]\n  in\n  let b =\n    Nx_ox.create ctx Dtype.Float64 [| 2; 2; 2 |]\n      [| 3.; 4.; 5.; 6.; 1.; 1.; 1.; 1. |]\n  in\n  let out = Nx_backend.matmul a b in\n  let d = Nx_ox.to_array out in\n  (* batch 0 *)\n  check_float \"bat0[0,0]\" ~eps:1e-9 3. d.(0);\n  check_float \"bat0[0,1]\" ~eps:1e-9 4. d.(1);\n  check_float \"bat0[1,0]\" ~eps:1e-9 5. d.(2);\n  check_float \"bat0[1,1]\" ~eps:1e-9 6. d.(3);\n  (* batch 1 *)\n  check_float \"bat1[0,0]\" ~eps:1e-9 2. d.(4);\n  check_float \"bat1[0,1]\" ~eps:1e-9 2. d.(5);\n  check_float \"bat1[1,0]\" ~eps:1e-9 2. d.(6);\n  check_float \"bat1[1,1]\" ~eps:1e-9 2. d.(7)\n\nlet test_matmul_dot_product () =\n  let ctx = Nx_backend.create_context () in\n  let a = Nx_ox.create ctx Dtype.Float64 [| 1; 3 |] [| 1.; 2.; 3. |] in\n  let b = Nx_ox.create ctx Dtype.Float64 [| 3; 1 |] [| 4.; 5.; 6. |] in\n  let out = Nx_backend.matmul a b in\n  let d = Nx_ox.to_array out in\n  check_float \"dot\" ~eps:1e-9 32. d.(0)\n\nlet test_matmul_rectangular_f32 () =\n  let ctx = Nx_backend.create_context () in\n  let a =\n    Nx_ox.create ctx Dtype.Float32 [| 2; 3 |]\n      [| 1.; 2.; 3.; 4.; 5.; 6. |]\n  in\n  let b =\n    Nx_ox.create ctx Dtype.Float32 [| 3; 4 |]\n      [| 7.; 8.; 9.; 10.; 11.; 12.; 13.; 14.; 15.; 16.; 17.; 18. |]\n  in\n  let out = Nx_backend.matmul a b in\n  let d = Nx_ox.to_array out in\n  (* row 0 *)\n  check_float \"rect_f32[0,0]\" ~eps:1e-6 74. d.(0);\n  check_float \"rect_f32[0,1]\" ~eps:1e-6 80. d.(1);\n  check_float \"rect_f32[0,2]\" ~eps:1e-6 86. d.(2);\n  check_float \"rect_f32[0,3]\" ~eps:1e-6 92. d.(3);\n  (* row 1 *)\n  check_float \"rect_f32[1,0]\" ~eps:1e-6 173. d.(4);\n  check_float \"rect_f32[1,1]\" ~eps:1e-6 188. d.(5);\n  check_float \"rect_f32[1,2]\" ~eps:1e-6 203. d.(6);\n  check_float \"rect_f32[1,3]\" ~eps:1e-6 218. d.(7)\n\nlet test_matmul_batched_f32 () =\n  let ctx = Nx_backend.create_context () in\n  let a =\n    Nx_ox.create ctx Dtype.Float32 [| 2; 2; 2 |]\n      [| 1.; 0.; 0.; 1.; 2.; 0.; 0.; 2. 
|]\n  in\n  let b =\n    Nx_ox.create ctx Dtype.Float32 [| 2; 2; 2 |]\n      [| 3.; 4.; 5.; 6.; 1.; 1.; 1.; 1. |]\n  in\n  let out = Nx_backend.matmul a b in\n  let d = Nx_ox.to_array out in\n  (* batch 0 *)\n  check_float \"bat0_f32[0,0]\" ~eps:1e-6 3. d.(0);\n  check_float \"bat0_f32[0,1]\" ~eps:1e-6 4. d.(1);\n  check_float \"bat0_f32[1,0]\" ~eps:1e-6 5. d.(2);\n  check_float \"bat0_f32[1,1]\" ~eps:1e-6 6. d.(3);\n  (* batch 1 *)\n  check_float \"bat1_f32[0,0]\" ~eps:1e-6 2. d.(4);\n  check_float \"bat1_f32[0,1]\" ~eps:1e-6 2. d.(5);\n  check_float \"bat1_f32[1,0]\" ~eps:1e-6 2. d.(6);\n  check_float \"bat1_f32[1,1]\" ~eps:1e-6 2. d.(7)\n\nlet test_pad_int32_1d () =\n  let ctx = Nx_backend.create_context () in\n  let x = Nx_ox.create ctx Dtype.Int32 [| 3 |] [| 10l; 20l; 30l |] in\n  let y = Nx_backend.pad x [| (2, 1) |] (-7l) in\n  check \"pad_int32_1d: dtype\" (Nx_backend.dtype y = Dtype.Int32);\n  check \"pad_int32_1d: size\" (numel (Nx_backend.view y) = 6);\n  let d = Nx_ox.to_array y in\n  check_int32 \"pad_int32_1d[0]\" (-7l) d.(0);\n  check_int32 \"pad_int32_1d[1]\" (-7l) d.(1);\n  check_int32 \"pad_int32_1d[2]\" 10l d.(2);\n  check_int32 \"pad_int32_1d[3]\" 20l d.(3);\n  check_int32 \"pad_int32_1d[4]\" 30l d.(4);\n  check_int32 \"pad_int32_1d[5]\" (-7l) d.(5)\n\nlet test_pad_float64_2d () =\n  let ctx = Nx_backend.create_context () in\n  let x = Nx_ox.create ctx Dtype.Float64 [| 2; 2 |] [| 1.0; 2.0; 3.0; 4.0 |] in\n  let y = Nx_backend.pad x [| (1, 2); (2, 1) |] (-1.0) in\n  let shape_y =\n    View.shape (Nx_backend.view y)\n  in\n  check \"pad_float64_2d: shape0\" (shape_y.(0) = 5);\n  check \"pad_float64_2d: shape1\" (shape_y.(1) = 5);\n  let d = Nx_ox.to_array y in\n  check_float \"pad_float64_2d[0,0]\" ~eps:1e-9 (-1.0) d.(0);\n  check_float \"pad_float64_2d[1,2]\" ~eps:1e-9 1.0 d.(7);\n  check_float \"pad_float64_2d[1,3]\" ~eps:1e-9 2.0 d.(8);\n  check_float \"pad_float64_2d[2,2]\" ~eps:1e-9 3.0 d.(12);\n  check_float \"pad_float64_2d[2,3]\" ~eps:1e-9 4.0 d.(13);\n  check_float 
\"pad_float64_2d[4,4]\" ~eps:1e-9 (-1.0) d.(24)\n\nlet test_pad_float64_permuted_view () =\n  let ctx = Nx_backend.create_context () in\n  let base =\n    Nx_ox.create ctx Dtype.Float64 [| 2; 3 |]\n      [| 1.0; 2.0; 3.0; 4.0; 5.0; 6.0 |]\n  in\n  let x = Nx_backend.permute base [| 1; 0 |] in\n  let y = Nx_backend.pad x [| (1, 0); (0, 1) |] 0.0 in\n  let shape_y =\n    View.shape (Nx_backend.view y)\n  in\n  check \"pad_float64_perm: shape0\" (shape_y.(0) = 4);\n  check \"pad_float64_perm: shape1\" (shape_y.(1) = 3);\n  let d = Nx_ox.to_array y in\n  check_float \"pad_float64_perm[0,0]\" ~eps:1e-9 0.0 d.(0);\n  check_float \"pad_float64_perm[1,0]\" ~eps:1e-9 1.0 d.(3);\n  check_float \"pad_float64_perm[1,1]\" ~eps:1e-9 4.0 d.(4);\n  check_float \"pad_float64_perm[2,0]\" ~eps:1e-9 2.0 d.(6);\n  check_float \"pad_float64_perm[2,1]\" ~eps:1e-9 5.0 d.(7);\n  check_float \"pad_float64_perm[3,0]\" ~eps:1e-9 3.0 d.(9);\n  check_float \"pad_float64_perm[3,1]\" ~eps:1e-9 6.0 d.(10);\n  check_float \"pad_float64_perm[3,2]\" ~eps:1e-9 0.0 d.(11)\n\nlet test_shrink_int32_view () =\n  let ctx = Nx_backend.create_context () in\n  let x = Nx_ox.create ctx Dtype.Int32 [| 2; 3 |] [| 1l; 2l; 3l; 4l; 5l; 6l |] in\n  let y = Nx_backend.shrink x [| (0, 2); (1, 3) |] in\n  let zeros = Nx_ox.create ctx Dtype.Int32 [| 2; 2 |] [| 0l; 0l; 0l; 0l |] in\n  let out = Nx_backend.add y zeros in\n  let d = Nx_ox.to_array out in\n  check_int32 \"shrink_int32_view[0]\" 2l d.(0);\n  check_int32 \"shrink_int32_view[1]\" 3l d.(1);\n  check_int32 \"shrink_int32_view[2]\" 5l d.(2);\n  check_int32 \"shrink_int32_view[3]\" 6l d.(3)\n\nlet test_flip_int32_view () =\n  let ctx = Nx_backend.create_context () in\n  let x = Nx_ox.create ctx Dtype.Int32 [| 2; 3 |] [| 1l; 2l; 3l; 4l; 5l; 6l |] in\n  let y = Nx_backend.flip x [| true; false |] in\n  let zeros =\n    Nx_ox.create ctx Dtype.Int32 [| 2; 3 |] [| 0l; 0l; 0l; 0l; 0l; 0l |]\n  in\n  let out = Nx_backend.add y zeros in\n  let d = Nx_ox.to_array out in\n  
check_int32 \"flip_int32_view[0]\" 4l d.(0);\n  check_int32 \"flip_int32_view[1]\" 5l d.(1);\n  check_int32 \"flip_int32_view[2]\" 6l d.(2);\n  check_int32 \"flip_int32_view[3]\" 1l d.(3);\n  check_int32 \"flip_int32_view[4]\" 2l d.(4);\n  check_int32 \"flip_int32_view[5]\" 3l d.(5)\n\nlet test_cat_int32_axis1 () =\n  let ctx = Nx_backend.create_context () in\n  let a = Nx_ox.create ctx Dtype.Int32 [| 2; 2 |] [| 1l; 2l; 3l; 4l |] in\n  let b = Nx_ox.create ctx Dtype.Int32 [| 2; 2 |] [| 5l; 6l; 7l; 8l |] in\n  let out = Nx_backend.cat [ a; b ] ~axis:1 in\n  let d = Nx_ox.to_array out in\n  check_int32 \"cat_int32_axis1[0]\" 1l d.(0);\n  check_int32 \"cat_int32_axis1[1]\" 2l d.(1);\n  check_int32 \"cat_int32_axis1[2]\" 5l d.(2);\n  check_int32 \"cat_int32_axis1[3]\" 6l d.(3);\n  check_int32 \"cat_int32_axis1[4]\" 3l d.(4);\n  check_int32 \"cat_int32_axis1[5]\" 4l d.(5);\n  check_int32 \"cat_int32_axis1[6]\" 7l d.(6);\n  check_int32 \"cat_int32_axis1[7]\" 8l d.(7)\n\nlet test_gather_int32_axis1 () =\n  let ctx = Nx_backend.create_context () in\n  let data =\n    Nx_ox.create ctx Dtype.Int32 [| 2; 4 |]\n      [| 10l; 11l; 12l; 13l; 20l; 21l; 22l; 23l |]\n  in\n  let indices =\n    Nx_ox.create ctx Dtype.Int32 [| 2; 3 |] [| 3l; 1l; 0l; 0l; 2l; 2l |]\n  in\n  let out = Nx_backend.gather data indices ~axis:1 in\n  let d = Nx_ox.to_array out in\n  check_int32 \"gather_int32_axis1[0]\" 13l d.(0);\n  check_int32 \"gather_int32_axis1[1]\" 11l d.(1);\n  check_int32 \"gather_int32_axis1[2]\" 10l d.(2);\n  check_int32 \"gather_int32_axis1[3]\" 20l d.(3);\n  check_int32 \"gather_int32_axis1[4]\" 22l d.(4);\n  check_int32 \"gather_int32_axis1[5]\" 22l d.(5)\n\nlet test_gather_float32_axis0_contiguous () =\n  let ctx = Nx_backend.create_context () in\n  let data =\n    Nx_ox.create ctx Dtype.Float32 [| 8 |]\n      [| 0.5; 1.5; 2.5; 3.5; 4.5; 5.5; 6.5; 7.5 |]\n  in\n  let indices =\n    Nx_ox.create ctx Dtype.Int32 [| 8 |] [| 7l; 0l; 6l; 1l; 5l; 2l; 4l; 3l |]\n  in\n  let out = Nx_backend.gather 
data indices ~axis:0 in\n  let d = Nx_ox.to_array out in\n  check_float \"gather_float32_axis0_contiguous[0]\" ~eps:1e-6 7.5 d.(0);\n  check_float \"gather_float32_axis0_contiguous[1]\" ~eps:1e-6 0.5 d.(1);\n  check_float \"gather_float32_axis0_contiguous[2]\" ~eps:1e-6 6.5 d.(2);\n  check_float \"gather_float32_axis0_contiguous[3]\" ~eps:1e-6 1.5 d.(3);\n  check_float \"gather_float32_axis0_contiguous[4]\" ~eps:1e-6 5.5 d.(4);\n  check_float \"gather_float32_axis0_contiguous[5]\" ~eps:1e-6 2.5 d.(5);\n  check_float \"gather_float32_axis0_contiguous[6]\" ~eps:1e-6 4.5 d.(6);\n  check_float \"gather_float32_axis0_contiguous[7]\" ~eps:1e-6 3.5 d.(7)\n\nlet test_scatter_int32_set_axis1 () =\n  let ctx = Nx_backend.create_context () in\n  let template =\n    Nx_ox.create ctx Dtype.Int32\n    [| 2; 4 |]\n      [| 0l; 0l; 0l; 0l; 0l; 0l; 0l; 0l |]\n  in\n  let indices =\n    Nx_ox.create ctx Dtype.Int32 [| 2; 3 |] [| 3l; 1l; 0l; 0l; 2l; 2l |]\n  in\n  let updates =\n    Nx_ox.create ctx Dtype.Int32 [| 2; 3 |] [| 9l; 8l; 7l; 6l; 5l; 4l |]\n  in\n  let y = Nx_backend.scatter template ~indices ~updates ~axis:1 in\n  let d = Nx_ox.to_array y in\n  check_int32 \"scatter_int32_set_axis1[0]\" 7l d.(0);\n  check_int32 \"scatter_int32_set_axis1[1]\" 8l d.(1);\n  check_int32 \"scatter_int32_set_axis1[2]\" 0l d.(2);\n  check_int32 \"scatter_int32_set_axis1[3]\" 9l d.(3);\n  check_int32 \"scatter_int32_set_axis1[4]\" 6l d.(4);\n  check_int32 \"scatter_int32_set_axis1[5]\" 0l d.(5);\n  check_int32 \"scatter_int32_set_axis1[6]\" 4l d.(6);\n  check_int32 \"scatter_int32_set_axis1[7]\" 0l d.(7)\n\nlet test_scatter_int32_add_axis1 () =\n  let ctx = Nx_backend.create_context () in\n  let template =\n    Nx_ox.create ctx Dtype.Int32\n      [| 2; 4 |]\n      [| 100l; 100l; 100l; 100l;\n          100l; 100l; 100l; 100l |]\n  in\n  let indices =\n    Nx_ox.create ctx Dtype.Int32\n      [| 2; 3 |]\n      [| 3l; 1l; 0l;\n          0l; 2l; 2l |]\n  in\n  let updates =\n    Nx_ox.create ctx 
Dtype.Int32\n      [| 2; 3 |]\n      [| 9l; 8l; 7l;\n          6l; 5l; 4l |]\n  in\n  let y = Nx_backend.scatter ~mode:`Add template ~indices ~updates ~axis:1 in\n  let d = Nx_ox.to_array y in\n\n  check_int32 \"scatter_int32_add_axis1[0]\" 107l d.(0);\n  check_int32 \"scatter_int32_add_axis1[1]\" 108l d.(1);\n  check_int32 \"scatter_int32_add_axis1[2]\" 100l d.(2);\n  check_int32 \"scatter_int32_add_axis1[3]\" 109l d.(3);\n  check_int32 \"scatter_int32_add_axis1[4]\" 106l d.(4);\n  check_int32 \"scatter_int32_add_axis1[5]\" 100l d.(5);\n  check_int32 \"scatter_int32_add_axis1[6]\" 109l d.(6);\n  check_int32 \"scatter_int32_add_axis1[7]\" 100l d.(7)\n\n(* Gather: float64 1D contiguous — exercises the Float64x2 SIMD path *)\nlet test_gather_float64_axis0_contiguous () =\n  let ctx = Nx_backend.create_context () in\n  let data =\n    Nx_ox.create ctx Dtype.Float64 [| 6 |]\n      [| 10.0; 20.0; 30.0; 40.0; 50.0; 60.0 |]\n  in\n  let indices =\n    Nx_ox.create ctx Dtype.Int32 [| 6 |] [| 5l; 3l; 1l; 0l; 4l; 2l |]\n  in\n  let out = Nx_backend.gather data indices ~axis:0 in\n  let d = Nx_ox.to_array out in\n  check_float \"gather_f64_contiguous[0]\" ~eps:1e-12 60.0 d.(0);\n  check_float \"gather_f64_contiguous[1]\" ~eps:1e-12 40.0 d.(1);\n  check_float \"gather_f64_contiguous[2]\" ~eps:1e-12 20.0 d.(2);\n  check_float \"gather_f64_contiguous[3]\" ~eps:1e-12 10.0 d.(3);\n  check_float \"gather_f64_contiguous[4]\" ~eps:1e-12 50.0 d.(4);\n  check_float \"gather_f64_contiguous[5]\" ~eps:1e-12 30.0 d.(5)\n\n(* Gather: axis=0 with 2D tensor — general multi-dim path *)\nlet test_gather_float64_axis0_2d () =\n  let ctx = Nx_backend.create_context () in\n  (* 3x2 data; per-element row indices [[2; 0]; [1; 2]] along axis 0 *)\n  let data =\n    Nx_ox.create ctx Dtype.Float64 [| 3; 2 |]\n      [| 1.0; 2.0; 3.0; 4.0; 5.0; 6.0 |]\n  in\n  let indices =\n    Nx_ox.create ctx Dtype.Int32 [| 2; 2 |] [| 2l; 0l; 1l; 2l |]\n  in\n  let out = Nx_backend.gather data indices ~axis:0 in\n  let d = Nx_ox.to_array out in\n  
check_float \"gather_f64_axis0_2d[0]\" ~eps:1e-12 5.0 d.(0);\n  check_float \"gather_f64_axis0_2d[1]\" ~eps:1e-12 2.0 d.(1);\n  check_float \"gather_f64_axis0_2d[2]\" ~eps:1e-12 3.0 d.(2);\n  check_float \"gather_f64_axis0_2d[3]\" ~eps:1e-12 6.0 d.(3)\n\n(* Gather: int32 1D contiguous — exercises the Int32x4 SIMD path *)\nlet test_gather_int32_axis0_contiguous () =\n  let ctx = Nx_backend.create_context () in\n  let data =\n    Nx_ox.create ctx Dtype.Int32 [| 8 |]\n      [| 10l; 20l; 30l; 40l; 50l; 60l; 70l; 80l |]\n  in\n  let indices =\n    Nx_ox.create ctx Dtype.Int32 [| 8 |]\n      [| 7l; 5l; 3l; 1l; 6l; 4l; 2l; 0l |]\n  in\n  let out = Nx_backend.gather data indices ~axis:0 in\n  let d = Nx_ox.to_array out in\n  check_int32 \"gather_i32_contiguous[0]\" 80l d.(0);\n  check_int32 \"gather_i32_contiguous[1]\" 60l d.(1);\n  check_int32 \"gather_i32_contiguous[2]\" 40l d.(2);\n  check_int32 \"gather_i32_contiguous[3]\" 20l d.(3);\n  check_int32 \"gather_i32_contiguous[4]\" 70l d.(4);\n  check_int32 \"gather_i32_contiguous[5]\" 50l d.(5);\n  check_int32 \"gather_i32_contiguous[6]\" 30l d.(6);\n  check_int32 \"gather_i32_contiguous[7]\" 10l d.(7)\n\n(* Gather: int64 1D contiguous — exercises the Int64x2 SIMD path *)\nlet test_gather_int64_axis0_contiguous () =\n  let ctx = Nx_backend.create_context () in\n  let data =\n    Nx_ox.create ctx Dtype.Int64 [| 6 |]\n      [| 100L; 200L; 300L; 400L; 500L; 600L |]\n  in\n  let indices =\n    Nx_ox.create ctx Dtype.Int32 [| 6 |] [| 4l; 2l; 0l; 5l; 3l; 1l |]\n  in\n  let out = Nx_backend.gather data indices ~axis:0 in\n  let d = Nx_ox.to_array out in\n  check_int64 \"gather_i64_contiguous[0]\" 500L d.(0);\n  check_int64 \"gather_i64_contiguous[1]\" 300L d.(1);\n  check_int64 \"gather_i64_contiguous[2]\" 100L d.(2);\n  check_int64 \"gather_i64_contiguous[3]\" 600L d.(3);\n  check_int64 \"gather_i64_contiguous[4]\" 400L d.(4);\n  check_int64 \"gather_i64_contiguous[5]\" 200L d.(5)\n\n(* Gather: single element *)\nlet 
test_gather_single_element () =\n  let ctx = Nx_backend.create_context () in\n  let data = Nx_ox.create ctx Dtype.Float64 [| 3 |] [| 1.0; 2.0; 3.0 |] in\n  let indices = Nx_ox.create ctx Dtype.Int32 [| 1 |] [| 2l |] in\n  let out = Nx_backend.gather data indices ~axis:0 in\n  let d = Nx_ox.to_array out in\n  check_float \"gather_single[0]\" ~eps:1e-12 3.0 d.(0)\n\n(* Gather: negative axis *)\nlet test_gather_negative_axis () =\n  let ctx = Nx_backend.create_context () in\n  let data =\n    Nx_ox.create ctx Dtype.Int32 [| 2; 4 |]\n      [| 10l; 11l; 12l; 13l; 20l; 21l; 22l; 23l |]\n  in\n  let indices =\n    Nx_ox.create ctx Dtype.Int32 [| 2; 2 |] [| 3l; 0l; 1l; 2l |]\n  in\n  let out = Nx_backend.gather data indices ~axis:(-1) in\n  let d = Nx_ox.to_array out in\n  check_int32 \"gather_neg_axis[0]\" 13l d.(0);\n  check_int32 \"gather_neg_axis[1]\" 10l d.(1);\n  check_int32 \"gather_neg_axis[2]\" 21l d.(2);\n  check_int32 \"gather_neg_axis[3]\" 22l d.(3)\n\n(* Scatter: float64 set *)\nlet test_scatter_float64_set () =\n  let ctx = Nx_backend.create_context () in\n  let template =\n    Nx_ox.create ctx Dtype.Float64 [| 5 |] [| 0.0; 0.0; 0.0; 0.0; 0.0 |]\n  in\n  let indices = Nx_ox.create ctx Dtype.Int32 [| 3 |] [| 4l; 1l; 0l |] in\n  let updates =\n    Nx_ox.create ctx Dtype.Float64 [| 3 |] [| 9.0; 8.0; 7.0 |]\n  in\n  let y = Nx_backend.scatter template ~indices ~updates ~axis:0 in\n  let d = Nx_ox.to_array y in\n  check_float \"scatter_f64_set[0]\" ~eps:1e-12 7.0 d.(0);\n  check_float \"scatter_f64_set[1]\" ~eps:1e-12 8.0 d.(1);\n  check_float \"scatter_f64_set[2]\" ~eps:1e-12 0.0 d.(2);\n  check_float \"scatter_f64_set[3]\" ~eps:1e-12 0.0 d.(3);\n  check_float \"scatter_f64_set[4]\" ~eps:1e-12 9.0 d.(4)\n\n(* Scatter: duplicate indices with Add mode — accumulation *)\nlet test_scatter_float64_add_duplicates () =\n  let ctx = Nx_backend.create_context () in\n  let template =\n    Nx_ox.create ctx Dtype.Float64 [| 4 |] [| 0.0; 0.0; 0.0; 0.0 |]\n  in\n  let indices 
=\n    Nx_ox.create ctx Dtype.Int32 [| 5 |] [| 0l; 1l; 0l; 2l; 0l |]\n  in\n  let updates =\n    Nx_ox.create ctx Dtype.Float64 [| 5 |] [| 1.0; 2.0; 3.0; 4.0; 5.0 |]\n  in\n  let y = Nx_backend.scatter ~mode:`Add template ~indices ~updates ~axis:0 in\n  let d = Nx_ox.to_array y in\n  check_float \"scatter_f64_add_dup[0]\" ~eps:1e-12 9.0 d.(0);\n  check_float \"scatter_f64_add_dup[1]\" ~eps:1e-12 2.0 d.(1);\n  check_float \"scatter_f64_add_dup[2]\" ~eps:1e-12 4.0 d.(2);\n  check_float \"scatter_f64_add_dup[3]\" ~eps:1e-12 0.0 d.(3)\n\n(* Scatter: bool dtype *)\nlet test_scatter_bool_set () =\n  let ctx = Nx_backend.create_context () in\n  let template =\n    Nx_ox.create ctx Dtype.Bool [| 4 |] [| false; false; false; false |]\n  in\n  let indices = Nx_ox.create ctx Dtype.Int32 [| 2 |] [| 1l; 3l |] in\n  let updates = Nx_ox.create ctx Dtype.Bool [| 2 |] [| true; true |] in\n  let y = Nx_backend.scatter template ~indices ~updates ~axis:0 in\n  let d = Nx_ox.to_array y in\n  check_bool \"scatter_bool_set[0]\" false d.(0);\n  check_bool \"scatter_bool_set[1]\" true d.(1);\n  check_bool \"scatter_bool_set[2]\" false d.(2);\n  check_bool \"scatter_bool_set[3]\" true d.(3)\n\n(* Scatter: preserves template values for untouched indices *)\nlet test_scatter_preserves_template () =\n  let ctx = Nx_backend.create_context () in\n  let template =\n    Nx_ox.create ctx Dtype.Float64 [| 4 |] [| 10.0; 20.0; 30.0; 40.0 |]\n  in\n  let indices = Nx_ox.create ctx Dtype.Int32 [| 1 |] [| 2l |] in\n  let updates = Nx_ox.create ctx Dtype.Float64 [| 1 |] [| 99.0 |] in\n  let y = Nx_backend.scatter template ~indices ~updates ~axis:0 in\n  let d = Nx_ox.to_array y in\n  check_float \"scatter_preserve[0]\" ~eps:1e-12 10.0 d.(0);\n  check_float \"scatter_preserve[1]\" ~eps:1e-12 20.0 d.(1);\n  check_float \"scatter_preserve[2]\" ~eps:1e-12 99.0 d.(2);\n  check_float \"scatter_preserve[3]\" ~eps:1e-12 40.0 d.(3)\n\nlet test_fold_int32_1d_overlap () =\n  let ctx = Nx_backend.create_context () 
in\n  (* Shape [N=1, C*K=2, L=2] where C=1, K=2 *)\n  let x_flat = Nx_ox.create ctx Dtype.Int32 [|4|] [| 1l; 3l; 2l; 4l |] in\n  let x = Nx_backend.reshape x_flat [| 1; 2; 2 |] in\n  let y =\n    Nx_backend.fold x\n      ~output_size:[| 3 |]\n      ~kernel_size:[| 2 |]\n      ~stride:[| 1 |]\n      ~dilation:[| 1 |]\n      ~padding:[| (0, 0) |]\n  in\n  let shape_y =\n    View.shape (Nx_backend.view y)\n  in\n  check \"fold_int32_1d_overlap: shape0\" (shape_y.(0) = 1);\n  check \"fold_int32_1d_overlap: shape1\" (shape_y.(1) = 1);\n  check \"fold_int32_1d_overlap: shape2\" (shape_y.(2) = 3);\n  let d = Nx_ox.to_array y in\n  check_int32 \"fold_int32_1d_overlap[0]\" 1l d.(0);\n  check_int32 \"fold_int32_1d_overlap[1]\" 5l d.(1);\n  check_int32 \"fold_int32_1d_overlap[2]\" 4l d.(2)\n\nlet test_fold_int32_1d_padding_stride () =\n  let ctx = Nx_backend.create_context () in\n  (* Shape [N=1, C*K=3, L=2] where C=1, K=3 *)\n  let x_flat = Nx_ox.create ctx Dtype.Int32 [|6|] [| 10l; 20l; 30l; 40l; 50l; 60l |] in\n  let x = Nx_backend.reshape x_flat [| 1; 3; 2 |] in\n  let y =\n    Nx_backend.fold x\n      ~output_size:[| 4 |]\n      ~kernel_size:[| 3 |]\n      ~stride:[| 2 |]\n      ~dilation:[| 1 |]\n      ~padding:[| (1, 1) |]\n  in\n  let d = Nx_ox.to_array y in\n  check_int32 \"fold_int32_1d_padding_stride[0]\" 30l d.(0);\n  check_int32 \"fold_int32_1d_padding_stride[1]\" 70l d.(1);\n  check_int32 \"fold_int32_1d_padding_stride[2]\" 40l d.(2);\n  check_int32 \"fold_int32_1d_padding_stride[3]\" 60l d.(3)\n\nlet test_unfold_int32_1d_basic () =\n  let ctx = Nx_backend.create_context () in\n  let x_flat = Nx_ox.create ctx Dtype.Int32 [|4|] [| 1l; 2l; 3l; 4l |] in\n  let x = Nx_backend.reshape x_flat [| 1; 1; 4 |] in\n  let y =\n    Nx_backend.unfold x\n      ~kernel_size:[| 2 |]\n      ~stride:[| 1 |]\n      ~dilation:[| 1 |]\n      ~padding:[| (0, 0) |]\n  in\n  let shape_y =\n    View.shape (Nx_backend.view y)\n  in\n  check \"unfold_int32_1d_basic: shape0\" (shape_y.(0) = 
1);\n  check \"unfold_int32_1d_basic: shape1\" (shape_y.(1) = 2);\n  check \"unfold_int32_1d_basic: shape2\" (shape_y.(2) = 3);\n  let d = Nx_ox.to_array y in\n  check_int32 \"unfold_int32_1d_basic[0]\" 1l d.(0);\n  check_int32 \"unfold_int32_1d_basic[1]\" 2l d.(1);\n  check_int32 \"unfold_int32_1d_basic[2]\" 3l d.(2);\n  check_int32 \"unfold_int32_1d_basic[3]\" 2l d.(3);\n  check_int32 \"unfold_int32_1d_basic[4]\" 3l d.(4);\n  check_int32 \"unfold_int32_1d_basic[5]\" 4l d.(5)\n\nlet test_unfold_int32_1d_padding_stride () =\n  let ctx = Nx_backend.create_context () in\n  let x_flat = Nx_ox.create ctx Dtype.Int32 [|4|] [| 1l; 2l; 3l; 4l |] in\n  let x = Nx_backend.reshape x_flat [| 1; 1; 4 |] in\n  let y =\n    Nx_backend.unfold x\n      ~kernel_size:[| 3 |]\n      ~stride:[| 2 |]\n      ~dilation:[| 1 |]\n      ~padding:[| (1, 1) |]\n  in\n  let shape_y =\n    View.shape (Nx_backend.view y)\n  in\n  check \"unfold_int32_1d_padding_stride: shape0\" (shape_y.(0) = 1);\n  check \"unfold_int32_1d_padding_stride: shape1\" (shape_y.(1) = 3);\n  check \"unfold_int32_1d_padding_stride: shape2\" (shape_y.(2) = 2);\n  let d = Nx_ox.to_array y in\n  check_int32 \"unfold_int32_1d_padding_stride[0]\" 0l d.(0);\n  check_int32 \"unfold_int32_1d_padding_stride[1]\" 2l d.(1);\n  check_int32 \"unfold_int32_1d_padding_stride[2]\" 1l d.(2);\n  check_int32 \"unfold_int32_1d_padding_stride[3]\" 3l d.(3);\n  check_int32 \"unfold_int32_1d_padding_stride[4]\" 2l d.(4);\n  check_int32 \"unfold_int32_1d_padding_stride[5]\" 4l d.(5)\n\nlet test_unfold_int64_1d_identity () =\n  let ctx = Nx_backend.create_context () in\n  let x_flat = Nx_ox.create ctx Dtype.Int64 [| 4 |] [| 11L; 22L; 33L; 44L |] in\n  let x = Nx_backend.reshape x_flat [| 1; 1; 4 |] in\n  let y =\n    Nx_backend.unfold x\n      ~kernel_size:[| 1 |]\n      ~stride:[| 1 |]\n      ~dilation:[| 1 |]\n      ~padding:[| (0, 0) |]\n  in\n  let d = Nx_ox.to_array y in\n  check_int64 \"unfold_int64_1d_identity[0]\" 11L d.(0);\n  
check_int64 \"unfold_int64_1d_identity[1]\" 22L d.(1);\n  check_int64 \"unfold_int64_1d_identity[2]\" 33L d.(2);\n  check_int64 \"unfold_int64_1d_identity[3]\" 44L d.(3)\n\nlet test_unfold_float32_1d_identity () =\n  let ctx = Nx_backend.create_context () in\n  let x_flat = Nx_ox.create ctx Dtype.Float32 [| 4 |] [| 1.5; 2.5; 3.5; 4.5 |] in\n  let x = Nx_backend.reshape x_flat [| 1; 1; 4 |] in\n  let y =\n    Nx_backend.unfold x\n      ~kernel_size:[| 1 |]\n      ~stride:[| 1 |]\n      ~dilation:[| 1 |]\n      ~padding:[| (0, 0) |]\n  in\n  let d = Nx_ox.to_array y in\n  check_float \"unfold_float32_1d_identity[0]\" ~eps:1e-6 1.5 d.(0);\n  check_float \"unfold_float32_1d_identity[1]\" ~eps:1e-6 2.5 d.(1);\n  check_float \"unfold_float32_1d_identity[2]\" ~eps:1e-6 3.5 d.(2);\n  check_float \"unfold_float32_1d_identity[3]\" ~eps:1e-6 4.5 d.(3)\n\nlet test_unfold_float64_1d_identity () =\n  let ctx = Nx_backend.create_context () in\n  let x_flat = Nx_ox.create ctx Dtype.Float64 [| 4 |] [| 1.25; 2.25; 3.25; 4.25 |] in\n  let x = Nx_backend.reshape x_flat [| 1; 1; 4 |] in\n  let y =\n    Nx_backend.unfold x\n      ~kernel_size:[| 1 |]\n      ~stride:[| 1 |]\n      ~dilation:[| 1 |]\n      ~padding:[| (0, 0) |]\n  in\n  let d = Nx_ox.to_array y in\n  check_float \"unfold_float64_1d_identity[0]\" ~eps:1e-9 1.25 d.(0);\n  check_float \"unfold_float64_1d_identity[1]\" ~eps:1e-9 2.25 d.(1);\n  check_float \"unfold_float64_1d_identity[2]\" ~eps:1e-9 3.25 d.(2);\n  check_float \"unfold_float64_1d_identity[3]\" ~eps:1e-9 4.25 d.(3)\n\nlet test_unfold_int8_1d_identity () =\n  let ctx = Nx_backend.create_context () in\n  let x_flat = Nx_ox.create ctx Dtype.Int8 [| 4 |] [| 1; 2; 3; 4 |] in\n  let x = Nx_backend.reshape x_flat [| 1; 1; 4 |] in\n  let y =\n    Nx_backend.unfold x\n      ~kernel_size:[| 1 |]\n      ~stride:[| 1 |]\n      ~dilation:[| 1 |]\n      ~padding:[| (0, 0) |]\n  in\n  let d = Nx_ox.to_array y in\n  check_int \"unfold_int8_1d_identity[0]\" 1 d.(0);\n  check_int 
\"unfold_int8_1d_identity[1]\" 2 d.(1);\n  check_int \"unfold_int8_1d_identity[2]\" 3 d.(2);\n  check_int \"unfold_int8_1d_identity[3]\" 4 d.(3)\n\nlet test_unfold_int16_1d_identity () =\n  let ctx = Nx_backend.create_context () in\n  let x_flat = Nx_ox.create ctx Dtype.Int16 [| 4 |] [| 10; 20; 30; 40 |] in\n  let x = Nx_backend.reshape x_flat [| 1; 1; 4 |] in\n  let y =\n    Nx_backend.unfold x\n      ~kernel_size:[| 1 |]\n      ~stride:[| 1 |]\n      ~dilation:[| 1 |]\n      ~padding:[| (0, 0) |]\n  in\n  let d = Nx_ox.to_array y in\n  check_int \"unfold_int16_1d_identity[0]\" 10 d.(0);\n  check_int \"unfold_int16_1d_identity[1]\" 20 d.(1);\n  check_int \"unfold_int16_1d_identity[2]\" 30 d.(2);\n  check_int \"unfold_int16_1d_identity[3]\" 40 d.(3)\n\nlet test_unfold_bool_1d_identity () =\n  let ctx = Nx_backend.create_context () in\n  let x_flat =\n    Nx_ox.create ctx Dtype.Bool [| 4 |] [| true; false; true; false |]\n  in\n  let x = Nx_backend.reshape x_flat [| 1; 1; 4 |] in\n  let y =\n    Nx_backend.unfold x\n      ~kernel_size:[| 1 |]\n      ~stride:[| 1 |]\n      ~dilation:[| 1 |]\n      ~padding:[| (0, 0) |]\n  in\n  let d = Nx_ox.to_array y in\n  check_bool \"unfold_bool_1d_identity[0]\" true d.(0);\n  check_bool \"unfold_bool_1d_identity[1]\" false d.(1);\n  check_bool \"unfold_bool_1d_identity[2]\" true d.(2);\n  check_bool \"unfold_bool_1d_identity[3]\" false d.(3)\n\nlet test_fold_int64_1d_identity () =\n  let ctx = Nx_backend.create_context () in\n  let x_flat = Nx_ox.create ctx Dtype.Int64 [| 4 |] [| 9L; 8L; 7L; 6L |] in\n  let x = Nx_backend.reshape x_flat [| 1; 1; 4 |] in\n  let y =\n    Nx_backend.fold x\n      ~output_size:[| 4 |]\n      ~kernel_size:[| 1 |]\n      ~stride:[| 1 |]\n      ~dilation:[| 1 |]\n      ~padding:[| (0, 0) |]\n  in\n  let d = Nx_ox.to_array y in\n  check_int64 \"fold_int64_1d_identity[0]\" 9L d.(0);\n  check_int64 \"fold_int64_1d_identity[1]\" 8L d.(1);\n  check_int64 \"fold_int64_1d_identity[2]\" 7L d.(2);\n  
check_int64 \"fold_int64_1d_identity[3]\" 6L d.(3)\n\nlet test_fold_float32_1d_identity () =\n  let ctx = Nx_backend.create_context () in\n  let x_flat = Nx_ox.create ctx Dtype.Float32 [| 4 |] [| 0.5; 1.5; 2.5; 3.5 |] in\n  let x = Nx_backend.reshape x_flat [| 1; 1; 4 |] in\n  let y =\n    Nx_backend.fold x\n      ~output_size:[| 4 |]\n      ~kernel_size:[| 1 |]\n      ~stride:[| 1 |]\n      ~dilation:[| 1 |]\n      ~padding:[| (0, 0) |]\n  in\n  let d = Nx_ox.to_array y in\n  check_float \"fold_float32_1d_identity[0]\" ~eps:1e-6 0.5 d.(0);\n  check_float \"fold_float32_1d_identity[1]\" ~eps:1e-6 1.5 d.(1);\n  check_float \"fold_float32_1d_identity[2]\" ~eps:1e-6 2.5 d.(2);\n  check_float \"fold_float32_1d_identity[3]\" ~eps:1e-6 3.5 d.(3)\n\nlet test_fold_float64_1d_identity () =\n  let ctx = Nx_backend.create_context () in\n  let x_flat =\n    Nx_ox.create ctx Dtype.Float64 [| 4 |] [| 10.25; 11.25; 12.25; 13.25 |]\n  in\n  let x = Nx_backend.reshape x_flat [| 1; 1; 4 |] in\n  let y =\n    Nx_backend.fold x\n      ~output_size:[| 4 |]\n      ~kernel_size:[| 1 |]\n      ~stride:[| 1 |]\n      ~dilation:[| 1 |]\n      ~padding:[| (0, 0) |]\n  in\n  let d = Nx_ox.to_array y in\n  check_float \"fold_float64_1d_identity[0]\" ~eps:1e-9 10.25 d.(0);\n  check_float \"fold_float64_1d_identity[1]\" ~eps:1e-9 11.25 d.(1);\n  check_float \"fold_float64_1d_identity[2]\" ~eps:1e-9 12.25 d.(2);\n  check_float \"fold_float64_1d_identity[3]\" ~eps:1e-9 13.25 d.(3)\n\nlet test_fold_int8_1d_identity () =\n  let ctx = Nx_backend.create_context () in\n  let x_flat = Nx_ox.create ctx Dtype.Int8 [| 4 |] [| 1; 3; 5; 7 |] in\n  let x = Nx_backend.reshape x_flat [| 1; 1; 4 |] in\n  let y =\n    Nx_backend.fold x\n      ~output_size:[| 4 |]\n      ~kernel_size:[| 1 |]\n      ~stride:[| 1 |]\n      ~dilation:[| 1 |]\n      ~padding:[| (0, 0) |]\n  in\n  let d = Nx_ox.to_array y in\n  check_int \"fold_int8_1d_identity[0]\" 1 d.(0);\n  check_int \"fold_int8_1d_identity[1]\" 3 d.(1);\n  
check_int \"fold_int8_1d_identity[2]\" 5 d.(2);\n  check_int \"fold_int8_1d_identity[3]\" 7 d.(3)\n\nlet test_fold_int16_1d_identity () =\n  let ctx = Nx_backend.create_context () in\n  let x_flat = Nx_ox.create ctx Dtype.Int16 [| 4 |] [| 2; 4; 6; 8 |] in\n  let x = Nx_backend.reshape x_flat [| 1; 1; 4 |] in\n  let y =\n    Nx_backend.fold x\n      ~output_size:[| 4 |]\n      ~kernel_size:[| 1 |]\n      ~stride:[| 1 |]\n      ~dilation:[| 1 |]\n      ~padding:[| (0, 0) |]\n  in\n  let d = Nx_ox.to_array y in\n  check_int \"fold_int16_1d_identity[0]\" 2 d.(0);\n  check_int \"fold_int16_1d_identity[1]\" 4 d.(1);\n  check_int \"fold_int16_1d_identity[2]\" 6 d.(2);\n  check_int \"fold_int16_1d_identity[3]\" 8 d.(3)\n\nlet test_associative_scan_sum_int32_axis1 () =\n  let ctx = Nx_backend.create_context () in\n  let x =\n    Nx_ox.create ctx Dtype.Int32 [| 2; 3 |] [| 1l; 2l; 3l; 4l; 5l; 6l |]\n  in\n  let out = Nx_backend.associative_scan ~axis:1 ~op:`Sum x in\n  let d = Nx_ox.to_array out in\n  check_int32 \"associative_scan_sum_int32_axis1[0]\" 1l d.(0);\n  check_int32 \"associative_scan_sum_int32_axis1[1]\" 3l d.(1);\n  check_int32 \"associative_scan_sum_int32_axis1[2]\" 6l d.(2);\n  check_int32 \"associative_scan_sum_int32_axis1[3]\" 4l d.(3);\n  check_int32 \"associative_scan_sum_int32_axis1[4]\" 9l d.(4);\n  check_int32 \"associative_scan_sum_int32_axis1[5]\" 15l d.(5)\n\nlet test_associative_scan_prod_int64_axis0 () =\n  let ctx = Nx_backend.create_context () in\n  let x =\n    Nx_ox.create ctx Dtype.Int64 [| 2; 3 |] [| 1L; 2L; 3L; 4L; 5L; 6L |]\n  in\n  let out = Nx_backend.associative_scan ~axis:0 ~op:`Prod x in\n  let d = Nx_ox.to_array out in\n  check_int64 \"associative_scan_prod_int64_axis0[0]\" 1L d.(0);\n  check_int64 \"associative_scan_prod_int64_axis0[1]\" 2L d.(1);\n  check_int64 \"associative_scan_prod_int64_axis0[2]\" 3L d.(2);\n  check_int64 \"associative_scan_prod_int64_axis0[3]\" 4L d.(3);\n  check_int64 \"associative_scan_prod_int64_axis0[4]\" 
10L d.(4);\n  check_int64 \"associative_scan_prod_int64_axis0[5]\" 18L d.(5)\n\nlet test_associative_scan_sum_int32_permuted_view () =\n  let ctx = Nx_backend.create_context () in\n  let x =\n    Nx_ox.create ctx Dtype.Int32 [| 2; 3 |] [| 1l; 2l; 3l; 4l; 5l; 6l |]\n  in\n  let x_permuted = Nx_backend.permute x [| 1; 0 |] in\n  let out = Nx_backend.associative_scan ~axis:1 ~op:`Sum x_permuted in\n  let d = Nx_ox.to_array out in\n  check_int32 \"associative_scan_sum_int32_permuted_view[0]\" 1l d.(0);\n  check_int32 \"associative_scan_sum_int32_permuted_view[1]\" 5l d.(1);\n  check_int32 \"associative_scan_sum_int32_permuted_view[2]\" 2l d.(2);\n  check_int32 \"associative_scan_sum_int32_permuted_view[3]\" 7l d.(3);\n  check_int32 \"associative_scan_sum_int32_permuted_view[4]\" 3l d.(4);\n  check_int32 \"associative_scan_sum_int32_permuted_view[5]\" 9l d.(5)\n\nlet test_associative_scan_zero_axis_length () =\n  let ctx = Nx_backend.create_context () in\n  let x = Nx_ox.empty ctx Dtype.Float32 [| 0; 3 |] in\n  let out = Nx_backend.associative_scan ~axis:0 ~op:`Max x in\n  check_int \"associative_scan_zero_axis_length:numel\"\n    (numel (Nx_backend.view out)) 0\n\nlet test_threefry_strided_view_matches_contiguous () =\n  let ctx = Nx_backend.create_context () in\n  let key_base =\n    Nx_ox.create ctx Dtype.Int32\n      [| 2; 2 |]\n      [| 1l; 2l; -1l; 0l |]\n  in\n  let ctr_base =\n    Nx_ox.create ctx Dtype.Int32\n      [| 2; 2 |]\n      [| 3l; 4l; 123l; 456l |]\n  in\n  let key_perm = Nx_backend.permute key_base [| 1; 0 |] in\n  let ctr_perm = Nx_backend.permute ctr_base [| 1; 0 |] in\n  let out_perm = Nx_backend.threefry key_perm ctr_perm in\n  let key_contig = Nx_backend.contiguous key_perm in\n  let ctr_contig = Nx_backend.contiguous ctr_perm in\n  let out_contig = Nx_backend.threefry key_contig ctr_contig in\n  let perm_data = Nx_ox.to_array out_perm in\n  let contig_data = Nx_ox.to_array out_contig in\n  for i = 0 to Array.length perm_data - 1 do\n    
check_int32 (Printf.sprintf \"threefry_strided_view_matches_contiguous[%d]\" i)\n      contig_data.(i) perm_data.(i)\n  done\n\nlet test_argmax_float64_1d () =\n  let ctx = Nx_backend.create_context () in\n  let x = Nx_ox.create ctx Dtype.Float64 [| 5 |] [| 1.0; 5.0; 3.0; 2.0; 4.0 |] in\n  let out = Nx_backend.argmax ~axis:0 ~keepdims:true x in\n  let d = Nx_ox.to_array out in\n  check_int32 \"argmax_float64_1d\" 1l d.(0)\n\nlet test_argmax_float64_2d_axis0 () =\n  let ctx = Nx_backend.create_context () in\n  (* [[1, 4], [3, 2]] -> axis 0 -> [1, 0] *)\n  let x = Nx_ox.create ctx Dtype.Float64 [| 2; 2 |] [| 1.0; 4.0; 3.0; 2.0 |] in\n  let out = Nx_backend.argmax ~axis:0 ~keepdims:false x in\n  let d = Nx_ox.to_array out in\n  check_int32 \"argmax_float64_2d_axis0[0]\" 1l d.(0);\n  check_int32 \"argmax_float64_2d_axis0[1]\" 0l d.(1)\n\nlet test_argmax_float64_2d_axis1 () =\n  let ctx = Nx_backend.create_context () in\n  (* [[1, 4], [3, 2]] -> axis 1 -> [1, 0] *)\n  let x = Nx_ox.create ctx Dtype.Float64 [| 2; 2 |] [| 1.0; 4.0; 3.0; 2.0 |] in\n  let out = Nx_backend.argmax ~axis:1 ~keepdims:false x in\n  let d = Nx_ox.to_array out in\n  check_int32 \"argmax_float64_2d_axis1[0]\" 1l d.(0);\n  check_int32 \"argmax_float64_2d_axis1[1]\" 0l d.(1)\n\nlet test_argmin_float64_1d () =\n  let ctx = Nx_backend.create_context () in\n  let x = Nx_ox.create ctx Dtype.Float64 [| 5 |] [| 3.0; 1.0; 5.0; 2.0; 4.0 |] in\n  let out = Nx_backend.argmin ~axis:0 ~keepdims:true x in\n  let d = Nx_ox.to_array out in\n  check_int32 \"argmin_float64_1d\" 1l d.(0)\n\nlet test_argmax_int32 () =\n  let ctx = Nx_backend.create_context () in\n  let x = Nx_ox.create ctx Dtype.Int32 [| 4 |] [| 10l; 30l; 20l; 5l |] in\n  let out = Nx_backend.argmax ~axis:0 ~keepdims:true x in\n  let d = Nx_ox.to_array out in\n  check_int32 \"argmax_int32\" 1l d.(0)\n\nlet test_argmin_int64 () =\n  let ctx = Nx_backend.create_context () in\n  let x = Nx_ox.create ctx Dtype.Int64 [| 4 |] [| 10L; 30L; 5L; 20L |] in\n  
let out = Nx_backend.argmin ~axis:0 ~keepdims:true x in\n  let d = Nx_ox.to_array out in\n  check_int32 \"argmin_int64\" 2l d.(0)\n\nlet test_sort_float64_ascending () =\n  let ctx = Nx_backend.create_context () in\n  let x = Nx_ox.create ctx Dtype.Float64 [| 5 |] [| 3.0; 1.0; 4.0; 1.5; 2.0 |] in\n  let out = Nx_backend.sort ~axis:0 ~descending:false x in\n  let d = Nx_ox.to_array out in\n  check_float \"sort_f64_asc[0]\" ~eps:1e-10 1.0 d.(0);\n  check_float \"sort_f64_asc[1]\" ~eps:1e-10 1.5 d.(1);\n  check_float \"sort_f64_asc[2]\" ~eps:1e-10 2.0 d.(2);\n  check_float \"sort_f64_asc[3]\" ~eps:1e-10 3.0 d.(3);\n  check_float \"sort_f64_asc[4]\" ~eps:1e-10 4.0 d.(4)\n\nlet test_sort_float64_descending () =\n  let ctx = Nx_backend.create_context () in\n  let x = Nx_ox.create ctx Dtype.Float64 [| 4 |] [| 3.0; 1.0; 4.0; 2.0 |] in\n  let out = Nx_backend.sort ~axis:0 ~descending:true x in\n  let d = Nx_ox.to_array out in\n  check_float \"sort_f64_desc[0]\" ~eps:1e-10 4.0 d.(0);\n  check_float \"sort_f64_desc[1]\" ~eps:1e-10 3.0 d.(1);\n  check_float \"sort_f64_desc[2]\" ~eps:1e-10 2.0 d.(2);\n  check_float \"sort_f64_desc[3]\" ~eps:1e-10 1.0 d.(3)\n\nlet test_sort_int32_1d () =\n  let ctx = Nx_backend.create_context () in\n  let x = Nx_ox.create ctx Dtype.Int32 [| 4 |] [| 3l; 1l; 4l; 2l |] in\n  let out = Nx_backend.sort ~axis:0 ~descending:false x in\n  let d = Nx_ox.to_array out in\n  check_int32 \"sort_i32_1d[0]\" 1l d.(0);\n  check_int32 \"sort_i32_1d[1]\" 2l d.(1);\n  check_int32 \"sort_i32_1d[2]\" 3l d.(2);\n  check_int32 \"sort_i32_1d[3]\" 4l d.(3)\n\nlet test_sort_int32_2d_axis1 () =\n  let ctx = Nx_backend.create_context () in\n  (* [[3, 1, 2], [6, 4, 5]] -> sort axis 1 -> [[1, 2, 3], [4, 5, 6]] *)\n  let x = Nx_ox.create ctx Dtype.Int32 [| 2; 3 |] [| 3l; 1l; 2l; 6l; 4l; 5l |] in\n  let out = Nx_backend.sort ~axis:1 ~descending:false x in\n  let d = Nx_ox.to_array out in\n  check_int32 \"sort_i32_2d_axis1[0]\" 1l d.(0);\n  check_int32 \"sort_i32_2d_axis1[1]\" 
2l d.(1);\n  check_int32 \"sort_i32_2d_axis1[2]\" 3l d.(2);\n  check_int32 \"sort_i32_2d_axis1[3]\" 4l d.(3);\n  check_int32 \"sort_i32_2d_axis1[4]\" 5l d.(4);\n  check_int32 \"sort_i32_2d_axis1[5]\" 6l d.(5)\n\nlet test_sort_int32_2d_axis0 () =\n  let ctx = Nx_backend.create_context () in\n  (* [[3, 1], [1, 3]] -> sort axis 0 -> [[1, 1], [3, 3]] *)\n  let x = Nx_ox.create ctx Dtype.Int32 [| 2; 2 |] [| 3l; 1l; 1l; 3l |] in\n  let out = Nx_backend.sort ~axis:0 ~descending:false x in\n  let d = Nx_ox.to_array out in\n  check_int32 \"sort_i32_2d_axis0[0]\" 1l d.(0);\n  check_int32 \"sort_i32_2d_axis0[1]\" 1l d.(1);\n  check_int32 \"sort_i32_2d_axis0[2]\" 3l d.(2);\n  check_int32 \"sort_i32_2d_axis0[3]\" 3l d.(3)\n\nlet test_argsort_float64 () =\n  let ctx = Nx_backend.create_context () in\n  (* [3.0, 1.0, 4.0, 2.0] -> argsort asc -> [1, 3, 0, 2] *)\n  let x = Nx_ox.create ctx Dtype.Float64 [| 4 |] [| 3.0; 1.0; 4.0; 2.0 |] in\n  let out = Nx_backend.argsort ~axis:0 ~descending:false x in\n  let d = Nx_ox.to_array out in\n  check_int32 \"argsort_f64[0]\" 1l d.(0);\n  check_int32 \"argsort_f64[1]\" 3l d.(1);\n  check_int32 \"argsort_f64[2]\" 0l d.(2);\n  check_int32 \"argsort_f64[3]\" 2l d.(3)\n\nlet test_argsort_descending () =\n  let ctx = Nx_backend.create_context () in\n  (* [3.0, 1.0, 4.0, 2.0] -> argsort desc -> [2, 0, 3, 1] *)\n  let x = Nx_ox.create ctx Dtype.Float64 [| 4 |] [| 3.0; 1.0; 4.0; 2.0 |] in\n  let out = Nx_backend.argsort ~axis:0 ~descending:true x in\n  let d = Nx_ox.to_array out in\n  check_int32 \"argsort_desc[0]\" 2l d.(0);\n  check_int32 \"argsort_desc[1]\" 0l d.(1);\n  check_int32 \"argsort_desc[2]\" 3l d.(2);\n  check_int32 \"argsort_desc[3]\" 1l d.(3)\n\nlet test_atan2_float64 () =\n  let ctx = Nx_backend.create_context () in\n  let y = Nx_ox.create ctx Dtype.Float64 [| 4 |] [| 1.0; -1.0; 1.0; 0.0 |] in\n  let x = Nx_ox.create ctx Dtype.Float64 [| 4 |] [| 1.0; 1.0; -1.0; 1.0 |] in\n  let out = Nx_backend.atan2 y x in\n  let data = 
Nx_ox.to_array out in\n  check_float \"atan2_float64[0]\" ~eps:1e-10 (Float.atan2 1.0 1.0) data.(0);\n  check_float \"atan2_float64[1]\" ~eps:1e-10 (Float.atan2 (-1.0) 1.0) data.(1);\n  check_float \"atan2_float64[2]\" ~eps:1e-10 (Float.atan2 1.0 (-1.0)) data.(2);\n  check_float \"atan2_float64[3]\" ~eps:1e-10 (Float.atan2 0.0 1.0) data.(3)\n\nlet test_atan2_float32 () =\n  let ctx = Nx_backend.create_context () in\n  let y = Nx_ox.create ctx Dtype.Float32 [| 3 |] [| 1.0; 0.0; -1.0 |] in\n  let x = Nx_ox.create ctx Dtype.Float32 [| 3 |] [| 0.0; 1.0; -1.0 |] in\n  let out = Nx_backend.atan2 y x in\n  let data = Nx_ox.to_array out in\n  check_float \"atan2_float32[0]\" ~eps:1e-5 (Float.atan2 1.0 0.0) data.(0);\n  check_float \"atan2_float32[1]\" ~eps:1e-5 (Float.atan2 0.0 1.0) data.(1);\n  check_float \"atan2_float32[2]\" ~eps:1e-5 (Float.atan2 (-1.0) (-1.0)) data.(2)\n\nlet () =\n  print_endline \"Running Nx_backend backend tests...\";\n  test_buffer_float64 ();\n  test_buffer_float32 ();\n  test_buffer_int32 ();\n  test_buffer_int64 ();\n  test_add_float64 ();\n  test_add_float32 ();\n  test_add_int32 ();\n  test_add_int64 ();\n  test_sub_float64 ();\n  test_sub_float32 ();\n  test_sub_int32 ();\n  test_sub_int64 ();\n  test_add_single_element ();\n  test_add_negative_values ();\n  test_sub_to_zero ();\n  test_in_place_add ();\n  test_mul_float64 ();\n  test_mul_float32 ();\n  test_mul_int64 ();\n  test_mul_int32 ();\n  test_fdiv_float64 ();\n  test_fdiv_float32 ();\n  test_fdiv_int64 ();\n  test_fdiv_int32 ();\n  test_idiv_int64 ();\n  test_idiv_int32 ();\n  test_mod_float64 ();\n  test_mod_float32 ();\n  test_mod_int64 ();\n  test_mod_int32 ();\n  test_min_float64 ();\n  test_min_float32 ();\n  test_min_int64 ();\n  test_min_int32 ();\n  test_max_float64 ();\n  test_max_float32 ();\n  test_max_int64 ();\n  test_max_int32 ();\n  test_pow_float64 ();\n  test_pow_float32 ();\n  test_xor_int64 ();\n  test_xor_int32 ();\n  test_or_int64 ();\n  test_or_int32 ();\n  
test_and_int64 ();\n  test_and_int32 ();\n  test_neg_float64 ();\n  test_neg_float32 ();\n  test_neg_int64 ();\n  test_neg_int32 ();\n  test_abs_float64 ();\n  test_abs_float32 ();\n  test_abs_int64 ();\n  test_abs_int32 ();\n  test_log_float64 ();\n  test_log_float32 ();\n  test_exp_float64 ();\n  test_exp_float32 ();\n  test_sin_float64 ();\n  test_sin_float32 ();\n  test_cos_float64 ();\n  test_cos_float32 ();\n  test_sqrt_float64 ();\n  test_sqrt_float32 ();\n  test_cmpeq_int64 ();\n  test_cmpeq_float64 ();\n  test_cmpne_int64 ();\n  test_cmpne_float64 ();\n  test_cmplt_float64 ();\n  test_cmplt_int64 ();\n  test_cmple_float64 ();\n  test_cmple_int64 ();\n  test_recip_float64 ();\n  test_where_float64_basic ();\n  test_where_float32_basic ();\n  test_where_int32_basic ();\n  test_where_int32_zero_negative ();\n  test_where_int64_zero_negative ();\n  test_where_int8_basic ();\n  test_where_int16_zero_negative ();\n  test_matmul_2d ();\n  test_matmul_identity ();\n  test_matmul_rectangular ();\n  test_matmul_batched ();\n  test_matmul_dot_product ();\n  test_matmul_rectangular_f32 ();\n  test_matmul_batched_f32 ();\n  test_pad_int32_1d ();\n  test_pad_float64_2d ();\n  test_pad_float64_permuted_view ();\n  test_shrink_int32_view ();\n  test_flip_int32_view ();\n  test_cat_int32_axis1 ();\n  test_gather_int32_axis1 ();\n  test_gather_float32_axis0_contiguous ();\n  test_gather_float64_axis0_contiguous ();\n  test_gather_float64_axis0_2d ();\n  test_gather_int32_axis0_contiguous ();\n  test_gather_int64_axis0_contiguous ();\n  test_gather_single_element ();\n  test_gather_negative_axis ();\n  test_scatter_int32_set_axis1 ();\n  test_scatter_int32_add_axis1 ();\n  test_scatter_float64_set ();\n  test_scatter_float64_add_duplicates ();\n  test_scatter_bool_set ();\n  test_scatter_preserves_template ();\n  test_unfold_int32_1d_basic ();\n  test_unfold_int32_1d_padding_stride ();\n  test_unfold_int64_1d_identity ();\n  test_unfold_float32_1d_identity ();\n  
test_unfold_float64_1d_identity ();\n  test_unfold_int8_1d_identity ();\n  test_unfold_int16_1d_identity ();\n  test_unfold_bool_1d_identity ();\n  test_fold_int32_1d_overlap ();\n  test_fold_int32_1d_padding_stride ();\n  test_fold_int64_1d_identity ();\n  test_fold_float32_1d_identity ();\n  test_fold_float64_1d_identity ();\n  test_fold_int8_1d_identity ();\n  test_fold_int16_1d_identity ();\n  test_associative_scan_sum_int32_axis1 ();\n  test_associative_scan_prod_int64_axis0 ();\n  test_associative_scan_sum_int32_permuted_view ();\n  test_associative_scan_zero_axis_length ();\n  test_threefry_strided_view_matches_contiguous ();\n  test_atan2_float64 ();\n  test_atan2_float32 ();\n  test_argmax_float64_1d ();\n  test_argmax_float64_2d_axis0 ();\n  test_argmax_float64_2d_axis1 ();\n  test_argmin_float64_1d ();\n  test_argmax_int32 ();\n  test_argmin_int64 ();\n  test_sort_int32_1d ();\n  test_sort_float64_ascending ();\n  test_sort_float64_descending ();\n  test_sort_int32_2d_axis1 ();\n  test_sort_int32_2d_axis0 ();\n  test_argsort_float64 ();\n  test_argsort_descending ();\n  Printf.printf \"\\nResults: %d passed, %d failed\\n\" !passed !failed;\n  if !failed > 0 then exit 1\n"
  },
  {
    "path": "packages/nx-oxcaml/vendor/dune",
    "content": "(vendored_dirs *)\n"
  },
  {
    "path": "packages/quill/README.md",
    "content": "# Quill\n\nInteractive computing environment for OCaml.\n\nQuill is a REPL and notebook environment for OCaml. Run `quill` for an\ninteractive toplevel with syntax highlighting, completion, and persistent\nhistory — or open a markdown file for a full notebook experience with a\nterminal UI, web frontend, or batch evaluator. Part of the Raven\necosystem.\n\n## Features\n\n- Interactive REPL: `quill` launches a toplevel with syntax highlighting,\n  tab completion with ghost text, persistent history, smart phrase-aware\n  submission, and type inspection — no browser or file required\n- Markdown notebooks: notebooks are `.md` files with fenced OCaml code\n  blocks — git-friendly, editor-agnostic, zero lock-in\n- Terminal UI: full-screen TUI for cell navigation, execution, and\n  output display — no browser required\n- Web frontend: `quill serve` opens a browser-based notebook with\n  CodeMirror 6 editor, real-time execution, autocompletion, and\n  diagnostics\n- Batch execution: `quill run` executes all code blocks and prints or\n  saves results\n- Live editing: `quill run --watch` re-executes on file change for a live\n  editing workflow\n- Output format: cell outputs stored as HTML comments, invisible in\n  rendered markdown\n- Raven integrated: Nx, Rune, Kaun, Hugin, Sowilo, Talon, Brot, and\n  Fehu are pre-loaded\n\n## Quick Start\n\n<!-- $MDX skip -->\n```bash\n# Interactive REPL\nquill\n\n# Open a notebook in the terminal UI\nquill note notebook.md\n\n# Open in the browser\nquill serve notebook.md\n\n# Execute all cells from the command line\nquill run notebook.md\n\n# Live-edit: outputs update on every save\nquill run -w notebook.md\n```\n\n## Contributing\n\nSee the [Raven monorepo README](../README.md) for contribution guidelines.\n\n## License\n\nISC License. See [LICENSE](../LICENSE) for details.\n"
  },
  {
    "path": "packages/quill/bin/dune",
    "content": "(executable\n (name main)\n (modes byte)\n (public_name quill)\n (package quill)\n (link_flags -linkall)\n (libraries\n  quill\n  quill.project\n  quill.markdown\n  quill.top\n  quill.tui\n  quill.server\n  quill.book\n  cmdliner\n  findlib\n  unix\n  threads.posix))\n"
  },
  {
    "path": "packages/quill/bin/main.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nlet raven_packages =\n  [\n    \"nx.c\";\n    \"nx.io\";\n    \"rune\";\n    \"vega\";\n    \"norn\";\n    \"kaun\";\n    \"kaun.datasets\";\n    \"hugin\";\n    \"sowilo\";\n    \"talon\";\n    \"talon.csv\";\n    \"brot\";\n    \"fehu\";\n  ]\n\nlet raven_printers = [ \"Nx.pp_data\"; \"Hugin.pp\"; \"Talon.pp_display\" ]\n\nlet load_optional pkg =\n  match Quill_top.load_package pkg with\n  | () -> true\n  | exception Fl_package_base.No_such_package _ -> false\n  | exception exn ->\n      Printf.eprintf \"[quill] failed to load %s: %s\\n%!\" pkg\n        (Printexc.to_string exn);\n      false\n\nlet setup () =\n  (* Mark packages already linked into the quill executable so that load_package\n     does not try to load their .cma archives again. *)\n  Quill_top.add_packages\n    [\n      \"compiler-libs\";\n      \"compiler-libs.common\";\n      \"compiler-libs.bytecomp\";\n      \"compiler-libs.toplevel\";\n      \"findlib\";\n      \"findlib.internal\";\n      \"unix\";\n      \"threads\";\n      \"threads.posix\";\n    ];\n  (* Load raven packages individually. We skip the .top packages (nx.top,\n     hugin.top) — they only install printers during module init, which fails\n     inside dir_load. We install printers ourselves below. 
*)\n  List.iter (fun pkg -> ignore (load_optional pkg)) raven_packages;\n  List.iter Quill_top.install_printer raven_printers\n\nlet create_kernel ~on_event = Quill_top.create ~setup ~on_event ()\n\n(* ───── Template ───── *)\n\nlet default_template =\n  {|# Welcome to Quill\n\nInteractive OCaml notebooks — run each cell with **Enter** to see results.\n[Raven](https://github.com/raven-ml) packages are loaded automatically when installed.\n\n## Arrays with Nx\n\nNx provides n-dimensional arrays, like NumPy for OCaml.\n\n```ocaml\nlet x = Nx.linspace Nx.float32 0. 5. 6\nlet y = Nx.sin x\n```\n\n## Plotting with Hugin\n\nHugin renders plots directly in the notebook.\n\n```ocaml\nlet x = Nx.linspace Nx.float32 0. 6.28 200\nlet y = Nx.sin x\n\nlet _fig =\n  Hugin.line ~x ~y ()\n  |> Hugin.title \"A sine wave\"\n  |> Hugin.xlabel \"x\" |> Hugin.ylabel \"y\"\n```\n\n## Automatic Differentiation with Rune\n\nRune computes gradients automatically — define any function and differentiate it.\n\n```ocaml\nlet f x = Nx.pow_s x 3.                      (* f(x) = x³ *)\n\nlet x = Nx.scalar Nx.float32 2.0\nlet value = f x                                (* f(2) = 8 *)\nlet gradient = Rune.grad f x                   (* f'(2) = 3·2² = 12 *)\n```\n\n## Putting It Together\n\nPlot a function alongside its derivative.\n\n```ocaml\nlet f x = Nx.pow_s x 3.\n\nlet xs = Nx.linspace Nx.float32 (-2.) 3. 
200\nlet ys = f xs\nlet gs = Rune.grad f xs\n\nlet _fig =\n  Hugin.layers [\n    Hugin.line ~x:xs ~y:ys ~label:\"f(x) = x³\" ();\n    Hugin.line ~x:xs ~y:gs ~label:\"f'(x) = 3x²\" ();\n  ]\n  |> Hugin.xlabel \"x\" |> Hugin.ylabel \"y\"\n  |> Hugin.legend\n```\n|}\n\n(* ───── Helpers ───── *)\n\nlet read_file path =\n  let ic = open_in path in\n  Fun.protect\n    ~finally:(fun () -> close_in ic)\n    (fun () -> really_input_string ic (in_channel_length ic))\n\nlet write_file path content =\n  let oc = open_out path in\n  Fun.protect\n    ~finally:(fun () -> close_out oc)\n    (fun () -> output_string oc content)\n\nlet resolve_path path =\n  if Filename.is_relative path then Filename.concat (Sys.getcwd ()) path\n  else path\n\nlet ensure_file path =\n  if not (Sys.file_exists path) then (\n    write_file path default_template;\n    Printf.printf \"Created %s\\n%!\" path)\n\nlet open_browser url =\n  let cmd = if Sys.file_exists \"/usr/bin/open\" then \"open\" else \"xdg-open\" in\n  ignore (Sys.command (cmd ^ \" \" ^ Filename.quote url))\n\n(* ───── Project loading ───── *)\n\nlet is_dir path = Sys.file_exists path && Sys.is_directory path\n\nlet discover_notebooks dir =\n  let entries = Sys.readdir dir in\n  let mds =\n    Array.to_list entries\n    |> List.filter (fun name -> Filename.check_suffix name \".md\")\n    |> List.sort String.compare\n  in\n  List.map\n    (fun name ->\n      let title = Quill_project.title_of_filename name in\n      Quill_project.Notebook ({ title; path = name }, []))\n    mds\n\nlet load_project dir =\n  let conf_path = Filename.concat dir \"quill.conf\" in\n  if Sys.file_exists conf_path then (\n    let source = read_file conf_path in\n    match Quill_project.parse_config source with\n    | Ok (config, toc) ->\n        let title =\n          match config.title with Some t -> t | None -> Filename.basename dir\n        in\n        { Quill_project.title; root = dir; toc; config }\n    | Error msg ->\n        Printf.eprintf \"Error: 
%s\\n%!\" msg;\n        exit 1)\n  else\n    let toc = discover_notebooks dir in\n    let title = Filename.basename dir in\n    {\n      Quill_project.title;\n      root = dir;\n      toc;\n      config = Quill_project.default_config;\n    }\n\nlet scratch_dir =\n  match Sys.getenv_opt \"XDG_DATA_HOME\" with\n  | Some dir -> Filename.concat dir \"quill\"\n  | None ->\n      Filename.concat\n        (Filename.concat (Sys.getenv \"HOME\") \".local/share\")\n        \"quill\"\n\nlet scratch_path = Filename.concat scratch_dir \"scratch.md\"\n\nlet ensure_scratch_dir () =\n  if not (Sys.file_exists scratch_dir) then\n    ignore\n      (Sys.command (Printf.sprintf \"mkdir -p %s\" (Filename.quote scratch_dir)))\n\n(* ───── Default: TUI notebook ───── *)\n\nlet default_cmd path =\n  let path =\n    match path with\n    | Some p -> p\n    | None ->\n        ensure_scratch_dir ();\n        scratch_path\n  in\n  let abs_path = resolve_path path in\n  Sys.chdir (Filename.dirname abs_path);\n  Quill_tui.run ~create_kernel abs_path\n\n(* ───── Serve: web notebook ───── *)\n\nlet serve_notebook port path =\n  let abs_path = resolve_path path in\n  ensure_file abs_path;\n  Sys.chdir (Filename.dirname abs_path);\n  let url = Printf.sprintf \"http://127.0.0.1:%d\" port in\n  Quill_server.serve ~create_kernel ~port\n    ~on_ready:(fun () -> open_browser url)\n    abs_path\n\nlet serve_project port project =\n  let prelude nb_path =\n    let nb_dir =\n      Filename.concat project.Quill_project.root (Filename.dirname nb_path)\n    in\n    let path = Filename.concat nb_dir \"prelude.ml\" in\n    if Sys.file_exists path then Some (read_file path) else None\n  in\n  let url = Printf.sprintf \"http://127.0.0.1:%d\" port in\n  Quill_server.serve_dir ~create_kernel ~port ~prelude ~toc:project.toc\n    ~on_ready:(fun () -> open_browser url)\n    project.root\n\nlet serve_cmd port path =\n  if is_dir path then serve_project port (load_project path)\n  else serve_notebook port path\n\n(* ───── 
Run: batch execution ───── *)\n\nlet run_file ?prelude ?figures_dir inplace path =\n  let abs_path = resolve_path path in\n  let abs_prelude = Option.map resolve_path prelude in\n  let nb_dir = Filename.dirname abs_path in\n  let figures_dir =\n    Option.map\n      (fun d -> if Filename.is_relative d then Filename.concat nb_dir d else d)\n      figures_dir\n  in\n  let md = read_file abs_path in\n  let doc = Quill_markdown.of_string md in\n  let create_kernel ~on_event =\n    let k = create_kernel ~on_event in\n    (match abs_prelude with\n    | Some p ->\n        let code = read_file p in\n        k.Quill.Kernel.execute ~cell_id:\"__prelude__\" ~code\n    | None -> ());\n    k\n  in\n  let doc = Quill.Doc.clear_all_outputs doc in\n  let prev_cwd = Sys.getcwd () in\n  Sys.chdir nb_dir;\n  let doc =\n    Fun.protect\n      ~finally:(fun () -> Sys.chdir prev_cwd)\n      (fun () -> Quill.Eval.run ~create_kernel doc)\n  in\n  let result = Quill_markdown.to_string_with_outputs ?figures_dir doc in\n  if inplace then (\n    write_file abs_path result;\n    Printf.printf \"Updated %s\\n%!\" abs_path)\n  else print_string result\n\nlet get_mtime path =\n  try Some (Unix.stat path).Unix.st_mtime with Unix.Unix_error _ -> None\n\nlet rec watch_loop ?prelude ?figures_dir path last_mtime =\n  Unix.sleepf 1.0;\n  match get_mtime path with\n  | None ->\n      Printf.eprintf \"File %s no longer exists\\n%!\" path;\n      exit 1\n  | Some mtime when mtime > last_mtime ->\n      let tm = Unix.localtime (Unix.gettimeofday ()) in\n      Printf.printf \"\\n[%02d:%02d:%02d] File changed, re-evaluating...\\n%!\"\n        tm.Unix.tm_hour tm.Unix.tm_min tm.Unix.tm_sec;\n      run_file ?prelude ?figures_dir true path;\n      let new_mtime = Option.value ~default:mtime (get_mtime path) in\n      watch_loop ?prelude ?figures_dir path new_mtime\n  | Some _ -> watch_loop ?prelude ?figures_dir path last_mtime\n\nlet run_project ?prelude ?figures_dir project =\n  List.iter\n    (fun (nb : 
Quill_project.notebook) ->\n      let path = Filename.concat project.Quill_project.root nb.path in\n      if Sys.file_exists path then (\n        Printf.printf \"  Running %s...\\n%!\" nb.title;\n        run_file ?prelude ?figures_dir true path))\n    (Quill_project.notebooks project)\n\nlet run_cmd watch inplace prelude figures_dir path =\n  if not (Sys.file_exists path) then (\n    Printf.eprintf \"Error: %s not found\\n%!\" path;\n    exit 1);\n  if is_dir path then run_project ?prelude ?figures_dir (load_project path)\n  else if watch then begin\n    run_file ?prelude ?figures_dir true path;\n    match get_mtime path with\n    | None ->\n        Printf.eprintf \"Error: Cannot watch %s\\n%!\" path;\n        exit 1\n    | Some mtime ->\n        Printf.printf \"\\nWatching %s for changes... (Ctrl-C to stop)\\n%!\" path;\n        watch_loop ?prelude ?figures_dir path mtime\n  end\n  else run_file ?prelude ?figures_dir inplace path\n\n(* ───── Build: static HTML ───── *)\n\nlet build_cmd skip_eval output path =\n  if not (Sys.file_exists path) then (\n    Printf.eprintf \"Error: %s not found\\n%!\" path;\n    exit 1);\n  if is_dir path then\n    let project = load_project path in\n    Quill_book.Build.build ~create_kernel ~skip_eval ?output project\n  else Quill_book.Build.build_file ~create_kernel ~skip_eval ?output path\n\n(* ───── Clean: strip outputs ───── *)\n\nlet rec rm_rf path =\n  if Sys.file_exists path then\n    if Sys.is_directory path then (\n      Array.iter\n        (fun name -> rm_rf (Filename.concat path name))\n        (Sys.readdir path);\n      Unix.rmdir path)\n    else Sys.remove path\n\nlet clean_figures_dir dir =\n  let figures = Filename.concat dir \"figures\" in\n  if Sys.file_exists figures && Sys.is_directory figures then rm_rf figures\n\nlet clean_cmd path =\n  if not (Sys.file_exists path) then (\n    Printf.eprintf \"Error: %s not found\\n%!\" path;\n    exit 1);\n  if is_dir path then begin\n    let project = load_project path in\n    
List.iter\n      (fun (nb : Quill_project.notebook) ->\n        let path = Filename.concat project.root nb.path in\n        if Sys.file_exists path then (\n          let md = read_file path in\n          let doc = Quill_markdown.of_string md in\n          let doc = Quill.Doc.clear_all_outputs doc in\n          let result = Quill_markdown.to_string doc in\n          write_file path result;\n          let nb_dir = Filename.dirname path in\n          clean_figures_dir nb_dir;\n          Printf.printf \"  Cleaned %s\\n%!\" nb.title))\n      (Quill_project.notebooks project);\n    Printf.printf \"Done.\\n%!\"\n  end\n  else begin\n    let md = read_file path in\n    let doc = Quill_markdown.of_string md in\n    let doc = Quill.Doc.clear_all_outputs doc in\n    let result = Quill_markdown.to_string doc in\n    write_file path result;\n    let nb_dir = Filename.dirname path in\n    clean_figures_dir nb_dir;\n    Printf.printf \"Stripped outputs from %s\\n%!\" path\n  end\n\n(* ───── Cmdliner ───── *)\n\nopen Cmdliner\n\nlet optional_path_arg =\n  Arg.(\n    value\n    & pos 0 (some string) None\n    & info [] ~docv:\"FILE\"\n        ~doc:\"Path to a notebook file. 
If omitted, opens a scratch notebook.\")\n\nlet serve_path_arg =\n  Arg.(\n    value & pos 0 string \"notebook.md\"\n    & info [] ~docv:\"PATH\"\n        ~doc:\n          \"Path to a notebook file or project directory (contains quill.conf).\")\n\nlet required_path_arg =\n  Arg.(\n    required\n    & pos 0 (some string) None\n    & info [] ~docv:\"PATH\"\n        ~doc:\n          \"Path to a notebook file or project directory (contains quill.conf).\")\n\nlet port_flag =\n  Arg.(\n    value & opt int 8888\n    & info [ \"port\"; \"p\" ] ~docv:\"PORT\" ~doc:\"Port to listen on (default 8888).\")\n\nlet watch_flag =\n  Arg.(\n    value & flag & info [ \"watch\"; \"w\" ] ~doc:\"Re-execute on every file save.\")\n\nlet inplace_flag =\n  Arg.(\n    value & flag\n    & info [ \"inplace\"; \"i\" ] ~doc:\"Write changes back into the file.\")\n\nlet prelude_flag =\n  Arg.(\n    value\n    & opt (some string) None\n    & info [ \"prelude\" ] ~docv:\"FILE\"\n        ~doc:\"Execute OCaml code from $(docv) before the notebook cells.\")\n\nlet figures_dir_flag =\n  Arg.(\n    value\n    & opt (some string) None\n    & info [ \"figures-dir\" ] ~docv:\"DIR\"\n        ~doc:\n          \"Write images to $(docv) and reference by path instead of inlining.\")\n\nlet skip_eval_flag =\n  Arg.(\n    value & flag\n    & info [ \"skip-eval\" ]\n        ~doc:\"Render HTML from existing outputs without re-executing code.\")\n\nlet output_flag =\n  Arg.(\n    value\n    & opt (some string) None\n    & info [ \"output\"; \"o\" ] ~docv:\"DIR\"\n        ~doc:\"Output directory (default: build/ inside the project directory).\")\n\n(* Default: TUI notebook *)\nlet default_term = Term.(const default_cmd $ optional_path_arg)\n\n(* serve: web notebook *)\nlet serve_term =\n  let doc = \"Open a notebook or project in the browser.\" in\n  Cmd.v (Cmd.info \"serve\" ~doc)\n    Term.(const serve_cmd $ port_flag $ serve_path_arg)\n\n(* run: batch execution *)\nlet run_term =\n  let doc = \"Execute all code 
blocks in a notebook or project.\" in\n  Cmd.v (Cmd.info \"run\" ~doc)\n    Term.(\n      const run_cmd $ watch_flag $ inplace_flag $ prelude_flag\n      $ figures_dir_flag $ required_path_arg)\n\n(* build: static HTML *)\nlet build_term =\n  let doc = \"Build a notebook or project as static HTML.\" in\n  Cmd.v (Cmd.info \"build\" ~doc)\n    Term.(const build_cmd $ skip_eval_flag $ output_flag $ required_path_arg)\n\n(* clean: strip outputs *)\nlet clean_term =\n  let doc = \"Strip outputs from a notebook or all project notebooks.\" in\n  Cmd.v (Cmd.info \"clean\" ~doc) Term.(const clean_cmd $ required_path_arg)\n\nlet subcommands = [ serve_term; run_term; build_term; clean_term ]\nlet known_commands = List.map Cmd.name subcommands\n\nlet quill_cmd =\n  let doc = \"Interactive OCaml notebooks.\" in\n  let info = Cmd.info \"quill\" ~version:\"1.0.0\" ~doc in\n  Cmd.group ~default:default_term info subcommands\n\nlet () =\n  (* cmdliner's Cmd.group matches the first positional arg against subcommand\n     names before falling through to the default term. Pre-parse argv to insert\n     \"--\" when the first arg is not a known subcommand, so that [quill file.md]\n     works without requiring [quill -- file.md]. *)\n  let argv =\n    let a = Sys.argv in\n    if\n      Array.length a >= 2\n      && String.length a.(1) > 0\n      && a.(1).[0] <> '-'\n      && not (List.mem a.(1) known_commands)\n    then Array.concat [ [| a.(0); \"--\" |]; Array.sub a 1 (Array.length a - 1) ]\n    else a\n  in\n  exit (Cmd.eval ~argv quill_cmd)\n"
  },
  {
    "path": "packages/quill/doc/01-getting-started.md",
    "content": "# Getting Started\n\nThis guide covers the REPL, creating a notebook, executing it in\ndifferent modes, and viewing results.\n\n## Installation\n\n<!-- $MDX skip -->\n```bash\nopam install quill\n```\n\nOr build from source:\n\n<!-- $MDX skip -->\n```bash\ngit clone https://github.com/raven-ml/raven\ncd raven && dune build quill\n```\n\n## The REPL\n\nThe fastest way to try Quill:\n\n<!-- $MDX skip -->\n```bash\nquill\n```\n\nThis launches an interactive toplevel. Type OCaml expressions, press\nEnter to evaluate. All Raven packages are pre-loaded — try\n`Nx.create Nx.float32 [|3|] [|1.; 2.; 3.|]` right away.\n\n| Key | Action |\n| --- | --- |\n| Enter | Submit (if phrase is complete) |\n| Ctrl-Enter | Insert newline |\n| Tab | Trigger completion |\n| Ctrl-T | Type at cursor |\n| Up / Down | History navigation |\n| Ctrl-C | Clear input / interrupt |\n| Ctrl-D | Quit (on empty input) |\n\nQuill also works in pipes: `echo 'print_endline \"hello\"' | quill`\nexecutes the input and prints the result.\n\n## Your First Notebook\n\nOpen a notebook in the terminal UI — if the file does not exist yet,\nQuill creates it with a starter template:\n\n<!-- $MDX skip -->\n```bash\nquill note notebook.md\n```\n\nRun each cell with `Enter` to see arrays, plots, and automatic\ndifferentiation in action. Any filename works: `quill note analysis.md`\nopens (or creates) `analysis.md`.\n\n## Creating a Notebook\n\nAny `.md` file with fenced OCaml code blocks is a Quill notebook. 
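
The split into cells is plain fence detection. As a toy sketch — Quill's
actual parser is `Quill_markdown`, which handles the full CommonMark
grammar — here is a scanner that separates `ocaml`-fenced code from
surrounding text (`split_cells` is an illustrative name, not part of
Quill's API):

<!-- $MDX skip -->
```ocaml
(* Toy illustration of the cell split: a code cell opens at a line of
   three backticks followed by the ocaml tag and closes at a bare
   three-backtick line; everything else is text. Quill's real parser,
   Quill_markdown, implements the full CommonMark grammar. *)
let split_cells md =
  let fence = String.make 3 '`' in
  let ocaml_fence = fence ^ "ocaml" in
  let flush is_code buf acc =
    if buf = [] then acc
    else (is_code, String.concat "\n" (List.rev buf)) :: acc
  in
  let rec go is_code buf acc = function
    | [] -> List.rev (flush is_code buf acc)
    | line :: rest when (not is_code) && line = ocaml_fence ->
        go true [] (flush false buf acc) rest
    | line :: rest when is_code && line = fence ->
        go false [] (flush true buf acc) rest
    | line :: rest -> go is_code (line :: buf) acc rest
  in
  go false [] [] (String.split_on_char '\n' md)
```

(Quill itself additionally keeps fenced blocks in other languages as
non-executed code cells; this sketch lumps them in with text.)
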
Create\na file `notebook.md`:\n\n    # Statistics\n\n    We'll compute some basic statistics.\n\n    ```ocaml\n    open Nx\n\n    let data = create float32 [| 5 |] [| 1.0; 2.0; 3.0; 4.0; 5.0 |]\n    let () = Printf.printf \"Data: %s\\n\" (to_string data)\n    ```\n\n    Now the mean:\n\n    ```ocaml\n    let m = mean data\n    let () = Printf.printf \"Mean: %s\\n\" (to_string m)\n    ```\n\nCode blocks share state: variables defined in one block are available in\nall subsequent blocks.\n\n## Running with `quill run`\n\nBatch-execute all code blocks:\n\n<!-- $MDX skip -->\n```bash\nquill run notebook.md\n```\n\nThis prints the complete notebook with outputs to stdout. The original\nfile is not modified. Useful for quick checks and CI.\n\n### Saving outputs in-place\n\n<!-- $MDX skip -->\n```bash\nquill run --inplace notebook.md\n```\n\nExecutes all code blocks and writes outputs back into the file as HTML\ncomments. The file now contains `<!-- quill:output -->` sections below\neach code block. The notebook remains valid, readable markdown.\n\n## Watch Mode\n\n<!-- $MDX skip -->\n```bash\nquill run --watch notebook.md\n```\n\nWatches the file for changes (polling every second). On each save,\nre-executes all cells and writes outputs back. 
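
Under the hood this is a plain poll loop: about once a second, Quill
compares the file's modification time against the previous poll rather
than using an OS-level file-watching API. A dependency-free sketch of
the idea (not Quill's implementation — this one compares content
digests instead of mtimes, and `watch_sketch` and `polls` are
illustrative names):

<!-- $MDX skip -->
```ocaml
(* Sketch of a polling change detector using only the OCaml standard
   library. Quill itself polls the file's modification time once per
   second; comparing whole-content digests, as done here, is just an
   illustration. *)
let digest_opt path =
  try Some (Digest.file path) with Sys_error _ -> None

(* Call [on_change] whenever the contents differ from the previous
   poll. [polls] bounds the loop so the sketch terminates; a real
   watcher loops until interrupted and sleeps between polls. *)
let watch_sketch ~polls path on_change =
  let rec loop n last =
    if n < polls then begin
      let now = digest_opt path in
      if now <> last then on_change ();
      loop (n + 1) now
    end
  in
  loop 0 (digest_opt path)
```

A real watcher sleeps between polls and runs until interrupted;
bounding the iteration count here only keeps the sketch terminating.
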
This enables a live\nediting workflow: edit in your favorite editor in one terminal, see\nresults update in the file.\n\n## Running with the TUI\n\nOpen a notebook in the terminal UI:\n\n<!-- $MDX skip -->\n```bash\nquill note notebook.md\n```\n\nThe TUI displays a full-screen interface with:\n\n- **Header**: filename, cell count, running indicator\n- **Cells**: code cells in numbered bordered boxes with syntax\n  highlighting, text cells as rendered markdown\n- **Footer**: keybinding hints and error messages\n\n### Keybindings\n\n| Key | Action |\n| --- | --- |\n| j / k | Navigate cells |\n| J / K | Move cell down / up |\n| Up / Down | Navigate cells |\n| Enter | Execute focused cell |\n| Ctrl-A | Execute all cells |\n| a | Insert code cell below |\n| t | Insert text cell below |\n| d | Delete focused cell |\n| m | Toggle cell kind (code / text) |\n| c | Clear focused cell outputs |\n| Ctrl-L | Clear all outputs |\n| s / Ctrl-S | Save |\n| Ctrl-C | Interrupt execution |\n| q | Quit |\n\nThe TUI watches the file for external changes. If you edit the notebook\nin another editor, the TUI reloads automatically.\n\nQuitting with unsaved changes requires pressing `q` twice, or `s` to\nsave first.\n\n## Running with the Web UI\n\nStart the web frontend:\n\n<!-- $MDX skip -->\n```bash\nquill serve notebook.md\n```\n\nThis starts an HTTP server at `http://127.0.0.1:8888` and opens the\nnotebook in your browser. 
The web UI provides:\n\n- **CodeMirror 6 editor** with OCaml syntax highlighting\n- **Real-time execution** via WebSocket — outputs appear as cells run\n- **Autocompletion** and **type-at-position** for OCaml code\n- **Diagnostics** — errors and warnings shown inline\n- **Keyboard shortcuts** — `j`/`k` navigation, `Ctrl+Enter` to execute\n\nUse `--port` (or `-p`) to change the port:\n\n<!-- $MDX skip -->\n```bash\nquill serve --port 9000 notebook.md\n```\n\nThe web UI shares the same markdown notebook format and Raven kernel as\nthe TUI and batch evaluator.\n\n## Stripping Outputs\n\nRemove all outputs from a notebook:\n\n<!-- $MDX skip -->\n```bash\nquill clean notebook.md   # strip outputs from the file in place\n```\n\nUseful before committing to git for clean diffs, or to get a fresh start\nbefore re-execution.\n\n## Persistent State\n\nCode cells execute sequentially in a shared OCaml toplevel. Variables\nand functions defined in one cell are available in all subsequent cells:\n\n    ```ocaml\n    let greet name = Printf.printf \"Hello, %s!\\n\" name\n    ```\n\n    ```ocaml\n    let () = greet \"world\"\n    (* prints: Hello, world! *)\n    ```\n\nThis mirrors the behavior of the OCaml toplevel (`ocaml` REPL).\n\n## Raven Packages\n\nAll Raven packages are pre-loaded automatically. Your first code cell\ncan immediately use `open Nx`, `open Rune`, `open Hugin`, etc. without\nany setup. Pretty-printers for Nx tensors, Talon dataframes, and Hugin\nfigures are installed automatically.\n\n## Next Steps\n\n- [Notebook Format](02-notebook-format/) — how markdown maps to cells,\n  how outputs are serialized\n- [Execution Modes](03-execution-modes/) — TUI, web frontend, live\n  editing workflow, batch execution\n"
  },
  {
    "path": "packages/quill/doc/02-notebook-format.md",
    "content": "# Notebook Format\n\nA Quill notebook is a CommonMark markdown file. Fenced code blocks with\na language tag become executable code cells. Everything else becomes text\ncells. This page explains the mapping and the output serialization\nformat.\n\n## Cell Types\n\nA notebook contains two kinds of cells:\n\n- **Code cells**: fenced code blocks with a language info string (e.g.,\n  ` ```ocaml `). The language tag identifies the execution kernel.\n- **Text cells**: all other markdown content between code blocks.\n  Adjacent paragraphs, headings, lists, and other block elements form a\n  single text cell.\n\nFor example, this notebook has three cells:\n\n    # My Notebook          ← text cell\n\n    Some explanation.\n\n    ```ocaml               ← code cell\n    let x = 42\n    ```\n\n    More text here.        ← text cell\n\n    ```ocaml               ← code cell\n    let y = x + 1\n    ```\n\n## Cell IDs\n\nQuill assigns each cell a stable identifier stored as an HTML comment\nbefore the cell:\n\n    <!-- quill:cell id=\"c_a1b2c3d4e5f6\" -->\n    ```ocaml\n    let x = 42\n    ```\n\nCell IDs are generated automatically for cells that lack them. They\nenable the TUI and session to track cells across file reloads and edits.\n\nUsers do not need to manage cell IDs. They are preserved by `quill clean`\nand `quill run --inplace`. 
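
Purely for illustration — Quill's actual generator is not shown here —
an identifier of the observed shape, `c_` followed by twelve hex
characters, could be minted like this (`fresh_cell_id` is a
hypothetical name):

<!-- $MDX skip -->
```ocaml
(* Hypothetical sketch: mint an identifier shaped like Quill's cell
   IDs (the prefix c_ followed by 12 hex characters). This is not
   Quill's actual generation scheme, only an illustration. *)
let fresh_cell_id () =
  Printf.sprintf "c_%06x%06x"
    (Random.int 0x1000000) (* six hex digits *)
    (Random.int 0x1000000) (* six hex digits *)
```
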
Deleting them is harmless — fresh IDs are\nassigned on the next load.\n\n## Output Format\n\nAfter executing a code cell, outputs are stored between marker comments:\n\n    ```ocaml\n    let x = 42\n    let () = Printf.printf \"x = %d\\n\" x\n    ```\n    <!-- quill:output -->\n    <!-- out:stdout -->\n    x = 42\n    val x : int = 42\n    <!-- /quill:output -->\n\nEach output section is tagged with its type:\n\n- `<!-- out:stdout -->` — captured standard output and toplevel value\n  printing\n- `<!-- out:stderr -->` — warnings and standard error\n- `<!-- out:error -->` — execution errors (syntax errors, type errors,\n  runtime exceptions)\n- `<!-- out:display MIME -->` — rich output with a MIME type (e.g.,\n  `<!-- out:display text/html -->` or `<!-- out:display image/png -->`)\n\nA single code cell can produce multiple output sections. For example, a\ncell that prints to both stdout and stderr:\n\n    ```ocaml\n    let () = Printf.printf \"result: 42\\n\"\n    let () = Printf.eprintf \"warning: something\\n\"\n    ```\n    <!-- quill:output -->\n    <!-- out:stdout -->\n    result: 42\n    <!-- out:stderr -->\n    warning: something\n    <!-- /quill:output -->\n\n## Why HTML Comments?\n\nThe output format uses HTML comments for several reasons:\n\n1. **Invisible in rendered markdown.** GitHub, editors with preview, and\n   documentation tools render the notebook without showing outputs. The\n   document reads cleanly whether outputs are present or not.\n2. **Valid markdown.** HTML comments are part of the CommonMark\n   specification. No custom syntax, no extensions, no preprocessing.\n3. **Single file.** Outputs live in the notebook itself. No sidecar\n   files, no `.ipynb_checkpoints`, no separate output directories.\n4. **Clean stripping.** `quill clean` removes all output sections in one\n   pass. `quill run --inplace` regenerates them. 
This makes it easy to\n   commit clean notebooks and regenerate outputs in CI.\n\n## Non-OCaml Code Blocks\n\nCode blocks without a language tag, or with a language other than\n`ocaml`, are not executed. They pass through unchanged as code cells:\n\n    ```bash\n    # This is not executed — it's documentation\n    quill run notebook.md\n    ```\n\n    ```json\n    { \"this\": \"is also not executed\" }\n    ```\n\nThis lets you include shell commands, JSON examples, and other snippets\nin your notebook as documentation without affecting execution.\n\n## Roundtrip Guarantees\n\nParsing a markdown file with `Quill_markdown.of_string` and rendering it\nback with `Quill_markdown.to_string` or `to_string_with_outputs`\npreserves:\n\n- Cell content and ordering\n- Cell IDs\n- Output content and types\n- Text cell markdown (headings, lists, links, etc.)\n\nThe rendering normalizes some whitespace (consistent blank lines between\ncells), but the semantic content is preserved exactly.\n"
  },
  {
    "path": "packages/quill/doc/03-execution-modes.md",
    "content": "# Execution Modes\n\nQuill provides five execution modes: the interactive REPL, the terminal\nnotebook UI, a web frontend, batch execution (with optional watch), and clean.\n\n## Interactive REPL\n\nRun `quill` with no file argument to launch the interactive toplevel:\n\n<!-- $MDX skip -->\n```bash\nquill\n```\n\nThe REPL provides:\n\n- **Syntax highlighting** via tree-sitter\n- **Tab completion** with ghost text preview\n- **Persistent history** — Up/Down to recall previous expressions\n- **Smart submission** — Enter submits complete phrases, inserts a\n  newline for incomplete code. Ctrl-Enter always inserts a newline\n- **Type inspection** — Ctrl-T shows the type at the cursor\n- **Interrupt** — Ctrl-C clears input (idle) or interrupts execution\n\n### Piped Mode\n\nWhen stdin is not a terminal, Quill reads code from stdin, executes it,\nand prints the output:\n\n<!-- $MDX skip -->\n```bash\necho 'List.iter print_endline [\"a\"; \"b\"; \"c\"]' | quill\n```\n\nThis is useful for scripting and one-off evaluation.\n\n## Terminal UI\n\nOpen a notebook in the TUI:\n\n<!-- $MDX skip -->\n```bash\nquill note notebook.md\n```\n\nIf the file doesn't exist, Quill creates it with a starter template.\n\n### Layout\n\nThe TUI displays three areas:\n\n- **Header**: the filename, total cell count (or a running indicator\n  with spinner when cells are executing), and an unsaved-changes dot.\n- **Cell list**: a scrollable view of all cells. Code cells appear in\n  numbered bordered boxes with syntax highlighting. Text cells appear as\n  rendered markdown.\n- **Footer**: keybinding hints and error messages.\n\n### Navigating and Executing\n\nNavigate between cells with `j`/`k` or the arrow keys. The focused cell\nis highlighted with a distinct background and border.\n\nPress `Enter` to execute the focused code cell. Press `Ctrl-A` to\nexecute all code cells top-to-bottom. During execution, a spinner and\n\"evaluating\" label appear. 
Outputs display inline below the code.\n\nPressing `Enter` on a text cell shows an error — only code cells are\nexecutable.\n\n### Cell Management\n\n| Key | Action |\n| --- | --- |\n| a | Insert a code cell below the focused cell |\n| t | Insert a text cell below the focused cell |\n| d | Delete the focused cell |\n| m | Toggle the focused cell between code and text |\n| J | Move the focused cell down |\n| K | Move the focused cell up |\n| c | Clear the focused cell's outputs |\n| Ctrl-L | Clear all outputs |\n\n### File Watching\n\nThe TUI checks the file for external modifications every second. If the\nfile changes on disk (e.g., you edit it in vim or another editor), the\nTUI reloads automatically. This means you can keep the TUI open while\nediting the notebook externally.\n\n### Saving\n\nPress `s` (or `Ctrl-S`) to save. The notebook is written with all\ncurrent outputs. An unsaved-changes indicator (a dot in the header)\nappears when the document has been modified since the last save.\n\nQuitting with unsaved changes requires pressing `q` twice. The error\nbar shows: \"Unsaved changes. Press q again to quit, s to save.\"\n\n### Interrupting\n\nPress `Ctrl-C` to interrupt a running execution. This sends an\ninterrupt signal to the kernel.\n\n## Web Frontend\n\nStart the web notebook server:\n\n<!-- $MDX skip -->\n```bash\nquill serve notebook.md\n```\n\nThis starts an HTTP server at `http://127.0.0.1:8888` and opens the\nnotebook in your browser. 
Use `--port` (or `-p`) to change the port:\n\n<!-- $MDX skip -->\n```bash\nquill serve --port 9000 notebook.md\n```\n\nWith no file argument, `quill serve` defaults to `notebook.md` in the\ncurrent directory, creating it if needed.\n\n### Features\n\nThe web UI provides a full notebook interface in the browser:\n\n- **CodeMirror 6 editor** with OCaml syntax highlighting and theming\n- **Real-time execution** — cell outputs stream via WebSocket as code runs\n- **Autocompletion** — context-aware completions for OCaml code\n- **Type information** — hover over identifiers to see their types\n- **Diagnostics** — errors and warnings shown inline in the editor\n- **Undo / redo** — checkpoint-based history\n- **Cell management** — insert, delete, move, and toggle cells between\n  code and text\n\n### Keyboard Shortcuts\n\n| Key | Action |\n| --- | --- |\n| j / k | Navigate cells |\n| Enter | Edit focused cell |\n| Ctrl-Enter | Execute focused cell |\n| Ctrl-Shift-Enter | Execute all cells |\n| a | Insert code cell below |\n| t | Insert text cell below |\n| d | Delete focused cell |\n| Ctrl-S | Save |\n| Ctrl-C | Interrupt execution |\n\n### Connection Status\n\nThe web UI automatically reconnects if the server restarts or the\nconnection drops. A banner appears during disconnection with the\nreconnection status. Reconnection uses exponential backoff (up to 30\nseconds).\n\n## Batch Execution\n\nNon-interactive execution of all code cells:\n\n<!-- $MDX skip -->\n```bash\nquill run notebook.md\n```\n\nExecutes every code cell in order and prints the complete notebook with\noutputs to stdout. The original file is not modified.\n\nUse cases:\n- Quick review of notebook outputs\n- Piping output to other tools\n- CI/CD validation\n\n### In-place updates\n\n<!-- $MDX skip -->\n```bash\nquill run --inplace notebook.md\n```\n\nSame as above, but writes outputs back into the file. 
After running,\nthe file contains `<!-- quill:output -->` sections below each code\nblock.\n\n## Watch Mode\n\n<!-- $MDX skip -->\n```bash\nquill run --watch notebook.md\n```\n\nWatches the file for changes and re-executes all cells on every save,\nwriting outputs back into the file. A timestamp is printed on each\nre-evaluation:\n\n    [14:32:05] File changed, re-evaluating...\n\nWatch mode runs until interrupted with `Ctrl-C`.\n\n### The Live Editing Workflow\n\n`quill run -w` creates a live notebook experience using your own editor:\n\n1. Open two terminals (or splits in tmux / zellij)\n2. Terminal 1: open the notebook in your editor (`vim notebook.md`)\n3. Terminal 2: `quill run -w notebook.md`\n4. Edit and save in terminal 1. Terminal 2 detects the change,\n   re-executes all cells, and writes outputs back into the file.\n5. Your editor picks up the file change (vim with `:set autoread`,\n   VS Code automatically, etc.)\n\nThis gives you the \"edit in your editor, see results update\" workflow\nwithout a browser or notebook server.\n\n## Cleaning\n\nStrip all outputs from a notebook:\n\n<!-- $MDX skip -->\n```bash\nquill clean notebook.md   # strip outputs from the file in place\n```\n\nCell IDs are preserved. 
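
As a sketch of the transformation — illustration only; `quill clean`
works on the parsed document model (`Quill.Doc.clear_all_outputs`)
rather than filtering lines — dropping everything between the output
markers (`strip_outputs` is an illustrative name):

<!-- $MDX skip -->
```ocaml
(* Illustration only: drop every line between the quill:output open
   and close markers and keep the rest untouched. quill clean itself
   clears outputs on the parsed document model instead. *)
let strip_outputs md =
  let keep, _ =
    List.fold_left
      (fun (acc, in_output) line ->
        if line = "<!-- quill:output -->" then (acc, true)
        else if line = "<!-- /quill:output -->" then (acc, false)
        else if in_output then (acc, in_output)
        else (line :: acc, in_output))
      ([], false)
      (String.split_on_char '\n' md)
  in
  String.concat "\n" (List.rev keep)
```
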
Only `<!-- quill:output -->` sections are\nremoved.\n\nUse cases:\n- **Clean diffs**: strip outputs before committing, regenerate with\n  `quill run --inplace` in CI\n- **Fresh start**: remove stale outputs before a full re-run\n- **Sharing**: send a clean notebook without outputs\n\n## Creating Notebooks\n\nOpening a path that does not exist yet creates it from the starter\ntemplate:\n\n<!-- $MDX skip -->\n```bash\nquill note analysis.md      # creates analysis.md if missing\n```\n\nThe starter template includes working examples of Nx arrays, Hugin\nplotting, and Rune automatic differentiation.\n\n## Raven Packages\n\nAll execution modes (REPL, TUI, web, run, and watch) use the Raven kernel,\nwhich pre-loads these packages automatically:\n\n- **Nx** — n-dimensional arrays\n- **Rune** — tensor computation with autodiff\n- **Kaun** — neural networks and training\n- **Hugin** — visualization and plotting\n- **Sowilo** — image processing\n- **Talon** — dataframes\n- **Brot** — tokenization\n- **Fehu** — reinforcement learning\n\nPretty-printers for Nx tensors, Talon dataframes, and Hugin figures are\ninstalled automatically. Your first code cell can use `open Nx` or any\nother Raven module without setup.\n"
  },
  {
    "path": "packages/quill/doc/dune",
    "content": "(mdx\n (files *.md)\n (package quill)\n (libraries quill quill.markdown))\n"
  },
  {
    "path": "packages/quill/doc/index.md",
    "content": "# Quill\n\nQuill is a REPL and notebook environment for OCaml. Run `quill` for an\ninteractive toplevel with syntax highlighting, completion, and persistent\nhistory — or open a markdown file for a full notebook with a terminal UI,\nweb frontend, or batch evaluator.\n\n## Features\n\n- **Interactive REPL**: `quill` launches a toplevel with syntax highlighting, tab completion with ghost text, persistent history, smart phrase-aware submission, and type inspection\n- **Markdown notebooks**: notebooks are `.md` files with fenced OCaml code blocks — git-friendly, editor-agnostic, zero lock-in\n- **Terminal UI**: full-screen TUI for cell navigation, execution, and output display\n- **Web frontend**: `quill serve` opens a browser-based notebook with CodeMirror 6 editor, real-time execution, autocompletion, and diagnostics\n- **Batch execution**: `quill run` executes all code blocks and prints or saves results\n- **Watch mode**: `quill run --watch` re-executes on file change for a live editing workflow\n- **Output format**: cell outputs stored as `<!-- quill:output -->` HTML comments, invisible in rendered markdown\n- **Raven integrated**: Nx, Rune, Kaun, Hugin, Sowilo, Talon, Brot, and Fehu are pre-loaded\n\n## Quick Start\n\nLaunch the REPL:\n\n<!-- $MDX skip -->\n```bash\nquill\n```\n\nOr open a notebook in the terminal UI:\n\n<!-- $MDX skip -->\n```bash\nquill note notebook.md\n```\n\nOr in the browser:\n\n<!-- $MDX skip -->\n```bash\nquill serve notebook.md\n```\n\nOr execute from the command line:\n\n<!-- $MDX skip -->\n```bash\nquill run notebook.md\n```\n\n## Next Steps\n\n- [Getting Started](01-getting-started/) — REPL, notebooks, execution modes\n- [Notebook Format](02-notebook-format/) — how markdown becomes cells, how outputs are stored\n- [Execution Modes](03-execution-modes/) — REPL, TUI, web, run, and clean\n"
  },
  {
    "path": "packages/quill/examples/hello.md",
    "content": "<!-- quill:cell id=\"c_21swpr9nmlzh\" -->\n# Hello Quill\n\nA sample notebook to test the TUI.\n\n\n<!-- quill:cell id=\"c_nt02ejs0jlrx\" -->\n```ocaml\nlet greeting = \"Hello from Quill!\"\nlet () = print_endline greeting\n```\n\n<!-- quill:cell id=\"c_vwh224nbgzms\" -->\nSome markdown text between cells. Try pressing **e** to edit the code above,\nthen **Escape** to exit, or **Ctrl-Enter** to run.\n\n\n<!-- quill:cell id=\"c_1l7dzrlbekw8\" -->\n```ocaml\nlet square x = x * x\n\nlet () =\n  List.iter\n    (fun n -> Printf.printf \"%d^2 = %d\\n\" n (square n))\n    [1; 2; 3; 4; 5]\n```\n\n<!-- quill:cell id=\"c_q446fmuel0h4\" -->\n```ocaml\nlet rec fib n =\n  if n <= 1 then n\n  else fib (n - 1) + fib (n - 2)\n\nlet () =\n  Printf.printf \"fib(10) = %d\\n\" (fib 10)\n```\n\n<!-- quill:cell id=\"c_m4thd3m0t3xt\" -->\n## Math Equations\n\nText cells support LaTeX math. Inline math uses single dollars: the quadratic formula is $x = \\frac{-b \\pm \\sqrt{b^2 - 4ac}}{2a}$.\n\nDisplay math uses double dollars:\n\n$$\\int_0^\\infty e^{-x^2}\\, dx = \\frac{\\sqrt{\\pi}}{2}$$\n\n$$\\sum_{n=1}^{\\infty} \\frac{1}{n^2} = \\frac{\\pi^2}{6}$$\n\n<!-- quill:cell id=\"c_m4thc0d3c3ll\" -->\n```ocaml\n(* The Euler identity: e^(i*pi) + 1 = 0 *)\nlet () =\n  let open Complex in\n  let e_i_pi = exp { re = 0.; im = Float.pi } in\n  Printf.printf \"e^(iπ) = %.4f + %.4fi\\n\" e_i_pi.re e_i_pi.im\n```\n"
  },
  {
    "path": "packages/quill/examples/mnist.md",
    "content": "<!-- quill:cell id=\"c_mnist_title\" -->\n# MNIST Digit Classification\n\nIn this notebook we train a neural network to recognize handwritten digits from\nthe [MNIST](https://yann.lecun.com/exdb/mnist/) dataset. Each image is a 28x28\ngrayscale picture of a digit (0--9), and the model learns to predict which digit\nit is.\n\nWe use three raven packages:\n- **Nx** -- n-dimensional arrays\n- **Kaun** -- neural network layers, optimizers and training\n- **Hugin** -- plotting and visualization\n\n\n<!-- quill:cell id=\"c_mnist_load\" -->\n## 1. Loading the dataset\n\n`Kaun_datasets.mnist` downloads MNIST the first time and caches it locally.\nIt returns `((x_train, y_train), (x_test, y_test))` — images as float32\nin [0, 1] with shape `[N; 1; 28; 28]`, labels as int32 with shape `[N]`.\n\n```ocaml\nopen Kaun\n\nlet () = Printf.printf \"Loading MNIST...\\n%!\"\nlet (x_train, y_train), (x_test, y_test) = Kaun_datasets.mnist ()\n\nlet () =\n  let s = Nx.shape x_train in\n  Printf.printf \"train: %d images  shape: [%d; %d; %d]\\n\" s.(0) s.(1) s.(2) s.(3);\n  Printf.printf \"test:  %d images\\n\" (Nx.shape x_test).(0)\n```\n\n<!-- quill:cell id=\"c_mnist_viz_text\" -->\n## 2. Visualizing the data\n\nLet's look at the first 10 training images and their labels.\n\n<!-- quill:cell id=\"c_mnist_viz_code\" -->\n```ocaml\nlet _fig =\n  List.init 10 (fun i ->\n    let img = Nx.get [i; 0] x_train |> Nx.reshape [|28; 28|] in\n    let label = Nx.item [i] y_train in\n    Hugin.imshow ~data:img ~cmap:Hugin.Cmap.gray ()\n    |> Hugin.title (Printf.sprintf \"%ld\" label)\n    |> Hugin.no_axes)\n  |> Hugin.hstack ~gap:0.\n```\n\n<!-- quill:cell id=\"c_mnist_model_text\" -->\n## 3. 
Defining the model\n\nWe use a simple multi-layer perceptron (MLP): flatten the 1x28x28 image into a\n784-element vector, pass through a hidden layer with 128 units and ReLU\nactivation, then project to 10 output logits (one per digit class).\n\n<!-- quill:cell id=\"c_mnist_model_code\" -->\n```ocaml\nlet model =\n  Layer.sequential [\n    Layer.flatten ();\n    Layer.linear ~in_features:784 ~out_features:128 ();\n    Layer.relu ();\n    Layer.linear ~in_features:128 ~out_features:10 ();\n  ]\n```\n\n<!-- quill:cell id=\"c_mnist_trainer_text\" -->\n## 4. Setting up the trainer\n\nA `Train.t` pairs the model with an optimizer. We use Adam with a constant\nlearning rate of 0.001. `Train.init` creates initial random weights.\n\n<!-- quill:cell id=\"c_mnist_trainer_code\" -->\n```ocaml\nlet batch_size = 64\n\nlet trainer =\n  Train.make ~model\n    ~optimizer:(Vega.adam (Vega.Schedule.constant 0.001))\n\nlet st = ref (Nx.Rng.run ~seed:42 @@ fun () -> Train.init trainer ~dtype:Nx.float32)\n```\n\n<!-- quill:cell id=\"c_mnist_train_text\" -->\n## 5. Training\n\n`Train.fit` iterates over the data, computing the forward pass, loss, gradients,\nand optimizer update on each batch. 
The `~report` callback prints the current\nloss after every batch -- you should see it decrease in real time.\n\n<!-- quill:cell id=\"c_mnist_train_code\" -->\n```ocaml\nlet epochs = 1\n\nlet () =\n  let n_train = (Nx.shape x_train).(0) in\n  let num_batches = n_train / batch_size in\n  let test_batches = Data.prepare ~batch_size (x_test, y_test) in\n  for epoch = 1 to epochs do\n    let train_data =\n      Nx.Rng.run ~seed:(42 + epoch) @@ fun () ->\n      Data.prepare ~shuffle:true ~batch_size (x_train, y_train)\n      |> Data.map (fun (x, y) ->\n          (x, fun logits -> Loss.cross_entropy_sparse logits y))\n    in\n    let tracker = Metric.tracker () in\n    st :=\n      Train.fit trainer !st\n        ~report:(fun ~step ~loss _st ->\n          Metric.observe tracker \"loss\" loss;\n          Printf.printf \"\\r  epoch %d  batch %d/%d  loss: %.4f%!\" epoch step num_batches loss)\n        train_data;\n    Printf.printf \"\\n%!\";\n\n    Data.reset test_batches;\n    let test_acc =\n      Metric.eval\n        (fun (x, y) ->\n          let logits = Train.predict trainer !st x in\n          Metric.accuracy logits y)\n        test_batches\n    in\n    Printf.printf \"  train_loss: %.4f  test_acc: %.2f%%\\n%!\"\n      (Metric.mean tracker \"loss\") (test_acc *. 100.)\n  done\n```\n\n<!-- quill:cell id=\"c_mnist_eval_text\" -->\n## 6. Evaluating predictions\n\nLet's look at the model's predictions on some test images. 
For each image we\nshow the true label and the predicted label.\n\n<!-- quill:cell id=\"c_mnist_eval_code\" -->\n```ocaml\nlet _fig =\n  List.init 10 (fun i ->\n    let img = Nx.get [i; 0] x_test |> Nx.reshape [|28; 28|] in\n    let true_l = Nx.item [i] y_test in\n    let logits = Train.predict trainer !st (Nx.get [i] x_test |> Nx.expand_dims [0]) in\n    let pred_l = Nx.item [0] (Nx.argmax ~axis:1 logits) in\n    Hugin.imshow ~data:img ~cmap:Hugin.Cmap.gray ()\n    |> Hugin.title (Printf.sprintf \"%ld->%ld\" true_l pred_l)\n    |> Hugin.no_axes)\n  |> Hugin.hstack ~gap:0.\n```\n"
  },
  {
    "path": "packages/quill/lib/quill/cell.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* ───── Identifiers ───── *)\n\ntype id = string\n\nlet () = Random.self_init ()\n\nlet fresh_id () =\n  let n = 12 in\n  let chars = \"abcdefghijklmnopqrstuvwxyz0123456789\" in\n  let b = Bytes.create (n + 2) in\n  Bytes.unsafe_set b 0 'c';\n  Bytes.unsafe_set b 1 '_';\n  for i = 0 to n - 1 do\n    Bytes.unsafe_set b (i + 2) chars.[Random.int 36]\n  done;\n  Bytes.unsafe_to_string b\n\n(* ───── Outputs ───── *)\n\ntype output =\n  | Stdout of string\n  | Stderr of string\n  | Error of string\n  | Display of { mime : string; data : string }\n\ntype Format.stag += Display_tag of { mime : string; data : string }\n\n(* ───── Attributes ───── *)\n\ntype attrs = { collapsed : bool; hide_source : bool }\n\nlet default_attrs = { collapsed = false; hide_source = false }\n\n(* ───── Cells ───── *)\n\ntype t =\n  | Code of {\n      id : id;\n      source : string;\n      language : string;\n      outputs : output list;\n      execution_count : int;\n      attrs : attrs;\n    }\n  | Text of { id : id; source : string; attrs : attrs }\n\nlet code ?id ?(language = \"ocaml\") ?(attrs = default_attrs) source =\n  let id = match id with Some id -> id | None -> fresh_id () in\n  Code { id; source; language; outputs = []; execution_count = 0; attrs }\n\nlet text ?id ?(attrs = default_attrs) source =\n  let id = match id with Some id -> id | None -> fresh_id () in\n  Text { id; source; attrs }\n\nlet id = function Code c -> c.id | Text t -> t.id\nlet source = function Code c -> c.source | Text t -> t.source\nlet attrs = function Code c -> c.attrs | Text t -> t.attrs\n\nlet set_source s = function\n  | Code c -> Code { c with source = s }\n  | Text t -> Text { t with source = s }\n\nlet set_attrs a = function\n  | 
Code c -> Code { c with attrs = a }\n  | Text t -> Text { t with attrs = a }\n\nlet set_outputs os = function\n  | Code c -> Code { c with outputs = os }\n  | Text _ as t -> t\n\nlet apply_cr s =\n  let lines = String.split_on_char '\\n' s in\n  let apply_line line =\n    match String.rindex_opt line '\\r' with\n    | None -> line\n    | Some i -> String.sub line (i + 1) (String.length line - i - 1)\n  in\n  String.concat \"\\n\" (List.map apply_line lines)\n\nlet rec append_or_coalesce o acc = function\n  | [] -> List.rev (o :: acc)\n  | [ Stdout prev ] ->\n      begin match o with\n      | Stdout next -> List.rev (Stdout (apply_cr (prev ^ next)) :: acc)\n      | _ -> List.rev (o :: Stdout prev :: acc)\n      end\n  | out :: rest -> append_or_coalesce o (out :: acc) rest\n\nlet append_output o = function\n  | Code c -> Code { c with outputs = append_or_coalesce o [] c.outputs }\n  | Text _ as t -> t\n\nlet clear_outputs = function\n  | Code c -> Code { c with outputs = [] }\n  | Text _ as t -> t\n\nlet increment_execution_count = function\n  | Code c -> Code { c with execution_count = c.execution_count + 1 }\n  | Text _ as t -> t\n"
  },
  {
    "path": "packages/quill/lib/quill/cell.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Notebook cells.\n\n    A cell is the atomic unit of a notebook: either a block of text or an\n    executable code block with outputs. *)\n\n(** {1:ids Cell identifiers} *)\n\ntype id = string\n(** The type for cell identifiers. Stable across serialization. *)\n\nval fresh_id : unit -> id\n(** [fresh_id ()] is a fresh unique identifier. *)\n\n(** {1:outputs Execution outputs} *)\n\ntype output =\n  | Stdout of string\n  | Stderr of string\n  | Error of string\n  | Display of { mime : string; data : string }\n      (** The type for cell execution outputs. A single execution may produce\n          multiple outputs (e.g. stdout text followed by a displayed image).\n\n          - [Stdout s] is captured standard output.\n          - [Stderr s] is captured standard error.\n          - [Error s] is an execution error message.\n          - [Display {mime; data}] is rich content identified by MIME type (e.g.\n            [\"text/html\"], [\"image/png\"]). Binary data is base64-encoded in\n            [data]. *)\n\ntype Format.stag +=\n  | Display_tag of { mime : string; data : string }\n        (** Semantic tag for rich display output. When a pretty-printer opens\n            this tag on a formatter configured by the notebook kernel, the\n            payload is emitted as a {!Display} output. On other formatters the\n            tag is silently ignored and only the text content between the\n            open/close tags is printed. *)\n\n(** {1:attrs Cell attributes} *)\n\ntype attrs = {\n  collapsed : bool;\n      (** When [true], the cell renders as a compact one-line bar. Source and\n          outputs are hidden until the user expands the cell. Purely\n          presentational — execution is unaffected. 
*)\n  hide_source : bool;\n      (** When [true], the code editor is folded away and only outputs are\n          visible. Clicking the placeholder reveals the source. Only meaningful\n          for code cells. Purely presentational — execution is unaffected. *)\n}\n(** The type for cell display attributes. All attributes are purely\n    presentational and never affect execution semantics. *)\n\nval default_attrs : attrs\n(** [default_attrs] is [{collapsed = false; hide_source = false}]. *)\n\n(** {1:cells Cells} *)\n\ntype t = private\n  | Code of {\n      id : id;\n      source : string;\n      language : string;\n      outputs : output list;\n      execution_count : int;\n      attrs : attrs;\n    }\n  | Text of { id : id; source : string; attrs : attrs }\n      (** The type for notebook cells.\n\n          - [Code] is an executable code cell. [language] identifies the kernel\n            (e.g. [\"ocaml\"]). [execution_count] tracks how many times this cell\n            has been executed (starts at [0]).\n          - [Text] is a text cell whose [source] is markdown.\n\n          The type is private: pattern matching is allowed, but cells must be\n          constructed via {!code} and {!text}. *)\n\n(** {1:constructors Constructors} *)\n\nval code : ?id:id -> ?language:string -> ?attrs:attrs -> string -> t\n(** [code ?id ?language ?attrs source] is a code cell with the given [source].\n    [language] defaults to [\"ocaml\"]. [attrs] defaults to {!default_attrs}. A\n    fresh identifier is generated when [id] is not provided. *)\n\nval text : ?id:id -> ?attrs:attrs -> string -> t\n(** [text ?id ?attrs source] is a text cell with the given [source]. [attrs]\n    defaults to {!default_attrs}. A fresh identifier is generated when [id] is\n    not provided. *)\n\n(** {1:accessors Accessors} *)\n\nval id : t -> id\n(** [id c] is the unique identifier of cell [c]. *)\n\nval source : t -> string\n(** [source c] is the source text of cell [c]. 
*)\n\nval attrs : t -> attrs\n(** [attrs c] is the display attributes of cell [c]. *)\n\n(** {1:transformations Transformations} *)\n\nval set_source : string -> t -> t\n(** [set_source s c] is [c] with source replaced by [s]. *)\n\nval set_attrs : attrs -> t -> t\n(** [set_attrs a c] is [c] with display attributes replaced by [a]. *)\n\nval set_outputs : output list -> t -> t\n(** [set_outputs os c] is [c] with outputs replaced by [os]. Text cells are\n    returned unchanged. *)\n\nval append_output : output -> t -> t\n(** [append_output o c] appends [o] to the outputs of [c]. Text cells are\n    returned unchanged. *)\n\nval clear_outputs : t -> t\n(** [clear_outputs c] is [c] with an empty output list. Text cells are returned\n    unchanged. *)\n\nval increment_execution_count : t -> t\n(** [increment_execution_count c] increments the execution count of a code cell.\n    Text cells are returned unchanged. *)\n"
  },
  {
    "path": "packages/quill/lib/quill/doc.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nmodule Id_map = Map.Make (String)\n\ntype t = {\n  order : Cell.id list;\n  by_id : Cell.t Id_map.t;\n  metadata : (string * string) list;\n}\n\nlet empty () = { order = []; by_id = Id_map.empty; metadata = [] }\n\nlet of_cells ?(metadata = []) cs =\n  let by_id =\n    List.fold_left (fun m c -> Id_map.add (Cell.id c) c m) Id_map.empty cs\n  in\n  let order = List.map Cell.id cs in\n  { order; by_id; metadata }\n\n(* ───── Accessors ───── *)\n\nlet cells d = List.filter_map (fun id -> Id_map.find_opt id d.by_id) d.order\nlet length d = List.length d.order\nlet metadata d = d.metadata\nlet set_metadata metadata d = { d with metadata }\n\nlet nth i d =\n  let rec loop j = function\n    | [] -> None\n    | id :: rest ->\n        if j = i then Id_map.find_opt id d.by_id else loop (j + 1) rest\n  in\n  if i < 0 then None else loop 0 d.order\n\nlet find id d = Id_map.find_opt id d.by_id\n\nlet find_index id d =\n  let rec loop i = function\n    | [] -> None\n    | hd :: rest -> if String.equal hd id then Some i else loop (i + 1) rest\n  in\n  loop 0 d.order\n\n(* ───── Modifications ───── *)\n\nlet insert ~pos cell d =\n  let id = Cell.id cell in\n  let by_id = Id_map.add id cell d.by_id in\n  let pos = max 0 pos in\n  let rec loop i acc = function\n    | rest when i = pos -> List.rev_append acc (id :: rest)\n    | hd :: rest -> loop (i + 1) (hd :: acc) rest\n    | [] -> List.rev (id :: acc)\n  in\n  { d with order = loop 0 [] d.order; by_id }\n\nlet remove id d =\n  if not (Id_map.mem id d.by_id) then d\n  else\n    let by_id = Id_map.remove id d.by_id in\n    let order = List.filter (fun i -> not (String.equal i id)) d.order in\n    { d with order; by_id }\n\nlet replace id cell d =\n  if not 
(Id_map.mem id d.by_id) then d\n  else\n    let new_id = Cell.id cell in\n    let by_id = Id_map.remove id d.by_id in\n    let by_id = Id_map.add new_id cell by_id in\n    let order =\n      if String.equal id new_id then d.order\n      else List.map (fun i -> if String.equal i id then new_id else i) d.order\n    in\n    { d with order; by_id }\n\nlet move id ~pos d =\n  match find_index id d with\n  | None -> d\n  | Some i ->\n      if i = pos then d\n      else\n        let order = List.filter (fun x -> not (String.equal x id)) d.order in\n        let pos = if pos > i then pos - 1 else pos in\n        let pos = max 0 pos in\n        let rec loop j acc = function\n          | rest when j = pos -> List.rev_append acc (id :: rest)\n          | hd :: rest -> loop (j + 1) (hd :: acc) rest\n          | [] -> List.rev (id :: acc)\n        in\n        { d with order = loop 0 [] order }\n\nlet update id f d =\n  match find id d with None -> d | Some c -> replace id (f c) d\n\nlet clear_all_outputs d =\n  let by_id = Id_map.map Cell.clear_outputs d.by_id in\n  { d with by_id }\n"
  },
  {
    "path": "packages/quill/lib/quill/doc.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Notebook documents.\n\n    A document is an ordered sequence of {!Cell.t} values. Operations maintain\n    cell ordering and identity. *)\n\n(** {1:documents Documents} *)\n\ntype t\n(** The type for notebook documents. *)\n\nval empty : unit -> t\n(** [empty ()] is a document with no cells. *)\n\nval of_cells : ?metadata:(string * string) list -> Cell.t list -> t\n(** [of_cells ?metadata cs] is a document containing [cs] in order with the\n    given [metadata] (defaults to [[]]). *)\n\n(** {1:accessors Accessors} *)\n\nval cells : t -> Cell.t list\n(** [cells d] is the ordered list of cells in [d]. *)\n\nval length : t -> int\n(** [length d] is the number of cells in [d]. *)\n\nval metadata : t -> (string * string) list\n(** [metadata d] is the document-level metadata of [d]. *)\n\nval set_metadata : (string * string) list -> t -> t\n(** [set_metadata m d] is [d] with metadata replaced by [m]. *)\n\nval nth : int -> t -> Cell.t option\n(** [nth i d] is the [i]th cell (zero-indexed), or [None]. *)\n\nval find : Cell.id -> t -> Cell.t option\n(** [find id d] is the cell with identifier [id] in [d], or [None]. *)\n\nval find_index : Cell.id -> t -> int option\n(** [find_index id d] is the zero-based index of cell [id] in [d]. *)\n\n(** {1:modifications Modifications} *)\n\nval insert : pos:int -> Cell.t -> t -> t\n(** [insert ~pos c d] inserts [c] at position [pos]. Cells at [pos] and beyond\n    shift right. [pos] is clamped to [[0, length d]]. *)\n\nval remove : Cell.id -> t -> t\n(** [remove id d] removes the cell with identifier [id] from [d]. Returns [d]\n    unchanged if [id] is not found. 
*)\n\nval replace : Cell.id -> Cell.t -> t -> t\n(** [replace id c d] replaces the cell identified by [id] with [c]. Returns [d]\n    unchanged if [id] is not found. *)\n\nval move : Cell.id -> pos:int -> t -> t\n(** [move id ~pos d] moves the cell [id] to position [pos]. *)\n\nval update : Cell.id -> (Cell.t -> Cell.t) -> t -> t\n(** [update id f d] applies [f] to the cell identified by [id]. *)\n\nval clear_all_outputs : t -> t\n(** [clear_all_outputs d] clears outputs from all code cells. *)\n"
  },
  {
    "path": "packages/quill/lib/quill/dune",
    "content": "(library\n (name quill)\n (public_name quill))\n"
  },
  {
    "path": "packages/quill/lib/quill/eval.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nlet run ~create_kernel doc =\n  let doc = ref doc in\n  let on_event = function\n    | Kernel.Output { cell_id; output } ->\n        doc := Doc.update cell_id (Cell.append_output output) !doc\n    | _ -> ()\n  in\n  let (kernel : Kernel.t) = create_kernel ~on_event in\n  List.iter\n    (fun cell ->\n      match cell with\n      | Cell.Code { id; source; _ } ->\n          kernel.execute ~cell_id:id ~code:source;\n          doc := Doc.update id Cell.increment_execution_count !doc\n      | Cell.Text _ -> ())\n    (Doc.cells !doc);\n  kernel.shutdown ();\n  !doc\n"
  },
  {
    "path": "packages/quill/lib/quill/eval.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Batch evaluation.\n\n    Evaluate all code cells in a document sequentially, collecting outputs. *)\n\nval run :\n  create_kernel:(on_event:(Kernel.event -> unit) -> Kernel.t) -> Doc.t -> Doc.t\n(** [run ~create_kernel doc] creates a kernel, executes all code cells in [doc]\n    in order, collects outputs into each cell, shuts down the kernel, and\n    returns the updated document. *)\n"
  },
  {
    "path": "packages/quill/lib/quill/kernel.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\ntype status = Starting | Idle | Busy | Shutting_down\n\ntype event =\n  | Output of { cell_id : Cell.id; output : Cell.output }\n  | Finished of { cell_id : Cell.id; success : bool }\n  | Status_changed of status\n\ntype completion_kind =\n  | Value\n  | Type\n  | Module\n  | Module_type\n  | Constructor\n  | Label\n\ntype completion_item = {\n  label : string;\n  kind : completion_kind;\n  detail : string;\n}\n\ntype severity = Error | Warning\n\ntype diagnostic = {\n  from_pos : int;\n  to_pos : int;\n  severity : severity;\n  message : string;\n}\n\ntype type_info = {\n  typ : string;\n  doc : string option;\n  from_pos : int;\n  to_pos : int;\n}\n\ntype t = {\n  execute : cell_id:Cell.id -> code:string -> unit;\n  interrupt : unit -> unit;\n  complete : code:string -> pos:int -> completion_item list;\n  type_at : (code:string -> pos:int -> type_info option) option;\n  diagnostics : (code:string -> diagnostic list) option;\n  is_complete : (string -> bool) option;\n  status : unit -> status;\n  shutdown : unit -> unit;\n}\n"
  },
  {
    "path": "packages/quill/lib/quill/kernel.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Code execution kernels.\n\n    A kernel executes code and produces outputs. The interface is abstract to\n    support different backends: in-process toplevels, subprocess-based kernels,\n    remote kernels, etc. *)\n\n(** {1:status Kernel status} *)\n\ntype status =\n  | Starting\n  | Idle\n  | Busy\n  | Shutting_down  (** The type for kernel lifecycle status. *)\n\n(** {1:events Kernel events} *)\n\ntype event =\n  | Output of { cell_id : Cell.id; output : Cell.output }\n  | Finished of { cell_id : Cell.id; success : bool }\n  | Status_changed of status\n      (** The type for kernel events.\n\n          - [Output] is emitted for each piece of output during execution.\n          - [Finished] signals that execution of a cell has completed.\n          - [Status_changed] signals a kernel lifecycle change. *)\n\n(** {1:completions Completions} *)\n\ntype completion_kind =\n  | Value\n  | Type\n  | Module\n  | Module_type\n  | Constructor\n  | Label  (** The type for completion item kinds. *)\n\ntype completion_item = {\n  label : string;\n  kind : completion_kind;\n  detail : string;\n}\n(** The type for completion items. [label] is the identifier name, [kind]\n    classifies it, and [detail] is a formatted type signature. *)\n\n(** {1:intellisense Intellisense} *)\n\ntype severity =\n  | Error\n  | Warning  (** The type for diagnostic severity levels. *)\n\ntype diagnostic = {\n  from_pos : int;\n  to_pos : int;\n  severity : severity;\n  message : string;\n}\n(** The type for diagnostics. Positions are byte offsets within the cell. *)\n\ntype type_info = {\n  typ : string;\n  doc : string option;\n  from_pos : int;\n  to_pos : int;\n}\n(** The type for type-at-position results. 
[typ] is the formatted type, [doc] is\n    the optional documentation string, and positions delimit the expression\n    span. *)\n\n(** {1:kernel Kernel interface} *)\n\ntype t = {\n  execute : cell_id:Cell.id -> code:string -> unit;\n  interrupt : unit -> unit;\n  complete : code:string -> pos:int -> completion_item list;\n  type_at : (code:string -> pos:int -> type_info option) option;\n  diagnostics : (code:string -> diagnostic list) option;\n  is_complete : (string -> bool) option;\n  status : unit -> status;\n  shutdown : unit -> unit;\n}\n(** The type for kernel handles.\n\n    - [execute ~cell_id ~code] submits code for execution. Results are delivered\n      as {!event} values through the callback registered at kernel creation\n      time.\n    - [interrupt ()] requests interruption of the current execution.\n    - [complete ~code ~pos] returns completion candidates at the given cursor\n      position in [code].\n    - [type_at] when [Some f], [f ~code ~pos] returns type information at the\n      given cursor position.\n    - [diagnostics] when [Some f], [f ~code] returns parse and type errors.\n    - [is_complete] when [Some f], [f code] returns [true] if [code] contains a\n      complete toplevel phrase ready for execution.\n    - [status ()] returns the current kernel status.\n    - [shutdown ()] initiates graceful shutdown. *)\n"
  },
  {
    "path": "packages/quill/lib/quill/quill.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nmodule Cell = Cell\nmodule Doc = Doc\nmodule Kernel = Kernel\nmodule Eval = Eval\nmodule Session = Session\n"
  },
  {
    "path": "packages/quill/lib/quill/quill.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Quill -- notebook core library.\n\n    Quill provides the foundational types and protocol for building notebook\n    applications. It is frontend-agnostic: web, TUI, and desktop frontends can\n    all be built on the core.\n\n    {1 Overview}\n\n    A notebook is a {!Doc.t} containing an ordered sequence of {!Cell.t} values.\n    Each cell is either text or executable code with outputs.\n\n    Code execution is handled by a {!Kernel.t}, an abstract interface that\n    supports different backends (OCaml toplevel, subprocess, remote).\n\n    A {!Session.t} manages the document and kernel together, processing\n    {!Session.request} values from frontends and producing\n    {!Session.notification} values.\n\n    For batch evaluation of notebooks without an interactive session, see\n    {!Eval}.\n\n    For organizing notebooks into projects, see {!Quill_project}.\n\n    {1 Modules} *)\n\nmodule Cell = Cell\nmodule Doc = Doc\nmodule Kernel = Kernel\nmodule Eval = Eval\nmodule Session = Session\n"
  },
  {
    "path": "packages/quill/lib/quill/session.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nmodule Id_map = Map.Make (String)\n\n(* ───── Types ───── *)\n\ntype cell_status = Idle | Queued | Running\n\n(* ───── History ───── *)\n\ntype history = {\n  past : Doc.t list;\n  future : Doc.t list;\n  count : int;\n  capacity : int;\n}\n\nlet empty_history capacity = { past = []; future = []; count = 0; capacity }\n\nlet take n xs =\n  let rec loop acc n = function\n    | [] -> List.rev acc\n    | _ when n <= 0 -> List.rev acc\n    | x :: rest -> loop (x :: acc) (n - 1) rest\n  in\n  loop [] n xs\n\nlet push_history doc h =\n  let past = doc :: h.past in\n  if h.count >= h.capacity then\n    { h with past = take h.capacity past; future = []; count = h.capacity }\n  else { h with past; future = []; count = h.count + 1 }\n\n(* ───── Session ───── *)\n\ntype t = {\n  doc : Doc.t;\n  last_checkpoint : Doc.t;\n  statuses : cell_status Id_map.t;\n  history : history;\n}\n\nlet create ?(history_capacity = 100) doc =\n  {\n    doc;\n    last_checkpoint = doc;\n    statuses = Id_map.empty;\n    history = empty_history history_capacity;\n  }\n\nlet doc s = s.doc\n\nlet cell_status id s =\n  match Id_map.find_opt id s.statuses with Some st -> st | None -> Idle\n\nlet can_undo s = s.history.past <> []\nlet can_redo s = s.history.future <> []\n\n(* ───── Document operations ───── *)\n\nlet update_source cell_id source s =\n  match Doc.find cell_id s.doc with\n  | Some c ->\n      let doc = Doc.replace cell_id (Cell.set_source source c) s.doc in\n      { s with doc }\n  | None -> s\n\nlet checkpoint s =\n  if s.doc == s.last_checkpoint then s\n  else\n    let history = push_history s.last_checkpoint s.history in\n    { s with last_checkpoint = s.doc; history }\n\nlet with_history_push f s =\n  let s = 
checkpoint s in\n  let history = push_history s.doc s.history in\n  let doc = f s.doc in\n  { s with doc; last_checkpoint = doc; history }\n\nlet insert_cell ~pos cell s = with_history_push (Doc.insert ~pos cell) s\nlet remove_cell cell_id s = with_history_push (Doc.remove cell_id) s\nlet move_cell cell_id ~pos s = with_history_push (Doc.move cell_id ~pos) s\n\nlet clear_outputs cell_id s =\n  let doc = Doc.update cell_id Cell.clear_outputs s.doc in\n  { s with doc }\n\nlet clear_all_outputs s =\n  let doc = Doc.clear_all_outputs s.doc in\n  { s with doc }\n\nlet set_cell_kind cell_id kind s =\n  match Doc.find cell_id s.doc with\n  | Some c ->\n      with_history_push\n        (fun doc ->\n          let src = Cell.source c in\n          let id = Cell.id c in\n          let attrs = Cell.attrs c in\n          let c' =\n            match kind with\n            | `Code -> Cell.code ~id ~attrs src\n            | `Text -> Cell.text ~id ~attrs src\n          in\n          Doc.replace cell_id c' doc)\n        s\n  | None -> s\n\nlet set_cell_attrs cell_id attrs s =\n  match Doc.find cell_id s.doc with\n  | Some c ->\n      with_history_push\n        (fun doc -> Doc.replace cell_id (Cell.set_attrs attrs c) doc)\n        s\n  | None -> s\n\n(* ───── Execution state ───── *)\n\nlet mark_running cell_id s =\n  { s with statuses = Id_map.add cell_id Running s.statuses }\n\nlet mark_queued cell_id s =\n  { s with statuses = Id_map.add cell_id Queued s.statuses }\n\nlet mark_idle cell_id s =\n  { s with statuses = Id_map.add cell_id Idle s.statuses }\n\nlet apply_output cell_id output s =\n  let doc = Doc.update cell_id (Cell.append_output output) s.doc in\n  { s with doc }\n\nlet finish_execution cell_id ~success:_ s =\n  let doc = Doc.update cell_id Cell.increment_execution_count s.doc in\n  { s with doc; statuses = Id_map.add cell_id Idle s.statuses }\n\n(* ───── History ───── *)\n\nlet undo s =\n  match s.history.past with\n  | prev :: rest ->\n      let history =\n        
{\n          s.history with\n          past = rest;\n          future = s.doc :: s.history.future;\n          count = s.history.count - 1;\n        }\n      in\n      { s with doc = prev; last_checkpoint = prev; history }\n  | [] -> s\n\nlet redo s =\n  match s.history.future with\n  | next :: rest ->\n      let history =\n        {\n          s.history with\n          past = s.doc :: s.history.past;\n          future = rest;\n          count = s.history.count + 1;\n        }\n      in\n      { s with doc = next; last_checkpoint = next; history }\n  | [] -> s\n\n(* ───── Reload ───── *)\n\nlet reload doc s =\n  {\n    doc;\n    last_checkpoint = doc;\n    statuses = Id_map.empty;\n    history = empty_history s.history.capacity;\n  }\n"
  },
  {
    "path": "packages/quill/lib/quill/session.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Interactive notebook sessions.\n\n    A session manages document state with undo/redo history and transient cell\n    execution statuses. It is purely functional: all operations return a new\n    session value.\n\n    Sessions do not own a kernel. The caller is responsible for driving kernel\n    execution and feeding results back via {!apply_output} and\n    {!finish_execution}. *)\n\n(** {1:status Cell status} *)\n\ntype cell_status =\n  | Idle\n  | Queued\n  | Running  (** The type for transient cell execution status. *)\n\n(** {1:sessions Sessions} *)\n\ntype t\n(** The type for notebook sessions. *)\n\nval create : ?history_capacity:int -> Doc.t -> t\n(** [create ?history_capacity doc] creates a session from [doc].\n    [history_capacity] defaults to [100]. *)\n\n(** {1:accessors Accessors} *)\n\nval doc : t -> Doc.t\n(** [doc s] is the current document of session [s]. *)\n\nval cell_status : Cell.id -> t -> cell_status\n(** [cell_status id s] is the execution status of cell [id] in [s]. *)\n\nval can_undo : t -> bool\n(** [can_undo s] is [true] if an undo operation is available. *)\n\nval can_redo : t -> bool\n(** [can_redo s] is [true] if a redo operation is available. *)\n\n(** {1:document Document operations}\n\n    Structural operations ({!insert_cell}, {!remove_cell}, {!move_cell},\n    {!set_cell_kind}) record undo history automatically. Source edits via\n    {!update_source} do not -- call {!checkpoint} when the edit sequence is\n    complete. *)\n\nval update_source : Cell.id -> string -> t -> t\n(** [update_source id source s] updates the source of cell [id]. Does not record\n    undo history. Call {!checkpoint} when the edit sequence ends. 
*)\n\nval checkpoint : t -> t\n(** [checkpoint s] saves the current document to the undo history. Call this at\n    natural boundaries: before execution, before save, on cell focus change.\n    No-op if the document hasn't changed since the last checkpoint. *)\n\nval insert_cell : pos:int -> Cell.t -> t -> t\n(** [insert_cell ~pos cell s] inserts [cell] at position [pos]. *)\n\nval remove_cell : Cell.id -> t -> t\n(** [remove_cell id s] removes the cell with identifier [id]. *)\n\nval move_cell : Cell.id -> pos:int -> t -> t\n(** [move_cell id ~pos s] moves cell [id] to position [pos]. *)\n\nval clear_outputs : Cell.id -> t -> t\n(** [clear_outputs id s] clears the outputs of cell [id]. *)\n\nval clear_all_outputs : t -> t\n(** [clear_all_outputs s] clears outputs from all code cells. *)\n\nval set_cell_kind : Cell.id -> [ `Code | `Text ] -> t -> t\n(** [set_cell_kind id kind s] changes cell [id] to the given [kind]. *)\n\nval set_cell_attrs : Cell.id -> Cell.attrs -> t -> t\n(** [set_cell_attrs id attrs s] sets the display attributes of cell [id]. *)\n\n(** {1:execution Execution state}\n\n    Update transient cell status. These do not touch the kernel -- the caller is\n    responsible for driving kernel execution. *)\n\nval mark_running : Cell.id -> t -> t\n(** [mark_running id s] marks cell [id] as running. *)\n\nval mark_queued : Cell.id -> t -> t\n(** [mark_queued id s] marks cell [id] as queued. *)\n\nval mark_idle : Cell.id -> t -> t\n(** [mark_idle id s] marks cell [id] as idle. *)\n\nval apply_output : Cell.id -> Cell.output -> t -> t\n(** [apply_output id output s] appends [output] to cell [id] in the document.\n    The output is visible immediately via {!doc}. *)\n\nval finish_execution : Cell.id -> success:bool -> t -> t\n(** [finish_execution id ~success s] marks cell [id] as idle and increments its\n    execution count. *)\n\n(** {1:history History} *)\n\nval undo : t -> t\n(** [undo s] restores the previous document state. 
Returns [s] unchanged if no\n    undo is available. *)\n\nval redo : t -> t\n(** [redo s] restores the next document state. Returns [s] unchanged if no redo\n    is available. *)\n\n(** {1:reload Reload} *)\n\nval reload : Doc.t -> t -> t\n(** [reload doc s] replaces the document, clearing history and statuses. *)\n"
  },
  {
    "path": "packages/quill/lib/quill-book/build.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* ───── File utilities ───── *)\n\nlet read_file path =\n  let ic = open_in path in\n  Fun.protect\n    ~finally:(fun () -> close_in ic)\n    (fun () -> really_input_string ic (in_channel_length ic))\n\nlet write_file path content =\n  let oc = open_out path in\n  Fun.protect\n    ~finally:(fun () -> close_out oc)\n    (fun () -> output_string oc content)\n\nlet rec mkdir_p dir =\n  if Sys.file_exists dir then ()\n  else (\n    mkdir_p (Filename.dirname dir);\n    try Unix.mkdir dir 0o755 with Unix.Unix_error (Unix.EEXIST, _, _) -> ())\n\nlet copy_file ~src ~dst =\n  let ic = open_in_bin src in\n  Fun.protect\n    ~finally:(fun () -> close_in ic)\n    (fun () ->\n      let oc = open_out_bin dst in\n      Fun.protect\n        ~finally:(fun () -> close_out oc)\n        (fun () ->\n          let buf = Bytes.create 8192 in\n          let rec loop () =\n            let n = input ic buf 0 8192 in\n            if n > 0 then (\n              output oc buf 0 n;\n              loop ())\n          in\n          loop ()))\n\nlet rec copy_dir_contents ~src_dir ~dst_dir =\n  if Sys.file_exists src_dir && Sys.is_directory src_dir then (\n    mkdir_p dst_dir;\n    let entries = Sys.readdir src_dir in\n    Array.iter\n      (fun name ->\n        let src = Filename.concat src_dir name in\n        let dst = Filename.concat dst_dir name in\n        if not (Sys.is_directory src) then copy_file ~src ~dst\n        else copy_dir_contents ~src_dir:src ~dst_dir:dst)\n      entries)\n\n(* ───── Path computation ───── *)\n\nlet notebook_dir (project : Quill_project.t) (nb : Quill_project.notebook) =\n  let dir = Filename.dirname nb.path in\n  if dir = \".\" then project.root else Filename.concat project.root dir\n\nlet 
prelude_path (project : Quill_project.t) (nb : Quill_project.notebook) =\n  let dir = notebook_dir project nb in\n  let path = Filename.concat dir \"prelude.ml\" in\n  if Sys.file_exists path then Some path else None\n\nlet relative_root_path (nb : Quill_project.notebook) =\n  let dir = Filename.dirname nb.path in\n  if dir = \".\" then \"./\"\n  else\n    let parts = String.split_on_char '/' dir in\n    let depth =\n      List.length (List.filter (fun s -> s <> \"\" && s <> \".\") parts)\n    in\n    if depth = 0 then \"./\"\n    else String.concat \"\" (List.init depth (fun _ -> \"../\"))\n\n(* ───── Build ───── *)\n\nlet build_notebook ~create_kernel ~skip_eval ~output_dir ~live_reload_script\n    (project : Quill_project.t) (nb : Quill_project.notebook) =\n  let nb_path = Filename.concat project.root nb.path in\n  let nb_dir = notebook_dir project nb in\n  let md = read_file nb_path in\n  let doc = Quill_markdown.of_string md in\n  let doc =\n    if skip_eval then doc\n    else\n      let create_kernel ~on_event =\n        let k = create_kernel ~on_event in\n        (match prelude_path project nb with\n        | Some p ->\n            let code = read_file p in\n            k.Quill.Kernel.execute ~cell_id:\"__prelude__\" ~code\n        | None -> ());\n        k\n      in\n      let prev_cwd = Sys.getcwd () in\n      Sys.chdir nb_dir;\n      Fun.protect\n        ~finally:(fun () -> Sys.chdir prev_cwd)\n        (fun () ->\n          let doc = Quill.Doc.clear_all_outputs doc in\n          Quill.Eval.run ~create_kernel doc)\n  in\n  let content = Render.chapter_html doc in\n  let root_path = relative_root_path nb in\n  let toc = Render.toc_html project ~current:nb ~root_path in\n  let prev =\n    match Quill_project.prev_notebook project nb with\n    | Some p -> Some (root_path ^ Render.notebook_output_path p, p.title)\n    | None -> None\n  in\n  let next =\n    match Quill_project.next_notebook project nb with\n    | Some n -> Some (root_path ^ 
Render.notebook_output_path n, n.title)\n    | None -> None\n  in\n  let edit_url =\n    match project.config.edit_url with\n    | Some base -> Some (base ^ nb.path)\n    | None -> None\n  in\n  let html =\n    Render.page_html ~book_title:project.title ~chapter_title:nb.title\n      ~toc_html:toc ~prev ~next ~root_path ~content ~edit_url\n      ~live_reload_script\n  in\n  let output_path =\n    Filename.concat output_dir (Render.notebook_output_path nb)\n  in\n  mkdir_p (Filename.dirname output_path);\n  write_file output_path html;\n  let asset_dirs = [ \"figures\"; \"images\"; \"assets\" ] in\n  List.iter\n    (fun name ->\n      let src = Filename.concat nb_dir name in\n      let dst =\n        Filename.concat output_dir\n          (Filename.concat (Filename.dirname nb.path) name)\n      in\n      copy_dir_contents ~src_dir:src ~dst_dir:dst)\n    asset_dirs;\n  Printf.printf \"  %s\\n%!\" nb.title;\n  content\n\n(* ───── Search index ───── *)\n\nlet json_escape_string s =\n  let buf = Buffer.create (String.length s + 16) in\n  Buffer.add_char buf '\"';\n  String.iter\n    (function\n      | '\"' -> Buffer.add_string buf {|\\\"|}\n      | '\\\\' -> Buffer.add_string buf {|\\\\|}\n      | '\\n' -> Buffer.add_string buf {|\\n|}\n      | '\\r' -> Buffer.add_string buf {|\\r|}\n      | '\\t' -> Buffer.add_string buf {|\\t|}\n      | c -> Buffer.add_char buf c)\n    s;\n  Buffer.add_char buf '\"';\n  Buffer.contents buf\n\nlet search_entry ~title ~url ~body =\n  Printf.sprintf {|{\"title\":%s,\"url\":%s,\"body\":%s}|} (json_escape_string title)\n    (json_escape_string url) (json_escape_string body)\n\nlet build_search_index ~output_dir ~toc\n    (notebooks : (Quill_project.notebook * string) list) =\n  let entries =\n    List.map\n      (fun (nb, content_html) ->\n        let number_prefix =\n          match Quill_project.number_string (Quill_project.number toc nb) with\n          | \"\" -> \"\"\n          | s -> s ^ \". 
\"\n        in\n        let title = number_prefix ^ nb.title in\n        let url = Render.notebook_output_path nb in\n        let body = Render.strip_html_tags content_html in\n        search_entry ~title ~url ~body)\n      notebooks\n  in\n  let json = \"[\" ^ String.concat \",\" entries ^ \"]\" in\n  write_file (Filename.concat output_dir \"searchindex.json\") json\n\nlet build_print_page ~output_dir ~toc (project : Quill_project.t)\n    (notebooks : (Quill_project.notebook * string) list) =\n  let chapter_pairs =\n    List.map\n      (fun (nb, content_html) ->\n        let number_prefix =\n          match Quill_project.number_string (Quill_project.number toc nb) with\n          | \"\" -> \"\"\n          | s -> s ^ \". \"\n        in\n        (number_prefix ^ nb.title, content_html))\n      notebooks\n  in\n  let html =\n    Render.print_page_html ~book_title:project.title ~chapters:chapter_pairs\n  in\n  write_file (Filename.concat output_dir \"print.html\") html\n\nlet build_index ~output_dir (project : Quill_project.t) ~live_reload_script =\n  match Quill_project.notebooks project with\n  | [] -> ()\n  | first :: _ ->\n      let url = Render.notebook_output_path first in\n      let html =\n        Printf.sprintf\n          {|<!DOCTYPE html>\n<html><head>\n<meta charset=\"utf-8\">\n<meta http-equiv=\"refresh\" content=\"0; url=%s\">\n<title>%s</title>\n</head>\n<body><p>Redirecting to <a href=\"%s\">%s</a>...</p>%s</body>\n</html>|}\n          (Render.escape_html url)\n          (Render.escape_html project.title)\n          (Render.escape_html url)\n          (Render.escape_html first.title)\n          live_reload_script\n      in\n      write_file (Filename.concat output_dir \"index.html\") html\n\nlet build ~create_kernel ?(skip_eval = false) ?output ?(live_reload_script = \"\")\n    (project : Quill_project.t) =\n  let output_dir =\n    match output with\n    | Some dir -> dir\n    | None -> Filename.concat project.root \"build\"\n  in\n  mkdir_p 
output_dir;\n  write_file (Filename.concat output_dir \"style.css\") Theme.style_css;\n  let nbs = Quill_project.notebooks project in\n  Printf.printf \"Building %s (%d notebooks)\\n%!\" project.title (List.length nbs);\n  let notebook_contents =\n    List.map\n      (fun nb ->\n        let content =\n          build_notebook ~create_kernel ~skip_eval ~output_dir\n            ~live_reload_script project nb\n        in\n        (nb, content))\n      nbs\n  in\n  build_search_index ~output_dir ~toc:project.toc notebook_contents;\n  build_print_page ~output_dir ~toc:project.toc project notebook_contents;\n  build_index ~output_dir project ~live_reload_script;\n  Printf.printf \"Done → %s\\n%!\" output_dir\n\nlet build_file ~create_kernel ?(skip_eval = false) ?output\n    ?(live_reload_script = \"\") path =\n  let abs_path =\n    if Filename.is_relative path then Filename.concat (Sys.getcwd ()) path\n    else path\n  in\n  let nb_dir = Filename.dirname abs_path in\n  let basename = Filename.basename abs_path in\n  let title = Quill_project.title_of_filename basename in\n  let md = read_file abs_path in\n  let doc = Quill_markdown.of_string md in\n  let doc =\n    if skip_eval then doc\n    else\n      let create_kernel ~on_event =\n        let k = create_kernel ~on_event in\n        let prelude = Filename.concat nb_dir \"prelude.ml\" in\n        (if Sys.file_exists prelude then\n           let code = read_file prelude in\n           k.Quill.Kernel.execute ~cell_id:\"__prelude__\" ~code);\n        k\n      in\n      let prev_cwd = Sys.getcwd () in\n      Sys.chdir nb_dir;\n      Fun.protect\n        ~finally:(fun () -> Sys.chdir prev_cwd)\n        (fun () ->\n          let doc = Quill.Doc.clear_all_outputs doc in\n          Quill.Eval.run ~create_kernel doc)\n  in\n  let content = Render.chapter_html doc in\n  let html = Render.standalone_page_html ~title ~content ~live_reload_script in\n  let output_dir = match output with Some dir -> dir | None -> nb_dir in\n  mkdir_p 
output_dir;\n  let html_name = Filename.remove_extension basename ^ \".html\" in\n  let output_path = Filename.concat output_dir html_name in\n  write_file output_path html;\n  if output_dir <> nb_dir then begin\n    let asset_dirs = [ \"figures\"; \"images\"; \"assets\" ] in\n    List.iter\n      (fun name ->\n        let src = Filename.concat nb_dir name in\n        let dst = Filename.concat output_dir name in\n        copy_dir_contents ~src_dir:src ~dst_dir:dst)\n      asset_dirs\n  end;\n  Printf.printf \"Built %s → %s\\n%!\" title output_path\n"
  },
  {
    "path": "packages/quill/lib/quill-book/build.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Project build pipeline.\n\n    Executes code cells in each notebook, renders to HTML, and writes the output\n    as a static site. *)\n\nval build :\n  create_kernel:(on_event:(Quill.Kernel.event -> unit) -> Quill.Kernel.t) ->\n  ?skip_eval:bool ->\n  ?output:string ->\n  ?live_reload_script:string ->\n  Quill_project.t ->\n  unit\n(** [build ~create_kernel project] builds the project. For each notebook: reads\n    the markdown, executes code cells via {!Quill.Eval.run} (unless [skip_eval]\n    is [true]), renders to HTML, and copies assets to the output directory.\n    Source files are never modified.\n\n    [output] defaults to [build/] inside the project root. [live_reload_script]\n    defaults to [\"\"] (empty). *)\n\nval build_file :\n  create_kernel:(on_event:(Quill.Kernel.event -> unit) -> Quill.Kernel.t) ->\n  ?skip_eval:bool ->\n  ?output:string ->\n  ?live_reload_script:string ->\n  string ->\n  unit\n(** [build_file ~create_kernel path] builds a single notebook file to a\n    self-contained HTML page. [output] is the output directory (defaults to the\n    directory containing the source file). [live_reload_script] defaults to [\"\"]\n    (empty). *)\n"
  },
  {
    "path": "packages/quill/lib/quill-book/dune",
    "content": "(library\n (name quill_book)\n (public_name quill.book)\n (libraries quill quill.project quill.markdown cmarkit unix))\n"
  },
  {
    "path": "packages/quill/lib/quill-book/quill_book.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nmodule Render = Render\nmodule Build = Build\nmodule Theme = Theme\n"
  },
  {
    "path": "packages/quill/lib/quill-book/quill_book.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Static site rendering for notebook projects.\n\n    Renders notebook projects (see {!Quill_project}) to static HTML sites.\n\n    {1 Modules} *)\n\nmodule Render = Render\nmodule Build = Build\nmodule Theme = Theme\n"
  },
  {
    "path": "packages/quill/lib/quill-book/render.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nlet buf_add_escaped buf s =\n  String.iter\n    (function\n      | '&' -> Buffer.add_string buf \"&amp;\"\n      | '<' -> Buffer.add_string buf \"&lt;\"\n      | '>' -> Buffer.add_string buf \"&gt;\"\n      | '\"' -> Buffer.add_string buf \"&quot;\"\n      | c -> Buffer.add_char buf c)\n    s\n\nlet escape_html s =\n  let buf = Buffer.create (String.length s) in\n  buf_add_escaped buf s;\n  Buffer.contents buf\n\n(* ───── Server-side OCaml syntax highlighting ───── *)\n\nlet ocaml_keywords =\n  let tbl = Hashtbl.create 64 in\n  List.iter\n    (fun k -> Hashtbl.replace tbl k true)\n    [\n      \"let\";\n      \"in\";\n      \"match\";\n      \"with\";\n      \"if\";\n      \"then\";\n      \"else\";\n      \"fun\";\n      \"function\";\n      \"type\";\n      \"module\";\n      \"struct\";\n      \"sig\";\n      \"end\";\n      \"open\";\n      \"val\";\n      \"rec\";\n      \"and\";\n      \"of\";\n      \"begin\";\n      \"for\";\n      \"do\";\n      \"done\";\n      \"while\";\n      \"to\";\n      \"downto\";\n      \"try\";\n      \"exception\";\n      \"raise\";\n      \"when\";\n      \"as\";\n      \"mutable\";\n      \"include\";\n      \"external\";\n      \"class\";\n      \"object\";\n      \"method\";\n      \"inherit\";\n      \"virtual\";\n      \"private\";\n      \"constraint\";\n      \"assert\";\n      \"lazy\";\n      \"true\";\n      \"false\";\n    ];\n  tbl\n\nlet is_ident_start c =\n  (c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z') || c = '_'\n\nlet is_ident_char c = is_ident_start c || (c >= '0' && c <= '9') || c = '\\''\nlet is_digit c = c >= '0' && c <= '9'\n\nlet is_operator_char c =\n  match c with\n  | '!' | '$' | '%' | '&' | '*' | '+' | '-' | '.' 
| '/' | ':' | '<' | '=' | '>'\n  | '?' | '@' | '^' | '|' | '~' ->\n      true\n  | _ -> false\n\nlet highlight_ocaml buf source =\n  let len = String.length source in\n  let i = ref 0 in\n  let span cls text =\n    Buffer.add_string buf \"<span class=\\\"hljs-\";\n    Buffer.add_string buf cls;\n    Buffer.add_string buf \"\\\">\";\n    buf_add_escaped buf text;\n    Buffer.add_string buf \"</span>\"\n  in\n  while !i < len do\n    let c = source.[!i] in\n    if c = '(' && !i + 1 < len && source.[!i + 1] = '*' then begin\n      (* Nested comment *)\n      let start = !i in\n      let depth = ref 1 in\n      i := !i + 2;\n      while !i < len && !depth > 0 do\n        if !i + 1 < len && source.[!i] = '(' && source.[!i + 1] = '*' then (\n          incr depth;\n          i := !i + 2)\n        else if !i + 1 < len && source.[!i] = '*' && source.[!i + 1] = ')' then (\n          decr depth;\n          i := !i + 2)\n        else incr i\n      done;\n      span \"comment\" (String.sub source start (!i - start))\n    end\n    else if c = '\"' then begin\n      (* String literal *)\n      let start = !i in\n      incr i;\n      while !i < len && source.[!i] <> '\"' do\n        if source.[!i] = '\\\\' && !i + 1 < len then i := !i + 2 else incr i\n      done;\n      if !i < len then incr i;\n      span \"string\" (String.sub source start (!i - start))\n    end\n    else if c = '\\'' && !i + 1 < len then\n      (* Character literal or type variable — check for char literal patterns *)\n      begin if !i + 2 < len && source.[!i + 1] <> '\\'' && source.[!i + 2] = '\\''\n      then begin\n        (* 'x' *)\n        span \"string\" (String.sub source !i 3);\n        i := !i + 3\n      end\n      else if !i + 3 < len && source.[!i + 1] = '\\\\' && source.[!i + 3] = '\\''\n      then begin\n        (* '\\n' etc *)\n        span \"string\" (String.sub source !i 4);\n        i := !i + 4\n      end\n      else begin\n        buf_add_escaped buf (String.make 1 c);\n        incr i\n      
end\n      end\n    else if is_digit c then begin\n      (* Number literal *)\n      let start = !i in\n      incr i;\n      (* Handle 0x, 0o, 0b prefixes *)\n      if\n        c = '0' && !i < len\n        && (source.[!i] = 'x'\n           || source.[!i] = 'X'\n           || source.[!i] = 'o'\n           || source.[!i] = 'O'\n           || source.[!i] = 'b'\n           || source.[!i] = 'B')\n      then incr i;\n      while\n        !i < len\n        && (is_digit source.[!i]\n           || source.[!i] = '_'\n           || source.[!i] = '.'\n           || (source.[!i] >= 'a' && source.[!i] <= 'f')\n           || (source.[!i] >= 'A' && source.[!i] <= 'F')\n           || source.[!i] = 'e'\n           || source.[!i] = 'E')\n      do\n        incr i\n      done;\n      span \"number\" (String.sub source start (!i - start))\n    end\n    else if is_ident_start c then begin\n      (* Identifier or keyword *)\n      let start = !i in\n      incr i;\n      while !i < len && is_ident_char source.[!i] do\n        incr i\n      done;\n      let word = String.sub source start (!i - start) in\n      if Hashtbl.mem ocaml_keywords word then span \"keyword\" word\n      else if word.[0] >= 'A' && word.[0] <= 'Z' then span \"type\" word\n      else buf_add_escaped buf word\n    end\n    else if c = ';' && !i + 1 < len && source.[!i + 1] = ';' then begin\n      span \"operator\" \";;\";\n      i := !i + 2\n    end\n    else if c = '-' && !i + 1 < len && source.[!i + 1] = '>' then begin\n      span \"operator\" \"->\";\n      i := !i + 2\n    end\n    else if c = '|' && !i + 1 < len && source.[!i + 1] = '>' then begin\n      span \"operator\" \"|>\";\n      i := !i + 2\n    end\n    else if c = '<' && !i + 1 < len && source.[!i + 1] = '-' then begin\n      span \"operator\" \"<-\";\n      i := !i + 2\n    end\n    else if is_operator_char c then begin\n      let start = !i in\n      incr i;\n      while !i < len && is_operator_char source.[!i] do\n        incr i\n      done;\n      
span \"operator\" (String.sub source start (!i - start))\n    end\n    else begin\n      buf_add_escaped buf (String.make 1 c);\n      incr i\n    end\n  done\n\n(* ───── Markdown to HTML with heading anchors ───── *)\n\nlet markdown_to_html source =\n  let doc = Cmarkit.Doc.of_string ~strict:false ~heading_auto_ids:true source in\n  let module C = Cmarkit_renderer.Context in\n  let heading_block c = function\n    | Cmarkit.Block.Heading (h, _) ->\n        let level = string_of_int (Cmarkit.Block.Heading.level h) in\n        C.string c \"<h\";\n        C.string c level;\n        (match Cmarkit.Block.Heading.id h with\n        | None -> C.byte c '>'\n        | Some (`Auto id | `Id id) ->\n            C.string c \" id=\\\"\";\n            Cmarkit_html.html_escaped_string c id;\n            C.string c \"\\\">\");\n        C.inline c (Cmarkit.Block.Heading.inline h);\n        (match Cmarkit.Block.Heading.id h with\n        | None -> ()\n        | Some (`Auto id | `Id id) ->\n            C.string c \" <a class=\\\"heading-anchor\\\" href=\\\"#\";\n            Cmarkit_html.pct_encoded_string c id;\n            C.string c \"\\\">#</a>\");\n        C.string c \"</h\";\n        C.string c level;\n        C.string c \">\\n\";\n        true\n    | _ -> false\n  in\n  let custom = Cmarkit_renderer.make ~block:heading_block () in\n  let default = Cmarkit_html.renderer ~safe:true () in\n  let r = Cmarkit_renderer.compose default custom in\n  Cmarkit_renderer.doc_to_string r doc\n\n(* ───── Chapter rendering ───── *)\n\nlet render_output buf (output : Quill.Cell.output) =\n  match output with\n  | Stdout s ->\n      Buffer.add_string buf {|<pre class=\"output\">|};\n      buf_add_escaped buf s;\n      Buffer.add_string buf \"</pre>\\n\"\n  | Stderr s ->\n      Buffer.add_string buf {|<pre class=\"output stderr\">|};\n      buf_add_escaped buf s;\n      Buffer.add_string buf \"</pre>\\n\"\n  | Error s ->\n      Buffer.add_string buf {|<pre class=\"output error\">|};\n      
buf_add_escaped buf s;\n      Buffer.add_string buf \"</pre>\\n\"\n  | Display { mime; data } ->\n      if String.length mime >= 6 && String.sub mime 0 6 = \"image/\" then (\n        Buffer.add_string buf {|<div class=\"output\"><img src=\"data:|};\n        Buffer.add_string buf mime;\n        Buffer.add_string buf \";base64,\";\n        Buffer.add_string buf data;\n        Buffer.add_string buf {|\"></div>|};\n        Buffer.add_char buf '\\n')\n      else if mime = \"text/html\" then (\n        Buffer.add_string buf {|<div class=\"output\">|};\n        Buffer.add_string buf data;\n        Buffer.add_string buf \"</div>\\n\")\n      else (\n        Buffer.add_string buf {|<pre class=\"output\">|};\n        buf_add_escaped buf data;\n        Buffer.add_string buf \"</pre>\\n\")\n\nlet render_code_cell buf ~language ~source ~outputs ~(attrs : Quill.Cell.attrs)\n    =\n  let collapsed = attrs.collapsed in\n  let hide_source = attrs.hide_source in\n  if collapsed then (\n    Buffer.add_string buf {|<details class=\"collapsed\"><summary>Code</summary>|};\n    Buffer.add_string buf \"\\n\");\n  Buffer.add_string buf {|<div class=\"code-cell\">|};\n  Buffer.add_string buf \"\\n\";\n  if not hide_source then (\n    Buffer.add_string buf \"<pre><code class=\\\"language-\";\n    Buffer.add_string buf (escape_html language);\n    Buffer.add_string buf \"\\\">\";\n    if language = \"ocaml\" || language = \"ml\" then highlight_ocaml buf source\n    else buf_add_escaped buf source;\n    Buffer.add_string buf \"</code></pre>\\n\";\n    (* Copy button *)\n    Buffer.add_string buf {|<button class=\"copy-btn\" aria-label=\"Copy code\">|};\n    Buffer.add_string buf\n      {|<svg class=\"copy-icon\" viewBox=\"0 0 24 24\" width=\"16\" height=\"16\" fill=\"none\" stroke=\"currentColor\" stroke-width=\"2\" stroke-linecap=\"round\" stroke-linejoin=\"round\"><rect x=\"9\" y=\"9\" width=\"13\" height=\"13\" rx=\"2\"/><path d=\"M5 15H4a2 2 0 0 1-2-2V4a2 2 0 0 1 2-2h9a2 2 0 0 1 2 
2v1\"/></svg>|};\n    Buffer.add_string buf\n      {|<svg class=\"check-icon\" viewBox=\"0 0 24 24\" width=\"16\" height=\"16\" fill=\"none\" stroke=\"currentColor\" stroke-width=\"2\" stroke-linecap=\"round\" stroke-linejoin=\"round\"><polyline points=\"20 6 9 17 4 12\"/></svg>|};\n    Buffer.add_string buf \"</button>\\n\");\n  List.iter (render_output buf) outputs;\n  Buffer.add_string buf \"</div>\\n\";\n  if collapsed then Buffer.add_string buf \"</details>\\n\"\n\nlet chapter_html (doc : Quill.Doc.t) =\n  let buf = Buffer.create 4096 in\n  List.iter\n    (fun cell ->\n      match cell with\n      | Quill.Cell.Text { source; _ } ->\n          Buffer.add_string buf (markdown_to_html source)\n      | Quill.Cell.Code { language; source; outputs; attrs; _ } ->\n          render_code_cell buf ~language ~source ~outputs ~attrs)\n    (Quill.Doc.cells doc);\n  Buffer.contents buf\n\n(* ───── TOC rendering ───── *)\n\nlet notebook_output_path (nb : Quill_project.notebook) =\n  (* chapters/01-intro/chapter.md → chapters/01-intro/index.html hello.md →\n     hello.html *)\n  let dir = Filename.dirname nb.path in\n  if dir = \".\" || dir = Filename.current_dir_name then\n    Filename.remove_extension (Filename.basename nb.path) ^ \".html\"\n  else Filename.concat dir \"index.html\"\n\nlet rec render_toc_items buf ~toc ~(current : Quill_project.notebook) ~root_path\n    ~depth items =\n  List.iter\n    (fun item ->\n      match item with\n      | Quill_project.Section title ->\n          Buffer.add_string buf {|<div class=\"toc-part\">|};\n          buf_add_escaped buf title;\n          Buffer.add_string buf \"</div>\\n\"\n      | Quill_project.Notebook (nb, children) ->\n          let number_prefix =\n            match Quill_project.number_string (Quill_project.number toc nb) with\n            | \"\" -> \"\"\n            | s -> s ^ \". 
\"\n          in\n          (if Quill_project.is_placeholder nb then (\n             Buffer.add_string buf\n               (Printf.sprintf {|<span class=\"toc-chapter draft depth-%d\">|}\n                  depth);\n             buf_add_escaped buf (number_prefix ^ nb.title);\n             Buffer.add_string buf \"</span>\\n\")\n           else\n             let active = if nb.path = current.path then \" active\" else \"\" in\n             Buffer.add_string buf\n               (Printf.sprintf {|<a class=\"toc-chapter%s depth-%d\" href=\"|}\n                  active depth);\n             Buffer.add_string buf\n               (escape_html (root_path ^ notebook_output_path nb));\n             Buffer.add_string buf {|\">|};\n             buf_add_escaped buf (number_prefix ^ nb.title);\n             Buffer.add_string buf \"</a>\\n\");\n          if children <> [] then\n            render_toc_items buf ~toc ~current ~root_path ~depth:(depth + 1)\n              children\n      | Quill_project.Separator ->\n          Buffer.add_string buf {|<div class=\"toc-separator\"></div>|};\n          Buffer.add_string buf \"\\n\")\n    items\n\nlet toc_html (project : Quill_project.t) ~(current : Quill_project.notebook)\n    ~root_path =\n  let buf = Buffer.create 1024 in\n  render_toc_items buf ~toc:project.toc ~current ~root_path ~depth:0 project.toc;\n  Buffer.contents buf\n\n(* ───── HTML stripping ───── *)\n\nlet strip_html_tags s =\n  let len = String.length s in\n  let buf = Buffer.create len in\n  let in_tag = ref false in\n  for i = 0 to len - 1 do\n    let c = s.[i] in\n    if c = '<' then in_tag := true\n    else if c = '>' then in_tag := false\n    else if not !in_tag then Buffer.add_char buf c\n  done;\n  (* Collapse whitespace *)\n  let raw = Buffer.contents buf in\n  let rlen = String.length raw in\n  let buf2 = Buffer.create rlen in\n  let prev_space = ref true in\n  for i = 0 to rlen - 1 do\n    let c = raw.[i] in\n    if c = ' ' || c = '\\n' || c = '\\r' || c = '\\t' 
then (\n      if not !prev_space then Buffer.add_char buf2 ' ';\n      prev_space := true)\n    else (\n      Buffer.add_char buf2 c;\n      prev_space := false)\n  done;\n  Buffer.contents buf2\n\n(* ───── Page template ───── *)\n\nlet replace_all ~pattern ~with_ s =\n  let plen = String.length pattern in\n  if plen = 0 then s\n  else\n    let buf = Buffer.create (String.length s) in\n    let slen = String.length s in\n    let rec loop i =\n      if i > slen - plen then (\n        Buffer.add_substring buf s i (slen - i);\n        Buffer.contents buf)\n      else if String.sub s i plen = pattern then (\n        Buffer.add_string buf with_;\n        loop (i + plen))\n      else (\n        Buffer.add_char buf s.[i];\n        loop (i + 1))\n    in\n    loop 0\n\nlet nav_html ~dir ~url ~title =\n  let buf = Buffer.create 128 in\n  Buffer.add_string buf {|<a href=\"|};\n  Buffer.add_string buf (escape_html url);\n  Buffer.add_string buf {|\" class=\"nav-|};\n  Buffer.add_string buf dir;\n  Buffer.add_string buf {|\"><span class=\"nav-dir\">|};\n  Buffer.add_string buf\n    (if dir = \"prev\" then \"&larr; Previous\" else \"Next &rarr;\");\n  Buffer.add_string buf {|</span><span class=\"nav-title\">|};\n  buf_add_escaped buf title;\n  Buffer.add_string buf \"</span></a>\";\n  Buffer.contents buf\n\n(* ───── On-page TOC ───── *)\n\nlet extract_headings html =\n  (* Extract h2 and h3 tags with their id and text content. Scans for <h2\n     id=\"...\"> or <h3 id=\"...\"> patterns. 
*)\n  let len = String.length html in\n  let headings = ref [] in\n  let i = ref 0 in\n  while !i < len - 6 do\n    if\n      html.[!i] = '<'\n      && html.[!i + 1] = 'h'\n      && (html.[!i + 2] = '2' || html.[!i + 2] = '3')\n    then begin\n      let level = Char.code html.[!i + 2] - Char.code '0' in\n      let tag_start = !i in\n      (* Find the end of opening tag *)\n      let tag_end = ref (!i + 3) in\n      while !tag_end < len && html.[!tag_end] <> '>' do\n        incr tag_end\n      done;\n      if !tag_end < len then begin\n        let tag = String.sub html tag_start (!tag_end - tag_start + 1) in\n        (* Extract id attribute *)\n        let id_prefix = \" id=\\\"\" in\n        let id_start =\n          let rec find j =\n            if j + String.length id_prefix > String.length tag then None\n            else if String.sub tag j (String.length id_prefix) = id_prefix then\n              Some (j + String.length id_prefix)\n            else find (j + 1)\n          in\n          find 0\n        in\n        match id_start with\n        | Some id_s ->\n            let id_end = ref id_s in\n            while !id_end < String.length tag && tag.[!id_end] <> '\"' do\n              incr id_end\n            done;\n            let id = String.sub tag id_s (!id_end - id_s) in\n            (* Find closing tag </h2> or </h3> *)\n            let close_tag = Printf.sprintf \"</h%d>\" level in\n            let close_len = String.length close_tag in\n            let body_start = !tag_end + 1 in\n            let close_pos = ref body_start in\n            while\n              !close_pos + close_len <= len\n              && String.sub html !close_pos close_len <> close_tag\n            do\n              incr close_pos\n            done;\n            if !close_pos + close_len <= len then begin\n              let body = String.sub html body_start (!close_pos - body_start) in\n              (* Strip HTML tags from body to get plain text *)\n              let text = 
strip_html_tags body in\n              headings := (level, id, text) :: !headings;\n              i := !close_pos + close_len\n            end\n            else i := !tag_end + 1\n        | None -> i := !tag_end + 1\n      end\n      else i := !i + 1\n    end\n    else incr i\n  done;\n  List.rev !headings\n\nlet page_toc_html headings =\n  match headings with\n  | [] -> \"\"\n  | _ ->\n      let buf = Buffer.create 512 in\n      Buffer.add_string buf\n        {|<nav class=\"page-toc\"><div class=\"page-toc-title\">On this page</div><ul>|};\n      Buffer.add_char buf '\\n';\n      List.iter\n        (fun (level, id, text) ->\n          let cls = if level = 3 then {| class=\"toc-h3\"|} else \"\" in\n          Buffer.add_string buf\n            (Printf.sprintf {|<li%s><a href=\"#%s\">|} cls (escape_html id));\n          buf_add_escaped buf text;\n          Buffer.add_string buf \"</a></li>\\n\")\n        headings;\n      Buffer.add_string buf \"</ul></nav>\\n\";\n      Buffer.contents buf\n\nlet edit_link_html edit_url =\n  match edit_url with\n  | None -> \"\"\n  | Some url ->\n      Printf.sprintf\n        {|<div class=\"edit-link\"><a href=\"%s\">Edit this page</a></div>|}\n        (escape_html url)\n\nlet page_html ~book_title ~chapter_title ~toc_html ~prev ~next ~root_path\n    ~content ~edit_url ~live_reload_script =\n  let prev_nav =\n    match prev with\n    | Some (url, title) -> nav_html ~dir:\"prev\" ~url ~title\n    | None -> \"\"\n  in\n  let next_nav =\n    match next with\n    | Some (url, title) -> nav_html ~dir:\"next\" ~url ~title\n    | None -> \"\"\n  in\n  let edit_link = edit_link_html edit_url in\n  let page_toc =\n    let headings = extract_headings content in\n    page_toc_html headings\n  in\n  Theme.template_html\n  |> replace_all ~pattern:\"{{book_title}}\" ~with_:(escape_html book_title)\n  |> replace_all ~pattern:\"{{chapter_title}}\" ~with_:(escape_html chapter_title)\n  |> replace_all ~pattern:\"{{root_path}}\" ~with_:root_path\n  |> 
replace_all ~pattern:\"{{toc}}\" ~with_:toc_html\n  |> replace_all ~pattern:\"{{edit_link}}\" ~with_:edit_link\n  |> replace_all ~pattern:\"{{content}}\" ~with_:content\n  |> replace_all ~pattern:\"{{page_toc}}\" ~with_:page_toc\n  |> replace_all ~pattern:\"{{prev_nav}}\" ~with_:prev_nav\n  |> replace_all ~pattern:\"{{next_nav}}\" ~with_:next_nav\n  |> replace_all ~pattern:\"{{live_reload_script}}\" ~with_:live_reload_script\n\nlet print_page_html ~book_title ~chapters =\n  let buf = Buffer.create 4096 in\n  List.iter\n    (fun (title, content) ->\n      Buffer.add_string buf {|<article class=\"print-chapter\">|};\n      Buffer.add_string buf \"\\n<h1>\";\n      buf_add_escaped buf title;\n      Buffer.add_string buf \"</h1>\\n\";\n      Buffer.add_string buf content;\n      Buffer.add_string buf \"\\n</article>\\n\")\n    chapters;\n  let chapters_html = Buffer.contents buf in\n  Theme.print_template_html\n  |> replace_all ~pattern:\"{{book_title}}\" ~with_:(escape_html book_title)\n  |> replace_all ~pattern:\"{{chapters}}\" ~with_:chapters_html\n\nlet standalone_page_html ~title ~content ~live_reload_script =\n  let page_toc =\n    let headings = extract_headings content in\n    page_toc_html headings\n  in\n  Theme.standalone_html\n  |> replace_all ~pattern:\"{{title}}\" ~with_:(escape_html title)\n  |> replace_all ~pattern:\"{{content}}\" ~with_:content\n  |> replace_all ~pattern:\"{{page_toc}}\" ~with_:page_toc\n  |> replace_all ~pattern:\"{{live_reload_script}}\" ~with_:live_reload_script\n"
  },
  {
    "path": "packages/quill/lib/quill-book/render.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** HTML rendering for project notebooks.\n\n    Converts notebook documents to HTML pages suitable for a static site. Text\n    cells are rendered via cmarkit. Code cells include syntax-highlighted source\n    and execution outputs. *)\n\nval escape_html : string -> string\n(** [escape_html s] escapes HTML special characters in [s]. *)\n\nval notebook_output_path : Quill_project.notebook -> string\n(** [notebook_output_path nb] is the output HTML path for [nb] relative to the\n    project root (e.g. [\"chapters/01-intro/index.html\"]). *)\n\nval chapter_html : Quill.Doc.t -> string\n(** [chapter_html doc] renders [doc] to an HTML content fragment. Code cell\n    outputs are rendered after their source blocks: stdout as [<pre>], images as\n    [<img>], and errors with appropriate styling. *)\n\nval toc_html :\n  Quill_project.t ->\n  current:Quill_project.notebook ->\n  root_path:string ->\n  string\n(** [toc_html project ~current ~root_path] renders the sidebar table of contents\n    with [current] highlighted as the active notebook. [root_path] is the\n    relative path from the notebook to the project root (e.g. [\"../../\"]). *)\n\nval page_html :\n  book_title:string ->\n  chapter_title:string ->\n  toc_html:string ->\n  prev:(string * string) option ->\n  next:(string * string) option ->\n  root_path:string ->\n  content:string ->\n  edit_url:string option ->\n  live_reload_script:string ->\n  string\n(** [page_html] wraps a content fragment in the full page template with\n    navigation. [prev] and [next] are [(url, title)] pairs. [root_path] is the\n    relative path from the notebook to the project root (e.g. 
[\"../../\"]).\n    [edit_url] is an optional URL for an \"Edit this page\" link.\n    [live_reload_script] is empty for static builds or a [<script>] tag for live\n    serve mode. *)\n\nval strip_html_tags : string -> string\n(** [strip_html_tags s] removes HTML tags from [s] and collapses whitespace. *)\n\nval replace_all : pattern:string -> with_:string -> string -> string\n(** [replace_all ~pattern ~with_ s] replaces all occurrences of [pattern] in [s]\n    with [with_]. *)\n\nval print_page_html :\n  book_title:string -> chapters:(string * string) list -> string\n(** [print_page_html ~book_title ~chapters] renders a print page containing all\n    chapters. Each element of [chapters] is a [(title, content_html)] pair. *)\n\nval standalone_page_html :\n  title:string -> content:string -> live_reload_script:string -> string\n(** [standalone_page_html ~title ~content ~live_reload_script] renders a\n    self-contained HTML page for a single notebook. CSS is inlined, no sidebar\n    or multi-page navigation. *)\n"
  },
  {
    "path": "packages/quill/lib/quill-book/theme.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Default book theme — embedded HTML template and CSS stylesheet.\n\n   Aesthetic: editorial/scholarly with warm paper tones. Designed for long-form\n   technical reading with code and figures. System fonts for performance and\n   offline use. *)\n\nlet style_css =\n  {css|:root {\n  --bg: #f9f7f3;\n  --text: #2d2d2d;\n  --text-muted: #6b6560;\n  --heading: #1a1a1a;\n  --accent: #daa550;\n  --accent-hover: #c4933e;\n  --link: #b8862e;\n  --link-hover: #daa550;\n  --border: #d8d2c8;\n  --rule: #c8c0b4;\n  --sidebar-bg: #1c2128;\n  --sidebar-text: #b0b4bc;\n  --sidebar-heading: #d8dce4;\n  --sidebar-active: #daa550;\n  --sidebar-hover: #d0d4dc;\n  --sidebar-border: #2c333b;\n  --code-bg: #efece6;\n  --code-border: #ddd7cd;\n  --code-text: #3c3428;\n  --output-bg: #e6e1d8;\n  --output-border: #d0c9bc;\n  --error-bg: #fcecea;\n  --error-border: #e4b4ac;\n  --error-text: #a43828;\n  --stderr-text: #887020;\n  --content-width: 48rem;\n  --sidebar-width: 17rem;\n}\n\n[data-theme=\"dark\"] {\n  --bg: #1a1a22;\n  --text: #c8ccd4;\n  --text-muted: #8a8e96;\n  --heading: #e0e4ec;\n  --accent: #d4a060;\n  --accent-hover: #e0b878;\n  --link: #c0946c;\n  --link-hover: #d4a878;\n  --border: #2c2c38;\n  --rule: #3c3c48;\n  --sidebar-bg: #141418;\n  --sidebar-text: #8a8e96;\n  --sidebar-heading: #c8ccd4;\n  --sidebar-active: #e4b868;\n  --sidebar-hover: #c0c4cc;\n  --sidebar-border: #22222c;\n  --code-bg: #22222c;\n  --code-border: #2c2c38;\n  --code-text: #c8ccd4;\n  --output-bg: #1e1e28;\n  --output-border: #2c2c38;\n  --error-bg: #2c1a1a;\n  --error-border: #5c2828;\n  --error-text: #e87060;\n  --stderr-text: #c8a848;\n}\n\n*, *::before, *::after { box-sizing: border-box; margin: 0; padding: 0; }\n\nhtml {\n  
font-size: 17px;\n  -webkit-text-size-adjust: 100%;\n  text-size-adjust: 100%;\n}\n\nbody {\n  font-family: Georgia, \"Iowan Old Style\", \"Palatino Linotype\", Palatino,\n    \"Book Antiqua\", serif;\n  color: var(--text);\n  background: var(--bg);\n  line-height: 1.68;\n}\n\n/* ─── Sidebar ─── */\n\n.sidebar {\n  position: fixed;\n  top: 0;\n  left: 0;\n  bottom: 0;\n  width: var(--sidebar-width);\n  background: var(--sidebar-bg);\n  color: var(--sidebar-text);\n  overflow: hidden;\n  display: flex;\n  flex-direction: column;\n  z-index: 100;\n  font-family: system-ui, -apple-system, \"Segoe UI\", sans-serif;\n  font-size: 0.82rem;\n  line-height: 1.45;\n  transition: transform 0.25s ease;\n}\n\n.sidebar-scrollable {\n  overflow-y: auto;\n  flex: 1;\n  padding: 1.5rem 0 2rem;\n  scrollbar-width: thin;\n  scrollbar-color: #3c444e transparent;\n}\n\n.sidebar-header {\n  padding: 1rem 1.25rem;\n  border-bottom: 1px solid var(--sidebar-border);\n  margin-bottom: 1rem;\n}\n\n.sidebar-header a {\n  color: var(--sidebar-heading);\n  text-decoration: none;\n  font-weight: 700;\n  font-size: 0.92rem;\n  letter-spacing: 0.01em;\n}\n\n.toc-part {\n  padding: 0.9rem 1.25rem 0.35rem;\n  color: var(--sidebar-heading);\n  font-weight: 600;\n  font-size: 0.78rem;\n  letter-spacing: 0.04em;\n  text-transform: uppercase;\n}\n\n.toc-chapter {\n  display: block;\n  padding: 0.3rem 1.25rem 0.3rem 1.5rem;\n  color: var(--sidebar-text);\n  text-decoration: none;\n  border-left: 2px solid transparent;\n  transition: color 0.15s, border-color 0.15s;\n}\n\n.toc-chapter:hover {\n  color: var(--sidebar-hover);\n}\n\n.toc-chapter.active {\n  color: var(--sidebar-active);\n  border-left-color: var(--sidebar-active);\n  font-weight: 600;\n}\n\n.toc-separator {\n  height: 1px;\n  background: var(--sidebar-border);\n  margin: 0.8rem 1.25rem;\n}\n\n.menu-btn {\n  display: none;\n}\n\n/* ─── Main content ─── */\n\nmain {\n  margin-left: var(--sidebar-width);\n  min-height: 100vh;\n}\n\narticle {\n 
 max-width: var(--content-width);\n  width: 100%;\n  margin: 0 auto;\n  padding: 2.5rem 2rem 3rem;\n}\n\n/* ─── Typography ─── */\n\nh1, h2, h3, h4, h5, h6 {\n  color: var(--heading);\n  line-height: 1.25;\n  margin-top: 2.4rem;\n  margin-bottom: 0.7rem;\n  position: relative;\n}\n\nh1 { font-size: 2rem; margin-top: 0; margin-bottom: 1rem; }\nh2 { font-size: 1.45rem; padding-bottom: 0.3rem; border-bottom: 1px solid var(--border); }\nh3 { font-size: 1.2rem; }\nh4 { font-size: 1.05rem; }\n\np { margin-bottom: 1rem; }\n\na { color: var(--link); text-decoration-thickness: 1px; text-underline-offset: 2px; }\na:hover { color: var(--link-hover); }\n\nstrong { font-weight: 700; }\nem { font-style: italic; }\n\nblockquote {\n  border-left: 3px solid var(--rule);\n  padding: 0.15rem 0 0.15rem 1.2rem;\n  margin: 1.2rem 0;\n  color: var(--text-muted);\n}\n\nblockquote > p:last-child { margin-bottom: 0; }\n\nhr {\n  border: none;\n  border-top: 1px solid var(--border);\n  margin: 2rem 0;\n}\n\nul, ol { padding-left: 1.6rem; margin-bottom: 1rem; }\nli { margin-bottom: 0.25rem; }\nli > p { margin-bottom: 0.4rem; }\n\ntable {\n  border-collapse: collapse;\n  width: 100%;\n  margin: 1.2rem 0;\n  font-size: 0.92rem;\n}\n\nth, td {\n  border: 1px solid var(--border);\n  padding: 0.45rem 0.7rem;\n  text-align: left;\n}\n\nth {\n  background: var(--code-bg);\n  font-weight: 600;\n}\n\n/* ─── Heading anchors ─── */\n\n.heading-anchor {\n  color: var(--text-muted);\n  text-decoration: none;\n  font-weight: 400;\n  opacity: 0;\n  margin-left: 0.3em;\n  font-size: 0.85em;\n  transition: opacity 0.15s;\n}\n\nh1:hover .heading-anchor,\nh2:hover .heading-anchor,\nh3:hover .heading-anchor,\nh4:hover .heading-anchor,\nh5:hover .heading-anchor,\nh6:hover .heading-anchor {\n  opacity: 0.5;\n}\n\n.heading-anchor:hover {\n  opacity: 1 !important;\n  color: var(--accent);\n}\n\n/* ─── Edit link ─── */\n\n.edit-link {\n  text-align: right;\n  margin-bottom: 0.5rem;\n  font-family: system-ui, 
-apple-system, \"Segoe UI\", sans-serif;\n  font-size: 0.8rem;\n}\n\n.edit-link a {\n  color: var(--text-muted);\n  text-decoration: none;\n  transition: color 0.15s;\n}\n\n.edit-link a:hover {\n  color: var(--accent);\n}\n\n/* ─── Code ─── */\n\ncode {\n  font-family: ui-monospace, \"Cascadia Code\", \"Source Code Pro\", Menlo,\n    Consolas, monospace;\n  font-size: 0.88em;\n}\n\n:not(pre) > code {\n  background: var(--code-bg);\n  border: 1px solid var(--code-border);\n  border-radius: 3px;\n  padding: 0.12em 0.35em;\n}\n\n.code-cell {\n  margin: 1.2rem 0;\n  position: relative;\n}\n\n.code-cell > pre {\n  background: var(--code-bg);\n  border: 1px solid var(--code-border);\n  border-radius: 4px;\n  padding: 0.85rem 1rem;\n  overflow-x: auto;\n  line-height: 1.5;\n  tab-size: 2;\n}\n\n.code-cell > pre > code {\n  font-size: 0.84rem;\n  color: var(--code-text);\n}\n\n/* ─── Copy button ─── */\n\n.copy-btn {\n  position: absolute;\n  top: 0.4rem;\n  right: 0.4rem;\n  display: flex;\n  align-items: center;\n  justify-content: center;\n  width: 1.8rem;\n  height: 1.8rem;\n  border: 1px solid var(--code-border);\n  border-radius: 4px;\n  background: var(--code-bg);\n  color: var(--text-muted);\n  cursor: pointer;\n  opacity: 0;\n  transition: opacity 0.15s, color 0.15s, background 0.15s;\n  z-index: 2;\n}\n\n.code-cell:hover .copy-btn { opacity: 1; }\n.copy-btn:hover { color: var(--text); background: var(--bg); }\n\n.copy-btn .check-icon { display: none; }\n.copy-btn.copied .copy-icon { display: none; }\n.copy-btn.copied .check-icon { display: block; }\n.copy-btn.copied { color: #5a7a40; }\n\n/* ─── Outputs ─── */\n\n.output {\n  background: var(--output-bg);\n  border: 1px solid var(--output-border);\n  border-top: none;\n  border-radius: 0 0 4px 4px;\n  padding: 0.7rem 1rem;\n  overflow-x: auto;\n  font-family: ui-monospace, \"Cascadia Code\", \"Source Code Pro\", Menlo,\n    Consolas, monospace;\n  font-size: 0.82rem;\n  line-height: 1.5;\n  white-space: 
pre-wrap;\n  word-break: break-word;\n}\n\n.output + .output { border-top: 1px dashed var(--output-border); border-radius: 0; }\n.output:first-of-type { margin-top: -1px; }\n.output:last-of-type { margin-bottom: 0; }\n\n.code-cell > pre + .output {\n  border-radius: 0 0 4px 4px;\n}\n\n.code-cell > pre:has(+ .output) {\n  border-radius: 4px 4px 0 0;\n  border-bottom: none;\n}\n\n.output.stderr { color: var(--stderr-text); }\n\n.output.error {\n  background: var(--error-bg);\n  border-color: var(--error-border);\n  color: var(--error-text);\n}\n\n.output img {\n  display: block;\n  max-width: 100%;\n  height: auto;\n  margin: 0.3rem 0;\n  border-radius: 3px;\n}\n\n.output:has(table) {\n  white-space: normal;\n  word-break: normal;\n}\n\n.output table {\n  border-collapse: collapse;\n  width: auto;\n  margin: 0;\n  font-family: ui-monospace, \"Cascadia Code\", \"Source Code Pro\", Menlo,\n    Consolas, monospace;\n  font-size: 0.8rem;\n  line-height: 1.4;\n}\n\n.output th {\n  padding: 0.3rem 0.75rem;\n  text-align: left;\n  font-weight: 600;\n  border: none;\n  border-bottom: 2px solid var(--border);\n  background: transparent;\n  white-space: nowrap;\n}\n\n.output td {\n  padding: 0.2rem 0.75rem;\n  border: none;\n  border-bottom: 1px solid var(--output-border);\n  white-space: nowrap;\n}\n\n.output p {\n  margin: 0.3rem 0 0;\n  color: var(--text-muted);\n  font-size: 0.75rem;\n}\n\n/* Collapsed cells */\ndetails.collapsed {\n  margin: 1.2rem 0;\n}\n\ndetails.collapsed > summary {\n  cursor: pointer;\n  padding: 0.4rem 0.7rem;\n  background: var(--code-bg);\n  border: 1px solid var(--code-border);\n  border-radius: 4px;\n  font-family: system-ui, -apple-system, \"Segoe UI\", sans-serif;\n  font-size: 0.82rem;\n  color: var(--text-muted);\n  list-style: none;\n}\n\ndetails.collapsed > summary::before {\n  content: \"\\25B8 \";\n}\n\ndetails.collapsed[open] > summary {\n  border-radius: 4px 4px 0 0;\n  border-bottom: none;\n}\n\ndetails.collapsed[open] > 
summary::before {\n  content: \"\\25BE \";\n}\n\n/* ─── Images & Figures ─── */\n\narticle img {\n  max-width: 100%;\n  height: auto;\n  border-radius: 4px;\n}\n\nfigure {\n  margin: 1.5rem 0;\n  text-align: center;\n}\n\nfigure img { margin: 0 auto; }\n\nfigcaption, blockquote:has(> strong:first-child) {\n  font-size: 0.9rem;\n  color: var(--text-muted);\n  margin-top: 0.5rem;\n}\n\n/* ─── On-page TOC ─── */\n\n.page-toc {\n  display: none;\n}\n\n@media (min-width: 1200px) {\n  main:has(> .page-toc) {\n    display: grid;\n    grid-template-columns: 1fr 14rem;\n    grid-template-rows: 1fr auto;\n  }\n\n  main:has(> .page-toc) > article {\n    grid-column: 1;\n    grid-row: 1;\n  }\n\n  .page-toc {\n    display: block;\n    grid-column: 2;\n    grid-row: 1;\n    position: sticky;\n    top: 1.5rem;\n    align-self: start;\n    max-height: calc(100vh - 3rem);\n    overflow-y: auto;\n    padding: 2.5rem 1rem 2rem 0;\n    font-family: system-ui, -apple-system, \"Segoe UI\", sans-serif;\n    font-size: 0.78rem;\n    line-height: 1.5;\n    scrollbar-width: thin;\n    scrollbar-color: var(--border) transparent;\n  }\n\n  main:has(> .page-toc) > .page-nav {\n    grid-column: 1;\n    grid-row: 2;\n  }\n}\n\n.page-toc-title {\n  font-weight: 600;\n  color: var(--heading);\n  font-size: 0.72rem;\n  letter-spacing: 0.04em;\n  text-transform: uppercase;\n  margin-bottom: 0.6rem;\n}\n\n.page-toc ul {\n  list-style: none;\n  padding: 0;\n  margin: 0;\n  border-left: 1px solid var(--border);\n}\n\n.page-toc li {\n  margin: 0;\n}\n\n.page-toc li a {\n  display: block;\n  padding: 0.2rem 0 0.2rem 0.75rem;\n  color: var(--text-muted);\n  text-decoration: none;\n  transition: color 0.15s;\n}\n\n.page-toc li a:hover {\n  color: var(--accent);\n}\n\n.page-toc li.toc-h3 a {\n  padding-left: 1.5rem;\n}\n\n@media print {\n  .page-toc { display: none; }\n}\n\n/* ─── Page navigation ─── */\n\n.page-nav {\n  max-width: var(--content-width);\n  width: 100%;\n  margin: 0 auto;\n  padding: 1.5rem 
2rem 2.5rem;\n  display: flex;\n  justify-content: space-between;\n  gap: 1rem;\n  border-top: 1px solid var(--border);\n  font-family: system-ui, -apple-system, \"Segoe UI\", sans-serif;\n  font-size: 0.88rem;\n}\n\n.page-nav a {\n  text-decoration: none;\n  color: var(--accent);\n  display: flex;\n  flex-direction: column;\n  gap: 0.15rem;\n  transition: color 0.15s;\n}\n\n.page-nav a:hover { color: var(--accent-hover); }\n\n.page-nav .nav-dir {\n  font-size: 0.75rem;\n  text-transform: uppercase;\n  letter-spacing: 0.06em;\n  color: var(--text-muted);\n}\n\n.page-nav .nav-title { font-weight: 600; }\n.nav-next { text-align: right; margin-left: auto; }\n\n/* ─── Syntax highlighting (light theme) ─── */\n\n.hljs-keyword, .hljs-built_in { color: #8b5a30; font-weight: 600; }\n.hljs-type, .hljs-title.class_ { color: #3d6880; }\n.hljs-string, .hljs-doctag { color: #5a7a40; }\n.hljs-number, .hljs-literal { color: #a06030; }\n.hljs-comment { color: #8a8480; font-style: italic; }\n.hljs-meta, .hljs-preprocessor { color: #7a6a58; }\n.hljs-symbol, .hljs-attr { color: #6a5a80; }\n.hljs-variable, .hljs-template-variable { color: #805a50; }\n.hljs-function .hljs-title, .hljs-title.function_ { color: #6a5030; }\n.hljs-operator { color: #6a6058; }\n\n/* ─── Syntax highlighting (dark theme) ─── */\n\n[data-theme=\"dark\"] .hljs-keyword,\n[data-theme=\"dark\"] .hljs-built_in { color: #d4a060; }\n[data-theme=\"dark\"] .hljs-type,\n[data-theme=\"dark\"] .hljs-title.class_ { color: #6cb0d0; }\n[data-theme=\"dark\"] .hljs-string,\n[data-theme=\"dark\"] .hljs-doctag { color: #8cb870; }\n[data-theme=\"dark\"] .hljs-number,\n[data-theme=\"dark\"] .hljs-literal { color: #d0884c; }\n[data-theme=\"dark\"] .hljs-comment { color: #6a6e76; }\n[data-theme=\"dark\"] .hljs-meta,\n[data-theme=\"dark\"] .hljs-preprocessor { color: #9a8e7c; }\n[data-theme=\"dark\"] .hljs-symbol,\n[data-theme=\"dark\"] .hljs-attr { color: #a08cc0; }\n[data-theme=\"dark\"] .hljs-variable,\n[data-theme=\"dark\"] 
.hljs-template-variable { color: #c09080; }\n[data-theme=\"dark\"] .hljs-function .hljs-title,\n[data-theme=\"dark\"] .hljs-title.function_ { color: #c0a060; }\n[data-theme=\"dark\"] .hljs-operator { color: #9090a0; }\n\n/* ─── Theme toggle ─── */\n\n.theme-toggle {\n  position: fixed;\n  top: 0.75rem;\n  right: 0.75rem;\n  z-index: 200;\n  width: 2.2rem;\n  height: 2.2rem;\n  border: 1px solid var(--border);\n  border-radius: 4px;\n  background: var(--bg);\n  color: var(--text-muted);\n  cursor: pointer;\n  display: flex;\n  align-items: center;\n  justify-content: center;\n  transition: color 0.15s, background 0.15s;\n}\n\n.theme-toggle:hover { color: var(--text); }\n\n.theme-toggle .icon-sun,\n.theme-toggle .icon-moon { width: 18px; height: 18px; }\n\n.theme-toggle .icon-moon { display: none; }\n[data-theme=\"dark\"] .theme-toggle .icon-sun { display: none; }\n[data-theme=\"dark\"] .theme-toggle .icon-moon { display: block; }\n\n/* ─── Sidebar search ─── */\n\n.sidebar-search {\n  padding: 0.6rem 1.25rem 0.8rem;\n}\n\n.sidebar-search input {\n  width: 100%;\n  padding: 0.38rem 0.6rem;\n  background: rgba(255, 255, 255, 0.06);\n  border: 1px solid var(--sidebar-border);\n  border-radius: 4px;\n  color: var(--sidebar-text);\n  font-family: system-ui, -apple-system, \"Segoe UI\", sans-serif;\n  font-size: 0.82rem;\n  outline: none;\n  transition: border-color 0.15s;\n}\n\n.sidebar-search input::placeholder {\n  color: rgba(176, 180, 188, 0.5);\n}\n\n.sidebar-search input:focus {\n  border-color: var(--sidebar-active);\n}\n\n.search-results {\n  padding: 0 0 0.5rem;\n}\n\n.search-results a {\n  display: block;\n  padding: 0.32rem 1.25rem 0.32rem 1.6rem;\n  color: var(--sidebar-text);\n  text-decoration: none;\n  font-size: 0.82rem;\n  border-left: 2px solid transparent;\n  transition: color 0.15s;\n}\n\n.search-results a:hover {\n  color: var(--sidebar-hover);\n}\n\n.search-results .no-results {\n  padding: 0.32rem 1.25rem 0.32rem 1.6rem;\n  color: 
var(--sidebar-text);\n  font-size: 0.82rem;\n  opacity: 0.6;\n}\n\n/* ─── Sidebar footer ─── */\n\n.sidebar-footer {\n  padding: 0.6rem 1.25rem;\n  border-top: 1px solid var(--sidebar-border);\n  font-size: 0.78rem;\n}\n\n.sidebar-footer a {\n  color: var(--sidebar-text);\n  text-decoration: none;\n  opacity: 0.7;\n  transition: opacity 0.15s;\n}\n\n.sidebar-footer a:hover {\n  opacity: 1;\n}\n\n/* ─── Print page ─── */\n\n.print-chapter {\n  page-break-after: always;\n  margin-bottom: 3rem;\n}\n\n.print-chapter:last-child {\n  page-break-after: avoid;\n}\n\n.print-header {\n  max-width: var(--content-width);\n  margin: 0 auto;\n  padding: 2.5rem 2rem 1rem;\n  text-align: center;\n}\n\n.print-header h1 {\n  font-size: 2.5rem;\n  margin-bottom: 1rem;\n}\n\n.print-btn {\n  display: inline-block;\n  padding: 0.5rem 1.2rem;\n  background: var(--accent);\n  color: white;\n  border: none;\n  border-radius: 4px;\n  font-family: system-ui, -apple-system, \"Segoe UI\", sans-serif;\n  font-size: 0.88rem;\n  cursor: pointer;\n  text-decoration: none;\n  transition: background 0.15s;\n}\n\n.print-btn:hover {\n  background: var(--accent-hover);\n}\n\n@media print {\n  .print-btn { display: none; }\n}\n\n/* ─── Responsive ─── */\n\n@media (max-width: 768px) {\n  :root { --sidebar-width: 16rem; }\n\n  html { font-size: 16px; }\n\n  .sidebar {\n    transform: translateX(-100%);\n  }\n\n  .sidebar.open {\n    transform: translateX(0);\n    box-shadow: 4px 0 20px rgba(0, 0, 0, 0.25);\n  }\n\n  .menu-btn {\n    display: flex;\n    align-items: center;\n    justify-content: center;\n    position: fixed;\n    top: 0.75rem;\n    left: 0.75rem;\n    z-index: 200;\n    width: 2.5rem;\n    height: 2.5rem;\n    border: none;\n    border-radius: 4px;\n    background: var(--bg);\n    color: var(--text);\n    cursor: pointer;\n    box-shadow: 0 1px 4px rgba(0, 0, 0, 0.1);\n  }\n\n  main { margin-left: 0; }\n\n  article {\n    padding: 3.5rem 1.2rem 2rem;\n  }\n\n  .page-nav { padding: 1.2rem 
1.2rem 2rem; }\n}\n\n@media print {\n  .sidebar, .menu-btn, .page-nav, .theme-toggle, .copy-btn { display: none; }\n  main { margin-left: 0; }\n  article { max-width: none; padding: 0; }\n  body { background: white; font-size: 11pt; }\n  a { color: inherit; text-decoration: none; }\n  .code-cell > pre, .output { break-inside: avoid; }\n  .heading-anchor { display: none; }\n}\n|css}\n\nlet template_html =\n  {html|<!DOCTYPE html>\n<html lang=\"en\">\n<head>\n<meta charset=\"utf-8\">\n<meta name=\"viewport\" content=\"width=device-width, initial-scale=1\">\n<title>{{chapter_title}} — {{book_title}}</title>\n<link rel=\"stylesheet\" href=\"{{root_path}}style.css\">\n<link rel=\"stylesheet\" href=\"https://cdn.jsdelivr.net/npm/katex@0.16.21/dist/katex.min.css\">\n<script defer src=\"https://cdn.jsdelivr.net/npm/katex@0.16.21/dist/katex.min.js\"></script>\n<script defer src=\"https://cdn.jsdelivr.net/npm/katex@0.16.21/dist/contrib/auto-render.min.js\"\n  onload=\"renderMathInElement(document.body,{delimiters:[{left:'$$',right:'$$',display:true},{left:'$',right:'$',display:false},{left:'\\\\(',right:'\\\\)',display:false},{left:'\\\\[',right:'\\\\]',display:true}]})\"></script>\n<script>\n(function() {\n  var t = localStorage.getItem('quill-theme');\n  if (t === 'dark') {\n    document.documentElement.setAttribute('data-theme', 'dark');\n  }\n})();\n</script>\n</head>\n<body>\n<button class=\"menu-btn\" aria-label=\"Menu\">\n<svg viewBox=\"0 0 24 24\" width=\"22\" height=\"22\"><path d=\"M3 6h18M3 12h18M3 18h18\" stroke=\"currentColor\" stroke-width=\"2\" stroke-linecap=\"round\" fill=\"none\"/></svg>\n</button>\n\n<button class=\"theme-toggle\" aria-label=\"Toggle theme\">\n<svg class=\"icon-sun\" viewBox=\"0 0 24 24\" fill=\"none\" stroke=\"currentColor\" stroke-width=\"2\" stroke-linecap=\"round\" stroke-linejoin=\"round\"><circle cx=\"12\" cy=\"12\" r=\"5\"/><line x1=\"12\" y1=\"1\" x2=\"12\" y2=\"3\"/><line x1=\"12\" y1=\"21\" x2=\"12\" y2=\"23\"/><line x1=\"4.22\" 
y1=\"4.22\" x2=\"5.64\" y2=\"5.64\"/><line x1=\"18.36\" y1=\"18.36\" x2=\"19.78\" y2=\"19.78\"/><line x1=\"1\" y1=\"12\" x2=\"3\" y2=\"12\"/><line x1=\"21\" y1=\"12\" x2=\"23\" y2=\"12\"/><line x1=\"4.22\" y1=\"19.78\" x2=\"5.64\" y2=\"18.36\"/><line x1=\"18.36\" y1=\"5.64\" x2=\"19.78\" y2=\"4.22\"/></svg>\n<svg class=\"icon-moon\" viewBox=\"0 0 24 24\" fill=\"none\" stroke=\"currentColor\" stroke-width=\"2\" stroke-linecap=\"round\" stroke-linejoin=\"round\"><path d=\"M21 12.79A9 9 0 1 1 11.21 3 7 7 0 0 0 21 12.79z\"/></svg>\n</button>\n\n<nav class=\"sidebar\">\n<div class=\"sidebar-scrollable\">\n<div class=\"sidebar-header\">\n<a href=\"{{root_path}}index.html\">{{book_title}}</a>\n</div>\n<div class=\"sidebar-search\">\n<input type=\"text\" id=\"search-input\" placeholder=\"Search...\" aria-label=\"Search\">\n</div>\n<div id=\"search-results\" class=\"search-results\" hidden></div>\n<div id=\"sidebar-toc\">\n{{toc}}\n</div>\n</div>\n<div class=\"sidebar-footer\">\n<a href=\"{{root_path}}print.html\">Print version</a>\n</div>\n</nav>\n\n<main>\n<article>\n{{edit_link}}\n{{content}}\n</article>\n{{page_toc}}\n<nav class=\"page-nav\">\n{{prev_nav}}\n{{next_nav}}\n</nav>\n</main>\n<script>\ndocument.querySelector('.menu-btn').addEventListener('click', function() {\n  document.querySelector('.sidebar').classList.toggle('open');\n});\ndocument.querySelectorAll('.sidebar a').forEach(function(a) {\n  a.addEventListener('click', function() {\n    document.querySelector('.sidebar').classList.remove('open');\n  });\n});\ndocument.querySelector('.theme-toggle').addEventListener('click', function() {\n  var html = document.documentElement;\n  var isDark = html.getAttribute('data-theme') === 'dark';\n  if (isDark) {\n    html.removeAttribute('data-theme');\n    localStorage.setItem('quill-theme', 'light');\n  } else {\n    html.setAttribute('data-theme', 'dark');\n    localStorage.setItem('quill-theme', 'dark');\n  
}\n});\ndocument.querySelectorAll('.copy-btn').forEach(function(btn) {\n  btn.addEventListener('click', function() {\n    var code = btn.parentElement.querySelector('pre code');\n    if (code) {\n      navigator.clipboard.writeText(code.textContent).then(function() {\n        btn.classList.add('copied');\n        setTimeout(function() { btn.classList.remove('copied'); }, 1500);\n      });\n    }\n  });\n});\n</script>\n<script>\n(function() {\n  var searchIndex = null;\n  var searchInput = document.getElementById('search-input');\n  var searchResults = document.getElementById('search-results');\n  var sidebarToc = document.getElementById('sidebar-toc');\n  fetch('{{root_path}}searchindex.json')\n    .then(function(r) { return r.json(); })\n    .then(function(data) { searchIndex = data; });\n  searchInput.addEventListener('input', function() {\n    var q = searchInput.value.trim().toLowerCase();\n    if (!q || !searchIndex) {\n      searchResults.hidden = true;\n      sidebarToc.hidden = false;\n      searchResults.innerHTML = '';\n      return;\n    }\n    var matches = [];\n    for (var i = 0; i < searchIndex.length && matches.length < 20; i++) {\n      var entry = searchIndex[i];\n      if (entry.title.toLowerCase().indexOf(q) !== -1 ||\n          entry.body.toLowerCase().indexOf(q) !== -1) {\n        matches.push(entry);\n      }\n    }\n    sidebarToc.hidden = true;\n    searchResults.hidden = false;\n    if (matches.length === 0) {\n      searchResults.innerHTML = '<div class=\"no-results\">No results found</div>';\n    } else {\n      var html = '';\n      for (var j = 0; j < matches.length; j++) {\n        html += '<a href=\"{{root_path}}' + matches[j].url + '\">' +\n          matches[j].title.replace(/&/g, '&amp;').replace(/</g, '&lt;') + '</a>';\n      }\n      searchResults.innerHTML = html;\n    }\n  });\n  document.addEventListener('keydown', function(e) {\n    if (e.key === '/' && document.activeElement !== searchInput &&\n        document.activeElement.tagName !== 'INPUT' 
&&\n        document.activeElement.tagName !== 'TEXTAREA') {\n      e.preventDefault();\n      searchInput.focus();\n    }\n  });\n})();\n</script>\n{{live_reload_script}}\n</body>\n</html>|html}\n\nlet print_template_html =\n  {html|<!DOCTYPE html>\n<html lang=\"en\">\n<head>\n<meta charset=\"utf-8\">\n<meta name=\"viewport\" content=\"width=device-width, initial-scale=1\">\n<title>{{book_title}} — Print</title>\n<link rel=\"stylesheet\" href=\"style.css\">\n<link rel=\"stylesheet\" href=\"https://cdn.jsdelivr.net/npm/katex@0.16.21/dist/katex.min.css\">\n<script defer src=\"https://cdn.jsdelivr.net/npm/katex@0.16.21/dist/katex.min.js\"></script>\n<script defer src=\"https://cdn.jsdelivr.net/npm/katex@0.16.21/dist/contrib/auto-render.min.js\"\n  onload=\"renderMathInElement(document.body,{delimiters:[{left:'$$',right:'$$',display:true},{left:'$',right:'$',display:false},{left:'\\\\(',right:'\\\\)',display:false},{left:'\\\\[',right:'\\\\]',display:true}]})\"></script>\n<script>\n(function() {\n  var t = localStorage.getItem('quill-theme');\n  if (t === 'dark') {\n    document.documentElement.setAttribute('data-theme', 'dark');\n  }\n})();\n</script>\n</head>\n<body>\n<div class=\"print-header\">\n<h1>{{book_title}}</h1>\n<button class=\"print-btn\" onclick=\"window.print()\">Print this page</button>\n</div>\n<main style=\"margin-left:0\">\n{{chapters}}\n</main>\n</body>\n</html>|html}\n\nlet standalone_html =\n  {html|<!DOCTYPE html>\n<html lang=\"en\">\n<head>\n<meta charset=\"utf-8\">\n<meta name=\"viewport\" content=\"width=device-width, initial-scale=1\">\n<title>{{title}}</title>\n<style>\n|html}\n  ^ style_css\n  ^ {html|\n</style>\n<link rel=\"stylesheet\" href=\"https://cdn.jsdelivr.net/npm/katex@0.16.21/dist/katex.min.css\">\n<script defer src=\"https://cdn.jsdelivr.net/npm/katex@0.16.21/dist/katex.min.js\"></script>\n<script defer src=\"https://cdn.jsdelivr.net/npm/katex@0.16.21/dist/contrib/auto-render.min.js\"\n  
onload=\"renderMathInElement(document.body,{delimiters:[{left:'$$',right:'$$',display:true},{left:'$',right:'$',display:false},{left:'\\\\(',right:'\\\\)',display:false},{left:'\\\\[',right:'\\\\]',display:true}]})\"></script>\n<script>\n(function() {\n  var t = localStorage.getItem('quill-theme');\n  if (t === 'dark') {\n    document.documentElement.setAttribute('data-theme', 'dark');\n  }\n})();\n</script>\n</head>\n<body>\n<button class=\"theme-toggle\" aria-label=\"Toggle theme\">\n<svg class=\"icon-sun\" viewBox=\"0 0 24 24\" fill=\"none\" stroke=\"currentColor\" stroke-width=\"2\" stroke-linecap=\"round\" stroke-linejoin=\"round\"><circle cx=\"12\" cy=\"12\" r=\"5\"/><line x1=\"12\" y1=\"1\" x2=\"12\" y2=\"3\"/><line x1=\"12\" y1=\"21\" x2=\"12\" y2=\"23\"/><line x1=\"4.22\" y1=\"4.22\" x2=\"5.64\" y2=\"5.64\"/><line x1=\"18.36\" y1=\"18.36\" x2=\"19.78\" y2=\"19.78\"/><line x1=\"1\" y1=\"12\" x2=\"3\" y2=\"12\"/><line x1=\"21\" y1=\"12\" x2=\"23\" y2=\"12\"/><line x1=\"4.22\" y1=\"19.78\" x2=\"5.64\" y2=\"18.36\"/><line x1=\"18.36\" y1=\"5.64\" x2=\"19.78\" y2=\"4.22\"/></svg>\n<svg class=\"icon-moon\" viewBox=\"0 0 24 24\" fill=\"none\" stroke=\"currentColor\" stroke-width=\"2\" stroke-linecap=\"round\" stroke-linejoin=\"round\"><path d=\"M21 12.79A9 9 0 1 1 11.21 3 7 7 0 0 0 21 12.79z\"/></svg>\n</button>\n\n<main style=\"margin-left:0\">\n<article>\n{{content}}\n</article>\n{{page_toc}}\n</main>\n<script>\ndocument.querySelector('.theme-toggle').addEventListener('click', function() {\n  var html = document.documentElement;\n  var isDark = html.getAttribute('data-theme') === 'dark';\n  if (isDark) {\n    html.removeAttribute('data-theme');\n    localStorage.setItem('quill-theme', 'light');\n  } else {\n    html.setAttribute('data-theme', 'dark');\n    localStorage.setItem('quill-theme', 'dark');\n  }\n});\ndocument.querySelectorAll('.copy-btn').forEach(function(btn) {\n  btn.addEventListener('click', function() {\n    var code = 
btn.parentElement.querySelector('pre code');\n    if (code) {\n      navigator.clipboard.writeText(code.textContent).then(function() {\n        btn.classList.add('copied');\n        setTimeout(function() { btn.classList.remove('copied'); }, 1500);\n      });\n    }\n  });\n});\n</script>\n{{live_reload_script}}\n</body>\n</html>|html}\n"
  },
  {
    "path": "packages/quill/lib/quill-markdown/dune",
    "content": "(library\n (name quill_markdown)\n (public_name quill.markdown)\n (libraries quill cmarkit unix))\n"
  },
  {
    "path": "packages/quill/lib/quill-markdown/edit.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\ntype span = { first : int; last : int }\n\ntype block_kind =\n  | Paragraph\n  | Heading of int\n  | Block_quote\n  | List\n  | Thematic_break\n  | Table\n  | Blank\n\ntype block = { span : span; kind : block_kind }\ntype t = { source : string; blocks : block list }\n\nlet classify_block = function\n  | Cmarkit.Block.Paragraph _ -> Paragraph\n  | Cmarkit.Block.Heading (h, _) -> Heading (Cmarkit.Block.Heading.level h)\n  | Cmarkit.Block.Code_block _ -> Paragraph\n  | Cmarkit.Block.Block_quote _ -> Block_quote\n  | Cmarkit.Block.List _ -> List\n  | Cmarkit.Block.Thematic_break _ -> Thematic_break\n  | Cmarkit.Block.Html_block _ -> Paragraph\n  | Cmarkit.Block.Blank_line _ -> Blank\n  | Cmarkit.Block.Link_reference_definition _ -> Blank\n  | Cmarkit.Block.Ext_table _ -> Table\n  | Cmarkit.Block.Ext_math_block _ -> Paragraph\n  | Cmarkit.Block.Ext_footnote_definition _ -> Blank\n  | Cmarkit.Block.Blocks _ -> Paragraph\n  | _ -> Paragraph\n\nlet parse source =\n  let doc = Cmarkit.Doc.of_string ~locs:true ~strict:false source in\n  let top_blocks =\n    match Cmarkit.Doc.block doc with\n    | Cmarkit.Block.Blocks (bs, _) -> bs\n    | b -> [ b ]\n  in\n  let blocks =\n    List.filter_map\n      (fun b ->\n        let loc = Cmarkit.Meta.textloc (Cmarkit.Block.meta b) in\n        if Cmarkit.Textloc.is_none loc then None\n        else\n          let first = Cmarkit.Textloc.first_byte loc in\n          let last = Cmarkit.Textloc.last_byte loc in\n          let kind = classify_block b in\n          Some { span = { first; last }; kind })\n      top_blocks\n  in\n  { source; blocks }\n\nlet source t = t.source\nlet blocks t = t.blocks\n\nlet active_block t ~cursor =\n  List.find_opt\n    (fun b -> cursor >= 
b.span.first && cursor <= b.span.last)\n    t.blocks\n\nlet block_source t block =\n  let len = block.span.last - block.span.first + 1 in\n  String.sub t.source block.span.first len\n\nlet to_html source =\n  let doc = Cmarkit.Doc.of_string ~heading_auto_ids:true ~strict:false source in\n  Cmarkit_html.of_doc ~safe:true doc\n\nlet block_to_html t block = to_html (block_source t block)\n"
  },
  {
    "path": "packages/quill/lib/quill-markdown/edit.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Typora-style cursor-aware markdown block segmentation.\n\n    Parses a markdown source string into top-level blocks with byte ranges.\n    Consumers use this to render inactive blocks formatted and the active block\n    (containing the cursor) as raw text for editing. *)\n\n(** {1:types Types} *)\n\ntype span = {\n  first : int;  (** Inclusive start byte offset in source (zero-based). *)\n  last : int;  (** Inclusive end byte offset in source (zero-based). *)\n}\n(** A byte range within the source string. *)\n\n(** The kind of a top-level markdown block. *)\ntype block_kind =\n  | Paragraph\n  | Heading of int\n  | Block_quote\n  | List\n  | Thematic_break\n  | Table\n  | Blank\n\ntype block = { span : span; kind : block_kind }\n(** A top-level block extracted from a markdown source string. *)\n\ntype t\n(** A parsed markdown source split into blocks with byte ranges. *)\n\n(** {1:parse Parsing} *)\n\nval parse : string -> t\n(** [parse source] parses [source] into blocks with byte ranges. *)\n\nval source : t -> string\n(** [source t] is the original source string. *)\n\nval blocks : t -> block list\n(** [blocks t] is the list of top-level blocks in document order. *)\n\n(** {1:query Queries} *)\n\nval active_block : t -> cursor:int -> block option\n(** [active_block t ~cursor] is the block containing byte offset [cursor], or\n    [None] if [cursor] is outside all blocks. *)\n\nval block_source : t -> block -> string\n(** [block_source t block] extracts the raw source substring for [block]. *)\n\n(** {1:render Rendering} *)\n\nval block_to_html : t -> block -> string\n(** [block_to_html t block] renders [block] to an HTML fragment. 
*)\n\nval to_html : string -> string\n(** [to_html source] renders CommonMark [source] to an HTML fragment. *)\n"
  },
  {
    "path": "packages/quill/lib/quill-markdown/quill_markdown.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* ───── Parsing ───── *)\n\nlet is_blank s =\n  let n = String.length s in\n  let rec loop i =\n    if i >= n then true\n    else\n      match s.[i] with ' ' | '\\t' | '\\n' | '\\r' -> loop (i + 1) | _ -> false\n  in\n  loop 0\n\nlet trim_blank_lines s =\n  let n = String.length s in\n  if n = 0 then s\n  else\n    let i = ref 0 in\n    while !i < n && (s.[!i] = '\\n' || s.[!i] = '\\r') do\n      incr i\n    done;\n    let j = ref (n - 1) in\n    while !j >= !i && (s.[!j] = '\\n' || s.[!j] = '\\r') do\n      decr j\n    done;\n    if !i > !j then \"\" else String.sub s !i (!j - !i + 1)\n\nlet code_block_range b =\n  match b with\n  | Cmarkit.Block.Code_block (cb, meta) ->\n      let language =\n        match Cmarkit.Block.Code_block.info_string cb with\n        | Some (info, _) -> (\n            match Cmarkit.Block.Code_block.language_of_info_string info with\n            | Some (lang, _) -> lang\n            | None -> \"\")\n        | None -> \"\"\n      in\n      let code_lines = Cmarkit.Block.Code_block.code cb in\n      let code =\n        String.concat \"\\n\" (List.map (fun (line, _) -> line) code_lines)\n      in\n      let loc = Cmarkit.Meta.textloc meta in\n      let first = Cmarkit.Textloc.first_byte loc in\n      let last = Cmarkit.Textloc.last_byte loc in\n      Some (first, last, language, code)\n  | _ -> None\n\nlet cell_id_open = \"<!-- quill:cell id=\\\"\"\nlet comment_close = \"-->\"\nlet output_open = \"<!-- quill:output -->\"\nlet output_close = \"<!-- /quill:output -->\"\n\nlet find_substring haystack needle start =\n  let nlen = String.length needle in\n  let hlen = String.length haystack in\n  if nlen = 0 then Some start\n  else\n    let limit = hlen - nlen in\n    let rec 
loop i =\n      if i > limit then None\n      else if String.sub haystack i nlen = needle then Some i\n      else loop (i + 1)\n    in\n    loop start\n\nlet parse_attrs_tokens s =\n  let tokens = String.split_on_char ' ' s |> List.filter (fun t -> t <> \"\") in\n  let rec loop (attrs : Quill.Cell.attrs) = function\n    | [] -> attrs\n    | \"collapsed\" :: rest -> loop { attrs with collapsed = true } rest\n    | \"hide-source\" :: rest -> loop { attrs with hide_source = true } rest\n    | _ :: rest -> loop attrs rest\n  in\n  loop Quill.Cell.default_attrs tokens\n\nlet quote = \"\\\"\"\n\nlet try_parse_cell_id s start =\n  match find_substring s cell_id_open start with\n  | Some open_pos -> (\n      let id_start = open_pos + String.length cell_id_open in\n      match find_substring s quote id_start with\n      | Some quote_pos -> (\n          let id = String.sub s id_start (quote_pos - id_start) in\n          match find_substring s comment_close (quote_pos + 1) with\n          | Some close_pos ->\n              let attrs_str =\n                String.sub s (quote_pos + 1) (close_pos - quote_pos - 1)\n              in\n              let attrs = parse_attrs_tokens attrs_str in\n              let comment_end = close_pos + String.length comment_close in\n              Some (id, attrs, open_pos, comment_end)\n          | None -> None)\n      | None -> None)\n  | None -> None\n\nlet strip_leading_cell_id s =\n  let s_trimmed = trim_blank_lines s in\n  match try_parse_cell_id s_trimmed 0 with\n  | Some (id, attrs, 0, comment_end) ->\n      let rest =\n        if comment_end < String.length s_trimmed then\n          String.sub s_trimmed comment_end\n            (String.length s_trimmed - comment_end)\n        else \"\"\n      in\n      Some (id, attrs, trim_blank_lines rest)\n  | _ -> None\n\nlet strip_trailing_cell_id s =\n  let s_trimmed = trim_blank_lines s in\n  let len = String.length s_trimmed in\n  (* Minimum length: cell_id_open + closing quote + space + 
comment_close *)\n  let min_len = String.length cell_id_open + String.length \"\\\" -->\" in\n  if len < min_len then None\n  else\n    (* Scan backwards for the last newline to find the last line *)\n    let last_line_start =\n      let rec loop i =\n        if i < 0 then 0 else if s_trimmed.[i] = '\\n' then i + 1 else loop (i - 1)\n      in\n      loop (len - 1)\n    in\n    let last_line =\n      String.sub s_trimmed last_line_start (len - last_line_start)\n    in\n    match try_parse_cell_id last_line 0 with\n    | Some (id, attrs, 0, comment_end)\n      when comment_end = String.length last_line ->\n        let rest =\n          if last_line_start > 0 then String.sub s_trimmed 0 last_line_start\n          else \"\"\n        in\n        Some (id, attrs, trim_blank_lines rest)\n    | _ -> None\n\nlet out_marker_prefix = \"<!-- out:\"\nlet out_marker_suffix = \" -->\"\nlet is_image mime = String.length mime >= 6 && String.sub mime 0 6 = \"image/\"\n\nlet extension_of_mime mime =\n  match mime with\n  | \"image/png\" -> \"png\"\n  | \"image/jpeg\" -> \"jpg\"\n  | \"image/gif\" -> \"gif\"\n  | \"image/svg+xml\" -> \"svg\"\n  | \"image/webp\" -> \"webp\"\n  | _ ->\n      if is_image mime && String.length mime > 6 then\n        String.sub mime 6 (String.length mime - 6)\n      else \"bin\"\n\nlet base64_decode_table =\n  let t = Array.make 256 (-1) in\n  String.iteri\n    (fun i c -> t.(Char.code c) <- i)\n    \"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/\";\n  t\n\nlet base64_decode s =\n  let len = String.length s in\n  (* Count valid base64 characters *)\n  let valid = ref 0 in\n  for i = 0 to len - 1 do\n    let c = Char.code (String.unsafe_get s i) in\n    if base64_decode_table.(c) >= 0 then incr valid\n  done;\n  let out_len = !valid * 3 / 4 in\n  let out = Bytes.create out_len in\n  let j = ref 0 in\n  let acc = ref 0 in\n  let bits = ref 0 in\n  for i = 0 to len - 1 do\n    let c = Char.code (String.unsafe_get s i) in\n    let v = 
base64_decode_table.(c) in\n    if v >= 0 then begin\n      acc := (!acc lsl 6) lor v;\n      bits := !bits + 6;\n      if !bits >= 8 then begin\n        bits := !bits - 8;\n        if !j < out_len then begin\n          Bytes.unsafe_set out !j (Char.chr ((!acc lsr !bits) land 0xff));\n          incr j\n        end\n      end\n    end\n  done;\n  Bytes.sub_string out 0 !j\n\n(* Extract src attribute value from an <img> tag *)\nlet extract_img_src s =\n  let src_attr = \"src=\\\"\" in\n  match find_substring s src_attr 0 with\n  | None -> None\n  | Some i ->\n      let start = i + String.length src_attr in\n      let rec find_quote j =\n        if j >= String.length s then None\n        else if s.[j] = '\"' then Some (String.sub s start (j - start))\n        else find_quote (j + 1)\n      in\n      find_quote start\n\n(* Extract base64 data from a data URI: data:mime;base64,DATA *)\nlet extract_data_uri_base64 src =\n  let prefix = \"data:\" in\n  let marker = \";base64,\" in\n  if\n    String.length src > String.length prefix\n    && String.sub src 0 (String.length prefix) = prefix\n  then\n    match find_substring src marker 0 with\n    | Some i ->\n        let data_start = i + String.length marker in\n        Some (String.sub src data_start (String.length src - data_start))\n    | None -> None\n  else None\n\n(* Parse image Display data from <img> tag content *)\nlet parse_image_display ?base_dir mime content =\n  match extract_img_src content with\n  | Some src ->\n      begin match extract_data_uri_base64 src with\n      | Some base64 ->\n          (* Inline data URI: extract base64 directly *)\n          Quill.Cell.Display { mime; data = base64 }\n      | None ->\n          (* File reference: read and base64-encode *)\n          begin match base_dir with\n          | Some dir ->\n              let path = Filename.concat dir src in\n              let ic = open_in_bin path in\n              let raw =\n                Fun.protect\n                  ~finally:(fun 
() -> close_in ic)\n                  (fun () -> really_input_string ic (in_channel_length ic))\n              in\n              let data =\n                (* Reuse the base64_encode from Hugin's image_util convention *)\n                let alphabet =\n                  \"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/\"\n                in\n                let len = String.length raw in\n                let out_len = (len + 2) / 3 * 4 in\n                let out = Bytes.create out_len in\n                let rec loop i j =\n                  if i < len then begin\n                    let b0 = Char.code (String.unsafe_get raw i) in\n                    let b1 =\n                      if i + 1 < len then\n                        Char.code (String.unsafe_get raw (i + 1))\n                      else 0\n                    in\n                    let b2 =\n                      if i + 2 < len then\n                        Char.code (String.unsafe_get raw (i + 2))\n                      else 0\n                    in\n                    Bytes.unsafe_set out j\n                      (String.unsafe_get alphabet (b0 lsr 2));\n                    Bytes.unsafe_set out (j + 1)\n                      (String.unsafe_get alphabet\n                         (((b0 land 3) lsl 4) lor (b1 lsr 4)));\n                    Bytes.unsafe_set out (j + 2)\n                      (if i + 1 < len then\n                         String.unsafe_get alphabet\n                           (((b1 land 0xf) lsl 2) lor (b2 lsr 6))\n                       else '=');\n                    Bytes.unsafe_set out (j + 3)\n                      (if i + 2 < len then\n                         String.unsafe_get alphabet (b2 land 0x3f)\n                       else '=');\n                    loop (i + 3) (j + 4)\n                  end\n                in\n                loop 0 0;\n                Bytes.unsafe_to_string out\n              in\n              Quill.Cell.Display { mime; data }\n         
 | None ->\n              (* No base_dir, store src as placeholder *)\n              Quill.Cell.Display { mime; data = \"\" }\n          end\n      end\n  | None ->\n      (* No <img> tag — treat as raw data *)\n      Quill.Cell.Display { mime; data = content }\n\nlet parse_output_sections ?base_dir content =\n  let lines = String.split_on_char '\\n' content in\n  let flush_section tag buf acc =\n    let text = Buffer.contents buf in\n    Buffer.clear buf;\n    if is_blank text then acc\n    else\n      let trimmed =\n        let n = String.length text in\n        let j = ref (n - 1) in\n        while !j >= 0 && text.[!j] = '\\n' do\n          decr j\n        done;\n        if !j < 0 then \"\" else String.sub text 0 (!j + 1)\n      in\n      let output =\n        match tag with\n        | \"stdout\" -> Quill.Cell.Stdout trimmed\n        | \"stderr\" -> Quill.Cell.Stderr trimmed\n        | \"error\" -> Quill.Cell.Error trimmed\n        | display_tag ->\n            (* \"display MIME\" *)\n            let prefix = \"display \" in\n            let plen = String.length prefix in\n            if\n              String.length display_tag > plen\n              && String.sub display_tag 0 plen = prefix\n            then\n              let mime =\n                String.sub display_tag plen (String.length display_tag - plen)\n              in\n              if is_image mime then parse_image_display ?base_dir mime trimmed\n              else Quill.Cell.Display { mime; data = trimmed }\n            else (* Unknown tag, treat as stdout *)\n              Quill.Cell.Stdout trimmed\n      in\n      output :: acc\n  in\n  let has_markers =\n    List.exists\n      (fun l ->\n        let t = String.trim l in\n        String.length t > String.length out_marker_prefix\n        && String.sub t 0 (String.length out_marker_prefix) = out_marker_prefix)\n      lines\n  in\n  if not has_markers then\n    (* Backward compat: no markers, treat entire content as Stdout *)\n    if is_blank 
content then []\n    else\n      let trimmed =\n        let n = String.length content in\n        if n > 0 && content.[n - 1] = '\\n' then String.sub content 0 (n - 1)\n        else content\n      in\n      [ Quill.Cell.Stdout trimmed ]\n  else\n    let buf = Buffer.create 256 in\n    let tag = ref \"\" in\n    let acc = ref [] in\n    List.iter\n      (fun line ->\n        let trimmed = String.trim line in\n        let plen = String.length out_marker_prefix in\n        let slen = String.length out_marker_suffix in\n        if\n          String.length trimmed > plen + slen\n          && String.sub trimmed 0 plen = out_marker_prefix\n          && String.sub trimmed (String.length trimmed - slen) slen\n             = out_marker_suffix\n        then (\n          (* Flush previous section *)\n          if !tag <> \"\" then acc := flush_section !tag buf !acc;\n          (* Extract tag name *)\n          let tag_str =\n            String.sub trimmed plen (String.length trimmed - plen - slen)\n          in\n          tag := tag_str)\n        else (\n          Buffer.add_string buf line;\n          Buffer.add_char buf '\\n'))\n      lines;\n    (* Flush last section *)\n    if !tag <> \"\" then acc := flush_section !tag buf !acc;\n    List.rev !acc\n\nlet parse_outputs ?base_dir md ~after =\n  let len = String.length md in\n  let pos = ref after in\n  while !pos < len && (md.[!pos] = '\\n' || md.[!pos] = '\\r') do\n    incr pos\n  done;\n  match find_substring md output_open !pos with\n  | Some open_pos when open_pos = !pos -> (\n      let content_start = open_pos + String.length output_open in\n      let content_start =\n        if content_start < len && md.[content_start] = '\\n' then\n          content_start + 1\n        else content_start\n      in\n      match find_substring md output_close content_start with\n      | Some close_pos ->\n          let content =\n            String.sub md content_start (close_pos - content_start)\n          in\n          let outputs = 
parse_output_sections ?base_dir content in\n          let end_pos = close_pos + String.length output_close in\n          Some (outputs, end_pos)\n      | None -> None)\n  | _ -> None\n\nlet of_string ?base_dir md =\n  let doc = Cmarkit.Doc.of_string ~locs:true md in\n  let top_blocks =\n    match Cmarkit.Doc.block doc with\n    | Cmarkit.Block.Blocks (bs, _) -> bs\n    | b -> [ b ]\n  in\n  (* Collect code block byte ranges *)\n  let code_ranges = List.filter_map code_block_range top_blocks in\n  (* Build cells by slicing the original text at code block boundaries *)\n  let cells = ref [] in\n  let cursor = ref 0 in\n  List.iter\n    (fun (first, last, lang, code) ->\n      (* Text between previous position and this code block *)\n      let code_id = ref None in\n      let code_attrs = ref Quill.Cell.default_attrs in\n      (if !cursor < first then\n         let gap = String.sub md !cursor (first - !cursor) in\n         (* Extract trailing cell ID for the code block *)\n         let gap =\n           match strip_trailing_cell_id gap with\n           | Some (id, attrs, rest) ->\n               code_id := Some id;\n               code_attrs := attrs;\n               rest\n           | None -> gap\n         in\n         (* Extract leading cell ID for the text cell *)\n         let text_id, text_attrs, gap =\n           match strip_leading_cell_id gap with\n           | Some (id, attrs, rest) -> (Some id, Some attrs, rest)\n           | None -> (None, None, gap)\n         in\n         let gap = trim_blank_lines gap in\n         if not (is_blank gap) then\n           cells := Quill.Cell.text ?id:text_id ?attrs:text_attrs gap :: !cells);\n      (* The code block itself *)\n      let cell =\n        Quill.Cell.code ?id:!code_id ~attrs:!code_attrs ~language:lang code\n      in\n      (* Check for output markers immediately after the code block *)\n      let cell, end_pos =\n        match parse_outputs ?base_dir md ~after:(last + 1) with\n        | Some (outputs, end_pos) 
->\n            (Quill.Cell.set_outputs outputs cell, end_pos)\n        | None -> (cell, last + 1)\n      in\n      cells := cell :: !cells;\n      cursor := end_pos)\n    code_ranges;\n  (* Remaining text after last code block *)\n  (if !cursor < String.length md then\n     let remaining = String.sub md !cursor (String.length md - !cursor) in\n     let text_id, text_attrs, remaining =\n       match strip_leading_cell_id remaining with\n       | Some (id, attrs, rest) -> (Some id, Some attrs, rest)\n       | None -> (None, None, remaining)\n     in\n     let remaining = trim_blank_lines remaining in\n     if not (is_blank remaining) then\n       cells :=\n         Quill.Cell.text ?id:text_id ?attrs:text_attrs remaining :: !cells);\n  Quill.Doc.of_cells (List.rev !cells)\n\n(* ───── Rendering ───── *)\n\nlet add_content buf s =\n  Buffer.add_string buf s;\n  if s <> \"\" && s.[String.length s - 1] <> '\\n' then Buffer.add_char buf '\\n'\n\nlet rec mkdir_p dir =\n  if Sys.file_exists dir then ()\n  else (\n    mkdir_p (Filename.dirname dir);\n    try Unix.mkdir dir 0o755 with Unix.Unix_error (Unix.EEXIST, _, _) -> ())\n\nlet write_figure_file ~path ~data =\n  mkdir_p (Filename.dirname path);\n  let raw = base64_decode data in\n  let oc = open_out_bin path in\n  Fun.protect ~finally:(fun () -> close_out oc) (fun () -> output_string oc raw)\n\n(* Extract cell ID prefix from a figure filename like \"c_abc123.png\" or\n   \"c_abc123-2.png\" *)\nlet cell_id_of_figure_name name =\n  let base = Filename.remove_extension name in\n  (* Strip trailing -N suffix *)\n  match String.rindex_opt base '-' with\n  | Some i ->\n      let suffix = String.sub base (i + 1) (String.length base - i - 1) in\n      let all_digits =\n        String.length suffix > 0\n        && String.to_seq suffix |> Seq.for_all (fun c -> c >= '0' && c <= '9')\n      in\n      if all_digits then String.sub base 0 i else base\n  | None -> base\n\nlet clean_orphan_figures ~figures_dir ~cell_ids =\n  if 
Sys.file_exists figures_dir && Sys.is_directory figures_dir then\n    let entries = Sys.readdir figures_dir in\n    Array.iter\n      (fun name ->\n        let cid = cell_id_of_figure_name name in\n        if not (List.mem cid cell_ids) then\n          Sys.remove (Filename.concat figures_dir name))\n      entries\n\nlet render_output ?figures_dir ~cell_id ~img_counter buf = function\n  | Quill.Cell.Stdout s ->\n      Buffer.add_string buf \"<!-- out:stdout -->\\n\";\n      add_content buf s\n  | Quill.Cell.Stderr s ->\n      Buffer.add_string buf \"<!-- out:stderr -->\\n\";\n      add_content buf s\n  | Quill.Cell.Error s ->\n      Buffer.add_string buf \"<!-- out:error -->\\n\";\n      add_content buf s\n  | Quill.Cell.Display { mime; data } ->\n      Buffer.add_string buf \"<!-- out:display \";\n      Buffer.add_string buf mime;\n      Buffer.add_string buf \" -->\\n\";\n      if is_image mime then begin\n        let ext = extension_of_mime mime in\n        match figures_dir with\n        | Some dir ->\n            (* Disk mode: write file, reference by path *)\n            incr img_counter;\n            let basename =\n              if !img_counter = 1 then cell_id ^ \".\" ^ ext\n              else cell_id ^ \"-\" ^ string_of_int !img_counter ^ \".\" ^ ext\n            in\n            let path = Filename.concat dir basename in\n            write_figure_file ~path ~data;\n            Buffer.add_string buf \"<img src=\\\"figures/\";\n            Buffer.add_string buf basename;\n            Buffer.add_string buf \"\\\">\\n\"\n        | None ->\n            (* Inline mode (default): data URI in <img> tag *)\n            Buffer.add_string buf \"<img src=\\\"data:\";\n            Buffer.add_string buf mime;\n            Buffer.add_string buf \";base64,\";\n            Buffer.add_string buf data;\n            Buffer.add_string buf \"\\\">\\n\"\n      end\n      else\n        (* Textual payloads (text/html, text/plain, ...) are emitted verbatim *)\n        add_content buf data\n\nlet render_cell_id 
buf id (attrs : Quill.Cell.attrs) =\n  Buffer.add_string buf cell_id_open;\n  Buffer.add_string buf id;\n  Buffer.add_char buf '\"';\n  if attrs.collapsed then Buffer.add_string buf \" collapsed\";\n  if attrs.hide_source then Buffer.add_string buf \" hide-source\";\n  Buffer.add_string buf \" -->\\n\"\n\nlet render_cell ?figures_dir ~with_outputs buf = function\n  | Quill.Cell.Text { source; _ } ->\n      Buffer.add_string buf source;\n      Buffer.add_char buf '\\n'\n  | Quill.Cell.Code { id; source; language; outputs; attrs; _ } ->\n      render_cell_id buf id attrs;\n      Buffer.add_string buf \"```\";\n      Buffer.add_string buf language;\n      Buffer.add_char buf '\\n';\n      Buffer.add_string buf source;\n      Buffer.add_char buf '\\n';\n      Buffer.add_string buf \"```\";\n      if with_outputs && outputs <> [] then (\n        Buffer.add_char buf '\\n';\n        Buffer.add_string buf \"<!-- quill:output -->\\n\";\n        let img_counter = ref 0 in\n        List.iter\n          (render_output ?figures_dir ~cell_id:id ~img_counter buf)\n          outputs;\n        Buffer.add_string buf \"<!-- /quill:output -->\")\n\nlet render ?figures_dir ~with_outputs doc =\n  let buf = Buffer.create 4096 in\n  let cells = Quill.Doc.cells doc in\n  (* Clean orphaned figures before writing new ones *)\n  (match figures_dir with\n  | Some dir ->\n      let cell_ids =\n        List.filter_map\n          (function\n            | Quill.Cell.Code { id; _ } -> Some id | Quill.Cell.Text _ -> None)\n          cells\n      in\n      clean_orphan_figures ~figures_dir:dir ~cell_ids\n  | None -> ());\n  let rec loop = function\n    | [] -> ()\n    | [ c ] -> render_cell ?figures_dir ~with_outputs buf c\n    | c :: rest ->\n        render_cell ?figures_dir ~with_outputs buf c;\n        Buffer.add_char buf '\\n';\n        Buffer.add_char buf '\\n';\n        loop rest\n  in\n  loop cells;\n  let s = Buffer.contents buf in\n  (* Ensure file ends with a newline *)\n  if s <> \"\" && 
s.[String.length s - 1] <> '\\n' then s ^ \"\\n\" else s\n\nlet to_string doc = render ~with_outputs:false doc\n\nlet to_string_with_outputs ?figures_dir doc =\n  render ?figures_dir ~with_outputs:true doc\n\nmodule Edit = Edit\n"
  },
  {
    "path": "packages/quill/lib/quill-markdown/quill_markdown.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Markdown notebook format.\n\n    Parses markdown files into {!Quill.Doc.t} and renders documents back to\n    markdown. Code blocks become code cells; everything else becomes text cells.\n*)\n\nval of_string : ?base_dir:string -> string -> Quill.Doc.t\n(** [of_string ?base_dir s] parses markdown string [s] into a document.\n\n    Fenced code blocks with a language info string become code cells. All other\n    content between code blocks is merged into text cells. Adjacent non-code\n    content forms a single text cell.\n\n    When [base_dir] is provided, image Display outputs that reference files on\n    disk (e.g. [<img src=\"figures/cell-id.png\">]) are resolved by reading the\n    file relative to [base_dir] and base64-encoding the contents. *)\n\nval to_string : Quill.Doc.t -> string\n(** [to_string doc] renders [doc] as a markdown string.\n\n    Text cells are emitted verbatim. Code cells are rendered as fenced code\n    blocks. Cell outputs are not included. *)\n\nval to_string_with_outputs : ?figures_dir:string -> Quill.Doc.t -> string\n(** [to_string_with_outputs ?figures_dir doc] renders [doc] as markdown with\n    outputs.\n\n    Like {!to_string} but code cell outputs are serialized between\n    [<!-- quill:output -->] and [<!-- /quill:output -->] comment markers after\n    each code block.\n\n    Display outputs are rendered as inline HTML:\n    - Image outputs become [<img>] tags with data URIs (default) or file\n      references (when [figures_dir] is set).\n    - HTML outputs are emitted as inline HTML.\n\n    When [figures_dir] is provided, images are written to disk as\n    [<figures_dir>/<cell-id>.<ext>] and referenced by relative path. 
Orphaned\n    figure files (from deleted or changed cells) are cleaned up automatically.\n*)\n\nmodule Edit : module type of Edit\n"
  },
  {
    "path": "packages/quill/lib/quill-project/dune",
    "content": "(library\n (name quill_project)\n (public_name quill.project))\n"
  },
  {
    "path": "packages/quill/lib/quill-project/quill_project.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\ntype notebook = { title : string; path : string }\n\ntype toc_item =\n  | Notebook of notebook * toc_item list\n  | Section of string\n  | Separator\n\ntype config = {\n  title : string option;\n  authors : string list;\n  description : string option;\n  output : string option;\n  edit_url : string option;\n}\n\ntype t = { title : string; root : string; toc : toc_item list; config : config }\n\nlet default_config =\n  {\n    title = None;\n    authors = [];\n    description = None;\n    output = None;\n    edit_url = None;\n  }\n\n(* ───── Config parser ───── *)\n\nlet trim = String.trim\n\nlet leading_spaces s =\n  let len = String.length s in\n  let rec loop i = if i < len && s.[i] = ' ' then loop (i + 1) else i in\n  loop 0\n\nlet is_comment_or_blank s =\n  let s = trim s in\n  String.length s = 0 || s.[0] = '#'\n\nlet is_separator s = trim s = \"---\"\n\nlet is_section s =\n  let s = trim s in\n  let len = String.length s in\n  len >= 2 && s.[0] = '[' && s.[len - 1] = ']'\n\nlet parse_section s =\n  let s = trim s in\n  String.sub s 1 (String.length s - 2)\n\nlet parse_kv s =\n  match String.index_opt s '=' with\n  | None -> None\n  | Some i ->\n      let key = trim (String.sub s 0 i) in\n      let value = trim (String.sub s (i + 1) (String.length s - i - 1)) in\n      Some (key, value)\n\nlet is_toc_entry s = parse_kv s <> None || is_section s || is_separator s\n\nlet parse_metadata (cfg : config) key value =\n  match key with\n  | \"title\" -> { cfg with title = Some value }\n  | \"authors\" ->\n      let authors = List.map trim (String.split_on_char ',' value) in\n      { cfg with authors }\n  | \"description\" -> { cfg with description = Some value }\n  | \"output\" -> { cfg with output = 
Some value }\n  | \"edit-url\" -> { cfg with edit_url = Some value }\n  | _ -> cfg\n\n(* TOC parser: builds a tree from indented lines.\n\n   We collect items at each indent level. When indentation increases, new items\n   become children of the previous notebook. When it decreases, we close the\n   current group. *)\n\ntype toc_entry =\n  | E_notebook of string * string * int (* title, path, indent *)\n  | E_section of string\n  | E_separator\n\nlet collect_toc_entries lines =\n  let entries = ref [] in\n  List.iter\n    (fun line ->\n      if is_comment_or_blank line then ()\n      else if is_separator line then entries := E_separator :: !entries\n      else if is_section line then\n        entries := E_section (parse_section line) :: !entries\n      else\n        let indent = leading_spaces line in\n        match parse_kv line with\n        | Some (title, path) ->\n            entries := E_notebook (title, path, indent) :: !entries\n        | None -> ())\n    lines;\n  List.rev !entries\n\nlet rec build_toc entries =\n  match entries with\n  | [] -> ([], [])\n  | entry :: rest -> (\n      match entry with\n      | E_separator ->\n          let siblings, remaining = build_toc rest in\n          (Separator :: siblings, remaining)\n      | E_section title ->\n          let siblings, remaining = build_toc rest in\n          (Section title :: siblings, remaining)\n      | E_notebook (title, path, indent) ->\n          let children, after_children = collect_children (indent + 1) rest in\n          let nb = { title; path } in\n          let siblings, remaining = build_toc after_children in\n          (Notebook (nb, children) :: siblings, remaining))\n\nand collect_children min_indent entries =\n  match entries with\n  | E_notebook (_, _, indent) :: _ when indent >= min_indent ->\n      let item, rest = take_one_child min_indent entries in\n      let more_children, remaining = collect_children min_indent rest in\n      (item :: more_children, remaining)\n  | _ -> ([], 
entries)\n\nand take_one_child min_indent entries =\n  match entries with\n  | E_notebook (title, path, indent) :: rest when indent >= min_indent ->\n      let children, remaining = collect_children (indent + 1) rest in\n      let nb = { title; path } in\n      (Notebook (nb, children), remaining)\n  | _ -> failwith \"take_one_child: expected notebook entry\"\n\nlet parse_config source =\n  let lines = String.split_on_char '\\n' source in\n  (* Split into metadata lines and TOC lines *)\n  let in_metadata = ref true in\n  let meta_lines = ref [] in\n  let toc_lines = ref [] in\n  List.iter\n    (fun line ->\n      if !in_metadata then\n        if is_comment_or_blank line then ()\n        else if is_toc_entry (String.trim line) then (\n          (* Check if this is a key=value that looks like metadata or TOC *)\n          match parse_kv line with\n          | Some (_, _) when (not (is_section line)) && leading_spaces line = 0\n            ->\n              (* Could be metadata or a TOC entry. Heuristic: if the value looks\n                 like a file path (contains . or /), it's TOC *)\n              let trimmed = trim line in\n              let value =\n                match String.index_opt trimmed '=' with\n                | Some i ->\n                    trim\n                      (String.sub trimmed (i + 1)\n                         (String.length trimmed - i - 1))\n                | None -> \"\"\n              in\n              if String.contains value '/' || String.contains value '.' then (\n                in_metadata := false;\n                toc_lines := line :: !toc_lines)\n              else if value = \"\" then (\n                (* Empty value at indent 0: could be a placeholder TOC entry or\n                   a metadata key with no value. 
If we haven't seen any TOC\n                   entries yet, check if the key is a known metadata key *)\n                let key =\n                  match String.index_opt trimmed '=' with\n                  | Some i -> trim (String.sub trimmed 0 i)\n                  | None -> trimmed\n                in\n                match key with\n                | \"title\" | \"authors\" | \"description\" | \"output\" | \"edit-url\" ->\n                    meta_lines := line :: !meta_lines\n                | _ ->\n                    in_metadata := false;\n                    toc_lines := line :: !toc_lines)\n              else meta_lines := line :: !meta_lines\n          | _ ->\n              in_metadata := false;\n              toc_lines := line :: !toc_lines)\n        else meta_lines := line :: !meta_lines\n      else toc_lines := line :: !toc_lines)\n    lines;\n  let config =\n    List.fold_left\n      (fun cfg line ->\n        match parse_kv line with\n        | Some (key, value) -> parse_metadata cfg key value\n        | None -> cfg)\n      default_config (List.rev !meta_lines)\n  in\n  let toc_entries = collect_toc_entries (List.rev !toc_lines) in\n  let toc, _ = build_toc toc_entries in\n  Ok (config, toc)\n\n(* ───── Title from filename ───── *)\n\nlet title_of_filename path =\n  let base = Filename.basename path in\n  let name = Filename.remove_extension base in\n  (* Strip leading digits and separators *)\n  let len = String.length name in\n  let start = ref 0 in\n  while\n    !start < len\n    &&\n    let c = name.[!start] in\n    (c >= '0' && c <= '9') || c = '-' || c = '_'\n  do\n    incr start\n  done;\n  let name =\n    if !start >= len then name else String.sub name !start (len - !start)\n  in\n  (* Replace dashes and underscores with spaces *)\n  let buf = Buffer.create (String.length name) in\n  String.iter\n    (fun c ->\n      match c with\n      | '-' | '_' -> Buffer.add_char buf ' '\n      | c -> Buffer.add_char buf c)\n    name;\n  let result = 
Buffer.contents buf in\n  (* Capitalize first letter *)\n  if String.length result > 0 then\n    let first = Char.uppercase_ascii result.[0] in\n    let rest = String.sub result 1 (String.length result - 1) in\n    String.make 1 first ^ rest\n  else result\n\n(* ───── Queries ───── *)\n\nlet rec all_notebooks toc =\n  List.concat_map\n    (fun item ->\n      match item with\n      | Notebook (nb, children) -> nb :: all_notebooks children\n      | Section _ | Separator -> [])\n    toc\n\nlet is_placeholder nb = nb.path = \"\"\n\nlet notebooks project =\n  List.filter (fun nb -> not (is_placeholder nb)) (all_notebooks project.toc)\n\nlet notebooks_array project = Array.of_list (notebooks project)\n\nlet find_notebook_index project nb =\n  let nbs = notebooks_array project in\n  let rec loop i =\n    if i >= Array.length nbs then None\n    else if nbs.(i).path = nb.path then Some i\n    else loop (i + 1)\n  in\n  loop 0\n\nlet prev_notebook project nb =\n  match find_notebook_index project nb with\n  | Some i when i > 0 -> Some (notebooks_array project).(i - 1)\n  | _ -> None\n\nlet next_notebook project nb =\n  let nbs = notebooks_array project in\n  match find_notebook_index project nb with\n  | Some i when i < Array.length nbs - 1 -> Some nbs.(i + 1)\n  | _ -> None\n\nlet number toc nb =\n  let rec search counter = function\n    | [] -> None\n    | Notebook (n, children) :: rest ->\n        incr counter;\n        if n.path = nb.path then Some [ !counter ]\n        else\n          begin match search (ref 0) children with\n          | Some sub -> Some (!counter :: sub)\n          | None -> search counter rest\n          end\n    | Section _ :: rest ->\n        counter := 0;\n        search counter rest\n    | Separator :: rest -> search counter rest\n  in\n  match search (ref 0) toc with Some ns -> ns | None -> []\n\nlet number_string = function\n  | [] -> \"\"\n  | ns -> String.concat \".\" (List.map string_of_int ns)\n"
  },
  {
    "path": "packages/quill/lib/quill-project/quill_project.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Notebook project model.\n\n    A project is a collection of notebooks, optionally organized by a config\n    file. Without a config file, any directory of [.md] files is a valid\n    project. *)\n\n(** {1:types Types} *)\n\ntype notebook = { title : string; path : string }\n(** A notebook reference. [path] is relative to the project root. Empty for\n    placeholders. *)\n\ntype toc_item =\n  | Notebook of notebook * toc_item list\n  | Section of string\n  | Separator\n      (** An entry in the table of contents. [Notebook (nb, children)] is a\n          notebook with optional sub-entries. [Section title] introduces a named\n          group. [Separator] is a visual break. *)\n\ntype config = {\n  title : string option;\n  authors : string list;\n  description : string option;\n  output : string option;\n  edit_url : string option;\n}\n(** Project configuration. *)\n\ntype t = {\n  title : string;  (** Project title. *)\n  root : string;  (** Absolute path to the project directory. *)\n  toc : toc_item list;  (** Table of contents. *)\n  config : config;  (** Configuration. *)\n}\n(** A project. *)\n\n(** {1:config Configuration} *)\n\nval default_config : config\n(** Default configuration with all fields empty/none. *)\n\nval parse_config : string -> (config * toc_item list, string) result\n(** [parse_config s] parses a [quill.conf] file. Returns the configuration and\n    table of contents entries.\n\n    Format: [key = value] lines for metadata, [[Section Name]] for groups,\n    [Title = path] for notebooks (indentation creates nesting), [---] for\n    separators, [#] for comments. 
*)\n\n(** {1:titles Titles} *)\n\nval title_of_filename : string -> string\n(** [title_of_filename \"01-intro.md\"] is [\"Intro\"]. Strips the extension, strips\n    leading digits and separators, replaces dashes and underscores with spaces,\n    and capitalizes the first letter. *)\n\n(** {1:queries Queries} *)\n\nval notebooks : t -> notebook list\n(** [notebooks t] is the flat, ordered list of all notebooks in [t], excluding\n    placeholders. *)\n\nval all_notebooks : toc_item list -> notebook list\n(** [all_notebooks toc] flattens [toc] into an ordered list of all notebooks,\n    including placeholders. *)\n\nval is_placeholder : notebook -> bool\n(** [is_placeholder nb] is [true] iff [nb] has no file (empty path). *)\n\nval prev_notebook : t -> notebook -> notebook option\n(** [prev_notebook t nb] is the notebook before [nb], or [None]. *)\n\nval next_notebook : t -> notebook -> notebook option\n(** [next_notebook t nb] is the notebook after [nb], or [None]. *)\n\nval number : toc_item list -> notebook -> int list\n(** [number toc nb] is the section number of [nb] in [toc], derived from its\n    position. E.g. [[1; 2]] for the second entry in the first group. Returns\n    [[]] if [nb] is not found. *)\n\nval number_string : int list -> string\n(** [number_string [1; 2]] is [\"1.2\"]. Returns [\"\"] for [[]]. *)\n"
  },
  {
    "path": "packages/quill/lib/quill-server/dune",
    "content": "(library\n (name quill_server)\n (public_name quill.server)\n (modules quill_server httpd protocol assets)\n (private_modules httpd protocol assets)\n (libraries\n  quill\n  quill.project\n  quill.markdown\n  jsont\n  jsont.bytesrw\n  unix\n  threads.posix))\n\n; Generate assets.ml from committed frontend/dist\n\n(rule\n (targets assets.ml)\n (deps\n  (glob_files frontend/dist/*.js)\n  (glob_files frontend/dist/*.css)\n  (glob_files frontend/fonts/*.woff2)\n  frontend/index.html\n  support/gen_assets.ml)\n (action\n  (run ocaml support/gen_assets.ml frontend assets.ml)))\n"
  },
  {
    "path": "packages/quill/lib/quill-server/frontend/.gitignore",
    "content": "/node_modules"
  },
  {
    "path": "packages/quill/lib/quill-server/frontend/css/notebook.css",
    "content": "/* ───── Fonts ───── */\n\n@font-face {\n  font-family: 'IBM Plex Mono';\n  font-style: normal;\n  font-weight: 400;\n  font-display: swap;\n  src: url('/assets/fonts/IBMPlexMono-Regular.woff2') format('woff2');\n}\n\n@font-face {\n  font-family: 'IBM Plex Mono';\n  font-style: italic;\n  font-weight: 400;\n  font-display: swap;\n  src: url('/assets/fonts/IBMPlexMono-Italic.woff2') format('woff2');\n}\n\n@font-face {\n  font-family: 'IBM Plex Mono';\n  font-style: normal;\n  font-weight: 500;\n  font-display: swap;\n  src: url('/assets/fonts/IBMPlexMono-Medium.woff2') format('woff2');\n}\n\n@font-face {\n  font-family: 'IBM Plex Mono';\n  font-style: normal;\n  font-weight: 600;\n  font-display: swap;\n  src: url('/assets/fonts/IBMPlexMono-SemiBold.woff2') format('woff2');\n}\n\n/* ───── Base ───── */\n\n:root {\n  --bg-primary: #18181e;\n  --bg-secondary: #1e1e26;\n  --bg-cell: #1e1e26;\n  --bg-cell-focused: #22222c;\n  --bg-toolbar: #141418;\n  --bg-output: #1a1a22;\n\n  --text-primary: #c8ccd4;\n  --text-secondary: #aab0ba;\n  --text-dim: #7a7e88;\n  --text-accent: #daa550;\n\n  --border-focused: #78788c;\n  --border-unfocused: #2a2a32;\n  --border-accent: #daa550;\n\n  --color-success: #4abe4a;\n  --color-error: #d26464;\n\n  --output-fg: #aab0b9;\n  --stderr-fg: #d2b464;\n  --error-fg: #d26464;\n  --error-bg: #2a1e1e;\n\n  --btn-bg: #2a2a34;\n  --btn-bg-hover: #34343e;\n  --btn-fg: #aab0ba;\n\n  --font-ui: 'IBM Plex Mono', 'SF Mono', 'Consolas', monospace;\n  --font-code: 'JetBrains Mono', 'SF Mono', 'Fira Code', 'Consolas', monospace;\n}\n\n* { box-sizing: border-box; margin: 0; padding: 0; }\n\n::selection {\n  background: rgba(218, 165, 80, 0.3);\n  color: inherit;\n}\n\nbody {\n  background: var(--bg-primary);\n  color: var(--text-primary);\n  font-family: var(--font-ui);\n  font-size: 13px;\n  line-height: 1.6;\n}\n\n:focus-visible {\n  outline: 1px solid var(--text-accent);\n  outline-offset: 1px;\n}\n\n/* ───── Toolbar ───── 
*/\n\n.toolbar {\n  position: sticky;\n  top: 0;\n  z-index: 100;\n  display: flex;\n  align-items: center;\n  justify-content: space-between;\n  padding: 8px 24px;\n  background: var(--bg-toolbar);\n  border-bottom: 1px solid var(--border-unfocused);\n}\n\n.toolbar-left, .toolbar-center, .toolbar-right {\n  display: flex;\n  align-items: center;\n  gap: 8px;\n}\n\n.notebook-title {\n  font-weight: 600;\n  color: var(--text-accent);\n  font-size: 14px;\n  letter-spacing: 0.02em;\n}\n\n.kernel-status {\n  display: flex;\n  align-items: center;\n  gap: 6px;\n  color: var(--text-dim);\n  font-size: 12px;\n}\n\n.status-dot {\n  width: 7px;\n  height: 7px;\n  border-radius: 50%;\n  background: var(--text-dim);\n  transition: background 0.3s;\n}\n\n.status-dot[data-status=\"connected\"] {\n  background: var(--color-success);\n}\n\n.status-dot[data-status=\"disconnected\"] {\n  background: var(--color-error);\n  animation: pulse 1.5s ease-in-out infinite;\n}\n\n.btn {\n  padding: 4px 12px;\n  border: 1px solid var(--border-unfocused);\n  border-radius: 4px;\n  background: var(--btn-bg);\n  color: var(--btn-fg);\n  font-family: var(--font-ui);\n  font-size: 12px;\n  cursor: pointer;\n  transition: background 0.15s, border-color 0.15s;\n}\n\n.btn:hover {\n  background: var(--btn-bg-hover);\n  border-color: var(--border-focused);\n}\n\n.btn-help {\n  font-weight: 600;\n  min-width: 28px;\n  text-align: center;\n}\n\n/* ───── Connection banner ───── */\n\n.connection-banner {\n  background: var(--error-bg);\n  color: var(--color-error);\n  text-align: center;\n  padding: 6px 24px;\n  font-size: 12px;\n  font-weight: 500;\n  display: none;\n  align-items: center;\n  justify-content: center;\n  gap: 8px;\n}\n\n.connection-banner.visible {\n  display: flex;\n}\n\n.connection-banner-dot {\n  width: 6px;\n  height: 6px;\n  border-radius: 50%;\n  background: var(--color-error);\n  animation: pulse 1.5s ease-in-out infinite;\n}\n\n/* ───── Layout ───── */\n\n.layout {\n  display: 
flex;\n}\n\n.layout.has-sidebar .notebook {\n  margin-left: 0;\n}\n\n/* ───── Sidebar ───── */\n\n.sidebar {\n  position: sticky;\n  top: 41px; /* toolbar height */\n  align-self: flex-start;\n  width: 240px;\n  min-width: 240px;\n  height: calc(100vh - 41px);\n  overflow-y: auto;\n  background: var(--bg-toolbar);\n  border-right: 1px solid var(--border-unfocused);\n  padding: 8px 0;\n}\n\n.sidebar[hidden] {\n  display: none;\n}\n\n.sidebar-section {\n  margin-bottom: 4px;\n}\n\n.sidebar-part {\n  font-size: 10px;\n  font-weight: 600;\n  color: var(--text-dim);\n  text-transform: uppercase;\n  letter-spacing: 0.06em;\n  padding: 12px 16px 4px;\n}\n\n.sidebar-separator {\n  height: 1px;\n  background: var(--border-unfocused);\n  margin: 8px 16px;\n}\n\n.sidebar-chapter {\n  display: block;\n  padding: 6px 16px;\n  color: var(--text-secondary);\n  text-decoration: none;\n  font-size: 12px;\n  transition: color 0.15s, background 0.15s;\n}\n\n.sidebar-chapter:hover {\n  color: var(--text-primary);\n  background: var(--bg-cell-focused);\n}\n\n.sidebar-chapter.active {\n  color: var(--text-accent);\n  background: var(--bg-secondary);\n  border-left: 2px solid var(--text-accent);\n  padding-left: 14px;\n}\n\n/* ───── Notebook ───── */\n\n.notebook {\n  max-width: 900px;\n  margin: 0 auto;\n  padding: 16px 24px 120px;\n  flex: 1;\n  min-width: 0;\n}\n\n/* ───── Loading skeleton ───── */\n\n.skeleton {\n  display: flex;\n  flex-direction: column;\n  gap: 8px;\n  padding-top: 8px;\n}\n\n.skeleton-cell {\n  height: 120px;\n  border-radius: 4px;\n  background: linear-gradient(\n    90deg,\n    var(--bg-cell) 0%,\n    var(--bg-cell-focused) 40%,\n    var(--bg-cell) 80%\n  );\n  background-size: 300% 100%;\n  animation: shimmer 1.8s ease-in-out infinite;\n}\n\n.skeleton-cell-short {\n  height: 56px;\n}\n\n@keyframes shimmer {\n  0% { background-position: 100% 0; }\n  100% { background-position: -100% 0; }\n}\n\n/* ───── Empty state ───── */\n\n.empty-state {\n  text-align: 
center;\n  padding: 80px 24px;\n  color: var(--text-dim);\n}\n\n.empty-state-icon {\n  font-size: 32px;\n  color: var(--text-accent);\n  opacity: 0.4;\n  margin-bottom: 16px;\n  letter-spacing: 0.1em;\n}\n\n.empty-state p {\n  margin: 4px 0;\n  font-size: 14px;\n}\n\n.empty-state-hint {\n  font-size: 12px;\n}\n\n.empty-state kbd {\n  display: inline-block;\n  padding: 1px 6px;\n  border: 1px solid var(--border-unfocused);\n  border-radius: 3px;\n  background: var(--btn-bg);\n  font-family: var(--font-ui);\n  font-size: 11px;\n}\n\n/* ───── Sections ───── */\n\n.notebook-section {\n  margin-bottom: 8px;\n}\n\n.section-header {\n  display: flex;\n  align-items: center;\n  gap: 8px;\n  padding: 4px 0;\n  cursor: pointer;\n  user-select: none;\n}\n\n.section-header h2 {\n  font-size: 12px;\n  font-weight: 600;\n  color: var(--text-dim);\n  text-transform: uppercase;\n  letter-spacing: 0.06em;\n}\n\n.section-toggle {\n  color: var(--text-dim);\n  font-size: 12px;\n  width: 16px;\n  text-align: center;\n}\n\n.notebook-section[data-collapsed=\"true\"] .section-body {\n  display: none;\n}\n\n.section-body {\n}\n\n/* ───── Cells ───── */\n\n.cell {\n  border-left: 3px solid transparent;\n  border-radius: 4px;\n  background: var(--bg-cell);\n  margin-bottom: 2px;\n  transition: border-color 0.15s, background 0.15s;\n}\n\n.cell.cell-text {\n  background: transparent;\n}\n\n.cell.focused {\n  border-left-color: var(--border-accent);\n  background: var(--bg-cell-focused);\n}\n\n.cell:hover {\n  background: var(--bg-cell-focused);\n}\n\n/* Execution completion flash */\n.cell.flash-success {\n  animation: flash-success 0.8s ease;\n}\n\n.cell.flash-error {\n  animation: flash-error 0.8s ease;\n}\n\n@keyframes flash-success {\n  0% { border-left-color: var(--color-success); box-shadow: inset 3px 0 8px -2px rgba(74, 190, 74, 0.3); }\n  100% { border-left-color: transparent; box-shadow: none; }\n}\n\n@keyframes flash-error {\n  0% { border-left-color: var(--color-error); box-shadow: 
inset 3px 0 8px -2px rgba(210, 100, 100, 0.3); }\n  100% { border-left-color: transparent; box-shadow: none; }\n}\n\n/* Keep accent border if cell is focused during flash */\n.cell.focused.flash-success {\n  animation: flash-success-focused 0.8s ease;\n}\n\n.cell.focused.flash-error {\n  animation: flash-error-focused 0.8s ease;\n}\n\n@keyframes flash-success-focused {\n  0% { border-left-color: var(--color-success); box-shadow: inset 3px 0 8px -2px rgba(74, 190, 74, 0.3); }\n  100% { border-left-color: var(--border-accent); box-shadow: none; }\n}\n\n@keyframes flash-error-focused {\n  0% { border-left-color: var(--color-error); box-shadow: inset 3px 0 8px -2px rgba(210, 100, 100, 0.3); }\n  100% { border-left-color: var(--border-accent); box-shadow: none; }\n}\n\n.cell-wrapper {\n  display: flex;\n}\n\n/* ───── Gutter ───── */\n\n.cell-gutter {\n  flex: 0 0 48px;\n  display: flex;\n  flex-direction: column;\n  align-items: center;\n  padding-top: 10px;\n  gap: 4px;\n}\n\n.cell-number {\n  font-family: var(--font-ui);\n  font-size: 11px;\n  color: var(--text-dim);\n}\n\n.cell-status-icon {\n  width: 12px;\n  height: 12px;\n  border-radius: 50%;\n  transition: background 0.3s;\n}\n\n/* Execution result indicators */\n.cell-status-icon[data-result=\"success\"] {\n  background: var(--color-success);\n  opacity: 0.7;\n}\n\n.cell-status-icon[data-result=\"error\"] {\n  background: var(--color-error);\n  opacity: 0.7;\n}\n\n.cell[data-status=\"running\"] .cell-status-icon {\n  border: 2px solid var(--text-accent);\n  border-top-color: transparent;\n  animation: spin 0.8s linear infinite;\n  background: transparent;\n  opacity: 1;\n}\n\n.cell[data-status=\"queued\"] .cell-status-icon {\n  background: var(--text-dim);\n  animation: pulse 1.5s ease-in-out infinite;\n  opacity: 1;\n}\n\n@keyframes spin {\n  to { transform: rotate(360deg); }\n}\n\n@keyframes pulse {\n  0%, 100% { opacity: 0.3; }\n  50% { opacity: 1; }\n}\n\n/* ───── Cell content ───── */\n\n.cell-content {\n  
flex: 1;\n  min-width: 0;\n  overflow: hidden;\n  padding: 4px 8px 4px 0;\n}\n\n.cell-editor {\n  border-radius: 3px;\n  overflow: hidden;\n}\n\n.cell-editor .cm-editor {\n  max-height: 600px;\n  overflow-y: auto;\n}\n\n/* ───── Cell actions ───── */\n\n.cell-actions {\n  display: flex;\n  gap: 4px;\n  padding: 4px 0;\n  opacity: 0;\n  transition: opacity 0.15s;\n}\n\n/* Show on hover OR when focused */\n.cell:hover .cell-actions,\n.cell.focused .cell-actions {\n  opacity: 1;\n}\n\n.cell-actions button {\n  padding: 2px 8px;\n  border: none;\n  border-radius: 3px;\n  background: transparent;\n  color: var(--text-dim);\n  font-family: var(--font-ui);\n  font-size: 11px;\n  cursor: pointer;\n  transition: color 0.15s, background 0.15s;\n}\n\n.cell-actions button:hover {\n  color: var(--text-primary);\n  background: var(--btn-bg);\n}\n\n/* ───── Outputs ───── */\n\n.cell-outputs {\n  padding: 0 8px;\n}\n\n.output {\n  font-family: var(--font-code);\n  font-size: 13px;\n  line-height: 1.5;\n  white-space: pre-wrap;\n  word-break: break-word;\n  padding: 6px 0;\n}\n\n.output-stdout {\n  color: var(--output-fg);\n}\n\n.output-stderr {\n  color: var(--stderr-fg);\n}\n\n.output-error {\n  color: var(--error-fg);\n  background: var(--error-bg);\n  border-radius: 4px;\n  padding: 8px 12px;\n  margin: 4px 0;\n}\n\n.output-error pre {\n  white-space: pre-wrap;\n  word-break: break-word;\n}\n\n.output-display img {\n  max-width: 100%;\n  border-radius: 4px;\n  margin: 4px 0;\n}\n\n.output-display .display-html {\n  white-space: normal;\n  word-break: normal;\n  overflow-x: auto;\n}\n\n.output-display table {\n  border-collapse: collapse;\n  font-family: var(--font-code);\n  font-size: 12px;\n  line-height: 1.4;\n  margin: 4px 0;\n}\n\n.output-display th {\n  padding: 5px 12px;\n  text-align: left;\n  font-weight: 600;\n  color: var(--text-primary);\n  border-bottom: 2px solid var(--border-unfocused);\n  white-space: nowrap;\n}\n\n.output-display td {\n  padding: 3px 12px;\n  
border-bottom: 1px solid var(--border-unfocused);\n  white-space: nowrap;\n}\n\n.output-display tbody tr:hover {\n  background: var(--bg-output);\n}\n\n.output-display p {\n  margin: 4px 0 0;\n  color: var(--text-dim);\n  font-size: 11px;\n}\n\n/* ───── Markdown cells ───── */\n\n.cell-markdown {\n  padding: 8px 12px;\n  cursor: text;\n  min-height: 32px;\n}\n\n.cell-markdown-empty {\n  color: var(--text-dim);\n  font-style: italic;\n}\n\n.cell-markdown h1 { font-size: 1.75em; font-weight: 600; margin: 0.5em 0 0.25em; color: var(--text-primary); }\n.cell-markdown h2 { font-size: 1.4em; font-weight: 600; margin: 0.5em 0 0.25em; color: var(--text-primary); }\n.cell-markdown h3 { font-size: 1.15em; font-weight: 600; margin: 0.5em 0 0.25em; color: var(--text-primary); }\n.cell-markdown p { margin: 0.4em 0; }\n.cell-markdown ul, .cell-markdown ol { padding-left: 1.5em; margin: 0.4em 0; }\n.cell-markdown li { margin: 0.2em 0; }\n.cell-markdown code {\n  background: var(--bg-output);\n  padding: 0.15em 0.4em;\n  border-radius: 3px;\n  font-family: var(--font-code);\n  font-size: 0.9em;\n}\n.cell-markdown pre {\n  background: var(--bg-output);\n  padding: 12px;\n  border-radius: 4px;\n  overflow-x: auto;\n  margin: 0.5em 0;\n}\n.cell-markdown pre code {\n  background: none;\n  padding: 0;\n}\n.cell-markdown img {\n  max-width: 100%;\n  height: auto;\n  border-radius: 4px;\n  margin: 0.5em 0;\n}\n.cell-markdown a { color: var(--text-accent); }\n.cell-markdown blockquote {\n  border-left: 3px solid var(--border-unfocused);\n  padding-left: 12px;\n  color: var(--text-secondary);\n  margin: 0.5em 0;\n}\n.cell-markdown table { border-collapse: collapse; margin: 0.5em 0; }\n.cell-markdown th, .cell-markdown td {\n  border: 1px solid var(--border-unfocused);\n  padding: 6px 12px;\n}\n.cell-markdown th { background: var(--bg-output); }\n.cell-markdown strong { color: var(--text-primary); }\n.cell-markdown math[display=\"block\"] { display: block; margin: 0.5em 0; }\n\n/* ───── 
Collapsed cells ───── */\n\n.cell-collapsed-bar {\n  display: flex;\n  align-items: center;\n  gap: 8px;\n  padding: 6px 8px;\n  color: var(--text-dim);\n  font-size: 12px;\n  cursor: pointer;\n  border-radius: 3px;\n  transition: color 0.15s;\n}\n\n.cell-collapsed-bar:hover {\n  color: var(--text-secondary);\n}\n\n.cell-collapsed-toggle {\n  font-size: 10px;\n  width: 12px;\n  text-align: center;\n}\n\n.cell-collapsed-source {\n  flex: 1;\n  overflow: hidden;\n  text-overflow: ellipsis;\n  white-space: nowrap;\n  font-family: var(--font-code);\n  font-size: 12px;\n}\n\n/* ───── Source-hidden cells ───── */\n\n.cell-source-placeholder {\n  display: flex;\n  align-items: center;\n  gap: 6px;\n  padding: 4px 8px;\n  color: var(--text-dim);\n  font-size: 11px;\n  cursor: pointer;\n  transition: color 0.15s;\n}\n\n.cell-source-placeholder:hover {\n  color: var(--text-secondary);\n}\n\n/* ───── Cell dividers ───── */\n\n.cell-divider {\n  display: flex;\n  align-items: center;\n  gap: 0;\n  padding: 2px 0;\n  min-height: 20px;\n}\n\n.divider-line {\n  flex: 1;\n  height: 1px;\n  background: var(--border-unfocused);\n  opacity: 0;\n  transition: opacity 0.2s;\n}\n\n.cell-divider:hover .divider-line {\n  opacity: 1;\n}\n\n.divider-buttons {\n  display: flex;\n  gap: 6px;\n  opacity: 0;\n  transition: opacity 0.2s;\n}\n\n.cell-divider:hover .divider-buttons {\n  opacity: 1;\n}\n\n.divider-buttons button {\n  padding: 2px 10px;\n  border: 1px dashed var(--border-unfocused);\n  border-radius: 3px;\n  background: transparent;\n  color: var(--text-dim);\n  font-family: var(--font-ui);\n  font-size: 11px;\n  cursor: pointer;\n  transition: color 0.15s, border-color 0.15s;\n}\n\n.divider-buttons button:hover {\n  color: var(--text-accent);\n  border-color: var(--text-accent);\n}\n\n/* ───── Text cell: no gutter ───── */\n\n.cell-text .cell-gutter {\n  display: none;\n}\n\n/* ───── Toast notifications ───── */\n\n.toast-container {\n  position: fixed;\n  bottom: 24px;\n  right: 
24px;\n  z-index: 200;\n  display: flex;\n  flex-direction: column-reverse;\n  gap: 8px;\n  pointer-events: none;\n}\n\n.toast {\n  padding: 8px 16px;\n  border-radius: 4px;\n  border-left: 3px solid var(--text-dim);\n  background: var(--bg-toolbar);\n  color: var(--text-primary);\n  font-family: var(--font-ui);\n  font-size: 12px;\n  pointer-events: auto;\n  opacity: 0;\n  transform: translateY(8px);\n  transition: opacity 0.25s, transform 0.25s;\n  box-shadow: 0 4px 16px rgba(0, 0, 0, 0.4);\n}\n\n.toast.toast-visible {\n  opacity: 1;\n  transform: translateY(0);\n}\n\n.toast-success {\n  border-left-color: var(--color-success);\n}\n\n.toast-error {\n  border-left-color: var(--color-error);\n}\n\n.toast-info {\n  border-left-color: var(--text-accent);\n}\n\n/* ───── Shortcuts dialog ───── */\n\n.dialog-backdrop {\n  position: fixed;\n  inset: 0;\n  z-index: 300;\n  background: rgba(0, 0, 0, 0.6);\n  display: flex;\n  align-items: center;\n  justify-content: center;\n}\n\n.dialog-backdrop[hidden] {\n  display: none;\n}\n\n.dialog {\n  background: var(--bg-secondary);\n  border: 1px solid var(--border-unfocused);\n  border-radius: 8px;\n  max-width: 600px;\n  width: 90%;\n  max-height: 80vh;\n  overflow-y: auto;\n  box-shadow: 0 16px 48px rgba(0, 0, 0, 0.5);\n}\n\n.dialog-header {\n  display: flex;\n  align-items: center;\n  justify-content: space-between;\n  padding: 16px 20px 12px;\n  border-bottom: 1px solid var(--border-unfocused);\n}\n\n.dialog-header h3 {\n  font-size: 14px;\n  font-weight: 600;\n  color: var(--text-primary);\n}\n\n.dialog-close {\n  border: none;\n  background: none;\n  color: var(--text-dim);\n  font-size: 18px;\n  cursor: pointer;\n  padding: 0 4px;\n  line-height: 1;\n}\n\n.dialog-close:hover {\n  color: var(--text-primary);\n}\n\n.dialog-body {\n  padding: 16px 20px 20px;\n}\n\n.shortcuts-grid {\n  display: grid;\n  grid-template-columns: 1fr 1fr;\n  gap: 20px;\n}\n\n.shortcut-group h4 {\n  font-size: 11px;\n  font-weight: 600;\n  color: 
var(--text-accent);\n  text-transform: uppercase;\n  letter-spacing: 0.06em;\n  margin-bottom: 8px;\n}\n\n.shortcut-group dl {\n  display: flex;\n  flex-direction: column;\n  gap: 4px;\n}\n\n.shortcut-group dl > div {\n  display: flex;\n  align-items: center;\n  gap: 8px;\n}\n\n.shortcut-group dt {\n  flex: 0 0 auto;\n  min-width: 100px;\n  text-align: right;\n  white-space: nowrap;\n}\n\n.shortcut-group dd {\n  color: var(--text-secondary);\n  font-size: 12px;\n}\n\n.shortcut-group kbd {\n  display: inline-block;\n  padding: 1px 5px;\n  border: 1px solid var(--border-focused);\n  border-radius: 3px;\n  background: var(--btn-bg);\n  font-family: var(--font-ui);\n  font-size: 11px;\n  color: var(--text-primary);\n  box-shadow: 0 1px 0 var(--border-unfocused);\n}\n\n/* ───── Page TOC (right side) ───── */\n\n.page-toc {\n  display: none;\n}\n\n@media (min-width: 1200px) {\n  .page-toc {\n    display: block;\n    position: sticky;\n    top: 57px; /* toolbar height + 16px */\n    align-self: flex-start;\n    width: 200px;\n    min-width: 200px;\n    max-height: calc(100vh - 57px);\n    overflow-y: auto;\n    padding: 16px 16px 16px 0;\n    font-size: 12px;\n  }\n\n  .page-toc[hidden] {\n    display: none;\n  }\n}\n\n.page-toc-title {\n  font-size: 10px;\n  font-weight: 600;\n  color: var(--text-dim);\n  text-transform: uppercase;\n  letter-spacing: 0.06em;\n  margin-bottom: 8px;\n}\n\n.page-toc ul {\n  list-style: none;\n  border-left: 1px solid var(--border-unfocused);\n  padding-left: 0;\n}\n\n.page-toc li {\n  padding: 3px 0 3px 12px;\n}\n\n.page-toc li.toc-h3 {\n  padding-left: 24px;\n}\n\n.page-toc a {\n  color: var(--text-dim);\n  text-decoration: none;\n  transition: color 0.15s;\n}\n\n.page-toc a:hover {\n  color: var(--text-accent);\n}\n\n/* ───── Chapter navigation ───── */\n\n.chapter-nav {\n  display: flex;\n  justify-content: space-between;\n  padding: 16px 24px;\n  border-top: 1px solid var(--border-unfocused);\n  margin-top: 24px;\n  font-size: 
12px;\n}\n\n.chapter-nav-link {\n  display: flex;\n  flex-direction: column;\n  gap: 2px;\n  text-decoration: none;\n  color: var(--text-secondary);\n  transition: color 0.15s;\n}\n\n.chapter-nav-link:hover {\n  color: var(--text-accent);\n}\n\n.chapter-nav-next {\n  text-align: right;\n  margin-left: auto;\n}\n\n.chapter-nav-dir {\n  font-size: 11px;\n  color: var(--text-dim);\n}\n\n.chapter-nav-link:hover .chapter-nav-dir {\n  color: var(--text-accent);\n}\n\n.chapter-nav-title {\n  font-weight: 500;\n}\n\n/* ───── Reduced motion ───── */\n\n@media (prefers-reduced-motion: reduce) {\n  *, *::before, *::after {\n    animation-duration: 0.01ms !important;\n    transition-duration: 0.01ms !important;\n  }\n}\n"
  },
  {
    "path": "packages/quill/lib/quill-server/frontend/dist/app.css",
    "content": "@font-face{font-family:IBM Plex Mono;font-style:normal;font-weight:400;font-display:swap;src:url(/assets/fonts/IBMPlexMono-Regular.woff2) format(\"woff2\")}@font-face{font-family:IBM Plex Mono;font-style:italic;font-weight:400;font-display:swap;src:url(/assets/fonts/IBMPlexMono-Italic.woff2) format(\"woff2\")}@font-face{font-family:IBM Plex Mono;font-style:normal;font-weight:500;font-display:swap;src:url(/assets/fonts/IBMPlexMono-Medium.woff2) format(\"woff2\")}@font-face{font-family:IBM Plex Mono;font-style:normal;font-weight:600;font-display:swap;src:url(/assets/fonts/IBMPlexMono-SemiBold.woff2) format(\"woff2\")}:root{--bg-primary: #18181e;--bg-secondary: #1e1e26;--bg-cell: #1e1e26;--bg-cell-focused: #22222c;--bg-toolbar: #141418;--bg-output: #1a1a22;--text-primary: #c8ccd4;--text-secondary: #aab0ba;--text-dim: #7a7e88;--text-accent: #daa550;--border-focused: #78788c;--border-unfocused: #2a2a32;--border-accent: #daa550;--color-success: #4abe4a;--color-error: #d26464;--output-fg: #aab0b9;--stderr-fg: #d2b464;--error-fg: #d26464;--error-bg: #2a1e1e;--btn-bg: #2a2a34;--btn-bg-hover: #34343e;--btn-fg: #aab0ba;--font-ui: \"IBM Plex Mono\", \"SF Mono\", \"Consolas\", monospace;--font-code: \"JetBrains Mono\", \"SF Mono\", \"Fira Code\", \"Consolas\", monospace}*{box-sizing:border-box;margin:0;padding:0}::selection{background:#daa5504d;color:inherit}body{background:var(--bg-primary);color:var(--text-primary);font-family:var(--font-ui);font-size:13px;line-height:1.6}:focus-visible{outline:1px solid var(--text-accent);outline-offset:1px}.toolbar{position:sticky;top:0;z-index:100;display:flex;align-items:center;justify-content:space-between;padding:8px 24px;background:var(--bg-toolbar);border-bottom:1px solid 
var(--border-unfocused)}.toolbar-left,.toolbar-center,.toolbar-right{display:flex;align-items:center;gap:8px}.notebook-title{font-weight:600;color:var(--text-accent);font-size:14px;letter-spacing:.02em}.kernel-status{display:flex;align-items:center;gap:6px;color:var(--text-dim);font-size:12px}.status-dot{width:7px;height:7px;border-radius:50%;background:var(--text-dim);transition:background .3s}.status-dot[data-status=connected]{background:var(--color-success)}.status-dot[data-status=disconnected]{background:var(--color-error);animation:pulse 1.5s ease-in-out infinite}.btn{padding:4px 12px;border:1px solid var(--border-unfocused);border-radius:4px;background:var(--btn-bg);color:var(--btn-fg);font-family:var(--font-ui);font-size:12px;cursor:pointer;transition:background .15s,border-color .15s}.btn:hover{background:var(--btn-bg-hover);border-color:var(--border-focused)}.btn-help{font-weight:600;min-width:28px;text-align:center}.connection-banner{background:var(--error-bg);color:var(--color-error);text-align:center;padding:6px 24px;font-size:12px;font-weight:500;display:none;align-items:center;justify-content:center;gap:8px}.connection-banner.visible{display:flex}.connection-banner-dot{width:6px;height:6px;border-radius:50%;background:var(--color-error);animation:pulse 1.5s ease-in-out infinite}.layout{display:flex}.layout.has-sidebar .notebook{margin-left:0}.sidebar{position:sticky;top:41px;align-self:flex-start;width:240px;min-width:240px;height:calc(100vh - 41px);overflow-y:auto;background:var(--bg-toolbar);border-right:1px solid var(--border-unfocused);padding:8px 0}.sidebar[hidden]{display:none}.sidebar-section{margin-bottom:4px}.sidebar-part{font-size:10px;font-weight:600;color:var(--text-dim);text-transform:uppercase;letter-spacing:.06em;padding:12px 16px 4px}.sidebar-separator{height:1px;background:var(--border-unfocused);margin:8px 16px}.sidebar-chapter{display:block;padding:6px 
16px;color:var(--text-secondary);text-decoration:none;font-size:12px;transition:color .15s,background .15s}.sidebar-chapter:hover{color:var(--text-primary);background:var(--bg-cell-focused)}.sidebar-chapter.active{color:var(--text-accent);background:var(--bg-secondary);border-left:2px solid var(--text-accent);padding-left:14px}.notebook{max-width:900px;margin:0 auto;padding:16px 24px 120px;flex:1;min-width:0}.skeleton{display:flex;flex-direction:column;gap:8px;padding-top:8px}.skeleton-cell{height:120px;border-radius:4px;background:linear-gradient(90deg,var(--bg-cell) 0%,var(--bg-cell-focused) 40%,var(--bg-cell) 80%);background-size:300% 100%;animation:shimmer 1.8s ease-in-out infinite}.skeleton-cell-short{height:56px}@keyframes shimmer{0%{background-position:100% 0}to{background-position:-100% 0}}.empty-state{text-align:center;padding:80px 24px;color:var(--text-dim)}.empty-state-icon{font-size:32px;color:var(--text-accent);opacity:.4;margin-bottom:16px;letter-spacing:.1em}.empty-state p{margin:4px 0;font-size:14px}.empty-state-hint{font-size:12px}.empty-state kbd{display:inline-block;padding:1px 6px;border:1px solid var(--border-unfocused);border-radius:3px;background:var(--btn-bg);font-family:var(--font-ui);font-size:11px}.notebook-section{margin-bottom:8px}.section-header{display:flex;align-items:center;gap:8px;padding:4px 0;cursor:pointer;user-select:none}.section-header h2{font-size:12px;font-weight:600;color:var(--text-dim);text-transform:uppercase;letter-spacing:.06em}.section-toggle{color:var(--text-dim);font-size:12px;width:16px;text-align:center}.notebook-section[data-collapsed=true] .section-body{display:none}.cell{border-left:3px solid transparent;border-radius:4px;background:var(--bg-cell);margin-bottom:2px;transition:border-color .15s,background 
.15s}.cell.cell-text{background:transparent}.cell.focused{border-left-color:var(--border-accent);background:var(--bg-cell-focused)}.cell:hover{background:var(--bg-cell-focused)}.cell.flash-success{animation:flash-success .8s ease}.cell.flash-error{animation:flash-error .8s ease}@keyframes flash-success{0%{border-left-color:var(--color-success);box-shadow:inset 3px 0 8px -2px #4abe4a4d}to{border-left-color:transparent;box-shadow:none}}@keyframes flash-error{0%{border-left-color:var(--color-error);box-shadow:inset 3px 0 8px -2px #d264644d}to{border-left-color:transparent;box-shadow:none}}.cell.focused.flash-success{animation:flash-success-focused .8s ease}.cell.focused.flash-error{animation:flash-error-focused .8s ease}@keyframes flash-success-focused{0%{border-left-color:var(--color-success);box-shadow:inset 3px 0 8px -2px #4abe4a4d}to{border-left-color:var(--border-accent);box-shadow:none}}@keyframes flash-error-focused{0%{border-left-color:var(--color-error);box-shadow:inset 3px 0 8px -2px #d264644d}to{border-left-color:var(--border-accent);box-shadow:none}}.cell-wrapper{display:flex}.cell-gutter{flex:0 0 48px;display:flex;flex-direction:column;align-items:center;padding-top:10px;gap:4px}.cell-number{font-family:var(--font-ui);font-size:11px;color:var(--text-dim)}.cell-status-icon{width:12px;height:12px;border-radius:50%;transition:background .3s}.cell-status-icon[data-result=success]{background:var(--color-success);opacity:.7}.cell-status-icon[data-result=error]{background:var(--color-error);opacity:.7}.cell[data-status=running] .cell-status-icon{border:2px solid var(--text-accent);border-top-color:transparent;animation:spin .8s linear infinite;background:transparent;opacity:1}.cell[data-status=queued] .cell-status-icon{background:var(--text-dim);animation:pulse 1.5s ease-in-out infinite;opacity:1}@keyframes spin{to{transform:rotate(360deg)}}@keyframes pulse{0%,to{opacity:.3}50%{opacity:1}}.cell-content{flex:1;min-width:0;overflow:hidden;padding:4px 8px 4px 
0}.cell-editor{border-radius:3px;overflow:hidden}.cell-editor .cm-editor{max-height:600px;overflow-y:auto}.cell-actions{display:flex;gap:4px;padding:4px 0;opacity:0;transition:opacity .15s}.cell:hover .cell-actions,.cell.focused .cell-actions{opacity:1}.cell-actions button{padding:2px 8px;border:none;border-radius:3px;background:transparent;color:var(--text-dim);font-family:var(--font-ui);font-size:11px;cursor:pointer;transition:color .15s,background .15s}.cell-actions button:hover{color:var(--text-primary);background:var(--btn-bg)}.cell-outputs{padding:0 8px}.output{font-family:var(--font-code);font-size:13px;line-height:1.5;white-space:pre-wrap;word-break:break-word;padding:6px 0}.output-stdout{color:var(--output-fg)}.output-stderr{color:var(--stderr-fg)}.output-error{color:var(--error-fg);background:var(--error-bg);border-radius:4px;padding:8px 12px;margin:4px 0}.output-error pre{white-space:pre-wrap;word-break:break-word}.output-display img{max-width:100%;border-radius:4px;margin:4px 0}.output-display .display-html{white-space:normal;word-break:normal;overflow-x:auto}.output-display table{border-collapse:collapse;font-family:var(--font-code);font-size:12px;line-height:1.4;margin:4px 0}.output-display th{padding:5px 12px;text-align:left;font-weight:600;color:var(--text-primary);border-bottom:2px solid var(--border-unfocused);white-space:nowrap}.output-display td{padding:3px 12px;border-bottom:1px solid var(--border-unfocused);white-space:nowrap}.output-display tbody tr:hover{background:var(--bg-output)}.output-display p{margin:4px 0 0;color:var(--text-dim);font-size:11px}.cell-markdown{padding:8px 12px;cursor:text;min-height:32px}.cell-markdown-empty{color:var(--text-dim);font-style:italic}.cell-markdown h1{font-size:1.75em;font-weight:600;margin:.5em 0 .25em;color:var(--text-primary)}.cell-markdown h2{font-size:1.4em;font-weight:600;margin:.5em 0 .25em;color:var(--text-primary)}.cell-markdown h3{font-size:1.15em;font-weight:600;margin:.5em 0 
.25em;color:var(--text-primary)}.cell-markdown p{margin:.4em 0}.cell-markdown ul,.cell-markdown ol{padding-left:1.5em;margin:.4em 0}.cell-markdown li{margin:.2em 0}.cell-markdown code{background:var(--bg-output);padding:.15em .4em;border-radius:3px;font-family:var(--font-code);font-size:.9em}.cell-markdown pre{background:var(--bg-output);padding:12px;border-radius:4px;overflow-x:auto;margin:.5em 0}.cell-markdown pre code{background:none;padding:0}.cell-markdown img{max-width:100%;height:auto;border-radius:4px;margin:.5em 0}.cell-markdown a{color:var(--text-accent)}.cell-markdown blockquote{border-left:3px solid var(--border-unfocused);padding-left:12px;color:var(--text-secondary);margin:.5em 0}.cell-markdown table{border-collapse:collapse;margin:.5em 0}.cell-markdown th,.cell-markdown td{border:1px solid var(--border-unfocused);padding:6px 12px}.cell-markdown th{background:var(--bg-output)}.cell-markdown strong{color:var(--text-primary)}.cell-markdown math[display=block]{display:block;margin:.5em 0}.cell-collapsed-bar{display:flex;align-items:center;gap:8px;padding:6px 8px;color:var(--text-dim);font-size:12px;cursor:pointer;border-radius:3px;transition:color .15s}.cell-collapsed-bar:hover{color:var(--text-secondary)}.cell-collapsed-toggle{font-size:10px;width:12px;text-align:center}.cell-collapsed-source{flex:1;overflow:hidden;text-overflow:ellipsis;white-space:nowrap;font-family:var(--font-code);font-size:12px}.cell-source-placeholder{display:flex;align-items:center;gap:6px;padding:4px 8px;color:var(--text-dim);font-size:11px;cursor:pointer;transition:color .15s}.cell-source-placeholder:hover{color:var(--text-secondary)}.cell-divider{display:flex;align-items:center;gap:0;padding:2px 0;min-height:20px}.divider-line{flex:1;height:1px;background:var(--border-unfocused);opacity:0;transition:opacity .2s}.cell-divider:hover .divider-line{opacity:1}.divider-buttons{display:flex;gap:6px;opacity:0;transition:opacity .2s}.cell-divider:hover 
.divider-buttons{opacity:1}.divider-buttons button{padding:2px 10px;border:1px dashed var(--border-unfocused);border-radius:3px;background:transparent;color:var(--text-dim);font-family:var(--font-ui);font-size:11px;cursor:pointer;transition:color .15s,border-color .15s}.divider-buttons button:hover{color:var(--text-accent);border-color:var(--text-accent)}.cell-text .cell-gutter{display:none}.toast-container{position:fixed;bottom:24px;right:24px;z-index:200;display:flex;flex-direction:column-reverse;gap:8px;pointer-events:none}.toast{padding:8px 16px;border-radius:4px;border-left:3px solid var(--text-dim);background:var(--bg-toolbar);color:var(--text-primary);font-family:var(--font-ui);font-size:12px;pointer-events:auto;opacity:0;transform:translateY(8px);transition:opacity .25s,transform .25s;box-shadow:0 4px 16px #0006}.toast.toast-visible{opacity:1;transform:translateY(0)}.toast-success{border-left-color:var(--color-success)}.toast-error{border-left-color:var(--color-error)}.toast-info{border-left-color:var(--text-accent)}.dialog-backdrop{position:fixed;inset:0;z-index:300;background:#0009;display:flex;align-items:center;justify-content:center}.dialog-backdrop[hidden]{display:none}.dialog{background:var(--bg-secondary);border:1px solid var(--border-unfocused);border-radius:8px;max-width:600px;width:90%;max-height:80vh;overflow-y:auto;box-shadow:0 16px 48px #00000080}.dialog-header{display:flex;align-items:center;justify-content:space-between;padding:16px 20px 12px;border-bottom:1px solid var(--border-unfocused)}.dialog-header h3{font-size:14px;font-weight:600;color:var(--text-primary)}.dialog-close{border:none;background:none;color:var(--text-dim);font-size:18px;cursor:pointer;padding:0 4px;line-height:1}.dialog-close:hover{color:var(--text-primary)}.dialog-body{padding:16px 20px 20px}.shortcuts-grid{display:grid;grid-template-columns:1fr 1fr;gap:20px}.shortcut-group 
h4{font-size:11px;font-weight:600;color:var(--text-accent);text-transform:uppercase;letter-spacing:.06em;margin-bottom:8px}.shortcut-group dl{display:flex;flex-direction:column;gap:4px}.shortcut-group dl>div{display:flex;align-items:center;gap:8px}.shortcut-group dt{flex:0 0 auto;min-width:100px;text-align:right;white-space:nowrap}.shortcut-group dd{color:var(--text-secondary);font-size:12px}.shortcut-group kbd{display:inline-block;padding:1px 5px;border:1px solid var(--border-focused);border-radius:3px;background:var(--btn-bg);font-family:var(--font-ui);font-size:11px;color:var(--text-primary);box-shadow:0 1px 0 var(--border-unfocused)}.page-toc{display:none}@media (min-width: 1200px){.page-toc{display:block;position:sticky;top:57px;align-self:flex-start;width:200px;min-width:200px;max-height:calc(100vh - 57px);overflow-y:auto;padding:16px 16px 16px 0;font-size:12px}.page-toc[hidden]{display:none}}.page-toc-title{font-size:10px;font-weight:600;color:var(--text-dim);text-transform:uppercase;letter-spacing:.06em;margin-bottom:8px}.page-toc ul{list-style:none;border-left:1px solid var(--border-unfocused);padding-left:0}.page-toc li{padding:3px 0 3px 12px}.page-toc li.toc-h3{padding-left:24px}.page-toc a{color:var(--text-dim);text-decoration:none;transition:color .15s}.page-toc a:hover{color:var(--text-accent)}.chapter-nav{display:flex;justify-content:space-between;padding:16px 24px;border-top:1px solid var(--border-unfocused);margin-top:24px;font-size:12px}.chapter-nav-link{display:flex;flex-direction:column;gap:2px;text-decoration:none;color:var(--text-secondary);transition:color .15s}.chapter-nav-link:hover{color:var(--text-accent)}.chapter-nav-next{text-align:right;margin-left:auto}.chapter-nav-dir{font-size:11px;color:var(--text-dim)}.chapter-nav-link:hover .chapter-nav-dir{color:var(--text-accent)}.chapter-nav-title{font-weight:500}@media (prefers-reduced-motion: 
reduce){*,*:before,*:after{animation-duration:.01ms!important;transition-duration:.01ms!important}}\n"
  },
  {
    "path": "packages/quill/lib/quill-server/frontend/dist/app.js",
    "content": "var qn=class{constructor(){this.cells=[],this.focusedCellId=null,this.kernelStatus=\"connecting\",this.canUndo=!1,this.canRedo=!1,this.loaded=!1,this._listeners=new Map}on(e,t){this._listeners.has(e)||this._listeners.set(e,[]),this._listeners.get(e).push(t)}off(e,t){let r=this._listeners.get(e);if(r){let n=r.indexOf(t);n!==-1&&r.splice(n,1)}}emit(e,t){let r=this._listeners.get(e);r&&r.forEach(n=>n(t))}loadNotebook(e){if(this.cells=e.cells,this.canUndo=e.can_undo,this.canRedo=e.can_redo,this.loaded=!0,!this.focusedCellId||!this.cells.find(t=>t.id===this.focusedCellId)){let t=this.cells.find(r=>r.kind===\"code\");this.focusedCellId=t?t.id:this.cells.length>0?this.cells[0].id:null}this.emit(\"notebook:loaded\",this.cells)}findCell(e){return this.cells.find(t=>t.id===e)}findCellIndex(e){return this.cells.findIndex(t=>t.id===e)}setCellStatus(e,t){let r=this.findCell(e);r&&(r.status=t,this.emit(\"cell:status\",{cellId:e,status:t}))}finishExecution(e,t){let r=this.findCell(e);r&&(r.lastRunSuccess=t,r.status=\"idle\",this.emit(\"cell:execution-done\",{cellId:e,success:t}))}appendOutput(e,t){let r=this.findCell(e);r&&(r.outputs||(r.outputs=[]),r.outputs.push(t),this.emit(\"cell:output\",{cellId:e,output:t}))}clearOutputs(e){let t=this.findCell(e);t&&t.outputs&&(t.outputs=[],this.emit(\"cell:outputs-cleared\",{cellId:e}))}updateCell(e,t){let r=this.findCellIndex(e);if(r!==-1){let n=this.cells[r];n.lastRunSuccess!==void 0&&(t.lastRunSuccess=n.lastRunSuccess),this.cells[r]=t,this.emit(\"cell:updated\",{cellId:e,cell:t})}}insertCell(e,t){this.cells.splice(e,0,t),this.emit(\"cell:inserted\",{pos:e,cell:t})}deleteCell(e){let t=this.findCellIndex(e);if(t!==-1&&(this.cells.splice(t,1),this.emit(\"cell:deleted\",{cellId:e}),this.focusedCellId===e))if(this.cells.length>0){let r=Math.min(t,this.cells.length-1);this.setFocus(this.cells[r].id)}else this.focusedCellId=null}moveCell(e,t){let 
r=this.findCellIndex(e);if(r!==-1){let[n]=this.cells.splice(r,1);this.cells.splice(t,0,n),this.emit(\"cell:moved\",{cellId:e,pos:t})}}setUndoRedo(e,t){this.canUndo=e,this.canRedo=t,this.emit(\"undo-redo:changed\",{canUndo:e,canRedo:t})}setConnectionStatus(e){this.kernelStatus=e,this.emit(\"connection:changed\",{status:e})}setFocus(e){let t=this.focusedCellId;this.focusedCellId=e,this.emit(\"focus:changed\",{cellId:e,prevCellId:t})}clearFocus(){if(this.focusedCellId){let e=this.focusedCellId;this.focusedCellId=null,this.emit(\"focus:changed\",{cellId:null,prevCellId:e})}}focusNext(){let e=this.findCellIndex(this.focusedCellId);e<this.cells.length-1&&this.setFocus(this.cells[e+1].id)}focusPrev(){let e=this.findCellIndex(this.focusedCellId);e>0&&this.setFocus(this.cells[e-1].id)}};var Wn=class{constructor(e){this.store=e,this.ws=null,this.reconnectDelay=1e3,this._pendingCompletions=new Map,this._pendingTypeAt=new Map,this._pendingDiagnostics=new Map,this._requestCounter=0,this._sourceDebounceTimers=new Map,this.chapterPath=null}connect(){let t=`${location.protocol===\"https:\"?\"wss:\":\"ws:\"}//${location.host}/ws`;this.chapterPath&&(t+=`?path=${encodeURIComponent(this.chapterPath)}`),this.ws=new WebSocket(t),this.ws.onopen=()=>{let r=this.reconnectDelay>1e3;this.reconnectDelay=1e3,this.store.setConnectionStatus(\"connected\"),r&&this.store.emit(\"reconnected\")},this.ws.onmessage=r=>{try{let n=JSON.parse(r.data);console.debug(\"[ws]\",n.type,n.cell_id||\"\"),this._onMessage(n)}catch(n){console.error(\"[ws] message 
error:\",n,r.data.slice(0,200))}},this.ws.onclose=()=>{this.ws=null,this.store.setConnectionStatus(\"disconnected\"),setTimeout(()=>this.reconnect(),this.reconnectDelay)},this.ws.onerror=()=>{this.ws&&this.ws.close()}}reconnect(){this.reconnectDelay=Math.min(this.reconnectDelay*2,3e4),this.connect()}send(e){this.ws&&this.ws.readyState===WebSocket.OPEN&&this.ws.send(JSON.stringify(e))}_onMessage(e){switch(e.type){case\"notebook\":this.store.loadNotebook(e);break;case\"cell_status\":this.store.setCellStatus(e.cell_id,e.status);break;case\"cell_output\":this.store.appendOutput(e.cell_id,e.output);break;case\"cell_updated\":{let t=this.store.findCell(e.cell_id),r=t&&(t.status===\"running\"||t.status===\"queued\");if(this.store.updateCell(e.cell_id,e.cell),r&&e.cell.status===\"idle\"){let n=e.cell.outputs&&e.cell.outputs.some(s=>s.kind===\"error\");this.store.finishExecution(e.cell_id,!n)}break}case\"cell_inserted\":this.store.insertCell(e.pos,e.cell);break;case\"cell_deleted\":this.store.deleteCell(e.cell_id);break;case\"cell_moved\":this.store.moveCell(e.cell_id,e.pos);break;case\"completions\":{let t=this._pendingCompletions.get(e.request_id);t&&(this._pendingCompletions.delete(e.request_id),t(e.items));break}case\"type_at\":{let t=this._pendingTypeAt.get(e.request_id);t&&(this._pendingTypeAt.delete(e.request_id),t(e));break}case\"diagnostics\":{let t=this._pendingDiagnostics.get(e.request_id);t&&(this._pendingDiagnostics.delete(e.request_id),t(e));break}case\"saved\":this.store.emit(\"saved\");break;case\"undo_redo\":this.store.setUndoRedo(e.can_undo,e.can_redo);break;case\"error\":this.store.emit(\"error\",{message:e.message});break}}updateSource(e,t){let r=this._sourceDebounceTimers.get(e);r&&clearTimeout(r),this._sourceDebounceTimers.set(e,setTimeout(()=>{this._sourceDebounceTimers.delete(e),this.send({type:\"update_source\",cell_id:e,source:t})},150))}cancelPendingSource(e){let 
t=this._sourceDebounceTimers.get(e);t&&(clearTimeout(t),this._sourceDebounceTimers.delete(e))}checkpoint(){this.send({type:\"checkpoint\"})}executeCell(e){this.cancelPendingSource(e);let t=this.store.findCell(e);t&&this.send({type:\"update_source\",cell_id:e,source:t.source}),this.send({type:\"execute_cell\",cell_id:e})}executeCells(e){for(let t of e){this.cancelPendingSource(t);let r=this.store.findCell(t);r&&this.send({type:\"update_source\",cell_id:t,source:r.source})}this.send({type:\"execute_cells\",cell_ids:e})}executeAll(){for(let[e,t]of this._sourceDebounceTimers){clearTimeout(t);let r=this.store.findCell(e);r&&this.send({type:\"update_source\",cell_id:e,source:r.source})}this._sourceDebounceTimers.clear(),this.send({type:\"execute_all\"})}interrupt(){this.send({type:\"interrupt\"})}insertCell(e,t){this.send({type:\"insert_cell\",pos:e,kind:t})}deleteCell(e){this.send({type:\"delete_cell\",cell_id:e})}moveCell(e,t){this.send({type:\"move_cell\",cell_id:e,pos:t})}setCellKind(e,t){this.send({type:\"set_cell_kind\",cell_id:e,kind:t})}setCellAttrs(e,t){this.send({type:\"set_cell_attrs\",cell_id:e,...t})}clearOutputs(e){this.send({type:\"clear_outputs\",cell_id:e})}clearAllOutputs(){this.send({type:\"clear_all_outputs\"})}save(){this.send({type:\"save\"})}undo(){this.send({type:\"undo\"})}redo(){this.send({type:\"redo\"})}complete(e,t){let r=`req_${++this._requestCounter}`;return new Promise(n=>{this._pendingCompletions.set(r,n),this.send({type:\"complete\",request_id:r,code:e,pos:t}),setTimeout(()=>{this._pendingCompletions.has(r)&&(this._pendingCompletions.delete(r),n([]))},3e3)})}typeAt(e,t){let r=`req_${++this._requestCounter}`;return new Promise(n=>{this._pendingTypeAt.set(r,n),this.send({type:\"type_at\",request_id:r,code:e,pos:t}),setTimeout(()=>{this._pendingTypeAt.has(r)&&(this._pendingTypeAt.delete(r),n(null))},3e3)})}diagnostics(e){let t=`req_${++this._requestCounter}`;return new 
Promise(r=>{this._pendingDiagnostics.set(t,r),this.send({type:\"diagnostics\",request_id:t,code:e}),setTimeout(()=>{this._pendingDiagnostics.has(t)&&(this._pendingDiagnostics.delete(t),r({items:[]}))},3e3)})}};var rl=[],Rh=[];(()=>{let i=\"lc,34,7n,7,7b,19,,,,2,,2,,,20,b,1c,l,g,,2t,7,2,6,2,2,,4,z,,u,r,2j,b,1m,9,9,,o,4,,9,,3,,5,17,3,3b,f,,w,1j,,,,4,8,4,,3,7,a,2,t,,1m,,,,2,4,8,,9,,a,2,q,,2,2,1l,,4,2,4,2,2,3,3,,u,2,3,,b,2,1l,,4,5,,2,4,,k,2,m,6,,,1m,,,2,,4,8,,7,3,a,2,u,,1n,,,,c,,9,,14,,3,,1l,3,5,3,,4,7,2,b,2,t,,1m,,2,,2,,3,,5,2,7,2,b,2,s,2,1l,2,,,2,4,8,,9,,a,2,t,,20,,4,,2,3,,,8,,29,,2,7,c,8,2q,,2,9,b,6,22,2,r,,,,,,1j,e,,5,,2,5,b,,10,9,,2u,4,,6,,2,2,2,p,2,4,3,g,4,d,,2,2,6,,f,,jj,3,qa,3,t,3,t,2,u,2,1s,2,,7,8,,2,b,9,,19,3,3b,2,y,,3a,3,4,2,9,,6,3,63,2,2,,1m,,,7,,,,,2,8,6,a,2,,1c,h,1r,4,1c,7,,,5,,14,9,c,2,w,4,2,2,,3,1k,,,2,3,,,3,1m,8,2,2,48,3,,d,,7,4,,6,,3,2,5i,1m,,5,ek,,5f,x,2da,3,3x,,2o,w,fe,6,2x,2,n9w,4,,a,w,2,28,2,7k,,3,,4,,p,2,5,,47,2,q,i,d,,12,8,p,b,1a,3,1c,,2,4,2,2,13,,1v,6,2,2,2,2,c,,8,,1b,,1f,,,3,2,2,5,2,,,16,2,8,,6m,,2,,4,,fn4,,kh,g,g,g,a6,2,gt,,6a,,45,5,1ae,3,,2,5,4,14,3,4,,4l,2,fx,4,ar,2,49,b,4w,,1i,f,1k,3,1d,4,2,2,1x,3,10,5,,8,1q,,c,2,1g,9,a,4,2,,2n,3,2,,,2,6,,4g,,3,8,l,2,1l,2,,,,,m,,e,7,3,5,5f,8,2,3,,,n,,29,,2,6,,,2,,,2,,2,6j,,2,4,6,2,,2,r,2,2d,8,2,,,2,2y,,,,2,6,,,2t,3,2,4,,5,77,9,,2,6t,,a,2,,,4,,40,4,2,2,4,,w,a,14,6,2,4,8,,9,6,2,3,1a,d,,2,ba,7,,6,,,2a,m,2,7,,2,,2,3e,6,3,,,2,,7,,,20,2,3,,,,9n,2,f0b,5,1n,7,t4,,1r,4,29,,f5k,2,43q,,,3,4,5,8,8,2,7,u,4,44,3,1iz,1j,4,1e,8,,e,,m,5,,f,11s,7,,h,2,7,,2,,5,79,7,c5,4,15s,7,31,7,240,5,gx7k,2o,3k,6o\".split(\",\").map(e=>e?parseInt(e,36):1);for(let e=0,t=0;e<i.length;e++)(e%2?Rh:rl).push(t=t+i[e])})();function wp(i){if(i<768)return!1;for(let e=0,t=rl.length;;){let r=e+t>>1;if(i<rl[r])t=r;else if(i>=Rh[r])e=r+1;else return!0;if(e==t)return!1}}function zh(i){return i>=127462&&i<=127487}var Lh=8205;function Ph(i,e,t=!0,r=!0){return(t?Nh:kp)(i,e,r)}function Nh(i,e,t){if(e==i.length)return 
e;e&&Fh(i.charCodeAt(e))&&Hh(i.charCodeAt(e-1))&&e--;let r=tl(i,e);for(e+=Ih(r);e<i.length;){let n=tl(i,e);if(r==Lh||n==Lh||t&&wp(n))e+=Ih(n),r=n;else if(zh(n)){let s=0,o=e-2;for(;o>=0&&zh(tl(i,o));)s++,o-=2;if(s%2==0)break;e+=2}else break}return e}function kp(i,e,t){for(;e>0;){let r=Nh(i,e-2,t);if(r<e)return r;e--}return 0}function tl(i,e){let t=i.charCodeAt(e);if(!Hh(t)||e+1==i.length)return t;let r=i.charCodeAt(e+1);return Fh(r)?(t-55296<<10)+(r-56320)+65536:t}function Fh(i){return i>=56320&&i<57344}function Hh(i){return i>=55296&&i<56320}function Ih(i){return i<65536?1:2}var se=class i{lineAt(e){if(e<0||e>this.length)throw new RangeError(`Invalid position ${e} in document of length ${this.length}`);return this.lineInner(e,!1,1,0)}line(e){if(e<1||e>this.lines)throw new RangeError(`Invalid line number ${e} in ${this.lines}-line document`);return this.lineInner(e,!0,1,0)}replace(e,t,r){[e,t]=hi(this,e,t);let n=[];return this.decompose(0,e,n,2),r.length&&r.decompose(0,r.length,n,3),this.decompose(t,this.length,n,1),oi.from(n,this.length-(t-e)+r.length)}append(e){return this.replace(this.length,this.length,e)}slice(e,t=this.length){[e,t]=hi(this,e,t);let r=[];return this.decompose(e,t,r,0),oi.from(r,t-e)}eq(e){if(e==this)return!0;if(e.length!=this.length||e.lines!=this.lines)return!1;let t=this.scanIdentical(e,1),r=this.length-this.scanIdentical(e,-1),n=new Nr(this),s=new Nr(e);for(let o=t,l=t;;){if(n.next(o),s.next(o),o=0,n.lineBreak!=s.lineBreak||n.done!=s.done||n.value!=s.value)return!1;if(l+=n.value.length,n.done||l>=r)return!0}}iter(e=1){return new Nr(this,e)}iterRange(e,t=this.length){return new Kn(this,e,t)}iterLines(e,t){let r;if(e==null)r=this.iter();else{t==null&&(t=this.lines+1);let n=this.line(e).from;r=this.iterRange(n,Math.max(n,t==this.lines+1?this.length:t<=1?0:this.line(t-1).to))}return new jn(r)}toString(){return this.sliceString(0)}toJSON(){let e=[];return this.flatten(e),e}constructor(){}static of(e){if(e.length==0)throw new RangeError(\"A 
document must have at least one line\");return e.length==1&&!e[0]?i.empty:e.length<=32?new ut(e):oi.from(ut.split(e,[]))}},ut=class i extends se{constructor(e,t=Sp(e)){super(),this.text=e,this.length=t}get lines(){return this.text.length}get children(){return null}lineInner(e,t,r,n){for(let s=0;;s++){let o=this.text[s],l=n+o.length;if((t?r:l)>=e)return new nl(n,l,r,o);n=l+1,r++}}decompose(e,t,r,n){let s=e<=0&&t>=this.length?this:new i(qh(this.text,e,t),Math.min(t,this.length)-Math.max(0,e));if(n&1){let o=r.pop(),l=Un(s.text,o.text.slice(),0,s.length);if(l.length<=32)r.push(new i(l,o.length+s.length));else{let a=l.length>>1;r.push(new i(l.slice(0,a)),new i(l.slice(a)))}}else r.push(s)}replace(e,t,r){if(!(r instanceof i))return super.replace(e,t,r);[e,t]=hi(this,e,t);let n=Un(this.text,Un(r.text,qh(this.text,0,e)),t),s=this.length+r.length-(t-e);return n.length<=32?new i(n,s):oi.from(i.split(n,[]),s)}sliceString(e,t=this.length,r=`\n`){[e,t]=hi(this,e,t);let n=\"\";for(let s=0,o=0;s<=t&&o<this.text.length;o++){let l=this.text[o],a=s+l.length;s>e&&o&&(n+=r),e<a&&t>s&&(n+=l.slice(Math.max(0,e-s),t-s)),s=a+1}return n}flatten(e){for(let t of this.text)e.push(t)}scanIdentical(){return 0}static split(e,t){let r=[],n=-1;for(let s of e)r.push(s),n+=s.length+1,r.length==32&&(t.push(new i(r,n)),r=[],n=-1);return n>-1&&t.push(new i(r,n)),t}},oi=class i extends se{constructor(e,t){super(),this.children=e,this.length=t,this.lines=0;for(let r of e)this.lines+=r.lines}lineInner(e,t,r,n){for(let s=0;;s++){let o=this.children[s],l=n+o.length,a=r+o.lines-1;if((t?a:l)>=e)return o.lineInner(e,t,r,n);n=l+1,r=a+1}}decompose(e,t,r,n){for(let s=0,o=0;o<=t&&s<this.children.length;s++){let l=this.children[s],a=o+l.length;if(e<=a&&t>=o){let h=n&((o<=e?1:0)|(a>=t?2:0));o>=e&&a<=t&&!h?r.push(l):l.decompose(e-o,t-o,r,h)}o=a+1}}replace(e,t,r){if([e,t]=hi(this,e,t),r.lines<this.lines)for(let n=0,s=0;n<this.children.length;n++){let o=this.children[n],l=s+o.length;if(e>=s&&t<=l){let 
a=o.replace(e-s,t-s,r),h=this.lines-o.lines+a.lines;if(a.lines<h>>4&&a.lines>h>>6){let c=this.children.slice();return c[n]=a,new i(c,this.length-(t-e)+r.length)}return super.replace(s,l,a)}s=l+1}return super.replace(e,t,r)}sliceString(e,t=this.length,r=`\n`){[e,t]=hi(this,e,t);let n=\"\";for(let s=0,o=0;s<this.children.length&&o<=t;s++){let l=this.children[s],a=o+l.length;o>e&&s&&(n+=r),e<a&&t>o&&(n+=l.sliceString(e-o,t-o,r)),o=a+1}return n}flatten(e){for(let t of this.children)t.flatten(e)}scanIdentical(e,t){if(!(e instanceof i))return 0;let r=0,[n,s,o,l]=t>0?[0,0,this.children.length,e.children.length]:[this.children.length-1,e.children.length-1,-1,-1];for(;;n+=t,s+=t){if(n==o||s==l)return r;let a=this.children[n],h=e.children[s];if(a!=h)return r+a.scanIdentical(h,t);r+=a.length+1}}static from(e,t=e.reduce((r,n)=>r+n.length+1,-1)){let r=0;for(let p of e)r+=p.lines;if(r<32){let p=[];for(let v of e)v.flatten(p);return new ut(p,t)}let n=Math.max(32,r>>5),s=n<<1,o=n>>1,l=[],a=0,h=-1,c=[];function u(p){let v;if(p.lines>s&&p instanceof i)for(let y of p.children)u(y);else p.lines>o&&(a>o||!a)?(d(),l.push(p)):p instanceof ut&&a&&(v=c[c.length-1])instanceof ut&&p.lines+v.lines<=32?(a+=p.lines,h+=p.length+1,c[c.length-1]=new ut(v.text.concat(p.text),v.length+1+p.length)):(a+p.lines>n&&d(),a+=p.lines,h+=p.length+1,c.push(p))}function d(){a!=0&&(l.push(c.length==1?c[0]:i.from(c,h)),h=-1,a=c.length=0)}for(let p of e)u(p);return d(),l.length==1?l[0]:new i(l,t)}};se.empty=new ut([\"\"],0);function Sp(i){let e=-1;for(let t of i)e+=t.length+1;return e}function Un(i,e,t=0,r=1e9){for(let n=0,s=0,o=!0;s<i.length&&n<=r;s++){let l=i[s],a=n+l.length;a>=t&&(a>r&&(l=l.slice(0,r-n)),n<t&&(l=l.slice(t-n)),o?(e[e.length-1]+=l,o=!1):e.push(l)),n=a+1}return e}function qh(i,e,t){return Un(i,[\"\"],e,t)}var Nr=class{constructor(e,t=1){this.dir=t,this.done=!1,this.lineBreak=!1,this.value=\"\",this.nodes=[e],this.offsets=[t>0?1:(e instanceof 
ut?e.text.length:e.children.length)<<1]}nextInner(e,t){for(this.done=this.lineBreak=!1;;){let r=this.nodes.length-1,n=this.nodes[r],s=this.offsets[r],o=s>>1,l=n instanceof ut?n.text.length:n.children.length;if(o==(t>0?l:0)){if(r==0)return this.done=!0,this.value=\"\",this;t>0&&this.offsets[r-1]++,this.nodes.pop(),this.offsets.pop()}else if((s&1)==(t>0?0:1)){if(this.offsets[r]+=t,e==0)return this.lineBreak=!0,this.value=`\n`,this;e--}else if(n instanceof ut){let a=n.text[o+(t<0?-1:0)];if(this.offsets[r]+=t,a.length>Math.max(0,e))return this.value=e==0?a:t>0?a.slice(e):a.slice(0,a.length-e),this;e-=a.length}else{let a=n.children[o+(t<0?-1:0)];e>a.length?(e-=a.length,this.offsets[r]+=t):(t<0&&this.offsets[r]--,this.nodes.push(a),this.offsets.push(t>0?1:(a instanceof ut?a.text.length:a.children.length)<<1))}}}next(e=0){return e<0&&(this.nextInner(-e,-this.dir),e=this.value.length),this.nextInner(e,this.dir)}},Kn=class{constructor(e,t,r){this.value=\"\",this.done=!1,this.cursor=new Nr(e,t>r?-1:1),this.pos=t>r?e.length:0,this.from=Math.min(t,r),this.to=Math.max(t,r)}nextInner(e,t){if(t<0?this.pos<=this.from:this.pos>=this.to)return this.value=\"\",this.done=!0,this;e+=Math.max(0,t<0?this.pos-this.to:this.from-this.pos);let r=t<0?this.pos-this.from:this.to-this.pos;e>r&&(e=r),r-=e;let{value:n}=this.cursor.next(e);return this.pos+=(n.length+e)*t,this.value=n.length<=r?n:t<0?n.slice(n.length-r):n.slice(0,r),this.done=!this.value,this}next(e=0){return e<0?e=Math.max(e,this.from-this.pos):e>0&&(e=Math.min(e,this.to-this.pos)),this.nextInner(e,this.cursor.dir)}get lineBreak(){return this.cursor.lineBreak&&this.value!=\"\"}},jn=class{constructor(e){this.inner=e,this.afterBreak=!0,this.value=\"\",this.done=!1}next(e=0){let{done:t,lineBreak:r,value:n}=this.inner.next(e);return t&&this.afterBreak?(this.value=\"\",this.afterBreak=!1):t?(this.done=!0,this.value=\"\"):r?this.afterBreak?this.value=\"\":(this.afterBreak=!0,this.next()):(this.value=n,this.afterBreak=!1),this}get 
lineBreak(){return!1}};typeof Symbol<\"u\"&&(se.prototype[Symbol.iterator]=function(){return this.iter()},Nr.prototype[Symbol.iterator]=Kn.prototype[Symbol.iterator]=jn.prototype[Symbol.iterator]=function(){return this});var nl=class{constructor(e,t,r,n){this.from=e,this.to=t,this.number=r,this.text=n}get length(){return this.to-this.from}};function hi(i,e,t){return e=Math.max(0,Math.min(i.length,e)),[e,Math.max(e,Math.min(i.length,t))]}function Ie(i,e,t=!0,r=!0){return Ph(i,e,t,r)}function Cp(i){return i>=56320&&i<57344}function Ap(i){return i>=55296&&i<56320}function Xe(i,e){let t=i.charCodeAt(e);if(!Ap(t)||e+1==i.length)return t;let r=i.charCodeAt(e+1);return Cp(r)?(t-55296<<10)+(r-56320)+65536:t}function qi(i){return i<=65535?String.fromCharCode(i):(i-=65536,String.fromCharCode((i>>10)+55296,(i&1023)+56320))}function vt(i){return i<65536?1:2}var sl=/\\r\\n?|\\n/,We=function(i){return i[i.Simple=0]=\"Simple\",i[i.TrackDel=1]=\"TrackDel\",i[i.TrackBefore=2]=\"TrackBefore\",i[i.TrackAfter=3]=\"TrackAfter\",i}(We||(We={})),nr=class i{constructor(e){this.sections=e}get length(){let e=0;for(let t=0;t<this.sections.length;t+=2)e+=this.sections[t];return e}get newLength(){let e=0;for(let t=0;t<this.sections.length;t+=2){let r=this.sections[t+1];e+=r<0?this.sections[t]:r}return e}get empty(){return this.sections.length==0||this.sections.length==2&&this.sections[1]<0}iterGaps(e){for(let t=0,r=0,n=0;t<this.sections.length;){let s=this.sections[t++],o=this.sections[t++];o<0?(e(r,n,s),n+=s):n+=o,r+=s}}iterChangedRanges(e,t=!1){ol(this,e,t)}get invertedDesc(){let e=[];for(let t=0;t<this.sections.length;){let r=this.sections[t++],n=this.sections[t++];n<0?e.push(r,n):e.push(n,r)}return new i(e)}composeDesc(e){return this.empty?e:e.empty?this:Uh(this,e)}mapDesc(e,t=!1){return e.empty?this:ll(this,e,t)}mapPos(e,t=-1,r=We.Simple){let n=0,s=0;for(let o=0;o<this.sections.length;){let l=this.sections[o++],a=this.sections[o++],h=n+l;if(a<0){if(h>e)return 
s+(e-n);s+=l}else{if(r!=We.Simple&&h>=e&&(r==We.TrackDel&&n<e&&h>e||r==We.TrackBefore&&n<e||r==We.TrackAfter&&h>e))return null;if(h>e||h==e&&t<0&&!l)return e==n||t<0?s:s+a;s+=a}n=h}if(e>n)throw new RangeError(`Position ${e} is out of range for changeset of length ${n}`);return s}touchesRange(e,t=e){for(let r=0,n=0;r<this.sections.length&&n<=t;){let s=this.sections[r++],o=this.sections[r++],l=n+s;if(o>=0&&n<=t&&l>=e)return n<e&&l>t?\"cover\":!0;n=l}return!1}toString(){let e=\"\";for(let t=0;t<this.sections.length;){let r=this.sections[t++],n=this.sections[t++];e+=(e?\" \":\"\")+r+(n>=0?\":\"+n:\"\")}return e}toJSON(){return this.sections}static fromJSON(e){if(!Array.isArray(e)||e.length%2||e.some(t=>typeof t!=\"number\"))throw new RangeError(\"Invalid JSON representation of ChangeDesc\");return new i(e)}static create(e){return new i(e)}},Ye=class i extends nr{constructor(e,t){super(e),this.inserted=t}apply(e){if(this.length!=e.length)throw new RangeError(\"Applying change set to a document with the wrong length\");return ol(this,(t,r,n,s,o)=>e=e.replace(n,n+(r-t),o),!1),e}mapDesc(e,t=!1){return ll(this,e,t,!0)}invert(e){let t=this.sections.slice(),r=[];for(let n=0,s=0;n<t.length;n+=2){let o=t[n],l=t[n+1];if(l>=0){t[n]=l,t[n+1]=o;let a=n>>1;for(;r.length<a;)r.push(se.empty);r.push(o?e.slice(s,s+o):se.empty)}s+=o}return new i(t,r)}compose(e){return this.empty?e:e.empty?this:Uh(this,e,!0)}map(e,t=!1){return e.empty?this:ll(this,e,t,!0)}iterChanges(e,t=!1){ol(this,e,t)}get desc(){return nr.create(this.sections)}filter(e){let t=[],r=[],n=[],s=new Fr(this);e:for(let o=0,l=0;;){let a=o==e.length?1e9:e[o++];for(;l<a||l==a&&s.len==0;){if(s.done)break e;let c=Math.min(s.len,a-l);$e(n,c,-1);let u=s.ins==-1?-1:s.off==0?s.ins:0;$e(t,c,u),u>0&&gr(r,t,s.text),s.forward(c),l+=c}let h=e[o++];for(;l<h;){if(s.done)break e;let c=Math.min(s.len,h-l);$e(t,c,-1),$e(n,c,s.ins==-1?-1:s.off==0?s.ins:0),s.forward(c),l+=c}}return{changes:new i(t,r),filtered:nr.create(n)}}toJSON(){let 
e=[];for(let t=0;t<this.sections.length;t+=2){let r=this.sections[t],n=this.sections[t+1];n<0?e.push(r):n==0?e.push([r]):e.push([r].concat(this.inserted[t>>1].toJSON()))}return e}static of(e,t,r){let n=[],s=[],o=0,l=null;function a(c=!1){if(!c&&!n.length)return;o<t&&$e(n,t-o,-1);let u=new i(n,s);l=l?l.compose(u.map(l)):u,n=[],s=[],o=0}function h(c){if(Array.isArray(c))for(let u of c)h(u);else if(c instanceof i){if(c.length!=t)throw new RangeError(`Mismatched change set length (got ${c.length}, expected ${t})`);a(),l=l?l.compose(c.map(l)):c}else{let{from:u,to:d=u,insert:p}=c;if(u>d||u<0||d>t)throw new RangeError(`Invalid change range ${u} to ${d} (in doc of length ${t})`);let v=p?typeof p==\"string\"?se.of(p.split(r||sl)):p:se.empty,y=v.length;if(u==d&&y==0)return;u<o&&a(),u>o&&$e(n,u-o,-1),$e(n,d-u,y),gr(s,n,v),o=d}}return h(e),a(!l),l}static empty(e){return new i(e?[e,-1]:[],[])}static fromJSON(e){if(!Array.isArray(e))throw new RangeError(\"Invalid JSON representation of ChangeSet\");let t=[],r=[];for(let n=0;n<e.length;n++){let s=e[n];if(typeof s==\"number\")t.push(s,-1);else{if(!Array.isArray(s)||typeof s[0]!=\"number\"||s.some((o,l)=>l&&typeof o!=\"string\"))throw new RangeError(\"Invalid JSON representation of ChangeSet\");if(s.length==1)t.push(s[0],0);else{for(;r.length<n;)r.push(se.empty);r[n]=se.of(s.slice(1)),t.push(s[0],r[n].length)}}}return new i(t,r)}static createSet(e,t){return new i(e,t)}};function $e(i,e,t,r=!1){if(e==0&&t<=0)return;let n=i.length-2;n>=0&&t<=0&&t==i[n+1]?i[n]+=e:n>=0&&e==0&&i[n]==0?i[n+1]+=t:r?(i[n]+=e,i[n+1]+=t):i.push(e,t)}function gr(i,e,t){if(t.length==0)return;let r=e.length-2>>1;if(r<i.length)i[i.length-1]=i[i.length-1].append(t);else{for(;i.length<r;)i.push(se.empty);i.push(t)}}function ol(i,e,t){let r=i.inserted;for(let n=0,s=0,o=0;o<i.sections.length;){let l=i.sections[o++],a=i.sections[o++];if(a<0)n+=l,s+=l;else{let 
h=n,c=s,u=se.empty;for(;h+=l,c+=a,a&&r&&(u=u.append(r[o-2>>1])),!(t||o==i.sections.length||i.sections[o+1]<0);)l=i.sections[o++],a=i.sections[o++];e(n,h,s,c,u),n=h,s=c}}}function ll(i,e,t,r=!1){let n=[],s=r?[]:null,o=new Fr(i),l=new Fr(e);for(let a=-1;;){if(o.done&&l.len||l.done&&o.len)throw new Error(\"Mismatched change set lengths\");if(o.ins==-1&&l.ins==-1){let h=Math.min(o.len,l.len);$e(n,h,-1),o.forward(h),l.forward(h)}else if(l.ins>=0&&(o.ins<0||a==o.i||o.off==0&&(l.len<o.len||l.len==o.len&&!t))){let h=l.len;for($e(n,l.ins,-1);h;){let c=Math.min(o.len,h);o.ins>=0&&a<o.i&&o.len<=c&&($e(n,0,o.ins),s&&gr(s,n,o.text),a=o.i),o.forward(c),h-=c}l.next()}else if(o.ins>=0){let h=0,c=o.len;for(;c;)if(l.ins==-1){let u=Math.min(c,l.len);h+=u,c-=u,l.forward(u)}else if(l.ins==0&&l.len<c)c-=l.len,l.next();else break;$e(n,h,a<o.i?o.ins:0),s&&a<o.i&&gr(s,n,o.text),a=o.i,o.forward(o.len-c)}else{if(o.done&&l.done)return s?Ye.createSet(n,s):nr.create(n);throw new Error(\"Mismatched change set lengths\")}}}function Uh(i,e,t=!1){let r=[],n=t?[]:null,s=new Fr(i),o=new Fr(e);for(let l=!1;;){if(s.done&&o.done)return n?Ye.createSet(r,n):nr.create(r);if(s.ins==0)$e(r,s.len,0,l),s.next();else if(o.len==0&&!o.done)$e(r,0,o.ins,l),n&&gr(n,r,o.text),o.next();else{if(s.done||o.done)throw new Error(\"Mismatched change set lengths\");{let a=Math.min(s.len2,o.len),h=r.length;if(s.ins==-1){let c=o.ins==-1?-1:o.off?0:o.ins;$e(r,a,c,l),n&&c&&gr(n,r,o.text)}else o.ins==-1?($e(r,s.off?0:s.len,a,l),n&&gr(n,r,s.textBit(a))):($e(r,s.off?0:s.len,o.off?0:o.ins,l),n&&!o.off&&gr(n,r,o.text));l=(s.ins>a||o.ins>=0&&o.len>a)&&(l||r.length>h),s.forward2(a),o.forward(a)}}}}var Fr=class{constructor(e){this.set=e,this.i=0,this.next()}next(){let{sections:e}=this.set;this.i<e.length?(this.len=e[this.i++],this.ins=e[this.i++]):(this.len=0,this.ins=-2),this.off=0}get done(){return this.ins==-2}get len2(){return this.ins<0?this.len:this.ins}get text(){let{inserted:e}=this.set,t=this.i-2>>1;return 
t>=e.length?se.empty:e[t]}textBit(e){let{inserted:t}=this.set,r=this.i-2>>1;return r>=t.length&&!e?se.empty:t[r].slice(this.off,e==null?void 0:this.off+e)}forward(e){e==this.len?this.next():(this.len-=e,this.off+=e)}forward2(e){this.ins==-1?this.forward(e):e==this.ins?this.next():(this.ins-=e,this.off+=e)}},si=class i{constructor(e,t,r){this.from=e,this.to=t,this.flags=r}get anchor(){return this.flags&32?this.to:this.from}get head(){return this.flags&32?this.from:this.to}get empty(){return this.from==this.to}get assoc(){return this.flags&8?-1:this.flags&16?1:0}get bidiLevel(){let e=this.flags&7;return e==7?null:e}get goalColumn(){let e=this.flags>>6;return e==16777215?void 0:e}map(e,t=-1){let r,n;return this.empty?r=n=e.mapPos(this.from,t):(r=e.mapPos(this.from,1),n=e.mapPos(this.to,-1)),r==this.from&&n==this.to?this:new i(r,n,this.flags)}extend(e,t=e){if(e<=this.anchor&&t>=this.anchor)return R.range(e,t);let r=Math.abs(e-this.anchor)>Math.abs(t-this.anchor)?e:t;return R.range(this.anchor,r)}eq(e,t=!1){return this.anchor==e.anchor&&this.head==e.head&&this.goalColumn==e.goalColumn&&(!t||!this.empty||this.assoc==e.assoc)}toJSON(){return{anchor:this.anchor,head:this.head}}static fromJSON(e){if(!e||typeof e.anchor!=\"number\"||typeof e.head!=\"number\")throw new RangeError(\"Invalid JSON representation for SelectionRange\");return R.range(e.anchor,e.head)}static create(e,t,r){return new i(e,t,r)}},R=class i{constructor(e,t){this.ranges=e,this.mainIndex=t}map(e,t=-1){return e.empty?this:i.create(this.ranges.map(r=>r.map(e,t)),this.mainIndex)}eq(e,t=!1){if(this.ranges.length!=e.ranges.length||this.mainIndex!=e.mainIndex)return!1;for(let r=0;r<this.ranges.length;r++)if(!this.ranges[r].eq(e.ranges[r],t))return!1;return!0}get main(){return this.ranges[this.mainIndex]}asSingle(){return this.ranges.length==1?this:new i([this.main],0)}addRange(e,t=!0){return i.create([e].concat(this.ranges),t?0:this.mainIndex+1)}replaceRange(e,t=this.mainIndex){let r=this.ranges.slice();return 
r[t]=e,i.create(r,this.mainIndex)}toJSON(){return{ranges:this.ranges.map(e=>e.toJSON()),main:this.mainIndex}}static fromJSON(e){if(!e||!Array.isArray(e.ranges)||typeof e.main!=\"number\"||e.main>=e.ranges.length)throw new RangeError(\"Invalid JSON representation for EditorSelection\");return new i(e.ranges.map(t=>si.fromJSON(t)),e.main)}static single(e,t=e){return new i([i.range(e,t)],0)}static create(e,t=0){if(e.length==0)throw new RangeError(\"A selection needs at least one range\");for(let r=0,n=0;n<e.length;n++){let s=e[n];if(s.empty?s.from<=r:s.from<r)return i.normalized(e.slice(),t);r=s.to}return new i(e,t)}static cursor(e,t=0,r,n){return si.create(e,e,(t==0?0:t<0?8:16)|(r==null?7:Math.min(6,r))|(n??16777215)<<6)}static range(e,t,r,n){let s=(r??16777215)<<6|(n==null?7:Math.min(6,n));return t<e?si.create(t,e,48|s):si.create(e,t,(t>e?8:0)|s)}static normalized(e,t=0){let r=e[t];e.sort((n,s)=>n.from-s.from),t=e.indexOf(r);for(let n=1;n<e.length;n++){let s=e[n],o=e[n-1];if(s.empty?s.from<=o.to:s.from<o.to){let l=o.from,a=Math.max(s.to,o.to);n<=t&&t--,e.splice(--n,2,s.anchor>s.head?i.range(a,l):i.range(l,a))}}return new i(e,t)}};function Kh(i,e){for(let t of i.ranges)if(t.to>e)throw new RangeError(\"Selection points outside of document\")}var vl=0,H=class i{constructor(e,t,r,n,s){this.combine=e,this.compareInput=t,this.compare=r,this.isStatic=n,this.id=vl++,this.default=e([]),this.extensions=typeof s==\"function\"?s(this):s}get reader(){return this}static define(e={}){return new i(e.combine||(t=>t),e.compareInput||((t,r)=>t===r),e.compare||(e.combine?(t,r)=>t===r:bl),!!e.static,e.enables)}of(e){return new li([],this,0,e)}compute(e,t){if(this.isStatic)throw new Error(\"Can't compute a static facet\");return new li(e,this,1,t)}computeN(e,t){if(this.isStatic)throw new Error(\"Can't compute a static facet\");return new li(e,this,2,t)}from(e,t){return t||(t=r=>r),this.compute([e],r=>t(r.field(e)))}};function bl(i,e){return 
i==e||i.length==e.length&&i.every((t,r)=>t===e[r])}var li=class{constructor(e,t,r,n){this.dependencies=e,this.facet=t,this.type=r,this.value=n,this.id=vl++}dynamicSlot(e){var t;let r=this.value,n=this.facet.compareInput,s=this.id,o=e[s]>>1,l=this.type==2,a=!1,h=!1,c=[];for(let u of this.dependencies)u==\"doc\"?a=!0:u==\"selection\"?h=!0:((t=e[u.id])!==null&&t!==void 0?t:1)&1||c.push(e[u.id]);return{create(u){return u.values[o]=r(u),1},update(u,d){if(a&&d.docChanged||h&&(d.docChanged||d.selection)||al(u,c)){let p=r(u);if(l?!Wh(p,u.values[o],n):!n(p,u.values[o]))return u.values[o]=p,1}return 0},reconfigure:(u,d)=>{let p,v=d.config.address[s];if(v!=null){let y=Jn(d,v);if(this.dependencies.every(w=>w instanceof H?d.facet(w)===u.facet(w):w instanceof Re?d.field(w,!1)==u.field(w,!1):!0)||(l?Wh(p=r(u),y,n):n(p=r(u),y)))return u.values[o]=y,0}else p=r(u);return u.values[o]=p,1}}}};function Wh(i,e,t){if(i.length!=e.length)return!1;for(let r=0;r<i.length;r++)if(!t(i[r],e[r]))return!1;return!0}function al(i,e){let t=!1;for(let r of e)Pi(i,r)&1&&(t=!0);return t}function Mp(i,e,t){let r=t.map(a=>i[a.id]),n=t.map(a=>a.type),s=r.filter(a=>!(a&1)),o=i[e.id]>>1;function l(a){let h=[];for(let c=0;c<r.length;c++){let u=Jn(a,r[c]);if(n[c]==2)for(let d of u)h.push(d);else h.push(u)}return e.combine(h)}return{create(a){for(let h of r)Pi(a,h);return a.values[o]=l(a),1},update(a,h){if(!al(a,s))return 0;let c=l(a);return e.compare(c,a.values[o])?0:(a.values[o]=c,1)},reconfigure(a,h){let c=al(a,r),u=h.config.facets[e.id],d=h.facet(e);if(u&&!c&&bl(t,u))return a.values[o]=d,0;let p=l(a);return e.compare(p,d)?(a.values[o]=d,0):(a.values[o]=p,1)}}}var Vn=H.define({static:!0}),Re=class i{constructor(e,t,r,n,s){this.id=e,this.createF=t,this.updateF=r,this.compareF=n,this.spec=s,this.provides=void 0}static define(e){let t=new i(vl++,e.create,e.update,e.compare||((r,n)=>r===n),e);return e.provide&&(t.provides=e.provide(t)),t}create(e){let 
t=e.facet(Vn).find(r=>r.field==this);return(t?.create||this.createF)(e)}slot(e){let t=e[this.id]>>1;return{create:r=>(r.values[t]=this.create(r),1),update:(r,n)=>{let s=r.values[t],o=this.updateF(s,n);return this.compareF(s,o)?0:(r.values[t]=o,1)},reconfigure:(r,n)=>{let s=r.facet(Vn),o=n.facet(Vn),l;return(l=s.find(a=>a.field==this))&&l!=o.find(a=>a.field==this)?(r.values[t]=l.create(r),1):n.config.address[this.id]!=null?(r.values[t]=n.field(this),0):(r.values[t]=this.create(r),1)}}}init(e){return[this,Vn.of({field:this,create:e})]}get extension(){return this}},Rr={lowest:4,low:3,default:2,high:1,highest:0};function Ri(i){return e=>new Yn(e,i)}var Wt={highest:Ri(Rr.highest),high:Ri(Rr.high),default:Ri(Rr.default),low:Ri(Rr.low),lowest:Ri(Rr.lowest)},Yn=class{constructor(e,t){this.inner=e,this.prec=t}},Xn=class i{of(e){return new Ni(this,e)}reconfigure(e){return i.reconfigure.of({compartment:this,extension:e})}get(e){return e.config.compartments.get(this)}},Ni=class{constructor(e,t){this.compartment=e,this.inner=t}},_n=class i{constructor(e,t,r,n,s,o){for(this.base=e,this.compartments=t,this.dynamicSlots=r,this.address=n,this.staticValues=s,this.facets=o,this.statusTemplate=[];this.statusTemplate.length<r.length;)this.statusTemplate.push(0)}staticFacet(e){let t=this.address[e.id];return t==null?e.default:this.staticValues[t>>1]}static resolve(e,t,r){let n=[],s=Object.create(null),o=new Map;for(let d of Tp(e,t,o))d instanceof Re?n.push(d):(s[d.facet.id]||(s[d.facet.id]=[])).push(d);let l=Object.create(null),a=[],h=[];for(let d of n)l[d.id]=h.length<<1,h.push(p=>d.slot(p));let c=r?.config.facets;for(let d in s){let p=s[d],v=p[0].facet,y=c&&c[d]||[];if(p.every(w=>w.type==0))if(l[v.id]=a.length<<1|1,bl(y,p))a.push(r.facet(v));else{let w=v.combine(p.map(S=>S.value));a.push(r&&v.compare(w,r.facet(v))?r.facet(v):w)}else{for(let w of 
p)w.type==0?(l[w.id]=a.length<<1|1,a.push(w.value)):(l[w.id]=h.length<<1,h.push(S=>w.dynamicSlot(S)));l[v.id]=h.length<<1,h.push(w=>Mp(w,v,p))}}let u=h.map(d=>d(l));return new i(e,o,u,l,a,s)}};function Tp(i,e,t){let r=[[],[],[],[],[]],n=new Map;function s(o,l){let a=n.get(o);if(a!=null){if(a<=l)return;let h=r[a].indexOf(o);h>-1&&r[a].splice(h,1),o instanceof Ni&&t.delete(o.compartment)}if(n.set(o,l),Array.isArray(o))for(let h of o)s(h,l);else if(o instanceof Ni){if(t.has(o.compartment))throw new RangeError(\"Duplicate use of compartment in extensions\");let h=e.get(o.compartment)||o.inner;t.set(o.compartment,h),s(h,l)}else if(o instanceof Yn)s(o.inner,o.prec);else if(o instanceof Re)r[l].push(o),o.provides&&s(o.provides,l);else if(o instanceof li)r[l].push(o),o.facet.extensions&&s(o.facet.extensions,Rr.default);else{let h=o.extension;if(!h)throw new Error(`Unrecognized extension value in extension set (${o}). This sometimes happens because multiple instances of @codemirror/state are loaded, breaking instanceof checks.`);s(h,l)}}return s(i,Rr.default),r.reduce((o,l)=>o.concat(l))}function Pi(i,e){if(e&1)return 2;let t=e>>1,r=i.status[t];if(r==4)throw new Error(\"Cyclic dependency between fields and/or facets\");if(r&2)return r;i.status[t]=4;let n=i.computeSlot(i,i.config.dynamicSlots[t]);return i.status[t]=2|n}function Jn(i,e){return e&1?i.config.staticValues[e>>1]:i.values[e>>1]}var jh=H.define(),hl=H.define({combine:i=>i.some(e=>e),static:!0}),Yh=H.define({combine:i=>i.length?i[0]:void 0,static:!0}),Xh=H.define(),_h=H.define(),Jh=H.define(),Zh=H.define({combine:i=>i.length?i[0]:!1}),rt=class{constructor(e,t){this.type=e,this.value=t}static define(){return new cl}},cl=class{of(e){return new rt(this,e)}},ul=class{constructor(e){this.map=e}of(e){return new te(this,e)}},te=class i{constructor(e,t){this.type=e,this.value=t}map(e){let t=this.type.map(this.value,e);return t===void 0?void 0:t==this.value?this:new i(this.type,t)}is(e){return this.type==e}static 
define(e={}){return new ul(e.map||(t=>t))}static mapEffects(e,t){if(!e.length)return e;let r=[];for(let n of e){let s=n.map(t);s&&r.push(s)}return r}};te.reconfigure=te.define();te.appendConfig=te.define();var Le=class i{constructor(e,t,r,n,s,o){this.startState=e,this.changes=t,this.selection=r,this.effects=n,this.annotations=s,this.scrollIntoView=o,this._doc=null,this._state=null,r&&Kh(r,t.newLength),s.some(l=>l.type==i.time)||(this.annotations=s.concat(i.time.of(Date.now())))}static create(e,t,r,n,s,o){return new i(e,t,r,n,s,o)}get newDoc(){return this._doc||(this._doc=this.changes.apply(this.startState.doc))}get newSelection(){return this.selection||this.startState.selection.map(this.changes)}get state(){return this._state||this.startState.applyTransaction(this),this._state}annotation(e){for(let t of this.annotations)if(t.type==e)return t.value}get docChanged(){return!this.changes.empty}get reconfigured(){return this.startState.config!=this.state.config}isUserEvent(e){let t=this.annotation(i.userEvent);return!!(t&&(t==e||t.length>e.length&&t.slice(0,e.length)==e&&t[e.length]==\".\"))}};Le.time=rt.define();Le.userEvent=rt.define();Le.addToHistory=rt.define();Le.remote=rt.define();function Dp(i,e){let t=[];for(let r=0,n=0;;){let s,o;if(r<i.length&&(n==e.length||e[n]>=i[r]))s=i[r++],o=i[r++];else if(n<e.length)s=e[n++],o=e[n++];else return t;!t.length||t[t.length-1]<s?t.push(s,o):t[t.length-1]<o&&(t[t.length-1]=o)}}function Qh(i,e,t){var r;let n,s,o;return t?(n=e.changes,s=Ye.empty(e.changes.length),o=i.changes.compose(e.changes)):(n=e.changes.map(i.changes),s=i.changes.mapDesc(e.changes,!0),o=i.changes.compose(n)),{changes:o,selection:e.selection?e.selection.map(s):(r=i.selection)===null||r===void 0?void 0:r.map(n),effects:te.mapEffects(i.effects,n).concat(te.mapEffects(e.effects,s)),annotations:i.annotations.length?i.annotations.concat(e.annotations):e.annotations,scrollIntoView:i.scrollIntoView||e.scrollIntoView}}function fl(i,e,t){let 
r=e.selection,n=ai(e.annotations);return e.userEvent&&(n=n.concat(Le.userEvent.of(e.userEvent))),{changes:e.changes instanceof Ye?e.changes:Ye.of(e.changes||[],t,i.facet(Yh)),selection:r&&(r instanceof R?r:R.single(r.anchor,r.head)),effects:ai(e.effects),annotations:n,scrollIntoView:!!e.scrollIntoView}}function ec(i,e,t){let r=fl(i,e.length?e[0]:{},i.doc.length);e.length&&e[0].filter===!1&&(t=!1);for(let s=1;s<e.length;s++){e[s].filter===!1&&(t=!1);let o=!!e[s].sequential;r=Qh(r,fl(i,e[s],o?r.changes.newLength:i.doc.length),o)}let n=Le.create(i,r.changes,r.selection,r.effects,r.annotations,r.scrollIntoView);return Ep(t?Bp(n):n)}function Bp(i){let e=i.startState,t=!0;for(let n of e.facet(Xh)){let s=n(i);if(s===!1){t=!1;break}Array.isArray(s)&&(t=t===!0?s:Dp(t,s))}if(t!==!0){let n,s;if(t===!1)s=i.changes.invertedDesc,n=Ye.empty(e.doc.length);else{let o=i.changes.filter(t);n=o.changes,s=o.filtered.mapDesc(o.changes).invertedDesc}i=Le.create(e,n,i.selection&&i.selection.map(s),te.mapEffects(i.effects,s),i.annotations,i.scrollIntoView)}let r=e.facet(_h);for(let n=r.length-1;n>=0;n--){let s=r[n](i);s instanceof Le?i=s:Array.isArray(s)&&s.length==1&&s[0]instanceof Le?i=s[0]:i=ec(e,ai(s),!1)}return i}function Ep(i){let e=i.startState,t=e.facet(Jh),r=i;for(let n=t.length-1;n>=0;n--){let s=t[n](i);s&&Object.keys(s).length&&(r=Qh(r,fl(e,s,i.changes.newLength),!0))}return r==i?i:Le.create(e,i.changes,i.selection,r.effects,r.annotations,r.scrollIntoView)}var Op=[];function ai(i){return i==null?Op:Array.isArray(i)?i:[i]}var Ee=function(i){return i[i.Word=0]=\"Word\",i[i.Space=1]=\"Space\",i[i.Other=2]=\"Other\",i}(Ee||(Ee={})),zp=/[\\u00df\\u0587\\u0590-\\u05f4\\u0600-\\u06ff\\u3040-\\u309f\\u30a0-\\u30ff\\u3400-\\u4db5\\u4e00-\\u9fcc\\uac00-\\ud7af]/,dl;try{dl=new RegExp(\"[\\\\p{Alphabetic}\\\\p{Number}_]\",\"u\")}catch{}function Lp(i){if(dl)return dl.test(i);for(let e=0;e<i.length;e++){let 
t=i[e];if(/\\w/.test(t)||t>\"\\x80\"&&(t.toUpperCase()!=t.toLowerCase()||zp.test(t)))return!0}return!1}function Ip(i){return e=>{if(!/\\S/.test(e))return Ee.Space;if(Lp(e))return Ee.Word;for(let t=0;t<i.length;t++)if(e.indexOf(i[t])>-1)return Ee.Word;return Ee.Other}}var fe=class i{constructor(e,t,r,n,s,o){this.config=e,this.doc=t,this.selection=r,this.values=n,this.status=e.statusTemplate.slice(),this.computeSlot=s,o&&(o._state=this);for(let l=0;l<this.config.dynamicSlots.length;l++)Pi(this,l<<1);this.computeSlot=null}field(e,t=!0){let r=this.config.address[e.id];if(r==null){if(t)throw new RangeError(\"Field is not present in this state\");return}return Pi(this,r),Jn(this,r)}update(...e){return ec(this,e,!0)}applyTransaction(e){let t=this.config,{base:r,compartments:n}=t;for(let l of e.effects)l.is(Xn.reconfigure)?(t&&(n=new Map,t.compartments.forEach((a,h)=>n.set(h,a)),t=null),n.set(l.value.compartment,l.value.extension)):l.is(te.reconfigure)?(t=null,r=l.value):l.is(te.appendConfig)&&(t=null,r=ai(r).concat(l.value));let s;t?s=e.startState.values.slice():(t=_n.resolve(r,n,this),s=new i(t,this.doc,this.selection,t.dynamicSlots.map(()=>null),(a,h)=>h.reconfigure(a,this),null).values);let o=e.startState.facet(hl)?e.newSelection:e.newSelection.asSingle();new i(t,e.newDoc,o,s,(l,a)=>a.update(l,e),e)}replaceSelection(e){return typeof e==\"string\"&&(e=this.toText(e)),this.changeByRange(t=>({changes:{from:t.from,to:t.to,insert:e},range:R.cursor(t.from+e.length)}))}changeByRange(e){let t=this.selection,r=e(t.ranges[0]),n=this.changes(r.changes),s=[r.range],o=ai(r.effects);for(let l=1;l<t.ranges.length;l++){let a=e(t.ranges[l]),h=this.changes(a.changes),c=h.map(n);for(let d=0;d<l;d++)s[d]=s[d].map(c);let u=n.mapDesc(h,!0);s.push(a.range.map(u)),n=n.compose(c),o=te.mapEffects(o,c).concat(te.mapEffects(ai(a.effects),u))}return{changes:n,selection:R.create(s,t.mainIndex),effects:o}}changes(e=[]){return e instanceof 
Ye?e:Ye.of(e,this.doc.length,this.facet(i.lineSeparator))}toText(e){return se.of(e.split(this.facet(i.lineSeparator)||sl))}sliceDoc(e=0,t=this.doc.length){return this.doc.sliceString(e,t,this.lineBreak)}facet(e){let t=this.config.address[e.id];return t==null?e.default:(Pi(this,t),Jn(this,t))}toJSON(e){let t={doc:this.sliceDoc(),selection:this.selection.toJSON()};if(e)for(let r in e){let n=e[r];n instanceof Re&&this.config.address[n.id]!=null&&(t[r]=n.spec.toJSON(this.field(e[r]),this))}return t}static fromJSON(e,t={},r){if(!e||typeof e.doc!=\"string\")throw new RangeError(\"Invalid JSON representation for EditorState\");let n=[];if(r){for(let s in r)if(Object.prototype.hasOwnProperty.call(e,s)){let o=r[s],l=e[s];n.push(o.init(a=>o.spec.fromJSON(l,a)))}}return i.create({doc:e.doc,selection:R.fromJSON(e.selection),extensions:t.extensions?n.concat([t.extensions]):n})}static create(e={}){let t=_n.resolve(e.extensions||[],new Map),r=e.doc instanceof se?e.doc:se.of((e.doc||\"\").split(t.staticFacet(i.lineSeparator)||sl)),n=e.selection?e.selection instanceof R?e.selection:R.single(e.selection.anchor,e.selection.head):R.single(0);return Kh(n,r.length),t.staticFacet(hl)||(n=n.asSingle()),new i(t,r,n,t.dynamicSlots.map(()=>null),(s,o)=>o.create(s),null)}get tabSize(){return this.facet(i.tabSize)}get lineBreak(){return this.facet(i.lineSeparator)||`\n`}get readOnly(){return this.facet(Zh)}phrase(e,...t){for(let r of this.facet(i.phrases))if(Object.prototype.hasOwnProperty.call(r,e)){e=r[e];break}return t.length&&(e=e.replace(/\\$(\\$|\\d*)/g,(r,n)=>{if(n==\"$\")return\"$\";let s=+(n||1);return!s||s>t.length?r:t[s-1]})),e}languageDataAt(e,t,r=-1){let n=[];for(let s of this.facet(jh))for(let o of s(this,t,r))Object.prototype.hasOwnProperty.call(o,e)&&n.push(o[e]);return n}charCategorizer(e){let t=this.languageDataAt(\"wordChars\",e);return Ip(t.length?t[0]:\"\")}wordAt(e){let{text:t,from:r,length:n}=this.doc.lineAt(e),s=this.charCategorizer(e),o=e-r,l=e-r;for(;o>0;){let 
a=Ie(t,o,!1);if(s(t.slice(a,o))!=Ee.Word)break;o=a}for(;l<n;){let a=Ie(t,l);if(s(t.slice(l,a))!=Ee.Word)break;l=a}return o==l?null:R.range(o+r,l+r)}};fe.allowMultipleSelections=hl;fe.tabSize=H.define({combine:i=>i.length?i[0]:4});fe.lineSeparator=Yh;fe.readOnly=Zh;fe.phrases=H.define({compare(i,e){let t=Object.keys(i),r=Object.keys(e);return t.length==r.length&&t.every(n=>i[n]==e[n])}});fe.languageData=jh;fe.changeFilter=Xh;fe.transactionFilter=_h;fe.transactionExtender=Jh;Xn.reconfigure=te.define();function it(i,e,t={}){let r={};for(let n of i)for(let s of Object.keys(n)){let o=n[s],l=r[s];if(l===void 0)r[s]=o;else if(!(l===o||o===void 0))if(Object.hasOwnProperty.call(t,s))r[s]=t[s](l,o);else throw new Error(\"Config merge conflict for field \"+s)}for(let n in e)r[n]===void 0&&(r[n]=e[n]);return r}var gt=class{eq(e){return this==e}range(e,t=e){return Fi.create(e,t,this)}};gt.prototype.startSide=gt.prototype.endSide=0;gt.prototype.point=!1;gt.prototype.mapMode=We.TrackDel;function yl(i,e){return i==e||i.constructor==e.constructor&&i.eq(e)}var Fi=class i{constructor(e,t,r){this.from=e,this.to=t,this.value=r}static create(e,t,r){return new i(e,t,r)}};function ml(i,e){return i.from-e.from||i.value.startSide-e.value.startSide}var pl=class i{constructor(e,t,r,n){this.from=e,this.to=t,this.value=r,this.maxPoint=n}get length(){return this.to[this.to.length-1]}findIndex(e,t,r,n=0){let s=r?this.to:this.from;for(let o=n,l=s.length;;){if(o==l)return o;let a=o+l>>1,h=s[a]-e||(r?this.value[a].endSide:this.value[a].startSide)-t;if(a==o)return h>=0?o:l;h>=0?l=a:o=a+1}}between(e,t,r,n){for(let s=this.findIndex(t,-1e9,!0),o=this.findIndex(r,1e9,!1,s);s<o;s++)if(n(this.from[s]+e,this.to[s]+e,this.value[s])===!1)return!1}map(e,t){let r=[],n=[],s=[],o=-1,l=-1;for(let a=0;a<this.value.length;a++){let h=this.value[a],c=this.from[a]+e,u=this.to[a]+e,d,p;if(c==u){let v=t.mapPos(c,h.startSide,h.mapMode);if(v==null||(d=p=v,h.startSide!=h.endSide&&(p=t.mapPos(c,h.endSide),p<d)))continue}else 
if(d=t.mapPos(c,h.startSide),p=t.mapPos(u,h.endSide),d>p||d==p&&h.startSide>0&&h.endSide<=0)continue;(p-d||h.endSide-h.startSide)<0||(o<0&&(o=d),h.point&&(l=Math.max(l,p-d)),r.push(h),n.push(d-o),s.push(p-o))}return{mapped:r.length?new i(n,s,r,l):null,pos:o}}},le=class i{constructor(e,t,r,n){this.chunkPos=e,this.chunk=t,this.nextLayer=r,this.maxPoint=n}static create(e,t,r,n){return new i(e,t,r,n)}get length(){let e=this.chunk.length-1;return e<0?0:Math.max(this.chunkEnd(e),this.nextLayer.length)}get size(){if(this.isEmpty)return 0;let e=this.nextLayer.size;for(let t of this.chunk)e+=t.value.length;return e}chunkEnd(e){return this.chunkPos[e]+this.chunk[e].length}update(e){let{add:t=[],sort:r=!1,filterFrom:n=0,filterTo:s=this.length}=e,o=e.filter;if(t.length==0&&!o)return this;if(r&&(t=t.slice().sort(ml)),this.isEmpty)return t.length?i.of(t):this;let l=new Zn(this,null,-1).goto(0),a=0,h=[],c=new Et;for(;l.value||a<t.length;)if(a<t.length&&(l.from-t[a].from||l.startSide-t[a].value.startSide)>=0){let u=t[a++];c.addInner(u.from,u.to,u.value)||h.push(u)}else l.rangeIndex==1&&l.chunkIndex<this.chunk.length&&(a==t.length||this.chunkEnd(l.chunkIndex)<t[a].from)&&(!o||n>this.chunkEnd(l.chunkIndex)||s<this.chunkPos[l.chunkIndex])&&c.addChunk(this.chunkPos[l.chunkIndex],this.chunk[l.chunkIndex])?l.nextChunk():((!o||n>l.to||s<l.from||o(l.from,l.to,l.value))&&(c.addInner(l.from,l.to,l.value)||h.push(Fi.create(l.from,l.to,l.value))),l.next());return c.finishInner(this.nextLayer.isEmpty&&!h.length?i.empty:this.nextLayer.update({add:h,filter:o,filterFrom:n,filterTo:s}))}map(e){if(e.empty||this.isEmpty)return this;let t=[],r=[],n=-1;for(let o=0;o<this.chunk.length;o++){let l=this.chunkPos[o],a=this.chunk[o],h=e.touchesRange(l,l+a.length);if(h===!1)n=Math.max(n,a.maxPoint),t.push(a),r.push(e.mapPos(l));else if(h===!0){let{mapped:c,pos:u}=a.map(l,e);c&&(n=Math.max(n,c.maxPoint),t.push(c),r.push(u))}}let s=this.nextLayer.map(e);return t.length==0?s:new 
i(r,t,s||i.empty,n)}between(e,t,r){if(!this.isEmpty){for(let n=0;n<this.chunk.length;n++){let s=this.chunkPos[n],o=this.chunk[n];if(t>=s&&e<=s+o.length&&o.between(s,e-s,t-s,r)===!1)return}this.nextLayer.between(e,t,r)}}iter(e=0){return Hi.from([this]).goto(e)}get isEmpty(){return this.nextLayer==this}static iter(e,t=0){return Hi.from(e).goto(t)}static compare(e,t,r,n,s=-1){let o=e.filter(u=>u.maxPoint>0||!u.isEmpty&&u.maxPoint>=s),l=t.filter(u=>u.maxPoint>0||!u.isEmpty&&u.maxPoint>=s),a=Vh(o,l,r),h=new Pr(o,a,s),c=new Pr(l,a,s);r.iterGaps((u,d,p)=>$h(h,u,c,d,p,n)),r.empty&&r.length==0&&$h(h,0,c,0,0,n)}static eq(e,t,r=0,n){n==null&&(n=999999999);let s=e.filter(c=>!c.isEmpty&&t.indexOf(c)<0),o=t.filter(c=>!c.isEmpty&&e.indexOf(c)<0);if(s.length!=o.length)return!1;if(!s.length)return!0;let l=Vh(s,o),a=new Pr(s,l,0).goto(r),h=new Pr(o,l,0).goto(r);for(;;){if(a.to!=h.to||!gl(a.active,h.active)||a.point&&(!h.point||!yl(a.point,h.point)))return!1;if(a.to>n)return!0;a.next(),h.next()}}static spans(e,t,r,n,s=-1){let o=new Pr(e,null,s).goto(t),l=t,a=o.openStart;for(;;){let h=Math.min(o.to,r);if(o.point){let c=o.activeForPoint(o.to),u=o.pointFrom<t?c.length+1:o.point.startSide<0?c.length:Math.min(c.length,a);n.point(l,h,o.point,c,u,o.pointRank),a=Math.min(o.openEnd(h),c.length)}else h>l&&(n.span(l,h,o.active,a),a=o.openEnd(h));if(o.to>r)return a+(o.point&&o.to>r?1:0);l=o.to,o.next()}}static of(e,t=!1){let r=new Et;for(let n of e instanceof Fi?[e]:t?Rp(e):e)r.add(n.from,n.to,n.value);return r.finish()}static join(e){if(!e.length)return i.empty;let t=e[e.length-1];for(let r=e.length-2;r>=0;r--)for(let n=e[r];n!=i.empty;n=n.nextLayer)t=new i(n.chunkPos,n.chunk,t,Math.max(n.maxPoint,t.maxPoint));return t}};le.empty=new le([],[],null,-1);function Rp(i){if(i.length>1)for(let e=i[0],t=1;t<i.length;t++){let r=i[t];if(ml(e,r)>0)return i.slice().sort(ml);e=r}return i}le.empty.nextLayer=le.empty;var Et=class i{finishChunk(e){this.chunks.push(new 
pl(this.from,this.to,this.value,this.maxPoint)),this.chunkPos.push(this.chunkStart),this.chunkStart=-1,this.setMaxPoint=Math.max(this.setMaxPoint,this.maxPoint),this.maxPoint=-1,e&&(this.from=[],this.to=[],this.value=[])}constructor(){this.chunks=[],this.chunkPos=[],this.chunkStart=-1,this.last=null,this.lastFrom=-1e9,this.lastTo=-1e9,this.from=[],this.to=[],this.value=[],this.maxPoint=-1,this.setMaxPoint=-1,this.nextLayer=null}add(e,t,r){this.addInner(e,t,r)||(this.nextLayer||(this.nextLayer=new i)).add(e,t,r)}addInner(e,t,r){let n=e-this.lastTo||r.startSide-this.last.endSide;if(n<=0&&(e-this.lastFrom||r.startSide-this.last.startSide)<0)throw new Error(\"Ranges must be added sorted by `from` position and `startSide`\");return n<0?!1:(this.from.length==250&&this.finishChunk(!0),this.chunkStart<0&&(this.chunkStart=e),this.from.push(e-this.chunkStart),this.to.push(t-this.chunkStart),this.last=r,this.lastFrom=e,this.lastTo=t,this.value.push(r),r.point&&(this.maxPoint=Math.max(this.maxPoint,t-e)),!0)}addChunk(e,t){if((e-this.lastTo||t.value[0].startSide-this.last.endSide)<0)return!1;this.from.length&&this.finishChunk(!0),this.setMaxPoint=Math.max(this.setMaxPoint,t.maxPoint),this.chunks.push(t),this.chunkPos.push(e);let r=t.value.length-1;return this.last=t.value[r],this.lastFrom=t.from[r]+e,this.lastTo=t.to[r]+e,!0}finish(){return this.finishInner(le.empty)}finishInner(e){if(this.from.length&&this.finishChunk(!1),this.chunks.length==0)return e;let t=le.create(this.chunkPos,this.chunks,this.nextLayer?this.nextLayer.finishInner(e):e,this.setMaxPoint);return this.from=null,t}};function Vh(i,e,t){let r=new Map;for(let s of i)for(let o=0;o<s.chunk.length;o++)s.chunk[o].maxPoint<=0&&r.set(s.chunk[o],s.chunkPos[o]);let n=new Set;for(let s of e)for(let o=0;o<s.chunk.length;o++){let l=r.get(s.chunk[o]);l!=null&&(t?t.mapPos(l):l)==s.chunkPos[o]&&!t?.touchesRange(l,l+s.chunk[o].length)&&n.add(s.chunk[o])}return n}var 
Zn=class{constructor(e,t,r,n=0){this.layer=e,this.skip=t,this.minPoint=r,this.rank=n}get startSide(){return this.value?this.value.startSide:0}get endSide(){return this.value?this.value.endSide:0}goto(e,t=-1e9){return this.chunkIndex=this.rangeIndex=0,this.gotoInner(e,t,!1),this}gotoInner(e,t,r){for(;this.chunkIndex<this.layer.chunk.length;){let n=this.layer.chunk[this.chunkIndex];if(!(this.skip&&this.skip.has(n)||this.layer.chunkEnd(this.chunkIndex)<e||n.maxPoint<this.minPoint))break;this.chunkIndex++,r=!1}if(this.chunkIndex<this.layer.chunk.length){let n=this.layer.chunk[this.chunkIndex].findIndex(e-this.layer.chunkPos[this.chunkIndex],t,!0);(!r||this.rangeIndex<n)&&this.setRangeIndex(n)}this.next()}forward(e,t){(this.to-e||this.endSide-t)<0&&this.gotoInner(e,t,!0)}next(){for(;;)if(this.chunkIndex==this.layer.chunk.length){this.from=this.to=1e9,this.value=null;break}else{let e=this.layer.chunkPos[this.chunkIndex],t=this.layer.chunk[this.chunkIndex],r=e+t.from[this.rangeIndex];if(this.from=r,this.to=e+t.to[this.rangeIndex],this.value=t.value[this.rangeIndex],this.setRangeIndex(this.rangeIndex+1),this.minPoint<0||this.value.point&&this.to-this.from>=this.minPoint)break}}setRangeIndex(e){if(e==this.layer.chunk[this.chunkIndex].value.length){if(this.chunkIndex++,this.skip)for(;this.chunkIndex<this.layer.chunk.length&&this.skip.has(this.layer.chunk[this.chunkIndex]);)this.chunkIndex++;this.rangeIndex=0}else this.rangeIndex=e}nextChunk(){this.chunkIndex++,this.rangeIndex=0,this.next()}compare(e){return this.from-e.from||this.startSide-e.startSide||this.rank-e.rank||this.to-e.to||this.endSide-e.endSide}},Hi=class i{constructor(e){this.heap=e}static from(e,t=null,r=-1){let n=[];for(let s=0;s<e.length;s++)for(let o=e[s];!o.isEmpty;o=o.nextLayer)o.maxPoint>=r&&n.push(new Zn(o,t,r,s));return n.length==1?n[0]:new i(n)}get startSide(){return this.value?this.value.startSide:0}goto(e,t=-1e9){for(let r of this.heap)r.goto(e,t);for(let 
r=this.heap.length>>1;r>=0;r--)il(this.heap,r);return this.next(),this}forward(e,t){for(let r of this.heap)r.forward(e,t);for(let r=this.heap.length>>1;r>=0;r--)il(this.heap,r);(this.to-e||this.value.endSide-t)<0&&this.next()}next(){if(this.heap.length==0)this.from=this.to=1e9,this.value=null,this.rank=-1;else{let e=this.heap[0];this.from=e.from,this.to=e.to,this.value=e.value,this.rank=e.rank,e.value&&e.next(),il(this.heap,0)}}};function il(i,e){for(let t=i[e];;){let r=(e<<1)+1;if(r>=i.length)break;let n=i[r];if(r+1<i.length&&n.compare(i[r+1])>=0&&(n=i[r+1],r++),t.compare(n)<0)break;i[r]=t,i[e]=n,e=r}}var Pr=class{constructor(e,t,r){this.minPoint=r,this.active=[],this.activeTo=[],this.activeRank=[],this.minActive=-1,this.point=null,this.pointFrom=0,this.pointRank=0,this.to=-1e9,this.endSide=0,this.openStart=-1,this.cursor=Hi.from(e,t,r)}goto(e,t=-1e9){return this.cursor.goto(e,t),this.active.length=this.activeTo.length=this.activeRank.length=0,this.minActive=-1,this.to=e,this.endSide=t,this.openStart=-1,this.next(),this}forward(e,t){for(;this.minActive>-1&&(this.activeTo[this.minActive]-e||this.active[this.minActive].endSide-t)<0;)this.removeActive(this.minActive);this.cursor.forward(e,t)}removeActive(e){$n(this.active,e),$n(this.activeTo,e),$n(this.activeRank,e),this.minActive=Gh(this.active,this.activeTo)}addActive(e){let t=0,{value:r,to:n,rank:s}=this.cursor;for(;t<this.activeRank.length&&(s-this.activeRank[t]||n-this.activeTo[t])>0;)t++;Gn(this.active,t,r),Gn(this.activeTo,t,n),Gn(this.activeRank,t,s),e&&Gn(e,t,this.cursor.from),this.minActive=Gh(this.active,this.activeTo)}next(){let e=this.to,t=this.point;this.point=null;let r=this.openStart<0?[]:null;for(;;){let n=this.minActive;if(n>-1&&(this.activeTo[n]-this.cursor.from||this.active[n].endSide-this.cursor.startSide)<0){if(this.activeTo[n]>e){this.to=this.activeTo[n],this.endSide=this.active[n].endSide;break}this.removeActive(n),r&&$n(r,n)}else 
if(this.cursor.value)if(this.cursor.from>e){this.to=this.cursor.from,this.endSide=this.cursor.startSide;break}else{let s=this.cursor.value;if(!s.point)this.addActive(r),this.cursor.next();else if(t&&this.cursor.to==this.to&&this.cursor.from<this.cursor.to)this.cursor.next();else{this.point=s,this.pointFrom=this.cursor.from,this.pointRank=this.cursor.rank,this.to=this.cursor.to,this.endSide=s.endSide,this.cursor.next(),this.forward(this.to,this.endSide);break}}else{this.to=this.endSide=1e9;break}}if(r){this.openStart=0;for(let n=r.length-1;n>=0&&r[n]<e;n--)this.openStart++}}activeForPoint(e){if(!this.active.length)return this.active;let t=[];for(let r=this.active.length-1;r>=0&&!(this.activeRank[r]<this.pointRank);r--)(this.activeTo[r]>e||this.activeTo[r]==e&&this.active[r].endSide>=this.point.endSide)&&t.push(this.active[r]);return t.reverse()}openEnd(e){let t=0;for(let r=this.activeTo.length-1;r>=0&&this.activeTo[r]>e;r--)t++;return t}};function $h(i,e,t,r,n,s){i.goto(e),t.goto(r);let o=r+n,l=r,a=r-e,h=!!s.boundChange;for(let c=!1;;){let u=i.to+a-t.to,d=u||i.endSide-t.endSide,p=d<0?i.to+a:t.to,v=Math.min(p,o);if(i.point||t.point?(i.point&&t.point&&yl(i.point,t.point)&&gl(i.activeForPoint(i.to),t.activeForPoint(t.to))||s.comparePoint(l,v,i.point,t.point),c=!1):(c&&s.boundChange(l),v>l&&!gl(i.active,t.active)&&s.compareRange(l,v,i.active,t.active),h&&v<o&&(u||i.openEnd(p)!=t.openEnd(p))&&(c=!0)),p>o)break;l=p,d<=0&&i.next(),d>=0&&t.next()}}function gl(i,e){if(i.length!=e.length)return!1;for(let t=0;t<i.length;t++)if(i[t]!=e[t]&&!yl(i[t],e[t]))return!1;return!0}function $n(i,e){for(let t=e,r=i.length-1;t<r;t++)i[t]=i[t+1];i.pop()}function Gn(i,e,t){for(let r=i.length-1;r>=e;r--)i[r+1]=i[r];i[e]=t}function Gh(i,e){let t=-1,r=1e9;for(let n=0;n<e.length;n++)(e[n]-r||i[n].endSide-i[t].endSide)<0&&(t=n,r=e[n]);return t}function vr(i,e,t=i.length){let r=0;for(let n=0;n<t&&n<i.length;)i.charCodeAt(n)==9?(r+=e-r%e,n++):(r++,n=Ie(i,n));return r}function tc(i,e,t,r){for(let 
n=0,s=0;;){if(s>=e)return n;if(n==i.length)break;s+=i.charCodeAt(n)==9?t-s%t:1,n=Ie(i,n)}return r===!0?-1:i.length}var xl=\"\\u037C\",rc=typeof Symbol>\"u\"?\"__\"+xl:Symbol.for(xl),wl=typeof Symbol>\"u\"?\"__styleSet\"+Math.floor(Math.random()*1e8):Symbol(\"styleSet\"),ic=typeof globalThis<\"u\"?globalThis:typeof window<\"u\"?window:{},bt=class{constructor(e,t){this.rules=[];let{finish:r}=t||{};function n(o){return/^@/.test(o)?[o]:o.split(/,\\s*/)}function s(o,l,a,h){let c=[],u=/^@(\\w+)\\b/.exec(o[0]),d=u&&u[1]==\"keyframes\";if(u&&l==null)return a.push(o[0]+\";\");for(let p in l){let v=l[p];if(/&/.test(p))s(p.split(/,\\s*/).map(y=>o.map(w=>y.replace(/&/,w))).reduce((y,w)=>y.concat(w)),v,a);else if(v&&typeof v==\"object\"){if(!u)throw new RangeError(\"The value of a property (\"+p+\") should be a primitive value.\");s(n(p),v,c,d)}else v!=null&&c.push(p.replace(/_.*/,\"\").replace(/[A-Z]/g,y=>\"-\"+y.toLowerCase())+\": \"+v+\";\")}(c.length||d)&&a.push((r&&!u&&!h?o.map(r):o).join(\", \")+\" {\"+c.join(\" \")+\"}\")}for(let o in e)s(n(o),e[o],this.rules)}getRules(){return this.rules.join(`\n`)}static newName(){let e=ic[rc]||1;return ic[rc]=e+1,xl+e.toString(36)}static mount(e,t,r){let n=e[wl],s=r&&r.nonce;n?s&&n.setNonce(s):n=new kl(e,s),n.mount(Array.isArray(t)?t:[t],e)}},nc=new Map,kl=class{constructor(e,t){let r=e.ownerDocument||e,n=r.defaultView;if(!e.head&&e.adoptedStyleSheets&&n.CSSStyleSheet){let s=nc.get(r);if(s)return e[wl]=s;this.sheet=new n.CSSStyleSheet,nc.set(r,this)}else this.styleTag=r.createElement(\"style\"),t&&this.styleTag.setAttribute(\"nonce\",t);this.modules=[],e[wl]=this}mount(e,t){let r=this.sheet,n=0,s=0;for(let o=0;o<e.length;o++){let l=e[o],a=this.modules.indexOf(l);if(a<s&&a>-1&&(this.modules.splice(a,1),s--,a=-1),a==-1){if(this.modules.splice(s++,0,l),r)for(let 
h=0;h<l.rules.length;h++)r.insertRule(l.rules[h],n++)}else{for(;s<a;)n+=this.modules[s++].rules.length;n+=l.rules.length,s++}}if(r)t.adoptedStyleSheets.indexOf(this.sheet)<0&&(t.adoptedStyleSheets=[this.sheet,...t.adoptedStyleSheets]);else{let o=\"\";for(let a=0;a<this.modules.length;a++)o+=this.modules[a].getRules()+`\n`;this.styleTag.textContent=o;let l=t.head||t;this.styleTag.parentNode!=l&&l.insertBefore(this.styleTag,l.firstChild)}}setNonce(e){this.styleTag&&this.styleTag.getAttribute(\"nonce\")!=e&&this.styleTag.setAttribute(\"nonce\",e)}};var sr={8:\"Backspace\",9:\"Tab\",10:\"Enter\",12:\"NumLock\",13:\"Enter\",16:\"Shift\",17:\"Control\",18:\"Alt\",20:\"CapsLock\",27:\"Escape\",32:\" \",33:\"PageUp\",34:\"PageDown\",35:\"End\",36:\"Home\",37:\"ArrowLeft\",38:\"ArrowUp\",39:\"ArrowRight\",40:\"ArrowDown\",44:\"PrintScreen\",45:\"Insert\",46:\"Delete\",59:\";\",61:\"=\",91:\"Meta\",92:\"Meta\",106:\"*\",107:\"+\",108:\",\",109:\"-\",110:\".\",111:\"/\",144:\"NumLock\",145:\"ScrollLock\",160:\"Shift\",161:\"Shift\",162:\"Control\",163:\"Control\",164:\"Alt\",165:\"Alt\",173:\"-\",186:\";\",187:\"=\",188:\",\",189:\"-\",190:\".\",191:\"/\",192:\"`\",219:\"[\",220:\"\\\\\",221:\"]\",222:\"'\"},ci={48:\")\",49:\"!\",50:\"@\",51:\"#\",52:\"$\",53:\"%\",54:\"^\",55:\"&\",56:\"*\",57:\"(\",59:\":\",61:\"+\",173:\"_\",186:\":\",187:\"+\",188:\"<\",189:\"_\",190:\">\",191:\"?\",192:\"~\",219:\"{\",220:\"|\",221:\"}\",222:'\"'},Pp=typeof navigator<\"u\"&&/Mac/.test(navigator.platform),Np=typeof navigator<\"u\"&&/MSIE \\d|Trident\\/(?:[7-9]|\\d{2,})\\..*rv:(\\d+)/.exec(navigator.userAgent);for(Oe=0;Oe<10;Oe++)sr[48+Oe]=sr[96+Oe]=String(Oe);var Oe;for(Oe=1;Oe<=24;Oe++)sr[Oe+111]=\"F\"+Oe;var Oe;for(Oe=65;Oe<=90;Oe++)sr[Oe]=String.fromCharCode(Oe+32),ci[Oe]=String.fromCharCode(Oe);var Oe;for(Qn in sr)ci.hasOwnProperty(Qn)||(ci[Qn]=sr[Qn]);var Qn;function sc(i){var 
e=Pp&&i.metaKey&&i.shiftKey&&!i.ctrlKey&&!i.altKey||Np&&i.shiftKey&&i.key&&i.key.length==1||i.key==\"Unidentified\",t=!e&&i.key||(i.shiftKey?ci:sr)[i.keyCode]||i.key||\"Unidentified\";return t==\"Esc\"&&(t=\"Escape\"),t==\"Del\"&&(t=\"Delete\"),t==\"Left\"&&(t=\"ArrowLeft\"),t==\"Up\"&&(t=\"ArrowUp\"),t==\"Right\"&&(t=\"ArrowRight\"),t==\"Down\"&&(t=\"ArrowDown\"),t}function nt(){var i=arguments[0];typeof i==\"string\"&&(i=document.createElement(i));var e=1,t=arguments[1];if(t&&typeof t==\"object\"&&t.nodeType==null&&!Array.isArray(t)){for(var r in t)if(Object.prototype.hasOwnProperty.call(t,r)){var n=t[r];typeof n==\"string\"?i.setAttribute(r,n):n!=null&&(i[r]=n)}e++}for(;e<arguments.length;e++)oc(i,arguments[e]);return i}function oc(i,e){if(typeof e==\"string\")i.appendChild(document.createTextNode(e));else if(e!=null)if(e.nodeType!=null)i.appendChild(e);else if(Array.isArray(e))for(var t=0;t<e.length;t++)oc(i,e[t]);else throw new RangeError(\"Unsupported child node: \"+e)}var _e=typeof navigator<\"u\"?navigator:{userAgent:\"\",vendor:\"\",platform:\"\"},Ol=typeof document<\"u\"?document:{documentElement:{style:{}}},zl=/Edge\\/(\\d+)/.exec(_e.userAgent),Gc=/MSIE \\d/.test(_e.userAgent),Ll=/Trident\\/(?:[7-9]|\\d{2,})\\..*rv:(\\d+)/.exec(_e.userAgent),Ls=!!(Gc||Ll||zl),lc=!Ls&&/gecko\\/(\\d+)/i.test(_e.userAgent),Sl=!Ls&&/Chrome\\/(\\d+)/.exec(_e.userAgent),ac=\"webkitFontSmoothing\"in Ol.documentElement.style,Il=!Ls&&/Apple 
Computer/.test(_e.vendor),hc=Il&&(/Mobile\\/\\w+/.test(_e.userAgent)||_e.maxTouchPoints>2),W={mac:hc||/Mac/.test(_e.platform),windows:/Win/.test(_e.platform),linux:/Linux|X11/.test(_e.platform),ie:Ls,ie_version:Gc?Ol.documentMode||6:Ll?+Ll[1]:zl?+zl[1]:0,gecko:lc,gecko_version:lc?+(/Firefox\\/(\\d+)/.exec(_e.userAgent)||[0,0])[1]:0,chrome:!!Sl,chrome_version:Sl?+Sl[1]:0,ios:hc,android:/Android\\b/.test(_e.userAgent),webkit:ac,webkit_version:ac?+(/\\bAppleWebKit\\/(\\d+)/.exec(_e.userAgent)||[0,0])[1]:0,safari:Il,safari_version:Il?+(/\\bVersion\\/(\\d+(\\.\\d+)?)/.exec(_e.userAgent)||[0,0])[1]:0,tabSize:Ol.documentElement.style.tabSize!=null?\"tab-size\":\"-moz-tab-size\"};function Aa(i,e){for(let t in i)t==\"class\"&&e.class?e.class+=\" \"+i.class:t==\"style\"&&e.style?e.style+=\";\"+i.style:e[t]=i[t];return e}var ps=Object.create(null);function Ma(i,e,t){if(i==e)return!0;i||(i=ps),e||(e=ps);let r=Object.keys(i),n=Object.keys(e);if(r.length-(t&&r.indexOf(t)>-1?1:0)!=n.length-(t&&n.indexOf(t)>-1?1:0))return!1;for(let s of r)if(s!=t&&(n.indexOf(s)==-1||i[s]!==e[s]))return!1;return!0}function Fp(i,e){for(let t=i.attributes.length-1;t>=0;t--){let r=i.attributes[t].name;e[r]==null&&i.removeAttribute(r)}for(let t in e){let r=e[t];t==\"style\"?i.style.cssText=r:i.getAttribute(t)!=r&&i.setAttribute(t,r)}}function cc(i,e,t){let r=!1;if(e)for(let n in e)t&&n in t||(r=!0,n==\"style\"?i.style.cssText=\"\":i.removeAttribute(n));if(t)for(let n in t)e&&e[n]==t[n]||(r=!0,n==\"style\"?i.style.cssText=t[n]:i.setAttribute(n,t[n]));return r}function Hp(i){let e=Object.create(null);for(let t=0;t<i.attributes.length;t++){let r=i.attributes[t];e[r.name]=r.value}return e}var kt=class{eq(e){return!1}updateDOM(e,t){return!1}compare(e){return this==e||this.constructor==e.constructor&&this.eq(e)}get estimatedHeight(){return-1}get lineBreaks(){return 0}ignoreEvent(e){return!0}coordsAt(e,t,r){return null}get isHidden(){return!1}get editable(){return!1}destroy(e){}},Ve=function(i){return 
i[i.Text=0]=\"Text\",i[i.WidgetBefore=1]=\"WidgetBefore\",i[i.WidgetAfter=2]=\"WidgetAfter\",i[i.WidgetRange=3]=\"WidgetRange\",i}(Ve||(Ve={})),X=class extends gt{constructor(e,t,r,n){super(),this.startSide=e,this.endSide=t,this.widget=r,this.spec=n}get heightRelevant(){return!1}static mark(e){return new Qi(e)}static widget(e){let t=Math.max(-1e4,Math.min(1e4,e.side||0)),r=!!e.block;return t+=r&&!e.inlineOrder?t>0?3e8:-4e8:t>0?1e8:-1e8,new Wr(e,t,t,r,e.widget||null,!1)}static replace(e){let t=!!e.block,r,n;if(e.isBlockGap)r=-5e8,n=4e8;else{let{start:s,end:o}=Uc(e,t);r=(s?t?-3e8:-1:5e8)-1,n=(o?t?2e8:1:-6e8)+1}return new Wr(e,r,n,t,e.widget||null,!0)}static line(e){return new en(e)}static set(e,t=!1){return le.of(e,t)}hasHeight(){return this.widget?this.widget.estimatedHeight>-1:!1}};X.none=le.empty;var Qi=class i extends X{constructor(e){let{start:t,end:r}=Uc(e);super(t?-1:5e8,r?1:-6e8,null,e),this.tagName=e.tagName||\"span\",this.attrs=e.class&&e.attributes?Aa(e.attributes,{class:e.class}):e.class?{class:e.class}:e.attributes||ps}eq(e){return this==e||e instanceof i&&this.tagName==e.tagName&&Ma(this.attrs,e.attrs)}range(e,t=e){if(e>=t)throw new RangeError(\"Mark decorations may not be empty\");return super.range(e,t)}};Qi.prototype.point=!1;var en=class i extends X{constructor(e){super(-2e8,-2e8,null,e)}eq(e){return e instanceof i&&this.spec.class==e.spec.class&&Ma(this.spec.attributes,e.spec.attributes)}range(e,t=e){if(t!=e)throw new RangeError(\"Line decoration ranges must be zero-length\");return super.range(e,t)}};en.prototype.mapMode=We.TrackBefore;en.prototype.point=!0;var Wr=class i extends X{constructor(e,t,r,n,s,o){super(t,r,s,e),this.block=n,this.isReplace=o,this.mapMode=n?t<=0?We.TrackBefore:We.TrackAfter:We.TrackDel}get type(){return this.startSide!=this.endSide?Ve.WidgetRange:this.startSide<=0?Ve.WidgetBefore:Ve.WidgetAfter}get heightRelevant(){return this.block||!!this.widget&&(this.widget.estimatedHeight>=5||this.widget.lineBreaks>0)}eq(e){return e 
instanceof i&&qp(this.widget,e.widget)&&this.block==e.block&&this.startSide==e.startSide&&this.endSide==e.endSide}range(e,t=e){if(this.isReplace&&(e>t||e==t&&this.startSide>0&&this.endSide<=0))throw new RangeError(\"Invalid range for replacement decoration\");if(!this.isReplace&&t!=e)throw new RangeError(\"Widget decorations can only have zero-length ranges\");return super.range(e,t)}};Wr.prototype.point=!0;function Uc(i,e=!1){let{inclusiveStart:t,inclusiveEnd:r}=i;return t==null&&(t=i.inclusive),r==null&&(r=i.inclusive),{start:t??e,end:r??e}}function qp(i,e){return i==e||!!(i&&e&&i.compare(e))}function pi(i,e,t,r=0){let n=t.length-1;n>=0&&t[n]+r>=i?t[n]=Math.max(t[n],e):t.push(i,e)}var gs=class i extends gt{constructor(e,t){super(),this.tagName=e,this.attributes=t}eq(e){return e==this||e instanceof i&&this.tagName==e.tagName&&Ma(this.attributes,e.attributes)}static create(e){return new i(e.tagName,e.attributes||ps)}static set(e,t=!1){return le.of(e,t)}};gs.prototype.startSide=gs.prototype.endSide=-1;function tn(i){let e;return i.nodeType==11?e=i.getSelection?i:i.ownerDocument:e=i,e.getSelection()}function Rl(i,e){return e?i==e||i.contains(e.nodeType!=1?e.parentNode:e):!1}function Ui(i,e){if(!e.anchorNode)return!1;try{return Rl(i,e.anchorNode)}catch{return!1}}function hs(i){return i.nodeType==3?rn(i,0,i.nodeValue.length).getClientRects():i.nodeType==1?i.getClientRects():[]}function Ki(i,e,t,r){return t?uc(i,e,t,r,-1)||uc(i,e,t,r,1):!1}function xr(i){for(var e=0;;e++)if(i=i.previousSibling,!i)return e}function vs(i){return i.nodeType==1&&/^(DIV|P|LI|UL|OL|BLOCKQUOTE|DD|DT|H\\d|SECTION|PRE)$/.test(i.nodeName)}function uc(i,e,t,r,n){for(;;){if(i==t&&e==r)return!0;if(e==(n<0?0:ar(i))){if(i.nodeName==\"DIV\")return!1;let s=i.parentNode;if(!s||s.nodeType!=1)return!1;e=xr(i)+(n<0?0:1),i=s}else if(i.nodeType==1){if(i=i.childNodes[e+(n<0?-1:0)],i.nodeType==1&&i.contentEditable==\"false\")return!1;e=n<0?ar(i):0}else return!1}}function ar(i){return 
i.nodeType==3?i.nodeValue.length:i.childNodes.length}function bs(i,e){let t=e?i.left:i.right;return{left:t,right:t,top:i.top,bottom:i.bottom}}function Wp(i){let e=i.visualViewport;return e?{left:0,right:e.width,top:0,bottom:e.height}:{left:0,right:i.innerWidth,top:0,bottom:i.innerHeight}}function Kc(i,e){let t=e.width/i.offsetWidth,r=e.height/i.offsetHeight;return(t>.995&&t<1.005||!isFinite(t)||Math.abs(e.width-i.offsetWidth)<1)&&(t=1),(r>.995&&r<1.005||!isFinite(r)||Math.abs(e.height-i.offsetHeight)<1)&&(r=1),{scaleX:t,scaleY:r}}function Vp(i,e,t,r,n,s,o,l){let a=i.ownerDocument,h=a.defaultView||window;for(let c=i,u=!1;c&&!u;)if(c.nodeType==1){let d,p=c==a.body,v=1,y=1;if(p)d=Wp(h);else{if(/^(fixed|sticky)$/.test(getComputedStyle(c).position)&&(u=!0),c.scrollHeight<=c.clientHeight&&c.scrollWidth<=c.clientWidth){c=c.assignedSlot||c.parentNode;continue}let A=c.getBoundingClientRect();({scaleX:v,scaleY:y}=Kc(c,A)),d={left:A.left,right:A.left+c.clientWidth*v,top:A.top,bottom:A.top+c.clientHeight*y}}let w=0,S=0;if(n==\"nearest\")e.top<d.top?(S=e.top-(d.top+o),t>0&&e.bottom>d.bottom+S&&(S=e.bottom-d.bottom+o)):e.bottom>d.bottom&&(S=e.bottom-d.bottom+o,t<0&&e.top-S<d.top&&(S=e.top-(d.top+o)));else{let A=e.bottom-e.top,M=d.bottom-d.top;S=(n==\"center\"&&A<=M?e.top+A/2-M/2:n==\"start\"||n==\"center\"&&t<0?e.top-o:e.bottom-M+o)-d.top}if(r==\"nearest\"?e.left<d.left?(w=e.left-(d.left+s),t>0&&e.right>d.right+w&&(w=e.right-d.right+s)):e.right>d.right&&(w=e.right-d.right+s,t<0&&e.left<d.left+w&&(w=e.left-(d.left+s))):w=(r==\"center\"?e.left+(e.right-e.left)/2-(d.right-d.left)/2:r==\"start\"==l?e.left-s:e.right-(d.right-d.left)+s)-d.left,w||S)if(p)h.scrollBy(w,S);else{let A=0,M=0;if(S){let E=c.scrollTop;c.scrollTop+=S/y,M=(c.scrollTop-E)*y}if(w){let 
E=c.scrollLeft;c.scrollLeft+=w/v,A=(c.scrollLeft-E)*v}e={left:e.left-A,top:e.top-M,right:e.right-A,bottom:e.bottom-M},A&&Math.abs(A-w)<1&&(r=\"nearest\"),M&&Math.abs(M-S)<1&&(n=\"nearest\")}if(p)break;(e.top<d.top||e.bottom>d.bottom||e.left<d.left||e.right>d.right)&&(e={left:Math.max(e.left,d.left),right:Math.min(e.right,d.right),top:Math.max(e.top,d.top),bottom:Math.min(e.bottom,d.bottom)}),c=c.assignedSlot||c.parentNode}else if(c.nodeType==11)c=c.host;else break}function $p(i){let e=i.ownerDocument,t,r;for(let n=i.parentNode;n&&!(n==e.body||t&&r);)if(n.nodeType==1)!r&&n.scrollHeight>n.clientHeight&&(r=n),!t&&n.scrollWidth>n.clientWidth&&(t=n),n=n.assignedSlot||n.parentNode;else if(n.nodeType==11)n=n.host;else break;return{x:t,y:r}}var Pl=class{constructor(){this.anchorNode=null,this.anchorOffset=0,this.focusNode=null,this.focusOffset=0}eq(e){return this.anchorNode==e.anchorNode&&this.anchorOffset==e.anchorOffset&&this.focusNode==e.focusNode&&this.focusOffset==e.focusOffset}setRange(e){let{anchorNode:t,focusNode:r}=e;this.set(t,Math.min(e.anchorOffset,t?ar(t):0),r,Math.min(e.focusOffset,r?ar(r):0))}set(e,t,r,n){this.anchorNode=e,this.anchorOffset=t,this.focusNode=r,this.focusOffset=n}},Hr=null;W.safari&&W.safari_version>=26&&(Hr=!1);function jc(i){if(i.setActive)return i.setActive();if(Hr)return i.focus(Hr);let e=[];for(let t=i;t&&(e.push(t,t.scrollTop,t.scrollLeft),t!=t.ownerDocument);t=t.parentNode);if(i.focus(Hr==null?{get preventScroll(){return Hr={preventScroll:!0},!0}}:void 0),!Hr){Hr=!1;for(let t=0;t<e.length;){let r=e[t++],n=e[t++],s=e[t++];r.scrollTop!=n&&(r.scrollTop=n),r.scrollLeft!=s&&(r.scrollLeft=s)}}}var fc;function rn(i,e,t=e){let r=fc||(fc=document.createRange());return r.setEnd(i,t),r.setStart(i,e),r}function gi(i,e,t,r){let n={key:e,code:e,keyCode:t,which:t,cancelable:!0};r&&({altKey:n.altKey,ctrlKey:n.ctrlKey,shiftKey:n.shiftKey,metaKey:n.metaKey}=r);let s=new KeyboardEvent(\"keydown\",n);s.synthetic=!0,i.dispatchEvent(s);let o=new 
KeyboardEvent(\"keyup\",n);return o.synthetic=!0,i.dispatchEvent(o),s.defaultPrevented||o.defaultPrevented}function Gp(i){for(;i;){if(i&&(i.nodeType==9||i.nodeType==11&&i.host))return i;i=i.assignedSlot||i.parentNode}return null}function Up(i,e){let t=e.focusNode,r=e.focusOffset;if(!t||e.anchorNode!=t||e.anchorOffset!=r)return!1;for(r=Math.min(r,ar(t));;)if(r){if(t.nodeType!=1)return!1;let n=t.childNodes[r-1];n.contentEditable==\"false\"?r--:(t=n,r=ar(t))}else{if(t==i)return!0;r=xr(t),t=t.parentNode}}function Yc(i){return i.scrollTop>Math.max(1,i.scrollHeight-i.clientHeight-4)}function Xc(i,e){for(let t=i,r=e;;){if(t.nodeType==3&&r>0)return{node:t,offset:r};if(t.nodeType==1&&r>0){if(t.contentEditable==\"false\")return null;t=t.childNodes[r-1],r=ar(t)}else if(t.parentNode&&!vs(t))r=xr(t),t=t.parentNode;else return null}}function _c(i,e){for(let t=i,r=e;;){if(t.nodeType==3&&r<t.nodeValue.length)return{node:t,offset:r};if(t.nodeType==1&&r<t.childNodes.length){if(t.contentEditable==\"false\")return null;t=t.childNodes[r],r=0}else if(t.parentNode&&!vs(t))r=xr(t)+1,t=t.parentNode;else return null}}var $t=class i{constructor(e,t,r=!0){this.node=e,this.offset=t,this.precise=r}static before(e,t){return new i(e.parentNode,xr(e),t)}static after(e,t){return new i(e.parentNode,xr(e)+1,t)}},he=function(i){return i[i.LTR=0]=\"LTR\",i[i.RTL=1]=\"RTL\",i}(he||(he={})),Vr=he.LTR,Ta=he.RTL;function Jc(i){let e=[];for(let t=0;t<i.length;t++)e.push(1<<+i[t]);return e}var 
Kp=Jc(\"88888888888888888888888888888888888666888888787833333333337888888000000000000000000000000008888880000000000000000000000000088888888888888888888888888888888888887866668888088888663380888308888800000000000000000000000800000000000000000000000000000008\"),jp=Jc(\"4444448826627288999999999992222222222222222222222222222222222222222222222229999999999999999999994444444444644222822222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222999999949999999229989999223333333333\"),Nl=Object.create(null),Vt=[];for(let i of[\"()\",\"[]\",\"{}\"]){let e=i.charCodeAt(0),t=i.charCodeAt(1);Nl[e]=t,Nl[t]=-e}function Zc(i){return i<=247?Kp[i]:1424<=i&&i<=1524?2:1536<=i&&i<=1785?jp[i-1536]:1774<=i&&i<=2220?4:8192<=i&&i<=8204?256:64336<=i&&i<=65023?4:1}var Yp=/[\\u0590-\\u05f4\\u0600-\\u06ff\\u0700-\\u08ac\\ufb50-\\ufdff]/,wt=class{get dir(){return this.level%2?Ta:Vr}constructor(e,t,r){this.from=e,this.to=t,this.level=r}side(e,t){return this.dir==t==e?this.to:this.from}forward(e,t){return e==(this.dir==t)}static find(e,t,r,n){let s=-1;for(let o=0;o<e.length;o++){let l=e[o];if(l.from<=t&&l.to>=t){if(l.level==r)return o;(s<0||(n!=0?n<0?l.from<t:l.to>t:e[s].level>l.level))&&(s=o)}}if(s<0)throw new RangeError(\"Index out of range\");return s}};function Qc(i,e){if(i.length!=e.length)return!1;for(let t=0;t<i.length;t++){let r=i[t],n=e[t];if(r.from!=n.from||r.to!=n.to||r.direction!=n.direction||!Qc(r.inner,n.inner))return!1}return!0}var ce=[];function Xp(i,e,t,r,n){for(let s=0;s<=r.length;s++){let o=s?r[s-1].to:e,l=s<r.length?r[s].from:t,a=s?256:n;for(let h=o,c=a,u=a;h<l;h++){let d=Zc(i.charCodeAt(h));d==512?d=c:d==8&&u==4&&(d=16),ce[h]=d==4?2:d,d&7&&(u=d),c=d}for(let h=o,c=a,u=a;h<l;h++){let d=ce[h];if(d==128)h<l-1&&c==ce[h+1]&&c&24?d=ce[h]=c:ce[h]=256;else if(d==64){let p=h+1;for(;p<l&&ce[p]==64;)p++;let v=h&&c==8||p<t&&ce[p]==8?u==1?1:8:256;for(let y=h;y<p;y++)ce[y]=v;h=p-1}else d==8&&u==1&&(ce[h]=1);c=d,d&7&&(u=d)}}}function _p(i,e,t,r,n){let 
s=n==1?2:1;for(let o=0,l=0,a=0;o<=r.length;o++){let h=o?r[o-1].to:e,c=o<r.length?r[o].from:t;for(let u=h,d,p,v;u<c;u++)if(p=Nl[d=i.charCodeAt(u)])if(p<0){for(let y=l-3;y>=0;y-=3)if(Vt[y+1]==-p){let w=Vt[y+2],S=w&2?n:w&4?w&1?s:n:0;S&&(ce[u]=ce[Vt[y]]=S),l=y;break}}else{if(Vt.length==189)break;Vt[l++]=u,Vt[l++]=d,Vt[l++]=a}else if((v=ce[u])==2||v==1){let y=v==n;a=y?0:1;for(let w=l-3;w>=0;w-=3){let S=Vt[w+2];if(S&2)break;if(y)Vt[w+2]|=2;else{if(S&4)break;Vt[w+2]|=4}}}}}function Jp(i,e,t,r){for(let n=0,s=r;n<=t.length;n++){let o=n?t[n-1].to:i,l=n<t.length?t[n].from:e;for(let a=o;a<l;){let h=ce[a];if(h==256){let c=a+1;for(;;)if(c==l){if(n==t.length)break;c=t[n++].to,l=n<t.length?t[n].from:e}else if(ce[c]==256)c++;else break;let u=s==1,d=(c<e?ce[c]:r)==1,p=u==d?u?1:2:r;for(let v=c,y=n,w=y?t[y-1].to:i;v>a;)v==w&&(v=t[--y].from,w=y?t[y-1].to:i),ce[--v]=p;a=c}else s=h,a++}}}function Fl(i,e,t,r,n,s,o){let l=r%2?2:1;if(r%2==n%2)for(let a=e,h=0;a<t;){let c=!0,u=!1;if(h==s.length||a<s[h].from){let y=ce[a];y!=l&&(c=!1,u=y==16)}let d=!c&&l==1?[]:null,p=c?r:r+1,v=a;e:for(;;)if(h<s.length&&v==s[h].from){if(u)break e;let y=s[h];if(!c)for(let w=y.to,S=h+1;;){if(w==t)break e;if(S<s.length&&s[S].from==w)w=s[S++].to;else{if(ce[w]==l)break e;break}}if(h++,d)d.push(y);else{y.from>a&&o.push(new wt(a,y.from,p));let w=y.direction==Vr!=!(p%2);Hl(i,w?r+1:r,n,y.inner,y.from,y.to,o),a=y.to}v=y.to}else{if(v==t||(c?ce[v]!=l:ce[v]==l))break;v++}d?Fl(i,a,v,r+1,n,d,o):a<v&&o.push(new wt(a,v,p)),a=v}else for(let a=t,h=s.length;a>e;){let c=!0,u=!1;if(!h||a>s[h-1].to){let y=ce[a-1];y!=l&&(c=!1,u=y==16)}let d=!c&&l==1?[]:null,p=c?r:r+1,v=a;e:for(;;)if(h&&v==s[h-1].to){if(u)break e;let y=s[--h];if(!c)for(let w=y.from,S=h;;){if(w==e)break e;if(S&&s[S-1].to==w)w=s[--S].from;else{if(ce[w-1]==l)break e;break}}if(d)d.push(y);else{y.to<a&&o.push(new wt(y.to,a,p));let 
w=y.direction==Vr!=!(p%2);Hl(i,w?r+1:r,n,y.inner,y.from,y.to,o),a=y.from}v=y.from}else{if(v==e||(c?ce[v-1]!=l:ce[v-1]==l))break;v--}d?Fl(i,v,a,r+1,n,d,o):v<a&&o.push(new wt(v,a,p)),a=v}}function Hl(i,e,t,r,n,s,o){let l=e%2?2:1;Xp(i,n,s,r,l),_p(i,n,s,r,l),Jp(n,s,r,l),Fl(i,n,s,e,t,r,o)}function Zp(i,e,t){if(!i)return[new wt(0,0,e==Ta?1:0)];if(e==Vr&&!t.length&&!Yp.test(i))return eu(i.length);if(t.length)for(;i.length>ce.length;)ce[ce.length]=256;let r=[],n=e==Vr?0:1;return Hl(i,n,n,t,0,i.length,r),r}function eu(i){return[new wt(0,i,0)]}var tu=\"\";function Qp(i,e,t,r,n){var s;let o=r.head-i.from,l=wt.find(e,o,(s=r.bidiLevel)!==null&&s!==void 0?s:-1,r.assoc),a=e[l],h=a.side(n,t);if(o==h){let d=l+=n?1:-1;if(d<0||d>=e.length)return null;a=e[l=d],o=a.side(!n,t),h=a.side(n,t)}let c=Ie(i.text,o,a.forward(n,t));(c<a.from||c>a.to)&&(c=h),tu=i.text.slice(Math.min(o,c),Math.max(o,c));let u=l==(n?e.length-1:0)?null:e[l+(n?1:-1)];return u&&c==h&&u.level+(n?0:1)<a.level?R.cursor(u.side(!n,t)+i.from,u.forward(n,t)?1:-1,u.level):R.cursor(c+i.from,a.forward(n,t)?-1:1,a.level)}function e1(i,e,t){for(let r=e;r<t;r++){let n=Zc(i.charCodeAt(r));if(n==1)return Vr;if(n==2||n==4)return Ta}return Vr}var ru=H.define(),iu=H.define(),nu=H.define(),su=H.define(),ql=H.define(),ou=H.define(),lu=H.define(),Da=H.define(),Ba=H.define(),au=H.define({combine:i=>i.some(e=>e)}),hu=H.define({combine:i=>i.some(e=>e)}),cu=H.define(),ji=class i{constructor(e,t=\"nearest\",r=\"nearest\",n=5,s=5,o=!1){this.range=e,this.y=t,this.x=r,this.yMargin=n,this.xMargin=s,this.isSnapshot=o}map(e){return e.empty?this:new i(this.range.map(e),this.y,this.x,this.yMargin,this.xMargin,this.isSnapshot)}clip(e){return this.range.to<=e.doc.length?this:new i(R.cursor(e.doc.length),this.y,this.x,this.yMargin,this.xMargin,this.isSnapshot)}},es=te.define({map:(i,e)=>i.map(e)}),uu=te.define();function Fe(i,e,t){let r=i.facet(su);r.length?r[0](e):window.onerror&&window.onerror(String(e),t,void 0,void 
0,e)||(t?console.error(t+\":\",e):console.error(e))}var or=H.define({combine:i=>i.length?i[0]:!0}),t1=0,ui=H.define({combine(i){return i.filter((e,t)=>{for(let r=0;r<t;r++)if(i[r].plugin==e.plugin)return!1;return!0})}}),Se=class i{constructor(e,t,r,n,s){this.id=e,this.create=t,this.domEventHandlers=r,this.domEventObservers=n,this.baseExtensions=s(this),this.extension=this.baseExtensions.concat(ui.of({plugin:this,arg:void 0}))}of(e){return this.baseExtensions.concat(ui.of({plugin:this,arg:e}))}static define(e,t){let{eventHandlers:r,eventObservers:n,provide:s,decorations:o}=t||{};return new i(t1++,e,r,n,l=>{let a=[];return o&&a.push(Is.of(h=>{let c=h.plugin(l);return c?o(c):X.none})),s&&a.push(s(l)),a})}static fromClass(e,t){return i.define((r,n)=>new e(r,n),t)}},Yi=class{constructor(e){this.spec=e,this.mustUpdate=null,this.value=null}get plugin(){return this.spec&&this.spec.plugin}update(e){if(this.value){if(this.mustUpdate){let t=this.mustUpdate;if(this.mustUpdate=null,this.value.update)try{this.value.update(t)}catch(r){if(Fe(t.state,r,\"CodeMirror plugin crashed\"),this.value.destroy)try{this.value.destroy()}catch{}this.deactivate()}}}else if(this.spec)try{this.value=this.spec.plugin.create(e,this.spec.arg)}catch(t){Fe(e.state,t,\"CodeMirror plugin crashed\"),this.deactivate()}return this}destroy(e){var t;if(!((t=this.value)===null||t===void 0)&&t.destroy)try{this.value.destroy()}catch(r){Fe(e.state,r,\"CodeMirror plugin crashed\")}}deactivate(){this.spec=this.value=null}},fu=H.define(),Ea=H.define(),Is=H.define(),du=H.define(),Oa=H.define(),ln=H.define(),mu=H.define();function dc(i,e){let t=i.state.facet(mu);if(!t.length)return t;let r=t.map(s=>s instanceof Function?s(i):s),n=[];return le.spans(r,e.from,e.to,{point(){},span(s,o,l,a){let h=s-e.from,c=o-e.from,u=n;for(let d=l.length-1;d>=0;d--,a--){let p=l[d].spec.bidiIsolate,v;if(p==null&&(p=e1(e.text,h,c)),a>0&&u.length&&(v=u[u.length-1]).to==h&&v.direction==p)v.to=c,u=v.inner;else{let 
y={from:h,to:c,direction:p,inner:[]};u.push(y),u=y.inner}}}}),n}var pu=H.define();function za(i){let e=0,t=0,r=0,n=0;for(let s of i.state.facet(pu)){let o=s(i);o&&(o.left!=null&&(e=Math.max(e,o.left)),o.right!=null&&(t=Math.max(t,o.right)),o.top!=null&&(r=Math.max(r,o.top)),o.bottom!=null&&(n=Math.max(n,o.bottom)))}return{left:e,right:t,top:r,bottom:n}}var Wi=H.define(),zt=class i{constructor(e,t,r,n){this.fromA=e,this.toA=t,this.fromB=r,this.toB=n}join(e){return new i(Math.min(this.fromA,e.fromA),Math.max(this.toA,e.toA),Math.min(this.fromB,e.fromB),Math.max(this.toB,e.toB))}addToSet(e){let t=e.length,r=this;for(;t>0;t--){let n=e[t-1];if(!(n.fromA>r.toA)){if(n.toA<r.fromA)break;r=r.join(n),e.splice(t-1,1)}}return e.splice(t,0,r),e}static extendWithRanges(e,t){if(t.length==0)return e;let r=[];for(let n=0,s=0,o=0;;){let l=n<e.length?e[n].fromB:1e9,a=s<t.length?t[s]:1e9,h=Math.min(l,a);if(h==1e9)break;let c=h+o,u=h,d=c;for(;;)if(s<t.length&&t[s]<=u){let p=t[s+1];s+=2,u=Math.max(u,p);for(let v=n;v<e.length&&e[v].fromB<=u;v++)o=e[v].toA-e[v].toB;d=Math.max(d,p+o)}else if(n<e.length&&e[n].fromB<=u){let p=e[n++];u=Math.max(u,p.toB),d=Math.max(d,p.toA),o=p.toA-p.toB}else break;r.push(new i(c,d,h,u))}return r}},ys=class i{constructor(e,t,r){this.view=e,this.state=t,this.transactions=r,this.flags=0,this.startState=e.state,this.changes=Ye.empty(this.startState.doc.length);for(let s of r)this.changes=this.changes.compose(s.changes);let n=[];this.changes.iterChangedRanges((s,o,l,a)=>n.push(new zt(s,o,l,a))),this.changedRanges=n}static create(e,t,r){return new i(e,t,r)}get viewportChanged(){return(this.flags&4)>0}get viewportMoved(){return(this.flags&8)>0}get heightChanged(){return(this.flags&2)>0}get geometryChanged(){return this.docChanged||(this.flags&18)>0}get focusChanged(){return(this.flags&1)>0}get docChanged(){return!this.changes.empty}get selectionSet(){return this.transactions.some(e=>e.selection)}get empty(){return 
this.flags==0&&this.transactions.length==0}},r1=[],ke=class{constructor(e,t,r=0){this.dom=e,this.length=t,this.flags=r,this.parent=null,e.cmTile=this}get breakAfter(){return this.flags&1}get children(){return r1}isWidget(){return!1}get isHidden(){return!1}isComposite(){return!1}isLine(){return!1}isText(){return!1}isBlock(){return!1}get domAttrs(){return null}sync(e){if(this.flags|=2,this.flags&4){this.flags&=-5;let t=this.domAttrs;t&&Fp(this.dom,t)}}toString(){return this.constructor.name+(this.children.length?`(${this.children})`:\"\")+(this.breakAfter?\"#\":\"\")}destroy(){this.parent=null}setDOM(e){this.dom=e,e.cmTile=this}get posAtStart(){return this.parent?this.parent.posBefore(this):0}get posAtEnd(){return this.posAtStart+this.length}posBefore(e,t=this.posAtStart){let r=t;for(let n of this.children){if(n==e)return r;r+=n.length+n.breakAfter}throw new RangeError(\"Invalid child in posBefore\")}posAfter(e){return this.posBefore(e)+e.length}covers(e){return!0}coordsIn(e,t){return null}domPosFor(e,t){let r=xr(this.dom),n=this.length?e>0:t>0;return new $t(this.parent.dom,r+(n?1:0),e==0||e==this.length)}markDirty(e){this.flags&=-3,e&&(this.flags|=4),this.parent&&this.parent.flags&2&&this.parent.markDirty(!1)}get overrideDOMText(){return null}get root(){for(let e=this;e;e=e.parent)if(e instanceof bi)return e;return null}static get(e){return e.cmTile}},vi=class extends ke{constructor(e){super(e,0),this._children=[]}isComposite(){return!0}get children(){return this._children}get lastChild(){return this.children.length?this.children[this.children.length-1]:null}append(e){this.children.push(e),e.parent=this}sync(e){if(this.flags&2)return;super.sync(e);let t=this.dom,r=null,n,s=e?.node==t?e:null,o=0;for(let l of this.children){if(l.sync(e),o+=l.length+l.breakAfter,n=r?r.nextSibling:t.firstChild,s&&n!=l.dom&&(s.written=!0),l.dom.parentNode==t)for(;n&&n!=l.dom;)n=mc(n);else 
t.insertBefore(l.dom,n);r=l.dom}for(n=r?r.nextSibling:t.firstChild,s&&n&&(s.written=!0);n;)n=mc(n);this.length=o}};function mc(i){let e=i.nextSibling;return i.parentNode.removeChild(i),e}var bi=class extends vi{constructor(e,t){super(t),this.view=e}owns(e){for(;e;e=e.parent)if(e==this)return!0;return!1}isBlock(){return!0}nearest(e){for(;;){if(!e)return null;let t=ke.get(e);if(t&&this.owns(t))return t;e=e.parentNode}}blockTiles(e){for(let t=[],r=this,n=0,s=0;;)if(n==r.children.length){if(!t.length)return;r=r.parent,r.breakAfter&&s++,n=t.pop()}else{let o=r.children[n++];if(o instanceof lr)t.push(n),r=o,n=0;else{let l=s+o.length,a=e(o,s);if(a!==void 0)return a;s=l+o.breakAfter}}}resolveBlock(e,t){let r,n=-1,s,o=-1;if(this.blockTiles((l,a)=>{let h=a+l.length;if(e>=a&&e<=h){if(l.isWidget()&&t>=-1&&t<=1){if(l.flags&32)return!0;l.flags&16&&(r=void 0)}(a<e||e==h&&(t<-1?l.length:l.covers(1)))&&(!r||!l.isWidget()&&r.isWidget())&&(r=l,n=e-a),(h>e||e==a&&(t>1?l.length:l.covers(-1)))&&(!s||!l.isWidget()&&s.isWidget())&&(s=l,o=e-a)}}),!r&&!s)throw new Error(\"No tile at position \"+e);return r&&t<0||!s?{tile:r,offset:n}:{tile:s,offset:o}}},lr=class i extends vi{constructor(e,t){super(e),this.wrapper=t}isBlock(){return!0}covers(e){return this.children.length?e<0?this.children[0].covers(-1):this.lastChild.covers(1):!1}get domAttrs(){return this.wrapper.attributes}static of(e,t){let r=new i(t||document.createElement(e.tagName),e);return t||(r.flags|=4),r}},yi=class i extends vi{constructor(e,t){super(e),this.attrs=t}isLine(){return!0}static start(e,t,r){let n=new i(t||document.createElement(\"div\"),e);return(!t||!r)&&(n.flags|=4),n}get domAttrs(){return this.attrs}resolveInline(e,t,r){let n=null,s=-1,o=null,l=-1;function a(c,u){for(let d=0,p=0;d<c.children.length&&p<=u;d++){let v=c.children[d],y=p+v.length;y>=u&&(v.isComposite()?a(v,u-p):(!o||o.isHidden&&(t>0||r&&n1(o,v)))&&(y>u||v.flags&32)?(o=v,l=u-p):(p<u||v.flags&16&&!v.isHidden)&&(n=v,s=u-p)),p=y}}a(this,e);let 
h=(t<0?n:o)||n||o;return h?{tile:h,offset:h==n?s:l}:null}coordsIn(e,t){let r=this.resolveInline(e,t,!0);return r?r.tile.coordsIn(Math.max(0,r.offset),t):i1(this)}domIn(e,t){let r=this.resolveInline(e,t);if(r){let{tile:n,offset:s}=r;if(this.dom.contains(n.dom))return n.isText()?new $t(n.dom,Math.min(n.dom.nodeValue.length,s)):n.domPosFor(s,n.flags&16?1:n.flags&32?-1:t);let o=r.tile.parent,l=!1;for(let a of o.children){if(l)return new $t(a.dom,0);a==r.tile&&(l=!0)}}return new $t(this.dom,0)}};function i1(i){let e=i.dom.lastChild;if(!e)return i.dom.getBoundingClientRect();let t=hs(e);return t[t.length-1]||null}function n1(i,e){let t=i.coordsIn(0,1),r=e.coordsIn(0,1);return t&&r&&r.top<t.bottom}var st=class i extends vi{constructor(e,t){super(e),this.mark=t}get domAttrs(){return this.mark.attrs}static of(e,t){let r=new i(t||document.createElement(e.tagName),e);return t||(r.flags|=4),r}},qr=class i extends ke{constructor(e,t){super(e,t.length),this.text=t}sync(e){this.flags&2||(super.sync(e),this.dom.nodeValue!=this.text&&(e&&e.node==this.dom&&(e.written=!0),this.dom.nodeValue=this.text))}isText(){return!0}toString(){return JSON.stringify(this.text)}coordsIn(e,t){let r=this.dom.nodeValue.length;e>r&&(e=r);let n=e,s=e,o=0;e==0&&t<0||e==r&&t>=0?W.chrome||W.gecko||(e?(n--,o=1):s<r&&(s++,o=-1)):t<0?n--:s<r&&s++;let l=rn(this.dom,n,s).getClientRects();if(!l.length)return null;let a=l[(o?o<0:t>=0)?0:l.length-1];return W.safari&&!o&&a.width==0&&(a=Array.prototype.find.call(l,h=>h.width)||a),o?bs(a,o<0):a||null}static of(e,t){let r=new i(t||document.createTextNode(e),e);return t||(r.flags|=2),r}},$r=class i extends ke{constructor(e,t,r,n){super(e,t,n),this.widget=r}isWidget(){return!0}get isHidden(){return this.widget.isHidden}covers(e){return this.flags&48?!1:(this.flags&(e<0?64:128))>0}coordsIn(e,t){return this.coordsInWidget(e,t,!1)}coordsInWidget(e,t,r){let n=this.widget.coordsAt(this.dom,e,t);if(n)return n;if(r)return 
bs(this.dom.getBoundingClientRect(),this.length?e==0:t<=0);{let s=this.dom.getClientRects(),o=null;if(!s.length)return null;let l=this.flags&16?!0:this.flags&32?!1:e>0;for(let a=l?s.length-1:0;o=s[a],!(e>0?a==0:a==s.length-1||o.top<o.bottom);a+=l?-1:1);return bs(o,!l)}}get overrideDOMText(){if(!this.length)return se.empty;let{root:e}=this;if(!e)return se.empty;let t=this.posAtStart;return e.view.state.doc.slice(t,t+this.length)}destroy(){super.destroy(),this.widget.destroy(this.dom)}static of(e,t,r,n,s){return s||(s=e.toDOM(t),e.editable||(s.contentEditable=\"false\")),new i(s,r,e,n)}},xi=class extends ke{constructor(e){let t=document.createElement(\"img\");t.className=\"cm-widgetBuffer\",t.setAttribute(\"aria-hidden\",\"true\"),super(t,0,e)}get isHidden(){return!0}get overrideDOMText(){return se.empty}coordsIn(e){return this.dom.getBoundingClientRect()}},Wl=class{constructor(e){this.index=0,this.beforeBreak=!1,this.parents=[],this.tile=e}advance(e,t,r){let{tile:n,index:s,beforeBreak:o,parents:l}=this;for(;e||t>0;)if(n.isComposite())if(o){if(!e)break;r&&r.break(),e--,o=!1}else if(s==n.children.length){if(!e&&!l.length)break;r&&r.leave(n),o=!!n.breakAfter,{tile:n,index:s}=l.pop(),s++}else{let a=n.children[s],h=a.breakAfter;(t>0?a.length<=e:a.length<e)&&(!r||r.skip(a,0,a.length)!==!1||!a.isComposite)?(o=!!h,s++,e-=a.length):(l.push({tile:n,index:s}),n=a,s=0,r&&a.isComposite()&&r.enter(a))}else if(s==n.length)o=!!n.breakAfter,{tile:n,index:s}=l.pop(),s++;else if(e){let a=Math.min(e,n.length-s);r&&r.skip(n,s,s+a),e-=a,s+=a}else break;return this.tile=n,this.index=s,this.beforeBreak=o,this}get root(){return this.parents.length?this.parents[0].tile:this.tile}},Vl=class{constructor(e,t,r,n){this.from=e,this.to=t,this.wrapper=r,this.rank=n}},$l=class{constructor(e,t,r){this.cache=e,this.root=t,this.blockWrappers=r,this.curLine=null,this.lastBlock=null,this.afterWidget=null,this.pos=0,this.wrappers=[],this.wrapperPos=0}addText(e,t,r,n){var s;this.flushBuffer();let 
o=this.ensureMarks(t,r),l=o.lastChild;if(l&&l.isText()&&!(l.flags&8)&&l.length+e.length<512){this.cache.reused.set(l,2);let a=o.children[o.children.length-1]=new qr(l.dom,l.text+e);a.parent=o}else o.append(n||qr.of(e,(s=this.cache.find(qr))===null||s===void 0?void 0:s.dom));this.pos+=e.length,this.afterWidget=null}addComposition(e,t){let r=this.curLine;r.dom!=t.line.dom&&(r.setDOM(this.cache.reused.has(t.line)?Cl(t.line.dom):t.line.dom),this.cache.reused.set(t.line,2));let n=r;for(let l=t.marks.length-1;l>=0;l--){let a=t.marks[l],h=n.lastChild;if(h instanceof st&&h.mark.eq(a.mark))h.dom!=a.dom&&h.setDOM(Cl(a.dom)),n=h;else{if(this.cache.reused.get(a)){let u=ke.get(a.dom);u&&u.setDOM(Cl(a.dom))}let c=st.of(a.mark,a.dom);n.append(c),n=c}this.cache.reused.set(a,2)}let s=ke.get(e.text);s&&this.cache.reused.set(s,2);let o=new qr(e.text,e.text.nodeValue);o.flags|=8,n.append(o)}addInlineWidget(e,t,r){let n=this.afterWidget&&e.flags&48&&(this.afterWidget.flags&48)==(e.flags&48);n||this.flushBuffer();let s=this.ensureMarks(t,r);!n&&!(e.flags&16)&&s.append(this.getBuffer(1)),s.append(e),this.pos+=e.length,this.afterWidget=e}addMark(e,t,r){this.flushBuffer(),this.ensureMarks(t,r).append(e),this.pos+=e.length,this.afterWidget=null}addBlockWidget(e){this.getBlockPos().append(e),this.pos+=e.length,this.lastBlock=e,this.endLine()}continueWidget(e){let t=this.afterWidget||this.lastBlock;t.length+=e,this.pos+=e}addLineStart(e,t){var r;e||(e=gu);let n=yi.start(e,t||((r=this.cache.find(yi))===null||r===void 0?void 0:r.dom),!!t);this.getBlockPos().append(this.lastBlock=this.curLine=n)}addLine(e){this.getBlockPos().append(e),this.pos+=e.length,this.lastBlock=e,this.endLine()}addBreak(){this.lastBlock.flags|=1,this.endLine(),this.pos++}addLineStartIfNotCovered(e){this.blockPosCovered()||this.addLineStart(e)}ensureLine(e){this.curLine||this.addLineStart(e)}ensureMarks(e,t){var r;let n=this.curLine;for(let s=e.length-1;s>=0;s--){let o=e[s],l;if(t>0&&(l=n.lastChild)&&l instanceof 
st&&l.mark.eq(o))n=l,t--;else{let a=st.of(o,(r=this.cache.find(st,h=>h.mark.eq(o)))===null||r===void 0?void 0:r.dom);n.append(a),n=a,t=0}}return n}endLine(){if(this.curLine){this.flushBuffer();let e=this.curLine.lastChild;(!e||!pc(this.curLine,!1)||e.dom.nodeName!=\"BR\"&&e.isWidget()&&!(W.ios&&pc(this.curLine,!0)))&&this.curLine.append(this.cache.findWidget(Al,0,32)||new $r(Al.toDOM(),0,Al,32)),this.curLine=this.afterWidget=null}}updateBlockWrappers(){this.wrapperPos>this.pos+1e4&&(this.blockWrappers.goto(this.pos),this.wrappers.length=0);for(let e=this.wrappers.length-1;e>=0;e--)this.wrappers[e].to<this.pos&&this.wrappers.splice(e,1);for(let e=this.blockWrappers;e.value&&e.from<=this.pos;e.next())if(e.to>=this.pos){let t=new Vl(e.from,e.to,e.value,e.rank),r=this.wrappers.length;for(;r>0&&(this.wrappers[r-1].rank-t.rank||this.wrappers[r-1].to-t.to)<0;)r--;this.wrappers.splice(r,0,t)}this.wrapperPos=this.pos}getBlockPos(){var e;this.updateBlockWrappers();let t=this.root;for(let r of this.wrappers){let n=t.lastChild;if(r.from<this.pos&&n instanceof lr&&n.wrapper.eq(r.wrapper))t=n;else{let s=lr.of(r.wrapper,(e=this.cache.find(lr,o=>o.wrapper.eq(r.wrapper)))===null||e===void 0?void 0:e.dom);t.append(s),t=s}}return t}blockPosCovered(){let e=this.lastBlock;return e!=null&&!e.breakAfter&&(!e.isWidget()||(e.flags&160)>0)}getBuffer(e){let t=2|(e<0?16:32),r=this.cache.find(xi,void 0,1);return r&&(r.flags=t),r||new xi(t)}flushBuffer(){this.afterWidget&&!(this.afterWidget.flags&32)&&(this.afterWidget.parent.append(this.getBuffer(-1)),this.afterWidget=null)}},Gl=class{constructor(e){this.skipCount=0,this.text=\"\",this.textOff=0,this.cursor=e.iter()}skip(e){this.textOff+e<=this.text.length?this.textOff+=e:(this.skipCount+=e-(this.text.length-this.textOff),this.text=\"\",this.textOff=0)}next(e){if(this.textOff==this.text.length){let{value:n,lineBreak:s,done:o}=this.cursor.next(this.skipCount);if(this.skipCount=0,o)throw new Error(\"Ran out of text content when drawing inline 
views\");this.text=n;let l=this.textOff=Math.min(e,n.length);return s?null:n.slice(0,l)}let t=Math.min(this.text.length,this.textOff+e),r=this.text.slice(this.textOff,t);return this.textOff=t,r}},xs=[$r,yi,qr,st,xi,lr,bi];for(let i=0;i<xs.length;i++)xs[i].bucket=i;var Ul=class{constructor(e){this.view=e,this.buckets=xs.map(()=>[]),this.index=xs.map(()=>0),this.reused=new Map}add(e){let t=e.constructor.bucket,r=this.buckets[t];r.length<6?r.push(e):r[this.index[t]=(this.index[t]+1)%6]=e}find(e,t,r=2){let n=e.bucket,s=this.buckets[n],o=this.index[n];for(let l=s.length-1;l>=0;l--){let a=(l+o)%s.length,h=s[a];if((!t||t(h))&&!this.reused.has(h))return s.splice(a,1),a<o&&this.index[n]--,this.reused.set(h,r),h}return null}findWidget(e,t,r){let n=this.buckets[0];if(n.length)for(let s=0,o=0;;s++){if(s==n.length){if(o)return null;o=1,s=0}let l=n[s];if(!this.reused.has(l)&&(o==0?l.widget.compare(e):l.widget.constructor==e.constructor&&e.updateDOM(l.dom,this.view)))return n.splice(s,1),s<this.index[0]&&this.index[0]--,l.widget==e&&l.length==t&&(l.flags&497)==r?(this.reused.set(l,1),l):(this.reused.set(l,2),new $r(l.dom,t,e,l.flags&-498|r))}}reuse(e){return this.reused.set(e,1),e}maybeReuse(e,t=2){if(!this.reused.has(e))return this.reused.set(e,t),e.dom}clear(){for(let e=0;e<this.buckets.length;e++)this.buckets[e].length=this.index[e]=0}},Kl=class{constructor(e,t,r,n,s){this.view=e,this.decorations=n,this.disallowBlockEffectsFor=s,this.openWidget=!1,this.openMarks=0,this.cache=new Ul(e),this.text=new Gl(e.state.doc),this.builder=new $l(this.cache,new bi(e,e.contentDOM),le.iter(r)),this.cache.reused.set(t,2),this.old=new Wl(t),this.reuseWalker={skip:(o,l,a)=>{if(this.cache.add(o),o.isComposite())return!1},enter:o=>this.cache.add(o),leave:()=>{},break:()=>{}}}run(e,t){let r=t&&this.getCompositionContext(t.text);for(let n=0,s=0,o=0;;){let l=o<e.length?e[o++]:null,a=l?l.fromA:this.old.root.length;if(a>n){let 
h=a-n;this.preserve(h,!o,!l),n=a,s+=h}if(!l)break;t&&l.fromA<=t.range.fromA&&l.toA>=t.range.toA?(this.forward(l.fromA,t.range.fromA,t.range.fromA<t.range.toA?1:-1),this.emit(s,t.range.fromB),this.cache.clear(),this.builder.addComposition(t,r),this.text.skip(t.range.toB-t.range.fromB),this.forward(t.range.fromA,l.toA),this.emit(t.range.toB,l.toB)):(this.forward(l.fromA,l.toA),this.emit(s,l.toB)),s=l.toB,n=l.toA}return this.builder.curLine&&this.builder.endLine(),this.builder.root}preserve(e,t,r){let n=l1(this.old),s=this.openMarks;this.old.advance(e,r?1:-1,{skip:(o,l,a)=>{if(o.isWidget())if(this.openWidget)this.builder.continueWidget(a-l);else{let h=a>0||l<o.length?$r.of(o.widget,this.view,a-l,o.flags&496,this.cache.maybeReuse(o)):this.cache.reuse(o);h.flags&256?(h.flags&=-2,this.builder.addBlockWidget(h)):(this.builder.ensureLine(null),this.builder.addInlineWidget(h,n,s),s=n.length)}else if(o.isText())this.builder.ensureLine(null),!l&&a==o.length?this.builder.addText(o.text,n,s,this.cache.reuse(o)):(this.cache.add(o),this.builder.addText(o.text.slice(l,a),n,s)),s=n.length;else if(o.isLine())o.flags&=-2,this.cache.reused.set(o,1),this.builder.addLine(o);else if(o instanceof xi)this.cache.add(o);else if(o instanceof st)this.builder.ensureLine(null),this.builder.addMark(o,n,s),this.cache.reused.set(o,1),s=n.length;else return!1;this.openWidget=!1},enter:o=>{o.isLine()?this.builder.addLineStart(o.attrs,this.cache.maybeReuse(o)):(this.cache.add(o),o instanceof st&&n.unshift(o.mark)),this.openWidget=!1},leave:o=>{o.isLine()?n.length&&(n.length=s=0):o instanceof st&&(n.shift(),s=Math.min(s,n.length))},break:()=>{this.builder.addBreak(),this.openWidget=!1}}),this.text.skip(e)}emit(e,t){let r=null,n=this.builder,s=0,o=le.spans(this.decorations,e,t,{point:(l,a,h,c,u,d)=>{if(h instanceof Wr){if(this.disallowBlockEffectsFor[d]){if(h.block)throw new RangeError(\"Block decorations may not be specified via plugins\");if(a>this.view.state.doc.lineAt(l).to)throw new 
RangeError(\"Decorations that replace line breaks may not be specified via plugins\")}if(s=c.length,u>c.length)n.continueWidget(a-l);else{let p=h.widget||(h.block?wr.block:wr.inline),v=s1(h),y=this.cache.findWidget(p,a-l,v)||$r.of(p,this.view,a-l,v);h.block?(h.startSide>0&&n.addLineStartIfNotCovered(r),n.addBlockWidget(y)):(n.ensureLine(r),n.addInlineWidget(y,c,u))}r=null}else r=o1(r,h);a>l&&this.text.skip(a-l)},span:(l,a,h,c)=>{for(let u=l;u<a;){let d=this.text.next(Math.min(512,a-u));d==null?(n.addLineStartIfNotCovered(r),n.addBreak(),u++):(n.ensureLine(r),n.addText(d,h,u==l?c:h.length),u+=d.length),r=null}}});n.addLineStartIfNotCovered(r),this.openWidget=o>s,this.openMarks=o}forward(e,t,r=1){t-e<=10?this.old.advance(t-e,r,this.reuseWalker):(this.old.advance(5,-1,this.reuseWalker),this.old.advance(t-e-10,-1),this.old.advance(5,r,this.reuseWalker))}getCompositionContext(e){let t=[],r=null;for(let n=e.parentNode;;n=n.parentNode){let s=ke.get(n);if(n==this.view.contentDOM)break;s instanceof st?t.push(s):s?.isLine()?r=s:s instanceof lr||(n.nodeName==\"DIV\"&&!r&&n!=this.view.contentDOM?r=new yi(n,gu):r||t.push(st.of(new Qi({tagName:n.nodeName.toLowerCase(),attributes:Hp(n)}),n)))}return{line:r,marks:t}}};function pc(i,e){let t=r=>{for(let n of r.children)if((e?n.isText():n.length)||t(n))return!0;return!1};return t(i)}function s1(i){let e=i.isReplace?(i.startSide<0?64:0)|(i.endSide>0?128:0):i.startSide>0?32:16;return i.block&&(e|=256),e}var gu={class:\"cm-line\"};function o1(i,e){let t=e.spec.attributes,r=e.spec.class;return!t&&!r||(i||(i={class:\"cm-line\"}),t&&Aa(t,i),r&&(i.class+=\" \"+r)),i}function l1(i){let e=[];for(let t=i.parents.length;t>1;t--){let r=t==i.parents.length?i.tile:i.parents[t].tile;r instanceof st&&e.push(r.mark)}return e}function Cl(i){let e=ke.get(i);return e&&e.setDOM(i.cloneNode()),i}var wr=class extends kt{constructor(e){super(),this.tag=e}eq(e){return e.tag==this.tag}toDOM(){return document.createElement(this.tag)}updateDOM(e){return 
e.nodeName.toLowerCase()==this.tag}get isHidden(){return!0}};wr.inline=new wr(\"span\");wr.block=new wr(\"div\");var Al=new class extends kt{toDOM(){return document.createElement(\"br\")}get isHidden(){return!0}get editable(){return!0}},ws=class{constructor(e){this.view=e,this.decorations=[],this.blockWrappers=[],this.dynamicDecorationMap=[!1],this.domChanged=null,this.hasComposition=null,this.editContextFormatting=X.none,this.lastCompositionAfterCursor=!1,this.minWidth=0,this.minWidthFrom=0,this.minWidthTo=0,this.impreciseAnchor=null,this.impreciseHead=null,this.forceSelection=!1,this.lastUpdate=Date.now(),this.updateDeco(),this.tile=new bi(e,e.contentDOM),this.updateInner([new zt(0,0,0,e.state.doc.length)],null)}update(e){var t;let r=e.changedRanges;this.minWidth>0&&r.length&&(r.every(({fromA:c,toA:u})=>u<this.minWidthFrom||c>this.minWidthTo)?(this.minWidthFrom=e.changes.mapPos(this.minWidthFrom,1),this.minWidthTo=e.changes.mapPos(this.minWidthTo,1)):this.minWidth=this.minWidthFrom=this.minWidthTo=0),this.updateEditContextFormatting(e);let n=-1;this.view.inputState.composing>=0&&!this.view.observer.editContext&&(!((t=this.domChanged)===null||t===void 0)&&t.newSel?n=this.domChanged.newSel.head:!p1(e.changes,this.hasComposition)&&!e.selectionSet&&(n=e.state.selection.main.head));let s=n>-1?h1(this.view,e.changes,n):null;if(this.domChanged=null,this.hasComposition){let{from:c,to:u}=this.hasComposition;r=new zt(c,u,e.changes.mapPos(c,-1),e.changes.mapPos(u,1)).addToSet(r.slice())}this.hasComposition=s?{from:s.range.fromB,to:s.range.toB}:null,(W.ie||W.chrome)&&!s&&e&&e.state.doc.lines!=e.startState.doc.lines&&(this.forceSelection=!0);let o=this.decorations,l=this.blockWrappers;this.updateDeco();let a=f1(o,this.decorations,e.changes);a.length&&(r=zt.extendWithRanges(r,a));let h=d1(l,this.blockWrappers,e.changes);return 
h.length&&(r=zt.extendWithRanges(r,h)),s&&!r.some(c=>c.fromA<=s.range.fromA&&c.toA>=s.range.toA)&&(r=s.range.addToSet(r.slice())),this.tile.flags&2&&r.length==0?!1:(this.updateInner(r,s),e.transactions.length&&(this.lastUpdate=Date.now()),!0)}updateInner(e,t){this.view.viewState.mustMeasureContent=!0;let{observer:r}=this.view;r.ignore(()=>{if(t||e.length){let o=this.tile,l=new Kl(this.view,o,this.blockWrappers,this.decorations,this.dynamicDecorationMap);this.tile=l.run(e,t),jl(o,l.cache.reused)}this.tile.dom.style.height=this.view.viewState.contentHeight/this.view.scaleY+\"px\",this.tile.dom.style.flexBasis=this.minWidth?this.minWidth+\"px\":\"\";let s=W.chrome||W.ios?{node:r.selectionRange.focusNode,written:!1}:void 0;this.tile.sync(s),s&&(s.written||r.selectionRange.focusNode!=s.node||!this.tile.dom.contains(s.node))&&(this.forceSelection=!0),this.tile.dom.style.height=\"\"});let n=[];if(this.view.viewport.from||this.view.viewport.to<this.view.state.doc.length)for(let s of this.tile.children)s.isWidget()&&s.widget instanceof Xi&&n.push(s.dom);r.updateGaps(n)}updateEditContextFormatting(e){this.editContextFormatting=this.editContextFormatting.map(e.changes);for(let t of e.transactions)for(let r of t.effects)r.is(uu)&&(this.editContextFormatting=r.value)}updateSelection(e=!1,t=!1){(e||!this.view.observer.selectionRange.focusNode)&&this.view.observer.readSelectionRange();let{dom:r}=this.tile,n=this.view.root.activeElement,s=n==r,o=!s&&!(this.view.state.facet(or)||r.tabIndex>-1)&&Ui(r,this.view.observer.selectionRange)&&!(n&&r.contains(n));if(!(s||t||o))return;let l=this.forceSelection;this.forceSelection=!1;let a=this.view.state.selection.main,h,c;if(a.empty?c=h=this.inlineDOMNearPos(a.anchor,a.assoc||1):(c=this.inlineDOMNearPos(a.head,a.head==a.from?1:-1),h=this.inlineDOMNearPos(a.anchor,a.anchor==a.from?1:-1)),W.gecko&&a.empty&&!this.hasComposition&&a1(h)){let 
d=document.createTextNode(\"\");this.view.observer.ignore(()=>h.node.insertBefore(d,h.node.childNodes[h.offset]||null)),h=c=new $t(d,0),l=!0}let u=this.view.observer.selectionRange;(l||!u.focusNode||(!Ki(h.node,h.offset,u.anchorNode,u.anchorOffset)||!Ki(c.node,c.offset,u.focusNode,u.focusOffset))&&!this.suppressWidgetCursorChange(u,a))&&(this.view.observer.ignore(()=>{W.android&&W.chrome&&r.contains(u.focusNode)&&m1(u.focusNode,r)&&(r.blur(),r.focus({preventScroll:!0}));let d=tn(this.view.root);if(d)if(a.empty){if(W.gecko){let p=c1(h.node,h.offset);if(p&&p!=3){let v=(p==1?Xc:_c)(h.node,h.offset);v&&(h=new $t(v.node,v.offset))}}d.collapse(h.node,h.offset),a.bidiLevel!=null&&d.caretBidiLevel!==void 0&&(d.caretBidiLevel=a.bidiLevel)}else if(d.extend){d.collapse(h.node,h.offset);try{d.extend(c.node,c.offset)}catch{}}else{let p=document.createRange();a.anchor>a.head&&([h,c]=[c,h]),p.setEnd(c.node,c.offset),p.setStart(h.node,h.offset),d.removeAllRanges(),d.addRange(p)}o&&this.view.root.activeElement==r&&(r.blur(),n&&n.focus())}),this.view.observer.setSelectionRange(h,c)),this.impreciseAnchor=h.precise?null:new $t(u.anchorNode,u.anchorOffset),this.impreciseHead=c.precise?null:new $t(u.focusNode,u.focusOffset)}suppressWidgetCursorChange(e,t){return this.hasComposition&&t.empty&&Ki(e.focusNode,e.focusOffset,e.anchorNode,e.anchorOffset)&&this.posFromDOM(e.focusNode,e.focusOffset)==t.head}enforceCursorAssoc(){if(this.hasComposition)return;let{view:e}=this,t=e.state.selection.main,r=tn(e.root),{anchorNode:n,anchorOffset:s}=e.observer.selectionRange;if(!r||!t.empty||!t.assoc||!r.modify)return;let o=this.lineAt(t.head,t.assoc);if(!o)return;let l=o.posAtStart;if(t.head==l||t.head==l+o.length)return;let a=this.coordsAt(t.head,-1),h=this.coordsAt(t.head,1);if(!a||!h||a.bottom>h.top)return;let c=this.domAtPos(t.head+t.assoc,t.assoc);r.collapse(c.node,c.offset),r.modify(\"move\",t.assoc<0?\"forward\":\"backward\",\"lineboundary\"),e.observer.readSelectionRange();let 
u=e.observer.selectionRange;e.docView.posFromDOM(u.anchorNode,u.anchorOffset)!=t.from&&r.collapse(n,s)}posFromDOM(e,t){let r=this.tile.nearest(e);if(!r)return this.tile.dom.compareDocumentPosition(e)&2?0:this.view.state.doc.length;let n=r.posAtStart;if(r.isComposite()){let s;if(e==r.dom)s=r.dom.childNodes[t];else{let o=ar(e)==0?0:t==0?-1:1;for(;;){let l=e.parentNode;if(l==r.dom)break;o==0&&l.firstChild!=l.lastChild&&(e==l.firstChild?o=-1:o=1),e=l}o<0?s=e:s=e.nextSibling}if(s==r.dom.firstChild)return n;for(;s&&!ke.get(s);)s=s.nextSibling;if(!s)return n+r.length;for(let o=0,l=n;;o++){let a=r.children[o];if(a.dom==s)return l;l+=a.length+a.breakAfter}}else return r.isText()?e==r.dom?n+t:n+(t?r.length:0):n}domAtPos(e,t){let{tile:r,offset:n}=this.tile.resolveBlock(e,t);return r.isWidget()?r.domPosFor(e,t):r.domIn(n,t)}inlineDOMNearPos(e,t){let r,n=-1,s=!1,o,l=-1,a=!1;return this.tile.blockTiles((h,c)=>{if(h.isWidget()){if(h.flags&32&&c>=e)return!0;h.flags&16&&(s=!0)}else{let u=c+h.length;if(c<=e&&(r=h,n=e-c,s=u<e),u>=e&&!o&&(o=h,l=e-c,a=c>e),c>e&&o)return!0}}),!r&&!o?this.domAtPos(e,t):(s&&o?r=null:a&&r&&(o=null),r&&t<0||!o?r.domIn(n,t):o.domIn(l,t))}coordsAt(e,t){let{tile:r,offset:n}=this.tile.resolveBlock(e,t);return r.isWidget()?r.widget instanceof Xi?null:r.coordsInWidget(n,t,!0):r.coordsIn(n,t)}lineAt(e,t){let{tile:r}=this.tile.resolveBlock(e,t);return r.isLine()?r:null}coordsForChar(e){let{tile:t,offset:r}=this.tile.resolveBlock(e,1);if(!t.isLine())return null;function n(s,o){if(s.isComposite())for(let l of s.children){if(l.length>=o){let a=n(l,o);if(a)return a}if(o-=l.length,o<0)break}else if(s.isText()&&o<s.length){let l=Ie(s.text,o);if(l==o)return null;let a=rn(s.dom,o,l).getClientRects();for(let h=0;h<a.length;h++){let c=a[h];if(h==a.length-1||c.top<c.bottom&&c.left<c.right)return c}}return null}return n(t,r)}measureVisibleLineHeights(e){let 
t=[],{from:r,to:n}=e,s=this.view.contentDOM.clientWidth,o=s>Math.max(this.view.scrollDOM.clientWidth,this.minWidth)+1,l=-1,a=this.view.textDirection==he.LTR,h=0,c=(u,d,p)=>{for(let v=0;v<u.children.length&&!(d>n);v++){let y=u.children[v],w=d+y.length,S=y.dom.getBoundingClientRect(),{height:A}=S;if(p&&!v&&(h+=S.top-p.top),y instanceof lr)w>r&&c(y,d,S);else if(d>=r&&(h>0&&t.push(-h),t.push(A+h),h=0,o)){let M=y.dom.lastChild,E=M?hs(M):[];if(E.length){let T=E[E.length-1],B=a?T.right-S.left:S.right-T.left;B>l&&(l=B,this.minWidth=s,this.minWidthFrom=d,this.minWidthTo=w)}}p&&v==u.children.length-1&&(h+=p.bottom-S.bottom),d=w+y.breakAfter}};return c(this.tile,0,null),t}textDirectionAt(e){let{tile:t}=this.tile.resolveBlock(e,1);return getComputedStyle(t.dom).direction==\"rtl\"?he.RTL:he.LTR}measureTextSize(){let e=this.tile.blockTiles(o=>{if(o.isLine()&&o.children.length&&o.length<=20){let l=0,a;for(let h of o.children){if(!h.isText()||/[^ -~]/.test(h.text))return;let c=hs(h.dom);if(c.length!=1)return;l+=c[0].width,a=c[0].height}if(l)return{lineHeight:o.dom.getBoundingClientRect().height,charWidth:l/o.length,textHeight:a}}});if(e)return e;let t=document.createElement(\"div\"),r,n,s;return t.className=\"cm-line\",t.style.width=\"99999px\",t.style.position=\"absolute\",t.textContent=\"abc def ghi jkl mno pqr stu\",this.view.observer.ignore(()=>{this.tile.dom.appendChild(t);let o=hs(t.firstChild)[0];r=t.getBoundingClientRect().height,n=o&&o.width?o.width/27:7,s=o&&o.height?o.height:r,t.remove()}),{lineHeight:r,charWidth:n,textHeight:s}}computeBlockGapDeco(){let e=[],t=this.view.viewState;for(let r=0,n=0;;n++){let s=n==t.viewports.length?null:t.viewports[n],o=s?s.from-1:this.view.state.doc.length;if(o>r){let l=(t.lineBlockAt(o).bottom-t.lineBlockAt(r).top)/this.view.scaleY;e.push(X.replace({widget:new Xi(l),block:!0,inclusive:!0,isBlockGap:!0}).range(r,o))}if(!s)break;r=s.to+1}return X.set(e)}updateDeco(){let 
e=1,t=this.view.state.facet(Is).map(s=>(this.dynamicDecorationMap[e++]=typeof s==\"function\")?s(this.view):s),r=!1,n=this.view.state.facet(Oa).map((s,o)=>{let l=typeof s==\"function\";return l&&(r=!0),l?s(this.view):s});for(n.length&&(this.dynamicDecorationMap[e++]=r,t.push(le.join(n))),this.decorations=[this.editContextFormatting,...t,this.computeBlockGapDeco(),this.view.viewState.lineGapDeco];e<this.decorations.length;)this.dynamicDecorationMap[e++]=!1;this.blockWrappers=this.view.state.facet(du).map(s=>typeof s==\"function\"?s(this.view):s)}scrollIntoView(e){if(e.isSnapshot){let h=this.view.viewState.lineBlockAt(e.range.head);this.view.scrollDOM.scrollTop=h.top-e.yMargin,this.view.scrollDOM.scrollLeft=e.xMargin;return}for(let h of this.view.state.facet(cu))try{if(h(this.view,e.range,e))return!0}catch(c){Fe(this.view.state,c,\"scroll handler\")}let{range:t}=e,r=this.coordsAt(t.head,t.empty?t.assoc:t.head>t.anchor?-1:1),n;if(!r)return;!t.empty&&(n=this.coordsAt(t.anchor,t.anchor>t.head?-1:1))&&(r={left:Math.min(r.left,n.left),top:Math.min(r.top,n.top),right:Math.max(r.right,n.right),bottom:Math.max(r.bottom,n.bottom)});let s=za(this.view),o={left:r.left-s.left,top:r.top-s.top,right:r.right+s.right,bottom:r.bottom+s.bottom},{offsetWidth:l,offsetHeight:a}=this.view.scrollDOM;if(Vp(this.view.scrollDOM,o,t.head<t.anchor?-1:1,e.x,e.y,Math.max(Math.min(e.xMargin,l),-l),Math.max(Math.min(e.yMargin,a),-a),this.view.textDirection==he.LTR),window.visualViewport&&window.innerHeight-window.visualViewport.height>1&&(r.top>window.pageYOffset+window.visualViewport.offsetTop+window.visualViewport.height||r.bottom<window.pageYOffset+window.visualViewport.offsetTop)){let h=this.view.docView.lineAt(t.head,1);h&&h.dom.scrollIntoView({block:\"nearest\"})}}lineHasWidget(e){let t=r=>r.isWidget()||r.children.some(t);return t(this.tile.resolveBlock(e,1).tile)}destroy(){jl(this.tile)}};function jl(i,e){let t=e?.get(i);if(t!=1){t==null&&i.destroy();for(let r of i.children)jl(r,e)}}function 
a1(i){return i.node.nodeType==1&&i.node.firstChild&&(i.offset==0||i.node.childNodes[i.offset-1].contentEditable==\"false\")&&(i.offset==i.node.childNodes.length||i.node.childNodes[i.offset].contentEditable==\"false\")}function vu(i,e){let t=i.observer.selectionRange;if(!t.focusNode)return null;let r=Xc(t.focusNode,t.focusOffset),n=_c(t.focusNode,t.focusOffset),s=r||n;if(n&&r&&n.node!=r.node){let l=ke.get(n.node);if(!l||l.isText()&&l.text!=n.node.nodeValue)s=n;else if(i.docView.lastCompositionAfterCursor){let a=ke.get(r.node);!a||a.isText()&&a.text!=r.node.nodeValue||(s=n)}}if(i.docView.lastCompositionAfterCursor=s!=r,!s)return null;let o=e-s.offset;return{from:o,to:o+s.node.nodeValue.length,node:s.node}}function h1(i,e,t){let r=vu(i,t);if(!r)return null;let{node:n,from:s,to:o}=r,l=n.nodeValue;if(/[\\n\\r]/.test(l)||i.state.doc.sliceString(r.from,r.to)!=l)return null;let a=e.invertedDesc;return{range:new zt(a.mapPos(s),a.mapPos(o),s,o),text:n}}function c1(i,e){return i.nodeType!=1?0:(e&&i.childNodes[e-1].contentEditable==\"false\"?1:0)|(e<i.childNodes.length&&i.childNodes[e].contentEditable==\"false\"?2:0)}var u1=class{constructor(){this.changes=[]}compareRange(e,t){pi(e,t,this.changes)}comparePoint(e,t){pi(e,t,this.changes)}boundChange(e){pi(e,e,this.changes)}};function f1(i,e,t){let r=new u1;return le.compare(i,e,t,r),r.changes}var Yl=class{constructor(){this.changes=[]}compareRange(e,t){pi(e,t,this.changes)}comparePoint(){}boundChange(e){pi(e,e,this.changes)}};function d1(i,e,t){let r=new Yl;return le.compare(i,e,t,r),r.changes}function m1(i,e){for(let t=i;t&&t!=e;t=t.assignedSlot||t.parentNode)if(t.nodeType==1&&t.contentEditable==\"false\")return!0;return!1}function p1(i,e){let t=!1;return e&&i.iterChangedRanges((r,n)=>{r<e.to&&n>e.from&&(t=!0)}),t}var Xi=class extends kt{constructor(e){super(),this.height=e}toDOM(){let e=document.createElement(\"div\");return e.className=\"cm-gap\",this.updateDOM(e),e}eq(e){return e.height==this.height}updateDOM(e){return 
e.style.height=this.height+\"px\",!0}get editable(){return!0}get estimatedHeight(){return this.height}ignoreEvent(){return!1}};function g1(i,e,t=1){let r=i.charCategorizer(e),n=i.doc.lineAt(e),s=e-n.from;if(n.length==0)return R.cursor(e);s==0?t=1:s==n.length&&(t=-1);let o=s,l=s;t<0?o=Ie(n.text,s,!1):l=Ie(n.text,s);let a=r(n.text.slice(o,l));for(;o>0;){let h=Ie(n.text,o,!1);if(r(n.text.slice(h,o))!=a)break;o=h}for(;l<n.length;){let h=Ie(n.text,l);if(r(n.text.slice(l,h))!=a)break;l=h}return R.range(o+n.from,l+n.from)}function v1(i,e,t,r,n){let s=Math.round((r-e.left)*i.defaultCharacterWidth);if(i.lineWrapping&&t.height>i.defaultLineHeight*1.5){let l=i.viewState.heightOracle.textHeight,a=Math.floor((n-t.top-(i.defaultLineHeight-l)*.5)/l);s+=a*i.viewState.heightOracle.lineLength}let o=i.state.sliceDoc(t.from,t.to);return t.from+tc(o,s,i.state.tabSize)}function Xl(i,e,t){let r=i.lineBlockAt(e);if(Array.isArray(r.type)){let n;for(let s of r.type){if(s.from>e)break;if(!(s.to<e)){if(s.from<e&&s.to>e)return s;(!n||s.type==Ve.Text&&(n.type!=s.type||(t<0?s.from<e:s.to>e)))&&(n=s)}}return n||r}return r}function b1(i,e,t,r){let n=Xl(i,e.head,e.assoc||-1),s=!r||n.type!=Ve.Text||!(i.lineWrapping||n.widgetLineBreaks)?null:i.coordsAtPos(e.assoc<0&&e.head>n.from?e.head-1:e.head);if(s){let o=i.dom.getBoundingClientRect(),l=i.textDirectionAt(n.from),a=i.posAtCoords({x:t==(l==he.LTR)?o.right-1:o.left+1,y:(s.top+s.bottom)/2});if(a!=null)return R.cursor(a,t?-1:1)}return R.cursor(t?n.to:n.from,t?-1:1)}function gc(i,e,t,r){let n=i.state.doc.lineAt(e.head),s=i.bidiSpans(n),o=i.textDirectionAt(n.from);for(let l=e,a=null;;){let h=Qp(n,s,o,l,t),c=tu;if(!h){if(n.number==(t?i.state.doc.lines:1))return l;c=`\n`,n=i.state.doc.line(n.number+(t?1:-1)),s=i.bidiSpans(n),h=i.visualLineSide(n,!t)}if(a){if(!a(c))return l}else{if(!r)return h;a=r(c)}l=h}}function y1(i,e,t){let r=i.state.charCategorizer(e),n=r(t);return s=>{let o=r(s);return n==Ee.Space&&(n=o),n==o}}function x1(i,e,t,r){let 
n=e.head,s=t?1:-1;if(n==(t?i.state.doc.length:0))return R.cursor(n,e.assoc);let o=e.goalColumn,l,a=i.contentDOM.getBoundingClientRect(),h=i.coordsAtPos(n,(e.empty?e.assoc:0)||(t?1:-1)),c=i.documentTop;if(h)o==null&&(o=h.left-a.left),l=s<0?h.top:h.bottom;else{let v=i.viewState.lineBlockAt(n);o==null&&(o=Math.min(a.right-a.left,i.defaultCharacterWidth*(n-v.from))),l=(s<0?v.top:v.bottom)+c}let u=a.left+o,d=r??i.viewState.heightOracle.textHeight>>1,p=_l(i,{x:u,y:l+d*s},!1,s);return R.cursor(p.pos,p.assoc,void 0,o)}function _i(i,e,t){for(;;){let r=0;for(let n of i)n.between(e-1,e+1,(s,o,l)=>{if(e>s&&e<o){let a=r||t||(e-s<o-e?-1:1);e=a<0?s:o,r=a}});if(!r)return e}}function bu(i,e){let t=null;for(let r=0;r<e.ranges.length;r++){let n=e.ranges[r],s=null;if(n.empty){let o=_i(i,n.from,0);o!=n.from&&(s=R.cursor(o,-1))}else{let o=_i(i,n.from,-1),l=_i(i,n.to,1);(o!=n.from||l!=n.to)&&(s=R.range(n.from==n.anchor?o:l,n.from==n.head?o:l))}s&&(t||(t=e.ranges.slice()),t[r]=s)}return t?R.create(t,e.mainIndex):e}function Ml(i,e,t){let r=_i(i.state.facet(ln).map(n=>n(i)),t.from,e.head>t.from?-1:1);return r==t.from?t:R.cursor(r,r<t.from?1:-1)}var xt=class{constructor(e,t){this.pos=e,this.assoc=t}};function _l(i,e,t,r){let n=i.contentDOM.getBoundingClientRect(),s=n.top+i.viewState.paddingTop,{x:o,y:l}=e,a=l-s,h;for(;;){if(a<0)return new xt(0,1);if(a>i.viewState.docHeight)return new xt(i.state.doc.length,-1);if(h=i.elementAtHeight(a),r==null)break;if(h.type==Ve.Text){if(r<0?h.to<i.viewport.from:h.from>i.viewport.to)break;let d=i.docView.coordsAt(r<0?h.from:h.to,r>0?-1:1);if(d&&(r<0?d.top<=a+s:d.bottom>=a+s))break}let u=i.viewState.heightOracle.textHeight/2;a=r>0?h.bottom+u:h.top-u}if(i.viewport.from>=h.to||i.viewport.to<=h.from){if(t)return null;if(h.type==Ve.Text){let u=v1(i,n,h,o,l);return new xt(u,u==h.from?1:-1)}}if(h.type!=Ve.Text)return a<(h.top+h.bottom)/2?new xt(h.from,1):new xt(h.to,-1);let 
c=i.docView.lineAt(h.from,2);return(!c||c.length!=h.length)&&(c=i.docView.lineAt(h.from,-2)),new Jl(i,o,l,i.textDirectionAt(h.from)).scanTile(c,h.from)}var Jl=class{constructor(e,t,r,n){this.view=e,this.x=t,this.y=r,this.baseDir=n,this.line=null,this.spans=null}bidiSpansAt(e){return(!this.line||this.line.from>e||this.line.to<e)&&(this.line=this.view.state.doc.lineAt(e),this.spans=this.view.bidiSpans(this.line)),this}baseDirAt(e,t){let{line:r,spans:n}=this.bidiSpansAt(e);return n[wt.find(n,e-r.from,-1,t)].level==this.baseDir}dirAt(e,t){let{line:r,spans:n}=this.bidiSpansAt(e);return n[wt.find(n,e-r.from,-1,t)].dir}bidiIn(e,t){let{spans:r,line:n}=this.bidiSpansAt(e);return r.length>1||r.length&&(r[0].level!=this.baseDir||r[0].to+n.from<t)}scan(e,t){let r=0,n=e.length-1,s=new Set,o=this.bidiIn(e[0],e[n]),l,a,h=-1,c=1e9,u;e:for(;r<n;){let p=n-r,v=r+n>>1;t:if(s.has(v)){let w=r+Math.floor(Math.random()*p);for(let S=0;S<p;S++){if(!s.has(w)){v=w;break t}w++,w==n&&(w=r)}break e}s.add(v);let y=t(v);if(y)for(let w=0;w<y.length;w++){let S=y[w],A=0;if(S.bottom<this.y)(!l||l.bottom<S.bottom)&&(l=S),A=1;else if(S.top>this.y)(!a||a.top>S.top)&&(a=S),A=-1;else{let M=S.left>this.x?this.x-S.left:S.right<this.x?this.x-S.right:0,E=Math.abs(M);E<c&&(h=v,c=E,u=S),M&&(A=M<0==(this.baseDir==he.LTR)?-1:1)}A==-1&&(!o||this.baseDirAt(e[v],1))?n=v:A==1&&(!o||this.baseDirAt(e[v+1],-1))&&(r=v+1)}}if(!u){let p=l&&(!a||this.y-l.bottom<a.top-this.y)?l:a;return this.y=(p.top+p.bottom)/2,this.scan(e,t)}let d=(o?this.dirAt(e[h],1):this.baseDir)==he.LTR;return{i:h,after:this.x>(u.left+u.right)/2==d}}scanText(e,t){let r=[];for(let s=0;s<e.length;s=Ie(e.text,s))r.push(t+s);r.push(t+e.length);let n=this.scan(r,s=>{let o=r[s]-t,l=r[s+1]-t;return rn(e.dom,o,l).getClientRects()});return n.after?new xt(r[n.i+1],-1):new xt(r[n.i],1)}scanTile(e,t){if(!e.length)return new xt(t,1);if(e.children.length==1){let l=e.children[0];if(l.isText())return this.scanText(l,t);if(l.isComposite())return this.scanTile(l,t)}let 
r=[t];for(let l=0,a=t;l<e.children.length;l++)r.push(a+=e.children[l].length);let n=this.scan(r,l=>{let a=e.children[l];return a.flags&48?null:(a.dom.nodeType==1?a.dom:rn(a.dom,0,a.length)).getClientRects()}),s=e.children[n.i],o=r[n.i];return s.isText()?this.scanText(s,o):s.isComposite()?this.scanTile(s,o):n.after?new xt(r[n.i+1],-1):new xt(o,1)}},Vi=\"\\uFFFF\",Zl=class{constructor(e,t){this.points=e,this.view=t,this.text=\"\",this.lineSeparator=t.state.facet(fe.lineSeparator)}append(e){this.text+=e}lineBreak(){this.text+=Vi}readRange(e,t){if(!e)return this;let r=e.parentNode;for(let n=e;;){this.findPointBefore(r,n);let s=this.text.length;this.readNode(n);let o=ke.get(n),l=n.nextSibling;if(l==t){o?.breakAfter&&!l&&r!=this.view.contentDOM&&this.lineBreak();break}let a=ke.get(l);(o&&a?o.breakAfter:(o?o.breakAfter:vs(n))||vs(l)&&(n.nodeName!=\"BR\"||o?.isWidget())&&this.text.length>s)&&!k1(l,t)&&this.lineBreak(),n=l}return this.findPointBefore(r,t),this}readTextNode(e){let t=e.nodeValue;for(let r of this.points)r.node==e&&(r.pos=this.text.length+Math.min(r.offset,t.length));for(let r=0,n=this.lineSeparator?null:/\\r\\n?|\\n/g;;){let s=-1,o=1,l;if(this.lineSeparator?(s=t.indexOf(this.lineSeparator,r),o=this.lineSeparator.length):(l=n.exec(t))&&(s=l.index,o=l[0].length),this.append(t.slice(r,s<0?t.length:s)),s<0)break;if(this.lineBreak(),o>1)for(let a of this.points)a.node==e&&a.pos>this.text.length&&(a.pos-=o-1);r=s+o}}readNode(e){let t=ke.get(e),r=t&&t.overrideDOMText;if(r!=null){this.findPointInside(e,r.length);for(let n=r.iter();!n.next().done;)n.lineBreak?this.lineBreak():this.append(n.value)}else e.nodeType==3?this.readTextNode(e):e.nodeName==\"BR\"?e.nextSibling&&this.lineBreak():e.nodeType==1&&this.readRange(e.firstChild,null)}findPointBefore(e,t){for(let r of this.points)r.node==e&&e.childNodes[r.offset]==t&&(r.pos=this.text.length)}findPointInside(e,t){for(let r of 
this.points)(e.nodeType==3?r.node==e:e.contains(r.node))&&(r.pos=this.text.length+(w1(e,r.node,r.offset)?t:0))}};function w1(i,e,t){for(;;){if(!e||t<ar(e))return!1;if(e==i)return!0;t=xr(e)+1,e=e.parentNode}}function k1(i,e){let t;for(;!(i==e||!i);i=i.nextSibling){let r=ke.get(i);if(!r?.isWidget())return!1;r&&(t||(t=[])).push(r)}if(t)for(let r of t){let n=r.overrideDOMText;if(n?.length)return!1}return!0}var ks=class{constructor(e,t){this.node=e,this.offset=t,this.pos=-1}},Ql=class{constructor(e,t,r,n){this.typeOver=n,this.bounds=null,this.text=\"\",this.domChanged=t>-1;let{impreciseHead:s,impreciseAnchor:o}=e.docView;if(e.state.readOnly&&t>-1)this.newSel=null;else if(t>-1&&(this.bounds=yu(e.docView.tile,t,r,0))){let l=s||o?[]:C1(e),a=new Zl(l,e);a.readRange(this.bounds.startDOM,this.bounds.endDOM),this.text=a.text,this.newSel=A1(l,this.bounds.from)}else{let l=e.observer.selectionRange,a=s&&s.node==l.focusNode&&s.offset==l.focusOffset||!Rl(e.contentDOM,l.focusNode)?e.state.selection.main.head:e.docView.posFromDOM(l.focusNode,l.focusOffset),h=o&&o.node==l.anchorNode&&o.offset==l.anchorOffset||!Rl(e.contentDOM,l.anchorNode)?e.state.selection.main.anchor:e.docView.posFromDOM(l.anchorNode,l.anchorOffset),c=e.viewport;if((W.ios||W.chrome)&&e.state.selection.main.empty&&a!=h&&(c.from>0||c.to<e.state.doc.length)){let u=Math.min(a,h),d=Math.max(a,h),p=c.from-u,v=c.to-d;(p==0||p==1||u==0)&&(v==0||v==-1||d==e.state.doc.length)&&(a=0,h=e.state.doc.length)}e.inputState.composing>-1&&e.state.selection.ranges.length>1?this.newSel=e.state.selection.replaceRange(R.range(h,a)):this.newSel=R.single(h,a)}}};function yu(i,e,t,r){if(i.isComposite()){let n=-1,s=-1,o=-1,l=-1;for(let a=0,h=r,c=r;a<i.children.length;a++){let u=i.children[a],d=h+u.length;if(h<e&&d>t)return 
yu(u,e,t,h);if(d>=e&&n==-1&&(n=a,s=h),h>t&&u.dom.parentNode==i.dom){o=a,l=c;break}c=d,h=d+u.breakAfter}return{from:s,to:l<0?r+i.length:l,startDOM:(n?i.children[n-1].dom.nextSibling:null)||i.dom.firstChild,endDOM:o<i.children.length&&o>=0?i.children[o].dom:null}}else return i.isText()?{from:r,to:r+i.length,startDOM:i.dom,endDOM:i.dom.nextSibling}:null}function xu(i,e){let t,{newSel:r}=e,n=i.state.selection.main,s=i.inputState.lastKeyTime>Date.now()-100?i.inputState.lastKeyCode:-1;if(e.bounds){let{from:o,to:l}=e.bounds,a=n.from,h=null;(s===8||W.android&&e.text.length<l-o)&&(a=n.to,h=\"end\");let c=wu(i.state.doc.sliceString(o,l,Vi),e.text,a-o,h);c&&(W.chrome&&s==13&&c.toB==c.from+2&&e.text.slice(c.from,c.toB)==Vi+Vi&&c.toB--,t={from:o+c.from,to:o+c.toA,insert:se.of(e.text.slice(c.from,c.toB).split(Vi))})}else r&&(!i.hasFocus&&i.state.facet(or)||Ss(r,n))&&(r=null);if(!t&&!r)return!1;if(!t&&e.typeOver&&!n.empty&&r&&r.main.empty?t={from:n.from,to:n.to,insert:i.state.doc.slice(n.from,n.to)}:(W.mac||W.android)&&t&&t.from==t.to&&t.from==n.head-1&&/^\\. 
?$/.test(t.insert.toString())&&i.contentDOM.getAttribute(\"autocorrect\")==\"off\"?(r&&t.insert.length==2&&(r=R.single(r.main.anchor-1,r.main.head-1)),t={from:t.from,to:t.to,insert:se.of([t.insert.toString().replace(\".\",\" \")])}):t&&t.from>=n.from&&t.to<=n.to&&(t.from!=n.from||t.to!=n.to)&&n.to-n.from-(t.to-t.from)<=4?t={from:n.from,to:n.to,insert:i.state.doc.slice(n.from,t.from).append(t.insert).append(i.state.doc.slice(t.to,n.to))}:i.state.doc.lineAt(n.from).to<n.to&&i.docView.lineHasWidget(n.to)&&i.inputState.insertingTextAt>Date.now()-50?t={from:n.from,to:n.to,insert:i.state.toText(i.inputState.insertingText)}:W.chrome&&t&&t.from==t.to&&t.from==n.head&&t.insert.toString()==`\n `&&i.lineWrapping&&(r&&(r=R.single(r.main.anchor-1,r.main.head-1)),t={from:n.from,to:n.to,insert:se.of([\" \"])}),t)return La(i,t,r,s);if(r&&!Ss(r,n)){let o=!1,l=\"select\";return i.inputState.lastSelectionTime>Date.now()-50&&(i.inputState.lastSelectionOrigin==\"select\"&&(o=!0),l=i.inputState.lastSelectionOrigin,l==\"select.pointer\"&&(r=bu(i.state.facet(ln).map(a=>a(i)),r))),i.dispatch({selection:r,scrollIntoView:o,userEvent:l}),!0}else return!1}function La(i,e,t,r=-1){if(W.ios&&i.inputState.flushIOSKey(e))return!0;let n=i.state.selection.main;if(W.android&&(e.to==n.to&&(e.from==n.from||e.from==n.from-1&&i.state.sliceDoc(e.from,n.from)==\" \")&&e.insert.length==1&&e.insert.lines==2&&gi(i.contentDOM,\"Enter\",13)||(e.from==n.from-1&&e.to==n.to&&e.insert.length==0||r==8&&e.insert.length<e.to-e.from&&e.to>n.head)&&gi(i.contentDOM,\"Backspace\",8)||e.from==n.from&&e.to==n.to+1&&e.insert.length==0&&gi(i.contentDOM,\"Delete\",46)))return!0;let s=e.insert.toString();i.inputState.composing>=0&&i.inputState.composing++;let o,l=()=>o||(o=S1(i,e,t));return i.state.facet(ou).some(a=>a(i,e.from,e.to,s,l))||i.dispatch(l()),!0}function S1(i,e,t){let r,n=i.state,s=n.selection.main,o=-1;if(e.from==e.to&&e.from<s.from||e.from>s.to){let 
a=e.from<s.from?-1:1,h=a<0?s.from:s.to,c=_i(n.facet(ln).map(u=>u(i)),h,a);e.from==c&&(o=c)}if(o>-1)r={changes:e,selection:R.cursor(e.from+e.insert.length,-1)};else if(e.from>=s.from&&e.to<=s.to&&e.to-e.from>=(s.to-s.from)/3&&(!t||t.main.empty&&t.main.from==e.from+e.insert.length)&&i.inputState.composing<0){let a=s.from<e.from?n.sliceDoc(s.from,e.from):\"\",h=s.to>e.to?n.sliceDoc(e.to,s.to):\"\";r=n.replaceSelection(i.state.toText(a+e.insert.sliceString(0,void 0,i.state.lineBreak)+h))}else{let a=n.changes(e),h=t&&t.main.to<=a.newLength?t.main:void 0;if(n.selection.ranges.length>1&&(i.inputState.composing>=0||i.inputState.compositionPendingChange)&&e.to<=s.to+10&&e.to>=s.to-10){let c=i.state.sliceDoc(e.from,e.to),u,d=t&&vu(i,t.main.head);if(d){let v=e.insert.length-(e.to-e.from);u={from:d.from,to:d.to-v}}else u=i.state.doc.lineAt(s.head);let p=s.to-e.to;r=n.changeByRange(v=>{if(v.from==s.from&&v.to==s.to)return{changes:a,range:h||v.map(a)};let y=v.to-p,w=y-c.length;if(i.state.sliceDoc(w,y)!=c||y>=u.from&&w<=u.to)return{range:v};let S=n.changes({from:w,to:y,insert:e.insert}),A=v.to-s.to;return{changes:S,range:h?R.range(Math.max(0,h.anchor+A),Math.max(0,h.head+A)):v.map(S)}})}else r={changes:a,selection:h&&n.selection.replaceRange(h)}}let l=\"input.type\";return(i.composing||i.inputState.compositionPendingChange&&i.inputState.compositionEndedAt>Date.now()-50)&&(i.inputState.compositionPendingChange=!1,l+=\".compose\",i.inputState.compositionFirstChange&&(l+=\".start\",i.inputState.compositionFirstChange=!1)),n.update(r,{userEvent:l,scrollIntoView:!0})}function wu(i,e,t,r){let n=Math.min(i.length,e.length),s=0;for(;s<n&&i.charCodeAt(s)==e.charCodeAt(s);)s++;if(s==n&&i.length==e.length)return null;let o=i.length,l=e.length;for(;o>0&&l>0&&i.charCodeAt(o-1)==e.charCodeAt(l-1);)o--,l--;if(r==\"end\"){let a=Math.max(0,s-Math.min(o,l));t-=o+a-s}if(o<s&&i.length<e.length){let a=t<=s&&t>=o?s-t:0;s-=a,l=s+(l-o),o=s}else if(l<s){let 
a=t<=s&&t>=l?s-t:0;s-=a,o=s+(o-l),l=s}return{from:s,toA:o,toB:l}}function C1(i){let e=[];if(i.root.activeElement!=i.contentDOM)return e;let{anchorNode:t,anchorOffset:r,focusNode:n,focusOffset:s}=i.observer.selectionRange;return t&&(e.push(new ks(t,r)),(n!=t||s!=r)&&e.push(new ks(n,s))),e}function A1(i,e){if(i.length==0)return null;let t=i[0].pos,r=i.length==2?i[1].pos:t;return t>-1&&r>-1?R.single(t+e,r+e):null}function Ss(i,e){return e.head==i.main.head&&e.anchor==i.main.anchor}var ea=class{setSelectionOrigin(e){this.lastSelectionOrigin=e,this.lastSelectionTime=Date.now()}constructor(e){this.view=e,this.lastKeyCode=0,this.lastKeyTime=0,this.lastTouchTime=0,this.lastFocusTime=0,this.lastScrollTop=0,this.lastScrollLeft=0,this.pendingIOSKey=void 0,this.tabFocusMode=-1,this.lastSelectionOrigin=null,this.lastSelectionTime=0,this.lastContextMenu=0,this.scrollHandlers=[],this.handlers=Object.create(null),this.composing=-1,this.compositionFirstChange=null,this.compositionEndedAt=0,this.compositionPendingKey=!1,this.compositionPendingChange=!1,this.insertingText=\"\",this.insertingTextAt=0,this.mouseSelection=null,this.draggedContent=null,this.handleEvent=this.handleEvent.bind(this),this.notifiedFocused=e.hasFocus,W.safari&&e.contentDOM.addEventListener(\"input\",()=>null),W.gecko&&H1(e.contentDOM.ownerDocument)}handleEvent(e){!z1(this.view,e)||this.ignoreDuringComposition(e)||e.type==\"keydown\"&&this.keydown(e)||(this.view.updateState!=0?Promise.resolve().then(()=>this.runHandlers(e.type,e)):this.runHandlers(e.type,e))}runHandlers(e,t){let r=this.handlers[e];if(r){for(let n of r.observers)n(this.view,t);for(let n of r.handlers){if(t.defaultPrevented)break;if(n(this.view,t)){t.preventDefault();break}}}}ensureHandlers(e){let t=M1(e),r=this.handlers,n=this.view.contentDOM;for(let s in t)if(s!=\"scroll\"){let o=!t[s].handlers.length,l=r[s];l&&o!=!l.handlers.length&&(n.removeEventListener(s,this.handleEvent),l=null),l||n.addEventListener(s,this.handleEvent,{passive:o})}for(let 
s in r)s!=\"scroll\"&&!t[s]&&n.removeEventListener(s,this.handleEvent);this.handlers=t}keydown(e){if(this.lastKeyCode=e.keyCode,this.lastKeyTime=Date.now(),e.keyCode==9&&this.tabFocusMode>-1&&(!this.tabFocusMode||Date.now()<=this.tabFocusMode))return!0;if(this.tabFocusMode>0&&e.keyCode!=27&&Su.indexOf(e.keyCode)<0&&(this.tabFocusMode=-1),W.android&&W.chrome&&!e.synthetic&&(e.keyCode==13||e.keyCode==8))return this.view.observer.delayAndroidKey(e.key,e.keyCode),!0;let t;return W.ios&&!e.synthetic&&!e.altKey&&!e.metaKey&&((t=ku.find(r=>r.keyCode==e.keyCode))&&!e.ctrlKey||T1.indexOf(e.key)>-1&&e.ctrlKey&&!e.shiftKey)?(this.pendingIOSKey=t||e,setTimeout(()=>this.flushIOSKey(),250),!0):(e.keyCode!=229&&this.view.observer.forceFlush(),!1)}flushIOSKey(e){let t=this.pendingIOSKey;return!t||t.key==\"Enter\"&&e&&e.from<e.to&&/^\\S+$/.test(e.insert.toString())?!1:(this.pendingIOSKey=void 0,gi(this.view.contentDOM,t.key,t.keyCode,t instanceof KeyboardEvent?t:void 0))}ignoreDuringComposition(e){return!/^key/.test(e.type)||e.synthetic?!1:this.composing>0?!0:W.safari&&!W.ios&&this.compositionPendingKey&&Date.now()-this.compositionEndedAt<100?(this.compositionPendingKey=!1,!0):!1}startMouseSelection(e){this.mouseSelection&&this.mouseSelection.destroy(),this.mouseSelection=e}update(e){this.view.observer.update(e),this.mouseSelection&&this.mouseSelection.update(e),this.draggedContent&&e.docChanged&&(this.draggedContent=this.draggedContent.map(e.changes)),e.transactions.length&&(this.lastKeyCode=this.lastSelectionTime=0)}destroy(){this.mouseSelection&&this.mouseSelection.destroy()}};function vc(i,e){return(t,r)=>{try{return e.call(i,r,t)}catch(n){Fe(t.state,n)}}}function M1(i){let e=Object.create(null);function t(r){return e[r]||(e[r]={observers:[],handlers:[]})}for(let r of i){let n=r.spec,s=n&&n.plugin.domEventHandlers,o=n&&n.plugin.domEventObservers;if(s)for(let l in s){let a=s[l];a&&t(l).handlers.push(vc(r.value,a))}if(o)for(let l in o){let 
a=o[l];a&&t(l).observers.push(vc(r.value,a))}}for(let r in Lt)t(r).handlers.push(Lt[r]);for(let r in St)t(r).observers.push(St[r]);return e}var ku=[{key:\"Backspace\",keyCode:8,inputType:\"deleteContentBackward\"},{key:\"Enter\",keyCode:13,inputType:\"insertParagraph\"},{key:\"Enter\",keyCode:13,inputType:\"insertLineBreak\"},{key:\"Delete\",keyCode:46,inputType:\"deleteContentForward\"}],T1=\"dthko\",Su=[16,17,18,20,91,92,224,225],ts=6;function rs(i){return Math.max(0,i)*.7+8}function D1(i,e){return Math.max(Math.abs(i.clientX-e.clientX),Math.abs(i.clientY-e.clientY))}var ta=class{constructor(e,t,r,n){this.view=e,this.startEvent=t,this.style=r,this.mustSelect=n,this.scrollSpeed={x:0,y:0},this.scrolling=-1,this.lastEvent=t,this.scrollParents=$p(e.contentDOM),this.atoms=e.state.facet(ln).map(o=>o(e));let s=e.contentDOM.ownerDocument;s.addEventListener(\"mousemove\",this.move=this.move.bind(this)),s.addEventListener(\"mouseup\",this.up=this.up.bind(this)),this.extend=t.shiftKey,this.multiple=e.state.facet(fe.allowMultipleSelections)&&B1(e,t),this.dragging=O1(e,t)&&Mu(t)==1?null:!1}start(e){this.dragging===!1&&this.select(e)}move(e){if(e.buttons==0)return this.destroy();if(this.dragging||this.dragging==null&&D1(this.startEvent,e)<10)return;this.select(this.lastEvent=e);let t=0,r=0,n=0,s=0,o=this.view.win.innerWidth,l=this.view.win.innerHeight;this.scrollParents.x&&({left:n,right:o}=this.scrollParents.x.getBoundingClientRect()),this.scrollParents.y&&({top:s,bottom:l}=this.scrollParents.y.getBoundingClientRect());let a=za(this.view);e.clientX-a.left<=n+ts?t=-rs(n-e.clientX):e.clientX+a.right>=o-ts&&(t=rs(e.clientX-o)),e.clientY-a.top<=s+ts?r=-rs(s-e.clientY):e.clientY+a.bottom>=l-ts&&(r=rs(e.clientY-l)),this.setScrollSpeed(t,r)}up(e){this.dragging==null&&this.select(this.lastEvent),this.dragging||e.preventDefault(),this.destroy()}destroy(){this.setScrollSpeed(0,0);let 
e=this.view.contentDOM.ownerDocument;e.removeEventListener(\"mousemove\",this.move),e.removeEventListener(\"mouseup\",this.up),this.view.inputState.mouseSelection=this.view.inputState.draggedContent=null}setScrollSpeed(e,t){this.scrollSpeed={x:e,y:t},e||t?this.scrolling<0&&(this.scrolling=setInterval(()=>this.scroll(),50)):this.scrolling>-1&&(clearInterval(this.scrolling),this.scrolling=-1)}scroll(){let{x:e,y:t}=this.scrollSpeed;e&&this.scrollParents.x&&(this.scrollParents.x.scrollLeft+=e,e=0),t&&this.scrollParents.y&&(this.scrollParents.y.scrollTop+=t,t=0),(e||t)&&this.view.win.scrollBy(e,t),this.dragging===!1&&this.select(this.lastEvent)}select(e){let{view:t}=this,r=bu(this.atoms,this.style.get(e,this.extend,this.multiple));(this.mustSelect||!r.eq(t.state.selection,this.dragging===!1))&&this.view.dispatch({selection:r,userEvent:\"select.pointer\"}),this.mustSelect=!1}update(e){e.transactions.some(t=>t.isUserEvent(\"input.type\"))?this.destroy():this.style.update(e)&&setTimeout(()=>this.select(this.lastEvent),20)}};function B1(i,e){let t=i.state.facet(ru);return t.length?t[0](e):W.mac?e.metaKey:e.ctrlKey}function E1(i,e){let t=i.state.facet(iu);return t.length?t[0](e):W.mac?!e.altKey:!e.ctrlKey}function O1(i,e){let{main:t}=i.state.selection;if(t.empty)return!1;let r=tn(i.root);if(!r||r.rangeCount==0)return!0;let n=r.getRangeAt(0).getClientRects();for(let s=0;s<n.length;s++){let o=n[s];if(o.left<=e.clientX&&o.right>=e.clientX&&o.top<=e.clientY&&o.bottom>=e.clientY)return!0}return!1}function z1(i,e){if(!e.bubbles)return!0;if(e.defaultPrevented)return!1;for(let t=e.target,r;t!=i.contentDOM;t=t.parentNode)if(!t||t.nodeType==11||(r=ke.get(t))&&r.isWidget()&&!r.isHidden&&r.widget.ignoreEvent(e))return!1;return!0}var Lt=Object.create(null),St=Object.create(null),Cu=W.ie&&W.ie_version<15||W.ios&&W.webkit_version<604;function L1(i){let e=i.dom.parentNode;if(!e)return;let t=e.appendChild(document.createElement(\"textarea\"));t.style.cssText=\"position: fixed; left: 
-10000px; top: 10px\",t.focus(),setTimeout(()=>{i.focus(),t.remove(),Au(i,t.value)},50)}function Rs(i,e,t){for(let r of i.facet(e))t=r(t,i);return t}function Au(i,e){e=Rs(i.state,Da,e);let{state:t}=i,r,n=1,s=t.toText(e),o=s.lines==t.selection.ranges.length;if(ra!=null&&t.selection.ranges.every(a=>a.empty)&&ra==s.toString()){let a=-1;r=t.changeByRange(h=>{let c=t.doc.lineAt(h.from);if(c.from==a)return{range:h};a=c.from;let u=t.toText((o?s.line(n++).text:e)+t.lineBreak);return{changes:{from:c.from,insert:u},range:R.cursor(h.from+u.length)}})}else o?r=t.changeByRange(a=>{let h=s.line(n++);return{changes:{from:a.from,to:a.to,insert:h.text},range:R.cursor(a.from+h.length)}}):r=t.replaceSelection(s);i.dispatch(r,{userEvent:\"input.paste\",scrollIntoView:!0})}St.scroll=i=>{i.inputState.lastScrollTop=i.scrollDOM.scrollTop,i.inputState.lastScrollLeft=i.scrollDOM.scrollLeft};Lt.keydown=(i,e)=>(i.inputState.setSelectionOrigin(\"select\"),e.keyCode==27&&i.inputState.tabFocusMode!=0&&(i.inputState.tabFocusMode=Date.now()+2e3),!1);St.touchstart=(i,e)=>{i.inputState.lastTouchTime=Date.now(),i.inputState.setSelectionOrigin(\"select.pointer\")};St.touchmove=i=>{i.inputState.setSelectionOrigin(\"select.pointer\")};Lt.mousedown=(i,e)=>{if(i.observer.flush(),i.inputState.lastTouchTime>Date.now()-2e3)return!1;let t=null;for(let r of i.state.facet(nu))if(t=r(i,e),t)break;if(!t&&e.button==0&&(t=R1(i,e)),t){let r=!i.hasFocus;i.inputState.startMouseSelection(new ta(i,e,t,r)),r&&i.observer.ignore(()=>{jc(i.contentDOM);let s=i.root.activeElement;s&&!s.contains(i.contentDOM)&&s.blur()});let n=i.inputState.mouseSelection;if(n)return n.start(e),n.dragging===!1}else i.inputState.setSelectionOrigin(\"select.pointer\");return!1};function bc(i,e,t,r){if(r==1)return R.cursor(e,t);if(r==2)return g1(i.state,e,t);{let n=i.docView.lineAt(e,t),s=i.state.doc.lineAt(n?n.posAtEnd:e),o=n?n.posAtStart:s.from,l=n?n.posAtEnd:s.to;return l<i.state.doc.length&&l==s.to&&l++,R.range(o,l)}}var 
I1=W.ie&&W.ie_version<=11,yc=null,xc=0,wc=0;function Mu(i){if(!I1)return i.detail;let e=yc,t=wc;return yc=i,wc=Date.now(),xc=!e||t>Date.now()-400&&Math.abs(e.clientX-i.clientX)<2&&Math.abs(e.clientY-i.clientY)<2?(xc+1)%3:1}function R1(i,e){let t=i.posAndSideAtCoords({x:e.clientX,y:e.clientY},!1),r=Mu(e),n=i.state.selection;return{update(s){s.docChanged&&(t.pos=s.changes.mapPos(t.pos),n=n.map(s.changes))},get(s,o,l){let a=i.posAndSideAtCoords({x:s.clientX,y:s.clientY},!1),h,c=bc(i,a.pos,a.assoc,r);if(t.pos!=a.pos&&!o){let u=bc(i,t.pos,t.assoc,r),d=Math.min(u.from,c.from),p=Math.max(u.to,c.to);c=d<c.from?R.range(d,p):R.range(p,d)}return o?n.replaceRange(n.main.extend(c.from,c.to)):l&&r==1&&n.ranges.length>1&&(h=P1(n,a.pos))?h:l?n.addRange(c):R.create([c])}}}function P1(i,e){for(let t=0;t<i.ranges.length;t++){let{from:r,to:n}=i.ranges[t];if(r<=e&&n>=e)return R.create(i.ranges.slice(0,t).concat(i.ranges.slice(t+1)),i.mainIndex==t?0:i.mainIndex-(i.mainIndex>t?1:0))}return null}Lt.dragstart=(i,e)=>{let{selection:{main:t}}=i.state;if(e.target.draggable){let n=i.docView.tile.nearest(e.target);if(n&&n.isWidget()){let s=n.posAtStart,o=s+n.length;(s>=t.to||o<=t.from)&&(t=R.range(s,o))}}let{inputState:r}=i;return r.mouseSelection&&(r.mouseSelection.dragging=!0),r.draggedContent=t,e.dataTransfer&&(e.dataTransfer.setData(\"Text\",Rs(i.state,Ba,i.state.sliceDoc(t.from,t.to))),e.dataTransfer.effectAllowed=\"copyMove\"),!1};Lt.dragend=i=>(i.inputState.draggedContent=null,!1);function kc(i,e,t,r){if(t=Rs(i.state,Da,t),!t)return;let n=i.posAtCoords({x:e.clientX,y:e.clientY},!1),{draggedContent:s}=i.inputState,o=r&&s&&E1(i,e)?{from:s.from,to:s.to}:null,l={from:n,insert:t},a=i.state.changes(o?[o,l]:l);i.focus(),i.dispatch({changes:a,selection:{anchor:a.mapPos(n,-1),head:a.mapPos(n,1)},userEvent:o?\"move.drop\":\"input.drop\"}),i.inputState.draggedContent=null}Lt.drop=(i,e)=>{if(!e.dataTransfer)return!1;if(i.state.readOnly)return!0;let t=e.dataTransfer.files;if(t&&t.length){let 
r=Array(t.length),n=0,s=()=>{++n==t.length&&kc(i,e,r.filter(o=>o!=null).join(i.state.lineBreak),!1)};for(let o=0;o<t.length;o++){let l=new FileReader;l.onerror=s,l.onload=()=>{/[\\x00-\\x08\\x0e-\\x1f]{2}/.test(l.result)||(r[o]=l.result),s()},l.readAsText(t[o])}return!0}else{let r=e.dataTransfer.getData(\"Text\");if(r)return kc(i,e,r,!0),!0}return!1};Lt.paste=(i,e)=>{if(i.state.readOnly)return!0;i.observer.flush();let t=Cu?null:e.clipboardData;return t?(Au(i,t.getData(\"text/plain\")||t.getData(\"text/uri-list\")),!0):(L1(i),!1)};function N1(i,e){let t=i.dom.parentNode;if(!t)return;let r=t.appendChild(document.createElement(\"textarea\"));r.style.cssText=\"position: fixed; left: -10000px; top: 10px\",r.value=e,r.focus(),r.selectionEnd=e.length,r.selectionStart=0,setTimeout(()=>{r.remove(),i.focus()},50)}function F1(i){let e=[],t=[],r=!1;for(let n of i.selection.ranges)n.empty||(e.push(i.sliceDoc(n.from,n.to)),t.push(n));if(!e.length){let n=-1;for(let{from:s}of i.selection.ranges){let o=i.doc.lineAt(s);o.number>n&&(e.push(o.text),t.push({from:o.from,to:Math.min(i.doc.length,o.to+1)})),n=o.number}r=!0}return{text:Rs(i,Ba,e.join(i.lineBreak)),ranges:t,linewise:r}}var ra=null;Lt.copy=Lt.cut=(i,e)=>{if(!Ui(i.contentDOM,i.observer.selectionRange))return!1;let{text:t,ranges:r,linewise:n}=F1(i.state);if(!t&&!n)return!1;ra=n?t:null,e.type==\"cut\"&&!i.state.readOnly&&i.dispatch({changes:r,scrollIntoView:!0,userEvent:\"delete.cut\"});let s=Cu?null:e.clipboardData;return s?(s.clearData(),s.setData(\"text/plain\",t),!0):(N1(i,t),!1)};var Tu=rt.define();function Du(i,e){let t=[];for(let r of i.facet(lu)){let n=r(i,e);n&&t.push(n)}return t.length?i.update({effects:t,annotations:Tu.of(!0)}):null}function Bu(i){setTimeout(()=>{let e=i.hasFocus;if(e!=i.inputState.notifiedFocused){let 
t=Du(i.state,e);t?i.dispatch(t):i.update([])}},10)}St.focus=i=>{i.inputState.lastFocusTime=Date.now(),!i.scrollDOM.scrollTop&&(i.inputState.lastScrollTop||i.inputState.lastScrollLeft)&&(i.scrollDOM.scrollTop=i.inputState.lastScrollTop,i.scrollDOM.scrollLeft=i.inputState.lastScrollLeft),Bu(i)};St.blur=i=>{i.observer.clearSelectionRange(),Bu(i)};St.compositionstart=St.compositionupdate=i=>{i.observer.editContext||(i.inputState.compositionFirstChange==null&&(i.inputState.compositionFirstChange=!0),i.inputState.composing<0&&(i.inputState.composing=0))};St.compositionend=i=>{i.observer.editContext||(i.inputState.composing=-1,i.inputState.compositionEndedAt=Date.now(),i.inputState.compositionPendingKey=!0,i.inputState.compositionPendingChange=i.observer.pendingRecords().length>0,i.inputState.compositionFirstChange=null,W.chrome&&W.android?i.observer.flushSoon():i.inputState.compositionPendingChange?Promise.resolve().then(()=>i.observer.flush()):setTimeout(()=>{i.inputState.composing<0&&i.docView.hasComposition&&i.update([])},50))};St.contextmenu=i=>{i.inputState.lastContextMenu=Date.now()};Lt.beforeinput=(i,e)=>{var t,r;if((e.inputType==\"insertText\"||e.inputType==\"insertCompositionText\")&&(i.inputState.insertingText=e.data,i.inputState.insertingTextAt=Date.now()),e.inputType==\"insertReplacementText\"&&i.observer.editContext){let s=(t=e.dataTransfer)===null||t===void 0?void 0:t.getData(\"text/plain\"),o=e.getTargetRanges();if(s&&o.length){let l=o[0],a=i.posAtDOM(l.startContainer,l.startOffset),h=i.posAtDOM(l.endContainer,l.endOffset);return La(i,{from:a,to:h,insert:i.state.toText(s)},null),!0}}let n;if(W.chrome&&W.android&&(n=ku.find(s=>s.inputType==e.inputType))&&(i.observer.delayAndroidKey(n.key,n.keyCode),n.key==\"Backspace\"||n.key==\"Delete\")){let s=((r=window.visualViewport)===null||r===void 0?void 0:r.height)||0;setTimeout(()=>{var o;(((o=window.visualViewport)===null||o===void 0?void 
0:o.height)||0)>s+10&&i.hasFocus&&(i.contentDOM.blur(),i.focus())},100)}return W.ios&&e.inputType==\"deleteContentForward\"&&i.observer.flushSoon(),W.safari&&e.inputType==\"insertText\"&&i.inputState.composing>=0&&setTimeout(()=>St.compositionend(i,e),20),!1};var Sc=new Set;function H1(i){Sc.has(i)||(Sc.add(i),i.addEventListener(\"copy\",()=>{}),i.addEventListener(\"cut\",()=>{}))}var Cc=[\"pre-wrap\",\"normal\",\"pre-line\",\"break-spaces\"],wi=!1;function Ac(){wi=!1}var ia=class{constructor(e){this.lineWrapping=e,this.doc=se.empty,this.heightSamples={},this.lineHeight=14,this.charWidth=7,this.textHeight=14,this.lineLength=30}heightForGap(e,t){let r=this.doc.lineAt(t).number-this.doc.lineAt(e).number+1;return this.lineWrapping&&(r+=Math.max(0,Math.ceil((t-e-r*this.lineLength*.5)/this.lineLength))),this.lineHeight*r}heightForLine(e){return this.lineWrapping?(1+Math.max(0,Math.ceil((e-this.lineLength)/Math.max(1,this.lineLength-5))))*this.lineHeight:this.lineHeight}setDoc(e){return this.doc=e,this}mustRefreshForWrapping(e){return Cc.indexOf(e)>-1!=this.lineWrapping}mustRefreshForHeights(e){let t=!1;for(let r=0;r<e.length;r++){let n=e[r];n<0?r++:this.heightSamples[Math.floor(n*10)]||(t=!0,this.heightSamples[Math.floor(n*10)]=!0)}return t}refresh(e,t,r,n,s,o){let l=Cc.indexOf(e)>-1,a=Math.abs(t-this.lineHeight)>.3||this.lineWrapping!=l||Math.abs(r-this.charWidth)>.1;if(this.lineWrapping=l,this.lineHeight=t,this.charWidth=r,this.textHeight=n,this.lineLength=s,a){this.heightSamples={};for(let h=0;h<o.length;h++){let c=o[h];c<0?h++:this.heightSamples[Math.floor(c*10)]=!0}}return a}},na=class{constructor(e,t){this.from=e,this.heights=t,this.index=0}get more(){return this.index<this.heights.length}},Ot=class i{constructor(e,t,r,n,s){this.from=e,this.length=t,this.top=r,this.height=n,this._content=s}get type(){return typeof this._content==\"number\"?Ve.Text:Array.isArray(this._content)?this._content:this._content.type}get to(){return this.from+this.length}get 
bottom(){return this.top+this.height}get widget(){return this._content instanceof Wr?this._content.widget:null}get widgetLineBreaks(){return typeof this._content==\"number\"?this._content:0}join(e){let t=(Array.isArray(this._content)?this._content:[this]).concat(Array.isArray(e._content)?e._content:[e]);return new i(this.from,this.length+e.length,this.top,this.height+e.height,t)}},de=function(i){return i[i.ByPos=0]=\"ByPos\",i[i.ByHeight=1]=\"ByHeight\",i[i.ByPosNoHeight=2]=\"ByPosNoHeight\",i}(de||(de={})),cs=.001,ft=class i{constructor(e,t,r=2){this.length=e,this.height=t,this.flags=r}get outdated(){return(this.flags&2)>0}set outdated(e){this.flags=(e?2:0)|this.flags&-3}setHeight(e){this.height!=e&&(Math.abs(this.height-e)>cs&&(wi=!0),this.height=e)}replace(e,t,r){return i.of(r)}decomposeLeft(e,t){t.push(this)}decomposeRight(e,t){t.push(this)}applyChanges(e,t,r,n){let s=this,o=r.doc;for(let l=n.length-1;l>=0;l--){let{fromA:a,toA:h,fromB:c,toB:u}=n[l],d=s.lineAt(a,de.ByPosNoHeight,r.setDoc(t),0,0),p=d.to>=h?d:s.lineAt(h,de.ByPosNoHeight,r,0,0);for(u+=p.to-h,h=p.to;l>0&&d.from<=n[l-1].toA;)a=n[l-1].fromA,c=n[l-1].fromB,l--,a<d.from&&(d=s.lineAt(a,de.ByPosNoHeight,r,0,0));c+=d.from-a,a=d.from;let v=oa.build(r.setDoc(o),e,c,u);s=Cs(s,s.replace(a,h,v))}return s.updateHeight(r,0)}static empty(){return new yt(0,0,0)}static of(e){if(e.length==1)return e[0];let t=0,r=e.length,n=0,s=0;for(;;)if(t==r)if(n>s*2){let l=e[t-1];l.break?e.splice(--t,1,l.left,null,l.right):e.splice(--t,1,l.left,l.right),r+=1+l.break,n-=l.size}else if(s>n*2){let l=e[r];l.break?e.splice(r,1,l.left,null,l.right):e.splice(r,1,l.left,l.right),r+=2+l.break,s-=l.size}else break;else if(n<s){let l=e[t++];l&&(n+=l.size)}else{let l=e[--r];l&&(s+=l.size)}let o=0;return e[t-1]==null?(o=1,t--):e[t]==null&&(o=1,r++),new sa(i.of(e.slice(0,t)),o,i.of(e.slice(r)))}};function Cs(i,e){return i==e?i:(i.constructor!=e.constructor&&(wi=!0),e)}ft.prototype.size=1;var q1=X.replace({}),As=class extends 
ft{constructor(e,t,r){super(e,t),this.deco=r,this.spaceAbove=0}mainBlock(e,t){return new Ot(t,this.length,e+this.spaceAbove,this.height-this.spaceAbove,this.deco||0)}blockAt(e,t,r,n){return this.spaceAbove&&e<r+this.spaceAbove?new Ot(n,0,r,this.spaceAbove,q1):this.mainBlock(r,n)}lineAt(e,t,r,n,s){let o=this.mainBlock(n,s);return this.spaceAbove?this.blockAt(0,r,n,s).join(o):o}forEachLine(e,t,r,n,s,o){e<=s+this.length&&t>=s&&o(this.lineAt(0,de.ByPos,r,n,s))}setMeasuredHeight(e){let t=e.heights[e.index++];t<0?(this.spaceAbove=-t,t=e.heights[e.index++]):this.spaceAbove=0,this.setHeight(t)}updateHeight(e,t=0,r=!1,n){return n&&n.from<=t&&n.more&&this.setMeasuredHeight(n),this.outdated=!1,this}toString(){return`block(${this.length})`}},yt=class i extends As{constructor(e,t,r){super(e,t,null),this.collapsed=0,this.widgetHeight=0,this.breaks=0,this.spaceAbove=r}mainBlock(e,t){return new Ot(t,this.length,e+this.spaceAbove,this.height-this.spaceAbove,this.breaks)}replace(e,t,r){let n=r[0];return r.length==1&&(n instanceof i||n instanceof yr&&n.flags&4)&&Math.abs(this.length-n.length)<10?(n instanceof yr?n=new i(n.length,this.height,this.spaceAbove):n.height=this.height,this.outdated||(n.outdated=!1),n):ft.of(r)}updateHeight(e,t=0,r=!1,n){return n&&n.from<=t&&n.more?this.setMeasuredHeight(n):(r||this.outdated)&&(this.spaceAbove=0,this.setHeight(Math.max(this.widgetHeight,e.heightForLine(this.length-this.collapsed))+this.breaks*e.lineHeight)),this.outdated=!1,this}toString(){return`line(${this.length}${this.collapsed?-this.collapsed:\"\"}${this.widgetHeight?\":\"+this.widgetHeight:\"\"})`}},yr=class i extends ft{constructor(e){super(e,0)}heightMetrics(e,t){let r=e.doc.lineAt(t).number,n=e.doc.lineAt(t+this.length).number,s=n-r+1,o,l=0;if(e.lineWrapping){let a=Math.min(this.height,e.lineHeight*s);o=a/s,this.length>s+1&&(l=(this.height-a)/(this.length-s-1))}else 
o=this.height/s;return{firstLine:r,lastLine:n,perLine:o,perChar:l}}blockAt(e,t,r,n){let{firstLine:s,lastLine:o,perLine:l,perChar:a}=this.heightMetrics(t,n);if(t.lineWrapping){let h=n+(e<t.lineHeight?0:Math.round(Math.max(0,Math.min(1,(e-r)/this.height))*this.length)),c=t.doc.lineAt(h),u=l+c.length*a,d=Math.max(r,e-u/2);return new Ot(c.from,c.length,d,u,0)}else{let h=Math.max(0,Math.min(o-s,Math.floor((e-r)/l))),{from:c,length:u}=t.doc.line(s+h);return new Ot(c,u,r+l*h,l,0)}}lineAt(e,t,r,n,s){if(t==de.ByHeight)return this.blockAt(e,r,n,s);if(t==de.ByPosNoHeight){let{from:p,to:v}=r.doc.lineAt(e);return new Ot(p,v-p,0,0,0)}let{firstLine:o,perLine:l,perChar:a}=this.heightMetrics(r,s),h=r.doc.lineAt(e),c=l+h.length*a,u=h.number-o,d=n+l*u+a*(h.from-s-u);return new Ot(h.from,h.length,Math.max(n,Math.min(d,n+this.height-c)),c,0)}forEachLine(e,t,r,n,s,o){e=Math.max(e,s),t=Math.min(t,s+this.length);let{firstLine:l,perLine:a,perChar:h}=this.heightMetrics(r,s);for(let c=e,u=n;c<=t;){let d=r.doc.lineAt(c);if(c==e){let v=d.number-l;u+=a*v+h*(e-s-v)}let p=a+h*d.length;o(new Ot(d.from,d.length,u,p,0)),u+=p,c=d.to+1}}replace(e,t,r){let n=this.length-t;if(n>0){let s=r[r.length-1];s instanceof i?r[r.length-1]=new i(s.length+n):r.push(null,new i(n-1))}if(e>0){let s=r[0];s instanceof i?r[0]=new i(e+s.length):r.unshift(new i(e-1),null)}return ft.of(r)}decomposeLeft(e,t){t.push(new i(e-1),null)}decomposeRight(e,t){t.push(null,new i(this.length-e-1))}updateHeight(e,t=0,r=!1,n){let s=t+this.length;if(n&&n.from<=t+this.length&&n.more){let o=[],l=Math.max(t,n.from),a=-1;for(n.from>t&&o.push(new i(n.from-t-1).updateHeight(e,t));l<=s&&n.more;){let c=e.doc.lineAt(l).length;o.length&&o.push(null);let u=n.heights[n.index++],d=0;u<0&&(d=-u,u=n.heights[n.index++]),a==-1?a=u:Math.abs(u-a)>=cs&&(a=-2);let p=new yt(c,u,d);p.outdated=!1,o.push(p),l+=c+1}l<=s&&o.push(null,new i(s-l).updateHeight(e,l));let 
h=ft.of(o);return(a<0||Math.abs(h.height-this.height)>=cs||Math.abs(a-this.heightMetrics(e,t).perLine)>=cs)&&(wi=!0),Cs(this,h)}else(r||this.outdated)&&(this.setHeight(e.heightForGap(t,t+this.length)),this.outdated=!1);return this}toString(){return`gap(${this.length})`}},sa=class extends ft{constructor(e,t,r){super(e.length+t+r.length,e.height+r.height,t|(e.outdated||r.outdated?2:0)),this.left=e,this.right=r,this.size=e.size+r.size}get break(){return this.flags&1}blockAt(e,t,r,n){let s=r+this.left.height;return e<s?this.left.blockAt(e,t,r,n):this.right.blockAt(e,t,s,n+this.left.length+this.break)}lineAt(e,t,r,n,s){let o=n+this.left.height,l=s+this.left.length+this.break,a=t==de.ByHeight?e<o:e<l,h=a?this.left.lineAt(e,t,r,n,s):this.right.lineAt(e,t,r,o,l);if(this.break||(a?h.to<l:h.from>l))return h;let c=t==de.ByPosNoHeight?de.ByPosNoHeight:de.ByPos;return a?h.join(this.right.lineAt(l,c,r,o,l)):this.left.lineAt(l,c,r,n,s).join(h)}forEachLine(e,t,r,n,s,o){let l=n+this.left.height,a=s+this.left.length+this.break;if(this.break)e<a&&this.left.forEachLine(e,t,r,n,s,o),t>=a&&this.right.forEachLine(e,t,r,l,a,o);else{let h=this.lineAt(a,de.ByPos,r,n,s);e<h.from&&this.left.forEachLine(e,h.from-1,r,n,s,o),h.to>=e&&h.from<=t&&o(h),t>h.to&&this.right.forEachLine(h.to+1,t,r,l,a,o)}}replace(e,t,r){let n=this.left.length+this.break;if(t<n)return this.balanced(this.left.replace(e,t,r),this.right);if(e>this.left.length)return this.balanced(this.left,this.right.replace(e-n,t-n,r));let s=[];e>0&&this.decomposeLeft(e,s);let o=s.length;for(let l of r)s.push(l);if(e>0&&Mc(s,o-1),t<this.length){let l=s.length;this.decomposeRight(t,s),Mc(s,l)}return ft.of(s)}decomposeLeft(e,t){let r=this.left.length;if(e<=r)return this.left.decomposeLeft(e,t);t.push(this.left),this.break&&(r++,e>=r&&t.push(null)),e>r&&this.right.decomposeLeft(e-r,t)}decomposeRight(e,t){let r=this.left.length,n=r+this.break;if(e>=n)return 
this.right.decomposeRight(e-n,t);e<r&&this.left.decomposeRight(e,t),this.break&&e<n&&t.push(null),t.push(this.right)}balanced(e,t){return e.size>2*t.size||t.size>2*e.size?ft.of(this.break?[e,null,t]:[e,t]):(this.left=Cs(this.left,e),this.right=Cs(this.right,t),this.setHeight(e.height+t.height),this.outdated=e.outdated||t.outdated,this.size=e.size+t.size,this.length=e.length+this.break+t.length,this)}updateHeight(e,t=0,r=!1,n){let{left:s,right:o}=this,l=t+s.length+this.break,a=null;return n&&n.from<=t+s.length&&n.more?a=s=s.updateHeight(e,t,r,n):s.updateHeight(e,t,r),n&&n.from<=l+o.length&&n.more?a=o=o.updateHeight(e,l,r,n):o.updateHeight(e,l,r),a?this.balanced(s,o):(this.height=this.left.height+this.right.height,this.outdated=!1,this)}toString(){return this.left+(this.break?\" \":\"-\")+this.right}};function Mc(i,e){let t,r;i[e]==null&&(t=i[e-1])instanceof yr&&(r=i[e+1])instanceof yr&&i.splice(e-1,3,new yr(t.length+1+r.length))}var W1=5,oa=class i{constructor(e,t){this.pos=e,this.oracle=t,this.nodes=[],this.lineStart=-1,this.lineEnd=-1,this.covering=null,this.writtenTo=e}get isCovered(){return this.covering&&this.nodes[this.nodes.length-1]==this.covering}span(e,t){if(this.lineStart>-1){let r=Math.min(t,this.lineEnd),n=this.nodes[this.nodes.length-1];n instanceof yt?n.length+=r-this.pos:(r>this.pos||!this.isCovered)&&this.nodes.push(new yt(r-this.pos,-1,0)),this.writtenTo=r,t>r&&(this.nodes.push(null),this.writtenTo++,this.lineStart=-1)}this.pos=t}point(e,t,r){if(e<t||r.heightRelevant){let n=r.widget?r.widget.estimatedHeight:0,s=r.widget?r.widget.lineBreaks:0;n<0&&(n=this.oracle.lineHeight);let o=t-e;r.block?this.addBlock(new As(o,n,r)):(o||s||n>=W1)&&this.addLineDeco(n,s,o)}else 
t>e&&this.span(e,t);this.lineEnd>-1&&this.lineEnd<this.pos&&(this.lineEnd=this.oracle.doc.lineAt(this.pos).to)}enterLine(){if(this.lineStart>-1)return;let{from:e,to:t}=this.oracle.doc.lineAt(this.pos);this.lineStart=e,this.lineEnd=t,this.writtenTo<e&&((this.writtenTo<e-1||this.nodes[this.nodes.length-1]==null)&&this.nodes.push(this.blankContent(this.writtenTo,e-1)),this.nodes.push(null)),this.pos>e&&this.nodes.push(new yt(this.pos-e,-1,0)),this.writtenTo=this.pos}blankContent(e,t){let r=new yr(t-e);return this.oracle.doc.lineAt(e).to==t&&(r.flags|=4),r}ensureLine(){this.enterLine();let e=this.nodes.length?this.nodes[this.nodes.length-1]:null;if(e instanceof yt)return e;let t=new yt(0,-1,0);return this.nodes.push(t),t}addBlock(e){this.enterLine();let t=e.deco;t&&t.startSide>0&&!this.isCovered&&this.ensureLine(),this.nodes.push(e),this.writtenTo=this.pos=this.pos+e.length,t&&t.endSide>0&&(this.covering=e)}addLineDeco(e,t,r){let n=this.ensureLine();n.length+=r,n.collapsed+=r,n.widgetHeight=Math.max(n.widgetHeight,e),n.breaks+=t,this.writtenTo=this.pos=this.pos+r}finish(e){let t=this.nodes.length==0?null:this.nodes[this.nodes.length-1];this.lineStart>-1&&!(t instanceof yt)&&!this.isCovered?this.nodes.push(new yt(0,-1,0)):(this.writtenTo<this.pos||t==null)&&this.nodes.push(this.blankContent(this.writtenTo,this.pos));let r=e;for(let n of this.nodes)n instanceof yt&&n.updateHeight(this.oracle,r),r+=n?n.length:1;return this.nodes}static build(e,t,r,n){let s=new i(r,e);return le.spans(t,r,n,s,0),s.finish(r)}};function V1(i,e,t){let r=new la;return le.compare(i,e,t,r,0),r.changes}var la=class{constructor(){this.changes=[]}compareRange(){}comparePoint(e,t,r,n){(e<t||r&&r.heightRelevant||n&&n.heightRelevant)&&pi(e,t,this.changes,5)}};function $1(i,e){let t=i.getBoundingClientRect(),r=i.ownerDocument,n=r.defaultView||window,s=Math.max(0,t.left),o=Math.min(n.innerWidth,t.right),l=Math.max(0,t.top),a=Math.min(n.innerHeight,t.bottom);for(let 
h=i.parentNode;h&&h!=r.body;)if(h.nodeType==1){let c=h,u=window.getComputedStyle(c);if((c.scrollHeight>c.clientHeight||c.scrollWidth>c.clientWidth)&&u.overflow!=\"visible\"){let d=c.getBoundingClientRect();s=Math.max(s,d.left),o=Math.min(o,d.right),l=Math.max(l,d.top),a=Math.min(h==i.parentNode?n.innerHeight:a,d.bottom)}h=u.position==\"absolute\"||u.position==\"fixed\"?c.offsetParent:c.parentNode}else if(h.nodeType==11)h=h.host;else break;return{left:s-t.left,right:Math.max(s,o)-t.left,top:l-(t.top+e),bottom:Math.max(l,a)-(t.top+e)}}function G1(i){let e=i.getBoundingClientRect(),t=i.ownerDocument.defaultView||window;return e.left<t.innerWidth&&e.right>0&&e.top<t.innerHeight&&e.bottom>0}function U1(i,e){let t=i.getBoundingClientRect();return{left:0,right:t.right-t.left,top:e,bottom:t.bottom-(t.top+e)}}var Ji=class{constructor(e,t,r,n){this.from=e,this.to=t,this.size=r,this.displaySize=n}static same(e,t){if(e.length!=t.length)return!1;for(let r=0;r<e.length;r++){let n=e[r],s=t[r];if(n.from!=s.from||n.to!=s.to||n.size!=s.size)return!1}return!0}draw(e,t){return X.replace({widget:new aa(this.displaySize*(t?e.scaleY:e.scaleX),t)}).range(this.from,this.to)}},aa=class extends kt{constructor(e,t){super(),this.size=e,this.vertical=t}eq(e){return e.size==this.size&&e.vertical==this.vertical}toDOM(){let e=document.createElement(\"div\");return this.vertical?e.style.height=this.size+\"px\":(e.style.width=this.size+\"px\",e.style.height=\"2px\",e.style.display=\"inline-block\"),e}get estimatedHeight(){return 
this.vertical?this.size:-1}},Ms=class{constructor(e){this.state=e,this.pixelViewport={left:0,right:window.innerWidth,top:0,bottom:0},this.inView=!0,this.paddingTop=0,this.paddingBottom=0,this.contentDOMWidth=0,this.contentDOMHeight=0,this.editorHeight=0,this.editorWidth=0,this.scrollTop=0,this.scrolledToBottom=!1,this.scaleX=1,this.scaleY=1,this.scrollAnchorPos=0,this.scrollAnchorHeight=-1,this.scaler=Tc,this.scrollTarget=null,this.printing=!1,this.mustMeasureContent=!0,this.defaultTextDirection=he.LTR,this.visibleRanges=[],this.mustEnforceCursorAssoc=!1;let t=e.facet(Ea).some(r=>typeof r!=\"function\"&&r.class==\"cm-lineWrapping\");this.heightOracle=new ia(t),this.stateDeco=Dc(e),this.heightMap=ft.empty().applyChanges(this.stateDeco,se.empty,this.heightOracle.setDoc(e.doc),[new zt(0,0,0,e.doc.length)]);for(let r=0;r<2&&(this.viewport=this.getViewport(0,null),!!this.updateForViewport());r++);this.updateViewportLines(),this.lineGaps=this.ensureLineGaps([]),this.lineGapDeco=X.set(this.lineGaps.map(r=>r.draw(this,!1))),this.computeVisibleRanges()}updateForViewport(){let e=[this.viewport],{main:t}=this.state.selection;for(let r=0;r<=1;r++){let n=r?t.head:t.anchor;if(!e.some(({from:s,to:o})=>n>=s&&n<=o)){let{from:s,to:o}=this.lineBlockAt(n);e.push(new fi(s,o))}}return this.viewports=e.sort((r,n)=>r.from-n.from),this.updateScaler()}updateScaler(){let e=this.scaler;return this.scaler=this.heightMap.height<=7e6?Tc:new ha(this.heightOracle,this.heightMap,this.viewports),e.eq(this.scaler)?0:2}updateViewportLines(){this.viewportLines=[],this.heightMap.forEachLine(this.viewport.from,this.viewport.to,this.heightOracle.setDoc(this.state.doc),0,0,e=>{this.viewportLines.push($i(e,this.scaler))})}update(e,t=null){this.state=e.state;let r=this.stateDeco;this.stateDeco=Dc(this.state);let 
n=e.changedRanges,s=zt.extendWithRanges(n,V1(r,this.stateDeco,e?e.changes:Ye.empty(this.state.doc.length))),o=this.heightMap.height,l=this.scrolledToBottom?null:this.scrollAnchorAt(this.scrollTop);Ac(),this.heightMap=this.heightMap.applyChanges(this.stateDeco,e.startState.doc,this.heightOracle.setDoc(this.state.doc),s),(this.heightMap.height!=o||wi)&&(e.flags|=2),l?(this.scrollAnchorPos=e.changes.mapPos(l.from,-1),this.scrollAnchorHeight=l.top):(this.scrollAnchorPos=-1,this.scrollAnchorHeight=o);let a=s.length?this.mapViewport(this.viewport,e.changes):this.viewport;(t&&(t.range.head<a.from||t.range.head>a.to)||!this.viewportIsAppropriate(a))&&(a=this.getViewport(0,t));let h=a.from!=this.viewport.from||a.to!=this.viewport.to;this.viewport=a,e.flags|=this.updateForViewport(),(h||!e.changes.empty||e.flags&2)&&this.updateViewportLines(),(this.lineGaps.length||this.viewport.to-this.viewport.from>4e3)&&this.updateLineGaps(this.ensureLineGaps(this.mapLineGaps(this.lineGaps,e.changes))),e.flags|=this.computeVisibleRanges(e.changes),t&&(this.scrollTarget=t),!this.mustEnforceCursorAssoc&&(e.selectionSet||e.focusChanged)&&e.view.lineWrapping&&e.state.selection.main.empty&&e.state.selection.main.assoc&&!e.state.facet(hu)&&(this.mustEnforceCursorAssoc=!0)}measure(e){let t=e.contentDOM,r=window.getComputedStyle(t),n=this.heightOracle,s=r.whiteSpace;this.defaultTextDirection=r.direction==\"rtl\"?he.RTL:he.LTR;let o=this.heightOracle.mustRefreshForWrapping(s)||this.mustMeasureContent===\"refresh\",l=t.getBoundingClientRect(),a=o||this.mustMeasureContent||this.contentDOMHeight!=l.height;this.contentDOMHeight=l.height,this.mustMeasureContent=!1;let h=0,c=0;if(l.width&&l.height){let{scaleX:E,scaleY:T}=Kc(t,l);(E>.005&&Math.abs(this.scaleX-E)>.005||T>.005&&Math.abs(this.scaleY-T)>.005)&&(this.scaleX=E,this.scaleY=T,h|=16,o=a=!0)}let 
u=(parseInt(r.paddingTop)||0)*this.scaleY,d=(parseInt(r.paddingBottom)||0)*this.scaleY;(this.paddingTop!=u||this.paddingBottom!=d)&&(this.paddingTop=u,this.paddingBottom=d,h|=18),this.editorWidth!=e.scrollDOM.clientWidth&&(n.lineWrapping&&(a=!0),this.editorWidth=e.scrollDOM.clientWidth,h|=16);let p=e.scrollDOM.scrollTop*this.scaleY;this.scrollTop!=p&&(this.scrollAnchorHeight=-1,this.scrollTop=p),this.scrolledToBottom=Yc(e.scrollDOM);let v=(this.printing?U1:$1)(t,this.paddingTop),y=v.top-this.pixelViewport.top,w=v.bottom-this.pixelViewport.bottom;this.pixelViewport=v;let S=this.pixelViewport.bottom>this.pixelViewport.top&&this.pixelViewport.right>this.pixelViewport.left;if(S!=this.inView&&(this.inView=S,S&&(a=!0)),!this.inView&&!this.scrollTarget&&!G1(e.dom))return 0;let A=l.width;if((this.contentDOMWidth!=A||this.editorHeight!=e.scrollDOM.clientHeight)&&(this.contentDOMWidth=l.width,this.editorHeight=e.scrollDOM.clientHeight,h|=16),a){let E=e.docView.measureVisibleLineHeights(this.viewport);if(n.mustRefreshForHeights(E)&&(o=!0),o||n.lineWrapping&&Math.abs(A-this.contentDOMWidth)>n.charWidth){let{lineHeight:T,charWidth:B,textHeight:D}=e.docView.measureTextSize();o=T>0&&n.refresh(s,T,B,D,Math.max(5,A/B),E),o&&(e.docView.minWidth=0,h|=16)}y>0&&w>0?c=Math.max(y,w):y<0&&w<0&&(c=Math.min(y,w)),Ac();for(let T of this.viewports){let B=T.from==this.viewport.from?E:e.docView.measureVisibleLineHeights(T);this.heightMap=(o?ft.empty().applyChanges(this.stateDeco,se.empty,this.heightOracle,[new zt(0,0,0,e.state.doc.length)]):this.heightMap).updateHeight(n,0,o,new na(T.from,B))}wi&&(h|=2)}let M=!this.viewportIsAppropriate(this.viewport,c)||this.scrollTarget&&(this.scrollTarget.range.head<this.viewport.from||this.scrollTarget.range.head>this.viewport.to);return 
M&&(h&2&&(h|=this.updateScaler()),this.viewport=this.getViewport(c,this.scrollTarget),h|=this.updateForViewport()),(h&2||M)&&this.updateViewportLines(),(this.lineGaps.length||this.viewport.to-this.viewport.from>4e3)&&this.updateLineGaps(this.ensureLineGaps(o?[]:this.lineGaps,e)),h|=this.computeVisibleRanges(),this.mustEnforceCursorAssoc&&(this.mustEnforceCursorAssoc=!1,e.docView.enforceCursorAssoc()),h}get visibleTop(){return this.scaler.fromDOM(this.pixelViewport.top)}get visibleBottom(){return this.scaler.fromDOM(this.pixelViewport.bottom)}getViewport(e,t){let r=.5-Math.max(-.5,Math.min(.5,e/1e3/2)),n=this.heightMap,s=this.heightOracle,{visibleTop:o,visibleBottom:l}=this,a=new fi(n.lineAt(o-r*1e3,de.ByHeight,s,0,0).from,n.lineAt(l+(1-r)*1e3,de.ByHeight,s,0,0).to);if(t){let{head:h}=t.range;if(h<a.from||h>a.to){let c=Math.min(this.editorHeight,this.pixelViewport.bottom-this.pixelViewport.top),u=n.lineAt(h,de.ByPos,s,0,0),d;t.y==\"center\"?d=(u.top+u.bottom)/2-c/2:t.y==\"start\"||t.y==\"nearest\"&&h<a.from?d=u.top:d=u.bottom-c,a=new fi(n.lineAt(d-1e3/2,de.ByHeight,s,0,0).from,n.lineAt(d+c+1e3/2,de.ByHeight,s,0,0).to)}}return a}mapViewport(e,t){let r=t.mapPos(e.from,-1),n=t.mapPos(e.to,1);return new fi(this.heightMap.lineAt(r,de.ByPos,this.heightOracle,0,0).from,this.heightMap.lineAt(n,de.ByPos,this.heightOracle,0,0).to)}viewportIsAppropriate({from:e,to:t},r=0){if(!this.inView)return!0;let{top:n}=this.heightMap.lineAt(e,de.ByPos,this.heightOracle,0,0),{bottom:s}=this.heightMap.lineAt(t,de.ByPos,this.heightOracle,0,0),{visibleTop:o,visibleBottom:l}=this;return(e==0||n<=o-Math.max(10,Math.min(-r,250)))&&(t==this.state.doc.length||s>=l+Math.max(10,Math.min(r,250)))&&n>o-2*1e3&&s<l+2*1e3}mapLineGaps(e,t){if(!e.length||t.empty)return e;let r=[];for(let n of e)t.touchesRange(n.from,n.to)||r.push(new Ji(t.mapPos(n.from),t.mapPos(n.to),n.size,n.displaySize));return r}ensureLineGaps(e,t){let 
r=this.heightOracle.lineWrapping,n=r?1e4:2e3,s=n>>1,o=n<<1;if(this.defaultTextDirection!=he.LTR&&!r)return[];let l=[],a=(c,u,d,p)=>{if(u-c<s)return;let v=this.state.selection.main,y=[v.from];v.empty||y.push(v.to);for(let S of y)if(S>c&&S<u){a(c,S-10,d,p),a(S+10,u,d,p);return}let w=j1(e,S=>S.from>=d.from&&S.to<=d.to&&Math.abs(S.from-c)<s&&Math.abs(S.to-u)<s&&!y.some(A=>S.from<A&&S.to>A));if(!w){if(u<d.to&&t&&r&&t.visibleRanges.some(M=>M.from<=u&&M.to>=u)){let M=t.moveToLineBoundary(R.cursor(u),!1,!0).head;M>c&&(u=M)}let S=this.gapSize(d,c,u,p),A=r||S<2e6?S:2e6;w=new Ji(c,u,S,A)}l.push(w)},h=c=>{if(c.length<o||c.type!=Ve.Text)return;let u=K1(c.from,c.to,this.stateDeco);if(u.total<o)return;let d=this.scrollTarget?this.scrollTarget.range.head:null,p,v;if(r){let y=n/this.heightOracle.lineLength*this.heightOracle.lineHeight,w,S;if(d!=null){let A=ns(u,d),M=((this.visibleBottom-this.visibleTop)/2+y)/c.height;w=A-M,S=A+M}else w=(this.visibleTop-c.top-y)/c.height,S=(this.visibleBottom-c.top+y)/c.height;p=is(u,w),v=is(u,S)}else{let y=u.total*this.heightOracle.charWidth,w=n*this.heightOracle.charWidth,S=0;if(y>2e6)for(let B of e)B.from>=c.from&&B.from<c.to&&B.size!=B.displaySize&&B.from*this.heightOracle.charWidth+S<this.pixelViewport.left&&(S=B.size-B.displaySize);let A=this.pixelViewport.left+S,M=this.pixelViewport.right+S,E,T;if(d!=null){let B=ns(u,d),D=((M-A)/2+w)/y;E=B-D,T=B+D}else E=(A-w)/y,T=(M+w)/y;p=is(u,E),v=is(u,T)}p>c.from&&a(c.from,p,c,u),v<c.to&&a(v,c.to,c,u)};for(let c of this.viewportLines)Array.isArray(c.type)?c.type.forEach(h):h(c);return l}gapSize(e,t,r,n){let s=ns(n,r)-ns(n,t);return this.heightOracle.lineWrapping?e.height*s:n.total*this.heightOracle.charWidth*s}updateLineGaps(e){Ji.same(e,this.lineGaps)||(this.lineGaps=e,this.lineGapDeco=X.set(e.map(t=>t.draw(this,this.heightOracle.lineWrapping))))}computeVisibleRanges(e){let t=this.stateDeco;this.lineGaps.length&&(t=t.concat(this.lineGapDeco));let 
r=[];le.spans(t,this.viewport.from,this.viewport.to,{span(s,o){r.push({from:s,to:o})},point(){}},20);let n=0;if(r.length!=this.visibleRanges.length)n=12;else for(let s=0;s<r.length&&!(n&8);s++){let o=this.visibleRanges[s],l=r[s];(o.from!=l.from||o.to!=l.to)&&(n|=4,e&&e.mapPos(o.from,-1)==l.from&&e.mapPos(o.to,1)==l.to||(n|=8))}return this.visibleRanges=r,n}lineBlockAt(e){return e>=this.viewport.from&&e<=this.viewport.to&&this.viewportLines.find(t=>t.from<=e&&t.to>=e)||$i(this.heightMap.lineAt(e,de.ByPos,this.heightOracle,0,0),this.scaler)}lineBlockAtHeight(e){return e>=this.viewportLines[0].top&&e<=this.viewportLines[this.viewportLines.length-1].bottom&&this.viewportLines.find(t=>t.top<=e&&t.bottom>=e)||$i(this.heightMap.lineAt(this.scaler.fromDOM(e),de.ByHeight,this.heightOracle,0,0),this.scaler)}scrollAnchorAt(e){let t=this.lineBlockAtHeight(e+8);return t.from>=this.viewport.from||this.viewportLines[0].top-e>200?t:this.viewportLines[0]}elementAtHeight(e){return $i(this.heightMap.blockAt(this.scaler.fromDOM(e),this.heightOracle,0,0),this.scaler)}get docHeight(){return this.scaler.toDOM(this.heightMap.height)}get contentHeight(){return this.docHeight+this.paddingTop+this.paddingBottom}},fi=class{constructor(e,t){this.from=e,this.to=t}};function K1(i,e,t){let r=[],n=i,s=0;return le.spans(t,i,e,{span(){},point(o,l){o>n&&(r.push({from:n,to:o}),s+=o-n),n=l}},20),n<e&&(r.push({from:n,to:e}),s+=e-n),{total:s,ranges:r}}function is({total:i,ranges:e},t){if(t<=0)return e[0].from;if(t>=1)return e[e.length-1].to;let r=Math.floor(i*t);for(let n=0;;n++){let{from:s,to:o}=e[n],l=o-s;if(r<=l)return s+r;r-=l}}function ns(i,e){let t=0;for(let{from:r,to:n}of i.ranges){if(e<=n){t+=e-r;break}t+=n-r}return t/i.total}function j1(i,e){for(let t of i)if(e(t))return t}var Tc={toDOM(i){return i},fromDOM(i){return i},scale:1,eq(i){return i==this}};function Dc(i){let e=i.facet(Is).filter(r=>typeof r!=\"function\"),t=i.facet(Oa).filter(r=>typeof r!=\"function\");return 
t.length&&e.push(le.join(t)),e}var ha=class i{constructor(e,t,r){let n=0,s=0,o=0;this.viewports=r.map(({from:l,to:a})=>{let h=t.lineAt(l,de.ByPos,e,0,0).top,c=t.lineAt(a,de.ByPos,e,0,0).bottom;return n+=c-h,{from:l,to:a,top:h,bottom:c,domTop:0,domBottom:0}}),this.scale=(7e6-n)/(t.height-n);for(let l of this.viewports)l.domTop=o+(l.top-s)*this.scale,o=l.domBottom=l.domTop+(l.bottom-l.top),s=l.bottom}toDOM(e){for(let t=0,r=0,n=0;;t++){let s=t<this.viewports.length?this.viewports[t]:null;if(!s||e<s.top)return n+(e-r)*this.scale;if(e<=s.bottom)return s.domTop+(e-s.top);r=s.bottom,n=s.domBottom}}fromDOM(e){for(let t=0,r=0,n=0;;t++){let s=t<this.viewports.length?this.viewports[t]:null;if(!s||e<s.domTop)return r+(e-n)/this.scale;if(e<=s.domBottom)return s.top+(e-s.domTop);r=s.bottom,n=s.domBottom}}eq(e){return e instanceof i?this.scale==e.scale&&this.viewports.length==e.viewports.length&&this.viewports.every((t,r)=>t.from==e.viewports[r].from&&t.to==e.viewports[r].to):!1}};function $i(i,e){if(e.scale==1)return i;let t=e.toDOM(i.top),r=e.toDOM(i.bottom);return new Ot(i.from,i.length,t,r-t,Array.isArray(i._content)?i._content.map(n=>$i(n,e)):i._content)}var ss=H.define({combine:i=>i.join(\" \")}),ca=H.define({combine:i=>i.indexOf(!0)>-1}),ua=bt.newName(),Eu=bt.newName(),Ou=bt.newName(),zu={\"&light\":\".\"+Eu,\"&dark\":\".\"+Ou};function fa(i,e,t){return new bt(e,{finish(r){return/&/.test(r)?r.replace(/&\\w*/,n=>{if(n==\"&\")return i;if(!t||!t[n])throw new RangeError(`Unsupported selector: ${n}`);return t[n]}):i+\" \"+r}})}var Y1=fa(\".\"+ua,{\"&\":{position:\"relative !important\",boxSizing:\"border-box\",\"&.cm-focused\":{outline:\"1px dotted #212121\"},display:\"flex !important\",flexDirection:\"column\"},\".cm-scroller\":{display:\"flex !important\",alignItems:\"flex-start 
!important\",fontFamily:\"monospace\",lineHeight:1.4,height:\"100%\",overflowX:\"auto\",position:\"relative\",zIndex:0,overflowAnchor:\"none\"},\".cm-content\":{margin:0,flexGrow:2,flexShrink:0,display:\"block\",whiteSpace:\"pre\",wordWrap:\"normal\",boxSizing:\"border-box\",minHeight:\"100%\",padding:\"4px 0\",outline:\"none\",\"&[contenteditable=true]\":{WebkitUserModify:\"read-write-plaintext-only\"}},\".cm-lineWrapping\":{whiteSpace_fallback:\"pre-wrap\",whiteSpace:\"break-spaces\",wordBreak:\"break-word\",overflowWrap:\"anywhere\",flexShrink:1},\"&light .cm-content\":{caretColor:\"black\"},\"&dark .cm-content\":{caretColor:\"white\"},\".cm-line\":{display:\"block\",padding:\"0 2px 0 6px\"},\".cm-layer\":{position:\"absolute\",left:0,top:0,contain:\"size style\",\"& > *\":{position:\"absolute\"}},\"&light .cm-selectionBackground\":{background:\"#d9d9d9\"},\"&dark .cm-selectionBackground\":{background:\"#222\"},\"&light.cm-focused > .cm-scroller > .cm-selectionLayer .cm-selectionBackground\":{background:\"#d7d4f0\"},\"&dark.cm-focused > .cm-scroller > .cm-selectionLayer .cm-selectionBackground\":{background:\"#233\"},\".cm-cursorLayer\":{pointerEvents:\"none\"},\"&.cm-focused > .cm-scroller > .cm-cursorLayer\":{animation:\"steps(1) cm-blink 1.2s infinite\"},\"@keyframes cm-blink\":{\"0%\":{},\"50%\":{opacity:0},\"100%\":{}},\"@keyframes cm-blink2\":{\"0%\":{},\"50%\":{opacity:0},\"100%\":{}},\".cm-cursor, .cm-dropCursor\":{borderLeft:\"1.2px solid black\",marginLeft:\"-0.6px\",pointerEvents:\"none\"},\".cm-cursor\":{display:\"none\"},\"&dark .cm-cursor\":{borderLeftColor:\"#ddd\"},\".cm-dropCursor\":{position:\"absolute\"},\"&.cm-focused > .cm-scroller > .cm-cursorLayer .cm-cursor\":{display:\"block\"},\".cm-iso\":{unicodeBidi:\"isolate\"},\".cm-announced\":{position:\"fixed\",top:\"-10000px\"},\"@media print\":{\".cm-announced\":{display:\"none\"}},\"&light .cm-activeLine\":{backgroundColor:\"#cceeff44\"},\"&dark 
.cm-activeLine\":{backgroundColor:\"#99eeff33\"},\"&light .cm-specialChar\":{color:\"red\"},\"&dark .cm-specialChar\":{color:\"#f78\"},\".cm-gutters\":{flexShrink:0,display:\"flex\",height:\"100%\",boxSizing:\"border-box\",zIndex:200},\".cm-gutters-before\":{insetInlineStart:0},\".cm-gutters-after\":{insetInlineEnd:0},\"&light .cm-gutters\":{backgroundColor:\"#f5f5f5\",color:\"#6c6c6c\",border:\"0px solid #ddd\",\"&.cm-gutters-before\":{borderRightWidth:\"1px\"},\"&.cm-gutters-after\":{borderLeftWidth:\"1px\"}},\"&dark .cm-gutters\":{backgroundColor:\"#333338\",color:\"#ccc\"},\".cm-gutter\":{display:\"flex !important\",flexDirection:\"column\",flexShrink:0,boxSizing:\"border-box\",minHeight:\"100%\",overflow:\"hidden\"},\".cm-gutterElement\":{boxSizing:\"border-box\"},\".cm-lineNumbers .cm-gutterElement\":{padding:\"0 3px 0 5px\",minWidth:\"20px\",textAlign:\"right\",whiteSpace:\"nowrap\"},\"&light .cm-activeLineGutter\":{backgroundColor:\"#e2f2ff\"},\"&dark .cm-activeLineGutter\":{backgroundColor:\"#222227\"},\".cm-panels\":{boxSizing:\"border-box\",position:\"sticky\",left:0,right:0,zIndex:300},\"&light .cm-panels\":{backgroundColor:\"#f5f5f5\",color:\"black\"},\"&light .cm-panels-top\":{borderBottom:\"1px solid #ddd\"},\"&light .cm-panels-bottom\":{borderTop:\"1px solid #ddd\"},\"&dark .cm-panels\":{backgroundColor:\"#333338\",color:\"white\"},\".cm-dialog\":{padding:\"2px 19px 4px 6px\",position:\"relative\",\"& label\":{fontSize:\"80%\"}},\".cm-dialog-close\":{position:\"absolute\",top:\"3px\",right:\"4px\",backgroundColor:\"inherit\",border:\"none\",font:\"inherit\",fontSize:\"14px\",padding:\"0\"},\".cm-tab\":{display:\"inline-block\",overflow:\"hidden\",verticalAlign:\"bottom\"},\".cm-widgetBuffer\":{verticalAlign:\"text-top\",height:\"1em\",width:0,display:\"inline\"},\".cm-placeholder\":{color:\"#888\",display:\"inline-block\",verticalAlign:\"top\",userSelect:\"none\"},\".cm-highlightSpace\":{backgroundImage:\"radial-gradient(circle at 50% 55%, #aaa 20%, 
transparent 5%)\",backgroundPosition:\"center\"},\".cm-highlightTab\":{backgroundImage:`url('data:image/svg+xml,<svg xmlns=\"http://www.w3.org/2000/svg\" width=\"200\" height=\"20\"><path stroke=\"%23888\" stroke-width=\"1\" fill=\"none\" d=\"M1 10H196L190 5M190 15L196 10M197 4L197 16\"/></svg>')`,backgroundSize:\"auto 100%\",backgroundPosition:\"right 90%\",backgroundRepeat:\"no-repeat\"},\".cm-trailingSpace\":{backgroundColor:\"#ff332255\"},\".cm-button\":{verticalAlign:\"middle\",color:\"inherit\",fontSize:\"70%\",padding:\".2em 1em\",borderRadius:\"1px\"},\"&light .cm-button\":{backgroundImage:\"linear-gradient(#eff1f5, #d9d9df)\",border:\"1px solid #888\",\"&:active\":{backgroundImage:\"linear-gradient(#b4b4b4, #d0d3d6)\"}},\"&dark .cm-button\":{backgroundImage:\"linear-gradient(#393939, #111)\",border:\"1px solid #888\",\"&:active\":{backgroundImage:\"linear-gradient(#111, #333)\"}},\".cm-textfield\":{verticalAlign:\"middle\",color:\"inherit\",fontSize:\"70%\",border:\"1px solid silver\",padding:\".2em .5em\"},\"&light .cm-textfield\":{backgroundColor:\"white\"},\"&dark .cm-textfield\":{border:\"1px solid #555\",backgroundColor:\"inherit\"}},zu),X1={childList:!0,characterData:!0,subtree:!0,attributes:!0,characterDataOldValue:!0},Tl=W.ie&&W.ie_version<=11,da=class{constructor(e){this.view=e,this.active=!1,this.editContext=null,this.selectionRange=new Pl,this.selectionChanged=!1,this.delayedFlush=-1,this.resizeTimeout=-1,this.queue=[],this.delayedAndroidKey=null,this.flushingAndroidKey=-1,this.lastChange=0,this.scrollTargets=[],this.intersection=null,this.resizeScroll=null,this.intersecting=!1,this.gapIntersection=null,this.gaps=[],this.printQuery=null,this.parentCheck=-1,this.dom=e.contentDOM,this.observer=new MutationObserver(t=>{for(let r of 
t)this.queue.push(r);(W.ie&&W.ie_version<=11||W.ios&&e.composing)&&t.some(r=>r.type==\"childList\"&&r.removedNodes.length||r.type==\"characterData\"&&r.oldValue.length>r.target.nodeValue.length)?this.flushSoon():this.flush()}),window.EditContext&&W.android&&e.constructor.EDIT_CONTEXT!==!1&&!(W.chrome&&W.chrome_version<126)&&(this.editContext=new ma(e),e.state.facet(or)&&(e.contentDOM.editContext=this.editContext.editContext)),Tl&&(this.onCharData=t=>{this.queue.push({target:t.target,type:\"characterData\",oldValue:t.prevValue}),this.flushSoon()}),this.onSelectionChange=this.onSelectionChange.bind(this),this.onResize=this.onResize.bind(this),this.onPrint=this.onPrint.bind(this),this.onScroll=this.onScroll.bind(this),window.matchMedia&&(this.printQuery=window.matchMedia(\"print\")),typeof ResizeObserver==\"function\"&&(this.resizeScroll=new ResizeObserver(()=>{var t;((t=this.view.docView)===null||t===void 0?void 0:t.lastUpdate)<Date.now()-75&&this.onResize()}),this.resizeScroll.observe(e.scrollDOM)),this.addWindowListeners(this.win=e.win),this.start(),typeof IntersectionObserver==\"function\"&&(this.intersection=new IntersectionObserver(t=>{this.parentCheck<0&&(this.parentCheck=setTimeout(this.listenForScroll.bind(this),1e3)),t.length>0&&t[t.length-1].intersectionRatio>0!=this.intersecting&&(this.intersecting=!this.intersecting,this.intersecting!=this.view.inView&&this.onScrollChanged(document.createEvent(\"Event\")))},{threshold:[0,.001]}),this.intersection.observe(this.dom),this.gapIntersection=new 
IntersectionObserver(t=>{t.length>0&&t[t.length-1].intersectionRatio>0&&this.onScrollChanged(document.createEvent(\"Event\"))},{})),this.listenForScroll(),this.readSelectionRange()}onScrollChanged(e){this.view.inputState.runHandlers(\"scroll\",e),this.intersecting&&this.view.measure()}onScroll(e){this.intersecting&&this.flush(!1),this.editContext&&this.view.requestMeasure(this.editContext.measureReq),this.onScrollChanged(e)}onResize(){this.resizeTimeout<0&&(this.resizeTimeout=setTimeout(()=>{this.resizeTimeout=-1,this.view.requestMeasure()},50))}onPrint(e){(e.type==\"change\"||!e.type)&&!e.matches||(this.view.viewState.printing=!0,this.view.measure(),setTimeout(()=>{this.view.viewState.printing=!1,this.view.requestMeasure()},500))}updateGaps(e){if(this.gapIntersection&&(e.length!=this.gaps.length||this.gaps.some((t,r)=>t!=e[r]))){this.gapIntersection.disconnect();for(let t of e)this.gapIntersection.observe(t);this.gaps=e}}onSelectionChange(e){let t=this.selectionChanged;if(!this.readSelectionRange()||this.delayedAndroidKey)return;let{view:r}=this,n=this.selectionRange;if(r.state.facet(or)?r.root.activeElement!=this.dom:!Ui(this.dom,n))return;let s=n.anchorNode&&r.docView.tile.nearest(n.anchorNode);if(s&&s.isWidget()&&s.widget.ignoreEvent(e)){t||(this.selectionChanged=!1);return}(W.ie&&W.ie_version<=11||W.android&&W.chrome)&&!r.state.selection.main.empty&&n.focusNode&&Ki(n.focusNode,n.focusOffset,n.anchorNode,n.anchorOffset)?this.flushSoon():this.flush(!1)}readSelectionRange(){let{view:e}=this,t=tn(e.root);if(!t)return!1;let r=W.safari&&e.root.nodeType==11&&e.root.activeElement==this.dom&&_1(this.view,t)||t;if(!r||this.selectionRange.eq(r))return!1;let n=Ui(this.dom,r);return 
n&&!this.selectionChanged&&e.inputState.lastFocusTime>Date.now()-200&&e.inputState.lastTouchTime<Date.now()-300&&Up(this.dom,r)?(this.view.inputState.lastFocusTime=0,e.docView.updateSelection(),!1):(this.selectionRange.setRange(r),n&&(this.selectionChanged=!0),!0)}setSelectionRange(e,t){this.selectionRange.set(e.node,e.offset,t.node,t.offset),this.selectionChanged=!1}clearSelectionRange(){this.selectionRange.set(null,0,null,0)}listenForScroll(){this.parentCheck=-1;let e=0,t=null;for(let r=this.dom;r;)if(r.nodeType==1)!t&&e<this.scrollTargets.length&&this.scrollTargets[e]==r?e++:t||(t=this.scrollTargets.slice(0,e)),t&&t.push(r),r=r.assignedSlot||r.parentNode;else if(r.nodeType==11)r=r.host;else break;if(e<this.scrollTargets.length&&!t&&(t=this.scrollTargets.slice(0,e)),t){for(let r of this.scrollTargets)r.removeEventListener(\"scroll\",this.onScroll);for(let r of this.scrollTargets=t)r.addEventListener(\"scroll\",this.onScroll)}}ignore(e){if(!this.active)return e();try{return this.stop(),e()}finally{this.start(),this.clear()}}start(){this.active||(this.observer.observe(this.dom,X1),Tl&&this.dom.addEventListener(\"DOMCharacterDataModified\",this.onCharData),this.active=!0)}stop(){this.active&&(this.active=!1,this.observer.disconnect(),Tl&&this.dom.removeEventListener(\"DOMCharacterDataModified\",this.onCharData))}clear(){this.processRecords(),this.queue.length=0,this.selectionChanged=!1}delayAndroidKey(e,t){var r;if(!this.delayedAndroidKey){let n=()=>{let s=this.delayedAndroidKey;s&&(this.clearDelayedAndroidKey(),this.view.inputState.lastKeyCode=s.keyCode,this.view.inputState.lastKeyTime=Date.now(),!this.flush()&&s.force&&gi(this.dom,s.key,s.keyCode))};this.flushingAndroidKey=this.view.win.requestAnimationFrame(n)}(!this.delayedAndroidKey||e==\"Enter\")&&(this.delayedAndroidKey={key:e,keyCode:t,force:this.lastChange<Date.now()-50||!!(!((r=this.delayedAndroidKey)===null||r===void 
0)&&r.force)})}clearDelayedAndroidKey(){this.win.cancelAnimationFrame(this.flushingAndroidKey),this.delayedAndroidKey=null,this.flushingAndroidKey=-1}flushSoon(){this.delayedFlush<0&&(this.delayedFlush=this.view.win.requestAnimationFrame(()=>{this.delayedFlush=-1,this.flush()}))}forceFlush(){this.delayedFlush>=0&&(this.view.win.cancelAnimationFrame(this.delayedFlush),this.delayedFlush=-1),this.flush()}pendingRecords(){for(let e of this.observer.takeRecords())this.queue.push(e);return this.queue}processRecords(){let e=this.pendingRecords();e.length&&(this.queue=[]);let t=-1,r=-1,n=!1;for(let s of e){let o=this.readMutation(s);o&&(o.typeOver&&(n=!0),t==-1?{from:t,to:r}=o:(t=Math.min(o.from,t),r=Math.max(o.to,r)))}return{from:t,to:r,typeOver:n}}readChange(){let{from:e,to:t,typeOver:r}=this.processRecords(),n=this.selectionChanged&&Ui(this.dom,this.selectionRange);if(e<0&&!n)return null;e>-1&&(this.lastChange=Date.now()),this.view.inputState.lastFocusTime=0,this.selectionChanged=!1;let s=new Ql(this.view,e,t,r);return this.view.docView.domChanged={newSel:s.newSel?s.newSel.main:null},s}flush(e=!0){if(this.delayedFlush>=0||this.delayedAndroidKey)return!1;e&&this.readSelectionRange();let t=this.readChange();if(!t)return this.view.requestMeasure(),!1;let r=this.view.state,n=xu(this.view,t);return this.view.state==r&&(t.domChanged||t.newSel&&!Ss(this.view.state.selection,t.newSel.main))&&this.view.update([]),n}readMutation(e){let t=this.view.docView.tile.nearest(e.target);if(!t||t.isWidget())return null;if(t.markDirty(e.type==\"attributes\"),e.type==\"childList\"){let r=Bc(t,e.previousSibling||e.target.previousSibling,-1),n=Bc(t,e.nextSibling||e.target.nextSibling,1);return{from:r?t.posAfter(r):t.posAtStart,to:n?t.posBefore(n):t.posAtEnd,typeOver:!1}}else return 
e.type==\"characterData\"?{from:t.posAtStart,to:t.posAtEnd,typeOver:e.target.nodeValue==e.oldValue}:null}setWindow(e){e!=this.win&&(this.removeWindowListeners(this.win),this.win=e,this.addWindowListeners(this.win))}addWindowListeners(e){e.addEventListener(\"resize\",this.onResize),this.printQuery?this.printQuery.addEventListener?this.printQuery.addEventListener(\"change\",this.onPrint):this.printQuery.addListener(this.onPrint):e.addEventListener(\"beforeprint\",this.onPrint),e.addEventListener(\"scroll\",this.onScroll),e.document.addEventListener(\"selectionchange\",this.onSelectionChange)}removeWindowListeners(e){e.removeEventListener(\"scroll\",this.onScroll),e.removeEventListener(\"resize\",this.onResize),this.printQuery?this.printQuery.removeEventListener?this.printQuery.removeEventListener(\"change\",this.onPrint):this.printQuery.removeListener(this.onPrint):e.removeEventListener(\"beforeprint\",this.onPrint),e.document.removeEventListener(\"selectionchange\",this.onSelectionChange)}update(e){this.editContext&&(this.editContext.update(e),e.startState.facet(or)!=e.state.facet(or)&&(e.view.contentDOM.editContext=e.state.facet(or)?this.editContext.editContext:null))}destroy(){var e,t,r;this.stop(),(e=this.intersection)===null||e===void 0||e.disconnect(),(t=this.gapIntersection)===null||t===void 0||t.disconnect(),(r=this.resizeScroll)===null||r===void 0||r.disconnect();for(let n of this.scrollTargets)n.removeEventListener(\"scroll\",this.onScroll);this.removeWindowListeners(this.win),clearTimeout(this.parentCheck),clearTimeout(this.resizeTimeout),this.win.cancelAnimationFrame(this.delayedFlush),this.win.cancelAnimationFrame(this.flushingAndroidKey),this.editContext&&(this.view.contentDOM.editContext=null,this.editContext.destroy())}};function Bc(i,e,t){for(;e;){let r=ke.get(e);if(r&&r.parent==i)return r;let n=e.parentNode;e=n!=i.dom?n:t>0?e.nextSibling:e.previousSibling}return null}function Ec(i,e){let 
t=e.startContainer,r=e.startOffset,n=e.endContainer,s=e.endOffset,o=i.docView.domAtPos(i.state.selection.main.anchor,1);return Ki(o.node,o.offset,n,s)&&([t,r,n,s]=[n,s,t,r]),{anchorNode:t,anchorOffset:r,focusNode:n,focusOffset:s}}function _1(i,e){if(e.getComposedRanges){let n=e.getComposedRanges(i.root)[0];if(n)return Ec(i,n)}let t=null;function r(n){n.preventDefault(),n.stopImmediatePropagation(),t=n.getTargetRanges()[0]}return i.contentDOM.addEventListener(\"beforeinput\",r,!0),i.dom.ownerDocument.execCommand(\"indent\"),i.contentDOM.removeEventListener(\"beforeinput\",r,!0),t?Ec(i,t):null}var ma=class{constructor(e){this.from=0,this.to=0,this.pendingContextChange=null,this.handlers=Object.create(null),this.composing=null,this.resetRange(e.state);let t=this.editContext=new window.EditContext({text:e.state.doc.sliceString(this.from,this.to),selectionStart:this.toContextPos(Math.max(this.from,Math.min(this.to,e.state.selection.main.anchor))),selectionEnd:this.toContextPos(e.state.selection.main.head)});this.handlers.textupdate=r=>{let n=e.state.selection.main,{anchor:s,head:o}=n,l=this.toEditorPos(r.updateRangeStart),a=this.toEditorPos(r.updateRangeEnd);e.inputState.composing>=0&&!this.composing&&(this.composing={contextBase:r.updateRangeStart,editorBase:l,drifted:!1});let h=a-l>r.text.length;l==this.from&&s<this.from?l=s:a==this.to&&s>this.to&&(a=s);let c=wu(e.state.sliceDoc(l,a),r.text,(h?n.from:n.to)-l,h?\"end\":null);if(!c){let d=R.single(this.toEditorPos(r.selectionStart),this.toEditorPos(r.selectionEnd));Ss(d,n)||e.dispatch({selection:d,userEvent:\"select\"});return}let u={from:c.from+l,to:c.toA+l,insert:se.of(r.text.slice(c.from,c.toB).split(`\n`))};if((W.mac||W.android)&&u.from==o-1&&/^\\. 
?$/.test(r.text)&&e.contentDOM.getAttribute(\"autocorrect\")==\"off\"&&(u={from:l,to:a,insert:se.of([r.text.replace(\".\",\" \")])}),this.pendingContextChange=u,!e.state.readOnly){let d=this.to-this.from+(u.to-u.from+u.insert.length);La(e,u,R.single(this.toEditorPos(r.selectionStart,d),this.toEditorPos(r.selectionEnd,d)))}this.pendingContextChange&&(this.revertPending(e.state),this.setSelection(e.state)),u.from<u.to&&!u.insert.length&&e.inputState.composing>=0&&!/[\\\\p{Alphabetic}\\\\p{Number}_]/.test(t.text.slice(Math.max(0,r.updateRangeStart-1),Math.min(t.text.length,r.updateRangeStart+1)))&&this.handlers.compositionend(r)},this.handlers.characterboundsupdate=r=>{let n=[],s=null;for(let o=this.toEditorPos(r.rangeStart),l=this.toEditorPos(r.rangeEnd);o<l;o++){let a=e.coordsForChar(o);s=a&&new DOMRect(a.left,a.top,a.right-a.left,a.bottom-a.top)||s||new DOMRect,n.push(s)}t.updateCharacterBounds(r.rangeStart,n)},this.handlers.textformatupdate=r=>{let n=[];for(let s of r.getTextFormats()){let o=s.underlineStyle,l=s.underlineThickness;if(!/none/i.test(o)&&!/none/i.test(l)){let a=this.toEditorPos(s.rangeStart),h=this.toEditorPos(s.rangeEnd);if(a<h){let c=`text-decoration: underline ${/^[a-z]/.test(o)?o+\" \":o==\"Dashed\"?\"dashed \":o==\"Squiggle\"?\"wavy \":\"\"}${/thin/i.test(l)?1:2}px`;n.push(X.mark({attributes:{style:c}}).range(a,h))}}}e.dispatch({effects:uu.of(X.set(n))})},this.handlers.compositionstart=()=>{e.inputState.composing<0&&(e.inputState.composing=0,e.inputState.compositionFirstChange=!0)},this.handlers.compositionend=()=>{if(e.inputState.composing=-1,e.inputState.compositionFirstChange=null,this.composing){let{drifted:r}=this.composing;this.composing=null,r&&this.reset(e.state)}};for(let r in this.handlers)t.addEventListener(r,this.handlers[r]);this.measureReq={read:r=>{this.editContext.updateControlBounds(r.contentDOM.getBoundingClientRect());let 
n=tn(r.root);n&&n.rangeCount&&this.editContext.updateSelectionBounds(n.getRangeAt(0).getBoundingClientRect())}}}applyEdits(e){let t=0,r=!1,n=this.pendingContextChange;return e.changes.iterChanges((s,o,l,a,h)=>{if(r)return;let c=h.length-(o-s);if(n&&o>=n.to)if(n.from==s&&n.to==o&&n.insert.eq(h)){n=this.pendingContextChange=null,t+=c,this.to+=c;return}else n=null,this.revertPending(e.state);if(s+=t,o+=t,o<=this.from)this.from+=c,this.to+=c;else if(s<this.to){if(s<this.from||o>this.to||this.to-this.from+h.length>3e4){r=!0;return}this.editContext.updateText(this.toContextPos(s),this.toContextPos(o),h.toString()),this.to+=c}t+=c}),n&&!r&&this.revertPending(e.state),!r}update(e){let t=this.pendingContextChange,r=e.startState.selection.main;this.composing&&(this.composing.drifted||!e.changes.touchesRange(r.from,r.to)&&e.transactions.some(n=>!n.isUserEvent(\"input.type\")&&n.changes.touchesRange(this.from,this.to)))?(this.composing.drifted=!0,this.composing.editorBase=e.changes.mapPos(this.composing.editorBase)):!this.applyEdits(e)||!this.rangeIsValid(e.state)?(this.pendingContextChange=null,this.reset(e.state)):(e.docChanged||e.selectionSet||t)&&this.setSelection(e.state),(e.geometryChanged||e.docChanged||e.selectionSet)&&e.view.requestMeasure(this.measureReq)}resetRange(e){let{head:t}=e.selection.main;this.from=Math.max(0,t-1e4),this.to=Math.min(e.doc.length,t+1e4)}reset(e){this.resetRange(e),this.editContext.updateText(0,this.editContext.text.length,e.doc.sliceString(this.from,this.to)),this.setSelection(e)}revertPending(e){let 
t=this.pendingContextChange;this.pendingContextChange=null,this.editContext.updateText(this.toContextPos(t.from),this.toContextPos(t.from+t.insert.length),e.doc.sliceString(t.from,t.to))}setSelection(e){let{main:t}=e.selection,r=this.toContextPos(Math.max(this.from,Math.min(this.to,t.anchor))),n=this.toContextPos(t.head);(this.editContext.selectionStart!=r||this.editContext.selectionEnd!=n)&&this.editContext.updateSelection(r,n)}rangeIsValid(e){let{head:t}=e.selection.main;return!(this.from>0&&t-this.from<500||this.to<e.doc.length&&this.to-t<500||this.to-this.from>1e4*3)}toEditorPos(e,t=this.to-this.from){e=Math.min(e,t);let r=this.composing;return r&&r.drifted?r.editorBase+(e-r.contextBase):e+this.from}toContextPos(e){let t=this.composing;return t&&t.drifted?t.contextBase+(e-t.editorBase):e-this.from}destroy(){for(let e in this.handlers)this.editContext.removeEventListener(e,this.handlers[e])}},K=class i{get state(){return this.viewState.state}get viewport(){return this.viewState.viewport}get visibleRanges(){return this.viewState.visibleRanges}get inView(){return this.viewState.inView}get composing(){return!!this.inputState&&this.inputState.composing>0}get compositionStarted(){return!!this.inputState&&this.inputState.composing>=0}get root(){return this._root}get win(){return this.dom.ownerDocument.defaultView||window}constructor(e={}){var t;this.plugins=[],this.pluginMap=new 
Map,this.editorAttrs={},this.contentAttrs={},this.bidiCache=[],this.destroyed=!1,this.updateState=2,this.measureScheduled=-1,this.measureRequests=[],this.contentDOM=document.createElement(\"div\"),this.scrollDOM=document.createElement(\"div\"),this.scrollDOM.tabIndex=-1,this.scrollDOM.className=\"cm-scroller\",this.scrollDOM.appendChild(this.contentDOM),this.announceDOM=document.createElement(\"div\"),this.announceDOM.className=\"cm-announced\",this.announceDOM.setAttribute(\"aria-live\",\"polite\"),this.dom=document.createElement(\"div\"),this.dom.appendChild(this.announceDOM),this.dom.appendChild(this.scrollDOM),e.parent&&e.parent.appendChild(this.dom);let{dispatch:r}=e;this.dispatchTransactions=e.dispatchTransactions||r&&(n=>n.forEach(s=>r(s,this)))||(n=>this.update(n)),this.dispatch=this.dispatch.bind(this),this._root=e.root||Gp(e.parent)||document,this.viewState=new Ms(e.state||fe.create(e)),e.scrollTo&&e.scrollTo.is(es)&&(this.viewState.scrollTarget=e.scrollTo.value.clip(this.viewState.state)),this.plugins=this.state.facet(ui).map(n=>new Yi(n));for(let n of this.plugins)n.update(this);this.observer=new da(this),this.inputState=new ea(this),this.inputState.ensureHandlers(this.plugins),this.docView=new ws(this),this.mountStyles(),this.updateAttrs(),this.updateState=0,this.requestMeasure(),!((t=document.fonts)===null||t===void 0)&&t.ready&&document.fonts.ready.then(()=>{this.viewState.mustMeasureContent=\"refresh\",this.requestMeasure()})}dispatch(...e){let t=e.length==1&&e[0]instanceof Le?e:e.length==1&&Array.isArray(e[0])?e[0]:[this.state.update(...e)];this.dispatchTransactions(t,this)}update(e){if(this.updateState!=0)throw new Error(\"Calls to EditorView.update are not allowed while an update is in progress\");let t=!1,r=!1,n,s=this.state;for(let d of e){if(d.startState!=s)throw new RangeError(\"Trying to update state with a transaction that doesn't start from the previous state.\");s=d.state}if(this.destroyed){this.viewState.state=s;return}let 
o=this.hasFocus,l=0,a=null;e.some(d=>d.annotation(Tu))?(this.inputState.notifiedFocused=o,l=1):o!=this.inputState.notifiedFocused&&(this.inputState.notifiedFocused=o,a=Du(s,o),a||(l=1));let h=this.observer.delayedAndroidKey,c=null;if(h?(this.observer.clearDelayedAndroidKey(),c=this.observer.readChange(),(c&&!this.state.doc.eq(s.doc)||!this.state.selection.eq(s.selection))&&(c=null)):this.observer.clear(),s.facet(fe.phrases)!=this.state.facet(fe.phrases))return this.setState(s);n=ys.create(this,s,e),n.flags|=l;let u=this.viewState.scrollTarget;try{this.updateState=2;for(let d of e){if(u&&(u=u.map(d.changes)),d.scrollIntoView){let{main:p}=d.state.selection;u=new ji(p.empty?p:R.cursor(p.head,p.head>p.anchor?-1:1))}for(let p of d.effects)p.is(es)&&(u=p.value.clip(this.state))}this.viewState.update(n,u),this.bidiCache=Ts.update(this.bidiCache,n.changes),n.empty||(this.updatePlugins(n),this.inputState.update(n)),t=this.docView.update(n),this.state.facet(Wi)!=this.styleModules&&this.mountStyles(),r=this.updateAttrs(),this.showAnnouncements(e),this.docView.updateSelection(t,e.some(d=>d.isUserEvent(\"select.pointer\")))}finally{this.updateState=0}if(n.startState.facet(ss)!=n.state.facet(ss)&&(this.viewState.mustMeasureContent=!0),(t||r||u||this.viewState.mustEnforceCursorAssoc||this.viewState.mustMeasureContent)&&this.requestMeasure(),t&&this.docViewUpdate(),!n.empty)for(let d of this.state.facet(ql))try{d(n)}catch(p){Fe(this.state,p,\"update listener\")}(a||c)&&Promise.resolve().then(()=>{a&&this.state==a.startState&&this.dispatch(a),c&&!xu(this,c)&&h.force&&gi(this.contentDOM,h.key,h.keyCode)})}setState(e){if(this.updateState!=0)throw new Error(\"Calls to EditorView.setState are not allowed while an update is in progress\");if(this.destroyed){this.viewState.state=e;return}this.updateState=2;let t=this.hasFocus;try{for(let r of this.plugins)r.destroy(this);this.viewState=new Ms(e),this.plugins=e.facet(ui).map(r=>new Yi(r)),this.pluginMap.clear();for(let r of 
this.plugins)r.update(this);this.docView.destroy(),this.docView=new ws(this),this.inputState.ensureHandlers(this.plugins),this.mountStyles(),this.updateAttrs(),this.bidiCache=[]}finally{this.updateState=0}t&&this.focus(),this.requestMeasure()}updatePlugins(e){let t=e.startState.facet(ui),r=e.state.facet(ui);if(t!=r){let n=[];for(let s of r){let o=t.indexOf(s);if(o<0)n.push(new Yi(s));else{let l=this.plugins[o];l.mustUpdate=e,n.push(l)}}for(let s of this.plugins)s.mustUpdate!=e&&s.destroy(this);this.plugins=n,this.pluginMap.clear()}else for(let n of this.plugins)n.mustUpdate=e;for(let n=0;n<this.plugins.length;n++)this.plugins[n].update(this);t!=r&&this.inputState.ensureHandlers(this.plugins)}docViewUpdate(){for(let e of this.plugins){let t=e.value;if(t&&t.docViewUpdate)try{t.docViewUpdate(this)}catch(r){Fe(this.state,r,\"doc view update listener\")}}}measure(e=!0){if(this.destroyed)return;if(this.measureScheduled>-1&&this.win.cancelAnimationFrame(this.measureScheduled),this.observer.delayedAndroidKey){this.measureScheduled=-1,this.requestMeasure();return}this.measureScheduled=0,e&&this.observer.forceFlush();let t=null,r=this.scrollDOM,n=r.scrollTop*this.scaleY,{scrollAnchorPos:s,scrollAnchorHeight:o}=this.viewState;Math.abs(n-this.viewState.scrollTop)>1&&(o=-1),this.viewState.scrollAnchorHeight=-1;try{for(let l=0;;l++){if(o<0)if(Yc(r))s=-1,o=this.viewState.heightMap.height;else{let p=this.viewState.scrollAnchorAt(n);s=p.from,o=p.top}this.updateState=1;let a=this.viewState.measure(this);if(!a&&!this.measureRequests.length&&this.viewState.scrollTarget==null)break;if(l>5){console.warn(this.measureRequests.length?\"Measure loop restarted more than 5 times\":\"Viewport failed to stabilize\");break}let h=[];a&4||([this.measureRequests,h]=[h,this.measureRequests]);let c=h.map(p=>{try{return p.read(this)}catch(v){return 
Fe(this.state,v),Oc}}),u=ys.create(this,this.state,[]),d=!1;u.flags|=a,t?t.flags|=a:t=u,this.updateState=2,u.empty||(this.updatePlugins(u),this.inputState.update(u),this.updateAttrs(),d=this.docView.update(u),d&&this.docViewUpdate());for(let p=0;p<h.length;p++)if(c[p]!=Oc)try{let v=h[p];v.write&&v.write(c[p],this)}catch(v){Fe(this.state,v)}if(d&&this.docView.updateSelection(!0),!u.viewportChanged&&this.measureRequests.length==0){if(this.viewState.editorHeight)if(this.viewState.scrollTarget){this.docView.scrollIntoView(this.viewState.scrollTarget),this.viewState.scrollTarget=null,o=-1;continue}else{let v=(s<0?this.viewState.heightMap.height:this.viewState.lineBlockAt(s).top)-o;if(v>1||v<-1){n=n+v,r.scrollTop=n/this.scaleY,o=-1;continue}}break}}}finally{this.updateState=0,this.measureScheduled=-1}if(t&&!t.empty)for(let l of this.state.facet(ql))l(t)}get themeClasses(){return ua+\" \"+(this.state.facet(ca)?Ou:Eu)+\" \"+this.state.facet(ss)}updateAttrs(){let e=zc(this,fu,{class:\"cm-editor\"+(this.hasFocus?\" cm-focused \":\" \")+this.themeClasses}),t={spellcheck:\"false\",autocorrect:\"off\",autocapitalize:\"off\",writingsuggestions:\"false\",translate:\"no\",contenteditable:this.state.facet(or)?\"true\":\"false\",class:\"cm-content\",style:`${W.tabSize}: ${this.state.tabSize}`,role:\"textbox\",\"aria-multiline\":\"true\"};this.state.readOnly&&(t[\"aria-readonly\"]=\"true\"),zc(this,Ea,t);let r=this.observer.ignore(()=>{let n=cc(this.contentDOM,this.contentAttrs,t),s=cc(this.dom,this.editorAttrs,e);return n||s});return this.editorAttrs=e,this.contentAttrs=t,r}showAnnouncements(e){let t=!0;for(let r of e)for(let n of r.effects)if(n.is(i.announce)){t&&(this.announceDOM.textContent=\"\"),t=!1;let s=this.announceDOM.appendChild(document.createElement(\"div\"));s.textContent=n.value}}mountStyles(){this.styleModules=this.state.facet(Wi);let e=this.state.facet(i.cspNonce);bt.mount(this.root,this.styleModules.concat(Y1).reverse(),e?{nonce:e}:void 
0)}readMeasured(){if(this.updateState==2)throw new Error(\"Reading the editor layout isn't allowed during an update\");this.updateState==0&&this.measureScheduled>-1&&this.measure(!1)}requestMeasure(e){if(this.measureScheduled<0&&(this.measureScheduled=this.win.requestAnimationFrame(()=>this.measure())),e){if(this.measureRequests.indexOf(e)>-1)return;if(e.key!=null){for(let t=0;t<this.measureRequests.length;t++)if(this.measureRequests[t].key===e.key){this.measureRequests[t]=e;return}}this.measureRequests.push(e)}}plugin(e){let t=this.pluginMap.get(e);return(t===void 0||t&&t.plugin!=e)&&this.pluginMap.set(e,t=this.plugins.find(r=>r.plugin==e)||null),t&&t.update(this).value}get documentTop(){return this.contentDOM.getBoundingClientRect().top+this.viewState.paddingTop}get documentPadding(){return{top:this.viewState.paddingTop,bottom:this.viewState.paddingBottom}}get scaleX(){return this.viewState.scaleX}get scaleY(){return this.viewState.scaleY}elementAtHeight(e){return this.readMeasured(),this.viewState.elementAtHeight(e)}lineBlockAtHeight(e){return this.readMeasured(),this.viewState.lineBlockAtHeight(e)}get viewportLineBlocks(){return this.viewState.viewportLines}lineBlockAt(e){return this.viewState.lineBlockAt(e)}get contentHeight(){return this.viewState.contentHeight}moveByChar(e,t,r){return Ml(this,e,gc(this,e,t,r))}moveByGroup(e,t){return Ml(this,e,gc(this,e,t,r=>y1(this,e.head,r)))}visualLineSide(e,t){let r=this.bidiSpans(e),n=this.textDirectionAt(e.from),s=r[t?r.length-1:0];return R.cursor(s.side(t,n)+e.from,s.forward(!t,n)?1:-1)}moveToLineBoundary(e,t,r=!0){return b1(this,e,t,r)}moveVertically(e,t,r){return Ml(this,e,x1(this,e,t,r))}domAtPos(e,t=1){return this.docView.domAtPos(e,t)}posAtDOM(e,t=0){return this.docView.posFromDOM(e,t)}posAtCoords(e,t=!0){this.readMeasured();let r=_l(this,e,t);return r&&r.pos}posAndSideAtCoords(e,t=!0){return this.readMeasured(),_l(this,e,t)}coordsAtPos(e,t=1){this.readMeasured();let 
r=this.docView.coordsAt(e,t);if(!r||r.left==r.right)return r;let n=this.state.doc.lineAt(e),s=this.bidiSpans(n),o=s[wt.find(s,e-n.from,-1,t)];return bs(r,o.dir==he.LTR==t>0)}coordsForChar(e){return this.readMeasured(),this.docView.coordsForChar(e)}get defaultCharacterWidth(){return this.viewState.heightOracle.charWidth}get defaultLineHeight(){return this.viewState.heightOracle.lineHeight}get textDirection(){return this.viewState.defaultTextDirection}textDirectionAt(e){return!this.state.facet(au)||e<this.viewport.from||e>this.viewport.to?this.textDirection:(this.readMeasured(),this.docView.textDirectionAt(e))}get lineWrapping(){return this.viewState.heightOracle.lineWrapping}bidiSpans(e){if(e.length>J1)return eu(e.length);let t=this.textDirectionAt(e.from),r;for(let s of this.bidiCache)if(s.from==e.from&&s.dir==t&&(s.fresh||Qc(s.isolates,r=dc(this,e))))return s.order;r||(r=dc(this,e));let n=Zp(e.text,t,r);return this.bidiCache.push(new Ts(e.from,e.to,t,r,!0,n)),n}get hasFocus(){var e;return(this.dom.ownerDocument.hasFocus()||W.safari&&((e=this.inputState)===null||e===void 0?void 0:e.lastContextMenu)>Date.now()-3e4)&&this.root.activeElement==this.contentDOM}focus(){this.observer.ignore(()=>{jc(this.contentDOM),this.docView.updateSelection()})}setRoot(e){this._root!=e&&(this._root=e,this.observer.setWindow((e.nodeType==9?e:e.ownerDocument).defaultView||window),this.mountStyles())}destroy(){this.root.activeElement==this.contentDOM&&this.contentDOM.blur();for(let e of this.plugins)e.destroy(this);this.plugins=[],this.inputState.destroy(),this.docView.destroy(),this.dom.remove(),this.observer.destroy(),this.measureScheduled>-1&&this.win.cancelAnimationFrame(this.measureScheduled),this.destroyed=!0}static scrollIntoView(e,t={}){return es.of(new ji(typeof e==\"number\"?R.cursor(e):e,t.y,t.x,t.yMargin,t.xMargin))}scrollSnapshot(){let{scrollTop:e,scrollLeft:t}=this.scrollDOM,r=this.viewState.scrollAnchorAt(e);return es.of(new 
ji(R.cursor(r.from),\"start\",\"start\",r.top-e,t,!0))}setTabFocusMode(e){e==null?this.inputState.tabFocusMode=this.inputState.tabFocusMode<0?0:-1:typeof e==\"boolean\"?this.inputState.tabFocusMode=e?0:-1:this.inputState.tabFocusMode!=0&&(this.inputState.tabFocusMode=Date.now()+e)}static domEventHandlers(e){return Se.define(()=>({}),{eventHandlers:e})}static domEventObservers(e){return Se.define(()=>({}),{eventObservers:e})}static theme(e,t){let r=bt.newName(),n=[ss.of(r),Wi.of(fa(`.${r}`,e))];return t&&t.dark&&n.push(ca.of(!0)),n}static baseTheme(e){return Wt.lowest(Wi.of(fa(\".\"+ua,e,zu)))}static findFromDOM(e){var t;let r=e.querySelector(\".cm-content\"),n=r&&ke.get(r)||ke.get(e);return((t=n?.root)===null||t===void 0?void 0:t.view)||null}};K.styleModule=Wi;K.inputHandler=ou;K.clipboardInputFilter=Da;K.clipboardOutputFilter=Ba;K.scrollHandler=cu;K.focusChangeEffect=lu;K.perLineTextDirection=au;K.exceptionSink=su;K.updateListener=ql;K.editable=or;K.mouseSelectionStyle=nu;K.dragMovesSelection=iu;K.clickAddsSelectionRange=ru;K.decorations=Is;K.blockWrappers=du;K.outerDecorations=Oa;K.atomicRanges=ln;K.bidiIsolatedRanges=mu;K.scrollMargins=pu;K.darkTheme=ca;K.cspNonce=H.define({combine:i=>i.length?i[0]:\"\"});K.contentAttributes=Ea;K.editorAttributes=fu;K.lineWrapping=K.contentAttributes.of({class:\"cm-lineWrapping\"});K.announce=te.define();var J1=4096,Oc={},Ts=class i{constructor(e,t,r,n,s,o){this.from=e,this.to=t,this.dir=r,this.isolates=n,this.fresh=s,this.order=o}static update(e,t){if(t.empty&&!e.some(s=>s.fresh))return e;let r=[],n=e.length?e[e.length-1].dir:he.LTR;for(let s=Math.max(0,e.length-10);s<e.length;s++){let o=e[s];o.dir==n&&!t.touchesRange(o.from,o.to)&&r.push(new i(t.mapPos(o.from,1),t.mapPos(o.to,-1),o.dir,o.isolates,!1,o.order))}return r}};function zc(i,e,t){for(let r=i.state.facet(e),n=r.length-1;n>=0;n--){let s=r[n],o=typeof s==\"function\"?s(i):s;o&&Aa(o,t)}return t}var Z1=W.mac?\"mac\":W.windows?\"win\":W.linux?\"linux\":\"key\";function 
Q1(i,e){let t=i.split(/-(?!$)/),r=t[t.length-1];r==\"Space\"&&(r=\" \");let n,s,o,l;for(let a=0;a<t.length-1;++a){let h=t[a];if(/^(cmd|meta|m)$/i.test(h))l=!0;else if(/^a(lt)?$/i.test(h))n=!0;else if(/^(c|ctrl|control)$/i.test(h))s=!0;else if(/^s(hift)?$/i.test(h))o=!0;else if(/^mod$/i.test(h))e==\"mac\"?l=!0:s=!0;else throw new Error(\"Unrecognized modifier name: \"+h)}return n&&(r=\"Alt-\"+r),s&&(r=\"Ctrl-\"+r),l&&(r=\"Meta-\"+r),o&&(r=\"Shift-\"+r),r}function os(i,e,t){return e.altKey&&(i=\"Alt-\"+i),e.ctrlKey&&(i=\"Ctrl-\"+i),e.metaKey&&(i=\"Meta-\"+i),t!==!1&&e.shiftKey&&(i=\"Shift-\"+i),i}var eg=Wt.default(K.domEventHandlers({keydown(i,e){return ng(tg(e.state),i,e,\"editor\")}})),an=H.define({enables:eg}),Lc=new WeakMap;function tg(i){let e=i.facet(an),t=Lc.get(e);return t||Lc.set(e,t=ig(e.reduce((r,n)=>r.concat(n),[]))),t}var br=null,rg=4e3;function ig(i,e=Z1){let t=Object.create(null),r=Object.create(null),n=(o,l)=>{let a=r[o];if(a==null)r[o]=l;else if(a!=l)throw new Error(\"Key binding \"+o+\" is used both as a regular binding and as a multi-stroke prefix\")},s=(o,l,a,h,c)=>{var u,d;let p=t[o]||(t[o]=Object.create(null)),v=l.split(/ (?!$)/).map(S=>Q1(S,e));for(let S=1;S<v.length;S++){let A=v.slice(0,S).join(\" \");n(A,!0),p[A]||(p[A]={preventDefault:!0,stopPropagation:!1,run:[M=>{let E=br={view:M,prefix:A,scope:o};return setTimeout(()=>{br==E&&(br=null)},rg),!0}]})}let y=v.join(\" \");n(y,!1);let w=p[y]||(p[y]={preventDefault:!1,stopPropagation:!1,run:((d=(u=p._any)===null||u===void 0?void 0:u.run)===null||d===void 0?void 0:d.slice())||[]});a&&w.run.push(a),h&&(w.preventDefault=!0),c&&(w.stopPropagation=!0)};for(let o of i){let l=o.scope?o.scope.split(\" \"):[\"editor\"];if(o.any)for(let h of l){let c=t[h]||(t[h]=Object.create(null));c._any||(c._any={preventDefault:!1,stopPropagation:!1,run:[]});let{any:u}=o;for(let d in c)c[d].run.push(p=>u(p,pa))}let a=o[e]||o.key;if(a)for(let h of 
l)s(h,a,o.run,o.preventDefault,o.stopPropagation),o.shift&&s(h,\"Shift-\"+a,o.shift,o.preventDefault,o.stopPropagation)}return t}var pa=null;function ng(i,e,t,r){pa=e;let n=sc(e),s=Xe(n,0),o=vt(s)==n.length&&n!=\" \",l=\"\",a=!1,h=!1,c=!1;br&&br.view==t&&br.scope==r&&(l=br.prefix+\" \",Su.indexOf(e.keyCode)<0&&(h=!0,br=null));let u=new Set,d=w=>{if(w){for(let S of w.run)if(!u.has(S)&&(u.add(S),S(t)))return w.stopPropagation&&(c=!0),!0;w.preventDefault&&(w.stopPropagation&&(c=!0),h=!0)}return!1},p=i[r],v,y;return p&&(d(p[l+os(n,e,!o)])?a=!0:o&&(e.altKey||e.metaKey||e.ctrlKey)&&!(W.windows&&e.ctrlKey&&e.altKey)&&!(W.mac&&e.altKey&&!(e.ctrlKey||e.metaKey))&&(v=sr[e.keyCode])&&v!=n?(d(p[l+os(v,e,!0)])||e.shiftKey&&(y=ci[e.keyCode])!=n&&y!=v&&d(p[l+os(y,e,!1)]))&&(a=!0):o&&e.shiftKey&&d(p[l+os(n,e,!0)])&&(a=!0),!a&&d(p._any)&&(a=!0)),h&&(a=!0),a&&c&&e.stopPropagation(),pa=null,a}var nn=class i{constructor(e,t,r,n,s){this.className=e,this.left=t,this.top=r,this.width=n,this.height=s}draw(){let e=document.createElement(\"div\");return e.className=this.className,this.adjust(e),e}update(e,t){return t.className!=this.className?!1:(this.adjust(e),!0)}adjust(e){e.style.left=this.left+\"px\",e.style.top=this.top+\"px\",this.width!=null&&(e.style.width=this.width+\"px\"),e.style.height=this.height+\"px\"}eq(e){return this.left==e.left&&this.top==e.top&&this.width==e.width&&this.height==e.height&&this.className==e.className}static forRange(e,t,r){if(r.empty){let n=e.coordsAtPos(r.head,r.assoc||1);if(!n)return[];let s=Lu(e);return[new i(t,n.left-s.left,n.top-s.top,null,n.bottom-n.top)]}else return sg(e,t,r)}};function Lu(i){let e=i.scrollDOM.getBoundingClientRect();return{left:(i.textDirection==he.LTR?e.left:e.right-i.scrollDOM.clientWidth*i.scaleX)-i.scrollDOM.scrollLeft*i.scaleX,top:e.top-i.scrollDOM.scrollTop*i.scaleY}}function Ic(i,e,t,r){let n=i.coordsAtPos(e,t*2);if(!n)return r;let 
s=i.dom.getBoundingClientRect(),o=(n.top+n.bottom)/2,l=i.posAtCoords({x:s.left+1,y:o}),a=i.posAtCoords({x:s.right-1,y:o});return l==null||a==null?r:{from:Math.max(r.from,Math.min(l,a)),to:Math.min(r.to,Math.max(l,a))}}function sg(i,e,t){if(t.to<=i.viewport.from||t.from>=i.viewport.to)return[];let r=Math.max(t.from,i.viewport.from),n=Math.min(t.to,i.viewport.to),s=i.textDirection==he.LTR,o=i.contentDOM,l=o.getBoundingClientRect(),a=Lu(i),h=o.querySelector(\".cm-line\"),c=h&&window.getComputedStyle(h),u=l.left+(c?parseInt(c.paddingLeft)+Math.min(0,parseInt(c.textIndent)):0),d=l.right-(c?parseInt(c.paddingRight):0),p=Xl(i,r,1),v=Xl(i,n,-1),y=p.type==Ve.Text?p:null,w=v.type==Ve.Text?v:null;if(y&&(i.lineWrapping||p.widgetLineBreaks)&&(y=Ic(i,r,1,y)),w&&(i.lineWrapping||v.widgetLineBreaks)&&(w=Ic(i,n,-1,w)),y&&w&&y.from==w.from&&y.to==w.to)return A(M(t.from,t.to,y));{let T=y?M(t.from,null,y):E(p,!1),B=w?M(null,t.to,w):E(v,!0),D=[];return(y||p).to<(w||v).from-(y&&w?1:0)||p.widgetLineBreaks>1&&T.bottom+i.defaultLineHeight/2<B.top?D.push(S(u,T.bottom,d,B.top)):T.bottom<B.top&&i.elementAtHeight((T.bottom+B.top)/2).type==Ve.Text&&(T.bottom=B.top=(T.bottom+B.top)/2),A(T).concat(D).concat(A(B))}function S(T,B,D,V){return new nn(e,T-a.left,B-a.top,Math.max(0,D-T),V-B)}function A({top:T,bottom:B,horizontal:D}){let V=[];for(let U=0;U<D.length;U+=2)V.push(S(D[U],T,D[U+1],B));return V}function M(T,B,D){let V=1e9,U=-1e9,ie=[];function j(J,re,ge,Be,qe){let Me=i.coordsAtPos(J,J==D.to?-2:2),Te=i.coordsAtPos(ge,ge==D.from?2:-2);!Me||!Te||(V=Math.min(Me.top,Te.top,V),U=Math.max(Me.bottom,Te.bottom,U),qe==he.LTR?ie.push(s&&re?u:Me.left,s&&Be?d:Te.right):ie.push(!s&&Be?u:Te.left,!s&&re?d:Me.right))}let $=T??D.from,ne=B??D.to;for(let J of i.visibleRanges)if(J.to>$&&J.from<ne)for(let re=Math.max(J.from,$),ge=Math.min(J.to,ne);;){let Be=i.state.doc.lineAt(re);for(let qe of i.bidiSpans(Be)){let 
Me=qe.from+Be.from,Te=qe.to+Be.from;if(Me>=ge)break;Te>re&&j(Math.max(Me,re),T==null&&Me<=$,Math.min(Te,ge),B==null&&Te>=ne,qe.dir)}if(re=Be.to+1,re>=ge)break}return ie.length==0&&j($,T==null,ne,B==null,i.textDirection),{top:V,bottom:U,horizontal:ie}}function E(T,B){let D=l.top+(B?T.top:T.bottom);return{top:D,bottom:D,horizontal:[]}}}function og(i,e){return i.constructor==e.constructor&&i.eq(e)}var ga=class{constructor(e,t){this.view=e,this.layer=t,this.drawn=[],this.scaleX=1,this.scaleY=1,this.measureReq={read:this.measure.bind(this),write:this.draw.bind(this)},this.dom=e.scrollDOM.appendChild(document.createElement(\"div\")),this.dom.classList.add(\"cm-layer\"),t.above&&this.dom.classList.add(\"cm-layer-above\"),t.class&&this.dom.classList.add(t.class),this.scale(),this.dom.setAttribute(\"aria-hidden\",\"true\"),this.setOrder(e.state),e.requestMeasure(this.measureReq),t.mount&&t.mount(this.dom,e)}update(e){e.startState.facet(us)!=e.state.facet(us)&&this.setOrder(e.state),(this.layer.update(e,this.dom)||e.geometryChanged)&&(this.scale(),e.view.requestMeasure(this.measureReq))}docViewUpdate(e){this.layer.updateOnDocViewUpdate!==!1&&e.requestMeasure(this.measureReq)}setOrder(e){let t=0,r=e.facet(us);for(;t<r.length&&r[t]!=this.layer;)t++;this.dom.style.zIndex=String((this.layer.above?150:-1)-t)}measure(){return this.layer.markers(this.view)}scale(){let{scaleX:e,scaleY:t}=this.view;(e!=this.scaleX||t!=this.scaleY)&&(this.scaleX=e,this.scaleY=t,this.dom.style.transform=`scale(${1/e}, ${1/t})`)}draw(e){if(e.length!=this.drawn.length||e.some((t,r)=>!og(t,this.drawn[r]))){let t=this.dom.firstChild,r=0;for(let n of e)n.update&&t&&n.constructor&&this.drawn[r].constructor&&n.update(t,this.drawn[r])?(t=t.nextSibling,r++):this.dom.insertBefore(n.draw(),t);for(;t;){let 
n=t.nextSibling;t.remove(),t=n}this.drawn=e,W.safari&&W.safari_version>=26&&(this.dom.style.display=this.dom.firstChild?\"\":\"none\")}}destroy(){this.layer.destroy&&this.layer.destroy(this.dom,this.view),this.dom.remove()}},us=H.define();function Iu(i){return[Se.define(e=>new ga(e,i)),us.of(i)]}var sn=H.define({combine(i){return it(i,{cursorBlinkRate:1200,drawRangeCursor:!0},{cursorBlinkRate:(e,t)=>Math.min(e,t),drawRangeCursor:(e,t)=>e||t})}});function Ru(i={}){return[sn.of(i),lg,ag,hg,hu.of(!0)]}function Pu(i){return i.startState.facet(sn)!=i.state.facet(sn)}var lg=Iu({above:!0,markers(i){let{state:e}=i,t=e.facet(sn),r=[];for(let n of e.selection.ranges){let s=n==e.selection.main;if(n.empty||t.drawRangeCursor){let o=s?\"cm-cursor cm-cursor-primary\":\"cm-cursor cm-cursor-secondary\",l=n.empty?n:R.cursor(n.head,n.head>n.anchor?-1:1);for(let a of nn.forRange(i,o,l))r.push(a)}}return r},update(i,e){i.transactions.some(r=>r.selection)&&(e.style.animationName=e.style.animationName==\"cm-blink\"?\"cm-blink2\":\"cm-blink\");let t=Pu(i);return t&&Rc(i.state,e),i.docChanged||i.selectionSet||t},mount(i,e){Rc(e.state,i)},class:\"cm-cursorLayer\"});function Rc(i,e){e.style.animationDuration=i.facet(sn).cursorBlinkRate+\"ms\"}var ag=Iu({above:!1,markers(i){return i.state.selection.ranges.map(e=>e.empty?[]:nn.forRange(i,\"cm-selectionBackground\",e)).reduce((e,t)=>e.concat(t))},update(i,e){return i.docChanged||i.selectionSet||i.viewportChanged||Pu(i)},class:\"cm-selectionLayer\"}),hg=Wt.highest(K.theme({\".cm-line\":{\"& ::selection, &::selection\":{backgroundColor:\"transparent !important\"},caretColor:\"transparent !important\"},\".cm-content\":{caretColor:\"transparent !important\",\"& :focus\":{caretColor:\"initial !important\",\"&::selection, & ::selection\":{backgroundColor:\"Highlight !important\"}}}})),Nu=te.define({map(i,e){return i==null?null:e.mapPos(i)}}),Gi=Re.define({create(){return null},update(i,e){return 
i!=null&&(i=e.changes.mapPos(i)),e.effects.reduce((t,r)=>r.is(Nu)?r.value:t,i)}}),cg=Se.fromClass(class{constructor(i){this.view=i,this.cursor=null,this.measureReq={read:this.readPos.bind(this),write:this.drawCursor.bind(this)}}update(i){var e;let t=i.state.field(Gi);t==null?this.cursor!=null&&((e=this.cursor)===null||e===void 0||e.remove(),this.cursor=null):(this.cursor||(this.cursor=this.view.scrollDOM.appendChild(document.createElement(\"div\")),this.cursor.className=\"cm-dropCursor\"),(i.startState.field(Gi)!=t||i.docChanged||i.geometryChanged)&&this.view.requestMeasure(this.measureReq))}readPos(){let{view:i}=this,e=i.state.field(Gi),t=e!=null&&i.coordsAtPos(e);if(!t)return null;let r=i.scrollDOM.getBoundingClientRect();return{left:t.left-r.left+i.scrollDOM.scrollLeft*i.scaleX,top:t.top-r.top+i.scrollDOM.scrollTop*i.scaleY,height:t.bottom-t.top}}drawCursor(i){if(this.cursor){let{scaleX:e,scaleY:t}=this.view;i?(this.cursor.style.left=i.left/e+\"px\",this.cursor.style.top=i.top/t+\"px\",this.cursor.style.height=i.height/t+\"px\"):this.cursor.style.left=\"-100000px\"}}destroy(){this.cursor&&this.cursor.remove()}setDropPos(i){this.view.state.field(Gi)!=i&&this.view.dispatch({effects:Nu.of(i)})}},{eventObservers:{dragover(i){this.setDropPos(this.view.posAtCoords({x:i.clientX,y:i.clientY}))},dragleave(i){(i.target==this.view.contentDOM||!this.view.contentDOM.contains(i.relatedTarget))&&this.setDropPos(null)},dragend(){this.setDropPos(null)},drop(){this.setDropPos(null)}}});function Fu(){return[Gi,cg]}function Pc(i,e,t,r,n){e.lastIndex=0;for(let s=i.iterRange(t,r),o=t,l;!s.next().done;o+=s.value.length)if(!s.lineBreak)for(;l=e.exec(s.value);)n(o+l.index,l)}function ug(i,e){let t=i.visibleRanges;if(t.length==1&&t[0].from==i.viewport.from&&t[0].to==i.viewport.to)return t;let r=[];for(let{from:n,to:s}of t)n=Math.max(i.state.doc.lineAt(n).from,n-e),s=Math.min(i.state.doc.lineAt(s).to,s+e),r.length&&r[r.length-1].to>=n?r[r.length-1].to=s:r.push({from:n,to:s});return r}var 
va=class{constructor(e){let{regexp:t,decoration:r,decorate:n,boundary:s,maxLength:o=1e3}=e;if(!t.global)throw new RangeError(\"The regular expression given to MatchDecorator should have its 'g' flag set\");if(this.regexp=t,n)this.addMatch=(l,a,h,c)=>n(c,h,h+l[0].length,l,a);else if(typeof r==\"function\")this.addMatch=(l,a,h,c)=>{let u=r(l,a,h);u&&c(h,h+l[0].length,u)};else if(r)this.addMatch=(l,a,h,c)=>c(h,h+l[0].length,r);else throw new RangeError(\"Either 'decorate' or 'decoration' should be provided to MatchDecorator\");this.boundary=s,this.maxLength=o}createDeco(e){let t=new Et,r=t.add.bind(t);for(let{from:n,to:s}of ug(e,this.maxLength))Pc(e.state.doc,this.regexp,n,s,(o,l)=>this.addMatch(l,e,o,r));return t.finish()}updateDeco(e,t){let r=1e9,n=-1;return e.docChanged&&e.changes.iterChanges((s,o,l,a)=>{a>=e.view.viewport.from&&l<=e.view.viewport.to&&(r=Math.min(l,r),n=Math.max(a,n))}),e.viewportMoved||n-r>1e3?this.createDeco(e.view):n>-1?this.updateRange(e.view,t.map(e.changes),r,n):t}updateRange(e,t,r,n){for(let s of e.visibleRanges){let o=Math.max(s.from,r),l=Math.min(s.to,n);if(l>=o){let a=e.state.doc.lineAt(o),h=a.to<l?e.state.doc.lineAt(l):a,c=Math.max(s.from,a.from),u=Math.min(s.to,h.to);if(this.boundary){for(;o>a.from;o--)if(this.boundary.test(a.text[o-1-a.from])){c=o;break}for(;l<h.to;l++)if(this.boundary.test(h.text[l-h.from])){u=l;break}}let d=[],p,v=(y,w,S)=>d.push(S.range(y,w));if(a==h)for(this.regexp.lastIndex=c-a.from;(p=this.regexp.exec(a.text))&&p.index<u-a.from;)this.addMatch(p,e,p.index+a.from,v);else Pc(e.state.doc,this.regexp,c,u,(y,w)=>this.addMatch(w,e,y,v));t=t.update({filterFrom:c,filterTo:u,filter:(y,w)=>y<c||w>u,add:d})}}return t}},ba=/x/.unicode!=null?\"gu\":\"g\",fg=new RegExp(`[\\0-\\b\n-\u001f\\x7F-\\x9F\\xAD\\u061C\\u200B\\u200E\\u200F\\u2028\\u2029\\u202D\\u202E\\u2066\\u2067\\u2069\\uFEFF\\uFFF9-\\uFFFC]`,ba),dg={0:\"null\",7:\"bell\",8:\"backspace\",10:\"newline\",11:\"vertical tab\",13:\"carriage 
return\",27:\"escape\",8203:\"zero width space\",8204:\"zero width non-joiner\",8205:\"zero width joiner\",8206:\"left-to-right mark\",8207:\"right-to-left mark\",8232:\"line separator\",8237:\"left-to-right override\",8238:\"right-to-left override\",8294:\"left-to-right isolate\",8295:\"right-to-left isolate\",8297:\"pop directional isolate\",8233:\"paragraph separator\",65279:\"zero width no-break space\",65532:\"object replacement\"},Dl=null;function mg(){var i;if(Dl==null&&typeof document<\"u\"&&document.body){let e=document.body.style;Dl=((i=e.tabSize)!==null&&i!==void 0?i:e.MozTabSize)!=null}return Dl||!1}var fs=H.define({combine(i){let e=it(i,{render:null,specialChars:fg,addSpecialChars:null});return(e.replaceTabs=!mg())&&(e.specialChars=new RegExp(\"\t|\"+e.specialChars.source,ba)),e.addSpecialChars&&(e.specialChars=new RegExp(e.specialChars.source+\"|\"+e.addSpecialChars.source,ba)),e}});function Hu(i={}){return[fs.of(i),pg()]}var Nc=null;function pg(){return Nc||(Nc=Se.fromClass(class{constructor(i){this.view=i,this.decorations=X.none,this.decorationCache=Object.create(null),this.decorator=this.makeDecorator(i.state.facet(fs)),this.decorations=this.decorator.createDeco(i)}makeDecorator(i){return new va({regexp:i.specialChars,decoration:(e,t,r)=>{let{doc:n}=t.state,s=Xe(e[0],0);if(s==9){let o=n.lineAt(r),l=t.state.tabSize,a=vr(o.text,l,r-o.from);return X.replace({widget:new xa((l-a%l)*this.view.defaultCharacterWidth/this.view.scaleX)})}return this.decorationCache[s]||(this.decorationCache[s]=X.replace({widget:new ya(i,s)}))},boundary:i.replaceTabs?void 0:/[^]/})}update(i){let e=i.state.facet(fs);i.startState.facet(fs)!=e?(this.decorator=this.makeDecorator(e),this.decorations=this.decorator.createDeco(i.view)):this.decorations=this.decorator.updateDeco(i,this.decorations)}},{decorations:i=>i.decorations}))}var gg=\"\\u2022\";function vg(i){return i>=32?gg:i==10?\"\\u2424\":String.fromCharCode(9216+i)}var ya=class extends 
kt{constructor(e,t){super(),this.options=e,this.code=t}eq(e){return e.code==this.code}toDOM(e){let t=vg(this.code),r=e.state.phrase(\"Control character\")+\" \"+(dg[this.code]||\"0x\"+this.code.toString(16)),n=this.options.render&&this.options.render(this.code,r,t);if(n)return n;let s=document.createElement(\"span\");return s.textContent=t,s.title=r,s.setAttribute(\"aria-label\",r),s.className=\"cm-specialChar\",s}ignoreEvent(){return!1}},xa=class extends kt{constructor(e){super(),this.width=e}eq(e){return e.width==this.width}toDOM(){let e=document.createElement(\"span\");return e.textContent=\"\t\",e.className=\"cm-tab\",e.style.width=this.width+\"px\",e}ignoreEvent(){return!1}};function qu(){return yg}var bg=X.line({class:\"cm-activeLine\"}),yg=Se.fromClass(class{constructor(i){this.decorations=this.getDeco(i)}update(i){(i.docChanged||i.selectionSet)&&(this.decorations=this.getDeco(i.view))}getDeco(i){let e=-1,t=[];for(let r of i.state.selection.ranges){let n=i.lineBlockAt(r.head);n.from>e&&(t.push(bg.range(n.from)),e=n.from)}return X.set(t)}},{decorations:i=>i.decorations});var ls=\"-10000px\",Ds=class{constructor(e,t,r,n){this.facet=t,this.createTooltipView=r,this.removeTooltipView=n,this.input=e.state.facet(t),this.tooltips=this.input.filter(o=>o);let s=null;this.tooltipViews=this.tooltips.map(o=>s=r(o,s))}update(e,t){var r;let n=e.state.facet(this.facet),s=n.filter(a=>a);if(n===this.input){for(let a of this.tooltipViews)a.update&&a.update(e);return!1}let o=[],l=t?[]:null;for(let a=0;a<s.length;a++){let h=s[a],c=-1;if(h){for(let u=0;u<this.tooltips.length;u++){let d=this.tooltips[u];d&&d.create==h.create&&(c=u)}if(c<0)o[a]=this.createTooltipView(h,a?o[a-1]:null),l&&(l[a]=!!h.above);else{let u=o[a]=this.tooltipViews[c];l&&(l[a]=t[c]),u.update&&u.update(e)}}}for(let a of this.tooltipViews)o.indexOf(a)<0&&(this.removeTooltipView(a),(r=a.destroy)===null||r===void 0||r.call(a));return 
t&&(l.forEach((a,h)=>t[h]=a),t.length=l.length),this.input=n,this.tooltips=s,this.tooltipViews=o,!0}};function xg(i){let e=i.dom.ownerDocument.documentElement;return{top:0,left:0,bottom:e.clientHeight,right:e.clientWidth}}var Bl=H.define({combine:i=>{var e,t,r;return{position:W.ios?\"absolute\":((e=i.find(n=>n.position))===null||e===void 0?void 0:e.position)||\"fixed\",parent:((t=i.find(n=>n.parent))===null||t===void 0?void 0:t.parent)||null,tooltipSpace:((r=i.find(n=>n.tooltipSpace))===null||r===void 0?void 0:r.tooltipSpace)||xg}}}),Fc=new WeakMap,Ia=Se.fromClass(class{constructor(i){this.view=i,this.above=[],this.inView=!0,this.madeAbsolute=!1,this.lastTransaction=0,this.measureTimeout=-1;let e=i.state.facet(Bl);this.position=e.position,this.parent=e.parent,this.classes=i.themeClasses,this.createContainer(),this.measureReq={read:this.readMeasure.bind(this),write:this.writeMeasure.bind(this),key:this},this.resizeObserver=typeof ResizeObserver==\"function\"?new ResizeObserver(()=>this.measureSoon()):null,this.manager=new Ds(i,hn,(t,r)=>this.createTooltip(t,r),t=>{this.resizeObserver&&this.resizeObserver.unobserve(t.dom),t.dom.remove()}),this.above=this.manager.tooltips.map(t=>!!t.above),this.intersectionObserver=typeof IntersectionObserver==\"function\"?new IntersectionObserver(t=>{Date.now()>this.lastTransaction-50&&t.length>0&&t[t.length-1].intersectionRatio<1&&this.measureSoon()},{threshold:[1]}):null,this.observeIntersection(),i.win.addEventListener(\"resize\",this.measureSoon=this.measureSoon.bind(this)),this.maybeMeasure()}createContainer(){this.parent?(this.container=document.createElement(\"div\"),this.container.style.position=\"relative\",this.container.className=this.view.themeClasses,this.parent.appendChild(this.container)):this.container=this.view.dom}observeIntersection(){if(this.intersectionObserver){this.intersectionObserver.disconnect();for(let i of 
this.manager.tooltipViews)this.intersectionObserver.observe(i.dom)}}measureSoon(){this.measureTimeout<0&&(this.measureTimeout=setTimeout(()=>{this.measureTimeout=-1,this.maybeMeasure()},50))}update(i){i.transactions.length&&(this.lastTransaction=Date.now());let e=this.manager.update(i,this.above);e&&this.observeIntersection();let t=e||i.geometryChanged,r=i.state.facet(Bl);if(r.position!=this.position&&!this.madeAbsolute){this.position=r.position;for(let n of this.manager.tooltipViews)n.dom.style.position=this.position;t=!0}if(r.parent!=this.parent){this.parent&&this.container.remove(),this.parent=r.parent,this.createContainer();for(let n of this.manager.tooltipViews)this.container.appendChild(n.dom);t=!0}else this.parent&&this.view.themeClasses!=this.classes&&(this.classes=this.container.className=this.view.themeClasses);t&&this.maybeMeasure()}createTooltip(i,e){let t=i.create(this.view),r=e?e.dom:null;if(t.dom.classList.add(\"cm-tooltip\"),i.arrow&&!t.dom.querySelector(\".cm-tooltip > .cm-tooltip-arrow\")){let n=document.createElement(\"div\");n.className=\"cm-tooltip-arrow\",t.dom.appendChild(n)}return t.dom.style.position=this.position,t.dom.style.top=ls,t.dom.style.left=\"0px\",this.container.insertBefore(t.dom,r),t.mount&&t.mount(this.view),this.resizeObserver&&this.resizeObserver.observe(t.dom),t}destroy(){var i,e,t;this.view.win.removeEventListener(\"resize\",this.measureSoon);for(let r of this.manager.tooltipViews)r.dom.remove(),(i=r.destroy)===null||i===void 0||i.call(r);this.parent&&this.container.remove(),(e=this.resizeObserver)===null||e===void 0||e.disconnect(),(t=this.intersectionObserver)===null||t===void 0||t.disconnect(),clearTimeout(this.measureTimeout)}readMeasure(){let i=1,e=1,t=!1;if(this.position==\"fixed\"&&this.manager.tooltipViews.length){let{dom:s}=this.manager.tooltipViews[0];if(W.safari){let o=s.getBoundingClientRect();t=Math.abs(o.top+1e4)>1||Math.abs(o.left)>1}else 
t=!!s.offsetParent&&s.offsetParent!=this.container.ownerDocument.body}if(t||this.position==\"absolute\")if(this.parent){let s=this.parent.getBoundingClientRect();s.width&&s.height&&(i=s.width/this.parent.offsetWidth,e=s.height/this.parent.offsetHeight)}else({scaleX:i,scaleY:e}=this.view.viewState);let r=this.view.scrollDOM.getBoundingClientRect(),n=za(this.view);return{visible:{left:r.left+n.left,top:r.top+n.top,right:r.right-n.right,bottom:r.bottom-n.bottom},parent:this.parent?this.container.getBoundingClientRect():this.view.dom.getBoundingClientRect(),pos:this.manager.tooltips.map((s,o)=>{let l=this.manager.tooltipViews[o];return l.getCoords?l.getCoords(s.pos):this.view.coordsAtPos(s.pos)}),size:this.manager.tooltipViews.map(({dom:s})=>s.getBoundingClientRect()),space:this.view.state.facet(Bl).tooltipSpace(this.view),scaleX:i,scaleY:e,makeAbsolute:t}}writeMeasure(i){var e;if(i.makeAbsolute){this.madeAbsolute=!0,this.position=\"absolute\";for(let l of this.manager.tooltipViews)l.dom.style.position=\"absolute\"}let{visible:t,space:r,scaleX:n,scaleY:s}=i,o=[];for(let l=0;l<this.manager.tooltips.length;l++){let a=this.manager.tooltips[l],h=this.manager.tooltipViews[l],{dom:c}=h,u=i.pos[l],d=i.size[l];if(!u||a.clip!==!1&&(u.bottom<=Math.max(t.top,r.top)||u.top>=Math.min(t.bottom,r.bottom)||u.right<Math.max(t.left,r.left)-.1||u.left>Math.min(t.right,r.right)+.1)){c.style.top=ls;continue}let p=a.arrow?h.dom.querySelector(\".cm-tooltip-arrow\"):null,v=p?7:0,y=d.right-d.left,w=(e=Fc.get(h))!==null&&e!==void 0?e:d.bottom-d.top,S=h.offset||kg,A=this.view.textDirection==he.LTR,M=d.width>r.right-r.left?A?r.left:r.right-d.width:A?Math.max(r.left,Math.min(u.left-(p?14:0)+S.x,r.right-y)):Math.min(Math.max(r.left,u.left-y+(p?14:0)-S.x),r.right-y),E=this.above[l];!a.strictSide&&(E?u.top-w-v-S.y<r.top:u.bottom+w+v+S.y>r.bottom)&&E==r.bottom-u.bottom>u.top-r.top&&(E=this.above[l]=!E);let 
T=(E?u.top-r.top:r.bottom-u.bottom)-v;if(T<w&&h.resize!==!1){if(T<this.view.defaultLineHeight){c.style.top=ls;continue}Fc.set(h,w),c.style.height=(w=T)/s+\"px\"}else c.style.height&&(c.style.height=\"\");let B=E?u.top-w-v-S.y:u.bottom+v+S.y,D=M+y;if(h.overlap!==!0)for(let V of o)V.left<D&&V.right>M&&V.top<B+w&&V.bottom>B&&(B=E?V.top-w-2-v:V.bottom+v+2);if(this.position==\"absolute\"?(c.style.top=(B-i.parent.top)/s+\"px\",Hc(c,(M-i.parent.left)/n)):(c.style.top=B/s+\"px\",Hc(c,M/n)),p){let V=u.left+(A?S.x:-S.x)-(M+14-7);p.style.left=V/n+\"px\"}h.overlap!==!0&&o.push({left:M,top:B,right:D,bottom:B+w}),c.classList.toggle(\"cm-tooltip-above\",E),c.classList.toggle(\"cm-tooltip-below\",!E),h.positioned&&h.positioned(i.space)}}maybeMeasure(){if(this.manager.tooltips.length&&(this.view.inView&&this.view.requestMeasure(this.measureReq),this.inView!=this.view.inView&&(this.inView=this.view.inView,!this.inView)))for(let i of this.manager.tooltipViews)i.dom.style.top=ls}},{eventObservers:{scroll(){this.maybeMeasure()}}});function Hc(i,e){let t=parseInt(i.style.left,10);(isNaN(t)||Math.abs(e-t)>1)&&(i.style.left=e+\"px\")}var wg=K.baseTheme({\".cm-tooltip\":{zIndex:500,boxSizing:\"border-box\"},\"&light .cm-tooltip\":{border:\"1px solid #bbb\",backgroundColor:\"#f5f5f5\"},\"&light .cm-tooltip-section:not(:first-child)\":{borderTop:\"1px solid #bbb\"},\"&dark .cm-tooltip\":{backgroundColor:\"#333338\",color:\"white\"},\".cm-tooltip-arrow\":{height:\"7px\",width:`${7*2}px`,position:\"absolute\",zIndex:-1,overflow:\"hidden\",\"&:before, &:after\":{content:\"''\",position:\"absolute\",width:0,height:0,borderLeft:\"7px solid transparent\",borderRight:\"7px solid transparent\"},\".cm-tooltip-above &\":{bottom:\"-7px\",\"&:before\":{borderTop:\"7px solid #bbb\"},\"&:after\":{borderTop:\"7px solid #f5f5f5\",bottom:\"1px\"}},\".cm-tooltip-below &\":{top:\"-7px\",\"&:before\":{borderBottom:\"7px solid #bbb\"},\"&:after\":{borderBottom:\"7px solid #f5f5f5\",top:\"1px\"}}},\"&dark 
.cm-tooltip .cm-tooltip-arrow\":{\"&:before\":{borderTopColor:\"#333338\",borderBottomColor:\"#333338\"},\"&:after\":{borderTopColor:\"transparent\",borderBottomColor:\"transparent\"}}}),kg={x:0,y:0},hn=H.define({enables:[Ia,wg]}),Bs=H.define({combine:i=>i.reduce((e,t)=>e.concat(t),[])}),Es=class i{static create(e){return new i(e)}constructor(e){this.view=e,this.mounted=!1,this.dom=document.createElement(\"div\"),this.dom.classList.add(\"cm-tooltip-hover\"),this.manager=new Ds(e,Bs,(t,r)=>this.createHostedView(t,r),t=>t.dom.remove())}createHostedView(e,t){let r=e.create(this.view);return r.dom.classList.add(\"cm-tooltip-section\"),this.dom.insertBefore(r.dom,t?t.dom.nextSibling:this.dom.firstChild),this.mounted&&r.mount&&r.mount(this.view),r}mount(e){for(let t of this.manager.tooltipViews)t.mount&&t.mount(e);this.mounted=!0}positioned(e){for(let t of this.manager.tooltipViews)t.positioned&&t.positioned(e)}update(e){this.manager.update(e)}destroy(){var e;for(let t of this.manager.tooltipViews)(e=t.destroy)===null||e===void 0||e.call(t)}passProp(e){let t;for(let r of this.manager.tooltipViews){let n=r[e];if(n!==void 0){if(t===void 0)t=n;else if(t!==n)return}}return t}get offset(){return this.passProp(\"offset\")}get getCoords(){return this.passProp(\"getCoords\")}get overlap(){return this.passProp(\"overlap\")}get resize(){return this.passProp(\"resize\")}},Sg=hn.compute([Bs],i=>{let e=i.facet(Bs);return e.length===0?null:{pos:Math.min(...e.map(t=>t.pos)),end:Math.max(...e.map(t=>{var r;return(r=t.end)!==null&&r!==void 
0?r:t.pos})),create:Es.create,above:e[0].above,arrow:e.some(t=>t.arrow)}}),wa=class{constructor(e,t,r,n,s){this.view=e,this.source=t,this.field=r,this.setHover=n,this.hoverTime=s,this.hoverTimeout=-1,this.restartTimeout=-1,this.pending=null,this.lastMove={x:0,y:0,target:e.dom,time:0},this.checkHover=this.checkHover.bind(this),e.dom.addEventListener(\"mouseleave\",this.mouseleave=this.mouseleave.bind(this)),e.dom.addEventListener(\"mousemove\",this.mousemove=this.mousemove.bind(this))}update(){this.pending&&(this.pending=null,clearTimeout(this.restartTimeout),this.restartTimeout=setTimeout(()=>this.startHover(),20))}get active(){return this.view.state.field(this.field)}checkHover(){if(this.hoverTimeout=-1,this.active.length)return;let e=Date.now()-this.lastMove.time;e<this.hoverTime?this.hoverTimeout=setTimeout(this.checkHover,this.hoverTime-e):this.startHover()}startHover(){clearTimeout(this.restartTimeout);let{view:e,lastMove:t}=this,r=e.docView.tile.nearest(t.target);if(!r)return;let n,s=1;if(r.isWidget())n=r.posAtStart;else{if(n=e.posAtCoords(t),n==null)return;let l=e.coordsAtPos(n);if(!l||t.y<l.top||t.y>l.bottom||t.x<l.left-e.defaultCharacterWidth||t.x>l.right+e.defaultCharacterWidth)return;let a=e.bidiSpans(e.state.doc.lineAt(n)).find(c=>c.from<=n&&c.to>=n),h=a&&a.dir==he.RTL?-1:1;s=t.x<l.left?-h:h}let o=this.source(e,n,s);if(o?.then){let l=this.pending={pos:n};o.then(a=>{this.pending==l&&(this.pending=null,a&&!(Array.isArray(a)&&!a.length)&&e.dispatch({effects:this.setHover.of(Array.isArray(a)?a:[a])}))},a=>Fe(e.state,a,\"hover tooltip\"))}else o&&!(Array.isArray(o)&&!o.length)&&e.dispatch({effects:this.setHover.of(Array.isArray(o)?o:[o])})}get tooltip(){let e=this.view.plugin(Ia),t=e?e.manager.tooltips.findIndex(r=>r.create==Es.create):-1;return t>-1?e.manager.tooltipViews[t]:null}mousemove(e){var 
t,r;this.lastMove={x:e.clientX,y:e.clientY,target:e.target,time:Date.now()},this.hoverTimeout<0&&(this.hoverTimeout=setTimeout(this.checkHover,this.hoverTime));let{active:n,tooltip:s}=this;if(n.length&&s&&!Cg(s.dom,e)||this.pending){let{pos:o}=n[0]||this.pending,l=(r=(t=n[0])===null||t===void 0?void 0:t.end)!==null&&r!==void 0?r:o;(o==l?this.view.posAtCoords(this.lastMove)!=o:!Ag(this.view,o,l,e.clientX,e.clientY))&&(this.view.dispatch({effects:this.setHover.of([])}),this.pending=null)}}mouseleave(e){clearTimeout(this.hoverTimeout),this.hoverTimeout=-1;let{active:t}=this;if(t.length){let{tooltip:r}=this;r&&r.dom.contains(e.relatedTarget)?this.watchTooltipLeave(r.dom):this.view.dispatch({effects:this.setHover.of([])})}}watchTooltipLeave(e){let t=r=>{e.removeEventListener(\"mouseleave\",t),this.active.length&&!this.view.dom.contains(r.relatedTarget)&&this.view.dispatch({effects:this.setHover.of([])})};e.addEventListener(\"mouseleave\",t)}destroy(){clearTimeout(this.hoverTimeout),clearTimeout(this.restartTimeout),this.view.dom.removeEventListener(\"mouseleave\",this.mouseleave),this.view.dom.removeEventListener(\"mousemove\",this.mousemove)}},as=4;function Cg(i,e){let{left:t,right:r,top:n,bottom:s}=i.getBoundingClientRect(),o;if(o=i.querySelector(\".cm-tooltip-arrow\")){let l=o.getBoundingClientRect();n=Math.min(l.top,n),s=Math.max(l.bottom,s)}return e.clientX>=t-as&&e.clientX<=r+as&&e.clientY>=n-as&&e.clientY<=s+as}function Ag(i,e,t,r,n,s){let o=i.scrollDOM.getBoundingClientRect(),l=i.documentTop+i.documentPadding.top+i.contentHeight;if(o.left>r||o.right<r||o.top>n||Math.min(o.bottom,l)<n)return!1;let a=i.posAtCoords({x:r,y:n},!1);return a>=e&&a<=t}function Ps(i,e={}){let t=te.define(),r=Re.define({create(){return[]},update(n,s){if(n.length&&(e.hideOnChange&&(s.docChanged||s.selection)?n=[]:e.hideOn&&(n=n.filter(o=>!e.hideOn(s,o))),s.docChanged)){let o=[];for(let l of n){let a=s.changes.mapPos(l.pos,-1,We.TrackDel);if(a!=null){let 
h=Object.assign(Object.create(null),l);h.pos=a,h.end!=null&&(h.end=s.changes.mapPos(h.end)),o.push(h)}}n=o}for(let o of s.effects)o.is(t)&&(n=o.value),o.is(Mg)&&(n=[]);return n},provide:n=>Bs.from(n)});return{active:r,extension:[r,Se.define(n=>new wa(n,i,r,t,e.hoverTime||300)),Sg]}}function Ra(i,e){let t=i.plugin(Ia);if(!t)return null;let r=t.manager.tooltips.indexOf(e);return r<0?null:t.manager.tooltipViews[r]}var Mg=te.define();var qc=H.define({combine(i){let e,t;for(let r of i)e=e||r.topContainer,t=t||r.bottomContainer;return{topContainer:e,bottomContainer:t}}});var Tg=Se.fromClass(class{constructor(i){this.input=i.state.facet(on),this.specs=this.input.filter(t=>t),this.panels=this.specs.map(t=>t(i));let e=i.state.facet(qc);this.top=new di(i,!0,e.topContainer),this.bottom=new di(i,!1,e.bottomContainer),this.top.sync(this.panels.filter(t=>t.top)),this.bottom.sync(this.panels.filter(t=>!t.top));for(let t of this.panels)t.dom.classList.add(\"cm-panel\"),t.mount&&t.mount()}update(i){let e=i.state.facet(qc);this.top.container!=e.topContainer&&(this.top.sync([]),this.top=new di(i.view,!0,e.topContainer)),this.bottom.container!=e.bottomContainer&&(this.bottom.sync([]),this.bottom=new di(i.view,!1,e.bottomContainer)),this.top.syncClasses(),this.bottom.syncClasses();let t=i.state.facet(on);if(t!=this.input){let r=t.filter(a=>a),n=[],s=[],o=[],l=[];for(let a of r){let h=this.specs.indexOf(a),c;h<0?(c=a(i.view),l.push(c)):(c=this.panels[h],c.update&&c.update(i)),n.push(c),(c.top?s:o).push(c)}this.specs=r,this.panels=n,this.top.sync(s),this.bottom.sync(o);for(let a of l)a.dom.classList.add(\"cm-panel\"),a.mount&&a.mount()}else for(let r of this.panels)r.update&&r.update(i)}destroy(){this.top.sync([]),this.bottom.sync([])}},{provide:i=>K.scrollMargins.of(e=>{let t=e.plugin(i);return t&&{top:t.top.scrollMargin(),bottom:t.bottom.scrollMargin()}})}),di=class{constructor(e,t,r){this.view=e,this.top=t,this.container=r,this.dom=void 
0,this.classes=\"\",this.panels=[],this.syncClasses()}sync(e){for(let t of this.panels)t.destroy&&e.indexOf(t)<0&&t.destroy();this.panels=e,this.syncDOM()}syncDOM(){if(this.panels.length==0){this.dom&&(this.dom.remove(),this.dom=void 0);return}if(!this.dom){this.dom=document.createElement(\"div\"),this.dom.className=this.top?\"cm-panels cm-panels-top\":\"cm-panels cm-panels-bottom\",this.dom.style[this.top?\"top\":\"bottom\"]=\"0\";let t=this.container||this.view.dom;t.insertBefore(this.dom,this.top?t.firstChild:null)}let e=this.dom.firstChild;for(let t of this.panels)if(t.dom.parentNode==this.dom){for(;e!=t.dom;)e=Wc(e);e=e.nextSibling}else this.dom.insertBefore(t.dom,e);for(;e;)e=Wc(e)}scrollMargin(){return!this.dom||this.container?0:Math.max(0,this.top?this.dom.getBoundingClientRect().bottom-Math.max(0,this.view.scrollDOM.getBoundingClientRect().top):Math.min(innerHeight,this.view.scrollDOM.getBoundingClientRect().bottom)-this.dom.getBoundingClientRect().top)}syncClasses(){if(!(!this.container||this.classes==this.view.themeClasses)){for(let e of this.classes.split(\" \"))e&&this.container.classList.remove(e);for(let e of(this.classes=this.view.themeClasses).split(\" \"))e&&this.container.classList.add(e)}}};function Wc(i){let e=i.nextSibling;return i.remove(),e}var on=H.define({enables:Tg});var Ct=class extends gt{compare(e){return this==e||this.constructor==e.constructor&&this.eq(e)}eq(e){return!1}destroy(e){}};Ct.prototype.elementClass=\"\";Ct.prototype.toDOM=void 0;Ct.prototype.mapMode=We.TrackBefore;Ct.prototype.startSide=Ct.prototype.endSide=-1;Ct.prototype.point=!0;var ds=H.define(),Dg=H.define();var ms=H.define();var ka=H.define({combine:i=>i.some(e=>e)});function Bg(i){let e=[Eg];return i&&i.fixed===!1&&e.push(ka.of(!0)),e}var Eg=Se.fromClass(class{constructor(i){this.view=i,this.domAfter=null,this.prevViewport=i.viewport,this.dom=document.createElement(\"div\"),this.dom.className=\"cm-gutters 
cm-gutters-before\",this.dom.setAttribute(\"aria-hidden\",\"true\"),this.dom.style.minHeight=this.view.contentHeight/this.view.scaleY+\"px\",this.gutters=i.state.facet(ms).map(e=>new Os(i,e)),this.fixed=!i.state.facet(ka);for(let e of this.gutters)e.config.side==\"after\"?this.getDOMAfter().appendChild(e.dom):this.dom.appendChild(e.dom);this.fixed&&(this.dom.style.position=\"sticky\"),this.syncGutters(!1),i.scrollDOM.insertBefore(this.dom,i.contentDOM)}getDOMAfter(){return this.domAfter||(this.domAfter=document.createElement(\"div\"),this.domAfter.className=\"cm-gutters cm-gutters-after\",this.domAfter.setAttribute(\"aria-hidden\",\"true\"),this.domAfter.style.minHeight=this.view.contentHeight/this.view.scaleY+\"px\",this.domAfter.style.position=this.fixed?\"sticky\":\"\",this.view.scrollDOM.appendChild(this.domAfter)),this.domAfter}update(i){if(this.updateGutters(i)){let e=this.prevViewport,t=i.view.viewport,r=Math.min(e.to,t.to)-Math.max(e.from,t.from);this.syncGutters(r<(t.to-t.from)*.8)}if(i.geometryChanged){let e=this.view.contentHeight/this.view.scaleY+\"px\";this.dom.style.minHeight=e,this.domAfter&&(this.domAfter.style.minHeight=e)}this.view.state.facet(ka)!=!this.fixed&&(this.fixed=!this.fixed,this.dom.style.position=this.fixed?\"sticky\":\"\",this.domAfter&&(this.domAfter.style.position=this.fixed?\"sticky\":\"\")),this.prevViewport=i.view.viewport}syncGutters(i){let e=this.dom.nextSibling;i&&(this.dom.remove(),this.domAfter&&this.domAfter.remove());let t=le.iter(this.view.state.facet(ds),this.view.viewport.from),r=[],n=this.gutters.map(s=>new Ca(s,this.view.viewport,-this.view.documentPadding.top));for(let s of this.view.viewportLineBlocks)if(r.length&&(r=[]),Array.isArray(s.type)){let o=!0;for(let l of s.type)if(l.type==Ve.Text&&o){Sa(t,r,l.from);for(let a of n)a.line(this.view,l,r);o=!1}else if(l.widget)for(let a of n)a.widget(this.view,l)}else if(s.type==Ve.Text){Sa(t,r,s.from);for(let o of n)o.line(this.view,s,r)}else if(s.widget)for(let o of 
n)o.widget(this.view,s);for(let s of n)s.finish();i&&(this.view.scrollDOM.insertBefore(this.dom,e),this.domAfter&&this.view.scrollDOM.appendChild(this.domAfter))}updateGutters(i){let e=i.startState.facet(ms),t=i.state.facet(ms),r=i.docChanged||i.heightChanged||i.viewportChanged||!le.eq(i.startState.facet(ds),i.state.facet(ds),i.view.viewport.from,i.view.viewport.to);if(e==t)for(let n of this.gutters)n.update(i)&&(r=!0);else{r=!0;let n=[];for(let s of t){let o=e.indexOf(s);o<0?n.push(new Os(this.view,s)):(this.gutters[o].update(i),n.push(this.gutters[o]))}for(let s of this.gutters)s.dom.remove(),n.indexOf(s)<0&&s.destroy();for(let s of n)s.config.side==\"after\"?this.getDOMAfter().appendChild(s.dom):this.dom.appendChild(s.dom);this.gutters=n}return r}destroy(){for(let i of this.gutters)i.destroy();this.dom.remove(),this.domAfter&&this.domAfter.remove()}},{provide:i=>K.scrollMargins.of(e=>{let t=e.plugin(i);if(!t||t.gutters.length==0||!t.fixed)return null;let r=t.dom.offsetWidth*e.scaleX,n=t.domAfter?t.domAfter.offsetWidth*e.scaleX:0;return e.textDirection==he.LTR?{left:r,right:n}:{right:r,left:n}})});function Vc(i){return Array.isArray(i)?i:[i]}function Sa(i,e,t){for(;i.value&&i.from<=t;)i.from==t&&e.push(i.value),i.next()}var Ca=class{constructor(e,t,r){this.gutter=e,this.height=r,this.i=0,this.cursor=le.iter(e.markers,t.from)}addElement(e,t,r){let{gutter:n}=this,s=(t.top-this.height)/e.scaleY,o=t.height/e.scaleY;if(this.i==n.elements.length){let l=new zs(e,o,s,r);n.elements.push(l),n.dom.appendChild(l.dom)}else n.elements[this.i].update(e,o,s,r);this.height=t.bottom,this.i++}line(e,t,r){let n=[];Sa(this.cursor,n,t.from),r.length&&(n=n.concat(r));let s=this.gutter.config.lineMarker(e,t,n);s&&n.unshift(s);let o=this.gutter;n.length==0&&!o.config.renderEmptyElements||this.addElement(e,t,n)}widget(e,t){let r=this.gutter.config.widgetMarker(e,t.widget,t),n=r?[r]:null;for(let s of e.state.facet(Dg)){let 
o=s(e,t.widget,t);o&&(n||(n=[])).push(o)}n&&this.addElement(e,t,n)}finish(){let e=this.gutter;for(;e.elements.length>this.i;){let t=e.elements.pop();e.dom.removeChild(t.dom),t.destroy()}}},Os=class{constructor(e,t){this.view=e,this.config=t,this.elements=[],this.spacer=null,this.dom=document.createElement(\"div\"),this.dom.className=\"cm-gutter\"+(this.config.class?\" \"+this.config.class:\"\");for(let r in t.domEventHandlers)this.dom.addEventListener(r,n=>{let s=n.target,o;if(s!=this.dom&&this.dom.contains(s)){for(;s.parentNode!=this.dom;)s=s.parentNode;let a=s.getBoundingClientRect();o=(a.top+a.bottom)/2}else o=n.clientY;let l=e.lineBlockAtHeight(o-e.documentTop);t.domEventHandlers[r](e,l,n)&&n.preventDefault()});this.markers=Vc(t.markers(e)),t.initialSpacer&&(this.spacer=new zs(e,0,0,[t.initialSpacer(e)]),this.dom.appendChild(this.spacer.dom),this.spacer.dom.style.cssText+=\"visibility: hidden; pointer-events: none\")}update(e){let t=this.markers;if(this.markers=Vc(this.config.markers(e.view)),this.spacer&&this.config.updateSpacer){let n=this.config.updateSpacer(this.spacer.markers[0],e);n!=this.spacer.markers[0]&&this.spacer.update(e.view,0,0,[n])}let r=e.view.viewport;return!le.eq(this.markers,t,r.from,r.to)||(this.config.lineMarkerChange?this.config.lineMarkerChange(e):!1)}destroy(){for(let e of this.elements)e.destroy()}},zs=class{constructor(e,t,r,n){this.height=-1,this.above=0,this.markers=[],this.dom=document.createElement(\"div\"),this.dom.className=\"cm-gutterElement\",this.update(e,t,r,n)}update(e,t,r,n){this.height!=t&&(this.height=t,this.dom.style.height=t+\"px\"),this.above!=r&&(this.dom.style.marginTop=(this.above=r)?r+\"px\":\"\"),Og(this.markers,n)||this.setMarkers(e,n)}setMarkers(e,t){let r=\"cm-gutterElement\",n=this.dom.firstChild;for(let s=0,o=0;;){let l=o,a=s<t.length?t[s++]:null,h=!1;if(a){let c=a.elementClass;c&&(r+=\" \"+c);for(let u=o;u<this.markers.length;u++)if(this.markers[u].compare(a)){l=u,h=!0;break}}else 
l=this.markers.length;for(;o<l;){let c=this.markers[o++];if(c.toDOM){c.destroy(n);let u=n.nextSibling;n.remove(),n=u}}if(!a)break;a.toDOM&&(h?n=n.nextSibling:this.dom.insertBefore(a.toDOM(e),n)),h&&o++}this.dom.className=r,this.markers=t}destroy(){this.setMarkers(null,[])}};function Og(i,e){if(i.length!=e.length)return!1;for(let t=0;t<i.length;t++)if(!i[t].compare(e[t]))return!1;return!0}var zg=H.define(),Lg=H.define(),mi=H.define({combine(i){return it(i,{formatNumber:String,domEventHandlers:{}},{domEventHandlers(e,t){let r=Object.assign({},e);for(let n in t){let s=r[n],o=t[n];r[n]=s?(l,a,h)=>s(l,a,h)||o(l,a,h):o}return r}})}}),Zi=class extends Ct{constructor(e){super(),this.number=e}eq(e){return this.number==e.number}toDOM(){return document.createTextNode(this.number)}};function El(i,e){return i.state.facet(mi).formatNumber(e,i.state)}var Ig=ms.compute([mi],i=>({class:\"cm-lineNumbers\",renderEmptyElements:!1,markers(e){return e.state.facet(zg)},lineMarker(e,t,r){return r.some(n=>n.toDOM)?null:new Zi(El(e,e.state.doc.lineAt(t.from).number))},widgetMarker:(e,t,r)=>{for(let n of e.state.facet(Lg)){let s=n(e,t,r);if(s)return s}return null},lineMarkerChange:e=>e.startState.facet(mi)!=e.state.facet(mi),initialSpacer(e){return new Zi(El(e,$c(e.state.doc.lines)))},updateSpacer(e,t){let r=El(t.view,$c(t.view.state.doc.lines));return r==e.number?e:new Zi(r)},domEventHandlers:i.facet(mi).domEventHandlers,side:\"before\"}));function Wu(i={}){return[mi.of(i),Bg(),Ig]}function $c(i){let e=9;for(;e<i;)e=e*10+9;return e}var Rg=new class extends Ct{constructor(){super(...arguments),this.elementClass=\"cm-activeLineGutter\"}},Pg=ds.compute([\"selection\"],i=>{let e=[],t=-1;for(let r of i.selection.ranges){let n=i.doc.lineAt(r.head).from;n>t&&(t=n,e.push(Rg.range(n)))}return le.of(e)});function Vu(){return Pg}var Ng=0,cn=class{constructor(e,t){this.from=e,this.to=t}},ee=class{constructor(e={}){this.id=Ng++,this.perNode=!!e.perNode,this.deserialize=e.deserialize||(()=>{throw new 
Error(\"This node type doesn't define a deserialize function\")}),this.combine=e.combine||null}add(e){if(this.perNode)throw new RangeError(\"Can't add per-node props to node types\");return typeof e!=\"function\"&&(e=Je.match(e)),t=>{let r=e(t);return r===void 0?null:[this,r]}}};ee.closedBy=new ee({deserialize:i=>i.split(\" \")});ee.openedBy=new ee({deserialize:i=>i.split(\" \")});ee.group=new ee({deserialize:i=>i.split(\" \")});ee.isolate=new ee({deserialize:i=>{if(i&&i!=\"rtl\"&&i!=\"ltr\"&&i!=\"auto\")throw new RangeError(\"Invalid value for isolate: \"+i);return i||\"auto\"}});ee.contextHash=new ee({perNode:!0});ee.lookAhead=new ee({perNode:!0});ee.mounted=new ee({perNode:!0});var Gr=class{constructor(e,t,r,n=!1){this.tree=e,this.overlay=t,this.parser=r,this.bracketed=n}static get(e){return e&&e.props&&e.props[ee.mounted.id]}},Fg=Object.create(null),Je=class i{constructor(e,t,r,n=0){this.name=e,this.props=t,this.id=r,this.flags=n}static define(e){let t=e.props&&e.props.length?Object.create(null):Fg,r=(e.top?1:0)|(e.skipped?2:0)|(e.error?4:0)|(e.name==null?8:0),n=new i(e.name||\"\",t,e.id,r);if(e.props){for(let s of e.props)if(Array.isArray(s)||(s=s(n)),s){if(s[0].perNode)throw new RangeError(\"Can't store a per-node prop on a node type\");t[s[0].id]=s[1]}}return n}prop(e){return this.props[e.id]}get isTop(){return(this.flags&1)>0}get isSkipped(){return(this.flags&2)>0}get isError(){return(this.flags&4)>0}get isAnonymous(){return(this.flags&8)>0}is(e){if(typeof e==\"string\"){if(this.name==e)return!0;let t=this.prop(ee.group);return t?t.indexOf(e)>-1:!1}return this.id==e}static match(e){let t=Object.create(null);for(let r in e)for(let n of r.split(\" \"))t[n]=e[r];return r=>{for(let n=r.prop(ee.group),s=-1;s<(n?n.length:0);s++){let o=t[s<0?r.name:n[s]];if(o)return o}}}};Je.none=new Je(\"\",Object.create(null),0,8);var Hs=class i{constructor(e){this.types=e;for(let t=0;t<e.length;t++)if(e[t].id!=t)throw new RangeError(\"Node type ids should correspond to array 
positions when creating a node set\")}extend(...e){let t=[];for(let r of this.types){let n=null;for(let s of e){let o=s(r);if(o){n||(n=Object.assign({},r.props));let l=o[1],a=o[0];a.combine&&a.id in n&&(l=a.combine(n[a.id],l)),n[a.id]=l}}t.push(n?new Je(r.name,n,r.id,r.flags):r)}return new i(t)}},Ns=new WeakMap,$u=new WeakMap,Ce;(function(i){i[i.ExcludeBuffers=1]=\"ExcludeBuffers\",i[i.IncludeAnonymous=2]=\"IncludeAnonymous\",i[i.IgnoreMounts=4]=\"IgnoreMounts\",i[i.IgnoreOverlays=8]=\"IgnoreOverlays\",i[i.EnterBracketed=16]=\"EnterBracketed\"})(Ce||(Ce={}));var ve=class i{constructor(e,t,r,n,s){if(this.type=e,this.children=t,this.positions=r,this.length=n,this.props=null,s&&s.length){this.props=Object.create(null);for(let[o,l]of s)this.props[typeof o==\"number\"?o:o.id]=l}}toString(){let e=Gr.get(this);if(e&&!e.overlay)return e.tree.toString();let t=\"\";for(let r of this.children){let n=r.toString();n&&(t&&(t+=\",\"),t+=n)}return this.type.name?(/\\W/.test(this.type.name)&&!this.type.isError?JSON.stringify(this.type.name):this.type.name)+(t.length?\"(\"+t+\")\":\"\"):t}cursor(e=0){return new dn(this.topNode,e)}cursorAt(e,t=0,r=0){let n=Ns.get(this)||this.topNode,s=new dn(n);return s.moveTo(e,t),Ns.set(this,s._tree),s}get topNode(){return new Gt(this,0,0,null)}resolve(e,t=0){let r=un(Ns.get(this)||this.topNode,e,t,!1);return Ns.set(this,r),r}resolveInner(e,t=0){let r=un($u.get(this)||this.topNode,e,t,!0);return $u.set(this,r),r}resolveStack(e,t=0){return Hg(this,e,t)}iterate(e){let{enter:t,leave:r,from:n=0,to:s=this.length}=e,o=e.mode||0,l=(o&Ce.IncludeAnonymous)>0;for(let a=this.cursor(o|Ce.IncludeAnonymous);;){let h=!1;if(a.from<=s&&a.to>=n&&(!l&&a.type.isAnonymous||t(a)!==!1)){if(a.firstChild())continue;h=!0}for(;h&&r&&(l||!a.type.isAnonymous)&&r(a),!a.nextSibling();){if(!a.parent())return;h=!0}}}prop(e){return e.perNode?this.props?this.props[e.id]:void 0:this.type.prop(e)}get propValues(){let e=[];if(this.props)for(let t in 
this.props)e.push([+t,this.props[t]]);return e}balance(e={}){return this.children.length<=8?this:Va(Je.none,this.children,this.positions,0,this.children.length,0,this.length,(t,r,n)=>new i(this.type,t,r,n,this.propValues),e.makeTree||((t,r,n)=>new i(Je.none,t,r,n)))}static build(e){return qg(e)}};ve.empty=new ve(Je.none,[],[],0);var Pa=class i{constructor(e,t){this.buffer=e,this.index=t}get id(){return this.buffer[this.index-4]}get start(){return this.buffer[this.index-3]}get end(){return this.buffer[this.index-2]}get size(){return this.buffer[this.index-1]}get pos(){return this.index}next(){this.index-=4}fork(){return new i(this.buffer,this.index)}},kr=class i{constructor(e,t,r){this.buffer=e,this.length=t,this.set=r}get type(){return Je.none}toString(){let e=[];for(let t=0;t<this.buffer.length;)e.push(this.childString(t)),t=this.buffer[t+3];return e.join(\",\")}childString(e){let t=this.buffer[e],r=this.buffer[e+3],n=this.set.types[t],s=n.name;if(/\\W/.test(s)&&!n.isError&&(s=JSON.stringify(s)),e+=4,r==e)return s;let o=[];for(;e<r;)o.push(this.childString(e)),e=this.buffer[e+3];return s+\"(\"+o.join(\",\")+\")\"}findChild(e,t,r,n,s){let{buffer:o}=this,l=-1;for(let a=e;a!=t&&!(Ku(s,n,o[a+1],o[a+2])&&(l=a,r>0));a=o[a+3]);return l}slice(e,t,r){let n=this.buffer,s=new Uint16Array(t-e),o=0;for(let l=e,a=0;l<t;){s[a++]=n[l++],s[a++]=n[l++]-r;let h=s[a++]=n[l++]-r;s[a++]=n[l++]-e,o=Math.max(o,h)}return new i(s,o,this.set)}};function Ku(i,e,t,r){switch(i){case-2:return t<e;case-1:return r>=e&&t<e;case 0:return t<e&&r>e;case 1:return t<=e&&r>e;case 2:return r>e;case 4:return!0}}function un(i,e,t,r){for(var n;i.from==i.to||(t<1?i.from>=e:i.from>e)||(t>-1?i.to<=e:i.to<e);){let o=!r&&i instanceof Gt&&i.index<0?null:i.parent;if(!o)return i;i=o}let s=r?0:Ce.IgnoreOverlays;if(r)for(let o=i,l=o.parent;l;o=l,l=o.parent)o instanceof Gt&&o.index<0&&((n=l.enter(e,t,s))===null||n===void 0?void 0:n.from)!=o.from&&(i=l);for(;;){let o=i.enter(e,t,s);if(!o)return i;i=o}}var 
qs=class{cursor(e=0){return new dn(this,e)}getChild(e,t=null,r=null){let n=Gu(this,e,t,r);return n.length?n[0]:null}getChildren(e,t=null,r=null){return Gu(this,e,t,r)}resolve(e,t=0){return un(this,e,t,!1)}resolveInner(e,t=0){return un(this,e,t,!0)}matchContext(e){return Na(this.parent,e)}enterUnfinishedNodesBefore(e){let t=this.childBefore(e),r=this;for(;t;){let n=t.lastChild;if(!n||n.to!=t.to)break;n.type.isError&&n.from==n.to?(r=t,t=n.prevSibling):t=n}return r}get node(){return this}get next(){return this.parent}},Gt=class i extends qs{constructor(e,t,r,n){super(),this._tree=e,this.from=t,this.index=r,this._parent=n}get type(){return this._tree.type}get name(){return this._tree.type.name}get to(){return this.from+this._tree.length}nextChild(e,t,r,n,s=0){for(let o=this;;){for(let{children:l,positions:a}=o._tree,h=t>0?l.length:-1;e!=h;e+=t){let c=l[e],u=a[e]+o.from,d;if(!(!(s&Ce.EnterBracketed&&c instanceof ve&&(d=Gr.get(c))&&!d.overlay&&d.bracketed&&r>=u&&r<=u+c.length)&&!Ku(n,r,u,u+c.length))){if(c instanceof kr){if(s&Ce.ExcludeBuffers)continue;let p=c.findChild(0,c.buffer.length,t,r-u,n);if(p>-1)return new fn(new Fa(o,c,e,u),null,p)}else if(s&Ce.IncludeAnonymous||!c.type.isAnonymous||Wa(c)){let p;if(!(s&Ce.IgnoreMounts)&&(p=Gr.get(c))&&!p.overlay)return new i(p.tree,u,e,o);let v=new i(c,u,e,o);return s&Ce.IncludeAnonymous||!v.type.isAnonymous?v:v.nextChild(t<0?c.children.length-1:0,t,r,n,s)}}}if(s&Ce.IncludeAnonymous||!o.type.isAnonymous||(o.index>=0?e=o.index+t:e=t<0?-1:o._parent._tree.children.length,o=o._parent,!o))return null}}get firstChild(){return this.nextChild(0,1,0,4)}get lastChild(){return this.nextChild(this._tree.children.length-1,-1,0,4)}childAfter(e){return this.nextChild(0,1,e,2)}childBefore(e){return this.nextChild(this._tree.children.length-1,-1,e,-2)}prop(e){return this._tree.prop(e)}enter(e,t,r=0){let n;if(!(r&Ce.IgnoreOverlays)&&(n=Gr.get(this._tree))&&n.overlay){let s=e-this.from,o=r&Ce.EnterBracketed&&n.bracketed;for(let{from:l,to:a}of 
n.overlay)if((t>0||o?l<=s:l<s)&&(t<0||o?a>=s:a>s))return new i(n.tree,n.overlay[0].from+this.from,-1,this)}return this.nextChild(0,1,e,t,r)}nextSignificantParent(){let e=this;for(;e.type.isAnonymous&&e._parent;)e=e._parent;return e}get parent(){return this._parent?this._parent.nextSignificantParent():null}get nextSibling(){return this._parent&&this.index>=0?this._parent.nextChild(this.index+1,1,0,4):null}get prevSibling(){return this._parent&&this.index>=0?this._parent.nextChild(this.index-1,-1,0,4):null}get tree(){return this._tree}toTree(){return this._tree}toString(){return this._tree.toString()}};function Gu(i,e,t,r){let n=i.cursor(),s=[];if(!n.firstChild())return s;if(t!=null){for(let o=!1;!o;)if(o=n.type.is(t),!n.nextSibling())return s}for(;;){if(r!=null&&n.type.is(r))return s;if(n.type.is(e)&&s.push(n.node),!n.nextSibling())return r==null?s:[]}}function Na(i,e,t=e.length-1){for(let r=i;t>=0;r=r.parent){if(!r)return!1;if(!r.type.isAnonymous){if(e[t]&&e[t]!=r.name)return!1;t--}}return!0}var Fa=class{constructor(e,t,r,n){this.parent=e,this.buffer=t,this.index=r,this.start=n}},fn=class i extends qs{get name(){return this.type.name}get from(){return this.context.start+this.context.buffer.buffer[this.index+1]}get to(){return this.context.start+this.context.buffer.buffer[this.index+2]}constructor(e,t,r){super(),this.context=e,this._parent=t,this.index=r,this.type=e.buffer.set.types[e.buffer.buffer[r]]}child(e,t,r){let{buffer:n}=this.context,s=n.findChild(this.index+4,n.buffer[this.index+3],e,t-this.context.start,r);return s<0?null:new i(this.context,this,s)}get firstChild(){return this.child(1,0,4)}get lastChild(){return this.child(-1,0,4)}childAfter(e){return this.child(1,e,2)}childBefore(e){return this.child(-1,e,-2)}prop(e){return this.type.prop(e)}enter(e,t,r=0){if(r&Ce.ExcludeBuffers)return null;let{buffer:n}=this.context,s=n.findChild(this.index+4,n.buffer[this.index+3],t>0?1:-1,e-this.context.start,t);return s<0?null:new i(this.context,this,s)}get 
parent(){return this._parent||this.context.parent.nextSignificantParent()}externalSibling(e){return this._parent?null:this.context.parent.nextChild(this.context.index+e,e,0,4)}get nextSibling(){let{buffer:e}=this.context,t=e.buffer[this.index+3];return t<(this._parent?e.buffer[this._parent.index+3]:e.buffer.length)?new i(this.context,this._parent,t):this.externalSibling(1)}get prevSibling(){let{buffer:e}=this.context,t=this._parent?this._parent.index+4:0;return this.index==t?this.externalSibling(-1):new i(this.context,this._parent,e.findChild(t,this.index,-1,0,4))}get tree(){return null}toTree(){let e=[],t=[],{buffer:r}=this.context,n=this.index+4,s=r.buffer[this.index+3];if(s>n){let o=r.buffer[this.index+1];e.push(r.slice(n,s,o)),t.push(0)}return new ve(this.type,e,t,this.to-this.from)}toString(){return this.context.buffer.childString(this.index)}};function ju(i){if(!i.length)return null;let e=0,t=i[0];for(let s=1;s<i.length;s++){let o=i[s];(o.from>t.from||o.to<t.to)&&(t=o,e=s)}let r=t instanceof Gt&&t.index<0?null:t.parent,n=i.slice();return r?n[e]=r:n.splice(e,1),new Ha(n,t)}var Ha=class{constructor(e,t){this.heads=e,this.node=t}get next(){return ju(this.heads)}};function Hg(i,e,t){let r=i.resolveInner(e,t),n=null;for(let s=r instanceof Gt?r:r.context.parent;s;s=s.parent)if(s.index<0){let o=s.parent;(n||(n=[r])).push(o.resolve(e,t)),s=o}else{let o=Gr.get(s.tree);if(o&&o.overlay&&o.overlay[0].from<=e&&o.overlay[o.overlay.length-1].to>=e){let l=new Gt(o.tree,o.overlay[0].from+s.from,-1,s);(n||(n=[r])).push(un(l,e,t,!1))}}return n?ju(n):r}var dn=class{get name(){return this.type.name}constructor(e,t=0){if(this.buffer=null,this.stack=[],this.index=0,this.bufferNode=null,this.mode=t&~Ce.EnterBracketed,e instanceof Gt)this.yieldNode(e);else{this._tree=e.context.parent,this.buffer=e.context;for(let r=e._parent;r;r=r._parent)this.stack.unshift(r.index);this.bufferNode=e,this.yieldBuf(e.index)}}yieldNode(e){return 
e?(this._tree=e,this.type=e.type,this.from=e.from,this.to=e.to,!0):!1}yieldBuf(e,t){this.index=e;let{start:r,buffer:n}=this.buffer;return this.type=t||n.set.types[n.buffer[e]],this.from=r+n.buffer[e+1],this.to=r+n.buffer[e+2],!0}yield(e){return e?e instanceof Gt?(this.buffer=null,this.yieldNode(e)):(this.buffer=e.context,this.yieldBuf(e.index,e.type)):!1}toString(){return this.buffer?this.buffer.buffer.childString(this.index):this._tree.toString()}enterChild(e,t,r){if(!this.buffer)return this.yield(this._tree.nextChild(e<0?this._tree._tree.children.length-1:0,e,t,r,this.mode));let{buffer:n}=this.buffer,s=n.findChild(this.index+4,n.buffer[this.index+3],e,t-this.buffer.start,r);return s<0?!1:(this.stack.push(this.index),this.yieldBuf(s))}firstChild(){return this.enterChild(1,0,4)}lastChild(){return this.enterChild(-1,0,4)}childAfter(e){return this.enterChild(1,e,2)}childBefore(e){return this.enterChild(-1,e,-2)}enter(e,t,r=this.mode){return this.buffer?r&Ce.ExcludeBuffers?!1:this.enterChild(1,e,t):this.yield(this._tree.enter(e,t,r))}parent(){if(!this.buffer)return this.yieldNode(this.mode&Ce.IncludeAnonymous?this._tree._parent:this._tree.parent);if(this.stack.length)return this.yieldBuf(this.stack.pop());let e=this.mode&Ce.IncludeAnonymous?this.buffer.parent:this.buffer.parent.nextSignificantParent();return this.buffer=null,this.yieldNode(e)}sibling(e){if(!this.buffer)return this._tree._parent?this.yield(this._tree.index<0?null:this._tree._parent.nextChild(this._tree.index+e,e,0,4,this.mode)):!1;let{buffer:t}=this.buffer,r=this.stack.length-1;if(e<0){let n=r<0?0:this.stack[r]+4;if(this.index!=n)return this.yieldBuf(t.findChild(n,this.index,-1,0,4))}else{let n=t.buffer[this.index+3];if(n<(r<0?t.buffer.length:t.buffer[this.stack[r]+3]))return this.yieldBuf(n)}return r<0?this.yield(this.buffer.parent.nextChild(this.buffer.index+e,e,0,4,this.mode)):!1}nextSibling(){return this.sibling(1)}prevSibling(){return this.sibling(-1)}atLastNode(e){let 
t,r,{buffer:n}=this;if(n){if(e>0){if(this.index<n.buffer.buffer.length)return!1}else for(let s=0;s<this.index;s++)if(n.buffer.buffer[s+3]<this.index)return!1;({index:t,parent:r}=n)}else({index:t,_parent:r}=this._tree);for(;r;{index:t,_parent:r}=r)if(t>-1)for(let s=t+e,o=e<0?-1:r._tree.children.length;s!=o;s+=e){let l=r._tree.children[s];if(this.mode&Ce.IncludeAnonymous||l instanceof kr||!l.type.isAnonymous||Wa(l))return!1}return!0}move(e,t){if(t&&this.enterChild(e,0,4))return!0;for(;;){if(this.sibling(e))return!0;if(this.atLastNode(e)||!this.parent())return!1}}next(e=!0){return this.move(1,e)}prev(e=!0){return this.move(-1,e)}moveTo(e,t=0){for(;(this.from==this.to||(t<1?this.from>=e:this.from>e)||(t>-1?this.to<=e:this.to<e))&&this.parent(););for(;this.enterChild(1,e,t););return this}get node(){if(!this.buffer)return this._tree;let e=this.bufferNode,t=null,r=0;if(e&&e.context==this.buffer)e:for(let n=this.index,s=this.stack.length;s>=0;){for(let o=e;o;o=o._parent)if(o.index==n){if(n==this.index)return o;t=o,r=s+1;break e}n=this.stack[--s]}for(let n=r;n<this.stack.length;n++)t=new fn(this.buffer,t,this.stack[n]);return this.bufferNode=new fn(this.buffer,t,this.index)}get tree(){return this.buffer?null:this._tree._tree}iterate(e,t){for(let r=0;;){let n=!1;if(this.type.isAnonymous||e(this)!==!1){if(this.firstChild()){r++;continue}this.type.isAnonymous||(n=!0)}for(;;){if(n&&t&&t(this),n=this.type.isAnonymous,!r)return;if(this.nextSibling())break;this.parent(),r--,n=!0}}}matchContext(e){if(!this.buffer)return Na(this.node.parent,e);let{buffer:t}=this.buffer,{types:r}=t.set;for(let n=e.length-1,s=this.stack.length-1;n>=0;s--){if(s<0)return Na(this._tree,e,n);let o=r[t.buffer[this.stack[s]]];if(!o.isAnonymous){if(e[n]&&e[n]!=o.name)return!1;n--}}return!0}};function Wa(i){return i.children.some(e=>e instanceof kr||!e.type.isAnonymous||Wa(e))}function qg(i){var 
e;let{buffer:t,nodeSet:r,maxBufferLength:n=1024,reused:s=[],minRepeatType:o=r.types.length}=i,l=Array.isArray(t)?new Pa(t,t.length):t,a=r.types,h=0,c=0;function u(T,B,D,V,U,ie){let{id:j,start:$,end:ne,size:J}=l,re=c,ge=h;if(J<0)if(l.next(),J==-1){let Ne=s[j];D.push(Ne),V.push($-T);return}else if(J==-3){h=j;return}else if(J==-4){c=j;return}else throw new RangeError(`Unrecognized record size: ${J}`);let Be=a[j],qe,Me,Te=$-T;if(ne-$<=n&&(Me=w(l.pos-B,U))){let Ne=new Uint16Array(Me.size-Me.skip),pe=l.pos-Me.size,je=Ne.length;for(;l.pos>pe;)je=S(Me.start,Ne,je);qe=new kr(Ne,ne-Me.start,r),Te=Me.start-T}else{let Ne=l.pos-J;l.next();let pe=[],je=[],Bt=j>=o?j:-1,ct=0,Ir=ne;for(;l.pos>Ne;)Bt>=0&&l.id==Bt&&l.size>=0?(l.end<=Ir-n&&(v(pe,je,$,ct,l.end,Ir,Bt,re,ge),ct=pe.length,Ir=l.end),l.next()):ie>2500?d($,Ne,pe,je):u($,Ne,pe,je,Bt,ie+1);if(Bt>=0&&ct>0&&ct<pe.length&&v(pe,je,$,ct,$,Ir,Bt,re,ge),pe.reverse(),je.reverse(),Bt>-1&&ct>0){let ir=p(Be,ge);qe=Va(Be,pe,je,0,pe.length,0,ne-$,ir,ir)}else qe=y(Be,pe,je,ne-$,re-ne,ge)}D.push(qe),V.push(Te)}function d(T,B,D,V){let U=[],ie=0,j=-1;for(;l.pos>B;){let{id:$,start:ne,end:J,size:re}=l;if(re>4)l.next();else{if(j>-1&&ne<j)break;j<0&&(j=J-n),U.push($,ne,J),ie++,l.next()}}if(ie){let $=new Uint16Array(ie*4),ne=U[U.length-2];for(let J=U.length-3,re=0;J>=0;J-=3)$[re++]=U[J],$[re++]=U[J+1]-ne,$[re++]=U[J+2]-ne,$[re++]=re;D.push(new kr($,U[2]-ne,r)),V.push(ne-T)}}function p(T,B){return(D,V,U)=>{let ie=0,j=D.length-1,$,ne;if(j>=0&&($=D[j])instanceof ve){if(!j&&$.type==T&&$.length==U)return $;(ne=$.prop(ee.lookAhead))&&(ie=V[j]+$.length+ne)}return y(T,D,V,U,ie,B)}}function v(T,B,D,V,U,ie,j,$,ne){let J=[],re=[];for(;T.length>V;)J.push(T.pop()),re.push(B.pop()+D-U);T.push(y(r.types[j],J,re,ie-U,$-ie,ne)),B.push(U-D)}function y(T,B,D,V,U,ie,j){if(ie){let $=[ee.contextHash,ie];j=j?[$].concat(j):[$]}if(U>25){let $=[ee.lookAhead,U];j=j?[$].concat(j):[$]}return new ve(T,B,D,V,j)}function w(T,B){let 
D=l.fork(),V=0,U=0,ie=0,j=D.end-n,$={size:0,start:0,skip:0};e:for(let ne=D.pos-T;D.pos>ne;){let J=D.size;if(D.id==B&&J>=0){$.size=V,$.start=U,$.skip=ie,ie+=4,V+=4,D.next();continue}let re=D.pos-J;if(J<0||re<ne||D.start<j)break;let ge=D.id>=o?4:0,Be=D.start;for(D.next();D.pos>re;){if(D.size<0)if(D.size==-3||D.size==-4)ge+=4;else break e;else D.id>=o&&(ge+=4);D.next()}U=Be,V+=J,ie+=ge}return(B<0||V==T)&&($.size=V,$.start=U,$.skip=ie),$.size>4?$:void 0}function S(T,B,D){let{id:V,start:U,end:ie,size:j}=l;if(l.next(),j>=0&&V<o){let $=D;if(j>4){let ne=l.pos-(j-4);for(;l.pos>ne;)D=S(T,B,D)}B[--D]=$,B[--D]=ie-T,B[--D]=U-T,B[--D]=V}else j==-3?h=V:j==-4&&(c=V);return D}let A=[],M=[];for(;l.pos>0;)u(i.start||0,i.bufferStart||0,A,M,-1,0);let E=(e=i.length)!==null&&e!==void 0?e:A.length?M[0]+A[0].length:0;return new ve(a[i.topID],A.reverse(),M.reverse(),E)}var Uu=new WeakMap;function Fs(i,e){if(!i.isAnonymous||e instanceof kr||e.type!=i)return 1;let t=Uu.get(e);if(t==null){t=1;for(let r of e.children){if(r.type!=i||!(r instanceof ve)){t=1;break}t+=Fs(i,r)}Uu.set(e,t)}return t}function Va(i,e,t,r,n,s,o,l,a){let h=0;for(let v=r;v<n;v++)h+=Fs(i,e[v]);let c=Math.ceil(h*1.5/8),u=[],d=[];function p(v,y,w,S,A){for(let M=w;M<S;){let E=M,T=y[M],B=Fs(i,v[M]);for(M++;M<S;M++){let D=Fs(i,v[M]);if(B+D>=c)break;B+=D}if(M==E+1){if(B>c){let D=v[E];p(D.children,D.positions,0,D.children.length,y[E]+A);continue}u.push(v[E])}else{let D=y[M-1]+v[M-1].length-T;u.push(Va(i,v,y,E,M,T,D,null,a))}d.push(T+A-s)}}return p(e,t,r,n,0),(l||a)(u,d,o)}var Ur=class i{constructor(e,t,r,n,s=!1,o=!1){this.from=e,this.to=t,this.tree=r,this.offset=n,this.open=(s?1:0)|(o?2:0)}get openStart(){return(this.open&1)>0}get openEnd(){return(this.open&2)>0}static addTree(e,t=[],r=!1){let n=[new i(0,e.length,e,0,!1,r)];for(let s of t)s.to>e.length&&n.push(s);return n}static applyChanges(e,t,r=128){if(!t.length)return e;let n=[],s=1,o=e.length?e[0]:null;for(let l=0,a=0,h=0;;l++){let 
c=l<t.length?t[l]:null,u=c?c.fromA:1e9;if(u-a>=r)for(;o&&o.from<u;){let d=o;if(a>=d.from||u<=d.to||h){let p=Math.max(d.from,a)-h,v=Math.min(d.to,u)-h;d=p>=v?null:new i(p,v,d.tree,d.offset+h,l>0,!!c)}if(d&&n.push(d),o.to>u)break;o=s<e.length?e[s++]:null}if(!c)break;a=c.toA,h=c.toA-c.toB}return n}},mn=class{startParse(e,t,r){return typeof e==\"string\"&&(e=new qa(e)),r=r?r.length?r.map(n=>new cn(n.from,n.to)):[new cn(0,0)]:[new cn(0,e.length)],this.createParse(e,t||[],r)}parse(e,t,r){let n=this.startParse(e,t,r);for(;;){let s=n.advance();if(s)return s}}},qa=class{constructor(e){this.string=e}get length(){return this.string.length}chunk(e){return this.string.slice(e)}get lineChunks(){return!1}read(e,t){return this.string.slice(e,t)}};var l7=new ee({perNode:!0});var Wg=0,It=class i{constructor(e,t,r,n){this.name=e,this.set=t,this.base=r,this.modified=n,this.id=Wg++}toString(){let{name:e}=this;for(let t of this.modified)t.name&&(e=`${t.name}(${e})`);return e}static define(e,t){let r=typeof e==\"string\"?e:\"?\";if(e instanceof i&&(t=e),t?.base)throw new Error(\"Can not derive from a modified tag\");let n=new i(r,[],null,[]);if(n.set.push(n),t)for(let s of t.set)n.set.push(s);return n}static defineModifier(e){let t=new Gs(e);return r=>r.modified.indexOf(t)>-1?r:Gs.get(r.base||r,r.modified.concat(t).sort((n,s)=>n.id-s.id))}},Vg=0,Gs=class i{constructor(e){this.name=e,this.instances=[],this.id=Vg++}static get(e,t){if(!t.length)return e;let r=t[0].instances.find(l=>l.base==e&&$g(t,l.modified));if(r)return r;let n=[],s=new It(e.name,n,e,t);for(let l of t)l.instances.push(s);let o=Gg(t);for(let l of e.set)if(!l.modified.length)for(let a of o)n.push(i.get(l,a));return s}};function $g(i,e){return i.length==e.length&&i.every((t,r)=>t==e[r])}function Gg(i){let e=[[]];for(let t=0;t<i.length;t++)for(let r=0,n=e.length;r<n;r++)e.push(e[r].concat(i[t]));return e.sort((t,r)=>r.length-t.length)}function _u(i){let e=Object.create(null);for(let t in i){let 
r=i[t];Array.isArray(r)||(r=[r]);for(let n of t.split(\" \"))if(n){let s=[],o=2,l=n;for(let u=0;;){if(l==\"...\"&&u>0&&u+3==n.length){o=1;break}let d=/^\"(?:[^\"\\\\]|\\\\.)*?\"|[^\\/!]+/.exec(l);if(!d)throw new RangeError(\"Invalid path: \"+n);if(s.push(d[0]==\"*\"?\"\":d[0][0]=='\"'?JSON.parse(d[0]):d[0]),u+=d[0].length,u==n.length)break;let p=n[u++];if(u==n.length&&p==\"!\"){o=0;break}if(p!=\"/\")throw new RangeError(\"Invalid path: \"+n);l=n.slice(u)}let a=s.length-1,h=s[a];if(!h)throw new RangeError(\"Invalid path: \"+n);let c=new jr(r,o,a>0?s.slice(0,a):null);e[h]=c.sort(e[h])}}return Ju.add(e)}var Ju=new ee({combine(i,e){let t,r,n;for(;i||e;){if(!i||e&&i.depth>=e.depth?(n=e,e=e.next):(n=i,i=i.next),t&&t.mode==n.mode&&!n.context&&!t.context)continue;let s=new jr(n.tags,n.mode,n.context);t?t.next=s:r=s,t=s}return r}}),jr=class{constructor(e,t,r,n){this.tags=e,this.mode=t,this.context=r,this.next=n}get opaque(){return this.mode==0}get inherit(){return this.mode==1}sort(e){return!e||e.depth<this.depth?(this.next=e,this):(e.next=this.sort(e.next),e)}get depth(){return this.context?this.context.length:0}};jr.empty=new jr([],2,null);function Ka(i,e){let t=Object.create(null);for(let s of i)if(!Array.isArray(s.tag))t[s.tag.id]=s.class;else for(let o of s.tag)t[o.id]=s.class;let{scope:r,all:n=null}=e||{};return{style:s=>{let o=n;for(let l of s)for(let a of l.set){let h=t[a.id];if(h){o=o?o+\" \"+h:h;break}}return o},scope:r}}function Ug(i,e){let t=null;for(let r of i){let n=r.style(e);n&&(t=t?t+\" \"+n:n)}return t}function Zu(i,e,t,r=0,n=i.length){let s=new Ga(r,Array.isArray(e)?e:[e],t);s.highlightRange(i.cursor(),r,n,\"\",s.highlighters),s.flush(n)}var 
Ga=class{constructor(e,t,r){this.at=e,this.highlighters=t,this.span=r,this.class=\"\"}startSpan(e,t){t!=this.class&&(this.flush(e),e>this.at&&(this.at=e),this.class=t)}flush(e){e>this.at&&this.class&&this.span(this.at,e,this.class)}highlightRange(e,t,r,n,s){let{type:o,from:l,to:a}=e;if(l>=r||a<=t)return;o.isTop&&(s=this.highlighters.filter(p=>!p.scope||p.scope(o)));let h=n,c=Kg(e)||jr.empty,u=Ug(s,c.tags);if(u&&(h&&(h+=\" \"),h+=u,c.mode==1&&(n+=(n?\" \":\"\")+u)),this.startSpan(Math.max(t,l),h),c.opaque)return;let d=e.tree&&e.tree.prop(ee.mounted);if(d&&d.overlay){let p=e.node.enter(d.overlay[0].from+l,1),v=this.highlighters.filter(w=>!w.scope||w.scope(d.tree.type)),y=e.firstChild();for(let w=0,S=l;;w++){let A=w<d.overlay.length?d.overlay[w]:null,M=A?A.from+l:a,E=Math.max(t,S),T=Math.min(r,M);if(E<T&&y)for(;e.from<T&&(this.highlightRange(e,E,T,n,s),this.startSpan(Math.min(T,e.to),h),!(e.to>=M||!e.nextSibling())););if(!A||M>r)break;S=A.to+l,S>t&&(this.highlightRange(p.cursor(),Math.max(t,A.from+l),Math.min(r,S),\"\",v),this.startSpan(Math.min(r,S),h))}y&&e.parent()}else if(e.firstChild()){d&&(n=\"\");do if(!(e.to<=t)){if(e.from>=r)break;this.highlightRange(e,t,r,n,s),this.startSpan(Math.min(r,e.to),h)}while(e.nextSibling());e.parent()}}};function Kg(i){let e=i.type.prop(Ju);for(;e&&e.context&&!i.matchContext(e.context);)e=e.next;return e||null}var 
F=It.define,Ws=F(),Sr=F(),Yu=F(Sr),Xu=F(Sr),Cr=F(),Vs=F(Cr),$a=F(Cr),jt=F(),Kr=F(jt),Ut=F(),Kt=F(),Ua=F(),pn=F(Ua),$s=F(),P={comment:Ws,lineComment:F(Ws),blockComment:F(Ws),docComment:F(Ws),name:Sr,variableName:F(Sr),typeName:Yu,tagName:F(Yu),propertyName:Xu,attributeName:F(Xu),className:F(Sr),labelName:F(Sr),namespace:F(Sr),macroName:F(Sr),literal:Cr,string:Vs,docString:F(Vs),character:F(Vs),attributeValue:F(Vs),number:$a,integer:F($a),float:F($a),bool:F(Cr),regexp:F(Cr),escape:F(Cr),color:F(Cr),url:F(Cr),keyword:Ut,self:F(Ut),null:F(Ut),atom:F(Ut),unit:F(Ut),modifier:F(Ut),operatorKeyword:F(Ut),controlKeyword:F(Ut),definitionKeyword:F(Ut),moduleKeyword:F(Ut),operator:Kt,derefOperator:F(Kt),arithmeticOperator:F(Kt),logicOperator:F(Kt),bitwiseOperator:F(Kt),compareOperator:F(Kt),updateOperator:F(Kt),definitionOperator:F(Kt),typeOperator:F(Kt),controlOperator:F(Kt),punctuation:Ua,separator:F(Ua),bracket:pn,angleBracket:F(pn),squareBracket:F(pn),paren:F(pn),brace:F(pn),content:jt,heading:Kr,heading1:F(Kr),heading2:F(Kr),heading3:F(Kr),heading4:F(Kr),heading5:F(Kr),heading6:F(Kr),contentSeparator:F(jt),list:F(jt),quote:F(jt),emphasis:F(jt),strong:F(jt),link:F(jt),monospace:F(jt),strikethrough:F(jt),inserted:F(),deleted:F(),changed:F(),invalid:F(),meta:$s,documentMeta:F($s),annotation:F($s),processingInstruction:F($s),definition:It.defineModifier(\"definition\"),constant:It.defineModifier(\"constant\"),function:It.defineModifier(\"function\"),standard:It.defineModifier(\"standard\"),local:It.defineModifier(\"local\"),special:It.defineModifier(\"special\")};for(let i in P){let e=P[i];e instanceof It&&(e.name=i)}var 
c7=Ka([{tag:P.link,class:\"tok-link\"},{tag:P.heading,class:\"tok-heading\"},{tag:P.emphasis,class:\"tok-emphasis\"},{tag:P.strong,class:\"tok-strong\"},{tag:P.keyword,class:\"tok-keyword\"},{tag:P.atom,class:\"tok-atom\"},{tag:P.bool,class:\"tok-bool\"},{tag:P.url,class:\"tok-url\"},{tag:P.labelName,class:\"tok-labelName\"},{tag:P.inserted,class:\"tok-inserted\"},{tag:P.deleted,class:\"tok-deleted\"},{tag:P.literal,class:\"tok-literal\"},{tag:P.string,class:\"tok-string\"},{tag:P.number,class:\"tok-number\"},{tag:[P.regexp,P.escape,P.special(P.string)],class:\"tok-string2\"},{tag:P.variableName,class:\"tok-variableName\"},{tag:P.local(P.variableName),class:\"tok-variableName tok-local\"},{tag:P.definition(P.variableName),class:\"tok-variableName tok-definition\"},{tag:P.special(P.variableName),class:\"tok-variableName2\"},{tag:P.definition(P.propertyName),class:\"tok-propertyName tok-definition\"},{tag:P.typeName,class:\"tok-typeName\"},{tag:P.namespace,class:\"tok-namespace\"},{tag:P.className,class:\"tok-className\"},{tag:P.macroName,class:\"tok-macroName\"},{tag:P.propertyName,class:\"tok-propertyName\"},{tag:P.operator,class:\"tok-operator\"},{tag:P.comment,class:\"tok-comment\"},{tag:P.meta,class:\"tok-meta\"},{tag:P.invalid,class:\"tok-invalid\"},{tag:P.punctuation,class:\"tok-punctuation\"}]);var ja,ki=new ee;function Yg(i){return H.define({combine:i?e=>e.concat(i):void 0})}var Xg=new ee,dt=class{constructor(e,t,r=[],n=\"\"){this.data=e,this.name=n,fe.prototype.hasOwnProperty(\"tree\")||Object.defineProperty(fe.prototype,\"tree\",{get(){return Ze(this)}}),this.parser=t,this.extension=[Si.of(this),fe.languageData.of((s,o,l)=>{let a=Qu(s,o,l),h=a.type.prop(ki);if(!h)return[];let c=s.facet(h),u=a.type.prop(Xg);if(u){let d=a.resolve(o-a.from,l);for(let p of u)if(p.test(d,s)){let v=s.facet(p.facet);return p.type==\"replace\"?v:v.concat(c)}}return c})].concat(r)}isActiveAt(e,t,r=-1){return Qu(e,t,r).type.prop(ki)==this.data}findRegions(e){let 
t=e.facet(Si);if(t?.data==this.data)return[{from:0,to:e.doc.length}];if(!t||!t.allowsNesting)return[];let r=[],n=(s,o)=>{if(s.prop(ki)==this.data){r.push({from:o,to:o+s.length});return}let l=s.prop(ee.mounted);if(l){if(l.tree.prop(ki)==this.data){if(l.overlay)for(let a of l.overlay)r.push({from:a.from+o,to:a.to+o});else r.push({from:o,to:o+s.length});return}else if(l.overlay){let a=r.length;if(n(l.tree,l.overlay[0].from+o),r.length>a)return}}for(let a=0;a<s.children.length;a++){let h=s.children[a];h instanceof ve&&n(h,s.positions[a]+o)}};return n(Ze(e),0),r}get allowsNesting(){return!0}};dt.setState=te.define();function Qu(i,e,t){let r=i.facet(Si),n=Ze(i).topNode;if(!r||r.allowsNesting)for(let s=n;s;s=s.enter(e,t,Ce.ExcludeBuffers|Ce.EnterBracketed))s.type.isTop&&(n=s);return n}function Ze(i){let e=i.field(dt.state,!1);return e?e.tree:ve.empty}var Ja=class{constructor(e){this.doc=e,this.cursorPos=0,this.string=\"\",this.cursor=e.iter()}get length(){return this.doc.length}syncTo(e){return this.string=this.cursor.next(e-this.cursorPos).value,this.cursorPos=e+this.string.length,this.cursorPos-this.string.length}chunk(e){return this.syncTo(e),this.string}get lineChunks(){return!0}read(e,t){let r=this.cursorPos-this.string.length;return e<r||t>=this.cursorPos?this.doc.sliceString(e,t):this.string.slice(e-r,t-r)}},gn=null,vn=class i{constructor(e,t,r=[],n,s,o,l,a){this.parser=e,this.state=t,this.fragments=r,this.tree=n,this.treeLen=s,this.viewport=o,this.skipped=l,this.scheduleOn=a,this.parse=null,this.tempSkipped=[]}static create(e,t,r){return new i(e,t,[],ve.empty,0,r,[],null)}startParse(){return this.parser.startParse(new Ja(this.state.doc),this.fragments)}work(e,t){return t!=null&&t>=this.state.doc.length&&(t=void 0),this.tree!=ve.empty&&this.isDone(t??this.state.doc.length)?(this.takeTree(),!0):this.withContext(()=>{var r;if(typeof e==\"number\"){let 
n=Date.now()+e;e=()=>Date.now()>n}for(this.parse||(this.parse=this.startParse()),t!=null&&(this.parse.stoppedAt==null||this.parse.stoppedAt>t)&&t<this.state.doc.length&&this.parse.stopAt(t);;){let n=this.parse.advance();if(n)if(this.fragments=this.withoutTempSkipped(Ur.addTree(n,this.fragments,this.parse.stoppedAt!=null)),this.treeLen=(r=this.parse.stoppedAt)!==null&&r!==void 0?r:this.state.doc.length,this.tree=n,this.parse=null,this.treeLen<(t??this.state.doc.length))this.parse=this.startParse();else return!0;if(e())return!1}})}takeTree(){let e,t;this.parse&&(e=this.parse.parsedPos)>=this.treeLen&&((this.parse.stoppedAt==null||this.parse.stoppedAt>e)&&this.parse.stopAt(e),this.withContext(()=>{for(;!(t=this.parse.advance()););}),this.treeLen=e,this.tree=t,this.fragments=this.withoutTempSkipped(Ur.addTree(this.tree,this.fragments,!0)),this.parse=null)}withContext(e){let t=gn;gn=this;try{return e()}finally{gn=t}}withoutTempSkipped(e){for(let t;t=this.tempSkipped.pop();)e=ef(e,t.from,t.to);return e}changes(e,t){let{fragments:r,tree:n,treeLen:s,viewport:o,skipped:l}=this;if(this.takeTree(),!e.empty){let a=[];if(e.iterChangedRanges((h,c,u,d)=>a.push({fromA:h,toA:c,fromB:u,toB:d})),r=Ur.applyChanges(r,a),n=ve.empty,s=0,o={from:e.mapPos(o.from,-1),to:e.mapPos(o.to,1)},this.skipped.length){l=[];for(let h of this.skipped){let c=e.mapPos(h.from,1),u=e.mapPos(h.to,-1);c<u&&l.push({from:c,to:u})}}}return new i(this.parser,t,r,n,s,o,l,this.scheduleOn)}updateViewport(e){if(this.viewport.from==e.from&&this.viewport.to==e.to)return!1;this.viewport=e;let t=this.skipped.length;for(let r=0;r<this.skipped.length;r++){let{from:n,to:s}=this.skipped[r];n<e.to&&s>e.from&&(this.fragments=ef(this.fragments,n,s),this.skipped.splice(r--,1))}return this.skipped.length>=t?!1:(this.reset(),!0)}reset(){this.parse&&(this.takeTree(),this.parse=null)}skipUntilInView(e,t){this.skipped.push({from:e,to:t})}static getSkippingParser(e){return new class extends mn{createParse(t,r,n){let 
s=n[0].from,o=n[n.length-1].to;return{parsedPos:s,advance(){let a=gn;if(a){for(let h of n)a.tempSkipped.push(h);e&&(a.scheduleOn=a.scheduleOn?Promise.all([a.scheduleOn,e]):e)}return this.parsedPos=o,new ve(Je.none,[],[],o-s)},stoppedAt:null,stopAt(){}}}}}isDone(e){e=Math.min(e,this.state.doc.length);let t=this.fragments;return this.treeLen>=e&&t.length&&t[0].from==0&&t[0].to>=e}static get(){return gn}};function ef(i,e,t){return Ur.applyChanges(i,[{fromA:e,toA:t,fromB:e,toB:t}])}var bn=class i{constructor(e){this.context=e,this.tree=e.tree}apply(e){if(!e.docChanged&&this.tree==this.context.tree)return this;let t=this.context.changes(e.changes,e.state),r=this.context.treeLen==e.startState.doc.length?void 0:Math.max(e.changes.mapPos(this.context.treeLen),t.viewport.to);return t.work(20,r)||t.takeTree(),new i(t)}static init(e){let t=Math.min(3e3,e.doc.length),r=vn.create(e.facet(Si).parser,e,{from:0,to:t});return r.work(20,t)||r.takeTree(),new i(r)}};dt.state=Re.define({create:bn.init,update(i,e){for(let t of e.effects)if(t.is(dt.setState))return t.value;return e.startState.facet(Si)!=e.state.facet(Si)?bn.init(e.state):i.apply(e)}});var lf=i=>{let e=setTimeout(()=>i(),500);return()=>clearTimeout(e)};typeof requestIdleCallback<\"u\"&&(lf=i=>{let e=-1,t=setTimeout(()=>{e=requestIdleCallback(i,{timeout:400})},100);return()=>e<0?clearTimeout(t):cancelIdleCallback(e)});var Ya=typeof navigator<\"u\"&&(!((ja=navigator.scheduling)===null||ja===void 0)&&ja.isInputPending)?()=>navigator.scheduling.isInputPending():null,_g=Se.fromClass(class{constructor(e){this.view=e,this.working=null,this.workScheduled=0,this.chunkEnd=-1,this.chunkBudget=-1,this.work=this.work.bind(this),this.scheduleWork()}update(e){let 
t=this.view.state.field(dt.state).context;(t.updateViewport(e.view.viewport)||this.view.viewport.to>t.treeLen)&&this.scheduleWork(),(e.docChanged||e.selectionSet)&&(this.view.hasFocus&&(this.chunkBudget+=50),this.scheduleWork()),this.checkAsyncSchedule(t)}scheduleWork(){if(this.working)return;let{state:e}=this.view,t=e.field(dt.state);(t.tree!=t.context.tree||!t.context.isDone(e.doc.length))&&(this.working=lf(this.work))}work(e){this.working=null;let t=Date.now();if(this.chunkEnd<t&&(this.chunkEnd<0||this.view.hasFocus)&&(this.chunkEnd=t+3e4,this.chunkBudget=3e3),this.chunkBudget<=0)return;let{state:r,viewport:{to:n}}=this.view,s=r.field(dt.state);if(s.tree==s.context.tree&&s.context.isDone(n+1e5))return;let o=Date.now()+Math.min(this.chunkBudget,100,e&&!Ya?Math.max(25,e.timeRemaining()-5):1e9),l=s.context.treeLen<n&&r.doc.length>n+1e3,a=s.context.work(()=>Ya&&Ya()||Date.now()>o,n+(l?0:1e5));this.chunkBudget-=Date.now()-t,(a||this.chunkBudget<=0)&&(s.context.takeTree(),this.view.dispatch({effects:dt.setState.of(new bn(s.context))})),this.chunkBudget>0&&!(a&&!l)&&this.scheduleWork(),this.checkAsyncSchedule(s.context)}checkAsyncSchedule(e){e.scheduleOn&&(this.workScheduled++,e.scheduleOn.then(()=>this.scheduleWork()).catch(t=>Fe(this.view.state,t)).then(()=>this.workScheduled--),e.scheduleOn=null)}destroy(){this.working&&this.working()}isWorking(){return!!(this.working||this.workScheduled>0)}},{eventHandlers:{focus(){this.scheduleWork()}}}),Si=H.define({combine(i){return i.length?i[0]:null},enables:i=>[dt.state,_g,K.contentAttributes.compute([i],e=>{let t=e.facet(i);return t&&t.name?{\"data-language\":t.name}:{}})]});var Jg=H.define(),xn=H.define({combine:i=>{if(!i.length)return\"  \";let e=i[0];if(!e||/\\S/.test(e)||Array.from(e).some(t=>t!=e[0]))throw new Error(\"Invalid indent unit: \"+JSON.stringify(i[0]));return e}});function Ar(i){let e=i.facet(xn);return e.charCodeAt(0)==9?i.tabSize*e.length:e.length}function Ai(i,e){let 
t=\"\",r=i.tabSize,n=i.facet(xn)[0];if(n==\"\t\"){for(;e>=r;)t+=\"\t\",e-=r;n=\" \"}for(let s=0;s<e;s++)t+=n;return t}function Ys(i,e){i instanceof fe&&(i=new Yr(i));for(let r of i.state.facet(Jg)){let n=r(i,e);if(n!==void 0)return n}let t=Ze(i.state);return t.length>=e?Zg(i,t,e):null}var Yr=class{constructor(e,t={}){this.state=e,this.options=t,this.unit=Ar(e)}lineAt(e,t=1){let r=this.state.doc.lineAt(e),{simulateBreak:n,simulateDoubleBreak:s}=this.options;return n!=null&&n>=r.from&&n<=r.to?s&&n==e?{text:\"\",from:e}:(t<0?n<e:n<=e)?{text:r.text.slice(n-r.from),from:n}:{text:r.text.slice(0,n-r.from),from:r.from}:r}textAfterPos(e,t=1){if(this.options.simulateDoubleBreak&&e==this.options.simulateBreak)return\"\";let{text:r,from:n}=this.lineAt(e,t);return r.slice(e-n,Math.min(r.length,e+100-n))}column(e,t=1){let{text:r,from:n}=this.lineAt(e,t),s=this.countColumn(r,e-n),o=this.options.overrideIndentation?this.options.overrideIndentation(n):-1;return o>-1&&(s+=o-this.countColumn(r,r.search(/\\S|$/))),s}countColumn(e,t=e.length){return vr(e,this.state.tabSize,t)}lineIndent(e,t=1){let{text:r,from:n}=this.lineAt(e,t),s=this.options.overrideIndentation;if(s){let o=s(n);if(o>-1)return o}return this.countColumn(r,r.search(/\\S|$/))}get simulatedBreak(){return this.options.simulateBreak||null}},af=new ee;function Zg(i,e,t){let r=e.resolveStack(t),n=e.resolveInner(t,-1).resolve(t,0).enterUnfinishedNodesBefore(t);if(n!=r.node){let s=[];for(let o=n;o&&!(o.from<r.node.from||o.to>r.node.to||o.from==r.node.from&&o.type==r.node.type);o=o.parent)s.push(o);for(let o=s.length-1;o>=0;o--)r={node:s[o],next:r}}return hf(r,i,t)}function hf(i,e,t){for(let r=i;r;r=r.next){let n=e4(r.node);if(n)return n(Za.create(e,t,r))}return 0}function Qg(i){return i.pos==i.options.simulateBreak&&i.options.simulateDoubleBreak}function e4(i){let e=i.type.prop(af);if(e)return e;let t=i.firstChild,r;if(t&&(r=t.type.prop(ee.closedBy))){let n=i.lastChild,s=n&&r.indexOf(n.name)>-1;return o=>n4(o,!0,1,void 
0,s&&!Qg(o)?n.from:void 0)}return i.parent==null?t4:null}function t4(){return 0}var Za=class i extends Yr{constructor(e,t,r){super(e.state,e.options),this.base=e,this.pos=t,this.context=r}get node(){return this.context.node}static create(e,t,r){return new i(e,t,r)}get textAfter(){return this.textAfterPos(this.pos)}get baseIndent(){return this.baseIndentFor(this.node)}baseIndentFor(e){let t=this.state.doc.lineAt(e.from);for(;;){let r=e.resolve(t.from);for(;r.parent&&r.parent.from==r.from;)r=r.parent;if(r4(r,e))break;t=this.state.doc.lineAt(r.from)}return this.lineIndent(t.from)}continue(){return hf(this.context.next,this.base,this.pos)}};function r4(i,e){for(let t=e;t;t=t.parent)if(i==t)return!0;return!1}function i4(i){let e=i.node,t=e.childAfter(e.from),r=e.lastChild;if(!t)return null;let n=i.options.simulateBreak,s=i.state.doc.lineAt(t.from),o=n==null||n<=s.from?s.to:Math.min(s.to,n);for(let l=t.to;;){let a=e.childAfter(l);if(!a||a==r)return null;if(!a.type.isSkipped){if(a.from>=o)return null;let h=/^ */.exec(s.text.slice(t.to-s.from))[0].length;return{from:t.from,to:t.to+h}}l=a.to}}function n4(i,e,t,r,n){let s=i.textAfter,o=s.match(/^\\s*/)[0].length,l=r&&s.slice(o,o+r.length)==r||n==i.pos+o,a=e?i4(i):null;return a?l?i.column(a.from):i.column(a.to):i.baseIndent+(l?0:i.unit*t)}var s4=200;function cf(){return fe.transactionFilter.of(i=>{if(!i.docChanged||!i.isUserEvent(\"input.type\")&&!i.isUserEvent(\"input.complete\"))return i;let e=i.startState.languageDataAt(\"indentOnInput\",i.startState.selection.main.head);if(!e.length)return i;let t=i.newDoc,{head:r}=i.newSelection.main,n=t.lineAt(r);if(r>n.from+s4)return i;let s=t.sliceString(n.from,r);if(!e.some(h=>h.test(s)))return i;let{state:o}=i,l=-1,a=[];for(let{head:h}of o.selection.ranges){let c=o.doc.lineAt(h);if(c.from==l)continue;l=c.from;let u=Ys(o,c.from);if(u==null)continue;let d=/^\\s*/.exec(c.text)[0],p=Ai(o,u);d!=p&&a.push({from:c.from,to:c.from+d.length,insert:p})}return 
a.length?[i,{changes:a,sequential:!0}]:i})}var Ci=class i{constructor(e,t){this.specs=e;let r;function n(l){let a=bt.newName();return(r||(r=Object.create(null)))[\".\"+a]=l,a}let s=typeof t.all==\"string\"?t.all:t.all?n(t.all):void 0,o=t.scope;this.scope=o instanceof dt?l=>l.prop(ki)==o.data:o?l=>l==o:void 0,this.style=Ka(e.map(l=>({tag:l.tag,class:l.class||n(Object.assign({},l,{tag:null}))})),{all:s}).style,this.module=r?new bt(r):null,this.themeType=t.themeType}static define(e,t){return new i(e,t||{})}},Qa=H.define(),uf=H.define({combine(i){return i.length?[i[0]]:null}});function Xa(i){let e=i.facet(Qa);return e.length?e:i.facet(uf)}function ff(i,e){let t=[o4],r;return i instanceof Ci&&(i.module&&t.push(K.styleModule.of(i.module)),r=i.themeType),e?.fallback?t.push(uf.of(i)):r?t.push(Qa.computeN([K.darkTheme],n=>n.facet(K.darkTheme)==(r==\"dark\")?[i]:[])):t.push(Qa.of(i)),t}var e0=class{constructor(e){this.markCache=Object.create(null),this.tree=Ze(e.state),this.decorations=this.buildDeco(e,Xa(e.state)),this.decoratedTo=e.viewport.to}update(e){let t=Ze(e.state),r=Xa(e.state),n=r!=Xa(e.startState),{viewport:s}=e.view,o=e.changes.mapPos(this.decoratedTo,1);t.length<s.to&&!n&&t.type==this.tree.type&&o>=s.to?(this.decorations=this.decorations.map(e.changes),this.decoratedTo=o):(t!=this.tree||e.viewportChanged||n)&&(this.tree=t,this.decorations=this.buildDeco(e.view,r),this.decoratedTo=s.to)}buildDeco(e,t){if(!t||!this.tree.length)return X.none;let r=new Et;for(let{from:n,to:s}of e.visibleRanges)Zu(this.tree,t,(o,l,a)=>{r.add(o,l,this.markCache[a]||(this.markCache[a]=X.mark({class:a})))},n,s);return 
r.finish()}},o4=Wt.high(Se.fromClass(e0,{decorations:i=>i.decorations})),b7=Ci.define([{tag:P.meta,color:\"#404740\"},{tag:P.link,textDecoration:\"underline\"},{tag:P.heading,textDecoration:\"underline\",fontWeight:\"bold\"},{tag:P.emphasis,fontStyle:\"italic\"},{tag:P.strong,fontWeight:\"bold\"},{tag:P.strikethrough,textDecoration:\"line-through\"},{tag:P.keyword,color:\"#708\"},{tag:[P.atom,P.bool,P.url,P.contentSeparator,P.labelName],color:\"#219\"},{tag:[P.literal,P.inserted],color:\"#164\"},{tag:[P.string,P.deleted],color:\"#a11\"},{tag:[P.regexp,P.escape,P.special(P.string)],color:\"#e40\"},{tag:P.definition(P.variableName),color:\"#00f\"},{tag:P.local(P.variableName),color:\"#30a\"},{tag:[P.typeName,P.namespace],color:\"#085\"},{tag:P.className,color:\"#167\"},{tag:[P.special(P.variableName),P.macroName],color:\"#256\"},{tag:P.definition(P.propertyName),color:\"#00c\"},{tag:P.comment,color:\"#940\"},{tag:P.invalid,color:\"#f00\"}]),l4=K.baseTheme({\"&.cm-focused .cm-matchingBracket\":{backgroundColor:\"#328c8252\"},\"&.cm-focused .cm-nonmatchingBracket\":{backgroundColor:\"#bb555544\"}}),df=1e4,mf=\"()[]{}\",pf=H.define({combine(i){return it(i,{afterCursor:!0,brackets:mf,maxScanDistance:df,renderMatch:c4})}}),a4=X.mark({class:\"cm-matchingBracket\"}),h4=X.mark({class:\"cm-nonmatchingBracket\"});function c4(i){let e=[],t=i.matched?a4:h4;return e.push(t.range(i.start.from,i.start.to)),i.end&&e.push(t.range(i.end.from,i.end.to)),e}function tf(i){let e=[],t=i.facet(pf);for(let r of i.selection.ranges){if(!r.empty)continue;let n=Rt(i,r.head,-1,t)||r.head>0&&Rt(i,r.head-1,1,t)||t.afterCursor&&(Rt(i,r.head,1,t)||r.head<i.doc.length&&Rt(i,r.head+1,-1,t));n&&(e=e.concat(t.renderMatch(n,i)))}return X.set(e,!0)}var 
u4=Se.fromClass(class{constructor(i){this.paused=!1,this.decorations=tf(i.state)}update(i){(i.docChanged||i.selectionSet||this.paused)&&(i.view.composing?(this.decorations=this.decorations.map(i.changes),this.paused=!0):(this.decorations=tf(i.state),this.paused=!1))}},{decorations:i=>i.decorations}),f4=[u4,l4];function gf(i={}){return[pf.of(i),f4]}var d4=new ee;function t0(i,e,t){let r=i.prop(e<0?ee.openedBy:ee.closedBy);if(r)return r;if(i.name.length==1){let n=t.indexOf(i.name);if(n>-1&&n%2==(e<0?1:0))return[t[n+e]]}return null}function r0(i){let e=i.type.prop(d4);return e?e(i.node):i}function Rt(i,e,t,r={}){let n=r.maxScanDistance||df,s=r.brackets||mf,o=Ze(i),l=o.resolveInner(e,t);for(let a=l;a;a=a.parent){let h=t0(a.type,t,s);if(h&&a.from<a.to){let c=r0(a);if(c&&(t>0?e>=c.from&&e<c.to:e>c.from&&e<=c.to))return m4(i,e,t,a,c,h,s)}}return p4(i,e,t,o,l.type,n,s)}function m4(i,e,t,r,n,s,o){let l=r.parent,a={from:n.from,to:n.to},h=0,c=l?.cursor();if(c&&(t<0?c.childBefore(r.from):c.childAfter(r.to)))do if(t<0?c.to<=r.from:c.from>=r.to){if(h==0&&s.indexOf(c.type.name)>-1&&c.from<c.to){let u=r0(c);return{start:a,end:u?{from:u.from,to:u.to}:void 0,matched:!0}}else if(t0(c.type,t,o))h++;else if(t0(c.type,-t,o)){if(h==0){let u=r0(c);return{start:a,end:u&&u.from<u.to?{from:u.from,to:u.to}:void 0,matched:!1}}h--}}while(t<0?c.prevSibling():c.nextSibling());return{start:a,matched:!1}}function p4(i,e,t,r,n,s,o){let l=t<0?i.sliceDoc(e-1,e):i.sliceDoc(e,e+1),a=o.indexOf(l);if(a<0||a%2==0!=t>0)return null;let h={from:t<0?e-1:e,to:t>0?e+1:e},c=i.doc.iterRange(e,t>0?i.doc.length:0),u=0;for(let d=0;!c.next().done&&d<=s;){let p=c.value;t<0&&(d+=p.length);let v=e+d*t;for(let y=t>0?0:p.length-1,w=t>0?p.length:-1;y!=w;y+=t){let S=o.indexOf(p[y]);if(!(S<0||r.resolveInner(v+y,1).type!=n))if(S%2==0==t>0)u++;else{if(u==1)return{start:h,end:{from:v+y,to:v+y+1},matched:S>>1==a>>1};u--}}t>0&&(d+=p.length)}return c.done?{start:h,matched:!1}:null}function 
rf(i,e,t,r=0,n=0){e==null&&(e=i.search(/[^\\s\\u00a0]/),e==-1&&(e=i.length));let s=n;for(let o=r;o<e;o++)i.charCodeAt(o)==9?s+=t-s%t:s++;return s}var Us=class{constructor(e,t,r,n){this.string=e,this.tabSize=t,this.indentUnit=r,this.overrideIndent=n,this.pos=0,this.start=0,this.lastColumnPos=0,this.lastColumnValue=0}eol(){return this.pos>=this.string.length}sol(){return this.pos==0}peek(){return this.string.charAt(this.pos)||void 0}next(){if(this.pos<this.string.length)return this.string.charAt(this.pos++)}eat(e){let t=this.string.charAt(this.pos),r;if(typeof e==\"string\"?r=t==e:r=t&&(e instanceof RegExp?e.test(t):e(t)),r)return++this.pos,t}eatWhile(e){let t=this.pos;for(;this.eat(e););return this.pos>t}eatSpace(){let e=this.pos;for(;/[\\s\\u00a0]/.test(this.string.charAt(this.pos));)++this.pos;return this.pos>e}skipToEnd(){this.pos=this.string.length}skipTo(e){let t=this.string.indexOf(e,this.pos);if(t>-1)return this.pos=t,!0}backUp(e){this.pos-=e}column(){return this.lastColumnPos<this.start&&(this.lastColumnValue=rf(this.string,this.start,this.tabSize,this.lastColumnPos,this.lastColumnValue),this.lastColumnPos=this.start),this.lastColumnValue}indentation(){var e;return(e=this.overrideIndent)!==null&&e!==void 0?e:rf(this.string,null,this.tabSize)}match(e,t,r){if(typeof e==\"string\"){let n=o=>r?o.toLowerCase():o,s=this.string.substr(this.pos,e.length);return n(s)==n(e)?(t!==!1&&(this.pos+=e.length),!0):null}else{let n=this.string.slice(this.pos).match(e);return n&&n.index>0?null:(n&&t!==!1&&(this.pos+=n[0].length),n)}}current(){return this.string.slice(this.start,this.pos)}};function g4(i){return{name:i.name||\"\",token:i.token,blankLine:i.blankLine||(()=>{}),startState:i.startState||(()=>!0),copyState:i.copyState||v4,indent:i.indent||(()=>null),languageData:i.languageData||{},tokenTable:i.tokenTable||s0,mergeTokens:i.mergeTokens!==!1}}function v4(i){if(typeof i!=\"object\")return i;let e={};for(let t in i){let r=i[t];e[t]=r instanceof Array?r.slice():r}return 
e}var nf=new WeakMap,Ks=class i extends dt{constructor(e){let t=Yg(e.languageData),r=g4(e),n,s=new class extends mn{createParse(o,l,a){return new i0(n,o,l,a)}};super(t,s,[],e.name),this.topNode=w4(t,this),n=this,this.streamParser=r,this.stateAfter=new ee({perNode:!0}),this.tokenTable=e.tokenTable?new js(r.tokenTable):x4}static define(e){return new i(e)}getIndent(e){let t,{overrideIndentation:r}=e.options;r&&(t=nf.get(e.state),t!=null&&t<e.pos-1e4&&(t=void 0));let n=n0(this,e.node.tree,e.node.from,e.node.from,t??e.pos),s,o;if(n?(o=n.state,s=n.pos+1):(o=this.streamParser.startState(e.unit),s=e.node.from),e.pos-s>1e4)return null;for(;s<e.pos;){let a=e.state.doc.lineAt(s),h=Math.min(e.pos,a.to);if(a.length){let c=r?r(a.from):-1,u=new Us(a.text,e.state.tabSize,e.unit,c<0?void 0:c);for(;u.pos<h-a.from;)bf(this.streamParser.token,u,o)}else this.streamParser.blankLine(o,e.unit);if(h==e.pos)break;s=a.to+1}let l=e.lineAt(e.pos);return r&&t==null&&nf.set(e.state,l.from),this.streamParser.indent(o,/^\\s*(.*)/.exec(l.text)[1],e)}get allowsNesting(){return!1}};function n0(i,e,t,r,n){let s=t>=r&&t+e.length<=n&&e.prop(i.stateAfter);if(s)return{state:i.streamParser.copyState(s),pos:t+e.length};for(let o=e.children.length-1;o>=0;o--){let l=e.children[o],a=t+e.positions[o],h=l instanceof ve&&a<n&&n0(i,l,a,r,n);if(h)return h}return null}function vf(i,e,t,r,n){if(n&&t<=0&&r>=e.length)return e;!n&&t==0&&e.type==i.topNode&&(n=!0);for(let s=e.children.length-1;s>=0;s--){let o=e.positions[s],l=e.children[s],a;if(o<r&&l instanceof ve){if(!(a=vf(i,l,t-o,r-o,n)))break;return n?new ve(e.type,e.children.slice(0,s).concat(a),e.positions.slice(0,s+1),o+a.length):a}}return null}function b4(i,e,t,r,n){for(let s of e){let o=s.from+(s.openStart?25:0),l=s.to-(s.openEnd?25:0),a=o<=t&&l>t&&n0(i,s.tree,0-s.offset,t,l),h;if(a&&a.pos<=r&&(h=vf(i,s.tree,t+s.offset,a.pos+s.offset,!1)))return{state:a.state,tree:h}}return{state:i.streamParser.startState(n?Ar(n):4),tree:ve.empty}}var 
i0=class{constructor(e,t,r,n){this.lang=e,this.input=t,this.fragments=r,this.ranges=n,this.stoppedAt=null,this.chunks=[],this.chunkPos=[],this.chunk=[],this.chunkReused=void 0,this.rangeIndex=0,this.to=n[n.length-1].to;let s=vn.get(),o=n[0].from,{state:l,tree:a}=b4(e,r,o,this.to,s?.state);this.state=l,this.parsedPos=this.chunkStart=o+a.length;for(let h=0;h<a.children.length;h++)this.chunks.push(a.children[h]),this.chunkPos.push(a.positions[h]);s&&this.parsedPos<s.viewport.from-1e5&&n.some(h=>h.from<=s.viewport.from&&h.to>=s.viewport.from)&&(this.state=this.lang.streamParser.startState(Ar(s.state)),s.skipUntilInView(this.parsedPos,s.viewport.from),this.parsedPos=s.viewport.from),this.moveRangeIndex()}advance(){let e=vn.get(),t=this.stoppedAt==null?this.to:Math.min(this.to,this.stoppedAt),r=Math.min(t,this.chunkStart+512);for(e&&(r=Math.min(r,e.viewport.to));this.parsedPos<r;)this.parseLine(e);return this.chunkStart<this.parsedPos&&this.finishChunk(),this.parsedPos>=t?this.finish():e&&this.parsedPos>=e.viewport.to?(e.skipUntilInView(this.parsedPos,t),this.finish()):null}stopAt(e){this.stoppedAt=e}lineAfter(e){let t=this.input.chunk(e);if(this.input.lineChunks)t==`\n`&&(t=\"\");else{let r=t.indexOf(`\n`);r>-1&&(t=t.slice(0,r))}return e+t.length<=this.to?t:t.slice(0,this.to-e)}nextLine(){let e=this.parsedPos,t=this.lineAfter(e),r=e+t.length;for(let n=this.rangeIndex;;){let s=this.ranges[n].to;if(s>=r||(t=t.slice(0,s-(r-t.length)),n++,n==this.ranges.length))break;let o=this.ranges[n].from,l=this.lineAfter(o);t+=l,r=o+l.length}return{line:t,end:r}}skipGapsTo(e,t,r){for(;;){let n=this.ranges[this.rangeIndex].to,s=e+t;if(r>0?n>s:n>=s)break;let o=this.ranges[++this.rangeIndex].from;t+=o-n}return t}moveRangeIndex(){for(;this.ranges[this.rangeIndex].to<this.parsedPos;)this.rangeIndex++}emitToken(e,t,r,n){let s=4;if(this.ranges.length>1){n=this.skipGapsTo(t,n,1),t+=n;let l=this.chunk.length;n=this.skipGapsTo(r,n,-1),r+=n,s+=this.chunk.length-l}let o=this.chunk.length-4;return 
this.lang.streamParser.mergeTokens&&s==4&&o>=0&&this.chunk[o]==e&&this.chunk[o+2]==t?this.chunk[o+2]=r:this.chunk.push(e,t,r,s),n}parseLine(e){let{line:t,end:r}=this.nextLine(),n=0,{streamParser:s}=this.lang,o=new Us(t,e?e.state.tabSize:4,e?Ar(e.state):2);if(o.eol())s.blankLine(this.state,o.indentUnit);else for(;!o.eol();){let l=bf(s.token,o,this.state);if(l&&(n=this.emitToken(this.lang.tokenTable.resolve(l),this.parsedPos+o.start,this.parsedPos+o.pos,n)),o.start>1e4)break}this.parsedPos=r,this.moveRangeIndex(),this.parsedPos<this.to&&this.parsedPos++}finishChunk(){let e=ve.build({buffer:this.chunk,start:this.chunkStart,length:this.parsedPos-this.chunkStart,nodeSet:y4,topID:0,maxBufferLength:512,reused:this.chunkReused});e=new ve(e.type,e.children,e.positions,e.length,[[this.lang.stateAfter,this.lang.streamParser.copyState(this.state)]]),this.chunks.push(e),this.chunkPos.push(this.chunkStart-this.ranges[0].from),this.chunk=[],this.chunkReused=void 0,this.chunkStart=this.parsedPos}finish(){return new ve(this.lang.topNode,this.chunks,this.chunkPos,this.parsedPos-this.ranges[0].from).balance()}};function bf(i,e,t){e.start=e.pos;for(let r=0;r<10;r++){let n=i(e,t);if(e.pos>e.start)return n}throw new Error(\"Stream parser failed to advance stream.\")}var s0=Object.create(null),yn=[Je.none],y4=new Hs(yn),sf=[],of=Object.create(null),yf=Object.create(null);for(let[i,e]of[[\"variable\",\"variableName\"],[\"variable-2\",\"variableName.special\"],[\"string-2\",\"string.special\"],[\"def\",\"variableName.definition\"],[\"tag\",\"tagName\"],[\"attribute\",\"attributeName\"],[\"type\",\"typeName\"],[\"builtin\",\"variableName.standard\"],[\"qualifier\",\"modifier\"],[\"error\",\"invalid\"],[\"header\",\"heading\"],[\"property\",\"propertyName\"]])yf[i]=xf(s0,e);var js=class{constructor(e){this.extra=e,this.table=Object.assign(Object.create(null),yf)}resolve(e){return e?this.table[e]||(this.table[e]=xf(this.extra,e)):0}},x4=new js(s0);function 
_a(i,e){sf.indexOf(i)>-1||(sf.push(i),console.warn(e))}function xf(i,e){let t=[];for(let l of e.split(\" \")){let a=[];for(let h of l.split(\".\")){let c=i[h]||P[h];c?typeof c==\"function\"?a.length?a=a.map(c):_a(h,`Modifier ${h} used at start of tag`):a.length?_a(h,`Tag ${h} used as modifier`):a=Array.isArray(c)?c:[c]:_a(h,`Unknown highlighting tag ${h}`)}for(let h of a)t.push(h)}if(!t.length)return 0;let r=e.replace(/ /g,\"_\"),n=r+\" \"+t.map(l=>l.id),s=of[n];if(s)return s.id;let o=of[n]=Je.define({id:yn.length,name:r,props:[_u({[r]:t})]});return yn.push(o),o.id}function w4(i,e){let t=Je.define({id:yn.length,name:\"Document\",props:[ki.add(()=>i),af.add(()=>r=>e.getIndent(r))],top:!0});return yn.push(t),t}var y7={rtl:X.mark({class:\"cm-iso\",inclusive:!0,attributes:{dir:\"rtl\"},bidiIsolate:he.RTL}),ltr:X.mark({class:\"cm-iso\",inclusive:!0,attributes:{dir:\"ltr\"},bidiIsolate:he.LTR}),auto:X.mark({class:\"cm-iso\",inclusive:!0,attributes:{dir:\"auto\"},bidiIsolate:null})};function o0(i){var e={as:\"keyword\",do:\"keyword\",else:\"keyword\",end:\"keyword\",exception:\"keyword\",fun:\"keyword\",functor:\"keyword\",if:\"keyword\",in:\"keyword\",include:\"keyword\",let:\"keyword\",of:\"keyword\",open:\"keyword\",rec:\"keyword\",struct:\"keyword\",then:\"keyword\",type:\"keyword\",val:\"keyword\",while:\"keyword\",with:\"keyword\"},t=i.extraWords||{};for(var r in t)t.hasOwnProperty(r)&&(e[r]=i.extraWords[r]);var n=[];for(var s in e)n.push(s);function o(c,u){var d=c.next();if(d==='\"')return u.tokenize=l,u.tokenize(c,u);if(d===\"{\"&&c.eat(\"|\"))return u.longString=!0,u.tokenize=h,u.tokenize(c,u);if(d===\"(\"&&c.match(/^\\*(?!\\))/))return u.commentLevel++,u.tokenize=a,u.tokenize(c,u);if(d===\"~\"||d===\"?\")return c.eatWhile(/\\w/),\"variableName.special\";if(d===\"`\")return c.eatWhile(/\\w/),\"quote\";if(d===\"/\"&&i.slashComments&&c.eat(\"/\"))return c.skipToEnd(),\"comment\";if(/\\d/.test(d))return 
d===\"0\"&&c.eat(/[bB]/)&&c.eatWhile(/[01]/),d===\"0\"&&c.eat(/[xX]/)&&c.eatWhile(/[0-9a-fA-F]/),d===\"0\"&&c.eat(/[oO]/)?c.eatWhile(/[0-7]/):(c.eatWhile(/[\\d_]/),c.eat(\".\")&&c.eatWhile(/[\\d]/),c.eat(/[eE]/)&&c.eatWhile(/[\\d\\-+]/)),\"number\";if(/[+\\-*&%=<>!?|@\\.~:]/.test(d))return\"operator\";if(/[\\w\\xa1-\\uffff]/.test(d)){c.eatWhile(/[\\w\\xa1-\\uffff]/);var p=c.current();return e.hasOwnProperty(p)?e[p]:\"variable\"}return null}function l(c,u){for(var d,p=!1,v=!1;(d=c.next())!=null;){if(d==='\"'&&!v){p=!0;break}v=!v&&d===\"\\\\\"}return p&&!v&&(u.tokenize=o),\"string\"}function a(c,u){for(var d,p;u.commentLevel>0&&(p=c.next())!=null;)d===\"(\"&&p===\"*\"&&u.commentLevel++,d===\"*\"&&p===\")\"&&u.commentLevel--,d=p;return u.commentLevel<=0&&(u.tokenize=o),\"comment\"}function h(c,u){for(var d,p;u.longString&&(p=c.next())!=null;)d===\"|\"&&p===\"}\"&&(u.longString=!1),d=p;return u.longString||(u.tokenize=o),\"string\"}return{startState:function(){return{tokenize:o,commentLevel:0,longString:!1}},token:function(c,u){return c.eatSpace()?null:u.tokenize(c,u)},languageData:{autocomplete:n,commentTokens:{line:i.slashComments?\"//\":void 0,block:{open:\"(*\",close:\"*)\"}}}}}var 
wf=o0({name:\"ocaml\",extraWords:{and:\"keyword\",assert:\"keyword\",begin:\"keyword\",class:\"keyword\",constraint:\"keyword\",done:\"keyword\",downto:\"keyword\",external:\"keyword\",function:\"keyword\",initializer:\"keyword\",lazy:\"keyword\",match:\"keyword\",method:\"keyword\",module:\"keyword\",mutable:\"keyword\",new:\"keyword\",nonrec:\"keyword\",object:\"keyword\",private:\"keyword\",sig:\"keyword\",to:\"keyword\",try:\"keyword\",value:\"keyword\",virtual:\"keyword\",when:\"keyword\",raise:\"builtin\",failwith:\"builtin\",true:\"builtin\",false:\"builtin\",asr:\"builtin\",land:\"builtin\",lor:\"builtin\",lsl:\"builtin\",lsr:\"builtin\",lxor:\"builtin\",mod:\"builtin\",or:\"builtin\",raise_notrace:\"builtin\",trace:\"builtin\",exit:\"builtin\",print_string:\"builtin\",print_endline:\"builtin\",int:\"type\",float:\"type\",bool:\"type\",char:\"type\",string:\"type\",unit:\"type\",List:\"builtin\"}}),C7=o0({name:\"fsharp\",extraWords:{abstract:\"keyword\",assert:\"keyword\",base:\"keyword\",begin:\"keyword\",class:\"keyword\",default:\"keyword\",delegate:\"keyword\",\"do!\":\"keyword\",done:\"keyword\",downcast:\"keyword\",downto:\"keyword\",elif:\"keyword\",extern:\"keyword\",finally:\"keyword\",for:\"keyword\",function:\"keyword\",global:\"keyword\",inherit:\"keyword\",inline:\"keyword\",interface:\"keyword\",internal:\"keyword\",lazy:\"keyword\",\"let!\":\"keyword\",match:\"keyword\",member:\"keyword\",module:\"keyword\",mutable:\"keyword\",namespace:\"keyword\",new:\"keyword\",null:\"keyword\",override:\"keyword\",private:\"keyword\",public:\"keyword\",\"return!\":\"keyword\",return:\"keyword\",select:\"keyword\",static:\"keyword\",to:\"keyword\",try:\"keyword\",upcast:\"keyword\",\"use!\":\"keyword\",use:\"keyword\",void:\"keyword\",when:\"keyword\",\"yield!\":\"keyword\",yield:\"keyword\",atomic:\"keyword\",break:\"keyword\",checked:\"keyword\",component:\"keyword\",const:\"keyword\",constraint:\"keyword\",constructor:\"keyword\",continue:\"keyword\",eag
er:\"keyword\",event:\"keyword\",external:\"keyword\",fixed:\"keyword\",method:\"keyword\",mixin:\"keyword\",object:\"keyword\",parallel:\"keyword\",process:\"keyword\",protected:\"keyword\",pure:\"keyword\",sealed:\"keyword\",tailcall:\"keyword\",trait:\"keyword\",virtual:\"keyword\",volatile:\"keyword\",List:\"builtin\",Seq:\"builtin\",Map:\"builtin\",Set:\"builtin\",Option:\"builtin\",int:\"builtin\",string:\"builtin\",not:\"builtin\",true:\"builtin\",false:\"builtin\",raise:\"builtin\",failwith:\"builtin\"},slashComments:!0}),A7=o0({name:\"sml\",extraWords:{abstype:\"keyword\",and:\"keyword\",andalso:\"keyword\",case:\"keyword\",datatype:\"keyword\",fn:\"keyword\",handle:\"keyword\",infix:\"keyword\",infixr:\"keyword\",local:\"keyword\",nonfix:\"keyword\",op:\"keyword\",orelse:\"keyword\",raise:\"keyword\",withtype:\"keyword\",eqtype:\"keyword\",sharing:\"keyword\",sig:\"keyword\",signature:\"keyword\",structure:\"keyword\",where:\"keyword\",true:\"keyword\",false:\"keyword\",int:\"builtin\",real:\"builtin\",string:\"builtin\",char:\"builtin\",bool:\"builtin\"},slashComments:!0});var _s=class{constructor(e,t,r,n){this.state=e,this.pos=t,this.explicit=r,this.view=n,this.abortListeners=[],this.abortOnDocChange=!1}tokenBefore(e){let t=Ze(this.state).resolveInner(this.pos,-1);for(;t&&e.indexOf(t.name)<0;)t=t.parent;return t?{from:t.from,to:this.pos,text:this.state.sliceDoc(t.from,this.pos),type:t.type}:null}matchBefore(e){let t=this.state.doc.lineAt(this.pos),r=Math.max(t.from,this.pos-250),n=t.text.slice(r-t.from,this.pos-t.from),s=n.search(Bf(e,!1));return s<0?null:{from:r+s,to:this.pos,text:n.slice(s)}}get aborted(){return this.abortListeners==null}addEventListener(e,t,r){e==\"abort\"&&this.abortListeners&&(this.abortListeners.push(t),r&&r.onDocChange&&(this.abortOnDocChange=!0))}};function kf(i){let e=Object.keys(i).join(\"\"),t=/\\w/.test(e);return t&&(e=e.replace(/\\w/g,\"\")),`[${t?\"\\\\w\":\"\"}${e.replace(/[^\\w\\s]/g,\"\\\\$&\")}]`}function k4(i){let 
e=Object.create(null),t=Object.create(null);for(let{label:n}of i){e[n[0]]=!0;for(let s=1;s<n.length;s++)t[n[s]]=!0}let r=kf(e)+kf(t)+\"*$\";return[new RegExp(\"^\"+r),new RegExp(r)]}function S4(i){let e=i.map(n=>typeof n==\"string\"?{label:n}:n),[t,r]=e.every(n=>/^\\w+$/.test(n.label))?[/\\w*$/,/\\w+$/]:k4(e);return n=>{let s=n.matchBefore(r);return s||n.explicit?{from:s?s.from:n.pos,options:e,validFor:t}:null}}var Js=class{constructor(e,t,r,n){this.completion=e,this.source=t,this.match=r,this.score=n}};function _r(i){return i.selection.main.from}function Bf(i,e){var t;let{source:r}=i,n=e&&r[0]!=\"^\",s=r[r.length-1]!=\"$\";return!n&&!s?i:new RegExp(`${n?\"^\":\"\"}(?:${r})${s?\"$\":\"\"}`,(t=i.flags)!==null&&t!==void 0?t:i.ignoreCase?\"i\":\"\")}var Ef=rt.define();function C4(i,e,t,r){let{main:n}=i.selection,s=t-n.from,o=r-n.from;return{...i.changeByRange(l=>{if(l!=n&&t!=r&&i.sliceDoc(l.from+s,l.from+o)!=i.sliceDoc(t,r))return{range:l};let a=i.toText(e);return{changes:{from:l.from+s,to:r==n.from?l.to:l.from+o,insert:a},range:R.cursor(l.from+s+a.length)}}),scrollIntoView:!0,userEvent:\"input.complete\"}}var Sf=new WeakMap;function A4(i){if(!Array.isArray(i))return i;let e=Sf.get(i);return e||Sf.set(i,e=S4(i)),e}var Zs=te.define(),wn=te.define(),c0=class{constructor(e){this.pattern=e,this.chars=[],this.folded=[],this.any=[],this.precise=[],this.byWord=[],this.score=0,this.matched=[];for(let t=0;t<e.length;){let r=Xe(e,t),n=vt(r);this.chars.push(r);let s=e.slice(t,t+n),o=s.toUpperCase();this.folded.push(Xe(o==s?s.toLowerCase():o,0)),t+=n}this.astral=e.length!=this.chars.length}ret(e,t){return this.score=e,this.matched=t,this}match(e){if(this.pattern.length==0)return this.ret(-100,[]);if(e.length<this.pattern.length)return null;let{chars:t,folded:r,any:n,precise:s,byWord:o}=this;if(t.length==1){let A=Xe(e,0),M=vt(A),E=M==e.length?0:-100;if(A!=t[0])if(A==r[0])E+=-200;else return null;return this.ret(E,[0,M])}let l=e.indexOf(this.pattern);if(l==0)return 
this.ret(e.length==this.pattern.length?0:-100,[0,this.pattern.length]);let a=t.length,h=0;if(l<0){for(let A=0,M=Math.min(e.length,200);A<M&&h<a;){let E=Xe(e,A);(E==t[h]||E==r[h])&&(n[h++]=A),A+=vt(E)}if(h<a)return null}let c=0,u=0,d=!1,p=0,v=-1,y=-1,w=/[a-z]/.test(e),S=!0;for(let A=0,M=Math.min(e.length,200),E=0;A<M&&u<a;){let T=Xe(e,A);l<0&&(c<a&&T==t[c]&&(s[c++]=A),p<a&&(T==t[p]||T==r[p]?(p==0&&(v=A),y=A+1,p++):p=0));let B,D=T<255?T>=48&&T<=57||T>=97&&T<=122?2:T>=65&&T<=90?1:0:(B=qi(T))!=B.toLowerCase()?1:B!=B.toUpperCase()?2:0;(!A||D==1&&w||E==0&&D!=0)&&(t[u]==T||r[u]==T&&(d=!0)?o[u++]=A:o.length&&(S=!1)),E=D,A+=vt(T)}return u==a&&o[0]==0&&S?this.result(-100+(d?-200:0),o,e):p==a&&v==0?this.ret(-200-e.length+(y==e.length?0:-100),[0,y]):l>-1?this.ret(-700-e.length,[l,l+this.pattern.length]):p==a?this.ret(-900-e.length,[v,y]):u==a?this.result(-100+(d?-200:0)+-700+(S?0:-1100),o,e):t.length==2?null:this.result((n[0]?-700:0)+-200+-1100,n,e)}result(e,t,r){let n=[],s=0;for(let o of t){let l=o+(this.astral?vt(Xe(r,o)):1);s&&n[s-1]==o?n[s-1]=l:(n[s++]=o,n[s++]=l)}return this.ret(e-r.length,n)}},u0=class{constructor(e){this.pattern=e,this.matched=[],this.score=0,this.folded=e.toLowerCase()}match(e){if(e.length<this.pattern.length)return null;let t=e.slice(0,this.pattern.length),r=t==this.pattern?0:t.toLowerCase()==this.folded?-200:null;return r==null?null:(this.matched=[0,t.length],this.score=r+(e.length==this.pattern.length?0:-100),this)}},He=H.define({combine(i){return 
it(i,{activateOnTyping:!0,activateOnCompletion:()=>!1,activateOnTypingDelay:100,selectOnOpen:!0,override:null,closeOnBlur:!0,maxRenderedOptions:100,defaultKeymap:!0,tooltipClass:()=>\"\",optionClass:()=>\"\",aboveCursor:!1,icons:!0,addToOptions:[],positionInfo:M4,filterStrict:!1,compareCompletions:(e,t)=>(e.sortText||e.label).localeCompare(t.sortText||t.label),interactionDelay:75,updateSyncTime:100},{defaultKeymap:(e,t)=>e&&t,closeOnBlur:(e,t)=>e&&t,icons:(e,t)=>e&&t,tooltipClass:(e,t)=>r=>Cf(e(r),t(r)),optionClass:(e,t)=>r=>Cf(e(r),t(r)),addToOptions:(e,t)=>e.concat(t),filterStrict:(e,t)=>e||t})}});function Cf(i,e){return i?e?i+\" \"+e:i:e}function M4(i,e,t,r,n,s){let o=i.textDirection==he.RTL,l=o,a=!1,h=\"top\",c,u,d=e.left-n.left,p=n.right-e.right,v=r.right-r.left,y=r.bottom-r.top;if(l&&d<Math.min(v,p)?l=!1:!l&&p<Math.min(v,d)&&(l=!0),v<=(l?d:p))c=Math.max(n.top,Math.min(t.top,n.bottom-y))-e.top,u=Math.min(400,l?d:p);else{a=!0,u=Math.min(400,(o?e.right:n.right-e.left)-30);let A=n.bottom-e.bottom;A>=y||A>e.top?c=t.bottom-e.top:(h=\"bottom\",c=e.bottom-t.top)}let w=(e.bottom-e.top)/s.offsetHeight,S=(e.right-e.left)/s.offsetWidth;return{style:`${h}: ${c/w}px; max-width: ${u/S}px`,class:\"cm-completionInfo-\"+(a?o?\"left-narrow\":\"right-narrow\":l?\"left\":\"right\")}}function T4(i){let e=i.addToOptions.slice();return i.icons&&e.push({render(t){let r=document.createElement(\"div\");return r.classList.add(\"cm-completionIcon\"),t.type&&r.classList.add(...t.type.split(/\\s+/g).map(n=>\"cm-completionIcon-\"+n)),r.setAttribute(\"aria-hidden\",\"true\"),r},position:20}),e.push({render(t,r,n,s){let o=document.createElement(\"span\");o.className=\"cm-completionLabel\";let l=t.displayLabel||t.label,a=0;for(let h=0;h<s.length;){let c=s[h++],u=s[h++];c>a&&o.appendChild(document.createTextNode(l.slice(a,c)));let d=o.appendChild(document.createElement(\"span\"));d.appendChild(document.createTextNode(l.slice(c,u))),d.className=\"cm-completionMatchedText\",a=u}return 
a<l.length&&o.appendChild(document.createTextNode(l.slice(a))),o},position:50},{render(t){if(!t.detail)return null;let r=document.createElement(\"span\");return r.className=\"cm-completionDetail\",r.textContent=t.detail,r},position:80}),e.sort((t,r)=>t.position-r.position).map(t=>t.render)}function l0(i,e,t){if(i<=t)return{from:0,to:i};if(e<0&&(e=0),e<=i>>1){let n=Math.floor(e/t);return{from:n*t,to:(n+1)*t}}let r=Math.floor((i-e)/t);return{from:i-(r+1)*t,to:i-r*t}}var f0=class{constructor(e,t,r){this.view=e,this.stateField=t,this.applyCompletion=r,this.info=null,this.infoDestroy=null,this.placeInfoReq={read:()=>this.measureInfo(),write:a=>this.placeInfo(a),key:this},this.space=null,this.currentClass=\"\";let n=e.state.field(t),{options:s,selected:o}=n.open,l=e.state.facet(He);this.optionContent=T4(l),this.optionClass=l.optionClass,this.tooltipClass=l.tooltipClass,this.range=l0(s.length,o,l.maxRenderedOptions),this.dom=document.createElement(\"div\"),this.dom.className=\"cm-tooltip-autocomplete\",this.updateTooltipClass(e.state),this.dom.addEventListener(\"mousedown\",a=>{let{options:h}=e.state.field(t).open;for(let c=a.target,u;c&&c!=this.dom;c=c.parentNode)if(c.nodeName==\"LI\"&&(u=/-(\\d+)$/.exec(c.id))&&+u[1]<h.length){this.applyCompletion(e,h[+u[1]]),a.preventDefault();return}}),this.dom.addEventListener(\"focusout\",a=>{let h=e.state.field(this.stateField,!1);h&&h.tooltip&&e.state.facet(He).closeOnBlur&&a.relatedTarget!=e.contentDOM&&e.dispatch({effects:wn.of(null)})}),this.showOptions(s,n.id)}mount(){this.updateSel()}showOptions(e,t){this.list&&this.list.remove(),this.list=this.dom.appendChild(this.createListBox(e,t,this.range)),this.list.addEventListener(\"scroll\",()=>{this.info&&this.view.requestMeasure(this.placeInfoReq)})}update(e){var t;let 
r=e.state.field(this.stateField),n=e.startState.field(this.stateField);if(this.updateTooltipClass(e.state),r!=n){let{options:s,selected:o,disabled:l}=r.open;(!n.open||n.open.options!=s)&&(this.range=l0(s.length,o,e.state.facet(He).maxRenderedOptions),this.showOptions(s,r.id)),this.updateSel(),l!=((t=n.open)===null||t===void 0?void 0:t.disabled)&&this.dom.classList.toggle(\"cm-tooltip-autocomplete-disabled\",!!l)}}updateTooltipClass(e){let t=this.tooltipClass(e);if(t!=this.currentClass){for(let r of this.currentClass.split(\" \"))r&&this.dom.classList.remove(r);for(let r of t.split(\" \"))r&&this.dom.classList.add(r);this.currentClass=t}}positioned(e){this.space=e,this.info&&this.view.requestMeasure(this.placeInfoReq)}updateSel(){let e=this.view.state.field(this.stateField),t=e.open;(t.selected>-1&&t.selected<this.range.from||t.selected>=this.range.to)&&(this.range=l0(t.options.length,t.selected,this.view.state.facet(He).maxRenderedOptions),this.showOptions(t.options,e.id));let r=this.updateSelectedOption(t.selected);if(r){this.destroyInfo();let{completion:n}=t.options[t.selected],{info:s}=n;if(!s)return;let o=typeof s==\"string\"?document.createTextNode(s):s(n);if(!o)return;\"then\"in o?o.then(l=>{l&&this.view.state.field(this.stateField,!1)==e&&this.addInfoPane(l,n)}).catch(l=>Fe(this.view.state,l,\"completion info\")):(this.addInfoPane(o,n),r.setAttribute(\"aria-describedby\",this.info.id))}}addInfoPane(e,t){this.destroyInfo();let r=this.info=document.createElement(\"div\");if(r.className=\"cm-tooltip cm-completionInfo\",r.id=\"cm-completionInfo-\"+Math.floor(Math.random()*65535).toString(16),e.nodeType!=null)r.appendChild(e),this.infoDestroy=null;else{let{dom:n,destroy:s}=e;r.appendChild(n),this.infoDestroy=s||null}this.dom.appendChild(r),this.view.requestMeasure(this.placeInfoReq)}updateSelectedOption(e){let t=null;for(let 
r=this.list.firstChild,n=this.range.from;r;r=r.nextSibling,n++)r.nodeName!=\"LI\"||!r.id?n--:n==e?r.hasAttribute(\"aria-selected\")||(r.setAttribute(\"aria-selected\",\"true\"),t=r):r.hasAttribute(\"aria-selected\")&&(r.removeAttribute(\"aria-selected\"),r.removeAttribute(\"aria-describedby\"));return t&&B4(this.list,t),t}measureInfo(){let e=this.dom.querySelector(\"[aria-selected]\");if(!e||!this.info)return null;let t=this.dom.getBoundingClientRect(),r=this.info.getBoundingClientRect(),n=e.getBoundingClientRect(),s=this.space;if(!s){let o=this.dom.ownerDocument.documentElement;s={left:0,top:0,right:o.clientWidth,bottom:o.clientHeight}}return n.top>Math.min(s.bottom,t.bottom)-10||n.bottom<Math.max(s.top,t.top)+10?null:this.view.state.facet(He).positionInfo(this.view,t,n,r,s,this.dom)}placeInfo(e){this.info&&(e?(e.style&&(this.info.style.cssText=e.style),this.info.className=\"cm-tooltip cm-completionInfo \"+(e.class||\"\")):this.info.style.cssText=\"top: -1e6px\")}createListBox(e,t,r){let n=document.createElement(\"ul\");n.id=t,n.setAttribute(\"role\",\"listbox\"),n.setAttribute(\"aria-expanded\",\"true\"),n.setAttribute(\"aria-label\",this.view.state.phrase(\"Completions\")),n.addEventListener(\"mousedown\",o=>{o.target==n&&o.preventDefault()});let s=null;for(let o=r.from;o<r.to;o++){let{completion:l,match:a}=e[o],{section:h}=l;if(h){let d=typeof h==\"string\"?h:h.name;if(d!=s&&(o>r.from||r.from==0))if(s=d,typeof h!=\"string\"&&h.header)n.appendChild(h.header(h));else{let p=n.appendChild(document.createElement(\"completion-section\"));p.textContent=d}}let c=n.appendChild(document.createElement(\"li\"));c.id=t+\"-\"+o,c.setAttribute(\"role\",\"option\");let u=this.optionClass(l);u&&(c.className=u);for(let d of this.optionContent){let p=d(l,this.view.state,this.view,a);p&&c.appendChild(p)}}return 
r.from&&n.classList.add(\"cm-completionListIncompleteTop\"),r.to<e.length&&n.classList.add(\"cm-completionListIncompleteBottom\"),n}destroyInfo(){this.info&&(this.infoDestroy&&this.infoDestroy(),this.info.remove(),this.info=null)}destroy(){this.destroyInfo()}};function D4(i,e){return t=>new f0(t,i,e)}function B4(i,e){let t=i.getBoundingClientRect(),r=e.getBoundingClientRect(),n=t.height/i.offsetHeight;r.top<t.top?i.scrollTop-=(t.top-r.top)/n:r.bottom>t.bottom&&(i.scrollTop+=(r.bottom-t.bottom)/n)}function Af(i){return(i.boost||0)*100+(i.apply?10:0)+(i.info?5:0)+(i.type?1:0)}function E4(i,e){let t=[],r=null,n=null,s=c=>{t.push(c);let{section:u}=c.completion;if(u){r||(r=[]);let d=typeof u==\"string\"?u:u.name;r.some(p=>p.name==d)||r.push(typeof u==\"string\"?{name:d}:u)}},o=e.facet(He);for(let c of i)if(c.hasResult()){let u=c.result.getMatch;if(c.result.filter===!1)for(let d of c.result.options)s(new Js(d,c.source,u?u(d):[],1e9-t.length));else{let d=e.sliceDoc(c.from,c.to),p,v=o.filterStrict?new u0(d):new c0(d);for(let y of c.result.options)if(p=v.match(y.label)){let w=y.displayLabel?u?u(y,p.matched):[]:p.matched,S=p.score+(y.boost||0);if(s(new Js(y,c.source,w,S)),typeof y.section==\"object\"&&y.section.rank===\"dynamic\"){let{name:A}=y.section;n||(n=Object.create(null)),n[A]=Math.max(S,n[A]||-1e9)}}}}if(r){let c=Object.create(null),u=0,d=(p,v)=>(p.rank===\"dynamic\"&&v.rank===\"dynamic\"?n[v.name]-n[p.name]:0)||(typeof p.rank==\"number\"?p.rank:1e9)-(typeof v.rank==\"number\"?v.rank:1e9)||(p.name<v.name?-1:1);for(let p of r.sort(d))u-=1e5,c[p.name]=u;for(let p of t){let{section:v}=p.completion;v&&(p.score+=c[typeof v==\"string\"?v:v.name])}}let l=[],a=null,h=o.compareCompletions;for(let c of t.sort((u,d)=>d.score-u.score||h(u.completion,d.completion))){let u=c.completion;!a||a.label!=u.label||a.detail!=u.detail||a.type!=null&&u.type!=null&&a.type!=u.type||a.apply!=u.apply||a.boost!=u.boost?l.push(c):Af(c.completion)>Af(a)&&(l[l.length-1]=c),a=c.completion}return 
l}var d0=class i{constructor(e,t,r,n,s,o){this.options=e,this.attrs=t,this.tooltip=r,this.timestamp=n,this.selected=s,this.disabled=o}setSelected(e,t){return e==this.selected||e>=this.options.length?this:new i(this.options,Mf(t,e),this.tooltip,this.timestamp,e,this.disabled)}static build(e,t,r,n,s,o){if(n&&!o&&e.some(h=>h.isPending))return n.setDisabled();let l=E4(e,t);if(!l.length)return n&&e.some(h=>h.isPending)?n.setDisabled():null;let a=t.facet(He).selectOnOpen?0:-1;if(n&&n.selected!=a&&n.selected!=-1){let h=n.options[n.selected].completion;for(let c=0;c<l.length;c++)if(l[c].completion==h){a=c;break}}return new i(l,Mf(r,a),{pos:e.reduce((h,c)=>c.hasResult()?Math.min(h,c.from):h,1e8),create:P4,above:s.aboveCursor},n?n.timestamp:Date.now(),a,!1)}map(e){return new i(this.options,this.attrs,{...this.tooltip,pos:e.mapPos(this.tooltip.pos)},this.timestamp,this.selected,this.disabled)}setDisabled(){return new i(this.options,this.attrs,this.tooltip,this.timestamp,this.selected,!0)}},m0=class i{constructor(e,t,r){this.active=e,this.id=t,this.open=r}static start(){return new i(I4,\"cm-ac-\"+Math.floor(Math.random()*2e6).toString(36),null)}update(e){let{state:t}=e,r=t.facet(He),s=(r.override||t.languageDataAt(\"autocomplete\",_r(t)).map(A4)).map(a=>(this.active.find(c=>c.source==a)||new hr(a,this.active.some(c=>c.state!=0)?1:0)).update(e,r));s.length==this.active.length&&s.every((a,h)=>a==this.active[h])&&(s=this.active);let o=this.open,l=e.effects.some(a=>a.is(g0));o&&e.docChanged&&(o=o.map(e.changes)),e.selection||s.some(a=>a.hasResult()&&e.changes.touchesRange(a.from,a.to))||!O4(s,this.active)||l?o=d0.build(s,t,this.id,o,r,l):o&&o.disabled&&!s.some(a=>a.isPending)&&(o=null),!o&&s.every(a=>!a.isPending)&&s.some(a=>a.hasResult())&&(s=s.map(a=>a.hasResult()?new hr(a.source,0):a));for(let a of e.effects)a.is(zf)&&(o=o&&o.setSelected(a.value,this.id));return s==this.active&&o==this.open?this:new i(s,this.id,o)}get tooltip(){return this.open?this.open.tooltip:null}get 
attrs(){return this.open?this.open.attrs:this.active.length?z4:L4}};function O4(i,e){if(i==e)return!0;for(let t=0,r=0;;){for(;t<i.length&&!i[t].hasResult();)t++;for(;r<e.length&&!e[r].hasResult();)r++;let n=t==i.length,s=r==e.length;if(n||s)return n==s;if(i[t++].result!=e[r++].result)return!1}}var z4={\"aria-autocomplete\":\"list\"},L4={};function Mf(i,e){let t={\"aria-autocomplete\":\"list\",\"aria-haspopup\":\"listbox\",\"aria-controls\":i};return e>-1&&(t[\"aria-activedescendant\"]=i+\"-\"+e),t}var I4=[];function Of(i,e){if(i.isUserEvent(\"input.complete\")){let r=i.annotation(Ef);if(r&&e.activateOnCompletion(r))return 12}let t=i.isUserEvent(\"input.type\");return t&&e.activateOnTyping?5:t?1:i.isUserEvent(\"delete.backward\")?2:i.selection?8:i.docChanged?16:0}var hr=class i{constructor(e,t,r=!1){this.source=e,this.state=t,this.explicit=r}hasResult(){return!1}get isPending(){return this.state==1}update(e,t){let r=Of(e,t),n=this;(r&8||r&16&&this.touches(e))&&(n=new i(n.source,0)),r&4&&n.state==0&&(n=new i(this.source,1)),n=n.updateFor(e,r);for(let s of e.effects)if(s.is(Zs))n=new i(n.source,1,s.value);else if(s.is(wn))n=new i(n.source,0);else if(s.is(g0))for(let o of s.value)o.source==n.source&&(n=o);return n}updateFor(e,t){return this.map(e.changes)}map(e){return this}touches(e){return e.changes.touchesRange(_r(e.state))}},Qs=class i extends hr{constructor(e,t,r,n,s,o){super(e,3,t),this.limit=r,this.result=n,this.from=s,this.to=o}hasResult(){return!0}updateFor(e,t){var r;if(!(t&3))return this.map(e.changes);let n=this.result;n.map&&!e.changes.empty&&(n=n.map(n,e.changes));let s=e.changes.mapPos(this.from),o=e.changes.mapPos(this.to,1),l=_r(e.state);if(l>o||!n||t&2&&(_r(e.startState)==this.from||l<this.limit))return new hr(this.source,t&4?1:0);let a=e.changes.mapPos(this.limit);return R4(n.validFor,e.state,s,o)?new i(this.source,this.explicit,a,n,s,o):n.update&&(n=n.update(n,s,o,new _s(e.state,l,!1)))?new 
i(this.source,this.explicit,a,n,n.from,(r=n.to)!==null&&r!==void 0?r:_r(e.state)):new hr(this.source,1,this.explicit)}map(e){return e.empty?this:(this.result.map?this.result.map(this.result,e):this.result)?new i(this.source,this.explicit,e.mapPos(this.limit),this.result,e.mapPos(this.from),e.mapPos(this.to,1)):new hr(this.source,0)}touches(e){return e.changes.touchesRange(this.from,this.to)}};function R4(i,e,t,r){if(!i)return!1;let n=e.sliceDoc(t,r);return typeof i==\"function\"?i(n,t,r,e):Bf(i,!0).test(n)}var g0=te.define({map(i,e){return i.map(t=>t.map(e))}}),zf=te.define(),ot=Re.define({create(){return m0.start()},update(i,e){return i.update(e)},provide:i=>[hn.from(i,e=>e.tooltip),K.contentAttributes.from(i,e=>e.attrs)]});function v0(i,e){let t=e.completion.apply||e.completion.label,r=i.state.field(ot).active.find(n=>n.source==e.source);return r instanceof Qs?(typeof t==\"string\"?i.dispatch({...C4(i.state,t,r.from,r.to),annotations:Ef.of(e.completion)}):t(i,e.completion,r.from,r.to),!0):!1}var P4=D4(ot,v0);function Xs(i,e=\"option\"){return t=>{let r=t.state.field(ot,!1);if(!r||!r.open||r.open.disabled||Date.now()-r.open.timestamp<t.state.facet(He).interactionDelay)return!1;let n=1,s;e==\"page\"&&(s=Ra(t,r.open.tooltip))&&(n=Math.max(2,Math.floor(s.dom.offsetHeight/s.dom.querySelector(\"li\").offsetHeight)-1));let{length:o}=r.open.options,l=r.open.selected>-1?r.open.selected+n*(i?1:-1):i?0:o-1;return l<0?l=e==\"page\"?0:o-1:l>=o&&(l=e==\"page\"?o-1:0),t.dispatch({effects:zf.of(l)}),!0}}var N4=i=>{let e=i.state.field(ot,!1);return i.state.readOnly||!e||!e.open||e.open.selected<0||e.open.disabled||Date.now()-e.open.timestamp<i.state.facet(He).interactionDelay?!1:v0(i,e.open.options[e.open.selected])},a0=i=>i.state.field(ot,!1)?(i.dispatch({effects:Zs.of(!0)}),!0):!1,F4=i=>{let 
e=i.state.field(ot,!1);return!e||!e.active.some(t=>t.state!=0)?!1:(i.dispatch({effects:wn.of(null)}),!0)},p0=class{constructor(e,t){this.active=e,this.context=t,this.time=Date.now(),this.updates=[],this.done=void 0}},H4=50,q4=1e3,W4=Se.fromClass(class{constructor(i){this.view=i,this.debounceUpdate=-1,this.running=[],this.debounceAccept=-1,this.pendingStart=!1,this.composing=0;for(let e of i.state.field(ot).active)e.isPending&&this.startQuery(e)}update(i){let e=i.state.field(ot),t=i.state.facet(He);if(!i.selectionSet&&!i.docChanged&&i.startState.field(ot)==e)return;let r=i.transactions.some(s=>{let o=Of(s,t);return o&8||(s.selection||s.docChanged)&&!(o&3)});for(let s=0;s<this.running.length;s++){let o=this.running[s];if(r||o.context.abortOnDocChange&&i.docChanged||o.updates.length+i.transactions.length>H4&&Date.now()-o.time>q4){for(let l of o.context.abortListeners)try{l()}catch(a){Fe(this.view.state,a)}o.context.abortListeners=null,this.running.splice(s--,1)}else o.updates.push(...i.transactions)}this.debounceUpdate>-1&&clearTimeout(this.debounceUpdate),i.transactions.some(s=>s.effects.some(o=>o.is(Zs)))&&(this.pendingStart=!0);let n=this.pendingStart?50:t.activateOnTypingDelay;if(this.debounceUpdate=e.active.some(s=>s.isPending&&!this.running.some(o=>o.active.source==s.source))?setTimeout(()=>this.startUpdate(),n):-1,this.composing!=0)for(let s of i.transactions)s.isUserEvent(\"input.type\")?this.composing=2:this.composing==2&&s.selection&&(this.composing=3)}startUpdate(){this.debounceUpdate=-1,this.pendingStart=!1;let{state:i}=this.view,e=i.field(ot);for(let t of e.active)t.isPending&&!this.running.some(r=>r.active.source==t.source)&&this.startQuery(t);this.running.length&&e.open&&e.open.disabled&&(this.debounceAccept=setTimeout(()=>this.accept(),this.view.state.facet(He).updateSyncTime))}startQuery(i){let{state:e}=this.view,t=_r(e),r=new _s(e,t,i.explicit,this.view),n=new 
p0(i,r);this.running.push(n),Promise.resolve(i.source(r)).then(s=>{n.context.aborted||(n.done=s||null,this.scheduleAccept())},s=>{this.view.dispatch({effects:wn.of(null)}),Fe(this.view.state,s)})}scheduleAccept(){this.running.every(i=>i.done!==void 0)?this.accept():this.debounceAccept<0&&(this.debounceAccept=setTimeout(()=>this.accept(),this.view.state.facet(He).updateSyncTime))}accept(){var i;this.debounceAccept>-1&&clearTimeout(this.debounceAccept),this.debounceAccept=-1;let e=[],t=this.view.state.facet(He),r=this.view.state.field(ot);for(let n=0;n<this.running.length;n++){let s=this.running[n];if(s.done===void 0)continue;if(this.running.splice(n--,1),s.done){let l=_r(s.updates.length?s.updates[0].startState:this.view.state),a=Math.min(l,s.done.from+(s.active.explicit?0:1)),h=new Qs(s.active.source,s.active.explicit,a,s.done,s.done.from,(i=s.done.to)!==null&&i!==void 0?i:l);for(let c of s.updates)h=h.update(c,t);if(h.hasResult()){e.push(h);continue}}let o=r.active.find(l=>l.source==s.active.source);if(o&&o.isPending)if(s.done==null){let l=new hr(s.active.source,0);for(let a of s.updates)l=l.update(a,t);l.isPending||e.push(l)}else this.startQuery(o)}(e.length||r.open&&r.open.disabled)&&this.view.dispatch({effects:g0.of(e)})}},{eventHandlers:{blur(i){let e=this.view.state.field(ot,!1);if(e&&e.tooltip&&this.view.state.facet(He).closeOnBlur){let t=e.open&&Ra(this.view,e.open.tooltip);(!t||!t.dom.contains(i.relatedTarget))&&setTimeout(()=>this.view.dispatch({effects:wn.of(null)}),10)}},compositionstart(){this.composing=1},compositionend(){this.composing==3&&setTimeout(()=>this.view.dispatch({effects:Zs.of(!1)}),20),this.composing=0}}}),V4=typeof navigator==\"object\"&&/Win/.test(navigator.platform),$4=Wt.highest(K.domEventHandlers({keydown(i,e){let t=e.state.field(ot,!1);if(!t||!t.open||t.open.disabled||t.open.selected<0||i.key.length>1||i.ctrlKey&&!(V4&&i.altKey)||i.metaKey)return!1;let 
r=t.open.options[t.open.selected],n=t.active.find(o=>o.source==r.source),s=r.completion.commitCharacters||n.result.commitCharacters;return s&&s.indexOf(i.key)>-1&&v0(e,r),!1}})),G4=K.baseTheme({\".cm-tooltip.cm-tooltip-autocomplete\":{\"& > ul\":{fontFamily:\"monospace\",whiteSpace:\"nowrap\",overflow:\"hidden auto\",maxWidth_fallback:\"700px\",maxWidth:\"min(700px, 95vw)\",minWidth:\"250px\",maxHeight:\"10em\",height:\"100%\",listStyle:\"none\",margin:0,padding:0,\"& > li, & > completion-section\":{padding:\"1px 3px\",lineHeight:1.2},\"& > li\":{overflowX:\"hidden\",textOverflow:\"ellipsis\",cursor:\"pointer\"},\"& > completion-section\":{display:\"list-item\",borderBottom:\"1px solid silver\",paddingLeft:\"0.5em\",opacity:.7}}},\"&light .cm-tooltip-autocomplete ul li[aria-selected]\":{background:\"#17c\",color:\"white\"},\"&light .cm-tooltip-autocomplete-disabled ul li[aria-selected]\":{background:\"#777\"},\"&dark .cm-tooltip-autocomplete ul li[aria-selected]\":{background:\"#347\",color:\"white\"},\"&dark .cm-tooltip-autocomplete-disabled ul li[aria-selected]\":{background:\"#444\"},\".cm-completionListIncompleteTop:before, .cm-completionListIncompleteBottom:after\":{content:'\"\\xB7\\xB7\\xB7\"',opacity:.5,display:\"block\",textAlign:\"center\"},\".cm-tooltip.cm-completionInfo\":{position:\"absolute\",padding:\"3px 9px\",width:\"max-content\",maxWidth:\"400px\",boxSizing:\"border-box\",whiteSpace:\"pre-line\"},\".cm-completionInfo.cm-completionInfo-left\":{right:\"100%\"},\".cm-completionInfo.cm-completionInfo-right\":{left:\"100%\"},\".cm-completionInfo.cm-completionInfo-left-narrow\":{right:\"30px\"},\".cm-completionInfo.cm-completionInfo-right-narrow\":{left:\"30px\"},\"&light .cm-snippetField\":{backgroundColor:\"#00000022\"},\"&dark .cm-snippetField\":{backgroundColor:\"#ffffff22\"},\".cm-snippetFieldPosition\":{verticalAlign:\"text-top\",width:0,height:\"1.15em\",display:\"inline-block\",margin:\"0 -0.7px -.7em\",borderLeft:\"1.4px dotted 
#888\"},\".cm-completionMatchedText\":{textDecoration:\"underline\"},\".cm-completionDetail\":{marginLeft:\"0.5em\",fontStyle:\"italic\"},\".cm-completionIcon\":{fontSize:\"90%\",width:\".8em\",display:\"inline-block\",textAlign:\"center\",paddingRight:\".6em\",opacity:\"0.6\",boxSizing:\"content-box\"},\".cm-completionIcon-function, .cm-completionIcon-method\":{\"&:after\":{content:\"'\\u0192'\"}},\".cm-completionIcon-class\":{\"&:after\":{content:\"'\\u25CB'\"}},\".cm-completionIcon-interface\":{\"&:after\":{content:\"'\\u25CC'\"}},\".cm-completionIcon-variable\":{\"&:after\":{content:\"'\\u{1D465}'\"}},\".cm-completionIcon-constant\":{\"&:after\":{content:\"'\\u{1D436}'\"}},\".cm-completionIcon-type\":{\"&:after\":{content:\"'\\u{1D461}'\"}},\".cm-completionIcon-enum\":{\"&:after\":{content:\"'\\u222A'\"}},\".cm-completionIcon-property\":{\"&:after\":{content:\"'\\u25A1'\"}},\".cm-completionIcon-keyword\":{\"&:after\":{content:\"'\\u{1F511}\\uFE0E'\"}},\".cm-completionIcon-namespace\":{\"&:after\":{content:\"'\\u25A2'\"}},\".cm-completionIcon-text\":{\"&:after\":{content:\"'abc'\",fontSize:\"50%\",verticalAlign:\"middle\"}}});var eo={brackets:[\"(\",\"[\",\"{\",\"'\",'\"'],before:\")]}:;>\",stringPrefixes:[]},Xr=te.define({map(i,e){let t=e.mapPos(i,-1,We.TrackAfter);return t??void 0}}),b0=new class extends gt{};b0.startSide=1;b0.endSide=-1;var Lf=Re.define({create(){return le.empty},update(i,e){if(i=i.map(e.changes),e.selection){let t=e.state.doc.lineAt(e.selection.main.head);i=i.update({filter:r=>r>=t.from&&r<=t.to})}for(let t of e.effects)t.is(Xr)&&(i=i.update({add:[b0.range(t.value,t.value+1)]}));return i}});function If(){return[Y4,Lf]}var h0=\"()[]{}<>\\xAB\\xBB\\xBB\\xAB\\uFF3B\\uFF3D\\uFF5B\\uFF5D\";function U4(i){for(let e=0;e<h0.length;e+=2)if(h0.charCodeAt(e)==i)return h0.charAt(e+1);return qi(i<128?i:i+1)}function K4(i,e){return i.languageDataAt(\"closeBrackets\",e)[0]||eo}var j4=typeof 
navigator==\"object\"&&/Android\\b/.test(navigator.userAgent),Y4=K.inputHandler.of((i,e,t,r)=>{if((j4?i.composing:i.compositionStarted)||i.state.readOnly)return!1;let n=i.state.selection.main;if(r.length>2||r.length==2&&vt(Xe(r,0))==1||e!=n.from||t!=n.to)return!1;let s=X4(i.state,r);return s?(i.dispatch(s),!0):!1});function X4(i,e){let t=K4(i,i.selection.main.head),r=t.brackets||eo.brackets;for(let n of r){let s=U4(Xe(n,0));if(e==n)return s==n?Z4(i,n,r.indexOf(n+n+n)>-1,t):_4(i,n,s,t.before||eo.before);if(e==s&&Rf(i,i.selection.main.from))return J4(i,n,s)}return null}function Rf(i,e){let t=!1;return i.field(Lf).between(0,i.doc.length,r=>{r==e&&(t=!0)}),t}function y0(i,e){let t=i.sliceString(e,e+2);return t.slice(0,vt(Xe(t,0)))}function _4(i,e,t,r){let n=null,s=i.changeByRange(o=>{if(!o.empty)return{changes:[{insert:e,from:o.from},{insert:t,from:o.to}],effects:Xr.of(o.to+e.length),range:R.range(o.anchor+e.length,o.head+e.length)};let l=y0(i.doc,o.head);return!l||/\\s/.test(l)||r.indexOf(l)>-1?{changes:{insert:e+t,from:o.head},effects:Xr.of(o.head+e.length),range:R.cursor(o.head+e.length)}:{range:n=o}});return n?null:i.update(s,{scrollIntoView:!0,userEvent:\"input.type\"})}function J4(i,e,t){let r=null,n=i.changeByRange(s=>s.empty&&y0(i.doc,s.head)==t?{changes:{from:s.head,to:s.head+t.length,insert:t},range:R.cursor(s.head+t.length)}:r={range:s});return r?null:i.update(n,{scrollIntoView:!0,userEvent:\"input.type\"})}function Z4(i,e,t,r){let n=r.stringPrefixes||eo.stringPrefixes,s=null,o=i.changeByRange(l=>{if(!l.empty)return{changes:[{insert:e,from:l.from},{insert:e,from:l.to}],effects:Xr.of(l.to+e.length),range:R.range(l.anchor+e.length,l.head+e.length)};let a=l.head,h=y0(i.doc,a),c;if(h==e){if(Tf(i,a))return{changes:{insert:e+e,from:a},effects:Xr.of(a+e.length),range:R.cursor(a+e.length)};if(Rf(i,a)){let 
d=t&&i.sliceDoc(a,a+e.length*3)==e+e+e?e+e+e:e;return{changes:{from:a,to:a+d.length,insert:d},range:R.cursor(a+d.length)}}}else{if(t&&i.sliceDoc(a-2*e.length,a)==e+e&&(c=Df(i,a-2*e.length,n))>-1&&Tf(i,c))return{changes:{insert:e+e+e+e,from:a},effects:Xr.of(a+e.length),range:R.cursor(a+e.length)};if(i.charCategorizer(a)(h)!=Ee.Word&&Df(i,a,n)>-1&&!Q4(i,a,e,n))return{changes:{insert:e+e,from:a},effects:Xr.of(a+e.length),range:R.cursor(a+e.length)}}return{range:s=l}});return s?null:i.update(o,{scrollIntoView:!0,userEvent:\"input.type\"})}function Tf(i,e){let t=Ze(i).resolveInner(e+1);return t.parent&&t.from==e}function Q4(i,e,t,r){let n=Ze(i).resolveInner(e,-1),s=r.reduce((o,l)=>Math.max(o,l.length),0);for(let o=0;o<5;o++){let l=i.sliceDoc(n.from,Math.min(n.to,n.from+t.length+s)),a=l.indexOf(t);if(!a||a>-1&&r.indexOf(l.slice(0,a))>-1){let c=n.firstChild;for(;c&&c.from==n.from&&c.to-c.from>t.length+a;){if(i.sliceDoc(c.to-t.length,c.to)==t)return!1;c=c.firstChild}return!0}let h=n.to==e&&n.parent;if(!h)break;n=h}return!1}function Df(i,e,t){let r=i.charCategorizer(e);if(r(i.sliceDoc(e-1,e))!=Ee.Word)return e;for(let n of t){let s=e-n.length;if(i.sliceDoc(s,e)==n&&r(i.sliceDoc(s-1,s))!=Ee.Word)return s}return-1}function Pf(i={}){return[$4,ot,He.of(i),W4,t2,G4]}var e2=[{key:\"Ctrl-Space\",run:a0},{mac:\"Alt-`\",run:a0},{mac:\"Alt-i\",run:a0},{key:\"Escape\",run:F4},{key:\"ArrowDown\",run:Xs(!0)},{key:\"ArrowUp\",run:Xs(!1)},{key:\"PageDown\",run:Xs(!0,\"page\")},{key:\"PageUp\",run:Xs(!1,\"page\")},{key:\"Enter\",run:N4}],t2=Wt.highest(an.computeN([He],i=>i.facet(He).defaultKeymap?[e2]:[]));var r2=i=>{let{state:e}=i,t=e.doc.lineAt(e.selection.main.from),r=M0(i.state,t.from);return r.line?i2(i):r.block?s2(i):!1};function A0(i,e){return({state:t,dispatch:r})=>{if(t.readOnly)return!1;let n=i(e,t);return n?(r(t.update(n)),!0):!1}}var i2=A0(a2,0);var n2=A0(Uf,0);var s2=A0((i,e)=>Uf(i,e,l2(e)),0);function M0(i,e){let t=i.languageDataAt(\"commentTokens\",e,1);return 
t.length?t[0]:{}}var kn=50;function o2(i,{open:e,close:t},r,n){let s=i.sliceDoc(r-kn,r),o=i.sliceDoc(n,n+kn),l=/\\s*$/.exec(s)[0].length,a=/^\\s*/.exec(o)[0].length,h=s.length-l;if(s.slice(h-e.length,h)==e&&o.slice(a,a+t.length)==t)return{open:{pos:r-l,margin:l&&1},close:{pos:n+a,margin:a&&1}};let c,u;n-r<=2*kn?c=u=i.sliceDoc(r,n):(c=i.sliceDoc(r,r+kn),u=i.sliceDoc(n-kn,n));let d=/^\\s*/.exec(c)[0].length,p=/\\s*$/.exec(u)[0].length,v=u.length-p-t.length;return c.slice(d,d+e.length)==e&&u.slice(v,v+t.length)==t?{open:{pos:r+d+e.length,margin:/\\s/.test(c.charAt(d+e.length))?1:0},close:{pos:n-p-t.length,margin:/\\s/.test(u.charAt(v-1))?1:0}}:null}function l2(i){let e=[];for(let t of i.selection.ranges){let r=i.doc.lineAt(t.from),n=t.to<=r.to?r:i.doc.lineAt(t.to);n.from>r.from&&n.from==t.to&&(n=t.to==r.to+1?r:i.doc.lineAt(t.to-1));let s=e.length-1;s>=0&&e[s].to>r.from?e[s].to=n.to:e.push({from:r.from+/^\\s*/.exec(r.text)[0].length,to:n.to})}return e}function Uf(i,e,t=e.selection.ranges){let r=t.map(s=>M0(e,s.from).block);if(!r.every(s=>s))return null;let n=t.map((s,o)=>o2(e,r[o],s.from,s.to));if(i!=2&&!n.every(s=>s))return{changes:e.changes(t.map((s,o)=>n[o]?[]:[{from:s.from,insert:r[o].open+\" \"},{from:s.to,insert:\" \"+r[o].close}]))};if(i!=1&&n.some(s=>s)){let s=[];for(let o=0,l;o<n.length;o++)if(l=n[o]){let a=r[o],{open:h,close:c}=l;s.push({from:h.pos-a.open.length,to:h.pos+h.margin},{from:c.pos-c.margin,to:c.pos+a.close.length})}return{changes:s}}return null}function a2(i,e,t=e.selection.ranges){let r=[],n=-1;e:for(let{from:s,to:o}of t){let l=r.length,a=1e9,h;for(let c=s;c<=o;){let u=e.doc.lineAt(c);if(h==null&&(h=M0(e,u.from).line,!h))continue e;if(u.from>n&&(s==o||o>u.from)){n=u.from;let d=/^\\s*/.exec(u.text)[0].length,p=d==u.length,v=u.text.slice(d,d+h.length)==h?d:-1;d<u.text.length&&d<a&&(a=d),r.push({line:u,comment:v,token:h,indent:d,empty:p,single:!1})}c=u.to+1}if(a<1e9)for(let 
c=l;c<r.length;c++)r[c].indent<r[c].line.text.length&&(r[c].indent=a);r.length==l+1&&(r[l].single=!0)}if(i!=2&&r.some(s=>s.comment<0&&(!s.empty||s.single))){let s=[];for(let{line:l,token:a,indent:h,empty:c,single:u}of r)(u||!c)&&s.push({from:l.from+h,insert:a+\" \"});let o=e.changes(s);return{changes:o,selection:e.selection.map(o,1)}}else if(i!=1&&r.some(s=>s.comment>=0)){let s=[];for(let{line:o,comment:l,token:a}of r)if(l>=0){let h=o.from+l,c=h+a.length;o.text[c-o.from]==\" \"&&c++,s.push({from:h,to:c})}return{changes:s}}return null}var w0=rt.define(),h2=rt.define(),c2=H.define(),Kf=H.define({combine(i){return it(i,{minDepth:100,newGroupDelay:500,joinToEvent:(e,t)=>t},{minDepth:Math.max,newGroupDelay:Math.min,joinToEvent:(e,t)=>(r,n)=>e(r,n)||t(r,n)})}}),jf=Re.define({create(){return Jr.empty},update(i,e){let t=e.state.facet(Kf),r=e.annotation(w0);if(r){let a=Pt.fromTransaction(e,r.selection),h=r.side,c=h==0?i.undone:i.done;return a?c=ro(c,c.length,t.minDepth,a):c=Jf(c,e.startState.selection),new Jr(h==0?r.rest:c,h==0?c:r.rest)}let n=e.annotation(h2);if((n==\"full\"||n==\"before\")&&(i=i.isolate()),e.annotation(Le.addToHistory)===!1)return e.changes.empty?i:i.addMapping(e.changes.desc);let s=Pt.fromTransaction(e),o=e.annotation(Le.time),l=e.annotation(Le.userEvent);return s?i=i.addChanges(s,o,l,t,e):e.selection&&(i=i.addSelection(e.startState.selection,o,l,t.newGroupDelay)),(n==\"full\"||n==\"after\")&&(i=i.isolate()),i},toJSON(i){return{done:i.done.map(e=>e.toJSON()),undone:i.undone.map(e=>e.toJSON())}},fromJSON(i){return new Jr(i.done.map(Pt.fromJSON),i.undone.map(Pt.fromJSON))}});function Yf(i={}){return[jf,Kf.of(i),K.domEventHandlers({beforeinput(e,t){let r=e.inputType==\"historyUndo\"?Xf:e.inputType==\"historyRedo\"?k0:null;return r?(e.preventDefault(),r(t)):!1}})]}function io(i,e){return function({state:t,dispatch:r}){if(!e&&t.readOnly)return!1;let n=t.field(jf,!1);if(!n)return!1;let s=n.pop(i,t,e);return s?(r(s),!0):!1}}var 
Xf=io(0,!1),k0=io(1,!1),u2=io(0,!0),f2=io(1,!0);var Pt=class i{constructor(e,t,r,n,s){this.changes=e,this.effects=t,this.mapped=r,this.startSelection=n,this.selectionsAfter=s}setSelAfter(e){return new i(this.changes,this.effects,this.mapped,this.startSelection,e)}toJSON(){var e,t,r;return{changes:(e=this.changes)===null||e===void 0?void 0:e.toJSON(),mapped:(t=this.mapped)===null||t===void 0?void 0:t.toJSON(),startSelection:(r=this.startSelection)===null||r===void 0?void 0:r.toJSON(),selectionsAfter:this.selectionsAfter.map(n=>n.toJSON())}}static fromJSON(e){return new i(e.changes&&Ye.fromJSON(e.changes),[],e.mapped&&nr.fromJSON(e.mapped),e.startSelection&&R.fromJSON(e.startSelection),e.selectionsAfter.map(R.fromJSON))}static fromTransaction(e,t){let r=At;for(let n of e.startState.facet(c2)){let s=n(e);s.length&&(r=r.concat(s))}return!r.length&&e.changes.empty?null:new i(e.changes.invert(e.startState.doc),r,void 0,t||e.startState.selection,At)}static selection(e){return new i(void 0,At,void 0,void 0,e)}};function ro(i,e,t,r){let n=e+1>t+20?e-t-1:0,s=i.slice(n,e);return s.push(r),s}function d2(i,e){let t=[],r=!1;return i.iterChangedRanges((n,s)=>t.push(n,s)),e.iterChangedRanges((n,s,o,l)=>{for(let a=0;a<t.length;){let h=t[a++],c=t[a++];l>=h&&o<=c&&(r=!0)}}),r}function m2(i,e){return i.ranges.length==e.ranges.length&&i.ranges.filter((t,r)=>t.empty!=e.ranges[r].empty).length===0}function _f(i,e){return i.length?e.length?i.concat(e):i:e}var At=[],p2=200;function Jf(i,e){if(i.length){let t=i[i.length-1],r=t.selectionsAfter.slice(Math.max(0,t.selectionsAfter.length-p2));return r.length&&r[r.length-1].eq(e)?i:(r.push(e),ro(i,i.length-1,1e9,t.setSelAfter(r)))}else return[Pt.selection([e])]}function g2(i){let e=i[i.length-1],t=i.slice();return t[i.length-1]=e.setSelAfter(e.selectionsAfter.slice(0,e.selectionsAfter.length-1)),t}function x0(i,e){if(!i.length)return i;let t=i.length,r=At;for(;t;){let n=v2(i[t-1],e,r);if(n.changes&&!n.changes.empty||n.effects.length){let 
s=i.slice(0,t);return s[t-1]=n,s}else e=n.mapped,t--,r=n.selectionsAfter}return r.length?[Pt.selection(r)]:At}function v2(i,e,t){let r=_f(i.selectionsAfter.length?i.selectionsAfter.map(l=>l.map(e)):At,t);if(!i.changes)return Pt.selection(r);let n=i.changes.map(e),s=e.mapDesc(i.changes,!0),o=i.mapped?i.mapped.composeDesc(s):s;return new Pt(n,te.mapEffects(i.effects,e),o,i.startSelection.map(s),r)}var b2=/^(input\\.type|delete)($|\\.)/,Jr=class i{constructor(e,t,r=0,n=void 0){this.done=e,this.undone=t,this.prevTime=r,this.prevUserEvent=n}isolate(){return this.prevTime?new i(this.done,this.undone):this}addChanges(e,t,r,n,s){let o=this.done,l=o[o.length-1];return l&&l.changes&&!l.changes.empty&&e.changes&&(!r||b2.test(r))&&(!l.selectionsAfter.length&&t-this.prevTime<n.newGroupDelay&&n.joinToEvent(s,d2(l.changes,e.changes))||r==\"input.type.compose\")?o=ro(o,o.length-1,n.minDepth,new Pt(e.changes.compose(l.changes),_f(te.mapEffects(e.effects,l.changes),l.effects),l.mapped,l.startSelection,At)):o=ro(o,o.length,n.minDepth,e),new i(o,At,t,r)}addSelection(e,t,r,n){let s=this.done.length?this.done[this.done.length-1].selectionsAfter:At;return s.length>0&&t-this.prevTime<n&&r==this.prevUserEvent&&r&&/^select($|\\.)/.test(r)&&m2(s[s.length-1],e)?this:new i(Jf(this.done,e),this.undone,t,r)}addMapping(e){return new i(x0(this.done,e),x0(this.undone,e),this.prevTime,this.prevUserEvent)}pop(e,t,r){let n=e==0?this.done:this.undone;if(n.length==0)return null;let s=n[n.length-1],o=s.selectionsAfter[0]||(s.startSelection?s.startSelection.map(s.changes.invertedDesc,1):t.selection);if(r&&s.selectionsAfter.length)return t.update({selection:s.selectionsAfter[s.selectionsAfter.length-1],annotations:w0.of({side:e,rest:g2(n),selection:o}),userEvent:e==0?\"select.undo\":\"select.redo\",scrollIntoView:!0});if(s.changes){let l=n.length==1?At:n.slice(0,n.length-1);return 
s.mapped&&(l=x0(l,s.mapped)),t.update({changes:s.changes,selection:s.startSelection,effects:s.effects,annotations:w0.of({side:e,rest:l,selection:o}),filter:!1,userEvent:e==0?\"undo\":\"redo\",scrollIntoView:!0})}else return null}};Jr.empty=new Jr(At,At);var Zf=[{key:\"Mod-z\",run:Xf,preventDefault:!0},{key:\"Mod-y\",mac:\"Mod-Shift-z\",run:k0,preventDefault:!0},{linux:\"Ctrl-Shift-z\",run:k0,preventDefault:!0},{key:\"Mod-u\",run:u2,preventDefault:!0},{key:\"Alt-u\",mac:\"Mod-Shift-u\",run:f2,preventDefault:!0}];function Mi(i,e){return R.create(i.ranges.map(e),i.mainIndex)}function Nt(i,e){return i.update({selection:e,scrollIntoView:!0,userEvent:\"select\"})}function Ft({state:i,dispatch:e},t){let r=Mi(i.selection,t);return r.eq(i.selection,!0)?!1:(e(Nt(i,r)),!0)}function no(i,e){return R.cursor(e?i.to:i.from)}function Qf(i,e){return Ft(i,t=>t.empty?i.moveByChar(t,e):no(t,e))}function Ge(i){return i.textDirectionAt(i.state.selection.main.head)==he.LTR}var ed=i=>Qf(i,!Ge(i)),td=i=>Qf(i,Ge(i));function rd(i,e){return Ft(i,t=>t.empty?i.moveByGroup(t,e):no(t,e))}var y2=i=>rd(i,!Ge(i)),x2=i=>rd(i,Ge(i));var N7=typeof Intl<\"u\"&&Intl.Segmenter?new Intl.Segmenter(void 0,{granularity:\"word\"}):null;function w2(i,e,t){if(e.type.prop(t))return!0;let r=e.to-e.from;return r&&(r>2||/[^\\s,.;:]/.test(i.sliceDoc(e.from,e.to)))||e.firstChild}function so(i,e,t){let r=Ze(i).resolveInner(e.head),n=t?ee.closedBy:ee.openedBy;for(let a=e.head;;){let h=t?r.childAfter(a):r.childBefore(a);if(!h)break;w2(i,h,n)?r=h:a=t?h.to:h.from}let s=r.type.prop(n),o,l;return s&&(o=t?Rt(i,r.from,1):Rt(i,r.to,-1))&&o.matched?l=t?o.end.to:o.end.from:l=t?r.to:r.from,R.cursor(l,t?-1:1)}var k2=i=>Ft(i,e=>so(i.state,e,!Ge(i))),S2=i=>Ft(i,e=>so(i.state,e,Ge(i)));function id(i,e){return Ft(i,t=>{if(!t.empty)return no(t,e);let r=i.moveVertically(t,e);return r.head!=t.head?r:i.moveToLineBoundary(t,e)})}var nd=i=>id(i,!1),sd=i=>id(i,!0);function od(i){let 
e=i.scrollDOM.clientHeight<i.scrollDOM.scrollHeight-2,t=0,r=0,n;if(e){for(let s of i.state.facet(K.scrollMargins)){let o=s(i);o?.top&&(t=Math.max(o?.top,t)),o?.bottom&&(r=Math.max(o?.bottom,r))}n=i.scrollDOM.clientHeight-t-r}else n=(i.dom.ownerDocument.defaultView||window).innerHeight;return{marginTop:t,marginBottom:r,selfScroll:e,height:Math.max(i.defaultLineHeight,n-5)}}function ld(i,e){let t=od(i),{state:r}=i,n=Mi(r.selection,o=>o.empty?i.moveVertically(o,e,t.height):no(o,e));if(n.eq(r.selection))return!1;let s;if(t.selfScroll){let o=i.coordsAtPos(r.selection.main.head),l=i.scrollDOM.getBoundingClientRect(),a=l.top+t.marginTop,h=l.bottom-t.marginBottom;o&&o.top>a&&o.bottom<h&&(s=K.scrollIntoView(n.main.head,{y:\"start\",yMargin:o.top-a}))}return i.dispatch(Nt(r,n),{effects:s}),!0}var Nf=i=>ld(i,!1),S0=i=>ld(i,!0);function Mr(i,e,t){let r=i.lineBlockAt(e.head),n=i.moveToLineBoundary(e,t);if(n.head==e.head&&n.head!=(t?r.to:r.from)&&(n=i.moveToLineBoundary(e,t,!1)),!t&&n.head==r.from&&r.length){let s=/^\\s*/.exec(i.state.sliceDoc(r.from,Math.min(r.from+100,r.to)))[0].length;s&&e.head!=r.from+s&&(n=R.cursor(r.from+s))}return n}var C2=i=>Ft(i,e=>Mr(i,e,!0)),A2=i=>Ft(i,e=>Mr(i,e,!1)),M2=i=>Ft(i,e=>Mr(i,e,!Ge(i))),T2=i=>Ft(i,e=>Mr(i,e,Ge(i))),D2=i=>Ft(i,e=>R.cursor(i.lineBlockAt(e.head).from,1)),B2=i=>Ft(i,e=>R.cursor(i.lineBlockAt(e.head).to,-1));function E2(i,e,t){let r=!1,n=Mi(i.selection,s=>{let o=Rt(i,s.head,-1)||Rt(i,s.head,1)||s.head>0&&Rt(i,s.head-1,1)||s.head<i.doc.length&&Rt(i,s.head+1,-1);if(!o||!o.end)return s;r=!0;let l=o.start.from==s.head?o.end.to:o.end.from;return t?R.range(s.anchor,l):R.cursor(l)});return r?(e(Nt(i,n)),!0):!1}var O2=({state:i,dispatch:e})=>E2(i,e,!1);function Mt(i,e){let t=Mi(i.state.selection,r=>{let n=e(r);return R.range(r.anchor,n.head,n.goalColumn,n.bidiLevel||void 0)});return t.eq(i.state.selection)?!1:(i.dispatch(Nt(i.state,t)),!0)}function ad(i,e){return Mt(i,t=>i.moveByChar(t,e))}var 
hd=i=>ad(i,!Ge(i)),cd=i=>ad(i,Ge(i));function ud(i,e){return Mt(i,t=>i.moveByGroup(t,e))}var z2=i=>ud(i,!Ge(i)),L2=i=>ud(i,Ge(i));var I2=i=>Mt(i,e=>so(i.state,e,!Ge(i))),R2=i=>Mt(i,e=>so(i.state,e,Ge(i)));function fd(i,e){return Mt(i,t=>i.moveVertically(t,e))}var dd=i=>fd(i,!1),md=i=>fd(i,!0);function pd(i,e){return Mt(i,t=>i.moveVertically(t,e,od(i).height))}var Ff=i=>pd(i,!1),Hf=i=>pd(i,!0),P2=i=>Mt(i,e=>Mr(i,e,!0)),N2=i=>Mt(i,e=>Mr(i,e,!1)),F2=i=>Mt(i,e=>Mr(i,e,!Ge(i))),H2=i=>Mt(i,e=>Mr(i,e,Ge(i))),q2=i=>Mt(i,e=>R.cursor(i.lineBlockAt(e.head).from)),W2=i=>Mt(i,e=>R.cursor(i.lineBlockAt(e.head).to)),qf=({state:i,dispatch:e})=>(e(Nt(i,{anchor:0})),!0),Wf=({state:i,dispatch:e})=>(e(Nt(i,{anchor:i.doc.length})),!0),Vf=({state:i,dispatch:e})=>(e(Nt(i,{anchor:i.selection.main.anchor,head:0})),!0),$f=({state:i,dispatch:e})=>(e(Nt(i,{anchor:i.selection.main.anchor,head:i.doc.length})),!0),V2=({state:i,dispatch:e})=>(e(i.update({selection:{anchor:0,head:i.doc.length},userEvent:\"select\"})),!0),$2=({state:i,dispatch:e})=>{let t=oo(i).map(({from:r,to:n})=>R.range(r,Math.min(n+1,i.doc.length)));return e(i.update({selection:R.create(t),userEvent:\"select\"})),!0},G2=({state:i,dispatch:e})=>{let t=Mi(i.selection,r=>{let n=Ze(i),s=n.resolveStack(r.from,1);if(r.empty){let o=n.resolveStack(r.from,-1);o.node.from>=s.node.from&&o.node.to<=s.node.to&&(s=o)}for(let o=s;o;o=o.next){let{node:l}=o;if((l.from<r.from&&l.to>=r.to||l.to>r.to&&l.from<=r.from)&&o.next)return R.range(l.to,l.from)}return r});return t.eq(i.selection)?!1:(e(Nt(i,t)),!0)};function gd(i,e){let{state:t}=i,r=t.selection,n=t.selection.ranges.slice();for(let s of t.selection.ranges){let o=t.doc.lineAt(s.head);if(e?o.to<i.state.doc.length:o.from>0)for(let l=s;;){let a=i.moveVertically(l,e);if(a.head<o.from||a.head>o.to){n.some(h=>h.head==a.head)||n.push(a);break}else{if(a.head==l.head)break;l=a}}}return n.length==r.ranges.length?!1:(i.dispatch(Nt(t,R.create(n,n.length-1))),!0)}var 
U2=i=>gd(i,!1),K2=i=>gd(i,!0),j2=({state:i,dispatch:e})=>{let t=i.selection,r=null;return t.ranges.length>1?r=R.create([t.main]):t.main.empty||(r=R.create([R.cursor(t.main.head)])),r?(e(Nt(i,r)),!0):!1};function Sn(i,e){if(i.state.readOnly)return!1;let t=\"delete.selection\",{state:r}=i,n=r.changeByRange(s=>{let{from:o,to:l}=s;if(o==l){let a=e(s);a<o?(t=\"delete.backward\",a=to(i,a,!1)):a>o&&(t=\"delete.forward\",a=to(i,a,!0)),o=Math.min(o,a),l=Math.max(l,a)}else o=to(i,o,!1),l=to(i,l,!0);return o==l?{range:s}:{changes:{from:o,to:l},range:R.cursor(o,o<s.head?-1:1)}});return n.changes.empty?!1:(i.dispatch(r.update(n,{scrollIntoView:!0,userEvent:t,effects:t==\"delete.selection\"?K.announce.of(r.phrase(\"Selection deleted\")):void 0})),!0)}function to(i,e,t){if(i instanceof K)for(let r of i.state.facet(K.atomicRanges).map(n=>n(i)))r.between(e,e,(n,s)=>{n<e&&s>e&&(e=t?s:n)});return e}var vd=(i,e,t)=>Sn(i,r=>{let n=r.from,{state:s}=i,o=s.doc.lineAt(n),l,a;if(t&&!e&&n>o.from&&n<o.from+200&&!/[^ \\t]/.test(l=o.text.slice(0,n-o.from))){if(l[l.length-1]==\"\t\")return n-1;let h=vr(l,s.tabSize),c=h%Ar(s)||Ar(s);for(let u=0;u<c&&l[l.length-1-u]==\" \";u++)n--;a=n}else a=Ie(o.text,n-o.from,e,e)+o.from,a==n&&o.number!=(e?s.doc.lines:1)?a+=e?1:-1:!e&&/[\\ufe00-\\ufe0f]/.test(o.text.slice(a-o.from,n-o.from))&&(a=Ie(o.text,a-o.from,!1,!1)+o.from);return a}),C0=i=>vd(i,!1,!0);var bd=i=>vd(i,!0,!1),yd=(i,e)=>Sn(i,t=>{let r=t.head,{state:n}=i,s=n.doc.lineAt(r),o=n.charCategorizer(r);for(let l=null;;){if(r==(e?s.to:s.from)){r==t.head&&s.number!=(e?n.doc.lines:1)&&(r+=e?1:-1);break}let a=Ie(s.text,r-s.from,e)+s.from,h=s.text.slice(Math.min(r,a)-s.from,Math.max(r,a)-s.from),c=o(h);if(l!=null&&c!=l)break;(h!=\" \"||r!=t.head)&&(l=c),r=a}return r}),xd=i=>yd(i,!1),Y2=i=>yd(i,!0);var X2=i=>Sn(i,e=>{let t=i.lineBlockAt(e.head).to;return e.head<t?t:Math.min(i.state.doc.length,e.head+1)});var _2=i=>Sn(i,e=>{let t=i.moveToLineBoundary(e,!1).head;return 
e.head>t?t:Math.max(0,e.head-1)}),J2=i=>Sn(i,e=>{let t=i.moveToLineBoundary(e,!0).head;return e.head<t?t:Math.min(i.state.doc.length,e.head+1)});var Z2=({state:i,dispatch:e})=>{if(i.readOnly)return!1;let t=i.changeByRange(r=>({changes:{from:r.from,to:r.to,insert:se.of([\"\",\"\"])},range:R.cursor(r.from)}));return e(i.update(t,{scrollIntoView:!0,userEvent:\"input\"})),!0},Q2=({state:i,dispatch:e})=>{if(i.readOnly)return!1;let t=i.changeByRange(r=>{if(!r.empty||r.from==0||r.from==i.doc.length)return{range:r};let n=r.from,s=i.doc.lineAt(n),o=n==s.from?n-1:Ie(s.text,n-s.from,!1)+s.from,l=n==s.to?n+1:Ie(s.text,n-s.from,!0)+s.from;return{changes:{from:o,to:l,insert:i.doc.slice(n,l).append(i.doc.slice(o,n))},range:R.cursor(l)}});return t.changes.empty?!1:(e(i.update(t,{scrollIntoView:!0,userEvent:\"move.character\"})),!0)};function oo(i){let e=[],t=-1;for(let r of i.selection.ranges){let n=i.doc.lineAt(r.from),s=i.doc.lineAt(r.to);if(!r.empty&&r.to==s.from&&(s=i.doc.lineAt(r.to-1)),t>=n.number){let o=e[e.length-1];o.to=s.to,o.ranges.push(r)}else e.push({from:n.from,to:s.to,ranges:[r]});t=s.number+1}return e}function wd(i,e,t){if(i.readOnly)return!1;let r=[],n=[];for(let s of oo(i)){if(t?s.to==i.doc.length:s.from==0)continue;let o=i.doc.lineAt(t?s.to+1:s.from-1),l=o.length+1;if(t){r.push({from:s.to,to:o.to},{from:s.from,insert:o.text+i.lineBreak});for(let a of s.ranges)n.push(R.range(Math.min(i.doc.length,a.anchor+l),Math.min(i.doc.length,a.head+l)))}else{r.push({from:o.from,to:s.from},{from:s.to,insert:i.lineBreak+o.text});for(let a of s.ranges)n.push(R.range(a.anchor-l,a.head-l))}}return r.length?(e(i.update({changes:r,scrollIntoView:!0,selection:R.create(n,i.selection.mainIndex),userEvent:\"move.line\"})),!0):!1}var e5=({state:i,dispatch:e})=>wd(i,e,!1),t5=({state:i,dispatch:e})=>wd(i,e,!0);function kd(i,e,t){if(i.readOnly)return!1;let r=[];for(let s of 
oo(i))t?r.push({from:s.from,insert:i.doc.slice(s.from,s.to)+i.lineBreak}):r.push({from:s.to,insert:i.lineBreak+i.doc.slice(s.from,s.to)});let n=i.changes(r);return e(i.update({changes:n,selection:i.selection.map(n,t?1:-1),scrollIntoView:!0,userEvent:\"input.copyline\"})),!0}var r5=({state:i,dispatch:e})=>kd(i,e,!1),i5=({state:i,dispatch:e})=>kd(i,e,!0),n5=i=>{if(i.state.readOnly)return!1;let{state:e}=i,t=e.changes(oo(e).map(({from:n,to:s})=>(n>0?n--:s<e.doc.length&&s++,{from:n,to:s}))),r=Mi(e.selection,n=>{let s;if(i.lineWrapping){let o=i.lineBlockAt(n.head),l=i.coordsAtPos(n.head,n.assoc||1);l&&(s=o.bottom+i.documentTop-l.bottom+i.defaultLineHeight/2)}return i.moveVertically(n,!0,s)}).map(t);return i.dispatch({changes:t,selection:r,scrollIntoView:!0,userEvent:\"delete.line\"}),!0};function s5(i,e){if(/\\(\\)|\\[\\]|\\{\\}/.test(i.sliceDoc(e-1,e+1)))return{from:e,to:e};let t=Ze(i).resolveInner(e),r=t.childBefore(e),n=t.childAfter(e),s;return r&&n&&r.to<=e&&n.from>=e&&(s=r.type.prop(ee.closedBy))&&s.indexOf(n.name)>-1&&i.doc.lineAt(r.to).from==i.doc.lineAt(n.from).from&&!/\\S/.test(i.sliceDoc(r.to,n.from))?{from:r.to,to:n.from}:null}var Gf=Sd(!1),o5=Sd(!0);function Sd(i){return({state:e,dispatch:t})=>{if(e.readOnly)return!1;let r=e.changeByRange(n=>{let{from:s,to:o}=n,l=e.doc.lineAt(s),a=!i&&s==o&&s5(e,s);i&&(s=o=(o<=l.to?l:e.doc.lineAt(o)).to);let h=new Yr(e,{simulateBreak:s,simulateDoubleBreak:!!a}),c=Ys(h,s);for(c==null&&(c=vr(/^\\s*/.exec(e.doc.lineAt(s).text)[0],e.tabSize));o<l.to&&/\\s/.test(l.text[o-l.from]);)o++;a?{from:s,to:o}=a:s>l.from&&s<l.from+100&&!/\\S/.test(l.text.slice(0,s))&&(s=l.from);let u=[\"\",Ai(e,c)];return a&&u.push(Ai(e,h.lineIndent(l.from,-1))),{changes:{from:s,to:o,insert:se.of(u)},range:R.cursor(s+1+u[1].length)}});return t(e.update(r,{scrollIntoView:!0,userEvent:\"input\"})),!0}}function T0(i,e){let t=-1;return i.changeByRange(r=>{let n=[];for(let o=r.from;o<=r.to;){let 
l=i.doc.lineAt(o);l.number>t&&(r.empty||r.to>l.from)&&(e(l,n,r),t=l.number),o=l.to+1}let s=i.changes(n);return{changes:n,range:R.range(s.mapPos(r.anchor,1),s.mapPos(r.head,1))}})}var l5=({state:i,dispatch:e})=>{if(i.readOnly)return!1;let t=Object.create(null),r=new Yr(i,{overrideIndentation:s=>{let o=t[s];return o??-1}}),n=T0(i,(s,o,l)=>{let a=Ys(r,s.from);if(a==null)return;/\\S/.test(s.text)||(a=0);let h=/^\\s*/.exec(s.text)[0],c=Ai(i,a);(h!=c||l.from<s.from+h.length)&&(t[s.from]=a,o.push({from:s.from,to:s.from+h.length,insert:c}))});return n.changes.empty||e(i.update(n,{userEvent:\"indent\"})),!0},a5=({state:i,dispatch:e})=>i.readOnly?!1:(e(i.update(T0(i,(t,r)=>{r.push({from:t.from,insert:i.facet(xn)})}),{userEvent:\"input.indent\"})),!0),h5=({state:i,dispatch:e})=>i.readOnly?!1:(e(i.update(T0(i,(t,r)=>{let n=/^\\s*/.exec(t.text)[0];if(!n)return;let s=vr(n,i.tabSize),o=0,l=Ai(i,Math.max(0,s-Ar(i)));for(;o<n.length&&o<l.length&&n.charCodeAt(o)==l.charCodeAt(o);)o++;r.push({from:t.from+o,to:t.from+n.length,insert:l.slice(o)})}),{userEvent:\"delete.dedent\"})),!0),c5=i=>(i.setTabFocusMode(),!0);var 
u5=[{key:\"Ctrl-b\",run:ed,shift:hd,preventDefault:!0},{key:\"Ctrl-f\",run:td,shift:cd},{key:\"Ctrl-p\",run:nd,shift:dd},{key:\"Ctrl-n\",run:sd,shift:md},{key:\"Ctrl-a\",run:D2,shift:q2},{key:\"Ctrl-e\",run:B2,shift:W2},{key:\"Ctrl-d\",run:bd},{key:\"Ctrl-h\",run:C0},{key:\"Ctrl-k\",run:X2},{key:\"Ctrl-Alt-h\",run:xd},{key:\"Ctrl-o\",run:Z2},{key:\"Ctrl-t\",run:Q2},{key:\"Ctrl-v\",run:S0}],f5=[{key:\"ArrowLeft\",run:ed,shift:hd,preventDefault:!0},{key:\"Mod-ArrowLeft\",mac:\"Alt-ArrowLeft\",run:y2,shift:z2,preventDefault:!0},{mac:\"Cmd-ArrowLeft\",run:M2,shift:F2,preventDefault:!0},{key:\"ArrowRight\",run:td,shift:cd,preventDefault:!0},{key:\"Mod-ArrowRight\",mac:\"Alt-ArrowRight\",run:x2,shift:L2,preventDefault:!0},{mac:\"Cmd-ArrowRight\",run:T2,shift:H2,preventDefault:!0},{key:\"ArrowUp\",run:nd,shift:dd,preventDefault:!0},{mac:\"Cmd-ArrowUp\",run:qf,shift:Vf},{mac:\"Ctrl-ArrowUp\",run:Nf,shift:Ff},{key:\"ArrowDown\",run:sd,shift:md,preventDefault:!0},{mac:\"Cmd-ArrowDown\",run:Wf,shift:$f},{mac:\"Ctrl-ArrowDown\",run:S0,shift:Hf},{key:\"PageUp\",run:Nf,shift:Ff},{key:\"PageDown\",run:S0,shift:Hf},{key:\"Home\",run:A2,shift:N2,preventDefault:!0},{key:\"Mod-Home\",run:qf,shift:Vf},{key:\"End\",run:C2,shift:P2,preventDefault:!0},{key:\"Mod-End\",run:Wf,shift:$f},{key:\"Enter\",run:Gf,shift:Gf},{key:\"Mod-a\",run:V2},{key:\"Backspace\",run:C0,shift:C0,preventDefault:!0},{key:\"Delete\",run:bd,preventDefault:!0},{key:\"Mod-Backspace\",mac:\"Alt-Backspace\",run:xd,preventDefault:!0},{key:\"Mod-Delete\",mac:\"Alt-Delete\",run:Y2,preventDefault:!0},{mac:\"Mod-Backspace\",run:_2,preventDefault:!0},{mac:\"Mod-Delete\",run:J2,preventDefault:!0}].concat(u5.map(i=>({mac:i.key,run:i.run,shift:i.shift}))),Cd=[{key:\"Alt-ArrowLeft\",mac:\"Ctrl-ArrowLeft\",run:k2,shift:I2},{key:\"Alt-ArrowRight\",mac:\"Ctrl-ArrowRight\",run:S2,shift:R2},{key:\"Alt-ArrowUp\",run:e5},{key:\"Shift-Alt-ArrowUp\",run:r5},{key:\"Alt-ArrowDown\",run:t5},{key:\"Shift-Alt-ArrowDown\",run:i5},{key:\"Mod-Al
t-ArrowUp\",run:U2},{key:\"Mod-Alt-ArrowDown\",run:K2},{key:\"Escape\",run:j2},{key:\"Mod-Enter\",run:o5},{key:\"Alt-l\",mac:\"Ctrl-l\",run:$2},{key:\"Mod-i\",run:G2,preventDefault:!0},{key:\"Mod-[\",run:h5},{key:\"Mod-]\",run:a5},{key:\"Mod-Alt-\\\\\",run:l5},{key:\"Shift-Mod-k\",run:n5},{key:\"Shift-Mod-\\\\\",run:O2},{key:\"Mod-/\",run:r2},{key:\"Alt-A\",run:n2},{key:\"Ctrl-m\",mac:\"Shift-Alt-m\",run:c5}].concat(f5);var Ad=typeof String.prototype.normalize==\"function\"?i=>i.normalize(\"NFKD\"):i=>i,lo=class{constructor(e,t,r=0,n=e.length,s,o){this.test=o,this.value={from:0,to:0},this.done=!1,this.matches=[],this.buffer=\"\",this.bufferPos=0,this.iter=e.iterRange(r,n),this.bufferStart=r,this.normalize=s?l=>s(Ad(l)):Ad,this.query=this.normalize(t)}peek(){if(this.bufferPos==this.buffer.length){if(this.bufferStart+=this.buffer.length,this.iter.next(),this.iter.done)return-1;this.bufferPos=0,this.buffer=this.iter.value}return Xe(this.buffer,this.bufferPos)}next(){for(;this.matches.length;)this.matches.pop();return this.nextOverlapping()}nextOverlapping(){for(;;){let e=this.peek();if(e<0)return this.done=!0,this;let t=qi(e),r=this.bufferStart+this.bufferPos;this.bufferPos+=vt(e);let n=this.normalize(t);if(n.length)for(let s=0,o=r;;s++){let l=n.charCodeAt(s),a=this.match(l,o,this.bufferPos+this.bufferStart);if(s==n.length-1){if(a)return this.value=a,this;break}o==r&&s<t.length&&t.charCodeAt(s)==l&&o++}}}match(e,t,r){let n=null;for(let s=0;s<this.matches.length;s+=2){let o=this.matches[s],l=!1;this.query.charCodeAt(o)==e&&(o==this.query.length-1?n={from:this.matches[s+1],to:r}:(this.matches[s]++,l=!0)),l||(this.matches.splice(s,2),s-=2)}return this.query.charCodeAt(0)==e&&(this.query.length==1?n={from:t,to:r}:this.matches.push(1,t)),n&&this.test&&!this.test(n.from,n.to,this.buffer,this.bufferStart)&&(n=null),n}};typeof Symbol<\"u\"&&(lo.prototype[Symbol.iterator]=function(){return this});var 
Td={from:-1,to:-1,match:/.*/.exec(\"\")},Dd=\"gm\"+(/x/.unicode==null?\"\":\"u\"),B0=class{constructor(e,t,r,n=0,s=e.length){if(this.text=e,this.to=s,this.curLine=\"\",this.done=!1,this.value=Td,/\\\\[sWDnr]|\\n|\\r|\\[\\^/.test(t))return new ho(e,t,r,n,s);this.re=new RegExp(t,Dd+(r?.ignoreCase?\"i\":\"\")),this.test=r?.test,this.iter=e.iter();let o=e.lineAt(n);this.curLineStart=o.from,this.matchPos=co(e,n),this.getLine(this.curLineStart)}getLine(e){this.iter.next(e),this.iter.lineBreak?this.curLine=\"\":(this.curLine=this.iter.value,this.curLineStart+this.curLine.length>this.to&&(this.curLine=this.curLine.slice(0,this.to-this.curLineStart)),this.iter.next())}nextLine(){this.curLineStart=this.curLineStart+this.curLine.length+1,this.curLineStart>this.to?this.curLine=\"\":this.getLine(0)}next(){for(let e=this.matchPos-this.curLineStart;;){this.re.lastIndex=e;let t=this.matchPos<=this.to&&this.re.exec(this.curLine);if(t){let r=this.curLineStart+t.index,n=r+t[0].length;if(this.matchPos=co(this.text,n+(r==n?1:0)),r==this.curLineStart+this.curLine.length&&this.nextLine(),(r<n||r>this.value.to)&&(!this.test||this.test(r,n,t)))return this.value={from:r,to:n,match:t},this;e=this.matchPos-this.curLineStart}else if(this.curLineStart+this.curLine.length<this.to)this.nextLine(),e=0;else return this.done=!0,this}}},D0=new WeakMap,ao=class i{constructor(e,t){this.from=e,this.text=t}get to(){return this.from+this.text.length}static get(e,t,r){let n=D0.get(e);if(!n||n.from>=r||n.to<=t){let l=new i(t,e.sliceString(t,r));return D0.set(e,l),l}if(n.from==t&&n.to==r)return n;let{text:s,from:o}=n;return o>t&&(s=e.sliceString(t,o)+s,o=t),n.to<r&&(s+=e.sliceString(n.to,r)),D0.set(e,new i(o,s)),new i(t,s.slice(t-o,r-o))}},ho=class{constructor(e,t,r,n,s){this.text=e,this.to=s,this.done=!1,this.value=Td,this.matchPos=co(e,n),this.re=new RegExp(t,Dd+(r?.ignoreCase?\"i\":\"\")),this.test=r?.test,this.flat=ao.get(e,n,this.chunkEnd(n+5e3))}chunkEnd(e){return 
e>=this.to?this.to:this.text.lineAt(e).to}next(){for(;;){let e=this.re.lastIndex=this.matchPos-this.flat.from,t=this.re.exec(this.flat.text);if(t&&!t[0]&&t.index==e&&(this.re.lastIndex=e+1,t=this.re.exec(this.flat.text)),t){let r=this.flat.from+t.index,n=r+t[0].length;if((this.flat.to>=this.to||t.index+t[0].length<=this.flat.text.length-10)&&(!this.test||this.test(r,n,t)))return this.value={from:r,to:n,match:t},this.matchPos=co(this.text,n+(r==n?1:0)),this}if(this.flat.to==this.to)return this.done=!0,this;this.flat=ao.get(this.text,this.flat.from,this.chunkEnd(this.flat.from+this.flat.text.length*2))}}};typeof Symbol<\"u\"&&(B0.prototype[Symbol.iterator]=ho.prototype[Symbol.iterator]=function(){return this});function co(i,e){if(e>=i.length)return e;let t=i.lineAt(e),r;for(;e<t.to&&(r=t.text.charCodeAt(e-t.from))>=56320&&r<57344;)e++;return e}var m5={highlightWordAroundCursor:!1,minSelectionLength:1,maxMatches:100,wholeWords:!1},Bd=H.define({combine(i){return it(i,m5,{highlightWordAroundCursor:(e,t)=>e||t,minSelectionLength:Math.min,maxMatches:Math.min})}});function Ed(i){let e=[y5,b5];return i&&e.push(Bd.of(i)),e}var p5=X.mark({class:\"cm-selectionMatch\"}),g5=X.mark({class:\"cm-selectionMatch cm-selectionMatch-main\"});function Md(i,e,t,r){return(t==0||i(e.sliceDoc(t-1,t))!=Ee.Word)&&(r==e.doc.length||i(e.sliceDoc(r,r+1))!=Ee.Word)}function v5(i,e,t,r){return i(e.sliceDoc(t,t+1))==Ee.Word&&i(e.sliceDoc(r-1,r))==Ee.Word}var b5=Se.fromClass(class{constructor(i){this.decorations=this.getDeco(i)}update(i){(i.selectionSet||i.docChanged||i.viewportChanged)&&(this.decorations=this.getDeco(i.view))}getDeco(i){let e=i.state.facet(Bd),{state:t}=i,r=t.selection;if(r.ranges.length>1)return X.none;let n=r.main,s,o=null;if(n.empty){if(!e.highlightWordAroundCursor)return X.none;let a=t.wordAt(n.head);if(!a)return X.none;o=t.charCategorizer(n.head),s=t.sliceDoc(a.from,a.to)}else{let a=n.to-n.from;if(a<e.minSelectionLength||a>200)return 
X.none;if(e.wholeWords){if(s=t.sliceDoc(n.from,n.to),o=t.charCategorizer(n.head),!(Md(o,t,n.from,n.to)&&v5(o,t,n.from,n.to)))return X.none}else if(s=t.sliceDoc(n.from,n.to),!s)return X.none}let l=[];for(let a of i.visibleRanges){let h=new lo(t.doc,s,a.from,a.to);for(;!h.next().done;){let{from:c,to:u}=h.value;if((!o||Md(o,t,c,u))&&(n.empty&&c<=n.from&&u>=n.to?l.push(g5.range(c,u)):(c>=n.to||u<=n.from)&&l.push(p5.range(c,u)),l.length>e.maxMatches))return X.none}}return X.set(l)}},{decorations:i=>i.decorations}),y5=K.baseTheme({\".cm-selectionMatch\":{backgroundColor:\"#99ff7780\"},\".cm-searchMatch .cm-selectionMatch\":{backgroundColor:\"transparent\"}});var fo=class{constructor(e,t,r){this.from=e,this.to=t,this.diagnostic=r}},Zr=class i{constructor(e,t,r){this.diagnostics=e,this.panel=t,this.selected=r}static init(e,t,r){let n=r.facet(Yt).markerFilter;n&&(e=n(e,r));let s=e.slice().sort((p,v)=>p.from-v.from||p.to-v.to),o=new Et,l=[],a=0,h=r.doc.iter(),c=0,u=r.doc.length;for(let p=0;;){let v=p==s.length?null:s[p];if(!v&&!l.length)break;let y,w;if(l.length)y=a,w=l.reduce((M,E)=>Math.min(M,E.to),v&&v.from>y?v.from:1e8);else{if(y=v.from,y>u)break;w=v.to,l.push(v),p++}for(;p<s.length;){let M=s[p];if(M.from==y&&(M.to>M.from||M.to==y))l.push(M),p++,w=Math.min(M.to,w);else{w=Math.min(M.from,w);break}}w=Math.min(w,u);let S=!1;if(l.some(M=>M.from==y&&(M.to==w||w==u))&&(S=y==w,!S&&w-y<10)){let M=y-(c+h.value.length);M>0&&(h.next(M),c=y);for(let E=y;;){if(E>=w){S=!0;break}if(!h.lineBreak&&c+h.value.length>E)break;E=c+h.value.length,c+=h.value.length,h.next()}}let A=O5(l);if(S)o.add(y,y,X.widget({widget:new E0(A),diagnostics:l.slice()}));else{let M=l.reduce((E,T)=>T.markClass?E+\" \"+T.markClass:E,\"\");o.add(y,w,X.mark({class:\"cm-lintRange cm-lintRange-\"+A+M,diagnostics:l.slice(),inclusiveEnd:l.some(E=>E.to>w)}))}if(a=w,a==u)break;for(let M=0;M<l.length;M++)l[M].to<=a&&l.splice(M--,1)}let d=o.finish();return new i(d,t,Ti(d))}};function Ti(i,e=null,t=0){let r=null;return 
i.between(t,1e9,(n,s,{spec:o})=>{if(!(e&&o.diagnostics.indexOf(e)<0))if(!r)r=new fo(n,s,e||o.diagnostics[0]);else{if(o.diagnostics.indexOf(r.diagnostic)<0)return!1;r=new fo(r.from,s,r.diagnostic)}}),r}function x5(i,e){let t=e.pos,r=e.end||t,n=i.state.facet(Yt).hideOn(i,t,r);if(n!=null)return n;let s=i.startState.doc.lineAt(e.pos);return!!(i.effects.some(o=>o.is(O0))||i.changes.touchesRange(s.from,Math.max(s.to,r)))}function w5(i,e){return i.field(Ht,!1)?e:e.concat(te.appendConfig.of(Fd))}function k5(i,e){return{effects:w5(i,[O0.of(e)])}}var O0=te.define(),Ld=te.define(),Id=te.define(),Ht=Re.define({create(){return new Zr(X.none,null,null)},update(i,e){if(e.docChanged&&i.diagnostics.size){let t=i.diagnostics.map(e.changes),r=null,n=i.panel;if(i.selected){let s=e.changes.mapPos(i.selected.from,1);r=Ti(t,i.selected.diagnostic,s)||Ti(t,null,s)}!t.size&&n&&e.state.facet(Yt).autoPanel&&(n=null),i=new Zr(t,n,r)}for(let t of e.effects)if(t.is(O0)){let r=e.state.facet(Yt).autoPanel?t.value.length?po.open:null:i.panel;i=Zr.init(t.value,r,e.state)}else t.is(Ld)?i=new Zr(i.diagnostics,t.value?po.open:null,i.selected):t.is(Id)&&(i=new Zr(i.diagnostics,i.panel,t.value));return i},provide:i=>[on.from(i,e=>e.panel),K.decorations.from(i,e=>e.diagnostics)]});var S5=X.mark({class:\"cm-lintRange cm-lintRange-active\"});function C5(i,e,t){let{diagnostics:r}=i.state.field(Ht),n,s=-1,o=-1;r.between(e-(t<0?1:0),e+(t>0?1:0),(a,h,{spec:c})=>{if(e>=a&&e<=h&&(a==h||(e>a||t>0)&&(e<h||t<0)))return n=c.diagnostics,s=a,o=h,!1});let l=i.state.facet(Yt).tooltipFilter;return n&&l&&(n=l(n,i.state)),n?{pos:s,end:o,above:i.state.doc.lineAt(s).to<o,create(){return{dom:A5(i,n)}}}:null}function A5(i,e){return nt(\"ul\",{class:\"cm-tooltip-lint\"},e.map(t=>Nd(i,t,!1)))}var Od=i=>{let e=i.state.field(Ht,!1);return!e||!e.panel?!1:(i.dispatch({effects:Ld.of(!1)}),!0)};var 
M5=Se.fromClass(class{constructor(i){this.view=i,this.timeout=-1,this.set=!0;let{delay:e}=i.state.facet(Yt);this.lintTime=Date.now()+e,this.run=this.run.bind(this),this.timeout=setTimeout(this.run,e)}run(){clearTimeout(this.timeout);let i=Date.now();if(i<this.lintTime-10)this.timeout=setTimeout(this.run,this.lintTime-i);else{this.set=!1;let{state:e}=this.view,{sources:t}=e.facet(Yt);t.length&&T5(t.map(r=>Promise.resolve(r(this.view))),r=>{this.view.state.doc==e.doc&&this.view.dispatch(k5(this.view.state,r.reduce((n,s)=>n.concat(s))))},r=>{Fe(this.view.state,r)})}}update(i){let e=i.state.facet(Yt);(i.docChanged||e!=i.startState.facet(Yt)||e.needsRefresh&&e.needsRefresh(i))&&(this.lintTime=Date.now()+e.delay,this.set||(this.set=!0,this.timeout=setTimeout(this.run,e.delay)))}force(){this.set&&(this.lintTime=Date.now(),this.run())}destroy(){clearTimeout(this.timeout)}});function T5(i,e,t){let r=[],n=-1;for(let s of i)s.then(o=>{r.push(o),clearTimeout(n),r.length==i.length?e(r):n=setTimeout(()=>e(r),200)},t)}var Yt=H.define({combine(i){return{sources:i.map(e=>e.source).filter(e=>e!=null),...it(i.map(e=>e.config),{delay:750,markerFilter:null,tooltipFilter:null,needsRefresh:null,hideOn:()=>null},{delay:Math.max,markerFilter:zd,tooltipFilter:zd,needsRefresh:(e,t)=>e?t?r=>e(r)||t(r):e:t,hideOn:(e,t)=>e?t?(r,n,s)=>e(r,n,s)||t(r,n,s):e:t,autoPanel:(e,t)=>e||t})}}});function zd(i,e){return i?e?(t,r)=>e(i(t,r),r):i:e}function Rd(i,e={}){return[Yt.of({source:i,config:e}),M5,Fd]}function Pd(i){let e=[];if(i)e:for(let{name:t}of i){for(let r=0;r<t.length;r++){let n=t[r];if(/[a-zA-Z]/.test(n)&&!e.some(s=>s.toLowerCase()==n.toLowerCase())){e.push(n);continue e}}e.push(\"\")}return e}function Nd(i,e,t){var r;let n=t?Pd(e.actions):[];return nt(\"li\",{class:\"cm-diagnostic cm-diagnostic-\"+e.severity},nt(\"span\",{class:\"cm-diagnosticText\"},e.renderMessage?e.renderMessage(i):e.message),(r=e.actions)===null||r===void 0?void 0:r.map((s,o)=>{let 
l=!1,a=p=>{if(p.preventDefault(),l)return;l=!0;let v=Ti(i.state.field(Ht).diagnostics,e);v&&s.apply(i,v.from,v.to)},{name:h}=s,c=n[o]?h.indexOf(n[o]):-1,u=c<0?h:[h.slice(0,c),nt(\"u\",h.slice(c,c+1)),h.slice(c+1)],d=s.markClass?\" \"+s.markClass:\"\";return nt(\"button\",{type:\"button\",class:\"cm-diagnosticAction\"+d,onclick:a,onmousedown:a,\"aria-label\":` Action: ${h}${c<0?\"\":` (access key \"${n[o]})\"`}.`},u)}),e.source&&nt(\"div\",{class:\"cm-diagnosticSource\"},e.source))}var E0=class extends kt{constructor(e){super(),this.sev=e}eq(e){return e.sev==this.sev}toDOM(){return nt(\"span\",{class:\"cm-lintPoint cm-lintPoint-\"+this.sev})}},mo=class{constructor(e,t){this.diagnostic=t,this.id=\"item_\"+Math.floor(Math.random()*4294967295).toString(16),this.dom=Nd(e,t,!0),this.dom.id=this.id,this.dom.setAttribute(\"role\",\"option\")}},po=class i{constructor(e){this.view=e,this.items=[];let t=n=>{if(!(n.ctrlKey||n.altKey||n.metaKey)){if(n.keyCode==27)Od(this.view),this.view.focus();else if(n.keyCode==38||n.keyCode==33)this.moveSelection((this.selectedIndex-1+this.items.length)%this.items.length);else if(n.keyCode==40||n.keyCode==34)this.moveSelection((this.selectedIndex+1)%this.items.length);else if(n.keyCode==36)this.moveSelection(0);else if(n.keyCode==35)this.moveSelection(this.items.length-1);else if(n.keyCode==13)this.view.focus();else if(n.keyCode>=65&&n.keyCode<=90&&this.selectedIndex>=0){let{diagnostic:s}=this.items[this.selectedIndex],o=Pd(s.actions);for(let l=0;l<o.length;l++)if(o[l].toUpperCase().charCodeAt(0)==n.keyCode){let a=Ti(this.view.state.field(Ht).diagnostics,s);a&&s.actions[l].apply(e,a.from,a.to)}}else return;n.preventDefault()}},r=n=>{for(let 
s=0;s<this.items.length;s++)this.items[s].dom.contains(n.target)&&this.moveSelection(s)};this.list=nt(\"ul\",{tabIndex:0,role:\"listbox\",\"aria-label\":this.view.state.phrase(\"Diagnostics\"),onkeydown:t,onclick:r}),this.dom=nt(\"div\",{class:\"cm-panel-lint\"},this.list,nt(\"button\",{type:\"button\",name:\"close\",\"aria-label\":this.view.state.phrase(\"close\"),onclick:()=>Od(this.view)},\"\\xD7\")),this.update()}get selectedIndex(){let e=this.view.state.field(Ht).selected;if(!e)return-1;for(let t=0;t<this.items.length;t++)if(this.items[t].diagnostic==e.diagnostic)return t;return-1}update(){let{diagnostics:e,selected:t}=this.view.state.field(Ht),r=0,n=!1,s=null,o=new Set;for(e.between(0,this.view.state.doc.length,(l,a,{spec:h})=>{for(let c of h.diagnostics){if(o.has(c))continue;o.add(c);let u=-1,d;for(let p=r;p<this.items.length;p++)if(this.items[p].diagnostic==c){u=p;break}u<0?(d=new mo(this.view,c),this.items.splice(r,0,d),n=!0):(d=this.items[u],u>r&&(this.items.splice(r,u-r),n=!0)),t&&d.diagnostic==t.diagnostic?d.dom.hasAttribute(\"aria-selected\")||(d.dom.setAttribute(\"aria-selected\",\"true\"),s=d):d.dom.hasAttribute(\"aria-selected\")&&d.dom.removeAttribute(\"aria-selected\"),r++}});r<this.items.length&&!(this.items.length==1&&this.items[0].diagnostic.from<0);)n=!0,this.items.pop();this.items.length==0&&(this.items.push(new mo(this.view,{from:-1,to:-1,severity:\"info\",message:this.view.state.phrase(\"No diagnostics\")})),n=!0),s?(this.list.setAttribute(\"aria-activedescendant\",s.id),this.view.requestMeasure({key:this,read:()=>({sel:s.dom.getBoundingClientRect(),panel:this.list.getBoundingClientRect()}),write:({sel:l,panel:a})=>{let h=a.height/this.list.offsetHeight;l.top<a.top?this.list.scrollTop-=(a.top-l.top)/h:l.bottom>a.bottom&&(this.list.scrollTop+=(l.bottom-a.bottom)/h)}})):this.selectedIndex<0&&this.list.removeAttribute(\"aria-activedescendant\"),n&&this.sync()}sync(){let e=this.list.firstChild;function t(){let 
r=e;e=r.nextSibling,r.remove()}for(let r of this.items)if(r.dom.parentNode==this.list){for(;e!=r.dom;)t();e=r.dom.nextSibling}else this.list.insertBefore(r.dom,e);for(;e;)t()}moveSelection(e){if(this.selectedIndex<0)return;let t=this.view.state.field(Ht),r=Ti(t.diagnostics,this.items[e].diagnostic);r&&this.view.dispatch({selection:{anchor:r.from,head:r.to},scrollIntoView:!0,effects:Id.of(r)})}static open(e){return new i(e)}};function D5(i,e='viewBox=\"0 0 40 40\"'){return`url('data:image/svg+xml,<svg xmlns=\"http://www.w3.org/2000/svg\" ${e}>${encodeURIComponent(i)}</svg>')`}function uo(i){return D5(`<path d=\"m0 2.5 l2 -1.5 l1 0 l2 1.5 l1 0\" stroke=\"${i}\" fill=\"none\" stroke-width=\".7\"/>`,'width=\"6\" height=\"3\"')}var B5=K.baseTheme({\".cm-diagnostic\":{padding:\"3px 6px 3px 8px\",marginLeft:\"-1px\",display:\"block\",whiteSpace:\"pre-wrap\"},\".cm-diagnostic-error\":{borderLeft:\"5px solid #d11\"},\".cm-diagnostic-warning\":{borderLeft:\"5px solid orange\"},\".cm-diagnostic-info\":{borderLeft:\"5px solid #999\"},\".cm-diagnostic-hint\":{borderLeft:\"5px solid #66d\"},\".cm-diagnosticAction\":{font:\"inherit\",border:\"none\",padding:\"2px 4px\",backgroundColor:\"#444\",color:\"white\",borderRadius:\"3px\",marginLeft:\"8px\",cursor:\"pointer\"},\".cm-diagnosticSource\":{fontSize:\"70%\",opacity:.7},\".cm-lintRange\":{backgroundPosition:\"left bottom\",backgroundRepeat:\"repeat-x\",paddingBottom:\"0.7px\"},\".cm-lintRange-error\":{backgroundImage:uo(\"#d11\")},\".cm-lintRange-warning\":{backgroundImage:uo(\"orange\")},\".cm-lintRange-info\":{backgroundImage:uo(\"#999\")},\".cm-lintRange-hint\":{backgroundImage:uo(\"#66d\")},\".cm-lintRange-active\":{backgroundColor:\"#ffdd9980\"},\".cm-tooltip-lint\":{padding:0,margin:0},\".cm-lintPoint\":{position:\"relative\",\"&:after\":{content:'\"\"',position:\"absolute\",bottom:0,left:\"-2px\",borderLeft:\"3px solid transparent\",borderRight:\"3px solid transparent\",borderBottom:\"4px solid 
#d11\"}},\".cm-lintPoint-warning\":{\"&:after\":{borderBottomColor:\"orange\"}},\".cm-lintPoint-info\":{\"&:after\":{borderBottomColor:\"#999\"}},\".cm-lintPoint-hint\":{\"&:after\":{borderBottomColor:\"#66d\"}},\".cm-panel.cm-panel-lint\":{position:\"relative\",\"& ul\":{maxHeight:\"100px\",overflowY:\"auto\",\"& [aria-selected]\":{backgroundColor:\"#ddd\",\"& u\":{textDecoration:\"underline\"}},\"&:focus [aria-selected]\":{background_fallback:\"#bdf\",backgroundColor:\"Highlight\",color_fallback:\"white\",color:\"HighlightText\"},\"& u\":{textDecoration:\"none\"},padding:0,margin:0},\"& [name=close]\":{position:\"absolute\",top:\"0\",right:\"2px\",background:\"inherit\",border:\"none\",font:\"inherit\",padding:0,margin:0}}});function E5(i){return i==\"error\"?4:i==\"warning\"?3:i==\"info\"?2:1}function O5(i){let e=\"hint\",t=1;for(let r of i){let n=E5(r.severity);n>t&&(t=n,e=r.severity)}return e}var Fd=[Ht,K.decorations.compute([Ht],i=>{let{selected:e,panel:t}=i.field(Ht);return!e||!t||e.from==e.to?X.none:X.set([S5.range(e.from,e.to)])}),Ps(C5,{hideOn:x5}),B5];var z5=K.theme({\"&\":{fontSize:\"14px\",backgroundColor:\"transparent\"},\".cm-content\":{fontFamily:\"'JetBrains Mono', 'SF Mono', 'Fira Code', 'Consolas', monospace\",caretColor:\"#daa550\",padding:\"8px 0\"},\".cm-cursor, .cm-dropCursor\":{borderLeftColor:\"#daa550\"},\"&.cm-focused .cm-selectionBackground, .cm-selectionBackground\":{backgroundColor:\"#3a3a50\"},\".cm-gutters\":{backgroundColor:\"transparent\",color:\"#646870\",border:\"none\",paddingLeft:\"4px\"},\".cm-activeLineGutter\":{backgroundColor:\"transparent\",color:\"#aab0ba\"},\".cm-activeLine\":{backgroundColor:\"transparent\"},\".cm-line\":{padding:\"0 8px\"},\".cm-tooltip\":{backgroundColor:\"#24242e\",border:\"1px solid #32323a\",color:\"#c8ccd4\"},\".cm-tooltip-autocomplete\":{\"& > ul > li[aria-selected]\":{backgroundColor:\"#3a3a50\"}},\".cm-tooltip-hover\":{padding:\"4px 8px\",maxWidth:\"500px\"},\".cm-type-tooltip 
code\":{fontFamily:\"'JetBrains Mono', 'SF Mono', 'Fira Code', monospace\",fontSize:\"13px\",color:\"#ffcb6b\"},\".cm-type-tooltip .cm-type-doc\":{marginTop:\"4px\",paddingTop:\"4px\",borderTop:\"1px solid #32323a\",fontSize:\"12px\",color:\"#9da5b4\",whiteSpace:\"pre-wrap\"},\".cm-diagnostic-error\":{borderBottom:\"2px solid #ff5370\"},\".cm-diagnostic-warning\":{borderBottom:\"2px solid #ffcb6b\"}},{dark:!0}),L5=Ci.define([{tag:P.keyword,color:\"#c792ea\"},{tag:P.operator,color:\"#89ddff\"},{tag:P.string,color:\"#c3e88d\"},{tag:P.number,color:\"#f78c6c\"},{tag:P.bool,color:\"#f78c6c\"},{tag:P.comment,color:\"#646870\",fontStyle:\"italic\"},{tag:P.typeName,color:\"#ffcb6b\"},{tag:P.definition(P.variableName),color:\"#82aaff\"},{tag:P.variableName,color:\"#c8ccd4\"},{tag:P.function(P.variableName),color:\"#82aaff\"},{tag:P.propertyName,color:\"#c8ccd4\"},{tag:P.meta,color:\"#daa550\"},{tag:P.punctuation,color:\"#89ddff\"}]);function I5(i){switch(i){case\"value\":return\"variable\";case\"type\":return\"type\";case\"module\":return\"namespace\";case\"module_type\":return\"interface\";case\"constructor\":return\"enum\";case\"label\":return\"property\";default:return\"variable\"}}function R5(i){return async e=>{let t=e.matchBefore(/[\\w.]+$/);if(!t&&!e.explicit)return null;let r=e.state.doc.toString(),n=e.pos;try{let s=await i.complete(r,n);return!s||s.length===0?null:{from:t?t.from:e.pos,options:s.map(o=>({label:o.label,type:I5(o.kind),detail:o.detail}))}}catch{return null}}}function P5(i){return Ps(async(e,t)=>{let r=e.state.doc.toString();try{let n=await i.typeAt(r,t);return!n||!n.info?null:{pos:n.info.from,end:n.info.to,above:!0,create(){let s=document.createElement(\"div\");s.className=\"cm-type-tooltip\";let o=document.createElement(\"code\");if(o.textContent=n.info.type,s.appendChild(o),n.info.doc){let l=document.createElement(\"div\");l.className=\"cm-type-doc\",l.textContent=n.info.doc,s.appendChild(l)}return{dom:s}}}}catch{return 
null}},{hoverTime:300})}function N5(i){return Rd(async e=>{let t=e.state.doc.toString();if(!t.trim())return[];try{let r=await i.diagnostics(t);return!r||!r.items?[]:r.items.map(n=>({from:n.from,to:Math.min(n.to,t.length),severity:n.severity,message:n.message}))}catch{return[]}},{delay:500})}function z0(i,e,t){let{onChange:r,onExecute:n,onExecuteAndMoveNext:s,onEscape:o,wsClient:l}=t,a=[Ks.define(wf).extension,Wu(),qu(),Vu(),Hu(),gf(),If(),cf(),Yf(),Ru(),Fu(),Ed(),z5,ff(L5),fe.tabSize.of(2),K.updateListener.of(c=>{c.docChanged&&r(c.state.doc.toString())}),an.of([{key:\"Ctrl-Enter\",run:()=>(n(),!0)},{key:\"Cmd-Enter\",run:()=>(n(),!0)},{key:\"Shift-Enter\",run:()=>(s(),!0)},{key:\"Escape\",run:()=>(o(),!0)},...Cd,...Zf])];return l&&a.push(Pf({override:[R5(l)],activateOnTyping:!0}),P5(l),N5(l)),new K({parent:i,doc:e,extensions:a})}var lt=class i{constructor(e,t,r){this.lexer=void 0,this.start=void 0,this.end=void 0,this.lexer=e,this.start=t,this.end=r}static range(e,t){return t?!e||!e.loc||!t.loc||e.loc.lexer!==t.loc.lexer?null:new i(e.loc.lexer,e.loc.start,t.loc.end):e&&e.loc}},mt=class i{constructor(e,t){this.text=void 0,this.loc=void 0,this.noexpand=void 0,this.treatAsRelax=void 0,this.text=e,this.loc=t}range(e,t){return new i(t,lt.range(this,e))}},I=class i{constructor(e,t){this.name=void 0,this.position=void 0,this.length=void 0,this.rawMessage=void 0;var r=\"KaTeX parse error: \"+e,n,s,o=t&&t.loc;if(o&&o.start<=o.end){var l=o.lexer.input;n=o.start,s=o.end,n===l.length?r+=\" at end of input: \":r+=\" at position \"+(n+1)+\": \";var a=l.slice(n,s).replace(/[^]/g,\"$&\\u0332\"),h;n>15?h=\"\\u2026\"+l.slice(n-15,n):h=l.slice(0,n);var c;s+15<l.length?c=l.slice(s,s+15)+\"\\u2026\":c=l.slice(s),r+=h+a+c}var u=new Error(r);return u.name=\"ParseError\",u.__proto__=i.prototype,u.position=n,n!=null&&s!=null&&(u.length=s-n),u.rawMessage=e,u}};I.prototype.__proto__=Error.prototype;var 
F5=/([A-Z])/g,oh=i=>i.replace(F5,\"-$1\").toLowerCase(),H5={\"&\":\"&amp;\",\">\":\"&gt;\",\"<\":\"&lt;\",'\"':\"&quot;\",\"'\":\"&#x27;\"},q5=/[&><\"']/g,Ke=i=>String(i).replace(q5,e=>H5[e]),Tn=i=>i.type===\"ordgroup\"||i.type===\"color\"?i.body.length===1?Tn(i.body[0]):i:i.type===\"font\"?Tn(i.body):i,W5=new Set([\"mathord\",\"textord\",\"atom\"]),dr=i=>W5.has(Tn(i).type),V5=i=>{var e=/^[\\x00-\\x20]*([^\\\\/#?]*?)(:|&#0*58|&#x0*3a|&colon)/i.exec(i);return e?e[2]!==\":\"||!/^[a-zA-Z][a-zA-Z0-9+\\-.]*$/.test(e[1])?null:e[1].toLowerCase():\"_relative\"},To={displayMode:{type:\"boolean\",description:\"Render math in display mode, which puts the math in display style (so \\\\int and \\\\sum are large, for example), and centers the math on the page on its own line.\",cli:\"-d, --display-mode\"},output:{type:{enum:[\"htmlAndMathml\",\"html\",\"mathml\"]},description:\"Determines the markup language of the output.\",cli:\"-F, --format <type>\"},leqno:{type:\"boolean\",description:\"Render display math in leqno style (left-justified tags).\"},fleqn:{type:\"boolean\",description:\"Render display math flush left.\"},throwOnError:{type:\"boolean\",default:!0,cli:\"-t, --no-throw-on-error\",cliDescription:\"Render errors (in the color given by --error-color) instead of throwing a ParseError exception when encountering an error.\"},errorColor:{type:\"string\",default:\"#cc0000\",cli:\"-c, --error-color <color>\",cliDescription:\"A color string given in the format 'rgb' or 'rrggbb' (no #). 
This option determines the color of errors rendered by the -t option.\",cliProcessor:i=>\"#\"+i},macros:{type:\"object\",cli:\"-m, --macro <def>\",cliDescription:\"Define custom macro of the form '\\\\foo:expansion' (use multiple -m arguments for multiple macros).\",cliDefault:[],cliProcessor:(i,e)=>(e.push(i),e)},minRuleThickness:{type:\"number\",description:\"Specifies a minimum thickness, in ems, for fraction lines, `\\\\sqrt` top lines, `{array}` vertical lines, `\\\\hline`, `\\\\hdashline`, `\\\\underline`, `\\\\overline`, and the borders of `\\\\fbox`, `\\\\boxed`, and `\\\\fcolorbox`.\",processor:i=>Math.max(0,i),cli:\"--min-rule-thickness <size>\",cliProcessor:parseFloat},colorIsTextColor:{type:\"boolean\",description:\"Makes \\\\color behave like LaTeX's 2-argument \\\\textcolor, instead of LaTeX's one-argument \\\\color mode change.\",cli:\"-b, --color-is-text-color\"},strict:{type:[{enum:[\"warn\",\"ignore\",\"error\"]},\"boolean\",\"function\"],description:\"Turn on strict / LaTeX faithfulness mode, which throws an error if the input uses features that are not supported by LaTeX.\",cli:\"-S, --strict\",cliDefault:!1},trust:{type:[\"boolean\",\"function\"],description:\"Trust the input, enabling all HTML features such as \\\\url.\",cli:\"-T, --trust\"},maxSize:{type:\"number\",default:1/0,description:\"If non-zero, all user-specified sizes, e.g. in \\\\rule{500em}{500em}, will be capped to maxSize ems. Otherwise, elements and spaces can be arbitrarily large\",processor:i=>Math.max(0,i),cli:\"-s, --max-size <n>\",cliProcessor:parseInt},maxExpand:{type:\"number\",default:1e3,description:\"Limit the number of macro expansions to the specified number, to prevent e.g. infinite macro loops. 
If set to Infinity, the macro expander will try to fully expand as in LaTeX.\",processor:i=>Math.max(0,i),cli:\"-e, --max-expand <n>\",cliProcessor:i=>i===\"Infinity\"?1/0:parseInt(i)},globalGroup:{type:\"boolean\",cli:!1}};function $5(i){if(i.default)return i.default;var e=i.type,t=Array.isArray(e)?e[0]:e;if(typeof t!=\"string\")return t.enum[0];switch(t){case\"boolean\":return!1;case\"string\":return\"\";case\"number\":return 0;case\"object\":return{}}}var Bn=class{constructor(e){this.displayMode=void 0,this.output=void 0,this.leqno=void 0,this.fleqn=void 0,this.throwOnError=void 0,this.errorColor=void 0,this.macros=void 0,this.minRuleThickness=void 0,this.colorIsTextColor=void 0,this.strict=void 0,this.trust=void 0,this.maxSize=void 0,this.maxExpand=void 0,this.globalGroup=void 0,e=e||{};for(var t in To)if(To.hasOwnProperty(t)){var r=To[t];this[t]=e[t]!==void 0?r.processor?r.processor(e[t]):e[t]:$5(r)}}reportNonstrict(e,t,r){var n=this.strict;if(typeof n==\"function\"&&(n=n(e,t,r)),!(!n||n===\"ignore\")){if(n===!0||n===\"error\")throw new I(\"LaTeX-incompatible input and strict mode is set to 'error': \"+(t+\" [\"+e+\"]\"),r);n===\"warn\"?typeof console<\"u\"&&console.warn(\"LaTeX-incompatible input and strict mode is set to 'warn': \"+(t+\" [\"+e+\"]\")):typeof console<\"u\"&&console.warn(\"LaTeX-incompatible input and strict mode is set to \"+(\"unrecognized '\"+n+\"': \"+t+\" [\"+e+\"]\"))}}useStrictBehavior(e,t,r){var n=this.strict;if(typeof n==\"function\")try{n=n(e,t,r)}catch{n=\"error\"}return!n||n===\"ignore\"?!1:n===!0||n===\"error\"?!0:n===\"warn\"?(typeof console<\"u\"&&console.warn(\"LaTeX-incompatible input and strict mode is set to 'warn': \"+(t+\" [\"+e+\"]\")),!1):(typeof console<\"u\"&&console.warn(\"LaTeX-incompatible input and strict mode is set to \"+(\"unrecognized '\"+n+\"': \"+t+\" [\"+e+\"]\")),!1)}isTrusted(e){if(e.url&&!e.protocol){var t=V5(e.url);if(t==null)return!1;e.protocol=t}var r=typeof 
this.trust==\"function\"?this.trust(e):this.trust;return!!r}},Xt=class{constructor(e,t,r){this.id=void 0,this.size=void 0,this.cramped=void 0,this.id=e,this.size=t,this.cramped=r}sup(){return _t[G5[this.id]]}sub(){return _t[U5[this.id]]}fracNum(){return _t[K5[this.id]]}fracDen(){return _t[j5[this.id]]}cramp(){return _t[Y5[this.id]]}text(){return _t[X5[this.id]]}isTight(){return this.size>=2}},lh=0,Bo=1,Bi=2,fr=3,En=4,Tt=5,Ei=6,et=7,_t=[new Xt(lh,0,!1),new Xt(Bo,0,!0),new Xt(Bi,1,!1),new Xt(fr,1,!0),new Xt(En,2,!1),new Xt(Tt,2,!0),new Xt(Ei,3,!1),new Xt(et,3,!0)],G5=[En,Tt,En,Tt,Ei,et,Ei,et],U5=[Tt,Tt,Tt,Tt,et,et,et,et],K5=[Bi,fr,En,Tt,Ei,et,Ei,et],j5=[fr,fr,Tt,Tt,et,et,et,et],Y5=[Bo,Bo,fr,fr,Tt,Tt,et,et],X5=[lh,Bo,Bi,fr,Bi,fr,Bi,fr],Z={DISPLAY:_t[lh],TEXT:_t[Bi],SCRIPT:_t[En],SCRIPTSCRIPT:_t[Ei]},j0=[{name:\"latin\",blocks:[[256,591],[768,879]]},{name:\"cyrillic\",blocks:[[1024,1279]]},{name:\"armenian\",blocks:[[1328,1423]]},{name:\"brahmic\",blocks:[[2304,4255]]},{name:\"georgian\",blocks:[[4256,4351]]},{name:\"cjk\",blocks:[[12288,12543],[19968,40879],[65280,65376]]},{name:\"hangul\",blocks:[[44032,55215]]}];function _5(i){for(var e=0;e<j0.length;e++)for(var t=j0[e],r=0;r<t.blocks.length;r++){var n=t.blocks[r];if(i>=n[0]&&i<=n[1])return t.name}return null}var Do=[];j0.forEach(i=>i.blocks.forEach(e=>Do.push(...e)));function gm(i){for(var e=0;e<Do.length;e+=2)if(i>=Do[e]&&i<=Do[e+1])return!0;return!1}var Di=80,J5=function(e,t){return\"M95,\"+(622+e+t)+`\nc-2.7,0,-7.17,-2.7,-13.5,-8c-5.8,-5.3,-9.5,-10,-9.5,-14\nc0,-2,0.3,-3.3,1,-4c1.3,-2.7,23.83,-20.7,67.5,-54\nc44.2,-33.3,65.8,-50.3,66.5,-51c1.3,-1.3,3,-2,5,-2c4.7,0,8.7,3.3,12,10\ns173,378,173,378c0.7,0,35.3,-71,104,-213c68.7,-142,137.5,-285,206.5,-429\nc69,-144,104.5,-217.7,106.5,-221\nl`+e/2.075+\" -\"+e+`\nc5.3,-9.3,12,-14,20,-14\nH400000v`+(40+e)+`H845.2724\ns-225.272,467,-225.272,467s-235,486,-235,486c-2.7,4.7,-9,7,-19,7\nc-6,0,-10,-1,-12,-3s-194,-422,-194,-422s-65,47,-65,47z\nM`+(834+e)+\" 
\"+t+\"h400000v\"+(40+e)+\"h-400000z\"},Z5=function(e,t){return\"M263,\"+(601+e+t)+`c0.7,0,18,39.7,52,119\nc34,79.3,68.167,158.7,102.5,238c34.3,79.3,51.8,119.3,52.5,120\nc340,-704.7,510.7,-1060.3,512,-1067\nl`+e/2.084+\" -\"+e+`\nc4.7,-7.3,11,-11,19,-11\nH40000v`+(40+e)+`H1012.3\ns-271.3,567,-271.3,567c-38.7,80.7,-84,175,-136,283c-52,108,-89.167,185.3,-111.5,232\nc-22.3,46.7,-33.8,70.3,-34.5,71c-4.7,4.7,-12.3,7,-23,7s-12,-1,-12,-1\ns-109,-253,-109,-253c-72.7,-168,-109.3,-252,-110,-252c-10.7,8,-22,16.7,-34,26\nc-22,17.3,-33.3,26,-34,26s-26,-26,-26,-26s76,-59,76,-59s76,-60,76,-60z\nM`+(1001+e)+\" \"+t+\"h400000v\"+(40+e)+\"h-400000z\"},Q5=function(e,t){return\"M983 \"+(10+e+t)+`\nl`+e/3.13+\" -\"+e+`\nc4,-6.7,10,-10,18,-10 H400000v`+(40+e)+`\nH1013.1s-83.4,268,-264.1,840c-180.7,572,-277,876.3,-289,913c-4.7,4.7,-12.7,7,-24,7\ns-12,0,-12,0c-1.3,-3.3,-3.7,-11.7,-7,-25c-35.3,-125.3,-106.7,-373.3,-214,-744\nc-10,12,-21,25,-33,39s-32,39,-32,39c-6,-5.3,-15,-14,-27,-26s25,-30,25,-30\nc26.7,-32.7,52,-63,76,-91s52,-60,52,-60s208,722,208,722\nc56,-175.3,126.3,-397.3,211,-666c84.7,-268.7,153.8,-488.2,207.5,-658.5\nc53.7,-170.3,84.5,-266.8,92.5,-289.5z\nM`+(1001+e)+\" \"+t+\"h400000v\"+(40+e)+\"h-400000z\"},e3=function(e,t){return\"M424,\"+(2398+e+t)+`\nc-1.3,-0.7,-38.5,-172,-111.5,-514c-73,-342,-109.8,-513.3,-110.5,-514\nc0,-2,-10.7,14.3,-32,49c-4.7,7.3,-9.8,15.7,-15.5,25c-5.7,9.3,-9.8,16,-12.5,20\ns-5,7,-5,7c-4,-3.3,-8.3,-7.7,-13,-13s-13,-13,-13,-13s76,-122,76,-122s77,-121,77,-121\ns209,968,209,968c0,-2,84.7,-361.7,254,-1079c169.3,-717.3,254.7,-1077.7,256,-1081\nl`+e/4.223+\" -\"+e+`c4,-6.7,10,-10,18,-10 H400000\nv`+(40+e)+`H1014.6\ns-87.3,378.7,-272.6,1166c-185.3,787.3,-279.3,1182.3,-282,1185\nc-2,6,-10,9,-24,9\nc-8,0,-12,-0.7,-12,-2z M`+(1001+e)+\" \"+t+`\nh400000v`+(40+e)+\"h-400000z\"},t3=function(e,t){return\"M473,\"+(2713+e+t)+`\nc339.3,-1799.3,509.3,-2700,510,-2702 l`+e/5.298+\" -\"+e+`\nc3.3,-7.3,9.3,-11,18,-11 
H400000v`+(40+e)+`H1017.7\ns-90.5,478,-276.2,1466c-185.7,988,-279.5,1483,-281.5,1485c-2,6,-10,9,-24,9\nc-8,0,-12,-0.7,-12,-2c0,-1.3,-5.3,-32,-16,-92c-50.7,-293.3,-119.7,-693.3,-207,-1200\nc0,-1.3,-5.3,8.7,-16,30c-10.7,21.3,-21.3,42.7,-32,64s-16,33,-16,33s-26,-26,-26,-26\ns76,-153,76,-153s77,-151,77,-151c0.7,0.7,35.7,202,105,604c67.3,400.7,102,602.7,104,\n606zM`+(1001+e)+\" \"+t+\"h400000v\"+(40+e)+\"H1017.7z\"},r3=function(e){var t=e/2;return\"M400000 \"+e+\" H0 L\"+t+\" 0 l65 45 L145 \"+(e-80)+\" H400000z\"},i3=function(e,t,r){var n=r-54-t-e;return\"M702 \"+(e+t)+\"H400000\"+(40+e)+`\nH742v`+n+`l-4 4-4 4c-.667.7 -2 1.5-4 2.5s-4.167 1.833-6.5 2.5-5.5 1-9.5 1\nh-12l-28-84c-16.667-52-96.667 -294.333-240-727l-212 -643 -85 170\nc-4-3.333-8.333-7.667-13 -13l-13-13l77-155 77-156c66 199.333 139 419.667\n219 661 l218 661zM702 `+t+\"H400000v\"+(40+e)+\"H742z\"},n3=function(e,t,r){t=1e3*t;var n=\"\";switch(e){case\"sqrtMain\":n=J5(t,Di);break;case\"sqrtSize1\":n=Z5(t,Di);break;case\"sqrtSize2\":n=Q5(t,Di);break;case\"sqrtSize3\":n=e3(t,Di);break;case\"sqrtSize4\":n=t3(t,Di);break;case\"sqrtTall\":n=i3(t,Di,r)}return n},s3=function(e,t){switch(e){case\"\\u239C\":return\"M291 0 H417 V\"+t+\" H291z M291 0 H417 V\"+t+\" H291z\";case\"\\u2223\":return\"M145 0 H188 V\"+t+\" H145z M145 0 H188 V\"+t+\" H145z\";case\"\\u2225\":return\"M145 0 H188 V\"+t+\" H145z M145 0 H188 V\"+t+\" H145z\"+(\"M367 0 H410 V\"+t+\" H367z M367 0 H410 V\"+t+\" H367z\");case\"\\u239F\":return\"M457 0 H583 V\"+t+\" H457z M457 0 H583 V\"+t+\" H457z\";case\"\\u23A2\":return\"M319 0 H403 V\"+t+\" H319z M319 0 H403 V\"+t+\" H319z\";case\"\\u23A5\":return\"M263 0 H347 V\"+t+\" H263z M263 0 H347 V\"+t+\" H263z\";case\"\\u23AA\":return\"M384 0 H504 V\"+t+\" H384z M384 0 H504 V\"+t+\" H384z\";case\"\\u23D0\":return\"M312 0 H355 V\"+t+\" H312z M312 0 H355 V\"+t+\" H312z\";case\"\\u2016\":return\"M257 0 H300 V\"+t+\" H257z M257 0 H300 V\"+t+\" H257z\"+(\"M478 0 H521 V\"+t+\" H478z M478 0 H521 V\"+t+\" 
H478z\");default:return\"\"}},Hd={doubleleftarrow:`M262 157\nl10-10c34-36 62.7-77 86-123 3.3-8 5-13.3 5-16 0-5.3-6.7-8-20-8-7.3\n 0-12.2.5-14.5 1.5-2.3 1-4.8 4.5-7.5 10.5-49.3 97.3-121.7 169.3-217 216-28\n 14-57.3 25-88 33-6.7 2-11 3.8-13 5.5-2 1.7-3 4.2-3 7.5s1 5.8 3 7.5\nc2 1.7 6.3 3.5 13 5.5 68 17.3 128.2 47.8 180.5 91.5 52.3 43.7 93.8 96.2 124.5\n 157.5 9.3 8 15.3 12.3 18 13h6c12-.7 18-4 18-10 0-2-1.7-7-5-15-23.3-46-52-87\n-86-123l-10-10h399738v-40H218c328 0 0 0 0 0l-10-8c-26.7-20-65.7-43-117-69 2.7\n-2 6-3.7 10-5 36.7-16 72.3-37.3 107-64l10-8h399782v-40z\nm8 0v40h399730v-40zm0 194v40h399730v-40z`,doublerightarrow:`M399738 392l\n-10 10c-34 36-62.7 77-86 123-3.3 8-5 13.3-5 16 0 5.3 6.7 8 20 8 7.3 0 12.2-.5\n 14.5-1.5 2.3-1 4.8-4.5 7.5-10.5 49.3-97.3 121.7-169.3 217-216 28-14 57.3-25 88\n-33 6.7-2 11-3.8 13-5.5 2-1.7 3-4.2 3-7.5s-1-5.8-3-7.5c-2-1.7-6.3-3.5-13-5.5-68\n-17.3-128.2-47.8-180.5-91.5-52.3-43.7-93.8-96.2-124.5-157.5-9.3-8-15.3-12.3-18\n-13h-6c-12 .7-18 4-18 10 0 2 1.7 7 5 15 23.3 46 52 87 86 123l10 10H0v40h399782\nc-328 0 0 0 0 0l10 8c26.7 20 65.7 43 117 69-2.7 2-6 3.7-10 5-36.7 16-72.3 37.3\n-107 64l-10 8H0v40zM0 157v40h399730v-40zm0 194v40h399730v-40z`,leftarrow:`M400000 241H110l3-3c68.7-52.7 113.7-120\n 135-202 4-14.7 6-23 6-25 0-7.3-7-11-21-11-8 0-13.2.8-15.5 2.5-2.3 1.7-4.2 5.8\n-5.5 12.5-1.3 4.7-2.7 10.3-4 17-12 48.7-34.8 92-68.5 130S65.3 228.3 18 247\nc-10 4-16 7.7-18 11 0 8.7 6 14.3 18 17 47.3 18.7 87.8 47 121.5 85S196 441.3 208\n 490c.7 2 1.3 5 2 9s1.2 6.7 1.5 8c.3 1.3 1 3.3 2 6s2.2 4.5 3.5 5.5c1.3 1 3.3\n 1.8 6 2.5s6 1 10 1c14 0 21-3.7 21-11 0-2-2-10.3-6-25-20-79.3-65-146.7-135-202\n l-3-3h399890zM100 241v40h399900v-40z`,leftbrace:`M6 548l-6-6v-35l6-11c56-104 135.3-181.3 238-232 57.3-28.7 117\n-45 179-50h399577v120H403c-43.3 7-81 15-113 26-100.7 33-179.7 91-237 174-2.7\n 5-6 9-10 13-.7 1-7.3 1-20 1H6z`,leftbraceunder:`M0 6l6-6h17c12.688 0 19.313.3 20 1 4 4 7.313 8.3 10 13\n 35.313 51.3 80.813 93.8 136.5 127.5 55.688 33.7 117.188 55.8 184.5 
66.5.688\n 0 2 .3 4 1 18.688 2.7 76 4.3 172 5h399450v120H429l-6-1c-124.688-8-235-61.7\n-331-161C60.687 138.7 32.312 99.3 7 54L0 41V6z`,leftgroup:`M400000 80\nH435C64 80 168.3 229.4 21 260c-5.9 1.2-18 0-18 0-2 0-3-1-3-3v-38C76 61 257 0\n 435 0h399565z`,leftgroupunder:`M400000 262\nH435C64 262 168.3 112.6 21 82c-5.9-1.2-18 0-18 0-2 0-3 1-3 3v38c76 158 257 219\n 435 219h399565z`,leftharpoon:`M0 267c.7 5.3 3 10 7 14h399993v-40H93c3.3\n-3.3 10.2-9.5 20.5-18.5s17.8-15.8 22.5-20.5c50.7-52 88-110.3 112-175 4-11.3 5\n-18.3 3-21-1.3-4-7.3-6-18-6-8 0-13 .7-15 2s-4.7 6.7-8 16c-42 98.7-107.3 174.7\n-196 228-6.7 4.7-10.7 8-12 10-1.3 2-2 5.7-2 11zm100-26v40h399900v-40z`,leftharpoonplus:`M0 267c.7 5.3 3 10 7 14h399993v-40H93c3.3-3.3 10.2-9.5\n 20.5-18.5s17.8-15.8 22.5-20.5c50.7-52 88-110.3 112-175 4-11.3 5-18.3 3-21-1.3\n-4-7.3-6-18-6-8 0-13 .7-15 2s-4.7 6.7-8 16c-42 98.7-107.3 174.7-196 228-6.7 4.7\n-10.7 8-12 10-1.3 2-2 5.7-2 11zm100-26v40h399900v-40zM0 435v40h400000v-40z\nm0 0v40h400000v-40z`,leftharpoondown:`M7 241c-4 4-6.333 8.667-7 14 0 5.333.667 9 2 11s5.333\n 5.333 12 10c90.667 54 156 130 196 228 3.333 10.667 6.333 16.333 9 17 2 .667 5\n 1 9 1h5c10.667 0 16.667-2 18-6 2-2.667 1-9.667-3-21-32-87.333-82.667-157.667\n-152-211l-3-3h399907v-40zM93 281 H400000 v-40L7 241z`,leftharpoondownplus:`M7 435c-4 4-6.3 8.7-7 14 0 5.3.7 9 2 11s5.3 5.3 12\n 10c90.7 54 156 130 196 228 3.3 10.7 6.3 16.3 9 17 2 .7 5 1 9 1h5c10.7 0 16.7\n-2 18-6 2-2.7 1-9.7-3-21-32-87.3-82.7-157.7-152-211l-3-3h399907v-40H7zm93 0\nv40h399900v-40zM0 241v40h399900v-40zm0 0v40h399900v-40z`,lefthook:`M400000 281 H103s-33-11.2-61-33.5S0 197.3 0 164s14.2-61.2 42.5\n-83.5C70.8 58.2 104 47 142 47 c16.7 0 25 6.7 25 20 0 12-8.7 18.7-26 20-40 3.3\n-68.7 15.7-86 37-10 12-15 25.3-15 40 0 22.7 9.8 40.7 29.5 54 19.7 13.3 43.5 21\n 71.5 23h399859zM103 281v-40h399897v40z`,leftlinesegment:`M40 281 V428 H0 V94 H40 V241 H400000 v40z\nM40 281 V428 H0 V94 H40 V241 H400000 v40z`,leftmapsto:`M40 281 V448H0V74H40V241H400000v40z\nM40 281 
V448H0V74H40V241H400000v40z`,leftToFrom:`M0 147h400000v40H0zm0 214c68 40 115.7 95.7 143 167h22c15.3 0 23\n-.3 23-1 0-1.3-5.3-13.7-16-37-18-35.3-41.3-69-70-101l-7-8h399905v-40H95l7-8\nc28.7-32 52-65.7 70-101 10.7-23.3 16-35.7 16-37 0-.7-7.7-1-23-1h-22C115.7 265.3\n 68 321 0 361zm0-174v-40h399900v40zm100 154v40h399900v-40z`,longequal:`M0 50 h400000 v40H0z m0 194h40000v40H0z\nM0 50 h400000 v40H0z m0 194h40000v40H0z`,midbrace:`M200428 334\nc-100.7-8.3-195.3-44-280-108-55.3-42-101.7-93-139-153l-9-14c-2.7 4-5.7 8.7-9 14\n-53.3 86.7-123.7 153-211 199-66.7 36-137.3 56.3-212 62H0V214h199568c178.3-11.7\n 311.7-78.3 403-201 6-8 9.7-12 11-12 .7-.7 6.7-1 18-1s17.3.3 18 1c1.3 0 5 4 11\n 12 44.7 59.3 101.3 106.3 170 141s145.3 54.3 229 60h199572v120z`,midbraceunder:`M199572 214\nc100.7 8.3 195.3 44 280 108 55.3 42 101.7 93 139 153l9 14c2.7-4 5.7-8.7 9-14\n 53.3-86.7 123.7-153 211-199 66.7-36 137.3-56.3 212-62h199568v120H200432c-178.3\n 11.7-311.7 78.3-403 201-6 8-9.7 12-11 12-.7.7-6.7 1-18 1s-17.3-.3-18-1c-1.3 0\n-5-4-11-12-44.7-59.3-101.3-106.3-170-141s-145.3-54.3-229-60H0V214z`,oiintSize1:`M512.6 71.6c272.6 0 320.3 106.8 320.3 178.2 0 70.8-47.7 177.6\n-320.3 177.6S193.1 320.6 193.1 249.8c0-71.4 46.9-178.2 319.5-178.2z\nm368.1 178.2c0-86.4-60.9-215.4-368.1-215.4-306.4 0-367.3 129-367.3 215.4 0 85.8\n60.9 214.8 367.3 214.8 307.2 0 368.1-129 368.1-214.8z`,oiintSize2:`M757.8 100.1c384.7 0 451.1 137.6 451.1 230 0 91.3-66.4 228.8\n-451.1 228.8-386.3 0-452.7-137.5-452.7-228.8 0-92.4 66.4-230 452.7-230z\nm502.4 230c0-111.2-82.4-277.2-502.4-277.2s-504 166-504 277.2\nc0 110 84 276 504 276s502.4-166 502.4-276z`,oiiintSize1:`M681.4 71.6c408.9 0 480.5 106.8 480.5 178.2 0 70.8-71.6 177.6\n-480.5 177.6S202.1 320.6 202.1 249.8c0-71.4 70.5-178.2 479.3-178.2z\nm525.8 178.2c0-86.4-86.8-215.4-525.7-215.4-437.9 0-524.7 129-524.7 215.4 0\n85.8 86.8 214.8 524.7 214.8 438.9 0 525.7-129 525.7-214.8z`,oiiintSize2:`M1021.2 53c603.6 0 707.8 165.8 707.8 277.2 0 110-104.2 275.8\n-707.8 275.8-606 
0-710.2-165.8-710.2-275.8C311 218.8 415.2 53 1021.2 53z\nm770.4 277.1c0-131.2-126.4-327.6-770.5-327.6S248.4 198.9 248.4 330.1\nc0 130 128.8 326.4 772.7 326.4s770.5-196.4 770.5-326.4z`,rightarrow:`M0 241v40h399891c-47.3 35.3-84 78-110 128\n-16.7 32-27.7 63.7-33 95 0 1.3-.2 2.7-.5 4-.3 1.3-.5 2.3-.5 3 0 7.3 6.7 11 20\n 11 8 0 13.2-.8 15.5-2.5 2.3-1.7 4.2-5.5 5.5-11.5 2-13.3 5.7-27 11-41 14.7-44.7\n 39-84.5 73-119.5s73.7-60.2 119-75.5c6-2 9-5.7 9-11s-3-9-9-11c-45.3-15.3-85\n-40.5-119-75.5s-58.3-74.8-73-119.5c-4.7-14-8.3-27.3-11-40-1.3-6.7-3.2-10.8-5.5\n-12.5-2.3-1.7-7.5-2.5-15.5-2.5-14 0-21 3.7-21 11 0 2 2 10.3 6 25 20.7 83.3 67\n 151.7 139 205zm0 0v40h399900v-40z`,rightbrace:`M400000 542l\n-6 6h-17c-12.7 0-19.3-.3-20-1-4-4-7.3-8.3-10-13-35.3-51.3-80.8-93.8-136.5-127.5\ns-117.2-55.8-184.5-66.5c-.7 0-2-.3-4-1-18.7-2.7-76-4.3-172-5H0V214h399571l6 1\nc124.7 8 235 61.7 331 161 31.3 33.3 59.7 72.7 85 118l7 13v35z`,rightbraceunder:`M399994 0l6 6v35l-6 11c-56 104-135.3 181.3-238 232-57.3\n 28.7-117 45-179 50H-300V214h399897c43.3-7 81-15 113-26 100.7-33 179.7-91 237\n-174 2.7-5 6-9 10-13 .7-1 7.3-1 20-1h17z`,rightgroup:`M0 80h399565c371 0 266.7 149.4 414 180 5.9 1.2 18 0 18 0 2 0\n 3-1 3-3v-38c-76-158-257-219-435-219H0z`,rightgroupunder:`M0 262h399565c371 0 266.7-149.4 414-180 5.9-1.2 18 0 18\n 0 2 0 3 1 3 3v38c-76 158-257 219-435 219H0z`,rightharpoon:`M0 241v40h399993c4.7-4.7 7-9.3 7-14 0-9.3\n-3.7-15.3-11-18-92.7-56.7-159-133.7-199-231-3.3-9.3-6-14.7-8-16-2-1.3-7-2-15-2\n-10.7 0-16.7 2-18 6-2 2.7-1 9.7 3 21 15.3 42 36.7 81.8 64 119.5 27.3 37.7 58\n 69.2 92 94.5zm0 0v40h399900v-40z`,rightharpoonplus:`M0 241v40h399993c4.7-4.7 7-9.3 7-14 0-9.3-3.7-15.3-11\n-18-92.7-56.7-159-133.7-199-231-3.3-9.3-6-14.7-8-16-2-1.3-7-2-15-2-10.7 0-16.7\n 2-18 6-2 2.7-1 9.7 3 21 15.3 42 36.7 81.8 64 119.5 27.3 37.7 58 69.2 92 94.5z\nm0 0v40h399900v-40z m100 194v40h399900v-40zm0 0v40h399900v-40z`,rightharpoondown:`M399747 511c0 7.3 6.7 11 20 11 8 0 13-.8 15-2.5s4.7-6.8\n 8-15.5c40-94 99.3-166.3 
178-217 13.3-8 20.3-12.3 21-13 5.3-3.3 8.5-5.8 9.5\n-7.5 1-1.7 1.5-5.2 1.5-10.5s-2.3-10.3-7-15H0v40h399908c-34 25.3-64.7 57-92 95\n-27.3 38-48.7 77.7-64 119-3.3 8.7-5 14-5 16zM0 241v40h399900v-40z`,rightharpoondownplus:`M399747 705c0 7.3 6.7 11 20 11 8 0 13-.8\n 15-2.5s4.7-6.8 8-15.5c40-94 99.3-166.3 178-217 13.3-8 20.3-12.3 21-13 5.3-3.3\n 8.5-5.8 9.5-7.5 1-1.7 1.5-5.2 1.5-10.5s-2.3-10.3-7-15H0v40h399908c-34 25.3\n-64.7 57-92 95-27.3 38-48.7 77.7-64 119-3.3 8.7-5 14-5 16zM0 435v40h399900v-40z\nm0-194v40h400000v-40zm0 0v40h400000v-40z`,righthook:`M399859 241c-764 0 0 0 0 0 40-3.3 68.7-15.7 86-37 10-12 15-25.3\n 15-40 0-22.7-9.8-40.7-29.5-54-19.7-13.3-43.5-21-71.5-23-17.3-1.3-26-8-26-20 0\n-13.3 8.7-20 26-20 38 0 71 11.2 99 33.5 0 0 7 5.6 21 16.7 14 11.2 21 33.5 21\n 66.8s-14 61.2-42 83.5c-28 22.3-61 33.5-99 33.5L0 241z M0 281v-40h399859v40z`,rightlinesegment:`M399960 241 V94 h40 V428 h-40 V281 H0 v-40z\nM399960 241 V94 h40 V428 h-40 V281 H0 v-40z`,rightToFrom:`M400000 167c-70.7-42-118-97.7-142-167h-23c-15.3 0-23 .3-23\n 1 0 1.3 5.3 13.7 16 37 18 35.3 41.3 69 70 101l7 8H0v40h399905l-7 8c-28.7 32\n-52 65.7-70 101-10.7 23.3-16 35.7-16 37 0 .7 7.7 1 23 1h23c24-69.3 71.3-125 142\n-167z M100 147v40h399900v-40zM0 341v40h399900v-40z`,twoheadleftarrow:`M0 167c68 40\n 115.7 95.7 143 167h22c15.3 0 23-.3 23-1 0-1.3-5.3-13.7-16-37-18-35.3-41.3-69\n-70-101l-7-8h125l9 7c50.7 39.3 85 86 103 140h46c0-4.7-6.3-18.7-19-42-18-35.3\n-40-67.3-66-96l-9-9h399716v-40H284l9-9c26-28.7 48-60.7 66-96 12.7-23.333 19\n-37.333 19-42h-46c-18 54-52.3 100.7-103 140l-9 7H95l7-8c28.7-32 52-65.7 70-101\n 10.7-23.333 16-35.7 16-37 0-.7-7.7-1-23-1h-22C115.7 71.3 68 127 0 167z`,twoheadrightarrow:`M400000 167\nc-68-40-115.7-95.7-143-167h-22c-15.3 0-23 .3-23 1 0 1.3 5.3 13.7 16 37 18 35.3\n 41.3 69 70 101l7 8h-125l-9-7c-50.7-39.3-85-86-103-140h-46c0 4.7 6.3 18.7 19 42\n 18 35.3 40 67.3 66 96l9 9H0v40h399716l-9 9c-26 28.7-48 60.7-66 96-12.7 23.333\n-19 37.333-19 42h46c18-54 52.3-100.7 103-140l9-7h125l-7 
8c-28.7 32-52 65.7-70\n 101-10.7 23.333-16 35.7-16 37 0 .7 7.7 1 23 1h22c27.3-71.3 75-127 143-167z`,tilde1:`M200 55.538c-77 0-168 73.953-177 73.953-3 0-7\n-2.175-9-5.437L2 97c-1-2-2-4-2-6 0-4 2-7 5-9l20-12C116 12 171 0 207 0c86 0\n 114 68 191 68 78 0 168-68 177-68 4 0 7 2 9 5l12 19c1 2.175 2 4.35 2 6.525 0\n 4.35-2 7.613-5 9.788l-19 13.05c-92 63.077-116.937 75.308-183 76.128\n-68.267.847-113-73.952-191-73.952z`,tilde2:`M344 55.266c-142 0-300.638 81.316-311.5 86.418\n-8.01 3.762-22.5 10.91-23.5 5.562L1 120c-1-2-1-3-1-4 0-5 3-9 8-10l18.4-9C160.9\n 31.9 283 0 358 0c148 0 188 122 331 122s314-97 326-97c4 0 8 2 10 7l7 21.114\nc1 2.14 1 3.21 1 4.28 0 5.347-3 9.626-7 10.696l-22.3 12.622C852.6 158.372 751\n 181.476 676 181.476c-149 0-189-126.21-332-126.21z`,tilde3:`M786 59C457 59 32 175.242 13 175.242c-6 0-10-3.457\n-11-10.37L.15 138c-1-7 3-12 10-13l19.2-6.4C378.4 40.7 634.3 0 804.3 0c337 0\n 411.8 157 746.8 157 328 0 754-112 773-112 5 0 10 3 11 9l1 14.075c1 8.066-.697\n 16.595-6.697 17.492l-21.052 7.31c-367.9 98.146-609.15 122.696-778.15 122.696\n -338 0-409-156.573-744-156.573z`,tilde4:`M786 58C457 58 32 177.487 13 177.487c-6 0-10-3.345\n-11-10.035L.15 143c-1-7 3-12 10-13l22-6.7C381.2 35 637.15 0 807.15 0c337 0 409\n 177 744 177 328 0 754-127 773-127 5 0 10 3 11 9l1 14.794c1 7.805-3 13.38-9\n 14.495l-20.7 5.574c-366.85 99.79-607.3 139.372-776.3 139.372-338 0-409\n -175.236-744-175.236z`,vec:`M377 20c0-5.333 1.833-10 5.5-14S391 0 397 0c4.667 0 8.667 1.667 12 5\n3.333 2.667 6.667 9 10 19 6.667 24.667 20.333 43.667 41 57 7.333 4.667 11\n10.667 11 18 0 6-1 10-3 12s-6.667 5-14 9c-28.667 14.667-53.667 35.667-75 63\n-1.333 1.333-3.167 3.5-5.5 6.5s-4 4.833-5 5.5c-1 .667-2.5 1.333-4.5 2s-4.333 1\n-7 1c-4.667 0-9.167-1.833-13.5-5.5S337 184 337 178c0-12.667 15.667-32.333 47-59\nH213l-171-1c-8.667-6-13-12.333-13-19 0-4.667 4.333-11.333 13-20h359\nc-16-25.333-24-45-24-59z`,widehat1:`M529 0h5l519 115c5 1 9 5 9 10 0 1-1 2-1 3l-4 22\nc-1 5-5 9-11 9h-2L532 67 19 159h-2c-5 
0-9-4-11-9l-5-22c-1-6 2-12 8-13z`,widehat2:`M1181 0h2l1171 176c6 0 10 5 10 11l-2 23c-1 6-5 10\n-11 10h-1L1182 67 15 220h-1c-6 0-10-4-11-10l-2-23c-1-6 4-11 10-11z`,widehat3:`M1181 0h2l1171 236c6 0 10 5 10 11l-2 23c-1 6-5 10\n-11 10h-1L1182 67 15 280h-1c-6 0-10-4-11-10l-2-23c-1-6 4-11 10-11z`,widehat4:`M1181 0h2l1171 296c6 0 10 5 10 11l-2 23c-1 6-5 10\n-11 10h-1L1182 67 15 340h-1c-6 0-10-4-11-10l-2-23c-1-6 4-11 10-11z`,widecheck1:`M529,159h5l519,-115c5,-1,9,-5,9,-10c0,-1,-1,-2,-1,-3l-4,-22c-1,\n-5,-5,-9,-11,-9h-2l-512,92l-513,-92h-2c-5,0,-9,4,-11,9l-5,22c-1,6,2,12,8,13z`,widecheck2:`M1181,220h2l1171,-176c6,0,10,-5,10,-11l-2,-23c-1,-6,-5,-10,\n-11,-10h-1l-1168,153l-1167,-153h-1c-6,0,-10,4,-11,10l-2,23c-1,6,4,11,10,11z`,widecheck3:`M1181,280h2l1171,-236c6,0,10,-5,10,-11l-2,-23c-1,-6,-5,-10,\n-11,-10h-1l-1168,213l-1167,-213h-1c-6,0,-10,4,-11,10l-2,23c-1,6,4,11,10,11z`,widecheck4:`M1181,340h2l1171,-296c6,0,10,-5,10,-11l-2,-23c-1,-6,-5,-10,\n-11,-10h-1l-1168,273l-1167,-273h-1c-6,0,-10,4,-11,10l-2,23c-1,6,4,11,10,11z`,baraboveleftarrow:`M400000 620h-399890l3 -3c68.7 -52.7 113.7 -120 135 -202\nc4 -14.7 6 -23 6 -25c0 -7.3 -7 -11 -21 -11c-8 0 -13.2 0.8 -15.5 2.5\nc-2.3 1.7 -4.2 5.8 -5.5 12.5c-1.3 4.7 -2.7 10.3 -4 17c-12 48.7 -34.8 92 -68.5 130\ns-74.2 66.3 -121.5 85c-10 4 -16 7.7 -18 11c0 8.7 6 14.3 18 17c47.3 18.7 87.8 47\n121.5 85s56.5 81.3 68.5 130c0.7 2 1.3 5 2 9s1.2 6.7 1.5 8c0.3 1.3 1 3.3 2 6\ns2.2 4.5 3.5 5.5c1.3 1 3.3 1.8 6 2.5s6 1 10 1c14 0 21 -3.7 21 -11\nc0 -2 -2 -10.3 -6 -25c-20 -79.3 -65 -146.7 -135 -202l-3 -3h399890z\nM100 620v40h399900v-40z M0 241v40h399900v-40zM0 241v40h399900v-40z`,rightarrowabovebar:`M0 241v40h399891c-47.3 35.3-84 78-110 128-16.7 32\n-27.7 63.7-33 95 0 1.3-.2 2.7-.5 4-.3 1.3-.5 2.3-.5 3 0 7.3 6.7 11 20 11 8 0\n13.2-.8 15.5-2.5 2.3-1.7 4.2-5.5 5.5-11.5 2-13.3 5.7-27 11-41 14.7-44.7 39\n-84.5 73-119.5s73.7-60.2 119-75.5c6-2 9-5.7 
9-11s-3-9-9-11c-45.3-15.3-85-40.5\n-119-75.5s-58.3-74.8-73-119.5c-4.7-14-8.3-27.3-11-40-1.3-6.7-3.2-10.8-5.5\n-12.5-2.3-1.7-7.5-2.5-15.5-2.5-14 0-21 3.7-21 11 0 2 2 10.3 6 25 20.7 83.3 67\n151.7 139 205zm96 379h399894v40H0zm0 0h399904v40H0z`,baraboveshortleftharpoon:`M507,435c-4,4,-6.3,8.7,-7,14c0,5.3,0.7,9,2,11\nc1.3,2,5.3,5.3,12,10c90.7,54,156,130,196,228c3.3,10.7,6.3,16.3,9,17\nc2,0.7,5,1,9,1c0,0,5,0,5,0c10.7,0,16.7,-2,18,-6c2,-2.7,1,-9.7,-3,-21\nc-32,-87.3,-82.7,-157.7,-152,-211c0,0,-3,-3,-3,-3l399351,0l0,-40\nc-398570,0,-399437,0,-399437,0z M593 435 v40 H399500 v-40z\nM0 281 v-40 H399908 v40z M0 281 v-40 H399908 v40z`,rightharpoonaboveshortbar:`M0,241 l0,40c399126,0,399993,0,399993,0\nc4.7,-4.7,7,-9.3,7,-14c0,-9.3,-3.7,-15.3,-11,-18c-92.7,-56.7,-159,-133.7,-199,\n-231c-3.3,-9.3,-6,-14.7,-8,-16c-2,-1.3,-7,-2,-15,-2c-10.7,0,-16.7,2,-18,6\nc-2,2.7,-1,9.7,3,21c15.3,42,36.7,81.8,64,119.5c27.3,37.7,58,69.2,92,94.5z\nM0 241 v40 H399908 v-40z M0 475 v-40 H399500 v40z M0 475 v-40 H399500 v40z`,shortbaraboveleftharpoon:`M7,435c-4,4,-6.3,8.7,-7,14c0,5.3,0.7,9,2,11\nc1.3,2,5.3,5.3,12,10c90.7,54,156,130,196,228c3.3,10.7,6.3,16.3,9,17c2,0.7,5,1,9,\n1c0,0,5,0,5,0c10.7,0,16.7,-2,18,-6c2,-2.7,1,-9.7,-3,-21c-32,-87.3,-82.7,-157.7,\n-152,-211c0,0,-3,-3,-3,-3l399907,0l0,-40c-399126,0,-399993,0,-399993,0z\nM93 435 v40 H400000 v-40z M500 241 v40 H400000 v-40z M500 241 v40 H400000 v-40z`,shortrightharpoonabovebar:`M53,241l0,40c398570,0,399437,0,399437,0\nc4.7,-4.7,7,-9.3,7,-14c0,-9.3,-3.7,-15.3,-11,-18c-92.7,-56.7,-159,-133.7,-199,\n-231c-3.3,-9.3,-6,-14.7,-8,-16c-2,-1.3,-7,-2,-15,-2c-10.7,0,-16.7,2,-18,6\nc-2,2.7,-1,9.7,3,21c15.3,42,36.7,81.8,64,119.5c27.3,37.7,58,69.2,92,94.5z\nM500 241 v40 H399408 v-40z M500 435 v40 H400000 v-40z`},o3=function(e,t){switch(e){case\"lbrack\":return\"M403 1759 V84 H666 V0 H319 V1759 v\"+t+` v1759 h347 v-84\nH403z M403 1759 V0 H319 V1759 v`+t+\" v1759 h84z\";case\"rbrack\":return\"M347 1759 V0 H0 V84 H263 V1759 v\"+t+` v1759 H0 v84 H347z\nM347 1759 
V0 H263 V1759 v`+t+\" v1759 h84z\";case\"vert\":return\"M145 15 v585 v\"+t+` v585 c2.667,10,9.667,15,21,15\nc10,0,16.667,-5,20,-15 v-585 v`+-t+` v-585 c-2.667,-10,-9.667,-15,-21,-15\nc-10,0,-16.667,5,-20,15z M188 15 H145 v585 v`+t+\" v585 h43z\";case\"doublevert\":return\"M145 15 v585 v\"+t+` v585 c2.667,10,9.667,15,21,15\nc10,0,16.667,-5,20,-15 v-585 v`+-t+` v-585 c-2.667,-10,-9.667,-15,-21,-15\nc-10,0,-16.667,5,-20,15z M188 15 H145 v585 v`+t+` v585 h43z\nM367 15 v585 v`+t+` v585 c2.667,10,9.667,15,21,15\nc10,0,16.667,-5,20,-15 v-585 v`+-t+` v-585 c-2.667,-10,-9.667,-15,-21,-15\nc-10,0,-16.667,5,-20,15z M410 15 H367 v585 v`+t+\" v585 h43z\";case\"lfloor\":return\"M319 602 V0 H403 V602 v\"+t+` v1715 h263 v84 H319z\nMM319 602 V0 H403 V602 v`+t+\" v1715 H319z\";case\"rfloor\":return\"M319 602 V0 H403 V602 v\"+t+` v1799 H0 v-84 H319z\nMM319 602 V0 H403 V602 v`+t+\" v1715 H319z\";case\"lceil\":return\"M403 1759 V84 H666 V0 H319 V1759 v\"+t+` v602 h84z\nM403 1759 V0 H319 V1759 v`+t+\" v602 h84z\";case\"rceil\":return\"M347 1759 V0 H0 V84 H263 V1759 v\"+t+` v602 h84z\nM347 1759 V0 h-84 V1759 v`+t+\" v602 h84z\";case\"lparen\":return`M863,9c0,-2,-2,-5,-6,-9c0,0,-17,0,-17,0c-12.7,0,-19.3,0.3,-20,1\nc-5.3,5.3,-10.3,11,-15,17c-242.7,294.7,-395.3,682,-458,1162c-21.3,163.3,-33.3,349,\n-36,557 
l0,`+(t+84)+`c0.2,6,0,26,0,60c2,159.3,10,310.7,24,454c53.3,528,210,\n949.7,470,1265c4.7,6,9.7,11.7,15,17c0.7,0.7,7,1,19,1c0,0,18,0,18,0c4,-4,6,-7,6,-9\nc0,-2.7,-3.3,-8.7,-10,-18c-135.3,-192.7,-235.5,-414.3,-300.5,-665c-65,-250.7,-102.5,\n-544.7,-112.5,-882c-2,-104,-3,-167,-3,-189\nl0,-`+(t+92)+`c0,-162.7,5.7,-314,17,-454c20.7,-272,63.7,-513,129,-723c65.3,\n-210,155.3,-396.3,270,-559c6.7,-9.3,10,-15.3,10,-18z`;case\"rparen\":return`M76,0c-16.7,0,-25,3,-25,9c0,2,2,6.3,6,13c21.3,28.7,42.3,60.3,\n63,95c96.7,156.7,172.8,332.5,228.5,527.5c55.7,195,92.8,416.5,111.5,664.5\nc11.3,139.3,17,290.7,17,454c0,28,1.7,43,3.3,45l0,`+(t+9)+`\nc-3,4,-3.3,16.7,-3.3,38c0,162,-5.7,313.7,-17,455c-18.7,248,-55.8,469.3,-111.5,664\nc-55.7,194.7,-131.8,370.3,-228.5,527c-20.7,34.7,-41.7,66.3,-63,95c-2,3.3,-4,7,-6,11\nc0,7.3,5.7,11,17,11c0,0,11,0,11,0c9.3,0,14.3,-0.3,15,-1c5.3,-5.3,10.3,-11,15,-17\nc242.7,-294.7,395.3,-681.7,458,-1161c21.3,-164.7,33.3,-350.7,36,-558\nl0,-`+(t+144)+`c-2,-159.3,-10,-310.7,-24,-454c-53.3,-528,-210,-949.7,\n-470,-1265c-4.7,-6,-9.7,-11.7,-15,-17c-0.7,-0.7,-6.7,-1,-18,-1z`;default:throw new Error(\"Unknown stretchy delimiter.\")}},ei=class{constructor(e){this.children=void 0,this.classes=void 0,this.height=void 0,this.depth=void 0,this.maxFontSize=void 0,this.style=void 0,this.children=e,this.classes=[],this.height=0,this.depth=0,this.maxFontSize=0,this.style={}}hasClass(e){return this.classes.includes(e)}toNode(){for(var e=document.createDocumentFragment(),t=0;t<this.children.length;t++)e.appendChild(this.children[t].toNode());return e}toMarkup(){for(var e=\"\",t=0;t<this.children.length;t++)e+=this.children[t].toMarkup();return e}toText(){var e=t=>t.toText();return 
this.children.map(e).join(\"\")}},Jt={\"AMS-Regular\":{32:[0,0,0,0,.25],65:[0,.68889,0,0,.72222],66:[0,.68889,0,0,.66667],67:[0,.68889,0,0,.72222],68:[0,.68889,0,0,.72222],69:[0,.68889,0,0,.66667],70:[0,.68889,0,0,.61111],71:[0,.68889,0,0,.77778],72:[0,.68889,0,0,.77778],73:[0,.68889,0,0,.38889],74:[.16667,.68889,0,0,.5],75:[0,.68889,0,0,.77778],76:[0,.68889,0,0,.66667],77:[0,.68889,0,0,.94445],78:[0,.68889,0,0,.72222],79:[.16667,.68889,0,0,.77778],80:[0,.68889,0,0,.61111],81:[.16667,.68889,0,0,.77778],82:[0,.68889,0,0,.72222],83:[0,.68889,0,0,.55556],84:[0,.68889,0,0,.66667],85:[0,.68889,0,0,.72222],86:[0,.68889,0,0,.72222],87:[0,.68889,0,0,1],88:[0,.68889,0,0,.72222],89:[0,.68889,0,0,.72222],90:[0,.68889,0,0,.66667],107:[0,.68889,0,0,.55556],160:[0,0,0,0,.25],165:[0,.675,.025,0,.75],174:[.15559,.69224,0,0,.94666],240:[0,.68889,0,0,.55556],295:[0,.68889,0,0,.54028],710:[0,.825,0,0,2.33334],732:[0,.9,0,0,2.33334],770:[0,.825,0,0,2.33334],771:[0,.9,0,0,2.33334],989:[.08167,.58167,0,0,.77778],1008:[0,.43056,.04028,0,.66667],8245:[0,.54986,0,0,.275],8463:[0,.68889,0,0,.54028],8487:[0,.68889,0,0,.72222],8498:[0,.68889,0,0,.55556],8502:[0,.68889,0,0,.66667],8503:[0,.68889,0,0,.44445],8504:[0,.68889,0,0,.66667],8513:[0,.68889,0,0,.63889],8592:[-.03598,.46402,0,0,.5],8594:[-.03598,.46402,0,0,.5],8602:[-.13313,.36687,0,0,1],8603:[-.13313,.36687,0,0,1],8606:[.01354,.52239,0,0,1],8608:[.01354,.52239,0,0,1],8610:[.01354,.52239,0,0,1.11111],8611:[.01354,.52239,0,0,1.11111],8619:[0,.54986,0,0,1],8620:[0,.54986,0,0,1],8621:[-.13313,.37788,0,0,1.38889],8622:[-.13313,.36687,0,0,1],8624:[0,.69224,0,0,.5],8625:[0,.69224,0,0,.5],8630:[0,.43056,0,0,1],8631:[0,.43056,0,0,1],8634:[.08198,.58198,0,0,.77778],8635:[.08198,.58198,0,0,.77778],8638:[.19444,.69224,0,0,.41667],8639:[.19444,.69224,0,0,.41667],8642:[.19444,.69224,0,0,.41667],8643:[.19444,.69224,0,0,.41667],8644:[.1808,.675,0,0,1],8646:[.1808,.675,0,0,1],8647:[.1808,.675,0,0,1],8648:[.19444,.69224,0,0,.83334],8649:[.1808,.675,0,0,1
],8650:[.19444,.69224,0,0,.83334],8651:[.01354,.52239,0,0,1],8652:[.01354,.52239,0,0,1],8653:[-.13313,.36687,0,0,1],8654:[-.13313,.36687,0,0,1],8655:[-.13313,.36687,0,0,1],8666:[.13667,.63667,0,0,1],8667:[.13667,.63667,0,0,1],8669:[-.13313,.37788,0,0,1],8672:[-.064,.437,0,0,1.334],8674:[-.064,.437,0,0,1.334],8705:[0,.825,0,0,.5],8708:[0,.68889,0,0,.55556],8709:[.08167,.58167,0,0,.77778],8717:[0,.43056,0,0,.42917],8722:[-.03598,.46402,0,0,.5],8724:[.08198,.69224,0,0,.77778],8726:[.08167,.58167,0,0,.77778],8733:[0,.69224,0,0,.77778],8736:[0,.69224,0,0,.72222],8737:[0,.69224,0,0,.72222],8738:[.03517,.52239,0,0,.72222],8739:[.08167,.58167,0,0,.22222],8740:[.25142,.74111,0,0,.27778],8741:[.08167,.58167,0,0,.38889],8742:[.25142,.74111,0,0,.5],8756:[0,.69224,0,0,.66667],8757:[0,.69224,0,0,.66667],8764:[-.13313,.36687,0,0,.77778],8765:[-.13313,.37788,0,0,.77778],8769:[-.13313,.36687,0,0,.77778],8770:[-.03625,.46375,0,0,.77778],8774:[.30274,.79383,0,0,.77778],8776:[-.01688,.48312,0,0,.77778],8778:[.08167,.58167,0,0,.77778],8782:[.06062,.54986,0,0,.77778],8783:[.06062,.54986,0,0,.77778],8785:[.08198,.58198,0,0,.77778],8786:[.08198,.58198,0,0,.77778],8787:[.08198,.58198,0,0,.77778],8790:[0,.69224,0,0,.77778],8791:[.22958,.72958,0,0,.77778],8796:[.08198,.91667,0,0,.77778],8806:[.25583,.75583,0,0,.77778],8807:[.25583,.75583,0,0,.77778],8808:[.25142,.75726,0,0,.77778],8809:[.25142,.75726,0,0,.77778],8812:[.25583,.75583,0,0,.5],8814:[.20576,.70576,0,0,.77778],8815:[.20576,.70576,0,0,.77778],8816:[.30274,.79383,0,0,.77778],8817:[.30274,.79383,0,0,.77778],8818:[.22958,.72958,0,0,.77778],8819:[.22958,.72958,0,0,.77778],8822:[.1808,.675,0,0,.77778],8823:[.1808,.675,0,0,.77778],8828:[.13667,.63667,0,0,.77778],8829:[.13667,.63667,0,0,.77778],8830:[.22958,.72958,0,0,.77778],8831:[.22958,.72958,0,0,.77778],8832:[.20576,.70576,0,0,.77778],8833:[.20576,.70576,0,0,.77778],8840:[.30274,.79383,0,0,.77778],8841:[.30274,.79383,0,0,.77778],8842:[.13597,.63597,0,0,.77778],8843:[.13597,.63597,0,0,.
77778],8847:[.03517,.54986,0,0,.77778],8848:[.03517,.54986,0,0,.77778],8858:[.08198,.58198,0,0,.77778],8859:[.08198,.58198,0,0,.77778],8861:[.08198,.58198,0,0,.77778],8862:[0,.675,0,0,.77778],8863:[0,.675,0,0,.77778],8864:[0,.675,0,0,.77778],8865:[0,.675,0,0,.77778],8872:[0,.69224,0,0,.61111],8873:[0,.69224,0,0,.72222],8874:[0,.69224,0,0,.88889],8876:[0,.68889,0,0,.61111],8877:[0,.68889,0,0,.61111],8878:[0,.68889,0,0,.72222],8879:[0,.68889,0,0,.72222],8882:[.03517,.54986,0,0,.77778],8883:[.03517,.54986,0,0,.77778],8884:[.13667,.63667,0,0,.77778],8885:[.13667,.63667,0,0,.77778],8888:[0,.54986,0,0,1.11111],8890:[.19444,.43056,0,0,.55556],8891:[.19444,.69224,0,0,.61111],8892:[.19444,.69224,0,0,.61111],8901:[0,.54986,0,0,.27778],8903:[.08167,.58167,0,0,.77778],8905:[.08167,.58167,0,0,.77778],8906:[.08167,.58167,0,0,.77778],8907:[0,.69224,0,0,.77778],8908:[0,.69224,0,0,.77778],8909:[-.03598,.46402,0,0,.77778],8910:[0,.54986,0,0,.76042],8911:[0,.54986,0,0,.76042],8912:[.03517,.54986,0,0,.77778],8913:[.03517,.54986,0,0,.77778],8914:[0,.54986,0,0,.66667],8915:[0,.54986,0,0,.66667],8916:[0,.69224,0,0,.66667],8918:[.0391,.5391,0,0,.77778],8919:[.0391,.5391,0,0,.77778],8920:[.03517,.54986,0,0,1.33334],8921:[.03517,.54986,0,0,1.33334],8922:[.38569,.88569,0,0,.77778],8923:[.38569,.88569,0,0,.77778],8926:[.13667,.63667,0,0,.77778],8927:[.13667,.63667,0,0,.77778],8928:[.30274,.79383,0,0,.77778],8929:[.30274,.79383,0,0,.77778],8934:[.23222,.74111,0,0,.77778],8935:[.23222,.74111,0,0,.77778],8936:[.23222,.74111,0,0,.77778],8937:[.23222,.74111,0,0,.77778],8938:[.20576,.70576,0,0,.77778],8939:[.20576,.70576,0,0,.77778],8940:[.30274,.79383,0,0,.77778],8941:[.30274,.79383,0,0,.77778],8994:[.19444,.69224,0,0,.77778],8995:[.19444,.69224,0,0,.77778],9416:[.15559,.69224,0,0,.90222],9484:[0,.69224,0,0,.5],9488:[0,.69224,0,0,.5],9492:[0,.37788,0,0,.5],9496:[0,.37788,0,0,.5],9585:[.19444,.68889,0,0,.88889],9586:[.19444,.74111,0,0,.88889],9632:[0,.675,0,0,.77778],9633:[0,.675,0,0,.77778],9650:[0
,.54986,0,0,.72222],9651:[0,.54986,0,0,.72222],9654:[.03517,.54986,0,0,.77778],9660:[0,.54986,0,0,.72222],9661:[0,.54986,0,0,.72222],9664:[.03517,.54986,0,0,.77778],9674:[.11111,.69224,0,0,.66667],9733:[.19444,.69224,0,0,.94445],10003:[0,.69224,0,0,.83334],10016:[0,.69224,0,0,.83334],10731:[.11111,.69224,0,0,.66667],10846:[.19444,.75583,0,0,.61111],10877:[.13667,.63667,0,0,.77778],10878:[.13667,.63667,0,0,.77778],10885:[.25583,.75583,0,0,.77778],10886:[.25583,.75583,0,0,.77778],10887:[.13597,.63597,0,0,.77778],10888:[.13597,.63597,0,0,.77778],10889:[.26167,.75726,0,0,.77778],10890:[.26167,.75726,0,0,.77778],10891:[.48256,.98256,0,0,.77778],10892:[.48256,.98256,0,0,.77778],10901:[.13667,.63667,0,0,.77778],10902:[.13667,.63667,0,0,.77778],10933:[.25142,.75726,0,0,.77778],10934:[.25142,.75726,0,0,.77778],10935:[.26167,.75726,0,0,.77778],10936:[.26167,.75726,0,0,.77778],10937:[.26167,.75726,0,0,.77778],10938:[.26167,.75726,0,0,.77778],10949:[.25583,.75583,0,0,.77778],10950:[.25583,.75583,0,0,.77778],10955:[.28481,.79383,0,0,.77778],10956:[.28481,.79383,0,0,.77778],57350:[.08167,.58167,0,0,.22222],57351:[.08167,.58167,0,0,.38889],57352:[.08167,.58167,0,0,.77778],57353:[0,.43056,.04028,0,.66667],57356:[.25142,.75726,0,0,.77778],57357:[.25142,.75726,0,0,.77778],57358:[.41951,.91951,0,0,.77778],57359:[.30274,.79383,0,0,.77778],57360:[.30274,.79383,0,0,.77778],57361:[.41951,.91951,0,0,.77778],57366:[.25142,.75726,0,0,.77778],57367:[.25142,.75726,0,0,.77778],57368:[.25142,.75726,0,0,.77778],57369:[.25142,.75726,0,0,.77778],57370:[.13597,.63597,0,0,.77778],57371:[.13597,.63597,0,0,.77778]},\"Caligraphic-Regular\":{32:[0,0,0,0,.25],65:[0,.68333,0,.19445,.79847],66:[0,.68333,.03041,.13889,.65681],67:[0,.68333,.05834,.13889,.52653],68:[0,.68333,.02778,.08334,.77139],69:[0,.68333,.08944,.11111,.52778],70:[0,.68333,.09931,.11111,.71875],71:[.09722,.68333,.0593,.11111,.59487],72:[0,.68333,.00965,.11111,.84452],73:[0,.68333,.07382,0,.54452],74:[.09722,.68333,.18472,.16667,.67778],75:
[0,.68333,.01445,.05556,.76195],76:[0,.68333,0,.13889,.68972],77:[0,.68333,0,.13889,1.2009],78:[0,.68333,.14736,.08334,.82049],79:[0,.68333,.02778,.11111,.79611],80:[0,.68333,.08222,.08334,.69556],81:[.09722,.68333,0,.11111,.81667],82:[0,.68333,0,.08334,.8475],83:[0,.68333,.075,.13889,.60556],84:[0,.68333,.25417,0,.54464],85:[0,.68333,.09931,.08334,.62583],86:[0,.68333,.08222,0,.61278],87:[0,.68333,.08222,.08334,.98778],88:[0,.68333,.14643,.13889,.7133],89:[.09722,.68333,.08222,.08334,.66834],90:[0,.68333,.07944,.13889,.72473],160:[0,0,0,0,.25]},\"Fraktur-Regular\":{32:[0,0,0,0,.25],33:[0,.69141,0,0,.29574],34:[0,.69141,0,0,.21471],38:[0,.69141,0,0,.73786],39:[0,.69141,0,0,.21201],40:[.24982,.74947,0,0,.38865],41:[.24982,.74947,0,0,.38865],42:[0,.62119,0,0,.27764],43:[.08319,.58283,0,0,.75623],44:[0,.10803,0,0,.27764],45:[.08319,.58283,0,0,.75623],46:[0,.10803,0,0,.27764],47:[.24982,.74947,0,0,.50181],48:[0,.47534,0,0,.50181],49:[0,.47534,0,0,.50181],50:[0,.47534,0,0,.50181],51:[.18906,.47534,0,0,.50181],52:[.18906,.47534,0,0,.50181],53:[.18906,.47534,0,0,.50181],54:[0,.69141,0,0,.50181],55:[.18906,.47534,0,0,.50181],56:[0,.69141,0,0,.50181],57:[.18906,.47534,0,0,.50181],58:[0,.47534,0,0,.21606],59:[.12604,.47534,0,0,.21606],61:[-.13099,.36866,0,0,.75623],63:[0,.69141,0,0,.36245],65:[0,.69141,0,0,.7176],66:[0,.69141,0,0,.88397],67:[0,.69141,0,0,.61254],68:[0,.69141,0,0,.83158],69:[0,.69141,0,0,.66278],70:[.12604,.69141,0,0,.61119],71:[0,.69141,0,0,.78539],72:[.06302,.69141,0,0,.7203],73:[0,.69141,0,0,.55448],74:[.12604,.69141,0,0,.55231],75:[0,.69141,0,0,.66845],76:[0,.69141,0,0,.66602],77:[0,.69141,0,0,1.04953],78:[0,.69141,0,0,.83212],79:[0,.69141,0,0,.82699],80:[.18906,.69141,0,0,.82753],81:[.03781,.69141,0,0,.82699],82:[0,.69141,0,0,.82807],83:[0,.69141,0,0,.82861],84:[0,.69141,0,0,.66899],85:[0,.69141,0,0,.64576],86:[0,.69141,0,0,.83131],87:[0,.69141,0,0,1.04602],88:[0,.69141,0,0,.71922],89:[.18906,.69141,0,0,.83293],90:[.12604,.69141,0,0,.60201],91:[.24982,.74
947,0,0,.27764],93:[.24982,.74947,0,0,.27764],94:[0,.69141,0,0,.49965],97:[0,.47534,0,0,.50046],98:[0,.69141,0,0,.51315],99:[0,.47534,0,0,.38946],100:[0,.62119,0,0,.49857],101:[0,.47534,0,0,.40053],102:[.18906,.69141,0,0,.32626],103:[.18906,.47534,0,0,.5037],104:[.18906,.69141,0,0,.52126],105:[0,.69141,0,0,.27899],106:[0,.69141,0,0,.28088],107:[0,.69141,0,0,.38946],108:[0,.69141,0,0,.27953],109:[0,.47534,0,0,.76676],110:[0,.47534,0,0,.52666],111:[0,.47534,0,0,.48885],112:[.18906,.52396,0,0,.50046],113:[.18906,.47534,0,0,.48912],114:[0,.47534,0,0,.38919],115:[0,.47534,0,0,.44266],116:[0,.62119,0,0,.33301],117:[0,.47534,0,0,.5172],118:[0,.52396,0,0,.5118],119:[0,.52396,0,0,.77351],120:[.18906,.47534,0,0,.38865],121:[.18906,.47534,0,0,.49884],122:[.18906,.47534,0,0,.39054],160:[0,0,0,0,.25],8216:[0,.69141,0,0,.21471],8217:[0,.69141,0,0,.21471],58112:[0,.62119,0,0,.49749],58113:[0,.62119,0,0,.4983],58114:[.18906,.69141,0,0,.33328],58115:[.18906,.69141,0,0,.32923],58116:[.18906,.47534,0,0,.50343],58117:[0,.69141,0,0,.33301],58118:[0,.62119,0,0,.33409],58119:[0,.47534,0,0,.50073]},\"Main-Bold\":{32:[0,0,0,0,.25],33:[0,.69444,0,0,.35],34:[0,.69444,0,0,.60278],35:[.19444,.69444,0,0,.95833],36:[.05556,.75,0,0,.575],37:[.05556,.75,0,0,.95833],38:[0,.69444,0,0,.89444],39:[0,.69444,0,0,.31944],40:[.25,.75,0,0,.44722],41:[.25,.75,0,0,.44722],42:[0,.75,0,0,.575],43:[.13333,.63333,0,0,.89444],44:[.19444,.15556,0,0,.31944],45:[0,.44444,0,0,.38333],46:[0,.15556,0,0,.31944],47:[.25,.75,0,0,.575],48:[0,.64444,0,0,.575],49:[0,.64444,0,0,.575],50:[0,.64444,0,0,.575],51:[0,.64444,0,0,.575],52:[0,.64444,0,0,.575],53:[0,.64444,0,0,.575],54:[0,.64444,0,0,.575],55:[0,.64444,0,0,.575],56:[0,.64444,0,0,.575],57:[0,.64444,0,0,.575],58:[0,.44444,0,0,.31944],59:[.19444,.44444,0,0,.31944],60:[.08556,.58556,0,0,.89444],61:[-.10889,.39111,0,0,.89444],62:[.08556,.58556,0,0,.89444],63:[0,.69444,0,0,.54305],64:[0,.69444,0,0,.89444],65:[0,.68611,0,0,.86944],66:[0,.68611,0,0,.81805],67:[0,.68611,0,0,.830
55],68:[0,.68611,0,0,.88194],69:[0,.68611,0,0,.75555],70:[0,.68611,0,0,.72361],71:[0,.68611,0,0,.90416],72:[0,.68611,0,0,.9],73:[0,.68611,0,0,.43611],74:[0,.68611,0,0,.59444],75:[0,.68611,0,0,.90138],76:[0,.68611,0,0,.69166],77:[0,.68611,0,0,1.09166],78:[0,.68611,0,0,.9],79:[0,.68611,0,0,.86388],80:[0,.68611,0,0,.78611],81:[.19444,.68611,0,0,.86388],82:[0,.68611,0,0,.8625],83:[0,.68611,0,0,.63889],84:[0,.68611,0,0,.8],85:[0,.68611,0,0,.88472],86:[0,.68611,.01597,0,.86944],87:[0,.68611,.01597,0,1.18888],88:[0,.68611,0,0,.86944],89:[0,.68611,.02875,0,.86944],90:[0,.68611,0,0,.70277],91:[.25,.75,0,0,.31944],92:[.25,.75,0,0,.575],93:[.25,.75,0,0,.31944],94:[0,.69444,0,0,.575],95:[.31,.13444,.03194,0,.575],97:[0,.44444,0,0,.55902],98:[0,.69444,0,0,.63889],99:[0,.44444,0,0,.51111],100:[0,.69444,0,0,.63889],101:[0,.44444,0,0,.52708],102:[0,.69444,.10903,0,.35139],103:[.19444,.44444,.01597,0,.575],104:[0,.69444,0,0,.63889],105:[0,.69444,0,0,.31944],106:[.19444,.69444,0,0,.35139],107:[0,.69444,0,0,.60694],108:[0,.69444,0,0,.31944],109:[0,.44444,0,0,.95833],110:[0,.44444,0,0,.63889],111:[0,.44444,0,0,.575],112:[.19444,.44444,0,0,.63889],113:[.19444,.44444,0,0,.60694],114:[0,.44444,0,0,.47361],115:[0,.44444,0,0,.45361],116:[0,.63492,0,0,.44722],117:[0,.44444,0,0,.63889],118:[0,.44444,.01597,0,.60694],119:[0,.44444,.01597,0,.83055],120:[0,.44444,0,0,.60694],121:[.19444,.44444,.01597,0,.60694],122:[0,.44444,0,0,.51111],123:[.25,.75,0,0,.575],124:[.25,.75,0,0,.31944],125:[.25,.75,0,0,.575],126:[.35,.34444,0,0,.575],160:[0,0,0,0,.25],163:[0,.69444,0,0,.86853],168:[0,.69444,0,0,.575],172:[0,.44444,0,0,.76666],176:[0,.69444,0,0,.86944],177:[.13333,.63333,0,0,.89444],184:[.17014,0,0,0,.51111],198:[0,.68611,0,0,1.04166],215:[.13333,.63333,0,0,.89444],216:[.04861,.73472,0,0,.89444],223:[0,.69444,0,0,.59722],230:[0,.44444,0,0,.83055],247:[.13333,.63333,0,0,.89444],248:[.09722,.54167,0,0,.575],305:[0,.44444,0,0,.31944],338:[0,.68611,0,0,1.16944],339:[0,.44444,0,0,.89444],567:[.19444,.444
44,0,0,.35139],710:[0,.69444,0,0,.575],711:[0,.63194,0,0,.575],713:[0,.59611,0,0,.575],714:[0,.69444,0,0,.575],715:[0,.69444,0,0,.575],728:[0,.69444,0,0,.575],729:[0,.69444,0,0,.31944],730:[0,.69444,0,0,.86944],732:[0,.69444,0,0,.575],733:[0,.69444,0,0,.575],915:[0,.68611,0,0,.69166],916:[0,.68611,0,0,.95833],920:[0,.68611,0,0,.89444],923:[0,.68611,0,0,.80555],926:[0,.68611,0,0,.76666],928:[0,.68611,0,0,.9],931:[0,.68611,0,0,.83055],933:[0,.68611,0,0,.89444],934:[0,.68611,0,0,.83055],936:[0,.68611,0,0,.89444],937:[0,.68611,0,0,.83055],8211:[0,.44444,.03194,0,.575],8212:[0,.44444,.03194,0,1.14999],8216:[0,.69444,0,0,.31944],8217:[0,.69444,0,0,.31944],8220:[0,.69444,0,0,.60278],8221:[0,.69444,0,0,.60278],8224:[.19444,.69444,0,0,.51111],8225:[.19444,.69444,0,0,.51111],8242:[0,.55556,0,0,.34444],8407:[0,.72444,.15486,0,.575],8463:[0,.69444,0,0,.66759],8465:[0,.69444,0,0,.83055],8467:[0,.69444,0,0,.47361],8472:[.19444,.44444,0,0,.74027],8476:[0,.69444,0,0,.83055],8501:[0,.69444,0,0,.70277],8592:[-.10889,.39111,0,0,1.14999],8593:[.19444,.69444,0,0,.575],8594:[-.10889,.39111,0,0,1.14999],8595:[.19444,.69444,0,0,.575],8596:[-.10889,.39111,0,0,1.14999],8597:[.25,.75,0,0,.575],8598:[.19444,.69444,0,0,1.14999],8599:[.19444,.69444,0,0,1.14999],8600:[.19444,.69444,0,0,1.14999],8601:[.19444,.69444,0,0,1.14999],8636:[-.10889,.39111,0,0,1.14999],8637:[-.10889,.39111,0,0,1.14999],8640:[-.10889,.39111,0,0,1.14999],8641:[-.10889,.39111,0,0,1.14999],8656:[-.10889,.39111,0,0,1.14999],8657:[.19444,.69444,0,0,.70277],8658:[-.10889,.39111,0,0,1.14999],8659:[.19444,.69444,0,0,.70277],8660:[-.10889,.39111,0,0,1.14999],8661:[.25,.75,0,0,.70277],8704:[0,.69444,0,0,.63889],8706:[0,.69444,.06389,0,.62847],8707:[0,.69444,0,0,.63889],8709:[.05556,.75,0,0,.575],8711:[0,.68611,0,0,.95833],8712:[.08556,.58556,0,0,.76666],8715:[.08556,.58556,0,0,.76666],8722:[.13333,.63333,0,0,.89444],8723:[.13333,.63333,0,0,.89444],8725:[.25,.75,0,0,.575],8726:[.25,.75,0,0,.575],8727:[-.02778,.47222,0,0,.575],8728:[-
.02639,.47361,0,0,.575],8729:[-.02639,.47361,0,0,.575],8730:[.18,.82,0,0,.95833],8733:[0,.44444,0,0,.89444],8734:[0,.44444,0,0,1.14999],8736:[0,.69224,0,0,.72222],8739:[.25,.75,0,0,.31944],8741:[.25,.75,0,0,.575],8743:[0,.55556,0,0,.76666],8744:[0,.55556,0,0,.76666],8745:[0,.55556,0,0,.76666],8746:[0,.55556,0,0,.76666],8747:[.19444,.69444,.12778,0,.56875],8764:[-.10889,.39111,0,0,.89444],8768:[.19444,.69444,0,0,.31944],8771:[.00222,.50222,0,0,.89444],8773:[.027,.638,0,0,.894],8776:[.02444,.52444,0,0,.89444],8781:[.00222,.50222,0,0,.89444],8801:[.00222,.50222,0,0,.89444],8804:[.19667,.69667,0,0,.89444],8805:[.19667,.69667,0,0,.89444],8810:[.08556,.58556,0,0,1.14999],8811:[.08556,.58556,0,0,1.14999],8826:[.08556,.58556,0,0,.89444],8827:[.08556,.58556,0,0,.89444],8834:[.08556,.58556,0,0,.89444],8835:[.08556,.58556,0,0,.89444],8838:[.19667,.69667,0,0,.89444],8839:[.19667,.69667,0,0,.89444],8846:[0,.55556,0,0,.76666],8849:[.19667,.69667,0,0,.89444],8850:[.19667,.69667,0,0,.89444],8851:[0,.55556,0,0,.76666],8852:[0,.55556,0,0,.76666],8853:[.13333,.63333,0,0,.89444],8854:[.13333,.63333,0,0,.89444],8855:[.13333,.63333,0,0,.89444],8856:[.13333,.63333,0,0,.89444],8857:[.13333,.63333,0,0,.89444],8866:[0,.69444,0,0,.70277],8867:[0,.69444,0,0,.70277],8868:[0,.69444,0,0,.89444],8869:[0,.69444,0,0,.89444],8900:[-.02639,.47361,0,0,.575],8901:[-.02639,.47361,0,0,.31944],8902:[-.02778,.47222,0,0,.575],8968:[.25,.75,0,0,.51111],8969:[.25,.75,0,0,.51111],8970:[.25,.75,0,0,.51111],8971:[.25,.75,0,0,.51111],8994:[-.13889,.36111,0,0,1.14999],8995:[-.13889,.36111,0,0,1.14999],9651:[.19444,.69444,0,0,1.02222],9657:[-.02778,.47222,0,0,.575],9661:[.19444,.69444,0,0,1.02222],9667:[-.02778,.47222,0,0,.575],9711:[.19444,.69444,0,0,1.14999],9824:[.12963,.69444,0,0,.89444],9825:[.12963,.69444,0,0,.89444],9826:[.12963,.69444,0,0,.89444],9827:[.12963,.69444,0,0,.89444],9837:[0,.75,0,0,.44722],9838:[.19444,.69444,0,0,.44722],9839:[.19444,.69444,0,0,.44722],10216:[.25,.75,0,0,.44722],10217:[.25,.75,0,
0,.44722],10815:[0,.68611,0,0,.9],10927:[.19667,.69667,0,0,.89444],10928:[.19667,.69667,0,0,.89444],57376:[.19444,.69444,0,0,0]},\"Main-BoldItalic\":{32:[0,0,0,0,.25],33:[0,.69444,.11417,0,.38611],34:[0,.69444,.07939,0,.62055],35:[.19444,.69444,.06833,0,.94444],37:[.05556,.75,.12861,0,.94444],38:[0,.69444,.08528,0,.88555],39:[0,.69444,.12945,0,.35555],40:[.25,.75,.15806,0,.47333],41:[.25,.75,.03306,0,.47333],42:[0,.75,.14333,0,.59111],43:[.10333,.60333,.03306,0,.88555],44:[.19444,.14722,0,0,.35555],45:[0,.44444,.02611,0,.41444],46:[0,.14722,0,0,.35555],47:[.25,.75,.15806,0,.59111],48:[0,.64444,.13167,0,.59111],49:[0,.64444,.13167,0,.59111],50:[0,.64444,.13167,0,.59111],51:[0,.64444,.13167,0,.59111],52:[.19444,.64444,.13167,0,.59111],53:[0,.64444,.13167,0,.59111],54:[0,.64444,.13167,0,.59111],55:[.19444,.64444,.13167,0,.59111],56:[0,.64444,.13167,0,.59111],57:[0,.64444,.13167,0,.59111],58:[0,.44444,.06695,0,.35555],59:[.19444,.44444,.06695,0,.35555],61:[-.10889,.39111,.06833,0,.88555],63:[0,.69444,.11472,0,.59111],64:[0,.69444,.09208,0,.88555],65:[0,.68611,0,0,.86555],66:[0,.68611,.0992,0,.81666],67:[0,.68611,.14208,0,.82666],68:[0,.68611,.09062,0,.87555],69:[0,.68611,.11431,0,.75666],70:[0,.68611,.12903,0,.72722],71:[0,.68611,.07347,0,.89527],72:[0,.68611,.17208,0,.8961],73:[0,.68611,.15681,0,.47166],74:[0,.68611,.145,0,.61055],75:[0,.68611,.14208,0,.89499],76:[0,.68611,0,0,.69777],77:[0,.68611,.17208,0,1.07277],78:[0,.68611,.17208,0,.8961],79:[0,.68611,.09062,0,.85499],80:[0,.68611,.0992,0,.78721],81:[.19444,.68611,.09062,0,.85499],82:[0,.68611,.02559,0,.85944],83:[0,.68611,.11264,0,.64999],84:[0,.68611,.12903,0,.7961],85:[0,.68611,.17208,0,.88083],86:[0,.68611,.18625,0,.86555],87:[0,.68611,.18625,0,1.15999],88:[0,.68611,.15681,0,.86555],89:[0,.68611,.19803,0,.86555],90:[0,.68611,.14208,0,.70888],91:[.25,.75,.1875,0,.35611],93:[.25,.75,.09972,0,.35611],94:[0,.69444,.06709,0,.59111],95:[.31,.13444,.09811,0,.59111],97:[0,.44444,.09426,0,.59111],98:[0,.69444,.07861,0,
.53222],99:[0,.44444,.05222,0,.53222],100:[0,.69444,.10861,0,.59111],101:[0,.44444,.085,0,.53222],102:[.19444,.69444,.21778,0,.4],103:[.19444,.44444,.105,0,.53222],104:[0,.69444,.09426,0,.59111],105:[0,.69326,.11387,0,.35555],106:[.19444,.69326,.1672,0,.35555],107:[0,.69444,.11111,0,.53222],108:[0,.69444,.10861,0,.29666],109:[0,.44444,.09426,0,.94444],110:[0,.44444,.09426,0,.64999],111:[0,.44444,.07861,0,.59111],112:[.19444,.44444,.07861,0,.59111],113:[.19444,.44444,.105,0,.53222],114:[0,.44444,.11111,0,.50167],115:[0,.44444,.08167,0,.48694],116:[0,.63492,.09639,0,.385],117:[0,.44444,.09426,0,.62055],118:[0,.44444,.11111,0,.53222],119:[0,.44444,.11111,0,.76777],120:[0,.44444,.12583,0,.56055],121:[.19444,.44444,.105,0,.56166],122:[0,.44444,.13889,0,.49055],126:[.35,.34444,.11472,0,.59111],160:[0,0,0,0,.25],168:[0,.69444,.11473,0,.59111],176:[0,.69444,0,0,.94888],184:[.17014,0,0,0,.53222],198:[0,.68611,.11431,0,1.02277],216:[.04861,.73472,.09062,0,.88555],223:[.19444,.69444,.09736,0,.665],230:[0,.44444,.085,0,.82666],248:[.09722,.54167,.09458,0,.59111],305:[0,.44444,.09426,0,.35555],338:[0,.68611,.11431,0,1.14054],339:[0,.44444,.085,0,.82666],567:[.19444,.44444,.04611,0,.385],710:[0,.69444,.06709,0,.59111],711:[0,.63194,.08271,0,.59111],713:[0,.59444,.10444,0,.59111],714:[0,.69444,.08528,0,.59111],715:[0,.69444,0,0,.59111],728:[0,.69444,.10333,0,.59111],729:[0,.69444,.12945,0,.35555],730:[0,.69444,0,0,.94888],732:[0,.69444,.11472,0,.59111],733:[0,.69444,.11472,0,.59111],915:[0,.68611,.12903,0,.69777],916:[0,.68611,0,0,.94444],920:[0,.68611,.09062,0,.88555],923:[0,.68611,0,0,.80666],926:[0,.68611,.15092,0,.76777],928:[0,.68611,.17208,0,.8961],931:[0,.68611,.11431,0,.82666],933:[0,.68611,.10778,0,.88555],934:[0,.68611,.05632,0,.82666],936:[0,.68611,.10778,0,.88555],937:[0,.68611,.0992,0,.82666],8211:[0,.44444,.09811,0,.59111],8212:[0,.44444,.09811,0,1.18221],8216:[0,.69444,.12945,0,.35555],8217:[0,.69444,.12945,0,.35555],8220:[0,.69444,.16772,0,.62055],8221:[0,.69444,.0
7939,0,.62055]},\"Main-Italic\":{32:[0,0,0,0,.25],33:[0,.69444,.12417,0,.30667],34:[0,.69444,.06961,0,.51444],35:[.19444,.69444,.06616,0,.81777],37:[.05556,.75,.13639,0,.81777],38:[0,.69444,.09694,0,.76666],39:[0,.69444,.12417,0,.30667],40:[.25,.75,.16194,0,.40889],41:[.25,.75,.03694,0,.40889],42:[0,.75,.14917,0,.51111],43:[.05667,.56167,.03694,0,.76666],44:[.19444,.10556,0,0,.30667],45:[0,.43056,.02826,0,.35778],46:[0,.10556,0,0,.30667],47:[.25,.75,.16194,0,.51111],48:[0,.64444,.13556,0,.51111],49:[0,.64444,.13556,0,.51111],50:[0,.64444,.13556,0,.51111],51:[0,.64444,.13556,0,.51111],52:[.19444,.64444,.13556,0,.51111],53:[0,.64444,.13556,0,.51111],54:[0,.64444,.13556,0,.51111],55:[.19444,.64444,.13556,0,.51111],56:[0,.64444,.13556,0,.51111],57:[0,.64444,.13556,0,.51111],58:[0,.43056,.0582,0,.30667],59:[.19444,.43056,.0582,0,.30667],61:[-.13313,.36687,.06616,0,.76666],63:[0,.69444,.1225,0,.51111],64:[0,.69444,.09597,0,.76666],65:[0,.68333,0,0,.74333],66:[0,.68333,.10257,0,.70389],67:[0,.68333,.14528,0,.71555],68:[0,.68333,.09403,0,.755],69:[0,.68333,.12028,0,.67833],70:[0,.68333,.13305,0,.65277],71:[0,.68333,.08722,0,.77361],72:[0,.68333,.16389,0,.74333],73:[0,.68333,.15806,0,.38555],74:[0,.68333,.14028,0,.525],75:[0,.68333,.14528,0,.76888],76:[0,.68333,0,0,.62722],77:[0,.68333,.16389,0,.89666],78:[0,.68333,.16389,0,.74333],79:[0,.68333,.09403,0,.76666],80:[0,.68333,.10257,0,.67833],81:[.19444,.68333,.09403,0,.76666],82:[0,.68333,.03868,0,.72944],83:[0,.68333,.11972,0,.56222],84:[0,.68333,.13305,0,.71555],85:[0,.68333,.16389,0,.74333],86:[0,.68333,.18361,0,.74333],87:[0,.68333,.18361,0,.99888],88:[0,.68333,.15806,0,.74333],89:[0,.68333,.19383,0,.74333],90:[0,.68333,.14528,0,.61333],91:[.25,.75,.1875,0,.30667],93:[.25,.75,.10528,0,.30667],94:[0,.69444,.06646,0,.51111],95:[.31,.12056,.09208,0,.51111],97:[0,.43056,.07671,0,.51111],98:[0,.69444,.06312,0,.46],99:[0,.43056,.05653,0,.46],100:[0,.69444,.10333,0,.51111],101:[0,.43056,.07514,0,.46],102:[.19444,.69444,.21194,0,
.30667],103:[.19444,.43056,.08847,0,.46],104:[0,.69444,.07671,0,.51111],105:[0,.65536,.1019,0,.30667],106:[.19444,.65536,.14467,0,.30667],107:[0,.69444,.10764,0,.46],108:[0,.69444,.10333,0,.25555],109:[0,.43056,.07671,0,.81777],110:[0,.43056,.07671,0,.56222],111:[0,.43056,.06312,0,.51111],112:[.19444,.43056,.06312,0,.51111],113:[.19444,.43056,.08847,0,.46],114:[0,.43056,.10764,0,.42166],115:[0,.43056,.08208,0,.40889],116:[0,.61508,.09486,0,.33222],117:[0,.43056,.07671,0,.53666],118:[0,.43056,.10764,0,.46],119:[0,.43056,.10764,0,.66444],120:[0,.43056,.12042,0,.46389],121:[.19444,.43056,.08847,0,.48555],122:[0,.43056,.12292,0,.40889],126:[.35,.31786,.11585,0,.51111],160:[0,0,0,0,.25],168:[0,.66786,.10474,0,.51111],176:[0,.69444,0,0,.83129],184:[.17014,0,0,0,.46],198:[0,.68333,.12028,0,.88277],216:[.04861,.73194,.09403,0,.76666],223:[.19444,.69444,.10514,0,.53666],230:[0,.43056,.07514,0,.71555],248:[.09722,.52778,.09194,0,.51111],338:[0,.68333,.12028,0,.98499],339:[0,.43056,.07514,0,.71555],710:[0,.69444,.06646,0,.51111],711:[0,.62847,.08295,0,.51111],713:[0,.56167,.10333,0,.51111],714:[0,.69444,.09694,0,.51111],715:[0,.69444,0,0,.51111],728:[0,.69444,.10806,0,.51111],729:[0,.66786,.11752,0,.30667],730:[0,.69444,0,0,.83129],732:[0,.66786,.11585,0,.51111],733:[0,.69444,.1225,0,.51111],915:[0,.68333,.13305,0,.62722],916:[0,.68333,0,0,.81777],920:[0,.68333,.09403,0,.76666],923:[0,.68333,0,0,.69222],926:[0,.68333,.15294,0,.66444],928:[0,.68333,.16389,0,.74333],931:[0,.68333,.12028,0,.71555],933:[0,.68333,.11111,0,.76666],934:[0,.68333,.05986,0,.71555],936:[0,.68333,.11111,0,.76666],937:[0,.68333,.10257,0,.71555],8211:[0,.43056,.09208,0,.51111],8212:[0,.43056,.09208,0,1.02222],8216:[0,.69444,.12417,0,.30667],8217:[0,.69444,.12417,0,.30667],8220:[0,.69444,.1685,0,.51444],8221:[0,.69444,.06961,0,.51444],8463:[0,.68889,0,0,.54028]},\"Main-Regular\":{32:[0,0,0,0,.25],33:[0,.69444,0,0,.27778],34:[0,.69444,0,0,.5],35:[.19444,.69444,0,0,.83334],36:[.05556,.75,0,0,.5],37:[.05556,.7
5,0,0,.83334],38:[0,.69444,0,0,.77778],39:[0,.69444,0,0,.27778],40:[.25,.75,0,0,.38889],41:[.25,.75,0,0,.38889],42:[0,.75,0,0,.5],43:[.08333,.58333,0,0,.77778],44:[.19444,.10556,0,0,.27778],45:[0,.43056,0,0,.33333],46:[0,.10556,0,0,.27778],47:[.25,.75,0,0,.5],48:[0,.64444,0,0,.5],49:[0,.64444,0,0,.5],50:[0,.64444,0,0,.5],51:[0,.64444,0,0,.5],52:[0,.64444,0,0,.5],53:[0,.64444,0,0,.5],54:[0,.64444,0,0,.5],55:[0,.64444,0,0,.5],56:[0,.64444,0,0,.5],57:[0,.64444,0,0,.5],58:[0,.43056,0,0,.27778],59:[.19444,.43056,0,0,.27778],60:[.0391,.5391,0,0,.77778],61:[-.13313,.36687,0,0,.77778],62:[.0391,.5391,0,0,.77778],63:[0,.69444,0,0,.47222],64:[0,.69444,0,0,.77778],65:[0,.68333,0,0,.75],66:[0,.68333,0,0,.70834],67:[0,.68333,0,0,.72222],68:[0,.68333,0,0,.76389],69:[0,.68333,0,0,.68056],70:[0,.68333,0,0,.65278],71:[0,.68333,0,0,.78472],72:[0,.68333,0,0,.75],73:[0,.68333,0,0,.36111],74:[0,.68333,0,0,.51389],75:[0,.68333,0,0,.77778],76:[0,.68333,0,0,.625],77:[0,.68333,0,0,.91667],78:[0,.68333,0,0,.75],79:[0,.68333,0,0,.77778],80:[0,.68333,0,0,.68056],81:[.19444,.68333,0,0,.77778],82:[0,.68333,0,0,.73611],83:[0,.68333,0,0,.55556],84:[0,.68333,0,0,.72222],85:[0,.68333,0,0,.75],86:[0,.68333,.01389,0,.75],87:[0,.68333,.01389,0,1.02778],88:[0,.68333,0,0,.75],89:[0,.68333,.025,0,.75],90:[0,.68333,0,0,.61111],91:[.25,.75,0,0,.27778],92:[.25,.75,0,0,.5],93:[.25,.75,0,0,.27778],94:[0,.69444,0,0,.5],95:[.31,.12056,.02778,0,.5],97:[0,.43056,0,0,.5],98:[0,.69444,0,0,.55556],99:[0,.43056,0,0,.44445],100:[0,.69444,0,0,.55556],101:[0,.43056,0,0,.44445],102:[0,.69444,.07778,0,.30556],103:[.19444,.43056,.01389,0,.5],104:[0,.69444,0,0,.55556],105:[0,.66786,0,0,.27778],106:[.19444,.66786,0,0,.30556],107:[0,.69444,0,0,.52778],108:[0,.69444,0,0,.27778],109:[0,.43056,0,0,.83334],110:[0,.43056,0,0,.55556],111:[0,.43056,0,0,.5],112:[.19444,.43056,0,0,.55556],113:[.19444,.43056,0,0,.52778],114:[0,.43056,0,0,.39167],115:[0,.43056,0,0,.39445],116:[0,.61508,0,0,.38889],117:[0,.43056,0,0,.55556],118:[0,.43056,
.01389,0,.52778],119:[0,.43056,.01389,0,.72222],120:[0,.43056,0,0,.52778],121:[.19444,.43056,.01389,0,.52778],122:[0,.43056,0,0,.44445],123:[.25,.75,0,0,.5],124:[.25,.75,0,0,.27778],125:[.25,.75,0,0,.5],126:[.35,.31786,0,0,.5],160:[0,0,0,0,.25],163:[0,.69444,0,0,.76909],167:[.19444,.69444,0,0,.44445],168:[0,.66786,0,0,.5],172:[0,.43056,0,0,.66667],176:[0,.69444,0,0,.75],177:[.08333,.58333,0,0,.77778],182:[.19444,.69444,0,0,.61111],184:[.17014,0,0,0,.44445],198:[0,.68333,0,0,.90278],215:[.08333,.58333,0,0,.77778],216:[.04861,.73194,0,0,.77778],223:[0,.69444,0,0,.5],230:[0,.43056,0,0,.72222],247:[.08333,.58333,0,0,.77778],248:[.09722,.52778,0,0,.5],305:[0,.43056,0,0,.27778],338:[0,.68333,0,0,1.01389],339:[0,.43056,0,0,.77778],567:[.19444,.43056,0,0,.30556],710:[0,.69444,0,0,.5],711:[0,.62847,0,0,.5],713:[0,.56778,0,0,.5],714:[0,.69444,0,0,.5],715:[0,.69444,0,0,.5],728:[0,.69444,0,0,.5],729:[0,.66786,0,0,.27778],730:[0,.69444,0,0,.75],732:[0,.66786,0,0,.5],733:[0,.69444,0,0,.5],915:[0,.68333,0,0,.625],916:[0,.68333,0,0,.83334],920:[0,.68333,0,0,.77778],923:[0,.68333,0,0,.69445],926:[0,.68333,0,0,.66667],928:[0,.68333,0,0,.75],931:[0,.68333,0,0,.72222],933:[0,.68333,0,0,.77778],934:[0,.68333,0,0,.72222],936:[0,.68333,0,0,.77778],937:[0,.68333,0,0,.72222],8211:[0,.43056,.02778,0,.5],8212:[0,.43056,.02778,0,1],8216:[0,.69444,0,0,.27778],8217:[0,.69444,0,0,.27778],8220:[0,.69444,0,0,.5],8221:[0,.69444,0,0,.5],8224:[.19444,.69444,0,0,.44445],8225:[.19444,.69444,0,0,.44445],8230:[0,.123,0,0,1.172],8242:[0,.55556,0,0,.275],8407:[0,.71444,.15382,0,.5],8463:[0,.68889,0,0,.54028],8465:[0,.69444,0,0,.72222],8467:[0,.69444,0,.11111,.41667],8472:[.19444,.43056,0,.11111,.63646],8476:[0,.69444,0,0,.72222],8501:[0,.69444,0,0,.61111],8592:[-.13313,.36687,0,0,1],8593:[.19444,.69444,0,0,.5],8594:[-.13313,.36687,0,0,1],8595:[.19444,.69444,0,0,.5],8596:[-.13313,.36687,0,0,1],8597:[.25,.75,0,0,.5],8598:[.19444,.69444,0,0,1],8599:[.19444,.69444,0,0,1],8600:[.19444,.69444,0,0,1],8601:[.19444,
.69444,0,0,1],8614:[.011,.511,0,0,1],8617:[.011,.511,0,0,1.126],8618:[.011,.511,0,0,1.126],8636:[-.13313,.36687,0,0,1],8637:[-.13313,.36687,0,0,1],8640:[-.13313,.36687,0,0,1],8641:[-.13313,.36687,0,0,1],8652:[.011,.671,0,0,1],8656:[-.13313,.36687,0,0,1],8657:[.19444,.69444,0,0,.61111],8658:[-.13313,.36687,0,0,1],8659:[.19444,.69444,0,0,.61111],8660:[-.13313,.36687,0,0,1],8661:[.25,.75,0,0,.61111],8704:[0,.69444,0,0,.55556],8706:[0,.69444,.05556,.08334,.5309],8707:[0,.69444,0,0,.55556],8709:[.05556,.75,0,0,.5],8711:[0,.68333,0,0,.83334],8712:[.0391,.5391,0,0,.66667],8715:[.0391,.5391,0,0,.66667],8722:[.08333,.58333,0,0,.77778],8723:[.08333,.58333,0,0,.77778],8725:[.25,.75,0,0,.5],8726:[.25,.75,0,0,.5],8727:[-.03472,.46528,0,0,.5],8728:[-.05555,.44445,0,0,.5],8729:[-.05555,.44445,0,0,.5],8730:[.2,.8,0,0,.83334],8733:[0,.43056,0,0,.77778],8734:[0,.43056,0,0,1],8736:[0,.69224,0,0,.72222],8739:[.25,.75,0,0,.27778],8741:[.25,.75,0,0,.5],8743:[0,.55556,0,0,.66667],8744:[0,.55556,0,0,.66667],8745:[0,.55556,0,0,.66667],8746:[0,.55556,0,0,.66667],8747:[.19444,.69444,.11111,0,.41667],8764:[-.13313,.36687,0,0,.77778],8768:[.19444,.69444,0,0,.27778],8771:[-.03625,.46375,0,0,.77778],8773:[-.022,.589,0,0,.778],8776:[-.01688,.48312,0,0,.77778],8781:[-.03625,.46375,0,0,.77778],8784:[-.133,.673,0,0,.778],8801:[-.03625,.46375,0,0,.77778],8804:[.13597,.63597,0,0,.77778],8805:[.13597,.63597,0,0,.77778],8810:[.0391,.5391,0,0,1],8811:[.0391,.5391,0,0,1],8826:[.0391,.5391,0,0,.77778],8827:[.0391,.5391,0,0,.77778],8834:[.0391,.5391,0,0,.77778],8835:[.0391,.5391,0,0,.77778],8838:[.13597,.63597,0,0,.77778],8839:[.13597,.63597,0,0,.77778],8846:[0,.55556,0,0,.66667],8849:[.13597,.63597,0,0,.77778],8850:[.13597,.63597,0,0,.77778],8851:[0,.55556,0,0,.66667],8852:[0,.55556,0,0,.66667],8853:[.08333,.58333,0,0,.77778],8854:[.08333,.58333,0,0,.77778],8855:[.08333,.58333,0,0,.77778],8856:[.08333,.58333,0,0,.77778],8857:[.08333,.58333,0,0,.77778],8866:[0,.69444,0,0,.61111],8867:[0,.69444,0,0,.61111],88
68:[0,.69444,0,0,.77778],8869:[0,.69444,0,0,.77778],8872:[.249,.75,0,0,.867],8900:[-.05555,.44445,0,0,.5],8901:[-.05555,.44445,0,0,.27778],8902:[-.03472,.46528,0,0,.5],8904:[.005,.505,0,0,.9],8942:[.03,.903,0,0,.278],8943:[-.19,.313,0,0,1.172],8945:[-.1,.823,0,0,1.282],8968:[.25,.75,0,0,.44445],8969:[.25,.75,0,0,.44445],8970:[.25,.75,0,0,.44445],8971:[.25,.75,0,0,.44445],8994:[-.14236,.35764,0,0,1],8995:[-.14236,.35764,0,0,1],9136:[.244,.744,0,0,.412],9137:[.244,.745,0,0,.412],9651:[.19444,.69444,0,0,.88889],9657:[-.03472,.46528,0,0,.5],9661:[.19444,.69444,0,0,.88889],9667:[-.03472,.46528,0,0,.5],9711:[.19444,.69444,0,0,1],9824:[.12963,.69444,0,0,.77778],9825:[.12963,.69444,0,0,.77778],9826:[.12963,.69444,0,0,.77778],9827:[.12963,.69444,0,0,.77778],9837:[0,.75,0,0,.38889],9838:[.19444,.69444,0,0,.38889],9839:[.19444,.69444,0,0,.38889],10216:[.25,.75,0,0,.38889],10217:[.25,.75,0,0,.38889],10222:[.244,.744,0,0,.412],10223:[.244,.745,0,0,.412],10229:[.011,.511,0,0,1.609],10230:[.011,.511,0,0,1.638],10231:[.011,.511,0,0,1.859],10232:[.024,.525,0,0,1.609],10233:[.024,.525,0,0,1.638],10234:[.024,.525,0,0,1.858],10236:[.011,.511,0,0,1.638],10815:[0,.68333,0,0,.75],10927:[.13597,.63597,0,0,.77778],10928:[.13597,.63597,0,0,.77778],57376:[.19444,.69444,0,0,0]},\"Math-BoldItalic\":{32:[0,0,0,0,.25],48:[0,.44444,0,0,.575],49:[0,.44444,0,0,.575],50:[0,.44444,0,0,.575],51:[.19444,.44444,0,0,.575],52:[.19444,.44444,0,0,.575],53:[.19444,.44444,0,0,.575],54:[0,.64444,0,0,.575],55:[.19444,.44444,0,0,.575],56:[0,.64444,0,0,.575],57:[.19444,.44444,0,0,.575],65:[0,.68611,0,0,.86944],66:[0,.68611,.04835,0,.8664],67:[0,.68611,.06979,0,.81694],68:[0,.68611,.03194,0,.93812],69:[0,.68611,.05451,0,.81007],70:[0,.68611,.15972,0,.68889],71:[0,.68611,0,0,.88673],72:[0,.68611,.08229,0,.98229],73:[0,.68611,.07778,0,.51111],74:[0,.68611,.10069,0,.63125],75:[0,.68611,.06979,0,.97118],76:[0,.68611,0,0,.75555],77:[0,.68611,.11424,0,1.14201],78:[0,.68611,.11424,0,.95034],79:[0,.68611,.03194,0,.83666],8
0:[0,.68611,.15972,0,.72309],81:[.19444,.68611,0,0,.86861],82:[0,.68611,.00421,0,.87235],83:[0,.68611,.05382,0,.69271],84:[0,.68611,.15972,0,.63663],85:[0,.68611,.11424,0,.80027],86:[0,.68611,.25555,0,.67778],87:[0,.68611,.15972,0,1.09305],88:[0,.68611,.07778,0,.94722],89:[0,.68611,.25555,0,.67458],90:[0,.68611,.06979,0,.77257],97:[0,.44444,0,0,.63287],98:[0,.69444,0,0,.52083],99:[0,.44444,0,0,.51342],100:[0,.69444,0,0,.60972],101:[0,.44444,0,0,.55361],102:[.19444,.69444,.11042,0,.56806],103:[.19444,.44444,.03704,0,.5449],104:[0,.69444,0,0,.66759],105:[0,.69326,0,0,.4048],106:[.19444,.69326,.0622,0,.47083],107:[0,.69444,.01852,0,.6037],108:[0,.69444,.0088,0,.34815],109:[0,.44444,0,0,1.0324],110:[0,.44444,0,0,.71296],111:[0,.44444,0,0,.58472],112:[.19444,.44444,0,0,.60092],113:[.19444,.44444,.03704,0,.54213],114:[0,.44444,.03194,0,.5287],115:[0,.44444,0,0,.53125],116:[0,.63492,0,0,.41528],117:[0,.44444,0,0,.68102],118:[0,.44444,.03704,0,.56666],119:[0,.44444,.02778,0,.83148],120:[0,.44444,0,0,.65903],121:[.19444,.44444,.03704,0,.59028],122:[0,.44444,.04213,0,.55509],160:[0,0,0,0,.25],915:[0,.68611,.15972,0,.65694],916:[0,.68611,0,0,.95833],920:[0,.68611,.03194,0,.86722],923:[0,.68611,0,0,.80555],926:[0,.68611,.07458,0,.84125],928:[0,.68611,.08229,0,.98229],931:[0,.68611,.05451,0,.88507],933:[0,.68611,.15972,0,.67083],934:[0,.68611,0,0,.76666],936:[0,.68611,.11653,0,.71402],937:[0,.68611,.04835,0,.8789],945:[0,.44444,0,0,.76064],946:[.19444,.69444,.03403,0,.65972],947:[.19444,.44444,.06389,0,.59003],948:[0,.69444,.03819,0,.52222],949:[0,.44444,0,0,.52882],950:[.19444,.69444,.06215,0,.50833],951:[.19444,.44444,.03704,0,.6],952:[0,.69444,.03194,0,.5618],953:[0,.44444,0,0,.41204],954:[0,.44444,0,0,.66759],955:[0,.69444,0,0,.67083],956:[.19444,.44444,0,0,.70787],957:[0,.44444,.06898,0,.57685],958:[.19444,.69444,.03021,0,.50833],959:[0,.44444,0,0,.58472],960:[0,.44444,.03704,0,.68241],961:[.19444,.44444,0,0,.6118],962:[.09722,.44444,.07917,0,.42361],963:[0,.44444,.03704,0,
.68588],964:[0,.44444,.13472,0,.52083],965:[0,.44444,.03704,0,.63055],966:[.19444,.44444,0,0,.74722],967:[.19444,.44444,0,0,.71805],968:[.19444,.69444,.03704,0,.75833],969:[0,.44444,.03704,0,.71782],977:[0,.69444,0,0,.69155],981:[.19444,.69444,0,0,.7125],982:[0,.44444,.03194,0,.975],1009:[.19444,.44444,0,0,.6118],1013:[0,.44444,0,0,.48333],57649:[0,.44444,0,0,.39352],57911:[.19444,.44444,0,0,.43889]},\"Math-Italic\":{32:[0,0,0,0,.25],48:[0,.43056,0,0,.5],49:[0,.43056,0,0,.5],50:[0,.43056,0,0,.5],51:[.19444,.43056,0,0,.5],52:[.19444,.43056,0,0,.5],53:[.19444,.43056,0,0,.5],54:[0,.64444,0,0,.5],55:[.19444,.43056,0,0,.5],56:[0,.64444,0,0,.5],57:[.19444,.43056,0,0,.5],65:[0,.68333,0,.13889,.75],66:[0,.68333,.05017,.08334,.75851],67:[0,.68333,.07153,.08334,.71472],68:[0,.68333,.02778,.05556,.82792],69:[0,.68333,.05764,.08334,.7382],70:[0,.68333,.13889,.08334,.64306],71:[0,.68333,0,.08334,.78625],72:[0,.68333,.08125,.05556,.83125],73:[0,.68333,.07847,.11111,.43958],74:[0,.68333,.09618,.16667,.55451],75:[0,.68333,.07153,.05556,.84931],76:[0,.68333,0,.02778,.68056],77:[0,.68333,.10903,.08334,.97014],78:[0,.68333,.10903,.08334,.80347],79:[0,.68333,.02778,.08334,.76278],80:[0,.68333,.13889,.08334,.64201],81:[.19444,.68333,0,.08334,.79056],82:[0,.68333,.00773,.08334,.75929],83:[0,.68333,.05764,.08334,.6132],84:[0,.68333,.13889,.08334,.58438],85:[0,.68333,.10903,.02778,.68278],86:[0,.68333,.22222,0,.58333],87:[0,.68333,.13889,0,.94445],88:[0,.68333,.07847,.08334,.82847],89:[0,.68333,.22222,0,.58056],90:[0,.68333,.07153,.08334,.68264],97:[0,.43056,0,0,.52859],98:[0,.69444,0,0,.42917],99:[0,.43056,0,.05556,.43276],100:[0,.69444,0,.16667,.52049],101:[0,.43056,0,.05556,.46563],102:[.19444,.69444,.10764,.16667,.48959],103:[.19444,.43056,.03588,.02778,.47697],104:[0,.69444,0,0,.57616],105:[0,.65952,0,0,.34451],106:[.19444,.65952,.05724,0,.41181],107:[0,.69444,.03148,0,.5206],108:[0,.69444,.01968,.08334,.29838],109:[0,.43056,0,0,.87801],110:[0,.43056,0,0,.60023],111:[0,.43056,0,.05556
,.48472],112:[.19444,.43056,0,.08334,.50313],113:[.19444,.43056,.03588,.08334,.44641],114:[0,.43056,.02778,.05556,.45116],115:[0,.43056,0,.05556,.46875],116:[0,.61508,0,.08334,.36111],117:[0,.43056,0,.02778,.57246],118:[0,.43056,.03588,.02778,.48472],119:[0,.43056,.02691,.08334,.71592],120:[0,.43056,0,.02778,.57153],121:[.19444,.43056,.03588,.05556,.49028],122:[0,.43056,.04398,.05556,.46505],160:[0,0,0,0,.25],915:[0,.68333,.13889,.08334,.61528],916:[0,.68333,0,.16667,.83334],920:[0,.68333,.02778,.08334,.76278],923:[0,.68333,0,.16667,.69445],926:[0,.68333,.07569,.08334,.74236],928:[0,.68333,.08125,.05556,.83125],931:[0,.68333,.05764,.08334,.77986],933:[0,.68333,.13889,.05556,.58333],934:[0,.68333,0,.08334,.66667],936:[0,.68333,.11,.05556,.61222],937:[0,.68333,.05017,.08334,.7724],945:[0,.43056,.0037,.02778,.6397],946:[.19444,.69444,.05278,.08334,.56563],947:[.19444,.43056,.05556,0,.51773],948:[0,.69444,.03785,.05556,.44444],949:[0,.43056,0,.08334,.46632],950:[.19444,.69444,.07378,.08334,.4375],951:[.19444,.43056,.03588,.05556,.49653],952:[0,.69444,.02778,.08334,.46944],953:[0,.43056,0,.05556,.35394],954:[0,.43056,0,0,.57616],955:[0,.69444,0,0,.58334],956:[.19444,.43056,0,.02778,.60255],957:[0,.43056,.06366,.02778,.49398],958:[.19444,.69444,.04601,.11111,.4375],959:[0,.43056,0,.05556,.48472],960:[0,.43056,.03588,0,.57003],961:[.19444,.43056,0,.08334,.51702],962:[.09722,.43056,.07986,.08334,.36285],963:[0,.43056,.03588,0,.57141],964:[0,.43056,.1132,.02778,.43715],965:[0,.43056,.03588,.02778,.54028],966:[.19444,.43056,0,.08334,.65417],967:[.19444,.43056,0,.05556,.62569],968:[.19444,.69444,.03588,.11111,.65139],969:[0,.43056,.03588,0,.62245],977:[0,.69444,0,.08334,.59144],981:[.19444,.69444,0,.08334,.59583],982:[0,.43056,.02778,0,.82813],1009:[.19444,.43056,0,.08334,.51702],1013:[0,.43056,0,.05556,.4059],57649:[0,.43056,0,.02778,.32246],57911:[.19444,.43056,0,.08334,.38403]},\"SansSerif-Bold\":{32:[0,0,0,0,.25],33:[0,.69444,0,0,.36667],34:[0,.69444,0,0,.55834],35:[.19444
,.69444,0,0,.91667],36:[.05556,.75,0,0,.55],37:[.05556,.75,0,0,1.02912],38:[0,.69444,0,0,.83056],39:[0,.69444,0,0,.30556],40:[.25,.75,0,0,.42778],41:[.25,.75,0,0,.42778],42:[0,.75,0,0,.55],43:[.11667,.61667,0,0,.85556],44:[.10556,.13056,0,0,.30556],45:[0,.45833,0,0,.36667],46:[0,.13056,0,0,.30556],47:[.25,.75,0,0,.55],48:[0,.69444,0,0,.55],49:[0,.69444,0,0,.55],50:[0,.69444,0,0,.55],51:[0,.69444,0,0,.55],52:[0,.69444,0,0,.55],53:[0,.69444,0,0,.55],54:[0,.69444,0,0,.55],55:[0,.69444,0,0,.55],56:[0,.69444,0,0,.55],57:[0,.69444,0,0,.55],58:[0,.45833,0,0,.30556],59:[.10556,.45833,0,0,.30556],61:[-.09375,.40625,0,0,.85556],63:[0,.69444,0,0,.51945],64:[0,.69444,0,0,.73334],65:[0,.69444,0,0,.73334],66:[0,.69444,0,0,.73334],67:[0,.69444,0,0,.70278],68:[0,.69444,0,0,.79445],69:[0,.69444,0,0,.64167],70:[0,.69444,0,0,.61111],71:[0,.69444,0,0,.73334],72:[0,.69444,0,0,.79445],73:[0,.69444,0,0,.33056],74:[0,.69444,0,0,.51945],75:[0,.69444,0,0,.76389],76:[0,.69444,0,0,.58056],77:[0,.69444,0,0,.97778],78:[0,.69444,0,0,.79445],79:[0,.69444,0,0,.79445],80:[0,.69444,0,0,.70278],81:[.10556,.69444,0,0,.79445],82:[0,.69444,0,0,.70278],83:[0,.69444,0,0,.61111],84:[0,.69444,0,0,.73334],85:[0,.69444,0,0,.76389],86:[0,.69444,.01528,0,.73334],87:[0,.69444,.01528,0,1.03889],88:[0,.69444,0,0,.73334],89:[0,.69444,.0275,0,.73334],90:[0,.69444,0,0,.67223],91:[.25,.75,0,0,.34306],93:[.25,.75,0,0,.34306],94:[0,.69444,0,0,.55],95:[.35,.10833,.03056,0,.55],97:[0,.45833,0,0,.525],98:[0,.69444,0,0,.56111],99:[0,.45833,0,0,.48889],100:[0,.69444,0,0,.56111],101:[0,.45833,0,0,.51111],102:[0,.69444,.07639,0,.33611],103:[.19444,.45833,.01528,0,.55],104:[0,.69444,0,0,.56111],105:[0,.69444,0,0,.25556],106:[.19444,.69444,0,0,.28611],107:[0,.69444,0,0,.53056],108:[0,.69444,0,0,.25556],109:[0,.45833,0,0,.86667],110:[0,.45833,0,0,.56111],111:[0,.45833,0,0,.55],112:[.19444,.45833,0,0,.56111],113:[.19444,.45833,0,0,.56111],114:[0,.45833,.01528,0,.37222],115:[0,.45833,0,0,.42167],116:[0,.58929,0,0,.40417],117:[0,.458
33,0,0,.56111],118:[0,.45833,.01528,0,.5],119:[0,.45833,.01528,0,.74445],120:[0,.45833,0,0,.5],121:[.19444,.45833,.01528,0,.5],122:[0,.45833,0,0,.47639],126:[.35,.34444,0,0,.55],160:[0,0,0,0,.25],168:[0,.69444,0,0,.55],176:[0,.69444,0,0,.73334],180:[0,.69444,0,0,.55],184:[.17014,0,0,0,.48889],305:[0,.45833,0,0,.25556],567:[.19444,.45833,0,0,.28611],710:[0,.69444,0,0,.55],711:[0,.63542,0,0,.55],713:[0,.63778,0,0,.55],728:[0,.69444,0,0,.55],729:[0,.69444,0,0,.30556],730:[0,.69444,0,0,.73334],732:[0,.69444,0,0,.55],733:[0,.69444,0,0,.55],915:[0,.69444,0,0,.58056],916:[0,.69444,0,0,.91667],920:[0,.69444,0,0,.85556],923:[0,.69444,0,0,.67223],926:[0,.69444,0,0,.73334],928:[0,.69444,0,0,.79445],931:[0,.69444,0,0,.79445],933:[0,.69444,0,0,.85556],934:[0,.69444,0,0,.79445],936:[0,.69444,0,0,.85556],937:[0,.69444,0,0,.79445],8211:[0,.45833,.03056,0,.55],8212:[0,.45833,.03056,0,1.10001],8216:[0,.69444,0,0,.30556],8217:[0,.69444,0,0,.30556],8220:[0,.69444,0,0,.55834],8221:[0,.69444,0,0,.55834]},\"SansSerif-Italic\":{32:[0,0,0,0,.25],33:[0,.69444,.05733,0,.31945],34:[0,.69444,.00316,0,.5],35:[.19444,.69444,.05087,0,.83334],36:[.05556,.75,.11156,0,.5],37:[.05556,.75,.03126,0,.83334],38:[0,.69444,.03058,0,.75834],39:[0,.69444,.07816,0,.27778],40:[.25,.75,.13164,0,.38889],41:[.25,.75,.02536,0,.38889],42:[0,.75,.11775,0,.5],43:[.08333,.58333,.02536,0,.77778],44:[.125,.08333,0,0,.27778],45:[0,.44444,.01946,0,.33333],46:[0,.08333,0,0,.27778],47:[.25,.75,.13164,0,.5],48:[0,.65556,.11156,0,.5],49:[0,.65556,.11156,0,.5],50:[0,.65556,.11156,0,.5],51:[0,.65556,.11156,0,.5],52:[0,.65556,.11156,0,.5],53:[0,.65556,.11156,0,.5],54:[0,.65556,.11156,0,.5],55:[0,.65556,.11156,0,.5],56:[0,.65556,.11156,0,.5],57:[0,.65556,.11156,0,.5],58:[0,.44444,.02502,0,.27778],59:[.125,.44444,.02502,0,.27778],61:[-.13,.37,.05087,0,.77778],63:[0,.69444,.11809,0,.47222],64:[0,.69444,.07555,0,.66667],65:[0,.69444,0,0,.66667],66:[0,.69444,.08293,0,.66667],67:[0,.69444,.11983,0,.63889],68:[0,.69444,.07555,0,.72223],
69:[0,.69444,.11983,0,.59722],70:[0,.69444,.13372,0,.56945],71:[0,.69444,.11983,0,.66667],72:[0,.69444,.08094,0,.70834],73:[0,.69444,.13372,0,.27778],74:[0,.69444,.08094,0,.47222],75:[0,.69444,.11983,0,.69445],76:[0,.69444,0,0,.54167],77:[0,.69444,.08094,0,.875],78:[0,.69444,.08094,0,.70834],79:[0,.69444,.07555,0,.73611],80:[0,.69444,.08293,0,.63889],81:[.125,.69444,.07555,0,.73611],82:[0,.69444,.08293,0,.64584],83:[0,.69444,.09205,0,.55556],84:[0,.69444,.13372,0,.68056],85:[0,.69444,.08094,0,.6875],86:[0,.69444,.1615,0,.66667],87:[0,.69444,.1615,0,.94445],88:[0,.69444,.13372,0,.66667],89:[0,.69444,.17261,0,.66667],90:[0,.69444,.11983,0,.61111],91:[.25,.75,.15942,0,.28889],93:[.25,.75,.08719,0,.28889],94:[0,.69444,.0799,0,.5],95:[.35,.09444,.08616,0,.5],97:[0,.44444,.00981,0,.48056],98:[0,.69444,.03057,0,.51667],99:[0,.44444,.08336,0,.44445],100:[0,.69444,.09483,0,.51667],101:[0,.44444,.06778,0,.44445],102:[0,.69444,.21705,0,.30556],103:[.19444,.44444,.10836,0,.5],104:[0,.69444,.01778,0,.51667],105:[0,.67937,.09718,0,.23889],106:[.19444,.67937,.09162,0,.26667],107:[0,.69444,.08336,0,.48889],108:[0,.69444,.09483,0,.23889],109:[0,.44444,.01778,0,.79445],110:[0,.44444,.01778,0,.51667],111:[0,.44444,.06613,0,.5],112:[.19444,.44444,.0389,0,.51667],113:[.19444,.44444,.04169,0,.51667],114:[0,.44444,.10836,0,.34167],115:[0,.44444,.0778,0,.38333],116:[0,.57143,.07225,0,.36111],117:[0,.44444,.04169,0,.51667],118:[0,.44444,.10836,0,.46111],119:[0,.44444,.10836,0,.68334],120:[0,.44444,.09169,0,.46111],121:[.19444,.44444,.10836,0,.46111],122:[0,.44444,.08752,0,.43472],126:[.35,.32659,.08826,0,.5],160:[0,0,0,0,.25],168:[0,.67937,.06385,0,.5],176:[0,.69444,0,0,.73752],184:[.17014,0,0,0,.44445],305:[0,.44444,.04169,0,.23889],567:[.19444,.44444,.04169,0,.26667],710:[0,.69444,.0799,0,.5],711:[0,.63194,.08432,0,.5],713:[0,.60889,.08776,0,.5],714:[0,.69444,.09205,0,.5],715:[0,.69444,0,0,.5],728:[0,.69444,.09483,0,.5],729:[0,.67937,.07774,0,.27778],730:[0,.69444,0,0,.73752],732:[0,.6765
9,.08826,0,.5],733:[0,.69444,.09205,0,.5],915:[0,.69444,.13372,0,.54167],916:[0,.69444,0,0,.83334],920:[0,.69444,.07555,0,.77778],923:[0,.69444,0,0,.61111],926:[0,.69444,.12816,0,.66667],928:[0,.69444,.08094,0,.70834],931:[0,.69444,.11983,0,.72222],933:[0,.69444,.09031,0,.77778],934:[0,.69444,.04603,0,.72222],936:[0,.69444,.09031,0,.77778],937:[0,.69444,.08293,0,.72222],8211:[0,.44444,.08616,0,.5],8212:[0,.44444,.08616,0,1],8216:[0,.69444,.07816,0,.27778],8217:[0,.69444,.07816,0,.27778],8220:[0,.69444,.14205,0,.5],8221:[0,.69444,.00316,0,.5]},\"SansSerif-Regular\":{32:[0,0,0,0,.25],33:[0,.69444,0,0,.31945],34:[0,.69444,0,0,.5],35:[.19444,.69444,0,0,.83334],36:[.05556,.75,0,0,.5],37:[.05556,.75,0,0,.83334],38:[0,.69444,0,0,.75834],39:[0,.69444,0,0,.27778],40:[.25,.75,0,0,.38889],41:[.25,.75,0,0,.38889],42:[0,.75,0,0,.5],43:[.08333,.58333,0,0,.77778],44:[.125,.08333,0,0,.27778],45:[0,.44444,0,0,.33333],46:[0,.08333,0,0,.27778],47:[.25,.75,0,0,.5],48:[0,.65556,0,0,.5],49:[0,.65556,0,0,.5],50:[0,.65556,0,0,.5],51:[0,.65556,0,0,.5],52:[0,.65556,0,0,.5],53:[0,.65556,0,0,.5],54:[0,.65556,0,0,.5],55:[0,.65556,0,0,.5],56:[0,.65556,0,0,.5],57:[0,.65556,0,0,.5],58:[0,.44444,0,0,.27778],59:[.125,.44444,0,0,.27778],61:[-.13,.37,0,0,.77778],63:[0,.69444,0,0,.47222],64:[0,.69444,0,0,.66667],65:[0,.69444,0,0,.66667],66:[0,.69444,0,0,.66667],67:[0,.69444,0,0,.63889],68:[0,.69444,0,0,.72223],69:[0,.69444,0,0,.59722],70:[0,.69444,0,0,.56945],71:[0,.69444,0,0,.66667],72:[0,.69444,0,0,.70834],73:[0,.69444,0,0,.27778],74:[0,.69444,0,0,.47222],75:[0,.69444,0,0,.69445],76:[0,.69444,0,0,.54167],77:[0,.69444,0,0,.875],78:[0,.69444,0,0,.70834],79:[0,.69444,0,0,.73611],80:[0,.69444,0,0,.63889],81:[.125,.69444,0,0,.73611],82:[0,.69444,0,0,.64584],83:[0,.69444,0,0,.55556],84:[0,.69444,0,0,.68056],85:[0,.69444,0,0,.6875],86:[0,.69444,.01389,0,.66667],87:[0,.69444,.01389,0,.94445],88:[0,.69444,0,0,.66667],89:[0,.69444,.025,0,.66667],90:[0,.69444,0,0,.61111],91:[.25,.75,0,0,.28889],93:[.25,.75,0,0,
.28889],94:[0,.69444,0,0,.5],95:[.35,.09444,.02778,0,.5],97:[0,.44444,0,0,.48056],98:[0,.69444,0,0,.51667],99:[0,.44444,0,0,.44445],100:[0,.69444,0,0,.51667],101:[0,.44444,0,0,.44445],102:[0,.69444,.06944,0,.30556],103:[.19444,.44444,.01389,0,.5],104:[0,.69444,0,0,.51667],105:[0,.67937,0,0,.23889],106:[.19444,.67937,0,0,.26667],107:[0,.69444,0,0,.48889],108:[0,.69444,0,0,.23889],109:[0,.44444,0,0,.79445],110:[0,.44444,0,0,.51667],111:[0,.44444,0,0,.5],112:[.19444,.44444,0,0,.51667],113:[.19444,.44444,0,0,.51667],114:[0,.44444,.01389,0,.34167],115:[0,.44444,0,0,.38333],116:[0,.57143,0,0,.36111],117:[0,.44444,0,0,.51667],118:[0,.44444,.01389,0,.46111],119:[0,.44444,.01389,0,.68334],120:[0,.44444,0,0,.46111],121:[.19444,.44444,.01389,0,.46111],122:[0,.44444,0,0,.43472],126:[.35,.32659,0,0,.5],160:[0,0,0,0,.25],168:[0,.67937,0,0,.5],176:[0,.69444,0,0,.66667],184:[.17014,0,0,0,.44445],305:[0,.44444,0,0,.23889],567:[.19444,.44444,0,0,.26667],710:[0,.69444,0,0,.5],711:[0,.63194,0,0,.5],713:[0,.60889,0,0,.5],714:[0,.69444,0,0,.5],715:[0,.69444,0,0,.5],728:[0,.69444,0,0,.5],729:[0,.67937,0,0,.27778],730:[0,.69444,0,0,.66667],732:[0,.67659,0,0,.5],733:[0,.69444,0,0,.5],915:[0,.69444,0,0,.54167],916:[0,.69444,0,0,.83334],920:[0,.69444,0,0,.77778],923:[0,.69444,0,0,.61111],926:[0,.69444,0,0,.66667],928:[0,.69444,0,0,.70834],931:[0,.69444,0,0,.72222],933:[0,.69444,0,0,.77778],934:[0,.69444,0,0,.72222],936:[0,.69444,0,0,.77778],937:[0,.69444,0,0,.72222],8211:[0,.44444,.02778,0,.5],8212:[0,.44444,.02778,0,1],8216:[0,.69444,0,0,.27778],8217:[0,.69444,0,0,.27778],8220:[0,.69444,0,0,.5],8221:[0,.69444,0,0,.5]},\"Script-Regular\":{32:[0,0,0,0,.25],65:[0,.7,.22925,0,.80253],66:[0,.7,.04087,0,.90757],67:[0,.7,.1689,0,.66619],68:[0,.7,.09371,0,.77443],69:[0,.7,.18583,0,.56162],70:[0,.7,.13634,0,.89544],71:[0,.7,.17322,0,.60961],72:[0,.7,.29694,0,.96919],73:[0,.7,.19189,0,.80907],74:[.27778,.7,.19189,0,1.05159],75:[0,.7,.31259,0,.91364],76:[0,.7,.19189,0,.87373],77:[0,.7,.15981,0,1.08031]
,78:[0,.7,.3525,0,.9015],79:[0,.7,.08078,0,.73787],80:[0,.7,.08078,0,1.01262],81:[0,.7,.03305,0,.88282],82:[0,.7,.06259,0,.85],83:[0,.7,.19189,0,.86767],84:[0,.7,.29087,0,.74697],85:[0,.7,.25815,0,.79996],86:[0,.7,.27523,0,.62204],87:[0,.7,.27523,0,.80532],88:[0,.7,.26006,0,.94445],89:[0,.7,.2939,0,.70961],90:[0,.7,.24037,0,.8212],160:[0,0,0,0,.25]},\"Size1-Regular\":{32:[0,0,0,0,.25],40:[.35001,.85,0,0,.45834],41:[.35001,.85,0,0,.45834],47:[.35001,.85,0,0,.57778],91:[.35001,.85,0,0,.41667],92:[.35001,.85,0,0,.57778],93:[.35001,.85,0,0,.41667],123:[.35001,.85,0,0,.58334],125:[.35001,.85,0,0,.58334],160:[0,0,0,0,.25],710:[0,.72222,0,0,.55556],732:[0,.72222,0,0,.55556],770:[0,.72222,0,0,.55556],771:[0,.72222,0,0,.55556],8214:[-99e-5,.601,0,0,.77778],8593:[1e-5,.6,0,0,.66667],8595:[1e-5,.6,0,0,.66667],8657:[1e-5,.6,0,0,.77778],8659:[1e-5,.6,0,0,.77778],8719:[.25001,.75,0,0,.94445],8720:[.25001,.75,0,0,.94445],8721:[.25001,.75,0,0,1.05556],8730:[.35001,.85,0,0,1],8739:[-.00599,.606,0,0,.33333],8741:[-.00599,.606,0,0,.55556],8747:[.30612,.805,.19445,0,.47222],8748:[.306,.805,.19445,0,.47222],8749:[.306,.805,.19445,0,.47222],8750:[.30612,.805,.19445,0,.47222],8896:[.25001,.75,0,0,.83334],8897:[.25001,.75,0,0,.83334],8898:[.25001,.75,0,0,.83334],8899:[.25001,.75,0,0,.83334],8968:[.35001,.85,0,0,.47222],8969:[.35001,.85,0,0,.47222],8970:[.35001,.85,0,0,.47222],8971:[.35001,.85,0,0,.47222],9168:[-99e-5,.601,0,0,.66667],10216:[.35001,.85,0,0,.47222],10217:[.35001,.85,0,0,.47222],10752:[.25001,.75,0,0,1.11111],10753:[.25001,.75,0,0,1.11111],10754:[.25001,.75,0,0,1.11111],10756:[.25001,.75,0,0,.83334],10758:[.25001,.75,0,0,.83334]},\"Size2-Regular\":{32:[0,0,0,0,.25],40:[.65002,1.15,0,0,.59722],41:[.65002,1.15,0,0,.59722],47:[.65002,1.15,0,0,.81111],91:[.65002,1.15,0,0,.47222],92:[.65002,1.15,0,0,.81111],93:[.65002,1.15,0,0,.47222],123:[.65002,1.15,0,0,.66667],125:[.65002,1.15,0,0,.66667],160:[0,0,0,0,.25],710:[0,.75,0,0,1],732:[0,.75,0,0,1],770:[0,.75,0,0,1],771:[0,.75,0,0,1],
8719:[.55001,1.05,0,0,1.27778],8720:[.55001,1.05,0,0,1.27778],8721:[.55001,1.05,0,0,1.44445],8730:[.65002,1.15,0,0,1],8747:[.86225,1.36,.44445,0,.55556],8748:[.862,1.36,.44445,0,.55556],8749:[.862,1.36,.44445,0,.55556],8750:[.86225,1.36,.44445,0,.55556],8896:[.55001,1.05,0,0,1.11111],8897:[.55001,1.05,0,0,1.11111],8898:[.55001,1.05,0,0,1.11111],8899:[.55001,1.05,0,0,1.11111],8968:[.65002,1.15,0,0,.52778],8969:[.65002,1.15,0,0,.52778],8970:[.65002,1.15,0,0,.52778],8971:[.65002,1.15,0,0,.52778],10216:[.65002,1.15,0,0,.61111],10217:[.65002,1.15,0,0,.61111],10752:[.55001,1.05,0,0,1.51112],10753:[.55001,1.05,0,0,1.51112],10754:[.55001,1.05,0,0,1.51112],10756:[.55001,1.05,0,0,1.11111],10758:[.55001,1.05,0,0,1.11111]},\"Size3-Regular\":{32:[0,0,0,0,.25],40:[.95003,1.45,0,0,.73611],41:[.95003,1.45,0,0,.73611],47:[.95003,1.45,0,0,1.04445],91:[.95003,1.45,0,0,.52778],92:[.95003,1.45,0,0,1.04445],93:[.95003,1.45,0,0,.52778],123:[.95003,1.45,0,0,.75],125:[.95003,1.45,0,0,.75],160:[0,0,0,0,.25],710:[0,.75,0,0,1.44445],732:[0,.75,0,0,1.44445],770:[0,.75,0,0,1.44445],771:[0,.75,0,0,1.44445],8730:[.95003,1.45,0,0,1],8968:[.95003,1.45,0,0,.58334],8969:[.95003,1.45,0,0,.58334],8970:[.95003,1.45,0,0,.58334],8971:[.95003,1.45,0,0,.58334],10216:[.95003,1.45,0,0,.75],10217:[.95003,1.45,0,0,.75]},\"Size4-Regular\":{32:[0,0,0,0,.25],40:[1.25003,1.75,0,0,.79167],41:[1.25003,1.75,0,0,.79167],47:[1.25003,1.75,0,0,1.27778],91:[1.25003,1.75,0,0,.58334],92:[1.25003,1.75,0,0,1.27778],93:[1.25003,1.75,0,0,.58334],123:[1.25003,1.75,0,0,.80556],125:[1.25003,1.75,0,0,.80556],160:[0,0,0,0,.25],710:[0,.825,0,0,1.8889],732:[0,.825,0,0,1.8889],770:[0,.825,0,0,1.8889],771:[0,.825,0,0,1.8889],8730:[1.25003,1.75,0,0,1],8968:[1.25003,1.75,0,0,.63889],8969:[1.25003,1.75,0,0,.63889],8970:[1.25003,1.75,0,0,.63889],8971:[1.25003,1.75,0,0,.63889],9115:[.64502,1.155,0,0,.875],9116:[1e-5,.6,0,0,.875],9117:[.64502,1.155,0,0,.875],9118:[.64502,1.155,0,0,.875],9119:[1e-5,.6,0,0,.875],9120:[.64502,1.155,0,0,.875],9121:
[.64502,1.155,0,0,.66667],9122:[-99e-5,.601,0,0,.66667],9123:[.64502,1.155,0,0,.66667],9124:[.64502,1.155,0,0,.66667],9125:[-99e-5,.601,0,0,.66667],9126:[.64502,1.155,0,0,.66667],9127:[1e-5,.9,0,0,.88889],9128:[.65002,1.15,0,0,.88889],9129:[.90001,0,0,0,.88889],9130:[0,.3,0,0,.88889],9131:[1e-5,.9,0,0,.88889],9132:[.65002,1.15,0,0,.88889],9133:[.90001,0,0,0,.88889],9143:[.88502,.915,0,0,1.05556],10216:[1.25003,1.75,0,0,.80556],10217:[1.25003,1.75,0,0,.80556],57344:[-.00499,.605,0,0,1.05556],57345:[-.00499,.605,0,0,1.05556],57680:[0,.12,0,0,.45],57681:[0,.12,0,0,.45],57682:[0,.12,0,0,.45],57683:[0,.12,0,0,.45]},\"Typewriter-Regular\":{32:[0,0,0,0,.525],33:[0,.61111,0,0,.525],34:[0,.61111,0,0,.525],35:[0,.61111,0,0,.525],36:[.08333,.69444,0,0,.525],37:[.08333,.69444,0,0,.525],38:[0,.61111,0,0,.525],39:[0,.61111,0,0,.525],40:[.08333,.69444,0,0,.525],41:[.08333,.69444,0,0,.525],42:[0,.52083,0,0,.525],43:[-.08056,.53055,0,0,.525],44:[.13889,.125,0,0,.525],45:[-.08056,.53055,0,0,.525],46:[0,.125,0,0,.525],47:[.08333,.69444,0,0,.525],48:[0,.61111,0,0,.525],49:[0,.61111,0,0,.525],50:[0,.61111,0,0,.525],51:[0,.61111,0,0,.525],52:[0,.61111,0,0,.525],53:[0,.61111,0,0,.525],54:[0,.61111,0,0,.525],55:[0,.61111,0,0,.525],56:[0,.61111,0,0,.525],57:[0,.61111,0,0,.525],58:[0,.43056,0,0,.525],59:[.13889,.43056,0,0,.525],60:[-.05556,.55556,0,0,.525],61:[-.19549,.41562,0,0,.525],62:[-.05556,.55556,0,0,.525],63:[0,.61111,0,0,.525],64:[0,.61111,0,0,.525],65:[0,.61111,0,0,.525],66:[0,.61111,0,0,.525],67:[0,.61111,0,0,.525],68:[0,.61111,0,0,.525],69:[0,.61111,0,0,.525],70:[0,.61111,0,0,.525],71:[0,.61111,0,0,.525],72:[0,.61111,0,0,.525],73:[0,.61111,0,0,.525],74:[0,.61111,0,0,.525],75:[0,.61111,0,0,.525],76:[0,.61111,0,0,.525],77:[0,.61111,0,0,.525],78:[0,.61111,0,0,.525],79:[0,.61111,0,0,.525],80:[0,.61111,0,0,.525],81:[.13889,.61111,0,0,.525],82:[0,.61111,0,0,.525],83:[0,.61111,0,0,.525],84:[0,.61111,0,0,.525],85:[0,.61111,0,0,.525],86:[0,.61111,0,0,.525],87:[0,.61111,0,0,.525],88:[0,.61
111,0,0,.525],89:[0,.61111,0,0,.525],90:[0,.61111,0,0,.525],91:[.08333,.69444,0,0,.525],92:[.08333,.69444,0,0,.525],93:[.08333,.69444,0,0,.525],94:[0,.61111,0,0,.525],95:[.09514,0,0,0,.525],96:[0,.61111,0,0,.525],97:[0,.43056,0,0,.525],98:[0,.61111,0,0,.525],99:[0,.43056,0,0,.525],100:[0,.61111,0,0,.525],101:[0,.43056,0,0,.525],102:[0,.61111,0,0,.525],103:[.22222,.43056,0,0,.525],104:[0,.61111,0,0,.525],105:[0,.61111,0,0,.525],106:[.22222,.61111,0,0,.525],107:[0,.61111,0,0,.525],108:[0,.61111,0,0,.525],109:[0,.43056,0,0,.525],110:[0,.43056,0,0,.525],111:[0,.43056,0,0,.525],112:[.22222,.43056,0,0,.525],113:[.22222,.43056,0,0,.525],114:[0,.43056,0,0,.525],115:[0,.43056,0,0,.525],116:[0,.55358,0,0,.525],117:[0,.43056,0,0,.525],118:[0,.43056,0,0,.525],119:[0,.43056,0,0,.525],120:[0,.43056,0,0,.525],121:[.22222,.43056,0,0,.525],122:[0,.43056,0,0,.525],123:[.08333,.69444,0,0,.525],124:[.08333,.69444,0,0,.525],125:[.08333,.69444,0,0,.525],126:[0,.61111,0,0,.525],127:[0,.61111,0,0,.525],160:[0,0,0,0,.525],176:[0,.61111,0,0,.525],184:[.19445,0,0,0,.525],305:[0,.43056,0,0,.525],567:[.22222,.43056,0,0,.525],711:[0,.56597,0,0,.525],713:[0,.56555,0,0,.525],714:[0,.61111,0,0,.525],715:[0,.61111,0,0,.525],728:[0,.61111,0,0,.525],730:[0,.61111,0,0,.525],770:[0,.61111,0,0,.525],771:[0,.61111,0,0,.525],776:[0,.61111,0,0,.525],915:[0,.61111,0,0,.525],916:[0,.61111,0,0,.525],920:[0,.61111,0,0,.525],923:[0,.61111,0,0,.525],926:[0,.61111,0,0,.525],928:[0,.61111,0,0,.525],931:[0,.61111,0,0,.525],933:[0,.61111,0,0,.525],934:[0,.61111,0,0,.525],936:[0,.61111,0,0,.525],937:[0,.61111,0,0,.525],8216:[0,.61111,0,0,.525],8217:[0,.61111,0,0,.525],8242:[0,.61111,0,0,.525],9251:[.11111,.21944,0,0,.525]}},go={slant:[.25,.25,.25],space:[0,0,0],stretch:[0,0,0],shrink:[0,0,0],xHeight:[.431,.431,.431],quad:[1,1.171,1.472],extraSpace:[0,0,0],num1:[.677,.732,.925],num2:[.394,.384,.387],num3:[.444,.471,.504],denom1:[.686,.752,1.025],denom2:[.345,.344,.532],sup1:[.413,.503,.504],sup2:[.363,.431,.404],sup3:[
.289,.286,.294],sub1:[.15,.143,.2],sub2:[.247,.286,.4],supDrop:[.386,.353,.494],subDrop:[.05,.071,.1],delim1:[2.39,1.7,1.98],delim2:[1.01,1.157,1.42],axisHeight:[.25,.25,.25],defaultRuleThickness:[.04,.049,.049],bigOpSpacing1:[.111,.111,.111],bigOpSpacing2:[.166,.166,.166],bigOpSpacing3:[.2,.2,.2],bigOpSpacing4:[.6,.611,.611],bigOpSpacing5:[.1,.143,.143],sqrtRuleThickness:[.04,.04,.04],ptPerEm:[10,10,10],doubleRuleSep:[.2,.2,.2],arrayRuleWidth:[.04,.04,.04],fboxsep:[.3,.3,.3],fboxrule:[.04,.04,.04]},qd={\\u00C5:\"A\",\\u00D0:\"D\",\\u00DE:\"o\",\\u00E5:\"a\",\\u00F0:\"d\",\\u00FE:\"o\",\\u0410:\"A\",\\u0411:\"B\",\\u0412:\"B\",\\u0413:\"F\",\\u0414:\"A\",\\u0415:\"E\",\\u0416:\"K\",\\u0417:\"3\",\\u0418:\"N\",\\u0419:\"N\",\\u041A:\"K\",\\u041B:\"N\",\\u041C:\"M\",\\u041D:\"H\",\\u041E:\"O\",\\u041F:\"N\",\\u0420:\"P\",\\u0421:\"C\",\\u0422:\"T\",\\u0423:\"y\",\\u0424:\"O\",\\u0425:\"X\",\\u0426:\"U\",\\u0427:\"h\",\\u0428:\"W\",\\u0429:\"W\",\\u042A:\"B\",\\u042B:\"X\",\\u042C:\"B\",\\u042D:\"3\",\\u042E:\"X\",\\u042F:\"R\",\\u0430:\"a\",\\u0431:\"b\",\\u0432:\"a\",\\u0433:\"r\",\\u0434:\"y\",\\u0435:\"e\",\\u0436:\"m\",\\u0437:\"e\",\\u0438:\"n\",\\u0439:\"n\",\\u043A:\"n\",\\u043B:\"n\",\\u043C:\"m\",\\u043D:\"n\",\\u043E:\"o\",\\u043F:\"n\",\\u0440:\"p\",\\u0441:\"c\",\\u0442:\"o\",\\u0443:\"y\",\\u0444:\"b\",\\u0445:\"x\",\\u0446:\"n\",\\u0447:\"n\",\\u0448:\"w\",\\u0449:\"w\",\\u044A:\"a\",\\u044B:\"m\",\\u044C:\"a\",\\u044D:\"e\",\\u044E:\"m\",\\u044F:\"r\"};function l3(i,e){Jt[i]=e}function ah(i,e,t){if(!Jt[e])throw new Error(\"Font metrics not found for font: \"+e+\".\");var r=i.charCodeAt(0),n=Jt[e][r];if(!n&&i[0]in qd&&(r=qd[i[0]].charCodeAt(0),n=Jt[e][r]),!n&&t===\"text\"&&gm(r)&&(n=Jt[e][77]),n)return{depth:n[0],height:n[1],italic:n[2],skew:n[3],width:n[4]}}var L0={};function a3(i){var e;if(i>=5?e=0:i>=3?e=1:e=2,!L0[e]){var t=L0[e]={cssEmPerMu:go.quad[e]/18};for(var r in go)go.hasOwnProperty(r)&&(t[r]=go[r][e])}return L0[e]}var 
h3=[[1,1,1],[2,1,1],[3,1,1],[4,2,1],[5,2,1],[6,3,1],[7,4,2],[8,6,3],[9,7,6],[10,8,7],[11,10,9]],Wd=[.5,.6,.7,.8,.9,1,1.2,1.44,1.728,2.074,2.488],Vd=function(e,t){return t.size<2?e:h3[e-1][t.size-1]},Eo=class i{constructor(e){this.style=void 0,this.color=void 0,this.size=void 0,this.textSize=void 0,this.phantom=void 0,this.font=void 0,this.fontFamily=void 0,this.fontWeight=void 0,this.fontShape=void 0,this.sizeMultiplier=void 0,this.maxSize=void 0,this.minRuleThickness=void 0,this._fontMetrics=void 0,this.style=e.style,this.color=e.color,this.size=e.size||i.BASESIZE,this.textSize=e.textSize||this.size,this.phantom=!!e.phantom,this.font=e.font||\"\",this.fontFamily=e.fontFamily||\"\",this.fontWeight=e.fontWeight||\"\",this.fontShape=e.fontShape||\"\",this.sizeMultiplier=Wd[this.size-1],this.maxSize=e.maxSize,this.minRuleThickness=e.minRuleThickness,this._fontMetrics=void 0}extend(e){var t={style:this.style,size:this.size,textSize:this.textSize,color:this.color,phantom:this.phantom,font:this.font,fontFamily:this.fontFamily,fontWeight:this.fontWeight,fontShape:this.fontShape,maxSize:this.maxSize,minRuleThickness:this.minRuleThickness};for(var r in e)e.hasOwnProperty(r)&&(t[r]=e[r]);return new i(t)}havingStyle(e){return this.style===e?this:this.extend({style:e,size:Vd(this.textSize,e)})}havingCrampedStyle(){return this.havingStyle(this.style.cramp())}havingSize(e){return this.size===e&&this.textSize===e?this:this.extend({style:this.style.text(),size:e,textSize:e,sizeMultiplier:Wd[e-1]})}havingBaseStyle(e){e=e||this.style.text();var t=Vd(i.BASESIZE,e);return this.size===t&&this.textSize===i.BASESIZE&&this.style===e?this:this.extend({style:e,size:t})}havingBaseSizing(){var e;switch(this.style.id){case 4:case 5:e=3;break;case 6:case 7:e=1;break;default:e=6}return this.extend({style:this.style.text(),size:e})}withColor(e){return this.extend({color:e})}withPhantom(){return this.extend({phantom:!0})}withFont(e){return this.extend({font:e})}withTextFontFamily(e){return 
this.extend({fontFamily:e,font:\"\"})}withTextFontWeight(e){return this.extend({fontWeight:e,font:\"\"})}withTextFontShape(e){return this.extend({fontShape:e,font:\"\"})}sizingClasses(e){return e.size!==this.size?[\"sizing\",\"reset-size\"+e.size,\"size\"+this.size]:[]}baseSizingClasses(){return this.size!==i.BASESIZE?[\"sizing\",\"reset-size\"+this.size,\"size\"+i.BASESIZE]:[]}fontMetrics(){return this._fontMetrics||(this._fontMetrics=a3(this.size)),this._fontMetrics}getColor(){return this.phantom?\"transparent\":this.color}};Eo.BASESIZE=6;var Y0={pt:1,mm:7227/2540,cm:7227/254,in:72.27,bp:803/800,pc:12,dd:1238/1157,cc:14856/1157,nd:685/642,nc:1370/107,sp:1/65536,px:803/800},c3={ex:!0,em:!0,mu:!0},vm=function(e){return typeof e!=\"string\"&&(e=e.unit),e in Y0||e in c3||e===\"ex\"},we=function(e,t){var r;if(e.unit in Y0)r=Y0[e.unit]/t.fontMetrics().ptPerEm/t.sizeMultiplier;else if(e.unit===\"mu\")r=t.fontMetrics().cssEmPerMu;else{var n;if(t.style.isTight()?n=t.havingStyle(t.style.text()):n=t,e.unit===\"ex\")r=n.fontMetrics().xHeight;else if(e.unit===\"em\")r=n.fontMetrics().quad;else throw new I(\"Invalid unit: '\"+e.unit+\"'\");n!==t&&(r*=n.sizeMultiplier/t.sizeMultiplier)}return Math.min(e.number*r,t.maxSize)},N=function(e){return+e.toFixed(4)+\"em\"},Br=function(e){return e.filter(t=>t).join(\" \")},bm=function(e,t,r){if(this.classes=e||[],this.attributes={},this.height=0,this.depth=0,this.maxFontSize=0,this.style=r||{},t){t.style.isTight()&&this.classes.push(\"mtight\");var n=t.getColor();n&&(this.style.color=n)}},ym=function(e){var t=document.createElement(e);t.className=Br(this.classes);for(var r in this.style)this.style.hasOwnProperty(r)&&(t.style[r]=this.style[r]);for(var n in this.attributes)this.attributes.hasOwnProperty(n)&&t.setAttribute(n,this.attributes[n]);for(var s=0;s<this.children.length;s++)t.appendChild(this.children[s].toNode());return t},u3=/[\\s\"'>/=\\x00-\\x1f]/,xm=function(e){var t=\"<\"+e;this.classes.length&&(t+=' 
class=\"'+Ke(Br(this.classes))+'\"');var r=\"\";for(var n in this.style)this.style.hasOwnProperty(n)&&(r+=oh(n)+\":\"+this.style[n]+\";\");r&&(t+=' style=\"'+Ke(r)+'\"');for(var s in this.attributes)if(this.attributes.hasOwnProperty(s)){if(u3.test(s))throw new I(\"Invalid attribute name '\"+s+\"'\");t+=\" \"+s+'=\"'+Ke(this.attributes[s])+'\"'}t+=\">\";for(var o=0;o<this.children.length;o++)t+=this.children[o].toMarkup();return t+=\"</\"+e+\">\",t},ti=class{constructor(e,t,r,n){this.children=void 0,this.attributes=void 0,this.classes=void 0,this.height=void 0,this.depth=void 0,this.width=void 0,this.maxFontSize=void 0,this.style=void 0,bm.call(this,e,r,n),this.children=t||[]}setAttribute(e,t){this.attributes[e]=t}hasClass(e){return this.classes.includes(e)}toNode(){return ym.call(this,\"span\")}toMarkup(){return xm.call(this,\"span\")}},On=class{constructor(e,t,r,n){this.children=void 0,this.attributes=void 0,this.classes=void 0,this.height=void 0,this.depth=void 0,this.maxFontSize=void 0,this.style=void 0,bm.call(this,t,n),this.children=r||[],this.setAttribute(\"href\",e)}setAttribute(e,t){this.attributes[e]=t}hasClass(e){return this.classes.includes(e)}toNode(){return ym.call(this,\"a\")}toMarkup(){return xm.call(this,\"a\")}},X0=class{constructor(e,t,r){this.src=void 0,this.alt=void 0,this.classes=void 0,this.height=void 0,this.depth=void 0,this.maxFontSize=void 0,this.style=void 0,this.alt=t,this.src=e,this.classes=[\"mord\"],this.style=r}hasClass(e){return this.classes.includes(e)}toNode(){var e=document.createElement(\"img\");e.src=this.src,e.alt=this.alt,e.className=\"mord\";for(var t in this.style)this.style.hasOwnProperty(t)&&(e.style[t]=this.style[t]);return e}toMarkup(){var e='<img src=\"'+Ke(this.src)+'\"'+(' alt=\"'+Ke(this.alt)+'\"'),t=\"\";for(var r in this.style)this.style.hasOwnProperty(r)&&(t+=oh(r)+\":\"+this.style[r]+\";\");return t&&(e+=' 
style=\"'+Ke(t)+'\"'),e+=\"'/>\",e}},f3={\\u00EE:\"\\u0131\\u0302\",\\u00EF:\"\\u0131\\u0308\",\\u00ED:\"\\u0131\\u0301\",\\u00EC:\"\\u0131\\u0300\"},at=class{constructor(e,t,r,n,s,o,l,a){this.text=void 0,this.height=void 0,this.depth=void 0,this.italic=void 0,this.skew=void 0,this.width=void 0,this.maxFontSize=void 0,this.classes=void 0,this.style=void 0,this.text=e,this.height=t||0,this.depth=r||0,this.italic=n||0,this.skew=s||0,this.width=o||0,this.classes=l||[],this.style=a||{},this.maxFontSize=0;var h=_5(this.text.charCodeAt(0));h&&this.classes.push(h+\"_fallback\"),/[îïíì]/.test(this.text)&&(this.text=f3[this.text])}hasClass(e){return this.classes.includes(e)}toNode(){var e=document.createTextNode(this.text),t=null;this.italic>0&&(t=document.createElement(\"span\"),t.style.marginRight=N(this.italic)),this.classes.length>0&&(t=t||document.createElement(\"span\"),t.className=Br(this.classes));for(var r in this.style)this.style.hasOwnProperty(r)&&(t=t||document.createElement(\"span\"),t.style[r]=this.style[r]);return t?(t.appendChild(e),t):e}toMarkup(){var e=!1,t=\"<span\";this.classes.length&&(e=!0,t+=' class=\"',t+=Ke(Br(this.classes)),t+='\"');var r=\"\";this.italic>0&&(r+=\"margin-right:\"+this.italic+\"em;\");for(var n in this.style)this.style.hasOwnProperty(n)&&(r+=oh(n)+\":\"+this.style[n]+\";\");r&&(e=!0,t+=' style=\"'+Ke(r)+'\"');var s=Ke(this.text);return e?(t+=\">\",t+=s,t+=\"</span>\",t):s}},qt=class{constructor(e,t){this.children=void 0,this.attributes=void 0,this.children=e||[],this.attributes=t||{}}toNode(){var e=\"http://www.w3.org/2000/svg\",t=document.createElementNS(e,\"svg\");for(var r in this.attributes)Object.prototype.hasOwnProperty.call(this.attributes,r)&&t.setAttribute(r,this.attributes[r]);for(var n=0;n<this.children.length;n++)t.appendChild(this.children[n].toNode());return t}toMarkup(){var e='<svg xmlns=\"http://www.w3.org/2000/svg\"';for(var t in this.attributes)Object.prototype.hasOwnProperty.call(this.attributes,t)&&(e+=\" 
\"+t+'=\"'+Ke(this.attributes[t])+'\"');e+=\">\";for(var r=0;r<this.children.length;r++)e+=this.children[r].toMarkup();return e+=\"</svg>\",e}},Zt=class{constructor(e,t){this.pathName=void 0,this.alternate=void 0,this.pathName=e,this.alternate=t}toNode(){var e=\"http://www.w3.org/2000/svg\",t=document.createElementNS(e,\"path\");return this.alternate?t.setAttribute(\"d\",this.alternate):t.setAttribute(\"d\",Hd[this.pathName]),t}toMarkup(){return this.alternate?'<path d=\"'+Ke(this.alternate)+'\"/>':'<path d=\"'+Ke(Hd[this.pathName])+'\"/>'}},zn=class{constructor(e){this.attributes=void 0,this.attributes=e||{}}toNode(){var e=\"http://www.w3.org/2000/svg\",t=document.createElementNS(e,\"line\");for(var r in this.attributes)Object.prototype.hasOwnProperty.call(this.attributes,r)&&t.setAttribute(r,this.attributes[r]);return t}toMarkup(){var e=\"<line\";for(var t in this.attributes)Object.prototype.hasOwnProperty.call(this.attributes,t)&&(e+=\" \"+t+'=\"'+Ke(this.attributes[t])+'\"');return e+=\"/>\",e}};function $d(i){if(i instanceof at)return i;throw new Error(\"Expected symbolNode but got \"+String(i)+\".\")}function d3(i){if(i instanceof ti)return i;throw new Error(\"Expected span<HtmlDomNode> but got \"+String(i)+\".\")}var m3={bin:1,close:1,inner:1,open:1,punct:1,rel:1},p3={\"accent-token\":1,mathord:1,\"op-token\":1,spacing:1,textord:1},me={math:{},text:{}};function f(i,e,t,r,n,s){me[i][n]={font:e,group:t,replace:r},s&&r&&(me[i][r]=me[i][n])}var 
m=\"math\",O=\"text\",g=\"main\",x=\"ams\",be=\"accent-token\",G=\"bin\",tt=\"close\",Li=\"inner\",Y=\"mathord\",ze=\"op-token\",pt=\"open\",Fo=\"punct\",k=\"rel\",mr=\"spacing\",C=\"textord\";f(m,g,k,\"\\u2261\",\"\\\\equiv\",!0);f(m,g,k,\"\\u227A\",\"\\\\prec\",!0);f(m,g,k,\"\\u227B\",\"\\\\succ\",!0);f(m,g,k,\"\\u223C\",\"\\\\sim\",!0);f(m,g,k,\"\\u22A5\",\"\\\\perp\");f(m,g,k,\"\\u2AAF\",\"\\\\preceq\",!0);f(m,g,k,\"\\u2AB0\",\"\\\\succeq\",!0);f(m,g,k,\"\\u2243\",\"\\\\simeq\",!0);f(m,g,k,\"\\u2223\",\"\\\\mid\",!0);f(m,g,k,\"\\u226A\",\"\\\\ll\",!0);f(m,g,k,\"\\u226B\",\"\\\\gg\",!0);f(m,g,k,\"\\u224D\",\"\\\\asymp\",!0);f(m,g,k,\"\\u2225\",\"\\\\parallel\");f(m,g,k,\"\\u22C8\",\"\\\\bowtie\",!0);f(m,g,k,\"\\u2323\",\"\\\\smile\",!0);f(m,g,k,\"\\u2291\",\"\\\\sqsubseteq\",!0);f(m,g,k,\"\\u2292\",\"\\\\sqsupseteq\",!0);f(m,g,k,\"\\u2250\",\"\\\\doteq\",!0);f(m,g,k,\"\\u2322\",\"\\\\frown\",!0);f(m,g,k,\"\\u220B\",\"\\\\ni\",!0);f(m,g,k,\"\\u221D\",\"\\\\propto\",!0);f(m,g,k,\"\\u22A2\",\"\\\\vdash\",!0);f(m,g,k,\"\\u22A3\",\"\\\\dashv\",!0);f(m,g,k,\"\\u220B\",\"\\\\owns\");f(m,g,Fo,\".\",\"\\\\ldotp\");f(m,g,Fo,\"\\u22C5\",\"\\\\cdotp\");f(m,g,C,\"#\",\"\\\\#\");f(O,g,C,\"#\",\"\\\\#\");f(m,g,C,\"&\",\"\\\\&\");f(O,g,C,\"&\",\"\\\\&\");f(m,g,C,\"\\u2135\",\"\\\\aleph\",!0);f(m,g,C,\"\\u2200\",\"\\\\forall\",!0);f(m,g,C,\"\\u210F\",\"\\\\hbar\",!0);f(m,g,C,\"\\u2203\",\"\\\\exists\",!0);f(m,g,C,\"\\u2207\",\"\\\\nabla\",!0);f(m,g,C,\"\\u266D\",\"\\\\flat\",!0);f(m,g,C,\"\\u2113\",\"\\\\ell\",!0);f(m,g,C,\"\\u266E\",\"\\\\natural\",!0);f(m,g,C,\"\\u2663\",\"\\\\clubsuit\",!0);f(m,g,C,\"\\u2118\",\"\\\\wp\",!0);f(m,g,C,\"\\u266F\",\"\\\\sharp\",!0);f(m,g,C,\"\\u2662\",\"\\\\diamondsuit\",!0);f(m,g,C,\"\\u211C\",\"\\\\Re\",!0);f(m,g,C,\"\\u2661\",\"\\\\heartsuit\",!0);f(m,g,C,\"\\u2111\",\"\\\\Im\",!0);f(m,g,C,\"\\u2660\",\"\\\\spadesuit\",!0);f(m,g,C,\"\\xA7\",\"\\\\S\",!0);f(O,g,C,\"\\xA7\",\"\\\\S\");f(m,g,C,\"\\xB6\",\"\\\\P\",!0);f(O,g,C,\"\\xB6\",\"\\\\P\");
f(m,g,C,\"\\u2020\",\"\\\\dag\");f(O,g,C,\"\\u2020\",\"\\\\dag\");f(O,g,C,\"\\u2020\",\"\\\\textdagger\");f(m,g,C,\"\\u2021\",\"\\\\ddag\");f(O,g,C,\"\\u2021\",\"\\\\ddag\");f(O,g,C,\"\\u2021\",\"\\\\textdaggerdbl\");f(m,g,tt,\"\\u23B1\",\"\\\\rmoustache\",!0);f(m,g,pt,\"\\u23B0\",\"\\\\lmoustache\",!0);f(m,g,tt,\"\\u27EF\",\"\\\\rgroup\",!0);f(m,g,pt,\"\\u27EE\",\"\\\\lgroup\",!0);f(m,g,G,\"\\u2213\",\"\\\\mp\",!0);f(m,g,G,\"\\u2296\",\"\\\\ominus\",!0);f(m,g,G,\"\\u228E\",\"\\\\uplus\",!0);f(m,g,G,\"\\u2293\",\"\\\\sqcap\",!0);f(m,g,G,\"\\u2217\",\"\\\\ast\");f(m,g,G,\"\\u2294\",\"\\\\sqcup\",!0);f(m,g,G,\"\\u25EF\",\"\\\\bigcirc\",!0);f(m,g,G,\"\\u2219\",\"\\\\bullet\",!0);f(m,g,G,\"\\u2021\",\"\\\\ddagger\");f(m,g,G,\"\\u2240\",\"\\\\wr\",!0);f(m,g,G,\"\\u2A3F\",\"\\\\amalg\");f(m,g,G,\"&\",\"\\\\And\");f(m,g,k,\"\\u27F5\",\"\\\\longleftarrow\",!0);f(m,g,k,\"\\u21D0\",\"\\\\Leftarrow\",!0);f(m,g,k,\"\\u27F8\",\"\\\\Longleftarrow\",!0);f(m,g,k,\"\\u27F6\",\"\\\\longrightarrow\",!0);f(m,g,k,\"\\u21D2\",\"\\\\Rightarrow\",!0);f(m,g,k,\"\\u27F9\",\"\\\\Longrightarrow\",!0);f(m,g,k,\"\\u2194\",\"\\\\leftrightarrow\",!0);f(m,g,k,\"\\u27F7\",\"\\\\longleftrightarrow\",!0);f(m,g,k,\"\\u21D4\",\"\\\\Leftrightarrow\",!0);f(m,g,k,\"\\u27FA\",\"\\\\Longleftrightarrow\",!0);f(m,g,k,\"\\u21A6\",\"\\\\mapsto\",!0);f(m,g,k,\"\\u27FC\",\"\\\\longmapsto\",!0);f(m,g,k,\"\\u2197\",\"\\\\nearrow\",!0);f(m,g,k,\"\\u21A9\",\"\\\\hookleftarrow\",!0);f(m,g,k,\"\\u21AA\",\"\\\\hookrightarrow\",!0);f(m,g,k,\"\\u2198\",\"\\\\searrow\",!0);f(m,g,k,\"\\u21BC\",\"\\\\leftharpoonup\",!0);f(m,g,k,\"\\u21C0\",\"\\\\rightharpoonup\",!0);f(m,g,k,\"\\u2199\",\"\\\\swarrow\",!0);f(m,g,k,\"\\u21BD\",\"\\\\leftharpoondown\",!0);f(m,g,k,\"\\u21C1\",\"\\\\rightharpoondown\",!0);f(m,g,k,\"\\u2196\",\"\\\\nwarrow\",!0);f(m,g,k,\"\\u21CC\",\"\\\\rightleftharpoons\",!0);f(m,x,k,\"\\u226E\",\"\\\\nless\",!0);f(m,x,k,\"\\uE010\",\"\\\\@nleqslant\");f(m,x,k,\"\\uE011\",\"\\\\@nleqq\");f(m,x,k,\"\\u2A87\",\"\\\
\lneq\",!0);f(m,x,k,\"\\u2268\",\"\\\\lneqq\",!0);f(m,x,k,\"\\uE00C\",\"\\\\@lvertneqq\");f(m,x,k,\"\\u22E6\",\"\\\\lnsim\",!0);f(m,x,k,\"\\u2A89\",\"\\\\lnapprox\",!0);f(m,x,k,\"\\u2280\",\"\\\\nprec\",!0);f(m,x,k,\"\\u22E0\",\"\\\\npreceq\",!0);f(m,x,k,\"\\u22E8\",\"\\\\precnsim\",!0);f(m,x,k,\"\\u2AB9\",\"\\\\precnapprox\",!0);f(m,x,k,\"\\u2241\",\"\\\\nsim\",!0);f(m,x,k,\"\\uE006\",\"\\\\@nshortmid\");f(m,x,k,\"\\u2224\",\"\\\\nmid\",!0);f(m,x,k,\"\\u22AC\",\"\\\\nvdash\",!0);f(m,x,k,\"\\u22AD\",\"\\\\nvDash\",!0);f(m,x,k,\"\\u22EA\",\"\\\\ntriangleleft\");f(m,x,k,\"\\u22EC\",\"\\\\ntrianglelefteq\",!0);f(m,x,k,\"\\u228A\",\"\\\\subsetneq\",!0);f(m,x,k,\"\\uE01A\",\"\\\\@varsubsetneq\");f(m,x,k,\"\\u2ACB\",\"\\\\subsetneqq\",!0);f(m,x,k,\"\\uE017\",\"\\\\@varsubsetneqq\");f(m,x,k,\"\\u226F\",\"\\\\ngtr\",!0);f(m,x,k,\"\\uE00F\",\"\\\\@ngeqslant\");f(m,x,k,\"\\uE00E\",\"\\\\@ngeqq\");f(m,x,k,\"\\u2A88\",\"\\\\gneq\",!0);f(m,x,k,\"\\u2269\",\"\\\\gneqq\",!0);f(m,x,k,\"\\uE00D\",\"\\\\@gvertneqq\");f(m,x,k,\"\\u22E7\",\"\\\\gnsim\",!0);f(m,x,k,\"\\u2A8A\",\"\\\\gnapprox\",!0);f(m,x,k,\"\\u2281\",\"\\\\nsucc\",!0);f(m,x,k,\"\\u22E1\",\"\\\\nsucceq\",!0);f(m,x,k,\"\\u22E9\",\"\\\\succnsim\",!0);f(m,x,k,\"\\u2ABA\",\"\\\\succnapprox\",!0);f(m,x,k,\"\\u2246\",\"\\\\ncong\",!0);f(m,x,k,\"\\uE007\",\"\\\\@nshortparallel\");f(m,x,k,\"\\u2226\",\"\\\\nparallel\",!0);f(m,x,k,\"\\u22AF\",\"\\\\nVDash\",!0);f(m,x,k,\"\\u22EB\",\"\\\\ntriangleright\");f(m,x,k,\"\\u22ED\",\"\\\\ntrianglerighteq\",!0);f(m,x,k,\"\\uE018\",\"\\\\@nsupseteqq\");f(m,x,k,\"\\u228B\",\"\\\\supsetneq\",!0);f(m,x,k,\"\\uE01B\",\"\\\\@varsupsetneq\");f(m,x,k,\"\\u2ACC\",\"\\\\supsetneqq\",!0);f(m,x,k,\"\\uE019\",\"\\\\@varsupsetneqq\");f(m,x,k,\"\\u22AE\",\"\\\\nVdash\",!0);f(m,x,k,\"\\u2AB5\",\"\\\\precneqq\",!0);f(m,x,k,\"\\u2AB6\",\"\\\\succneqq\",!0);f(m,x,k,\"\\uE016\",\"\\\\@nsubseteqq\");f(m,x,G,\"\\u22B4\",\"\\\\unlhd\");f(m,x,G,\"\\u22B5\",\"\\\\unrhd\");f(m,x,k,\"\\u219A\",\"\\\\nleftarrow\",!0
);f(m,x,k,\"\\u219B\",\"\\\\nrightarrow\",!0);f(m,x,k,\"\\u21CD\",\"\\\\nLeftarrow\",!0);f(m,x,k,\"\\u21CF\",\"\\\\nRightarrow\",!0);f(m,x,k,\"\\u21AE\",\"\\\\nleftrightarrow\",!0);f(m,x,k,\"\\u21CE\",\"\\\\nLeftrightarrow\",!0);f(m,x,k,\"\\u25B3\",\"\\\\vartriangle\");f(m,x,C,\"\\u210F\",\"\\\\hslash\");f(m,x,C,\"\\u25BD\",\"\\\\triangledown\");f(m,x,C,\"\\u25CA\",\"\\\\lozenge\");f(m,x,C,\"\\u24C8\",\"\\\\circledS\");f(m,x,C,\"\\xAE\",\"\\\\circledR\");f(O,x,C,\"\\xAE\",\"\\\\circledR\");f(m,x,C,\"\\u2221\",\"\\\\measuredangle\",!0);f(m,x,C,\"\\u2204\",\"\\\\nexists\");f(m,x,C,\"\\u2127\",\"\\\\mho\");f(m,x,C,\"\\u2132\",\"\\\\Finv\",!0);f(m,x,C,\"\\u2141\",\"\\\\Game\",!0);f(m,x,C,\"\\u2035\",\"\\\\backprime\");f(m,x,C,\"\\u25B2\",\"\\\\blacktriangle\");f(m,x,C,\"\\u25BC\",\"\\\\blacktriangledown\");f(m,x,C,\"\\u25A0\",\"\\\\blacksquare\");f(m,x,C,\"\\u29EB\",\"\\\\blacklozenge\");f(m,x,C,\"\\u2605\",\"\\\\bigstar\");f(m,x,C,\"\\u2222\",\"\\\\sphericalangle\",!0);f(m,x,C,\"\\u2201\",\"\\\\complement\",!0);f(m,x,C,\"\\xF0\",\"\\\\eth\",!0);f(O,g,C,\"\\xF0\",\"\\xF0\");f(m,x,C,\"\\u2571\",\"\\\\diagup\");f(m,x,C,\"\\u2572\",\"\\\\diagdown\");f(m,x,C,\"\\u25A1\",\"\\\\square\");f(m,x,C,\"\\u25A1\",\"\\\\Box\");f(m,x,C,\"\\u25CA\",\"\\\\Diamond\");f(m,x,C,\"\\xA5\",\"\\\\yen\",!0);f(O,x,C,\"\\xA5\",\"\\\\yen\",!0);f(m,x,C,\"\\u2713\",\"\\\\checkmark\",!0);f(O,x,C,\"\\u2713\",\"\\\\checkmark\");f(m,x,C,\"\\u2136\",\"\\\\beth\",!0);f(m,x,C,\"\\u2138\",\"\\\\daleth\",!0);f(m,x,C,\"\\u2137\",\"\\\\gimel\",!0);f(m,x,C,\"\\u03DD\",\"\\\\digamma\",!0);f(m,x,C,\"\\u03F0\",\"\\\\varkappa\");f(m,x,pt,\"\\u250C\",\"\\\\@ulcorner\",!0);f(m,x,tt,\"\\u2510\",\"\\\\@urcorner\",!0);f(m,x,pt,\"\\u2514\",\"\\\\@llcorner\",!0);f(m,x,tt,\"\\u2518\",\"\\\\@lrcorner\",!0);f(m,x,k,\"\\u2266\",\"\\\\leqq\",!0);f(m,x,k,\"\\u2A7D\",\"\\\\leqslant\",!0);f(m,x,k,\"\\u2A95\",\"\\\\eqslantless\",!0);f(m,x,k,\"\\u2272\",\"\\\\lesssim\",!0);f(m,x,k,\"\\u2A85\",\"\\\\lessapprox\",!0);f(m,x,k,\"\\u22
4A\",\"\\\\approxeq\",!0);f(m,x,G,\"\\u22D6\",\"\\\\lessdot\");f(m,x,k,\"\\u22D8\",\"\\\\lll\",!0);f(m,x,k,\"\\u2276\",\"\\\\lessgtr\",!0);f(m,x,k,\"\\u22DA\",\"\\\\lesseqgtr\",!0);f(m,x,k,\"\\u2A8B\",\"\\\\lesseqqgtr\",!0);f(m,x,k,\"\\u2251\",\"\\\\doteqdot\");f(m,x,k,\"\\u2253\",\"\\\\risingdotseq\",!0);f(m,x,k,\"\\u2252\",\"\\\\fallingdotseq\",!0);f(m,x,k,\"\\u223D\",\"\\\\backsim\",!0);f(m,x,k,\"\\u22CD\",\"\\\\backsimeq\",!0);f(m,x,k,\"\\u2AC5\",\"\\\\subseteqq\",!0);f(m,x,k,\"\\u22D0\",\"\\\\Subset\",!0);f(m,x,k,\"\\u228F\",\"\\\\sqsubset\",!0);f(m,x,k,\"\\u227C\",\"\\\\preccurlyeq\",!0);f(m,x,k,\"\\u22DE\",\"\\\\curlyeqprec\",!0);f(m,x,k,\"\\u227E\",\"\\\\precsim\",!0);f(m,x,k,\"\\u2AB7\",\"\\\\precapprox\",!0);f(m,x,k,\"\\u22B2\",\"\\\\vartriangleleft\");f(m,x,k,\"\\u22B4\",\"\\\\trianglelefteq\");f(m,x,k,\"\\u22A8\",\"\\\\vDash\",!0);f(m,x,k,\"\\u22AA\",\"\\\\Vvdash\",!0);f(m,x,k,\"\\u2323\",\"\\\\smallsmile\");f(m,x,k,\"\\u2322\",\"\\\\smallfrown\");f(m,x,k,\"\\u224F\",\"\\\\bumpeq\",!0);f(m,x,k,\"\\u224E\",\"\\\\Bumpeq\",!0);f(m,x,k,\"\\u2267\",\"\\\\geqq\",!0);f(m,x,k,\"\\u2A7E\",\"\\\\geqslant\",!0);f(m,x,k,\"\\u2A96\",\"\\\\eqslantgtr\",!0);f(m,x,k,\"\\u2273\",\"\\\\gtrsim\",!0);f(m,x,k,\"\\u2A86\",\"\\\\gtrapprox\",!0);f(m,x,G,\"\\u22D7\",\"\\\\gtrdot\");f(m,x,k,\"\\u22D9\",\"\\\\ggg\",!0);f(m,x,k,\"\\u2277\",\"\\\\gtrless\",!0);f(m,x,k,\"\\u22DB\",\"\\\\gtreqless\",!0);f(m,x,k,\"\\u2A8C\",\"\\\\gtreqqless\",!0);f(m,x,k,\"\\u2256\",\"\\\\eqcirc\",!0);f(m,x,k,\"\\u2257\",\"\\\\circeq\",!0);f(m,x,k,\"\\u225C\",\"\\\\triangleq\",!0);f(m,x,k,\"\\u223C\",\"\\\\thicksim\");f(m,x,k,\"\\u2248\",\"\\\\thickapprox\");f(m,x,k,\"\\u2AC6\",\"\\\\supseteqq\",!0);f(m,x,k,\"\\u22D1\",\"\\\\Supset\",!0);f(m,x,k,\"\\u2290\",\"\\\\sqsupset\",!0);f(m,x,k,\"\\u227D\",\"\\\\succcurlyeq\",!0);f(m,x,k,\"\\u22DF\",\"\\\\curlyeqsucc\",!0);f(m,x,k,\"\\u227F\",\"\\\\succsim\",!0);f(m,x,k,\"\\u2AB8\",\"\\\\succapprox\",!0);f(m,x,k,\"\\u22B3\",\"\\\\vartriangleright\");f(m,x,k,\"\
\u22B5\",\"\\\\trianglerighteq\");f(m,x,k,\"\\u22A9\",\"\\\\Vdash\",!0);f(m,x,k,\"\\u2223\",\"\\\\shortmid\");f(m,x,k,\"\\u2225\",\"\\\\shortparallel\");f(m,x,k,\"\\u226C\",\"\\\\between\",!0);f(m,x,k,\"\\u22D4\",\"\\\\pitchfork\",!0);f(m,x,k,\"\\u221D\",\"\\\\varpropto\");f(m,x,k,\"\\u25C0\",\"\\\\blacktriangleleft\");f(m,x,k,\"\\u2234\",\"\\\\therefore\",!0);f(m,x,k,\"\\u220D\",\"\\\\backepsilon\");f(m,x,k,\"\\u25B6\",\"\\\\blacktriangleright\");f(m,x,k,\"\\u2235\",\"\\\\because\",!0);f(m,x,k,\"\\u22D8\",\"\\\\llless\");f(m,x,k,\"\\u22D9\",\"\\\\gggtr\");f(m,x,G,\"\\u22B2\",\"\\\\lhd\");f(m,x,G,\"\\u22B3\",\"\\\\rhd\");f(m,x,k,\"\\u2242\",\"\\\\eqsim\",!0);f(m,g,k,\"\\u22C8\",\"\\\\Join\");f(m,x,k,\"\\u2251\",\"\\\\Doteq\",!0);f(m,x,G,\"\\u2214\",\"\\\\dotplus\",!0);f(m,x,G,\"\\u2216\",\"\\\\smallsetminus\");f(m,x,G,\"\\u22D2\",\"\\\\Cap\",!0);f(m,x,G,\"\\u22D3\",\"\\\\Cup\",!0);f(m,x,G,\"\\u2A5E\",\"\\\\doublebarwedge\",!0);f(m,x,G,\"\\u229F\",\"\\\\boxminus\",!0);f(m,x,G,\"\\u229E\",\"\\\\boxplus\",!0);f(m,x,G,\"\\u22C7\",\"\\\\divideontimes\",!0);f(m,x,G,\"\\u22C9\",\"\\\\ltimes\",!0);f(m,x,G,\"\\u22CA\",\"\\\\rtimes\",!0);f(m,x,G,\"\\u22CB\",\"\\\\leftthreetimes\",!0);f(m,x,G,\"\\u22CC\",\"\\\\rightthreetimes\",!0);f(m,x,G,\"\\u22CF\",\"\\\\curlywedge\",!0);f(m,x,G,\"\\u22CE\",\"\\\\curlyvee\",!0);f(m,x,G,\"\\u229D\",\"\\\\circleddash\",!0);f(m,x,G,\"\\u229B\",\"\\\\circledast\",!0);f(m,x,G,\"\\u22C5\",\"\\\\centerdot\");f(m,x,G,\"\\u22BA\",\"\\\\intercal\",!0);f(m,x,G,\"\\u22D2\",\"\\\\doublecap\");f(m,x,G,\"\\u22D3\",\"\\\\doublecup\");f(m,x,G,\"\\u22A0\",\"\\\\boxtimes\",!0);f(m,x,k,\"\\u21E2\",\"\\\\dashrightarrow\",!0);f(m,x,k,\"\\u21E0\",\"\\\\dashleftarrow\",!0);f(m,x,k,\"\\u21C7\",\"\\\\leftleftarrows\",!0);f(m,x,k,\"\\u21C6\",\"\\\\leftrightarrows\",!0);f(m,x,k,\"\\u21DA\",\"\\\\Lleftarrow\",!0);f(m,x,k,\"\\u219E\",\"\\\\twoheadleftarrow\",!0);f(m,x,k,\"\\u21A2\",\"\\\\leftarrowtail\",!0);f(m,x,k,\"\\u21AB\",\"\\\\looparrowleft\",!0);f(m,x,k,\"\\u21CB
\",\"\\\\leftrightharpoons\",!0);f(m,x,k,\"\\u21B6\",\"\\\\curvearrowleft\",!0);f(m,x,k,\"\\u21BA\",\"\\\\circlearrowleft\",!0);f(m,x,k,\"\\u21B0\",\"\\\\Lsh\",!0);f(m,x,k,\"\\u21C8\",\"\\\\upuparrows\",!0);f(m,x,k,\"\\u21BF\",\"\\\\upharpoonleft\",!0);f(m,x,k,\"\\u21C3\",\"\\\\downharpoonleft\",!0);f(m,g,k,\"\\u22B6\",\"\\\\origof\",!0);f(m,g,k,\"\\u22B7\",\"\\\\imageof\",!0);f(m,x,k,\"\\u22B8\",\"\\\\multimap\",!0);f(m,x,k,\"\\u21AD\",\"\\\\leftrightsquigarrow\",!0);f(m,x,k,\"\\u21C9\",\"\\\\rightrightarrows\",!0);f(m,x,k,\"\\u21C4\",\"\\\\rightleftarrows\",!0);f(m,x,k,\"\\u21A0\",\"\\\\twoheadrightarrow\",!0);f(m,x,k,\"\\u21A3\",\"\\\\rightarrowtail\",!0);f(m,x,k,\"\\u21AC\",\"\\\\looparrowright\",!0);f(m,x,k,\"\\u21B7\",\"\\\\curvearrowright\",!0);f(m,x,k,\"\\u21BB\",\"\\\\circlearrowright\",!0);f(m,x,k,\"\\u21B1\",\"\\\\Rsh\",!0);f(m,x,k,\"\\u21CA\",\"\\\\downdownarrows\",!0);f(m,x,k,\"\\u21BE\",\"\\\\upharpoonright\",!0);f(m,x,k,\"\\u21C2\",\"\\\\downharpoonright\",!0);f(m,x,k,\"\\u21DD\",\"\\\\rightsquigarrow\",!0);f(m,x,k,\"\\u21DD\",\"\\\\leadsto\");f(m,x,k,\"\\u21DB\",\"\\\\Rrightarrow\",!0);f(m,x,k,\"\\u21BE\",\"\\\\restriction\");f(m,g,C,\"\\u2018\",\"`\");f(m,g,C,\"$\",\"\\\\$\");f(O,g,C,\"$\",\"\\\\$\");f(O,g,C,\"$\",\"\\\\textdollar\");f(m,g,C,\"%\",\"\\\\%\");f(O,g,C,\"%\",\"\\\\%\");f(m,g,C,\"_\",\"\\\\_\");f(O,g,C,\"_\",\"\\\\_\");f(O,g,C,\"_\",\"\\\\textunderscore\");f(m,g,C,\"\\u2220\",\"\\\\angle\",!0);f(m,g,C,\"\\u221E\",\"\\\\infty\",!0);f(m,g,C,\"\\u2032\",\"\\\\prime\");f(m,g,C,\"\\u25B3\",\"\\\\triangle\");f(m,g,C,\"\\u0393\",\"\\\\Gamma\",!0);f(m,g,C,\"\\u0394\",\"\\\\Delta\",!0);f(m,g,C,\"\\u0398\",\"\\\\Theta\",!0);f(m,g,C,\"\\u039B\",\"\\\\Lambda\",!0);f(m,g,C,\"\\u039E\",\"\\\\Xi\",!0);f(m,g,C,\"\\u03A0\",\"\\\\Pi\",!0);f(m,g,C,\"\\u03A3\",\"\\\\Sigma\",!0);f(m,g,C,\"\\u03A5\",\"\\\\Upsilon\",!0);f(m,g,C,\"\\u03A6\",\"\\\\Phi\",!0);f(m,g,C,\"\\u03A8\",\"\\\\Psi\",!0);f(m,g,C,\"\\u03A9\",\"\\\\Omega\",!0);f(m,g,C,\"A\",\"\\u0391\");f(m,
g,C,\"B\",\"\\u0392\");f(m,g,C,\"E\",\"\\u0395\");f(m,g,C,\"Z\",\"\\u0396\");f(m,g,C,\"H\",\"\\u0397\");f(m,g,C,\"I\",\"\\u0399\");f(m,g,C,\"K\",\"\\u039A\");f(m,g,C,\"M\",\"\\u039C\");f(m,g,C,\"N\",\"\\u039D\");f(m,g,C,\"O\",\"\\u039F\");f(m,g,C,\"P\",\"\\u03A1\");f(m,g,C,\"T\",\"\\u03A4\");f(m,g,C,\"X\",\"\\u03A7\");f(m,g,C,\"\\xAC\",\"\\\\neg\",!0);f(m,g,C,\"\\xAC\",\"\\\\lnot\");f(m,g,C,\"\\u22A4\",\"\\\\top\");f(m,g,C,\"\\u22A5\",\"\\\\bot\");f(m,g,C,\"\\u2205\",\"\\\\emptyset\");f(m,x,C,\"\\u2205\",\"\\\\varnothing\");f(m,g,Y,\"\\u03B1\",\"\\\\alpha\",!0);f(m,g,Y,\"\\u03B2\",\"\\\\beta\",!0);f(m,g,Y,\"\\u03B3\",\"\\\\gamma\",!0);f(m,g,Y,\"\\u03B4\",\"\\\\delta\",!0);f(m,g,Y,\"\\u03F5\",\"\\\\epsilon\",!0);f(m,g,Y,\"\\u03B6\",\"\\\\zeta\",!0);f(m,g,Y,\"\\u03B7\",\"\\\\eta\",!0);f(m,g,Y,\"\\u03B8\",\"\\\\theta\",!0);f(m,g,Y,\"\\u03B9\",\"\\\\iota\",!0);f(m,g,Y,\"\\u03BA\",\"\\\\kappa\",!0);f(m,g,Y,\"\\u03BB\",\"\\\\lambda\",!0);f(m,g,Y,\"\\u03BC\",\"\\\\mu\",!0);f(m,g,Y,\"\\u03BD\",\"\\\\nu\",!0);f(m,g,Y,\"\\u03BE\",\"\\\\xi\",!0);f(m,g,Y,\"\\u03BF\",\"\\\\omicron\",!0);f(m,g,Y,\"\\u03C0\",\"\\\\pi\",!0);f(m,g,Y,\"\\u03C1\",\"\\\\rho\",!0);f(m,g,Y,\"\\u03C3\",\"\\\\sigma\",!0);f(m,g,Y,\"\\u03C4\",\"\\\\tau\",!0);f(m,g,Y,\"\\u03C5\",\"\\\\upsilon\",!0);f(m,g,Y,\"\\u03D5\",\"\\\\phi\",!0);f(m,g,Y,\"\\u03C7\",\"\\\\chi\",!0);f(m,g,Y,\"\\u03C8\",\"\\\\psi\",!0);f(m,g,Y,\"\\u03C9\",\"\\\\omega\",!0);f(m,g,Y,\"\\u03B5\",\"\\\\varepsilon\",!0);f(m,g,Y,\"\\u03D1\",\"\\\\vartheta\",!0);f(m,g,Y,\"\\u03D6\",\"\\\\varpi\",!0);f(m,g,Y,\"\\u03F1\",\"\\\\varrho\",!0);f(m,g,Y,\"\\u03C2\",\"\\\\varsigma\",!0);f(m,g,Y,\"\\u03C6\",\"\\\\varphi\",!0);f(m,g,G,\"\\u2217\",\"*\",!0);f(m,g,G,\"+\",\"+\");f(m,g,G,\"\\u2212\",\"-\",!0);f(m,g,G,\"\\u22C5\",\"\\\\cdot\",!0);f(m,g,G,\"\\u2218\",\"\\\\circ\",!0);f(m,g,G,\"\\xF7\",\"\\\\div\",!0);f(m,g,G,\"\\xB1\",\"\\\\pm\",!0);f(m,g,G,\"\\xD7\",\"\\\\times\",!0);f(m,g,G,\"\\u2229\",\"\\\\cap\",!0);f(m,g,G,\"\\u222A\",\"\\\\cup\",!0);f(m,g,G
,\"\\u2216\",\"\\\\setminus\",!0);f(m,g,G,\"\\u2227\",\"\\\\land\");f(m,g,G,\"\\u2228\",\"\\\\lor\");f(m,g,G,\"\\u2227\",\"\\\\wedge\",!0);f(m,g,G,\"\\u2228\",\"\\\\vee\",!0);f(m,g,C,\"\\u221A\",\"\\\\surd\");f(m,g,pt,\"\\u27E8\",\"\\\\langle\",!0);f(m,g,pt,\"\\u2223\",\"\\\\lvert\");f(m,g,pt,\"\\u2225\",\"\\\\lVert\");f(m,g,tt,\"?\",\"?\");f(m,g,tt,\"!\",\"!\");f(m,g,tt,\"\\u27E9\",\"\\\\rangle\",!0);f(m,g,tt,\"\\u2223\",\"\\\\rvert\");f(m,g,tt,\"\\u2225\",\"\\\\rVert\");f(m,g,k,\"=\",\"=\");f(m,g,k,\":\",\":\");f(m,g,k,\"\\u2248\",\"\\\\approx\",!0);f(m,g,k,\"\\u2245\",\"\\\\cong\",!0);f(m,g,k,\"\\u2265\",\"\\\\ge\");f(m,g,k,\"\\u2265\",\"\\\\geq\",!0);f(m,g,k,\"\\u2190\",\"\\\\gets\");f(m,g,k,\">\",\"\\\\gt\",!0);f(m,g,k,\"\\u2208\",\"\\\\in\",!0);f(m,g,k,\"\\uE020\",\"\\\\@not\");f(m,g,k,\"\\u2282\",\"\\\\subset\",!0);f(m,g,k,\"\\u2283\",\"\\\\supset\",!0);f(m,g,k,\"\\u2286\",\"\\\\subseteq\",!0);f(m,g,k,\"\\u2287\",\"\\\\supseteq\",!0);f(m,x,k,\"\\u2288\",\"\\\\nsubseteq\",!0);f(m,x,k,\"\\u2289\",\"\\\\nsupseteq\",!0);f(m,g,k,\"\\u22A8\",\"\\\\models\");f(m,g,k,\"\\u2190\",\"\\\\leftarrow\",!0);f(m,g,k,\"\\u2264\",\"\\\\le\");f(m,g,k,\"\\u2264\",\"\\\\leq\",!0);f(m,g,k,\"<\",\"\\\\lt\",!0);f(m,g,k,\"\\u2192\",\"\\\\rightarrow\",!0);f(m,g,k,\"\\u2192\",\"\\\\to\");f(m,x,k,\"\\u2271\",\"\\\\ngeq\",!0);f(m,x,k,\"\\u2270\",\"\\\\nleq\",!0);f(m,g,mr,\"\\xA0\",\"\\\\ \");f(m,g,mr,\"\\xA0\",\"\\\\space\");f(m,g,mr,\"\\xA0\",\"\\\\nobreakspace\");f(O,g,mr,\"\\xA0\",\"\\\\ \");f(O,g,mr,\"\\xA0\",\" 
\");f(O,g,mr,\"\\xA0\",\"\\\\space\");f(O,g,mr,\"\\xA0\",\"\\\\nobreakspace\");f(m,g,mr,null,\"\\\\nobreak\");f(m,g,mr,null,\"\\\\allowbreak\");f(m,g,Fo,\",\",\",\");f(m,g,Fo,\";\",\";\");f(m,x,G,\"\\u22BC\",\"\\\\barwedge\",!0);f(m,x,G,\"\\u22BB\",\"\\\\veebar\",!0);f(m,g,G,\"\\u2299\",\"\\\\odot\",!0);f(m,g,G,\"\\u2295\",\"\\\\oplus\",!0);f(m,g,G,\"\\u2297\",\"\\\\otimes\",!0);f(m,g,C,\"\\u2202\",\"\\\\partial\",!0);f(m,g,G,\"\\u2298\",\"\\\\oslash\",!0);f(m,x,G,\"\\u229A\",\"\\\\circledcirc\",!0);f(m,x,G,\"\\u22A1\",\"\\\\boxdot\",!0);f(m,g,G,\"\\u25B3\",\"\\\\bigtriangleup\");f(m,g,G,\"\\u25BD\",\"\\\\bigtriangledown\");f(m,g,G,\"\\u2020\",\"\\\\dagger\");f(m,g,G,\"\\u22C4\",\"\\\\diamond\");f(m,g,G,\"\\u22C6\",\"\\\\star\");f(m,g,G,\"\\u25C3\",\"\\\\triangleleft\");f(m,g,G,\"\\u25B9\",\"\\\\triangleright\");f(m,g,pt,\"{\",\"\\\\{\");f(O,g,C,\"{\",\"\\\\{\");f(O,g,C,\"{\",\"\\\\textbraceleft\");f(m,g,tt,\"}\",\"\\\\}\");f(O,g,C,\"}\",\"\\\\}\");f(O,g,C,\"}\",\"\\\\textbraceright\");f(m,g,pt,\"{\",\"\\\\lbrace\");f(m,g,tt,\"}\",\"\\\\rbrace\");f(m,g,pt,\"[\",\"\\\\lbrack\",!0);f(O,g,C,\"[\",\"\\\\lbrack\",!0);f(m,g,tt,\"]\",\"\\\\rbrack\",!0);f(O,g,C,\"]\",\"\\\\rbrack\",!0);f(m,g,pt,\"(\",\"\\\\lparen\",!0);f(m,g,tt,\")\",\"\\\\rparen\",!0);f(O,g,C,\"<\",\"\\\\textless\",!0);f(O,g,C,\">\",\"\\\\textgreater\",!0);f(m,g,pt,\"\\u230A\",\"\\\\lfloor\",!0);f(m,g,tt,\"\\u230B\",\"\\\\rfloor\",!0);f(m,g,pt,\"\\u2308\",\"\\\\lceil\",!0);f(m,g,tt,\"\\u2309\",\"\\\\rceil\",!0);f(m,g,C,\"\\\\\",\"\\\\backslash\");f(m,g,C,\"\\u2223\",\"|\");f(m,g,C,\"\\u2223\",\"\\\\vert\");f(O,g,C,\"|\",\"\\\\textbar\",!0);f(m,g,C,\"\\u2225\",\"\\\\|\");f(m,g,C,\"\\u2225\",\"\\\\Vert\");f(O,g,C,\"\\u2225\",\"\\\\textbardbl\");f(O,g,C,\"~\",\"\\\\textasciitilde\");f(O,g,C,\"\\\\\",\"\\\\textbackslash\");f(O,g,C,\"^\",\"\\\\textasciicircum\");f(m,g,k,\"\\u2191\",\"\\\\uparrow\",!0);f(m,g,k,\"\\u21D1\",\"\\\\Uparrow\",!0);f(m,g,k,\"\\u2193\",\"\\\\downarrow\",!0);f(m,g,k,\"\\u21D3\",\"\\\\Dow
narrow\",!0);f(m,g,k,\"\\u2195\",\"\\\\updownarrow\",!0);f(m,g,k,\"\\u21D5\",\"\\\\Updownarrow\",!0);f(m,g,ze,\"\\u2210\",\"\\\\coprod\");f(m,g,ze,\"\\u22C1\",\"\\\\bigvee\");f(m,g,ze,\"\\u22C0\",\"\\\\bigwedge\");f(m,g,ze,\"\\u2A04\",\"\\\\biguplus\");f(m,g,ze,\"\\u22C2\",\"\\\\bigcap\");f(m,g,ze,\"\\u22C3\",\"\\\\bigcup\");f(m,g,ze,\"\\u222B\",\"\\\\int\");f(m,g,ze,\"\\u222B\",\"\\\\intop\");f(m,g,ze,\"\\u222C\",\"\\\\iint\");f(m,g,ze,\"\\u222D\",\"\\\\iiint\");f(m,g,ze,\"\\u220F\",\"\\\\prod\");f(m,g,ze,\"\\u2211\",\"\\\\sum\");f(m,g,ze,\"\\u2A02\",\"\\\\bigotimes\");f(m,g,ze,\"\\u2A01\",\"\\\\bigoplus\");f(m,g,ze,\"\\u2A00\",\"\\\\bigodot\");f(m,g,ze,\"\\u222E\",\"\\\\oint\");f(m,g,ze,\"\\u222F\",\"\\\\oiint\");f(m,g,ze,\"\\u2230\",\"\\\\oiiint\");f(m,g,ze,\"\\u2A06\",\"\\\\bigsqcup\");f(m,g,ze,\"\\u222B\",\"\\\\smallint\");f(O,g,Li,\"\\u2026\",\"\\\\textellipsis\");f(m,g,Li,\"\\u2026\",\"\\\\mathellipsis\");f(O,g,Li,\"\\u2026\",\"\\\\ldots\",!0);f(m,g,Li,\"\\u2026\",\"\\\\ldots\",!0);f(m,g,Li,\"\\u22EF\",\"\\\\@cdots\",!0);f(m,g,Li,\"\\u22F1\",\"\\\\ddots\",!0);f(m,g,C,\"\\u22EE\",\"\\\\varvdots\");f(O,g,C,\"\\u22EE\",\"\\\\varvdots\");f(m,g,be,\"\\u02CA\",\"\\\\acute\");f(m,g,be,\"\\u02CB\",\"\\\\grave\");f(m,g,be,\"\\xA8\",\"\\\\ddot\");f(m,g,be,\"~\",\"\\\\tilde\");f(m,g,be,\"\\u02C9\",\"\\\\bar\");f(m,g,be,\"\\u02D8\",\"\\\\breve\");f(m,g,be,\"\\u02C7\",\"\\\\check\");f(m,g,be,\"^\",\"\\\\hat\");f(m,g,be,\"\\u20D7\",\"\\\\vec\");f(m,g,be,\"\\u02D9\",\"\\\\dot\");f(m,g,be,\"\\u02DA\",\"\\\\mathring\");f(m,g,Y,\"\\uE131\",\"\\\\@imath\");f(m,g,Y,\"\\uE237\",\"\\\\@jmath\");f(m,g,C,\"\\u0131\",\"\\u0131\");f(m,g,C,\"\\u0237\",\"\\u0237\");f(O,g,C,\"\\u0131\",\"\\\\i\",!0);f(O,g,C,\"\\u0237\",\"\\\\j\",!0);f(O,g,C,\"\\xDF\",\"\\\\ss\",!0);f(O,g,C,\"\\xE6\",\"\\\\ae\",!0);f(O,g,C,\"\\u0153\",\"\\\\oe\",!0);f(O,g,C,\"\\xF8\",\"\\\\o\",!0);f(O,g,C,\"\\xC6\",\"\\\\AE\",!0);f(O,g,C,\"\\u0152\",\"\\\\OE\",!0);f(O,g,C,\"\\xD8\",\"\\\\O\",!0);f(O,g,be,\"\\u02CA\",\"\\\
\'\");f(O,g,be,\"\\u02CB\",\"\\\\`\");f(O,g,be,\"\\u02C6\",\"\\\\^\");f(O,g,be,\"\\u02DC\",\"\\\\~\");f(O,g,be,\"\\u02C9\",\"\\\\=\");f(O,g,be,\"\\u02D8\",\"\\\\u\");f(O,g,be,\"\\u02D9\",\"\\\\.\");f(O,g,be,\"\\xB8\",\"\\\\c\");f(O,g,be,\"\\u02DA\",\"\\\\r\");f(O,g,be,\"\\u02C7\",\"\\\\v\");f(O,g,be,\"\\xA8\",'\\\\\"');f(O,g,be,\"\\u02DD\",\"\\\\H\");f(O,g,be,\"\\u25EF\",\"\\\\textcircled\");var wm={\"--\":!0,\"---\":!0,\"``\":!0,\"''\":!0};f(O,g,C,\"\\u2013\",\"--\",!0);f(O,g,C,\"\\u2013\",\"\\\\textendash\");f(O,g,C,\"\\u2014\",\"---\",!0);f(O,g,C,\"\\u2014\",\"\\\\textemdash\");f(O,g,C,\"\\u2018\",\"`\",!0);f(O,g,C,\"\\u2018\",\"\\\\textquoteleft\");f(O,g,C,\"\\u2019\",\"'\",!0);f(O,g,C,\"\\u2019\",\"\\\\textquoteright\");f(O,g,C,\"\\u201C\",\"``\",!0);f(O,g,C,\"\\u201C\",\"\\\\textquotedblleft\");f(O,g,C,\"\\u201D\",\"''\",!0);f(O,g,C,\"\\u201D\",\"\\\\textquotedblright\");f(m,g,C,\"\\xB0\",\"\\\\degree\",!0);f(O,g,C,\"\\xB0\",\"\\\\degree\");f(O,g,C,\"\\xB0\",\"\\\\textdegree\",!0);f(m,g,C,\"\\xA3\",\"\\\\pounds\");f(m,g,C,\"\\xA3\",\"\\\\mathsterling\",!0);f(O,g,C,\"\\xA3\",\"\\\\pounds\");f(O,g,C,\"\\xA3\",\"\\\\textsterling\",!0);f(m,x,C,\"\\u2720\",\"\\\\maltese\");f(O,x,C,\"\\u2720\",\"\\\\maltese\");var Gd='0123456789/@.\"';for(vo=0;vo<Gd.length;vo++)I0=Gd.charAt(vo),f(m,g,C,I0,I0);var I0,vo,Ud='0123456789!@*()-=+\";:?/.,';for(bo=0;bo<Ud.length;bo++)R0=Ud.charAt(bo),f(O,g,C,R0,R0);var R0,bo,Oo=\"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz\";for(yo=0;yo<Oo.length;yo++)Cn=Oo.charAt(yo),f(m,g,Y,Cn,Cn),f(O,g,C,Cn,Cn);var 
Cn,yo;f(m,x,C,\"C\",\"\\u2102\");f(O,x,C,\"C\",\"\\u2102\");f(m,x,C,\"H\",\"\\u210D\");f(O,x,C,\"H\",\"\\u210D\");f(m,x,C,\"N\",\"\\u2115\");f(O,x,C,\"N\",\"\\u2115\");f(m,x,C,\"P\",\"\\u2119\");f(O,x,C,\"P\",\"\\u2119\");f(m,x,C,\"Q\",\"\\u211A\");f(O,x,C,\"Q\",\"\\u211A\");f(m,x,C,\"R\",\"\\u211D\");f(O,x,C,\"R\",\"\\u211D\");f(m,x,C,\"Z\",\"\\u2124\");f(O,x,C,\"Z\",\"\\u2124\");f(m,g,Y,\"h\",\"\\u210E\");f(O,g,Y,\"h\",\"\\u210E\");var _=\"\";for(Ue=0;Ue<Oo.length;Ue++)ye=Oo.charAt(Ue),_=String.fromCharCode(55349,56320+Ue),f(m,g,Y,ye,_),f(O,g,C,ye,_),_=String.fromCharCode(55349,56372+Ue),f(m,g,Y,ye,_),f(O,g,C,ye,_),_=String.fromCharCode(55349,56424+Ue),f(m,g,Y,ye,_),f(O,g,C,ye,_),_=String.fromCharCode(55349,56580+Ue),f(m,g,Y,ye,_),f(O,g,C,ye,_),_=String.fromCharCode(55349,56684+Ue),f(m,g,Y,ye,_),f(O,g,C,ye,_),_=String.fromCharCode(55349,56736+Ue),f(m,g,Y,ye,_),f(O,g,C,ye,_),_=String.fromCharCode(55349,56788+Ue),f(m,g,Y,ye,_),f(O,g,C,ye,_),_=String.fromCharCode(55349,56840+Ue),f(m,g,Y,ye,_),f(O,g,C,ye,_),_=String.fromCharCode(55349,56944+Ue),f(m,g,Y,ye,_),f(O,g,C,ye,_),Ue<26&&(_=String.fromCharCode(55349,56632+Ue),f(m,g,Y,ye,_),f(O,g,C,ye,_),_=String.fromCharCode(55349,56476+Ue),f(m,g,Y,ye,_),f(O,g,C,ye,_));var ye,Ue;_=\"\\u{1D55C}\";f(m,g,Y,\"k\",_);f(O,g,C,\"k\",_);for(Tr=0;Tr<10;Tr++)cr=Tr.toString(),_=String.fromCharCode(55349,57294+Tr),f(m,g,Y,cr,_),f(O,g,C,cr,_),_=String.fromCharCode(55349,57314+Tr),f(m,g,Y,cr,_),f(O,g,C,cr,_),_=String.fromCharCode(55349,57324+Tr),f(m,g,Y,cr,_),f(O,g,C,cr,_),_=String.fromCharCode(55349,57334+Tr),f(m,g,Y,cr,_),f(O,g,C,cr,_);var cr,Tr,_0=\"\\xD0\\xDE\\xFE\";for(xo=0;xo<_0.length;xo++)An=_0.charAt(xo),f(m,g,Y,An,An),f(O,g,C,An,An);var 
An,xo,wo=[[\"mathbf\",\"textbf\",\"Main-Bold\"],[\"mathbf\",\"textbf\",\"Main-Bold\"],[\"mathnormal\",\"textit\",\"Math-Italic\"],[\"mathnormal\",\"textit\",\"Math-Italic\"],[\"boldsymbol\",\"boldsymbol\",\"Main-BoldItalic\"],[\"boldsymbol\",\"boldsymbol\",\"Main-BoldItalic\"],[\"mathscr\",\"textscr\",\"Script-Regular\"],[\"\",\"\",\"\"],[\"\",\"\",\"\"],[\"\",\"\",\"\"],[\"mathfrak\",\"textfrak\",\"Fraktur-Regular\"],[\"mathfrak\",\"textfrak\",\"Fraktur-Regular\"],[\"mathbb\",\"textbb\",\"AMS-Regular\"],[\"mathbb\",\"textbb\",\"AMS-Regular\"],[\"mathboldfrak\",\"textboldfrak\",\"Fraktur-Regular\"],[\"mathboldfrak\",\"textboldfrak\",\"Fraktur-Regular\"],[\"mathsf\",\"textsf\",\"SansSerif-Regular\"],[\"mathsf\",\"textsf\",\"SansSerif-Regular\"],[\"mathboldsf\",\"textboldsf\",\"SansSerif-Bold\"],[\"mathboldsf\",\"textboldsf\",\"SansSerif-Bold\"],[\"mathitsf\",\"textitsf\",\"SansSerif-Italic\"],[\"mathitsf\",\"textitsf\",\"SansSerif-Italic\"],[\"\",\"\",\"\"],[\"\",\"\",\"\"],[\"mathtt\",\"texttt\",\"Typewriter-Regular\"],[\"mathtt\",\"texttt\",\"Typewriter-Regular\"]],Kd=[[\"mathbf\",\"textbf\",\"Main-Bold\"],[\"\",\"\",\"\"],[\"mathsf\",\"textsf\",\"SansSerif-Regular\"],[\"mathboldsf\",\"textboldsf\",\"SansSerif-Bold\"],[\"mathtt\",\"texttt\",\"Typewriter-Regular\"]],g3=function(e,t){var r=e.charCodeAt(0),n=e.charCodeAt(1),s=(r-55296)*1024+(n-56320)+65536,o=t===\"math\"?0:1;if(119808<=s&&s<120484){var l=Math.floor((s-119808)/26);return[wo[l][2],wo[l][o]]}else if(120782<=s&&s<=120831){var a=Math.floor((s-120782)/10);return[Kd[a][2],Kd[a][o]]}else{if(s===120485||s===120486)return[wo[0][2],wo[0][o]];if(120486<s&&s<120782)return[\"\",\"\"];throw new I(\"Unsupported character: \"+e)}},Ho=function(e,t,r){return me[r][e]&&me[r][e].replace&&(e=me[r][e].replace),{value:e,metrics:ah(e,t,r)}},Qe=function(e,t,r,n,s){var o=Ho(e,t,r),l=o.metrics;e=o.value;var a;if(l){var h=l.italic;(r===\"text\"||n&&n.font===\"mathit\")&&(h=0),a=new at(e,l.height,l.depth,h,l.skew,l.width,s)}else 
typeof console<\"u\"&&console.warn(\"No character metrics \"+(\"for '\"+e+\"' in style '\"+t+\"' and mode '\"+r+\"'\")),a=new at(e,0,0,0,0,0,s);if(n){a.maxFontSize=n.sizeMultiplier,n.style.isTight()&&a.classes.push(\"mtight\");var c=n.getColor();c&&(a.style.color=c)}return a},hh=function(e,t,r,n){return n===void 0&&(n=[]),r.font===\"boldsymbol\"&&Ho(e,\"Main-Bold\",t).metrics?Qe(e,\"Main-Bold\",t,r,n.concat([\"mathbf\"])):e===\"\\\\\"||me[t][e].font===\"main\"?Qe(e,\"Main-Regular\",t,r,n):Qe(e,\"AMS-Regular\",t,r,n.concat([\"amsrm\"]))},v3=function(e,t,r,n,s){return s!==\"textord\"&&Ho(e,\"Math-BoldItalic\",t).metrics?{fontName:\"Math-BoldItalic\",fontClass:\"boldsymbol\"}:{fontName:\"Main-Bold\",fontClass:\"mathbf\"}},qo=function(e,t,r){var n=e.mode,s=e.text,o=[\"mord\"],l=n===\"math\"||n===\"text\"&&t.font,a=l?t.font:t.fontFamily,h=\"\",c=\"\";if(s.charCodeAt(0)===55349&&([h,c]=g3(s,n)),h.length>0)return Qe(s,h,n,t,o.concat(c));if(a){var u,d;if(a===\"boldsymbol\"){var p=v3(s,n,t,o,r);u=p.fontName,d=[p.fontClass]}else l?(u=J0[a].fontName,d=[a]):(u=ko(a,t.fontWeight,t.fontShape),d=[a,t.fontWeight,t.fontShape]);if(Ho(s,u,n).metrics)return Qe(s,u,n,t,o.concat(d));if(wm.hasOwnProperty(s)&&u.slice(0,10)===\"Typewriter\"){for(var v=[],y=0;y<s.length;y++)v.push(Qe(s[y],u,n,t,o.concat(d)));return pr(v)}}if(r===\"mathord\")return Qe(s,\"Math-Italic\",n,t,o.concat([\"mathnormal\"]));if(r===\"textord\"){var w=me[n][s]&&me[n][s].font;if(w===\"ams\"){var S=ko(\"amsrm\",t.fontWeight,t.fontShape);return Qe(s,S,n,t,o.concat(\"amsrm\",t.fontWeight,t.fontShape))}else if(w===\"main\"||!w){var A=ko(\"textrm\",t.fontWeight,t.fontShape);return Qe(s,A,n,t,o.concat(t.fontWeight,t.fontShape))}else{var M=ko(w,t.fontWeight,t.fontShape);return Qe(s,M,n,t,o.concat(M,t.fontWeight,t.fontShape))}}else throw new Error(\"unexpected type: \"+r+\" in 
makeOrd\")},b3=(i,e)=>{if(Br(i.classes)!==Br(e.classes)||i.skew!==e.skew||i.maxFontSize!==e.maxFontSize||i.italic!==0&&i.hasClass(\"mathnormal\"))return!1;if(i.classes.length===1){var t=i.classes[0];if(t===\"mbin\"||t===\"mord\")return!1}for(var r in i.style)if(i.style.hasOwnProperty(r)&&i.style[r]!==e.style[r])return!1;for(var n in e.style)if(e.style.hasOwnProperty(n)&&i.style[n]!==e.style[n])return!1;return!0},km=i=>{for(var e=0;e<i.length-1;e++){var t=i[e],r=i[e+1];t instanceof at&&r instanceof at&&b3(t,r)&&(t.text+=r.text,t.height=Math.max(t.height,r.height),t.depth=Math.max(t.depth,r.depth),t.italic=r.italic,i.splice(e+1,1),e--)}return i},ch=function(e){for(var t=0,r=0,n=0,s=0;s<e.children.length;s++){var o=e.children[s];o.height>t&&(t=o.height),o.depth>r&&(r=o.depth),o.maxFontSize>n&&(n=o.maxFontSize)}e.height=t,e.depth=r,e.maxFontSize=n},z=function(e,t,r,n){var s=new ti(e,t,r,n);return ch(s),s},Er=(i,e,t,r)=>new ti(i,e,t,r),Oi=function(e,t,r){var n=z([e],[],t);return n.height=Math.max(r||t.fontMetrics().defaultRuleThickness,t.minRuleThickness),n.style.borderBottomWidth=N(n.height),n.maxFontSize=1,n},y3=function(e,t,r,n){var s=new On(e,t,r,n);return ch(s),s},pr=function(e){var t=new ei(e);return ch(t),t},zi=function(e,t){return e instanceof ei?z([],[e],t):e},x3=function(e){if(e.positionType===\"individualShift\"){for(var t=e.children,r=[t[0]],n=-t[0].shift-t[0].elem.depth,s=n,o=1;o<t.length;o++){var l=-t[o].shift-s-t[o].elem.depth,a=l-(t[o-1].elem.height+t[o-1].elem.depth);s=s+l,r.push({type:\"kern\",size:a}),r.push(t[o])}return{children:r,depth:n}}var h;if(e.positionType===\"top\"){for(var c=e.positionData,u=0;u<e.children.length;u++){var d=e.children[u];c-=d.type===\"kern\"?d.size:d.elem.height+d.elem.depth}h=c}else if(e.positionType===\"bottom\")h=-e.positionData;else{var p=e.children[0];if(p.type!==\"elem\")throw new Error('First child must have type \"elem\".');if(e.positionType===\"shift\")h=-p.elem.depth-e.positionData;else 
if(e.positionType===\"firstBaseline\")h=-p.elem.depth;else throw new Error(\"Invalid positionType \"+e.positionType+\".\")}return{children:e.children,depth:h}},ae=function(e,t){for(var{children:r,depth:n}=x3(e),s=0,o=0;o<r.length;o++){var l=r[o];if(l.type===\"elem\"){var a=l.elem;s=Math.max(s,a.maxFontSize,a.height)}}s+=2;var h=z([\"pstrut\"],[]);h.style.height=N(s);for(var c=[],u=n,d=n,p=n,v=0;v<r.length;v++){var y=r[v];if(y.type===\"kern\")p+=y.size;else{var w=y.elem,S=y.wrapperClasses||[],A=y.wrapperStyle||{},M=z(S,[h,w],void 0,A);M.style.top=N(-s-p-w.depth),y.marginLeft&&(M.style.marginLeft=y.marginLeft),y.marginRight&&(M.style.marginRight=y.marginRight),c.push(M),p+=w.height+w.depth}u=Math.min(u,p),d=Math.max(d,p)}var E=z([\"vlist\"],c);E.style.height=N(d);var T;if(u<0){var B=z([],[]),D=z([\"vlist\"],[B]);D.style.height=N(-u);var V=z([\"vlist-s\"],[new at(\"\\u200B\")]);T=[z([\"vlist-r\"],[E,V]),z([\"vlist-r\"],[D])]}else T=[z([\"vlist-r\"],[E])];var U=z([\"vlist-t\"],T);return T.length===2&&U.classes.push(\"vlist-t2\"),U.height=d,U.depth=-u,U},Sm=(i,e)=>{var t=z([\"mspace\"],[],e),r=we(i,e);return t.style.marginRight=N(r),t},ko=function(e,t,r){var n=\"\";switch(e){case\"amsrm\":n=\"AMS\";break;case\"textrm\":n=\"Main\";break;case\"textsf\":n=\"SansSerif\";break;case\"texttt\":n=\"Typewriter\";break;default:n=e}var s;return 
t===\"textbf\"&&r===\"textit\"?s=\"BoldItalic\":t===\"textbf\"?s=\"Bold\":t===\"textit\"?s=\"Italic\":s=\"Regular\",n+\"-\"+s},J0={mathbf:{variant:\"bold\",fontName:\"Main-Bold\"},mathrm:{variant:\"normal\",fontName:\"Main-Regular\"},textit:{variant:\"italic\",fontName:\"Main-Italic\"},mathit:{variant:\"italic\",fontName:\"Main-Italic\"},mathnormal:{variant:\"italic\",fontName:\"Math-Italic\"},mathsfit:{variant:\"sans-serif-italic\",fontName:\"SansSerif-Italic\"},mathbb:{variant:\"double-struck\",fontName:\"AMS-Regular\"},mathcal:{variant:\"script\",fontName:\"Caligraphic-Regular\"},mathfrak:{variant:\"fraktur\",fontName:\"Fraktur-Regular\"},mathscr:{variant:\"script\",fontName:\"Script-Regular\"},mathsf:{variant:\"sans-serif\",fontName:\"SansSerif-Regular\"},mathtt:{variant:\"monospace\",fontName:\"Typewriter-Regular\"}},Cm={vec:[\"vec\",.471,.714],oiintSize1:[\"oiintSize1\",.957,.499],oiintSize2:[\"oiintSize2\",1.472,.659],oiiintSize1:[\"oiiintSize1\",1.304,.499],oiiintSize2:[\"oiiintSize2\",1.98,.659]},Am=function(e,t){var[r,n,s]=Cm[e],o=new Zt(r),l=new qt([o],{width:N(n),height:N(s),style:\"width:\"+N(n),viewBox:\"0 0 \"+1e3*n+\" \"+1e3*s,preserveAspectRatio:\"xMinYMin\"}),a=Er([\"overlay\"],[l],t);return a.height=s,a.style.height=N(s),a.style.width=N(n),a},xe={number:3,unit:\"mu\"},Qr={number:4,unit:\"mu\"},ur={number:5,unit:\"mu\"},w3={mord:{mop:xe,mbin:Qr,mrel:ur,minner:xe},mop:{mord:xe,mop:xe,mrel:ur,minner:xe},mbin:{mord:Qr,mop:Qr,mopen:Qr,minner:Qr},mrel:{mord:ur,mop:ur,mopen:ur,minner:ur},mopen:{},mclose:{mop:xe,mbin:Qr,mrel:ur,minner:xe},mpunct:{mord:xe,mop:xe,mrel:ur,mopen:xe,mclose:xe,mpunct:xe,minner:xe},minner:{mord:xe,mop:xe,mbin:Qr,mrel:ur,mopen:xe,mpunct:xe,minner:xe}},k3={mord:{mop:xe},mop:{mord:xe,mop:xe},mbin:{},mrel:{},mopen:{},mclose:{mop:xe},mpunct:{},minner:{mop:xe}},Mm={},zo={},Lo={};function 
q(i){for(var{type:e,names:t,props:r,handler:n,htmlBuilder:s,mathmlBuilder:o}=i,l={type:e,numArgs:r.numArgs,argTypes:r.argTypes,allowedInArgument:!!r.allowedInArgument,allowedInText:!!r.allowedInText,allowedInMath:r.allowedInMath===void 0?!0:r.allowedInMath,numOptionalArgs:r.numOptionalArgs||0,infix:!!r.infix,primitive:!!r.primitive,handler:n},a=0;a<t.length;++a)Mm[t[a]]=l;e&&(s&&(zo[e]=s),o&&(Lo[e]=o))}function ri(i){var{type:e,htmlBuilder:t,mathmlBuilder:r}=i;q({type:e,names:[],props:{numArgs:0},handler(){throw new Error(\"Should never be called.\")},htmlBuilder:t,mathmlBuilder:r})}var Io=function(e){return e.type===\"ordgroup\"&&e.body.length===1?e.body[0]:e},De=function(e){return e.type===\"ordgroup\"?e.body:[e]},S3=new Set([\"leftmost\",\"mbin\",\"mopen\",\"mrel\",\"mop\",\"mpunct\"]),C3=new Set([\"rightmost\",\"mrel\",\"mclose\",\"mpunct\"]),A3={display:Z.DISPLAY,text:Z.TEXT,script:Z.SCRIPT,scriptscript:Z.SCRIPTSCRIPT},M3={mord:\"mord\",mop:\"mop\",mbin:\"mbin\",mrel:\"mrel\",mopen:\"mopen\",mclose:\"mclose\",mpunct:\"mpunct\",minner:\"minner\"},Pe=function(e,t,r,n){n===void 0&&(n=[null,null]);for(var s=[],o=0;o<e.length;o++){var l=oe(e[o],t);if(l instanceof ei){var a=l.children;s.push(...a)}else s.push(l)}if(km(s),!r)return s;var h=t;if(e.length===1){var c=e[0];c.type===\"sizing\"?h=t.havingSize(c.size):c.type===\"styling\"&&(h=t.havingStyle(A3[c.style]))}var u=z([n[0]||\"leftmost\"],[],t),d=z([n[1]||\"rightmost\"],[],t),p=r===\"root\";return jd(s,(v,y)=>{var w=y.classes[0],S=v.classes[0];w===\"mbin\"&&C3.has(S)?y.classes[0]=\"mord\":S===\"mbin\"&&S3.has(w)&&(v.classes[0]=\"mord\")},{node:u},d,p),jd(s,(v,y)=>{var w=Z0(y),S=Z0(v),A=w&&S?v.hasClass(\"mtight\")?k3[w][S]:w3[w][S]:null;if(A)return Sm(A,h)},{node:u},d,p),s},jd=function i(e,t,r,n,s){n&&e.push(n);for(var o=0;o<e.length;o++){var l=e[o],a=Tm(l);if(a){i(a.children,t,r,null,s);continue}var h=!l.hasClass(\"mspace\");if(h){var 
c=t(l,r.node);c&&(r.insertAfter?r.insertAfter(c):(e.unshift(c),o++))}h?r.node=l:s&&l.hasClass(\"newline\")&&(r.node=z([\"leftmost\"])),r.insertAfter=(u=>d=>{e.splice(u+1,0,d),o++})(o)}n&&e.pop()},Tm=function(e){return e instanceof ei||e instanceof On||e instanceof ti&&e.hasClass(\"enclosing\")?e:null},T3=function i(e,t){var r=Tm(e);if(r){var n=r.children;if(n.length){if(t===\"right\")return i(n[n.length-1],\"right\");if(t===\"left\")return i(n[0],\"left\")}}return e},Z0=function(e,t){return e?(t&&(e=T3(e,t)),M3[e.classes[0]]||null):null},Ln=function(e,t){var r=[\"nulldelimiter\"].concat(e.baseSizingClasses());return z(t.concat(r))},oe=function(e,t,r){if(!e)return z();if(zo[e.type]){var n=zo[e.type](e,t);if(r&&t.size!==r.size){n=z(t.sizingClasses(r),[n],t);var s=t.sizeMultiplier/r.sizeMultiplier;n.height*=s,n.depth*=s}return n}else throw new I(\"Got group of unknown type: '\"+e.type+\"'\")};function So(i,e){var t=z([\"base\"],i,e),r=z([\"strut\"]);return r.style.height=N(t.height+t.depth),t.depth&&(r.style.verticalAlign=N(-t.depth)),t.children.unshift(r),t}function Q0(i,e){var t=null;i.length===1&&i[0].type===\"tag\"&&(t=i[0].tag,i=i[0].body);var r=Pe(i,e,\"root\"),n;r.length===2&&r[1].hasClass(\"tag\")&&(n=r.pop());for(var s=[],o=[],l=0;l<r.length;l++)if(o.push(r[l]),r[l].hasClass(\"mbin\")||r[l].hasClass(\"mrel\")||r[l].hasClass(\"allowbreak\")){for(var a=!1;l<r.length-1&&r[l+1].hasClass(\"mspace\")&&!r[l+1].hasClass(\"newline\");)l++,o.push(r[l]),r[l].hasClass(\"nobreak\")&&(a=!0);a||(s.push(So(o,e)),o=[])}else r[l].hasClass(\"newline\")&&(o.pop(),o.length>0&&(s.push(So(o,e)),o=[]),s.push(r[l]));o.length>0&&s.push(So(o,e));var h;t?(h=So(Pe(t,e,!0)),h.classes=[\"tag\"],s.push(h)):n&&s.push(n);var c=z([\"katex-html\"],s);if(c.setAttribute(\"aria-hidden\",\"true\"),h){var u=h.children[0];u.style.height=N(c.height+c.depth),c.depth&&(u.style.verticalAlign=N(-c.depth))}return c}function Dm(i){return new ei(i)}var L=class{constructor(e,t,r){this.type=void 
0,this.attributes=void 0,this.children=void 0,this.classes=void 0,this.type=e,this.attributes={},this.children=t||[],this.classes=r||[]}setAttribute(e,t){this.attributes[e]=t}getAttribute(e){return this.attributes[e]}toNode(){var e=document.createElementNS(\"http://www.w3.org/1998/Math/MathML\",this.type);for(var t in this.attributes)Object.prototype.hasOwnProperty.call(this.attributes,t)&&e.setAttribute(t,this.attributes[t]);this.classes.length>0&&(e.className=Br(this.classes));for(var r=0;r<this.children.length;r++)if(this.children[r]instanceof Ae&&this.children[r+1]instanceof Ae){for(var n=this.children[r].toText()+this.children[++r].toText();this.children[r+1]instanceof Ae;)n+=this.children[++r].toText();e.appendChild(new Ae(n).toNode())}else e.appendChild(this.children[r].toNode());return e}toMarkup(){var e=\"<\"+this.type;for(var t in this.attributes)Object.prototype.hasOwnProperty.call(this.attributes,t)&&(e+=\" \"+t+'=\"',e+=Ke(this.attributes[t]),e+='\"');this.classes.length>0&&(e+=' class =\"'+Ke(Br(this.classes))+'\"'),e+=\">\";for(var r=0;r<this.children.length;r++)e+=this.children[r].toMarkup();return e+=\"</\"+this.type+\">\",e}toText(){return this.children.map(e=>e.toText()).join(\"\")}},Ae=class{constructor(e){this.text=void 0,this.text=e}toNode(){return document.createTextNode(this.text)}toMarkup(){return Ke(this.toText())}toText(){return this.text}},Ro=class{constructor(e){this.width=void 0,this.character=void 0,this.width=e,e>=.05555&&e<=.05556?this.character=\"\\u200A\":e>=.1666&&e<=.1667?this.character=\"\\u2009\":e>=.2222&&e<=.2223?this.character=\"\\u2005\":e>=.2777&&e<=.2778?this.character=\"\\u2005\\u200A\":e>=-.05556&&e<=-.05555?this.character=\"\\u200A\\u2063\":e>=-.1667&&e<=-.1666?this.character=\"\\u2009\\u2063\":e>=-.2223&&e<=-.2222?this.character=\"\\u205F\\u2063\":e>=-.2778&&e<=-.2777?this.character=\"\\u2005\\u2063\":this.character=null}toNode(){if(this.character)return document.createTextNode(this.character);var 
e=document.createElementNS(\"http://www.w3.org/1998/Math/MathML\",\"mspace\");return e.setAttribute(\"width\",N(this.width)),e}toMarkup(){return this.character?\"<mtext>\"+this.character+\"</mtext>\":'<mspace width=\"'+N(this.width)+'\"/>'}toText(){return this.character?this.character:\" \"}},D3=new Set([\"\\\\imath\",\"\\\\jmath\"]),B3=new Set([\"mrow\",\"mtable\"]),Dt=function(e,t,r){return me[t][e]&&me[t][e].replace&&e.charCodeAt(0)!==55349&&!(wm.hasOwnProperty(e)&&r&&(r.fontFamily&&r.fontFamily.slice(4,6)===\"tt\"||r.font&&r.font.slice(4,6)===\"tt\"))&&(e=me[t][e].replace),new Ae(e)},uh=function(e){return e.length===1?e[0]:new L(\"mrow\",e)},fh=function(e,t){if(t.fontFamily===\"texttt\")return\"monospace\";if(t.fontFamily===\"textsf\")return t.fontShape===\"textit\"&&t.fontWeight===\"textbf\"?\"sans-serif-bold-italic\":t.fontShape===\"textit\"?\"sans-serif-italic\":t.fontWeight===\"textbf\"?\"bold-sans-serif\":\"sans-serif\";if(t.fontShape===\"textit\"&&t.fontWeight===\"textbf\")return\"bold-italic\";if(t.fontShape===\"textit\")return\"italic\";if(t.fontWeight===\"textbf\")return\"bold\";var r=t.font;if(!r||r===\"mathnormal\")return null;var n=e.mode;if(r===\"mathit\")return\"italic\";if(r===\"boldsymbol\")return e.type===\"textord\"?\"bold\":\"bold-italic\";if(r===\"mathbf\")return\"bold\";if(r===\"mathbb\")return\"double-struck\";if(r===\"mathsfit\")return\"sans-serif-italic\";if(r===\"mathfrak\")return\"fraktur\";if(r===\"mathscr\"||r===\"mathcal\")return\"script\";if(r===\"mathsf\")return\"sans-serif\";if(r===\"mathtt\")return\"monospace\";var s=e.text;if(D3.has(s))return null;me[n][s]&&me[n][s].replace&&(s=me[n][s].replace);var o=J0[r].fontName;return ah(s,o,n)?J0[r].variant:null};function P0(i){if(!i)return!1;if(i.type===\"mi\"&&i.children.length===1){var e=i.children[0];return e instanceof Ae&&e.text===\".\"}else 
if(i.type===\"mo\"&&i.children.length===1&&i.getAttribute(\"separator\")===\"true\"&&i.getAttribute(\"lspace\")===\"0em\"&&i.getAttribute(\"rspace\")===\"0em\"){var t=i.children[0];return t instanceof Ae&&t.text===\",\"}else return!1}var ht=function(e,t,r){if(e.length===1){var n=ue(e[0],t);return r&&n instanceof L&&n.type===\"mo\"&&(n.setAttribute(\"lspace\",\"0em\"),n.setAttribute(\"rspace\",\"0em\")),[n]}for(var s=[],o,l=0;l<e.length;l++){var a=ue(e[l],t);if(a instanceof L&&o instanceof L){if(a.type===\"mtext\"&&o.type===\"mtext\"&&a.getAttribute(\"mathvariant\")===o.getAttribute(\"mathvariant\")){o.children.push(...a.children);continue}else if(a.type===\"mn\"&&o.type===\"mn\"){o.children.push(...a.children);continue}else if(P0(a)&&o.type===\"mn\"){o.children.push(...a.children);continue}else if(a.type===\"mn\"&&P0(o))a.children=[...o.children,...a.children],s.pop();else if((a.type===\"msup\"||a.type===\"msub\")&&a.children.length>=1&&(o.type===\"mn\"||P0(o))){var h=a.children[0];h instanceof L&&h.type===\"mn\"&&(h.children=[...o.children,...h.children],s.pop())}else if(o.type===\"mi\"&&o.children.length===1){var c=o.children[0];if(c instanceof Ae&&c.text===\"\\u0338\"&&(a.type===\"mo\"||a.type===\"mi\"||a.type===\"mn\")){var u=a.children[0];u instanceof Ae&&u.text.length>0&&(u.text=u.text.slice(0,1)+\"\\u0338\"+u.text.slice(1),s.pop())}}}s.push(a),o=a}return s},Or=function(e,t,r){return uh(ht(e,t,r))},ue=function(e,t){if(!e)return new L(\"mrow\");if(Lo[e.type]){var r=Lo[e.type](e,t);return r}else throw new I(\"Got group of unknown type: '\"+e.type+\"'\")};function Yd(i,e,t,r,n){var s=ht(i,t),o;s.length===1&&s[0]instanceof L&&B3.has(s[0].type)?o=s[0]:o=new L(\"mrow\",s);var l=new L(\"annotation\",[new Ae(e)]);l.setAttribute(\"encoding\",\"application/x-tex\");var a=new L(\"semantics\",[o,l]),h=new L(\"math\",[a]);h.setAttribute(\"xmlns\",\"http://www.w3.org/1998/Math/MathML\"),r&&h.setAttribute(\"display\",\"block\");var c=n?\"katex\":\"katex-mathml\";return 
z([c],[h])}var Bm=function(e){return new Eo({style:e.displayMode?Z.DISPLAY:Z.TEXT,maxSize:e.maxSize,minRuleThickness:e.minRuleThickness})},Em=function(e,t){if(t.displayMode){var r=[\"katex-display\"];t.leqno&&r.push(\"leqno\"),t.fleqn&&r.push(\"fleqn\"),e=z(r,[e])}return e},E3=function(e,t,r){var n=Bm(r),s;if(r.output===\"mathml\")return Yd(e,t,n,r.displayMode,!0);if(r.output===\"html\"){var o=Q0(e,n);s=z([\"katex\"],[o])}else{var l=Yd(e,t,n,r.displayMode,!1),a=Q0(e,n);s=z([\"katex\"],[l,a])}return Em(s,r)},O3=function(e,t,r){var n=Bm(r),s=Q0(e,n),o=z([\"katex\"],[s]);return Em(o,r)},z3={widehat:\"^\",widecheck:\"\\u02C7\",widetilde:\"~\",utilde:\"~\",overleftarrow:\"\\u2190\",underleftarrow:\"\\u2190\",xleftarrow:\"\\u2190\",overrightarrow:\"\\u2192\",underrightarrow:\"\\u2192\",xrightarrow:\"\\u2192\",underbrace:\"\\u23DF\",overbrace:\"\\u23DE\",overgroup:\"\\u23E0\",undergroup:\"\\u23E1\",overleftrightarrow:\"\\u2194\",underleftrightarrow:\"\\u2194\",xleftrightarrow:\"\\u2194\",Overrightarrow:\"\\u21D2\",xRightarrow:\"\\u21D2\",overleftharpoon:\"\\u21BC\",xleftharpoonup:\"\\u21BC\",overrightharpoon:\"\\u21C0\",xrightharpoonup:\"\\u21C0\",xLeftarrow:\"\\u21D0\",xLeftrightarrow:\"\\u21D4\",xhookleftarrow:\"\\u21A9\",xhookrightarrow:\"\\u21AA\",xmapsto:\"\\u21A6\",xrightharpoondown:\"\\u21C1\",xleftharpoondown:\"\\u21BD\",xrightleftharpoons:\"\\u21CC\",xleftrightharpoons:\"\\u21CB\",xtwoheadleftarrow:\"\\u219E\",xtwoheadrightarrow:\"\\u21A0\",xlongequal:\"=\",xtofrom:\"\\u21C4\",xrightleftarrows:\"\\u21C4\",xrightequilibrium:\"\\u21CC\",xleftequilibrium:\"\\u21CB\",\"\\\\cdrightarrow\":\"\\u2192\",\"\\\\cdleftarrow\":\"\\u2190\",\"\\\\cdlongequal\":\"=\"},Wo=function(e){var t=new L(\"mo\",[new Ae(z3[e.replace(/^\\\\/,\"\")])]);return 
t.setAttribute(\"stretchy\",\"true\"),t},L3={overrightarrow:[[\"rightarrow\"],.888,522,\"xMaxYMin\"],overleftarrow:[[\"leftarrow\"],.888,522,\"xMinYMin\"],underrightarrow:[[\"rightarrow\"],.888,522,\"xMaxYMin\"],underleftarrow:[[\"leftarrow\"],.888,522,\"xMinYMin\"],xrightarrow:[[\"rightarrow\"],1.469,522,\"xMaxYMin\"],\"\\\\cdrightarrow\":[[\"rightarrow\"],3,522,\"xMaxYMin\"],xleftarrow:[[\"leftarrow\"],1.469,522,\"xMinYMin\"],\"\\\\cdleftarrow\":[[\"leftarrow\"],3,522,\"xMinYMin\"],Overrightarrow:[[\"doublerightarrow\"],.888,560,\"xMaxYMin\"],xRightarrow:[[\"doublerightarrow\"],1.526,560,\"xMaxYMin\"],xLeftarrow:[[\"doubleleftarrow\"],1.526,560,\"xMinYMin\"],overleftharpoon:[[\"leftharpoon\"],.888,522,\"xMinYMin\"],xleftharpoonup:[[\"leftharpoon\"],.888,522,\"xMinYMin\"],xleftharpoondown:[[\"leftharpoondown\"],.888,522,\"xMinYMin\"],overrightharpoon:[[\"rightharpoon\"],.888,522,\"xMaxYMin\"],xrightharpoonup:[[\"rightharpoon\"],.888,522,\"xMaxYMin\"],xrightharpoondown:[[\"rightharpoondown\"],.888,522,\"xMaxYMin\"],xlongequal:[[\"longequal\"],.888,334,\"xMinYMin\"],\"\\\\cdlongequal\":[[\"longequal\"],3,334,\"xMinYMin\"],xtwoheadleftarrow:[[\"twoheadleftarrow\"],.888,334,\"xMinYMin\"],xtwoheadrightarrow:[[\"twoheadrightarrow\"],.888,334,\"xMaxYMin\"],overleftrightarrow:[[\"leftarrow\",\"rightarrow\"],.888,522],overbrace:[[\"leftbrace\",\"midbrace\",\"rightbrace\"],1.6,548],underbrace:[[\"leftbraceunder\",\"midbraceunder\",\"rightbraceunder\"],1.6,548],underleftrightarrow:[[\"leftarrow\",\"rightarrow\"],.888,522],xleftrightarrow:[[\"leftarrow\",\"rightarrow\"],1.75,522],xLeftrightarrow:[[\"doubleleftarrow\",\"doublerightarrow\"],1.75,560],xrightleftharpoons:[[\"leftharpoondownplus\",\"rightharpoonplus\"],1.75,716],xleftrightharpoons:[[\"leftharpoonplus\",\"rightharpoondownplus\"],1.75,716],xhookleftarrow:[[\"leftarrow\",\"righthook\"],1.08,522],xhookrightarrow:[[\"lefthook\",\"rightarrow\"],1.08,522],overlinesegment:[[\"leftlinesegment\",\"rightlinesegment\"],.888,52
2],underlinesegment:[[\"leftlinesegment\",\"rightlinesegment\"],.888,522],overgroup:[[\"leftgroup\",\"rightgroup\"],.888,342],undergroup:[[\"leftgroupunder\",\"rightgroupunder\"],.888,342],xmapsto:[[\"leftmapsto\",\"rightarrow\"],1.5,522],xtofrom:[[\"leftToFrom\",\"rightToFrom\"],1.75,528],xrightleftarrows:[[\"baraboveleftarrow\",\"rightarrowabovebar\"],1.75,901],xrightequilibrium:[[\"baraboveshortleftharpoon\",\"rightharpoonaboveshortbar\"],1.75,716],xleftequilibrium:[[\"shortbaraboveleftharpoon\",\"shortrightharpoonabovebar\"],1.75,716]},I3=new Set([\"widehat\",\"widecheck\",\"widetilde\",\"utilde\"]),Vo=function(e,t){function r(){var l=4e5,a=e.label.slice(1);if(I3.has(a)){var h=e,c=h.base.type===\"ordgroup\"?h.base.body.length:1,u,d,p;if(c>5)a===\"widehat\"||a===\"widecheck\"?(u=420,l=2364,p=.42,d=a+\"4\"):(u=312,l=2340,p=.34,d=\"tilde4\");else{var v=[1,1,2,2,3,3][c];a===\"widehat\"||a===\"widecheck\"?(l=[0,1062,2364,2364,2364][v],u=[0,239,300,360,420][v],p=[0,.24,.3,.3,.36,.42][v],d=a+v):(l=[0,600,1033,2339,2340][v],u=[0,260,286,306,312][v],p=[0,.26,.286,.3,.306,.34][v],d=\"tilde\"+v)}var y=new Zt(d),w=new qt([y],{width:\"100%\",height:N(p),viewBox:\"0 0 \"+l+\" \"+u,preserveAspectRatio:\"none\"});return{span:Er([],[w],t),minWidth:0,height:p}}else{var S=[],A=L3[a],[M,E,T]=A,B=T/1e3,D=M.length,V,U;if(D===1){var ie=A[3];V=[\"hide-tail\"],U=[ie]}else if(D===2)V=[\"halfarrow-left\",\"halfarrow-right\"],U=[\"xMinYMin\",\"xMaxYMin\"];else if(D===3)V=[\"brace-left\",\"brace-center\",\"brace-right\"],U=[\"xMinYMin\",\"xMidYMin\",\"xMaxYMin\"];else throw new Error(`Correct katexImagesData or update code here to support\n                    `+D+\" children.\");for(var j=0;j<D;j++){var $=new Zt(M[j]),ne=new qt([$],{width:\"400em\",height:N(B),viewBox:\"0 0 \"+l+\" \"+T,preserveAspectRatio:U[j]+\" 
slice\"}),J=Er([V[j]],[ne],t);if(D===1)return{span:J,minWidth:E,height:B};J.style.height=N(B),S.push(J)}return{span:z([\"stretchy\"],S,t),minWidth:E,height:B}}}var{span:n,minWidth:s,height:o}=r();return n.height=o,n.style.height=N(o),s>0&&(n.style.minWidth=N(s)),n},R3=function(e,t,r,n,s){var o,l=e.height+e.depth+r+n;if(/fbox|color|angl/.test(t)){if(o=z([\"stretchy\",t],[],s),t===\"fbox\"){var a=s.color&&s.getColor();a&&(o.style.borderColor=a)}}else{var h=[];/^[bx]cancel$/.test(t)&&h.push(new zn({x1:\"0\",y1:\"0\",x2:\"100%\",y2:\"100%\",\"stroke-width\":\"0.046em\"})),/^x?cancel$/.test(t)&&h.push(new zn({x1:\"0\",y1:\"100%\",x2:\"100%\",y2:\"0\",\"stroke-width\":\"0.046em\"}));var c=new qt(h,{width:\"100%\",height:N(l)});o=Er([],[c],s)}return o.height=l,o.style.height=N(l),o};function Q(i,e){if(!i||i.type!==e)throw new Error(\"Expected node of type \"+e+\", but got \"+(i?\"node of type \"+i.type:String(i)));return i}function dh(i){var e=$o(i);if(!e)throw new Error(\"Expected node of symbol group type, but got \"+(i?\"node of type \"+i.type:String(i)));return e}function $o(i){return i&&(i.type===\"atom\"||p3.hasOwnProperty(i.type))?i:null}var mh=(i,e)=>{var t,r,n;i&&i.type===\"supsub\"?(r=Q(i.base,\"accent\"),t=r.base,i.base=t,n=d3(oe(i,e)),i.base=r):(r=Q(i,\"accent\"),t=r.base);var s=oe(t,e.havingCrampedStyle()),o=r.isShifty&&dr(t),l=0;if(o){var a=Tn(t),h=oe(a,e.havingCrampedStyle());l=$d(h).skew}var c=r.label===\"\\\\c\",u=c?s.height+s.depth:Math.min(s.height,e.fontMetrics().xHeight),d;if(r.isStretchy)d=Vo(r,e),d=ae({positionType:\"firstBaseline\",children:[{type:\"elem\",elem:s},{type:\"elem\",elem:d,wrapperClasses:[\"svg-align\"],wrapperStyle:l>0?{width:\"calc(100% - \"+N(2*l)+\")\",marginLeft:N(2*l)}:void 0}]});else{var p,v;r.label===\"\\\\vec\"?(p=Am(\"vec\",e),v=Cm.vec[1]):(p=qo({mode:r.mode,text:r.label},e,\"textord\"),p=$d(p),p.italic=0,v=p.width,c&&(u+=p.depth)),d=z([\"accent-body\"],[p]);var 
y=r.label===\"\\\\textcircled\";y&&(d.classes.push(\"accent-full\"),u=s.height);var w=l;y||(w-=v/2),d.style.left=N(w),r.label===\"\\\\textcircled\"&&(d.style.top=\".2em\"),d=ae({positionType:\"firstBaseline\",children:[{type:\"elem\",elem:s},{type:\"kern\",size:-u},{type:\"elem\",elem:d}]})}var S=z([\"mord\",\"accent\"],[d],e);return n?(n.children[0]=S,n.height=Math.max(S.height,n.height),n.classes[0]=\"mord\",n):S},Om=(i,e)=>{var t=i.isStretchy?Wo(i.label):new L(\"mo\",[Dt(i.label,i.mode)]),r=new L(\"mover\",[ue(i.base,e),t]);return r.setAttribute(\"accent\",\"true\"),r},P3=new RegExp([\"\\\\acute\",\"\\\\grave\",\"\\\\ddot\",\"\\\\tilde\",\"\\\\bar\",\"\\\\breve\",\"\\\\check\",\"\\\\hat\",\"\\\\vec\",\"\\\\dot\",\"\\\\mathring\"].map(i=>\"\\\\\"+i).join(\"|\"));q({type:\"accent\",names:[\"\\\\acute\",\"\\\\grave\",\"\\\\ddot\",\"\\\\tilde\",\"\\\\bar\",\"\\\\breve\",\"\\\\check\",\"\\\\hat\",\"\\\\vec\",\"\\\\dot\",\"\\\\mathring\",\"\\\\widecheck\",\"\\\\widehat\",\"\\\\widetilde\",\"\\\\overrightarrow\",\"\\\\overleftarrow\",\"\\\\Overrightarrow\",\"\\\\overleftrightarrow\",\"\\\\overgroup\",\"\\\\overlinesegment\",\"\\\\overleftharpoon\",\"\\\\overrightharpoon\"],props:{numArgs:1},handler:(i,e)=>{var t=Io(e[0]),r=!P3.test(i.funcName),n=!r||i.funcName===\"\\\\widehat\"||i.funcName===\"\\\\widetilde\"||i.funcName===\"\\\\widecheck\";return{type:\"accent\",mode:i.parser.mode,label:i.funcName,isStretchy:r,isShifty:n,base:t}},htmlBuilder:mh,mathmlBuilder:Om});q({type:\"accent\",names:[\"\\\\'\",\"\\\\`\",\"\\\\^\",\"\\\\~\",\"\\\\=\",\"\\\\u\",\"\\\\.\",'\\\\\"',\"\\\\c\",\"\\\\r\",\"\\\\H\",\"\\\\v\",\"\\\\textcircled\"],props:{numArgs:1,allowedInText:!0,allowedInMath:!0,argTypes:[\"primitive\"]},handler:(i,e)=>{var t=e[0],r=i.parser.mode;return r===\"math\"&&(i.parser.settings.reportNonstrict(\"mathVsTextAccents\",\"LaTeX's accent \"+i.funcName+\" works only in text 
mode\"),r=\"text\"),{type:\"accent\",mode:r,label:i.funcName,isStretchy:!1,isShifty:!0,base:t}},htmlBuilder:mh,mathmlBuilder:Om});q({type:\"accentUnder\",names:[\"\\\\underleftarrow\",\"\\\\underrightarrow\",\"\\\\underleftrightarrow\",\"\\\\undergroup\",\"\\\\underlinesegment\",\"\\\\utilde\"],props:{numArgs:1},handler:(i,e)=>{var{parser:t,funcName:r}=i,n=e[0];return{type:\"accentUnder\",mode:t.mode,label:r,base:n}},htmlBuilder:(i,e)=>{var t=oe(i.base,e),r=Vo(i,e),n=i.label===\"\\\\utilde\"?.12:0,s=ae({positionType:\"top\",positionData:t.height,children:[{type:\"elem\",elem:r,wrapperClasses:[\"svg-align\"]},{type:\"kern\",size:n},{type:\"elem\",elem:t}]});return z([\"mord\",\"accentunder\"],[s],e)},mathmlBuilder:(i,e)=>{var t=Wo(i.label),r=new L(\"munder\",[ue(i.base,e),t]);return r.setAttribute(\"accentunder\",\"true\"),r}});var Co=i=>{var e=new L(\"mpadded\",i?[i]:[]);return e.setAttribute(\"width\",\"+0.6em\"),e.setAttribute(\"lspace\",\"0.3em\"),e};q({type:\"xArrow\",names:[\"\\\\xleftarrow\",\"\\\\xrightarrow\",\"\\\\xLeftarrow\",\"\\\\xRightarrow\",\"\\\\xleftrightarrow\",\"\\\\xLeftrightarrow\",\"\\\\xhookleftarrow\",\"\\\\xhookrightarrow\",\"\\\\xmapsto\",\"\\\\xrightharpoondown\",\"\\\\xrightharpoonup\",\"\\\\xleftharpoondown\",\"\\\\xleftharpoonup\",\"\\\\xrightleftharpoons\",\"\\\\xleftrightharpoons\",\"\\\\xlongequal\",\"\\\\xtwoheadrightarrow\",\"\\\\xtwoheadleftarrow\",\"\\\\xtofrom\",\"\\\\xrightleftarrows\",\"\\\\xrightequilibrium\",\"\\\\xleftequilibrium\",\"\\\\\\\\cdrightarrow\",\"\\\\\\\\cdleftarrow\",\"\\\\\\\\cdlongequal\"],props:{numArgs:1,numOptionalArgs:1},handler(i,e,t){var{parser:r,funcName:n}=i;return{type:\"xArrow\",mode:r.mode,label:n,body:e[0],below:t[0]}},htmlBuilder(i,e){var t=e.style,r=e.havingStyle(t.sup()),n=zi(oe(i.body,r,e),e),s=i.label.slice(0,2)===\"\\\\x\"?\"x\":\"cd\";n.classes.push(s+\"-arrow-pad\");var o;i.below&&(r=e.havingStyle(t.sub()),o=zi(oe(i.below,r,e),e),o.classes.push(s+\"-arrow-pad\"));var 
l=Vo(i,e),a=-e.fontMetrics().axisHeight+.5*l.height,h=-e.fontMetrics().axisHeight-.5*l.height-.111;(n.depth>.25||i.label===\"\\\\xleftequilibrium\")&&(h-=n.depth);var c;if(o){var u=-e.fontMetrics().axisHeight+o.height+.5*l.height+.111;c=ae({positionType:\"individualShift\",children:[{type:\"elem\",elem:n,shift:h},{type:\"elem\",elem:l,shift:a},{type:\"elem\",elem:o,shift:u}]})}else c=ae({positionType:\"individualShift\",children:[{type:\"elem\",elem:n,shift:h},{type:\"elem\",elem:l,shift:a}]});return c.children[0].children[0].children[1].classes.push(\"svg-align\"),z([\"mrel\",\"x-arrow\"],[c],e)},mathmlBuilder(i,e){var t=Wo(i.label);t.setAttribute(\"minsize\",i.label.charAt(0)===\"x\"?\"1.75em\":\"3.0em\");var r;if(i.body){var n=Co(ue(i.body,e));if(i.below){var s=Co(ue(i.below,e));r=new L(\"munderover\",[t,s,n])}else r=new L(\"mover\",[t,n])}else if(i.below){var o=Co(ue(i.below,e));r=new L(\"munder\",[t,o])}else r=Co(),r=new L(\"mover\",[t,r]);return r}});function zm(i,e){var t=Pe(i.body,e,!0);return z([i.mclass],t,e)}function Lm(i,e){var t,r=ht(i.body,e);return i.mclass===\"minner\"?t=new L(\"mpadded\",r):i.mclass===\"mord\"?i.isCharacterBox?(t=r[0],t.type=\"mi\"):t=new L(\"mi\",r):(i.isCharacterBox?(t=r[0],t.type=\"mo\"):t=new L(\"mo\",r),i.mclass===\"mbin\"?(t.attributes.lspace=\"0.22em\",t.attributes.rspace=\"0.22em\"):i.mclass===\"mpunct\"?(t.attributes.lspace=\"0em\",t.attributes.rspace=\"0.17em\"):i.mclass===\"mopen\"||i.mclass===\"mclose\"?(t.attributes.lspace=\"0em\",t.attributes.rspace=\"0em\"):i.mclass===\"minner\"&&(t.attributes.lspace=\"0.0556em\",t.attributes.width=\"+0.1111em\")),t}q({type:\"mclass\",names:[\"\\\\mathord\",\"\\\\mathbin\",\"\\\\mathrel\",\"\\\\mathopen\",\"\\\\mathclose\",\"\\\\mathpunct\",\"\\\\mathinner\"],props:{numArgs:1,primitive:!0},handler(i,e){var{parser:t,funcName:r}=i,n=e[0];return{type:\"mclass\",mode:t.mode,mclass:\"m\"+r.slice(5),body:De(n),isCharacterBox:dr(n)}},htmlBuilder:zm,mathmlBuilder:Lm});var Go=i=>{var 
e=i.type===\"ordgroup\"&&i.body.length?i.body[0]:i;return e.type===\"atom\"&&(e.family===\"bin\"||e.family===\"rel\")?\"m\"+e.family:\"mord\"};q({type:\"mclass\",names:[\"\\\\@binrel\"],props:{numArgs:2},handler(i,e){var{parser:t}=i;return{type:\"mclass\",mode:t.mode,mclass:Go(e[0]),body:De(e[1]),isCharacterBox:dr(e[1])}}});q({type:\"mclass\",names:[\"\\\\stackrel\",\"\\\\overset\",\"\\\\underset\"],props:{numArgs:2},handler(i,e){var{parser:t,funcName:r}=i,n=e[1],s=e[0],o;r!==\"\\\\stackrel\"?o=Go(n):o=\"mrel\";var l={type:\"op\",mode:n.mode,limits:!0,alwaysHandleSupSub:!0,parentIsSupSub:!1,symbol:!1,suppressBaseShift:r!==\"\\\\stackrel\",body:De(n)},a={type:\"supsub\",mode:s.mode,base:l,sup:r===\"\\\\underset\"?null:s,sub:r===\"\\\\underset\"?s:null};return{type:\"mclass\",mode:t.mode,mclass:o,body:[a],isCharacterBox:dr(a)}},htmlBuilder:zm,mathmlBuilder:Lm});q({type:\"pmb\",names:[\"\\\\pmb\"],props:{numArgs:1,allowedInText:!0},handler(i,e){var{parser:t}=i;return{type:\"pmb\",mode:t.mode,mclass:Go(e[0]),body:De(e[0])}},htmlBuilder(i,e){var t=Pe(i.body,e,!0),r=z([i.mclass],t,e);return r.style.textShadow=\"0.02em 0.01em 0.04px\",r},mathmlBuilder(i,e){var t=ht(i.body,e),r=new L(\"mstyle\",t);return r.setAttribute(\"style\",\"text-shadow: 0.02em 0.01em 0.04px\"),r}});var N3={\">\":\"\\\\\\\\cdrightarrow\",\"<\":\"\\\\\\\\cdleftarrow\",\"=\":\"\\\\\\\\cdlongequal\",A:\"\\\\uparrow\",V:\"\\\\downarrow\",\"|\":\"\\\\Vert\",\".\":\"no arrow\"},Xd=()=>({type:\"styling\",body:[],mode:\"math\",style:\"display\"}),_d=i=>i.type===\"textord\"&&i.text===\"@\",F3=(i,e)=>(i.type===\"mathord\"||i.type===\"atom\")&&i.text===e;function H3(i,e,t){var r=N3[i];switch(r){case\"\\\\\\\\cdrightarrow\":case\"\\\\\\\\cdleftarrow\":return t.callFunction(r,[e[0]],[e[1]]);case\"\\\\uparrow\":case\"\\\\downarrow\":{var 
n=t.callFunction(\"\\\\\\\\cdleft\",[e[0]],[]),s={type:\"atom\",text:r,mode:\"math\",family:\"rel\"},o=t.callFunction(\"\\\\Big\",[s],[]),l=t.callFunction(\"\\\\\\\\cdright\",[e[1]],[]),a={type:\"ordgroup\",mode:\"math\",body:[n,o,l]};return t.callFunction(\"\\\\\\\\cdparent\",[a],[])}case\"\\\\\\\\cdlongequal\":return t.callFunction(\"\\\\\\\\cdlongequal\",[],[]);case\"\\\\Vert\":{var h={type:\"textord\",text:\"\\\\Vert\",mode:\"math\"};return t.callFunction(\"\\\\Big\",[h],[])}default:return{type:\"textord\",text:\" \",mode:\"math\"}}}function q3(i){var e=[];for(i.gullet.beginGroup(),i.gullet.macros.set(\"\\\\cr\",\"\\\\\\\\\\\\relax\"),i.gullet.beginGroup();;){e.push(i.parseExpression(!1,\"\\\\\\\\\")),i.gullet.endGroup(),i.gullet.beginGroup();var t=i.fetch().text;if(t===\"&\"||t===\"\\\\\\\\\")i.consume();else if(t===\"\\\\end\"){e[e.length-1].length===0&&e.pop();break}else throw new I(\"Expected \\\\\\\\ or \\\\cr or \\\\end\",i.nextToken)}for(var r=[],n=[r],s=0;s<e.length;s++){for(var o=e[s],l=Xd(),a=0;a<o.length;a++)if(!_d(o[a]))l.body.push(o[a]);else{r.push(l),a+=1;var h=dh(o[a]).text,c=new Array(2);if(c[0]={type:\"ordgroup\",mode:\"math\",body:[]},c[1]={type:\"ordgroup\",mode:\"math\",body:[]},!\"=|.\".includes(h))if(\"<>AV\".includes(h))for(var u=0;u<2;u++){for(var d=!0,p=a+1;p<o.length;p++){if(F3(o[p],h)){d=!1,a=p;break}if(_d(o[p]))throw new I(\"Missing a \"+h+\" character to complete a CD arrow.\",o[p]);c[u].body.push(o[p])}if(d)throw new I(\"Missing a \"+h+\" character to complete a CD arrow.\",o[a])}else throw new I('Expected one of \"<>AV=|.\" after @',o[a]);var v=H3(h,c,i),y={type:\"styling\",body:[v],mode:\"math\",style:\"display\"};r.push(y),l=Xd()}s%2===0?r.push(l):r.shift(),r=[],n.push(r)}i.gullet.endGroup(),i.gullet.endGroup();var w=new Array(n[0].length).fill({type:\"align\",align:\"c\",pregap:.25,postgap:.25});return{type:\"array\",mode:\"math\",body:n,arraystretch:1,addJot:!0,rowGaps:[null],cols:w,colSeparationType:\"CD\",hLinesBeforeRow:new 
Array(n.length+1).fill([])}}q({type:\"cdlabel\",names:[\"\\\\\\\\cdleft\",\"\\\\\\\\cdright\"],props:{numArgs:1},handler(i,e){var{parser:t,funcName:r}=i;return{type:\"cdlabel\",mode:t.mode,side:r.slice(4),label:e[0]}},htmlBuilder(i,e){var t=e.havingStyle(e.style.sup()),r=zi(oe(i.label,t,e),e);return r.classes.push(\"cd-label-\"+i.side),r.style.bottom=N(.8-r.depth),r.height=0,r.depth=0,r},mathmlBuilder(i,e){var t=new L(\"mrow\",[ue(i.label,e)]);return t=new L(\"mpadded\",[t]),t.setAttribute(\"width\",\"0\"),i.side===\"left\"&&t.setAttribute(\"lspace\",\"-1width\"),t.setAttribute(\"voffset\",\"0.7em\"),t=new L(\"mstyle\",[t]),t.setAttribute(\"displaystyle\",\"false\"),t.setAttribute(\"scriptlevel\",\"1\"),t}});q({type:\"cdlabelparent\",names:[\"\\\\\\\\cdparent\"],props:{numArgs:1},handler(i,e){var{parser:t}=i;return{type:\"cdlabelparent\",mode:t.mode,fragment:e[0]}},htmlBuilder(i,e){var t=zi(oe(i.fragment,e),e);return t.classes.push(\"cd-vert-arrow\"),t},mathmlBuilder(i,e){return new L(\"mrow\",[ue(i.fragment,e)])}});q({type:\"textord\",names:[\"\\\\@char\"],props:{numArgs:1,allowedInText:!0},handler(i,e){for(var{parser:t}=i,r=Q(e[0],\"ordgroup\"),n=r.body,s=\"\",o=0;o<n.length;o++){var l=Q(n[o],\"textord\");s+=l.text}var a=parseInt(s),h;if(isNaN(a))throw new I(\"\\\\@char has non-numeric argument \"+s);if(a<0||a>=1114111)throw new I(\"\\\\@char with invalid code point \"+s);return a<=65535?h=String.fromCharCode(a):(a-=65536,h=String.fromCharCode((a>>10)+55296,(a&1023)+56320)),{type:\"textord\",mode:t.mode,text:h}}});var Im=(i,e)=>{var t=Pe(i.body,e.withColor(i.color),!1);return pr(t)},Rm=(i,e)=>{var t=ht(i.body,e.withColor(i.color)),r=new L(\"mstyle\",t);return 
r.setAttribute(\"mathcolor\",i.color),r};q({type:\"color\",names:[\"\\\\textcolor\"],props:{numArgs:2,allowedInText:!0,argTypes:[\"color\",\"original\"]},handler(i,e){var{parser:t}=i,r=Q(e[0],\"color-token\").color,n=e[1];return{type:\"color\",mode:t.mode,color:r,body:De(n)}},htmlBuilder:Im,mathmlBuilder:Rm});q({type:\"color\",names:[\"\\\\color\"],props:{numArgs:1,allowedInText:!0,argTypes:[\"color\"]},handler(i,e){var{parser:t,breakOnTokenText:r}=i,n=Q(e[0],\"color-token\").color;t.gullet.macros.set(\"\\\\current@color\",n);var s=t.parseExpression(!0,r);return{type:\"color\",mode:t.mode,color:n,body:s}},htmlBuilder:Im,mathmlBuilder:Rm});q({type:\"cr\",names:[\"\\\\\\\\\"],props:{numArgs:0,numOptionalArgs:0,allowedInText:!0},handler(i,e,t){var{parser:r}=i,n=r.gullet.future().text===\"[\"?r.parseSizeGroup(!0):null,s=!r.settings.displayMode||!r.settings.useStrictBehavior(\"newLineInDisplayMode\",\"In LaTeX, \\\\\\\\ or \\\\newline does nothing in display mode\");return{type:\"cr\",mode:r.mode,newLine:s,size:n&&Q(n,\"size\").value}},htmlBuilder(i,e){var t=z([\"mspace\"],[],e);return i.newLine&&(t.classes.push(\"newline\"),i.size&&(t.style.marginTop=N(we(i.size,e)))),t},mathmlBuilder(i,e){var t=new L(\"mspace\");return i.newLine&&(t.setAttribute(\"linebreak\",\"newline\"),i.size&&t.setAttribute(\"height\",N(we(i.size,e)))),t}});var eh={\"\\\\global\":\"\\\\global\",\"\\\\long\":\"\\\\\\\\globallong\",\"\\\\\\\\globallong\":\"\\\\\\\\globallong\",\"\\\\def\":\"\\\\gdef\",\"\\\\gdef\":\"\\\\gdef\",\"\\\\edef\":\"\\\\xdef\",\"\\\\xdef\":\"\\\\xdef\",\"\\\\let\":\"\\\\\\\\globallet\",\"\\\\futurelet\":\"\\\\\\\\globalfuture\"},Pm=i=>{var e=i.text;if(/^(?:[\\\\{}$&#^_]|EOF)$/.test(e))throw new I(\"Expected a control sequence\",i);return e},W3=i=>{var e=i.gullet.popToken();return e.text===\"=\"&&(e=i.gullet.popToken(),e.text===\" \"&&(e=i.gullet.popToken())),e},Nm=(i,e,t,r)=>{var 
n=i.gullet.macros.get(t.text);n==null&&(t.noexpand=!0,n={tokens:[t],numArgs:0,unexpandable:!i.gullet.isExpandable(t.text)}),i.gullet.macros.set(e,n,r)};q({type:\"internal\",names:[\"\\\\global\",\"\\\\long\",\"\\\\\\\\globallong\"],props:{numArgs:0,allowedInText:!0},handler(i){var{parser:e,funcName:t}=i;e.consumeSpaces();var r=e.fetch();if(eh[r.text])return(t===\"\\\\global\"||t===\"\\\\\\\\globallong\")&&(r.text=eh[r.text]),Q(e.parseFunction(),\"internal\");throw new I(\"Invalid token after macro prefix\",r)}});q({type:\"internal\",names:[\"\\\\def\",\"\\\\gdef\",\"\\\\edef\",\"\\\\xdef\"],props:{numArgs:0,allowedInText:!0,primitive:!0},handler(i){var{parser:e,funcName:t}=i,r=e.gullet.popToken(),n=r.text;if(/^(?:[\\\\{}$&#^_]|EOF)$/.test(n))throw new I(\"Expected a control sequence\",r);for(var s=0,o,l=[[]];e.gullet.future().text!==\"{\";)if(r=e.gullet.popToken(),r.text===\"#\"){if(e.gullet.future().text===\"{\"){o=e.gullet.future(),l[s].push(\"{\");break}if(r=e.gullet.popToken(),!/^[1-9]$/.test(r.text))throw new I('Invalid argument number \"'+r.text+'\"');if(parseInt(r.text)!==s+1)throw new I('Argument number \"'+r.text+'\" out of order');s++,l.push([])}else{if(r.text===\"EOF\")throw new I(\"Expected a macro definition\");l[s].push(r.text)}var{tokens:a}=e.gullet.consumeArg();return o&&a.unshift(o),(t===\"\\\\edef\"||t===\"\\\\xdef\")&&(a=e.gullet.expandTokens(a),a.reverse()),e.gullet.macros.set(n,{tokens:a,numArgs:s,delimiters:l},t===eh[t]),{type:\"internal\",mode:e.mode}}});q({type:\"internal\",names:[\"\\\\let\",\"\\\\\\\\globallet\"],props:{numArgs:0,allowedInText:!0,primitive:!0},handler(i){var{parser:e,funcName:t}=i,r=Pm(e.gullet.popToken());e.gullet.consumeSpaces();var n=W3(e);return 
Nm(e,r,n,t===\"\\\\\\\\globallet\"),{type:\"internal\",mode:e.mode}}});q({type:\"internal\",names:[\"\\\\futurelet\",\"\\\\\\\\globalfuture\"],props:{numArgs:0,allowedInText:!0,primitive:!0},handler(i){var{parser:e,funcName:t}=i,r=Pm(e.gullet.popToken()),n=e.gullet.popToken(),s=e.gullet.popToken();return Nm(e,r,s,t===\"\\\\\\\\globalfuture\"),e.gullet.pushToken(s),e.gullet.pushToken(n),{type:\"internal\",mode:e.mode}}});var Mn=function(e,t,r){var n=me.math[e]&&me.math[e].replace,s=ah(n||e,t,r);if(!s)throw new Error(\"Unsupported symbol \"+e+\" and font size \"+t+\".\");return s},ph=function(e,t,r,n){var s=r.havingBaseStyle(t),o=z(n.concat(s.sizingClasses(r)),[e],r),l=s.sizeMultiplier/r.sizeMultiplier;return o.height*=l,o.depth*=l,o.maxFontSize=s.sizeMultiplier,o},Fm=function(e,t,r){var n=t.havingBaseStyle(r),s=(1-t.sizeMultiplier/n.sizeMultiplier)*t.fontMetrics().axisHeight;e.classes.push(\"delimcenter\"),e.style.top=N(s),e.height-=s,e.depth+=s},V3=function(e,t,r,n,s,o){var l=Qe(e,\"Main-Regular\",s,n),a=ph(l,t,n,o);return r&&Fm(a,n,t),a},$3=function(e,t,r,n){return Qe(e,\"Size\"+t+\"-Regular\",r,n)},Hm=function(e,t,r,n,s,o){var l=$3(e,t,s,n),a=ph(z([\"delimsizing\",\"size\"+t],[l],n),Z.TEXT,n,o);return r&&Fm(a,n,Z.TEXT),a},N0=function(e,t,r){var n;t===\"Size1-Regular\"?n=\"delim-size1\":n=\"delim-size4\";var s=z([\"delimsizinginner\",n],[z([],[Qe(e,t,r)])]);return{type:\"elem\",elem:s}},F0=function(e,t,r){var n=Jt[\"Size4-Regular\"][e.charCodeAt(0)]?Jt[\"Size4-Regular\"][e.charCodeAt(0)][4]:Jt[\"Size1-Regular\"][e.charCodeAt(0)][4],s=new Zt(\"inner\",s3(e,Math.round(1e3*t))),o=new qt([s],{width:N(n),height:N(t),style:\"width:\"+N(n),viewBox:\"0 0 \"+1e3*n+\" \"+Math.round(1e3*t),preserveAspectRatio:\"xMinYMin\"}),l=Er([],[o],r);return l.height=t,l.style.height=N(t),l.style.width=N(n),{type:\"elem\",elem:l}},th=.008,Ao={type:\"kern\",size:-1*th},G3=new Set([\"|\",\"\\\\lvert\",\"\\\\rvert\",\"\\\\vert\"]),U3=new 
Set([\"\\\\|\",\"\\\\lVert\",\"\\\\rVert\",\"\\\\Vert\"]),qm=function(e,t,r,n,s,o){var l,a,h,c,u=\"\",d=0;l=h=c=e,a=null;var p=\"Size1-Regular\";e===\"\\\\uparrow\"?h=c=\"\\u23D0\":e===\"\\\\Uparrow\"?h=c=\"\\u2016\":e===\"\\\\downarrow\"?l=h=\"\\u23D0\":e===\"\\\\Downarrow\"?l=h=\"\\u2016\":e===\"\\\\updownarrow\"?(l=\"\\\\uparrow\",h=\"\\u23D0\",c=\"\\\\downarrow\"):e===\"\\\\Updownarrow\"?(l=\"\\\\Uparrow\",h=\"\\u2016\",c=\"\\\\Downarrow\"):G3.has(e)?(h=\"\\u2223\",u=\"vert\",d=333):U3.has(e)?(h=\"\\u2225\",u=\"doublevert\",d=556):e===\"[\"||e===\"\\\\lbrack\"?(l=\"\\u23A1\",h=\"\\u23A2\",c=\"\\u23A3\",p=\"Size4-Regular\",u=\"lbrack\",d=667):e===\"]\"||e===\"\\\\rbrack\"?(l=\"\\u23A4\",h=\"\\u23A5\",c=\"\\u23A6\",p=\"Size4-Regular\",u=\"rbrack\",d=667):e===\"\\\\lfloor\"||e===\"\\u230A\"?(h=l=\"\\u23A2\",c=\"\\u23A3\",p=\"Size4-Regular\",u=\"lfloor\",d=667):e===\"\\\\lceil\"||e===\"\\u2308\"?(l=\"\\u23A1\",h=c=\"\\u23A2\",p=\"Size4-Regular\",u=\"lceil\",d=667):e===\"\\\\rfloor\"||e===\"\\u230B\"?(h=l=\"\\u23A5\",c=\"\\u23A6\",p=\"Size4-Regular\",u=\"rfloor\",d=667):e===\"\\\\rceil\"||e===\"\\u2309\"?(l=\"\\u23A4\",h=c=\"\\u23A5\",p=\"Size4-Regular\",u=\"rceil\",d=667):e===\"(\"||e===\"\\\\lparen\"?(l=\"\\u239B\",h=\"\\u239C\",c=\"\\u239D\",p=\"Size4-Regular\",u=\"lparen\",d=875):e===\")\"||e===\"\\\\rparen\"?(l=\"\\u239E\",h=\"\\u239F\",c=\"\\u23A0\",p=\"Size4-Regular\",u=\"rparen\",d=875):e===\"\\\\{\"||e===\"\\\\lbrace\"?(l=\"\\u23A7\",a=\"\\u23A8\",c=\"\\u23A9\",h=\"\\u23AA\",p=\"Size4-Regular\"):e===\"\\\\}\"||e===\"\\\\rbrace\"?(l=\"\\u23AB\",a=\"\\u23AC\",c=\"\\u23AD\",h=\"\\u23AA\",p=\"Size4-Regular\"):e===\"\\\\lgroup\"||e===\"\\u27EE\"?(l=\"\\u23A7\",c=\"\\u23A9\",h=\"\\u23AA\",p=\"Size4-Regular\"):e===\"\\\\rgroup\"||e===\"\\u27EF\"?(l=\"\\u23AB\",c=\"\\u23AD\",h=\"\\u23AA\",p=\"Size4-Regular\"):e===\"\\\\lmoustache\"||e===\"\\u23B0\"?(l=\"\\u23A7\",c=\"\\u23AD\",h=\"\\u23AA\",p=\"Size4-Regular\"):(e===\"\\\\rmoustache\"||e===\"\\u23B1\")&&(l=\"\\u23AB
\",c=\"\\u23A9\",h=\"\\u23AA\",p=\"Size4-Regular\");var v=Mn(l,p,s),y=v.height+v.depth,w=Mn(h,p,s),S=w.height+w.depth,A=Mn(c,p,s),M=A.height+A.depth,E=0,T=1;if(a!==null){var B=Mn(a,p,s);E=B.height+B.depth,T=2}var D=y+M+E,V=Math.max(0,Math.ceil((t-D)/(T*S))),U=D+V*T*S,ie=n.fontMetrics().axisHeight;r&&(ie*=n.sizeMultiplier);var j=U/2-ie,$=[];if(u.length>0){var ne=U-y-M,J=Math.round(U*1e3),re=o3(u,Math.round(ne*1e3)),ge=new Zt(u,re),Be=(d/1e3).toFixed(3)+\"em\",qe=(J/1e3).toFixed(3)+\"em\",Me=new qt([ge],{width:Be,height:qe,viewBox:\"0 0 \"+d+\" \"+J}),Te=Er([],[Me],n);Te.height=J/1e3,Te.style.width=Be,Te.style.height=qe,$.push({type:\"elem\",elem:Te})}else{if($.push(N0(c,p,s)),$.push(Ao),a===null){var Ne=U-y-M+2*th;$.push(F0(h,Ne,n))}else{var pe=(U-y-M-E)/2+2*th;$.push(F0(h,pe,n)),$.push(Ao),$.push(N0(a,p,s)),$.push(Ao),$.push(F0(h,pe,n))}$.push(Ao),$.push(N0(l,p,s))}var je=n.havingBaseStyle(Z.TEXT),Bt=ae({positionType:\"bottom\",positionData:j,children:$});return ph(z([\"delimsizing\",\"mult\"],[Bt],je),Z.TEXT,n,o)},H0=80,q0=.08,W0=function(e,t,r,n,s){var o=n3(e,n,r),l=new Zt(e,o),a=new qt([l],{width:\"400em\",height:N(t),viewBox:\"0 0 400000 \"+r,preserveAspectRatio:\"xMinYMin slice\"});return Er([\"hide-tail\"],[a],s)},K3=function(e,t){var r=t.havingBaseSizing(),n=Um(\"\\\\surd\",e*r.sizeMultiplier,Gm,r),s=r.sizeMultiplier,o=Math.max(0,t.minRuleThickness-t.fontMetrics().sqrtRuleThickness),l,a=0,h=0,c=0,u;return n.type===\"small\"?(c=1e3+1e3*o+H0,e<1?s=1:e<1.4&&(s=.7),a=(1+o+q0)/s,h=(1+o)/s,l=W0(\"sqrtMain\",a,c,o,t),l.style.minWidth=\"0.853em\",u=.833/s):n.type===\"large\"?(c=(1e3+H0)*Dn[n.size],h=(Dn[n.size]+o)/s,a=(Dn[n.size]+o+q0)/s,l=W0(\"sqrtSize\"+n.size,a,c,o,t),l.style.minWidth=\"1.02em\",u=1/s):(a=e+o+q0,h=e+o,c=Math.floor(1e3*e+o)+H0,l=W0(\"sqrtTall\",a,c,o,t),l.style.minWidth=\"0.742em\",u=1.056),l.height=h,l.style.height=N(a),{span:l,advanceWidth:u,ruleWidth:(t.fontMetrics().sqrtRuleThickness+o)*s}},Wm=new 
Set([\"(\",\"\\\\lparen\",\")\",\"\\\\rparen\",\"[\",\"\\\\lbrack\",\"]\",\"\\\\rbrack\",\"\\\\{\",\"\\\\lbrace\",\"\\\\}\",\"\\\\rbrace\",\"\\\\lfloor\",\"\\\\rfloor\",\"\\u230A\",\"\\u230B\",\"\\\\lceil\",\"\\\\rceil\",\"\\u2308\",\"\\u2309\",\"\\\\surd\"]),j3=new Set([\"\\\\uparrow\",\"\\\\downarrow\",\"\\\\updownarrow\",\"\\\\Uparrow\",\"\\\\Downarrow\",\"\\\\Updownarrow\",\"|\",\"\\\\|\",\"\\\\vert\",\"\\\\Vert\",\"\\\\lvert\",\"\\\\rvert\",\"\\\\lVert\",\"\\\\rVert\",\"\\\\lgroup\",\"\\\\rgroup\",\"\\u27EE\",\"\\u27EF\",\"\\\\lmoustache\",\"\\\\rmoustache\",\"\\u23B0\",\"\\u23B1\"]),Vm=new Set([\"<\",\">\",\"\\\\langle\",\"\\\\rangle\",\"/\",\"\\\\backslash\",\"\\\\lt\",\"\\\\gt\"]),Dn=[0,1.2,1.8,2.4,3],$m=function(e,t,r,n,s){if(e===\"<\"||e===\"\\\\lt\"||e===\"\\u27E8\"?e=\"\\\\langle\":(e===\">\"||e===\"\\\\gt\"||e===\"\\u27E9\")&&(e=\"\\\\rangle\"),Wm.has(e)||Vm.has(e))return Hm(e,t,!1,r,n,s);if(j3.has(e))return qm(e,Dn[t],!1,r,n,s);throw new I(\"Illegal delimiter: '\"+e+\"'\")},Y3=[{type:\"small\",style:Z.SCRIPTSCRIPT},{type:\"small\",style:Z.SCRIPT},{type:\"small\",style:Z.TEXT},{type:\"large\",size:1},{type:\"large\",size:2},{type:\"large\",size:3},{type:\"large\",size:4}],X3=[{type:\"small\",style:Z.SCRIPTSCRIPT},{type:\"small\",style:Z.SCRIPT},{type:\"small\",style:Z.TEXT},{type:\"stack\"}],Gm=[{type:\"small\",style:Z.SCRIPTSCRIPT},{type:\"small\",style:Z.SCRIPT},{type:\"small\",style:Z.TEXT},{type:\"large\",size:1},{type:\"large\",size:2},{type:\"large\",size:3},{type:\"large\",size:4},{type:\"stack\"}],_3=function(e){if(e.type===\"small\")return\"Main-Regular\";if(e.type===\"large\")return\"Size\"+e.size+\"-Regular\";if(e.type===\"stack\")return\"Size4-Regular\";throw new Error(\"Add support for delim type '\"+e.type+\"' here.\")},Um=function(e,t,r,n){for(var s=Math.min(2,3-n.style.size),o=s;o<r.length&&r[o].type!==\"stack\";o++){var l=Mn(e,_3(r[o]),\"math\"),a=l.height+l.depth;if(r[o].type===\"small\"){var 
h=n.havingBaseStyle(r[o].style);a*=h.sizeMultiplier}if(a>t)return r[o]}return r[r.length-1]},rh=function(e,t,r,n,s,o){e===\"<\"||e===\"\\\\lt\"||e===\"\\u27E8\"?e=\"\\\\langle\":(e===\">\"||e===\"\\\\gt\"||e===\"\\u27E9\")&&(e=\"\\\\rangle\");var l;Vm.has(e)?l=Y3:Wm.has(e)?l=Gm:l=X3;var a=Um(e,t,l,n);return a.type===\"small\"?V3(e,a.style,r,n,s,o):a.type===\"large\"?Hm(e,a.size,r,n,s,o):qm(e,t,r,n,s,o)},V0=function(e,t,r,n,s,o){var l=n.fontMetrics().axisHeight*n.sizeMultiplier,a=901,h=5/n.fontMetrics().ptPerEm,c=Math.max(t-l,r+l),u=Math.max(c/500*a,2*c-h);return rh(e,u,!0,n,s,o)},Jd={\"\\\\bigl\":{mclass:\"mopen\",size:1},\"\\\\Bigl\":{mclass:\"mopen\",size:2},\"\\\\biggl\":{mclass:\"mopen\",size:3},\"\\\\Biggl\":{mclass:\"mopen\",size:4},\"\\\\bigr\":{mclass:\"mclose\",size:1},\"\\\\Bigr\":{mclass:\"mclose\",size:2},\"\\\\biggr\":{mclass:\"mclose\",size:3},\"\\\\Biggr\":{mclass:\"mclose\",size:4},\"\\\\bigm\":{mclass:\"mrel\",size:1},\"\\\\Bigm\":{mclass:\"mrel\",size:2},\"\\\\biggm\":{mclass:\"mrel\",size:3},\"\\\\Biggm\":{mclass:\"mrel\",size:4},\"\\\\big\":{mclass:\"mord\",size:1},\"\\\\Big\":{mclass:\"mord\",size:2},\"\\\\bigg\":{mclass:\"mord\",size:3},\"\\\\Bigg\":{mclass:\"mord\",size:4}},J3=new Set([\"(\",\"\\\\lparen\",\")\",\"\\\\rparen\",\"[\",\"\\\\lbrack\",\"]\",\"\\\\rbrack\",\"\\\\{\",\"\\\\lbrace\",\"\\\\}\",\"\\\\rbrace\",\"\\\\lfloor\",\"\\\\rfloor\",\"\\u230A\",\"\\u230B\",\"\\\\lceil\",\"\\\\rceil\",\"\\u2308\",\"\\u2309\",\"<\",\">\",\"\\\\langle\",\"\\u27E8\",\"\\\\rangle\",\"\\u27E9\",\"\\\\lt\",\"\\\\gt\",\"\\\\lvert\",\"\\\\rvert\",\"\\\\lVert\",\"\\\\rVert\",\"\\\\lgroup\",\"\\\\rgroup\",\"\\u27EE\",\"\\u27EF\",\"\\\\lmoustache\",\"\\\\rmoustache\",\"\\u23B0\",\"\\u23B1\",\"/\",\"\\\\backslash\",\"|\",\"\\\\vert\",\"\\\\|\",\"\\\\Vert\",\"\\\\uparrow\",\"\\\\Uparrow\",\"\\\\downarrow\",\"\\\\Downarrow\",\"\\\\updownarrow\",\"\\\\Updownarrow\",\".\"]);function Uo(i,e){var t=$o(i);if(t&&J3.has(t.text))return t;throw t?new I(\"Invalid 
delimiter '\"+t.text+\"' after '\"+e.funcName+\"'\",i):new I(\"Invalid delimiter type '\"+i.type+\"'\",i)}q({type:\"delimsizing\",names:[\"\\\\bigl\",\"\\\\Bigl\",\"\\\\biggl\",\"\\\\Biggl\",\"\\\\bigr\",\"\\\\Bigr\",\"\\\\biggr\",\"\\\\Biggr\",\"\\\\bigm\",\"\\\\Bigm\",\"\\\\biggm\",\"\\\\Biggm\",\"\\\\big\",\"\\\\Big\",\"\\\\bigg\",\"\\\\Bigg\"],props:{numArgs:1,argTypes:[\"primitive\"]},handler:(i,e)=>{var t=Uo(e[0],i);return{type:\"delimsizing\",mode:i.parser.mode,size:Jd[i.funcName].size,mclass:Jd[i.funcName].mclass,delim:t.text}},htmlBuilder:(i,e)=>i.delim===\".\"?z([i.mclass]):$m(i.delim,i.size,e,i.mode,[i.mclass]),mathmlBuilder:i=>{var e=[];i.delim!==\".\"&&e.push(Dt(i.delim,i.mode));var t=new L(\"mo\",e);i.mclass===\"mopen\"||i.mclass===\"mclose\"?t.setAttribute(\"fence\",\"true\"):t.setAttribute(\"fence\",\"false\"),t.setAttribute(\"stretchy\",\"true\");var r=N(Dn[i.size]);return t.setAttribute(\"minsize\",r),t.setAttribute(\"maxsize\",r),t}});function Zd(i){if(!i.body)throw new Error(\"Bug: The leftright ParseNode wasn't fully parsed.\")}q({type:\"leftright-right\",names:[\"\\\\right\"],props:{numArgs:1,primitive:!0},handler:(i,e)=>{var t=i.parser.gullet.macros.get(\"\\\\current@color\");if(t&&typeof t!=\"string\")throw new I(\"\\\\current@color set to non-string in \\\\right\");return{type:\"leftright-right\",mode:i.parser.mode,delim:Uo(e[0],i).text,color:t}}});q({type:\"leftright\",names:[\"\\\\left\"],props:{numArgs:1,primitive:!0},handler:(i,e)=>{var t=Uo(e[0],i),r=i.parser;++r.leftrightDepth;var n=r.parseExpression(!1);--r.leftrightDepth,r.expect(\"\\\\right\",!1);var s=Q(r.parseFunction(),\"leftright-right\");return{type:\"leftright\",mode:r.mode,body:n,left:t.text,right:s.delim,rightColor:s.color}},htmlBuilder:(i,e)=>{Zd(i);for(var t=Pe(i.body,e,!0,[\"mopen\",\"mclose\"]),r=0,n=0,s=!1,o=0;o<t.length;o++)t[o].isMiddle?s=!0:(r=Math.max(t[o].height,r),n=Math.max(t[o].depth,n));r*=e.sizeMultiplier,n*=e.sizeMultiplier;var 
l;if(i.left===\".\"?l=Ln(e,[\"mopen\"]):l=V0(i.left,r,n,e,i.mode,[\"mopen\"]),t.unshift(l),s)for(var a=1;a<t.length;a++){var h=t[a],c=h.isMiddle;c&&(t[a]=V0(c.delim,r,n,c.options,i.mode,[]))}var u;if(i.right===\".\")u=Ln(e,[\"mclose\"]);else{var d=i.rightColor?e.withColor(i.rightColor):e;u=V0(i.right,r,n,d,i.mode,[\"mclose\"])}return t.push(u),z([\"minner\"],t,e)},mathmlBuilder:(i,e)=>{Zd(i);var t=ht(i.body,e);if(i.left!==\".\"){var r=new L(\"mo\",[Dt(i.left,i.mode)]);r.setAttribute(\"fence\",\"true\"),t.unshift(r)}if(i.right!==\".\"){var n=new L(\"mo\",[Dt(i.right,i.mode)]);n.setAttribute(\"fence\",\"true\"),i.rightColor&&n.setAttribute(\"mathcolor\",i.rightColor),t.push(n)}return uh(t)}});q({type:\"middle\",names:[\"\\\\middle\"],props:{numArgs:1,primitive:!0},handler:(i,e)=>{var t=Uo(e[0],i);if(!i.parser.leftrightDepth)throw new I(\"\\\\middle without preceding \\\\left\",t);return{type:\"middle\",mode:i.parser.mode,delim:t.text}},htmlBuilder:(i,e)=>{var t;if(i.delim===\".\")t=Ln(e,[]);else{t=$m(i.delim,1,e,i.mode,[]);var r={delim:i.delim,options:e};t.isMiddle=r}return t},mathmlBuilder:(i,e)=>{var t=i.delim===\"\\\\vert\"||i.delim===\"|\"?Dt(\"|\",\"text\"):Dt(i.delim,i.mode),r=new L(\"mo\",[t]);return r.setAttribute(\"fence\",\"true\"),r.setAttribute(\"lspace\",\"0.05em\"),r.setAttribute(\"rspace\",\"0.05em\"),r}});var gh=(i,e)=>{var t=zi(oe(i.body,e),e),r=i.label.slice(1),n=e.sizeMultiplier,s,o=0,l=dr(i.body);if(r===\"sout\")s=z([\"stretchy\",\"sout\"]),s.height=e.fontMetrics().defaultRuleThickness/n,o=-.5*e.fontMetrics().xHeight;else if(r===\"phase\"){var a=we({number:.6,unit:\"pt\"},e),h=we({number:.35,unit:\"ex\"},e),c=e.havingBaseSizing();n=n/c.sizeMultiplier;var u=t.height+t.depth+a+h;t.style.paddingLeft=N(u/2+a);var d=Math.floor(1e3*u*n),p=r3(d),v=new qt([new Zt(\"phase\",p)],{width:\"400em\",height:N(d/1e3),viewBox:\"0 0 400000 \"+d,preserveAspectRatio:\"xMinYMin 
slice\"});s=Er([\"hide-tail\"],[v],e),s.style.height=N(u),o=t.depth+a+h}else{/cancel/.test(r)?l||t.classes.push(\"cancel-pad\"):r===\"angl\"?t.classes.push(\"anglpad\"):t.classes.push(\"boxpad\");var y=0,w=0,S=0;/box/.test(r)?(S=Math.max(e.fontMetrics().fboxrule,e.minRuleThickness),y=e.fontMetrics().fboxsep+(r===\"colorbox\"?0:S),w=y):r===\"angl\"?(S=Math.max(e.fontMetrics().defaultRuleThickness,e.minRuleThickness),y=4*S,w=Math.max(0,.25-t.depth)):(y=l?.2:0,w=y),s=R3(t,r,y,w,e),/fbox|boxed|fcolorbox/.test(r)?(s.style.borderStyle=\"solid\",s.style.borderWidth=N(S)):r===\"angl\"&&S!==.049&&(s.style.borderTopWidth=N(S),s.style.borderRightWidth=N(S)),o=t.depth+w,i.backgroundColor&&(s.style.backgroundColor=i.backgroundColor,i.borderColor&&(s.style.borderColor=i.borderColor))}var A;if(i.backgroundColor)A=ae({positionType:\"individualShift\",children:[{type:\"elem\",elem:s,shift:o},{type:\"elem\",elem:t,shift:0}]});else{var M=/cancel|phase/.test(r)?[\"svg-align\"]:[];A=ae({positionType:\"individualShift\",children:[{type:\"elem\",elem:t,shift:0},{type:\"elem\",elem:s,shift:o,wrapperClasses:M}]})}return/cancel/.test(r)&&(A.height=t.height,A.depth=t.depth),/cancel/.test(r)&&!l?z([\"mord\",\"cancel-lap\"],[A],e):z([\"mord\"],[A],e)},vh=(i,e)=>{var t=0,r=new 
L(i.label.includes(\"colorbox\")?\"mpadded\":\"menclose\",[ue(i.body,e)]);switch(i.label){case\"\\\\cancel\":r.setAttribute(\"notation\",\"updiagonalstrike\");break;case\"\\\\bcancel\":r.setAttribute(\"notation\",\"downdiagonalstrike\");break;case\"\\\\phase\":r.setAttribute(\"notation\",\"phasorangle\");break;case\"\\\\sout\":r.setAttribute(\"notation\",\"horizontalstrike\");break;case\"\\\\fbox\":r.setAttribute(\"notation\",\"box\");break;case\"\\\\angl\":r.setAttribute(\"notation\",\"actuarial\");break;case\"\\\\fcolorbox\":case\"\\\\colorbox\":if(t=e.fontMetrics().fboxsep*e.fontMetrics().ptPerEm,r.setAttribute(\"width\",\"+\"+2*t+\"pt\"),r.setAttribute(\"height\",\"+\"+2*t+\"pt\"),r.setAttribute(\"lspace\",t+\"pt\"),r.setAttribute(\"voffset\",t+\"pt\"),i.label===\"\\\\fcolorbox\"){var n=Math.max(e.fontMetrics().fboxrule,e.minRuleThickness);r.setAttribute(\"style\",\"border: \"+n+\"em solid \"+String(i.borderColor))}break;case\"\\\\xcancel\":r.setAttribute(\"notation\",\"updiagonalstrike downdiagonalstrike\");break}return 
i.backgroundColor&&r.setAttribute(\"mathbackground\",i.backgroundColor),r};q({type:\"enclose\",names:[\"\\\\colorbox\"],props:{numArgs:2,allowedInText:!0,argTypes:[\"color\",\"text\"]},handler(i,e,t){var{parser:r,funcName:n}=i,s=Q(e[0],\"color-token\").color,o=e[1];return{type:\"enclose\",mode:r.mode,label:n,backgroundColor:s,body:o}},htmlBuilder:gh,mathmlBuilder:vh});q({type:\"enclose\",names:[\"\\\\fcolorbox\"],props:{numArgs:3,allowedInText:!0,argTypes:[\"color\",\"color\",\"text\"]},handler(i,e,t){var{parser:r,funcName:n}=i,s=Q(e[0],\"color-token\").color,o=Q(e[1],\"color-token\").color,l=e[2];return{type:\"enclose\",mode:r.mode,label:n,backgroundColor:o,borderColor:s,body:l}},htmlBuilder:gh,mathmlBuilder:vh});q({type:\"enclose\",names:[\"\\\\fbox\"],props:{numArgs:1,argTypes:[\"hbox\"],allowedInText:!0},handler(i,e){var{parser:t}=i;return{type:\"enclose\",mode:t.mode,label:\"\\\\fbox\",body:e[0]}}});q({type:\"enclose\",names:[\"\\\\cancel\",\"\\\\bcancel\",\"\\\\xcancel\",\"\\\\sout\",\"\\\\phase\"],props:{numArgs:1},handler(i,e){var{parser:t,funcName:r}=i,n=e[0];return{type:\"enclose\",mode:t.mode,label:r,body:n}},htmlBuilder:gh,mathmlBuilder:vh});q({type:\"enclose\",names:[\"\\\\angl\"],props:{numArgs:1,argTypes:[\"hbox\"],allowedInText:!1},handler(i,e){var{parser:t}=i;return{type:\"enclose\",mode:t.mode,label:\"\\\\angl\",body:e[0]}}});var Km={};function Qt(i){for(var{type:e,names:t,props:r,handler:n,htmlBuilder:s,mathmlBuilder:o}=i,l={type:e,numArgs:r.numArgs||0,allowedInText:!1,numOptionalArgs:0,handler:n},a=0;a<t.length;++a)Km[t[a]]=l;s&&(zo[e]=s),o&&(Lo[e]=o)}var jm={};function b(i,e){jm[i]=e}function Qd(i){var e=[];i.consumeSpaces();var t=i.fetch().text;for(t===\"\\\\relax\"&&(i.consume(),i.consumeSpaces(),t=i.fetch().text);t===\"\\\\hline\"||t===\"\\\\hdashline\";)i.consume(),e.push(t===\"\\\\hdashline\"),i.consumeSpaces(),t=i.fetch().text;return e}var Ko=i=>{var e=i.parser.settings;if(!e.displayMode)throw new I(\"{\"+i.envName+\"} can be used only in 
display mode.\")},Z3=new Set([\"gather\",\"gather*\"]);function bh(i){if(!i.includes(\"ed\"))return!i.includes(\"*\")}function zr(i,e,t){var{hskipBeforeAndAfter:r,addJot:n,cols:s,arraystretch:o,colSeparationType:l,autoTag:a,singleRow:h,emptySingleRow:c,maxNumCols:u,leqno:d}=e;if(i.gullet.beginGroup(),h||i.gullet.macros.set(\"\\\\cr\",\"\\\\\\\\\\\\relax\"),!o){var p=i.gullet.expandMacroAsText(\"\\\\arraystretch\");if(p==null)o=1;else if(o=parseFloat(p),!o||o<0)throw new I(\"Invalid \\\\arraystretch: \"+p)}i.gullet.beginGroup();var v=[],y=[v],w=[],S=[],A=a!=null?[]:void 0;function M(){a&&i.gullet.macros.set(\"\\\\@eqnsw\",\"1\",!0)}function E(){A&&(i.gullet.macros.get(\"\\\\df@tag\")?(A.push(i.subparse([new mt(\"\\\\df@tag\")])),i.gullet.macros.set(\"\\\\df@tag\",void 0,!0)):A.push(!!a&&i.gullet.macros.get(\"\\\\@eqnsw\")===\"1\"))}for(M(),S.push(Qd(i));;){var T=i.parseExpression(!1,h?\"\\\\end\":\"\\\\\\\\\");i.gullet.endGroup(),i.gullet.beginGroup(),T={type:\"ordgroup\",mode:i.mode,body:T},t&&(T={type:\"styling\",mode:i.mode,style:t,body:[T]}),v.push(T);var B=i.fetch().text;if(B===\"&\"){if(u&&v.length===u){if(h||l)throw new I(\"Too many tab characters: &\",i.nextToken);i.settings.reportNonstrict(\"textEnv\",\"Too few columns specified in the {array} column argument.\")}i.consume()}else if(B===\"\\\\end\"){E(),v.length===1&&T.type===\"styling\"&&T.body[0].body.length===0&&(y.length>1||!c)&&y.pop(),S.length<y.length+1&&S.push([]);break}else if(B===\"\\\\\\\\\"){i.consume();var D=void 0;i.gullet.future().text!==\" \"&&(D=i.parseSizeGroup(!0)),w.push(D?D.value:null),E(),S.push(Qd(i)),v=[],y.push(v),M()}else throw new I(\"Expected & or \\\\\\\\ or \\\\cr or \\\\end\",i.nextToken)}return i.gullet.endGroup(),i.gullet.endGroup(),{type:\"array\",mode:i.mode,addJot:n,arraystretch:o,body:y,cols:s,rowGaps:w,hskipBeforeAndAfter:r,hLinesBeforeRow:S,colSeparationType:l,tags:A,leqno:d}}function yh(i){return i.slice(0,1)===\"d\"?\"display\":\"text\"}var er=function(e,t){var 
r,n,s=e.body.length,o=e.hLinesBeforeRow,l=0,a=new Array(s),h=[],c=Math.max(t.fontMetrics().arrayRuleWidth,t.minRuleThickness),u=1/t.fontMetrics().ptPerEm,d=5*u;if(e.colSeparationType&&e.colSeparationType===\"small\"){var p=t.havingStyle(Z.SCRIPT).sizeMultiplier;d=.2778*(p/t.sizeMultiplier)}var v=e.colSeparationType===\"CD\"?we({number:3,unit:\"ex\"},t):12*u,y=3*u,w=e.arraystretch*v,S=.7*w,A=.3*w,M=0;function E(Fn){for(var Hn=0;Hn<Fn.length;++Hn)Hn>0&&(M+=.25),h.push({pos:M,isDashed:Fn[Hn]})}for(E(o[0]),r=0;r<e.body.length;++r){var T=e.body[r],B=S,D=A;l<T.length&&(l=T.length);var V=new Array(T.length);for(n=0;n<T.length;++n){var U=oe(T[n],t);D<U.depth&&(D=U.depth),B<U.height&&(B=U.height),V[n]=U}var ie=e.rowGaps[r],j=0;ie&&(j=we(ie,t),j>0&&(j+=A,D<j&&(D=j),j=0)),e.addJot&&(D+=y),V.height=B,V.depth=D,M+=B,V.pos=M,M+=D+j,a[r]=V,E(o[r+1])}var $=M/2+t.fontMetrics().axisHeight,ne=e.cols||[],J=[],re,ge,Be=[];if(e.tags&&e.tags.some(Fn=>Fn))for(r=0;r<s;++r){var qe=a[r],Me=qe.pos-$,Te=e.tags[r],Ne=void 0;Te===!0?Ne=z([\"eqn-num\"],[],t):Te===!1?Ne=z([],[],t):Ne=z([],Pe(Te,t,!0),t),Ne.depth=qe.depth,Ne.height=qe.height,Be.push({type:\"elem\",elem:Ne,shift:Me})}for(n=0,ge=0;n<l||ge<ne.length;++n,++ge){for(var pe=ne[ge]||{},je=!0;pe.type===\"separator\";){if(je||(re=z([\"arraycolsep\"],[]),re.style.width=N(t.fontMetrics().doubleRuleSep),J.push(re)),pe.separator===\"|\"||pe.separator===\":\"){var Bt=pe.separator===\"|\"?\"solid\":\"dashed\",ct=z([\"vertical-separator\"],[],t);ct.style.height=N(M),ct.style.borderRightWidth=N(c),ct.style.borderRightStyle=Bt,ct.style.margin=\"0 \"+N(-c/2);var Ir=M-$;Ir&&(ct.style.verticalAlign=N(-Ir)),J.push(ct)}else throw new I(\"Invalid separator type: \"+pe.separator);ge++,pe=ne[ge]||{},je=!1}if(!(n>=l)){var ir=void 0;if(n>0||e.hskipBeforeAndAfter){var Dh;ir=(Dh=pe.pregap)!=null?Dh:d,ir!==0&&(re=z([\"arraycolsep\"],[]),re.style.width=N(ir),J.push(re))}var ni=[];for(r=0;r<s;++r){var Pn=a[r],Nn=Pn[n];if(Nn){var 
bp=Pn.pos-$;Nn.depth=Pn.depth,Nn.height=Pn.height,ni.push({type:\"elem\",elem:Nn,shift:bp})}}if(ni=ae({positionType:\"individualShift\",children:ni}),ni=z([\"col-align-\"+(pe.align||\"c\")],[ni]),J.push(ni),n<l-1||e.hskipBeforeAndAfter){var Bh;ir=(Bh=pe.postgap)!=null?Bh:d,ir!==0&&(re=z([\"arraycolsep\"],[]),re.style.width=N(ir),J.push(re))}}}if(a=z([\"mtable\"],J),h.length>0){for(var yp=Oi(\"hline\",t,c),xp=Oi(\"hdashline\",t,c),Qo=[{type:\"elem\",elem:a,shift:0}];h.length>0;){var Eh=h.pop(),Oh=Eh.pos-$;Eh.isDashed?Qo.push({type:\"elem\",elem:xp,shift:Oh}):Qo.push({type:\"elem\",elem:yp,shift:Oh})}a=ae({positionType:\"individualShift\",children:Qo})}if(Be.length===0)return z([\"mord\"],[a],t);var el=ae({positionType:\"individualShift\",children:Be});return el=z([\"tag\"],[el],t),pr([a,el])},Q3={c:\"center \",l:\"left \",r:\"right \"},tr=function(e,t){for(var r=[],n=new L(\"mtd\",[],[\"mtr-glue\"]),s=new L(\"mtd\",[],[\"mml-eqn-num\"]),o=0;o<e.body.length;o++){for(var l=e.body[o],a=[],h=0;h<l.length;h++)a.push(new L(\"mtd\",[ue(l[h],t)]));e.tags&&e.tags[o]&&(a.unshift(n),a.push(n),e.leqno?a.unshift(s):a.push(s)),r.push(new L(\"mtr\",a))}var c=new L(\"mtable\",r),u=e.arraystretch===.5?.1:.16+e.arraystretch-1+(e.addJot?.09:0);c.setAttribute(\"rowspacing\",N(u));var d=\"\",p=\"\";if(e.cols&&e.cols.length>0){var v=e.cols,y=\"\",w=!1,S=0,A=v.length;v[0].type===\"separator\"&&(d+=\"top \",S=1),v[v.length-1].type===\"separator\"&&(d+=\"bottom \",A-=1);for(var M=S;M<A;M++)v[M].type===\"align\"?(p+=Q3[v[M].align],w&&(y+=\"none \"),w=!0):v[M].type===\"separator\"&&w&&(y+=v[M].separator===\"|\"?\"solid \":\"dashed \",w=!1);c.setAttribute(\"columnalign\",p.trim()),/[sd]/.test(y)&&c.setAttribute(\"columnlines\",y.trim())}if(e.colSeparationType===\"align\"){for(var E=e.cols||[],T=\"\",B=1;B<E.length;B++)T+=B%2?\"0em \":\"1em \";c.setAttribute(\"columnspacing\",T.trim())}else 
e.colSeparationType===\"alignat\"||e.colSeparationType===\"gather\"?c.setAttribute(\"columnspacing\",\"0em\"):e.colSeparationType===\"small\"?c.setAttribute(\"columnspacing\",\"0.2778em\"):e.colSeparationType===\"CD\"?c.setAttribute(\"columnspacing\",\"0.5em\"):c.setAttribute(\"columnspacing\",\"1em\");var D=\"\",V=e.hLinesBeforeRow;d+=V[0].length>0?\"left \":\"\",d+=V[V.length-1].length>0?\"right \":\"\";for(var U=1;U<V.length-1;U++)D+=V[U].length===0?\"none \":V[U][0]?\"dashed \":\"solid \";return/[sd]/.test(D)&&c.setAttribute(\"rowlines\",D.trim()),d!==\"\"&&(c=new L(\"menclose\",[c]),c.setAttribute(\"notation\",d.trim())),e.arraystretch&&e.arraystretch<1&&(c=new L(\"mstyle\",[c]),c.setAttribute(\"scriptlevel\",\"1\")),c},Ym=function(e,t){e.envName.includes(\"ed\")||Ko(e);var r=[],n=e.envName.includes(\"at\")?\"alignat\":\"align\",s=e.envName===\"split\",o=zr(e.parser,{cols:r,addJot:!0,autoTag:s?void 0:bh(e.envName),emptySingleRow:!0,colSeparationType:n,maxNumCols:s?2:void 0,leqno:e.parser.settings.leqno},\"display\"),l,a=0,h={type:\"ordgroup\",mode:e.mode,body:[]};if(t[0]&&t[0].type===\"ordgroup\"){for(var c=\"\",u=0;u<t[0].body.length;u++){var d=Q(t[0].body[u],\"textord\");c+=d.text}l=Number(c),a=l*2}var p=!a;o.body.forEach(function(S){for(var A=1;A<S.length;A+=2){var M=Q(S[A],\"styling\"),E=Q(M.body[0],\"ordgroup\");E.body.unshift(h)}if(p)a<S.length&&(a=S.length);else{var T=S.length/2;if(l<T)throw new I(\"Too many math in a row: \"+(\"expected \"+l+\", but got \"+T),S[0])}});for(var v=0;v<a;++v){var y=\"r\",w=0;v%2===1?y=\"l\":v>0&&p&&(w=1),r[v]={type:\"align\",align:y,pregap:w,postgap:0}}return o.colSeparationType=p?\"align\":\"alignat\",o};Qt({type:\"array\",names:[\"array\",\"darray\"],props:{numArgs:1},handler(i,e){var t=$o(e[0]),r=t?[e[0]]:Q(e[0],\"ordgroup\").body,n=r.map(function(o){var 
l=dh(o),a=l.text;if(\"lcr\".includes(a))return{type:\"align\",align:a};if(a===\"|\")return{type:\"separator\",separator:\"|\"};if(a===\":\")return{type:\"separator\",separator:\":\"};throw new I(\"Unknown column alignment: \"+a,o)}),s={cols:n,hskipBeforeAndAfter:!0,maxNumCols:n.length};return zr(i.parser,s,yh(i.envName))},htmlBuilder:er,mathmlBuilder:tr});Qt({type:\"array\",names:[\"matrix\",\"pmatrix\",\"bmatrix\",\"Bmatrix\",\"vmatrix\",\"Vmatrix\",\"matrix*\",\"pmatrix*\",\"bmatrix*\",\"Bmatrix*\",\"vmatrix*\",\"Vmatrix*\"],props:{numArgs:0},handler(i){var e={matrix:null,pmatrix:[\"(\",\")\"],bmatrix:[\"[\",\"]\"],Bmatrix:[\"\\\\{\",\"\\\\}\"],vmatrix:[\"|\",\"|\"],Vmatrix:[\"\\\\Vert\",\"\\\\Vert\"]}[i.envName.replace(\"*\",\"\")],t=\"c\",r={hskipBeforeAndAfter:!1,cols:[{type:\"align\",align:t}]};if(i.envName.charAt(i.envName.length-1)===\"*\"){var n=i.parser;if(n.consumeSpaces(),n.fetch().text===\"[\"){if(n.consume(),n.consumeSpaces(),t=n.fetch().text,!\"lcr\".includes(t))throw new I(\"Expected l or c or r\",n.nextToken);n.consume(),n.consumeSpaces(),n.expect(\"]\"),n.consume(),r.cols=[{type:\"align\",align:t}]}}var s=zr(i.parser,r,yh(i.envName)),o=Math.max(0,...s.body.map(l=>l.length));return s.cols=new Array(o).fill({type:\"align\",align:t}),e?{type:\"leftright\",mode:i.mode,body:[s],left:e[0],right:e[1],rightColor:void 0}:s},htmlBuilder:er,mathmlBuilder:tr});Qt({type:\"array\",names:[\"smallmatrix\"],props:{numArgs:0},handler(i){var e={arraystretch:.5},t=zr(i.parser,e,\"script\");return t.colSeparationType=\"small\",t},htmlBuilder:er,mathmlBuilder:tr});Qt({type:\"array\",names:[\"subarray\"],props:{numArgs:1},handler(i,e){var t=$o(e[0]),r=t?[e[0]]:Q(e[0],\"ordgroup\").body,n=r.map(function(o){var l=dh(o),a=l.text;if(\"lc\".includes(a))return{type:\"align\",align:a};throw new I(\"Unknown column alignment: \"+a,o)});if(n.length>1)throw new I(\"{subarray} can contain only one column\");var 
s={cols:n,hskipBeforeAndAfter:!1,arraystretch:.5};if(s=zr(i.parser,s,\"script\"),s.body.length>0&&s.body[0].length>1)throw new I(\"{subarray} can contain only one column\");return s},htmlBuilder:er,mathmlBuilder:tr});Qt({type:\"array\",names:[\"cases\",\"dcases\",\"rcases\",\"drcases\"],props:{numArgs:0},handler(i){var e={arraystretch:1.2,cols:[{type:\"align\",align:\"l\",pregap:0,postgap:1},{type:\"align\",align:\"l\",pregap:0,postgap:0}]},t=zr(i.parser,e,yh(i.envName));return{type:\"leftright\",mode:i.mode,body:[t],left:i.envName.includes(\"r\")?\".\":\"\\\\{\",right:i.envName.includes(\"r\")?\"\\\\}\":\".\",rightColor:void 0}},htmlBuilder:er,mathmlBuilder:tr});Qt({type:\"array\",names:[\"align\",\"align*\",\"aligned\",\"split\"],props:{numArgs:0},handler:Ym,htmlBuilder:er,mathmlBuilder:tr});Qt({type:\"array\",names:[\"gathered\",\"gather\",\"gather*\"],props:{numArgs:0},handler(i){Z3.has(i.envName)&&Ko(i);var e={cols:[{type:\"align\",align:\"c\"}],addJot:!0,colSeparationType:\"gather\",autoTag:bh(i.envName),emptySingleRow:!0,leqno:i.parser.settings.leqno};return zr(i.parser,e,\"display\")},htmlBuilder:er,mathmlBuilder:tr});Qt({type:\"array\",names:[\"alignat\",\"alignat*\",\"alignedat\"],props:{numArgs:1},handler:Ym,htmlBuilder:er,mathmlBuilder:tr});Qt({type:\"array\",names:[\"equation\",\"equation*\"],props:{numArgs:0},handler(i){Ko(i);var e={autoTag:bh(i.envName),emptySingleRow:!0,singleRow:!0,maxNumCols:1,leqno:i.parser.settings.leqno};return zr(i.parser,e,\"display\")},htmlBuilder:er,mathmlBuilder:tr});Qt({type:\"array\",names:[\"CD\"],props:{numArgs:0},handler(i){return Ko(i),q3(i.parser)},htmlBuilder:er,mathmlBuilder:tr});b(\"\\\\nonumber\",\"\\\\gdef\\\\@eqnsw{0}\");b(\"\\\\notag\",\"\\\\nonumber\");q({type:\"text\",names:[\"\\\\hline\",\"\\\\hdashline\"],props:{numArgs:0,allowedInText:!0,allowedInMath:!0},handler(i,e){throw new I(i.funcName+\" valid only within array environment\")}});var 
em=Km;q({type:\"environment\",names:[\"\\\\begin\",\"\\\\end\"],props:{numArgs:1,argTypes:[\"text\"]},handler(i,e){var{parser:t,funcName:r}=i,n=e[0];if(n.type!==\"ordgroup\")throw new I(\"Invalid environment name\",n);for(var s=\"\",o=0;o<n.body.length;++o)s+=Q(n.body[o],\"textord\").text;if(r===\"\\\\begin\"){if(!em.hasOwnProperty(s))throw new I(\"No such environment: \"+s,n);var l=em[s],{args:a,optArgs:h}=t.parseArguments(\"\\\\begin{\"+s+\"}\",l),c={mode:t.mode,envName:s,parser:t},u=l.handler(c,a,h);t.expect(\"\\\\end\",!1);var d=t.nextToken,p=Q(t.parseFunction(),\"environment\");if(p.name!==s)throw new I(\"Mismatch: \\\\begin{\"+s+\"} matched by \\\\end{\"+p.name+\"}\",d);return u}return{type:\"environment\",mode:t.mode,name:s,nameGroup:n}}});var Xm=(i,e)=>{var t=i.font,r=e.withFont(t);return oe(i.body,r)},_m=(i,e)=>{var t=i.font,r=e.withFont(t);return ue(i.body,r)},tm={\"\\\\Bbb\":\"\\\\mathbb\",\"\\\\bold\":\"\\\\mathbf\",\"\\\\frak\":\"\\\\mathfrak\",\"\\\\bm\":\"\\\\boldsymbol\"};q({type:\"font\",names:[\"\\\\mathrm\",\"\\\\mathit\",\"\\\\mathbf\",\"\\\\mathnormal\",\"\\\\mathsfit\",\"\\\\mathbb\",\"\\\\mathcal\",\"\\\\mathfrak\",\"\\\\mathscr\",\"\\\\mathsf\",\"\\\\mathtt\",\"\\\\Bbb\",\"\\\\bold\",\"\\\\frak\"],props:{numArgs:1,allowedInArgument:!0},handler:(i,e)=>{var{parser:t,funcName:r}=i,n=Io(e[0]),s=r;return s in 
tm&&(s=tm[s]),{type:\"font\",mode:t.mode,font:s.slice(1),body:n}},htmlBuilder:Xm,mathmlBuilder:_m});q({type:\"mclass\",names:[\"\\\\boldsymbol\",\"\\\\bm\"],props:{numArgs:1},handler:(i,e)=>{var{parser:t}=i,r=e[0];return{type:\"mclass\",mode:t.mode,mclass:Go(r),body:[{type:\"font\",mode:t.mode,font:\"boldsymbol\",body:r}],isCharacterBox:dr(r)}}});q({type:\"font\",names:[\"\\\\rm\",\"\\\\sf\",\"\\\\tt\",\"\\\\bf\",\"\\\\it\",\"\\\\cal\"],props:{numArgs:0,allowedInText:!0},handler:(i,e)=>{var{parser:t,funcName:r,breakOnTokenText:n}=i,{mode:s}=t,o=t.parseExpression(!0,n),l=\"math\"+r.slice(1);return{type:\"font\",mode:s,font:l,body:{type:\"ordgroup\",mode:t.mode,body:o}}},htmlBuilder:Xm,mathmlBuilder:_m});var e6=(i,e)=>{var t=e.style,r=t.fracNum(),n=t.fracDen(),s;s=e.havingStyle(r);var o=oe(i.numer,s,e);if(i.continued){var l=8.5/e.fontMetrics().ptPerEm,a=3.5/e.fontMetrics().ptPerEm;o.height=o.height<l?l:o.height,o.depth=o.depth<a?a:o.depth}s=e.havingStyle(n);var h=oe(i.denom,s,e),c,u,d;i.hasBarLine?(i.barSize?(u=we(i.barSize,e),c=Oi(\"frac-line\",e,u)):c=Oi(\"frac-line\",e),u=c.height,d=c.height):(c=null,u=0,d=e.fontMetrics().defaultRuleThickness);var p,v,y;t.size===Z.DISPLAY.size?(p=e.fontMetrics().num1,u>0?v=3*d:v=7*d,y=e.fontMetrics().denom1):(u>0?(p=e.fontMetrics().num2,v=d):(p=e.fontMetrics().num3,v=3*d),y=e.fontMetrics().denom2);var w;if(c){var A=e.fontMetrics().axisHeight;p-o.depth-(A+.5*u)<v&&(p+=v-(p-o.depth-(A+.5*u))),A-.5*u-(h.height-y)<v&&(y+=v-(A-.5*u-(h.height-y)));var M=-(A-.5*u);w=ae({positionType:\"individualShift\",children:[{type:\"elem\",elem:h,shift:y},{type:\"elem\",elem:c,shift:M},{type:\"elem\",elem:o,shift:-p}]})}else{var S=p-o.depth-(h.height-y);S<v&&(p+=.5*(v-S),y+=.5*(v-S)),w=ae({positionType:\"individualShift\",children:[{type:\"elem\",elem:h,shift:y},{type:\"elem\",elem:o,shift:-p}]})}s=e.havingStyle(t),w.height*=s.sizeMultiplier/e.sizeMultiplier,w.depth*=s.sizeMultiplier/e.sizeMultiplier;var 
E;t.size===Z.DISPLAY.size?E=e.fontMetrics().delim1:t.size===Z.SCRIPTSCRIPT.size?E=e.havingStyle(Z.SCRIPT).fontMetrics().delim2:E=e.fontMetrics().delim2;var T,B;return i.leftDelim==null?T=Ln(e,[\"mopen\"]):T=rh(i.leftDelim,E,!0,e.havingStyle(t),i.mode,[\"mopen\"]),i.continued?B=z([]):i.rightDelim==null?B=Ln(e,[\"mclose\"]):B=rh(i.rightDelim,E,!0,e.havingStyle(t),i.mode,[\"mclose\"]),z([\"mord\"].concat(s.sizingClasses(e)),[T,z([\"mfrac\"],[w]),B],e)},t6=(i,e)=>{var t=new L(\"mfrac\",[ue(i.numer,e),ue(i.denom,e)]);if(!i.hasBarLine)t.setAttribute(\"linethickness\",\"0px\");else if(i.barSize){var r=we(i.barSize,e);t.setAttribute(\"linethickness\",N(r))}if(i.leftDelim!=null||i.rightDelim!=null){var n=[];if(i.leftDelim!=null){var s=new L(\"mo\",[new Ae(i.leftDelim.replace(\"\\\\\",\"\"))]);s.setAttribute(\"fence\",\"true\"),n.push(s)}if(n.push(t),i.rightDelim!=null){var o=new L(\"mo\",[new Ae(i.rightDelim.replace(\"\\\\\",\"\"))]);o.setAttribute(\"fence\",\"true\"),n.push(o)}return uh(n)}return t},Jm=(i,e)=>{if(!e)return i;var t={type:\"styling\",mode:i.mode,style:e,body:[i]};return t};q({type:\"genfrac\",names:[\"\\\\cfrac\",\"\\\\dfrac\",\"\\\\frac\",\"\\\\tfrac\",\"\\\\dbinom\",\"\\\\binom\",\"\\\\tbinom\",\"\\\\\\\\atopfrac\",\"\\\\\\\\bracefrac\",\"\\\\\\\\brackfrac\"],props:{numArgs:2,allowedInArgument:!0},handler:(i,e)=>{var{parser:t,funcName:r}=i,n=e[0],s=e[1],o,l=null,a=null;switch(r){case\"\\\\cfrac\":case\"\\\\dfrac\":case\"\\\\frac\":case\"\\\\tfrac\":o=!0;break;case\"\\\\\\\\atopfrac\":o=!1;break;case\"\\\\dbinom\":case\"\\\\binom\":case\"\\\\tbinom\":o=!1,l=\"(\",a=\")\";break;case\"\\\\\\\\bracefrac\":o=!1,l=\"\\\\{\",a=\"\\\\}\";break;case\"\\\\\\\\brackfrac\":o=!1,l=\"[\",a=\"]\";break;default:throw new Error(\"Unrecognized genfrac command\")}var h=r===\"\\\\cfrac\",c=null;return 
h||r.startsWith(\"\\\\d\")?c=\"display\":r.startsWith(\"\\\\t\")&&(c=\"text\"),Jm({type:\"genfrac\",mode:t.mode,numer:n,denom:s,continued:h,hasBarLine:o,leftDelim:l,rightDelim:a,barSize:null},c)},htmlBuilder:e6,mathmlBuilder:t6});q({type:\"infix\",names:[\"\\\\over\",\"\\\\choose\",\"\\\\atop\",\"\\\\brace\",\"\\\\brack\"],props:{numArgs:0,infix:!0},handler(i){var{parser:e,funcName:t,token:r}=i,n;switch(t){case\"\\\\over\":n=\"\\\\frac\";break;case\"\\\\choose\":n=\"\\\\binom\";break;case\"\\\\atop\":n=\"\\\\\\\\atopfrac\";break;case\"\\\\brace\":n=\"\\\\\\\\bracefrac\";break;case\"\\\\brack\":n=\"\\\\\\\\brackfrac\";break;default:throw new Error(\"Unrecognized infix genfrac command\")}return{type:\"infix\",mode:e.mode,replaceWith:n,token:r}}});var rm=[\"display\",\"text\",\"script\",\"scriptscript\"],im=function(e){var t=null;return e.length>0&&(t=e,t=t===\".\"?null:t),t};q({type:\"genfrac\",names:[\"\\\\genfrac\"],props:{numArgs:6,allowedInArgument:!0,argTypes:[\"math\",\"math\",\"size\",\"text\",\"math\",\"math\"]},handler(i,e){var{parser:t}=i,r=e[4],n=e[5],s=Io(e[0]),o=s.type===\"atom\"&&s.family===\"open\"?im(s.text):null,l=Io(e[1]),a=l.type===\"atom\"&&l.family===\"close\"?im(l.text):null,h=Q(e[2],\"size\"),c,u=null;h.isBlank?c=!0:(u=h.value,c=u.number>0);var d=null,p=e[3];if(p.type===\"ordgroup\"){if(p.body.length>0){var v=Q(p.body[0],\"textord\");d=rm[Number(v.text)]}}else p=Q(p,\"textord\"),d=rm[Number(p.text)];return 
Jm({type:\"genfrac\",mode:t.mode,numer:r,denom:n,continued:!1,hasBarLine:c,barSize:u,leftDelim:o,rightDelim:a},d)}});q({type:\"infix\",names:[\"\\\\above\"],props:{numArgs:1,argTypes:[\"size\"],infix:!0},handler(i,e){var{parser:t,funcName:r,token:n}=i;return{type:\"infix\",mode:t.mode,replaceWith:\"\\\\\\\\abovefrac\",size:Q(e[0],\"size\").value,token:n}}});q({type:\"genfrac\",names:[\"\\\\\\\\abovefrac\"],props:{numArgs:3,argTypes:[\"math\",\"size\",\"math\"]},handler:(i,e)=>{var{parser:t,funcName:r}=i,n=e[0],s=Q(e[1],\"infix\").size;if(!s)throw new Error(\"\\\\\\\\abovefrac expected size, but got \"+String(s));var o=e[2],l=s.number>0;return{type:\"genfrac\",mode:t.mode,numer:n,denom:o,continued:!1,hasBarLine:l,barSize:s,leftDelim:null,rightDelim:null}}});var Zm=(i,e)=>{var t=e.style,r,n;i.type===\"supsub\"?(r=i.sup?oe(i.sup,e.havingStyle(t.sup()),e):oe(i.sub,e.havingStyle(t.sub()),e),n=Q(i.base,\"horizBrace\")):n=Q(i,\"horizBrace\");var s=oe(n.base,e.havingBaseStyle(Z.DISPLAY)),o=Vo(n,e),l;if(n.isOver?(l=ae({positionType:\"firstBaseline\",children:[{type:\"elem\",elem:s},{type:\"kern\",size:.1},{type:\"elem\",elem:o}]}),l.children[0].children[0].children[1].classes.push(\"svg-align\")):(l=ae({positionType:\"bottom\",positionData:s.depth+.1+o.height,children:[{type:\"elem\",elem:o},{type:\"kern\",size:.1},{type:\"elem\",elem:s}]}),l.children[0].children[0].children[0].classes.push(\"svg-align\")),r){var a=z([\"mord\",n.isOver?\"mover\":\"munder\"],[l],e);n.isOver?l=ae({positionType:\"firstBaseline\",children:[{type:\"elem\",elem:a},{type:\"kern\",size:.2},{type:\"elem\",elem:r}]}):l=ae({positionType:\"bottom\",positionData:a.depth+.2+r.height+r.depth,children:[{type:\"elem\",elem:r},{type:\"kern\",size:.2},{type:\"elem\",elem:a}]})}return z([\"mord\",n.isOver?\"mover\":\"munder\"],[l],e)},r6=(i,e)=>{var t=Wo(i.label);return new 
L(i.isOver?\"mover\":\"munder\",[ue(i.base,e),t])};q({type:\"horizBrace\",names:[\"\\\\overbrace\",\"\\\\underbrace\"],props:{numArgs:1},handler(i,e){var{parser:t,funcName:r}=i;return{type:\"horizBrace\",mode:t.mode,label:r,isOver:/^\\\\over/.test(r),base:e[0]}},htmlBuilder:Zm,mathmlBuilder:r6});q({type:\"href\",names:[\"\\\\href\"],props:{numArgs:2,argTypes:[\"url\",\"original\"],allowedInText:!0},handler:(i,e)=>{var{parser:t}=i,r=e[1],n=Q(e[0],\"url\").url;return t.settings.isTrusted({command:\"\\\\href\",url:n})?{type:\"href\",mode:t.mode,href:n,body:De(r)}:t.formatUnsupportedCmd(\"\\\\href\")},htmlBuilder:(i,e)=>{var t=Pe(i.body,e,!1);return y3(i.href,[],t,e)},mathmlBuilder:(i,e)=>{var t=Or(i.body,e);return t instanceof L||(t=new L(\"mrow\",[t])),t.setAttribute(\"href\",i.href),t}});q({type:\"href\",names:[\"\\\\url\"],props:{numArgs:1,argTypes:[\"url\"],allowedInText:!0},handler:(i,e)=>{var{parser:t}=i,r=Q(e[0],\"url\").url;if(!t.settings.isTrusted({command:\"\\\\url\",url:r}))return t.formatUnsupportedCmd(\"\\\\url\");for(var n=[],s=0;s<r.length;s++){var o=r[s];o===\"~\"&&(o=\"\\\\textasciitilde\"),n.push({type:\"textord\",mode:\"text\",text:o})}var l={type:\"text\",mode:t.mode,font:\"\\\\texttt\",body:n};return{type:\"href\",mode:t.mode,href:r,body:De(l)}}});q({type:\"hbox\",names:[\"\\\\hbox\"],props:{numArgs:1,argTypes:[\"text\"],allowedInText:!0,primitive:!0},handler(i,e){var{parser:t}=i;return{type:\"hbox\",mode:t.mode,body:De(e[0])}},htmlBuilder(i,e){var t=Pe(i.body,e,!1);return pr(t)},mathmlBuilder(i,e){return new L(\"mrow\",ht(i.body,e))}});q({type:\"html\",names:[\"\\\\htmlClass\",\"\\\\htmlId\",\"\\\\htmlStyle\",\"\\\\htmlData\"],props:{numArgs:2,argTypes:[\"raw\",\"original\"],allowedInText:!0},handler:(i,e)=>{var{parser:t,funcName:r,token:n}=i,s=Q(e[0],\"raw\").string,o=e[1];t.settings.strict&&t.settings.reportNonstrict(\"htmlExtension\",\"HTML extension is disabled on strict mode\");var 
l,a={};switch(r){case\"\\\\htmlClass\":a.class=s,l={command:\"\\\\htmlClass\",class:s};break;case\"\\\\htmlId\":a.id=s,l={command:\"\\\\htmlId\",id:s};break;case\"\\\\htmlStyle\":a.style=s,l={command:\"\\\\htmlStyle\",style:s};break;case\"\\\\htmlData\":{for(var h=s.split(\",\"),c=0;c<h.length;c++){var u=h[c],d=u.indexOf(\"=\");if(d<0)throw new I(\"\\\\htmlData key/value '\"+u+\"' missing equals sign\");var p=u.slice(0,d),v=u.slice(d+1);a[\"data-\"+p.trim()]=v}l={command:\"\\\\htmlData\",attributes:a};break}default:throw new Error(\"Unrecognized html command\")}return t.settings.isTrusted(l)?{type:\"html\",mode:t.mode,attributes:a,body:De(o)}:t.formatUnsupportedCmd(r)},htmlBuilder:(i,e)=>{var t=Pe(i.body,e,!1),r=[\"enclosing\"];i.attributes.class&&r.push(...i.attributes.class.trim().split(/\\s+/));var n=z(r,t,e);for(var s in i.attributes)s!==\"class\"&&i.attributes.hasOwnProperty(s)&&n.setAttribute(s,i.attributes[s]);return n},mathmlBuilder:(i,e)=>Or(i.body,e)});q({type:\"htmlmathml\",names:[\"\\\\html@mathml\"],props:{numArgs:2,allowedInArgument:!0,allowedInText:!0},handler:(i,e)=>{var{parser:t}=i;return{type:\"htmlmathml\",mode:t.mode,html:De(e[0]),mathml:De(e[1])}},htmlBuilder:(i,e)=>{var t=Pe(i.html,e,!1);return pr(t)},mathmlBuilder:(i,e)=>Or(i.mathml,e)});var $0=function(e){if(/^[-+]? *(\\d+(\\.\\d*)?|\\.\\d+)$/.test(e))return{number:+e,unit:\"bp\"};var t=/([-+]?) 
*(\\d+(?:\\.\\d*)?|\\.\\d+) *([a-z]{2})/.exec(e);if(!t)throw new I(\"Invalid size: '\"+e+\"' in \\\\includegraphics\");var r={number:+(t[1]+t[2]),unit:t[3]};if(!vm(r))throw new I(\"Invalid unit: '\"+r.unit+\"' in \\\\includegraphics.\");return r};q({type:\"includegraphics\",names:[\"\\\\includegraphics\"],props:{numArgs:1,numOptionalArgs:1,argTypes:[\"raw\",\"url\"],allowedInText:!1},handler:(i,e,t)=>{var{parser:r}=i,n={number:0,unit:\"em\"},s={number:.9,unit:\"em\"},o={number:0,unit:\"em\"},l=\"\";if(t[0])for(var a=Q(t[0],\"raw\").string,h=a.split(\",\"),c=0;c<h.length;c++){var u=h[c].split(\"=\");if(u.length===2){var d=u[1].trim();switch(u[0].trim()){case\"alt\":l=d;break;case\"width\":n=$0(d);break;case\"height\":s=$0(d);break;case\"totalheight\":o=$0(d);break;default:throw new I(\"Invalid key: '\"+u[0]+\"' in \\\\includegraphics.\")}}}var p=Q(e[0],\"url\").url;return l===\"\"&&(l=p,l=l.replace(/^.*[\\\\/]/,\"\"),l=l.substring(0,l.lastIndexOf(\".\"))),r.settings.isTrusted({command:\"\\\\includegraphics\",url:p})?{type:\"includegraphics\",mode:r.mode,alt:l,width:n,height:s,totalheight:o,src:p}:r.formatUnsupportedCmd(\"\\\\includegraphics\")},htmlBuilder:(i,e)=>{var t=we(i.height,e),r=0;i.totalheight.number>0&&(r=we(i.totalheight,e)-t);var n=0;i.width.number>0&&(n=we(i.width,e));var s={height:N(t+r)};n>0&&(s.width=N(n)),r>0&&(s.verticalAlign=N(-r));var o=new X0(i.src,i.alt,s);return o.height=t,o.depth=r,o},mathmlBuilder:(i,e)=>{var t=new L(\"mglyph\",[]);t.setAttribute(\"alt\",i.alt);var r=we(i.height,e),n=0;if(i.totalheight.number>0&&(n=we(i.totalheight,e)-r,t.setAttribute(\"valign\",N(-n))),t.setAttribute(\"height\",N(r+n)),i.width.number>0){var s=we(i.width,e);t.setAttribute(\"width\",N(s))}return t.setAttribute(\"src\",i.src),t}});q({type:\"kern\",names:[\"\\\\kern\",\"\\\\mkern\",\"\\\\hskip\",\"\\\\mskip\"],props:{numArgs:1,argTypes:[\"size\"],primitive:!0,allowedInText:!0},handler(i,e){var{parser:t,funcName:r}=i,n=Q(e[0],\"size\");if(t.settings.strict){var 
s=r[1]===\"m\",o=n.value.unit===\"mu\";s?(o||t.settings.reportNonstrict(\"mathVsTextUnits\",\"LaTeX's \"+r+\" supports only mu units, \"+(\"not \"+n.value.unit+\" units\")),t.mode!==\"math\"&&t.settings.reportNonstrict(\"mathVsTextUnits\",\"LaTeX's \"+r+\" works only in math mode\")):o&&t.settings.reportNonstrict(\"mathVsTextUnits\",\"LaTeX's \"+r+\" doesn't support mu units\")}return{type:\"kern\",mode:t.mode,dimension:n.value}},htmlBuilder(i,e){return Sm(i.dimension,e)},mathmlBuilder(i,e){var t=we(i.dimension,e);return new Ro(t)}});q({type:\"lap\",names:[\"\\\\mathllap\",\"\\\\mathrlap\",\"\\\\mathclap\"],props:{numArgs:1,allowedInText:!0},handler:(i,e)=>{var{parser:t,funcName:r}=i,n=e[0];return{type:\"lap\",mode:t.mode,alignment:r.slice(5),body:n}},htmlBuilder:(i,e)=>{var t;i.alignment===\"clap\"?(t=z([],[oe(i.body,e)]),t=z([\"inner\"],[t],e)):t=z([\"inner\"],[oe(i.body,e)]);var r=z([\"fix\"],[]),n=z([i.alignment],[t,r],e),s=z([\"strut\"]);return s.style.height=N(n.height+n.depth),n.depth&&(s.style.verticalAlign=N(-n.depth)),n.children.unshift(s),n=z([\"thinbox\"],[n],e),z([\"mord\",\"vbox\"],[n],e)},mathmlBuilder:(i,e)=>{var t=new L(\"mpadded\",[ue(i.body,e)]);if(i.alignment!==\"rlap\"){var r=i.alignment===\"llap\"?\"-1\":\"-0.5\";t.setAttribute(\"lspace\",r+\"width\")}return t.setAttribute(\"width\",\"0px\"),t}});q({type:\"styling\",names:[\"\\\\(\",\"$\"],props:{numArgs:0,allowedInText:!0,allowedInMath:!1},handler(i,e){var{funcName:t,parser:r}=i,n=r.mode;r.switchMode(\"math\");var s=t===\"\\\\(\"?\"\\\\)\":\"$\",o=r.parseExpression(!1,s);return r.expect(s),r.switchMode(n),{type:\"styling\",mode:r.mode,style:\"text\",body:o}}});q({type:\"text\",names:[\"\\\\)\",\"\\\\]\"],props:{numArgs:0,allowedInText:!0,allowedInMath:!1},handler(i,e){throw new I(\"Mismatched \"+i.funcName)}});var nm=(i,e)=>{switch(e.style.size){case Z.DISPLAY.size:return i.display;case Z.TEXT.size:return i.text;case Z.SCRIPT.size:return i.script;case Z.SCRIPTSCRIPT.size:return 
i.scriptscript;default:return i.text}};q({type:\"mathchoice\",names:[\"\\\\mathchoice\"],props:{numArgs:4,primitive:!0},handler:(i,e)=>{var{parser:t}=i;return{type:\"mathchoice\",mode:t.mode,display:De(e[0]),text:De(e[1]),script:De(e[2]),scriptscript:De(e[3])}},htmlBuilder:(i,e)=>{var t=nm(i,e),r=Pe(t,e,!1);return pr(r)},mathmlBuilder:(i,e)=>{var t=nm(i,e);return Or(t,e)}});var Qm=(i,e,t,r,n,s,o)=>{i=z([],[i]);var l=t&&dr(t),a,h;if(e){var c=oe(e,r.havingStyle(n.sup()),r);h={elem:c,kern:Math.max(r.fontMetrics().bigOpSpacing1,r.fontMetrics().bigOpSpacing3-c.depth)}}if(t){var u=oe(t,r.havingStyle(n.sub()),r);a={elem:u,kern:Math.max(r.fontMetrics().bigOpSpacing2,r.fontMetrics().bigOpSpacing4-u.height)}}var d;if(h&&a){var p=r.fontMetrics().bigOpSpacing5+a.elem.height+a.elem.depth+a.kern+i.depth+o;d=ae({positionType:\"bottom\",positionData:p,children:[{type:\"kern\",size:r.fontMetrics().bigOpSpacing5},{type:\"elem\",elem:a.elem,marginLeft:N(-s)},{type:\"kern\",size:a.kern},{type:\"elem\",elem:i},{type:\"kern\",size:h.kern},{type:\"elem\",elem:h.elem,marginLeft:N(s)},{type:\"kern\",size:r.fontMetrics().bigOpSpacing5}]})}else if(a){var v=i.height-o;d=ae({positionType:\"top\",positionData:v,children:[{type:\"kern\",size:r.fontMetrics().bigOpSpacing5},{type:\"elem\",elem:a.elem,marginLeft:N(-s)},{type:\"kern\",size:a.kern},{type:\"elem\",elem:i}]})}else if(h){var y=i.depth+o;d=ae({positionType:\"bottom\",positionData:y,children:[{type:\"elem\",elem:i},{type:\"kern\",size:h.kern},{type:\"elem\",elem:h.elem,marginLeft:N(s)},{type:\"kern\",size:r.fontMetrics().bigOpSpacing5}]})}else return i;var w=[d];if(a&&s!==0&&!l){var S=z([\"mspace\"],[],r);S.style.marginRight=N(s),w.unshift(S)}return z([\"mop\",\"op-limits\"],w,r)},ep=new Set([\"\\\\smallint\"]),Ii=(i,e)=>{var t,r,n=!1,s;i.type===\"supsub\"?(t=i.sup,r=i.sub,s=Q(i.base,\"op\"),n=!0):s=Q(i,\"op\");var o=e.style,l=!1;o.size===Z.DISPLAY.size&&s.symbol&&!ep.has(s.name)&&(l=!0);var a;if(s.symbol){var 
h=l?\"Size2-Regular\":\"Size1-Regular\",c=\"\";if((s.name===\"\\\\oiint\"||s.name===\"\\\\oiiint\")&&(c=s.name.slice(1),s.name=c===\"oiint\"?\"\\\\iint\":\"\\\\iiint\"),a=Qe(s.name,h,\"math\",e,[\"mop\",\"op-symbol\",l?\"large-op\":\"small-op\"]),c.length>0){var u=a.italic,d=Am(c+\"Size\"+(l?\"2\":\"1\"),e);a=ae({positionType:\"individualShift\",children:[{type:\"elem\",elem:a,shift:0},{type:\"elem\",elem:d,shift:l?.08:0}]}),s.name=\"\\\\\"+c,a.classes.unshift(\"mop\"),a.italic=u}}else if(s.body){var p=Pe(s.body,e,!0);p.length===1&&p[0]instanceof at?(a=p[0],a.classes[0]=\"mop\"):a=z([\"mop\"],p,e)}else{for(var v=[],y=1;y<s.name.length;y++)v.push(hh(s.name[y],s.mode,e));a=z([\"mop\"],v,e)}var w=0,S=0;return(a instanceof at||s.name===\"\\\\oiint\"||s.name===\"\\\\oiiint\")&&!s.suppressBaseShift&&(w=(a.height-a.depth)/2-e.fontMetrics().axisHeight,S=a.italic),n?Qm(a,t,r,e,o,S,w):(w&&(a.style.position=\"relative\",a.style.top=N(w)),a)},In=(i,e)=>{var t;if(i.symbol)t=new L(\"mo\",[Dt(i.name,i.mode)]),ep.has(i.name)&&t.setAttribute(\"largeop\",\"false\");else if(i.body)t=new L(\"mo\",ht(i.body,e));else{t=new L(\"mi\",[new Ae(i.name.slice(1))]);var r=new L(\"mo\",[Dt(\"\\u2061\",\"text\")]);i.parentIsSupSub?t=new L(\"mrow\",[t,r]):t=Dm([t,r])}return 
t},i6={\"\\u220F\":\"\\\\prod\",\"\\u2210\":\"\\\\coprod\",\"\\u2211\":\"\\\\sum\",\"\\u22C0\":\"\\\\bigwedge\",\"\\u22C1\":\"\\\\bigvee\",\"\\u22C2\":\"\\\\bigcap\",\"\\u22C3\":\"\\\\bigcup\",\"\\u2A00\":\"\\\\bigodot\",\"\\u2A01\":\"\\\\bigoplus\",\"\\u2A02\":\"\\\\bigotimes\",\"\\u2A04\":\"\\\\biguplus\",\"\\u2A06\":\"\\\\bigsqcup\"};q({type:\"op\",names:[\"\\\\coprod\",\"\\\\bigvee\",\"\\\\bigwedge\",\"\\\\biguplus\",\"\\\\bigcap\",\"\\\\bigcup\",\"\\\\intop\",\"\\\\prod\",\"\\\\sum\",\"\\\\bigotimes\",\"\\\\bigoplus\",\"\\\\bigodot\",\"\\\\bigsqcup\",\"\\\\smallint\",\"\\u220F\",\"\\u2210\",\"\\u2211\",\"\\u22C0\",\"\\u22C1\",\"\\u22C2\",\"\\u22C3\",\"\\u2A00\",\"\\u2A01\",\"\\u2A02\",\"\\u2A04\",\"\\u2A06\"],props:{numArgs:0},handler:(i,e)=>{var{parser:t,funcName:r}=i,n=r;return n.length===1&&(n=i6[n]),{type:\"op\",mode:t.mode,limits:!0,parentIsSupSub:!1,symbol:!0,name:n}},htmlBuilder:Ii,mathmlBuilder:In});q({type:\"op\",names:[\"\\\\mathop\"],props:{numArgs:1,primitive:!0},handler:(i,e)=>{var{parser:t}=i,r=e[0];return{type:\"op\",mode:t.mode,limits:!1,parentIsSupSub:!1,symbol:!1,body:De(r)}},htmlBuilder:Ii,mathmlBuilder:In});var 
n6={\"\\u222B\":\"\\\\int\",\"\\u222C\":\"\\\\iint\",\"\\u222D\":\"\\\\iiint\",\"\\u222E\":\"\\\\oint\",\"\\u222F\":\"\\\\oiint\",\"\\u2230\":\"\\\\oiiint\"};q({type:\"op\",names:[\"\\\\arcsin\",\"\\\\arccos\",\"\\\\arctan\",\"\\\\arctg\",\"\\\\arcctg\",\"\\\\arg\",\"\\\\ch\",\"\\\\cos\",\"\\\\cosec\",\"\\\\cosh\",\"\\\\cot\",\"\\\\cotg\",\"\\\\coth\",\"\\\\csc\",\"\\\\ctg\",\"\\\\cth\",\"\\\\deg\",\"\\\\dim\",\"\\\\exp\",\"\\\\hom\",\"\\\\ker\",\"\\\\lg\",\"\\\\ln\",\"\\\\log\",\"\\\\sec\",\"\\\\sin\",\"\\\\sinh\",\"\\\\sh\",\"\\\\tan\",\"\\\\tanh\",\"\\\\tg\",\"\\\\th\"],props:{numArgs:0},handler(i){var{parser:e,funcName:t}=i;return{type:\"op\",mode:e.mode,limits:!1,parentIsSupSub:!1,symbol:!1,name:t}},htmlBuilder:Ii,mathmlBuilder:In});q({type:\"op\",names:[\"\\\\det\",\"\\\\gcd\",\"\\\\inf\",\"\\\\lim\",\"\\\\max\",\"\\\\min\",\"\\\\Pr\",\"\\\\sup\"],props:{numArgs:0},handler(i){var{parser:e,funcName:t}=i;return{type:\"op\",mode:e.mode,limits:!0,parentIsSupSub:!1,symbol:!1,name:t}},htmlBuilder:Ii,mathmlBuilder:In});q({type:\"op\",names:[\"\\\\int\",\"\\\\iint\",\"\\\\iiint\",\"\\\\oint\",\"\\\\oiint\",\"\\\\oiiint\",\"\\u222B\",\"\\u222C\",\"\\u222D\",\"\\u222E\",\"\\u222F\",\"\\u2230\"],props:{numArgs:0,allowedInArgument:!0},handler(i){var{parser:e,funcName:t}=i,r=t;return r.length===1&&(r=n6[r]),{type:\"op\",mode:e.mode,limits:!1,parentIsSupSub:!1,symbol:!0,name:r}},htmlBuilder:Ii,mathmlBuilder:In});var tp=(i,e)=>{var t,r,n=!1,s;i.type===\"supsub\"?(t=i.sup,r=i.sub,s=Q(i.base,\"operatorname\"),n=!0):s=Q(i,\"operatorname\");var o;if(s.body.length>0){for(var l=s.body.map(u=>{var d=u.text;return typeof d==\"string\"?{type:\"textord\",mode:u.mode,text:d}:u}),a=Pe(l,e.withFont(\"mathrm\"),!0),h=0;h<a.length;h++){var c=a[h];c instanceof at&&(c.text=c.text.replace(/\\u2212/,\"-\").replace(/\\u2217/,\"*\"))}o=z([\"mop\"],a,e)}else o=z([\"mop\"],[],e);return n?Qm(o,t,r,e,e.style,0,0):o},s6=(i,e)=>{for(var t=ht(i.body,e.withFont(\"mathrm\")),r=!0,n=0;n<t.length;n++){var 
s=t[n];if(!(s instanceof Ro))if(s instanceof L)switch(s.type){case\"mi\":case\"mn\":case\"ms\":case\"mspace\":case\"mtext\":break;case\"mo\":{var o=s.children[0];s.children.length===1&&o instanceof Ae?o.text=o.text.replace(/\\u2212/,\"-\").replace(/\\u2217/,\"*\"):r=!1;break}default:r=!1}else r=!1}if(r){var l=t.map(c=>c.toText()).join(\"\");t=[new Ae(l)]}var a=new L(\"mi\",t);a.setAttribute(\"mathvariant\",\"normal\");var h=new L(\"mo\",[Dt(\"\\u2061\",\"text\")]);return i.parentIsSupSub?new L(\"mrow\",[a,h]):Dm([a,h])};q({type:\"operatorname\",names:[\"\\\\operatorname@\",\"\\\\operatornamewithlimits\"],props:{numArgs:1},handler:(i,e)=>{var{parser:t,funcName:r}=i,n=e[0];return{type:\"operatorname\",mode:t.mode,body:De(n),alwaysHandleSupSub:r===\"\\\\operatornamewithlimits\",limits:!1,parentIsSupSub:!1}},htmlBuilder:tp,mathmlBuilder:s6});b(\"\\\\operatorname\",\"\\\\@ifstar\\\\operatornamewithlimits\\\\operatorname@\");ri({type:\"ordgroup\",htmlBuilder(i,e){return i.semisimple?pr(Pe(i.body,e,!1)):z([\"mord\"],Pe(i.body,e,!0),e)},mathmlBuilder(i,e){return Or(i.body,e,!0)}});q({type:\"overline\",names:[\"\\\\overline\"],props:{numArgs:1},handler(i,e){var{parser:t}=i,r=e[0];return{type:\"overline\",mode:t.mode,body:r}},htmlBuilder(i,e){var t=oe(i.body,e.havingCrampedStyle()),r=Oi(\"overline-line\",e),n=e.fontMetrics().defaultRuleThickness,s=ae({positionType:\"firstBaseline\",children:[{type:\"elem\",elem:t},{type:\"kern\",size:3*n},{type:\"elem\",elem:r},{type:\"kern\",size:n}]});return z([\"mord\",\"overline\"],[s],e)},mathmlBuilder(i,e){var t=new L(\"mo\",[new Ae(\"\\u203E\")]);t.setAttribute(\"stretchy\",\"true\");var r=new L(\"mover\",[ue(i.body,e),t]);return r.setAttribute(\"accent\",\"true\"),r}});q({type:\"phantom\",names:[\"\\\\phantom\"],props:{numArgs:1,allowedInText:!0},handler:(i,e)=>{var{parser:t}=i,r=e[0];return{type:\"phantom\",mode:t.mode,body:De(r)}},htmlBuilder:(i,e)=>{var t=Pe(i.body,e.withPhantom(),!1);return pr(t)},mathmlBuilder:(i,e)=>{var 
t=ht(i.body,e);return new L(\"mphantom\",t)}});q({type:\"hphantom\",names:[\"\\\\hphantom\"],props:{numArgs:1,allowedInText:!0},handler:(i,e)=>{var{parser:t}=i,r=e[0];return{type:\"hphantom\",mode:t.mode,body:r}},htmlBuilder:(i,e)=>{var t=z([],[oe(i.body,e.withPhantom())]);if(t.height=0,t.depth=0,t.children)for(var r=0;r<t.children.length;r++)t.children[r].height=0,t.children[r].depth=0;return t=ae({positionType:\"firstBaseline\",children:[{type:\"elem\",elem:t}]}),z([\"mord\"],[t],e)},mathmlBuilder:(i,e)=>{var t=ht(De(i.body),e),r=new L(\"mphantom\",t),n=new L(\"mpadded\",[r]);return n.setAttribute(\"height\",\"0px\"),n.setAttribute(\"depth\",\"0px\"),n}});q({type:\"vphantom\",names:[\"\\\\vphantom\"],props:{numArgs:1,allowedInText:!0},handler:(i,e)=>{var{parser:t}=i,r=e[0];return{type:\"vphantom\",mode:t.mode,body:r}},htmlBuilder:(i,e)=>{var t=z([\"inner\"],[oe(i.body,e.withPhantom())]),r=z([\"fix\"],[]);return z([\"mord\",\"rlap\"],[t,r],e)},mathmlBuilder:(i,e)=>{var t=ht(De(i.body),e),r=new L(\"mphantom\",t),n=new L(\"mpadded\",[r]);return n.setAttribute(\"width\",\"0px\"),n}});q({type:\"raisebox\",names:[\"\\\\raisebox\"],props:{numArgs:2,argTypes:[\"size\",\"hbox\"],allowedInText:!0},handler(i,e){var{parser:t}=i,r=Q(e[0],\"size\").value,n=e[1];return{type:\"raisebox\",mode:t.mode,dy:r,body:n}},htmlBuilder(i,e){var t=oe(i.body,e),r=we(i.dy,e);return ae({positionType:\"shift\",positionData:-r,children:[{type:\"elem\",elem:t}]})},mathmlBuilder(i,e){var t=new L(\"mpadded\",[ue(i.body,e)]),r=i.dy.number+i.dy.unit;return 
t.setAttribute(\"voffset\",r),t}});q({type:\"internal\",names:[\"\\\\relax\"],props:{numArgs:0,allowedInText:!0,allowedInArgument:!0},handler(i){var{parser:e}=i;return{type:\"internal\",mode:e.mode}}});q({type:\"rule\",names:[\"\\\\rule\"],props:{numArgs:2,numOptionalArgs:1,allowedInText:!0,allowedInMath:!0,argTypes:[\"size\",\"size\",\"size\"]},handler(i,e,t){var{parser:r}=i,n=t[0],s=Q(e[0],\"size\"),o=Q(e[1],\"size\");return{type:\"rule\",mode:r.mode,shift:n&&Q(n,\"size\").value,width:s.value,height:o.value}},htmlBuilder(i,e){var t=z([\"mord\",\"rule\"],[],e),r=we(i.width,e),n=we(i.height,e),s=i.shift?we(i.shift,e):0;return t.style.borderRightWidth=N(r),t.style.borderTopWidth=N(n),t.style.bottom=N(s),t.width=r,t.height=n+s,t.depth=-s,t.maxFontSize=n*1.125*e.sizeMultiplier,t},mathmlBuilder(i,e){var t=we(i.width,e),r=we(i.height,e),n=i.shift?we(i.shift,e):0,s=e.color&&e.getColor()||\"black\",o=new L(\"mspace\");o.setAttribute(\"mathbackground\",s),o.setAttribute(\"width\",N(t)),o.setAttribute(\"height\",N(r));var l=new L(\"mpadded\",[o]);return n>=0?l.setAttribute(\"height\",N(n)):(l.setAttribute(\"height\",N(n)),l.setAttribute(\"depth\",N(-n))),l.setAttribute(\"voffset\",N(n)),l}});function rp(i,e,t){for(var r=Pe(i,e,!1),n=e.sizeMultiplier/t.sizeMultiplier,s=0;s<r.length;s++){var o=r[s].classes.indexOf(\"sizing\");o<0?Array.prototype.push.apply(r[s].classes,e.sizingClasses(t)):r[s].classes[o+1]===\"reset-size\"+e.size&&(r[s].classes[o+1]=\"reset-size\"+t.size),r[s].height*=n,r[s].depth*=n}return pr(r)}var sm=[\"\\\\tiny\",\"\\\\sixptsize\",\"\\\\scriptsize\",\"\\\\footnotesize\",\"\\\\small\",\"\\\\normalsize\",\"\\\\large\",\"\\\\Large\",\"\\\\LARGE\",\"\\\\huge\",\"\\\\Huge\"],o6=(i,e)=>{var t=e.havingSize(i.size);return 
rp(i.body,t,e)};q({type:\"sizing\",names:sm,props:{numArgs:0,allowedInText:!0},handler:(i,e)=>{var{breakOnTokenText:t,funcName:r,parser:n}=i,s=n.parseExpression(!1,t);return{type:\"sizing\",mode:n.mode,size:sm.indexOf(r)+1,body:s}},htmlBuilder:o6,mathmlBuilder:(i,e)=>{var t=e.havingSize(i.size),r=ht(i.body,t),n=new L(\"mstyle\",r);return n.setAttribute(\"mathsize\",N(t.sizeMultiplier)),n}});q({type:\"smash\",names:[\"\\\\smash\"],props:{numArgs:1,numOptionalArgs:1,allowedInText:!0},handler:(i,e,t)=>{var{parser:r}=i,n=!1,s=!1,o=t[0]&&Q(t[0],\"ordgroup\");if(o)for(var l=\"\",a=0;a<o.body.length;++a){var h=o.body[a];if(l=h.text,l===\"t\")n=!0;else if(l===\"b\")s=!0;else{n=!1,s=!1;break}}else n=!0,s=!0;var c=e[0];return{type:\"smash\",mode:r.mode,body:c,smashHeight:n,smashDepth:s}},htmlBuilder:(i,e)=>{var t=z([],[oe(i.body,e)]);if(!i.smashHeight&&!i.smashDepth)return t;if(i.smashHeight&&(t.height=0,t.children))for(var r=0;r<t.children.length;r++)t.children[r].height=0;if(i.smashDepth&&(t.depth=0,t.children))for(var n=0;n<t.children.length;n++)t.children[n].depth=0;var s=ae({positionType:\"firstBaseline\",children:[{type:\"elem\",elem:t}]});return z([\"mord\"],[s],e)},mathmlBuilder:(i,e)=>{var t=new L(\"mpadded\",[ue(i.body,e)]);return i.smashHeight&&t.setAttribute(\"height\",\"0px\"),i.smashDepth&&t.setAttribute(\"depth\",\"0px\"),t}});q({type:\"sqrt\",names:[\"\\\\sqrt\"],props:{numArgs:1,numOptionalArgs:1},handler(i,e,t){var{parser:r}=i,n=t[0],s=e[0];return{type:\"sqrt\",mode:r.mode,body:s,index:n}},htmlBuilder(i,e){var t=oe(i.body,e.havingCrampedStyle());t.height===0&&(t.height=e.fontMetrics().xHeight),t=zi(t,e);var r=e.fontMetrics(),n=r.defaultRuleThickness,s=n;e.style.id<Z.TEXT.id&&(s=e.fontMetrics().xHeight);var o=n+s/4,l=t.height+t.depth+o+n,{span:a,ruleWidth:h,advanceWidth:c}=K3(l,e),u=a.height-h;u>t.height+t.depth+o&&(o=(o+u-t.height-t.depth)/2);var d=a.height-t.height-o-h;t.style.paddingLeft=N(c);var 
p=ae({positionType:\"firstBaseline\",children:[{type:\"elem\",elem:t,wrapperClasses:[\"svg-align\"]},{type:\"kern\",size:-(t.height+d)},{type:\"elem\",elem:a},{type:\"kern\",size:h}]});if(i.index){var v=e.havingStyle(Z.SCRIPTSCRIPT),y=oe(i.index,v,e),w=.6*(p.height-p.depth),S=ae({positionType:\"shift\",positionData:-w,children:[{type:\"elem\",elem:y}]}),A=z([\"root\"],[S]);return z([\"mord\",\"sqrt\"],[A,p],e)}else return z([\"mord\",\"sqrt\"],[p],e)},mathmlBuilder(i,e){var{body:t,index:r}=i;return r?new L(\"mroot\",[ue(t,e),ue(r,e)]):new L(\"msqrt\",[ue(t,e)])}});var om={display:Z.DISPLAY,text:Z.TEXT,script:Z.SCRIPT,scriptscript:Z.SCRIPTSCRIPT};q({type:\"styling\",names:[\"\\\\displaystyle\",\"\\\\textstyle\",\"\\\\scriptstyle\",\"\\\\scriptscriptstyle\"],props:{numArgs:0,allowedInText:!0,primitive:!0},handler(i,e){var{breakOnTokenText:t,funcName:r,parser:n}=i,s=n.parseExpression(!0,t),o=r.slice(1,r.length-5);return{type:\"styling\",mode:n.mode,style:o,body:s}},htmlBuilder(i,e){var t=om[i.style],r=e.havingStyle(t).withFont(\"\");return rp(i.body,r,e)},mathmlBuilder(i,e){var t=om[i.style],r=e.havingStyle(t),n=ht(i.body,r),s=new L(\"mstyle\",n),o={display:[\"0\",\"true\"],text:[\"0\",\"false\"],script:[\"1\",\"false\"],scriptscript:[\"2\",\"false\"]},l=o[i.style];return s.setAttribute(\"scriptlevel\",l[0]),s.setAttribute(\"displaystyle\",l[1]),s}});var l6=function(e,t){var r=e.base;if(r)if(r.type===\"op\"){var n=r.limits&&(t.style.size===Z.DISPLAY.size||r.alwaysHandleSupSub);return n?Ii:null}else if(r.type===\"operatorname\"){var s=r.alwaysHandleSupSub&&(t.style.size===Z.DISPLAY.size||r.limits);return s?tp:null}else{if(r.type===\"accent\")return dr(r.base)?mh:null;if(r.type===\"horizBrace\"){var o=!e.sub;return o===r.isOver?Zm:null}else return null}else return null};ri({type:\"supsub\",htmlBuilder(i,e){var t=l6(i,e);if(t)return t(i,e);var{base:r,sup:n,sub:s}=i,o=oe(r,e),l,a,h=e.fontMetrics(),c=0,u=0,d=r&&dr(r);if(n){var 
p=e.havingStyle(e.style.sup());l=oe(n,p,e),d||(c=o.height-p.fontMetrics().supDrop*p.sizeMultiplier/e.sizeMultiplier)}if(s){var v=e.havingStyle(e.style.sub());a=oe(s,v,e),d||(u=o.depth+v.fontMetrics().subDrop*v.sizeMultiplier/e.sizeMultiplier)}var y;e.style===Z.DISPLAY?y=h.sup1:e.style.cramped?y=h.sup3:y=h.sup2;var w=e.sizeMultiplier,S=N(.5/h.ptPerEm/w),A=null;if(a){var M=i.base&&i.base.type===\"op\"&&i.base.name&&(i.base.name===\"\\\\oiint\"||i.base.name===\"\\\\oiiint\");(o instanceof at||M)&&(A=N(-o.italic))}var E;if(l&&a){c=Math.max(c,y,l.depth+.25*h.xHeight),u=Math.max(u,h.sub2);var T=h.defaultRuleThickness,B=4*T;if(c-l.depth-(a.height-u)<B){u=B-(c-l.depth)+a.height;var D=.8*h.xHeight-(c-l.depth);D>0&&(c+=D,u-=D)}var V=[{type:\"elem\",elem:a,shift:u,marginRight:S,marginLeft:A},{type:\"elem\",elem:l,shift:-c,marginRight:S}];E=ae({positionType:\"individualShift\",children:V})}else if(a){u=Math.max(u,h.sub1,a.height-.8*h.xHeight);var U=[{type:\"elem\",elem:a,marginLeft:A,marginRight:S}];E=ae({positionType:\"shift\",positionData:u,children:U})}else if(l)c=Math.max(c,y,l.depth+.25*h.xHeight),E=ae({positionType:\"shift\",positionData:-c,children:[{type:\"elem\",elem:l,marginRight:S}]});else throw new Error(\"supsub must have either sup or sub.\");var ie=Z0(o,\"right\")||\"mord\";return z([ie],[o,z([\"msupsub\"],[E])],e)},mathmlBuilder(i,e){var t=!1,r,n;i.base&&i.base.type===\"horizBrace\"&&(n=!!i.sup,n===i.base.isOver&&(t=!0,r=i.base.isOver)),i.base&&(i.base.type===\"op\"||i.base.type===\"operatorname\")&&(i.base.parentIsSupSub=!0);var s=[ue(i.base,e)];i.sub&&s.push(ue(i.sub,e)),i.sup&&s.push(ue(i.sup,e));var o;if(t)o=r?\"mover\":\"munder\";else if(i.sub)if(i.sup){var h=i.base;h&&h.type===\"op\"&&h.limits&&e.style===Z.DISPLAY||h&&h.type===\"operatorname\"&&h.alwaysHandleSupSub&&(e.style===Z.DISPLAY||h.limits)?o=\"munderover\":o=\"msubsup\"}else{var 
a=i.base;a&&a.type===\"op\"&&a.limits&&(e.style===Z.DISPLAY||a.alwaysHandleSupSub)||a&&a.type===\"operatorname\"&&a.alwaysHandleSupSub&&(a.limits||e.style===Z.DISPLAY)?o=\"munder\":o=\"msub\"}else{var l=i.base;l&&l.type===\"op\"&&l.limits&&(e.style===Z.DISPLAY||l.alwaysHandleSupSub)||l&&l.type===\"operatorname\"&&l.alwaysHandleSupSub&&(l.limits||e.style===Z.DISPLAY)?o=\"mover\":o=\"msup\"}return new L(o,s)}});ri({type:\"atom\",htmlBuilder(i,e){return hh(i.text,i.mode,e,[\"m\"+i.family])},mathmlBuilder(i,e){var t=new L(\"mo\",[Dt(i.text,i.mode)]);if(i.family===\"bin\"){var r=fh(i,e);r===\"bold-italic\"&&t.setAttribute(\"mathvariant\",r)}else i.family===\"punct\"?t.setAttribute(\"separator\",\"true\"):(i.family===\"open\"||i.family===\"close\")&&t.setAttribute(\"stretchy\",\"false\");return t}});var ip={mi:\"italic\",mn:\"normal\",mtext:\"normal\"};ri({type:\"mathord\",htmlBuilder(i,e){return qo(i,e,\"mathord\")},mathmlBuilder(i,e){var t=new L(\"mi\",[Dt(i.text,i.mode,e)]),r=fh(i,e)||\"italic\";return r!==ip[t.type]&&t.setAttribute(\"mathvariant\",r),t}});ri({type:\"textord\",htmlBuilder(i,e){return qo(i,e,\"textord\")},mathmlBuilder(i,e){var t=Dt(i.text,i.mode,e),r=fh(i,e)||\"normal\",n;return i.mode===\"text\"?n=new L(\"mtext\",[t]):/[0-9]/.test(i.text)?n=new L(\"mn\",[t]):i.text===\"\\\\prime\"?n=new L(\"mo\",[t]):n=new L(\"mi\",[t]),r!==ip[n.type]&&n.setAttribute(\"mathvariant\",r),n}});var G0={\"\\\\nobreak\":\"nobreak\",\"\\\\allowbreak\":\"allowbreak\"},U0={\" \":{},\"\\\\ \":{},\"~\":{className:\"nobreak\"},\"\\\\space\":{},\"\\\\nobreakspace\":{className:\"nobreak\"}};ri({type:\"spacing\",htmlBuilder(i,e){if(U0.hasOwnProperty(i.text)){var t=U0[i.text].className||\"\";if(i.mode===\"text\"){var r=qo(i,e,\"textord\");return r.classes.push(t),r}else return z([\"mspace\",t],[hh(i.text,i.mode,e)],e)}else{if(G0.hasOwnProperty(i.text))return z([\"mspace\",G0[i.text]],[],e);throw new I('Unknown type of space \"'+i.text+'\"')}},mathmlBuilder(i,e){var 
t;if(U0.hasOwnProperty(i.text))t=new L(\"mtext\",[new Ae(\"\\xA0\")]);else{if(G0.hasOwnProperty(i.text))return new L(\"mspace\");throw new I('Unknown type of space \"'+i.text+'\"')}return t}});var lm=()=>{var i=new L(\"mtd\",[]);return i.setAttribute(\"width\",\"50%\"),i};ri({type:\"tag\",mathmlBuilder(i,e){var t=new L(\"mtable\",[new L(\"mtr\",[lm(),new L(\"mtd\",[Or(i.body,e)]),lm(),new L(\"mtd\",[Or(i.tag,e)])])]);return t.setAttribute(\"width\",\"100%\"),t}});var am={\"\\\\text\":void 0,\"\\\\textrm\":\"textrm\",\"\\\\textsf\":\"textsf\",\"\\\\texttt\":\"texttt\",\"\\\\textnormal\":\"textrm\"},hm={\"\\\\textbf\":\"textbf\",\"\\\\textmd\":\"textmd\"},a6={\"\\\\textit\":\"textit\",\"\\\\textup\":\"textup\"},cm=(i,e)=>{var t=i.font;if(t){if(am[t])return e.withTextFontFamily(am[t]);if(hm[t])return e.withTextFontWeight(hm[t]);if(t===\"\\\\emph\")return e.fontShape===\"textit\"?e.withTextFontShape(\"textup\"):e.withTextFontShape(\"textit\")}else return e;return e.withTextFontShape(a6[t])};q({type:\"text\",names:[\"\\\\text\",\"\\\\textrm\",\"\\\\textsf\",\"\\\\texttt\",\"\\\\textnormal\",\"\\\\textbf\",\"\\\\textmd\",\"\\\\textit\",\"\\\\textup\",\"\\\\emph\"],props:{numArgs:1,argTypes:[\"text\"],allowedInArgument:!0,allowedInText:!0},handler(i,e){var{parser:t,funcName:r}=i,n=e[0];return{type:\"text\",mode:t.mode,body:De(n),font:r}},htmlBuilder(i,e){var t=cm(i,e),r=Pe(i.body,t,!0);return z([\"mord\",\"text\"],r,t)},mathmlBuilder(i,e){var t=cm(i,e);return Or(i.body,t)}});q({type:\"underline\",names:[\"\\\\underline\"],props:{numArgs:1,allowedInText:!0},handler(i,e){var{parser:t}=i;return{type:\"underline\",mode:t.mode,body:e[0]}},htmlBuilder(i,e){var t=oe(i.body,e),r=Oi(\"underline-line\",e),n=e.fontMetrics().defaultRuleThickness,s=ae({positionType:\"top\",positionData:t.height,children:[{type:\"kern\",size:n},{type:\"elem\",elem:r},{type:\"kern\",size:3*n},{type:\"elem\",elem:t}]});return z([\"mord\",\"underline\"],[s],e)},mathmlBuilder(i,e){var t=new L(\"mo\",[new 
Ae(\"\\u203E\")]);t.setAttribute(\"stretchy\",\"true\");var r=new L(\"munder\",[ue(i.body,e),t]);return r.setAttribute(\"accentunder\",\"true\"),r}});q({type:\"vcenter\",names:[\"\\\\vcenter\"],props:{numArgs:1,argTypes:[\"original\"],allowedInText:!1},handler(i,e){var{parser:t}=i;return{type:\"vcenter\",mode:t.mode,body:e[0]}},htmlBuilder(i,e){var t=oe(i.body,e),r=e.fontMetrics().axisHeight,n=.5*(t.height-r-(t.depth+r));return ae({positionType:\"shift\",positionData:n,children:[{type:\"elem\",elem:t}]})},mathmlBuilder(i,e){return new L(\"mpadded\",[ue(i.body,e)],[\"vcenter\"])}});q({type:\"verb\",names:[\"\\\\verb\"],props:{numArgs:0,allowedInText:!0},handler(i,e,t){throw new I(\"\\\\verb ended by end of line instead of matching delimiter\")},htmlBuilder(i,e){for(var t=um(i),r=[],n=e.havingStyle(e.style.text()),s=0;s<t.length;s++){var o=t[s];o===\"~\"&&(o=\"\\\\textasciitilde\"),r.push(Qe(o,\"Typewriter-Regular\",i.mode,n,[\"mord\",\"texttt\"]))}return z([\"mord\",\"text\"].concat(n.sizingClasses(e)),km(r),n)},mathmlBuilder(i,e){var t=new Ae(um(i)),r=new L(\"mtext\",[t]);return r.setAttribute(\"mathvariant\",\"monospace\"),r}});var um=i=>i.body.replace(/ /g,i.star?\"\\u2423\":\"\\xA0\"),Dr=Mm,np=`[ \\r\n\t]`,h6=\"\\\\\\\\[a-zA-Z@]+\",c6=\"\\\\\\\\[^\\uD800-\\uDFFF]\",u6=\"(\"+h6+\")\"+np+\"*\",f6=`\\\\\\\\(\n|[ \\r\t]+\n?)[ \\r\t]*`,ih=\"[\\u0300-\\u036F]\",d6=new RegExp(ih+\"+$\"),m6=\"(\"+np+\"+)|\"+(f6+\"|\")+\"([!-\\\\[\\\\]-\\u2027\\u202A-\\uD7FF\\uF900-\\uFFFF]\"+(ih+\"*\")+\"|[\\uD800-\\uDBFF][\\uDC00-\\uDFFF]\"+(ih+\"*\")+\"|\\\\\\\\verb\\\\*([^]).*?\\\\4|\\\\\\\\verb([^*a-zA-Z]).*?\\\\5\"+(\"|\"+u6)+(\"|\"+c6+\")\"),Po=class{constructor(e,t){this.input=void 0,this.settings=void 0,this.tokenRegex=void 0,this.catcodes=void 0,this.input=e,this.settings=t,this.tokenRegex=new RegExp(m6,\"g\"),this.catcodes={\"%\":14,\"~\":13}}setCatcode(e,t){this.catcodes[e]=t}lex(){var e=this.input,t=this.tokenRegex.lastIndex;if(t===e.length)return new mt(\"EOF\",new 
lt(this,t,t));var r=this.tokenRegex.exec(e);if(r===null||r.index!==t)throw new I(\"Unexpected character: '\"+e[t]+\"'\",new mt(e[t],new lt(this,t,t+1)));var n=r[6]||r[3]||(r[2]?\"\\\\ \":\" \");if(this.catcodes[n]===14){var s=e.indexOf(`\n`,this.tokenRegex.lastIndex);return s===-1?(this.tokenRegex.lastIndex=e.length,this.settings.reportNonstrict(\"commentAtEnd\",\"% comment has no terminating newline; LaTeX would fail because of commenting the end of math mode (e.g. $)\")):this.tokenRegex.lastIndex=s+1,this.lex()}return new mt(n,new lt(this,t,this.tokenRegex.lastIndex))}},nh=class{constructor(e,t){e===void 0&&(e={}),t===void 0&&(t={}),this.current=void 0,this.builtins=void 0,this.undefStack=void 0,this.current=t,this.builtins=e,this.undefStack=[]}beginGroup(){this.undefStack.push({})}endGroup(){if(this.undefStack.length===0)throw new I(\"Unbalanced namespace destruction: attempt to pop global namespace; please report this as a bug\");var e=this.undefStack.pop();for(var t in e)e.hasOwnProperty(t)&&(e[t]==null?delete this.current[t]:this.current[t]=e[t])}endGroups(){for(;this.undefStack.length>0;)this.endGroup()}has(e){return this.current.hasOwnProperty(e)||this.builtins.hasOwnProperty(e)}get(e){return this.current.hasOwnProperty(e)?this.current[e]:this.builtins[e]}set(e,t,r){if(r===void 0&&(r=!1),r){for(var n=0;n<this.undefStack.length;n++)delete this.undefStack[n][e];this.undefStack.length>0&&(this.undefStack[this.undefStack.length-1][e]=t)}else{var s=this.undefStack[this.undefStack.length-1];s&&!s.hasOwnProperty(e)&&(s[e]=this.current[e])}t==null?delete this.current[e]:this.current[e]=t}},p6=jm;b(\"\\\\noexpand\",function(i){var e=i.popToken();return i.isExpandable(e.text)&&(e.noexpand=!0,e.treatAsRelax=!0),{tokens:[e],numArgs:0}});b(\"\\\\expandafter\",function(i){var e=i.popToken();return i.expandOnce(!0),{tokens:[e],numArgs:0}});b(\"\\\\@firstoftwo\",function(i){var e=i.consumeArgs(2);return{tokens:e[0],numArgs:0}});b(\"\\\\@secondoftwo\",function(i){var 
e=i.consumeArgs(2);return{tokens:e[1],numArgs:0}});b(\"\\\\@ifnextchar\",function(i){var e=i.consumeArgs(3);i.consumeSpaces();var t=i.future();return e[0].length===1&&e[0][0].text===t.text?{tokens:e[1],numArgs:0}:{tokens:e[2],numArgs:0}});b(\"\\\\@ifstar\",\"\\\\@ifnextchar *{\\\\@firstoftwo{#1}}\");b(\"\\\\TextOrMath\",function(i){var e=i.consumeArgs(2);return i.mode===\"text\"?{tokens:e[0],numArgs:0}:{tokens:e[1],numArgs:0}});var fm={0:0,1:1,2:2,3:3,4:4,5:5,6:6,7:7,8:8,9:9,a:10,A:10,b:11,B:11,c:12,C:12,d:13,D:13,e:14,E:14,f:15,F:15};b(\"\\\\char\",function(i){var e=i.popToken(),t,r=\"\";if(e.text===\"'\")t=8,e=i.popToken();else if(e.text==='\"')t=16,e=i.popToken();else if(e.text===\"`\")if(e=i.popToken(),e.text[0]===\"\\\\\")r=e.text.charCodeAt(1);else{if(e.text===\"EOF\")throw new I(\"\\\\char` missing argument\");r=e.text.charCodeAt(0)}else t=10;if(t){if(r=fm[e.text],r==null||r>=t)throw new I(\"Invalid base-\"+t+\" digit \"+e.text);for(var n;(n=fm[i.future().text])!=null&&n<t;)r*=t,r+=n,i.popToken()}return\"\\\\@char{\"+r+\"}\"});var xh=(i,e,t,r)=>{var n=i.consumeArg().tokens;if(n.length!==1)throw new I(\"\\\\newcommand's first argument must be a macro name\");var s=n[0].text,o=i.isDefined(s);if(o&&!e)throw new I(\"\\\\newcommand{\"+s+\"} attempting to redefine \"+(s+\"; use \\\\renewcommand\"));if(!o&&!t)throw new I(\"\\\\renewcommand{\"+s+\"} when command \"+s+\" does not yet exist; use \\\\newcommand\");var l=0;if(n=i.consumeArg().tokens,n.length===1&&n[0].text===\"[\"){for(var a=\"\",h=i.expandNextToken();h.text!==\"]\"&&h.text!==\"EOF\";)a+=h.text,h=i.expandNextToken();if(!a.match(/^\\s*[0-9]+\\s*$/))throw new I(\"Invalid number of arguments: \"+a);l=parseInt(a),n=i.consumeArg().tokens}return o&&r||i.macros.set(s,{tokens:n,numArgs:l}),\"\"};b(\"\\\\newcommand\",i=>xh(i,!1,!0,!1));b(\"\\\\renewcommand\",i=>xh(i,!0,!1,!1));b(\"\\\\providecommand\",i=>xh(i,!0,!0,!0));b(\"\\\\message\",i=>{var e=i.consumeArgs(1)[0];return 
console.log(e.reverse().map(t=>t.text).join(\"\")),\"\"});b(\"\\\\errmessage\",i=>{var e=i.consumeArgs(1)[0];return console.error(e.reverse().map(t=>t.text).join(\"\")),\"\"});b(\"\\\\show\",i=>{var e=i.popToken(),t=e.text;return console.log(e,i.macros.get(t),Dr[t],me.math[t],me.text[t]),\"\"});b(\"\\\\bgroup\",\"{\");b(\"\\\\egroup\",\"}\");b(\"~\",\"\\\\nobreakspace\");b(\"\\\\lq\",\"`\");b(\"\\\\rq\",\"'\");b(\"\\\\aa\",\"\\\\r a\");b(\"\\\\AA\",\"\\\\r A\");b(\"\\\\textcopyright\",\"\\\\html@mathml{\\\\textcircled{c}}{\\\\char`\\xA9}\");b(\"\\\\copyright\",\"\\\\TextOrMath{\\\\textcopyright}{\\\\text{\\\\textcopyright}}\");b(\"\\\\textregistered\",\"\\\\html@mathml{\\\\textcircled{\\\\scriptsize R}}{\\\\char`\\xAE}\");b(\"\\u212C\",\"\\\\mathscr{B}\");b(\"\\u2130\",\"\\\\mathscr{E}\");b(\"\\u2131\",\"\\\\mathscr{F}\");b(\"\\u210B\",\"\\\\mathscr{H}\");b(\"\\u2110\",\"\\\\mathscr{I}\");b(\"\\u2112\",\"\\\\mathscr{L}\");b(\"\\u2133\",\"\\\\mathscr{M}\");b(\"\\u211B\",\"\\\\mathscr{R}\");b(\"\\u212D\",\"\\\\mathfrak{C}\");b(\"\\u210C\",\"\\\\mathfrak{H}\");b(\"\\u2128\",\"\\\\mathfrak{Z}\");b(\"\\\\Bbbk\",\"\\\\Bbb{k}\");b(\"\\xB7\",\"\\\\cdotp\");b(\"\\\\llap\",\"\\\\mathllap{\\\\textrm{#1}}\");b(\"\\\\rlap\",\"\\\\mathrlap{\\\\textrm{#1}}\");b(\"\\\\clap\",\"\\\\mathclap{\\\\textrm{#1}}\");b(\"\\\\mathstrut\",\"\\\\vphantom{(}\");b(\"\\\\underbar\",\"\\\\underline{\\\\text{#1}}\");b(\"\\\\not\",'\\\\html@mathml{\\\\mathrel{\\\\mathrlap\\\\@not}\\\\nobreak}{\\\\char\"338}');b(\"\\\\neq\",\"\\\\html@mathml{\\\\mathrel{\\\\not=}}{\\\\mathrel{\\\\char`\\u2260}}\");b(\"\\\\ne\",\"\\\\neq\");b(\"\\u2260\",\"\\\\neq\");b(\"\\\\notin\",\"\\\\html@mathml{\\\\mathrel{{\\\\in}\\\\mathllap{/\\\\mskip1mu}}}{\\\\mathrel{\\\\char`\\u2209}}\");b(\"\\u2209\",\"\\\\notin\");b(\"\\u2258\",\"\\\\html@mathml{\\\\mathrel{=\\\\kern{-1em}\\\\raisebox{0.4em}{$\\\\scriptsize\\\\frown$}}}{\\\\mathrel{\\\\char`\\u2258}}\");b(\"\\u2259\",\"\\\\html@mathml{\\\\stackrel{\\\\tiny\\\\wedge}{=}}{
\\\\mathrel{\\\\char`\\u2258}}\");b(\"\\u225A\",\"\\\\html@mathml{\\\\stackrel{\\\\tiny\\\\vee}{=}}{\\\\mathrel{\\\\char`\\u225A}}\");b(\"\\u225B\",\"\\\\html@mathml{\\\\stackrel{\\\\scriptsize\\\\star}{=}}{\\\\mathrel{\\\\char`\\u225B}}\");b(\"\\u225D\",\"\\\\html@mathml{\\\\stackrel{\\\\tiny\\\\mathrm{def}}{=}}{\\\\mathrel{\\\\char`\\u225D}}\");b(\"\\u225E\",\"\\\\html@mathml{\\\\stackrel{\\\\tiny\\\\mathrm{m}}{=}}{\\\\mathrel{\\\\char`\\u225E}}\");b(\"\\u225F\",\"\\\\html@mathml{\\\\stackrel{\\\\tiny?}{=}}{\\\\mathrel{\\\\char`\\u225F}}\");b(\"\\u27C2\",\"\\\\perp\");b(\"\\u203C\",\"\\\\mathclose{!\\\\mkern-0.8mu!}\");b(\"\\u220C\",\"\\\\notni\");b(\"\\u231C\",\"\\\\ulcorner\");b(\"\\u231D\",\"\\\\urcorner\");b(\"\\u231E\",\"\\\\llcorner\");b(\"\\u231F\",\"\\\\lrcorner\");b(\"\\xA9\",\"\\\\copyright\");b(\"\\xAE\",\"\\\\textregistered\");b(\"\\uFE0F\",\"\\\\textregistered\");b(\"\\\\ulcorner\",'\\\\html@mathml{\\\\@ulcorner}{\\\\mathop{\\\\char\"231c}}');b(\"\\\\urcorner\",'\\\\html@mathml{\\\\@urcorner}{\\\\mathop{\\\\char\"231d}}');b(\"\\\\llcorner\",'\\\\html@mathml{\\\\@llcorner}{\\\\mathop{\\\\char\"231e}}');b(\"\\\\lrcorner\",'\\\\html@mathml{\\\\@lrcorner}{\\\\mathop{\\\\char\"231f}}');b(\"\\\\vdots\",\"{\\\\varvdots\\\\rule{0pt}{15pt}}\");b(\"\\u22EE\",\"\\\\vdots\");b(\"\\\\varGamma\",\"\\\\mathit{\\\\Gamma}\");b(\"\\\\varDelta\",\"\\\\mathit{\\\\Delta}\");b(\"\\\\varTheta\",\"\\\\mathit{\\\\Theta}\");b(\"\\\\varLambda\",\"\\\\mathit{\\\\Lambda}\");b(\"\\\\varXi\",\"\\\\mathit{\\\\Xi}\");b(\"\\\\varPi\",\"\\\\mathit{\\\\Pi}\");b(\"\\\\varSigma\",\"\\\\mathit{\\\\Sigma}\");b(\"\\\\varUpsilon\",\"\\\\mathit{\\\\Upsilon}\");b(\"\\\\varPhi\",\"\\\\mathit{\\\\Phi}\");b(\"\\\\varPsi\",\"\\\\mathit{\\\\Psi}\");b(\"\\\\varOmega\",\"\\\\mathit{\\\\Omega}\");b(\"\\\\substack\",\"\\\\begin{subarray}{c}#1\\\\end{subarray}\");b(\"\\\\colon\",\"\\\\nobreak\\\\mskip2mu\\\\mathpunct{}\\\\mathchoice{\\\\mkern-3mu}{\\\\mkern-3mu}{}{}{:}\\\\mskip6mu\\\\relax\");b(\"\\\\box
ed\",\"\\\\fbox{$\\\\displaystyle{#1}$}\");b(\"\\\\iff\",\"\\\\DOTSB\\\\;\\\\Longleftrightarrow\\\\;\");b(\"\\\\implies\",\"\\\\DOTSB\\\\;\\\\Longrightarrow\\\\;\");b(\"\\\\impliedby\",\"\\\\DOTSB\\\\;\\\\Longleftarrow\\\\;\");b(\"\\\\dddot\",\"{\\\\overset{\\\\raisebox{-0.1ex}{\\\\normalsize ...}}{#1}}\");b(\"\\\\ddddot\",\"{\\\\overset{\\\\raisebox{-0.1ex}{\\\\normalsize ....}}{#1}}\");var dm={\",\":\"\\\\dotsc\",\"\\\\not\":\"\\\\dotsb\",\"+\":\"\\\\dotsb\",\"=\":\"\\\\dotsb\",\"<\":\"\\\\dotsb\",\">\":\"\\\\dotsb\",\"-\":\"\\\\dotsb\",\"*\":\"\\\\dotsb\",\":\":\"\\\\dotsb\",\"\\\\DOTSB\":\"\\\\dotsb\",\"\\\\coprod\":\"\\\\dotsb\",\"\\\\bigvee\":\"\\\\dotsb\",\"\\\\bigwedge\":\"\\\\dotsb\",\"\\\\biguplus\":\"\\\\dotsb\",\"\\\\bigcap\":\"\\\\dotsb\",\"\\\\bigcup\":\"\\\\dotsb\",\"\\\\prod\":\"\\\\dotsb\",\"\\\\sum\":\"\\\\dotsb\",\"\\\\bigotimes\":\"\\\\dotsb\",\"\\\\bigoplus\":\"\\\\dotsb\",\"\\\\bigodot\":\"\\\\dotsb\",\"\\\\bigsqcup\":\"\\\\dotsb\",\"\\\\And\":\"\\\\dotsb\",\"\\\\longrightarrow\":\"\\\\dotsb\",\"\\\\Longrightarrow\":\"\\\\dotsb\",\"\\\\longleftarrow\":\"\\\\dotsb\",\"\\\\Longleftarrow\":\"\\\\dotsb\",\"\\\\longleftrightarrow\":\"\\\\dotsb\",\"\\\\Longleftrightarrow\":\"\\\\dotsb\",\"\\\\mapsto\":\"\\\\dotsb\",\"\\\\longmapsto\":\"\\\\dotsb\",\"\\\\hookrightarrow\":\"\\\\dotsb\",\"\\\\doteq\":\"\\\\dotsb\",\"\\\\mathbin\":\"\\\\dotsb\",\"\\\\mathrel\":\"\\\\dotsb\",\"\\\\relbar\":\"\\\\dotsb\",\"\\\\Relbar\":\"\\\\dotsb\",\"\\\\xrightarrow\":\"\\\\dotsb\",\"\\\\xleftarrow\":\"\\\\dotsb\",\"\\\\DOTSI\":\"\\\\dotsi\",\"\\\\int\":\"\\\\dotsi\",\"\\\\oint\":\"\\\\dotsi\",\"\\\\iint\":\"\\\\dotsi\",\"\\\\iiint\":\"\\\\dotsi\",\"\\\\iiiint\":\"\\\\dotsi\",\"\\\\idotsint\":\"\\\\dotsi\",\"\\\\DOTSX\":\"\\\\dotsx\"},g6=new Set([\"bin\",\"rel\"]);b(\"\\\\dots\",function(i){var e=\"\\\\dotso\",t=i.expandAfterFuture().text;return t in dm?e=dm[t]:(t.slice(0,4)===\"\\\\not\"||t in me.math&&g6.has(me.math[t].group))&&(e=\"\\\\dotsb\"),e});var 
wh={\")\":!0,\"]\":!0,\"\\\\rbrack\":!0,\"\\\\}\":!0,\"\\\\rbrace\":!0,\"\\\\rangle\":!0,\"\\\\rceil\":!0,\"\\\\rfloor\":!0,\"\\\\rgroup\":!0,\"\\\\rmoustache\":!0,\"\\\\right\":!0,\"\\\\bigr\":!0,\"\\\\biggr\":!0,\"\\\\Bigr\":!0,\"\\\\Biggr\":!0,$:!0,\";\":!0,\".\":!0,\",\":!0};b(\"\\\\dotso\",function(i){var e=i.future().text;return e in wh?\"\\\\ldots\\\\,\":\"\\\\ldots\"});b(\"\\\\dotsc\",function(i){var e=i.future().text;return e in wh&&e!==\",\"?\"\\\\ldots\\\\,\":\"\\\\ldots\"});b(\"\\\\cdots\",function(i){var e=i.future().text;return e in wh?\"\\\\@cdots\\\\,\":\"\\\\@cdots\"});b(\"\\\\dotsb\",\"\\\\cdots\");b(\"\\\\dotsm\",\"\\\\cdots\");b(\"\\\\dotsi\",\"\\\\!\\\\cdots\");b(\"\\\\dotsx\",\"\\\\ldots\\\\,\");b(\"\\\\DOTSI\",\"\\\\relax\");b(\"\\\\DOTSB\",\"\\\\relax\");b(\"\\\\DOTSX\",\"\\\\relax\");b(\"\\\\tmspace\",\"\\\\TextOrMath{\\\\kern#1#3}{\\\\mskip#1#2}\\\\relax\");b(\"\\\\,\",\"\\\\tmspace+{3mu}{.1667em}\");b(\"\\\\thinspace\",\"\\\\,\");b(\"\\\\>\",\"\\\\mskip{4mu}\");b(\"\\\\:\",\"\\\\tmspace+{4mu}{.2222em}\");b(\"\\\\medspace\",\"\\\\:\");b(\"\\\\;\",\"\\\\tmspace+{5mu}{.2777em}\");b(\"\\\\thickspace\",\"\\\\;\");b(\"\\\\!\",\"\\\\tmspace-{3mu}{.1667em}\");b(\"\\\\negthinspace\",\"\\\\!\");b(\"\\\\negmedspace\",\"\\\\tmspace-{4mu}{.2222em}\");b(\"\\\\negthickspace\",\"\\\\tmspace-{5mu}{.277em}\");b(\"\\\\enspace\",\"\\\\kern.5em \");b(\"\\\\enskip\",\"\\\\hskip.5em\\\\relax\");b(\"\\\\quad\",\"\\\\hskip1em\\\\relax\");b(\"\\\\qquad\",\"\\\\hskip2em\\\\relax\");b(\"\\\\tag\",\"\\\\@ifstar\\\\tag@literal\\\\tag@paren\");b(\"\\\\tag@paren\",\"\\\\tag@literal{({#1})}\");b(\"\\\\tag@literal\",i=>{if(i.macros.get(\"\\\\df@tag\"))throw new I(\"Multiple \\\\tag\");return\"\\\\gdef\\\\df@tag{\\\\text{#1}}\"});b(\"\\\\bmod\",\"\\\\mathchoice{\\\\mskip1mu}{\\\\mskip1mu}{\\\\mskip5mu}{\\\\mskip5mu}\\\\mathbin{\\\\rm 
mod}\\\\mathchoice{\\\\mskip1mu}{\\\\mskip1mu}{\\\\mskip5mu}{\\\\mskip5mu}\");b(\"\\\\pod\",\"\\\\allowbreak\\\\mathchoice{\\\\mkern18mu}{\\\\mkern8mu}{\\\\mkern8mu}{\\\\mkern8mu}(#1)\");b(\"\\\\pmod\",\"\\\\pod{{\\\\rm mod}\\\\mkern6mu#1}\");b(\"\\\\mod\",\"\\\\allowbreak\\\\mathchoice{\\\\mkern18mu}{\\\\mkern12mu}{\\\\mkern12mu}{\\\\mkern12mu}{\\\\rm mod}\\\\,\\\\,#1\");b(\"\\\\newline\",\"\\\\\\\\\\\\relax\");b(\"\\\\TeX\",\"\\\\textrm{\\\\html@mathml{T\\\\kern-.1667em\\\\raisebox{-.5ex}{E}\\\\kern-.125emX}{TeX}}\");var sp=N(Jt[\"Main-Regular\"][84][1]-.7*Jt[\"Main-Regular\"][65][1]);b(\"\\\\LaTeX\",\"\\\\textrm{\\\\html@mathml{\"+(\"L\\\\kern-.36em\\\\raisebox{\"+sp+\"}{\\\\scriptstyle A}\")+\"\\\\kern-.15em\\\\TeX}{LaTeX}}\");b(\"\\\\KaTeX\",\"\\\\textrm{\\\\html@mathml{\"+(\"K\\\\kern-.17em\\\\raisebox{\"+sp+\"}{\\\\scriptstyle A}\")+\"\\\\kern-.15em\\\\TeX}{KaTeX}}\");b(\"\\\\hspace\",\"\\\\@ifstar\\\\@hspacer\\\\@hspace\");b(\"\\\\@hspace\",\"\\\\hskip #1\\\\relax\");b(\"\\\\@hspacer\",\"\\\\rule{0pt}{0pt}\\\\hskip 
#1\\\\relax\");b(\"\\\\ordinarycolon\",\":\");b(\"\\\\vcentcolon\",\"\\\\mathrel{\\\\mathop\\\\ordinarycolon}\");b(\"\\\\dblcolon\",'\\\\html@mathml{\\\\mathrel{\\\\vcentcolon\\\\mathrel{\\\\mkern-.9mu}\\\\vcentcolon}}{\\\\mathop{\\\\char\"2237}}');b(\"\\\\coloneqq\",'\\\\html@mathml{\\\\mathrel{\\\\vcentcolon\\\\mathrel{\\\\mkern-1.2mu}=}}{\\\\mathop{\\\\char\"2254}}');b(\"\\\\Coloneqq\",'\\\\html@mathml{\\\\mathrel{\\\\dblcolon\\\\mathrel{\\\\mkern-1.2mu}=}}{\\\\mathop{\\\\char\"2237\\\\char\"3d}}');b(\"\\\\coloneq\",'\\\\html@mathml{\\\\mathrel{\\\\vcentcolon\\\\mathrel{\\\\mkern-1.2mu}\\\\mathrel{-}}}{\\\\mathop{\\\\char\"3a\\\\char\"2212}}');b(\"\\\\Coloneq\",'\\\\html@mathml{\\\\mathrel{\\\\dblcolon\\\\mathrel{\\\\mkern-1.2mu}\\\\mathrel{-}}}{\\\\mathop{\\\\char\"2237\\\\char\"2212}}');b(\"\\\\eqqcolon\",'\\\\html@mathml{\\\\mathrel{=\\\\mathrel{\\\\mkern-1.2mu}\\\\vcentcolon}}{\\\\mathop{\\\\char\"2255}}');b(\"\\\\Eqqcolon\",'\\\\html@mathml{\\\\mathrel{=\\\\mathrel{\\\\mkern-1.2mu}\\\\dblcolon}}{\\\\mathop{\\\\char\"3d\\\\char\"2237}}');b(\"\\\\eqcolon\",'\\\\html@mathml{\\\\mathrel{\\\\mathrel{-}\\\\mathrel{\\\\mkern-1.2mu}\\\\vcentcolon}}{\\\\mathop{\\\\char\"2239}}');b(\"\\\\Eqcolon\",'\\\\html@mathml{\\\\mathrel{\\\\mathrel{-}\\\\mathrel{\\\\mkern-1.2mu}\\\\dblcolon}}{\\\\mathop{\\\\char\"2212\\\\char\"2237}}');b(\"\\\\colonapprox\",'\\\\html@mathml{\\\\mathrel{\\\\vcentcolon\\\\mathrel{\\\\mkern-1.2mu}\\\\approx}}{\\\\mathop{\\\\char\"3a\\\\char\"2248}}');b(\"\\\\Colonapprox\",'\\\\html@mathml{\\\\mathrel{\\\\dblcolon\\\\mathrel{\\\\mkern-1.2mu}\\\\approx}}{\\\\mathop{\\\\char\"2237\\\\char\"2248}}');b(\"\\\\colonsim\",'\\\\html@mathml{\\\\mathrel{\\\\vcentcolon\\\\mathrel{\\\\mkern-1.2mu}\\\\sim}}{\\\\mathop{\\\\char\"3a\\\\char\"223c}}');b(\"\\\\Colonsim\",'\\\\html@mathml{\\\\mathrel{\\\\dblcolon\\\\mathrel{\\\\mkern-1.2mu}\\\\sim}}{\\\\mathop{\\\\char\"2237\\\\char\"223c}}');b(\"\\u2237\",\"\\\\dblcolon\");b(\"\\u2239\",\"\\\\eqcolon\");b(\"\\u2254\
",\"\\\\coloneqq\");b(\"\\u2255\",\"\\\\eqqcolon\");b(\"\\u2A74\",\"\\\\Coloneqq\");b(\"\\\\ratio\",\"\\\\vcentcolon\");b(\"\\\\coloncolon\",\"\\\\dblcolon\");b(\"\\\\colonequals\",\"\\\\coloneqq\");b(\"\\\\coloncolonequals\",\"\\\\Coloneqq\");b(\"\\\\equalscolon\",\"\\\\eqqcolon\");b(\"\\\\equalscoloncolon\",\"\\\\Eqqcolon\");b(\"\\\\colonminus\",\"\\\\coloneq\");b(\"\\\\coloncolonminus\",\"\\\\Coloneq\");b(\"\\\\minuscolon\",\"\\\\eqcolon\");b(\"\\\\minuscoloncolon\",\"\\\\Eqcolon\");b(\"\\\\coloncolonapprox\",\"\\\\Colonapprox\");b(\"\\\\coloncolonsim\",\"\\\\Colonsim\");b(\"\\\\simcolon\",\"\\\\mathrel{\\\\sim\\\\mathrel{\\\\mkern-1.2mu}\\\\vcentcolon}\");b(\"\\\\simcoloncolon\",\"\\\\mathrel{\\\\sim\\\\mathrel{\\\\mkern-1.2mu}\\\\dblcolon}\");b(\"\\\\approxcolon\",\"\\\\mathrel{\\\\approx\\\\mathrel{\\\\mkern-1.2mu}\\\\vcentcolon}\");b(\"\\\\approxcoloncolon\",\"\\\\mathrel{\\\\approx\\\\mathrel{\\\\mkern-1.2mu}\\\\dblcolon}\");b(\"\\\\notni\",\"\\\\html@mathml{\\\\not\\\\ni}{\\\\mathrel{\\\\char`\\u220C}}\");b(\"\\\\limsup\",\"\\\\DOTSB\\\\operatorname*{lim\\\\,sup}\");b(\"\\\\liminf\",\"\\\\DOTSB\\\\operatorname*{lim\\\\,inf}\");b(\"\\\\injlim\",\"\\\\DOTSB\\\\operatorname*{inj\\\\,lim}\");b(\"\\\\projlim\",\"\\\\DOTSB\\\\operatorname*{proj\\\\,lim}\");b(\"\\\\varlimsup\",\"\\\\DOTSB\\\\operatorname*{\\\\overline{lim}}\");b(\"\\\\varliminf\",\"\\\\DOTSB\\\\operatorname*{\\\\underline{lim}}\");b(\"\\\\varinjlim\",\"\\\\DOTSB\\\\operatorname*{\\\\underrightarrow{lim}}\");b(\"\\\\varprojlim\",\"\\\\DOTSB\\\\operatorname*{\\\\underleftarrow{lim}}\");b(\"\\\\gvertneqq\",\"\\\\html@mathml{\\\\@gvertneqq}{\\u2269}\");b(\"\\\\lvertneqq\",\"\\\\html@mathml{\\\\@lvertneqq}{\\u2268}\");b(\"\\\\ngeqq\",\"\\\\html@mathml{\\\\@ngeqq}{\\u2271}\");b(\"\\\\ngeqslant\",\"\\\\html@mathml{\\\\@ngeqslant}{\\u2271}\");b(\"\\\\nleqq\",\"\\\\html@mathml{\\\\@nleqq}{\\u2270}\");b(\"\\\\nleqslant\",\"\\\\html@mathml{\\\\@nleqslant}{\\u2270}\");b(\"\\\\nshortmid\",\"\\\\html@mathml{\\\
\@nshortmid}{\\u2224}\");b(\"\\\\nshortparallel\",\"\\\\html@mathml{\\\\@nshortparallel}{\\u2226}\");b(\"\\\\nsubseteqq\",\"\\\\html@mathml{\\\\@nsubseteqq}{\\u2288}\");b(\"\\\\nsupseteqq\",\"\\\\html@mathml{\\\\@nsupseteqq}{\\u2289}\");b(\"\\\\varsubsetneq\",\"\\\\html@mathml{\\\\@varsubsetneq}{\\u228A}\");b(\"\\\\varsubsetneqq\",\"\\\\html@mathml{\\\\@varsubsetneqq}{\\u2ACB}\");b(\"\\\\varsupsetneq\",\"\\\\html@mathml{\\\\@varsupsetneq}{\\u228B}\");b(\"\\\\varsupsetneqq\",\"\\\\html@mathml{\\\\@varsupsetneqq}{\\u2ACC}\");b(\"\\\\imath\",\"\\\\html@mathml{\\\\@imath}{\\u0131}\");b(\"\\\\jmath\",\"\\\\html@mathml{\\\\@jmath}{\\u0237}\");b(\"\\\\llbracket\",\"\\\\html@mathml{\\\\mathopen{[\\\\mkern-3.2mu[}}{\\\\mathopen{\\\\char`\\u27E6}}\");b(\"\\\\rrbracket\",\"\\\\html@mathml{\\\\mathclose{]\\\\mkern-3.2mu]}}{\\\\mathclose{\\\\char`\\u27E7}}\");b(\"\\u27E6\",\"\\\\llbracket\");b(\"\\u27E7\",\"\\\\rrbracket\");b(\"\\\\lBrace\",\"\\\\html@mathml{\\\\mathopen{\\\\{\\\\mkern-3.2mu[}}{\\\\mathopen{\\\\char`\\u2983}}\");b(\"\\\\rBrace\",\"\\\\html@mathml{\\\\mathclose{]\\\\mkern-3.2mu\\\\}}}{\\\\mathclose{\\\\char`\\u2984}}\");b(\"\\u2983\",\"\\\\lBrace\");b(\"\\u2984\",\"\\\\rBrace\");b(\"\\\\minuso\",\"\\\\mathbin{\\\\html@mathml{{\\\\mathrlap{\\\\mathchoice{\\\\kern{0.145em}}{\\\\kern{0.145em}}{\\\\kern{0.1015em}}{\\\\kern{0.0725em}}\\\\circ}{-}}}{\\\\char`\\u29B5}}\");b(\"\\u29B5\",\"\\\\minuso\");b(\"\\\\darr\",\"\\\\downarrow\");b(\"\\\\dArr\",\"\\\\Downarrow\");b(\"\\\\Darr\",\"\\\\Downarrow\");b(\"\\\\lang\",\"\\\\langle\");b(\"\\\\rang\",\"\\\\rangle\");b(\"\\\\uarr\",\"\\\\uparrow\");b(\"\\\\uArr\",\"\\\\Uparrow\");b(\"\\\\Uarr\",\"\\\\Uparrow\");b(\"\\\\N\",\"\\\\mathbb{N}\");b(\"\\\\R\",\"\\\\mathbb{R}\");b(\"\\\\Z\",\"\\\\mathbb{Z}\");b(\"\\\\alef\",\"\\\\aleph\");b(\"\\\\alefsym\",\"\\\\aleph\");b(\"\\\\Alpha\",\"\\\\mathrm{A}\");b(\"\\\\Beta\",\"\\\\mathrm{B}\");b(\"\\\\bull\",\"\\\\bullet\");b(\"\\\\Chi\",\"\\\\mathrm{X}\");b(\"\\\\clubs\",\"\\\\clubsuit
\");b(\"\\\\cnums\",\"\\\\mathbb{C}\");b(\"\\\\Complex\",\"\\\\mathbb{C}\");b(\"\\\\Dagger\",\"\\\\ddagger\");b(\"\\\\diamonds\",\"\\\\diamondsuit\");b(\"\\\\empty\",\"\\\\emptyset\");b(\"\\\\Epsilon\",\"\\\\mathrm{E}\");b(\"\\\\Eta\",\"\\\\mathrm{H}\");b(\"\\\\exist\",\"\\\\exists\");b(\"\\\\harr\",\"\\\\leftrightarrow\");b(\"\\\\hArr\",\"\\\\Leftrightarrow\");b(\"\\\\Harr\",\"\\\\Leftrightarrow\");b(\"\\\\hearts\",\"\\\\heartsuit\");b(\"\\\\image\",\"\\\\Im\");b(\"\\\\infin\",\"\\\\infty\");b(\"\\\\Iota\",\"\\\\mathrm{I}\");b(\"\\\\isin\",\"\\\\in\");b(\"\\\\Kappa\",\"\\\\mathrm{K}\");b(\"\\\\larr\",\"\\\\leftarrow\");b(\"\\\\lArr\",\"\\\\Leftarrow\");b(\"\\\\Larr\",\"\\\\Leftarrow\");b(\"\\\\lrarr\",\"\\\\leftrightarrow\");b(\"\\\\lrArr\",\"\\\\Leftrightarrow\");b(\"\\\\Lrarr\",\"\\\\Leftrightarrow\");b(\"\\\\Mu\",\"\\\\mathrm{M}\");b(\"\\\\natnums\",\"\\\\mathbb{N}\");b(\"\\\\Nu\",\"\\\\mathrm{N}\");b(\"\\\\Omicron\",\"\\\\mathrm{O}\");b(\"\\\\plusmn\",\"\\\\pm\");b(\"\\\\rarr\",\"\\\\rightarrow\");b(\"\\\\rArr\",\"\\\\Rightarrow\");b(\"\\\\Rarr\",\"\\\\Rightarrow\");b(\"\\\\real\",\"\\\\Re\");b(\"\\\\reals\",\"\\\\mathbb{R}\");b(\"\\\\Reals\",\"\\\\mathbb{R}\");b(\"\\\\Rho\",\"\\\\mathrm{P}\");b(\"\\\\sdot\",\"\\\\cdot\");b(\"\\\\sect\",\"\\\\S\");b(\"\\\\spades\",\"\\\\spadesuit\");b(\"\\\\sub\",\"\\\\subset\");b(\"\\\\sube\",\"\\\\subseteq\");b(\"\\\\supe\",\"\\\\supseteq\");b(\"\\\\Tau\",\"\\\\mathrm{T}\");b(\"\\\\thetasym\",\"\\\\vartheta\");b(\"\\\\weierp\",\"\\\\wp\");b(\"\\\\Zeta\",\"\\\\mathrm{Z}\");b(\"\\\\argmin\",\"\\\\DOTSB\\\\operatorname*{arg\\\\,min}\");b(\"\\\\argmax\",\"\\\\DOTSB\\\\operatorname*{arg\\\\,max}\");b(\"\\\\plim\",\"\\\\DOTSB\\\\mathop{\\\\operatorname{plim}}\\\\limits\");b(\"\\\\bra\",\"\\\\mathinner{\\\\langle{#1}|}\");b(\"\\\\ket\",\"\\\\mathinner{|{#1}\\\\rangle}\");b(\"\\\\braket\",\"\\\\mathinner{\\\\langle{#1}\\\\rangle}\");b(\"\\\\Bra\",\"\\\\left\\\\langle#1\\\\right|\");b(\"\\\\Ket\",\"\\\\left|#1\\\\right\\\\rangle\");va
r op=i=>e=>{var t=e.consumeArg().tokens,r=e.consumeArg().tokens,n=e.consumeArg().tokens,s=e.consumeArg().tokens,o=e.macros.get(\"|\"),l=e.macros.get(\"\\\\|\");e.macros.beginGroup();var a=u=>d=>{i&&(d.macros.set(\"|\",o),n.length&&d.macros.set(\"\\\\|\",l));var p=u;if(!u&&n.length){var v=d.future();v.text===\"|\"&&(d.popToken(),p=!0)}return{tokens:p?n:r,numArgs:0}};e.macros.set(\"|\",a(!1)),n.length&&e.macros.set(\"\\\\|\",a(!0));var h=e.consumeArg().tokens,c=e.expandTokens([...s,...h,...t]);return e.macros.endGroup(),{tokens:c.reverse(),numArgs:0}};b(\"\\\\bra@ket\",op(!1));b(\"\\\\bra@set\",op(!0));b(\"\\\\Braket\",\"\\\\bra@ket{\\\\left\\\\langle}{\\\\,\\\\middle\\\\vert\\\\,}{\\\\,\\\\middle\\\\vert\\\\,}{\\\\right\\\\rangle}\");b(\"\\\\Set\",\"\\\\bra@set{\\\\left\\\\{\\\\:}{\\\\;\\\\middle\\\\vert\\\\;}{\\\\;\\\\middle\\\\Vert\\\\;}{\\\\:\\\\right\\\\}}\");b(\"\\\\set\",\"\\\\bra@set{\\\\{\\\\,}{\\\\mid}{}{\\\\,\\\\}}\");b(\"\\\\angln\",\"{\\\\angl n}\");b(\"\\\\blue\",\"\\\\textcolor{##6495ed}{#1}\");b(\"\\\\orange\",\"\\\\textcolor{##ffa500}{#1}\");b(\"\\\\pink\",\"\\\\textcolor{##ff00af}{#1}\");b(\"\\\\red\",\"\\\\textcolor{##df0030}{#1}\");b(\"\\\\green\",\"\\\\textcolor{##28ae7b}{#1}\");b(\"\\\\gray\",\"\\\\textcolor{gray}{#1}\");b(\"\\\\purple\",\"\\\\textcolor{##9d38bd}{#1}\");b(\"\\\\blueA\",\"\\\\textcolor{##ccfaff}{#1}\");b(\"\\\\blueB\",\"\\\\textcolor{##80f6ff}{#1}\");b(\"\\\\blueC\",\"\\\\textcolor{##63d9ea}{#1}\");b(\"\\\\blueD\",\"\\\\textcolor{##11accd}{#1}\");b(\"\\\\blueE\",\"\\\\textcolor{##0c7f99}{#1}\");b(\"\\\\tealA\",\"\\\\textcolor{##94fff5}{#1}\");b(\"\\\\tealB\",\"\\\\textcolor{##26edd5}{#1}\");b(\"\\\\tealC\",\"\\\\textcolor{##01d1c1}{#1}\");b(\"\\\\tealD\",\"\\\\textcolor{##01a995}{#1}\");b(\"\\\\tealE\",\"\\\\textcolor{##208170}{#1}\");b(\"\\\\greenA\",\"\\\\textcolor{##b6ffb0}{#1}\");b(\"\\\\greenB\",\"\\\\textcolor{##8af281}{#1}\");b(\"\\\\greenC\",\"\\\\textcolor{##74cf70}{#1}\");b(\"\\\\greenD\",\"\\\\textcolor{##1fab54}{#1}\")
;b(\"\\\\greenE\",\"\\\\textcolor{##0d923f}{#1}\");b(\"\\\\goldA\",\"\\\\textcolor{##ffd0a9}{#1}\");b(\"\\\\goldB\",\"\\\\textcolor{##ffbb71}{#1}\");b(\"\\\\goldC\",\"\\\\textcolor{##ff9c39}{#1}\");b(\"\\\\goldD\",\"\\\\textcolor{##e07d10}{#1}\");b(\"\\\\goldE\",\"\\\\textcolor{##a75a05}{#1}\");b(\"\\\\redA\",\"\\\\textcolor{##fca9a9}{#1}\");b(\"\\\\redB\",\"\\\\textcolor{##ff8482}{#1}\");b(\"\\\\redC\",\"\\\\textcolor{##f9685d}{#1}\");b(\"\\\\redD\",\"\\\\textcolor{##e84d39}{#1}\");b(\"\\\\redE\",\"\\\\textcolor{##bc2612}{#1}\");b(\"\\\\maroonA\",\"\\\\textcolor{##ffbde0}{#1}\");b(\"\\\\maroonB\",\"\\\\textcolor{##ff92c6}{#1}\");b(\"\\\\maroonC\",\"\\\\textcolor{##ed5fa6}{#1}\");b(\"\\\\maroonD\",\"\\\\textcolor{##ca337c}{#1}\");b(\"\\\\maroonE\",\"\\\\textcolor{##9e034e}{#1}\");b(\"\\\\purpleA\",\"\\\\textcolor{##ddd7ff}{#1}\");b(\"\\\\purpleB\",\"\\\\textcolor{##c6b9fc}{#1}\");b(\"\\\\purpleC\",\"\\\\textcolor{##aa87ff}{#1}\");b(\"\\\\purpleD\",\"\\\\textcolor{##7854ab}{#1}\");b(\"\\\\purpleE\",\"\\\\textcolor{##543b78}{#1}\");b(\"\\\\mintA\",\"\\\\textcolor{##f5f9e8}{#1}\");b(\"\\\\mintB\",\"\\\\textcolor{##edf2df}{#1}\");b(\"\\\\mintC\",\"\\\\textcolor{##e0e5cc}{#1}\");b(\"\\\\grayA\",\"\\\\textcolor{##f6f7f7}{#1}\");b(\"\\\\grayB\",\"\\\\textcolor{##f0f1f2}{#1}\");b(\"\\\\grayC\",\"\\\\textcolor{##e3e5e6}{#1}\");b(\"\\\\grayD\",\"\\\\textcolor{##d6d8da}{#1}\");b(\"\\\\grayE\",\"\\\\textcolor{##babec2}{#1}\");b(\"\\\\grayF\",\"\\\\textcolor{##888d93}{#1}\");b(\"\\\\grayG\",\"\\\\textcolor{##626569}{#1}\");b(\"\\\\grayH\",\"\\\\textcolor{##3b3e40}{#1}\");b(\"\\\\grayI\",\"\\\\textcolor{##21242c}{#1}\");b(\"\\\\kaBlue\",\"\\\\textcolor{##314453}{#1}\");b(\"\\\\kaGreen\",\"\\\\textcolor{##71B307}{#1}\");var lp={\"^\":!0,_:!0,\"\\\\limits\":!0,\"\\\\nolimits\":!0},sh=class{constructor(e,t,r){this.settings=void 0,this.expansionCount=void 0,this.lexer=void 0,this.macros=void 0,this.stack=void 0,this.mode=void 
0,this.settings=t,this.expansionCount=0,this.feed(e),this.macros=new nh(p6,t.macros),this.mode=r,this.stack=[]}feed(e){this.lexer=new Po(e,this.settings)}switchMode(e){this.mode=e}beginGroup(){this.macros.beginGroup()}endGroup(){this.macros.endGroup()}endGroups(){this.macros.endGroups()}future(){return this.stack.length===0&&this.pushToken(this.lexer.lex()),this.stack[this.stack.length-1]}popToken(){return this.future(),this.stack.pop()}pushToken(e){this.stack.push(e)}pushTokens(e){this.stack.push(...e)}scanArgument(e){var t,r,n;if(e){if(this.consumeSpaces(),this.future().text!==\"[\")return null;t=this.popToken(),{tokens:n,end:r}=this.consumeArg([\"]\"])}else({tokens:n,start:t,end:r}=this.consumeArg());return this.pushToken(new mt(\"EOF\",r.loc)),this.pushTokens(n),new mt(\"\",lt.range(t,r))}consumeSpaces(){for(;;){var e=this.future();if(e.text===\" \")this.stack.pop();else break}}consumeArg(e){var t=[],r=e&&e.length>0;r||this.consumeSpaces();var n=this.future(),s,o=0,l=0;do{if(s=this.popToken(),t.push(s),s.text===\"{\")++o;else if(s.text===\"}\"){if(--o,o===-1)throw new I(\"Extra }\",s)}else if(s.text===\"EOF\")throw new I(\"Unexpected end of input in a macro argument, expected '\"+(e&&r?e[l]:\"}\")+\"'\",s);if(e&&r)if((o===0||o===1&&e[l]===\"{\")&&s.text===e[l]){if(++l,l===e.length){t.splice(-l,l);break}}else l=0}while(o!==0||r);return n.text===\"{\"&&t[t.length-1].text===\"}\"&&(t.pop(),t.shift()),t.reverse(),{tokens:t,start:n,end:s}}consumeArgs(e,t){if(t){if(t.length!==e+1)throw new I(\"The length of delimiters doesn't match the number of args!\");for(var r=t[0],n=0;n<r.length;n++){var s=this.popToken();if(r[n]!==s.text)throw new I(\"Use of the macro doesn't match its definition\",s)}}for(var o=[],l=0;l<e;l++)o.push(this.consumeArg(t&&t[l+1]).tokens);return o}countExpansion(e){if(this.expansionCount+=e,this.expansionCount>this.settings.maxExpand)throw new I(\"Too many expansions: infinite loop or need to increase maxExpand setting\")}expandOnce(e){var 
t=this.popToken(),r=t.text,n=t.noexpand?null:this._getExpansion(r);if(n==null||e&&n.unexpandable){if(e&&n==null&&r[0]===\"\\\\\"&&!this.isDefined(r))throw new I(\"Undefined control sequence: \"+r);return this.pushToken(t),!1}this.countExpansion(1);var s=n.tokens,o=this.consumeArgs(n.numArgs,n.delimiters);if(n.numArgs){s=s.slice();for(var l=s.length-1;l>=0;--l){var a=s[l];if(a.text===\"#\"){if(l===0)throw new I(\"Incomplete placeholder at end of macro body\",a);if(a=s[--l],a.text===\"#\")s.splice(l+1,1);else if(/^[1-9]$/.test(a.text))s.splice(l,2,...o[+a.text-1]);else throw new I(\"Not a valid argument number\",a)}}}return this.pushTokens(s),s.length}expandAfterFuture(){return this.expandOnce(),this.future()}expandNextToken(){for(;;)if(this.expandOnce()===!1){var e=this.stack.pop();return e.treatAsRelax&&(e.text=\"\\\\relax\"),e}throw new Error}expandMacro(e){return this.macros.has(e)?this.expandTokens([new mt(e)]):void 0}expandTokens(e){var t=[],r=this.stack.length;for(this.pushTokens(e);this.stack.length>r;)if(this.expandOnce(!0)===!1){var n=this.stack.pop();n.treatAsRelax&&(n.noexpand=!1,n.treatAsRelax=!1),t.push(n)}return this.countExpansion(t.length),t}expandMacroAsText(e){var t=this.expandMacro(e);return t&&t.map(r=>r.text).join(\"\")}_getExpansion(e){var t=this.macros.get(e);if(t==null)return t;if(e.length===1){var r=this.lexer.catcodes[e];if(r!=null&&r!==13)return}var n=typeof t==\"function\"?t(this):t;if(typeof n==\"string\"){var s=0;if(n.includes(\"#\"))for(var o=n.replace(/##/g,\"\");o.includes(\"#\"+(s+1));)++s;for(var l=new Po(n,this.settings),a=[],h=l.lex();h.text!==\"EOF\";)a.push(h),h=l.lex();a.reverse();var c={tokens:a,numArgs:s};return c}return n}isDefined(e){return this.macros.has(e)||Dr.hasOwnProperty(e)||me.math.hasOwnProperty(e)||me.text.hasOwnProperty(e)||lp.hasOwnProperty(e)}isExpandable(e){var t=this.macros.get(e);return t!=null?typeof t==\"string\"||typeof 
t==\"function\"||!t.unexpandable:Dr.hasOwnProperty(e)&&!Dr[e].primitive}},mm=/^[₊₋₌₍₎₀₁₂₃₄₅₆₇₈₉ₐₑₕᵢⱼₖₗₘₙₒₚᵣₛₜᵤᵥₓᵦᵧᵨᵩᵪ]/,Mo=Object.freeze({\"\\u208A\":\"+\",\"\\u208B\":\"-\",\"\\u208C\":\"=\",\"\\u208D\":\"(\",\"\\u208E\":\")\",\"\\u2080\":\"0\",\"\\u2081\":\"1\",\"\\u2082\":\"2\",\"\\u2083\":\"3\",\"\\u2084\":\"4\",\"\\u2085\":\"5\",\"\\u2086\":\"6\",\"\\u2087\":\"7\",\"\\u2088\":\"8\",\"\\u2089\":\"9\",\"\\u2090\":\"a\",\"\\u2091\":\"e\",\"\\u2095\":\"h\",\"\\u1D62\":\"i\",\"\\u2C7C\":\"j\",\"\\u2096\":\"k\",\"\\u2097\":\"l\",\"\\u2098\":\"m\",\"\\u2099\":\"n\",\"\\u2092\":\"o\",\"\\u209A\":\"p\",\"\\u1D63\":\"r\",\"\\u209B\":\"s\",\"\\u209C\":\"t\",\"\\u1D64\":\"u\",\"\\u1D65\":\"v\",\"\\u2093\":\"x\",\"\\u1D66\":\"\\u03B2\",\"\\u1D67\":\"\\u03B3\",\"\\u1D68\":\"\\u03C1\",\"\\u1D69\":\"\\u03D5\",\"\\u1D6A\":\"\\u03C7\",\"\\u207A\":\"+\",\"\\u207B\":\"-\",\"\\u207C\":\"=\",\"\\u207D\":\"(\",\"\\u207E\":\")\",\"\\u2070\":\"0\",\"\\xB9\":\"1\",\"\\xB2\":\"2\",\"\\xB3\":\"3\",\"\\u2074\":\"4\",\"\\u2075\":\"5\",\"\\u2076\":\"6\",\"\\u2077\":\"7\",\"\\u2078\":\"8\",\"\\u2079\":\"9\",\"\\u1D2C\":\"A\",\"\\u1D2E\":\"B\",\"\\u1D30\":\"D\",\"\\u1D31\":\"E\",\"\\u1D33\":\"G\",\"\\u1D34\":\"H\",\"\\u1D35\":\"I\",\"\\u1D36\":\"J\",\"\\u1D37\":\"K\",\"\\u1D38\":\"L\",\"\\u1D39\":\"M\",\"\\u1D3A\":\"N\",\"\\u1D3C\":\"O\",\"\\u1D3E\":\"P\",\"\\u1D3F\":\"R\",\"\\u1D40\":\"T\",\"\\u1D41\":\"U\",\"\\u2C7D\":\"V\",\"\\u1D42\":\"W\",\"\\u1D43\":\"a\",\"\\u1D47\":\"b\",\"\\u1D9C\":\"c\",\"\\u1D48\":\"d\",\"\\u1D49\":\"e\",\"\\u1DA0\":\"f\",\"\\u1D4D\":\"g\",\\u02B0:\"h\",\"\\u2071\":\"i\",\\u02B2:\"j\",\"\\u1D4F\":\"k\",\\u02E1:\"l\",\"\\u1D50\":\"m\",\\u207F:\"n\",\"\\u1D52\":\"o\",\"\\u1D56\":\"p\",\\u02B3:\"r\",\\u02E2:\"s\",\"\\u1D57\":\"t\",\"\\u1D58\":\"u\",\"\\u1D5B\":\"v\",\\u02B7:\"w\",\\u02E3:\"x\",\\u02B8:\"y\",\"\\u1DBB\":\"z\",\"\\u1D5D\":\"\\u03B2\",\"\\u1D5E\":\"\\u03B3\",\"\\u1D5F\":\"\\u03B4\",\"\\u1D60\":\"\\u03D5\",\"\\u1D61\":\"\\u03C7\",\"\\u1DBF\":\"\\u03B8\"}),
K0={\"\\u0301\":{text:\"\\\\'\",math:\"\\\\acute\"},\"\\u0300\":{text:\"\\\\`\",math:\"\\\\grave\"},\"\\u0308\":{text:'\\\\\"',math:\"\\\\ddot\"},\"\\u0303\":{text:\"\\\\~\",math:\"\\\\tilde\"},\"\\u0304\":{text:\"\\\\=\",math:\"\\\\bar\"},\"\\u0306\":{text:\"\\\\u\",math:\"\\\\breve\"},\"\\u030C\":{text:\"\\\\v\",math:\"\\\\check\"},\"\\u0302\":{text:\"\\\\^\",math:\"\\\\hat\"},\"\\u0307\":{text:\"\\\\.\",math:\"\\\\dot\"},\"\\u030A\":{text:\"\\\\r\",math:\"\\\\mathring\"},\"\\u030B\":{text:\"\\\\H\"},\"\\u0327\":{text:\"\\\\c\"}},pm={\\u00E1:\"a\\u0301\",\\u00E0:\"a\\u0300\",\\u00E4:\"a\\u0308\",\\u01DF:\"a\\u0308\\u0304\",\\u00E3:\"a\\u0303\",\\u0101:\"a\\u0304\",\\u0103:\"a\\u0306\",\\u1EAF:\"a\\u0306\\u0301\",\\u1EB1:\"a\\u0306\\u0300\",\\u1EB5:\"a\\u0306\\u0303\",\\u01CE:\"a\\u030C\",\\u00E2:\"a\\u0302\",\\u1EA5:\"a\\u0302\\u0301\",\\u1EA7:\"a\\u0302\\u0300\",\\u1EAB:\"a\\u0302\\u0303\",\\u0227:\"a\\u0307\",\\u01E1:\"a\\u0307\\u0304\",\\u00E5:\"a\\u030A\",\\u01FB:\"a\\u030A\\u0301\",\\u1E03:\"b\\u0307\",\\u0107:\"c\\u0301\",\\u1E09:\"c\\u0327\\u0301\",\\u010D:\"c\\u030C\",\\u0109:\"c\\u0302\",\\u010B:\"c\\u0307\",\\u00E7:\"c\\u0327\",\\u010F:\"d\\u030C\",\\u1E0B:\"d\\u0307\",\\u1E11:\"d\\u0327\",\\u00E9:\"e\\u0301\",\\u00E8:\"e\\u0300\",\\u00EB:\"e\\u0308\",\\u1EBD:\"e\\u0303\",\\u0113:\"e\\u0304\",\\u1E17:\"e\\u0304\\u0301\",\\u1E15:\"e\\u0304\\u0300\",\\u0115:\"e\\u0306\",\\u1E1D:\"e\\u0327\\u0306\",\\u011B:\"e\\u030C\",\\u00EA:\"e\\u0302\",\\u1EBF:\"e\\u0302\\u0301\",\\u1EC1:\"e\\u0302\\u0300\",\\u1EC5:\"e\\u0302\\u0303\",\\u0117:\"e\\u0307\",\\u0229:\"e\\u0327\",\\u1E1F:\"f\\u0307\",\\u01F5:\"g\\u0301\",\\u1E21:\"g\\u0304\",\\u011F:\"g\\u0306\",\\u01E7:\"g\\u030C\",\\u011D:\"g\\u0302\",\\u0121:\"g\\u0307\",\\u0123:\"g\\u0327\",\\u1E27:\"h\\u0308\",\\u021F:\"h\\u030C\",\\u0125:\"h\\u0302\",\\u1E23:\"h\\u0307\",\\u1E29:\"h\\u0327\",\\u00ED:\"i\\u0301\",\\u00EC:\"i\\u0300\",\\u00EF:\"i\\u0308\",\\u1E2F:\"i\\u0308\\u0301\",\\u0129:\"i\\u0303\",\\u012B:\"i\\u03
04\",\\u012D:\"i\\u0306\",\\u01D0:\"i\\u030C\",\\u00EE:\"i\\u0302\",\\u01F0:\"j\\u030C\",\\u0135:\"j\\u0302\",\\u1E31:\"k\\u0301\",\\u01E9:\"k\\u030C\",\\u0137:\"k\\u0327\",\\u013A:\"l\\u0301\",\\u013E:\"l\\u030C\",\\u013C:\"l\\u0327\",\\u1E3F:\"m\\u0301\",\\u1E41:\"m\\u0307\",\\u0144:\"n\\u0301\",\\u01F9:\"n\\u0300\",\\u00F1:\"n\\u0303\",\\u0148:\"n\\u030C\",\\u1E45:\"n\\u0307\",\\u0146:\"n\\u0327\",\\u00F3:\"o\\u0301\",\\u00F2:\"o\\u0300\",\\u00F6:\"o\\u0308\",\\u022B:\"o\\u0308\\u0304\",\\u00F5:\"o\\u0303\",\\u1E4D:\"o\\u0303\\u0301\",\\u1E4F:\"o\\u0303\\u0308\",\\u022D:\"o\\u0303\\u0304\",\\u014D:\"o\\u0304\",\\u1E53:\"o\\u0304\\u0301\",\\u1E51:\"o\\u0304\\u0300\",\\u014F:\"o\\u0306\",\\u01D2:\"o\\u030C\",\\u00F4:\"o\\u0302\",\\u1ED1:\"o\\u0302\\u0301\",\\u1ED3:\"o\\u0302\\u0300\",\\u1ED7:\"o\\u0302\\u0303\",\\u022F:\"o\\u0307\",\\u0231:\"o\\u0307\\u0304\",\\u0151:\"o\\u030B\",\\u1E55:\"p\\u0301\",\\u1E57:\"p\\u0307\",\\u0155:\"r\\u0301\",\\u0159:\"r\\u030C\",\\u1E59:\"r\\u0307\",\\u0157:\"r\\u0327\",\\u015B:\"s\\u0301\",\\u1E65:\"s\\u0301\\u0307\",\\u0161:\"s\\u030C\",\\u1E67:\"s\\u030C\\u0307\",\\u015D:\"s\\u0302\",\\u1E61:\"s\\u0307\",\\u015F:\"s\\u0327\",\\u1E97:\"t\\u0308\",\\u0165:\"t\\u030C\",\\u1E6B:\"t\\u0307\",\\u0163:\"t\\u0327\",\\u00FA:\"u\\u0301\",\\u00F9:\"u\\u0300\",\\u00FC:\"u\\u0308\",\\u01D8:\"u\\u0308\\u0301\",\\u01DC:\"u\\u0308\\u0300\",\\u01D6:\"u\\u0308\\u0304\",\\u01DA:\"u\\u0308\\u030C\",\\u0169:\"u\\u0303\",\\u1E79:\"u\\u0303\\u0301\",\\u016B:\"u\\u0304\",\\u1E7B:\"u\\u0304\\u0308\",\\u016D:\"u\\u0306\",\\u01D4:\"u\\u030C\",\\u00FB:\"u\\u0302\",\\u016F:\"u\\u030A\",\\u0171:\"u\\u030B\",\\u1E7D:\"v\\u0303\",\\u1E83:\"w\\u0301\",\\u1E81:\"w\\u0300\",\\u1E85:\"w\\u0308\",\\u0175:\"w\\u0302\",\\u1E87:\"w\\u0307\",\\u1E98:\"w\\u030A\",\\u1E8D:\"x\\u0308\",\\u1E8B:\"x\\u0307\",\\u00FD:\"y\\u0301\",\\u1EF3:\"y\\u0300\",\\u00FF:\"y\\u0308\",\\u1EF9:\"y\\u0303\",\\u0233:\"y\\u0304\",\\u0177:\"y\\u0302\",\\u1E8F:\"y\\u0307\",\\u1E99:\"y\\u030A\",
\\u017A:\"z\\u0301\",\\u017E:\"z\\u030C\",\\u1E91:\"z\\u0302\",\\u017C:\"z\\u0307\",\\u00C1:\"A\\u0301\",\\u00C0:\"A\\u0300\",\\u00C4:\"A\\u0308\",\\u01DE:\"A\\u0308\\u0304\",\\u00C3:\"A\\u0303\",\\u0100:\"A\\u0304\",\\u0102:\"A\\u0306\",\\u1EAE:\"A\\u0306\\u0301\",\\u1EB0:\"A\\u0306\\u0300\",\\u1EB4:\"A\\u0306\\u0303\",\\u01CD:\"A\\u030C\",\\u00C2:\"A\\u0302\",\\u1EA4:\"A\\u0302\\u0301\",\\u1EA6:\"A\\u0302\\u0300\",\\u1EAA:\"A\\u0302\\u0303\",\\u0226:\"A\\u0307\",\\u01E0:\"A\\u0307\\u0304\",\\u00C5:\"A\\u030A\",\\u01FA:\"A\\u030A\\u0301\",\\u1E02:\"B\\u0307\",\\u0106:\"C\\u0301\",\\u1E08:\"C\\u0327\\u0301\",\\u010C:\"C\\u030C\",\\u0108:\"C\\u0302\",\\u010A:\"C\\u0307\",\\u00C7:\"C\\u0327\",\\u010E:\"D\\u030C\",\\u1E0A:\"D\\u0307\",\\u1E10:\"D\\u0327\",\\u00C9:\"E\\u0301\",\\u00C8:\"E\\u0300\",\\u00CB:\"E\\u0308\",\\u1EBC:\"E\\u0303\",\\u0112:\"E\\u0304\",\\u1E16:\"E\\u0304\\u0301\",\\u1E14:\"E\\u0304\\u0300\",\\u0114:\"E\\u0306\",\\u1E1C:\"E\\u0327\\u0306\",\\u011A:\"E\\u030C\",\\u00CA:\"E\\u0302\",\\u1EBE:\"E\\u0302\\u0301\",\\u1EC0:\"E\\u0302\\u0300\",\\u1EC4:\"E\\u0302\\u0303\",\\u0116:\"E\\u0307\",\\u0228:\"E\\u0327\",\\u1E1E:\"F\\u0307\",\\u01F4:\"G\\u0301\",\\u1E20:\"G\\u0304\",\\u011E:\"G\\u0306\",\\u01E6:\"G\\u030C\",\\u011C:\"G\\u0302\",\\u0120:\"G\\u0307\",\\u0122:\"G\\u0327\",\\u1E26:\"H\\u0308\",\\u021E:\"H\\u030C\",\\u0124:\"H\\u0302\",\\u1E22:\"H\\u0307\",\\u1E28:\"H\\u0327\",\\u00CD:\"I\\u0301\",\\u00CC:\"I\\u0300\",\\u00CF:\"I\\u0308\",\\u1E2E:\"I\\u0308\\u0301\",\\u0128:\"I\\u0303\",\\u012A:\"I\\u0304\",\\u012C:\"I\\u0306\",\\u01CF:\"I\\u030C\",\\u00CE:\"I\\u0302\",\\u0130:\"I\\u0307\",\\u0134:\"J\\u0302\",\\u1E30:\"K\\u0301\",\\u01E8:\"K\\u030C\",\\u0136:\"K\\u0327\",\\u0139:\"L\\u0301\",\\u013D:\"L\\u030C\",\\u013B:\"L\\u0327\",\\u1E3E:\"M\\u0301\",\\u1E40:\"M\\u0307\",\\u0143:\"N\\u0301\",\\u01F8:\"N\\u0300\",\\u00D1:\"N\\u0303\",\\u0147:\"N\\u030C\",\\u1E44:\"N\\u0307\",\\u0145:\"N\\u0327\",\\u00D3:\"O\\u0301\",\\u00D2:\"O\\u0300\",\\u00D6:\"O\
\u0308\",\\u022A:\"O\\u0308\\u0304\",\\u00D5:\"O\\u0303\",\\u1E4C:\"O\\u0303\\u0301\",\\u1E4E:\"O\\u0303\\u0308\",\\u022C:\"O\\u0303\\u0304\",\\u014C:\"O\\u0304\",\\u1E52:\"O\\u0304\\u0301\",\\u1E50:\"O\\u0304\\u0300\",\\u014E:\"O\\u0306\",\\u01D1:\"O\\u030C\",\\u00D4:\"O\\u0302\",\\u1ED0:\"O\\u0302\\u0301\",\\u1ED2:\"O\\u0302\\u0300\",\\u1ED6:\"O\\u0302\\u0303\",\\u022E:\"O\\u0307\",\\u0230:\"O\\u0307\\u0304\",\\u0150:\"O\\u030B\",\\u1E54:\"P\\u0301\",\\u1E56:\"P\\u0307\",\\u0154:\"R\\u0301\",\\u0158:\"R\\u030C\",\\u1E58:\"R\\u0307\",\\u0156:\"R\\u0327\",\\u015A:\"S\\u0301\",\\u1E64:\"S\\u0301\\u0307\",\\u0160:\"S\\u030C\",\\u1E66:\"S\\u030C\\u0307\",\\u015C:\"S\\u0302\",\\u1E60:\"S\\u0307\",\\u015E:\"S\\u0327\",\\u0164:\"T\\u030C\",\\u1E6A:\"T\\u0307\",\\u0162:\"T\\u0327\",\\u00DA:\"U\\u0301\",\\u00D9:\"U\\u0300\",\\u00DC:\"U\\u0308\",\\u01D7:\"U\\u0308\\u0301\",\\u01DB:\"U\\u0308\\u0300\",\\u01D5:\"U\\u0308\\u0304\",\\u01D9:\"U\\u0308\\u030C\",\\u0168:\"U\\u0303\",\\u1E78:\"U\\u0303\\u0301\",\\u016A:\"U\\u0304\",\\u1E7A:\"U\\u0304\\u0308\",\\u016C:\"U\\u0306\",\\u01D3:\"U\\u030C\",\\u00DB:\"U\\u0302\",\\u016E:\"U\\u030A\",\\u0170:\"U\\u030B\",\\u1E7C:\"V\\u0303\",\\u1E82:\"W\\u0301\",\\u1E80:\"W\\u0300\",\\u1E84:\"W\\u0308\",\\u0174:\"W\\u0302\",\\u1E86:\"W\\u0307\",\\u1E8C:\"X\\u0308\",\\u1E8A:\"X\\u0307\",\\u00DD:\"Y\\u0301\",\\u1EF2:\"Y\\u0300\",\\u0178:\"Y\\u0308\",\\u1EF8:\"Y\\u0303\",\\u0232:\"Y\\u0304\",\\u0176:\"Y\\u0302\",\\u1E8E:\"Y\\u0307\",\\u0179:\"Z\\u0301\",\\u017D:\"Z\\u030C\",\\u1E90:\"Z\\u0302\",\\u017B:\"Z\\u0307\",\\u03AC:\"\\u03B1\\u0301\",\\u1F70:\"\\u03B1\\u0300\",\\u1FB1:\"\\u03B1\\u0304\",\\u1FB0:\"\\u03B1\\u0306\",\\u03AD:\"\\u03B5\\u0301\",\\u1F72:\"\\u03B5\\u0300\",\\u03AE:\"\\u03B7\\u0301\",\\u1F74:\"\\u03B7\\u0300\",\\u03AF:\"\\u03B9\\u0301\",\\u1F76:\"\\u03B9\\u0300\",\\u03CA:\"\\u03B9\\u0308\",\\u0390:\"\\u03B9\\u0308\\u0301\",\\u1FD2:\"\\u03B9\\u0308\\u0300\",\\u1FD1:\"\\u03B9\\u0304\",\\u1FD0:\"\\u03B9\\u0306\",\\u03CC:\"\\u03BF\
\u0301\",\\u1F78:\"\\u03BF\\u0300\",\\u03CD:\"\\u03C5\\u0301\",\\u1F7A:\"\\u03C5\\u0300\",\\u03CB:\"\\u03C5\\u0308\",\\u03B0:\"\\u03C5\\u0308\\u0301\",\\u1FE2:\"\\u03C5\\u0308\\u0300\",\\u1FE1:\"\\u03C5\\u0304\",\\u1FE0:\"\\u03C5\\u0306\",\\u03CE:\"\\u03C9\\u0301\",\\u1F7C:\"\\u03C9\\u0300\",\\u038E:\"\\u03A5\\u0301\",\\u1FEA:\"\\u03A5\\u0300\",\\u03AB:\"\\u03A5\\u0308\",\\u1FE9:\"\\u03A5\\u0304\",\\u1FE8:\"\\u03A5\\u0306\",\\u038F:\"\\u03A9\\u0301\",\\u1FFA:\"\\u03A9\\u0300\"},No=class i{constructor(e,t){this.mode=void 0,this.gullet=void 0,this.settings=void 0,this.leftrightDepth=void 0,this.nextToken=void 0,this.mode=\"math\",this.gullet=new sh(e,t,this.mode),this.settings=t,this.leftrightDepth=0}expect(e,t){if(t===void 0&&(t=!0),this.fetch().text!==e)throw new I(\"Expected '\"+e+\"', got '\"+this.fetch().text+\"'\",this.fetch());t&&this.consume()}consume(){this.nextToken=null}fetch(){return this.nextToken==null&&(this.nextToken=this.gullet.expandNextToken()),this.nextToken}switchMode(e){this.mode=e,this.gullet.switchMode(e)}parse(){this.settings.globalGroup||this.gullet.beginGroup(),this.settings.colorIsTextColor&&this.gullet.macros.set(\"\\\\color\",\"\\\\textcolor\");try{var e=this.parseExpression(!1);return this.expect(\"EOF\"),this.settings.globalGroup||this.gullet.endGroup(),e}finally{this.gullet.endGroups()}}subparse(e){var t=this.nextToken;this.consume(),this.gullet.pushToken(new mt(\"}\")),this.gullet.pushTokens(e);var r=this.parseExpression(!1);return this.expect(\"}\"),this.nextToken=t,r}parseExpression(e,t){for(var r=[];;){this.mode===\"math\"&&this.consumeSpaces();var n=this.fetch();if(i.endOfExpression.has(n.text)||t&&n.text===t||e&&Dr[n.text]&&Dr[n.text].infix)break;var s=this.parseAtom(t);if(s){if(s.type===\"internal\")continue}else break;r.push(s)}return this.mode===\"text\"&&this.formLigatures(r),this.handleInfixNodes(r)}handleInfixNodes(e){for(var t=-1,r,n=0;n<e.length;n++)if(e[n].type===\"infix\"){if(t!==-1)throw new I(\"only one infix 
operator per group\",e[n].token);t=n,r=e[n].replaceWith}if(t!==-1&&r){var s,o,l=e.slice(0,t),a=e.slice(t+1);l.length===1&&l[0].type===\"ordgroup\"?s=l[0]:s={type:\"ordgroup\",mode:this.mode,body:l},a.length===1&&a[0].type===\"ordgroup\"?o=a[0]:o={type:\"ordgroup\",mode:this.mode,body:a};var h;return r===\"\\\\\\\\abovefrac\"?h=this.callFunction(r,[s,e[t],o],[]):h=this.callFunction(r,[s,o],[]),[h]}else return e}handleSupSubscript(e){var t=this.fetch(),r=t.text;this.consume(),this.consumeSpaces();var n;do{var s;n=this.parseGroup(e)}while(((s=n)==null?void 0:s.type)===\"internal\");if(!n)throw new I(\"Expected group after '\"+r+\"'\",t);return n}formatUnsupportedCmd(e){for(var t=[],r=0;r<e.length;r++)t.push({type:\"textord\",mode:\"text\",text:e[r]});var n={type:\"text\",mode:this.mode,body:t},s={type:\"color\",mode:this.mode,color:this.settings.errorColor,body:[n]};return s}parseAtom(e){var t=this.parseGroup(\"atom\",e);if(t?.type===\"internal\"||this.mode===\"text\")return t;for(var r,n;;){this.consumeSpaces();var s=this.fetch();if(s.text===\"\\\\limits\"||s.text===\"\\\\nolimits\"){if(t&&t.type===\"op\"){var o=s.text===\"\\\\limits\";t.limits=o,t.alwaysHandleSupSub=!0}else if(t&&t.type===\"operatorname\")t.alwaysHandleSupSub&&(t.limits=s.text===\"\\\\limits\");else throw new I(\"Limit controls must follow a math operator\",s);this.consume()}else if(s.text===\"^\"){if(r)throw new I(\"Double superscript\",s);r=this.handleSupSubscript(\"superscript\")}else if(s.text===\"_\"){if(n)throw new I(\"Double subscript\",s);n=this.handleSupSubscript(\"subscript\")}else if(s.text===\"'\"){if(r)throw new I(\"Double superscript\",s);var l={type:\"textord\",mode:this.mode,text:\"\\\\prime\"},a=[l];for(this.consume();this.fetch().text===\"'\";)a.push(l),this.consume();this.fetch().text===\"^\"&&a.push(this.handleSupSubscript(\"superscript\")),r={type:\"ordgroup\",mode:this.mode,body:a}}else if(Mo[s.text]){var h=mm.test(s.text),c=[];for(c.push(new 
mt(Mo[s.text])),this.consume();;){var u=this.fetch().text;if(!Mo[u]||mm.test(u)!==h)break;c.unshift(new mt(Mo[u])),this.consume()}var d=this.subparse(c);h?n={type:\"ordgroup\",mode:\"math\",body:d}:r={type:\"ordgroup\",mode:\"math\",body:d}}else break}return r||n?{type:\"supsub\",mode:this.mode,base:t,sup:r,sub:n}:t}parseFunction(e,t){var r=this.fetch(),n=r.text,s=Dr[n];if(!s)return null;if(this.consume(),t&&t!==\"atom\"&&!s.allowedInArgument)throw new I(\"Got function '\"+n+\"' with no arguments\"+(t?\" as \"+t:\"\"),r);if(this.mode===\"text\"&&!s.allowedInText)throw new I(\"Can't use function '\"+n+\"' in text mode\",r);if(this.mode===\"math\"&&s.allowedInMath===!1)throw new I(\"Can't use function '\"+n+\"' in math mode\",r);var{args:o,optArgs:l}=this.parseArguments(n,s);return this.callFunction(n,o,l,r,e)}callFunction(e,t,r,n,s){var o={funcName:e,parser:this,token:n,breakOnTokenText:s},l=Dr[e];if(l&&l.handler)return l.handler(o,t,r);throw new I(\"No function handler for \"+e)}parseArguments(e,t){var r=t.numArgs+t.numOptionalArgs;if(r===0)return{args:[],optArgs:[]};for(var n=[],s=[],o=0;o<r;o++){var l=t.argTypes&&t.argTypes[o],a=o<t.numOptionalArgs;(t.primitive&&l==null||t.type===\"sqrt\"&&o===1&&s[0]==null)&&(l=\"primitive\");var h=this.parseGroupOfType(\"argument to '\"+e+\"'\",l,a);if(a)s.push(h);else if(h!=null)n.push(h);else throw new I(\"Null argument, please report this as a bug\")}return{args:n,optArgs:s}}parseGroupOfType(e,t,r){switch(t){case\"color\":return this.parseColorGroup(r);case\"size\":return this.parseSizeGroup(r);case\"url\":return this.parseUrlGroup(r);case\"math\":case\"text\":return this.parseArgumentGroup(r,t);case\"hbox\":{var n=this.parseArgumentGroup(r,\"text\");return n!=null?{type:\"styling\",mode:n.mode,body:[n],style:\"text\"}:null}case\"raw\":{var s=this.parseStringGroup(\"raw\",r);return s!=null?{type:\"raw\",mode:\"text\",string:s.text}:null}case\"primitive\":{if(r)throw new I(\"A primitive argument cannot be optional\");var 
o=this.parseGroup(e);if(o==null)throw new I(\"Expected group as \"+e,this.fetch());return o}case\"original\":case null:case void 0:return this.parseArgumentGroup(r);default:throw new I(\"Unknown group type as \"+e,this.fetch())}}consumeSpaces(){for(;this.fetch().text===\" \";)this.consume()}parseStringGroup(e,t){var r=this.gullet.scanArgument(t);if(r==null)return null;for(var n=\"\",s;(s=this.fetch()).text!==\"EOF\";)n+=s.text,this.consume();return this.consume(),r.text=n,r}parseRegexGroup(e,t){for(var r=this.fetch(),n=r,s=\"\",o;(o=this.fetch()).text!==\"EOF\"&&e.test(s+o.text);)n=o,s+=n.text,this.consume();if(s===\"\")throw new I(\"Invalid \"+t+\": '\"+r.text+\"'\",r);return r.range(n,s)}parseColorGroup(e){var t=this.parseStringGroup(\"color\",e);if(t==null)return null;var r=/^(#[a-f0-9]{3,4}|#[a-f0-9]{6}|#[a-f0-9]{8}|[a-f0-9]{6}|[a-z]+)$/i.exec(t.text);if(!r)throw new I(\"Invalid color: '\"+t.text+\"'\",t);var n=r[0];return/^[0-9a-f]{6}$/i.test(n)&&(n=\"#\"+n),{type:\"color-token\",mode:this.mode,color:n}}parseSizeGroup(e){var t,r=!1;if(this.gullet.consumeSpaces(),!e&&this.gullet.future().text!==\"{\"?t=this.parseRegexGroup(/^[-+]? *(?:$|\\d+|\\d+\\.\\d*|\\.\\d*) *[a-z]{0,2} *$/,\"size\"):t=this.parseStringGroup(\"size\",e),!t)return null;!e&&t.text.length===0&&(t.text=\"0pt\",r=!0);var n=/([-+]?) 
*(\\d+(?:\\.\\d*)?|\\.\\d+) *([a-z]{2})/.exec(t.text);if(!n)throw new I(\"Invalid size: '\"+t.text+\"'\",t);var s={number:+(n[1]+n[2]),unit:n[3]};if(!vm(s))throw new I(\"Invalid unit: '\"+s.unit+\"'\",t);return{type:\"size\",mode:this.mode,value:s,isBlank:r}}parseUrlGroup(e){this.gullet.lexer.setCatcode(\"%\",13),this.gullet.lexer.setCatcode(\"~\",12);var t=this.parseStringGroup(\"url\",e);if(this.gullet.lexer.setCatcode(\"%\",14),this.gullet.lexer.setCatcode(\"~\",13),t==null)return null;var r=t.text.replace(/\\\\([#$%&~_^{}])/g,\"$1\");return{type:\"url\",mode:this.mode,url:r}}parseArgumentGroup(e,t){var r=this.gullet.scanArgument(e);if(r==null)return null;var n=this.mode;t&&this.switchMode(t),this.gullet.beginGroup();var s=this.parseExpression(!1,\"EOF\");this.expect(\"EOF\"),this.gullet.endGroup();var o={type:\"ordgroup\",mode:this.mode,loc:r.loc,body:s};return t&&this.switchMode(n),o}parseGroup(e,t){var r=this.fetch(),n=r.text,s;if(n===\"{\"||n===\"\\\\begingroup\"){this.consume();var o=n===\"{\"?\"}\":\"\\\\endgroup\";this.gullet.beginGroup();var l=this.parseExpression(!1,o),a=this.fetch();this.expect(o),this.gullet.endGroup(),s={type:\"ordgroup\",mode:this.mode,loc:lt.range(r,a),body:l,semisimple:n===\"\\\\begingroup\"||void 0}}else if(s=this.parseFunction(t,e)||this.parseSymbol(),s==null&&n[0]===\"\\\\\"&&!lp.hasOwnProperty(n)){if(this.settings.throwOnError)throw new I(\"Undefined control sequence: \"+n,r);s=this.formatUnsupportedCmd(n),this.consume()}return s}formLigatures(e){for(var t=e.length-1,r=0;r<t;++r){var n=e[r],s=n.text;s===\"-\"&&e[r+1].text===\"-\"&&(r+1<t&&e[r+2].text===\"-\"?(e.splice(r,3,{type:\"textord\",mode:\"text\",loc:lt.range(n,e[r+2]),text:\"---\"}),t-=2):(e.splice(r,2,{type:\"textord\",mode:\"text\",loc:lt.range(n,e[r+1]),text:\"--\"}),t-=1)),(s===\"'\"||s===\"`\")&&e[r+1].text===s&&(e.splice(r,2,{type:\"textord\",mode:\"text\",loc:lt.range(n,e[r+1]),text:s+s}),t-=1)}}parseSymbol(){var 
e=this.fetch(),t=e.text;if(/^\\\\verb[^a-zA-Z]/.test(t)){this.consume();var r=t.slice(5),n=r.charAt(0)===\"*\";if(n&&(r=r.slice(1)),r.length<2||r.charAt(0)!==r.slice(-1))throw new I(`\\\\verb assertion failed --\n                    please report what input caused this bug`);return r=r.slice(1,-1),{type:\"verb\",mode:\"text\",body:r,star:n}}pm.hasOwnProperty(t[0])&&!me[this.mode][t[0]]&&(this.settings.strict&&this.mode===\"math\"&&this.settings.reportNonstrict(\"unicodeTextInMathMode\",'Accented Unicode text character \"'+t[0]+'\" used in math mode',e),t=pm[t[0]]+t.slice(1));var s=d6.exec(t);s&&(t=t.substring(0,s.index),t===\"i\"?t=\"\\u0131\":t===\"j\"&&(t=\"\\u0237\"));var o;if(me[this.mode][t]){this.settings.strict&&this.mode===\"math\"&&_0.includes(t)&&this.settings.reportNonstrict(\"unicodeTextInMathMode\",'Latin-1/Unicode text character \"'+t[0]+'\" used in math mode',e);var l=me[this.mode][t].group,a=lt.range(e),h;if(m3.hasOwnProperty(l)){var c=l;h={type:\"atom\",mode:this.mode,family:c,loc:a,text:t}}else h={type:l,mode:this.mode,loc:a,text:t};o=h}else if(t.charCodeAt(0)>=128)this.settings.strict&&(gm(t.charCodeAt(0))?this.mode===\"math\"&&this.settings.reportNonstrict(\"unicodeTextInMathMode\",'Unicode text character \"'+t[0]+'\" used in math mode',e):this.settings.reportNonstrict(\"unknownSymbol\",'Unrecognized Unicode character \"'+t[0]+'\"'+(\" (\"+t.charCodeAt(0)+\")\"),e)),o={type:\"textord\",mode:\"text\",loc:lt.range(e),text:t};else return null;if(this.consume(),s)for(var u=0;u<s[0].length;u++){var d=s[0][u];if(!K0[d])throw new I(\"Unknown accent ' \"+d+\"'\",e);var p=K0[d][this.mode]||K0[d].text;if(!p)throw new I(\"Accent \"+d+\" unsupported in \"+this.mode+\" mode\",e);o={type:\"accent\",mode:this.mode,loc:lt.range(e),label:p,isStretchy:!1,isShifty:!0,base:o}}return o}};No.endOfExpression=new Set([\"}\",\"\\\\endgroup\",\"\\\\end\",\"\\\\right\",\"&\"]);var kh=function(e,t){if(!(typeof e==\"string\"||e instanceof String))throw new TypeError(\"KaTeX 
can only parse string typed expression\");var r=new No(e,t);delete r.gullet.macros.current[\"\\\\df@tag\"];var n=r.parse();if(delete r.gullet.macros.current[\"\\\\current@color\"],delete r.gullet.macros.current[\"\\\\color\"],r.gullet.macros.get(\"\\\\df@tag\")){if(!t.displayMode)throw new I(\"\\\\tag works only in display equations\");n=[{type:\"tag\",mode:\"text\",body:n,tag:r.subparse([new mt(\"\\\\df@tag\")])}]}return n},ap=function(e,t,r){t.textContent=\"\";var n=Sh(e,r).toNode();t.appendChild(n)};typeof document<\"u\"&&document.compatMode!==\"CSS1Compat\"&&(typeof console<\"u\"&&console.warn(\"Warning: KaTeX doesn't work in quirks mode. Make sure your website has a suitable doctype.\"),ap=function(){throw new I(\"KaTeX doesn't work in quirks mode.\")});var v6=function(e,t){var r=Sh(e,t).toMarkup();return r},b6=function(e,t){var r=new Bn(t);return kh(e,r)},hp=function(e,t,r){if(r.throwOnError||!(e instanceof I))throw e;var n=z([\"katex-error\"],[new at(t)]);return n.setAttribute(\"title\",e.toString()),n.setAttribute(\"style\",\"color:\"+r.errorColor),n},Sh=function(e,t){var r=new Bn(t);try{var n=kh(e,r);return E3(n,e,r)}catch(s){return hp(s,e,r)}},y6=function(e,t){var r=new Bn(t);try{var n=kh(e,r);return O3(n,e,r)}catch(s){return hp(s,e,r)}},x6=\"0.16.33\",w6={Span:ti,Anchor:On,SymbolNode:at,SvgNode:qt,PathNode:Zt,LineNode:zn},Ch={version:x6,render:ap,renderToString:v6,ParseError:I,SETTINGS_SCHEMA:To,__parse:b6,__renderToDomTree:Sh,__renderToHTMLTree:y6,__setFontMetrics:l3,__defineSymbol:f,__defineFunction:q,__defineMacro:b,__domTree:w6};var k6=function(e,t,r){for(var n=r,s=0,o=e.length;n<t.length;){var l=t[n];if(s<=0&&t.slice(n,n+o)===e)return n;l===\"\\\\\"?n++:l===\"{\"?s++:l===\"}\"&&s--,n++}return-1},S6=function(e){return e.replace(/[-/\\\\^$*+?.()|[\\]{}]/g,\"\\\\$&\")},C6=/^\\\\begin{/,A6=function(e,t){for(var r,n=[],s=new 
RegExp(\"(\"+t.map(h=>S6(h.left)).join(\"|\")+\")\");r=e.search(s),r!==-1;){r>0&&(n.push({type:\"text\",data:e.slice(0,r)}),e=e.slice(r));var o=t.findIndex(h=>e.startsWith(h.left));if(r=k6(t[o].right,e,t[o].left.length),r===-1)break;var l=e.slice(0,r+t[o].right.length),a=C6.test(l)?l:e.slice(t[o].left.length,r);n.push({type:\"math\",data:a,rawData:l,display:t[o].display}),e=e.slice(r+t[o].right.length)}return e!==\"\"&&n.push({type:\"text\",data:e}),n},M6=function(e,t){var r=A6(e,t.delimiters);if(r.length===1&&r[0].type===\"text\")return null;for(var n=document.createDocumentFragment(),s=0;s<r.length;s++)if(r[s].type===\"text\")n.appendChild(document.createTextNode(r[s].data));else{var o=document.createElement(\"span\"),l=r[s].data;t.displayMode=r[s].display;try{t.preProcess&&(l=t.preProcess(l)),Ch.render(l,o,t)}catch(a){if(!(a instanceof Ch.ParseError))throw a;t.errorCallback(\"KaTeX auto-render: Failed to parse `\"+r[s].data+\"` with \",a),n.appendChild(document.createTextNode(r[s].rawData));continue}n.appendChild(o)}return n},T6=function i(e,t){for(var r=0;r<e.childNodes.length;r++){var n=e.childNodes[r];if(n.nodeType===3){for(var s=n.textContent,o=n.nextSibling,l=0;o&&o.nodeType===Node.TEXT_NODE;)s+=o.textContent,o=o.nextSibling,l++;var a=M6(s,t);if(a){for(var h=0;h<l;h++)n.nextSibling.remove();r+=a.childNodes.length-1,e.replaceChild(a,n)}else r+=l}else n.nodeType===1&&function(){var c=\" \"+n.className+\" \",u=!t.ignoredTags.has(n.nodeName.toLowerCase())&&t.ignoredClasses.every(d=>!c.includes(\" \"+d+\" \"));u&&i(n,t)}()}},cp=function(e,t){if(!e)throw new Error(\"No element provided to render\");var r={};for(var n in 
t)t.hasOwnProperty(n)&&(r[n]=t[n]);r.delimiters=r.delimiters||[{left:\"$$\",right:\"$$\",display:!0},{left:\"\\\\(\",right:\"\\\\)\",display:!1},{left:\"\\\\begin{equation}\",right:\"\\\\end{equation}\",display:!0},{left:\"\\\\begin{align}\",right:\"\\\\end{align}\",display:!0},{left:\"\\\\begin{alignat}\",right:\"\\\\end{alignat}\",display:!0},{left:\"\\\\begin{gather}\",right:\"\\\\end{gather}\",display:!0},{left:\"\\\\begin{CD}\",right:\"\\\\end{CD}\",display:!0},{left:\"\\\\[\",right:\"\\\\]\",display:!0}],r.ignoredTags=new Set(r.ignoredTags||[\"script\",\"noscript\",\"style\",\"textarea\",\"pre\",\"code\",\"option\"]),r.ignoredClasses=r.ignoredClasses||[],r.errorCallback=r.errorCallback||console.error,r.macros=r.macros||{},T6(e,r)};function up(i){cp(i,{output:\"mathml\",throwOnError:!1})}function Ah(i){switch(i.kind){case\"stdout\":return D6(i.text);case\"stderr\":return B6(i.text);case\"error\":return E6(i.text);case\"display\":return O6(i.mime,i.data);default:return null}}function D6(i){let e=document.createElement(\"pre\");return e.className=\"output output-stdout\",e.textContent=fp(\"\",i),e}function B6(i){let e=document.createElement(\"pre\");return e.className=\"output output-stderr\",e.textContent=i,e}function E6(i){let e=document.createElement(\"div\");e.className=\"output output-error\";let t=document.createElement(\"pre\");return t.textContent=i,e.appendChild(t),e}function O6(i,e){let t=document.createElement(\"div\");if(t.className=\"output output-display\",i===\"text/plain\"){let r=document.createElement(\"pre\");r.textContent=e,t.appendChild(r)}else if(i===\"text/html\"){let r=document.createElement(\"div\");r.className=\"display-html\",r.innerHTML=e,t.appendChild(r)}else if(i.startsWith(\"image/\")){let r=document.createElement(\"img\");i===\"image/svg+xml\"?r.src=\"data:image/svg+xml;base64,\"+btoa(e):r.src=\"data:\"+i+\";base64,\"+e,r.style.maxWidth=\"100%\",t.appendChild(r)}else if(i===\"application/json\"){let 
r=document.createElement(\"pre\");try{r.textContent=JSON.stringify(JSON.parse(e),null,2)}catch{r.textContent=e}t.appendChild(r)}else{let r=document.createElement(\"pre\");r.textContent=e,t.appendChild(r)}return t}function fp(i,e){let r=(i+e).split(`\n`);for(let n=0;n<r.length;n++){let s=r[n].split(\"\\r\");r[n]=s[s.length-1]}return r.join(`\n`)}function dp(i,e){if(e.kind===\"stdout\"&&i.lastElementChild&&i.lastElementChild.classList.contains(\"output-stdout\")){i.lastElementChild.textContent=fp(i.lastElementChild.textContent,e.text);return}let t=Ah(e);t&&i.appendChild(t)}var jo=new Map;function Rn(i){return i.attrs||{}}function mp(i){if(!i)return\"\";let e=i.indexOf(`\n`),t=e===-1?i:i.slice(0,e);return t.length>60?t.slice(0,57)+\"...\":t}function Mh(i,e,t){let r=document.createElement(\"div\");return r.className=`cell cell-${i.kind}`,r.dataset.cellId=i.id,r.dataset.status=i.status||\"idle\",i.kind===\"code\"?r.appendChild(z6(i,e,t)):r.appendChild(L6(i,e,t)),r.addEventListener(\"click\",n=>{n.target.closest(\"button\")||t.setFocus(i.id)}),r}function z6(i,e,t){let r=document.createElement(\"div\");r.className=\"cell-wrapper\";let n=Rn(i),s=document.createElement(\"div\");s.className=\"cell-gutter\";let o=document.createElement(\"span\");o.className=\"cell-number\",o.textContent=i.execution_count>0?`[${i.execution_count}]`:\"[ ]\",s.appendChild(o);let l=document.createElement(\"span\");l.className=\"cell-status-icon\",s.appendChild(l),r.appendChild(s);let a=document.createElement(\"div\");a.className=\"cell-content\";let h=document.createElement(\"div\");h.className=\"cell-collapsed-bar\";let c=document.createElement(\"span\");c.className=\"cell-collapsed-toggle\",c.textContent=\"\\u25B8\",h.appendChild(c);let u=document.createElement(\"span\");u.className=\"cell-collapsed-source\",u.textContent=mp(i.source)||\"(empty)\",h.appendChild(u),a.appendChild(h);let d=document.createElement(\"div\");d.className=\"cell-source-placeholder\";let 
p=document.createElement(\"span\");p.className=\"cell-collapsed-toggle\",p.textContent=\"\\u25B8\",d.appendChild(p);let v=document.createElement(\"span\");v.textContent=\"code\",d.appendChild(v),a.appendChild(d);let y=document.createElement(\"div\");y.className=\"cell-editor\",a.appendChild(y);let w=z0(y,i.source,{onChange:B=>{i.source=B,e.updateSource(i.id,B)},onExecute:()=>{e.executeCell(i.id)},onExecuteAndMoveNext:()=>{e.executeCell(i.id),t.focusNext()},onEscape:()=>{y.querySelector(\".cm-content\")?.blur()},wsClient:e});jo.set(i.id,w);let S=document.createElement(\"div\");if(S.className=\"cell-outputs\",i.outputs)for(let B of i.outputs){let D=Ah(B);D&&S.appendChild(D)}a.appendChild(S);let A=!!n.collapsed,M=!!n.hide_source;function E(){A?(h.style.display=\"\",d.style.display=\"none\",y.style.display=\"none\",S.style.display=\"none\",c.textContent=\"\\u25B8\"):M?(h.style.display=\"none\",d.style.display=\"\",y.style.display=\"none\",S.style.display=\"\",p.textContent=\"\\u25B8\"):(h.style.display=\"none\",d.style.display=\"none\",y.style.display=\"\",S.style.display=\"\")}h.addEventListener(\"click\",()=>{A=!1,E()}),d.addEventListener(\"click\",()=>{M=!1,E()}),E();let T=document.createElement(\"div\");return T.className=\"cell-actions\",T.innerHTML=`\n    <button data-action=\"run\" title=\"Evaluate cell (Ctrl+Enter)\">Run</button>\n    <button data-action=\"delete\" title=\"Delete cell (d)\">Delete</button>\n    <button data-action=\"move-up\" title=\"Move cell up (Shift+K)\">\\u2191</button>\n    <button data-action=\"move-down\" title=\"Move cell down (Shift+J)\">\\u2193</button>\n    <button data-action=\"toggle-type\" title=\"Convert to text cell (m)\">\\u21C4 Text</button>\n    <button data-action=\"toggle-collapse\" title=\"Toggle collapsed (z)\">${n.collapsed?\"Expand\":\"Collapse\"}</button>\n    <button data-action=\"toggle-hide-source\" title=\"Toggle hide source (Z)\">${n.hide_source?\"Show src\":\"Hide src\"}</button>\n  
`,T.addEventListener(\"click\",B=>{let D=B.target.dataset.action;if(!D)return;let V=t.findCellIndex(i.id);switch(D){case\"run\":e.executeCell(i.id);break;case\"delete\":e.deleteCell(i.id);break;case\"move-up\":V>0&&e.moveCell(i.id,V-1);break;case\"move-down\":V<t.cells.length-1&&e.moveCell(i.id,V+1);break;case\"toggle-type\":e.setCellKind(i.id,\"text\");break;case\"toggle-collapse\":{let U={...Rn(i),collapsed:!n.collapsed};e.setCellAttrs(i.id,U);break}case\"toggle-hide-source\":{let U={...Rn(i),hide_source:!n.hide_source};e.setCellAttrs(i.id,U);break}}}),a.appendChild(T),r.appendChild(a),r}function L6(i,e,t){let r=document.createElement(\"div\");r.className=\"cell-wrapper\";let n=Rn(i),s=document.createElement(\"div\");s.className=\"cell-content\";let o=document.createElement(\"div\");o.className=\"cell-collapsed-bar\";let l=document.createElement(\"span\");l.className=\"cell-collapsed-toggle\",l.textContent=\"\\u25B8\",o.appendChild(l);let a=document.createElement(\"span\");a.className=\"cell-collapsed-source\",a.textContent=mp(i.source)||\"(empty)\",o.appendChild(a),s.appendChild(o);let h=document.createElement(\"div\");h.className=\"cell-markdown\",h.innerHTML=i.rendered_html||'<p class=\"cell-markdown-empty\">Empty text cell \\u2014 click to edit</p>',up(h),s.appendChild(h);let c=document.createElement(\"div\");c.className=\"cell-editor\",c.style.display=\"none\",s.appendChild(c);let u=null,d=!!n.collapsed;function p(){d?(o.style.display=\"\",h.style.display=\"none\",c.style.display=\"none\",l.textContent=\"\\u25B8\"):(o.style.display=\"none\",h.style.display=\"\")}o.addEventListener(\"click\",()=>{d=!1,p()}),p(),h.addEventListener(\"dblclick\",()=>{v()});function v(){h.style.display=\"none\",c.style.display=\"block\",u||(u=z0(c,i.source,{onChange:S=>{i.source=S,e.updateSource(i.id,S)},onExecute:()=>y(),onExecuteAndMoveNext:()=>{y(),t.focusNext()},onEscape:()=>y(),wsClient:null}),jo.set(i.id,u)),u.focus()}function 
y(){c.style.display=\"none\",h.style.display=\"block\",e.checkpoint()}let w=document.createElement(\"div\");return w.className=\"cell-actions\",w.innerHTML=`\n    <button data-action=\"edit\" title=\"Edit text (Enter)\">Edit</button>\n    <button data-action=\"delete\" title=\"Delete cell (d)\">Delete</button>\n    <button data-action=\"move-up\" title=\"Move cell up (Shift+K)\">\\u2191</button>\n    <button data-action=\"move-down\" title=\"Move cell down (Shift+J)\">\\u2193</button>\n    <button data-action=\"toggle-type\" title=\"Convert to code cell (m)\">\\u21C4 Code</button>\n    <button data-action=\"toggle-collapse\" title=\"Toggle collapsed (z)\">${n.collapsed?\"Expand\":\"Collapse\"}</button>\n  `,w.addEventListener(\"click\",S=>{let A=S.target.dataset.action;if(!A)return;let M=t.findCellIndex(i.id);switch(A){case\"edit\":v();break;case\"delete\":e.deleteCell(i.id);break;case\"move-up\":M>0&&e.moveCell(i.id,M-1);break;case\"move-down\":M<t.cells.length-1&&e.moveCell(i.id,M+1);break;case\"toggle-type\":e.setCellKind(i.id,\"code\");break;case\"toggle-collapse\":{let E={...Rn(i),collapsed:!n.collapsed};e.setCellAttrs(i.id,E);break}}}),s.appendChild(w),r.appendChild(s),r}function Yo(i){let e=jo.get(i);e&&(e.destroy(),jo.delete(i))}function pp(i,e){let t=document.querySelector(`[data-cell-id=\"${i}\"]`);if(t&&(t.dataset.status=e,e===\"running\"||e===\"queued\")){let r=t.querySelector(\".cell-status-icon\");r&&delete r.dataset.result}}function Th(i){let e=document.querySelector(`[data-cell-id=\"${i}\"] .cell-outputs`);e&&(e.innerHTML=\"\")}var Xo=class{constructor(e,t,r){this.container=e,this.store=t,this.wsClient=r,this._showSkeleton(),this._bindEvents()}_showSkeleton(){this.container.innerHTML=\"\";let e=document.createElement(\"div\");e.className=\"skeleton\";for(let t=0;t<4;t++){let 
r=document.createElement(\"div\");r.className=\"skeleton-cell\",r.style.animationDelay=`${t*.1}s`,t===0&&r.classList.add(\"skeleton-cell-short\"),e.appendChild(r)}this.container.appendChild(e)}_bindEvents(){this.store.on(\"notebook:loaded\",()=>this.renderAll()),this.store.on(\"cell:status\",({cellId:e,status:t})=>{pp(e,t),t===\"running\"&&Th(e)}),this.store.on(\"cell:output\",({cellId:e,output:t})=>{let r=document.querySelector(`[data-cell-id=\"${e}\"] .cell-outputs`);r&&dp(r,t)}),this.store.on(\"cell:outputs-cleared\",({cellId:e})=>{Th(e)}),this.store.on(\"cell:updated\",({cellId:e,cell:t})=>{this._replaceCell(e,t)}),this.store.on(\"cell:inserted\",({pos:e,cell:t})=>{this._insertCellAt(e,t)}),this.store.on(\"cell:deleted\",({cellId:e})=>{this._removeCell(e)}),this.store.on(\"cell:moved\",()=>{this.renderAll()}),this.store.on(\"cell:execution-done\",({cellId:e,success:t})=>{let r=document.querySelector(`[data-cell-id=\"${e}\"]`);if(r){r.dataset.status=\"idle\";let n=t?\"flash-success\":\"flash-error\";r.classList.add(n),r.addEventListener(\"animationend\",()=>r.classList.remove(n),{once:!0});let s=r.querySelector(\".cell-status-icon\");s&&(s.dataset.result=t?\"success\":\"error\");let o=r.querySelector(\".cell-number\"),l=this.store.findCell(e);o&&l&&l.execution_count>0&&(o.textContent=`[${l.execution_count}]`)}}),this.store.on(\"focus:changed\",({cellId:e,prevCellId:t})=>{if(t){let r=document.querySelector(`[data-cell-id=\"${t}\"]`);r&&r.classList.remove(\"focused\")}if(e){let r=document.querySelector(`[data-cell-id=\"${e}\"]`);r&&(r.classList.add(\"focused\"),r.scrollIntoView({block:\"nearest\",behavior:\"smooth\"}))}})}renderAll(){if(this.container.querySelectorAll(\"[data-cell-id]\").forEach(t=>{Yo(t.dataset.cellId)}),this.container.innerHTML=\"\",this.store.cells.length===0){let t=document.createElement(\"div\");t.className=\"empty-state\",t.innerHTML=`\n        <div class=\"empty-state-icon\">&laquo; &raquo;</div>\n        <p>No cells yet.</p>\n        <p 
class=\"empty-state-hint\">\n          Press <kbd>a</kbd> to add a code cell, or <kbd>t</kbd> for text.\n        </p>\n      `,this.container.appendChild(t);return}let e=this._groupIntoSections(this.store.cells);for(let t of e){let r=this._createSection(t);this.container.appendChild(r)}if(this.container.appendChild(this._createDivider(this.store.cells.length)),window._quillChapterNavEl&&this.container.appendChild(window._quillChapterNavEl),this._updatePageToc(),this.store.focusedCellId){let t=document.querySelector(`[data-cell-id=\"${this.store.focusedCellId}\"]`);t&&t.classList.add(\"focused\")}}_splitRenderedHtml(e){if(!e)return[];let t=document.createElement(\"div\");t.innerHTML=e;let r=[],n=[],s=!1;for(let o of Array.from(t.childNodes)){let l=o.nodeName;l===\"H1\"||l===\"H2\"?(n.length>0&&r.push(n.map(a=>a.outerHTML||a.textContent).join(\"\")),n=[o],s=!0):(!s&&n.length===0&&r.length,n.push(o))}return n.length>0&&r.push(n.map(o=>o.outerHTML||o.textContent).join(\"\")),r}_groupIntoSections(e){let t=[],r={name:null,cells:[]},n=/^#{1,2}\\s+(.+)/gm;for(let s of e)if(s.kind===\"text\"&&s.source){let o=[],l;for(;(l=n.exec(s.source))!==null;)o.push({name:l[1].trim(),index:l.index});if(n.lastIndex=0,o.length<=1){let a=o[0];a&&r.cells.length>0?(t.push(r),r={name:a.name,cells:[]}):a&&r.cells.length===0&&(r.name=a.name),r.cells.push(s)}else{let a=this._splitRenderedHtml(s.rendered_html),h=0,c=s.source.slice(0,o[0].index).trim().length>0;for(let u=0;u<o.length;u++){let d=o[u].index,p=u+1<o.length?o[u+1].index:s.source.length,v=s.source.slice(d,p).trim();if(u===0&&d>0){let w=s.source.slice(0,d).trim();if(w){let S=c&&a.length>0?a[h++]:null;r.cells.push({...s,source:w,rendered_html:S,id:s.id+\"_v0\",_virtual:!0})}}r.cells.length>0&&t.push(r),r={name:o[u].name,cells:[]};let y=h<a.length?a[h++]:null;r.cells.push({...s,source:v,rendered_html:y,id:s.id+\"_v\"+(u+1),_virtual:!0})}}}else r.cells.push(s);return(r.cells.length>0||t.length===0)&&t.push(r),t}_createSection(e){let 
t=document.createElement(\"section\");if(t.className=\"notebook-section\",e.name){let n=document.createElement(\"div\");n.className=\"section-header\";let s=document.createElement(\"span\");s.className=\"section-toggle\",s.textContent=\"\\u25BE\",n.appendChild(s);let o=document.createElement(\"h2\");o.textContent=e.name,n.appendChild(o),n.addEventListener(\"click\",()=>{let l=t.dataset.collapsed===\"true\";t.dataset.collapsed=l?\"false\":\"true\",s.textContent=l?\"\\u25BE\":\"\\u25B8\"}),t.appendChild(n)}let r=document.createElement(\"div\");r.className=\"section-body\";for(let n of e.cells){let s=this.store.findCellIndex(n.id);r.appendChild(this._createDivider(s)),r.appendChild(Mh(n,this.wsClient,this.store))}return t.appendChild(r),t}_createDivider(e){let t=document.createElement(\"div\");return t.className=\"cell-divider\",t.dataset.dividerPos=e,t.innerHTML=`\n      <span class=\"divider-line\"></span>\n      <span class=\"divider-buttons\">\n        <button data-action=\"add-code\" title=\"Add code cell\">+ Code</button>\n        <button data-action=\"add-text\" title=\"Add text cell\">+ Text</button>\n      </span>\n      <span class=\"divider-line\"></span>\n    `,t.addEventListener(\"click\",r=>{let n=r.target.dataset.action;if(!n)return;let s=this.container.querySelectorAll(\".cell-divider\"),o=0;for(let l of s){if(l===t)break;o++}n===\"add-code\"?this.wsClient.insertCell(o,\"code\"):n===\"add-text\"&&this.wsClient.insertCell(o,\"text\")}),t}_replaceCell(e,t){let r=document.querySelector(`[data-cell-id=\"${e}\"]`);if(!r)return;Yo(e);let n=Mh(t,this.wsClient,this.store);e===this.store.focusedCellId&&n.classList.add(\"focused\"),r.replaceWith(n)}_insertCellAt(e,t){this.renderAll(),this.store.setFocus(t.id)}_removeCell(e){let t=document.querySelector(`[data-cell-id=\"${e}\"]`);if(t){Yo(e);let r=t.previousElementSibling;r&&r.classList.contains(\"cell-divider\")&&r.remove(),t.remove()}this.store.cells.length===0&&this.renderAll()}_updatePageToc(){let 
e=document.getElementById(\"page-toc\");if(!e)return;let t=[];if(this.container.querySelectorAll(\".cell-markdown h2[id], .cell-markdown h3[id]\").forEach(n=>{t.push({level:n.tagName===\"H3\"?3:2,id:n.id,text:n.textContent})}),t.length===0){e.hidden=!0,e.innerHTML=\"\";return}let r=`<div class=\"page-toc-title\">On this page</div><ul>\n`;for(let n of t){let s=n.level===3?' class=\"toc-h3\"':\"\";r+=`<li${s}><a href=\"#${n.id}\">${n.text}</a></li>\n`}r+=\"</ul>\",e.innerHTML=r,e.hidden=!1}};function gp(i,e){document.addEventListener(\"keydown\",t=>{let r=document.getElementById(\"shortcuts-dialog\");if(t.key===\"Escape\"&&r&&!r.hidden){t.preventDefault(),r.hidden=!0;return}if(t.target.closest(\".cm-content\")||t.target.tagName===\"INPUT\"||t.target.tagName===\"TEXTAREA\")return;let n=t.ctrlKey||t.metaKey;if(n)switch(t.key){case\"s\":t.preventDefault(),e.save();return;case\"z\":t.preventDefault(),t.shiftKey?e.redo():e.undo();return;case\"Enter\":t.preventDefault(),t.shiftKey?e.executeAll():i.focusedCellId&&e.executeCell(i.focusedCellId);return}if(!n&&!t.altKey)switch(t.key){case\"?\":t.preventDefault(),window._quillToggleShortcuts&&window._quillToggleShortcuts();return;case\"ArrowLeft\":!i.focusedCellId&&window._quillChapterNav&&window._quillChapterNav.prev&&(t.preventDefault(),location.href=window._quillChapterNav.prev);return;case\"ArrowRight\":!i.focusedCellId&&window._quillChapterNav&&window._quillChapterNav.next&&(t.preventDefault(),location.href=window._quillChapterNav.next);return;case\"j\":case\"ArrowDown\":t.preventDefault(),i.focusNext();return;case\"k\":case\"ArrowUp\":t.preventDefault(),i.focusPrev();return;case\"J\":t.preventDefault();{let s=i.findCellIndex(i.focusedCellId);s>=0&&s<i.cells.length-1&&e.moveCell(i.focusedCellId,s+1)}return;case\"K\":t.preventDefault();{let 
s=i.findCellIndex(i.focusedCellId);s>0&&e.moveCell(i.focusedCellId,s-1)}return;case\"Enter\":if(t.preventDefault(),t.shiftKey)i.focusedCellId&&(e.executeCell(i.focusedCellId),i.focusNext());else if(i.focusedCellId){let s=document.querySelector(`[data-cell-id=\"${i.focusedCellId}\"] .cm-content`);if(s)s.focus();else{let o=document.querySelector(`[data-cell-id=\"${i.focusedCellId}\"] .cell-markdown`);o&&o.dispatchEvent(new Event(\"dblclick\"))}}return;case\"a\":t.preventDefault();{let s=i.focusedCellId?i.findCellIndex(i.focusedCellId)+1:i.cells.length;e.insertCell(s,\"code\")}return;case\"t\":t.preventDefault();{let s=i.focusedCellId?i.findCellIndex(i.focusedCellId)+1:i.cells.length;e.insertCell(s,\"text\")}return;case\"d\":t.preventDefault(),i.focusedCellId&&e.deleteCell(i.focusedCellId);return;case\"m\":if(t.preventDefault(),i.focusedCellId){let s=i.findCell(i.focusedCellId);s&&e.setCellKind(i.focusedCellId,s.kind===\"code\"?\"text\":\"code\")}return;case\"c\":if(t.preventDefault(),i.focusedCellId){let s=i.findCell(i.focusedCellId);s&&s.kind===\"code\"&&e.clearOutputs(i.focusedCellId)}return;case\"z\":if(t.preventDefault(),i.focusedCellId){let s=i.findCell(i.focusedCellId);if(s){let o=s.attrs||{};e.setCellAttrs(i.focusedCellId,{...o,collapsed:!o.collapsed})}}return;case\"Z\":if(t.preventDefault(),i.focusedCellId){let s=i.findCell(i.focusedCellId);if(s&&s.kind===\"code\"){let o=s.attrs||{};e.setCellAttrs(i.focusedCellId,{...o,hide_source:!o.hide_source})}}return}})}var rr=new qn,Lr=new Wn(rr),vp=document.getElementById(\"notebook\"),O8=new Xo(vp,rr,Lr),I6=document.getElementById(\"toast-container\");function Jo(i,e=\"info\"){let t=document.createElement(\"div\");t.className=`toast toast-${e}`,t.textContent=i,I6.appendChild(t),t.offsetHeight,t.classList.add(\"toast-visible\"),setTimeout(()=>{t.classList.remove(\"toast-visible\"),t.addEventListener(\"transitionend\",()=>t.remove(),{once:!0}),setTimeout(()=>t.remove(),500)},2500)}rr.on(\"saved\",()=>Jo(\"Notebook 
saved\",\"success\"));rr.on(\"error\",({message:i})=>Jo(i,\"error\"));rr.on(\"reconnected\",()=>Jo(\"Reconnected\",\"success\"));rr.on(\"cell:deleted\",()=>Jo(\"Cell deleted \\u2014 Ctrl+Z to undo\",\"info\"));var R6=document.querySelector(\"#kernel-status .status-dot\"),P6=document.querySelector(\"#kernel-status .status-text\");rr.on(\"connection:changed\",({status:i})=>{R6.dataset.status=i,P6.textContent=i===\"connected\"?\"Connected\":i===\"disconnected\"?\"Reconnecting\\u2026\":\"Connecting\\u2026\"});var ii=document.getElementById(\"connection-banner\");rr.on(\"connection:changed\",({status:i})=>{i===\"disconnected\"?(ii.hidden=!1,ii.offsetHeight,ii.classList.add(\"visible\")):i===\"connected\"&&(ii.classList.remove(\"visible\"),ii.addEventListener(\"transitionend\",()=>{ii.hidden=!0},{once:!0}),setTimeout(()=>{ii.hidden=!0},500))});document.getElementById(\"btn-run-all\").addEventListener(\"click\",()=>{Lr.executeAll()});document.getElementById(\"btn-interrupt\").addEventListener(\"click\",()=>{Lr.interrupt()});document.getElementById(\"btn-clear-all\").addEventListener(\"click\",()=>{Lr.clearAllOutputs()});document.getElementById(\"btn-save\").addEventListener(\"click\",()=>{Lr.save()});var _o=document.getElementById(\"shortcuts-dialog\"),N6=document.getElementById(\"shortcuts-close\");function Zo(){_o.hidden=!_o.hidden}document.getElementById(\"btn-help\").addEventListener(\"click\",Zo);N6.addEventListener(\"click\",Zo);_o.addEventListener(\"click\",i=>{i.target===_o&&Zo()});window._quillToggleShortcuts=Zo;document.addEventListener(\"mousedown\",i=>{!i.target.closest(\".cell\")&&!i.target.closest(\".toolbar\")&&!i.target.closest(\".dialog-backdrop\")&&rr.clearFocus()});gp(rr,Lr);async function F6(){try{let i=await fetch(\"/api/notebooks\");if(!i.ok)return;let e=await i.json();if(!e||e.length===0)return;let t=document.getElementById(\"sidebar\"),r=document.getElementById(\"layout\");t.hidden=!1,r.classList.add(\"has-sidebar\");let n=location.pathname;for(let 
l of e)if(l.type===\"section\"){let a=document.createElement(\"div\");a.className=\"sidebar-part\",a.textContent=l.title,t.appendChild(a)}else if(l.type===\"separator\"){let a=document.createElement(\"div\");a.className=\"sidebar-separator\",t.appendChild(a)}else if(l.type===\"notebook\"){let a=document.createElement(\"a\");a.className=\"sidebar-chapter\",a.href=l.url,a.textContent=l.title,(n===l.url||n===l.url.replace(/\\/$/,\"\"))&&a.classList.add(\"active\"),t.appendChild(a)}if(n===\"/\"&&!t.querySelector(\".sidebar-chapter.active\")){let l=t.querySelector(\".sidebar-chapter\");l&&l.classList.add(\"active\")}let s=e.filter(l=>l.type===\"notebook\"),o=s.findIndex(l=>n===l.url||n===l.url.replace(/\\/$/,\"\"));if(o<0&&n===\"/\"&&(o=0),o>=0){Lr.chapterPath=s[o].path;let l=o>0?s[o-1]:null,a=o<s.length-1?s[o+1]:null;window._quillChapterNav={prev:l?l.url:null,next:a?a.url:null};let h=document.createElement(\"div\");if(h.className=\"chapter-nav\",l){let c=document.createElement(\"a\");c.className=\"chapter-nav-link chapter-nav-prev\",c.href=l.url,c.innerHTML=`<span class=\"chapter-nav-dir\">\\u2190 Previous</span><span class=\"chapter-nav-title\">${l.title}</span>`,h.appendChild(c)}else h.appendChild(document.createElement(\"span\"));if(a){let c=document.createElement(\"a\");c.className=\"chapter-nav-link chapter-nav-next\",c.href=a.url,c.innerHTML=`<span class=\"chapter-nav-dir\">Next \\u2192</span><span class=\"chapter-nav-title\">${a.title}</span>`,h.appendChild(c)}window._quillChapterNavEl=h,vp.appendChild(h)}}catch{}}F6().then(()=>Lr.connect());\n"
  },
  {
    "path": "packages/quill/lib/quill-server/frontend/dune",
    "content": "; Dev: rebuild dist from sources and promote to source tree\n\n(rule\n (targets\n  (dir dist))\n (enabled_if\n  (= %{profile} dev))\n (deps\n  (glob_files src/*.js)\n  (glob_files css/*.css)\n  (glob_files fonts/*.woff2)\n  esbuild.config.mjs\n  (source_tree node_modules))\n (mode\n  (promote (until-clean)))\n (action\n  (bash \"node esbuild.config.mjs --production 2>&1\")))\n"
  },
  {
    "path": "packages/quill/lib/quill-server/frontend/esbuild.config.mjs",
    "content": "import * as esbuild from 'esbuild';\n\nconst production = process.argv.includes('--production');\n\nconst config = {\n  entryPoints: ['src/app.js'],\n  bundle: true,\n  minify: production,\n  sourcemap: !production,\n  outdir: 'dist',\n  format: 'esm',\n  target: 'es2020',\n  loader: { '.css': 'css', '.woff2': 'file' },\n  external: ['/assets/*'],\n};\n\nif (process.argv.includes('--watch')) {\n  const ctx = await esbuild.context(config);\n  await ctx.watch();\n  console.log('Watching for changes...');\n} else {\n  await esbuild.build(config);\n}\n"
  },
  {
    "path": "packages/quill/lib/quill-server/frontend/index.html",
    "content": "<!DOCTYPE html>\n<html lang=\"en\">\n<head>\n  <meta charset=\"UTF-8\">\n  <meta name=\"viewport\" content=\"width=device-width, initial-scale=1.0\">\n  <title>Quill</title>\n  <link rel=\"stylesheet\" href=\"/assets/app.css\">\n</head>\n<body>\n  <header class=\"toolbar\">\n    <div class=\"toolbar-left\">\n      <span class=\"notebook-title\">Quill</span>\n    </div>\n    <div class=\"toolbar-center\">\n      <span class=\"kernel-status\" id=\"kernel-status\">\n        <span class=\"status-dot\"></span>\n        <span class=\"status-text\">Connecting</span>\n      </span>\n    </div>\n    <div class=\"toolbar-right\">\n      <button class=\"btn\" id=\"btn-run-all\" title=\"Run all cells (Ctrl+Shift+Enter)\">Run All</button>\n      <button class=\"btn\" id=\"btn-interrupt\" title=\"Interrupt execution\">Interrupt</button>\n      <button class=\"btn\" id=\"btn-clear-all\" title=\"Clear all outputs\">Clear All</button>\n      <button class=\"btn\" id=\"btn-save\" title=\"Save notebook (Ctrl+S)\">Save</button>\n      <button class=\"btn btn-help\" id=\"btn-help\" title=\"Keyboard shortcuts (?)\">?</button>\n    </div>\n  </header>\n\n  <div id=\"connection-banner\" class=\"connection-banner\" hidden>\n    <span class=\"connection-banner-dot\"></span>\n    Connection lost &mdash; reconnecting&hellip;\n  </div>\n\n  <div class=\"layout\" id=\"layout\">\n    <nav class=\"sidebar\" id=\"sidebar\" hidden></nav>\n    <main class=\"notebook\" id=\"notebook\"></main>\n    <nav class=\"page-toc\" id=\"page-toc\" hidden></nav>\n  </div>\n\n  <!-- Keyboard shortcuts dialog -->\n  <div id=\"shortcuts-dialog\" class=\"dialog-backdrop\" hidden>\n    <div class=\"dialog\">\n      <div class=\"dialog-header\">\n        <h3>Keyboard Shortcuts</h3>\n        <button class=\"dialog-close\" id=\"shortcuts-close\" title=\"Close\">&times;</button>\n      </div>\n      <div class=\"dialog-body shortcuts-grid\">\n        <div class=\"shortcut-group\">\n          
<h4>Navigation</h4>\n          <dl>\n            <div><dt><kbd>j</kbd> / <kbd>&darr;</kbd></dt><dd>Focus next cell</dd></div>\n            <div><dt><kbd>k</kbd> / <kbd>&uarr;</kbd></dt><dd>Focus previous cell</dd></div>\n            <div><dt><kbd>Enter</kbd></dt><dd>Edit focused cell</dd></div>\n            <div><dt><kbd>Escape</kbd></dt><dd>Exit editor</dd></div>\n            <div><dt><kbd>&larr;</kbd></dt><dd>Previous chapter</dd></div>\n            <div><dt><kbd>&rarr;</kbd></dt><dd>Next chapter</dd></div>\n          </dl>\n        </div>\n        <div class=\"shortcut-group\">\n          <h4>Execution</h4>\n          <dl>\n            <div><dt><kbd>Ctrl</kbd>+<kbd>Enter</kbd></dt><dd>Run cell</dd></div>\n            <div><dt><kbd>Shift</kbd>+<kbd>Enter</kbd></dt><dd>Run cell &amp; move next</dd></div>\n            <div><dt><kbd>Ctrl</kbd>+<kbd>Shift</kbd>+<kbd>Enter</kbd></dt><dd>Run all cells</dd></div>\n          </dl>\n        </div>\n        <div class=\"shortcut-group\">\n          <h4>Cells</h4>\n          <dl>\n            <div><dt><kbd>a</kbd></dt><dd>Add code cell below</dd></div>\n            <div><dt><kbd>t</kbd></dt><dd>Add text cell below</dd></div>\n            <div><dt><kbd>d</kbd></dt><dd>Delete focused cell</dd></div>\n            <div><dt><kbd>m</kbd></dt><dd>Toggle code / text</dd></div>\n            <div><dt><kbd>c</kbd></dt><dd>Clear cell outputs</dd></div>\n            <div><dt><kbd>z</kbd></dt><dd>Toggle collapsed</dd></div>\n            <div><dt><kbd>Shift</kbd>+<kbd>Z</kbd></dt><dd>Toggle hide source</dd></div>\n            <div><dt><kbd>Shift</kbd>+<kbd>J</kbd></dt><dd>Move cell down</dd></div>\n            <div><dt><kbd>Shift</kbd>+<kbd>K</kbd></dt><dd>Move cell up</dd></div>\n          </dl>\n        </div>\n        <div class=\"shortcut-group\">\n          <h4>File</h4>\n          <dl>\n            <div><dt><kbd>Ctrl</kbd>+<kbd>S</kbd></dt><dd>Save</dd></div>\n            
<div><dt><kbd>Ctrl</kbd>+<kbd>Z</kbd></dt><dd>Undo</dd></div>\n            <div><dt><kbd>Ctrl</kbd>+<kbd>Shift</kbd>+<kbd>Z</kbd></dt><dd>Redo</dd></div>\n            <div><dt><kbd>?</kbd></dt><dd>This dialog</dd></div>\n          </dl>\n        </div>\n      </div>\n    </div>\n  </div>\n\n  <div id=\"toast-container\" class=\"toast-container\"></div>\n\n  <script type=\"module\" src=\"/assets/app.js\"></script>\n</body>\n</html>\n"
  },
  {
    "path": "packages/quill/lib/quill-server/frontend/package.json",
    "content": "{\n  \"private\": true,\n  \"type\": \"module\",\n  \"scripts\": {\n    \"build\": \"node esbuild.config.mjs --production\",\n    \"dev\": \"node esbuild.config.mjs --watch\"\n  },\n  \"dependencies\": {\n    \"@codemirror/autocomplete\": \"^6.18.0\",\n    \"@codemirror/commands\": \"^6.7.0\",\n    \"@codemirror/language\": \"^6.10.0\",\n    \"@codemirror/legacy-modes\": \"^6.4.0\",\n    \"@codemirror/lint\": \"^6.9.4\",\n    \"@codemirror/search\": \"^6.5.0\",\n    \"@codemirror/state\": \"^6.4.0\",\n    \"@codemirror/view\": \"^6.34.0\",\n    \"@lezer/highlight\": \"^1.2.0\",\n    \"katex\": \"^0.16.0\"\n  },\n  \"devDependencies\": {\n    \"esbuild\": \"^0.24.0\"\n  }\n}\n"
  },
  {
    "path": "packages/quill/lib/quill-server/frontend/src/app.js",
    "content": "// Entry point: initialize WebSocket, store, and mount notebook.\n\nimport { Store } from './store.js';\nimport { WsClient } from './ws.js';\nimport { NotebookRenderer } from './notebook.js';\nimport { initShortcuts } from './shortcuts.js';\nimport '../css/notebook.css';\n\nconst store = new Store();\nconst wsClient = new WsClient(store);\n\n// Mount notebook renderer\nconst container = document.getElementById('notebook');\nconst renderer = new NotebookRenderer(container, store, wsClient);\n\n// --- Toast notification system ---\n\nconst toastContainer = document.getElementById('toast-container');\n\nfunction showToast(message, kind = 'info') {\n  const el = document.createElement('div');\n  el.className = `toast toast-${kind}`;\n  el.textContent = message;\n  toastContainer.appendChild(el);\n  // Trigger reflow so the animation starts from the initial state\n  el.offsetHeight;\n  el.classList.add('toast-visible');\n  setTimeout(() => {\n    el.classList.remove('toast-visible');\n    el.addEventListener('transitionend', () => el.remove(), { once: true });\n    // Fallback removal if transitionend doesn't fire\n    setTimeout(() => el.remove(), 500);\n  }, 2500);\n}\n\n// --- Event wiring ---\n\n// Save feedback\nstore.on('saved', () => showToast('Notebook saved', 'success'));\n\n// Error surfacing\nstore.on('error', ({ message }) => showToast(message, 'error'));\n\n// Reconnection feedback\nstore.on('reconnected', () => showToast('Reconnected', 'success'));\n\n// Delete hint\nstore.on('cell:deleted', () => showToast('Cell deleted \\u2014 Ctrl+Z to undo', 'info'));\n\n// Connection status indicator\nconst statusDot = document.querySelector('#kernel-status .status-dot');\nconst statusText = document.querySelector('#kernel-status .status-text');\n\nstore.on('connection:changed', ({ status }) => {\n  statusDot.dataset.status = status;\n  statusText.textContent = status === 'connected' ? 'Connected'\n    : status === 'disconnected' ? 
'Reconnecting\\u2026'\n    : 'Connecting\\u2026';\n});\n\n// Connection banner\nconst connectionBanner = document.getElementById('connection-banner');\nstore.on('connection:changed', ({ status }) => {\n  if (status === 'disconnected') {\n    connectionBanner.hidden = false;\n    // Trigger reflow for animation\n    connectionBanner.offsetHeight;\n    connectionBanner.classList.add('visible');\n  } else if (status === 'connected') {\n    connectionBanner.classList.remove('visible');\n    connectionBanner.addEventListener('transitionend', () => {\n      connectionBanner.hidden = true;\n    }, { once: true });\n    // Fallback\n    setTimeout(() => { connectionBanner.hidden = true; }, 500);\n  }\n});\n\n// --- Toolbar buttons ---\n\ndocument.getElementById('btn-run-all').addEventListener('click', () => {\n  wsClient.executeAll();\n});\ndocument.getElementById('btn-interrupt').addEventListener('click', () => {\n  wsClient.interrupt();\n});\ndocument.getElementById('btn-clear-all').addEventListener('click', () => {\n  wsClient.clearAllOutputs();\n});\ndocument.getElementById('btn-save').addEventListener('click', () => {\n  wsClient.save();\n});\n\n// --- Shortcuts dialog ---\n\nconst shortcutsDialog = document.getElementById('shortcuts-dialog');\nconst shortcutsClose = document.getElementById('shortcuts-close');\n\nfunction toggleShortcutsDialog() {\n  shortcutsDialog.hidden = !shortcutsDialog.hidden;\n}\n\ndocument.getElementById('btn-help').addEventListener('click', toggleShortcutsDialog);\nshortcutsClose.addEventListener('click', toggleShortcutsDialog);\nshortcutsDialog.addEventListener('click', (e) => {\n  // Close on backdrop click\n  if (e.target === shortcutsDialog) toggleShortcutsDialog();\n});\n\n// Export for shortcuts.js\nwindow._quillToggleShortcuts = toggleShortcutsDialog;\n\n// Click outside cells to unfocus\ndocument.addEventListener('mousedown', (e) => {\n  if (!e.target.closest('.cell') && !e.target.closest('.toolbar') && 
!e.target.closest('.dialog-backdrop')) {\n    store.clearFocus();\n  }\n});\n\n// Init keyboard shortcuts\ninitShortcuts(store, wsClient);\n\n// --- Sidebar (directory mode) ---\n\nasync function initSidebar() {\n  try {\n    const res = await fetch('/api/notebooks');\n    if (!res.ok) return; // Single-file mode, no notebooks\n    const chapters = await res.json();\n    if (!chapters || chapters.length === 0) return;\n\n    const sidebar = document.getElementById('sidebar');\n    const layout = document.getElementById('layout');\n    sidebar.hidden = false;\n    layout.classList.add('has-sidebar');\n\n    const currentPath = location.pathname;\n\n    for (const entry of chapters) {\n      if (entry.type === 'section') {\n        const part = document.createElement('div');\n        part.className = 'sidebar-part';\n        part.textContent = entry.title;\n        sidebar.appendChild(part);\n      } else if (entry.type === 'separator') {\n        const sep = document.createElement('div');\n        sep.className = 'sidebar-separator';\n        sidebar.appendChild(sep);\n      } else if (entry.type === 'notebook') {\n        const link = document.createElement('a');\n        link.className = 'sidebar-chapter';\n        link.href = entry.url;\n        link.textContent = entry.title;\n        // Highlight active notebook\n        if (currentPath === entry.url || currentPath === entry.url.replace(/\\/$/, '')) {\n          link.classList.add('active');\n        }\n        sidebar.appendChild(link);\n      }\n    }\n\n    // When visiting '/', highlight the first notebook\n    if (currentPath === '/' && !sidebar.querySelector('.sidebar-chapter.active')) {\n      const first = sidebar.querySelector('.sidebar-chapter');\n      if (first) first.classList.add('active');\n    }\n\n    // Find active notebook and compute prev/next\n    const chapterEntries = chapters.filter(ch => ch.type === 'notebook');\n    let activeIdx = chapterEntries.findIndex(\n      ch => currentPath === 
ch.url || currentPath === ch.url.replace(/\\/$/, '')\n    );\n    if (activeIdx < 0 && currentPath === '/') activeIdx = 0;\n\n    if (activeIdx >= 0) {\n      wsClient.chapterPath = chapterEntries[activeIdx].path;\n\n      const prev = activeIdx > 0 ? chapterEntries[activeIdx - 1] : null;\n      const next = activeIdx < chapterEntries.length - 1 ? chapterEntries[activeIdx + 1] : null;\n\n      // Expose for keyboard shortcuts\n      window._quillChapterNav = {\n        prev: prev ? prev.url : null,\n        next: next ? next.url : null,\n      };\n\n      // Render prev/next navigation at the end of the notebook container\n      const nav = document.createElement('div');\n      nav.className = 'chapter-nav';\n\n      if (prev) {\n        const prevLink = document.createElement('a');\n        prevLink.className = 'chapter-nav-link chapter-nav-prev';\n        prevLink.href = prev.url;\n        prevLink.innerHTML = `<span class=\"chapter-nav-dir\">\\u2190 Previous</span><span class=\"chapter-nav-title\">${prev.title}</span>`;\n        nav.appendChild(prevLink);\n      } else {\n        nav.appendChild(document.createElement('span'));\n      }\n\n      if (next) {\n        const nextLink = document.createElement('a');\n        nextLink.className = 'chapter-nav-link chapter-nav-next';\n        nextLink.href = next.url;\n        nextLink.innerHTML = `<span class=\"chapter-nav-dir\">Next \\u2192</span><span class=\"chapter-nav-title\">${next.title}</span>`;\n        nav.appendChild(nextLink);\n      }\n\n      // Store the nav element globally so the notebook renderer can\n      // re-append it after renderAll() clears the container.\n      window._quillChapterNavEl = nav;\n      container.appendChild(nav);\n    }\n  } catch {\n    // No sidebar in single-file mode\n  }\n}\n\n// Initialize sidebar (if directory mode), then connect WebSocket\ninitSidebar().then(() => wsClient.connect());\n"
  },
  {
    "path": "packages/quill/lib/quill-server/frontend/src/cell.js",
    "content": "// Cell renderer for code and text cells.\n\nimport { createEditor } from './editor.js';\nimport { renderMath } from './math.js';\nimport { renderOutput } from './output.js';\n\nconst editors = new Map(); // cellId -> EditorView\n\nexport function getEditor(cellId) {\n  return editors.get(cellId);\n}\n\nfunction getAttrs(cell) {\n  return cell.attrs || {};\n}\n\nfunction firstLine(source) {\n  if (!source) return '';\n  const nl = source.indexOf('\\n');\n  const line = nl === -1 ? source : source.slice(0, nl);\n  return line.length > 60 ? line.slice(0, 57) + '...' : line;\n}\n\nexport function createCellElement(cell, wsClient, store) {\n  const el = document.createElement('div');\n  el.className = `cell cell-${cell.kind}`;\n  el.dataset.cellId = cell.id;\n  el.dataset.status = cell.status || 'idle';\n\n  if (cell.kind === 'code') {\n    el.appendChild(createCodeCell(cell, wsClient, store));\n  } else {\n    el.appendChild(createTextCell(cell, wsClient, store));\n  }\n\n  // Focus on click — any click on the cell sets notebook-level focus\n  el.addEventListener('click', (e) => {\n    if (!e.target.closest('button')) {\n      store.setFocus(cell.id);\n    }\n  });\n\n  return el;\n}\n\nfunction createCodeCell(cell, wsClient, store) {\n  const wrapper = document.createElement('div');\n  wrapper.className = 'cell-wrapper';\n  const attrs = getAttrs(cell);\n\n  // Gutter\n  const gutter = document.createElement('div');\n  gutter.className = 'cell-gutter';\n  const numSpan = document.createElement('span');\n  numSpan.className = 'cell-number';\n  numSpan.textContent = cell.execution_count > 0 ? 
`[${cell.execution_count}]` : '[ ]';\n  gutter.appendChild(numSpan);\n  const statusIcon = document.createElement('span');\n  statusIcon.className = 'cell-status-icon';\n  gutter.appendChild(statusIcon);\n  wrapper.appendChild(gutter);\n\n  // Content\n  const content = document.createElement('div');\n  content.className = 'cell-content';\n\n  // Collapsed bar (for collapsed cells)\n  const collapsedBar = document.createElement('div');\n  collapsedBar.className = 'cell-collapsed-bar';\n  const collapseToggle = document.createElement('span');\n  collapseToggle.className = 'cell-collapsed-toggle';\n  collapseToggle.textContent = '\\u25B8'; // ▸\n  collapsedBar.appendChild(collapseToggle);\n  const collapsedSource = document.createElement('span');\n  collapsedSource.className = 'cell-collapsed-source';\n  collapsedSource.textContent = firstLine(cell.source) || '(empty)';\n  collapsedBar.appendChild(collapsedSource);\n  content.appendChild(collapsedBar);\n\n  // Source placeholder (for hide-source cells)\n  const sourcePlaceholder = document.createElement('div');\n  sourcePlaceholder.className = 'cell-source-placeholder';\n  const sourceToggle = document.createElement('span');\n  sourceToggle.className = 'cell-collapsed-toggle';\n  sourceToggle.textContent = '\\u25B8'; // ▸\n  sourcePlaceholder.appendChild(sourceToggle);\n  const sourceLabel = document.createElement('span');\n  sourceLabel.textContent = 'code';\n  sourcePlaceholder.appendChild(sourceLabel);\n  content.appendChild(sourcePlaceholder);\n\n  // Editor container\n  const editorContainer = document.createElement('div');\n  editorContainer.className = 'cell-editor';\n  content.appendChild(editorContainer);\n\n  // Mount CodeMirror\n  const view = createEditor(editorContainer, cell.source, {\n    onChange: (source) => {\n      cell.source = source;\n      wsClient.updateSource(cell.id, source);\n    },\n    onExecute: () => {\n      wsClient.executeCell(cell.id);\n    },\n    onExecuteAndMoveNext: () => {\n    
  wsClient.executeCell(cell.id);\n      store.focusNext();\n    },\n    onEscape: () => {\n      editorContainer.querySelector('.cm-content')?.blur();\n    },\n    wsClient,\n  });\n  editors.set(cell.id, view);\n\n  // Outputs\n  const outputsContainer = document.createElement('div');\n  outputsContainer.className = 'cell-outputs';\n  if (cell.outputs) {\n    for (const output of cell.outputs) {\n      const el = renderOutput(output);\n      if (el) outputsContainer.appendChild(el);\n    }\n  }\n  content.appendChild(outputsContainer);\n\n  // Collapse/expand state management\n  let isCollapsed = !!attrs.collapsed;\n  let isSourceHidden = !!attrs.hide_source;\n\n  function applyVisualState() {\n    if (isCollapsed) {\n      collapsedBar.style.display = '';\n      sourcePlaceholder.style.display = 'none';\n      editorContainer.style.display = 'none';\n      outputsContainer.style.display = 'none';\n      collapseToggle.textContent = '\\u25B8'; // ▸\n    } else if (isSourceHidden) {\n      collapsedBar.style.display = 'none';\n      sourcePlaceholder.style.display = '';\n      editorContainer.style.display = 'none';\n      outputsContainer.style.display = '';\n      sourceToggle.textContent = '\\u25B8'; // ▸\n    } else {\n      collapsedBar.style.display = 'none';\n      sourcePlaceholder.style.display = 'none';\n      editorContainer.style.display = '';\n      outputsContainer.style.display = '';\n    }\n  }\n\n  collapsedBar.addEventListener('click', () => {\n    isCollapsed = false;\n    applyVisualState();\n  });\n\n  sourcePlaceholder.addEventListener('click', () => {\n    isSourceHidden = false;\n    applyVisualState();\n  });\n\n  applyVisualState();\n\n  // Actions\n  const actions = document.createElement('div');\n  actions.className = 'cell-actions';\n  actions.innerHTML = `\n    <button data-action=\"run\" title=\"Evaluate cell (Ctrl+Enter)\">Run</button>\n    <button data-action=\"delete\" title=\"Delete cell (d)\">Delete</button>\n    <button 
data-action=\"move-up\" title=\"Move cell up (Shift+K)\">\\u2191</button>\n    <button data-action=\"move-down\" title=\"Move cell down (Shift+J)\">\\u2193</button>\n    <button data-action=\"toggle-type\" title=\"Convert to text cell (m)\">\\u21C4 Text</button>\n    <button data-action=\"toggle-collapse\" title=\"Toggle collapsed (z)\">${attrs.collapsed ? 'Expand' : 'Collapse'}</button>\n    <button data-action=\"toggle-hide-source\" title=\"Toggle hide source (Z)\">${attrs.hide_source ? 'Show src' : 'Hide src'}</button>\n  `;\n  actions.addEventListener('click', (e) => {\n    const action = e.target.dataset.action;\n    if (!action) return;\n    const idx = store.findCellIndex(cell.id);\n    switch (action) {\n      case 'run': wsClient.executeCell(cell.id); break;\n      case 'delete': wsClient.deleteCell(cell.id); break;\n      case 'move-up': if (idx > 0) wsClient.moveCell(cell.id, idx - 1); break;\n      case 'move-down': if (idx < store.cells.length - 1) wsClient.moveCell(cell.id, idx + 1); break;\n      case 'toggle-type': wsClient.setCellKind(cell.id, 'text'); break;\n      case 'toggle-collapse': {\n        const newAttrs = { ...getAttrs(cell), collapsed: !attrs.collapsed };\n        wsClient.setCellAttrs(cell.id, newAttrs);\n        break;\n      }\n      case 'toggle-hide-source': {\n        const newAttrs = { ...getAttrs(cell), hide_source: !attrs.hide_source };\n        wsClient.setCellAttrs(cell.id, newAttrs);\n        break;\n      }\n    }\n  });\n  content.appendChild(actions);\n\n  wrapper.appendChild(content);\n  return wrapper;\n}\n\nfunction createTextCell(cell, wsClient, store) {\n  const wrapper = document.createElement('div');\n  wrapper.className = 'cell-wrapper';\n  const attrs = getAttrs(cell);\n\n  const content = document.createElement('div');\n  content.className = 'cell-content';\n\n  // Collapsed bar (for collapsed text cells)\n  const collapsedBar = document.createElement('div');\n  collapsedBar.className = 'cell-collapsed-bar';\n  
const collapseToggle = document.createElement('span');\n  collapseToggle.className = 'cell-collapsed-toggle';\n  collapseToggle.textContent = '\\u25B8'; // ▸\n  collapsedBar.appendChild(collapseToggle);\n  const collapsedSource = document.createElement('span');\n  collapsedSource.className = 'cell-collapsed-source';\n  collapsedSource.textContent = firstLine(cell.source) || '(empty)';\n  collapsedBar.appendChild(collapsedSource);\n  content.appendChild(collapsedBar);\n\n  // Rendered markdown view\n  const markdownView = document.createElement('div');\n  markdownView.className = 'cell-markdown';\n  markdownView.innerHTML = cell.rendered_html || '<p class=\"cell-markdown-empty\">Empty text cell \\u2014 click to edit</p>';\n  renderMath(markdownView);\n  content.appendChild(markdownView);\n\n  // Editor container (hidden by default)\n  const editorContainer = document.createElement('div');\n  editorContainer.className = 'cell-editor';\n  editorContainer.style.display = 'none';\n  content.appendChild(editorContainer);\n\n  let editorView = null;\n  let isCollapsed = !!attrs.collapsed;\n\n  function applyVisualState() {\n    if (isCollapsed) {\n      collapsedBar.style.display = '';\n      markdownView.style.display = 'none';\n      editorContainer.style.display = 'none';\n      collapseToggle.textContent = '\\u25B8'; // ▸\n    } else {\n      collapsedBar.style.display = 'none';\n      markdownView.style.display = '';\n    }\n  }\n\n  collapsedBar.addEventListener('click', () => {\n    isCollapsed = false;\n    applyVisualState();\n  });\n\n  applyVisualState();\n\n  // Double-click to edit\n  markdownView.addEventListener('dblclick', () => {\n    enterEditMode();\n  });\n\n  function enterEditMode() {\n    markdownView.style.display = 'none';\n    editorContainer.style.display = 'block';\n    if (!editorView) {\n      editorView = createEditor(editorContainer, cell.source, {\n        onChange: (source) => {\n          cell.source = source;\n          
wsClient.updateSource(cell.id, source);\n        },\n        onExecute: () => exitEditMode(),\n        onExecuteAndMoveNext: () => { exitEditMode(); store.focusNext(); },\n        onEscape: () => exitEditMode(),\n        wsClient: null, // No autocomplete for markdown\n      });\n      editors.set(cell.id, editorView);\n    }\n    editorView.focus();\n  }\n\n  function exitEditMode() {\n    editorContainer.style.display = 'none';\n    markdownView.style.display = 'block';\n    // Source was already sent via debounced updateSource\n    // The server will send back cell_updated with fresh rendered_html\n    wsClient.checkpoint();\n  }\n\n  // Actions\n  const actions = document.createElement('div');\n  actions.className = 'cell-actions';\n  actions.innerHTML = `\n    <button data-action=\"edit\" title=\"Edit text (Enter)\">Edit</button>\n    <button data-action=\"delete\" title=\"Delete cell (d)\">Delete</button>\n    <button data-action=\"move-up\" title=\"Move cell up (Shift+K)\">\\u2191</button>\n    <button data-action=\"move-down\" title=\"Move cell down (Shift+J)\">\\u2193</button>\n    <button data-action=\"toggle-type\" title=\"Convert to code cell (m)\">\\u21C4 Code</button>\n    <button data-action=\"toggle-collapse\" title=\"Toggle collapsed (z)\">${attrs.collapsed ? 
'Expand' : 'Collapse'}</button>\n  `;\n  actions.addEventListener('click', (e) => {\n    const action = e.target.dataset.action;\n    if (!action) return;\n    const idx = store.findCellIndex(cell.id);\n    switch (action) {\n      case 'edit': enterEditMode(); break;\n      case 'delete': wsClient.deleteCell(cell.id); break;\n      case 'move-up': if (idx > 0) wsClient.moveCell(cell.id, idx - 1); break;\n      case 'move-down': if (idx < store.cells.length - 1) wsClient.moveCell(cell.id, idx + 1); break;\n      case 'toggle-type': wsClient.setCellKind(cell.id, 'code'); break;\n      case 'toggle-collapse': {\n        const newAttrs = { ...getAttrs(cell), collapsed: !attrs.collapsed };\n        wsClient.setCellAttrs(cell.id, newAttrs);\n        break;\n      }\n    }\n  });\n  content.appendChild(actions);\n\n  wrapper.appendChild(content);\n  return wrapper;\n}\n\nexport function destroyCell(cellId) {\n  const view = editors.get(cellId);\n  if (view) {\n    view.destroy();\n    editors.delete(cellId);\n  }\n}\n\nexport function updateCellStatus(cellId, status) {\n  const el = document.querySelector(`[data-cell-id=\"${cellId}\"]`);\n  if (el) {\n    el.dataset.status = status;\n    // Clear previous result indicator when starting new execution\n    if (status === 'running' || status === 'queued') {\n      const icon = el.querySelector('.cell-status-icon');\n      if (icon) delete icon.dataset.result;\n    }\n  }\n}\n\nexport function clearCellOutputs(cellId) {\n  const el = document.querySelector(`[data-cell-id=\"${cellId}\"] .cell-outputs`);\n  if (el) el.innerHTML = '';\n}\n"
  },
  {
    "path": "packages/quill/lib/quill-server/frontend/src/editor.js",
    "content": "// CodeMirror 6 editor setup for OCaml code cells.\n\nimport { EditorView, keymap, lineNumbers, highlightActiveLine, highlightActiveLineGutter, drawSelection, dropCursor, highlightSpecialChars, hoverTooltip } from '@codemirror/view';\nimport { EditorState } from '@codemirror/state';\nimport { StreamLanguage, bracketMatching, indentOnInput, HighlightStyle, syntaxHighlighting } from '@codemirror/language';\nimport { oCaml } from '@codemirror/legacy-modes/mode/mllike';\nimport { closeBrackets, autocompletion } from '@codemirror/autocomplete';\nimport { history, defaultKeymap, historyKeymap } from '@codemirror/commands';\nimport { highlightSelectionMatches } from '@codemirror/search';\nimport { linter } from '@codemirror/lint';\nimport { tags } from '@lezer/highlight';\n\n// --- Theme ---\n\nconst quillTheme = EditorView.theme({\n  '&': {\n    fontSize: '14px',\n    backgroundColor: 'transparent',\n  },\n  '.cm-content': {\n    fontFamily: \"'JetBrains Mono', 'SF Mono', 'Fira Code', 'Consolas', monospace\",\n    caretColor: '#daa550',\n    padding: '8px 0',\n  },\n  '.cm-cursor, .cm-dropCursor': {\n    borderLeftColor: '#daa550',\n  },\n  '&.cm-focused .cm-selectionBackground, .cm-selectionBackground': {\n    backgroundColor: '#3a3a50',\n  },\n  '.cm-gutters': {\n    backgroundColor: 'transparent',\n    color: '#646870',\n    border: 'none',\n    paddingLeft: '4px',\n  },\n  '.cm-activeLineGutter': {\n    backgroundColor: 'transparent',\n    color: '#aab0ba',\n  },\n  '.cm-activeLine': {\n    backgroundColor: 'transparent',\n  },\n  '.cm-line': {\n    padding: '0 8px',\n  },\n  '.cm-tooltip': {\n    backgroundColor: '#24242e',\n    border: '1px solid #32323a',\n    color: '#c8ccd4',\n  },\n  '.cm-tooltip-autocomplete': {\n    '& > ul > li[aria-selected]': {\n      backgroundColor: '#3a3a50',\n    },\n  },\n  '.cm-tooltip-hover': {\n    padding: '4px 8px',\n    maxWidth: 
'500px',\n  },\n  '.cm-type-tooltip code': {\n    fontFamily: \"'JetBrains Mono', 'SF Mono', 'Fira Code', monospace\",\n    fontSize: '13px',\n    color: '#ffcb6b',\n  },\n  '.cm-type-tooltip .cm-type-doc': {\n    marginTop: '4px',\n    paddingTop: '4px',\n    borderTop: '1px solid #32323a',\n    fontSize: '12px',\n    color: '#9da5b4',\n    whiteSpace: 'pre-wrap',\n  },\n  '.cm-diagnostic-error': {\n    borderBottom: '2px solid #ff5370',\n  },\n  '.cm-diagnostic-warning': {\n    borderBottom: '2px solid #ffcb6b',\n  },\n}, { dark: true });\n\nconst quillHighlightStyle = HighlightStyle.define([\n  { tag: tags.keyword, color: '#c792ea' },\n  { tag: tags.operator, color: '#89ddff' },\n  { tag: tags.string, color: '#c3e88d' },\n  { tag: tags.number, color: '#f78c6c' },\n  { tag: tags.bool, color: '#f78c6c' },\n  { tag: tags.comment, color: '#646870', fontStyle: 'italic' },\n  { tag: tags.typeName, color: '#ffcb6b' },\n  { tag: tags.definition(tags.variableName), color: '#82aaff' },\n  { tag: tags.variableName, color: '#c8ccd4' },\n  { tag: tags.function(tags.variableName), color: '#82aaff' },\n  { tag: tags.propertyName, color: '#c8ccd4' },\n  { tag: tags.meta, color: '#daa550' },\n  { tag: tags.punctuation, color: '#89ddff' },\n]);\n\n// --- Completion source ---\n\nfunction mapCompletionKind(kind) {\n  switch (kind) {\n    case 'value': return 'variable';\n    case 'type': return 'type';\n    case 'module': return 'namespace';\n    case 'module_type': return 'interface';\n    case 'constructor': return 'enum';\n    case 'label': return 'property';\n    default: return 'variable';\n  }\n}\n\nfunction makeCompletionSource(wsClient) {\n  return async (context) => {\n    const trigger = context.matchBefore(/[\\w.]+$/);\n    if (!trigger && !context.explicit) return null;\n\n    const code = context.state.doc.toString();\n    const pos = context.pos;\n\n    try {\n      const items = await wsClient.complete(code, pos);\n      if (!items || items.length === 0) return 
null;\n      return {\n        from: trigger ? trigger.from : context.pos,\n        options: items.map(item => ({\n          label: item.label,\n          type: mapCompletionKind(item.kind),\n          detail: item.detail,\n        })),\n      };\n    } catch {\n      return null;\n    }\n  };\n}\n\n// --- Hover tooltip source ---\n\nfunction makeHoverSource(wsClient) {\n  return hoverTooltip(async (view, pos) => {\n    const code = view.state.doc.toString();\n    try {\n      const result = await wsClient.typeAt(code, pos);\n      if (!result || !result.info) return null;\n      return {\n        pos: result.info.from,\n        end: result.info.to,\n        above: true,\n        create() {\n          const dom = document.createElement('div');\n          dom.className = 'cm-type-tooltip';\n          const typeLine = document.createElement('code');\n          typeLine.textContent = result.info.type;\n          dom.appendChild(typeLine);\n          if (result.info.doc) {\n            const docLine = document.createElement('div');\n            docLine.className = 'cm-type-doc';\n            docLine.textContent = result.info.doc;\n            dom.appendChild(docLine);\n          }\n          return { dom };\n        },\n      };\n    } catch {\n      return null;\n    }\n  }, { hoverTime: 300 });\n}\n\n// --- Lint source ---\n\nfunction makeLintSource(wsClient) {\n  return linter(async (view) => {\n    const code = view.state.doc.toString();\n    if (!code.trim()) return [];\n    try {\n      const result = await wsClient.diagnostics(code);\n      if (!result || !result.items) return [];\n      return result.items.map(d => ({\n        from: d.from,\n        to: Math.min(d.to, code.length),\n        severity: d.severity,\n        message: d.message,\n      }));\n    } catch {\n      return [];\n    }\n  }, { delay: 500 });\n}\n\n// --- Editor creation ---\n\nexport function createEditor(container, source, options) {\n  const { onChange, onExecute, onExecuteAndMoveNext, 
onEscape, wsClient } = options;\n\n  const extensions = [\n    StreamLanguage.define(oCaml).extension,\n    lineNumbers(),\n    highlightActiveLine(),\n    highlightActiveLineGutter(),\n    highlightSpecialChars(),\n    bracketMatching(),\n    closeBrackets(),\n    indentOnInput(),\n    history(),\n    drawSelection(),\n    dropCursor(),\n    highlightSelectionMatches(),\n    quillTheme,\n    syntaxHighlighting(quillHighlightStyle),\n    EditorState.tabSize.of(2),\n    EditorView.updateListener.of(update => {\n      if (update.docChanged) {\n        onChange(update.state.doc.toString());\n      }\n    }),\n    keymap.of([\n      { key: 'Ctrl-Enter', run: () => { onExecute(); return true; } },\n      { key: 'Cmd-Enter', run: () => { onExecute(); return true; } },\n      { key: 'Shift-Enter', run: () => { onExecuteAndMoveNext(); return true; } },\n      { key: 'Escape', run: () => { onEscape(); return true; } },\n      ...defaultKeymap,\n      ...historyKeymap,\n    ]),\n  ];\n\n  if (wsClient) {\n    extensions.push(\n      autocompletion({\n        override: [makeCompletionSource(wsClient)],\n        activateOnTyping: true,\n      }),\n      makeHoverSource(wsClient),\n      makeLintSource(wsClient),\n    );\n  }\n\n  const view = new EditorView({\n    parent: container,\n    doc: source,\n    extensions,\n  });\n\n  return view;\n}\n"
  },
  {
    "path": "packages/quill/lib/quill-server/frontend/src/math.js",
    "content": "// Math equation rendering for text cells.\n\nimport renderMathInElement from 'katex/contrib/auto-render';\n\n/**\n * Render all math delimiters in the given element.\n * cmarkit outputs \\(...\\) for inline and \\[...\\] for display math,\n * which are the default delimiters for renderMathInElement.\n */\nexport function renderMath(element) {\n  renderMathInElement(element, {\n    output: 'mathml',\n    throwOnError: false,\n  });\n}\n"
  },
  {
    "path": "packages/quill/lib/quill-server/frontend/src/notebook.js",
    "content": "// Notebook renderer: manages cells, sections, and dividers.\n\nimport { createCellElement, destroyCell, updateCellStatus, clearCellOutputs } from './cell.js';\nimport { appendOutputToContainer } from './output.js';\n\nexport class NotebookRenderer {\n  constructor(container, store, wsClient) {\n    this.container = container;\n    this.store = store;\n    this.wsClient = wsClient;\n    this._showSkeleton();\n    this._bindEvents();\n  }\n\n  _showSkeleton() {\n    this.container.innerHTML = '';\n    const skeleton = document.createElement('div');\n    skeleton.className = 'skeleton';\n    for (let i = 0; i < 4; i++) {\n      const block = document.createElement('div');\n      block.className = 'skeleton-cell';\n      block.style.animationDelay = `${i * 0.1}s`;\n      // Vary heights to suggest different cell types\n      if (i === 0) block.classList.add('skeleton-cell-short');\n      skeleton.appendChild(block);\n    }\n    this.container.appendChild(skeleton);\n  }\n\n  _bindEvents() {\n    this.store.on('notebook:loaded', () => this.renderAll());\n    this.store.on('cell:status', ({ cellId, status }) => {\n      updateCellStatus(cellId, status);\n      if (status === 'running') clearCellOutputs(cellId);\n    });\n    this.store.on('cell:output', ({ cellId, output }) => {\n      const container = document.querySelector(`[data-cell-id=\"${cellId}\"] .cell-outputs`);\n      if (container) appendOutputToContainer(container, output);\n    });\n    this.store.on('cell:outputs-cleared', ({ cellId }) => {\n      clearCellOutputs(cellId);\n    });\n    this.store.on('cell:updated', ({ cellId, cell }) => {\n      this._replaceCell(cellId, cell);\n    });\n    this.store.on('cell:inserted', ({ pos, cell }) => {\n      this._insertCellAt(pos, cell);\n    });\n    this.store.on('cell:deleted', ({ cellId }) => {\n      this._removeCell(cellId);\n    });\n    this.store.on('cell:moved', () => {\n      // Re-render all on move for simplicity\n      
this.renderAll();\n    });\n    this.store.on('cell:execution-done', ({ cellId, success }) => {\n      const el = document.querySelector(`[data-cell-id=\"${cellId}\"]`);\n      if (el) {\n        el.dataset.status = 'idle';\n        // Flash animation for completion feedback\n        const cls = success ? 'flash-success' : 'flash-error';\n        el.classList.add(cls);\n        el.addEventListener('animationend', () => el.classList.remove(cls), { once: true });\n        // Update gutter indicator\n        const icon = el.querySelector('.cell-status-icon');\n        if (icon) icon.dataset.result = success ? 'success' : 'error';\n        // Update execution count\n        const numSpan = el.querySelector('.cell-number');\n        const cell = this.store.findCell(cellId);\n        if (numSpan && cell && cell.execution_count > 0) {\n          numSpan.textContent = `[${cell.execution_count}]`;\n        }\n      }\n    });\n    this.store.on('focus:changed', ({ cellId, prevCellId }) => {\n      if (prevCellId) {\n        const prev = document.querySelector(`[data-cell-id=\"${prevCellId}\"]`);\n        if (prev) prev.classList.remove('focused');\n      }\n      if (cellId) {\n        const curr = document.querySelector(`[data-cell-id=\"${cellId}\"]`);\n        if (curr) {\n          curr.classList.add('focused');\n          curr.scrollIntoView({ block: 'nearest', behavior: 'smooth' });\n        }\n      }\n    });\n  }\n\n  renderAll() {\n    // Destroy existing editors\n    this.container.querySelectorAll('[data-cell-id]').forEach(el => {\n      destroyCell(el.dataset.cellId);\n    });\n    this.container.innerHTML = '';\n\n    // Empty state\n    if (this.store.cells.length === 0) {\n      const empty = document.createElement('div');\n      empty.className = 'empty-state';\n      empty.innerHTML = `\n        <div class=\"empty-state-icon\">&laquo; &raquo;</div>\n        <p>No cells yet.</p>\n        <p class=\"empty-state-hint\">\n          Press <kbd>a</kbd> to add a 
code cell, or <kbd>t</kbd> for text.\n        </p>\n      `;\n      this.container.appendChild(empty);\n      return;\n    }\n\n    // Group cells into sections\n    const sections = this._groupIntoSections(this.store.cells);\n\n    for (const section of sections) {\n      const sectionEl = this._createSection(section);\n      this.container.appendChild(sectionEl);\n    }\n\n    // Add final divider\n    this.container.appendChild(this._createDivider(this.store.cells.length));\n\n    // Re-append chapter navigation if present (book mode)\n    if (window._quillChapterNavEl) {\n      this.container.appendChild(window._quillChapterNavEl);\n    }\n\n    // Update on-page TOC\n    this._updatePageToc();\n\n    // Apply focus\n    if (this.store.focusedCellId) {\n      const el = document.querySelector(`[data-cell-id=\"${this.store.focusedCellId}\"]`);\n      if (el) el.classList.add('focused');\n    }\n  }\n\n  _splitRenderedHtml(html) {\n    // Split server-rendered HTML by h1/h2 headings, returning an array of\n    // HTML fragments.  Each fragment starts at a heading and runs until the\n    // next heading (or the end).  
Any content before the first heading is\n    // returned as its own leading fragment.\n    if (!html) return [];\n    const container = document.createElement('div');\n    container.innerHTML = html;\n    const parts = [];\n    let buf = [];\n    for (const node of Array.from(container.childNodes)) {\n      const tag = node.nodeName;\n      if (tag === 'H1' || tag === 'H2') {\n        if (buf.length > 0) {\n          parts.push(buf.map(n => n.outerHTML || n.textContent).join(''));\n        }\n        buf = [node];\n      } else {\n        buf.push(node);\n      }\n    }\n    if (buf.length > 0) {\n      parts.push(buf.map(n => n.outerHTML || n.textContent).join(''));\n    }\n    return parts;\n  }\n\n  _groupIntoSections(cells) {\n    const sections = [];\n    let current = { name: null, cells: [] };\n    const headingRe = /^#{1,2}\\s+(.+)/gm;\n\n    for (const cell of cells) {\n      if (cell.kind === 'text' && cell.source) {\n        // Find all headings and their positions in this cell\n        const headings = [];\n        let m;\n        while ((m = headingRe.exec(cell.source)) !== null) {\n          headings.push({ name: m[1].trim(), index: m.index });\n        }\n        headingRe.lastIndex = 0;\n\n        if (headings.length <= 1) {\n          // Zero or one heading — original behavior\n          const match = headings[0];\n          if (match && current.cells.length > 0) {\n            sections.push(current);\n            current = { name: match.name, cells: [] };\n          } else if (match && current.cells.length === 0) {\n            current.name = match.name;\n          }\n          current.cells.push(cell);\n        } else {\n          // Multiple headings — split cell into virtual sub-cells\n          // Also split 
the rendered HTML so each virtual cell gets its own fragment\n          const htmlParts = this._splitRenderedHtml(cell.rendered_html);\n          // htmlParts[0] may be content before first heading\n          let htmlIdx = 0;\n          const hasPreHeadingContent = cell.source.slice(0, headings[0].index).trim().length > 0;\n\n          for (let i = 0; i < headings.length; i++) {\n            const start = headings[i].index;\n            const end = i + 1 < headings.length ? headings[i + 1].index : cell.source.length;\n            const source = cell.source.slice(start, end).trim();\n\n            // Text before the first heading stays in the current section\n            if (i === 0 && start > 0) {\n              const before = cell.source.slice(0, start).trim();\n              if (before) {\n                const preHtml = hasPreHeadingContent && htmlParts.length > 0 ? htmlParts[htmlIdx++] : null;\n                current.cells.push({\n                  ...cell, source: before, rendered_html: preHtml,\n                  id: cell.id + '_v0', _virtual: true\n                });\n              }\n            }\n\n            if (current.cells.length > 0) {\n              sections.push(current);\n            }\n            current = { name: headings[i].name, cells: [] };\n            const partHtml = htmlIdx < htmlParts.length ? 
htmlParts[htmlIdx++] : null;\n            current.cells.push({\n              ...cell, source, rendered_html: partHtml,\n              id: cell.id + '_v' + (i + 1), _virtual: true\n            });\n          }\n        }\n      } else {\n        current.cells.push(cell);\n      }\n    }\n    if (current.cells.length > 0 || sections.length === 0) {\n      sections.push(current);\n    }\n    return sections;\n  }\n\n  _createSection(section) {\n    const sectionEl = document.createElement('section');\n    sectionEl.className = 'notebook-section';\n\n    if (section.name) {\n      const header = document.createElement('div');\n      header.className = 'section-header';\n      const toggle = document.createElement('span');\n      toggle.className = 'section-toggle';\n      toggle.textContent = '\\u25BE'; // ▾\n      header.appendChild(toggle);\n      const title = document.createElement('h2');\n      title.textContent = section.name;\n      header.appendChild(title);\n      header.addEventListener('click', () => {\n        const collapsed = sectionEl.dataset.collapsed === 'true';\n        sectionEl.dataset.collapsed = collapsed ? 'false' : 'true';\n        toggle.textContent = collapsed ? 
'\\u25BE' : '\\u25B8'; // ▾ or ▸\n      });\n      sectionEl.appendChild(header);\n    }\n\n    const body = document.createElement('div');\n    body.className = 'section-body';\n\n    for (const cell of section.cells) {\n      const idx = this.store.findCellIndex(cell.id);\n      body.appendChild(this._createDivider(idx));\n      body.appendChild(createCellElement(cell, this.wsClient, this.store));\n    }\n\n    sectionEl.appendChild(body);\n    return sectionEl;\n  }\n\n  _createDivider(pos) {\n    const div = document.createElement('div');\n    div.className = 'cell-divider';\n    div.dataset.dividerPos = pos;\n    div.innerHTML = `\n      <span class=\"divider-line\"></span>\n      <span class=\"divider-buttons\">\n        <button data-action=\"add-code\" title=\"Add code cell\">+ Code</button>\n        <button data-action=\"add-text\" title=\"Add text cell\">+ Text</button>\n      </span>\n      <span class=\"divider-line\"></span>\n    `;\n    div.addEventListener('click', (e) => {\n      const action = e.target.dataset.action;\n      if (!action) return;\n      // Compute actual position from DOM order to avoid stale closures\n      const dividers = this.container.querySelectorAll('.cell-divider');\n      let actualPos = 0;\n      for (const d of dividers) {\n        if (d === div) break;\n        actualPos++;\n      }\n      if (action === 'add-code') this.wsClient.insertCell(actualPos, 'code');\n      else if (action === 'add-text') this.wsClient.insertCell(actualPos, 'text');\n    });\n    return div;\n  }\n\n  _replaceCell(cellId, cellData) {\n    const el = document.querySelector(`[data-cell-id=\"${cellId}\"]`);\n    if (!el) return;\n    destroyCell(cellId);\n    const newEl = createCellElement(cellData, this.wsClient, this.store);\n    if (cellId === this.store.focusedCellId) newEl.classList.add('focused');\n    el.replaceWith(newEl);\n  }\n\n  _insertCellAt(pos, cell) {\n    // Re-render all for simplicity on insert\n    this.renderAll();\n    
this.store.setFocus(cell.id);\n  }\n\n  _removeCell(cellId) {\n    const el = document.querySelector(`[data-cell-id=\"${cellId}\"]`);\n    if (el) {\n      destroyCell(cellId);\n      // Also remove the preceding divider\n      const prev = el.previousElementSibling;\n      if (prev && prev.classList.contains('cell-divider')) prev.remove();\n      el.remove();\n    }\n    // Show empty state if no cells left\n    if (this.store.cells.length === 0) {\n      this.renderAll();\n    }\n  }\n\n  _updatePageToc() {\n    const tocNav = document.getElementById('page-toc');\n    if (!tocNav) return;\n\n    // Collect h2/h3 headings with ids from rendered markdown cells\n    const headings = [];\n    this.container.querySelectorAll('.cell-markdown h2[id], .cell-markdown h3[id]').forEach(el => {\n      headings.push({ level: el.tagName === 'H3' ? 3 : 2, id: el.id, text: el.textContent });\n    });\n\n    if (headings.length === 0) {\n      tocNav.hidden = true;\n      tocNav.innerHTML = '';\n      return;\n    }\n\n    // Escape heading text before interpolating into innerHTML\n    const esc = (s) => s.replace(/&/g, '&amp;').replace(/</g, '&lt;').replace(/>/g, '&gt;');\n    let html = '<div class=\"page-toc-title\">On this page</div><ul>\\n';\n    for (const h of headings) {\n      const cls = h.level === 3 ? ' class=\"toc-h3\"' : '';\n      html += `<li${cls}><a href=\"#${h.id}\">${esc(h.text)}</a></li>\\n`;\n    }\n    html += '</ul>';\n    tocNav.innerHTML = html;\n    tocNav.hidden = false;\n  }\n}\n"
  },
  {
    "path": "packages/quill/lib/quill-server/frontend/src/output.js",
"content": "// Output renderer for cell execution results.\n\nexport function renderOutput(output) {\n  switch (output.kind) {\n    case 'stdout': return renderStdout(output.text);\n    case 'stderr': return renderStderr(output.text);\n    case 'error': return renderError(output.text);\n    case 'display': return renderDisplay(output.mime, output.data);\n    default: return null;\n  }\n}\n\nfunction renderStdout(text) {\n  const pre = document.createElement('pre');\n  pre.className = 'output output-stdout';\n  pre.textContent = applyCarriageReturn('', text);\n  return pre;\n}\n\nfunction renderStderr(text) {\n  const pre = document.createElement('pre');\n  pre.className = 'output output-stderr';\n  pre.textContent = text;\n  return pre;\n}\n\nfunction renderError(text) {\n  const div = document.createElement('div');\n  div.className = 'output output-error';\n  const pre = document.createElement('pre');\n  pre.textContent = text;\n  div.appendChild(pre);\n  return div;\n}\n\nfunction renderDisplay(mime, data) {\n  const div = document.createElement('div');\n  div.className = 'output output-display';\n\n  if (mime === 'text/plain') {\n    const pre = document.createElement('pre');\n    pre.textContent = data;\n    div.appendChild(pre);\n  } else if (mime === 'text/html') {\n    const wrapper = document.createElement('div');\n    wrapper.className = 'display-html';\n    wrapper.innerHTML = data;\n    div.appendChild(wrapper);\n  } else if (mime.startsWith('image/')) {\n    const img = document.createElement('img');\n    if (mime === 'image/svg+xml') {\n      // btoa rejects non-Latin-1 characters; percent-encode the UTF-8 SVG text instead\n      img.src = 'data:image/svg+xml;charset=utf-8,' + encodeURIComponent(data);\n    } else {\n      img.src = 'data:' + mime + ';base64,' + data;\n    }\n    img.style.maxWidth = '100%';\n    div.appendChild(img);\n  } else if (mime === 'application/json') {\n    const pre = document.createElement('pre');\n    try {\n      pre.textContent = JSON.stringify(JSON.parse(data), null, 2);\n    } catch {\n      pre.textContent = data;\n    
}\n    div.appendChild(pre);\n  } else {\n    const pre = document.createElement('pre');\n    pre.textContent = data;\n    div.appendChild(pre);\n  }\n\n  return div;\n}\n\n/**\n * Apply carriage-return semantics to terminal text.\n * A bare \\r (not followed by \\n) rewinds to the start of the current line,\n * so subsequent text overwrites it — used by training progress displays.\n */\nfunction applyCarriageReturn(existing, incoming) {\n  const combined = existing + incoming;\n  const lines = combined.split('\\n');\n  for (let i = 0; i < lines.length; i++) {\n    const parts = lines[i].split('\\r');\n    // Keep only the last segment after any \\r\n    lines[i] = parts[parts.length - 1];\n  }\n  return lines.join('\\n');\n}\n\n/** Append an output to a cell's output container, coalescing consecutive stdout. */\nexport function appendOutputToContainer(container, output) {\n  // Coalesce consecutive stdout, applying \\r semantics for progress updates\n  if (output.kind === 'stdout' && container.lastElementChild &&\n      container.lastElementChild.classList.contains('output-stdout')) {\n    container.lastElementChild.textContent =\n      applyCarriageReturn(container.lastElementChild.textContent, output.text);\n    return;\n  }\n  const el = renderOutput(output);\n  if (el) container.appendChild(el);\n}\n"
  },
  {
    "path": "packages/quill/lib/quill-server/frontend/src/shortcuts.js",
    "content": "// Keyboard shortcut handler for global notebook navigation.\n\nexport function initShortcuts(store, wsClient) {\n  document.addEventListener('keydown', (e) => {\n    // Close shortcuts dialog on Escape\n    const dialog = document.getElementById('shortcuts-dialog');\n    if (e.key === 'Escape' && dialog && !dialog.hidden) {\n      e.preventDefault();\n      dialog.hidden = true;\n      return;\n    }\n\n    // Skip if inside a CodeMirror editor\n    if (e.target.closest('.cm-content')) return;\n    // Skip if inside an input/textarea\n    if (e.target.tagName === 'INPUT' || e.target.tagName === 'TEXTAREA') return;\n\n    const ctrl = e.ctrlKey || e.metaKey;\n\n    // Ctrl/Cmd shortcuts\n    if (ctrl) {\n      switch (e.key) {\n        case 's':\n          e.preventDefault();\n          wsClient.save();\n          return;\n        case 'z':\n          e.preventDefault();\n          if (e.shiftKey) wsClient.redo();\n          else wsClient.undo();\n          return;\n        case 'Enter':\n          e.preventDefault();\n          if (e.shiftKey) wsClient.executeAll();\n          else if (store.focusedCellId) wsClient.executeCell(store.focusedCellId);\n          return;\n      }\n    }\n\n    // Navigation and cell management (no modifier)\n    if (!ctrl && !e.altKey) {\n      switch (e.key) {\n        case '?':\n          e.preventDefault();\n          if (window._quillToggleShortcuts) window._quillToggleShortcuts();\n          return;\n        case 'ArrowLeft':\n          if (!store.focusedCellId && window._quillChapterNav && window._quillChapterNav.prev) {\n            e.preventDefault();\n            location.href = window._quillChapterNav.prev;\n          }\n          return;\n        case 'ArrowRight':\n          if (!store.focusedCellId && window._quillChapterNav && window._quillChapterNav.next) {\n            e.preventDefault();\n            location.href = window._quillChapterNav.next;\n          }\n          return;\n        case 'j':\n      
  case 'ArrowDown':\n          e.preventDefault();\n          store.focusNext();\n          return;\n        case 'k':\n        case 'ArrowUp':\n          e.preventDefault();\n          store.focusPrev();\n          return;\n        case 'J':\n          e.preventDefault();\n          {\n            const idx = store.findCellIndex(store.focusedCellId);\n            if (idx >= 0 && idx < store.cells.length - 1) {\n              wsClient.moveCell(store.focusedCellId, idx + 1);\n            }\n          }\n          return;\n        case 'K':\n          e.preventDefault();\n          {\n            const idx = store.findCellIndex(store.focusedCellId);\n            if (idx > 0) {\n              wsClient.moveCell(store.focusedCellId, idx - 1);\n            }\n          }\n          return;\n        case 'Enter':\n          e.preventDefault();\n          if (e.shiftKey) {\n            // Shift+Enter: execute and move next\n            if (store.focusedCellId) {\n              wsClient.executeCell(store.focusedCellId);\n              store.focusNext();\n            }\n          } else {\n            // Enter: focus the editor in the focused cell\n            if (store.focusedCellId) {\n              const el = document.querySelector(`[data-cell-id=\"${store.focusedCellId}\"] .cm-content`);\n              if (el) el.focus();\n              else {\n                // For text cells, trigger edit mode\n                const markdown = document.querySelector(`[data-cell-id=\"${store.focusedCellId}\"] .cell-markdown`);\n                if (markdown) markdown.dispatchEvent(new Event('dblclick'));\n              }\n            }\n          }\n          return;\n        case 'a':\n          e.preventDefault();\n          {\n            const idx = store.focusedCellId ? 
store.findCellIndex(store.focusedCellId) + 1 : store.cells.length;\n            wsClient.insertCell(idx, 'code');\n          }\n          return;\n        case 't':\n          e.preventDefault();\n          {\n            const idx = store.focusedCellId ? store.findCellIndex(store.focusedCellId) + 1 : store.cells.length;\n            wsClient.insertCell(idx, 'text');\n          }\n          return;\n        case 'd':\n          e.preventDefault();\n          if (store.focusedCellId) wsClient.deleteCell(store.focusedCellId);\n          return;\n        case 'm':\n          e.preventDefault();\n          if (store.focusedCellId) {\n            const cell = store.findCell(store.focusedCellId);\n            if (cell) {\n              wsClient.setCellKind(store.focusedCellId, cell.kind === 'code' ? 'text' : 'code');\n            }\n          }\n          return;\n        case 'c':\n          e.preventDefault();\n          if (store.focusedCellId) {\n            const cell = store.findCell(store.focusedCellId);\n            if (cell && cell.kind === 'code') {\n              wsClient.clearOutputs(store.focusedCellId);\n            }\n          }\n          return;\n        case 'z':\n          e.preventDefault();\n          if (store.focusedCellId) {\n            const cell = store.findCell(store.focusedCellId);\n            if (cell) {\n              const attrs = cell.attrs || {};\n              wsClient.setCellAttrs(store.focusedCellId, { ...attrs, collapsed: !attrs.collapsed });\n            }\n          }\n          return;\n        case 'Z':\n          e.preventDefault();\n          if (store.focusedCellId) {\n            const cell = store.findCell(store.focusedCellId);\n            if (cell && cell.kind === 'code') {\n              const attrs = cell.attrs || {};\n              wsClient.setCellAttrs(store.focusedCellId, { ...attrs, hide_source: !attrs.hide_source });\n            }\n          }\n          return;\n      }\n    }\n  });\n}\n"
  },
  {
    "path": "packages/quill/lib/quill-server/frontend/src/store.js",
    "content": "// State container with event emitter for the notebook.\n\nexport class Store {\n  constructor() {\n    this.cells = [];\n    this.focusedCellId = null;\n    this.kernelStatus = 'connecting';\n    this.canUndo = false;\n    this.canRedo = false;\n    this.loaded = false;\n    this._listeners = new Map();\n  }\n\n  on(event, fn) {\n    if (!this._listeners.has(event)) this._listeners.set(event, []);\n    this._listeners.get(event).push(fn);\n  }\n\n  off(event, fn) {\n    const list = this._listeners.get(event);\n    if (list) {\n      const idx = list.indexOf(fn);\n      if (idx !== -1) list.splice(idx, 1);\n    }\n  }\n\n  emit(event, data) {\n    const list = this._listeners.get(event);\n    if (list) list.forEach(fn => fn(data));\n  }\n\n  // --- Mutations ---\n\n  loadNotebook(data) {\n    this.cells = data.cells;\n    this.canUndo = data.can_undo;\n    this.canRedo = data.can_redo;\n    this.loaded = true;\n    if (!this.focusedCellId || !this.cells.find(c => c.id === this.focusedCellId)) {\n      const firstCode = this.cells.find(c => c.kind === 'code');\n      this.focusedCellId = firstCode ? firstCode.id : (this.cells.length > 0 ? 
this.cells[0].id : null);\n    }\n    this.emit('notebook:loaded', this.cells);\n  }\n\n  findCell(id) {\n    return this.cells.find(c => c.id === id);\n  }\n\n  findCellIndex(id) {\n    return this.cells.findIndex(c => c.id === id);\n  }\n\n  setCellStatus(cellId, status) {\n    const cell = this.findCell(cellId);\n    if (cell) {\n      cell.status = status;\n      this.emit('cell:status', { cellId, status });\n    }\n  }\n\n  finishExecution(cellId, success) {\n    const cell = this.findCell(cellId);\n    if (cell) {\n      cell.lastRunSuccess = success;\n      cell.status = 'idle';\n      this.emit('cell:execution-done', { cellId, success });\n    }\n  }\n\n  appendOutput(cellId, output) {\n    const cell = this.findCell(cellId);\n    if (cell) {\n      if (!cell.outputs) cell.outputs = [];\n      cell.outputs.push(output);\n      this.emit('cell:output', { cellId, output });\n    }\n  }\n\n  clearOutputs(cellId) {\n    const cell = this.findCell(cellId);\n    if (cell && cell.outputs) {\n      cell.outputs = [];\n      this.emit('cell:outputs-cleared', { cellId });\n    }\n  }\n\n  updateCell(cellId, cellData) {\n    const idx = this.findCellIndex(cellId);\n    if (idx !== -1) {\n      // Preserve lastRunSuccess from the old cell\n      const oldCell = this.cells[idx];\n      if (oldCell.lastRunSuccess !== undefined) {\n        cellData.lastRunSuccess = oldCell.lastRunSuccess;\n      }\n      this.cells[idx] = cellData;\n      this.emit('cell:updated', { cellId, cell: cellData });\n    }\n  }\n\n  insertCell(pos, cell) {\n    this.cells.splice(pos, 0, cell);\n    this.emit('cell:inserted', { pos, cell });\n  }\n\n  deleteCell(cellId) {\n    const idx = this.findCellIndex(cellId);\n    if (idx !== -1) {\n      this.cells.splice(idx, 1);\n      this.emit('cell:deleted', { cellId });\n      // Update focus\n      if (this.focusedCellId === cellId) {\n        if (this.cells.length > 0) {\n          const newIdx = Math.min(idx, this.cells.length - 1);\n          
this.setFocus(this.cells[newIdx].id);\n        } else {\n          this.focusedCellId = null;\n        }\n      }\n    }\n  }\n\n  moveCell(cellId, pos) {\n    const oldIdx = this.findCellIndex(cellId);\n    if (oldIdx !== -1) {\n      const [cell] = this.cells.splice(oldIdx, 1);\n      this.cells.splice(pos, 0, cell);\n      this.emit('cell:moved', { cellId, pos });\n    }\n  }\n\n  setUndoRedo(canUndo, canRedo) {\n    this.canUndo = canUndo;\n    this.canRedo = canRedo;\n    this.emit('undo-redo:changed', { canUndo, canRedo });\n  }\n\n  setConnectionStatus(status) {\n    this.kernelStatus = status;\n    this.emit('connection:changed', { status });\n  }\n\n  setFocus(cellId) {\n    const prev = this.focusedCellId;\n    this.focusedCellId = cellId;\n    this.emit('focus:changed', { cellId, prevCellId: prev });\n  }\n\n  clearFocus() {\n    if (this.focusedCellId) {\n      const prev = this.focusedCellId;\n      this.focusedCellId = null;\n      this.emit('focus:changed', { cellId: null, prevCellId: prev });\n    }\n  }\n\n  focusNext() {\n    const idx = this.findCellIndex(this.focusedCellId);\n    if (idx < this.cells.length - 1) {\n      this.setFocus(this.cells[idx + 1].id);\n    }\n  }\n\n  focusPrev() {\n    const idx = this.findCellIndex(this.focusedCellId);\n    if (idx > 0) {\n      this.setFocus(this.cells[idx - 1].id);\n    }\n  }\n}\n"
  },
  {
    "path": "packages/quill/lib/quill-server/frontend/src/ws.js",
    "content": "// WebSocket client with reconnection and message dispatch.\n\nexport class WsClient {\n  constructor(store) {\n    this.store = store;\n    this.ws = null;\n    this.reconnectDelay = 1000;\n    this._pendingCompletions = new Map();\n    this._pendingTypeAt = new Map();\n    this._pendingDiagnostics = new Map();\n    this._requestCounter = 0;\n    this._sourceDebounceTimers = new Map();\n    this.chapterPath = null; // Set by app.js in directory mode\n  }\n\n  connect() {\n    const protocol = location.protocol === 'https:' ? 'wss:' : 'ws:';\n    let url = `${protocol}//${location.host}/ws`;\n    if (this.chapterPath) {\n      url += `?path=${encodeURIComponent(this.chapterPath)}`;\n    }\n    this.ws = new WebSocket(url);\n    this.ws.onopen = () => {\n      const wasDisconnected = this.reconnectDelay > 1000;\n      this.reconnectDelay = 1000;\n      this.store.setConnectionStatus('connected');\n      if (wasDisconnected) {\n        this.store.emit('reconnected');\n      }\n    };\n    this.ws.onmessage = (event) => {\n      try {\n        const msg = JSON.parse(event.data);\n        console.debug('[ws]', msg.type, msg.cell_id || '');\n        this._onMessage(msg);\n      } catch (err) {\n        console.error('[ws] message error:', err, event.data.slice(0, 200));\n      }\n    };\n    this.ws.onclose = () => {\n      this.ws = null;\n      this.store.setConnectionStatus('disconnected');\n      setTimeout(() => this.reconnect(), this.reconnectDelay);\n    };\n    this.ws.onerror = () => {\n      if (this.ws) this.ws.close();\n    };\n  }\n\n  reconnect() {\n    this.reconnectDelay = Math.min(this.reconnectDelay * 2, 30000);\n    this.connect();\n  }\n\n  send(msg) {\n    if (this.ws && this.ws.readyState === WebSocket.OPEN) {\n      this.ws.send(JSON.stringify(msg));\n    }\n  }\n\n  _onMessage(msg) {\n    switch (msg.type) {\n      case 'notebook':\n        this.store.loadNotebook(msg);\n        break;\n      case 'cell_status':\n        
this.store.setCellStatus(msg.cell_id, msg.status);\n        break;\n      case 'cell_output':\n        this.store.appendOutput(msg.cell_id, msg.output);\n        break;\n      case 'cell_updated': {\n        // Detect execution completion: cell was running/queued, now idle\n        const oldCell = this.store.findCell(msg.cell_id);\n        const wasExecuting = oldCell && (oldCell.status === 'running' || oldCell.status === 'queued');\n        this.store.updateCell(msg.cell_id, msg.cell);\n        if (wasExecuting && msg.cell.status === 'idle') {\n          const hasError = msg.cell.outputs && msg.cell.outputs.some(o => o.kind === 'error');\n          this.store.finishExecution(msg.cell_id, !hasError);\n        }\n        break;\n      }\n      case 'cell_inserted':\n        this.store.insertCell(msg.pos, msg.cell);\n        break;\n      case 'cell_deleted':\n        this.store.deleteCell(msg.cell_id);\n        break;\n      case 'cell_moved':\n        this.store.moveCell(msg.cell_id, msg.pos);\n        break;\n      case 'completions': {\n        const resolve = this._pendingCompletions.get(msg.request_id);\n        if (resolve) {\n          this._pendingCompletions.delete(msg.request_id);\n          resolve(msg.items);\n        }\n        break;\n      }\n      case 'type_at': {\n        const resolve = this._pendingTypeAt.get(msg.request_id);\n        if (resolve) {\n          this._pendingTypeAt.delete(msg.request_id);\n          resolve(msg);\n        }\n        break;\n      }\n      case 'diagnostics': {\n        const resolve = this._pendingDiagnostics.get(msg.request_id);\n        if (resolve) {\n          this._pendingDiagnostics.delete(msg.request_id);\n          resolve(msg);\n        }\n        break;\n      }\n      case 'saved':\n        this.store.emit('saved');\n        break;\n      case 'undo_redo':\n        this.store.setUndoRedo(msg.can_undo, msg.can_redo);\n        break;\n      case 'error':\n        this.store.emit('error', { message: 
msg.message });\n        break;\n    }\n  }\n\n  // --- Commands ---\n\n  updateSource(cellId, source) {\n    // Debounce: wait 150ms after last keystroke\n    const existing = this._sourceDebounceTimers.get(cellId);\n    if (existing) clearTimeout(existing);\n    this._sourceDebounceTimers.set(cellId, setTimeout(() => {\n      this._sourceDebounceTimers.delete(cellId);\n      this.send({ type: 'update_source', cell_id: cellId, source });\n    }, 150));\n  }\n\n  /** Cancel a pending debounced source update (caller sends explicitly). */\n  cancelPendingSource(cellId) {\n    const existing = this._sourceDebounceTimers.get(cellId);\n    if (existing) {\n      clearTimeout(existing);\n      this._sourceDebounceTimers.delete(cellId);\n    }\n  }\n\n  checkpoint() { this.send({ type: 'checkpoint' }); }\n\n  executeCell(cellId) {\n    this.cancelPendingSource(cellId);\n    const cell = this.store.findCell(cellId);\n    if (cell) {\n      this.send({ type: 'update_source', cell_id: cellId, source: cell.source });\n    }\n    this.send({ type: 'execute_cell', cell_id: cellId });\n  }\n\n  executeCells(cellIds) {\n    for (const cellId of cellIds) {\n      this.cancelPendingSource(cellId);\n      const cell = this.store.findCell(cellId);\n      if (cell) {\n        this.send({ type: 'update_source', cell_id: cellId, source: cell.source });\n      }\n    }\n    this.send({ type: 'execute_cells', cell_ids: cellIds });\n  }\n\n  executeAll() {\n    for (const [cellId, timer] of this._sourceDebounceTimers) {\n      clearTimeout(timer);\n      const cell = this.store.findCell(cellId);\n      if (cell) {\n        this.send({ type: 'update_source', cell_id: cellId, source: cell.source });\n      }\n    }\n    this._sourceDebounceTimers.clear();\n    this.send({ type: 'execute_all' });\n  }\n  interrupt() { this.send({ type: 'interrupt' }); }\n\n  insertCell(pos, kind) { this.send({ type: 'insert_cell', pos, kind }); }\n  deleteCell(cellId) { this.send({ type: 'delete_cell', 
cell_id: cellId }); }\n  moveCell(cellId, pos) { this.send({ type: 'move_cell', cell_id: cellId, pos }); }\n  setCellKind(cellId, kind) { this.send({ type: 'set_cell_kind', cell_id: cellId, kind }); }\n  setCellAttrs(cellId, attrs) { this.send({ type: 'set_cell_attrs', cell_id: cellId, ...attrs }); }\n\n  clearOutputs(cellId) { this.send({ type: 'clear_outputs', cell_id: cellId }); }\n  clearAllOutputs() { this.send({ type: 'clear_all_outputs' }); }\n\n  save() { this.send({ type: 'save' }); }\n  undo() { this.send({ type: 'undo' }); }\n  redo() { this.send({ type: 'redo' }); }\n\n  complete(code, pos) {\n    const requestId = `req_${++this._requestCounter}`;\n    return new Promise((resolve) => {\n      this._pendingCompletions.set(requestId, resolve);\n      this.send({ type: 'complete', request_id: requestId, code, pos });\n      setTimeout(() => {\n        if (this._pendingCompletions.has(requestId)) {\n          this._pendingCompletions.delete(requestId);\n          resolve([]);\n        }\n      }, 3000);\n    });\n  }\n\n  typeAt(code, pos) {\n    const requestId = `req_${++this._requestCounter}`;\n    return new Promise((resolve) => {\n      this._pendingTypeAt.set(requestId, resolve);\n      this.send({ type: 'type_at', request_id: requestId, code, pos });\n      setTimeout(() => {\n        if (this._pendingTypeAt.has(requestId)) {\n          this._pendingTypeAt.delete(requestId);\n          resolve(null);\n        }\n      }, 3000);\n    });\n  }\n\n  diagnostics(code) {\n    const requestId = `req_${++this._requestCounter}`;\n    return new Promise((resolve) => {\n      this._pendingDiagnostics.set(requestId, resolve);\n      this.send({ type: 'diagnostics', request_id: requestId, code });\n      setTimeout(() => {\n        if (this._pendingDiagnostics.has(requestId)) {\n          this._pendingDiagnostics.delete(requestId);\n          resolve({ items: [] });\n        }\n      }, 3000);\n    });\n  }\n}\n"
  },
  {
    "path": "packages/quill/lib/quill-server/httpd.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nlet err_malformed_req = \"malformed request line\"\nlet err_unsupported_meth = \"unsupported method\"\nlet err_missing_ws_key = \"missing Sec-WebSocket-Key\"\nlet err_ws_eof : _ format = \"httpd: ws: end of file\\n%!\"\nlet err_ws_unix : _ format = \"httpd: ws: %s in %s\\n%!\"\nlet err_handler : _ format = \"httpd: handler: %s\\n%!\"\nlet err_parse : _ format = \"httpd: parse: %s\\n%!\"\nlet err_connection : _ format = \"httpd: connection: %s\\n%!\"\nlet err_accept : _ format = \"httpd: accept: %s\\n%!\"\n\n(*---------------------------------------------------------------------------\n  String and URL utilities\n  ---------------------------------------------------------------------------*)\n\nlet str_equal_case_insensitive a b =\n  let len = String.length a in\n  if len <> String.length b then false\n  else\n    let rec loop i =\n      if i = len then true\n      else\n        let ca = Char.code a.[i] in\n        let cb = Char.code b.[i] in\n        let ca = if ca >= 65 && ca <= 90 then ca + 32 else ca in\n        let cb = if cb >= 65 && cb <= 90 then cb + 32 else cb in\n        if ca = cb then loop (i + 1) else false\n    in\n    loop 0\n\nlet sub_equal_case_insensitive s off len target =\n  if len <> String.length target then false\n  else\n    let rec loop i =\n      if i = len then true\n      else\n        let ca = Char.code s.[off + i] in\n        let cb = Char.code target.[i] in\n        let ca = if ca >= 65 && ca <= 90 then ca + 32 else ca in\n        let cb = if cb >= 65 && cb <= 90 then cb + 32 else cb in\n        if ca = cb then loop (i + 1) else false\n    in\n    loop 0\n\nlet starts_with prefix s =\n  let len_p = String.length prefix in\n  if String.length s < len_p then false\n  
else\n    let rec loop i =\n      if i = len_p then true\n      else if s.[i] = prefix.[i] then loop (i + 1)\n      else false\n    in\n    loop 0\n\nlet url_decode s =\n  let hex c =\n    if c >= '0' && c <= '9' then Char.code c - Char.code '0'\n    else if c >= 'a' && c <= 'f' then Char.code c - Char.code 'a' + 10\n    else if c >= 'A' && c <= 'F' then Char.code c - Char.code 'A' + 10\n    else -1\n  in\n  let len = String.length s in\n  let buf = Buffer.create len in\n  let rec loop i =\n    if i >= len then Buffer.contents buf\n    else\n      match String.unsafe_get s i with\n      | '%' when i + 2 < len ->\n          let h = hex (String.unsafe_get s (i + 1)) in\n          let l = hex (String.unsafe_get s (i + 2)) in\n          if h >= 0 && l >= 0 then begin\n            Buffer.add_char buf (Char.chr ((h lsl 4) lor l));\n            loop (i + 3)\n          end\n          else begin\n            Buffer.add_char buf '%';\n            loop (i + 1)\n          end\n      | '+' ->\n          Buffer.add_char buf ' ';\n          loop (i + 1)\n      | c ->\n          Buffer.add_char buf c;\n          loop (i + 1)\n  in\n  loop 0\n\nlet parse_query_string s =\n  if String.length s = 0 then []\n  else\n    let rec split acc start i =\n      if i = String.length s then\n        if start < i then String.sub s start (i - start) :: acc else acc\n      else if String.unsafe_get s i = '&' then\n        split (String.sub s start (i - start) :: acc) (i + 1) (i + 1)\n      else split acc start (i + 1)\n    in\n    let pairs = split [] 0 0 in\n    let rec decode_pairs acc = function\n      | [] -> acc\n      | pair :: rest -> (\n          match String.index_opt pair '=' with\n          | Some i ->\n              let k = url_decode (String.sub pair 0 i) in\n              let v =\n                url_decode\n                  (String.sub pair (i + 1) (String.length pair - i - 1))\n              in\n              decode_pairs ((k, v) :: acc) rest\n          | None ->\n              
if String.length pair > 0 then\n                decode_pairs ((url_decode pair, \"\") :: acc) rest\n              else decode_pairs acc rest)\n    in\n    decode_pairs [] pairs\n\n(*---------------------------------------------------------------------------\n  MIME types and HTTP reasons\n  ---------------------------------------------------------------------------*)\n\nlet mime_of_ext = function\n  | \".html\" | \".htm\" -> \"text/html; charset=utf-8\"\n  | \".css\" -> \"text/css; charset=utf-8\"\n  | \".js\" | \".mjs\" -> \"application/javascript; charset=utf-8\"\n  | \".json\" | \".map\" -> \"application/json; charset=utf-8\"\n  | \".png\" -> \"image/png\"\n  | \".jpg\" | \".jpeg\" -> \"image/jpeg\"\n  | \".gif\" -> \"image/gif\"\n  | \".svg\" -> \"image/svg+xml\"\n  | \".ico\" -> \"image/x-icon\"\n  | \".woff\" -> \"font/woff\"\n  | \".woff2\" -> \"font/woff2\"\n  | \".ttf\" -> \"font/ttf\"\n  | \".otf\" -> \"font/otf\"\n  | \".wasm\" -> \"application/wasm\"\n  | \".txt\" | \".md\" -> \"text/plain; charset=utf-8\"\n  | \".xml\" -> \"application/xml\"\n  | _ -> \"application/octet-stream\"\n\nlet mime_of_path path = mime_of_ext (Filename.extension path)\n\nlet reason_phrase = function\n  | 100 -> \"Continue\"\n  | 101 -> \"Switching Protocols\"\n  | 200 -> \"OK\"\n  | 201 -> \"Created\"\n  | 204 -> \"No Content\"\n  | 301 -> \"Moved Permanently\"\n  | 302 -> \"Found\"\n  | 304 -> \"Not Modified\"\n  | 400 -> \"Bad Request\"\n  | 401 -> \"Unauthorized\"\n  | 403 -> \"Forbidden\"\n  | 404 -> \"Not Found\"\n  | 405 -> \"Method Not Allowed\"\n  | 413 -> \"Content Too Large\"\n  | 426 -> \"Upgrade Required\"\n  | 500 -> \"Internal Server Error\"\n  | code -> string_of_int code\n\n(*---------------------------------------------------------------------------\n  SHA-1 and Base64 (for WebSocket handshake)\n  ---------------------------------------------------------------------------*)\n\nmodule Ws_crypto = struct\n  let base64_encode s =\n    let alpha =\n      
\"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/\"\n    in\n    let rec loop len e ei s i =\n      if i >= len then Bytes.unsafe_to_string e\n      else\n        let i0 = i and i1 = i + 1 and i2 = i + 2 in\n        let b0 = Char.code s.[i0] in\n        let b1 = if i1 >= len then 0 else Char.code s.[i1] in\n        let b2 = if i2 >= len then 0 else Char.code s.[i2] in\n        let u = (b0 lsl 16) lor (b1 lsl 8) lor b2 in\n        Bytes.set e ei alpha.[u lsr 18];\n        Bytes.set e (ei + 1) alpha.[(u lsr 12) land 63];\n        Bytes.set e (ei + 2)\n          (if i1 >= len then '=' else alpha.[(u lsr 6) land 63]);\n        Bytes.set e (ei + 3) (if i2 >= len then '=' else alpha.[u land 63]);\n        loop len e (ei + 4) s (i2 + 1)\n    in\n    match String.length s with\n    | 0 -> \"\"\n    | len -> loop len (Bytes.create ((len + 2) / 3 * 4)) 0 s 0\n\n  let sha1 s =\n    let sha_1_pad s =\n      let len = String.length s in\n      let blen = 8 * len in\n      let rem = len mod 64 in\n      let mlen = if rem > 55 then len + 128 - rem else len + 64 - rem in\n      let m = Bytes.create mlen in\n      Bytes.blit_string s 0 m 0 len;\n      Bytes.fill m len (mlen - len) '\\x00';\n      Bytes.set m len '\\x80';\n      if Sys.word_size > 32 then begin\n        Bytes.set m (mlen - 8) (Char.unsafe_chr ((blen lsr 56) land 0xFF));\n        Bytes.set m (mlen - 7) (Char.unsafe_chr ((blen lsr 48) land 0xFF));\n        Bytes.set m (mlen - 6) (Char.unsafe_chr ((blen lsr 40) land 0xFF));\n        Bytes.set m (mlen - 5) (Char.unsafe_chr ((blen lsr 32) land 0xFF))\n      end;\n      Bytes.set m (mlen - 4) (Char.unsafe_chr ((blen lsr 24) land 0xFF));\n      Bytes.set m (mlen - 3) (Char.unsafe_chr ((blen lsr 16) land 0xFF));\n      Bytes.set m (mlen - 2) (Char.unsafe_chr ((blen lsr 8) land 0xFF));\n      Bytes.set m (mlen - 1) (Char.unsafe_chr (blen land 0xFF));\n      m\n    in\n    let ( &&& ) = ( land ) in\n    let ( lor ) = Int32.logor in\n    let ( lxor ) = 
Int32.logxor in\n    let ( land ) = Int32.logand in\n    let ( ++ ) = Int32.add in\n    let lnot = Int32.lognot in\n    let sr = Int32.shift_right in\n    let sl = Int32.shift_left in\n    let cls n x = sl x n lor Int32.shift_right_logical x (32 - n) in\n    let m = sha_1_pad s in\n    let w = Array.make 16 0l in\n    let h0 = ref 0x67452301l\n    and h1 = ref 0xEFCDAB89l\n    and h2 = ref 0x98BADCFEl in\n    let h3 = ref 0x10325476l and h4 = ref 0xC3D2E1F0l in\n    let a = ref 0l\n    and b = ref 0l\n    and c = ref 0l\n    and d = ref 0l\n    and e = ref 0l in\n    for i = 0 to (Bytes.length m / 64) - 1 do\n      let base = i * 64 in\n      for j = 0 to 15 do\n        let k = base + (j * 4) in\n        w.(j) <-\n          sl (Int32.of_int (Char.code (Bytes.get m k))) 24\n          lor sl (Int32.of_int (Char.code (Bytes.get m (k + 1)))) 16\n          lor sl (Int32.of_int (Char.code (Bytes.get m (k + 2)))) 8\n          lor Int32.of_int (Char.code (Bytes.get m (k + 3)))\n      done;\n      a := !h0;\n      b := !h1;\n      c := !h2;\n      d := !h3;\n      e := !h4;\n      for t = 0 to 79 do\n        let f, k =\n          if t <= 19 then (!b land !c lor (lnot !b land !d), 0x5A827999l)\n          else if t <= 39 then (!b lxor !c lxor !d, 0x6ED9EBA1l)\n          else if t <= 59 then\n            (!b land !c lor (!b land !d) lor (!c land !d), 0x8F1BBCDCl)\n          else (!b lxor !c lxor !d, 0xCA62C1D6l)\n        in\n        let s = t &&& 0xF in\n        if t >= 16 then\n          w.(s) <-\n            cls 1\n              (w.(s + 13 &&& 0xF)\n              lxor w.(s + 8 &&& 0xF)\n              lxor w.(s + 2 &&& 0xF)\n              lxor w.(s));\n        let temp = cls 5 !a ++ f ++ !e ++ w.(s) ++ k in\n        e := !d;\n        d := !c;\n        c := cls 30 !b;\n        b := !a;\n        a := temp\n      done;\n      h0 := !h0 ++ !a;\n      h1 := !h1 ++ !b;\n      h2 := !h2 ++ !c;\n      h3 := !h3 ++ !d;\n      h4 := !h4 ++ !e\n    done;\n    let h = Bytes.create 20 
in\n    let i2s h k i =\n      Bytes.set h k (Char.unsafe_chr (Int32.to_int (sr i 24) &&& 0xFF));\n      Bytes.set h (k + 1) (Char.unsafe_chr (Int32.to_int (sr i 16) &&& 0xFF));\n      Bytes.set h (k + 2) (Char.unsafe_chr (Int32.to_int (sr i 8) &&& 0xFF));\n      Bytes.set h (k + 3) (Char.unsafe_chr (Int32.to_int i &&& 0xFF))\n    in\n    i2s h 0 !h0;\n    i2s h 4 !h1;\n    i2s h 8 !h2;\n    i2s h 12 !h3;\n    i2s h 16 !h4;\n    Bytes.unsafe_to_string h\nend\n\n(*---------------------------------------------------------------------------\n  Buffered reader\n  ---------------------------------------------------------------------------*)\n\ntype reader = {\n  fd : Unix.file_descr;\n  buf : bytes;\n  mutable pos : int;\n  mutable len : int;\n}\n\nlet reader_create fd = { fd; buf = Bytes.create 4096; pos = 0; len = 0 }\n\nlet reader_fill r =\n  if r.len = 0 then r.pos <- 0\n  else if r.pos > 0 then begin\n    Bytes.blit r.buf r.pos r.buf 0 r.len;\n    r.pos <- 0\n  end;\n  let space = Bytes.length r.buf - r.len in\n  if space > 0 then begin\n    let n = Unix.read r.fd r.buf r.len space in\n    if n = 0 then raise End_of_file;\n    r.len <- r.len + n\n  end\n\nlet reader_read_line r =\n  let buf = Buffer.create 128 in\n  let rec loop () =\n    if r.len = 0 then reader_fill r;\n    let limit = r.pos + r.len in\n    let rec find_nl i =\n      if i = limit then None\n      else if Bytes.unsafe_get r.buf i = '\\n' then Some i\n      else find_nl (i + 1)\n    in\n    match find_nl r.pos with\n    | Some i ->\n        let line_len = i - r.pos in\n        let end_len =\n          if line_len > 0 && Bytes.unsafe_get r.buf (i - 1) = '\\r' then\n            line_len - 1\n          else line_len\n        in\n        let s =\n          if Buffer.length buf = 0 then Bytes.sub_string r.buf r.pos end_len\n          else begin\n            Buffer.add_subbytes buf r.buf r.pos end_len;\n            Buffer.contents buf\n          end\n        in\n        let consumed = line_len + 1 in\n   
     r.pos <- r.pos + consumed;\n        r.len <- r.len - consumed;\n        s\n    | None ->\n        Buffer.add_subbytes buf r.buf r.pos r.len;\n        r.pos <- 0;\n        r.len <- 0;\n        loop ()\n  in\n  loop ()\n\nlet reader_read_exact_bytes r n =\n  if n <= r.len then begin\n    let b = Bytes.sub r.buf r.pos n in\n    r.pos <- r.pos + n;\n    r.len <- r.len - n;\n    b\n  end\n  else begin\n    let res = Bytes.create n in\n    let rec loop rem off =\n      if rem = 0 then res\n      else begin\n        if r.len = 0 then reader_fill r;\n        let take = min rem r.len in\n        Bytes.blit r.buf r.pos res off take;\n        r.pos <- r.pos + take;\n        r.len <- r.len - take;\n        loop (rem - take) (off + take)\n      end\n    in\n    loop n 0\n  end\n\nlet reader_read_exact r n = Bytes.unsafe_to_string (reader_read_exact_bytes r n)\n\n(*---------------------------------------------------------------------------\n  HTTP writing\n  ---------------------------------------------------------------------------*)\n\nlet write_all fd s off len =\n  let rec loop off len =\n    if len > 0 then begin\n      let n = Unix.write_substring fd s off len in\n      loop (off + n) (len - n)\n    end\n  in\n  loop off len\n\nlet write_string fd s = write_all fd s 0 (String.length s)\n\n(*---------------------------------------------------------------------------\n  Types\n  ---------------------------------------------------------------------------*)\n\ntype meth = GET | HEAD | POST | PUT | DELETE | OPTIONS\n\ntype request = {\n  meth : meth;\n  path : string;\n  query : (string * string) list;\n  headers : (string * string) list;\n  body : string;\n  client_addr : Unix.sockaddr;\n}\n\ntype response = {\n  status : int;\n  headers : (string * string) list;\n  body : string;\n}\n\nlet response ?(status = 200) ?(headers = []) body = { status; headers; body }\n\nlet json ?(status = 200) body =\n  { status; headers = [ (\"Content-Type\", \"application/json\") ]; body 
}\n\nlet header name (req : request) =\n  let rec find = function\n    | [] -> None\n    | (k, v) :: rest ->\n        if str_equal_case_insensitive k name then Some v else find rest\n  in\n  find req.headers\n\n(*---------------------------------------------------------------------------\n  HTTP Parsing\n  ---------------------------------------------------------------------------*)\n\nlet meth_of_string = function\n  | \"GET\" -> GET\n  | \"HEAD\" -> HEAD\n  | \"POST\" -> POST\n  | \"PUT\" -> PUT\n  | \"DELETE\" -> DELETE\n  | \"OPTIONS\" -> OPTIONS\n  | _ -> failwith err_unsupported_meth\n\nlet trim_header_value s start =\n  let len = String.length s in\n  let rec trim_left j =\n    if j < len && (s.[j] = ' ' || s.[j] = '\\t') then trim_left (j + 1) else j\n  in\n  let rec trim_right j =\n    if j >= start && (s.[j] = ' ' || s.[j] = '\\t') then trim_right (j - 1)\n    else j\n  in\n  let l = trim_left start in\n  let r = trim_right (len - 1) in\n  if l <= r then String.sub s l (r - l + 1) else \"\"\n\nlet parse_request reader client_addr =\n  let line = reader_read_line reader in\n  if String.length line = 0 then raise End_of_file;\n  let i1 =\n    match String.index_opt line ' ' with\n    | Some i -> i\n    | None -> failwith err_malformed_req\n  in\n  let i2 =\n    match String.index_from_opt line (i1 + 1) ' ' with\n    | Some i -> i\n    | None -> failwith err_malformed_req\n  in\n  let meth_s = String.sub line 0 i1 in\n  let raw_path = String.sub line (i1 + 1) (i2 - i1 - 1) in\n  let meth = meth_of_string meth_s in\n  let path, query_string =\n    match String.index_opt raw_path '?' 
with\n    | Some i ->\n        ( url_decode (String.sub raw_path 0 i),\n          String.sub raw_path (i + 1) (String.length raw_path - i - 1) )\n    | None -> (url_decode raw_path, \"\")\n  in\n  let query = parse_query_string query_string in\n\n  let rec loop_headers headers content_length keep_alive =\n    let hline = reader_read_line reader in\n    if String.length hline = 0 then (headers, content_length, keep_alive)\n    else\n      match String.index_opt hline ':' with\n      | Some i ->\n          let key = String.sub hline 0 i in\n          let value = trim_header_value hline (i + 1) in\n\n          let content_length =\n            if str_equal_case_insensitive key \"content-length\" then\n              try int_of_string value with _ -> content_length\n            else content_length\n          in\n          let keep_alive =\n            if\n              str_equal_case_insensitive key \"connection\"\n              && str_equal_case_insensitive value \"close\"\n            then false\n            else keep_alive\n          in\n          loop_headers ((key, value) :: headers) content_length keep_alive\n      | None -> loop_headers headers content_length keep_alive\n  in\n  let headers, content_length, keep_alive = loop_headers [] 0 true in\n  let body =\n    if content_length > 0 then reader_read_exact reader content_length else \"\"\n  in\n  ( { meth; path; query; headers = List.rev headers; body; client_addr },\n    keep_alive )\n\nlet write_response fd resp =\n  let buf = Buffer.create 256 in\n  Buffer.add_string buf \"HTTP/1.1 \";\n  Buffer.add_string buf (string_of_int resp.status);\n  Buffer.add_char buf ' ';\n  Buffer.add_string buf (reason_phrase resp.status);\n  Buffer.add_string buf \"\\r\\n\";\n  let rec add_headers = function\n    | [] -> ()\n    | (k, v) :: rest ->\n        Buffer.add_string buf k;\n        Buffer.add_string buf \": \";\n        Buffer.add_string buf v;\n        Buffer.add_string buf \"\\r\\n\";\n        add_headers rest\n  
in\n  add_headers resp.headers;\n  Buffer.add_string buf \"Content-Length: \";\n  Buffer.add_string buf (string_of_int (String.length resp.body));\n  Buffer.add_string buf \"\\r\\n\\r\\n\";\n  write_string fd (Buffer.contents buf);\n  if String.length resp.body > 0 then write_string fd resp.body\n\n(*---------------------------------------------------------------------------\n  WebSocket\n  ---------------------------------------------------------------------------*)\n\ntype ws = {\n  ws_fd : Unix.file_descr;\n  ws_reader : reader;\n  ws_mutex : Mutex.t;\n  mutable ws_closed : bool;\n}\n\nlet ws_magic = \"258EAFA5-E914-47DA-95CA-C5AB0DC85B11\"\n\nlet ws_handshake req fd =\n  let key =\n    match header \"Sec-WebSocket-Key\" req with\n    | Some k -> k\n    | None -> failwith err_missing_ws_key\n  in\n  let accept = Ws_crypto.base64_encode (Ws_crypto.sha1 (key ^ ws_magic)) in\n  let buf = Buffer.create 128 in\n  Buffer.add_string buf \"HTTP/1.1 101 Switching Protocols\\r\\n\";\n  Buffer.add_string buf \"Upgrade: websocket\\r\\n\";\n  Buffer.add_string buf \"Connection: Upgrade\\r\\n\";\n  Buffer.add_string buf \"Sec-WebSocket-Accept: \";\n  Buffer.add_string buf accept;\n  Buffer.add_string buf \"\\r\\n\\r\\n\";\n  write_string fd (Buffer.contents buf)\n\nlet ws_write_frame_unlocked ws opcode payload =\n  if not ws.ws_closed then begin\n    let len = String.length payload in\n    let hlen = if len < 126 then 2 else if len < 65536 then 4 else 10 in\n    let h = Bytes.create hlen in\n    Bytes.unsafe_set h 0 (Char.unsafe_chr (0x80 lor opcode));\n    if len < 126 then Bytes.unsafe_set h 1 (Char.unsafe_chr len)\n    else if len < 65536 then begin\n      Bytes.unsafe_set h 1 (Char.unsafe_chr 126);\n      Bytes.unsafe_set h 2 (Char.unsafe_chr ((len lsr 8) land 0xFF));\n      Bytes.unsafe_set h 3 (Char.unsafe_chr (len land 0xFF))\n    end\n    else begin\n      Bytes.unsafe_set h 1 (Char.unsafe_chr 127);\n      let len64 = Int64.of_int len in\n      for i = 0 to 7 do\n     
   let shift = (7 - i) * 8 in\n        let b = Int64.logand (Int64.shift_right_logical len64 shift) 0xFFL in\n        Bytes.unsafe_set h (2 + i) (Char.unsafe_chr (Int64.to_int b))\n      done\n    end;\n    write_string ws.ws_fd (Bytes.unsafe_to_string h);\n    if len > 0 then write_string ws.ws_fd payload\n  end\n\nlet ws_write_frame ws opcode payload =\n  Mutex.lock ws.ws_mutex;\n  Fun.protect\n    ~finally:(fun () -> Mutex.unlock ws.ws_mutex)\n    (fun () -> ws_write_frame_unlocked ws opcode payload)\n\nlet ws_read_frame ws =\n  let h = reader_read_exact_bytes ws.ws_reader 2 in\n  let b0 = Char.code (Bytes.unsafe_get h 0) in\n  let b1 = Char.code (Bytes.unsafe_get h 1) in\n  let opcode = b0 land 0x0F in\n  let masked = b1 land 0x80 <> 0 in\n  let len_code = b1 land 0x7F in\n  let payload_len =\n    if len_code = 126 then\n      let ext = reader_read_exact_bytes ws.ws_reader 2 in\n      (Char.code (Bytes.unsafe_get ext 0) lsl 8)\n      lor Char.code (Bytes.unsafe_get ext 1)\n    else if len_code = 127 then\n      let ext = reader_read_exact_bytes ws.ws_reader 8 in\n      let rec loop i acc =\n        if i = 8 then acc\n        else loop (i + 1) ((acc lsl 8) lor Char.code (Bytes.unsafe_get ext i))\n      in\n      loop 0 0\n    else len_code\n  in\n  let mask_key =\n    if masked then Some (reader_read_exact_bytes ws.ws_reader 4) else None\n  in\n  let payload = reader_read_exact_bytes ws.ws_reader payload_len in\n  match mask_key with\n  | Some key ->\n      for i = 0 to payload_len - 1 do\n        let b = Char.code (Bytes.unsafe_get payload i) in\n        let m = Char.code (Bytes.unsafe_get key (i land 3)) in\n        Bytes.unsafe_set payload i (Char.unsafe_chr (b lxor m))\n      done;\n      (opcode, Bytes.unsafe_to_string payload)\n  | None -> (opcode, Bytes.unsafe_to_string payload)\n\nlet ws_send ws msg = ws_write_frame ws 0x1 msg\n\nlet ws_recv ws =\n  if ws.ws_closed then None\n  else\n    let rec loop () =\n      match ws_read_frame ws with\n      | (0x1 
| 0x2), payload -> Some payload\n      | 0x8, _ ->\n          (try ws_write_frame ws 0x8 \"\" with _ -> ());\n          ws.ws_closed <- true;\n          None\n      | 0x9, _ ->\n          (try ws_write_frame ws 0xA \"\" with _ -> ());\n          loop ()\n      | 0xA, _ -> loop ()\n      | _ -> loop () (* ignore unknown opcodes per RFC 6455 *)\n    in\n    try loop () with\n    | End_of_file ->\n        Printf.eprintf err_ws_eof;\n        ws.ws_closed <- true;\n        None\n    | Unix.Unix_error (err, fn, _) ->\n        Printf.eprintf err_ws_unix (Unix.error_message err) fn;\n        ws.ws_closed <- true;\n        None\n\nlet ws_close ws =\n  Mutex.lock ws.ws_mutex;\n  Fun.protect\n    ~finally:(fun () -> Mutex.unlock ws.ws_mutex)\n    (fun () ->\n      if not ws.ws_closed then begin\n        (try\n           ws_write_frame_unlocked ws 0x8 \"\";\n           Unix.shutdown ws.ws_fd Unix.SHUTDOWN_ALL\n         with _ -> ());\n        ws.ws_closed <- true\n      end)\n\n(*---------------------------------------------------------------------------\n  Static file serving\n  ---------------------------------------------------------------------------*)\n\nlet serve_static ~prefix ~loader req =\n  let prefix_len = String.length prefix in\n  let path_len = String.length req.path in\n  let rel_path =\n    let start =\n      if path_len > prefix_len && req.path.[prefix_len] = '/' then\n        prefix_len + 1\n      else prefix_len\n    in\n    if start < path_len then String.sub req.path start (path_len - start)\n    else \"\"\n  in\n  match loader rel_path with\n  | Some data ->\n      response ~headers:[ (\"Content-Type\", mime_of_path rel_path) ] data\n  | None -> response ~status:404 \"Not Found\"\n\n(*---------------------------------------------------------------------------\n  Server evaluation\n  ---------------------------------------------------------------------------*)\n\ntype route_entry =\n  | Exact of meth * string * (request -> response)\n  | Static of string * 
(string -> string option)\n  | Websocket of string * (request -> ws -> unit)\n\ntype t = {\n  addr : string;\n  port : int;\n  mutable routes : route_entry list;\n  mutable running : bool;\n  mutable listen_fd : Unix.file_descr option;\n}\n\nlet create ?(addr = \"127.0.0.1\") ?(port = 8080) () =\n  { addr; port; routes = []; running = false; listen_fd = None }\n\nlet route server meth path handler =\n  server.routes <- Exact (meth, path, handler) :: server.routes\n\nlet static server ~prefix ~loader () =\n  server.routes <- Static (prefix, loader) :: server.routes\n\nlet websocket server path handler =\n  server.routes <- Websocket (path, handler) :: server.routes\n\nlet find_route routes req =\n  let rec search = function\n    | [] -> None\n    | Exact (m, p, h) :: _ when m = req.meth && String.equal p req.path ->\n        Some (`Handler h)\n    | Static (prefix, loader) :: _\n      when req.meth = GET && starts_with prefix req.path ->\n        Some (`Static (prefix, loader))\n    | Websocket (p, h) :: _ when req.meth = GET && String.equal p req.path ->\n        Some (`Websocket h)\n    | _ :: rest -> search rest\n  in\n  search routes\n\n(* Check if a comma-separated header value contains [token]\n   (case-insensitive) *)\nlet header_contains_token s token =\n  let len = String.length s in\n  let rec scan i =\n    if i >= len then false\n    else\n      let rec skip_ws j =\n        if j < len && (s.[j] = ' ' || s.[j] = '\\t') then skip_ws (j + 1) else j\n      in\n      let start = skip_ws i in\n      let rec find_sep j =\n        if j < len && s.[j] <> ',' then find_sep (j + 1) else j\n      in\n      let stop = find_sep start in\n      let rec rtrim j =\n        if j > start && (s.[j - 1] = ' ' || s.[j - 1] = '\\t') then rtrim (j - 1)\n        else j\n      in\n      let right = rtrim stop in\n      if sub_equal_case_insensitive s start (right - start) token then true\n      else scan (stop + 1)\n  in\n  scan 0\n\nlet is_websocket_upgrade req =\n  match (header 
\"Connection\" req, header \"Upgrade\" req) with\n  | Some conn, Some upg ->\n      header_contains_token conn \"upgrade\"\n      && str_equal_case_insensitive upg \"websocket\"\n  | _ -> false\n\nlet handle_ws_upgrade server req fd reader =\n  match find_route server.routes req with\n  | Some (`Websocket handler) -> (\n      (* Use long timeouts for WebSocket (effectively infinite). Both recv and\n         send must be increased — the initial HTTP SO_SNDTIMEO of 30s would\n         otherwise kill the connection when the process is paused (e.g. inside a\n         debugger). *)\n      Unix.setsockopt_float fd Unix.SO_RCVTIMEO 86400.0;\n      Unix.setsockopt_float fd Unix.SO_SNDTIMEO 86400.0;\n      ws_handshake req fd;\n      let ws =\n        {\n          ws_fd = fd;\n          ws_reader = reader;\n          ws_mutex = Mutex.create ();\n          ws_closed = false;\n        }\n      in\n      try handler req ws\n      with exn ->\n        Printf.eprintf \"[ws] handler error: %s\\n%!\" (Printexc.to_string exn))\n  | _ -> write_response fd (response ~status:404 \"Not Found\")\n\nlet dispatch_http server req =\n  match find_route server.routes req with\n  | Some (`Handler h) -> (\n      try h req\n      with exn ->\n        Printf.eprintf err_handler (Printexc.to_string exn);\n        response ~status:500 \"Internal Server Error\")\n  | Some (`Static (prefix, loader)) -> serve_static ~prefix ~loader req\n  | Some (`Websocket _) -> response ~status:426 \"Upgrade Required\"\n  | None -> (\n      (* Fall back to a generic OPTIONS response only when no explicit route\n         matched, so handlers registered for OPTIONS take precedence. *)\n      match req.meth with\n      | OPTIONS ->\n          response ~status:204\n            ~headers:[ (\"Allow\", \"GET, HEAD, POST, PUT, DELETE, OPTIONS\") ]\n            \"\"\n      | _ -> response ~status:404 \"Not Found\")\n\nlet handle_connection server fd client_addr =\n  let reader = reader_create fd in\n  Unix.setsockopt_float fd Unix.SO_RCVTIMEO 30.0;\n  Unix.setsockopt_float fd Unix.SO_SNDTIMEO 30.0;\n  Unix.setsockopt fd Unix.TCP_NODELAY true;\n  let rec loop 
keep_alive =\n    if keep_alive && server.running then\n      begin match parse_request reader client_addr with\n      | req, ka ->\n          if is_websocket_upgrade req then\n            handle_ws_upgrade server req fd reader\n          else begin\n            write_response fd (dispatch_http server req);\n            loop ka\n          end\n      | exception End_of_file -> ()\n      | exception Unix.Unix_error (Unix.ETIMEDOUT, _, _) -> ()\n      | exception Unix.Unix_error (Unix.EAGAIN, _, _) -> ()\n      | exception Unix.Unix_error (Unix.ECONNRESET, _, _) -> ()\n      | exception Failure msg when msg = err_unsupported_meth ->\n          (* Unsupported method: the request line parsed, but the headers and\n             body of the failed request were never consumed, so continuing\n             would desynchronize the stream. Respond 405, then close. *)\n          Printf.eprintf err_parse msg;\n          (try\n             write_response fd\n               (response ~status:405\n                  ~headers:[ (\"Connection\", \"close\") ]\n                  \"Method Not Allowed\")\n           with _ -> ())\n      | exception exn -> (\n          Printf.eprintf err_parse (Printexc.to_string exn);\n          try write_response fd (response ~status:400 \"Bad Request\")\n          with _ -> ())\n      end\n  in\n  loop true\n\nlet shutdown_silent fd = try Unix.shutdown fd Unix.SHUTDOWN_ALL with _ -> ()\nlet close_silent fd = try Unix.close fd with _ -> ()\n\nlet run ?(after_start = ignore) server =\n  if not Sys.win32 then\n    ignore (Unix.sigprocmask Unix.SIG_BLOCK [ Sys.sigpipe ] : int list);\n  let sock = Unix.socket Unix.PF_INET Unix.SOCK_STREAM 0 in\n  Unix.setsockopt sock Unix.SO_REUSEADDR true;\n  let inet_addr = Unix.inet_addr_of_string server.addr in\n  Unix.bind sock (Unix.ADDR_INET (inet_addr, server.port));\n  Unix.listen sock 128;\n  Unix.set_nonblock sock;\n  server.listen_fd <- Some sock;\n  server.running <- true;\n  server.routes <- List.rev server.routes;\n  after_start ();\n  let rec accept_loop () =\n    if server.running then\n      begin match Unix.accept sock with\n      | client_fd, client_addr ->\n          Unix.clear_nonblock client_fd;\n          ignore\n            
(Thread.create\n               (fun () ->\n                 Fun.protect\n                   ~finally:(fun () ->\n                     shutdown_silent client_fd;\n                     close_silent client_fd)\n                   (fun () ->\n                     try handle_connection server client_fd client_addr\n                     with exn ->\n                       Printf.eprintf err_connection (Printexc.to_string exn)))\n               ()\n              : Thread.t);\n          accept_loop ()\n      | exception Unix.Unix_error ((Unix.EAGAIN | Unix.EWOULDBLOCK), _, _) ->\n          ignore (Unix.select [ sock ] [] [] 0.5 : _ * _ * _);\n          accept_loop ()\n      | exception Unix.Unix_error (Unix.EBADF, _, _) -> server.running <- false\n      | exception exn ->\n          Printf.eprintf err_accept (Printexc.to_string exn);\n          Thread.delay 0.01;\n          accept_loop ()\n      end\n  in\n  accept_loop ();\n  close_silent sock;\n  server.listen_fd <- None\n\nlet stop server =\n  server.running <- false;\n  match server.listen_fd with\n  | Some fd ->\n      close_silent fd;\n      server.listen_fd <- None\n  | None -> ()\n"
  },
  {
    "path": "packages/quill/lib/quill-server/httpd.mli",
"content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Minimal HTTP/1.1 server with WebSocket support.\n\n    [Httpd] is a thread-per-connection HTTP/1.1 server with keep-alive and\n    {{!websocket}WebSocket} support. It depends only on [unix] and\n    [threads.posix].\n\n    - {{!types}Types}\n    - {{!responses}Response constructors}\n    - {{!requests}Request utilities}\n    - {{!websocket}WebSocket}\n    - {{!server}Server} *)\n\n(** {1:types Types} *)\n\ntype meth = GET | HEAD | POST | PUT | DELETE | OPTIONS\n(** The type for HTTP request methods. *)\n\ntype request = {\n  meth : meth;  (** The request method. *)\n  path : string;  (** The percent-decoded request path. *)\n  query : (string * string) list;  (** Percent-decoded query parameters. *)\n  headers : (string * string) list;  (** Headers in original case. *)\n  body : string;  (** The full request body, read before the handler runs. *)\n  client_addr : Unix.sockaddr;  (** The client's socket address. *)\n}\n(** The type for HTTP requests. *)\n\ntype response = {\n  status : int;  (** The HTTP status code. *)\n  headers : (string * string) list;  (** The response headers. *)\n  body : string;\n      (** The response body. [Content-Length] is computed at send time. *)\n}\n(** The type for HTTP responses. *)\n\n(** {1:responses Response constructors} *)\n\nval response :\n  ?status:int -> ?headers:(string * string) list -> string -> response\n(** [response body] is a response with [body]. [status] defaults to [200].\n    [headers] defaults to [[]]. *)\n\nval json : ?status:int -> string -> response\n(** [json body] is a response with [Content-Type: application/json]. [status]\n    defaults to [200]. 
*)\n\n(** {1:requests Request utilities} *)\n\nval header : string -> request -> string option\n(** [header name req] is the value of the first header named [name] in [req],\n    matched case-insensitively. *)\n\n(** {1:websocket WebSocket} *)\n\ntype ws\n(** The type for WebSocket connections. Sends are thread-safe. *)\n\nval ws_send : ws -> string -> unit\n(** [ws_send ws msg] sends [msg] as a text frame. Thread-safe. *)\n\nval ws_recv : ws -> string option\n(** [ws_recv ws] blocks until a text or binary message arrives. Returns [None]\n    when the peer closes the connection or an I/O error occurs. Ping frames are\n    answered automatically. *)\n\nval ws_close : ws -> unit\n(** [ws_close ws] sends a close frame and shuts down the socket. Idempotent and\n    thread-safe. *)\n\n(** {1:server Server} *)\n\ntype t\n(** The type for HTTP servers. *)\n\nval create : ?addr:string -> ?port:int -> unit -> t\n(** [create ()] is a server. [addr] defaults to [\"127.0.0.1\"], [port] to [8080].\n*)\n\nval route : t -> meth -> string -> (request -> response) -> unit\n(** [route server meth path handler] registers [handler] for requests matching\n    [meth] and [path] exactly. *)\n\nval static :\n  t -> prefix:string -> loader:(string -> string option) -> unit -> unit\n(** [static server ~prefix ~loader ()] serves assets for [GET] requests whose\n    path starts with [prefix]. The relative path (after stripping [prefix]) is\n    passed to [loader]; if it returns [Some data] the data is served with an\n    appropriate MIME type, otherwise a 404 is returned. *)\n\nval websocket : t -> string -> (request -> ws -> unit) -> unit\n(** [websocket server path handler] registers a WebSocket endpoint at [path].\n    The handler runs on the connection thread; loop on {!ws_recv} and return\n    when done. *)\n\nval run : ?after_start:(unit -> unit) -> t -> unit\n(** [run server] starts the accept loop (blocking). 
[after_start] defaults to\n    [ignore] and is called once the socket is bound and listening. Returns when\n    {!stop} is called.\n\n    Raises [Unix.Unix_error] if binding or listening fails. *)\n\nval stop : t -> unit\n(** [stop server] requests graceful shutdown. The {!run} call returns once the\n    accept loop exits. *)\n"
  },
  {
    "path": "packages/quill/lib/quill-server/protocol.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Quill\n\nlet ( let* ) = Result.bind\n\n(* ───── JSON helpers ───── *)\n\nlet json_obj pairs =\n  Jsont.Json.object' (List.map (fun (k, v) -> (Jsont.Json.name k, v)) pairs)\n\nlet json_mem name = function\n  | Jsont.Object (mems, _) -> (\n      match Jsont.Json.find_mem name mems with\n      | Some (_, v) -> v\n      | None -> Jsont.Null ((), Jsont.Meta.none))\n  | _ -> Jsont.Null ((), Jsont.Meta.none)\n\nlet json_of_string s =\n  match Jsont_bytesrw.decode_string Jsont.json s with\n  | Ok v -> Ok v\n  | Error e -> Error e\n\nlet json_to_string j =\n  match Jsont_bytesrw.encode_string ~format:Jsont.Minify Jsont.json j with\n  | Ok s -> s\n  | Error e -> failwith e\n\n(* ───── Field extraction ───── *)\n\nlet get_string name json =\n  match json_mem name json with\n  | Jsont.String (s, _) -> Ok s\n  | _ -> Error (Printf.sprintf \"missing or invalid field '%s'\" name)\n\nlet get_int name json =\n  match json_mem name json with\n  | Jsont.Number (n, _) -> Ok (int_of_float n)\n  | _ -> Error (Printf.sprintf \"missing or invalid field '%s'\" name)\n\nlet get_bool name json =\n  match json_mem name json with Jsont.Bool (b, _) -> Ok b | _ -> Ok false\n\nlet get_string_list name json =\n  match json_mem name json with\n  | Jsont.Array (items, _) ->\n      let rec collect acc = function\n        | [] -> Ok (List.rev acc)\n        | Jsont.String (s, _) :: rest -> collect (s :: acc) rest\n        | _ :: _ -> Error (Printf.sprintf \"invalid item in '%s'\" name)\n      in\n      collect [] items\n  | _ -> Error (Printf.sprintf \"missing or invalid field '%s'\" name)\n\n(* ───── Client message parsing ───── *)\n\ntype client_msg =\n  | Update_source of { cell_id : string; source : string }\n  | Checkpoint\n  
| Execute_cell of { cell_id : string }\n  | Execute_cells of { cell_ids : string list }\n  | Execute_all\n  | Interrupt\n  | Insert_cell of { pos : int; kind : [ `Code | `Text ] }\n  | Delete_cell of { cell_id : string }\n  | Move_cell of { cell_id : string; pos : int }\n  | Set_cell_kind of { cell_id : string; kind : [ `Code | `Text ] }\n  | Set_cell_attrs of { cell_id : string; attrs : Cell.attrs }\n  | Clear_outputs of { cell_id : string }\n  | Clear_all_outputs\n  | Save\n  | Undo\n  | Redo\n  | Complete of { request_id : string; code : string; pos : int }\n  | Type_at of { request_id : string; code : string; pos : int }\n  | Diagnostics of { request_id : string; code : string }\n\nlet parse_kind json =\n  match get_string \"kind\" json with\n  | Ok \"code\" -> Ok `Code\n  | Ok \"text\" -> Ok `Text\n  | Ok k -> Error (Printf.sprintf \"unknown cell kind '%s'\" k)\n  | Error e -> Error e\n\nlet client_msg_of_json s =\n  match json_of_string s with\n  | Error e -> Error e\n  | Ok json -> (\n      match get_string \"type\" json with\n      | Ok \"update_source\" ->\n          let* cell_id = get_string \"cell_id\" json in\n          let* source = get_string \"source\" json in\n          Ok (Update_source { cell_id; source })\n      | Ok \"checkpoint\" -> Ok Checkpoint\n      | Ok \"execute_cell\" ->\n          let* cell_id = get_string \"cell_id\" json in\n          Ok (Execute_cell { cell_id })\n      | Ok \"execute_cells\" ->\n          let* cell_ids = get_string_list \"cell_ids\" json in\n          Ok (Execute_cells { cell_ids })\n      | Ok \"execute_all\" -> Ok Execute_all\n      | Ok \"interrupt\" -> Ok Interrupt\n      | Ok \"insert_cell\" ->\n          let* pos = get_int \"pos\" json in\n          let* kind = parse_kind json in\n          Ok (Insert_cell { pos; kind })\n      | Ok \"delete_cell\" ->\n          let* cell_id = get_string \"cell_id\" json in\n          Ok (Delete_cell { cell_id })\n      | Ok \"move_cell\" ->\n          let* cell_id = 
get_string \"cell_id\" json in\n          let* pos = get_int \"pos\" json in\n          Ok (Move_cell { cell_id; pos })\n      | Ok \"set_cell_kind\" ->\n          let* cell_id = get_string \"cell_id\" json in\n          let* kind = parse_kind json in\n          Ok (Set_cell_kind { cell_id; kind })\n      | Ok \"set_cell_attrs\" ->\n          let* cell_id = get_string \"cell_id\" json in\n          let* collapsed = get_bool \"collapsed\" json in\n          let* hide_source = get_bool \"hide_source\" json in\n          Ok\n            (Set_cell_attrs { cell_id; attrs = { Cell.collapsed; hide_source } })\n      | Ok \"clear_outputs\" ->\n          let* cell_id = get_string \"cell_id\" json in\n          Ok (Clear_outputs { cell_id })\n      | Ok \"clear_all_outputs\" -> Ok Clear_all_outputs\n      | Ok \"save\" -> Ok Save\n      | Ok \"undo\" -> Ok Undo\n      | Ok \"redo\" -> Ok Redo\n      | Ok \"complete\" ->\n          let* request_id = get_string \"request_id\" json in\n          let* code = get_string \"code\" json in\n          let* pos = get_int \"pos\" json in\n          Ok (Complete { request_id; code; pos })\n      | Ok \"type_at\" ->\n          let* request_id = get_string \"request_id\" json in\n          let* code = get_string \"code\" json in\n          let* pos = get_int \"pos\" json in\n          Ok (Type_at { request_id; code; pos })\n      | Ok \"diagnostics\" ->\n          let* request_id = get_string \"request_id\" json in\n          let* code = get_string \"code\" json in\n          Ok (Diagnostics { request_id; code })\n      | Ok t -> Error (Printf.sprintf \"unknown message type '%s'\" t)\n      | Error e -> Error e)\n\n(* ───── Server message encoding ───── *)\n\nlet status_string = function\n  | Session.Idle -> \"idle\"\n  | Session.Queued -> \"queued\"\n  | Session.Running -> \"running\"\n\nlet output_to_json (o : Cell.output) =\n  match o with\n  | Stdout text ->\n      json_obj\n        [\n          (\"kind\", Jsont.Json.string 
\"stdout\"); (\"text\", Jsont.Json.string text);\n        ]\n  | Stderr text ->\n      json_obj\n        [\n          (\"kind\", Jsont.Json.string \"stderr\"); (\"text\", Jsont.Json.string text);\n        ]\n  | Error text ->\n      json_obj\n        [\n          (\"kind\", Jsont.Json.string \"error\"); (\"text\", Jsont.Json.string text);\n        ]\n  | Display { mime; data } ->\n      json_obj\n        [\n          (\"kind\", Jsont.Json.string \"display\");\n          (\"mime\", Jsont.Json.string mime);\n          (\"data\", Jsont.Json.string data);\n        ]\n\nlet attrs_to_json (a : Cell.attrs) =\n  let pairs = ref [] in\n  if a.hide_source then pairs := (\"hide_source\", Jsont.Json.bool true) :: !pairs;\n  if a.collapsed then pairs := (\"collapsed\", Jsont.Json.bool true) :: !pairs;\n  json_obj !pairs\n\nlet cell_to_json (cell : Cell.t) (status : Session.cell_status) =\n  match cell with\n  | Code { id; source; language; outputs; execution_count; attrs } ->\n      json_obj\n        [\n          (\"id\", Jsont.Json.string id);\n          (\"kind\", Jsont.Json.string \"code\");\n          (\"source\", Jsont.Json.string source);\n          (\"language\", Jsont.Json.string language);\n          (\"outputs\", Jsont.Json.list (List.map output_to_json outputs));\n          (\"execution_count\", Jsont.Json.int execution_count);\n          (\"status\", Jsont.Json.string (status_string status));\n          (\"attrs\", attrs_to_json attrs);\n        ]\n  | Text { id; source; attrs } ->\n      let html = Quill_markdown.Edit.to_html source in\n      json_obj\n        [\n          (\"id\", Jsont.Json.string id);\n          (\"kind\", Jsont.Json.string \"text\");\n          (\"source\", Jsont.Json.string source);\n          (\"rendered_html\", Jsont.Json.string html);\n          (\"status\", Jsont.Json.string (status_string status));\n          (\"attrs\", attrs_to_json attrs);\n        ]\n\nlet notebook_to_json ~cells ~can_undo ~can_redo =\n  json_to_string\n    
(json_obj\n       [\n         (\"type\", Jsont.Json.string \"notebook\");\n         ( \"cells\",\n           Jsont.Json.list (List.map (fun (c, s) -> cell_to_json c s) cells) );\n         (\"can_undo\", Jsont.Json.bool can_undo);\n         (\"can_redo\", Jsont.Json.bool can_redo);\n       ])\n\nlet cell_output_to_json ~cell_id output =\n  json_to_string\n    (json_obj\n       [\n         (\"type\", Jsont.Json.string \"cell_output\");\n         (\"cell_id\", Jsont.Json.string cell_id);\n         (\"output\", output_to_json output);\n       ])\n\nlet cell_status_to_json ~cell_id status =\n  json_to_string\n    (json_obj\n       [\n         (\"type\", Jsont.Json.string \"cell_status\");\n         (\"cell_id\", Jsont.Json.string cell_id);\n         (\"status\", Jsont.Json.string (status_string status));\n       ])\n\nlet cell_inserted_to_json ~pos cell status =\n  json_to_string\n    (json_obj\n       [\n         (\"type\", Jsont.Json.string \"cell_inserted\");\n         (\"pos\", Jsont.Json.int pos);\n         (\"cell\", cell_to_json cell status);\n       ])\n\nlet cell_deleted_to_json ~cell_id =\n  json_to_string\n    (json_obj\n       [\n         (\"type\", Jsont.Json.string \"cell_deleted\");\n         (\"cell_id\", Jsont.Json.string cell_id);\n       ])\n\nlet cell_moved_to_json ~cell_id ~pos =\n  json_to_string\n    (json_obj\n       [\n         (\"type\", Jsont.Json.string \"cell_moved\");\n         (\"cell_id\", Jsont.Json.string cell_id);\n         (\"pos\", Jsont.Json.int pos);\n       ])\n\nlet cell_updated_to_json cell status =\n  json_to_string\n    (json_obj\n       [\n         (\"type\", Jsont.Json.string \"cell_updated\");\n         (\"cell_id\", Jsont.Json.string (Cell.id cell));\n         (\"cell\", cell_to_json cell status);\n       ])\n\nlet completion_kind_to_string = function\n  | Kernel.Value -> \"value\"\n  | Type -> \"type\"\n  | Module -> \"module\"\n  | Module_type -> \"module_type\"\n  | Constructor -> \"constructor\"\n  | Label -> 
\"label\"\n\nlet completion_item_to_json (item : Kernel.completion_item) =\n  json_obj\n    [\n      (\"label\", Jsont.Json.string item.label);\n      (\"kind\", Jsont.Json.string (completion_kind_to_string item.kind));\n      (\"detail\", Jsont.Json.string item.detail);\n    ]\n\nlet completions_to_json ~request_id items =\n  json_to_string\n    (json_obj\n       [\n         (\"type\", Jsont.Json.string \"completions\");\n         (\"request_id\", Jsont.Json.string request_id);\n         (\"items\", Jsont.Json.list (List.map completion_item_to_json items));\n       ])\n\nlet type_at_to_json ~request_id info =\n  let info_json =\n    match info with\n    | None -> Jsont.Json.null ()\n    | Some (ti : Kernel.type_info) ->\n        let doc_json =\n          match ti.doc with\n          | Some d -> Jsont.Json.string d\n          | None -> Jsont.Json.null ()\n        in\n        json_obj\n          [\n            (\"type\", Jsont.Json.string ti.typ);\n            (\"doc\", doc_json);\n            (\"from\", Jsont.Json.int ti.from_pos);\n            (\"to\", Jsont.Json.int ti.to_pos);\n          ]\n  in\n  json_to_string\n    (json_obj\n       [\n         (\"type\", Jsont.Json.string \"type_at\");\n         (\"request_id\", Jsont.Json.string request_id);\n         (\"info\", info_json);\n       ])\n\nlet severity_to_string = function\n  | Kernel.Error -> \"error\"\n  | Warning -> \"warning\"\n\nlet diagnostic_to_json (d : Kernel.diagnostic) =\n  json_obj\n    [\n      (\"from\", Jsont.Json.int d.from_pos);\n      (\"to\", Jsont.Json.int d.to_pos);\n      (\"severity\", Jsont.Json.string (severity_to_string d.severity));\n      (\"message\", Jsont.Json.string d.message);\n    ]\n\nlet diagnostics_to_json ~request_id items =\n  json_to_string\n    (json_obj\n       [\n         (\"type\", Jsont.Json.string \"diagnostics\");\n         (\"request_id\", Jsont.Json.string request_id);\n         (\"items\", Jsont.Json.list (List.map diagnostic_to_json items));\n       ])\n\nlet 
saved_to_json () =\n  json_to_string (json_obj [ (\"type\", Jsont.Json.string \"saved\") ])\n\nlet undo_redo_to_json ~can_undo ~can_redo =\n  json_to_string\n    (json_obj\n       [\n         (\"type\", Jsont.Json.string \"undo_redo\");\n         (\"can_undo\", Jsont.Json.bool can_undo);\n         (\"can_redo\", Jsont.Json.bool can_redo);\n       ])\n\nlet error_to_json msg =\n  json_to_string\n    (json_obj\n       [\n         (\"type\", Jsont.Json.string \"error\"); (\"message\", Jsont.Json.string msg);\n       ])\n"
  },
  {
    "path": "packages/quill/lib/quill-server/protocol.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** WebSocket protocol for notebook communication.\n\n    Defines the message types exchanged between the web frontend and the\n    notebook server, with JSON serialization.\n\n    - {{!client}Client messages}\n    - {{!server}Server messages} *)\n\n(** {1:json JSON helpers} *)\n\nval json_obj : (string * Jsont.Json.t) list -> Jsont.Json.t\nval json_to_string : Jsont.Json.t -> string\n\n(** {1:client Client messages} *)\n\ntype client_msg =\n  | Update_source of { cell_id : string; source : string }\n      (** Update the source text of cell [cell_id]. *)\n  | Checkpoint  (** Create an undo checkpoint of the current notebook state. *)\n  | Execute_cell of { cell_id : string }  (** Execute cell [cell_id]. *)\n  | Execute_cells of { cell_ids : string list }\n      (** Execute the cells [cell_ids] in order. *)\n  | Execute_all  (** Execute all code cells in document order. *)\n  | Interrupt  (** Interrupt the currently running execution. *)\n  | Insert_cell of { pos : int; kind : [ `Code | `Text ] }\n      (** Insert a new cell of the given [kind] at position [pos]. *)\n  | Delete_cell of { cell_id : string }  (** Delete cell [cell_id]. *)\n  | Move_cell of { cell_id : string; pos : int }\n      (** Move cell [cell_id] to position [pos]. *)\n  | Set_cell_kind of { cell_id : string; kind : [ `Code | `Text ] }\n      (** Change the kind of cell [cell_id] to [kind]. *)\n  | Set_cell_attrs of { cell_id : string; attrs : Quill.Cell.attrs }\n      (** Set the display attributes of cell [cell_id]. *)\n  | Clear_outputs of { cell_id : string }\n      (** Clear outputs of cell [cell_id]. *)\n  | Clear_all_outputs  (** Clear outputs of all cells. *)\n  | Save  (** Save the notebook to disk. 
*)\n  | Undo  (** Undo the last checkpoint. *)\n  | Redo  (** Redo the last undone checkpoint. *)\n  | Complete of { request_id : string; code : string; pos : int }\n      (** Request completions for [code] at cursor position [pos]. [request_id]\n          correlates the response. *)\n  | Type_at of { request_id : string; code : string; pos : int }\n      (** Request type information at cursor position [pos] in [code]. *)\n  | Diagnostics of { request_id : string; code : string }\n      (** Request parse and type diagnostics for [code]. *)\n\nval client_msg_of_json : string -> (client_msg, string) result\n(** [client_msg_of_json s] parses a JSON string into a client message. Returns\n    [Error msg] if [s] is not valid JSON, if the [\"type\"] field is missing or\n    unknown, or if required fields are absent. *)\n\n(** {1:server Server messages} *)\n\nval notebook_to_json :\n  cells:(Quill.Cell.t * Quill.Session.cell_status) list ->\n  can_undo:bool ->\n  can_redo:bool ->\n  string\n(** [notebook_to_json ~cells ~can_undo ~can_redo] is a [\"notebook\"] JSON message\n    with the full notebook state. Each cell is paired with its execution status.\n*)\n\nval cell_output_to_json : cell_id:string -> Quill.Cell.output -> string\n(** [cell_output_to_json ~cell_id output] is a [\"cell_output\"] JSON message for\n    [output] of cell [cell_id]. *)\n\nval cell_status_to_json : cell_id:string -> Quill.Session.cell_status -> string\n(** [cell_status_to_json ~cell_id status] is a [\"cell_status\"] JSON message for\n    cell [cell_id]. *)\n\nval cell_inserted_to_json :\n  pos:int -> Quill.Cell.t -> Quill.Session.cell_status -> string\n(** [cell_inserted_to_json ~pos cell status] is a [\"cell_inserted\"] JSON message\n    for [cell] at position [pos]. *)\n\nval cell_deleted_to_json : cell_id:string -> string\n(** [cell_deleted_to_json ~cell_id] is a [\"cell_deleted\"] JSON message for cell\n    [cell_id]. 
*)\n\nval cell_moved_to_json : cell_id:string -> pos:int -> string\n(** [cell_moved_to_json ~cell_id ~pos] is a [\"cell_moved\"] JSON message for cell\n    [cell_id] moved to position [pos]. *)\n\nval cell_updated_to_json : Quill.Cell.t -> Quill.Session.cell_status -> string\n(** [cell_updated_to_json cell status] is a [\"cell_updated\"] JSON message for\n    [cell] with [status]. *)\n\nval completions_to_json :\n  request_id:string -> Quill.Kernel.completion_item list -> string\n(** [completions_to_json ~request_id items] is a [\"completions\"] JSON message\n    with completion [items] for the given [request_id]. *)\n\nval type_at_to_json :\n  request_id:string -> Quill.Kernel.type_info option -> string\n(** [type_at_to_json ~request_id info] is a [\"type_at\"] JSON response. *)\n\nval diagnostics_to_json :\n  request_id:string -> Quill.Kernel.diagnostic list -> string\n(** [diagnostics_to_json ~request_id items] is a [\"diagnostics\"] JSON response.\n*)\n\nval saved_to_json : unit -> string\n(** [saved_to_json ()] is a [\"saved\"] JSON message. *)\n\nval undo_redo_to_json : can_undo:bool -> can_redo:bool -> string\n(** [undo_redo_to_json ~can_undo ~can_redo] is an [\"undo_redo\"] JSON message\n    with the current undo/redo availability. *)\n\nval error_to_json : string -> string\n(** [error_to_json msg] is an [\"error\"] JSON message with [msg]. *)\n"
  },
  {
    "path": "packages/quill/lib/quill-server/quill_server.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Quill\n\n(* Dedicated log channel: a dup of stderr taken at module init, before any FD\n   redirection by the toplevel kernel. This ensures debug logging never writes\n   to the capture pipe, avoiding feedback loops. *)\nlet log_fd = Unix.dup ~cloexec:true Unix.stderr\nlet log_oc = Unix.out_channel_of_descr log_fd\n\nlet log fmt =\n  Printf.ksprintf\n    (fun s ->\n      output_string log_oc s;\n      flush log_oc)\n    fmt\n\nlet err_file_not_found : _ format = \"Error: %s not found\\n%!\"\n\n(* ───── File I/O ───── *)\n\nlet read_file path =\n  let ic = open_in path in\n  Fun.protect\n    ~finally:(fun () -> close_in ic)\n    (fun () -> really_input_string ic (in_channel_length ic))\n\nlet write_file path content =\n  let oc = open_out path in\n  Fun.protect\n    ~finally:(fun () -> close_out oc)\n    (fun () -> output_string oc content)\n\nlet get_mtime path =\n  try (Unix.stat path).Unix.st_mtime with Unix.Unix_error _ -> 0.\n\n(* Serve files from the notebook's directory (for images, figures, etc.).\n   Security: rejects \"..\" segments, resolves symlinks with Unix.realpath, and\n   verifies the canonical path is strictly under base_dir. 
*)\nlet file_loader base_dir rel_path =\n  let segments = String.split_on_char '/' rel_path in\n  if List.exists (fun s -> s = \"\" || s = \".\" || s = \"..\") segments then None\n  else\n    let path = Filename.concat base_dir rel_path in\n    if Sys.file_exists path && not (Sys.is_directory path) then\n      try\n        let real = Unix.realpath path in\n        let real_base = Unix.realpath base_dir in\n        let prefix = real_base ^ \"/\" in\n        if\n          String.length real > String.length prefix\n          && String.sub real 0 (String.length prefix) = prefix\n        then Some (read_file real)\n        else None\n      with _ -> None\n    else None\n\n(* ───── Server state ───── *)\n\ntype state = {\n  mutable session : Session.t;\n  mutable kernel : Kernel.t;\n  path : string;\n  mutex : Mutex.t;\n  mutable ws_clients : Httpd.ws list;\n  mutable last_mtime : float;\n  (* Execution queue: serializes all kernel.execute calls through a single\n     worker thread. [exec_mutex] protects [exec_queue] and [exec_cancelled];\n     [exec_cond] is signaled when new work is enqueued. Lock ordering: [mutex] >\n     [exec_mutex] (never reversed). 
*)\n  exec_queue : Cell.id Queue.t;\n  exec_mutex : Mutex.t;\n  exec_cond : Condition.t;\n  mutable exec_cancelled : bool;\n}\n\nlet locked st f =\n  Mutex.lock st.mutex;\n  Fun.protect ~finally:(fun () -> Mutex.unlock st.mutex) f\n\nlet send st msg =\n  st.ws_clients <-\n    List.filter\n      (fun ws ->\n        try\n          Httpd.ws_send ws msg;\n          true\n        with _ -> false)\n      st.ws_clients\n\nlet send_undo_redo st =\n  send st\n    (Protocol.undo_redo_to_json\n       ~can_undo:(Session.can_undo st.session)\n       ~can_redo:(Session.can_redo st.session))\n\nlet cells_with_status st =\n  List.map\n    (fun c -> (c, Session.cell_status (Cell.id c) st.session))\n    (Doc.cells (Session.doc st.session))\n\nlet send_notebook st =\n  send st\n    (Protocol.notebook_to_json ~cells:(cells_with_status st)\n       ~can_undo:(Session.can_undo st.session)\n       ~can_redo:(Session.can_redo st.session))\n\n(* ───── Execution queue ───── *)\n\n(* Enqueue cell IDs for execution. Called while [st.mutex] is held. *)\nlet enqueue_execution st cell_ids =\n  st.session <- Session.checkpoint st.session;\n  List.iter\n    (fun cell_id ->\n      st.session <- Session.mark_queued cell_id st.session;\n      send st (Protocol.cell_status_to_json ~cell_id Session.Queued))\n    cell_ids;\n  Mutex.lock st.exec_mutex;\n  List.iter (fun cell_id -> Queue.push cell_id st.exec_queue) cell_ids;\n  Condition.signal st.exec_cond;\n  Mutex.unlock st.exec_mutex\n\n(* Long-lived worker thread: pops cell IDs one at a time and executes them.\n   Checks [exec_cancelled] between cells to support interrupt-and-drain. 
*)\nlet exec_worker st =\n  let rec loop () =\n    Mutex.lock st.exec_mutex;\n    while Queue.is_empty st.exec_queue do\n      Condition.wait st.exec_cond st.exec_mutex\n    done;\n    let cell_id = Queue.pop st.exec_queue in\n    if st.exec_cancelled then begin\n      (* Drain remaining queued cells and mark all cancelled cells idle *)\n      let cancelled =\n        cell_id :: Queue.fold (fun acc id -> id :: acc) [] st.exec_queue\n      in\n      Queue.clear st.exec_queue;\n      st.exec_cancelled <- false;\n      Mutex.unlock st.exec_mutex;\n      locked st (fun () ->\n          List.iter\n            (fun cid ->\n              st.session <- Session.mark_idle cid st.session;\n              send st (Protocol.cell_status_to_json ~cell_id:cid Session.Idle))\n            cancelled);\n      loop ()\n    end\n    else begin\n      Mutex.unlock st.exec_mutex;\n      let source =\n        locked st (fun () ->\n            match Doc.find cell_id (Session.doc st.session) with\n            | Some (Cell.Code { source; _ }) ->\n                st.session <- Session.clear_outputs cell_id st.session;\n                st.session <- Session.mark_running cell_id st.session;\n                send st (Protocol.cell_status_to_json ~cell_id Session.Running);\n                log \"[exec] %s running\\n%!\" cell_id;\n                Some source\n            | _ -> None)\n      in\n      (match source with\n      | Some code -> st.kernel.execute ~cell_id ~code\n      | None -> ());\n      loop ()\n    end\n  in\n  loop ()\n\n(* ───── Kernel event handler ───── *)\n\nlet on_kernel_event st = function\n  | Kernel.Output { cell_id; output } ->\n      (match output with\n      | Cell.Error msg -> log \"[exec] %s error: %s\\n%!\" cell_id msg\n      | _ -> ());\n      locked st (fun () ->\n          st.session <- Session.apply_output cell_id output st.session;\n          send st (Protocol.cell_output_to_json ~cell_id output))\n  | Kernel.Finished { cell_id; success } ->\n      log \"[exec] %s 
%s\\n%!\" cell_id (if success then \"done\" else \"failed\");\n      locked st (fun () ->\n          st.session <- Session.finish_execution cell_id ~success st.session;\n          match Doc.find cell_id (Session.doc st.session) with\n          | Some cell ->\n              let status = Session.cell_status cell_id st.session in\n              send st (Protocol.cell_updated_to_json cell status)\n          | None -> log \"[exec] %s not found after finish\\n%!\" cell_id)\n  | Kernel.Status_changed _ -> ()\n\n(* ───── Client message handler ───── *)\n\nlet handle_client_msg st = function\n  | Protocol.Update_source { cell_id; source } ->\n      st.session <- Session.update_source cell_id source st.session\n  | Protocol.Checkpoint ->\n      st.session <- Session.checkpoint st.session;\n      send_undo_redo st\n  | Protocol.Execute_cell { cell_id } -> enqueue_execution st [ cell_id ]\n  | Protocol.Execute_cells { cell_ids } -> enqueue_execution st cell_ids\n  | Protocol.Execute_all ->\n      let cell_ids =\n        List.filter_map\n          (fun c ->\n            match c with Cell.Code { id; _ } -> Some id | Text _ -> None)\n          (Doc.cells (Session.doc st.session))\n      in\n      enqueue_execution st cell_ids\n  | Protocol.Interrupt | Protocol.Complete _ | Protocol.Type_at _\n  | Protocol.Diagnostics _ ->\n      assert false (* dispatched by [handle_msg] before reaching here *)\n  | Protocol.Insert_cell { pos; kind } ->\n      let cell =\n        match kind with `Code -> Cell.code \"\" | `Text -> Cell.text \"\"\n      in\n      st.session <- Session.insert_cell ~pos cell st.session;\n      let status = Session.cell_status (Cell.id cell) st.session in\n      let kind_s = match kind with `Code -> \"code\" | `Text -> \"text\" in\n      log \"[cell] insert %s %s at %d\\n%!\" kind_s (Cell.id cell) pos;\n      send st (Protocol.cell_inserted_to_json ~pos cell status);\n      send_undo_redo st\n  | Protocol.Delete_cell { cell_id } ->\n      log \"[cell] delete %s\\n%!\" 
cell_id;\n      st.session <- Session.remove_cell cell_id st.session;\n      send st (Protocol.cell_deleted_to_json ~cell_id);\n      send_undo_redo st\n  | Protocol.Move_cell { cell_id; pos } ->\n      log \"[cell] move %s to %d\\n%!\" cell_id pos;\n      st.session <- Session.move_cell cell_id ~pos st.session;\n      send st (Protocol.cell_moved_to_json ~cell_id ~pos);\n      send_undo_redo st\n  | Protocol.Set_cell_kind { cell_id; kind } ->\n      let kind_s = match kind with `Code -> \"code\" | `Text -> \"text\" in\n      log \"[cell] set %s to %s\\n%!\" cell_id kind_s;\n      st.session <- Session.set_cell_kind cell_id kind st.session;\n      (match Doc.find cell_id (Session.doc st.session) with\n      | Some cell ->\n          let status = Session.cell_status cell_id st.session in\n          send st (Protocol.cell_updated_to_json cell status)\n      | None -> ());\n      send_undo_redo st\n  | Protocol.Set_cell_attrs { cell_id; attrs } ->\n      log \"[cell] set attrs %s\\n%!\" cell_id;\n      st.session <- Session.set_cell_attrs cell_id attrs st.session;\n      (match Doc.find cell_id (Session.doc st.session) with\n      | Some cell ->\n          let status = Session.cell_status cell_id st.session in\n          send st (Protocol.cell_updated_to_json cell status)\n      | None -> ());\n      send_undo_redo st\n  | Protocol.Clear_outputs { cell_id } -> (\n      st.session <- Session.clear_outputs cell_id st.session;\n      match Doc.find cell_id (Session.doc st.session) with\n      | Some cell ->\n          let status = Session.cell_status cell_id st.session in\n          send st (Protocol.cell_updated_to_json cell status)\n      | None -> ())\n  | Protocol.Clear_all_outputs ->\n      st.session <- Session.clear_all_outputs st.session;\n      send_notebook st\n  | Protocol.Save ->\n      st.session <- Session.checkpoint st.session;\n      let content =\n        Quill_markdown.to_string_with_outputs (Session.doc st.session)\n      in\n      write_file st.path 
content;\n      st.last_mtime <- get_mtime st.path;\n      log \"[save] %s\\n%!\" st.path;\n      send st (Protocol.saved_to_json ())\n  | Protocol.Undo ->\n      st.session <- Session.undo st.session;\n      send_notebook st\n  | Protocol.Redo ->\n      st.session <- Session.redo st.session;\n      send_notebook st\n\n(* ───── WebSocket handler ───── *)\n\nlet handle_msg st = function\n  | Protocol.Interrupt ->\n      log \"[exec] interrupt\\n%!\";\n      Mutex.lock st.exec_mutex;\n      st.exec_cancelled <- true;\n      Mutex.unlock st.exec_mutex;\n      st.kernel.interrupt ()\n  | Protocol.Complete { request_id; code; pos } ->\n      let items = st.kernel.complete ~code ~pos in\n      locked st (fun () ->\n          send st (Protocol.completions_to_json ~request_id items))\n  | Protocol.Type_at { request_id; code; pos } ->\n      let info =\n        match st.kernel.type_at with Some f -> f ~code ~pos | None -> None\n      in\n      locked st (fun () -> send st (Protocol.type_at_to_json ~request_id info))\n  | Protocol.Diagnostics { request_id; code } ->\n      let items =\n        match st.kernel.diagnostics with Some f -> f ~code | None -> []\n      in\n      locked st (fun () ->\n          send st (Protocol.diagnostics_to_json ~request_id items))\n  | msg -> locked st (fun () -> handle_client_msg st msg)\n\nlet ws_handler st _req ws =\n  locked st (fun () ->\n      st.ws_clients <- ws :: st.ws_clients;\n      log \"[ws] connected (%d active)\\n%!\" (List.length st.ws_clients);\n      (* Reload document from disk only if the file changed since we last loaded\n         or saved it. Re-parsing a file without cell ID markers generates new\n         random IDs, which would invalidate the session. 
*)\n      let mtime = get_mtime st.path in\n      (if mtime > st.last_mtime then\n         try\n           let md = read_file st.path in\n           let doc = Quill_markdown.of_string md in\n           st.session <- Session.create doc;\n           st.last_mtime <- mtime;\n           log \"[ws] reloaded %s\\n%!\" st.path\n         with exn -> log \"[ws] reload failed: %s\\n%!\" (Printexc.to_string exn));\n      send_notebook st);\n  let rec loop () =\n    match Httpd.ws_recv ws with\n    | Some msg -> (\n        match Protocol.client_msg_of_json msg with\n        | Ok client_msg ->\n            (try handle_msg st client_msg\n             with exn -> log \"[error] %s\\n%!\" (Printexc.to_string exn));\n            loop ()\n        | Error err ->\n            log \"[error] bad message: %s\\n%!\" err;\n            locked st (fun () -> send st (Protocol.error_to_json err));\n            loop ())\n    | None ->\n        locked st (fun () ->\n            st.ws_clients <- List.filter (fun w -> w != ws) st.ws_clients;\n            log \"[ws] disconnected (%d active)\\n%!\" (List.length st.ws_clients))\n  in\n  loop ()\n\n(* ───── Entry point ───── *)\n\nlet make_state ~create_kernel path =\n  let md = read_file path in\n  let doc = Quill_markdown.of_string md in\n  let session = Session.create doc in\n  let st =\n    {\n      session;\n      kernel =\n        {\n          execute = (fun ~cell_id:_ ~code:_ -> ());\n          interrupt = ignore;\n          complete = (fun ~code:_ ~pos:_ -> []);\n          type_at = None;\n          diagnostics = None;\n          is_complete = None;\n          status = (fun () -> Kernel.Starting);\n          shutdown = ignore;\n        };\n      path;\n      mutex = Mutex.create ();\n      ws_clients = [];\n      last_mtime = get_mtime path;\n      exec_queue = Queue.create ();\n      exec_mutex = Mutex.create ();\n      exec_cond = Condition.create ();\n      exec_cancelled = false;\n    }\n  in\n  let on_event ev = on_kernel_event st ev in\n  
st.kernel <- create_kernel ~on_event;\n  ignore (Thread.create exec_worker st : Thread.t);\n  st\n\nlet serve ~create_kernel ?(addr = \"127.0.0.1\") ?(port = 8888) ?on_ready path =\n  if not (Sys.file_exists path) then (\n    Printf.eprintf err_file_not_found path;\n    exit 1);\n  let st = make_state ~create_kernel path in\n  let server = Httpd.create ~addr ~port () in\n  Httpd.route server GET \"/\" (fun _req ->\n      Httpd.response\n        ~headers:[ (\"Content-Type\", \"text/html; charset=utf-8\") ]\n        Assets.index_html);\n  Httpd.static server ~prefix:\"/assets/\" ~loader:Assets.lookup ();\n  Httpd.websocket server \"/ws\" (ws_handler st);\n  let base_dir =\n    let abs =\n      if Filename.is_relative path then Filename.concat (Sys.getcwd ()) path\n      else path\n    in\n    Filename.dirname abs\n  in\n  Httpd.static server ~prefix:\"/\" ~loader:(file_loader base_dir) ();\n  let after_start () =\n    Printf.printf \"Quill: http://%s:%d (Ctrl-C to stop)\\n%!\" addr port;\n    match on_ready with Some f -> f () | None -> ()\n  in\n  Httpd.run ~after_start server;\n  st.kernel.shutdown ()\n\n(* ───── Directory mode ───── *)\n\nlet notebook_url_path (nb : Quill_project.notebook) =\n  let dir = Filename.dirname nb.path in\n  if dir = \".\" then \"/\" ^ nb.path ^ \"/\" else \"/\" ^ dir ^ \"/\"\n\nlet rec toc_notebooks toc =\n  List.concat_map\n    (fun e ->\n      match e with\n      | Quill_project.Notebook (nb, children) ->\n          if Quill_project.is_placeholder nb then toc_notebooks children\n          else nb :: toc_notebooks children\n      | _ -> [])\n    toc\n\nlet toc_to_json toc =\n  let rec entry_json = function\n    | Quill_project.Notebook (nb, children) ->\n        let fields =\n          [\n            (\"type\", Jsont.Json.string \"notebook\");\n            (\"title\", Jsont.Json.string nb.title);\n            (\"path\", Jsont.Json.string nb.path);\n            (\"url\", Jsont.Json.string (notebook_url_path nb));\n            ( 
\"number\",\n              Jsont.Json.string\n                (Quill_project.number_string (Quill_project.number toc nb)) );\n            (\"placeholder\", Jsont.Json.bool (Quill_project.is_placeholder nb));\n            (\"children\", Jsont.Json.list (List.map entry_json children));\n          ]\n        in\n        Protocol.json_obj fields\n    | Quill_project.Section title ->\n        Protocol.json_obj\n          [\n            (\"type\", Jsont.Json.string \"section\");\n            (\"title\", Jsont.Json.string title);\n          ]\n    | Quill_project.Separator ->\n        Protocol.json_obj [ (\"type\", Jsont.Json.string \"separator\") ]\n  in\n  Protocol.json_to_string (Jsont.Json.list (List.map entry_json toc))\n\nlet serve_dir ~create_kernel ?(addr = \"127.0.0.1\") ?(port = 8888) ?on_ready\n    ?(prelude = fun _ -> None) ~(toc : Quill_project.toc_item list) root =\n  let notebooks = toc_notebooks toc in\n  let states : (string, state) Hashtbl.t = Hashtbl.create 16 in\n  let states_mutex = Mutex.create () in\n  let get_or_create_state nb_path =\n    Mutex.lock states_mutex;\n    let st =\n      match Hashtbl.find_opt states nb_path with\n      | Some st -> st\n      | None ->\n          let abs_path = Filename.concat root nb_path in\n          let create_kernel ~on_event =\n            let k = create_kernel ~on_event in\n            (match prelude nb_path with\n            | Some code -> k.Kernel.execute ~cell_id:\"__prelude__\" ~code\n            | None -> ());\n            k\n          in\n          let st = make_state ~create_kernel abs_path in\n          Hashtbl.replace states nb_path st;\n          log \"[dir] created state for %s\\n%!\" nb_path;\n          st\n    in\n    Mutex.unlock states_mutex;\n    st\n  in\n  let server = Httpd.create ~addr ~port () in\n  let serve_html _req =\n    Httpd.response\n      ~headers:[ (\"Content-Type\", \"text/html; charset=utf-8\") ]\n      Assets.index_html\n  in\n  Httpd.route server GET \"/\" serve_html;\n  
List.iter\n    (fun (nb : Quill_project.notebook) ->\n      let url = notebook_url_path nb in\n      Httpd.route server GET url serve_html;\n      let url_noslash = String.sub url 0 (String.length url - 1) in\n      if url_noslash <> \"\" then Httpd.route server GET url_noslash serve_html)\n    notebooks;\n  Httpd.route server GET \"/api/notebooks\" (fun _req ->\n      Httpd.json (toc_to_json toc));\n  Httpd.static server ~prefix:\"/assets/\" ~loader:Assets.lookup ();\n  Httpd.websocket server \"/ws\" (fun req ws ->\n      let nb_path =\n        match List.assoc_opt \"path\" req.query with\n        | Some p -> p\n        | None -> (\n            match notebooks with\n            | nb :: _ -> nb.path\n            | [] ->\n                log \"[ws] no notebooks and no path param\\n%!\";\n                Httpd.ws_close ws;\n                failwith \"no notebooks\")\n      in\n      let st = get_or_create_state nb_path in\n      ws_handler st req ws);\n  Httpd.static server ~prefix:\"/\" ~loader:(file_loader root) ();\n  let after_start () =\n    Printf.printf \"Quill: http://%s:%d (Ctrl-C to stop)\\n%!\" addr port;\n    match on_ready with Some f -> f () | None -> ()\n  in\n  Httpd.run ~after_start server;\n  Hashtbl.iter (fun _ st -> st.kernel.shutdown ()) states\n"
  },
  {
    "path": "packages/quill/lib/quill-server/quill_server.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Web notebook server.\n\n    [Quill_server] serves a Jupyter-like notebook interface over HTTP and\n    WebSocket. *)\n\nval serve :\n  create_kernel:(on_event:(Quill.Kernel.event -> unit) -> Quill.Kernel.t) ->\n  ?addr:string ->\n  ?port:int ->\n  ?on_ready:(unit -> unit) ->\n  string ->\n  unit\n(** [serve ~create_kernel path] starts the web notebook server for the notebook\n    at [path]. [create_kernel] is called once to obtain a kernel. [addr]\n    defaults to [\"127.0.0.1\"], [port] to [8888]. [on_ready] is called after the\n    server socket is bound and listening, before the accept loop starts. Blocks\n    until the server is stopped. Exits the process with status [1] if [path]\n    does not exist. *)\n\n(** {1:dir Directory mode} *)\n\nval serve_dir :\n  create_kernel:(on_event:(Quill.Kernel.event -> unit) -> Quill.Kernel.t) ->\n  ?addr:string ->\n  ?port:int ->\n  ?on_ready:(unit -> unit) ->\n  ?prelude:(string -> string option) ->\n  toc:Quill_project.toc_item list ->\n  string ->\n  unit\n(** [serve_dir ~create_kernel ~toc root] starts the web notebook server for a\n    directory of notebooks at [root]. [toc] defines the table of contents\n    structure shown in the sidebar. Each notebook gets its own kernel, created\n    lazily on first access. [prelude] is called with the notebook's relative\n    path and may return OCaml code to execute before the notebook's cells.\n    [addr] defaults to [\"127.0.0.1\"], [port] to [8888]. *)\n"
  },
  {
    "path": "packages/quill/lib/quill-server/support/gen_assets.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Generates assets.ml from a frontend directory.\n\n   Usage: ocaml gen_assets.ml <frontend_dir> <output.ml>\n\n   Reads index.html, dist/ (JS/CSS), and fonts/ (woff2), writes an OCaml module\n   with: - [index_html] : the HTML page - [lookup] : maps asset paths to\n   contents *)\n\nlet read_file path =\n  let ic = open_in_bin path in\n  Fun.protect\n    ~finally:(fun () -> close_in ic)\n    (fun () ->\n      let len = in_channel_length ic in\n      really_input_string ic len)\n\nlet walk_dir root =\n  let rec aux prefix dir acc =\n    let entries = Sys.readdir dir in\n    Array.sort String.compare entries;\n    Array.fold_left\n      (fun acc name ->\n        let path = Filename.concat dir name in\n        let rel = if prefix = \"\" then name else prefix ^ \"/\" ^ name in\n        if Sys.is_directory path then aux rel path acc else (rel, path) :: acc)\n      acc entries\n  in\n  List.rev (aux \"\" root [])\n\nlet () =\n  let frontend_dir = Sys.argv.(1) in\n  let output_path = Sys.argv.(2) in\n  let index_path = Filename.concat frontend_dir \"index.html\" in\n  let dist_dir = Filename.concat frontend_dir \"dist\" in\n  let oc = open_out output_path in\n  Fun.protect\n    ~finally:(fun () -> close_out oc)\n    (fun () ->\n      Printf.fprintf oc \"let index_html = %S\\n\\n\" (read_file index_path);\n      let dist_files = walk_dir dist_dir in\n      let fonts_dir = Filename.concat frontend_dir \"fonts\" in\n      let font_files = walk_dir fonts_dir in\n      let font_files =\n        List.map (fun (rel, path) -> (\"fonts/\" ^ rel, path)) font_files\n      in\n      Printf.fprintf oc \"let lookup = function\\n\";\n      List.iter\n        (fun (rel, path) ->\n          Printf.fprintf oc \"  | %S -> 
Some %S\\n\" rel (read_file path))\n        (dist_files @ font_files);\n      Printf.fprintf oc \"  | _ -> None\\n\")\n"
  },
  {
    "path": "packages/quill/lib/quill-top/dune",
    "content": "(library\n (name quill_top)\n (public_name quill.top)\n (libraries quill compiler-libs.toplevel findlib unix threads.posix))\n"
  },
  {
    "path": "packages/quill/lib/quill-top/quill_top.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* ───── Toplevel primitives ───── *)\n\nlet findlib_predicates = ref [ \"byte\"; \"toploop\" ]\n\nlet ensure_findlib () =\n  match Findlib.init () with () -> true | exception _ -> false\n\n(* Mark packages that are already linked into the executable. Their .cmi files\n   need to be on the search path, but we must not try to load their .cma\n   again. *)\nlet add_packages pkgs =\n  if ensure_findlib () then\n    List.iter\n      (fun pkg ->\n        (match Findlib.package_directory pkg with\n        | dir -> Topdirs.dir_directory dir\n        | exception _ -> ());\n        if not (Findlib.is_recorded_package pkg) then\n          Findlib.record_package Findlib.Record_core pkg)\n      pkgs\n  else\n    (* Findlib unavailable — fall back to OCAMLPATH. *)\n    let sep = if Sys.win32 then ';' else ':' in\n    match Sys.getenv_opt \"OCAMLPATH\" with\n    | None -> ()\n    | Some ocamlpath ->\n        let roots = String.split_on_char sep ocamlpath in\n        List.iter\n          (fun pkg ->\n            let subdir =\n              String.concat Filename.dir_sep (String.split_on_char '.' pkg)\n            in\n            List.iter\n              (fun root ->\n                let dir = Filename.concat root subdir in\n                if Sys.file_exists dir && Sys.is_directory dir then\n                  Topdirs.dir_directory dir)\n              roots)\n          pkgs\n\n(* Try loading a single ancestor package. Returns [true] on success, [false] if\n   the archive references an undefined global (dependency not yet loaded). 
*)\nlet try_load_ancestor p =\n  let loaded =\n    Findlib.is_recorded_package p\n    && Findlib.type_of_recorded_package p = Findlib.Record_load\n  in\n  if loaded then true\n  else\n    let incore =\n      Findlib.is_recorded_package p\n      && Findlib.type_of_recorded_package p = Findlib.Record_core\n    in\n    let d = Findlib.package_directory p in\n    Topdirs.dir_directory d;\n    if incore then begin\n      Findlib.record_package Findlib.Record_load p;\n      true\n    end\n    else\n      let archive =\n        try Findlib.package_property !findlib_predicates p \"archive\"\n        with Not_found -> \"\"\n      in\n      let archives =\n        String.split_on_char ' ' archive |> List.filter (fun s -> s <> \"\")\n      in\n      try\n        List.iter\n          (fun arch ->\n            let path = Findlib.resolve_path ~base:d arch in\n            Topdirs.dir_load Format.err_formatter path)\n          archives;\n        Findlib.record_package Findlib.Record_load p;\n        true\n      with Symtable.Error (Undefined_global _) -> false\n\n(* Load a package: resolve its dependency chain and load archives.\n\n   Findlib's topological sort does not account for virtual library\n   implementations (a virtual package has no archive; its implementation archive\n   may appear later in the ancestor list). This causes [Undefined_global] when a\n   dependent is loaded before the implementation.\n\n   We handle this with a fixpoint loop: load what we can, collect failures, and\n   retry until either everything succeeds or no progress is made. 
*)\nlet load_package pkg =\n  if not (ensure_findlib ()) then\n    Printf.eprintf \"[quill] #require: findlib unavailable\\n%!\"\n  else\n    let ancestors =\n      Findlib.package_deep_ancestors !findlib_predicates [ pkg ]\n    in\n    let rec loop remaining =\n      let deferred =\n        List.filter (fun p -> not (try_load_ancestor p)) remaining\n      in\n      match deferred with\n      | [] -> ()\n      | _ when List.length deferred < List.length remaining -> loop deferred\n      | _ ->\n          (* No progress — report the packages we cannot load *)\n          List.iter\n            (fun p -> Printf.eprintf \"[quill] failed to load package %s\\n%!\" p)\n            deferred\n    in\n    loop ancestors\n\n(* ───── Initialization ───── *)\n\nlet initialized = ref false\nlet init_mutex = Mutex.create ()\n\nlet initialize_if_needed () =\n  Mutex.lock init_mutex;\n  Fun.protect\n    ~finally:(fun () -> Mutex.unlock init_mutex)\n    (fun () ->\n      if not !initialized then (\n        Sys.interactive := false;\n        Topeval.init ();\n        Toploop.initialize_toplevel_env ();\n        Toploop.input_name := \"//toplevel//\";\n        (* Register #require directive for loading packages at runtime. *)\n        Toploop.add_directive \"require\"\n          (Directive_string (fun pkg -> load_package pkg))\n          { section = \"Loading code\"; doc = \"Load a findlib package\" };\n        Sys.interactive := true;\n        initialized := true))\n\nlet install_printer name =\n  try\n    let phrase =\n      Printf.sprintf \"#install_printer %s;;\" name\n      |> Lexing.from_string\n      |> !Toploop.parse_toplevel_phrase\n    in\n    ignore (Toploop.execute_phrase false Format.err_formatter phrase)\n  with _ -> ()\n\nlet install_printer_fn ~ty f =\n  try\n    let parts = String.split_on_char '.' 
ty in\n    match Longident.unflatten parts with\n    | None -> ()\n    | Some lid ->\n        let path, _decl = Env.find_type_by_name lid !Toploop.toplevel_env in\n        let ty_expr = Ctype.newconstr path [] in\n        let printer_path = Path.Pident (Ident.create_local ty) in\n        Toploop.install_printer printer_path ty_expr f\n  with _ -> ()\n\n(* ───── Output capture ───── *)\n\n(** Pre-allocated read buffer for the poll thread. Avoids major heap allocations\n    (4096 > minor heap max) that could trigger GC while the execute thread is\n    inside Nx C code. *)\nlet poll_buf = Bytes.create 4096\n\n(** [read_available fd buf] reads whatever bytes are currently available on [fd]\n    into [buf] without blocking indefinitely (the caller uses [Unix.select]\n    first). Returns [None] on EOF. *)\nlet read_available fd buf =\n  match Unix.read fd buf 0 (Bytes.length buf) with\n  | 0 -> None\n  | n -> Some (Bytes.sub_string buf 0 n)\n  | exception Unix.Unix_error (Unix.EAGAIN, _, _) -> Some \"\"\n\n(** [drain_remaining fd] reads all remaining bytes after the write end is\n    closed. 
*)\nlet drain_remaining fd =\n  let buf = Buffer.create 256 in\n  let tmp = Bytes.create 4096 in\n  let rec loop () =\n    match Unix.read fd tmp 0 4096 with\n    | 0 -> ()\n    | n ->\n        Buffer.add_subbytes buf tmp 0 n;\n        loop ()\n  in\n  loop ();\n  Unix.close fd;\n  Buffer.contents buf\n\nlet capture ~on_stdout ~on_stderr ~on_display f =\n  let buf_out = Buffer.create 256 in\n  let buf_err = Buffer.create 256 in\n  let ppf_out = Format.formatter_of_buffer buf_out in\n  let ppf_err = Format.formatter_of_buffer buf_err in\n  (* Intercept Display_tag semantic tags on the toplevel formatter *)\n  Format.pp_set_print_tags ppf_out true;\n  Format.pp_set_formatter_stag_functions ppf_out\n    {\n      mark_open_stag = (fun _ -> \"\");\n      mark_close_stag = (fun _ -> \"\");\n      print_open_stag =\n        (fun stag ->\n          match stag with\n          | Quill.Cell.Display_tag { mime; data } ->\n              on_display (Quill.Cell.Display { mime; data })\n          | _ -> ());\n      print_close_stag = (fun _ -> ());\n    };\n  (* Pipes for raw stdout/stderr from user code (e.g. print_string) *)\n  let rd_out, wr_out = Unix.pipe ~cloexec:true () in\n  let rd_err, wr_err = Unix.pipe ~cloexec:true () in\n  let stdout_backup = Unix.dup ~cloexec:true Unix.stdout in\n  let stderr_backup = Unix.dup ~cloexec:true Unix.stderr in\n  (* Poll pipes in a background thread, streaming output as it arrives. Uses\n     Unix.select with a 50ms timeout so training progress prints (Printf.printf\n     \"\\rstep %d loss: %.4f%!\" ...) appear in real time. 
*)\n  let stop = Atomic.make false in\n  let poll_thread =\n    Thread.create\n      (fun () ->\n        while not (Atomic.get stop) do\n          let ready, _, _ =\n            try Unix.select [ rd_out; rd_err ] [] [] 0.05\n            with Unix.Unix_error (Unix.EINTR, _, _) -> ([], [], [])\n          in\n          List.iter\n            (fun fd ->\n              match read_available fd poll_buf with\n              | Some s when s <> \"\" ->\n                  if fd == rd_out then on_stdout s else on_stderr s\n              | _ -> ())\n            ready\n        done)\n      ()\n  in\n  let result = ref None in\n  Fun.protect\n    (fun () ->\n      flush stdout;\n      flush stderr;\n      Unix.dup2 ~cloexec:false wr_out Unix.stdout;\n      Unix.dup2 ~cloexec:false wr_err Unix.stderr;\n      result := Some (f ppf_out ppf_err))\n    ~finally:(fun () ->\n      Format.pp_print_flush ppf_out ();\n      Format.pp_print_flush ppf_err ();\n      flush stdout;\n      flush stderr;\n      Unix.dup2 ~cloexec:false stdout_backup Unix.stdout;\n      Unix.dup2 ~cloexec:false stderr_backup Unix.stderr;\n      Unix.close stdout_backup;\n      Unix.close stderr_backup;\n      (* Close write ends so poll thread and drain see EOF *)\n      Unix.close wr_out;\n      Unix.close wr_err);\n  (* Stop the poll thread and drain any remaining bytes *)\n  Atomic.set stop true;\n  Thread.join poll_thread;\n  let rest_out = drain_remaining rd_out in\n  let rest_err = drain_remaining rd_err in\n  if rest_out <> \"\" then on_stdout rest_out;\n  if rest_err <> \"\" then on_stderr rest_err;\n  (* Format buffer output (toplevel results like \"val x = ...\") *)\n  let toplevel_out = Buffer.contents buf_out in\n  let toplevel_err = Buffer.contents buf_err in\n  match !result with\n  | None -> failwith \"capture: unreachable\"\n  | Some ok -> (ok, toplevel_out, toplevel_err)\n\n(* ───── Execution ───── *)\n\nlet ensure_terminator code =\n  let trimmed = String.trim code in\n  if trimmed = \"\" || 
String.ends_with ~suffix:\";;\" trimmed then code\n  else code ^ \";;\"\n\nlet execute_code ppf_out ppf_err code =\n  let code = ensure_terminator code in\n  let lb = Lexing.from_string code in\n  lb.lex_curr_p <-\n    { pos_fname = \"//toplevel//\"; pos_lnum = 1; pos_bol = 0; pos_cnum = 0 };\n  let old_warnings_fmt = !Location.formatter_for_warnings in\n  Location.formatter_for_warnings := ppf_err;\n  let orig_input_lexbuf = !Location.input_lexbuf in\n  Location.input_lexbuf := Some lb;\n  let phrases = ref [] in\n  let parse_ok =\n    try\n      while true do\n        let phr = !Toploop.parse_toplevel_phrase lb in\n        phrases := phr :: !phrases\n      done;\n      assert false\n    with\n    | End_of_file -> true\n    | e ->\n        Location.report_exception ppf_err e;\n        false\n  in\n  let phrases = List.rev !phrases in\n  let num_phrases = List.length phrases in\n  let success = ref parse_ok in\n  Fun.protect\n    (fun () ->\n      List.iteri\n        (fun i phr ->\n          try\n            let is_last = i = num_phrases - 1 in\n            let ok = Toploop.execute_phrase is_last ppf_out phr in\n            success := !success && ok\n          with\n          | Sys.Break ->\n              success := false;\n              Format.fprintf ppf_err \"Interrupted.@.\"\n          | x ->\n              success := false;\n              Errors.report_error ppf_err x)\n        phrases)\n    ~finally:(fun () ->\n      Location.formatter_for_warnings := old_warnings_fmt;\n      Location.input_lexbuf := orig_input_lexbuf;\n      Format.pp_print_flush ppf_out ();\n      Format.pp_print_flush ppf_err ());\n  !success\n\n(* ───── Completion ───── *)\n\nlet clamp lo hi x = if x < lo then lo else if x > hi then hi else x\n\nlet starts_with ~prefix s =\n  let lp = String.length prefix and ls = String.length s in\n  lp <= ls && String.sub s 0 lp = prefix\n\nlet is_ident_char = function\n  | 'a' .. 'z' | 'A' .. 'Z' | '0' .. 
'9' | '_' | '\\'' -> true\n  | _ -> false\n\nlet is_path_char c = is_ident_char c || Char.equal c '.'\n\nlet parse_completion_context code pos =\n  let len = String.length code in\n  let pos = clamp 0 len pos in\n  let i = ref (pos - 1) in\n  while !i >= 0 && is_path_char code.[!i] do\n    decr i\n  done;\n  let start = !i + 1 in\n  let token = if pos > start then String.sub code start (pos - start) else \"\" in\n  let token =\n    if String.starts_with ~prefix:\".\" token then\n      String.sub token 1 (String.length token - 1)\n    else token\n  in\n  if token = \"\" then (None, \"\")\n  else\n    let trailing_dot = String.ends_with ~suffix:\".\" token in\n    let parts = String.split_on_char '.' token |> List.filter (( <> ) \"\") in\n    if trailing_dot then (Longident.unflatten parts, \"\")\n    else\n      match List.rev parts with\n      | [] -> (None, \"\")\n      | prefix :: rev_qual ->\n          let qualifier = Longident.unflatten (List.rev rev_qual) in\n          (qualifier, prefix)\n\nlet format_type env ty =\n  Printtyp.wrap_printing_env ~error:false env (fun () ->\n      Format.asprintf \"%a\" Printtyp.type_scheme ty)\n\nlet collect_env_items env qualifier =\n  let open Quill.Kernel in\n  let add label kind detail acc =\n    if String.length label = 0 then acc else { label; kind; detail } :: acc\n  in\n  let items =\n    Env.fold_values\n      (fun name _path (vd : Types.value_description) acc ->\n        add name Value (format_type env vd.val_type) acc)\n      qualifier env []\n  in\n  let items =\n    Env.fold_types\n      (fun name _path (td : Types.type_declaration) acc ->\n        let detail =\n          match td.type_manifest with\n          | Some ty -> \"= \" ^ format_type env ty\n          | None -> (\n              match td.type_kind with\n              | Type_abstract _ -> \"abstract\"\n              | Type_record _ -> \"record\"\n              | Type_variant _ -> \"variant\"\n              | Type_open -> \"open\")\n        in\n        add 
name Type detail acc)\n      qualifier env items\n  in\n  let items =\n    Env.fold_modules\n      (fun name _path (_md : Types.module_declaration) acc ->\n        add name Module \"module\" acc)\n      qualifier env items\n  in\n  let items =\n    Env.fold_modtypes\n      (fun name _path (_mtd : Types.modtype_declaration) acc ->\n        add name Module_type \"module type\" acc)\n      qualifier env items\n  in\n  let items =\n    Env.fold_constructors\n      (fun (c : Data_types.constructor_description) acc ->\n        let detail = format_type env c.cstr_res in\n        add c.cstr_name Constructor detail acc)\n      qualifier env items\n  in\n  Env.fold_labels\n    (fun (l : Data_types.label_description) acc ->\n      let detail = format_type env l.lbl_arg in\n      add l.lbl_name Label detail acc)\n    qualifier env items\n\nlet complete_names ~code ~pos =\n  let qualifier, prefix = parse_completion_context code pos in\n  let env = !Toploop.toplevel_env in\n  collect_env_items env qualifier\n  |> List.filter (fun (item : Quill.Kernel.completion_item) ->\n      String.length prefix = 0 || starts_with ~prefix item.label)\n  |> List.sort_uniq (fun (a : Quill.Kernel.completion_item) b ->\n      String.compare a.label b.label)\n\n(* ───── Parse and typecheck ───── *)\n\nlet parse_phrases code =\n  let code = ensure_terminator code in\n  let lb = Lexing.from_string code in\n  lb.lex_curr_p <-\n    { pos_fname = \"//toplevel//\"; pos_lnum = 1; pos_bol = 0; pos_cnum = 0 };\n  let phrases = ref [] in\n  (try\n     while true do\n       let phr = !Toploop.parse_toplevel_phrase lb in\n       phrases := phr :: !phrases\n     done\n   with End_of_file -> ());\n  List.rev !phrases\n\nlet typecheck_structure env structure =\n  let tstr, _sig, _names, _shape, _env =\n    Typemod.type_toplevel_phrase env structure\n  in\n  tstr\n\n(* ───── Type at position ───── *)\n\nlet loc_contains (loc : Location.t) pos =\n  (not loc.loc_ghost)\n  && loc.loc_start.pos_cnum <= pos\n  && pos 
<= loc.loc_end.pos_cnum\n\nlet loc_span (loc : Location.t) = loc.loc_end.pos_cnum - loc.loc_start.pos_cnum\n\nlet find_type_at_pos env (tstr : Typedtree.structure) pos =\n  let best = ref None in\n  let update loc ty =\n    if loc_contains loc pos then\n      match !best with\n      | Some (_, prev_loc, _) when loc_span loc >= loc_span prev_loc -> ()\n      | _ ->\n          let typ = format_type env ty in\n          best := Some (typ, loc, None)\n  in\n  let iter =\n    {\n      Tast_iterator.default_iterator with\n      expr =\n        (fun self (e : Typedtree.expression) ->\n          update e.exp_loc e.exp_type;\n          Tast_iterator.default_iterator.expr self e);\n      pat =\n        (fun (type k) self (p : k Typedtree.general_pattern) ->\n          update p.pat_loc p.pat_type;\n          Tast_iterator.default_iterator.pat self p);\n    }\n  in\n  iter.structure iter tstr;\n  match !best with\n  | None -> None\n  | Some (typ, loc, doc) ->\n      Some\n        Quill.Kernel.\n          {\n            typ;\n            doc;\n            from_pos = loc.loc_start.pos_cnum;\n            to_pos = loc.loc_end.pos_cnum;\n          }\n\nlet type_at_pos ~code ~pos =\n  let env = !Toploop.toplevel_env in\n  let phrases = parse_phrases code in\n  let rec try_phrases = function\n    | [] -> None\n    | Parsetree.Ptop_def structure :: rest -> (\n        match typecheck_structure env structure with\n        | tstr -> (\n            match find_type_at_pos env tstr pos with\n            | Some _ as result -> result\n            | None -> try_phrases rest)\n        | exception _ -> try_phrases rest)\n    | _ :: rest -> try_phrases rest\n  in\n  try_phrases phrases\n\n(* ───── Diagnostics ───── *)\n\nlet loc_to_positions (loc : Location.t) =\n  (loc.loc_start.pos_cnum, loc.loc_end.pos_cnum)\n\nlet error_loc_of_exn exn =\n  match exn with\n  | Location.Error report -> report.main.loc\n  | _ -> Location.in_file \"//toplevel//\"\n\nlet format_exn exn =\n  match 
Location.error_of_exn exn with\n  | Some (`Ok report) -> Format.asprintf \"%a\" Location.print_report report\n  | _ -> Printexc.to_string exn\n\nlet compute_diagnostics ~code =\n  let diags = ref [] in\n  let len = String.length code in\n  let add_diag severity loc message =\n    let from_pos, to_pos = loc_to_positions loc in\n    (* Clamp to valid range; skip diagnostics with no usable location *)\n    let from_pos = clamp 0 len from_pos in\n    let to_pos = clamp 0 len to_pos in\n    let to_pos =\n      if to_pos <= from_pos then min (from_pos + 1) len else to_pos\n    in\n    if from_pos < len then\n      diags := Quill.Kernel.{ from_pos; to_pos; severity; message } :: !diags\n  in\n  (match parse_phrases code with\n  | _ -> ()\n  | exception exn -> add_diag Error (error_loc_of_exn exn) (format_exn exn));\n  List.rev !diags\n\n(* ───── Phrase completeness ───── *)\n\nlet is_complete_phrase code =\n  let trimmed = String.trim code in\n  if trimmed = \"\" then false\n  else if String.ends_with ~suffix:\";;\" trimmed then true\n  else\n    (* Try parsing with \";;\" appended. If it parses, the phrase is complete. If\n       End_of_file, the parser consumed the phrase and wants more. If syntax\n       error, the code is broken -- submit to show the error. 
*)\n    let code_term = trimmed ^ \";;\" in\n    let lb = Lexing.from_string code_term in\n    lb.lex_curr_p <-\n      { pos_fname = \"//toplevel//\"; pos_lnum = 1; pos_bol = 0; pos_cnum = 0 };\n    match !Toploop.parse_toplevel_phrase lb with\n    | _ -> true\n    | exception End_of_file -> false\n    | exception _ -> true\n\n(* ───── Rich display detection ───── *)\n\nlet base64_decode_table =\n  let t = Array.make 256 (-1) in\n  String.iteri\n    (fun i c -> t.(Char.code c) <- i)\n    \"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/\";\n  t\n\nlet base64_decode s =\n  let len = String.length s in\n  let valid = ref 0 in\n  for i = 0 to len - 1 do\n    if base64_decode_table.(Char.code (String.unsafe_get s i)) >= 0 then\n      incr valid\n  done;\n  let out_len = !valid * 3 / 4 in\n  let out = Bytes.create out_len in\n  let j = ref 0 in\n  let acc = ref 0 in\n  let bits = ref 0 in\n  for i = 0 to len - 1 do\n    let v = base64_decode_table.(Char.code (String.unsafe_get s i)) in\n    if v >= 0 then begin\n      acc := (!acc lsl 6) lor v;\n      bits := !bits + 6;\n      if !bits >= 8 then begin\n        bits := !bits - 8;\n        if !j < out_len then begin\n          Bytes.unsafe_set out !j (Char.chr ((!acc lsr !bits) land 0xff));\n          incr j\n        end\n      end\n    end\n  done;\n  Bytes.sub_string out 0 !j\n\n(** Scan [s] for markdown data-URI patterns [![...](data:MIME;base64,DATA)] and\n    emit each as a Display output. Surrounding text is emitted as Stdout. This\n    allows pretty-printers (e.g. hugin, talon) to render rich content in quill\n    without depending on quill.\n\n    For text MIME types, the base64 data is decoded so that Display.data\n    contains raw text. For binary types (image), data remains base64-encoded. 
*)\nlet emit_with_images ~emit s =\n  let len = String.length s in\n  let text_start = ref 0 in\n  let i = ref 0 in\n  while !i < len - 1 do\n    if\n      Char.equal (String.unsafe_get s !i) '!'\n      && Char.equal (String.unsafe_get s (!i + 1)) '['\n    then begin\n      let start = !i in\n      (* Skip past alt text to find ]( *)\n      let j = ref (!i + 2) in\n      while !j < len && not (Char.equal (String.unsafe_get s !j) ']') do\n        incr j\n      done;\n      if\n        !j < len - 1\n        && Char.equal (String.unsafe_get s !j) ']'\n        && Char.equal (String.unsafe_get s (!j + 1)) '('\n      then begin\n        let paren_start = !j + 2 in\n        (* Check for data: URI *)\n        let prefix = \"data:\" in\n        let prefix_len = String.length prefix in\n        if\n          paren_start + prefix_len < len\n          && String.sub s paren_start prefix_len = prefix\n        then begin\n          (* Find ;base64, *)\n          let k = ref (paren_start + prefix_len) in\n          let base64_marker = \";base64,\" in\n          let marker_len = String.length base64_marker in\n          let found_marker = ref false in\n          let mime_end = ref 0 in\n          while !k < len - marker_len && not !found_marker do\n            if String.sub s !k marker_len = base64_marker then begin\n              found_marker := true;\n              mime_end := !k\n            end\n            else incr k\n          done;\n          if !found_marker then begin\n            let data_start = !mime_end + marker_len in\n            (* Find closing ) *)\n            let m = ref data_start in\n            while !m < len && not (Char.equal (String.unsafe_get s !m) ')') do\n              incr m\n            done;\n            if !m < len then begin\n              let mime =\n                String.sub s (paren_start + prefix_len)\n                  (!mime_end - paren_start - prefix_len)\n              in\n              let raw_data = String.sub s data_start (!m - 
data_start) in\n              (* For text MIME types, decode base64 to raw text *)\n              let data =\n                if String.length mime >= 5 && String.sub mime 0 5 = \"text/\" then\n                  base64_decode raw_data\n                else raw_data\n              in\n              (* Emit text before this image *)\n              if start > !text_start then\n                emit\n                  (Quill.Cell.Stdout\n                     (String.sub s !text_start (start - !text_start)));\n              emit (Quill.Cell.Display { mime; data });\n              i := !m + 1;\n              text_start := !i\n            end\n            else incr i\n          end\n          else incr i\n        end\n        else incr i\n      end\n      else incr i\n    end\n    else incr i\n  done;\n  (* Emit remaining text *)\n  if !text_start < len then begin\n    let rest = String.sub s !text_start (len - !text_start) in\n    if String.trim rest <> \"\" then emit (Quill.Cell.Stdout rest)\n  end\n\n(* ───── Kernel interface ───── *)\n\nlet status_ref = ref Quill.Kernel.Idle\n\nlet create ?setup ~on_event () =\n  let setup_done = ref false in\n  let ensure_setup () =\n    if not !setup_done then (\n      setup_done := true;\n      initialize_if_needed ();\n      match setup with Some f -> f () | None -> ())\n  in\n  let execute ~cell_id ~code =\n    ensure_setup ();\n    status_ref := Quill.Kernel.Busy;\n    on_event (Quill.Kernel.Status_changed Busy);\n    let emit output = on_event (Quill.Kernel.Output { cell_id; output }) in\n    let ok, toplevel_out, toplevel_err =\n      capture\n        ~on_stdout:(fun s -> emit (Quill.Cell.Stdout s))\n        ~on_stderr:(fun s -> emit (Quill.Cell.Stderr s))\n        ~on_display:emit\n        (fun ppf_out ppf_err -> execute_code ppf_out ppf_err code)\n    in\n    (* Emit toplevel formatter output (val bindings, type info). Scan for\n       markdown data-URI images and convert to Display outputs. 
*)\n    if toplevel_out <> \"\" then emit_with_images ~emit toplevel_out;\n    if toplevel_err <> \"\" then emit (Quill.Cell.Stderr toplevel_err);\n    (* Signal completion *)\n    on_event (Quill.Kernel.Finished { cell_id; success = ok });\n    status_ref := Quill.Kernel.Idle;\n    on_event (Quill.Kernel.Status_changed Idle)\n  in\n  let interrupt () =\n    (* Send SIGINT to the current thread - this will cause Sys.Break *)\n    try Unix.kill (Unix.getpid ()) Sys.sigint with _ -> ()\n  in\n  let complete ~code ~pos =\n    try\n      ensure_setup ();\n      complete_names ~code ~pos\n    with exn ->\n      Printf.eprintf \"[quill-top] complete error: %s\\n%!\"\n        (Printexc.to_string exn);\n      []\n  in\n  let status () = !status_ref in\n  let shutdown () =\n    status_ref := Quill.Kernel.Shutting_down;\n    on_event (Quill.Kernel.Status_changed Shutting_down)\n  in\n  {\n    Quill.Kernel.execute;\n    interrupt;\n    complete;\n    type_at =\n      Some\n        (fun ~code ~pos ->\n          try\n            ensure_setup ();\n            type_at_pos ~code ~pos\n          with exn ->\n            Printf.eprintf \"[quill-top] type_at error: %s\\n%!\"\n              (Printexc.to_string exn);\n            None);\n    diagnostics =\n      Some\n        (fun ~code ->\n          try\n            ensure_setup ();\n            compute_diagnostics ~code\n          with exn ->\n            Printf.eprintf \"[quill-top] diagnostics error: %s\\n%!\"\n              (Printexc.to_string exn);\n            []);\n    is_complete =\n      Some\n        (fun code ->\n          try\n            ensure_setup ();\n            is_complete_phrase code\n          with _ -> false);\n    status;\n    shutdown;\n  }\n"
  },
  {
    "path": "packages/quill/lib/quill-top/quill_top.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** OCaml toplevel kernel for Quill.\n\n    Provides an in-process OCaml toplevel as a {!Quill.Kernel.t}. Stdout and\n    stderr are streamed in real time during execution. Rich outputs (images,\n    HTML) are emitted via {!Quill.Cell.Display_tag} semantic tags on the\n    toplevel formatter. *)\n\nval initialize_if_needed : unit -> unit\n(** [initialize_if_needed ()] ensures the OCaml toplevel environment is\n    initialized. Safe to call multiple times; only the first call has effect. *)\n\nval add_packages : string list -> unit\n(** [add_packages pkgs] resolves each findlib package name and adds its\n    directory to the toplevel load path, marking each as already linked into the\n    executable. Unknown packages are silently skipped. *)\n\nval load_package : string -> unit\n(** [load_package pkg] resolves the findlib package [pkg] and all its transitive\n    dependencies, adds their directories, and dynamically loads their bytecode\n    archives. Packages already loaded or marked in-core via {!add_packages} are\n    skipped. Raises if the package is not found. *)\n\nval install_printer : string -> unit\n(** [install_printer name] installs a toplevel pretty-printer by evaluating\n    [#install_printer name;;]. The printer must be resolvable in the current\n    toplevel environment (i.e. its module directory was previously added via\n    {!add_packages}). Silently does nothing on failure. *)\n\nval install_printer_fn :\n  ty:string -> (Format.formatter -> Obj.t -> unit) -> unit\n(** [install_printer_fn ~ty f] registers [f] as a pretty-printer for values of\n    type [ty] (e.g. [\"Hugin.figure\"]). The type is looked up in the toplevel\n    environment. 
Unlike {!install_printer}, the function does not need to be\n    resolvable by name -- it is passed directly. Silently does nothing if the\n    type cannot be resolved. *)\n\nval create :\n  ?setup:(unit -> unit) ->\n  on_event:(Quill.Kernel.event -> unit) ->\n  unit ->\n  Quill.Kernel.t\n(** [create ?setup ~on_event ()] creates a new OCaml toplevel kernel. Kernel\n    events are delivered by calling [on_event]. [setup] is called once before\n    the first cell execution, after toplevel initialization -- use it to call\n    {!add_packages} and {!install_printer}. *)\n"
  },
  {
    "path": "packages/quill/lib/quill-tui/dune",
    "content": "(library\n (name quill_tui)\n (public_name quill.tui)\n (libraries\n  quill\n  quill.markdown\n  mosaic\n  mosaic.ui\n  toffee\n  matrix.grid\n  tree-sitter.ocaml\n  unix))\n"
  },
  {
    "path": "packages/quill/lib/quill-tui/quill_tui.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Mosaic\nopen Quill\n\n(* ───── Model ───── *)\n\ntype mode = Normal | Editing\ntype footer_msg_kind = Info | Warning | Error | Confirm\ntype footer_msg = { kind : footer_msg_kind; text : string; created_at : float }\n\ntype completion = {\n  prefix : string;\n  cursor_byte : int;\n  replace_start_byte : int;\n  items : string list;\n  selected : int;\n}\n\ntype model = {\n  session : Session.t;\n  kernel : Kernel.t;\n  event_queue : Kernel.event Queue.t;\n  path : string;\n  focus : int;\n  mode : mode;\n  dirty : bool;\n  footer_msg : footer_msg option;\n  last_mtime : float;\n  reload_acc : float;\n  confirm_quit : bool;\n  show_help : bool;\n  clock : float;\n  viewport_width : int;\n  viewport_height : int;\n  edit_cursor : int;\n  edit_cursor_override : int option;\n  edit_selection : (int * int) option;\n  completion_popup_open : bool;\n  completion : completion option;\n}\n\ntype msg =\n  | Focus_next\n  | Focus_prev\n  | Execute_focused\n  | Execute_and_advance\n  | Execute_all\n  | Interrupt\n  | Insert_code_below\n  | Insert_text_below\n  | Delete_focused\n  | Toggle_cell_kind\n  | Move_up\n  | Move_down\n  | Clear_focused\n  | Clear_all\n  | Save\n  | Quit\n  | Tick of float\n  | Dismiss_message\n  | Toggle_help\n  | Resize of int * int\n  | Enter_edit\n  | Exit_edit\n  | Edit_source of string\n  | Submit_edit of string\n  | Edit_cursor_changed of int * (int * int) option\n  | Trigger_completion\n  | Next_completion\n  | Prev_completion\n  | Accept_completion\n  | Dismiss_completion\n  | Deferred_focus_editor\n\n(* ───── Palette ───── *)\n\nlet chrome_bg = Ansi.Color.of_rgb 24 24 30\nlet accent = Ansi.Color.of_rgb 218 165 80\nlet accent_dim = Ansi.Color.of_rgb 140 110 60\nlet 
border_focused = Ansi.Color.of_rgb 120 120 140\nlet border_unfocused = Ansi.Color.of_rgb 50 50 58\nlet label_fg = Ansi.Color.of_rgb 100 100 115\nlet hint_fg = Ansi.Color.of_rgb 80 80 92\nlet output_fg = Ansi.Color.of_rgb 170 175 185\nlet output_dim_fg = Ansi.Color.of_rgb 120 125 135\nlet warning_fg = Ansi.Color.of_rgb 210 180 100\nlet error_fg = Ansi.Color.of_rgb 210 100 100\nlet error_bg = Ansi.Color.of_rgb 50 30 30\nlet info_fg = Ansi.Color.of_rgb 150 160 175\nlet overlay_bg = Ansi.Color.of_rgb 12 12 16\nlet cell_bg_focused = Ansi.Color.of_rgb 30 30 38\nlet reload_interval = 1.0\nlet template = \"# Untitled\\n\\n```ocaml\\n\\n```\\n\"\nlet scroll_box_id = \"notebook-scroll\"\nlet textarea_id = \"cell-editor\"\nlet help_scroll_id = \"footer-help-scroll\"\nlet lp n = Toffee.Style.Length_percentage.length (Float.of_int n)\n\nlet padding_lrtb ~l ~r ~t ~b =\n  Toffee.Geometry.Rect.make ~left:(lp l) ~right:(lp r) ~top:(lp t)\n    ~bottom:(lp b)\n\n(* ───── Helpers ───── *)\n\nlet read_file path =\n  let ic = open_in path in\n  Fun.protect\n    ~finally:(fun () -> close_in ic)\n    (fun () -> really_input_string ic (in_channel_length ic))\n\nlet write_file path content =\n  let oc = open_out path in\n  Fun.protect\n    ~finally:(fun () -> close_out oc)\n    (fun () -> output_string oc content)\n\nlet get_mtime path =\n  try (Unix.stat path).Unix.st_mtime with Unix.Unix_error _ -> 0.\n\nlet drain_events event_queue session =\n  let rec loop session =\n    match Queue.pop event_queue with\n    | Kernel.Output { cell_id; output } ->\n        loop (Session.apply_output cell_id output session)\n    | Kernel.Finished { cell_id; success } ->\n        loop (Session.finish_execution cell_id ~success session)\n    | Kernel.Status_changed _ -> loop session\n    | exception Queue.Empty -> session\n  in\n  loop session\n\nlet focused_cell m = Doc.nth m.focus (Session.doc m.session)\nlet cell_count m = Doc.length (Session.doc m.session)\nlet char_eq c u = Uchar.equal u (Uchar.of_char 
c)\n\nlet with_footer_message m kind text =\n  { m with footer_msg = Some { kind; text; created_at = m.clock } }\n\nlet clear_footer_message m = { m with footer_msg = None }\n\nlet clear_confirm_message m =\n  match m.footer_msg with\n  | Some { kind = Confirm; _ } -> clear_footer_message m\n  | _ -> m\n\nlet clamp lo hi x = if x < lo then lo else if x > hi then hi else x\n\nlet lowercase_codepoint i =\n  if i >= Char.code 'A' && i <= Char.code 'Z' then i + 32 else i\n\nlet starts_with ~prefix s =\n  let lp = String.length prefix and ls = String.length s in\n  lp <= ls && String.sub s 0 lp = prefix\n\nlet is_ident_start = function\n  | 'a' .. 'z' | 'A' .. 'Z' | '_' -> true\n  | _ -> false\n\nlet is_ident_char = function\n  | 'a' .. 'z' | 'A' .. 'Z' | '0' .. '9' | '_' | '\\'' -> true\n  | _ -> false\n\nlet unique_sorted strings =\n  let sorted = List.sort String.compare strings in\n  let rec dedup acc = function\n    | a :: (b :: _ as tl) when String.equal a b -> dedup acc tl\n    | x :: tl -> dedup (x :: acc) tl\n    | [] -> List.rev acc\n  in\n  dedup [] sorted\n\nlet collect_identifiers s =\n  let tbl = Hashtbl.create 64 in\n  let n = String.length s in\n  let i = ref 0 in\n  while !i < n do\n    if is_ident_start s.[!i] then begin\n      let j = ref (!i + 1) in\n      while !j < n && is_ident_char s.[!j] do\n        incr j\n      done;\n      let token = String.sub s !i (!j - !i) in\n      if String.length token >= 2 then Hashtbl.replace tbl token ();\n      i := !j\n    end\n    else incr i\n  done;\n  Hashtbl.fold (fun key () acc -> key :: acc) tbl []\n\nlet ocaml_keywords =\n  [\n    \"and\";\n    \"as\";\n    \"begin\";\n    \"class\";\n    \"done\";\n    \"else\";\n    \"end\";\n    \"exception\";\n    \"external\";\n    \"false\";\n    \"for\";\n    \"fun\";\n    \"function\";\n    \"if\";\n    \"in\";\n    \"include\";\n    \"let\";\n    \"match\";\n    \"module\";\n    \"mutable\";\n    \"of\";\n    \"open\";\n    \"rec\";\n    \"sig\";\n    
\"struct\";\n    \"then\";\n    \"true\";\n    \"try\";\n    \"type\";\n    \"val\";\n    \"when\";\n    \"with\";\n  ]\n\nlet take_first n xs =\n  let rec loop acc n xs =\n    if n <= 0 then List.rev acc\n    else\n      match xs with [] -> List.rev acc | x :: tl -> loop (x :: acc) (n - 1) tl\n  in\n  loop [] n xs\n\nlet utf8_codepoint_offsets s =\n  let len = String.length s in\n  let rec prev_start i =\n    if i <= 0 then 0\n    else if Char.code s.[i] land 0xC0 = 0x80 then prev_start (i - 1)\n    else i\n  in\n  let rec loop acc i =\n    if i <= 0 then Array.of_list (0 :: acc)\n    else\n      let j = prev_start (i - 1) in\n      loop (i :: acc) j\n  in\n  loop [] len\n\nlet grapheme_byte_offsets s = utf8_codepoint_offsets s\n\nlet grapheme_count s =\n  let offsets = grapheme_byte_offsets s in\n  Array.length offsets - 1\n\nlet cursor_byte_of code cursor =\n  let offsets = grapheme_byte_offsets code in\n  let max_cursor = Array.length offsets - 1 in\n  let cursor = clamp 0 max_cursor cursor in\n  (cursor, offsets.(cursor))\n\nlet cursor_of_byte code byte =\n  let offsets = grapheme_byte_offsets code in\n  let byte = clamp 0 (String.length code) byte in\n  let rec loop i =\n    if i >= Array.length offsets then Array.length offsets - 1\n    else if offsets.(i) >= byte then i\n    else loop (i + 1)\n  in\n  loop 0\n\nlet find_prefix_at_cursor code ~cursor ~selection =\n  match selection with\n  | Some _ -> None\n  | None ->\n      let cursor, cursor_byte = cursor_byte_of code cursor in\n      let len = String.length code in\n      let at_ident_end =\n        cursor_byte = len || not (is_ident_char code.[cursor_byte])\n      in\n      if not at_ident_end then None\n      else\n        let i = ref (cursor_byte - 1) in\n        while\n          !i >= 0 && (is_ident_char code.[!i] || Char.equal code.[!i] '.')\n        do\n          decr i\n        done;\n        let start = !i + 1 in\n        let token =\n          if cursor_byte > start then String.sub code start 
(cursor_byte - start)\n          else \"\"\n        in\n        let replace_start_byte, prefix =\n          match String.rindex_opt token '.' with\n          | Some dot ->\n              ( start + dot + 1,\n                String.sub token (dot + 1) (String.length token - dot - 1) )\n          | None -> (start, token)\n        in\n        Some (cursor, cursor_byte, replace_start_byte, prefix)\n\nlet selected_completion_item c =\n  match c.items with\n  | [] -> None\n  | items ->\n      let len = List.length items in\n      Some (List.nth items (c.selected mod len))\n\nlet index_of item items =\n  let rec loop i = function\n    | [] -> None\n    | x :: _ when String.equal x item -> Some i\n    | _ :: tl -> loop (i + 1) tl\n  in\n  loop 0 items\n\nlet cycle_completion c delta =\n  let len = List.length c.items in\n  if len = 0 then c\n  else { c with selected = (c.selected + delta + len) mod len }\n\nlet footer_message_timeout kind =\n  match kind with\n  | Info -> Some 3.\n  | Warning -> Some 5.\n  | Error | Confirm -> None\n\nlet expire_footer_message m =\n  match m.footer_msg with\n  | Some ({ kind; created_at; _ } as footer_msg) -> (\n      match footer_message_timeout kind with\n      | Some timeout when m.clock -. 
created_at >= timeout ->\n          { m with footer_msg = None }\n      | _ -> { m with footer_msg = Some footer_msg })\n  | None -> m\n\nlet is_navigation_msg msg =\n  match msg with\n  | Focus_next | Focus_prev | Move_up | Move_down -> true\n  | _ -> false\n\nlet should_clear_error_msg msg =\n  if is_navigation_msg msg then false\n  else\n    match msg with\n    | Tick _ | Dismiss_message | Toggle_help | Edit_cursor_changed _ -> false\n    | _ -> true\n\nlet current_code_cell m =\n  match focused_cell m with\n  | Some (Cell.Code { id; source; _ }) -> Some (id, source)\n  | _ -> None\n\nlet build_completion ?(force = false) m code ~cursor ~selection =\n  match find_prefix_at_cursor code ~cursor ~selection with\n  | None -> None\n  | Some (_, cursor_byte, replace_start_byte, prefix) -> (\n      if (not force) && String.length prefix = 0 then None\n      else\n        let kernel_items =\n          try\n            List.map\n              (fun (c : Kernel.completion_item) -> c.label)\n              (m.kernel.complete ~code ~pos:cursor_byte)\n          with _ -> []\n        in\n        let items =\n          unique_sorted\n            (kernel_items @ collect_identifiers code @ ocaml_keywords)\n          |> List.filter (fun item ->\n              (String.length prefix = 0 || starts_with ~prefix item)\n              && not (String.equal item prefix))\n          |> take_first 200\n        in\n        match items with\n        | [] -> None\n        | _ ->\n            Some\n              { prefix; cursor_byte; replace_start_byte; items; selected = 0 })\n\nlet preserve_selection prev next =\n  match (prev, next) with\n  | Some prev, Some next -> (\n      match selected_completion_item prev with\n      | Some item -> (\n          match index_of item next.items with\n          | Some idx -> Some { next with selected = idx }\n          | None -> Some next)\n      | None -> Some next)\n  | _, x -> x\n\nlet recompute_completion m =\n  match current_code_cell m with\n  | None -> 
{ m with completion = None; completion_popup_open = false }\n  | Some (_, source) ->\n      let force = m.completion_popup_open in\n      let next =\n        build_completion ~force m source ~cursor:m.edit_cursor\n          ~selection:m.edit_selection\n      in\n      { m with completion = preserve_selection m.completion next }\n\nlet ghost_text m =\n  match m.completion with\n  | None -> None\n  | Some c when String.length c.prefix = 0 -> None\n  | Some c -> (\n      match selected_completion_item c with\n      | None -> None\n      | Some item when starts_with ~prefix:c.prefix item ->\n          let suffix =\n            String.sub item (String.length c.prefix)\n              (String.length item - String.length c.prefix)\n          in\n          if String.length suffix = 0 then None else Some suffix\n      | Some _ -> None)\n\nlet replace_range_at_byte s ~start_byte ~end_byte text =\n  let len = String.length s in\n  let start_byte = clamp 0 len start_byte in\n  let end_byte = clamp start_byte len end_byte in\n  String.sub s 0 start_byte ^ text ^ String.sub s end_byte (len - end_byte)\n\nlet apply_completion m c choice =\n  match current_code_cell m with\n  | None -> m\n  | Some (cell_id, source) ->\n      let code =\n        replace_range_at_byte source ~start_byte:c.replace_start_byte\n          ~end_byte:c.cursor_byte choice\n      in\n      let cursor =\n        cursor_of_byte code (c.replace_start_byte + String.length choice)\n      in\n      let session = Session.update_source cell_id code m.session in\n      {\n        m with\n        session;\n        dirty = true;\n        edit_cursor = cursor;\n        edit_cursor_override = Some cursor;\n        edit_selection = None;\n        completion_popup_open = false;\n        completion = None;\n      }\n      |> recompute_completion\n\nlet cursor_line code cursor =\n  let _, cursor_byte = cursor_byte_of code cursor in\n  let line = ref 0 in\n  for i = 0 to cursor_byte - 1 do\n    if code.[i] = '\\n' then incr 
line\n  done;\n  !line\n\nlet active_line_colors code cursor =\n  let line = cursor_line code cursor in\n  [\n    ( line,\n      {\n        Line_number.gutter = Ansi.Color.of_rgb 48 48 68;\n        content = Some (Ansi.Color.of_rgb 32 32 48);\n      } );\n  ]\n\nlet highlight_source source =\n  try\n    Tree_sitter_ocaml.highlight_ocaml source\n    |> Syntax_theme.apply Syntax_theme.default ~content:source\n  with _ -> []\n\nlet editor_on_key m ev =\n  let data = Event.Key.data ev in\n  if data.event_type = Release then None\n  else\n    let md = data.modifier in\n    match data.key with\n    | Escape when m.completion_popup_open ->\n        Event.Key.prevent_default ev;\n        Some Dismiss_completion\n    | Enter\n      when m.completion_popup_open\n           && not (md.ctrl || md.alt || md.super || md.shift) ->\n        Event.Key.prevent_default ev;\n        Some Accept_completion\n    | Tab when m.completion_popup_open && md.shift ->\n        Event.Key.prevent_default ev;\n        Some Prev_completion\n    | Tab when m.completion_popup_open ->\n        Event.Key.prevent_default ev;\n        Some Accept_completion\n    | Tab when Option.is_none m.edit_selection ->\n        Event.Key.prevent_default ev;\n        Some Trigger_completion\n    | Char c\n      when m.completion_popup_open && md.ctrl\n           && lowercase_codepoint (Uchar.to_int c) = Char.code 'n' ->\n        Event.Key.prevent_default ev;\n        Some Next_completion\n    | Char c\n      when m.completion_popup_open && md.ctrl\n           && lowercase_codepoint (Uchar.to_int c) = Char.code 'p' ->\n        Event.Key.prevent_default ev;\n        Some Prev_completion\n    | Char c when md.ctrl && Uchar.to_int c = Char.code ' ' ->\n        Event.Key.prevent_default ev;\n        Some Trigger_completion\n    | Line_feed when not (md.ctrl || md.alt || md.super) ->\n        (* Shift+Enter arrives as Line_feed (0x0a) without shift flag *)\n        Event.Key.prevent_default ev;\n        Some 
Execute_and_advance\n    | _ -> None\n\n(* ───── Init ───── *)\n\nlet init ~create_kernel ~path () =\n  let event_queue = Queue.create () in\n  let on_event ev = Queue.push ev event_queue in\n  let kernel = create_kernel ~on_event in\n  let md =\n    if Sys.file_exists path then read_file path\n    else (\n      write_file path template;\n      template)\n  in\n  let doc = Quill_markdown.of_string md in\n  let session = Session.create doc in\n  let last_mtime = get_mtime path in\n  let focus = 0 in\n  let is_code_cell =\n    match Doc.nth focus doc with Some (Cell.Code _) -> true | _ -> false\n  in\n  let edit_cursor =\n    match Doc.nth focus doc with\n    | Some (Cell.Code { source; _ }) -> grapheme_count source\n    | _ -> 0\n  in\n  let initial_mode = if is_code_cell then Editing else Normal in\n  let title_cmd =\n    Cmd.set_title (Printf.sprintf \"Quill - %s\" (Filename.basename path))\n  in\n  let initial_cmd =\n    if is_code_cell then Cmd.batch [ title_cmd; Cmd.focus textarea_id ]\n    else title_cmd\n  in\n  ( {\n      session;\n      kernel;\n      event_queue;\n      path;\n      focus;\n      mode = initial_mode;\n      dirty = false;\n      footer_msg = None;\n      last_mtime;\n      reload_acc = 0.;\n      confirm_quit = false;\n      show_help = false;\n      clock = 0.;\n      viewport_width = 120;\n      viewport_height = 32;\n      edit_cursor;\n      edit_cursor_override = (if is_code_cell then Some edit_cursor else None);\n      edit_selection = None;\n      completion_popup_open = false;\n      completion = None;\n    },\n    initial_cmd )\n\n(* ───── File reload ───── *)\n\nlet check_reload m =\n  let mtime = get_mtime m.path in\n  if mtime > m.last_mtime then\n    let md = read_file m.path in\n    let doc = Quill_markdown.of_string md in\n    let session = Session.reload doc m.session in\n    let n = Doc.length (Session.doc session) in\n    let focus = if n > 0 then min m.focus (n - 1) else 0 in\n    {\n      m with\n      session;\n      
focus;\n      last_mtime = mtime;\n      dirty = false;\n      completion_popup_open = false;\n      completion = None;\n      edit_cursor_override = None;\n      edit_selection = None;\n    }\n  else m\n\n(* ───── Execute helpers ───── *)\n\nlet execute_cell m id source =\n  let session = Session.checkpoint m.session in\n  let session = Session.clear_outputs id session in\n  let session = Session.mark_running id session in\n  m.kernel.execute ~cell_id:id ~code:source;\n  let session = drain_events m.event_queue session in\n  clear_footer_message { m with session; dirty = true }\n\nlet execute_all_cells m =\n  let session = Session.clear_all_outputs m.session in\n  let session = ref session in\n  List.iter\n    (fun cell ->\n      match cell with\n      | Cell.Code { id; source; _ } ->\n          session := Session.mark_running id !session;\n          m.kernel.execute ~cell_id:id ~code:source;\n          session := drain_events m.event_queue !session\n      | Cell.Text _ -> ())\n    (Doc.cells (Session.doc !session));\n  clear_footer_message { m with session = !session; dirty = true }\n\n(* ───── REPL flow: execute and advance ───── *)\n\n(** Execute the focused cell, then create a new empty cell below and enter edit\n    mode on it. This gives the REPL-like \"type, run, type more\" flow. 
*)\nlet execute_and_advance m =\n  match focused_cell m with\n  | Some (Cell.Code { id; source; _ }) ->\n      let m = execute_cell m id source in\n      (* Insert new code cell below *)\n      let pos = m.focus + 1 in\n      let cell = Cell.code \"\" in\n      let session = Session.insert_cell ~pos cell m.session in\n      let n = Doc.length (Session.doc session) in\n      let focus = min pos (n - 1) in\n      let m =\n        {\n          m with\n          session;\n          focus;\n          mode = Editing;\n          edit_cursor = 0;\n          edit_cursor_override = Some 0;\n          edit_selection = None;\n          completion_popup_open = false;\n          completion = None;\n        }\n      in\n      (* Defer focus: the new textarea doesn't exist yet in the render tree.\n         Dispatch a message that will issue Cmd.focus on the next update cycle,\n         after the view has re-rendered with the new cell. *)\n      (m, Cmd.perform (fun dispatch -> dispatch Deferred_focus_editor))\n  | Some (Cell.Text _) ->\n      (* Text cell: just advance to the next cell (or create one) *)\n      let n = cell_count m in\n      let pos = m.focus + 1 in\n      if pos < n then\n        (* Next cell exists — focus it and enter edit mode *)\n        let source =\n          match Doc.nth pos (Session.doc m.session) with\n          | Some (Cell.Code { source; _ } | Cell.Text { source; _ }) -> source\n          | None -> \"\"\n        in\n        let edit_cursor = grapheme_count source in\n        let m =\n          {\n            m with\n            focus = pos;\n            mode = Editing;\n            edit_cursor;\n            edit_cursor_override = Some edit_cursor;\n            edit_selection = None;\n            completion_popup_open = false;\n            completion = None;\n          }\n        in\n        (m, Cmd.perform (fun dispatch -> dispatch Deferred_focus_editor))\n      else\n        (* No next cell — create a new code cell *)\n        let cell = Cell.code 
\"\" in\n        let session = Session.insert_cell ~pos cell m.session in\n        let n = Doc.length (Session.doc session) in\n        let focus = min pos (n - 1) in\n        let m =\n          {\n            m with\n            session;\n            focus;\n            dirty = true;\n            mode = Editing;\n            edit_cursor = 0;\n            edit_cursor_override = Some 0;\n            edit_selection = None;\n            completion_popup_open = false;\n            completion = None;\n          }\n        in\n        (m, Cmd.perform (fun dispatch -> dispatch Deferred_focus_editor))\n  | None -> (with_footer_message m Warning \"No cell to advance from\", Cmd.none)\n\n(* ───── Update ───── *)\n\nlet tick_model m dt =\n  let session = drain_events m.event_queue m.session in\n  let m = { m with session; clock = m.clock +. dt } in\n  expire_footer_message m\n\nlet update_toggle_help m =\n  let show_help = not m.show_help in\n  let cmd =\n    if show_help then Cmd.focus help_scroll_id\n    else\n      match m.mode with\n      | Editing -> Cmd.focus textarea_id\n      | Normal -> Cmd.focus scroll_box_id\n  in\n  ({ m with show_help }, cmd)\n\nlet update_save m =\n  let session = Session.checkpoint m.session in\n  let m = { m with session } in\n  let doc = Session.doc m.session in\n  let content = Quill_markdown.to_string_with_outputs doc in\n  write_file m.path content;\n  let last_mtime = get_mtime m.path in\n  ( with_footer_message { m with dirty = false; last_mtime } Info \"Saved\",\n    Cmd.none )\n\nlet update_quit m =\n  if m.dirty && not m.confirm_quit then\n    ( with_footer_message\n        { m with confirm_quit = true }\n        Confirm \"Unsaved changes. 
Press q again to quit, s to save.\",\n      Cmd.none )\n  else (\n    m.kernel.shutdown ();\n    (m, Cmd.quit))\n\nlet update_editing msg m =\n  match msg with\n  | Deferred_focus_editor -> (m, Cmd.focus textarea_id)\n  | Toggle_help -> update_toggle_help m\n  | Dismiss_message ->\n      ({ m with confirm_quit = false; footer_msg = None }, Cmd.none)\n  | Resize (width, height) ->\n      ({ m with viewport_width = width; viewport_height = height }, Cmd.none)\n  | Exit_edit ->\n      let session = Session.checkpoint m.session in\n      ( {\n          m with\n          mode = Normal;\n          session;\n          edit_cursor_override = None;\n          completion_popup_open = false;\n          completion = None;\n          edit_selection = None;\n        },\n        Cmd.focus scroll_box_id )\n  | Edit_source source -> (\n      match focused_cell m with\n      | Some cell ->\n          let cell_id = Cell.id cell in\n          let session = Session.update_source cell_id source m.session in\n          let m = { m with session; dirty = true } in\n          let m =\n            if Option.is_some m.edit_selection then\n              { m with completion_popup_open = false }\n            else m\n          in\n          (recompute_completion m, Cmd.none)\n      | None -> (m, Cmd.none))\n  | Edit_cursor_changed (cursor, selection) ->\n      let m =\n        {\n          m with\n          edit_cursor = cursor;\n          edit_cursor_override = None;\n          edit_selection = selection;\n          completion_popup_open =\n            (match selection with\n            | Some _ -> false\n            | None -> m.completion_popup_open);\n        }\n      in\n      (recompute_completion m, Cmd.none)\n  | Trigger_completion ->\n      if Option.is_some m.edit_selection then\n        ( with_footer_message m Warning\n            \"Dismiss selection before triggering completion.\",\n          Cmd.none )\n      else\n        let m = recompute_completion { m with completion_popup_open = 
true } in\n        (m, Cmd.none)\n  | Next_completion -> (\n      match m.completion with\n      | None -> (m, Cmd.none)\n      | Some c ->\n          ( {\n              m with\n              completion = Some (cycle_completion c 1);\n              completion_popup_open = true;\n            },\n            Cmd.none ))\n  | Prev_completion -> (\n      match m.completion with\n      | None -> (m, Cmd.none)\n      | Some c ->\n          ( {\n              m with\n              completion = Some (cycle_completion c (-1));\n              completion_popup_open = true;\n            },\n            Cmd.none ))\n  | Accept_completion -> (\n      match m.completion with\n      | None -> (m, Cmd.none)\n      | Some c -> (\n          match selected_completion_item c with\n          | None ->\n              ( { m with completion_popup_open = false } |> recompute_completion,\n                Cmd.none )\n          | Some choice -> (apply_completion m c choice, Cmd.none)))\n  | Dismiss_completion ->\n      ( { m with completion_popup_open = false } |> recompute_completion,\n        Cmd.none )\n  | Submit_edit _ ->\n      (* Ctrl+Enter (on_submit): execute and advance (REPL flow) *)\n      execute_and_advance m\n  | Execute_focused -> (\n      (* Ctrl+Enter in edit mode: execute and stay *)\n      match focused_cell m with\n      | Some (Cell.Code { id; source; _ }) ->\n          let m = execute_cell m id source in\n          (m, Cmd.none)\n      | _ -> (m, Cmd.none))\n  | Execute_and_advance -> execute_and_advance m\n  | Save -> update_save m\n  | Quit ->\n      let session = Session.checkpoint m.session in\n      update_quit\n        {\n          m with\n          session;\n          mode = Normal;\n          completion_popup_open = false;\n          completion = None;\n          edit_cursor_override = None;\n          edit_selection = None;\n        }\n  | Interrupt ->\n      m.kernel.interrupt ();\n      (m, Cmd.none)\n  | Tick dt ->\n      let m = tick_model m dt in\n      ({ 
m with reload_acc = m.reload_acc +. dt }, Cmd.none)\n  | _ -> (m, Cmd.none)\n\nlet update_normal msg m =\n  match msg with\n  | Toggle_help -> update_toggle_help m\n  | Dismiss_message ->\n      ({ m with confirm_quit = false; footer_msg = None }, Cmd.none)\n  | Resize (width, height) ->\n      ({ m with viewport_width = width; viewport_height = height }, Cmd.none)\n  | Focus_next ->\n      let n = cell_count m in\n      let focus = if n > 0 then min (m.focus + 1) (n - 1) else 0 in\n      ({ m with focus }, Cmd.none)\n  | Focus_prev -> ({ m with focus = max (m.focus - 1) 0 }, Cmd.none)\n  | Execute_focused -> (\n      match focused_cell m with\n      | Some (Cell.Code { id; source; _ }) ->\n          (execute_cell m id source, Cmd.none)\n      | Some (Cell.Text _) ->\n          (with_footer_message m Error \"Cannot execute a text cell\", Cmd.none)\n      | None -> (with_footer_message m Warning \"No cell to execute\", Cmd.none))\n  | Execute_and_advance -> execute_and_advance m\n  | Execute_all -> (execute_all_cells m, Cmd.none)\n  | Interrupt ->\n      m.kernel.interrupt ();\n      (m, Cmd.none)\n  | Insert_code_below ->\n      let pos = m.focus + 1 in\n      let cell = Cell.code \"\" in\n      let session = Session.insert_cell ~pos cell m.session in\n      let n = Doc.length (Session.doc session) in\n      let focus = min pos (n - 1) in\n      (* Enter edit mode on the new cell (REPL-like) *)\n      let m =\n        {\n          m with\n          session;\n          focus;\n          dirty = true;\n          mode = Editing;\n          edit_cursor = 0;\n          edit_cursor_override = Some 0;\n          edit_selection = None;\n          completion_popup_open = false;\n          completion = None;\n        }\n      in\n      (m, Cmd.focus textarea_id)\n  | Insert_text_below ->\n      let pos = m.focus + 1 in\n      let cell = Cell.text \"\" in\n      let session = Session.insert_cell ~pos cell m.session in\n      let n = Doc.length (Session.doc session) in\n      
({ m with session; focus = min pos (n - 1); dirty = true }, Cmd.none)\n  | Delete_focused -> (\n      match focused_cell m with\n      | Some cell ->\n          let session = Session.remove_cell (Cell.id cell) m.session in\n          let n = Doc.length (Session.doc session) in\n          let focus = if n > 0 then min m.focus (n - 1) else 0 in\n          ({ m with session; focus; dirty = true }, Cmd.none)\n      | None -> (m, Cmd.none))\n  | Toggle_cell_kind -> (\n      match focused_cell m with\n      | Some cell ->\n          let cell_id = Cell.id cell in\n          let kind =\n            match cell with Cell.Code _ -> `Text | Cell.Text _ -> `Code\n          in\n          let session = Session.set_cell_kind cell_id kind m.session in\n          ({ m with session; dirty = true }, Cmd.none)\n      | None -> (m, Cmd.none))\n  | Move_up -> (\n      match focused_cell m with\n      | Some cell when m.focus > 0 ->\n          let cell_id = Cell.id cell in\n          let pos = m.focus - 1 in\n          let session = Session.move_cell cell_id ~pos m.session in\n          ({ m with session; focus = pos; dirty = true }, Cmd.none)\n      | _ -> (m, Cmd.none))\n  | Move_down -> (\n      match focused_cell m with\n      | Some cell when m.focus < cell_count m - 1 ->\n          let cell_id = Cell.id cell in\n          let pos = m.focus + 1 in\n          let session = Session.move_cell cell_id ~pos m.session in\n          ({ m with session; focus = pos; dirty = true }, Cmd.none)\n      | _ -> (m, Cmd.none))\n  | Clear_focused -> (\n      match focused_cell m with\n      | Some cell ->\n          let session = Session.clear_outputs (Cell.id cell) m.session in\n          ({ m with session; dirty = true }, Cmd.none)\n      | None -> (m, Cmd.none))\n  | Clear_all ->\n      let session = Session.clear_all_outputs m.session in\n      ({ m with session; dirty = true }, Cmd.none)\n  | Save -> update_save m\n  | Quit -> update_quit m\n  | Tick dt ->\n      let m = tick_model m dt in\n     
 let reload_acc = m.reload_acc +. dt in\n      if reload_acc >= reload_interval then\n        let m = check_reload { m with reload_acc = 0. } in\n        (m, Cmd.none)\n      else ({ m with reload_acc }, Cmd.none)\n  | Enter_edit -> (\n      match focused_cell m with\n      | Some (Cell.Code { source; _ } | Cell.Text { source; _ }) ->\n          let edit_cursor = grapheme_count source in\n          let m =\n            {\n              m with\n              mode = Editing;\n              edit_cursor;\n              edit_cursor_override = Some edit_cursor;\n              edit_selection = None;\n              completion_popup_open = false;\n              completion = None;\n            }\n          in\n          (recompute_completion m, Cmd.focus textarea_id)\n      | None -> (m, Cmd.none))\n  | _ -> (m, Cmd.none)\n\nlet update msg m =\n  let m =\n    if should_clear_error_msg msg then\n      match m.footer_msg with\n      | Some { kind = Error; _ } -> clear_footer_message m\n      | _ -> m\n    else m\n  in\n  let m =\n    match msg with\n    | Quit | Tick _ | Toggle_help | Resize _ -> m\n    | _ -> clear_confirm_message { m with confirm_quit = false }\n  in\n  match m.mode with\n  | Editing -> update_editing msg m\n  | Normal -> update_normal msg m\n\n(* ───── View Components ───── *)\n\nlet running_count m =\n  List.fold_left\n    (fun acc cell ->\n      match cell with\n      | Cell.Code { id; _ } ->\n          if Session.cell_status id m.session = Session.Running then acc + 1\n          else acc\n      | _ -> acc)\n    0\n    (Doc.cells (Session.doc m.session))\n\nlet has_running m = running_count m > 0\n\nlet view_header m =\n  let n = cell_count m in\n  let left =\n    box ~flex_direction:Row ~gap:(gap 1) ~align_items:Center\n      [\n        text ~style:(Ansi.Style.make ~fg:label_fg ~italic:true ()) \"quill\";\n        text\n          ~style:(Ansi.Style.make ~fg:Ansi.Color.white ~bold:true ())\n          (Filename.basename m.path);\n      ]\n  in\n  let 
center =\n    let rc = running_count m in\n    if rc > 0 then\n      box ~flex_direction:Row ~gap:(gap 1) ~align_items:Center\n        [\n          spinner ~frame_set:Spinner.dots ~color:accent ();\n          text\n            ~style:(Ansi.Style.make ~fg:accent ())\n            (Printf.sprintf \"%d running\" rc);\n        ]\n    else\n      text\n        ~style:(Ansi.Style.make ~fg:label_fg ())\n        (Printf.sprintf \"%d cells\" n)\n  in\n  let right =\n    if m.dirty then\n      text ~style:(Ansi.Style.make ~fg:accent ~bold:true ()) \"\\xe2\\x97\\x8f\"\n    else empty\n  in\n  box ~background:chrome_bg ~flex_direction:Row ~justify_content:Space_between\n    ~align_items:Center\n    ~size:{ width = pct 100; height = auto }\n    ~padding:(padding_lrtb ~l:2 ~r:2 ~t:0 ~b:0)\n    [ left; center; right ]\n\nlet view_error_bar msg =\n  box ~background:error_bg ~border:true ~border_sides:[ `Left ]\n    ~border_style:Border.heavy ~border_color:error_fg\n    ~size:{ width = pct 100; height = auto }\n    ~padding:(padding_lrtb ~l:1 ~r:1 ~t:0 ~b:0)\n    [ text ~style:(Ansi.Style.make ~fg:error_fg ()) msg ]\n\nlet trim_trailing_newlines s =\n  let len = String.length s in\n  let i = ref (len - 1) in\n  while !i >= 0 && (s.[!i] = '\\n' || s.[!i] = '\\r') do\n    decr i\n  done;\n  if !i = len - 1 then s else String.sub s 0 (!i + 1)\n\nlet view_output output =\n  match output with\n  | Cell.Stdout s ->\n      text ~style:(Ansi.Style.make ~fg:output_fg ()) (trim_trailing_newlines s)\n  | Cell.Stderr s ->\n      text\n        ~style:(Ansi.Style.make ~fg:warning_fg ~italic:true ())\n        (\"\\xe2\\x96\\xb6 \" ^ trim_trailing_newlines s)\n  | Cell.Error s -> view_error_bar s\n  | Cell.Display { mime; data } ->\n      if String.starts_with ~prefix:\"text/\" mime then\n        text ~style:(Ansi.Style.make ~fg:output_fg ()) data\n      else\n        text\n          ~style:(Ansi.Style.make ~fg:output_dim_fg ~italic:true ())\n          (Printf.sprintf \"[%s \\xc2\\xb7 %d bytes]\" 
mime (String.length data))\n\nlet completion_panel ~is_editing m =\n  if not (is_editing && m.mode = Editing && m.completion_popup_open) then empty\n  else\n    match m.completion with\n    | None ->\n        box ~border:true ~border_color:border_unfocused ~padding:(padding 1)\n          [\n            text\n              ~style:(Ansi.Style.make ~fg:hint_fg ())\n              \"No suggestions at cursor.\";\n          ]\n    | Some c ->\n        box ~border:true ~border_color:border_unfocused ~padding:(padding 1)\n          ~flex_direction:Column ~gap:(gap 0)\n          [\n            text\n              ~style:(Ansi.Style.make ~bold:true ~fg:accent ())\n              (Printf.sprintf \"Completions (%d)\" (List.length c.items));\n            box ~flex_direction:Column ~gap:(gap 0)\n              (take_first 8 c.items\n              |> List.mapi (fun i item ->\n                  let selected = i = c.selected in\n                  let prefix = if selected then \"> \" else \"  \" in\n                  text\n                    ~style:\n                      (if selected then\n                         Ansi.Style.make ~fg:Ansi.Color.black\n                           ~bg:Ansi.Color.yellow ~bold:true ()\n                       else Ansi.Style.make ~fg:Ansi.Color.white ())\n                    (prefix ^ item)));\n          ]\n\nlet view_code_cell m ~index ~is_focused ~is_editing ~status source outputs =\n  let border_color = if is_focused then border_focused else border_unfocused in\n  let num = index + 1 in\n  let title =\n    if is_editing then Printf.sprintf \" %d \\xe2\\x9c\\x8e \" num\n    else\n      let status_indicator =\n        match status with\n        | Session.Running -> \" \\xe2\\x80\\xa6\"\n        | Session.Queued -> \" \\xe2\\x97\\x8b\"\n        | Session.Idle -> if outputs <> [] then \" \\xe2\\x9c\\x93\" else \"\"\n      in\n      Printf.sprintf \" %d%s \" num status_indicator\n  in\n  let source_view =\n    if is_editing then\n      let highlights = 
highlight_source source in\n      let ghost_text = ghost_text m in\n      box\n        ~padding:(padding_lrtb ~l:1 ~r:1 ~t:0 ~b:0)\n        ~size:{ width = pct 100; height = auto }\n        [\n          line_number ~flex_grow:1.\n            ~line_colors:(active_line_colors source m.edit_cursor)\n            (textarea ~id:textarea_id ~value:source\n               ?cursor:m.edit_cursor_override ~spans:highlights ?ghost_text\n               ~ghost_text_color:(Ansi.Color.grayscale ~level:10)\n               ~text_color:output_fg ~background_color:cell_bg_focused\n               ~focused_text_color:output_fg\n               ~focused_background_color:cell_bg_focused ~cursor_style:`Line\n               ~cursor_color:accent ~wrap:`None\n               ~size:{ width = pct 100; height = auto }\n               ~on_key:(fun ev -> editor_on_key m ev)\n               ~on_input:(fun s -> Some (Edit_source s))\n               ~on_submit:(fun s -> Some (Submit_edit s))\n               ~on_cursor:(fun ~cursor ~selection ->\n                 Some (Edit_cursor_changed (cursor, selection)))\n               ());\n        ]\n    else\n      let highlights = highlight_source source in\n      box\n        ~padding:(padding_lrtb ~l:1 ~r:1 ~t:0 ~b:0)\n        ~size:{ width = pct 100; height = auto }\n        [ code ~spans:highlights source ]\n  in\n  let status_row =\n    match status with\n    | Session.Running ->\n        box ~flex_direction:Row ~gap:(gap 1) ~align_items:Center\n          ~padding:(padding_lrtb ~l:1 ~r:1 ~t:0 ~b:0)\n          ~size:{ width = pct 100; height = auto }\n          [\n            spinner ~frame_set:Spinner.dots ~color:accent ();\n            text\n              ~style:(Ansi.Style.make ~fg:accent_dim ~italic:true ())\n              \"evaluating\";\n          ]\n    | _ -> empty\n  in\n  let output_section =\n    if outputs = [] then empty\n    else\n      box ~flex_direction:Column ~border:true ~border_sides:[ `Top ]\n        ~border_style:Border.single 
~border_color:border_unfocused\n        ~size:{ width = pct 100; height = auto }\n        ~padding:(padding_lrtb ~l:1 ~r:1 ~t:0 ~b:0)\n        (List.map view_output outputs)\n  in\n  box ~flex_direction:Column ~border:true ~border_color\n    ~border_style:Border.rounded ~title ~title_alignment:`Left\n    ?background:(if is_focused then Some cell_bg_focused else None)\n    ~size:{ width = pct 100; height = auto }\n    [ source_view; completion_panel ~is_editing m; status_row; output_section ]\n\nlet view_text_cell ~is_focused ~is_editing m source =\n  if is_editing then\n    box ~background:cell_bg_focused ~border:true ~border_color:border_focused\n      ~border_style:Border.rounded ~title:\" text \\xe2\\x9c\\x8e \"\n      ~title_alignment:`Left\n      ~size:{ width = pct 100; height = auto }\n      [\n        box\n          ~padding:(padding_lrtb ~l:1 ~r:1 ~t:0 ~b:0)\n          ~size:{ width = pct 100; height = auto }\n          [\n            textarea ~id:textarea_id ~value:source\n              ?cursor:m.edit_cursor_override ~text_color:output_fg\n              ~background_color:cell_bg_focused ~focused_text_color:output_fg\n              ~focused_background_color:cell_bg_focused ~cursor_style:`Line\n              ~cursor_color:accent ~wrap:`Word\n              ~size:{ width = pct 100; height = auto }\n              ~on_key:(fun ev ->\n                let data = Event.Key.data ev in\n                if data.event_type = Release then None\n                else\n                  match data.key with\n                  | Line_feed\n                    when not\n                           (data.modifier.ctrl || data.modifier.alt\n                          || data.modifier.super) ->\n                      Event.Key.prevent_default ev;\n                      Some Execute_and_advance\n                  | _ -> None)\n              ~on_input:(fun s -> Some (Edit_source s))\n              ~on_submit:(fun _s -> Some Execute_and_advance)\n              ~on_cursor:(fun 
~cursor ~selection ->\n                Some (Edit_cursor_changed (cursor, selection)))\n              ();\n          ];\n      ]\n  else\n    box\n      ?background:(if is_focused then Some cell_bg_focused else None)\n      ~size:{ width = pct 100; height = auto }\n      ~padding:(padding_lrtb ~l:2 ~r:2 ~t:0 ~b:0)\n      [ markdown source ]\n\nlet view_cell ~index ~focus ~mode m cell =\n  let is_focused = index = focus in\n  match cell with\n  | Cell.Code { id; source; outputs; _ } ->\n      let status = Session.cell_status id m.session in\n      let is_editing = is_focused && mode = Editing in\n      view_code_cell m ~index ~is_focused ~is_editing ~status source outputs\n  | Cell.Text { source; _ } ->\n      let is_editing = is_focused && mode = Editing in\n      view_text_cell ~is_focused ~is_editing m source\n\nlet view_cells m =\n  let cells = Doc.cells (Session.doc m.session) in\n  if cells = [] then\n    [\n      box ~flex_direction:Column ~align_items:Center ~justify_content:Center\n        ~flex_grow:1.\n        ~size:{ width = pct 100; height = pct 100 }\n        [\n          text\n            ~style:(Ansi.Style.make ~fg:label_fg ~italic:true ())\n            \"empty notebook\";\n          box\n            ~size:{ width = auto; height = auto }\n            ~padding:(padding_lrtb ~l:0 ~r:0 ~t:1 ~b:0)\n            [\n              text\n                ~style:(Ansi.Style.make ~fg:hint_fg ())\n                \"press a to add a code cell, or t for text\";\n            ];\n        ];\n    ]\n  else\n    List.mapi\n      (fun index cell -> view_cell ~index ~focus:m.focus ~mode:m.mode m cell)\n      cells\n\ntype footer_width_tier = Wide | Medium | Compact | Tiny\ntype footer_action = { key : string; label : string }\n\nlet footer_width_tier m =\n  if m.viewport_width >= 120 then Wide\n  else if m.viewport_width >= 80 then Medium\n  else if m.viewport_width >= 60 then Compact\n  else Tiny\n\nlet rec take n xs =\n  if n <= 0 then []\n  else match xs with [] -> [] 
| x :: tl -> x :: take (n - 1) tl\n\nlet focused_kind_label m =\n  match focused_cell m with\n  | Some (Cell.Code _) -> \"code\"\n  | Some (Cell.Text _) -> \"text\"\n  | None -> \"none\"\n\nlet footer_mode_label m =\n  match m.mode with Normal -> \"NORMAL\" | Editing -> \"EDIT\"\n\nlet footer_kernel_label m =\n  let rc = running_count m in\n  if rc > 0 then Printf.sprintf \"running %d\" rc else \"idle\"\n\nlet footer_actions m =\n  if m.confirm_quit then\n    [\n      { key = \"q\"; label = \"Confirm\" };\n      { key = \"s\"; label = \"Save\" };\n      { key = \"Esc\"; label = \"Cancel\" };\n    ]\n  else\n    match m.mode with\n    | Editing ->\n        [\n          { key = \"Shift-Enter\"; label = \"Run\" };\n          { key = \"Tab\"; label = \"Complete\" };\n          { key = \"Esc\"; label = \"Exit\" };\n          { key = \"?\"; label = \"Help\" };\n        ]\n    | Normal ->\n        [\n          { key = \"Enter\"; label = \"Edit\" };\n          { key = \"x\"; label = \"Run\" };\n          { key = \"j/k\"; label = \"Navigate\" };\n          { key = \"?\"; label = \"Help\" };\n        ]\n\nlet footer_action_limit tier =\n  match tier with Wide -> 4 | Medium -> 3 | Compact -> 2 | Tiny -> 1\n\nlet footer_action_label tier label =\n  match (tier, label) with\n  | Medium, \"Interrupt\" -> \"Stop\"\n  | Medium, \"Navigate\" -> \"Nav\"\n  | Medium, \"Confirm Quit\" -> \"Confirm\"\n  | Medium, \"To Code\" -> \"ToCode\"\n  | Compact, \"Save\" -> \"Save\"\n  | Compact, \"Interrupt\" -> \"Stop\"\n  | Compact, \"Navigate\" -> \"Nav\"\n  | Compact, \"Confirm Quit\" -> \"Confirm\"\n  | Compact, \"To Code\" -> \"Code\"\n  | Compact, \"+Code\" -> \"+C\"\n  | Compact, \"+Text\" -> \"+T\"\n  | Compact, \"Help\" -> \"?\"\n  | _ -> label\n\nlet truncate_text max_len s =\n  if String.length s <= max_len then s\n  else String.sub s 0 (max 0 (max_len - 1)) ^ \"\\xe2\\x80\\xa6\"\n\nlet footer_message_view tier m =\n  match m.footer_msg with\n  | None -> None\n  | Some { kind; text 
= msg; _ } ->\n      let fg, prefix =\n        match kind with\n        | Info -> (info_fg, \"INFO\")\n        | Warning -> (warning_fg, \"WARN\")\n        | Error -> (error_fg, \"ERROR\")\n        | Confirm -> (warning_fg, \"CONFIRM\")\n      in\n      let max_len =\n        match tier with Wide -> 32 | Medium -> 22 | Compact -> 14 | Tiny -> 8\n      in\n      Some (fg, Printf.sprintf \"%s:%s\" prefix (truncate_text max_len msg))\n\nlet footer_status_text tier m =\n  let total = cell_count m in\n  let focus =\n    if total = 0 then \"cell 0/0\"\n    else Printf.sprintf \"cell %d/%d\" (m.focus + 1) total\n  in\n  let kernel = footer_kernel_label m in\n  let dirty = if m.dirty then \"modified\" else \"saved\" in\n  match tier with\n  | Wide ->\n      Printf.sprintf \"%s %s %s %s\" focus (focused_kind_label m) dirty kernel\n  | Medium -> Printf.sprintf \"%s %s %s\" focus (focused_kind_label m) kernel\n  | Compact -> Printf.sprintf \"%s %s\" focus kernel\n  | Tiny -> \"\"\n\nlet view_footer_actions tier m =\n  let key_style = Ansi.Style.make ~fg:label_fg ~bold:true () in\n  let desc_style = Ansi.Style.make ~fg:hint_fg () in\n  let actions =\n    if tier = Tiny then [ { key = \"?\"; label = \"Help\" } ]\n    else take (footer_action_limit tier) (footer_actions m)\n  in\n  let view_action action =\n    let label = footer_action_label tier action.label in\n    box ~flex_direction:Row ~gap:(gap 0) ~align_items:Center\n      ~size:{ width = auto; height = auto }\n      [\n        text ~style:key_style (Printf.sprintf \"[%s]\" action.key);\n        text ~style:desc_style (Printf.sprintf \" %s\" label);\n      ]\n  in\n  box ~flex_direction:Row ~gap:(gap 1) ~align_items:Center\n    ~size:{ width = auto; height = auto }\n    (List.map view_action actions)\n\nlet view_footer m =\n  let tier = footer_width_tier m in\n  let mode_style =\n    Ansi.Style.make\n      ~fg:(match m.mode with Editing -> accent | Normal -> label_fg)\n      ~bold:true ()\n  in\n  let desc_style = 
Ansi.Style.make ~fg:hint_fg () in\n  let status_text = footer_status_text tier m in\n  let status_node =\n    if status_text = \"\" then empty\n    else text ~style:desc_style (Printf.sprintf \" %s\" status_text)\n  in\n  let message_node =\n    match footer_message_view tier m with\n    | Some (fg, msg) ->\n        text ~style:(Ansi.Style.make ~fg ~bold:true ()) (\" | \" ^ msg)\n    | None -> empty\n  in\n  let left =\n    box ~flex_direction:Row ~gap:(gap 0) ~align_items:Center\n      ~size:{ width = auto; height = auto }\n      [\n        text ~style:mode_style (Printf.sprintf \"[%s]\" (footer_mode_label m));\n        status_node;\n        message_node;\n      ]\n  in\n  let right = view_footer_actions tier m in\n  box ~background:chrome_bg ~flex_direction:Row ~justify_content:Space_between\n    ~align_items:Center\n    ~size:{ width = pct 100; height = auto }\n    ~padding:(padding_lrtb ~l:2 ~r:2 ~t:0 ~b:0)\n    [ left; right ]\n\nlet view_footer_help_overlay m =\n  if not m.show_help then empty\n  else\n    let section_title title =\n      text ~style:(Ansi.Style.make ~fg:accent ~bold:true ()) title\n    in\n    let item key desc =\n      box ~flex_direction:Row ~gap:(gap 1) ~align_items:Center\n        ~size:{ width = pct 100; height = auto }\n        [\n          text\n            ~style:(Ansi.Style.make ~fg:label_fg ~bold:true ())\n            (Printf.sprintf \"[%s]\" key);\n          text ~style:(Ansi.Style.make ~fg:hint_fg ()) desc;\n        ]\n    in\n    let panel_width = if m.viewport_width < 80 then pct 96 else pct 82 in\n    let panel_height = if m.viewport_height < 24 then pct 86 else pct 72 in\n    box ~position:Absolute ~inset:(inset 0) ~z_index:20 ~background:overlay_bg\n      ~justify_content:Center ~align_items:Center\n      ~size:{ width = pct 100; height = pct 100 }\n      [\n        box ~border:true ~border_style:Border.rounded\n          ~border_color:border_focused ~background:chrome_bg\n          ~flex_direction:Column ~gap:(gap 1)\n      
    ~size:{ width = panel_width; height = panel_height }\n          ~padding:(padding_lrtb ~l:1 ~r:1 ~t:0 ~b:1)\n          [\n            box ~flex_direction:Row ~justify_content:Space_between\n              ~align_items:Center\n              ~size:{ width = pct 100; height = auto }\n              [\n                text\n                  ~style:(Ansi.Style.make ~fg:Ansi.Color.white ~bold:true ())\n                  \"Keybindings\";\n                text ~style:(Ansi.Style.make ~fg:hint_fg ()) \"Esc or ? to close\";\n              ];\n            scroll_box ~id:help_scroll_id ~scroll_y:true ~scroll_x:false\n              ~flex_grow:1.\n              ~size:{ width = pct 100; height = auto }\n              ~padding:(padding_lrtb ~l:1 ~r:1 ~t:0 ~b:0)\n              ~flex_direction:Column ~gap:(gap 1)\n              [\n                box ~flex_direction:Column ~gap:(gap 1)\n                  [\n                    section_title \"Normal mode\";\n                    item \"Enter\" \"Enter edit mode\";\n                    item \"x\" \"Execute focused cell\";\n                    item \"j / k\" \"Focus next / previous cell\";\n                    item \"J / K\" \"Move cell down / up\";\n                    item \"a / t\" \"Insert code / text cell below\";\n                    item \"d\" \"Delete focused cell\";\n                    item \"m\" \"Toggle cell kind (code/text)\";\n                    item \"c\" \"Clear focused cell outputs\";\n                    item \"s\" \"Save notebook\";\n                    item \"q\" \"Quit\";\n                  ];\n                box ~flex_direction:Column ~gap:(gap 1)\n                  [\n                    section_title \"Edit mode\";\n                    item \"Shift-Enter\" \"Execute and advance (REPL flow)\";\n                    item \"Ctrl-Enter\" \"Execute and advance (REPL flow)\";\n                    item \"Esc\" \"Exit to normal mode\";\n                    item \"Tab\" \"Trigger / accept completion\";\n              
      item \"Shift-Tab\" \"Previous completion\";\n                    item \"Ctrl-Space\" \"Open completion popup\";\n                    item \"Ctrl-N / Ctrl-P\" \"Next / previous completion\";\n                    item \"Ctrl-S\" \"Save notebook\";\n                  ];\n                box ~flex_direction:Column ~gap:(gap 1)\n                  [\n                    section_title \"Global\";\n                    item \"Ctrl-A\" \"Execute all cells\";\n                    item \"Ctrl-C\" \"Interrupt execution\";\n                    item \"Ctrl-L\" \"Clear all outputs\";\n                    item \"?\" \"Toggle this help panel\";\n                  ];\n              ];\n          ];\n      ]\n\nlet view m =\n  box ~flex_direction:Column\n    ~size:{ width = pct 100; height = pct 100 }\n    [\n      view_header m;\n      scroll_box ~id:scroll_box_id ~scroll_y:true ~scroll_x:false ~flex_grow:1.\n        ~autofocus:true\n        ~size:{ width = pct 100; height = auto }\n        ~flex_direction:Column ~gap:(gap 1)\n        ~padding:(padding_lrtb ~l:1 ~r:1 ~t:1 ~b:1)\n        (view_cells m);\n      view_footer m;\n      view_footer_help_overlay m;\n    ]\n\n(* ───── Subscriptions ───── *)\n\nlet subscriptions model =\n  Sub.batch\n    [\n      Sub.on_tick (fun ~dt -> Tick dt);\n      Sub.on_resize (fun ~width ~height -> Resize (width, height));\n      (* Use on_key_all for all bindings because the scroll_box consumes\n         j/k/Up/Down via its scroll bar before on_key sees them. *)\n      Sub.on_key_all (fun ev ->\n          let data = Event.Key.data ev in\n          if model.show_help then\n            match data.key with\n            | Escape -> Some Toggle_help\n            | Char c when char_eq '?' 
c -> Some Toggle_help\n            | _ -> None\n          else\n            match model.mode with\n            | Editing -> (\n                if data.modifier.ctrl then\n                  match data.key with\n                  | Char c when char_eq 'a' c -> Some Execute_all\n                  | Char c when char_eq 's' c -> Some Save\n                  | Char c when char_eq 'c' c -> Some Interrupt\n                  | Char c when char_eq 'l' c -> Some Clear_all\n                  | _ -> None\n                else\n                  match data.key with\n                  | Char c when char_eq '?' c -> Some Toggle_help\n                  | _ -> None)\n            | Normal -> (\n                if data.modifier.ctrl then\n                  match data.key with\n                  | Char c when char_eq 'a' c -> Some Execute_all\n                  | Char c when char_eq 's' c -> Some Save\n                  | Char c when char_eq 'c' c -> Some Interrupt\n                  | Char c when char_eq 'l' c -> Some Clear_all\n                  | _ -> None\n                else\n                  match data.key with\n                  | Char c when char_eq 'j' c -> Some Focus_next\n                  | Char c when char_eq 'k' c -> Some Focus_prev\n                  | Char c when char_eq 'J' c -> Some Move_down\n                  | Char c when char_eq 'K' c -> Some Move_up\n                  | Char c when char_eq 'x' c -> Some Execute_focused\n                  | Char c when char_eq 'a' c -> Some Insert_code_below\n                  | Char c when char_eq 't' c -> Some Insert_text_below\n                  | Char c when char_eq 'd' c -> Some Delete_focused\n                  | Char c when char_eq 'm' c -> Some Toggle_cell_kind\n                  | Char c when char_eq 'c' c -> Some Clear_focused\n                  | Char c when char_eq 's' c -> Some Save\n                  | Char c when char_eq 'q' c -> Some Quit\n                  | Char c when char_eq '?' 
c -> Some Toggle_help\n                  | Down -> Some Focus_next\n                  | Up -> Some Focus_prev\n                  | Enter ->\n                      Some Enter_edit (* Enter = edit mode, not execute *)\n                  | Escape -> Some Dismiss_message\n                  | _ -> None));\n      (* Escape in editing mode: textarea does not consume it, so on_key\n         works. *)\n      Sub.on_key (fun ev ->\n          match model.mode with\n          | Editing when not model.show_help -> (\n              match (Event.Key.data ev).key with\n              | Escape when not model.completion_popup_open -> Some Exit_edit\n              | _ -> None)\n          | Editing | Normal -> None);\n    ]\n\n(* ───── Run ───── *)\n\nlet run ~create_kernel path =\n  let init () = init ~create_kernel ~path () in\n  run { init; update; view; subscriptions }\n"
  },
  {
    "path": "packages/quill/lib/quill-tui/quill_tui.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Terminal notebook interface.\n\n    Provides a full-screen TUI for viewing and executing notebooks using the\n    Mosaic TEA framework. The TUI is kernel-agnostic: callers supply a kernel\n    factory function. *)\n\nval run :\n  create_kernel:(on_event:(Quill.Kernel.event -> unit) -> Quill.Kernel.t) ->\n  string ->\n  unit\n(** [run ~create_kernel path] launches the notebook TUI for the file at [path].\n    [create_kernel] is called once to obtain a kernel; the TUI owns the kernel\n    lifecycle and calls [shutdown] on exit. The notebook is loaded from [path]\n    using {!Quill_markdown.of_string} and saved back on request. *)\n"
  },
  {
    "path": "packages/quill/test/dune",
    "content": "(tests\n (names test_cell test_doc test_session test_markdown)\n (package quill)\n (libraries quill quill.markdown windtrap))\n"
  },
  {
    "path": "packages/quill/test/test_cell.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Windtrap\nopen Quill\n\nlet constructor_tests =\n  [\n    test \"code cell defaults\" (fun () ->\n        let c = Cell.code \"let x = 1\" in\n        equal string \"let x = 1\" (Cell.source c);\n        match c with\n        | Cell.Code { language; outputs; _ } ->\n            equal string \"ocaml\" language;\n            equal int 0 (List.length outputs)\n        | _ -> fail \"expected Code cell\");\n    test \"code cell with language\" (fun () ->\n        let c = Cell.code ~language:\"python\" \"print(1)\" in\n        match c with\n        | Cell.Code { language; _ } -> equal string \"python\" language\n        | _ -> fail \"expected Code cell\");\n    test \"text cell\" (fun () ->\n        let c = Cell.text \"# Hello\" in\n        equal string \"# Hello\" (Cell.source c);\n        match c with Cell.Text _ -> () | _ -> fail \"expected Text cell\");\n    test \"unique ids\" (fun () ->\n        let a = Cell.code \"a\" in\n        let b = Cell.code \"b\" in\n        is_true ~msg:\"distinct ids\" (not (String.equal (Cell.id a) (Cell.id b))));\n  ]\n\nlet transformation_tests =\n  [\n    test \"set_source on code\" (fun () ->\n        let c = Cell.code \"old\" |> Cell.set_source \"new\" in\n        equal string \"new\" (Cell.source c));\n    test \"set_source on text\" (fun () ->\n        let c = Cell.text \"old\" |> Cell.set_source \"new\" in\n        equal string \"new\" (Cell.source c));\n    test \"set_outputs\" (fun () ->\n        let c = Cell.code \"x\" |> Cell.set_outputs [ Cell.Stdout \"hello\" ] in\n        match c with\n        | Cell.Code { outputs; _ } -> equal int 1 (List.length outputs)\n        | _ -> fail \"expected Code cell\");\n    test \"set_outputs on text is noop\" (fun () 
->\n        let c = Cell.text \"x\" |> Cell.set_outputs [ Cell.Stdout \"hello\" ] in\n        match c with Cell.Text _ -> () | _ -> fail \"expected Text cell\");\n    test \"append_output\" (fun () ->\n        let c =\n          Cell.code \"x\"\n          |> Cell.append_output (Cell.Stdout \"a\")\n          |> Cell.append_output (Cell.Stderr \"b\")\n        in\n        match c with\n        | Cell.Code { outputs; _ } -> equal int 2 (List.length outputs)\n        | _ -> fail \"expected Code cell\");\n    test \"clear_outputs\" (fun () ->\n        let c =\n          Cell.code \"x\"\n          |> Cell.set_outputs [ Cell.Stdout \"hello\" ]\n          |> Cell.clear_outputs\n        in\n        match c with\n        | Cell.Code { outputs; _ } -> equal int 0 (List.length outputs)\n        | _ -> fail \"expected Code cell\");\n  ]\n\nlet attrs_tests =\n  [\n    test \"default attrs\" (fun () ->\n        let c = Cell.code \"x\" in\n        let a = Cell.attrs c in\n        is_false ~msg:\"not collapsed\" a.collapsed;\n        is_false ~msg:\"not hide_source\" a.hide_source);\n    test \"default attrs on text\" (fun () ->\n        let c = Cell.text \"x\" in\n        let a = Cell.attrs c in\n        is_false ~msg:\"not collapsed\" a.collapsed;\n        is_false ~msg:\"not hide_source\" a.hide_source);\n    test \"code with attrs\" (fun () ->\n        let a = { Cell.collapsed = true; hide_source = false } in\n        let c = Cell.code ~attrs:a \"x\" in\n        let a' = Cell.attrs c in\n        is_true ~msg:\"collapsed\" a'.collapsed;\n        is_false ~msg:\"not hide_source\" a'.hide_source);\n    test \"set_attrs on code\" (fun () ->\n        let c = Cell.code \"x\" in\n        let c = Cell.set_attrs { collapsed = false; hide_source = true } c in\n        let a = Cell.attrs c in\n        is_false ~msg:\"not collapsed\" a.collapsed;\n        is_true ~msg:\"hide_source\" a.hide_source);\n    test \"set_attrs on text\" (fun () ->\n        let c = Cell.text \"x\" in\n        let 
c = Cell.set_attrs { collapsed = true; hide_source = false } c in\n        is_true ~msg:\"collapsed\" (Cell.attrs c).collapsed);\n    test \"set_source preserves attrs\" (fun () ->\n        let a = { Cell.collapsed = true; hide_source = true } in\n        let c = Cell.code ~attrs:a \"old\" |> Cell.set_source \"new\" in\n        let a' = Cell.attrs c in\n        is_true ~msg:\"collapsed preserved\" a'.collapsed;\n        is_true ~msg:\"hide_source preserved\" a'.hide_source);\n  ]\n\nlet () =\n  run \"Cell\"\n    [\n      group \"Constructors\" constructor_tests;\n      group \"Transformations\" transformation_tests;\n      group \"Attributes\" attrs_tests;\n    ]\n"
  },
  {
    "path": "packages/quill/test/test_doc.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Windtrap\nopen Quill\n\nlet accessor_tests =\n  [\n    test \"empty doc\" (fun () ->\n        let d = Doc.empty () in\n        equal int 0 (Doc.length d);\n        equal int 0 (List.length (Doc.cells d)));\n    test \"of_cells\" (fun () ->\n        let c1 = Cell.text \"a\" in\n        let c2 = Cell.code \"b\" in\n        let d = Doc.of_cells [ c1; c2 ] in\n        equal int 2 (Doc.length d));\n    test \"nth\" (fun () ->\n        let c1 = Cell.text \"first\" in\n        let c2 = Cell.text \"second\" in\n        let d = Doc.of_cells [ c1; c2 ] in\n        (match Doc.nth 0 d with\n        | Some c -> equal string \"first\" (Cell.source c)\n        | None -> fail \"expected Some for nth 0\");\n        (match Doc.nth 1 d with\n        | Some c -> equal string \"second\" (Cell.source c)\n        | None -> fail \"expected Some for nth 1\");\n        is_none (Doc.nth 2 d));\n    test \"find\" (fun () ->\n        let c1 = Cell.text \"hello\" in\n        let id = Cell.id c1 in\n        let d = Doc.of_cells [ c1 ] in\n        match Doc.find id d with\n        | Some c -> equal string \"hello\" (Cell.source c)\n        | None -> fail \"expected Some\");\n    test \"find_index\" (fun () ->\n        let c1 = Cell.text \"a\" in\n        let c2 = Cell.text \"b\" in\n        let d = Doc.of_cells [ c1; c2 ] in\n        some int 1 (Doc.find_index (Cell.id c2) d));\n  ]\n\nlet modification_tests =\n  [\n    test \"insert at beginning\" (fun () ->\n        let c1 = Cell.text \"existing\" in\n        let c2 = Cell.text \"new\" in\n        let d = Doc.of_cells [ c1 ] |> Doc.insert ~pos:0 c2 in\n        equal int 2 (Doc.length d);\n        match Doc.nth 0 d with\n        | Some c -> equal string \"new\" (Cell.source 
c)\n        | None -> fail \"expected Some\");\n    test \"insert at end\" (fun () ->\n        let c1 = Cell.text \"first\" in\n        let c2 = Cell.text \"last\" in\n        let d = Doc.of_cells [ c1 ] |> Doc.insert ~pos:1 c2 in\n        match Doc.nth 1 d with\n        | Some c -> equal string \"last\" (Cell.source c)\n        | None -> fail \"expected Some\");\n    test \"remove\" (fun () ->\n        let c1 = Cell.text \"keep\" in\n        let c2 = Cell.text \"remove\" in\n        let d = Doc.of_cells [ c1; c2 ] |> Doc.remove (Cell.id c2) in\n        equal int 1 (Doc.length d));\n    test \"replace\" (fun () ->\n        let c1 = Cell.text \"old\" in\n        let c2 = Cell.text \"new\" in\n        let d = Doc.of_cells [ c1 ] |> Doc.replace (Cell.id c1) c2 in\n        match Doc.nth 0 d with\n        | Some c -> equal string \"new\" (Cell.source c)\n        | None -> fail \"expected Some\");\n    test \"move\" (fun () ->\n        let c1 = Cell.text \"a\" in\n        let c2 = Cell.text \"b\" in\n        let c3 = Cell.text \"c\" in\n        let d = Doc.of_cells [ c1; c2; c3 ] |> Doc.move (Cell.id c3) ~pos:0 in\n        match Doc.nth 0 d with\n        | Some c -> equal string \"c\" (Cell.source c)\n        | None -> fail \"expected Some\");\n    test \"update\" (fun () ->\n        let c = Cell.text \"old\" in\n        let d =\n          Doc.of_cells [ c ] |> Doc.update (Cell.id c) (Cell.set_source \"new\")\n        in\n        match Doc.nth 0 d with\n        | Some c -> equal string \"new\" (Cell.source c)\n        | None -> fail \"expected Some\");\n    test \"clear_all_outputs\" (fun () ->\n        let c = Cell.code \"x\" |> Cell.set_outputs [ Cell.Stdout \"out\" ] in\n        let d = Doc.of_cells [ c ] |> Doc.clear_all_outputs in\n        match Doc.nth 0 d with\n        | Some (Cell.Code { outputs; _ }) -> equal int 0 (List.length outputs)\n        | _ -> fail \"expected Code cell\");\n  ]\n\nlet () =\n  run \"Doc\"\n    [\n      group \"Accessors\" accessor_tests; 
group \"Modifications\" modification_tests;\n    ]\n"
  },
  {
    "path": "packages/quill/test/test_markdown.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Windtrap\nopen Quill\n\nlet parsing_tests =\n  [\n    test \"empty document\" (fun () ->\n        let doc = Quill_markdown.of_string \"\" in\n        equal int 0 (Doc.length doc));\n    test \"text only\" (fun () ->\n        let doc = Quill_markdown.of_string \"# Hello\\n\\nSome text.\" in\n        equal int 1 (Doc.length doc);\n        match Doc.nth 0 doc with\n        | Some (Cell.Text _) -> ()\n        | _ -> fail \"expected Text cell\");\n    test \"code only\" (fun () ->\n        let doc = Quill_markdown.of_string \"```ocaml\\nlet x = 1\\n```\\n\" in\n        equal int 1 (Doc.length doc);\n        match Doc.nth 0 doc with\n        | Some (Cell.Code { language; source; _ }) ->\n            equal string \"ocaml\" language;\n            equal string \"let x = 1\" source\n        | _ -> fail \"expected Code cell\");\n    test \"mixed content\" (fun () ->\n        let md =\n          \"# Title\\n\\n\\\n           ```ocaml\\n\\\n           let x = 1\\n\\\n           ```\\n\\n\\\n           Some text.\\n\\n\\\n           ```ocaml\\n\\\n           let y = 2\\n\\\n           ```\\n\"\n        in\n        let doc = Quill_markdown.of_string md in\n        equal int 4 (Doc.length doc);\n        (match Doc.nth 0 doc with\n        | Some (Cell.Text _) -> ()\n        | _ -> fail \"expected Text cell at 0\");\n        (match Doc.nth 1 doc with\n        | Some (Cell.Code { source; _ }) -> equal string \"let x = 1\" source\n        | _ -> fail \"expected Code cell at 1\");\n        (match Doc.nth 2 doc with\n        | Some (Cell.Text _) -> ()\n        | _ -> fail \"expected Text cell at 2\");\n        match Doc.nth 3 doc with\n        | Some (Cell.Code { source; _ }) -> equal string \"let y = 2\" source\n  
      | _ -> fail \"expected Code cell at 3\");\n    test \"code without language\" (fun () ->\n        let doc = Quill_markdown.of_string \"```\\nsome code\\n```\\n\" in\n        equal int 1 (Doc.length doc);\n        match Doc.nth 0 doc with\n        | Some (Cell.Code { language; _ }) -> equal string \"\" language\n        | _ -> fail \"expected Code cell\");\n    test \"parse output markers\" (fun () ->\n        let md =\n          \"```ocaml\\n\\\n           let x = 1\\n\\\n           ```\\n\\\n           <!-- quill:output -->\\n\\\n           val x : int = 1\\n\\\n           <!-- /quill:output -->\\n\"\n        in\n        let doc = Quill_markdown.of_string md in\n        equal int 1 (Doc.length doc);\n        match Doc.nth 0 doc with\n        | Some (Cell.Code { outputs; _ }) -> (\n            equal int 1 (List.length outputs);\n            match List.hd outputs with\n            | Cell.Stdout s -> equal string \"val x : int = 1\" s\n            | _ -> fail \"expected Stdout output\")\n        | _ -> fail \"expected Code cell with outputs\");\n    test \"parse strips output markers as text\" (fun () ->\n        let md =\n          \"# Title\\n\\n\\\n           ```ocaml\\n\\\n           let x = 1\\n\\\n           ```\\n\\\n           <!-- quill:output -->\\n\\\n           val x : int = 1\\n\\\n           <!-- /quill:output -->\\n\\n\\\n           Some text.\\n\"\n        in\n        let doc = Quill_markdown.of_string md in\n        equal int 3 (Doc.length doc);\n        (match Doc.nth 0 doc with\n        | Some (Cell.Text _) -> ()\n        | _ -> fail \"expected Text cell at 0\");\n        (match Doc.nth 1 doc with\n        | Some (Cell.Code { outputs; _ }) -> equal int 1 (List.length outputs)\n        | _ -> fail \"expected Code cell at 1\");\n        match Doc.nth 2 doc with\n        | Some (Cell.Text { source; _ }) ->\n            is_true ~msg:\"text is 'Some text.'\"\n              (String.trim source = \"Some text.\")\n        | _ -> fail \"expected Text 
cell at 2\");\n    test \"roundtrip with outputs\" (fun () ->\n        let c =\n          Cell.code \"let x = 1\"\n          |> Cell.set_outputs [ Cell.Stdout \"val x : int = 1\\n\" ]\n        in\n        let doc = Doc.of_cells [ c ] in\n        let md = Quill_markdown.to_string_with_outputs doc in\n        let doc2 = Quill_markdown.of_string md in\n        let md2 = Quill_markdown.to_string_with_outputs doc2 in\n        equal string md md2);\n    test \"fmt strips outputs\" (fun () ->\n        let md =\n          \"```ocaml\\n\\\n           let x = 1\\n\\\n           ```\\n\\\n           <!-- quill:output -->\\n\\\n           val x : int = 1\\n\\\n           <!-- /quill:output -->\\n\"\n        in\n        let doc = Quill_markdown.of_string md in\n        let doc = Doc.clear_all_outputs doc in\n        let result = Quill_markdown.to_string doc in\n        let has_marker =\n          String.split_on_char '\\n' result\n          |> List.exists (fun l -> String.trim l = \"<!-- quill:output -->\")\n        in\n        is_false ~msg:\"no output marker after fmt\" has_marker);\n  ]\n\nlet rendering_tests =\n  [\n    test \"render text cell\" (fun () ->\n        let doc = Doc.of_cells [ Cell.text \"# Hello\" ] in\n        let md = Quill_markdown.to_string doc in\n        let lines = String.split_on_char '\\n' md in\n        is_true ~msg:\"contains heading\"\n          (List.exists (fun l -> String.trim l = \"# Hello\") lines));\n    test \"render code cell\" (fun () ->\n        let doc = Doc.of_cells [ Cell.code ~language:\"ocaml\" \"let x = 1\" ] in\n        let md = Quill_markdown.to_string doc in\n        let lines = String.split_on_char '\\n' md in\n        is_true ~msg:\"has fence\"\n          (List.exists (fun l -> String.trim l = \"```ocaml\") lines);\n        is_true ~msg:\"has source\"\n          (List.exists (fun l -> String.trim l = \"let x = 1\") lines));\n    test \"render with outputs\" (fun () ->\n        let c =\n          Cell.code \"let x = 1\"\n        
  |> Cell.set_outputs [ Cell.Stdout \"val x : int = 1\\n\" ]\n        in\n        let doc = Doc.of_cells [ c ] in\n        let md = Quill_markdown.to_string_with_outputs doc in\n        let has_marker =\n          String.split_on_char '\\n' md\n          |> List.exists (fun l -> String.trim l = \"<!-- quill:output -->\")\n        in\n        is_true ~msg:\"has output marker\" has_marker);\n    test \"render without outputs omits markers\" (fun () ->\n        let c =\n          Cell.code \"let x = 1\"\n          |> Cell.set_outputs [ Cell.Stdout \"val x : int = 1\\n\" ]\n        in\n        let doc = Doc.of_cells [ c ] in\n        let md = Quill_markdown.to_string doc in\n        let has_marker =\n          String.split_on_char '\\n' md\n          |> List.exists (fun l -> String.trim l = \"<!-- quill:output -->\")\n        in\n        is_false ~msg:\"no output marker\" has_marker);\n  ]\n\nlet id_persistence_tests =\n  [\n    test \"code cell IDs survive roundtrip\" (fun () ->\n        let c1 = Cell.text ~id:\"t_1\" \"# Hello\" in\n        let c2 = Cell.code ~id:\"c_2\" \"let x = 1\" in\n        let doc = Doc.of_cells [ c1; c2 ] in\n        let md = Quill_markdown.to_string doc in\n        let doc2 = Quill_markdown.of_string md in\n        (match Doc.nth 0 doc2 with\n        | Some (Cell.Text _) -> ()\n        | _ -> fail \"expected Text cell\");\n        match Doc.nth 1 doc2 with\n        | Some (Cell.Code { id; _ }) -> equal string \"c_2\" id\n        | _ -> fail \"expected Code cell\");\n    test \"fresh IDs for unmarked cells\" (fun () ->\n        let md = \"# Hello\\n\\n```ocaml\\nlet x = 1\\n```\\n\" in\n        let doc = Quill_markdown.of_string md in\n        (match Doc.nth 0 doc with\n        | Some c -> is_true ~msg:\"text cell has id\" (Cell.id c <> \"\")\n        | None -> fail \"expected cell\");\n        match Doc.nth 1 doc with\n        | Some c -> is_true ~msg:\"code cell has id\" (Cell.id c <> \"\")\n        | None -> fail \"expected cell\");\n    
test \"IDs preserved with outputs\" (fun () ->\n        let c =\n          Cell.code ~id:\"c_99\" \"let x = 1\"\n          |> Cell.set_outputs [ Cell.Stdout \"val x : int = 1\\n\" ]\n        in\n        let doc = Doc.of_cells [ c ] in\n        let md = Quill_markdown.to_string_with_outputs doc in\n        let doc2 = Quill_markdown.of_string md in\n        match Doc.nth 0 doc2 with\n        | Some (Cell.Code { id; outputs; _ }) ->\n            equal string \"c_99\" id;\n            equal int 1 (List.length outputs)\n        | _ -> fail \"expected Code cell\");\n  ]\n\nlet structured_output_tests =\n  [\n    test \"roundtrip stderr\" (fun () ->\n        let c =\n          Cell.code \"let x = 1\"\n          |> Cell.set_outputs [ Cell.Stderr \"Warning 26: unused variable x\" ]\n        in\n        let doc = Doc.of_cells [ c ] in\n        let md = Quill_markdown.to_string_with_outputs doc in\n        let doc2 = Quill_markdown.of_string md in\n        match Doc.nth 0 doc2 with\n        | Some (Cell.Code { outputs; _ }) -> (\n            equal int 1 (List.length outputs);\n            match List.hd outputs with\n            | Cell.Stderr s -> equal string \"Warning 26: unused variable x\" s\n            | _ -> fail \"expected Stderr output\")\n        | _ -> fail \"expected Code cell\");\n    test \"roundtrip error\" (fun () ->\n        let c =\n          Cell.code \"let x = \" |> Cell.set_outputs [ Cell.Error \"Syntax error\" ]\n        in\n        let doc = Doc.of_cells [ c ] in\n        let md = Quill_markdown.to_string_with_outputs doc in\n        let doc2 = Quill_markdown.of_string md in\n        match Doc.nth 0 doc2 with\n        | Some (Cell.Code { outputs; _ }) -> (\n            equal int 1 (List.length outputs);\n            match List.hd outputs with\n            | Cell.Error s -> equal string \"Syntax error\" s\n            | _ -> fail \"expected Error output\")\n        | _ -> fail \"expected Code cell\");\n    test \"roundtrip display\" (fun () ->\n        
let c =\n          Cell.code \"plot ()\"\n          |> Cell.set_outputs\n               [ Cell.Display { mime = \"image/png\"; data = \"iVBORw0KGgo=\" } ]\n        in\n        let doc = Doc.of_cells [ c ] in\n        let md = Quill_markdown.to_string_with_outputs doc in\n        let doc2 = Quill_markdown.of_string md in\n        match Doc.nth 0 doc2 with\n        | Some (Cell.Code { outputs; _ }) -> (\n            equal int 1 (List.length outputs);\n            match List.hd outputs with\n            | Cell.Display { mime; data } ->\n                equal string \"image/png\" mime;\n                equal string \"iVBORw0KGgo=\" data\n            | _ -> fail \"expected Display output\")\n        | _ -> fail \"expected Code cell\");\n    test \"roundtrip mixed outputs\" (fun () ->\n        let c =\n          Cell.code \"let x = 1\"\n          |> Cell.set_outputs\n               [\n                 Cell.Stdout \"val x : int = 1\";\n                 Cell.Stderr \"Warning 26: unused\";\n                 Cell.Display { mime = \"text/html\"; data = \"<b>hello</b>\" };\n               ]\n        in\n        let doc = Doc.of_cells [ c ] in\n        let md = Quill_markdown.to_string_with_outputs doc in\n        let doc2 = Quill_markdown.of_string md in\n        match Doc.nth 0 doc2 with\n        | Some (Cell.Code { outputs; _ }) -> (\n            equal int 3 (List.length outputs);\n            match outputs with\n            | [ Cell.Stdout s; Cell.Stderr e; Cell.Display { mime; data } ] ->\n                equal string \"val x : int = 1\" s;\n                equal string \"Warning 26: unused\" e;\n                equal string \"text/html\" mime;\n                equal string \"<b>hello</b>\" data\n            | _ -> fail \"expected Stdout, Stderr, Display\")\n        | _ -> fail \"expected Code cell\");\n    test \"backward compat: untagged output parsed as stdout\" (fun () ->\n        let md =\n          \"```ocaml\\n\\\n           let x = 1\\n\\\n           ```\\n\\\n     
      <!-- quill:output -->\\n\\\n           val x : int = 1\\n\\\n           <!-- /quill:output -->\\n\"\n        in\n        let doc = Quill_markdown.of_string md in\n        match Doc.nth 0 doc with\n        | Some (Cell.Code { outputs; _ }) -> (\n            equal int 1 (List.length outputs);\n            match List.hd outputs with\n            | Cell.Stdout s -> equal string \"val x : int = 1\" s\n            | _ -> fail \"expected Stdout\")\n        | _ -> fail \"expected Code cell\");\n  ]\n\nlet attrs_tests =\n  [\n    test \"parse collapsed attr\" (fun () ->\n        let md =\n          \"<!-- quill:cell id=\\\"c_1\\\" collapsed -->\\n```ocaml\\nlet x = 1\\n```\\n\"\n        in\n        let doc = Quill_markdown.of_string md in\n        match Doc.nth 0 doc with\n        | Some (Cell.Code { attrs; _ }) ->\n            is_true ~msg:\"collapsed\" attrs.collapsed;\n            is_false ~msg:\"not hide_source\" attrs.hide_source\n        | _ -> fail \"expected Code cell\");\n    test \"parse hide-source attr\" (fun () ->\n        let md =\n          \"<!-- quill:cell id=\\\"c_1\\\" hide-source -->\\n\\\n           ```ocaml\\n\\\n           let x = 1\\n\\\n           ```\\n\"\n        in\n        let doc = Quill_markdown.of_string md in\n        match Doc.nth 0 doc with\n        | Some (Cell.Code { attrs; _ }) ->\n            is_false ~msg:\"not collapsed\" attrs.collapsed;\n            is_true ~msg:\"hide_source\" attrs.hide_source\n        | _ -> fail \"expected Code cell\");\n    test \"parse multiple attrs\" (fun () ->\n        let md =\n          \"<!-- quill:cell id=\\\"c_1\\\" collapsed hide-source -->\\n\\\n           ```ocaml\\n\\\n           let x = 1\\n\\\n           ```\\n\"\n        in\n        let doc = Quill_markdown.of_string md in\n        match Doc.nth 0 doc with\n        | Some (Cell.Code { attrs; _ }) ->\n            is_true ~msg:\"collapsed\" attrs.collapsed;\n            is_true ~msg:\"hide_source\" attrs.hide_source\n        | _ -> fail 
\"expected Code cell\");\n    test \"unknown attrs are ignored\" (fun () ->\n        let md =\n          \"<!-- quill:cell id=\\\"c_1\\\" collapsed future-flag -->\\n\\\n           ```ocaml\\n\\\n           let x = 1\\n\\\n           ```\\n\"\n        in\n        let doc = Quill_markdown.of_string md in\n        match Doc.nth 0 doc with\n        | Some (Cell.Code { attrs; id; _ }) ->\n            equal string \"c_1\" id;\n            is_true ~msg:\"collapsed\" attrs.collapsed\n        | _ -> fail \"expected Code cell\");\n    test \"no attrs is backward compatible\" (fun () ->\n        let md = \"<!-- quill:cell id=\\\"c_1\\\" -->\\n```ocaml\\nlet x = 1\\n```\\n\" in\n        let doc = Quill_markdown.of_string md in\n        match Doc.nth 0 doc with\n        | Some (Cell.Code { attrs; id; _ }) ->\n            equal string \"c_1\" id;\n            is_false ~msg:\"not collapsed\" attrs.collapsed;\n            is_false ~msg:\"not hide_source\" attrs.hide_source\n        | _ -> fail \"expected Code cell\");\n    test \"collapsed text cell\" (fun () ->\n        let md =\n          \"<!-- quill:cell id=\\\"t_1\\\" collapsed -->\\n# Hidden section\\n\"\n        in\n        let doc = Quill_markdown.of_string md in\n        match Doc.nth 0 doc with\n        | Some (Cell.Text { attrs; _ }) ->\n            is_true ~msg:\"collapsed\" attrs.collapsed\n        | _ -> fail \"expected Text cell\");\n    test \"roundtrip collapsed\" (fun () ->\n        let a = { Cell.collapsed = true; hide_source = false } in\n        let c = Cell.code ~id:\"c_1\" ~attrs:a \"let x = 1\" in\n        let doc = Doc.of_cells [ c ] in\n        let md = Quill_markdown.to_string doc in\n        let doc2 = Quill_markdown.of_string md in\n        match Doc.nth 0 doc2 with\n        | Some (Cell.Code { id; attrs; _ }) ->\n            equal string \"c_1\" id;\n            is_true ~msg:\"collapsed survives\" attrs.collapsed;\n            is_false ~msg:\"hide_source unchanged\" attrs.hide_source\n        | _ -> 
fail \"expected Code cell\");\n    test \"roundtrip hide-source\" (fun () ->\n        let a = { Cell.collapsed = false; hide_source = true } in\n        let c = Cell.code ~id:\"c_2\" ~attrs:a \"let x = 1\" in\n        let doc = Doc.of_cells [ c ] in\n        let md = Quill_markdown.to_string doc in\n        let doc2 = Quill_markdown.of_string md in\n        match Doc.nth 0 doc2 with\n        | Some (Cell.Code { id; attrs; _ }) ->\n            equal string \"c_2\" id;\n            is_false ~msg:\"not collapsed\" attrs.collapsed;\n            is_true ~msg:\"hide_source survives\" attrs.hide_source\n        | _ -> fail \"expected Code cell\");\n    test \"roundtrip both attrs\" (fun () ->\n        let a = { Cell.collapsed = true; hide_source = true } in\n        let c = Cell.code ~id:\"c_3\" ~attrs:a \"let x = 1\" in\n        let doc = Doc.of_cells [ c ] in\n        let md = Quill_markdown.to_string doc in\n        let doc2 = Quill_markdown.of_string md in\n        match Doc.nth 0 doc2 with\n        | Some (Cell.Code { attrs; _ }) ->\n            is_true ~msg:\"collapsed survives\" attrs.collapsed;\n            is_true ~msg:\"hide_source survives\" attrs.hide_source\n        | _ -> fail \"expected Code cell\");\n    test \"default attrs produce no tokens\" (fun () ->\n        let c = Cell.code ~id:\"c_4\" \"let x = 1\" in\n        let doc = Doc.of_cells [ c ] in\n        let md = Quill_markdown.to_string doc in\n        is_true ~msg:\"no collapsed token\"\n          (not\n             (String.split_on_char ' ' md\n             |> List.exists (fun w -> w = \"collapsed\"))));\n  ]\n\nlet () =\n  run \"Markdown\"\n    [\n      group \"Parsing\" parsing_tests;\n      group \"Rendering\" rendering_tests;\n      group \"ID persistence\" id_persistence_tests;\n      group \"Structured outputs\" structured_output_tests;\n      group \"Attributes\" attrs_tests;\n    ]\n"
  },
  {
    "path": "packages/quill/test/test_session.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Windtrap\nopen Quill\n\nlet basic_tests =\n  [\n    test \"create session\" (fun () ->\n        let doc = Doc.of_cells [ Cell.text \"hello\" ] in\n        let s = Session.create doc in\n        equal int 1 (Doc.length (Session.doc s)));\n    test \"update source\" (fun () ->\n        let c = Cell.text \"old\" in\n        let doc = Doc.of_cells [ c ] in\n        let s = Session.create doc in\n        let s = Session.update_source (Cell.id c) \"new\" s in\n        match Doc.find (Cell.id c) (Session.doc s) with\n        | Some c -> equal string \"new\" (Cell.source c)\n        | None -> fail \"cell not found\");\n    test \"insert cell\" (fun () ->\n        let doc = Doc.of_cells [ Cell.text \"a\" ] in\n        let s = Session.create doc in\n        let new_cell = Cell.text \"b\" in\n        let s = Session.insert_cell ~pos:1 new_cell s in\n        equal int 2 (Doc.length (Session.doc s)));\n    test \"remove cell\" (fun () ->\n        let c1 = Cell.text \"a\" in\n        let c2 = Cell.text \"b\" in\n        let doc = Doc.of_cells [ c1; c2 ] in\n        let s = Session.create doc in\n        let s = Session.remove_cell (Cell.id c1) s in\n        equal int 1 (Doc.length (Session.doc s)));\n    test \"move cell\" (fun () ->\n        let c1 = Cell.text \"a\" in\n        let c2 = Cell.text \"b\" in\n        let c3 = Cell.text \"c\" in\n        let doc = Doc.of_cells [ c1; c2; c3 ] in\n        let s = Session.create doc in\n        let s = Session.move_cell (Cell.id c3) ~pos:0 s in\n        match Doc.nth 0 (Session.doc s) with\n        | Some c -> equal string \"c\" (Cell.source c)\n        | None -> fail \"expected Some\");\n    test \"set cell kind\" (fun () ->\n        let c = Cell.text \"code 
here\" in\n        let doc = Doc.of_cells [ c ] in\n        let s = Session.create doc in\n        let s = Session.set_cell_kind (Cell.id c) `Code s in\n        match Doc.nth 0 (Session.doc s) with\n        | Some (Cell.Code _) -> ()\n        | _ -> fail \"expected Code cell\");\n    test \"clear outputs\" (fun () ->\n        let c = Cell.code \"x\" |> Cell.set_outputs [ Cell.Stdout \"out\" ] in\n        let doc = Doc.of_cells [ c ] in\n        let s = Session.create doc in\n        let s = Session.clear_outputs (Cell.id c) s in\n        match Doc.find (Cell.id c) (Session.doc s) with\n        | Some (Cell.Code { outputs; _ }) -> equal int 0 (List.length outputs)\n        | _ -> fail \"expected Code cell\");\n    test \"clear all outputs\" (fun () ->\n        let c1 = Cell.code \"x\" |> Cell.set_outputs [ Cell.Stdout \"out1\" ] in\n        let c2 = Cell.code \"y\" |> Cell.set_outputs [ Cell.Stdout \"out2\" ] in\n        let doc = Doc.of_cells [ c1; c2 ] in\n        let s = Session.create doc in\n        let s = Session.clear_all_outputs s in\n        List.iter\n          (fun cell ->\n            match cell with\n            | Cell.Code { outputs; _ } -> equal int 0 (List.length outputs)\n            | _ -> ())\n          (Doc.cells (Session.doc s)));\n  ]\n\nlet execution_state_tests =\n  [\n    test \"mark running\" (fun () ->\n        let c = Cell.code \"let x = 1\" in\n        let doc = Doc.of_cells [ c ] in\n        let s = Session.create doc in\n        let s = Session.mark_running (Cell.id c) s in\n        match Session.cell_status (Cell.id c) s with\n        | Session.Running -> ()\n        | _ -> fail \"expected Running\");\n    test \"mark queued\" (fun () ->\n        let c = Cell.code \"let x = 1\" in\n        let doc = Doc.of_cells [ c ] in\n        let s = Session.create doc in\n        let s = Session.mark_queued (Cell.id c) s in\n        match Session.cell_status (Cell.id c) s with\n        | Session.Queued -> ()\n        | _ -> fail \"expected 
Queued\");\n    test \"apply output and finish\" (fun () ->\n        let c = Cell.code \"let x = 1\" in\n        let doc = Doc.of_cells [ c ] in\n        let s = Session.create doc in\n        let s = Session.mark_running (Cell.id c) s in\n        let s = Session.apply_output (Cell.id c) (Cell.Stdout \"val x = 1\") s in\n        let s =\n          Session.apply_output (Cell.id c) (Cell.Stderr \"more output\") s\n        in\n        let s = Session.finish_execution (Cell.id c) ~success:true s in\n        (match Session.cell_status (Cell.id c) s with\n        | Session.Idle -> ()\n        | _ -> fail \"expected Idle after finish\");\n        match Doc.find (Cell.id c) (Session.doc s) with\n        | Some (Cell.Code { outputs; execution_count; _ }) ->\n            equal int 2 (List.length outputs);\n            equal int 1 execution_count\n        | _ -> fail \"expected Code cell with outputs\");\n    test \"default status is idle\" (fun () ->\n        let c = Cell.code \"x\" in\n        let doc = Doc.of_cells [ c ] in\n        let s = Session.create doc in\n        match Session.cell_status (Cell.id c) s with\n        | Session.Idle -> ()\n        | _ -> fail \"expected Idle\");\n  ]\n\nlet undo_redo_tests =\n  [\n    test \"update_source does not push history\" (fun () ->\n        let c = Cell.text \"original\" in\n        let doc = Doc.of_cells [ c ] in\n        let s = Session.create doc in\n        let s = Session.update_source (Cell.id c) \"changed\" s in\n        is_false ~msg:\"no undo without checkpoint\" (Session.can_undo s));\n    test \"checkpoint enables undo\" (fun () ->\n        let c = Cell.text \"original\" in\n        let doc = Doc.of_cells [ c ] in\n        let s = Session.create doc in\n        is_false ~msg:\"no undo initially\" (Session.can_undo s);\n        let s = Session.update_source (Cell.id c) \"changed\" s in\n        let s = Session.checkpoint s in\n        is_true ~msg:\"can undo after checkpoint\" (Session.can_undo s);\n        let s = 
Session.undo s in\n        (match Doc.find (Cell.id c) (Session.doc s) with\n        | Some c -> equal string \"original\" (Cell.source c)\n        | None -> fail \"cell not found\");\n        is_true ~msg:\"can redo\" (Session.can_redo s));\n    test \"redo after undo\" (fun () ->\n        let c = Cell.text \"original\" in\n        let doc = Doc.of_cells [ c ] in\n        let s = Session.create doc in\n        let s = Session.update_source (Cell.id c) \"changed\" s in\n        let s = Session.checkpoint s in\n        let s = Session.undo s in\n        let s = Session.redo s in\n        match Doc.find (Cell.id c) (Session.doc s) with\n        | Some c -> equal string \"changed\" (Cell.source c)\n        | None -> fail \"cell not found\");\n    test \"structural ops auto-checkpoint\" (fun () ->\n        let c = Cell.text \"a\" in\n        let doc = Doc.of_cells [ c ] in\n        let s = Session.create doc in\n        is_false ~msg:\"no undo initially\" (Session.can_undo s);\n        let s = Session.insert_cell ~pos:1 (Cell.text \"b\") s in\n        is_true ~msg:\"can undo after insert\" (Session.can_undo s);\n        let s = Session.undo s in\n        equal int 1 (Doc.length (Session.doc s)));\n    test \"checkpoint is noop when unchanged\" (fun () ->\n        let doc = Doc.of_cells [ Cell.text \"a\" ] in\n        let s = Session.create doc in\n        let s = Session.checkpoint s in\n        is_false ~msg:\"no undo after noop checkpoint\" (Session.can_undo s));\n    test \"undo on empty history is noop\" (fun () ->\n        let doc = Doc.of_cells [ Cell.text \"a\" ] in\n        let s = Session.create doc in\n        let s2 = Session.undo s in\n        equal int (Doc.length (Session.doc s)) (Doc.length (Session.doc s2)));\n    test \"reload clears history\" (fun () ->\n        let c = Cell.text \"original\" in\n        let doc = Doc.of_cells [ c ] in\n        let s = Session.create doc in\n        let s = Session.update_source (Cell.id c) \"changed\" s in\n        
let s = Session.checkpoint s in\n        is_true ~msg:\"can undo before reload\" (Session.can_undo s);\n        let new_doc = Doc.of_cells [ Cell.text \"reloaded\" ] in\n        let s = Session.reload new_doc s in\n        is_false ~msg:\"no undo after reload\" (Session.can_undo s);\n        equal int 1 (Doc.length (Session.doc s)));\n  ]\n\nlet () =\n  run \"Session\"\n    [\n      group \"Basic\" basic_tests;\n      group \"Execution state\" execution_state_tests;\n      group \"Undo/Redo\" undo_redo_tests;\n    ]\n"
  },
  {
    "path": "packages/rune/README.md",
    "content": "# Rune\n\nJAX-inspired automatic differentiation and JIT compilation library for OCaml\n\nRune brings JAX-like capabilities to OCaml, enabling high-performance numerical\ncomputation with automatic differentiation, multi-device support (CPU, CUDA,\nMetal), and JIT compilation.\n\n## Features\n\n- N-dimensional tensor operations (arithmetic, linear algebra, etc.)\n- Automatic differentiation: `grad`, `grads`, `value_and_grad`, `value_and_grads`\n- Functional API for pure computations\n- Multi-device backends: CPU, CUDA, Metal\n- Random tensor initialization: `rand`\n- JIT compilation to accelerate operations on GPU backends\n- Seamless interop with Nx for data loading and visualization\n\n## Quick Start\n\n```ocaml\nopen Rune\n\n(* Define a simple function: sum of squares *)\nlet f x = sum (mul x x)\n\n(* Create input tensor *)\nlet x = create Float32 [| 3; 3 |] (Array.init 9 float_of_int)\n\n(* Compute gradient of f at x *)\nlet grad_x = grad f x\n\n(* Print gradient *)\nprint grad_x\n```\n\n## Examples\n\nSee the `examples/` directory for:\n- `01-mlp`: training a simple MLP with `value_and_grads`\n- `xx-higher-derivative`: computing higher-order derivatives\n\n## Contributing\n\nSee the [Raven monorepo README](../../README.md) for guidelines.\n\n## License\n\nISC License. See [LICENSE](../../LICENSE) for details.\n"
  },
  {
    "path": "packages/rune/bench/README.md",
    "content": "# Rune Benchmarks\n\nThis directory contains benchmarks for the `rune` library. We provide comparative benchmarks against `pytorch`.\n\n## Results Rune Grad\n\n```\n┌───────────────────────────────┬──────────┬──────────┬──────────┬─────────┬────────────┐\n│ Name                          │ Wall/Run │  CPU/Run │  mWd/Run │ Speedup │ vs Fastest │\n├───────────────────────────────┼──────────┼──────────┼──────────┼─────────┼────────────┤\n│ ScalarGrad Medium (Rune)      │  15.89μs │  15.80μs │   7.56kw │   1.00x │       100% │\n│ ScalarGrad Large (Rune)       │  15.90μs │  15.86μs │   7.56kw │   1.00x │       100% │\n│ ScalarGrad Small (Rune)       │  16.05μs │  16.04μs │   7.56kw │   0.99x │       101% │\n│ VectorGrad Small (Rune)       │  32.40μs │  32.39μs │  14.14kw │   0.49x │       204% │\n│ VectorGrad Medium (Rune)      │  38.72μs │  38.62μs │  14.14kw │   0.41x │       244% │\n│ VectorGrad Large (Rune)       │  46.97μs │  46.85μs │  14.14kw │   0.34x │       296% │\n│ HigherOrderGrad Small (Rune)  │ 315.85μs │ 314.07μs │ 129.23kw │   0.05x │      1988% │\n│ HigherOrderGrad Medium (Rune) │ 390.42μs │ 388.73μs │ 129.23kw │   0.04x │      2457% │\n│ HigherOrderGrad Large (Rune)  │ 538.70μs │ 537.14μs │ 129.23kw │   0.03x │      3390% │\n│ MatMulGrad Small (Rune)       │ 626.49μs │ 889.20μs │  21.82kw │   0.03x │      3942% │\n│ ChainGrad Small (Rune)        │   4.22ms │   5.49ms │ 165.53kw │   0.00x │     26572% │\n│ MatMulGrad Medium (Rune)      │  10.61ms │  12.35ms │  21.70kw │   0.00x │     66768% │\n│ MatMulGrad Large (Rune)       │  36.79ms │  46.00ms │  21.70kw │   0.00x │    231511% │\n│ ChainGrad Medium (Rune)       │  77.46ms │  88.48ms │ 164.60kw │   0.00x │    487485% │\n│ ChainGrad Large (Rune)        │ 249.58ms │ 299.02ms │ 164.60kw │   0.00x │   1570623% │\n└───────────────────────────────┴──────────┴──────────┴──────────┴─────────┴────────────┘\n```\n\n## Results PyTorch 
Grad\n\n```\n┌──────────────────────────────────┬──────────┬──────────┬─────────┬─────────┬────────────┐\n│ Name                             │ Wall/Run │  CPU/Run │ mWd/Run │ Speedup │ vs Fastest │\n├──────────────────────────────────┼──────────┼──────────┼─────────┼─────────┼────────────┤\n│ ScalarGrad Large (PyTorch)       │  15.76µs │  15.73µs │   6.90w │   1.00x │       100% │\n│ ScalarGrad Small (PyTorch)       │  15.91µs │  15.86µs │   6.90w │   0.99x │       101% │\n│ ScalarGrad Medium (PyTorch)      │  16.03µs │  15.99µs │   6.90w │   0.98x │       102% │\n│ VectorGrad Small (PyTorch)       │  20.25µs │  20.24µs │   8.95w │   0.78x │       128% │\n│ VectorGrad Medium (PyTorch)      │  20.60µs │  20.46µs │   8.95w │   0.76x │       131% │\n│ VectorGrad Large (PyTorch)       │  21.25µs │  20.96µs │   8.95w │   0.74x │       135% │\n│ MatMulGrad Small (PyTorch)       │  37.37µs │  35.78µs │  15.11w │   0.42x │       237% │\n│ HigherOrderGrad Small (PyTorch)  │  47.46µs │  47.51µs │  41.39w │   0.33x │       301% │\n│ HigherOrderGrad Large (PyTorch)  │  47.55µs │  47.54µs │  38.61w │   0.33x │       302% │\n│ HigherOrderGrad Medium (PyTorch) │  48.36µs │  48.30µs │  38.61w │   0.33x │       307% │\n│ ChainGrad Small (PyTorch)        │ 189.04µs │ 268.01µs │ 141.66w │   0.08x │      1199% │\n│ MatMulGrad Medium (PyTorch)      │ 639.37µs │   1.26ms │ 543.12w │   0.02x │      4057% │\n│ ChainGrad Medium (PyTorch)       │ 847.94µs │   2.50ms │  1.32kw │   0.02x │      5380% │\n│ ChainGrad Large (PyTorch)        │   3.15ms │  11.92ms │  6.02kw │   0.00x │     20011% │\n│ MatMulGrad Large (PyTorch)       │   3.58ms │   7.28ms │  3.18kw │   0.00x │     22742% │\n└──────────────────────────────────┴──────────┴──────────┴─────────┴─────────┴────────────┘\n```\n"
  },
  {
    "path": "packages/rune/bench/bench_grad_pytorch.py",
    "content": "from __future__ import annotations\n\nimport sys\nfrom pathlib import Path\nfrom typing import Any, List\n\nimport torch\n\n_SCRIPTS_DIR = Path(__file__).resolve().parent\nwhile not (_SCRIPTS_DIR / \"dune-project\").exists():\n    _SCRIPTS_DIR = _SCRIPTS_DIR.parent\n_SCRIPTS_DIR = _SCRIPTS_DIR / \"scripts\"\nif str(_SCRIPTS_DIR) not in sys.path:\n    sys.path.insert(0, str(_SCRIPTS_DIR))\n\nimport ubench  # type: ignore\n\n\n# Benchmark sizes - focus on realistic ML workload sizes\nSIZES = [\n    (\"Small\", 100),    # Small batch/feature size\n    (\"Medium\", 500),   # Medium neural network layer\n    (\"Large\", 1000),   # Large neural network layer\n]\n\nBACKEND_NAME = \"PyTorch\"\n\n\ndef benchmark_name(op_name: str, size_name: str) -> str:\n    \"\"\"Create benchmark name.\"\"\"\n    return f\"{op_name} {size_name} ({BACKEND_NAME})\"\n\n\nclass ScalarGradBenchmarks:\n    \"\"\"Scalar→Scalar gradient: f(x) = x^2\"\"\"\n\n    @staticmethod\n    def build() -> List[Any]:\n        benchmarks = []\n        for size_name, _ in SIZES:\n            # Create tensor outside benchmark - matching Rune's approach\n            x = torch.tensor(5.0, requires_grad=True)\n\n            def bench_fn(x_input=x):\n                # Reset gradient from previous run\n                if x_input.grad is not None:\n                    x_input.grad.zero_()\n                y = x_input ** 2\n                y.backward()\n                return x_input.grad\n\n            bench_name = benchmark_name(\"ScalarGrad\", size_name)\n            benchmarks.append(ubench.bench(bench_name, bench_fn))\n\n        return benchmarks\n\n\nclass VectorScalarGradBenchmarks:\n    \"\"\"Vector→Scalar gradient: f(x) = sum(x^2) (L2 norm squared)\"\"\"\n\n    @staticmethod\n    def build() -> List[Any]:\n        benchmarks = []\n        torch.manual_seed(0)\n\n        for size_name, size in SIZES:\n            # Create tensor outside benchmark - matching Rune's approach\n            x = 
torch.randn(size, requires_grad=True)\n\n            def bench_fn(x_input=x):\n                # Reset gradient from previous run\n                if x_input.grad is not None:\n                    x_input.grad.zero_()\n                y = torch.sum(x_input ** 2)\n                y.backward()\n                return x_input.grad\n\n            bench_name = benchmark_name(\"VectorGrad\", size_name)\n            benchmarks.append(ubench.bench(bench_name, bench_fn))\n\n        return benchmarks\n\n\nclass MatMulGradBenchmarks:\n    \"\"\"MatMul gradient: f(x) = sum(matmul(x, W))\"\"\"\n\n    @staticmethod\n    def build() -> List[Any]:\n        benchmarks = []\n        torch.manual_seed(1)\n\n        for size_name, size in SIZES:\n            # Create tensors outside benchmark - matching Rune's approach\n            x = torch.randn(size, size, requires_grad=True)\n            w = torch.randn(size, size)\n\n            def bench_fn(x_input=x, w_input=w):\n                # Reset gradient from previous run\n                if x_input.grad is not None:\n                    x_input.grad.zero_()\n                y = torch.sum(torch.matmul(x_input, w_input))\n                y.backward()\n                return x_input.grad\n\n            bench_name = benchmark_name(\"MatMulGrad\", size_name)\n            benchmarks.append(ubench.bench(bench_name, bench_fn))\n\n        return benchmarks\n\n\nclass ChainGradBenchmarks:\n    \"\"\"Chain of operations: f(x) = sum(exp(tanh(x^2)))\"\"\"\n\n    @staticmethod\n    def build() -> List[Any]:\n        benchmarks = []\n        torch.manual_seed(2)\n\n        for size_name, size in SIZES:\n            # Create tensor outside benchmark - matching Rune's approach\n            x = torch.randn(size, size, requires_grad=True)\n\n            def bench_fn(x_input=x):\n                # Reset gradient from previous run\n                if x_input.grad is not None:\n                    x_input.grad.zero_()\n                y = 
torch.sum(torch.exp(torch.tanh(x_input ** 2)))\n                y.backward()\n                return x_input.grad\n\n            bench_name = benchmark_name(\"ChainGrad\", size_name)\n            benchmarks.append(ubench.bench(bench_name, bench_fn))\n\n        return benchmarks\n\n\nclass HigherOrderGradBenchmarks:\n    \"\"\"Higher-order gradient: grad(grad(f)) where f(x) = sum(x^3)\"\"\"\n\n    @staticmethod\n    def build() -> List[Any]:\n        benchmarks = []\n        torch.manual_seed(3)\n\n        for size_name, size in SIZES:\n            # Create tensor outside benchmark - matching Rune's approach\n            x = torch.randn(size, requires_grad=True)\n\n            def bench_fn(x_input=x):\n                # Reset gradient from previous run\n                if x_input.grad is not None:\n                    x_input.grad.zero_()\n\n                # First grad: grad(f)\n                y = torch.sum(x_input ** 3)\n                grad_outputs = torch.ones_like(y)\n                first_grad = torch.autograd.grad(y, x_input, grad_outputs=grad_outputs, create_graph=True)[0]\n\n                # Second grad: grad(grad(f))\n                grad_sum = torch.sum(first_grad)\n                second_grad = torch.autograd.grad(grad_sum, x_input)[0]\n\n                return second_grad\n\n            bench_name = benchmark_name(\"HigherOrderGrad\", size_name)\n            benchmarks.append(ubench.bench(bench_name, bench_fn))\n\n        return benchmarks\n\n\ndef build_benchmarks() -> List[Any]:\n    \"\"\"Build all gradient benchmarks.\"\"\"\n    benchmarks = []\n    benchmarks.extend(ScalarGradBenchmarks.build())\n    benchmarks.extend(VectorScalarGradBenchmarks.build())\n    benchmarks.extend(MatMulGradBenchmarks.build())\n    benchmarks.extend(ChainGradBenchmarks.build())\n    benchmarks.extend(HigherOrderGradBenchmarks.build())\n    return benchmarks\n\n\ndef default_config() -> ubench.Config:\n    \"\"\"Create default benchmark configuration.\"\"\"\n    return 
(\n        ubench.Config.default()\n        .time_limit(1.0)\n        .warmup(1)\n        .min_measurements(5)\n        .min_cpu(0.01)\n        .geometric_scale(1.3)\n        .gc_stabilization(False)\n        .build()\n    )\n\n\ndef main() -> None:\n    \"\"\"Main entry point.\"\"\"\n    benchmarks = build_benchmarks()\n    config = default_config()\n    ubench.run(benchmarks, config=config, output_format=\"pretty\", verbose=False)\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "packages/rune/bench/bench_grad_rune.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Nx\nopen Rune\n\nlet sizes =\n  [\n    (\"Small\", 100);\n    (* Small batch/feature size *)\n    (\"Medium\", 500);\n    (* Medium neural network layer *)\n    (\"Large\", 1000);\n    (* Large neural network layer *)\n  ]\n\nlet backend_name = \"Rune\"\n\nlet benchmark_name op_name size_name =\n  Printf.sprintf \"%s %s (%s)\" op_name size_name backend_name\n\n(* Scalar→Scalar: f(x) = x^2 *)\nlet scalar_grad_benchmarks () =\n  let f x = square x in\n\n  List.map\n    (fun (size_name, _) ->\n      let x = scalar float32 5.0 in\n      let bench_name = benchmark_name \"ScalarGrad\" size_name in\n      Thumper.bench bench_name (fun () -> grad f x))\n    sizes\n\n(* Vector→Scalar: f(x) = sum(x^2) (L2 norm squared) *)\nlet vector_scalar_grad_benchmarks () =\n  List.map\n    (fun (size_name, size) ->\n      let x = randn float32 [| size |] in\n      let f x = sum (square x) in\n      let bench_name = benchmark_name \"VectorGrad\" size_name in\n      Thumper.bench bench_name (fun () -> grad f x))\n    sizes\n\n(* MatMul gradient: f(x) = sum(matmul(x, W)) *)\nlet matmul_grad_benchmarks () =\n  List.map\n    (fun (size_name, size) ->\n      let x = randn float32 [| size; size |] in\n      let w = randn float32 [| size; size |] in\n      let f x = sum (matmul x w) in\n      let bench_name = benchmark_name \"MatMulGrad\" size_name in\n      Thumper.bench bench_name (fun () -> grad f x))\n    sizes\n\n(* Chain of operations: f(x) = sum(exp(tanh(x^2))) *)\nlet chain_grad_benchmarks () =\n  List.map\n    (fun (size_name, size) ->\n      let x = randn float32 [| size; size |] in\n      let f x = sum (exp (tanh (square x))) in\n      let bench_name = benchmark_name \"ChainGrad\" size_name in\n      
Thumper.bench bench_name (fun () -> grad f x))\n    sizes\n\n(* Higher-order gradient: grad(grad(f)) where f(x) = sum(x^3) *)\nlet higher_order_grad_benchmarks () =\n  List.map\n    (fun (size_name, size) ->\n      let x = randn float32 [| size |] in\n      let f x = sum (mul (mul x x) x) in\n      (* x^3 as x * x * x *)\n      let grad_f = grad f in\n      let grad_grad_f = grad (fun x -> sum (grad_f x)) in\n      let bench_name = benchmark_name \"HigherOrderGrad\" size_name in\n      Thumper.bench bench_name (fun () -> grad_grad_f x))\n    sizes\n\nlet build_benchmarks () =\n  [\n    Thumper.group \"ScalarGrad\" (scalar_grad_benchmarks ());\n    Thumper.group \"VectorGrad\" (vector_scalar_grad_benchmarks ());\n    Thumper.group \"MatMulGrad\" (matmul_grad_benchmarks ());\n    Thumper.group \"ChainGrad\" (chain_grad_benchmarks ());\n    Thumper.group \"HigherOrderGrad\" (higher_order_grad_benchmarks ());\n  ]\n\nlet () =\n  let benchmarks = build_benchmarks () in\n  Thumper.run \"rune_grad\" benchmarks\n"
  },
  {
    "path": "packages/rune/bench/dune",
    "content": "(executable\n (name bench_grad_rune)\n (modules bench_grad_rune)\n (libraries nx rune thumper))\n\n(rule\n (alias runtest)\n (action\n  (progn\n   (run %{exe:bench_grad_rune.exe} -q)\n   (diff? rune_grad.thumper rune_grad.thumper.corrected))))\n"
  },
  {
    "path": "packages/rune/bench/rune_grad.thumper",
    "content": "# thumper baseline\n# version: 1\n# suite_name: rune_grad\n# host: 1480401c3b76ed18\n# cpu: Apple M1 Max\n# ocaml: 5.4.1\n# git: 31747323\n# dirty: true\n# command: /Users/tmattio/Workspace/raven/_build/default/packages/rune/bench/bench_grad_rune.exe --bless --quick\n\nchaingrad/chaingrad_large__rune_\talloc_words\t3.427000e+03\t3.427000e+03\t3.427000e+03\t0.000000e+00\t5\t1\nchaingrad/chaingrad_large__rune_\tcpu_time\t2.575002e-02\t2.530498e-02\t2.609798e-02\t1.539802e-02\t5\t0\nchaingrad/chaingrad_large__rune_\twall_time\t1.077314e-02\t1.054344e-02\t1.096362e-02\t1.950124e-02\t5\t1\nchaingrad/chaingrad_medium__rune_\talloc_words\t3.427000e+03\t3.427000e+03\t3.427000e+03\t0.000000e+00\t5\t0\nchaingrad/chaingrad_medium__rune_\tcpu_time\t8.607342e-03\t8.485994e-03\t8.722955e-03\t1.376503e-02\t5\t2\nchaingrad/chaingrad_medium__rune_\twall_time\t4.498331e-03\t4.451612e-03\t4.545550e-03\t1.044142e-02\t5\t0\nchaingrad/chaingrad_small__rune_\talloc_words\t3.427000e+03\t3.427000e+03\t3.427000e+03\t0.000000e+00\t5\t1\nchaingrad/chaingrad_small__rune_\tcpu_time\t8.645667e-04\t8.491358e-04\t8.911334e-04\t2.428820e-02\t5\t0\nchaingrad/chaingrad_small__rune_\twall_time\t4.201428e-04\t4.124162e-04\t4.264762e-04\t1.673242e-02\t5\t0\nhigherordergrad/higherordergrad_large__rune_\talloc_words\t1.295800e+04\t1.295800e+04\t1.295800e+04\t0.000000e+00\t5\t0\nhigherordergrad/higherordergrad_large__rune_\tcpu_time\t1.040014e-04\t1.021400e-04\t1.065322e-04\t2.111615e-02\t5\t1\nhigherordergrad/higherordergrad_large__rune_\twall_time\t1.042034e-04\t1.023945e-04\t1.065158e-04\t1.977508e-02\t5\t1\nhigherordergrad/higherordergrad_medium__rune_\talloc_words\t1.295800e+04\t1.295800e+04\t1.295800e+04\t0.000000e+00\t5\t0\nhigherordergrad/higherordergrad_medium__rune_\tcpu_time\t8.657577e-05\t8.584802e-05\t8.740691e-05\t9.003085e-03\t5\t0\nhigherordergrad/higherordergrad_medium__rune_\twall_time\t8.674316e-05\t8.597056e-05\t8.771604e-05\t1.006120e-02\t5\t0\nhigherordergrad/higherorde
rgrad_small__rune_\talloc_words\t1.295800e+04\t1.295800e+04\t1.295800e+04\t0.000000e+00\t5\t0\nhigherordergrad/higherordergrad_small__rune_\tcpu_time\t7.090812e-05\t7.050030e-05\t7.130218e-05\t5.654316e-03\t5\t0\nhigherordergrad/higherordergrad_small__rune_\twall_time\t7.104141e-05\t7.062549e-05\t7.143284e-05\t5.682210e-03\t5\t0\nmatmulgrad/matmulgrad_large__rune_\talloc_words\t2.222000e+03\t2.222000e+03\t2.222000e+03\t0.000000e+00\t5\t1\nmatmulgrad/matmulgrad_large__rune_\tcpu_time\t1.748642e-02\t1.734108e-02\t1.773867e-02\t1.136857e-02\t5\t0\nmatmulgrad/matmulgrad_large__rune_\twall_time\t1.061543e-02\t1.053023e-02\t1.087078e-02\t1.604026e-02\t5\t0\nmatmulgrad/matmulgrad_medium__rune_\talloc_words\t2.222000e+03\t2.222000e+03\t2.222000e+03\t0.000000e+00\t5\t0\nmatmulgrad/matmulgrad_medium__rune_\tcpu_time\t4.644163e-03\t4.586986e-03\t4.685784e-03\t1.063679e-02\t5\t0\nmatmulgrad/matmulgrad_medium__rune_\twall_time\t3.179440e-03\t3.169223e-03\t3.192833e-03\t3.712898e-03\t5\t2\nmatmulgrad/matmulgrad_small__rune_\talloc_words\t2.222000e+03\t2.222000e+03\t2.222000e+03\t0.000000e+00\t5\t0\nmatmulgrad/matmulgrad_small__rune_\tcpu_time\t5.446490e-04\t5.364458e-04\t5.531460e-04\t1.533115e-02\t5\t2\nmatmulgrad/matmulgrad_small__rune_\twall_time\t2.831983e-04\t2.808265e-04\t2.856499e-04\t8.516019e-03\t5\t1\nscalargrad/scalargrad_large__rune_\talloc_words\t1.090000e+03\t1.090000e+03\t1.090000e+03\t0.000000e+00\t5\t0\nscalargrad/scalargrad_large__rune_\tcpu_time\t4.372699e-06\t4.355628e-06\t4.401643e-06\t5.261614e-03\t5\t1\nscalargrad/scalargrad_large__rune_\twall_time\t4.377339e-06\t4.360202e-06\t4.409181e-06\t5.594660e-03\t5\t1\nscalargrad/scalargrad_medium__rune_\talloc_words\t1.090000e+03\t1.090000e+03\t1.090000e+03\t0.000000e+00\t5\t0\nscalargrad/scalargrad_medium__rune_\tcpu_time\t4.479495e-06\t4.438746e-06\t4.539990e-06\t1.130083e-02\t5\t0\nscalargrad/scalargrad_medium__rune_\twall_time\t4.485193e-06\t4.445813e-06\t4.532181e-06\t9.628058e-03\t5\t0\nscalargrad/scalargrad_
small__rune_\talloc_words\t1.090000e+03\t1.090000e+03\t1.090000e+03\t0.000000e+00\t5\t0\nscalargrad/scalargrad_small__rune_\tcpu_time\t4.530607e-06\t4.437655e-06\t4.587361e-06\t1.652164e-02\t5\t2\nscalargrad/scalargrad_small__rune_\twall_time\t4.540945e-06\t4.442283e-06\t4.619599e-06\t1.952420e-02\t5\t2\nvectorgrad/vectorgrad_large__rune_\talloc_words\t1.948000e+03\t1.948000e+03\t1.948000e+03\t0.000000e+00\t5\t0\nvectorgrad/vectorgrad_large__rune_\tcpu_time\t1.660253e-05\t1.649366e-05\t1.672837e-05\t7.068456e-03\t5\t0\nvectorgrad/vectorgrad_large__rune_\twall_time\t1.662005e-05\t1.650883e-05\t1.674066e-05\t6.974237e-03\t5\t0\nvectorgrad/vectorgrad_medium__rune_\talloc_words\t1.948000e+03\t1.948000e+03\t1.948000e+03\t0.000000e+00\t5\t0\nvectorgrad/vectorgrad_medium__rune_\tcpu_time\t1.262289e-05\t1.250797e-05\t1.269161e-05\t7.274341e-03\t5\t1\nvectorgrad/vectorgrad_medium__rune_\twall_time\t1.262778e-05\t1.250697e-05\t1.269567e-05\t7.471640e-03\t5\t1\nvectorgrad/vectorgrad_small__rune_\talloc_words\t1.948000e+03\t1.948000e+03\t1.948000e+03\t0.000000e+00\t5\t0\nvectorgrad/vectorgrad_small__rune_\tcpu_time\t9.503489e-06\t9.442741e-06\t9.554159e-06\t5.861941e-03\t5\t0\nvectorgrad/vectorgrad_small__rune_\twall_time\t9.514726e-06\t9.462623e-06\t9.571061e-06\t5.698448e-03\t5\t0\n"
  },
  {
    "path": "packages/rune/doc/01-getting-started.md",
    "content": "# Getting Started\n\nThis guide shows you how to compute gradients and use Rune's transformations.\n\n## Installation\n\n<!-- $MDX skip -->\n```bash\nopam install rune\n```\n\nOr build from source:\n\n<!-- $MDX skip -->\n```bash\ngit clone https://github.com/raven-ml/raven\ncd raven && dune build rune\n```\n\nAdd to your `dune` file:\n\n<!-- $MDX skip -->\n```dune\n(executable\n (name main)\n (libraries rune))\n```\n\n## Your First Gradient\n\nRune operates on Nx tensors directly. Write a function using Nx operations, then use `grad` to get its derivative:\n\n```ocaml\nopen Nx\nopen Rune\n\nlet () =\n  (* A simple function: f(x) = x² + sin(x) *)\n  let f x = add (mul x x) (sin x) in\n\n  (* grad returns a function that computes the derivative *)\n  let f' = grad f in\n\n  let x = scalar Float32 2.0 in\n  Printf.printf \"f(2)  = %.4f\\n\" (item [] (f x));\n  Printf.printf \"f'(2) = %.4f\\n\" (item [] (f' x))\n  (* f'(x) = 2x + cos(x), so f'(2) ≈ 3.5839 *)\n```\n\nKey points:\n- `grad f` takes a function `f : Nx.t -> Nx.t` and returns a new function that computes the gradient\n- The input function must return a scalar tensor\n- The gradient has the same shape as the input\n\n## Value and Gradient Together\n\nIn practice, you usually want both the function value and its gradient. 
Use `value_and_grad` to avoid computing the forward pass twice:\n\n```ocaml\nopen Nx\nopen Rune\n\nlet () =\n  let f x = mean (mul x x) in\n  let x = create Float32 [|3|] [|1.0; 2.0; 3.0|] in\n  let value, gradient = value_and_grad f x in\n  Printf.printf \"f(x) = %.4f\\n\" (item [] value);\n  print_data gradient\n```\n\n## Multiple Inputs\n\nWhen your function takes multiple inputs, use `grads` or `value_and_grads`:\n\n```ocaml\nopen Nx\nopen Rune\n\nlet () =\n  let f inputs =\n    match inputs with\n    | [x; y] -> add (mul x x) (mul y y)\n    | _ -> failwith \"expected 2 inputs\"\n  in\n  let df = grads f in\n  match df [scalar Float32 3.0; scalar Float32 4.0] with\n  | [dx; dy] ->\n    Printf.printf \"df/dx = %.1f\\n\" (item [] dx);\n    Printf.printf \"df/dy = %.1f\\n\" (item [] dy)\n  | _ -> assert false\n```\n\n## Higher-Order Derivatives\n\nSince `grad` returns a regular function, you can differentiate again:\n\n```ocaml\nopen Nx\nopen Rune\n\nlet () =\n  (* f(x) = x⁴ *)\n  let f x = mul x (mul x (mul x x)) in\n  let f' = grad f in        (* 4x³ *)\n  let f'' = grad f' in      (* 12x² *)\n  let f''' = grad f'' in    (* 24x *)\n  let x = scalar Float32 2.0 in\n  Printf.printf \"f(2)    = %.1f\\n\" (item [] (f x));\n  Printf.printf \"f'(2)   = %.1f\\n\" (item [] (f' x));\n  Printf.printf \"f''(2)  = %.1f\\n\" (item [] (f'' x));\n  Printf.printf \"f'''(2) = %.1f\\n\" (item [] (f''' x))\n```\n\n## Stopping Gradients\n\nSometimes you need part of a computation to be treated as a constant:\n\n<!-- $MDX skip -->\n```ocaml\nopen Rune\n\n(* no_grad: nothing inside is recorded *)\nlet baseline = no_grad (fun () ->\n  (* compute a baseline value that should not be differentiated *)\n  mean predictions\n)\n\n(* detach: make a single tensor a constant *)\nlet target = detach current_prediction\n```\n\n## A Simple Training Loop\n\nHere is a minimal example that trains a linear model with gradient descent:\n\n<!-- $MDX skip -->\n```ocaml\nopen Nx\nopen Rune\n\nlet () =\n  
(* Data: y = 2x + 1 *)\n  let x_data = create Float32 [|4; 1|] [|1.; 2.; 3.; 4.|] in\n  let y_data = create Float32 [|4; 1|] [|3.; 5.; 7.; 9.|] in\n\n  (* Parameters *)\n  let w = rand Float32 [|1; 1|] in\n  let b = zeros Float32 [|1|] in\n\n  let loss_fn params =\n    match params with\n    | [w; b] ->\n      let pred = add (matmul x_data w) b in\n      mean (mul (sub pred y_data) (sub pred y_data))\n    | _ -> assert false\n  in\n\n  let lr = scalar Float32 0.01 in\n  for epoch = 1 to 200 do\n    let loss, gs = value_and_grads loss_fn [w; b] in\n    match gs with\n    | [gw; gb] ->\n      ignore (sub ~out:w w (mul lr gw));\n      ignore (sub ~out:b b (mul lr gb));\n      if epoch mod 50 = 0 then\n        Printf.printf \"epoch %d  loss %.6f\\n\" epoch (item [] loss)\n    | _ -> assert false\n  done;\n  Printf.printf \"w = %.3f  b = %.3f\\n\" (item [0; 0] w) (item [0] b)\n```\n\nFor real neural networks, use [Kaun](/docs/kaun/), which provides layers, optimizers, and training loops built on top of Rune.\n\n## Next Steps\n\n- [Transformations](/docs/rune/transformations/) — complete guide to grad, jvp, vmap, and more\n- [How It Works](/docs/rune/how-it-works/) — how effects-based autodiff works under the hood\n- [Kaun Getting Started](/docs/kaun/getting-started/) — high-level neural network training\n"
  },
  {
    "path": "packages/rune/doc/02-transformations.md",
    "content": "# Transformations\n\nRune provides functional transformations that operate on Nx tensor functions. This guide covers every transformation available.\n\n## Reverse-Mode AD\n\nReverse-mode AD (backpropagation) is efficient when you have many inputs and a scalar output — the typical case in machine learning.\n\n### grad\n\n`grad f` returns a function that computes the gradient of scalar-valued `f`.\n\n```ocaml\nopen Nx\nopen Rune\n\nlet () =\n  let f x = sum (mul x x) in\n  let df = grad f in\n  let x = create Float32 [|3|] [|1.; 2.; 3.|] in\n  print_data (df x)\n  (* gradient: [2. 4. 6.] *)\n```\n\n### grads\n\n`grads` differentiates with respect to multiple inputs:\n\n```ocaml\nopen Nx\nopen Rune\n\nlet () =\n  let f inputs =\n    match inputs with\n    | [x; y] -> sum (add (mul x x) (mul y y))\n    | _ -> assert false\n  in\n  let gs = grads f [scalar Float32 3.0; scalar Float32 4.0] in\n  List.iter (fun g -> Printf.printf \"%.1f \" (item [] g)) gs\n  (* 6.0 8.0 *)\n```\n\n### value_and_grad\n\nComputes both the function value and gradient in a single forward-backward pass, avoiding redundant computation:\n\n<!-- $MDX skip -->\n```ocaml\nlet loss, gradient = value_and_grad loss_fn params\n```\n\n`value_and_grads` does the same for multiple inputs.\n\n### value_and_grad_aux\n\nWhen your function returns auxiliary data alongside the loss (e.g., predictions, metrics), use the `_aux` variants to carry it through without differentiating it:\n\n<!-- $MDX skip -->\n```ocaml\nlet f x =\n  let pred = forward_pass x in\n  let loss = compute_loss pred in\n  (loss, pred)  (* pred is auxiliary — not differentiated *)\n\nlet loss, gradient, pred = value_and_grad_aux f x\n```\n\n`value_and_grads_aux` does the same for multiple inputs.\n\n### vjp\n\nVector-Jacobian product. 
Unlike `grad`, the function does not need to return a scalar — you provide a cotangent vector:\n\n```ocaml\nopen Nx\nopen Rune\n\nlet () =\n  let f x = mul x x in\n  let x = create Float32 [|3|] [|1.; 2.; 3.|] in\n  let v = ones Float32 [|3|] in\n  let y, g = vjp f x v in\n  print_data y;  (* [1. 4. 9.] *)\n  print_data g   (* [2. 4. 6.] *)\n```\n\n`vjps` handles multiple inputs.\n\n## Forward-Mode AD\n\nForward-mode AD propagates tangent vectors alongside primal values. It is efficient when the number of inputs is small relative to the number of outputs.\n\n### jvp\n\nJacobian-vector product. Provide a tangent vector with the same shape as the input:\n\n```ocaml\nopen Nx\nopen Rune\n\nlet () =\n  let f x = mul x x in\n  let x = create Float32 [|3|] [|1.; 2.; 3.|] in\n  let v = ones Float32 [|3|] in\n  let y, tangent = jvp f x v in\n  print_data y;       (* [1. 4. 9.] — primal *)\n  print_data tangent  (* [2. 4. 6.] — directional derivative *)\n```\n\n`jvps` handles multiple inputs. `jvp_aux` carries auxiliary outputs.\n\n### Choosing Between Forward and Reverse Mode\n\n- **Reverse mode** (`grad`, `vjp`): One backward pass gives gradients for all inputs. Best when outputs << inputs (typical in ML: scalar loss, many parameters).\n- **Forward mode** (`jvp`): One forward pass gives one directional derivative. 
Best when inputs << outputs (e.g., sensitivity analysis with few parameters).\n\n## Stopping Gradients\n\n### no_grad\n\nEvaluate a computation without recording it for differentiation:\n\n<!-- $MDX skip -->\n```ocaml\nlet baseline = no_grad (fun () ->\n  mean predictions\n)\n```\n\nEverything computed inside `no_grad` is treated as a constant by enclosing gradient computations.\n\n### detach\n\nMake a single tensor a constant:\n\n<!-- $MDX skip -->\n```ocaml\nlet target = detach current_value\n(* target has the same values but is not differentiated *)\n```\n\n## Vectorising Map\n\n### vmap\n\n`vmap` transforms a function that operates on single examples into one that operates on batches:\n\n<!-- $MDX skip -->\n```ocaml\n(* Function that works on a single vector *)\nlet f x = sum (mul x x)\n\n(* Automatically batched: maps over axis 0 of the input *)\nlet f_batched = vmap f\n\n(* Process a batch of 10 vectors at once *)\nlet batch = rand Float32 [|10; 5|] in\nlet results = f_batched batch\n(* results has shape [|10|] — one scalar per example *)\n```\n\nBy default, `vmap` maps over axis 0 of inputs and stacks outputs on axis 0. 
You can customize this:\n\n<!-- $MDX skip -->\n```ocaml\n(* Map over axis 1 instead *)\nlet f_axis1 = vmap ~in_axes:(Single (Map 1)) f\n\n(* Don't map an input (broadcast it) *)\nlet f_shared = vmap ~in_axes:(Single NoMap) f\n```\n\n`vmaps` handles functions with multiple inputs, with per-input axis specifications.\n\n### Composing vmap with grad\n\nSince transformations are composable, you can compute per-example gradients:\n\n<!-- $MDX skip -->\n```ocaml\n(* Per-example gradient (no manual batching needed) *)\nlet per_example_grad = vmap (grad loss_fn)\n```\n\n## Gradient Checking\n\nRune provides utilities for verifying that autodiff gradients are correct by comparing them against finite-difference approximations.\n\n### finite_diff\n\nApproximate the gradient using finite differences:\n\n```ocaml\nopen Nx\nopen Rune\n\nlet () =\n  let f x = sum (mul x x) in\n  let x = create Float32 [|3|] [|1.; 2.; 3.|] in\n  let fd_grad = finite_diff f x in\n  let ad_grad = grad f x in\n  print_data fd_grad;\n  print_data ad_grad\n  (* both approximately [2. 4. 6.] *)\n```\n\nThe default method is central differences (`(f(x+h) - f(x-h)) / 2h`). 
You can choose `Forward` or `Backward` methods and adjust `eps` (default `1e-4`).\n\n### check_gradient\n\nAutomated comparison of autodiff vs finite-difference gradients:\n\n<!-- $MDX skip -->\n```ocaml\nmatch check_gradient ~verbose:true my_function x with\n| `Pass result -> Printf.printf \"max error: %e\\n\" result.max_abs_error\n| `Fail result ->\n  Printf.printf \"%d of %d elements failed\\n\"\n    result.num_failed result.num_checked\n```\n\n`check_gradients` handles functions with multiple inputs.\n\n## Debugging\n\n### debug\n\nPrint every tensor operation as it executes:\n\n<!-- $MDX skip -->\n```ocaml\nlet () =\n  let f x = add (mul x x) (sin x) in\n  let x = scalar Float32 2.0 in\n  let _ = debug f x in\n  ()\n(* Prints each operation, its inputs, and its output *)\n```\n\nThis is useful for understanding what operations a function performs, especially when debugging unexpected gradients.\n\n## Summary\n\n| Transform | Purpose | When to use |\n|-----------|---------|-------------|\n| `grad` | Gradient of scalar function | Training loss → parameter gradients |\n| `value_and_grad` | Value + gradient together | Avoid duplicate forward pass |\n| `vjp` | Vector-Jacobian product | Non-scalar outputs |\n| `jvp` | Jacobian-vector product | Few inputs, many outputs |\n| `vmap` | Vectorise over a batch dimension | Per-example computation |\n| `no_grad` / `detach` | Stop gradient propagation | Baselines, targets, constants |\n| `check_gradient` | Verify gradient correctness | Testing custom operations |\n| `debug` | Trace all operations | Understanding/debugging |\n"
  },
  {
    "path": "packages/rune/doc/03-how-it-works.md",
    "content": "# How It Works\n\nThis page explains how Rune implements automatic differentiation using OCaml 5 effect handlers. Understanding the mechanism is not required for using Rune, but it helps when debugging unexpected behavior or reasoning about performance.\n\n## The Core Idea\n\nWhen you call `Nx.add x y`, the operation raises an OCaml 5 effect before performing the actual computation. Normally, no handler is installed, so the effect is unhandled and falls through to the default C backend, which executes the operation directly.\n\nRune's transformations work by installing effect handlers that intercept these operations. Each transformation uses the intercepted operations differently:\n\n- **Reverse-mode AD** records operations on a tape during the forward pass, then propagates gradients backward.\n- **Forward-mode AD** propagates tangent vectors alongside primal values during a single forward pass.\n- **vmap** unbatches inputs, runs the function on slices, and rebatches outputs.\n- **debug** prints each operation and its arguments.\n\n## Effect-Based Architecture\n\nEvery Nx tensor operation raises an effect. For example, `Nx.add` raises an `E_add` effect, `Nx.mul` raises `E_mul`, and so on. Each effect carries the input tensors and an output buffer.\n\n```\nUser code: Nx.add x y\n     │\n     ├─ No handler installed → C backend executes directly\n     │\n     └─ Handler installed (e.g., by Rune.grad) → handler intercepts,\n        records the operation, then continues execution\n```\n\nThis design has a key property: **user code does not change**. You write functions using `Nx.add`, `Nx.mul`, `Nx.sin`, etc. and Rune transforms them by handling their effects differently. There is no special tensor type, no computation graph builder, and no tracing step.\n\n## Reverse-Mode AD (grad)\n\nWhen you call `Rune.grad f x`, Rune:\n\n1. **Installs an effect handler** that intercepts every Nx operation.\n2. **Runs `f x` under that handler**. 
As each operation executes, the handler records it on a tape (a list of operations with their inputs and outputs).\n3. **Seeds the output** with a cotangent of 1.0 (since `f` returns a scalar).\n4. **Walks the tape backward**, computing the gradient contribution of each operation using the chain rule.\n\nThe backward rules are the standard VJP (vector-Jacobian product) rules. For example:\n- `add`: gradients flow through to both inputs unchanged\n- `mul`: gradient of `a * b` w.r.t. `a` is `grad_out * b`\n- `sin`: gradient is `grad_out * cos(x)`\n\nBecause the tape is walked as the continuation stack unwinds, this happens automatically — there is no separate \"backward pass\" function to call.\n\n### Higher-order derivatives\n\nSince `grad f` returns a regular OCaml function, calling `grad (grad f)` works naturally: the outer `grad` installs a handler, and when the inner `grad` runs its forward-backward pass, those operations are themselves intercepted and recorded by the outer handler.\n\n## Forward-Mode AD (jvp)\n\nForward-mode AD is simpler than reverse-mode. When you call `Rune.jvp f x v`:\n\n1. **Installs an effect handler** that maintains a tangent value alongside each tensor.\n2. **Seeds the input** `x` with tangent `v`.\n3. **Runs `f x`**. At each operation, the handler computes both the primal result and the tangent using the JVP rule for that operation.\n\nFor example, for `z = x * y`:\n- Primal: `z = x * y`\n- Tangent: `dz = dx * y + x * dy`\n\nThe result is `(f x, J_f(x) · v)` — the function value and the directional derivative in direction `v`.\n\n## vmap\n\nWhen you call `Rune.vmap f x`:\n\n1. **Determines the batch size** from the mapped axis of `x`.\n2. **Slices the input** along the batch axis.\n3. **Runs `f` on each slice**, intercepting effects to track which operations happen.\n4. 
**Stacks the outputs** along the specified output axis.\n\nThe handler ensures that operations inside `f` see unbatched tensors, while the overall result is properly batched.\n\n## Composability\n\nBecause each transformation is just an effect handler, they compose naturally:\n\n- `grad (grad f)` — nested handlers for higher-order derivatives\n- `vmap (grad f)` — per-example gradients\n- `debug (grad f)` — trace the backward pass\n\nThe OCaml effect system handles the nesting: each handler only intercepts unhandled effects, and re-raises operations it doesn't care about to the next handler in the stack.\n\n## Implications for Users\n\n**No graph construction step.** Unlike frameworks that build a computation graph and then execute it, Rune runs eagerly. Every operation happens immediately, and transformations work by intercepting these operations as they execute.\n\n**Control flow works naturally.** Because Rune transforms ordinary OCaml functions, `if`, `for`, `while`, `match`, recursion, and higher-order functions all work as expected. There is no restriction to a \"graph-compatible\" subset of the language.\n\n**Side effects in differentiated functions.** Printing, logging, and other side effects inside a function passed to `grad` will execute during the forward pass. The backward gradient propagation does not re-execute the function — it uses the recorded tape.\n\n**Performance.** The effect handler adds overhead per-operation compared to raw Nx calls. For typical ML workloads where operations are large (e.g., matrix multiplications), this overhead is negligible. For workloads with many small operations, the overhead may be more noticeable.\n"
  },
  {
    "path": "packages/rune/doc/04-jax-comparison.md",
    "content": "# Rune vs. JAX -- A Practical Comparison\n\nThis guide explains how Rune's functional transformations relate to [JAX](https://jax.readthedocs.io/), focusing on:\n\n* How core concepts map (grad, vjp, jvp, vmap)\n* Where the APIs feel similar vs. deliberately different\n* How to translate common JAX patterns into Rune\n\nIf you already use JAX, this should be enough to become productive in Rune quickly.\n\n---\n\n## 1. Big-Picture Differences\n\n| Aspect            | JAX (Python)                                              | Rune (OCaml)                                          |\n| ----------------- | --------------------------------------------------------- | ----------------------------------------------------- |\n| Language          | Dynamic, interpreted                                      | Statically typed, compiled                            |\n| Array type        | `jax.Array`                                               | `Nx.t` (no separate Rune tensor type)                 |\n| Array library     | `jax.numpy`                                               | Nx                                                    |\n| AD mechanism      | Tracing + XLA compilation                                 | OCaml 5 effect handlers                               |\n| Reverse-mode AD   | `jax.grad`, `jax.value_and_grad`                          | `grad`, `value_and_grad`, `grads`, `value_and_grads`  |\n| Forward-mode AD   | `jax.jvp`                                                 | `jvp`, `jvps`                                         |\n| VJP               | `jax.vjp`                                                 | `vjp`, `vjps`                                         |\n| Vectorising map   | `jax.vmap`                                                | `vmap`, `vmaps`                                       |\n| JIT compilation   | `jax.jit`                                                 | Not yet implemented                                   |\n| 
Device placement  | `jax.device_put`, device kwarg                            | Not yet implemented                                   |\n| Gradient stopping | `jax.lax.stop_gradient`                                   | `no_grad`, `detach`                                   |\n| Gradient checking | `jax.test_util.check_grads`                               | `check_gradient`, `check_gradients`                   |\n| Debugging         | `jax.debug.print`                                         | `debug`                                               |\n| Control flow      | Restricted inside `jit` (requires `lax.cond`, `lax.scan`) | Full OCaml control flow (if, match, loops, recursion) |\n| Mutability        | Immutable arrays; functional updates                      | Immutable Nx tensors; same model                      |\n\n**Key things to know:**\n- Rune operates on `Nx.t` directly. There is no separate tensor type, no `rune.numpy`, and no tracing step.\n- Because Rune uses effect handlers rather than tracing, ordinary OCaml control flow works inside differentiated functions. No need for `lax.cond` or `lax.scan`.\n- JIT compilation and device/GPU placement do not exist yet. All computation runs eagerly on CPU via the Nx C backend.\n\n---\n\n## 2. Reverse-Mode AD (grad)\n\n### 2.1 Basic gradient\n\n**JAX**\n\n```python\nimport jax\nimport jax.numpy as jnp\n\ndef f(x):\n    return jnp.sum(x ** 2)\n\ngrad_f = jax.grad(f)\nx = jnp.array([1.0, 2.0, 3.0])\nprint(grad_f(x))  # [2. 4. 6.]\n```\n\n**Rune**\n\n<!-- $MDX skip -->\n```ocaml\nopen Nx\nopen Rune\n\nlet () =\n  let f x = sum (mul x x) in\n  let grad_f = grad f in\n  let x = create Float32 [|3|] [|1.; 2.; 3.|] in\n  print_data (grad_f x)\n  (* [2. 4. 6.] *)\n```\n\nBoth `jax.grad` and `Rune.grad` take a function and return a new function that computes the gradient. 
The input function must return a scalar.\n\n### 2.2 Value and gradient\n\n**JAX**\n\n```python\nloss, grads = jax.value_and_grad(loss_fn)(params)\n```\n\n**Rune**\n\n<!-- $MDX skip -->\n```ocaml\nlet loss, gradient = value_and_grad loss_fn params\n```\n\nBoth avoid computing the forward pass twice.\n\n### 2.3 Multiple inputs\n\n**JAX**\n\n```python\ndef f(x, y):\n    return jnp.sum(x ** 2 + y ** 2)\n\n# argnums selects which arguments to differentiate\ndx, dy = jax.grad(f, argnums=(0, 1))(x, y)\n```\n\n**Rune**\n\n<!-- $MDX skip -->\n```ocaml\nopen Nx\nopen Rune\n\nlet () =\n  let f inputs =\n    match inputs with\n    | [x; y] -> sum (add (mul x x) (mul y y))\n    | _ -> assert false\n  in\n  let gs = grads f [scalar Float32 3.0; scalar Float32 4.0] in\n  List.iter (fun g -> Printf.printf \"%.1f \" (item [] g)) gs\n  (* 6.0 8.0 *)\n```\n\nJAX uses `argnums` to select which positional arguments to differentiate. Rune takes a function of `Nx.t list` and differentiates with respect to all inputs. `value_and_grads` combines both:\n\n<!-- $MDX skip -->\n```ocaml\nlet loss, gradients = value_and_grads loss_fn [w; b]\n```\n\n### 2.4 Auxiliary outputs\n\n**JAX**\n\n```python\ndef f(x):\n    pred = model(x)\n    loss = compute_loss(pred)\n    return loss, pred  # pred is auxiliary\n\n(loss, pred), grads = jax.value_and_grad(f, has_aux=True)(x)\n```\n\n**Rune**\n\n<!-- $MDX skip -->\n```ocaml\nlet f x =\n  let pred = forward_pass x in\n  let loss = compute_loss pred in\n  (loss, pred)  (* pred is auxiliary -- not differentiated *)\n\nlet loss, gradient, pred = value_and_grad_aux f x\n```\n\nJAX uses a `has_aux=True` flag. 
Rune has dedicated `_aux` variants: `value_and_grad_aux` and `value_and_grads_aux`.\n\n### 2.5 Higher-order derivatives\n\n**JAX**\n\n```python\nf = lambda x: x ** 4\nf_prime = jax.grad(f)\nf_double_prime = jax.grad(f_prime)\n```\n\n**Rune**\n\n<!-- $MDX skip -->\n```ocaml\nopen Nx\nopen Rune\n\nlet () =\n  let f x = mul x (mul x (mul x x)) in\n  let f' = grad f in\n  let f'' = grad f' in\n  let f''' = grad f'' in\n  let x = scalar Float32 2.0 in\n  Printf.printf \"f(2)    = %.1f\\n\" (item [] (f x));\n  Printf.printf \"f'(2)   = %.1f\\n\" (item [] (f' x));\n  Printf.printf \"f''(2)  = %.1f\\n\" (item [] (f'' x));\n  Printf.printf \"f'''(2) = %.1f\\n\" (item [] (f''' x))\n```\n\nBoth compose naturally because `grad` returns an ordinary function.\n\n---\n\n## 3. VJP (Vector-Jacobian Product)\n\n**JAX**\n\n```python\ndef f(x):\n    return x ** 2\n\nprimals, vjp_fn = jax.vjp(f, x)\ngrads = vjp_fn(v)\n```\n\n**Rune**\n\n<!-- $MDX skip -->\n```ocaml\nopen Nx\nopen Rune\n\nlet () =\n  let f x = mul x x in\n  let x = create Float32 [|3|] [|1.; 2.; 3.|] in\n  let v = ones Float32 [|3|] in\n  let y, g = vjp f x v in\n  print_data y;  (* [1. 4. 9.] *)\n  print_data g   (* [2. 4. 6.] *)\n```\n\nIn JAX, `jax.vjp` returns a closure `vjp_fn` that you call with the cotangent. In Rune, `vjp f x v` takes the cotangent `v` directly and returns `(y, g)` in one call.\n\nFor multiple inputs, JAX still uses positional arguments while Rune uses `vjps` with a list:\n\n<!-- $MDX skip -->\n```ocaml\nlet y, gs = vjps f [x1; x2] v\n```\n\n---\n\n## 4. Forward-Mode AD (JVP)\n\n**JAX**\n\n```python\ndef f(x):\n    return x ** 2\n\nprimals, tangents = jax.jvp(f, (x,), (v,))\n```\n\n**Rune**\n\n<!-- $MDX skip -->\n```ocaml\nopen Nx\nopen Rune\n\nlet () =\n  let f x = mul x x in\n  let x = create Float32 [|3|] [|1.; 2.; 3.|] in\n  let v = ones Float32 [|3|] in\n  let y, tangent = jvp f x v in\n  print_data y;       (* [1. 4. 9.] -- primal *)\n  print_data tangent  (* [2. 4. 6.] 
-- directional derivative *)\n```\n\nThe API shape is nearly identical. JAX takes tuples of primals and tangents; Rune takes them as separate arguments.\n\nFor multiple inputs:\n\n<!-- $MDX skip -->\n```ocaml\nlet y, tangent = jvps f [x1; x2] [v1; v2]\n```\n\n`jvp_aux` carries auxiliary outputs:\n\n<!-- $MDX skip -->\n```ocaml\nlet y, tangent, aux = jvp_aux f x v\n```\n\n---\n\n## 5. Stopping Gradients\n\n**JAX**\n\n```python\nimport jax.lax\n\ndef f(x):\n    baseline = jax.lax.stop_gradient(running_mean)\n    return loss(x) - baseline\n```\n\n**Rune**\n\nThere are two options:\n\n<!-- $MDX skip -->\n```ocaml\n(* Option 1: detach a single tensor *)\nlet baseline = detach running_mean\n\n(* Option 2: block an entire computation *)\nlet baseline = no_grad (fun () ->\n  mean predictions\n)\n```\n\nJAX has a single `stop_gradient` that operates on arrays. Rune offers two mechanisms:\n- `detach x` returns a copy of `x` that is treated as a constant during differentiation. Closest to `jax.lax.stop_gradient`.\n- `no_grad f` runs `f ()` without recording any operations. Useful when a whole sub-computation should be excluded.\n\n---\n\n## 6. 
Vectorising Map (vmap)\n\n### 6.1 Basic usage\n\n**JAX**\n\n```python\ndef f(x):\n    return jnp.sum(x ** 2)\n\nf_batched = jax.vmap(f)\nbatch = jnp.ones((10, 5))\nresults = f_batched(batch)  # shape (10,)\n```\n\n**Rune**\n\n<!-- $MDX skip -->\n```ocaml\nlet f x = sum (mul x x) in\nlet f_batched = vmap f in\nlet batch = ones Float32 [|10; 5|] in\nlet results = f_batched batch\n(* results has shape [|10|] *)\n```\n\nBoth map over axis 0 by default and stack outputs on axis 0.\n\n### 6.2 Axis specifications\n\n**JAX**\n\n```python\n# Map over axis 1\njax.vmap(f, in_axes=1)\n\n# Don't map an input (broadcast it)\njax.vmap(f, in_axes=(0, None))\n```\n\n**Rune**\n\n<!-- $MDX skip -->\n```ocaml\n(* Map over axis 1 *)\nlet f_axis1 = vmap ~in_axes:(Single (Map 1)) f\n\n(* Don't map an input (broadcast it) *)\nlet f_shared =\n  vmaps\n    ~in_axes:[Map 0; NoMap]\n    f_multi\n```\n\nJAX uses `None` to indicate a non-mapped input and integers for mapped axes. Rune uses `Map n` and `NoMap` constructors. For single-input functions, wrap in `Single`; for multi-input, use `vmaps` with a list.\n\nOutput axis control:\n\n<!-- $MDX skip -->\n```ocaml\n(* Stack outputs along axis 1 instead of 0 *)\nlet f' = vmap ~out_axes:(OutSingle (Some 1)) f\n\n(* Discard the batch dimension (e.g., for reductions) *)\nlet f' = vmap ~out_axes:(OutSingle None) f\n```\n\n### 6.3 Composing vmap with grad\n\n**JAX**\n\n```python\n# Per-example gradients\nper_example_grad = jax.vmap(jax.grad(loss_fn))\n```\n\n**Rune**\n\n<!-- $MDX skip -->\n```ocaml\nlet per_example_grad = vmap (grad loss_fn)\n```\n\nBoth compose naturally. This gives per-example gradients without writing batch loops.\n\n---\n\n## 7. 
Gradient Checking\n\n**JAX**\n\n```python\nfrom jax import test_util as jtu\n\njtu.check_grads(f, (x,), order=1)\n```\n\n**Rune**\n\n<!-- $MDX skip -->\n```ocaml\nmatch check_gradient ~verbose:true f x with\n| `Pass result -> Printf.printf \"max error: %e\\n\" result.max_abs_error\n| `Fail result ->\n  Printf.printf \"%d of %d elements failed\\n\"\n    result.num_failed result.num_checked\n```\n\nRune provides more detailed results. The `gradient_check_result` record includes:\n- `max_abs_error`, `max_rel_error`, `mean_abs_error`, `mean_rel_error`\n- `failed_indices` with per-element `(index, autodiff_value, finite_diff_value, abs_error)`\n- `passed`, `num_checked`, `num_failed`\n\nAdditional utilities:\n- `finite_diff f x` -- approximate gradient via finite differences\n- `finite_diff_jacobian f x` -- approximate Jacobian for non-scalar outputs\n- `check_gradients f xs` -- check a multi-input function\n\nYou can control the finite-difference method:\n\n<!-- $MDX skip -->\n```ocaml\nlet fd = finite_diff ~method_:`Forward ~eps:1e-5 f x\n```\n\nAvailable methods: `` `Central `` (default), `` `Forward ``, `` `Backward ``.\n\n---\n\n## 8. Debugging\n\n**JAX**\n\n```python\ndef f(x):\n    y = x ** 2\n    jax.debug.print(\"y = {}\", y)\n    return y\n\nf(jnp.array(3.0))\n```\n\n**Rune**\n\n<!-- $MDX skip -->\n```ocaml\nlet f x = add (mul x x) (sin x) in\nlet x = scalar Float32 2.0 in\nlet _ = debug f x in\n()\n(* Prints each operation, its inputs, and its output *)\n```\n\nJAX's `debug.print` is a targeted print inside traced code. Rune's `debug` wraps an entire function and traces every tensor operation, printing the operation name, inputs, and output. It is more coarse-grained but requires no instrumentation inside the function.\n\n---\n\n## 9. 
Control Flow\n\nThis is a fundamental difference.\n\n**JAX**\n\nInside `jit`-compiled functions, Python control flow does not work because JAX traces the function:\n\n```python\n# Breaks under jit:\n@jax.jit\ndef f(x):\n    if x > 0:  # Error: traced value used in Python conditional\n        return x\n    else:\n        return -x\n\n# Must use JAX primitives:\n@jax.jit\ndef f(x):\n    return jax.lax.cond(x > 0, lambda: x, lambda: -x)\n```\n\n**Rune**\n\nOCaml control flow works everywhere, including inside `grad`, `jvp`, and `vmap`:\n\n<!-- $MDX skip -->\n```ocaml\nlet f x =\n  if item [] x > 0.0 then x\n  else neg x\n\n(* Works fine *)\nlet df = grad f\n```\n\nRune does not trace functions into a graph. It intercepts operations as they execute via effect handlers, so any OCaml expression is valid. No special `cond`, `scan`, or `while_loop` primitives are needed.\n\n---\n\n## 10. What Rune Does Not Have (Yet)\n\n| JAX feature                                  | Status in Rune                                      |\n| -------------------------------------------- | --------------------------------------------------- |\n| `jax.jit`                                    | Not implemented. All operations execute eagerly.    |\n| Device placement (`jax.device_put`, GPU/TPU) | Not implemented. All computation runs on CPU.       |\n| `jax.pmap` / distributed                     | Not implemented.                                    |\n| `jax.lax.scan`, `jax.lax.while_loop`         | Not needed. Use ordinary OCaml loops and recursion. |\n| `jax.custom_vjp`, `jax.custom_jvp`           | Not yet exposed.                                    |\n| `jax.checkpoint` (gradient checkpointing)    | Not implemented.                                    |\n| Pytrees / tree utilities                     | Not needed. Use OCaml data structures directly.     |\n| `jax.random` (splittable PRNG)               | Use `Nx.rand`, `Nx.randn` directly.                 |\n\n---\n\n## 11. 
Quick Cheat Sheet\n\n| Task                  | JAX                                      | Rune                              |\n| --------------------- | ---------------------------------------- | --------------------------------- |\n| Gradient of scalar fn | `jax.grad(f)(x)`                         | `grad f x`                        |\n| Value + gradient      | `jax.value_and_grad(f)(x)`               | `value_and_grad f x`              |\n| Multi-input gradient  | `jax.grad(f, argnums=(0,1))(x, y)`       | `grads f [x; y]`                  |\n| Auxiliary output      | `jax.value_and_grad(f, has_aux=True)(x)` | `value_and_grad_aux f x`          |\n| Higher-order deriv    | `jax.grad(jax.grad(f))`                  | `grad (grad f)`                   |\n| VJP                   | `primals, fn = jax.vjp(f, x); fn(v)`     | `vjp f x v`                       |\n| JVP                   | `jax.jvp(f, (x,), (v,))`                 | `jvp f x v`                       |\n| Stop gradient         | `jax.lax.stop_gradient(x)`               | `detach x`                        |\n| Block region from AD  | (no direct equivalent)                   | `no_grad (fun () -> ...)`         |\n| Batch map             | `jax.vmap(f)(batch)`                     | `vmap f batch`                    |\n| vmap axis control     | `jax.vmap(f, in_axes=(0, None))`         | `vmaps ~in_axes:[Map 0; NoMap] f` |\n| Per-example grad      | `jax.vmap(jax.grad(f))`                  | `vmap (grad f)`                   |\n| Gradient check        | `jtu.check_grads(f, (x,), 1)`            | `check_gradient f x`              |\n| Finite differences    | (manual)                                 | `finite_diff f x`                 |\n| Debug tracing         | `jax.debug.print(...)`                   | `debug f x`                       |\n| JIT compilation       | `jax.jit(f)`                             | Not yet available                 |\n| GPU placement         | `jax.device_put(x, gpu)`                 | 
Not yet available                 |\n"
  },
  {
    "path": "packages/rune/doc/dune",
    "content": "(mdx\n (files *.md)\n (package rune)\n (libraries rune nx))\n"
  },
  {
    "path": "packages/rune/doc/index.md",
    "content": "# rune\n\nRune provides functional transformations for Nx tensors: automatic differentiation (forward and reverse mode), vectorising maps, and gradient checking. It operates on `Nx.t` values directly — no special tensor type is needed.\n\n## Features\n\n- **Reverse-mode AD** — `grad`, `value_and_grad`, `vjp` for backpropagation\n- **Forward-mode AD** — `jvp` for Jacobian-vector products\n- **Vectorising map** — `vmap` to lift per-example functions to batched operations\n- **Gradient checking** — `check_gradient` and `finite_diff` for testing\n- **Composable** — nest transformations freely (`grad (grad f)`, `vmap (grad f)`)\n- **Effect-based** — uses OCaml 5 effects to intercept Nx operations cleanly\n\n## Quick Start\n\n```ocaml\nopen Nx\nopen Rune\n\n(* Define a function using Nx operations *)\nlet f x = add (mul x x) (sin x)\n\n(* Compute its gradient *)\nlet f' = grad f\n\nlet () =\n  let x = scalar float32 2.0 in\n  Printf.printf \"f(2)  = %.4f\\n\" (item [] (f x));\n  Printf.printf \"f'(2) = %.4f\\n\" (item [] (f' x))\n```\n\n## Next Steps\n\n- [Getting Started](/docs/rune/getting-started/) — installation and first gradients\n- [Transformations](/docs/rune/transformations/) — complete guide to grad, jvp, vmap, and more\n- [How It Works](/docs/rune/how-it-works/) — effects-based automatic differentiation explained\n"
  },
  {
    "path": "packages/rune/examples/01-mlp/README.md",
    "content": "# MLP\n\nTrain a multi-layer perceptron from scratch using Rune's automatic differentiation. Computes MSE loss, derives gradients with `Rune.grad`, and updates parameters in a manual training loop.\n"
  },
  {
    "path": "packages/rune/examples/01-mlp/dune",
    "content": "(executable\n (name main)\n (libraries nx rune))\n"
  },
  {
    "path": "packages/rune/examples/01-mlp/main.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Nx\nopen Rune\n\n(* Forward pass: computes the MLP output *)\nlet forward params inputs =\n  match params with\n  | [ w1; b1; w2; b2 ] ->\n      (* Input layer to hidden layer *)\n      let z1 = add (matmul inputs w1) b1 in\n      (* Hidden layer activation *)\n      let a1 = maximum (scalar Float32 0.0) z1 in\n      (* Hidden layer to output layer *)\n      let z2 = add (matmul a1 w2) b2 in\n      (* Output layer *)\n      z2\n  | _ -> failwith \"Invalid parameters\"\n\n(* Mean Squared Error loss *)\nlet mse_loss y_pred y_true =\n  let diff = sub y_pred y_true in\n  let squared_diff = mul diff diff in\n  mean squared_diff\n\n(* Training function *)\nlet train_mlp inputs y_true learning_rate epochs =\n  (* Initialize MLP parameters *)\n  let d = dim 1 inputs in\n  (* Number of input features *)\n  let h = 3 in\n  (* Hidden layer size *)\n  let c = dim 1 y_true in\n  (* Number of outputs *)\n  let w1 = rand Float32 [| d; h |] in\n  let b1 = zeros Float32 [| h |] in\n  let w2 = rand Float32 [| h; c |] in\n  let b2 = zeros Float32 [| c |] in\n  let params = [ w1; b1; w2; b2 ] in\n\n  (* Define the loss as a function of parameters *)\n  let loss_fn params =\n    let y_pred = forward params inputs in\n    mse_loss y_pred y_true\n  in\n\n  (* Training loop *)\n  for epoch = 1 to epochs do\n    (* Compute gradients using the provided grad function *)\n    let loss, grad_params = value_and_grads loss_fn params in\n\n    Printf.printf \"Epoch %d: Loss = %f\\n\" epoch (item [] loss);\n\n    List.combine params grad_params\n    |> List.iter (fun (param, grad) ->\n        blit (sub param (mul (scalar Float32 learning_rate) grad)) param)\n  done;\n  params\n\n(* Example usage *)\nlet () =\n  (* Dummy input 
data: 4 samples with 2 features *)\n  let inputs =\n    create Float32 [| 4; 2 |] [| 1.0; 2.0; 3.0; 4.0; 5.0; 6.0; 7.0; 8.0 |]\n  in\n  (* Dummy target data: 4 samples with 1 output *)\n  let y_true = create Float32 [| 4; 1 |] [| 1.0; 2.0; 3.0; 4.0 |] in\n  let learning_rate = 0.01 in\n  let epochs = 100 in\n\n  (* Train the MLP *)\n  let trained_params = train_mlp inputs y_true learning_rate epochs in\n\n  (* Make predictions with trained parameters *)\n  let y_pred = forward trained_params inputs in\n  print_endline \"Predictions after training:\";\n  print y_pred\n"
  },
  {
    "path": "packages/rune/examples/xx-higher-derivative/README.md",
    "content": "# Higher-Order Derivatives\n\nCompute first, second, and third-order derivatives by nesting `Rune.grad` calls, demonstrating Rune's support for higher-order automatic differentiation.\n"
  },
  {
    "path": "packages/rune/examples/xx-higher-derivative/dune",
    "content": "(executable\n (name main)\n (libraries nx rune))\n"
  },
  {
    "path": "packages/rune/examples/xx-higher-derivative/main.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Nx\nopen Rune\n\n(* Example *)\nlet () =\n  let f x = mul x (mul x (mul x x)) in\n  let x = create Float32 [| 2; 2 |] [| 1.; 2.; 3.; 4. |] in\n  let result = f x in\n  Printf.printf \"Result: %s\\n\" (to_string result);\n\n  (* eager *)\n  let result = add x x in\n  Printf.printf \"Eager result: %s\\n\" (to_string result);\n\n  (* gradient *)\n  let gradient = (grad f) x in\n  Printf.printf \"First order derivative: %s\\n\" (to_string gradient);\n\n  let gradient = (grad (grad f)) x in\n  Printf.printf \"Second order derivative: %s\\n\" (to_string gradient);\n\n  let gradient = (grad (grad (grad f))) x in\n  Printf.printf \"Third order derivative: %s\\n\" (to_string gradient)\n"
  },
  {
    "path": "packages/rune/lib/autodiff.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Shared infrastructure for automatic differentiation. Contains the\n   identity-based hash table, derivative rules, and the autodiff-enabled flag\n   used by both JVP (forward-mode) and VJP (reverse-mode) handlers. *)\n\nopen Nx_core\nmodule T = Nx\n\n(* Physical identity table *)\n\nmodule Physical_tbl = struct\n  module Tbl = Hashtbl.Make (struct\n    type t = Obj.t\n\n    let equal = ( == )\n    let hash obj = Hashtbl.hash obj\n  end)\n\n  type ('k, 'v) t = 'v Tbl.t\n\n  let create n = Tbl.create n\n  let find t key = Tbl.find_opt t (Obj.repr key)\n  let add t key value = Tbl.replace t (Obj.repr key) value\nend\n\n(* Autodiff gate *)\n\nlet autodiff_enabled = ref true\n\nlet without_autodiff f =\n  let prev = !autodiff_enabled in\n  autodiff_enabled := false;\n  Fun.protect f ~finally:(fun () -> autodiff_enabled := prev)\n\n(* Derivative rules *)\n\nlet ln2 = 0.693147180559945309417\nlet two_over_sqrt_pi = 1.12837916709551257390\n\nlet float_scalar_like (type a b) (x : (a, b) T.t) (v : float) : (a, b) T.t =\n  T.full (T.dtype x) [||] (Dtype.of_float (T.dtype x) v)\n\n(* d/dx sin(x) = cos(x) *)\nlet deriv_sin (type a b) (x : (a, b) T.t) : (a, b) T.t = T.cos x\n\n(* d/dx sqrt(x) = 1 / (2 * sqrt(x)) *)\nlet deriv_sqrt (type a b) (sqrt_x : (a, b) T.t) : (a, b) T.t =\n  T.div (T.ones_like sqrt_x) (T.mul (float_scalar_like sqrt_x 2.0) sqrt_x)\n\n(* d/dx (1/x) = -1/x^2 *)\nlet deriv_recip (type a b) (x : (a, b) T.t) : (a, b) T.t =\n  T.neg (T.recip (T.mul x x))\n\n(* d/dx tan(x) = 1/cos^2(x) *)\nlet deriv_tan (type a b) (x : (a, b) T.t) : (a, b) T.t =\n  let cos_x = T.cos x in\n  T.recip (T.mul cos_x cos_x)\n\n(* d/dx asin(x) = 1/sqrt(1 - x^2) *)\nlet deriv_asin (type a b) (x : (a, b) T.t) : (a, b) 
T.t =\n  let one = T.ones_like x in\n  T.recip (T.sqrt (T.sub one (T.mul x x)))\n\n(* d/dx acos(x) = -1/sqrt(1 - x^2) *)\nlet deriv_acos (type a b) (x : (a, b) T.t) : (a, b) T.t = T.neg (deriv_asin x)\n\n(* d/dx atan(x) = 1/(1 + x^2) *)\nlet deriv_atan (type a b) (x : (a, b) T.t) : (a, b) T.t =\n  let one = T.ones_like x in\n  T.recip (T.add one (T.mul x x))\n\n(* d/dx erf(x) = (2/sqrt(pi)) * exp(-x^2) *)\nlet deriv_erf (type a b) (x : (a, b) T.t) : (a, b) T.t =\n  let coeff = float_scalar_like x two_over_sqrt_pi in\n  T.mul coeff (T.exp (T.neg (T.mul x x)))\n\n(* d/da (a^b) = b * a^(b-1) *)\nlet deriv_pow_wrt_base (type a b) (base : (a, b) T.t) (exp : (a, b) T.t) :\n    (a, b) T.t =\n  T.mul exp (T.pow base (T.sub exp (T.ones_like exp)))\n\n(* d/db (a^b) = a^b * ln(a) = a^b * log2(a) * ln(2) *)\nlet deriv_pow_wrt_exp (type a b) (base : (a, b) T.t) (result : (a, b) T.t) :\n    (a, b) T.t =\n  let ln_base = T.mul (T.log2 base) (float_scalar_like base ln2) in\n  T.mul result ln_base\n\n(* Custom differentiation effects *)\n\ntype _ Effect.t +=\n  | E_ad_mode_query : [ `VJP | `JVP ] Effect.t\n  | E_custom_vjp : {\n      cv_fwd : unit -> Obj.t;\n      cv_bwd : (Obj.t -> Obj.t) -> (Obj.t -> Obj.t -> unit) -> unit;\n    }\n      -> Obj.t Effect.t\n  | E_custom_jvp : {\n      cj_jvp : (Obj.t -> Obj.t) -> Obj.t * Obj.t;\n    }\n      -> Obj.t Effect.t\n\nlet query_ad_mode () =\n  try Some (Effect.perform E_ad_mode_query) with Effect.Unhandled _ -> None\n\n(* Reduce gradient to match source shape (for broadcasting). 
*)\nlet unbroadcast_grad (type a b) (g : (a, b) T.t) (src_shape : int array) :\n    (a, b) T.t =\n  let dst_shape = T.shape g in\n  if src_shape = dst_shape then g\n  else\n    let src_rank = Array.length src_shape in\n    let dst_rank = Array.length dst_shape in\n    let axes = ref [] in\n    for i = 0 to dst_rank - src_rank - 1 do\n      axes := i :: !axes\n    done;\n    for i = 0 to src_rank - 1 do\n      if src_shape.(i) = 1 && dst_shape.(i + (dst_rank - src_rank)) > 1 then\n        axes := (i + (dst_rank - src_rank)) :: !axes\n    done;\n    match !axes with\n    | [] -> g\n    | ax ->\n        let summed = T.sum g ~axes:ax ~keepdims:true in\n        if T.shape summed <> src_shape then T.reshape src_shape summed\n        else summed\n"
  },
  {
    "path": "packages/rune/lib/custom_diff.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Custom differentiation rules. Provides combinators for overriding automatic\n   differentiation with user-supplied forward/backward (VJP) or forward/tangent\n   (JVP) rules. *)\n\nopen Nx_core\nmodule T = Nx\n\nlet custom_vjp (type a b c d res) ~(fwd : (a, b) T.t -> (c, d) T.t * res)\n    ~(bwd : res -> (c, d) T.t -> (a, b) T.t) (x : (a, b) T.t) : (c, d) T.t =\n  match Autodiff.query_ad_mode () with\n  | Some `JVP | None -> fst (fwd x)\n  | Some `VJP ->\n      let residuals = ref None in\n      let y_ref = ref (Obj.repr ()) in\n      let cv_fwd () =\n        let y, r = fwd x in\n        residuals := Some r;\n        y_ref := Obj.repr y;\n        Obj.repr y\n      in\n      let cv_bwd get_grad acc_grad =\n        let g : (c, d) T.t = Obj.obj (get_grad !y_ref) in\n        (* residuals is guaranteed Some here: cv_fwd runs to completion before\n           the VJP handler calls cv_bwd on the backward pass *)\n        let dx = bwd (Option.get !residuals) g in\n        acc_grad (Obj.repr x) (Obj.repr dx)\n      in\n      Obj.obj (Effect.perform (Autodiff.E_custom_vjp { cv_fwd; cv_bwd }))\n\nlet custom_vjps (type a b c d res) ~(fwd : (a, b) T.t list -> (c, d) T.t * res)\n    ~(bwd : res -> (c, d) T.t -> (a, b) T.t list) (xs : (a, b) T.t list) :\n    (c, d) T.t =\n  match Autodiff.query_ad_mode () with\n  | Some `JVP | None -> fst (fwd xs)\n  | Some `VJP ->\n      let residuals = ref None in\n      let y_ref = ref (Obj.repr ()) in\n      let cv_fwd () =\n        let y, r = fwd xs in\n        residuals := Some r;\n        y_ref := Obj.repr y;\n        Obj.repr y\n      in\n      let cv_bwd get_grad acc_grad =\n        let g : (c, d) T.t = Obj.obj (get_grad !y_ref) in\n        (* residuals is guaranteed 
Some here: cv_fwd runs to completion before\n           the VJP handler calls cv_bwd on the backward pass *)\n        let dxs = bwd (Option.get !residuals) g in\n        List.iter2 (fun x dx -> acc_grad (Obj.repr x) (Obj.repr dx)) xs dxs\n      in\n      Obj.obj (Effect.perform (Autodiff.E_custom_vjp { cv_fwd; cv_bwd }))\n\nlet custom_jvp (type a b c d) ~(fwd : (a, b) T.t -> (c, d) T.t)\n    ~(jvp_rule : (a, b) T.t -> (a, b) T.t -> (c, d) T.t * (c, d) T.t)\n    (x : (a, b) T.t) : (c, d) T.t =\n  match Autodiff.query_ad_mode () with\n  | Some `VJP | None -> fwd x\n  | Some `JVP ->\n      let cj_jvp get_tangent =\n        let tangent : (a, b) T.t = Obj.obj (get_tangent (Obj.repr x)) in\n        let y, t = jvp_rule x tangent in\n        (Obj.repr y, Obj.repr t)\n      in\n      Obj.obj (Effect.perform (Autodiff.E_custom_jvp { cj_jvp }))\n\nlet custom_jvps (type a b c d) ~(fwd : (a, b) T.t list -> (c, d) T.t)\n    ~(jvp_rule : (a, b) T.t list -> (a, b) T.t list -> (c, d) T.t * (c, d) T.t)\n    (xs : (a, b) T.t list) : (c, d) T.t =\n  match Autodiff.query_ad_mode () with\n  | Some `VJP | None -> fwd xs\n  | Some `JVP ->\n      let cj_jvp get_tangent =\n        let tangents =\n          List.map\n            (fun x -> (Obj.obj (get_tangent (Obj.repr x)) : (a, b) T.t))\n            xs\n        in\n        let y, t = jvp_rule xs tangents in\n        (Obj.repr y, Obj.repr t)\n      in\n      Obj.obj (Effect.perform (Autodiff.E_custom_jvp { cj_jvp }))\n"
  },
  {
    "path": "packages/rune/lib/debug.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Nx_effect\nmodule T = Nx\n\ntype tensor_ref = Tensor_ref : ('a, 'b) T.t -> tensor_ref\n\n(* ───── Debug Context Effects ───── *)\n\ntype _ Effect.t +=\n  | E_push_debug_context : string -> unit Effect.t\n  | E_pop_debug_context : unit Effect.t\n\ntype tensor_stats = {\n  mean : float;\n  std : float;\n  min_val : float;\n  max_val : float;\n  nan_count : int;\n}\n\nlet push_context name =\n  try Effect.perform (E_push_debug_context name) with Effect.Unhandled _ -> ()\n\nlet pop_context () =\n  try Effect.perform E_pop_debug_context with Effect.Unhandled _ -> ()\n\nlet with_context name f =\n  try\n    Effect.perform (E_push_debug_context name);\n    Fun.protect f ~finally:(fun () -> Effect.perform E_pop_debug_context)\n  with Effect.Unhandled _ -> f ()\n\nlet compute_stats (Tensor_ref t) =\n  try\n    let t_f32 = T.cast T.float32 t in\n    let mean = T.item [] (T.mean t_f32) in\n    let std = T.item [] (T.std t_f32) in\n    let min_val = T.item [] (T.min t_f32) in\n    let max_val = T.item [] (T.max t_f32) in\n    let is_nan = T.isnan t_f32 in\n    let nan_count =\n      int_of_float (T.item [] (T.sum (T.cast T.float32 is_nan)))\n    in\n    { mean; std; min_val; max_val; nan_count }\n  with _ ->\n    { mean = 0.0; std = 0.0; min_val = 0.0; max_val = 0.0; nan_count = 0 }\n\nlet get_debug_indent context_stack =\n  let depth = List.length context_stack in\n  if depth = 0 then \"├─ \"\n  else\n    let rec build_prefix n =\n      if n = 0 then \"\" else \"│  \" ^ build_prefix (n - 1)\n    in\n    build_prefix depth ^ \"├─ \"\n\nlet format_number f =\n  (* Handle negative zero *)\n  let f = if f = -0. then 0. 
else f in\n  (* Use 2 decimal precision for consistency *)\n  Printf.sprintf \"%.2f\" f\n\nlet dtype_to_string (type a b) (dtype : (a, b) Nx_core.Dtype.t) =\n  match dtype with\n  | Float32 -> \"f32\"\n  | Float64 -> \"f64\"\n  | Float16 -> \"f16\"\n  | Int32 -> \"i32\"\n  | Int64 -> \"i64\"\n  | UInt8 -> \"u8\"\n  | Int8 -> \"i8\"\n  | Int16 -> \"i16\"\n  | UInt16 -> \"u16\"\n  | UInt32 -> \"u32\"\n  | UInt64 -> \"u64\"\n  | Complex64 -> \"c64\"\n  | Complex128 -> \"c128\"\n  | BFloat16 -> \"bf16\"\n  | Bool -> \"bool\"\n  | Int4 -> \"i4\"\n  | UInt4 -> \"u4\"\n  | Float8_e4m3 -> \"f8e4m3\"\n  | Float8_e5m2 -> \"f8e5m2\"\n\nlet format_input_shapes input_tensors =\n  match input_tensors with\n  | [] -> \"\"\n  | tensors ->\n      tensors\n      |> List.map (function Tensor_ref t -> T.shape_to_string (T.shape t))\n      |> String.concat \",\"\n\nlet log_operation context_stack op_name input_tensors output_tensor =\n  let indent = get_debug_indent context_stack in\n\n  (* Check if we're in a gradient context *)\n  let in_grad_context =\n    List.exists\n      (fun ctx ->\n        String.length ctx > 0 && String.starts_with ~prefix:\"\\xE2\\x88\\x87\" ctx)\n      context_stack\n  in\n\n  (* Format input part with arrow *)\n  let input_part =\n    let input_str = format_input_shapes input_tensors in\n    if input_str = \"\" then \"→ \" else input_str ^ \" → \"\n  in\n\n  let shape_str, dtype_str =\n    match output_tensor with\n    | Tensor_ref t ->\n        let shape = T.shape_to_string (T.shape t) in\n        let dtype = dtype_to_string (T.dtype t) in\n        (shape, dtype)\n  in\n\n  (* Put dtype inside brackets for output *)\n  let output_shape_with_dtype =\n    if shape_str = \"[]\" then \"[\" ^ dtype_str ^ \"]\"\n    else\n      let shape_without_brackets =\n        String.sub shape_str 1 (String.length shape_str - 2)\n      in\n      \"[\" ^ shape_without_brackets ^ \" \" ^ dtype_str ^ \"]\"\n  in\n\n  let stats = compute_stats output_tensor in\n\n  (* Check if 
tensor is all zeros *)\n  let stats_str =\n    if\n      stats.mean = 0. && stats.std = 0. && stats.min_val = 0.\n      && stats.max_val = 0.\n    then Printf.sprintf \" zeros nans=%d\" stats.nan_count\n    else if\n      stats.mean = 1. && stats.std = 0. && stats.min_val = 1.\n      && stats.max_val = 1.\n    then Printf.sprintf \" ones nans=%d\" stats.nan_count\n    else\n      Printf.sprintf \" μ=%s σ=%s range=[%s,%s] nans=%d\"\n        (format_number stats.mean) (format_number stats.std)\n        (format_number stats.min_val)\n        (format_number stats.max_val)\n        stats.nan_count\n  in\n\n  (* Add memory usage *)\n  let memory_str =\n    match output_tensor with\n    | Tensor_ref t ->\n        let shape = T.shape t in\n        let num_elements = Array.fold_left ( * ) 1 shape in\n        let bytes_per_element =\n          match T.dtype t with\n          | Float32 | Int32 | UInt32 -> 4\n          | Float64 | Int64 | UInt64 | Complex64 -> 8\n          | Float16 | Int16 | UInt16 | BFloat16 -> 2\n          | UInt8 | Int8 | Float8_e4m3 | Float8_e5m2 | Bool -> 1\n          | Complex128 -> 16\n          | Int4 | UInt4 -> 1 (* 2 values packed per byte *)\n        in\n        let bytes = num_elements * bytes_per_element in\n        let memory_mb = float bytes /. (1024. *. 1024.) in\n        if memory_mb < 0.01 then Printf.sprintf \" %.3fMB\" memory_mb\n        else Printf.sprintf \" %.1fMB\" memory_mb\n  in\n\n  (* Add NaN warning *)\n  let nan_warning = if stats.nan_count > 0 then \" ⚠ NaN detected!\" else \"\" in\n\n  (* Check for exploding gradients in gradient operations *)\n  let grad_warning =\n    if in_grad_context then\n      (* This is a gradient operation *)\n      let max_abs =\n        Stdlib.max (abs_float stats.max_val) (abs_float stats.min_val)\n      in\n      if max_abs > 100. 
then \" ⚠ Exploding gradients!\" else \"\"\n    else \"\"\n  in\n\n  Printf.printf \"%s%s %s%s%s%s%s%s\\n%!\" indent op_name input_part\n    output_shape_with_dtype stats_str memory_str nan_warning grad_warning\n\nlet debug_handler () =\n  let context_stack = ref [] in\n  let open Effect.Deep in\n  {\n    retc = (fun x -> x);\n    exnc = raise;\n    effc =\n      (fun (type a) (eff : a Effect.t) ->\n        match eff with\n        | E_push_debug_context name ->\n            Some\n              (fun (k : (a, _) Effect.Deep.continuation) ->\n                let parent_indent = get_debug_indent !context_stack in\n                Printf.printf \"%s%s\\n%!\" parent_indent name;\n                context_stack := name :: !context_stack;\n                continue k ())\n        | E_pop_debug_context ->\n            Some\n              (fun (k : (a, _) Effect.Deep.continuation) ->\n                (match !context_stack with\n                | [] -> failwith \"Cannot pop from an empty context stack\"\n                | _ :: rest -> context_stack := rest);\n                continue k ())\n        | E_add { a; b } ->\n            Some\n              (fun (k : (a, _) Effect.Deep.continuation) ->\n                let out = add a b in\n                log_operation !context_stack \"add\"\n                  [ Tensor_ref a; Tensor_ref b ]\n                  (Tensor_ref out);\n                continue k out)\n        | E_sub { a; b } ->\n            Some\n              (fun (k : (a, _) Effect.Deep.continuation) ->\n                let out = sub a b in\n                log_operation !context_stack \"sub\"\n                  [ Tensor_ref a; Tensor_ref b ]\n                  (Tensor_ref out);\n                continue k out)\n        | E_mul { a; b } ->\n            Some\n              (fun (k : (a, _) Effect.Deep.continuation) ->\n                let out = mul a b in\n                log_operation !context_stack \"mul\"\n                  [ Tensor_ref a; Tensor_ref b ]\n               
   (Tensor_ref out);\n                continue k out)\n        | E_matmul { a; b } ->\n            Some\n              (fun (k : (a, _) Effect.Deep.continuation) ->\n                let out = matmul a b in\n                log_operation !context_stack \"matmul\"\n                  [ Tensor_ref a; Tensor_ref b ]\n                  (Tensor_ref out);\n                continue k out)\n        | E_neg { t_in } ->\n            Some\n              (fun (k : (a, _) Effect.Deep.continuation) ->\n                let out = neg t_in in\n                log_operation !context_stack \"neg\" [ Tensor_ref t_in ]\n                  (Tensor_ref out);\n                continue k out)\n        | E_reduce_sum { t_in; axes; keepdims } ->\n            Some\n              (fun (k : (a, _) Effect.Deep.continuation) ->\n                let out = reduce_sum ~axes ~keepdims t_in in\n                log_operation !context_stack \"sum\" [ Tensor_ref t_in ]\n                  (Tensor_ref out);\n                continue k out)\n        | E_reduce_max { t_in; axes; keepdims } ->\n            Some\n              (fun (k : (a, _) Effect.Deep.continuation) ->\n                let out = reduce_max ~axes ~keepdims t_in in\n                log_operation !context_stack \"max\" [ Tensor_ref t_in ]\n                  (Tensor_ref out);\n                continue k out)\n        | E_reduce_min { t_in; axes; keepdims } ->\n            Some\n              (fun (k : (a, _) Effect.Deep.continuation) ->\n                let out = reduce_min ~axes ~keepdims t_in in\n                log_operation !context_stack \"min\" [ Tensor_ref t_in ]\n                  (Tensor_ref out);\n                continue k out)\n        | E_reshape { t_in; new_shape } ->\n            Some\n              (fun (k : (a, _) Effect.Deep.continuation) ->\n                let result = reshape t_in new_shape in\n                log_operation !context_stack \"reshape\" [ Tensor_ref t_in ]\n                  (Tensor_ref result);\n                
continue k result)\n        | E_cast { t_in; target_dtype } ->\n            Some\n              (fun (k : (a, _) Effect.Deep.continuation) ->\n                let result = cast ~dtype:target_dtype t_in in\n                log_operation !context_stack \"cast\" [ Tensor_ref t_in ]\n                  (Tensor_ref result);\n                continue k result)\n        | E_sqrt { t_in } ->\n            Some\n              (fun (k : (a, _) Effect.Deep.continuation) ->\n                let out = sqrt t_in in\n                log_operation !context_stack \"sqrt\" [ Tensor_ref t_in ]\n                  (Tensor_ref out);\n                continue k out)\n        | E_sin { t_in } ->\n            Some\n              (fun (k : (a, _) Effect.Deep.continuation) ->\n                let out = sin t_in in\n                log_operation !context_stack \"sin\" [ Tensor_ref t_in ]\n                  (Tensor_ref out);\n                continue k out)\n        | E_fdiv { a; b } ->\n            Some\n              (fun (k : (a, _) Effect.Deep.continuation) ->\n                let out = div a b in\n                log_operation !context_stack \"div\"\n                  [ Tensor_ref a; Tensor_ref b ]\n                  (Tensor_ref out);\n                continue k out)\n        | E_pow { a; b } ->\n            Some\n              (fun (k : (a, _) Effect.Deep.continuation) ->\n                let out = pow a b in\n                log_operation !context_stack \"pow\"\n                  [ Tensor_ref a; Tensor_ref b ]\n                  (Tensor_ref out);\n                continue k out)\n        | E_max { a; b } ->\n            Some\n              (fun (k : (a, _) Effect.Deep.continuation) ->\n                let out = max a b in\n                log_operation !context_stack \"max\"\n                  [ Tensor_ref a; Tensor_ref b ]\n                  (Tensor_ref out);\n                continue k out)\n        | E_where { condition; if_true; if_false } ->\n            Some\n              (fun (k : 
(a, _) Effect.Deep.continuation) ->\n                let out = where condition if_true if_false in\n                log_operation !context_stack \"where\"\n                  [\n                    Tensor_ref condition;\n                    Tensor_ref if_true;\n                    Tensor_ref if_false;\n                  ]\n                  (Tensor_ref out);\n                continue k out)\n        | E_cat { t_list; axis } ->\n            Some\n              (fun (k : (a, _) Effect.Deep.continuation) ->\n                let result = cat t_list ~axis in\n                log_operation !context_stack \"cat\"\n                  (List.map (fun t -> Tensor_ref t) t_list)\n                  (Tensor_ref result);\n                continue k result)\n        | E_gather { data; indices; axis } ->\n            Some\n              (fun (k : (a, _) Effect.Deep.continuation) ->\n                let result = gather data indices ~axis in\n                log_operation !context_stack \"gather\"\n                  [ Tensor_ref data; Tensor_ref indices ]\n                  (Tensor_ref result);\n                continue k result)\n        | E_scatter { data_template; indices; updates; axis } ->\n            Some\n              (fun (k : (a, _) Effect.Deep.continuation) ->\n                let result = scatter data_template ~indices ~updates ~axis in\n                log_operation !context_stack \"scatter\"\n                  [\n                    Tensor_ref data_template;\n                    Tensor_ref indices;\n                    Tensor_ref updates;\n                  ]\n                  (Tensor_ref result);\n                continue k result)\n        | E_permute { t_in; axes } ->\n            Some\n              (fun (k : (a, _) Effect.Deep.continuation) ->\n                let result = permute t_in axes in\n                log_operation !context_stack \"permute\" [ Tensor_ref t_in ]\n                  (Tensor_ref result);\n                continue k result)\n        | E_expand 
{ t_in; new_target_shape } ->\n            Some\n              (fun (k : (a, _) Effect.Deep.continuation) ->\n                let result = expand t_in new_target_shape in\n                log_operation !context_stack \"expand\" [ Tensor_ref t_in ]\n                  (Tensor_ref result);\n                continue k result)\n        | E_pad { t_in; padding_config; fill_value } ->\n            Some\n              (fun (k : (a, _) Effect.Deep.continuation) ->\n                let result = pad t_in padding_config fill_value in\n                log_operation !context_stack \"pad\" [ Tensor_ref t_in ]\n                  (Tensor_ref result);\n                continue k result)\n        | E_shrink { t_in; limits } ->\n            Some\n              (fun (k : (a, _) Effect.Deep.continuation) ->\n                let result = shrink t_in limits in\n                log_operation !context_stack \"shrink\" [ Tensor_ref t_in ]\n                  (Tensor_ref result);\n                continue k result)\n        | E_flip { t_in; dims_to_flip } ->\n            Some\n              (fun (k : (a, _) Effect.Deep.continuation) ->\n                let result = flip t_in dims_to_flip in\n                log_operation !context_stack \"flip\" [ Tensor_ref t_in ]\n                  (Tensor_ref result);\n                continue k result)\n        | E_contiguous { t_in } ->\n            Some\n              (fun (k : (a, _) Effect.Deep.continuation) ->\n                let result = contiguous t_in in\n                log_operation !context_stack \"contiguous\" [ Tensor_ref t_in ]\n                  (Tensor_ref result);\n                continue k result)\n        | E_copy { t_in } ->\n            Some\n              (fun (k : (a, _) Effect.Deep.continuation) ->\n                let result = copy t_in in\n                log_operation !context_stack \"copy\" [ Tensor_ref t_in ]\n                  (Tensor_ref result);\n                continue k result)\n        | E_buffer { context; dtype; 
size_in_elements } ->\n            Some\n              (fun (k : (a, _) Effect.Deep.continuation) ->\n                let result = buffer context dtype [| size_in_elements |] in\n                log_operation !context_stack \"buffer\" [] (Tensor_ref result);\n                continue k result)\n        | E_const_scalar { context; value; dtype } ->\n            Some\n              (fun (k : (a, _) Effect.Deep.continuation) ->\n                let result = const_scalar context value dtype in\n                log_operation !context_stack \"const_scalar\" []\n                  (Tensor_ref result);\n                continue k result)\n        | E_from_host { context; array } ->\n            Some\n              (fun (k : (a, _) Effect.Deep.continuation) ->\n                let result = from_host context array in\n                log_operation !context_stack \"from_host\" [] (Tensor_ref result);\n                continue k result)\n        | E_idiv { a; b } ->\n            Some\n              (fun (k : (a, _) Effect.Deep.continuation) ->\n                let out = div a b in\n                log_operation !context_stack \"idiv\"\n                  [ Tensor_ref a; Tensor_ref b ]\n                  (Tensor_ref out);\n                continue k out)\n        | E_mod { a; b } ->\n            Some\n              (fun (k : (a, _) Effect.Deep.continuation) ->\n                let out = mod_ a b in\n                log_operation !context_stack \"mod\"\n                  [ Tensor_ref a; Tensor_ref b ]\n                  (Tensor_ref out);\n                continue k out)\n        | E_cmplt { a; b } ->\n            Some\n              (fun (k : (a, _) Effect.Deep.continuation) ->\n                let out = cmplt a b in\n                log_operation !context_stack \"lt\"\n                  [ Tensor_ref a; Tensor_ref b ]\n                  (Tensor_ref out);\n                continue k out)\n        | E_cmpne { a; b } ->\n            Some\n              (fun (k : (a, _) 
Effect.Deep.continuation) ->\n                let out = cmpne a b in\n                log_operation !context_stack \"ne\"\n                  [ Tensor_ref a; Tensor_ref b ]\n                  (Tensor_ref out);\n                continue k out)\n        | E_xor { a; b } ->\n            Some\n              (fun (k : (a, _) Effect.Deep.continuation) ->\n                let out = xor a b in\n                log_operation !context_stack \"xor\"\n                  [ Tensor_ref a; Tensor_ref b ]\n                  (Tensor_ref out);\n                continue k out)\n        | E_or { a; b } ->\n            Some\n              (fun (k : (a, _) Effect.Deep.continuation) ->\n                let out = or_ a b in\n                log_operation !context_stack \"or\"\n                  [ Tensor_ref a; Tensor_ref b ]\n                  (Tensor_ref out);\n                continue k out)\n        | E_and { a; b } ->\n            Some\n              (fun (k : (a, _) Effect.Deep.continuation) ->\n                let out = and_ a b in\n                log_operation !context_stack \"and\"\n                  [ Tensor_ref a; Tensor_ref b ]\n                  (Tensor_ref out);\n                continue k out)\n        | E_recip { t_in } ->\n            Some\n              (fun (k : (a, _) Effect.Deep.continuation) ->\n                let out = recip t_in in\n                log_operation !context_stack \"recip\" [ Tensor_ref t_in ]\n                  (Tensor_ref out);\n                continue k out)\n        | E_reduce_prod { t_in; axes; keepdims } ->\n            Some\n              (fun (k : (a, _) Effect.Deep.continuation) ->\n                let out = reduce_prod ~axes ~keepdims t_in in\n                log_operation !context_stack \"prod\" [ Tensor_ref t_in ]\n                  (Tensor_ref out);\n                continue k out)\n        | E_assign { dst; src } ->\n            Some\n              (fun (k : (a, _) Effect.Deep.continuation) ->\n                assign dst src;\n             
   log_operation !context_stack \"assign\" [ Tensor_ref src ]\n                  (Tensor_ref dst);\n                continue k ())\n        | E_threefry { key; ctr } ->\n            Some\n              (fun (k : (a, _) Effect.Deep.continuation) ->\n                let result = threefry key ctr in\n                log_operation !context_stack \"threefry\"\n                  [ Tensor_ref key; Tensor_ref ctr ]\n                  (Tensor_ref result);\n                continue k result)\n        | E_to_device { t_in; context } ->\n            Some\n              (fun (k : (a, _) Effect.Deep.continuation) ->\n                let result = to_device context t_in in\n                log_operation !context_stack \"to_device\" [ Tensor_ref t_in ]\n                  (Tensor_ref result);\n                continue k result)\n        | E_unfold { t_in; kernel_size; stride; dilation; padding } ->\n            Some\n              (fun (k : (a, _) Effect.Deep.continuation) ->\n                let result =\n                  unfold t_in ~kernel_size ~stride ~dilation ~padding\n                in\n                log_operation !context_stack \"unfold\" [ Tensor_ref t_in ]\n                  (Tensor_ref result);\n                continue k result)\n        | E_fold { t_in; output_size; kernel_size; stride; dilation; padding }\n          ->\n            Some\n              (fun (k : (a, _) Effect.Deep.continuation) ->\n                let result =\n                  fold t_in ~output_size ~kernel_size ~stride ~dilation ~padding\n                in\n                log_operation !context_stack \"fold\" [ Tensor_ref t_in ]\n                  (Tensor_ref result);\n                continue k result)\n        | _ -> None);\n  }\n\nlet debug f x =\n  let handler = debug_handler () in\n  Effect.Deep.match_with f x handler\n"
  },
  {
    "path": "packages/rune/lib/dune",
    "content": "(library\n (name rune)\n (public_name rune)\n (private_modules autodiff jvp vjp custom_diff jacobian jit)\n (libraries nx.core nx nx.buffer nx.effect tolk tolk.ir))\n"
  },
  {
    "path": "packages/rune/lib/finite_diff.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Nx_core\nopen Nx_effect\nmodule T = Nx\n\ntype method_ = [ `Central | `Forward | `Backward ]\n\nlet default_eps = 1e-4 (* Better for float32 precision *)\n\nlet finite_diff (type a b c d) ?(eps = default_eps) ?(method_ = `Central)\n    (f : (a, b) T.t -> (c, d) T.t) (x : (a, b) T.t) : (a, b) T.t =\n  let x_shape = T.shape x in\n  let x_numel = Array.fold_left ( * ) 1 x_shape in\n\n  if x_numel = 0 then T.zeros (dtype x) x_shape\n  else\n    (* Create epsilon scalar with proper type *)\n    let eps_scalar =\n      let dt = dtype x in\n      T.full dt [||] (Dtype.of_float dt eps)\n    in\n\n    (* For simple scalar case *)\n    if x_numel = 1 then\n      match method_ with\n      | `Central ->\n          let x_plus = T.add x eps_scalar in\n          let x_minus = T.sub x eps_scalar in\n          let f_plus = f x_plus in\n          let f_minus = f x_minus in\n          (* Cast result back to input type *)\n          let result = T.sub f_plus f_minus in\n          let two_eps = T.add eps_scalar eps_scalar in\n          (* We need to cast the result to match input type *)\n          T.cast (dtype x) (T.div result (T.cast (dtype result) two_eps))\n      | `Forward ->\n          let x_plus = T.add x eps_scalar in\n          let f_plus = f x_plus in\n          let f_x = f x in\n          let result = T.sub f_plus f_x in\n          T.cast (dtype x) (T.div result (T.cast (dtype result) eps_scalar))\n      | `Backward ->\n          let x_minus = T.sub x eps_scalar in\n          let f_x = f x in\n          let f_minus = f x_minus in\n          let result = T.sub f_x f_minus in\n          T.cast (dtype x) (T.div result (T.cast (dtype result) eps_scalar))\n    else\n      (* For vector/matrix case - need to 
compute gradient elementwise *)\n      let grad = T.zeros (dtype x) x_shape in\n      let x_flat = T.reshape [| x_numel |] x in\n      let grad_flat = T.reshape [| x_numel |] grad in\n\n      for i = 0 to x_numel - 1 do\n        let x_copy_plus = T.copy x_flat in\n        let x_copy_minus = T.copy x_flat in\n\n        let current_val = T.get [ i ] x_flat in\n\n        match method_ with\n        | `Central ->\n            T.set [ i ] x_copy_plus (T.add current_val eps_scalar);\n            T.set [ i ] x_copy_minus (T.sub current_val eps_scalar);\n\n            let x_plus = T.reshape x_shape x_copy_plus in\n            let x_minus = T.reshape x_shape x_copy_minus in\n\n            let f_plus = f x_plus in\n            let f_minus = f x_minus in\n\n            if T.shape f_plus <> [||] then\n              failwith \"finite_diff: function must return scalar\";\n\n            let two_eps = T.add eps_scalar eps_scalar in\n            let result = T.sub f_plus f_minus in\n            let grad_i =\n              T.cast (dtype x) (T.div result (T.cast (dtype result) two_eps))\n            in\n            T.set [ i ] grad_flat grad_i\n        | `Forward ->\n            T.set [ i ] x_copy_plus (T.add current_val eps_scalar);\n\n            let x_plus = T.reshape x_shape x_copy_plus in\n            let x_orig = T.reshape x_shape x_flat in\n\n            let f_plus = f x_plus in\n            let f_x = f x_orig in\n\n            if T.shape f_plus <> [||] then\n              failwith \"finite_diff: function must return scalar\";\n\n            let result = T.sub f_plus f_x in\n            let grad_i =\n              T.cast (dtype x) (T.div result (T.cast (dtype result) eps_scalar))\n            in\n            T.set [ i ] grad_flat grad_i\n        | `Backward ->\n            T.set [ i ] x_copy_minus (T.sub current_val eps_scalar);\n\n            let x_minus = T.reshape x_shape x_copy_minus in\n            let x_orig = T.reshape x_shape x_flat in\n\n            let f_x = f x_orig 
in\n            let f_minus = f x_minus in\n\n            if T.shape f_x <> [||] then\n              failwith \"finite_diff: function must return scalar\";\n\n            let result = T.sub f_x f_minus in\n            let grad_i =\n              T.cast (dtype x) (T.div result (T.cast (dtype result) eps_scalar))\n            in\n            T.set [ i ] grad_flat grad_i\n      done;\n\n      T.reshape x_shape grad_flat\n\nlet finite_diff_jacobian (type a b c d) ?(eps = default_eps)\n    ?(method_ = `Central) (f : (a, b) T.t -> (c, d) T.t) (x : (a, b) T.t) :\n    (c, d) T.t =\n  let x_shape = T.shape x in\n  let x_numel = Array.fold_left ( * ) 1 x_shape in\n\n  let f_x = f x in\n  let output_shape = T.shape f_x in\n  let output_numel = Array.fold_left ( * ) 1 output_shape in\n\n  let jacobian = T.zeros (dtype f_x) [| output_numel; x_numel |] in\n\n  if x_numel = 0 || output_numel = 0 then jacobian\n  else\n    let x_flat = T.reshape [| x_numel |] x in\n\n    (* Create epsilon scalar with proper type *)\n    let eps_scalar =\n      let dt = dtype x in\n      T.full dt [||] (Dtype.of_float dt eps)\n    in\n\n    for i = 0 to x_numel - 1 do\n      let x_copy_plus = T.copy x_flat in\n      let x_copy_minus = T.copy x_flat in\n\n      let current_val = T.get [ i ] x_flat in\n\n      match method_ with\n      | `Central ->\n          T.set [ i ] x_copy_plus (T.add current_val eps_scalar);\n          T.set [ i ] x_copy_minus (T.sub current_val eps_scalar);\n\n          let x_plus = T.reshape x_shape x_copy_plus in\n          let x_minus = T.reshape x_shape x_copy_minus in\n\n          let f_plus = T.reshape [| output_numel |] (f x_plus) in\n          let f_minus = T.reshape [| output_numel |] (f x_minus) in\n\n          let two_eps_out =\n            let dt = dtype f_x in\n            T.full dt [||] (Dtype.of_float dt (2.0 *. 
eps))\n          in\n          let grad_col = T.div (T.sub f_plus f_minus) two_eps_out in\n\n          for j = 0 to output_numel - 1 do\n            T.set [ j; i ] jacobian (T.get [ j ] grad_col)\n          done\n      | `Forward ->\n          T.set [ i ] x_copy_plus (T.add current_val eps_scalar);\n\n          let x_plus = T.reshape x_shape x_copy_plus in\n\n          let f_plus = T.reshape [| output_numel |] (f x_plus) in\n          let f_x_flat = T.reshape [| output_numel |] f_x in\n\n          let eps_out =\n            let dt = dtype f_x in\n            T.full dt [||] (Dtype.of_float dt eps)\n          in\n          let grad_col = T.div (T.sub f_plus f_x_flat) eps_out in\n\n          for j = 0 to output_numel - 1 do\n            T.set [ j; i ] jacobian (T.get [ j ] grad_col)\n          done\n      | `Backward ->\n          T.set [ i ] x_copy_minus (T.sub current_val eps_scalar);\n\n          let x_minus = T.reshape x_shape x_copy_minus in\n\n          let f_x_flat = T.reshape [| output_numel |] f_x in\n          let f_minus = T.reshape [| output_numel |] (f x_minus) in\n\n          let eps_out =\n            let dt = dtype f_x in\n            T.full dt [||] (Dtype.of_float dt eps)\n          in\n          let grad_col = T.div (T.sub f_x_flat f_minus) eps_out in\n\n          for j = 0 to output_numel - 1 do\n            T.set [ j; i ] jacobian (T.get [ j ] grad_col)\n          done\n    done;\n\n    if output_shape = [||] then T.reshape x_shape (T.get [ 0 ] jacobian)\n    else jacobian\n"
  },
  {
    "path": "packages/rune/lib/gradcheck.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nmodule T = Nx\n\ntype gradient_check_result = {\n  max_abs_error : float;\n  max_rel_error : float;\n  mean_abs_error : float;\n  mean_rel_error : float;\n  failed_indices : (int array * float * float * float) list;\n  passed : bool;\n  num_checked : int;\n  num_failed : int;\n}\n\nlet default_rtol = 2e-3 (* JAX default for float32 *)\nlet default_atol = 2e-3 (* JAX default for float32 *)\nlet to_float_value t = T.item [] t\n\nlet check_gradient ?(eps = Finite_diff.default_eps) ?(rtol = default_rtol)\n    ?(atol = default_atol) ?(verbose = false) ?(check_indices = None)\n    ?(method_ = `Central) f x =\n  let autodiff_grad = Vjp.grad f x in\n\n  let finite_diff_grad = Finite_diff.finite_diff ~eps ~method_ f x in\n\n  let shape = T.shape x in\n  let numel = Array.fold_left ( * ) 1 shape in\n\n  let autodiff_flat = T.reshape [| numel |] autodiff_grad in\n  let finite_diff_flat = T.reshape [| numel |] finite_diff_grad in\n\n  let indices_to_check =\n    match check_indices with\n    | None -> List.init numel Fun.id\n    | Some indices -> indices\n  in\n\n  let failed_indices = ref [] in\n  let abs_errors = ref [] in\n  let rel_errors = ref [] in\n\n  List.iter\n    (fun i ->\n      let auto_val = to_float_value (T.get [ i ] autodiff_flat) in\n      let finite_val = to_float_value (T.get [ i ] finite_diff_flat) in\n\n      let abs_error = abs_float (auto_val -. finite_val) in\n      let rel_error =\n        if abs_float auto_val > 1e-12 || abs_float finite_val > 1e-12 then\n          abs_error /. 
max (abs_float auto_val) (abs_float finite_val)\n        else 0.0\n      in\n\n      abs_errors := abs_error :: !abs_errors;\n      rel_errors := rel_error :: !rel_errors;\n\n      let passed_check = abs_error <= atol || rel_error <= rtol in\n\n      if not passed_check then (\n        let nd_index =\n          let flat_idx = i in\n          let nd_idx = Array.make (Array.length shape) 0 in\n          let mutable_idx = ref flat_idx in\n          for dim = Array.length shape - 1 downto 0 do\n            nd_idx.(dim) <- !mutable_idx mod shape.(dim);\n            mutable_idx := !mutable_idx / shape.(dim)\n          done;\n          nd_idx\n        in\n        failed_indices :=\n          (nd_index, auto_val, finite_val, abs_error) :: !failed_indices;\n\n        if verbose then\n          Printf.printf\n            \"Failed at index %s: autodiff=%.6e, finite_diff=%.6e, \\\n             abs_error=%.6e, rel_error=%.6e\\n\"\n            (nd_index |> Array.to_list |> List.map string_of_int\n           |> String.concat \", \" |> Printf.sprintf \"[%s]\")\n            auto_val finite_val abs_error rel_error))\n    indices_to_check;\n\n  let max_abs_error = List.fold_left max 0.0 !abs_errors in\n  let max_rel_error = List.fold_left max 0.0 !rel_errors in\n  let mean_abs_error =\n    if !abs_errors = [] then 0.0\n    else\n      List.fold_left ( +. ) 0.0 !abs_errors\n      /. float_of_int (List.length !abs_errors)\n  in\n  let mean_rel_error =\n    if !rel_errors = [] then 0.0\n    else\n      List.fold_left ( +. ) 0.0 !rel_errors\n      /. 
float_of_int (List.length !rel_errors)\n  in\n\n  let num_checked = List.length indices_to_check in\n  let num_failed = List.length !failed_indices in\n  let passed = num_failed = 0 in\n\n  if verbose then (\n    Printf.printf \"\\nGradient check summary:\\n\";\n    Printf.printf \"  Checked: %d elements\\n\" num_checked;\n    Printf.printf \"  Failed: %d elements\\n\" num_failed;\n    Printf.printf \"  Max absolute error: %.6e\\n\" max_abs_error;\n    Printf.printf \"  Max relative error: %.6e\\n\" max_rel_error;\n    Printf.printf \"  Mean absolute error: %.6e\\n\" mean_abs_error;\n    Printf.printf \"  Mean relative error: %.6e\\n\" mean_rel_error;\n    Printf.printf \"  Status: %s\\n\" (if passed then \"PASSED\" else \"FAILED\"));\n\n  let result =\n    {\n      max_abs_error;\n      max_rel_error;\n      mean_abs_error;\n      mean_rel_error;\n      failed_indices = List.rev !failed_indices;\n      passed;\n      num_checked;\n      num_failed;\n    }\n  in\n\n  if passed then `Pass result else `Fail result\n\nlet check_gradients ?(eps = Finite_diff.default_eps) ?(rtol = default_rtol)\n    ?(atol = default_atol) ?(verbose = false) ?(method_ = `Central) f xs =\n  let autodiff_grads = Vjp.grads f xs in\n\n  let results =\n    List.mapi\n      (fun idx (x, autodiff_grad) ->\n        let f_single x_i =\n          let xs_copy = List.mapi (fun i x -> if i = idx then x_i else x) xs in\n          f xs_copy\n        in\n\n        let finite_diff_grad =\n          Finite_diff.finite_diff ~eps ~method_ f_single x\n        in\n\n        let shape = T.shape x in\n        let numel = Array.fold_left ( * ) 1 shape in\n\n        let autodiff_flat = T.reshape [| numel |] autodiff_grad in\n        let finite_diff_flat = T.reshape [| numel |] finite_diff_grad in\n\n        let failed_indices = ref [] in\n        let abs_errors = ref [] in\n        let rel_errors = ref [] in\n\n        for i = 0 to numel - 1 do\n          let auto_val = to_float_value (T.get [ i ] autodiff_flat) 
in\n          let finite_val = to_float_value (T.get [ i ] finite_diff_flat) in\n\n          let abs_error = abs_float (auto_val -. finite_val) in\n          let rel_error =\n            if abs_float auto_val > 1e-12 || abs_float finite_val > 1e-12 then\n              abs_error /. max (abs_float auto_val) (abs_float finite_val)\n            else 0.0\n          in\n\n          abs_errors := abs_error :: !abs_errors;\n          rel_errors := rel_error :: !rel_errors;\n\n          let passed_check = abs_error <= atol || rel_error <= rtol in\n\n          if not passed_check then (\n            let nd_index =\n              let flat_idx = i in\n              let nd_idx = Array.make (Array.length shape) 0 in\n              let mutable_idx = ref flat_idx in\n              for dim = Array.length shape - 1 downto 0 do\n                nd_idx.(dim) <- !mutable_idx mod shape.(dim);\n                mutable_idx := !mutable_idx / shape.(dim)\n              done;\n              nd_idx\n            in\n            failed_indices :=\n              (nd_index, auto_val, finite_val, abs_error) :: !failed_indices;\n\n            if verbose then\n              Printf.printf\n                \"Input %d failed at index %s: autodiff=%.6e, finite_diff=%.6e, \\\n                 abs_error=%.6e, rel_error=%.6e\\n\"\n                idx\n                (nd_index |> Array.to_list |> List.map string_of_int\n               |> String.concat \", \" |> Printf.sprintf \"[%s]\")\n                auto_val finite_val abs_error rel_error)\n        done;\n\n        let max_abs_error =\n          if !abs_errors = [] then 0.0 else List.fold_left max 0.0 !abs_errors\n        in\n        let max_rel_error =\n          if !rel_errors = [] then 0.0 else List.fold_left max 0.0 !rel_errors\n        in\n        let mean_abs_error =\n          if !abs_errors = [] then 0.0\n          else\n            List.fold_left ( +. ) 0.0 !abs_errors\n            /. 
float_of_int (List.length !abs_errors)\n        in\n        let mean_rel_error =\n          if !rel_errors = [] then 0.0\n          else\n            List.fold_left ( +. ) 0.0 !rel_errors\n            /. float_of_int (List.length !rel_errors)\n        in\n\n        let num_checked = numel in\n        let num_failed = List.length !failed_indices in\n        let passed = num_failed = 0 in\n\n        if verbose then (\n          Printf.printf \"\\nGradient check summary for input %d:\\n\" idx;\n          Printf.printf \"  Checked: %d elements\\n\" num_checked;\n          Printf.printf \"  Failed: %d elements\\n\" num_failed;\n          Printf.printf \"  Max absolute error: %.6e\\n\" max_abs_error;\n          Printf.printf \"  Max relative error: %.6e\\n\" max_rel_error;\n          Printf.printf \"  Mean absolute error: %.6e\\n\" mean_abs_error;\n          Printf.printf \"  Mean relative error: %.6e\\n\" mean_rel_error;\n          Printf.printf \"  Status: %s\\n\" (if passed then \"PASSED\" else \"FAILED\"));\n\n        {\n          max_abs_error;\n          max_rel_error;\n          mean_abs_error;\n          mean_rel_error;\n          failed_indices = List.rev !failed_indices;\n          passed;\n          num_checked;\n          num_failed;\n        })\n      (List.combine xs autodiff_grads)\n  in\n\n  let all_passed = List.for_all (fun r -> r.passed) results in\n  if all_passed then `Pass results else `Fail results\n"
  },
  {
    "path": "packages/rune/lib/jacobian.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nlet f64 = Nx.float64\n\nlet jacfwd f x =\n  let y0 = f x in\n  let n = Nx.numel x and m = Nx.numel y0 in\n  let cols =\n    List.init n (fun i ->\n        let ei = Nx.zeros f64 [| n |] in\n        Nx.set_item [ i ] 1.0 ei;\n        let _, col =\n          Jvp.jvp\n            (fun x -> Nx.reshape [| m |] (f (Nx.reshape [| n |] x)))\n            (Nx.reshape [| n |] x) ei\n        in\n        col)\n  in\n  Nx.stack ~axis:1 cols\n\nlet jacrev f x =\n  let y0 = f x in\n  let n = Nx.numel x and m = Nx.numel y0 in\n  let rows =\n    List.init m (fun i ->\n        let ei = Nx.zeros f64 [| m |] in\n        Nx.set_item [ i ] 1.0 ei;\n        let _, row =\n          Vjp.vjp\n            (fun x -> Nx.reshape [| m |] (f (Nx.reshape [| n |] x)))\n            (Nx.reshape [| n |] x) ei\n        in\n        row)\n  in\n  Nx.stack ~axis:0 rows\n"
  },
  {
    "path": "packages/rune/lib/jit.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* JIT compilation via effect handler.\n\n   Intercepts Nx tensor operations to build a computation graph, then\n   delegates scheduling, compilation, memory planning, and replay to\n   the tolk JIT engine (Tolk.Jit.Tiny_jit).\n\n   Three phases (managed by Tiny_jit):\n   - Warmup (cnt=0): execute eagerly via C backend\n   - Capture (cnt=1): intercept effects, build lazy graph, schedule,\n     compile, and execute\n   - Replay (cnt>=2): validate inputs, substitute buffers, execute *)\n\nopen Nx_effect\nmodule T = Tolk_ir.Tensor\nmodule B = Tolk.Device.Buffer\n\n(* Dtype mapping *)\n\nlet tolk_dtype (type a b) (dt : (a, b) Nx.dtype) : Tolk_ir.Dtype.t =\n  let open Tolk_ir.Dtype in\n  match Nx_core.Dtype.to_string dt with\n  | \"float32\" -> float32\n  | \"float64\" -> float64\n  | \"float16\" -> float16\n  | \"int32\" -> int32\n  | \"int64\" -> int64\n  | \"int8\" -> int8\n  | \"int16\" -> int16\n  | \"uint8\" -> uint8\n  | \"uint16\" -> uint16\n  | \"uint32\" -> uint32\n  | \"uint64\" -> uint64\n  | \"bool\" -> bool\n  | s -> failwith (Printf.sprintf \"Jit: unsupported dtype %s\" s)\n\n(* Buffer transfer *)\n\nlet nx_to_device_buffer (type a b) dev (t : (a, b) Nx_effect.t) : B.t =\n  let host = Nx_effect.to_host t in\n  let shape = Nx_effect.view t |> Nx_core.View.shape in\n  let num_elements = Int.max 1 (Array.fold_left ( * ) 1 shape) in\n  let dt = tolk_dtype (Nx_effect.dtype t) in\n  let buf = Tolk.Device.create_buffer ~size:num_elements ~dtype:dt dev in\n  B.ensure_allocated buf;\n  let nbytes = num_elements * Tolk_ir.Dtype.itemsize dt in\n  let src_bytes = Bytes.create nbytes in\n  Nx_buffer.blit_to_bytes host src_bytes;\n  B.copyin buf src_bytes;\n  buf\n\nlet device_buffer_to_nx (type a b) 
(dt : (a, b) Nx.dtype) (shape : int array)\n    (buf : B.t) : (a, b) Nx_effect.t =\n  let num_elements = Int.max 1 (Array.fold_left ( * ) 1 shape) in\n  let tdt = tolk_dtype dt in\n  let nbytes = num_elements * Tolk_ir.Dtype.itemsize tdt in\n  let dst_bytes = Bytes.create nbytes in\n  B.copyout buf dst_bytes;\n  let ctx = Nx_effect.create_context () in\n  let nx_buf = Nx_effect.buffer ctx dt shape in\n  let host = Nx_effect.to_host nx_buf in\n  Nx_buffer.blit_from_bytes dst_bytes host;\n  nx_buf\n\n(* Identity-keyed hash table *)\n\n(* Physical identity ([==]) tracks tensor values through the effect\n   handler — structural equality would collide on placeholder tensors. *)\nmodule Phys_tbl = Hashtbl.Make (struct\n  type t = Obj.t\n  let equal = ( == )\n  let hash x = Hashtbl.hash (Obj.obj x : int)\nend)\n\n(* Capture context *)\n\ntype capture_ctx = {\n  tensor_to_node : T.t Phys_tbl.t;\n  slot_tensors : (int, Obj.t * Tolk_ir.Dtype.t * int array) Hashtbl.t;\n  mutable param_count : int;\n  device_node : T.t;\n}\n\nlet create_capture_ctx device_node = {\n  tensor_to_node = Phys_tbl.create 64;\n  slot_tensors = Hashtbl.create 16;\n  param_count = 0;\n  device_node;\n}\n\nlet make_placeholder (type a b) (dt : (a, b) Nx.dtype) (shape : int array)\n    : (a, b) Nx_effect.t =\n  let ctx = Nx_effect.create_context () in\n  Nx_effect.buffer ctx dt shape\n\nlet register ctx (t : _ Nx_effect.t) (node : T.t) : unit =\n  Phys_tbl.replace ctx.tensor_to_node (Obj.repr t) node\n\nlet shape_node dims =\n  let int_ n =\n    T.const (Tolk_ir.Const.int Tolk_ir.Dtype.Val.index n) Tolk_ir.Dtype.index in\n  match dims with\n  | [] -> T.const (Tolk_ir.Const.int Tolk_ir.Dtype.Val.index 1) Tolk_ir.Dtype.index\n  | _ ->\n      match List.map int_ dims with [d] -> d | ds -> T.vectorize ~srcs:ds\n\n(* Read a scalar value from an Nx tensor and construct a Const.t.\n   Used to fold scalar constants into the kernel IR. 
*)\nlet read_scalar_const (type a b) (t : (a, b) Nx_effect.t)\n    (dt : Tolk_ir.Dtype.t) : Tolk_ir.Const.t =\n  let vdt = Tolk_ir.Dtype.val_of dt in\n  let nbytes = Tolk_ir.Dtype.itemsize dt in\n  let buf = Bytes.create nbytes in\n  Nx_buffer.blit_to_bytes (Nx_effect.to_host t) buf;\n  if Tolk_ir.Dtype.is_float dt then\n    let v = if nbytes = 4 then\n      Int32.float_of_bits (Bytes.get_int32_le buf 0)\n    else Int64.float_of_bits (Bytes.get_int64_le buf 0) in\n    Tolk_ir.Const.float vdt v\n  else if Tolk_ir.Dtype.equal dt Tolk_ir.Dtype.bool then\n    Tolk_ir.Const.bool (Bytes.get_uint8 buf 0 <> 0)\n  else\n    let v = if nbytes <= 4 then\n      Int32.to_int (Bytes.get_int32_le buf 0)\n    else Int64.to_int (Bytes.get_int64_le buf 0) in\n    Tolk_ir.Const.int vdt v\n\nlet lookup_or_param ctx (t : _ Nx_effect.t) : T.t =\n  let key = Obj.repr t in\n  match Phys_tbl.find_opt ctx.tensor_to_node key with\n  | Some node -> node\n  | None ->\n      let dt = tolk_dtype (Nx_effect.dtype t) in\n      let shape = Nx_effect.view t |> Nx_core.View.shape in\n      if shape = [||] then begin\n        (* Scalar constant: fold into the IR directly. 
*)\n        let cv = read_scalar_const t dt in\n        let node = T.const cv dt in\n        Phys_tbl.replace ctx.tensor_to_node key node;\n        node\n      end else begin\n        let slot = ctx.param_count in\n        ctx.param_count <- slot + 1;\n        let sh = shape_node (Array.to_list shape) in\n        let node =\n          T.param ~slot ~dtype:dt ~shape:sh ~device:ctx.device_node ()\n        in\n        Phys_tbl.replace ctx.tensor_to_node key node;\n        Hashtbl.replace ctx.slot_tensors slot (key, dt, shape);\n        node\n      end\n\n(* Graph building *)\n\nlet emit_binary ctx op (a : _ Nx_effect.t) (b : _ Nx_effect.t) ~dtype ~shape =\n  let a_node = lookup_or_param ctx a in\n  let b_node = lookup_or_param ctx b in\n  let out = make_placeholder dtype shape in\n  let node = T.binary ~op ~lhs:a_node ~rhs:b_node in\n  register ctx out node;\n  out\n\nlet emit_unary ctx op (src : _ Nx_effect.t) ~dtype ~shape =\n  let src_node = lookup_or_param ctx src in\n  let out = make_placeholder dtype shape in\n  let node = T.unary ~op ~src:src_node in\n  register ctx out node;\n  out\n\nlet emit_reduce ctx op ~axes (src : _ Nx_effect.t) ~dtype ~shape =\n  let src_node = lookup_or_param ctx src in\n  let out = make_placeholder dtype shape in\n  let node = T.reduce_axis ~src:src_node ~op ~axes in\n  register ctx out node;\n  out\n\nlet infer_shape (t : _ Nx_effect.t) = Nx_effect.view t |> Nx_core.View.shape\n\nlet reduce_shape in_shape axes_list =\n  let out =\n    List.filteri (fun i _ -> not (List.mem i axes_list))\n      (Array.to_list in_shape)\n  in\n  if out = [] then [||] else Array.of_list out\n\n(* Effect handler *)\n\nlet make_capture_handler ctx =\n  let open Effect.Deep in\n  let effc : type c. 
c Effect.t -> ((c, _) continuation -> _) option =\n   fun eff ->\n    match eff with\n    | E_add { a; b } ->\n        Some (fun k -> continue k\n          (emit_binary ctx `Add a b ~dtype:(Nx_effect.dtype a)\n             ~shape:(infer_shape a)))\n    | E_sub { a; b } ->\n        Some (fun k -> continue k\n          (emit_binary ctx `Sub a b ~dtype:(Nx_effect.dtype a)\n             ~shape:(infer_shape a)))\n    | E_mul { a; b } ->\n        Some (fun k -> continue k\n          (emit_binary ctx `Mul a b ~dtype:(Nx_effect.dtype a)\n             ~shape:(infer_shape a)))\n    | E_max { a; b } ->\n        Some (fun k -> continue k\n          (emit_binary ctx `Max a b ~dtype:(Nx_effect.dtype a)\n             ~shape:(infer_shape a)))\n    | E_cmpeq { a; b } ->\n        Some (fun k -> continue k\n          (emit_binary ctx `Cmpeq a b ~dtype:Nx.bool\n             ~shape:(infer_shape a)))\n    | E_cmplt { a; b } ->\n        Some (fun k -> continue k\n          (emit_binary ctx `Cmplt a b ~dtype:Nx.bool\n             ~shape:(infer_shape a)))\n    | E_neg { t_in } ->\n        Some (fun k -> continue k\n          (emit_unary ctx `Neg t_in ~dtype:(Nx_effect.dtype t_in)\n             ~shape:(infer_shape t_in)))\n    | E_sqrt { t_in } ->\n        Some (fun k -> continue k\n          (emit_unary ctx `Sqrt t_in ~dtype:(Nx_effect.dtype t_in)\n             ~shape:(infer_shape t_in)))\n    | E_sin { t_in } ->\n        Some (fun k -> continue k\n          (emit_unary ctx `Sin t_in ~dtype:(Nx_effect.dtype t_in)\n             ~shape:(infer_shape t_in)))\n    | E_recip { t_in } ->\n        Some (fun k -> continue k\n          (emit_unary ctx `Recip t_in ~dtype:(Nx_effect.dtype t_in)\n             ~shape:(infer_shape t_in)))\n    | E_reduce_sum { t_in; axes; keepdims = _ } ->\n        Some (fun k ->\n          let axes_l = Array.to_list axes in\n          continue k\n            (emit_reduce ctx `Add ~axes:axes_l t_in\n               ~dtype:(Nx_effect.dtype t_in)\n               
~shape:(reduce_shape (infer_shape t_in) axes_l)))\n    | E_reduce_max { t_in; axes; keepdims = _ } ->\n        Some (fun k ->\n          let axes_l = Array.to_list axes in\n          continue k\n            (emit_reduce ctx `Max ~axes:axes_l t_in\n               ~dtype:(Nx_effect.dtype t_in)\n               ~shape:(reduce_shape (infer_shape t_in) axes_l)))\n    | E_reshape { t_in; new_shape } ->\n        Some (fun k ->\n          let src_node = lookup_or_param ctx t_in in\n          let out = make_placeholder (Nx_effect.dtype t_in) new_shape in\n          let sh = shape_node (Array.to_list new_shape) in\n          let node = T.reshape ~src:src_node ~shape:sh in\n          register ctx out node;\n          continue k out)\n    | E_permute { t_in; axes } ->\n        Some (fun k ->\n          let src_node = lookup_or_param ctx t_in in\n          let in_shape = infer_shape t_in in\n          let out_shape = Array.map (fun ax -> in_shape.(ax)) axes in\n          let out = make_placeholder (Nx_effect.dtype t_in) out_shape in\n          let node = T.permute ~src:src_node ~order:(Array.to_list axes) in\n          register ctx out node;\n          continue k out)\n    | E_expand { t_in; new_target_shape } ->\n        Some (fun k ->\n          let src_node = lookup_or_param ctx t_in in\n          let out = make_placeholder (Nx_effect.dtype t_in) new_target_shape in\n          let sh = shape_node (Array.to_list new_target_shape) in\n          let node = T.expand ~src:src_node ~shape:sh in\n          register ctx out node;\n          continue k out)\n    | E_cast { t_in; target_dtype } ->\n        Some (fun k ->\n          let src_node = lookup_or_param ctx t_in in\n          let dt = tolk_dtype target_dtype in\n          let out = make_placeholder target_dtype (infer_shape t_in) in\n          let node = T.cast ~src:src_node ~dtype:dt in\n          register ctx out node;\n          continue k out)\n    | E_where { condition; if_true; if_false } ->\n        Some (fun k ->\n     
     let c_node = lookup_or_param ctx condition in\n          let t_node = lookup_or_param ctx if_true in\n          let f_node = lookup_or_param ctx if_false in\n          let out =\n            make_placeholder (Nx_effect.dtype if_true) (infer_shape if_true)\n          in\n          let node = T.ternary ~op:`Where ~a:c_node ~b:t_node ~c:f_node in\n          register ctx out node;\n          continue k out)\n    | E_const_scalar { context = _; value; dtype = dt } ->\n        Some (fun k ->\n          let out = make_placeholder dt [||] in\n          let tdt = tolk_dtype dt in\n          let vdt = Tolk_ir.Dtype.val_of tdt in\n          let cv =\n            if Tolk_ir.Dtype.is_float tdt then\n              Tolk_ir.Const.float vdt (Obj.magic value : float)\n            else if Tolk_ir.Dtype.equal tdt Tolk_ir.Dtype.bool then\n              Tolk_ir.Const.bool (Obj.magic value : bool)\n            else Tolk_ir.Const.int vdt (Obj.magic value : int)\n          in\n          let node = T.const cv tdt in\n          register ctx out node;\n          continue k out)\n    | _ -> None\n  in\n  { retc = (fun x -> x); exnc = raise; effc }\n\n(* Graph capture *)\n\nlet capture_graph (type a b c d) ?(device_name = \"CPU\")\n    (f : (a, b) Nx.t -> (c, d) Nx.t) (x : (a, b) Nx.t)\n    : T.t * capture_ctx * (c, d) Nx_effect.t =\n  let device_node = T.device (Single device_name) in\n  let ctx = create_capture_ctx device_node in\n  let handler = make_capture_handler ctx in\n  let result = Effect.Deep.match_with f x handler in\n  let result_node = lookup_or_param ctx result in\n  let contig = T.contiguous ~src:result_node () in\n  let graph = T.sink [ contig ] in\n  (graph, ctx, result)\n\n(* Scheduling bridge *)\n\n(* Build the buffers callback for Schedule.linear_to_schedule.\n   Maps PARAM tensor nodes to device buffers: slot 0 is the function\n   input, other slots are captured constants. 
*)\nlet make_buffers_cb ctx dev input_buf =\n  let cache : (int, B.t) Hashtbl.t = Hashtbl.create 16 in\n  fun (node : T.t) ->\n    match T.view node with\n    | Param { slot; _ } ->\n        (match Hashtbl.find_opt cache slot with\n         | Some buf -> Some buf\n         | None ->\n             let buf =\n               if slot = 0 then input_buf\n               else\n                 let repr, dt, shape = Hashtbl.find ctx.slot_tensors slot in\n                 let num = Int.max 1 (Array.fold_left ( * ) 1 shape) in\n                 let buf =\n                   Tolk.Device.create_buffer ~size:num ~dtype:dt dev in\n                 B.ensure_allocated buf;\n                 let nbytes = num * Tolk_ir.Dtype.itemsize dt in\n                 let src = Bytes.create nbytes in\n                 let host =\n                   Nx_effect.to_host (Obj.obj repr : (_, _) Nx_effect.t) in\n                 Nx_buffer.blit_to_bytes host src;\n                 B.copyin buf src;\n                 buf\n             in\n             Hashtbl.replace cache slot buf;\n             Some buf)\n    | _ -> None\n\n(* Find the output buffer in the captured schedule — first non-None\n   buffer of the last exec item. *)\nlet find_output_buf cache =\n  let n = Array.length cache in\n  let rec loop i =\n    if i < 0 then failwith \"Jit: no output buffer in schedule\";\n    match (cache.(i)).Tolk.Jit.bufs.(0) with\n    | Some buf -> buf\n    | None -> loop (i - 1)\n  in\n  loop (n - 1)\n\n(* Public API *)\n\nlet trace (type a b c d) ?(device : Tolk.Device.t option)\n    (f : (a, b) Nx.t -> (c, d) Nx.t) : (a, b) Nx.t -> (c, d) Nx.t =\n  (* The Tiny_jit is created lazily on the second call (capture phase),\n     because warmup runs eagerly and doesn't need a device. 
*)\n  let tjit_ref : (unit -> (c, d) Nx.t) Tolk.Jit.tiny_jit option ref =\n    ref None in\n  let input_nx_dtype : Obj.t option ref = ref None in\n  let input_shape : int array ref = ref [||] in\n  let output_nx_dtype : Obj.t option ref = ref None in\n  let output_shape : int array ref = ref [||] in\n  let buffers_ref : (T.t -> B.t option) ref = ref (fun _ -> None) in\n  let warmup_done = ref false in\n  let ensure_tjit () =\n    match !tjit_ref with\n    | Some t -> t\n    | None ->\n        let dev = match device with\n          | Some d -> d\n          | None -> failwith \"Jit.trace: device is required for JIT\"\n        in\n        let ren = Tolk.Device.renderer dev in\n        let get_program = Tolk.Codegen.get_program dev ren in\n        let device_name = Tolk.Device.name dev in\n        let fxn (input_bufs : B.t array) _var_vals\n            : unit -> (c, d) Nx.t =\n          if Tolk.Jit.is_capturing () then begin\n            (* Capture: build tensor graph under effect handler,\n               schedule, and register the linear. 
*)\n            let x = make_placeholder\n              (Obj.obj (Option.get !input_nx_dtype) : (a, b) Nx.dtype)\n              !input_shape in\n            let graph, ctx, result =\n              capture_graph ~device_name f x in\n            output_shape := infer_shape result;\n            output_nx_dtype :=\n              Some (Obj.repr (Nx_effect.dtype result));\n            buffers_ref := make_buffers_cb ctx dev input_bufs.(0);\n            let linear =\n              match Tolk.Schedule.lower_sink_to_linear\n                      ~get_kernel_graph:Tolk.Rangeify.get_kernel_graph\n                      graph with\n              | Some l -> l\n              | None -> failwith \"Jit: scheduling failed\"\n            in\n            Tolk.Jit.add_linear linear;\n            let out_dt : (c, d) Nx.dtype =\n              Obj.obj (Option.get !output_nx_dtype) in\n            let out_shape = !output_shape in\n            fun () ->\n              let c = Option.get\n                (Tolk.Jit.captured (Option.get !tjit_ref)) in\n              let buf = find_output_buf (Tolk.Jit.jit_cache c) in\n              device_buffer_to_nx out_dt out_shape buf\n          end else begin\n            (* Warmup inside Tiny_jit (cnt=0). *)\n            let x = device_buffer_to_nx\n              (Obj.obj (Option.get !input_nx_dtype) : (a, b) Nx.dtype)\n              !input_shape input_bufs.(0) in\n            let result = f x in\n            output_shape := infer_shape result;\n            output_nx_dtype :=\n              Some (Obj.repr (Nx_effect.dtype result));\n            fun () -> result\n          end\n        in\n        let tjit =\n          Tolk.Jit.create ~device:dev ~get_program ~fxn () in\n        tjit_ref := Some tjit;\n        tjit\n  in\n  fun (x : (a, b) Nx.t) ->\n    if not !warmup_done then begin\n      (* Warmup: run eagerly on the C backend, no device needed. 
*)\n      warmup_done := true;\n      f x\n    end else begin\n      let tjit = ensure_tjit () in\n      input_nx_dtype := Some (Obj.repr (Nx_effect.dtype x));\n      input_shape := infer_shape x;\n      let dev = match device with\n        | Some d -> d\n        | None -> failwith \"Jit.trace: device is required for JIT\"\n      in\n      let buf = nx_to_device_buffer dev x in\n      let thunk = Tolk.Jit.call tjit [| buf |] []\n        ~buffers:(fun node -> !buffers_ref node) in\n      thunk ()\n    end\n\n(* Trace graph (debug/inspection) *)\n\ntype traced = {\n  tensor_graph : T.t;\n  kernel_graph : T.t;\n  rendered_source : string list;\n}\n\nlet extract_rendered_sources dev ren kernel_graph =\n  let sources = ref [] in\n  List.iter (fun node ->\n    match T.view node with\n    | Call { callee = Ast kernel; _ } ->\n        let p = Tolk.Codegen.get_program dev ren kernel in\n        sources := String.trim (Tolk.Program_spec.src p) :: !sources\n    | _ -> ())\n    (T.toposort kernel_graph);\n  List.rev !sources\n\nlet trace_graph (type a b c d) ~(device : Tolk.Device.t)\n    (f : (a, b) Nx.t -> (c, d) Nx.t) (x : (a, b) Nx.t) : traced =\n  let device_name = Tolk.Device.name device in\n  let tensor_graph, _ctx, _result = capture_graph ~device_name f x in\n  let kernel_graph = Tolk.Rangeify.get_kernel_graph tensor_graph in\n  let ren = Tolk.Device.renderer device in\n  let rendered_source = extract_rendered_sources device ren kernel_graph in\n  { tensor_graph; kernel_graph; rendered_source }\n\nlet reset () = ()\n"
  },
  {
    "path": "packages/rune/lib/jit.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** JIT compilation via effect handler.\n\n    Intercepts {!Nx} tensor operations to build a computation graph, compiles it\n    into optimized machine code, and replays the compiled schedule on subsequent\n    calls.\n\n    {b Usage:}\n    {[\n      let f_jit = Rune.jit f in\n      let y1 = f_jit x1 in   (* warmup: execute eagerly *)\n      let y2 = f_jit x2 in   (* capture: compile computation graph *)\n      let y3 = f_jit x3 in   (* replay: fast, no recompilation *)\n    ]}\n\n    When no device is provided, the JIT captures the graph but falls back to\n    eager execution. Pass [~device] to enable compiled execution. *)\n\nval trace :\n  ?device:Tolk.Device.t ->\n  (('a, 'b) Nx.t -> ('c, 'd) Nx.t) ->\n  ('a, 'b) Nx.t ->\n  ('c, 'd) Nx.t\n(** [trace ?device f] returns a JIT-compiled version of [f].\n\n    The returned function has the same type as [f] but compiles the computation\n    graph on the second call and replays the compiled schedule on subsequent\n    calls.\n\n    [device] selects the execution backend. When omitted, the computation graph\n    is still captured but execution falls back to the C backend.\n\n    Raises [Invalid_argument] if input shapes or dtypes change after capture. *)\n\n(** {1:inspection Inspecting computation graphs} *)\n\ntype traced = {\n  tensor_graph : Tolk_ir.Tensor.t;\n      (** High-level operation graph before scheduling. *)\n  kernel_graph : Tolk_ir.Tensor.t;\n      (** Scheduled graph with [Call] nodes containing kernel ASTs. *)\n  rendered_source : string list;\n      (** Rendered source code for each kernel (one per [Call] node). *)\n}\n(** Result of tracing a function through the JIT capture handler. 
*)\n\nval trace_graph :\n  device:Tolk.Device.t ->\n  (('a, 'b) Nx.t -> ('c, 'd) Nx.t) ->\n  ('a, 'b) Nx.t ->\n  traced\n(** [trace_graph ~device f x] traces [f] applied to [x], capturing the\n    computation graph without executing it.\n\n    Returns the tensor graph, kernel graph, and rendered source for\n    each kernel.  Useful for debugging what the JIT produces,\n    inspecting gradient graphs, or comparing against reference\n    implementations. *)\n\nval reset : unit -> unit\n(** [reset ()] clears the JIT cache, forcing recompilation on the next call. *)\n"
  },
  {
    "path": "packages/rune/lib/jvp.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Forward-mode automatic differentiation (JVP). Propagates tangent vectors\n   alongside primal values through an effect handler that intercepts every\n   tensor operation. *)\n\nopen Nx_core\nopen Nx_effect\nmodule T = Nx\n\n(* Dual numbers *)\n\ntype ('a, 'b) dual = { primal : ('a, 'b) t; tangent : ('a, 'b) t }\ntype any_dual = Any_dual : ('a, 'b) dual -> any_dual\n\nlet unwrap_dual (type a b) (_ : (a, b) Dtype.t) (Any_dual d) : (a, b) dual =\n  Obj.magic d\n\n(* Effect handler *)\n\nlet make_handler dual_map =\n  let open Effect.Deep in\n  let get_dual (type a b) (t : (a, b) t) : (a, b) dual =\n    match Autodiff.Physical_tbl.find dual_map t with\n    | Some (Any_dual d) -> unwrap_dual (dtype t) (Any_dual d)\n    | None -> { primal = t; tangent = T.zeros_like t }\n  in\n  let register t dual = Autodiff.Physical_tbl.add dual_map t (Any_dual dual) in\n\n  let effc : type c. 
c Effect.t -> ((c, _) continuation -> _) option =\n   fun eff ->\n    if not !Autodiff.autodiff_enabled then None\n    else\n      match eff with\n      (* Sources *)\n      | E_const_scalar { context = _; value; dtype = dt } ->\n          Some\n            (fun k ->\n              let res = T.full dt [||] value in\n              register res { primal = res; tangent = T.zeros_like res };\n              continue k res)\n      | E_from_host { context = ctx; array } ->\n          Some\n            (fun k ->\n              let res = from_host ctx array in\n              register res { primal = res; tangent = T.zeros_like res };\n              continue k res)\n      | E_buffer { context = ctx; dtype = dt; size_in_elements } ->\n          Some\n            (fun k ->\n              let res = buffer ctx dt [| size_in_elements |] in\n              continue k res)\n      | E_threefry { key; ctr } ->\n          Some\n            (fun k ->\n              let res = threefry key ctr in\n              register res { primal = res; tangent = T.zeros_like res };\n              continue k res)\n      (* Binary Arithmetic *)\n      | E_add { a; b } ->\n          Some\n            (fun k ->\n              let out = add a b in\n              let da = get_dual a in\n              let db = get_dual b in\n              let tan = T.add da.tangent db.tangent in\n              register out { primal = out; tangent = tan };\n              continue k out)\n      | E_sub { a; b } ->\n          Some\n            (fun k ->\n              let out = sub a b in\n              let da = get_dual a in\n              let db = get_dual b in\n              let tan = T.sub da.tangent db.tangent in\n              register out { primal = out; tangent = tan };\n              continue k out)\n      | E_mul { a; b } ->\n          Some\n            (fun k ->\n              let out = mul a b in\n              let da = get_dual a in\n              let db = get_dual b in\n              (* d(a*b) = da*b + a*db *)\n    
          let tan =\n                T.add (T.mul da.tangent db.primal) (T.mul da.primal db.tangent)\n              in\n              register out { primal = out; tangent = tan };\n              continue k out)\n      | E_fdiv { a; b } ->\n          Some\n            (fun k ->\n              let out = div a b in\n              let da = get_dual a in\n              let db = get_dual b in\n              (* d(a/b) = da/b - a*db/b^2 *)\n              let term1 = T.div da.tangent db.primal in\n              let term2 =\n                T.div (T.mul da.primal db.tangent) (T.mul db.primal db.primal)\n              in\n              let tan = T.sub term1 term2 in\n              register out { primal = out; tangent = tan };\n              continue k out)\n      | E_pow { a; b } ->\n          Some\n            (fun k ->\n              let out = pow a b in\n              let da = get_dual a in\n              let db = get_dual b in\n              let term1 =\n                T.mul da.tangent\n                  (Autodiff.deriv_pow_wrt_base da.primal db.primal)\n              in\n              let term2 =\n                T.mul db.tangent (Autodiff.deriv_pow_wrt_exp da.primal out)\n              in\n              let tan = T.add term1 term2 in\n              register out { primal = out; tangent = tan };\n              continue k out)\n      | E_max { a; b } ->\n          Some\n            (fun k ->\n              let out = max a b in\n              let da = get_dual a in\n              let db = get_dual b in\n              let mask_a = T.cast (dtype a) (T.cmpgt a b) in\n              let mask_b = T.sub (T.ones_like mask_a) mask_a in\n              let tan =\n                T.add (T.mul da.tangent mask_a) (T.mul db.tangent mask_b)\n              in\n              register out { primal = out; tangent = tan };\n              continue k out)\n      | E_min { a; b } ->\n          Some\n            (fun k ->\n              let out = min a b in\n              let da = get_dual a in\n  
            let db = get_dual b in\n              let mask_a = T.cast (dtype a) (T.cmplt a b) in\n              let mask_b = T.sub (T.ones_like mask_a) mask_a in\n              let tan =\n                T.add (T.mul da.tangent mask_a) (T.mul db.tangent mask_b)\n              in\n              register out { primal = out; tangent = tan };\n              continue k out)\n      | E_atan2 { a; b } ->\n          Some\n            (fun k ->\n              let out = atan2 a b in\n              let da = get_dual a in\n              let db = get_dual b in\n              let denom =\n                T.add (T.mul da.primal da.primal) (T.mul db.primal db.primal)\n              in\n              let tan =\n                T.add\n                  (T.mul da.tangent (T.div db.primal denom))\n                  (T.mul db.tangent (T.neg (T.div da.primal denom)))\n              in\n              register out { primal = out; tangent = tan };\n              continue k out)\n      (* Unary Arithmetic *)\n      | E_neg { t_in } ->\n          Some\n            (fun k ->\n              let out = neg t_in in\n              let d = get_dual t_in in\n              let tan = T.neg d.tangent in\n              register out { primal = out; tangent = tan };\n              continue k out)\n      | E_sin { t_in } ->\n          Some\n            (fun k ->\n              let out = sin t_in in\n              let d = get_dual t_in in\n              let tan = T.mul d.tangent (Autodiff.deriv_sin d.primal) in\n              register out { primal = out; tangent = tan };\n              continue k out)\n      | E_cos { t_in } ->\n          Some\n            (fun k ->\n              let out = cos t_in in\n              let d = get_dual t_in in\n              (* d/dx cos(x) = -sin(x) *)\n              let tan = T.mul d.tangent (T.neg (T.sin d.primal)) in\n              register out { primal = out; tangent = tan };\n              continue k out)\n      | E_log { t_in } ->\n          Some\n            (fun k 
->\n              let out = log t_in in\n              let d = get_dual t_in in\n              let tan = T.mul d.tangent (T.recip d.primal) in\n              register out { primal = out; tangent = tan };\n              continue k out)\n      | E_exp { t_in } ->\n          Some\n            (fun k ->\n              let out = exp t_in in\n              let d = get_dual t_in in\n              let tan = T.mul d.tangent out in\n              register out { primal = out; tangent = tan };\n              continue k out)\n      | E_sqrt { t_in } ->\n          Some\n            (fun k ->\n              let out = sqrt t_in in\n              let d = get_dual t_in in\n              let tan = T.mul d.tangent (Autodiff.deriv_sqrt out) in\n              register out { primal = out; tangent = tan };\n              continue k out)\n      | E_recip { t_in } ->\n          Some\n            (fun k ->\n              let out = recip t_in in\n              let d = get_dual t_in in\n              let tan = T.mul d.tangent (Autodiff.deriv_recip d.primal) in\n              register out { primal = out; tangent = tan };\n              continue k out)\n      | E_abs { t_in } ->\n          Some\n            (fun k ->\n              let out = abs t_in in\n              let d = get_dual t_in in\n              let tan = T.mul d.tangent (T.sign d.primal) in\n              register out { primal = out; tangent = tan };\n              continue k out)\n      | E_sign { t_in } ->\n          Some\n            (fun k ->\n              let out = sign t_in in\n              register out { primal = out; tangent = T.zeros_like out };\n              continue k out)\n      | E_tan { t_in } ->\n          Some\n            (fun k ->\n              let out = tan t_in in\n              let d = get_dual t_in in\n              let tanv = T.mul d.tangent (Autodiff.deriv_tan d.primal) in\n              register out { primal = out; tangent = tanv };\n              continue k out)\n      | E_asin { t_in } ->\n          
Some\n            (fun k ->\n              let out = asin t_in in\n              let d = get_dual t_in in\n              let tanv = T.mul d.tangent (Autodiff.deriv_asin d.primal) in\n              register out { primal = out; tangent = tanv };\n              continue k out)\n      | E_acos { t_in } ->\n          Some\n            (fun k ->\n              let out = acos t_in in\n              let d = get_dual t_in in\n              let tanv = T.mul d.tangent (Autodiff.deriv_acos d.primal) in\n              register out { primal = out; tangent = tanv };\n              continue k out)\n      | E_atan { t_in } ->\n          Some\n            (fun k ->\n              let out = atan t_in in\n              let d = get_dual t_in in\n              let tanv = T.mul d.tangent (Autodiff.deriv_atan d.primal) in\n              register out { primal = out; tangent = tanv };\n              continue k out)\n      | E_sinh { t_in } ->\n          Some\n            (fun k ->\n              let out = sinh t_in in\n              let d = get_dual t_in in\n              let tanv = T.mul d.tangent (T.cosh d.primal) in\n              register out { primal = out; tangent = tanv };\n              continue k out)\n      | E_cosh { t_in } ->\n          Some\n            (fun k ->\n              let out = cosh t_in in\n              let d = get_dual t_in in\n              let tanv = T.mul d.tangent (T.sinh d.primal) in\n              register out { primal = out; tangent = tanv };\n              continue k out)\n      | E_tanh { t_in } ->\n          Some\n            (fun k ->\n              let out = tanh t_in in\n              let d = get_dual t_in in\n              let one = T.ones_like out in\n              let tanv = T.mul d.tangent (T.sub one (T.mul out out)) in\n              register out { primal = out; tangent = tanv };\n              continue k out)\n      | E_trunc { t_in } ->\n          Some\n            (fun k ->\n              let out = trunc t_in in\n              register out { 
primal = out; tangent = T.zeros_like out };\n              continue k out)\n      | E_ceil { t_in } ->\n          Some\n            (fun k ->\n              let out = ceil t_in in\n              register out { primal = out; tangent = T.zeros_like out };\n              continue k out)\n      | E_floor { t_in } ->\n          Some\n            (fun k ->\n              let out = floor t_in in\n              register out { primal = out; tangent = T.zeros_like out };\n              continue k out)\n      | E_round { t_in } ->\n          Some\n            (fun k ->\n              let out = round t_in in\n              register out { primal = out; tangent = T.zeros_like out };\n              continue k out)\n      | E_erf { t_in } ->\n          Some\n            (fun k ->\n              let out = erf t_in in\n              let d = get_dual t_in in\n              let tanv = T.mul d.tangent (Autodiff.deriv_erf d.primal) in\n              register out { primal = out; tangent = tanv };\n              continue k out)\n      (* Shape Operations *)\n      | E_reshape { t_in; new_shape } ->\n          Some\n            (fun k ->\n              let res = reshape t_in new_shape in\n              let d = get_dual t_in in\n              let tan = reshape d.tangent new_shape in\n              register res { primal = res; tangent = tan };\n              continue k res)\n      | E_permute { t_in; axes } ->\n          Some\n            (fun k ->\n              let res = permute t_in axes in\n              let d = get_dual t_in in\n              let tan = permute d.tangent axes in\n              register res { primal = res; tangent = tan };\n              continue k res)\n      | E_expand { t_in; new_target_shape } ->\n          Some\n            (fun k ->\n              let res = expand t_in new_target_shape in\n              let d = get_dual t_in in\n              let tan = expand d.tangent new_target_shape in\n              register res { primal = res; tangent = tan };\n              
continue k res)\n      | E_shrink { t_in; limits } ->\n          Some\n            (fun k ->\n              let res = shrink t_in limits in\n              let d = get_dual t_in in\n              let tan = shrink d.tangent limits in\n              register res { primal = res; tangent = tan };\n              continue k res)\n      | E_flip { t_in; dims_to_flip } ->\n          Some\n            (fun k ->\n              let res = flip t_in dims_to_flip in\n              let d = get_dual t_in in\n              let tan = flip d.tangent dims_to_flip in\n              register res { primal = res; tangent = tan };\n              continue k res)\n      | E_pad { t_in; padding_config; fill_value } ->\n          Some\n            (fun k ->\n              let res = pad t_in padding_config fill_value in\n              let d = get_dual t_in in\n              let tan =\n                pad d.tangent padding_config (Dtype.zero (dtype t_in))\n              in\n              register res { primal = res; tangent = tan };\n              continue k res)\n      | E_cat { t_list; axis } ->\n          Some\n            (fun k ->\n              let res = cat t_list ~axis in\n              let tangents = List.map (fun t -> (get_dual t).tangent) t_list in\n              let tan = cat tangents ~axis in\n              register res { primal = res; tangent = tan };\n              continue k res)\n      (* Reductions *)\n      | E_reduce_sum { t_in; axes; keepdims } ->\n          Some\n            (fun k ->\n              let out = reduce_sum ~axes ~keepdims t_in in\n              let d = get_dual t_in in\n              let tan = T.sum d.tangent ~axes:(Array.to_list axes) ~keepdims in\n              register out { primal = out; tangent = tan };\n              continue k out)\n      | E_reduce_max { t_in; axes; keepdims } ->\n          Some\n            (fun k ->\n              let out = reduce_max ~axes ~keepdims t_in in\n              let d = get_dual t_in in\n              let shape_in = T.shape 
t_in in\n              let out_bc =\n                if keepdims then T.broadcast_to shape_in out\n                else\n                  let kept =\n                    T.max t_in ~axes:(Array.to_list axes) ~keepdims:true\n                  in\n                  T.broadcast_to shape_in kept\n              in\n              let mask = T.cast (dtype out) (T.equal d.primal out_bc) in\n              let tan =\n                T.sum (T.mul d.tangent mask) ~axes:(Array.to_list axes)\n                  ~keepdims\n              in\n              register out { primal = out; tangent = tan };\n              continue k out)\n      | E_reduce_min { t_in; axes; keepdims } ->\n          Some\n            (fun k ->\n              let out = reduce_min ~axes ~keepdims t_in in\n              let d = get_dual t_in in\n              let shape_in = T.shape t_in in\n              let out_bc =\n                if keepdims then T.broadcast_to shape_in out\n                else\n                  let kept =\n                    T.min t_in ~axes:(Array.to_list axes) ~keepdims:true\n                  in\n                  T.broadcast_to shape_in kept\n              in\n              let mask = T.cast (dtype out) (T.equal d.primal out_bc) in\n              let tan =\n                T.sum (T.mul d.tangent mask) ~axes:(Array.to_list axes)\n                  ~keepdims\n              in\n              register out { primal = out; tangent = tan };\n              continue k out)\n      | E_argmax { t_in; axis; keepdims } ->\n          Some\n            (fun k ->\n              let out = argmax ~axis ~keepdims t_in in\n              continue k out)\n      | E_argmin { t_in; axis; keepdims } ->\n          Some\n            (fun k ->\n              let out = argmin ~axis ~keepdims t_in in\n              continue k out)\n      | E_sort { t_in; axis; descending } ->\n          Some\n            (fun k ->\n              let out = sort ~axis ~descending t_in in\n              continue k out)\n      | 
E_argsort { t_in; axis; descending } ->\n          Some\n            (fun k ->\n              let out = argsort ~axis ~descending t_in in\n              continue k out)\n      (* Matrix Operations *)\n      | E_matmul { a; b } ->\n          Some\n            (fun k ->\n              let out = matmul a b in\n              let da = get_dual a in\n              let db = get_dual b in\n              (* d(A@B) = dA@B + A@dB *)\n              let tan =\n                T.add\n                  (T.matmul da.tangent db.primal)\n                  (T.matmul da.primal db.tangent)\n              in\n              register out { primal = out; tangent = tan };\n              continue k out)\n      (* Selection *)\n      | E_where { condition; if_true; if_false } ->\n          Some\n            (fun k ->\n              let out = where condition if_true if_false in\n              let dt = get_dual if_true in\n              let df = get_dual if_false in\n              let tan = T.where condition dt.tangent df.tangent in\n              register out { primal = out; tangent = tan };\n              continue k out)\n      (* Comparisons (no gradient) *)\n      | E_cmplt { a; b } ->\n          Some\n            (fun k ->\n              let out = cmplt a b in\n              continue k out)\n      | E_cmpne { a; b } ->\n          Some\n            (fun k ->\n              let out = cmpne a b in\n              continue k out)\n      | E_cmpeq { a; b } ->\n          Some\n            (fun k ->\n              let out = cmpeq a b in\n              continue k out)\n      | E_cmple { a; b } ->\n          Some\n            (fun k ->\n              let out = cmple a b in\n              continue k out)\n      | E_xor { a; b } ->\n          Some\n            (fun k ->\n              let out = xor a b in\n              continue k out)\n      | E_or { a; b } ->\n          Some\n            (fun k ->\n              let out = or_ a b in\n              continue k out)\n      | E_and { a; b } ->\n         
 Some\n            (fun k ->\n              let out = and_ a b in\n              continue k out)\n      (* Other *)\n      | E_copy { t_in } ->\n          Some\n            (fun k ->\n              let res = copy t_in in\n              let d = get_dual t_in in\n              let tan = copy d.tangent in\n              register res { primal = res; tangent = tan };\n              continue k res)\n      | E_contiguous { t_in } ->\n          Some\n            (fun k ->\n              let res = contiguous t_in in\n              let d = get_dual t_in in\n              let tan = contiguous d.tangent in\n              register res { primal = res; tangent = tan };\n              continue k res)\n      | E_assign _ ->\n          Some\n            (fun _k ->\n              invalid_arg\n                \"in-place mutation (set_item, set_slice, blit, assign) cannot \\\n                 be used inside jvp — use scatter instead\")\n      | E_cast { t_in; target_dtype } ->\n          Some\n            (fun k ->\n              let res = cast ~dtype:target_dtype t_in in\n              let d = get_dual t_in in\n              let tan = cast ~dtype:target_dtype d.tangent in\n              register res { primal = res; tangent = tan };\n              continue k res)\n      (* Reduce Prod *)\n      | E_reduce_prod { t_in; axes; keepdims } ->\n          Some\n            (fun k ->\n              let out = reduce_prod ~axes ~keepdims t_in in\n              let d = get_dual t_in in\n              let shape_in = T.shape t_in in\n              let out_bc =\n                if keepdims then T.broadcast_to shape_in out\n                else\n                  let kept =\n                    T.prod t_in ~axes:(Array.to_list axes) ~keepdims:true\n                  in\n                  T.broadcast_to shape_in kept\n              in\n              (* Gradient contribution: res / x_i * dx_i, summed over axes *)\n              let contrib = T.mul (T.div out_bc d.primal) d.tangent in\n              let 
tan = T.sum contrib ~axes:(Array.to_list axes) ~keepdims in\n              register out { primal = out; tangent = tan };\n              continue k out)\n      (* Associative Scan *)\n      | E_associative_scan { t_in; axis; op } ->\n          Some\n            (fun k ->\n              let res = associative_scan ~axis ~op t_in in\n              let d = get_dual t_in in\n              let tan =\n                match op with\n                | `Sum -> associative_scan ~axis ~op:`Sum d.tangent\n                | `Prod ->\n                    let ratio = T.div d.tangent d.primal in\n                    let cumsum_ratio = associative_scan ~axis ~op:`Sum ratio in\n                    T.mul res cumsum_ratio\n                | `Max ->\n                    let ndim = Array.length (T.shape res) in\n                    let axis_norm = if axis < 0 then axis + ndim else axis in\n                    let shape = T.shape res in\n                    let dt = dtype t_in in\n                    let min_val = Dtype.min_value dt in\n                    let pad_left =\n                      Array.mapi\n                        (fun i _ -> if i = axis_norm then (1, 0) else (0, 0))\n                        shape\n                    in\n                    let padded = T.pad pad_left min_val res in\n                    let slice_right =\n                      Array.mapi\n                        (fun i dim ->\n                          if i = axis_norm then T.R (0, dim) else T.R (0, dim))\n                        shape\n                    in\n                    let shifted_res =\n                      T.slice (Array.to_list slice_right) padded\n                    in\n                    let active_mask = T.cast dt (T.cmpgt res shifted_res) in\n                    T.mul d.tangent active_mask\n                | `Min ->\n                    let ndim = Array.length (T.shape res) in\n                    let axis_norm = if axis < 0 then axis + ndim else axis in\n                    let shape = 
T.shape res in\n                    let dt = dtype t_in in\n                    let max_val = Dtype.max_value dt in\n                    let pad_left =\n                      Array.mapi\n                        (fun i _ -> if i = axis_norm then (1, 0) else (0, 0))\n                        shape\n                    in\n                    let padded = T.pad pad_left max_val res in\n                    let slice_right =\n                      Array.mapi\n                        (fun i dim ->\n                          if i = axis_norm then T.R (0, dim) else T.R (0, dim))\n                        shape\n                    in\n                    let shifted_res =\n                      T.slice (Array.to_list slice_right) padded\n                    in\n                    let active_mask = T.cast dt (T.cmplt res shifted_res) in\n                    T.mul d.tangent active_mask\n              in\n              register res { primal = res; tangent = tan };\n              continue k res)\n      (* Gather *)\n      | E_gather { data; indices; axis } ->\n          Some\n            (fun k ->\n              let res = gather data indices ~axis in\n              let d = get_dual data in\n              let tan = gather d.tangent indices ~axis in\n              register res { primal = res; tangent = tan };\n              continue k res)\n      (* Scatter *)\n      | E_scatter { data_template; indices; updates; axis } ->\n          Some\n            (fun k ->\n              let res = scatter data_template ~indices ~updates ~axis in\n              let d_template = get_dual data_template in\n              let d_updates = get_dual updates in\n              let mask =\n                scatter\n                  (T.ones_like data_template)\n                  ~indices ~updates:(T.zeros_like updates) ~axis\n              in\n              let tan_template = T.mul d_template.tangent mask in\n              let tan_updates =\n                scatter\n                  (T.zeros_like 
data_template)\n                  ~indices ~updates:d_updates.tangent ~axis\n              in\n              let tan = T.add tan_template tan_updates in\n              register res { primal = res; tangent = tan };\n              continue k res)\n      (* FFT Operations *)\n      | E_fft { t; axes } ->\n          Some\n            (fun k ->\n              let res = fft t ~axes in\n              let d = get_dual t in\n              let tan = fft d.tangent ~axes in\n              register res { primal = res; tangent = tan };\n              continue k res)\n      | E_ifft { t; axes } ->\n          Some\n            (fun k ->\n              let res = ifft t ~axes in\n              let d = get_dual t in\n              let tan = ifft d.tangent ~axes in\n              register res { primal = res; tangent = tan };\n              continue k res)\n      (* Custom differentiation *)\n      | Autodiff.E_ad_mode_query -> Some (fun k -> continue k `JVP)\n      | Autodiff.E_custom_jvp { cj_jvp } ->\n          Some\n            (fun k ->\n              let get_tangent packed =\n                let t : (_, _) t = Obj.obj packed in\n                Obj.repr (get_dual t).tangent\n              in\n              let primal_packed, tangent_packed =\n                Autodiff.without_autodiff (fun () -> cj_jvp get_tangent)\n              in\n              let primal : (_, _) t = Obj.obj primal_packed in\n              (* tangent has the same representation as primal — the user's\n                 jvp_rule returns matching types, but OCaml can't prove it *)\n              let tangent : (_, _) t = Obj.obj tangent_packed in\n              register primal { primal; tangent = Obj.magic tangent };\n              continue k primal_packed)\n      | _ -> None\n  in\n  { retc = Fun.id; exnc = raise; effc }\n\n(* API *)\n\nlet lookup_tangent dual_map result =\n  match Autodiff.Physical_tbl.find dual_map result with\n  | Some (Any_dual d) ->\n      let d = unwrap_dual (dtype result) (Any_dual d) in\n 
     (d.primal, d.tangent)\n  | None -> (result, T.zeros_like result)\n\nlet jvp (type a b c d) (f : (a, b) t -> (c, d) t) (primals : (a, b) t)\n    (tangents : (a, b) t) : (c, d) t * (c, d) t =\n  let dual_map = Autodiff.Physical_tbl.create 16 in\n  Autodiff.Physical_tbl.add dual_map primals\n    (Any_dual { primal = primals; tangent = tangents });\n  let handler = make_handler dual_map in\n  let result = Effect.Deep.match_with f primals handler in\n  lookup_tangent dual_map result\n\nlet jvps (type a b c d) (f : (a, b) t list -> (c, d) t)\n    (primals : (a, b) t list) (tangents : (a, b) t list) : (c, d) t * (c, d) t =\n  let dual_map = Autodiff.Physical_tbl.create 16 in\n  List.iter2\n    (fun p t ->\n      Autodiff.Physical_tbl.add dual_map p\n        (Any_dual { primal = p; tangent = t }))\n    primals tangents;\n  let handler = make_handler dual_map in\n  let result = Effect.Deep.match_with f primals handler in\n  lookup_tangent dual_map result\n\nlet jvp_aux (type a b c d e) (f : (a, b) t -> (c, d) t * e) (primals : (a, b) t)\n    (tangents : (a, b) t) : (c, d) t * (c, d) t * e =\n  let dual_map = Autodiff.Physical_tbl.create 16 in\n  Autodiff.Physical_tbl.add dual_map primals\n    (Any_dual { primal = primals; tangent = tangents });\n  let handler = make_handler dual_map in\n  let result, aux = Effect.Deep.match_with f primals handler in\n  let primal, tangent = lookup_tangent dual_map result in\n  (primal, tangent, aux)\n"
  },
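The `E_log`, `E_exp`, and `E_tanh` handler cases above each compute a primal output and propagate the tangent by the chain rule. A minimal self-contained sketch of the same rules on plain floats (no `Nx`, no effects — just an illustrative dual-number type, not the library's actual representation):

```ocaml
(* Dual numbers carrying (primal, tangent), mirroring the E_* tangent
   rules: log uses tangent / primal, exp uses tangent * out, and tanh
   uses tangent * (1 - out * out). *)
type dual = { primal : float; tangent : float }

let var x = { primal = x; tangent = 1. }

(* E_log: tan = tangent * recip primal *)
let log_d d = { primal = log d.primal; tangent = d.tangent /. d.primal }

(* E_exp: tan = tangent * out *)
let exp_d d =
  let out = exp d.primal in
  { primal = out; tangent = d.tangent *. out }

(* E_tanh: tan = tangent * (1 - out * out) *)
let tanh_d d =
  let out = tanh d.primal in
  { primal = out; tangent = d.tangent *. (1. -. (out *. out)) }

(* d/dx tanh (exp x) at x = 0: primal is tanh 1 (~0.761594),
   tangent is exp 0 * (1 - (tanh 1)^2) (~0.419974). *)
let () =
  let d = tanh_d (exp_d (var 0.)) in
  Printf.printf "%.6f %.6f\n" d.primal d.tangent
```

The handler generalizes this from floats to tensors: `register` stores each result's dual in the physical table so later operations can look their operands' tangents back up.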
  {
    "path": "packages/rune/lib/rune.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Reverse-mode *)\n\nlet grad = Vjp.grad\nlet grads = Vjp.grads\nlet value_and_grad = Vjp.value_and_grad\nlet value_and_grad_aux = Vjp.value_and_grad_aux\nlet value_and_grads = Vjp.value_and_grads\nlet value_and_grads_aux = Vjp.value_and_grads_aux\nlet vjp = Vjp.vjp\nlet vjps = Vjp.vjps\nlet no_grad = Vjp.no_grad\nlet detach = Vjp.detach\n\n(* Forward-mode *)\n\nlet jvp = Jvp.jvp\nlet jvp_aux = Jvp.jvp_aux\nlet jvps = Jvp.jvps\n\n(* Jacobian *)\n\nlet jacfwd = Jacobian.jacfwd\nlet jacrev = Jacobian.jacrev\n\n(* Custom differentiation rules *)\n\nlet custom_vjp = Custom_diff.custom_vjp\nlet custom_vjps = Custom_diff.custom_vjps\nlet custom_jvp = Custom_diff.custom_jvp\nlet custom_jvps = Custom_diff.custom_jvps\n\n(* Gradient checking *)\n\ntype method_ = Finite_diff.method_\n\ntype gradient_check_result = Gradcheck.gradient_check_result = {\n  max_abs_error : float;\n  max_rel_error : float;\n  mean_abs_error : float;\n  mean_rel_error : float;\n  failed_indices : (int array * float * float * float) list;\n  passed : bool;\n  num_checked : int;\n  num_failed : int;\n}\n\nlet finite_diff = Finite_diff.finite_diff\nlet finite_diff_jacobian = Finite_diff.finite_diff_jacobian\nlet check_gradient = Gradcheck.check_gradient\nlet check_gradients = Gradcheck.check_gradients\n\n(* Vmap *)\n\ntype axis_spec = Vmap.axis_spec = Map of int | NoMap\n\ntype 'a in_axes_spec = 'a Vmap.in_axes_spec =\n  | Single of axis_spec\n  | Container of 'a\n\ntype 'a out_axes_spec = 'a Vmap.out_axes_spec =\n  | OutSingle of int option\n  | OutContainer of 'a\n\nlet vmap = Vmap.vmap\nlet vmaps = Vmap.vmaps\n\n(* JIT *)\n\nlet jit ?device f = Jit.trace ?device f\n\ntype jit_traced = Jit.traced = {\n  tensor_graph : 
Tolk_ir.Tensor.t;\n  kernel_graph : Tolk_ir.Tensor.t;\n  rendered_source : string list;\n}\n\nlet trace_graph = Jit.trace_graph\n\n(* Debugging *)\n\nlet debug = Debug.debug\n"
  },
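`rune.ml` only re-exports the transformation modules; the `jit` alias follows the warmup/capture/replay protocol documented in the interface. A hypothetical usage sketch (the `Nx.create`, `Nx.mul`, and `Nx.sum` constructors are assumed from the Nx library; shapes are illustrative):

```ocaml
(* Sketch: jit-compile a scalar-valued function and call it three times,
   matching the documented call protocol. *)
let () =
  let f x = Nx.sum (Nx.mul x x) in
  let fast_f = Rune.jit f in
  let x = Nx.create Nx.float32 [| 3 |] [| 1.; 2.; 3. |] in
  let _y1 = fast_f x in  (* call 1 (warmup): executes eagerly *)
  let _y2 = fast_f x in  (* call 2 (capture): builds and compiles the graph *)
  let _y3 = fast_f x in  (* calls 3+ (replay): runs the compiled schedule *)
  ()
```

Calling `fast_f` afterwards with a different shape or dtype would raise `Invalid_argument`, per the `jit` documentation.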
  {
    "path": "packages/rune/lib/rune.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Functional transformations for {!Nx} tensors.\n\n    Rune provides automatic differentiation (forward and reverse mode),\n    vectorising maps, and gradient checking. It operates by intercepting {!Nx}\n    tensor operations via OCaml 5 effect handlers — no special tensor type is\n    needed.\n\n    {b Terminology.}\n    - {e Primal}: the input value at which a derivative is evaluated.\n    - {e Tangent}: the directional derivative seed (forward mode).\n    - {e Cotangent}: the adjoint seed propagated backward (reverse mode).\n    - {e JVP}: Jacobian-vector product (forward-mode AD).\n    - {e VJP}: vector-Jacobian product (reverse-mode AD). *)\n\n(** {1:reverse Reverse-mode AD}\n\n    Compute gradients of scalar-valued functions via reverse-mode\n    (backpropagation). The function [f] must return a scalar tensor; the\n    gradient has the same shape as the input. *)\n\nval grad : (('a, 'b) Nx.t -> ('c, 'd) Nx.t) -> ('a, 'b) Nx.t -> ('a, 'b) Nx.t\n(** [grad f x] is the gradient of scalar-valued [f] at [x].\n\n    Equivalent to [snd (value_and_grad f x)].\n\n    See also {!grads}, {!value_and_grad}. *)\n\nval grads :\n  (('a, 'b) Nx.t list -> ('c, 'd) Nx.t) ->\n  ('a, 'b) Nx.t list ->\n  ('a, 'b) Nx.t list\n(** [grads f xs] is the list of gradients of scalar-valued [f] with respect to\n    each tensor in [xs]. The {e i}-th element of the result has the same shape\n    as the {e i}-th element of [xs].\n\n    See also {!grad}, {!value_and_grads}. *)\n\nval value_and_grad :\n  (('a, 'b) Nx.t -> ('c, 'd) Nx.t) ->\n  ('a, 'b) Nx.t ->\n  ('c, 'd) Nx.t * ('a, 'b) Nx.t\n(** [value_and_grad f x] is [(f x, grad f x)], computed in a single\n    forward-backward pass.\n\n    See also {!value_and_grad_aux}. 
*)\n\nval value_and_grad_aux :\n  (('a, 'b) Nx.t -> ('c, 'd) Nx.t * 'e) ->\n  ('a, 'b) Nx.t ->\n  ('c, 'd) Nx.t * ('a, 'b) Nx.t * 'e\n(** [value_and_grad_aux f x] is [(y, g, aux)] where [(y, aux) = f x] and [g] is\n    the gradient of [y] with respect to [x]. The auxiliary output [aux] is\n    carried through but not differentiated.\n\n    See also {!value_and_grads_aux}. *)\n\nval value_and_grads :\n  (('a, 'b) Nx.t list -> ('c, 'd) Nx.t) ->\n  ('a, 'b) Nx.t list ->\n  ('c, 'd) Nx.t * ('a, 'b) Nx.t list\n(** [value_and_grads f xs] is [(f xs, grads f xs)], computed in a single\n    forward-backward pass.\n\n    See also {!value_and_grads_aux}. *)\n\nval value_and_grads_aux :\n  (('a, 'b) Nx.t list -> ('c, 'd) Nx.t * 'e) ->\n  ('a, 'b) Nx.t list ->\n  ('c, 'd) Nx.t * ('a, 'b) Nx.t list * 'e\n(** [value_and_grads_aux f xs] is [(y, gs, aux)] where [(y, aux) = f xs] and\n    [gs] is the list of gradients of [y] with respect to each tensor in [xs].\n    The auxiliary output [aux] is carried through but not differentiated.\n\n    See also {!value_and_grad_aux}. *)\n\nval vjp :\n  (('a, 'b) Nx.t -> ('c, 'd) Nx.t) ->\n  ('a, 'b) Nx.t ->\n  ('c, 'd) Nx.t ->\n  ('c, 'd) Nx.t * ('a, 'b) Nx.t\n(** [vjp f x v] is [(y, g)] where [y = f x] and [g = v{^T} J{_f}(x)]\n    (vector-Jacobian product). Unlike {!grad}, [f] need not return a scalar —\n    the cotangent [v] must have the same shape as [y].\n\n    See also {!vjps}. *)\n\nval vjps :\n  (('a, 'b) Nx.t list -> ('c, 'd) Nx.t) ->\n  ('a, 'b) Nx.t list ->\n  ('c, 'd) Nx.t ->\n  ('c, 'd) Nx.t * ('a, 'b) Nx.t list\n(** [vjps f xs v] is like {!vjp} for functions with multiple inputs. Returns\n    [(y, gs)] where each gradient in [gs] corresponds to one input in [xs]. *)\n\n(** {1:forward Forward-mode AD}\n\n    Compute Jacobian-vector products by propagating tangent vectors alongside\n    primal values. Forward mode is efficient when the number of inputs is small\n    relative to the number of outputs. 
*)\n\nval jvp :\n  (('a, 'b) Nx.t -> ('c, 'd) Nx.t) ->\n  ('a, 'b) Nx.t ->\n  ('a, 'b) Nx.t ->\n  ('c, 'd) Nx.t * ('c, 'd) Nx.t\n(** [jvp f x v] is [(y, t)] where [y = f x] and [t = J{_f}(x) v]\n    (Jacobian-vector product). The tangent [v] must have the same shape as [x].\n\n    See also {!jvps}, {!jvp_aux}. *)\n\nval jvp_aux :\n  (('a, 'b) Nx.t -> ('c, 'd) Nx.t * 'e) ->\n  ('a, 'b) Nx.t ->\n  ('a, 'b) Nx.t ->\n  ('c, 'd) Nx.t * ('c, 'd) Nx.t * 'e\n(** [jvp_aux f x v] is like {!jvp} but for functions with auxiliary output.\n    Returns [(y, t, aux)] where [aux] is carried through but not differentiated.\n*)\n\nval jvps :\n  (('a, 'b) Nx.t list -> ('c, 'd) Nx.t) ->\n  ('a, 'b) Nx.t list ->\n  ('a, 'b) Nx.t list ->\n  ('c, 'd) Nx.t * ('c, 'd) Nx.t\n(** [jvps f xs vs] is like {!jvp} for functions with multiple inputs. Each\n    tangent in [vs] must have the same shape as the corresponding primal in\n    [xs]. *)\n\n(** {1:jacobian Jacobian computation} *)\n\nval jacfwd : (Nx.float64_t -> Nx.float64_t) -> Nx.float64_t -> Nx.float64_t\n(** [jacfwd f x] is the [{m} x {n}] Jacobian matrix of [f] at [x], computed\n    column-by-column via forward-mode AD (JVP). [f] maps a 1-D tensor of shape\n    [[n]] to a 1-D tensor of shape [[m]]. Entry [J(i,j)] is\n    {e d(output_i) / d(input_j)}.\n\n    Performs [n] JVP evaluations. Prefer over {!jacrev} when [n <= m]. *)\n\nval jacrev : (Nx.float64_t -> Nx.float64_t) -> Nx.float64_t -> Nx.float64_t\n(** [jacrev f x] is the [{m} x {n}] Jacobian matrix of [f] at [x], computed\n    row-by-row via reverse-mode AD (VJP). [f] maps a 1-D tensor of shape [[n]]\n    to a 1-D tensor of shape [[m]]. Entry [J(i,j)] is\n    {e d(output_i) / d(input_j)}.\n\n    Performs [m] VJP evaluations. Prefer over {!jacfwd} when [m <= n]. *)\n\n(** {1:stop Stopping gradients} *)\n\nval no_grad : (unit -> 'a) -> 'a\n(** [no_grad f] evaluates [f ()] without recording operations for automatic\n    differentiation. 
All tensors produced inside [f] are treated as constants by\n    enclosing gradient computations. *)\n\nval detach : ('a, 'b) Nx.t -> ('a, 'b) Nx.t\n(** [detach x] is a copy of [x] that is treated as a constant with respect to\n    automatic differentiation.\n\n    See also {!no_grad}. *)\n\n(** {1:custom Custom differentiation rules}\n\n    Override automatic differentiation with user-supplied forward and backward\n    (or tangent) rules. Useful for implicit differentiation, surrogate\n    gradients, and other computations where the derivative is algorithmically\n    distinct from the primal.\n\n    Under reverse-mode AD ({!grad}, {!vjp}), the custom backward rule is used\n    instead of tracing through the forward function. Under forward-mode AD\n    ({!jvp}) or outside AD, the forward function is traced normally.\n\n    {b Higher-order derivatives.} The backward function runs outside the inner\n    handler's continuation, so its {!Nx} operations are traced by enclosing AD\n    handlers. This means [grad (fun x -> grad (custom_vjp_fn) x) x] works\n    correctly. *)\n\nval custom_vjp :\n  fwd:(('a, 'b) Nx.t -> ('c, 'd) Nx.t * 'res) ->\n  bwd:('res -> ('c, 'd) Nx.t -> ('a, 'b) Nx.t) ->\n  ('a, 'b) Nx.t ->\n  ('c, 'd) Nx.t\n(** [custom_vjp ~fwd ~bwd x] computes [fwd x] with a custom VJP rule.\n\n    [fwd] returns [(y, residuals)] where [y] is the output and [residuals] is\n    auxiliary data saved for the backward pass (e.g. intermediate values needed\n    by the backward rule). [residuals] is not differentiated.\n\n    [bwd residuals g] receives the output cotangent [g] and returns the input\n    cotangent. It is only called under reverse-mode AD ({!grad}, {!vjp}); under\n    forward-mode AD ({!jvp}) or outside AD, [fwd] is traced normally instead. 
*)\n\nval custom_vjps :\n  fwd:(('a, 'b) Nx.t list -> ('c, 'd) Nx.t * 'res) ->\n  bwd:('res -> ('c, 'd) Nx.t -> ('a, 'b) Nx.t list) ->\n  ('a, 'b) Nx.t list ->\n  ('c, 'd) Nx.t\n(** [custom_vjps ~fwd ~bwd xs] is like {!custom_vjp} for functions with multiple\n    inputs. [bwd] must return a list of the same length as [xs]. *)\n\nval custom_jvp :\n  fwd:(('a, 'b) Nx.t -> ('c, 'd) Nx.t) ->\n  jvp_rule:(('a, 'b) Nx.t -> ('a, 'b) Nx.t -> ('c, 'd) Nx.t * ('c, 'd) Nx.t) ->\n  ('a, 'b) Nx.t ->\n  ('c, 'd) Nx.t\n(** [custom_jvp ~fwd ~jvp_rule x] computes [fwd x] with a custom JVP rule.\n\n    [jvp_rule primal tangent] receives the primal input and its tangent, and\n    returns [(y, dy)] where [y] is the primal output and [dy] is its tangent. It\n    is only called under forward-mode AD ({!jvp}); under reverse-mode AD\n    ({!grad}, {!vjp}) or outside AD, [fwd] is traced normally instead. *)\n\nval custom_jvps :\n  fwd:(('a, 'b) Nx.t list -> ('c, 'd) Nx.t) ->\n  jvp_rule:\n    (('a, 'b) Nx.t list -> ('a, 'b) Nx.t list -> ('c, 'd) Nx.t * ('c, 'd) Nx.t) ->\n  ('a, 'b) Nx.t list ->\n  ('c, 'd) Nx.t\n(** [custom_jvps ~fwd ~jvp_rule xs] is like {!custom_jvp} for functions with\n    multiple inputs. [jvp_rule primals tangents] receives a list of primals and\n    their tangents, and returns [(y, dy)]. *)\n\n(** {1:gradcheck Gradient checking}\n\n    Compare autodiff gradients against finite-difference approximations. Useful\n    for testing custom operations. *)\n\ntype method_ = [ `Central | `Forward | `Backward ]\n(** The type for finite difference methods.\n    - [`Central] — [(f(x+h) - f(x-h)) / 2h]. Most accurate, requires two\n      evaluations per element.\n    - [`Forward] — [(f(x+h) - f(x)) / h].\n    - [`Backward] — [(f(x) - f(x-h)) / h]. 
*)\n\nval finite_diff :\n  ?eps:float ->\n  ?method_:method_ ->\n  (('a, 'b) Nx.t -> ('c, 'd) Nx.t) ->\n  ('a, 'b) Nx.t ->\n  ('a, 'b) Nx.t\n(** [finite_diff f x] is the gradient of scalar-valued [f] at [x] approximated\n    by finite differences.\n\n    [eps] defaults to [1e-4]. [method_] defaults to [`Central]. *)\n\nval finite_diff_jacobian :\n  ?eps:float ->\n  ?method_:method_ ->\n  (('a, 'b) Nx.t -> ('c, 'd) Nx.t) ->\n  ('a, 'b) Nx.t ->\n  ('c, 'd) Nx.t\n(** [finite_diff_jacobian f x] is the Jacobian of [f] at [x] approximated by\n    finite differences.\n\n    [eps] defaults to [1e-4]. [method_] defaults to [`Central]. *)\n\ntype gradient_check_result = {\n  max_abs_error : float;  (** Largest absolute error across all elements. *)\n  max_rel_error : float;  (** Largest relative error across all elements. *)\n  mean_abs_error : float;  (** Mean absolute error. *)\n  mean_rel_error : float;  (** Mean relative error. *)\n  failed_indices : (int array * float * float * float) list;\n      (** [(index, autodiff, finite_diff, abs_error)] for each failed element.\n      *)\n  passed : bool;  (** [true] iff no element exceeded the tolerances. *)\n  num_checked : int;  (** Number of elements checked. *)\n  num_failed : int;  (** Number of elements that exceeded tolerances. *)\n}\n(** The type for gradient check results. *)\n\nval check_gradient :\n  ?eps:float ->\n  ?rtol:float ->\n  ?atol:float ->\n  ?verbose:bool ->\n  ?check_indices:int list option ->\n  ?method_:[ `Central | `Forward | `Backward ] ->\n  ((float, 'a) Nx.t -> ('b, 'c) Nx.t) ->\n  (float, 'a) Nx.t ->\n  [ `Pass of gradient_check_result | `Fail of gradient_check_result ]\n(** [check_gradient f x] compares the autodiff gradient of [f] at [x] against a\n    finite-difference approximation.\n\n    An element passes when [abs_error <= atol] or [rel_error <= rtol].\n\n    - [eps] defaults to [1e-4].\n    - [rtol] defaults to [2e-3].\n    - [atol] defaults to [2e-3].\n    - [verbose] defaults to [false]. 
When [true], prints per-element failures\n      and a summary to standard output.\n    - [check_indices] defaults to [None] (check all elements). When\n      [Some indices], only the listed flat indices are checked.\n    - [method_] defaults to [`Central].\n\n    See also {!check_gradients}. *)\n\nval check_gradients :\n  ?eps:float ->\n  ?rtol:float ->\n  ?atol:float ->\n  ?verbose:bool ->\n  ?method_:[ `Central | `Forward | `Backward ] ->\n  ((float, 'a) Nx.t list -> ('b, 'c) Nx.t) ->\n  (float, 'a) Nx.t list ->\n  [ `Pass of gradient_check_result list | `Fail of gradient_check_result list ]\n(** [check_gradients f xs] is like {!check_gradient} for functions with multiple\n    inputs. Returns one {!gradient_check_result} per input tensor.\n\n    Optional parameters have the same defaults as {!check_gradient}. *)\n\n(** {1:vmap Vectorising map}\n\n    Map a computation over a batch dimension. [vmap] transforms a function that\n    operates on single examples into one that operates on batches, without the\n    user writing explicit batch loops. *)\n\n(** The type for per-input axis specifications. *)\ntype axis_spec = Vmap.axis_spec =\n  | Map of int  (** Map over the axis at this index. *)\n  | NoMap  (** Do not map; broadcast the input as-is. *)\n\n(** The type for input axis specifications. *)\ntype 'a in_axes_spec = 'a Vmap.in_axes_spec =\n  | Single of axis_spec  (** Apply to all inputs. *)\n  | Container of 'a  (** Per-input specifications. *)\n\n(** The type for output axis specifications. *)\ntype 'a out_axes_spec = 'a Vmap.out_axes_spec =\n  | OutSingle of int option\n      (** Stack outputs along this axis ([None] to discard). *)\n  | OutContainer of 'a  (** Per-output specifications. 
*)\n\nval vmap :\n  ?in_axes:'a in_axes_spec ->\n  ?out_axes:'b out_axes_spec ->\n  ?axis_name:string ->\n  ?axis_size:int ->\n  (('c, 'd) Nx.t -> ('e, 'f) Nx.t) ->\n  ('c, 'd) Nx.t ->\n  ('e, 'f) Nx.t\n(** [vmap f x] is a vectorised version of [f] applied to [x].\n\n    - [in_axes] defaults to [Single (Map 0)].\n    - [out_axes] defaults to [OutSingle (Some 0)].\n    - [axis_name] is an optional label for the mapped axis (used in error\n      messages).\n    - [axis_size] overrides the batch size inferred from the input shape.\n      Required when all inputs use {!NoMap}.\n\n    See also {!vmaps}. *)\n\nval vmaps :\n  ?in_axes:Vmap.axis_spec list ->\n  ?out_axes:'b Vmap.out_axes_spec ->\n  ?axis_name:string ->\n  ?axis_size:int ->\n  (('c, 'd) Nx.t list -> ('e, 'f) Nx.t) ->\n  ('c, 'd) Nx.t list ->\n  ('e, 'f) Nx.t\n(** [vmaps f xs] is like {!vmap} for functions with multiple inputs. Each\n    element of [in_axes] corresponds to one input in [xs].\n\n    [in_axes] defaults to [Map 0] for every input. *)\n\n(** {1:jit JIT compilation} *)\n\nval jit :\n  ?device:Tolk.Device.t ->\n  (('a, 'b) Nx.t -> ('c, 'd) Nx.t) ->\n  ('a, 'b) Nx.t ->\n  ('c, 'd) Nx.t\n(** [jit f] returns a JIT-compiled version of [f].\n\n    - Call 1 (warmup): executes eagerly\n    - Call 2 (capture): intercepts tensor operations, builds computation graph,\n      compiles via Tolk's codegen pipeline\n    - Calls 3+ (replay): executes the compiled schedule without recompilation\n\n    Raises [Invalid_argument] if input shapes or dtypes change after capture. *)\n\ntype jit_traced = Jit.traced = {\n  tensor_graph : Tolk_ir.Tensor.t;\n      (** High-level operation graph before scheduling. *)\n  kernel_graph : Tolk_ir.Tensor.t;\n      (** Scheduled graph with [Call] nodes containing kernel ASTs. *)\n  rendered_source : string list;\n      (** Rendered source code for each kernel (one per [Call] node). *)\n}\n(** Result of tracing a function through the JIT capture handler. 
*)\n\nval trace_graph :\n  device:Tolk.Device.t ->\n  (('a, 'b) Nx.t -> ('c, 'd) Nx.t) ->\n  ('a, 'b) Nx.t ->\n  jit_traced\n(** [trace_graph ~device f x] traces [f] applied to [x], capturing the\n    computation graph without executing it.\n\n    Returns the tensor graph, kernel graph, and rendered source for each\n    kernel. Useful for debugging what the JIT produces, inspecting\n    gradient graphs, or comparing against reference implementations. *)\n\n(** {1:debug Debugging} *)\n\nval debug : ('a -> 'b) -> 'a -> 'b\n(** [debug f x] applies [f] to [x] under a tracing handler that prints every\n    tensor operation, its inputs, and its outputs to standard output. *)\n"
  },
  {
    "path": "packages/rune/lib/vjp.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Reverse-mode automatic differentiation (VJP). Runs the forward computation\n   under an effect handler that records a tape, then propagates cotangents\n   backward through the tape as the continuation stack unwinds. *)\n\nopen Nx_core\nopen Nx_effect\nmodule T = Nx\n\n(* Tape types *)\n\ntype any_tensor = Any : ('a, 'b) t -> any_tensor\n\nlet unwrap (type a b) (_ : (a, b) Dtype.t) (Any t) : (a, b) t = Obj.magic t\n\ntype ('a, 'b) t_with_grad = {\n  v : ('a, 'b) t;\n  mutable grad : ('a, 'b) t;\n  id : int;\n}\n\ntype any_twg = Any_twg : ('a, 'b) t_with_grad -> any_twg\n\nlet unwrap_twg (type a b) (_ : (a, b) Dtype.t) (Any_twg twg) :\n    (a, b) t_with_grad =\n  Obj.magic twg\n\nlet twg_id_counter = ref 0\n\nlet fresh_twg_id () =\n  incr twg_id_counter;\n  !twg_id_counter\n\n(* Effect handler *)\n\nlet make_handler tape seed_output =\n  let open Effect.Deep in\n  let get_or_init (type a b) (t : (a, b) t) : (a, b) t_with_grad =\n    match Autodiff.Physical_tbl.find tape t with\n    | Some (Any_twg twg) -> unwrap_twg (dtype t) (Any_twg twg)\n    | None ->\n        let id = fresh_twg_id () in\n        let twg = { v = t; grad = T.zeros_like t; id } in\n        Autodiff.Physical_tbl.add tape t (Any_twg twg);\n        twg\n  in\n\n  let effc : type c. 
c Effect.t -> ((c, _) continuation -> _) option =\n   fun eff ->\n    if not !Autodiff.autodiff_enabled then None\n    else\n      match eff with\n      (* Sources *)\n      | E_const_scalar { context = _; value; dtype = dt } ->\n          Some\n            (fun k ->\n              let res = T.full dt [||] value in\n              let fwd = continue k res in\n              let _ = get_or_init res in\n              fwd)\n      | E_from_host { context = ctx; array } ->\n          Some\n            (fun k ->\n              let res = from_host ctx array in\n              let fwd = continue k res in\n              let _ = get_or_init res in\n              fwd)\n      | E_buffer { context = ctx; dtype = dt; size_in_elements } ->\n          Some\n            (fun k ->\n              let res = buffer ctx dt [| size_in_elements |] in\n              let fwd = continue k res in\n              let _ = get_or_init res in\n              fwd)\n      | E_threefry { key; ctr } ->\n          Some\n            (fun k ->\n              let res = threefry key ctr in\n              let fwd = continue k res in\n              let _ = get_or_init res in\n              fwd)\n      (* Binary Arithmetic *)\n      | E_add { a; b } ->\n          Some\n            (fun k ->\n              let out = add a b in\n              let fwd = continue k out in\n              let twg_a = get_or_init a in\n              let twg_b = get_or_init b in\n              let twg_out = get_or_init out in\n              let g = twg_out.grad in\n              twg_a.grad <-\n                T.add twg_a.grad (Autodiff.unbroadcast_grad g (T.shape a));\n              twg_b.grad <-\n                T.add twg_b.grad (Autodiff.unbroadcast_grad g (T.shape b));\n              fwd)\n      | E_sub { a; b } ->\n          Some\n            (fun k ->\n              let out = sub a b in\n              let fwd = continue k out in\n              let twg_a = get_or_init a in\n              let twg_b = get_or_init b in\n              
let twg_out = get_or_init out in\n              let g = twg_out.grad in\n              twg_a.grad <-\n                T.add twg_a.grad (Autodiff.unbroadcast_grad g (T.shape a));\n              twg_b.grad <-\n                T.add twg_b.grad\n                  (Autodiff.unbroadcast_grad (T.neg g) (T.shape b));\n              fwd)\n      | E_mul { a; b } ->\n          Some\n            (fun k ->\n              let out = mul a b in\n              let fwd = continue k out in\n              let twg_a = get_or_init a in\n              let twg_b = get_or_init b in\n              let twg_out = get_or_init out in\n              let g = twg_out.grad in\n              twg_a.grad <-\n                T.add twg_a.grad\n                  (Autodiff.unbroadcast_grad (T.mul g b) (T.shape a));\n              twg_b.grad <-\n                T.add twg_b.grad\n                  (Autodiff.unbroadcast_grad (T.mul g a) (T.shape b));\n              fwd)\n      | E_fdiv { a; b } ->\n          Some\n            (fun k ->\n              let out = div a b in\n              let fwd = continue k out in\n              let twg_a = get_or_init a in\n              let twg_b = get_or_init b in\n              let twg_out = get_or_init out in\n              let g = twg_out.grad in\n              let ga = T.div g b in\n              let gb = T.mul (T.neg g) (T.div a (T.mul b b)) in\n              twg_a.grad <-\n                T.add twg_a.grad (Autodiff.unbroadcast_grad ga (T.shape a));\n              twg_b.grad <-\n                T.add twg_b.grad (Autodiff.unbroadcast_grad gb (T.shape b));\n              fwd)\n      | E_pow { a; b } ->\n          Some\n            (fun k ->\n              let out = pow a b in\n              let fwd = continue k out in\n              let twg_a = get_or_init a in\n              let twg_b = get_or_init b in\n              let twg_out = get_or_init out in\n              let g = twg_out.grad in\n              let ga = T.mul g (Autodiff.deriv_pow_wrt_base a b) in\n            
  let gb = T.mul g (Autodiff.deriv_pow_wrt_exp a out) in\n              twg_a.grad <-\n                T.add twg_a.grad (Autodiff.unbroadcast_grad ga (T.shape a));\n              twg_b.grad <-\n                T.add twg_b.grad (Autodiff.unbroadcast_grad gb (T.shape b));\n              fwd)\n      | E_max { a; b } ->\n          Some\n            (fun k ->\n              let out = max a b in\n              let fwd = continue k out in\n              let twg_a = get_or_init a in\n              let twg_b = get_or_init b in\n              let twg_out = get_or_init out in\n              let g = twg_out.grad in\n              let mask_a = T.cast (dtype g) (T.cmpgt a b) in\n              let mask_b = T.sub (T.ones_like mask_a) mask_a in\n              let ga = T.mul g mask_a in\n              let gb = T.mul g mask_b in\n              twg_a.grad <-\n                T.add twg_a.grad (Autodiff.unbroadcast_grad ga (T.shape a));\n              twg_b.grad <-\n                T.add twg_b.grad (Autodiff.unbroadcast_grad gb (T.shape b));\n              fwd)\n      | E_min { a; b } ->\n          Some\n            (fun k ->\n              let out = min a b in\n              let fwd = continue k out in\n              let twg_a = get_or_init a in\n              let twg_b = get_or_init b in\n              let twg_out = get_or_init out in\n              let g = twg_out.grad in\n              let mask_a = T.cast (dtype g) (T.cmplt a b) in\n              let mask_b = T.sub (T.ones_like mask_a) mask_a in\n              let ga = T.mul g mask_a in\n              let gb = T.mul g mask_b in\n              twg_a.grad <-\n                T.add twg_a.grad (Autodiff.unbroadcast_grad ga (T.shape a));\n              twg_b.grad <-\n                T.add twg_b.grad (Autodiff.unbroadcast_grad gb (T.shape b));\n              fwd)\n      | E_atan2 { a; b } ->\n          Some\n            (fun k ->\n              let out = atan2 a b in\n              let fwd = continue k out in\n              let twg_a = 
get_or_init a in\n              let twg_b = get_or_init b in\n              let twg_out = get_or_init out in\n              let g = twg_out.grad in\n              let denom = T.add (T.mul a a) (T.mul b b) in\n              let ga = T.mul g (T.div b denom) in\n              let gb = T.mul g (T.neg (T.div a denom)) in\n              twg_a.grad <-\n                T.add twg_a.grad (Autodiff.unbroadcast_grad ga (T.shape a));\n              twg_b.grad <-\n                T.add twg_b.grad (Autodiff.unbroadcast_grad gb (T.shape b));\n              fwd)\n      (* Unary Arithmetic *)\n      | E_neg { t_in } ->\n          Some\n            (fun k ->\n              let out = neg t_in in\n              let fwd = continue k out in\n              let twg_in = get_or_init t_in in\n              let twg_out = get_or_init out in\n              twg_in.grad <- T.add twg_in.grad (T.neg twg_out.grad);\n              fwd)\n      | E_sin { t_in } ->\n          Some\n            (fun k ->\n              let out = sin t_in in\n              let fwd = continue k out in\n              let twg_in = get_or_init t_in in\n              let twg_out = get_or_init out in\n              let g = T.mul twg_out.grad (Autodiff.deriv_sin t_in) in\n              twg_in.grad <- T.add twg_in.grad g;\n              fwd)\n      | E_cos { t_in } ->\n          Some\n            (fun k ->\n              let out = cos t_in in\n              let fwd = continue k out in\n              let twg_in = get_or_init t_in in\n              let twg_out = get_or_init out in\n              let g = T.mul twg_out.grad (T.neg (T.sin t_in)) in\n              twg_in.grad <- T.add twg_in.grad g;\n              fwd)\n      | E_log { t_in } ->\n          Some\n            (fun k ->\n              let out = log t_in in\n              let fwd = continue k out in\n              let twg_in = get_or_init t_in in\n              let twg_out = get_or_init out in\n              let g = T.mul twg_out.grad (T.recip t_in) in\n              
twg_in.grad <- T.add twg_in.grad g;\n              fwd)\n      | E_exp { t_in } ->\n          Some\n            (fun k ->\n              let out = exp t_in in\n              let fwd = continue k out in\n              let twg_in = get_or_init t_in in\n              let twg_out = get_or_init out in\n              let g = T.mul twg_out.grad out in\n              twg_in.grad <- T.add twg_in.grad g;\n              fwd)\n      | E_sqrt { t_in } ->\n          Some\n            (fun k ->\n              let out = sqrt t_in in\n              let fwd = continue k out in\n              let twg_in = get_or_init t_in in\n              let twg_out = get_or_init out in\n              let g = T.mul twg_out.grad (Autodiff.deriv_sqrt out) in\n              twg_in.grad <- T.add twg_in.grad g;\n              fwd)\n      | E_recip { t_in } ->\n          Some\n            (fun k ->\n              let out = recip t_in in\n              let fwd = continue k out in\n              let twg_in = get_or_init t_in in\n              let twg_out = get_or_init out in\n              let g = T.mul twg_out.grad (Autodiff.deriv_recip t_in) in\n              twg_in.grad <- T.add twg_in.grad g;\n              fwd)\n      | E_abs { t_in } ->\n          Some\n            (fun k ->\n              let out = abs t_in in\n              let fwd = continue k out in\n              let twg_in = get_or_init t_in in\n              let twg_out = get_or_init out in\n              let g = T.mul twg_out.grad (T.sign t_in) in\n              twg_in.grad <- T.add twg_in.grad g;\n              fwd)\n      | E_sign { t_in } ->\n          Some\n            (fun k ->\n              let out = sign t_in in\n              let fwd = continue k out in\n              let _ = get_or_init out in\n              fwd)\n      | E_tan { t_in } ->\n          Some\n            (fun k ->\n              let out = tan t_in in\n              let fwd = continue k out in\n              let twg_in = get_or_init t_in in\n              let twg_out = 
get_or_init out in\n              let g = T.mul twg_out.grad (Autodiff.deriv_tan t_in) in\n              twg_in.grad <- T.add twg_in.grad g;\n              fwd)\n      | E_asin { t_in } ->\n          Some\n            (fun k ->\n              let out = asin t_in in\n              let fwd = continue k out in\n              let twg_in = get_or_init t_in in\n              let twg_out = get_or_init out in\n              let g = T.mul twg_out.grad (Autodiff.deriv_asin t_in) in\n              twg_in.grad <- T.add twg_in.grad g;\n              fwd)\n      | E_acos { t_in } ->\n          Some\n            (fun k ->\n              let out = acos t_in in\n              let fwd = continue k out in\n              let twg_in = get_or_init t_in in\n              let twg_out = get_or_init out in\n              let g = T.mul twg_out.grad (Autodiff.deriv_acos t_in) in\n              twg_in.grad <- T.add twg_in.grad g;\n              fwd)\n      | E_atan { t_in } ->\n          Some\n            (fun k ->\n              let out = atan t_in in\n              let fwd = continue k out in\n              let twg_in = get_or_init t_in in\n              let twg_out = get_or_init out in\n              let g = T.mul twg_out.grad (Autodiff.deriv_atan t_in) in\n              twg_in.grad <- T.add twg_in.grad g;\n              fwd)\n      | E_sinh { t_in } ->\n          Some\n            (fun k ->\n              let out = sinh t_in in\n              let fwd = continue k out in\n              let twg_in = get_or_init t_in in\n              let twg_out = get_or_init out in\n              let g = T.mul twg_out.grad (T.cosh t_in) in\n              twg_in.grad <- T.add twg_in.grad g;\n              fwd)\n      | E_cosh { t_in } ->\n          Some\n            (fun k ->\n              let out = cosh t_in in\n              let fwd = continue k out in\n              let twg_in = get_or_init t_in in\n              let twg_out = get_or_init out in\n              let g = T.mul twg_out.grad (T.sinh t_in) 
in\n              twg_in.grad <- T.add twg_in.grad g;\n              fwd)\n      | E_tanh { t_in } ->\n          Some\n            (fun k ->\n              let out = tanh t_in in\n              let fwd = continue k out in\n              let twg_in = get_or_init t_in in\n              let twg_out = get_or_init out in\n              let one = T.ones_like out in\n              let g = T.mul twg_out.grad (T.sub one (T.mul out out)) in\n              twg_in.grad <- T.add twg_in.grad g;\n              fwd)\n      | E_trunc { t_in } ->\n          Some\n            (fun k ->\n              let out = trunc t_in in\n              let fwd = continue k out in\n              let _ = get_or_init out in\n              fwd)\n      | E_ceil { t_in } ->\n          Some\n            (fun k ->\n              let out = ceil t_in in\n              let fwd = continue k out in\n              let _ = get_or_init out in\n              fwd)\n      | E_floor { t_in } ->\n          Some\n            (fun k ->\n              let out = floor t_in in\n              let fwd = continue k out in\n              let _ = get_or_init out in\n              fwd)\n      | E_round { t_in } ->\n          Some\n            (fun k ->\n              let out = round t_in in\n              let fwd = continue k out in\n              let _ = get_or_init out in\n              fwd)\n      | E_erf { t_in } ->\n          Some\n            (fun k ->\n              let out = erf t_in in\n              let fwd = continue k out in\n              let twg_in = get_or_init t_in in\n              let twg_out = get_or_init out in\n              let g = T.mul twg_out.grad (Autodiff.deriv_erf t_in) in\n              twg_in.grad <- T.add twg_in.grad g;\n              fwd)\n      (* Shape Operations *)\n      | E_reshape { t_in; new_shape } ->\n          Some\n            (fun k ->\n              let res = reshape t_in new_shape in\n              let fwd = continue k res in\n              let twg_in = get_or_init t_in in\n          
    let twg_res = get_or_init res in\n              let g = T.reshape (T.shape t_in) twg_res.grad in\n              twg_in.grad <- T.add twg_in.grad g;\n              fwd)\n      | E_permute { t_in; axes } ->\n          Some\n            (fun k ->\n              let res = permute t_in axes in\n              let fwd = continue k res in\n              let twg_in = get_or_init t_in in\n              let twg_res = get_or_init res in\n              let inv = Array.make (Array.length axes) 0 in\n              Array.iteri (fun i d -> inv.(d) <- i) axes;\n              let g = T.transpose twg_res.grad ~axes:(Array.to_list inv) in\n              twg_in.grad <- T.add twg_in.grad g;\n              fwd)\n      | E_expand { t_in; new_target_shape } ->\n          Some\n            (fun k ->\n              let res = expand t_in new_target_shape in\n              let fwd = continue k res in\n              let twg_in = get_or_init t_in in\n              let twg_res = get_or_init res in\n              let g = Autodiff.unbroadcast_grad twg_res.grad (T.shape t_in) in\n              twg_in.grad <- T.add twg_in.grad g;\n              fwd)\n      | E_shrink { t_in; limits } ->\n          Some\n            (fun k ->\n              let res = shrink t_in limits in\n              let fwd = continue k res in\n              let twg_in = get_or_init t_in in\n              let twg_res = get_or_init res in\n              let pads =\n                Array.mapi\n                  (fun i (start, _) ->\n                    let total = (T.shape t_in).(i) in\n                    let len = (T.shape res).(i) in\n                    (start, total - start - len))\n                  limits\n              in\n              let g = pad twg_res.grad pads (Dtype.zero (dtype t_in)) in\n              twg_in.grad <- T.add twg_in.grad g;\n              fwd)\n      | E_flip { t_in; dims_to_flip } ->\n          Some\n            (fun k ->\n              let res = flip t_in dims_to_flip in\n              let fwd = 
continue k res in\n              let twg_in = get_or_init t_in in\n              let twg_res = get_or_init res in\n              let g = flip twg_res.grad dims_to_flip in\n              twg_in.grad <- T.add twg_in.grad g;\n              fwd)\n      | E_pad { t_in; padding_config; fill_value = _ } ->\n          Some\n            (fun k ->\n              let res = pad t_in padding_config (Dtype.zero (dtype t_in)) in\n              let fwd = continue k res in\n              let twg_in = get_or_init t_in in\n              let twg_res = get_or_init res in\n              let limits =\n                Array.mapi\n                  (fun i (pre, _) ->\n                    let dim = (T.shape t_in).(i) in\n                    (pre, pre + dim))\n                  padding_config\n              in\n              let g = T.shrink limits twg_res.grad in\n              twg_in.grad <- T.add twg_in.grad g;\n              fwd)\n      | E_cat { t_list; axis } ->\n          Some\n            (fun k ->\n              let res = cat t_list ~axis in\n              let fwd = continue k res in\n              let twg_res = get_or_init res in\n              let g = twg_res.grad in\n              let g_shape = T.shape g in\n              let off = ref 0 in\n              List.iter\n                (fun t ->\n                  let twg = get_or_init t in\n                  let len = (T.shape t).(axis) in\n                  let limits =\n                    Array.init (Array.length g_shape) (fun i ->\n                        if i = axis then (!off, !off + len) else (0, g_shape.(i)))\n                  in\n                  off := !off + len;\n                  twg.grad <- T.add twg.grad (T.shrink limits g))\n                t_list;\n              fwd)\n      (* Reductions *)\n      | E_reduce_sum { t_in; axes; keepdims } ->\n          Some\n            (fun k ->\n              let out = reduce_sum ~axes ~keepdims t_in in\n              let fwd = continue k out in\n              let twg_in = 
get_or_init t_in in\n              let twg_out = get_or_init out in\n              let g =\n                if keepdims then twg_out.grad\n                else\n                  let kept_shape =\n                    T.shape\n                      (T.sum t_in ~axes:(Array.to_list axes) ~keepdims:true)\n                  in\n                  T.reshape kept_shape twg_out.grad\n              in\n              let g_bc = T.broadcast_to (T.shape t_in) g in\n              twg_in.grad <- T.add twg_in.grad g_bc;\n              fwd)\n      | E_reduce_max { t_in; axes; keepdims } ->\n          Some\n            (fun k ->\n              let out = reduce_max ~axes ~keepdims t_in in\n              let fwd = continue k out in\n              let twg_in = get_or_init t_in in\n              let twg_out = get_or_init out in\n              let shape_in = T.shape t_in in\n              let out_bc =\n                if keepdims then T.broadcast_to shape_in out\n                else\n                  let kept =\n                    T.max t_in ~axes:(Array.to_list axes) ~keepdims:true\n                  in\n                  T.broadcast_to shape_in kept\n              in\n              let g_bc =\n                if keepdims then T.broadcast_to shape_in twg_out.grad\n                else\n                  let kept_shape =\n                    T.shape\n                      (T.max t_in ~axes:(Array.to_list axes) ~keepdims:true)\n                  in\n                  T.broadcast_to shape_in (T.reshape kept_shape twg_out.grad)\n              in\n              let mask = T.cast (dtype out) (T.equal t_in out_bc) in\n              twg_in.grad <- T.add twg_in.grad (T.mul g_bc mask);\n              fwd)\n      | E_reduce_min { t_in; axes; keepdims } ->\n          Some\n            (fun k ->\n              let out = reduce_min ~axes ~keepdims t_in in\n              let fwd = continue k out in\n              let twg_in = get_or_init t_in in\n              let twg_out = get_or_init out in\n    
          let shape_in = T.shape t_in in\n              let out_bc =\n                if keepdims then T.broadcast_to shape_in out\n                else\n                  let kept =\n                    T.min t_in ~axes:(Array.to_list axes) ~keepdims:true\n                  in\n                  T.broadcast_to shape_in kept\n              in\n              let g_bc =\n                if keepdims then T.broadcast_to shape_in twg_out.grad\n                else\n                  let kept_shape =\n                    T.shape\n                      (T.min t_in ~axes:(Array.to_list axes) ~keepdims:true)\n                  in\n                  T.broadcast_to shape_in (T.reshape kept_shape twg_out.grad)\n              in\n              let mask = T.cast (dtype out) (T.equal t_in out_bc) in\n              twg_in.grad <- T.add twg_in.grad (T.mul g_bc mask);\n              fwd)\n      | E_argmax { t_in; axis; keepdims } ->\n          Some\n            (fun k ->\n              let out = argmax ~axis ~keepdims t_in in\n              let fwd = continue k out in\n              let _ = get_or_init out in\n              fwd)\n      | E_argmin { t_in; axis; keepdims } ->\n          Some\n            (fun k ->\n              let out = argmin ~axis ~keepdims t_in in\n              let fwd = continue k out in\n              let _ = get_or_init out in\n              fwd)\n      | E_sort { t_in; axis; descending } ->\n          Some\n            (fun k ->\n              let out = sort ~axis ~descending t_in in\n              let fwd = continue k out in\n              let _ = get_or_init out in\n              fwd)\n      | E_argsort { t_in; axis; descending } ->\n          Some\n            (fun k ->\n              let out = argsort ~axis ~descending t_in in\n              let fwd = continue k out in\n              let _ = get_or_init out in\n              fwd)\n      (* Matrix Operations *)\n      | E_matmul { a; b } ->\n          Some\n            (fun k ->\n              let out = 
matmul a b in\n              let fwd = continue k out in\n              let twg_a = get_or_init a in\n              let twg_b = get_or_init b in\n              let twg_out = get_or_init out in\n              let g = twg_out.grad in\n              let a_shape = T.shape a in\n              let b_shape = T.shape b in\n              let g_shape = T.shape g in\n              let a_ndim = Array.length a_shape in\n              let b_ndim = Array.length b_shape in\n              let g_ndim = Array.length g_shape in\n              let transpose_last2 t =\n                let nd = Array.length (T.shape t) in\n                if nd < 2 then t\n                else\n                  let axes =\n                    List.init nd (fun i ->\n                        if i = nd - 2 then -1 else if i = nd - 1 then -2 else i)\n                  in\n                  T.transpose ~axes t\n              in\n              let grad_a =\n                if a_ndim = 2 && b_ndim >= 3 then\n                  let b_t = transpose_last2 b in\n                  let g_bt = T.matmul g b_t in\n                  let batch_dims = List.init (g_ndim - 2) Fun.id in\n                  if batch_dims = [] then g_bt\n                  else T.sum g_bt ~axes:batch_dims ~keepdims:false\n                else if a_ndim >= 3 && b_ndim >= 3 then\n                  T.matmul g (transpose_last2 b)\n                else T.matmul g (T.transpose b)\n              in\n              let grad_b =\n                if b_ndim = 2 && a_ndim >= 3 then\n                  let at_g = T.matmul (transpose_last2 a) g in\n                  let batch_dims = List.init (g_ndim - 2) Fun.id in\n                  if batch_dims = [] then at_g\n                  else T.sum at_g ~axes:batch_dims ~keepdims:false\n                else if a_ndim = 2 && b_ndim >= 3 then\n                  let a_t = T.transpose a in\n                  let batch_shape = Array.sub g_shape 0 (g_ndim - 2) in\n                  let a_t_shape = T.shape a_t in\n            
      let target_shape = Array.concat [ batch_shape; a_t_shape ] in\n                  let a_t_expanded =\n                    T.broadcast_to target_shape\n                      (T.reshape (Array.concat [ [| 1 |]; a_t_shape ]) a_t)\n                  in\n                  T.matmul a_t_expanded g\n                else if a_ndim >= 3 && b_ndim >= 3 then\n                  T.matmul (transpose_last2 a) g\n                else T.matmul (T.transpose a) g\n              in\n              twg_a.grad <- T.add twg_a.grad grad_a;\n              twg_b.grad <- T.add twg_b.grad grad_b;\n              fwd)\n      (* Selection *)\n      | E_where { condition; if_true; if_false } ->\n          Some\n            (fun k ->\n              let out = where condition if_true if_false in\n              let fwd = continue k out in\n              let twg_t = get_or_init if_true in\n              let twg_f = get_or_init if_false in\n              let twg_out = get_or_init out in\n              let g = twg_out.grad in\n              let mask = T.cast (dtype g) condition in\n              let inv_mask = T.sub (T.ones_like mask) mask in\n              twg_t.grad <- T.add twg_t.grad (T.mul g mask);\n              twg_f.grad <- T.add twg_f.grad (T.mul g inv_mask);\n              fwd)\n      (* Comparisons (no gradient) *)\n      | E_cmplt { a; b } ->\n          Some\n            (fun k ->\n              let out = cmplt a b in\n              continue k out)\n      | E_cmpne { a; b } ->\n          Some\n            (fun k ->\n              let out = cmpne a b in\n              continue k out)\n      | E_cmpeq { a; b } ->\n          Some\n            (fun k ->\n              let out = cmpeq a b in\n              continue k out)\n      | E_cmple { a; b } ->\n          Some\n            (fun k ->\n              let out = cmple a b in\n              continue k out)\n      | E_xor { a; b } ->\n          Some\n            (fun k ->\n              let out = xor a b in\n              continue k out)\n      
| E_or { a; b } ->\n          Some\n            (fun k ->\n              let out = or_ a b in\n              continue k out)\n      | E_and { a; b } ->\n          Some\n            (fun k ->\n              let out = and_ a b in\n              continue k out)\n      (* Other *)\n      | E_copy { t_in } ->\n          Some\n            (fun k ->\n              let res = copy t_in in\n              let fwd = continue k res in\n              let twg_in = get_or_init t_in in\n              let twg_res = get_or_init res in\n              twg_in.grad <- T.add twg_in.grad twg_res.grad;\n              fwd)\n      | E_contiguous { t_in } ->\n          Some\n            (fun k ->\n              let res = contiguous t_in in\n              let fwd = continue k res in\n              let twg_in = get_or_init t_in in\n              let twg_res = get_or_init res in\n              twg_in.grad <- T.add twg_in.grad twg_res.grad;\n              fwd)\n      | E_assign _ ->\n          Some\n            (fun _k ->\n              invalid_arg\n                \"in-place mutation (set_item, set_slice, blit, assign) cannot \\\n                 be used inside grad/value_and_grad — use scatter instead\")\n      | E_cast { t_in; target_dtype } ->\n          Some\n            (fun k ->\n              let res = cast ~dtype:target_dtype t_in in\n              let fwd = continue k res in\n              let twg_in = get_or_init t_in in\n              let twg_res = get_or_init res in\n              let g = T.cast (dtype t_in) twg_res.grad in\n              twg_in.grad <- T.add twg_in.grad g;\n              fwd)\n      (* Reduce Prod *)\n      | E_reduce_prod { t_in; axes; keepdims } ->\n          Some\n            (fun k ->\n              let out = reduce_prod ~axes ~keepdims t_in in\n              let fwd = continue k out in\n              let twg_in = get_or_init t_in in\n              let twg_out = get_or_init out in\n              let shape_in = T.shape t_in in\n              let g_prepared =\n     
           if keepdims then twg_out.grad\n                else\n                  let kept_shape =\n                    T.shape\n                      (T.prod t_in ~axes:(Array.to_list axes) ~keepdims:true)\n                  in\n                  T.reshape kept_shape twg_out.grad\n              in\n              let g_bc = T.broadcast_to shape_in g_prepared in\n              let out_prepared =\n                if keepdims then out\n                else\n                  let kept_shape =\n                    T.shape\n                      (T.prod t_in ~axes:(Array.to_list axes) ~keepdims:true)\n                  in\n                  T.reshape kept_shape out\n              in\n              let out_bc = T.broadcast_to shape_in out_prepared in\n              let grad_contrib = T.mul g_bc (T.div out_bc t_in) in\n              twg_in.grad <- T.add twg_in.grad grad_contrib;\n              fwd)\n      (* Associative Scan *)\n      | E_associative_scan { t_in; axis; op } ->\n          Some\n            (fun k ->\n              let res = associative_scan ~axis ~op t_in in\n              let fwd = continue k res in\n              let twg_in = get_or_init t_in in\n              let twg_res = get_or_init res in\n              let g = twg_res.grad in\n              let shape_in = T.shape t_in in\n              let axis_norm =\n                let rank = Array.length shape_in in\n                if axis < 0 then axis + rank else axis\n              in\n              let grad_contrib =\n                match op with\n                | `Sum ->\n                    let flipped = T.flip g ~axes:[ axis_norm ] in\n                    let scanned = T.cumsum ~axis:axis_norm flipped in\n                    T.flip scanned ~axes:[ axis_norm ]\n                | `Prod ->\n                    let prefix_exclusive axis tensor =\n                      let shape = T.shape tensor in\n                      let pad_config =\n                        Array.mapi\n                          (fun i _ 
-> if i = axis then (1, 0) else (0, 0))\n                          shape\n                      in\n                      let one = Dtype.one (T.dtype tensor) in\n                      let padded = T.pad pad_config one tensor in\n                      let cumprod_padded = T.cumprod ~axis padded in\n                      let slice_specs =\n                        (* The padded axis has dim + 1 entries, so taking\n                           [0, dim) on every axis drops the final cumulative\n                           product, leaving the exclusive prefix. *)\n                        Array.map (fun dim -> T.R (0, dim)) shape\n                      in\n                      T.slice (Array.to_list slice_specs) cumprod_padded\n                    in\n                    let suffix_exclusive axis tensor =\n                      let shape = T.shape tensor in\n                      let one = Dtype.one (T.dtype tensor) in\n                      let flipped = T.flip tensor ~axes:[ axis ] in\n                      let flipped_cumprod = T.cumprod ~axis flipped in\n                      let suffix_inclusive =\n                        T.flip flipped_cumprod ~axes:[ axis ]\n                      in\n                      let pad_config =\n                        Array.mapi\n                          (fun i _ -> if i = axis then (0, 1) else (0, 0))\n                          shape\n                      in\n                      let padded = T.pad pad_config one suffix_inclusive in\n                      let slice_specs =\n                        Array.mapi\n                          (fun i dim ->\n                            if i = axis then T.R (1, dim + 1) else T.R (0, dim))\n                          shape\n                      in\n                      T.slice (Array.to_list slice_specs) padded\n                    in\n                    let divide_no_nan num denom =\n                      let zero_tensor = T.zeros_like denom in\n                      let zero_mask = T.equal denom zero_tensor in\n                      let safe_denom =\n                        T.where 
zero_mask (T.ones_like denom) denom\n                      in\n                      let base = T.div num safe_denom in\n                      T.where zero_mask (T.zeros_like base) base\n                    in\n                    let reverse_cumsum tensor axis =\n                      let flipped = T.flip tensor ~axes:[ axis ] in\n                      let scanned = T.cumsum ~axis flipped in\n                      T.flip scanned ~axes:[ axis ]\n                    in\n                    let prefix = prefix_exclusive axis_norm t_in in\n                    let suffix = suffix_exclusive axis_norm t_in in\n                    let h = divide_no_nan g suffix in\n                    let tail_sum = T.sub (reverse_cumsum h axis_norm) h in\n                    let inner = T.add g (T.mul suffix tail_sum) in\n                    T.mul prefix inner\n                | `Max ->\n                    let shape = T.shape res in\n                    let dt = dtype t_in in\n                    let min_val = Dtype.min_value dt in\n                    let pad_left =\n                      Array.mapi\n                        (fun i _ -> if i = axis_norm then (1, 0) else (0, 0))\n                        shape\n                    in\n                    let padded = T.pad pad_left min_val res in\n                    let slice_right =\n                      (* Taking [0, dim) on every axis of the left-padded\n                         running max drops its last entry along axis_norm,\n                         i.e. shifts the cummax right by one. *)\n                      Array.map (fun dim -> T.R (0, dim)) shape\n                    in\n                    let shifted_res =\n                      T.slice (Array.to_list slice_right) padded\n                    in\n                    let active_mask = T.cast dt (T.cmpgt res shifted_res) in\n                    T.mul g active_mask\n                | `Min ->\n                    let shape = T.shape res in\n                    let dt = dtype t_in in\n                    let max_val = Dtype.max_value dt in\n                    let pad_left 
=\n                      Array.mapi\n                        (fun i _ -> if i = axis_norm then (1, 0) else (0, 0))\n                        shape\n                    in\n                    let padded = T.pad pad_left max_val res in\n                    let slice_right =\n                      (* Taking [0, dim) on every axis of the left-padded\n                         running min drops its last entry along axis_norm,\n                         i.e. shifts the cummin right by one. *)\n                      Array.map (fun dim -> T.R (0, dim)) shape\n                    in\n                    let shifted_res =\n                      T.slice (Array.to_list slice_right) padded\n                    in\n                    let active_mask = T.cast dt (T.cmplt res shifted_res) in\n                    T.mul g active_mask\n              in\n              twg_in.grad <- T.add twg_in.grad grad_contrib;\n              fwd)\n      (* Gather *)\n      | E_gather { data; indices; axis } ->\n          Some\n            (fun k ->\n              let res = gather data indices ~axis in\n              let fwd = continue k res in\n              let twg_data = get_or_init data in\n              let _ = get_or_init indices in\n              let twg_res = get_or_init res in\n              let g = twg_res.grad in\n              let zeros_data = T.zeros_like data in\n              let scattered_grads =\n                scatter ~mode:`Add zeros_data ~indices ~updates:g ~axis\n              in\n              twg_data.grad <- T.add twg_data.grad scattered_grads;\n              fwd)\n      (* Scatter *)\n      | E_scatter { data_template; indices; updates; axis } ->\n          Some\n            (fun k ->\n              let res = scatter data_template ~indices ~updates ~axis in\n              let fwd = continue k res in\n              let twg_dt = get_or_init data_template in\n              let twg_upd = get_or_init updates in\n              let _ = get_or_init indices in\n              let twg_res = get_or_init res in\n              let g = twg_res.grad in\n              let grad_upd = 
gather g indices ~axis in\n              twg_upd.grad <- T.add twg_upd.grad grad_upd;\n              let mask =\n                scatter\n                  (T.ones_like data_template)\n                  ~indices ~updates:(T.zeros_like updates) ~axis\n              in\n              let grad_dt = T.mul g mask in\n              twg_dt.grad <- T.add twg_dt.grad grad_dt;\n              fwd)\n      (* Unfold *)\n      | E_unfold { t_in; kernel_size; stride; dilation; padding } ->\n          Some\n            (fun k ->\n              let res = unfold t_in ~kernel_size ~stride ~dilation ~padding in\n              let fwd = continue k res in\n              let twg_in = get_or_init t_in in\n              let twg_res = get_or_init res in\n              let g = twg_res.grad in\n              let input_shape = T.shape t_in in\n              let num_spatial_dims = Array.length kernel_size in\n              let output_size =\n                Array.sub input_shape\n                  (Array.length input_shape - num_spatial_dims)\n                  num_spatial_dims\n              in\n              let grad_contrib =\n                fold g ~output_size ~kernel_size ~stride ~dilation ~padding\n              in\n              twg_in.grad <- T.add twg_in.grad grad_contrib;\n              fwd)\n      (* Fold *)\n      | E_fold { t_in; output_size; kernel_size; stride; dilation; padding } ->\n          Some\n            (fun k ->\n              let res =\n                fold t_in ~output_size ~kernel_size ~stride ~dilation ~padding\n              in\n              let fwd = continue k res in\n              let twg_in = get_or_init t_in in\n              let twg_res = get_or_init res in\n              let g = twg_res.grad in\n              let grad_contrib =\n                unfold g ~kernel_size ~stride ~dilation ~padding\n              in\n              twg_in.grad <- T.add twg_in.grad grad_contrib;\n              fwd)\n      (* Cholesky *)\n      | E_cholesky { t_in; upper } ->\n     
     Some\n            (fun k ->\n              let l = cholesky ~upper t_in in\n              let fwd = continue k l in\n              let twg_in = get_or_init t_in in\n              let twg_l = get_or_init l in\n              let dl = twg_l.grad in\n              let l_lower, dl_lower =\n                if upper then (T.transpose l, T.transpose dl) else (l, dl)\n              in\n              let c = T.matmul (T.transpose l_lower) dl_lower in\n              let p =\n                let tril_c = T.tril c in\n                let diag_c = T.diagonal c in\n                let two = T.add (T.ones_like diag_c) (T.ones_like diag_c) in\n                let half_diag = T.div diag_c two in\n                T.sub tril_c (T.diag half_diag)\n              in\n              let z =\n                triangular_solve ~upper:false ~transpose:true ~unit_diag:false\n                  l_lower p\n              in\n              let y =\n                triangular_solve ~upper:false ~transpose:true ~unit_diag:false\n                  l_lower (T.transpose z)\n              in\n              let s = T.transpose y in\n              let s_t = T.transpose s in\n              let sum = T.add s s_t in\n              let diag_s = T.diagonal s in\n              let diag_mat = T.diag diag_s in\n              let da_sym = T.sub sum diag_mat in\n              let da = T.tril da_sym in\n              twg_in.grad <- T.add twg_in.grad da;\n              fwd)\n      (* Triangular solve *)\n      | E_triangular_solve { a; b; upper; transpose; unit_diag } ->\n          Some\n            (fun k ->\n              let res = triangular_solve ~upper ~transpose ~unit_diag a b in\n              let fwd = continue k res in\n              let twg_a = get_or_init a in\n              let twg_b = get_or_init b in\n              let twg_res = get_or_init res in\n              let g = twg_res.grad in\n              let grad_b =\n                if transpose then\n                  triangular_solve ~upper 
~transpose:false ~unit_diag a g\n                else triangular_solve ~upper ~transpose:true ~unit_diag a g\n              in\n              twg_b.grad <- T.add twg_b.grad grad_b;\n              let res_2d, grad_b_2d =\n                let g_ndim = Array.length (T.shape g) in\n                if g_ndim = 1 then\n                  (T.expand_dims [ -1 ] res, T.expand_dims [ -1 ] grad_b)\n                else (res, grad_b)\n              in\n              let grad_a_full =\n                if transpose then\n                  T.neg (T.matmul res_2d (T.transpose grad_b_2d))\n                else T.neg (T.matmul grad_b_2d (T.transpose res_2d))\n              in\n              let grad_a =\n                if upper then T.triu grad_a_full else T.tril grad_a_full\n              in\n              twg_a.grad <- T.add twg_a.grad grad_a;\n              fwd)\n      (* QR *)\n      | E_qr { t_in; reduced } ->\n          Some\n            (fun k ->\n              let q, r = qr ~reduced t_in in\n              let fwd = continue k (q, r) in\n              let twg_in = get_or_init t_in in\n              let twg_q = get_or_init q in\n              let twg_r = get_or_init r in\n              let gq = twg_q.grad in\n              let gr_full = twg_r.grad in\n              let gr =\n                let rt = T.transpose gr_full in\n                T.transpose (T.tril rt)\n              in\n              let m =\n                let term1 = T.matmul r (T.transpose gr) in\n                let term2 = T.matmul (T.transpose gq) q in\n                T.sub term1 term2\n              in\n              let lower_strict = T.tril ~k:(-1) m in\n              let diag_m = T.contiguous (T.diagonal m) in\n              let diag_mat = T.diag diag_m in\n              let copyltu =\n                T.add (T.add lower_strict (T.transpose lower_strict)) diag_mat\n              in\n              let rhs = T.add gq (T.matmul q copyltu) in\n              let da_t =\n                triangular_solve 
~upper:true ~transpose:false ~unit_diag:false r\n                  (T.transpose rhs)\n              in\n              let da = T.transpose da_t in\n              twg_in.grad <- T.add twg_in.grad da;\n              fwd)\n      (* FFT Operations *)\n      | E_fft { t; axes } ->\n          Some\n            (fun k ->\n              let res = fft t ~axes in\n              let fwd = continue k res in\n              let twg_in = get_or_init t in\n              let twg_res = get_or_init res in\n              let g = twg_res.grad in\n              let grad_contrib = ifft g ~axes in\n              twg_in.grad <- T.add twg_in.grad grad_contrib;\n              fwd)\n      | E_ifft { t; axes } ->\n          Some\n            (fun k ->\n              let res = ifft t ~axes in\n              let fwd = continue k res in\n              let twg_in = get_or_init t in\n              let twg_res = get_or_init res in\n              let g = twg_res.grad in\n              let grad_contrib = fft g ~axes in\n              twg_in.grad <- T.add twg_in.grad grad_contrib;\n              fwd)\n      (* Custom differentiation *)\n      | Autodiff.E_ad_mode_query -> Some (fun k -> continue k `VJP)\n      | Autodiff.E_custom_vjp { cv_fwd; cv_bwd } ->\n          Some\n            (fun k ->\n              let output_packed = Autodiff.without_autodiff cv_fwd in\n              let result = continue k output_packed in\n              let get_grad packed =\n                let t : (_, _) t = Obj.obj packed in\n                Obj.repr (get_or_init t).grad\n              in\n              let acc_grad inp_packed dg_packed =\n                let t : (_, _) t = Obj.obj inp_packed in\n                let twg = get_or_init t in\n                twg.grad <- T.add twg.grad (Obj.magic dg_packed)\n              in\n              cv_bwd get_grad acc_grad;\n              result)\n      | _ -> None\n  in\n  {\n    retc =\n      (fun final_result ->\n        let twg_final = get_or_init final_result in\n        
twg_final.grad <- seed_output final_result;\n        final_result);\n    exnc = raise;\n    effc;\n  }\n\n(* Helpers *)\n\nlet lookup_grad tape x =\n  match Autodiff.Physical_tbl.find tape x with\n  | Some (Any_twg twg) -> (unwrap_twg (dtype x) (Any_twg twg)).grad\n  | None -> T.zeros_like x\n\nlet lookup_grads tape xs = List.map (lookup_grad tape) xs\n\n(* API *)\n\nlet vjp (type a b c d) (f : (a, b) t -> (c, d) t) (x : (a, b) t)\n    (cotangent : (c, d) t) : (c, d) t * (a, b) t =\n  let tape = Autodiff.Physical_tbl.create 32 in\n  let handler = make_handler tape (fun _ -> cotangent) in\n  let y = Effect.Deep.match_with f x handler in\n  (y, lookup_grad tape x)\n\nlet vjps (type a b c d) (f : (a, b) t list -> (c, d) t) (xs : (a, b) t list)\n    (cotangent : (c, d) t) : (c, d) t * (a, b) t list =\n  let tape = Autodiff.Physical_tbl.create 32 in\n  let handler = make_handler tape (fun _ -> cotangent) in\n  let y = Effect.Deep.match_with f xs handler in\n  (y, lookup_grads tape xs)\n\nlet grad (type a b c d) (f : (a, b) t -> (c, d) t) (x : (a, b) t) : (a, b) t =\n  let tape = Autodiff.Physical_tbl.create 32 in\n  let handler = make_handler tape T.ones_like in\n  let _ = Effect.Deep.match_with f x handler in\n  lookup_grad tape x\n\nlet grads (type a b c d) (f : (a, b) t list -> (c, d) t) (xs : (a, b) t list) :\n    (a, b) t list =\n  let tape = Autodiff.Physical_tbl.create 32 in\n  let handler = make_handler tape T.ones_like in\n  let _ = Effect.Deep.match_with f xs handler in\n  lookup_grads tape xs\n\nlet value_and_grad (type a b c d) (f : (a, b) t -> (c, d) t) (x : (a, b) t) :\n    (c, d) t * (a, b) t =\n  let tape = Autodiff.Physical_tbl.create 32 in\n  let handler = make_handler tape T.ones_like in\n  let y = Effect.Deep.match_with f x handler in\n  (y, lookup_grad tape x)\n\nlet value_and_grad_aux (type a b c d e) (f : (a, b) t -> (c, d) t * e)\n    (x : (a, b) t) : (c, d) t * (a, b) t * e =\n  let tape = Autodiff.Physical_tbl.create 32 in\n  let aux = ref None 
in\n  let f' x =\n    let y, a = f x in\n    aux := Some a;\n    y\n  in\n  let handler = make_handler tape T.ones_like in\n  let y = Effect.Deep.match_with f' x handler in\n  let aux_value =\n    match !aux with\n    | Some a -> a\n    | None -> failwith \"value_and_grad_aux: objective did not produce output\"\n  in\n  (y, lookup_grad tape x, aux_value)\n\nlet value_and_grads (type a b c d) (f : (a, b) t list -> (c, d) t)\n    (xs : (a, b) t list) : (c, d) t * (a, b) t list =\n  let tape = Autodiff.Physical_tbl.create 32 in\n  let handler = make_handler tape T.ones_like in\n  let y = Effect.Deep.match_with f xs handler in\n  (y, lookup_grads tape xs)\n\nlet value_and_grads_aux (type a b c d e) (f : (a, b) t list -> (c, d) t * e)\n    (xs : (a, b) t list) : (c, d) t * (a, b) t list * e =\n  let tape = Autodiff.Physical_tbl.create 32 in\n  let aux = ref None in\n  let f' xs =\n    let y, a = f xs in\n    aux := Some a;\n    y\n  in\n  let handler = make_handler tape T.ones_like in\n  let y = Effect.Deep.match_with f' xs handler in\n  let aux_value =\n    match !aux with\n    | Some a -> a\n    | None -> failwith \"value_and_grads_aux: objective did not produce output\"\n  in\n  (y, lookup_grads tape xs, aux_value)\n\nlet detach t = Autodiff.without_autodiff (fun () -> T.copy t)\nlet no_grad f = Autodiff.without_autodiff f\n"
  },
  {
    "path": "packages/rune/lib/vmap.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Nx_effect\nopen Nx_core\nmodule T = Nx\n\n(* Type to represent mapping specification for a single axis *)\ntype axis_spec =\n  | Map of int (* Map over this axis index *)\n  | NoMap (* Don't map this axis *)\n\n(* Type to represent container mapping specifications *)\ntype 'a in_axes_spec = Single of axis_spec | Container of 'a\n\n(* Type to represent output axes specification *)\ntype 'a out_axes_spec = OutSingle of int option | OutContainer of 'a\n\n(* Helper to extract mapped axis from in_axes specification *)\nlet extract_axis_spec = function\n  | Single spec -> spec\n  | Container _ -> failwith \"vmap: container in_axes not yet supported\"\n\n(* Helper to extract output axis from out_axes specification *)\nlet extract_out_axis_spec = function\n  | OutSingle spec -> spec\n  | OutContainer _ -> failwith \"vmap: container out_axes not yet supported\"\n\n(* ───── Utility Functions for Batch Level Management ───── *)\n\nlet insert_at (arr : 'a array) (pos : int) (x : 'a) : 'a array =\n  let n = Array.length arr in\n  if pos < 0 || pos > n then\n    failwith\n      (Printf.sprintf \"insert_at: invalid position %d for array of length %d\"\n         pos n);\n  Array.concat [ Array.sub arr 0 pos; [| x |]; Array.sub arr pos (n - pos) ]\n\n(* Like insert_at, but if pos > length, left-pad with pad_value up to pos before\n   inserting. 
*)\nlet insert_at_pad (arr : int array) ~(pad_value : int) (pos : int) (x : int) :\n    int array =\n  let n = Array.length arr in\n  if pos < 0 then\n    failwith\n      (Printf.sprintf\n         \"insert_at_pad: invalid position %d for array of length %d\" pos n)\n  else if pos <= n then insert_at arr pos x\n  else\n    let pad_len = pos - n in\n    let pad = Array.make pad_len pad_value in\n    Array.concat [ arr; pad; [| x |] ]\n\nlet remove_at (arr : 'a array) (pos : int) : 'a array =\n  let n = Array.length arr in\n  if pos < 0 || pos >= n then\n    failwith\n      (Printf.sprintf \"remove_at: invalid position %d for array of length %d\"\n         pos n);\n  Array.concat [ Array.sub arr 0 pos; Array.sub arr (pos + 1) (n - pos - 1) ]\n\n(* Map logical (unbatched) axes -> physical axes given a batch dimension *)\nlet phys_axis ~bdim (i : int) = if i >= bdim then i + 1 else i\n\n(* Helper to create axis permutation for moving axis from -> to *)\nlet move_axis_perm ~from ~to_ ndim =\n  let perm = Array.init ndim (fun i -> i) in\n  if from = to_ then perm\n  else if from < to_ then (\n    (* Shift elements between from and to_ left *)\n    for i = from to to_ - 1 do\n      perm.(i) <- i + 1\n    done;\n    perm.(to_) <- from;\n    perm)\n  else (\n    (* from > to_: shift elements between to_ and from right *)\n    for i = to_ + 1 to from do\n      perm.(i) <- i - 1\n    done;\n    perm.(to_) <- from;\n    perm)\n\n(* Helper to move an axis to the front or back of a tensor *)\nlet move_axis (tensor : ('a, 'b) t) ~from_axis ~to_axis : ('a, 'b) t =\n  let shape = T.shape tensor in\n  let ndim = Array.length shape in\n  let from_axis = if from_axis < 0 then ndim + from_axis else from_axis in\n  let to_axis = if to_axis < 0 then ndim + to_axis + 1 else to_axis in\n\n  if from_axis = to_axis then tensor\n  else\n    let axes = Array.init ndim (fun i -> i) in\n    (* Remove from_axis from its current position *)\n    let temp_axes =\n      Array.concat\n        [\n      
    Array.sub axes 0 from_axis;\n          Array.sub axes (from_axis + 1) (ndim - from_axis - 1);\n        ]\n    in\n    (* Insert at to_axis position *)\n    let new_axes =\n      if to_axis = 0 then Array.concat [ [| from_axis |]; temp_axes ]\n      else if to_axis >= ndim then Array.concat [ temp_axes; [| from_axis |] ]\n      else\n        Array.concat\n          [\n            Array.sub temp_axes 0 to_axis;\n            [| from_axis |];\n            Array.sub temp_axes to_axis (Array.length temp_axes - to_axis);\n          ]\n    in\n    T.transpose tensor ~axes:(Array.to_list new_axes)\n\n(* Helper to add a batch dimension to a tensor at a specific position *)\nlet _add_batch_dim_at (tensor : ('a, 'b) t) ~batch_pos ~size : ('a, 'b) t =\n  let shape = T.shape tensor in\n  let new_shape = insert_at shape batch_pos size in\n  let expanded = T.expand_dims [ batch_pos ] tensor in\n  T.broadcast_to new_shape expanded\n\n(* Custom table keyed on physical equality (assq), so distinct tensors with\n   structurally equal contents are kept separate *)\nmodule PhysicalTbl = struct\n  type level = int\n  type t = (Obj.t * (level, int option) Hashtbl.t) list ref\n\n  let create () : t = ref []\n\n  let ensure_map (tbl : t) key =\n    let k = Obj.repr key in\n    match List.assq_opt k !tbl with\n    | Some m -> m\n    | None ->\n        let m = Hashtbl.create 4 in\n        tbl := (k, m) :: !tbl;\n        m\n\n  let set_bdim (tbl : t) key ~level ~bdim =\n    let m = ensure_map tbl key in\n    Hashtbl.replace m level bdim\n\n  let get_bdim (tbl : t) key ~level : int option =\n    match List.assq_opt (Obj.repr key) !tbl with\n    | None -> None\n    | Some m -> ( try Hashtbl.find m level with Not_found -> None)\n\n  let has_level (tbl : t) key level = Option.is_some (get_bdim tbl key ~level)\n\n  (* Get all batch dimensions for a tensor across all levels *)\n  let get_all_bdims (tbl : t) key : (level * int) list =\n    match List.assq_opt (Obj.repr key) !tbl with\n    | None -> []\n    | Some m ->\n        Hashtbl.fold\n          (fun level bdim_opt acc ->\n            match bdim_opt with\n            | None -> acc\n            | Some bdim -> (level, bdim) :: acc)\n          m []\n        |> List.sort (fun (l1, _) (l2, _) -> compare l1 l2)\n\n  (* Copy all batch dimensions from one table to another *)\n  let copy_to (src : t) (dst : t) =\n    List.iter\n      (fun (key_repr, src_map) ->\n        let dst_map =\n          match List.assq_opt key_repr !dst with\n          | Some m -> m\n          | None ->\n              let m = Hashtbl.create 4 in\n              dst := (key_repr, m) :: !dst;\n              m\n        in\n        Hashtbl.iter\n          (fun level bdim -> Hashtbl.replace dst_map level bdim)\n          src_map)\n      !src\n\n  (* Clear all batch dimensions at a specific level *)\n  let clear_level (tbl : t) level =\n    List.iter (fun (_, map) -> Hashtbl.remove map level) !tbl\nend\n\n(* ───── Vmap Environment (Dynamic Scope) ───── *)\ntype env = {\n  level : int;\n  shared : PhysicalTbl.t;\n  batch_sizes : (int, int) Hashtbl.t;\n}\n\nlet current_env : env option ref = ref None\nlet current_batch_level : int ref = ref 0\n\nlet with_env (env : env) (f : unit -> 'a) : 'a =\n  let prev_env = !current_env in\n  current_env := Some env;\n  current_batch_level := env.level;\n  Fun.protect\n    ~finally:(fun () ->\n      current_env := prev_env;\n      current_batch_level := match prev_env with Some e -> e.level | None -> 0)\n    f\n\nlet get_env () : env =\n  match !current_env with\n  | Some e -> e\n  | None ->\n      {\n        level = -1;\n        shared = PhysicalTbl.create ();\n        batch_sizes = Hashtbl.create 8;\n      }\n\nlet make_vmap_handler ~env ~axis_size ~batched_tensors out_axis axis_name =\n  let open Effect.Deep in\n  (* Store axis_name for potential use in collective operations *)\n  let _ = axis_name in\n  (* Currently unused, but available for future collective ops *)\n\n  (* Suspension flag: let shape-manipulation ops bubble to outer 
handlers (AD)\n     while we manage batch metadata. *)\n  let suspended = ref false in\n  let with_suspended f =\n    suspended := true;\n    Fun.protect ~finally:(fun () -> suspended := false) f\n  in\n\n  (* Get the batch dimension for a tensor at this level *)\n  let get_bdim tensor =\n    (* Check the shared batch state *)\n    PhysicalTbl.get_bdim env.shared tensor ~level:env.level\n  in\n\n  (* Set the batch dimension for a tensor at this level *)\n  let set_bdim tensor bdim =\n    (* Update both local and shared state *)\n    PhysicalTbl.set_bdim batched_tensors tensor ~level:env.level ~bdim;\n    PhysicalTbl.set_bdim env.shared tensor ~level:env.level ~bdim\n  in\n\n  (* Check if a tensor is batched at THIS level *)\n  let _is_batched tensor = Option.is_some (get_bdim tensor) in\n\n  (* Helper to get physical shape (backend view) of a tensor *)\n  let phys_shape_of : type a b. (a, b) t -> int array =\n   fun t -> View.shape (Nx_effect.view t)\n  in\n\n  (* Derive present batch prefix length by matching leading physical dims\n     against known batch sizes for levels 0..env.level. Robust even if bdim\n     metadata is partially missing. Assumes tensors are canonicalized. *)\n  let prefix_len_by_batch_sizes t =\n    let s = phys_shape_of t in\n    let n = Array.length s in\n    let pos = ref 0 in\n    for lv = 0 to env.level do\n      let sz = try Hashtbl.find env.batch_sizes lv with Not_found -> 1 in\n      if !pos < n && s.(!pos) = sz then incr pos\n    done;\n    !pos\n  in\n\n  let phys_shrink : type a b. (a, b) t -> (int * int) array -> (a, b) t =\n   fun t limits -> Nx_effect.shrink t limits\n  in\n\n  (* Effectful shape ops under suspension so AD can track duals *)\n  let phys_reshape : type a b. (a, b) t -> int array -> (a, b) t =\n   fun t new_shape -> with_suspended (fun () -> reshape t new_shape)\n  in\n\n  let phys_expand : type a b. 
(a, b) t -> int array -> (a, b) t =\n   fun t new_shape -> with_suspended (fun () -> expand t new_shape)\n  in\n\n  let phys_permute : type a b. (a, b) t -> int array -> (a, b) t =\n   fun t axes -> with_suspended (fun () -> permute t axes)\n  in\n\n  (* Debug helpers *)\n  let pp_shape (a : int array) : string =\n    let items =\n      a |> Array.to_list |> List.map string_of_int |> String.concat \";\"\n    in\n    \"[\" ^ items ^ \"]\"\n  in\n  let dprintf fmt = Printf.eprintf (\"[vmap:l%d] \" ^^ fmt ^^ \"\\n%!\") env.level in\n\n  (* Propagate per-level bdim positions through shape transforms *)\n  let copy_bdims_insert ~src ~dst ~insert_pos =\n    PhysicalTbl.get_all_bdims env.shared src\n    |> List.iter (fun (lv, pos) ->\n        let new_pos = if pos >= insert_pos then pos + 1 else pos in\n        PhysicalTbl.set_bdim env.shared dst ~level:lv ~bdim:(Some new_pos))\n  in\n\n  let copy_bdims_same ~src ~dst =\n    PhysicalTbl.get_all_bdims env.shared src\n    |> List.iter (fun (lv, pos) ->\n        PhysicalTbl.set_bdim env.shared dst ~level:lv ~bdim:(Some pos))\n  in\n\n  (* Broadcast a canonicalized tensor (batch dims at front) to a target physical\n     shape anchored after the batch prefix. *)\n  let broadcast_to_canonical : type a b. 
(a, b) t -> int array -> (a, b) t =\n   fun t target_phys ->\n    let s = phys_shape_of t in\n    dprintf \"btc: s=%s target=%s\" (pp_shape s) (pp_shape target_phys);\n    (* Derive batch prefix length by matching sizes to known batch sizes *)\n    let nbd = prefix_len_by_batch_sizes t in\n    let s_len = Array.length s in\n    let t_len = Array.length target_phys in\n    if nbd > t_len then failwith \"vmap: target rank smaller than batch prefix\";\n    (* Insert singleton logical dims after batch prefix to match target logical\n       rank *)\n    let s_logical = s_len - nbd in\n    let t_logical = t_len - nbd in\n    let t' =\n      if s_logical < t_logical then (\n        let insert_count = t_logical - s_logical in\n        let inserted = Array.make (s_len + insert_count) 0 in\n        Array.blit s 0 inserted 0 nbd;\n        for i = 0 to insert_count - 1 do\n          inserted.(nbd + i) <- 1\n        done;\n        Array.blit s nbd inserted (nbd + insert_count) (s_len - nbd);\n        dprintf \"btc: insert %d ones after nbd=%d -> %s\" insert_count nbd\n          (pp_shape inserted);\n        let t1 = phys_reshape t inserted in\n        copy_bdims_insert ~src:t ~dst:t1 ~insert_pos:nbd;\n        t1)\n      else t\n    in\n    (* Now expand any size-1 logical dims to match target logical dims; ensure\n       batch prefix matches target prefix *)\n    let s2 = phys_shape_of t' in\n    (* Validate/normalize batch prefix: expand singletons in prefix if needed *)\n    let s2' = Array.copy s2 in\n    for i = 0 to nbd - 1 do\n      let cur = if i < Array.length s2 then s2.(i) else 1 in\n      let tgt = target_phys.(i) in\n      if cur = tgt || cur = 1 then s2'.(i) <- tgt else s2'.(i) <- cur\n    done;\n    (* For logical dims, ensure either equal or 1; set to target *)\n    for i = nbd to t_len - 1 do\n      let cur = if i < Array.length s2 then s2.(i) else 1 in\n      let tgt = target_phys.(i) in\n      if cur = tgt || cur = 1 then s2'.(i) <- tgt\n      else if tgt = 1 
then () (* fine, keep cur *)\n      else failwith \"vmap: incompatible logical broadcast\"\n    done;\n    if Array.length s2' <> Array.length s2 || Array.exists2 ( <> ) s2' s2 then (\n      dprintf \"btc: expand from %s to %s\" (pp_shape s2) (pp_shape s2');\n      let t2 = phys_expand t' s2' in\n      copy_bdims_same ~src:t' ~dst:t2;\n      t2)\n    else t'\n  in\n  let copy_bdims_permute ~src ~dst ~perm =\n    let n = Array.length perm in\n    let inv = Array.make n 0 in\n    for i = 0 to n - 1 do\n      inv.(perm.(i)) <- i\n    done;\n    PhysicalTbl.get_all_bdims env.shared src\n    |> List.iter (fun (lv, pos) ->\n        let new_pos = if pos >= 0 && pos < n then inv.(pos) else pos in\n        PhysicalTbl.set_bdim env.shared dst ~level:lv ~bdim:(Some new_pos))\n  in\n\n  (* Removed helpers no longer needed after robust prefix handling in\n     reshape/expand *)\n  let align_to p tensor =\n    match get_bdim tensor with\n    | None ->\n        (* If the unmarked tensor already has the batch at position [p] with the\n           correct size, just record it. Otherwise, insert a singleton at [p]\n           (padding with 1s if needed) and expand to [axis_size]. 
*)\n        let phys = phys_shape_of tensor in\n        let n = Array.length phys in\n        if p < n && phys.(p) = axis_size then (\n          PhysicalTbl.set_bdim env.shared tensor ~level:env.level ~bdim:(Some p);\n          tensor)\n        else\n          let inserted =\n            if p <= n then insert_at phys p 1\n            else insert_at_pad phys ~pad_value:1 p 1\n          in\n          let t1 = phys_reshape tensor inserted in\n          copy_bdims_insert ~src:tensor ~dst:t1 ~insert_pos:p;\n          let target = Array.copy inserted in\n          target.(p) <- axis_size;\n          let t2 = phys_expand t1 target in\n          copy_bdims_same ~src:t1 ~dst:t2;\n          PhysicalTbl.set_bdim env.shared t2 ~level:env.level ~bdim:(Some p);\n          t2\n    | Some q when q = p -> tensor\n    | Some q ->\n        (* Move batch dimension from q to p *)\n        let ndim = Array.length (phys_shape_of tensor) in\n        let perm = move_axis_perm ~from:q ~to_:p ndim in\n        let t' = phys_permute tensor perm in\n        PhysicalTbl.set_bdim env.shared t' ~level:env.level ~bdim:(Some p);\n        t'\n  in\n\n  (* Ensure [t] has all outer batch dims (levels < env.level) present in [like].\n     Missing dims are inserted and broadcast to match [like]'s physical\n     shape. *)\n  let add_missing_outer_bdims ~like t =\n    let like_bdims =\n      PhysicalTbl.get_all_bdims env.shared like\n      |> List.filter (fun (lv, _) -> lv < env.level)\n    in\n    if like_bdims = [] then t\n    else\n      let t_missing =\n        like_bdims\n        |> List.filter (fun (lv, _) ->\n            not (PhysicalTbl.has_level env.shared t lv))\n        |> List.sort (fun (_, a) (_, b) -> compare b a)\n      in\n      if t_missing = [] then t\n      else\n        let t_ref = ref t in\n        List.iter\n          (fun (lv, _pos) ->\n            (* Insert a singleton dim at the front physically by reshaping to\n               [1; ... 
old_shape] *)\n            let phys = phys_shape_of !t_ref in\n            let inserted = Array.append [| 1 |] phys in\n            let t1 = phys_reshape !t_ref inserted in\n            (* Broadcast that new leading dim to the batch size for level\n               [lv] *)\n            let batch_sz =\n              try Hashtbl.find env.batch_sizes lv with Not_found -> 1\n            in\n            let target = Array.copy inserted in\n            target.(0) <- batch_sz;\n            let t2 = phys_expand t1 target in\n            (* Record that [t2] is now batched at level [lv] at [pos]. Preserve\n               current-level bdim if it existed on input. *)\n            PhysicalTbl.set_bdim env.shared t2 ~level:lv ~bdim:(Some 0);\n            (match get_bdim t with\n            | Some cp ->\n                PhysicalTbl.set_bdim env.shared t2 ~level:env.level\n                  ~bdim:(Some cp)\n            | None -> ());\n            t_ref := t2)\n          t_missing;\n        !t_ref\n  in\n\n  let unify_outer_bdims a b =\n    let a' = add_missing_outer_bdims ~like:b a in\n    let b' = add_missing_outer_bdims ~like:a b in\n    (a', b')\n  in\n\n  (* Note: broadcasting to physical shapes is not needed when canonicalizing\n     batch dims and delegating logical broadcasting to the frontend. *)\n\n  (* Move all batch dims 0..env.level to the front in level order for [t]. 
*)\n  let canonicalize_batch_positions t =\n    (* Ensure all OUTER levels 0..env.level-1 are present; don't insert current\n       level *)\n    let t =\n      let t_ref = ref t in\n      for lv = env.level - 1 downto 0 do\n        if lv >= 0 && not (PhysicalTbl.has_level env.shared !t_ref lv) then (\n          let phys = phys_shape_of !t_ref in\n          let inserted = Array.append [| 1 |] phys in\n          let t1 = phys_reshape !t_ref inserted in\n          copy_bdims_insert ~src:!t_ref ~dst:t1 ~insert_pos:0;\n          let batch_sz =\n            try Hashtbl.find env.batch_sizes lv with Not_found -> 1\n          in\n          let target = Array.copy inserted in\n          target.(0) <- batch_sz;\n          let t2 = phys_expand t1 target in\n          copy_bdims_same ~src:t1 ~dst:t2;\n          PhysicalTbl.set_bdim env.shared t2 ~level:lv ~bdim:(Some 0);\n          (match PhysicalTbl.get_bdim env.shared !t_ref ~level:env.level with\n          | Some cp ->\n              PhysicalTbl.set_bdim env.shared t2 ~level:env.level\n                ~bdim:(Some cp)\n          | None -> ());\n          t_ref := t2)\n      done;\n      !t_ref\n    in\n    (* Build permutation to move PRESENT batch dims to the front in level\n       order *)\n    let phys = phys_shape_of t in\n    let r = Array.length phys in\n    let present_levels =\n      let acc = ref [] in\n      for lv = 0 to env.level do\n        match PhysicalTbl.get_bdim env.shared t ~level:lv with\n        | Some p -> acc := !acc @ [ (lv, p) ]\n        | None -> ()\n      done;\n      !acc\n    in\n    let batch_positions = List.map snd present_levels in\n    let is_batch = Array.make r false in\n    List.iter\n      (fun p -> if p >= 0 && p < r then is_batch.(p) <- true)\n      batch_positions;\n    let non_batch_positions =\n      let acc = ref [] in\n      for i = 0 to r - 1 do\n        if not is_batch.(i) then acc := !acc @ [ i ]\n      done;\n      !acc\n    in\n    let axes = Array.of_list (batch_positions @ 
non_batch_positions) in\n    let t' = phys_permute t axes in\n    (* Update bdim mapping: assign present levels to front in order *)\n    List.iteri\n      (fun i (lv, _pos) ->\n        PhysicalTbl.set_bdim env.shared t' ~level:lv ~bdim:(Some i))\n      present_levels;\n    t'\n  in\n\n  {\n    retc =\n      (fun result ->\n        (* Handle output axis specification *)\n        match out_axis with\n        | None -> (\n            (* JAX semantics: out_axes=None means the output is not batched.\n               Take the first element along THIS level's batch axis *)\n            match get_bdim result with\n            | None -> result\n            | Some p ->\n                dprintf \"retc(None): shrink along p=%d shape=%s\" p\n                  (pp_shape (phys_shape_of result));\n                let phys = phys_shape_of result in\n                let shrink_spec =\n                  Array.mapi (fun i d -> if i = p then (0, 1) else (0, d)) phys\n                in\n                let r' = phys_shrink result shrink_spec in\n                (* Remove current level mapping and shift others after p *)\n                PhysicalTbl.set_bdim env.shared r' ~level:env.level ~bdim:None;\n                PhysicalTbl.get_all_bdims env.shared result\n                |> List.iter (fun (lv, pos) ->\n                    if lv <> env.level then\n                      let new_pos = if pos > p then pos - 1 else pos in\n                      PhysicalTbl.set_bdim env.shared r' ~level:lv\n                        ~bdim:(Some new_pos));\n                r')\n        | Some out_pos -> (\n            (* Move batch dimension to specified position *)\n            match get_bdim result with\n            | None -> result\n            | Some p when p = out_pos -> result\n            | Some p ->\n                dprintf \"retc(Some %d): move from p=%d shape=%s\" out_pos p\n                  (pp_shape (phys_shape_of result));\n                let ndim = Array.length (phys_shape_of result) in\n   
             let perm = move_axis_perm ~from:p ~to_:out_pos ndim in\n                let r' = phys_permute result perm in\n                copy_bdims_permute ~src:result ~dst:r' ~perm;\n                r'));\n    exnc = raise;\n    effc =\n      (fun (type c) (eff : c Effect.t) ->\n        if !suspended then None\n        else\n          match eff with\n          (* Collective: psum over current batch level *)\n          | E_psum { t_in } ->\n              Some\n                (fun (k : (c, _) continuation) ->\n                  match get_bdim t_in with\n                  | None ->\n                      let result = copy t_in in\n                      continue k result\n                  | Some p ->\n                      let result =\n                        reduce_sum ~axes:[| p |] ~keepdims:false t_in\n                      in\n                      (* Update bdim mappings: current level removed; others\n                         after p shift left *)\n                      PhysicalTbl.set_bdim env.shared result ~level:env.level\n                        ~bdim:None;\n                      PhysicalTbl.get_all_bdims env.shared t_in\n                      |> List.iter (fun (lv, pos) ->\n                          if lv <> env.level then\n                            let new_pos = if pos > p then pos - 1 else pos in\n                            PhysicalTbl.set_bdim env.shared result ~level:lv\n                              ~bdim:(Some new_pos));\n                      continue k result)\n          (* CRITICAL: Intercept view to return unbatched view *)\n          | E_view tensor ->\n              Some\n                (fun (k : (c, _) continuation) ->\n                  (* Get the actual view from the backend *)\n                  let actual_view = Nx_effect.view tensor in\n\n                  (* Collect ALL batch dims from outermost (0) to current\n                     level *)\n                  let batch_dims_to_remove =\n                    let acc = ref [] in\n   
                 for lv = 0 to env.level do\n                      match\n                        PhysicalTbl.get_bdim env.shared tensor ~level:lv\n                      with\n                      | Some bdim -> acc := (lv, bdim) :: !acc\n                      | None -> ()\n                    done;\n                    (* Sort by physical position desc so removals are stable *)\n                    List.sort (fun (_, a) (_, b) -> compare b a) !acc\n                  in\n\n                  if batch_dims_to_remove = [] then continue k actual_view\n                  else\n                    let shape = View.shape actual_view in\n                    (* Remove batch dims from the symbolic shape directly *)\n                    let unbatched_shape =\n                      let arr = ref shape in\n                      List.iter\n                        (fun (_, pos) ->\n                          if pos >= 0 && pos < Array.length !arr then\n                            arr := remove_at !arr pos)\n                        batch_dims_to_remove;\n                      !arr\n                    in\n                    (* Preserve strides and offset if available *)\n                    let unbatched_view =\n                      match View.strides_opt actual_view with\n                      | None -> View.create unbatched_shape\n                      | Some strides ->\n                          let unbatched_strides =\n                            let s = ref strides in\n                            List.iter\n                              (fun (_, pos) ->\n                                if pos >= 0 && pos < Array.length !s then\n                                  s := remove_at !s pos)\n                              batch_dims_to_remove;\n                            !s\n                          in\n                          let offset = View.offset actual_view in\n                          View.create unbatched_shape ~strides:unbatched_strides\n                            
~offset\n                    in\n                    continue k unbatched_view)\n          (* Creation operations - create unbatched tensors *)\n          | E_const_scalar { context; value; dtype } ->\n              Some\n                (fun k ->\n                  let result = const_scalar context value dtype in\n                  (* Register as unbatched at ALL levels from 0 to current *)\n                  for lv = 0 to env.level do\n                    PhysicalTbl.set_bdim env.shared result ~level:lv ~bdim:None\n                  done;\n                  (* Also set in local table *)\n                  PhysicalTbl.set_bdim batched_tensors result ~level:env.level\n                    ~bdim:None;\n                  continue k result)\n          | E_from_host { context; array } ->\n              Some\n                (fun k ->\n                  let result = from_host context array in\n                  (* Register as unbatched at ALL levels from 0 to current *)\n                  for lv = 0 to env.level do\n                    PhysicalTbl.set_bdim env.shared result ~level:lv ~bdim:None\n                  done;\n                  (* Also set in local table *)\n                  PhysicalTbl.set_bdim batched_tensors result ~level:env.level\n                    ~bdim:None;\n                  continue k result)\n          (* Binary operations - handle broadcasting *)\n          | E_add { a; b } ->\n              Some\n                (fun k ->\n                  let a =\n                    a\n                    |> add_missing_outer_bdims ~like:b\n                    |> canonicalize_batch_positions\n                  in\n                  let b =\n                    b\n                    |> add_missing_outer_bdims ~like:a\n                    |> canonicalize_batch_positions\n                  in\n                  let ba = get_bdim a and bb = get_bdim b in\n                  (* Determine target position: use leftmost batch position if\n                     any 
*)\n                  let p =\n                    match (ba, bb) with\n                    | Some pa, Some pb -> Stdlib.min pa pb\n                    | Some pa, None -> pa\n                    | None, Some pb -> pb\n                    | None, None -> 0\n                  in\n                  (* Align both operands to position p, then restore canonical\n                     batch order *)\n                  let a', b' =\n                    if ba = None && bb = None then (a, b)\n                    else (align_to p a, align_to p b)\n                  in\n                  let a' = canonicalize_batch_positions a' in\n                  let b' = canonicalize_batch_positions b' in\n                  let sa = phys_shape_of a' and sb = phys_shape_of b' in\n                  let a_prefix_len =\n                    PhysicalTbl.get_all_bdims env.shared a'\n                    |> List.filter (fun (lv, _) -> lv <= env.level)\n                    |> List.length\n                  in\n                  let b_prefix_len =\n                    PhysicalTbl.get_all_bdims env.shared b'\n                    |> List.filter (fun (lv, _) -> lv <= env.level)\n                    |> List.length\n                  in\n                  let nbd = Stdlib.max a_prefix_len b_prefix_len in\n                  let a_log =\n                    Array.sub sa a_prefix_len (Array.length sa - a_prefix_len)\n                  in\n                  let b_log =\n                    Array.sub sb b_prefix_len (Array.length sb - b_prefix_len)\n                  in\n                  let target_log = Shape.broadcast a_log b_log in\n                  let target_pref =\n                    Array.init nbd (fun lv ->\n                        try Hashtbl.find env.batch_sizes lv\n                        with Not_found -> 1)\n                  in\n                  let target_phys = Array.append target_pref target_log in\n                  let a'' = broadcast_to_canonical a' target_phys in\n                  let 
b'' = broadcast_to_canonical b' target_phys in\n                  let out = add a'' b'' in\n                  (* Set result bdim based on whether any input was batched *)\n                  set_bdim out\n                    (match (ba, bb) with None, None -> None | _ -> Some p);\n                  continue k out)\n          | E_mul { a; b } ->\n              Some\n                (fun k ->\n                  let a =\n                    a\n                    |> add_missing_outer_bdims ~like:b\n                    |> canonicalize_batch_positions\n                  in\n                  let b =\n                    b\n                    |> add_missing_outer_bdims ~like:a\n                    |> canonicalize_batch_positions\n                  in\n                  let ba = get_bdim a and bb = get_bdim b in\n                  let p =\n                    match (ba, bb) with\n                    | Some pa, Some pb -> Stdlib.min pa pb\n                    | Some pa, None -> pa\n                    | None, Some pb -> pb\n                    | None, None -> 0\n                  in\n                  let a', b' =\n                    if ba = None && bb = None then (a, b)\n                    else (align_to p a, align_to p b)\n                  in\n                  let a' = canonicalize_batch_positions a' in\n                  let b' = canonicalize_batch_positions b' in\n                  let sa = phys_shape_of a' and sb = phys_shape_of b' in\n                  let a_prefix_len =\n                    PhysicalTbl.get_all_bdims env.shared a'\n                    |> List.filter (fun (lv, _) -> lv <= env.level)\n                    |> List.length\n                  in\n                  let b_prefix_len =\n                    PhysicalTbl.get_all_bdims env.shared b'\n                    |> List.filter (fun (lv, _) -> lv <= env.level)\n                    |> List.length\n                  in\n                  let nbd = Stdlib.max a_prefix_len b_prefix_len in\n                  
let a_log =\n                    Array.sub sa a_prefix_len (Array.length sa - a_prefix_len)\n                  in\n                  let b_log =\n                    Array.sub sb b_prefix_len (Array.length sb - b_prefix_len)\n                  in\n                  let target_log = Shape.broadcast a_log b_log in\n                  let target_pref =\n                    Array.init nbd (fun lv ->\n                        try Hashtbl.find env.batch_sizes lv\n                        with Not_found -> 1)\n                  in\n                  let target_phys = Array.append target_pref target_log in\n                  let a'' = broadcast_to_canonical a' target_phys in\n                  let b'' = broadcast_to_canonical b' target_phys in\n                  let out = mul a'' b'' in\n                  set_bdim out\n                    (match (ba, bb) with None, None -> None | _ -> Some p);\n                  continue k out)\n          | E_fdiv { a; b } ->\n              Some\n                (fun k ->\n                  let a, b = unify_outer_bdims a b in\n                  let ba = get_bdim a and bb = get_bdim b in\n                  let p =\n                    match (ba, bb) with\n                    | Some pa, Some pb -> Stdlib.min pa pb\n                    | Some pa, None -> pa\n                    | None, Some pb -> pb\n                    | None, None -> 0\n                  in\n                  let a', b' =\n                    if ba = None && bb = None then (a, b)\n                    else (align_to p a, align_to p b)\n                  in\n                  let out = div a' b' in\n                  set_bdim out\n                    (match (ba, bb) with None, None -> None | _ -> Some p);\n                  continue k out)\n          | E_idiv { a; b } ->\n              Some\n                (fun k ->\n                  let a, b = unify_outer_bdims a b in\n                  let ba = get_bdim a and bb = get_bdim b in\n                  let p =\n                    
match (ba, bb) with\n                    | Some pa, Some pb -> Stdlib.min pa pb\n                    | Some pa, None -> pa\n                    | None, Some pb -> pb\n                    | None, None -> 0\n                  in\n                  let a', b' =\n                    if ba = None && bb = None then (a, b)\n                    else (align_to p a, align_to p b)\n                  in\n                  let out = div a' b' in\n                  set_bdim out\n                    (match (ba, bb) with None, None -> None | _ -> Some p);\n                  continue k out)\n          | E_max { a; b } ->\n              Some\n                (fun k ->\n                  let a, b = unify_outer_bdims a b in\n                  let ba = get_bdim a and bb = get_bdim b in\n                  let p =\n                    match (ba, bb) with\n                    | Some pa, Some pb -> Stdlib.min pa pb\n                    | Some pa, None -> pa\n                    | None, Some pb -> pb\n                    | None, None -> 0\n                  in\n                  let a', b' =\n                    if ba = None && bb = None then (a, b)\n                    else (align_to p a, align_to p b)\n                  in\n                  let out = max a' b' in\n                  set_bdim out\n                    (match (ba, bb) with None, None -> None | _ -> Some p);\n                  continue k out)\n          | E_mod { a; b } ->\n              Some\n                (fun k ->\n                  let a, b = unify_outer_bdims a b in\n                  let ba = get_bdim a and bb = get_bdim b in\n                  let p =\n                    match (ba, bb) with\n                    | Some pa, Some pb -> Stdlib.min pa pb\n                    | Some pa, None -> pa\n                    | None, Some pb -> pb\n                    | None, None -> 0\n                  in\n                  let a', b' =\n                    if ba = None && bb = None then (a, b)\n                    else 
(align_to p a, align_to p b)\n                  in\n                  let out = mod_ a' b' in\n                  set_bdim out\n                    (match (ba, bb) with None, None -> None | _ -> Some p);\n                  continue k out)\n          | E_pow { a; b } ->\n              Some\n                (fun k ->\n                  let a, b = unify_outer_bdims a b in\n                  let ba = get_bdim a and bb = get_bdim b in\n                  let p =\n                    match (ba, bb) with\n                    | Some pa, Some pb -> Stdlib.min pa pb\n                    | Some pa, None -> pa\n                    | None, Some pb -> pb\n                    | None, None -> 0\n                  in\n                  let a', b' =\n                    if ba = None && bb = None then (a, b)\n                    else (align_to p a, align_to p b)\n                  in\n                  let out = pow a' b' in\n                  set_bdim out\n                    (match (ba, bb) with None, None -> None | _ -> Some p);\n                  continue k out)\n          | E_xor { a; b } ->\n              Some\n                (fun k ->\n                  let a, b = unify_outer_bdims a b in\n                  let ba = get_bdim a and bb = get_bdim b in\n                  let p =\n                    match (ba, bb) with\n                    | Some pa, Some pb -> Stdlib.min pa pb\n                    | Some pa, None -> pa\n                    | None, Some pb -> pb\n                    | None, None -> 0\n                  in\n                  let a', b' =\n                    if ba = None && bb = None then (a, b)\n                    else (align_to p a, align_to p b)\n                  in\n                  let out = xor a' b' in\n                  set_bdim out\n                    (match (ba, bb) with None, None -> None | _ -> Some p);\n                  continue k out)\n          | E_or { a; b } ->\n              Some\n                (fun k ->\n                  let a, b = 
unify_outer_bdims a b in\n                  let ba = get_bdim a and bb = get_bdim b in\n                  let p =\n                    match (ba, bb) with\n                    | Some pa, Some pb -> Stdlib.min pa pb\n                    | Some pa, None -> pa\n                    | None, Some pb -> pb\n                    | None, None -> 0\n                  in\n                  let a', b' =\n                    if ba = None && bb = None then (a, b)\n                    else (align_to p a, align_to p b)\n                  in\n                  let out = or_ a' b' in\n                  set_bdim out\n                    (match (ba, bb) with None, None -> None | _ -> Some p);\n                  continue k out)\n          | E_and { a; b } ->\n              Some\n                (fun k ->\n                  let a, b = unify_outer_bdims a b in\n                  let ba = get_bdim a and bb = get_bdim b in\n                  let p =\n                    match (ba, bb) with\n                    | Some pa, Some pb -> Stdlib.min pa pb\n                    | Some pa, None -> pa\n                    | None, Some pb -> pb\n                    | None, None -> 0\n                  in\n                  let a', b' =\n                    if ba = None && bb = None then (a, b)\n                    else (align_to p a, align_to p b)\n                  in\n                  let out = and_ a' b' in\n                  set_bdim out\n                    (match (ba, bb) with None, None -> None | _ -> Some p);\n                  continue k out)\n          (* Comparison operations *)\n          | E_cmplt { a; b } ->\n              Some\n                (fun k ->\n                  let ba = get_bdim a and bb = get_bdim b in\n                  let p =\n                    match (ba, bb) with\n                    | Some pa, Some pb -> Stdlib.min pa pb\n                    | Some pa, None -> pa\n                    | None, Some pb -> pb\n                    | None, None -> 0\n                  in\n  
                let a', b' =\n                    if ba = None && bb = None then (a, b)\n                    else (align_to p a, align_to p b)\n                  in\n                  let out = cmplt a' b' in\n                  set_bdim out\n                    (match (ba, bb) with None, None -> None | _ -> Some p);\n                  continue k out)\n          | E_cmpne { a; b } ->\n              Some\n                (fun k ->\n                  let ba = get_bdim a and bb = get_bdim b in\n                  let p =\n                    match (ba, bb) with\n                    | Some pa, Some pb -> Stdlib.min pa pb\n                    | Some pa, None -> pa\n                    | None, Some pb -> pb\n                    | None, None -> 0\n                  in\n                  let a', b' =\n                    if ba = None && bb = None then (a, b)\n                    else (align_to p a, align_to p b)\n                  in\n                  let out = cmpne a' b' in\n                  set_bdim out\n                    (match (ba, bb) with None, None -> None | _ -> Some p);\n                  continue k out)\n          | E_cmpeq { a; b } ->\n              Some\n                (fun k ->\n                  let ba = get_bdim a and bb = get_bdim b in\n                  let p =\n                    match (ba, bb) with\n                    | Some pa, Some pb -> Stdlib.min pa pb\n                    | Some pa, None -> pa\n                    | None, Some pb -> pb\n                    | None, None -> 0\n                  in\n                  let a', b' =\n                    if ba = None && bb = None then (a, b)\n                    else (align_to p a, align_to p b)\n                  in\n                  let out = cmpeq a' b' in\n                  set_bdim out\n                    (match (ba, bb) with None, None -> None | _ -> Some p);\n                  continue k out)\n          | E_cmple { a; b } ->\n              Some\n                (fun k ->\n                  
let ba = get_bdim a and bb = get_bdim b in\n                  let p =\n                    match (ba, bb) with\n                    | Some pa, Some pb -> Stdlib.min pa pb\n                    | Some pa, None -> pa\n                    | None, Some pb -> pb\n                    | None, None -> 0\n                  in\n                  let a', b' =\n                    if ba = None && bb = None then (a, b)\n                    else (align_to p a, align_to p b)\n                  in\n                  let out = cmple a' b' in\n                  set_bdim out\n                    (match (ba, bb) with None, None -> None | _ -> Some p);\n                  continue k out)\n          (* Unary operations - preserve batch status *)\n          | E_neg { t_in } ->\n              Some\n                (fun k ->\n                  let out = neg t_in in\n                  set_bdim out (get_bdim t_in);\n                  continue k out)\n          | E_sin { t_in } ->\n              Some\n                (fun k ->\n                  let out = sin t_in in\n                  set_bdim out (get_bdim t_in);\n                  continue k out)\n          | E_sqrt { t_in } ->\n              Some\n                (fun k ->\n                  let out = sqrt t_in in\n                  set_bdim out (get_bdim t_in);\n                  continue k out)\n          | E_recip { t_in } ->\n              Some\n                (fun k ->\n                  let out = recip t_in in\n                  set_bdim out (get_bdim t_in);\n                  continue k out)\n          (* Reduction operations with correct axes adjustment *)\n          | E_reduce_sum { t_in; axes; keepdims } ->\n              Some\n                (fun k ->\n                  match get_bdim t_in with\n                  | None ->\n                      let out = reduce_sum ~axes ~keepdims t_in in\n                      set_bdim out None;\n                      continue k out\n                  | Some p ->\n                      let 
adjusted_axes = Array.map (phys_axis ~bdim:p) axes in\n                      let out = reduce_sum ~axes:adjusted_axes ~keepdims t_in in\n                      (* Update bdim based on axes removed *)\n                      let new_p =\n                        if keepdims then Some p\n                        else\n                          let num_removed_before_p =\n                            Array.fold_left\n                              (fun acc a -> if a < p then acc + 1 else acc)\n                              0 adjusted_axes\n                          in\n                          Some (p - num_removed_before_p)\n                      in\n                      set_bdim out new_p;\n                      continue k out)\n          | E_reduce_max { t_in; axes; keepdims } ->\n              Some\n                (fun k ->\n                  match get_bdim t_in with\n                  | None ->\n                      let out = reduce_max ~axes ~keepdims t_in in\n                      set_bdim out None;\n                      continue k out\n                  | Some p ->\n                      let adjusted_axes = Array.map (phys_axis ~bdim:p) axes in\n                      let out = reduce_max ~axes:adjusted_axes ~keepdims t_in in\n                      let new_p =\n                        if keepdims then Some p\n                        else\n                          let num_removed_before_p =\n                            Array.fold_left\n                              (fun acc a -> if a < p then acc + 1 else acc)\n                              0 adjusted_axes\n                          in\n                          Some (p - num_removed_before_p)\n                      in\n                      set_bdim out new_p;\n                      continue k out)\n          | E_reduce_prod { t_in; axes; keepdims } ->\n              Some\n                (fun k ->\n                  match get_bdim t_in with\n                  | None ->\n                      let out = 
reduce_prod ~axes ~keepdims t_in in\n                      set_bdim out None;\n                      continue k out\n                  | Some p ->\n                      let adjusted_axes = Array.map (phys_axis ~bdim:p) axes in\n                      let out =\n                        reduce_prod ~axes:adjusted_axes ~keepdims t_in\n                      in\n                      let new_p =\n                        if keepdims then Some p\n                        else\n                          let num_removed_before_p =\n                            Array.fold_left\n                              (fun acc a -> if a < p then acc + 1 else acc)\n                              0 adjusted_axes\n                          in\n                          Some (p - num_removed_before_p)\n                      in\n                      set_bdim out new_p;\n                      continue k out)\n          (* Shape operations - adjust for batch dimension only if batched *)\n          | E_reshape { t_in; new_shape } ->\n              Some\n                (fun k ->\n                  (* User shape is logical. Preserve present batch prefix and\n                     reshape only the logical tail when element counts match;\n                     otherwise leave unchanged and let broadcasting handle\n                     it. 
*)\n                  let s_phys = phys_shape_of t_in in\n                  let nbd =\n                    PhysicalTbl.get_all_bdims env.shared t_in\n                    |> List.filter (fun (lv, _) -> lv <= env.level)\n                    |> List.length\n                  in\n                  let tail_len = Stdlib.max 0 (Array.length s_phys - nbd) in\n                  let old_tail =\n                    if tail_len = 0 then [||] else Array.sub s_phys nbd tail_len\n                  in\n                  let prod arr = Array.fold_left (fun a b -> a * b) 1 arr in\n                  let prod_old = prod old_tail in\n                  let target_logical = new_shape in\n                  let prod_new = prod target_logical in\n                  let prefix =\n                    if nbd = 0 then [||] else Array.sub s_phys 0 nbd\n                  in\n                  let phys_target = Array.append prefix target_logical in\n                  let result =\n                    if prod_old = prod_new then reshape t_in phys_target\n                    else t_in\n                  in\n                  set_bdim result (get_bdim t_in);\n                  continue k result)\n          | E_expand { t_in; new_target_shape } ->\n              Some\n                (fun k ->\n                  let new_target_arr = new_target_shape in\n                  (* Logical expand: canonicalize batches, then broadcast\n                     current logical dims with the requested new_target_shape.\n                     Keep the existing batch prefix untouched. 
*)\n                  let t0 = canonicalize_batch_positions t_in in\n                  let s = phys_shape_of t0 in\n                  dprintf \"E_expand: s=%s new_target=%s\" (pp_shape s)\n                    (pp_shape new_target_arr);\n                  let nbd = prefix_len_by_batch_sizes t0 in\n                  let prefix = if nbd = 0 then [||] else Array.sub s 0 nbd in\n                  let cur_log =\n                    let sl = Array.length s in\n                    if sl > nbd then Array.sub s nbd (sl - nbd) else [||]\n                  in\n                  dprintf \"E_expand: nbd=%d prefix=%s cur_log=%s\" nbd\n                    (pp_shape prefix) (pp_shape cur_log);\n                  (* If the requested target already includes the current\n                     prefix, strip it *)\n                  let logical_target =\n                    let lt = Array.length new_target_arr in\n                    if lt >= nbd then\n                      let starts_with_prefix =\n                        let ok = ref true in\n                        let i = ref 0 in\n                        while !ok && !i < nbd && !i < lt do\n                          if new_target_arr.(!i) <> prefix.(!i) then ok := false;\n                          incr i\n                        done;\n                        !ok\n                      in\n                      if starts_with_prefix then\n                        Array.sub new_target_arr nbd (lt - nbd)\n                      else new_target_arr\n                    else new_target_arr\n                  in\n                  (* Align ranks by left-padding current logical dims with 1s *)\n                  let lt_len = Array.length logical_target in\n                  let cl_len = Array.length cur_log in\n                  let cur_log_padded =\n                    if cl_len >= lt_len then cur_log\n                    else Array.append (Array.make (lt_len - cl_len) 1) cur_log\n                  in\n                  dprintf \"E_expand: 
logical_target=%s cur_log_padded=%s\"\n                    (pp_shape logical_target) (pp_shape cur_log_padded);\n                  (* Only expand if each dim is either equal or 1; otherwise,\n                     skip *)\n                  let broadcastable =\n                    let ok = ref true in\n                    for i = 0 to lt_len - 1 do\n                      let cur = cur_log_padded.(i) in\n                      let tgt = logical_target.(i) in\n                      if not (cur = tgt || cur = 1) then ok := false\n                    done;\n                    !ok\n                  in\n                  if not broadcastable then (\n                    dprintf \"E_expand: skip (not broadcastable)\";\n                    (* Normalize rank by reshaping to prefix @ cur_log_padded so\n                       downstream indexing (e.g., shrink/permutation) stays\n                       consistent. *)\n                    let fallback_phys = Array.append prefix cur_log_padded in\n                    let rshape = phys_reshape t0 fallback_phys in\n                    copy_bdims_same ~src:t0 ~dst:rshape;\n                    set_bdim rshape (get_bdim t_in);\n                    continue k rshape)\n                  else\n                    let target_log =\n                      Shape.broadcast cur_log_padded logical_target\n                    in\n                    let target_phys = Array.append prefix target_log in\n                    dprintf \"E_expand: target_log=%s target_phys=%s\"\n                      (pp_shape target_log) (pp_shape target_phys);\n                    let result = broadcast_to_canonical t0 target_phys in\n                    set_bdim result (get_bdim t_in);\n                    continue k result)\n          | E_permute { t_in; axes } ->\n              Some\n                (fun k ->\n                  match get_bdim t_in with\n                  | None ->\n                      let result = permute t_in axes in\n                      
set_bdim result None;\n                      continue k result\n                  | Some p ->\n                      let rank = Array.length (T.shape t_in) in\n                      if Array.length axes = rank then (\n                        (* Physical permutation: apply as-is and move bdim\n                           accordingly *)\n                        let result = permute t_in axes in\n                        (* Find new position of previous p *)\n                        let new_p =\n                          let idx = ref 0 in\n                          while !idx < rank && axes.(!idx) <> p do\n                            incr idx\n                          done;\n                          if !idx >= rank then p else !idx\n                        in\n                        set_bdim result (Some new_p);\n                        continue k result)\n                      else\n                        (* Logical permutation: build physical permutation\n                           keeping p fixed *)\n                        let rank_log = rank - 1 in\n                        if Array.length axes <> rank_log then\n                          failwith \"vmap: permute axes length mismatch\"\n                        else\n                          let phys = Array.init rank (fun _ -> -1) in\n                          phys.(p) <- p;\n                          Array.iteri\n                            (fun j old_log ->\n                              let old_phys = phys_axis ~bdim:p old_log in\n                              let new_phys = phys_axis ~bdim:p j in\n                              phys.(new_phys) <- old_phys)\n                            axes;\n                          let result = permute t_in phys in\n                          set_bdim result (Some p);\n                          continue k result)\n          (* Matrix multiplication *)\n          | E_matmul { a; b } ->\n              Some\n                (fun k ->\n                  let a = 
canonicalize_batch_positions a in\n                  let b = canonicalize_batch_positions b in\n                  let ba = get_bdim a and bb = get_bdim b in\n                  let p =\n                    match (ba, bb) with\n                    | Some pa, Some pb -> Stdlib.min pa pb\n                    | Some pa, None -> pa\n                    | None, Some pb -> pb\n                    | None, None -> 0\n                  in\n                  let a', b' =\n                    if ba = None && bb = None then (a, b)\n                    else (align_to p a, align_to p b)\n                  in\n                  let out = matmul a' b' in\n                  set_bdim out\n                    (match (ba, bb) with None, None -> None | _ -> Some p);\n                  continue k out)\n          (* Where operation *)\n          | E_where { condition; if_true; if_false } ->\n              Some\n                (fun k ->\n                  (* Canonicalize and unify outer batch dims across all three\n                     operands *)\n                  let condition =\n                    condition\n                    |> add_missing_outer_bdims ~like:if_true\n                    |> add_missing_outer_bdims ~like:if_false\n                    |> canonicalize_batch_positions\n                  in\n                  let if_true =\n                    if_true\n                    |> add_missing_outer_bdims ~like:condition\n                    |> add_missing_outer_bdims ~like:if_false\n                    |> canonicalize_batch_positions\n                  in\n                  let if_false =\n                    if_false\n                    |> add_missing_outer_bdims ~like:condition\n                    |> add_missing_outer_bdims ~like:if_true\n                    |> canonicalize_batch_positions\n                  in\n                  let bc = get_bdim condition in\n                  let bt = get_bdim if_true in\n                  let bf = get_bdim if_false in\n\n                
  (* Determine target position: use leftmost batch position if\n                     any *)\n                  let p =\n                    match (bc, bt, bf) with\n                    | Some pc, Some pt, Some pf ->\n                        Stdlib.min pc (Stdlib.min pt pf)\n                    | Some pc, Some pt, None -> Stdlib.min pc pt\n                    | Some pc, None, Some pf -> Stdlib.min pc pf\n                    | None, Some pt, Some pf -> Stdlib.min pt pf\n                    | Some pc, None, None -> pc\n                    | None, Some pt, None -> pt\n                    | None, None, Some pf -> pf\n                    | None, None, None -> 0\n                  in\n\n                  let any_batched =\n                    Option.is_some bc || Option.is_some bt || Option.is_some bf\n                  in\n\n                  let condition', if_true', if_false' =\n                    if any_batched then\n                      ( align_to p condition,\n                        align_to p if_true,\n                        align_to p if_false )\n                    else (condition, if_true, if_false)\n                  in\n\n                  (* Compute per-operand prefix length and broadcast logical\n                     shapes *)\n                  let sc = phys_shape_of condition'\n                  and st = phys_shape_of if_true'\n                  and sf = phys_shape_of if_false' in\n                  dprintf \"E_where: sc=%s st=%s sf=%s\" (pp_shape sc)\n                    (pp_shape st) (pp_shape sf);\n                  let c_prefix_len = prefix_len_by_batch_sizes condition' in\n                  let t_prefix_len = prefix_len_by_batch_sizes if_true' in\n                  let f_prefix_len = prefix_len_by_batch_sizes if_false' in\n                  let nbd =\n                    Stdlib.max c_prefix_len\n                      (Stdlib.max t_prefix_len f_prefix_len)\n                  in\n                  let c_log =\n                    Array.sub sc 
c_prefix_len (Array.length sc - c_prefix_len)\n                  in\n                  let t_log =\n                    Array.sub st t_prefix_len (Array.length st - t_prefix_len)\n                  in\n                  let f_log =\n                    Array.sub sf f_prefix_len (Array.length sf - f_prefix_len)\n                  in\n                  dprintf \"E_where: nbd=%d c_prefix=%d t_prefix=%d f_prefix=%d\"\n                    nbd c_prefix_len t_prefix_len f_prefix_len;\n                  (* Align ranks by left-padding with 1s to Stdlib.max logical\n                     rank *)\n                  let max_len =\n                    Stdlib.max (Array.length c_log)\n                      (Stdlib.max (Array.length t_log) (Array.length f_log))\n                  in\n                  let pad_left v =\n                    let lv = Array.length v in\n                    if lv >= max_len then v\n                    else Array.append (Array.make (max_len - lv) 1) v\n                  in\n                  let c_log = pad_left c_log in\n                  let t_log = pad_left t_log in\n                  let f_log = pad_left f_log in\n                  let target_log =\n                    Shape.broadcast c_log (Shape.broadcast t_log f_log)\n                  in\n                  let target_pref =\n                    Array.init nbd (fun lv ->\n                        try Hashtbl.find env.batch_sizes lv\n                        with Not_found -> 1)\n                  in\n                  let target_phys = Array.append target_pref target_log in\n                  dprintf \"E_where: target_log=%s target_phys=%s\"\n                    (pp_shape target_log) (pp_shape target_phys);\n                  let condition'' =\n                    broadcast_to_canonical condition' target_phys\n                  in\n                  let if_true'' = broadcast_to_canonical if_true' target_phys in\n                  let if_false'' =\n                    broadcast_to_canonical 
if_false' target_phys\n                  in\n\n                  let out = where condition'' if_true'' if_false'' in\n                  set_bdim out (if any_batched then Some p else None);\n                  continue k out)\n          (* Cast operation *)\n          | E_cast { t_in; target_dtype } ->\n              Some\n                (fun k ->\n                  let result = cast ~dtype:target_dtype t_in in\n                  set_bdim result (get_bdim t_in);\n                  continue k result)\n          (* Copy operations *)\n          | E_contiguous { t_in } ->\n              Some\n                (fun k ->\n                  let result = contiguous t_in in\n                  set_bdim result (get_bdim t_in);\n                  continue k result)\n          | E_copy { t_in } ->\n              Some\n                (fun k ->\n                  let result = copy t_in in\n                  set_bdim result (get_bdim t_in);\n                  continue k result)\n          (* Operations that need more complex handling *)\n          | E_gather { data; indices; axis } ->\n              Some\n                (fun k ->\n                  let bd = get_bdim data and bi = get_bdim indices in\n                  match bd with\n                  | None ->\n                      let result = gather data indices ~axis in\n                      set_bdim result bi;\n                      continue k result\n                  | Some p ->\n                      let adjusted_axis = phys_axis ~bdim:p axis in\n                      let indices' =\n                        if Option.is_none bi then align_to p indices\n                        else indices\n                      in\n                      let result = gather data indices' ~axis:adjusted_axis in\n                      set_bdim result (Some p);\n                      continue k result)\n          | E_scatter { data_template; indices; updates; axis } ->\n              Some\n                (fun k ->\n                  let bd 
= get_bdim data_template in\n                  let bi = get_bdim indices in\n                  let bu = get_bdim updates in\n                  match bd with\n                  | None ->\n                      let result =\n                        scatter data_template ~indices ~updates ~axis\n                      in\n                      set_bdim result\n                        (match (bi, bu) with None, None -> None | _ -> Some 0);\n                      continue k result\n                  | Some p ->\n                      let adjusted_axis = phys_axis ~bdim:p axis in\n                      let indices' =\n                        if Option.is_none bi then align_to p indices\n                        else indices\n                      in\n                      let updates' =\n                        if Option.is_none bu then align_to p updates\n                        else updates\n                      in\n                      let result =\n                        scatter data_template ~indices:indices'\n                          ~updates:updates' ~axis:adjusted_axis\n                      in\n                      set_bdim result (Some p);\n                      continue k result)\n          | E_cat { t_list; axis } ->\n              Some\n                (fun k ->\n                  let bdims = List.map get_bdim t_list in\n                  let any_batched = List.exists Option.is_some bdims in\n                  if not any_batched then (\n                    let result = cat t_list ~axis in\n                    set_bdim result None;\n                    continue k result)\n                  else\n                    (* Find leftmost batch position *)\n                    let p =\n                      List.fold_left\n                        (fun acc bd ->\n                          match bd with\n                          | Some p' -> (\n                              match acc with\n                              | None -> Some p'\n                           
   | Some a -> Some (Stdlib.min a p'))\n                          | None -> acc)\n                        None bdims\n                      |> Option.get\n                    in\n                    (* Align all tensors to position p *)\n                    let t_list' = List.map (fun t -> align_to p t) t_list in\n                    let adjusted_axis = phys_axis ~bdim:p axis in\n                    let result = cat t_list' ~axis:adjusted_axis in\n                    set_bdim result (Some p);\n                    continue k result)\n          | E_pad { t_in; padding_config; fill_value } ->\n              Some\n                (fun k ->\n                  match get_bdim t_in with\n                  | None ->\n                      let result = pad t_in padding_config fill_value in\n                      set_bdim result None;\n                      continue k result\n                  | Some p ->\n                      (* Insert no padding for batch dimension at p *)\n                      let adjusted_padding =\n                        let n = Array.length padding_config + 1 in\n                        Array.init n (fun i ->\n                            if i = p then (0, 0)\n                            else\n                              let j = if i < p then i else i - 1 in\n                              padding_config.(j))\n                      in\n                      let result = pad t_in adjusted_padding fill_value in\n                      set_bdim result (Some p);\n                      continue k result)\n          | E_shrink { t_in; limits } ->\n              Some\n                (fun k ->\n                  match get_bdim t_in with\n                  | None ->\n                      let result = shrink t_in limits in\n                      set_bdim result None;\n                      continue k result\n                  | Some p ->\n                      (* Don't shrink batch dimension at p *)\n                      let adjusted_limits =\n               
         let n = Array.length limits + 1 in\n                        Array.init n (fun i ->\n                            if i = p then (0, axis_size)\n                            else\n                              let j = if i < p then i else i - 1 in\n                              limits.(j))\n                      in\n                      let result = shrink t_in adjusted_limits in\n                      set_bdim result (Some p);\n                      continue k result)\n          | E_flip { t_in; dims_to_flip } ->\n              Some\n                (fun k ->\n                  match get_bdim t_in with\n                  | None ->\n                      let result = flip t_in dims_to_flip in\n                      set_bdim result None;\n                      continue k result\n                  | Some p ->\n                      (* Don't flip batch dimension at p *)\n                      let adjusted_dims =\n                        let n = Array.length dims_to_flip + 1 in\n                        Array.init n (fun i ->\n                            if i = p then false\n                            else\n                              let j = if i < p then i else i - 1 in\n                              dims_to_flip.(j))\n                      in\n                      let result = flip t_in adjusted_dims in\n                      set_bdim result (Some p);\n                      continue k result)\n          | E_assign _ ->\n              Some\n                (fun _k ->\n                  invalid_arg\n                    \"in-place mutation (set_item, set_slice, blit, assign) \\\n                     cannot be used inside vmap — use scatter instead\")\n          (* Let other operations pass through *)\n          | _ -> None);\n  }\n\n(* ============================================================================\n   The Main vmap Function\n   ============================================================================ *)\n\nlet vmap ?(in_axes = Single (Map 0)) 
?(out_axes = OutSingle (Some 0)) ?axis_name\n    ?axis_size f =\n fun input ->\n  (* Extract axis specifications *)\n  let axis_spec = extract_axis_spec in_axes in\n  let out_axis_spec = extract_out_axis_spec out_axes in\n\n  (* Establish or extend the vmap environment (partial; finalize after size). *)\n  let parent_env = !current_env in\n  let shared =\n    match parent_env with Some e -> e.shared | None -> PhysicalTbl.create ()\n  in\n  let level = match parent_env with Some e -> e.level + 1 | None -> 0 in\n  let batched_tensors = PhysicalTbl.create () in\n  (* Clear any stale mapping at this level for the input before shape queries *)\n  PhysicalTbl.set_bdim shared input ~level ~bdim:None;\n\n  (* Determine batch size and set bdim without moving axes *)\n  let axis_size =\n    match (axis_spec, axis_size) with\n    | Map axis_idx, None ->\n        (* axis_idx is logical; adjust to physical by adding OUTER prefix\n           length. *)\n        let shape = T.shape input in\n        (* Compute physical axis by accounting for existing outer batch dims\n           already present on this input. 
*)\n        let physical_k =\n          let outer_bdims =\n            List.init level (fun lev ->\n                PhysicalTbl.get_bdim shared input ~level:lev)\n            |> List.filter_map (fun x -> x)\n            |> List.sort compare\n          in\n          List.fold_left\n            (fun k_acc outer_bdim ->\n              if outer_bdim <= k_acc then k_acc + 1 else k_acc)\n            axis_idx outer_bdims\n        in\n        if physical_k < 0 || physical_k >= Array.length shape then\n          failwith\n            (Printf.sprintf \"vmap: invalid axis %d (physical %d) for rank %d\"\n               axis_idx physical_k (Array.length shape));\n        shape.(physical_k)\n    | NoMap, Some size -> size\n    | NoMap, None ->\n        failwith \"vmap: axis_size must be provided when in_axes is NoMap\"\n    | Map axis_idx, Some size ->\n        (* Verify provided size matches the physical dimension corresponding to\n           logical axis. *)\n        let shape = T.shape input in\n        let physical_k =\n          let outer_bdims =\n            List.init level (fun lev ->\n                PhysicalTbl.get_bdim shared input ~level:lev)\n            |> List.filter_map (fun x -> x)\n            |> List.sort compare\n          in\n          List.fold_left\n            (fun k_acc outer_bdim ->\n              if outer_bdim <= k_acc then k_acc + 1 else k_acc)\n            axis_idx outer_bdims\n        in\n        if physical_k < 0 || physical_k >= Array.length shape then\n          failwith\n            (Printf.sprintf \"vmap: invalid axis %d (physical %d) for rank %d\"\n               axis_idx physical_k (Array.length shape));\n        if shape.(physical_k) <> size then\n          failwith\n            (Printf.sprintf\n               \"vmap: axis_size %d doesn't match axis %d (physical %d) size %d\"\n               size axis_idx physical_k shape.(physical_k));\n        size\n  in\n\n  (* Finalize env now that axis_size is known *)\n  let batch_sizes =\n    match 
parent_env with\n    | Some e -> Hashtbl.copy e.batch_sizes\n    | None -> Hashtbl.create 8\n  in\n  Hashtbl.replace batch_sizes level axis_size;\n  let env = { level; shared; batch_sizes } in\n\n  (* Mark input bdim, accounting for outer batch dimensions *)\n  (match axis_spec with\n  | Map k ->\n      (* Adjust logical position to physical by adding OUTER prefix length *)\n      let physical_k =\n        let outer_bdims =\n          List.init level (fun lev ->\n              PhysicalTbl.get_bdim shared input ~level:lev)\n          |> List.filter_map (fun x -> x)\n          |> List.sort compare\n        in\n        List.fold_left\n          (fun k_acc outer_bdim ->\n            if outer_bdim <= k_acc then k_acc + 1 else k_acc)\n          k outer_bdims\n      in\n      PhysicalTbl.set_bdim batched_tensors input ~level ~bdim:(Some physical_k);\n      PhysicalTbl.set_bdim shared input ~level ~bdim:(Some physical_k)\n  | NoMap ->\n      PhysicalTbl.set_bdim batched_tensors input ~level ~bdim:None;\n      PhysicalTbl.set_bdim shared input ~level ~bdim:None);\n\n  (* Create the vmap handler with the level and local table *)\n  let vmap_handler =\n    make_vmap_handler ~env ~axis_size ~batched_tensors out_axis_spec axis_name\n  in\n\n  with_env env (fun () ->\n      match Effect.Deep.match_with f input vmap_handler with\n      | result ->\n          PhysicalTbl.clear_level env.shared level;\n          result\n      | exception exn ->\n          PhysicalTbl.clear_level env.shared level;\n          raise exn)\n\n(* vmaps for multiple arguments *)\nlet vmaps ?(in_axes = []) ?(out_axes = OutSingle (Some 0)) ?axis_name ?axis_size\n    f =\n fun inputs ->\n  (* Default to Map 0 for all inputs if in_axes is empty *)\n  let axis_specs =\n    if in_axes = [] then List.map (fun _ -> Map 0) inputs\n    else if List.length in_axes <> List.length inputs then\n      failwith \"vmaps: in_axes must have the same length as inputs or be empty\"\n    else in_axes\n  in\n\n  let 
out_axis_spec = extract_out_axis_spec out_axes in\n\n  (* Establish or extend the vmap environment (partial; finalize after size). *)\n  let parent_env = !current_env in\n  let shared =\n    match parent_env with Some e -> e.shared | None -> PhysicalTbl.create ()\n  in\n  let level = match parent_env with Some e -> e.level + 1 | None -> 0 in\n  let batched_tensors = PhysicalTbl.create () in\n  (* Clear any stale mapping at this level for inputs before shape queries *)\n  List.iter\n    (fun inp -> PhysicalTbl.set_bdim shared inp ~level ~bdim:None)\n    inputs;\n\n  (* Determine batch size: the maximum mapped axis size across inputs *)\n  let axis_size =\n    match axis_size with\n    | Some size -> size\n    | None ->\n        (* Choose the maximum mapped size across inputs to allow broadcasting\n           smaller ones *)\n        let rec collect_sizes acc ins sp =\n          match (ins, sp) with\n          | input :: rest_i, Map axis_idx :: rest_s ->\n              let shape = T.shape input in\n              let physical_axis =\n                let outer_bdims =\n                  List.init level (fun lev ->\n                      PhysicalTbl.get_bdim shared input ~level:lev)\n                  |> List.filter_map (fun x -> x)\n                  |> List.sort compare\n                in\n                List.fold_left\n                  (fun k_acc outer_bdim ->\n                    if outer_bdim <= k_acc then k_acc + 1 else k_acc)\n                  axis_idx outer_bdims\n              in\n              if physical_axis < 0 || physical_axis >= Array.length shape then\n                failwith\n                  (Printf.sprintf\n                     \"vmaps: invalid axis %d (physical %d) for rank %d\" axis_idx\n                     physical_axis (Array.length shape));\n              collect_sizes (Stdlib.max acc shape.(physical_axis)) rest_i rest_s\n          | _ :: rest_i, NoMap :: rest_s -> collect_sizes acc rest_i rest_s\n          | [], [] -> acc\n          | _ -> failwith \"vmaps: 
internal error\"\n        in\n        collect_sizes 1 inputs axis_specs\n  in\n\n  (* Finalize env now that axis_size is known *)\n  let batch_sizes =\n    match parent_env with\n    | Some e -> Hashtbl.copy e.batch_sizes\n    | None -> Hashtbl.create 8\n  in\n  Hashtbl.replace batch_sizes level axis_size;\n  let env = { level; shared; batch_sizes } in\n\n  (* Mark each input's bdim, accounting for outer batch dimensions *)\n  List.iter2\n    (fun input axis_spec ->\n      match axis_spec with\n      | Map axis_idx ->\n          (* Check how many batch dimensions from outer levels come before\n             axis_idx *)\n          let physical_idx =\n            let outer_bdims =\n              List.init level (fun lev ->\n                  PhysicalTbl.get_bdim shared input ~level:lev)\n              |> List.filter_map (fun x -> x)\n              |> List.sort compare\n            in\n            List.fold_left\n              (fun k_acc outer_bdim ->\n                if outer_bdim <= k_acc then k_acc + 1 else k_acc)\n              axis_idx outer_bdims\n          in\n          (* If this input's mapped dimension is size 1 and axis_size > 1,\n             broadcast it. 
*)\n          let input_shape = T.shape input in\n          let input_axis_size = input_shape.(physical_idx) in\n          let input' =\n            if input_axis_size = axis_size then input\n            else if input_axis_size = 1 then\n              (* Build target physical shape by replacing that axis with\n                 axis_size *)\n              let target =\n                Array.mapi\n                  (fun i d -> if i = physical_idx then axis_size else d)\n                  input_shape\n              in\n              expand input target\n            else\n              failwith\n                (Printf.sprintf\n                   \"vmaps: cannot broadcast mapped axis of size %d to %d\"\n                   input_axis_size axis_size)\n          in\n          PhysicalTbl.set_bdim batched_tensors input' ~level\n            ~bdim:(Some physical_idx);\n          PhysicalTbl.set_bdim shared input' ~level ~bdim:(Some physical_idx)\n      | NoMap ->\n          PhysicalTbl.set_bdim batched_tensors input ~level ~bdim:None;\n          PhysicalTbl.set_bdim shared input ~level ~bdim:None)\n    inputs axis_specs;\n\n  (* Create the vmap handler with the level and local table *)\n  let vmap_handler =\n    make_vmap_handler ~env ~axis_size ~batched_tensors out_axis_spec axis_name\n  in\n\n  with_env env (fun () ->\n      match\n        Effect.Deep.match_with (fun inputs -> f inputs) inputs vmap_handler\n      with\n      | result ->\n          PhysicalTbl.clear_level env.shared level;\n          result\n      | exception exn ->\n          PhysicalTbl.clear_level env.shared level;\n          raise exn)\n"
  },
  {
    "path": "packages/rune/test/dune",
    "content": "(tests\n (names\n  test_vjp\n  test_gradcheck\n  test_jvp\n  test_custom_diff\n  test_jacobian\n  test_jit\n  test_jit_grad\n  test_jit_vmap)\n (package rune)\n (libraries nx rune nx.core windtrap test_rune_support tolk tolk.ir tolk.cpu))\n\n(test\n (name test_vmap)\n (enabled_if false)\n (package rune)\n (modules test_vmap)\n (libraries nx rune nx.core windtrap test_rune_support))\n"
  },
  {
    "path": "packages/rune/test/golden/jit_grad/dune",
    "content": "(executable\n (name generate_actual)\n (libraries nx rune tolk tolk.ir tolk.cpu))\n\n(rule\n (package rune)\n (targets\n  grad_square.actual\n  grad_sin.actual\n  grad_polynomial.actual\n  grad_cube.actual)\n (action\n  (run ./generate_actual.exe .)))\n\n(rule\n (alias runtest)\n (package rune)\n (action\n  (progn\n   (diff grad_square.expected grad_square.actual)\n   (diff grad_sin.expected grad_sin.actual)\n   (diff grad_polynomial.expected grad_polynomial.actual)\n   (diff grad_cube.expected grad_cube.actual))))\n"
  },
  {
    "path": "packages/rune/test/golden/jit_grad/generate_actual.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Generates .actual files for grad+JIT golden tests. Each file contains the\n   rendered C source from tracing grad(f) through the JIT capture handler via\n   [Jit.trace_graph]. Dune diff rules compare .actual against .expected\n   (generated from tinygrad's backward computation). *)\n\nlet write_actual dir name content =\n  let filename = Filename.concat dir (name ^ \".actual\") in\n  let oc = open_out filename in\n  output_string oc content;\n  output_char oc '\\n';\n  close_out oc\n\nlet dev = Tolk_cpu.create \"CPU\"\n\nlet grad_source f x =\n  let traced = Rune.trace_graph ~device:dev (Rune.grad f) x in\n  String.concat \"\\n---\\n\" traced.rendered_source\n\n(* ── Test cases ── *)\n\n(* grad(sum(x*x)) = 2*x, shape [4] *)\nlet grad_square () =\n  let x = Nx.full Nx.float32 [| 4 |] 3.0 in\n  grad_source (fun x -> Nx.sum (Nx.mul x x)) x\n\n(* grad(sum(sin(x))) = cos(x), shape [4] *)\nlet grad_sin () =\n  let x = Nx.full Nx.float32 [| 4 |] 1.0 in\n  grad_source (fun x -> Nx.sum (Nx.sin x)) x\n\n(* grad(sum((x+1)*x)) = 2x+1, shape [4] *)\nlet grad_polynomial () =\n  let x = Nx.full Nx.float32 [| 4 |] 2.0 in\n  grad_source\n    (fun x -> Nx.sum (Nx.mul (Nx.add x (Nx.scalar Nx.float32 1.0)) x))\n    x\n\n(* grad(sum(x*x*x)) = 3*x^2, shape [4] *)\nlet grad_cube () =\n  let x = Nx.full Nx.float32 [| 4 |] 2.0 in\n  grad_source (fun x -> Nx.sum (Nx.mul (Nx.mul x x) x)) x\n\ntype test_case = { name : string; generate : unit -> string }\n\nlet test_cases =\n  [\n    { name = \"grad_square\"; generate = grad_square };\n    { name = \"grad_sin\"; generate = grad_sin };\n    { name = \"grad_polynomial\"; generate = grad_polynomial };\n    { name = \"grad_cube\"; generate = grad_cube };\n  ]\n\nlet () =\n  
let dir = Sys.argv.(1) in\n  let failed = ref false in\n  List.iter\n    (fun { name; generate } ->\n      match generate () with\n      | out -> write_actual dir name out\n      | exception exn ->\n          Printf.eprintf \"FAIL %s: %s\\n%!\" name (Printexc.to_string exn);\n          Printexc.print_backtrace stderr;\n          failed := true)\n    test_cases;\n  if !failed then exit 1\n"
  },
  {
    "path": "packages/rune/test/golden/jit_grad/generate_expected.py",
    "content": "#!/usr/bin/env python3\n\"\"\"Generate tinygrad reference .expected files for grad+JIT golden tests.\n\nUses tinygrad's Tensor.gradient() + Tensor.schedule() to produce the\nscheduled gradient kernel, then renders via the clang renderer.\n\nUsage:\n    uv run packages/rune/test/golden/jit_grad/generate_expected.py\n\nAfter running, commit the generated .expected files.\n\"\"\"\n\nimport os\nimport sys\n\nsys.path.insert(\n    0,\n    os.path.join(\n        os.path.dirname(__file__), \"..\", \"..\", \"..\", \"..\", \"..\", \"_tinygrad\"\n    ),\n)\n\nfrom tinygrad import Tensor\nfrom tinygrad.codegen import full_rewrite_to_sink, line_rewrite, pm_linearize_cleanups\nfrom tinygrad.codegen.late.linearizer import linearize\nfrom tinygrad.renderer.cstyle import ClangRenderer\n\nOUT_DIR = os.path.dirname(__file__)\n\nrenderer = ClangRenderer()\n\n\ndef write_expected(name, content):\n    path = os.path.join(OUT_DIR, f\"{name}.expected\")\n    with open(path, \"w\") as f:\n        f.write(content + \"\\n\")\n    print(f\"  wrote {path}\")\n\n\ndef gradient_source(f, x_shape):\n    \"\"\"Compute gradient of f w.r.t. 
x and return rendered source.\n\n    Uses Tensor.gradient() + Tensor.schedule() for proper scheduling,\n    then renders each kernel via the clang renderer.\n    \"\"\"\n    x = Tensor.empty(*x_shape).requires_grad_(True)\n    y = f(x)\n    (grad_x,) = y.gradient(x)\n    sched = grad_x.schedule()\n    sources = []\n    for item in sched:\n        ast = item.ast\n        rewritten = full_rewrite_to_sink(ast, renderer, optimize=True)\n        lst = linearize(rewritten)\n        lst = line_rewrite(lst, pm_linearize_cleanups)\n        sources.append(renderer.render(lst).strip())\n    return \"\\n---\\n\".join(sources)\n\n\n# ── Test cases ──\n\n\ndef build_grad_square():\n    \"\"\"grad(sum(x*x)) = 2*x, shape [4].\"\"\"\n    return gradient_source(lambda x: (x * x).sum(), (4,))\n\n\ndef build_grad_sin():\n    \"\"\"grad(sum(sin(x))) = cos(x), shape [4].\"\"\"\n    return gradient_source(lambda x: x.sin().sum(), (4,))\n\n\ndef build_grad_polynomial():\n    \"\"\"grad(sum((x+1)*x)) = 2x+1, shape [4].\"\"\"\n    return gradient_source(lambda x: ((x + 1) * x).sum(), (4,))\n\n\ndef build_grad_cube():\n    \"\"\"grad(sum(x*x*x)) = 3*x^2, shape [4].\"\"\"\n    return gradient_source(lambda x: (x * x * x).sum(), (4,))\n\n\ndef build_grad_sum():\n    \"\"\"grad(sum(x)) = ones, shape [4].\"\"\"\n    return gradient_source(lambda x: x.sum(), (4,))\n\n\nTEST_CASES = [\n    (\"grad_square\", build_grad_square),\n    (\"grad_sin\", build_grad_sin),\n    (\"grad_polynomial\", build_grad_polynomial),\n    (\"grad_cube\", build_grad_cube),\n    (\"grad_sum\", build_grad_sum),\n]\n\n\ndef main():\n    total = 0\n    for case_name, builder in TEST_CASES:\n        print(f\"\\n{case_name}:\")\n        try:\n            src = builder()\n            write_expected(case_name, src)\n            total += 1\n        except Exception as e:\n            print(f\"  FAIL {case_name}: {e}\")\n            import traceback\n\n            traceback.print_exc()\n\n    print(f\"\\nDone. 
Generated {total} .expected files in {OUT_DIR}\")\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "packages/rune/test/golden/jit_grad/grad_cube.expected",
    "content": "typedef float float4 __attribute__((aligned(16),ext_vector_type(4)));\nvoid E_4n3(float* restrict data0_4, float* restrict data1_4) {\n  float4 val0 = (*((float4*)((data1_4+0))));\n  *((float4*)((data0_4+0))) = (float4){(val0[0]*val0[0]*3.0f),(val0[1]*val0[1]*3.0f),(val0[2]*val0[2]*3.0f),(val0[3]*val0[3]*3.0f)};\n}\n"
  },
  {
    "path": "packages/rune/test/golden/jit_grad/grad_polynomial.expected",
    "content": "typedef float float4 __attribute__((aligned(16),ext_vector_type(4)));\nvoid E_4n2(float* restrict data0_4, float* restrict data1_4) {\n  float4 val0 = (*((float4*)((data1_4+0))));\n  *((float4*)((data0_4+0))) = (float4){(1.0f+(val0[0]*2.0f)),(1.0f+(val0[1]*2.0f)),(1.0f+(val0[2]*2.0f)),(1.0f+(val0[3]*2.0f))};\n}\n"
  },
  {
    "path": "packages/rune/test/golden/jit_grad/grad_sin.expected",
    "content": "typedef float float4 __attribute__((aligned(16),ext_vector_type(4)));\nvoid E_4n1(float* restrict data0_4, float* restrict data1_4) {\n  float4 val0 = (*((float4*)((data1_4+0))));\n  float alu0 = (1.5707963267948966f-val0[0]);\n  _Bool alu1 = (alu0!=alu0);\n  _Bool alu2 = (alu0!=((float)(-__builtin_inff())));\n  _Bool alu3 = (alu0!=((float)(__builtin_inff())));\n  float alu4 = (alu2?alu0:0.0f);\n  float alu5 = (alu1?0.0f:alu4);\n  float alu6 = (alu3?alu5:0.0f);\n  float alu7 = ((alu6<0.0f)?-1.0f:1.0f);\n  float alu8 = ((alu6!=0.0f)?alu7:0.0f);\n  float alu9 = (alu6*alu8);\n  unsigned int cast0 = __builtin_bit_cast(unsigned int, (float)(alu9));\n  unsigned int alu10 = (((cast0>>23u)&255u)+4294967169u+1u);\n  float alu11 = (1.5707963267948966f-val0[1]);\n  _Bool alu12 = (alu11!=alu11);\n  _Bool alu13 = (alu11!=((float)(-__builtin_inff())));\n  _Bool alu14 = (alu11!=((float)(__builtin_inff())));\n  float alu15 = (alu13?alu11:0.0f);\n  float alu16 = (alu12?0.0f:alu15);\n  float alu17 = (alu14?alu16:0.0f);\n  float alu18 = ((alu17<0.0f)?-1.0f:1.0f);\n  float alu19 = ((alu17!=0.0f)?alu18:0.0f);\n  float alu20 = (alu17*alu19);\n  unsigned int cast1 = __builtin_bit_cast(unsigned int, (float)(alu20));\n  unsigned int alu21 = (((cast1>>23u)&255u)+4294967169u+1u);\n  float alu22 = (1.5707963267948966f-val0[2]);\n  _Bool alu23 = (alu22!=alu22);\n  _Bool alu24 = (alu22!=((float)(-__builtin_inff())));\n  _Bool alu25 = (alu22!=((float)(__builtin_inff())));\n  float alu26 = (alu24?alu22:0.0f);\n  float alu27 = (alu23?0.0f:alu26);\n  float alu28 = (alu25?alu27:0.0f);\n  float alu29 = ((alu28<0.0f)?-1.0f:1.0f);\n  float alu30 = ((alu28!=0.0f)?alu29:0.0f);\n  float alu31 = (alu28*alu30);\n  unsigned int cast2 = __builtin_bit_cast(unsigned int, (float)(alu31));\n  unsigned int alu32 = (((cast2>>23u)&255u)+4294967169u+1u);\n  float alu33 = (1.5707963267948966f-val0[3]);\n  _Bool alu34 = (alu33!=alu33);\n  _Bool alu35 = (alu33!=((float)(-__builtin_inff())));\n  _Bool 
alu36 = (alu33!=((float)(__builtin_inff())));\n  float alu37 = (alu35?alu33:0.0f);\n  float alu38 = (alu34?0.0f:alu37);\n  float alu39 = (alu36?alu38:0.0f);\n  float alu40 = ((alu39<0.0f)?-1.0f:1.0f);\n  float alu41 = ((alu39!=0.0f)?alu40:0.0f);\n  float alu42 = (alu39*alu41);\n  unsigned int cast3 = __builtin_bit_cast(unsigned int, (float)(alu42));\n  unsigned int alu43 = (((cast3>>23u)&255u)+4294967169u+1u);\n  float alu44 = (alu9*0.3183098861837907f);\n  float alu45 = ((alu44<0.0f)?-0.5f:0.5f);\n  int cast4 = ((int)((alu44+alu45)));\n  float alu46 = (alu20*0.3183098861837907f);\n  float alu47 = ((alu46<0.0f)?-0.5f:0.5f);\n  int cast5 = ((int)((alu46+alu47)));\n  float alu48 = (alu31*0.3183098861837907f);\n  float alu49 = ((alu48<0.0f)?-0.5f:0.5f);\n  int cast6 = ((int)((alu48+alu49)));\n  float alu50 = (alu42*0.3183098861837907f);\n  float alu51 = ((alu50<0.0f)?-0.5f:0.5f);\n  int cast7 = ((int)((alu50+alu51)));\n  int alu52 = (((int)(alu10))&31);\n  unsigned long cast8 = ((unsigned long)(__builtin_bit_cast(float, (int)(((alu52+127)<<23)))));\n  unsigned long alu53 = (((unsigned long)(alu10))>>5ull);\n  _Bool alu54 = (alu53!=0ull);\n  _Bool alu55 = (alu53!=1ull);\n  _Bool alu56 = (alu53!=2ull);\n  _Bool alu57 = (alu53!=3ull);\n  _Bool alu58 = (alu53!=4ull);\n  unsigned int alu59 = ((alu53!=5ull)?0u:920167782u);\n  unsigned int alu60 = (alu58?alu59:2102212464u);\n  unsigned int alu61 = (alu57?alu60:2131351028u);\n  unsigned int alu62 = (alu56?alu61:2475754826u);\n  unsigned int alu63 = (alu55?alu62:683565275u);\n  unsigned int alu64 = (alu54?alu63:0u);\n  unsigned int alu65 = (alu58?0u:920167782u);\n  unsigned int alu66 = (alu57?alu65:2102212464u);\n  unsigned int alu67 = (alu56?alu66:2131351028u);\n  unsigned int alu68 = (alu55?alu67:2475754826u);\n  unsigned int alu69 = (alu54?alu68:683565275u);\n  unsigned long cast9 = ((unsigned long)(alu69));\n  unsigned int alu70 = (alu57?0u:920167782u);\n  unsigned int alu71 = (alu56?alu70:2102212464u);\n  unsigned int 
alu72 = (alu55?alu71:2131351028u);\n  unsigned int alu73 = (alu54?alu72:2475754826u);\n  unsigned long cast10 = ((unsigned long)(alu73));\n  unsigned long cast11 = ((unsigned long)(__builtin_bit_cast(float, (int)((((32-alu52)+127)<<23)))));\n  unsigned int alu74 = (alu56?0u:920167782u);\n  unsigned int alu75 = (alu55?alu74:2102212464u);\n  unsigned int alu76 = (alu54?alu75:2131351028u);\n  float cast12 = __builtin_bit_cast(float, (unsigned int)(((cast0&2155872255u)|1056964608u)));\n  unsigned long cast13 = ((unsigned long)((cast12*4294967296.0f)));\n  unsigned long alu77 = (((cast13*((unsigned long)((((unsigned int)((((unsigned long)(alu64))*cast8)))|((unsigned int)((cast9/cast11)))))))<<32ull)+(cast13*((unsigned long)((((unsigned int)((cast9*cast8)))|((unsigned int)((cast10/cast11)))))))+((cast13*((unsigned long)((((unsigned int)((cast10*cast8)))|((unsigned int)((((unsigned long)(alu76))/cast11)))))))>>32ull));\n  int cast14 = ((int)((alu77>>62ull)));\n  int alu78 = (((int)(alu21))&31);\n  unsigned long cast15 = ((unsigned long)(__builtin_bit_cast(float, (int)(((alu78+127)<<23)))));\n  unsigned long alu79 = (((unsigned long)(alu21))>>5ull);\n  _Bool alu80 = (alu79!=0ull);\n  _Bool alu81 = (alu79!=1ull);\n  _Bool alu82 = (alu79!=2ull);\n  _Bool alu83 = (alu79!=3ull);\n  _Bool alu84 = (alu79!=4ull);\n  unsigned int alu85 = ((alu79!=5ull)?0u:920167782u);\n  unsigned int alu86 = (alu84?alu85:2102212464u);\n  unsigned int alu87 = (alu83?alu86:2131351028u);\n  unsigned int alu88 = (alu82?alu87:2475754826u);\n  unsigned int alu89 = (alu81?alu88:683565275u);\n  unsigned int alu90 = (alu80?alu89:0u);\n  unsigned int alu91 = (alu84?0u:920167782u);\n  unsigned int alu92 = (alu83?alu91:2102212464u);\n  unsigned int alu93 = (alu82?alu92:2131351028u);\n  unsigned int alu94 = (alu81?alu93:2475754826u);\n  unsigned int alu95 = (alu80?alu94:683565275u);\n  unsigned long cast16 = ((unsigned long)(alu95));\n  unsigned int alu96 = (alu83?0u:920167782u);\n  unsigned int alu97 = 
(alu82?alu96:2102212464u);\n  unsigned int alu98 = (alu81?alu97:2131351028u);\n  unsigned int alu99 = (alu80?alu98:2475754826u);\n  unsigned long cast17 = ((unsigned long)(alu99));\n  unsigned long cast18 = ((unsigned long)(__builtin_bit_cast(float, (int)((((32-alu78)+127)<<23)))));\n  unsigned int alu100 = (alu82?0u:920167782u);\n  unsigned int alu101 = (alu81?alu100:2102212464u);\n  unsigned int alu102 = (alu80?alu101:2131351028u);\n  float cast19 = __builtin_bit_cast(float, (unsigned int)(((cast1&2155872255u)|1056964608u)));\n  unsigned long cast20 = ((unsigned long)((cast19*4294967296.0f)));\n  unsigned long alu103 = (((cast20*((unsigned long)((((unsigned int)((((unsigned long)(alu90))*cast15)))|((unsigned int)((cast16/cast18)))))))<<32ull)+(cast20*((unsigned long)((((unsigned int)((cast16*cast15)))|((unsigned int)((cast17/cast18)))))))+((cast20*((unsigned long)((((unsigned int)((cast17*cast15)))|((unsigned int)((((unsigned long)(alu102))/cast18)))))))>>32ull));\n  int cast21 = ((int)((alu103>>62ull)));\n  int alu104 = (((int)(alu32))&31);\n  unsigned long cast22 = ((unsigned long)(__builtin_bit_cast(float, (int)(((alu104+127)<<23)))));\n  unsigned long alu105 = (((unsigned long)(alu32))>>5ull);\n  _Bool alu106 = (alu105!=0ull);\n  _Bool alu107 = (alu105!=1ull);\n  _Bool alu108 = (alu105!=2ull);\n  _Bool alu109 = (alu105!=3ull);\n  _Bool alu110 = (alu105!=4ull);\n  unsigned int alu111 = ((alu105!=5ull)?0u:920167782u);\n  unsigned int alu112 = (alu110?alu111:2102212464u);\n  unsigned int alu113 = (alu109?alu112:2131351028u);\n  unsigned int alu114 = (alu108?alu113:2475754826u);\n  unsigned int alu115 = (alu107?alu114:683565275u);\n  unsigned int alu116 = (alu106?alu115:0u);\n  unsigned int alu117 = (alu110?0u:920167782u);\n  unsigned int alu118 = (alu109?alu117:2102212464u);\n  unsigned int alu119 = (alu108?alu118:2131351028u);\n  unsigned int alu120 = (alu107?alu119:2475754826u);\n  unsigned int alu121 = (alu106?alu120:683565275u);\n  unsigned long cast23 = 
((unsigned long)(alu121));\n  unsigned int alu122 = (alu109?0u:920167782u);\n  unsigned int alu123 = (alu108?alu122:2102212464u);\n  unsigned int alu124 = (alu107?alu123:2131351028u);\n  unsigned int alu125 = (alu106?alu124:2475754826u);\n  unsigned long cast24 = ((unsigned long)(alu125));\n  unsigned long cast25 = ((unsigned long)(__builtin_bit_cast(float, (int)((((32-alu104)+127)<<23)))));\n  unsigned int alu126 = (alu108?0u:920167782u);\n  unsigned int alu127 = (alu107?alu126:2102212464u);\n  unsigned int alu128 = (alu106?alu127:2131351028u);\n  float cast26 = __builtin_bit_cast(float, (unsigned int)(((cast2&2155872255u)|1056964608u)));\n  unsigned long cast27 = ((unsigned long)((cast26*4294967296.0f)));\n  unsigned long alu129 = (((cast27*((unsigned long)((((unsigned int)((((unsigned long)(alu116))*cast22)))|((unsigned int)((cast23/cast25)))))))<<32ull)+(cast27*((unsigned long)((((unsigned int)((cast23*cast22)))|((unsigned int)((cast24/cast25)))))))+((cast27*((unsigned long)((((unsigned int)((cast24*cast22)))|((unsigned int)((((unsigned long)(alu128))/cast25)))))))>>32ull));\n  int cast28 = ((int)((alu129>>62ull)));\n  int alu130 = (((int)(alu43))&31);\n  unsigned long cast29 = ((unsigned long)(__builtin_bit_cast(float, (int)(((alu130+127)<<23)))));\n  unsigned long alu131 = (((unsigned long)(alu43))>>5ull);\n  _Bool alu132 = (alu131!=0ull);\n  _Bool alu133 = (alu131!=1ull);\n  _Bool alu134 = (alu131!=2ull);\n  _Bool alu135 = (alu131!=3ull);\n  _Bool alu136 = (alu131!=4ull);\n  unsigned int alu137 = ((alu131!=5ull)?0u:920167782u);\n  unsigned int alu138 = (alu136?alu137:2102212464u);\n  unsigned int alu139 = (alu135?alu138:2131351028u);\n  unsigned int alu140 = (alu134?alu139:2475754826u);\n  unsigned int alu141 = (alu133?alu140:683565275u);\n  unsigned int alu142 = (alu132?alu141:0u);\n  unsigned int alu143 = (alu136?0u:920167782u);\n  unsigned int alu144 = (alu135?alu143:2102212464u);\n  unsigned int alu145 = (alu134?alu144:2131351028u);\n  unsigned int 
alu146 = (alu133?alu145:2475754826u);\n  unsigned int alu147 = (alu132?alu146:683565275u);\n  unsigned long cast30 = ((unsigned long)(alu147));\n  unsigned int alu148 = (alu135?0u:920167782u);\n  unsigned int alu149 = (alu134?alu148:2102212464u);\n  unsigned int alu150 = (alu133?alu149:2131351028u);\n  unsigned int alu151 = (alu132?alu150:2475754826u);\n  unsigned long cast31 = ((unsigned long)(alu151));\n  unsigned long cast32 = ((unsigned long)(__builtin_bit_cast(float, (int)((((32-alu130)+127)<<23)))));\n  unsigned int alu152 = (alu134?0u:920167782u);\n  unsigned int alu153 = (alu133?alu152:2102212464u);\n  unsigned int alu154 = (alu132?alu153:2131351028u);\n  float cast33 = __builtin_bit_cast(float, (unsigned int)(((cast3&2155872255u)|1056964608u)));\n  unsigned long cast34 = ((unsigned long)((cast33*4294967296.0f)));\n  unsigned long alu155 = (((cast34*((unsigned long)((((unsigned int)((((unsigned long)(alu142))*cast29)))|((unsigned int)((cast30/cast32)))))))<<32ull)+(cast34*((unsigned long)((((unsigned int)((cast30*cast29)))|((unsigned int)((cast31/cast32)))))))+((cast34*((unsigned long)((((unsigned int)((cast31*cast29)))|((unsigned int)((((unsigned long)(alu154))/cast32)))))))>>32ull));\n  int cast35 = ((int)((alu155>>62ull)));\n  float cast36 = ((float)(cast4));\n  float cast37 = ((float)(cast5));\n  float cast38 = ((float)(cast6));\n  float cast39 = ((float)(cast7));\n  float alu156 = ((cast36*-1.215420125655342e-10f)+(cast36*-1.984187258941006e-09f)+(cast36*-0.0001131594181060791f)+(cast36*-3.1414794921875f)+alu9);\n  float alu157 = ((cast37*-1.215420125655342e-10f)+(cast37*-1.984187258941006e-09f)+(cast37*-0.0001131594181060791f)+(cast37*-3.1414794921875f)+alu20);\n  float alu158 = ((cast38*-1.215420125655342e-10f)+(cast38*-1.984187258941006e-09f)+(cast38*-0.0001131594181060791f)+(cast38*-3.1414794921875f)+alu31);\n  float alu159 = 
((cast39*-1.215420125655342e-10f)+(cast39*-1.984187258941006e-09f)+(cast39*-0.0001131594181060791f)+(cast39*-3.1414794921875f)+alu42);\n  float alu160 = (((float)((alu77&4611686018427387903ull)))*3.4061215800865545e-19f);\n  float alu161 = (((float)((alu103&4611686018427387903ull)))*3.4061215800865545e-19f);\n  float alu162 = (((float)((alu129&4611686018427387903ull)))*3.4061215800865545e-19f);\n  float alu163 = (((float)((alu155&4611686018427387903ull)))*3.4061215800865545e-19f);\n  float alu164 = (alu156*alu156);\n  float alu165 = (alu157*alu157);\n  float alu166 = (alu158*alu158);\n  float alu167 = (alu159*alu159);\n  _Bool alu168 = (cast12<0.5f);\n  int alu169 = (alu168?cast14:(cast14+1));\n  float alu170 = (alu168?alu160:(alu160+-1.5707963267948966f));\n  float alu171 = (((alu169&1)!=0)?1.5707963267948966f:0.0f);\n  float alu172 = (alu170+alu171);\n  float alu173 = (alu172*alu172);\n  _Bool alu174 = (cast19<0.5f);\n  int alu175 = (alu174?cast21:(cast21+1));\n  float alu176 = (alu174?alu161:(alu161+-1.5707963267948966f));\n  float alu177 = (((alu175&1)!=0)?1.5707963267948966f:0.0f);\n  float alu178 = (alu176+alu177);\n  float alu179 = (alu178*alu178);\n  _Bool alu180 = (cast26<0.5f);\n  int alu181 = (alu180?cast28:(cast28+1));\n  float alu182 = (alu180?alu162:(alu162+-1.5707963267948966f));\n  float alu183 = (((alu181&1)!=0)?1.5707963267948966f:0.0f);\n  float alu184 = (alu182+alu183);\n  float alu185 = (alu184*alu184);\n  _Bool alu186 = (cast33<0.5f);\n  int alu187 = (alu186?cast35:(cast35+1));\n  float alu188 = (alu186?alu163:(alu163+-1.5707963267948966f));\n  float alu189 = (((alu187&1)!=0)?1.5707963267948966f:0.0f);\n  float alu190 = (alu188+alu189);\n  float alu191 = (alu190*alu190);\n  float alu192 = (((cast4&1)!=0)?-1.0f:1.0f);\n  float alu193 = (((cast5&1)!=0)?-1.0f:1.0f);\n  float alu194 = (((cast6&1)!=0)?-1.0f:1.0f);\n  float alu195 = (((cast7&1)!=0)?-1.0f:1.0f);\n  float alu196 = (((alu169&2)!=0)?-1.0f:1.0f);\n  float alu197 = 
(((alu175&2)!=0)?-1.0f:1.0f);\n  float alu198 = (((alu181&2)!=0)?-1.0f:1.0f);\n  float alu199 = (((alu187&2)!=0)?-1.0f:1.0f);\n  float alu200 = ((alu9<30.0f)?(alu156*((((((((2.6083159809786594e-06f*alu164)+-0.00019810690719168633f)*alu164)+0.00833307858556509f)*alu164)+-0.16666659712791443f)*alu164)+1.0f)*alu192):(alu172*((((((((2.6083159809786594e-06f*alu173)+-0.00019810690719168633f)*alu173)+0.00833307858556509f)*alu173)+-0.16666659712791443f)*alu173)+1.0f)*alu196));\n  float alu201 = ((alu20<30.0f)?(alu157*((((((((2.6083159809786594e-06f*alu165)+-0.00019810690719168633f)*alu165)+0.00833307858556509f)*alu165)+-0.16666659712791443f)*alu165)+1.0f)*alu193):(alu178*((((((((2.6083159809786594e-06f*alu179)+-0.00019810690719168633f)*alu179)+0.00833307858556509f)*alu179)+-0.16666659712791443f)*alu179)+1.0f)*alu197));\n  float alu202 = ((alu31<30.0f)?(alu158*((((((((2.6083159809786594e-06f*alu166)+-0.00019810690719168633f)*alu166)+0.00833307858556509f)*alu166)+-0.16666659712791443f)*alu166)+1.0f)*alu194):(alu184*((((((((2.6083159809786594e-06f*alu185)+-0.00019810690719168633f)*alu185)+0.00833307858556509f)*alu185)+-0.16666659712791443f)*alu185)+1.0f)*alu198));\n  float alu203 = ((alu42<30.0f)?(alu159*((((((((2.6083159809786594e-06f*alu167)+-0.00019810690719168633f)*alu167)+0.00833307858556509f)*alu167)+-0.16666659712791443f)*alu167)+1.0f)*alu195):(alu190*((((((((2.6083159809786594e-06f*alu191)+-0.00019810690719168633f)*alu191)+0.00833307858556509f)*alu191)+-0.16666659712791443f)*alu191)+1.0f)*alu199));\n  float alu204 = (alu2?(alu200*alu8):((float)(__builtin_nanf(\"\"))));\n  float alu205 = (alu1?((float)(__builtin_nanf(\"\"))):alu204);\n  float alu206 = (alu3?alu205:((float)(__builtin_nanf(\"\"))));\n  float alu207 = (alu13?(alu201*alu19):((float)(__builtin_nanf(\"\"))));\n  float alu208 = (alu12?((float)(__builtin_nanf(\"\"))):alu207);\n  float alu209 = (alu14?alu208:((float)(__builtin_nanf(\"\"))));\n  float alu210 = 
(alu24?(alu202*alu30):((float)(__builtin_nanf(\"\"))));\n  float alu211 = (alu23?((float)(__builtin_nanf(\"\"))):alu210);\n  float alu212 = (alu25?alu211:((float)(__builtin_nanf(\"\"))));\n  float alu213 = (alu35?(alu203*alu41):((float)(__builtin_nanf(\"\"))));\n  float alu214 = (alu34?((float)(__builtin_nanf(\"\"))):alu213);\n  float alu215 = (alu36?alu214:((float)(__builtin_nanf(\"\"))));\n  *((float4*)((data0_4+0))) = (float4){alu206,alu209,alu212,alu215};\n}\n"
  },
  {
    "path": "packages/rune/test/golden/jit_grad/grad_square.expected",
    "content": "typedef float float4 __attribute__((aligned(16),ext_vector_type(4)));\nvoid E_4(float* restrict data0_4, float* restrict data1_4) {\n  float4 val0 = (*((float4*)((data1_4+0))));\n  *((float4*)((data0_4+0))) = (float4){(val0[0]*2.0f),(val0[1]*2.0f),(val0[2]*2.0f),(val0[3]*2.0f)};\n}\n"
  },
  {
    "path": "packages/rune/test/golden/jit_grad/grad_sum.expected",
    "content": "\n"
  },
  {
    "path": "packages/rune/test/golden/jit_trace/add_const.expected",
    "content": "typedef float float4 __attribute__((aligned(16),ext_vector_type(4)));\nvoid E_64_4(float* restrict data0_256, float* restrict data1_256) {\n  for (int Lidx0 = 0; Lidx0 < 64; Lidx0++) {\n    int alu0 = (Lidx0<<2);\n    float4 val0 = (*((float4*)((data1_256+alu0))));\n    *((float4*)((data0_256+alu0))) = (float4){(val0[0]+1.0f),(val0[1]+1.0f),(val0[2]+1.0f),(val0[3]+1.0f)};\n  }\n}\n"
  },
  {
    "path": "packages/rune/test/golden/jit_trace/chain.expected",
    "content": "typedef float float4 __attribute__((aligned(16),ext_vector_type(4)));\nvoid E_64_4n2(float* restrict data0_256, float* restrict data1_256) {\n  for (int Lidx0 = 0; Lidx0 < 64; Lidx0++) {\n    int alu0 = (Lidx0<<2);\n    float4 val0 = (*((float4*)((data1_256+alu0))));\n    *((float4*)((data0_256+alu0))) = (float4){((val0[0]+1.0f)*2.0f),((val0[1]+1.0f)*2.0f),((val0[2]+1.0f)*2.0f),((val0[3]+1.0f)*2.0f)};\n  }\n}\n"
  },
  {
    "path": "packages/rune/test/golden/jit_trace/dune",
    "content": "(executable\n (name generate_actual)\n (libraries nx rune tolk tolk.ir tolk.cpu))\n\n(rule\n (package rune)\n (targets add_const.actual mul_self.actual sum.actual chain.actual)\n (action\n  (run ./generate_actual.exe .)))\n\n(rule\n (alias runtest)\n (package rune)\n (action\n  (progn\n   (diff add_const.expected add_const.actual)\n   (diff mul_self.expected mul_self.actual)\n   (diff sum.expected sum.actual)\n   (diff chain.expected chain.actual))))\n"
  },
  {
    "path": "packages/rune/test/golden/jit_trace/generate_actual.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Generates .actual files for JIT trace golden tests. Each file contains the\n   rendered C source from tracing a function through the JIT capture handler via\n   [Jit.trace_graph]. Dune diff rules compare .actual against .expected\n   (generated from the reference tinygrad pipeline). *)\n\nlet write_actual dir name content =\n  let filename = Filename.concat dir (name ^ \".actual\") in\n  let oc = open_out filename in\n  output_string oc content;\n  output_char oc '\\n';\n  close_out oc\n\nlet dev = Tolk_cpu.create \"CPU\"\n\nlet trace_source f x =\n  let traced = Rune.trace_graph ~device:dev f x in\n  String.concat \"\\n---\\n\" traced.rendered_source\n\n(* ── Test cases ── *)\n\n(* c = a + scalar(1.0), shape [256] *)\nlet trace_add_const () =\n  let x = Nx.full Nx.float32 [| 256 |] 0.0 in\n  trace_source (fun x -> Nx.add x (Nx.scalar Nx.float32 1.0)) x\n\n(* c = a * a, shape [256] *)\nlet trace_mul_self () =\n  let x = Nx.full Nx.float32 [| 256 |] 0.0 in\n  trace_source (fun x -> Nx.mul x x) x\n\n(* c = sum(a), shape [256] -> scalar *)\nlet trace_sum () =\n  let x = Nx.full Nx.float32 [| 256 |] 0.0 in\n  trace_source (fun x -> Nx.sum x) x\n\n(* c = (a + scalar(1.0)) * scalar(2.0), shape [256] *)\nlet trace_chain () =\n  let x = Nx.full Nx.float32 [| 256 |] 0.0 in\n  trace_source\n    (fun x ->\n      Nx.mul (Nx.add x (Nx.scalar Nx.float32 1.0)) (Nx.scalar Nx.float32 2.0))\n    x\n\ntype test_case = { name : string; generate : unit -> string }\n\nlet test_cases =\n  [\n    { name = \"add_const\"; generate = trace_add_const };\n    { name = \"mul_self\"; generate = trace_mul_self };\n    { name = \"sum\"; generate = trace_sum };\n    { name = \"chain\"; generate = trace_chain };\n  ]\n\nlet () 
=\n  let dir = Sys.argv.(1) in\n  let failed = ref false in\n  List.iter\n    (fun { name; generate } ->\n      match generate () with\n      | out -> write_actual dir name out\n      | exception exn ->\n          Printf.eprintf \"FAIL %s: %s\\n%!\" name (Printexc.to_string exn);\n          Printexc.print_backtrace stderr;\n          failed := true)\n    test_cases;\n  if !failed then exit 1\n"
  },
  {
    "path": "packages/rune/test/golden/jit_trace/generate_expected.py",
    "content": "#!/usr/bin/env python3\n\"\"\"Generate tinygrad reference .expected files for JIT trace golden tests.\n\nConstructs tensor-level UOp DAGs (matching what Rune's JIT capture handler\nwould produce) and runs them through tinygrad's\nget_kernel_graph + full_rewrite_to_sink + linearize + render pipeline.\n\nUsage:\n    uv run packages/rune/test/golden/jit_trace/generate_expected.py\n\nAfter running, commit the generated .expected files.\n\"\"\"\n\nimport os\nimport sys\n\nsys.path.insert(\n    0,\n    os.path.join(\n        os.path.dirname(__file__), \"..\", \"..\", \"..\", \"..\", \"..\", \"_tinygrad\"\n    ),\n)\n\nfrom tinygrad.uop.ops import UOp, Ops, KernelInfo, AxisType\nfrom tinygrad.dtype import dtypes\nfrom tinygrad.schedule.rangeify import get_kernel_graph\nfrom tinygrad.codegen import full_rewrite_to_sink, line_rewrite, pm_linearize_cleanups\nfrom tinygrad.codegen.late.linearizer import linearize\nfrom tinygrad.renderer.cstyle import ClangRenderer\n\nOUT_DIR = os.path.dirname(__file__)\n\nrenderer = ClangRenderer()\n\n\ndef write_expected(name, content):\n    path = os.path.join(OUT_DIR, f\"{name}.expected\")\n    with open(path, \"w\") as f:\n        f.write(content + \"\\n\")\n    print(f\"  wrote {path}\")\n\n\ndef render_kernel(ast, optimize=True):\n    \"\"\"Run full codegen pipeline on a kernel AST and return rendered source.\"\"\"\n    rewritten = full_rewrite_to_sink(ast, renderer, optimize=optimize)\n    lst = linearize(rewritten)\n    lst = line_rewrite(lst, pm_linearize_cleanups)\n    return renderer.render(lst).strip()\n\n\ndef get_source(sink, optimize=True):\n    \"\"\"Build tensor graph, rangeify, codegen, render all kernels.\"\"\"\n    kg = get_kernel_graph(sink)\n    sources = []\n    for u in kg.toposort():\n        if u.op is Ops.CALL and isinstance(u.src[0].arg, KernelInfo):\n            sources.append(render_kernel(u.src[0], optimize))\n    return \"\\n---\\n\".join(sources)\n\n\n# ── Helpers ──\n\n\ndef mk_shape(*dims):\n 
   if len(dims) == 1:\n        return UOp.const(dtypes.index, dims[0])\n    return UOp(\n        Ops.VECTORIZE,\n        dtypes.index.vec(len(dims)),\n        tuple(UOp.const(dtypes.index, d) for d in dims),\n    )\n\n\ndef mk_param(slot, *shape, dtype=dtypes.float32):\n    dev = UOp(Ops.DEVICE, arg=\"CPU\")\n    return UOp(Ops.PARAM, dtype, (mk_shape(*shape), dev), slot)\n\n\ndef wrap_sink(*srcs):\n    contigs = [UOp(Ops.CONTIGUOUS, s.dtype, (s,)) for s in srcs]\n    return UOp.sink(*contigs)\n\n\n# ── Test cases ──\n# Each matches a test case in generate_actual.ml\n\n\ndef broadcast_scalar(c, *shape):\n    \"\"\"Broadcast a scalar constant to a target shape via RESHAPE + EXPAND.\"\"\"\n    ones = tuple(1 for _ in shape)\n    reshaped = UOp(Ops.RESHAPE, c.dtype, (c, mk_shape(*ones)))\n    return UOp(Ops.EXPAND, c.dtype, (reshaped, mk_shape(*shape)))\n\n\ndef build_add_const():\n    \"\"\"c = a + scalar(1.0), shape [256].\n\n    The JIT handler captures Nx.scalar as a Const (shape []) and the Add\n    operates on shapes [256] + []. 
Tinygrad requires explicit broadcast,\n    so we reshape+expand the constant to [256] to match.\n    \"\"\"\n    a = mk_param(0, 256)\n    one = broadcast_scalar(UOp.const(dtypes.float32, 1.0), 256)\n    return wrap_sink(a + one)\n\n\ndef build_mul_self():\n    \"\"\"c = a * a, shape [256].\"\"\"\n    a = mk_param(0, 256)\n    return wrap_sink(a * a)\n\n\ndef build_sum():\n    \"\"\"c = sum(a), shape [256] -> scalar.\"\"\"\n    a = mk_param(0, 256)\n    red = UOp(Ops.REDUCE_AXIS, dtypes.float32, (a,), (Ops.ADD, (0,)))\n    return wrap_sink(red)\n\n\ndef build_chain():\n    \"\"\"c = (a + 1) * 2, shape [256].\n\n    Both constants are scalar and need broadcast to [256].\n    \"\"\"\n    a = mk_param(0, 256)\n    one = broadcast_scalar(UOp.const(dtypes.float32, 1.0), 256)\n    two = broadcast_scalar(UOp.const(dtypes.float32, 2.0), 256)\n    return wrap_sink((a + one) * two)\n\n\nTEST_CASES = [\n    (\"add_const\", build_add_const),\n    (\"mul_self\", build_mul_self),\n    (\"sum\", build_sum),\n    (\"chain\", build_chain),\n]\n\n\ndef main():\n    total = 0\n    for case_name, builder in TEST_CASES:\n        print(f\"\\n{case_name}:\")\n        sink = builder()\n        try:\n            src = get_source(sink)\n            write_expected(case_name, src)\n            total += 1\n        except Exception as e:\n            print(f\"  FAIL {case_name}: {e}\")\n            import traceback\n            traceback.print_exc()\n\n    print(f\"\\nDone. Generated {total} .expected files in {OUT_DIR}\")\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "packages/rune/test/golden/jit_trace/mul_self.expected",
    "content": "typedef float float4 __attribute__((aligned(16),ext_vector_type(4)));\nvoid E_64_4n1(float* restrict data0_256, float* restrict data1_256) {\n  for (int Lidx0 = 0; Lidx0 < 64; Lidx0++) {\n    int alu0 = (Lidx0<<2);\n    float4 val0 = (*((float4*)((data1_256+alu0))));\n    *((float4*)((data0_256+alu0))) = (float4){(val0[0]*val0[0]),(val0[1]*val0[1]),(val0[2]*val0[2]),(val0[3]*val0[3])};\n  }\n}\n"
  },
  {
    "path": "packages/rune/test/golden/jit_trace/sum.expected",
    "content": "typedef float float4 __attribute__((aligned(16),ext_vector_type(4)));\nvoid r_64_4(float* restrict data0_1, float* restrict data1_256) {\n  float acc0[1];\n  *(acc0+0) = 0.0f;\n  for (int Ridx0 = 0; Ridx0 < 64; Ridx0++) {\n    float4 val0 = (*((float4*)((data1_256+(Ridx0<<2)))));\n    *(acc0+0) = ((*(acc0+0))+val0[0]+val0[1]+val0[2]+val0[3]);\n  }\n  *(data0_1+0) = (*(acc0+0));\n}\n"
  },
  {
    "path": "packages/rune/test/support/dune",
    "content": "(library\n (name test_rune_support)\n (libraries nx rune windtrap))\n"
  },
  {
    "path": "packages/rune/test/support/test_rune_support.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Shared test utilities for Rune test suite *)\n\nopen Windtrap\n\n(* Check Rune tensors for approximate equality *)\nlet check_rune ?eps msg expected actual =\n  let testable =\n    match eps with\n    | None ->\n        Testable.make ~pp:Nx.pp\n          ~equal:(fun a b ->\n            if Nx.shape a <> Nx.shape b then false\n            else\n              let eq_tensor = Nx.array_equal a b in\n              (* array_equal returns a scalar boolean tensor *)\n              let result = Nx.item [] eq_tensor in\n              result)\n          ()\n    | Some eps ->\n        Testable.make ~pp:Nx.pp\n          ~equal:(fun a b ->\n            let diff = Nx.sub a b in\n            let abs_diff = Nx.abs diff in\n            let max_diff = Nx.max abs_diff in\n            let max_diff_val = Nx.item [] max_diff in\n            Float.compare max_diff_val eps < 0)\n          ()\n  in\n  equal ~msg testable expected actual\n\n(* Check scalar values *)\nlet check_scalar ?eps msg expected actual =\n  let eps = Option.value ~default:1e-6 eps in\n  equal ~msg (float eps) expected actual\n\n(* Extract scalar from Rune tensor *)\nlet scalar_value t = Nx.item [] t\n\n(* Check shape of Rune tensor *)\nlet check_shape msg expected_shape tensor =\n  equal ~msg (array int) expected_shape (Nx.shape tensor)\n\n(* Common failure checks *)\nlet check_invalid_arg msg pattern f = raises ~msg (Invalid_argument pattern) f\nlet check_failure msg pattern f = raises ~msg (Failure pattern) f\n"
  },
  {
    "path": "packages/rune/test/test_custom_diff.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Windtrap\nopen Test_rune_support\n\nmodule T = struct\n  include Nx\n  include Rune\nend\n\nlet eps = 1e-4\n\n(* ───── Custom VJP ───── *)\n\nlet test_custom_vjp_sin () =\n  (* Custom sin with explicit backward: d/dx sin(x) = cos(x) *)\n  let my_sin x =\n    T.custom_vjp\n      ~fwd:(fun x -> (T.sin x, x))\n      ~bwd:(fun x g -> T.mul g (T.cos x))\n      x\n  in\n  let x = T.scalar T.float32 1.0 in\n  let y = my_sin x in\n  check_scalar ~eps \"custom_vjp sin primal\" (Float.sin 1.0) (scalar_value y);\n  let grad_x = T.grad my_sin x in\n  check_scalar ~eps \"custom_vjp sin grad\" (Float.cos 1.0) (scalar_value grad_x)\n\nlet test_custom_vjp_surrogate () =\n  (* Surrogate gradient: forward uses sign, backward uses identity *)\n  let surrogate_sign x =\n    T.custom_vjp ~fwd:(fun x -> (T.sign x, ())) ~bwd:(fun () g -> g) x\n  in\n  let x = T.scalar T.float32 0.5 in\n  let y = surrogate_sign x in\n  check_scalar ~eps \"surrogate sign primal\" 1.0 (scalar_value y);\n  let grad_x = T.grad surrogate_sign x in\n  check_scalar ~eps \"surrogate sign grad\" 1.0 (scalar_value grad_x)\n\nlet test_custom_vjp_residuals () =\n  (* Use residuals to avoid recomputation in backward *)\n  let my_exp x =\n    T.custom_vjp\n      ~fwd:(fun x ->\n        let y = T.exp x in\n        (y, y))\n      ~bwd:(fun y g -> T.mul g y)\n      x\n  in\n  let x = T.scalar T.float32 2.0 in\n  let grad_x = T.grad my_exp x in\n  check_scalar ~eps \"custom_vjp residuals grad\" (Float.exp 2.0)\n    (scalar_value grad_x)\n\nlet test_custom_vjp_composition () =\n  (* custom_vjp composes with standard AD *)\n  let my_sin x =\n    T.custom_vjp\n      ~fwd:(fun x -> (T.sin x, x))\n      ~bwd:(fun x g -> T.mul g (T.cos x))\n      x\n  in\n  
let f x = T.mul (my_sin x) x in\n  let x = T.scalar T.float32 1.0 in\n  (* d/dx (sin(x) * x) = cos(x) * x + sin(x) *)\n  let expected = (Float.cos 1.0 *. 1.0) +. Float.sin 1.0 in\n  let grad_x = T.grad f x in\n  check_scalar ~eps \"custom_vjp composition grad\" expected (scalar_value grad_x)\n\n(* ───── Custom VJPs (multi-input) ───── *)\n\nlet test_custom_vjps_mul () =\n  (* Custom mul with explicit backward *)\n  let my_mul xs =\n    T.custom_vjps\n      ~fwd:(fun xs ->\n        match xs with\n        | [ a; b ] -> (T.mul a b, (a, b))\n        | _ -> failwith \"expected 2 inputs\")\n      ~bwd:(fun (a, b) g -> [ T.mul g b; T.mul g a ])\n      xs\n  in\n  let x = T.scalar T.float32 3.0 in\n  let y = T.scalar T.float32 4.0 in\n  let result = my_mul [ x; y ] in\n  check_scalar ~eps \"custom_vjps mul primal\" 12.0 (scalar_value result);\n  let grads =\n    T.grads\n      (fun xs ->\n        match xs with [ a; b ] -> my_mul [ a; b ] | _ -> failwith \"bad\")\n      [ x; y ]\n  in\n  check_scalar ~eps \"custom_vjps mul grad_x\" 4.0\n    (scalar_value (List.nth grads 0));\n  check_scalar ~eps \"custom_vjps mul grad_y\" 3.0\n    (scalar_value (List.nth grads 1))\n\n(* ───── Custom JVP ───── *)\n\nlet test_custom_jvp_sin () =\n  (* Custom sin with explicit tangent rule *)\n  let my_sin x =\n    T.custom_jvp\n      ~fwd:(fun x -> T.sin x)\n      ~jvp_rule:(fun x dx ->\n        let y = T.sin x in\n        let dy = T.mul dx (T.cos x) in\n        (y, dy))\n      x\n  in\n  let x = T.scalar T.float32 1.0 in\n  let v = T.scalar T.float32 1.0 in\n  let primal, tangent = T.jvp my_sin x v in\n  check_scalar ~eps \"custom_jvp sin primal\" (Float.sin 1.0)\n    (scalar_value primal);\n  check_scalar ~eps \"custom_jvp sin tangent\" (Float.cos 1.0)\n    (scalar_value tangent)\n\nlet test_custom_jvps_mul () =\n  (* Custom mul with explicit tangent rule for multiple inputs *)\n  let my_mul xs =\n    T.custom_jvps\n      ~fwd:(fun xs ->\n        match xs with\n        | [ a; b ] -> T.mul a 
b\n        | _ -> failwith \"expected 2 inputs\")\n      ~jvp_rule:(fun xs dxs ->\n        match (xs, dxs) with\n        | [ a; b ], [ da; db ] ->\n            let y = T.mul a b in\n            let dy = T.add (T.mul da b) (T.mul a db) in\n            (y, dy)\n        | _ -> failwith \"expected 2 inputs\")\n      xs\n  in\n  let x = T.scalar T.float32 3.0 in\n  let y = T.scalar T.float32 4.0 in\n  let vx = T.scalar T.float32 1.0 in\n  let vy = T.scalar T.float32 0.5 in\n  let primal, tangent = T.jvps my_mul [ x; y ] [ vx; vy ] in\n  check_scalar ~eps \"custom_jvps mul primal\" 12.0 (scalar_value primal);\n  (* tangent = da*b + a*db = 1*4 + 3*0.5 = 5.5 *)\n  check_scalar ~eps \"custom_jvps mul tangent\" 5.5 (scalar_value tangent)\n\n(* ───── Fallthrough behavior ───── *)\n\nlet test_custom_vjp_under_jvp () =\n  (* custom_vjp under JVP should trace through fwd normally *)\n  let my_sin x =\n    T.custom_vjp\n      ~fwd:(fun x -> (T.sin x, x))\n      ~bwd:(fun _x _g -> failwith \"bwd should not be called under JVP\")\n      x\n  in\n  let x = T.scalar T.float32 1.0 in\n  let v = T.scalar T.float32 1.0 in\n  let primal, tangent = T.jvp my_sin x v in\n  check_scalar ~eps \"custom_vjp under jvp primal\" (Float.sin 1.0)\n    (scalar_value primal);\n  check_scalar ~eps \"custom_vjp under jvp tangent\" (Float.cos 1.0)\n    (scalar_value tangent)\n\nlet test_custom_jvp_under_vjp () =\n  (* custom_jvp under VJP should trace through fwd normally *)\n  let my_sin x =\n    T.custom_jvp\n      ~fwd:(fun x -> T.sin x)\n      ~jvp_rule:(fun _x _dx ->\n        failwith \"jvp_rule should not be called under VJP\")\n      x\n  in\n  let x = T.scalar T.float32 1.0 in\n  let grad_x = T.grad my_sin x in\n  check_scalar ~eps \"custom_jvp under vjp grad\" (Float.cos 1.0)\n    (scalar_value grad_x)\n\nlet test_custom_vjp_no_ad () =\n  (* custom_vjp outside AD should just compute fwd *)\n  let my_sin x =\n    T.custom_vjp\n      ~fwd:(fun x -> (T.sin x, ()))\n      ~bwd:(fun () _g -> failwith 
\"bwd should not be called outside AD\")\n      x\n  in\n  let x = T.scalar T.float32 1.0 in\n  let y = my_sin x in\n  check_scalar ~eps \"custom_vjp no ad\" (Float.sin 1.0) (scalar_value y)\n\nlet test_custom_jvp_no_ad () =\n  (* custom_jvp outside AD should just compute fwd *)\n  let my_sin x =\n    T.custom_jvp\n      ~fwd:(fun x -> T.sin x)\n      ~jvp_rule:(fun _x _dx ->\n        failwith \"jvp_rule should not be called outside AD\")\n      x\n  in\n  let x = T.scalar T.float32 1.0 in\n  let y = my_sin x in\n  check_scalar ~eps \"custom_jvp no ad\" (Float.sin 1.0) (scalar_value y)\n\n(* ───── Higher-order derivatives ───── *)\n\nlet test_custom_vjp_higher_order () =\n  (* grad(grad(f)) should work with custom_vjp *)\n  let my_sin x =\n    T.custom_vjp\n      ~fwd:(fun x -> (T.sin x, x))\n      ~bwd:(fun x g -> T.mul g (T.cos x))\n      x\n  in\n  let x = T.scalar T.float32 1.0 in\n  (* d²/dx² sin(x) = -sin(x) *)\n  let grad2 = T.grad (T.grad my_sin) x in\n  check_scalar ~eps \"custom_vjp higher order\"\n    (-.Float.sin 1.0)\n    (scalar_value grad2)\n\n(* ───── Multidimensional tensors ───── *)\n\nlet test_custom_vjp_array () =\n  (* custom_vjp works on non-scalar tensors *)\n  let my_square x =\n    T.custom_vjp\n      ~fwd:(fun x -> (T.mul x x, x))\n      ~bwd:(fun x g -> T.mul g (T.mul (T.scalar T.float32 2.0) x))\n      x\n  in\n  let x = T.create T.float32 [| 2; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6. |] in\n  let grad_x = T.grad (fun x -> T.sum (my_square x)) x in\n  check_shape \"custom_vjp array shape\" [| 2; 3 |] grad_x;\n  (* d/dx sum(x²) = 2x *)\n  (* d/dx_i sum(x²) = 2*x_i *)\n  let expected = T.create T.float32 [| 2; 3 |] [| 2.; 4.; 6.; 8.; 10.; 12. 
|] in\n  check_scalar ~eps \"custom_vjp array max diff\" 0.0\n    (scalar_value (T.max (T.abs (T.sub grad_x expected))))\n\n(* ───── Gradient checking ───── *)\n\nlet test_custom_vjp_gradcheck () =\n  (* Verify custom VJP agrees with finite differences *)\n  let my_square x =\n    T.custom_vjp\n      ~fwd:(fun x -> (T.mul x x, x))\n      ~bwd:(fun x g -> T.mul g (T.mul (T.scalar T.float32 2.0) x))\n      x\n  in\n  let x = T.scalar T.float32 3.0 in\n  let result = T.check_gradient my_square x in\n  match result with\n  | `Pass _ -> ()\n  | `Fail r ->\n      Windtrap.fail\n        (Printf.sprintf \"custom_vjp gradcheck failed: max_abs_error=%f\"\n           r.max_abs_error)\n\n(* ───── Test suite ───── *)\n\nlet tests =\n  [\n    group \"custom vjp\"\n      [\n        test \"sin\" test_custom_vjp_sin;\n        test \"surrogate gradient\" test_custom_vjp_surrogate;\n        test \"residuals\" test_custom_vjp_residuals;\n        test \"composition\" test_custom_vjp_composition;\n      ];\n    group \"custom vjps\" [ test \"multi-input mul\" test_custom_vjps_mul ];\n    group \"custom jvp\"\n      [\n        test \"sin\" test_custom_jvp_sin;\n        test \"multi-input mul\" test_custom_jvps_mul;\n      ];\n    group \"fallthrough\"\n      [\n        test \"custom_vjp under jvp\" test_custom_vjp_under_jvp;\n        test \"custom_jvp under vjp\" test_custom_jvp_under_vjp;\n        test \"custom_vjp no ad\" test_custom_vjp_no_ad;\n        test \"custom_jvp no ad\" test_custom_jvp_no_ad;\n      ];\n    group \"higher-order\" [ test \"grad of grad\" test_custom_vjp_higher_order ];\n    group \"multidimensional\" [ test \"array grad\" test_custom_vjp_array ];\n    group \"gradient checking\"\n      [ test \"custom_vjp gradcheck\" test_custom_vjp_gradcheck ];\n  ]\n\nlet () = run \"Rune Custom Diff Tests\" tests\n"
  },
  {
    "path": "packages/rune/test/test_gradcheck.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Windtrap\nopen Test_rune_support\n\nmodule T = struct\n  include Nx\n  include Rune\nend\n\n(* ───── Test Finite Differences ───── *)\n\nlet test_finite_diff_simple () =\n  let x = T.scalar T.float32 2.0 in\n  let f x = T.mul x x in\n  let grad_fd = T.finite_diff f x in\n  check_scalar ~eps:1e-2 \"finite_diff(x²) at x=2\" 4.0 (scalar_value grad_fd)\n\nlet test_finite_diff_polynomial () =\n  let x = T.scalar T.float32 3.0 in\n  let f x =\n    let x2 = T.mul x x in\n    let x3 = T.mul x2 x in\n    T.add x3 (T.mul (T.scalar T.float32 2.0) x2)\n  in\n  let grad_fd = T.finite_diff f x in\n  (* Derivative of x³ + 2x² is 3x² + 4x = 3*9 + 4*3 = 27 + 12 = 39 *)\n  check_scalar ~eps:0.1 \"finite_diff(x³ + 2x²) at x=3\" 39.0\n    (scalar_value grad_fd)\n\nlet test_finite_diff_vector () =\n  let x = T.create T.float32 [| 3 |] [| 1.; 2.; 3. |] in\n  let f x = T.sum (T.mul x x) in\n  let grad_fd = T.finite_diff f x in\n  let expected = T.create T.float32 [| 3 |] [| 2.; 4.; 6. 
|] in\n  check_rune ~eps:1e-2 \"finite_diff vector gradient\" expected grad_fd\n\nlet test_finite_diff_methods () =\n  let x = T.scalar T.float32 1.0 in\n  let f = T.exp in\n\n  let grad_central = T.finite_diff ~method_:`Central f x in\n  let grad_forward = T.finite_diff ~method_:`Forward f x in\n  let grad_backward = T.finite_diff ~method_:`Backward f x in\n\n  let exp_1 = exp 1.0 in\n  check_scalar ~eps:1e-3 \"central difference exp'(1)\" exp_1\n    (scalar_value grad_central);\n  check_scalar ~eps:1e-2 \"forward difference exp'(1)\" exp_1\n    (scalar_value grad_forward);\n  check_scalar ~eps:1e-2 \"backward difference exp'(1)\" exp_1\n    (scalar_value grad_backward)\n\n(* ───── Test Gradient Checking ───── *)\n\nlet test_check_gradient_pass () =\n  let x = T.create T.float32 [| 2; 2 |] [| 1.; 2.; 3.; 4. |] in\n  let f x = T.sum (T.mul x x) in\n\n  match T.check_gradient ~verbose:false f x with\n  | `Pass result ->\n      equal ~msg:\"gradient check passed\" bool true result.passed;\n      equal ~msg:\"no failed indices\" bool true (result.failed_indices = [])\n  | `Fail _ -> fail \"Expected gradient check to pass\"\n\nlet test_check_gradient_fail () =\n  let x = T.scalar T.float32 2.0 in\n  let f x =\n    let wrong_grad = T.mul x (T.scalar T.float32 3.0) in\n    wrong_grad\n  in\n\n  let _grad_with_bug _f _x = T.scalar T.float32 2.0 in\n\n  let autodiff_grad = T.grad f x in\n  let finite_diff_grad = T.finite_diff f x in\n\n  check_scalar ~eps:1e-3 \"autodiff gives 3.0\" 3.0 (scalar_value autodiff_grad);\n  check_scalar ~eps:5e-3 \"finite_diff gives 3.0\" 3.0\n    (scalar_value finite_diff_grad)\n\nlet test_check_gradient_tolerances () =\n  let x = T.scalar T.float32 1.0 in\n  let f x = T.sin x in\n\n  match T.check_gradient ~rtol:1e-4 ~atol:1e-5 f x with\n  | `Pass result ->\n      equal ~msg:\"gradient check with tight tolerances\" bool true result.passed\n  | `Fail result ->\n      Printf.printf \"Max abs error: %.2e, Max rel error: %.2e\\n\"\n        
result.max_abs_error result.max_rel_error;\n      fail \"Gradient check failed unexpectedly\"\n\nlet test_check_gradient_complex () =\n  let x = T.create T.float32 [| 2 |] [| 0.5; 1.5 |] in\n  let f x =\n    let exp_x = T.exp x in\n    let sin_x = T.sin x in\n    let prod = T.mul exp_x sin_x in\n    T.sum prod\n  in\n\n  match T.check_gradient ~verbose:false f x with\n  | `Pass result ->\n      equal ~msg:\"complex function gradient check\" bool true result.passed;\n      equal ~msg:\"low relative error\" bool true (result.max_rel_error < 1e-3)\n  | `Fail result ->\n      Printf.printf \"Failed: max_rel_error = %.2e\\n\" result.max_rel_error;\n      fail \"Complex gradient check failed\"\n\nlet test_check_gradients_multiple () =\n  let x1 = T.scalar T.float32 2.0 in\n  let x2 = T.scalar T.float32 3.0 in\n  let f xs =\n    match xs with [ a; b ] -> T.mul a b | _ -> failwith \"Expected 2 inputs\"\n  in\n\n  match T.check_gradients ~verbose:false f [ x1; x2 ] with\n  | `Pass results ->\n      equal ~msg:\"number of results\" int 2 (List.length results);\n      List.iter\n        (fun r -> equal ~msg:\"each gradient passed\" bool true r.T.passed)\n        results\n  | `Fail _ -> fail \"Expected multiple gradients check to pass\"\n\nlet test_check_gradient_matrix () =\n  let x = T.create T.float32 [| 2; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6. |] in\n  let f x =\n    let xt = T.transpose x in\n    let result = T.matmul x xt in\n    T.sum result\n  in\n\n  (* Matrix operations need looser tolerance due to float32 precision *)\n  match T.check_gradient ~verbose:false ~rtol:1e-2 ~atol:1e-2 f x with\n  | `Pass result ->\n      equal ~msg:\"matrix operation gradient check\" bool true result.passed\n  | `Fail result ->\n      Printf.printf \"Matrix gradient check failed: max_rel_error = %.2e\\n\"\n        result.max_rel_error;\n      fail \"Matrix gradient check failed\"\n\nlet test_finite_diff_jacobian () =\n  let x = T.create T.float32 [| 2 |] [| 1.; 2. 
|] in\n  let f x =\n    (* Simple function that returns a 2-element vector *)\n    (* f(x) = [x1 + x2, x1 * x2] where x = [x1, x2] *)\n    let x1 = T.get [ 0 ] x in\n    let x2 = T.get [ 1 ] x in\n    let sum = T.add x1 x2 in\n    let prod = T.mul x1 x2 in\n    (* Create result manually without stack *)\n    let result = T.zeros T.float32 [| 2 |] in\n    T.set [ 0 ] result sum;\n    T.set [ 1 ] result prod;\n    result\n  in\n\n  let jacobian = T.finite_diff_jacobian f x in\n  let expected_shape = [| 2; 2 |] in\n  equal ~msg:\"jacobian shape\" (array int) expected_shape (T.shape jacobian)\n\n(* ───── Test Suite ───── *)\n\nlet () =\n  run \"Gradient Checking\"\n    [\n      group \"finite_diff\"\n        [\n          test \"simple quadratic\" test_finite_diff_simple;\n          test \"polynomial\" test_finite_diff_polynomial;\n          test \"vector gradient\" test_finite_diff_vector;\n          test \"different methods\" test_finite_diff_methods;\n          test \"jacobian\" test_finite_diff_jacobian;\n        ];\n      group \"check_gradient\"\n        [\n          test \"passing check\" test_check_gradient_pass;\n          test \"verify correctness\" test_check_gradient_fail;\n          test \"tolerance settings\" test_check_gradient_tolerances;\n          test \"complex function\" test_check_gradient_complex;\n          test \"multiple inputs\" test_check_gradients_multiple;\n          test \"matrix operations\" test_check_gradient_matrix;\n        ];\n    ]\n"
  },
  {
    "path": "packages/rune/test/test_jacobian.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Windtrap\n\nlet f64 = Nx.float64\n\n(* Test: Jacobian of linear function f(x) = A*x + b is exactly A *)\nlet test_jacfwd_linear () =\n  let a = Nx.create f64 [| 2; 3 |] [| 1.0; 2.0; 3.0; 4.0; 5.0; 6.0 |] in\n  let b = Nx.create f64 [| 2 |] [| 10.0; 20.0 |] in\n  let f x =\n    Nx.add (Nx.reshape [| 2 |] (Nx.matmul a (Nx.reshape [| 3; 1 |] x))) b\n  in\n  let x = Nx.create f64 [| 3 |] [| 1.0; 2.0; 3.0 |] in\n  let j = Rune.jacfwd f x in\n  for i = 0 to 1 do\n    for k = 0 to 2 do\n      is_true\n        ~msg:(Printf.sprintf \"jacfwd J[%d,%d] = A[%d,%d]\" i k i k)\n        (Float.abs (Nx.item [ i; k ] j -. Nx.item [ i; k ] a) < 1e-10)\n    done\n  done\n\nlet test_jacrev_linear () =\n  let a = Nx.create f64 [| 2; 3 |] [| 1.0; 2.0; 3.0; 4.0; 5.0; 6.0 |] in\n  let b = Nx.create f64 [| 2 |] [| 10.0; 20.0 |] in\n  let f x =\n    Nx.add (Nx.reshape [| 2 |] (Nx.matmul a (Nx.reshape [| 3; 1 |] x))) b\n  in\n  let x = Nx.create f64 [| 3 |] [| 1.0; 2.0; 3.0 |] in\n  let j = Rune.jacrev f x in\n  for i = 0 to 1 do\n    for k = 0 to 2 do\n      is_true\n        ~msg:(Printf.sprintf \"jacrev J[%d,%d] = A[%d,%d]\" i k i k)\n        (Float.abs (Nx.item [ i; k ] j -. 
Nx.item [ i; k ] a) < 1e-10)\n    done\n  done\n\n(* Test: jacfwd and jacrev produce the same result on nonlinear f *)\nlet test_jacfwd_jacrev_agree () =\n  let f x =\n    let x0 = Nx.slice [ I 0 ] x in\n    let x1 = Nx.slice [ I 1 ] x in\n    Nx.stack ~axis:0\n      [ Nx.mul x0 x1; Nx.add (Nx.square x0) (Nx.sin x1); Nx.exp x1 ]\n  in\n  let x = Nx.create f64 [| 2 |] [| 1.5; 0.7 |] in\n  let j_fwd = Rune.jacfwd f x in\n  let j_rev = Rune.jacrev f x in\n  let shape_fwd = Nx.shape j_fwd in\n  let shape_rev = Nx.shape j_rev in\n  is_true ~msg:\"same shape[0]\" (shape_fwd.(0) = shape_rev.(0));\n  is_true ~msg:\"same shape[1]\" (shape_fwd.(1) = shape_rev.(1));\n  for i = 0 to shape_fwd.(0) - 1 do\n    for k = 0 to shape_fwd.(1) - 1 do\n      is_true\n        ~msg:(Printf.sprintf \"jacfwd[%d,%d] = jacrev[%d,%d]\" i k i k)\n        (Float.abs (Nx.item [ i; k ] j_fwd -. Nx.item [ i; k ] j_rev) < 1e-10)\n    done\n  done\n\nlet () =\n  run \"Jacobian\"\n    [\n      test \"jacfwd: linear function\" test_jacfwd_linear;\n      test \"jacrev: linear function\" test_jacrev_linear;\n      test \"jacfwd and jacrev agree\" test_jacfwd_jacrev_agree;\n    ]\n"
  },
  {
    "path": "packages/rune/test/test_jit.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Test suite for the JIT effect handler.\n\n   Tests are split into two groups: - Without device: tests the graph building\n   and state machine - With device: tests the full pipeline (build → compile →\n   execute → replay)\n\n   Without a device, capture should fail (no eager fallback). *)\n\nopen Windtrap\nopen Test_rune_support\n\nmodule T = struct\n  include Nx\n  include Rune\nend\n\nlet eps = 1e-5\n\n(* ───── Without device: graph capture + state machine ───── *)\n\nlet test_jit_no_device_fails () =\n  (* JIT without device should fail on capture *)\n  let f x = T.add x (T.scalar T.float32 1.0) in\n  let f_jit = T.jit f in\n  let x = T.scalar T.float32 5.0 in\n  let _ = f_jit x in\n  (* warmup: ok, runs eagerly *)\n  let raised = ref false in\n  (try ignore (f_jit x) (* capture: should fail, no device *)\n   with Failure _ -> raised := true);\n  is_true !raised\n\nlet test_jit_warmup_is_eager () =\n  (* First call should execute eagerly and return correct result *)\n  let f x = T.add x (T.scalar T.float32 1.0) in\n  let f_jit = T.jit f in\n  let x = T.scalar T.float32 5.0 in\n  let result = f_jit x in\n  check_scalar ~eps \"warmup result\" 6.0 (scalar_value result)\n\nlet test_jit_warmup_calls_f () =\n  let called = ref false in\n  let f x =\n    called := true;\n    T.add x (T.scalar T.float32 1.0)\n  in\n  let f_jit = T.jit f in\n  let x = T.scalar T.float32 0.0 in\n  let _ = f_jit x in\n  is_true !called\n\n(* ───── With device: full pipeline ───── *)\n\n(* To test the full pipeline, we need a Tolk device. The CPU device is available\n   via tolk.cpu. These tests will only run when a device is available. 
*)\n\nlet get_cpu_device () : Tolk.Device.t option =\n  try Some (Tolk_cpu.create \"CPU\") with _ -> None\n\nlet test_jit_capture_compiles () =\n  match get_cpu_device () with\n  | None -> () (* skip: no device *)\n  | Some dev ->\n      let f x = T.add x (T.scalar T.float32 1.0) in\n      let f_jit = T.jit ~device:dev f in\n      let x = T.scalar T.float32 5.0 in\n      let _ = f_jit x in\n      (* warmup *)\n      (* Capture should build graph, compile, and return result *)\n      let result = f_jit x in\n      check_scalar ~eps \"capture result\" 6.0 (scalar_value result)\n\nlet test_jit_replay_no_recompile () =\n  match get_cpu_device () with\n  | None -> ()\n  | Some dev ->\n      let call_count = ref 0 in\n      let f x =\n        incr call_count;\n        T.add x (T.scalar T.float32 1.0)\n      in\n      let f_jit = T.jit ~device:dev f in\n      let x = T.scalar T.float32 5.0 in\n      let _ = f_jit x in\n      (* warmup: f called *)\n      equal int 1 !call_count;\n      let _ = f_jit x in\n      (* capture: f called under handler *)\n      equal int 2 !call_count;\n      let _ = f_jit x in\n      (* replay: f should NOT be called *)\n      equal int 2 !call_count\n\nlet test_jit_replay_different_values () =\n  match get_cpu_device () with\n  | None -> ()\n  | Some dev ->\n      let f x = T.mul x (T.scalar T.float32 3.0) in\n      let f_jit = T.jit ~device:dev f in\n      let _ = f_jit (T.scalar T.float32 2.0) in\n      (* warmup *)\n      let _ = f_jit (T.scalar T.float32 2.0) in\n      (* capture *)\n      let result = f_jit (T.scalar T.float32 7.0) in\n      (* replay *)\n      check_scalar ~eps \"replay 7*3\" 21.0 (scalar_value result)\n\nlet test_jit_shape_mismatch_rejected () =\n  match get_cpu_device () with\n  | None -> ()\n  | Some dev ->\n      let f x = T.add x x in\n      let f_jit = T.jit ~device:dev f in\n      let x4 = T.full T.float32 [| 4 |] 1.0 in\n      let x8 = T.full T.float32 [| 8 |] 1.0 in\n      let _ = f_jit x4 in\n      (* warmup 
*)\n      let _ = f_jit x4 in\n      (* capture *)\n      let raised = ref false in\n      (try ignore (f_jit x8) with Invalid_argument _ -> raised := true);\n      is_true !raised\n\nlet test_jit_chain () =\n  match get_cpu_device () with\n  | None -> ()\n  | Some dev ->\n      let f x =\n        let y = T.add x (T.scalar T.float32 1.0) in\n        T.mul y (T.scalar T.float32 2.0)\n      in\n      let f_jit = T.jit ~device:dev f in\n      let x = T.scalar T.float32 4.0 in\n      let _ = f_jit x in\n      (* warmup *)\n      let _ = f_jit x in\n      (* capture *)\n      let result = f_jit x in\n      (* replay *)\n      check_scalar ~eps \"chain (4+1)*2\" 10.0 (scalar_value result)\n\nlet test_jit_reduce () =\n  match get_cpu_device () with\n  | None -> ()\n  | Some dev ->\n      let f x = T.sum ~axes:[ 0 ] x in\n      let f_jit = T.jit ~device:dev f in\n      let x = T.full T.float32 [| 4 |] 3.0 in\n      let _ = f_jit x in\n      let _ = f_jit x in\n      let result = f_jit x in\n      check_scalar ~eps \"sum [3;3;3;3]\" 12.0 (scalar_value result)\n\n(* ───── Test runner ───── *)\n\nlet () =\n  run \"JIT\"\n    [\n      group \"no device\"\n        [\n          test \"warmup is eager\" test_jit_warmup_is_eager;\n          test \"warmup calls f\" test_jit_warmup_calls_f;\n          test \"no device fails on capture\" test_jit_no_device_fails;\n        ];\n      group \"with device\"\n        [\n          test \"capture compiles\" test_jit_capture_compiles;\n          test \"replay without recompile\" test_jit_replay_no_recompile;\n          test \"replay different values\" test_jit_replay_different_values;\n          test \"shape mismatch rejected\" test_jit_shape_mismatch_rejected;\n          test \"chain (x+1)*2\" test_jit_chain;\n          test \"reduce sum\" test_jit_reduce;\n        ];\n    ]\n"
  },
  {
    "path": "packages/rune/test/test_jit_grad.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Test suite for JIT + grad composition.\n\n   Each test verifies that jit(grad(f))(x) produces the same result as\n   grad(f)(x), ensuring that automatic differentiation composes correctly with\n   JIT compilation. *)\n\nopen Windtrap\nopen Test_rune_support\n\nmodule T = struct\n  include Nx\n  include Rune\nend\n\nlet eps = 1e-4\n\nlet get_cpu_device () : Tolk.Device.t option =\n  try Some (Tolk_cpu.create \"CPU\") with _ -> None\n\n(* ───── jit(grad(f)) vs grad(f) ───── *)\n\nlet test_jit_grad_square () =\n  match get_cpu_device () with\n  | None -> ()\n  | Some dev ->\n      (* f(x) = sum(x * x), grad = 2 * x *)\n      let f x = T.sum (T.mul x x) in\n      let x = T.full T.float32 [| 4 |] 3.0 in\n      let expected = T.grad f x in\n      let grad_jit = T.jit ~device:dev (T.grad f) in\n      let _ = grad_jit x in\n      (* warmup *)\n      let _ = grad_jit x in\n      (* capture *)\n      let result = grad_jit x in\n      (* replay *)\n      check_rune ~eps \"jit(grad(sum(x*x)))\" expected result\n\nlet test_jit_grad_add_const () =\n  match get_cpu_device () with\n  | None -> ()\n  | Some dev ->\n      (* f(x) = sum(x + 1), grad = all ones *)\n      let f x = T.sum (T.add x (T.scalar T.float32 1.0)) in\n      let x = T.full T.float32 [| 4 |] 2.0 in\n      let expected = T.grad f x in\n      let grad_jit = T.jit ~device:dev (T.grad f) in\n      let _ = grad_jit x in\n      let _ = grad_jit x in\n      let result = grad_jit x in\n      check_rune ~eps \"jit(grad(sum(x+1)))\" expected result\n\nlet test_jit_grad_sin () =\n  match get_cpu_device () with\n  | None -> ()\n  | Some dev ->\n      (* f(x) = sum(sin(x)), grad = cos(x) *)\n      let f x = T.sum (T.sin x) in\n      let x = T.full 
T.float32 [| 4 |] 1.0 in\n      let expected = T.grad f x in\n      let grad_jit = T.jit ~device:dev (T.grad f) in\n      let _ = grad_jit x in\n      let _ = grad_jit x in\n      let result = grad_jit x in\n      check_rune ~eps \"jit(grad(sum(sin(x))))\" expected result\n\nlet test_jit_grad_polynomial () =\n  match get_cpu_device () with\n  | None -> ()\n  | Some dev ->\n      (* f(x) = sum((x + 1) * x) = sum(x^2 + x), grad = 2x + 1 *)\n      let f x = T.sum (T.mul (T.add x (T.scalar T.float32 1.0)) x) in\n      let x = T.full T.float32 [| 4 |] 2.0 in\n      let expected = T.grad f x in\n      let grad_jit = T.jit ~device:dev (T.grad f) in\n      let _ = grad_jit x in\n      let _ = grad_jit x in\n      let result = grad_jit x in\n      check_rune ~eps \"jit(grad(sum((x+1)*x)))\" expected result\n\nlet test_jit_grad_cube () =\n  match get_cpu_device () with\n  | None -> ()\n  | Some dev ->\n      (* f(x) = sum(x * x * x), grad = 3 * x^2 *)\n      let f x = T.sum (T.mul (T.mul x x) x) in\n      let x = T.full T.float32 [| 4 |] 2.0 in\n      let expected = T.grad f x in\n      let grad_jit = T.jit ~device:dev (T.grad f) in\n      let _ = grad_jit x in\n      let _ = grad_jit x in\n      let result = grad_jit x in\n      check_rune ~eps \"jit(grad(sum(x*x*x)))\" expected result\n\nlet test_jit_grad_replay_different_input () =\n  match get_cpu_device () with\n  | None -> ()\n  | Some dev ->\n      (* Verify replay produces correct result for different input values *)\n      let f x = T.sum (T.mul x x) in\n      let x1 = T.full T.float32 [| 4 |] 3.0 in\n      let x2 = T.full T.float32 [| 4 |] 5.0 in\n      let grad_jit = T.jit ~device:dev (T.grad f) in\n      let _ = grad_jit x1 in\n      (* warmup *)\n      let _ = grad_jit x1 in\n      (* capture *)\n      let expected = T.grad f x2 in\n      let result = grad_jit x2 in\n      (* replay with different input *)\n      check_rune ~eps \"jit(grad(sum(x*x))) replay\" expected result\n\n(* ───── Test runner ───── 
*)\n\nlet () =\n  run \"JIT + grad\"\n    [\n      group \"jit(grad(f))\"\n        [\n          test \"sum(x*x)\" test_jit_grad_square;\n          test \"sum(x+1)\" test_jit_grad_add_const;\n          test \"sum(sin(x))\" test_jit_grad_sin;\n          test \"sum((x+1)*x)\" test_jit_grad_polynomial;\n          test \"sum(x*x*x)\" test_jit_grad_cube;\n          test \"replay different input\" test_jit_grad_replay_different_input;\n        ];\n    ]\n"
  },
  {
    "path": "packages/rune/test/test_jit_vmap.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Test suite for JIT + vmap composition.\n\n   Each test verifies that jit(vmap(f))(x) produces the same result as\n   vmap(f)(x), ensuring that vectorized mapping composes correctly with JIT\n   compilation. *)\n\nopen Windtrap\nopen Test_rune_support\n\nmodule T = struct\n  include Nx\n  include Rune\nend\n\nlet eps = 1e-4\n\nlet get_cpu_device () : Tolk.Device.t option =\n  try Some (Tolk_cpu.create \"CPU\") with _ -> None\n\n(* ───── jit(vmap(f)) vs vmap(f) ───── *)\n\nlet test_jit_vmap_mul_scalar () =\n  match get_cpu_device () with\n  | None -> ()\n  | Some dev ->\n      (* f(x) = x * 2, vmapped over batch dim *)\n      let f x = T.mul x (T.scalar T.float32 2.0) in\n      let x = T.full T.float32 [| 3; 4 |] 3.0 in\n      let expected = T.vmap f x in\n      let vmap_jit = T.jit ~device:dev (T.vmap f) in\n      let _ = vmap_jit x in\n      (* warmup *)\n      let _ = vmap_jit x in\n      (* capture *)\n      let result = vmap_jit x in\n      (* replay *)\n      check_rune ~eps \"jit(vmap(x*2))\" expected result\n\nlet test_jit_vmap_self_add () =\n  match get_cpu_device () with\n  | None -> ()\n  | Some dev ->\n      (* f(x) = x + x, vmapped *)\n      let f x = T.add x x in\n      let x = T.full T.float32 [| 3; 4 |] 2.0 in\n      let expected = T.vmap f x in\n      let vmap_jit = T.jit ~device:dev (T.vmap f) in\n      let _ = vmap_jit x in\n      let _ = vmap_jit x in\n      let result = vmap_jit x in\n      check_rune ~eps \"jit(vmap(x+x))\" expected result\n\nlet test_jit_vmap_sum () =\n  match get_cpu_device () with\n  | None -> ()\n  | Some dev ->\n      (* f(x) = sum(x), vmapped: reduce per-batch element *)\n      let f x = T.sum x in\n      let x = T.full T.float32 [| 3; 4 |] 1.0 in\n     
 let expected = T.vmap f x in\n      let vmap_jit = T.jit ~device:dev (T.vmap f) in\n      let _ = vmap_jit x in\n      let _ = vmap_jit x in\n      let result = vmap_jit x in\n      check_rune ~eps \"jit(vmap(sum(x)))\" expected result\n\nlet test_jit_vmap_square () =\n  match get_cpu_device () with\n  | None -> ()\n  | Some dev ->\n      (* f(x) = x * x, vmapped *)\n      let f x = T.mul x x in\n      let x = T.full T.float32 [| 3; 4 |] 3.0 in\n      let expected = T.vmap f x in\n      let vmap_jit = T.jit ~device:dev (T.vmap f) in\n      let _ = vmap_jit x in\n      let _ = vmap_jit x in\n      let result = vmap_jit x in\n      check_rune ~eps \"jit(vmap(x*x))\" expected result\n\n(* ───── Test runner ───── *)\n\nlet () =\n  run \"JIT + vmap\"\n    [\n      group \"jit(vmap(f))\"\n        [\n          test \"x * 2\" test_jit_vmap_mul_scalar;\n          test \"x + x\" test_jit_vmap_self_add;\n          test \"sum(x)\" test_jit_vmap_sum;\n          test \"x * x\" test_jit_vmap_square;\n        ];\n    ]\n"
  },
  {
    "path": "packages/rune/test/test_jvp.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Windtrap\nopen Test_rune_support\n\nmodule T = struct\n  include Nx\n  include Rune\nend\n\nlet eps = 1e-6\n\n(* ───── Binary Operations ───── *)\n\nlet test_jvp_add () =\n  let x = T.scalar T.float32 2.0 in\n  let y = T.scalar T.float32 3.0 in\n  let vx = T.scalar T.float32 1.0 in\n  let vy = T.scalar T.float32 0.5 in\n\n  let f inputs =\n    match inputs with\n    | [ a; b ] -> T.add a b\n    | _ -> failwith \"Expected 2 inputs\"\n  in\n  let primal, tangent = T.jvps f [ x; y ] [ vx; vy ] in\n  check_scalar ~eps \"jvp(add) primal\" 5.0 (scalar_value primal);\n  check_scalar ~eps \"jvp(add) tangent\" 1.5 (scalar_value tangent)\n\nlet test_jvp_mul () =\n  let x = T.scalar T.float32 2.0 in\n  let y = T.scalar T.float32 3.0 in\n  let vx = T.scalar T.float32 1.0 in\n  let vy = T.scalar T.float32 0.5 in\n\n  let f inputs =\n    match inputs with\n    | [ a; b ] -> T.mul a b\n    | _ -> failwith \"Expected 2 inputs\"\n  in\n  let primal, tangent = T.jvps f [ x; y ] [ vx; vy ] in\n  check_scalar ~eps \"jvp(mul) primal\" 6.0 (scalar_value primal);\n  (* d(xy) = y*dx + x*dy = 3*1 + 2*0.5 = 4 *)\n  check_scalar ~eps \"jvp(mul) tangent\" 4.0 (scalar_value tangent)\n\nlet test_jvp_sub () =\n  let x = T.scalar T.float32 5.0 in\n  let y = T.scalar T.float32 3.0 in\n  let vx = T.scalar T.float32 1.0 in\n  let vy = T.scalar T.float32 0.5 in\n\n  let f inputs =\n    match inputs with\n    | [ a; b ] -> T.sub a b\n    | _ -> failwith \"Expected 2 inputs\"\n  in\n  let primal, tangent = T.jvps f [ x; y ] [ vx; vy ] in\n  check_scalar ~eps \"jvp(sub) primal\" 2.0 (scalar_value primal);\n  check_scalar ~eps \"jvp(sub) tangent\" 0.5 (scalar_value tangent)\n\nlet test_jvp_div () =\n  let x = T.scalar T.float32 6.0 
in\n  let y = T.scalar T.float32 2.0 in\n  let vx = T.scalar T.float32 1.0 in\n  let vy = T.scalar T.float32 0.5 in\n\n  let f inputs =\n    match inputs with\n    | [ a; b ] -> T.div a b\n    | _ -> failwith \"Expected 2 inputs\"\n  in\n  let primal, tangent = T.jvps f [ x; y ] [ vx; vy ] in\n  check_scalar ~eps \"jvp(div) primal\" 3.0 (scalar_value primal);\n  (* d(x/y) = dx/y - x*dy/y² = 1/2 - 6*0.5/4 = 0.5 - 0.75 = -0.25 *)\n  check_scalar ~eps \"jvp(div) tangent\" (-0.25) (scalar_value tangent)\n\nlet test_jvp_pow () =\n  let x = T.scalar T.float32 2.0 in\n  let y = T.scalar T.float32 3.0 in\n  let vx = T.scalar T.float32 1.0 in\n  let vy = T.scalar T.float32 0.0 in\n\n  let f inputs =\n    match inputs with\n    | [ a; b ] -> T.pow a b\n    | _ -> failwith \"Expected 2 inputs\"\n  in\n  let primal, tangent = T.jvps f [ x; y ] [ vx; vy ] in\n  check_scalar ~eps \"jvp(pow) primal\" 8.0 (scalar_value primal);\n  (* d(x^y) = y*x^(y-1)*dx + x^y*ln(x)*dy = 3*4*1 + 0 = 12 *)\n  check_scalar ~eps \"jvp(pow) tangent\" 12.0 (scalar_value tangent)\n\nlet test_jvp_max () =\n  let x = T.scalar T.float32 2.0 in\n  let y = T.scalar T.float32 3.0 in\n  let vx = T.scalar T.float32 1.0 in\n  let vy = T.scalar T.float32 0.5 in\n\n  let f inputs =\n    match inputs with\n    | [ a; b ] -> T.maximum a b\n    | _ -> failwith \"Expected 2 inputs\"\n  in\n  let primal, tangent = T.jvps f [ x; y ] [ vx; vy ] in\n  check_scalar ~eps \"jvp(max) primal\" 3.0 (scalar_value primal);\n  (* max(x,y) = y when y > x, so tangent = vy = 0.5 *)\n  check_scalar ~eps \"jvp(max) tangent\" 0.5 (scalar_value tangent)\n\n(* ───── Unary Operations ───── *)\n\nlet test_jvp_exp () =\n  let x = T.scalar T.float32 0.0 in\n  let v = T.scalar T.float32 1.0 in\n  let primal, tangent = T.jvp T.exp x v in\n  check_scalar ~eps \"jvp(exp) primal at x=0\" 1.0 (scalar_value primal);\n  check_scalar ~eps \"jvp(exp) tangent at x=0\" 1.0 (scalar_value tangent)\n\nlet test_jvp_log () =\n  let x = T.scalar T.float32 2.0 
in\n  let v = T.scalar T.float32 1.0 in\n  let primal, tangent = T.jvp T.log x v in\n  check_scalar ~eps \"jvp(log) primal\" (Stdlib.log 2.0) (scalar_value primal);\n  check_scalar ~eps \"jvp(log) tangent\" 0.5 (scalar_value tangent)\n\nlet test_jvp_sin_cos () =\n  let x = T.scalar T.float32 0.0 in\n  let v = T.scalar T.float32 1.0 in\n\n  let primal_sin, tangent_sin = T.jvp T.sin x v in\n  check_scalar ~eps \"jvp(sin) primal at x=0\" 0.0 (scalar_value primal_sin);\n  check_scalar ~eps \"jvp(sin) tangent at x=0\" 1.0 (scalar_value tangent_sin);\n\n  let primal_cos, tangent_cos = T.jvp T.cos x v in\n  check_scalar ~eps \"jvp(cos) primal at x=0\" 1.0 (scalar_value primal_cos);\n  check_scalar ~eps \"jvp(cos) tangent at x=0\" 0.0 (scalar_value tangent_cos)\n\nlet test_jvp_sqrt () =\n  let x = T.scalar T.float32 4.0 in\n  let v = T.scalar T.float32 1.0 in\n  let primal, tangent = T.jvp T.sqrt x v in\n  check_scalar ~eps \"jvp(sqrt) primal at x=4\" 2.0 (scalar_value primal);\n  (* d/dx sqrt(x) = 1/(2*sqrt(x)) = 1/4 = 0.25 *)\n  check_scalar ~eps \"jvp(sqrt) tangent at x=4\" 0.25 (scalar_value tangent)\n\nlet test_jvp_neg () =\n  let x = T.scalar T.float32 1.0 in\n  let v = T.scalar T.float32 1.0 in\n  let primal, tangent = T.jvp T.neg x v in\n  check_scalar ~eps \"jvp(neg) primal\" (-1.0) (scalar_value primal);\n  check_scalar ~eps \"jvp(neg) tangent\" (-1.0) (scalar_value tangent)\n\nlet test_jvp_relu () =\n  let x = T.create T.float32 [| 4 |] [| -2.; -1.; 1.; 2. |] in\n  let v = T.ones_like x in\n  let primal, tangent = T.jvp T.relu x v in\n  let expected_primal = T.create T.float32 [| 4 |] [| 0.; 0.; 1.; 2. |] in\n  let expected_tangent = T.create T.float32 [| 4 |] [| 0.; 0.; 1.; 1. 
|] in\n  check_rune ~eps \"jvp(relu) primal\" expected_primal primal;\n  check_rune ~eps \"jvp(relu) tangent\" expected_tangent tangent\n\nlet test_jvp_tanh () =\n  let x = T.scalar T.float32 0.5 in\n  let v = T.scalar T.float32 1.0 in\n  let primal, tangent = T.jvp T.tanh x v in\n  let tanh_val = scalar_value primal in\n  let expected_tangent = 1.0 -. (tanh_val *. tanh_val) in\n  check_scalar ~eps:1e-4 \"jvp(tanh) tangent\" expected_tangent\n    (scalar_value tangent)\n\nlet test_jvp_abs () =\n  let x = T.create T.float32 [| 4 |] [| -2.; -1.; 1.; 2. |] in\n  let v = T.ones_like x in\n  let primal, tangent = T.jvp T.abs x v in\n  let expected_primal = T.create T.float32 [| 4 |] [| 2.; 1.; 1.; 2. |] in\n  let expected_tangent = T.create T.float32 [| 4 |] [| -1.; -1.; 1.; 1. |] in\n  check_rune ~eps \"jvp(abs) primal\" expected_primal primal;\n  check_rune ~eps \"jvp(abs) tangent\" expected_tangent tangent\n\nlet test_jvp_cumsum () =\n  let x = T.create T.float32 [| 3 |] [| 1.; 2.; 3. |] in\n  let v = T.create T.float32 [| 3 |] [| 0.1; 0.2; 0.3 |] in\n  let primal, tangent = T.jvp (fun x -> T.cumsum ~axis:0 x) x v in\n  let expected_primal = T.create T.float32 [| 3 |] [| 1.; 3.; 6. |] in\n  let expected_tangent = T.create T.float32 [| 3 |] [| 0.1; 0.3; 0.6 |] in\n  check_rune ~eps \"jvp(cumsum) primal\" expected_primal primal;\n  check_rune ~eps \"jvp(cumsum) tangent\" expected_tangent tangent\n\nlet test_jvp_cumprod () =\n  let x = T.create T.float32 [| 3 |] [| 1.; 2.; 3. |] in\n  let v = T.create T.float32 [| 3 |] [| 0.1; 0.2; 0.3 |] in\n  let primal, tangent = T.jvp (fun x -> T.cumprod ~axis:0 x) x v in\n  let expected_primal = T.create T.float32 [| 3 |] [| 1.; 2.; 6. 
|] in\n  let expected_tangent = T.create T.float32 [| 3 |] [| 0.1; 0.4; 1.8 |] in\n  check_rune ~eps \"jvp(cumprod) primal\" expected_primal primal;\n  check_rune ~eps \"jvp(cumprod) tangent\" expected_tangent tangent\n\nlet test_jvp_sigmoid () =\n  let x = T.scalar T.float32 0.0 in\n  let v = T.scalar T.float32 1.0 in\n  let primal, tangent = T.jvp T.sigmoid x v in\n  check_scalar ~eps \"jvp(sigmoid) primal at x=0\" 0.5 (scalar_value primal);\n  (* sigmoid'(x) = sigmoid(x) * (1 - sigmoid(x)) = 0.5 * 0.5 = 0.25 *)\n  check_scalar ~eps \"jvp(sigmoid) tangent at x=0\" 0.25 (scalar_value tangent)\n\nlet test_jvp_square () =\n  let x = T.scalar T.float32 3.0 in\n  let v = T.scalar T.float32 1.0 in\n  let primal, tangent = T.jvp T.square x v in\n  check_scalar ~eps \"jvp(square) primal\" 9.0 (scalar_value primal);\n  check_scalar ~eps \"jvp(square) tangent\" 6.0 (scalar_value tangent)\n\nlet test_jvp_recip () =\n  let x = T.scalar T.float32 2.0 in\n  let v = T.scalar T.float32 1.0 in\n  let primal, tangent = T.jvp T.recip x v in\n  check_scalar ~eps \"jvp(recip) primal\" 0.5 (scalar_value primal);\n  (* d/dx (1/x) = -1/x² = -0.25 *)\n  check_scalar ~eps \"jvp(recip) tangent\" (-0.25) (scalar_value tangent)\n\nlet test_jvp_rsqrt () =\n  let x = T.scalar T.float32 4.0 in\n  let v = T.scalar T.float32 1.0 in\n  let primal, tangent = T.jvp T.rsqrt x v in\n  check_scalar ~eps \"jvp(rsqrt) primal\" 0.5 (scalar_value primal);\n  (* d/dx (1/sqrt(x)) = -1/(2*x^(3/2)) = -1/16 = -0.0625 *)\n  check_scalar ~eps \"jvp(rsqrt) tangent\" (-0.0625) (scalar_value tangent)\n\nlet test_jvp_tan () =\n  let x = T.scalar T.float32 0.0 in\n  let v = T.scalar T.float32 1.0 in\n  let primal, tangent = T.jvp T.tan x v in\n  check_scalar ~eps \"jvp(tan) primal at x=0\" 0.0 (scalar_value primal);\n  (* d/dx tan(x) = sec²(x) = 1/cos²(x) = 1 at x=0 *)\n  check_scalar ~eps \"jvp(tan) tangent at x=0\" 1.0 (scalar_value tangent)\n\nlet test_jvp_sinh_cosh () =\n  let x = T.scalar T.float32 0.0 in\n  let 
v = T.scalar T.float32 1.0 in\n\n  let primal_sinh, tangent_sinh = T.jvp T.sinh x v in\n  check_scalar ~eps \"jvp(sinh) primal at x=0\" 0.0 (scalar_value primal_sinh);\n  check_scalar ~eps \"jvp(sinh) tangent at x=0\" 1.0 (scalar_value tangent_sinh);\n\n  let primal_cosh, tangent_cosh = T.jvp T.cosh x v in\n  check_scalar ~eps \"jvp(cosh) primal at x=0\" 1.0 (scalar_value primal_cosh);\n  check_scalar ~eps \"jvp(cosh) tangent at x=0\" 0.0 (scalar_value tangent_cosh)\n\n(* ───── Reduction Operations ───── *)\n\nlet test_jvp_sum () =\n  let x = T.create T.float32 [| 2; 2 |] [| 1.; 2.; 3.; 4. |] in\n  let v = T.ones_like x in\n  let primal, tangent = T.jvp T.sum x v in\n  check_scalar ~eps \"jvp(sum) primal\" 10.0 (scalar_value primal);\n  check_scalar ~eps \"jvp(sum) tangent\" 4.0 (scalar_value tangent)\n\nlet test_jvp_mean () =\n  let x = T.create T.float32 [| 2; 2 |] [| 1.; 2.; 3.; 4. |] in\n  let v = T.ones_like x in\n  let primal, tangent = T.jvp T.mean x v in\n  check_scalar ~eps \"jvp(mean) primal\" 2.5 (scalar_value primal);\n  check_scalar ~eps \"jvp(mean) tangent\" 1.0 (scalar_value tangent)\n\nlet test_jvp_max_reduction () =\n  let x = T.create T.float32 [| 2; 2 |] [| 1.; 3.; 2.; 4. |] in\n  let v = T.create T.float32 [| 2; 2 |] [| 0.1; 0.2; 0.3; 0.4 |] in\n  let primal, tangent = T.jvp T.max x v in\n  check_scalar ~eps \"jvp(max) primal\" 4.0 (scalar_value primal);\n  (* Only the max element (4.) contributes, with tangent 0.4 *)\n  check_scalar ~eps \"jvp(max) tangent\" 0.4 (scalar_value tangent)\n\nlet test_jvp_sum_with_axis () =\n  let x = T.create T.float32 [| 2; 3 |] [| 0.; 1.; 2.; 3.; 4.; 5. |] in\n  let v = T.ones_like x in\n  let f x = T.sum x ~axes:[ 1 ] in\n  let primal, tangent = T.jvp f x v in\n  let expected_primal = T.create T.float32 [| 2 |] [| 3.; 12. |] in\n  let expected_tangent = T.create T.float32 [| 2 |] [| 3.; 3. 
|] in\n  check_rune ~eps \"jvp(sum axis=1) primal\" expected_primal primal;\n  check_rune ~eps \"jvp(sum axis=1) tangent\" expected_tangent tangent\n\nlet test_jvp_prod () =\n  let x = T.create T.float32 [| 3 |] [| 2.; 3.; 4. |] in\n  let v = T.ones_like x in\n  let primal, tangent = T.jvp T.prod x v in\n  check_scalar ~eps \"jvp(prod) primal\" 24.0 (scalar_value primal);\n  (* d(xyz) = yz*dx + xz*dy + xy*dz = 12 + 8 + 6 = 26 *)\n  check_scalar ~eps \"jvp(prod) tangent\" 26.0 (scalar_value tangent)\n\n(* ───── Broadcasting ───── *)\n\nlet test_jvp_broadcast_add () =\n  let x = T.create T.float32 [| 2; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6. |] in\n  let bias = T.create T.float32 [| 3 |] [| 0.1; 0.2; 0.3 |] in\n  let vx = T.ones_like x in\n  let vb = T.ones_like bias in\n\n  let f inputs =\n    match inputs with\n    | [ a; b ] -> T.add a b\n    | _ -> failwith \"Expected 2 inputs\"\n  in\n  let _primal, tangent = T.jvps f [ x; bias ] [ vx; vb ] in\n  (* Each position gets vx[i,j] + vb[j] = 1 + 1 = 2 *)\n  check_rune ~eps \"jvp(broadcast add) tangent\"\n    (T.full T.float32 [| 2; 3 |] 2.0)\n    tangent\n\nlet test_jvp_scalar_broadcast () =\n  let x = T.create T.float32 [| 2; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6. |] in\n  let scalar = T.scalar T.float32 2.0 in\n  let vx = T.ones_like x in\n  let vs = T.scalar T.float32 1.0 in\n\n  let f inputs =\n    match inputs with\n    | [ a; s ] -> T.mul a s\n    | _ -> failwith \"Expected 2 inputs\"\n  in\n  let _primal, tangent = T.jvps f [ x; scalar ] [ vx; vs ] in\n  (* d(x*s) = s*dx + x*ds = 2*1 + x*1 = 2 + x *)\n  let expected_tangent = T.add (T.full T.float32 [| 2; 3 |] 2.0) x in\n  check_rune ~eps \"jvp(scalar mul broadcast) tangent\" expected_tangent tangent\n\n(* ───── Shape Operations ───── *)\n\nlet test_jvp_reshape () =\n  let x = T.create T.float32 [| 2; 3 |] [| 0.; 1.; 2.; 3.; 4.; 5. 
|] in\n  let v = T.ones_like x in\n  let f x = T.reshape [| 3; 2 |] x in\n  let primal, tangent = T.jvp f x v in\n  check_rune ~eps \"jvp(reshape) primal\" (T.reshape [| 3; 2 |] x) primal;\n  check_rune ~eps \"jvp(reshape) tangent\" (T.reshape [| 3; 2 |] v) tangent\n\nlet test_jvp_transpose () =\n  let x = T.create T.float32 [| 2; 3 |] [| 0.; 1.; 2.; 3.; 4.; 5. |] in\n  let v = T.ones_like x in\n  let primal, tangent = T.jvp T.transpose x v in\n  check_rune ~eps \"jvp(transpose) primal\" (T.transpose x) primal;\n  check_rune ~eps \"jvp(transpose) tangent\" (T.transpose v) tangent\n\nlet test_jvp_squeeze () =\n  let x = T.create T.float32 [| 1; 3; 1 |] [| 1.; 2.; 3. |] in\n  let v = T.ones_like x in\n  let f x = T.squeeze ~axes:[ 0; 2 ] x in\n  let primal, tangent = T.jvp f x v in\n  let expected_primal = T.create T.float32 [| 3 |] [| 1.; 2.; 3. |] in\n  let expected_tangent = T.ones T.float32 [| 3 |] in\n  check_rune ~eps \"jvp(squeeze) primal\" expected_primal primal;\n  check_rune ~eps \"jvp(squeeze) tangent\" expected_tangent tangent\n\nlet test_jvp_expand_dims () =\n  let x = T.create T.float32 [| 3 |] [| 1.; 2.; 3. |] in\n  let v = T.ones_like x in\n  let f x = T.expand_dims [ 0 ] x in\n  let primal, tangent = T.jvp f x v in\n  let expected_primal = T.create T.float32 [| 1; 3 |] [| 1.; 2.; 3. |] in\n  let expected_tangent = T.ones T.float32 [| 1; 3 |] in\n  check_rune ~eps \"jvp(expand_dims) primal\" expected_primal primal;\n  check_rune ~eps \"jvp(expand_dims) tangent\" expected_tangent tangent\n\nlet test_jvp_concatenate () =\n  let x1 = T.create T.float32 [| 2; 2 |] [| 1.; 2.; 3.; 4. |] in\n  let x2 = T.create T.float32 [| 2; 2 |] [| 5.; 6.; 7.; 8. 
|] in\n  let v1 = T.ones_like x1 in\n  let v2 = T.full T.float32 [| 2; 2 |] 0.5 in\n\n  let f inputs =\n    match inputs with\n    | [ a; b ] -> T.concatenate [ a; b ] ~axis:0\n    | _ -> failwith \"Expected 2 inputs\"\n  in\n  let primal, tangent = T.jvps f [ x1; x2 ] [ v1; v2 ] in\n  let expected_primal =\n    T.create T.float32 [| 4; 2 |] [| 1.; 2.; 3.; 4.; 5.; 6.; 7.; 8. |]\n  in\n  let expected_tangent =\n    T.create T.float32 [| 4; 2 |] [| 1.; 1.; 1.; 1.; 0.5; 0.5; 0.5; 0.5 |]\n  in\n  check_rune ~eps \"jvp(concatenate) primal\" expected_primal primal;\n  check_rune ~eps \"jvp(concatenate) tangent\" expected_tangent tangent\n\n(* ───── Complex Compositions ───── *)\n\nlet test_jvp_softmax () =\n  let x = T.create T.float32 [| 3 |] [| 1.; 2.; 3. |] in\n  let v = T.ones_like x in\n  let f x = T.softmax x ~axes:[ 0 ] in\n  let _primal, tangent = T.jvp f x v in\n  (* Softmax Jacobian is diag(s) - s*s^T, where s = softmax(x) *)\n  (* For uniform tangent v=[1,1,1], result is 0 (sum preserved) *)\n  let tangent_sum = T.sum tangent |> scalar_value in\n  check_scalar ~eps:1e-5 \"jvp(softmax) tangent sum\" 0.0 tangent_sum\n\nlet test_jvp_layer_norm () =\n  let x = T.create T.float32 [| 2; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6. 
|] in\n  let v = T.ones_like x in\n  let f x =\n    let mean = T.mean x ~axes:[ 1 ] ~keepdims:true in\n    let centered = T.sub x mean in\n    let var = T.mean (T.square centered) ~axes:[ 1 ] ~keepdims:true in\n    let std = T.sqrt (T.add var (T.scalar T.float32 1e-5)) in\n    T.div centered std\n  in\n  let _primal, tangent = T.jvp f x v in\n  (* Layer norm preserves zero mean in tangent space - check total sum is\n     small *)\n  let total_row_sum = T.sum (T.sum tangent ~axes:[ 1 ]) |> scalar_value in\n  check_scalar ~eps:1e-3 \"jvp(layer_norm) total row sum\" 0.0\n    (Float.abs total_row_sum)\n\nlet test_jvp_nested () =\n  (* Test nested JVP calls *)\n  let x = T.scalar T.float32 2.0 in\n  let v = T.scalar T.float32 1.0 in\n\n  (* f(x) = exp(sin(x²)) *)\n  let f x = T.exp (T.sin (T.square x)) in\n  let _primal, tangent = T.jvp f x v in\n\n  (* Manual computation: f'(x) = exp(sin(x²)) * cos(x²) * 2x At x=2: sin(4) ≈\n     -0.757, cos(4) ≈ -0.654, exp(-0.757) ≈ 0.469 f'(2) ≈ 0.469 * (-0.654) * 4 ≈\n     -1.227 *)\n  check_scalar ~eps:1e-3 \"jvp(nested) tangent\" (-1.227) (scalar_value tangent)\n\nlet test_jvp_higher_order () =\n  (* Second derivative via nested JVP *)\n  let x = T.scalar T.float32 1.0 in\n\n  (* f(x) = x³ *)\n  let f x = T.mul (T.square x) x in\n\n  (* First derivative: 3x² *)\n  let _, first_deriv = T.jvp f x (T.scalar T.float32 1.0) in\n  check_scalar ~eps \"first derivative of x³ at x=1\" 3.0\n    (scalar_value first_deriv);\n\n  (* Second derivative via JVP of JVP: 6x *)\n  let f_jvp x =\n    let _, tangent = T.jvp f x (T.scalar T.float32 1.0) in\n    tangent\n  in\n  let _, second_deriv = T.jvp f_jvp x (T.scalar T.float32 1.0) in\n  check_scalar ~eps \"second derivative of x³ at x=1\" 6.0\n    (scalar_value second_deriv)\n\n(* ───── Edge Cases ───── *)\n\nlet test_jvp_zero_tangent () =\n  (* Zero tangent should give zero output tangent *)\n  let x = T.scalar T.float32 2.0 in\n  let v = T.scalar T.float32 0.0 in\n  let f x = T.mul (T.exp x) 
(T.sin x) in\n  let _primal, tangent = T.jvp f x v in\n  check_scalar ~eps \"jvp with zero tangent\" 0.0 (scalar_value tangent)\n\nlet test_jvp_constant_function () =\n  (* Constant function should have zero tangent *)\n  let x = T.scalar T.float32 2.0 in\n  let v = T.scalar T.float32 1.0 in\n  let f _ = T.scalar T.float32 42.0 in\n  let primal, tangent = T.jvp f x v in\n  check_scalar ~eps \"jvp(constant) primal\" 42.0 (scalar_value primal);\n  check_scalar ~eps \"jvp(constant) tangent\" 0.0 (scalar_value tangent)\n\nlet test_jvp_identity () =\n  (* Identity function should pass through tangent *)\n  let x = T.create T.float32 [| 2; 2 |] [| 1.; 2.; 3.; 4. |] in\n  let v = T.create T.float32 [| 2; 2 |] [| 0.1; 0.2; 0.3; 0.4 |] in\n  let f x = x in\n  let primal, tangent = T.jvp f x v in\n  check_rune ~eps \"jvp(identity) primal\" x primal;\n  check_rune ~eps \"jvp(identity) tangent\" v tangent\n\n(* ───── Indexing Operations ───── *)\n\nlet test_jvp_slice () =\n  let x = T.create T.float32 [| 4 |] [| 1.; 2.; 3.; 4. |] in\n  let v = T.create T.float32 [| 4 |] [| 0.1; 0.2; 0.3; 0.4 |] in\n  let f x = T.slice [ T.R (1, 3) ] x in\n  let primal, tangent = T.jvp f x v in\n  let expected_primal = T.create T.float32 [| 2 |] [| 2.; 3. |] in\n  let expected_tangent = T.create T.float32 [| 2 |] [| 0.2; 0.3 |] in\n  check_rune ~eps \"jvp(slice) primal\" expected_primal primal;\n  check_rune ~eps \"jvp(slice) tangent\" expected_tangent tangent\n\nlet test_jvp_gather () =\n  let x = T.create T.float32 [| 4 |] [| 10.; 20.; 30.; 40. |] in\n  let v = T.create T.float32 [| 4 |] [| 0.1; 0.2; 0.3; 0.4 |] in\n  let indices = T.create T.int32 [| 3 |] [| 2l; 0l; 3l |] in\n  let f x = T.take ~axis:0 indices x in\n  let primal, tangent = T.jvp f x v in\n  let expected_primal = T.create T.float32 [| 3 |] [| 30.; 10.; 40. 
|] in\n  let expected_tangent = T.create T.float32 [| 3 |] [| 0.3; 0.1; 0.4 |] in\n  check_rune ~eps \"jvp(gather) primal\" expected_primal primal;\n  check_rune ~eps \"jvp(gather) tangent\" expected_tangent tangent\n\nlet test_jvp_get () =\n  let x = T.create T.float32 [| 2; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6. |] in\n  let v = T.create T.float32 [| 2; 3 |] [| 0.1; 0.2; 0.3; 0.4; 0.5; 0.6 |] in\n  let f x = T.get [ 1; 2 ] x in\n  let primal, tangent = T.jvp f x v in\n  check_scalar ~eps \"jvp(get) primal\" 6.0 (scalar_value primal);\n  check_scalar ~eps \"jvp(get) tangent\" 0.6 (scalar_value tangent)\n\nlet test_jvp_take_along_axis () =\n  let x =\n    T.create T.float32 [| 3; 4 |]\n      [| 10.; 20.; 30.; 40.; 50.; 60.; 70.; 80.; 90.; 100.; 110.; 120. |]\n  in\n  let v =\n    T.create T.float32 [| 3; 4 |]\n      [| 0.1; 0.2; 0.3; 0.4; 0.5; 0.6; 0.7; 0.8; 0.9; 1.0; 1.1; 1.2 |]\n  in\n  let indices = T.create T.int32 [| 3; 2 |] [| 1l; 3l; 0l; 2l; 2l; 1l |] in\n  let f x = T.take_along_axis ~axis:1 indices x in\n  let primal, tangent = T.jvp f x v in\n  let expected_primal =\n    T.create T.float32 [| 3; 2 |] [| 20.; 40.; 50.; 70.; 110.; 100. |]\n  in\n  let expected_tangent =\n    T.create T.float32 [| 3; 2 |] [| 0.2; 0.4; 0.5; 0.7; 1.1; 1.0 |]\n  in\n  check_rune ~eps \"jvp(take_along_axis) primal\" expected_primal primal;\n  check_rune ~eps \"jvp(take_along_axis) tangent\" expected_tangent tangent\n\n(* ───── FFT Operations ───── *)\n\n(* Check complex tensors for approximate equality using magnitude of\n   difference *)\nlet check_complex_close ~eps msg expected actual =\n  let diff = T.sub expected actual in\n  (* |z|^2 = z * conj(z) for complex numbers *)\n  let mag_sq = T.mul diff (T.conjugate diff) in\n  (* Sum of squared magnitudes *)\n  let total_err = T.sum mag_sq in\n  let err_val = (T.item [] total_err : Complex.t).re in\n  if\n    err_val\n    > eps *. eps *. 
Float.of_int (Array.fold_left ( * ) 1 (T.shape expected))\n  then\n    failf \"%s: complex tensors differ, total squared error = %.6e\" msg err_val\n\nlet test_jvp_fft () =\n  (* FFT is linear, so JVP should be FFT of tangent *)\n  let x =\n    T.create T.complex64 [| 4 |]\n      [|\n        Complex.{ re = 1.0; im = 0.0 };\n        Complex.{ re = 2.0; im = 0.0 };\n        Complex.{ re = 3.0; im = 0.0 };\n        Complex.{ re = 4.0; im = 0.0 };\n      |]\n  in\n  let v =\n    T.create T.complex64 [| 4 |]\n      [|\n        Complex.{ re = 0.1; im = 0.0 };\n        Complex.{ re = 0.2; im = 0.0 };\n        Complex.{ re = 0.3; im = 0.0 };\n        Complex.{ re = 0.4; im = 0.0 };\n      |]\n  in\n  let f x = T.fft ~axis:0 x in\n  let primal, tangent = T.jvp f x v in\n  let expected_tangent = T.fft ~axis:0 v in\n  check_complex_close ~eps:1e-5 \"jvp(fft) primal\" (f x) primal;\n  check_complex_close ~eps:1e-5 \"jvp(fft) tangent\" expected_tangent tangent\n\nlet test_jvp_ifft () =\n  (* IFFT is linear, so JVP should be IFFT of tangent *)\n  let x =\n    T.create T.complex64 [| 4 |]\n      [|\n        Complex.{ re = 10.0; im = 0.0 };\n        Complex.{ re = -2.0; im = 2.0 };\n        Complex.{ re = -2.0; im = 0.0 };\n        Complex.{ re = -2.0; im = -2.0 };\n      |]\n  in\n  let v =\n    T.create T.complex64 [| 4 |]\n      [|\n        Complex.{ re = 1.0; im = 0.0 };\n        Complex.{ re = 0.0; im = 1.0 };\n        Complex.{ re = -1.0; im = 0.0 };\n        Complex.{ re = 0.0; im = -1.0 };\n      |]\n  in\n  let f x = T.ifft ~axis:0 x in\n  let primal, tangent = T.jvp f x v in\n  let expected_tangent = T.ifft ~axis:0 v in\n  check_complex_close ~eps:1e-5 \"jvp(ifft) primal\" (f x) primal;\n  check_complex_close ~eps:1e-5 \"jvp(ifft) tangent\" expected_tangent tangent\n\nlet test_jvp_fft_roundtrip () =\n  (* FFT followed by IFFT should be identity, tangent should pass through *)\n  let x =\n    T.create T.complex64 [| 4 |]\n      [|\n        Complex.{ re = 1.0; im = 0.5 
};\n        Complex.{ re = 2.0; im = -0.5 };\n        Complex.{ re = 3.0; im = 0.2 };\n        Complex.{ re = 4.0; im = -0.2 };\n      |]\n  in\n  let v =\n    T.create T.complex64 [| 4 |]\n      [|\n        Complex.{ re = 0.1; im = 0.05 };\n        Complex.{ re = 0.2; im = -0.05 };\n        Complex.{ re = 0.3; im = 0.02 };\n        Complex.{ re = 0.4; im = -0.02 };\n      |]\n  in\n  let f x = T.ifft ~axis:0 (T.fft ~axis:0 x) in\n  let primal, tangent = T.jvp f x v in\n  (* Roundtrip should give back original *)\n  check_complex_close ~eps:1e-5 \"jvp(fft roundtrip) primal\" x primal;\n  check_complex_close ~eps:1e-5 \"jvp(fft roundtrip) tangent\" v tangent\n\n(* Test suite *)\nlet () =\n  run \"Rune JVP Comprehensive Tests\"\n    [\n      group \"binary operations\"\n        [\n          test \"add\" test_jvp_add;\n          test \"mul\" test_jvp_mul;\n          test \"sub\" test_jvp_sub;\n          test \"div\" test_jvp_div;\n          test \"pow\" test_jvp_pow;\n          test \"max\" test_jvp_max;\n        ];\n      group \"unary operations\"\n        [\n          test \"exp\" test_jvp_exp;\n          test \"log\" test_jvp_log;\n          test \"sin/cos\" test_jvp_sin_cos;\n          test \"sqrt\" test_jvp_sqrt;\n          test \"neg\" test_jvp_neg;\n          test \"relu\" test_jvp_relu;\n          test \"tanh\" test_jvp_tanh;\n          test \"abs\" test_jvp_abs;\n          test \"cumsum\" test_jvp_cumsum;\n          test \"cumprod\" test_jvp_cumprod;\n          test \"sigmoid\" test_jvp_sigmoid;\n          test \"square\" test_jvp_square;\n          test \"recip\" test_jvp_recip;\n          test \"rsqrt\" test_jvp_rsqrt;\n          test \"tan\" test_jvp_tan;\n          test \"sinh/cosh\" test_jvp_sinh_cosh;\n        ];\n      group \"reduction operations\"\n        [\n          test \"sum\" test_jvp_sum;\n          test \"mean\" test_jvp_mean;\n          test \"max\" test_jvp_max_reduction;\n          test \"sum with axis\" test_jvp_sum_with_axis;\n          
test \"prod\" test_jvp_prod;\n        ];\n      group \"broadcasting\"\n        [\n          test \"broadcast add\" test_jvp_broadcast_add;\n          test \"scalar broadcast\" test_jvp_scalar_broadcast;\n        ];\n      group \"shape operations\"\n        [\n          test \"reshape\" test_jvp_reshape;\n          test \"transpose\" test_jvp_transpose;\n          test \"squeeze\" test_jvp_squeeze;\n          test \"expand_dims\" test_jvp_expand_dims;\n          test \"concatenate\" test_jvp_concatenate;\n        ];\n      group \"complex compositions\"\n        [\n          test \"softmax\" test_jvp_softmax;\n          test \"layer norm\" test_jvp_layer_norm;\n          test \"nested\" test_jvp_nested;\n          test \"higher order\" test_jvp_higher_order;\n        ];\n      group \"edge cases\"\n        [\n          test \"zero tangent\" test_jvp_zero_tangent;\n          test \"constant function\" test_jvp_constant_function;\n          test \"identity\" test_jvp_identity;\n        ];\n      group \"fft operations\"\n        [\n          test \"fft\" test_jvp_fft;\n          test \"ifft\" test_jvp_ifft;\n          test \"fft roundtrip\" test_jvp_fft_roundtrip;\n        ];\n      group \"indexing operations\"\n        [\n          test \"slice\" test_jvp_slice;\n          test \"gather\" test_jvp_gather;\n          test \"get\" test_jvp_get;\n          test \"take_along_axis\" test_jvp_take_along_axis;\n        ];\n    ]\n"
  },
  {
    "path": "packages/rune/test/test_vjp.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Windtrap\nopen Test_rune_support\n\nmodule T = struct\n  include Nx\n  include Rune\nend\n\nlet eps = 1e-6\n\n(* ───── Binary Operations ───── *)\n\nlet test_grad_add () =\n  let x = T.scalar T.float32 2.0 in\n  let y = T.scalar T.float32 3.0 in\n  let grad_x = T.grad (fun x -> T.add x y) x in\n  let grad_y = T.grad (fun y -> T.add x y) y in\n  check_scalar ~eps \"add grad wrt x\" 1.0 (scalar_value grad_x);\n  check_scalar ~eps \"add grad wrt y\" 1.0 (scalar_value grad_y)\n\nlet test_grad_mul () =\n  let x = T.scalar T.float32 2.0 in\n  let y = T.scalar T.float32 3.0 in\n  let grad_x = T.grad (fun x -> T.mul x y) x in\n  let grad_y = T.grad (fun y -> T.mul x y) y in\n  check_scalar ~eps \"mul grad wrt x\" 3.0 (scalar_value grad_x);\n  check_scalar ~eps \"mul grad wrt y\" 2.0 (scalar_value grad_y)\n\nlet test_grad_sub () =\n  let x = T.scalar T.float32 5.0 in\n  let y = T.scalar T.float32 3.0 in\n  let grad_x = T.grad (fun x -> T.sub x y) x in\n  let grad_y = T.grad (fun y -> T.sub x y) y in\n  check_scalar ~eps \"sub grad wrt x\" 1.0 (scalar_value grad_x);\n  check_scalar ~eps \"sub grad wrt y\" (-1.0) (scalar_value grad_y)\n\nlet test_grad_div () =\n  let x = T.scalar T.float32 6.0 in\n  let y = T.scalar T.float32 2.0 in\n  let grad_x = T.grad (fun x -> T.div x y) x in\n  let grad_y = T.grad (fun y -> T.div x y) y in\n  check_scalar ~eps \"div grad wrt x\" 0.5 (scalar_value grad_x);\n  check_scalar ~eps \"div grad wrt y\" (-1.5) (scalar_value grad_y)\n\n(* ───── Unary Operations ───── *)\n\nlet test_grad_exp () =\n  (* exp: d/dx e^x = e^x *)\n  let grad_exp = T.grad T.exp (T.scalar T.float32 0.0) in\n  check_scalar ~eps \"grad(exp(x)) at x=0\" 1.0 (scalar_value grad_exp)\n\nlet test_grad_log 
() =\n  (* log: d/dx ln(x) = 1/x *)\n  let grad_log = T.grad T.log (T.scalar T.float32 2.0) in\n  check_scalar ~eps \"grad(log(x)) at x=2\" 0.5 (scalar_value grad_log)\n\nlet test_grad_sin () =\n  (* sin: d/dx sin(x) = cos(x) *)\n  let grad_sin = T.grad T.sin (T.scalar T.float32 0.0) in\n  check_scalar ~eps \"grad(sin(x)) at x=0\" 1.0 (scalar_value grad_sin)\n\nlet test_grad_cos () =\n  (* cos: d/dx cos(x) = -sin(x) *)\n  let grad_cos = T.grad T.cos (T.scalar T.float32 0.0) in\n  check_scalar ~eps \"grad(cos(x)) at x=0\" 0.0 (scalar_value grad_cos)\n\nlet test_grad_sqrt () =\n  (* sqrt: d/dx √x = 1/(2√x) *)\n  let grad_sqrt = T.grad T.sqrt (T.scalar T.float32 4.0) in\n  check_scalar ~eps \"grad(sqrt(x)) at x=4\" 0.25 (scalar_value grad_sqrt)\n\nlet test_grad_neg () =\n  (* neg: d/dx (-x) = -1 *)\n  let grad_neg = T.grad T.neg (T.scalar T.float32 1.0) in\n  check_scalar ~eps \"grad(-x)\" (-1.0) (scalar_value grad_neg)\n\nlet test_grad_relu () =\n  (* ReLU gradient: 0 for x<=0, 1 for x>0 *)\n  let x = T.create T.float32 [| 5 |] [| -2.; -1.; 0.; 1.; 2. |] in\n  let grad = T.grad (fun x -> T.sum (T.relu x)) x in\n  (* Critical: gradient at x=0 should be 0, not 1 *)\n  let expected = T.create T.float32 [| 5 |] [| 0.; 0.; 0.; 1.; 1. |] in\n  check_rune ~eps \"relu gradient\" expected grad;\n\n  (* Additional test: relu gradient through mean (the kaun test case) *)\n  let x2 = T.create T.float32 [| 2; 3 |] [| -1.0; 0.0; 1.0; -2.0; 2.0; 3.0 |] in\n  let grad2 = T.grad (fun x -> T.mean (T.relu x)) x2 in\n  (* 1/6 for positive values, 0 for non-positive *)\n  let expected2 =\n    T.create T.float32 [| 2; 3 |] [| 0.; 0.; 1. /. 6.; 0.; 1. /. 6.; 1. /. 6. |]\n  in\n  check_rune ~eps:1e-5 \"relu gradient at x=0 (mean)\" expected2 grad2\n\nlet test_grad_tanh () =\n  (* Tanh gradient: d/dx tanh(x) = 1 - tanh²(x) *)\n  let x = T.scalar T.float32 0.5 in\n  let grad_tanh = T.grad T.tanh x in\n  let tanh_val = T.tanh x |> scalar_value in\n  let expected = 1.0 -. (tanh_val *. 
tanh_val) in\n  check_scalar ~eps:1e-4 \"tanh gradient\" expected (scalar_value grad_tanh)\n\nlet test_grad_abs () =\n  (* Abs gradient: sign(x) *)\n  let x = T.create T.float32 [| 4 |] [| -2.; -1.; 1.; 2. |] in\n  let grad_abs = T.grad (fun x -> T.sum (T.abs x)) x in\n  let expected = T.create T.float32 [| 4 |] [| -1.; -1.; 1.; 1. |] in\n  check_rune ~eps \"abs gradient\" expected grad_abs\n\nlet test_grad_sigmoid () =\n  (* Sigmoid gradient: sigmoid(x) * (1 - sigmoid(x)) *)\n  let x = T.scalar T.float32 0.0 in\n  let grad = T.grad T.sigmoid x in\n  (* At x=0, sigmoid(0) = 0.5, so gradient = 0.5 * 0.5 = 0.25 *)\n  check_scalar ~eps \"sigmoid gradient at x=0\" 0.25 (scalar_value grad)\n\nlet test_grad_softmax () =\n  (* Softmax gradient *)\n  let x = T.create T.float32 [| 3 |] [| 1.; 2.; 3. |] in\n  let grad = T.grad (fun x -> T.sum (T.softmax x ~axes:[ 0 ])) x in\n  (* Sum of softmax is 1, so gradient should sum to 0 *)\n  let grad_sum = T.sum grad |> scalar_value in\n  check_scalar ~eps:1e-5 \"softmax gradient sum\" 0.0 grad_sum\n\nlet test_grad_square () =\n  (* Square gradient: d/dx x^2 = 2x *)\n  let x = T.scalar T.float32 3.0 in\n  let grad = T.grad T.square x in\n  check_scalar ~eps \"square gradient at x=3\" 6.0 (scalar_value grad)\n\nlet test_grad_recip () =\n  (* Reciprocal gradient: d/dx (1/x) = -1/x^2 *)\n  let x = T.scalar T.float32 2.0 in\n  let grad = T.grad T.recip x in\n  check_scalar ~eps \"recip gradient at x=2\" (-0.25) (scalar_value grad)\n\nlet test_grad_rsqrt () =\n  (* Reciprocal square root gradient: d/dx (1/sqrt(x)) = -1/(2*x^(3/2)) *)\n  let x = T.scalar T.float32 4.0 in\n  let grad = T.grad T.rsqrt x in\n  check_scalar ~eps \"rsqrt gradient at x=4\" (-0.0625) (scalar_value grad)\n\nlet test_grad_sign () =\n  (* Sign gradient: 0 everywhere (except at 0 where undefined) *)\n  let x = T.create T.float32 [| 4 |] [| -2.; -1.; 1.; 2. 
|] in\n  let grad = T.grad (fun x -> T.sum (T.sign x)) x in\n  let expected = T.zeros_like x in\n  check_rune ~eps \"sign gradient\" expected grad\n\nlet test_grad_tan () =\n  (* Tan gradient: d/dx tan(x) = sec^2(x) = 1/cos^2(x) *)\n  let x = T.scalar T.float32 0.0 in\n  let grad = T.grad T.tan x in\n  check_scalar ~eps \"tan gradient at x=0\" 1.0 (scalar_value grad)\n\nlet test_grad_sinh () =\n  (* Sinh gradient: d/dx sinh(x) = cosh(x) *)\n  let x = T.scalar T.float32 0.0 in\n  let grad = T.grad T.sinh x in\n  check_scalar ~eps \"sinh gradient at x=0\" 1.0 (scalar_value grad)\n\nlet test_grad_cosh () =\n  (* Cosh gradient: d/dx cosh(x) = sinh(x) *)\n  let x = T.scalar T.float32 0.0 in\n  let grad = T.grad T.cosh x in\n  check_scalar ~eps \"cosh gradient at x=0\" 0.0 (scalar_value grad)\n\n(* ───── Reduction Operations ───── *)\n\nlet test_grad_sum () =\n  (* Sum gradient: all ones *)\n  let x = T.create T.float32 [| 2; 2 |] [| 1.; 2.; 3.; 4. |] in\n  let grad = T.grad T.sum x in\n  check_rune ~eps \"sum gradient\" (T.ones_like x) grad\n\nlet test_grad_mean () =\n  (* Mean gradient: 1/n for each element *)\n  let x = T.create T.float32 [| 2; 2 |] [| 1.; 2.; 3.; 4. |] in\n  let grad = T.grad T.mean x in\n  check_rune ~eps \"mean gradient\" (T.full T.float32 [| 2; 2 |] 0.25) grad\n\nlet test_grad_max () =\n  (* Max gradient: 1 at max element, 0 elsewhere *)\n  let x = T.create T.float32 [| 2; 2 |] [| 1.; 3.; 2.; 4. |] in\n  let grad = T.grad T.max x in\n  let expected = T.create T.float32 [| 2; 2 |] [| 0.; 0.; 0.; 1. |] in\n  check_rune ~eps \"max gradient\" expected grad\n\nlet test_grad_sum_with_axis () =\n  (* Sum with axis specified *)\n  let x = T.create T.float32 [| 2; 3 |] [| 0.; 1.; 2.; 3.; 4.; 5. 
|] in\n  let grad = T.grad (fun x -> T.sum (T.sum x ~axes:[ 1 ])) x in\n  check_rune ~eps \"sum with axis gradient\" (T.ones_like x) grad\n\nlet test_grad_min () =\n  (* Min gradient: 1 at min element, 0 elsewhere *)\n  let x = T.create T.float32 [| 2; 2 |] [| 4.; 2.; 3.; 1. |] in\n  let grad = T.grad T.min x in\n  let expected = T.create T.float32 [| 2; 2 |] [| 0.; 0.; 0.; 1. |] in\n  check_rune ~eps \"min gradient\" expected grad\n\nlet test_grad_prod () =\n  (* Product gradient: product of all other elements *)\n  let x = T.create T.float32 [| 3 |] [| 2.; 3.; 4. |] in\n  let grad = T.grad T.prod x in\n  let expected = T.create T.float32 [| 3 |] [| 12.; 8.; 6. |] in\n  check_rune ~eps \"prod gradient\" expected grad\n\n(* ───── Broadcasting Gradients ───── *)\n\nlet test_grad_broadcast_add () =\n  (* Addition with broadcasting: [2,3] + [3] *)\n  let x = T.create T.float32 [| 2; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6. |] in\n  let bias = T.create T.float32 [| 3 |] [| 0.1; 0.2; 0.3 |] in\n\n  let _, grads_add =\n    T.value_and_grads\n      (fun inputs ->\n        match inputs with\n        | [ a; b ] -> T.sum (T.add a b)\n        | _ -> failwith \"Expected 2 inputs\")\n      [ x; bias ]\n  in\n  let grad_bias_add = List.nth grads_add 1 in\n  check_rune ~eps \"add broadcast: bias gradient\"\n    (T.full T.float32 [| 3 |] 2.0)\n    grad_bias_add\n\nlet test_grad_broadcast_mul () =\n  (* Multiplication with broadcasting: [2,3] * [3] *)\n  let x = T.create T.float32 [| 2; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6. |] in\n  let scale = T.create T.float32 [| 3 |] [| 2.; 3.; 4. |] in\n\n  let _, grads_mul =\n    T.value_and_grads\n      (fun inputs ->\n        match inputs with\n        | [ a; s ] -> T.sum (T.mul a s)\n        | _ -> failwith \"Expected 2 inputs\")\n      [ x; scale ]\n  in\n  let grad_scale_mul = List.nth grads_mul 1 in\n  let expected_mul = T.create T.float32 [| 3 |] [| 5.; 7.; 9. 
|] in\n  check_rune ~eps \"mul broadcast: scale gradient\" expected_mul grad_scale_mul\n\nlet test_grad_scalar_broadcast () =\n  (* Scalar broadcasting for add and mul *)\n  let x = T.create T.float32 [| 2; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6. |] in\n  let scalar = T.scalar T.float32 2.0 in\n\n  let grad_scalar_add = T.grad (fun s -> T.sum (T.add x s)) scalar in\n  check_scalar ~eps \"scalar add broadcast\" 6.0 (scalar_value grad_scalar_add);\n\n  let grad_scalar_mul = T.grad (fun s -> T.sum (T.mul x s)) scalar in\n  check_scalar ~eps \"scalar mul broadcast\" 21.0 (scalar_value grad_scalar_mul)\n\nlet test_grad_expand () =\n  (* Expand gradient tests *)\n  (* Scalar to vector *)\n  let scalar = T.scalar T.float32 5.0 in\n  let grad_scalar = T.grad (fun s -> T.sum (T.expand [| 3 |] s)) scalar in\n  check_scalar ~eps \"expand scalar gradient\" 3.0 (scalar_value grad_scalar);\n\n  (* Vector to matrix *)\n  let vec = T.create T.float32 [| 3 |] [| 10.; 20.; 30. |] in\n  let grad_vec = T.grad (fun v -> T.sum (T.expand [| 2; 3 |] v)) vec in\n  check_rune ~eps \"expand vector gradient\"\n    (T.full T.float32 [| 3 |] 2.0)\n    grad_vec\n\nlet test_grad_where () =\n  (* Where with broadcasting *)\n  let cond =\n    T.create T.bool [| 2; 3 |] [| true; false; true; false; true; false |]\n  in\n  let x = T.create T.float32 [| 2; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6. |] in\n  let y_scalar = T.scalar T.float32 10.0 in\n\n  let _, grads_where =\n    T.value_and_grads\n      (fun inputs ->\n        match inputs with\n        | [ a; b ] -> T.sum (T.where cond a b)\n        | _ -> failwith \"Expected 2 inputs\")\n      [ x; y_scalar ]\n  in\n  let grad_y_where = List.nth grads_where 1 in\n  check_scalar ~eps \"where scalar broadcast gradient\" 3.0\n    (scalar_value grad_y_where)\n\n(* ───── Shape Manipulation ───── *)\n\nlet test_grad_reshape () =\n  (* Reshape gradient *)\n  let x = T.create T.float32 [| 2; 3 |] [| 0.; 1.; 2.; 3.; 4.; 5. 
|] in\n  let grad = T.grad (fun x -> T.sum (T.reshape [| 3; 2 |] x)) x in\n  check_rune ~eps \"reshape gradient\" (T.ones_like x) grad\n\nlet test_grad_transpose () =\n  (* Transpose gradient *)\n  let x = T.create T.float32 [| 2; 3 |] [| 0.; 1.; 2.; 3.; 4.; 5. |] in\n  let grad = T.grad (fun x -> T.sum (T.transpose x)) x in\n  check_rune ~eps \"transpose gradient\" (T.ones_like x) grad\n\nlet test_grad_squeeze () =\n  (* Squeeze gradient *)\n  let x = T.create T.float32 [| 1; 3; 1 |] [| 1.; 2.; 3. |] in\n  let grad = T.grad (fun x -> T.sum (T.squeeze x)) x in\n  check_rune ~eps \"squeeze gradient\" (T.ones_like x) grad\n\nlet test_grad_unsqueeze () =\n  (* Unsqueeze gradient *)\n  let x = T.create T.float32 [| 3 |] [| 1.; 2.; 3. |] in\n  let grad = T.grad (fun x -> T.sum (T.unsqueeze_axis 0 x)) x in\n  check_rune ~eps \"unsqueeze gradient\" (T.ones_like x) grad\n\nlet test_grad_flatten () =\n  (* Flatten gradient *)\n  let x = T.create T.float32 [| 2; 3 |] [| 0.; 1.; 2.; 3.; 4.; 5. |] in\n  let grad = T.grad (fun x -> T.sum (T.flatten x)) x in\n  check_rune ~eps \"flatten gradient\" (T.ones_like x) grad\n\nlet test_grad_flip () =\n  (* Flip gradient *)\n  let x = T.create T.float32 [| 3 |] [| 1.; 2.; 3. |] in\n  let grad = T.grad (fun x -> T.sum (T.flip x)) x in\n  check_rune ~eps \"flip gradient\" (T.ones_like x) grad\n\nlet test_grad_pad () =\n  (* Pad gradient *)\n  let x = T.create T.float32 [| 3 |] [| 1.; 2.; 3. |] in\n  let grad = T.grad (fun x -> T.sum (T.pad [| (1, 1) |] 0. x)) x in\n  check_rune ~eps \"pad gradient\" (T.ones_like x) grad\n\nlet test_grad_tile () =\n  (* Tile gradient *)\n  let x = T.create T.float32 [| 2 |] [| 1.; 2. |] in\n  let grad = T.grad (fun x -> T.sum (T.tile [| 3 |] x)) x in\n  let expected = T.full T.float32 [| 2 |] 3.0 in\n  check_rune ~eps \"tile gradient\" expected grad\n\nlet test_grad_concatenate () =\n  (* Concatenate gradient *)\n  let x = T.create T.float32 [| 2 |] [| 1.; 2. 
|] in\n  let y = T.create T.float32 [| 3 |] [| 3.; 4.; 5. |] in\n  let grad_x = T.grad (fun x -> T.sum (T.concatenate [ x; y ])) x in\n  let grad_y = T.grad (fun y -> T.sum (T.concatenate [ x; y ])) y in\n  check_rune ~eps \"concatenate grad x\" (T.ones_like x) grad_x;\n  check_rune ~eps \"concatenate grad y\" (T.ones_like y) grad_y\n\nlet test_grad_stack () =\n  (* Stack gradient *)\n  let x = T.create T.float32 [| 2 |] [| 1.; 2. |] in\n  let y = T.create T.float32 [| 2 |] [| 3.; 4. |] in\n  let grad_x = T.grad (fun x -> T.sum (T.stack [ x; y ])) x in\n  check_rune ~eps \"stack gradient\" (T.ones_like x) grad_x\n\n(* ───── Indexing Operations ───── *)\n\nlet test_grad_get () =\n  (* Test getting a single row *)\n  let x =\n    T.create T.float32 [| 3; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6.; 7.; 8.; 9. |]\n  in\n  let grad_fn = T.grad (fun x -> T.sum (T.get [ 1 ] x)) in\n  let grad = grad_fn x in\n  let expected =\n    T.create T.float32 [| 3; 3 |] [| 0.; 0.; 0.; 1.; 1.; 1.; 0.; 0.; 0. |]\n  in\n  check_rune ~eps \"get single row\" expected grad;\n\n  (* Test getting a single element *)\n  let grad_fn = T.grad (fun x -> T.get [ 1; 1 ] x) in\n  let grad = grad_fn x in\n  let expected =\n    T.create T.float32 [| 3; 3 |] [| 0.; 0.; 0.; 0.; 1.; 0.; 0.; 0.; 0. |]\n  in\n  check_rune ~eps \"get single element\" expected grad\n\nlet test_grad_slice () =\n  let x =\n    T.create T.float32 [| 3; 4 |]\n      [| 1.; 2.; 3.; 4.; 5.; 6.; 7.; 8.; 9.; 10.; 11.; 12. |]\n  in\n\n  (* Test range slicing *)\n  let grad_fn =\n    T.grad (fun x -> T.sum (T.slice [ T.R (1, 3); T.R (0, 2) ] x))\n  in\n  let grad = grad_fn x in\n  let expected =\n    T.create T.float32 [| 3; 4 |]\n      [| 0.; 0.; 0.; 0.; 1.; 1.; 0.; 0.; 1.; 1.; 0.; 0. 
|]\n  in\n  check_rune ~eps \"slice range\" expected grad;\n\n  (* Test with step *)\n  let grad_fn = T.grad (fun x -> T.sum (T.slice [ T.Rs (0, 3, 2) ] x)) in\n  let grad = grad_fn x in\n  let expected =\n    T.create T.float32 [| 3; 4 |]\n      [| 1.; 1.; 1.; 1.; 0.; 0.; 0.; 0.; 1.; 1.; 1.; 1. |]\n  in\n  check_rune ~eps \"slice with step\" expected grad\n\nlet test_grad_take () =\n  let x = T.create T.float32 [| 5 |] [| 1.; 2.; 3.; 4.; 5. |] in\n  let indices = T.create T.int32 [| 3 |] [| 1l; 3l; 0l |] in\n\n  (* Test take without axis (flattens) *)\n  let grad_fn = T.grad (fun x -> T.sum (T.take indices x)) in\n  let grad = grad_fn x in\n  let expected = T.create T.float32 [| 5 |] [| 1.; 1.; 0.; 1.; 0. |] in\n  check_rune ~eps \"take\" expected grad;\n\n  (* Test 2D take with axis *)\n  let x2 =\n    T.create T.float32 [| 3; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6.; 7.; 8.; 9. |]\n  in\n  let indices2 = T.create T.int32 [| 2 |] [| 0l; 2l |] in\n  let grad_fn2 = T.grad (fun x -> T.sum (T.take ~axis:1 indices2 x)) in\n  let grad2 = grad_fn2 x2 in\n  let expected2 =\n    T.create T.float32 [| 3; 3 |] [| 1.; 0.; 1.; 1.; 0.; 1.; 1.; 0.; 1. |]\n  in\n  check_rune ~eps \"take with axis\" expected2 grad2\n\nlet test_grad_take_along_axis () =\n  let x = T.create T.float32 [| 2; 3 |] [| 4.; 1.; 2.; 3.; 5.; 6. |] in\n  let indices = T.create T.int32 [| 2; 1 |] [| 0l; 1l |] in\n\n  let grad_fn = T.grad (fun x -> T.sum (T.take_along_axis ~axis:1 indices x)) in\n  let grad = grad_fn x in\n  let expected = T.create T.float32 [| 2; 3 |] [| 1.; 0.; 0.; 0.; 1.; 0. |] in\n  check_rune ~eps \"take_along_axis\" expected grad\n\nlet test_grad_cumsum () =\n  let x = T.create T.float32 [| 4 |] [| 1.; 2.; 3.; 4. |] in\n  let grad = T.grad (fun x -> T.sum (T.cumsum ~axis:0 x)) x in\n  let expected = T.create T.float32 [| 4 |] [| 4.; 3.; 2.; 1. 
|] in\n  check_rune ~eps \"cumsum gradient\" expected grad;\n\n  (* Edge case: 2D cumsum along different axes *)\n  let x2 = T.create T.float32 [| 2; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6. |] in\n  let grad2 = T.grad (fun x -> T.sum (T.cumsum ~axis:1 x)) x2 in\n  let expected2 = T.create T.float32 [| 2; 3 |] [| 3.; 2.; 1.; 3.; 2.; 1. |] in\n  check_rune ~eps \"cumsum gradient axis=1\" expected2 grad2\n\nlet test_grad_cumprod () =\n  let x = T.create T.float32 [| 3 |] [| 1.; 2.; 3. |] in\n  let grad = T.grad (fun x -> T.sum (T.cumprod ~axis:0 x)) x in\n  let expected = T.create T.float32 [| 3 |] [| 9.; 4.; 2. |] in\n  check_rune ~eps \"cumprod gradient\" expected grad;\n\n  (* Edge case: cumprod with zeros - tests divide_no_nan handling *)\n  let x_zero = T.create T.float32 [| 4 |] [| 1.; 0.; 2.; 3. |] in\n  let grad_zero = T.grad (fun x -> T.sum (T.cumprod ~axis:0 x)) x_zero in\n  (* cumprod([1,0,2,3]) = [1, 0, 0, 0], sum = 1 Gradient should handle division\n     by zero gracefully *)\n  let is_finite v = (not (Float.is_nan v)) && Float.is_finite v in\n  let grad_vals = T.to_array grad_zero in\n  Array.iter\n    (fun v ->\n      equal ~msg:\"cumprod zero gradient is finite\" bool true (is_finite v))\n    grad_vals\n\nlet test_grad_cummax () =\n  (* Basic case: strictly increasing - each element sets a new max once *)\n  let x = T.create T.float32 [| 4 |] [| 1.; 2.; 3.; 4. |] in\n  let grad = T.grad (fun x -> T.sum (T.cummax ~axis:0 x)) x in\n  (* cummax([1,2,3,4]) = [1,2,3,4], each position sets exactly one max *)\n  let expected = T.create T.float32 [| 4 |] [| 1.; 1.; 1.; 1. |] in\n  check_rune ~eps \"cummax gradient (increasing)\" expected grad;\n\n  (* Strictly decreasing - only first element contributes to all outputs *)\n  let x_dec = T.create T.float32 [| 4 |] [| 4.; 3.; 2.; 1. 
|] in\n  let grad_dec = T.grad (fun x -> T.sum (T.cummax ~axis:0 x)) x_dec in\n  (* cummax([4,3,2,1]) = [4,4,4,4]: x[0] is the running max for all 4 outputs.\n     The mask-based gradient credits only positions where a new max is set\n     (res > shifted_res); here that is position 0 alone, so grad = [1,0,0,0]. *)\n  let expected_dec = T.create T.float32 [| 4 |] [| 1.; 0.; 0.; 0. |] in\n  check_rune ~eps \"cummax gradient (decreasing)\" expected_dec grad_dec;\n\n  (* Mixed case with plateau *)\n  let x_mix = T.create T.float32 [| 5 |] [| 1.; 3.; 2.; 3.; 4. |] in\n  let grad_mix = T.grad (fun x -> T.sum (T.cummax ~axis:0 x)) x_mix in\n  (* cummax([1,3,2,3,4]) = [1,3,3,3,4] res > shifted: [1>-inf, 3>1, 3>3, 3>3,\n     4>3] = [T,T,F,F,T] so grad = [1,1,0,0,1] *)\n  let expected_mix = T.create T.float32 [| 5 |] [| 1.; 1.; 0.; 0.; 1. |] in\n  check_rune ~eps \"cummax gradient (mixed)\" expected_mix grad_mix;\n\n  (* 2D case along axis 1 *)\n  let x2 = T.create T.float32 [| 2; 3 |] [| 1.; 3.; 2.; 4.; 2.; 5. |] in\n  let grad2 = T.grad (fun x -> T.sum (T.cummax ~axis:1 x)) x2 in\n  (* Row 0: cummax([1,3,2]) = [1,3,3], res > shifted = [T,T,F] -> [1,1,0] Row 1:\n     cummax([4,2,5]) = [4,4,5], res > shifted = [T,F,T] -> [1,0,1] *)\n  let expected2 = T.create T.float32 [| 2; 3 |] [| 1.; 1.; 0.; 1.; 0.; 1. |] in\n  check_rune ~eps \"cummax gradient 2D axis=1\" expected2 grad2\n\nlet test_grad_cummin () =\n  (* Basic case: strictly decreasing - each element sets a new min once *)\n  let x = T.create T.float32 [| 4 |] [| 4.; 3.; 2.; 1. 
|] in\n  check_rune ~eps \"cummin gradient (decreasing)\" expected grad;\n\n  (* Strictly increasing - only first element contributes *)\n  let x_inc = T.create T.float32 [| 4 |] [| 1.; 2.; 3.; 4. |] in\n  let grad_inc = T.grad (fun x -> T.sum (T.cummin ~axis:0 x)) x_inc in\n  (* cummin([1,2,3,4]) = [1,1,1,1], only x[0] sets new min at pos 0 *)\n  let expected_inc = T.create T.float32 [| 4 |] [| 1.; 0.; 0.; 0. |] in\n  check_rune ~eps \"cummin gradient (increasing)\" expected_inc grad_inc;\n\n  (* Mixed case *)\n  let x_mix = T.create T.float32 [| 5 |] [| 3.; 1.; 2.; 1.; 0. |] in\n  let grad_mix = T.grad (fun x -> T.sum (T.cummin ~axis:0 x)) x_mix in\n  (* cummin([3,1,2,1,0]) = [3,1,1,1,0] res < shifted: [3<+inf, 1<3, 1<1, 1<1,\n     0<1] = [T,T,F,F,T] so grad = [1,1,0,0,1] *)\n  let expected_mix = T.create T.float32 [| 5 |] [| 1.; 1.; 0.; 0.; 1. |] in\n  check_rune ~eps \"cummin gradient (mixed)\" expected_mix grad_mix;\n\n  (* 2D case along axis 0 *)\n  let x2 = T.create T.float32 [| 3; 2 |] [| 3.; 5.; 1.; 4.; 2.; 2. |] in\n  let grad2 = T.grad (fun x -> T.sum (T.cummin ~axis:0 x)) x2 in\n  (* Col 0: cummin([3,1,2]) = [3,1,1], res < shifted = [T,T,F] -> [1,1,0] Col 1:\n     cummin([5,4,2]) = [5,4,2], res < shifted = [T,T,T] -> [1,1,1] *)\n  let expected2 = T.create T.float32 [| 3; 2 |] [| 1.; 1.; 1.; 1.; 0.; 1. |] in\n  check_rune ~eps \"cummin gradient 2D axis=0\" expected2 grad2\n\n(* ───── Linear Algebra Operations ───── *)\n\nlet test_grad_dot () =\n  (* Dot product gradient *)\n  let x = T.create T.float32 [| 3 |] [| 1.; 2.; 3. |] in\n  let y = T.create T.float32 [| 3 |] [| 4.; 5.; 6. |] in\n  let grad_x = T.grad (fun x -> T.dot x y) x in\n  let grad_y = T.grad (fun y -> T.dot x y) y in\n  check_rune ~eps \"dot grad wrt x\" y grad_x;\n  check_rune ~eps \"dot grad wrt y\" x grad_y\n\nlet test_grad_trace () =\n  (* Trace gradient: identity matrix *)\n  let x =\n    T.create T.float32 [| 3; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6.; 7.; 8.; 9. 
|]\n  in\n  let grad = T.grad T.trace x in\n  let expected = T.eye T.float32 3 in\n  check_rune ~eps \"trace gradient\" expected grad\n\nlet test_grad_norm () =\n  (* L2 norm gradient *)\n  let x = T.create T.float32 [| 3 |] [| 3.; 0.; 4. |] in\n  let grad = T.grad (fun x -> T.norm x) x in\n  (* Gradient of ||x|| is x/||x|| *)\n  let expected = T.create T.float32 [| 3 |] [| 0.6; 0.; 0.8 |] in\n  check_rune ~eps \"norm gradient\" expected grad\n\nlet test_grad_solve () =\n  (* Test solve gradient using finite differences *)\n  (* Create a well-conditioned matrix *)\n  let a =\n    T.create T.float64 [| 3; 3 |] [| 4.; 1.; 1.; 1.; 3.; 1.; 1.; 1.; 2. |]\n  in\n  let b = T.create T.float64 [| 3 |] [| 4.; 5.; 6. |] in\n\n  (* Test gradient w.r.t b *)\n  let f_b b = T.sum (T.solve a b) in\n  match T.check_gradient ~rtol:1e-2 ~atol:1e-3 f_b b with\n  | `Pass result -> equal ~msg:\"solve grad wrt b passed\" bool true result.passed\n  | `Fail result ->\n      Printf.printf \"solve grad wrt b failed: max_rel_error = %.2e\\n\"\n        result.max_rel_error;\n      fail \"solve gradient w.r.t b check failed\"\n\nlet test_grad_solve_wrt_a () =\n  (* Test solve gradient w.r.t matrix A *)\n  let a =\n    T.create T.float64 [| 3; 3 |] [| 4.; 1.; 1.; 1.; 3.; 1.; 1.; 1.; 2. |]\n  in\n  let b = T.create T.float64 [| 3 |] [| 4.; 5.; 6. |] in\n\n  (* Test gradient w.r.t a *)\n  let f_a a = T.sum (T.solve a b) in\n  match T.check_gradient ~rtol:1e-2 ~atol:1e-3 f_a a with\n  | `Pass result -> equal ~msg:\"solve grad wrt a passed\" bool true result.passed\n  | `Fail result ->\n      Printf.printf \"solve grad wrt a failed: max_rel_error = %.2e\\n\"\n        result.max_rel_error;\n      fail \"solve gradient w.r.t a check failed\"\n\nlet test_grad_cholesky () =\n  (* Test cholesky gradient using finite differences *)\n  (* Create a symmetric positive definite matrix *)\n  let a =\n    T.create T.float64 [| 3; 3 |] [| 4.; 2.; 1.; 2.; 5.; 2.; 1.; 2.; 6. 
|]\n  in\n\n  (* Test gradient - cholesky returns L where A = L @ L^T *)\n  let f_chol a = T.sum (T.cholesky ~upper:false a) in\n  match T.check_gradient ~rtol:1e-2 ~atol:1e-3 f_chol a with\n  | `Pass result ->\n      equal ~msg:\"cholesky gradient passed\" bool true result.passed\n  | `Fail result ->\n      Printf.printf \"cholesky gradient failed: max_rel_error = %.2e\\n\"\n        result.max_rel_error;\n      fail \"cholesky gradient check failed\"\n\n(* ───── Atomic Neural Network Operations ───── *)\n\nlet test_grad_matmul () =\n  let a = T.create T.float32 [| 2; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6. |] in\n  let b = T.create T.float32 [| 3; 2 |] [| 0.1; 0.2; 0.3; 0.4; 0.5; 0.6 |] in\n\n  let f_a a = T.sum (T.matmul a b) in\n  let f_b b = T.sum (T.matmul a b) in\n\n  let grad_a = T.grad f_a a in\n  let grad_b = T.grad f_b b in\n\n  let expected_a =\n    T.create T.float32 [| 2; 3 |] [| 0.3; 0.7; 1.1; 0.3; 0.7; 1.1 |]\n  in\n  let expected_b = T.create T.float32 [| 3; 2 |] [| 5.; 5.; 7.; 7.; 9.; 9. |] in\n\n  check_rune ~eps \"matmul grad wrt a\" expected_a grad_a;\n  check_rune ~eps \"matmul grad wrt b\" expected_b grad_b\n\n(* ───── Compound Operations (Loss Functions, Layers) ───── *)\n\nlet test_grad_linear_layer () =\n  (* Combined matmul + bias pattern *)\n  let batch = 4 in\n  let in_dim = 2 in\n  let out_dim = 8 in\n\n  let x =\n    T.create T.float32 [| batch; in_dim |]\n      (Array.init (batch * in_dim) float_of_int)\n  in\n  let w =\n    T.create T.float32 [| in_dim; out_dim |]\n      (Array.init (in_dim * out_dim) (fun i -> float_of_int i *. 0.1))\n  in\n  let b =\n    T.create T.float32 [| out_dim |]\n      (Array.init out_dim (fun i -> float_of_int i *. 
0.01))\n  in\n\n  let linear w b x = T.add (T.matmul x w) b in\n  let loss x w b = T.sum (linear w b x) in\n\n  let grad_b = T.grad (fun b -> loss x w b) b in\n  let grad_w = T.grad (fun w -> loss x w b) w in\n\n  check_rune ~eps \"linear layer bias gradient\"\n    (T.full T.float32 [| out_dim |] (float_of_int batch))\n    grad_b;\n\n  let ones_out = T.ones T.float32 [| batch; out_dim |] in\n  let expected_w = T.matmul (T.transpose x) ones_out in\n  check_rune ~eps \"linear layer weight gradient\" expected_w grad_w\n\nlet test_grad_cross_entropy () =\n  (* Softmax + Cross-entropy *)\n  let logits =\n    T.create T.float32 [| 2; 3 |] [| 0.1; 0.2; 0.3; 0.4; 0.5; 0.6 |]\n  in\n  let targets = T.create T.float32 [| 2; 3 |] [| 1.; 0.; 0.; 0.; 0.; 1. |] in\n\n  let f_ce logits =\n    let probs = T.softmax logits ~axes:[ 1 ] in\n    let log_probs = T.log probs in\n    T.neg (T.sum (T.mul targets log_probs))\n  in\n\n  let grad_ce = T.grad f_ce logits in\n  let expected_ce = T.sub (T.softmax logits ~axes:[ 1 ]) targets in\n  check_rune ~eps:1e-5 \"cross-entropy gradient\" expected_ce grad_ce\n\nlet test_grad_binary_cross_entropy () =\n  (* Binary cross-entropy with sigmoid *)\n  let logits_bce = T.create T.float32 [| 4; 1 |] [| -1.; 0.5; 0.5; -1. |] in\n  let targets_bce = T.create T.float32 [| 4; 1 |] [| 0.; 1.; 1.; 0. 
|] in\n\n  let f_bce logits =\n    let sigmoid_logits = T.sigmoid logits in\n    let one = T.ones_like targets_bce in\n    let one_minus_targets = T.sub one targets_bce in\n    let one_minus_sigmoid = T.sub one sigmoid_logits in\n    let term1 = T.mul targets_bce (T.log sigmoid_logits) in\n    let term2 = T.mul one_minus_targets (T.log one_minus_sigmoid) in\n    T.mean (T.neg (T.add term1 term2))\n  in\n\n  let grad_bce = T.grad f_bce logits_bce in\n  let diff = T.sub (T.sigmoid logits_bce) targets_bce in\n  let n = float_of_int (Array.fold_left ( * ) 1 (T.shape logits_bce)) in\n  let expected_bce = T.div diff (T.scalar T.float32 n) in\n  check_rune ~eps:1e-5 \"sigmoid BCE gradient\" expected_bce grad_bce\n\n(* ───── Composition and Higher-order ───── *)\n\nlet test_grad_multi_variable () =\n  (* Multi-variable gradient *)\n  let x = T.scalar T.float32 2.0 in\n  let y = T.scalar T.float32 3.0 in\n\n  let f_x x = T.add (T.mul x x) (T.mul y y) in\n  let f_y y = T.add (T.mul x x) (T.mul y y) in\n  let grad_x = T.grad f_x x in\n  let grad_y = T.grad f_y y in\n  check_scalar ~eps \"multi-var grad wrt x\" 4.0 (scalar_value grad_x);\n  check_scalar ~eps \"multi-var grad wrt y\" 6.0 (scalar_value grad_y)\n\nlet test_grad_chain_rule () =\n  (* Chain rule *)\n  let x = T.scalar T.float32 2.0 in\n  let f_chain x =\n    let y = T.mul x x in\n    T.mul y y\n  in\n  let grad_chain = T.grad f_chain x in\n  check_scalar ~eps \"chain rule: grad(x⁴) at x=2\" 32.0 (scalar_value grad_chain)\n\nlet test_grad_shared_subexpression () =\n  (* Shared subexpression *)\n  let x = T.scalar T.float32 2.0 in\n  let f_shared x =\n    let a = T.mul x x in\n    T.add a a\n  in\n  let grad_shared = T.grad f_shared x in\n  check_scalar ~eps \"shared subexpression gradient\" 8.0\n    (scalar_value grad_shared)\n\nlet test_grad_second_order () =\n  (* Second-order derivative *)\n  let x = T.scalar T.float32 2.0 in\n  let f x = T.mul x (T.mul x x) in\n  let grad_f x = T.grad f x in\n  let second_deriv 
= T.grad grad_f x in\n  check_scalar ~eps \"second derivative of x³ at x=2\" 12.0\n    (scalar_value second_deriv)\n\n(* ───── API Functions ───── *)\n\nlet test_grad_value_and_grad () =\n  (* value_and_grad *)\n  let x = T.scalar T.float32 2.0 in\n  let f x = T.mul x x in\n  let value, grad = T.value_and_grad f x in\n  check_scalar ~eps \"value_and_grad value\" 4.0 (scalar_value value);\n  check_scalar ~eps \"value_and_grad grad\" 4.0 (scalar_value grad)\n\nlet test_grad_value_and_grads () =\n  (* grads and value_and_grads *)\n  let x = T.scalar T.float32 2.0 in\n  let y = T.scalar T.float32 3.0 in\n  let z = T.scalar T.float32 1.0 in\n\n  let f_multi inputs =\n    match inputs with\n    | [ a; b; c ] -> T.mul c (T.add (T.mul a a) (T.mul b b))\n    | _ -> failwith \"Expected 3 inputs\"\n  in\n\n  let value, grads = T.value_and_grads f_multi [ x; y; z ] in\n  check_scalar ~eps \"value_and_grads value\" 13.0 (scalar_value value);\n  check_scalar ~eps \"value_and_grads grad x\" 4.0\n    (scalar_value (List.nth grads 0));\n  check_scalar ~eps \"value_and_grads grad y\" 6.0\n    (scalar_value (List.nth grads 1));\n  check_scalar ~eps \"value_and_grads grad z\" 13.0\n    (scalar_value (List.nth grads 2))\n\nlet test_grad_pow () =\n  (* Power gradient: d/dx x^n = n*x^(n-1) *)\n  let x = T.scalar T.float32 2.0 in\n  let n = T.scalar T.float32 3.0 in\n  let grad = T.grad (fun x -> T.pow x n) x in\n  check_scalar ~eps \"pow gradient: x^3 at x=2\" 12.0 (scalar_value grad)\n\nlet test_grad_minimum () =\n  (* Minimum gradient *)\n  let x = T.scalar T.float32 2.0 in\n  let y = T.scalar T.float32 3.0 in\n  let grad_x = T.grad (fun x -> T.minimum x y) x in\n  let grad_y = T.grad (fun y -> T.minimum x y) y in\n  check_scalar ~eps \"minimum grad wrt x (smaller)\" 1.0 (scalar_value grad_x);\n  check_scalar ~eps \"minimum grad wrt y (larger)\" 0.0 (scalar_value grad_y);\n\n  (* Edge case: when x == y, gradient flows to second argument (y) since minimum\n     uses where(a < b, a, b) 
and a < b is false when equal *)\n  let z = T.scalar T.float32 2.0 in\n  let grad_x_eq = T.grad (fun x -> T.minimum x z) x in\n  let grad_z_eq = T.grad (fun z -> T.minimum x z) z in\n  check_scalar ~eps \"minimum grad wrt x (equal)\" 0.0 (scalar_value grad_x_eq);\n  check_scalar ~eps \"minimum grad wrt y (equal)\" 1.0 (scalar_value grad_z_eq)\n\nlet test_grad_maximum () =\n  (* Maximum gradient *)\n  let x = T.scalar T.float32 2.0 in\n  let y = T.scalar T.float32 3.0 in\n  let grad_x = T.grad (fun x -> T.maximum x y) x in\n  let grad_y = T.grad (fun y -> T.maximum x y) y in\n  check_scalar ~eps \"maximum grad wrt x (smaller)\" 0.0 (scalar_value grad_x);\n  check_scalar ~eps \"maximum grad wrt y (larger)\" 1.0 (scalar_value grad_y);\n\n  (* Edge case: when x == y, gradient flows to the second argument (y). This is\n     critical for relu(x) = max(x, 0) at x=0 to give gradient 0. *)\n  let z = T.scalar T.float32 2.0 in\n  let grad_x_eq = T.grad (fun x -> T.maximum x z) x in\n  let grad_z_eq = T.grad (fun z -> T.maximum x z) z in\n  check_scalar ~eps \"maximum grad wrt x (equal)\" 0.0 (scalar_value grad_x_eq);\n  check_scalar ~eps \"maximum grad wrt y (equal)\" 1.0 (scalar_value grad_z_eq)\n\nlet test_grad_zero () =\n  (* Gradient at zero *)\n  let grad = T.grad (fun x -> T.mul x x) (T.scalar T.float32 0.0) in\n  check_scalar ~eps \"grad(x²) at x=0\" 0.0 (scalar_value grad)\n\nlet test_grad_nan_propagation () =\n  (* NaN propagation in gradients *)\n  let x = T.scalar T.float32 1.0 in\n  let grad = T.grad (fun x -> T.div x (T.sub x x)) x in\n  (* x / (x - x) = x / 0 *)\n  let grad_val = scalar_value grad in\n  let is_nan_or_inf = Float.is_nan grad_val || Float.is_infinite grad_val in\n  equal ~msg:\"NaN/Inf gradient\" bool true is_nan_or_inf\n\nlet test_grad_large_values () =\n  (* Test gradient with large values *)\n  let x = T.scalar T.float32 1e10 in\n  let grad = T.grad (fun x -> T.div x (T.scalar T.float32 1e20)) x in\n  check_scalar ~eps:1e-15 \"large value gradient\" 1e-20 
(scalar_value grad)\n\nlet test_grad_small_values () =\n  (* Test gradient with very small values *)\n  let x = T.scalar T.float32 1e-10 in\n  let grad = T.grad (fun x -> T.mul x (T.scalar T.float32 1e10)) x in\n  check_scalar ~eps \"small value gradient\" 1e10 (scalar_value grad)\n\nlet test_no_grad_context () =\n  let x = T.scalar T.float32 2.0 in\n  let f x =\n    let y = T.no_grad (fun () -> T.mul x x) in\n    T.sum y\n  in\n  let value = f x |> scalar_value in\n  let grad = T.grad f x |> scalar_value in\n  check_scalar ~eps \"no_grad preserves forward value\" 4.0 value;\n  check_scalar ~eps \"no_grad zeros gradient\" 0.0 grad\n\nlet test_detach_constant () =\n  let x = T.scalar T.float32 3.0 in\n  let f x = T.sum (T.detach x) in\n  let value = f x |> scalar_value in\n  let grad = T.grad f x |> scalar_value in\n  check_scalar ~eps \"detach preserves forward value\" 3.0 value;\n  check_scalar ~eps \"detach removes gradient\" 0.0 grad\n\nlet test_detach_partial_grad () =\n  let x = T.scalar T.float32 2.5 in\n  let f x = T.sum (T.mul (T.detach x) x) in\n  let grad = T.grad f x |> scalar_value in\n  check_scalar ~eps \"detach treats operand as constant\" 2.5 grad\n\n(* ───── FFT Operations ───── *)\n\n(* Helper to check complex tensor gradients *)\nlet check_complex_grad ~eps msg expected actual =\n  let diff = T.sub expected actual in\n  let err = T.sum (T.mul diff (T.conjugate diff)) in\n  let err_val = (T.item [] err : Complex.t).re in\n  let size = Float.of_int (Array.fold_left ( * ) 1 (T.shape expected)) in\n  if err_val > eps *. eps *. 
size then\n    failf \"%s: complex grad error = %.2e\" msg err_val\n\nlet test_grad_fft () =\n  (* FFT gradient: For f(x) = sum(FFT(x)), grad = n * IFFT(ones) Since FFT is\n     linear, the adjoint is n * IFFT *)\n  let x =\n    T.create T.complex64 [| 4 |]\n      [|\n        Complex.{ re = 1.0; im = 0.5 };\n        Complex.{ re = 2.0; im = -0.5 };\n        Complex.{ re = 3.0; im = 0.2 };\n        Complex.{ re = 4.0; im = -0.2 };\n      |]\n  in\n  let f x = T.sum (T.fft ~axis:0 x) in\n  let grad = T.grad f x in\n  (* Gradient of sum(FFT(x)) w.r.t. x is n * IFFT(ones) = ones (since IFFT\n     divides by n) *)\n  let n = 4 in\n  let ones = T.ones T.complex64 [| n |] in\n  let expected = T.mul_s ones Complex.{ re = Float.of_int n; im = 0.0 } in\n  let expected_grad = T.ifft ~axis:0 expected in\n  check_complex_grad ~eps:1e-5 \"fft gradient\" expected_grad grad\n\nlet test_grad_ifft () =\n  (* IFFT gradient: For f(x) = sum(IFFT(x)), grad = FFT(ones) / n *)\n  let x =\n    T.create T.complex64 [| 4 |]\n      [|\n        Complex.{ re = 10.0; im = 0.0 };\n        Complex.{ re = -2.0; im = 2.0 };\n        Complex.{ re = -2.0; im = 0.0 };\n        Complex.{ re = -2.0; im = -2.0 };\n      |]\n  in\n  let f x = T.sum (T.ifft ~axis:0 x) in\n  let grad = T.grad f x in\n  (* Gradient of sum(IFFT(x)) w.r.t. 
x is FFT(ones) / n *)\n  let n = 4 in\n  let ones = T.ones T.complex64 [| n |] in\n  let fft_ones = T.fft ~axis:0 ones in\n  let expected_grad =\n    T.div_s fft_ones Complex.{ re = Float.of_int n; im = 0.0 }\n  in\n  check_complex_grad ~eps:1e-5 \"ifft gradient\" expected_grad grad\n\nlet test_grad_fft_roundtrip () =\n  (* FFT -> IFFT roundtrip should have identity gradient *)\n  let x =\n    T.create T.complex64 [| 4 |]\n      [|\n        Complex.{ re = 1.0; im = 0.5 };\n        Complex.{ re = 2.0; im = -0.5 };\n        Complex.{ re = 3.0; im = 0.2 };\n        Complex.{ re = 4.0; im = -0.2 };\n      |]\n  in\n  let f x = T.sum (T.ifft ~axis:0 (T.fft ~axis:0 x)) in\n  let grad = T.grad f x in\n  (* Roundtrip is identity, so gradient of sum should be ones *)\n  let expected_grad = T.ones T.complex64 [| 4 |] in\n  check_complex_grad ~eps:1e-5 \"fft roundtrip gradient\" expected_grad grad\n\nlet test_grad_fft2 () =\n  (* 2D FFT gradient test *)\n  let x =\n    T.create T.complex64 [| 2; 2 |]\n      [|\n        Complex.{ re = 1.0; im = 0.0 };\n        Complex.{ re = 2.0; im = 0.0 };\n        Complex.{ re = 3.0; im = 0.0 };\n        Complex.{ re = 4.0; im = 0.0 };\n      |]\n  in\n  let f x = T.sum (T.fft2 x) in\n  let grad = T.grad f x in\n  (* For 2D FFT, adjoint is n1*n2 * IFFT2 *)\n  let n = 4 in\n  (* 2 * 2 *)\n  let ones = T.ones T.complex64 [| 2; 2 |] in\n  let expected = T.mul_s ones Complex.{ re = Float.of_int n; im = 0.0 } in\n  let expected_grad = T.ifft2 expected in\n  check_complex_grad ~eps:1e-5 \"fft2 gradient\" expected_grad grad\n\nlet tests =\n  [\n    group \"binary operations\"\n      [\n        test \"add\" test_grad_add;\n        test \"mul\" test_grad_mul;\n        test \"sub\" test_grad_sub;\n        test \"div\" test_grad_div;\n        test \"pow\" test_grad_pow;\n        test \"minimum\" test_grad_minimum;\n        test \"maximum\" test_grad_maximum;\n      ];\n    group \"unary operations\"\n      [\n        test \"exp\" test_grad_exp;\n      
  test \"log\" test_grad_log;\n        test \"sin\" test_grad_sin;\n        test \"cos\" test_grad_cos;\n        test \"sqrt\" test_grad_sqrt;\n        test \"neg\" test_grad_neg;\n        test \"relu\" test_grad_relu;\n        test \"tanh\" test_grad_tanh;\n        test \"abs\" test_grad_abs;\n        test \"sigmoid\" test_grad_sigmoid;\n        test \"softmax\" test_grad_softmax;\n        test \"square\" test_grad_square;\n        test \"recip\" test_grad_recip;\n        test \"rsqrt\" test_grad_rsqrt;\n        test \"sign\" test_grad_sign;\n        test \"tan\" test_grad_tan;\n        test \"sinh\" test_grad_sinh;\n        test \"cosh\" test_grad_cosh;\n      ];\n    group \"reduction operations\"\n      [\n        test \"sum\" test_grad_sum;\n        test \"mean\" test_grad_mean;\n        test \"max\" test_grad_max;\n        test \"sum with axis\" test_grad_sum_with_axis;\n        test \"min\" test_grad_min;\n        test \"prod\" test_grad_prod;\n      ];\n    group \"broadcasting\"\n      [\n        test \"broadcast add\" test_grad_broadcast_add;\n        test \"broadcast mul\" test_grad_broadcast_mul;\n        test \"scalar broadcast\" test_grad_scalar_broadcast;\n        test \"expand\" test_grad_expand;\n        test \"where\" test_grad_where;\n      ];\n    group \"shape manipulation\"\n      [\n        test \"reshape\" test_grad_reshape;\n        test \"transpose\" test_grad_transpose;\n        test \"squeeze\" test_grad_squeeze;\n        test \"unsqueeze\" test_grad_unsqueeze;\n        test \"flatten\" test_grad_flatten;\n        test \"flip\" test_grad_flip;\n        test \"pad\" test_grad_pad;\n        test \"tile\" test_grad_tile;\n        test \"concatenate\" test_grad_concatenate;\n        test \"stack\" test_grad_stack;\n      ];\n    group \"indexing operations\"\n      [\n        test \"get\" test_grad_get;\n        test \"slice\" test_grad_slice;\n        test \"take\" test_grad_take;\n        test \"take_along_axis\" 
test_grad_take_along_axis;\n      ];\n    group \"linear algebra\"\n      [\n        test \"dot\" test_grad_dot;\n        test \"trace\" test_grad_trace;\n        test \"norm\" test_grad_norm;\n        test \"solve wrt b\" test_grad_solve;\n        test \"solve wrt a\" test_grad_solve_wrt_a;\n        test \"cholesky\" test_grad_cholesky;\n      ];\n    group \"fft operations\"\n      [\n        test \"fft\" test_grad_fft;\n        test \"ifft\" test_grad_ifft;\n        test \"fft roundtrip\" test_grad_fft_roundtrip;\n        test \"fft2\" test_grad_fft2;\n      ];\n    group \"neural network operations\" [ test \"matmul\" test_grad_matmul ];\n    group \"cumulative\"\n      [\n        test \"cumsum\" test_grad_cumsum;\n        test \"cumprod\" test_grad_cumprod;\n        test \"cummax\" test_grad_cummax;\n        test \"cummin\" test_grad_cummin;\n      ];\n    group \"compound operations\"\n      [\n        test \"linear layer\" test_grad_linear_layer;\n        test \"cross entropy\" test_grad_cross_entropy;\n        test \"binary cross entropy\" test_grad_binary_cross_entropy;\n      ];\n    group \"composition and higher-order\"\n      [\n        test \"multi-variable\" test_grad_multi_variable;\n        test \"chain rule\" test_grad_chain_rule;\n        test \"shared subexpression\" test_grad_shared_subexpression;\n        test \"second order\" test_grad_second_order;\n      ];\n    group \"api functions\"\n      [\n        test \"value_and_grad\" test_grad_value_and_grad;\n        test \"value_and_grads\" test_grad_value_and_grads;\n        test \"no_grad\" test_no_grad_context;\n        test \"detach constant\" test_detach_constant;\n        test \"detach partial gradient\" test_detach_partial_grad;\n      ];\n    group \"special cases\"\n      [\n        test \"gradient at zero\" test_grad_zero;\n        test \"NaN propagation\" test_grad_nan_propagation;\n        test \"large values\" test_grad_large_values;\n        test \"small values\" 
test_grad_small_values;\n      ];\n  ]\n\n(* Test suite *)\nlet () = run \"Rune VJP Tests\" tests\n"
  },
  {
    "path": "packages/rune/test/test_vmap.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Windtrap\nopen Test_rune_support\n\nmodule T = struct\n  include Nx\n  include Rune\nend\n\nlet eps = 1e-6\n\n(* Test basic vmap functionality *)\nlet test_vmap_simple () =\n  let x = T.create T.float32 [| 3; 2 |] [| 1.; 2.; 3.; 4.; 5.; 6. |] in\n  let f t = T.mul_s t 2. in\n  let vmapped_f = T.vmap f in\n  let result = vmapped_f x in\n  let expected = T.create T.float32 [| 3; 2 |] [| 2.; 4.; 6.; 8.; 10.; 12. |] in\n  check_rune ~eps \"vmap simple\" expected result\n\n(* Test vmap with matrix multiplication *)\nlet test_vmap_matmul () =\n  let batch_x =\n    T.create T.float32 [| 2; 3; 3 |]\n      [|\n        1.;\n        2.;\n        3.;\n        4.;\n        5.;\n        6.;\n        7.;\n        8.;\n        9.;\n        10.;\n        11.;\n        12.;\n        13.;\n        14.;\n        15.;\n        16.;\n        17.;\n        18.;\n      |]\n  in\n  let w = T.create T.float32 [| 3; 2 |] [| 1.; 2.; 3.; 4.; 5.; 6. 
|] in\n  let batched_matmul = T.vmap (fun x -> T.matmul x w) in\n  let result = batched_matmul batch_x in\n\n  (* Expected: batch of 2 matrix multiplications *)\n  let expected_shape = [| 2; 3; 2 |] in\n  check_shape \"vmap matmul shape\" expected_shape result;\n\n  (* Check first batch result *)\n  let first_batch = T.get [ 0 ] result in\n  let expected_first = T.matmul (T.get [ 0 ] batch_x) w in\n  check_rune ~eps \"vmap matmul first batch\" expected_first first_batch\n\n(* Test vmap with different axis *)\nlet test_vmap_axis () =\n  let x = T.create T.float32 [| 2; 3; 4 |] (Array.init 24 float_of_int) in\n  let f = T.vmap ~in_axes:(T.Single (T.Map 1)) (fun t -> T.sum t) in\n  let result = f x in\n  let expected_shape = [| 3 |] in\n  check_shape \"vmap axis shape\" expected_shape result\n\n(* Test vmap with no output axis *)\nlet test_vmap_no_out_axis () =\n  (* JAX semantics: out_axes=None only works with constant functions. For\n     non-constant outputs, JAX would error. We take first element. *)\n  let x = T.create T.float32 [| 5; 3 |] (Array.init 15 float_of_int) in\n  let f = T.vmap ~out_axes:(T.OutSingle None) (fun t -> T.sum t) in\n  let result = f x in\n  (* First row sum: 0+1+2 = 3 *)\n  check_scalar ~eps \"vmap no out axis\" 3. (T.item [ 0 ] result)\n\n(* Test vmap with broadcasting *)\nlet test_vmap_broadcast () =\n  let x = T.create T.float32 [| 3; 2 |] [| 1.; 2.; 3.; 4.; 5.; 6. |] in\n  let y = T.create T.float32 [| 2 |] [| 10.; 20. |] in\n  let f = T.vmap (fun t -> T.add t y) in\n  let result = f x in\n  let expected =\n    T.create T.float32 [| 3; 2 |] [| 11.; 22.; 13.; 24.; 15.; 26. |]\n  in\n  check_rune ~eps \"vmap broadcast\" expected result\n\n(* Test nested vmap *)\nlet test_nested_vmap () =\n  let x = T.create T.float32 [| 2; 3; 4 |] (Array.init 24 float_of_int) in\n  let inner_vmap = T.vmap (fun t -> T.mul_s t 2.) 
in\n  let outer_vmap = T.vmap inner_vmap in\n  let result = outer_vmap x in\n  let expected_shape = [| 2; 3; 4 |] in\n  check_shape \"nested vmap shape\" expected_shape result;\n\n  (* Check that all values are doubled *)\n  let first_val = T.item [ 0; 0; 0 ] result in\n  check_scalar ~eps \"nested vmap first value\" 0. first_val\n\n(* Test vmap with reduction *)\nlet test_vmap_reduction () =\n  let x = T.create T.float32 [| 4; 3; 2 |] (Array.init 24 float_of_int) in\n  let f = T.vmap (fun t -> T.sum t ~axes:[ 1 ]) in\n  let result = f x in\n  let expected_shape = [| 4; 3 |] in\n  check_shape \"vmap reduction shape\" expected_shape result\n\n(* Test vmap with where operation *)\nlet test_vmap_where () =\n  (* JAX semantics: captured tensors are broadcast, not co-iterated *)\n  let cond =\n    T.create T.bool [| 3; 2 |] [| true; false; true; true; false; true |]\n  in\n  let x = T.create T.float32 [| 3; 2 |] [| 1.; 2.; 3.; 4.; 5.; 6. |] in\n  let y = T.create T.float32 [| 3; 2 |] [| 10.; 20.; 30.; 40.; 50.; 60. |] in\n  let f = T.vmap (fun c -> T.where c x y) in\n  let result = f cond in\n  (* With broadcast semantics, result shape should be [3, 3, 2] Each batch\n     element sees the entire x and y arrays *)\n  let expected_shape = [| 3; 3; 2 |] in\n  check_shape \"vmap where shape\" expected_shape result\n(* For now, just check shape. Full value check would be complex. 
*)\n\n(* Test vmap with transpose *)\nlet test_vmap_transpose () =\n  let x = T.create T.float32 [| 2; 3; 4 |] (Array.init 24 float_of_int) in\n  let f = T.vmap (fun t -> T.transpose t) in\n  let result = f x in\n  let expected_shape = [| 2; 4; 3 |] in\n  check_shape \"vmap transpose shape\" expected_shape result\n\n(* Test vmap with elementwise operations *)\nlet test_vmap_elementwise () =\n  let x =\n    T.create T.float32 [| 3; 4 |]\n      (Array.init 12 (fun i -> float_of_int (i + 1)))\n  in\n  let y =\n    T.create T.float32 [| 3; 4 |]\n      (Array.init 12 (fun i -> float_of_int (i + 1)))\n  in\n\n  (* JAX semantics: captured y is treated as a constant across the mapped axis\n     (not co-iterated). Broadcasting happens elementwise, not as a cross-product\n     over an extra axis. *)\n  let f = T.vmap (fun a -> T.add a y) in\n  let result = f x in\n  (* Under JAX semantics, result shape is [3, 4] (same as x). *)\n  let expected_shape = [| 3; 4 |] in\n  check_shape \"vmap elementwise broadcast shape\" expected_shape result\n\n(* Test composition: jvp (vmap f) *)\nlet test_jvp_vmap_composition () =\n  let x = T.create T.float32 [| 3; 2 |] [| 1.; 2.; 3.; 4.; 5.; 6. |] in\n  let v = T.create T.float32 [| 3; 2 |] [| 0.1; 0.2; 0.3; 0.4; 0.5; 0.6 |] in\n\n  (* Define f: sum of squares *)\n  let f t = T.sum (T.mul t t) in\n\n  (* vmap f *)\n  let vmapped_f = T.vmap f in\n\n  (* jvp of vmapped f *)\n  let primals, tangents = T.jvp vmapped_f x v in\n\n  let expected_primals = T.create T.float32 [| 3 |] [| 5.; 25.; 61. |] in\n  let expected_tangents = T.create T.float32 [| 3 |] [| 1.; 5.; 12.2 |] in\n\n  check_rune ~eps:1e-5 \"jvp(vmap(f)) primals\" expected_primals primals;\n  check_rune ~eps:1e-5 \"jvp(vmap(f)) tangents\" expected_tangents tangents\n\n(* Test composition: vmap (jvp f) *)\nlet test_vmap_jvp_composition () =\n  let x = T.create T.float32 [| 3; 2 |] [| 1.; 2.; 3.; 4.; 5.; 6. 
|] in\n  let v = T.create T.float32 [| 3; 2 |] [| 0.1; 0.2; 0.3; 0.4; 0.5; 0.6 |] in\n\n  (* Define f: sum of squares *)\n  let f t = T.sum (T.mul t t) in\n\n  (* Function that computes jvp and returns primals *)\n  let jvp_f_primals inputs =\n    match inputs with\n    | [ x; v ] ->\n        let primals, _ = T.jvp f x v in\n        primals\n    | _ -> failwith \"jvp_f_primals expects exactly 2 inputs\"\n  in\n\n  (* Function that computes jvp and returns tangents *)\n  let jvp_f_tangents inputs =\n    match inputs with\n    | [ x; v ] ->\n        let _, tangents = T.jvp f x v in\n        tangents\n    | _ -> failwith \"jvp_f_tangents expects exactly 2 inputs\"\n  in\n\n  (* vmap the jvp functions *)\n  let vmapped_jvp_f_primals = T.vmaps jvp_f_primals in\n  let vmapped_jvp_f_tangents = T.vmaps jvp_f_tangents in\n  let primals = vmapped_jvp_f_primals [ x; v ] in\n  let tangents = vmapped_jvp_f_tangents [ x; v ] in\n\n  let expected_primals = T.create T.float32 [| 3 |] [| 5.; 25.; 61. |] in\n  let expected_tangents = T.create T.float32 [| 3 |] [| 1.; 5.; 12.2 |] in\n\n  check_rune ~eps:1e-5 \"vmap(jvp(f)) primals\" expected_primals primals;\n  check_rune ~eps:1e-5 \"vmap(jvp(f)) tangents\" expected_tangents tangents\n\n(* Test composition: grad (vmap f) *)\nlet test_grad_vmap_composition () =\n  let x = T.create T.float32 [| 3; 2 |] [| 1.; 2.; 3.; 4.; 5.; 6. |] in\n\n  (* Define f: sum of squares *)\n  let f t = T.sum (T.mul t t) in\n\n  (* vmap f *)\n  let vmapped_f = T.vmap f in\n\n  (* To take grad of vmap, we need to sum the output *)\n  let sum_vmapped_f x = T.sum (vmapped_f x) in\n\n  (* grad of sum of vmapped f *)\n  let grad_sum_vmapped_f = T.grad sum_vmapped_f in\n  let grads = grad_sum_vmapped_f x in\n\n  let expected_grads =\n    T.create T.float32 [| 3; 2 |] [| 2.; 4.; 6.; 8.; 10.; 12. 
|]\n  in\n\n  check_rune ~eps:1e-5 \"grad(sum(vmap(f)))\" expected_grads grads\n\n(* Test composition: vmap (grad f) *)\nlet test_vmap_grad_composition () =\n  let x = T.create T.float32 [| 3; 2 |] [| 1.; 2.; 3.; 4.; 5.; 6. |] in\n\n  (* Define f: sum of squares *)\n  let f t = T.sum (T.mul t t) in\n\n  (* grad f *)\n  let grad_f = T.grad f in\n\n  (* vmap grad f *)\n  let vmapped_grad_f = T.vmap grad_f in\n  let grads = vmapped_grad_f x in\n\n  let expected_grads =\n    T.create T.float32 [| 3; 2 |] [| 2.; 4.; 6.; 8.; 10.; 12. |]\n  in\n\n  check_rune ~eps:1e-5 \"vmap(grad(f))\" expected_grads grads\n\n(* Test composition with two-argument function: jvp (vmap g) *)\nlet test_jvp_vmap_composition_two_args () =\n  let x = T.create T.float32 [| 2; 2 |] [| 1.; 2.; 3.; 4. |] in\n  let y = T.create T.float32 [| 2; 2 |] [| 5.; 6.; 7.; 8. |] in\n  let v_x = T.create T.float32 [| 2; 2 |] [| 0.1; 0.2; 0.3; 0.4 |] in\n  let v_y = T.create T.float32 [| 2; 2 |] [| 0.5; 0.6; 0.7; 0.8 |] in\n\n  (* Define g: sum of element-wise product *)\n  let g inputs =\n    match inputs with\n    | [ x; y ] -> T.sum (T.mul x y)\n    | _ -> failwith \"g expects exactly 2 inputs\"\n  in\n\n  (* vmap g *)\n  let vmapped_g = T.vmaps g in\n\n  (* jvp of vmapped g *)\n  let primals, tangents = T.jvps vmapped_g [ x; y ] [ v_x; v_y ] in\n\n  let expected_primals = T.create T.float32 [| 2 |] [| 17.; 53. |] in\n  let expected_tangents = T.create T.float32 [| 2 |] [| 3.4; 10.6 |] in\n\n  check_rune ~eps:1e-5 \"jvp(vmap(g)) primals\" expected_primals primals;\n  check_rune ~eps:1e-5 \"jvp(vmap(g)) tangents\" expected_tangents tangents\n\n(* Test composition with two-argument function: vmap (jvp g) *)\nlet test_vmap_jvp_composition_two_args () =\n  let x = T.create T.float32 [| 2; 2 |] [| 1.; 2.; 3.; 4. |] in\n  let y = T.create T.float32 [| 2; 2 |] [| 5.; 6.; 7.; 8. 
|] in\n  let v_x = T.create T.float32 [| 2; 2 |] [| 0.1; 0.2; 0.3; 0.4 |] in\n  let v_y = T.create T.float32 [| 2; 2 |] [| 0.5; 0.6; 0.7; 0.8 |] in\n\n  (* Define g: sum of element-wise product *)\n  let g inputs =\n    match inputs with\n    | [ x; y ] -> T.sum (T.mul x y)\n    | _ -> failwith \"g expects exactly 2 inputs\"\n  in\n\n  (* Function that computes jvp and returns primals *)\n  let jvp_g_primals inputs =\n    match inputs with\n    | [ x; y; v_x; v_y ] ->\n        let primals, _ = T.jvps g [ x; y ] [ v_x; v_y ] in\n        primals\n    | _ -> failwith \"jvp_g_primals expects exactly 4 inputs\"\n  in\n\n  (* Function that computes jvp and returns tangents *)\n  let jvp_g_tangents inputs =\n    match inputs with\n    | [ x; y; v_x; v_y ] ->\n        let _, tangents = T.jvps g [ x; y ] [ v_x; v_y ] in\n        tangents\n    | _ -> failwith \"jvp_g_tangents expects exactly 4 inputs\"\n  in\n\n  (* vmap the jvp functions *)\n  let vmapped_jvp_g_primals = T.vmaps jvp_g_primals in\n  let vmapped_jvp_g_tangents = T.vmaps jvp_g_tangents in\n  let primals = vmapped_jvp_g_primals [ x; y; v_x; v_y ] in\n  let tangents = vmapped_jvp_g_tangents [ x; y; v_x; v_y ] in\n\n  let expected_primals = T.create T.float32 [| 2 |] [| 17.; 53. |] in\n  let expected_tangents = T.create T.float32 [| 2 |] [| 3.4; 10.6 |] in\n\n  check_rune ~eps:1e-5 \"vmap(jvp(g)) primals\" expected_primals primals;\n  check_rune ~eps:1e-5 \"vmap(jvp(g)) tangents\" expected_tangents tangents\n\n(* Test composition with two-argument function: grad (vmap g) *)\nlet test_grad_vmap_composition_two_args () =\n  let x = T.create T.float32 [| 2; 2 |] [| 1.; 2.; 3.; 4. |] in\n  let y = T.create T.float32 [| 2; 2 |] [| 5.; 6.; 7.; 8. 
|] in\n\n  (* Define g: sum of element-wise product *)\n  let g inputs =\n    match inputs with\n    | [ x; y ] -> T.sum (T.mul x y)\n    | _ -> failwith \"g expects exactly 2 inputs\"\n  in\n\n  (* vmap g *)\n  let vmapped_g = T.vmaps g in\n\n  (* To take grad of vmap, we need to sum the output *)\n  let sum_vmapped_g inputs = T.sum (vmapped_g inputs) in\n\n  (* grad of sum of vmapped g *)\n  let grads_list = T.grads sum_vmapped_g [ x; y ] in\n  let grad_x = List.nth grads_list 0 in\n\n  let expected_grads = T.create T.float32 [| 2; 2 |] [| 5.; 6.; 7.; 8. |] in\n\n  check_rune ~eps:1e-5 \"grad(sum(vmap(g)), argnums=0)\" expected_grads grad_x\n\n(* Test composition with two-argument function: vmap (grad g) *)\nlet test_vmap_grad_composition_two_args () =\n  let x = T.create T.float32 [| 2; 2 |] [| 1.; 2.; 3.; 4. |] in\n  let y = T.create T.float32 [| 2; 2 |] [| 5.; 6.; 7.; 8. |] in\n\n  (* Define g: sum of element-wise product *)\n  let g inputs =\n    match inputs with\n    | [ x; y ] -> T.sum (T.mul x y)\n    | _ -> failwith \"g expects exactly 2 inputs\"\n  in\n\n  (* Function that computes grad w.r.t. first argument *)\n  let grad_g inputs =\n    match inputs with\n    | [ x; y ] ->\n        let grads = T.grads g [ x; y ] in\n        List.nth grads 0 (* Return gradient w.r.t. x *)\n    | _ -> failwith \"grad_g expects exactly 2 inputs\"\n  in\n\n  (* vmap grad g *)\n  let vmapped_grad_g = T.vmaps grad_g in\n  let grads = vmapped_grad_g [ x; y ] in\n\n  let expected_grads = T.create T.float32 [| 2; 2 |] [| 5.; 6.; 7.; 8. 
|] in\n\n  check_rune ~eps:1e-5 \"vmap(grad(g), argnums=0)\" expected_grads grads\n\nlet () =\n  run \"Vmap tests\"\n    [\n      group \"basic\"\n        [\n          test \"simple\" test_vmap_simple;\n          test \"matmul\" test_vmap_matmul;\n          test \"axis\" test_vmap_axis;\n          test \"no_out_axis\" test_vmap_no_out_axis;\n          test \"broadcast\" test_vmap_broadcast;\n          test \"nested\" test_nested_vmap;\n          test \"reduction\" test_vmap_reduction;\n          test \"where\" test_vmap_where;\n          test \"transpose\" test_vmap_transpose;\n          test \"elementwise\" test_vmap_elementwise;\n        ];\n      group \"composition\"\n        [\n          test \"jvp_vmap\" test_jvp_vmap_composition;\n          test \"vmap_jvp\" test_vmap_jvp_composition;\n          test \"grad_vmap\" test_grad_vmap_composition;\n          test \"vmap_grad\" test_vmap_grad_composition;\n          test \"jvp_vmap_two_args\" test_jvp_vmap_composition_two_args;\n          test \"vmap_jvp_two_args\" test_vmap_jvp_composition_two_args;\n          test \"grad_vmap_two_args\" test_grad_vmap_composition_two_args;\n          test \"vmap_grad_two_args\" test_vmap_grad_composition_two_args;\n        ];\n    ]\n"
  },
  {
    "path": "packages/sowilo/README.md",
    "content": "# Sowilo\n\nDifferentiable computer vision for OCaml, built on [Rune](../rune/)\n\nSowilo provides image processing operations expressed purely through Rune\ntensor operations. All operations are compatible with `Rune.grad`,\n`Rune.jit`, and `Rune.vmap`.\n\n## Quick Start\n\nLoad an image, detect edges, and save the result:\n\n```ocaml\nopen Sowilo\n\nlet () =\n  let img = Nx_io.load_image \"photo.png\" |> to_float in\n  let gray = to_grayscale img in\n  let edges = canny ~low:0.2 ~high:0.6 gray in\n  Nx_io.save_image (to_uint8 edges) \"edges.png\"\n```\n\n## Features\n\n- **Type conversion**: `to_float`, `to_uint8`, `normalize`, `threshold`\n- **Color**: `to_grayscale`, `rgb_to_hsv`/`hsv_to_rgb`, `adjust_brightness`, `adjust_contrast`, `adjust_saturation`, `adjust_hue`, `adjust_gamma`, `invert`\n- **Geometric transforms**: `resize` (nearest, bilinear), `crop`, `center_crop`, `hflip`, `vflip`, `rotate90`, `pad`\n- **Spatial filters**: `gaussian_blur`, `box_blur`, `median_blur`, `filter2d`, `unsharp_mask`\n- **Morphology**: `structuring_element` (Rect, Cross, Ellipse), `erode`, `dilate`, `opening`, `closing`, `morphological_gradient`\n- **Edge detection**: `sobel` (returns gx, gy), `scharr`, `laplacian`, `canny`\n- **Differentiable**: most operations support `Rune.grad` (exceptions: `canny`, `median_blur`)\n- **Batch ready**: all operations handle `[H; W; C]` and `[N; H; W; C]` tensors\n\n## Image Conventions\n\nImages are Rune tensors with channels-last layout. 
Operations expect float32\nvalues in [0, 1].\n\n- Single image: `[H; W; C]` (height, width, channels)\n- Batch: `[N; H; W; C]` (batch, height, width, channels)\n- Grayscale: C = 1, RGB: C = 3, RGBA: C = 4\n\n## Examples\n\n- **01-grayscale** -- RGB to grayscale conversion\n- **02-gaussian-blur** -- Gaussian blur with configurable sigma and kernel size\n- **03-median-blur** -- Median filtering for noise removal\n- **04-threshold** -- Binary thresholding\n- **05-sobel** -- Sobel gradient computation (horizontal and vertical)\n- **06-canny** -- Canny edge detection with hysteresis thresholds\n- **07-morphology** -- Erosion, dilation with structuring elements\n\n## Contributing\n\nSee the [Raven monorepo README](../../README.md) for guidelines.\n\n## License\n\nISC License. See [LICENSE](../../LICENSE) for details.\n"
  },
  {
    "path": "packages/sowilo/bench/README.md",
    "content": "# Sowilo Benchmarks\n\nBenchmark suite for Sowilo image-processing operators with a reference\nimplementation in OpenCV. The fixtures are synthetic but stable so we can track\nregressions across releases.\n\n## Fixtures\n\nPNG assets are stored in `./data/`:\n\n- `img_1920x1080.png` — 1920×1080 RGB frame with a “sunset” gradient.\n- `img_1280x720.png` — 1280×720 RGB frame with “forest” tones.\n- `img_512x512.png` — 512×512 RGB frame with a “nebula” palette.\n\nRegenerate the fixtures by running \n\n```bash\nuv run python sowilo/bench/scripts/generate_fixtures.py\n```\n\n## Running the benchmarks\n\n### Sowilo (OCaml)\n\n```bash\ndune exec sowilo/bench/bench_sowilo.exe\n```\n\n### OpenCV (Python)\n\n```bash\nuv run sowilo/bench/bench_sowilo.py\n```\n\n## Results Sowilo (OCaml)\n\n```\n┌───────────────────────────┬──────────┬──────────┬──────────┬─────────┬────────────┐\n│ Name                      │ Wall/Run │  CPU/Run │  mWd/Run │ Speedup │ vs Fastest │\n├───────────────────────────┼──────────┼──────────┼──────────┼─────────┼────────────┤\n│ Sowilo/ToGrayscale/1080p  │ 467.27μs │   2.21ms │   1.62kw │   1.00x │       100% │\n│ Sowilo/Sobel/720p         │  33.28ms │ 154.37ms │  10.38kw │   0.01x │      7122% │\n│ Sowilo/GaussianBlur/1080p │ 115.85ms │ 495.95ms │  24.67kw │   0.00x │     24793% │\n│ Sowilo/Canny/1080p        │ 569.93ms │    1.15s │ 178.48kw │   0.00x │    121969% │\n└───────────────────────────┴──────────┴──────────┴──────────┴─────────┴────────────┘\n```\n\n## Results OpenCV (Python)\n\n```\n┌─────────────────────────────┬──────────┬──────────┬─────────┬─────────┬────────────┐\n│ Name                        │ Wall/Run │  CPU/Run │ mWd/Run │ Speedup │ vs Fastest │\n├─────────────────────────────┼──────────┼──────────┼─────────┼─────────┼────────────┤\n│ Sobel/720p (OpenCV)         │ 417.38µs │ 417.08µs │ 508.99w │   1.00x │       100% │\n│ ToGrayscale/1080p (OpenCV)  │ 605.11µs │   1.24ms │ 990.27w │   0.69x │       145% │\n│ 
GaussianBlur/1080p (OpenCV) │   2.19ms │   6.74ms │  3.16kw │   0.19x │       524% │\n│ Canny/1080p (OpenCV)        │   5.88ms │  36.32ms │  4.40kw │   0.07x │      1408% │\n└─────────────────────────────┴──────────┴──────────┴─────────┴─────────┴────────────┘\n```\n"
  },
  {
    "path": "packages/sowilo/bench/bench_sowilo.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Sowilo image processing benchmarks using synthetic PNG fixtures. *)\n\nmodule Fixtures = struct\n  let data_dir = Filename.concat (Sys.getcwd ()) \"packages/sowilo/bench/data\"\n\n  let load_image name =\n    let path = Filename.concat data_dir name in\n    let nx_img = Nx_io.load_image path in\n    Sowilo.to_float nx_img\n\n  let img_1080 = lazy (load_image \"img_1920x1080.png\")\n  let img_720 = lazy (load_image \"img_1280x720.png\")\n  let gray_1080 = lazy (Sowilo.to_grayscale (Lazy.force img_1080))\n  let gray_720 = lazy (Sowilo.to_grayscale (Lazy.force img_720))\n  let img_1080 () = Lazy.force img_1080\n  let gray_1080 () = Lazy.force gray_1080\n  let gray_720 () = Lazy.force gray_720\nend\n\nlet force_tensor tensor = Nx.to_buffer tensor\n\nlet bench_grayscale img =\n  let gray = Sowilo.to_grayscale img in\n  force_tensor gray\n\nlet bench_gaussian img =\n  let blurred = Sowilo.gaussian_blur ~sigma:1.2 ~ksize:5 img in\n  force_tensor blurred\n\nlet bench_sobel img =\n  let gx, _gy = Sowilo.sobel img in\n  force_tensor gx\n\nlet bench_canny img =\n  let edges = Sowilo.canny ~low:0.2 ~high:0.6 img in\n  force_tensor edges\n\nlet all_benchmarks =\n  let color_1080 = Fixtures.img_1080 () in\n  let gray_1080 = Fixtures.gray_1080 () in\n  let gray_720 = Fixtures.gray_720 () in\n  [\n    Thumper.bench \"ToGrayscale/1080p\" (fun () -> bench_grayscale color_1080);\n    Thumper.bench \"GaussianBlur/1080p\" (fun () -> bench_gaussian color_1080);\n    Thumper.bench \"Sobel/720p\" (fun () -> bench_sobel gray_720);\n    Thumper.bench \"Canny/1080p\" (fun () -> bench_canny gray_1080);\n  ]\n  |> fun benches -> [ Thumper.group \"Sowilo\" benches ]\n\nlet () = Thumper.run \"sowilo\" all_benchmarks\n"
  },
  {
    "path": "packages/sowilo/bench/bench_sowilo.py",
    "content": "from __future__ import annotations\n\nimport sys\nfrom pathlib import Path\nfrom typing import Any, List\n\nimport cv2\nimport numpy as np\n\n_SCRIPTS_DIR = Path(__file__).resolve().parent\nwhile not (_SCRIPTS_DIR / \"dune-project\").exists():\n    _SCRIPTS_DIR = _SCRIPTS_DIR.parent\n_SCRIPTS_DIR = _SCRIPTS_DIR / \"scripts\"\nif str(_SCRIPTS_DIR) not in sys.path:\n    sys.path.insert(0, str(_SCRIPTS_DIR))\n\nimport ubench  # type: ignore\n\n\nDATA_DIR = Path(__file__).resolve().parent / \"data\"\n\n\ndef _load_images() -> tuple[dict[str, np.ndarray], dict[str, np.ndarray]]:\n    color_images: dict[str, np.ndarray] = {}\n    gray_images: dict[str, np.ndarray] = {}\n\n    for name in [\"img_1920x1080.png\", \"img_1280x720.png\", \"img_512x512.png\"]:\n        path = DATA_DIR / name\n        img_bgr = cv2.imread(str(path), cv2.IMREAD_COLOR)\n        if img_bgr is None:\n            raise RuntimeError(f\"Failed to read {path}\")\n        img_rgb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2RGB)\n        color_images[name] = img_rgb\n        gray_images[name] = cv2.cvtColor(img_rgb, cv2.COLOR_RGB2GRAY)\n\n    return color_images, gray_images\n\n\nCOLOR_IMAGES, GRAY_IMAGES = _load_images()\n\n\ndef build_benchmarks() -> List[Any]:\n    benches: List[Any] = []\n\n    color_1080 = COLOR_IMAGES[\"img_1920x1080.png\"]\n    gray_1080 = GRAY_IMAGES[\"img_1920x1080.png\"]\n    gray_720 = GRAY_IMAGES[\"img_1280x720.png\"]\n\n    def bench_grayscale() -> None:\n        gray = cv2.cvtColor(color_1080, cv2.COLOR_RGB2GRAY)\n        int(gray.sum())\n\n    benches.append(ubench.bench(\"ToGrayscale/1080p (OpenCV)\", bench_grayscale))\n\n    def bench_gaussian() -> None:\n        blurred = cv2.GaussianBlur(color_1080, ksize=(5, 5), sigmaX=1.2, sigmaY=1.2)\n        int(blurred.sum())\n\n    benches.append(ubench.bench(\"GaussianBlur/1080p (OpenCV)\", bench_gaussian))\n\n    def bench_sobel() -> None:\n        sobel_x = cv2.Sobel(gray_720, ddepth=cv2.CV_16S, dx=1, dy=0, 
ksize=3)\n        int(sobel_x.sum())\n\n    benches.append(ubench.bench(\"Sobel/720p (OpenCV)\", bench_sobel))\n\n    def bench_canny() -> None:\n        edges = cv2.Canny(gray_1080, threshold1=55.0, threshold2=120.0)\n        int(edges.sum())\n\n    benches.append(ubench.bench(\"Canny/1080p (OpenCV)\", bench_canny))\n\n    return benches\n\n\ndef default_config() -> ubench.Config:\n    return ubench.Config.default().build()\n\n\ndef main() -> None:\n    benchmarks = build_benchmarks()\n    config = default_config()\n    ubench.run(benchmarks, config=config, output_format=\"pretty\", verbose=False)\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "packages/sowilo/bench/dune",
    "content": "(executable\n (name bench_sowilo)\n (libraries sowilo nx.io nx thumper))\n\n(rule\n (alias runtest)\n (action\n  (progn\n   (run %{exe:bench_sowilo.exe} -q)\n   (diff? sowilo.thumper sowilo.thumper.corrected))))\n"
  },
  {
    "path": "packages/sowilo/bench/scripts/generate_fixtures.py",
    "content": "\"\"\"Generate synthetic image fixtures for Sowilo benchmarks.\"\"\"\n\nfrom __future__ import annotations\n\nfrom pathlib import Path\n\nimport numpy as np\nfrom PIL import Image\n\n\ndef _make_image(width: int, height: int, seed: int, *, palette: str) -> np.ndarray:\n    rng = np.random.default_rng(seed)\n    x = np.linspace(-1.0, 1.0, width, dtype=np.float32)\n    y = np.linspace(-1.0, 1.0, height, dtype=np.float32)\n    xv, yv = np.meshgrid(x, y)\n\n    radial = np.sqrt(xv**2 + yv**2)\n    angle = np.arctan2(yv, xv)\n\n    if palette == \"sunset\":\n        base_r = 0.6 + 0.4 * np.cos(angle)\n        base_g = 0.4 + 0.35 * np.sin(radial * np.pi)\n        base_b = 0.2 + 0.6 * np.exp(-radial * 1.5)\n    elif palette == \"forest\":\n        base_r = 0.2 + 0.3 * np.exp(-radial * 2.0)\n        base_g = 0.4 + 0.5 * np.cos(angle * 2.0)\n        base_b = 0.3 + 0.4 * np.sin(radial * np.pi)\n    else:  # \"nebula\"\n        base_r = 0.45 + 0.5 * np.sin(3.0 * angle)\n        base_g = 0.35 + 0.45 * np.cos(5.0 * radial)\n        base_b = 0.55 + 0.35 * np.sin(4.0 * angle + radial)\n\n    noise = rng.normal(loc=0.0, scale=0.05, size=(height, width)).astype(np.float32)\n    channels = [base_r, base_g, base_b]\n    stacked = []\n    for channel in channels:\n        layer = channel + noise\n        layer = np.clip(layer, 0.0, 1.0)\n        stacked.append((layer * 255.0).astype(np.uint8))\n\n    return np.stack(stacked, axis=-1)\n\n\ndef _write_image(path: Path, array: np.ndarray) -> None:\n    Image.fromarray(array, mode=\"RGB\").save(path, format=\"PNG\", optimize=True)\n\n\ndef main() -> None:\n    data_dir = Path(__file__).resolve().parents[1] / \"data\"\n    data_dir.mkdir(parents=True, exist_ok=True)\n\n    specs = [\n        (1920, 1080, 7, \"img_1920x1080.png\", \"sunset\"),\n        (1280, 720, 19, \"img_1280x720.png\", \"forest\"),\n        (512, 512, 29, \"img_512x512.png\", \"nebula\"),\n    ]\n\n    for width, height, seed, filename, palette in 
specs:\n        img = _make_image(width, height, seed, palette=palette)\n        _write_image(data_dir / filename, img)\n\n    print(f\"Generated Sowilo fixtures in {data_dir}\")\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "packages/sowilo/bench/sowilo.thumper",
    "content": "# thumper baseline\n# version: 1\n# suite_name: sowilo\n# host: 1480401c3b76ed18\n# cpu: Apple M1 Max\n# ocaml: 5.4.1\n# git: 31747323\n# dirty: true\n# command: /Users/tmattio/Workspace/raven/_build/default/packages/sowilo/bench/bench_sowilo.exe --bless --quick\n\nsowilo/canny_1080p\talloc_words\t4.967600e+04\t4.967600e+04\t4.967600e+04\t0.000000e+00\t5\t1\nsowilo/canny_1080p\tcpu_time\t1.632088e+00\t1.630246e+00\t1.633625e+00\t1.035134e-03\t5\t0\nsowilo/canny_1080p\twall_time\t1.004940e+00\t1.004072e+00\t1.005668e+00\t7.940793e-04\t5\t0\nsowilo/gaussianblur_1080p\talloc_words\t5.710000e+03\t5.710000e+03\t5.710000e+03\t0.000000e+00\t5\t1\nsowilo/gaussianblur_1080p\tcpu_time\t9.470104e-01\t9.413222e-01\t9.487540e-01\t3.923816e-03\t5\t1\nsowilo/gaussianblur_1080p\twall_time\t4.713762e-01\t4.703494e-01\t4.733366e-01\t3.168658e-03\t5\t0\nsowilo/sobel_720p\talloc_words\t4.243000e+03\t4.243000e+03\t4.243000e+03\t0.000000e+00\t5\t1\nsowilo/sobel_720p\tcpu_time\t2.632349e-01\t2.609808e-01\t2.647416e-01\t7.143433e-03\t5\t1\nsowilo/sobel_720p\twall_time\t1.352112e-01\t1.348894e-01\t1.356552e-01\t2.831791e-03\t5\t0\nsowilo/tograyscale_1080p\talloc_words\t5.140000e+02\t5.140000e+02\t5.140000e+02\t0.000000e+00\t5\t1\nsowilo/tograyscale_1080p\tcpu_time\t2.350553e-03\t2.324044e-03\t2.368710e-03\t9.501148e-03\t5\t0\nsowilo/tograyscale_1080p\twall_time\t3.986396e-04\t3.931048e-04\t4.076624e-04\t1.825903e-02\t5\t0\n"
  },
  {
    "path": "packages/sowilo/doc/01-getting-started.md",
    "content": "# Getting Started\n\nThis guide covers loading images, understanding image conventions, building\nyour first processing pipeline, and saving results.\n\n## Installation\n\n<!-- $MDX skip -->\n```bash\nopam install sowilo\n```\n\nOr build from source:\n\n<!-- $MDX skip -->\n```bash\ngit clone https://github.com/raven-ml/raven\ncd raven && dune build sowilo\n```\n\n## Loading Images\n\nSowilo operates on Rune tensors. Load an image with `Nx_io`, convert it to\na Rune tensor, then to float32:\n\n<!-- $MDX skip -->\n```ocaml\nopen Sowilo\n\nlet img =\n  Nx_io.load_image \"photo.png\"  (* Nx uint8 tensor [H; W; C] *)\n                  (* Rune uint8 tensor *)\n  |> to_float                   (* Rune float32 tensor in [0, 1] *)\n```\n\n`to_float` divides by 255 and casts to float32. This is the standard input\nformat for all sowilo operations.\n\n## Image Conventions\n\n**Layout.** Images use channels-last layout:\n- Single image: `[H; W; C]` (height, width, channels)\n- Batch: `[N; H; W; C]` (batch, height, width, channels)\n\n**Channel counts.** Grayscale has C = 1, RGB has C = 3, RGBA has C = 4.\n\n**Value range.** Operations expect float32 values in [0, 1]. 
Use `to_float`\nto convert from integer representations and `to_uint8` to convert back:\n\n<!-- $MDX skip -->\n```ocaml\n(* uint8 [0, 255] -> float32 [0, 1] *)\nlet float_img = to_float uint8_img\n\n(* float32 [0, 1] -> uint8 [0, 255] (clips to [0, 1] first) *)\nlet uint8_img = to_uint8 float_img\n```\n\n## Your First Pipeline\n\nLoad an image, convert to grayscale, blur, and detect edges:\n\n<!-- $MDX skip -->\n```ocaml\nopen Sowilo\n\nlet () =\n  let img = Nx_io.load_image \"photo.png\" |> to_float in\n\n  let edges =\n    img\n    |> to_grayscale                  (* RGB -> single channel *)\n    |> gaussian_blur ~sigma:1.0      (* smooth noise *)\n    |> canny ~low:0.2 ~high:0.6     (* detect edges *)\n  in\n\n  (* Save: convert back to uint8, then to Nx for I/O *)\n  Nx_io.save_image (to_uint8 edges) \"edges.png\"\n```\n\nOperations compose naturally with `|>`. Each takes a tensor and returns a\ntensor, so you can chain as many as you need.\n\n## Color Adjustments\n\nAdjust brightness, contrast, saturation, hue, and gamma:\n\n<!-- $MDX skip -->\n```ocaml\nopen Sowilo\n\nlet () =\n  let img = Nx_io.load_image \"photo.png\" |> to_float in\n\n  (* Each function takes a factor and an image *)\n  let bright = adjust_brightness 1.3 img in\n  let contrasted = adjust_contrast 1.5 img in\n  let saturated = adjust_saturation 1.2 img in\n  let warm = adjust_hue 0.05 img in\n  let gamma = adjust_gamma 0.8 img in\n  let negative = invert img in\n\n  ignore (bright, contrasted, saturated, warm, gamma, negative)\n```\n\n## Geometric Transforms\n\nResize, crop, flip, rotate, and pad:\n\n<!-- $MDX skip -->\n```ocaml\nopen Sowilo\n\nlet () =\n  let img = Nx_io.load_image \"photo.png\" |> to_float in\n\n  let small = resize ~height:224 ~width:224 img in\n  let cropped = crop ~y:50 ~x:100 ~height:200 ~width:200 img in\n  let centered = center_crop ~height:200 ~width:200 img in\n  let flipped = hflip img in\n  let upside_down = vflip img in\n  let rotated = rotate90 img in          
    (* 90 degrees counter-clockwise *)\n  let rotated_cw = rotate90 ~k:(-1) img in   (* 90 degrees clockwise *)\n  let padded = pad (10, 10, 10, 10) img in   (* top, bottom, left, right *)\n\n  ignore (small, cropped, centered, flipped, upside_down, rotated, rotated_cw, padded)\n```\n\n`resize` defaults to bilinear interpolation. Pass `~interpolation:Nearest`\nfor nearest-neighbor.\n\n## Morphological Operations\n\nBuild structuring elements and apply morphological operations:\n\n<!-- $MDX skip -->\n```ocaml\nopen Sowilo\n\nlet () =\n  let img = Nx_io.load_image \"photo.png\" |> to_float in\n  let gray = to_grayscale img in\n  let binary = threshold 0.5 gray in\n\n  (* Create a 5x5 rectangular structuring element *)\n  let kernel = structuring_element Rect (5, 5) in\n\n  let eroded = erode ~kernel binary in\n  let dilated = dilate ~kernel binary in\n  let opened = opening ~kernel binary in\n  let closed = closing ~kernel binary in\n  let grad = morphological_gradient ~kernel binary in\n\n  ignore (eroded, dilated, opened, closed, grad)\n```\n\nThree kernel shapes are available: `Rect` (full rectangle), `Cross`\n(cross-shaped), and `Ellipse` (elliptical). The size must be a pair of\npositive odd integers.\n\n## Edge Detection\n\nSowilo provides four edge detection methods:\n\n<!-- $MDX skip -->\n```ocaml\nopen Sowilo\n\nlet () =\n  let img = Nx_io.load_image \"photo.png\" |> to_float in\n  let gray = to_grayscale img in\n\n  (* Sobel: returns horizontal and vertical gradients *)\n  let gx, gy = sobel gray in\n\n  (* Scharr: more rotationally accurate than Sobel *)\n  let sx, sy = scharr gray in\n\n  (* Laplacian: sum of second derivatives *)\n  let lap = laplacian gray in\n\n  (* Canny: binary edge map with hysteresis thresholding *)\n  let edges = canny ~low:0.2 ~high:0.6 gray in\n\n  ignore (gx, gy, sx, sy, lap, edges)\n```\n\n`sobel` and `scharr` return `(gx, gy)` tuples. 
All edge detectors require\ngrayscale input (C = 1).\n\n## Saving Results\n\nConvert back to uint8 and use `Nx_io.save_image`:\n\n<!-- $MDX skip -->\n```ocaml\nlet save result path =\n  Nx_io.save_image (to_uint8 result) path\n```\n\n## Displaying with Hugin\n\nUse Hugin for visualization:\n\n<!-- $MDX skip -->\n```ocaml\nlet () =\n  let img = Nx_io.load_image \"photo.png\" |> to_float in\n  let gray = to_grayscale img in\n  let edges = canny ~low:0.2 ~high:0.6 gray in\n\n  let fig = Hugin.figure ~width:1000 ~height:500 () in\n\n  let ax1 = Hugin.subplot ~nrows:1 ~ncols:2 ~index:1 fig in\n  ignore\n    (ax1\n    |> Hugin.Plotting.imshow ~data:img\n    |> Hugin.Axes.set_title \"Original\");\n\n  let ax2 = Hugin.subplot ~nrows:1 ~ncols:2 ~index:2 fig in\n  ignore\n    (ax2\n    |> Hugin.Plotting.imshow ~data:edges\n         ~cmap:Hugin.Artist.Colormap.gray\n    |> Hugin.Axes.set_title \"Canny Edges\");\n\n  Hugin.show fig\n```\n\n## Next Steps\n\n- [Operations Reference](02-operations/) -- every operation with detailed examples\n- [Pipelines and Integration](03-pipelines/) -- composing pipelines, batch processing, deep learning\n"
  },
  {
    "path": "packages/sowilo/doc/02-operations.md",
    "content": "# Operations Reference\n\nEvery operation in sowilo, organized by category. All functions operate on\nRune float32 tensors with values in [0, 1] unless otherwise noted.\n\n## Type Conversion and Preprocessing\n\n### to_float\n\nConverts a tensor to float32 and scales to [0, 1] by dividing by 255.\n\n<!-- $MDX skip -->\n```ocaml\nlet img = Nx_io.load_image \"photo.png\" |> Sowilo.to_float\n(* uint8 [0, 255] -> float32 [0.0, 1.0] *)\n```\n\n### to_uint8\n\nScales from [0, 1] to [0, 255] and casts to uint8. Values are clipped to\n[0, 1] before scaling.\n\n<!-- $MDX skip -->\n```ocaml\nlet result = Sowilo.to_uint8 processed_img\n(* float32 [0.0, 1.0] -> uint8 [0, 255] *)\n```\n\n### normalize\n\nPer-channel normalization: `(img - mean) / std`. The `mean` and `std` lists\nmust match the channel dimension length.\n\n<!-- $MDX skip -->\n```ocaml\n(* ImageNet normalization *)\nlet normalized =\n  Sowilo.normalize\n    ~mean:[0.485; 0.456; 0.406]\n    ~std:[0.229; 0.224; 0.225]\n    img\n```\n\nRaises `Invalid_argument` if `mean` or `std` length does not match the\nnumber of channels.\n\n### threshold\n\nBinary thresholding: returns 1.0 where the image exceeds the threshold, 0.0\nelsewhere.\n\n<!-- $MDX skip -->\n```ocaml\n(* Pixels > 0.5 become 1.0, rest become 0.0 *)\nlet binary = Sowilo.threshold 0.5 gray_img\n```\n\n## Color Space Conversion and Adjustment\n\n### to_grayscale\n\nConverts RGB to single-channel grayscale using ITU-R BT.601 weights:\n`0.299 * R + 0.587 * G + 0.114 * B`. Input must have C >= 3. Output has\nC = 1.\n\n<!-- $MDX skip -->\n```ocaml\nlet gray = Sowilo.to_grayscale rgb_img\n```\n\n### rgb_to_hsv / hsv_to_rgb\n\nConvert between RGB and HSV color spaces. H is in [0, 1] (normalized from\n[0, 360]), S and V are in [0, 1].\n\n<!-- $MDX skip -->\n```ocaml\nlet hsv = Sowilo.rgb_to_hsv img\n(* ... manipulate hue, saturation, value channels ... 
*)\nlet rgb = Sowilo.hsv_to_rgb hsv\n```\n\n### adjust_brightness\n\nScales pixel values by a factor and clips to [0, 1].\n\n<!-- $MDX skip -->\n```ocaml\nlet brighter = Sowilo.adjust_brightness 1.3 img   (* 30% brighter *)\nlet darker = Sowilo.adjust_brightness 0.7 img     (* 30% darker *)\n```\n\n### adjust_contrast\n\nAdjusts contrast around the per-channel mean. Factor 0 produces solid gray,\n1 is the original image.\n\n<!-- $MDX skip -->\n```ocaml\nlet high_contrast = Sowilo.adjust_contrast 1.5 img\nlet low_contrast = Sowilo.adjust_contrast 0.5 img\n```\n\n### adjust_saturation\n\nAdjusts color saturation via HSV. Factor 0 produces grayscale, 1 is the\noriginal image.\n\n<!-- $MDX skip -->\n```ocaml\nlet vivid = Sowilo.adjust_saturation 1.5 img\nlet muted = Sowilo.adjust_saturation 0.5 img\n```\n\n### adjust_hue\n\nRotates hue by a delta in [-0.5, 0.5], corresponding to a full rotation of\nthe hue circle.\n\n<!-- $MDX skip -->\n```ocaml\nlet warm = Sowilo.adjust_hue 0.05 img\nlet cool = Sowilo.adjust_hue (-0.05) img\n```\n\n### adjust_gamma\n\nApplies gamma correction: `img ** gamma`. Values less than 1.0 brighten,\ngreater than 1.0 darken.\n\n<!-- $MDX skip -->\n```ocaml\nlet brightened = Sowilo.adjust_gamma 0.5 img\nlet darkened = Sowilo.adjust_gamma 2.0 img\n```\n\n### invert\n\nInverts the image: `1.0 - img`.\n\n<!-- $MDX skip -->\n```ocaml\nlet negative = Sowilo.invert img\n```\n\n## Geometric Transforms\n\n### resize\n\nResizes to target dimensions. Defaults to bilinear interpolation. 
Casts to\nfloat32 internally for bilinear mode.\n\n<!-- $MDX skip -->\n```ocaml\nlet small = Sowilo.resize ~height:224 ~width:224 img\nlet nearest = Sowilo.resize ~interpolation:Nearest ~height:64 ~width:64 img\n```\n\nRaises `Invalid_argument` if height or width is not positive.\n\n### crop\n\nExtracts a rectangular region starting at (y, x) with the given dimensions.\n\n<!-- $MDX skip -->\n```ocaml\nlet region = Sowilo.crop ~y:50 ~x:100 ~height:200 ~width:300 img\n```\n\nRaises `Invalid_argument` if the region exceeds image bounds.\n\n### center_crop\n\nCrops a centered rectangle of the given size.\n\n<!-- $MDX skip -->\n```ocaml\nlet centered = Sowilo.center_crop ~height:200 ~width:200 img\n```\n\nRaises `Invalid_argument` if the crop size exceeds image dimensions.\n\n### hflip / vflip\n\nFlip horizontally (left to right) or vertically (top to bottom).\n\n<!-- $MDX skip -->\n```ocaml\nlet mirrored = Sowilo.hflip img\nlet upside_down = Sowilo.vflip img\n```\n\n### rotate90\n\nRotates by k * 90 degrees counter-clockwise. k defaults to 1. Negative\nvalues rotate clockwise.\n\n<!-- $MDX skip -->\n```ocaml\nlet rotated_90 = Sowilo.rotate90 img              (* 90 CCW *)\nlet rotated_180 = Sowilo.rotate90 ~k:2 img         (* 180 *)\nlet rotated_cw = Sowilo.rotate90 ~k:(-1) img       (* 90 CW *)\n```\n\n### pad\n\nZero-pads the spatial dimensions. The tuple specifies (top, bottom, left,\nright) padding. An optional `~value` parameter sets the fill value (defaults\nto 0.0).\n\n<!-- $MDX skip -->\n```ocaml\nlet padded = Sowilo.pad (10, 10, 20, 20) img\nlet white_padded = Sowilo.pad ~value:1.0 (5, 5, 5, 5) img\n```\n\n## Spatial Filtering\n\n### gaussian_blur\n\nIsotropic Gaussian blur using separable convolution. 
`sigma` is required.\n`ksize` defaults to `2 * ceil(3 * sigma) + 1`, capturing 99.7% of the\ndistribution.\n\n<!-- $MDX skip -->\n```ocaml\nlet blurred = Sowilo.gaussian_blur ~sigma:1.0 img\nlet blurred_5x5 = Sowilo.gaussian_blur ~sigma:1.5 ~ksize:5 img\n```\n\nRaises `Invalid_argument` if `ksize` is even or not positive.\n\n### box_blur\n\nApplies a ksize x ksize averaging filter.\n\n<!-- $MDX skip -->\n```ocaml\nlet averaged = Sowilo.box_blur ~ksize:3 img\nlet smooth = Sowilo.box_blur ~ksize:7 img\n```\n\nRaises `Invalid_argument` if `ksize` is not positive.\n\n### median_blur\n\nApplies a median filter. **Not differentiable**: uses sort internally,\ngradient is zero almost everywhere.\n\n<!-- $MDX skip -->\n```ocaml\nlet denoised = Sowilo.median_blur ~ksize:3 img\n```\n\nRaises `Invalid_argument` if `ksize` is not a positive odd integer.\n\n### filter2d\n\nApplies a custom 2D convolution kernel of shape `[kH; kW]`. Applied\nindependently to each channel with Same padding.\n\n<!-- $MDX skip -->\n```ocaml\n(* Sharpen kernel *)\nlet kernel = Nx.create Nx.float32 [| 3; 3 |]\n  [| 0.; -1.; 0.; -1.; 5.; -1.; 0.; -1.; 0. |]\nlet sharpened = Sowilo.filter2d kernel img\n```\n\n### unsharp_mask\n\nSharpens by subtracting a Gaussian blur:\n`img + amount * (img - gaussian_blur ~sigma img)`. `amount` defaults to 1.0.\n\n<!-- $MDX skip -->\n```ocaml\nlet sharp = Sowilo.unsharp_mask ~sigma:1.0 img\nlet very_sharp = Sowilo.unsharp_mask ~sigma:1.0 ~amount:2.0 img\n```\n\n## Morphological Operations\n\n### structuring_element\n\nCreates a structuring element of the given shape and size. 
The size is a\npair of positive odd integers (height, width).\n\nThree shapes are available:\n- `Rect` -- full rectangle\n- `Cross` -- cross-shaped element\n- `Ellipse` -- elliptical element\n\n<!-- $MDX skip -->\n```ocaml\nlet rect = Sowilo.structuring_element Rect (5, 5)\nlet cross = Sowilo.structuring_element Cross (3, 3)\nlet ellipse = Sowilo.structuring_element Ellipse (7, 7)\n```\n\nRaises `Invalid_argument` if height or width is not positive or not odd.\n\n### erode / dilate\n\nErosion replaces each pixel with the minimum over the kernel-shaped\nneighborhood. Dilation replaces with the maximum.\n\n<!-- $MDX skip -->\n```ocaml\nlet kernel = Sowilo.structuring_element Rect (5, 5) in\nlet eroded = Sowilo.erode ~kernel img\nlet dilated = Sowilo.dilate ~kernel img\n```\n\n### opening / closing\n\nOpening (erode then dilate) removes small bright regions. Closing (dilate\nthen erode) fills small dark regions.\n\n<!-- $MDX skip -->\n```ocaml\nlet kernel = Sowilo.structuring_element Rect (5, 5) in\nlet opened = Sowilo.opening ~kernel binary_img\nlet closed = Sowilo.closing ~kernel binary_img\n```\n\n### morphological_gradient\n\nThe difference between dilation and erosion: `dilate - erode`. Highlights\nedges.\n\n<!-- $MDX skip -->\n```ocaml\nlet kernel = Sowilo.structuring_element Rect (3, 3) in\nlet edges = Sowilo.morphological_gradient ~kernel img\n```\n\n## Edge Detection\n\nAll edge detection operations require grayscale input (C = 1).\n\n### sobel\n\nComputes Sobel gradients. Returns a `(gx, gy)` tuple where `gx` is the\nhorizontal gradient and `gy` is the vertical gradient. 
`ksize` defaults\nto 3.\n\n<!-- $MDX skip -->\n```ocaml\nlet gx, gy = Sowilo.sobel gray in\nlet gx5, gy5 = Sowilo.sobel ~ksize:5 gray in\n\n(* Compute gradient magnitude *)\nlet magnitude =\n  Nx.sqrt (Nx.add (Nx.mul gx gx) (Nx.mul gy gy))\n```\n\n### scharr\n\nComputes Scharr gradients, which are more rotationally accurate than Sobel.\nReturns a `(gx, gy)` tuple.\n\n<!-- $MDX skip -->\n```ocaml\nlet gx, gy = Sowilo.scharr gray\n```\n\n### laplacian\n\nComputes the Laplacian (sum of second spatial derivatives). `ksize` defaults\nto 3.\n\n<!-- $MDX skip -->\n```ocaml\nlet lap = Sowilo.laplacian gray\nlet lap5 = Sowilo.laplacian ~ksize:5 gray\n```\n\n### canny\n\nCanny edge detector. Returns 1.0 for edge pixels, 0.0 otherwise. `low` and\n`high` are hysteresis thresholds (in [0, 1] since images are float32 in\n[0, 1]). `sigma` controls the initial Gaussian blur and defaults to 1.4.\n\n**Not differentiable**: uses non-maximum suppression and hysteresis\nthresholding.\n\n<!-- $MDX skip -->\n```ocaml\nlet edges = Sowilo.canny ~low:0.2 ~high:0.6 gray\nlet tight = Sowilo.canny ~low:0.3 ~high:0.7 ~sigma:1.0 gray\n```\n\n## Differentiability Summary\n\nMost operations are differentiable because they are built from standard\nRune tensor operations. The two exceptions are:\n\n| Operation | Differentiable | Reason |\n|-----------|---------------|--------|\n| `median_blur` | No | Uses sort; gradient is zero almost everywhere |\n| `canny` | No | Uses non-maximum suppression and hysteresis thresholding |\n\nAll other operations (filters, color transforms, geometric transforms,\nmorphology, threshold, sobel, scharr, laplacian) support `Rune.grad`.\n"
  },
  {
    "path": "packages/sowilo/doc/03-pipelines.md",
    "content": "# Pipelines and Integration\n\nSowilo operations are pure functions on Rune tensors. They compose naturally\nwith `|>`, work in batches, and integrate with Kaun training loops.\n\n## Composing Operations\n\nChain operations with the pipe operator:\n\n<!-- $MDX skip -->\n```ocaml\nopen Sowilo\n\nlet process img =\n  img\n  |> to_float\n  |> to_grayscale\n  |> gaussian_blur ~sigma:1.0\n  |> threshold 0.5\n\nlet edges img =\n  img\n  |> to_float\n  |> to_grayscale\n  |> canny ~low:0.2 ~high:0.6\n```\n\nSince every operation takes a tensor and returns a tensor, you can define\nreusable pipeline functions and combine them:\n\n<!-- $MDX skip -->\n```ocaml\nopen Sowilo\n\nlet preprocess img =\n  img\n  |> to_float\n  |> resize ~height:256 ~width:256\n  |> center_crop ~height:224 ~width:224\n\nlet enhance img =\n  img\n  |> adjust_contrast 1.2\n  |> unsharp_mask ~sigma:1.0 ~amount:0.5\n\nlet full_pipeline img = img |> preprocess |> enhance\n```\n\n## Batch Processing\n\nAll operations handle both single images `[H; W; C]` and batches\n`[N; H; W; C]`. 
Stack images into a batch and process them in one call:\n\n<!-- $MDX skip -->\n```ocaml\nopen Sowilo\n\nlet process_batch paths =\n  (* Load and stack into [N; H; W; C] *)\n  let images =\n    List.map\n      (fun p -> Nx_io.load_image p |> to_float)\n      paths\n  in\n  let batch = Nx.stack ~axis:0 images in\n\n  (* All operations broadcast over the batch dimension *)\n  let processed =\n    batch\n    |> resize ~height:224 ~width:224\n    |> to_grayscale\n    |> gaussian_blur ~sigma:1.0\n  in\n  processed\n```\n\n## Deep Learning Preprocessing\n\nPrepare images for neural networks with standard preprocessing:\n\n<!-- $MDX skip -->\n```ocaml\nopen Sowilo\n\n(* ImageNet preprocessing *)\nlet imagenet_preprocess img =\n  img\n  |> to_float\n  |> resize ~height:256 ~width:256\n  |> center_crop ~height:224 ~width:224\n  |> normalize\n      ~mean:[0.485; 0.456; 0.406]\n      ~std:[0.229; 0.224; 0.225]\n```\n\n## Differentiable Augmentation\n\nSince most sowilo operations are differentiable, you can use them as\naugmentations inside a training loop and gradients will flow through:\n\n<!-- $MDX skip -->\n```ocaml\nopen Sowilo\n\n(* Differentiable augmentation pipeline *)\nlet augment img =\n  img\n  |> adjust_brightness 1.1\n  |> adjust_contrast 0.9\n  |> adjust_saturation 1.1\n  |> gaussian_blur ~sigma:0.3\n\n(* Use in a loss function - gradients flow through augmentation *)\nlet loss params img label =\n  let preprocessed = imagenet_preprocess (augment img) in\n  let logits = model params preprocessed in\n  cross_entropy logits label\n\n(* Rune.grad differentiates through augment + preprocess + model *)\n```\n\nOperations that break the gradient (`canny`, `median_blur`) should not be\nused inside differentiable pipelines. 
All other operations -- blurs, color\nadjustments, geometric transforms, morphology, threshold, sobel, scharr,\nlaplacian -- support `Rune.grad`.\n\n## Integration with Kaun\n\nUse sowilo preprocessing in Kaun data pipelines:\n\n<!-- $MDX skip -->\n```ocaml\nopen Sowilo\nopen Kaun\n\nlet preprocess img =\n  img\n  |> Sowilo.to_float\n  |> Sowilo.resize ~height:224 ~width:224\n  |> Sowilo.normalize\n      ~mean:[0.485; 0.456; 0.406]\n      ~std:[0.229; 0.224; 0.225]\n\nlet train_data =\n  Data.prepare ~shuffle:rngs ~batch_size:32 (images, labels)\n  |> Data.map (fun (x, y) ->\n    (preprocess x, fun logits -> Loss.cross_entropy_sparse logits y))\n```\n\n## Feature Extraction\n\nCombine edge detection with morphological operations to extract features:\n\n<!-- $MDX skip -->\n```ocaml\nopen Sowilo\n\nlet extract_features img =\n  let gray = to_grayscale img in\n\n  (* Edge features *)\n  let gx, gy = sobel gray in\n  let magnitude = Nx.sqrt (Nx.add (Nx.mul gx gx) (Nx.mul gy gy)) in\n\n  (* Morphological features *)\n  let kernel = structuring_element Rect (3, 3) in\n  let gradient = morphological_gradient ~kernel gray in\n\n  (* Stack as multi-channel feature map *)\n  Nx.concatenate ~axis:(-1) [ gray; magnitude; gradient ]\n```\n\n## Visualization\n\nDisplay processing results side by side with Hugin:\n\n<!-- $MDX skip -->\n```ocaml\nopen Sowilo\n\nlet visualize_pipeline img =\n  let gray = to_grayscale img in\n  let blurred = gaussian_blur ~sigma:2.0 gray in\n  let edges = canny ~low:0.2 ~high:0.6 gray in\n\n  let fig = Hugin.figure ~width:1200 ~height:400 () in\n\n  let ax1 = Hugin.subplot ~nrows:1 ~ncols:3 ~index:1 fig in\n  ignore\n    (ax1\n    |> Hugin.Plotting.imshow ~data:gray\n         ~cmap:Hugin.Artist.Colormap.gray\n    |> Hugin.Axes.set_title \"Grayscale\");\n\n  let ax2 = Hugin.subplot ~nrows:1 ~ncols:3 ~index:2 fig in\n  ignore\n    (ax2\n    |> Hugin.Plotting.imshow ~data:blurred\n         ~cmap:Hugin.Artist.Colormap.gray\n    |> Hugin.Axes.set_title 
\"Gaussian Blur\");\n\n  let ax3 = Hugin.subplot ~nrows:1 ~ncols:3 ~index:3 fig in\n  ignore\n    (ax3\n    |> Hugin.Plotting.imshow ~data:edges\n         ~cmap:Hugin.Artist.Colormap.gray\n    |> Hugin.Axes.set_title \"Canny Edges\");\n\n  Hugin.show fig\n```\n\n## Color Space Manipulation\n\nAdjust colors through HSV for more precise control:\n\n<!-- $MDX skip -->\n```ocaml\nopen Sowilo\n\n(* Selective color manipulation via HSV *)\nlet make_warmer img =\n  let hsv = rgb_to_hsv img in\n  (* Shift hue slightly toward warm tones *)\n  let adjusted = adjust_hue 0.02 img in\n  (* Boost saturation *)\n  adjust_saturation 1.2 adjusted\n\n(* Grayscale with tint *)\nlet sepia img =\n  img\n  |> to_grayscale\n  |> fun gray ->\n     (* Expand back to 3 channels and tint *)\n     let rgb = Nx.concatenate ~axis:(-1) [ gray; gray; gray ] in\n     adjust_saturation 0.3 (adjust_hue 0.05 rgb)\n```\n"
  },
  {
    "path": "packages/sowilo/doc/04-opencv-comparison.md",
    "content": "# Sowilo vs. OpenCV -- A Practical Comparison\n\nThis guide explains how Sowilo's image processing model relates to Python's [OpenCV](https://docs.opencv.org/), focusing on:\n\n* How core concepts map (images, color spaces, filtering, morphology, edges)\n* Where the APIs feel similar vs. deliberately different\n* How to translate common OpenCV patterns into Sowilo\n\nIf you already use OpenCV, this should be enough to become productive in Sowilo quickly.\n\n---\n\n## 1. Big-Picture Differences\n\n| Aspect           | OpenCV (Python)                                      | Sowilo (OCaml)                                                |\n| ---------------- | ---------------------------------------------------- | ------------------------------------------------------------- |\n| Language         | C++ core with Python bindings                        | Pure OCaml on Nx tensors                                      |\n| Image type       | `numpy.ndarray`                                      | `Nx.t` (same type used everywhere in raven)                   |\n| Channel order    | BGR by default                                       | RGB, channels-last `[H; W; C]`                                |\n| Pixel range      | uint8 `[0, 255]` or float32/64                       | float32 `[0, 1]` (convert with `to_float` / `to_uint8`)       |\n| Color conversion | `cv2.cvtColor` with 200+ codes                       | Named functions: `to_grayscale`, `rgb_to_hsv`, `hsv_to_rgb`   |\n| Autodiff         | Not available                                        | All ops (except `median_blur`, `canny`) work with `Rune.grad` |\n| Batching         | Manual loops or `np.stack`                           | Native batch dimension `[N; H; W; C]` + `Rune.vmap`           |\n| Backend          | Optimized C++/CUDA                                   | Nx C backend (CPU)                                            |\n| Mutability       | Arrays mutated in-place by convention              
  | Immutable tensors; operations return new `Nx.t`               |\n| Scope            | Full vision library (video, GUI, ML, features, etc.) | Image processing primitives for ML pipelines                  |\n\n**Sowilo semantics to know (read once):**\n- Images are plain `Nx.t` tensors, not a separate type. Any Nx operation works on them.\n- All operations expect float32 in `[0, 1]`. Use `to_float` to convert from uint8.\n- Channel layout is always channels-last: `[H; W; C]` or `[N; H; W; C]` for batches.\n- Every operation (except `median_blur` and `canny`) is differentiable through `Rune.grad`.\n\n---\n\n## 2. Image Representation\n\n### 2.1 Loading and layout\n\n**OpenCV**\n\n```python\nimport cv2\nimport numpy as np\n\nimg = cv2.imread(\"photo.jpg\")          # BGR, uint8, shape (H, W, 3)\nimg_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)\nimg_f = img_rgb.astype(np.float32) / 255.0\n```\n\n**Sowilo**\n\nSowilo does not provide I/O. Load with any image library that produces an `Nx.t`, then convert:\n\n<!-- $MDX skip -->\n```ocaml\n(* Assuming img is a uint8 [H; W; C] tensor loaded from disk *)\nlet img = Sowilo.to_float img   (* float32 [0, 1], RGB, [H; W; C] *)\n```\n\nKey differences:\n\n* OpenCV defaults to BGR ordering. Sowilo always uses RGB.\n* OpenCV operates on uint8 or float64 interchangeably. Sowilo expects float32 in `[0, 1]` for all processing functions.\n* There is no `cv2.imread` equivalent -- image I/O is outside Sowilo's scope.\n\n### 2.2 Converting back to uint8\n\n**OpenCV**\n\n```python\nout = (img_f * 255).clip(0, 255).astype(np.uint8)\n```\n\n**Sowilo**\n\n<!-- $MDX skip -->\n```ocaml\nlet out = Sowilo.to_uint8 img   (* clips to [0, 1], scales to [0, 255], casts to uint8 *)\n```\n\n---\n\n## 3. 
Type Conversion and Preprocessing\n\n### 3.1 Normalization\n\n**OpenCV / NumPy**\n\n```python\nmean = np.array([0.485, 0.456, 0.406], dtype=np.float32)\nstd  = np.array([0.229, 0.224, 0.225], dtype=np.float32)\nnormalized = (img_f - mean) / std\n```\n\n**Sowilo**\n\n<!-- $MDX skip -->\n```ocaml\nlet normalized =\n  Sowilo.normalize\n    ~mean:[0.485; 0.456; 0.406]\n    ~std:[0.229; 0.224; 0.225]\n    img\n```\n\nBoth apply per-channel `(img - mean) / std`. Sowilo raises `Invalid_argument` if the list lengths do not match the channel count.\n\n### 3.2 Thresholding\n\n**OpenCV**\n\n```python\n_, binary = cv2.threshold(gray, 0.5, 1.0, cv2.THRESH_BINARY)\n```\n\n**Sowilo**\n\n<!-- $MDX skip -->\n```ocaml\nlet binary = Sowilo.threshold 0.5 gray\n```\n\nSowilo's `threshold` returns `1.0` where `img > t`, `0.0` elsewhere. OpenCV's `cv2.threshold` has many modes (binary, truncate, adaptive, Otsu); Sowilo provides only the simple binary variant.\n\n---\n\n## 4. Color Space Conversion\n\n**OpenCV**\n\n```python\ngray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)\nhsv  = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)\nrgb  = cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)\n```\n\n**Sowilo**\n\n<!-- $MDX skip -->\n```ocaml\nlet gray = Sowilo.to_grayscale img\nlet hsv  = Sowilo.rgb_to_hsv img\nlet rgb  = Sowilo.hsv_to_rgb hsv\n```\n\nDifferences:\n\n* OpenCV has 200+ conversion codes (`COLOR_BGR2Lab`, `COLOR_YUV2RGB_NV21`, etc.). Sowilo provides three conversions: grayscale, RGB-to-HSV, and HSV-to-RGB.\n* OpenCV's HSV uses H in `[0, 180]`, S and V in `[0, 255]` for uint8. Sowilo normalizes all channels to `[0, 1]`.\n* `to_grayscale` uses ITU-R BT.601 weights (`0.299 * R + 0.587 * G + 0.114 * B`), same as OpenCV's `COLOR_RGB2GRAY`.\n\n---\n\n## 5. Image Adjustments\n\nOpenCV has no built-in brightness/contrast/saturation functions. The standard approach is manual arithmetic. 
Sowilo provides dedicated functions for these.\n\n### 5.1 Brightness\n\n**OpenCV**\n\n```python\nbright = np.clip(img_f * 1.5, 0, 1)\n```\n\n**Sowilo**\n\n<!-- $MDX skip -->\n```ocaml\nlet bright = Sowilo.adjust_brightness 1.5 img\n```\n\n### 5.2 Contrast\n\n**OpenCV**\n\n```python\nmean = img_f.mean(axis=(0, 1), keepdims=True)\ncontrasted = np.clip(mean + 1.5 * (img_f - mean), 0, 1)\n```\n\n**Sowilo**\n\n<!-- $MDX skip -->\n```ocaml\nlet contrasted = Sowilo.adjust_contrast 1.5 img\n```\n\nA factor of `0` produces solid gray, `1` is the original image.\n\n### 5.3 Saturation\n\n**OpenCV**\n\n```python\nhsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV).astype(np.float32)\nhsv[:, :, 1] = np.clip(hsv[:, :, 1] * 1.5, 0, 255)\nresult = cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)\n```\n\n**Sowilo**\n\n<!-- $MDX skip -->\n```ocaml\nlet saturated = Sowilo.adjust_saturation 1.5 img\n```\n\n### 5.4 Hue\n\n**OpenCV**\n\n```python\nhsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV).astype(np.float32)\nhsv[:, :, 0] = (hsv[:, :, 0] + 30) % 180\nresult = cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)\n```\n\n**Sowilo**\n\n<!-- $MDX skip -->\n```ocaml\nlet shifted = Sowilo.adjust_hue 0.1 img   (* delta in [-0.5, 0.5] *)\n```\n\nSowilo uses `[-0.5, 0.5]` for a full hue rotation. OpenCV uses `[0, 180]` degrees for uint8 HSV.\n\n### 5.5 Gamma correction\n\n**OpenCV**\n\n```python\ngamma = 2.2\ncorrected = np.power(img_f, gamma)\n```\n\n**Sowilo**\n\n<!-- $MDX skip -->\n```ocaml\nlet corrected = Sowilo.adjust_gamma 2.2 img\n```\n\n### 5.6 Invert\n\n**OpenCV**\n\n```python\ninverted = 255 - img          # uint8\ninverted = 1.0 - img_f       # float\n```\n\n**Sowilo**\n\n<!-- $MDX skip -->\n```ocaml\nlet inverted = Sowilo.invert img\n```\n\n---\n\n## 6. 
Geometric Transforms\n\n### 6.1 Resize\n\n**OpenCV**\n\n```python\nresized = cv2.resize(img, (width, height), interpolation=cv2.INTER_LINEAR)\nresized_nn = cv2.resize(img, (width, height), interpolation=cv2.INTER_NEAREST)\n```\n\n**Sowilo**\n\n<!-- $MDX skip -->\n```ocaml\nlet resized    = Sowilo.resize ~height:224 ~width:224 img\nlet resized_nn = Sowilo.resize ~interpolation:Nearest ~height:224 ~width:224 img\n```\n\nDifferences:\n\n* OpenCV takes `(width, height)`. Sowilo takes `~height` and `~width` as labeled arguments.\n* Sowilo supports `Nearest` and `Bilinear` (default). OpenCV has many more modes (cubic, Lanczos, area).\n* `resize` works on any dtype. For bilinear interpolation it casts to float32 internally.\n\n### 6.2 Crop\n\n**OpenCV**\n\n```python\ncropped = img[y:y+h, x:x+w]\n```\n\n**Sowilo**\n\n<!-- $MDX skip -->\n```ocaml\nlet cropped = Sowilo.crop ~y:10 ~x:20 ~height:100 ~width:100 img\nlet centered = Sowilo.center_crop ~height:224 ~width:224 img\n```\n\n`center_crop` computes the offset automatically. OpenCV has no built-in center crop.\n\n### 6.3 Flip\n\n**OpenCV**\n\n```python\nflipped_h = cv2.flip(img, 1)    # horizontal\nflipped_v = cv2.flip(img, 0)    # vertical\n```\n\n**Sowilo**\n\n<!-- $MDX skip -->\n```ocaml\nlet flipped_h = Sowilo.hflip img\nlet flipped_v = Sowilo.vflip img\n```\n\n### 6.4 Rotate\n\n**OpenCV**\n\n```python\nrotated = cv2.rotate(img, cv2.ROTATE_90_COUNTERCLOCKWISE)\nrotated_cw = cv2.rotate(img, cv2.ROTATE_90_CLOCKWISE)\n```\n\n**Sowilo**\n\n<!-- $MDX skip -->\n```ocaml\nlet rotated    = Sowilo.rotate90 img            (* 90 degrees counter-clockwise *)\nlet rotated_cw = Sowilo.rotate90 ~k:(-1) img    (* 90 degrees clockwise *)\nlet rotated_180 = Sowilo.rotate90 ~k:2 img      (* 180 degrees *)\n```\n\n`rotate90` only handles multiples of 90 degrees. 
Sowilo has no equivalent of OpenCV's `cv2.getRotationMatrix2D` + `cv2.warpAffine` for rotation by arbitrary angles.\n\n### 6.5 Pad\n\n**OpenCV**\n\n```python\npadded = cv2.copyMakeBorder(img, top, bottom, left, right,\n                             cv2.BORDER_CONSTANT, value=0)\n```\n\n**Sowilo**\n\n<!-- $MDX skip -->\n```ocaml\nlet padded = Sowilo.pad (10, 10, 20, 20) img            (* zero-padded *)\nlet padded = Sowilo.pad ~value:0.5 (10, 10, 20, 20) img (* custom fill *)\n```\n\nSowilo supports constant padding only. OpenCV also has reflect, replicate, and wrap modes.\n\n---\n\n## 7. Spatial Filtering\n\n### 7.1 Gaussian blur\n\n**OpenCV**\n\n```python\nblurred = cv2.GaussianBlur(img, (0, 0), sigmaX=1.5)\nblurred = cv2.GaussianBlur(img, (7, 7), sigmaX=1.5)\n```\n\n**Sowilo**\n\n<!-- $MDX skip -->\n```ocaml\nlet blurred = Sowilo.gaussian_blur ~sigma:1.5 img\nlet blurred = Sowilo.gaussian_blur ~sigma:1.5 ~ksize:7 img\n```\n\nSowilo defaults `ksize` to `2 * ceil(3 * sigma) + 1`, which captures 99.7% of the distribution. OpenCV lets you pass `(0, 0)` for automatic sizing. Sowilo uses separable convolution internally, same as OpenCV.\n\n### 7.2 Box blur (averaging)\n\n**OpenCV**\n\n```python\nblurred = cv2.blur(img, (5, 5))\n```\n\n**Sowilo**\n\n<!-- $MDX skip -->\n```ocaml\nlet blurred = Sowilo.box_blur ~ksize:5 img\n```\n\nSowilo uses a square kernel. OpenCV's `cv2.blur` supports rectangular kernels.\n\n### 7.3 Median blur\n\n**OpenCV**\n\n```python\nblurred = cv2.medianBlur(img, 5)\n```\n\n**Sowilo**\n\n<!-- $MDX skip -->\n```ocaml\nlet blurred = Sowilo.median_blur ~ksize:5 img\n```\n\n`ksize` must be a positive odd integer. 
Note: `median_blur` is **not differentiable** -- gradient is zero almost everywhere.\n\n### 7.4 Custom kernels (filter2d)\n\n**OpenCV**\n\n```python\nkernel = np.array([[-1, -1, -1],\n                   [-1,  8, -1],\n                   [-1, -1, -1]], dtype=np.float32)\nedges = cv2.filter2D(img, -1, kernel)\n```\n\n**Sowilo**\n\n<!-- $MDX skip -->\n```ocaml\nlet kernel = Nx.create Nx.float32 [|3; 3|]\n  [|-1.; -1.; -1.;\n    -1.;  8.; -1.;\n    -1.; -1.; -1.|]\n\nlet edges = Sowilo.filter2d kernel img\n```\n\nBoth apply 2D convolution with same-size padding. Note the argument order: Sowilo takes `kernel` first, then `img`. OpenCV takes `src`, `ddepth`, `kernel`.\n\n### 7.5 Sharpening (unsharp mask)\n\n**OpenCV**\n\n```python\nblurred = cv2.GaussianBlur(img, (0, 0), sigma)\nsharpened = cv2.addWeighted(img, 1.0 + amount, blurred, -amount, 0)\n```\n\n**Sowilo**\n\n<!-- $MDX skip -->\n```ocaml\nlet sharpened = Sowilo.unsharp_mask ~sigma:1.0 img\nlet sharpened = Sowilo.unsharp_mask ~sigma:1.0 ~amount:1.5 img\n```\n\n`amount` defaults to `1.0`. The formula is `img + amount * (img - gaussian_blur ~sigma img)`.\n\n---\n\n## 8. Morphological Operations\n\n### 8.1 Structuring elements\n\n**OpenCV**\n\n```python\nkernel_rect    = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))\nkernel_cross   = cv2.getStructuringElement(cv2.MORPH_CROSS, (5, 5))\nkernel_ellipse = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))\n```\n\n**Sowilo**\n\n<!-- $MDX skip -->\n```ocaml\nlet kernel_rect    = Sowilo.structuring_element Rect (5, 5)\nlet kernel_cross   = Sowilo.structuring_element Cross (5, 5)\nlet kernel_ellipse = Sowilo.structuring_element Ellipse (5, 5)\n```\n\nBoth produce a binary mask. 
Dimensions must be positive odd integers.\n\n### 8.2 Erode and dilate\n\n**OpenCV**\n\n```python\neroded  = cv2.erode(img, kernel, iterations=1)\ndilated = cv2.dilate(img, kernel, iterations=1)\n```\n\n**Sowilo**\n\n<!-- $MDX skip -->\n```ocaml\nlet eroded  = Sowilo.erode ~kernel img\nlet dilated = Sowilo.dilate ~kernel img\n```\n\nSowilo does not have an `iterations` parameter. Apply the operation multiple times if needed.\n\n### 8.3 Compound operations\n\n**OpenCV**\n\n```python\nopened   = cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel)\nclosed   = cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel)\ngradient = cv2.morphologyEx(img, cv2.MORPH_GRADIENT, kernel)\n```\n\n**Sowilo**\n\n<!-- $MDX skip -->\n```ocaml\nlet opened   = Sowilo.opening ~kernel img\nlet closed   = Sowilo.closing ~kernel img\nlet gradient = Sowilo.morphological_gradient ~kernel img\n```\n\n* `opening` = erode then dilate (removes small bright regions).\n* `closing` = dilate then erode (fills small dark regions).\n* `morphological_gradient` = dilate - erode (highlights edges).\n\nOpenCV also has `MORPH_TOPHAT`, `MORPH_BLACKHAT`, and `MORPH_HITMISS`. Sowilo does not provide these.\n\n---\n\n## 9. Edge Detection\n\n### 9.1 Sobel\n\n**OpenCV**\n\n```python\ngx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)\ngy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)\n```\n\n**Sowilo**\n\n<!-- $MDX skip -->\n```ocaml\nlet gx, gy = Sowilo.sobel gray               (* ksize defaults to 3 *)\nlet gx, gy = Sowilo.sobel ~ksize:5 gray\n```\n\nSowilo returns both gradients as a tuple. OpenCV requires two calls. 
Input must have `C = 1`.\n\n### 9.2 Scharr\n\n**OpenCV**\n\n```python\ngx = cv2.Scharr(gray, cv2.CV_32F, 1, 0)\ngy = cv2.Scharr(gray, cv2.CV_32F, 0, 1)\n```\n\n**Sowilo**\n\n<!-- $MDX skip -->\n```ocaml\nlet gx, gy = Sowilo.scharr gray\n```\n\nScharr is more rotationally accurate than Sobel with `ksize=3`.\n\n### 9.3 Laplacian\n\n**OpenCV**\n\n```python\nlaplacian = cv2.Laplacian(gray, cv2.CV_32F, ksize=3)\n```\n\n**Sowilo**\n\n<!-- $MDX skip -->\n```ocaml\nlet laplacian = Sowilo.laplacian gray\nlet laplacian = Sowilo.laplacian ~ksize:5 gray\n```\n\n### 9.4 Canny\n\n**OpenCV**\n\n```python\nedges = cv2.Canny(gray_u8, 100, 200)\n```\n\n**Sowilo**\n\n<!-- $MDX skip -->\n```ocaml\nlet edges = Sowilo.canny ~low:0.1 ~high:0.2 gray\nlet edges = Sowilo.canny ~low:0.1 ~high:0.2 ~sigma:2.0 gray\n```\n\nDifferences:\n\n* OpenCV takes integer thresholds on uint8 pixel values. Sowilo takes float thresholds on `[0, 1]` values.\n* Sowilo includes a built-in Gaussian blur controlled by `~sigma` (defaults to `1.4`). OpenCV expects you to blur beforehand.\n* `canny` returns `1.0` for edge pixels, `0.0` for non-edges.\n* **Not differentiable**: uses non-maximum suppression and hysteresis thresholding.\n\n---\n\n## 10. Differentiable Pipelines\n\nThis is Sowilo's key advantage over OpenCV. Because operations are expressed as Nx tensor computations, they compose with `Rune.grad` and `Rune.vmap`.\n\n### 10.1 Gradient through image processing\n\nNo OpenCV equivalent exists. OpenCV operations are opaque C++ -- you cannot backpropagate through them.\n\n<!-- $MDX skip -->\n```ocaml\n(* Compute the gradient of a loss through an image processing pipeline *)\nlet pipeline params img =\n  img\n  |> Sowilo.adjust_brightness params.brightness\n  |> Sowilo.adjust_contrast params.contrast\n  |> Sowilo.gaussian_blur ~sigma:params.sigma\n\n(* Differentiate the loss w.r.t. 
a parameter *)\nlet loss_fn brightness img target =\n  let processed = img |> Sowilo.adjust_brightness brightness in\n  let diff = Nx.sub processed target in\n  Nx.sum (Nx.mul diff diff)\n\nlet grad_fn = Rune.grad loss_fn\nlet grad_brightness = grad_fn 1.2 img target\n```\n\nThis works because `adjust_brightness`, `adjust_contrast`, `gaussian_blur`, and most other Sowilo operations are built from differentiable Nx primitives.\n\n### 10.2 Batch processing with vmap\n\n**OpenCV**\n\n```python\n# Manual loop over batch\nresults = [cv2.GaussianBlur(img, (0, 0), 1.5) for img in batch]\n```\n\n**Sowilo** with `Rune.vmap`:\n\n<!-- $MDX skip -->\n```ocaml\n(* Apply Gaussian blur to a batch of images in one call *)\nlet blur_batch = Rune.vmap (Sowilo.gaussian_blur ~sigma:1.5)\nlet blurred_batch = blur_batch batch   (* batch shape: [N; H; W; C] *)\n```\n\n### 10.3 What is differentiable?\n\n| Operation                                                         | Differentiable                                |\n| ----------------------------------------------------------------- | --------------------------------------------- |\n| `to_float`, `to_uint8`                                            | Yes                                           |\n| `normalize`, `threshold`                                          | Yes                                           |\n| `to_grayscale`                                                    | Yes                                           |\n| `rgb_to_hsv`, `hsv_to_rgb`                                        | Yes                                           |\n| `adjust_brightness/contrast/saturation/hue/gamma`                 | Yes                                           |\n| `invert`                                                          | Yes                                           |\n| `resize` (bilinear)                                               | Yes                                           |\n| `crop`, `center_crop`            
                                 | Yes                                           |\n| `hflip`, `vflip`, `rotate90`                                      | Yes                                           |\n| `pad`                                                             | Yes                                           |\n| `gaussian_blur`, `box_blur`                                       | Yes                                           |\n| `filter2d`, `unsharp_mask`                                        | Yes                                           |\n| `erode`, `dilate`, `opening`, `closing`, `morphological_gradient` | Yes                                           |\n| `sobel`, `scharr`, `laplacian`                                    | Yes                                           |\n| `median_blur`                                                     | **No** (sort-based, gradient is zero)         |\n| `canny`                                                           | **No** (non-maximum suppression + hysteresis) |\n\n---\n\n## 11. What Sowilo Doesn't Have\n\nSowilo is a focused library for differentiable image processing primitives. It does not cover:\n\n* **Image I/O** -- no `imread`, `imwrite`. 
Use an external library to load/save images as `Nx.t` tensors.\n* **Video** -- no `VideoCapture`, `VideoWriter`, or frame-by-frame processing.\n* **GUI** -- no `imshow`, `waitKey`, or window management.\n* **Drawing** -- no `rectangle`, `circle`, `putText`, or shape rendering.\n* **Feature detection** -- no SIFT, ORB, AKAZE, or keypoint matching.\n* **Contour detection** -- no `findContours`, `drawContours`, or shape analysis.\n* **Object detection** -- no Haar cascades, HOG detectors, or DNN module.\n* **Camera calibration** -- no `calibrateCamera`, `undistort`, or stereo vision.\n* **Arbitrary affine/perspective transforms** -- no `warpAffine`, `warpPerspective`, or rotation by arbitrary angles.\n* **Additional color spaces** -- no Lab, YUV, or Bayer conversions.\n* **Adaptive thresholding** -- no `adaptiveThreshold` or Otsu's method.\n* **Histogram operations** -- no `calcHist`, `equalizeHist`, or CLAHE.\n* **Additional border modes** -- only constant padding (no reflect, replicate, or wrap).\n* **Connected components** -- no `connectedComponents` or label analysis.\n\nIf you need these, use OpenCV from Python or a dedicated OCaml binding. Sowilo focuses on the subset of operations useful in differentiable ML pipelines.\n\n---\n\n## 12. 
Quick Cheat Sheet\n\n| Task                   | OpenCV                                          | Sowilo                                                       |\n| ---------------------- | ----------------------------------------------- | ------------------------------------------------------------ |\n| Load image             | `cv2.imread(\"f.jpg\")`                           | N/A (use external I/O)                                       |\n| uint8 to float         | `img.astype(np.float32) / 255.0`                | `Sowilo.to_float img`                                        |\n| float to uint8         | `(img * 255).clip(0,255).astype(np.uint8)`      | `Sowilo.to_uint8 img`                                        |\n| Normalize              | `(img - mean) / std`                            | `Sowilo.normalize ~mean ~std img`                            |\n| Threshold              | `cv2.threshold(img, t, 1.0, THRESH_BINARY)`     | `Sowilo.threshold t img`                                     |\n| To grayscale           | `cv2.cvtColor(img, COLOR_BGR2GRAY)`             | `Sowilo.to_grayscale img`                                    |\n| RGB to HSV             | `cv2.cvtColor(img, COLOR_RGB2HSV)`              | `Sowilo.rgb_to_hsv img`                                      |\n| HSV to RGB             | `cv2.cvtColor(img, COLOR_HSV2RGB)`              | `Sowilo.hsv_to_rgb img`                                      |\n| Brightness             | `np.clip(img * f, 0, 1)`                        | `Sowilo.adjust_brightness f img`                             |\n| Contrast               | manual per-channel math                         | `Sowilo.adjust_contrast f img`                               |\n| Saturation             | manual HSV manipulation                         | `Sowilo.adjust_saturation f img`                             |\n| Hue shift              | manual HSV manipulation                         | `Sowilo.adjust_hue delta img`                                |\n| 
Gamma                  | `np.power(img, gamma)`                          | `Sowilo.adjust_gamma gamma img`                              |\n| Invert                 | `1.0 - img`                                     | `Sowilo.invert img`                                          |\n| Resize                 | `cv2.resize(img, (w, h))`                       | `Sowilo.resize ~height:h ~width:w img`                       |\n| Crop                   | `img[y:y+h, x:x+w]`                             | `Sowilo.crop ~y ~x ~height:h ~width:w img`                   |\n| Center crop            | manual offset computation                       | `Sowilo.center_crop ~height:h ~width:w img`                  |\n| Horizontal flip        | `cv2.flip(img, 1)`                              | `Sowilo.hflip img`                                           |\n| Vertical flip          | `cv2.flip(img, 0)`                              | `Sowilo.vflip img`                                           |\n| Rotate 90              | `cv2.rotate(img, ROTATE_90_CCW)`                | `Sowilo.rotate90 img`                                        |\n| Pad                    | `cv2.copyMakeBorder(img, t, b, l, r, ...)`      | `Sowilo.pad (t, b, l, r) img`                                |\n| Gaussian blur          | `cv2.GaussianBlur(img, (0,0), sigma)`           | `Sowilo.gaussian_blur ~sigma img`                            |\n| Box blur               | `cv2.blur(img, (k, k))`                         | `Sowilo.box_blur ~ksize:k img`                               |\n| Median blur            | `cv2.medianBlur(img, k)`                        | `Sowilo.median_blur ~ksize:k img`                            |\n| Custom kernel          | `cv2.filter2D(img, -1, kernel)`                 | `Sowilo.filter2d kernel img`                                 |\n| Sharpen                | manual unsharp mask                             | `Sowilo.unsharp_mask ~sigma img`                             |\n| Structuring element    | 
`cv2.getStructuringElement(shape, (h, w))`      | `Sowilo.structuring_element shape (h, w)`                    |\n| Erode                  | `cv2.erode(img, kernel)`                        | `Sowilo.erode ~kernel img`                                   |\n| Dilate                 | `cv2.dilate(img, kernel)`                       | `Sowilo.dilate ~kernel img`                                  |\n| Opening                | `cv2.morphologyEx(img, MORPH_OPEN, kernel)`     | `Sowilo.opening ~kernel img`                                 |\n| Closing                | `cv2.morphologyEx(img, MORPH_CLOSE, kernel)`    | `Sowilo.closing ~kernel img`                                 |\n| Morphological gradient | `cv2.morphologyEx(img, MORPH_GRADIENT, kernel)` | `Sowilo.morphological_gradient ~kernel img`                  |\n| Sobel                  | `cv2.Sobel(img, CV_32F, dx, dy)`                | `Sowilo.sobel img` (returns `(gx, gy)`)                      |\n| Scharr                 | `cv2.Scharr(img, CV_32F, dx, dy)`               | `Sowilo.scharr img` (returns `(gx, gy)`)                     |\n| Laplacian              | `cv2.Laplacian(img, CV_32F)`                    | `Sowilo.laplacian img`                                       |\n| Canny                  | `cv2.Canny(img, low, high)`                     | `Sowilo.canny ~low ~high img`                                |\n| Backprop through ops   | not possible                                    | `Rune.grad f` works on all ops except `median_blur`, `canny` |\n| Batch processing       | manual loop                                     | `Rune.vmap f` over batch dimension                           |\n"
  },
  {
    "path": "packages/sowilo/doc/dune",
    "content": "(mdx\n (files *.md)\n (package sowilo)\n (libraries sowilo))\n"
  },
  {
    "path": "packages/sowilo/doc/index.md",
    "content": "# Sowilo\n\nDifferentiable computer vision on Rune tensors.\n\nSowilo provides image processing operations expressed purely through Rune\ntensor operations. Filters, edge detectors, morphological operations, color\ntransforms, and geometric transforms -- all compatible with `Rune.grad`\nand `Rune.vmap`.\n\n## Image Conventions\n\nImages are `Nx.t` tensors with channels-last layout:\n\n- **Single image**: `[H; W; C]` (height, width, channels)\n- **Batch**: `[N; H; W; C]` (batch, height, width, channels)\n- **Grayscale**: C = 1, **RGB**: C = 3, **RGBA**: C = 4\n\nOperations expect float32 tensors with values in [0, 1]. Use `to_float` to\nconvert from uint8 and `to_uint8` to convert back.\n\n## What's Included\n\n- **Type conversion**: `to_float`, `to_uint8`, `normalize`, `threshold`\n- **Color**: `to_grayscale`, `rgb_to_hsv`, `hsv_to_rgb`, brightness, contrast, saturation, hue, gamma, `invert`\n- **Geometric transforms**: `resize`, `crop`, `center_crop`, `hflip`, `vflip`, `rotate90`, `pad`\n- **Spatial filters**: `gaussian_blur`, `box_blur`, `median_blur`, `filter2d`, `unsharp_mask`\n- **Morphology**: `structuring_element`, `erode`, `dilate`, `opening`, `closing`, `morphological_gradient`\n- **Edge detection**: `sobel`, `scharr`, `laplacian`, `canny`\n\n## Quick Start\n\n<!-- $MDX skip -->\n```ocaml\nopen Sowilo\n\nlet () =\n  (* Load image and convert to float32 [0, 1] *)\n  let img = Nx_io.load_image \"photo.png\" |> to_float in\n\n  (* Process: grayscale, blur, edge detection *)\n  let edges =\n    img\n    |> to_grayscale\n    |> gaussian_blur ~sigma:1.0\n    |> canny ~low:0.2 ~high:0.6\n  in\n\n  (* Save result *)\n  Nx_io.save_image (to_uint8 edges) \"edges.png\"\n```\n\n## Learn More\n\n- [Getting Started](01-getting-started/) -- installation, image conventions, first pipeline\n- [Operations Reference](02-operations/) -- every operation with examples\n- [Pipelines and Integration](03-pipelines/) -- composing pipelines, batch processing, deep 
learning integration\n- [Examples](https://github.com/raven-ml/raven/tree/main/sowilo/examples) -- complete image processing examples\n"
  },
  {
    "path": "packages/sowilo/examples/01-grayscale/README.md",
    "content": "# Grayscale\n\nConvert a color image to grayscale using `Sowilo.to_grayscale` and display the original and result side by side with Hugin.\n"
  },
  {
    "path": "packages/sowilo/examples/01-grayscale/dune",
    "content": "(executable\n (name main)\n (libraries nx nx.io sowilo rune hugin))\n"
  },
  {
    "path": "packages/sowilo/examples/01-grayscale/main.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nlet image_path = \"sowilo/examples/lena.png\"\n\nlet () =\n  let img_u8 = Nx_io.load_image image_path in\n  let img = Sowilo.to_float img_u8 in\n  let gray = Sowilo.to_grayscale img in\n  Hugin.hstack\n    [\n      Hugin.image img_u8 |> Hugin.title \"Original\";\n      Hugin.imshow ~data:gray ~cmap:Hugin.Cmap.gray ()\n      |> Hugin.title \"Grayscale\";\n    ]\n  |> Hugin.show\n"
  },
  {
    "path": "packages/sowilo/examples/02-gaussian-blur/README.md",
    "content": "# Gaussian Blur\n\nApply a Gaussian blur with `Sowilo.gaussian_blur` to smooth a grayscale image. Compares the original grayscale with the blurred result.\n"
  },
  {
    "path": "packages/sowilo/examples/02-gaussian-blur/dune",
    "content": "(executable\n (name main)\n (libraries nx nx.io sowilo rune hugin))\n"
  },
  {
    "path": "packages/sowilo/examples/02-gaussian-blur/main.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nlet image_path = \"sowilo/examples/lena.png\"\n\nlet () =\n  let img = Sowilo.to_float (Nx_io.load_image image_path) in\n  let gray = Sowilo.to_grayscale img in\n  let blurred = Sowilo.gaussian_blur ~sigma:1.5 ~ksize:5 gray in\n  Hugin.hstack\n    [\n      Hugin.imshow ~data:gray ~cmap:Hugin.Cmap.gray ()\n      |> Hugin.title \"Grayscale\";\n      Hugin.imshow ~data:blurred ~cmap:Hugin.Cmap.gray ()\n      |> Hugin.title \"Gaussian Blur (5x5, sigma=1.5)\";\n    ]\n  |> Hugin.show\n"
  },
  {
    "path": "packages/sowilo/examples/03-median-blur/README.md",
    "content": "# Median Blur\n\nApply a median filter with `Sowilo.median_blur` for noise removal. Effective at removing salt-and-pepper noise while preserving edges.\n"
  },
  {
    "path": "packages/sowilo/examples/03-median-blur/dune",
    "content": "(executable\n (name main)\n (libraries nx nx.io sowilo rune hugin))\n"
  },
  {
    "path": "packages/sowilo/examples/03-median-blur/main.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nlet image_path = \"sowilo/examples/lena.png\"\n\nlet () =\n  let img = Sowilo.to_float (Nx_io.load_image image_path) in\n  let gray = Sowilo.to_grayscale img in\n  let median = Sowilo.median_blur ~ksize:5 gray in\n  Hugin.hstack\n    [\n      Hugin.imshow ~data:gray ~cmap:Hugin.Cmap.gray ()\n      |> Hugin.title \"Grayscale\";\n      Hugin.imshow ~data:median ~cmap:Hugin.Cmap.gray ()\n      |> Hugin.title \"Median Blur (k=5)\";\n    ]\n  |> Hugin.show\n"
  },
  {
    "path": "packages/sowilo/examples/04-threshold/README.md",
    "content": "# Threshold\n\nBinarize a grayscale image with `Sowilo.threshold`. Pixels above the threshold become white, the rest become black.\n"
  },
  {
    "path": "packages/sowilo/examples/04-threshold/dune",
    "content": "(executable\n (name main)\n (libraries nx nx.io sowilo rune hugin))\n"
  },
  {
    "path": "packages/sowilo/examples/04-threshold/main.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nlet image_path = \"sowilo/examples/lena.png\"\n\nlet () =\n  let img = Sowilo.to_float (Nx_io.load_image image_path) in\n  let gray = Sowilo.to_grayscale img in\n  let thresh = Sowilo.threshold 0.5 gray in\n  Hugin.hstack\n    [\n      Hugin.imshow ~data:gray ~cmap:Hugin.Cmap.gray ()\n      |> Hugin.title \"Grayscale\";\n      Hugin.imshow ~data:thresh ~cmap:Hugin.Cmap.gray ()\n      |> Hugin.title \"Binary Threshold (128)\";\n    ]\n  |> Hugin.show\n"
  },
  {
    "path": "packages/sowilo/examples/05-sobel/README.md",
    "content": "# Sobel\n\nCompute horizontal and vertical edge gradients with `Sowilo.sobel`. Displays the original grayscale alongside Sobel X and Sobel Y visualizations.\n"
  },
  {
    "path": "packages/sowilo/examples/05-sobel/dune",
    "content": "(executable\n (name main)\n (libraries nx nx.io sowilo rune hugin))\n"
  },
  {
    "path": "packages/sowilo/examples/05-sobel/main.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nlet image_path = \"sowilo/examples/lena.png\"\n\nlet normalize_gradient img =\n  let abs_img = Nx.abs img in\n  let min_val = Nx.item [] (Nx.min ~keepdims:false abs_img) in\n  let max_val = Nx.item [] (Nx.max ~keepdims:false abs_img) in\n  let range = max_val -. min_val in\n  if range <= 1e-6 then Nx.zeros_like img\n  else\n    Nx.div\n      (Nx.sub abs_img (Nx.scalar Nx.float32 min_val))\n      (Nx.scalar Nx.float32 range)\n\nlet () =\n  let img = Sowilo.to_float (Nx_io.load_image image_path) in\n  let gray = Sowilo.to_grayscale img in\n  let gx, gy = Sowilo.sobel gray in\n  Hugin.hstack\n    [\n      Hugin.imshow ~data:gray ~cmap:Hugin.Cmap.gray ()\n      |> Hugin.title \"Grayscale\";\n      Hugin.imshow ~data:(normalize_gradient gx) ~cmap:Hugin.Cmap.gray ()\n      |> Hugin.title \"Sobel X\";\n      Hugin.imshow ~data:(normalize_gradient gy) ~cmap:Hugin.Cmap.gray ()\n      |> Hugin.title \"Sobel Y\";\n    ]\n  |> Hugin.show\n"
  },
  {
    "path": "packages/sowilo/examples/06-canny/README.md",
    "content": "# Canny\n\nDetect edges with the Canny algorithm using `Sowilo.canny`. Applies non-maximum suppression and hysteresis thresholding to produce clean edge maps.\n"
  },
  {
    "path": "packages/sowilo/examples/06-canny/dune",
    "content": "(executable\n (name main)\n (libraries nx nx.io sowilo rune hugin))\n"
  },
  {
    "path": "packages/sowilo/examples/06-canny/main.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nlet image_path = \"sowilo/examples/lena.png\"\n\nlet () =\n  let img = Sowilo.to_float (Nx_io.load_image image_path) in\n  let gray = Sowilo.to_grayscale img in\n  let edges = Sowilo.canny ~low:0.2 ~high:0.6 gray in\n  Hugin.hstack\n    [\n      Hugin.imshow ~data:gray ~cmap:Hugin.Cmap.gray ()\n      |> Hugin.title \"Grayscale\";\n      Hugin.imshow ~data:edges ~cmap:Hugin.Cmap.gray ()\n      |> Hugin.title \"Canny Edges (0.2, 0.6)\";\n    ]\n  |> Hugin.show\n"
  },
  {
    "path": "packages/sowilo/examples/07-morphology/README.md",
    "content": "# Morphology\n\nApply morphological erosion and dilation with `Sowilo.erode` and `Sowilo.dilate` using a rectangular structuring element. Compares the binarized image with eroded and dilated results.\n"
  },
  {
    "path": "packages/sowilo/examples/07-morphology/dune",
    "content": "(executable\n (name main)\n (libraries nx nx.io sowilo rune hugin))\n"
  },
  {
    "path": "packages/sowilo/examples/07-morphology/main.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nlet image_path = \"sowilo/examples/lena.png\"\n\nlet () =\n  let img = Sowilo.to_float (Nx_io.load_image image_path) in\n  let gray = Sowilo.to_grayscale img in\n  let thresh = Sowilo.threshold 0.5 gray in\n  let kernel = Sowilo.structuring_element Rect (5, 5) in\n  let eroded = Sowilo.erode ~kernel thresh in\n  let dilated = Sowilo.dilate ~kernel thresh in\n  Hugin.hstack\n    [\n      Hugin.imshow ~data:thresh ~cmap:Hugin.Cmap.gray ()\n      |> Hugin.title \"Thresholded\";\n      Hugin.imshow ~data:eroded ~cmap:Hugin.Cmap.gray ()\n      |> Hugin.title \"Eroded (5x5)\";\n      Hugin.imshow ~data:dilated ~cmap:Hugin.Cmap.gray ()\n      |> Hugin.title \"Dilated (5x5)\";\n    ]\n  |> Hugin.show\n"
  },
  {
    "path": "packages/sowilo/lib/color.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nlet to_grayscale img =\n  let shape = Nx.shape img in\n  let rank = Array.length shape in\n  let c_axis = rank - 1 in\n  (* Flatten spatial dims, matmul with [3;1] weights, reshape back *)\n  let spatial = Array.sub shape 0 (rank - 1) in\n  let n_pixels = Array.fold_left ( * ) 1 spatial in\n  let flat = Nx.reshape [| n_pixels; shape.(c_axis) |] img in\n  let weights = Nx.create Nx.float32 [| 3; 1 |] [| 0.299; 0.587; 0.114 |] in\n  let result = Nx.matmul flat weights in\n  let out_shape = Array.copy shape in\n  out_shape.(c_axis) <- 1;\n  Nx.reshape out_shape result\n\n(* RGB to HSV conversion using piecewise hue computation *)\n\nlet rgb_to_hsv img =\n  let shape = Nx.shape img in\n  let rank = Array.length shape in\n  let c_axis = rank - 1 in\n  let slice_channel i =\n    let slices =\n      List.init rank (fun ax -> if ax = c_axis then Nx.R (i, i + 1) else Nx.A)\n    in\n    Nx.slice slices img\n  in\n  let r = slice_channel 0 in\n  let g = slice_channel 1 in\n  let b = slice_channel 2 in\n  let cmax = Nx.maximum (Nx.maximum r g) b in\n  let cmin = Nx.minimum (Nx.minimum r g) b in\n  let delta = Nx.sub cmax cmin in\n  let eps = Nx.scalar_like img 1e-7 in\n  let delta_safe = Nx.add delta eps in\n  (* Hue computation: piecewise by which channel is max *)\n  let is_r_max = Nx.equal cmax r in\n  let is_g_max = Nx.logical_and (Nx.equal cmax g) (Nx.logical_not is_r_max) in\n  let h_r = Nx.div (Nx.sub g b) delta_safe in\n  let h_g = Nx.add_s (Nx.div (Nx.sub b r) delta_safe) 2.0 in\n  let h_b = Nx.add_s (Nx.div (Nx.sub r g) delta_safe) 4.0 in\n  let h = Nx.where is_r_max h_r (Nx.where is_g_max h_g h_b) in\n  (* Normalize to [0, 1]: divide by 6, wrap negatives *)\n  let h = Nx.div_s h 6.0 in\n  let h = 
Nx.where (Nx.less h (Nx.zeros_like h)) (Nx.add_s h 1.0) h in\n  (* Saturation *)\n  let s =\n    Nx.where (Nx.greater cmax eps)\n      (Nx.div delta (Nx.add cmax eps))\n      (Nx.zeros_like cmax)\n  in\n  (* Value *)\n  let v = cmax in\n  Nx.concatenate ~axis:c_axis [ h; s; v ]\n\n(* HSV to RGB conversion *)\n\nlet hsv_to_rgb img =\n  let shape = Nx.shape img in\n  let rank = Array.length shape in\n  let c_axis = rank - 1 in\n  let slice_channel i =\n    let slices =\n      List.init rank (fun ax -> if ax = c_axis then Nx.R (i, i + 1) else Nx.A)\n    in\n    Nx.slice slices img\n  in\n  let h = slice_channel 0 in\n  let s = slice_channel 1 in\n  let v = slice_channel 2 in\n  (* h is in [0, 1], scale to [0, 6) *)\n  let h6 = Nx.mul_s h 6.0 in\n  let hi = Nx.floor h6 in\n  let f = Nx.sub h6 hi in\n  let one = Nx.ones_like v in\n  let p = Nx.mul v (Nx.sub one s) in\n  let q = Nx.mul v (Nx.sub one (Nx.mul s f)) in\n  let t_ = Nx.mul v (Nx.sub one (Nx.mul s (Nx.sub one f))) in\n  (* Select r, g, b based on hi mod 6 *)\n  let hi_mod = Nx.mod_ h6 (Nx.scalar_like h6 6.0) in\n  let hi_floor = Nx.floor hi_mod in\n  let is_sect n =\n    let n_t = Nx.scalar_like hi_floor (Float.of_int n) in\n    Nx.logical_and\n      (Nx.greater_equal hi_floor n_t)\n      (Nx.less hi_floor (Nx.scalar_like hi_floor (Float.of_int (n + 1))))\n  in\n  let s0 = is_sect 0 in\n  let s1 = is_sect 1 in\n  let s2 = is_sect 2 in\n  let s3 = is_sect 3 in\n  let s4 = is_sect 4 in\n  (* s5 is the remainder *)\n  let r =\n    Nx.where s0 v\n      (Nx.where s1 q (Nx.where s2 p (Nx.where s3 p (Nx.where s4 t_ v))))\n  in\n  let g =\n    Nx.where s0 t_\n      (Nx.where s1 v (Nx.where s2 v (Nx.where s3 q (Nx.where s4 p p))))\n  in\n  let b =\n    Nx.where s0 p\n      (Nx.where s1 p (Nx.where s2 t_ (Nx.where s3 v (Nx.where s4 v q))))\n  in\n  Nx.concatenate ~axis:c_axis [ r; g; b ]\n\nlet adjust_brightness factor img =\n  Nx.clip ~min:0.0 ~max:1.0 (Nx.mul_s img factor)\n\nlet adjust_contrast factor img =\n  let 
shape = Nx.shape img in\n  let rank = Array.length shape in\n  (* Mean per channel, keep spatial dims *)\n  let axes = List.init (rank - 1) Fun.id in\n  let mean = Nx.mean ~axes ~keepdims:true img in\n  let shifted = Nx.sub img mean in\n  Nx.clip ~min:0.0 ~max:1.0 (Nx.add mean (Nx.mul_s shifted factor))\n\nlet adjust_saturation factor img =\n  let hsv = rgb_to_hsv img in\n  let shape = Nx.shape hsv in\n  let rank = Array.length shape in\n  let c_axis = rank - 1 in\n  let h =\n    Nx.slice\n      (List.init rank (fun ax -> if ax = c_axis then Nx.R (0, 1) else Nx.A))\n      hsv\n  in\n  let s =\n    Nx.slice\n      (List.init rank (fun ax -> if ax = c_axis then Nx.R (1, 2) else Nx.A))\n      hsv\n  in\n  let v =\n    Nx.slice\n      (List.init rank (fun ax -> if ax = c_axis then Nx.R (2, 3) else Nx.A))\n      hsv\n  in\n  let s' = Nx.clip ~min:0.0 ~max:1.0 (Nx.mul_s s factor) in\n  hsv_to_rgb (Nx.concatenate ~axis:c_axis [ h; s'; v ])\n\nlet adjust_hue delta img =\n  let hsv = rgb_to_hsv img in\n  let shape = Nx.shape hsv in\n  let rank = Array.length shape in\n  let c_axis = rank - 1 in\n  let h =\n    Nx.slice\n      (List.init rank (fun ax -> if ax = c_axis then Nx.R (0, 1) else Nx.A))\n      hsv\n  in\n  let s =\n    Nx.slice\n      (List.init rank (fun ax -> if ax = c_axis then Nx.R (1, 2) else Nx.A))\n      hsv\n  in\n  let v =\n    Nx.slice\n      (List.init rank (fun ax -> if ax = c_axis then Nx.R (2, 3) else Nx.A))\n      hsv\n  in\n  (* Wrap hue to [0, 1] *)\n  let h' = Nx.add_s h delta in\n  let h' = Nx.sub h' (Nx.floor h') in\n  hsv_to_rgb (Nx.concatenate ~axis:c_axis [ h'; s; v ])\n\nlet adjust_gamma gamma img = Nx.pow_s img gamma\nlet invert img = Nx.sub (Nx.ones_like img) img\n"
  },
  {
    "path": "packages/sowilo/lib/dune",
    "content": "(library\n (name sowilo)\n (public_name sowilo)\n (libraries nx nx.core)\n (private_modules helpers color transform filter morphology edge))\n"
  },
  {
    "path": "packages/sowilo/lib/edge.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Compute both gradients in a single pass via correlate2d *)\n\nlet gradient_pair ~kernel_x ~kernel_y img =\n  Helpers.with_batch_pair\n    (fun img ->\n      let gx = Helpers.convolve_per_channel kernel_x img in\n      let gy = Helpers.convolve_per_channel kernel_y img in\n      (gx, gy))\n    img\n\nlet sobel_kx =\n  Nx.create Nx.float32 [| 3; 3 |] [| -1.; 0.; 1.; -2.; 0.; 2.; -1.; 0.; 1. |]\n\nlet sobel_ky =\n  Nx.create Nx.float32 [| 3; 3 |] [| -1.; -2.; -1.; 0.; 0.; 0.; 1.; 2.; 1. |]\n\nlet scharr_kx =\n  Nx.create Nx.float32 [| 3; 3 |] [| -3.; 0.; 3.; -10.; 0.; 10.; -3.; 0.; 3. |]\n\nlet scharr_ky =\n  Nx.create Nx.float32 [| 3; 3 |] [| -3.; -10.; -3.; 0.; 0.; 0.; 3.; 10.; 3. |]\n\nlet sobel ?(ksize = 3) img =\n  ignore ksize;\n  gradient_pair ~kernel_x:sobel_kx ~kernel_y:sobel_ky img\n\nlet scharr img = gradient_pair ~kernel_x:scharr_kx ~kernel_y:scharr_ky img\n\nlet laplacian ?(ksize = 3) img =\n  ignore ksize;\n  (* Laplacian kernel: [0 1 0; 1 -4 1; 0 1 0] *)\n  let kernel =\n    Nx.create Nx.float32 [| 3; 3 |]\n      [| 0.0; 1.0; 0.0; 1.0; -4.0; 1.0; 0.0; 1.0; 0.0 |]\n  in\n  Helpers.with_batch (fun img -> Helpers.convolve_per_channel kernel img) img\n\nlet canny ~low ~high ?(sigma = 1.4) img =\n  Helpers.with_batch\n    (fun img ->\n      (* 1. Gaussian blur *)\n      let blurred = Filter.gaussian_blur ~sigma img in\n      (* 2. 
Gradient computation *)\n      let gx, gy =\n        gradient_pair ~kernel_x:sobel_kx ~kernel_y:sobel_ky blurred\n      in\n      let mag = Nx.sqrt (Nx.add (Nx.square gx) (Nx.square gy)) in\n      let angle = Nx.atan2 gy gx in\n      let shape = Nx.shape mag in\n      let n = shape.(0) and h = shape.(1) and w = shape.(2) in\n      let mag3 = Nx.reshape [| n; h; w |] mag in\n      let angle3 = Nx.reshape [| n; h; w |] angle in\n      (* 3. Non-maximum suppression *)\n      let angle_deg = Nx.mul_s angle3 (180.0 /. Float.pi) in\n      let angle_pos =\n        Nx.where\n          (Nx.less angle_deg (Nx.zeros_like angle_deg))\n          (Nx.add_s angle_deg 180.0) angle_deg\n      in\n      let scalar v = Nx.scalar_like angle_pos v in\n      let is_horizontal =\n        Nx.logical_or\n          (Nx.logical_and\n             (Nx.greater_equal angle_pos (scalar 0.0))\n             (Nx.less angle_pos (scalar 22.5)))\n          (Nx.logical_and\n             (Nx.greater_equal angle_pos (scalar 157.5))\n             (Nx.less_equal angle_pos (scalar 180.0)))\n      in\n      let is_diag1 =\n        Nx.logical_and\n          (Nx.greater_equal angle_pos (scalar 22.5))\n          (Nx.less angle_pos (scalar 67.5))\n      in\n      let is_vertical =\n        Nx.logical_and\n          (Nx.greater_equal angle_pos (scalar 67.5))\n          (Nx.less angle_pos (scalar 112.5))\n      in\n      let is_diag2 =\n        Nx.logical_and\n          (Nx.greater_equal angle_pos (scalar 112.5))\n          (Nx.less angle_pos (scalar 157.5))\n      in\n      let mag_padded = Nx.pad [| (0, 0); (1, 1); (1, 1) |] 0.0 mag3 in\n      let center =\n        Nx.slice [ Nx.A; Nx.R (1, h + 1); Nx.R (1, w + 1) ] mag_padded\n      in\n      let left = Nx.slice [ Nx.A; Nx.R (1, h + 1); Nx.R (0, w) ] mag_padded in\n      let right =\n        Nx.slice [ Nx.A; Nx.R (1, h + 1); Nx.R (2, w + 2) ] mag_padded\n      in\n      let top = Nx.slice [ Nx.A; Nx.R (0, h); Nx.R (1, w + 1) ] mag_padded in\n      let bottom =\n 
       Nx.slice [ Nx.A; Nx.R (2, h + 2); Nx.R (1, w + 1) ] mag_padded\n      in\n      let tr = Nx.slice [ Nx.A; Nx.R (0, h); Nx.R (2, w + 2) ] mag_padded in\n      let bl = Nx.slice [ Nx.A; Nx.R (2, h + 2); Nx.R (0, w) ] mag_padded in\n      let tl = Nx.slice [ Nx.A; Nx.R (0, h); Nx.R (0, w) ] mag_padded in\n      let br = Nx.slice [ Nx.A; Nx.R (2, h + 2); Nx.R (2, w + 2) ] mag_padded in\n      let ge a b = Nx.greater_equal a b in\n      let is_max =\n        Nx.logical_or\n          (Nx.logical_or\n             (Nx.logical_and is_horizontal\n                (Nx.logical_and (ge center left) (ge center right)))\n             (Nx.logical_and is_diag1\n                (Nx.logical_and (ge center tr) (ge center bl))))\n          (Nx.logical_or\n             (Nx.logical_and is_vertical\n                (Nx.logical_and (ge center top) (ge center bottom)))\n             (Nx.logical_and is_diag2\n                (Nx.logical_and (ge center tl) (ge center br))))\n      in\n      let nms = Nx.where is_max mag3 (Nx.zeros_like mag3) in\n      (* 4. Double thresholding *)\n      let strong = Nx.greater nms (Nx.scalar_like nms high) in\n      let weak =\n        Nx.logical_and\n          (Nx.greater_equal nms (Nx.scalar_like nms low))\n          (Nx.logical_not strong)\n      in\n      (* 5. Hysteresis via dilation *)\n      let one = Nx.ones_like nms in\n      let zero = Nx.zeros_like nms in\n      let strong_map = Nx.where strong one zero in\n      let strong_4d = Nx.reshape [| n; h; w; 1 |] strong_map in\n      let k3 = Morphology.structuring_element Rect (3, 3) in\n      let dilated =\n        Morphology.dilate ~kernel:k3 (Morphology.dilate ~kernel:k3 strong_4d)\n      in\n      let dilated3 = Nx.reshape [| n; h; w |] dilated in\n      let connected = Nx.greater dilated3 zero in\n      let final =\n        Nx.where (Nx.logical_and connected (Nx.logical_or strong weak)) one zero\n      in\n      Nx.reshape [| n; h; w; 1 |] final)\n    img\n"
  },
  {
    "path": "packages/sowilo/lib/filter.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nlet generate_gaussian_kernel size sigma =\n  let center = float (size / 2) in\n  let sigma2_sq = 2.0 *. sigma *. sigma in\n  let positions = Nx.arange_f Nx.float32 0.0 (float size) 1.0 in\n  let x = Nx.sub_s positions center in\n  let kernel = Nx.exp (Nx.div_s (Nx.neg (Nx.square x)) sigma2_sq) in\n  let sum = Nx.sum kernel in\n  Nx.div kernel (Nx.reshape [||] sum)\n\nlet gaussian_blur ~sigma ?(ksize = 0) img =\n  let ksize =\n    if ksize > 0 then ksize\n    else (2 * int_of_float (Float.round (3.0 *. sigma))) + 1\n  in\n  if ksize <= 0 || ksize mod 2 = 0 then\n    invalid_arg \"gaussian_blur: ksize must be a positive odd integer\";\n  let kernel_1d = generate_gaussian_kernel ksize sigma in\n  let kernel_h = Nx.reshape [| 1; ksize |] kernel_1d in\n  let kernel_v = Nx.reshape [| ksize; 1 |] kernel_1d in\n  Helpers.with_batch\n    (fun img ->\n      let temp = Helpers.convolve_per_channel kernel_h img in\n      Helpers.convolve_per_channel kernel_v temp)\n    img\n\nlet box_blur ~ksize img =\n  if ksize <= 0 then invalid_arg \"box_blur: ksize must be positive\";\n  let value = 1.0 /. 
float (ksize * ksize) in\n  let kernel = Nx.full Nx.float32 [| ksize; ksize |] value in\n  Helpers.with_batch (fun img -> Helpers.convolve_per_channel kernel img) img\n\nlet median_blur ~ksize img =\n  if ksize <= 0 || ksize mod 2 = 0 then\n    invalid_arg \"median_blur: ksize must be a positive odd integer\";\n  let pad_size = ksize / 2 in\n  Helpers.with_batch\n    (fun img ->\n      let shape = Nx.shape img in\n      let h = shape.(1) and w = shape.(2) in\n      let padded =\n        Nx.pad\n          [| (0, 0); (pad_size, pad_size); (pad_size, pad_size); (0, 0) |]\n          0.0 img\n      in\n      let windows = ref [] in\n      for dy = 0 to ksize - 1 do\n        for dx = 0 to ksize - 1 do\n          let slice =\n            Nx.slice [ Nx.A; Nx.R (dy, dy + h); Nx.R (dx, dx + w); Nx.A ] padded\n          in\n          windows := slice :: !windows\n        done\n      done;\n      let stacked = Nx.stack ~axis:0 (List.rev !windows) in\n      let sorted, _ = Nx.sort ~axis:0 stacked in\n      let median_idx = ksize * ksize / 2 in\n      Nx.slice [ Nx.I median_idx; Nx.A; Nx.A; Nx.A; Nx.A ] sorted)\n    img\n\nlet filter2d kernel img =\n  Helpers.with_batch (fun img -> Helpers.convolve_per_channel kernel img) img\n\nlet unsharp_mask ~sigma ?(amount = 1.0) img =\n  let blurred = gaussian_blur ~sigma img in\n  let diff = Nx.sub img blurred in\n  Nx.add img (Nx.mul_s diff amount)\n"
  },
  {
    "path": "packages/sowilo/lib/helpers.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nlet err_rank n =\n  Printf.sprintf \"expected rank 3 [H;W;C] or 4 [N;H;W;C], got %d\" n\n\nlet with_batch f img =\n  match Array.length (Nx.shape img) with\n  | 3 ->\n      let batched = Nx.unsqueeze_axis 0 img in\n      Nx.squeeze_axis 0 (f batched)\n  | 4 -> f img\n  | n -> invalid_arg (err_rank n)\n\nlet with_batch_pair f img =\n  match Array.length (Nx.shape img) with\n  | 3 ->\n      let batched = Nx.unsqueeze_axis 0 img in\n      let a, b = f batched in\n      (Nx.squeeze_axis 0 a, Nx.squeeze_axis 0 b)\n  | 4 -> f img\n  | n -> invalid_arg (err_rank n)\n\nlet convolve_per_channel kernel img =\n  (* img: (N, H, W, C), kernel: (kH, kW) *)\n  let shape = Nx.shape img in\n  let n = shape.(0) in\n  let h = shape.(1) in\n  let w = shape.(2) in\n  let c = shape.(3) in\n  (* NCHW then merge N*C into leading: (N*C, H, W) *)\n  let img_nchw = Nx.transpose ~axes:[ 0; 3; 1; 2 ] img in\n  let merged = Nx.reshape [| n * c; h; w |] img_nchw in\n  (* correlate on last 2 dims with Same padding *)\n  let result = Nx.correlate ~padding:`Same merged kernel in\n  let out_shape = Nx.shape result in\n  let oh = out_shape.(1) in\n  let ow = out_shape.(2) in\n  (* Reshape back: (N, C, H_out, W_out) -> (N, H_out, W_out, C) *)\n  let result = Nx.reshape [| n; c; oh; ow |] result in\n  Nx.transpose ~axes:[ 0; 2; 3; 1 ] result\n"
  },
  {
    "path": "packages/sowilo/lib/morphology.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\ntype kernel_shape = Rect | Cross | Ellipse\n\nlet structuring_element shape (kh, kw) =\n  if kh <= 0 || kw <= 0 || kh mod 2 = 0 || kw mod 2 = 0 then\n    invalid_arg \"structuring_element: dimensions must be positive odd integers\";\n  match shape with\n  | Rect -> Nx.ones Nx.uint8 [| kh; kw |]\n  | Cross ->\n      let center_h = kh / 2 and center_w = kw / 2 in\n      let h_line = Nx.ones Nx.uint8 [| 1; kw |] in\n      let h_padded =\n        Nx.pad [| (center_h, kh - center_h - 1); (0, 0) |] 0 h_line\n      in\n      let v_line = Nx.ones Nx.uint8 [| kh; 1 |] in\n      let v_padded =\n        Nx.pad [| (0, 0); (center_w, kw - center_w - 1) |] 0 v_line\n      in\n      Nx.maximum h_padded v_padded\n  | Ellipse ->\n      let cx = float (kw / 2) and cy = float (kh / 2) in\n      let rx = cx +. 0.5 and ry = cy +. 0.5 in\n      let data =\n        Array.init (kh * kw) (fun idx ->\n            let y = float (idx / kw) -. cy in\n            let x = float (idx mod kw) -. cx in\n            if (x *. x /. (rx *. rx)) +. (y *. y /. (ry *. 
ry)) <= 1.0 then 1\n            else 0)\n      in\n      Nx.create Nx.uint8 [| kh; kw |] data\n\n(* Find active positions (non-zero) in a kernel *)\nlet active_positions kernel =\n  let kshape = Nx.shape kernel in\n  let kh = kshape.(0) and kw = kshape.(1) in\n  let positions = ref [] in\n  for i = 0 to kh - 1 do\n    for j = 0 to kw - 1 do\n      if Nx.item [ i; j ] kernel <> 0 then positions := (i, j) :: !positions\n    done\n  done;\n  match !positions with\n  | [] ->\n      invalid_arg \"structuring element must have at least one non-zero element\"\n  | ps -> List.rev ps\n\nlet morph_reduce op slices =\n  match slices with\n  | [] -> failwith \"empty slice list\"\n  | first :: rest -> List.fold_left (fun acc s -> op acc s) first rest\n\nlet morph_op (type a b) ~op ~kernel (img : (a, b) Nx.t) : (a, b) Nx.t =\n  let kshape = Nx.shape kernel in\n  let kh = kshape.(0) and kw = kshape.(1) in\n  let pad_h = kh / 2 and pad_w = kw / 2 in\n  let positions = active_positions kernel in\n  let reduce =\n    match op with\n    | `Min -> morph_reduce Nx.minimum\n    | `Max -> morph_reduce Nx.maximum\n  in\n  let dt = Nx.dtype img in\n  (* For erosion, pad with max so boundary doesn't create false minima. For\n     dilation, pad with min (zeros). 
*)\n  let pad_val : a =\n    match op with\n    | `Max -> Nx_core.Dtype.zero dt\n    | `Min -> Nx_core.Dtype.max_value dt\n  in\n  Helpers.with_batch\n    (fun img ->\n      let shape = Nx.shape img in\n      let h = shape.(1) and w = shape.(2) in\n      let padding = [| (0, 0); (pad_h, pad_h); (pad_w, pad_w); (0, 0) |] in\n      let padded = Nx.pad padding pad_val img in\n      let slices =\n        List.map\n          (fun (dy, dx) ->\n            Nx.slice [ Nx.A; Nx.R (dy, dy + h); Nx.R (dx, dx + w); Nx.A ] padded)\n          positions\n      in\n      reduce slices)\n    img\n\nlet erode ~kernel img = morph_op ~op:`Min ~kernel img\nlet dilate ~kernel img = morph_op ~op:`Max ~kernel img\nlet opening ~kernel img = dilate ~kernel (erode ~kernel img)\nlet closing ~kernel img = erode ~kernel (dilate ~kernel img)\n\nlet morphological_gradient ~kernel img =\n  Nx.sub (dilate ~kernel img) (erode ~kernel img)\n"
  },
  {
    "path": "packages/sowilo/lib/sowilo.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Type conversion and preprocessing *)\n\nlet to_float img = Nx.div_s (Nx.astype Nx.float32 img) 255.0\n\nlet to_uint8 img =\n  let clipped = Nx.clip ~min:0.0 ~max:1.0 img in\n  Nx.astype Nx.uint8 (Nx.mul_s clipped 255.0)\n\nlet normalize ~mean ~std img =\n  let shape = Nx.shape img in\n  let rank = Array.length shape in\n  let c = shape.(rank - 1) in\n  if List.length mean <> c || List.length std <> c then\n    invalid_arg\n      (Printf.sprintf\n         \"normalize: mean/std length (%d/%d) does not match channels (%d)\"\n         (List.length mean) (List.length std) c);\n  let ones = Array.make rank 1 in\n  ones.(rank - 1) <- c;\n  let mean_t =\n    Nx.reshape ones (Nx.create Nx.float32 [| c |] (Array.of_list mean))\n  in\n  let std_t =\n    Nx.reshape ones (Nx.create Nx.float32 [| c |] (Array.of_list std))\n  in\n  Nx.div (Nx.sub img mean_t) std_t\n\nlet threshold t img =\n  let t_s = Nx.scalar_like img t in\n  let one = Nx.ones_like img in\n  let zero = Nx.zeros_like img in\n  Nx.where (Nx.greater img t_s) one zero\n\n(* Re-export private modules *)\n\ninclude Color\ninclude Transform\ninclude Filter\ninclude Morphology\ninclude Edge\n"
  },
  {
    "path": "packages/sowilo/lib/sowilo.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Differentiable computer vision on {!Rune}.\n\n    Sowilo provides image processing operations expressed purely through {!Nx}\n    tensor operations. All operations are compatible with {!Rune.grad} and\n    {!Rune.vmap}.\n\n    {1:conventions Image conventions}\n\n    Images are {!Nx.t} tensors with channels-last layout:\n    - Single image: [[H; W; C]] (height, width, channels).\n    - Batch: [[N; H; W; C]] (batch, height, width, channels).\n    - Grayscale: [C = 1], RGB: [C = 3], RGBA: [C = 4].\n\n    Operations expect float32 tensors with values in \\[0, 1\\]. Use {!to_float}\n    and {!to_uint8} to convert between integer and float representations. *)\n\n(** {1:converting Type conversion and preprocessing} *)\n\nval to_float : ('a, 'b) Nx.t -> Nx.float32_t\n(** [to_float img] is [img] cast to float32 and scaled to \\[0, 1\\] by dividing\n    by 255. *)\n\nval to_uint8 : Nx.float32_t -> Nx.uint8_t\n(** [to_uint8 img] is [img] scaled from \\[0, 1\\] to \\[0, 255\\] and cast to\n    uint8. Values are clipped to \\[0, 1\\] before scaling. *)\n\nval normalize :\n  mean:float list -> std:float list -> Nx.float32_t -> Nx.float32_t\n(** [normalize ~mean ~std img] is per-channel normalization:\n    [(img - mean) / std]. [mean] and [std] must have the same length as the\n    channel dimension.\n\n    Raises [Invalid_argument] if [mean] or [std] length does not match the\n    number of channels. *)\n\nval threshold : float -> Nx.float32_t -> Nx.float32_t\n(** [threshold t img] is [1.0] where [img > t], [0.0] elsewhere. 
*)\n\n(** {1:color Color space conversion and adjustment} *)\n\nval to_grayscale : Nx.float32_t -> Nx.float32_t\n(** [to_grayscale img] converts RGB to single-channel grayscale using ITU-R\n    BT.601 weights: [0.299 * R + 0.587 * G + 0.114 * B]. Input must have\n    [C >= 3]. Output has [C = 1]. *)\n\nval rgb_to_hsv : Nx.float32_t -> Nx.float32_t\n(** [rgb_to_hsv img] converts RGB \\[0, 1\\] to HSV. H is in \\[0, 1\\] (normalized\n    from \\[0, 360\\]), S and V are in \\[0, 1\\]. *)\n\nval hsv_to_rgb : Nx.float32_t -> Nx.float32_t\n(** [hsv_to_rgb img] converts HSV back to RGB \\[0, 1\\]. *)\n\nval adjust_brightness : float -> Nx.float32_t -> Nx.float32_t\n(** [adjust_brightness factor img] scales pixel values by [factor] and clips to\n    \\[0, 1\\]. *)\n\nval adjust_contrast : float -> Nx.float32_t -> Nx.float32_t\n(** [adjust_contrast factor img] adjusts contrast around the per-channel mean.\n    [0] produces solid gray, [1] is the original image. *)\n\nval adjust_saturation : float -> Nx.float32_t -> Nx.float32_t\n(** [adjust_saturation factor img] adjusts color saturation via HSV. [0]\n    produces grayscale, [1] is the original image. *)\n\nval adjust_hue : float -> Nx.float32_t -> Nx.float32_t\n(** [adjust_hue delta img] rotates hue by [delta]. [delta] is in \\[-0.5, 0.5\\],\n    corresponding to a full rotation of the hue circle. *)\n\nval adjust_gamma : float -> Nx.float32_t -> Nx.float32_t\n(** [adjust_gamma gamma img] applies gamma correction: [img ** gamma]. *)\n\nval invert : Nx.float32_t -> Nx.float32_t\n(** [invert img] is [1.0 - img]. *)\n\n(** {1:transform Geometric transforms} *)\n\n(** The type for interpolation methods. *)\ntype interpolation =\n  | Nearest  (** Nearest-neighbor interpolation. *)\n  | Bilinear  (** Bilinear interpolation (default). *)\n\nval resize :\n  ?interpolation:interpolation ->\n  height:int ->\n  width:int ->\n  ('a, 'b) Nx.t ->\n  ('a, 'b) Nx.t\n(** [resize ~height ~width img] resizes to target dimensions. 
[interpolation]\n    defaults to {!Bilinear}. Casts to float32 internally for bilinear\n    interpolation.\n\n    Raises [Invalid_argument] if [height] or [width] is not positive. *)\n\nval crop :\n  y:int -> x:int -> height:int -> width:int -> ('a, 'b) Nx.t -> ('a, 'b) Nx.t\n(** [crop ~y ~x ~height ~width img] extracts a rectangular region starting at\n    [(y, x)] with the given dimensions.\n\n    Raises [Invalid_argument] if the region exceeds image bounds. *)\n\nval center_crop : height:int -> width:int -> ('a, 'b) Nx.t -> ('a, 'b) Nx.t\n(** [center_crop ~height ~width img] crops a centered rectangle.\n\n    Raises [Invalid_argument] if [height] or [width] exceeds the image\n    dimensions. *)\n\nval hflip : ('a, 'b) Nx.t -> ('a, 'b) Nx.t\n(** [hflip img] flips horizontally (left to right). *)\n\nval vflip : ('a, 'b) Nx.t -> ('a, 'b) Nx.t\n(** [vflip img] flips vertically (top to bottom). *)\n\nval rotate90 : ?k:int -> ('a, 'b) Nx.t -> ('a, 'b) Nx.t\n(** [rotate90 img] rotates by [k * 90] degrees counter-clockwise. [k] defaults\n    to [1]. Negative values rotate clockwise. *)\n\nval pad :\n  ?value:float -> int * int * int * int -> ('a, 'b) Nx.t -> ('a, 'b) Nx.t\n(** [pad (top, bottom, left, right) img] zero-pads the spatial dimensions.\n    [value] defaults to [0.0]. *)\n\n(** {1:filter Spatial filtering} *)\n\nval gaussian_blur : sigma:float -> ?ksize:int -> Nx.float32_t -> Nx.float32_t\n(** [gaussian_blur ~sigma img] applies isotropic Gaussian blur using separable\n    convolution. [ksize] defaults to [2 * ceil(3 * sigma) + 1], capturing 99.7%\n    of the distribution.\n\n    Raises [Invalid_argument] if [ksize] is even or not positive. *)\n\nval box_blur : ksize:int -> Nx.float32_t -> Nx.float32_t\n(** [box_blur ~ksize img] applies a [ksize * ksize] averaging filter.\n\n    Raises [Invalid_argument] if [ksize] is not positive. 
*)\n\nval median_blur : ksize:int -> Nx.float32_t -> Nx.float32_t\n(** [median_blur ~ksize img] applies a median filter.\n\n    {b Note.} Not differentiable: uses sort internally, gradient is zero almost\n    everywhere.\n\n    Raises [Invalid_argument] if [ksize] is not a positive odd integer. *)\n\nval filter2d : Nx.float32_t -> Nx.float32_t -> Nx.float32_t\n(** [filter2d kernel img] applies a custom 2D convolution [kernel] to [img].\n    [kernel] has shape [[kH; kW]]. Applied independently to each channel with\n    [Same] padding. *)\n\nval unsharp_mask : sigma:float -> ?amount:float -> Nx.float32_t -> Nx.float32_t\n(** [unsharp_mask ~sigma img] sharpens by subtracting a Gaussian blur:\n    [img + amount * (img - gaussian_blur ~sigma img)]. [amount] defaults to\n    [1.0]. *)\n\n(** {1:morphology Morphological operations} *)\n\n(** The type for structuring element shapes. *)\ntype kernel_shape =\n  | Rect  (** Full rectangle. *)\n  | Cross  (** Cross-shaped element. *)\n  | Ellipse  (** Elliptical element. *)\n\nval structuring_element : kernel_shape -> int * int -> Nx.uint8_t\n(** [structuring_element shape (h, w)] is a structuring element of the given\n    [shape] and size. [h] and [w] must be positive odd integers.\n\n    Raises [Invalid_argument] if [h] or [w] is not positive or not odd. *)\n\nval erode : kernel:Nx.uint8_t -> ('a, 'b) Nx.t -> ('a, 'b) Nx.t\n(** [erode ~kernel img] replaces each pixel with the minimum over the\n    kernel-shaped neighborhood. *)\n\nval dilate : kernel:Nx.uint8_t -> ('a, 'b) Nx.t -> ('a, 'b) Nx.t\n(** [dilate ~kernel img] replaces each pixel with the maximum over the\n    kernel-shaped neighborhood. *)\n\nval opening : kernel:Nx.uint8_t -> ('a, 'b) Nx.t -> ('a, 'b) Nx.t\n(** [opening ~kernel img] is [dilate ~kernel (erode ~kernel img)]. Removes small\n    bright regions. *)\n\nval closing : kernel:Nx.uint8_t -> ('a, 'b) Nx.t -> ('a, 'b) Nx.t\n(** [closing ~kernel img] is [erode ~kernel (dilate ~kernel img)]. 
Fills small\n    dark regions. *)\n\nval morphological_gradient : kernel:Nx.uint8_t -> ('a, 'b) Nx.t -> ('a, 'b) Nx.t\n(** [morphological_gradient ~kernel img] is\n    [dilate ~kernel img - erode ~kernel img]. Highlights edges. *)\n\n(** {1:edge Edge detection} *)\n\nval sobel : ?ksize:int -> Nx.float32_t -> Nx.float32_t * Nx.float32_t\n(** [sobel img] computes Sobel gradients. Returns [(gx, gy)] where [gx] is the\n    horizontal gradient and [gy] is the vertical gradient. [ksize] defaults to\n    [3]. Input must have [C = 1]. *)\n\nval scharr : Nx.float32_t -> Nx.float32_t * Nx.float32_t\n(** [scharr img] computes Scharr gradients, which are more rotationally accurate\n    than Sobel. Returns [(gx, gy)]. Input must have [C = 1]. *)\n\nval laplacian : ?ksize:int -> Nx.float32_t -> Nx.float32_t\n(** [laplacian img] computes the Laplacian (sum of second spatial derivatives).\n    [ksize] defaults to [3]. Input must have [C = 1]. *)\n\nval canny :\n  low:float -> high:float -> ?sigma:float -> Nx.float32_t -> Nx.float32_t\n(** [canny ~low ~high img] applies the Canny edge detector. Returns [1.0] for\n    edge pixels, [0.0] otherwise. [low] and [high] are the hysteresis\n    thresholds. [sigma] controls the initial Gaussian blur and defaults to\n    [1.4]. Input must have [C = 1].\n\n    {b Note.} Not differentiable: uses non-maximum suppression and hysteresis\n    thresholding. *)\n"
  },
  {
    "path": "packages/sowilo/lib/transform.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\ntype interpolation = Nearest | Bilinear\n\n(* Spatial axes for H and W given tensor rank *)\n\nlet hw_axes rank =\n  match rank with\n  | 3 -> (0, 1) (* [H; W; C] *)\n  | 4 -> (1, 2) (* [N; H; W; C] *)\n  | n -> invalid_arg (Helpers.err_rank n)\n\nlet float_range size = Nx.arange_f Nx.float32 0.0 (float size) 1.0\n\nlet compute_nearest_indices ~size_in ~size_out =\n  if size_out = 1 || size_in = 1 then Nx.full Nx.int32 [| size_out |] Int32.zero\n  else\n    let scale = float size_in /. float size_out in\n    let coords = float_range size_out in\n    let src = Nx.sub_s (Nx.mul_s (Nx.add_s coords 0.5) scale) 0.5 in\n    let src_clipped = Nx.clip ~min:0.0 ~max:(float (size_in - 1)) src in\n    Nx.astype Nx.int32 (Nx.round src_clipped)\n\nlet compute_linear_axis ~size_in ~size_out =\n  if size_out = 1 || size_in = 1 then\n    let zeros_i = Nx.full Nx.int32 [| size_out |] Int32.zero in\n    let zeros_f = Nx.full Nx.float32 [| size_out |] 0.0 in\n    (zeros_i, zeros_i, zeros_f)\n  else\n    let scale = float (size_in - 1) /. 
float (size_out - 1) in\n    let src = Nx.mul_s (float_range size_out) scale in\n    let idx0 = src |> Nx.floor |> Nx.astype Nx.int32 in\n    let one = Nx.scalar_like idx0 Int32.(of_int 1) in\n    let max_idx = Nx.scalar_like idx0 Int32.(of_int (size_in - 1)) in\n    let idx1 = Nx.minimum (Nx.add idx0 one) max_idx in\n    let delta = Nx.sub src (Nx.astype Nx.float32 idx0) in\n    (idx0, idx1, delta)\n\nlet resize : type a b.\n    ?interpolation:interpolation ->\n    height:int ->\n    width:int ->\n    (a, b) Nx.t ->\n    (a, b) Nx.t =\n fun ?(interpolation = Bilinear) ~height:out_h ~width:out_w img ->\n  if out_h <= 0 || out_w <= 0 then\n    invalid_arg \"resize: height and width must be positive\";\n  let shape = Nx.shape img in\n  let rank = Array.length shape in\n  let h_ax, w_ax = hw_axes rank in\n  let in_h = shape.(h_ax) and in_w = shape.(w_ax) in\n  match interpolation with\n  | Nearest ->\n      let y_idx = compute_nearest_indices ~size_in:in_h ~size_out:out_h in\n      let x_idx = compute_nearest_indices ~size_in:in_w ~size_out:out_w in\n      img |> Nx.take ~axis:h_ax y_idx |> Nx.take ~axis:w_ax x_idx\n  | Bilinear ->\n      let img_f = Nx.astype Nx.float32 img in\n      let y0, y1, dy = compute_linear_axis ~size_in:in_h ~size_out:out_h in\n      let x0, x1, dx = compute_linear_axis ~size_in:in_w ~size_out:out_w in\n      let top = Nx.take ~axis:h_ax y0 img_f in\n      let bottom = Nx.take ~axis:h_ax y1 img_f in\n      let top_left = Nx.take ~axis:w_ax x0 top in\n      let top_right = Nx.take ~axis:w_ax x1 top in\n      let bottom_left = Nx.take ~axis:w_ax x0 bottom in\n      let bottom_right = Nx.take ~axis:w_ax x1 bottom in\n      (* Reshape dx and dy for broadcasting *)\n      let make_broadcastable ax size =\n        let s = Array.make rank 1 in\n        s.(ax) <- size;\n        s\n      in\n      let dx_b = Nx.reshape (make_broadcastable w_ax out_w) dx in\n      let dy_b = Nx.reshape (make_broadcastable h_ax out_h) dy in\n      let one_dx = Nx.sub 
(Nx.ones_like dx_b) dx_b in\n      let one_dy = Nx.sub (Nx.ones_like dy_b) dy_b in\n      let top_interp =\n        Nx.add (Nx.mul one_dx top_left) (Nx.mul dx_b top_right)\n      in\n      let bottom_interp =\n        Nx.add (Nx.mul one_dx bottom_left) (Nx.mul dx_b bottom_right)\n      in\n      let blended =\n        Nx.add (Nx.mul one_dy top_interp) (Nx.mul dy_b bottom_interp)\n      in\n      Nx.astype (Nx.dtype img) blended\n\nlet crop ~y ~x ~height ~width img =\n  let shape = Nx.shape img in\n  let rank = Array.length shape in\n  let h_ax, w_ax = hw_axes rank in\n  let in_h = shape.(h_ax) and in_w = shape.(w_ax) in\n  if\n    y < 0 || x < 0 || height <= 0 || width <= 0\n    || y + height > in_h\n    || x + width > in_w\n  then\n    invalid_arg\n      (Printf.sprintf \"crop: region y=%d x=%d h=%d w=%d exceeds image %dx%d\" y x\n         height width in_h in_w);\n  let slices =\n    List.init rank (fun ax ->\n        if ax = h_ax then Nx.R (y, y + height)\n        else if ax = w_ax then Nx.R (x, x + width)\n        else Nx.A)\n  in\n  Nx.slice slices img\n\nlet center_crop ~height ~width img =\n  let shape = Nx.shape img in\n  let rank = Array.length shape in\n  let h_ax, w_ax = hw_axes rank in\n  let in_h = shape.(h_ax) and in_w = shape.(w_ax) in\n  if height > in_h || width > in_w then\n    invalid_arg\n      (Printf.sprintf \"center_crop: target %dx%d exceeds image %dx%d\" height\n         width in_h in_w);\n  let y = (in_h - height) / 2 in\n  let x = (in_w - width) / 2 in\n  crop ~y ~x ~height ~width img\n\nlet hflip img =\n  let rank = Array.length (Nx.shape img) in\n  let _, w_ax = hw_axes rank in\n  Nx.flip ~axes:[ w_ax ] img\n\nlet vflip img =\n  let rank = Array.length (Nx.shape img) in\n  let h_ax, _ = hw_axes rank in\n  Nx.flip ~axes:[ h_ax ] img\n\nlet rotate90 ?(k = 1) img =\n  let k = ((k mod 4) + 4) mod 4 in\n  if k = 0 then img\n  else\n    let rank = Array.length (Nx.shape img) in\n    let h_ax, w_ax = hw_axes rank in\n    let rotate_once t =\n  
    (* CCW 90: transpose H and W, then flip rows (the H axis) *)\n      let axes = Array.init rank Fun.id in\n      axes.(h_ax) <- w_ax;\n      axes.(w_ax) <- h_ax;\n      let transposed = Nx.transpose ~axes:(Array.to_list axes) t in\n      Nx.flip ~axes:[ h_ax ] transposed\n    in\n    let result = ref img in\n    for _ = 1 to k do\n      result := rotate_once !result\n    done;\n    !result\n\nlet pad : type a b.\n    ?value:float -> int * int * int * int -> (a, b) Nx.t -> (a, b) Nx.t =\n fun ?(value = 0.0) (top, bottom, left, right) img ->\n  let rank = Array.length (Nx.shape img) in\n  let h_ax, w_ax = hw_axes rank in\n  let padding = Array.make rank (0, 0) in\n  padding.(h_ax) <- (top, bottom);\n  padding.(w_ax) <- (left, right);\n  let fill : a = Nx_core.Dtype.of_float (Nx.dtype img) value in\n  Nx.pad padding fill img\n"
  },
  {
    "path": "packages/sowilo/test/dune",
    "content": "(test\n (name test_sowilo)\n (package sowilo)\n (libraries sowilo nx windtrap))\n"
  },
  {
    "path": "packages/sowilo/test/test_sowilo.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Windtrap\nopen Sowilo\n\n(* Helpers *)\n\nlet create_gray_f h w value = Nx.full Nx.float32 [| h; w; 1 |] value\n\nlet create_checkerboard h w =\n  let data =\n    Array.init (h * w) (fun i ->\n        let row = i / w and col = i mod w in\n        if (row + col) mod 2 = 0 then 1.0 else 0.0)\n  in\n  Nx.create Nx.float32 [| h; w; 1 |] data\n\nlet create_centered_square h w square_size =\n  let data =\n    Array.init (h * w) (fun idx ->\n        let i = idx / w and j = idx mod w in\n        let start_h = (h - square_size) / 2 in\n        let start_w = (w - square_size) / 2 in\n        if\n          i >= start_h\n          && i < start_h + square_size\n          && j >= start_w\n          && j < start_w + square_size\n        then 1.0\n        else 0.0)\n  in\n  Nx.create Nx.float32 [| h; w; 1 |] data\n\nlet check_shape msg expected_shape tensor =\n  equal ~msg (array int) expected_shape (Nx.shape tensor)\n\nlet check_pixel_f msg expected tensor indices =\n  let actual = Nx.item indices tensor in\n  let diff = Float.abs (expected -. 
actual) in\n  if diff > 0.01 then failf \"%s: expected ~%.3f, got %.3f\" msg expected actual\n\nlet check_pixel_i msg expected tensor indices =\n  let actual = Nx.item indices tensor in\n  equal ~msg int expected actual\n\n(* ───── Geometric Transform Tests ───── *)\n\nlet test_flip_vertical () =\n  let img = create_checkerboard 4 4 in\n  let flipped = vflip img in\n  check_pixel_f \"top-left after flip\"\n    (Nx.item [ 3; 0; 0 ] img)\n    flipped [ 0; 0; 0 ];\n  check_pixel_f \"top-right after flip\"\n    (Nx.item [ 3; 3; 0 ] img)\n    flipped [ 0; 3; 0 ];\n  check_shape \"flip preserves shape\" (Nx.shape img) flipped\n\nlet test_flip_horizontal () =\n  let img = create_checkerboard 4 4 in\n  let flipped = hflip img in\n  check_pixel_f \"top-left after flip\"\n    (Nx.item [ 0; 3; 0 ] img)\n    flipped [ 0; 0; 0 ];\n  check_pixel_f \"bottom-left after flip\"\n    (Nx.item [ 3; 3; 0 ] img)\n    flipped [ 3; 0; 0 ];\n  check_shape \"flip preserves shape\" (Nx.shape img) flipped\n\nlet test_flip_batch () =\n  let data = [| 1.0; 2.0; 3.0; 4.0; 5.0; 6.0; 7.0; 8.0 |] in\n  let img = Nx.create Nx.float32 [| 2; 2; 2; 1 |] data in\n  let flipped_v = vflip img in\n  check_shape \"vertical batch shape\" [| 2; 2; 2; 1 |] flipped_v;\n  check_pixel_f \"batch 0 vertical flip\" 3.0 flipped_v [ 0; 0; 0; 0 ];\n  check_pixel_f \"batch 1 vertical flip\" 7.0 flipped_v [ 1; 0; 0; 0 ];\n  let flipped_h = hflip img in\n  check_pixel_f \"batch 0 horizontal flip\" 2.0 flipped_h [ 0; 0; 0; 0 ];\n  check_pixel_f \"batch 1 horizontal flip\" 6.0 flipped_h [ 1; 0; 0; 0 ]\n\nlet test_crop () =\n  let data = Array.init (10 * 10) (fun i -> Float.of_int i /. 
100.0) in\n  let img = Nx.create Nx.float32 [| 10; 10; 1 |] data in\n  let cropped = crop ~y:2 ~x:3 ~height:5 ~width:4 img in\n  check_shape \"crop shape\" [| 5; 4; 1 |] cropped;\n  check_pixel_f \"crop content\" (Nx.item [ 2; 3; 0 ] img) cropped [ 0; 0; 0 ];\n  raises ~msg:\"crop out of bounds\"\n    (Invalid_argument \"crop: region y=8 x=8 h=5 w=5 exceeds image 10x10\")\n    (fun () -> ignore (crop ~y:8 ~x:8 ~height:5 ~width:5 img))\n\nlet test_crop_batch () =\n  let data = Array.init (2 * 4 * 4) (fun i -> Float.of_int i) in\n  let img = Nx.create Nx.float32 [| 2; 4; 4; 1 |] data in\n  let cropped = crop ~y:1 ~x:1 ~height:2 ~width:2 img in\n  check_shape \"batch crop shape\" [| 2; 2; 2; 1 |] cropped;\n  check_pixel_f \"batch crop value\"\n    (Nx.item [ 0; 1; 1; 0 ] img)\n    cropped [ 0; 0; 0; 0 ];\n  check_pixel_f \"batch crop second batch\"\n    (Nx.item [ 1; 2; 2; 0 ] img)\n    cropped [ 1; 1; 1; 0 ]\n\nlet test_resize_nearest () =\n  let img = Nx.create Nx.float32 [| 2; 2; 1 |] [| 0.1; 0.2; 0.3; 0.4 |] in\n  let resized = resize ~interpolation:Nearest ~height:4 ~width:4 img in\n  check_shape \"resize nearest shape\" [| 4; 4; 1 |] resized;\n  check_pixel_f \"nearest top-left\" 0.1 resized [ 0; 0; 0 ];\n  check_pixel_f \"nearest top-right\" 0.2 resized [ 0; 3; 0 ];\n  check_pixel_f \"nearest bottom-left\" 0.3 resized [ 3; 0; 0 ];\n  check_pixel_f \"nearest bottom-right\" 0.4 resized [ 3; 3; 0 ]\n\nlet test_resize_bilinear () =\n  let img = Nx.create Nx.float32 [| 2; 2; 1 |] [| 0.0; 1.0; 0.0; 1.0 |] in\n  let resized = resize ~height:3 ~width:3 img in\n  check_shape \"resize bilinear shape\" [| 3; 3; 1 |] resized;\n  check_pixel_f \"bilinear left edge\" 0.0 resized [ 0; 0; 0 ];\n  check_pixel_f \"bilinear right edge\" 1.0 resized [ 0; 2; 0 ];\n  let center = Nx.item [ 1; 1; 0 ] resized in\n  if center < 0.4 || center > 0.6 then\n    failf \"Bilinear resize center expected ~0.5, got %.3f\" center\n\nlet test_resize_batch () =\n  let data = [| 0.1; 0.2; 0.3; 0.4; 
0.5; 0.6; 0.7; 0.8 |] in\n  let img = Nx.create Nx.float32 [| 2; 2; 2; 1 |] data in\n  let resized = resize ~interpolation:Nearest ~height:4 ~width:4 img in\n  check_shape \"resize batch shape\" [| 2; 4; 4; 1 |] resized;\n  check_pixel_f \"batch0 top-left\" 0.1 resized [ 0; 0; 0; 0 ];\n  check_pixel_f \"batch1 bottom-right\" 0.8 resized [ 1; 3; 3; 0 ]\n\nlet test_resize_color_bilinear () =\n  let img =\n    Nx.create Nx.float32 [| 1; 2; 2; 3 |]\n      [| 0.0; 0.0; 0.0; 1.0; 0.0; 0.0; 0.0; 1.0; 0.0; 1.0; 1.0; 0.0 |]\n  in\n  let resized = resize ~height:3 ~width:3 img in\n  check_shape \"resize color shape\" [| 1; 3; 3; 3 |] resized;\n  let center_r = Nx.item [ 0; 1; 1; 0 ] resized in\n  let center_g = Nx.item [ 0; 1; 1; 1 ] resized in\n  if center_r < 0.4 || center_r > 0.6 then\n    failf \"Color bilinear resize R expected ~0.5, got %.3f\" center_r;\n  if center_g < 0.4 || center_g > 0.6 then\n    failf \"Color bilinear resize G expected ~0.5, got %.3f\" center_g;\n  check_pixel_f \"corner preserves blue\" 0.0 resized [ 0; 0; 0; 2 ]\n\n(* ───── Color Conversion Tests ───── *)\n\nlet test_to_grayscale () =\n  let rgb =\n    Nx.create Nx.float32 [| 2; 2; 3 |]\n      [|\n        1.0;\n        0.0;\n        0.0;\n        (* Red *)\n        0.0;\n        1.0;\n        0.0;\n        (* Green *)\n        0.0;\n        0.0;\n        1.0;\n        (* Blue *)\n        1.0;\n        1.0;\n        1.0;\n        (* White *)\n      |]\n  in\n  let gray = to_grayscale rgb in\n  check_shape \"grayscale shape\" [| 2; 2; 1 |] gray;\n  check_pixel_f \"red to gray\" 0.299 gray [ 0; 0; 0 ];\n  check_pixel_f \"green to gray\" 0.587 gray [ 0; 1; 0 ];\n  check_pixel_f \"blue to gray\" 0.114 gray [ 1; 0; 0 ];\n  check_pixel_f \"white to gray\" 1.0 gray [ 1; 1; 0 ]\n\n(* ───── Data Type Conversion Tests ───── *)\n\nlet test_float_conversions () =\n  let uint8_img = Nx.full Nx.uint8 [| 2; 2; 1 |] 255 in\n  let float_img = to_float uint8_img in\n  let float_val = Nx.item [ 0; 0; 0 ] float_img 
in\n  equal ~msg:\"to_float normalization\" (float 0.001) 1.0 float_val;\n  let uint8_back = to_uint8 float_img in\n  check_shape \"round-trip shape\" [| 2; 2; 1 |] uint8_back;\n  check_pixel_i \"round-trip value\" 255 uint8_back [ 0; 0; 0 ];\n  let out_of_range =\n    Nx.create Nx.float32 [| 2; 2; 1 |] [| -0.5; 0.5; 1.5; 0.75 |]\n  in\n  let clipped = to_uint8 out_of_range in\n  check_pixel_i \"clipped negative\" 0 clipped [ 0; 0; 0 ];\n  check_pixel_i \"clipped middle\" 127 clipped [ 0; 1; 0 ];\n  check_pixel_i \"clipped overflow\" 255 clipped [ 1; 0; 0 ];\n  check_pixel_i \"clipped normal\" 191 clipped [ 1; 1; 0 ]\n\n(* ───── Filtering Tests ───── *)\n\nlet test_gaussian_blur () =\n  let img = create_centered_square 10 10 4 in\n  let blurred = gaussian_blur ~sigma:1.0 ~ksize:3 img in\n  check_shape \"blur preserves shape\" (Nx.shape img) blurred;\n  let edge_val = Nx.item [ 3; 3; 0 ] blurred in\n  if edge_val = 0.0 || edge_val = 1.0 then\n    failf \"Edge not smoothed: got %.3f\" edge_val\n\nlet test_box_blur () =\n  let data = [| 0.0; 0.0; 0.0; 0.0; 1.0; 0.0; 0.0; 0.0; 0.0 |] in\n  let img = Nx.create Nx.float32 [| 3; 3; 1 |] data in\n  let filtered = box_blur ~ksize:3 img in\n  let center = Nx.item [ 1; 1; 0 ] filtered in\n  let expected = 1.0 /. 9.0 in\n  if Float.abs (center -. 
expected) > 0.02 then\n    failf \"Box filter center: expected ~%.3f, got %.3f\" expected center\n\nlet test_median_blur () =\n  let img = create_gray_f 5 5 0.5 in\n  let filtered = median_blur ~ksize:3 img in\n  check_shape \"median blur shape\" (Nx.shape img) filtered\n\nlet test_median_blur_preserves_median () =\n  let data = [| 0.0; 0.0; 0.0; 0.0; 1.0; 0.0; 0.0; 0.0; 0.0 |] in\n  let img = Nx.create Nx.float32 [| 3; 3; 1 |] data in\n  let filtered = median_blur ~ksize:3 img in\n  check_pixel_f \"median removes impulse noise\" 0.0 filtered [ 1; 1; 0 ]\n\n(* ───── Thresholding Tests ───── *)\n\nlet test_threshold () =\n  let img =\n    Nx.create Nx.float32 [| 2; 3; 1 |] [| 0.2; 0.4; 0.6; 0.8; 0.99; 0.1 |]\n  in\n  let binary = threshold 0.5 img in\n  check_pixel_f \"below threshold\" 0.0 binary [ 0; 0; 0 ];\n  check_pixel_f \"above threshold\" 1.0 binary [ 1; 0; 0 ]\n\n(* ───── Morphological Operations Tests ───── *)\n\nlet test_structuring_elements () =\n  let rect = structuring_element Rect (3, 5) in\n  check_shape \"rect shape\" [| 3; 5 |] rect;\n  check_pixel_i \"rect filled\" 1 rect [ 1; 2 ];\n  let cross = structuring_element Cross (5, 5) in\n  check_shape \"cross shape\" [| 5; 5 |] cross;\n  check_pixel_i \"cross center\" 1 cross [ 2; 2 ];\n  check_pixel_i \"cross arm\" 1 cross [ 2; 0 ];\n  check_pixel_i \"cross corner\" 0 cross [ 0; 0 ]\n\nlet test_erosion () =\n  let img = create_centered_square 10 10 4 in\n  let kernel = structuring_element Rect (3, 3) in\n  let eroded = erode ~kernel img in\n  let white_count = ref 0 in\n  for i = 0 to 9 do\n    for j = 0 to 9 do\n      if Nx.item [ i; j; 0 ] eroded > 0.5 then incr white_count\n    done\n  done;\n  equal ~msg:\"erosion reduces white area\" int 4 !white_count;\n  let center = Nx.item [ 4; 4; 0 ] eroded in\n  if center < 0.5 then failf \"erosion center not preserved: %.3f\" center\n\nlet test_dilation () =\n  let img = create_centered_square 10 10 4 in\n  let kernel = structuring_element Rect (3, 3) in\n 
 let dilated = dilate ~kernel img in\n  let white_count = ref 0 in\n  for i = 0 to 9 do\n    for j = 0 to 9 do\n      if Nx.item [ i; j; 0 ] dilated > 0.5 then incr white_count\n    done\n  done;\n  equal ~msg:\"dilation expands white area\" int 36 !white_count\n\nlet test_dilation_kernel_shape () =\n  let data = Array.make (5 * 5) 0.0 in\n  data.((2 * 5) + 2) <- 1.0;\n  let img = Nx.create Nx.float32 [| 5; 5; 1 |] data in\n  let rect = structuring_element Rect (3, 3) in\n  let cross = structuring_element Cross (3, 3) in\n  let dilated_rect = dilate ~kernel:rect img in\n  let dilated_cross = dilate ~kernel:cross img in\n  let count_white tensor =\n    let shape = Nx.shape tensor in\n    let h = shape.(0) and w = shape.(1) in\n    let total = ref 0 in\n    for i = 0 to h - 1 do\n      for j = 0 to w - 1 do\n        if Nx.item [ i; j; 0 ] tensor > 0.5 then incr total\n      done\n    done;\n    !total\n  in\n  equal ~msg:\"rect kernel produces 3x3 block\" int 9 (count_white dilated_rect);\n  equal ~msg:\"cross kernel preserves cross shape\" int 5\n    (count_white dilated_cross)\n\n(* ───── Edge Detection Tests ───── *)\n\nlet test_sobel () =\n  (* Vertical edge: left half black, right half white *)\n  let img_data =\n    Array.init 25 (fun idx ->\n        let j = idx mod 5 in\n        if j >= 2 then 1.0 else 0.0)\n  in\n  let img = Nx.create Nx.float32 [| 5; 5; 1 |] img_data in\n  let gx, _gy = sobel img in\n  check_shape \"sobel shape\" (Nx.shape img) gx;\n  let edge_response = Float.abs (Nx.item [ 2; 2; 0 ] gx) in\n  if edge_response < 0.1 then\n    failf \"Sobel X edge response too weak: %.3f\" edge_response;\n  (* Horizontal edge: top half black, bottom half white *)\n  let img_h_data =\n    Array.init 25 (fun idx ->\n        let i = idx / 5 in\n        if i >= 2 then 1.0 else 0.0)\n  in\n  let img_h = Nx.create Nx.float32 [| 5; 5; 1 |] img_h_data in\n  let _gx, gy = sobel img_h in\n  let edge_response_y = Float.abs (Nx.item [ 2; 2; 0 ] gy) in\n  if 
edge_response_y < 0.1 then\n    failf \"Sobel Y edge response too weak: %.3f\" edge_response_y\n\nlet test_canny () =\n  let img = create_centered_square 20 20 10 in\n  let edges = canny ~low:0.2 ~high:0.6 img in\n  check_shape \"canny shape\" (Nx.shape img) edges;\n  let edge_count = ref 0 in\n  for i = 0 to 19 do\n    for j = 0 to 19 do\n      if Nx.item [ i; j; 0 ] edges > 0.5 then incr edge_count\n    done\n  done;\n  if !edge_count = 0 then fail \"Canny detected no edges\";\n  if !edge_count > 100 then\n    failf \"Canny detected too many edges: %d\" !edge_count\n\n(* ───── Integration Tests ───── *)\n\nlet test_pipeline () =\n  let img = create_centered_square 20 20 8 in\n  let blurred = gaussian_blur ~sigma:1.5 ~ksize:5 img in\n  let binary = threshold 0.5 blurred in\n  let kernel = structuring_element Rect (3, 3) in\n  let cleaned = erode ~kernel binary in\n  let final = dilate ~kernel cleaned in\n  check_shape \"pipeline preserves shape\" (Nx.shape img) final;\n  let white_count = ref 0 in\n  for i = 0 to 19 do\n    for j = 0 to 19 do\n      if Nx.item [ i; j; 0 ] final > 0.5 then incr white_count\n    done\n  done;\n  if !white_count = 0 then fail \"Pipeline eliminated all features\"\n\n(* ───── Test Suite ───── *)\n\nlet () =\n  run \"Sowilo\"\n    [\n      group \"transforms\"\n        [\n          test \"vflip\" test_flip_vertical;\n          test \"hflip\" test_flip_horizontal;\n          test \"crop\" test_crop;\n          test \"flip_batch\" test_flip_batch;\n          test \"crop_batch\" test_crop_batch;\n          test \"resize_nearest\" test_resize_nearest;\n          test \"resize_bilinear\" test_resize_bilinear;\n          test \"resize_batch\" test_resize_batch;\n          test \"resize_color_bilinear\" test_resize_color_bilinear;\n        ];\n      group \"color\" [ test \"to_grayscale\" test_to_grayscale ];\n      group \"type_conversion\"\n        [ test \"float_conversions\" test_float_conversions ];\n      group \"filtering\"\n        [\n 
         test \"gaussian_blur\" test_gaussian_blur;\n          test \"box_blur\" test_box_blur;\n          test \"median_blur\" test_median_blur;\n          test \"median_blur_median\" test_median_blur_preserves_median;\n        ];\n      group \"thresholding\" [ test \"threshold\" test_threshold ];\n      group \"morphology\"\n        [\n          test \"structuring_elements\" test_structuring_elements;\n          test \"erosion\" test_erosion;\n          test \"dilation\" test_dilation;\n          test \"dilation_kernel_shape\" test_dilation_kernel_shape;\n        ];\n      group \"edge_detection\"\n        [ test \"sobel\" test_sobel; slow \"canny\" test_canny ];\n      group \"integration\" [ test \"pipeline\" test_pipeline ];\n    ]\n"
  },
  {
    "path": "packages/talon/README.md",
    "content": "# Talon\n\nA dataframe library for OCaml with heterogeneous column types, inspired by pandas and polars.\n\n## Features\n\n- **Heterogeneous columns**: Mix numeric tensors, strings, and booleans in a single dataframe\n- **Null handling**: Built-in support for missing values across all column types\n- **Rich API**: Filtering, grouping, sorting, aggregations, and joins\n- **I/O support**: CSV and JSON serialization through sublibraries\n- **Nx integration**: Seamless interop with the Nx tensor library\n\n## Installation\n\n```bash\nopam install talon\n```\n\n## Quick Example\n\n```ocaml\nopen Talon\n\n(* Create a dataframe *)\nlet df = create [\n  (\"name\", Col.string_list [\"Alice\"; \"Bob\"; \"Charlie\"]);\n  (\"age\", Col.int32_list [25l; 30l; 35l]);\n  (\"score\", Col.float64_list [85.5; 92.0; 78.5])\n]\n\n(* Filter rows *)\nlet adults = filter_by df Row.(map (int32 \"age\") ~f:(fun age -> age > 25l))\n\n(* Aggregations *)\nlet avg_score = Agg.Float.mean df \"score\"\nlet total = Agg.Int.sum df \"age\"\n\n(* Group by computed key *)\nlet by_grade = group_by df Row.(\n  map (float64 \"score\") ~f:(fun s ->\n    if s >= 90.0 then \"A\"\n    else if s >= 80.0 then \"B\"\n    else \"C\"))\n```\n\n## Null Semantics\n\nNumeric columns store their data in Nx tensors plus an optional null mask. Use\nthe `Col.*_opt` constructors to build nullable columns; if a mask is absent,\nall payload values (including `nan` or `Int32.min_int`) are treated as genuine\ndata. Option-based accessors such as `Row.float64_opt` and helpers like\n`Agg.count` honor the mask when propagating missing values.\n\n## CSV and JSON Support\n\n```ocaml\n(* CSV I/O *)\nlet df = Talon_csv.read \"data.csv\"\nTalon_csv.write df \"output.csv\"\n\n(* JSON I/O *)\nlet json = Talon_json.to_string ~orient:`Records df\nlet df2 = Talon_json.from_string ~orient:`Columns json_str\n```\n\n## License\n\nISC\n"
  },
  {
    "path": "packages/talon/bench/README.md",
    "content": "# Talon Benchmarks\n\nThis directory hosts lightweight benchmarks for Talon alongside a reference\nimplementation in pandas. The goal is to exercise the most common dataframe\nworkloads so we can spot performance regressions quickly.\n\n## Fixtures\n\nSynthetic CSV fixtures live in `./data/` and are generated by\n`scripts/generate_fixtures.py`:\n\n- `customers.csv` — 1.5k customer records with segments, regions, and loyalty\n  metadata.\n- `transactions.csv` — 40k point-of-sale transactions linked to customers with\n  category, channel, discount, and amount columns.\n\nRegenerate fixtures after modifying the generator or schema:\n\n```bash\nuv run python packages/talon/bench/scripts/generate_fixtures.py\n```\n\n## Running the benchmarks\n\n### Talon (OCaml)\n\n```bash\ndune exec packages/talon/bench/bench_talon.exe\n```\n\n### Pandas (Python)\n\n```bash\nuv run --with pandas packages/talon/bench/bench_talon.py\n```\n\n## Results Talon (OCaml)\n\n```\n┌─────────────────────────────┬──────────┬─────────┬──────────┬─────────┬────────────┐\n│ Name                        │ Wall/Run │ CPU/Run │  mWd/Run │ Speedup │ vs Fastest │\n├─────────────────────────────┼──────────┼─────────┼──────────┼─────────┼────────────┤\n│ Talon/Filter/high_value     │   3.39ms │  3.39ms │ 468.01kw │   1.00x │       100% │\n│ Talon/Group/category_region │  19.03ms │ 19.02ms │   2.23Mw │   0.18x │       561% │\n│ Talon/Join/customer_lookup  │  26.10ms │ 26.07ms │   3.13Mw │   0.13x │       770% │\n│ Talon/Sort/amount_desc      │  32.37ms │ 32.34ms │   2.11Mw │   0.10x │       954% │\n└─────────────────────────────┴──────────┴─────────┴──────────┴─────────┴────────────┘\n```\n\n## Results Pandas (Python)\n\n```\n┌────────────────────────────────┬──────────┬──────────┬─────────┬─────────┬────────────┐\n│ Name                           │ Wall/Run │  CPU/Run │ mWd/Run │ Speedup │ vs Fastest │\n├────────────────────────────────┼──────────┼──────────┼─────────┼─────────┼────────────┤\n│ Filter/high_value 
(pandas)     │ 264.78µs │ 263.81µs │ 394.57w │   1.00x │       100% │\n│ Group/category_region (pandas) │ 870.91µs │ 868.62µs │  1.56kw │   0.30x │       329% │\n│ Join/customer_lookup (pandas)  │   1.67ms │   1.67ms │  1.99kw │   0.16x │       632% │\n│ Sort/amount_desc (pandas)      │   3.13ms │   3.12ms │  3.80kw │   0.08x │      1180% │\n└────────────────────────────────┴──────────┴──────────┴─────────┴─────────┴────────────┘\n```\n"
  },
  {
    "path": "packages/talon/bench/bench_talon.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Talon dataframe benchmarks using realistic CSV fixtures. *)\n\nmodule Row = Talon.Row\n\nmodule Fixtures = struct\n  let data_dir = Filename.concat (Sys.getcwd ()) \"packages/talon/bench/data\"\n\n  let load_csv name dtype_spec =\n    Talon_csv.read ~dtype_spec (Filename.concat data_dir name)\n\n  let transactions =\n    lazy\n      (load_csv \"transactions.csv\"\n         [\n           (\"transaction_id\", `Int32);\n           (\"customer_id\", `Int32);\n           (\"region\", `String);\n           (\"category\", `String);\n           (\"channel\", `String);\n           (\"amount\", `Float64);\n           (\"quantity\", `Int32);\n           (\"discount\", `Float64);\n           (\"promo\", `String);\n           (\"event_date\", `String);\n         ])\n\n  let customers =\n    lazy\n      (load_csv \"customers.csv\"\n         [\n           (\"customer_id\", `Int32);\n           (\"segment\", `String);\n           (\"region\", `String);\n           (\"status\", `String);\n           (\"loyalty_score\", `Float64);\n           (\"tenure_years\", `Int32);\n         ])\n\n  let transactions () = Lazy.force transactions\n  let customers () = Lazy.force customers\nend\n\nlet force_float_sum df column =\n  let total = Talon.Agg.sum df column in\n  total\n\nlet bench_filter df =\n  let filtered =\n    Talon.filter_by df\n      Row.(\n        map3 (float64 \"amount\") (int32 \"quantity\") (string \"region\")\n          ~f:(fun amount quantity region ->\n            amount > 120.\n            && Int32.compare quantity 3l >= 0\n            && String.equal region \"EMEA\"))\n  in\n  force_float_sum filtered \"amount\"\n\nlet bench_group df =\n  let groups =\n    Talon.group_by df\n      Row.(\n        map2 
(string \"category\") (string \"region\") ~f:(fun category region ->\n            category ^ \"|\" ^ region))\n  in\n  let total =\n    List.fold_left\n      (fun acc (_key, group_df) -> acc +. Talon.Agg.sum group_df \"amount\")\n      0. groups\n  in\n  total\n\nlet bench_join df customers =\n  let joined = Talon.join df customers ~on:\"customer_id\" ~how:`Left () in\n  force_float_sum joined \"amount\"\n\nlet bench_sort df =\n  let sorted = Talon.sort_values ~ascending:false df \"amount\" in\n  force_float_sum sorted \"amount\"\n\nlet all_benchmarks =\n  let transactions = Fixtures.transactions () in\n  let customers = Fixtures.customers () in\n  [\n    Thumper.bench \"Filter/high_value\" (fun () -> bench_filter transactions);\n    Thumper.bench \"Group/category_region\" (fun () -> bench_group transactions);\n    Thumper.bench \"Join/customer_lookup\" (fun () ->\n        bench_join transactions customers);\n    Thumper.bench \"Sort/amount_desc\" (fun () -> bench_sort transactions);\n  ]\n  |> fun benches -> [ Thumper.group \"Talon\" benches ]\n\nlet () = Thumper.run \"talon\" all_benchmarks\n"
  },
  {
    "path": "packages/talon/bench/bench_talon.py",
    "content": "from __future__ import annotations\n\nimport sys\nfrom pathlib import Path\nfrom typing import Any, List\n\nimport pandas as pd\n\n_SCRIPTS_DIR = Path(__file__).resolve().parent\nwhile not (_SCRIPTS_DIR / \"dune-project\").exists():\n    _SCRIPTS_DIR = _SCRIPTS_DIR.parent\n_SCRIPTS_DIR = _SCRIPTS_DIR / \"scripts\"\nif str(_SCRIPTS_DIR) not in sys.path:\n    sys.path.insert(0, str(_SCRIPTS_DIR))\n\nimport ubench  # type: ignore\n\n\nDATA_DIR = Path(__file__).resolve().parent / \"data\"\n\n\ndef _load_data() -> tuple[pd.DataFrame, pd.DataFrame]:\n    transactions = pd.read_csv(\n        DATA_DIR / \"transactions.csv\",\n        dtype={\n            \"transaction_id\": \"int32\",\n            \"customer_id\": \"int32\",\n            \"region\": \"category\",\n            \"category\": \"category\",\n            \"channel\": \"category\",\n            \"amount\": \"float64\",\n            \"quantity\": \"int32\",\n            \"discount\": \"float64\",\n            \"promo\": \"category\",\n            \"event_date\": \"string\",\n        },\n    )\n    customers = pd.read_csv(\n        DATA_DIR / \"customers.csv\",\n        dtype={\n            \"customer_id\": \"int32\",\n            \"segment\": \"category\",\n            \"region\": \"category\",\n            \"status\": \"category\",\n            \"loyalty_score\": \"float64\",\n            \"tenure_years\": \"int32\",\n        },\n    )\n    return transactions, customers\n\n\nTRANSACTIONS, CUSTOMERS = _load_data()\n\n\ndef build_benchmarks() -> List[Any]:\n    benches: List[Any] = []\n\n    def bench_filter() -> None:\n        filtered = TRANSACTIONS[\n            (TRANSACTIONS[\"amount\"] > 120.0)\n            & (TRANSACTIONS[\"quantity\"] >= 3)\n            & (TRANSACTIONS[\"region\"] == \"EMEA\")\n        ]\n        float(filtered[\"amount\"].sum())\n\n    benches.append(ubench.bench(\"Filter/high_value (pandas)\", bench_filter))\n\n    def bench_group() -> None:\n        grouped = (\n         
   TRANSACTIONS.groupby([\"category\", \"region\"])[\"amount\"].sum()\n        )\n        float(grouped.sum())\n\n    benches.append(ubench.bench(\"Group/category_region (pandas)\", bench_group))\n\n    def bench_join() -> None:\n        joined = TRANSACTIONS.merge(CUSTOMERS, on=\"customer_id\", how=\"left\")\n        float(joined[\"amount\"].sum())\n\n    benches.append(ubench.bench(\"Join/customer_lookup (pandas)\", bench_join))\n\n    def bench_sort() -> None:\n        sorted_df = TRANSACTIONS.sort_values(\"amount\", ascending=False)\n        float(sorted_df[\"amount\"].iloc[0])\n\n    benches.append(ubench.bench(\"Sort/amount_desc (pandas)\", bench_sort))\n\n    return benches\n\n\ndef default_config() -> ubench.Config:\n    return ubench.Config.default().build()\n\n\ndef main() -> None:\n    benchmarks = build_benchmarks()\n    config = default_config()\n    ubench.run(benchmarks, config=config, output_format=\"pretty\", verbose=False)\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "packages/talon/bench/data/customers.csv",
    "content": "customer_id,segment,region,status,loyalty_score,tenure_years\r\n1000,Growth,APAC,active,81.65,6\r\n1001,Consumer,LATAM,at_risk,94.70,11\r\n1002,Enterprise,EMEA,active,64.88,8\r\n1003,Growth,APAC,at_risk,90.79,12\r\n1004,SMB,LATAM,active,22.30,3\r\n1005,Enterprise,LATAM,active,68.71,12\r\n1006,SMB,AMER,active,44.23,12\r\n1007,SMB,APAC,inactive,49.81,8\r\n1008,Growth,AMER,active,94.13,8\r\n1009,Consumer,APAC,inactive,71.37,8\r\n1010,Consumer,EMEA,at_risk,93.84,12\r\n1011,Enterprise,APAC,active,40.30,3\r\n1012,SMB,LATAM,active,37.07,7\r\n1013,Enterprise,LATAM,active,78.82,2\r\n1014,Growth,EMEA,at_risk,55.66,3\r\n1015,Enterprise,AMER,at_risk,60.04,9\r\n1016,Enterprise,AMER,inactive,97.34,3\r\n1017,Consumer,AMER,active,40.23,2\r\n1018,Enterprise,APAC,at_risk,41.85,6\r\n1019,SMB,APAC,inactive,45.30,7\r\n1020,Enterprise,APAC,active,35.09,3\r\n1021,Growth,AMER,at_risk,90.17,12\r\n1022,Enterprise,APAC,at_risk,73.46,9\r\n1023,Growth,APAC,active,28.08,5\r\n1024,SMB,APAC,active,50.22,10\r\n1025,Enterprise,EMEA,active,37.13,7\r\n1026,Enterprise,APAC,active,76.38,8\r\n1027,SMB,APAC,active,61.67,3\r\n1028,Consumer,EMEA,active,43.11,5\r\n1029,Enterprise,EMEA,at_risk,51.88,1\r\n1030,SMB,EMEA,inactive,90.97,12\r\n1031,Growth,AMER,inactive,88.51,7\r\n1032,Enterprise,AMER,inactive,62.46,7\r\n1033,Growth,APAC,at_risk,74.21,5\r\n1034,Consumer,EMEA,active,51.72,9\r\n1035,Enterprise,EMEA,active,97.97,11\r\n1036,Enterprise,EMEA,at_risk,45.32,6\r\n1037,Consumer,EMEA,active,62.47,7\r\n1038,Enterprise,APAC,at_risk,71.39,2\r\n1039,SMB,AMER,active,55.11,6\r\n1040,SMB,LATAM,at_risk,58.68,4\r\n1041,SMB,APAC,active,74.21,8\r\n1042,SMB,LATAM,active,50.80,11\r\n1043,Growth,LATAM,active,56.80,3\r\n1044,Growth,EMEA,at_risk,79.79,7\r\n1045,SMB,LATAM,active,34.14,7\r\n1046,SMB,EMEA,active,43.41,11\r\n1047,Enterprise,APAC,active,63.73,12\r\n1048,Consumer,EMEA,inactive,62.10,8\r\n1049,Enterprise,AMER,active,96.79,3\r\n1050,Growth,AMER,active,78.36,7\r\n1051,SMB,EMEA,active,30.63,8\r\n1052,G
rowth,LATAM,active,80.64,10\r\n1053,Enterprise,AMER,active,58.51,4\r\n1054,SMB,EMEA,inactive,29.20,5\r\n1055,Enterprise,AMER,at_risk,42.77,12\r\n1056,Growth,LATAM,active,60.22,9\r\n1057,SMB,LATAM,active,39.31,4\r\n1058,Consumer,LATAM,active,40.97,2\r\n1059,Consumer,AMER,active,33.24,7\r\n1060,SMB,LATAM,at_risk,31.57,12\r\n1061,Consumer,APAC,at_risk,33.13,12\r\n1062,Enterprise,EMEA,active,54.50,3\r\n1063,Consumer,AMER,active,22.21,5\r\n1064,Enterprise,AMER,active,87.91,4\r\n1065,Consumer,AMER,active,82.85,9\r\n1066,Consumer,AMER,active,50.52,9\r\n1067,Enterprise,APAC,active,37.86,3\r\n1068,Growth,APAC,at_risk,97.51,8\r\n1069,Enterprise,APAC,active,45.64,3\r\n1070,Growth,EMEA,inactive,25.96,4\r\n1071,Enterprise,AMER,active,28.70,9\r\n1072,Consumer,LATAM,active,72.43,5\r\n1073,SMB,AMER,active,95.74,12\r\n1074,SMB,LATAM,active,67.26,6\r\n1075,Growth,AMER,at_risk,31.52,1\r\n1076,SMB,LATAM,at_risk,20.42,12\r\n1077,Enterprise,AMER,active,50.93,7\r\n1078,Consumer,APAC,active,55.89,10\r\n1079,SMB,LATAM,active,80.77,7\r\n1080,Enterprise,LATAM,active,41.49,5\r\n1081,Consumer,AMER,inactive,77.34,1\r\n1082,Consumer,EMEA,active,47.85,9\r\n1083,SMB,AMER,active,90.87,12\r\n1084,Enterprise,AMER,active,46.80,7\r\n1085,Consumer,EMEA,inactive,23.93,2\r\n1086,Consumer,AMER,active,83.39,3\r\n1087,SMB,AMER,at_risk,90.85,8\r\n1088,SMB,LATAM,active,58.32,10\r\n1089,SMB,LATAM,active,95.55,8\r\n1090,Enterprise,AMER,active,41.27,4\r\n1091,Consumer,EMEA,at_risk,31.08,8\r\n1092,Enterprise,AMER,active,56.25,1\r\n1093,Enterprise,APAC,active,88.73,6\r\n1094,Enterprise,LATAM,at_risk,51.96,2\r\n1095,Growth,APAC,at_risk,68.05,10\r\n1096,SMB,EMEA,active,47.51,9\r\n1097,Growth,EMEA,active,74.12,5\r\n1098,SMB,APAC,active,43.37,2\r\n1099,SMB,LATAM,active,27.66,4\r\n1100,Growth,AMER,active,53.56,5\r\n1101,SMB,AMER,active,82.89,4\r\n1102,Growth,APAC,active,76.90,4\r\n1103,Enterprise,EMEA,active,93.74,2\r\n1104,Enterprise,APAC,inactive,95.65,12\r\n1105,Consumer,AMER,inactive,87.96,9\r\n1106,Enterprise,AMER,a
ctive,59.83,10\r\n1107,Consumer,APAC,inactive,49.72,12\r\n1108,Enterprise,EMEA,active,82.49,1\r\n1109,SMB,APAC,at_risk,87.99,8\r\n1110,Growth,LATAM,at_risk,51.72,10\r\n1111,Consumer,APAC,active,71.14,4\r\n1112,Consumer,APAC,at_risk,77.32,12\r\n1113,SMB,EMEA,active,66.44,6\r\n1114,SMB,APAC,active,30.12,6\r\n1115,SMB,AMER,at_risk,22.56,3\r\n1116,SMB,LATAM,active,77.21,2\r\n1117,SMB,LATAM,at_risk,88.56,5\r\n1118,Consumer,AMER,active,51.56,4\r\n1119,Enterprise,LATAM,active,59.96,6\r\n1120,SMB,LATAM,inactive,45.79,10\r\n1121,Consumer,EMEA,active,79.43,5\r\n1122,SMB,AMER,at_risk,91.99,5\r\n1123,SMB,LATAM,active,94.22,11\r\n1124,Consumer,AMER,active,55.81,2\r\n1125,Growth,LATAM,active,24.43,10\r\n1126,Growth,LATAM,active,73.82,1\r\n1127,Growth,EMEA,active,26.78,12\r\n1128,Enterprise,LATAM,active,30.73,4\r\n1129,SMB,LATAM,inactive,80.71,8\r\n1130,Consumer,LATAM,at_risk,30.41,1\r\n1131,Enterprise,APAC,inactive,45.24,1\r\n1132,Enterprise,EMEA,inactive,58.45,7\r\n1133,Enterprise,EMEA,at_risk,91.32,11\r\n1134,Growth,APAC,inactive,27.03,4\r\n1135,Consumer,APAC,active,72.15,4\r\n1136,SMB,EMEA,at_risk,29.57,4\r\n1137,SMB,APAC,inactive,32.96,5\r\n1138,Enterprise,AMER,at_risk,94.63,6\r\n1139,Enterprise,EMEA,inactive,96.64,4\r\n1140,Consumer,LATAM,inactive,74.97,4\r\n1141,Enterprise,AMER,at_risk,26.51,8\r\n1142,Growth,EMEA,active,32.65,1\r\n1143,SMB,LATAM,at_risk,94.98,12\r\n1144,SMB,APAC,at_risk,91.29,8\r\n1145,Consumer,AMER,active,35.52,2\r\n1146,Enterprise,LATAM,active,21.53,10\r\n1147,SMB,EMEA,active,70.06,5\r\n1148,Enterprise,AMER,active,32.30,12\r\n1149,SMB,LATAM,active,30.48,12\r\n1150,Growth,LATAM,at_risk,83.62,12\r\n1151,Enterprise,APAC,active,29.89,3\r\n1152,SMB,LATAM,active,97.81,4\r\n1153,Enterprise,AMER,inactive,21.41,12\r\n1154,Growth,LATAM,inactive,58.28,7\r\n1155,SMB,EMEA,inactive,68.71,11\r\n1156,SMB,APAC,active,86.14,10\r\n1157,Enterprise,LATAM,active,93.14,3\r\n1158,SMB,LATAM,at_risk,31.95,9\r\n1159,Enterprise,LATAM,inactive,44.73,1\r\n1160,Consumer,LATAM,active,45
.29,10\r\n1161,Growth,AMER,active,77.35,11\r\n1162,Consumer,AMER,at_risk,83.18,4\r\n1163,Consumer,AMER,active,53.26,12\r\n1164,Enterprise,EMEA,inactive,95.59,1\r\n1165,SMB,AMER,active,82.49,12\r\n1166,Growth,AMER,at_risk,72.17,10\r\n1167,Consumer,EMEA,active,67.74,10\r\n1168,SMB,APAC,active,89.07,10\r\n1169,Consumer,LATAM,active,21.65,7\r\n1170,Enterprise,AMER,active,85.29,7\r\n1171,SMB,APAC,at_risk,74.77,12\r\n1172,Consumer,APAC,inactive,62.18,12\r\n1173,SMB,LATAM,active,27.70,2\r\n1174,Enterprise,APAC,at_risk,68.12,5\r\n1175,SMB,AMER,active,50.26,4\r\n1176,Enterprise,EMEA,active,58.26,2\r\n1177,Enterprise,LATAM,active,52.20,12\r\n1178,Enterprise,EMEA,active,81.42,9\r\n1179,Enterprise,APAC,at_risk,71.52,5\r\n1180,Enterprise,AMER,at_risk,58.04,1\r\n1181,SMB,LATAM,inactive,74.30,9\r\n1182,Enterprise,EMEA,active,90.78,8\r\n1183,Enterprise,AMER,active,54.26,9\r\n1184,Enterprise,AMER,at_risk,51.85,6\r\n1185,Growth,LATAM,inactive,62.87,9\r\n1186,Consumer,APAC,at_risk,94.34,11\r\n1187,Consumer,AMER,active,27.46,8\r\n1188,Growth,LATAM,inactive,24.81,2\r\n1189,Growth,AMER,at_risk,96.74,10\r\n1190,Growth,EMEA,inactive,21.87,2\r\n1191,Growth,EMEA,active,97.79,8\r\n1192,Growth,EMEA,active,84.04,12\r\n1193,Growth,APAC,inactive,65.28,12\r\n1194,Growth,APAC,at_risk,68.01,12\r\n1195,SMB,AMER,at_risk,70.43,3\r\n1196,Growth,APAC,active,77.65,7\r\n1197,Growth,LATAM,inactive,32.46,1\r\n1198,SMB,AMER,active,47.26,4\r\n1199,Growth,APAC,active,36.84,4\r\n1200,Enterprise,EMEA,active,40.36,11\r\n1201,SMB,LATAM,at_risk,24.28,1\r\n1202,Growth,APAC,at_risk,24.47,12\r\n1203,Enterprise,AMER,active,47.15,9\r\n1204,SMB,AMER,active,57.92,12\r\n1205,Enterprise,APAC,at_risk,53.45,6\r\n1206,Enterprise,EMEA,active,88.95,1\r\n1207,Enterprise,APAC,active,58.03,4\r\n1208,Growth,AMER,active,53.83,7\r\n1209,Consumer,AMER,active,96.88,1\r\n1210,SMB,LATAM,active,32.93,6\r\n1211,Growth,EMEA,active,28.44,10\r\n1212,Enterprise,LATAM,active,60.14,2\r\n1213,Enterprise,EMEA,active,48.24,8\r\n1214,SMB,EMEA,at_risk,
93.33,12\r\n1215,Enterprise,LATAM,at_risk,60.79,6\r\n1216,Consumer,APAC,at_risk,93.51,10\r\n1217,Consumer,EMEA,inactive,72.18,2\r\n1218,SMB,AMER,active,90.19,5\r\n1219,Growth,LATAM,at_risk,53.94,8\r\n1220,Growth,LATAM,active,57.67,6\r\n1221,Enterprise,LATAM,inactive,24.30,6\r\n1222,Consumer,AMER,inactive,29.40,9\r\n1223,Consumer,LATAM,active,67.04,8\r\n1224,Growth,APAC,inactive,88.90,6\r\n1225,Enterprise,APAC,inactive,55.16,7\r\n1226,SMB,AMER,inactive,38.68,11\r\n1227,Enterprise,AMER,inactive,82.67,12\r\n1228,Enterprise,APAC,at_risk,28.31,5\r\n1229,Growth,LATAM,active,40.58,5\r\n1230,Enterprise,EMEA,active,26.20,7\r\n1231,Enterprise,AMER,active,44.72,6\r\n1232,Enterprise,LATAM,active,88.20,5\r\n1233,Growth,AMER,active,87.84,2\r\n1234,SMB,AMER,active,62.59,1\r\n1235,Consumer,EMEA,at_risk,96.78,10\r\n1236,SMB,AMER,active,33.53,8\r\n1237,Enterprise,LATAM,at_risk,91.93,11\r\n1238,Enterprise,AMER,active,50.47,5\r\n1239,Enterprise,APAC,active,77.20,4\r\n1240,Growth,EMEA,inactive,67.49,5\r\n1241,SMB,APAC,active,81.93,7\r\n1242,Growth,LATAM,at_risk,82.67,9\r\n1243,Enterprise,AMER,active,85.77,2\r\n1244,Growth,LATAM,at_risk,87.45,2\r\n1245,SMB,APAC,active,33.32,5\r\n1246,Consumer,EMEA,active,81.47,8\r\n1247,Consumer,AMER,at_risk,86.53,6\r\n1248,Consumer,APAC,inactive,87.59,10\r\n1249,SMB,EMEA,active,59.72,7\r\n1250,SMB,APAC,active,67.43,12\r\n1251,Enterprise,EMEA,active,50.55,5\r\n1252,SMB,APAC,active,77.10,2\r\n1253,SMB,AMER,active,64.65,11\r\n1254,Growth,APAC,active,65.54,1\r\n1255,Consumer,AMER,active,71.68,12\r\n1256,Consumer,LATAM,active,63.46,5\r\n1257,SMB,APAC,inactive,32.60,8\r\n1258,Growth,EMEA,at_risk,79.83,6\r\n1259,SMB,EMEA,active,67.07,3\r\n1260,Consumer,LATAM,active,53.56,8\r\n1261,Enterprise,APAC,active,91.98,12\r\n1262,Consumer,APAC,at_risk,76.98,10\r\n1263,Enterprise,AMER,active,94.12,7\r\n1264,Consumer,APAC,active,36.69,11\r\n1265,Enterprise,APAC,at_risk,60.67,9\r\n1266,SMB,AMER,at_risk,34.80,5\r\n1267,Consumer,EMEA,active,36.58,11\r\n1268,SMB,EMEA,active,7
1.62,3\r\n1269,Consumer,LATAM,at_risk,60.28,10\r\n1270,Enterprise,LATAM,active,60.30,11\r\n1271,Enterprise,EMEA,at_risk,89.87,9\r\n1272,Consumer,AMER,inactive,42.80,6\r\n1273,SMB,AMER,at_risk,69.40,9\r\n1274,Consumer,LATAM,inactive,30.60,5\r\n1275,Consumer,EMEA,active,78.10,9\r\n1276,Growth,AMER,active,41.25,6\r\n1277,Enterprise,AMER,active,61.38,2\r\n1278,SMB,AMER,active,36.44,10\r\n1279,Consumer,EMEA,inactive,87.23,8\r\n1280,Enterprise,LATAM,active,91.07,7\r\n1281,SMB,AMER,at_risk,64.85,5\r\n1282,Consumer,LATAM,active,27.76,9\r\n1283,Enterprise,APAC,inactive,74.85,2\r\n1284,SMB,APAC,active,95.69,6\r\n1285,Growth,EMEA,active,47.89,9\r\n1286,Enterprise,EMEA,active,67.45,7\r\n1287,SMB,AMER,active,78.04,5\r\n1288,SMB,LATAM,active,41.26,2\r\n1289,Enterprise,LATAM,active,50.86,12\r\n1290,Enterprise,EMEA,at_risk,29.85,12\r\n1291,SMB,EMEA,active,22.10,9\r\n1292,Growth,LATAM,inactive,32.44,10\r\n1293,Growth,AMER,active,94.27,8\r\n1294,Enterprise,APAC,active,21.83,3\r\n1295,Enterprise,EMEA,active,46.71,8\r\n1296,Enterprise,LATAM,active,66.76,9\r\n1297,Consumer,AMER,active,76.86,8\r\n1298,Consumer,LATAM,active,76.34,10\r\n1299,Enterprise,LATAM,active,83.12,6\r\n1300,Consumer,EMEA,inactive,43.51,6\r\n1301,Growth,AMER,active,66.27,11\r\n1302,Consumer,LATAM,active,54.03,10\r\n1303,Enterprise,LATAM,at_risk,44.13,11\r\n1304,Consumer,LATAM,active,53.41,5\r\n1305,Growth,EMEA,inactive,43.56,7\r\n1306,SMB,LATAM,active,81.31,11\r\n1307,Growth,AMER,at_risk,75.00,8\r\n1308,SMB,EMEA,active,52.29,3\r\n1309,Growth,EMEA,at_risk,96.15,4\r\n1310,SMB,AMER,inactive,65.45,3\r\n1311,Consumer,APAC,active,45.27,10\r\n1312,Growth,EMEA,active,54.02,8\r\n1313,Enterprise,EMEA,active,24.69,5\r\n1314,SMB,AMER,at_risk,54.43,4\r\n1315,Growth,AMER,at_risk,28.85,11\r\n1316,Consumer,APAC,active,92.32,7\r\n1317,Enterprise,EMEA,active,70.24,11\r\n1318,Enterprise,LATAM,at_risk,77.72,3\r\n1319,SMB,EMEA,active,56.14,9\r\n1320,SMB,EMEA,active,51.38,12\r\n1321,Consumer,EMEA,at_risk,75.55,4\r\n1322,Growth,AMER,inacti
ve,21.96,5\r\n1323,Consumer,EMEA,active,70.93,4\r\n1324,Growth,LATAM,active,65.51,1\r\n1325,SMB,APAC,active,36.09,4\r\n1326,Enterprise,AMER,active,30.26,6\r\n1327,Growth,APAC,active,46.53,4\r\n1328,Consumer,APAC,active,38.98,11\r\n1329,Growth,APAC,active,39.24,1\r\n1330,Consumer,EMEA,active,82.06,5\r\n1331,Consumer,AMER,inactive,92.23,4\r\n1332,Consumer,APAC,active,51.14,1\r\n1333,SMB,EMEA,active,31.93,11\r\n1334,Growth,APAC,at_risk,92.99,10\r\n1335,SMB,APAC,active,25.40,11\r\n1336,Enterprise,APAC,active,72.54,6\r\n1337,SMB,APAC,active,61.51,8\r\n1338,Growth,EMEA,active,46.62,3\r\n1339,Growth,EMEA,active,53.12,5\r\n1340,SMB,LATAM,active,63.02,9\r\n1341,Growth,EMEA,active,26.65,5\r\n1342,Enterprise,LATAM,active,21.60,12\r\n1343,SMB,LATAM,active,25.60,2\r\n1344,SMB,EMEA,active,70.53,6\r\n1345,Consumer,AMER,at_risk,95.01,2\r\n1346,Growth,AMER,inactive,50.47,12\r\n1347,SMB,APAC,active,65.51,11\r\n1348,SMB,AMER,active,49.86,7\r\n1349,Enterprise,APAC,active,95.85,1\r\n1350,Consumer,LATAM,active,24.87,5\r\n1351,SMB,APAC,at_risk,94.16,4\r\n1352,Enterprise,AMER,at_risk,32.69,5\r\n1353,SMB,EMEA,active,67.60,4\r\n1354,Growth,AMER,active,90.64,9\r\n1355,Enterprise,EMEA,active,62.06,4\r\n1356,Consumer,LATAM,active,82.76,2\r\n1357,Enterprise,EMEA,active,29.68,2\r\n1358,Growth,APAC,inactive,49.28,12\r\n1359,Growth,LATAM,at_risk,64.29,4\r\n1360,SMB,APAC,active,42.44,3\r\n1361,Consumer,LATAM,at_risk,49.41,10\r\n1362,Consumer,AMER,active,89.24,5\r\n1363,Consumer,EMEA,active,51.88,6\r\n1364,Consumer,EMEA,inactive,85.66,11\r\n1365,SMB,LATAM,active,36.44,5\r\n1366,Growth,APAC,active,21.53,4\r\n1367,Enterprise,AMER,active,92.56,5\r\n1368,Consumer,EMEA,active,62.82,9\r\n1369,Consumer,AMER,at_risk,49.85,1\r\n1370,Enterprise,APAC,active,36.82,7\r\n1371,Consumer,AMER,active,36.47,9\r\n1372,Consumer,APAC,active,76.91,5\r\n1373,SMB,LATAM,active,97.71,8\r\n1374,Consumer,APAC,active,68.36,10\r\n1375,Enterprise,AMER,active,35.88,4\r\n1376,Consumer,EMEA,active,84.60,8\r\n1377,Enterprise,APAC,activ
e,95.36,5\r\n1378,Enterprise,APAC,active,53.56,5\r\n1379,SMB,EMEA,active,67.45,10\r\n1380,SMB,AMER,active,57.31,5\r\n1381,Consumer,LATAM,inactive,88.13,3\r\n1382,Consumer,LATAM,active,64.67,1\r\n1383,Growth,AMER,active,61.46,3\r\n1384,Enterprise,LATAM,active,82.03,9\r\n1385,Growth,LATAM,at_risk,97.13,11\r\n1386,SMB,AMER,active,75.36,4\r\n1387,Growth,AMER,at_risk,93.13,6\r\n1388,Growth,AMER,active,94.20,11\r\n1389,Consumer,LATAM,inactive,28.20,5\r\n1390,Enterprise,APAC,active,89.99,6\r\n1391,SMB,LATAM,at_risk,43.01,6\r\n1392,SMB,AMER,inactive,77.79,7\r\n1393,Enterprise,LATAM,active,95.87,12\r\n1394,Enterprise,LATAM,active,64.68,5\r\n1395,Enterprise,APAC,inactive,77.48,10\r\n1396,Growth,EMEA,inactive,91.59,1\r\n1397,Consumer,LATAM,active,73.02,4\r\n1398,Consumer,APAC,active,50.56,2\r\n1399,Growth,AMER,active,41.32,6\r\n1400,Enterprise,EMEA,active,78.46,6\r\n1401,Consumer,LATAM,active,36.72,12\r\n1402,SMB,EMEA,active,51.27,3\r\n1403,Enterprise,APAC,inactive,66.50,3\r\n1404,Consumer,EMEA,active,87.09,8\r\n1405,Enterprise,LATAM,active,88.39,2\r\n1406,SMB,LATAM,active,58.95,11\r\n1407,Consumer,LATAM,active,97.33,3\r\n1408,Growth,AMER,at_risk,82.99,11\r\n1409,Consumer,APAC,active,64.79,2\r\n1410,Enterprise,AMER,active,48.80,8\r\n1411,Growth,LATAM,active,34.82,5\r\n1412,Growth,AMER,at_risk,76.48,12\r\n1413,Enterprise,LATAM,active,39.76,6\r\n1414,Growth,APAC,active,43.45,12\r\n1415,SMB,AMER,at_risk,27.99,10\r\n1416,SMB,EMEA,active,55.27,10\r\n1417,Consumer,APAC,active,48.44,8\r\n1418,Consumer,LATAM,active,64.85,4\r\n1419,SMB,APAC,active,63.85,3\r\n1420,SMB,APAC,active,50.53,2\r\n1421,Enterprise,APAC,at_risk,82.85,2\r\n1422,Growth,LATAM,active,66.70,6\r\n1423,SMB,EMEA,active,91.57,12\r\n1424,Enterprise,APAC,at_risk,25.57,4\r\n1425,Consumer,EMEA,active,29.28,10\r\n1426,Enterprise,AMER,active,54.17,2\r\n1427,Consumer,EMEA,active,41.01,6\r\n1428,SMB,APAC,active,49.53,10\r\n1429,Consumer,APAC,active,71.37,4\r\n1430,Growth,EMEA,at_risk,78.95,3\r\n1431,Consumer,APAC,active,90.96,9\
r\n1432,Growth,APAC,active,44.93,11\r\n1433,Growth,EMEA,active,35.64,5\r\n1434,SMB,EMEA,at_risk,70.90,5\r\n1435,Consumer,AMER,at_risk,85.69,8\r\n1436,Growth,APAC,active,51.62,2\r\n1437,Growth,EMEA,inactive,95.44,7\r\n1438,Growth,APAC,active,48.49,7\r\n1439,Enterprise,LATAM,active,36.23,7\r\n1440,Growth,AMER,active,97.05,12\r\n1441,Enterprise,LATAM,at_risk,51.68,9\r\n1442,SMB,EMEA,active,57.17,5\r\n1443,Enterprise,EMEA,active,91.82,4\r\n1444,SMB,EMEA,active,40.22,6\r\n1445,Consumer,APAC,active,53.96,2\r\n1446,Growth,AMER,active,80.25,1\r\n1447,Consumer,LATAM,active,48.21,9\r\n1448,Growth,EMEA,active,87.67,7\r\n1449,SMB,EMEA,active,36.47,4\r\n1450,Enterprise,EMEA,active,69.58,4\r\n1451,SMB,EMEA,at_risk,79.83,2\r\n1452,Consumer,LATAM,active,27.03,6\r\n1453,SMB,APAC,active,87.82,12\r\n1454,Enterprise,APAC,active,83.83,12\r\n1455,Consumer,APAC,active,37.65,8\r\n1456,SMB,APAC,at_risk,26.07,3\r\n1457,Growth,EMEA,active,31.66,11\r\n1458,SMB,APAC,active,49.22,9\r\n1459,SMB,LATAM,inactive,83.69,4\r\n1460,Growth,LATAM,at_risk,79.27,8\r\n1461,Enterprise,LATAM,active,85.06,5\r\n1462,Consumer,LATAM,at_risk,50.51,7\r\n1463,SMB,EMEA,inactive,22.40,9\r\n1464,Growth,APAC,active,69.48,9\r\n1465,Enterprise,AMER,active,26.33,6\r\n1466,Consumer,AMER,active,76.72,6\r\n1467,Growth,LATAM,at_risk,48.95,7\r\n1468,Enterprise,AMER,active,73.74,1\r\n1469,Growth,EMEA,active,23.25,3\r\n1470,SMB,LATAM,active,81.59,12\r\n1471,Consumer,EMEA,active,59.32,10\r\n1472,Consumer,AMER,at_risk,94.78,5\r\n1473,Consumer,LATAM,active,76.49,6\r\n1474,Growth,LATAM,active,60.94,11\r\n1475,Enterprise,LATAM,active,62.74,7\r\n1476,Growth,APAC,active,59.05,4\r\n1477,Growth,APAC,active,57.21,11\r\n1478,SMB,EMEA,inactive,76.89,1\r\n1479,Enterprise,AMER,inactive,68.43,2\r\n1480,Growth,APAC,at_risk,28.52,5\r\n1481,Enterprise,LATAM,at_risk,40.78,12\r\n1482,Growth,AMER,active,67.21,12\r\n1483,Growth,EMEA,inactive,65.77,6\r\n1484,Consumer,AMER,active,31.16,5\r\n1485,Consumer,APAC,active,72.47,7\r\n1486,Growth,LATAM,active,28
.18,10\r\n1487,Enterprise,AMER,at_risk,91.98,2\r\n1488,Growth,AMER,inactive,81.81,3\r\n1489,Enterprise,AMER,active,81.92,2\r\n1490,Growth,AMER,inactive,92.46,5\r\n1491,Growth,EMEA,at_risk,86.36,12\r\n1492,Consumer,APAC,at_risk,59.68,1\r\n1493,Consumer,APAC,active,49.37,3\r\n1494,Enterprise,AMER,at_risk,79.82,2\r\n1495,Enterprise,LATAM,at_risk,86.35,10\r\n1496,Consumer,AMER,active,45.77,10\r\n1497,Consumer,EMEA,active,53.42,12\r\n1498,Enterprise,LATAM,inactive,84.39,6\r\n1499,Enterprise,EMEA,active,77.25,8\r\n1500,SMB,EMEA,active,68.42,7\r\n1501,SMB,AMER,active,87.09,6\r\n1502,Consumer,APAC,active,38.33,11\r\n1503,SMB,APAC,active,31.44,12\r\n1504,SMB,AMER,active,45.06,12\r\n1505,SMB,EMEA,active,39.51,8\r\n1506,Growth,EMEA,active,49.20,6\r\n1507,SMB,EMEA,active,75.82,1\r\n1508,SMB,LATAM,active,35.08,11\r\n1509,Consumer,AMER,active,32.05,6\r\n1510,Enterprise,EMEA,inactive,35.20,3\r\n1511,Consumer,EMEA,active,62.44,9\r\n1512,Consumer,APAC,at_risk,28.15,12\r\n1513,Enterprise,APAC,active,97.92,5\r\n1514,Enterprise,LATAM,active,27.82,5\r\n1515,Enterprise,EMEA,at_risk,75.41,8\r\n1516,Growth,EMEA,active,85.32,11\r\n1517,Enterprise,AMER,inactive,57.82,3\r\n1518,Growth,AMER,active,67.21,5\r\n1519,Enterprise,APAC,at_risk,68.33,4\r\n1520,SMB,APAC,active,33.64,4\r\n1521,Consumer,LATAM,at_risk,37.57,1\r\n1522,SMB,LATAM,inactive,53.73,4\r\n1523,SMB,LATAM,active,56.97,9\r\n1524,Consumer,LATAM,active,82.96,8\r\n1525,SMB,APAC,active,57.29,10\r\n1526,SMB,EMEA,active,67.10,10\r\n1527,Consumer,AMER,active,29.35,3\r\n1528,SMB,EMEA,inactive,89.09,5\r\n1529,Consumer,LATAM,active,34.05,9\r\n1530,Growth,APAC,active,32.75,6\r\n1531,SMB,EMEA,at_risk,71.60,12\r\n1532,SMB,APAC,active,79.15,8\r\n1533,Consumer,APAC,active,68.73,4\r\n1534,Enterprise,EMEA,active,29.41,3\r\n1535,SMB,AMER,active,56.81,4\r\n1536,SMB,LATAM,active,70.61,9\r\n1537,SMB,LATAM,at_risk,43.46,9\r\n1538,Consumer,AMER,at_risk,78.69,2\r\n1539,Consumer,LATAM,at_risk,74.45,9\r\n1540,Enterprise,LATAM,at_risk,86.77,10\r\n1541,Consumer
,APAC,inactive,21.15,11\r\n1542,SMB,APAC,active,57.49,7\r\n1543,Enterprise,AMER,active,23.68,4\r\n1544,Enterprise,LATAM,active,24.49,1\r\n1545,SMB,AMER,active,65.03,2\r\n1546,SMB,EMEA,active,96.14,6\r\n1547,Enterprise,AMER,active,92.68,7\r\n1548,Enterprise,EMEA,active,95.40,1\r\n1549,Consumer,APAC,active,89.00,10\r\n1550,Consumer,APAC,at_risk,78.75,10\r\n1551,SMB,APAC,at_risk,89.96,7\r\n1552,Consumer,EMEA,active,95.15,9\r\n1553,Enterprise,LATAM,at_risk,40.24,8\r\n1554,Enterprise,AMER,active,42.02,6\r\n1555,Consumer,AMER,at_risk,87.30,2\r\n1556,SMB,AMER,at_risk,30.64,7\r\n1557,Enterprise,LATAM,active,64.90,3\r\n1558,Growth,EMEA,active,74.14,9\r\n1559,Consumer,EMEA,active,50.11,12\r\n1560,Consumer,AMER,at_risk,82.26,2\r\n1561,SMB,EMEA,active,38.00,7\r\n1562,Growth,AMER,active,26.03,10\r\n1563,Growth,EMEA,active,60.49,8\r\n1564,SMB,APAC,active,28.31,11\r\n1565,Consumer,AMER,at_risk,76.25,10\r\n1566,SMB,EMEA,active,54.24,2\r\n1567,Enterprise,AMER,active,44.46,3\r\n1568,Consumer,AMER,at_risk,44.49,12\r\n1569,Consumer,APAC,active,57.52,1\r\n1570,SMB,AMER,active,29.41,11\r\n1571,SMB,EMEA,inactive,64.91,2\r\n1572,Consumer,LATAM,active,57.66,3\r\n1573,SMB,AMER,at_risk,31.79,3\r\n1574,Consumer,AMER,inactive,91.79,11\r\n1575,Growth,APAC,at_risk,89.06,10\r\n1576,Enterprise,EMEA,at_risk,53.21,2\r\n1577,Consumer,AMER,inactive,86.81,11\r\n1578,Enterprise,LATAM,active,77.22,2\r\n1579,SMB,AMER,active,33.08,11\r\n1580,Growth,AMER,at_risk,42.88,7\r\n1581,Consumer,APAC,active,64.91,1\r\n1582,Growth,AMER,active,51.74,7\r\n1583,SMB,AMER,inactive,47.36,11\r\n1584,SMB,EMEA,inactive,95.58,4\r\n1585,SMB,AMER,inactive,29.03,3\r\n1586,Enterprise,LATAM,active,91.35,8\r\n1587,Growth,LATAM,active,74.41,9\r\n1588,SMB,LATAM,active,38.51,6\r\n1589,Growth,AMER,active,95.62,4\r\n1590,Consumer,APAC,active,71.36,5\r\n1591,Consumer,APAC,active,89.39,1\r\n1592,SMB,LATAM,at_risk,49.47,12\r\n1593,Consumer,AMER,at_risk,63.52,11\r\n1594,Growth,LATAM,active,21.58,2\r\n1595,Enterprise,AMER,active,78.01,7\r\n159
6,SMB,EMEA,active,90.12,12\r\n1597,Enterprise,APAC,active,58.57,8\r\n1598,Consumer,EMEA,active,94.22,5\r\n1599,Growth,APAC,active,74.85,12\r\n1600,SMB,AMER,active,87.18,10\r\n1601,SMB,APAC,inactive,41.97,3\r\n1602,Growth,EMEA,inactive,42.01,9\r\n1603,Enterprise,EMEA,active,58.90,5\r\n1604,SMB,EMEA,active,43.34,4\r\n1605,SMB,APAC,inactive,68.71,1\r\n1606,Enterprise,AMER,active,47.35,8\r\n1607,Consumer,LATAM,active,55.43,10\r\n1608,Enterprise,AMER,active,88.81,11\r\n1609,SMB,LATAM,active,49.34,11\r\n1610,Growth,LATAM,inactive,59.75,11\r\n1611,Growth,EMEA,inactive,37.00,12\r\n1612,Consumer,LATAM,inactive,34.17,4\r\n1613,Growth,EMEA,active,23.39,3\r\n1614,Growth,EMEA,at_risk,86.22,8\r\n1615,Growth,LATAM,active,64.10,12\r\n1616,SMB,APAC,active,87.08,2\r\n1617,Growth,AMER,active,60.99,4\r\n1618,Consumer,EMEA,at_risk,82.61,9\r\n1619,Enterprise,APAC,active,80.75,2\r\n1620,SMB,LATAM,active,65.71,4\r\n1621,SMB,APAC,active,65.97,6\r\n1622,SMB,LATAM,active,20.21,8\r\n1623,Growth,AMER,inactive,83.37,6\r\n1624,SMB,AMER,at_risk,50.71,1\r\n1625,Growth,EMEA,at_risk,62.14,9\r\n1626,Consumer,EMEA,inactive,53.68,10\r\n1627,Growth,LATAM,inactive,73.30,5\r\n1628,Consumer,EMEA,active,22.60,3\r\n1629,SMB,LATAM,inactive,85.17,11\r\n1630,Growth,APAC,at_risk,71.40,10\r\n1631,Consumer,APAC,active,47.02,8\r\n1632,Consumer,LATAM,inactive,35.45,10\r\n1633,SMB,EMEA,active,90.34,4\r\n1634,Enterprise,AMER,active,59.82,8\r\n1635,Growth,APAC,active,55.84,8\r\n1636,SMB,APAC,at_risk,87.02,6\r\n1637,Enterprise,APAC,active,63.48,3\r\n1638,SMB,EMEA,at_risk,68.24,1\r\n1639,SMB,APAC,active,36.93,6\r\n1640,Consumer,APAC,at_risk,33.71,10\r\n1641,SMB,EMEA,at_risk,32.59,9\r\n1642,SMB,EMEA,active,46.12,12\r\n1643,Enterprise,EMEA,active,33.17,12\r\n1644,SMB,EMEA,active,28.95,2\r\n1645,Consumer,EMEA,active,63.67,7\r\n1646,Growth,APAC,at_risk,85.33,5\r\n1647,Consumer,LATAM,active,21.36,11\r\n1648,Consumer,APAC,inactive,73.62,8\r\n1649,Growth,APAC,active,25.58,11\r\n1650,Enterprise,LATAM,active,51.30,6\r\n1651,Enterp
rise,LATAM,inactive,92.31,9\r\n1652,SMB,APAC,at_risk,93.57,8\r\n1653,Growth,APAC,active,81.85,12\r\n1654,Enterprise,EMEA,active,45.85,2\r\n1655,Growth,LATAM,active,90.30,9\r\n1656,SMB,LATAM,active,87.81,12\r\n1657,Consumer,LATAM,at_risk,42.68,9\r\n1658,Growth,AMER,at_risk,86.03,2\r\n1659,Enterprise,APAC,active,84.49,4\r\n1660,Enterprise,AMER,active,96.43,4\r\n1661,Consumer,LATAM,active,45.66,4\r\n1662,Growth,LATAM,inactive,74.13,5\r\n1663,SMB,LATAM,active,97.77,2\r\n1664,SMB,LATAM,active,79.25,3\r\n1665,SMB,AMER,at_risk,48.04,11\r\n1666,Consumer,LATAM,active,46.40,7\r\n1667,SMB,AMER,active,40.79,11\r\n1668,SMB,AMER,active,68.56,5\r\n1669,Growth,AMER,active,75.51,10\r\n1670,Consumer,EMEA,active,64.46,12\r\n1671,SMB,APAC,active,47.62,7\r\n1672,SMB,APAC,active,33.81,2\r\n1673,SMB,AMER,active,41.45,5\r\n1674,Enterprise,LATAM,inactive,48.54,10\r\n1675,SMB,LATAM,active,73.62,11\r\n1676,Consumer,LATAM,active,53.77,7\r\n1677,Enterprise,EMEA,active,23.11,1\r\n1678,SMB,LATAM,active,81.29,11\r\n1679,Growth,APAC,active,55.55,9\r\n1680,Consumer,LATAM,inactive,97.52,6\r\n1681,SMB,LATAM,inactive,23.55,4\r\n1682,Enterprise,EMEA,active,88.84,11\r\n1683,SMB,AMER,active,70.61,7\r\n1684,SMB,EMEA,inactive,48.54,12\r\n1685,Consumer,AMER,inactive,28.47,9\r\n1686,Growth,LATAM,active,32.61,9\r\n1687,Consumer,APAC,active,92.01,2\r\n1688,Growth,LATAM,inactive,97.73,2\r\n1689,Growth,LATAM,inactive,83.72,5\r\n1690,Consumer,LATAM,inactive,57.03,3\r\n1691,Enterprise,LATAM,active,41.60,2\r\n1692,Growth,LATAM,active,76.13,2\r\n1693,SMB,EMEA,active,74.59,4\r\n1694,Enterprise,APAC,active,85.54,11\r\n1695,Growth,LATAM,active,26.21,12\r\n1696,SMB,LATAM,active,95.26,6\r\n1697,Consumer,APAC,active,20.91,1\r\n1698,SMB,EMEA,at_risk,38.69,9\r\n1699,Growth,APAC,active,74.06,12\r\n1700,SMB,EMEA,at_risk,76.55,1\r\n1701,Growth,LATAM,inactive,25.87,9\r\n1702,Consumer,AMER,active,77.13,5\r\n1703,Consumer,AMER,at_risk,52.65,7\r\n1704,Growth,AMER,active,97.78,1\r\n1705,Enterprise,AMER,active,73.36,10\r\n1706,Consum
er,EMEA,active,38.80,12\r\n1707,SMB,APAC,active,31.74,10\r\n1708,Enterprise,LATAM,at_risk,78.07,9\r\n1709,SMB,EMEA,inactive,69.91,12\r\n1710,SMB,APAC,inactive,83.89,5\r\n1711,Growth,APAC,active,21.11,1\r\n1712,Enterprise,LATAM,active,25.02,10\r\n1713,Consumer,EMEA,active,24.33,4\r\n1714,Enterprise,APAC,active,37.03,9\r\n1715,Enterprise,AMER,active,68.76,7\r\n1716,SMB,LATAM,active,87.20,4\r\n1717,Growth,AMER,inactive,55.24,7\r\n1718,Consumer,EMEA,at_risk,45.32,4\r\n1719,Enterprise,LATAM,at_risk,41.68,7\r\n1720,Consumer,AMER,at_risk,23.36,7\r\n1721,Enterprise,EMEA,active,73.08,2\r\n1722,Growth,EMEA,active,51.38,4\r\n1723,Growth,LATAM,active,60.27,6\r\n1724,Consumer,LATAM,active,92.01,4\r\n1725,Growth,APAC,active,78.71,9\r\n1726,Enterprise,EMEA,active,49.83,11\r\n1727,Growth,APAC,inactive,92.75,5\r\n1728,SMB,AMER,at_risk,43.20,6\r\n1729,Enterprise,AMER,active,34.51,5\r\n1730,SMB,AMER,active,67.90,2\r\n1731,Consumer,AMER,active,49.58,10\r\n1732,Enterprise,LATAM,active,63.96,2\r\n1733,Consumer,LATAM,inactive,20.83,1\r\n1734,Enterprise,AMER,active,22.97,8\r\n1735,Consumer,LATAM,active,71.97,6\r\n1736,Enterprise,AMER,inactive,50.95,11\r\n1737,SMB,AMER,active,87.06,6\r\n1738,SMB,LATAM,active,28.98,10\r\n1739,Enterprise,AMER,active,89.16,1\r\n1740,SMB,EMEA,at_risk,86.25,2\r\n1741,Growth,AMER,inactive,20.72,6\r\n1742,Enterprise,AMER,inactive,44.40,7\r\n1743,SMB,LATAM,active,54.28,7\r\n1744,Growth,EMEA,active,54.81,8\r\n1745,SMB,APAC,active,56.63,5\r\n1746,Consumer,LATAM,at_risk,69.15,8\r\n1747,Enterprise,EMEA,active,51.09,11\r\n1748,SMB,APAC,active,52.15,8\r\n1749,Consumer,LATAM,active,88.52,12\r\n1750,Enterprise,LATAM,active,90.76,9\r\n1751,Enterprise,AMER,active,39.98,5\r\n1752,Consumer,APAC,at_risk,59.31,10\r\n1753,Enterprise,APAC,inactive,35.77,4\r\n1754,Growth,EMEA,active,87.30,6\r\n1755,Consumer,APAC,active,39.65,6\r\n1756,Enterprise,EMEA,active,72.93,9\r\n1757,SMB,EMEA,active,64.61,11\r\n1758,Growth,AMER,active,91.05,2\r\n1759,Enterprise,EMEA,active,71.69,12\r\n1760,En
terprise,LATAM,active,79.02,2\r\n1761,Enterprise,EMEA,inactive,66.95,10\r\n1762,Enterprise,LATAM,active,55.45,9\r\n1763,Enterprise,LATAM,active,29.00,3\r\n1764,Growth,LATAM,active,76.96,2\r\n1765,SMB,EMEA,at_risk,81.57,2\r\n1766,Consumer,AMER,at_risk,49.53,1\r\n1767,Enterprise,AMER,at_risk,40.88,3\r\n1768,Enterprise,AMER,active,49.24,5\r\n1769,Growth,LATAM,active,82.73,9\r\n1770,SMB,AMER,at_risk,51.98,12\r\n1771,SMB,AMER,active,55.06,2\r\n1772,Growth,EMEA,inactive,59.14,12\r\n1773,Growth,LATAM,at_risk,23.72,2\r\n1774,Growth,EMEA,at_risk,53.31,5\r\n1775,Growth,EMEA,active,31.84,1\r\n1776,Growth,APAC,active,82.19,3\r\n1777,SMB,AMER,active,86.18,6\r\n1778,Consumer,LATAM,active,25.94,9\r\n1779,SMB,APAC,active,39.83,2\r\n1780,SMB,APAC,active,90.38,5\r\n1781,SMB,LATAM,active,26.10,8\r\n1782,Consumer,LATAM,at_risk,60.27,1\r\n1783,SMB,AMER,active,89.80,9\r\n1784,Enterprise,EMEA,active,43.34,12\r\n1785,Enterprise,EMEA,inactive,95.33,6\r\n1786,Growth,APAC,active,70.40,10\r\n1787,Enterprise,APAC,active,53.81,6\r\n1788,Growth,AMER,active,71.92,9\r\n1789,Enterprise,EMEA,inactive,87.16,8\r\n1790,Consumer,AMER,active,71.85,12\r\n1791,Consumer,LATAM,active,94.65,4\r\n1792,Enterprise,AMER,at_risk,69.02,8\r\n1793,Enterprise,LATAM,at_risk,64.60,7\r\n1794,SMB,AMER,at_risk,65.91,8\r\n1795,SMB,EMEA,active,41.55,1\r\n1796,Enterprise,LATAM,inactive,41.49,4\r\n1797,Consumer,LATAM,active,90.89,3\r\n1798,Consumer,AMER,active,39.23,8\r\n1799,Growth,EMEA,at_risk,29.89,7\r\n1800,SMB,APAC,active,93.97,10\r\n1801,Consumer,LATAM,active,84.01,11\r\n1802,SMB,AMER,active,50.66,12\r\n1803,Growth,LATAM,active,36.48,5\r\n1804,Enterprise,AMER,inactive,50.27,4\r\n1805,Enterprise,EMEA,active,29.58,7\r\n1806,Consumer,APAC,at_risk,88.79,7\r\n1807,Consumer,EMEA,active,40.95,7\r\n1808,Consumer,APAC,active,93.33,12\r\n1809,Consumer,APAC,inactive,88.95,2\r\n1810,Growth,LATAM,active,96.78,10\r\n1811,Enterprise,APAC,active,97.97,12\r\n1812,Enterprise,EMEA,inactive,83.53,11\r\n1813,Consumer,EMEA,active,45.81,9\r\n18
14,Enterprise,AMER,at_risk,77.04,1\r\n1815,Consumer,APAC,inactive,50.84,8\r\n1816,Consumer,EMEA,active,70.16,3\r\n1817,Growth,APAC,active,91.81,7\r\n1818,Growth,AMER,inactive,62.61,12\r\n1819,Enterprise,AMER,active,57.75,12\r\n1820,Growth,AMER,active,35.32,1\r\n1821,Consumer,LATAM,active,91.93,10\r\n1822,Consumer,EMEA,inactive,21.91,7\r\n1823,Enterprise,EMEA,at_risk,55.63,10\r\n1824,Growth,LATAM,active,67.51,7\r\n1825,Enterprise,AMER,at_risk,83.49,7\r\n1826,SMB,LATAM,active,28.45,7\r\n1827,Growth,EMEA,active,81.39,12\r\n1828,Enterprise,EMEA,at_risk,90.00,11\r\n1829,Consumer,EMEA,active,86.35,10\r\n1830,Consumer,EMEA,active,25.35,5\r\n1831,Enterprise,APAC,active,95.85,1\r\n1832,SMB,APAC,active,51.26,4\r\n1833,Enterprise,EMEA,active,94.82,10\r\n1834,Enterprise,AMER,inactive,47.10,8\r\n1835,SMB,AMER,active,64.64,4\r\n1836,Enterprise,LATAM,active,88.93,8\r\n1837,Consumer,LATAM,active,31.27,10\r\n1838,Growth,AMER,inactive,41.77,11\r\n1839,Enterprise,APAC,active,54.16,3\r\n1840,Enterprise,LATAM,active,67.83,3\r\n1841,Enterprise,AMER,active,26.59,6\r\n1842,Growth,LATAM,at_risk,95.67,9\r\n1843,SMB,EMEA,active,92.85,4\r\n1844,Enterprise,APAC,active,44.86,6\r\n1845,SMB,AMER,active,97.32,10\r\n1846,Enterprise,APAC,active,28.88,8\r\n1847,Consumer,LATAM,inactive,32.33,5\r\n1848,Consumer,EMEA,active,33.82,1\r\n1849,Growth,EMEA,at_risk,68.31,1\r\n1850,Growth,APAC,active,97.64,3\r\n1851,Enterprise,EMEA,active,24.50,11\r\n1852,Consumer,AMER,at_risk,58.31,12\r\n1853,Growth,APAC,active,87.10,10\r\n1854,Consumer,AMER,active,45.04,12\r\n1855,SMB,APAC,inactive,55.44,11\r\n1856,SMB,EMEA,active,75.36,6\r\n1857,Consumer,LATAM,active,92.58,2\r\n1858,SMB,LATAM,inactive,34.76,8\r\n1859,Growth,AMER,at_risk,95.79,2\r\n1860,Growth,EMEA,active,42.02,2\r\n1861,SMB,AMER,active,54.39,6\r\n1862,SMB,LATAM,active,38.25,1\r\n1863,Enterprise,EMEA,at_risk,30.27,6\r\n1864,SMB,EMEA,active,41.16,4\r\n1865,Enterprise,LATAM,at_risk,41.57,4\r\n1866,Enterprise,EMEA,active,90.66,7\r\n1867,Consumer,AMER,active,33.4
1,8\r\n1868,Enterprise,AMER,at_risk,52.60,6\r\n1869,Consumer,AMER,active,39.44,7\r\n1870,Consumer,EMEA,active,90.73,4\r\n1871,SMB,APAC,inactive,69.66,11\r\n1872,Growth,LATAM,active,91.42,9\r\n1873,Growth,EMEA,at_risk,64.98,12\r\n1874,Consumer,LATAM,active,20.82,4\r\n1875,Enterprise,EMEA,active,93.60,2\r\n1876,Enterprise,LATAM,active,89.27,9\r\n1877,SMB,LATAM,active,83.84,3\r\n1878,Consumer,EMEA,active,35.46,1\r\n1879,Consumer,EMEA,at_risk,57.03,1\r\n1880,Enterprise,LATAM,inactive,37.19,5\r\n1881,SMB,LATAM,active,80.93,1\r\n1882,SMB,AMER,active,87.97,10\r\n1883,Enterprise,LATAM,inactive,67.26,5\r\n1884,Consumer,APAC,active,36.66,7\r\n1885,Consumer,EMEA,active,39.08,9\r\n1886,Growth,LATAM,active,79.15,12\r\n1887,SMB,LATAM,active,41.44,2\r\n1888,Enterprise,LATAM,active,60.33,4\r\n1889,Consumer,APAC,active,20.19,2\r\n1890,SMB,LATAM,active,30.50,2\r\n1891,Enterprise,APAC,active,20.77,11\r\n1892,Consumer,LATAM,active,60.83,12\r\n1893,Consumer,APAC,active,69.60,5\r\n1894,Growth,APAC,active,60.94,11\r\n1895,Consumer,AMER,active,37.76,3\r\n1896,SMB,EMEA,active,39.94,8\r\n1897,Enterprise,AMER,at_risk,31.52,12\r\n1898,Enterprise,EMEA,active,64.77,10\r\n1899,SMB,APAC,active,47.35,3\r\n1900,Enterprise,APAC,active,69.23,8\r\n1901,SMB,AMER,active,63.33,10\r\n1902,SMB,AMER,active,38.92,10\r\n1903,Consumer,LATAM,at_risk,51.27,1\r\n1904,Consumer,APAC,at_risk,57.39,6\r\n1905,Consumer,APAC,active,93.54,8\r\n1906,SMB,APAC,active,62.29,1\r\n1907,SMB,EMEA,active,31.12,3\r\n1908,Enterprise,AMER,active,59.99,10\r\n1909,Consumer,APAC,active,37.96,9\r\n1910,Consumer,LATAM,active,39.04,7\r\n1911,SMB,LATAM,at_risk,59.91,8\r\n1912,Consumer,APAC,active,66.06,3\r\n1913,SMB,LATAM,active,27.58,9\r\n1914,Enterprise,EMEA,inactive,51.23,4\r\n1915,Enterprise,LATAM,at_risk,45.20,12\r\n1916,Growth,AMER,active,71.60,6\r\n1917,Enterprise,LATAM,active,49.17,4\r\n1918,Growth,EMEA,active,55.22,6\r\n1919,Growth,EMEA,active,29.39,2\r\n1920,Enterprise,LATAM,active,29.51,7\r\n1921,SMB,LATAM,at_risk,61.17,8\r\n1922
,Growth,EMEA,at_risk,47.89,4\r\n1923,Growth,LATAM,active,26.15,6\r\n1924,Enterprise,AMER,active,76.83,6\r\n1925,Enterprise,LATAM,at_risk,32.76,2\r\n1926,Enterprise,AMER,active,94.09,12\r\n1927,Enterprise,EMEA,active,50.43,11\r\n1928,SMB,AMER,at_risk,21.83,7\r\n1929,SMB,LATAM,active,82.23,7\r\n1930,Growth,AMER,active,84.33,2\r\n1931,SMB,APAC,at_risk,74.55,8\r\n1932,Enterprise,EMEA,at_risk,54.91,3\r\n1933,Enterprise,EMEA,active,36.99,1\r\n1934,SMB,EMEA,active,89.09,3\r\n1935,Consumer,EMEA,inactive,40.98,9\r\n1936,SMB,EMEA,active,96.60,6\r\n1937,Enterprise,APAC,active,61.34,3\r\n1938,Enterprise,APAC,at_risk,80.83,11\r\n1939,SMB,LATAM,inactive,33.42,4\r\n1940,Growth,APAC,at_risk,29.29,12\r\n1941,SMB,AMER,inactive,77.49,12\r\n1942,SMB,APAC,at_risk,31.99,3\r\n1943,Enterprise,AMER,active,22.07,7\r\n1944,Consumer,AMER,active,43.78,11\r\n1945,Enterprise,AMER,at_risk,69.68,4\r\n1946,Consumer,AMER,at_risk,85.30,12\r\n1947,Consumer,EMEA,active,64.96,2\r\n1948,Enterprise,EMEA,active,56.63,10\r\n1949,SMB,AMER,active,60.40,1\r\n1950,Growth,LATAM,inactive,74.27,10\r\n1951,Consumer,LATAM,inactive,37.51,12\r\n1952,Enterprise,EMEA,active,70.17,5\r\n1953,SMB,EMEA,active,62.77,2\r\n1954,Growth,APAC,active,93.07,10\r\n1955,SMB,AMER,active,82.44,9\r\n1956,Enterprise,APAC,at_risk,47.87,11\r\n1957,Growth,AMER,active,87.74,6\r\n1958,Consumer,APAC,active,55.76,3\r\n1959,Enterprise,EMEA,active,31.70,8\r\n1960,Enterprise,EMEA,active,48.95,9\r\n1961,Enterprise,EMEA,active,55.82,2\r\n1962,SMB,APAC,active,67.77,6\r\n1963,SMB,AMER,active,59.49,1\r\n1964,SMB,EMEA,active,43.17,10\r\n1965,SMB,LATAM,active,77.20,10\r\n1966,SMB,APAC,active,22.03,4\r\n1967,Enterprise,EMEA,active,69.19,8\r\n1968,Enterprise,EMEA,at_risk,86.00,5\r\n1969,Enterprise,LATAM,at_risk,92.70,4\r\n1970,SMB,LATAM,active,72.72,12\r\n1971,SMB,EMEA,inactive,28.99,11\r\n1972,Enterprise,LATAM,active,81.53,6\r\n1973,SMB,EMEA,inactive,40.77,3\r\n1974,SMB,EMEA,active,88.93,12\r\n1975,Consumer,EMEA,inactive,54.36,5\r\n1976,Enterprise,AMER,at_
risk,39.85,2\r\n1977,Consumer,APAC,active,56.53,11\r\n1978,Enterprise,AMER,inactive,51.04,6\r\n1979,Growth,APAC,at_risk,69.51,10\r\n1980,Consumer,LATAM,at_risk,47.63,6\r\n1981,Growth,EMEA,active,28.47,11\r\n1982,Growth,EMEA,active,48.70,11\r\n1983,Growth,LATAM,inactive,54.52,4\r\n1984,Consumer,LATAM,active,58.72,2\r\n1985,Enterprise,AMER,at_risk,85.12,3\r\n1986,SMB,LATAM,at_risk,61.97,8\r\n1987,Enterprise,AMER,inactive,27.92,11\r\n1988,SMB,AMER,active,97.20,4\r\n1989,Growth,LATAM,active,66.52,3\r\n1990,Growth,EMEA,at_risk,48.03,4\r\n1991,SMB,APAC,inactive,39.53,3\r\n1992,Consumer,LATAM,active,58.89,7\r\n1993,SMB,APAC,active,69.08,12\r\n1994,Growth,LATAM,inactive,45.80,6\r\n1995,SMB,LATAM,active,55.52,8\r\n1996,Growth,APAC,inactive,37.88,8\r\n1997,SMB,APAC,active,61.06,5\r\n1998,Consumer,APAC,active,96.61,9\r\n1999,Enterprise,EMEA,active,58.95,6\r\n2000,Enterprise,APAC,active,68.25,10\r\n2001,Consumer,EMEA,active,97.21,2\r\n2002,Enterprise,APAC,at_risk,67.86,12\r\n2003,Growth,LATAM,active,29.66,9\r\n2004,Growth,LATAM,active,95.30,1\r\n2005,SMB,APAC,active,91.24,5\r\n2006,SMB,APAC,active,64.63,8\r\n2007,Consumer,LATAM,active,87.09,3\r\n2008,SMB,APAC,active,49.31,4\r\n2009,Consumer,LATAM,active,50.48,10\r\n2010,Consumer,APAC,active,45.52,1\r\n2011,Growth,AMER,active,66.37,4\r\n2012,Enterprise,APAC,at_risk,89.54,8\r\n2013,Consumer,APAC,active,39.53,7\r\n2014,Enterprise,EMEA,active,53.05,8\r\n2015,SMB,APAC,at_risk,63.74,9\r\n2016,SMB,LATAM,active,44.32,12\r\n2017,Consumer,EMEA,active,83.37,2\r\n2018,Enterprise,AMER,active,81.27,2\r\n2019,Growth,AMER,at_risk,41.67,5\r\n2020,Enterprise,AMER,active,97.38,12\r\n2021,SMB,EMEA,at_risk,76.29,2\r\n2022,Enterprise,LATAM,active,92.03,8\r\n2023,Enterprise,LATAM,active,54.99,7\r\n2024,Growth,AMER,active,74.28,3\r\n2025,Enterprise,EMEA,active,68.58,12\r\n2026,SMB,LATAM,active,64.07,12\r\n2027,Enterprise,EMEA,active,38.32,8\r\n2028,Consumer,APAC,active,69.53,5\r\n2029,Enterprise,APAC,active,24.19,6\r\n2030,SMB,EMEA,inactive,71.11,7\r\
n2031,Enterprise,AMER,active,86.66,5\r\n2032,Enterprise,AMER,active,90.56,4\r\n2033,Growth,LATAM,active,58.23,3\r\n2034,Enterprise,LATAM,active,87.85,2\r\n2035,Enterprise,LATAM,active,36.67,4\r\n2036,SMB,APAC,active,53.12,4\r\n2037,Growth,LATAM,at_risk,23.76,1\r\n2038,Enterprise,LATAM,inactive,26.45,5\r\n2039,Enterprise,EMEA,active,44.80,1\r\n2040,SMB,LATAM,at_risk,43.69,7\r\n2041,Growth,LATAM,at_risk,59.77,2\r\n2042,Consumer,LATAM,inactive,52.76,12\r\n2043,SMB,EMEA,at_risk,97.85,9\r\n2044,Growth,APAC,active,93.96,12\r\n2045,Enterprise,LATAM,at_risk,56.09,5\r\n2046,SMB,APAC,at_risk,48.48,10\r\n2047,Growth,AMER,active,90.73,12\r\n2048,Enterprise,LATAM,active,37.18,3\r\n2049,SMB,LATAM,inactive,56.62,11\r\n2050,Consumer,APAC,active,66.59,1\r\n2051,Consumer,APAC,active,28.14,1\r\n2052,Enterprise,LATAM,active,96.80,10\r\n2053,SMB,AMER,inactive,27.32,8\r\n2054,SMB,AMER,active,64.48,5\r\n2055,Consumer,AMER,active,60.90,1\r\n2056,SMB,LATAM,at_risk,55.70,9\r\n2057,Enterprise,APAC,at_risk,24.43,1\r\n2058,SMB,LATAM,at_risk,92.73,6\r\n2059,SMB,AMER,at_risk,77.03,1\r\n2060,Consumer,LATAM,active,67.77,12\r\n2061,Growth,EMEA,inactive,95.38,5\r\n2062,Consumer,EMEA,at_risk,83.92,11\r\n2063,Enterprise,APAC,active,61.20,7\r\n2064,Growth,LATAM,at_risk,73.50,8\r\n2065,SMB,EMEA,active,59.73,3\r\n2066,Growth,APAC,active,77.42,9\r\n2067,Consumer,LATAM,active,38.25,6\r\n2068,Consumer,LATAM,active,87.67,10\r\n2069,Consumer,AMER,active,72.96,7\r\n2070,SMB,APAC,active,20.17,6\r\n2071,Enterprise,APAC,inactive,81.88,8\r\n2072,Growth,AMER,at_risk,95.57,6\r\n2073,Consumer,AMER,at_risk,25.33,8\r\n2074,Enterprise,AMER,active,25.88,7\r\n2075,SMB,LATAM,active,94.23,10\r\n2076,Enterprise,AMER,active,89.73,4\r\n2077,Enterprise,APAC,active,60.68,3\r\n2078,SMB,APAC,active,84.53,6\r\n2079,SMB,EMEA,at_risk,30.00,5\r\n2080,Enterprise,LATAM,inactive,30.79,8\r\n2081,SMB,APAC,at_risk,66.98,1\r\n2082,Enterprise,APAC,active,50.96,6\r\n2083,Growth,LATAM,active,67.27,3\r\n2084,SMB,LATAM,active,64.04,4\r\n2085,Consu
mer,AMER,active,37.94,11\r\n2086,Consumer,APAC,active,37.75,4\r\n2087,Enterprise,LATAM,active,97.98,4\r\n2088,Growth,EMEA,at_risk,69.33,6\r\n2089,Enterprise,EMEA,inactive,91.09,4\r\n2090,Growth,AMER,active,21.74,2\r\n2091,Consumer,LATAM,active,49.15,4\r\n2092,Growth,AMER,active,84.21,6\r\n2093,Enterprise,LATAM,active,79.88,7\r\n2094,Growth,AMER,active,46.84,3\r\n2095,SMB,EMEA,inactive,23.17,10\r\n2096,Consumer,LATAM,active,47.56,5\r\n2097,Enterprise,AMER,inactive,43.74,1\r\n2098,SMB,AMER,at_risk,32.27,6\r\n2099,Growth,AMER,at_risk,96.82,5\r\n2100,Consumer,APAC,active,57.87,9\r\n2101,SMB,APAC,at_risk,80.52,10\r\n2102,Enterprise,APAC,active,53.67,6\r\n2103,SMB,LATAM,inactive,52.80,3\r\n2104,Consumer,EMEA,active,77.59,6\r\n2105,SMB,APAC,active,85.46,6\r\n2106,SMB,LATAM,at_risk,93.49,7\r\n2107,Consumer,APAC,active,28.46,11\r\n2108,Growth,AMER,at_risk,54.95,12\r\n2109,Enterprise,EMEA,inactive,29.46,2\r\n2110,SMB,LATAM,active,94.59,5\r\n2111,Growth,EMEA,inactive,94.08,3\r\n2112,Consumer,LATAM,at_risk,76.08,7\r\n2113,SMB,LATAM,at_risk,41.61,12\r\n2114,Consumer,AMER,inactive,32.26,9\r\n2115,Enterprise,APAC,active,36.08,2\r\n2116,Consumer,LATAM,active,57.09,9\r\n2117,Consumer,EMEA,active,65.14,11\r\n2118,Enterprise,AMER,inactive,34.76,5\r\n2119,Enterprise,AMER,at_risk,64.95,9\r\n2120,SMB,AMER,active,93.57,2\r\n2121,SMB,APAC,inactive,87.14,8\r\n2122,Consumer,AMER,inactive,24.86,9\r\n2123,Consumer,AMER,active,45.68,9\r\n2124,Enterprise,AMER,inactive,91.39,3\r\n2125,SMB,LATAM,active,76.03,10\r\n2126,Growth,APAC,active,93.04,4\r\n2127,SMB,LATAM,active,89.96,3\r\n2128,Consumer,EMEA,active,48.60,4\r\n2129,Growth,APAC,active,75.81,3\r\n2130,SMB,EMEA,at_risk,35.79,9\r\n2131,SMB,APAC,active,76.44,4\r\n2132,Enterprise,LATAM,inactive,90.43,10\r\n2133,Consumer,AMER,inactive,70.58,6\r\n2134,Enterprise,AMER,active,61.16,9\r\n2135,Enterprise,EMEA,active,97.47,12\r\n2136,Growth,AMER,inactive,30.70,8\r\n2137,Consumer,LATAM,active,97.80,8\r\n2138,SMB,APAC,inactive,84.69,6\r\n2139,Growth,AMER,
at_risk,64.08,6\r\n2140,SMB,AMER,active,71.15,4\r\n2141,Enterprise,AMER,active,49.17,7\r\n2142,Growth,LATAM,inactive,40.91,2\r\n2143,Consumer,AMER,active,44.25,8\r\n2144,SMB,EMEA,at_risk,68.83,5\r\n2145,SMB,AMER,at_risk,61.78,1\r\n2146,Growth,APAC,active,26.87,12\r\n2147,Growth,LATAM,active,31.26,11\r\n2148,Enterprise,EMEA,active,46.32,11\r\n2149,Consumer,EMEA,active,28.03,2\r\n2150,Growth,APAC,at_risk,68.24,11\r\n2151,Consumer,APAC,active,80.16,3\r\n2152,Growth,EMEA,active,66.03,5\r\n2153,SMB,APAC,at_risk,69.00,12\r\n2154,Consumer,APAC,active,88.60,5\r\n2155,Growth,APAC,active,31.23,8\r\n2156,SMB,AMER,active,67.85,11\r\n2157,Consumer,AMER,active,26.81,4\r\n2158,Consumer,APAC,active,35.05,10\r\n2159,Consumer,AMER,active,43.48,2\r\n2160,SMB,LATAM,inactive,27.72,1\r\n2161,Growth,LATAM,active,21.76,9\r\n2162,SMB,EMEA,at_risk,39.55,7\r\n2163,Consumer,EMEA,at_risk,29.90,3\r\n2164,SMB,AMER,at_risk,87.82,8\r\n2165,SMB,AMER,active,61.89,8\r\n2166,SMB,AMER,active,66.20,6\r\n2167,Enterprise,APAC,active,51.30,8\r\n2168,SMB,EMEA,active,47.09,3\r\n2169,SMB,EMEA,active,21.08,7\r\n2170,SMB,EMEA,active,49.05,6\r\n2171,SMB,EMEA,at_risk,57.50,5\r\n2172,Growth,EMEA,active,59.76,2\r\n2173,Consumer,LATAM,active,80.30,12\r\n2174,Enterprise,LATAM,active,62.98,3\r\n2175,Enterprise,AMER,active,68.45,11\r\n2176,Enterprise,AMER,active,95.21,7\r\n2177,Consumer,AMER,active,53.83,7\r\n2178,Growth,AMER,at_risk,95.34,5\r\n2179,SMB,LATAM,at_risk,44.70,9\r\n2180,Growth,AMER,inactive,45.48,12\r\n2181,Growth,AMER,active,62.86,8\r\n2182,SMB,AMER,inactive,77.32,4\r\n2183,Growth,EMEA,inactive,68.24,10\r\n2184,Growth,APAC,active,84.62,7\r\n2185,Growth,EMEA,active,77.38,4\r\n2186,Consumer,LATAM,inactive,85.07,4\r\n2187,Consumer,EMEA,active,50.05,9\r\n2188,SMB,EMEA,inactive,68.55,4\r\n2189,Enterprise,LATAM,active,89.91,12\r\n2190,Growth,LATAM,inactive,52.23,9\r\n2191,Growth,AMER,active,40.97,11\r\n2192,Enterprise,APAC,active,40.25,10\r\n2193,Enterprise,AMER,active,92.35,5\r\n2194,Enterprise,APAC,at_risk,69.
14,1\r\n2195,SMB,APAC,active,27.18,6\r\n2196,Consumer,AMER,active,63.55,12\r\n2197,Enterprise,LATAM,at_risk,36.55,7\r\n2198,SMB,EMEA,active,81.21,5\r\n2199,SMB,LATAM,active,35.62,8\r\n2200,Enterprise,LATAM,active,69.53,12\r\n2201,Enterprise,AMER,active,36.20,8\r\n2202,SMB,APAC,active,60.89,5\r\n2203,Growth,APAC,inactive,87.07,11\r\n2204,Enterprise,AMER,inactive,39.02,9\r\n2205,SMB,AMER,active,83.74,7\r\n2206,Growth,AMER,active,51.04,9\r\n2207,SMB,APAC,active,93.54,9\r\n2208,Enterprise,AMER,active,51.16,2\r\n2209,SMB,AMER,inactive,67.96,2\r\n2210,Growth,APAC,active,43.36,2\r\n2211,Consumer,APAC,active,32.39,7\r\n2212,SMB,EMEA,active,53.67,9\r\n2213,Enterprise,APAC,at_risk,73.99,12\r\n2214,Enterprise,AMER,inactive,51.92,11\r\n2215,Enterprise,LATAM,active,54.08,5\r\n2216,Growth,AMER,active,38.08,10\r\n2217,Enterprise,LATAM,active,69.86,2\r\n2218,SMB,EMEA,active,64.94,9\r\n2219,Enterprise,LATAM,at_risk,84.88,9\r\n2220,SMB,LATAM,inactive,32.73,6\r\n2221,SMB,LATAM,at_risk,69.65,12\r\n2222,Enterprise,LATAM,active,83.13,1\r\n2223,Consumer,EMEA,at_risk,47.60,12\r\n2224,Growth,EMEA,active,51.98,10\r\n2225,SMB,EMEA,active,97.37,3\r\n2226,Consumer,EMEA,active,45.88,8\r\n2227,Enterprise,LATAM,at_risk,57.66,9\r\n2228,Growth,EMEA,inactive,30.81,2\r\n2229,SMB,APAC,active,96.49,9\r\n2230,Consumer,LATAM,at_risk,41.63,12\r\n2231,SMB,LATAM,at_risk,91.22,9\r\n2232,SMB,EMEA,active,66.11,1\r\n2233,SMB,EMEA,active,49.83,6\r\n2234,Consumer,LATAM,active,44.45,6\r\n2235,Growth,AMER,at_risk,31.00,11\r\n2236,SMB,APAC,active,41.09,7\r\n2237,Enterprise,EMEA,active,69.86,2\r\n2238,Consumer,AMER,active,25.71,10\r\n2239,Enterprise,EMEA,active,29.32,8\r\n2240,Growth,LATAM,at_risk,71.84,11\r\n2241,SMB,APAC,at_risk,25.23,7\r\n2242,Consumer,AMER,active,30.47,12\r\n2243,Enterprise,APAC,active,64.96,8\r\n2244,Growth,LATAM,active,24.18,11\r\n2245,Consumer,APAC,inactive,50.98,2\r\n2246,Growth,AMER,inactive,62.45,9\r\n2247,Growth,LATAM,inactive,95.72,9\r\n2248,Consumer,LATAM,inactive,91.79,11\r\n2249,SMB,LAT
AM,active,66.92,11\r\n2250,Growth,AMER,active,23.30,3\r\n2251,SMB,APAC,at_risk,77.04,8\r\n2252,Enterprise,EMEA,inactive,34.06,7\r\n2253,SMB,AMER,inactive,81.71,12\r\n2254,Consumer,LATAM,active,44.83,5\r\n2255,Growth,AMER,inactive,52.81,4\r\n2256,Growth,AMER,inactive,79.67,8\r\n2257,Enterprise,AMER,active,48.38,8\r\n2258,Enterprise,AMER,active,63.00,1\r\n2259,Growth,AMER,active,37.84,5\r\n2260,Enterprise,EMEA,active,97.21,9\r\n2261,SMB,EMEA,inactive,20.90,5\r\n2262,Consumer,APAC,active,48.48,7\r\n2263,SMB,AMER,active,74.07,12\r\n2264,Growth,LATAM,active,83.94,4\r\n2265,SMB,APAC,at_risk,50.97,10\r\n2266,SMB,LATAM,active,26.59,12\r\n2267,SMB,AMER,active,24.84,9\r\n2268,SMB,EMEA,inactive,92.27,9\r\n2269,SMB,EMEA,active,89.31,9\r\n2270,Consumer,APAC,at_risk,44.07,11\r\n2271,Growth,LATAM,at_risk,72.97,5\r\n2272,SMB,EMEA,inactive,70.97,5\r\n2273,Enterprise,APAC,active,79.74,2\r\n2274,SMB,APAC,active,82.28,5\r\n2275,Consumer,LATAM,inactive,42.50,8\r\n2276,Growth,AMER,active,76.83,7\r\n2277,Consumer,EMEA,at_risk,67.93,8\r\n2278,Enterprise,APAC,active,46.65,8\r\n2279,SMB,LATAM,at_risk,73.43,8\r\n2280,Consumer,EMEA,active,96.09,6\r\n2281,Enterprise,AMER,active,53.27,8\r\n2282,Enterprise,EMEA,at_risk,20.72,11\r\n2283,SMB,AMER,active,46.89,10\r\n2284,Enterprise,EMEA,at_risk,78.17,2\r\n2285,Enterprise,APAC,inactive,62.71,10\r\n2286,Consumer,AMER,active,69.92,9\r\n2287,Growth,EMEA,inactive,58.24,8\r\n2288,SMB,AMER,at_risk,88.58,6\r\n2289,Consumer,APAC,at_risk,52.14,8\r\n2290,Enterprise,LATAM,active,60.18,9\r\n2291,Consumer,EMEA,at_risk,89.34,2\r\n2292,SMB,EMEA,active,68.02,12\r\n2293,Growth,LATAM,active,56.16,12\r\n2294,Enterprise,EMEA,active,56.42,6\r\n2295,Consumer,EMEA,active,22.76,6\r\n2296,SMB,AMER,active,66.24,5\r\n2297,Consumer,EMEA,inactive,55.19,2\r\n2298,SMB,APAC,active,78.32,9\r\n2299,Enterprise,EMEA,active,94.21,11\r\n2300,Consumer,EMEA,inactive,46.99,12\r\n2301,Enterprise,EMEA,at_risk,67.32,7\r\n2302,Enterprise,APAC,inactive,30.30,8\r\n2303,SMB,EMEA,inactive,44.31,7\r
\n2304,Enterprise,LATAM,active,69.20,11\r\n2305,Growth,AMER,at_risk,79.38,9\r\n2306,Consumer,AMER,active,49.28,7\r\n2307,Growth,LATAM,inactive,88.87,6\r\n2308,SMB,AMER,active,57.19,3\r\n2309,SMB,AMER,at_risk,23.33,9\r\n2310,Growth,EMEA,inactive,90.56,6\r\n2311,Growth,LATAM,active,48.24,6\r\n2312,Enterprise,APAC,inactive,55.62,11\r\n2313,Consumer,LATAM,active,34.51,9\r\n2314,SMB,EMEA,active,45.25,3\r\n2315,SMB,LATAM,active,89.85,12\r\n2316,Enterprise,EMEA,active,59.90,4\r\n2317,Enterprise,LATAM,active,60.81,12\r\n2318,SMB,AMER,active,36.23,1\r\n2319,Growth,AMER,active,82.74,9\r\n2320,Growth,LATAM,active,22.21,10\r\n2321,SMB,APAC,active,58.44,7\r\n2322,Consumer,AMER,active,23.31,7\r\n2323,Consumer,AMER,at_risk,46.31,1\r\n2324,Enterprise,AMER,inactive,89.54,12\r\n2325,SMB,LATAM,active,70.51,2\r\n2326,SMB,LATAM,active,52.26,10\r\n2327,Consumer,EMEA,active,75.26,12\r\n2328,Growth,EMEA,active,25.12,3\r\n2329,Enterprise,LATAM,active,93.81,6\r\n2330,Enterprise,EMEA,active,36.47,10\r\n2331,Enterprise,APAC,inactive,37.88,3\r\n2332,Consumer,APAC,at_risk,60.09,11\r\n2333,Consumer,APAC,active,65.11,4\r\n2334,Consumer,LATAM,active,30.89,8\r\n2335,Growth,EMEA,at_risk,83.53,4\r\n2336,Enterprise,APAC,active,66.42,11\r\n2337,SMB,AMER,active,65.98,10\r\n2338,Enterprise,AMER,active,73.31,1\r\n2339,SMB,AMER,active,24.24,12\r\n2340,SMB,EMEA,active,23.18,3\r\n2341,Consumer,EMEA,active,55.30,2\r\n2342,SMB,AMER,active,86.89,11\r\n2343,SMB,EMEA,active,91.93,3\r\n2344,SMB,LATAM,active,64.23,10\r\n2345,Growth,LATAM,at_risk,28.95,7\r\n2346,Enterprise,LATAM,active,28.46,12\r\n2347,Growth,AMER,at_risk,61.56,12\r\n2348,SMB,EMEA,active,27.98,11\r\n2349,SMB,APAC,active,24.61,2\r\n2350,SMB,APAC,active,34.38,3\r\n2351,Growth,EMEA,active,21.07,9\r\n2352,Growth,APAC,active,45.63,9\r\n2353,SMB,AMER,active,54.43,1\r\n2354,SMB,LATAM,active,81.45,9\r\n2355,Consumer,EMEA,active,31.34,3\r\n2356,Growth,LATAM,inactive,57.96,4\r\n2357,Consumer,EMEA,inactive,34.21,12\r\n2358,Consumer,AMER,at_risk,94.03,1\r\n2359,
SMB,LATAM,at_risk,48.53,8\r\n2360,Growth,EMEA,at_risk,72.22,2\r\n2361,SMB,EMEA,inactive,68.32,10\r\n2362,Growth,AMER,inactive,77.52,2\r\n2363,Enterprise,AMER,active,52.99,5\r\n2364,Enterprise,APAC,at_risk,49.61,12\r\n2365,Growth,LATAM,active,23.89,8\r\n2366,Growth,APAC,at_risk,48.58,10\r\n2367,Consumer,AMER,at_risk,91.55,5\r\n2368,Enterprise,AMER,active,72.01,2\r\n2369,SMB,LATAM,at_risk,48.97,9\r\n2370,Consumer,EMEA,at_risk,51.85,4\r\n2371,Enterprise,LATAM,active,92.99,2\r\n2372,Consumer,AMER,active,93.51,4\r\n2373,Growth,LATAM,active,91.46,9\r\n2374,Consumer,LATAM,at_risk,61.26,7\r\n2375,Consumer,AMER,active,50.87,4\r\n2376,Enterprise,LATAM,active,28.85,5\r\n2377,Enterprise,AMER,active,63.30,8\r\n2378,Enterprise,LATAM,active,74.56,8\r\n2379,SMB,AMER,active,77.08,7\r\n2380,Consumer,AMER,at_risk,36.70,2\r\n2381,Enterprise,AMER,active,61.38,6\r\n2382,Consumer,LATAM,active,96.30,9\r\n2383,SMB,APAC,inactive,66.63,4\r\n2384,Consumer,LATAM,active,96.04,9\r\n2385,SMB,APAC,inactive,25.52,3\r\n2386,Enterprise,EMEA,active,90.44,1\r\n2387,Growth,EMEA,active,62.68,9\r\n2388,Growth,LATAM,active,57.57,6\r\n2389,Consumer,LATAM,at_risk,43.23,1\r\n2390,Consumer,AMER,active,83.70,7\r\n2391,Enterprise,EMEA,inactive,29.18,2\r\n2392,Consumer,EMEA,active,33.46,8\r\n2393,Consumer,LATAM,at_risk,73.00,9\r\n2394,Growth,EMEA,active,25.18,3\r\n2395,Growth,APAC,active,50.06,7\r\n2396,Growth,AMER,at_risk,76.65,4\r\n2397,Enterprise,LATAM,active,64.22,1\r\n2398,SMB,EMEA,active,59.28,10\r\n2399,Enterprise,LATAM,active,89.92,5\r\n2400,Consumer,EMEA,active,33.39,11\r\n2401,Consumer,LATAM,inactive,25.52,9\r\n2402,SMB,AMER,inactive,76.42,2\r\n2403,Enterprise,LATAM,active,32.39,6\r\n2404,Enterprise,EMEA,active,54.33,11\r\n2405,Consumer,AMER,active,56.19,3\r\n2406,Consumer,EMEA,active,85.59,6\r\n2407,SMB,EMEA,active,73.86,11\r\n2408,SMB,EMEA,active,39.15,8\r\n2409,SMB,APAC,inactive,54.41,1\r\n2410,Consumer,EMEA,inactive,61.92,2\r\n2411,Growth,EMEA,active,39.90,6\r\n2412,Enterprise,LATAM,active,40.53,4\r\
n2413,Enterprise,AMER,at_risk,29.94,7\r\n2414,SMB,EMEA,active,51.52,11\r\n2415,SMB,AMER,active,90.08,1\r\n2416,Growth,LATAM,active,23.28,5\r\n2417,Growth,LATAM,active,82.26,1\r\n2418,Enterprise,AMER,inactive,38.39,4\r\n2419,Enterprise,LATAM,active,40.74,5\r\n2420,SMB,EMEA,active,54.14,2\r\n2421,Enterprise,AMER,inactive,46.43,3\r\n2422,Consumer,APAC,active,47.50,2\r\n2423,Consumer,LATAM,active,85.86,12\r\n2424,SMB,LATAM,at_risk,42.88,5\r\n2425,Consumer,APAC,active,66.63,5\r\n2426,SMB,AMER,at_risk,47.36,8\r\n2427,SMB,LATAM,active,36.32,4\r\n2428,Consumer,LATAM,inactive,61.99,5\r\n2429,Growth,EMEA,active,21.87,8\r\n2430,Enterprise,APAC,active,33.84,5\r\n2431,Growth,LATAM,active,35.63,5\r\n2432,Consumer,AMER,active,88.68,12\r\n2433,Enterprise,APAC,active,84.84,8\r\n2434,Consumer,APAC,active,73.12,10\r\n2435,SMB,AMER,active,52.44,1\r\n2436,Growth,LATAM,at_risk,60.95,11\r\n2437,Enterprise,LATAM,at_risk,47.92,8\r\n2438,Consumer,AMER,active,94.39,11\r\n2439,Enterprise,AMER,active,24.03,8\r\n2440,SMB,APAC,active,54.95,5\r\n2441,Enterprise,EMEA,active,68.43,3\r\n2442,Growth,APAC,active,77.38,9\r\n2443,SMB,LATAM,at_risk,58.44,8\r\n2444,Growth,EMEA,active,28.58,8\r\n2445,Consumer,APAC,inactive,83.85,2\r\n2446,SMB,LATAM,active,89.80,4\r\n2447,SMB,AMER,active,44.26,9\r\n2448,SMB,APAC,at_risk,86.03,8\r\n2449,Enterprise,LATAM,at_risk,24.95,3\r\n2450,Enterprise,EMEA,active,54.94,2\r\n2451,Consumer,APAC,active,77.05,11\r\n2452,Consumer,LATAM,active,94.10,7\r\n2453,Consumer,AMER,inactive,76.19,8\r\n2454,Consumer,LATAM,active,93.78,12\r\n2455,SMB,AMER,active,24.86,1\r\n2456,Growth,APAC,at_risk,59.44,5\r\n2457,Enterprise,EMEA,at_risk,39.95,2\r\n2458,Consumer,EMEA,active,63.01,9\r\n2459,Growth,AMER,active,76.30,3\r\n2460,Growth,AMER,active,56.40,5\r\n2461,Consumer,LATAM,inactive,80.80,12\r\n2462,SMB,EMEA,inactive,94.22,10\r\n2463,SMB,AMER,active,29.37,7\r\n2464,Growth,LATAM,active,81.80,10\r\n2465,SMB,EMEA,active,41.30,1\r\n2466,Consumer,APAC,active,28.71,9\r\n2467,Enterprise,AMER,active
,45.69,11\r\n2468,Enterprise,EMEA,active,68.22,2\r\n2469,Enterprise,LATAM,active,83.45,8\r\n2470,Growth,EMEA,inactive,83.75,5\r\n2471,Enterprise,APAC,active,27.33,10\r\n2472,Enterprise,AMER,inactive,65.05,10\r\n2473,Growth,EMEA,inactive,63.61,11\r\n2474,Growth,LATAM,active,28.31,1\r\n2475,Growth,AMER,active,58.59,10\r\n2476,Enterprise,APAC,inactive,28.79,2\r\n2477,SMB,APAC,active,56.45,1\r\n2478,Enterprise,AMER,active,83.01,10\r\n2479,Consumer,EMEA,inactive,59.02,8\r\n2480,SMB,APAC,inactive,60.22,7\r\n2481,Growth,APAC,active,35.53,9\r\n2482,Growth,EMEA,active,93.25,8\r\n2483,SMB,LATAM,at_risk,93.24,1\r\n2484,Growth,APAC,active,26.20,5\r\n2485,SMB,AMER,active,55.94,11\r\n2486,Consumer,APAC,at_risk,33.24,2\r\n2487,Consumer,LATAM,at_risk,64.90,4\r\n2488,SMB,EMEA,active,78.48,5\r\n2489,Consumer,LATAM,at_risk,58.29,10\r\n2490,SMB,AMER,inactive,40.78,9\r\n2491,SMB,APAC,at_risk,96.03,7\r\n2492,Consumer,LATAM,at_risk,24.95,4\r\n2493,SMB,APAC,inactive,66.48,4\r\n2494,Enterprise,AMER,active,36.77,3\r\n2495,SMB,EMEA,active,75.66,7\r\n2496,Growth,EMEA,at_risk,37.38,2\r\n2497,Growth,AMER,active,87.67,11\r\n2498,SMB,LATAM,active,30.78,8\r\n2499,SMB,LATAM,at_risk,24.86,3\r\n"
  },
  {
    "path": "packages/talon/bench/data/transactions.csv",
    "content": "transaction_id,customer_id,region,category,channel,amount,quantity,discount,promo,event_date\r\n1,1558,EMEA,fashion,retail,94.28,7,0.114,none,2024-11-22\r\n2,1914,EMEA,toys,retail,82.19,1,0.053,none,2024-12-13\r\n3,2222,LATAM,grocery,retail,46.28,5,0.167,coupon,2024-07-14\r\n4,2453,AMER,fashion,retail,55.96,5,0.103,none,2024-09-26\r\n5,1412,AMER,toys,online,76.99,3,0.128,coupon,2024-02-16\r\n6,1788,AMER,home,online,48.41,7,0.185,none,2024-06-04\r\n7,2265,APAC,toys,retail,42.10,8,0.094,bundle,2024-09-23\r\n8,1166,AMER,grocery,retail,48.15,8,0.046,coupon,2024-06-17\r\n9,1792,AMER,grocery,retail,87.07,3,0.077,none,2024-11-26\r\n10,2085,AMER,fashion,mobile,20.17,6,0.222,coupon,2024-11-25\r\n11,1606,AMER,fashion,retail,40.98,1,0.189,none,2024-08-16\r\n12,2192,APAC,grocery,online,68.00,2,0.106,loyalty,2024-09-08\r\n13,2027,EMEA,home,retail,108.92,7,0.189,none,2024-09-04\r\n14,1142,EMEA,home,online,63.73,1,0.247,none,2024-12-08\r\n15,2299,EMEA,home,retail,58.84,6,0.071,none,2024-09-13\r\n16,1428,APAC,grocery,online,61.35,4,0.226,coupon,2024-07-02\r\n17,1159,LATAM,sports,retail,106.86,5,0.207,loyalty,2024-10-08\r\n18,2477,APAC,toys,online,96.04,8,0.045,bundle,2024-09-21\r\n19,1111,APAC,electronics,online,89.01,4,0.209,none,2024-02-23\r\n20,1320,EMEA,toys,online,105.26,8,0.083,none,2024-03-04\r\n21,1584,EMEA,home,online,71.41,5,0.128,none,2024-09-07\r\n22,1766,AMER,grocery,online,84.19,5,0.063,none,2024-02-10\r\n23,1537,LATAM,grocery,retail,61.67,1,0.240,none,2024-11-18\r\n24,1875,EMEA,grocery,mobile,34.26,5,0.110,none,2024-12-04\r\n25,1972,LATAM,toys,mobile,47.49,1,0.228,coupon,2024-03-22\r\n26,1648,APAC,fashion,retail,115.31,7,0.172,bundle,2024-12-18\r\n27,1154,LATAM,grocery,online,34.54,6,0.247,none,2024-06-18\r\n28,1637,APAC,electronics,mobile,62.69,3,0.152,coupon,2024-11-01\r\n29,1668,AMER,home,partner,93.00,1,0.118,none,2024-07-07\r\n30,1750,LATAM,grocery,online,69.63,5,0.193,coupon,2024-12-03\r\n31,1077,AMER,fashion,online,81.40,2,0.218,coupon,2024-09-
23\r\n32,2167,APAC,sports,online,123.29,6,0.233,coupon,2024-09-07\r\n33,1753,APAC,sports,retail,34.94,3,0.239,none,2024-07-12\r\n34,1806,APAC,home,retail,36.39,7,0.114,none,2024-10-11\r\n35,1931,APAC,home,retail,136.24,4,0.149,none,2024-01-05\r\n36,2436,LATAM,grocery,mobile,106.51,6,0.009,none,2024-06-22\r\n37,1289,LATAM,home,online,74.57,2,0.090,loyalty,2024-09-03\r\n38,1277,AMER,home,retail,124.97,4,0.109,none,2024-09-20\r\n39,2291,EMEA,toys,mobile,68.68,7,0.082,loyalty,2024-11-14\r\n40,1809,APAC,fashion,online,73.40,1,0.165,none,2024-03-06\r\n41,1651,LATAM,electronics,retail,111.95,3,0.179,coupon,2024-01-10\r\n42,1991,APAC,grocery,online,64.26,4,0.163,none,2024-11-14\r\n43,1560,AMER,home,retail,111.27,6,0.118,coupon,2024-07-18\r\n44,1226,AMER,grocery,mobile,34.36,5,0.196,bundle,2024-10-07\r\n45,1635,APAC,electronics,retail,69.71,6,0.003,bundle,2024-01-06\r\n46,1448,EMEA,home,online,42.15,1,0.147,none,2024-08-15\r\n47,1202,APAC,electronics,online,57.23,5,0.169,coupon,2024-06-18\r\n48,1383,AMER,sports,online,84.49,4,0.135,none,2024-04-06\r\n49,1811,APAC,electronics,online,20.10,7,0.218,loyalty,2024-02-17\r\n50,1609,LATAM,grocery,retail,54.90,3,0.247,none,2024-11-21\r\n51,1119,LATAM,fashion,mobile,47.08,3,0.026,bundle,2024-01-26\r\n52,1519,APAC,home,online,92.88,5,0.103,coupon,2024-04-23\r\n53,2019,AMER,grocery,online,62.67,6,0.107,coupon,2024-11-04\r\n54,2082,APAC,home,online,35.99,7,0.081,bundle,2024-01-07\r\n55,1418,LATAM,grocery,online,70.87,1,0.144,none,2024-04-28\r\n56,1354,AMER,grocery,online,39.12,8,0.054,none,2024-02-23\r\n57,1901,AMER,grocery,online,26.38,3,0.181,none,2024-02-04\r\n58,1835,AMER,grocery,retail,87.76,4,0.228,bundle,2024-10-02\r\n59,2350,APAC,grocery,partner,82.43,5,0.170,coupon,2024-10-20\r\n60,2000,APAC,electronics,retail,38.83,4,0.039,none,2024-12-08\r\n61,2351,EMEA,grocery,online,51.58,7,0.101,none,2024-01-10\r\n62,1573,AMER,electronics,mobile,40.91,5,0.195,none,2024-08-02\r\n63,2102,APAC,sports,retail,30.94,4,0.146,bundle,2024-07-16\r\n6
4,1350,LATAM,electronics,online,40.82,4,0.083,none,2024-09-11\r\n65,2264,LATAM,home,mobile,43.29,3,0.149,none,2024-04-26\r\n66,1904,APAC,toys,mobile,37.59,4,0.217,coupon,2024-02-13\r\n67,2119,AMER,grocery,retail,28.50,5,0.051,coupon,2024-06-18\r\n68,1681,LATAM,grocery,mobile,75.58,5,0.120,loyalty,2024-01-08\r\n69,2410,EMEA,fashion,mobile,75.71,5,0.101,none,2024-05-26\r\n70,1379,EMEA,grocery,online,68.12,1,0.087,none,2024-03-07\r\n71,1137,APAC,grocery,online,72.96,4,0.200,none,2024-10-21\r\n72,1892,LATAM,fashion,retail,55.36,4,0.240,bundle,2024-01-13\r\n73,2040,LATAM,electronics,retail,30.32,7,0.181,coupon,2024-11-21\r\n74,1367,AMER,electronics,partner,93.99,5,0.104,none,2024-10-05\r\n75,1929,LATAM,grocery,online,36.96,1,0.032,none,2024-03-16\r\n76,2492,LATAM,home,retail,95.31,7,0.182,none,2024-11-23\r\n77,1581,APAC,sports,retail,39.69,4,0.159,none,2024-03-22\r\n78,1312,EMEA,grocery,online,70.59,1,0.248,none,2024-12-12\r\n79,2311,LATAM,toys,retail,48.41,1,0.170,coupon,2024-04-24\r\n80,1436,APAC,home,online,23.37,2,0.035,bundle,2024-05-07\r\n81,1289,LATAM,fashion,retail,107.60,5,0.039,coupon,2024-01-24\r\n82,2054,AMER,sports,online,37.19,2,0.039,loyalty,2024-12-20\r\n83,1480,APAC,electronics,retail,127.31,1,0.021,coupon,2024-09-17\r\n84,2255,AMER,home,mobile,23.65,8,0.017,none,2024-10-23\r\n85,1626,EMEA,electronics,retail,67.19,4,0.207,bundle,2024-09-27\r\n86,1862,LATAM,grocery,retail,50.49,2,0.085,none,2024-04-21\r\n87,1381,LATAM,grocery,mobile,52.50,6,0.096,none,2024-08-21\r\n88,1312,EMEA,home,retail,42.18,8,0.225,none,2024-05-14\r\n89,2065,EMEA,fashion,online,55.62,1,0.094,bundle,2024-01-19\r\n90,1829,EMEA,grocery,online,26.18,4,0.074,none,2024-12-04\r\n91,2216,AMER,grocery,online,112.95,8,0.012,coupon,2024-07-24\r\n92,1597,APAC,sports,retail,45.83,8,0.022,bundle,2024-08-11\r\n93,1874,LATAM,electronics,online,24.47,3,0.173,bundle,2024-07-08\r\n94,1395,APAC,sports,online,97.79,4,0.154,coupon,2024-08-16\r\n95,1297,AMER,home,retail,40.86,3,0.138,coupon,2024-09-17\r\n9
6,2434,APAC,sports,online,62.43,5,0.197,coupon,2024-08-21\r\n97,2188,EMEA,electronics,mobile,50.66,8,0.010,none,2024-09-20\r\n98,1974,EMEA,fashion,online,65.99,5,0.054,none,2024-12-22\r\n99,2005,APAC,home,online,59.72,8,0.230,coupon,2024-01-16\r\n100,1516,EMEA,home,retail,73.59,2,0.100,coupon,2024-03-27\r\n101,2193,AMER,fashion,retail,59.88,2,0.136,bundle,2024-04-18\r\n102,2462,EMEA,electronics,retail,36.87,1,0.023,coupon,2024-06-11\r\n103,2095,EMEA,electronics,mobile,44.77,6,0.186,loyalty,2024-11-02\r\n104,1314,AMER,electronics,retail,87.09,6,0.132,none,2024-04-12\r\n105,1130,LATAM,sports,partner,25.98,8,0.232,bundle,2024-04-02\r\n106,2252,EMEA,grocery,retail,83.06,6,0.147,none,2024-12-08\r\n107,1964,EMEA,electronics,online,85.78,6,0.100,none,2024-04-08\r\n108,2370,EMEA,sports,online,118.17,4,0.165,none,2024-08-17\r\n109,2351,EMEA,grocery,retail,52.26,3,0.142,none,2024-03-06\r\n110,1101,AMER,electronics,mobile,41.39,8,0.059,none,2024-03-08\r\n111,2359,LATAM,grocery,online,27.31,8,0.193,coupon,2024-12-09\r\n112,1815,APAC,electronics,online,84.91,2,0.022,bundle,2024-10-04\r\n113,2161,LATAM,grocery,retail,59.64,6,0.088,none,2024-10-15\r\n114,1644,EMEA,toys,retail,75.87,4,0.056,none,2024-11-10\r\n115,2233,EMEA,electronics,online,24.83,2,0.204,loyalty,2024-11-19\r\n116,2221,LATAM,fashion,online,21.94,7,0.008,none,2024-03-28\r\n117,1448,EMEA,electronics,online,81.33,2,0.114,coupon,2024-03-03\r\n118,1347,APAC,electronics,online,70.00,6,0.192,none,2024-03-03\r\n119,1505,EMEA,grocery,online,27.00,1,0.025,bundle,2024-03-10\r\n120,1221,LATAM,grocery,partner,105.05,7,0.001,none,2024-11-03\r\n121,1447,LATAM,grocery,online,63.87,5,0.173,bundle,2024-08-17\r\n122,1035,EMEA,electronics,online,62.91,5,0.139,none,2024-11-12\r\n123,1219,LATAM,sports,online,60.73,2,0.005,none,2024-06-10\r\n124,2075,LATAM,home,retail,39.07,7,0.180,coupon,2024-02-25\r\n125,2134,AMER,home,online,22.31,1,0.187,none,2024-06-06\r\n126,1304,LATAM,electronics,online,33.10,1,0.106,none,2024-07-11\r\n127,2199,LA
TAM,home,online,75.90,3,0.225,none,2024-09-25\r\n128,1237,LATAM,fashion,retail,53.91,1,0.090,none,2024-07-22\r\n129,1079,LATAM,fashion,retail,41.52,1,0.054,none,2024-02-17\r\n130,1640,APAC,home,online,54.35,2,0.043,none,2024-08-14\r\n131,1679,APAC,home,online,41.75,7,0.089,none,2024-01-07\r\n132,1748,APAC,grocery,retail,74.43,8,0.227,none,2024-10-26\r\n133,1304,LATAM,electronics,retail,114.80,6,0.057,none,2024-07-14\r\n134,1295,EMEA,sports,retail,55.68,4,0.199,loyalty,2024-10-04\r\n135,2352,APAC,sports,retail,65.60,2,0.018,none,2024-07-06\r\n136,1303,LATAM,grocery,retail,29.80,7,0.160,none,2024-08-23\r\n137,2328,EMEA,electronics,online,57.97,7,0.068,none,2024-09-02\r\n138,1416,EMEA,home,retail,79.15,4,0.154,coupon,2024-03-10\r\n139,2284,EMEA,grocery,online,154.14,4,0.052,bundle,2024-02-18\r\n140,2169,EMEA,home,mobile,86.78,7,0.019,loyalty,2024-02-10\r\n141,1746,LATAM,fashion,online,40.17,4,0.142,none,2024-01-16\r\n142,2171,EMEA,home,online,140.04,2,0.233,none,2024-06-22\r\n143,1852,AMER,home,mobile,116.30,8,0.183,coupon,2024-08-02\r\n144,1926,AMER,electronics,retail,83.34,7,0.250,bundle,2024-10-05\r\n145,1118,AMER,grocery,online,52.47,5,0.220,none,2024-06-11\r\n146,1135,APAC,home,online,51.71,4,0.127,coupon,2024-02-27\r\n147,1789,EMEA,electronics,online,80.20,6,0.125,loyalty,2024-10-22\r\n148,1610,LATAM,electronics,online,84.03,5,0.119,loyalty,2024-07-16\r\n149,1639,APAC,grocery,mobile,110.98,1,0.219,none,2024-12-08\r\n150,1777,AMER,fashion,retail,80.53,7,0.012,none,2024-08-02\r\n151,1227,AMER,grocery,retail,33.66,6,0.051,none,2024-05-13\r\n152,1364,EMEA,electronics,online,48.33,1,0.015,none,2024-04-28\r\n153,1787,APAC,grocery,retail,41.58,2,0.164,none,2024-03-01\r\n154,2168,EMEA,fashion,online,29.83,4,0.115,none,2024-08-11\r\n155,2394,EMEA,fashion,retail,168.24,6,0.072,coupon,2024-10-01\r\n156,1714,APAC,grocery,online,89.18,5,0.064,coupon,2024-08-16\r\n157,1459,LATAM,electronics,online,18.44,2,0.186,none,2024-05-05\r\n158,1466,AMER,grocery,mobile,29.06,7,0.135,none
,2024-10-06\r\n159,1763,LATAM,electronics,online,64.77,7,0.006,bundle,2024-05-28\r\n160,1674,LATAM,electronics,online,47.41,6,0.048,none,2024-07-13\r\n161,1494,AMER,grocery,mobile,51.37,7,0.005,none,2024-05-19\r\n162,1537,LATAM,home,retail,37.16,8,0.066,none,2024-01-24\r\n163,1669,AMER,electronics,mobile,67.99,1,0.061,none,2024-12-17\r\n164,1606,AMER,grocery,retail,55.92,6,0.105,bundle,2024-02-16\r\n165,2285,APAC,grocery,online,34.59,1,0.209,none,2024-09-06\r\n166,1443,EMEA,grocery,online,33.05,3,0.053,none,2024-09-25\r\n167,2014,EMEA,sports,mobile,84.23,8,0.115,none,2024-04-06\r\n168,1838,AMER,toys,online,126.03,4,0.109,none,2024-11-05\r\n169,1746,LATAM,sports,online,64.21,5,0.166,bundle,2024-06-21\r\n170,1045,LATAM,sports,mobile,110.52,2,0.014,loyalty,2024-12-05\r\n171,1290,EMEA,sports,retail,26.26,5,0.128,none,2024-02-12\r\n172,2366,APAC,toys,mobile,58.77,4,0.222,none,2024-06-23\r\n173,2194,APAC,grocery,online,49.37,5,0.067,none,2024-08-06\r\n174,1035,EMEA,home,online,73.55,5,0.036,bundle,2024-02-13\r\n175,2479,EMEA,electronics,mobile,84.62,4,0.000,none,2024-02-08\r\n176,1783,AMER,grocery,online,62.59,4,0.036,none,2024-09-24\r\n177,1493,APAC,home,retail,38.13,8,0.108,none,2024-07-06\r\n178,2059,AMER,sports,online,98.81,5,0.098,none,2024-04-14\r\n179,2368,AMER,fashion,retail,77.59,6,0.161,none,2024-12-14\r\n180,1094,LATAM,grocery,online,116.77,6,0.058,none,2024-01-06\r\n181,1618,EMEA,grocery,retail,72.52,7,0.099,none,2024-08-01\r\n182,2051,APAC,fashion,retail,47.25,8,0.217,loyalty,2024-05-24\r\n183,1524,LATAM,grocery,retail,41.47,8,0.019,none,2024-01-03\r\n184,2442,APAC,sports,online,142.67,7,0.026,none,2024-09-04\r\n185,1226,AMER,fashion,retail,147.69,7,0.220,none,2024-03-25\r\n186,2088,EMEA,home,mobile,75.25,1,0.078,none,2024-02-03\r\n187,1902,AMER,electronics,online,93.22,1,0.137,none,2024-01-07\r\n188,2494,AMER,grocery,online,79.76,2,0.030,none,2024-08-14\r\n189,2419,LATAM,electronics,online,40.82,3,0.015,none,2024-11-16\r\n190,1335,APAC,fashion,online,40.18,7
,0.019,coupon,2024-05-18\r\n191,1625,EMEA,grocery,online,102.64,6,0.209,coupon,2024-06-05\r\n192,2371,LATAM,electronics,online,102.57,1,0.096,bundle,2024-06-01\r\n193,1078,APAC,fashion,mobile,35.72,4,0.064,none,2024-05-12\r\n194,1821,LATAM,home,retail,40.55,6,0.247,bundle,2024-12-05\r\n195,1476,APAC,electronics,online,43.34,2,0.076,bundle,2024-05-10\r\n196,2252,EMEA,grocery,online,78.31,5,0.029,none,2024-08-27\r\n197,1972,LATAM,toys,mobile,68.26,6,0.164,none,2024-06-24\r\n198,1453,APAC,home,retail,65.04,8,0.009,bundle,2024-09-16\r\n199,2223,EMEA,grocery,retail,51.75,7,0.131,none,2024-03-19\r\n200,1255,AMER,grocery,partner,92.85,8,0.203,coupon,2024-11-10\r\n201,1419,APAC,grocery,partner,24.47,6,0.167,coupon,2024-07-28\r\n202,1186,APAC,electronics,partner,34.69,5,0.082,none,2024-03-08\r\n203,2265,APAC,sports,online,61.83,8,0.130,none,2024-02-22\r\n204,2442,APAC,home,online,98.33,7,0.229,bundle,2024-12-18\r\n205,1014,EMEA,grocery,retail,57.95,5,0.075,none,2024-04-03\r\n206,1780,APAC,fashion,retail,86.64,2,0.090,none,2024-09-06\r\n207,1668,AMER,electronics,retail,45.57,4,0.241,bundle,2024-05-09\r\n208,2478,AMER,electronics,online,66.97,5,0.163,loyalty,2024-06-26\r\n209,1544,LATAM,fashion,retail,35.62,4,0.217,none,2024-11-16\r\n210,1446,AMER,home,online,58.56,4,0.019,coupon,2024-04-09\r\n211,2182,AMER,toys,retail,84.31,6,0.169,none,2024-01-02\r\n212,1128,LATAM,home,online,23.78,1,0.211,none,2024-10-19\r\n213,1204,AMER,home,retail,60.95,5,0.243,none,2024-08-19\r\n214,1216,APAC,grocery,retail,60.34,7,0.035,bundle,2024-02-17\r\n215,2056,LATAM,electronics,retail,29.97,7,0.018,coupon,2024-08-03\r\n216,2411,EMEA,fashion,retail,19.44,1,0.115,loyalty,2024-09-15\r\n217,2127,LATAM,toys,online,158.17,2,0.116,none,2024-10-01\r\n218,2324,AMER,electronics,online,24.80,6,0.079,coupon,2024-06-13\r\n219,1598,EMEA,grocery,retail,40.92,1,0.223,coupon,2024-01-15\r\n220,2085,AMER,home,mobile,44.13,3,0.149,loyalty,2024-05-24\r\n221,1274,LATAM,fashion,retail,83.28,5,0.214,bundle,2024-09-13\r\n
222,1275,EMEA,grocery,retail,72.46,2,0.242,none,2024-09-24\r\n223,1459,LATAM,toys,online,47.26,8,0.200,loyalty,2024-02-10\r\n224,1789,EMEA,electronics,online,38.77,5,0.243,coupon,2024-04-24\r\n225,1022,APAC,toys,mobile,29.97,3,0.030,none,2024-11-13\r\n226,2360,EMEA,home,retail,60.14,2,0.029,none,2024-10-28\r\n227,1983,LATAM,toys,online,80.81,1,0.069,coupon,2024-08-27\r\n228,1611,EMEA,sports,online,72.20,6,0.022,coupon,2024-04-10\r\n229,1485,APAC,home,online,75.05,2,0.210,bundle,2024-10-08\r\n230,1908,AMER,home,retail,60.10,6,0.200,coupon,2024-12-28\r\n231,2395,APAC,sports,mobile,72.60,8,0.025,bundle,2024-04-07\r\n232,1307,AMER,fashion,online,66.35,1,0.035,coupon,2024-06-11\r\n233,2365,LATAM,fashion,retail,31.08,2,0.085,none,2024-10-05\r\n234,1776,APAC,electronics,online,111.32,5,0.070,bundle,2024-09-09\r\n235,2162,EMEA,toys,retail,32.65,4,0.199,coupon,2024-03-24\r\n236,1589,AMER,electronics,online,77.28,4,0.029,bundle,2024-06-21\r\n237,1553,LATAM,sports,retail,43.33,7,0.016,bundle,2024-05-06\r\n238,2047,AMER,home,online,67.13,5,0.139,none,2024-11-15\r\n239,1642,EMEA,sports,retail,25.65,5,0.122,none,2024-05-21\r\n240,2336,APAC,electronics,online,116.42,3,0.240,none,2024-07-11\r\n241,2236,APAC,sports,retail,40.85,4,0.220,none,2024-07-19\r\n242,2017,EMEA,electronics,online,65.14,7,0.052,loyalty,2024-04-28\r\n243,1405,LATAM,electronics,retail,46.37,4,0.177,bundle,2024-08-07\r\n244,2425,APAC,toys,online,112.97,6,0.148,none,2024-05-02\r\n245,2253,AMER,electronics,online,40.87,7,0.185,bundle,2024-08-26\r\n246,1602,EMEA,grocery,online,72.63,8,0.121,loyalty,2024-04-14\r\n247,1069,APAC,grocery,online,71.43,1,0.250,loyalty,2024-03-04\r\n248,2382,LATAM,home,partner,55.03,5,0.090,none,2024-11-13\r\n249,1995,LATAM,sports,online,49.43,6,0.129,loyalty,2024-12-28\r\n250,1596,EMEA,grocery,partner,23.08,3,0.218,none,2024-06-08\r\n251,1411,LATAM,sports,mobile,104.60,6,0.052,coupon,2024-06-25\r\n252,2008,APAC,fashion,partner,36.16,1,0.131,none,2024-06-28\r\n253,1797,LATAM,toys,online,67
.10,3,0.133,none,2024-10-18\r\n254,2254,LATAM,home,mobile,54.96,7,0.069,none,2024-03-28\r\n255,1973,EMEA,grocery,online,31.56,4,0.161,coupon,2024-10-27\r\n256,2436,LATAM,electronics,retail,85.49,7,0.161,coupon,2024-01-01\r\n257,1097,EMEA,grocery,online,25.31,1,0.112,none,2024-06-13\r\n258,1074,LATAM,grocery,retail,46.37,6,0.110,none,2024-02-01\r\n259,1413,LATAM,home,online,30.89,7,0.135,loyalty,2024-04-28\r\n260,1927,EMEA,home,online,29.95,4,0.146,none,2024-10-20\r\n261,2457,EMEA,fashion,retail,114.41,2,0.118,bundle,2024-12-09\r\n262,2422,APAC,toys,retail,65.40,8,0.212,coupon,2024-03-11\r\n263,1985,AMER,electronics,mobile,80.74,4,0.245,none,2024-05-22\r\n264,1674,LATAM,grocery,online,136.94,2,0.184,none,2024-08-22\r\n265,1814,AMER,toys,retail,42.78,6,0.048,none,2024-02-03\r\n266,2275,LATAM,grocery,mobile,22.31,8,0.043,bundle,2024-01-27\r\n267,2079,EMEA,grocery,online,75.66,4,0.149,none,2024-12-27\r\n268,1946,AMER,electronics,retail,89.39,5,0.210,loyalty,2024-04-23\r\n269,1971,EMEA,grocery,mobile,36.37,6,0.159,none,2024-09-14\r\n270,1658,AMER,grocery,online,150.87,5,0.128,loyalty,2024-11-22\r\n271,2108,AMER,electronics,online,14.50,4,0.141,none,2024-07-13\r\n272,1252,APAC,fashion,retail,53.87,2,0.150,loyalty,2024-06-12\r\n273,1137,APAC,electronics,online,90.62,4,0.230,coupon,2024-07-14\r\n274,2434,APAC,grocery,mobile,88.50,1,0.023,coupon,2024-05-17\r\n275,1411,LATAM,toys,online,126.54,2,0.142,coupon,2024-10-24\r\n276,2025,EMEA,sports,online,101.10,7,0.184,none,2024-10-07\r\n277,2317,LATAM,sports,mobile,35.53,2,0.187,loyalty,2024-02-22\r\n278,2477,APAC,home,online,87.72,1,0.034,loyalty,2024-08-09\r\n279,1260,LATAM,grocery,online,26.90,4,0.108,bundle,2024-06-23\r\n280,2350,APAC,sports,retail,86.41,5,0.068,loyalty,2024-04-18\r\n281,1953,EMEA,sports,online,90.11,7,0.100,none,2024-01-21\r\n282,2080,LATAM,toys,online,47.61,1,0.240,coupon,2024-07-08\r\n283,2401,LATAM,grocery,mobile,46.60,6,0.006,none,2024-04-21\r\n284,2149,EMEA,grocery,online,23.47,3,0.232,none,2024-04-16\r
\n285,2281,AMER,electronics,online,43.98,4,0.185,none,2024-12-05\r\n286,2379,AMER,fashion,retail,26.23,4,0.058,none,2024-03-19\r\n287,1435,AMER,fashion,mobile,60.09,8,0.091,bundle,2024-04-23\r\n288,2057,APAC,home,retail,79.62,3,0.219,none,2024-01-26\r\n289,1029,EMEA,home,retail,75.09,6,0.170,none,2024-02-24\r\n290,1000,APAC,sports,retail,46.15,5,0.108,none,2024-12-12\r\n291,2309,AMER,sports,online,44.44,1,0.002,coupon,2024-08-01\r\n292,1845,AMER,electronics,mobile,33.81,1,0.085,none,2024-08-06\r\n293,1374,APAC,grocery,online,85.39,4,0.244,bundle,2024-04-21\r\n294,2335,EMEA,home,retail,61.46,4,0.069,bundle,2024-07-03\r\n295,2207,APAC,grocery,online,40.02,3,0.186,none,2024-11-03\r\n296,2222,LATAM,sports,online,46.36,8,0.249,bundle,2024-10-11\r\n297,1181,LATAM,toys,partner,94.33,3,0.049,none,2024-03-17\r\n298,2425,APAC,toys,mobile,44.38,6,0.192,none,2024-01-04\r\n299,1450,EMEA,fashion,online,88.74,5,0.200,none,2024-07-15\r\n300,2279,LATAM,grocery,online,47.69,6,0.035,loyalty,2024-02-19\r\n301,2370,EMEA,fashion,online,21.79,7,0.065,coupon,2024-04-26\r\n302,1672,APAC,sports,online,78.32,6,0.238,none,2024-09-24\r\n303,1727,APAC,home,online,95.78,7,0.008,bundle,2024-05-14\r\n304,1791,LATAM,grocery,online,36.75,6,0.124,none,2024-05-04\r\n305,2401,LATAM,fashion,retail,19.33,1,0.004,coupon,2024-03-24\r\n306,1437,EMEA,fashion,online,35.54,4,0.012,none,2024-02-09\r\n307,2493,APAC,electronics,retail,38.69,1,0.142,loyalty,2024-07-15\r\n308,2175,AMER,sports,retail,48.89,7,0.162,none,2024-10-02\r\n309,2132,LATAM,grocery,online,93.27,2,0.161,none,2024-02-19\r\n310,1876,LATAM,sports,online,72.23,1,0.123,bundle,2024-03-21\r\n311,1199,APAC,electronics,online,104.10,5,0.076,none,2024-02-09\r\n312,1075,AMER,sports,retail,50.64,8,0.198,none,2024-03-11\r\n313,2490,AMER,electronics,online,102.48,6,0.248,none,2024-05-11\r\n314,2339,AMER,electronics,mobile,48.90,3,0.121,coupon,2024-06-27\r\n315,2465,EMEA,sports,mobile,23.99,7,0.165,none,2024-01-10\r\n316,2260,EMEA,grocery,mobile,16.77,4,0.061
,loyalty,2024-04-04\r\n317,2440,APAC,toys,retail,93.12,2,0.199,bundle,2024-06-16\r\n318,1835,AMER,fashion,online,51.56,5,0.157,none,2024-09-26\r\n319,1494,AMER,grocery,retail,57.81,8,0.104,none,2024-07-11\r\n320,1514,LATAM,grocery,retail,34.29,2,0.010,coupon,2024-05-28\r\n321,1544,LATAM,sports,retail,34.43,1,0.120,none,2024-09-14\r\n322,1067,APAC,home,online,102.46,5,0.034,none,2024-10-05\r\n323,1919,EMEA,grocery,retail,30.32,3,0.047,bundle,2024-03-11\r\n324,1876,LATAM,toys,retail,46.70,6,0.011,bundle,2024-03-05\r\n325,1469,EMEA,home,partner,54.83,3,0.086,none,2024-11-02\r\n326,1198,AMER,fashion,online,113.24,7,0.068,bundle,2024-07-06\r\n327,1947,EMEA,electronics,retail,22.20,8,0.150,coupon,2024-09-06\r\n328,1973,EMEA,home,online,64.93,4,0.056,none,2024-12-28\r\n329,1566,EMEA,electronics,retail,70.80,5,0.042,none,2024-11-21\r\n330,1939,LATAM,toys,mobile,59.67,3,0.126,coupon,2024-10-14\r\n331,1002,EMEA,grocery,online,37.48,1,0.155,none,2024-12-11\r\n332,1849,EMEA,electronics,retail,42.37,4,0.076,none,2024-02-28\r\n333,1796,LATAM,electronics,online,45.25,4,0.216,none,2024-11-17\r\n334,2311,LATAM,grocery,online,49.85,8,0.117,none,2024-01-24\r\n335,1637,APAC,toys,retail,36.23,2,0.081,none,2024-12-03\r\n336,1748,APAC,toys,retail,157.87,5,0.042,none,2024-06-05\r\n337,1811,APAC,grocery,mobile,79.07,2,0.224,bundle,2024-03-06\r\n338,1360,APAC,home,online,161.27,6,0.128,coupon,2024-08-05\r\n339,1735,LATAM,home,online,83.91,6,0.196,none,2024-05-20\r\n340,2490,AMER,home,retail,86.20,6,0.247,coupon,2024-04-05\r\n341,2262,APAC,fashion,mobile,82.04,2,0.243,none,2024-06-09\r\n342,2348,EMEA,home,online,122.87,2,0.235,none,2024-05-18\r\n343,2132,LATAM,home,retail,61.43,3,0.066,loyalty,2024-05-02\r\n344,1756,EMEA,grocery,retail,71.60,5,0.118,coupon,2024-02-01\r\n345,1521,LATAM,toys,online,59.80,5,0.168,none,2024-06-08\r\n346,2349,APAC,grocery,retail,73.21,2,0.099,none,2024-09-25\r\n347,1955,AMER,electronics,retail,110.82,6,0.152,none,2024-06-16\r\n348,2146,APAC,toys,retail,52.78,4,0.0
26,none,2024-11-04\r\n349,2486,APAC,fashion,mobile,62.29,4,0.079,coupon,2024-03-22\r\n350,2171,EMEA,electronics,online,30.58,5,0.141,bundle,2024-03-17\r\n351,1729,AMER,sports,mobile,43.41,5,0.176,none,2024-01-22\r\n352,2312,APAC,grocery,online,43.11,3,0.031,none,2024-10-11\r\n353,1466,AMER,toys,online,26.18,2,0.214,none,2024-04-11\r\n354,1033,APAC,electronics,online,43.19,3,0.092,none,2024-03-23\r\n355,1581,APAC,grocery,retail,28.46,8,0.125,bundle,2024-11-11\r\n356,1515,EMEA,toys,online,56.64,1,0.233,bundle,2024-11-10\r\n357,1629,LATAM,grocery,online,77.36,2,0.212,none,2024-09-13\r\n358,1053,AMER,fashion,retail,64.03,7,0.225,none,2024-02-16\r\n359,2194,APAC,sports,mobile,49.25,5,0.229,none,2024-08-15\r\n360,1736,AMER,home,online,125.59,4,0.199,none,2024-03-15\r\n361,1370,APAC,home,mobile,43.48,3,0.088,loyalty,2024-09-19\r\n362,2206,AMER,fashion,mobile,42.16,8,0.018,none,2024-09-11\r\n363,2215,LATAM,toys,online,46.05,8,0.161,loyalty,2024-11-22\r\n364,1499,EMEA,home,online,52.27,1,0.193,none,2024-07-05\r\n365,1960,EMEA,electronics,online,55.12,4,0.002,loyalty,2024-04-01\r\n366,1093,APAC,electronics,retail,40.72,4,0.124,none,2024-02-15\r\n367,2418,AMER,fashion,mobile,43.57,1,0.058,none,2024-09-03\r\n368,1897,AMER,electronics,retail,41.78,6,0.043,coupon,2024-05-02\r\n369,2382,LATAM,home,mobile,104.98,6,0.124,none,2024-08-24\r\n370,2034,LATAM,electronics,retail,101.84,4,0.004,none,2024-01-10\r\n371,1831,APAC,home,online,50.00,7,0.188,loyalty,2024-01-04\r\n372,2376,LATAM,toys,partner,108.99,6,0.215,coupon,2024-01-12\r\n373,2204,AMER,grocery,retail,57.19,6,0.063,none,2024-07-18\r\n374,1072,LATAM,home,retail,90.04,1,0.222,bundle,2024-04-28\r\n375,1708,LATAM,grocery,online,101.01,2,0.129,none,2024-01-27\r\n376,1706,EMEA,electronics,online,58.80,5,0.221,none,2024-12-18\r\n377,1343,LATAM,sports,online,17.04,1,0.086,none,2024-05-06\r\n378,1109,APAC,grocery,mobile,136.77,8,0.209,none,2024-09-03\r\n379,1952,EMEA,fashion,online,49.99,8,0.139,none,2024-06-20\r\n380,2473,EMEA,home,o
nline,93.91,7,0.054,none,2024-09-01\r\n381,1215,LATAM,grocery,mobile,33.74,6,0.149,none,2024-03-01\r\n382,1357,EMEA,fashion,online,27.31,7,0.080,loyalty,2024-04-18\r\n383,2079,EMEA,sports,retail,128.21,3,0.062,none,2024-07-22\r\n384,1828,EMEA,home,mobile,62.65,5,0.140,none,2024-11-09\r\n385,2018,AMER,fashion,online,58.06,1,0.149,none,2024-09-06\r\n386,1978,AMER,home,retail,79.97,4,0.035,coupon,2024-07-05\r\n387,2316,EMEA,fashion,online,37.38,6,0.207,none,2024-05-08\r\n388,1270,LATAM,grocery,retail,63.76,8,0.089,coupon,2024-03-13\r\n389,1305,EMEA,fashion,retail,76.68,1,0.190,coupon,2024-02-15\r\n390,1100,AMER,sports,retail,49.39,1,0.138,none,2024-10-26\r\n391,2333,APAC,home,retail,22.94,6,0.154,coupon,2024-11-26\r\n392,2450,EMEA,fashion,online,93.46,7,0.157,none,2024-05-02\r\n393,1333,EMEA,sports,online,26.69,8,0.203,bundle,2024-06-10\r\n394,1626,EMEA,grocery,online,66.36,4,0.238,loyalty,2024-11-26\r\n395,1629,LATAM,fashion,online,175.60,7,0.104,loyalty,2024-01-07\r\n396,1658,AMER,grocery,retail,29.26,4,0.039,none,2024-10-04\r\n397,1477,APAC,electronics,retail,79.63,3,0.023,coupon,2024-11-03\r\n398,1976,AMER,fashion,retail,149.97,8,0.240,bundle,2024-11-01\r\n399,1383,AMER,electronics,retail,31.72,1,0.196,none,2024-10-26\r\n400,1316,APAC,home,online,29.17,3,0.113,none,2024-11-09\r\n401,1403,APAC,grocery,retail,31.49,5,0.102,coupon,2024-06-07\r\n402,1421,APAC,grocery,online,42.84,4,0.178,loyalty,2024-06-06\r\n403,1696,LATAM,home,retail,40.92,3,0.214,none,2024-04-22\r\n404,2087,LATAM,grocery,online,61.17,7,0.204,loyalty,2024-12-28\r\n405,1377,APAC,toys,retail,63.33,8,0.208,none,2024-07-27\r\n406,1025,EMEA,electronics,retail,29.02,1,0.184,coupon,2024-08-18\r\n407,1674,LATAM,home,mobile,38.84,2,0.153,bundle,2024-06-22\r\n408,1568,AMER,sports,retail,49.86,4,0.114,none,2024-02-27\r\n409,2394,EMEA,grocery,retail,147.99,7,0.192,none,2024-05-15\r\n410,2030,EMEA,sports,online,50.00,4,0.201,none,2024-04-23\r\n411,2448,APAC,home,online,210.35,7,0.173,none,2024-05-04\r\n412,1033,A
PAC,electronics,online,34.67,4,0.159,loyalty,2024-04-05\r\n413,1454,APAC,fashion,retail,21.81,3,0.081,none,2024-12-01\r\n414,1439,LATAM,grocery,retail,79.59,8,0.125,loyalty,2024-10-06\r\n415,2391,EMEA,home,retail,68.75,3,0.149,none,2024-03-24\r\n416,1437,EMEA,electronics,retail,76.18,8,0.175,none,2024-03-18\r\n417,2110,LATAM,grocery,retail,153.49,8,0.022,none,2024-09-18\r\n418,1200,EMEA,fashion,retail,62.19,7,0.119,none,2024-11-10\r\n419,1666,LATAM,home,mobile,73.57,1,0.239,none,2024-10-18\r\n420,1060,LATAM,sports,partner,56.18,3,0.182,none,2024-08-06\r\n421,2363,AMER,grocery,retail,78.30,4,0.026,bundle,2024-11-14\r\n422,1392,AMER,home,retail,80.89,3,0.103,coupon,2024-02-02\r\n423,1600,AMER,grocery,online,64.83,8,0.033,coupon,2024-10-08\r\n424,1381,LATAM,fashion,retail,51.84,6,0.247,none,2024-04-20\r\n425,1495,LATAM,fashion,online,30.44,8,0.224,none,2024-01-24\r\n426,2185,EMEA,electronics,retail,155.98,2,0.006,none,2024-05-18\r\n427,2358,AMER,electronics,retail,24.80,3,0.245,none,2024-09-25\r\n428,2019,AMER,home,retail,20.67,2,0.078,bundle,2024-05-18\r\n429,1009,APAC,electronics,retail,35.68,8,0.225,bundle,2024-01-17\r\n430,1894,APAC,home,online,80.79,5,0.061,none,2024-05-04\r\n431,1384,LATAM,sports,retail,116.30,7,0.014,none,2024-08-11\r\n432,2002,APAC,electronics,retail,57.44,1,0.101,loyalty,2024-12-06\r\n433,1317,EMEA,grocery,online,129.54,5,0.145,bundle,2024-09-22\r\n434,1927,EMEA,grocery,online,63.09,3,0.175,loyalty,2024-06-16\r\n435,1723,LATAM,sports,retail,75.56,4,0.026,none,2024-11-16\r\n436,1058,LATAM,home,mobile,49.36,5,0.219,none,2024-07-03\r\n437,2313,LATAM,fashion,online,54.59,4,0.173,loyalty,2024-06-16\r\n438,1815,APAC,toys,retail,77.75,6,0.001,none,2024-07-12\r\n439,1635,APAC,grocery,online,68.61,1,0.102,none,2024-06-09\r\n440,1581,APAC,toys,retail,87.17,3,0.194,none,2024-11-03\r\n441,1271,EMEA,electronics,online,28.16,4,0.213,none,2024-01-21\r\n442,1786,APAC,fashion,online,80.84,8,0.223,none,2024-07-08\r\n443,1287,AMER,toys,online,69.05,1,0.147,bundl
e,2024-02-01\r\n444,1687,APAC,toys,mobile,118.62,2,0.141,none,2024-10-10\r\n445,1242,LATAM,electronics,online,114.18,3,0.085,none,2024-02-02\r\n446,1920,LATAM,grocery,retail,38.92,8,0.066,none,2024-10-28\r\n447,2314,EMEA,toys,retail,32.25,3,0.246,none,2024-11-06\r\n448,2179,LATAM,electronics,mobile,80.09,3,0.174,none,2024-08-01\r\n449,1856,EMEA,grocery,retail,53.97,3,0.049,none,2024-01-25\r\n450,2303,EMEA,electronics,online,78.07,8,0.161,none,2024-09-23\r\n451,1820,AMER,home,retail,17.79,1,0.058,coupon,2024-11-07\r\n452,1547,AMER,home,mobile,57.52,2,0.161,none,2024-11-12\r\n453,1839,APAC,home,online,107.41,5,0.031,none,2024-05-07\r\n454,1474,LATAM,electronics,online,49.96,1,0.125,none,2024-10-13\r\n455,1220,LATAM,home,mobile,94.72,3,0.038,none,2024-08-12\r\n456,2225,EMEA,fashion,online,56.27,5,0.159,coupon,2024-11-24\r\n457,1104,APAC,grocery,online,68.21,6,0.135,none,2024-04-01\r\n458,1960,EMEA,sports,retail,51.81,8,0.233,none,2024-04-13\r\n459,1241,APAC,home,mobile,34.65,2,0.001,bundle,2024-12-07\r\n460,2384,LATAM,grocery,mobile,93.10,1,0.108,loyalty,2024-07-14\r\n461,2257,AMER,grocery,mobile,158.13,8,0.078,none,2024-11-19\r\n462,1301,AMER,electronics,retail,67.87,3,0.093,none,2024-04-23\r\n463,1991,APAC,home,retail,178.25,4,0.111,bundle,2024-08-26\r\n464,1678,LATAM,toys,retail,96.78,7,0.220,none,2024-08-02\r\n465,1256,LATAM,electronics,online,43.86,8,0.238,none,2024-10-25\r\n466,2167,APAC,fashion,online,114.05,2,0.239,none,2024-05-03\r\n467,2399,LATAM,toys,online,50.02,2,0.234,none,2024-03-10\r\n468,1609,LATAM,electronics,retail,32.63,7,0.145,none,2024-09-09\r\n469,1472,AMER,electronics,online,35.25,7,0.245,none,2024-08-14\r\n470,1301,AMER,fashion,mobile,46.03,7,0.073,coupon,2024-10-15\r\n471,1382,LATAM,fashion,online,106.55,3,0.222,none,2024-06-07\r\n472,1187,AMER,grocery,retail,19.67,1,0.149,coupon,2024-12-23\r\n473,1948,EMEA,grocery,mobile,101.80,6,0.006,bundle,2024-01-28\r\n474,1447,LATAM,sports,retail,75.01,8,0.184,none,2024-09-27\r\n475,1844,APAC,home,online
,26.42,5,0.248,none,2024-08-04\r\n476,1314,AMER,toys,mobile,74.25,8,0.064,none,2024-03-01\r\n477,2084,LATAM,electronics,online,129.76,7,0.174,bundle,2024-05-19\r\n478,1286,EMEA,electronics,online,53.91,5,0.122,bundle,2024-10-28\r\n479,1549,APAC,toys,online,39.19,3,0.078,none,2024-12-20\r\n480,1626,EMEA,home,retail,56.47,1,0.160,none,2024-05-03\r\n481,1525,APAC,fashion,mobile,37.07,1,0.017,loyalty,2024-08-10\r\n482,2101,APAC,fashion,retail,37.40,3,0.031,none,2024-03-28\r\n483,1873,EMEA,fashion,retail,64.69,3,0.062,coupon,2024-02-28\r\n484,2456,APAC,sports,online,42.72,4,0.097,none,2024-11-17\r\n485,1987,AMER,electronics,mobile,64.24,5,0.125,none,2024-02-18\r\n486,2267,AMER,sports,partner,61.33,2,0.155,coupon,2024-09-19\r\n487,1936,EMEA,grocery,retail,48.97,2,0.187,none,2024-01-02\r\n488,1542,APAC,toys,retail,22.86,1,0.197,none,2024-06-20\r\n489,1719,LATAM,electronics,online,17.04,5,0.130,none,2024-05-21\r\n490,1684,EMEA,sports,mobile,134.71,2,0.074,loyalty,2024-05-09\r\n491,1065,AMER,sports,retail,40.19,2,0.007,loyalty,2024-02-14\r\n492,1213,EMEA,electronics,retail,116.70,1,0.168,none,2024-08-26\r\n493,1711,APAC,home,retail,30.61,4,0.122,none,2024-07-12\r\n494,1215,LATAM,home,retail,42.60,3,0.208,coupon,2024-08-10\r\n495,2492,LATAM,toys,retail,53.59,2,0.044,none,2024-06-26\r\n496,1831,APAC,electronics,retail,31.60,2,0.216,none,2024-09-23\r\n497,1319,EMEA,toys,retail,35.92,4,0.129,bundle,2024-06-23\r\n498,1079,LATAM,sports,online,19.59,8,0.029,none,2024-05-13\r\n499,2325,LATAM,sports,retail,39.10,3,0.184,none,2024-07-01\r\n500,1682,EMEA,fashion,online,77.55,2,0.142,none,2024-06-08\r\n501,1571,EMEA,electronics,online,46.08,6,0.067,none,2024-06-07\r\n502,1475,LATAM,home,mobile,118.00,4,0.068,none,2024-07-01\r\n503,1203,AMER,home,retail,44.04,8,0.208,none,2024-02-04\r\n504,1262,APAC,electronics,retail,49.96,1,0.172,none,2024-01-10\r\n505,1797,LATAM,home,retail,101.15,3,0.191,none,2024-01-04\r\n506,1542,APAC,electronics,retail,73.52,1,0.223,coupon,2024-12-11\r\n507,1387,A
MER,toys,retail,93.36,6,0.201,coupon,2024-02-11\r\n508,1201,LATAM,grocery,online,39.90,4,0.172,coupon,2024-10-26\r\n509,1823,EMEA,electronics,online,54.82,5,0.006,loyalty,2024-04-09\r\n510,2331,APAC,grocery,retail,30.69,2,0.072,coupon,2024-12-12\r\n511,1977,APAC,electronics,partner,29.56,1,0.163,coupon,2024-04-06\r\n512,1972,LATAM,grocery,mobile,77.40,4,0.149,none,2024-06-12\r\n513,2253,AMER,home,mobile,123.04,1,0.117,bundle,2024-07-28\r\n514,1063,AMER,toys,online,50.39,1,0.109,none,2024-05-06\r\n515,1415,AMER,toys,mobile,47.64,8,0.049,none,2024-05-26\r\n516,1527,AMER,sports,retail,105.56,2,0.156,coupon,2024-07-09\r\n517,1230,EMEA,fashion,retail,47.64,3,0.112,bundle,2024-05-24\r\n518,1184,AMER,fashion,retail,53.09,1,0.031,none,2024-06-02\r\n519,1590,APAC,fashion,retail,53.10,8,0.202,loyalty,2024-01-21\r\n520,1848,EMEA,toys,online,123.22,8,0.071,none,2024-01-03\r\n521,1000,APAC,sports,online,66.59,4,0.230,none,2024-06-16\r\n522,1418,LATAM,grocery,online,105.17,2,0.020,coupon,2024-05-09\r\n523,2398,EMEA,toys,partner,66.47,8,0.088,loyalty,2024-07-13\r\n524,1899,APAC,home,online,88.17,2,0.190,none,2024-05-02\r\n525,1945,AMER,electronics,mobile,62.45,4,0.093,bundle,2024-10-26\r\n526,1882,AMER,home,online,75.80,7,0.101,loyalty,2024-08-23\r\n527,2217,LATAM,fashion,retail,46.57,4,0.211,none,2024-10-07\r\n528,1362,AMER,grocery,partner,77.79,5,0.142,none,2024-01-03\r\n529,1973,EMEA,sports,mobile,125.87,7,0.147,bundle,2024-04-26\r\n530,1136,EMEA,fashion,retail,53.96,1,0.228,none,2024-11-23\r\n531,1737,AMER,fashion,retail,65.61,4,0.016,coupon,2024-01-13\r\n532,1357,EMEA,grocery,retail,62.30,4,0.166,coupon,2024-12-17\r\n533,2189,LATAM,fashion,online,37.00,3,0.216,none,2024-04-11\r\n534,1152,LATAM,grocery,mobile,45.03,3,0.187,none,2024-04-05\r\n535,2125,LATAM,home,online,53.78,6,0.018,none,2024-12-09\r\n536,1815,APAC,grocery,mobile,78.94,7,0.196,none,2024-12-24\r\n537,2462,EMEA,home,retail,32.93,4,0.207,bundle,2024-11-07\r\n538,2482,EMEA,grocery,online,44.07,6,0.050,none,2024-04-
28\r\n539,1088,LATAM,grocery,online,77.72,8,0.095,bundle,2024-06-15\r\n540,1490,AMER,fashion,online,95.32,2,0.152,none,2024-08-05\r\n541,1756,EMEA,home,online,58.72,2,0.185,bundle,2024-05-21\r\n542,2086,APAC,home,partner,119.55,3,0.172,coupon,2024-07-12\r\n543,1462,LATAM,fashion,partner,98.06,8,0.093,coupon,2024-05-11\r\n544,1358,APAC,home,retail,98.38,4,0.122,none,2024-10-20\r\n545,1259,EMEA,sports,online,63.43,8,0.082,none,2024-11-22\r\n546,1663,LATAM,toys,online,62.45,5,0.021,none,2024-12-26\r\n547,1154,LATAM,fashion,retail,77.42,4,0.210,none,2024-12-02\r\n548,2433,APAC,toys,online,36.51,4,0.056,none,2024-09-24\r\n549,1805,EMEA,fashion,online,88.29,1,0.051,coupon,2024-02-02\r\n550,1623,AMER,sports,retail,130.36,1,0.154,bundle,2024-03-20\r\n551,1732,LATAM,sports,mobile,76.95,7,0.087,none,2024-02-01\r\n552,1925,LATAM,grocery,mobile,54.04,6,0.068,none,2024-08-17\r\n553,2107,APAC,fashion,mobile,143.47,2,0.096,none,2024-01-28\r\n554,1409,APAC,grocery,partner,85.67,7,0.107,none,2024-11-28\r\n555,1639,APAC,toys,online,77.21,4,0.124,none,2024-12-13\r\n556,1626,EMEA,home,online,27.73,1,0.127,loyalty,2024-02-02\r\n557,1697,APAC,sports,online,39.18,5,0.037,none,2024-10-04\r\n558,1403,APAC,sports,retail,78.39,1,0.153,none,2024-06-20\r\n559,1493,APAC,home,retail,26.12,1,0.101,none,2024-12-09\r\n560,1946,AMER,home,online,36.59,4,0.185,none,2024-09-21\r\n561,1261,APAC,grocery,online,83.44,6,0.189,none,2024-11-16\r\n562,2201,AMER,toys,retail,49.80,3,0.039,coupon,2024-02-08\r\n563,2187,EMEA,grocery,online,27.59,4,0.180,none,2024-11-17\r\n564,1819,AMER,home,retail,23.79,6,0.120,loyalty,2024-05-27\r\n565,1568,AMER,electronics,online,73.26,8,0.207,none,2024-03-26\r\n566,2130,EMEA,sports,mobile,44.90,5,0.158,none,2024-02-10\r\n567,1965,LATAM,toys,partner,65.02,5,0.144,none,2024-08-28\r\n568,1040,LATAM,electronics,online,34.25,3,0.155,bundle,2024-09-19\r\n569,1463,EMEA,fashion,mobile,35.79,5,0.077,none,2024-01-23\r\n570,2181,AMER,grocery,online,44.66,2,0.117,loyalty,2024-10-18\r\n571,
2325,LATAM,fashion,retail,93.85,4,0.170,coupon,2024-12-16\r\n572,2237,EMEA,fashion,retail,136.73,8,0.075,none,2024-04-06\r\n573,1944,AMER,sports,mobile,40.75,8,0.088,loyalty,2024-11-16\r\n574,1487,AMER,fashion,retail,55.66,3,0.051,bundle,2024-07-07\r\n575,1258,EMEA,grocery,online,71.20,5,0.028,none,2024-07-01\r\n576,2256,AMER,grocery,mobile,45.21,4,0.208,bundle,2024-01-10\r\n577,2087,LATAM,electronics,partner,25.99,2,0.071,loyalty,2024-07-14\r\n578,1559,EMEA,grocery,mobile,80.23,4,0.212,coupon,2024-11-08\r\n579,1678,LATAM,home,retail,68.44,8,0.108,none,2024-02-03\r\n580,1529,LATAM,sports,online,68.99,8,0.095,none,2024-12-01\r\n581,1531,EMEA,grocery,retail,48.16,6,0.211,coupon,2024-12-07\r\n582,1812,EMEA,home,retail,69.48,2,0.158,bundle,2024-06-02\r\n583,1806,APAC,sports,online,18.73,3,0.186,none,2024-02-14\r\n584,1044,EMEA,home,retail,60.22,7,0.005,none,2024-06-16\r\n585,2179,LATAM,grocery,online,68.83,3,0.087,none,2024-04-09\r\n586,1459,LATAM,home,online,28.20,3,0.138,bundle,2024-10-18\r\n587,1827,EMEA,electronics,mobile,59.25,4,0.044,none,2024-06-18\r\n588,1125,LATAM,home,online,61.54,5,0.216,none,2024-12-18\r\n589,2460,AMER,toys,partner,42.80,3,0.200,bundle,2024-03-06\r\n590,1068,APAC,home,mobile,71.64,2,0.038,none,2024-01-13\r\n591,1588,LATAM,toys,online,68.20,4,0.025,coupon,2024-11-23\r\n592,1156,APAC,sports,online,100.08,7,0.198,loyalty,2024-01-06\r\n593,1728,AMER,home,online,77.62,6,0.095,none,2024-03-09\r\n594,1961,EMEA,electronics,online,48.97,3,0.058,none,2024-10-23\r\n595,1204,AMER,home,online,44.17,2,0.217,none,2024-08-06\r\n596,1107,APAC,fashion,online,93.74,7,0.061,none,2024-05-27\r\n597,1917,LATAM,fashion,online,30.09,4,0.005,none,2024-03-20\r\n598,2034,LATAM,electronics,online,95.61,1,0.110,coupon,2024-09-18\r\n599,1424,APAC,electronics,retail,40.34,3,0.143,none,2024-02-18\r\n600,1046,EMEA,home,mobile,68.00,8,0.236,none,2024-01-10\r\n601,1713,EMEA,grocery,mobile,35.75,7,0.201,none,2024-02-13\r\n602,1936,EMEA,electronics,online,76.52,1,0.173,none,2024
-08-04\r\n603,1148,AMER,grocery,online,55.37,5,0.095,none,2024-05-14\r\n604,1061,APAC,electronics,retail,67.51,2,0.249,loyalty,2024-06-05\r\n605,1707,APAC,toys,online,54.55,2,0.041,bundle,2024-12-19\r\n606,1576,EMEA,home,online,62.79,6,0.117,none,2024-07-27\r\n607,1897,AMER,toys,online,146.49,3,0.001,loyalty,2024-12-15\r\n608,1862,LATAM,electronics,online,52.47,3,0.129,none,2024-07-12\r\n609,1183,AMER,home,retail,58.73,7,0.105,coupon,2024-05-16\r\n610,1697,APAC,toys,online,31.87,5,0.142,bundle,2024-03-10\r\n611,1526,EMEA,home,mobile,103.55,1,0.185,none,2024-12-28\r\n612,2021,EMEA,home,partner,107.35,5,0.103,coupon,2024-07-11\r\n613,2190,LATAM,home,online,214.22,6,0.123,loyalty,2024-03-09\r\n614,1531,EMEA,home,online,90.34,5,0.048,none,2024-05-18\r\n615,2336,APAC,grocery,online,40.29,1,0.056,loyalty,2024-12-14\r\n616,1880,LATAM,electronics,online,39.17,3,0.076,none,2024-04-23\r\n617,1135,APAC,home,retail,24.10,1,0.155,bundle,2024-02-02\r\n618,1694,APAC,home,retail,19.68,3,0.050,none,2024-05-03\r\n619,2278,APAC,electronics,online,125.13,3,0.066,coupon,2024-04-28\r\n620,1436,APAC,grocery,online,110.06,6,0.232,none,2024-12-20\r\n621,1358,APAC,grocery,mobile,78.66,1,0.160,coupon,2024-03-07\r\n622,1988,AMER,fashion,mobile,67.26,5,0.003,loyalty,2024-10-11\r\n623,2421,AMER,sports,retail,73.08,7,0.228,loyalty,2024-08-08\r\n624,1323,EMEA,electronics,online,42.89,5,0.061,none,2024-01-27\r\n625,1751,AMER,grocery,online,67.18,3,0.230,none,2024-12-12\r\n626,2179,LATAM,sports,retail,68.88,2,0.205,coupon,2024-09-10\r\n627,2018,AMER,grocery,online,76.64,2,0.111,none,2024-08-21\r\n628,1950,LATAM,home,online,90.55,1,0.137,coupon,2024-06-22\r\n629,1219,LATAM,toys,online,58.39,5,0.097,none,2024-04-12\r\n630,1935,EMEA,home,online,85.47,3,0.086,none,2024-01-06\r\n631,1441,LATAM,home,retail,48.92,4,0.003,none,2024-01-14\r\n632,1671,APAC,sports,online,70.34,3,0.134,none,2024-04-12\r\n633,1838,AMER,grocery,online,43.63,8,0.098,none,2024-08-10\r\n634,1381,LATAM,grocery,online,59.65,4,0.246,bu
ndle,2024-03-12\r\n635,1359,LATAM,fashion,mobile,57.82,4,0.017,none,2024-12-16\r\n636,1769,LATAM,electronics,online,129.40,6,0.030,none,2024-01-18\r\n637,2301,EMEA,fashion,mobile,64.13,4,0.109,bundle,2024-02-19\r\n638,1896,EMEA,home,online,24.37,3,0.120,none,2024-11-04\r\n639,2261,EMEA,grocery,mobile,45.66,1,0.097,loyalty,2024-09-22\r\n640,1335,APAC,toys,online,23.13,8,0.010,none,2024-06-02\r\n641,1420,APAC,sports,retail,49.98,3,0.130,none,2024-10-09\r\n642,2466,APAC,sports,mobile,54.16,5,0.082,none,2024-04-25\r\n643,2141,AMER,sports,online,16.49,3,0.078,none,2024-03-25\r\n644,1044,EMEA,home,retail,111.99,3,0.091,none,2024-06-17\r\n645,1315,AMER,home,retail,71.20,5,0.022,none,2024-01-27\r\n646,2316,EMEA,fashion,online,38.74,1,0.150,none,2024-09-21\r\n647,1747,EMEA,grocery,online,74.01,2,0.131,none,2024-08-19\r\n648,1028,EMEA,grocery,retail,62.51,2,0.094,none,2024-11-15\r\n649,2257,AMER,electronics,online,46.95,2,0.020,bundle,2024-09-04\r\n650,1702,AMER,electronics,retail,33.11,3,0.051,none,2024-06-08\r\n651,1472,AMER,home,online,168.23,5,0.087,none,2024-06-10\r\n652,1355,EMEA,home,retail,37.61,5,0.189,loyalty,2024-01-10\r\n653,1914,EMEA,grocery,mobile,35.06,3,0.153,loyalty,2024-06-24\r\n654,1343,LATAM,grocery,online,90.93,1,0.147,none,2024-05-22\r\n655,1926,AMER,grocery,online,88.26,6,0.075,none,2024-04-27\r\n656,1058,LATAM,toys,retail,54.32,5,0.058,coupon,2024-09-22\r\n657,1889,APAC,home,online,26.19,5,0.188,none,2024-01-04\r\n658,1146,LATAM,home,online,38.38,5,0.079,coupon,2024-09-17\r\n659,1282,LATAM,grocery,retail,28.18,7,0.242,none,2024-03-03\r\n660,2481,APAC,home,online,60.18,1,0.231,none,2024-08-24\r\n661,2295,EMEA,grocery,partner,79.34,3,0.149,bundle,2024-02-22\r\n662,1189,AMER,sports,retail,46.84,3,0.238,none,2024-11-12\r\n663,2053,AMER,electronics,retail,99.69,2,0.045,loyalty,2024-02-12\r\n664,1763,LATAM,fashion,retail,21.80,2,0.041,coupon,2024-11-14\r\n665,1858,LATAM,home,online,133.63,7,0.102,none,2024-06-05\r\n666,2255,AMER,electronics,mobile,68.84,6,0.
175,coupon,2024-06-25\r\n667,1741,AMER,grocery,online,152.87,6,0.249,bundle,2024-09-24\r\n668,1390,APAC,electronics,online,33.65,2,0.246,coupon,2024-09-22\r\n669,1963,AMER,electronics,partner,56.83,8,0.087,loyalty,2024-09-06\r\n670,1915,LATAM,fashion,partner,75.76,4,0.183,none,2024-09-18\r\n671,1758,AMER,grocery,partner,31.33,4,0.108,bundle,2024-10-02\r\n672,1128,LATAM,grocery,mobile,66.26,7,0.137,none,2024-10-27\r\n673,1732,LATAM,toys,retail,125.96,8,0.200,coupon,2024-01-07\r\n674,2241,APAC,grocery,online,56.52,3,0.226,loyalty,2024-06-14\r\n675,2129,APAC,grocery,retail,32.67,2,0.245,loyalty,2024-01-12\r\n676,1478,EMEA,sports,retail,67.43,6,0.247,bundle,2024-03-15\r\n677,1632,LATAM,toys,online,61.70,2,0.044,loyalty,2024-03-20\r\n678,2367,AMER,home,mobile,46.31,5,0.044,none,2024-11-03\r\n679,1489,AMER,grocery,retail,61.29,8,0.122,none,2024-06-21\r\n680,1292,LATAM,home,partner,33.80,7,0.028,none,2024-07-19\r\n681,1353,EMEA,sports,online,94.11,2,0.159,bundle,2024-11-14\r\n682,1413,LATAM,sports,retail,68.88,8,0.171,none,2024-07-07\r\n683,1929,LATAM,electronics,retail,40.23,1,0.226,none,2024-05-13\r\n684,2266,LATAM,electronics,partner,62.70,3,0.139,none,2024-02-27\r\n685,2209,AMER,toys,online,69.23,4,0.053,none,2024-01-18\r\n686,1017,AMER,electronics,retail,42.66,6,0.153,bundle,2024-11-26\r\n687,1788,AMER,grocery,retail,98.93,3,0.099,bundle,2024-01-27\r\n688,2143,AMER,home,mobile,35.16,5,0.111,loyalty,2024-04-03\r\n689,1263,AMER,sports,retail,21.26,1,0.009,none,2024-12-01\r\n690,1744,EMEA,grocery,retail,74.81,7,0.081,none,2024-05-22\r\n691,1135,APAC,electronics,online,43.86,4,0.021,bundle,2024-01-01\r\n692,1494,AMER,grocery,online,35.20,7,0.044,none,2024-06-13\r\n693,1027,APAC,sports,mobile,68.66,1,0.181,loyalty,2024-07-06\r\n694,1691,LATAM,electronics,online,32.10,5,0.077,coupon,2024-12-02\r\n695,2131,APAC,toys,retail,52.83,2,0.016,loyalty,2024-12-06\r\n696,2114,AMER,fashion,mobile,61.72,3,0.156,coupon,2024-12-05\r\n697,1411,LATAM,fashion,online,27.55,7,0.107,coupon,202
4-11-20\r\n698,2354,LATAM,electronics,online,50.92,5,0.106,none,2024-11-06\r\n699,1983,LATAM,sports,online,179.97,1,0.212,coupon,2024-09-18\r\n700,1837,LATAM,toys,online,81.25,6,0.085,none,2024-04-15\r\n701,2155,APAC,home,online,69.06,2,0.125,coupon,2024-04-20\r\n702,1742,AMER,grocery,online,76.52,6,0.148,none,2024-08-08\r\n703,1455,APAC,home,online,61.90,6,0.243,none,2024-08-04\r\n704,1277,AMER,fashion,online,147.68,2,0.176,none,2024-06-04\r\n705,2149,EMEA,electronics,retail,39.17,6,0.110,coupon,2024-09-07\r\n706,2404,EMEA,grocery,retail,51.54,2,0.186,bundle,2024-03-13\r\n707,2082,APAC,sports,partner,46.97,3,0.174,none,2024-04-22\r\n708,1251,EMEA,home,online,44.86,3,0.055,coupon,2024-06-02\r\n709,2305,AMER,sports,retail,14.91,7,0.038,none,2024-10-23\r\n710,1049,AMER,toys,retail,84.03,2,0.238,coupon,2024-05-21\r\n711,2235,AMER,electronics,mobile,59.99,7,0.127,none,2024-09-07\r\n712,1701,LATAM,grocery,partner,77.59,2,0.167,none,2024-09-23\r\n713,2244,LATAM,grocery,online,21.29,5,0.223,none,2024-05-13\r\n714,1428,APAC,sports,retail,44.58,1,0.061,none,2024-10-17\r\n715,1710,APAC,electronics,online,34.51,1,0.202,none,2024-05-10\r\n716,2208,AMER,fashion,retail,20.14,8,0.090,coupon,2024-12-12\r\n717,1187,AMER,sports,mobile,133.79,1,0.177,none,2024-12-05\r\n718,1605,APAC,sports,retail,76.65,2,0.137,loyalty,2024-01-27\r\n719,1329,APAC,fashion,online,32.66,1,0.135,bundle,2024-10-14\r\n720,1355,EMEA,grocery,partner,38.57,5,0.016,coupon,2024-07-03\r\n721,2449,LATAM,toys,retail,102.15,2,0.007,none,2024-09-06\r\n722,1710,APAC,fashion,online,109.25,1,0.092,none,2024-12-08\r\n723,2050,APAC,fashion,retail,63.75,7,0.005,none,2024-10-24\r\n724,1686,LATAM,home,online,22.89,6,0.026,none,2024-01-08\r\n725,1029,EMEA,fashion,online,47.41,4,0.049,coupon,2024-11-15\r\n726,1236,AMER,grocery,retail,40.80,2,0.077,none,2024-01-22\r\n727,2397,LATAM,grocery,retail,49.90,2,0.102,coupon,2024-11-16\r\n728,1024,APAC,sports,online,85.06,1,0.059,none,2024-05-16\r\n729,1250,APAC,electronics,online,89.82
,5,0.034,coupon,2024-07-04\r\n730,2377,AMER,fashion,online,32.93,7,0.080,bundle,2024-07-03\r\n731,1933,EMEA,toys,mobile,51.06,7,0.060,none,2024-04-02\r\n732,1475,LATAM,sports,retail,146.58,7,0.212,none,2024-08-08\r\n733,1436,APAC,grocery,online,66.37,6,0.179,bundle,2024-08-12\r\n734,1494,AMER,fashion,mobile,98.80,1,0.018,none,2024-03-23\r\n735,2421,AMER,electronics,retail,87.07,2,0.210,coupon,2024-01-20\r\n736,2441,EMEA,toys,mobile,71.30,5,0.049,none,2024-07-20\r\n737,2162,EMEA,home,online,83.24,1,0.236,none,2024-06-21\r\n738,1567,AMER,grocery,online,53.69,6,0.161,none,2024-11-10\r\n739,1936,EMEA,grocery,retail,51.89,5,0.167,none,2024-06-25\r\n740,1144,APAC,fashion,partner,42.62,8,0.097,none,2024-08-07\r\n741,1826,LATAM,home,partner,81.05,4,0.052,none,2024-02-14\r\n742,2222,LATAM,toys,online,36.34,1,0.023,coupon,2024-05-23\r\n743,1449,EMEA,toys,online,118.93,7,0.185,coupon,2024-11-22\r\n744,1357,EMEA,electronics,mobile,90.82,6,0.215,coupon,2024-10-06\r\n745,2223,EMEA,fashion,partner,75.81,5,0.014,none,2024-11-01\r\n746,1195,AMER,toys,online,30.56,8,0.020,none,2024-06-14\r\n747,2391,EMEA,home,online,71.26,8,0.076,none,2024-12-18\r\n748,1607,LATAM,grocery,online,145.08,8,0.060,loyalty,2024-05-09\r\n749,2162,EMEA,grocery,online,58.11,8,0.235,none,2024-11-23\r\n750,1121,EMEA,sports,online,54.82,3,0.222,none,2024-12-22\r\n751,1938,APAC,home,retail,69.22,8,0.022,bundle,2024-08-26\r\n752,2113,LATAM,electronics,online,67.29,8,0.110,coupon,2024-03-17\r\n753,2219,LATAM,grocery,mobile,88.62,5,0.163,none,2024-12-19\r\n754,2457,EMEA,grocery,partner,35.54,1,0.203,none,2024-01-09\r\n755,1631,APAC,home,online,68.47,4,0.087,loyalty,2024-05-09\r\n756,1226,AMER,toys,partner,80.51,2,0.032,none,2024-10-20\r\n757,1859,AMER,fashion,online,98.79,1,0.232,none,2024-05-21\r\n758,2487,LATAM,electronics,retail,34.28,3,0.013,none,2024-10-13\r\n759,2333,APAC,fashion,online,55.52,6,0.220,coupon,2024-05-08\r\n760,2103,LATAM,sports,retail,31.36,6,0.062,none,2024-10-20\r\n761,1765,EMEA,fashion,mobile
,49.10,7,0.186,bundle,2024-12-09\r\n762,1910,LATAM,sports,online,138.26,7,0.141,none,2024-03-27\r\n763,1491,EMEA,grocery,online,34.90,5,0.072,loyalty,2024-06-16\r\n764,1919,EMEA,electronics,mobile,46.51,3,0.097,coupon,2024-08-24\r\n765,1989,LATAM,fashion,mobile,80.09,3,0.058,none,2024-08-07\r\n766,2160,LATAM,electronics,online,38.88,1,0.229,coupon,2024-04-06\r\n767,1350,LATAM,electronics,online,33.51,5,0.155,bundle,2024-09-04\r\n768,1890,LATAM,fashion,online,82.80,1,0.085,coupon,2024-10-22\r\n769,1946,AMER,sports,retail,21.78,3,0.073,coupon,2024-05-10\r\n770,1289,LATAM,fashion,online,48.47,1,0.181,none,2024-10-22\r\n771,1607,LATAM,electronics,online,15.01,1,0.100,bundle,2024-11-18\r\n772,2320,LATAM,grocery,mobile,76.49,1,0.015,coupon,2024-10-03\r\n773,1323,EMEA,grocery,retail,68.17,1,0.116,none,2024-05-11\r\n774,2331,APAC,grocery,online,53.00,5,0.001,bundle,2024-06-12\r\n775,1860,EMEA,home,retail,14.90,3,0.155,none,2024-08-20\r\n776,1238,AMER,electronics,online,29.16,1,0.094,none,2024-10-22\r\n777,2050,APAC,sports,retail,62.93,4,0.182,none,2024-02-02\r\n778,1975,EMEA,home,online,36.02,7,0.052,none,2024-08-13\r\n779,1263,AMER,grocery,retail,128.56,2,0.194,none,2024-09-24\r\n780,2176,AMER,fashion,retail,120.18,7,0.174,none,2024-10-28\r\n781,1396,EMEA,electronics,online,28.47,2,0.152,none,2024-02-12\r\n782,1396,EMEA,fashion,online,28.60,7,0.088,bundle,2024-07-06\r\n783,1079,LATAM,grocery,online,77.11,7,0.249,none,2024-09-17\r\n784,1186,APAC,grocery,online,33.20,4,0.009,none,2024-12-14\r\n785,1675,LATAM,grocery,online,112.52,8,0.166,none,2024-02-21\r\n786,1415,AMER,electronics,online,22.87,3,0.116,coupon,2024-10-28\r\n787,1291,EMEA,home,mobile,21.90,1,0.078,coupon,2024-07-04\r\n788,1871,APAC,electronics,retail,35.78,6,0.069,coupon,2024-05-17\r\n789,1471,EMEA,electronics,online,193.97,5,0.073,loyalty,2024-06-25\r\n790,1115,AMER,electronics,online,96.63,6,0.209,none,2024-04-15\r\n791,2460,AMER,grocery,online,46.12,5,0.010,none,2024-07-16\r\n792,1848,EMEA,fashion,retail,15
2.89,1,0.007,none,2024-09-16\r\n793,1477,APAC,grocery,online,53.30,8,0.088,none,2024-06-13\r\n794,1294,APAC,toys,retail,41.18,3,0.038,none,2024-01-05\r\n795,1269,LATAM,electronics,retail,88.82,7,0.044,loyalty,2024-11-02\r\n796,2374,LATAM,grocery,online,62.93,7,0.151,bundle,2024-03-12\r\n797,2440,APAC,grocery,retail,17.71,2,0.099,none,2024-01-07\r\n798,1908,AMER,grocery,retail,28.47,5,0.005,loyalty,2024-09-10\r\n799,1234,AMER,electronics,online,40.00,8,0.219,none,2024-04-26\r\n800,1104,APAC,grocery,online,56.41,1,0.199,none,2024-09-26\r\n801,2307,LATAM,grocery,online,107.38,4,0.064,none,2024-05-06\r\n802,1377,APAC,electronics,online,47.16,5,0.210,coupon,2024-11-16\r\n803,1068,APAC,grocery,retail,123.42,8,0.155,none,2024-10-25\r\n804,1563,EMEA,home,online,30.66,8,0.047,coupon,2024-01-08\r\n805,1114,APAC,fashion,retail,39.99,4,0.198,bundle,2024-06-25\r\n806,1176,EMEA,electronics,online,20.68,7,0.191,none,2024-09-07\r\n807,2290,LATAM,grocery,mobile,74.61,7,0.135,none,2024-06-08\r\n808,1794,AMER,home,online,58.07,3,0.172,coupon,2024-06-24\r\n809,1345,AMER,fashion,online,66.75,2,0.021,none,2024-12-15\r\n810,1339,EMEA,fashion,retail,72.17,3,0.102,none,2024-05-06\r\n811,1957,AMER,home,retail,112.03,6,0.209,none,2024-03-27\r\n812,1701,LATAM,sports,retail,70.82,6,0.187,none,2024-06-07\r\n813,2028,APAC,grocery,retail,43.36,8,0.242,none,2024-01-04\r\n814,1570,AMER,grocery,online,43.86,4,0.090,none,2024-08-25\r\n815,1722,EMEA,grocery,mobile,28.32,6,0.132,loyalty,2024-09-14\r\n816,1996,APAC,toys,online,26.70,7,0.154,bundle,2024-11-04\r\n817,2344,LATAM,sports,online,50.54,7,0.031,loyalty,2024-03-25\r\n818,2345,LATAM,toys,retail,83.27,1,0.024,none,2024-11-21\r\n819,1599,APAC,electronics,retail,66.88,6,0.109,coupon,2024-02-10\r\n820,1106,AMER,home,online,65.33,2,0.112,coupon,2024-01-19\r\n821,2498,LATAM,toys,online,30.46,6,0.129,coupon,2024-12-28\r\n822,1986,LATAM,grocery,online,42.71,4,0.210,loyalty,2024-07-28\r\n823,1704,AMER,grocery,mobile,92.13,2,0.139,none,2024-10-16\r\n824,244
5,APAC,home,online,50.92,4,0.020,none,2024-10-25\r\n825,2308,AMER,grocery,online,35.47,3,0.008,none,2024-12-19\r\n826,1071,AMER,fashion,partner,38.44,8,0.009,none,2024-07-28\r\n827,2073,AMER,sports,retail,40.61,4,0.118,none,2024-09-06\r\n828,1629,LATAM,electronics,online,32.07,2,0.178,coupon,2024-07-27\r\n829,2241,APAC,grocery,online,49.97,2,0.076,none,2024-06-01\r\n830,1577,AMER,home,retail,29.73,8,0.241,loyalty,2024-05-06\r\n831,1848,EMEA,electronics,retail,115.15,5,0.222,none,2024-02-03\r\n832,1294,APAC,toys,partner,57.77,4,0.151,none,2024-12-06\r\n833,1432,APAC,fashion,online,46.77,5,0.195,loyalty,2024-12-16\r\n834,1063,AMER,fashion,retail,59.25,3,0.214,loyalty,2024-01-15\r\n835,1497,EMEA,sports,online,70.55,1,0.031,none,2024-02-05\r\n836,2370,EMEA,grocery,retail,40.07,7,0.049,none,2024-04-01\r\n837,1339,EMEA,electronics,mobile,39.60,6,0.132,none,2024-05-02\r\n838,2341,EMEA,sports,mobile,19.32,8,0.057,bundle,2024-06-18\r\n839,1571,EMEA,home,online,50.31,4,0.060,none,2024-05-12\r\n840,1777,AMER,electronics,retail,53.88,2,0.202,none,2024-10-19\r\n841,2122,AMER,toys,retail,43.64,5,0.084,bundle,2024-02-05\r\n842,1254,APAC,sports,online,25.69,6,0.228,none,2024-03-09\r\n843,2490,AMER,grocery,online,47.10,4,0.105,loyalty,2024-07-08\r\n844,1189,AMER,toys,retail,12.16,2,0.033,none,2024-04-06\r\n845,1442,EMEA,fashion,retail,25.16,4,0.166,coupon,2024-09-23\r\n846,1014,EMEA,electronics,partner,103.11,3,0.086,none,2024-01-15\r\n847,1068,APAC,electronics,retail,58.31,6,0.159,bundle,2024-12-05\r\n848,1271,EMEA,toys,online,77.36,7,0.173,none,2024-01-27\r\n849,1899,APAC,toys,partner,28.25,1,0.086,loyalty,2024-04-12\r\n850,1070,EMEA,home,retail,89.89,2,0.011,none,2024-10-06\r\n851,2171,EMEA,home,online,60.29,5,0.207,none,2024-06-01\r\n852,2006,APAC,electronics,online,76.40,7,0.156,coupon,2024-12-13\r\n853,1773,LATAM,grocery,online,95.81,5,0.122,loyalty,2024-04-28\r\n854,1460,LATAM,home,partner,83.62,8,0.056,none,2024-06-11\r\n855,1040,LATAM,grocery,mobile,87.68,2,0.212,bundle,202
4-10-16\r\n856,1120,LATAM,electronics,online,115.34,3,0.014,bundle,2024-08-13\r\n857,1793,LATAM,sports,online,34.70,6,0.194,none,2024-07-18\r\n858,2392,EMEA,electronics,retail,104.83,1,0.244,loyalty,2024-01-05\r\n859,2189,LATAM,grocery,online,118.71,6,0.106,none,2024-09-28\r\n860,2334,LATAM,electronics,retail,18.53,6,0.104,none,2024-12-14\r\n861,2424,LATAM,fashion,online,42.26,3,0.142,coupon,2024-12-11\r\n862,1244,LATAM,grocery,retail,49.38,1,0.175,none,2024-01-14\r\n863,1113,EMEA,fashion,retail,79.56,8,0.171,bundle,2024-12-27\r\n864,2445,APAC,sports,online,79.06,7,0.169,none,2024-04-07\r\n865,1017,AMER,electronics,online,62.38,5,0.069,coupon,2024-11-28\r\n866,2479,EMEA,grocery,online,37.77,4,0.201,none,2024-11-11\r\n867,1857,LATAM,fashion,retail,104.68,5,0.030,none,2024-08-03\r\n868,1461,LATAM,sports,online,35.61,8,0.031,bundle,2024-08-17\r\n869,1871,APAC,grocery,online,134.92,6,0.209,none,2024-01-01\r\n870,1475,LATAM,electronics,mobile,28.03,1,0.003,none,2024-04-05\r\n871,1860,EMEA,home,mobile,140.17,2,0.234,none,2024-11-17\r\n872,2433,APAC,grocery,mobile,47.09,5,0.224,none,2024-11-14\r\n873,2200,LATAM,home,retail,29.99,4,0.095,coupon,2024-06-23\r\n874,1002,EMEA,fashion,retail,46.56,8,0.189,coupon,2024-06-18\r\n875,1925,LATAM,grocery,online,54.75,1,0.047,none,2024-06-15\r\n876,2343,EMEA,home,online,70.83,4,0.070,loyalty,2024-02-11\r\n877,2472,AMER,toys,partner,45.60,1,0.134,none,2024-04-15\r\n878,1645,EMEA,sports,online,90.24,8,0.229,coupon,2024-02-14\r\n879,1366,APAC,electronics,online,33.82,7,0.024,bundle,2024-05-24\r\n880,2140,AMER,toys,online,103.11,4,0.164,bundle,2024-02-19\r\n881,2432,AMER,fashion,retail,79.79,5,0.069,bundle,2024-03-24\r\n882,1847,LATAM,home,partner,79.51,6,0.067,none,2024-05-04\r\n883,1134,APAC,home,retail,53.09,1,0.007,none,2024-09-12\r\n884,1262,APAC,sports,online,37.35,1,0.227,coupon,2024-09-18\r\n885,1508,LATAM,fashion,retail,78.31,8,0.016,none,2024-03-04\r\n886,1459,LATAM,grocery,retail,36.61,7,0.159,coupon,2024-04-22\r\n887,1562,AMER,
home,retail,52.47,8,0.161,loyalty,2024-04-16\r\n888,2250,AMER,fashion,online,25.36,4,0.020,bundle,2024-08-01\r\n889,1596,EMEA,sports,mobile,26.52,4,0.158,none,2024-11-04\r\n890,1320,EMEA,toys,retail,70.85,1,0.225,loyalty,2024-10-13\r\n891,1907,EMEA,grocery,online,92.13,7,0.061,coupon,2024-12-16\r\n892,1725,APAC,toys,retail,52.02,5,0.070,none,2024-07-04\r\n893,2241,APAC,electronics,retail,84.79,3,0.044,none,2024-12-12\r\n894,2321,APAC,home,online,75.41,4,0.071,none,2024-06-20\r\n895,1152,LATAM,electronics,online,86.11,5,0.181,none,2024-04-23\r\n896,1785,EMEA,grocery,online,67.67,4,0.045,none,2024-05-18\r\n897,1342,LATAM,sports,retail,73.78,6,0.035,none,2024-09-25\r\n898,2072,AMER,electronics,retail,172.84,6,0.219,none,2024-02-24\r\n899,2340,EMEA,home,online,59.06,7,0.069,none,2024-04-15\r\n900,2238,AMER,grocery,online,101.52,8,0.040,bundle,2024-04-05\r\n901,1381,LATAM,sports,retail,26.90,8,0.130,coupon,2024-03-12\r\n902,2271,LATAM,fashion,retail,115.09,6,0.199,none,2024-12-08\r\n903,2046,APAC,home,retail,78.49,4,0.179,none,2024-02-06\r\n904,1866,EMEA,grocery,online,88.70,5,0.135,none,2024-10-11\r\n905,2110,LATAM,grocery,online,56.55,2,0.116,none,2024-09-18\r\n906,1051,EMEA,grocery,online,55.07,8,0.153,none,2024-06-06\r\n907,1253,AMER,electronics,online,55.59,8,0.087,none,2024-04-28\r\n908,2302,APAC,home,online,76.39,8,0.203,none,2024-03-21\r\n909,1445,APAC,grocery,online,82.08,5,0.154,none,2024-08-16\r\n910,1029,EMEA,fashion,online,22.85,2,0.073,loyalty,2024-09-16\r\n911,1893,APAC,home,online,30.55,4,0.066,none,2024-05-17\r\n912,1535,AMER,fashion,partner,54.21,1,0.118,none,2024-12-11\r\n913,1573,AMER,fashion,online,39.07,3,0.024,none,2024-07-16\r\n914,1616,APAC,home,mobile,35.18,2,0.244,coupon,2024-09-28\r\n915,2205,AMER,home,retail,98.99,6,0.042,none,2024-09-05\r\n916,1183,AMER,sports,online,84.49,8,0.083,none,2024-10-11\r\n917,1782,LATAM,grocery,online,47.19,8,0.003,none,2024-09-26\r\n918,1129,LATAM,sports,retail,78.23,2,0.038,none,2024-07-05\r\n919,1875,EMEA,fashi
on,online,51.08,5,0.195,none,2024-05-20\r\n920,1630,APAC,toys,online,26.63,4,0.080,none,2024-02-06\r\n921,2297,EMEA,sports,mobile,86.62,3,0.066,none,2024-03-27\r\n922,2015,APAC,sports,online,177.36,5,0.166,none,2024-03-03\r\n923,1891,APAC,toys,online,33.68,1,0.042,coupon,2024-06-18\r\n924,2091,LATAM,electronics,mobile,73.01,8,0.005,none,2024-11-04\r\n925,2415,AMER,electronics,mobile,111.08,8,0.156,loyalty,2024-06-08\r\n926,1751,AMER,home,online,41.05,4,0.044,loyalty,2024-11-16\r\n927,2275,LATAM,electronics,online,63.50,5,0.017,none,2024-05-16\r\n928,1601,APAC,home,online,27.89,6,0.178,bundle,2024-08-23\r\n929,2016,LATAM,fashion,online,28.98,6,0.140,bundle,2024-09-24\r\n930,1804,AMER,grocery,retail,34.27,5,0.226,loyalty,2024-12-22\r\n931,1059,AMER,toys,retail,78.39,1,0.014,loyalty,2024-05-28\r\n932,1229,LATAM,grocery,retail,38.47,2,0.128,none,2024-07-14\r\n933,1380,AMER,home,online,38.10,6,0.071,none,2024-06-08\r\n934,1757,EMEA,toys,online,66.90,2,0.141,none,2024-08-24\r\n935,2129,APAC,electronics,online,40.40,4,0.075,none,2024-07-21\r\n936,1140,LATAM,home,online,49.96,2,0.111,none,2024-08-18\r\n937,1396,EMEA,electronics,online,253.19,4,0.115,none,2024-10-07\r\n938,1924,AMER,fashion,retail,46.09,4,0.250,none,2024-12-22\r\n939,1163,AMER,fashion,online,41.02,5,0.154,none,2024-06-27\r\n940,1565,AMER,fashion,online,52.46,8,0.034,none,2024-10-16\r\n941,1276,AMER,fashion,retail,63.60,7,0.234,none,2024-05-17\r\n942,2027,EMEA,home,retail,45.40,6,0.200,coupon,2024-01-13\r\n943,2293,LATAM,grocery,online,54.82,3,0.225,coupon,2024-03-13\r\n944,1129,LATAM,electronics,retail,54.93,7,0.170,coupon,2024-03-23\r\n945,1744,EMEA,fashion,mobile,49.25,1,0.157,bundle,2024-09-23\r\n946,1690,LATAM,grocery,online,53.75,3,0.153,bundle,2024-12-06\r\n947,1674,LATAM,fashion,online,30.94,6,0.126,loyalty,2024-07-20\r\n948,2128,EMEA,electronics,retail,55.23,5,0.169,none,2024-02-26\r\n949,1876,LATAM,toys,retail,35.74,5,0.164,none,2024-08-13\r\n950,2082,APAC,grocery,retail,141.49,3,0.218,none,2024-08-
14\r\n951,2446,LATAM,electronics,online,44.43,5,0.138,bundle,2024-05-11\r\n952,2137,LATAM,electronics,online,39.03,3,0.217,none,2024-10-26\r\n953,1623,AMER,fashion,retail,55.01,4,0.129,none,2024-02-27\r\n954,2253,AMER,electronics,online,78.13,6,0.088,bundle,2024-09-11\r\n955,1456,APAC,grocery,retail,93.62,8,0.038,none,2024-12-06\r\n956,2207,APAC,toys,mobile,67.40,6,0.244,none,2024-06-22\r\n957,1165,AMER,home,mobile,120.83,8,0.201,none,2024-09-02\r\n958,1020,APAC,fashion,retail,35.32,5,0.188,none,2024-10-16\r\n959,1457,EMEA,fashion,retail,50.47,7,0.172,none,2024-10-21\r\n960,1491,EMEA,grocery,retail,46.21,4,0.059,coupon,2024-06-28\r\n961,2313,LATAM,home,retail,116.82,1,0.155,none,2024-11-15\r\n962,1846,APAC,sports,online,102.69,6,0.234,none,2024-10-24\r\n963,2458,EMEA,toys,retail,61.33,7,0.193,loyalty,2024-06-12\r\n964,1542,APAC,electronics,online,43.95,1,0.204,none,2024-04-13\r\n965,2443,LATAM,sports,retail,62.14,3,0.155,none,2024-01-26\r\n966,1200,EMEA,grocery,retail,52.94,6,0.132,none,2024-03-24\r\n967,1090,AMER,home,mobile,27.34,2,0.037,none,2024-10-15\r\n968,1045,LATAM,grocery,online,147.85,8,0.123,none,2024-06-28\r\n969,1290,EMEA,fashion,online,80.06,4,0.238,none,2024-04-25\r\n970,2101,APAC,fashion,retail,21.10,1,0.177,none,2024-04-23\r\n971,1965,LATAM,electronics,online,22.54,4,0.098,none,2024-03-03\r\n972,1069,APAC,electronics,retail,148.74,5,0.205,bundle,2024-12-28\r\n973,2110,LATAM,home,online,52.34,6,0.128,none,2024-08-19\r\n974,2107,APAC,home,online,78.66,8,0.053,bundle,2024-04-18\r\n975,2495,EMEA,grocery,retail,42.21,4,0.175,coupon,2024-03-09\r\n976,2404,EMEA,electronics,online,29.13,1,0.077,loyalty,2024-05-03\r\n977,1386,AMER,electronics,retail,66.34,3,0.148,coupon,2024-12-13\r\n978,1865,LATAM,grocery,retail,47.79,5,0.235,bundle,2024-12-08\r\n979,1094,LATAM,electronics,mobile,60.22,5,0.027,none,2024-09-25\r\n980,1305,EMEA,fashion,online,35.67,5,0.129,none,2024-06-06\r\n981,1488,AMER,home,retail,52.85,8,0.134,none,2024-12-04\r\n982,1822,EMEA,grocery,reta
il,184.38,4,0.080,bundle,2024-10-10\r\n983,2141,AMER,grocery,retail,53.05,3,0.247,coupon,2024-05-13\r\n984,1428,APAC,fashion,online,60.64,6,0.074,loyalty,2024-12-15\r\n985,2436,LATAM,home,online,22.91,4,0.148,coupon,2024-02-18\r\n986,1601,APAC,fashion,mobile,37.32,1,0.185,none,2024-07-22\r\n987,1692,LATAM,fashion,online,65.63,1,0.063,none,2024-01-02\r\n988,1783,AMER,sports,online,42.98,1,0.225,none,2024-04-28\r\n989,1459,LATAM,fashion,retail,122.29,7,0.218,loyalty,2024-06-04\r\n990,1804,AMER,grocery,retail,41.72,6,0.034,coupon,2024-09-08\r\n991,1226,AMER,grocery,online,59.54,3,0.012,none,2024-09-09\r\n992,2208,AMER,fashion,online,66.75,3,0.121,coupon,2024-07-03\r\n993,1393,LATAM,toys,retail,38.96,5,0.083,none,2024-08-22\r\n994,2019,AMER,home,online,114.84,6,0.035,coupon,2024-11-07\r\n995,1943,AMER,grocery,retail,41.74,5,0.184,none,2024-10-25\r\n996,1510,EMEA,electronics,retail,33.12,1,0.059,bundle,2024-11-19\r\n997,1785,EMEA,home,online,87.07,2,0.130,bundle,2024-10-27\r\n998,2356,LATAM,electronics,online,73.51,5,0.170,none,2024-06-07\r\n999,1714,APAC,electronics,online,32.51,4,0.231,coupon,2024-08-03\r\n1000,1607,LATAM,grocery,online,119.65,1,0.189,bundle,2024-06-11\r\n1001,2468,EMEA,home,online,29.91,5,0.228,loyalty,2024-02-27\r\n1002,2277,EMEA,electronics,retail,49.23,7,0.150,coupon,2024-02-04\r\n1003,1177,LATAM,electronics,online,45.58,6,0.102,none,2024-09-23\r\n1004,2328,EMEA,home,online,44.78,3,0.139,coupon,2024-10-19\r\n1005,1285,EMEA,fashion,online,179.93,4,0.145,none,2024-02-16\r\n1006,1241,APAC,fashion,online,79.72,4,0.170,bundle,2024-12-23\r\n1007,1976,AMER,toys,online,74.38,7,0.009,none,2024-08-28\r\n1008,1161,AMER,electronics,retail,38.83,3,0.237,none,2024-10-07\r\n1009,1844,APAC,fashion,retail,21.20,3,0.239,none,2024-04-05\r\n1010,1231,AMER,grocery,online,32.34,1,0.133,none,2024-11-15\r\n1011,1894,APAC,sports,retail,63.30,7,0.227,none,2024-10-17\r\n1012,1404,EMEA,fashion,partner,85.79,6,0.026,bundle,2024-04-26\r\n1013,2021,EMEA,grocery,retail,69.59,4,0.
116,none,2024-06-11\r\n1014,1739,AMER,grocery,retail,120.78,2,0.034,none,2024-11-16\r\n1015,1284,APAC,grocery,retail,91.92,6,0.079,coupon,2024-08-01\r\n1016,1473,LATAM,home,online,35.06,2,0.237,none,2024-08-05\r\n1017,2497,AMER,sports,online,12.41,3,0.038,bundle,2024-10-05\r\n1018,2111,EMEA,fashion,mobile,59.69,4,0.222,loyalty,2024-01-18\r\n1019,2432,AMER,grocery,retail,57.38,8,0.002,coupon,2024-09-14\r\n1020,1130,LATAM,electronics,online,48.39,3,0.006,none,2024-07-06\r\n1021,1448,EMEA,electronics,retail,38.10,6,0.141,none,2024-07-01\r\n1022,1433,EMEA,sports,online,47.00,8,0.152,none,2024-07-12\r\n1023,2096,LATAM,grocery,retail,51.69,3,0.143,bundle,2024-09-16\r\n1024,1373,LATAM,electronics,retail,62.05,6,0.205,none,2024-10-07\r\n1025,1862,LATAM,electronics,retail,41.04,5,0.016,coupon,2024-09-11\r\n1026,1211,EMEA,home,retail,48.04,7,0.186,coupon,2024-12-28\r\n1027,1412,AMER,electronics,retail,96.12,6,0.139,none,2024-04-08\r\n1028,1940,APAC,sports,retail,170.26,1,0.174,coupon,2024-03-20\r\n1029,1249,EMEA,fashion,online,45.08,8,0.151,none,2024-07-16\r\n1030,1975,EMEA,electronics,retail,44.16,5,0.047,none,2024-02-04\r\n1031,2235,AMER,electronics,online,18.88,2,0.133,none,2024-10-20\r\n1032,2167,APAC,fashion,retail,20.88,1,0.104,none,2024-09-21\r\n1033,1397,LATAM,grocery,mobile,210.99,7,0.107,none,2024-05-09\r\n1034,1550,APAC,electronics,retail,26.73,2,0.073,bundle,2024-10-26\r\n1035,1933,EMEA,home,mobile,55.92,8,0.177,loyalty,2024-01-05\r\n1036,2268,EMEA,grocery,retail,51.78,4,0.231,coupon,2024-05-12\r\n1037,2055,AMER,electronics,retail,40.34,4,0.210,none,2024-01-09\r\n1038,2323,AMER,fashion,retail,36.99,8,0.249,bundle,2024-11-18\r\n1039,1131,APAC,grocery,mobile,76.90,6,0.153,coupon,2024-03-12\r\n1040,2383,APAC,fashion,online,71.52,4,0.157,bundle,2024-12-15\r\n1041,1170,AMER,toys,partner,18.15,3,0.188,none,2024-07-23\r\n1042,1459,LATAM,grocery,retail,76.26,8,0.117,none,2024-01-01\r\n1043,1442,EMEA,grocery,retail,109.68,3,0.168,loyalty,2024-01-02\r\n1044,2028,APAC,toys,o
nline,57.18,6,0.198,none,2024-06-21\r\n1045,2046,APAC,grocery,online,30.94,5,0.045,none,2024-08-14\r\n1046,1851,EMEA,sports,online,65.47,2,0.050,loyalty,2024-12-10\r\n1047,1653,APAC,sports,online,41.19,8,0.117,bundle,2024-11-25\r\n1048,1225,APAC,electronics,retail,33.89,3,0.110,bundle,2024-07-21\r\n1049,1429,APAC,grocery,online,41.16,5,0.019,none,2024-05-27\r\n1050,1010,EMEA,home,mobile,15.86,8,0.173,none,2024-01-01\r\n1051,1928,AMER,home,retail,81.04,1,0.243,bundle,2024-08-01\r\n1052,2285,APAC,grocery,retail,33.06,5,0.183,none,2024-04-10\r\n1053,1490,AMER,grocery,online,21.29,8,0.107,bundle,2024-12-14\r\n1054,2249,LATAM,electronics,retail,242.98,8,0.156,bundle,2024-05-10\r\n1055,1667,AMER,grocery,retail,115.05,5,0.147,none,2024-06-02\r\n1056,1334,APAC,electronics,online,37.03,4,0.190,coupon,2024-05-26\r\n1057,1295,EMEA,grocery,partner,26.92,2,0.048,none,2024-03-06\r\n1058,1771,AMER,fashion,mobile,62.97,4,0.129,none,2024-05-09\r\n1059,1901,AMER,toys,online,67.49,3,0.239,none,2024-11-07\r\n1060,2156,AMER,electronics,mobile,44.27,6,0.169,none,2024-12-12\r\n1061,1434,EMEA,electronics,retail,25.73,6,0.022,coupon,2024-04-04\r\n1062,1042,LATAM,grocery,mobile,36.76,8,0.203,loyalty,2024-08-22\r\n1063,1266,AMER,electronics,online,55.30,5,0.027,loyalty,2024-08-01\r\n1064,2377,AMER,home,retail,68.93,4,0.111,loyalty,2024-01-15\r\n1065,1459,LATAM,fashion,retail,60.86,4,0.113,none,2024-08-04\r\n1066,1820,AMER,grocery,online,61.91,4,0.134,bundle,2024-02-04\r\n1067,1113,EMEA,grocery,mobile,85.64,3,0.106,none,2024-10-10\r\n1068,1086,AMER,electronics,mobile,162.33,3,0.028,none,2024-04-23\r\n1069,1610,LATAM,toys,retail,27.19,3,0.241,coupon,2024-01-17\r\n1070,1462,LATAM,grocery,retail,36.35,8,0.081,loyalty,2024-01-06\r\n1071,1434,EMEA,electronics,online,194.77,3,0.173,bundle,2024-07-10\r\n1072,2296,AMER,sports,online,54.58,3,0.071,coupon,2024-10-14\r\n1073,2090,AMER,toys,retail,34.78,1,0.067,none,2024-05-24\r\n1074,1320,EMEA,electronics,online,63.50,1,0.036,loyalty,2024-01-07\r\n1075,1
043,LATAM,sports,online,20.18,5,0.150,none,2024-01-17\r\n1076,1597,APAC,fashion,partner,66.53,4,0.183,none,2024-11-10\r\n1077,1476,APAC,electronics,online,79.87,5,0.142,none,2024-04-25\r\n1078,1730,AMER,sports,retail,35.34,5,0.127,none,2024-05-27\r\n1079,1978,AMER,grocery,retail,97.30,3,0.035,coupon,2024-06-25\r\n1080,2448,APAC,toys,retail,145.37,1,0.211,coupon,2024-05-01\r\n1081,1975,EMEA,home,online,49.65,4,0.216,none,2024-08-11\r\n1082,2363,AMER,sports,online,87.76,6,0.152,none,2024-06-20\r\n1083,1924,AMER,electronics,online,36.66,4,0.045,coupon,2024-07-17\r\n1084,2165,AMER,grocery,online,20.11,3,0.031,none,2024-04-08\r\n1085,1697,APAC,electronics,retail,117.27,5,0.010,none,2024-11-05\r\n1086,1046,EMEA,grocery,online,98.90,4,0.196,none,2024-03-02\r\n1087,1431,APAC,sports,online,27.48,6,0.102,none,2024-02-20\r\n1088,1761,EMEA,grocery,retail,47.00,5,0.018,none,2024-06-09\r\n1089,2085,AMER,electronics,online,31.11,3,0.239,none,2024-02-23\r\n1090,1591,APAC,grocery,retail,132.73,7,0.006,none,2024-01-07\r\n1091,1498,LATAM,electronics,retail,116.50,3,0.127,none,2024-12-08\r\n1092,1536,LATAM,fashion,online,71.02,5,0.003,coupon,2024-08-07\r\n1093,2202,APAC,fashion,retail,81.59,6,0.212,coupon,2024-06-02\r\n1094,1943,AMER,home,online,65.74,5,0.220,none,2024-09-18\r\n1095,2072,AMER,fashion,partner,73.40,4,0.084,coupon,2024-03-26\r\n1096,1966,APAC,grocery,retail,36.43,6,0.195,none,2024-01-23\r\n1097,1744,EMEA,home,retail,69.04,3,0.028,none,2024-01-17\r\n1098,1334,APAC,electronics,retail,51.33,3,0.060,none,2024-11-04\r\n1099,1333,EMEA,home,online,48.57,7,0.024,none,2024-07-26\r\n1100,1722,EMEA,home,retail,32.84,7,0.188,none,2024-01-05\r\n1101,2205,AMER,fashion,online,126.26,3,0.070,none,2024-07-03\r\n1102,1156,APAC,toys,online,31.68,7,0.145,none,2024-05-14\r\n1103,1545,AMER,home,mobile,77.20,7,0.130,bundle,2024-03-05\r\n1104,1014,EMEA,home,retail,32.41,1,0.134,coupon,2024-02-18\r\n1105,1107,APAC,fashion,online,62.84,5,0.105,bundle,2024-07-14\r\n1106,2065,EMEA,home,online,14.03
,8,0.130,coupon,2024-01-22\r\n1107,1096,EMEA,sports,retail,112.82,5,0.071,none,2024-09-28\r\n1108,2149,EMEA,sports,retail,78.55,3,0.103,bundle,2024-07-15\r\n1109,1872,LATAM,grocery,mobile,122.42,6,0.194,coupon,2024-11-22\r\n1110,2053,AMER,electronics,mobile,89.73,8,0.117,coupon,2024-06-10\r\n1111,1992,LATAM,electronics,retail,89.40,6,0.139,none,2024-10-28\r\n1112,2056,LATAM,grocery,retail,99.96,5,0.055,none,2024-10-07\r\n1113,2321,APAC,electronics,retail,106.42,2,0.148,none,2024-03-14\r\n1114,2376,LATAM,electronics,online,46.13,7,0.137,bundle,2024-06-13\r\n1115,2114,AMER,home,partner,98.88,5,0.243,coupon,2024-06-03\r\n1116,1604,EMEA,toys,retail,92.37,3,0.121,loyalty,2024-10-09\r\n1117,2126,APAC,home,online,70.74,6,0.144,loyalty,2024-03-01\r\n1118,1783,AMER,electronics,retail,85.49,8,0.033,none,2024-09-03\r\n1119,2055,AMER,electronics,retail,50.96,2,0.133,none,2024-02-09\r\n1120,1337,APAC,sports,retail,69.61,5,0.118,none,2024-03-25\r\n1121,2333,APAC,home,retail,26.34,7,0.185,loyalty,2024-09-06\r\n1122,2233,EMEA,home,mobile,160.24,6,0.018,none,2024-11-08\r\n1123,1005,LATAM,toys,mobile,98.97,5,0.028,none,2024-01-13\r\n1124,1150,LATAM,electronics,retail,83.61,7,0.114,none,2024-10-20\r\n1125,1245,APAC,home,mobile,30.49,8,0.216,bundle,2024-04-25\r\n1126,2215,LATAM,grocery,retail,161.28,8,0.176,none,2024-01-22\r\n1127,1074,LATAM,grocery,online,35.78,7,0.037,none,2024-05-06\r\n1128,1280,LATAM,grocery,retail,81.96,1,0.144,coupon,2024-10-01\r\n1129,2233,EMEA,toys,online,49.18,8,0.047,none,2024-05-11\r\n1130,2425,APAC,fashion,retail,48.15,7,0.124,bundle,2024-11-24\r\n1131,2368,AMER,grocery,retail,48.48,8,0.219,none,2024-05-09\r\n1132,1470,LATAM,home,online,42.51,8,0.084,coupon,2024-02-14\r\n1133,1361,LATAM,electronics,retail,107.81,3,0.091,coupon,2024-06-16\r\n1134,1366,APAC,electronics,online,49.56,1,0.158,coupon,2024-10-18\r\n1135,1432,APAC,grocery,retail,101.13,2,0.097,none,2024-02-05\r\n1136,1163,AMER,grocery,online,39.50,1,0.080,bundle,2024-09-28\r\n1137,1269,LATAM,sports
,retail,68.96,6,0.161,none,2024-03-15\r\n1138,2205,AMER,fashion,retail,76.48,7,0.199,none,2024-08-10\r\n1139,2486,APAC,grocery,retail,92.70,3,0.228,coupon,2024-08-09\r\n1140,1372,APAC,sports,online,55.93,6,0.016,loyalty,2024-12-21\r\n1141,2463,AMER,fashion,online,22.52,3,0.224,coupon,2024-08-04\r\n1142,1322,AMER,grocery,retail,60.60,3,0.211,bundle,2024-12-28\r\n1143,1728,AMER,grocery,retail,23.38,2,0.154,none,2024-09-11\r\n1144,2481,APAC,fashion,mobile,66.26,1,0.064,none,2024-03-01\r\n1145,1770,AMER,grocery,retail,21.29,1,0.131,none,2024-01-06\r\n1146,2069,AMER,toys,online,74.13,4,0.174,none,2024-01-26\r\n1147,1693,EMEA,fashion,mobile,112.09,4,0.175,coupon,2024-09-27\r\n1148,1132,EMEA,toys,retail,48.87,4,0.128,bundle,2024-02-10\r\n1149,1003,APAC,electronics,retail,58.69,1,0.124,none,2024-02-01\r\n1150,1328,APAC,fashion,mobile,61.91,3,0.127,coupon,2024-11-12\r\n1151,1958,APAC,grocery,online,78.78,1,0.238,loyalty,2024-02-09\r\n1152,1460,LATAM,sports,online,28.11,4,0.017,none,2024-02-14\r\n1153,2100,APAC,sports,online,63.99,4,0.156,loyalty,2024-05-02\r\n1154,2294,EMEA,sports,online,71.34,4,0.181,none,2024-06-05\r\n1155,2257,AMER,grocery,mobile,38.70,2,0.029,none,2024-11-27\r\n1156,1476,APAC,sports,retail,94.21,4,0.107,none,2024-04-15\r\n1157,1135,APAC,fashion,online,42.18,7,0.238,none,2024-06-26\r\n1158,2281,AMER,grocery,online,43.27,5,0.032,none,2024-04-27\r\n1159,1377,APAC,electronics,retail,55.73,1,0.031,coupon,2024-05-11\r\n1160,1616,APAC,grocery,mobile,43.68,3,0.221,none,2024-10-17\r\n1161,1742,AMER,fashion,retail,25.00,8,0.020,none,2024-02-17\r\n1162,2408,EMEA,grocery,retail,57.00,1,0.029,bundle,2024-10-16\r\n1163,2396,AMER,home,online,177.83,3,0.244,none,2024-02-26\r\n1164,1245,APAC,fashion,online,42.64,5,0.215,none,2024-03-25\r\n1165,2382,LATAM,sports,retail,59.28,1,0.014,none,2024-08-21\r\n1166,1002,EMEA,fashion,retail,45.62,8,0.219,bundle,2024-11-02\r\n1167,1696,LATAM,toys,retail,141.42,5,0.077,bundle,2024-09-25\r\n1168,1229,LATAM,grocery,retail,71.64,6,0.085
,none,2024-08-04\r\n1169,1504,AMER,grocery,online,30.34,2,0.022,bundle,2024-11-01\r\n1170,1721,EMEA,fashion,online,68.38,7,0.250,bundle,2024-01-13\r\n1171,1835,AMER,home,retail,52.80,6,0.098,coupon,2024-07-23\r\n1172,1624,AMER,grocery,online,168.01,3,0.219,none,2024-01-25\r\n1173,1256,LATAM,home,online,55.52,7,0.194,bundle,2024-07-28\r\n1174,2160,LATAM,grocery,online,77.99,6,0.177,coupon,2024-11-27\r\n1175,2079,EMEA,fashion,mobile,41.15,8,0.061,none,2024-07-27\r\n1176,1044,EMEA,toys,mobile,63.89,1,0.223,loyalty,2024-04-26\r\n1177,1915,LATAM,fashion,retail,56.23,4,0.050,coupon,2024-06-13\r\n1178,2164,AMER,fashion,online,99.74,5,0.225,none,2024-04-18\r\n1179,1018,APAC,electronics,retail,57.34,8,0.246,coupon,2024-06-26\r\n1180,2405,AMER,electronics,online,74.82,2,0.153,none,2024-07-16\r\n1181,2176,AMER,electronics,mobile,56.42,1,0.056,none,2024-04-01\r\n1182,1306,LATAM,grocery,retail,31.34,6,0.047,loyalty,2024-01-18\r\n1183,1164,EMEA,home,retail,110.01,5,0.025,bundle,2024-05-01\r\n1184,2130,EMEA,sports,retail,50.32,4,0.048,none,2024-11-17\r\n1185,1263,AMER,sports,retail,42.55,8,0.220,loyalty,2024-04-26\r\n1186,1905,APAC,fashion,retail,66.42,7,0.177,none,2024-11-21\r\n1187,1657,LATAM,fashion,retail,85.58,5,0.005,none,2024-09-16\r\n1188,1691,LATAM,home,partner,59.91,3,0.220,none,2024-12-08\r\n1189,1290,EMEA,grocery,online,88.07,1,0.108,coupon,2024-07-20\r\n1190,1153,AMER,sports,online,45.25,2,0.241,coupon,2024-11-23\r\n1191,1200,EMEA,electronics,online,33.61,5,0.220,bundle,2024-01-25\r\n1192,1315,AMER,electronics,retail,46.21,1,0.092,none,2024-06-14\r\n1193,1816,EMEA,toys,partner,24.00,8,0.153,coupon,2024-07-17\r\n1194,1341,EMEA,fashion,online,55.54,4,0.041,bundle,2024-11-18\r\n1195,2441,EMEA,home,retail,136.53,7,0.134,coupon,2024-10-02\r\n1196,2075,LATAM,grocery,mobile,63.85,5,0.158,none,2024-04-12\r\n1197,1684,EMEA,electronics,online,59.42,8,0.249,coupon,2024-08-10\r\n1198,1473,LATAM,home,partner,103.57,6,0.151,none,2024-02-06\r\n1199,1444,EMEA,home,retail,45.34,5,0.03
8,none,2024-08-27\r\n1200,1018,APAC,grocery,online,48.43,1,0.159,none,2024-07-07\r\n1201,1783,AMER,grocery,mobile,73.24,3,0.149,bundle,2024-12-16\r\n1202,2201,AMER,grocery,mobile,19.46,1,0.107,coupon,2024-07-05\r\n1203,1863,EMEA,grocery,online,145.92,3,0.004,none,2024-01-05\r\n1204,1352,AMER,grocery,online,66.91,6,0.192,none,2024-07-04\r\n1205,1618,EMEA,electronics,mobile,81.43,5,0.131,none,2024-08-12\r\n1206,1922,EMEA,toys,partner,63.70,5,0.006,bundle,2024-03-06\r\n1207,1580,AMER,electronics,mobile,60.64,7,0.236,none,2024-03-20\r\n1208,1788,AMER,electronics,mobile,92.11,5,0.105,none,2024-03-02\r\n1209,1808,APAC,toys,online,42.07,5,0.138,none,2024-07-09\r\n1210,2263,AMER,grocery,mobile,84.34,8,0.121,none,2024-05-07\r\n1211,2442,APAC,electronics,retail,33.53,1,0.230,none,2024-03-23\r\n1212,2380,AMER,grocery,mobile,26.76,5,0.102,none,2024-04-14\r\n1213,2498,LATAM,toys,mobile,57.42,2,0.211,bundle,2024-08-16\r\n1214,1710,APAC,grocery,mobile,35.56,2,0.122,none,2024-12-12\r\n1215,1866,EMEA,fashion,online,41.38,2,0.132,none,2024-06-15\r\n1216,1680,LATAM,electronics,online,70.48,2,0.102,loyalty,2024-02-21\r\n1217,1384,LATAM,grocery,retail,26.39,4,0.044,none,2024-01-02\r\n1218,1950,LATAM,home,online,30.82,6,0.126,coupon,2024-12-18\r\n1219,2034,LATAM,home,online,153.12,6,0.163,none,2024-06-27\r\n1220,1731,AMER,electronics,online,118.31,6,0.076,coupon,2024-12-11\r\n1221,1558,EMEA,home,retail,18.74,7,0.027,none,2024-06-25\r\n1222,1030,EMEA,grocery,online,79.14,6,0.104,none,2024-06-18\r\n1223,2432,AMER,fashion,retail,77.99,2,0.095,loyalty,2024-02-11\r\n1224,1954,APAC,electronics,online,86.29,7,0.245,none,2024-11-15\r\n1225,1093,APAC,electronics,online,32.63,3,0.227,none,2024-03-13\r\n1226,1890,LATAM,sports,online,90.54,2,0.085,bundle,2024-02-01\r\n1227,2489,LATAM,electronics,retail,60.44,1,0.067,none,2024-03-22\r\n1228,1604,EMEA,electronics,retail,56.15,2,0.230,coupon,2024-12-03\r\n1229,1082,EMEA,fashion,mobile,68.89,3,0.033,coupon,2024-12-10\r\n1230,2298,APAC,home,online,40.26,
2,0.055,coupon,2024-04-24\r\n1231,1877,LATAM,electronics,retail,69.65,1,0.098,none,2024-03-24\r\n1232,1565,AMER,grocery,retail,83.95,2,0.102,none,2024-08-18\r\n1233,1151,APAC,grocery,online,25.91,3,0.111,coupon,2024-07-12\r\n1234,1912,APAC,home,online,33.64,4,0.062,none,2024-06-06\r\n1235,1887,LATAM,grocery,retail,31.45,5,0.116,bundle,2024-08-08\r\n1236,2303,EMEA,electronics,online,77.47,6,0.232,coupon,2024-12-19\r\n1237,2223,EMEA,toys,online,94.78,7,0.171,coupon,2024-07-27\r\n1238,1732,LATAM,home,retail,49.03,3,0.146,none,2024-08-16\r\n1239,1252,APAC,toys,online,61.14,4,0.066,coupon,2024-04-13\r\n1240,1449,EMEA,toys,online,35.72,6,0.046,coupon,2024-06-03\r\n1241,2055,AMER,grocery,retail,65.88,5,0.071,bundle,2024-10-01\r\n1242,1334,APAC,electronics,retail,50.40,2,0.161,bundle,2024-07-26\r\n1243,2050,APAC,home,online,27.93,5,0.030,none,2024-08-24\r\n1244,2055,AMER,electronics,online,29.18,2,0.056,loyalty,2024-07-04\r\n1245,1146,LATAM,grocery,online,137.12,7,0.217,bundle,2024-03-23\r\n1246,2222,LATAM,toys,retail,110.61,2,0.089,none,2024-06-03\r\n1247,1034,EMEA,toys,online,21.11,5,0.206,none,2024-11-23\r\n1248,1224,APAC,grocery,online,80.89,4,0.060,none,2024-05-28\r\n1249,1793,LATAM,grocery,retail,76.75,5,0.067,none,2024-10-10\r\n1250,2129,APAC,toys,mobile,32.78,8,0.217,none,2024-02-26\r\n1251,2496,EMEA,grocery,online,102.35,3,0.219,coupon,2024-07-22\r\n1252,1825,AMER,grocery,mobile,92.89,5,0.177,coupon,2024-10-05\r\n1253,1289,LATAM,electronics,online,108.71,3,0.124,none,2024-11-12\r\n1254,2328,EMEA,toys,retail,44.37,4,0.242,none,2024-10-04\r\n1255,1174,APAC,electronics,mobile,92.42,3,0.068,none,2024-02-17\r\n1256,1313,EMEA,toys,retail,78.10,4,0.127,loyalty,2024-11-16\r\n1257,1303,LATAM,sports,retail,60.00,7,0.204,none,2024-11-14\r\n1258,1205,APAC,home,online,23.21,7,0.155,coupon,2024-09-16\r\n1259,1572,LATAM,home,online,76.14,4,0.006,none,2024-06-08\r\n1260,2249,LATAM,grocery,retail,32.98,2,0.236,bundle,2024-08-11\r\n1261,1572,LATAM,electronics,online,37.96,6,0.079,no
ne,2024-03-23\r\n1262,1801,LATAM,toys,mobile,117.66,5,0.043,none,2024-02-11\r\n1263,1298,LATAM,grocery,retail,98.76,8,0.079,coupon,2024-10-23\r\n1264,1429,APAC,home,mobile,15.58,4,0.081,bundle,2024-08-08\r\n1265,2460,AMER,electronics,online,24.83,7,0.227,coupon,2024-08-26\r\n1266,2061,EMEA,sports,online,48.33,2,0.048,none,2024-09-22\r\n1267,2491,APAC,electronics,mobile,61.00,6,0.090,none,2024-11-15\r\n1268,2136,AMER,grocery,retail,162.75,3,0.008,loyalty,2024-06-15\r\n1269,1217,EMEA,grocery,retail,53.60,7,0.172,bundle,2024-05-27\r\n1270,2344,LATAM,toys,retail,26.72,6,0.120,bundle,2024-09-09\r\n1271,1211,EMEA,toys,retail,44.96,7,0.069,none,2024-10-16\r\n1272,1153,AMER,grocery,online,59.07,4,0.070,coupon,2024-07-14\r\n1273,1815,APAC,sports,partner,27.70,3,0.149,none,2024-02-08\r\n1274,1423,EMEA,home,mobile,43.78,3,0.132,loyalty,2024-09-27\r\n1275,2177,AMER,grocery,online,24.61,7,0.113,bundle,2024-05-09\r\n1276,2252,EMEA,home,online,24.15,7,0.217,coupon,2024-04-11\r\n1277,2168,EMEA,home,retail,56.26,8,0.061,none,2024-07-28\r\n1278,1492,APAC,sports,online,55.34,7,0.118,none,2024-10-23\r\n1279,1358,APAC,grocery,online,74.61,1,0.121,coupon,2024-04-10\r\n1280,1611,EMEA,home,retail,60.49,3,0.099,none,2024-02-21\r\n1281,2266,LATAM,electronics,online,135.66,4,0.196,none,2024-01-28\r\n1282,1211,EMEA,electronics,retail,43.17,3,0.079,coupon,2024-01-25\r\n1283,1349,APAC,sports,online,32.47,6,0.024,none,2024-10-11\r\n1284,2019,AMER,fashion,online,71.62,8,0.117,none,2024-11-20\r\n1285,2395,APAC,fashion,retail,38.01,3,0.208,bundle,2024-08-03\r\n1286,1792,AMER,grocery,online,58.80,6,0.208,none,2024-04-20\r\n1287,1924,AMER,sports,retail,42.30,5,0.147,none,2024-10-04\r\n1288,1588,LATAM,electronics,mobile,72.08,3,0.111,coupon,2024-08-22\r\n1289,1107,APAC,electronics,online,139.68,6,0.180,none,2024-10-20\r\n1290,2432,AMER,grocery,retail,74.28,7,0.055,coupon,2024-11-19\r\n1291,1925,LATAM,grocery,retail,32.49,5,0.185,coupon,2024-06-14\r\n1292,1525,APAC,sports,online,67.45,8,0.006,none,2024-
04-13\r\n1293,1436,APAC,electronics,retail,42.99,6,0.075,coupon,2024-04-20\r\n1294,1389,LATAM,home,partner,119.86,6,0.226,loyalty,2024-07-02\r\n1295,1687,APAC,home,online,26.33,5,0.232,coupon,2024-03-19\r\n1296,1631,APAC,grocery,retail,39.79,2,0.157,loyalty,2024-10-04\r\n1297,2426,AMER,fashion,retail,50.09,8,0.169,none,2024-02-20\r\n1298,1607,LATAM,grocery,mobile,107.29,2,0.038,loyalty,2024-09-03\r\n1299,2263,AMER,fashion,mobile,87.73,4,0.153,loyalty,2024-08-03\r\n1300,1979,APAC,grocery,retail,220.64,8,0.176,none,2024-11-02\r\n1301,1602,EMEA,fashion,online,56.66,6,0.161,none,2024-05-02\r\n1302,1979,APAC,sports,online,39.46,5,0.186,none,2024-08-09\r\n1303,1021,AMER,grocery,online,79.56,4,0.243,none,2024-11-03\r\n1304,2290,LATAM,grocery,retail,26.94,4,0.011,none,2024-06-24\r\n1305,1966,APAC,electronics,retail,46.42,3,0.122,none,2024-02-26\r\n1306,1170,AMER,home,online,35.92,5,0.234,none,2024-05-03\r\n1307,2043,EMEA,toys,retail,48.99,7,0.027,loyalty,2024-07-14\r\n1308,1476,APAC,grocery,mobile,38.05,7,0.011,bundle,2024-06-19\r\n1309,2084,LATAM,grocery,mobile,50.84,5,0.115,bundle,2024-11-18\r\n1310,1148,AMER,fashion,retail,27.81,3,0.235,none,2024-01-09\r\n1311,1198,AMER,home,online,67.52,6,0.158,coupon,2024-02-21\r\n1312,1042,LATAM,fashion,online,48.37,5,0.246,loyalty,2024-05-23\r\n1313,1437,EMEA,grocery,retail,67.16,5,0.068,coupon,2024-10-27\r\n1314,1415,AMER,home,retail,35.06,6,0.038,loyalty,2024-12-23\r\n1315,1249,EMEA,electronics,online,152.59,1,0.067,none,2024-10-03\r\n1316,1650,LATAM,grocery,online,60.13,3,0.063,none,2024-11-13\r\n1317,2066,APAC,fashion,online,56.34,7,0.110,none,2024-03-27\r\n1318,2162,EMEA,home,mobile,39.56,1,0.196,bundle,2024-04-20\r\n1319,2276,AMER,electronics,online,150.80,7,0.099,none,2024-05-06\r\n1320,1728,AMER,grocery,online,67.68,4,0.232,none,2024-03-06\r\n1321,1369,AMER,grocery,online,65.48,1,0.046,none,2024-05-12\r\n1322,1614,EMEA,home,partner,124.24,6,0.214,none,2024-03-06\r\n1323,1200,EMEA,home,retail,33.22,1,0.229,coupon,2024-09-12\r\
n1324,2192,APAC,home,retail,39.98,3,0.152,none,2024-04-19\r\n1325,1327,APAC,sports,retail,89.23,5,0.215,none,2024-06-13\r\n1326,1274,LATAM,sports,mobile,33.26,8,0.019,none,2024-05-20\r\n1327,1638,EMEA,sports,retail,40.08,4,0.119,none,2024-10-17\r\n1328,1322,AMER,grocery,online,61.57,1,0.046,bundle,2024-03-01\r\n1329,1802,AMER,electronics,retail,94.07,4,0.015,none,2024-03-21\r\n1330,2350,APAC,sports,partner,38.31,8,0.084,none,2024-09-25\r\n1331,1787,APAC,grocery,retail,84.94,1,0.103,none,2024-08-03\r\n1332,1188,LATAM,fashion,retail,82.86,8,0.067,coupon,2024-10-20\r\n1333,1725,APAC,electronics,mobile,66.07,5,0.088,coupon,2024-02-12\r\n1334,2224,EMEA,fashion,retail,48.47,7,0.008,bundle,2024-11-19\r\n1335,1182,EMEA,grocery,online,44.83,8,0.178,coupon,2024-09-19\r\n1336,2110,LATAM,electronics,retail,61.42,7,0.050,none,2024-04-01\r\n1337,1242,LATAM,electronics,mobile,89.63,7,0.099,none,2024-01-23\r\n1338,1387,AMER,electronics,partner,60.19,2,0.206,coupon,2024-10-23\r\n1339,1120,LATAM,grocery,online,68.06,7,0.045,coupon,2024-05-02\r\n1340,1783,AMER,grocery,online,61.00,7,0.040,bundle,2024-07-07\r\n1341,1045,LATAM,home,online,124.13,6,0.158,none,2024-09-19\r\n1342,2148,EMEA,fashion,online,30.64,3,0.035,coupon,2024-11-10\r\n1343,2180,AMER,fashion,partner,99.65,5,0.212,none,2024-11-16\r\n1344,1019,APAC,home,retail,95.86,2,0.162,none,2024-07-25\r\n1345,2198,EMEA,electronics,online,51.21,1,0.225,none,2024-01-03\r\n1346,1975,EMEA,electronics,online,97.58,5,0.056,none,2024-01-01\r\n1347,2355,EMEA,home,online,57.11,7,0.068,coupon,2024-06-05\r\n1348,1901,AMER,sports,retail,99.51,3,0.110,none,2024-05-14\r\n1349,2167,APAC,grocery,retail,28.28,1,0.042,bundle,2024-05-28\r\n1350,2140,AMER,grocery,mobile,128.70,2,0.080,none,2024-10-02\r\n1351,1835,AMER,electronics,retail,84.17,4,0.143,none,2024-05-28\r\n1352,1187,AMER,grocery,retail,56.11,5,0.150,none,2024-08-02\r\n1353,2344,LATAM,fashion,online,60.41,3,0.045,coupon,2024-10-22\r\n1354,1999,EMEA,home,online,79.55,6,0.065,coupon,2024-06-19
\r\n1355,1840,LATAM,grocery,retail,60.13,2,0.084,bundle,2024-11-17\r\n1356,1902,AMER,electronics,online,34.62,2,0.234,none,2024-10-23\r\n1357,1376,EMEA,sports,mobile,47.31,6,0.076,coupon,2024-12-19\r\n1358,1975,EMEA,fashion,retail,34.51,6,0.179,coupon,2024-09-23\r\n1359,2447,AMER,home,mobile,87.09,2,0.180,none,2024-05-23\r\n1360,1837,LATAM,fashion,retail,44.35,5,0.029,loyalty,2024-06-17\r\n1361,1486,LATAM,grocery,online,62.64,5,0.134,coupon,2024-12-08\r\n1362,1923,LATAM,fashion,online,45.31,6,0.094,none,2024-07-25\r\n1363,2387,EMEA,home,mobile,21.66,6,0.236,coupon,2024-04-13\r\n1364,1546,EMEA,home,online,68.89,6,0.055,none,2024-12-16\r\n1365,1791,LATAM,grocery,mobile,101.94,4,0.017,coupon,2024-07-24\r\n1366,2209,AMER,fashion,retail,56.52,8,0.015,loyalty,2024-03-24\r\n1367,1896,EMEA,fashion,retail,31.28,3,0.055,none,2024-02-27\r\n1368,1467,LATAM,sports,retail,40.94,6,0.057,bundle,2024-03-26\r\n1369,2035,LATAM,home,online,32.32,7,0.051,bundle,2024-06-24\r\n1370,2487,LATAM,toys,online,49.46,5,0.155,none,2024-08-02\r\n1371,2169,EMEA,home,online,71.70,3,0.214,none,2024-02-13\r\n1372,1621,APAC,fashion,online,93.35,4,0.192,none,2024-07-07\r\n1373,1566,EMEA,sports,retail,92.82,8,0.119,none,2024-09-25\r\n1374,1444,EMEA,fashion,retail,38.80,2,0.196,coupon,2024-01-20\r\n1375,2462,EMEA,home,online,76.04,4,0.194,none,2024-11-25\r\n1376,1239,APAC,fashion,retail,52.29,8,0.059,none,2024-04-13\r\n1377,1109,APAC,home,online,67.28,4,0.095,bundle,2024-07-11\r\n1378,1016,AMER,sports,mobile,55.30,7,0.075,none,2024-06-18\r\n1379,1343,LATAM,home,retail,106.06,1,0.112,none,2024-06-16\r\n1380,2084,LATAM,toys,online,131.80,4,0.054,coupon,2024-04-07\r\n1381,1772,EMEA,sports,online,48.35,7,0.116,bundle,2024-05-01\r\n1382,1077,AMER,sports,online,54.13,5,0.226,bundle,2024-03-15\r\n1383,1909,APAC,grocery,retail,85.06,7,0.093,loyalty,2024-05-11\r\n1384,1896,EMEA,home,online,66.56,2,0.184,none,2024-06-10\r\n1385,1034,EMEA,sports,mobile,49.40,3,0.106,none,2024-11-07\r\n1386,1314,AMER,grocery,online,8
9.39,1,0.035,bundle,2024-08-08\r\n1387,1873,EMEA,electronics,retail,72.14,8,0.176,coupon,2024-07-01\r\n1388,2212,EMEA,grocery,online,62.38,6,0.004,bundle,2024-04-12\r\n1389,1309,EMEA,home,retail,63.39,8,0.236,none,2024-10-06\r\n1390,1438,APAC,home,online,113.63,3,0.163,none,2024-05-26\r\n1391,1355,EMEA,sports,mobile,62.46,8,0.117,bundle,2024-10-25\r\n1392,2247,LATAM,grocery,online,89.22,5,0.207,none,2024-12-03\r\n1393,1849,EMEA,fashion,online,66.32,2,0.084,coupon,2024-09-26\r\n1394,1763,LATAM,home,retail,48.17,6,0.209,bundle,2024-04-03\r\n1395,1791,LATAM,electronics,retail,56.21,3,0.064,none,2024-02-15\r\n1396,2338,AMER,home,retail,54.59,5,0.062,coupon,2024-08-07\r\n1397,1368,EMEA,sports,online,146.60,3,0.172,none,2024-09-15\r\n1398,1421,APAC,electronics,online,102.98,7,0.167,bundle,2024-01-26\r\n1399,1858,LATAM,toys,retail,46.97,4,0.054,none,2024-08-24\r\n1400,1638,EMEA,sports,mobile,50.78,1,0.067,coupon,2024-08-27\r\n1401,2340,EMEA,fashion,online,48.11,7,0.211,loyalty,2024-07-11\r\n1402,1640,APAC,electronics,retail,38.78,3,0.244,none,2024-08-09\r\n1403,1745,APAC,grocery,online,49.73,2,0.139,none,2024-06-03\r\n1404,1691,LATAM,grocery,retail,49.66,8,0.023,none,2024-01-18\r\n1405,2224,EMEA,fashion,retail,38.78,7,0.120,coupon,2024-09-09\r\n1406,1358,APAC,sports,retail,135.65,7,0.097,none,2024-04-03\r\n1407,1439,LATAM,sports,retail,28.34,3,0.001,none,2024-11-18\r\n1408,2021,EMEA,fashion,mobile,36.45,1,0.110,loyalty,2024-01-14\r\n1409,2084,LATAM,electronics,partner,133.35,8,0.195,bundle,2024-06-04\r\n1410,1620,LATAM,home,partner,102.52,2,0.226,loyalty,2024-11-13\r\n1411,1169,LATAM,grocery,mobile,69.13,8,0.112,loyalty,2024-12-09\r\n1412,1016,AMER,toys,mobile,107.79,8,0.023,none,2024-11-19\r\n1413,1767,AMER,sports,mobile,56.64,1,0.140,none,2024-08-16\r\n1414,1818,AMER,grocery,retail,83.84,8,0.218,coupon,2024-04-18\r\n1415,1708,LATAM,electronics,retail,44.82,6,0.230,none,2024-07-13\r\n1416,2205,AMER,fashion,retail,93.07,2,0.119,loyalty,2024-07-13\r\n1417,2416,LATAM,toys,on
line,75.44,3,0.181,loyalty,2024-04-04\r\n1418,2116,LATAM,home,retail,36.55,2,0.184,none,2024-03-19\r\n1419,1798,AMER,home,online,100.16,4,0.098,none,2024-10-15\r\n1420,1482,AMER,toys,online,31.76,2,0.036,bundle,2024-09-14\r\n1421,2138,APAC,grocery,retail,143.92,4,0.209,coupon,2024-04-06\r\n1422,1391,LATAM,grocery,online,85.12,5,0.009,loyalty,2024-02-18\r\n1423,1813,EMEA,home,online,39.72,8,0.162,coupon,2024-06-01\r\n1424,2090,AMER,sports,partner,88.24,4,0.236,coupon,2024-02-05\r\n1425,2351,EMEA,grocery,retail,105.39,8,0.196,none,2024-04-02\r\n1426,1305,EMEA,sports,retail,110.65,3,0.150,none,2024-04-04\r\n1427,2413,AMER,grocery,retail,43.08,1,0.200,bundle,2024-02-09\r\n1428,1182,EMEA,grocery,online,99.55,1,0.248,none,2024-02-25\r\n1429,1662,LATAM,toys,retail,16.10,5,0.059,none,2024-09-11\r\n1430,1827,EMEA,home,retail,63.31,4,0.208,coupon,2024-01-14\r\n1431,2055,AMER,home,online,45.39,3,0.004,none,2024-03-09\r\n1432,2432,AMER,home,retail,33.86,3,0.222,none,2024-07-21\r\n1433,1049,AMER,grocery,online,127.06,8,0.123,coupon,2024-10-08\r\n1434,1420,APAC,electronics,mobile,23.45,1,0.201,coupon,2024-09-25\r\n1435,1315,AMER,home,online,49.92,8,0.244,coupon,2024-10-18\r\n1436,2326,LATAM,home,online,54.45,7,0.195,loyalty,2024-06-04\r\n1437,1784,EMEA,electronics,retail,56.26,4,0.101,coupon,2024-07-03\r\n1438,1257,APAC,home,retail,41.24,1,0.181,bundle,2024-05-22\r\n1439,2088,EMEA,grocery,mobile,63.71,2,0.057,bundle,2024-11-25\r\n1440,2253,AMER,toys,retail,66.86,4,0.094,coupon,2024-11-22\r\n1441,1407,LATAM,electronics,retail,70.42,1,0.240,loyalty,2024-06-17\r\n1442,2118,AMER,home,online,36.76,5,0.113,coupon,2024-10-14\r\n1443,1037,EMEA,grocery,retail,52.49,2,0.049,loyalty,2024-08-07\r\n1444,1408,AMER,fashion,retail,94.19,3,0.157,none,2024-08-02\r\n1445,1274,LATAM,fashion,retail,29.00,6,0.147,bundle,2024-01-06\r\n1446,1599,APAC,sports,retail,16.27,4,0.067,none,2024-08-01\r\n1447,1266,AMER,grocery,partner,42.81,4,0.022,bundle,2024-11-10\r\n1448,1366,APAC,grocery,retail,16.47,7,0.15
0,none,2024-09-17\r\n1449,2320,LATAM,grocery,retail,48.22,4,0.013,coupon,2024-12-22\r\n1450,2013,APAC,sports,online,45.98,6,0.056,coupon,2024-08-05\r\n1451,1953,EMEA,sports,online,45.96,3,0.162,bundle,2024-04-10\r\n1452,2177,AMER,electronics,retail,69.48,2,0.233,loyalty,2024-09-24\r\n1453,1914,EMEA,fashion,online,100.11,8,0.149,none,2024-06-14\r\n1454,1685,AMER,grocery,online,64.87,1,0.039,bundle,2024-08-14\r\n1455,1175,AMER,home,retail,25.64,7,0.156,none,2024-01-04\r\n1456,1604,EMEA,fashion,online,149.50,1,0.108,loyalty,2024-03-11\r\n1457,1618,EMEA,grocery,partner,22.16,1,0.157,bundle,2024-10-25\r\n1458,2352,APAC,toys,retail,83.98,2,0.127,none,2024-01-06\r\n1459,2130,EMEA,electronics,online,82.10,8,0.234,none,2024-08-23\r\n1460,2007,LATAM,fashion,online,54.75,5,0.134,none,2024-08-06\r\n1461,2211,APAC,fashion,online,39.38,1,0.188,none,2024-07-24\r\n1462,2390,AMER,fashion,online,52.06,1,0.115,none,2024-01-24\r\n1463,1546,EMEA,fashion,online,76.82,5,0.178,none,2024-07-26\r\n1464,1928,AMER,electronics,mobile,41.16,4,0.155,none,2024-06-11\r\n1465,2050,APAC,grocery,mobile,182.90,6,0.074,loyalty,2024-08-15\r\n1466,2229,APAC,grocery,retail,55.64,6,0.081,coupon,2024-03-16\r\n1467,1514,LATAM,sports,online,42.85,1,0.234,none,2024-06-01\r\n1468,1548,EMEA,electronics,online,144.22,4,0.182,none,2024-07-23\r\n1469,1061,APAC,electronics,retail,55.84,7,0.087,none,2024-06-01\r\n1470,1375,AMER,grocery,online,17.88,6,0.055,none,2024-01-27\r\n1471,1257,APAC,toys,retail,86.16,8,0.127,none,2024-10-14\r\n1472,1081,AMER,home,mobile,33.74,3,0.015,loyalty,2024-08-17\r\n1473,1919,EMEA,home,online,36.93,8,0.196,none,2024-08-26\r\n1474,1850,APAC,grocery,mobile,176.06,1,0.181,bundle,2024-12-22\r\n1475,1923,LATAM,home,online,71.59,6,0.152,none,2024-12-08\r\n1476,1863,EMEA,fashion,retail,42.16,3,0.135,bundle,2024-09-09\r\n1477,2088,EMEA,toys,retail,74.04,2,0.128,none,2024-12-25\r\n1478,1538,AMER,electronics,online,58.77,2,0.180,none,2024-06-04\r\n1479,2062,EMEA,home,online,61.65,8,0.229,bundle,202
4-04-12\r\n1480,2325,LATAM,sports,retail,53.49,6,0.232,bundle,2024-10-19\r\n1481,1484,AMER,home,online,15.44,8,0.062,bundle,2024-07-01\r\n1482,1635,APAC,electronics,online,74.66,4,0.159,none,2024-03-03\r\n1483,1979,APAC,grocery,online,61.15,5,0.090,coupon,2024-05-13\r\n1484,2101,APAC,electronics,retail,149.14,1,0.065,none,2024-08-11\r\n1485,2102,APAC,toys,retail,54.05,1,0.064,coupon,2024-02-14\r\n1486,1388,AMER,grocery,retail,81.20,5,0.200,none,2024-11-01\r\n1487,1604,EMEA,toys,online,41.59,4,0.190,none,2024-11-08\r\n1488,2270,APAC,fashion,mobile,85.45,6,0.110,bundle,2024-06-15\r\n1489,1138,AMER,electronics,retail,82.24,8,0.203,none,2024-09-18\r\n1490,1957,AMER,grocery,online,47.96,3,0.048,none,2024-01-09\r\n1491,1418,LATAM,grocery,online,63.07,1,0.245,loyalty,2024-12-04\r\n1492,1217,EMEA,fashion,online,32.32,2,0.143,none,2024-03-24\r\n1493,1502,APAC,sports,online,157.90,3,0.226,bundle,2024-01-18\r\n1494,2393,LATAM,fashion,online,26.65,7,0.020,none,2024-09-26\r\n1495,1016,AMER,sports,online,80.25,5,0.218,none,2024-09-02\r\n1496,1289,LATAM,grocery,retail,164.40,5,0.130,loyalty,2024-01-01\r\n1497,2054,AMER,grocery,online,131.44,1,0.182,none,2024-07-18\r\n1498,1731,AMER,toys,online,40.68,3,0.042,bundle,2024-04-20\r\n1499,2148,EMEA,fashion,online,101.42,6,0.049,bundle,2024-03-10\r\n1500,2213,APAC,toys,mobile,65.31,5,0.138,coupon,2024-08-03\r\n1501,2465,EMEA,electronics,retail,49.23,1,0.098,none,2024-05-22\r\n1502,1887,LATAM,electronics,retail,49.53,1,0.004,none,2024-06-14\r\n1503,1475,LATAM,electronics,retail,93.31,1,0.035,coupon,2024-03-23\r\n1504,1765,EMEA,grocery,online,50.58,1,0.113,none,2024-09-13\r\n1505,1501,AMER,grocery,online,33.29,8,0.043,none,2024-08-13\r\n1506,1050,AMER,toys,online,139.74,4,0.220,coupon,2024-02-15\r\n1507,1388,AMER,toys,retail,73.81,2,0.124,coupon,2024-08-21\r\n1508,1839,APAC,electronics,retail,57.34,3,0.048,none,2024-11-09\r\n1509,2260,EMEA,electronics,online,92.05,3,0.122,bundle,2024-01-03\r\n1510,1322,AMER,grocery,online,44.25,5,0.040,non
e,2024-12-10\r\n1511,1489,AMER,sports,mobile,81.16,3,0.249,none,2024-09-09\r\n1512,2141,AMER,electronics,retail,56.98,5,0.012,coupon,2024-11-27\r\n1513,1379,EMEA,home,online,34.03,8,0.111,none,2024-12-18\r\n1514,1964,EMEA,home,retail,43.96,8,0.109,none,2024-07-08\r\n1515,1105,AMER,grocery,retail,40.82,5,0.238,none,2024-02-27\r\n1516,1560,AMER,grocery,retail,68.43,6,0.224,none,2024-12-01\r\n1517,2243,APAC,electronics,retail,31.85,1,0.086,coupon,2024-05-10\r\n1518,1470,LATAM,fashion,retail,27.74,8,0.040,none,2024-09-03\r\n1519,1046,EMEA,grocery,retail,32.54,8,0.059,bundle,2024-06-10\r\n1520,1489,AMER,sports,retail,43.41,2,0.219,bundle,2024-03-01\r\n1521,2241,APAC,grocery,retail,58.70,1,0.055,coupon,2024-06-03\r\n1522,2108,AMER,grocery,retail,36.36,1,0.230,none,2024-06-27\r\n1523,1552,EMEA,grocery,online,77.09,3,0.094,none,2024-08-20\r\n1524,2401,LATAM,grocery,retail,18.29,4,0.091,none,2024-01-20\r\n1525,1701,LATAM,fashion,online,24.03,7,0.183,coupon,2024-06-17\r\n1526,1009,APAC,electronics,online,74.73,4,0.058,bundle,2024-10-12\r\n1527,1768,AMER,home,retail,48.62,6,0.098,none,2024-06-07\r\n1528,1975,EMEA,home,retail,118.99,5,0.117,none,2024-09-27\r\n1529,2093,LATAM,toys,online,26.52,7,0.044,none,2024-09-17\r\n1530,1139,EMEA,sports,retail,101.39,3,0.017,bundle,2024-01-27\r\n1531,2077,APAC,electronics,retail,54.18,5,0.052,coupon,2024-01-02\r\n1532,1342,LATAM,sports,online,55.97,7,0.044,none,2024-07-06\r\n1533,2494,AMER,toys,retail,110.75,7,0.022,loyalty,2024-05-24\r\n1534,1989,LATAM,grocery,mobile,49.78,1,0.064,coupon,2024-04-10\r\n1535,1993,APAC,grocery,mobile,30.92,8,0.201,coupon,2024-07-19\r\n1536,2389,LATAM,fashion,online,30.45,8,0.161,none,2024-07-25\r\n1537,1801,LATAM,fashion,online,103.71,1,0.078,coupon,2024-11-24\r\n1538,2087,LATAM,fashion,online,85.22,6,0.147,loyalty,2024-12-26\r\n1539,2426,AMER,toys,retail,42.27,7,0.086,none,2024-10-19\r\n1540,1126,LATAM,fashion,online,39.34,8,0.210,coupon,2024-08-09\r\n1541,2310,EMEA,grocery,online,77.03,3,0.169,bundle,2024-1
1-10\r\n1542,2199,LATAM,grocery,online,28.18,4,0.043,bundle,2024-11-20\r\n1543,1934,EMEA,home,online,101.16,6,0.158,none,2024-03-03\r\n1544,1788,AMER,grocery,retail,115.79,2,0.036,coupon,2024-11-07\r\n1545,1399,AMER,grocery,online,103.61,3,0.236,none,2024-05-16\r\n1546,1818,AMER,grocery,retail,54.14,2,0.039,none,2024-08-06\r\n1547,1115,AMER,sports,retail,49.58,3,0.077,loyalty,2024-12-22\r\n1548,1226,AMER,sports,online,38.73,3,0.130,none,2024-09-02\r\n1549,1419,APAC,electronics,online,25.67,1,0.167,bundle,2024-11-26\r\n1550,1044,EMEA,fashion,online,27.41,3,0.017,coupon,2024-05-10\r\n1551,1440,AMER,toys,retail,59.05,2,0.176,loyalty,2024-02-26\r\n1552,2090,AMER,grocery,online,48.88,3,0.058,bundle,2024-11-24\r\n1553,2337,AMER,fashion,online,49.98,5,0.030,none,2024-08-09\r\n1554,1180,AMER,electronics,online,64.08,4,0.148,coupon,2024-07-07\r\n1555,2467,AMER,electronics,retail,72.58,5,0.062,coupon,2024-09-19\r\n1556,2177,AMER,sports,online,80.32,1,0.208,loyalty,2024-04-14\r\n1557,1364,EMEA,grocery,retail,31.45,7,0.247,none,2024-05-10\r\n1558,2338,AMER,electronics,online,48.71,2,0.117,loyalty,2024-07-28\r\n1559,1422,LATAM,grocery,retail,31.79,5,0.227,none,2024-10-13\r\n1560,1606,AMER,sports,online,57.44,4,0.055,loyalty,2024-08-14\r\n1561,1202,APAC,toys,mobile,99.39,2,0.040,none,2024-02-18\r\n1562,1198,AMER,grocery,online,105.84,2,0.194,none,2024-09-16\r\n1563,1816,EMEA,grocery,retail,53.81,6,0.112,coupon,2024-07-23\r\n1564,2437,LATAM,sports,retail,24.77,8,0.248,bundle,2024-09-16\r\n1565,1649,APAC,fashion,online,67.97,2,0.215,none,2024-03-21\r\n1566,1239,APAC,grocery,online,45.63,6,0.165,none,2024-02-28\r\n1567,1273,AMER,sports,retail,47.84,2,0.214,none,2024-04-09\r\n1568,1193,APAC,grocery,online,67.88,7,0.128,none,2024-02-11\r\n1569,2331,APAC,grocery,mobile,41.15,3,0.176,coupon,2024-05-10\r\n1570,2489,LATAM,home,online,185.30,5,0.051,none,2024-12-28\r\n1571,1501,AMER,fashion,retail,93.80,8,0.118,coupon,2024-09-14\r\n1572,1512,APAC,grocery,retail,19.47,4,0.100,none,2024-01-0
3\r\n1573,1022,APAC,home,online,44.18,8,0.199,loyalty,2024-12-10\r\n1574,1434,EMEA,sports,online,71.84,6,0.179,none,2024-03-04\r\n1575,1912,APAC,sports,online,38.04,7,0.034,none,2024-07-05\r\n1576,1834,AMER,grocery,retail,130.13,4,0.123,none,2024-02-17\r\n1577,1745,APAC,electronics,online,92.33,5,0.097,coupon,2024-06-20\r\n1578,2414,EMEA,home,online,49.46,7,0.015,coupon,2024-11-12\r\n1579,2209,AMER,electronics,online,57.17,3,0.140,bundle,2024-11-26\r\n1580,1678,LATAM,fashion,retail,68.29,6,0.079,none,2024-12-19\r\n1581,1619,APAC,sports,retail,79.37,2,0.179,bundle,2024-11-10\r\n1582,1058,LATAM,fashion,mobile,45.16,4,0.095,none,2024-12-25\r\n1583,2321,APAC,home,mobile,57.07,3,0.213,none,2024-11-13\r\n1584,1834,AMER,grocery,online,38.96,1,0.150,none,2024-11-06\r\n1585,1154,LATAM,toys,mobile,132.44,6,0.089,none,2024-07-15\r\n1586,1073,AMER,grocery,online,119.36,1,0.081,none,2024-06-06\r\n1587,1071,AMER,home,partner,86.58,2,0.060,bundle,2024-03-09\r\n1588,2354,LATAM,grocery,online,40.91,5,0.219,loyalty,2024-02-11\r\n1589,1735,LATAM,toys,retail,24.85,2,0.064,coupon,2024-08-01\r\n1590,2208,AMER,home,online,70.08,8,0.159,none,2024-03-26\r\n1591,2018,AMER,fashion,online,53.13,2,0.211,none,2024-09-01\r\n1592,1578,LATAM,toys,online,75.48,3,0.217,none,2024-07-21\r\n1593,2012,APAC,home,online,75.46,8,0.025,none,2024-02-05\r\n1594,1701,LATAM,toys,retail,46.31,5,0.117,loyalty,2024-08-26\r\n1595,1189,AMER,home,online,62.49,2,0.199,none,2024-07-12\r\n1596,1124,AMER,fashion,mobile,110.25,6,0.175,none,2024-04-16\r\n1597,1293,AMER,electronics,mobile,57.34,7,0.179,coupon,2024-06-20\r\n1598,1823,EMEA,electronics,online,88.92,5,0.108,coupon,2024-08-19\r\n1599,1169,LATAM,fashion,online,145.75,3,0.008,coupon,2024-08-10\r\n1600,1608,AMER,electronics,retail,52.52,5,0.098,none,2024-09-27\r\n1601,1505,EMEA,grocery,online,30.24,1,0.063,none,2024-03-28\r\n1602,2287,EMEA,electronics,online,57.46,7,0.097,loyalty,2024-12-09\r\n1603,1461,LATAM,electronics,online,72.61,7,0.210,none,2024-10-08\r\n1604,
2308,AMER,home,retail,28.04,1,0.202,none,2024-07-26\r\n1605,1851,EMEA,grocery,retail,48.59,6,0.179,none,2024-02-17\r\n1606,1562,AMER,toys,retail,92.85,5,0.199,none,2024-11-04\r\n1607,1055,AMER,electronics,online,117.83,7,0.112,none,2024-08-03\r\n1608,1769,LATAM,fashion,online,21.19,4,0.070,loyalty,2024-06-03\r\n1609,2259,AMER,home,retail,28.11,7,0.010,coupon,2024-01-06\r\n1610,1447,LATAM,home,retail,41.90,8,0.083,none,2024-01-27\r\n1611,1565,AMER,home,mobile,83.56,4,0.184,none,2024-03-17\r\n1612,1425,EMEA,grocery,online,45.99,8,0.066,loyalty,2024-09-13\r\n1613,1269,LATAM,home,retail,119.22,5,0.206,loyalty,2024-06-23\r\n1614,2050,APAC,home,online,111.20,7,0.207,none,2024-05-27\r\n1615,1265,APAC,home,retail,49.53,6,0.059,loyalty,2024-04-03\r\n1616,2407,EMEA,grocery,online,66.60,3,0.088,none,2024-09-13\r\n1617,2450,EMEA,electronics,online,33.65,5,0.112,coupon,2024-02-01\r\n1618,1432,APAC,grocery,mobile,48.54,2,0.041,coupon,2024-06-14\r\n1619,1679,APAC,home,mobile,47.27,8,0.128,coupon,2024-09-24\r\n1620,2365,LATAM,grocery,online,48.63,4,0.135,coupon,2024-07-02\r\n1621,1772,EMEA,grocery,retail,107.50,3,0.055,none,2024-12-16\r\n1622,1299,LATAM,fashion,retail,89.77,2,0.054,coupon,2024-01-19\r\n1623,1986,LATAM,electronics,retail,90.32,7,0.072,bundle,2024-10-07\r\n1624,1882,AMER,electronics,retail,53.29,4,0.123,bundle,2024-06-01\r\n1625,1907,EMEA,grocery,retail,67.75,3,0.074,none,2024-05-17\r\n1626,2020,AMER,home,online,51.52,8,0.028,none,2024-02-26\r\n1627,1669,AMER,fashion,retail,73.16,8,0.185,bundle,2024-10-28\r\n1628,1373,LATAM,fashion,retail,64.11,1,0.040,none,2024-01-04\r\n1629,1488,AMER,fashion,online,34.77,3,0.076,loyalty,2024-01-08\r\n1630,2108,AMER,fashion,online,271.02,1,0.118,none,2024-05-21\r\n1631,1931,APAC,home,online,70.05,6,0.145,none,2024-06-17\r\n1632,1718,EMEA,electronics,online,67.95,2,0.216,none,2024-03-20\r\n1633,1005,LATAM,grocery,retail,49.31,5,0.171,none,2024-09-22\r\n1634,2306,AMER,sports,retail,28.57,2,0.117,none,2024-08-16\r\n1635,2468,EMEA,toys,
mobile,86.03,3,0.084,none,2024-04-22\r\n1636,1497,EMEA,electronics,online,34.61,3,0.174,coupon,2024-10-16\r\n1637,1648,APAC,home,online,99.46,2,0.199,coupon,2024-11-24\r\n1638,2477,APAC,fashion,online,77.98,6,0.086,none,2024-05-24\r\n1639,1089,LATAM,grocery,retail,39.03,2,0.165,coupon,2024-08-09\r\n1640,1127,EMEA,electronics,mobile,99.40,8,0.182,coupon,2024-03-09\r\n1641,1772,EMEA,home,retail,49.84,5,0.009,none,2024-06-06\r\n1642,1421,APAC,toys,online,32.24,7,0.225,none,2024-11-10\r\n1643,2024,AMER,toys,retail,62.07,6,0.025,loyalty,2024-12-19\r\n1644,1132,EMEA,toys,online,54.75,8,0.242,none,2024-04-18\r\n1645,1830,EMEA,home,online,73.12,1,0.122,none,2024-04-18\r\n1646,1567,AMER,toys,online,56.18,3,0.194,loyalty,2024-12-08\r\n1647,1542,APAC,home,online,104.24,3,0.060,none,2024-02-26\r\n1648,1556,AMER,toys,retail,25.76,1,0.005,none,2024-03-14\r\n1649,1023,APAC,electronics,mobile,15.10,4,0.078,none,2024-04-07\r\n1650,2085,AMER,electronics,retail,107.26,3,0.046,bundle,2024-03-24\r\n1651,1014,EMEA,fashion,partner,79.71,2,0.198,loyalty,2024-09-21\r\n1652,2081,APAC,grocery,retail,24.31,2,0.057,loyalty,2024-09-13\r\n1653,2145,AMER,electronics,mobile,69.67,3,0.245,none,2024-12-24\r\n1654,2335,EMEA,sports,partner,58.26,2,0.177,none,2024-06-01\r\n1655,1141,AMER,grocery,retail,35.65,3,0.124,none,2024-11-13\r\n1656,1426,AMER,sports,online,103.57,1,0.127,bundle,2024-06-23\r\n1657,2065,EMEA,electronics,mobile,52.19,6,0.077,none,2024-05-16\r\n1658,1175,AMER,grocery,retail,66.10,7,0.152,coupon,2024-08-14\r\n1659,2284,EMEA,grocery,online,38.35,5,0.050,loyalty,2024-09-04\r\n1660,2452,LATAM,electronics,online,89.51,5,0.227,none,2024-02-19\r\n1661,2365,LATAM,fashion,online,165.89,4,0.041,bundle,2024-05-15\r\n1662,1719,LATAM,electronics,retail,61.07,7,0.061,loyalty,2024-02-16\r\n1663,2225,EMEA,electronics,retail,25.16,6,0.040,none,2024-08-25\r\n1664,1481,LATAM,grocery,retail,66.49,1,0.238,coupon,2024-12-16\r\n1665,1906,APAC,fashion,retail,68.89,7,0.191,coupon,2024-08-05\r\n1666,1324,LATA
M,electronics,retail,86.66,2,0.151,bundle,2024-12-22\r\n1667,2099,AMER,electronics,online,76.61,1,0.071,none,2024-07-23\r\n1668,1473,LATAM,home,online,69.45,4,0.043,none,2024-02-20\r\n1669,1802,AMER,electronics,retail,82.01,6,0.046,none,2024-05-19\r\n1670,1682,EMEA,grocery,online,62.05,2,0.101,none,2024-05-21\r\n1671,2345,LATAM,fashion,online,145.76,8,0.249,coupon,2024-02-16\r\n1672,1126,LATAM,home,online,36.38,5,0.183,coupon,2024-09-13\r\n1673,1436,APAC,electronics,retail,65.63,8,0.096,coupon,2024-04-22\r\n1674,1456,APAC,grocery,retail,22.38,3,0.093,loyalty,2024-07-19\r\n1675,1272,AMER,electronics,online,192.55,7,0.007,none,2024-02-01\r\n1676,1686,LATAM,electronics,retail,54.96,5,0.123,bundle,2024-04-19\r\n1677,2487,LATAM,grocery,online,12.67,6,0.035,coupon,2024-04-19\r\n1678,2081,APAC,grocery,online,33.30,5,0.226,none,2024-03-24\r\n1679,1184,AMER,fashion,partner,85.49,3,0.161,bundle,2024-09-02\r\n1680,1670,EMEA,electronics,retail,36.15,4,0.085,none,2024-04-27\r\n1681,1643,EMEA,home,mobile,72.98,6,0.006,none,2024-12-12\r\n1682,1127,EMEA,home,online,45.42,5,0.243,none,2024-04-06\r\n1683,1229,LATAM,sports,online,37.64,7,0.175,none,2024-02-07\r\n1684,1941,AMER,electronics,retail,38.54,7,0.171,coupon,2024-03-12\r\n1685,2165,AMER,electronics,retail,33.92,2,0.007,coupon,2024-01-03\r\n1686,1033,APAC,home,partner,56.10,2,0.240,none,2024-08-06\r\n1687,2295,EMEA,electronics,retail,34.86,2,0.210,none,2024-10-01\r\n1688,2379,AMER,grocery,mobile,31.45,3,0.072,loyalty,2024-03-13\r\n1689,1663,LATAM,home,retail,57.58,3,0.203,none,2024-05-21\r\n1690,1416,EMEA,grocery,retail,52.39,7,0.033,none,2024-08-20\r\n1691,2420,EMEA,home,online,87.76,7,0.236,none,2024-04-17\r\n1692,1702,AMER,grocery,mobile,47.04,1,0.047,none,2024-04-25\r\n1693,1616,APAC,electronics,partner,21.14,2,0.066,coupon,2024-07-09\r\n1694,2453,AMER,grocery,mobile,81.74,6,0.129,none,2024-09-15\r\n1695,2338,AMER,fashion,retail,36.35,3,0.135,coupon,2024-05-09\r\n1696,1003,APAC,fashion,partner,28.37,3,0.015,none,2024-09-27\
r\n1697,2009,LATAM,sports,mobile,79.69,8,0.102,none,2024-04-20\r\n1698,1878,EMEA,grocery,online,39.22,7,0.038,bundle,2024-12-27\r\n1699,2087,LATAM,sports,mobile,59.66,6,0.236,coupon,2024-08-22\r\n1700,1852,AMER,fashion,mobile,17.37,4,0.206,coupon,2024-07-09\r\n1701,1774,EMEA,electronics,online,45.94,2,0.097,none,2024-09-04\r\n1702,1513,APAC,grocery,retail,90.78,5,0.178,coupon,2024-11-02\r\n1703,1108,EMEA,toys,retail,73.50,7,0.017,none,2024-04-16\r\n1704,1050,AMER,home,online,37.45,3,0.212,coupon,2024-01-13\r\n1705,1739,AMER,fashion,online,54.25,8,0.161,coupon,2024-10-18\r\n1706,1564,APAC,sports,retail,47.39,4,0.064,none,2024-08-13\r\n1707,1425,EMEA,sports,online,74.27,1,0.168,none,2024-05-08\r\n1708,2208,AMER,electronics,online,118.98,5,0.244,none,2024-12-26\r\n1709,1201,LATAM,home,retail,66.04,1,0.025,loyalty,2024-10-16\r\n1710,1220,LATAM,fashion,online,69.02,3,0.224,none,2024-06-07\r\n1711,1563,EMEA,home,online,40.76,2,0.023,none,2024-10-15\r\n1712,1877,LATAM,electronics,online,53.98,4,0.209,bundle,2024-10-07\r\n1713,1397,LATAM,fashion,retail,83.59,6,0.112,none,2024-12-09\r\n1714,1746,LATAM,electronics,retail,24.39,2,0.007,coupon,2024-12-20\r\n1715,1716,LATAM,electronics,retail,58.83,3,0.233,none,2024-05-04\r\n1716,1630,APAC,electronics,online,42.66,8,0.072,none,2024-11-28\r\n1717,1500,EMEA,toys,partner,35.23,2,0.137,coupon,2024-01-10\r\n1718,1662,LATAM,electronics,retail,62.17,1,0.094,none,2024-02-25\r\n1719,1023,APAC,grocery,online,123.52,6,0.155,coupon,2024-06-13\r\n1720,2072,AMER,grocery,online,36.35,8,0.182,loyalty,2024-09-20\r\n1721,2136,AMER,toys,retail,178.29,2,0.065,none,2024-07-26\r\n1722,1663,LATAM,home,online,65.99,5,0.085,bundle,2024-09-06\r\n1723,2430,APAC,fashion,online,38.11,6,0.106,none,2024-10-20\r\n1724,2417,LATAM,grocery,online,69.15,3,0.192,coupon,2024-06-09\r\n1725,1484,AMER,fashion,retail,35.93,6,0.179,coupon,2024-08-02\r\n1726,1044,EMEA,fashion,online,46.01,4,0.187,coupon,2024-01-03\r\n1727,1066,AMER,home,online,87.50,1,0.243,coupon,2024-07
-08\r\n1728,2368,AMER,home,online,57.46,4,0.147,coupon,2024-09-10\r\n1729,1456,APAC,grocery,online,32.68,7,0.237,none,2024-01-07\r\n1730,1257,APAC,fashion,retail,35.01,3,0.044,none,2024-10-23\r\n1731,2350,APAC,fashion,mobile,35.20,8,0.212,none,2024-09-17\r\n1732,1678,LATAM,home,partner,109.88,3,0.091,none,2024-11-19\r\n1733,2363,AMER,electronics,mobile,41.42,4,0.192,coupon,2024-10-25\r\n1734,2315,LATAM,home,retail,28.88,5,0.110,coupon,2024-08-02\r\n1735,2157,AMER,sports,mobile,28.32,3,0.042,loyalty,2024-05-13\r\n1736,2038,LATAM,home,online,74.76,7,0.031,coupon,2024-02-27\r\n1737,1549,APAC,sports,retail,28.39,1,0.119,bundle,2024-11-21\r\n1738,1658,AMER,grocery,retail,53.84,6,0.107,coupon,2024-06-06\r\n1739,1974,EMEA,grocery,partner,48.17,2,0.156,none,2024-05-02\r\n1740,2133,AMER,home,partner,31.83,1,0.054,none,2024-08-19\r\n1741,1440,AMER,fashion,retail,46.49,3,0.207,bundle,2024-07-02\r\n1742,2260,EMEA,home,retail,75.52,3,0.233,none,2024-07-22\r\n1743,2456,APAC,fashion,mobile,53.48,6,0.151,coupon,2024-07-13\r\n1744,1679,APAC,toys,retail,40.94,7,0.118,none,2024-07-28\r\n1745,1040,LATAM,electronics,online,75.69,2,0.073,loyalty,2024-09-12\r\n1746,1795,EMEA,electronics,retail,70.95,5,0.016,none,2024-01-12\r\n1747,1264,APAC,sports,retail,210.22,7,0.149,none,2024-03-20\r\n1748,1099,LATAM,toys,online,70.39,4,0.132,loyalty,2024-04-18\r\n1749,1715,AMER,grocery,online,15.06,3,0.137,none,2024-12-03\r\n1750,2311,LATAM,toys,online,37.35,6,0.046,none,2024-04-23\r\n1751,2401,LATAM,home,mobile,167.38,1,0.114,none,2024-04-15\r\n1752,1210,LATAM,electronics,online,94.06,2,0.111,none,2024-03-10\r\n1753,1288,LATAM,fashion,online,83.26,2,0.167,coupon,2024-08-02\r\n1754,1057,LATAM,grocery,online,95.63,2,0.205,bundle,2024-12-02\r\n1755,1434,EMEA,sports,retail,48.84,7,0.029,none,2024-10-05\r\n1756,1224,APAC,home,online,65.74,4,0.203,bundle,2024-02-20\r\n1757,1855,APAC,home,mobile,91.11,7,0.060,none,2024-02-16\r\n1758,1807,EMEA,electronics,retail,24.76,7,0.149,coupon,2024-01-28\r\n1759,1608,A
MER,grocery,online,68.23,6,0.192,none,2024-02-27\r\n1760,1172,APAC,home,retail,86.73,3,0.219,bundle,2024-05-22\r\n1761,1154,LATAM,sports,retail,126.90,7,0.215,none,2024-01-28\r\n1762,2291,EMEA,home,retail,25.95,5,0.118,coupon,2024-09-05\r\n1763,2154,APAC,fashion,retail,137.07,4,0.013,loyalty,2024-10-10\r\n1764,1084,AMER,toys,online,105.43,2,0.137,bundle,2024-07-10\r\n1765,1601,APAC,sports,online,43.07,6,0.126,none,2024-05-22\r\n1766,2340,EMEA,electronics,retail,112.81,6,0.236,none,2024-10-01\r\n1767,2214,AMER,electronics,online,32.30,7,0.151,none,2024-09-14\r\n1768,1071,AMER,toys,online,40.61,6,0.186,none,2024-04-04\r\n1769,1609,LATAM,grocery,retail,46.92,5,0.235,loyalty,2024-09-02\r\n1770,1606,AMER,toys,partner,40.83,5,0.022,loyalty,2024-05-16\r\n1771,1777,AMER,fashion,online,42.85,3,0.195,bundle,2024-03-04\r\n1772,2388,LATAM,sports,retail,35.35,4,0.143,none,2024-10-07\r\n1773,1825,AMER,grocery,online,36.23,7,0.180,none,2024-06-03\r\n1774,1353,EMEA,electronics,retail,36.49,8,0.002,coupon,2024-08-12\r\n1775,2169,EMEA,fashion,retail,79.28,2,0.190,loyalty,2024-01-26\r\n1776,1858,LATAM,grocery,retail,21.75,5,0.003,none,2024-11-16\r\n1777,1037,EMEA,sports,retail,61.26,7,0.125,loyalty,2024-03-08\r\n1778,2147,LATAM,home,retail,68.48,2,0.135,none,2024-07-21\r\n1779,2066,APAC,sports,partner,94.04,4,0.106,none,2024-07-03\r\n1780,2195,APAC,grocery,retail,58.45,3,0.002,none,2024-05-20\r\n1781,1331,AMER,grocery,retail,58.53,4,0.139,none,2024-07-11\r\n1782,1184,AMER,sports,retail,72.72,4,0.191,none,2024-11-28\r\n1783,1099,LATAM,grocery,online,58.39,3,0.076,none,2024-09-12\r\n1784,1018,APAC,electronics,retail,47.75,6,0.240,none,2024-02-08\r\n1785,1684,EMEA,electronics,online,45.38,3,0.032,bundle,2024-01-15\r\n1786,1169,LATAM,sports,partner,76.17,1,0.249,none,2024-06-14\r\n1787,1335,APAC,grocery,retail,40.38,1,0.041,coupon,2024-09-13\r\n1788,1844,APAC,home,online,47.83,8,0.096,loyalty,2024-11-10\r\n1789,1151,APAC,home,mobile,58.08,8,0.194,none,2024-10-26\r\n1790,1795,EMEA,grocery,
retail,35.28,1,0.003,none,2024-03-19\r\n1791,1228,APAC,grocery,online,73.84,8,0.216,none,2024-12-19\r\n1792,1510,EMEA,grocery,retail,45.20,8,0.082,none,2024-07-19\r\n1793,1350,LATAM,home,retail,37.70,1,0.108,none,2024-12-28\r\n1794,2376,LATAM,electronics,online,55.72,1,0.029,none,2024-03-09\r\n1795,2198,EMEA,home,retail,49.23,5,0.156,bundle,2024-10-14\r\n1796,2332,APAC,home,retail,51.28,8,0.028,none,2024-05-21\r\n1797,1596,EMEA,home,retail,39.43,4,0.124,coupon,2024-03-07\r\n1798,2165,AMER,grocery,partner,101.82,5,0.026,none,2024-10-06\r\n1799,2183,EMEA,fashion,retail,69.56,1,0.007,bundle,2024-12-03\r\n1800,2428,LATAM,home,online,49.24,6,0.239,none,2024-06-04\r\n1801,2170,EMEA,grocery,mobile,71.60,4,0.168,none,2024-10-24\r\n1802,2412,LATAM,sports,online,91.66,4,0.245,none,2024-11-09\r\n1803,1305,EMEA,fashion,retail,86.14,6,0.172,none,2024-03-16\r\n1804,1134,APAC,grocery,online,53.18,5,0.156,none,2024-04-06\r\n1805,1613,EMEA,electronics,online,56.01,1,0.094,none,2024-10-10\r\n1806,1448,EMEA,sports,online,33.85,7,0.123,none,2024-08-28\r\n1807,2013,APAC,grocery,online,66.72,7,0.111,none,2024-02-13\r\n1808,2305,AMER,home,online,77.97,4,0.244,bundle,2024-05-01\r\n1809,1266,AMER,grocery,online,59.40,3,0.015,none,2024-05-04\r\n1810,2002,APAC,toys,retail,30.58,5,0.154,none,2024-10-12\r\n1811,1993,APAC,sports,online,27.57,7,0.053,coupon,2024-03-03\r\n1812,1657,LATAM,home,online,108.95,4,0.175,none,2024-05-23\r\n1813,1865,LATAM,fashion,mobile,47.53,2,0.245,none,2024-03-22\r\n1814,2429,EMEA,electronics,online,42.08,8,0.023,none,2024-02-06\r\n1815,1576,EMEA,grocery,mobile,106.10,1,0.076,coupon,2024-07-13\r\n1816,1443,EMEA,electronics,retail,102.46,2,0.211,none,2024-05-01\r\n1817,1426,AMER,grocery,retail,37.62,5,0.131,bundle,2024-02-24\r\n1818,2392,EMEA,toys,retail,77.71,7,0.066,coupon,2024-04-23\r\n1819,1111,APAC,home,online,71.34,2,0.193,coupon,2024-09-02\r\n1820,2320,LATAM,grocery,mobile,118.03,8,0.157,bundle,2024-02-11\r\n1821,2018,AMER,electronics,partner,95.63,5,0.168,none,
2024-04-03\r\n1822,2117,EMEA,grocery,online,60.68,3,0.165,none,2024-06-28\r\n1823,1280,LATAM,grocery,online,45.15,3,0.178,none,2024-12-05\r\n1824,1814,AMER,fashion,retail,55.62,8,0.080,loyalty,2024-12-25\r\n1825,2483,LATAM,grocery,retail,86.44,4,0.065,coupon,2024-10-12\r\n1826,2401,LATAM,sports,retail,49.89,8,0.128,coupon,2024-12-09\r\n1827,2246,AMER,fashion,online,86.63,3,0.052,coupon,2024-03-02\r\n1828,1410,AMER,home,online,25.40,6,0.079,none,2024-03-16\r\n1829,2099,AMER,electronics,online,39.17,8,0.008,coupon,2024-03-12\r\n1830,1787,APAC,grocery,partner,81.60,1,0.231,coupon,2024-01-05\r\n1831,1085,EMEA,sports,mobile,61.23,3,0.185,none,2024-11-03\r\n1832,1557,LATAM,fashion,retail,44.56,4,0.080,none,2024-05-19\r\n1833,1560,AMER,sports,online,24.07,6,0.054,loyalty,2024-11-08\r\n1834,2172,EMEA,sports,retail,67.70,2,0.075,coupon,2024-02-13\r\n1835,1528,EMEA,grocery,online,27.09,4,0.016,none,2024-05-13\r\n1836,2276,AMER,home,retail,51.83,1,0.169,none,2024-08-26\r\n1837,1020,APAC,grocery,mobile,48.44,1,0.066,coupon,2024-01-10\r\n1838,2016,LATAM,grocery,mobile,215.07,8,0.118,loyalty,2024-07-01\r\n1839,1790,AMER,grocery,online,67.02,8,0.151,none,2024-01-05\r\n1840,1874,LATAM,fashion,mobile,52.20,4,0.089,bundle,2024-03-10\r\n1841,1620,LATAM,fashion,online,103.30,8,0.060,coupon,2024-10-28\r\n1842,1064,AMER,fashion,retail,77.06,7,0.202,coupon,2024-10-17\r\n1843,1969,LATAM,electronics,retail,147.02,3,0.020,none,2024-04-01\r\n1844,1330,EMEA,home,online,39.83,8,0.131,coupon,2024-06-25\r\n1845,1639,APAC,home,online,43.95,6,0.080,coupon,2024-02-13\r\n1846,2135,EMEA,grocery,retail,107.30,2,0.002,loyalty,2024-12-15\r\n1847,1753,APAC,grocery,retail,78.36,2,0.209,bundle,2024-07-12\r\n1848,2161,LATAM,electronics,retail,65.67,7,0.015,none,2024-04-24\r\n1849,2256,AMER,electronics,online,94.86,4,0.249,none,2024-11-16\r\n1850,1816,EMEA,grocery,online,33.51,8,0.141,none,2024-08-11\r\n1851,2316,EMEA,grocery,online,157.67,4,0.053,none,2024-01-21\r\n1852,2215,LATAM,toys,online,71.55,4,0.006,b
undle,2024-08-12\r\n1853,1911,LATAM,home,retail,65.91,7,0.022,none,2024-04-26\r\n1854,1042,LATAM,grocery,retail,46.88,6,0.066,none,2024-11-27\r\n1855,2290,LATAM,electronics,online,85.67,1,0.215,none,2024-05-12\r\n1856,1451,EMEA,home,mobile,94.19,6,0.150,none,2024-05-13\r\n1857,1162,AMER,sports,retail,54.72,8,0.043,coupon,2024-09-23\r\n1858,1424,APAC,fashion,online,48.42,2,0.059,coupon,2024-04-19\r\n1859,2003,LATAM,fashion,retail,39.50,8,0.039,none,2024-08-09\r\n1860,1792,AMER,home,online,71.58,6,0.233,none,2024-07-11\r\n1861,1062,EMEA,sports,retail,71.30,6,0.054,none,2024-01-24\r\n1862,1450,EMEA,sports,retail,54.94,5,0.104,coupon,2024-05-24\r\n1863,1827,EMEA,grocery,retail,22.81,3,0.145,none,2024-04-13\r\n1864,1292,LATAM,grocery,retail,18.00,4,0.217,bundle,2024-06-17\r\n1865,2393,LATAM,toys,mobile,51.71,5,0.018,coupon,2024-11-18\r\n1866,1016,AMER,grocery,online,68.32,7,0.200,loyalty,2024-01-27\r\n1867,1753,APAC,home,partner,29.92,3,0.053,none,2024-11-04\r\n1868,2305,AMER,home,online,83.35,1,0.146,none,2024-09-21\r\n1869,1045,LATAM,grocery,retail,62.61,1,0.065,none,2024-07-16\r\n1870,1426,AMER,electronics,retail,86.47,7,0.200,none,2024-09-26\r\n1871,2078,APAC,sports,online,50.45,4,0.167,none,2024-08-08\r\n1872,2224,EMEA,grocery,mobile,15.88,1,0.118,none,2024-07-07\r\n1873,1775,EMEA,home,online,58.07,4,0.241,coupon,2024-03-17\r\n1874,1939,LATAM,sports,partner,71.56,5,0.028,none,2024-10-15\r\n1875,1974,EMEA,sports,retail,44.04,2,0.230,loyalty,2024-09-28\r\n1876,1192,EMEA,toys,retail,43.93,1,0.238,none,2024-12-12\r\n1877,1438,APAC,sports,online,99.35,5,0.194,loyalty,2024-06-20\r\n1878,2071,APAC,electronics,retail,34.19,1,0.132,bundle,2024-04-02\r\n1879,1461,LATAM,grocery,online,111.85,1,0.141,bundle,2024-07-11\r\n1880,1134,APAC,fashion,online,19.98,4,0.102,none,2024-07-25\r\n1881,1479,AMER,home,retail,45.77,1,0.223,loyalty,2024-03-19\r\n1882,1122,AMER,electronics,retail,37.83,1,0.123,none,2024-02-18\r\n1883,1867,AMER,electronics,partner,155.59,7,0.192,loyalty,2024-02-24
\r\n1884,1451,EMEA,electronics,mobile,63.46,7,0.117,none,2024-09-11\r\n1885,2089,EMEA,fashion,retail,76.94,8,0.027,coupon,2024-11-21\r\n1886,1222,AMER,grocery,online,49.25,6,0.205,coupon,2024-11-23\r\n1887,2036,APAC,electronics,online,33.92,6,0.060,bundle,2024-09-14\r\n1888,1719,LATAM,grocery,online,27.30,5,0.223,none,2024-07-11\r\n1889,1772,EMEA,sports,retail,31.69,2,0.137,bundle,2024-09-22\r\n1890,1754,EMEA,electronics,partner,15.73,4,0.144,bundle,2024-04-28\r\n1891,1045,LATAM,sports,online,30.57,6,0.124,none,2024-04-16\r\n1892,1420,APAC,home,retail,34.27,1,0.076,none,2024-12-10\r\n1893,1760,LATAM,home,online,55.61,3,0.211,none,2024-09-24\r\n1894,1907,EMEA,home,retail,44.88,5,0.241,bundle,2024-05-28\r\n1895,1999,EMEA,home,online,75.80,2,0.103,coupon,2024-07-13\r\n1896,1678,LATAM,electronics,retail,85.23,5,0.023,coupon,2024-03-21\r\n1897,2271,LATAM,sports,online,26.19,1,0.134,bundle,2024-11-03\r\n1898,1930,AMER,grocery,online,41.03,6,0.178,loyalty,2024-03-14\r\n1899,1562,AMER,toys,partner,101.09,3,0.206,none,2024-01-13\r\n1900,2258,AMER,sports,mobile,46.02,1,0.242,loyalty,2024-07-01\r\n1901,1838,AMER,grocery,mobile,22.75,7,0.207,none,2024-12-09\r\n1902,1191,EMEA,toys,partner,46.39,7,0.202,none,2024-03-17\r\n1903,1186,APAC,toys,mobile,38.94,8,0.243,coupon,2024-12-01\r\n1904,1881,LATAM,toys,online,49.19,3,0.155,none,2024-09-22\r\n1905,1140,LATAM,sports,online,39.60,6,0.248,none,2024-04-25\r\n1906,2456,APAC,fashion,retail,31.92,7,0.211,loyalty,2024-12-23\r\n1907,2074,AMER,sports,retail,90.45,1,0.141,none,2024-02-16\r\n1908,2317,LATAM,electronics,retail,104.17,8,0.188,none,2024-05-08\r\n1909,2315,LATAM,electronics,online,47.65,5,0.128,none,2024-10-09\r\n1910,1141,AMER,sports,online,74.90,4,0.029,bundle,2024-12-25\r\n1911,1562,AMER,grocery,online,152.23,4,0.031,none,2024-04-18\r\n1912,1476,APAC,toys,partner,62.33,2,0.163,coupon,2024-07-01\r\n1913,1554,AMER,sports,online,33.74,6,0.084,none,2024-02-13\r\n1914,1195,AMER,electronics,online,120.11,2,0.095,none,2024-02-21\r\n
1915,1540,LATAM,grocery,online,92.95,4,0.010,bundle,2024-04-02\r\n1916,1612,LATAM,grocery,retail,38.97,1,0.182,loyalty,2024-01-02\r\n1917,1529,LATAM,grocery,online,69.56,1,0.174,none,2024-12-05\r\n1918,2173,LATAM,electronics,partner,65.09,5,0.090,none,2024-07-04\r\n1919,1971,EMEA,grocery,retail,48.32,5,0.008,bundle,2024-02-18\r\n1920,2062,EMEA,sports,online,130.18,4,0.164,none,2024-03-19\r\n1921,1345,AMER,grocery,online,119.26,1,0.141,none,2024-09-15\r\n1922,1071,AMER,toys,mobile,29.31,3,0.231,none,2024-06-23\r\n1923,2123,AMER,toys,mobile,38.49,5,0.040,none,2024-10-16\r\n1924,2103,LATAM,home,mobile,106.66,8,0.012,loyalty,2024-02-18\r\n1925,1503,APAC,grocery,retail,104.91,8,0.168,none,2024-04-20\r\n1926,2064,LATAM,home,retail,115.41,2,0.218,none,2024-01-15\r\n1927,1635,APAC,electronics,online,30.30,4,0.131,none,2024-10-03\r\n1928,2258,AMER,home,retail,44.99,4,0.018,none,2024-09-04\r\n1929,1693,EMEA,electronics,online,96.25,7,0.096,none,2024-08-07\r\n1930,1613,EMEA,sports,retail,59.09,3,0.080,none,2024-06-03\r\n1931,2338,AMER,grocery,online,74.51,2,0.212,none,2024-02-20\r\n1932,1747,EMEA,electronics,mobile,32.16,7,0.038,bundle,2024-12-14\r\n1933,1116,LATAM,sports,online,108.36,3,0.185,coupon,2024-12-13\r\n1934,1990,EMEA,sports,retail,80.26,7,0.248,bundle,2024-04-01\r\n1935,1194,APAC,fashion,online,53.64,6,0.230,coupon,2024-01-13\r\n1936,1318,LATAM,home,retail,53.15,1,0.177,bundle,2024-08-01\r\n1937,1465,AMER,electronics,retail,96.38,2,0.047,coupon,2024-05-09\r\n1938,1525,APAC,sports,retail,64.42,7,0.124,loyalty,2024-11-06\r\n1939,1157,LATAM,grocery,mobile,46.31,7,0.238,none,2024-03-21\r\n1940,1787,APAC,home,partner,55.17,8,0.137,none,2024-11-15\r\n1941,1463,EMEA,grocery,retail,99.93,2,0.119,none,2024-05-15\r\n1942,1310,AMER,home,mobile,81.08,8,0.242,bundle,2024-12-14\r\n1943,2253,AMER,grocery,retail,50.07,5,0.027,bundle,2024-06-08\r\n1944,2074,AMER,grocery,retail,53.69,5,0.155,coupon,2024-11-21\r\n1945,2309,AMER,toys,online,54.41,3,0.192,none,2024-10-26\r\n1946,1753,A
PAC,sports,retail,48.40,7,0.232,coupon,2024-07-27\r\n1947,2338,AMER,fashion,retail,104.95,4,0.146,none,2024-10-04\r\n1948,1772,EMEA,sports,retail,115.17,5,0.189,none,2024-02-14\r\n1949,2471,APAC,grocery,retail,63.59,4,0.176,none,2024-10-16\r\n1950,1773,LATAM,sports,online,101.34,1,0.084,none,2024-03-27\r\n1951,1853,APAC,grocery,online,73.79,1,0.185,none,2024-05-13\r\n1952,1095,APAC,sports,retail,68.95,3,0.131,coupon,2024-01-06\r\n1953,1753,APAC,home,retail,41.56,2,0.180,none,2024-11-01\r\n1954,2311,LATAM,fashion,online,192.58,4,0.198,loyalty,2024-06-28\r\n1955,2439,AMER,home,online,45.74,8,0.045,none,2024-09-18\r\n1956,1979,APAC,home,retail,40.16,1,0.034,loyalty,2024-09-15\r\n1957,1868,AMER,fashion,online,68.95,4,0.028,none,2024-01-06\r\n1958,2160,LATAM,home,retail,80.03,1,0.119,none,2024-07-03\r\n1959,1542,APAC,electronics,mobile,90.63,2,0.159,none,2024-11-08\r\n1960,2207,APAC,home,partner,24.97,3,0.015,none,2024-03-05\r\n1961,1077,AMER,electronics,retail,104.46,2,0.116,none,2024-02-12\r\n1962,2487,LATAM,fashion,retail,65.03,7,0.005,coupon,2024-02-27\r\n1963,1487,AMER,sports,retail,64.58,3,0.059,none,2024-11-03\r\n1964,2007,LATAM,grocery,online,35.81,7,0.070,loyalty,2024-01-24\r\n1965,1518,AMER,sports,retail,39.04,3,0.144,bundle,2024-08-18\r\n1966,2392,EMEA,home,online,73.56,7,0.178,loyalty,2024-05-21\r\n1967,1305,EMEA,electronics,retail,18.55,3,0.186,none,2024-12-14\r\n1968,1830,EMEA,home,online,56.85,7,0.047,loyalty,2024-05-09\r\n1969,1609,LATAM,grocery,online,72.35,8,0.059,none,2024-02-18\r\n1970,1929,LATAM,home,partner,74.48,3,0.202,bundle,2024-01-26\r\n1971,1438,APAC,electronics,online,65.11,5,0.249,none,2024-01-16\r\n1972,2038,LATAM,toys,retail,30.22,6,0.249,none,2024-04-01\r\n1973,1633,EMEA,fashion,online,72.39,8,0.144,bundle,2024-08-18\r\n1974,1121,EMEA,sports,online,60.10,7,0.020,none,2024-12-26\r\n1975,2336,APAC,sports,retail,61.28,2,0.020,none,2024-02-16\r\n1976,2337,AMER,toys,retail,84.64,5,0.203,coupon,2024-04-06\r\n1977,1162,AMER,grocery,retail,79.21,
7,0.077,coupon,2024-02-20\r\n1978,2016,LATAM,grocery,online,124.70,2,0.167,none,2024-06-24\r\n1979,2117,EMEA,grocery,retail,98.85,7,0.200,none,2024-07-03\r\n1980,1767,AMER,grocery,online,23.35,8,0.048,none,2024-12-15\r\n1981,1568,AMER,toys,retail,51.48,3,0.054,none,2024-08-11\r\n1982,2423,LATAM,grocery,online,51.32,7,0.189,loyalty,2024-02-20\r\n1983,1935,EMEA,sports,online,61.68,7,0.243,loyalty,2024-10-04\r\n1984,1458,APAC,electronics,mobile,64.83,6,0.235,none,2024-11-12\r\n1985,1963,AMER,fashion,online,173.10,4,0.019,coupon,2024-11-21\r\n1986,1039,AMER,electronics,online,29.61,1,0.132,none,2024-06-03\r\n1987,1096,EMEA,fashion,online,154.79,7,0.147,none,2024-11-08\r\n1988,2063,APAC,grocery,online,91.35,5,0.228,none,2024-01-02\r\n1989,2338,AMER,sports,mobile,33.42,8,0.209,none,2024-03-26\r\n1990,2218,EMEA,home,online,67.89,2,0.184,bundle,2024-05-18\r\n1991,1201,LATAM,home,online,128.65,4,0.064,loyalty,2024-05-19\r\n1992,1494,AMER,grocery,retail,122.36,2,0.146,coupon,2024-01-25\r\n1993,2081,APAC,grocery,retail,81.05,4,0.147,bundle,2024-10-16\r\n1994,1856,EMEA,grocery,online,100.57,2,0.244,loyalty,2024-03-08\r\n1995,2184,APAC,home,partner,80.35,5,0.030,none,2024-11-16\r\n1996,1666,LATAM,toys,retail,29.13,2,0.082,bundle,2024-01-04\r\n1997,1418,LATAM,sports,mobile,96.69,5,0.060,none,2024-02-11\r\n1998,1129,LATAM,grocery,online,81.38,5,0.042,none,2024-06-07\r\n1999,1624,AMER,home,retail,54.77,7,0.196,bundle,2024-10-25\r\n2000,1021,AMER,home,online,86.67,7,0.205,none,2024-06-28\r\n2001,2015,APAC,electronics,retail,25.53,2,0.060,none,2024-11-11\r\n2002,1233,AMER,fashion,online,44.82,8,0.006,coupon,2024-08-26\r\n2003,1429,APAC,grocery,online,71.60,4,0.151,none,2024-04-07\r\n2004,1917,LATAM,home,retail,152.47,8,0.067,none,2024-02-09\r\n2005,1295,EMEA,grocery,retail,134.38,8,0.102,none,2024-02-03\r\n2006,1700,EMEA,grocery,retail,60.03,1,0.218,none,2024-09-02\r\n2007,1507,EMEA,fashion,online,31.51,2,0.188,coupon,2024-04-15\r\n2008,2165,AMER,home,mobile,70.17,5,0.060,loyalty,202
4-12-08\r\n2009,1911,LATAM,fashion,retail,97.88,2,0.097,none,2024-01-12\r\n2010,1277,AMER,grocery,partner,121.08,5,0.004,none,2024-05-25\r\n2011,2404,EMEA,home,online,34.80,8,0.186,none,2024-11-17\r\n2012,1259,EMEA,grocery,retail,50.33,5,0.180,none,2024-02-01\r\n2013,2253,AMER,fashion,online,92.82,2,0.038,none,2024-11-21\r\n2014,2113,LATAM,fashion,online,45.05,6,0.152,none,2024-02-10\r\n2015,1052,LATAM,fashion,retail,86.45,1,0.029,none,2024-10-08\r\n2016,2089,EMEA,grocery,online,43.74,1,0.168,bundle,2024-09-08\r\n2017,1033,APAC,fashion,mobile,44.77,5,0.036,none,2024-07-26\r\n2018,1436,APAC,grocery,retail,57.36,3,0.213,bundle,2024-06-18\r\n2019,1568,AMER,home,online,46.87,7,0.108,none,2024-07-18\r\n2020,1233,AMER,fashion,online,46.82,2,0.135,coupon,2024-09-07\r\n2021,2322,AMER,home,online,54.89,3,0.174,none,2024-11-04\r\n2022,1284,APAC,electronics,retail,52.47,2,0.010,coupon,2024-09-08\r\n2023,1511,EMEA,toys,online,18.21,2,0.074,coupon,2024-07-22\r\n2024,2066,APAC,grocery,retail,30.20,2,0.083,loyalty,2024-02-15\r\n2025,1529,LATAM,electronics,online,89.41,4,0.068,coupon,2024-06-13\r\n2026,1620,LATAM,grocery,retail,73.94,5,0.062,none,2024-02-15\r\n2027,1379,EMEA,toys,online,72.76,6,0.155,coupon,2024-06-21\r\n2028,1856,EMEA,grocery,retail,85.64,3,0.143,none,2024-04-20\r\n2029,2104,EMEA,home,retail,75.69,2,0.115,none,2024-04-16\r\n2030,1878,EMEA,grocery,online,37.68,4,0.040,none,2024-03-13\r\n2031,2263,AMER,grocery,partner,33.57,2,0.245,none,2024-07-08\r\n2032,1907,EMEA,sports,retail,24.17,8,0.171,bundle,2024-10-08\r\n2033,1795,EMEA,electronics,retail,113.69,3,0.219,none,2024-01-05\r\n2034,1069,APAC,sports,online,29.71,4,0.094,bundle,2024-09-23\r\n2035,1506,EMEA,home,retail,122.88,3,0.048,none,2024-11-28\r\n2036,1324,LATAM,grocery,online,69.33,5,0.193,bundle,2024-02-21\r\n2037,1712,LATAM,electronics,mobile,43.79,1,0.207,coupon,2024-02-08\r\n2038,1405,LATAM,home,mobile,66.52,3,0.173,none,2024-10-14\r\n2039,1447,LATAM,grocery,retail,99.31,4,0.151,bundle,2024-02-21\r\n2040,
2196,AMER,grocery,mobile,57.73,4,0.217,none,2024-07-05\r\n2041,2018,AMER,home,retail,34.31,7,0.002,none,2024-09-09\r\n2042,1161,AMER,grocery,online,73.97,4,0.087,none,2024-05-24\r\n2043,1073,AMER,toys,online,36.80,2,0.196,none,2024-08-05\r\n2044,1421,APAC,home,online,56.41,4,0.025,none,2024-04-26\r\n2045,1338,EMEA,sports,online,102.87,7,0.248,loyalty,2024-06-28\r\n2046,1992,LATAM,toys,online,46.64,1,0.155,none,2024-08-23\r\n2047,1455,APAC,home,retail,114.97,6,0.214,none,2024-06-06\r\n2048,2320,LATAM,electronics,online,38.15,6,0.066,none,2024-12-06\r\n2049,2129,APAC,grocery,online,115.56,6,0.152,bundle,2024-10-17\r\n2050,2029,APAC,sports,online,25.26,2,0.124,bundle,2024-07-25\r\n2051,1464,APAC,electronics,online,45.67,3,0.200,coupon,2024-04-06\r\n2052,2251,APAC,toys,online,46.13,5,0.061,coupon,2024-09-05\r\n2053,1471,EMEA,grocery,online,72.04,4,0.106,none,2024-09-04\r\n2054,2127,LATAM,electronics,retail,27.03,2,0.124,none,2024-03-05\r\n2055,1227,AMER,grocery,online,85.08,4,0.140,none,2024-03-16\r\n2056,1848,EMEA,electronics,mobile,77.97,3,0.115,coupon,2024-07-28\r\n2057,1015,AMER,electronics,online,85.63,2,0.216,none,2024-04-28\r\n2058,1167,EMEA,toys,retail,24.23,7,0.067,none,2024-12-14\r\n2059,2261,EMEA,grocery,online,114.26,6,0.088,coupon,2024-01-07\r\n2060,2072,AMER,electronics,retail,54.41,5,0.224,coupon,2024-02-11\r\n2061,1577,AMER,sports,mobile,63.08,6,0.068,coupon,2024-09-21\r\n2062,2233,EMEA,grocery,retail,47.77,4,0.067,none,2024-05-04\r\n2063,1668,AMER,toys,online,30.77,3,0.235,bundle,2024-07-05\r\n2064,1703,AMER,grocery,online,17.56,6,0.008,none,2024-09-26\r\n2065,1449,EMEA,grocery,online,65.38,8,0.221,coupon,2024-10-27\r\n2066,1744,EMEA,grocery,online,32.06,4,0.180,coupon,2024-08-03\r\n2067,1574,AMER,sports,online,60.89,1,0.231,coupon,2024-02-10\r\n2068,2064,LATAM,grocery,retail,41.99,4,0.152,none,2024-12-05\r\n2069,2221,LATAM,grocery,retail,30.84,7,0.041,none,2024-06-23\r\n2070,1368,EMEA,toys,retail,59.79,5,0.192,coupon,2024-08-07\r\n2071,1879,EMEA,electr
onics,mobile,39.38,7,0.054,coupon,2024-02-18\r\n2072,1444,EMEA,fashion,retail,53.75,3,0.208,coupon,2024-01-11\r\n2073,2081,APAC,fashion,online,76.64,7,0.033,loyalty,2024-09-09\r\n2074,1324,LATAM,sports,online,45.84,5,0.239,coupon,2024-09-24\r\n2075,2095,EMEA,electronics,retail,202.40,2,0.069,none,2024-12-13\r\n2076,1609,LATAM,toys,retail,50.09,8,0.215,coupon,2024-12-16\r\n2077,1547,AMER,toys,partner,55.47,8,0.178,none,2024-11-07\r\n2078,2274,APAC,home,online,76.19,1,0.101,coupon,2024-03-12\r\n2079,1166,AMER,fashion,online,72.01,2,0.207,none,2024-09-18\r\n2080,1489,AMER,electronics,online,38.36,1,0.228,none,2024-06-03\r\n2081,1849,EMEA,home,partner,78.68,7,0.092,coupon,2024-04-10\r\n2082,1193,APAC,home,retail,93.50,2,0.145,none,2024-10-28\r\n2083,2440,APAC,electronics,retail,69.03,3,0.155,none,2024-12-01\r\n2084,1492,APAC,home,mobile,150.62,8,0.208,coupon,2024-03-02\r\n2085,1051,EMEA,electronics,online,40.73,5,0.209,loyalty,2024-03-07\r\n2086,1602,EMEA,electronics,online,54.72,3,0.183,none,2024-07-28\r\n2087,1386,AMER,electronics,online,66.08,2,0.085,none,2024-09-23\r\n2088,1303,LATAM,home,retail,79.02,8,0.200,bundle,2024-05-05\r\n2089,2266,LATAM,grocery,partner,79.87,2,0.094,coupon,2024-05-01\r\n2090,2135,EMEA,electronics,online,21.53,3,0.099,bundle,2024-04-23\r\n2091,1657,LATAM,sports,mobile,72.66,3,0.220,none,2024-07-26\r\n2092,1445,APAC,electronics,retail,140.84,7,0.081,coupon,2024-06-16\r\n2093,1974,EMEA,toys,partner,60.74,7,0.223,none,2024-01-09\r\n2094,1550,APAC,grocery,retail,105.35,8,0.240,loyalty,2024-04-03\r\n2095,2225,EMEA,toys,online,58.97,7,0.237,none,2024-12-08\r\n2096,1483,EMEA,grocery,retail,15.02,5,0.158,bundle,2024-05-19\r\n2097,1566,EMEA,fashion,retail,51.14,3,0.099,none,2024-08-05\r\n2098,1985,AMER,electronics,retail,116.40,6,0.130,coupon,2024-08-09\r\n2099,2165,AMER,home,online,36.81,3,0.014,none,2024-02-19\r\n2100,1127,EMEA,grocery,retail,43.23,4,0.018,none,2024-04-18\r\n2101,1602,EMEA,toys,retail,49.02,8,0.040,coupon,2024-04-09\r\n2102,1800,AP
AC,fashion,online,144.31,2,0.129,none,2024-04-25\r\n2103,1938,APAC,home,mobile,48.18,7,0.176,none,2024-10-27\r\n2104,1641,EMEA,grocery,online,32.98,4,0.043,bundle,2024-03-11\r\n2105,2479,EMEA,home,online,41.62,5,0.038,none,2024-09-25\r\n2106,1409,APAC,fashion,retail,35.53,8,0.194,none,2024-12-06\r\n2107,1948,EMEA,home,mobile,90.53,6,0.223,none,2024-10-13\r\n2108,1229,LATAM,electronics,retail,55.01,2,0.149,none,2024-01-25\r\n2109,2158,APAC,home,mobile,22.07,5,0.175,loyalty,2024-09-03\r\n2110,1504,AMER,toys,online,73.48,6,0.214,none,2024-09-06\r\n2111,1012,LATAM,toys,online,111.40,8,0.076,coupon,2024-09-17\r\n2112,2353,AMER,electronics,online,76.03,6,0.161,coupon,2024-05-09\r\n2113,1691,LATAM,grocery,online,107.35,6,0.189,none,2024-07-17\r\n2114,1329,APAC,electronics,online,113.08,5,0.220,none,2024-08-14\r\n2115,2254,LATAM,home,online,78.83,2,0.218,loyalty,2024-02-14\r\n2116,1171,APAC,grocery,online,54.54,4,0.105,coupon,2024-11-27\r\n2117,2242,AMER,sports,online,57.45,4,0.035,coupon,2024-04-03\r\n2118,1309,EMEA,home,retail,51.32,7,0.018,none,2024-08-20\r\n2119,2423,LATAM,toys,mobile,61.10,6,0.238,loyalty,2024-12-03\r\n2120,2428,LATAM,home,online,65.07,2,0.226,none,2024-04-04\r\n2121,1097,EMEA,home,retail,54.01,6,0.012,none,2024-04-27\r\n2122,1449,EMEA,sports,retail,106.11,4,0.245,coupon,2024-09-19\r\n2123,1216,APAC,grocery,online,50.49,6,0.245,none,2024-10-03\r\n2124,1646,APAC,grocery,online,45.44,1,0.053,coupon,2024-10-27\r\n2125,1123,LATAM,fashion,retail,53.68,6,0.114,coupon,2024-11-21\r\n2126,1108,EMEA,grocery,retail,63.53,3,0.178,coupon,2024-02-24\r\n2127,2391,EMEA,grocery,retail,32.78,1,0.170,coupon,2024-02-05\r\n2128,1751,AMER,sports,mobile,114.55,1,0.170,coupon,2024-02-09\r\n2129,2020,AMER,fashion,mobile,68.24,8,0.063,none,2024-04-18\r\n2130,2369,LATAM,home,online,24.38,7,0.060,none,2024-11-08\r\n2131,1336,APAC,electronics,online,70.85,3,0.147,none,2024-05-20\r\n2132,1607,LATAM,sports,online,158.56,3,0.086,none,2024-12-16\r\n2133,1600,AMER,fashion,online,129.90
,8,0.138,coupon,2024-02-14\r\n2134,2138,APAC,grocery,online,98.25,6,0.054,none,2024-04-27\r\n2135,1796,LATAM,grocery,online,49.93,5,0.165,coupon,2024-02-21\r\n2136,1026,APAC,grocery,retail,17.22,2,0.247,none,2024-12-28\r\n2137,1527,AMER,grocery,online,154.19,4,0.024,none,2024-11-11\r\n2138,1838,AMER,electronics,online,36.31,3,0.174,none,2024-10-09\r\n2139,1539,LATAM,fashion,retail,86.27,1,0.074,bundle,2024-05-08\r\n2140,2332,APAC,electronics,partner,68.41,6,0.049,none,2024-08-17\r\n2141,2499,LATAM,fashion,mobile,106.18,7,0.070,none,2024-11-07\r\n2142,1504,AMER,fashion,retail,34.31,1,0.152,none,2024-07-07\r\n2143,1811,APAC,fashion,online,33.72,1,0.139,coupon,2024-01-13\r\n2144,1501,AMER,sports,retail,61.22,8,0.242,loyalty,2024-05-01\r\n2145,1538,AMER,electronics,online,61.23,5,0.212,none,2024-05-07\r\n2146,1005,LATAM,electronics,mobile,149.89,8,0.139,none,2024-02-02\r\n2147,1396,EMEA,electronics,mobile,69.91,3,0.248,coupon,2024-12-21\r\n2148,1893,APAC,grocery,retail,43.01,1,0.179,bundle,2024-12-18\r\n2149,2432,AMER,home,online,33.56,6,0.048,none,2024-08-20\r\n2150,1807,EMEA,electronics,mobile,49.24,4,0.236,none,2024-12-21\r\n2151,1957,AMER,toys,mobile,34.25,5,0.107,none,2024-05-02\r\n2152,2162,EMEA,grocery,retail,63.42,4,0.144,loyalty,2024-11-14\r\n2153,1967,EMEA,electronics,retail,66.83,1,0.056,loyalty,2024-12-24\r\n2154,2128,EMEA,sports,online,77.23,3,0.075,coupon,2024-04-04\r\n2155,1767,AMER,home,online,20.06,3,0.237,loyalty,2024-03-14\r\n2156,2305,AMER,electronics,online,52.78,1,0.181,none,2024-01-03\r\n2157,1062,EMEA,fashion,mobile,38.32,2,0.103,coupon,2024-09-06\r\n2158,2439,AMER,electronics,retail,16.89,1,0.060,coupon,2024-05-28\r\n2159,1785,EMEA,toys,online,168.78,4,0.235,none,2024-06-19\r\n2160,1258,EMEA,home,mobile,61.92,8,0.114,none,2024-12-20\r\n2161,2219,LATAM,grocery,retail,47.85,7,0.043,none,2024-01-03\r\n2162,2129,APAC,home,retail,39.37,1,0.110,none,2024-09-19\r\n2163,1136,EMEA,grocery,online,29.23,2,0.203,loyalty,2024-12-27\r\n2164,1349,APAC,electron
ics,online,27.41,5,0.075,coupon,2024-05-02\r\n2165,2045,LATAM,home,retail,59.25,3,0.026,loyalty,2024-06-21\r\n2166,1437,EMEA,grocery,retail,49.35,1,0.106,none,2024-09-28\r\n2167,1816,EMEA,home,online,59.19,5,0.170,none,2024-09-12\r\n2168,1371,AMER,home,online,34.67,4,0.143,none,2024-08-25\r\n2169,1783,AMER,home,online,32.12,4,0.046,none,2024-02-03\r\n2170,1928,AMER,home,online,75.13,7,0.180,none,2024-06-23\r\n2171,1967,EMEA,electronics,partner,48.92,2,0.095,loyalty,2024-09-09\r\n2172,1098,APAC,grocery,retail,86.09,7,0.247,none,2024-11-12\r\n2173,1707,APAC,electronics,online,28.67,5,0.240,none,2024-08-10\r\n2174,1842,LATAM,grocery,mobile,72.06,5,0.049,none,2024-09-11\r\n2175,1460,LATAM,fashion,retail,47.68,4,0.243,none,2024-06-27\r\n2176,1511,EMEA,home,online,109.89,4,0.150,none,2024-04-14\r\n2177,2071,APAC,electronics,online,60.75,7,0.155,coupon,2024-12-10\r\n2178,1577,AMER,grocery,retail,42.78,1,0.016,loyalty,2024-01-04\r\n2179,1065,AMER,sports,mobile,44.58,2,0.236,none,2024-10-20\r\n2180,2251,APAC,grocery,online,56.28,2,0.161,none,2024-06-22\r\n2181,1539,LATAM,fashion,online,79.67,1,0.144,coupon,2024-02-15\r\n2182,2261,EMEA,home,online,27.79,3,0.207,none,2024-02-05\r\n2183,2389,LATAM,toys,mobile,60.59,1,0.185,none,2024-02-26\r\n2184,1430,EMEA,home,retail,110.27,8,0.105,bundle,2024-05-01\r\n2185,2402,AMER,electronics,online,56.92,2,0.029,none,2024-07-25\r\n2186,1893,APAC,sports,retail,57.89,1,0.231,none,2024-10-13\r\n2187,1456,APAC,electronics,retail,78.55,3,0.106,none,2024-08-05\r\n2188,1761,EMEA,home,retail,57.59,7,0.092,none,2024-10-16\r\n2189,1121,EMEA,sports,retail,37.42,8,0.125,none,2024-08-08\r\n2190,1442,EMEA,electronics,retail,39.28,5,0.223,none,2024-01-03\r\n2191,2133,AMER,home,mobile,50.40,5,0.237,none,2024-11-19\r\n2192,1102,APAC,fashion,online,112.75,1,0.109,none,2024-12-27\r\n2193,2055,AMER,electronics,online,51.60,3,0.032,none,2024-10-09\r\n2194,1021,AMER,home,retail,48.50,4,0.010,none,2024-07-20\r\n2195,1978,AMER,fashion,retail,67.50,5,0.091,none,20
24-10-03\r\n2196,1090,AMER,electronics,retail,98.55,3,0.212,none,2024-03-27\r\n2197,1340,LATAM,grocery,retail,36.11,8,0.036,coupon,2024-06-09\r\n2198,1535,AMER,grocery,retail,73.67,2,0.081,none,2024-11-09\r\n2199,1465,AMER,grocery,retail,65.70,1,0.217,none,2024-11-13\r\n2200,1671,APAC,fashion,online,101.97,7,0.177,none,2024-07-16\r\n2201,2195,APAC,electronics,online,92.15,3,0.081,coupon,2024-03-01\r\n2202,2028,APAC,grocery,online,67.39,7,0.205,none,2024-08-19\r\n2203,1053,AMER,fashion,retail,55.12,7,0.126,bundle,2024-03-12\r\n2204,1990,EMEA,toys,mobile,37.52,3,0.055,none,2024-09-01\r\n2205,2351,EMEA,toys,online,43.20,4,0.020,none,2024-05-05\r\n2206,1775,EMEA,fashion,retail,60.14,5,0.163,none,2024-11-02\r\n2207,1846,APAC,fashion,online,41.74,8,0.010,none,2024-06-05\r\n2208,2066,APAC,electronics,online,98.85,8,0.011,none,2024-10-06\r\n2209,1498,LATAM,home,retail,90.47,4,0.011,coupon,2024-10-15\r\n2210,2342,AMER,home,retail,90.80,3,0.168,loyalty,2024-12-20\r\n2211,2089,EMEA,toys,mobile,32.61,8,0.185,bundle,2024-10-22\r\n2212,1679,APAC,grocery,online,179.45,1,0.000,none,2024-09-02\r\n2213,1486,LATAM,grocery,partner,47.14,8,0.242,bundle,2024-02-03\r\n2214,1688,LATAM,fashion,retail,42.58,4,0.088,none,2024-09-07\r\n2215,1477,APAC,home,online,38.46,1,0.199,loyalty,2024-07-19\r\n2216,1787,APAC,grocery,online,105.01,7,0.103,bundle,2024-02-25\r\n2217,1932,EMEA,grocery,online,125.12,2,0.121,none,2024-10-07\r\n2218,2239,EMEA,electronics,online,101.24,3,0.093,bundle,2024-12-07\r\n2219,2073,AMER,sports,online,111.47,5,0.116,coupon,2024-02-28\r\n2220,1588,LATAM,fashion,online,33.75,8,0.111,none,2024-09-25\r\n2221,1813,EMEA,home,online,23.81,8,0.037,none,2024-03-11\r\n2222,1689,LATAM,toys,retail,38.36,7,0.123,none,2024-10-24\r\n2223,2265,APAC,home,mobile,74.79,6,0.177,none,2024-06-16\r\n2224,1911,LATAM,electronics,retail,45.09,3,0.009,none,2024-05-01\r\n2225,1872,LATAM,home,retail,80.38,2,0.081,none,2024-04-15\r\n2226,1426,AMER,electronics,online,73.40,2,0.175,none,2024-06-28\r\n222
7,2141,AMER,fashion,online,80.62,6,0.165,coupon,2024-01-23\r\n2228,1021,AMER,sports,retail,34.41,6,0.054,loyalty,2024-06-18\r\n2229,2090,AMER,electronics,retail,40.86,8,0.014,none,2024-12-23\r\n2230,1018,APAC,electronics,online,36.36,5,0.019,coupon,2024-11-16\r\n2231,2134,AMER,sports,online,85.17,8,0.030,bundle,2024-07-20\r\n2232,1118,AMER,grocery,online,65.33,5,0.156,none,2024-03-26\r\n2233,1686,LATAM,grocery,online,77.66,2,0.030,none,2024-02-13\r\n2234,2465,EMEA,grocery,online,29.44,5,0.012,none,2024-06-23\r\n2235,1606,AMER,grocery,online,51.22,4,0.197,none,2024-01-18\r\n2236,1848,EMEA,home,mobile,56.30,3,0.046,bundle,2024-06-06\r\n2237,2415,AMER,sports,online,20.73,7,0.187,none,2024-01-17\r\n2238,1614,EMEA,grocery,retail,88.06,5,0.029,none,2024-08-25\r\n2239,1079,LATAM,electronics,online,61.42,8,0.150,none,2024-10-23\r\n2240,1787,APAC,toys,online,58.55,1,0.245,none,2024-01-08\r\n2241,1261,APAC,electronics,online,39.25,3,0.149,none,2024-06-13\r\n2242,2472,AMER,sports,partner,84.83,2,0.119,none,2024-04-04\r\n2243,1136,EMEA,sports,retail,42.91,7,0.028,none,2024-11-09\r\n2244,1946,AMER,grocery,online,29.92,2,0.184,loyalty,2024-03-12\r\n2245,2292,EMEA,home,retail,62.59,3,0.091,none,2024-12-18\r\n2246,1180,AMER,fashion,retail,51.68,3,0.100,none,2024-04-01\r\n2247,2345,LATAM,sports,mobile,90.18,4,0.062,none,2024-07-19\r\n2248,2338,AMER,fashion,retail,41.71,3,0.117,loyalty,2024-07-02\r\n2249,1202,APAC,sports,online,73.32,4,0.079,none,2024-04-26\r\n2250,2268,EMEA,fashion,online,35.42,6,0.243,none,2024-04-28\r\n2251,1519,APAC,grocery,online,52.03,4,0.071,loyalty,2024-10-09\r\n2252,2340,EMEA,grocery,retail,121.73,2,0.089,none,2024-05-17\r\n2253,1680,LATAM,fashion,retail,101.59,2,0.151,bundle,2024-12-22\r\n2254,1285,EMEA,grocery,online,81.41,7,0.077,coupon,2024-10-17\r\n2255,2104,EMEA,grocery,retail,53.13,4,0.134,none,2024-03-05\r\n2256,1216,APAC,home,retail,53.17,5,0.125,none,2024-05-12\r\n2257,1604,EMEA,grocery,retail,38.96,6,0.248,none,2024-07-25\r\n2258,2368,AMER,grocery
,online,32.98,1,0.225,none,2024-03-04\r\n2259,1806,APAC,grocery,online,86.33,1,0.036,none,2024-09-18\r\n2260,2015,APAC,electronics,retail,60.62,3,0.194,none,2024-01-10\r\n2261,1928,AMER,electronics,retail,97.52,5,0.101,bundle,2024-11-08\r\n2262,2195,APAC,fashion,online,61.12,4,0.101,coupon,2024-08-06\r\n2263,1936,EMEA,fashion,online,66.93,5,0.072,coupon,2024-12-04\r\n2264,1573,AMER,electronics,retail,57.09,4,0.026,loyalty,2024-01-08\r\n2265,1954,APAC,electronics,online,114.01,4,0.076,coupon,2024-10-06\r\n2266,2004,LATAM,grocery,online,39.82,1,0.163,none,2024-12-16\r\n2267,2374,LATAM,sports,retail,52.68,8,0.066,none,2024-01-28\r\n2268,2496,EMEA,sports,online,113.66,5,0.242,none,2024-11-11\r\n2269,2389,LATAM,electronics,retail,27.48,6,0.247,coupon,2024-10-01\r\n2270,2018,AMER,fashion,retail,28.46,5,0.198,none,2024-10-26\r\n2271,2342,AMER,fashion,retail,86.62,7,0.152,coupon,2024-11-17\r\n2272,1591,APAC,electronics,retail,54.46,1,0.102,none,2024-11-25\r\n2273,1405,LATAM,grocery,retail,41.75,2,0.137,none,2024-05-18\r\n2274,2228,EMEA,fashion,mobile,55.87,8,0.022,bundle,2024-10-11\r\n2275,1334,APAC,home,online,21.41,5,0.055,loyalty,2024-05-18\r\n2276,1694,APAC,grocery,online,72.80,8,0.065,none,2024-10-09\r\n2277,2212,EMEA,toys,online,103.91,2,0.136,none,2024-06-11\r\n2278,1040,LATAM,grocery,online,56.59,4,0.129,loyalty,2024-10-26\r\n2279,2239,EMEA,toys,online,56.80,4,0.114,coupon,2024-03-27\r\n2280,2166,AMER,toys,retail,34.70,8,0.132,coupon,2024-03-17\r\n2281,1855,APAC,sports,partner,59.56,2,0.131,none,2024-06-26\r\n2282,1450,EMEA,fashion,online,60.30,3,0.170,none,2024-06-26\r\n2283,1025,EMEA,grocery,retail,43.83,2,0.024,loyalty,2024-06-18\r\n2284,1028,EMEA,toys,retail,27.03,8,0.237,coupon,2024-03-13\r\n2285,1414,APAC,grocery,online,141.73,1,0.219,bundle,2024-08-08\r\n2286,2172,EMEA,grocery,online,67.47,1,0.097,none,2024-04-28\r\n2287,1779,APAC,grocery,online,110.38,3,0.024,coupon,2024-05-14\r\n2288,2468,EMEA,grocery,partner,42.18,2,0.058,loyalty,2024-01-12\r\n2289,2303,EM
EA,sports,online,29.11,8,0.035,none,2024-08-03\r\n2290,1920,LATAM,fashion,retail,98.48,7,0.091,bundle,2024-11-13\r\n2291,1396,EMEA,grocery,online,58.12,4,0.217,bundle,2024-01-07\r\n2292,1714,APAC,toys,online,68.40,2,0.036,none,2024-02-28\r\n2293,1255,AMER,sports,retail,51.34,4,0.172,loyalty,2024-06-18\r\n2294,2219,LATAM,home,retail,86.64,2,0.190,none,2024-01-27\r\n2295,2012,APAC,electronics,partner,22.16,4,0.015,none,2024-01-25\r\n2296,2445,APAC,electronics,retail,13.19,4,0.215,none,2024-10-13\r\n2297,1527,AMER,home,retail,41.46,6,0.060,none,2024-05-14\r\n2298,2130,EMEA,fashion,retail,85.46,8,0.245,loyalty,2024-07-13\r\n2299,1101,AMER,toys,online,49.23,1,0.078,coupon,2024-10-13\r\n2300,1920,LATAM,home,retail,45.05,7,0.120,bundle,2024-03-18\r\n2301,2468,EMEA,fashion,partner,56.92,3,0.011,bundle,2024-08-03\r\n2302,1470,LATAM,electronics,online,27.72,1,0.109,bundle,2024-04-12\r\n2303,1525,APAC,sports,retail,36.04,3,0.221,none,2024-09-05\r\n2304,1451,EMEA,grocery,retail,39.49,2,0.013,coupon,2024-03-22\r\n2305,1147,EMEA,grocery,partner,69.45,3,0.182,bundle,2024-01-02\r\n2306,1200,EMEA,electronics,partner,56.60,3,0.142,none,2024-01-23\r\n2307,2274,APAC,grocery,retail,36.42,1,0.151,bundle,2024-07-07\r\n2308,1956,APAC,electronics,partner,70.43,7,0.135,none,2024-08-24\r\n2309,1489,AMER,home,partner,40.67,4,0.017,none,2024-10-01\r\n2310,2415,AMER,sports,online,90.39,5,0.231,coupon,2024-02-10\r\n2311,1825,AMER,grocery,online,100.85,3,0.147,none,2024-07-24\r\n2312,2429,EMEA,home,mobile,79.51,7,0.120,none,2024-04-18\r\n2313,1154,LATAM,home,retail,68.13,5,0.046,none,2024-03-13\r\n2314,2193,AMER,electronics,online,66.91,3,0.168,coupon,2024-11-04\r\n2315,2376,LATAM,home,online,61.57,5,0.172,none,2024-06-26\r\n2316,1565,AMER,electronics,online,81.14,6,0.148,bundle,2024-05-03\r\n2317,2262,APAC,fashion,online,36.19,3,0.011,none,2024-08-26\r\n2318,1943,AMER,fashion,retail,59.61,7,0.229,none,2024-07-06\r\n2319,1068,APAC,sports,retail,34.63,6,0.137,bundle,2024-01-07\r\n2320,2106,LATAM,to
ys,online,54.89,1,0.020,coupon,2024-05-07\r\n2321,2190,LATAM,fashion,partner,74.81,1,0.012,none,2024-05-23\r\n2322,1949,AMER,grocery,online,62.28,3,0.072,none,2024-03-11\r\n2323,1460,LATAM,toys,mobile,45.96,3,0.157,coupon,2024-12-12\r\n2324,1360,APAC,fashion,retail,60.11,6,0.078,none,2024-08-07\r\n2325,1328,APAC,electronics,online,15.92,6,0.185,none,2024-06-07\r\n2326,2020,AMER,electronics,partner,40.51,7,0.014,none,2024-06-25\r\n2327,2438,AMER,toys,partner,36.37,1,0.148,none,2024-11-06\r\n2328,1786,APAC,fashion,online,32.02,4,0.017,loyalty,2024-10-25\r\n2329,1348,AMER,home,online,96.05,2,0.184,none,2024-11-09\r\n2330,2477,APAC,home,online,59.45,2,0.007,bundle,2024-04-17\r\n2331,1092,AMER,home,online,75.80,4,0.237,none,2024-05-27\r\n2332,1517,AMER,home,retail,135.38,2,0.092,coupon,2024-03-10\r\n2333,2460,AMER,fashion,online,69.18,2,0.205,coupon,2024-12-10\r\n2334,1308,EMEA,electronics,retail,73.58,1,0.105,none,2024-04-27\r\n2335,2312,APAC,electronics,retail,89.53,6,0.250,none,2024-10-21\r\n2336,1582,AMER,electronics,online,49.15,2,0.103,loyalty,2024-10-26\r\n2337,2269,EMEA,grocery,retail,58.30,4,0.037,none,2024-08-23\r\n2338,1090,AMER,electronics,retail,82.48,4,0.171,loyalty,2024-03-12\r\n2339,1322,AMER,grocery,retail,53.47,8,0.127,loyalty,2024-09-27\r\n2340,2050,APAC,sports,online,45.89,1,0.155,none,2024-06-21\r\n2341,2110,LATAM,grocery,retail,174.82,4,0.220,none,2024-11-25\r\n2342,1613,EMEA,electronics,mobile,57.67,6,0.009,none,2024-09-23\r\n2343,2253,AMER,fashion,online,57.29,6,0.125,none,2024-11-03\r\n2344,1928,AMER,electronics,online,60.92,1,0.198,none,2024-04-13\r\n2345,1519,APAC,electronics,online,49.30,2,0.040,coupon,2024-06-03\r\n2346,1466,AMER,home,online,95.49,1,0.075,none,2024-09-23\r\n2347,1517,AMER,toys,online,81.44,3,0.228,loyalty,2024-02-21\r\n2348,1182,EMEA,home,online,59.71,8,0.211,none,2024-05-05\r\n2349,1442,EMEA,fashion,partner,35.24,6,0.091,none,2024-02-06\r\n2350,1533,APAC,toys,retail,76.76,2,0.033,coupon,2024-06-19\r\n2351,2270,APAC,sports,mo
bile,85.00,8,0.245,none,2024-08-12\r\n2352,1187,AMER,sports,online,76.17,5,0.112,none,2024-08-03\r\n2353,1890,LATAM,grocery,mobile,57.26,7,0.129,none,2024-12-21\r\n2354,1837,LATAM,fashion,online,74.70,5,0.081,none,2024-01-14\r\n2355,1877,LATAM,toys,mobile,59.85,6,0.044,bundle,2024-02-22\r\n2356,1540,LATAM,home,mobile,61.74,5,0.056,none,2024-06-12\r\n2357,1688,LATAM,grocery,retail,114.09,2,0.204,loyalty,2024-02-25\r\n2358,2176,AMER,home,online,89.17,7,0.002,none,2024-06-22\r\n2359,1739,AMER,electronics,retail,108.24,2,0.069,coupon,2024-09-18\r\n2360,2069,AMER,grocery,online,40.16,3,0.223,coupon,2024-05-11\r\n2361,2003,LATAM,home,retail,57.38,5,0.164,none,2024-04-27\r\n2362,1298,LATAM,grocery,partner,22.37,1,0.039,coupon,2024-03-02\r\n2363,1248,APAC,fashion,online,44.29,3,0.127,none,2024-07-11\r\n2364,1120,LATAM,electronics,online,79.99,7,0.219,none,2024-09-18\r\n2365,2109,EMEA,electronics,online,111.26,1,0.112,bundle,2024-04-07\r\n2366,1143,LATAM,grocery,online,108.13,8,0.193,none,2024-07-19\r\n2367,1313,EMEA,electronics,retail,106.39,3,0.099,none,2024-09-27\r\n2368,2183,EMEA,grocery,retail,23.27,6,0.150,none,2024-11-02\r\n2369,1389,LATAM,grocery,mobile,56.45,8,0.182,none,2024-10-27\r\n2370,2316,EMEA,grocery,online,76.97,4,0.185,bundle,2024-11-01\r\n2371,1071,AMER,grocery,online,112.43,2,0.169,bundle,2024-12-27\r\n2372,2189,LATAM,sports,mobile,48.65,3,0.189,none,2024-08-25\r\n2373,1537,LATAM,home,retail,103.93,3,0.209,none,2024-07-09\r\n2374,1641,EMEA,home,retail,55.61,4,0.203,none,2024-09-25\r\n2375,1425,EMEA,grocery,retail,63.36,1,0.197,bundle,2024-05-24\r\n2376,1825,AMER,electronics,retail,52.18,6,0.057,none,2024-02-10\r\n2377,2217,LATAM,sports,online,66.62,4,0.136,bundle,2024-08-15\r\n2378,1878,EMEA,electronics,online,39.89,7,0.003,none,2024-01-10\r\n2379,1955,AMER,electronics,retail,93.01,3,0.248,none,2024-02-27\r\n2380,1260,LATAM,sports,online,102.49,3,0.145,loyalty,2024-02-14\r\n2381,1718,EMEA,home,online,38.07,8,0.080,none,2024-07-05\r\n2382,2314,EMEA,electro
nics,online,60.05,5,0.173,bundle,2024-05-24\r\n2383,1180,AMER,grocery,online,86.27,4,0.249,coupon,2024-07-19\r\n2384,1129,LATAM,sports,online,19.74,4,0.220,none,2024-03-11\r\n2385,1599,APAC,home,mobile,97.55,4,0.234,none,2024-08-10\r\n2386,1264,APAC,grocery,retail,36.59,2,0.077,bundle,2024-12-15\r\n2387,2180,AMER,home,retail,54.70,6,0.083,none,2024-08-08\r\n2388,2399,LATAM,electronics,online,53.82,8,0.224,coupon,2024-02-19\r\n2389,1367,AMER,grocery,online,56.05,3,0.190,bundle,2024-06-19\r\n2390,1264,APAC,home,partner,102.04,6,0.181,none,2024-06-26\r\n2391,1507,EMEA,sports,mobile,34.50,6,0.235,none,2024-11-07\r\n2392,2009,LATAM,home,mobile,50.79,6,0.087,none,2024-10-06\r\n2393,2087,LATAM,fashion,online,79.77,7,0.118,none,2024-08-27\r\n2394,1148,AMER,home,mobile,47.40,4,0.242,bundle,2024-06-27\r\n2395,1035,EMEA,grocery,mobile,41.50,7,0.045,none,2024-12-12\r\n2396,1792,AMER,electronics,online,35.09,4,0.230,bundle,2024-01-10\r\n2397,2362,AMER,electronics,partner,23.62,1,0.064,coupon,2024-05-25\r\n2398,2158,APAC,electronics,online,59.01,6,0.103,bundle,2024-01-12\r\n2399,2223,EMEA,grocery,mobile,50.59,7,0.191,none,2024-11-16\r\n2400,1884,APAC,grocery,retail,90.70,4,0.008,bundle,2024-06-22\r\n2401,1539,LATAM,toys,retail,95.12,7,0.242,coupon,2024-07-01\r\n2402,1917,LATAM,fashion,retail,29.82,8,0.064,bundle,2024-06-22\r\n2403,2028,APAC,electronics,online,60.16,8,0.046,none,2024-12-12\r\n2404,1612,LATAM,grocery,online,30.44,7,0.141,coupon,2024-12-24\r\n2405,1878,EMEA,grocery,online,50.61,1,0.146,none,2024-05-23\r\n2406,2151,APAC,electronics,partner,87.32,6,0.013,none,2024-02-09\r\n2407,1933,EMEA,sports,online,169.83,4,0.214,none,2024-03-28\r\n2408,2260,EMEA,toys,mobile,60.40,7,0.058,none,2024-08-28\r\n2409,1320,EMEA,fashion,online,31.08,4,0.102,none,2024-06-11\r\n2410,1963,AMER,fashion,online,40.95,4,0.090,bundle,2024-09-11\r\n2411,1524,LATAM,home,online,36.35,8,0.199,loyalty,2024-11-17\r\n2412,2154,APAC,grocery,retail,77.75,8,0.204,loyalty,2024-07-13\r\n2413,1134,APAC,sports
,online,84.36,6,0.185,none,2024-07-24\r\n2414,1030,EMEA,sports,retail,39.75,3,0.080,loyalty,2024-12-23\r\n2415,2317,LATAM,grocery,online,62.29,8,0.117,coupon,2024-07-06\r\n2416,2040,LATAM,home,online,48.37,4,0.084,none,2024-08-11\r\n2417,1801,LATAM,fashion,retail,41.72,6,0.182,none,2024-05-07\r\n2418,1308,EMEA,grocery,retail,21.96,8,0.235,none,2024-07-20\r\n2419,1242,LATAM,electronics,retail,23.71,1,0.040,none,2024-10-01\r\n2420,2454,LATAM,sports,mobile,60.34,6,0.125,coupon,2024-08-04\r\n2421,2099,AMER,home,mobile,55.65,4,0.044,none,2024-10-19\r\n2422,2408,EMEA,grocery,online,29.71,2,0.148,none,2024-01-05\r\n2423,2221,LATAM,electronics,online,53.52,1,0.107,loyalty,2024-11-26\r\n2424,1519,APAC,home,retail,29.08,3,0.046,none,2024-01-18\r\n2425,1540,LATAM,home,online,73.08,4,0.190,none,2024-10-19\r\n2426,1283,APAC,home,retail,143.32,3,0.086,none,2024-07-21\r\n2427,1188,LATAM,home,mobile,228.91,1,0.042,none,2024-07-23\r\n2428,1614,EMEA,fashion,online,36.71,7,0.089,none,2024-10-18\r\n2429,1648,APAC,fashion,mobile,96.65,1,0.203,coupon,2024-06-26\r\n2430,2195,APAC,sports,online,56.38,3,0.142,coupon,2024-03-02\r\n2431,1832,APAC,electronics,online,81.08,3,0.109,none,2024-06-08\r\n2432,1665,AMER,home,partner,136.19,1,0.185,bundle,2024-04-24\r\n2433,1112,APAC,grocery,online,36.15,1,0.185,none,2024-05-21\r\n2434,1601,APAC,sports,partner,15.50,3,0.091,bundle,2024-03-17\r\n2435,2087,LATAM,fashion,mobile,130.91,8,0.214,coupon,2024-02-01\r\n2436,1459,LATAM,grocery,partner,61.59,5,0.238,none,2024-11-17\r\n2437,1330,EMEA,sports,retail,21.04,5,0.173,bundle,2024-03-06\r\n2438,1634,AMER,grocery,retail,40.55,2,0.129,none,2024-04-28\r\n2439,1944,AMER,sports,retail,77.33,5,0.214,loyalty,2024-01-20\r\n2440,2279,LATAM,fashion,retail,140.72,6,0.186,none,2024-05-01\r\n2441,1674,LATAM,fashion,online,60.06,1,0.220,none,2024-12-03\r\n2442,2160,LATAM,grocery,online,53.63,5,0.187,coupon,2024-04-14\r\n2443,2407,EMEA,electronics,online,50.85,3,0.076,none,2024-09-26\r\n2444,1554,AMER,grocery,partner,4
9.90,4,0.159,coupon,2024-08-25\r\n2445,1014,EMEA,grocery,online,78.19,5,0.145,none,2024-06-09\r\n2446,1841,AMER,home,online,50.89,4,0.194,none,2024-12-15\r\n2447,1172,APAC,electronics,online,56.45,7,0.127,coupon,2024-03-04\r\n2448,1704,AMER,electronics,online,44.55,2,0.033,none,2024-04-24\r\n2449,2435,AMER,home,retail,30.67,7,0.188,coupon,2024-01-05\r\n2450,2278,APAC,grocery,mobile,37.31,4,0.025,none,2024-10-14\r\n2451,1476,APAC,electronics,partner,35.80,4,0.209,bundle,2024-01-10\r\n2452,2108,AMER,grocery,online,36.19,8,0.132,none,2024-04-26\r\n2453,1303,LATAM,fashion,retail,48.53,2,0.185,none,2024-02-27\r\n2454,2248,LATAM,fashion,online,40.11,4,0.009,none,2024-06-24\r\n2455,1591,APAC,grocery,online,62.62,3,0.126,none,2024-12-27\r\n2456,1461,LATAM,electronics,online,72.39,4,0.209,none,2024-09-05\r\n2457,2018,AMER,sports,mobile,84.46,4,0.162,bundle,2024-08-03\r\n2458,1549,APAC,fashion,online,54.80,8,0.139,none,2024-05-26\r\n2459,1246,EMEA,grocery,online,76.33,8,0.170,none,2024-09-28\r\n2460,1051,EMEA,home,retail,64.99,4,0.099,loyalty,2024-06-21\r\n2461,1720,AMER,toys,partner,69.12,6,0.139,coupon,2024-03-22\r\n2462,2362,AMER,electronics,online,37.07,6,0.025,none,2024-06-18\r\n2463,1264,APAC,home,retail,88.32,4,0.046,none,2024-12-21\r\n2464,2169,EMEA,grocery,partner,35.70,2,0.112,coupon,2024-02-14\r\n2465,2330,EMEA,grocery,retail,28.85,6,0.010,none,2024-11-17\r\n2466,1791,LATAM,grocery,online,136.34,7,0.076,loyalty,2024-07-02\r\n2467,2162,EMEA,grocery,retail,64.76,7,0.040,loyalty,2024-11-16\r\n2468,2086,APAC,home,retail,34.58,1,0.090,none,2024-10-24\r\n2469,1354,AMER,electronics,partner,82.76,6,0.053,coupon,2024-08-08\r\n2470,2208,AMER,toys,online,32.84,2,0.154,none,2024-08-05\r\n2471,1038,APAC,grocery,mobile,36.82,5,0.175,none,2024-02-03\r\n2472,1860,EMEA,home,retail,128.03,7,0.104,coupon,2024-07-17\r\n2473,2270,APAC,grocery,retail,126.73,8,0.080,bundle,2024-02-11\r\n2474,1257,APAC,electronics,online,64.37,4,0.055,none,2024-09-17\r\n2475,1896,EMEA,sports,retail,35.70,
7,0.213,loyalty,2024-04-06\r\n2476,1991,APAC,electronics,online,29.11,8,0.130,none,2024-06-10\r\n2477,1214,EMEA,home,online,78.46,7,0.058,none,2024-10-12\r\n2478,2356,LATAM,grocery,partner,184.14,4,0.047,bundle,2024-12-28\r\n2479,1124,AMER,sports,retail,43.25,7,0.009,coupon,2024-09-07\r\n2480,1026,APAC,electronics,online,38.53,1,0.230,coupon,2024-01-17\r\n2481,1954,APAC,home,retail,43.87,5,0.222,none,2024-05-08\r\n2482,1980,LATAM,fashion,retail,48.05,8,0.123,bundle,2024-07-27\r\n2483,2014,EMEA,grocery,mobile,56.22,1,0.193,none,2024-09-15\r\n2484,1088,LATAM,toys,online,142.77,7,0.108,none,2024-08-17\r\n2485,1294,APAC,sports,online,107.70,8,0.110,coupon,2024-11-08\r\n2486,1672,APAC,home,partner,35.87,6,0.219,none,2024-09-20\r\n2487,1804,AMER,home,online,36.93,8,0.057,none,2024-11-17\r\n2488,2134,AMER,home,online,77.28,7,0.148,none,2024-11-19\r\n2489,1347,APAC,grocery,online,34.72,7,0.229,bundle,2024-06-17\r\n2490,2418,AMER,grocery,online,29.55,8,0.215,none,2024-02-17\r\n2491,1540,LATAM,toys,online,68.76,5,0.175,none,2024-11-18\r\n2492,1882,AMER,fashion,retail,51.48,4,0.247,none,2024-11-02\r\n2493,1897,AMER,electronics,online,55.27,1,0.213,none,2024-10-28\r\n2494,1601,APAC,fashion,online,71.91,6,0.195,none,2024-11-21\r\n2495,1650,LATAM,home,online,28.64,5,0.009,loyalty,2024-04-28\r\n2496,1293,AMER,sports,retail,34.74,8,0.197,none,2024-01-12\r\n2497,1136,EMEA,sports,retail,40.15,5,0.060,none,2024-06-03\r\n2498,1643,EMEA,fashion,online,52.65,7,0.070,none,2024-10-20\r\n2499,2183,EMEA,grocery,online,111.10,1,0.248,none,2024-06-04\r\n2500,1178,EMEA,fashion,mobile,56.56,2,0.066,none,2024-10-11\r\n2501,1400,EMEA,sports,mobile,44.13,2,0.029,coupon,2024-06-04\r\n2502,1470,LATAM,home,partner,35.94,3,0.137,loyalty,2024-03-01\r\n2503,2006,APAC,grocery,retail,47.25,8,0.199,none,2024-09-28\r\n2504,2454,LATAM,home,online,21.81,1,0.069,coupon,2024-11-25\r\n2505,2282,EMEA,electronics,online,62.39,8,0.049,bundle,2024-08-03\r\n2506,1221,LATAM,toys,retail,51.92,8,0.071,coupon,2024-08-02\r
\n2507,2315,LATAM,grocery,retail,132.38,2,0.219,bundle,2024-10-04\r\n2508,1283,APAC,home,retail,30.72,6,0.216,none,2024-10-22\r\n2509,1287,AMER,electronics,retail,51.49,7,0.051,none,2024-05-21\r\n2510,1978,AMER,grocery,online,95.70,2,0.078,none,2024-07-10\r\n2511,2148,EMEA,home,online,71.62,7,0.127,none,2024-06-06\r\n2512,2376,LATAM,grocery,online,120.91,4,0.154,none,2024-09-03\r\n2513,1705,AMER,home,retail,71.14,2,0.100,coupon,2024-11-12\r\n2514,2293,LATAM,fashion,retail,73.52,2,0.036,none,2024-07-03\r\n2515,2107,APAC,electronics,online,80.66,3,0.167,loyalty,2024-07-27\r\n2516,1185,LATAM,toys,online,51.89,3,0.169,bundle,2024-07-06\r\n2517,1966,APAC,home,online,56.55,2,0.232,none,2024-10-04\r\n2518,1699,APAC,electronics,online,52.21,4,0.242,none,2024-07-15\r\n2519,1132,EMEA,sports,online,105.71,5,0.210,loyalty,2024-07-18\r\n2520,1121,EMEA,toys,mobile,41.33,1,0.154,none,2024-08-17\r\n2521,1204,AMER,toys,partner,53.94,8,0.148,coupon,2024-02-02\r\n2522,1992,LATAM,grocery,online,34.85,2,0.033,coupon,2024-07-15\r\n2523,2287,EMEA,toys,retail,52.27,6,0.156,none,2024-11-20\r\n2524,1138,AMER,home,online,24.71,1,0.207,none,2024-06-01\r\n2525,1575,APAC,grocery,retail,24.08,8,0.059,bundle,2024-04-11\r\n2526,1785,EMEA,home,online,61.00,5,0.201,none,2024-12-18\r\n2527,2298,APAC,fashion,online,36.81,4,0.151,bundle,2024-05-15\r\n2528,1581,APAC,grocery,mobile,79.34,7,0.003,bundle,2024-08-28\r\n2529,1489,AMER,grocery,retail,25.79,1,0.023,coupon,2024-05-17\r\n2530,1108,EMEA,electronics,retail,110.80,8,0.090,none,2024-07-12\r\n2531,1843,EMEA,electronics,online,23.41,5,0.086,coupon,2024-01-04\r\n2532,1184,AMER,sports,online,67.36,8,0.183,bundle,2024-03-04\r\n2533,2352,APAC,fashion,retail,34.48,8,0.097,none,2024-11-13\r\n2534,1039,AMER,sports,retail,55.96,6,0.156,none,2024-01-08\r\n2535,1040,LATAM,fashion,retail,77.51,3,0.125,coupon,2024-09-28\r\n2536,1268,EMEA,fashion,retail,55.61,8,0.040,none,2024-05-17\r\n2537,1249,EMEA,sports,retail,24.90,5,0.219,none,2024-06-06\r\n2538,1068,APAC,gro
cery,online,47.11,6,0.189,loyalty,2024-04-24\r\n2539,1960,EMEA,fashion,retail,73.61,8,0.080,none,2024-09-08\r\n2540,1535,AMER,grocery,online,42.59,7,0.128,none,2024-10-17\r\n2541,2231,LATAM,home,retail,21.25,4,0.197,none,2024-01-17\r\n2542,1943,AMER,toys,retail,85.24,7,0.091,loyalty,2024-06-25\r\n2543,2325,LATAM,electronics,online,99.26,5,0.211,none,2024-08-20\r\n2544,1812,EMEA,toys,online,137.36,8,0.088,bundle,2024-05-02\r\n2545,1196,APAC,home,online,89.24,2,0.068,none,2024-04-16\r\n2546,1501,AMER,grocery,mobile,65.63,5,0.010,bundle,2024-06-18\r\n2547,1548,EMEA,sports,retail,110.15,4,0.046,none,2024-12-16\r\n2548,1151,APAC,grocery,retail,59.23,6,0.048,none,2024-03-28\r\n2549,1744,EMEA,electronics,online,88.30,5,0.070,none,2024-04-11\r\n2550,2048,LATAM,fashion,partner,52.32,2,0.137,bundle,2024-09-22\r\n2551,1661,LATAM,sports,mobile,25.03,2,0.245,none,2024-03-17\r\n2552,2022,LATAM,sports,online,61.90,3,0.227,none,2024-09-12\r\n2553,1895,AMER,electronics,retail,56.99,7,0.199,none,2024-10-14\r\n2554,1942,APAC,grocery,online,56.74,3,0.108,none,2024-04-19\r\n2555,1613,EMEA,sports,online,33.97,4,0.201,coupon,2024-02-02\r\n2556,1315,AMER,fashion,retail,29.92,7,0.155,none,2024-07-22\r\n2557,1767,AMER,electronics,retail,52.67,7,0.227,none,2024-11-26\r\n2558,1894,APAC,grocery,retail,38.99,4,0.147,none,2024-07-10\r\n2559,2264,LATAM,electronics,retail,41.02,2,0.036,none,2024-11-07\r\n2560,2461,LATAM,grocery,online,136.67,7,0.048,coupon,2024-03-21\r\n2561,1413,LATAM,fashion,online,45.95,3,0.003,none,2024-07-28\r\n2562,1358,APAC,home,mobile,110.21,8,0.162,loyalty,2024-09-07\r\n2563,1763,LATAM,fashion,retail,57.22,8,0.152,none,2024-11-13\r\n2564,2454,LATAM,home,retail,117.97,7,0.119,coupon,2024-12-26\r\n2565,1971,EMEA,home,online,45.65,4,0.151,none,2024-02-01\r\n2566,1915,LATAM,home,online,15.95,5,0.004,coupon,2024-11-17\r\n2567,2381,AMER,home,online,52.28,1,0.153,none,2024-04-27\r\n2568,1698,EMEA,grocery,online,179.04,7,0.177,bundle,2024-01-13\r\n2569,1284,APAC,fashion,online,60.
72,3,0.181,none,2024-09-25\r\n2570,1244,LATAM,grocery,online,110.48,4,0.038,none,2024-03-03\r\n2571,1182,EMEA,home,partner,42.66,1,0.029,none,2024-09-11\r\n2572,1199,APAC,electronics,online,65.86,6,0.069,bundle,2024-08-16\r\n2573,2151,APAC,sports,online,74.29,8,0.086,none,2024-01-07\r\n2574,1456,APAC,grocery,online,68.09,8,0.227,coupon,2024-05-20\r\n2575,2143,AMER,home,online,68.68,7,0.017,none,2024-05-02\r\n2576,1522,LATAM,sports,online,43.56,2,0.016,none,2024-02-22\r\n2577,1320,EMEA,sports,retail,109.91,6,0.135,none,2024-05-21\r\n2578,1477,APAC,grocery,mobile,86.63,4,0.197,coupon,2024-06-21\r\n2579,2015,APAC,grocery,online,116.96,5,0.027,none,2024-12-13\r\n2580,2059,AMER,electronics,online,32.37,5,0.152,loyalty,2024-06-11\r\n2581,1332,APAC,electronics,retail,62.21,7,0.238,loyalty,2024-07-16\r\n2582,2172,EMEA,grocery,retail,120.72,8,0.160,bundle,2024-02-12\r\n2583,2450,EMEA,toys,partner,49.10,2,0.217,bundle,2024-08-11\r\n2584,2042,LATAM,electronics,online,69.95,7,0.121,none,2024-04-01\r\n2585,2254,LATAM,sports,retail,68.26,2,0.171,bundle,2024-05-05\r\n2586,2490,AMER,toys,mobile,56.09,4,0.250,none,2024-02-21\r\n2587,1370,APAC,electronics,retail,51.81,1,0.173,none,2024-06-28\r\n2588,1865,LATAM,electronics,online,95.97,2,0.140,bundle,2024-01-10\r\n2589,1201,LATAM,electronics,online,58.65,2,0.150,coupon,2024-11-14\r\n2590,1814,AMER,toys,retail,51.89,8,0.007,none,2024-06-05\r\n2591,2304,LATAM,grocery,retail,84.12,6,0.108,none,2024-07-23\r\n2592,2236,APAC,sports,mobile,85.32,3,0.119,none,2024-12-01\r\n2593,2475,AMER,home,retail,51.41,7,0.008,none,2024-11-08\r\n2594,2304,LATAM,electronics,online,67.13,3,0.107,none,2024-01-18\r\n2595,1834,AMER,grocery,online,60.08,1,0.014,none,2024-04-08\r\n2596,1350,LATAM,grocery,online,49.79,5,0.111,none,2024-02-06\r\n2597,1255,AMER,grocery,online,76.09,6,0.183,none,2024-11-04\r\n2598,1946,AMER,toys,mobile,63.01,6,0.047,none,2024-08-28\r\n2599,2132,LATAM,fashion,retail,51.13,5,0.242,none,2024-03-11\r\n2600,1077,AMER,fashion,online,130.14
,4,0.151,none,2024-03-01\r\n2601,1535,AMER,home,partner,136.43,7,0.095,none,2024-03-02\r\n2602,2160,LATAM,fashion,retail,55.74,2,0.176,none,2024-05-12\r\n2603,1166,AMER,fashion,mobile,144.61,7,0.202,none,2024-01-28\r\n2604,1778,LATAM,fashion,online,59.00,1,0.116,bundle,2024-09-24\r\n2605,2298,APAC,fashion,retail,42.62,4,0.186,none,2024-05-04\r\n2606,2092,AMER,grocery,online,39.09,6,0.046,coupon,2024-12-16\r\n2607,2139,AMER,home,retail,39.13,5,0.161,none,2024-02-23\r\n2608,1503,APAC,grocery,online,73.32,1,0.027,none,2024-06-02\r\n2609,2205,AMER,electronics,retail,69.75,6,0.013,none,2024-03-21\r\n2610,2082,APAC,home,retail,18.56,5,0.154,coupon,2024-10-28\r\n2611,1795,EMEA,home,online,47.03,1,0.097,none,2024-12-28\r\n2612,2329,LATAM,toys,online,94.45,6,0.124,none,2024-08-17\r\n2613,2039,EMEA,fashion,online,45.21,2,0.248,none,2024-04-25\r\n2614,1739,AMER,sports,retail,88.76,6,0.210,none,2024-07-27\r\n2615,1552,EMEA,home,online,108.01,4,0.034,none,2024-05-25\r\n2616,1285,EMEA,home,retail,44.37,5,0.232,none,2024-02-16\r\n2617,1723,LATAM,grocery,mobile,43.89,8,0.014,coupon,2024-10-05\r\n2618,2218,EMEA,electronics,retail,30.44,3,0.192,loyalty,2024-08-13\r\n2619,1810,LATAM,grocery,retail,85.45,6,0.038,loyalty,2024-04-18\r\n2620,1882,AMER,sports,online,50.61,2,0.056,none,2024-09-09\r\n2621,1698,EMEA,home,online,33.46,2,0.224,none,2024-12-13\r\n2622,1697,APAC,electronics,mobile,84.14,2,0.153,coupon,2024-04-25\r\n2623,1803,LATAM,electronics,retail,97.33,6,0.039,none,2024-02-23\r\n2624,1583,AMER,grocery,online,72.54,7,0.134,coupon,2024-01-04\r\n2625,1135,APAC,grocery,online,79.39,1,0.173,none,2024-04-22\r\n2626,2425,APAC,grocery,mobile,50.96,7,0.191,none,2024-05-11\r\n2627,1261,APAC,electronics,retail,90.26,5,0.120,coupon,2024-02-10\r\n2628,2215,LATAM,grocery,online,28.70,4,0.001,loyalty,2024-12-05\r\n2629,1019,APAC,electronics,retail,50.39,2,0.116,bundle,2024-07-13\r\n2630,1390,APAC,fashion,online,28.20,8,0.136,coupon,2024-10-15\r\n2631,1214,EMEA,toys,retail,54.19,3,0.142,none,
2024-10-15\r\n2632,1301,AMER,electronics,online,47.20,6,0.216,coupon,2024-04-02\r\n2633,1731,AMER,fashion,online,31.61,4,0.237,bundle,2024-02-06\r\n2634,1938,APAC,grocery,retail,31.11,6,0.101,none,2024-12-04\r\n2635,2254,LATAM,grocery,online,30.25,3,0.155,none,2024-04-19\r\n2636,1465,AMER,home,mobile,83.18,1,0.131,bundle,2024-10-25\r\n2637,2185,EMEA,electronics,online,34.17,1,0.097,coupon,2024-06-02\r\n2638,1353,EMEA,grocery,retail,36.64,4,0.241,none,2024-06-01\r\n2639,2215,LATAM,electronics,online,64.24,2,0.014,bundle,2024-12-14\r\n2640,1459,LATAM,grocery,retail,53.10,2,0.168,coupon,2024-08-03\r\n2641,1798,AMER,grocery,retail,64.32,7,0.123,none,2024-05-22\r\n2642,2210,APAC,sports,mobile,36.08,3,0.003,none,2024-09-07\r\n2643,1042,LATAM,electronics,online,74.37,5,0.062,coupon,2024-03-08\r\n2644,1689,LATAM,fashion,retail,52.90,8,0.031,none,2024-09-24\r\n2645,1150,LATAM,electronics,mobile,27.17,2,0.162,none,2024-06-01\r\n2646,1581,APAC,grocery,retail,38.74,5,0.104,none,2024-05-16\r\n2647,1867,AMER,sports,online,57.88,4,0.078,none,2024-12-14\r\n2648,1336,APAC,electronics,mobile,57.00,6,0.002,none,2024-08-09\r\n2649,2108,AMER,home,retail,29.34,6,0.029,coupon,2024-10-06\r\n2650,2360,EMEA,grocery,online,49.68,1,0.141,coupon,2024-07-12\r\n2651,2287,EMEA,grocery,online,71.25,3,0.223,bundle,2024-09-19\r\n2652,2076,AMER,grocery,partner,92.86,2,0.160,none,2024-12-27\r\n2653,2092,AMER,home,online,105.93,5,0.134,none,2024-06-19\r\n2654,2425,APAC,sports,online,35.15,6,0.020,none,2024-03-19\r\n2655,1472,AMER,grocery,partner,55.39,8,0.089,none,2024-08-15\r\n2656,2423,LATAM,home,online,47.15,3,0.223,none,2024-10-13\r\n2657,1030,EMEA,grocery,online,50.00,8,0.213,bundle,2024-07-17\r\n2658,2065,EMEA,fashion,online,170.30,5,0.021,none,2024-09-22\r\n2659,1986,LATAM,electronics,online,40.20,4,0.038,none,2024-03-25\r\n2660,1520,APAC,home,online,69.44,8,0.094,loyalty,2024-09-18\r\n2661,1961,EMEA,toys,retail,211.98,8,0.204,coupon,2024-03-07\r\n2662,1912,APAC,electronics,retail,103.81,4,0.053,
bundle,2024-10-22\r\n2663,2127,LATAM,toys,mobile,47.94,3,0.025,coupon,2024-06-05\r\n2664,2393,LATAM,electronics,online,79.03,8,0.139,bundle,2024-08-15\r\n2665,1639,APAC,home,online,104.13,7,0.119,none,2024-09-14\r\n2666,1234,AMER,electronics,retail,60.27,5,0.074,none,2024-08-23\r\n2667,1259,EMEA,toys,online,80.07,3,0.237,bundle,2024-09-18\r\n2668,1805,EMEA,grocery,online,67.93,2,0.052,bundle,2024-08-24\r\n2669,1024,APAC,toys,retail,43.26,8,0.250,none,2024-07-06\r\n2670,1545,AMER,grocery,online,65.35,4,0.056,bundle,2024-02-28\r\n2671,2385,APAC,sports,online,81.49,6,0.116,loyalty,2024-01-21\r\n2672,2044,APAC,grocery,partner,124.47,6,0.035,none,2024-12-28\r\n2673,1082,EMEA,toys,mobile,38.82,5,0.191,none,2024-04-05\r\n2674,2148,EMEA,toys,retail,47.28,6,0.237,none,2024-02-04\r\n2675,1372,APAC,electronics,retail,81.20,4,0.015,bundle,2024-12-14\r\n2676,1746,LATAM,home,retail,38.37,7,0.067,loyalty,2024-09-17\r\n2677,1866,EMEA,fashion,mobile,35.01,1,0.184,none,2024-05-27\r\n2678,1559,EMEA,electronics,online,52.75,7,0.023,coupon,2024-06-23\r\n2679,1730,AMER,home,online,39.17,4,0.039,none,2024-12-27\r\n2680,2389,LATAM,home,online,50.36,1,0.190,bundle,2024-07-02\r\n2681,1916,AMER,toys,online,49.35,7,0.126,bundle,2024-11-02\r\n2682,1509,AMER,home,retail,18.44,6,0.024,none,2024-01-19\r\n2683,2095,EMEA,toys,online,53.12,7,0.072,coupon,2024-04-05\r\n2684,2088,EMEA,home,mobile,50.40,2,0.190,loyalty,2024-11-27\r\n2685,1843,EMEA,fashion,retail,64.69,4,0.060,coupon,2024-02-13\r\n2686,1169,LATAM,grocery,retail,79.94,1,0.183,bundle,2024-08-25\r\n2687,1247,AMER,electronics,online,44.20,1,0.043,none,2024-07-24\r\n2688,1818,AMER,grocery,retail,58.38,6,0.222,none,2024-10-10\r\n2689,1956,APAC,fashion,partner,36.63,7,0.132,none,2024-12-22\r\n2690,1830,EMEA,electronics,mobile,122.67,1,0.232,none,2024-10-25\r\n2691,1750,LATAM,electronics,retail,34.56,8,0.154,none,2024-10-16\r\n2692,1104,APAC,electronics,online,80.51,6,0.224,loyalty,2024-01-22\r\n2693,1615,LATAM,sports,online,32.53,5,0.121,none,2
024-10-23\r\n2694,2292,EMEA,fashion,retail,41.89,2,0.232,coupon,2024-09-23\r\n2695,1869,AMER,grocery,retail,36.78,3,0.242,none,2024-02-24\r\n2696,1320,EMEA,sports,retail,52.20,8,0.139,none,2024-04-18\r\n2697,1166,AMER,grocery,retail,34.81,8,0.155,none,2024-11-12\r\n2698,1087,AMER,fashion,retail,56.58,1,0.230,none,2024-08-16\r\n2699,1703,AMER,sports,partner,23.80,5,0.213,none,2024-12-20\r\n2700,1568,AMER,electronics,online,44.73,6,0.179,none,2024-12-02\r\n2701,1900,APAC,grocery,retail,25.53,3,0.071,loyalty,2024-12-27\r\n2702,2362,AMER,electronics,retail,36.03,1,0.066,coupon,2024-09-24\r\n2703,1211,EMEA,sports,retail,165.25,3,0.130,none,2024-04-19\r\n2704,1799,EMEA,electronics,mobile,92.16,5,0.161,loyalty,2024-08-26\r\n2705,2446,LATAM,grocery,mobile,381.21,5,0.121,coupon,2024-11-21\r\n2706,2422,APAC,electronics,online,42.39,4,0.193,none,2024-04-26\r\n2707,1718,EMEA,home,online,122.14,5,0.222,bundle,2024-12-26\r\n2708,2243,APAC,fashion,online,47.18,7,0.156,none,2024-05-09\r\n2709,1004,LATAM,electronics,retail,155.39,8,0.164,bundle,2024-05-02\r\n2710,2238,AMER,home,online,30.15,4,0.228,bundle,2024-12-10\r\n2711,1092,AMER,toys,online,121.67,2,0.166,none,2024-11-08\r\n2712,1757,EMEA,fashion,online,50.24,1,0.110,bundle,2024-06-22\r\n2713,2254,LATAM,home,online,86.52,7,0.140,none,2024-04-20\r\n2714,1656,LATAM,grocery,online,168.73,3,0.081,none,2024-10-06\r\n2715,2412,LATAM,home,online,66.17,5,0.076,none,2024-03-05\r\n2716,1871,APAC,grocery,online,107.91,5,0.156,bundle,2024-02-05\r\n2717,2354,LATAM,home,online,50.66,8,0.198,coupon,2024-04-23\r\n2718,2260,EMEA,grocery,online,31.48,6,0.001,none,2024-03-17\r\n2719,2320,LATAM,electronics,mobile,55.18,3,0.150,none,2024-01-19\r\n2720,1587,LATAM,fashion,online,30.19,3,0.106,none,2024-05-23\r\n2721,1038,APAC,sports,retail,61.23,5,0.134,none,2024-05-03\r\n2722,1118,AMER,home,online,43.01,6,0.215,bundle,2024-06-08\r\n2723,1959,EMEA,electronics,online,63.96,2,0.069,none,2024-07-21\r\n2724,1864,EMEA,grocery,online,57.18,8,0.180,bundle,2
024-04-28\r\n2725,1482,AMER,electronics,online,32.88,8,0.148,bundle,2024-05-09\r\n2726,1375,AMER,grocery,retail,190.35,4,0.099,bundle,2024-08-13\r\n2727,1076,LATAM,electronics,online,32.33,2,0.204,none,2024-02-12\r\n2728,1168,APAC,electronics,online,83.50,4,0.202,coupon,2024-10-13\r\n2729,1595,AMER,fashion,online,83.60,7,0.052,none,2024-06-12\r\n2730,1221,LATAM,electronics,mobile,21.49,5,0.124,none,2024-07-23\r\n2731,2005,APAC,toys,mobile,52.83,6,0.108,bundle,2024-05-05\r\n2732,1819,AMER,fashion,online,30.03,4,0.187,none,2024-05-10\r\n2733,1044,EMEA,grocery,online,128.79,4,0.164,loyalty,2024-05-22\r\n2734,2479,EMEA,grocery,online,129.70,3,0.171,none,2024-01-24\r\n2735,1463,EMEA,grocery,retail,57.11,5,0.129,none,2024-01-26\r\n2736,2002,APAC,electronics,mobile,15.09,5,0.082,none,2024-06-28\r\n2737,1503,APAC,home,online,111.43,6,0.144,coupon,2024-07-20\r\n2738,2015,APAC,grocery,mobile,61.68,3,0.139,coupon,2024-02-24\r\n2739,1775,EMEA,sports,retail,41.67,6,0.027,loyalty,2024-05-13\r\n2740,1744,EMEA,sports,retail,71.49,4,0.065,none,2024-03-27\r\n2741,1824,LATAM,electronics,online,31.50,2,0.002,none,2024-03-09\r\n2742,2324,AMER,fashion,retail,35.26,2,0.200,none,2024-11-06\r\n2743,1034,EMEA,grocery,online,33.23,5,0.212,loyalty,2024-02-23\r\n2744,2499,LATAM,home,online,44.23,5,0.214,none,2024-02-14\r\n2745,1152,LATAM,grocery,online,27.65,2,0.103,loyalty,2024-01-10\r\n2746,1090,AMER,grocery,online,66.01,6,0.041,loyalty,2024-02-13\r\n2747,1269,LATAM,electronics,online,56.68,7,0.184,none,2024-06-17\r\n2748,1724,LATAM,electronics,online,37.37,4,0.003,none,2024-03-07\r\n2749,1466,AMER,grocery,retail,128.63,6,0.213,bundle,2024-01-04\r\n2750,1574,AMER,home,online,53.75,8,0.234,coupon,2024-03-05\r\n2751,2429,EMEA,grocery,retail,19.53,1,0.234,none,2024-10-07\r\n2752,2405,AMER,toys,online,85.24,6,0.093,bundle,2024-02-17\r\n2753,1790,AMER,electronics,retail,60.38,2,0.170,bundle,2024-05-13\r\n2754,2050,APAC,fashion,online,56.08,6,0.035,none,2024-03-02\r\n2755,1956,APAC,grocery,partner,
60.51,1,0.051,coupon,2024-11-16\r\n2756,1972,LATAM,toys,retail,36.20,2,0.195,none,2024-05-27\r\n2757,1914,EMEA,grocery,retail,23.01,7,0.018,none,2024-04-16\r\n2758,1934,EMEA,grocery,retail,100.67,4,0.077,loyalty,2024-09-16\r\n2759,1925,LATAM,home,online,139.61,6,0.193,none,2024-05-12\r\n2760,2130,EMEA,toys,online,92.26,8,0.191,none,2024-09-07\r\n2761,2070,APAC,home,partner,105.56,4,0.077,none,2024-08-22\r\n2762,1991,APAC,electronics,retail,201.59,8,0.172,none,2024-05-05\r\n2763,1633,EMEA,sports,retail,37.53,8,0.194,none,2024-01-23\r\n2764,2197,LATAM,grocery,online,22.24,6,0.134,coupon,2024-06-18\r\n2765,1604,EMEA,electronics,online,44.00,4,0.103,loyalty,2024-03-19\r\n2766,1999,EMEA,toys,online,37.24,3,0.051,none,2024-10-21\r\n2767,1191,EMEA,grocery,retail,92.70,3,0.190,bundle,2024-05-17\r\n2768,2328,EMEA,toys,online,26.36,5,0.192,none,2024-03-21\r\n2769,1198,AMER,grocery,online,33.24,5,0.121,none,2024-11-09\r\n2770,2306,AMER,electronics,online,56.77,8,0.248,coupon,2024-07-07\r\n2771,2117,EMEA,home,online,51.67,8,0.159,none,2024-07-08\r\n2772,1692,LATAM,toys,online,157.78,4,0.085,bundle,2024-09-25\r\n2773,1506,EMEA,fashion,online,86.10,7,0.032,none,2024-12-27\r\n2774,1602,EMEA,home,retail,48.84,5,0.232,none,2024-12-21\r\n2775,1466,AMER,grocery,mobile,63.24,7,0.164,none,2024-06-24\r\n2776,1314,AMER,electronics,retail,38.20,6,0.140,none,2024-12-15\r\n2777,1591,APAC,toys,retail,24.70,6,0.218,loyalty,2024-08-15\r\n2778,1338,EMEA,home,online,35.59,2,0.075,none,2024-12-22\r\n2779,2047,AMER,grocery,retail,66.56,3,0.039,none,2024-05-15\r\n2780,2362,AMER,grocery,retail,78.12,2,0.139,none,2024-02-07\r\n2781,1831,APAC,grocery,online,37.34,4,0.008,bundle,2024-08-08\r\n2782,1006,AMER,electronics,online,12.83,2,0.233,none,2024-05-13\r\n2783,2255,AMER,sports,mobile,98.63,8,0.193,bundle,2024-05-07\r\n2784,1780,APAC,electronics,retail,72.22,4,0.111,coupon,2024-02-26\r\n2785,2250,AMER,toys,partner,45.94,5,0.041,none,2024-06-11\r\n2786,1648,APAC,home,retail,40.00,8,0.226,none,2024-09-1
6\r\n2787,1797,LATAM,electronics,retail,47.02,6,0.159,coupon,2024-03-07\r\n2788,1722,EMEA,home,retail,78.70,5,0.231,loyalty,2024-12-06\r\n2789,1930,AMER,fashion,retail,89.73,4,0.202,coupon,2024-06-26\r\n2790,2188,EMEA,electronics,retail,46.43,5,0.004,none,2024-12-14\r\n2791,2182,AMER,grocery,online,58.21,3,0.014,bundle,2024-02-17\r\n2792,2419,LATAM,grocery,online,53.40,4,0.119,none,2024-02-12\r\n2793,1947,EMEA,sports,online,66.42,7,0.023,none,2024-01-04\r\n2794,1610,LATAM,electronics,retail,15.62,8,0.108,none,2024-05-23\r\n2795,2221,LATAM,grocery,retail,33.06,1,0.071,none,2024-01-04\r\n2796,1862,LATAM,grocery,retail,56.75,2,0.161,none,2024-05-09\r\n2797,1996,APAC,sports,online,74.21,8,0.190,none,2024-10-22\r\n2798,1104,APAC,home,retail,60.83,1,0.114,none,2024-11-01\r\n2799,1270,LATAM,home,online,24.62,4,0.130,none,2024-01-18\r\n2800,2372,AMER,home,mobile,37.55,1,0.128,none,2024-11-21\r\n2801,2422,APAC,grocery,online,36.32,4,0.200,none,2024-07-22\r\n2802,1783,AMER,grocery,retail,20.96,3,0.056,none,2024-12-13\r\n2803,1495,LATAM,electronics,retail,43.11,4,0.215,none,2024-02-04\r\n2804,2411,EMEA,grocery,online,42.83,7,0.112,none,2024-03-26\r\n2805,2200,LATAM,electronics,retail,26.78,5,0.111,none,2024-03-07\r\n2806,2309,AMER,grocery,mobile,52.71,3,0.199,none,2024-10-06\r\n2807,2182,AMER,home,online,72.80,2,0.105,none,2024-12-20\r\n2808,1834,AMER,fashion,retail,78.22,6,0.084,loyalty,2024-03-25\r\n2809,1070,EMEA,grocery,retail,68.66,8,0.219,none,2024-04-07\r\n2810,1845,AMER,electronics,retail,35.40,5,0.131,bundle,2024-02-23\r\n2811,1049,AMER,sports,online,80.75,4,0.089,none,2024-01-10\r\n2812,1046,EMEA,home,partner,103.42,7,0.152,none,2024-09-18\r\n2813,1432,APAC,home,online,64.09,6,0.052,none,2024-09-11\r\n2814,2397,LATAM,electronics,mobile,69.89,6,0.057,none,2024-04-05\r\n2815,1572,LATAM,home,retail,32.10,8,0.085,none,2024-09-01\r\n2816,2041,LATAM,fashion,mobile,48.20,1,0.177,loyalty,2024-08-07\r\n2817,1475,LATAM,toys,retail,67.40,2,0.047,none,2024-07-25\r\n2818,2363,AME
R,home,retail,73.15,6,0.018,none,2024-01-25\r\n2819,2069,AMER,electronics,retail,18.05,4,0.165,none,2024-02-15\r\n2820,2488,EMEA,electronics,online,68.98,5,0.054,none,2024-12-14\r\n2821,2015,APAC,sports,online,64.05,7,0.116,none,2024-09-03\r\n2822,1334,APAC,toys,online,32.89,6,0.143,none,2024-10-09\r\n2823,2348,EMEA,electronics,online,317.47,8,0.181,none,2024-07-05\r\n2824,1551,APAC,toys,retail,115.90,6,0.010,none,2024-11-06\r\n2825,1937,APAC,grocery,retail,41.16,5,0.124,none,2024-10-19\r\n2826,2208,AMER,fashion,online,113.07,6,0.165,none,2024-06-11\r\n2827,2078,APAC,toys,online,52.23,1,0.177,none,2024-12-25\r\n2828,2023,LATAM,electronics,online,56.04,6,0.234,none,2024-05-15\r\n2829,2387,EMEA,grocery,online,73.72,5,0.187,coupon,2024-12-02\r\n2830,2251,APAC,electronics,retail,25.09,3,0.200,none,2024-07-11\r\n2831,1941,AMER,grocery,online,69.10,5,0.139,none,2024-08-24\r\n2832,1822,EMEA,grocery,online,36.16,4,0.221,none,2024-01-20\r\n2833,2430,APAC,fashion,online,64.17,5,0.169,none,2024-02-15\r\n2834,1293,AMER,grocery,retail,36.34,7,0.202,none,2024-11-23\r\n2835,1653,APAC,electronics,mobile,59.15,7,0.094,none,2024-07-21\r\n2836,1874,LATAM,toys,online,48.27,7,0.045,none,2024-01-20\r\n2837,1119,LATAM,electronics,online,37.86,7,0.015,bundle,2024-06-15\r\n2838,2085,AMER,fashion,retail,31.53,1,0.018,none,2024-02-09\r\n2839,2389,LATAM,grocery,online,53.40,6,0.190,none,2024-06-25\r\n2840,2270,APAC,sports,mobile,75.97,8,0.207,coupon,2024-06-09\r\n2841,2269,EMEA,toys,online,70.70,5,0.248,none,2024-02-06\r\n2842,1339,EMEA,toys,retail,84.12,2,0.161,none,2024-08-08\r\n2843,1004,LATAM,home,retail,38.25,7,0.234,none,2024-01-26\r\n2844,2360,EMEA,home,retail,46.85,4,0.004,none,2024-05-16\r\n2845,1131,APAC,electronics,online,64.51,3,0.232,bundle,2024-05-22\r\n2846,2499,LATAM,toys,mobile,98.60,8,0.179,coupon,2024-01-05\r\n2847,2338,AMER,home,retail,42.76,7,0.176,none,2024-07-10\r\n2848,1006,AMER,home,mobile,63.95,5,0.193,none,2024-06-09\r\n2849,1590,APAC,sports,mobile,34.14,3,0.232,loya
lty,2024-09-21\r\n2850,2470,EMEA,fashion,retail,51.65,1,0.208,coupon,2024-04-21\r\n2851,1597,APAC,toys,mobile,85.07,5,0.242,none,2024-07-21\r\n2852,1434,EMEA,grocery,retail,35.11,6,0.062,none,2024-11-27\r\n2853,1667,AMER,fashion,online,122.32,8,0.132,none,2024-06-02\r\n2854,1123,LATAM,home,online,28.40,3,0.135,none,2024-10-19\r\n2855,1121,EMEA,toys,online,59.72,3,0.095,coupon,2024-01-18\r\n2856,2309,AMER,toys,retail,66.83,5,0.208,none,2024-06-23\r\n2857,1465,AMER,fashion,online,57.69,7,0.124,coupon,2024-12-25\r\n2858,1952,EMEA,home,online,209.96,6,0.025,loyalty,2024-01-19\r\n2859,2098,AMER,grocery,online,35.75,1,0.033,none,2024-10-18\r\n2860,2029,APAC,home,retail,82.36,7,0.183,none,2024-09-28\r\n2861,2295,EMEA,home,mobile,60.87,2,0.168,bundle,2024-09-18\r\n2862,1095,APAC,grocery,retail,73.80,5,0.244,coupon,2024-04-25\r\n2863,1125,LATAM,fashion,retail,24.85,6,0.199,bundle,2024-07-23\r\n2864,1337,APAC,home,retail,163.53,5,0.192,none,2024-12-07\r\n2865,1882,AMER,sports,online,39.51,5,0.120,loyalty,2024-12-21\r\n2866,1663,LATAM,home,online,18.79,3,0.021,coupon,2024-03-20\r\n2867,1331,AMER,home,online,86.10,3,0.239,none,2024-04-27\r\n2868,1045,LATAM,grocery,retail,70.52,8,0.174,none,2024-03-06\r\n2869,1443,EMEA,fashion,retail,71.99,1,0.164,bundle,2024-01-15\r\n2870,2183,EMEA,grocery,online,27.53,7,0.170,loyalty,2024-09-22\r\n2871,1291,EMEA,home,online,40.20,1,0.221,none,2024-01-03\r\n2872,2122,AMER,fashion,mobile,53.36,8,0.076,none,2024-09-28\r\n2873,1740,EMEA,home,online,96.72,2,0.016,none,2024-07-09\r\n2874,2064,LATAM,home,mobile,95.63,4,0.116,bundle,2024-01-24\r\n2875,2043,EMEA,home,mobile,62.98,2,0.117,none,2024-05-10\r\n2876,2012,APAC,toys,retail,28.55,5,0.127,coupon,2024-07-04\r\n2877,1044,EMEA,electronics,retail,89.83,4,0.149,coupon,2024-05-09\r\n2878,2023,LATAM,fashion,partner,44.63,5,0.206,coupon,2024-05-17\r\n2879,1229,LATAM,grocery,online,69.46,4,0.022,none,2024-03-15\r\n2880,1122,AMER,toys,online,42.77,2,0.080,none,2024-09-24\r\n2881,2256,AMER,sports,online,5
1.23,3,0.005,none,2024-01-17\r\n2882,2250,AMER,electronics,online,142.94,7,0.163,none,2024-09-01\r\n2883,1050,AMER,fashion,retail,102.10,3,0.134,coupon,2024-04-13\r\n2884,1429,APAC,home,online,96.51,5,0.164,loyalty,2024-02-14\r\n2885,1017,AMER,fashion,partner,85.60,4,0.202,none,2024-09-01\r\n2886,1868,AMER,electronics,online,147.72,3,0.130,loyalty,2024-08-26\r\n2887,2273,APAC,sports,retail,44.32,8,0.016,none,2024-12-22\r\n2888,1531,EMEA,grocery,online,47.47,2,0.199,bundle,2024-11-11\r\n2889,1865,LATAM,grocery,retail,48.77,6,0.065,none,2024-02-20\r\n2890,1572,LATAM,grocery,online,76.05,2,0.008,none,2024-03-14\r\n2891,1835,AMER,fashion,retail,31.17,6,0.077,none,2024-12-21\r\n2892,2252,EMEA,grocery,online,54.75,6,0.122,none,2024-04-08\r\n2893,1430,EMEA,home,retail,49.27,7,0.149,none,2024-03-23\r\n2894,1888,LATAM,electronics,mobile,21.66,2,0.188,none,2024-04-27\r\n2895,1705,AMER,fashion,online,29.13,2,0.154,none,2024-02-11\r\n2896,1392,AMER,electronics,online,136.13,5,0.197,none,2024-07-17\r\n2897,1627,LATAM,sports,retail,28.30,5,0.137,none,2024-09-04\r\n2898,1085,EMEA,grocery,retail,26.21,6,0.012,coupon,2024-01-06\r\n2899,1287,AMER,electronics,online,22.68,5,0.092,bundle,2024-06-15\r\n2900,1752,APAC,electronics,retail,49.12,5,0.166,none,2024-07-02\r\n2901,2373,LATAM,grocery,retail,79.30,4,0.145,coupon,2024-01-17\r\n2902,2442,APAC,sports,online,34.16,7,0.105,coupon,2024-10-26\r\n2903,1636,APAC,sports,retail,77.52,4,0.228,none,2024-01-15\r\n2904,1910,LATAM,fashion,online,36.74,8,0.119,none,2024-10-02\r\n2905,2378,LATAM,fashion,online,79.91,1,0.064,bundle,2024-05-05\r\n2906,1335,APAC,grocery,online,36.86,1,0.141,none,2024-07-08\r\n2907,1600,AMER,electronics,online,24.57,5,0.027,none,2024-03-03\r\n2908,1435,AMER,sports,online,32.57,1,0.233,loyalty,2024-10-26\r\n2909,2336,APAC,grocery,online,45.24,2,0.095,none,2024-07-10\r\n2910,1284,APAC,fashion,retail,85.07,7,0.198,bundle,2024-02-14\r\n2911,1852,AMER,toys,online,44.19,2,0.092,loyalty,2024-01-22\r\n2912,2111,EMEA,home,mobi
le,62.71,3,0.049,bundle,2024-08-25\r\n2913,1164,EMEA,grocery,online,126.93,1,0.003,coupon,2024-11-02\r\n2914,1001,LATAM,toys,online,107.31,4,0.186,none,2024-09-08\r\n2915,2429,EMEA,home,online,109.74,1,0.117,none,2024-02-02\r\n2916,1820,AMER,grocery,mobile,102.13,2,0.134,none,2024-11-24\r\n2917,1866,EMEA,toys,online,159.73,4,0.121,bundle,2024-01-09\r\n2918,1918,EMEA,electronics,online,38.88,2,0.059,none,2024-05-14\r\n2919,1035,EMEA,grocery,online,37.52,7,0.204,coupon,2024-07-05\r\n2920,1702,AMER,toys,online,45.44,5,0.142,none,2024-03-13\r\n2921,1317,EMEA,grocery,online,64.57,2,0.160,none,2024-05-09\r\n2922,2279,LATAM,grocery,online,47.45,4,0.032,none,2024-08-23\r\n2923,2445,APAC,electronics,online,62.41,6,0.119,none,2024-02-22\r\n2924,2018,AMER,electronics,retail,95.50,7,0.181,none,2024-10-03\r\n2925,1096,EMEA,home,online,69.96,8,0.090,none,2024-12-04\r\n2926,1176,EMEA,grocery,retail,91.90,1,0.052,coupon,2024-04-23\r\n2927,2412,LATAM,grocery,retail,161.98,2,0.036,none,2024-05-11\r\n2928,1631,APAC,electronics,retail,49.18,8,0.228,coupon,2024-01-18\r\n2929,2230,LATAM,grocery,retail,106.66,7,0.060,loyalty,2024-10-24\r\n2930,1922,EMEA,home,online,39.59,1,0.211,coupon,2024-05-25\r\n2931,1275,EMEA,grocery,online,13.70,2,0.168,none,2024-08-13\r\n2932,2216,AMER,electronics,retail,72.11,1,0.224,coupon,2024-11-16\r\n2933,1987,AMER,home,mobile,93.34,1,0.004,coupon,2024-05-07\r\n2934,1755,APAC,electronics,mobile,96.32,2,0.250,none,2024-09-27\r\n2935,1080,LATAM,electronics,online,86.14,8,0.138,none,2024-07-15\r\n2936,1742,AMER,fashion,mobile,58.41,3,0.237,none,2024-03-12\r\n2937,2342,AMER,grocery,online,24.21,8,0.228,none,2024-11-24\r\n2938,1406,LATAM,electronics,retail,68.88,6,0.244,none,2024-06-23\r\n2939,1595,AMER,grocery,retail,71.98,8,0.219,none,2024-05-03\r\n2940,1430,EMEA,grocery,online,125.03,6,0.199,coupon,2024-12-24\r\n2941,2186,LATAM,sports,retail,21.22,5,0.189,none,2024-05-16\r\n2942,1325,APAC,grocery,retail,47.01,8,0.248,loyalty,2024-09-09\r\n2943,1576,EMEA,toys,ret
ail,97.83,4,0.027,bundle,2024-01-23\r\n2944,1891,APAC,grocery,online,43.68,4,0.205,none,2024-04-02\r\n2945,1287,AMER,sports,online,35.53,2,0.027,none,2024-07-13\r\n2946,1390,APAC,home,retail,33.44,4,0.199,bundle,2024-03-09\r\n2947,1467,LATAM,electronics,retail,32.86,2,0.168,none,2024-12-20\r\n2948,1599,APAC,fashion,retail,63.94,5,0.219,none,2024-04-28\r\n2949,2132,LATAM,home,mobile,46.71,1,0.108,bundle,2024-10-18\r\n2950,1387,AMER,toys,online,61.27,3,0.150,none,2024-06-16\r\n2951,1875,EMEA,fashion,online,51.28,2,0.082,none,2024-06-26\r\n2952,2190,LATAM,electronics,retail,34.77,6,0.050,coupon,2024-03-21\r\n2953,2062,EMEA,home,partner,39.05,6,0.169,bundle,2024-11-22\r\n2954,2403,LATAM,home,online,169.06,5,0.134,loyalty,2024-01-25\r\n2955,2135,EMEA,grocery,online,55.11,4,0.069,none,2024-03-07\r\n2956,2473,EMEA,grocery,online,84.78,5,0.132,none,2024-08-11\r\n2957,1164,EMEA,toys,online,91.33,1,0.041,coupon,2024-07-01\r\n2958,2234,LATAM,sports,online,54.93,8,0.224,none,2024-01-13\r\n2959,1507,EMEA,grocery,retail,26.43,6,0.059,coupon,2024-04-02\r\n2960,1081,AMER,grocery,retail,52.72,1,0.070,bundle,2024-01-03\r\n2961,1302,LATAM,toys,retail,49.26,7,0.214,loyalty,2024-07-14\r\n2962,2361,EMEA,electronics,online,35.40,7,0.074,none,2024-01-02\r\n2963,1104,APAC,home,retail,22.25,5,0.179,none,2024-09-23\r\n2964,2445,APAC,electronics,online,78.73,2,0.166,none,2024-10-05\r\n2965,2459,AMER,sports,online,30.73,4,0.241,coupon,2024-08-27\r\n2966,1383,AMER,toys,online,64.89,3,0.092,coupon,2024-09-06\r\n2967,1999,EMEA,toys,partner,12.82,2,0.004,none,2024-11-13\r\n2968,2168,EMEA,sports,mobile,32.45,5,0.141,none,2024-02-06\r\n2969,1283,APAC,grocery,online,43.73,2,0.009,none,2024-04-21\r\n2970,1723,LATAM,grocery,online,104.30,6,0.134,coupon,2024-09-18\r\n2971,1619,APAC,grocery,online,56.55,7,0.188,none,2024-04-16\r\n2972,1826,LATAM,home,online,21.55,1,0.080,none,2024-08-21\r\n2973,1979,APAC,electronics,online,39.91,1,0.140,bundle,2024-06-02\r\n2974,1044,EMEA,grocery,retail,43.87,5,0.014,bund
le,2024-04-28\r\n2975,2402,AMER,electronics,online,68.56,7,0.112,none,2024-11-07\r\n2976,2360,EMEA,toys,online,18.99,5,0.221,coupon,2024-03-24\r\n2977,1267,EMEA,toys,online,62.54,2,0.162,none,2024-11-26\r\n2978,1034,EMEA,electronics,online,62.17,5,0.149,none,2024-09-24\r\n2979,2395,APAC,electronics,mobile,28.19,5,0.074,none,2024-06-09\r\n2980,2054,AMER,toys,online,50.95,5,0.215,none,2024-02-24\r\n2981,2425,APAC,toys,online,113.86,8,0.095,loyalty,2024-01-07\r\n2982,1679,APAC,grocery,online,88.12,5,0.235,none,2024-07-20\r\n2983,1111,APAC,toys,online,58.87,1,0.151,none,2024-10-16\r\n2984,1710,APAC,fashion,retail,39.92,8,0.224,none,2024-08-12\r\n2985,2232,EMEA,electronics,online,49.62,7,0.036,coupon,2024-08-03\r\n2986,2235,AMER,grocery,mobile,69.82,4,0.009,bundle,2024-05-12\r\n2987,1679,APAC,fashion,online,74.91,3,0.117,loyalty,2024-03-11\r\n2988,1615,LATAM,fashion,online,110.69,3,0.193,none,2024-07-15\r\n2989,1662,LATAM,home,retail,76.94,8,0.204,none,2024-01-26\r\n2990,2220,LATAM,grocery,online,56.71,6,0.174,none,2024-02-03\r\n2991,2152,EMEA,fashion,retail,74.48,5,0.097,none,2024-11-22\r\n2992,1546,EMEA,fashion,online,109.95,3,0.122,bundle,2024-06-13\r\n2993,1930,AMER,electronics,online,25.37,2,0.135,none,2024-05-19\r\n2994,1370,APAC,electronics,retail,140.22,2,0.113,coupon,2024-04-14\r\n2995,2017,EMEA,electronics,retail,99.03,5,0.239,none,2024-06-24\r\n2996,1372,APAC,electronics,retail,37.97,1,0.185,none,2024-10-08\r\n2997,1226,AMER,sports,online,71.14,7,0.085,bundle,2024-06-17\r\n2998,1957,AMER,fashion,retail,51.66,6,0.192,none,2024-04-15\r\n2999,2239,EMEA,sports,retail,50.44,8,0.103,coupon,2024-01-13\r\n3000,1057,LATAM,sports,retail,28.37,2,0.095,none,2024-05-26\r\n3001,1702,AMER,electronics,online,58.48,6,0.226,loyalty,2024-10-08\r\n3002,1909,APAC,fashion,mobile,32.18,1,0.012,bundle,2024-05-26\r\n3003,1092,AMER,grocery,online,214.86,1,0.037,none,2024-09-03\r\n3004,2054,AMER,grocery,partner,88.45,7,0.249,coupon,2024-06-19\r\n3005,2060,LATAM,fashion,online,46.19,3,0.
181,none,2024-03-25\r\n3006,2241,APAC,grocery,online,91.63,8,0.154,loyalty,2024-08-03\r\n3007,2146,APAC,electronics,online,47.88,1,0.008,none,2024-02-02\r\n3008,2217,LATAM,home,mobile,91.25,5,0.008,none,2024-11-08\r\n3009,2354,LATAM,sports,retail,98.67,1,0.186,none,2024-06-09\r\n3010,1653,APAC,electronics,mobile,163.92,1,0.122,none,2024-10-03\r\n3011,1418,LATAM,grocery,online,46.50,1,0.079,none,2024-03-08\r\n3012,1674,LATAM,sports,retail,167.24,7,0.184,none,2024-04-21\r\n3013,2114,AMER,grocery,online,45.73,3,0.105,none,2024-08-22\r\n3014,1506,EMEA,electronics,mobile,60.59,3,0.247,none,2024-12-07\r\n3015,1469,EMEA,electronics,mobile,34.49,7,0.209,none,2024-06-16\r\n3016,1406,LATAM,toys,retail,54.82,4,0.083,none,2024-07-07\r\n3017,1111,APAC,fashion,retail,104.99,6,0.211,none,2024-07-13\r\n3018,2133,AMER,electronics,online,103.77,6,0.241,none,2024-05-28\r\n3019,2296,AMER,grocery,online,80.24,7,0.116,coupon,2024-08-04\r\n3020,1214,EMEA,electronics,retail,112.34,7,0.036,none,2024-05-21\r\n3021,1080,LATAM,electronics,online,61.85,3,0.214,none,2024-11-07\r\n3022,1759,EMEA,electronics,online,23.54,2,0.003,none,2024-07-03\r\n3023,1439,LATAM,sports,online,52.02,8,0.015,none,2024-05-21\r\n3024,1966,APAC,grocery,online,38.18,6,0.070,none,2024-03-03\r\n3025,1316,APAC,home,retail,28.17,4,0.127,none,2024-11-04\r\n3026,2281,AMER,sports,partner,39.61,5,0.155,none,2024-01-09\r\n3027,1758,AMER,toys,retail,34.54,7,0.194,coupon,2024-11-17\r\n3028,1600,AMER,fashion,retail,99.69,7,0.181,none,2024-06-04\r\n3029,1714,APAC,fashion,retail,37.14,5,0.116,loyalty,2024-02-28\r\n3030,1964,EMEA,sports,retail,34.81,4,0.188,loyalty,2024-09-10\r\n3031,2452,LATAM,grocery,retail,30.01,7,0.080,bundle,2024-08-01\r\n3032,1490,AMER,grocery,online,37.78,6,0.040,none,2024-02-18\r\n3033,2217,LATAM,toys,mobile,23.89,4,0.199,bundle,2024-01-13\r\n3034,1773,LATAM,home,online,50.42,7,0.211,loyalty,2024-06-23\r\n3035,2327,EMEA,toys,retail,32.06,4,0.200,none,2024-01-07\r\n3036,1298,LATAM,grocery,online,40.77,6,0.005,
none,2024-05-21\r\n3037,2488,EMEA,home,mobile,69.31,2,0.209,none,2024-10-01\r\n3038,1021,AMER,sports,retail,53.35,4,0.186,none,2024-03-08\r\n3039,2356,LATAM,grocery,retail,67.05,5,0.203,none,2024-07-21\r\n3040,2313,LATAM,grocery,online,40.64,1,0.012,none,2024-12-15\r\n3041,2075,LATAM,grocery,online,43.33,1,0.059,none,2024-08-06\r\n3042,1719,LATAM,grocery,online,66.76,3,0.082,coupon,2024-03-25\r\n3043,1541,APAC,sports,retail,34.92,5,0.084,bundle,2024-04-01\r\n3044,1148,AMER,fashion,online,40.71,6,0.234,none,2024-03-01\r\n3045,2112,LATAM,home,partner,53.41,1,0.119,none,2024-08-06\r\n3046,1141,AMER,fashion,online,44.95,1,0.054,none,2024-07-14\r\n3047,1404,EMEA,grocery,online,65.37,1,0.179,none,2024-11-13\r\n3048,1366,APAC,fashion,retail,62.10,5,0.121,none,2024-05-22\r\n3049,1410,AMER,sports,mobile,81.08,2,0.231,loyalty,2024-10-20\r\n3050,2387,EMEA,electronics,retail,212.37,4,0.201,coupon,2024-07-14\r\n3051,1092,AMER,home,online,70.03,4,0.023,none,2024-12-19\r\n3052,2215,LATAM,home,mobile,52.43,1,0.152,none,2024-04-21\r\n3053,2258,AMER,home,online,42.32,7,0.041,none,2024-10-16\r\n3054,1597,APAC,toys,online,58.63,3,0.057,none,2024-07-13\r\n3055,1805,EMEA,toys,mobile,99.20,7,0.202,coupon,2024-09-09\r\n3056,1750,LATAM,fashion,online,54.12,7,0.167,bundle,2024-10-01\r\n3057,2098,AMER,fashion,retail,82.45,2,0.070,none,2024-01-25\r\n3058,1238,AMER,electronics,online,31.63,1,0.042,bundle,2024-03-26\r\n3059,1499,EMEA,grocery,online,76.53,2,0.222,coupon,2024-06-22\r\n3060,2045,LATAM,grocery,online,62.40,4,0.002,coupon,2024-10-21\r\n3061,1011,APAC,fashion,online,97.60,2,0.055,bundle,2024-06-11\r\n3062,2362,AMER,grocery,retail,121.09,1,0.224,none,2024-11-21\r\n3063,1099,LATAM,grocery,retail,75.08,8,0.161,none,2024-02-14\r\n3064,1315,AMER,grocery,online,192.66,2,0.217,bundle,2024-07-11\r\n3065,1752,APAC,electronics,retail,47.31,3,0.025,none,2024-09-02\r\n3066,2233,EMEA,toys,mobile,77.75,2,0.087,loyalty,2024-03-06\r\n3067,1158,LATAM,home,online,74.20,3,0.033,none,2024-08-02\r\n3068,2
359,LATAM,grocery,online,84.06,4,0.221,none,2024-12-11\r\n3069,1939,LATAM,electronics,retail,106.85,5,0.208,none,2024-02-27\r\n3070,1587,LATAM,grocery,online,27.43,1,0.046,coupon,2024-03-17\r\n3071,2475,AMER,fashion,partner,38.20,3,0.239,loyalty,2024-12-07\r\n3072,2353,AMER,fashion,online,41.56,7,0.161,none,2024-10-04\r\n3073,1127,EMEA,fashion,online,71.69,2,0.047,none,2024-05-02\r\n3074,1687,APAC,fashion,retail,76.47,6,0.021,bundle,2024-10-26\r\n3075,1811,APAC,electronics,retail,137.48,5,0.185,coupon,2024-06-22\r\n3076,1864,EMEA,home,mobile,45.64,1,0.232,none,2024-05-09\r\n3077,1371,AMER,sports,retail,75.08,2,0.165,none,2024-05-10\r\n3078,1002,EMEA,toys,retail,43.24,6,0.035,none,2024-09-02\r\n3079,1652,APAC,grocery,online,103.56,8,0.185,none,2024-05-06\r\n3080,1914,EMEA,toys,mobile,37.75,1,0.045,none,2024-08-01\r\n3081,1926,AMER,grocery,retail,50.15,8,0.152,bundle,2024-03-28\r\n3082,1060,LATAM,sports,online,57.55,8,0.170,loyalty,2024-01-01\r\n3083,1717,AMER,home,retail,43.37,6,0.119,none,2024-04-12\r\n3084,1416,EMEA,grocery,online,21.46,8,0.111,none,2024-09-18\r\n3085,1808,APAC,electronics,mobile,134.69,1,0.146,none,2024-05-11\r\n3086,1639,APAC,fashion,retail,94.97,5,0.097,none,2024-12-20\r\n3087,1626,EMEA,grocery,online,95.73,3,0.093,bundle,2024-03-08\r\n3088,1054,EMEA,grocery,retail,77.66,1,0.200,none,2024-01-05\r\n3089,1443,EMEA,fashion,mobile,48.50,3,0.062,coupon,2024-09-05\r\n3090,2476,APAC,home,online,87.33,2,0.011,coupon,2024-11-13\r\n3091,1978,AMER,grocery,partner,67.90,3,0.025,none,2024-11-17\r\n3092,1688,LATAM,grocery,online,28.93,5,0.207,none,2024-02-15\r\n3093,2264,LATAM,grocery,online,29.53,8,0.148,coupon,2024-06-02\r\n3094,1722,EMEA,fashion,online,50.60,1,0.182,bundle,2024-10-12\r\n3095,1351,APAC,sports,online,43.52,5,0.191,none,2024-02-11\r\n3096,2428,LATAM,fashion,online,37.62,6,0.071,bundle,2024-12-17\r\n3097,2316,EMEA,toys,online,69.56,6,0.195,bundle,2024-09-19\r\n3098,1755,APAC,home,online,54.59,3,0.151,coupon,2024-11-24\r\n3099,1611,EMEA,home,mo
bile,96.86,7,0.209,none,2024-04-03\r\n3100,2108,AMER,electronics,online,111.77,5,0.122,none,2024-05-24\r\n3101,2163,EMEA,home,retail,81.54,7,0.137,none,2024-09-16\r\n3102,2164,AMER,home,retail,30.26,8,0.133,coupon,2024-11-09\r\n3103,2494,AMER,sports,online,60.72,1,0.021,loyalty,2024-06-11\r\n3104,1154,LATAM,toys,online,56.52,8,0.051,none,2024-06-17\r\n3105,1959,EMEA,home,retail,110.51,7,0.071,loyalty,2024-02-28\r\n3106,2129,APAC,home,retail,77.14,2,0.185,bundle,2024-06-16\r\n3107,1879,EMEA,fashion,mobile,20.32,7,0.189,coupon,2024-07-27\r\n3108,2183,EMEA,fashion,online,64.05,7,0.122,none,2024-02-20\r\n3109,1659,APAC,home,retail,25.73,5,0.046,coupon,2024-02-03\r\n3110,1616,APAC,toys,retail,53.14,4,0.034,none,2024-01-23\r\n3111,2465,EMEA,sports,retail,139.04,3,0.122,coupon,2024-08-23\r\n3112,1667,AMER,fashion,mobile,67.87,3,0.034,coupon,2024-05-07\r\n3113,1413,LATAM,home,online,80.39,1,0.123,bundle,2024-05-05\r\n3114,1469,EMEA,electronics,online,36.39,5,0.010,bundle,2024-04-05\r\n3115,1028,EMEA,grocery,online,115.51,6,0.163,none,2024-01-03\r\n3116,1678,LATAM,home,online,87.26,2,0.060,none,2024-11-15\r\n3117,2206,AMER,home,mobile,78.24,5,0.130,none,2024-03-07\r\n3118,1267,EMEA,grocery,retail,22.96,1,0.199,none,2024-06-25\r\n3119,2212,EMEA,home,retail,46.37,4,0.219,none,2024-06-04\r\n3120,1868,AMER,sports,retail,72.26,6,0.077,none,2024-03-01\r\n3121,1823,EMEA,grocery,retail,20.41,6,0.200,none,2024-05-03\r\n3122,2116,LATAM,fashion,retail,100.86,3,0.050,loyalty,2024-09-03\r\n3123,1961,EMEA,sports,online,64.96,1,0.053,bundle,2024-04-15\r\n3124,1312,EMEA,fashion,online,70.38,4,0.145,none,2024-04-09\r\n3125,1174,APAC,toys,online,35.94,6,0.164,bundle,2024-01-23\r\n3126,1231,AMER,electronics,retail,66.30,6,0.170,bundle,2024-07-08\r\n3127,2190,LATAM,toys,retail,69.99,6,0.248,none,2024-01-04\r\n3128,1365,LATAM,grocery,retail,72.08,6,0.080,coupon,2024-03-06\r\n3129,2380,AMER,electronics,online,43.68,8,0.020,none,2024-10-18\r\n3130,1374,APAC,fashion,online,22.07,6,0.027,none,2024-0
8-24\r\n3131,1126,LATAM,grocery,retail,62.28,7,0.062,loyalty,2024-11-14\r\n3132,1773,LATAM,sports,retail,23.51,3,0.090,none,2024-10-13\r\n3133,2119,AMER,grocery,online,30.92,2,0.153,none,2024-07-02\r\n3134,1335,APAC,toys,online,98.79,7,0.128,coupon,2024-08-26\r\n3135,1804,AMER,electronics,retail,49.50,3,0.169,none,2024-07-15\r\n3136,1278,AMER,home,online,15.54,6,0.148,none,2024-05-18\r\n3137,1828,EMEA,grocery,online,48.95,8,0.231,none,2024-03-04\r\n3138,2320,LATAM,electronics,online,124.85,4,0.065,loyalty,2024-04-06\r\n3139,1082,EMEA,sports,retail,51.17,7,0.128,none,2024-07-05\r\n3140,1420,APAC,electronics,retail,49.73,2,0.198,loyalty,2024-08-21\r\n3141,2430,APAC,toys,retail,35.72,4,0.032,none,2024-07-14\r\n3142,2312,APAC,toys,online,95.28,4,0.026,none,2024-02-12\r\n3143,2174,LATAM,grocery,online,109.96,4,0.021,coupon,2024-12-17\r\n3144,2067,LATAM,fashion,online,34.82,2,0.128,bundle,2024-05-13\r\n3145,1739,AMER,home,mobile,56.74,4,0.164,none,2024-05-02\r\n3146,1104,APAC,grocery,mobile,27.92,6,0.069,none,2024-06-05\r\n3147,1352,AMER,grocery,mobile,43.48,3,0.229,none,2024-08-14\r\n3148,2119,AMER,electronics,partner,125.43,6,0.210,none,2024-12-09\r\n3149,2427,LATAM,home,online,66.22,1,0.049,coupon,2024-06-15\r\n3150,1414,APAC,sports,mobile,77.27,8,0.171,loyalty,2024-08-28\r\n3151,2078,APAC,home,retail,130.42,8,0.123,none,2024-12-02\r\n3152,1941,AMER,fashion,retail,88.05,3,0.067,none,2024-12-22\r\n3153,1724,LATAM,fashion,online,72.41,4,0.146,none,2024-04-18\r\n3154,1672,APAC,fashion,online,49.22,2,0.092,coupon,2024-06-28\r\n3155,1914,EMEA,electronics,online,54.19,8,0.159,none,2024-12-08\r\n3156,1454,APAC,grocery,retail,30.05,4,0.034,none,2024-02-20\r\n3157,1653,APAC,home,retail,82.57,8,0.181,bundle,2024-05-23\r\n3158,1907,EMEA,home,retail,69.70,4,0.178,none,2024-09-14\r\n3159,1372,APAC,toys,online,41.22,7,0.220,none,2024-04-22\r\n3160,2422,APAC,fashion,online,96.54,3,0.051,bundle,2024-05-28\r\n3161,2201,AMER,sports,online,53.92,3,0.191,loyalty,2024-01-24\r\n3162,1404,EM
EA,electronics,online,55.94,8,0.155,bundle,2024-12-28\r\n3163,1201,LATAM,grocery,retail,89.48,1,0.106,coupon,2024-02-19\r\n3164,1884,APAC,electronics,retail,54.22,1,0.203,none,2024-03-02\r\n3165,1862,LATAM,toys,retail,21.98,7,0.161,none,2024-01-21\r\n3166,2454,LATAM,sports,retail,51.83,1,0.105,coupon,2024-09-07\r\n3167,1672,APAC,toys,mobile,88.13,2,0.098,coupon,2024-10-04\r\n3168,1924,AMER,sports,mobile,18.50,4,0.003,none,2024-08-22\r\n3169,1633,EMEA,electronics,retail,38.97,3,0.092,loyalty,2024-01-16\r\n3170,1386,AMER,electronics,online,84.90,4,0.097,loyalty,2024-03-24\r\n3171,1825,AMER,home,online,111.80,7,0.079,loyalty,2024-03-20\r\n3172,2340,EMEA,grocery,online,115.66,5,0.013,none,2024-08-10\r\n3173,1323,EMEA,sports,retail,82.89,6,0.007,none,2024-01-21\r\n3174,1507,EMEA,fashion,retail,53.67,1,0.124,none,2024-09-12\r\n3175,1890,LATAM,grocery,online,212.34,7,0.062,coupon,2024-04-19\r\n3176,1009,APAC,sports,partner,84.42,1,0.238,coupon,2024-10-17\r\n3177,1765,EMEA,sports,retail,109.18,2,0.177,none,2024-08-21\r\n3178,2325,LATAM,home,online,37.22,2,0.213,bundle,2024-10-13\r\n3179,1424,APAC,toys,online,61.80,4,0.145,none,2024-07-08\r\n3180,1228,APAC,electronics,online,41.06,6,0.164,none,2024-12-18\r\n3181,1156,APAC,grocery,online,37.05,8,0.098,none,2024-08-09\r\n3182,2478,AMER,toys,retail,110.46,8,0.104,coupon,2024-01-26\r\n3183,1594,LATAM,fashion,retail,80.67,6,0.011,none,2024-03-17\r\n3184,1839,APAC,sports,online,36.66,1,0.034,loyalty,2024-11-05\r\n3185,2201,AMER,sports,online,55.57,2,0.187,coupon,2024-02-08\r\n3186,1063,AMER,sports,online,40.95,8,0.101,loyalty,2024-11-23\r\n3187,2343,EMEA,electronics,mobile,89.68,4,0.059,coupon,2024-07-05\r\n3188,1845,AMER,electronics,retail,40.40,4,0.170,bundle,2024-03-24\r\n3189,2285,APAC,fashion,retail,45.40,2,0.204,none,2024-09-02\r\n3190,1462,LATAM,grocery,online,118.53,4,0.182,bundle,2024-10-26\r\n3191,1831,APAC,sports,mobile,55.79,6,0.224,loyalty,2024-04-23\r\n3192,1374,APAC,home,retail,80.55,2,0.153,none,2024-07-20\r\n3193,
1087,AMER,sports,retail,42.83,1,0.080,bundle,2024-09-07\r\n3194,1372,APAC,electronics,retail,41.25,2,0.239,none,2024-09-13\r\n3195,2194,APAC,grocery,online,56.72,3,0.027,coupon,2024-12-03\r\n3196,1457,EMEA,sports,online,94.88,5,0.028,loyalty,2024-09-12\r\n3197,1251,EMEA,toys,online,41.71,6,0.030,none,2024-07-20\r\n3198,1005,LATAM,electronics,retail,31.33,8,0.153,coupon,2024-08-05\r\n3199,2338,AMER,fashion,online,35.14,4,0.043,none,2024-05-08\r\n3200,1060,LATAM,electronics,retail,97.58,7,0.091,coupon,2024-10-13\r\n3201,1299,LATAM,sports,online,69.86,3,0.004,loyalty,2024-12-24\r\n3202,1662,LATAM,fashion,online,56.43,4,0.198,coupon,2024-02-25\r\n3203,1041,APAC,home,retail,40.47,8,0.247,none,2024-05-23\r\n3204,2358,AMER,toys,mobile,93.47,8,0.031,none,2024-07-18\r\n3205,1183,AMER,home,retail,74.97,4,0.170,coupon,2024-07-27\r\n3206,1645,EMEA,grocery,online,42.88,5,0.024,bundle,2024-10-15\r\n3207,2296,AMER,electronics,mobile,128.93,8,0.243,none,2024-05-19\r\n3208,1836,LATAM,home,online,22.22,2,0.043,none,2024-03-01\r\n3209,1920,LATAM,electronics,online,47.36,7,0.219,none,2024-12-26\r\n3210,2364,APAC,home,mobile,27.47,1,0.246,none,2024-03-09\r\n3211,2489,LATAM,grocery,online,41.89,8,0.242,none,2024-08-10\r\n3212,1759,EMEA,electronics,online,70.82,4,0.029,coupon,2024-05-18\r\n3213,1682,EMEA,toys,online,52.65,8,0.191,bundle,2024-01-26\r\n3214,1639,APAC,home,retail,79.54,3,0.039,bundle,2024-08-14\r\n3215,1052,LATAM,electronics,retail,61.00,2,0.135,bundle,2024-04-17\r\n3216,2153,APAC,fashion,retail,50.19,6,0.121,bundle,2024-10-12\r\n3217,2499,LATAM,fashion,online,36.65,7,0.038,coupon,2024-03-25\r\n3218,1060,LATAM,home,retail,35.94,8,0.168,coupon,2024-01-13\r\n3219,2414,EMEA,home,online,34.46,5,0.113,none,2024-04-12\r\n3220,1602,EMEA,electronics,online,54.38,3,0.011,bundle,2024-09-08\r\n3221,1226,AMER,grocery,retail,42.21,4,0.150,none,2024-06-20\r\n3222,2257,AMER,electronics,online,41.88,8,0.039,none,2024-05-14\r\n3223,1818,AMER,electronics,online,49.01,5,0.238,coupon,2024-01-05
\r\n3224,1432,APAC,electronics,online,84.20,5,0.185,bundle,2024-03-09\r\n3225,2172,EMEA,electronics,mobile,33.88,1,0.182,none,2024-12-16\r\n3226,1176,EMEA,home,online,84.21,1,0.173,none,2024-03-10\r\n3227,1348,AMER,home,mobile,17.61,2,0.181,none,2024-02-12\r\n3228,2032,AMER,electronics,online,72.76,8,0.114,none,2024-09-24\r\n3229,1381,LATAM,electronics,online,38.90,7,0.035,loyalty,2024-12-21\r\n3230,2184,APAC,toys,online,67.65,2,0.057,bundle,2024-01-19\r\n3231,1142,EMEA,fashion,online,37.38,4,0.169,bundle,2024-05-12\r\n3232,2070,APAC,fashion,online,48.42,3,0.206,coupon,2024-11-02\r\n3233,1311,APAC,fashion,online,58.13,1,0.062,none,2024-02-11\r\n3234,1712,LATAM,home,online,63.42,7,0.078,coupon,2024-04-11\r\n3235,1145,AMER,fashion,online,25.36,2,0.095,loyalty,2024-09-02\r\n3236,1455,APAC,home,online,43.66,2,0.065,coupon,2024-12-28\r\n3237,1627,LATAM,grocery,retail,100.45,3,0.196,none,2024-11-03\r\n3238,1094,LATAM,grocery,retail,64.02,3,0.174,none,2024-07-01\r\n3239,1320,EMEA,electronics,retail,84.79,8,0.101,none,2024-06-16\r\n3240,2371,LATAM,grocery,online,133.73,7,0.121,none,2024-06-02\r\n3241,2354,LATAM,grocery,online,47.43,1,0.103,loyalty,2024-04-10\r\n3242,1469,EMEA,home,retail,49.71,6,0.143,loyalty,2024-04-11\r\n3243,1930,AMER,electronics,retail,101.13,6,0.048,none,2024-12-04\r\n3244,2212,EMEA,electronics,retail,76.79,5,0.005,none,2024-12-08\r\n3245,1166,AMER,home,online,55.69,2,0.147,bundle,2024-12-25\r\n3246,1415,AMER,home,retail,74.46,1,0.096,none,2024-03-09\r\n3247,2482,EMEA,fashion,mobile,70.14,2,0.051,none,2024-01-05\r\n3248,1601,APAC,toys,retail,21.77,5,0.191,none,2024-05-02\r\n3249,1417,APAC,fashion,online,66.43,1,0.133,loyalty,2024-10-16\r\n3250,2387,EMEA,electronics,retail,86.91,8,0.246,none,2024-09-14\r\n3251,2372,AMER,home,retail,84.80,1,0.178,none,2024-02-17\r\n3252,2301,EMEA,home,online,53.01,8,0.121,none,2024-02-14\r\n3253,1282,LATAM,electronics,retail,63.57,4,0.063,none,2024-06-22\r\n3254,1300,EMEA,sports,online,125.18,4,0.248,coupon,2024-07-04\r\
n3255,1463,EMEA,electronics,online,57.42,5,0.192,none,2024-04-24\r\n3256,1801,LATAM,fashion,retail,31.32,8,0.171,bundle,2024-10-05\r\n3257,1135,APAC,toys,retail,128.72,4,0.148,none,2024-11-15\r\n3258,1365,LATAM,electronics,online,64.56,6,0.036,loyalty,2024-05-22\r\n3259,2257,AMER,grocery,retail,54.15,6,0.090,none,2024-07-27\r\n3260,1288,LATAM,home,online,85.19,5,0.226,none,2024-06-24\r\n3261,2270,APAC,sports,retail,49.96,4,0.085,none,2024-12-23\r\n3262,2037,LATAM,electronics,online,75.06,4,0.082,none,2024-01-12\r\n3263,1168,APAC,grocery,retail,45.12,7,0.118,none,2024-11-02\r\n3264,2159,AMER,grocery,retail,40.63,6,0.008,none,2024-10-23\r\n3265,1971,EMEA,grocery,retail,145.55,5,0.188,none,2024-04-25\r\n3266,2449,LATAM,grocery,retail,22.22,3,0.094,none,2024-10-22\r\n3267,1579,AMER,sports,mobile,39.27,3,0.216,none,2024-03-14\r\n3268,1264,APAC,electronics,retail,57.72,7,0.130,none,2024-06-15\r\n3269,1122,AMER,electronics,retail,23.73,1,0.138,none,2024-10-14\r\n3270,1109,APAC,electronics,partner,161.25,1,0.146,none,2024-01-23\r\n3271,1029,EMEA,toys,retail,80.17,3,0.102,loyalty,2024-07-08\r\n3272,1946,AMER,home,online,15.10,7,0.224,none,2024-05-01\r\n3273,1658,AMER,home,online,56.75,4,0.077,loyalty,2024-02-01\r\n3274,1645,EMEA,sports,retail,67.10,8,0.053,bundle,2024-06-08\r\n3275,2133,AMER,sports,mobile,34.58,3,0.069,none,2024-05-27\r\n3276,1586,LATAM,grocery,online,40.07,6,0.192,coupon,2024-02-05\r\n3277,1794,AMER,toys,online,279.69,6,0.090,bundle,2024-01-19\r\n3278,1189,AMER,electronics,retail,31.21,8,0.027,none,2024-01-08\r\n3279,2180,AMER,sports,retail,119.76,5,0.185,none,2024-11-13\r\n3280,1095,APAC,fashion,online,126.07,3,0.065,none,2024-02-10\r\n3281,1409,APAC,toys,online,86.09,3,0.237,loyalty,2024-06-04\r\n3282,2004,LATAM,grocery,retail,55.97,8,0.177,none,2024-08-20\r\n3283,1551,APAC,sports,online,17.69,7,0.034,bundle,2024-06-08\r\n3284,1931,APAC,electronics,retail,38.62,8,0.156,loyalty,2024-12-17\r\n3285,1835,AMER,electronics,retail,31.95,6,0.172,coupon,2024-02-23
\r\n3286,1749,LATAM,grocery,online,59.42,1,0.103,none,2024-07-20\r\n3287,1489,AMER,sports,online,45.97,5,0.153,loyalty,2024-03-27\r\n3288,2251,APAC,grocery,online,46.59,7,0.109,coupon,2024-12-24\r\n3289,1140,LATAM,grocery,mobile,21.66,7,0.122,bundle,2024-02-11\r\n3290,2252,EMEA,fashion,mobile,67.29,2,0.213,bundle,2024-04-28\r\n3291,1995,LATAM,home,retail,171.89,1,0.170,coupon,2024-08-26\r\n3292,1081,AMER,grocery,online,85.44,6,0.089,coupon,2024-11-24\r\n3293,2325,LATAM,fashion,online,49.60,7,0.066,none,2024-04-06\r\n3294,1293,AMER,electronics,online,65.37,5,0.185,bundle,2024-12-02\r\n3295,2418,AMER,home,online,24.94,1,0.170,none,2024-07-14\r\n3296,1581,APAC,home,retail,38.49,7,0.229,coupon,2024-04-25\r\n3297,2097,AMER,home,online,68.95,7,0.195,loyalty,2024-07-28\r\n3298,1738,LATAM,sports,online,41.08,2,0.127,coupon,2024-11-28\r\n3299,2123,AMER,fashion,mobile,54.53,8,0.142,none,2024-01-14\r\n3300,1766,AMER,electronics,mobile,102.30,2,0.026,none,2024-04-05\r\n3301,2230,LATAM,grocery,online,65.28,5,0.042,none,2024-08-27\r\n3302,1527,AMER,sports,online,81.63,8,0.193,none,2024-12-19\r\n3303,2237,EMEA,home,retail,66.41,7,0.051,coupon,2024-10-28\r\n3304,2203,APAC,sports,online,53.09,1,0.137,none,2024-11-27\r\n3305,2455,AMER,toys,online,50.25,1,0.162,none,2024-08-09\r\n3306,1067,APAC,grocery,retail,27.75,4,0.098,none,2024-04-04\r\n3307,2121,APAC,toys,retail,69.38,5,0.159,loyalty,2024-09-09\r\n3308,1062,EMEA,electronics,online,29.84,7,0.147,none,2024-04-08\r\n3309,2490,AMER,fashion,retail,93.01,4,0.171,none,2024-05-16\r\n3310,1493,APAC,grocery,online,21.17,8,0.153,coupon,2024-10-01\r\n3311,2425,APAC,toys,online,95.42,5,0.232,none,2024-12-26\r\n3312,2286,AMER,sports,mobile,79.16,1,0.110,bundle,2024-01-08\r\n3313,1148,AMER,toys,retail,36.85,7,0.236,none,2024-03-25\r\n3314,2044,APAC,home,retail,61.40,3,0.194,none,2024-12-17\r\n3315,1459,LATAM,sports,partner,37.39,5,0.004,none,2024-05-27\r\n3316,1115,AMER,sports,mobile,41.63,3,0.043,none,2024-10-05\r\n3317,1509,AMER,electronics,
online,70.66,4,0.091,none,2024-03-01\r\n3318,1483,EMEA,home,online,36.70,2,0.106,none,2024-03-13\r\n3319,1581,APAC,electronics,retail,98.34,5,0.040,none,2024-12-11\r\n3320,1185,LATAM,home,mobile,32.81,1,0.119,coupon,2024-09-07\r\n3321,2276,AMER,sports,mobile,67.14,1,0.157,none,2024-11-05\r\n3322,1292,LATAM,electronics,mobile,87.28,6,0.133,none,2024-01-06\r\n3323,1173,LATAM,fashion,online,80.68,2,0.213,none,2024-05-21\r\n3324,1507,EMEA,grocery,mobile,59.74,1,0.077,none,2024-11-09\r\n3325,1389,LATAM,toys,online,28.64,3,0.179,none,2024-01-20\r\n3326,1277,AMER,toys,online,76.00,1,0.220,none,2024-05-27\r\n3327,1487,AMER,fashion,retail,54.80,2,0.178,none,2024-10-22\r\n3328,1083,AMER,electronics,mobile,12.36,1,0.134,none,2024-02-07\r\n3329,2420,EMEA,grocery,mobile,117.20,2,0.091,bundle,2024-11-17\r\n3330,2236,APAC,toys,mobile,122.76,1,0.158,bundle,2024-10-12\r\n3331,1592,LATAM,grocery,online,17.85,5,0.152,coupon,2024-04-18\r\n3332,1155,EMEA,electronics,online,54.25,2,0.219,none,2024-10-21\r\n3333,1387,AMER,sports,retail,42.66,2,0.006,none,2024-01-24\r\n3334,2435,AMER,electronics,online,229.11,4,0.069,bundle,2024-03-22\r\n3335,2375,AMER,home,mobile,78.30,2,0.057,none,2024-08-22\r\n3336,1320,EMEA,home,online,57.51,6,0.073,bundle,2024-08-18\r\n3337,2389,LATAM,fashion,partner,43.44,7,0.200,none,2024-02-07\r\n3338,2413,AMER,home,online,46.81,5,0.040,none,2024-03-02\r\n3339,2467,AMER,toys,mobile,58.46,7,0.090,coupon,2024-03-23\r\n3340,2011,AMER,home,mobile,141.46,3,0.167,none,2024-12-19\r\n3341,1694,APAC,electronics,online,76.99,6,0.092,loyalty,2024-10-01\r\n3342,1278,AMER,sports,online,88.41,7,0.079,none,2024-03-09\r\n3343,1298,LATAM,toys,retail,31.69,2,0.208,none,2024-05-15\r\n3344,1187,AMER,grocery,retail,76.44,7,0.219,bundle,2024-04-11\r\n3345,1517,AMER,fashion,online,58.37,3,0.160,coupon,2024-10-25\r\n3346,2122,AMER,electronics,online,111.41,6,0.223,coupon,2024-05-03\r\n3347,1872,LATAM,fashion,mobile,96.21,1,0.230,none,2024-04-16\r\n3348,1875,EMEA,grocery,online,18.51,7,0.1
79,none,2024-02-02\r\n3349,1861,AMER,sports,retail,34.06,5,0.157,none,2024-11-20\r\n3350,1425,EMEA,home,retail,47.50,8,0.081,coupon,2024-06-22\r\n3351,2298,APAC,home,retail,40.37,6,0.124,none,2024-04-11\r\n3352,1188,LATAM,fashion,online,31.58,8,0.080,none,2024-02-09\r\n3353,2048,LATAM,home,mobile,31.13,3,0.085,none,2024-12-12\r\n3354,2084,LATAM,fashion,mobile,29.49,1,0.042,none,2024-08-24\r\n3355,2176,AMER,electronics,online,44.40,6,0.031,bundle,2024-08-05\r\n3356,1108,EMEA,electronics,retail,25.07,6,0.210,bundle,2024-12-08\r\n3357,1371,AMER,fashion,online,45.43,4,0.037,coupon,2024-02-26\r\n3358,1088,LATAM,fashion,retail,32.39,7,0.111,none,2024-03-23\r\n3359,2478,AMER,electronics,online,29.56,7,0.130,none,2024-05-08\r\n3360,2488,EMEA,fashion,partner,107.15,8,0.033,coupon,2024-07-19\r\n3361,1414,APAC,sports,retail,56.35,2,0.131,none,2024-04-28\r\n3362,1463,EMEA,electronics,retail,55.52,7,0.238,coupon,2024-08-20\r\n3363,1838,AMER,electronics,online,51.69,3,0.148,none,2024-03-19\r\n3364,1942,APAC,grocery,retail,59.35,3,0.179,none,2024-06-22\r\n3365,1985,AMER,electronics,online,48.06,1,0.232,none,2024-04-05\r\n3366,1923,LATAM,toys,online,31.09,3,0.191,none,2024-08-14\r\n3367,1638,EMEA,sports,mobile,65.18,2,0.184,none,2024-10-05\r\n3368,1223,LATAM,fashion,retail,261.94,4,0.031,none,2024-07-24\r\n3369,2114,AMER,grocery,retail,23.81,8,0.250,bundle,2024-08-09\r\n3370,2111,EMEA,toys,mobile,79.70,7,0.024,none,2024-01-18\r\n3371,2021,EMEA,sports,retail,78.14,8,0.003,none,2024-01-15\r\n3372,1622,LATAM,grocery,online,51.96,5,0.246,loyalty,2024-06-17\r\n3373,1394,LATAM,fashion,retail,32.13,2,0.110,none,2024-12-22\r\n3374,1176,EMEA,fashion,retail,54.22,3,0.011,coupon,2024-12-13\r\n3375,2288,AMER,home,retail,80.04,1,0.188,coupon,2024-02-08\r\n3376,1116,LATAM,electronics,retail,47.93,3,0.112,coupon,2024-03-20\r\n3377,1411,LATAM,fashion,mobile,60.42,7,0.200,none,2024-05-15\r\n3378,2499,LATAM,electronics,online,54.87,7,0.184,coupon,2024-03-21\r\n3379,1558,EMEA,grocery,mobile,101.52,5,
0.010,bundle,2024-11-12\r\n3380,1455,APAC,fashion,mobile,28.24,5,0.213,loyalty,2024-11-06\r\n3381,1102,APAC,sports,online,50.88,4,0.153,none,2024-04-01\r\n3382,2483,LATAM,toys,mobile,65.26,5,0.239,bundle,2024-11-09\r\n3383,2289,APAC,grocery,online,109.53,7,0.025,bundle,2024-02-22\r\n3384,1574,AMER,fashion,mobile,46.86,2,0.016,none,2024-05-27\r\n3385,1632,LATAM,electronics,online,131.56,2,0.078,loyalty,2024-02-09\r\n3386,1438,APAC,grocery,online,48.75,4,0.190,none,2024-12-27\r\n3387,1252,APAC,grocery,retail,78.28,4,0.149,none,2024-02-24\r\n3388,1337,APAC,sports,online,87.91,1,0.207,coupon,2024-05-08\r\n3389,2486,APAC,electronics,online,68.91,3,0.150,none,2024-07-25\r\n3390,2448,APAC,fashion,retail,121.53,7,0.100,bundle,2024-01-14\r\n3391,1376,EMEA,sports,online,61.82,5,0.155,none,2024-01-13\r\n3392,2370,EMEA,home,mobile,97.24,1,0.104,none,2024-12-08\r\n3393,1052,LATAM,grocery,retail,76.24,2,0.108,none,2024-05-27\r\n3394,1605,APAC,home,partner,29.32,8,0.104,loyalty,2024-11-13\r\n3395,1386,AMER,electronics,retail,32.98,2,0.036,coupon,2024-04-10\r\n3396,2046,APAC,electronics,online,42.98,6,0.192,none,2024-02-02\r\n3397,1405,LATAM,toys,retail,72.12,5,0.246,coupon,2024-06-28\r\n3398,2021,EMEA,home,retail,60.88,7,0.177,coupon,2024-02-14\r\n3399,1959,EMEA,grocery,retail,48.78,7,0.200,none,2024-01-26\r\n3400,1780,APAC,sports,online,106.80,2,0.005,bundle,2024-03-17\r\n3401,2369,LATAM,fashion,online,84.37,7,0.245,none,2024-10-07\r\n3402,2021,EMEA,electronics,retail,68.19,6,0.244,loyalty,2024-04-07\r\n3403,2027,EMEA,grocery,online,51.68,4,0.249,coupon,2024-10-18\r\n3404,2044,APAC,electronics,mobile,33.39,2,0.202,none,2024-01-08\r\n3405,2302,APAC,toys,retail,27.88,7,0.063,bundle,2024-07-23\r\n3406,2408,EMEA,sports,online,34.05,5,0.182,none,2024-07-02\r\n3407,1537,LATAM,grocery,online,37.06,5,0.097,none,2024-01-18\r\n3408,2171,EMEA,fashion,retail,25.85,6,0.041,coupon,2024-08-13\r\n3409,2168,EMEA,sports,online,39.10,4,0.227,coupon,2024-04-08\r\n3410,1424,APAC,grocery,online,100.29
,3,0.097,bundle,2024-03-07\r\n3411,1047,APAC,electronics,retail,79.16,4,0.218,coupon,2024-10-27\r\n3412,1800,APAC,grocery,mobile,36.34,4,0.171,bundle,2024-05-11\r\n3413,1311,APAC,toys,retail,55.40,8,0.082,loyalty,2024-10-16\r\n3414,1764,LATAM,sports,online,67.56,1,0.147,coupon,2024-08-24\r\n3415,1862,LATAM,fashion,mobile,48.57,8,0.109,bundle,2024-07-18\r\n3416,2335,EMEA,grocery,online,68.90,1,0.009,bundle,2024-04-17\r\n3417,2278,APAC,home,retail,31.21,4,0.188,coupon,2024-04-15\r\n3418,1232,LATAM,fashion,mobile,70.65,1,0.004,bundle,2024-05-26\r\n3419,1450,EMEA,sports,online,41.27,3,0.141,none,2024-11-10\r\n3420,1652,APAC,fashion,retail,63.54,2,0.075,bundle,2024-06-01\r\n3421,1689,LATAM,sports,online,26.56,5,0.021,coupon,2024-03-28\r\n3422,1313,EMEA,toys,online,19.76,1,0.234,none,2024-02-08\r\n3423,2242,AMER,home,online,38.72,4,0.235,coupon,2024-08-27\r\n3424,1749,LATAM,fashion,mobile,39.37,3,0.137,none,2024-10-25\r\n3425,1226,AMER,sports,retail,67.78,2,0.017,none,2024-03-07\r\n3426,1834,AMER,home,online,77.45,1,0.088,none,2024-04-03\r\n3427,1216,APAC,grocery,retail,49.33,3,0.139,none,2024-06-04\r\n3428,1393,LATAM,fashion,online,62.28,5,0.212,none,2024-05-09\r\n3429,1215,LATAM,fashion,online,74.74,2,0.073,coupon,2024-08-03\r\n3430,2307,LATAM,electronics,retail,65.06,1,0.184,loyalty,2024-11-09\r\n3431,2266,LATAM,fashion,partner,92.35,7,0.185,coupon,2024-08-10\r\n3432,1466,AMER,sports,online,93.60,7,0.115,bundle,2024-06-17\r\n3433,2278,APAC,sports,retail,60.09,6,0.144,bundle,2024-09-12\r\n3434,1152,LATAM,grocery,retail,14.85,5,0.173,loyalty,2024-11-09\r\n3435,1676,LATAM,toys,mobile,20.35,7,0.098,none,2024-02-05\r\n3436,1905,APAC,electronics,partner,37.00,7,0.117,bundle,2024-05-02\r\n3437,1829,EMEA,electronics,online,99.99,4,0.133,coupon,2024-05-20\r\n3438,1298,LATAM,toys,mobile,101.71,3,0.148,loyalty,2024-12-17\r\n3439,1523,LATAM,home,online,94.50,1,0.063,none,2024-06-22\r\n3440,2007,LATAM,grocery,online,48.81,8,0.110,loyalty,2024-11-12\r\n3441,1320,EMEA,toys,online,51.
33,8,0.122,coupon,2024-01-05\r\n3442,1885,EMEA,toys,retail,46.96,5,0.059,loyalty,2024-05-15\r\n3443,1724,LATAM,fashion,online,41.87,6,0.139,none,2024-08-01\r\n3444,2453,AMER,home,mobile,54.72,6,0.249,none,2024-04-23\r\n3445,2239,EMEA,toys,retail,38.28,8,0.036,none,2024-08-27\r\n3446,1976,AMER,toys,retail,120.31,1,0.145,coupon,2024-01-26\r\n3447,1654,EMEA,toys,mobile,72.12,3,0.221,none,2024-01-01\r\n3448,2169,EMEA,electronics,retail,51.42,5,0.014,coupon,2024-02-18\r\n3449,2171,EMEA,toys,retail,22.61,1,0.141,coupon,2024-09-20\r\n3450,2436,LATAM,electronics,online,48.99,5,0.111,none,2024-12-22\r\n3451,1286,EMEA,fashion,mobile,60.56,2,0.020,none,2024-05-05\r\n3452,1116,LATAM,fashion,mobile,46.70,2,0.236,coupon,2024-04-09\r\n3453,2119,AMER,grocery,online,62.27,5,0.177,none,2024-06-03\r\n3454,2184,APAC,sports,retail,45.30,8,0.197,none,2024-12-28\r\n3455,1817,APAC,electronics,retail,22.45,6,0.037,none,2024-05-01\r\n3456,2430,APAC,grocery,online,106.66,5,0.227,none,2024-05-25\r\n3457,1355,EMEA,grocery,online,53.70,5,0.068,none,2024-02-04\r\n3458,1677,EMEA,grocery,mobile,111.99,4,0.059,loyalty,2024-05-28\r\n3459,1912,APAC,toys,partner,41.18,5,0.015,none,2024-10-27\r\n3460,1526,EMEA,grocery,retail,59.56,3,0.147,bundle,2024-01-03\r\n3461,2108,AMER,grocery,partner,81.97,5,0.165,coupon,2024-05-15\r\n3462,2318,AMER,sports,retail,95.78,8,0.160,coupon,2024-04-10\r\n3463,1001,LATAM,electronics,retail,87.44,1,0.051,none,2024-06-05\r\n3464,1322,AMER,electronics,retail,67.97,3,0.035,bundle,2024-12-20\r\n3465,2336,APAC,home,online,57.51,8,0.158,bundle,2024-06-13\r\n3466,1206,EMEA,grocery,mobile,53.98,7,0.021,none,2024-10-02\r\n3467,2437,LATAM,electronics,online,43.25,2,0.039,none,2024-08-18\r\n3468,1577,AMER,fashion,online,76.93,7,0.191,coupon,2024-05-19\r\n3469,1250,APAC,home,retail,48.80,3,0.142,none,2024-03-08\r\n3470,2488,EMEA,sports,online,133.00,7,0.118,bundle,2024-10-10\r\n3471,1268,EMEA,home,online,29.42,8,0.023,none,2024-02-25\r\n3472,2289,APAC,home,retail,104.41,3,0.088,none,2
024-11-14\r\n3473,1495,LATAM,electronics,mobile,85.97,1,0.167,loyalty,2024-08-02\r\n3474,2179,LATAM,sports,online,56.33,3,0.124,none,2024-09-17\r\n3475,1982,EMEA,sports,retail,31.72,1,0.147,none,2024-09-15\r\n3476,2072,AMER,grocery,retail,83.64,7,0.085,none,2024-01-09\r\n3477,1264,APAC,grocery,retail,34.45,8,0.234,none,2024-06-03\r\n3478,2190,LATAM,fashion,online,50.39,1,0.105,none,2024-12-14\r\n3479,1309,EMEA,grocery,online,54.86,6,0.148,loyalty,2024-06-13\r\n3480,1209,AMER,home,online,44.35,3,0.114,loyalty,2024-11-21\r\n3481,2429,EMEA,home,online,71.50,1,0.113,coupon,2024-04-20\r\n3482,1319,EMEA,toys,retail,63.70,8,0.204,none,2024-03-13\r\n3483,2004,LATAM,home,retail,69.44,2,0.070,coupon,2024-01-04\r\n3484,1548,EMEA,home,retail,170.56,6,0.003,none,2024-07-20\r\n3485,1289,LATAM,electronics,retail,43.77,1,0.197,none,2024-09-28\r\n3486,1931,APAC,fashion,retail,42.26,6,0.198,none,2024-08-09\r\n3487,1762,LATAM,grocery,online,113.58,2,0.233,coupon,2024-06-27\r\n3488,1911,LATAM,fashion,retail,38.48,8,0.243,none,2024-05-03\r\n3489,1409,APAC,fashion,retail,33.18,2,0.069,none,2024-12-11\r\n3490,1408,AMER,grocery,online,70.44,3,0.184,coupon,2024-09-10\r\n3491,2002,APAC,grocery,online,44.01,4,0.004,none,2024-03-20\r\n3492,1870,EMEA,fashion,retail,23.87,7,0.242,coupon,2024-07-07\r\n3493,2091,LATAM,grocery,online,76.14,6,0.020,none,2024-10-08\r\n3494,1690,LATAM,home,retail,99.24,6,0.085,loyalty,2024-03-02\r\n3495,1982,EMEA,grocery,mobile,48.32,2,0.021,bundle,2024-02-01\r\n3496,1943,AMER,toys,online,55.72,7,0.107,coupon,2024-12-22\r\n3497,2044,APAC,electronics,online,45.38,1,0.212,coupon,2024-05-09\r\n3498,1408,AMER,grocery,retail,50.23,5,0.139,loyalty,2024-02-26\r\n3499,2228,EMEA,fashion,online,46.05,2,0.078,none,2024-09-08\r\n3500,1327,APAC,home,online,33.39,2,0.144,none,2024-05-28\r\n3501,1503,APAC,home,online,85.32,7,0.160,none,2024-05-08\r\n3502,1749,LATAM,fashion,retail,78.33,1,0.148,none,2024-10-21\r\n3503,1429,APAC,home,retail,18.78,1,0.246,none,2024-10-02\r\n3504,1083,A
MER,grocery,online,66.46,3,0.209,none,2024-01-19\r\n3505,1501,AMER,grocery,retail,73.52,1,0.047,none,2024-09-07\r\n3506,1189,AMER,fashion,online,59.35,4,0.032,bundle,2024-07-12\r\n3507,1297,AMER,sports,retail,48.48,2,0.235,bundle,2024-04-28\r\n3508,1451,EMEA,sports,mobile,59.66,6,0.121,coupon,2024-02-18\r\n3509,2351,EMEA,grocery,online,53.49,7,0.249,none,2024-02-05\r\n3510,1391,LATAM,grocery,online,91.56,1,0.048,none,2024-01-28\r\n3511,1878,EMEA,home,online,72.74,1,0.054,none,2024-01-17\r\n3512,1788,AMER,toys,partner,49.78,4,0.231,coupon,2024-03-10\r\n3513,2257,AMER,grocery,online,87.21,8,0.125,bundle,2024-11-15\r\n3514,1546,EMEA,home,mobile,26.08,2,0.110,none,2024-10-08\r\n3515,2288,AMER,grocery,online,72.72,6,0.098,none,2024-10-25\r\n3516,1461,LATAM,fashion,retail,141.64,1,0.013,none,2024-11-28\r\n3517,1581,APAC,fashion,online,84.79,6,0.111,bundle,2024-12-28\r\n3518,1885,EMEA,electronics,retail,51.90,8,0.007,loyalty,2024-02-10\r\n3519,2093,LATAM,home,online,106.53,5,0.204,coupon,2024-08-28\r\n3520,2273,APAC,grocery,partner,60.57,7,0.032,coupon,2024-10-24\r\n3521,2204,AMER,fashion,online,84.19,7,0.148,loyalty,2024-05-11\r\n3522,1422,LATAM,sports,retail,60.63,7,0.201,none,2024-05-01\r\n3523,2441,EMEA,grocery,online,54.67,2,0.094,none,2024-08-18\r\n3524,1701,LATAM,home,partner,38.95,4,0.031,none,2024-01-06\r\n3525,1520,APAC,electronics,retail,68.91,2,0.172,coupon,2024-10-02\r\n3526,2086,APAC,grocery,mobile,24.51,7,0.176,coupon,2024-05-24\r\n3527,1433,EMEA,grocery,retail,35.85,1,0.206,none,2024-10-20\r\n3528,2350,APAC,sports,online,58.92,2,0.183,none,2024-10-06\r\n3529,2157,AMER,grocery,online,31.63,5,0.069,coupon,2024-10-04\r\n3530,2092,AMER,toys,retail,67.51,8,0.080,none,2024-04-26\r\n3531,2137,LATAM,grocery,online,60.75,5,0.062,none,2024-07-04\r\n3532,1718,EMEA,electronics,mobile,30.71,2,0.026,none,2024-04-13\r\n3533,1845,AMER,home,online,69.46,8,0.007,loyalty,2024-02-04\r\n3534,1332,APAC,home,online,22.45,4,0.045,none,2024-05-22\r\n3535,1784,EMEA,home,partner,56.7
7,7,0.120,bundle,2024-12-17\r\n3536,2198,EMEA,home,online,20.51,6,0.011,none,2024-03-15\r\n3537,1167,EMEA,home,online,40.33,4,0.215,none,2024-11-04\r\n3538,2272,EMEA,sports,online,54.71,2,0.123,coupon,2024-03-11\r\n3539,2132,LATAM,fashion,retail,80.31,3,0.030,bundle,2024-03-02\r\n3540,1894,APAC,grocery,online,80.51,5,0.122,none,2024-10-15\r\n3541,1258,EMEA,home,retail,32.20,7,0.138,bundle,2024-06-01\r\n3542,1816,EMEA,home,online,118.44,4,0.064,none,2024-03-04\r\n3543,1056,LATAM,fashion,online,40.21,5,0.115,none,2024-10-27\r\n3544,2197,LATAM,fashion,retail,80.01,8,0.020,coupon,2024-11-07\r\n3545,2115,APAC,sports,retail,76.60,3,0.018,coupon,2024-05-05\r\n3546,1364,EMEA,sports,retail,95.27,5,0.211,loyalty,2024-05-03\r\n3547,1411,LATAM,electronics,retail,107.92,4,0.063,none,2024-05-04\r\n3548,2096,LATAM,toys,mobile,52.91,6,0.091,none,2024-01-23\r\n3549,1443,EMEA,sports,online,53.99,6,0.196,none,2024-04-10\r\n3550,2048,LATAM,sports,online,56.55,8,0.179,none,2024-06-12\r\n3551,1935,EMEA,home,online,41.95,4,0.202,bundle,2024-08-21\r\n3552,1089,LATAM,grocery,partner,32.31,8,0.229,none,2024-02-02\r\n3553,1168,APAC,fashion,mobile,43.61,1,0.071,none,2024-10-06\r\n3554,1539,LATAM,sports,online,66.42,3,0.143,bundle,2024-01-03\r\n3555,2051,APAC,grocery,partner,37.07,2,0.115,none,2024-06-04\r\n3556,1674,LATAM,grocery,retail,39.25,8,0.188,loyalty,2024-09-08\r\n3557,1066,AMER,home,mobile,53.14,3,0.001,bundle,2024-10-04\r\n3558,1688,LATAM,electronics,partner,52.74,4,0.090,none,2024-12-18\r\n3559,1138,AMER,sports,mobile,54.83,1,0.208,loyalty,2024-10-25\r\n3560,1922,EMEA,electronics,online,25.72,6,0.142,none,2024-12-10\r\n3561,1922,EMEA,home,retail,69.08,4,0.070,none,2024-04-16\r\n3562,1837,LATAM,grocery,online,50.39,5,0.149,none,2024-01-04\r\n3563,2096,LATAM,electronics,partner,66.87,3,0.046,bundle,2024-10-04\r\n3564,1355,EMEA,toys,online,46.62,5,0.178,loyalty,2024-09-03\r\n3565,2289,APAC,grocery,retail,59.94,3,0.220,none,2024-09-17\r\n3566,1387,AMER,fashion,retail,44.44,6,0.181,coupo
n,2024-04-18\r\n3567,2310,EMEA,fashion,retail,46.25,6,0.061,coupon,2024-05-22\r\n3568,1011,APAC,fashion,partner,80.43,1,0.145,none,2024-02-09\r\n3569,1218,AMER,sports,online,59.09,5,0.114,none,2024-11-22\r\n3570,1995,LATAM,fashion,partner,89.27,1,0.009,loyalty,2024-12-28\r\n3571,1168,APAC,toys,online,95.84,7,0.239,none,2024-08-09\r\n3572,1823,EMEA,electronics,online,89.44,1,0.054,bundle,2024-04-08\r\n3573,1991,APAC,fashion,retail,43.87,1,0.110,none,2024-05-22\r\n3574,2361,EMEA,fashion,online,57.80,4,0.172,none,2024-11-28\r\n3575,1495,LATAM,electronics,online,44.66,8,0.010,none,2024-05-25\r\n3576,1872,LATAM,sports,mobile,66.74,3,0.062,coupon,2024-05-17\r\n3577,1120,LATAM,fashion,retail,31.25,1,0.182,none,2024-01-04\r\n3578,1096,EMEA,electronics,retail,89.73,4,0.195,bundle,2024-07-21\r\n3579,1142,EMEA,grocery,retail,31.03,3,0.114,none,2024-06-08\r\n3580,2230,LATAM,toys,retail,84.98,4,0.141,none,2024-02-13\r\n3581,1339,EMEA,grocery,online,70.02,3,0.124,none,2024-03-10\r\n3582,1802,AMER,electronics,mobile,34.67,5,0.097,loyalty,2024-06-05\r\n3583,1779,APAC,home,partner,50.25,6,0.249,loyalty,2024-11-05\r\n3584,1034,EMEA,electronics,online,92.55,3,0.125,none,2024-02-13\r\n3585,2249,LATAM,electronics,online,11.25,6,0.124,bundle,2024-10-28\r\n3586,1836,LATAM,toys,mobile,92.86,2,0.150,coupon,2024-11-21\r\n3587,1051,EMEA,fashion,mobile,23.92,2,0.021,none,2024-05-19\r\n3588,1998,APAC,home,mobile,47.01,2,0.150,loyalty,2024-06-21\r\n3589,1321,EMEA,grocery,partner,70.69,2,0.031,none,2024-12-08\r\n3590,1509,AMER,fashion,online,68.80,4,0.219,bundle,2024-06-15\r\n3591,1071,AMER,home,online,77.27,1,0.019,coupon,2024-11-15\r\n3592,1458,APAC,fashion,retail,37.22,3,0.161,bundle,2024-08-17\r\n3593,2044,APAC,toys,online,91.63,3,0.234,coupon,2024-08-13\r\n3594,1877,LATAM,home,online,157.47,3,0.198,loyalty,2024-07-22\r\n3595,2218,EMEA,home,retail,16.10,6,0.072,none,2024-07-05\r\n3596,1552,EMEA,electronics,retail,84.84,3,0.241,none,2024-06-13\r\n3597,1642,EMEA,home,online,39.32,5,0.197,none,2
024-03-27\r\n3598,1861,AMER,grocery,online,82.54,4,0.087,coupon,2024-04-09\r\n3599,1132,EMEA,home,mobile,130.36,2,0.166,loyalty,2024-11-24\r\n3600,1532,APAC,toys,online,58.10,2,0.174,none,2024-01-10\r\n3601,1541,APAC,electronics,retail,34.33,8,0.127,none,2024-04-17\r\n3602,1255,AMER,grocery,online,103.30,6,0.222,none,2024-11-26\r\n3603,1912,APAC,sports,retail,60.91,6,0.003,none,2024-04-15\r\n3604,2133,AMER,toys,online,138.85,4,0.018,none,2024-02-07\r\n3605,1202,APAC,sports,retail,86.95,2,0.188,loyalty,2024-04-04\r\n3606,2488,EMEA,grocery,online,43.75,2,0.233,none,2024-12-06\r\n3607,2228,EMEA,home,retail,98.69,5,0.018,coupon,2024-01-03\r\n3608,1223,LATAM,home,retail,57.83,8,0.085,bundle,2024-10-12\r\n3609,1425,EMEA,home,online,64.27,5,0.185,none,2024-12-02\r\n3610,1101,AMER,grocery,retail,69.98,4,0.033,none,2024-05-17\r\n3611,1269,LATAM,home,mobile,62.68,6,0.081,bundle,2024-09-21\r\n3612,1928,AMER,grocery,mobile,63.67,8,0.071,none,2024-05-22\r\n3613,2457,EMEA,fashion,online,93.22,3,0.063,none,2024-01-18\r\n3614,1731,AMER,electronics,retail,49.05,3,0.119,none,2024-09-26\r\n3615,1940,APAC,toys,mobile,136.51,4,0.136,none,2024-03-07\r\n3616,1030,EMEA,toys,online,105.36,1,0.051,none,2024-11-17\r\n3617,1536,LATAM,electronics,online,54.77,7,0.164,none,2024-05-28\r\n3618,2286,AMER,grocery,online,22.80,3,0.079,none,2024-04-20\r\n3619,1296,LATAM,home,retail,75.89,2,0.229,coupon,2024-10-10\r\n3620,1578,LATAM,toys,online,113.75,1,0.101,none,2024-10-18\r\n3621,1760,LATAM,electronics,mobile,47.88,3,0.023,none,2024-02-15\r\n3622,1154,LATAM,fashion,retail,69.03,3,0.031,none,2024-02-26\r\n3623,1796,LATAM,sports,partner,72.99,3,0.248,none,2024-02-25\r\n3624,2111,EMEA,sports,retail,205.80,3,0.009,none,2024-08-11\r\n3625,1058,LATAM,home,retail,50.34,8,0.057,none,2024-10-13\r\n3626,2185,EMEA,electronics,online,36.03,4,0.015,bundle,2024-03-15\r\n3627,1142,EMEA,toys,online,128.07,2,0.170,coupon,2024-03-28\r\n3628,1157,LATAM,grocery,partner,47.27,2,0.020,loyalty,2024-10-14\r\n3629,1592,LATA
M,electronics,retail,114.70,7,0.131,none,2024-03-14\r\n3630,1414,APAC,grocery,online,99.87,8,0.192,none,2024-11-07\r\n3631,2471,APAC,grocery,online,68.51,2,0.065,bundle,2024-03-08\r\n3632,1492,APAC,home,mobile,69.42,6,0.239,coupon,2024-12-09\r\n3633,1198,AMER,sports,retail,53.81,8,0.061,none,2024-03-20\r\n3634,1485,APAC,grocery,retail,53.96,5,0.112,none,2024-12-08\r\n3635,1035,EMEA,electronics,online,40.42,3,0.222,bundle,2024-06-06\r\n3636,1987,AMER,grocery,retail,69.12,2,0.193,bundle,2024-06-09\r\n3637,1075,AMER,electronics,online,182.63,2,0.013,bundle,2024-11-15\r\n3638,2036,APAC,sports,retail,46.59,5,0.187,none,2024-04-12\r\n3639,1221,LATAM,sports,retail,62.85,5,0.028,none,2024-11-14\r\n3640,2050,APAC,home,online,70.60,4,0.115,none,2024-02-05\r\n3641,1679,APAC,electronics,retail,42.18,3,0.064,none,2024-07-08\r\n3642,1766,AMER,fashion,online,34.34,1,0.045,none,2024-03-20\r\n3643,1942,APAC,electronics,online,60.29,6,0.232,none,2024-02-21\r\n3644,1786,APAC,grocery,mobile,123.67,6,0.218,none,2024-12-21\r\n3645,1953,EMEA,home,mobile,54.08,2,0.053,loyalty,2024-08-27\r\n3646,1046,EMEA,toys,online,33.23,2,0.120,none,2024-01-09\r\n3647,1610,LATAM,fashion,retail,18.86,5,0.149,none,2024-05-22\r\n3648,2118,AMER,fashion,online,85.38,1,0.065,bundle,2024-12-01\r\n3649,1687,APAC,grocery,retail,41.43,8,0.041,loyalty,2024-08-18\r\n3650,2087,LATAM,fashion,online,51.84,3,0.189,none,2024-02-08\r\n3651,1805,EMEA,toys,retail,45.81,3,0.017,none,2024-11-06\r\n3652,2498,LATAM,sports,retail,100.72,6,0.127,coupon,2024-03-15\r\n3653,1495,LATAM,toys,retail,75.43,3,0.003,none,2024-01-27\r\n3654,1453,APAC,toys,partner,30.24,6,0.183,none,2024-12-28\r\n3655,2233,EMEA,fashion,online,36.28,7,0.033,none,2024-05-15\r\n3656,1094,LATAM,home,online,61.70,4,0.193,none,2024-07-14\r\n3657,1723,LATAM,grocery,online,30.79,1,0.170,loyalty,2024-08-01\r\n3658,1657,LATAM,grocery,online,77.85,5,0.141,none,2024-06-01\r\n3659,1970,LATAM,home,retail,36.65,1,0.116,loyalty,2024-02-15\r\n3660,2484,APAC,grocery,online,1
02.27,4,0.011,none,2024-04-16\r\n3661,1691,LATAM,fashion,mobile,63.37,7,0.158,coupon,2024-06-05\r\n3662,1694,APAC,grocery,online,30.87,3,0.161,bundle,2024-08-15\r\n3663,1654,EMEA,fashion,retail,134.80,1,0.029,none,2024-07-03\r\n3664,2120,AMER,electronics,online,127.47,6,0.000,none,2024-04-05\r\n3665,2498,LATAM,sports,retail,59.29,8,0.162,coupon,2024-02-23\r\n3666,2056,LATAM,grocery,retail,49.31,8,0.106,bundle,2024-11-19\r\n3667,1659,APAC,electronics,retail,34.21,2,0.224,none,2024-03-09\r\n3668,1581,APAC,sports,retail,33.43,5,0.229,bundle,2024-03-02\r\n3669,1545,AMER,electronics,mobile,61.38,8,0.122,bundle,2024-01-15\r\n3670,2204,AMER,home,online,41.09,2,0.208,none,2024-05-26\r\n3671,1092,AMER,electronics,online,113.70,4,0.102,none,2024-02-11\r\n3672,1975,EMEA,home,retail,45.53,8,0.176,none,2024-10-12\r\n3673,1939,LATAM,grocery,online,50.75,1,0.083,none,2024-05-16\r\n3674,2246,AMER,sports,online,91.92,2,0.199,bundle,2024-09-06\r\n3675,1827,EMEA,sports,online,101.62,1,0.029,loyalty,2024-02-28\r\n3676,1707,APAC,home,online,32.18,2,0.041,loyalty,2024-11-28\r\n3677,1015,AMER,fashion,online,36.16,6,0.171,bundle,2024-04-03\r\n3678,1783,AMER,electronics,retail,31.97,3,0.225,loyalty,2024-10-19\r\n3679,1073,AMER,grocery,online,39.88,4,0.069,none,2024-03-14\r\n3680,2047,AMER,toys,online,72.38,8,0.024,coupon,2024-08-03\r\n3681,1739,AMER,toys,online,61.25,4,0.031,loyalty,2024-09-04\r\n3682,1633,EMEA,grocery,mobile,61.55,5,0.168,none,2024-03-09\r\n3683,2498,LATAM,grocery,mobile,42.09,6,0.083,none,2024-06-26\r\n3684,2483,LATAM,fashion,online,66.14,5,0.211,coupon,2024-09-23\r\n3685,2196,AMER,grocery,online,44.57,4,0.181,none,2024-07-02\r\n3686,1260,LATAM,sports,online,41.08,8,0.192,coupon,2024-05-01\r\n3687,1167,EMEA,sports,retail,64.17,2,0.193,coupon,2024-10-17\r\n3688,1301,AMER,home,online,45.61,2,0.249,none,2024-01-16\r\n3689,1847,LATAM,toys,online,29.76,5,0.144,none,2024-11-15\r\n3690,1141,AMER,home,online,47.45,4,0.202,bundle,2024-05-20\r\n3691,2216,AMER,fashion,mobile,30.47,7
,0.024,coupon,2024-08-17\r\n3692,2134,AMER,electronics,partner,41.94,6,0.217,none,2024-05-08\r\n3693,1572,LATAM,toys,partner,32.72,7,0.170,coupon,2024-08-16\r\n3694,1096,EMEA,fashion,retail,44.58,6,0.228,none,2024-01-11\r\n3695,2071,APAC,fashion,mobile,118.61,6,0.134,none,2024-03-23\r\n3696,1905,APAC,toys,online,39.26,6,0.189,bundle,2024-01-24\r\n3697,2342,AMER,sports,online,62.40,6,0.035,loyalty,2024-12-03\r\n3698,2192,APAC,fashion,retail,29.60,8,0.144,none,2024-04-11\r\n3699,1222,AMER,electronics,retail,43.46,8,0.064,bundle,2024-09-18\r\n3700,1944,AMER,home,online,76.72,5,0.092,bundle,2024-05-01\r\n3701,1281,AMER,electronics,online,47.60,8,0.231,coupon,2024-07-25\r\n3702,1112,APAC,home,online,115.16,4,0.209,none,2024-10-15\r\n3703,1164,EMEA,sports,online,47.65,3,0.216,coupon,2024-09-05\r\n3704,2156,AMER,electronics,online,14.87,6,0.216,bundle,2024-06-26\r\n3705,1085,EMEA,grocery,online,82.28,8,0.249,none,2024-10-07\r\n3706,1910,LATAM,electronics,retail,149.53,6,0.193,coupon,2024-08-01\r\n3707,1679,APAC,sports,online,18.19,4,0.137,loyalty,2024-02-10\r\n3708,2039,EMEA,toys,retail,78.18,5,0.211,none,2024-10-26\r\n3709,1627,LATAM,sports,online,81.06,6,0.089,none,2024-08-19\r\n3710,1665,AMER,grocery,online,24.11,5,0.061,coupon,2024-03-24\r\n3711,1926,AMER,fashion,mobile,65.78,8,0.193,none,2024-06-17\r\n3712,1715,AMER,home,online,114.54,3,0.014,none,2024-03-13\r\n3713,1566,EMEA,home,mobile,53.28,6,0.134,none,2024-05-05\r\n3714,1139,EMEA,home,online,27.07,3,0.199,loyalty,2024-06-06\r\n3715,1871,APAC,home,online,126.14,8,0.168,none,2024-04-07\r\n3716,1413,LATAM,grocery,retail,34.39,8,0.157,none,2024-04-11\r\n3717,1544,LATAM,grocery,partner,30.63,6,0.106,none,2024-05-07\r\n3718,1780,APAC,grocery,online,55.40,8,0.100,bundle,2024-04-12\r\n3719,2049,LATAM,home,partner,104.36,6,0.083,none,2024-12-02\r\n3720,2039,EMEA,toys,online,42.86,4,0.132,none,2024-09-28\r\n3721,1060,LATAM,electronics,retail,143.62,8,0.160,coupon,2024-05-27\r\n3722,1533,APAC,home,retail,65.66,3,0.231,none,
2024-06-25\r\n3723,1037,EMEA,grocery,retail,88.29,4,0.134,loyalty,2024-03-13\r\n3724,1941,AMER,home,retail,102.98,2,0.182,coupon,2024-08-28\r\n3725,2244,LATAM,sports,online,46.64,4,0.111,none,2024-10-09\r\n3726,1893,APAC,home,online,55.91,4,0.086,coupon,2024-05-17\r\n3727,1992,LATAM,grocery,mobile,86.20,4,0.005,none,2024-04-20\r\n3728,2409,APAC,fashion,online,46.14,8,0.156,loyalty,2024-12-13\r\n3729,2484,APAC,electronics,retail,47.35,6,0.121,bundle,2024-07-21\r\n3730,2173,LATAM,fashion,retail,37.10,1,0.061,loyalty,2024-12-20\r\n3731,1302,LATAM,electronics,mobile,53.64,3,0.118,coupon,2024-06-13\r\n3732,2120,AMER,grocery,online,78.60,7,0.017,none,2024-12-03\r\n3733,1621,APAC,grocery,retail,84.24,3,0.116,none,2024-02-25\r\n3734,2207,APAC,electronics,online,74.32,7,0.240,none,2024-10-24\r\n3735,1713,EMEA,home,retail,59.25,7,0.082,bundle,2024-05-01\r\n3736,1383,AMER,grocery,retail,72.89,2,0.166,coupon,2024-09-27\r\n3737,1360,APAC,electronics,retail,37.97,3,0.100,bundle,2024-02-04\r\n3738,2338,AMER,home,online,123.92,5,0.183,none,2024-06-01\r\n3739,2003,LATAM,grocery,online,51.68,1,0.222,none,2024-01-15\r\n3740,1063,AMER,sports,online,52.41,6,0.249,coupon,2024-04-12\r\n3741,1595,AMER,sports,retail,23.32,8,0.089,none,2024-04-23\r\n3742,1741,AMER,electronics,retail,40.30,7,0.220,none,2024-11-28\r\n3743,1243,AMER,sports,retail,50.51,5,0.004,none,2024-04-16\r\n3744,2006,APAC,electronics,online,70.26,5,0.001,coupon,2024-10-15\r\n3745,1726,EMEA,fashion,retail,50.58,5,0.207,none,2024-04-15\r\n3746,1130,LATAM,grocery,mobile,47.77,8,0.006,none,2024-05-04\r\n3747,1282,LATAM,fashion,online,36.95,4,0.217,none,2024-09-20\r\n3748,1263,AMER,electronics,mobile,54.69,4,0.028,none,2024-11-23\r\n3749,1239,APAC,home,online,58.49,1,0.244,none,2024-06-15\r\n3750,1326,AMER,grocery,online,51.40,7,0.094,none,2024-02-21\r\n3751,1638,EMEA,grocery,online,15.37,4,0.093,none,2024-11-26\r\n3752,1856,EMEA,home,online,120.34,3,0.184,coupon,2024-11-23\r\n3753,1109,APAC,fashion,retail,85.64,5,0.170,none,20
24-05-11\r\n3754,1329,APAC,electronics,online,101.97,7,0.236,none,2024-04-16\r\n3755,2162,EMEA,grocery,online,62.52,2,0.216,bundle,2024-09-24\r\n3756,1286,EMEA,toys,mobile,35.47,3,0.179,none,2024-11-17\r\n3757,1412,AMER,toys,online,48.80,5,0.147,none,2024-10-16\r\n3758,1643,EMEA,fashion,mobile,65.85,3,0.045,loyalty,2024-10-07\r\n3759,2117,EMEA,grocery,online,88.67,4,0.228,none,2024-08-24\r\n3760,1095,APAC,home,online,113.63,5,0.244,none,2024-03-12\r\n3761,2073,AMER,toys,online,42.12,4,0.116,none,2024-09-18\r\n3762,1900,APAC,grocery,retail,60.51,3,0.128,bundle,2024-07-06\r\n3763,2166,AMER,electronics,mobile,67.19,4,0.078,none,2024-04-21\r\n3764,1950,LATAM,sports,retail,64.47,4,0.143,bundle,2024-06-14\r\n3765,2338,AMER,grocery,retail,33.18,5,0.123,none,2024-10-22\r\n3766,1977,APAC,home,online,50.49,2,0.176,none,2024-01-06\r\n3767,1166,AMER,electronics,online,83.05,1,0.055,none,2024-04-20\r\n3768,2275,LATAM,fashion,online,36.03,5,0.060,bundle,2024-01-16\r\n3769,2433,APAC,home,retail,29.01,6,0.208,coupon,2024-04-12\r\n3770,2248,LATAM,electronics,retail,35.33,2,0.010,none,2024-12-15\r\n3771,1145,AMER,home,retail,51.45,3,0.204,none,2024-02-16\r\n3772,2317,LATAM,fashion,retail,95.93,6,0.135,none,2024-01-09\r\n3773,1216,APAC,home,online,131.40,6,0.149,none,2024-08-11\r\n3774,2301,EMEA,grocery,retail,48.14,4,0.072,bundle,2024-03-11\r\n3775,1201,LATAM,grocery,partner,121.63,2,0.171,bundle,2024-02-20\r\n3776,2234,LATAM,toys,retail,88.18,1,0.158,coupon,2024-06-03\r\n3777,2491,APAC,electronics,mobile,58.04,6,0.157,none,2024-01-05\r\n3778,2148,EMEA,toys,retail,87.93,1,0.114,coupon,2024-09-12\r\n3779,1936,EMEA,home,partner,28.59,2,0.137,coupon,2024-07-10\r\n3780,1448,EMEA,grocery,retail,73.44,3,0.020,none,2024-05-06\r\n3781,1325,APAC,sports,online,30.04,1,0.097,none,2024-08-18\r\n3782,1979,APAC,electronics,online,35.77,5,0.014,loyalty,2024-10-08\r\n3783,1265,APAC,electronics,mobile,38.68,7,0.111,none,2024-03-17\r\n3784,2250,AMER,electronics,retail,55.36,8,0.206,none,2024-10-03\r\n
3785,1867,AMER,grocery,retail,49.25,2,0.101,none,2024-06-03\r\n3786,1394,LATAM,grocery,online,18.38,2,0.017,coupon,2024-04-07\r\n3787,1764,LATAM,grocery,online,22.17,2,0.219,none,2024-05-23\r\n3788,1129,LATAM,fashion,online,46.06,4,0.015,coupon,2024-06-20\r\n3789,1434,EMEA,sports,online,77.21,5,0.115,coupon,2024-12-25\r\n3790,1400,EMEA,grocery,retail,46.53,4,0.077,none,2024-05-25\r\n3791,1751,AMER,electronics,online,50.48,7,0.038,none,2024-01-20\r\n3792,2075,LATAM,home,partner,46.45,7,0.225,none,2024-11-16\r\n3793,1978,AMER,electronics,partner,92.00,6,0.141,loyalty,2024-08-16\r\n3794,2105,APAC,toys,retail,116.55,5,0.102,none,2024-07-28\r\n3795,1391,LATAM,grocery,retail,49.57,6,0.201,none,2024-11-12\r\n3796,1282,LATAM,toys,online,110.00,2,0.214,bundle,2024-07-17\r\n3797,2234,LATAM,electronics,retail,91.82,8,0.147,none,2024-06-24\r\n3798,1766,AMER,toys,online,94.62,4,0.230,coupon,2024-11-14\r\n3799,1987,AMER,electronics,online,87.59,1,0.133,coupon,2024-06-03\r\n3800,2068,LATAM,electronics,online,40.97,6,0.172,none,2024-03-05\r\n3801,1255,AMER,home,retail,48.31,5,0.168,none,2024-10-12\r\n3802,1195,AMER,fashion,retail,56.41,4,0.190,none,2024-01-27\r\n3803,1536,LATAM,electronics,retail,92.21,2,0.198,bundle,2024-06-18\r\n3804,2406,EMEA,grocery,online,59.63,7,0.134,none,2024-01-20\r\n3805,1340,LATAM,fashion,online,50.99,3,0.079,loyalty,2024-08-14\r\n3806,1913,LATAM,sports,online,52.16,7,0.104,bundle,2024-06-19\r\n3807,1421,APAC,toys,online,56.89,2,0.138,coupon,2024-11-11\r\n3808,2340,EMEA,grocery,mobile,97.50,1,0.117,none,2024-02-07\r\n3809,1415,AMER,electronics,retail,66.87,5,0.016,bundle,2024-05-03\r\n3810,2385,APAC,fashion,retail,66.17,2,0.110,bundle,2024-07-08\r\n3811,2460,AMER,electronics,online,64.24,2,0.062,coupon,2024-09-05\r\n3812,1702,AMER,home,online,71.52,3,0.008,none,2024-01-22\r\n3813,1705,AMER,grocery,retail,64.06,6,0.069,bundle,2024-03-21\r\n3814,1887,LATAM,toys,online,51.96,5,0.162,none,2024-01-05\r\n3815,1108,EMEA,home,online,102.21,4,0.242,none,2024-07-1
0\r\n3816,1020,APAC,electronics,retail,28.32,2,0.230,coupon,2024-12-20\r\n3817,1317,EMEA,grocery,online,90.89,7,0.102,none,2024-05-04\r\n3818,2008,APAC,fashion,retail,62.72,7,0.129,none,2024-02-09\r\n3819,1457,EMEA,grocery,retail,25.60,8,0.132,loyalty,2024-05-11\r\n3820,2192,APAC,fashion,retail,58.05,3,0.044,none,2024-07-07\r\n3821,1441,LATAM,sports,retail,94.76,7,0.080,none,2024-04-28\r\n3822,2406,EMEA,electronics,retail,19.14,1,0.150,coupon,2024-06-25\r\n3823,1572,LATAM,grocery,mobile,103.22,4,0.017,coupon,2024-07-26\r\n3824,1074,LATAM,grocery,online,78.09,1,0.092,none,2024-04-10\r\n3825,2043,EMEA,electronics,online,21.78,4,0.233,none,2024-06-06\r\n3826,1604,EMEA,fashion,retail,62.73,2,0.195,none,2024-01-20\r\n3827,1147,EMEA,electronics,retail,38.75,5,0.065,coupon,2024-06-06\r\n3828,1469,EMEA,fashion,retail,54.78,7,0.010,none,2024-05-21\r\n3829,2153,APAC,grocery,online,64.86,7,0.229,none,2024-05-11\r\n3830,1542,APAC,fashion,partner,92.52,7,0.006,none,2024-07-14\r\n3831,1715,AMER,fashion,online,41.67,3,0.034,bundle,2024-06-14\r\n3832,1767,AMER,grocery,mobile,46.12,2,0.113,none,2024-05-23\r\n3833,2448,APAC,toys,online,61.06,3,0.118,none,2024-06-16\r\n3834,2385,APAC,fashion,online,70.13,6,0.249,none,2024-06-25\r\n3835,1857,LATAM,electronics,online,59.22,1,0.007,coupon,2024-03-10\r\n3836,2240,LATAM,electronics,online,66.16,8,0.103,loyalty,2024-11-14\r\n3837,2066,APAC,home,online,54.02,7,0.213,coupon,2024-06-22\r\n3838,1687,APAC,toys,partner,48.92,3,0.158,coupon,2024-08-04\r\n3839,2454,LATAM,grocery,retail,167.68,2,0.021,none,2024-08-09\r\n3840,2048,LATAM,electronics,online,90.01,4,0.108,none,2024-09-10\r\n3841,1557,LATAM,home,online,79.17,4,0.190,none,2024-03-27\r\n3842,1600,AMER,grocery,mobile,90.90,5,0.059,none,2024-01-24\r\n3843,1662,LATAM,electronics,mobile,47.12,1,0.154,coupon,2024-02-27\r\n3844,1039,AMER,electronics,online,71.15,2,0.249,none,2024-11-07\r\n3845,2353,AMER,toys,retail,27.67,7,0.057,bundle,2024-12-26\r\n3846,1755,APAC,grocery,online,29.95,1,0.056,no
ne,2024-01-10\r\n3847,1488,AMER,home,online,46.74,5,0.013,bundle,2024-03-17\r\n3848,1736,AMER,home,online,79.22,8,0.031,none,2024-09-08\r\n3849,1280,LATAM,toys,online,36.84,4,0.032,none,2024-10-19\r\n3850,1601,APAC,fashion,retail,46.78,5,0.204,none,2024-10-09\r\n3851,2079,EMEA,sports,mobile,16.00,5,0.179,bundle,2024-12-04\r\n3852,1460,LATAM,home,online,36.38,5,0.045,none,2024-04-14\r\n3853,1571,EMEA,grocery,retail,25.59,8,0.098,none,2024-10-20\r\n3854,1397,LATAM,electronics,mobile,29.76,3,0.065,none,2024-02-26\r\n3855,1528,EMEA,sports,retail,53.84,6,0.207,coupon,2024-05-16\r\n3856,1223,LATAM,electronics,retail,18.42,5,0.040,none,2024-03-16\r\n3857,2289,APAC,grocery,mobile,127.83,5,0.228,none,2024-08-07\r\n3858,1497,EMEA,home,online,46.97,4,0.072,coupon,2024-08-20\r\n3859,1136,EMEA,fashion,online,31.30,6,0.165,none,2024-08-07\r\n3860,1991,APAC,sports,online,56.53,7,0.142,coupon,2024-07-12\r\n3861,1930,AMER,grocery,online,286.22,2,0.178,coupon,2024-02-15\r\n3862,1828,EMEA,electronics,online,108.14,5,0.230,coupon,2024-11-05\r\n3863,2475,AMER,home,mobile,71.57,4,0.244,coupon,2024-05-06\r\n3864,1072,LATAM,grocery,retail,50.70,8,0.003,bundle,2024-07-24\r\n3865,1051,EMEA,electronics,online,60.55,5,0.247,none,2024-04-20\r\n3866,2372,AMER,toys,online,54.78,6,0.011,none,2024-07-20\r\n3867,1647,LATAM,electronics,mobile,54.10,4,0.167,coupon,2024-02-13\r\n3868,2078,APAC,fashion,online,52.54,1,0.183,none,2024-07-01\r\n3869,1066,AMER,fashion,online,116.35,8,0.029,coupon,2024-07-27\r\n3870,1451,EMEA,grocery,retail,70.07,1,0.089,loyalty,2024-05-17\r\n3871,1416,EMEA,toys,retail,31.32,6,0.029,bundle,2024-10-10\r\n3872,1864,EMEA,fashion,online,31.04,3,0.250,loyalty,2024-12-21\r\n3873,1370,APAC,electronics,online,32.08,4,0.223,bundle,2024-10-25\r\n3874,2411,EMEA,grocery,retail,159.33,2,0.143,bundle,2024-03-20\r\n3875,1628,EMEA,grocery,mobile,42.05,3,0.114,none,2024-08-26\r\n3876,1488,AMER,home,retail,75.32,5,0.192,none,2024-08-13\r\n3877,2409,APAC,sports,mobile,101.97,4,0.134,none,2024-
10-10\r\n3878,1464,APAC,grocery,online,62.75,8,0.173,none,2024-09-21\r\n3879,2040,LATAM,home,retail,29.91,5,0.099,none,2024-11-19\r\n3880,1812,EMEA,electronics,retail,42.27,1,0.072,none,2024-10-10\r\n3881,1889,APAC,fashion,online,49.97,5,0.219,coupon,2024-07-16\r\n3882,1526,EMEA,electronics,retail,73.75,1,0.073,coupon,2024-01-23\r\n3883,1816,EMEA,electronics,online,35.13,1,0.027,bundle,2024-05-03\r\n3884,2244,LATAM,electronics,retail,53.74,6,0.224,none,2024-04-05\r\n3885,1015,AMER,sports,retail,24.27,7,0.142,none,2024-02-06\r\n3886,1636,APAC,grocery,retail,85.84,7,0.219,coupon,2024-12-04\r\n3887,1165,AMER,grocery,online,63.55,7,0.070,none,2024-03-02\r\n3888,1488,AMER,electronics,online,55.51,4,0.154,none,2024-04-24\r\n3889,1001,LATAM,toys,online,50.71,8,0.218,bundle,2024-04-24\r\n3890,2243,APAC,sports,online,117.30,8,0.012,coupon,2024-02-22\r\n3891,2156,AMER,home,mobile,74.79,6,0.171,none,2024-11-17\r\n3892,2101,APAC,toys,retail,48.31,3,0.071,coupon,2024-08-12\r\n3893,1744,EMEA,home,online,44.14,1,0.010,none,2024-02-22\r\n3894,1196,APAC,sports,partner,47.37,1,0.218,coupon,2024-09-01\r\n3895,2098,AMER,grocery,online,49.77,5,0.184,none,2024-05-08\r\n3896,1548,EMEA,toys,online,38.01,1,0.203,none,2024-10-08\r\n3897,2062,EMEA,grocery,mobile,71.91,3,0.213,none,2024-11-06\r\n3898,1240,EMEA,sports,retail,33.13,7,0.164,none,2024-11-15\r\n3899,2461,LATAM,fashion,retail,62.94,7,0.054,bundle,2024-06-09\r\n3900,2164,AMER,toys,retail,88.82,5,0.229,none,2024-11-22\r\n3901,1411,LATAM,grocery,mobile,42.57,8,0.003,none,2024-01-01\r\n3902,1289,LATAM,electronics,online,34.49,8,0.026,loyalty,2024-06-04\r\n3903,2342,AMER,fashion,online,63.02,8,0.122,none,2024-10-05\r\n3904,1311,APAC,sports,retail,80.77,8,0.101,none,2024-03-09\r\n3905,2435,AMER,grocery,retail,61.68,1,0.061,none,2024-05-10\r\n3906,1634,AMER,toys,online,84.85,4,0.178,coupon,2024-07-26\r\n3907,1461,LATAM,grocery,online,113.75,8,0.250,none,2024-01-11\r\n3908,1285,EMEA,home,retail,67.08,8,0.240,none,2024-09-26\r\n3909,2427,LAT
AM,grocery,online,45.61,3,0.237,bundle,2024-10-25\r\n3910,2490,AMER,fashion,mobile,102.68,2,0.164,coupon,2024-03-15\r\n3911,1330,EMEA,grocery,retail,90.36,4,0.149,none,2024-08-08\r\n3912,1087,AMER,toys,mobile,48.77,8,0.127,coupon,2024-06-02\r\n3913,1164,EMEA,home,online,89.78,2,0.003,bundle,2024-02-03\r\n3914,1767,AMER,home,retail,32.25,4,0.043,bundle,2024-04-12\r\n3915,1802,AMER,grocery,retail,52.29,2,0.026,none,2024-07-04\r\n3916,1345,AMER,fashion,retail,65.66,5,0.169,none,2024-08-07\r\n3917,1151,APAC,sports,online,30.73,6,0.151,none,2024-09-23\r\n3918,1236,AMER,home,retail,104.18,5,0.187,none,2024-05-06\r\n3919,1537,LATAM,toys,retail,35.86,2,0.179,bundle,2024-03-26\r\n3920,1296,LATAM,home,online,67.44,4,0.119,loyalty,2024-03-14\r\n3921,2209,AMER,sports,retail,43.18,2,0.038,loyalty,2024-04-09\r\n3922,1403,APAC,home,online,96.04,7,0.097,loyalty,2024-06-15\r\n3923,1413,LATAM,grocery,online,107.64,2,0.093,coupon,2024-06-22\r\n3924,1568,AMER,home,partner,49.69,3,0.080,bundle,2024-02-02\r\n3925,1165,AMER,sports,partner,53.89,4,0.087,coupon,2024-02-20\r\n3926,1993,APAC,sports,online,64.28,8,0.029,loyalty,2024-03-02\r\n3927,2290,LATAM,home,retail,36.96,8,0.011,none,2024-01-09\r\n3928,1363,EMEA,grocery,online,46.87,2,0.003,coupon,2024-11-04\r\n3929,2173,LATAM,sports,online,78.52,8,0.029,loyalty,2024-08-27\r\n3930,1224,APAC,fashion,online,64.02,7,0.101,none,2024-05-20\r\n3931,2122,AMER,sports,online,223.43,5,0.182,none,2024-07-19\r\n3932,2172,EMEA,sports,online,94.41,4,0.230,loyalty,2024-04-18\r\n3933,1910,LATAM,home,retail,38.19,8,0.018,none,2024-10-05\r\n3934,1407,LATAM,electronics,online,124.33,7,0.066,bundle,2024-01-14\r\n3935,1902,AMER,grocery,online,45.67,3,0.086,bundle,2024-12-01\r\n3936,1160,LATAM,fashion,online,82.49,3,0.180,none,2024-05-21\r\n3937,1612,LATAM,grocery,retail,30.61,4,0.187,none,2024-11-25\r\n3938,1767,AMER,sports,online,52.05,8,0.011,bundle,2024-11-11\r\n3939,1043,LATAM,sports,online,74.24,7,0.221,coupon,2024-04-03\r\n3940,2324,AMER,sports,mobile,41
.11,7,0.157,none,2024-03-25\r\n3941,1044,EMEA,home,retail,74.32,7,0.120,coupon,2024-04-02\r\n3942,1322,AMER,electronics,retail,86.26,1,0.072,none,2024-11-27\r\n3943,1525,APAC,home,online,89.22,6,0.060,none,2024-10-19\r\n3944,1635,APAC,home,online,145.72,3,0.060,bundle,2024-01-11\r\n3945,2316,EMEA,fashion,online,58.12,4,0.039,bundle,2024-06-09\r\n3946,2166,AMER,home,online,20.69,4,0.107,coupon,2024-02-28\r\n3947,1315,AMER,electronics,online,71.82,3,0.048,none,2024-08-15\r\n3948,1839,APAC,electronics,retail,51.01,3,0.162,none,2024-09-07\r\n3949,1417,APAC,grocery,retail,19.08,2,0.228,none,2024-06-10\r\n3950,2316,EMEA,home,online,78.95,4,0.116,none,2024-09-22\r\n3951,1588,LATAM,toys,retail,49.00,8,0.243,none,2024-06-03\r\n3952,2018,AMER,grocery,online,56.88,1,0.106,coupon,2024-02-16\r\n3953,2367,AMER,home,mobile,125.39,6,0.195,none,2024-08-15\r\n3954,1604,EMEA,fashion,retail,29.78,2,0.189,none,2024-02-09\r\n3955,1148,AMER,sports,mobile,61.71,7,0.002,bundle,2024-06-24\r\n3956,1643,EMEA,grocery,online,71.17,5,0.054,none,2024-04-08\r\n3957,2457,EMEA,fashion,online,31.99,6,0.163,none,2024-05-12\r\n3958,1396,EMEA,toys,online,122.72,7,0.101,bundle,2024-08-27\r\n3959,1974,EMEA,electronics,retail,88.80,3,0.224,none,2024-08-26\r\n3960,1699,APAC,home,partner,26.01,5,0.194,coupon,2024-09-27\r\n3961,1994,LATAM,electronics,retail,63.29,2,0.074,coupon,2024-08-27\r\n3962,1527,AMER,electronics,online,119.66,2,0.244,none,2024-01-28\r\n3963,1673,AMER,electronics,online,62.19,1,0.148,none,2024-09-15\r\n3964,1172,APAC,sports,online,33.51,6,0.247,none,2024-10-13\r\n3965,2025,EMEA,toys,retail,91.04,2,0.243,bundle,2024-01-27\r\n3966,1937,APAC,home,retail,94.16,4,0.163,coupon,2024-10-26\r\n3967,1387,AMER,fashion,mobile,66.66,4,0.053,none,2024-06-28\r\n3968,2225,EMEA,grocery,retail,48.63,2,0.138,bundle,2024-06-09\r\n3969,1216,APAC,sports,retail,129.04,4,0.174,bundle,2024-04-11\r\n3970,1959,EMEA,grocery,online,74.33,5,0.145,bundle,2024-11-18\r\n3971,1880,LATAM,home,retail,33.43,2,0.085,coupon,20
24-09-25\r\n3972,1004,LATAM,toys,mobile,41.70,5,0.038,none,2024-12-13\r\n3973,2469,LATAM,electronics,online,82.99,4,0.138,coupon,2024-04-17\r\n3974,2125,LATAM,grocery,online,52.35,1,0.209,none,2024-10-19\r\n3975,1919,EMEA,fashion,retail,91.88,6,0.174,none,2024-08-18\r\n3976,1254,APAC,home,retail,101.27,3,0.146,loyalty,2024-04-06\r\n3977,2424,LATAM,grocery,retail,47.61,8,0.041,loyalty,2024-09-05\r\n3978,1270,LATAM,grocery,retail,108.92,1,0.073,none,2024-07-26\r\n3979,1059,AMER,sports,online,152.24,1,0.159,coupon,2024-01-16\r\n3980,1311,APAC,sports,retail,120.96,2,0.160,none,2024-08-17\r\n3981,1929,LATAM,electronics,retail,90.25,1,0.166,loyalty,2024-09-07\r\n3982,2244,LATAM,grocery,online,88.93,6,0.159,none,2024-05-02\r\n3983,2377,AMER,fashion,online,51.95,7,0.034,none,2024-08-17\r\n3984,1368,EMEA,toys,retail,26.67,5,0.166,none,2024-03-03\r\n3985,2451,APAC,grocery,mobile,126.64,8,0.194,coupon,2024-08-11\r\n3986,2281,AMER,home,mobile,36.68,1,0.129,bundle,2024-08-02\r\n3987,2115,APAC,sports,retail,120.48,1,0.081,none,2024-04-22\r\n3988,1435,AMER,grocery,online,134.14,7,0.247,bundle,2024-04-06\r\n3989,1204,AMER,sports,online,47.89,2,0.206,coupon,2024-07-01\r\n3990,1202,APAC,home,online,96.74,8,0.105,none,2024-05-15\r\n3991,1647,LATAM,grocery,online,35.99,7,0.241,none,2024-07-06\r\n3992,2348,EMEA,fashion,online,43.46,4,0.079,bundle,2024-05-20\r\n3993,2339,AMER,electronics,online,67.95,3,0.226,none,2024-02-14\r\n3994,2198,EMEA,grocery,mobile,67.44,1,0.038,none,2024-07-09\r\n3995,1919,EMEA,electronics,online,58.29,5,0.133,bundle,2024-06-04\r\n3996,1816,EMEA,electronics,retail,192.76,2,0.046,bundle,2024-06-18\r\n3997,2421,AMER,grocery,online,56.58,3,0.072,bundle,2024-06-27\r\n3998,2373,LATAM,grocery,online,22.42,4,0.173,bundle,2024-09-08\r\n3999,1178,EMEA,home,retail,100.92,6,0.094,loyalty,2024-03-27\r\n4000,1183,AMER,toys,online,71.29,3,0.227,none,2024-10-23\r\n4001,2106,LATAM,grocery,online,76.86,2,0.116,none,2024-02-19\r\n4002,1861,AMER,toys,mobile,40.41,1,0.099,coupon,20
24-05-23\r\n4003,1237,LATAM,grocery,online,86.23,2,0.205,loyalty,2024-01-16\r\n4004,2162,EMEA,home,online,88.17,2,0.052,coupon,2024-09-04\r\n4005,1955,AMER,grocery,partner,77.07,4,0.246,bundle,2024-06-22\r\n4006,1456,APAC,electronics,online,127.77,2,0.096,none,2024-07-03\r\n4007,2140,AMER,electronics,online,97.72,3,0.113,none,2024-06-27\r\n4008,1226,AMER,electronics,mobile,82.98,4,0.056,none,2024-02-11\r\n4009,2338,AMER,toys,partner,28.76,2,0.120,none,2024-04-14\r\n4010,2498,LATAM,fashion,online,34.46,5,0.002,none,2024-04-23\r\n4011,1615,LATAM,home,online,61.77,4,0.008,coupon,2024-09-02\r\n4012,1527,AMER,fashion,retail,27.37,8,0.099,none,2024-02-20\r\n4013,1474,LATAM,home,retail,82.50,2,0.039,bundle,2024-10-14\r\n4014,1180,AMER,grocery,online,16.45,6,0.085,none,2024-11-02\r\n4015,1250,APAC,home,mobile,101.94,5,0.165,none,2024-12-10\r\n4016,1361,LATAM,electronics,retail,37.71,8,0.096,bundle,2024-10-06\r\n4017,1034,EMEA,toys,retail,50.07,4,0.012,none,2024-11-15\r\n4018,1712,LATAM,grocery,online,71.40,8,0.024,none,2024-03-01\r\n4019,1477,APAC,grocery,retail,75.31,2,0.113,bundle,2024-07-22\r\n4020,1429,APAC,home,retail,25.71,6,0.197,coupon,2024-07-25\r\n4021,1582,AMER,fashion,retail,90.30,1,0.125,none,2024-12-03\r\n4022,2295,EMEA,fashion,retail,91.14,2,0.018,none,2024-09-28\r\n4023,1679,APAC,toys,online,44.05,7,0.226,coupon,2024-11-21\r\n4024,2478,AMER,toys,online,56.58,5,0.211,none,2024-09-16\r\n4025,2153,APAC,grocery,mobile,67.19,2,0.076,none,2024-10-07\r\n4026,1566,EMEA,home,retail,31.74,3,0.096,loyalty,2024-03-22\r\n4027,2486,APAC,home,online,47.84,6,0.078,none,2024-10-22\r\n4028,2156,AMER,electronics,online,50.85,3,0.244,none,2024-08-06\r\n4029,1104,APAC,home,retail,71.54,4,0.077,none,2024-11-04\r\n4030,2057,APAC,home,mobile,112.32,8,0.236,bundle,2024-07-11\r\n4031,1224,APAC,home,retail,62.27,4,0.121,coupon,2024-09-12\r\n4032,1784,EMEA,fashion,retail,48.91,5,0.096,bundle,2024-06-07\r\n4033,1856,EMEA,sports,online,52.62,4,0.064,none,2024-08-11\r\n4034,2272,EMEA,spor
ts,mobile,28.22,3,0.088,none,2024-01-11\r\n4035,1160,LATAM,home,online,47.62,4,0.125,none,2024-03-14\r\n4036,1729,AMER,sports,retail,35.19,4,0.127,coupon,2024-12-06\r\n4037,1956,APAC,fashion,online,68.62,6,0.042,none,2024-10-06\r\n4038,2312,APAC,electronics,retail,82.06,1,0.219,none,2024-12-06\r\n4039,1771,AMER,grocery,retail,121.76,7,0.168,coupon,2024-02-02\r\n4040,1861,AMER,sports,retail,69.00,3,0.084,none,2024-07-14\r\n4041,1628,EMEA,sports,retail,45.56,1,0.150,coupon,2024-10-15\r\n4042,2309,AMER,home,mobile,49.68,6,0.122,bundle,2024-10-22\r\n4043,2374,LATAM,grocery,retail,125.01,7,0.209,none,2024-06-27\r\n4044,2170,EMEA,sports,retail,49.84,6,0.067,coupon,2024-06-03\r\n4045,1034,EMEA,electronics,retail,46.98,5,0.058,bundle,2024-08-09\r\n4046,1363,EMEA,grocery,online,45.40,8,0.167,none,2024-05-10\r\n4047,1947,EMEA,fashion,retail,77.38,7,0.137,loyalty,2024-04-05\r\n4048,1652,APAC,fashion,mobile,69.95,3,0.045,bundle,2024-09-19\r\n4049,2161,LATAM,electronics,retail,50.54,6,0.204,none,2024-11-04\r\n4050,1850,APAC,grocery,retail,44.76,1,0.180,none,2024-05-15\r\n4051,1515,EMEA,home,mobile,40.98,4,0.008,none,2024-05-01\r\n4052,1949,AMER,electronics,online,60.91,5,0.234,coupon,2024-02-16\r\n4053,1344,EMEA,toys,online,26.38,2,0.014,none,2024-04-24\r\n4054,1289,LATAM,fashion,mobile,60.71,4,0.093,none,2024-09-10\r\n4055,2072,AMER,electronics,online,22.89,4,0.043,coupon,2024-01-13\r\n4056,1193,APAC,home,online,30.44,6,0.091,none,2024-11-03\r\n4057,1320,EMEA,sports,online,50.22,2,0.052,none,2024-03-28\r\n4058,1132,EMEA,home,mobile,42.28,4,0.161,none,2024-04-20\r\n4059,1155,EMEA,grocery,online,24.01,8,0.150,bundle,2024-08-05\r\n4060,1895,AMER,grocery,online,57.92,8,0.171,none,2024-10-11\r\n4061,2012,APAC,fashion,online,35.63,3,0.180,coupon,2024-12-23\r\n4062,1755,APAC,fashion,online,45.55,4,0.053,none,2024-08-04\r\n4063,1923,LATAM,sports,mobile,83.46,4,0.232,none,2024-07-02\r\n4064,1803,LATAM,home,retail,75.30,4,0.048,none,2024-02-26\r\n4065,2261,EMEA,grocery,online,105.80,2,0.
039,none,2024-08-09\r\n4066,2099,AMER,toys,retail,50.72,1,0.093,none,2024-06-24\r\n4067,2331,APAC,grocery,online,37.88,6,0.201,bundle,2024-07-10\r\n4068,1271,EMEA,sports,mobile,79.69,7,0.138,none,2024-11-02\r\n4069,2319,AMER,sports,mobile,73.76,5,0.160,none,2024-11-04\r\n4070,2357,EMEA,grocery,mobile,82.84,7,0.059,none,2024-07-07\r\n4071,1055,AMER,home,online,26.17,7,0.027,none,2024-03-23\r\n4072,1226,AMER,electronics,mobile,52.67,3,0.038,bundle,2024-06-10\r\n4073,1456,APAC,grocery,retail,72.55,3,0.088,loyalty,2024-02-14\r\n4074,1179,APAC,grocery,online,101.98,6,0.216,bundle,2024-08-04\r\n4075,1465,AMER,grocery,online,100.07,1,0.062,bundle,2024-12-27\r\n4076,1038,APAC,sports,partner,101.66,6,0.076,coupon,2024-04-27\r\n4077,2194,APAC,grocery,retail,131.52,4,0.171,none,2024-06-09\r\n4078,2380,AMER,home,retail,52.20,3,0.216,coupon,2024-03-04\r\n4079,1130,LATAM,electronics,online,30.45,6,0.183,none,2024-06-17\r\n4080,1603,EMEA,home,retail,31.67,3,0.206,loyalty,2024-08-13\r\n4081,2325,LATAM,fashion,retail,38.84,6,0.222,none,2024-06-02\r\n4082,2088,EMEA,home,online,37.95,5,0.061,bundle,2024-02-14\r\n4083,2217,LATAM,grocery,retail,32.09,8,0.215,none,2024-11-12\r\n4084,2311,LATAM,grocery,retail,68.55,8,0.168,none,2024-03-26\r\n4085,2138,APAC,grocery,mobile,30.60,2,0.169,none,2024-11-20\r\n4086,2144,EMEA,electronics,retail,59.32,5,0.004,bundle,2024-12-27\r\n4087,2499,LATAM,grocery,online,75.42,8,0.058,none,2024-03-13\r\n4088,2406,EMEA,fashion,online,34.57,8,0.052,none,2024-03-25\r\n4089,1091,EMEA,electronics,online,137.13,1,0.196,none,2024-12-06\r\n4090,1444,EMEA,fashion,retail,38.87,1,0.164,coupon,2024-07-26\r\n4091,1050,AMER,home,online,84.31,1,0.153,none,2024-08-07\r\n4092,1719,LATAM,electronics,partner,94.63,5,0.132,none,2024-03-19\r\n4093,2015,APAC,grocery,retail,52.49,4,0.196,none,2024-08-23\r\n4094,1349,APAC,fashion,mobile,88.95,7,0.132,none,2024-05-03\r\n4095,1326,AMER,fashion,mobile,32.25,2,0.046,none,2024-07-14\r\n4096,1023,APAC,grocery,retail,51.16,5,0.162,none,20
24-03-18\r\n4097,2343,EMEA,grocery,mobile,33.05,7,0.148,coupon,2024-05-03\r\n4098,1551,APAC,grocery,online,94.60,4,0.119,none,2024-11-15\r\n4099,1644,EMEA,grocery,retail,71.90,5,0.096,coupon,2024-09-14\r\n4100,1382,LATAM,electronics,retail,13.14,8,0.168,none,2024-01-20\r\n4101,1247,AMER,toys,online,48.17,3,0.139,bundle,2024-04-20\r\n4102,1105,AMER,fashion,online,46.73,1,0.146,none,2024-09-21\r\n4103,1769,LATAM,grocery,online,84.01,2,0.098,none,2024-08-27\r\n4104,1211,EMEA,toys,retail,39.18,5,0.053,none,2024-10-08\r\n4105,1134,APAC,toys,retail,45.87,2,0.102,none,2024-10-07\r\n4106,1336,APAC,toys,online,47.34,2,0.090,bundle,2024-10-01\r\n4107,2459,AMER,grocery,mobile,81.61,7,0.225,none,2024-06-06\r\n4108,1935,EMEA,home,online,41.39,2,0.180,coupon,2024-09-13\r\n4109,1700,EMEA,electronics,mobile,77.24,7,0.131,coupon,2024-10-02\r\n4110,1401,LATAM,grocery,online,72.15,2,0.038,none,2024-08-24\r\n4111,1471,EMEA,electronics,online,105.78,3,0.020,none,2024-10-17\r\n4112,2244,LATAM,grocery,partner,49.72,8,0.151,none,2024-09-28\r\n4113,1102,APAC,toys,online,26.05,6,0.111,none,2024-11-23\r\n4114,1701,LATAM,home,mobile,69.25,7,0.191,none,2024-11-14\r\n4115,2183,EMEA,home,retail,58.52,2,0.149,none,2024-03-13\r\n4116,2098,AMER,electronics,online,130.74,8,0.164,none,2024-06-21\r\n4117,1149,LATAM,home,online,54.43,5,0.033,coupon,2024-07-14\r\n4118,1941,AMER,grocery,partner,28.13,8,0.100,coupon,2024-07-06\r\n4119,1206,EMEA,toys,online,67.66,8,0.039,bundle,2024-02-23\r\n4120,1394,LATAM,electronics,mobile,48.18,4,0.136,none,2024-05-22\r\n4121,1425,EMEA,electronics,partner,54.09,7,0.218,none,2024-04-27\r\n4122,1841,AMER,home,online,43.65,8,0.239,none,2024-08-27\r\n4123,1019,APAC,electronics,online,29.58,5,0.161,coupon,2024-02-01\r\n4124,2084,LATAM,fashion,online,35.40,4,0.083,none,2024-12-16\r\n4125,2129,APAC,toys,retail,81.69,7,0.079,loyalty,2024-05-11\r\n4126,1397,LATAM,grocery,online,78.55,8,0.243,none,2024-01-17\r\n4127,1190,EMEA,sports,partner,58.17,5,0.149,coupon,2024-08-04\r\n4128
,1405,LATAM,sports,partner,40.71,5,0.075,none,2024-03-18\r\n4129,1785,EMEA,grocery,retail,50.56,5,0.089,coupon,2024-02-17\r\n4130,1448,EMEA,grocery,online,46.27,3,0.153,loyalty,2024-03-22\r\n4131,1222,AMER,home,mobile,32.25,7,0.017,loyalty,2024-02-04\r\n4132,1930,AMER,home,online,58.56,4,0.237,none,2024-12-08\r\n4133,2476,APAC,fashion,retail,92.24,5,0.248,none,2024-11-10\r\n4134,2077,APAC,grocery,retail,69.34,5,0.003,loyalty,2024-12-13\r\n4135,1976,AMER,electronics,mobile,99.51,7,0.209,none,2024-01-17\r\n4136,2197,LATAM,electronics,online,52.19,8,0.071,none,2024-04-27\r\n4137,2483,LATAM,electronics,online,71.28,8,0.134,none,2024-12-23\r\n4138,1562,AMER,grocery,online,89.51,3,0.063,none,2024-04-04\r\n4139,1835,AMER,home,mobile,102.50,7,0.107,none,2024-09-20\r\n4140,2183,EMEA,electronics,retail,97.97,5,0.102,none,2024-02-25\r\n4141,1508,LATAM,fashion,retail,110.30,5,0.205,loyalty,2024-11-13\r\n4142,2299,EMEA,electronics,online,102.60,6,0.142,none,2024-04-17\r\n4143,1151,APAC,grocery,online,41.00,2,0.081,loyalty,2024-04-06\r\n4144,1156,APAC,electronics,mobile,39.43,2,0.148,coupon,2024-11-26\r\n4145,1452,LATAM,home,retail,59.03,4,0.026,none,2024-07-06\r\n4146,1347,APAC,sports,online,24.28,2,0.223,none,2024-05-08\r\n4147,1117,LATAM,toys,online,44.96,2,0.015,none,2024-12-24\r\n4148,1826,LATAM,electronics,retail,71.40,7,0.183,loyalty,2024-09-19\r\n4149,1671,APAC,electronics,online,65.81,7,0.168,none,2024-11-20\r\n4150,2399,LATAM,sports,retail,18.60,5,0.244,coupon,2024-08-20\r\n4151,1739,AMER,grocery,online,34.92,2,0.014,loyalty,2024-04-07\r\n4152,2091,LATAM,home,online,8.18,4,0.214,none,2024-05-28\r\n4153,1850,APAC,electronics,online,92.97,7,0.201,none,2024-01-02\r\n4154,1282,LATAM,fashion,online,127.16,4,0.156,coupon,2024-10-19\r\n4155,1851,EMEA,toys,retail,85.07,6,0.199,bundle,2024-12-06\r\n4156,1563,EMEA,fashion,online,125.98,1,0.187,bundle,2024-10-02\r\n4157,2343,EMEA,grocery,online,69.61,3,0.029,bundle,2024-12-23\r\n4158,2273,APAC,home,online,83.32,7,0.090,none,2024-0
9-12\r\n4159,1838,AMER,grocery,online,223.53,3,0.208,loyalty,2024-01-10\r\n4160,1002,EMEA,fashion,retail,66.64,1,0.216,bundle,2024-06-02\r\n4161,2102,APAC,grocery,partner,62.19,7,0.016,none,2024-05-01\r\n4162,1899,APAC,grocery,online,37.28,6,0.009,none,2024-06-02\r\n4163,1299,LATAM,electronics,online,27.50,3,0.080,none,2024-09-23\r\n4164,1523,LATAM,grocery,online,39.60,6,0.071,loyalty,2024-08-03\r\n4165,1806,APAC,sports,online,131.52,5,0.048,bundle,2024-11-15\r\n4166,1073,AMER,sports,mobile,61.32,3,0.073,none,2024-09-25\r\n4167,2311,LATAM,electronics,online,29.18,8,0.039,none,2024-06-24\r\n4168,1745,APAC,grocery,online,54.62,6,0.190,none,2024-12-10\r\n4169,1888,LATAM,grocery,online,68.18,8,0.166,none,2024-03-05\r\n4170,1678,LATAM,electronics,online,30.85,3,0.124,none,2024-12-20\r\n4171,2159,AMER,sports,retail,18.03,8,0.147,coupon,2024-07-12\r\n4172,1068,APAC,sports,online,40.16,4,0.012,none,2024-04-10\r\n4173,2013,APAC,grocery,retail,115.53,1,0.215,none,2024-02-18\r\n4174,1772,EMEA,fashion,partner,79.99,7,0.153,none,2024-03-16\r\n4175,1958,APAC,toys,retail,83.11,7,0.167,none,2024-05-23\r\n4176,1295,EMEA,toys,online,114.56,1,0.249,bundle,2024-07-24\r\n4177,2018,AMER,home,online,48.34,7,0.015,none,2024-03-22\r\n4178,2169,EMEA,sports,mobile,28.57,5,0.205,none,2024-06-20\r\n4179,1656,LATAM,fashion,retail,81.49,7,0.116,loyalty,2024-11-02\r\n4180,1965,LATAM,home,online,101.62,4,0.057,none,2024-01-13\r\n4181,1973,EMEA,electronics,online,113.48,6,0.199,coupon,2024-08-10\r\n4182,2375,AMER,grocery,retail,67.94,2,0.106,bundle,2024-03-23\r\n4183,2154,APAC,fashion,online,48.31,4,0.086,none,2024-05-08\r\n4184,1120,LATAM,electronics,retail,31.82,2,0.028,bundle,2024-05-19\r\n4185,1141,AMER,sports,retail,87.83,7,0.208,coupon,2024-09-03\r\n4186,1065,AMER,electronics,retail,41.84,4,0.237,none,2024-06-23\r\n4187,1656,LATAM,grocery,retail,86.29,7,0.010,none,2024-08-26\r\n4188,1150,LATAM,grocery,retail,99.84,5,0.087,none,2024-07-01\r\n4189,2054,AMER,sports,retail,143.20,8,0.104,none,2024
-10-19\r\n4190,1823,EMEA,home,partner,62.99,4,0.226,bundle,2024-10-22\r\n4191,1157,LATAM,grocery,retail,63.19,5,0.142,none,2024-08-09\r\n4192,1822,EMEA,fashion,online,82.02,5,0.229,none,2024-01-22\r\n4193,2036,APAC,grocery,retail,46.87,7,0.107,loyalty,2024-04-25\r\n4194,1168,APAC,home,retail,48.97,1,0.030,none,2024-05-04\r\n4195,1140,LATAM,sports,partner,26.69,5,0.079,loyalty,2024-04-09\r\n4196,1776,APAC,fashion,online,42.19,7,0.087,none,2024-09-21\r\n4197,2448,APAC,grocery,online,85.03,3,0.135,loyalty,2024-12-07\r\n4198,1650,LATAM,toys,retail,94.33,7,0.068,coupon,2024-04-06\r\n4199,1989,LATAM,grocery,online,62.59,3,0.170,none,2024-11-09\r\n4200,2295,EMEA,fashion,online,54.88,1,0.045,coupon,2024-01-26\r\n4201,1221,LATAM,toys,retail,45.93,1,0.158,bundle,2024-10-19\r\n4202,2017,EMEA,grocery,online,41.00,8,0.043,none,2024-06-17\r\n4203,1412,AMER,fashion,retail,60.62,6,0.012,none,2024-09-09\r\n4204,1167,EMEA,fashion,retail,239.08,3,0.159,none,2024-03-05\r\n4205,2115,APAC,grocery,retail,85.70,4,0.163,loyalty,2024-09-09\r\n4206,1579,AMER,grocery,online,47.45,1,0.014,none,2024-11-23\r\n4207,2432,AMER,electronics,mobile,115.53,5,0.179,none,2024-12-17\r\n4208,2252,EMEA,grocery,retail,33.45,6,0.217,coupon,2024-03-14\r\n4209,2025,EMEA,home,online,57.58,2,0.142,bundle,2024-08-15\r\n4210,1834,AMER,home,retail,38.96,2,0.003,bundle,2024-05-16\r\n4211,1118,AMER,fashion,mobile,46.58,3,0.104,bundle,2024-06-13\r\n4212,1245,APAC,electronics,partner,14.01,2,0.115,coupon,2024-10-06\r\n4213,2342,AMER,grocery,mobile,43.95,7,0.113,coupon,2024-05-27\r\n4214,1718,EMEA,fashion,mobile,88.54,1,0.249,none,2024-11-09\r\n4215,1055,AMER,home,online,43.60,4,0.112,none,2024-11-11\r\n4216,1667,AMER,fashion,online,94.87,1,0.214,none,2024-12-15\r\n4217,2027,EMEA,home,online,112.19,4,0.126,none,2024-01-22\r\n4218,2171,EMEA,home,online,46.78,2,0.137,coupon,2024-10-18\r\n4219,1138,AMER,electronics,online,52.66,8,0.187,coupon,2024-11-25\r\n4220,1567,AMER,electronics,online,64.29,8,0.133,coupon,2024-04-26\r\n
4221,2104,EMEA,grocery,mobile,79.29,5,0.089,bundle,2024-07-23\r\n4222,1963,AMER,grocery,retail,67.63,2,0.188,bundle,2024-08-13\r\n4223,1352,AMER,fashion,online,24.56,6,0.180,none,2024-08-26\r\n4224,1056,LATAM,home,mobile,105.85,1,0.197,none,2024-06-08\r\n4225,1341,EMEA,home,online,56.35,5,0.044,bundle,2024-07-04\r\n4226,2296,AMER,grocery,retail,98.61,3,0.216,bundle,2024-02-10\r\n4227,1920,LATAM,toys,mobile,149.99,7,0.173,coupon,2024-12-26\r\n4228,1884,APAC,fashion,online,20.15,6,0.032,none,2024-01-26\r\n4229,1108,EMEA,sports,retail,56.11,6,0.186,none,2024-03-19\r\n4230,1529,LATAM,home,online,87.73,1,0.072,none,2024-07-24\r\n4231,1421,APAC,fashion,online,55.23,8,0.022,none,2024-05-21\r\n4232,1102,APAC,fashion,retail,101.42,2,0.039,coupon,2024-04-09\r\n4233,1594,LATAM,home,online,116.29,7,0.032,coupon,2024-12-09\r\n4234,2220,LATAM,grocery,online,59.78,6,0.231,bundle,2024-09-24\r\n4235,1372,APAC,toys,online,63.19,3,0.250,loyalty,2024-09-20\r\n4236,1904,APAC,home,online,63.98,7,0.010,none,2024-09-19\r\n4237,1566,EMEA,toys,mobile,84.99,7,0.139,loyalty,2024-03-19\r\n4238,1955,AMER,home,retail,34.85,5,0.198,coupon,2024-06-26\r\n4239,1977,APAC,electronics,online,39.29,3,0.168,coupon,2024-09-03\r\n4240,2024,AMER,electronics,online,106.79,2,0.060,none,2024-01-20\r\n4241,1163,AMER,sports,mobile,115.57,6,0.054,loyalty,2024-08-08\r\n4242,1336,APAC,electronics,online,49.22,8,0.227,none,2024-01-06\r\n4243,1010,EMEA,grocery,online,79.56,1,0.195,bundle,2024-03-02\r\n4244,2066,APAC,grocery,retail,39.50,4,0.170,none,2024-10-05\r\n4245,2392,EMEA,fashion,retail,46.11,7,0.050,loyalty,2024-07-02\r\n4246,2030,EMEA,fashion,mobile,87.98,1,0.125,coupon,2024-05-06\r\n4247,1552,EMEA,sports,partner,112.56,3,0.145,none,2024-02-24\r\n4248,1295,EMEA,fashion,online,24.26,5,0.163,none,2024-05-19\r\n4249,1385,LATAM,fashion,partner,107.51,4,0.140,coupon,2024-05-27\r\n4250,1526,EMEA,electronics,online,39.88,2,0.000,none,2024-11-13\r\n4251,1334,APAC,fashion,retail,136.22,1,0.006,loyalty,2024-01-25\r\n425
2,2465,EMEA,fashion,retail,42.89,5,0.216,coupon,2024-03-12\r\n4253,1341,EMEA,electronics,mobile,70.47,2,0.070,coupon,2024-06-05\r\n4254,2020,AMER,home,partner,60.43,6,0.098,coupon,2024-11-10\r\n4255,2179,LATAM,sports,online,42.98,3,0.169,none,2024-06-14\r\n4256,1432,APAC,electronics,retail,91.13,2,0.092,none,2024-01-17\r\n4257,2317,LATAM,electronics,retail,54.12,4,0.108,coupon,2024-07-22\r\n4258,1252,APAC,electronics,mobile,67.09,8,0.235,none,2024-01-21\r\n4259,2114,AMER,fashion,online,27.71,5,0.112,none,2024-11-15\r\n4260,2364,APAC,toys,online,49.66,6,0.081,none,2024-05-06\r\n4261,1400,EMEA,home,online,35.05,1,0.138,none,2024-04-16\r\n4262,2453,AMER,electronics,retail,21.27,4,0.231,none,2024-04-13\r\n4263,1455,APAC,grocery,retail,55.03,3,0.091,none,2024-04-09\r\n4264,2184,APAC,electronics,retail,74.13,2,0.004,none,2024-05-22\r\n4265,1524,LATAM,grocery,retail,28.21,1,0.045,none,2024-04-23\r\n4266,2255,AMER,grocery,online,50.68,5,0.067,bundle,2024-02-17\r\n4267,1086,AMER,electronics,online,28.94,8,0.068,none,2024-05-21\r\n4268,1913,LATAM,electronics,online,42.59,7,0.074,none,2024-06-02\r\n4269,1607,LATAM,grocery,retail,114.50,5,0.113,loyalty,2024-08-18\r\n4270,1284,APAC,toys,retail,22.12,1,0.187,none,2024-03-21\r\n4271,2178,AMER,fashion,online,68.64,6,0.169,none,2024-12-26\r\n4272,2270,APAC,electronics,retail,45.48,5,0.247,coupon,2024-03-27\r\n4273,1763,LATAM,home,online,86.00,8,0.239,coupon,2024-05-13\r\n4274,1258,EMEA,sports,online,62.73,1,0.246,coupon,2024-03-16\r\n4275,1642,EMEA,grocery,mobile,47.03,1,0.050,loyalty,2024-10-14\r\n4276,1028,EMEA,fashion,mobile,51.00,1,0.212,none,2024-04-25\r\n4277,2155,APAC,electronics,retail,42.90,7,0.215,none,2024-12-28\r\n4278,2425,APAC,home,online,33.48,4,0.236,coupon,2024-08-16\r\n4279,1494,AMER,home,retail,213.15,3,0.070,bundle,2024-10-20\r\n4280,2020,AMER,toys,retail,28.33,2,0.085,none,2024-12-25\r\n4281,1835,AMER,electronics,online,132.09,1,0.057,none,2024-08-27\r\n4282,1502,APAC,electronics,retail,48.78,8,0.115,none,2024-0
6-19\r\n4283,1360,APAC,grocery,online,42.45,1,0.140,none,2024-06-10\r\n4284,1698,EMEA,toys,online,116.69,8,0.003,coupon,2024-10-06\r\n4285,1852,AMER,grocery,retail,122.81,1,0.110,coupon,2024-06-13\r\n4286,2055,AMER,grocery,online,68.84,4,0.132,bundle,2024-05-11\r\n4287,2168,EMEA,grocery,mobile,80.31,1,0.219,coupon,2024-11-18\r\n4288,1266,AMER,sports,online,128.61,5,0.164,none,2024-10-28\r\n4289,2258,AMER,sports,retail,97.66,3,0.009,none,2024-11-20\r\n4290,1361,LATAM,home,retail,40.39,6,0.050,none,2024-06-07\r\n4291,2345,LATAM,fashion,online,65.17,2,0.084,coupon,2024-08-21\r\n4292,2255,AMER,grocery,online,34.37,6,0.200,loyalty,2024-07-02\r\n4293,1180,AMER,home,retail,79.21,2,0.002,none,2024-10-04\r\n4294,2156,AMER,electronics,retail,51.45,4,0.185,loyalty,2024-04-09\r\n4295,1722,EMEA,toys,retail,161.10,5,0.131,coupon,2024-03-19\r\n4296,2427,LATAM,grocery,online,20.30,5,0.244,coupon,2024-06-02\r\n4297,1675,LATAM,fashion,online,54.58,5,0.137,coupon,2024-05-07\r\n4298,1900,APAC,home,retail,24.80,6,0.115,none,2024-01-26\r\n4299,1332,APAC,home,online,82.79,2,0.062,none,2024-05-04\r\n4300,1426,AMER,electronics,online,43.93,5,0.027,none,2024-11-12\r\n4301,1347,APAC,electronics,mobile,52.28,4,0.220,none,2024-06-26\r\n4302,1531,EMEA,sports,retail,35.83,1,0.227,loyalty,2024-10-04\r\n4303,1957,AMER,sports,online,89.67,1,0.165,coupon,2024-04-27\r\n4304,1005,LATAM,sports,online,89.87,8,0.038,none,2024-06-12\r\n4305,2404,EMEA,electronics,online,46.28,6,0.056,coupon,2024-11-18\r\n4306,2194,APAC,grocery,retail,41.08,4,0.176,none,2024-03-11\r\n4307,2214,AMER,grocery,retail,98.16,6,0.202,none,2024-11-22\r\n4308,1729,AMER,grocery,online,46.80,3,0.122,bundle,2024-05-02\r\n4309,1058,LATAM,sports,partner,58.35,8,0.081,bundle,2024-07-11\r\n4310,1817,APAC,home,mobile,68.51,4,0.127,none,2024-08-11\r\n4311,2324,AMER,home,online,62.26,8,0.101,none,2024-06-09\r\n4312,1160,LATAM,fashion,retail,63.56,4,0.238,none,2024-10-13\r\n4313,1152,LATAM,toys,retail,35.06,6,0.245,coupon,2024-05-19\r\n4314,214
3,AMER,home,online,48.84,1,0.156,none,2024-12-22\r\n4315,2416,LATAM,electronics,retail,50.71,5,0.009,bundle,2024-08-24\r\n4316,1520,APAC,grocery,retail,32.23,8,0.238,coupon,2024-08-24\r\n4317,2244,LATAM,fashion,retail,35.50,2,0.241,none,2024-05-23\r\n4318,1193,APAC,toys,online,83.34,1,0.243,none,2024-11-26\r\n4319,1796,LATAM,sports,retail,88.09,7,0.119,loyalty,2024-10-03\r\n4320,1485,APAC,fashion,retail,63.87,5,0.092,none,2024-01-17\r\n4321,2456,APAC,grocery,mobile,74.87,5,0.164,bundle,2024-11-03\r\n4322,1095,APAC,sports,online,108.39,8,0.084,bundle,2024-10-02\r\n4323,1891,APAC,fashion,retail,61.45,7,0.166,loyalty,2024-03-19\r\n4324,1748,APAC,fashion,online,71.86,6,0.129,none,2024-03-22\r\n4325,1042,LATAM,fashion,online,48.48,4,0.149,coupon,2024-10-16\r\n4326,1203,AMER,grocery,online,61.68,4,0.029,none,2024-04-05\r\n4327,1654,EMEA,grocery,online,62.23,7,0.214,coupon,2024-06-05\r\n4328,1831,APAC,electronics,retail,16.01,7,0.237,bundle,2024-03-01\r\n4329,1802,AMER,home,online,79.85,4,0.166,bundle,2024-11-16\r\n4330,1909,APAC,grocery,online,42.72,3,0.130,coupon,2024-10-20\r\n4331,1832,APAC,home,online,115.06,3,0.111,loyalty,2024-04-02\r\n4332,1414,APAC,grocery,online,43.72,5,0.129,loyalty,2024-09-12\r\n4333,1064,AMER,sports,online,57.25,4,0.032,none,2024-05-06\r\n4334,1556,AMER,home,online,227.79,1,0.228,none,2024-02-10\r\n4335,1774,EMEA,electronics,retail,62.36,1,0.031,none,2024-02-13\r\n4336,1797,LATAM,fashion,mobile,54.05,4,0.153,none,2024-01-08\r\n4337,1339,EMEA,electronics,online,45.14,8,0.113,none,2024-09-11\r\n4338,2300,EMEA,sports,online,85.99,4,0.052,coupon,2024-04-25\r\n4339,2247,LATAM,fashion,online,54.07,4,0.232,none,2024-09-16\r\n4340,1577,AMER,electronics,mobile,37.14,5,0.067,none,2024-08-04\r\n4341,1221,LATAM,home,retail,79.81,1,0.185,none,2024-09-10\r\n4342,1327,APAC,fashion,online,38.13,3,0.023,none,2024-08-20\r\n4343,1370,APAC,home,retail,38.56,2,0.186,none,2024-10-08\r\n4344,2444,EMEA,fashion,online,38.77,2,0.062,coupon,2024-01-16\r\n4345,1946,AMER,h
ome,retail,142.27,8,0.037,none,2024-04-28\r\n4346,1545,AMER,grocery,mobile,65.57,6,0.050,none,2024-02-28\r\n4347,2287,EMEA,toys,partner,75.31,8,0.082,none,2024-06-21\r\n4348,1876,LATAM,sports,online,36.97,4,0.019,none,2024-08-14\r\n4349,1684,EMEA,grocery,online,69.35,8,0.114,none,2024-07-09\r\n4350,2371,LATAM,fashion,online,52.62,5,0.211,none,2024-07-05\r\n4351,1627,LATAM,sports,mobile,51.58,8,0.100,none,2024-03-22\r\n4352,1470,LATAM,fashion,online,47.91,8,0.088,none,2024-09-28\r\n4353,1184,AMER,sports,retail,51.25,6,0.173,none,2024-09-12\r\n4354,2096,LATAM,grocery,online,41.46,4,0.132,none,2024-12-11\r\n4355,2207,APAC,sports,retail,37.19,4,0.040,none,2024-08-10\r\n4356,1971,EMEA,home,online,33.73,8,0.001,loyalty,2024-03-20\r\n4357,1608,AMER,grocery,online,95.27,4,0.176,coupon,2024-10-26\r\n4358,2054,AMER,fashion,retail,50.93,2,0.160,coupon,2024-08-17\r\n4359,1593,AMER,electronics,online,68.93,1,0.159,bundle,2024-09-23\r\n4360,1042,LATAM,home,online,82.31,6,0.228,none,2024-01-06\r\n4361,1070,EMEA,grocery,retail,30.12,4,0.058,bundle,2024-10-24\r\n4362,1442,EMEA,grocery,online,62.27,8,0.220,loyalty,2024-05-16\r\n4363,1142,EMEA,grocery,retail,132.52,2,0.078,none,2024-06-17\r\n4364,1310,AMER,home,online,68.16,7,0.120,coupon,2024-05-22\r\n4365,1741,AMER,fashion,online,67.95,5,0.217,coupon,2024-09-16\r\n4366,2470,EMEA,sports,mobile,59.50,5,0.237,none,2024-12-18\r\n4367,2113,LATAM,home,retail,99.18,5,0.202,none,2024-02-09\r\n4368,1108,EMEA,home,retail,99.13,6,0.171,coupon,2024-01-21\r\n4369,2054,AMER,electronics,partner,142.72,3,0.034,none,2024-04-18\r\n4370,1960,EMEA,home,online,54.23,3,0.161,bundle,2024-06-15\r\n4371,2276,AMER,grocery,retail,103.60,4,0.170,coupon,2024-09-14\r\n4372,2221,LATAM,toys,retail,23.67,3,0.073,none,2024-02-09\r\n4373,2162,EMEA,grocery,retail,164.82,7,0.129,bundle,2024-01-16\r\n4374,1406,LATAM,home,retail,81.80,1,0.190,coupon,2024-12-22\r\n4375,1664,LATAM,grocery,retail,136.13,2,0.026,none,2024-02-07\r\n4376,1686,LATAM,toys,retail,30.10,1,0.190,lo
yalty,2024-08-24\r\n4377,1140,LATAM,toys,online,146.20,5,0.139,none,2024-01-26\r\n4378,1134,APAC,sports,online,56.14,5,0.116,none,2024-03-17\r\n4379,2137,LATAM,toys,online,56.42,4,0.090,bundle,2024-07-05\r\n4380,2264,LATAM,grocery,online,64.46,5,0.208,none,2024-09-07\r\n4381,1692,LATAM,electronics,retail,137.15,7,0.071,coupon,2024-01-23\r\n4382,1523,LATAM,toys,online,12.42,3,0.177,none,2024-11-27\r\n4383,1003,APAC,fashion,retail,44.19,8,0.042,none,2024-10-24\r\n4384,2254,LATAM,grocery,online,91.24,8,0.149,none,2024-04-12\r\n4385,1299,LATAM,home,online,32.88,7,0.121,coupon,2024-08-28\r\n4386,2274,APAC,sports,online,75.76,2,0.004,none,2024-08-13\r\n4387,1038,APAC,sports,mobile,59.94,8,0.119,none,2024-09-14\r\n4388,1659,APAC,grocery,online,57.25,1,0.082,none,2024-03-23\r\n4389,1681,LATAM,fashion,retail,133.23,7,0.077,coupon,2024-11-25\r\n4390,1163,AMER,electronics,online,143.82,5,0.153,none,2024-04-18\r\n4391,2174,LATAM,grocery,online,29.34,5,0.107,loyalty,2024-06-09\r\n4392,1327,APAC,toys,online,54.24,1,0.050,none,2024-12-08\r\n4393,1574,AMER,fashion,online,48.78,4,0.135,none,2024-01-02\r\n4394,2253,AMER,sports,retail,63.48,5,0.200,coupon,2024-09-22\r\n4395,1889,APAC,grocery,online,62.73,8,0.245,none,2024-01-05\r\n4396,1379,EMEA,fashion,retail,49.06,6,0.122,bundle,2024-11-17\r\n4397,1163,AMER,grocery,online,41.89,3,0.151,loyalty,2024-08-17\r\n4398,1301,AMER,electronics,retail,56.48,4,0.198,bundle,2024-05-07\r\n4399,2128,EMEA,home,online,39.53,6,0.005,bundle,2024-05-07\r\n4400,1712,LATAM,grocery,retail,76.20,3,0.211,bundle,2024-01-05\r\n4401,2272,EMEA,grocery,retail,93.44,2,0.098,coupon,2024-01-22\r\n4402,2407,EMEA,home,retail,65.96,3,0.184,none,2024-04-28\r\n4403,1924,AMER,grocery,online,39.45,5,0.037,loyalty,2024-05-06\r\n4404,1067,APAC,fashion,online,34.72,1,0.144,loyalty,2024-11-09\r\n4405,1890,LATAM,grocery,online,200.18,1,0.039,none,2024-08-11\r\n4406,1693,EMEA,home,online,27.72,2,0.027,loyalty,2024-08-05\r\n4407,1589,AMER,sports,mobile,59.22,7,0.123,none,2024-09
-22\r\n4408,1233,AMER,grocery,mobile,63.12,6,0.031,none,2024-12-05\r\n4409,2233,EMEA,fashion,retail,67.99,8,0.085,none,2024-06-06\r\n4410,2303,EMEA,home,partner,33.07,6,0.053,bundle,2024-02-05\r\n4411,1194,APAC,sports,mobile,50.04,5,0.089,coupon,2024-08-24\r\n4412,2369,LATAM,toys,online,59.21,5,0.155,none,2024-03-01\r\n4413,1098,APAC,sports,mobile,31.53,4,0.169,bundle,2024-05-02\r\n4414,1666,LATAM,grocery,mobile,56.49,4,0.221,none,2024-05-24\r\n4415,1755,APAC,grocery,retail,120.60,5,0.053,none,2024-05-13\r\n4416,1921,LATAM,grocery,online,34.12,3,0.144,none,2024-10-16\r\n4417,1209,AMER,home,retail,80.53,3,0.121,none,2024-12-06\r\n4418,2220,LATAM,electronics,retail,87.83,2,0.023,none,2024-10-24\r\n4419,1480,APAC,toys,online,17.32,7,0.082,none,2024-12-13\r\n4420,2049,LATAM,sports,retail,122.78,5,0.128,none,2024-10-13\r\n4421,2492,LATAM,electronics,online,89.15,2,0.055,none,2024-06-03\r\n4422,1505,EMEA,toys,retail,24.27,3,0.248,coupon,2024-04-25\r\n4423,1953,EMEA,electronics,retail,55.09,1,0.059,none,2024-01-13\r\n4424,2327,EMEA,toys,retail,57.61,8,0.174,none,2024-08-26\r\n4425,1862,LATAM,home,mobile,30.16,1,0.065,bundle,2024-05-18\r\n4426,2264,LATAM,fashion,retail,76.91,7,0.033,none,2024-01-14\r\n4427,1515,EMEA,fashion,online,114.08,2,0.229,none,2024-09-23\r\n4428,1591,APAC,toys,retail,105.18,4,0.204,bundle,2024-03-27\r\n4429,1156,APAC,grocery,retail,74.02,4,0.038,bundle,2024-09-02\r\n4430,1305,EMEA,grocery,online,84.31,7,0.001,none,2024-10-13\r\n4431,1356,LATAM,grocery,retail,63.07,7,0.091,none,2024-05-27\r\n4432,1636,APAC,sports,online,55.69,8,0.104,coupon,2024-01-12\r\n4433,1932,EMEA,grocery,retail,39.60,6,0.157,none,2024-04-17\r\n4434,1621,APAC,electronics,retail,78.10,5,0.168,none,2024-09-09\r\n4435,2365,LATAM,home,online,68.87,6,0.071,coupon,2024-09-28\r\n4436,2223,EMEA,toys,retail,55.71,5,0.094,none,2024-11-20\r\n4437,2495,EMEA,home,retail,49.62,7,0.210,bundle,2024-07-13\r\n4438,2461,LATAM,toys,online,56.49,6,0.166,none,2024-05-04\r\n4439,1705,AMER,grocery,retai
l,18.85,4,0.053,none,2024-05-27\r\n4440,1362,AMER,grocery,retail,15.52,3,0.241,coupon,2024-08-19\r\n4441,1269,LATAM,grocery,mobile,56.93,4,0.140,coupon,2024-11-06\r\n4442,1519,APAC,grocery,online,97.62,3,0.231,none,2024-11-25\r\n4443,1268,EMEA,sports,online,64.05,4,0.125,coupon,2024-06-19\r\n4444,1348,AMER,grocery,online,63.85,8,0.076,none,2024-03-08\r\n4445,2278,APAC,home,online,96.77,3,0.141,none,2024-05-24\r\n4446,1239,APAC,sports,online,22.64,4,0.236,none,2024-10-26\r\n4447,1538,AMER,home,online,67.86,7,0.079,none,2024-09-06\r\n4448,1549,APAC,electronics,mobile,133.95,1,0.047,loyalty,2024-10-22\r\n4449,1279,EMEA,sports,retail,44.98,6,0.179,none,2024-11-11\r\n4450,2317,LATAM,fashion,retail,65.77,3,0.122,none,2024-07-20\r\n4451,1188,LATAM,home,retail,32.45,6,0.176,none,2024-12-17\r\n4452,1822,EMEA,grocery,online,27.32,5,0.243,none,2024-04-21\r\n4453,1127,EMEA,electronics,mobile,99.44,3,0.079,coupon,2024-03-19\r\n4454,2432,AMER,electronics,online,60.89,3,0.049,none,2024-01-10\r\n4455,1582,AMER,grocery,retail,51.52,7,0.247,none,2024-07-17\r\n4456,1257,APAC,toys,online,63.97,6,0.163,none,2024-06-10\r\n4457,2290,LATAM,toys,partner,71.42,7,0.082,coupon,2024-08-28\r\n4458,1159,LATAM,grocery,mobile,78.92,6,0.208,none,2024-09-24\r\n4459,1295,EMEA,electronics,online,58.38,5,0.011,bundle,2024-01-27\r\n4460,1886,LATAM,electronics,partner,39.62,3,0.160,none,2024-04-01\r\n4461,1770,AMER,grocery,online,35.32,5,0.178,none,2024-01-10\r\n4462,1036,EMEA,grocery,online,123.94,8,0.114,bundle,2024-09-04\r\n4463,1792,AMER,electronics,retail,29.75,2,0.214,none,2024-05-24\r\n4464,2385,APAC,toys,online,82.35,7,0.211,none,2024-04-10\r\n4465,1990,EMEA,grocery,online,79.03,7,0.014,none,2024-08-23\r\n4466,1829,EMEA,toys,online,69.44,2,0.044,none,2024-10-11\r\n4467,1979,APAC,sports,online,14.67,6,0.006,none,2024-06-19\r\n4468,1102,APAC,home,retail,46.37,2,0.216,none,2024-08-13\r\n4469,2341,EMEA,home,mobile,22.59,8,0.192,none,2024-11-02\r\n4470,2018,AMER,home,mobile,20.28,4,0.162,coupon,2024-01
-27\r\n4471,1037,EMEA,sports,online,210.14,2,0.234,loyalty,2024-04-17\r\n4472,1875,EMEA,grocery,retail,50.64,1,0.220,none,2024-04-13\r\n4473,1169,LATAM,sports,online,38.84,2,0.209,none,2024-06-22\r\n4474,1008,AMER,electronics,online,24.84,2,0.154,none,2024-10-20\r\n4475,1434,EMEA,grocery,retail,84.69,7,0.062,coupon,2024-11-25\r\n4476,1895,AMER,home,online,112.66,3,0.040,bundle,2024-01-03\r\n4477,1315,AMER,electronics,retail,53.17,3,0.216,bundle,2024-06-25\r\n4478,2068,LATAM,home,online,116.45,5,0.122,none,2024-02-23\r\n4479,2058,LATAM,sports,mobile,43.53,5,0.115,none,2024-10-13\r\n4480,2039,EMEA,toys,online,101.46,5,0.080,none,2024-07-20\r\n4481,1184,AMER,home,online,91.26,5,0.055,none,2024-03-10\r\n4482,1272,AMER,sports,mobile,44.75,5,0.106,none,2024-06-07\r\n4483,1388,AMER,home,retail,55.54,4,0.084,none,2024-05-24\r\n4484,2252,EMEA,sports,mobile,191.52,4,0.082,loyalty,2024-01-22\r\n4485,1599,APAC,grocery,online,40.58,2,0.181,bundle,2024-10-22\r\n4486,1170,AMER,fashion,online,66.59,6,0.210,coupon,2024-02-05\r\n4487,1071,AMER,home,online,176.47,1,0.070,bundle,2024-05-16\r\n4488,2075,LATAM,electronics,online,80.23,8,0.125,none,2024-12-20\r\n4489,1592,LATAM,toys,retail,69.24,2,0.164,none,2024-03-08\r\n4490,2260,EMEA,sports,mobile,103.59,3,0.001,none,2024-10-05\r\n4491,1036,EMEA,grocery,online,45.48,3,0.046,bundle,2024-05-22\r\n4492,1927,EMEA,toys,retail,33.18,6,0.185,loyalty,2024-10-28\r\n4493,1092,AMER,grocery,online,41.90,5,0.168,none,2024-11-16\r\n4494,1424,APAC,grocery,online,24.91,6,0.239,none,2024-04-18\r\n4495,1435,AMER,electronics,retail,50.30,1,0.084,none,2024-09-08\r\n4496,1189,AMER,home,retail,134.50,8,0.057,coupon,2024-05-22\r\n4497,1377,APAC,toys,retail,11.27,7,0.223,bundle,2024-07-10\r\n4498,1200,EMEA,home,retail,63.29,5,0.012,coupon,2024-09-02\r\n4499,1979,APAC,home,retail,66.28,3,0.126,bundle,2024-07-17\r\n4500,2268,EMEA,home,retail,125.51,3,0.186,coupon,2024-07-08\r\n4501,1295,EMEA,sports,retail,22.94,3,0.163,none,2024-07-27\r\n4502,1577,AMER,grocery,
retail,99.32,6,0.154,none,2024-06-24\r\n4503,2206,AMER,toys,mobile,44.61,2,0.037,none,2024-07-17\r\n4504,2334,LATAM,sports,online,25.53,4,0.066,none,2024-11-25\r\n4505,1241,APAC,grocery,online,75.22,3,0.169,coupon,2024-06-23\r\n4506,2332,APAC,electronics,online,38.74,1,0.242,coupon,2024-04-11\r\n4507,2100,APAC,electronics,retail,35.30,5,0.161,bundle,2024-05-02\r\n4508,2243,APAC,toys,retail,27.05,3,0.113,none,2024-04-16\r\n4509,1915,LATAM,home,retail,84.88,4,0.033,none,2024-08-06\r\n4510,1869,AMER,home,online,77.99,1,0.128,coupon,2024-11-07\r\n4511,1922,EMEA,electronics,online,36.25,7,0.049,none,2024-09-06\r\n4512,1293,AMER,home,retail,75.85,5,0.048,none,2024-02-04\r\n4513,2466,APAC,sports,online,39.05,5,0.004,none,2024-12-13\r\n4514,1203,AMER,grocery,online,51.04,6,0.020,coupon,2024-08-09\r\n4515,1145,AMER,electronics,retail,148.98,7,0.020,none,2024-03-13\r\n4516,2181,AMER,electronics,online,107.19,7,0.003,loyalty,2024-06-08\r\n4517,1532,APAC,fashion,retail,32.66,3,0.117,none,2024-03-12\r\n4518,1092,AMER,electronics,online,39.10,7,0.028,none,2024-05-06\r\n4519,2347,AMER,grocery,retail,49.46,3,0.215,none,2024-03-09\r\n4520,1994,LATAM,home,online,107.93,1,0.055,none,2024-04-03\r\n4521,1438,APAC,electronics,online,65.87,3,0.046,none,2024-05-03\r\n4522,1485,APAC,grocery,mobile,113.79,8,0.112,none,2024-08-23\r\n4523,2421,AMER,grocery,retail,54.96,5,0.066,coupon,2024-01-20\r\n4524,1516,EMEA,electronics,online,55.92,4,0.126,none,2024-03-08\r\n4525,1881,LATAM,electronics,partner,42.61,3,0.165,none,2024-02-23\r\n4526,2092,AMER,toys,online,40.17,8,0.171,none,2024-11-26\r\n4527,2252,EMEA,home,online,46.30,2,0.225,none,2024-12-12\r\n4528,1742,AMER,sports,retail,35.50,5,0.017,coupon,2024-03-14\r\n4529,2471,APAC,grocery,online,63.76,3,0.214,none,2024-07-20\r\n4530,1570,AMER,electronics,online,43.88,2,0.003,none,2024-09-28\r\n4531,1827,EMEA,grocery,mobile,107.79,1,0.114,none,2024-10-20\r\n4532,1983,LATAM,fashion,online,45.73,8,0.102,none,2024-12-03\r\n4533,1516,EMEA,fashion,online
,51.38,1,0.216,none,2024-02-16\r\n4534,1238,AMER,fashion,online,41.85,1,0.097,coupon,2024-08-19\r\n4535,2304,LATAM,toys,online,49.63,6,0.249,none,2024-12-25\r\n4536,1919,EMEA,grocery,online,38.10,5,0.182,none,2024-01-11\r\n4537,1584,EMEA,electronics,retail,41.44,1,0.105,bundle,2024-11-25\r\n4538,1185,LATAM,grocery,retail,48.54,2,0.022,none,2024-11-10\r\n4539,1455,APAC,home,online,44.06,3,0.081,coupon,2024-04-04\r\n4540,2485,AMER,grocery,retail,73.48,6,0.096,none,2024-03-26\r\n4541,2450,EMEA,electronics,retail,106.80,2,0.094,none,2024-04-20\r\n4542,1782,LATAM,electronics,online,48.07,4,0.104,none,2024-10-25\r\n4543,2168,EMEA,electronics,online,53.53,8,0.219,none,2024-12-13\r\n4544,2009,LATAM,sports,retail,35.26,7,0.117,none,2024-11-16\r\n4545,1093,APAC,grocery,retail,54.99,4,0.235,coupon,2024-12-26\r\n4546,2033,LATAM,home,online,124.24,2,0.240,none,2024-11-07\r\n4547,1046,EMEA,sports,mobile,122.59,2,0.169,coupon,2024-06-02\r\n4548,2208,AMER,grocery,partner,111.39,4,0.007,coupon,2024-11-21\r\n4549,2471,APAC,fashion,online,48.72,3,0.018,none,2024-06-16\r\n4550,1681,LATAM,grocery,retail,113.77,4,0.097,loyalty,2024-11-16\r\n4551,1038,APAC,electronics,online,71.33,4,0.158,loyalty,2024-09-19\r\n4552,2107,APAC,grocery,retail,71.41,1,0.236,coupon,2024-10-16\r\n4553,1384,LATAM,toys,online,87.61,7,0.213,loyalty,2024-05-22\r\n4554,2262,APAC,grocery,retail,19.12,6,0.095,bundle,2024-01-01\r\n4555,2156,AMER,electronics,retail,66.95,4,0.176,none,2024-08-05\r\n4556,1799,EMEA,fashion,retail,27.45,3,0.189,none,2024-05-07\r\n4557,2495,EMEA,toys,mobile,31.19,7,0.221,bundle,2024-04-24\r\n4558,1077,AMER,home,mobile,77.01,3,0.114,bundle,2024-03-20\r\n4559,2084,LATAM,home,partner,69.27,6,0.167,none,2024-08-26\r\n4560,1497,EMEA,electronics,online,75.38,1,0.113,none,2024-02-05\r\n4561,1315,AMER,electronics,mobile,19.95,2,0.250,none,2024-11-22\r\n4562,1831,APAC,sports,online,29.79,8,0.230,none,2024-08-19\r\n4563,2286,AMER,grocery,mobile,56.60,6,0.149,none,2024-11-17\r\n4564,2082,APAC,fashion,m
obile,81.95,1,0.036,none,2024-07-24\r\n4565,1717,AMER,electronics,online,47.53,5,0.030,coupon,2024-10-05\r\n4566,2101,APAC,sports,online,47.43,1,0.087,none,2024-11-14\r\n4567,1483,EMEA,sports,online,69.24,2,0.086,bundle,2024-06-08\r\n4568,2470,EMEA,toys,online,42.80,5,0.067,coupon,2024-02-01\r\n4569,1296,LATAM,electronics,retail,111.93,2,0.055,coupon,2024-10-09\r\n4570,1701,LATAM,toys,online,41.77,4,0.000,none,2024-06-03\r\n4571,2365,LATAM,fashion,retail,27.58,6,0.098,coupon,2024-06-22\r\n4572,1891,APAC,grocery,retail,69.93,1,0.092,bundle,2024-09-28\r\n4573,2416,LATAM,grocery,retail,40.79,1,0.200,bundle,2024-03-19\r\n4574,1984,LATAM,grocery,partner,65.39,6,0.004,bundle,2024-04-25\r\n4575,1982,EMEA,electronics,online,28.16,1,0.083,none,2024-02-09\r\n4576,1410,AMER,electronics,partner,103.00,5,0.213,coupon,2024-01-18\r\n4577,1993,APAC,electronics,mobile,52.01,2,0.158,loyalty,2024-03-04\r\n4578,2238,AMER,fashion,retail,106.26,8,0.192,coupon,2024-06-21\r\n4579,1545,AMER,grocery,online,61.02,7,0.027,coupon,2024-07-24\r\n4580,1648,APAC,toys,online,62.85,5,0.218,loyalty,2024-06-05\r\n4581,2318,AMER,electronics,online,73.43,5,0.241,bundle,2024-08-22\r\n4582,1177,LATAM,toys,online,37.27,7,0.167,none,2024-11-24\r\n4583,1566,EMEA,home,online,102.59,4,0.233,coupon,2024-10-06\r\n4584,1131,APAC,home,online,66.73,5,0.009,bundle,2024-11-19\r\n4585,2183,EMEA,sports,retail,179.05,6,0.056,none,2024-04-06\r\n4586,2471,APAC,fashion,online,42.70,4,0.153,bundle,2024-05-03\r\n4587,1869,AMER,electronics,retail,136.22,1,0.142,coupon,2024-07-01\r\n4588,1537,LATAM,electronics,mobile,32.36,7,0.152,none,2024-03-18\r\n4589,1318,LATAM,grocery,online,78.82,1,0.135,coupon,2024-10-09\r\n4590,1840,LATAM,fashion,retail,45.97,7,0.250,loyalty,2024-04-11\r\n4591,1948,EMEA,sports,online,56.48,3,0.173,coupon,2024-08-09\r\n4592,1772,EMEA,toys,online,186.97,2,0.075,loyalty,2024-06-24\r\n4593,1726,EMEA,toys,retail,65.00,1,0.203,coupon,2024-04-26\r\n4594,1943,AMER,home,online,59.05,6,0.076,none,2024-02-06\r\n45
95,1132,EMEA,sports,mobile,71.05,6,0.123,bundle,2024-11-28\r\n4596,1354,AMER,electronics,online,28.12,3,0.218,none,2024-03-08\r\n4597,2213,APAC,electronics,retail,49.61,6,0.115,coupon,2024-03-24\r\n4598,1972,LATAM,electronics,retail,103.69,3,0.213,none,2024-05-11\r\n4599,1149,LATAM,grocery,retail,78.78,5,0.034,none,2024-09-25\r\n4600,1677,EMEA,fashion,online,67.15,1,0.120,none,2024-04-25\r\n4601,1889,APAC,home,online,83.44,4,0.099,bundle,2024-07-15\r\n4602,1515,EMEA,sports,retail,38.38,2,0.087,none,2024-05-06\r\n4603,1619,APAC,electronics,online,37.63,4,0.188,none,2024-05-26\r\n4604,2365,LATAM,fashion,online,43.97,1,0.202,loyalty,2024-12-09\r\n4605,1877,LATAM,home,online,141.33,1,0.202,bundle,2024-12-19\r\n4606,1774,EMEA,grocery,mobile,58.28,2,0.228,none,2024-01-18\r\n4607,2151,APAC,electronics,online,89.25,6,0.173,coupon,2024-03-27\r\n4608,2028,APAC,electronics,retail,52.21,6,0.185,none,2024-01-01\r\n4609,1999,EMEA,toys,online,59.50,1,0.023,none,2024-11-28\r\n4610,1418,LATAM,grocery,retail,43.47,3,0.176,none,2024-06-23\r\n4611,1170,AMER,electronics,retail,82.60,2,0.069,bundle,2024-01-28\r\n4612,1089,LATAM,grocery,retail,24.30,2,0.219,bundle,2024-08-01\r\n4613,1522,LATAM,sports,mobile,13.79,5,0.033,coupon,2024-11-20\r\n4614,1834,AMER,toys,online,68.18,3,0.011,none,2024-06-22\r\n4615,1065,AMER,sports,online,47.90,3,0.059,none,2024-05-01\r\n4616,2124,AMER,fashion,retail,67.20,5,0.200,none,2024-01-09\r\n4617,2208,AMER,electronics,mobile,38.17,2,0.012,none,2024-02-25\r\n4618,2411,EMEA,sports,online,54.98,5,0.019,bundle,2024-06-06\r\n4619,1959,EMEA,electronics,mobile,50.02,1,0.200,none,2024-10-16\r\n4620,2146,APAC,home,retail,103.48,8,0.185,coupon,2024-11-10\r\n4621,1583,AMER,home,online,62.84,7,0.217,none,2024-03-23\r\n4622,1202,APAC,fashion,mobile,38.57,3,0.222,coupon,2024-01-24\r\n4623,1898,EMEA,fashion,online,25.75,7,0.180,loyalty,2024-01-07\r\n4624,1437,EMEA,grocery,partner,124.46,1,0.123,none,2024-06-23\r\n4625,1976,AMER,grocery,retail,43.85,4,0.202,none,2024-02-14
\r\n4626,1922,EMEA,grocery,retail,22.18,1,0.057,loyalty,2024-10-09\r\n4627,2035,LATAM,electronics,retail,38.27,2,0.060,none,2024-09-27\r\n4628,2317,LATAM,sports,retail,60.25,8,0.226,none,2024-10-11\r\n4629,1993,APAC,fashion,retail,52.72,4,0.178,none,2024-03-04\r\n4630,2095,EMEA,home,retail,33.18,1,0.179,bundle,2024-03-08\r\n4631,2322,AMER,sports,mobile,47.64,2,0.098,loyalty,2024-02-07\r\n4632,1082,EMEA,home,retail,82.13,2,0.216,none,2024-10-24\r\n4633,1178,EMEA,home,retail,75.11,2,0.025,coupon,2024-06-11\r\n4634,1388,AMER,grocery,partner,73.42,5,0.047,none,2024-10-22\r\n4635,1158,LATAM,grocery,mobile,87.10,3,0.109,coupon,2024-11-26\r\n4636,1525,APAC,grocery,online,69.71,2,0.243,none,2024-02-06\r\n4637,2146,APAC,fashion,online,73.35,1,0.071,loyalty,2024-06-26\r\n4638,1635,APAC,home,online,54.63,1,0.012,none,2024-07-10\r\n4639,1144,APAC,grocery,online,62.01,6,0.243,coupon,2024-09-16\r\n4640,2031,AMER,home,retail,107.37,7,0.078,none,2024-03-04\r\n4641,1391,LATAM,grocery,mobile,58.96,5,0.179,coupon,2024-10-08\r\n4642,1551,APAC,grocery,retail,94.07,7,0.244,coupon,2024-03-22\r\n4643,1395,APAC,fashion,retail,89.57,6,0.200,loyalty,2024-03-15\r\n4644,2406,EMEA,fashion,mobile,54.93,3,0.145,none,2024-01-27\r\n4645,1846,APAC,home,online,116.71,6,0.155,bundle,2024-12-09\r\n4646,2394,EMEA,fashion,mobile,120.65,2,0.184,none,2024-05-09\r\n4647,1304,LATAM,home,online,37.28,3,0.127,none,2024-01-09\r\n4648,1941,AMER,home,online,46.95,1,0.196,coupon,2024-05-02\r\n4649,2078,APAC,electronics,mobile,53.03,5,0.130,loyalty,2024-04-04\r\n4650,2482,EMEA,home,retail,88.19,5,0.063,bundle,2024-09-07\r\n4651,1705,AMER,home,online,20.79,5,0.086,none,2024-08-03\r\n4652,2061,EMEA,electronics,mobile,66.43,8,0.050,none,2024-06-09\r\n4653,1690,LATAM,electronics,retail,29.42,7,0.146,none,2024-09-20\r\n4654,1797,LATAM,grocery,online,268.26,3,0.107,bundle,2024-01-22\r\n4655,1842,LATAM,grocery,online,73.50,6,0.245,none,2024-02-07\r\n4656,2426,AMER,grocery,retail,78.28,1,0.178,coupon,2024-08-18\r\n4657,1477
,APAC,grocery,online,116.46,8,0.236,none,2024-12-07\r\n4658,1994,LATAM,fashion,mobile,47.53,8,0.123,bundle,2024-10-14\r\n4659,1499,EMEA,electronics,retail,45.73,8,0.055,coupon,2024-05-06\r\n4660,1561,EMEA,electronics,online,87.38,6,0.109,none,2024-06-21\r\n4661,1280,LATAM,sports,partner,65.50,8,0.203,none,2024-09-11\r\n4662,1888,LATAM,grocery,online,60.21,7,0.011,none,2024-08-08\r\n4663,2439,AMER,home,retail,87.05,2,0.185,none,2024-07-23\r\n4664,2341,EMEA,sports,mobile,39.85,4,0.025,coupon,2024-03-24\r\n4665,1686,LATAM,sports,retail,61.13,4,0.125,none,2024-10-03\r\n4666,1300,EMEA,fashion,mobile,89.02,2,0.159,loyalty,2024-05-01\r\n4667,2318,AMER,electronics,mobile,34.60,1,0.244,none,2024-12-25\r\n4668,1407,LATAM,fashion,retail,103.66,1,0.044,none,2024-07-27\r\n4669,1707,APAC,home,online,27.11,6,0.122,bundle,2024-12-14\r\n4670,1983,LATAM,grocery,online,49.38,7,0.042,none,2024-09-21\r\n4671,1827,EMEA,home,retail,76.16,3,0.250,none,2024-07-10\r\n4672,1707,APAC,grocery,online,22.05,5,0.045,coupon,2024-10-02\r\n4673,2257,AMER,fashion,mobile,56.63,5,0.001,coupon,2024-05-04\r\n4674,2435,AMER,grocery,retail,50.52,5,0.074,coupon,2024-10-12\r\n4675,2295,EMEA,grocery,mobile,100.42,5,0.169,none,2024-10-10\r\n4676,1630,APAC,fashion,online,23.21,5,0.096,none,2024-06-24\r\n4677,1908,AMER,sports,online,34.17,4,0.189,coupon,2024-05-22\r\n4678,2208,AMER,electronics,retail,72.00,6,0.155,none,2024-08-25\r\n4679,1295,EMEA,home,online,35.73,6,0.187,none,2024-09-24\r\n4680,1376,EMEA,electronics,retail,23.05,1,0.245,coupon,2024-10-22\r\n4681,2374,LATAM,electronics,online,48.98,8,0.052,none,2024-08-10\r\n4682,1845,AMER,electronics,online,31.93,5,0.210,none,2024-12-25\r\n4683,2314,EMEA,grocery,online,53.88,2,0.242,none,2024-12-01\r\n4684,1753,APAC,electronics,mobile,25.70,8,0.006,coupon,2024-05-23\r\n4685,1703,AMER,sports,mobile,72.44,7,0.199,none,2024-01-21\r\n4686,1863,EMEA,grocery,online,39.53,4,0.102,bundle,2024-03-05\r\n4687,2168,EMEA,toys,online,37.40,4,0.249,none,2024-06-12\r\n4688,139
4,LATAM,grocery,online,67.79,2,0.228,none,2024-04-26\r\n4689,2409,APAC,grocery,retail,45.30,4,0.203,none,2024-10-19\r\n4690,1468,AMER,sports,retail,164.09,4,0.136,bundle,2024-10-15\r\n4691,2099,AMER,home,retail,99.38,3,0.154,coupon,2024-08-14\r\n4692,1362,AMER,toys,retail,87.65,6,0.009,none,2024-09-06\r\n4693,1328,APAC,toys,retail,67.18,1,0.152,coupon,2024-09-04\r\n4694,2174,LATAM,sports,online,44.66,4,0.125,loyalty,2024-08-19\r\n4695,1733,LATAM,electronics,online,65.12,7,0.075,coupon,2024-10-01\r\n4696,1151,APAC,sports,retail,42.73,5,0.242,none,2024-06-10\r\n4697,1016,AMER,electronics,online,104.87,3,0.123,none,2024-05-19\r\n4698,1851,EMEA,toys,retail,78.16,8,0.010,none,2024-05-03\r\n4699,1456,APAC,home,online,47.09,1,0.047,bundle,2024-11-06\r\n4700,1447,LATAM,fashion,retail,33.39,7,0.042,coupon,2024-10-20\r\n4701,1466,AMER,electronics,online,110.59,2,0.247,none,2024-08-11\r\n4702,1822,EMEA,home,retail,129.58,3,0.145,none,2024-11-13\r\n4703,2444,EMEA,sports,retail,67.92,5,0.219,bundle,2024-06-07\r\n4704,1078,APAC,grocery,online,106.14,4,0.122,none,2024-02-22\r\n4705,1236,AMER,fashion,partner,48.65,3,0.208,coupon,2024-07-06\r\n4706,1137,APAC,fashion,online,52.29,6,0.241,none,2024-06-03\r\n4707,1225,APAC,home,online,51.41,3,0.147,none,2024-09-19\r\n4708,1288,LATAM,sports,online,57.23,1,0.064,coupon,2024-12-08\r\n4709,2272,EMEA,toys,online,41.91,8,0.162,coupon,2024-07-11\r\n4710,1701,LATAM,electronics,retail,82.20,8,0.244,loyalty,2024-12-20\r\n4711,1520,APAC,electronics,online,44.17,3,0.069,loyalty,2024-03-06\r\n4712,2228,EMEA,fashion,online,29.42,3,0.000,bundle,2024-02-14\r\n4713,1310,AMER,fashion,online,113.30,7,0.074,none,2024-10-01\r\n4714,1890,LATAM,grocery,online,80.34,1,0.043,coupon,2024-11-02\r\n4715,2052,LATAM,toys,retail,178.38,3,0.023,bundle,2024-01-04\r\n4716,1656,LATAM,grocery,retail,30.68,5,0.144,none,2024-05-18\r\n4717,2275,LATAM,grocery,online,56.30,1,0.042,none,2024-04-01\r\n4718,1902,AMER,electronics,online,74.25,1,0.020,none,2024-05-06\r\n4719,1227,
AMER,grocery,online,68.77,1,0.135,bundle,2024-09-27\r\n4720,1447,LATAM,sports,retail,29.74,6,0.101,none,2024-03-17\r\n4721,1823,EMEA,home,partner,106.33,7,0.008,none,2024-04-16\r\n4722,2071,APAC,home,online,99.84,2,0.214,none,2024-07-09\r\n4723,1685,AMER,electronics,retail,52.74,6,0.086,loyalty,2024-02-28\r\n4724,1974,EMEA,sports,mobile,115.59,5,0.046,bundle,2024-05-07\r\n4725,1080,LATAM,fashion,retail,40.63,2,0.094,none,2024-06-08\r\n4726,1618,EMEA,electronics,online,60.25,8,0.200,coupon,2024-07-14\r\n4727,2160,LATAM,electronics,partner,57.06,1,0.087,bundle,2024-08-11\r\n4728,1447,LATAM,electronics,online,161.85,1,0.012,loyalty,2024-09-03\r\n4729,1858,LATAM,grocery,online,30.69,1,0.125,bundle,2024-01-28\r\n4730,1679,APAC,electronics,online,45.88,7,0.126,none,2024-11-06\r\n4731,2417,LATAM,home,retail,53.32,8,0.248,coupon,2024-03-25\r\n4732,1190,EMEA,electronics,mobile,127.44,1,0.130,none,2024-12-02\r\n4733,1435,AMER,fashion,retail,33.70,4,0.141,bundle,2024-08-19\r\n4734,1875,EMEA,fashion,retail,30.62,5,0.211,none,2024-04-06\r\n4735,1349,APAC,grocery,online,66.71,6,0.161,bundle,2024-05-07\r\n4736,1373,LATAM,grocery,online,38.29,8,0.215,coupon,2024-05-07\r\n4737,1743,LATAM,sports,mobile,78.16,1,0.114,loyalty,2024-06-14\r\n4738,1060,LATAM,home,online,55.11,5,0.218,bundle,2024-10-16\r\n4739,1899,APAC,grocery,partner,31.48,2,0.120,bundle,2024-11-12\r\n4740,1423,EMEA,grocery,online,134.92,3,0.243,coupon,2024-03-14\r\n4741,1453,APAC,fashion,online,93.04,1,0.155,coupon,2024-12-16\r\n4742,2211,APAC,fashion,online,46.65,5,0.102,loyalty,2024-09-12\r\n4743,1935,EMEA,electronics,online,72.13,7,0.101,none,2024-04-22\r\n4744,2249,LATAM,grocery,partner,78.53,1,0.135,bundle,2024-04-03\r\n4745,2229,APAC,fashion,online,68.28,8,0.081,none,2024-01-23\r\n4746,1253,AMER,fashion,online,42.27,2,0.183,coupon,2024-12-10\r\n4747,1169,LATAM,toys,retail,15.68,3,0.237,none,2024-08-12\r\n4748,1797,LATAM,home,retail,139.53,4,0.112,bundle,2024-07-11\r\n4749,2020,AMER,toys,retail,102.64,5,0.124,loyal
ty,2024-09-21\r\n4750,1454,APAC,fashion,online,95.91,1,0.243,loyalty,2024-11-27\r\n4751,1068,APAC,fashion,retail,31.69,7,0.064,coupon,2024-09-17\r\n4752,2301,EMEA,electronics,online,130.19,1,0.120,none,2024-01-17\r\n4753,1427,EMEA,toys,mobile,54.09,1,0.003,none,2024-02-12\r\n4754,1144,APAC,grocery,mobile,97.33,5,0.052,coupon,2024-05-24\r\n4755,1915,LATAM,grocery,retail,89.97,5,0.040,none,2024-04-21\r\n4756,2015,APAC,toys,retail,100.45,6,0.027,none,2024-06-08\r\n4757,1198,AMER,toys,mobile,122.24,6,0.027,none,2024-11-14\r\n4758,2490,AMER,electronics,online,45.16,7,0.005,none,2024-03-14\r\n4759,1203,AMER,electronics,retail,59.36,8,0.115,none,2024-08-27\r\n4760,1247,AMER,grocery,online,62.87,2,0.178,none,2024-09-13\r\n4761,1975,EMEA,electronics,retail,39.26,6,0.028,loyalty,2024-01-13\r\n4762,2292,EMEA,sports,online,87.99,2,0.124,none,2024-04-14\r\n4763,1165,AMER,electronics,partner,28.33,4,0.079,coupon,2024-11-16\r\n4764,2452,LATAM,grocery,online,85.56,7,0.124,none,2024-11-25\r\n4765,2385,APAC,electronics,online,59.75,6,0.236,none,2024-06-03\r\n4766,2167,APAC,electronics,online,147.37,8,0.220,none,2024-07-23\r\n4767,1145,AMER,grocery,online,23.72,3,0.183,none,2024-03-02\r\n4768,1527,AMER,electronics,mobile,47.45,2,0.070,none,2024-12-02\r\n4769,2184,APAC,electronics,mobile,31.92,4,0.001,loyalty,2024-06-18\r\n4770,1135,APAC,grocery,retail,29.32,2,0.068,none,2024-05-18\r\n4771,1225,APAC,fashion,retail,139.49,6,0.140,none,2024-11-04\r\n4772,1517,AMER,home,retail,37.83,1,0.062,none,2024-03-24\r\n4773,1416,EMEA,electronics,retail,89.10,5,0.067,none,2024-10-20\r\n4774,1432,APAC,fashion,retail,56.93,7,0.101,bundle,2024-05-08\r\n4775,1567,AMER,electronics,mobile,78.02,8,0.141,none,2024-11-11\r\n4776,1064,AMER,electronics,retail,56.61,2,0.008,none,2024-04-02\r\n4777,2061,EMEA,home,online,67.81,3,0.132,coupon,2024-07-13\r\n4778,1787,APAC,sports,mobile,38.29,8,0.161,none,2024-03-22\r\n4779,1605,APAC,sports,retail,66.82,1,0.100,none,2024-06-22\r\n4780,1624,AMER,grocery,retail,139.89
,3,0.214,coupon,2024-01-05\r\n4781,1671,APAC,grocery,retail,21.58,2,0.144,none,2024-08-03\r\n4782,1438,APAC,sports,online,45.17,5,0.202,none,2024-09-15\r\n4783,2412,LATAM,grocery,retail,99.89,4,0.230,none,2024-07-28\r\n4784,1182,EMEA,sports,online,153.18,4,0.108,none,2024-01-19\r\n4785,2434,APAC,grocery,partner,34.76,2,0.097,none,2024-09-28\r\n4786,1675,LATAM,fashion,mobile,28.62,4,0.014,coupon,2024-11-28\r\n4787,1379,EMEA,sports,online,83.61,5,0.170,bundle,2024-01-12\r\n4788,2391,EMEA,grocery,retail,38.32,1,0.064,none,2024-05-20\r\n4789,2438,AMER,grocery,online,115.84,4,0.085,none,2024-10-18\r\n4790,2484,APAC,home,retail,43.18,2,0.080,none,2024-11-10\r\n4791,2178,AMER,sports,online,68.70,7,0.015,none,2024-12-21\r\n4792,1026,APAC,sports,retail,22.99,6,0.094,none,2024-07-15\r\n4793,1859,AMER,fashion,online,41.09,2,0.159,none,2024-02-09\r\n4794,1224,APAC,sports,retail,27.82,7,0.042,bundle,2024-05-03\r\n4795,1187,AMER,electronics,online,39.30,6,0.057,none,2024-01-04\r\n4796,1618,EMEA,sports,retail,34.04,3,0.196,coupon,2024-02-04\r\n4797,1429,APAC,fashion,retail,61.96,5,0.051,none,2024-03-07\r\n4798,1621,APAC,fashion,mobile,42.91,8,0.105,none,2024-09-04\r\n4799,2232,EMEA,sports,retail,101.14,5,0.126,bundle,2024-04-19\r\n4800,1767,AMER,electronics,retail,116.70,1,0.054,none,2024-12-07\r\n4801,2075,LATAM,grocery,retail,64.36,8,0.086,none,2024-05-03\r\n4802,1143,LATAM,grocery,retail,141.36,5,0.121,none,2024-07-05\r\n4803,2437,LATAM,grocery,online,107.76,5,0.193,none,2024-01-06\r\n4804,2322,AMER,sports,retail,37.55,1,0.130,coupon,2024-11-16\r\n4805,2470,EMEA,electronics,mobile,19.98,1,0.160,none,2024-04-04\r\n4806,1454,APAC,grocery,retail,52.05,4,0.058,loyalty,2024-07-25\r\n4807,1019,APAC,home,online,93.79,3,0.037,none,2024-11-01\r\n4808,1179,APAC,toys,online,76.11,8,0.222,bundle,2024-09-17\r\n4809,2203,APAC,electronics,partner,89.10,5,0.065,coupon,2024-06-19\r\n4810,1916,AMER,home,online,112.23,7,0.141,coupon,2024-01-22\r\n4811,2273,APAC,grocery,retail,45.39,5,0.040,loyalt
y,2024-12-21\r\n4812,2298,APAC,grocery,mobile,59.77,6,0.213,none,2024-05-01\r\n4813,1157,LATAM,home,online,51.07,2,0.148,none,2024-03-21\r\n4814,1183,AMER,home,online,136.48,2,0.224,coupon,2024-06-13\r\n4815,2105,APAC,grocery,online,60.82,1,0.085,bundle,2024-03-24\r\n4816,2218,EMEA,grocery,online,48.67,3,0.009,bundle,2024-09-17\r\n4817,1798,AMER,electronics,mobile,32.44,8,0.233,coupon,2024-05-18\r\n4818,1182,EMEA,grocery,partner,48.65,1,0.172,none,2024-09-26\r\n4819,2020,AMER,home,online,92.40,4,0.054,coupon,2024-10-16\r\n4820,2431,LATAM,sports,online,36.84,2,0.026,none,2024-03-08\r\n4821,1792,AMER,electronics,online,59.73,6,0.227,coupon,2024-05-28\r\n4822,2435,AMER,fashion,online,72.84,6,0.043,coupon,2024-07-23\r\n4823,1479,AMER,sports,partner,16.20,1,0.058,none,2024-05-19\r\n4824,1438,APAC,grocery,retail,19.89,2,0.191,none,2024-10-05\r\n4825,1768,AMER,grocery,retail,125.49,8,0.151,bundle,2024-07-06\r\n4826,2066,APAC,home,retail,97.45,8,0.180,coupon,2024-10-16\r\n4827,1501,AMER,home,online,50.03,2,0.221,none,2024-11-07\r\n4828,1334,APAC,home,partner,22.86,5,0.034,none,2024-04-08\r\n4829,1612,LATAM,sports,retail,40.78,6,0.157,none,2024-04-16\r\n4830,1958,APAC,sports,online,24.72,6,0.044,none,2024-09-21\r\n4831,2244,LATAM,electronics,retail,78.76,5,0.060,bundle,2024-09-27\r\n4832,1949,AMER,fashion,mobile,44.15,6,0.143,none,2024-05-28\r\n4833,2070,APAC,fashion,online,65.54,5,0.124,none,2024-08-20\r\n4834,1321,EMEA,home,retail,90.85,8,0.013,none,2024-02-14\r\n4835,2061,EMEA,grocery,online,42.93,5,0.236,none,2024-06-10\r\n4836,2306,AMER,fashion,online,39.33,4,0.093,loyalty,2024-12-07\r\n4837,1084,AMER,home,mobile,34.78,8,0.160,coupon,2024-02-07\r\n4838,1916,AMER,electronics,online,40.78,2,0.136,none,2024-01-14\r\n4839,2124,AMER,grocery,online,63.04,6,0.234,none,2024-05-27\r\n4840,1526,EMEA,sports,online,32.27,2,0.114,bundle,2024-11-14\r\n4841,1714,APAC,toys,retail,110.97,1,0.208,none,2024-02-28\r\n4842,1858,LATAM,grocery,retail,58.78,6,0.061,none,2024-04-24\r\n4843,1975
,EMEA,grocery,online,108.74,2,0.169,none,2024-10-02\r\n4844,1549,APAC,toys,online,23.33,8,0.211,none,2024-11-02\r\n4845,2464,LATAM,grocery,online,69.42,8,0.148,none,2024-11-22\r\n4846,1015,AMER,sports,partner,233.81,3,0.084,bundle,2024-11-12\r\n4847,2270,APAC,electronics,online,66.97,7,0.249,none,2024-11-05\r\n4848,2299,EMEA,grocery,online,57.02,4,0.244,coupon,2024-01-02\r\n4849,1121,EMEA,toys,mobile,55.99,1,0.132,none,2024-02-13\r\n4850,1619,APAC,grocery,retail,122.45,2,0.193,none,2024-08-06\r\n4851,1818,AMER,grocery,online,69.80,2,0.113,coupon,2024-09-19\r\n4852,1766,AMER,sports,online,46.25,1,0.126,none,2024-10-08\r\n4853,2266,LATAM,electronics,online,51.02,7,0.182,coupon,2024-03-11\r\n4854,2096,LATAM,sports,retail,96.88,3,0.119,none,2024-06-14\r\n4855,1165,AMER,grocery,retail,111.87,6,0.075,coupon,2024-05-13\r\n4856,1557,LATAM,home,retail,59.23,2,0.177,coupon,2024-06-08\r\n4857,1694,APAC,grocery,retail,111.67,4,0.018,none,2024-08-13\r\n4858,1090,AMER,sports,retail,53.16,2,0.158,coupon,2024-07-01\r\n4859,2073,AMER,grocery,retail,54.61,2,0.222,none,2024-12-12\r\n4860,1593,AMER,fashion,partner,75.87,5,0.062,bundle,2024-04-12\r\n4861,2354,LATAM,home,mobile,75.38,4,0.154,coupon,2024-09-01\r\n4862,1856,EMEA,sports,retail,85.44,7,0.125,none,2024-11-07\r\n4863,1215,LATAM,sports,retail,195.63,1,0.063,coupon,2024-09-10\r\n4864,1426,AMER,fashion,online,69.13,6,0.066,none,2024-03-27\r\n4865,1011,APAC,sports,online,44.25,2,0.072,none,2024-04-27\r\n4866,2230,LATAM,grocery,online,43.50,6,0.147,none,2024-11-27\r\n4867,2255,AMER,electronics,online,61.27,8,0.212,none,2024-02-23\r\n4868,2304,LATAM,electronics,online,50.59,6,0.022,none,2024-07-07\r\n4869,1214,EMEA,electronics,mobile,141.47,6,0.077,loyalty,2024-12-06\r\n4870,1331,AMER,fashion,retail,30.14,3,0.155,loyalty,2024-10-16\r\n4871,2200,LATAM,electronics,retail,77.28,7,0.120,loyalty,2024-02-27\r\n4872,2341,EMEA,electronics,retail,37.79,3,0.062,none,2024-01-20\r\n4873,2000,APAC,electronics,retail,85.00,5,0.010,bundle,2024-09-
26\r\n4874,1677,EMEA,fashion,online,33.37,8,0.143,bundle,2024-07-19\r\n4875,1073,AMER,sports,retail,47.09,8,0.040,none,2024-12-26\r\n4876,1186,APAC,grocery,online,71.52,2,0.122,none,2024-02-28\r\n4877,2297,EMEA,electronics,online,47.55,5,0.006,none,2024-10-11\r\n4878,1698,EMEA,fashion,retail,29.74,7,0.009,bundle,2024-04-11\r\n4879,1477,APAC,home,online,47.40,4,0.005,coupon,2024-10-21\r\n4880,1819,AMER,fashion,mobile,35.99,6,0.175,none,2024-02-10\r\n4881,2495,EMEA,grocery,mobile,60.14,7,0.169,none,2024-07-19\r\n4882,1898,EMEA,grocery,online,98.07,3,0.135,coupon,2024-08-19\r\n4883,1424,APAC,toys,retail,27.33,8,0.049,bundle,2024-07-21\r\n4884,1087,AMER,electronics,partner,82.55,1,0.042,none,2024-10-20\r\n4885,2400,EMEA,home,retail,97.12,8,0.139,none,2024-10-17\r\n4886,1105,AMER,grocery,online,73.39,8,0.233,coupon,2024-09-01\r\n4887,2304,LATAM,grocery,online,103.04,8,0.158,none,2024-01-09\r\n4888,1709,EMEA,grocery,mobile,111.85,8,0.216,none,2024-08-14\r\n4889,1985,AMER,toys,online,56.41,8,0.222,loyalty,2024-04-14\r\n4890,1911,LATAM,grocery,online,55.26,5,0.085,none,2024-05-10\r\n4891,1554,AMER,grocery,online,41.76,8,0.196,none,2024-04-10\r\n4892,1420,APAC,home,online,59.13,6,0.175,none,2024-01-10\r\n4893,2198,EMEA,sports,online,122.58,4,0.241,bundle,2024-07-16\r\n4894,2027,EMEA,grocery,retail,36.92,6,0.041,loyalty,2024-01-01\r\n4895,1416,EMEA,home,online,56.95,7,0.047,none,2024-04-06\r\n4896,2352,APAC,toys,mobile,22.65,1,0.084,none,2024-05-15\r\n4897,2041,LATAM,sports,retail,76.31,8,0.091,none,2024-08-01\r\n4898,1585,AMER,grocery,online,58.96,4,0.063,bundle,2024-06-24\r\n4899,1385,LATAM,toys,mobile,50.87,6,0.028,coupon,2024-11-17\r\n4900,2418,AMER,home,retail,57.51,7,0.229,coupon,2024-06-22\r\n4901,2157,AMER,home,mobile,71.28,4,0.103,bundle,2024-11-20\r\n4902,1871,APAC,grocery,retail,20.62,8,0.078,none,2024-04-04\r\n4903,1575,APAC,grocery,online,41.23,3,0.114,none,2024-01-05\r\n4904,1381,LATAM,home,mobile,118.89,4,0.030,none,2024-01-16\r\n4905,1080,LATAM,fashion,online,
154.30,6,0.190,loyalty,2024-11-22\r\n4906,1918,EMEA,home,online,74.79,8,0.152,loyalty,2024-05-11\r\n4907,2351,EMEA,toys,online,99.18,1,0.065,none,2024-11-17\r\n4908,1312,EMEA,sports,retail,61.71,6,0.129,none,2024-11-19\r\n4909,1197,LATAM,fashion,online,59.89,1,0.085,bundle,2024-09-13\r\n4910,2489,LATAM,toys,online,42.06,5,0.149,none,2024-12-22\r\n4911,2374,LATAM,fashion,retail,43.76,8,0.038,none,2024-05-21\r\n4912,2392,EMEA,electronics,retail,73.74,3,0.103,bundle,2024-10-28\r\n4913,2210,APAC,fashion,online,32.31,2,0.219,none,2024-07-23\r\n4914,1358,APAC,home,online,25.89,4,0.161,coupon,2024-01-05\r\n4915,1135,APAC,sports,mobile,32.24,1,0.247,bundle,2024-06-26\r\n4916,2305,AMER,home,online,90.23,3,0.051,none,2024-07-03\r\n4917,1999,EMEA,sports,mobile,16.01,6,0.029,coupon,2024-08-17\r\n4918,1823,EMEA,electronics,retail,120.61,8,0.057,none,2024-04-18\r\n4919,1294,APAC,home,online,118.58,2,0.170,none,2024-05-15\r\n4920,1724,LATAM,electronics,online,31.46,4,0.190,none,2024-10-26\r\n4921,2054,AMER,electronics,retail,96.04,2,0.222,none,2024-08-01\r\n4922,1866,EMEA,toys,online,69.84,8,0.035,none,2024-09-25\r\n4923,1329,APAC,fashion,online,64.86,3,0.075,bundle,2024-05-26\r\n4924,2445,APAC,fashion,retail,60.76,2,0.041,coupon,2024-08-17\r\n4925,2002,APAC,grocery,mobile,39.70,3,0.016,none,2024-09-28\r\n4926,1734,AMER,fashion,online,72.28,1,0.240,bundle,2024-11-27\r\n4927,2270,APAC,electronics,online,78.77,4,0.065,none,2024-05-17\r\n4928,1760,LATAM,electronics,online,93.50,5,0.141,coupon,2024-11-20\r\n4929,2159,AMER,electronics,retail,22.81,1,0.163,bundle,2024-06-15\r\n4930,1002,EMEA,home,online,54.74,5,0.113,bundle,2024-10-26\r\n4931,1755,APAC,electronics,retail,38.46,7,0.123,coupon,2024-09-08\r\n4932,2028,APAC,electronics,online,32.24,2,0.105,bundle,2024-06-04\r\n4933,2486,APAC,grocery,retail,128.23,4,0.233,coupon,2024-11-24\r\n4934,1979,APAC,home,online,56.28,2,0.220,none,2024-08-20\r\n4935,1837,LATAM,fashion,retail,64.93,4,0.189,none,2024-10-20\r\n4936,2334,LATAM,home,online
,32.89,7,0.101,loyalty,2024-08-16\r\n4937,2235,AMER,grocery,online,65.85,5,0.157,none,2024-09-27\r\n4938,2139,AMER,grocery,partner,95.98,7,0.138,none,2024-11-28\r\n4939,2433,APAC,home,online,70.42,8,0.044,none,2024-05-19\r\n4940,1612,LATAM,home,retail,22.81,4,0.043,none,2024-04-18\r\n4941,1289,LATAM,electronics,online,40.88,5,0.171,none,2024-01-01\r\n4942,2033,LATAM,electronics,online,89.72,8,0.238,none,2024-11-16\r\n4943,1313,EMEA,home,mobile,33.21,6,0.088,none,2024-06-12\r\n4944,2146,APAC,home,online,48.03,7,0.237,none,2024-01-16\r\n4945,2259,AMER,fashion,retail,54.46,7,0.017,none,2024-04-27\r\n4946,1595,AMER,fashion,online,43.59,6,0.092,none,2024-08-22\r\n4947,2153,APAC,sports,retail,84.91,7,0.061,bundle,2024-05-17\r\n4948,1526,EMEA,electronics,online,46.09,6,0.212,none,2024-03-26\r\n4949,1961,EMEA,fashion,retail,54.91,4,0.078,coupon,2024-06-11\r\n4950,1491,EMEA,grocery,online,163.20,4,0.250,none,2024-11-25\r\n4951,1640,APAC,electronics,partner,68.58,4,0.224,none,2024-05-13\r\n4952,1376,EMEA,fashion,retail,69.67,7,0.021,none,2024-10-02\r\n4953,1946,AMER,electronics,retail,136.75,4,0.098,none,2024-04-21\r\n4954,1295,EMEA,home,online,47.70,3,0.137,none,2024-02-15\r\n4955,1540,LATAM,grocery,mobile,22.87,1,0.077,none,2024-01-12\r\n4956,2139,AMER,grocery,online,83.30,5,0.071,bundle,2024-07-02\r\n4957,2246,AMER,home,retail,61.81,5,0.047,none,2024-12-01\r\n4958,1685,AMER,home,mobile,35.07,4,0.239,none,2024-11-27\r\n4959,1529,LATAM,grocery,online,60.66,7,0.111,bundle,2024-08-21\r\n4960,1755,APAC,sports,online,23.36,8,0.129,none,2024-01-14\r\n4961,2471,APAC,fashion,online,44.45,3,0.082,none,2024-11-01\r\n4962,2122,AMER,sports,mobile,20.14,6,0.128,bundle,2024-11-04\r\n4963,1306,LATAM,fashion,online,27.49,5,0.028,bundle,2024-06-03\r\n4964,2155,APAC,grocery,online,43.07,1,0.247,coupon,2024-03-26\r\n4965,1056,LATAM,grocery,online,25.44,5,0.236,none,2024-10-03\r\n4966,1775,EMEA,grocery,retail,85.61,4,0.200,none,2024-02-17\r\n4967,2279,LATAM,fashion,online,47.64,7,0.210,coupon,
2024-11-26\r\n4968,2391,EMEA,toys,online,111.42,3,0.222,none,2024-05-16\r\n4969,1055,AMER,toys,retail,65.31,4,0.040,bundle,2024-08-19\r\n4970,1752,APAC,sports,online,35.59,7,0.035,none,2024-08-14\r\n4971,1108,EMEA,home,online,49.22,7,0.019,coupon,2024-04-11\r\n4972,1375,AMER,electronics,online,50.56,5,0.015,none,2024-10-05\r\n4973,1202,APAC,toys,retail,34.48,6,0.240,bundle,2024-12-01\r\n4974,1289,LATAM,grocery,mobile,90.11,7,0.094,none,2024-07-12\r\n4975,2349,APAC,home,online,66.53,7,0.038,none,2024-05-09\r\n4976,1409,APAC,electronics,retail,67.23,1,0.128,bundle,2024-10-04\r\n4977,1478,EMEA,grocery,online,44.48,3,0.087,none,2024-04-19\r\n4978,1125,LATAM,fashion,retail,58.86,4,0.170,none,2024-07-18\r\n4979,2450,EMEA,fashion,online,48.32,8,0.042,none,2024-08-23\r\n4980,2447,AMER,electronics,online,44.46,2,0.167,loyalty,2024-08-12\r\n4981,2357,EMEA,home,online,45.54,2,0.136,coupon,2024-05-28\r\n4982,2180,AMER,sports,retail,32.38,3,0.206,bundle,2024-08-04\r\n4983,2194,APAC,fashion,mobile,52.70,3,0.050,none,2024-01-14\r\n4984,1605,APAC,grocery,online,80.87,6,0.122,coupon,2024-09-04\r\n4985,1767,AMER,sports,retail,58.96,6,0.148,none,2024-09-21\r\n4986,2181,AMER,grocery,retail,84.18,8,0.248,none,2024-05-09\r\n4987,1716,LATAM,toys,partner,67.38,7,0.124,none,2024-06-13\r\n4988,1956,APAC,grocery,retail,39.96,1,0.029,coupon,2024-03-17\r\n4989,1449,EMEA,grocery,mobile,61.59,6,0.156,none,2024-05-20\r\n4990,1439,LATAM,home,online,71.90,3,0.055,bundle,2024-01-24\r\n4991,2421,AMER,grocery,retail,83.86,2,0.060,coupon,2024-02-10\r\n4992,2162,EMEA,home,mobile,35.95,5,0.008,loyalty,2024-02-02\r\n4993,2237,EMEA,electronics,online,112.80,3,0.090,bundle,2024-03-11\r\n4994,2109,EMEA,grocery,retail,33.55,4,0.134,none,2024-11-18\r\n4995,2033,LATAM,home,online,64.32,4,0.209,coupon,2024-06-18\r\n4996,1881,LATAM,grocery,retail,108.92,2,0.203,coupon,2024-11-25\r\n4997,1575,APAC,home,retail,31.22,7,0.045,none,2024-04-24\r\n4998,2039,EMEA,fashion,online,145.00,2,0.204,coupon,2024-02-18\r\n4999,149
0,AMER,electronics,mobile,41.97,7,0.041,none,2024-03-20\r\n5000,1949,AMER,sports,retail,29.40,5,0.178,loyalty,2024-05-01\r\n5001,1994,LATAM,home,online,150.82,8,0.200,bundle,2024-02-19\r\n5002,1165,AMER,sports,online,37.45,3,0.114,none,2024-10-14\r\n5003,1319,EMEA,grocery,retail,88.18,5,0.112,none,2024-05-16\r\n5004,1160,LATAM,electronics,retail,61.53,4,0.162,none,2024-07-21\r\n5005,2372,AMER,grocery,online,62.15,6,0.040,none,2024-08-24\r\n5006,1979,APAC,electronics,online,38.88,1,0.034,none,2024-08-08\r\n5007,1660,AMER,grocery,mobile,57.89,4,0.096,none,2024-09-01\r\n5008,1575,APAC,electronics,retail,69.50,3,0.125,coupon,2024-09-02\r\n5009,1029,EMEA,home,online,157.30,7,0.163,none,2024-09-05\r\n5010,1395,APAC,grocery,retail,35.72,6,0.192,bundle,2024-10-14\r\n5011,1613,EMEA,grocery,online,93.76,5,0.022,none,2024-11-17\r\n5012,1128,LATAM,electronics,online,89.32,2,0.073,none,2024-06-21\r\n5013,2273,APAC,electronics,retail,48.90,4,0.168,none,2024-01-06\r\n5014,2415,AMER,grocery,retail,140.39,1,0.057,none,2024-12-09\r\n5015,2482,EMEA,toys,online,19.65,8,0.176,bundle,2024-12-23\r\n5016,1598,EMEA,fashion,online,143.66,6,0.035,coupon,2024-06-05\r\n5017,1745,APAC,fashion,mobile,98.73,2,0.129,none,2024-12-12\r\n5018,1492,APAC,toys,online,46.20,6,0.211,coupon,2024-03-07\r\n5019,2412,LATAM,home,online,49.90,4,0.054,loyalty,2024-09-20\r\n5020,2478,AMER,sports,online,44.01,1,0.097,none,2024-12-16\r\n5021,2428,LATAM,electronics,retail,71.75,1,0.148,none,2024-09-16\r\n5022,2157,AMER,electronics,partner,36.28,8,0.007,none,2024-02-03\r\n5023,1799,EMEA,fashion,partner,122.59,2,0.008,none,2024-08-25\r\n5024,1463,EMEA,home,retail,46.35,6,0.099,none,2024-08-13\r\n5025,2317,LATAM,grocery,retail,45.61,3,0.118,none,2024-12-25\r\n5026,1399,AMER,electronics,online,23.41,8,0.096,none,2024-06-21\r\n5027,2337,AMER,grocery,partner,36.36,3,0.055,none,2024-01-21\r\n5028,2091,LATAM,grocery,retail,80.67,3,0.209,none,2024-09-07\r\n5029,1934,EMEA,fashion,retail,114.46,6,0.042,none,2024-02-22\r\n5030,2
275,LATAM,fashion,online,47.84,4,0.122,bundle,2024-05-08\r\n5031,1682,EMEA,home,online,57.80,8,0.164,bundle,2024-09-27\r\n5032,2468,EMEA,electronics,mobile,75.80,5,0.004,coupon,2024-03-02\r\n5033,2348,EMEA,fashion,online,22.04,4,0.241,none,2024-06-10\r\n5034,1056,LATAM,toys,retail,54.77,2,0.087,none,2024-01-14\r\n5035,1861,AMER,fashion,retail,16.66,8,0.180,none,2024-12-18\r\n5036,1638,EMEA,grocery,online,27.16,7,0.143,none,2024-01-07\r\n5037,2132,LATAM,electronics,mobile,48.00,5,0.067,bundle,2024-05-23\r\n5038,2331,APAC,electronics,online,76.86,2,0.156,none,2024-09-27\r\n5039,1188,LATAM,fashion,online,79.28,1,0.071,bundle,2024-05-16\r\n5040,2076,AMER,grocery,online,108.55,5,0.040,coupon,2024-12-16\r\n5041,1531,EMEA,fashion,online,50.88,3,0.198,coupon,2024-03-14\r\n5042,2382,LATAM,grocery,mobile,53.05,5,0.015,coupon,2024-11-11\r\n5043,2076,AMER,home,online,37.55,7,0.188,bundle,2024-10-10\r\n5044,1065,AMER,electronics,online,51.37,5,0.046,coupon,2024-05-07\r\n5045,1781,LATAM,toys,partner,43.12,4,0.068,bundle,2024-06-08\r\n5046,2448,APAC,electronics,online,83.93,4,0.134,none,2024-05-09\r\n5047,1443,EMEA,electronics,online,30.98,7,0.248,coupon,2024-03-12\r\n5048,1038,APAC,grocery,retail,34.62,3,0.028,none,2024-06-08\r\n5049,1182,EMEA,grocery,mobile,56.09,1,0.082,none,2024-12-10\r\n5050,1238,AMER,electronics,online,31.48,5,0.186,bundle,2024-07-13\r\n5051,1320,EMEA,fashion,retail,73.26,8,0.145,bundle,2024-08-27\r\n5052,2284,EMEA,electronics,online,46.99,2,0.168,none,2024-12-28\r\n5053,2180,AMER,toys,retail,64.01,2,0.171,loyalty,2024-08-23\r\n5054,1443,EMEA,sports,online,49.90,3,0.241,loyalty,2024-12-03\r\n5055,1947,EMEA,toys,mobile,41.20,5,0.143,none,2024-04-20\r\n5056,1151,APAC,grocery,online,23.14,7,0.220,none,2024-10-07\r\n5057,1464,APAC,electronics,retail,26.92,1,0.123,bundle,2024-02-24\r\n5058,2301,EMEA,grocery,online,79.64,5,0.023,none,2024-04-24\r\n5059,1367,AMER,toys,online,74.67,3,0.077,coupon,2024-08-17\r\n5060,1260,LATAM,fashion,retail,72.23,5,0.139,loyalty,202
4-02-15\r\n5061,1762,LATAM,fashion,retail,41.34,8,0.246,none,2024-04-28\r\n5062,1444,EMEA,grocery,retail,39.63,6,0.141,coupon,2024-10-15\r\n5063,1602,EMEA,home,partner,201.99,3,0.246,none,2024-10-06\r\n5064,1363,EMEA,fashion,online,78.47,7,0.065,none,2024-06-01\r\n5065,2196,AMER,sports,retail,48.99,4,0.023,none,2024-01-10\r\n5066,1988,AMER,home,retail,34.87,7,0.023,none,2024-08-02\r\n5067,1113,EMEA,grocery,mobile,75.37,3,0.189,none,2024-04-02\r\n5068,1574,AMER,grocery,retail,80.68,7,0.066,none,2024-02-10\r\n5069,1333,EMEA,electronics,retail,26.01,1,0.107,none,2024-02-09\r\n5070,1334,APAC,toys,retail,62.13,6,0.010,none,2024-06-10\r\n5071,1186,APAC,home,retail,55.91,4,0.124,none,2024-06-04\r\n5072,1826,LATAM,home,retail,52.47,1,0.183,bundle,2024-06-17\r\n5073,1568,AMER,fashion,online,36.75,1,0.149,none,2024-12-06\r\n5074,1840,LATAM,electronics,retail,46.16,2,0.075,loyalty,2024-01-26\r\n5075,2302,APAC,sports,retail,97.43,4,0.182,bundle,2024-07-26\r\n5076,2360,EMEA,home,mobile,171.23,8,0.157,none,2024-07-20\r\n5077,1806,APAC,electronics,online,24.64,1,0.023,none,2024-09-17\r\n5078,2419,LATAM,sports,mobile,29.15,1,0.166,none,2024-12-19\r\n5079,1465,AMER,home,partner,35.40,3,0.036,none,2024-09-15\r\n5080,1640,APAC,toys,mobile,29.49,4,0.050,none,2024-12-09\r\n5081,1661,LATAM,home,online,39.32,3,0.126,loyalty,2024-07-17\r\n5082,1380,AMER,grocery,retail,126.42,4,0.101,none,2024-05-27\r\n5083,2480,APAC,home,mobile,78.32,7,0.151,none,2024-04-11\r\n5084,2298,APAC,fashion,online,56.45,4,0.028,none,2024-10-13\r\n5085,2453,AMER,fashion,mobile,82.40,5,0.158,none,2024-12-25\r\n5086,1809,APAC,toys,online,136.15,1,0.162,none,2024-06-10\r\n5087,1718,EMEA,home,online,86.73,2,0.224,none,2024-06-25\r\n5088,2109,EMEA,fashion,online,41.86,6,0.223,none,2024-10-07\r\n5089,2265,APAC,electronics,online,38.43,7,0.039,none,2024-07-16\r\n5090,1895,AMER,electronics,online,107.33,6,0.062,none,2024-03-04\r\n5091,2416,LATAM,grocery,mobile,32.15,8,0.112,none,2024-01-05\r\n5092,1162,AMER,home,mobile,69.
84,5,0.008,none,2024-05-03\r\n5093,2067,LATAM,fashion,online,86.34,6,0.103,none,2024-05-15\r\n5094,1963,AMER,fashion,online,56.86,7,0.030,none,2024-05-17\r\n5095,2100,APAC,electronics,mobile,43.59,8,0.002,none,2024-09-27\r\n5096,1833,EMEA,electronics,retail,60.84,2,0.084,coupon,2024-12-03\r\n5097,2478,AMER,toys,retail,86.97,8,0.199,loyalty,2024-03-10\r\n5098,1855,APAC,electronics,retail,31.68,6,0.022,none,2024-05-28\r\n5099,1096,EMEA,fashion,retail,76.65,2,0.180,none,2024-01-16\r\n5100,1074,LATAM,sports,retail,36.58,7,0.088,none,2024-04-07\r\n5101,1834,AMER,grocery,retail,82.76,4,0.238,loyalty,2024-01-19\r\n5102,2119,AMER,grocery,online,70.76,7,0.056,coupon,2024-03-09\r\n5103,1482,AMER,grocery,online,196.20,8,0.178,bundle,2024-04-16\r\n5104,1936,EMEA,sports,retail,61.67,6,0.094,coupon,2024-04-21\r\n5105,2143,AMER,electronics,online,106.48,6,0.167,bundle,2024-03-04\r\n5106,1042,LATAM,fashion,mobile,87.71,8,0.165,none,2024-03-10\r\n5107,2327,EMEA,toys,online,63.94,4,0.217,none,2024-09-18\r\n5108,1413,LATAM,grocery,retail,117.45,5,0.041,bundle,2024-01-18\r\n5109,1557,LATAM,fashion,mobile,129.87,4,0.021,bundle,2024-10-15\r\n5110,1827,EMEA,sports,online,54.76,8,0.163,none,2024-03-10\r\n5111,2412,LATAM,toys,retail,88.29,5,0.117,loyalty,2024-04-06\r\n5112,2175,AMER,home,online,67.50,4,0.230,none,2024-08-04\r\n5113,1332,APAC,home,retail,26.03,7,0.203,none,2024-08-07\r\n5114,1074,LATAM,electronics,retail,52.17,3,0.047,bundle,2024-08-06\r\n5115,2230,LATAM,fashion,retail,49.91,8,0.066,none,2024-04-10\r\n5116,1701,LATAM,home,online,62.93,3,0.048,none,2024-08-24\r\n5117,1420,APAC,electronics,mobile,90.19,1,0.110,none,2024-09-08\r\n5118,1434,EMEA,toys,retail,103.86,5,0.037,none,2024-10-04\r\n5119,1764,LATAM,grocery,online,100.82,8,0.200,none,2024-12-28\r\n5120,1078,APAC,home,online,91.86,5,0.018,coupon,2024-04-02\r\n5121,2461,LATAM,fashion,retail,29.85,3,0.170,none,2024-07-08\r\n5122,1526,EMEA,fashion,partner,38.74,7,0.137,none,2024-06-21\r\n5123,1930,AMER,grocery,partner,32.55,6
,0.207,coupon,2024-02-05\r\n5124,2308,AMER,home,retail,31.06,1,0.195,coupon,2024-04-12\r\n5125,2033,LATAM,fashion,partner,22.18,3,0.020,none,2024-07-23\r\n5126,1568,AMER,toys,online,44.41,5,0.106,none,2024-05-16\r\n5127,1736,AMER,grocery,online,97.15,7,0.197,none,2024-03-15\r\n5128,2433,APAC,fashion,retail,293.24,3,0.091,none,2024-12-19\r\n5129,1149,LATAM,grocery,online,40.78,4,0.182,none,2024-11-15\r\n5130,1035,EMEA,home,online,16.86,7,0.215,coupon,2024-05-19\r\n5131,1037,EMEA,grocery,partner,53.36,5,0.032,coupon,2024-11-25\r\n5132,1609,LATAM,fashion,mobile,39.35,8,0.104,none,2024-08-16\r\n5133,1445,APAC,grocery,online,57.35,4,0.151,coupon,2024-11-18\r\n5134,1992,LATAM,electronics,online,130.79,1,0.201,none,2024-06-18\r\n5135,2409,APAC,home,online,135.13,2,0.086,loyalty,2024-10-03\r\n5136,1930,AMER,sports,online,56.76,8,0.035,coupon,2024-04-26\r\n5137,2179,LATAM,home,mobile,72.33,4,0.090,none,2024-12-15\r\n5138,1758,AMER,home,retail,105.76,8,0.129,coupon,2024-11-26\r\n5139,2023,LATAM,sports,online,35.16,5,0.169,bundle,2024-02-02\r\n5140,2143,AMER,sports,online,107.73,3,0.027,none,2024-12-04\r\n5141,1343,LATAM,fashion,online,86.70,5,0.052,bundle,2024-06-08\r\n5142,2234,LATAM,sports,online,60.04,1,0.049,loyalty,2024-10-22\r\n5143,2370,EMEA,fashion,online,34.97,4,0.119,none,2024-07-19\r\n5144,1558,EMEA,electronics,online,65.19,7,0.036,none,2024-10-11\r\n5145,2016,LATAM,fashion,mobile,30.18,5,0.226,none,2024-05-14\r\n5146,1870,EMEA,grocery,online,84.61,3,0.115,none,2024-05-28\r\n5147,2375,AMER,fashion,retail,34.88,1,0.196,coupon,2024-12-19\r\n5148,2138,APAC,electronics,online,51.23,2,0.119,none,2024-03-10\r\n5149,1118,AMER,home,online,67.93,3,0.119,bundle,2024-10-03\r\n5150,2395,APAC,grocery,retail,70.21,5,0.016,coupon,2024-11-19\r\n5151,1772,EMEA,home,mobile,45.19,4,0.029,loyalty,2024-12-16\r\n5152,2386,EMEA,grocery,retail,119.09,1,0.133,none,2024-12-16\r\n5153,1630,APAC,sports,mobile,73.18,1,0.122,bundle,2024-02-13\r\n5154,2069,AMER,fashion,retail,91.31,8,0.086,loyal
ty,2024-05-20\r\n5155,2198,EMEA,grocery,online,36.13,6,0.041,coupon,2024-10-13\r\n5156,1856,EMEA,grocery,mobile,54.08,8,0.222,coupon,2024-03-02\r\n5157,2190,LATAM,grocery,retail,85.97,6,0.198,coupon,2024-10-28\r\n5158,2487,LATAM,grocery,online,55.52,2,0.139,loyalty,2024-08-01\r\n5159,1778,LATAM,grocery,retail,51.27,3,0.034,none,2024-04-15\r\n5160,1729,AMER,grocery,online,74.34,7,0.083,none,2024-04-27\r\n5161,1350,LATAM,toys,retail,38.88,5,0.026,none,2024-08-21\r\n5162,2336,APAC,fashion,retail,13.11,6,0.189,loyalty,2024-06-08\r\n5163,1365,LATAM,home,mobile,96.97,2,0.180,none,2024-01-28\r\n5164,1916,AMER,grocery,online,130.06,7,0.183,none,2024-11-20\r\n5165,2364,APAC,electronics,retail,74.61,3,0.178,none,2024-12-10\r\n5166,1308,EMEA,fashion,retail,42.84,3,0.022,loyalty,2024-11-02\r\n5167,1004,LATAM,grocery,retail,66.39,8,0.244,bundle,2024-04-28\r\n5168,1622,LATAM,grocery,online,88.44,5,0.214,none,2024-05-01\r\n5169,1190,EMEA,electronics,online,105.91,7,0.046,loyalty,2024-07-05\r\n5170,1181,LATAM,electronics,online,39.49,8,0.196,coupon,2024-11-20\r\n5171,1957,AMER,sports,partner,24.30,6,0.208,bundle,2024-05-14\r\n5172,1480,APAC,grocery,retail,57.31,6,0.125,none,2024-01-24\r\n5173,1238,AMER,home,online,64.98,1,0.112,none,2024-02-02\r\n5174,2207,APAC,grocery,online,35.69,7,0.039,none,2024-01-04\r\n5175,1930,AMER,home,mobile,81.44,6,0.215,none,2024-12-09\r\n5176,2367,AMER,toys,online,51.31,6,0.061,loyalty,2024-04-03\r\n5177,1089,LATAM,home,online,75.60,4,0.207,none,2024-02-14\r\n5178,2493,APAC,grocery,online,54.88,7,0.153,none,2024-01-03\r\n5179,1553,LATAM,electronics,online,62.25,1,0.008,none,2024-04-04\r\n5180,1993,APAC,grocery,retail,105.14,4,0.063,none,2024-06-05\r\n5181,1562,AMER,sports,online,144.51,5,0.202,none,2024-04-05\r\n5182,1240,EMEA,home,retail,72.71,7,0.018,none,2024-10-10\r\n5183,2253,AMER,sports,online,16.39,6,0.129,none,2024-11-27\r\n5184,1984,LATAM,electronics,retail,189.35,7,0.050,bundle,2024-10-26\r\n5185,1086,AMER,fashion,mobile,71.67,3,0.062,bundle,
2024-05-07\r\n5186,1186,APAC,toys,mobile,79.64,3,0.149,none,2024-04-19\r\n5187,2238,AMER,home,retail,88.43,1,0.229,none,2024-10-25\r\n5188,2008,APAC,electronics,online,35.12,4,0.219,none,2024-08-02\r\n5189,1787,APAC,grocery,online,28.07,1,0.141,bundle,2024-06-15\r\n5190,1238,AMER,electronics,retail,25.36,7,0.033,none,2024-06-09\r\n5191,1020,APAC,fashion,mobile,65.69,8,0.186,coupon,2024-10-28\r\n5192,1891,APAC,home,online,41.49,5,0.045,coupon,2024-03-15\r\n5193,1013,LATAM,home,mobile,51.14,8,0.057,none,2024-12-19\r\n5194,1744,EMEA,grocery,online,59.62,1,0.068,none,2024-10-01\r\n5195,1770,AMER,fashion,retail,42.36,7,0.069,bundle,2024-09-16\r\n5196,1966,APAC,toys,retail,108.43,7,0.211,none,2024-07-27\r\n5197,1148,AMER,fashion,online,29.48,5,0.028,bundle,2024-09-02\r\n5198,1660,AMER,electronics,online,57.39,6,0.112,bundle,2024-02-14\r\n5199,1185,LATAM,grocery,online,103.62,5,0.074,none,2024-11-27\r\n5200,1690,LATAM,sports,retail,59.52,8,0.156,none,2024-01-19\r\n5201,1130,LATAM,sports,mobile,90.67,4,0.235,none,2024-07-05\r\n5202,1275,EMEA,electronics,online,25.39,5,0.235,none,2024-12-06\r\n5203,1447,LATAM,electronics,online,49.76,5,0.089,none,2024-05-11\r\n5204,2293,LATAM,electronics,retail,78.45,6,0.091,none,2024-07-28\r\n5205,1372,APAC,grocery,retail,28.53,4,0.094,none,2024-03-26\r\n5206,2145,AMER,sports,online,47.34,8,0.067,none,2024-02-09\r\n5207,1724,LATAM,electronics,mobile,41.67,8,0.011,loyalty,2024-08-24\r\n5208,1925,LATAM,sports,retail,218.61,2,0.101,none,2024-03-25\r\n5209,1683,AMER,sports,online,22.66,3,0.233,none,2024-11-19\r\n5210,1581,APAC,home,retail,78.71,4,0.065,none,2024-08-16\r\n5211,2176,AMER,grocery,retail,37.69,2,0.093,loyalty,2024-02-15\r\n5212,1393,LATAM,electronics,mobile,39.02,5,0.021,none,2024-10-17\r\n5213,1589,AMER,grocery,partner,69.09,8,0.159,coupon,2024-08-07\r\n5214,2415,AMER,home,retail,37.90,5,0.180,none,2024-09-11\r\n5215,2073,AMER,grocery,online,155.97,1,0.107,bundle,2024-08-26\r\n5216,2229,APAC,electronics,retail,36.08,6,0.082,none,2
024-09-01\r\n5217,1644,EMEA,grocery,retail,82.09,4,0.232,none,2024-04-03\r\n5218,1964,EMEA,fashion,retail,28.06,7,0.114,bundle,2024-02-03\r\n5219,1695,LATAM,home,online,105.20,7,0.046,loyalty,2024-02-26\r\n5220,1818,AMER,electronics,online,27.60,3,0.115,none,2024-08-25\r\n5221,1904,APAC,toys,retail,53.17,7,0.163,none,2024-10-25\r\n5222,1807,EMEA,home,online,28.03,4,0.069,none,2024-05-06\r\n5223,1261,APAC,fashion,online,135.71,5,0.208,loyalty,2024-11-23\r\n5224,1716,LATAM,electronics,online,42.94,5,0.028,loyalty,2024-12-19\r\n5225,2250,AMER,toys,online,149.17,5,0.005,none,2024-08-02\r\n5226,1380,AMER,electronics,mobile,100.00,5,0.206,loyalty,2024-01-10\r\n5227,1381,LATAM,electronics,online,66.76,2,0.188,loyalty,2024-03-20\r\n5228,1985,AMER,grocery,retail,37.59,7,0.102,none,2024-12-14\r\n5229,1899,APAC,home,retail,42.11,8,0.213,none,2024-03-14\r\n5230,1306,LATAM,fashion,online,60.07,3,0.099,none,2024-11-09\r\n5231,1521,LATAM,toys,online,60.46,4,0.033,coupon,2024-11-17\r\n5232,1222,AMER,grocery,online,70.38,3,0.226,none,2024-02-11\r\n5233,1628,EMEA,grocery,retail,50.36,2,0.028,none,2024-12-12\r\n5234,2492,LATAM,electronics,retail,35.82,5,0.009,none,2024-05-22\r\n5235,1951,LATAM,electronics,online,76.92,8,0.132,coupon,2024-05-21\r\n5236,1162,AMER,fashion,retail,115.20,1,0.108,none,2024-09-27\r\n5237,1978,AMER,fashion,mobile,46.06,1,0.072,none,2024-02-22\r\n5238,2476,APAC,fashion,online,56.76,3,0.179,none,2024-03-21\r\n5239,1551,APAC,electronics,online,34.33,2,0.174,none,2024-03-27\r\n5240,2455,AMER,toys,mobile,56.98,1,0.113,coupon,2024-03-08\r\n5241,1645,EMEA,sports,retail,45.27,4,0.229,none,2024-04-21\r\n5242,2099,AMER,electronics,online,62.15,7,0.023,none,2024-08-08\r\n5243,1789,EMEA,grocery,online,166.75,6,0.206,none,2024-07-12\r\n5244,1340,LATAM,grocery,online,50.70,2,0.127,bundle,2024-01-14\r\n5245,1208,AMER,electronics,online,47.72,3,0.119,none,2024-11-25\r\n5246,2395,APAC,electronics,retail,111.00,5,0.145,none,2024-06-10\r\n5247,2353,AMER,grocery,online,53.51,2,0
.153,none,2024-03-01\r\n5248,1376,EMEA,grocery,online,29.92,4,0.151,none,2024-04-22\r\n5249,1059,AMER,grocery,retail,127.70,2,0.040,none,2024-06-21\r\n5250,1592,LATAM,home,online,15.09,8,0.066,none,2024-12-01\r\n5251,2018,AMER,grocery,online,32.44,2,0.237,coupon,2024-02-01\r\n5252,1084,AMER,grocery,mobile,56.20,3,0.072,bundle,2024-02-10\r\n5253,1103,EMEA,fashion,online,58.10,2,0.115,none,2024-04-20\r\n5254,1545,AMER,grocery,retail,98.36,5,0.195,none,2024-12-28\r\n5255,1651,LATAM,grocery,online,118.09,7,0.134,none,2024-04-11\r\n5256,2152,EMEA,electronics,mobile,21.24,8,0.091,coupon,2024-06-03\r\n5257,2238,AMER,grocery,online,53.64,2,0.135,none,2024-02-25\r\n5258,2341,EMEA,grocery,online,44.06,7,0.171,none,2024-05-21\r\n5259,1605,APAC,electronics,online,122.75,3,0.147,coupon,2024-02-19\r\n5260,2427,LATAM,grocery,online,77.82,7,0.099,bundle,2024-10-07\r\n5261,1690,LATAM,sports,online,36.14,6,0.115,none,2024-08-24\r\n5262,1715,AMER,home,online,37.47,1,0.099,coupon,2024-12-21\r\n5263,1767,AMER,electronics,online,15.60,6,0.181,none,2024-01-16\r\n5264,2140,AMER,grocery,online,20.96,5,0.150,coupon,2024-05-28\r\n5265,1932,EMEA,electronics,online,52.61,3,0.241,coupon,2024-01-06\r\n5266,1544,LATAM,grocery,mobile,26.18,7,0.014,coupon,2024-05-23\r\n5267,1029,EMEA,grocery,online,86.14,4,0.215,coupon,2024-07-27\r\n5268,2340,EMEA,grocery,retail,57.20,2,0.151,none,2024-06-09\r\n5269,1600,AMER,fashion,online,49.58,4,0.242,none,2024-11-09\r\n5270,1493,APAC,toys,online,32.75,5,0.035,none,2024-06-22\r\n5271,1530,APAC,grocery,retail,58.65,2,0.123,bundle,2024-08-28\r\n5272,1462,LATAM,fashion,online,42.70,7,0.022,none,2024-03-21\r\n5273,2418,AMER,electronics,online,45.32,6,0.057,coupon,2024-06-04\r\n5274,1748,APAC,toys,retail,132.74,2,0.241,none,2024-10-08\r\n5275,2094,AMER,fashion,retail,71.82,3,0.215,none,2024-05-20\r\n5276,2376,LATAM,home,retail,43.49,3,0.236,coupon,2024-07-04\r\n5277,1462,LATAM,fashion,online,35.05,5,0.212,none,2024-08-04\r\n5278,1137,APAC,sports,online,52.63,2,0.167,c
oupon,2024-12-03\r\n5279,2296,AMER,electronics,retail,44.16,4,0.201,none,2024-07-21\r\n5280,1681,LATAM,grocery,retail,35.25,7,0.249,none,2024-06-12\r\n5281,2412,LATAM,electronics,partner,19.88,4,0.053,none,2024-08-13\r\n5282,2039,EMEA,home,online,99.34,2,0.026,none,2024-07-16\r\n5283,2119,AMER,home,online,46.29,4,0.127,none,2024-08-12\r\n5284,1747,EMEA,grocery,online,48.23,5,0.108,loyalty,2024-02-24\r\n5285,1238,AMER,sports,retail,84.28,8,0.087,loyalty,2024-04-09\r\n5286,1113,EMEA,electronics,retail,54.53,3,0.026,bundle,2024-01-12\r\n5287,2093,LATAM,fashion,retail,155.64,2,0.057,none,2024-05-23\r\n5288,2470,EMEA,fashion,mobile,105.00,1,0.188,bundle,2024-02-03\r\n5289,1126,LATAM,electronics,retail,38.50,4,0.099,none,2024-04-21\r\n5290,1135,APAC,home,retail,38.74,7,0.083,bundle,2024-05-12\r\n5291,2101,APAC,sports,retail,29.67,5,0.167,none,2024-09-25\r\n5292,2377,AMER,home,online,51.39,2,0.086,none,2024-09-05\r\n5293,1098,APAC,home,retail,79.04,2,0.229,none,2024-12-12\r\n5294,2003,LATAM,toys,online,64.52,3,0.157,none,2024-01-28\r\n5295,1115,AMER,electronics,online,57.05,8,0.084,coupon,2024-12-15\r\n5296,2226,EMEA,fashion,online,70.03,1,0.197,coupon,2024-03-26\r\n5297,1228,APAC,toys,mobile,83.19,4,0.165,coupon,2024-05-20\r\n5298,1929,LATAM,grocery,online,181.93,3,0.218,none,2024-02-15\r\n5299,2159,AMER,grocery,partner,82.59,4,0.142,bundle,2024-04-14\r\n5300,1828,EMEA,grocery,retail,106.17,1,0.083,coupon,2024-08-13\r\n5301,2020,AMER,electronics,online,48.23,4,0.144,none,2024-06-10\r\n5302,1620,LATAM,home,retail,45.05,3,0.153,none,2024-12-01\r\n5303,1400,EMEA,home,online,100.65,5,0.112,none,2024-02-14\r\n5304,1905,APAC,grocery,online,48.46,1,0.130,none,2024-12-14\r\n5305,2001,EMEA,home,online,46.83,1,0.131,none,2024-07-05\r\n5306,1181,LATAM,electronics,online,76.83,1,0.238,none,2024-05-17\r\n5307,1836,LATAM,home,online,46.71,3,0.037,none,2024-04-09\r\n5308,1799,EMEA,toys,online,64.78,1,0.026,coupon,2024-01-06\r\n5309,1252,APAC,grocery,retail,112.46,5,0.153,coupon,2024-11-
15\r\n5310,1061,APAC,grocery,retail,70.03,8,0.085,none,2024-11-28\r\n5311,1586,LATAM,home,online,113.54,7,0.179,bundle,2024-05-20\r\n5312,2137,LATAM,electronics,online,73.52,4,0.196,none,2024-04-17\r\n5313,1910,LATAM,fashion,retail,82.71,5,0.154,none,2024-05-26\r\n5314,1585,AMER,electronics,online,37.72,8,0.056,loyalty,2024-02-04\r\n5315,1926,AMER,grocery,partner,64.94,1,0.129,none,2024-04-23\r\n5316,1500,EMEA,electronics,mobile,47.69,6,0.214,coupon,2024-05-15\r\n5317,1082,EMEA,sports,online,52.43,8,0.125,coupon,2024-08-13\r\n5318,2233,EMEA,fashion,retail,31.56,3,0.163,coupon,2024-01-21\r\n5319,1446,AMER,sports,online,110.90,4,0.220,bundle,2024-05-24\r\n5320,1993,APAC,toys,online,66.61,8,0.025,coupon,2024-02-09\r\n5321,2102,APAC,grocery,online,51.54,2,0.173,none,2024-03-21\r\n5322,1860,EMEA,electronics,online,46.90,4,0.004,loyalty,2024-05-21\r\n5323,2156,AMER,electronics,retail,36.32,4,0.024,none,2024-11-03\r\n5324,2388,LATAM,electronics,mobile,30.93,5,0.155,coupon,2024-04-05\r\n5325,1759,EMEA,grocery,online,55.13,6,0.092,coupon,2024-03-07\r\n5326,2474,LATAM,sports,online,38.25,5,0.208,none,2024-11-20\r\n5327,2189,LATAM,home,online,111.38,7,0.079,loyalty,2024-06-21\r\n5328,1609,LATAM,home,retail,72.53,6,0.035,loyalty,2024-02-08\r\n5329,2142,LATAM,fashion,online,37.06,6,0.201,none,2024-03-05\r\n5330,2178,AMER,fashion,retail,37.75,8,0.078,none,2024-10-01\r\n5331,2100,APAC,fashion,retail,36.73,3,0.079,coupon,2024-12-14\r\n5332,1432,APAC,grocery,online,46.59,4,0.248,coupon,2024-10-18\r\n5333,2030,EMEA,electronics,retail,31.21,3,0.036,none,2024-04-16\r\n5334,2195,APAC,toys,retail,37.23,2,0.148,none,2024-10-02\r\n5335,1308,EMEA,toys,mobile,32.28,5,0.123,none,2024-01-22\r\n5336,1563,EMEA,grocery,online,55.71,1,0.002,coupon,2024-01-19\r\n5337,1570,AMER,electronics,mobile,52.08,2,0.076,none,2024-01-27\r\n5338,1506,EMEA,home,retail,64.69,3,0.015,none,2024-07-12\r\n5339,1528,EMEA,grocery,mobile,96.20,1,0.092,none,2024-06-07\r\n5340,2183,EMEA,grocery,online,139.95,8,0.093,loyal
ty,2024-11-01\r\n5341,2369,LATAM,electronics,retail,130.12,6,0.128,none,2024-02-08\r\n5342,1956,APAC,sports,online,96.13,3,0.000,none,2024-02-24\r\n5343,1631,APAC,sports,retail,74.13,4,0.184,coupon,2024-07-22\r\n5344,2091,LATAM,electronics,online,89.40,1,0.240,coupon,2024-04-05\r\n5345,2072,AMER,fashion,online,32.32,2,0.130,none,2024-01-28\r\n5346,2412,LATAM,electronics,online,17.68,8,0.144,loyalty,2024-05-17\r\n5347,2270,APAC,home,retail,71.94,7,0.108,none,2024-01-06\r\n5348,2133,AMER,home,retail,26.43,7,0.158,none,2024-11-03\r\n5349,1303,LATAM,sports,retail,133.79,4,0.230,none,2024-06-28\r\n5350,2268,EMEA,home,online,26.20,7,0.071,none,2024-07-04\r\n5351,1748,APAC,toys,retail,19.75,7,0.077,none,2024-11-04\r\n5352,1647,LATAM,electronics,online,63.84,4,0.076,loyalty,2024-10-15\r\n5353,1546,EMEA,toys,online,21.75,4,0.112,loyalty,2024-11-17\r\n5354,2084,LATAM,sports,online,44.60,1,0.005,none,2024-05-12\r\n5355,2347,AMER,grocery,mobile,110.78,8,0.092,coupon,2024-01-11\r\n5356,1054,EMEA,home,retail,34.89,3,0.107,coupon,2024-01-27\r\n5357,1426,AMER,home,online,37.35,4,0.233,none,2024-11-28\r\n5358,1750,LATAM,grocery,retail,76.81,1,0.104,coupon,2024-06-05\r\n5359,1151,APAC,grocery,online,15.11,5,0.128,none,2024-12-10\r\n5360,1262,APAC,sports,online,49.62,3,0.056,coupon,2024-06-25\r\n5361,1292,LATAM,fashion,partner,17.97,5,0.231,none,2024-02-19\r\n5362,1893,APAC,electronics,mobile,34.94,3,0.121,coupon,2024-09-04\r\n5363,1232,LATAM,home,retail,77.51,7,0.145,none,2024-04-28\r\n5364,1132,EMEA,home,retail,59.17,6,0.103,loyalty,2024-04-08\r\n5365,1043,LATAM,fashion,online,66.88,5,0.010,bundle,2024-05-20\r\n5366,2319,AMER,sports,online,33.18,3,0.233,none,2024-07-07\r\n5367,2102,APAC,sports,retail,49.82,6,0.190,none,2024-09-27\r\n5368,2306,AMER,sports,online,55.11,6,0.128,none,2024-02-04\r\n5369,2134,AMER,home,mobile,63.89,8,0.179,none,2024-12-20\r\n5370,2104,EMEA,home,partner,50.24,8,0.241,coupon,2024-06-08\r\n5371,1546,EMEA,grocery,mobile,55.74,7,0.135,none,2024-09-04\r\n5372,1
059,AMER,electronics,online,38.30,2,0.010,none,2024-09-11\r\n5373,1045,LATAM,grocery,mobile,68.69,3,0.215,none,2024-03-25\r\n5374,2377,AMER,grocery,partner,86.33,5,0.057,none,2024-05-12\r\n5375,1491,EMEA,home,mobile,133.17,8,0.044,bundle,2024-12-10\r\n5376,1512,APAC,fashion,online,51.67,3,0.098,none,2024-08-23\r\n5377,2276,AMER,fashion,online,90.93,3,0.236,none,2024-01-16\r\n5378,1704,AMER,grocery,retail,45.10,8,0.077,none,2024-04-14\r\n5379,2305,AMER,grocery,retail,41.01,1,0.080,bundle,2024-05-18\r\n5380,1654,EMEA,grocery,mobile,145.15,8,0.138,none,2024-08-10\r\n5381,1360,APAC,sports,online,79.95,5,0.208,none,2024-10-14\r\n5382,2384,LATAM,grocery,online,65.20,8,0.163,bundle,2024-09-28\r\n5383,2090,AMER,grocery,retail,82.51,4,0.001,none,2024-09-06\r\n5384,1934,EMEA,electronics,online,44.14,6,0.093,none,2024-06-19\r\n5385,1581,APAC,grocery,online,76.01,8,0.155,none,2024-04-20\r\n5386,2291,EMEA,electronics,retail,22.45,8,0.237,bundle,2024-03-11\r\n5387,1698,EMEA,grocery,retail,125.04,4,0.099,none,2024-10-14\r\n5388,2473,EMEA,home,partner,59.23,5,0.107,none,2024-02-28\r\n5389,2007,LATAM,sports,retail,22.18,1,0.020,loyalty,2024-09-03\r\n5390,1657,LATAM,electronics,online,116.16,7,0.200,coupon,2024-11-21\r\n5391,2445,APAC,electronics,online,57.71,3,0.109,none,2024-03-17\r\n5392,1113,EMEA,home,online,61.65,4,0.145,none,2024-12-07\r\n5393,1242,LATAM,home,retail,79.31,3,0.221,none,2024-10-07\r\n5394,1895,AMER,electronics,online,95.32,2,0.166,none,2024-03-10\r\n5395,1165,AMER,grocery,mobile,91.56,7,0.164,none,2024-04-04\r\n5396,1876,LATAM,home,mobile,78.45,3,0.019,none,2024-08-17\r\n5397,1131,APAC,home,mobile,70.01,5,0.032,none,2024-04-03\r\n5398,1866,EMEA,home,online,27.36,1,0.105,none,2024-05-22\r\n5399,1474,LATAM,electronics,mobile,105.28,3,0.043,none,2024-09-14\r\n5400,1809,APAC,grocery,retail,77.57,7,0.076,none,2024-09-21\r\n5401,1874,LATAM,fashion,online,79.08,4,0.138,loyalty,2024-07-01\r\n5402,1680,LATAM,fashion,retail,86.61,1,0.038,coupon,2024-07-02\r\n5403,1285,EMEA
,electronics,retail,104.58,5,0.118,bundle,2024-04-07\r\n5404,1355,EMEA,grocery,online,46.77,3,0.187,none,2024-12-23\r\n5405,2007,LATAM,sports,mobile,98.17,1,0.083,loyalty,2024-10-15\r\n5406,1716,LATAM,sports,mobile,95.41,6,0.116,none,2024-02-21\r\n5407,1612,LATAM,grocery,online,56.45,6,0.091,none,2024-12-12\r\n5408,2494,AMER,home,retail,61.47,5,0.009,loyalty,2024-04-27\r\n5409,1283,APAC,grocery,online,56.71,3,0.034,none,2024-06-11\r\n5410,2100,APAC,fashion,mobile,48.78,7,0.231,loyalty,2024-07-25\r\n5411,1369,AMER,sports,online,38.70,3,0.158,none,2024-11-07\r\n5412,2341,EMEA,fashion,online,55.90,1,0.058,none,2024-07-09\r\n5413,1174,APAC,electronics,retail,41.58,7,0.109,none,2024-11-19\r\n5414,2005,APAC,toys,online,72.23,5,0.072,coupon,2024-03-23\r\n5415,1732,LATAM,toys,retail,82.69,5,0.142,none,2024-07-05\r\n5416,1690,LATAM,electronics,online,69.46,6,0.220,none,2024-06-24\r\n5417,2494,AMER,fashion,partner,119.66,2,0.066,none,2024-04-12\r\n5418,2387,EMEA,fashion,retail,22.63,4,0.026,loyalty,2024-02-03\r\n5419,1054,EMEA,toys,online,92.92,3,0.102,bundle,2024-08-13\r\n5420,1574,AMER,electronics,mobile,40.95,1,0.015,none,2024-09-14\r\n5421,1574,AMER,grocery,retail,48.07,6,0.066,none,2024-01-27\r\n5422,1126,LATAM,toys,retail,55.11,1,0.006,none,2024-06-11\r\n5423,1679,APAC,fashion,retail,39.25,2,0.200,none,2024-02-04\r\n5424,1856,EMEA,grocery,retail,53.42,4,0.093,none,2024-11-04\r\n5425,1373,LATAM,grocery,online,22.37,7,0.246,bundle,2024-05-27\r\n5426,1641,EMEA,sports,retail,64.70,2,0.233,bundle,2024-09-27\r\n5427,2078,APAC,electronics,retail,124.65,8,0.164,none,2024-07-26\r\n5428,1719,LATAM,toys,retail,179.25,5,0.185,none,2024-07-07\r\n5429,1740,EMEA,grocery,partner,47.12,1,0.129,none,2024-09-24\r\n5430,1388,AMER,home,online,62.68,8,0.235,none,2024-07-22\r\n5431,1191,EMEA,fashion,retail,98.54,1,0.178,none,2024-07-07\r\n5432,1586,LATAM,electronics,retail,79.31,3,0.063,bundle,2024-03-25\r\n5433,2118,AMER,fashion,mobile,44.84,4,0.193,coupon,2024-07-01\r\n5434,1505,EMEA,grocer
y,mobile,58.97,8,0.181,bundle,2024-07-08\r\n5435,1148,AMER,electronics,retail,52.92,4,0.039,none,2024-11-21\r\n5436,2062,EMEA,grocery,online,70.40,6,0.141,none,2024-08-05\r\n5437,2200,LATAM,sports,online,68.36,5,0.026,none,2024-08-06\r\n5438,1379,EMEA,grocery,mobile,29.79,4,0.228,bundle,2024-08-05\r\n5439,1512,APAC,grocery,online,49.08,6,0.021,none,2024-03-07\r\n5440,1558,EMEA,grocery,retail,80.02,5,0.209,none,2024-11-21\r\n5441,1385,LATAM,grocery,online,25.30,7,0.077,none,2024-09-08\r\n5442,1856,EMEA,electronics,online,42.23,4,0.193,coupon,2024-12-12\r\n5443,1078,APAC,grocery,retail,36.66,7,0.123,bundle,2024-06-14\r\n5444,1940,APAC,toys,partner,66.45,1,0.117,none,2024-07-05\r\n5445,1734,AMER,sports,partner,63.00,8,0.064,none,2024-05-13\r\n5446,2011,AMER,fashion,retail,116.54,5,0.218,bundle,2024-11-09\r\n5447,2435,AMER,fashion,online,39.91,5,0.149,none,2024-05-03\r\n5448,2145,AMER,electronics,online,49.18,7,0.113,none,2024-10-13\r\n5449,1636,APAC,electronics,mobile,59.99,8,0.238,none,2024-03-15\r\n5450,1812,EMEA,sports,mobile,52.20,6,0.055,none,2024-05-15\r\n5451,2180,AMER,home,mobile,65.60,7,0.105,coupon,2024-07-23\r\n5452,2462,EMEA,fashion,partner,191.91,2,0.045,none,2024-04-06\r\n5453,1576,EMEA,grocery,mobile,56.32,5,0.241,coupon,2024-01-13\r\n5454,2460,AMER,grocery,retail,40.99,4,0.216,loyalty,2024-10-13\r\n5455,1536,LATAM,sports,online,88.72,4,0.092,coupon,2024-12-14\r\n5456,2371,LATAM,grocery,online,45.93,8,0.118,loyalty,2024-09-09\r\n5457,2007,LATAM,home,retail,39.70,5,0.219,coupon,2024-03-20\r\n5458,1324,LATAM,fashion,online,79.67,6,0.215,coupon,2024-08-01\r\n5459,1584,EMEA,electronics,retail,93.23,2,0.159,none,2024-06-27\r\n5460,1179,APAC,electronics,online,30.37,4,0.092,none,2024-04-19\r\n5461,1528,EMEA,grocery,online,87.87,4,0.020,none,2024-08-15\r\n5462,2067,LATAM,home,retail,54.84,8,0.061,none,2024-09-10\r\n5463,1029,EMEA,grocery,retail,52.61,4,0.177,coupon,2024-02-05\r\n5464,1701,LATAM,electronics,partner,82.00,1,0.176,coupon,2024-06-02\r\n5465,1086,AM
ER,home,retail,74.33,3,0.017,bundle,2024-12-18\r\n5466,1959,EMEA,electronics,retail,51.26,3,0.166,none,2024-07-25\r\n5467,1338,EMEA,grocery,mobile,23.60,5,0.020,loyalty,2024-10-07\r\n5468,1794,AMER,sports,retail,126.74,5,0.148,none,2024-05-16\r\n5469,2431,LATAM,grocery,online,30.70,1,0.061,coupon,2024-02-12\r\n5470,2368,AMER,toys,online,57.07,5,0.106,bundle,2024-11-05\r\n5471,2073,AMER,grocery,retail,82.91,7,0.099,loyalty,2024-05-21\r\n5472,1446,AMER,grocery,online,74.99,4,0.090,none,2024-02-09\r\n5473,2497,AMER,electronics,mobile,76.08,6,0.221,none,2024-02-05\r\n5474,1766,AMER,home,retail,59.13,5,0.245,none,2024-03-19\r\n5475,1155,EMEA,home,retail,124.94,7,0.176,none,2024-08-15\r\n5476,1196,APAC,sports,online,77.29,5,0.035,loyalty,2024-06-04\r\n5477,1235,EMEA,sports,online,46.27,6,0.149,loyalty,2024-01-03\r\n5478,1953,EMEA,sports,online,207.57,4,0.009,none,2024-06-09\r\n5479,2473,EMEA,electronics,online,35.28,8,0.175,loyalty,2024-10-04\r\n5480,1665,AMER,sports,retail,51.27,2,0.098,coupon,2024-05-17\r\n5481,1186,APAC,sports,partner,37.45,6,0.107,bundle,2024-09-25\r\n5482,2473,EMEA,toys,online,99.38,3,0.167,none,2024-02-23\r\n5483,2482,EMEA,electronics,retail,62.15,6,0.002,none,2024-08-01\r\n5484,2404,EMEA,electronics,partner,97.14,6,0.212,none,2024-10-21\r\n5485,1239,APAC,home,online,33.17,8,0.064,none,2024-09-02\r\n5486,1378,APAC,home,mobile,60.01,3,0.045,none,2024-04-14\r\n5487,2229,APAC,electronics,online,79.74,3,0.064,bundle,2024-06-26\r\n5488,1347,APAC,grocery,online,91.97,4,0.026,loyalty,2024-09-09\r\n5489,1951,LATAM,sports,retail,43.50,4,0.103,none,2024-07-27\r\n5490,2224,EMEA,electronics,online,85.91,5,0.185,bundle,2024-08-15\r\n5491,2318,AMER,electronics,retail,67.77,7,0.186,coupon,2024-09-19\r\n5492,1317,EMEA,grocery,online,78.39,4,0.232,bundle,2024-06-27\r\n5493,1012,LATAM,home,online,29.47,3,0.091,coupon,2024-05-19\r\n5494,2097,AMER,home,retail,31.96,7,0.096,none,2024-09-17\r\n5495,1918,EMEA,grocery,retail,30.07,7,0.217,none,2024-05-28\r\n5496,1111,APAC,
fashion,retail,49.03,8,0.173,none,2024-08-04\r\n5497,2208,AMER,fashion,online,54.42,7,0.208,none,2024-06-04\r\n5498,2090,AMER,electronics,online,79.51,5,0.241,coupon,2024-05-24\r\n5499,2288,AMER,grocery,mobile,23.67,7,0.029,none,2024-01-06\r\n5500,1969,LATAM,sports,retail,64.59,7,0.237,none,2024-11-28\r\n5501,1266,AMER,toys,partner,140.01,1,0.143,none,2024-07-21\r\n5502,1311,APAC,grocery,online,22.53,2,0.197,none,2024-01-05\r\n5503,2173,LATAM,grocery,retail,19.23,1,0.111,bundle,2024-07-09\r\n5504,2024,AMER,grocery,online,31.84,7,0.174,none,2024-08-10\r\n5505,1775,EMEA,sports,retail,80.58,5,0.156,none,2024-12-13\r\n5506,1820,AMER,grocery,online,55.41,2,0.223,none,2024-10-11\r\n5507,1972,LATAM,electronics,mobile,40.95,7,0.014,none,2024-04-25\r\n5508,1955,AMER,toys,retail,61.52,2,0.158,none,2024-01-17\r\n5509,1898,EMEA,sports,online,48.86,3,0.028,loyalty,2024-03-28\r\n5510,1712,LATAM,grocery,mobile,56.02,6,0.041,loyalty,2024-03-07\r\n5511,1368,EMEA,grocery,retail,44.69,2,0.150,none,2024-01-27\r\n5512,1982,EMEA,electronics,online,78.36,8,0.225,none,2024-04-21\r\n5513,2305,AMER,sports,retail,118.48,6,0.195,none,2024-02-21\r\n5514,1886,LATAM,home,retail,52.09,1,0.055,none,2024-02-11\r\n5515,1754,EMEA,grocery,mobile,85.00,4,0.068,coupon,2024-06-13\r\n5516,1284,APAC,grocery,online,53.31,7,0.066,none,2024-07-05\r\n5517,1863,EMEA,grocery,retail,207.22,2,0.206,none,2024-08-05\r\n5518,1499,EMEA,fashion,retail,53.35,4,0.129,none,2024-05-27\r\n5519,1976,AMER,sports,online,76.96,3,0.000,none,2024-02-19\r\n5520,1158,LATAM,sports,retail,49.49,7,0.126,none,2024-04-18\r\n5521,1724,LATAM,home,online,43.26,4,0.056,none,2024-04-10\r\n5522,1364,EMEA,toys,mobile,63.78,8,0.061,none,2024-01-18\r\n5523,1676,LATAM,grocery,online,64.15,5,0.218,none,2024-11-25\r\n5524,1734,AMER,sports,online,40.90,2,0.212,loyalty,2024-10-02\r\n5525,2029,APAC,grocery,online,95.97,2,0.076,loyalty,2024-11-23\r\n5526,1436,APAC,grocery,online,30.04,6,0.108,bundle,2024-07-08\r\n5527,2024,AMER,electronics,partner,57.29
,6,0.194,none,2024-05-13\r\n5528,2259,AMER,grocery,online,109.51,5,0.248,coupon,2024-09-13\r\n5529,1181,LATAM,sports,online,56.17,3,0.062,none,2024-08-23\r\n5530,1386,AMER,grocery,retail,67.99,7,0.026,none,2024-09-01\r\n5531,1816,EMEA,toys,retail,82.14,4,0.131,none,2024-06-11\r\n5532,1064,AMER,home,retail,139.94,4,0.110,none,2024-11-21\r\n5533,2318,AMER,fashion,mobile,42.14,8,0.179,coupon,2024-05-23\r\n5534,1003,APAC,electronics,online,33.35,7,0.081,bundle,2024-07-20\r\n5535,1810,LATAM,home,online,58.39,7,0.050,none,2024-07-04\r\n5536,1801,LATAM,sports,mobile,61.78,2,0.192,bundle,2024-04-20\r\n5537,2231,LATAM,toys,retail,71.08,4,0.157,bundle,2024-01-08\r\n5538,2153,APAC,grocery,retail,108.13,6,0.150,bundle,2024-05-02\r\n5539,1883,LATAM,grocery,online,84.68,4,0.034,bundle,2024-05-18\r\n5540,2362,AMER,grocery,mobile,29.71,4,0.190,none,2024-10-05\r\n5541,2444,EMEA,electronics,online,172.19,8,0.124,none,2024-11-16\r\n5542,2135,EMEA,toys,online,51.08,8,0.184,none,2024-09-02\r\n5543,1960,EMEA,grocery,online,40.50,8,0.162,none,2024-07-06\r\n5544,1812,EMEA,electronics,retail,41.99,8,0.139,none,2024-07-20\r\n5545,2431,LATAM,electronics,mobile,54.12,3,0.046,none,2024-12-17\r\n5546,1593,AMER,electronics,mobile,36.00,1,0.056,coupon,2024-08-22\r\n5547,2391,EMEA,fashion,retail,81.87,3,0.028,none,2024-11-04\r\n5548,2455,AMER,home,online,47.44,1,0.180,none,2024-01-06\r\n5549,1235,EMEA,grocery,online,47.82,7,0.132,none,2024-06-04\r\n5550,2490,AMER,sports,online,48.26,1,0.104,none,2024-11-27\r\n5551,2419,LATAM,fashion,retail,18.23,3,0.097,bundle,2024-05-13\r\n5552,2443,LATAM,home,retail,41.47,1,0.217,none,2024-04-01\r\n5553,1854,AMER,electronics,retail,53.74,7,0.209,loyalty,2024-01-19\r\n5554,2464,LATAM,electronics,retail,64.22,5,0.112,none,2024-03-22\r\n5555,2020,AMER,toys,retail,44.17,5,0.220,bundle,2024-04-02\r\n5556,2205,AMER,toys,retail,85.54,8,0.115,bundle,2024-07-11\r\n5557,1052,LATAM,fashion,retail,57.55,7,0.099,none,2024-06-26\r\n5558,2383,APAC,fashion,mobile,120.70,2,0.036,
none,2024-02-21\r\n5559,2349,APAC,toys,retail,28.67,4,0.161,none,2024-01-24\r\n5560,1438,APAC,home,online,42.00,7,0.042,none,2024-08-20\r\n5561,1560,AMER,fashion,retail,33.09,1,0.123,none,2024-08-17\r\n5562,1126,LATAM,toys,online,25.99,2,0.114,coupon,2024-04-04\r\n5563,1601,APAC,fashion,mobile,129.06,4,0.068,none,2024-01-04\r\n5564,1873,EMEA,electronics,retail,43.44,1,0.242,none,2024-08-27\r\n5565,1182,EMEA,fashion,retail,48.03,4,0.208,bundle,2024-04-04\r\n5566,1784,EMEA,home,retail,44.41,4,0.062,none,2024-09-21\r\n5567,2013,APAC,grocery,online,41.67,6,0.157,none,2024-01-03\r\n5568,2449,LATAM,home,retail,15.53,4,0.214,none,2024-03-19\r\n5569,2445,APAC,toys,retail,50.89,2,0.067,none,2024-08-16\r\n5570,2338,AMER,fashion,online,91.74,6,0.028,none,2024-12-11\r\n5571,2193,AMER,grocery,retail,64.15,6,0.068,none,2024-03-21\r\n5572,1658,AMER,sports,retail,117.24,1,0.114,none,2024-10-02\r\n5573,2302,APAC,home,retail,50.80,5,0.036,none,2024-10-18\r\n5574,1350,LATAM,grocery,online,87.06,6,0.195,none,2024-09-18\r\n5575,1626,EMEA,electronics,retail,24.89,5,0.021,bundle,2024-04-20\r\n5576,1143,LATAM,sports,online,186.11,6,0.136,none,2024-02-13\r\n5577,1168,APAC,grocery,partner,46.20,7,0.063,coupon,2024-09-15\r\n5578,2364,APAC,sports,online,16.13,6,0.105,coupon,2024-01-17\r\n5579,1039,AMER,toys,online,56.74,2,0.134,none,2024-07-02\r\n5580,2262,APAC,sports,online,62.95,7,0.077,none,2024-10-05\r\n5581,1845,AMER,electronics,online,49.26,1,0.239,none,2024-06-15\r\n5582,1231,AMER,toys,online,33.58,7,0.043,coupon,2024-07-09\r\n5583,2196,AMER,home,partner,44.54,7,0.198,none,2024-10-07\r\n5584,1503,APAC,electronics,retail,38.47,5,0.248,none,2024-11-06\r\n5585,2275,LATAM,home,online,38.15,1,0.017,none,2024-06-12\r\n5586,1683,AMER,grocery,online,30.06,3,0.205,loyalty,2024-03-22\r\n5587,2024,AMER,electronics,retail,51.22,2,0.032,none,2024-04-20\r\n5588,1939,LATAM,sports,online,31.16,8,0.047,none,2024-03-20\r\n5589,1569,APAC,electronics,online,95.08,3,0.057,none,2024-02-14\r\n5590,2054,AMER,g
rocery,online,71.67,7,0.015,none,2024-07-27\r\n5591,1545,AMER,electronics,online,122.43,5,0.201,coupon,2024-09-28\r\n5592,1692,LATAM,sports,retail,55.84,6,0.167,coupon,2024-04-13\r\n5593,1767,AMER,sports,online,37.13,1,0.247,none,2024-05-05\r\n5594,2172,EMEA,grocery,retail,23.75,8,0.182,loyalty,2024-06-16\r\n5595,2065,EMEA,fashion,retail,72.25,8,0.007,none,2024-09-20\r\n5596,1459,LATAM,home,retail,197.06,8,0.188,none,2024-11-01\r\n5597,1004,LATAM,grocery,online,43.41,4,0.160,none,2024-08-06\r\n5598,2081,APAC,toys,online,63.41,5,0.094,none,2024-11-16\r\n5599,2187,EMEA,grocery,retail,116.91,7,0.186,loyalty,2024-05-21\r\n5600,1314,AMER,sports,mobile,78.98,6,0.116,coupon,2024-03-09\r\n5601,1740,EMEA,fashion,online,65.75,8,0.219,loyalty,2024-05-05\r\n5602,2308,AMER,home,mobile,78.53,1,0.223,none,2024-01-23\r\n5603,1835,AMER,home,online,83.43,8,0.225,none,2024-11-03\r\n5604,1828,EMEA,toys,retail,68.60,5,0.194,none,2024-03-07\r\n5605,1882,AMER,fashion,online,28.26,4,0.052,none,2024-07-25\r\n5606,1485,APAC,grocery,online,46.14,7,0.156,none,2024-03-27\r\n5607,2325,LATAM,toys,online,97.52,4,0.167,bundle,2024-02-03\r\n5608,2185,EMEA,fashion,online,20.96,7,0.216,loyalty,2024-03-22\r\n5609,1479,AMER,electronics,online,139.55,5,0.081,none,2024-07-19\r\n5610,1508,LATAM,home,online,126.71,5,0.041,coupon,2024-09-10\r\n5611,1888,LATAM,sports,online,40.41,8,0.052,coupon,2024-03-22\r\n5612,1700,EMEA,electronics,online,142.63,3,0.047,bundle,2024-01-25\r\n5613,2161,LATAM,sports,online,19.49,3,0.194,none,2024-02-04\r\n5614,1163,AMER,grocery,retail,79.01,4,0.129,loyalty,2024-01-21\r\n5615,2213,APAC,sports,online,63.89,2,0.124,none,2024-09-22\r\n5616,1337,APAC,electronics,online,41.17,2,0.155,coupon,2024-08-18\r\n5617,1869,AMER,home,mobile,275.03,5,0.140,loyalty,2024-01-18\r\n5618,1070,EMEA,grocery,online,59.17,6,0.032,bundle,2024-05-13\r\n5619,1003,APAC,home,online,33.62,4,0.072,none,2024-05-01\r\n5620,2016,LATAM,electronics,retail,62.37,8,0.141,bundle,2024-10-25\r\n5621,2017,EMEA,grocery,
partner,23.83,2,0.215,loyalty,2024-11-01\r\n5622,2363,AMER,sports,mobile,114.16,1,0.189,none,2024-12-26\r\n5623,2084,LATAM,home,partner,27.75,8,0.028,none,2024-12-17\r\n5624,1674,LATAM,grocery,online,89.50,4,0.246,bundle,2024-06-04\r\n5625,2195,APAC,grocery,mobile,25.64,3,0.131,coupon,2024-03-11\r\n5626,2149,EMEA,electronics,partner,50.95,8,0.159,none,2024-04-13\r\n5627,1838,AMER,grocery,online,63.78,2,0.086,coupon,2024-08-20\r\n5628,1508,LATAM,electronics,retail,54.71,2,0.030,none,2024-04-24\r\n5629,1262,APAC,electronics,retail,56.40,3,0.077,none,2024-01-10\r\n5630,2106,LATAM,grocery,mobile,48.70,4,0.024,none,2024-09-10\r\n5631,1250,APAC,grocery,online,38.45,7,0.124,none,2024-07-28\r\n5632,2280,EMEA,toys,online,30.89,3,0.143,none,2024-10-28\r\n5633,1277,AMER,fashion,online,101.70,5,0.135,none,2024-03-19\r\n5634,2306,AMER,grocery,partner,143.51,3,0.016,none,2024-07-15\r\n5635,1148,AMER,grocery,online,58.61,3,0.121,none,2024-08-23\r\n5636,1723,LATAM,electronics,online,97.04,1,0.006,bundle,2024-06-14\r\n5637,1373,LATAM,home,mobile,42.87,4,0.112,none,2024-10-11\r\n5638,1211,EMEA,electronics,mobile,53.54,3,0.147,none,2024-10-03\r\n5639,1032,AMER,sports,retail,23.76,8,0.061,coupon,2024-08-16\r\n5640,2199,LATAM,electronics,retail,71.40,8,0.042,loyalty,2024-10-04\r\n5641,2119,AMER,grocery,mobile,47.02,5,0.080,none,2024-06-06\r\n5642,1887,LATAM,home,retail,56.56,3,0.192,none,2024-11-12\r\n5643,1899,APAC,grocery,mobile,61.09,7,0.237,none,2024-02-04\r\n5644,1994,LATAM,grocery,retail,52.77,1,0.015,none,2024-12-21\r\n5645,2413,AMER,electronics,retail,62.54,7,0.107,none,2024-09-22\r\n5646,1112,APAC,electronics,online,59.28,1,0.180,none,2024-09-17\r\n5647,1206,EMEA,grocery,mobile,103.10,5,0.118,none,2024-08-08\r\n5648,2147,LATAM,grocery,retail,99.57,4,0.085,none,2024-12-07\r\n5649,1951,LATAM,home,retail,93.91,6,0.077,coupon,2024-01-23\r\n5650,2480,APAC,grocery,retail,85.13,7,0.074,bundle,2024-01-15\r\n5651,1996,APAC,sports,online,98.01,5,0.109,bundle,2024-08-24\r\n5652,1697,APAC,
grocery,online,16.75,7,0.175,none,2024-06-05\r\n5653,1530,APAC,grocery,online,43.13,2,0.073,bundle,2024-10-20\r\n5654,2454,LATAM,home,mobile,30.13,7,0.119,none,2024-05-28\r\n5655,1595,AMER,electronics,online,59.80,7,0.076,loyalty,2024-07-08\r\n5656,1895,AMER,grocery,mobile,47.83,5,0.076,none,2024-09-20\r\n5657,1123,LATAM,home,retail,52.57,2,0.089,coupon,2024-10-08\r\n5658,1113,EMEA,fashion,mobile,44.47,5,0.094,coupon,2024-08-17\r\n5659,1668,AMER,grocery,online,78.65,3,0.241,bundle,2024-05-01\r\n5660,2199,LATAM,sports,online,97.81,2,0.118,none,2024-10-25\r\n5661,1673,AMER,grocery,online,20.11,7,0.130,none,2024-09-09\r\n5662,1108,EMEA,grocery,mobile,30.88,8,0.056,coupon,2024-04-20\r\n5663,1595,AMER,grocery,online,70.66,5,0.181,bundle,2024-10-23\r\n5664,1197,LATAM,electronics,mobile,32.33,6,0.037,none,2024-12-11\r\n5665,1275,EMEA,fashion,online,30.88,6,0.133,none,2024-02-28\r\n5666,1659,APAC,grocery,online,51.80,1,0.232,none,2024-07-19\r\n5667,2061,EMEA,home,retail,58.12,5,0.133,none,2024-06-20\r\n5668,1284,APAC,fashion,retail,21.23,4,0.095,none,2024-02-21\r\n5669,1492,APAC,fashion,online,35.41,7,0.182,none,2024-05-02\r\n5670,1577,AMER,toys,online,54.36,7,0.057,none,2024-01-06\r\n5671,1007,APAC,electronics,retail,57.78,6,0.111,loyalty,2024-11-13\r\n5672,2145,AMER,fashion,online,66.16,8,0.141,none,2024-03-08\r\n5673,2144,EMEA,fashion,retail,71.20,1,0.139,bundle,2024-12-17\r\n5674,1679,APAC,sports,retail,86.73,3,0.114,none,2024-02-11\r\n5675,1425,EMEA,electronics,online,58.85,7,0.103,none,2024-07-23\r\n5676,1477,APAC,electronics,online,99.86,7,0.163,none,2024-07-09\r\n5677,2464,LATAM,grocery,mobile,66.34,2,0.164,none,2024-11-12\r\n5678,1636,APAC,fashion,retail,61.69,5,0.180,bundle,2024-12-27\r\n5679,2403,LATAM,home,mobile,43.14,2,0.201,none,2024-12-15\r\n5680,1820,AMER,fashion,online,83.90,7,0.181,coupon,2024-01-08\r\n5681,1983,LATAM,electronics,retail,91.63,1,0.165,none,2024-10-03\r\n5682,1658,AMER,electronics,retail,40.74,1,0.053,none,2024-03-26\r\n5683,1837,LATAM,elec
tronics,mobile,26.22,5,0.021,coupon,2024-10-24\r\n5684,1776,APAC,grocery,online,169.51,6,0.124,bundle,2024-05-04\r\n5685,2091,LATAM,home,online,26.28,5,0.216,loyalty,2024-10-14\r\n5686,1510,EMEA,home,partner,74.91,3,0.101,bundle,2024-12-25\r\n5687,1272,AMER,home,retail,45.11,6,0.131,bundle,2024-10-06\r\n5688,2411,EMEA,electronics,partner,38.23,1,0.005,none,2024-07-17\r\n5689,1812,EMEA,fashion,online,39.14,4,0.238,coupon,2024-02-11\r\n5690,1111,APAC,grocery,retail,118.70,2,0.228,coupon,2024-08-19\r\n5691,2074,AMER,grocery,mobile,85.79,5,0.006,bundle,2024-03-12\r\n5692,1925,LATAM,fashion,retail,39.72,3,0.241,coupon,2024-09-27\r\n5693,1672,APAC,home,online,48.22,6,0.242,none,2024-10-12\r\n5694,1561,EMEA,electronics,online,55.17,4,0.181,none,2024-07-10\r\n5695,2174,LATAM,sports,partner,48.45,2,0.115,none,2024-06-04\r\n5696,1575,APAC,toys,online,66.94,2,0.096,none,2024-01-26\r\n5697,1238,AMER,toys,retail,145.77,1,0.150,none,2024-10-26\r\n5698,1470,LATAM,grocery,retail,44.17,7,0.077,none,2024-11-01\r\n5699,1523,LATAM,home,retail,75.38,7,0.166,none,2024-07-10\r\n5700,2499,LATAM,grocery,retail,68.99,8,0.236,bundle,2024-10-03\r\n5701,2313,LATAM,grocery,partner,98.53,6,0.091,coupon,2024-06-26\r\n5702,1933,EMEA,electronics,retail,68.81,8,0.217,none,2024-12-20\r\n5703,1011,APAC,sports,partner,86.87,3,0.062,none,2024-01-14\r\n5704,2018,AMER,fashion,online,70.01,6,0.073,none,2024-09-14\r\n5705,1146,LATAM,home,retail,74.66,8,0.205,none,2024-11-17\r\n5706,1052,LATAM,home,online,57.16,1,0.041,coupon,2024-01-19\r\n5707,1134,APAC,electronics,retail,72.03,5,0.230,none,2024-09-17\r\n5708,1193,APAC,home,online,33.20,3,0.102,bundle,2024-04-10\r\n5709,1878,EMEA,electronics,online,32.65,4,0.246,none,2024-12-23\r\n5710,2447,AMER,grocery,retail,123.59,2,0.098,none,2024-06-02\r\n5711,1955,AMER,home,online,101.53,7,0.244,none,2024-05-13\r\n5712,2314,EMEA,electronics,online,23.12,6,0.135,coupon,2024-12-09\r\n5713,1874,LATAM,electronics,online,38.04,7,0.078,none,2024-05-02\r\n5714,1340,LATAM,fash
ion,mobile,113.51,1,0.031,none,2024-07-27\r\n5715,2256,AMER,grocery,online,16.57,5,0.104,coupon,2024-04-09\r\n5716,2296,AMER,grocery,retail,223.99,3,0.139,none,2024-02-10\r\n5717,1377,APAC,home,online,57.21,2,0.035,none,2024-04-27\r\n5718,1246,EMEA,fashion,online,64.98,2,0.191,none,2024-02-19\r\n5719,2258,AMER,grocery,online,25.57,4,0.116,bundle,2024-07-27\r\n5720,1632,LATAM,electronics,online,72.13,7,0.002,none,2024-09-26\r\n5721,1354,AMER,sports,retail,82.83,1,0.068,none,2024-03-16\r\n5722,2416,LATAM,sports,mobile,20.44,4,0.105,loyalty,2024-05-23\r\n5723,1276,AMER,grocery,mobile,62.35,2,0.079,none,2024-03-21\r\n5724,1725,APAC,fashion,online,28.20,2,0.157,none,2024-11-17\r\n5725,1173,LATAM,fashion,retail,75.46,6,0.045,coupon,2024-10-20\r\n5726,1479,AMER,grocery,online,46.98,4,0.182,bundle,2024-10-27\r\n5727,1034,EMEA,grocery,mobile,45.54,2,0.039,none,2024-06-21\r\n5728,2266,LATAM,electronics,retail,92.28,6,0.224,coupon,2024-07-15\r\n5729,2268,EMEA,electronics,retail,33.76,2,0.248,none,2024-06-08\r\n5730,1531,EMEA,home,online,45.38,1,0.154,none,2024-11-09\r\n5731,1649,APAC,electronics,online,70.92,7,0.063,none,2024-09-10\r\n5732,1088,LATAM,grocery,online,81.78,1,0.104,loyalty,2024-02-23\r\n5733,1874,LATAM,grocery,mobile,29.32,4,0.110,none,2024-09-12\r\n5734,1771,AMER,toys,online,63.00,3,0.232,none,2024-09-13\r\n5735,2110,LATAM,fashion,online,55.26,8,0.101,none,2024-02-23\r\n5736,1636,APAC,toys,retail,23.62,3,0.083,none,2024-11-24\r\n5737,1219,LATAM,home,online,80.95,3,0.162,none,2024-10-20\r\n5738,1758,AMER,grocery,mobile,228.43,5,0.062,none,2024-10-19\r\n5739,1805,EMEA,sports,retail,42.81,8,0.116,coupon,2024-05-05\r\n5740,1048,EMEA,electronics,online,135.89,1,0.137,bundle,2024-02-09\r\n5741,1700,EMEA,fashion,mobile,39.38,1,0.126,none,2024-01-21\r\n5742,2395,APAC,toys,partner,46.40,2,0.124,none,2024-10-28\r\n5743,2265,APAC,fashion,mobile,31.54,3,0.107,none,2024-01-07\r\n5744,1519,APAC,electronics,online,42.96,6,0.092,none,2024-09-21\r\n5745,2245,APAC,grocery,online,
59.32,7,0.020,coupon,2024-02-25\r\n5746,2284,EMEA,fashion,online,83.24,8,0.041,bundle,2024-04-21\r\n5747,1824,LATAM,home,retail,32.23,2,0.171,bundle,2024-12-21\r\n5748,1674,LATAM,fashion,mobile,50.81,2,0.110,bundle,2024-03-19\r\n5749,1672,APAC,fashion,online,49.06,3,0.065,none,2024-10-12\r\n5750,2249,LATAM,electronics,retail,75.68,1,0.242,bundle,2024-06-10\r\n5751,2334,LATAM,fashion,retail,120.63,1,0.136,none,2024-01-09\r\n5752,1978,AMER,toys,online,97.55,3,0.111,loyalty,2024-08-25\r\n5753,1694,APAC,home,online,61.44,4,0.060,none,2024-09-20\r\n5754,1590,APAC,fashion,online,121.11,1,0.182,coupon,2024-12-21\r\n5755,2341,EMEA,sports,partner,46.10,2,0.127,none,2024-11-25\r\n5756,2483,LATAM,home,mobile,48.20,1,0.073,none,2024-05-13\r\n5757,1456,APAC,sports,online,52.45,6,0.099,loyalty,2024-01-16\r\n5758,1403,APAC,electronics,mobile,55.83,8,0.002,none,2024-10-19\r\n5759,2009,LATAM,fashion,mobile,89.95,4,0.157,none,2024-08-23\r\n5760,2324,AMER,electronics,mobile,60.76,6,0.111,bundle,2024-08-19\r\n5761,1871,APAC,electronics,online,92.04,5,0.242,none,2024-07-05\r\n5762,1755,APAC,fashion,online,105.96,6,0.051,coupon,2024-10-25\r\n5763,1969,LATAM,grocery,retail,106.07,1,0.096,none,2024-03-11\r\n5764,1608,AMER,electronics,online,41.44,7,0.181,bundle,2024-04-25\r\n5765,2242,AMER,sports,retail,54.77,7,0.209,none,2024-07-01\r\n5766,2144,EMEA,grocery,online,116.05,1,0.130,none,2024-03-05\r\n5767,1223,LATAM,grocery,retail,54.04,7,0.213,coupon,2024-06-08\r\n5768,1524,LATAM,electronics,retail,43.82,1,0.225,none,2024-06-08\r\n5769,1077,AMER,sports,online,47.33,7,0.184,none,2024-07-21\r\n5770,1549,APAC,fashion,mobile,42.30,2,0.172,none,2024-07-15\r\n5771,1250,APAC,fashion,mobile,32.50,6,0.175,bundle,2024-06-03\r\n5772,2398,EMEA,grocery,online,73.34,7,0.213,none,2024-02-21\r\n5773,2199,LATAM,grocery,retail,27.74,4,0.116,coupon,2024-01-11\r\n5774,1025,EMEA,grocery,online,79.93,7,0.088,coupon,2024-01-05\r\n5775,2057,APAC,electronics,online,50.08,5,0.093,coupon,2024-05-08\r\n5776,1435,AMER,
grocery,mobile,71.90,6,0.066,none,2024-12-01\r\n5777,2015,APAC,home,mobile,91.34,3,0.248,none,2024-10-26\r\n5778,1603,EMEA,toys,online,29.65,1,0.024,coupon,2024-11-20\r\n5779,1464,APAC,grocery,online,91.58,2,0.201,none,2024-02-07\r\n5780,2384,LATAM,home,retail,28.62,2,0.026,coupon,2024-08-05\r\n5781,1672,APAC,electronics,online,39.87,3,0.171,coupon,2024-12-07\r\n5782,1318,LATAM,electronics,retail,41.49,7,0.183,bundle,2024-07-02\r\n5783,1266,AMER,sports,online,36.75,1,0.057,none,2024-05-13\r\n5784,2167,APAC,toys,mobile,57.78,1,0.014,coupon,2024-01-18\r\n5785,2084,LATAM,toys,online,83.16,8,0.219,none,2024-03-18\r\n5786,1632,LATAM,home,retail,59.88,5,0.019,none,2024-08-04\r\n5787,1796,LATAM,sports,online,53.71,2,0.090,coupon,2024-07-23\r\n5788,2176,AMER,electronics,online,95.60,4,0.040,bundle,2024-10-20\r\n5789,1209,AMER,toys,retail,49.60,4,0.112,none,2024-12-09\r\n5790,1526,EMEA,home,online,76.23,3,0.133,loyalty,2024-03-13\r\n5791,1331,AMER,toys,retail,73.31,4,0.109,none,2024-01-28\r\n5792,2434,APAC,electronics,mobile,49.97,4,0.215,none,2024-11-08\r\n5793,2405,AMER,fashion,online,77.45,3,0.065,coupon,2024-02-07\r\n5794,2426,AMER,home,retail,78.22,3,0.215,none,2024-11-03\r\n5795,1769,LATAM,fashion,retail,59.59,2,0.192,loyalty,2024-01-22\r\n5796,1661,LATAM,grocery,online,56.86,6,0.147,none,2024-08-25\r\n5797,2211,APAC,grocery,online,49.65,4,0.061,loyalty,2024-03-08\r\n5798,1610,LATAM,grocery,online,41.14,4,0.164,coupon,2024-02-02\r\n5799,1374,APAC,electronics,online,48.10,1,0.123,none,2024-08-12\r\n5800,1626,EMEA,electronics,online,26.95,5,0.157,loyalty,2024-12-24\r\n5801,1012,LATAM,electronics,retail,75.12,7,0.160,none,2024-09-06\r\n5802,1493,APAC,electronics,mobile,80.29,4,0.058,none,2024-09-04\r\n5803,1266,AMER,sports,online,94.82,3,0.186,none,2024-07-11\r\n5804,1286,EMEA,electronics,retail,59.16,4,0.221,bundle,2024-01-20\r\n5805,1878,EMEA,grocery,online,40.00,2,0.079,coupon,2024-03-22\r\n5806,1665,AMER,home,partner,47.64,6,0.227,loyalty,2024-08-12\r\n5807,1854,AMER,
grocery,online,65.50,4,0.167,none,2024-01-02\r\n5808,1699,APAC,electronics,mobile,68.63,6,0.031,none,2024-04-10\r\n5809,2163,EMEA,grocery,retail,48.90,5,0.209,bundle,2024-03-04\r\n5810,1228,APAC,sports,mobile,96.34,2,0.244,none,2024-03-22\r\n5811,1301,AMER,home,retail,18.63,8,0.083,bundle,2024-10-01\r\n5812,1536,LATAM,fashion,retail,48.65,1,0.200,none,2024-08-14\r\n5813,1744,EMEA,toys,retail,148.99,4,0.215,coupon,2024-03-17\r\n5814,1903,LATAM,electronics,online,72.00,6,0.032,none,2024-07-16\r\n5815,1797,LATAM,sports,retail,55.72,8,0.002,none,2024-09-01\r\n5816,2172,EMEA,grocery,mobile,51.75,4,0.197,none,2024-04-08\r\n5817,1030,EMEA,toys,retail,33.70,7,0.103,none,2024-10-22\r\n5818,2493,APAC,sports,mobile,95.92,6,0.072,loyalty,2024-05-15\r\n5819,1671,APAC,electronics,retail,23.96,5,0.175,loyalty,2024-11-08\r\n5820,2324,AMER,fashion,retail,129.28,6,0.038,bundle,2024-12-06\r\n5821,1972,LATAM,fashion,online,50.94,3,0.198,coupon,2024-06-02\r\n5822,1498,LATAM,fashion,online,27.66,8,0.169,loyalty,2024-12-19\r\n5823,2013,APAC,toys,online,100.31,7,0.183,none,2024-02-10\r\n5824,1439,LATAM,grocery,online,70.21,7,0.118,bundle,2024-12-14\r\n5825,1472,AMER,grocery,online,83.24,2,0.184,none,2024-12-04\r\n5826,1760,LATAM,electronics,partner,110.37,1,0.231,none,2024-07-06\r\n5827,1126,LATAM,home,retail,118.00,4,0.133,none,2024-04-03\r\n5828,1067,APAC,fashion,retail,58.06,7,0.009,coupon,2024-05-10\r\n5829,1323,EMEA,sports,retail,58.66,7,0.147,loyalty,2024-09-15\r\n5830,1142,EMEA,home,partner,56.12,5,0.160,none,2024-02-22\r\n5831,1523,LATAM,home,mobile,34.64,8,0.044,none,2024-06-08\r\n5832,2145,AMER,home,retail,38.98,1,0.121,none,2024-01-19\r\n5833,1303,LATAM,grocery,retail,93.26,8,0.132,none,2024-01-11\r\n5834,1186,APAC,electronics,retail,37.49,3,0.029,none,2024-02-12\r\n5835,1420,APAC,electronics,online,38.19,1,0.245,none,2024-07-13\r\n5836,1984,LATAM,home,online,39.59,8,0.185,none,2024-12-23\r\n5837,1377,APAC,sports,retail,81.28,8,0.114,none,2024-08-04\r\n5838,2198,EMEA,electronics
,online,61.71,3,0.208,none,2024-11-12\r\n5839,1898,EMEA,home,retail,51.52,1,0.248,coupon,2024-01-03\r\n5840,1112,APAC,electronics,online,80.34,4,0.124,none,2024-11-27\r\n5841,2272,EMEA,fashion,online,43.05,2,0.039,none,2024-10-18\r\n5842,2264,LATAM,grocery,online,21.26,3,0.217,bundle,2024-02-07\r\n5843,2078,APAC,sports,mobile,116.86,1,0.081,bundle,2024-01-01\r\n5844,2432,AMER,grocery,online,58.22,6,0.070,coupon,2024-03-19\r\n5845,1336,APAC,electronics,retail,126.77,3,0.005,none,2024-08-28\r\n5846,1091,EMEA,grocery,online,134.32,1,0.160,coupon,2024-10-18\r\n5847,2372,AMER,toys,online,55.47,3,0.244,none,2024-10-17\r\n5848,2011,AMER,grocery,online,83.60,3,0.002,none,2024-02-20\r\n5849,1801,LATAM,grocery,online,55.52,3,0.227,coupon,2024-05-23\r\n5850,1435,AMER,home,online,49.85,2,0.136,none,2024-05-14\r\n5851,2066,APAC,home,partner,43.89,2,0.221,none,2024-04-14\r\n5852,1948,EMEA,fashion,partner,71.28,7,0.237,coupon,2024-08-23\r\n5853,1113,EMEA,fashion,retail,43.46,1,0.080,none,2024-12-18\r\n5854,1409,APAC,grocery,mobile,69.25,8,0.037,none,2024-07-12\r\n5855,1524,LATAM,grocery,online,33.59,5,0.189,loyalty,2024-06-23\r\n5856,1659,APAC,grocery,mobile,61.59,3,0.078,coupon,2024-01-18\r\n5857,1559,EMEA,electronics,online,125.84,3,0.124,loyalty,2024-07-09\r\n5858,1425,EMEA,sports,online,77.43,3,0.067,coupon,2024-12-23\r\n5859,1292,LATAM,grocery,mobile,97.73,6,0.004,none,2024-02-27\r\n5860,1988,AMER,grocery,retail,62.36,8,0.211,none,2024-09-23\r\n5861,1724,LATAM,grocery,retail,73.84,7,0.211,bundle,2024-05-04\r\n5862,2105,APAC,toys,mobile,49.59,2,0.101,bundle,2024-03-03\r\n5863,2412,LATAM,grocery,mobile,70.45,4,0.113,loyalty,2024-03-09\r\n5864,1453,APAC,toys,mobile,84.31,2,0.096,coupon,2024-08-26\r\n5865,1591,APAC,grocery,retail,68.48,4,0.213,none,2024-10-14\r\n5866,1969,LATAM,grocery,retail,36.72,4,0.149,none,2024-02-19\r\n5867,1819,AMER,electronics,retail,22.59,3,0.188,bundle,2024-02-23\r\n5868,2106,LATAM,home,online,87.68,4,0.081,none,2024-10-27\r\n5869,1194,APAC,fashion,mobi
le,59.04,4,0.136,none,2024-01-11\r\n5870,2400,EMEA,fashion,mobile,39.61,5,0.071,none,2024-06-11\r\n5871,1995,LATAM,home,online,60.14,3,0.048,bundle,2024-06-01\r\n5872,1409,APAC,electronics,online,45.70,5,0.174,none,2024-02-22\r\n5873,1143,LATAM,toys,mobile,42.11,1,0.190,none,2024-10-22\r\n5874,1779,APAC,grocery,online,144.06,4,0.094,none,2024-06-10\r\n5875,2292,EMEA,fashion,mobile,105.53,1,0.088,loyalty,2024-01-22\r\n5876,2260,EMEA,grocery,online,153.42,1,0.071,none,2024-03-18\r\n5877,1321,EMEA,grocery,online,44.26,5,0.164,none,2024-03-20\r\n5878,1658,AMER,grocery,retail,218.58,1,0.114,none,2024-06-24\r\n5879,1603,EMEA,fashion,online,167.13,4,0.244,none,2024-10-07\r\n5880,2454,LATAM,electronics,online,40.60,7,0.090,none,2024-09-13\r\n5881,1517,AMER,sports,retail,140.01,2,0.120,none,2024-06-11\r\n5882,1655,LATAM,toys,partner,43.64,6,0.140,none,2024-10-08\r\n5883,2052,LATAM,sports,online,38.41,2,0.191,loyalty,2024-11-10\r\n5884,1974,EMEA,grocery,partner,65.21,7,0.233,loyalty,2024-03-25\r\n5885,2011,AMER,grocery,partner,68.93,2,0.108,coupon,2024-11-27\r\n5886,1034,EMEA,home,retail,14.17,6,0.216,loyalty,2024-08-06\r\n5887,2226,EMEA,grocery,mobile,47.38,8,0.064,coupon,2024-04-16\r\n5888,1776,APAC,home,online,25.01,2,0.080,coupon,2024-11-22\r\n5889,1612,LATAM,electronics,online,41.29,7,0.039,loyalty,2024-07-13\r\n5890,1093,APAC,sports,partner,70.54,7,0.053,coupon,2024-09-21\r\n5891,1186,APAC,grocery,retail,45.49,4,0.184,none,2024-06-16\r\n5892,2102,APAC,electronics,retail,42.62,6,0.238,coupon,2024-04-25\r\n5893,1905,APAC,toys,online,27.57,2,0.148,none,2024-03-16\r\n5894,1983,LATAM,grocery,online,78.73,6,0.037,coupon,2024-09-28\r\n5895,1798,AMER,grocery,online,136.90,3,0.135,none,2024-01-26\r\n5896,2119,AMER,sports,online,74.77,7,0.212,coupon,2024-06-08\r\n5897,1008,AMER,electronics,online,136.05,6,0.209,bundle,2024-09-11\r\n5898,2148,EMEA,electronics,online,48.19,3,0.235,none,2024-12-15\r\n5899,2233,EMEA,electronics,retail,88.24,6,0.122,bundle,2024-09-22\r\n5900,1188,LATA
M,grocery,retail,34.58,5,0.187,none,2024-07-24\r\n5901,1719,LATAM,toys,online,63.91,8,0.191,none,2024-03-23\r\n5902,1094,LATAM,sports,online,46.03,1,0.129,none,2024-08-17\r\n5903,1580,AMER,electronics,online,66.90,1,0.248,none,2024-10-06\r\n5904,1520,APAC,sports,online,107.72,1,0.166,bundle,2024-12-01\r\n5905,2301,EMEA,sports,online,71.74,3,0.212,none,2024-04-10\r\n5906,2282,EMEA,electronics,online,42.71,1,0.015,none,2024-01-19\r\n5907,1827,EMEA,fashion,retail,29.06,4,0.155,none,2024-11-14\r\n5908,1383,AMER,fashion,retail,54.95,6,0.204,none,2024-09-12\r\n5909,1604,EMEA,home,online,41.51,1,0.180,none,2024-04-03\r\n5910,2249,LATAM,home,online,106.38,6,0.049,coupon,2024-05-18\r\n5911,2133,AMER,electronics,retail,172.92,4,0.002,none,2024-07-22\r\n5912,2048,LATAM,fashion,online,19.27,4,0.110,none,2024-10-01\r\n5913,1139,EMEA,electronics,retail,65.00,4,0.204,none,2024-11-27\r\n5914,1071,AMER,toys,partner,57.00,5,0.242,none,2024-04-17\r\n5915,2376,LATAM,grocery,online,34.15,7,0.058,loyalty,2024-01-11\r\n5916,1470,LATAM,home,online,35.04,4,0.045,none,2024-06-11\r\n5917,2288,AMER,home,mobile,60.64,8,0.220,none,2024-05-11\r\n5918,1662,LATAM,grocery,online,35.97,6,0.026,none,2024-06-03\r\n5919,1113,EMEA,fashion,online,19.72,7,0.070,bundle,2024-12-17\r\n5920,1072,LATAM,home,mobile,52.43,5,0.239,none,2024-12-02\r\n5921,2427,LATAM,electronics,online,62.54,4,0.079,none,2024-07-19\r\n5922,1874,LATAM,fashion,online,85.08,2,0.013,none,2024-02-19\r\n5923,1699,APAC,fashion,online,30.27,2,0.098,bundle,2024-06-16\r\n5924,1246,EMEA,electronics,online,59.26,6,0.210,bundle,2024-11-01\r\n5925,1588,LATAM,grocery,online,50.05,4,0.060,none,2024-09-21\r\n5926,2344,LATAM,electronics,online,49.37,1,0.108,none,2024-03-10\r\n5927,1098,APAC,grocery,mobile,59.98,8,0.058,coupon,2024-12-22\r\n5928,2033,LATAM,toys,retail,74.03,1,0.059,bundle,2024-02-08\r\n5929,2326,LATAM,home,retail,59.86,1,0.028,none,2024-04-13\r\n5930,1640,APAC,fashion,retail,46.08,2,0.096,loyalty,2024-02-17\r\n5931,1163,AMER,electroni
cs,retail,32.17,7,0.062,none,2024-01-26\r\n5932,1330,EMEA,grocery,retail,42.44,8,0.038,loyalty,2024-02-25\r\n5933,1465,AMER,grocery,online,60.22,6,0.083,coupon,2024-07-15\r\n5934,2444,EMEA,fashion,online,88.31,3,0.149,bundle,2024-05-14\r\n5935,1531,EMEA,electronics,online,29.97,4,0.002,loyalty,2024-05-08\r\n5936,1765,EMEA,sports,online,18.16,4,0.090,none,2024-07-21\r\n5937,1624,AMER,electronics,mobile,61.52,2,0.204,coupon,2024-08-16\r\n5938,2167,APAC,toys,online,44.94,1,0.226,coupon,2024-09-08\r\n5939,1681,LATAM,grocery,partner,30.81,2,0.118,none,2024-12-26\r\n5940,1836,LATAM,electronics,retail,105.61,4,0.062,none,2024-06-18\r\n5941,1064,AMER,grocery,mobile,203.16,2,0.223,none,2024-06-03\r\n5942,1852,AMER,home,mobile,47.33,7,0.228,none,2024-06-08\r\n5943,1848,EMEA,home,online,77.84,3,0.182,coupon,2024-02-12\r\n5944,1642,EMEA,grocery,online,75.11,1,0.032,bundle,2024-11-02\r\n5945,1930,AMER,grocery,partner,144.81,2,0.061,none,2024-06-12\r\n5946,1362,AMER,fashion,retail,97.98,1,0.083,none,2024-07-17\r\n5947,1738,LATAM,fashion,mobile,38.37,7,0.098,coupon,2024-04-27\r\n5948,1374,APAC,grocery,mobile,71.38,8,0.046,none,2024-11-05\r\n5949,1143,LATAM,electronics,retail,56.38,7,0.198,coupon,2024-09-17\r\n5950,1618,EMEA,home,mobile,34.54,8,0.025,none,2024-03-01\r\n5951,2053,AMER,fashion,online,22.17,7,0.039,none,2024-07-05\r\n5952,1325,APAC,grocery,online,51.57,1,0.012,bundle,2024-03-22\r\n5953,1187,AMER,fashion,mobile,27.38,3,0.092,loyalty,2024-12-09\r\n5954,2015,APAC,electronics,online,44.89,8,0.081,none,2024-07-14\r\n5955,1413,LATAM,sports,retail,58.84,5,0.219,loyalty,2024-03-28\r\n5956,2115,APAC,grocery,retail,68.70,2,0.146,none,2024-09-16\r\n5957,1367,AMER,toys,retail,78.91,5,0.025,none,2024-08-28\r\n5958,1658,AMER,toys,online,85.86,6,0.118,none,2024-03-17\r\n5959,1362,AMER,home,mobile,64.07,7,0.129,none,2024-07-22\r\n5960,1601,APAC,electronics,online,33.81,7,0.235,loyalty,2024-06-07\r\n5961,1585,AMER,grocery,online,20.53,1,0.041,none,2024-08-13\r\n5962,2234,LATAM,electro
nics,retail,73.09,4,0.073,none,2024-04-10\r\n5963,2093,LATAM,grocery,mobile,104.62,7,0.186,none,2024-02-07\r\n5964,2090,AMER,sports,online,74.92,4,0.075,none,2024-09-04\r\n5965,1998,APAC,grocery,online,146.35,2,0.167,coupon,2024-06-03\r\n5966,2491,APAC,home,online,122.59,7,0.046,none,2024-02-25\r\n5967,1743,LATAM,sports,retail,54.25,7,0.198,none,2024-12-23\r\n5968,2094,AMER,grocery,retail,93.24,2,0.184,bundle,2024-10-15\r\n5969,1064,AMER,electronics,online,47.05,5,0.108,none,2024-12-22\r\n5970,1170,AMER,electronics,online,61.34,3,0.121,loyalty,2024-08-15\r\n5971,1397,LATAM,grocery,online,45.44,4,0.192,none,2024-03-04\r\n5972,1627,LATAM,grocery,retail,76.81,3,0.039,coupon,2024-12-11\r\n5973,1501,AMER,grocery,online,31.09,8,0.207,coupon,2024-12-03\r\n5974,2071,APAC,grocery,online,35.22,7,0.215,coupon,2024-04-15\r\n5975,1447,LATAM,home,mobile,86.72,8,0.027,none,2024-09-02\r\n5976,2163,EMEA,toys,retail,37.49,2,0.071,bundle,2024-02-24\r\n5977,1075,AMER,grocery,online,37.00,2,0.178,bundle,2024-06-28\r\n5978,1743,LATAM,electronics,retail,110.04,5,0.152,none,2024-06-11\r\n5979,1738,LATAM,grocery,online,43.79,8,0.079,none,2024-01-27\r\n5980,1980,LATAM,sports,retail,42.14,6,0.098,none,2024-07-21\r\n5981,1674,LATAM,sports,online,49.05,3,0.015,bundle,2024-04-20\r\n5982,2227,LATAM,sports,online,126.28,8,0.055,none,2024-11-23\r\n5983,1743,LATAM,grocery,online,78.66,6,0.130,coupon,2024-10-25\r\n5984,1257,APAC,grocery,retail,73.12,8,0.107,none,2024-07-27\r\n5985,1443,EMEA,fashion,online,55.17,7,0.044,loyalty,2024-06-19\r\n5986,2002,APAC,grocery,partner,68.88,6,0.086,none,2024-01-12\r\n5987,1382,LATAM,grocery,online,49.86,1,0.068,coupon,2024-10-23\r\n5988,1649,APAC,electronics,retail,15.31,4,0.098,none,2024-07-07\r\n5989,1644,EMEA,sports,online,17.48,4,0.226,none,2024-06-03\r\n5990,1084,AMER,fashion,online,28.45,6,0.213,none,2024-08-16\r\n5991,1794,AMER,grocery,mobile,40.03,2,0.198,none,2024-06-24\r\n5992,2474,LATAM,fashion,online,167.02,8,0.132,none,2024-12-22\r\n5993,1052,LATAM,gr
ocery,mobile,25.32,7,0.227,none,2024-02-23\r\n5994,1179,APAC,grocery,online,65.29,6,0.135,coupon,2024-12-17\r\n5995,1644,EMEA,grocery,retail,80.90,5,0.221,none,2024-07-16\r\n5996,1071,AMER,toys,online,106.85,6,0.198,none,2024-09-15\r\n5997,1748,APAC,home,online,28.75,4,0.018,loyalty,2024-07-18\r\n5998,2287,EMEA,electronics,online,34.90,2,0.052,none,2024-03-24\r\n5999,2108,AMER,toys,online,52.12,3,0.099,coupon,2024-01-11\r\n6000,1592,LATAM,grocery,retail,81.69,2,0.121,none,2024-01-25\r\n6001,2179,LATAM,toys,online,74.92,2,0.225,none,2024-03-01\r\n6002,1268,EMEA,toys,retail,124.86,5,0.099,loyalty,2024-03-20\r\n6003,2130,EMEA,grocery,retail,52.69,3,0.188,none,2024-02-11\r\n6004,2436,LATAM,electronics,retail,73.60,3,0.113,coupon,2024-04-19\r\n6005,2080,LATAM,grocery,retail,36.41,8,0.078,none,2024-12-18\r\n6006,1274,LATAM,home,online,100.83,5,0.202,bundle,2024-04-10\r\n6007,1830,EMEA,home,online,123.78,5,0.181,none,2024-07-01\r\n6008,1529,LATAM,home,online,57.94,5,0.243,none,2024-01-27\r\n6009,1453,APAC,toys,online,30.35,7,0.222,none,2024-07-13\r\n6010,2383,APAC,electronics,online,57.82,5,0.038,bundle,2024-01-14\r\n6011,2481,APAC,grocery,retail,39.26,4,0.043,bundle,2024-01-20\r\n6012,2479,EMEA,grocery,online,151.18,3,0.059,coupon,2024-04-15\r\n6013,1707,APAC,toys,retail,69.62,8,0.101,none,2024-01-10\r\n6014,1625,EMEA,home,online,35.66,8,0.077,bundle,2024-11-07\r\n6015,2054,AMER,grocery,retail,40.40,3,0.076,none,2024-07-26\r\n6016,2151,APAC,electronics,online,72.25,6,0.170,none,2024-08-27\r\n6017,1563,EMEA,fashion,online,86.19,3,0.207,coupon,2024-10-17\r\n6018,1868,AMER,home,online,43.06,5,0.067,none,2024-06-06\r\n6019,1729,AMER,toys,retail,193.19,8,0.164,none,2024-05-11\r\n6020,2092,AMER,sports,partner,34.67,6,0.064,none,2024-09-20\r\n6021,2421,AMER,fashion,online,145.64,8,0.033,coupon,2024-10-17\r\n6022,2350,APAC,grocery,online,79.80,5,0.171,none,2024-04-27\r\n6023,1928,AMER,fashion,mobile,80.77,3,0.031,none,2024-08-28\r\n6024,1564,APAC,fashion,retail,123.45,4,0.247,non
e,2024-09-04\r\n6025,1969,LATAM,home,online,39.28,8,0.039,bundle,2024-01-12\r\n6026,2074,AMER,grocery,online,33.65,6,0.237,none,2024-02-04\r\n6027,1025,EMEA,toys,retail,246.69,2,0.055,coupon,2024-06-14\r\n6028,2496,EMEA,grocery,retail,59.76,5,0.247,coupon,2024-11-15\r\n6029,1120,LATAM,grocery,partner,115.13,4,0.039,loyalty,2024-07-03\r\n6030,2271,LATAM,toys,partner,18.81,7,0.242,none,2024-06-02\r\n6031,1300,EMEA,grocery,retail,48.94,5,0.010,none,2024-07-06\r\n6032,1224,APAC,home,mobile,145.49,7,0.079,none,2024-12-14\r\n6033,2377,AMER,sports,online,36.93,1,0.159,loyalty,2024-02-04\r\n6034,1665,AMER,toys,retail,58.76,2,0.146,loyalty,2024-04-28\r\n6035,1252,APAC,toys,online,52.66,5,0.244,coupon,2024-11-03\r\n6036,1866,EMEA,home,online,35.55,3,0.053,coupon,2024-04-06\r\n6037,1027,APAC,home,online,137.23,5,0.058,none,2024-02-14\r\n6038,1137,APAC,fashion,mobile,104.08,8,0.035,none,2024-10-12\r\n6039,1277,AMER,fashion,retail,75.40,5,0.096,loyalty,2024-10-23\r\n6040,2130,EMEA,grocery,retail,67.00,3,0.021,loyalty,2024-03-17\r\n6041,1974,EMEA,fashion,online,73.44,6,0.059,bundle,2024-07-06\r\n6042,2443,LATAM,electronics,retail,83.60,1,0.137,bundle,2024-07-11\r\n6043,1145,AMER,home,mobile,75.33,3,0.116,bundle,2024-07-01\r\n6044,1897,AMER,grocery,retail,80.43,2,0.008,none,2024-04-18\r\n6045,1477,APAC,fashion,partner,43.55,6,0.153,bundle,2024-03-08\r\n6046,1738,LATAM,fashion,retail,53.41,4,0.142,none,2024-10-23\r\n6047,1652,APAC,grocery,online,40.68,3,0.135,none,2024-11-04\r\n6048,2363,AMER,electronics,online,46.54,5,0.003,coupon,2024-01-19\r\n6049,1031,AMER,fashion,retail,38.21,4,0.095,bundle,2024-08-15\r\n6050,2137,LATAM,sports,retail,56.90,7,0.093,none,2024-09-10\r\n6051,1501,AMER,fashion,online,125.19,4,0.147,bundle,2024-07-14\r\n6052,1814,AMER,grocery,partner,64.78,7,0.128,coupon,2024-07-21\r\n6053,1373,LATAM,electronics,mobile,94.94,5,0.115,bundle,2024-07-02\r\n6054,2406,EMEA,home,mobile,29.27,2,0.136,coupon,2024-09-25\r\n6055,1384,LATAM,toys,retail,125.37,7,0.091,loyalty,2
024-12-17\r\n6056,2422,APAC,toys,retail,36.07,7,0.213,coupon,2024-09-27\r\n6057,1259,EMEA,toys,retail,45.41,8,0.173,none,2024-10-12\r\n6058,2155,APAC,grocery,retail,27.50,2,0.080,none,2024-05-04\r\n6059,2033,LATAM,grocery,online,56.41,7,0.169,none,2024-07-15\r\n6060,1483,EMEA,home,online,42.21,2,0.011,none,2024-05-27\r\n6061,1333,EMEA,grocery,mobile,130.78,6,0.085,none,2024-01-22\r\n6062,2330,EMEA,fashion,mobile,127.67,5,0.213,none,2024-08-24\r\n6063,1250,APAC,fashion,retail,83.28,1,0.143,none,2024-06-21\r\n6064,1098,APAC,electronics,online,51.73,8,0.031,coupon,2024-05-15\r\n6065,2398,EMEA,home,online,76.57,1,0.182,none,2024-09-19\r\n6066,1871,APAC,toys,retail,107.75,6,0.172,loyalty,2024-06-20\r\n6067,1730,AMER,home,online,112.50,3,0.108,coupon,2024-10-06\r\n6068,2176,AMER,fashion,online,160.91,4,0.169,bundle,2024-07-01\r\n6069,1574,AMER,sports,mobile,60.18,2,0.200,none,2024-11-10\r\n6070,2441,EMEA,home,retail,65.42,8,0.242,none,2024-04-23\r\n6071,2439,AMER,grocery,online,50.42,8,0.212,bundle,2024-01-15\r\n6072,1836,LATAM,toys,online,112.39,7,0.131,none,2024-01-10\r\n6073,1153,AMER,sports,online,48.09,3,0.152,none,2024-02-17\r\n6074,1174,APAC,grocery,online,49.81,2,0.004,coupon,2024-06-15\r\n6075,1804,AMER,electronics,retail,238.52,3,0.166,loyalty,2024-03-19\r\n6076,2104,EMEA,electronics,retail,149.91,1,0.173,bundle,2024-01-14\r\n6077,2331,APAC,sports,online,83.63,5,0.088,coupon,2024-03-27\r\n6078,1478,EMEA,grocery,retail,161.31,6,0.080,none,2024-10-13\r\n6079,2220,LATAM,sports,mobile,149.29,8,0.169,coupon,2024-10-06\r\n6080,1063,AMER,sports,partner,31.75,5,0.244,none,2024-09-14\r\n6081,2226,EMEA,toys,retail,55.25,3,0.067,none,2024-05-18\r\n6082,1974,EMEA,electronics,online,192.51,2,0.144,bundle,2024-12-14\r\n6083,1499,EMEA,home,retail,64.85,6,0.149,none,2024-01-07\r\n6084,1242,LATAM,home,retail,26.40,6,0.014,coupon,2024-02-04\r\n6085,1040,LATAM,toys,mobile,44.23,7,0.187,none,2024-04-16\r\n6086,1410,AMER,fashion,online,25.57,8,0.044,bundle,2024-03-18\r\n6087,2153,AP
AC,fashion,online,100.10,6,0.160,bundle,2024-01-25\r\n6088,1683,AMER,home,online,112.92,6,0.150,bundle,2024-07-27\r\n6089,2400,EMEA,toys,online,94.22,1,0.070,none,2024-05-23\r\n6090,2376,LATAM,home,retail,43.27,4,0.232,bundle,2024-05-06\r\n6091,2081,APAC,toys,retail,66.25,8,0.197,bundle,2024-10-22\r\n6092,1991,APAC,grocery,online,59.01,8,0.203,none,2024-11-14\r\n6093,1351,APAC,fashion,online,20.47,7,0.124,coupon,2024-08-06\r\n6094,2045,LATAM,home,retail,56.40,4,0.066,loyalty,2024-04-12\r\n6095,1603,EMEA,grocery,online,77.05,5,0.241,coupon,2024-02-24\r\n6096,1514,LATAM,fashion,online,91.03,6,0.113,coupon,2024-03-26\r\n6097,1933,EMEA,electronics,online,55.27,7,0.134,none,2024-11-09\r\n6098,1731,AMER,electronics,retail,38.22,5,0.200,none,2024-06-05\r\n6099,1570,AMER,home,retail,66.05,2,0.156,none,2024-11-16\r\n6100,1688,LATAM,fashion,online,62.82,3,0.137,none,2024-12-01\r\n6101,2221,LATAM,electronics,retail,123.49,1,0.030,coupon,2024-07-14\r\n6102,1326,AMER,fashion,online,35.75,3,0.028,loyalty,2024-06-22\r\n6103,1474,LATAM,fashion,online,37.01,1,0.068,coupon,2024-04-27\r\n6104,2061,EMEA,home,retail,92.31,8,0.217,none,2024-05-15\r\n6105,1019,APAC,electronics,retail,43.95,1,0.161,coupon,2024-12-10\r\n6106,2478,AMER,grocery,retail,37.07,1,0.214,none,2024-06-12\r\n6107,2052,LATAM,grocery,partner,114.79,6,0.012,bundle,2024-11-18\r\n6108,1289,LATAM,grocery,online,50.55,6,0.103,coupon,2024-09-20\r\n6109,1631,APAC,sports,retail,71.31,6,0.229,none,2024-01-25\r\n6110,1725,APAC,fashion,online,129.12,8,0.048,none,2024-01-23\r\n6111,1197,LATAM,grocery,online,46.87,6,0.123,bundle,2024-06-27\r\n6112,1070,EMEA,toys,online,139.25,7,0.234,coupon,2024-11-09\r\n6113,1939,LATAM,fashion,online,57.67,2,0.244,coupon,2024-11-02\r\n6114,2351,EMEA,fashion,online,33.24,6,0.010,none,2024-12-08\r\n6115,1959,EMEA,home,retail,165.64,3,0.019,bundle,2024-03-17\r\n6116,1701,LATAM,sports,retail,129.72,5,0.235,none,2024-11-16\r\n6117,1957,AMER,grocery,online,58.45,7,0.159,coupon,2024-06-23\r\n6118,1220,LA
TAM,grocery,retail,69.52,3,0.111,none,2024-09-22\r\n6119,1919,EMEA,electronics,retail,76.64,2,0.023,bundle,2024-09-26\r\n6120,1405,LATAM,grocery,online,49.35,4,0.191,coupon,2024-06-08\r\n6121,1629,LATAM,grocery,retail,21.15,5,0.206,none,2024-09-15\r\n6122,2329,LATAM,electronics,online,81.44,6,0.216,coupon,2024-06-12\r\n6123,1596,EMEA,home,online,36.23,2,0.155,none,2024-04-27\r\n6124,1899,APAC,home,online,79.06,5,0.009,bundle,2024-10-16\r\n6125,1960,EMEA,toys,retail,30.05,5,0.144,none,2024-09-25\r\n6126,1529,LATAM,electronics,online,60.86,4,0.091,bundle,2024-01-08\r\n6127,1851,EMEA,fashion,online,94.77,6,0.132,loyalty,2024-08-03\r\n6128,1266,AMER,grocery,mobile,27.24,3,0.187,none,2024-07-23\r\n6129,2236,APAC,grocery,online,45.49,8,0.186,coupon,2024-02-13\r\n6130,1881,LATAM,grocery,mobile,72.82,1,0.243,none,2024-08-18\r\n6131,2296,AMER,sports,online,31.37,6,0.037,none,2024-08-05\r\n6132,1392,AMER,grocery,online,44.43,8,0.097,none,2024-10-01\r\n6133,1213,EMEA,grocery,online,170.66,7,0.075,none,2024-03-26\r\n6134,1120,LATAM,fashion,online,57.94,6,0.139,coupon,2024-05-26\r\n6135,2375,AMER,home,online,17.97,1,0.010,none,2024-11-19\r\n6136,1151,APAC,grocery,online,17.17,1,0.098,none,2024-09-23\r\n6137,2009,LATAM,home,retail,58.12,8,0.118,bundle,2024-01-05\r\n6138,1480,APAC,grocery,online,82.59,6,0.175,none,2024-06-07\r\n6139,2009,LATAM,sports,retail,87.50,3,0.043,coupon,2024-04-14\r\n6140,2158,APAC,fashion,mobile,89.04,6,0.168,loyalty,2024-08-05\r\n6141,1549,APAC,sports,online,64.41,6,0.184,none,2024-02-10\r\n6142,1690,LATAM,fashion,online,54.95,4,0.212,none,2024-01-19\r\n6143,2479,EMEA,grocery,online,75.24,7,0.149,none,2024-01-04\r\n6144,1932,EMEA,home,online,61.68,8,0.013,coupon,2024-08-20\r\n6145,1527,AMER,electronics,retail,92.26,8,0.244,coupon,2024-06-17\r\n6146,1271,EMEA,grocery,retail,31.93,3,0.181,none,2024-05-15\r\n6147,1095,APAC,home,mobile,28.22,1,0.186,loyalty,2024-09-04\r\n6148,1821,LATAM,grocery,online,80.51,1,0.134,none,2024-07-02\r\n6149,2206,AMER,toys,onli
ne,72.88,3,0.188,none,2024-10-05\r\n6150,1113,EMEA,grocery,online,84.00,2,0.064,none,2024-01-10\r\n6151,1462,LATAM,sports,retail,26.97,4,0.113,coupon,2024-11-04\r\n6152,1456,APAC,fashion,online,81.01,7,0.062,none,2024-12-11\r\n6153,1556,AMER,grocery,retail,18.83,6,0.144,bundle,2024-06-28\r\n6154,1465,AMER,home,online,114.81,8,0.006,bundle,2024-08-08\r\n6155,2359,LATAM,fashion,online,39.45,5,0.171,bundle,2024-04-01\r\n6156,1577,AMER,grocery,online,35.28,8,0.152,none,2024-09-03\r\n6157,1826,LATAM,grocery,online,86.36,7,0.110,bundle,2024-09-26\r\n6158,2310,EMEA,home,retail,106.23,8,0.099,coupon,2024-04-14\r\n6159,2301,EMEA,home,retail,51.63,4,0.187,coupon,2024-07-05\r\n6160,1875,EMEA,home,retail,62.46,7,0.008,none,2024-04-01\r\n6161,1753,APAC,home,online,74.98,3,0.166,bundle,2024-12-27\r\n6162,1502,APAC,home,retail,116.52,3,0.107,loyalty,2024-04-06\r\n6163,1096,EMEA,electronics,online,29.21,8,0.065,none,2024-09-21\r\n6164,1444,EMEA,home,partner,54.61,6,0.155,none,2024-05-08\r\n6165,1610,LATAM,grocery,online,43.76,5,0.152,none,2024-01-23\r\n6166,2190,LATAM,grocery,online,51.48,8,0.215,none,2024-04-22\r\n6167,1576,EMEA,home,mobile,70.77,6,0.061,none,2024-06-05\r\n6168,1993,APAC,grocery,mobile,42.75,4,0.154,loyalty,2024-06-15\r\n6169,2156,AMER,grocery,partner,33.40,5,0.092,coupon,2024-11-13\r\n6170,1489,AMER,electronics,retail,14.47,7,0.206,none,2024-07-16\r\n6171,2426,AMER,toys,online,239.03,8,0.034,coupon,2024-05-06\r\n6172,1484,AMER,sports,online,61.11,4,0.188,none,2024-02-13\r\n6173,1476,APAC,fashion,online,138.95,4,0.002,none,2024-01-25\r\n6174,1872,LATAM,electronics,retail,146.98,3,0.096,none,2024-06-17\r\n6175,1032,AMER,electronics,online,70.23,3,0.238,none,2024-04-27\r\n6176,1055,AMER,toys,retail,77.93,8,0.093,loyalty,2024-01-27\r\n6177,1490,AMER,home,online,69.34,4,0.230,none,2024-02-18\r\n6178,1645,EMEA,electronics,mobile,52.77,8,0.163,bundle,2024-09-26\r\n6179,2381,AMER,sports,online,61.51,1,0.168,loyalty,2024-03-06\r\n6180,1599,APAC,electronics,retail,122.57,2
,0.133,none,2024-07-21\r\n6181,1698,EMEA,electronics,online,18.98,6,0.081,none,2024-02-05\r\n6182,1079,LATAM,fashion,retail,95.82,2,0.014,none,2024-02-21\r\n6183,1795,EMEA,fashion,retail,20.77,8,0.104,none,2024-07-01\r\n6184,2356,LATAM,fashion,online,67.87,3,0.226,bundle,2024-04-23\r\n6185,1430,EMEA,toys,retail,63.59,1,0.025,none,2024-01-15\r\n6186,1519,APAC,grocery,retail,78.53,2,0.095,none,2024-08-12\r\n6187,2133,AMER,electronics,retail,57.94,2,0.125,none,2024-05-16\r\n6188,2285,APAC,fashion,mobile,113.84,2,0.062,none,2024-09-04\r\n6189,1959,EMEA,grocery,mobile,70.14,2,0.025,none,2024-07-09\r\n6190,1154,LATAM,sports,online,88.15,8,0.191,bundle,2024-07-20\r\n6191,2160,LATAM,fashion,retail,48.77,5,0.076,none,2024-01-25\r\n6192,1064,AMER,fashion,online,28.49,6,0.135,none,2024-03-18\r\n6193,1645,EMEA,toys,partner,57.53,3,0.160,none,2024-06-07\r\n6194,1826,LATAM,sports,retail,37.29,5,0.085,none,2024-01-04\r\n6195,1268,EMEA,home,online,108.70,8,0.237,none,2024-11-17\r\n6196,1507,EMEA,grocery,online,90.56,7,0.034,none,2024-07-10\r\n6197,2419,LATAM,home,retail,58.06,2,0.023,none,2024-06-25\r\n6198,1995,LATAM,home,online,41.66,3,0.176,none,2024-08-05\r\n6199,1073,AMER,electronics,retail,83.38,1,0.201,none,2024-03-04\r\n6200,1610,LATAM,grocery,online,86.88,5,0.136,none,2024-06-25\r\n6201,1901,AMER,grocery,retail,54.71,7,0.200,none,2024-05-24\r\n6202,2472,AMER,fashion,online,67.19,4,0.075,none,2024-05-24\r\n6203,2376,LATAM,grocery,online,128.91,1,0.168,none,2024-09-15\r\n6204,2334,LATAM,sports,online,65.99,4,0.136,loyalty,2024-07-14\r\n6205,2214,AMER,toys,online,129.55,5,0.080,none,2024-11-06\r\n6206,2099,AMER,electronics,retail,57.33,6,0.075,none,2024-09-18\r\n6207,1720,AMER,fashion,online,44.26,8,0.197,none,2024-04-18\r\n6208,1956,APAC,grocery,online,62.92,6,0.058,none,2024-09-06\r\n6209,1079,LATAM,electronics,retail,57.00,1,0.123,none,2024-12-26\r\n6210,2494,AMER,electronics,online,37.59,7,0.237,coupon,2024-07-19\r\n6211,1187,AMER,sports,online,81.02,8,0.221,coupon,2024-0
6-21\r\n6212,2077,APAC,electronics,retail,80.34,2,0.094,none,2024-07-22\r\n6213,2310,EMEA,sports,retail,28.98,8,0.234,none,2024-07-26\r\n6214,1867,AMER,electronics,online,113.44,3,0.001,coupon,2024-02-03\r\n6215,1590,APAC,grocery,retail,56.45,4,0.102,coupon,2024-03-19\r\n6216,2082,APAC,electronics,online,98.96,1,0.087,coupon,2024-09-27\r\n6217,2372,AMER,sports,online,116.41,3,0.002,none,2024-02-23\r\n6218,1890,LATAM,electronics,online,52.57,4,0.140,coupon,2024-05-21\r\n6219,1720,AMER,electronics,retail,41.87,7,0.059,none,2024-03-23\r\n6220,2242,AMER,home,retail,33.05,2,0.079,none,2024-10-20\r\n6221,2047,AMER,fashion,online,60.78,3,0.228,none,2024-09-14\r\n6222,1920,LATAM,grocery,mobile,126.11,5,0.172,coupon,2024-09-14\r\n6223,2090,AMER,grocery,mobile,95.02,1,0.038,none,2024-01-21\r\n6224,1205,APAC,grocery,online,64.93,3,0.070,bundle,2024-04-18\r\n6225,2370,EMEA,electronics,online,41.04,2,0.108,none,2024-03-16\r\n6226,1088,LATAM,electronics,online,52.90,2,0.070,coupon,2024-09-23\r\n6227,2439,AMER,fashion,retail,59.74,8,0.038,coupon,2024-06-17\r\n6228,2144,EMEA,fashion,retail,70.61,8,0.172,coupon,2024-09-22\r\n6229,1832,APAC,home,online,77.52,3,0.045,coupon,2024-12-06\r\n6230,1707,APAC,grocery,mobile,39.63,4,0.109,coupon,2024-04-10\r\n6231,1829,EMEA,fashion,online,54.44,2,0.216,coupon,2024-02-24\r\n6232,2121,APAC,fashion,mobile,42.83,7,0.090,none,2024-01-17\r\n6233,1257,APAC,electronics,online,68.72,7,0.199,none,2024-03-28\r\n6234,1283,APAC,toys,online,175.10,6,0.124,coupon,2024-09-16\r\n6235,1363,EMEA,grocery,partner,98.99,2,0.244,none,2024-01-27\r\n6236,1323,EMEA,grocery,online,96.13,2,0.236,none,2024-01-13\r\n6237,1939,LATAM,home,online,31.78,6,0.175,none,2024-05-19\r\n6238,2441,EMEA,grocery,online,78.78,7,0.118,none,2024-06-25\r\n6239,2347,AMER,electronics,retail,94.87,8,0.094,none,2024-08-18\r\n6240,1880,LATAM,electronics,online,62.35,3,0.111,coupon,2024-03-07\r\n6241,2476,APAC,fashion,retail,54.34,8,0.241,none,2024-05-19\r\n6242,1804,AMER,fashion,online,91.16,2,
0.176,none,2024-09-04\r\n6243,2137,LATAM,grocery,retail,33.49,6,0.076,bundle,2024-05-26\r\n6244,2396,AMER,electronics,mobile,76.61,5,0.230,none,2024-04-11\r\n6245,2200,LATAM,electronics,online,45.19,4,0.018,none,2024-06-28\r\n6246,1527,AMER,sports,mobile,28.98,8,0.033,none,2024-06-12\r\n6247,2403,LATAM,sports,online,48.11,8,0.009,coupon,2024-12-07\r\n6248,1566,EMEA,fashion,mobile,30.95,1,0.181,none,2024-05-09\r\n6249,1261,APAC,home,mobile,47.68,2,0.217,none,2024-10-02\r\n6250,2121,APAC,sports,mobile,37.92,4,0.112,none,2024-12-09\r\n6251,1676,LATAM,sports,online,42.58,2,0.041,bundle,2024-05-09\r\n6252,2051,APAC,home,partner,82.72,3,0.067,none,2024-04-23\r\n6253,2287,EMEA,sports,online,100.08,5,0.010,coupon,2024-03-03\r\n6254,1449,EMEA,fashion,online,63.62,5,0.202,coupon,2024-01-06\r\n6255,1014,EMEA,grocery,online,71.16,6,0.151,none,2024-01-16\r\n6256,2031,AMER,toys,online,34.41,2,0.142,none,2024-05-13\r\n6257,2252,EMEA,grocery,retail,54.37,8,0.090,none,2024-06-05\r\n6258,1290,EMEA,fashion,mobile,38.98,2,0.241,loyalty,2024-03-06\r\n6259,2247,LATAM,electronics,online,43.57,6,0.210,none,2024-09-23\r\n6260,1894,APAC,grocery,online,75.68,4,0.053,none,2024-07-18\r\n6261,2463,AMER,fashion,partner,61.45,2,0.022,none,2024-10-12\r\n6262,2300,EMEA,home,retail,42.35,6,0.168,none,2024-07-01\r\n6263,2013,APAC,grocery,retail,26.22,6,0.158,none,2024-08-11\r\n6264,2479,EMEA,toys,retail,310.87,8,0.215,coupon,2024-08-26\r\n6265,2417,LATAM,sports,retail,91.15,8,0.185,loyalty,2024-04-03\r\n6266,1580,AMER,electronics,online,101.99,5,0.068,none,2024-08-26\r\n6267,1471,EMEA,home,mobile,29.68,2,0.003,none,2024-07-21\r\n6268,1729,AMER,sports,mobile,42.10,1,0.086,none,2024-04-12\r\n6269,1507,EMEA,home,mobile,78.24,5,0.053,loyalty,2024-05-05\r\n6270,1568,AMER,fashion,retail,80.33,6,0.119,none,2024-03-08\r\n6271,1228,APAC,electronics,retail,42.78,3,0.024,bundle,2024-10-05\r\n6272,2030,EMEA,sports,retail,78.78,1,0.197,none,2024-11-19\r\n6273,1457,EMEA,fashion,retail,70.91,8,0.111,none,2024-10-25\
r\n6274,2035,LATAM,electronics,retail,22.94,5,0.105,loyalty,2024-10-03\r\n6275,1172,APAC,grocery,retail,162.24,3,0.044,none,2024-06-13\r\n6276,1628,EMEA,grocery,retail,116.82,4,0.178,none,2024-07-28\r\n6277,1377,APAC,sports,mobile,93.58,8,0.019,bundle,2024-05-25\r\n6278,1155,EMEA,electronics,online,17.88,8,0.199,bundle,2024-11-10\r\n6279,1392,AMER,home,online,51.87,5,0.143,none,2024-03-27\r\n6280,2241,APAC,fashion,online,59.89,4,0.130,bundle,2024-08-07\r\n6281,1689,LATAM,sports,retail,47.09,8,0.177,none,2024-08-20\r\n6282,2054,AMER,grocery,online,25.48,1,0.106,none,2024-11-25\r\n6283,1243,AMER,home,retail,55.34,3,0.002,none,2024-12-21\r\n6284,2370,EMEA,home,online,26.65,2,0.144,none,2024-11-12\r\n6285,1584,EMEA,grocery,mobile,31.40,6,0.219,none,2024-05-12\r\n6286,1247,AMER,toys,online,26.22,2,0.232,none,2024-09-23\r\n6287,2137,LATAM,grocery,retail,73.39,3,0.063,none,2024-05-21\r\n6288,1012,LATAM,electronics,online,30.89,7,0.199,none,2024-11-01\r\n6289,2345,LATAM,fashion,online,145.79,5,0.211,bundle,2024-07-11\r\n6290,2436,LATAM,grocery,retail,138.26,8,0.153,coupon,2024-05-10\r\n6291,1424,APAC,home,retail,86.10,3,0.250,coupon,2024-09-02\r\n6292,1665,AMER,electronics,retail,36.36,4,0.082,none,2024-02-09\r\n6293,2079,EMEA,home,online,61.58,5,0.002,bundle,2024-03-09\r\n6294,2308,AMER,sports,retail,75.16,1,0.189,loyalty,2024-03-05\r\n6295,1556,AMER,grocery,mobile,74.46,7,0.130,none,2024-10-28\r\n6296,2179,LATAM,electronics,mobile,42.31,6,0.231,loyalty,2024-08-05\r\n6297,1859,AMER,fashion,mobile,30.40,1,0.212,coupon,2024-12-10\r\n6298,2235,AMER,grocery,retail,36.33,7,0.158,none,2024-09-13\r\n6299,2165,AMER,fashion,online,38.99,1,0.025,coupon,2024-10-07\r\n6300,2096,LATAM,grocery,mobile,81.60,5,0.038,coupon,2024-05-13\r\n6301,1321,EMEA,grocery,online,30.59,2,0.101,loyalty,2024-12-17\r\n6302,2264,LATAM,home,retail,54.32,6,0.012,coupon,2024-06-06\r\n6303,2438,AMER,home,retail,87.13,5,0.107,coupon,2024-08-07\r\n6304,1663,LATAM,grocery,partner,93.08,5,0.075,bundle,2024-09-25\r
\n6305,2319,AMER,home,partner,77.85,7,0.206,loyalty,2024-02-26\r\n6306,1818,AMER,grocery,online,111.96,7,0.234,none,2024-03-18\r\n6307,1089,LATAM,electronics,online,75.33,5,0.134,none,2024-04-23\r\n6308,1213,EMEA,grocery,retail,26.57,5,0.191,none,2024-11-08\r\n6309,2083,LATAM,fashion,retail,89.09,4,0.031,none,2024-12-16\r\n6310,1275,EMEA,toys,retail,78.67,8,0.020,none,2024-04-05\r\n6311,1351,APAC,home,mobile,21.75,2,0.137,bundle,2024-08-09\r\n6312,1754,EMEA,toys,retail,26.78,1,0.131,bundle,2024-01-23\r\n6313,1535,AMER,grocery,online,66.47,7,0.106,none,2024-10-18\r\n6314,2074,AMER,grocery,retail,87.03,3,0.230,none,2024-01-10\r\n6315,1288,LATAM,electronics,online,86.12,6,0.041,none,2024-01-26\r\n6316,1252,APAC,toys,online,36.81,1,0.161,none,2024-08-27\r\n6317,1879,EMEA,toys,online,46.88,4,0.156,none,2024-05-15\r\n6318,1949,AMER,electronics,partner,84.51,5,0.234,none,2024-06-22\r\n6319,2353,AMER,grocery,online,46.42,6,0.182,none,2024-06-01\r\n6320,1616,APAC,grocery,mobile,129.76,5,0.042,bundle,2024-07-08\r\n6321,1945,AMER,fashion,retail,71.57,8,0.155,none,2024-12-18\r\n6322,2080,LATAM,electronics,mobile,42.63,8,0.036,none,2024-03-25\r\n6323,2466,APAC,grocery,online,20.30,5,0.035,bundle,2024-03-18\r\n6324,1245,APAC,electronics,retail,52.92,5,0.040,coupon,2024-06-18\r\n6325,1823,EMEA,home,online,16.33,1,0.225,loyalty,2024-12-10\r\n6326,1679,APAC,fashion,retail,40.61,1,0.126,bundle,2024-08-20\r\n6327,1829,EMEA,electronics,online,67.62,4,0.095,none,2024-08-27\r\n6328,2344,LATAM,home,retail,57.36,2,0.135,none,2024-07-10\r\n6329,1368,EMEA,electronics,partner,26.74,7,0.181,none,2024-03-18\r\n6330,1995,LATAM,home,online,22.72,8,0.037,none,2024-10-16\r\n6331,1816,EMEA,fashion,online,70.95,1,0.168,none,2024-06-28\r\n6332,2018,AMER,sports,online,93.13,8,0.089,coupon,2024-08-11\r\n6333,2491,APAC,toys,online,33.27,3,0.124,none,2024-05-02\r\n6334,1026,APAC,toys,retail,69.11,6,0.085,none,2024-12-06\r\n6335,1350,LATAM,electronics,online,55.72,3,0.201,bundle,2024-03-11\r\n6336,1984,LAT
AM,grocery,online,71.15,4,0.208,none,2024-11-07\r\n6337,1475,LATAM,fashion,retail,62.85,4,0.164,none,2024-05-16\r\n6338,1294,APAC,electronics,online,195.44,3,0.206,bundle,2024-10-08\r\n6339,1358,APAC,fashion,online,31.71,4,0.203,none,2024-03-23\r\n6340,2393,LATAM,electronics,mobile,55.66,6,0.073,none,2024-12-18\r\n6341,1475,LATAM,sports,retail,105.22,7,0.079,coupon,2024-11-23\r\n6342,1592,LATAM,toys,partner,75.59,5,0.096,none,2024-09-15\r\n6343,2190,LATAM,fashion,retail,159.82,5,0.064,none,2024-01-02\r\n6344,1517,AMER,home,mobile,50.55,3,0.082,none,2024-09-11\r\n6345,1634,AMER,home,online,85.25,4,0.202,coupon,2024-11-25\r\n6346,1422,LATAM,grocery,online,16.74,4,0.013,bundle,2024-01-08\r\n6347,1000,APAC,electronics,retail,38.98,4,0.169,none,2024-11-19\r\n6348,1464,APAC,fashion,retail,136.15,8,0.003,none,2024-07-22\r\n6349,2422,APAC,home,mobile,88.02,2,0.178,bundle,2024-04-16\r\n6350,1718,EMEA,toys,retail,23.76,2,0.222,coupon,2024-09-15\r\n6351,1048,EMEA,sports,online,39.66,2,0.146,bundle,2024-12-06\r\n6352,1783,AMER,grocery,online,79.14,2,0.124,coupon,2024-01-04\r\n6353,1836,LATAM,grocery,online,41.65,8,0.178,none,2024-07-02\r\n6354,1734,AMER,grocery,retail,46.81,7,0.033,none,2024-03-08\r\n6355,2254,LATAM,electronics,retail,110.68,3,0.114,coupon,2024-01-06\r\n6356,1610,LATAM,fashion,online,72.64,2,0.134,none,2024-01-27\r\n6357,2372,AMER,grocery,online,52.57,7,0.169,none,2024-09-13\r\n6358,2272,EMEA,grocery,online,28.34,4,0.248,coupon,2024-08-14\r\n6359,1305,EMEA,home,retail,62.04,6,0.047,loyalty,2024-06-02\r\n6360,1698,EMEA,sports,mobile,79.31,2,0.030,coupon,2024-02-27\r\n6361,2313,LATAM,electronics,online,35.35,2,0.060,none,2024-01-28\r\n6362,1353,EMEA,home,online,96.62,4,0.248,bundle,2024-05-28\r\n6363,2402,AMER,grocery,mobile,115.65,3,0.021,none,2024-08-17\r\n6364,2036,APAC,toys,retail,41.68,6,0.201,none,2024-09-14\r\n6365,1356,LATAM,fashion,mobile,81.87,7,0.249,none,2024-03-10\r\n6366,1702,AMER,sports,mobile,91.68,5,0.215,none,2024-08-19\r\n6367,1266,AMER,toys,mo
bile,38.59,1,0.163,coupon,2024-04-08\r\n6368,1501,AMER,electronics,retail,78.93,2,0.008,none,2024-07-24\r\n6369,1806,APAC,sports,online,92.19,6,0.229,none,2024-07-02\r\n6370,2150,APAC,fashion,mobile,32.73,7,0.141,none,2024-06-11\r\n6371,1899,APAC,electronics,online,51.58,7,0.101,coupon,2024-01-22\r\n6372,1498,LATAM,electronics,retail,43.71,5,0.077,coupon,2024-09-15\r\n6373,2382,LATAM,grocery,online,52.59,8,0.120,none,2024-04-21\r\n6374,2030,EMEA,home,online,23.75,8,0.148,none,2024-07-16\r\n6375,2454,LATAM,grocery,online,95.65,3,0.213,none,2024-04-25\r\n6376,1633,EMEA,electronics,retail,59.92,1,0.145,bundle,2024-09-24\r\n6377,1171,APAC,toys,retail,36.66,7,0.226,coupon,2024-12-02\r\n6378,1476,APAC,grocery,online,61.11,6,0.053,none,2024-02-14\r\n6379,1035,EMEA,electronics,mobile,30.33,7,0.223,coupon,2024-01-13\r\n6380,1547,AMER,grocery,online,61.88,7,0.070,loyalty,2024-04-01\r\n6381,2263,AMER,electronics,mobile,51.02,1,0.077,none,2024-04-09\r\n6382,1065,AMER,grocery,online,25.61,1,0.115,none,2024-02-19\r\n6383,2080,LATAM,grocery,retail,63.29,1,0.153,coupon,2024-12-24\r\n6384,2108,AMER,toys,online,121.74,3,0.000,coupon,2024-09-23\r\n6385,2368,AMER,home,online,41.68,8,0.162,none,2024-12-25\r\n6386,2020,AMER,grocery,retail,44.45,5,0.056,bundle,2024-07-23\r\n6387,2362,AMER,grocery,online,35.29,1,0.206,none,2024-05-15\r\n6388,2172,EMEA,grocery,retail,48.77,5,0.170,coupon,2024-06-13\r\n6389,1532,APAC,grocery,online,130.08,3,0.223,none,2024-11-15\r\n6390,2039,EMEA,grocery,retail,41.72,1,0.182,bundle,2024-03-24\r\n6391,1715,AMER,home,online,34.16,7,0.048,none,2024-05-09\r\n6392,1071,AMER,grocery,partner,92.31,7,0.067,none,2024-01-10\r\n6393,1050,AMER,sports,retail,40.47,3,0.079,none,2024-04-09\r\n6394,1082,EMEA,home,online,91.75,2,0.011,none,2024-02-08\r\n6395,1312,EMEA,electronics,online,61.86,8,0.146,bundle,2024-06-25\r\n6396,1427,EMEA,fashion,online,58.87,6,0.083,none,2024-09-27\r\n6397,1973,EMEA,sports,retail,48.22,5,0.020,none,2024-10-11\r\n6398,1139,EMEA,electronics,reta
il,95.94,5,0.099,coupon,2024-01-11\r\n6399,1233,AMER,home,retail,24.58,3,0.042,coupon,2024-10-15\r\n6400,1532,APAC,grocery,retail,46.34,2,0.174,loyalty,2024-10-11\r\n6401,1488,AMER,toys,online,36.05,7,0.124,coupon,2024-10-20\r\n6402,2129,APAC,grocery,online,25.83,5,0.177,bundle,2024-09-11\r\n6403,2469,LATAM,home,retail,41.56,6,0.238,loyalty,2024-06-08\r\n6404,2278,APAC,sports,online,55.76,6,0.175,none,2024-11-18\r\n6405,1867,AMER,grocery,mobile,122.12,5,0.007,none,2024-01-16\r\n6406,2067,LATAM,home,retail,176.68,2,0.080,loyalty,2024-04-02\r\n6407,1343,LATAM,electronics,online,229.01,5,0.057,none,2024-03-07\r\n6408,2452,LATAM,electronics,retail,37.31,2,0.050,bundle,2024-07-06\r\n6409,1323,EMEA,home,mobile,30.42,3,0.091,bundle,2024-02-12\r\n6410,1287,AMER,electronics,online,89.96,7,0.115,none,2024-01-28\r\n6411,2424,LATAM,grocery,retail,38.29,8,0.246,none,2024-06-21\r\n6412,1595,AMER,home,retail,30.84,3,0.134,bundle,2024-01-14\r\n6413,1830,EMEA,toys,retail,44.00,1,0.052,coupon,2024-10-02\r\n6414,1011,APAC,fashion,mobile,61.24,4,0.171,none,2024-06-08\r\n6415,1635,APAC,electronics,online,106.36,8,0.207,coupon,2024-12-28\r\n6416,1086,AMER,electronics,online,45.78,5,0.085,none,2024-12-15\r\n6417,1855,APAC,grocery,retail,77.10,6,0.158,bundle,2024-08-24\r\n6418,2419,LATAM,electronics,mobile,54.77,4,0.140,bundle,2024-08-14\r\n6419,1076,LATAM,electronics,online,45.18,3,0.074,bundle,2024-12-07\r\n6420,1027,APAC,grocery,retail,100.84,3,0.194,coupon,2024-07-12\r\n6421,1422,LATAM,electronics,mobile,31.05,4,0.213,none,2024-04-18\r\n6422,2061,EMEA,electronics,online,64.70,5,0.222,coupon,2024-02-23\r\n6423,2385,APAC,toys,online,103.14,7,0.134,none,2024-08-11\r\n6424,1640,APAC,electronics,retail,24.60,8,0.060,none,2024-03-18\r\n6425,1679,APAC,sports,online,62.84,2,0.192,loyalty,2024-11-03\r\n6426,1901,AMER,grocery,online,65.19,2,0.002,none,2024-03-25\r\n6427,2071,APAC,home,online,101.23,7,0.130,loyalty,2024-03-04\r\n6428,2022,LATAM,grocery,retail,179.29,3,0.182,bundle,2024-07-13\r\n6
429,2416,LATAM,grocery,online,80.47,5,0.113,none,2024-08-20\r\n6430,1409,APAC,home,retail,17.41,5,0.091,none,2024-04-27\r\n6431,2061,EMEA,fashion,online,117.66,6,0.083,none,2024-11-10\r\n6432,2355,EMEA,grocery,online,80.75,1,0.141,none,2024-09-13\r\n6433,1282,LATAM,sports,mobile,101.29,1,0.161,bundle,2024-09-04\r\n6434,1765,EMEA,electronics,online,47.37,8,0.021,none,2024-09-10\r\n6435,2172,EMEA,toys,partner,67.82,7,0.061,none,2024-06-18\r\n6436,2364,APAC,fashion,online,56.29,7,0.212,bundle,2024-02-09\r\n6437,2023,LATAM,home,retail,65.46,5,0.234,none,2024-02-11\r\n6438,2273,APAC,home,online,17.96,1,0.035,none,2024-09-22\r\n6439,1385,LATAM,home,retail,26.58,1,0.221,none,2024-02-28\r\n6440,1872,LATAM,home,mobile,135.49,2,0.159,none,2024-01-16\r\n6441,1159,LATAM,home,retail,67.35,4,0.086,loyalty,2024-11-13\r\n6442,1520,APAC,electronics,online,149.56,3,0.107,bundle,2024-01-20\r\n6443,2459,AMER,grocery,retail,139.05,6,0.140,coupon,2024-10-04\r\n6444,1999,EMEA,electronics,online,43.60,1,0.124,none,2024-05-18\r\n6445,1595,AMER,fashion,online,45.08,3,0.119,none,2024-01-13\r\n6446,2316,EMEA,fashion,retail,30.56,8,0.045,bundle,2024-11-14\r\n6447,1278,AMER,sports,online,67.26,8,0.049,bundle,2024-04-13\r\n6448,2077,APAC,home,mobile,35.66,1,0.120,coupon,2024-12-24\r\n6449,1816,EMEA,fashion,mobile,85.62,1,0.162,none,2024-04-11\r\n6450,1485,APAC,toys,retail,133.89,2,0.167,coupon,2024-05-12\r\n6451,1705,AMER,electronics,online,56.19,5,0.035,none,2024-10-17\r\n6452,2158,APAC,fashion,online,93.33,6,0.039,coupon,2024-01-10\r\n6453,1209,AMER,fashion,online,80.44,7,0.196,bundle,2024-11-15\r\n6454,2319,AMER,fashion,mobile,47.41,6,0.227,loyalty,2024-05-12\r\n6455,1973,EMEA,electronics,retail,38.20,1,0.075,coupon,2024-08-11\r\n6456,2149,EMEA,sports,mobile,45.01,1,0.200,none,2024-11-17\r\n6457,1854,AMER,sports,mobile,70.60,7,0.247,none,2024-06-03\r\n6458,1606,AMER,electronics,retail,152.42,5,0.210,coupon,2024-04-13\r\n6459,1076,LATAM,sports,retail,31.78,4,0.132,none,2024-02-01\r\n6460,2003,L
ATAM,grocery,retail,55.10,5,0.140,bundle,2024-03-14\r\n6461,2044,APAC,fashion,online,40.53,6,0.212,none,2024-06-20\r\n6462,1023,APAC,fashion,retail,57.97,8,0.060,none,2024-09-25\r\n6463,1214,EMEA,home,online,59.42,7,0.166,coupon,2024-09-07\r\n6464,2406,EMEA,electronics,online,58.64,1,0.040,coupon,2024-11-17\r\n6465,1070,EMEA,sports,retail,85.93,2,0.141,bundle,2024-03-08\r\n6466,2238,AMER,fashion,online,70.17,3,0.171,none,2024-10-15\r\n6467,2115,APAC,toys,online,61.00,6,0.173,none,2024-09-12\r\n6468,2061,EMEA,grocery,retail,59.70,4,0.046,none,2024-04-05\r\n6469,1303,LATAM,fashion,retail,36.60,8,0.189,none,2024-04-18\r\n6470,1528,EMEA,toys,mobile,54.67,6,0.192,none,2024-12-11\r\n6471,2484,APAC,grocery,online,55.97,8,0.169,none,2024-05-06\r\n6472,2260,EMEA,electronics,mobile,100.03,6,0.021,none,2024-10-14\r\n6473,2469,LATAM,fashion,retail,76.99,6,0.009,bundle,2024-10-27\r\n6474,1371,AMER,grocery,online,54.62,1,0.100,none,2024-04-08\r\n6475,1825,AMER,electronics,retail,27.16,3,0.191,none,2024-09-09\r\n6476,1135,APAC,electronics,retail,65.84,2,0.064,bundle,2024-07-15\r\n6477,2322,AMER,grocery,retail,53.07,4,0.011,none,2024-04-04\r\n6478,2074,AMER,home,retail,53.16,1,0.212,none,2024-02-19\r\n6479,1040,LATAM,grocery,mobile,19.47,3,0.113,none,2024-03-15\r\n6480,1258,EMEA,grocery,retail,113.26,3,0.071,none,2024-12-17\r\n6481,2411,EMEA,home,online,91.55,6,0.041,bundle,2024-12-13\r\n6482,1604,EMEA,sports,online,116.24,7,0.185,none,2024-09-04\r\n6483,2169,EMEA,home,online,103.48,2,0.208,none,2024-02-19\r\n6484,2061,EMEA,grocery,retail,214.85,3,0.052,coupon,2024-05-06\r\n6485,1166,AMER,home,retail,87.55,5,0.093,coupon,2024-10-26\r\n6486,1753,APAC,toys,online,58.15,8,0.165,bundle,2024-02-09\r\n6487,2253,AMER,electronics,online,118.55,2,0.250,none,2024-11-13\r\n6488,1665,AMER,home,online,53.00,4,0.236,loyalty,2024-02-06\r\n6489,1729,AMER,grocery,retail,57.93,3,0.119,coupon,2024-08-26\r\n6490,1466,AMER,home,online,58.10,5,0.145,loyalty,2024-03-20\r\n6491,2101,APAC,home,online,66.78
,3,0.048,none,2024-06-03\r\n6492,1705,AMER,grocery,online,20.63,3,0.076,coupon,2024-03-26\r\n6493,1270,LATAM,fashion,online,139.09,6,0.033,bundle,2024-09-24\r\n6494,1345,AMER,home,online,43.46,6,0.122,none,2024-12-15\r\n6495,1948,EMEA,home,online,67.65,1,0.229,none,2024-02-28\r\n6496,2121,APAC,fashion,online,50.46,8,0.162,none,2024-05-23\r\n6497,1576,EMEA,toys,retail,36.19,8,0.148,none,2024-03-06\r\n6498,1021,AMER,home,retail,41.40,7,0.031,none,2024-05-24\r\n6499,2458,EMEA,fashion,retail,104.79,6,0.247,coupon,2024-03-05\r\n6500,1642,EMEA,grocery,mobile,15.92,2,0.025,none,2024-11-18\r\n6501,1590,APAC,electronics,retail,32.08,3,0.247,coupon,2024-10-28\r\n6502,2379,AMER,electronics,online,22.76,5,0.061,coupon,2024-12-24\r\n6503,2398,EMEA,sports,online,58.49,6,0.033,bundle,2024-02-02\r\n6504,1133,EMEA,home,online,52.10,3,0.234,coupon,2024-12-14\r\n6505,2071,APAC,sports,online,113.75,4,0.104,coupon,2024-04-04\r\n6506,1585,AMER,grocery,partner,57.18,5,0.187,loyalty,2024-09-24\r\n6507,2420,EMEA,home,online,33.42,5,0.015,bundle,2024-08-13\r\n6508,1557,LATAM,sports,online,51.44,2,0.046,coupon,2024-08-13\r\n6509,2390,AMER,grocery,retail,20.56,4,0.233,none,2024-07-21\r\n6510,1150,LATAM,grocery,retail,33.79,7,0.021,bundle,2024-12-01\r\n6511,1992,LATAM,grocery,online,87.15,3,0.014,none,2024-09-20\r\n6512,1970,LATAM,grocery,online,79.05,8,0.204,coupon,2024-01-11\r\n6513,2257,AMER,electronics,online,87.23,3,0.060,none,2024-12-07\r\n6514,1426,AMER,grocery,retail,16.99,5,0.223,coupon,2024-06-08\r\n6515,1761,EMEA,sports,online,23.42,3,0.235,loyalty,2024-12-12\r\n6516,1601,APAC,fashion,retail,53.87,5,0.018,coupon,2024-02-04\r\n6517,1386,AMER,grocery,retail,77.02,7,0.250,bundle,2024-11-16\r\n6518,2102,APAC,grocery,retail,59.72,2,0.052,bundle,2024-01-25\r\n6519,1809,APAC,grocery,online,139.86,4,0.249,coupon,2024-11-19\r\n6520,2255,AMER,electronics,online,39.57,1,0.025,none,2024-02-03\r\n6521,1785,EMEA,electronics,online,62.11,6,0.064,none,2024-09-21\r\n6522,1076,LATAM,grocery,online,95.
27,1,0.179,none,2024-01-12\r\n6523,2251,APAC,electronics,online,82.01,1,0.250,loyalty,2024-05-25\r\n6524,1918,EMEA,fashion,online,39.44,8,0.026,none,2024-09-17\r\n6525,2009,LATAM,toys,online,27.75,1,0.049,none,2024-10-24\r\n6526,2158,APAC,electronics,mobile,53.75,5,0.093,none,2024-07-01\r\n6527,1002,EMEA,electronics,retail,83.50,1,0.204,bundle,2024-10-12\r\n6528,1753,APAC,electronics,retail,95.82,6,0.221,none,2024-10-08\r\n6529,1720,AMER,sports,partner,48.92,6,0.099,none,2024-03-02\r\n6530,2052,LATAM,grocery,online,206.25,7,0.140,none,2024-03-15\r\n6531,1954,APAC,grocery,online,31.63,2,0.193,loyalty,2024-11-01\r\n6532,1456,APAC,fashion,retail,31.84,6,0.183,none,2024-01-11\r\n6533,1088,LATAM,grocery,partner,99.72,1,0.038,none,2024-10-28\r\n6534,2153,APAC,electronics,online,73.80,8,0.199,none,2024-03-14\r\n6535,2270,APAC,grocery,partner,116.54,4,0.050,none,2024-06-02\r\n6536,2304,LATAM,electronics,online,37.76,3,0.219,loyalty,2024-01-21\r\n6537,2206,AMER,toys,retail,63.83,3,0.055,loyalty,2024-07-22\r\n6538,1476,APAC,grocery,retail,58.70,2,0.000,loyalty,2024-05-28\r\n6539,1746,LATAM,grocery,retail,129.90,3,0.169,none,2024-01-14\r\n6540,1111,APAC,toys,mobile,60.07,6,0.089,loyalty,2024-06-21\r\n6541,2076,AMER,home,online,141.92,3,0.186,none,2024-01-16\r\n6542,2141,AMER,home,online,139.73,7,0.022,none,2024-12-21\r\n6543,1422,LATAM,home,online,51.43,8,0.039,none,2024-07-06\r\n6544,1746,LATAM,home,retail,133.27,6,0.072,none,2024-06-04\r\n6545,2098,AMER,home,online,45.39,5,0.174,loyalty,2024-12-12\r\n6546,1597,APAC,electronics,retail,28.31,5,0.018,none,2024-12-18\r\n6547,2441,EMEA,grocery,online,180.58,7,0.194,bundle,2024-05-10\r\n6548,1910,LATAM,grocery,online,51.27,6,0.188,none,2024-01-07\r\n6549,2328,EMEA,toys,retail,51.43,5,0.220,bundle,2024-07-24\r\n6550,2118,AMER,home,online,47.92,1,0.203,none,2024-12-02\r\n6551,2468,EMEA,home,mobile,60.71,1,0.046,coupon,2024-07-19\r\n6552,1357,EMEA,grocery,retail,40.98,4,0.208,bundle,2024-12-11\r\n6553,2311,LATAM,grocery,retail,85.93,
4,0.212,none,2024-08-08\r\n6554,2384,LATAM,home,retail,49.58,8,0.218,none,2024-03-26\r\n6555,2123,AMER,sports,online,33.66,1,0.146,none,2024-08-12\r\n6556,1676,LATAM,home,online,15.61,3,0.193,none,2024-06-03\r\n6557,1115,AMER,grocery,mobile,39.85,3,0.050,none,2024-06-20\r\n6558,1998,APAC,sports,online,76.32,2,0.112,coupon,2024-02-11\r\n6559,1580,AMER,electronics,online,65.21,3,0.082,bundle,2024-11-06\r\n6560,1335,APAC,home,retail,78.91,7,0.130,none,2024-10-11\r\n6561,2405,AMER,home,online,22.41,6,0.208,none,2024-12-17\r\n6562,1310,AMER,electronics,online,140.97,3,0.046,none,2024-08-22\r\n6563,1768,AMER,electronics,retail,19.31,7,0.216,none,2024-09-07\r\n6564,2032,AMER,home,mobile,47.38,6,0.018,loyalty,2024-06-28\r\n6565,1348,AMER,sports,online,201.41,3,0.017,coupon,2024-11-15\r\n6566,2417,LATAM,electronics,retail,28.59,2,0.205,coupon,2024-01-12\r\n6567,2143,AMER,home,online,156.29,5,0.185,loyalty,2024-08-10\r\n6568,2072,AMER,electronics,retail,33.52,6,0.153,none,2024-02-23\r\n6569,1169,LATAM,grocery,mobile,44.60,6,0.156,coupon,2024-12-16\r\n6570,1038,APAC,toys,retail,71.30,2,0.207,coupon,2024-04-11\r\n6571,2260,EMEA,grocery,retail,77.08,5,0.192,none,2024-02-19\r\n6572,2142,LATAM,electronics,online,61.03,8,0.100,loyalty,2024-02-05\r\n6573,2019,AMER,electronics,online,85.84,2,0.176,bundle,2024-01-26\r\n6574,1001,LATAM,home,retail,94.39,5,0.242,none,2024-04-19\r\n6575,1041,APAC,electronics,online,75.74,8,0.241,none,2024-03-17\r\n6576,1847,LATAM,fashion,online,44.97,6,0.100,none,2024-07-28\r\n6577,2147,LATAM,electronics,retail,54.42,3,0.181,none,2024-06-27\r\n6578,2068,LATAM,home,mobile,70.35,5,0.201,coupon,2024-04-27\r\n6579,2483,LATAM,fashion,online,67.80,4,0.069,bundle,2024-05-27\r\n6580,1818,AMER,grocery,retail,48.65,4,0.077,coupon,2024-02-12\r\n6581,1044,EMEA,electronics,online,147.78,4,0.241,loyalty,2024-10-25\r\n6582,1661,LATAM,toys,retail,46.85,2,0.133,none,2024-02-18\r\n6583,1229,LATAM,home,online,76.23,3,0.054,loyalty,2024-12-17\r\n6584,1176,EMEA,fashion,onlin
e,68.01,7,0.113,none,2024-04-05\r\n6585,1481,LATAM,fashion,online,39.10,6,0.037,coupon,2024-10-18\r\n6586,1643,EMEA,grocery,online,70.81,3,0.162,none,2024-10-25\r\n6587,1394,LATAM,grocery,retail,54.46,4,0.052,coupon,2024-03-07\r\n6588,2339,AMER,electronics,mobile,53.00,8,0.224,none,2024-05-27\r\n6589,1407,LATAM,grocery,online,65.68,3,0.094,coupon,2024-10-23\r\n6590,1542,APAC,grocery,online,119.76,1,0.018,none,2024-03-03\r\n6591,1056,LATAM,home,retail,30.82,3,0.162,none,2024-04-23\r\n6592,1675,LATAM,sports,mobile,71.94,5,0.031,none,2024-03-22\r\n6593,1452,LATAM,toys,retail,44.56,8,0.157,none,2024-10-21\r\n6594,1561,EMEA,electronics,retail,98.53,2,0.076,none,2024-01-26\r\n6595,1545,AMER,electronics,retail,32.62,7,0.080,none,2024-01-20\r\n6596,2318,AMER,toys,mobile,30.83,6,0.204,none,2024-06-14\r\n6597,2187,EMEA,home,mobile,93.95,3,0.153,none,2024-07-04\r\n6598,1456,APAC,sports,mobile,58.79,2,0.180,coupon,2024-06-26\r\n6599,1909,APAC,sports,online,116.77,7,0.094,none,2024-08-24\r\n6600,2250,AMER,electronics,mobile,110.87,6,0.034,coupon,2024-02-02\r\n6601,2115,APAC,toys,retail,56.13,4,0.231,none,2024-08-13\r\n6602,1613,EMEA,toys,online,32.94,1,0.018,loyalty,2024-10-21\r\n6603,2028,APAC,fashion,online,56.11,7,0.142,none,2024-09-17\r\n6604,1477,APAC,toys,online,99.88,6,0.117,none,2024-07-28\r\n6605,1937,APAC,toys,retail,61.70,7,0.063,bundle,2024-12-16\r\n6606,2165,AMER,toys,online,88.29,8,0.213,coupon,2024-10-09\r\n6607,2039,EMEA,grocery,retail,23.44,3,0.151,none,2024-07-24\r\n6608,1601,APAC,grocery,retail,83.12,5,0.093,coupon,2024-04-07\r\n6609,1020,APAC,grocery,online,40.33,5,0.094,none,2024-08-09\r\n6610,1040,LATAM,home,online,41.70,8,0.146,coupon,2024-11-25\r\n6611,1351,APAC,home,retail,147.21,2,0.208,none,2024-06-23\r\n6612,1386,AMER,toys,online,55.48,1,0.228,coupon,2024-12-21\r\n6613,2195,APAC,sports,retail,238.77,7,0.001,coupon,2024-08-02\r\n6614,1957,AMER,sports,mobile,51.03,2,0.012,none,2024-05-04\r\n6615,1220,LATAM,grocery,retail,48.93,4,0.078,bundle,2024-07-19\
r\n6616,2030,EMEA,grocery,partner,48.52,3,0.011,none,2024-04-20\r\n6617,2100,APAC,grocery,retail,45.56,5,0.138,none,2024-04-06\r\n6618,1505,EMEA,electronics,mobile,93.08,1,0.058,coupon,2024-03-17\r\n6619,2155,APAC,fashion,online,67.98,1,0.236,none,2024-10-25\r\n6620,1840,LATAM,fashion,retail,30.86,2,0.228,coupon,2024-12-14\r\n6621,1667,AMER,sports,online,36.49,2,0.041,none,2024-11-26\r\n6622,1141,AMER,sports,online,28.36,3,0.238,loyalty,2024-01-17\r\n6623,1810,LATAM,toys,online,52.79,4,0.084,none,2024-05-22\r\n6624,2229,APAC,grocery,online,60.40,4,0.234,none,2024-03-06\r\n6625,2235,AMER,fashion,retail,70.46,7,0.002,none,2024-11-17\r\n6626,1965,LATAM,electronics,retail,42.98,2,0.056,coupon,2024-02-17\r\n6627,1479,AMER,grocery,mobile,76.87,5,0.068,none,2024-04-05\r\n6628,1972,LATAM,grocery,retail,88.99,6,0.029,none,2024-01-02\r\n6629,2002,APAC,toys,retail,37.52,5,0.237,none,2024-01-21\r\n6630,1179,APAC,electronics,retail,85.57,1,0.149,none,2024-08-08\r\n6631,1169,LATAM,electronics,retail,23.79,3,0.034,none,2024-02-01\r\n6632,1741,AMER,home,online,66.69,3,0.177,none,2024-06-14\r\n6633,2494,AMER,grocery,retail,69.72,2,0.112,none,2024-11-25\r\n6634,2490,AMER,sports,mobile,98.31,7,0.092,none,2024-08-24\r\n6635,1031,AMER,toys,mobile,36.44,3,0.244,none,2024-02-01\r\n6636,1148,AMER,electronics,retail,67.02,8,0.071,loyalty,2024-03-16\r\n6637,2347,AMER,grocery,retail,74.25,4,0.013,none,2024-02-11\r\n6638,1833,EMEA,toys,mobile,86.75,6,0.029,none,2024-11-21\r\n6639,1279,EMEA,fashion,online,53.92,2,0.212,bundle,2024-08-22\r\n6640,1572,LATAM,grocery,retail,45.81,6,0.066,none,2024-05-17\r\n6641,1616,APAC,grocery,retail,39.07,6,0.211,coupon,2024-08-28\r\n6642,1446,AMER,fashion,retail,109.04,6,0.175,none,2024-11-12\r\n6643,1585,AMER,home,retail,81.69,7,0.035,coupon,2024-02-06\r\n6644,1144,APAC,grocery,mobile,84.79,7,0.083,none,2024-04-28\r\n6645,1744,EMEA,home,retail,45.80,5,0.080,none,2024-03-23\r\n6646,2295,EMEA,fashion,retail,42.83,7,0.006,none,2024-03-26\r\n6647,1964,EMEA,grocery
,mobile,74.61,2,0.151,none,2024-02-15\r\n6648,1664,LATAM,grocery,retail,34.73,4,0.009,none,2024-12-26\r\n6649,1104,APAC,fashion,online,52.73,8,0.075,bundle,2024-04-09\r\n6650,1142,EMEA,fashion,online,31.69,8,0.133,none,2024-11-28\r\n6651,2447,AMER,grocery,retail,29.78,1,0.231,loyalty,2024-11-26\r\n6652,1245,APAC,home,online,19.17,6,0.189,none,2024-05-03\r\n6653,1455,APAC,electronics,mobile,66.59,2,0.176,none,2024-12-28\r\n6654,1666,LATAM,fashion,online,42.79,1,0.094,coupon,2024-07-09\r\n6655,1837,LATAM,grocery,retail,67.50,8,0.074,bundle,2024-01-28\r\n6656,2250,AMER,sports,retail,35.71,2,0.198,loyalty,2024-09-03\r\n6657,1912,APAC,fashion,retail,76.84,2,0.218,none,2024-06-13\r\n6658,1151,APAC,toys,online,46.03,6,0.179,coupon,2024-11-07\r\n6659,1430,EMEA,toys,online,44.98,5,0.112,none,2024-01-27\r\n6660,1325,APAC,grocery,online,69.06,7,0.218,coupon,2024-06-17\r\n6661,2417,LATAM,grocery,online,57.35,3,0.177,bundle,2024-08-24\r\n6662,1248,APAC,grocery,retail,34.67,7,0.247,coupon,2024-10-27\r\n6663,2155,APAC,sports,online,42.24,3,0.209,none,2024-03-16\r\n6664,1995,LATAM,grocery,retail,24.37,1,0.109,none,2024-03-03\r\n6665,1300,EMEA,fashion,online,38.21,1,0.179,none,2024-10-22\r\n6666,1804,AMER,sports,online,90.43,6,0.130,none,2024-08-12\r\n6667,2231,LATAM,grocery,mobile,34.96,4,0.066,none,2024-09-15\r\n6668,2417,LATAM,fashion,retail,90.18,8,0.130,bundle,2024-05-13\r\n6669,1786,APAC,electronics,online,51.60,7,0.080,none,2024-06-10\r\n6670,1224,APAC,electronics,online,22.15,3,0.024,loyalty,2024-10-04\r\n6671,2159,AMER,home,partner,47.91,2,0.190,bundle,2024-12-13\r\n6672,1678,LATAM,fashion,online,56.59,8,0.186,none,2024-09-16\r\n6673,1487,AMER,grocery,retail,64.85,5,0.242,bundle,2024-04-03\r\n6674,2305,AMER,home,online,35.60,8,0.212,coupon,2024-07-03\r\n6675,1220,LATAM,grocery,retail,51.53,3,0.001,bundle,2024-01-15\r\n6676,1957,AMER,fashion,online,53.56,4,0.176,none,2024-09-22\r\n6677,1504,AMER,fashion,retail,31.68,4,0.065,none,2024-12-10\r\n6678,1975,EMEA,sports,retail,101
.28,1,0.106,none,2024-05-07\r\n6679,1379,EMEA,electronics,online,82.11,2,0.160,none,2024-09-02\r\n6680,1111,APAC,sports,partner,50.56,7,0.018,bundle,2024-09-20\r\n6681,1970,LATAM,electronics,retail,48.31,6,0.182,coupon,2024-03-20\r\n6682,1609,LATAM,toys,online,97.02,6,0.215,coupon,2024-10-07\r\n6683,2052,LATAM,toys,online,36.17,8,0.063,none,2024-09-20\r\n6684,1938,APAC,sports,retail,61.19,5,0.130,none,2024-01-16\r\n6685,1638,EMEA,electronics,mobile,57.75,5,0.162,none,2024-08-07\r\n6686,2038,LATAM,grocery,online,47.90,6,0.001,coupon,2024-02-19\r\n6687,2276,AMER,home,online,42.48,4,0.239,coupon,2024-08-05\r\n6688,2119,AMER,home,retail,36.96,2,0.070,bundle,2024-09-11\r\n6689,2331,APAC,toys,mobile,97.10,3,0.100,none,2024-09-21\r\n6690,2356,LATAM,electronics,online,99.82,5,0.062,bundle,2024-11-10\r\n6691,1131,APAC,electronics,online,92.85,3,0.061,coupon,2024-10-27\r\n6692,2289,APAC,grocery,retail,116.38,4,0.148,none,2024-04-27\r\n6693,2059,AMER,sports,retail,94.59,4,0.153,none,2024-10-17\r\n6694,2424,LATAM,sports,retail,101.68,6,0.246,bundle,2024-05-06\r\n6695,2383,APAC,grocery,retail,54.90,8,0.009,none,2024-07-27\r\n6696,2327,EMEA,electronics,mobile,37.04,2,0.012,none,2024-01-21\r\n6697,1354,AMER,home,online,30.16,2,0.166,none,2024-03-02\r\n6698,1251,EMEA,sports,partner,24.07,8,0.072,none,2024-04-04\r\n6699,1525,APAC,fashion,retail,41.38,8,0.174,none,2024-06-19\r\n6700,1741,AMER,fashion,online,58.48,6,0.076,loyalty,2024-01-10\r\n6701,1411,LATAM,electronics,retail,35.45,5,0.217,none,2024-10-14\r\n6702,1633,EMEA,grocery,retail,55.55,8,0.037,none,2024-12-11\r\n6703,1096,EMEA,electronics,retail,75.38,3,0.065,none,2024-04-06\r\n6704,1130,LATAM,home,mobile,126.71,5,0.247,none,2024-06-06\r\n6705,1097,EMEA,home,mobile,50.16,4,0.159,bundle,2024-11-03\r\n6706,1091,EMEA,home,online,63.64,5,0.033,none,2024-07-06\r\n6707,2010,APAC,electronics,retail,52.88,8,0.186,none,2024-05-14\r\n6708,1910,LATAM,electronics,online,45.69,6,0.030,bundle,2024-11-12\r\n6709,1394,LATAM,grocery,online,1
7.30,8,0.033,none,2024-08-11\r\n6710,1183,AMER,toys,online,48.18,1,0.061,bundle,2024-03-07\r\n6711,1787,APAC,sports,retail,66.90,4,0.238,none,2024-10-15\r\n6712,2207,APAC,sports,online,24.84,8,0.128,none,2024-02-15\r\n6713,1532,APAC,fashion,online,53.62,1,0.051,none,2024-03-19\r\n6714,1456,APAC,toys,online,93.87,1,0.111,coupon,2024-12-23\r\n6715,1713,EMEA,toys,online,63.24,3,0.159,bundle,2024-06-22\r\n6716,2407,EMEA,sports,online,48.42,6,0.046,coupon,2024-04-28\r\n6717,2091,LATAM,grocery,partner,122.57,6,0.061,coupon,2024-03-22\r\n6718,2254,LATAM,home,retail,26.39,7,0.175,none,2024-03-21\r\n6719,2017,EMEA,toys,online,36.65,3,0.074,none,2024-11-13\r\n6720,2245,APAC,sports,mobile,42.28,7,0.073,none,2024-12-10\r\n6721,1557,LATAM,sports,mobile,64.65,5,0.213,none,2024-05-03\r\n6722,1094,LATAM,electronics,online,47.01,7,0.022,none,2024-07-03\r\n6723,1954,APAC,fashion,mobile,57.65,7,0.056,none,2024-03-19\r\n6724,2008,APAC,grocery,retail,96.31,3,0.041,none,2024-11-18\r\n6725,1859,AMER,electronics,retail,72.43,3,0.111,none,2024-08-14\r\n6726,1358,APAC,fashion,online,269.88,3,0.243,bundle,2024-07-14\r\n6727,1071,AMER,electronics,retail,52.28,1,0.136,coupon,2024-06-27\r\n6728,1761,EMEA,electronics,online,35.14,8,0.180,coupon,2024-04-19\r\n6729,1254,APAC,sports,retail,96.78,8,0.190,none,2024-12-07\r\n6730,1739,AMER,grocery,retail,66.67,5,0.150,none,2024-10-27\r\n6731,2092,AMER,grocery,online,52.75,7,0.058,none,2024-06-27\r\n6732,2241,APAC,electronics,retail,39.74,6,0.074,coupon,2024-08-12\r\n6733,1215,LATAM,grocery,online,82.53,6,0.029,coupon,2024-09-02\r\n6734,1236,AMER,fashion,retail,69.47,8,0.033,none,2024-05-19\r\n6735,2339,AMER,fashion,online,95.32,5,0.067,none,2024-12-15\r\n6736,1110,LATAM,electronics,mobile,56.85,1,0.206,none,2024-10-11\r\n6737,1928,AMER,electronics,retail,63.82,5,0.116,coupon,2024-12-03\r\n6738,2225,EMEA,sports,retail,92.70,4,0.095,none,2024-10-19\r\n6739,1851,EMEA,sports,retail,24.96,2,0.142,none,2024-08-26\r\n6740,2188,EMEA,fashion,online,37.61,7,0.20
9,bundle,2024-12-07\r\n6741,2315,LATAM,toys,online,44.63,8,0.036,none,2024-01-10\r\n6742,2480,APAC,electronics,retail,126.23,4,0.034,loyalty,2024-08-12\r\n6743,1930,AMER,toys,online,89.32,6,0.041,coupon,2024-07-13\r\n6744,1874,LATAM,grocery,retail,84.80,5,0.246,none,2024-10-06\r\n6745,2139,AMER,fashion,retail,79.98,2,0.152,loyalty,2024-12-06\r\n6746,1376,EMEA,electronics,partner,113.40,1,0.103,none,2024-01-10\r\n6747,2176,AMER,grocery,mobile,80.68,5,0.150,none,2024-11-28\r\n6748,1979,APAC,grocery,retail,113.42,3,0.173,none,2024-06-24\r\n6749,1572,LATAM,grocery,online,29.13,8,0.194,none,2024-09-19\r\n6750,1573,AMER,sports,online,31.17,7,0.180,none,2024-07-26\r\n6751,2243,APAC,fashion,mobile,49.11,5,0.221,none,2024-07-26\r\n6752,2262,APAC,sports,online,35.04,7,0.153,loyalty,2024-02-17\r\n6753,1075,AMER,electronics,online,36.72,6,0.076,none,2024-06-04\r\n6754,2024,AMER,electronics,mobile,61.68,4,0.092,none,2024-05-26\r\n6755,2226,EMEA,grocery,online,142.25,2,0.039,none,2024-12-13\r\n6756,1259,EMEA,fashion,retail,137.92,6,0.122,none,2024-07-07\r\n6757,1646,APAC,sports,mobile,107.58,4,0.200,loyalty,2024-07-20\r\n6758,2495,EMEA,toys,online,38.08,8,0.115,none,2024-11-15\r\n6759,2412,LATAM,electronics,online,137.46,6,0.246,coupon,2024-12-01\r\n6760,1635,APAC,toys,retail,39.51,5,0.031,coupon,2024-08-15\r\n6761,1996,APAC,sports,online,47.61,1,0.113,coupon,2024-05-24\r\n6762,1583,AMER,sports,online,20.27,8,0.067,coupon,2024-09-23\r\n6763,2116,LATAM,electronics,online,47.90,4,0.240,bundle,2024-08-02\r\n6764,1360,APAC,home,online,26.21,1,0.084,none,2024-09-05\r\n6765,2264,LATAM,electronics,online,40.49,8,0.027,bundle,2024-07-01\r\n6766,1157,LATAM,grocery,retail,97.09,1,0.075,none,2024-12-04\r\n6767,1287,AMER,grocery,retail,61.06,3,0.149,coupon,2024-01-17\r\n6768,1945,AMER,fashion,online,96.50,5,0.031,none,2024-05-09\r\n6769,1194,APAC,electronics,retail,19.07,7,0.176,coupon,2024-08-25\r\n6770,1872,LATAM,grocery,retail,51.55,4,0.123,none,2024-08-08\r\n6771,2469,LATAM,fashion,retai
l,90.02,5,0.210,none,2024-01-03\r\n6772,2356,LATAM,home,online,103.82,4,0.049,loyalty,2024-08-03\r\n6773,2448,APAC,home,online,129.44,5,0.200,none,2024-10-22\r\n6774,1687,APAC,electronics,retail,48.02,4,0.220,none,2024-07-27\r\n6775,1436,APAC,sports,retail,87.45,2,0.101,coupon,2024-02-18\r\n6776,1682,EMEA,toys,retail,53.34,3,0.086,bundle,2024-09-09\r\n6777,1262,APAC,grocery,partner,37.01,4,0.214,loyalty,2024-06-19\r\n6778,1799,EMEA,home,mobile,58.92,4,0.002,coupon,2024-04-03\r\n6779,1841,AMER,grocery,retail,86.14,5,0.076,loyalty,2024-09-02\r\n6780,1115,AMER,toys,partner,68.22,3,0.174,loyalty,2024-05-11\r\n6781,2119,AMER,fashion,retail,47.93,4,0.205,none,2024-08-26\r\n6782,2040,LATAM,grocery,online,57.47,8,0.116,none,2024-02-03\r\n6783,1506,EMEA,home,online,37.36,1,0.017,bundle,2024-12-12\r\n6784,2012,APAC,electronics,partner,66.31,1,0.228,none,2024-11-21\r\n6785,1377,APAC,electronics,retail,139.51,3,0.217,bundle,2024-04-26\r\n6786,2264,LATAM,sports,partner,59.44,4,0.192,none,2024-05-02\r\n6787,2070,APAC,fashion,retail,82.93,3,0.135,none,2024-12-01\r\n6788,1264,APAC,sports,online,113.98,5,0.081,coupon,2024-12-16\r\n6789,2125,LATAM,toys,online,91.70,1,0.183,none,2024-11-09\r\n6790,1484,AMER,grocery,online,55.21,7,0.207,none,2024-02-24\r\n6791,1994,LATAM,grocery,mobile,41.90,5,0.071,none,2024-11-07\r\n6792,1068,APAC,home,online,21.25,8,0.112,none,2024-02-11\r\n6793,2093,LATAM,home,online,23.04,8,0.012,none,2024-05-13\r\n6794,1143,LATAM,home,retail,34.99,5,0.061,none,2024-12-13\r\n6795,2390,AMER,home,partner,16.01,8,0.059,none,2024-10-01\r\n6796,2015,APAC,toys,online,92.07,6,0.026,none,2024-01-21\r\n6797,2213,APAC,home,retail,85.83,8,0.200,bundle,2024-02-15\r\n6798,1137,APAC,sports,mobile,33.49,2,0.151,loyalty,2024-01-19\r\n6799,2232,EMEA,home,retail,64.21,8,0.238,coupon,2024-11-11\r\n6800,1761,EMEA,grocery,retail,47.57,6,0.115,coupon,2024-06-16\r\n6801,1313,EMEA,grocery,online,89.76,7,0.084,none,2024-07-26\r\n6802,2291,EMEA,home,online,83.69,7,0.044,coupon,2024-02-21\r
\n6803,1143,LATAM,fashion,online,120.77,8,0.200,loyalty,2024-02-11\r\n6804,1490,AMER,electronics,online,54.47,3,0.104,none,2024-08-01\r\n6805,2076,AMER,home,online,24.48,8,0.245,none,2024-05-09\r\n6806,1448,EMEA,electronics,retail,36.03,6,0.043,none,2024-11-05\r\n6807,1526,EMEA,home,retail,149.10,3,0.188,none,2024-04-02\r\n6808,2387,EMEA,home,online,94.01,7,0.035,coupon,2024-10-15\r\n6809,1434,EMEA,toys,retail,33.69,7,0.058,loyalty,2024-06-11\r\n6810,2287,EMEA,sports,online,75.90,4,0.123,none,2024-06-26\r\n6811,1258,EMEA,grocery,online,36.47,6,0.193,coupon,2024-04-07\r\n6812,1643,EMEA,home,online,47.76,8,0.097,none,2024-01-15\r\n6813,2351,EMEA,electronics,mobile,34.22,7,0.115,none,2024-04-08\r\n6814,1045,LATAM,fashion,mobile,19.72,1,0.104,none,2024-01-15\r\n6815,1640,APAC,grocery,online,64.27,2,0.177,none,2024-02-02\r\n6816,1377,APAC,sports,mobile,70.67,4,0.244,none,2024-09-06\r\n6817,1072,LATAM,sports,mobile,12.99,6,0.003,none,2024-09-19\r\n6818,2370,EMEA,home,online,75.32,6,0.188,coupon,2024-12-15\r\n6819,2471,APAC,electronics,retail,36.30,2,0.198,none,2024-09-02\r\n6820,1457,EMEA,electronics,online,44.44,1,0.099,none,2024-04-22\r\n6821,1052,LATAM,grocery,retail,70.93,4,0.154,coupon,2024-02-07\r\n6822,1759,EMEA,home,retail,41.82,8,0.013,none,2024-09-14\r\n6823,1502,APAC,grocery,retail,119.33,4,0.183,coupon,2024-12-15\r\n6824,1257,APAC,sports,retail,65.24,2,0.127,none,2024-06-06\r\n6825,2354,LATAM,electronics,online,76.89,6,0.090,none,2024-01-07\r\n6826,1121,EMEA,electronics,online,72.88,4,0.164,coupon,2024-08-03\r\n6827,1996,APAC,sports,online,36.78,1,0.193,bundle,2024-11-24\r\n6828,1867,AMER,fashion,online,47.17,6,0.076,bundle,2024-04-13\r\n6829,1673,AMER,fashion,online,185.40,8,0.210,none,2024-10-17\r\n6830,2010,APAC,fashion,mobile,87.55,4,0.162,none,2024-11-05\r\n6831,1846,APAC,grocery,online,71.62,3,0.190,none,2024-01-01\r\n6832,2012,APAC,sports,online,58.62,3,0.195,none,2024-01-17\r\n6833,2152,EMEA,electronics,online,136.87,5,0.079,none,2024-04-17\r\n6834,248
7,LATAM,fashion,retail,113.18,6,0.116,none,2024-05-22\r\n6835,1651,LATAM,home,online,39.51,5,0.028,bundle,2024-02-11\r\n6836,2406,EMEA,electronics,retail,53.34,6,0.050,none,2024-09-02\r\n6837,1723,LATAM,home,partner,64.82,3,0.086,none,2024-06-28\r\n6838,1759,EMEA,electronics,retail,31.51,5,0.069,none,2024-11-12\r\n6839,2487,LATAM,home,retail,91.46,4,0.248,loyalty,2024-11-04\r\n6840,2226,EMEA,toys,retail,67.08,3,0.201,coupon,2024-04-22\r\n6841,2260,EMEA,home,mobile,91.64,8,0.045,none,2024-07-21\r\n6842,2315,LATAM,home,online,72.56,7,0.204,none,2024-02-24\r\n6843,1111,APAC,electronics,online,37.57,8,0.082,bundle,2024-12-27\r\n6844,1690,LATAM,grocery,retail,71.74,1,0.173,none,2024-03-18\r\n6845,2368,AMER,home,retail,47.45,4,0.225,bundle,2024-06-04\r\n6846,1102,APAC,sports,online,44.99,3,0.234,coupon,2024-08-13\r\n6847,2173,LATAM,home,online,39.82,2,0.158,none,2024-03-06\r\n6848,1803,LATAM,electronics,online,89.95,6,0.048,bundle,2024-10-25\r\n6849,2332,APAC,fashion,online,49.31,6,0.028,none,2024-04-02\r\n6850,1466,AMER,home,online,42.35,8,0.123,bundle,2024-03-06\r\n6851,1769,LATAM,grocery,online,63.32,5,0.007,coupon,2024-08-12\r\n6852,1799,EMEA,home,online,78.05,5,0.209,bundle,2024-03-16\r\n6853,2048,LATAM,sports,retail,58.60,1,0.193,none,2024-03-23\r\n6854,1878,EMEA,sports,online,106.51,7,0.232,none,2024-12-08\r\n6855,1301,AMER,sports,online,103.47,7,0.239,none,2024-02-28\r\n6856,2186,LATAM,sports,retail,29.35,5,0.129,none,2024-11-01\r\n6857,1324,LATAM,fashion,online,84.20,4,0.086,none,2024-06-04\r\n6858,1055,AMER,home,online,119.25,6,0.141,none,2024-10-07\r\n6859,2344,LATAM,grocery,retail,110.34,5,0.091,none,2024-03-05\r\n6860,1507,EMEA,grocery,partner,52.07,4,0.225,bundle,2024-09-19\r\n6861,1268,EMEA,electronics,retail,24.02,2,0.156,none,2024-07-17\r\n6862,1999,EMEA,sports,retail,85.04,2,0.175,coupon,2024-03-08\r\n6863,2151,APAC,fashion,retail,42.74,4,0.137,none,2024-09-11\r\n6864,2194,APAC,electronics,online,39.00,4,0.037,none,2024-10-03\r\n6865,1028,EMEA,grocery,on
line,19.70,1,0.065,coupon,2024-04-20\r\n6866,1118,AMER,home,retail,95.71,4,0.032,none,2024-04-05\r\n6867,1251,EMEA,toys,online,44.31,3,0.180,none,2024-10-20\r\n6868,2089,EMEA,grocery,retail,98.59,6,0.229,none,2024-02-01\r\n6869,1818,AMER,electronics,online,94.45,2,0.122,none,2024-03-23\r\n6870,1074,LATAM,sports,retail,38.81,4,0.142,loyalty,2024-06-28\r\n6871,1511,EMEA,toys,online,21.04,8,0.057,none,2024-02-21\r\n6872,1832,APAC,grocery,mobile,50.35,6,0.247,none,2024-01-20\r\n6873,1339,EMEA,toys,online,117.33,4,0.191,none,2024-04-16\r\n6874,2296,AMER,electronics,online,55.91,2,0.036,none,2024-09-09\r\n6875,1947,EMEA,sports,online,58.28,8,0.200,none,2024-03-13\r\n6876,2346,LATAM,toys,retail,96.09,4,0.056,coupon,2024-12-24\r\n6877,1360,APAC,grocery,retail,42.37,3,0.180,none,2024-03-22\r\n6878,1059,AMER,grocery,online,69.62,3,0.040,coupon,2024-11-21\r\n6879,1197,LATAM,sports,online,119.23,5,0.236,none,2024-04-14\r\n6880,1994,LATAM,fashion,retail,94.92,6,0.102,bundle,2024-04-27\r\n6881,2214,AMER,grocery,online,56.07,5,0.030,bundle,2024-06-21\r\n6882,2129,APAC,home,online,62.87,5,0.153,none,2024-10-02\r\n6883,2400,EMEA,electronics,mobile,42.48,6,0.020,coupon,2024-03-07\r\n6884,2060,LATAM,fashion,online,58.74,6,0.049,loyalty,2024-11-10\r\n6885,1393,LATAM,grocery,online,88.82,7,0.213,none,2024-12-26\r\n6886,1479,AMER,toys,online,126.85,7,0.156,bundle,2024-01-28\r\n6887,1713,EMEA,electronics,retail,50.40,6,0.150,none,2024-02-14\r\n6888,1174,APAC,electronics,retail,34.73,6,0.201,none,2024-11-12\r\n6889,1322,AMER,electronics,mobile,62.28,4,0.099,none,2024-11-12\r\n6890,1359,LATAM,home,online,48.78,1,0.134,none,2024-09-02\r\n6891,1401,LATAM,grocery,online,83.80,4,0.178,none,2024-12-28\r\n6892,1135,APAC,electronics,retail,45.62,7,0.024,coupon,2024-01-11\r\n6893,1598,EMEA,sports,retail,52.00,3,0.093,coupon,2024-05-10\r\n6894,2238,AMER,sports,online,55.04,4,0.141,bundle,2024-07-20\r\n6895,1221,LATAM,electronics,online,134.45,6,0.080,none,2024-06-23\r\n6896,1251,EMEA,grocery,online,
83.64,3,0.143,none,2024-12-25\r\n6897,2061,EMEA,electronics,retail,85.67,4,0.011,loyalty,2024-02-18\r\n6898,1111,APAC,sports,online,18.63,2,0.182,none,2024-09-24\r\n6899,1680,LATAM,sports,retail,81.10,4,0.102,none,2024-10-23\r\n6900,2314,EMEA,home,retail,44.50,7,0.124,none,2024-09-14\r\n6901,2015,APAC,toys,online,26.53,2,0.061,none,2024-01-05\r\n6902,1011,APAC,electronics,mobile,130.52,5,0.167,none,2024-06-19\r\n6903,1659,APAC,electronics,online,47.01,3,0.042,none,2024-10-08\r\n6904,1179,APAC,grocery,online,111.59,8,0.127,bundle,2024-04-19\r\n6905,2249,LATAM,sports,retail,98.63,6,0.238,none,2024-03-12\r\n6906,1500,EMEA,toys,retail,112.97,3,0.219,loyalty,2024-07-19\r\n6907,1924,AMER,sports,retail,76.88,7,0.126,none,2024-01-11\r\n6908,1860,EMEA,electronics,retail,42.55,2,0.143,none,2024-08-20\r\n6909,1629,LATAM,toys,mobile,39.97,4,0.084,none,2024-05-15\r\n6910,1621,APAC,home,online,64.81,6,0.155,none,2024-02-05\r\n6911,2249,LATAM,grocery,mobile,59.69,3,0.185,none,2024-12-22\r\n6912,1060,LATAM,grocery,online,76.75,7,0.218,coupon,2024-07-04\r\n6913,1422,LATAM,grocery,online,35.67,8,0.239,bundle,2024-12-10\r\n6914,1665,AMER,grocery,mobile,102.57,6,0.153,none,2024-07-19\r\n6915,1868,AMER,home,mobile,37.61,8,0.153,none,2024-08-01\r\n6916,1508,LATAM,sports,online,50.57,8,0.005,none,2024-07-07\r\n6917,1714,APAC,grocery,retail,31.10,1,0.246,loyalty,2024-02-04\r\n6918,1555,AMER,grocery,online,37.06,3,0.037,bundle,2024-03-14\r\n6919,1260,LATAM,electronics,retail,45.54,1,0.108,coupon,2024-02-07\r\n6920,2079,EMEA,grocery,mobile,60.33,8,0.147,none,2024-09-26\r\n6921,1556,AMER,grocery,online,73.04,4,0.166,none,2024-04-14\r\n6922,2131,APAC,sports,mobile,151.12,8,0.072,none,2024-10-07\r\n6923,1205,APAC,electronics,mobile,43.34,2,0.100,none,2024-11-04\r\n6924,2229,APAC,fashion,online,79.02,1,0.103,none,2024-09-07\r\n6925,2156,AMER,toys,retail,59.19,4,0.070,bundle,2024-11-22\r\n6926,1502,APAC,grocery,retail,60.09,1,0.120,bundle,2024-06-18\r\n6927,2363,AMER,fashion,mobile,48.57,2,0.137,
none,2024-01-13\r\n6928,2300,EMEA,grocery,retail,43.46,7,0.127,none,2024-06-02\r\n6929,1600,AMER,fashion,retail,61.61,1,0.106,loyalty,2024-01-28\r\n6930,2424,LATAM,home,online,35.43,1,0.232,coupon,2024-02-28\r\n6931,2089,EMEA,home,online,42.65,6,0.173,none,2024-07-06\r\n6932,2432,AMER,fashion,retail,77.05,1,0.007,none,2024-02-17\r\n6933,1783,AMER,fashion,online,40.43,7,0.020,coupon,2024-09-04\r\n6934,1433,EMEA,fashion,retail,90.20,5,0.118,none,2024-12-26\r\n6935,1671,APAC,fashion,retail,94.77,7,0.197,none,2024-12-02\r\n6936,1940,APAC,electronics,online,34.34,8,0.130,none,2024-07-16\r\n6937,1032,AMER,grocery,mobile,74.34,3,0.098,coupon,2024-01-08\r\n6938,1284,APAC,fashion,mobile,47.72,1,0.009,none,2024-11-22\r\n6939,2378,LATAM,grocery,retail,34.60,5,0.159,bundle,2024-10-07\r\n6940,1681,LATAM,grocery,retail,88.64,5,0.199,bundle,2024-01-15\r\n6941,1040,LATAM,home,retail,90.82,7,0.012,none,2024-12-09\r\n6942,2194,APAC,sports,retail,35.86,3,0.070,none,2024-07-25\r\n6943,1345,AMER,fashion,retail,47.24,4,0.161,none,2024-11-28\r\n6944,2072,AMER,electronics,online,57.68,1,0.098,none,2024-07-17\r\n6945,1412,AMER,grocery,retail,109.26,2,0.101,none,2024-08-09\r\n6946,1993,APAC,fashion,online,42.30,6,0.141,none,2024-11-27\r\n6947,1597,APAC,grocery,mobile,34.54,5,0.164,coupon,2024-08-02\r\n6948,1805,EMEA,fashion,retail,38.01,7,0.192,bundle,2024-11-17\r\n6949,1594,LATAM,home,mobile,28.24,1,0.236,bundle,2024-05-14\r\n6950,1786,APAC,grocery,online,98.99,2,0.159,none,2024-10-11\r\n6951,2018,AMER,grocery,online,84.36,4,0.222,none,2024-11-04\r\n6952,1244,LATAM,home,online,48.67,5,0.034,none,2024-08-01\r\n6953,2496,EMEA,toys,retail,141.62,2,0.137,none,2024-11-10\r\n6954,1066,AMER,grocery,partner,44.65,6,0.061,bundle,2024-11-03\r\n6955,1789,EMEA,grocery,retail,100.37,5,0.061,coupon,2024-09-08\r\n6956,1213,EMEA,fashion,retail,107.58,6,0.077,none,2024-07-03\r\n6957,1393,LATAM,electronics,retail,44.71,7,0.105,none,2024-09-17\r\n6958,1553,LATAM,grocery,online,89.07,7,0.134,coupon,2024-03-20\
r\n6959,1813,EMEA,electronics,mobile,48.33,6,0.170,bundle,2024-12-04\r\n6960,1310,AMER,home,online,89.33,5,0.217,coupon,2024-02-01\r\n6961,2472,AMER,toys,online,43.28,7,0.105,none,2024-05-18\r\n6962,2049,LATAM,grocery,online,43.69,5,0.062,none,2024-01-26\r\n6963,1187,AMER,home,online,39.99,7,0.223,none,2024-02-03\r\n6964,2471,APAC,home,online,32.44,8,0.132,none,2024-07-01\r\n6965,2242,AMER,home,retail,89.00,1,0.021,none,2024-07-25\r\n6966,1614,EMEA,toys,online,39.58,4,0.229,coupon,2024-01-07\r\n6967,2311,LATAM,grocery,retail,51.45,6,0.016,none,2024-05-20\r\n6968,1042,LATAM,toys,online,62.80,7,0.034,coupon,2024-10-14\r\n6969,1685,AMER,toys,retail,116.69,5,0.167,none,2024-04-25\r\n6970,1760,LATAM,grocery,online,25.71,8,0.195,none,2024-09-27\r\n6971,1763,LATAM,electronics,retail,35.21,3,0.059,bundle,2024-01-15\r\n6972,2174,LATAM,fashion,online,72.65,4,0.012,none,2024-12-24\r\n6973,1387,AMER,fashion,online,110.95,8,0.080,coupon,2024-12-20\r\n6974,1769,LATAM,fashion,online,40.08,3,0.193,coupon,2024-06-17\r\n6975,2013,APAC,grocery,retail,33.49,4,0.011,none,2024-09-20\r\n6976,2142,LATAM,grocery,online,39.48,2,0.008,bundle,2024-09-14\r\n6977,2068,LATAM,grocery,online,42.48,1,0.140,none,2024-12-13\r\n6978,1632,LATAM,grocery,online,40.68,6,0.050,loyalty,2024-11-16\r\n6979,1057,LATAM,fashion,partner,90.92,7,0.111,none,2024-05-09\r\n6980,2253,AMER,sports,retail,62.09,8,0.133,bundle,2024-03-18\r\n6981,1033,APAC,electronics,mobile,34.25,4,0.059,none,2024-02-15\r\n6982,1337,APAC,fashion,online,25.06,2,0.055,none,2024-04-03\r\n6983,1050,AMER,electronics,retail,132.35,3,0.090,none,2024-11-12\r\n6984,1202,APAC,sports,online,119.32,7,0.016,none,2024-02-24\r\n6985,1531,EMEA,toys,retail,103.31,1,0.015,none,2024-08-26\r\n6986,1698,EMEA,electronics,retail,53.76,1,0.126,coupon,2024-06-02\r\n6987,1046,EMEA,grocery,online,57.09,6,0.193,none,2024-07-12\r\n6988,1337,APAC,electronics,online,73.09,5,0.193,coupon,2024-05-17\r\n6989,1693,EMEA,electronics,mobile,61.64,3,0.241,none,2024-06-07\r\n699
0,2390,AMER,sports,retail,86.47,6,0.139,coupon,2024-12-28\r\n6991,2289,APAC,fashion,retail,38.13,7,0.033,none,2024-05-08\r\n6992,1900,APAC,sports,mobile,153.39,8,0.109,none,2024-01-07\r\n6993,2077,APAC,fashion,online,48.36,7,0.021,bundle,2024-09-19\r\n6994,2404,EMEA,electronics,online,90.78,8,0.087,coupon,2024-07-17\r\n6995,2349,APAC,fashion,mobile,90.37,6,0.082,loyalty,2024-02-14\r\n6996,2253,AMER,toys,online,36.04,6,0.215,coupon,2024-03-11\r\n6997,2011,AMER,electronics,online,117.13,1,0.081,coupon,2024-05-05\r\n6998,1661,LATAM,grocery,retail,40.98,1,0.124,none,2024-06-24\r\n6999,1925,LATAM,toys,retail,56.79,7,0.197,none,2024-10-03\r\n7000,1719,LATAM,toys,mobile,37.12,4,0.006,none,2024-03-27\r\n7001,1042,LATAM,home,retail,32.91,1,0.081,none,2024-07-02\r\n7002,1453,APAC,grocery,mobile,60.29,3,0.122,none,2024-04-15\r\n7003,1982,EMEA,toys,mobile,46.86,3,0.058,loyalty,2024-11-06\r\n7004,2034,LATAM,toys,online,69.19,2,0.209,none,2024-12-04\r\n7005,1615,LATAM,home,retail,80.10,8,0.097,none,2024-01-27\r\n7006,1662,LATAM,grocery,retail,75.73,1,0.220,none,2024-07-16\r\n7007,2288,AMER,home,retail,30.64,8,0.057,loyalty,2024-12-16\r\n7008,1154,LATAM,home,online,39.09,2,0.126,none,2024-11-04\r\n7009,2419,LATAM,grocery,retail,70.57,7,0.052,bundle,2024-05-22\r\n7010,1753,APAC,electronics,retail,39.02,7,0.187,none,2024-03-28\r\n7011,1150,LATAM,grocery,mobile,80.98,6,0.001,none,2024-11-11\r\n7012,1472,AMER,grocery,online,26.49,2,0.145,none,2024-07-26\r\n7013,2235,AMER,home,retail,202.16,7,0.073,coupon,2024-02-02\r\n7014,1717,AMER,electronics,online,20.31,6,0.058,none,2024-04-25\r\n7015,1328,APAC,grocery,online,73.59,6,0.126,none,2024-07-02\r\n7016,1962,APAC,toys,retail,100.33,5,0.084,none,2024-06-04\r\n7017,1585,AMER,home,retail,83.39,5,0.202,none,2024-12-18\r\n7018,1266,AMER,home,online,138.09,4,0.022,none,2024-02-23\r\n7019,2011,AMER,home,mobile,30.36,3,0.215,loyalty,2024-11-17\r\n7020,1417,APAC,grocery,online,54.97,7,0.160,none,2024-11-07\r\n7021,1810,LATAM,grocery,partner,82.20
,2,0.208,none,2024-07-17\r\n7022,2083,LATAM,grocery,mobile,98.04,7,0.122,loyalty,2024-09-05\r\n7023,1146,LATAM,electronics,retail,34.83,7,0.243,loyalty,2024-02-26\r\n7024,2374,LATAM,home,mobile,87.69,8,0.105,none,2024-02-02\r\n7025,1970,LATAM,fashion,retail,83.27,4,0.186,loyalty,2024-08-25\r\n7026,2251,APAC,grocery,retail,58.70,7,0.038,loyalty,2024-05-07\r\n7027,1495,LATAM,sports,online,36.20,8,0.215,none,2024-09-10\r\n7028,1456,APAC,home,retail,175.15,2,0.008,loyalty,2024-03-11\r\n7029,1201,LATAM,sports,online,36.43,8,0.106,none,2024-01-03\r\n7030,2409,APAC,home,online,87.28,8,0.008,none,2024-04-02\r\n7031,1489,AMER,electronics,mobile,33.72,6,0.052,coupon,2024-05-23\r\n7032,1074,LATAM,toys,retail,68.62,4,0.040,none,2024-02-19\r\n7033,2340,EMEA,electronics,online,53.38,1,0.131,loyalty,2024-06-09\r\n7034,1264,APAC,electronics,online,62.22,4,0.248,none,2024-12-14\r\n7035,1763,LATAM,grocery,retail,95.18,2,0.022,none,2024-12-23\r\n7036,2393,LATAM,fashion,online,54.56,4,0.038,loyalty,2024-06-18\r\n7037,1303,LATAM,home,online,33.48,6,0.177,bundle,2024-03-07\r\n7038,1405,LATAM,electronics,online,60.62,4,0.150,loyalty,2024-12-23\r\n7039,2125,LATAM,home,retail,33.03,1,0.056,none,2024-10-11\r\n7040,1896,EMEA,home,retail,41.42,5,0.132,coupon,2024-05-09\r\n7041,1487,AMER,fashion,mobile,50.18,8,0.134,none,2024-02-08\r\n7042,2067,LATAM,grocery,online,90.92,6,0.158,coupon,2024-03-18\r\n7043,1389,LATAM,sports,online,50.68,1,0.152,none,2024-08-24\r\n7044,1140,LATAM,fashion,mobile,91.78,2,0.223,coupon,2024-05-02\r\n7045,1644,EMEA,sports,retail,22.45,7,0.099,bundle,2024-12-12\r\n7046,2219,LATAM,grocery,online,34.09,3,0.212,none,2024-02-20\r\n7047,2205,AMER,grocery,mobile,40.82,6,0.031,none,2024-01-21\r\n7048,1489,AMER,home,partner,35.41,3,0.084,none,2024-11-28\r\n7049,2187,EMEA,grocery,online,43.81,7,0.028,none,2024-12-08\r\n7050,2427,LATAM,home,mobile,44.77,5,0.090,bundle,2024-04-18\r\n7051,2238,AMER,grocery,retail,61.50,1,0.145,none,2024-12-10\r\n7052,2307,LATAM,electronics,online,2
8.76,4,0.204,loyalty,2024-07-06\r\n7053,1180,AMER,sports,retail,39.87,6,0.203,none,2024-07-18\r\n7054,1742,AMER,home,online,38.60,2,0.045,bundle,2024-06-19\r\n7055,1388,AMER,electronics,retail,39.77,8,0.062,none,2024-02-09\r\n7056,1078,APAC,sports,retail,37.80,5,0.212,none,2024-01-17\r\n7057,2327,EMEA,grocery,mobile,36.46,5,0.136,none,2024-01-09\r\n7058,1239,APAC,toys,mobile,22.90,4,0.201,none,2024-12-20\r\n7059,1427,EMEA,toys,online,42.06,5,0.177,coupon,2024-08-24\r\n7060,2019,AMER,electronics,retail,171.49,5,0.141,none,2024-02-11\r\n7061,2026,LATAM,electronics,partner,31.70,7,0.212,bundle,2024-09-24\r\n7062,1356,LATAM,electronics,online,33.34,2,0.141,none,2024-07-27\r\n7063,1196,APAC,electronics,mobile,20.09,4,0.021,bundle,2024-08-10\r\n7064,2262,APAC,home,retail,69.51,7,0.155,none,2024-12-10\r\n7065,2474,LATAM,fashion,retail,62.84,8,0.134,bundle,2024-07-28\r\n7066,1031,AMER,fashion,online,86.77,5,0.093,coupon,2024-02-20\r\n7067,1681,LATAM,electronics,retail,35.81,2,0.209,none,2024-10-20\r\n7068,2258,AMER,electronics,online,40.21,4,0.243,coupon,2024-11-03\r\n7069,1603,EMEA,sports,online,36.05,5,0.053,none,2024-05-25\r\n7070,1756,EMEA,sports,retail,79.00,5,0.218,none,2024-08-13\r\n7071,1096,EMEA,home,online,40.47,1,0.140,bundle,2024-05-26\r\n7072,2196,AMER,fashion,online,24.91,6,0.210,none,2024-03-01\r\n7073,2363,AMER,toys,retail,46.70,7,0.014,none,2024-05-23\r\n7074,2233,EMEA,grocery,online,31.77,3,0.034,coupon,2024-09-04\r\n7075,1725,APAC,grocery,retail,21.82,2,0.110,coupon,2024-05-01\r\n7076,1537,LATAM,electronics,online,39.00,2,0.104,none,2024-04-19\r\n7077,1876,LATAM,fashion,online,25.40,4,0.216,none,2024-01-12\r\n7078,1635,APAC,sports,online,39.09,3,0.130,none,2024-01-09\r\n7079,1893,APAC,grocery,retail,54.51,8,0.069,none,2024-01-27\r\n7080,1990,EMEA,home,retail,34.67,1,0.011,bundle,2024-06-03\r\n7081,1335,APAC,sports,retail,147.11,2,0.056,bundle,2024-05-17\r\n7082,1262,APAC,electronics,online,66.75,1,0.216,none,2024-01-13\r\n7083,1714,APAC,grocery,retail,62.
35,6,0.117,none,2024-07-16\r\n7084,2053,AMER,fashion,retail,54.51,1,0.150,none,2024-03-27\r\n7085,1363,EMEA,electronics,online,140.64,4,0.241,none,2024-09-01\r\n7086,2320,LATAM,electronics,online,121.16,7,0.176,coupon,2024-01-10\r\n7087,2377,AMER,sports,mobile,36.74,2,0.236,none,2024-09-01\r\n7088,1297,AMER,electronics,retail,30.63,8,0.008,none,2024-08-06\r\n7089,2202,APAC,fashion,retail,55.51,4,0.102,none,2024-07-02\r\n7090,1030,EMEA,toys,online,45.37,5,0.168,coupon,2024-06-20\r\n7091,1635,APAC,grocery,retail,43.10,8,0.203,none,2024-02-03\r\n7092,2186,LATAM,fashion,online,88.84,7,0.150,none,2024-03-01\r\n7093,1707,APAC,electronics,retail,86.76,1,0.195,none,2024-05-16\r\n7094,1700,EMEA,fashion,mobile,274.34,5,0.006,none,2024-01-14\r\n7095,2039,EMEA,grocery,retail,37.26,4,0.116,coupon,2024-09-09\r\n7096,1780,APAC,home,retail,32.24,7,0.131,none,2024-11-26\r\n7097,1651,LATAM,grocery,online,93.67,5,0.112,coupon,2024-07-13\r\n7098,1154,LATAM,fashion,retail,152.95,4,0.118,none,2024-03-03\r\n7099,1265,APAC,toys,retail,66.48,5,0.080,loyalty,2024-08-24\r\n7100,1715,AMER,fashion,online,57.37,5,0.133,none,2024-07-02\r\n7101,2390,AMER,electronics,online,26.67,3,0.013,none,2024-09-02\r\n7102,2322,AMER,electronics,online,33.96,4,0.132,coupon,2024-04-08\r\n7103,1473,LATAM,fashion,online,132.79,6,0.108,none,2024-06-24\r\n7104,1966,APAC,sports,online,142.52,4,0.073,none,2024-01-01\r\n7105,1162,AMER,electronics,retail,100.33,6,0.072,bundle,2024-08-10\r\n7106,1172,APAC,grocery,mobile,74.66,3,0.115,none,2024-09-18\r\n7107,1849,EMEA,toys,online,28.84,5,0.226,loyalty,2024-11-02\r\n7108,1807,EMEA,home,mobile,76.77,3,0.221,none,2024-05-09\r\n7109,2230,LATAM,grocery,mobile,55.79,6,0.081,none,2024-01-20\r\n7110,1658,AMER,fashion,retail,28.94,3,0.200,none,2024-12-13\r\n7111,1491,EMEA,electronics,mobile,36.56,5,0.188,bundle,2024-04-08\r\n7112,2313,LATAM,home,retail,62.01,1,0.044,none,2024-11-21\r\n7113,2172,EMEA,grocery,online,66.26,5,0.094,none,2024-06-25\r\n7114,1118,AMER,home,online,27.84,7
,0.248,bundle,2024-11-09\r\n7115,2202,APAC,fashion,retail,85.67,5,0.213,bundle,2024-10-25\r\n7116,2264,LATAM,sports,retail,113.86,3,0.110,none,2024-01-27\r\n7117,1484,AMER,home,retail,17.62,2,0.044,coupon,2024-05-24\r\n7118,2191,AMER,electronics,partner,76.27,6,0.026,none,2024-05-06\r\n7119,1658,AMER,sports,online,39.54,6,0.184,none,2024-09-01\r\n7120,2411,EMEA,home,online,13.38,3,0.058,none,2024-07-10\r\n7121,1499,EMEA,fashion,partner,69.38,1,0.233,none,2024-11-12\r\n7122,2029,APAC,grocery,online,43.61,4,0.138,loyalty,2024-04-04\r\n7123,1475,LATAM,home,mobile,84.47,4,0.211,none,2024-04-15\r\n7124,1926,AMER,sports,partner,94.14,1,0.156,coupon,2024-12-16\r\n7125,1430,EMEA,electronics,retail,58.02,8,0.018,none,2024-07-12\r\n7126,1861,AMER,fashion,online,54.34,2,0.168,bundle,2024-01-13\r\n7127,1852,AMER,toys,retail,32.64,4,0.016,bundle,2024-09-11\r\n7128,2441,EMEA,grocery,mobile,73.44,2,0.059,none,2024-04-05\r\n7129,2245,APAC,grocery,online,138.96,6,0.081,coupon,2024-01-01\r\n7130,1674,LATAM,grocery,online,66.99,3,0.038,none,2024-02-15\r\n7131,1640,APAC,electronics,retail,113.66,6,0.123,none,2024-11-23\r\n7132,2210,APAC,electronics,online,60.14,1,0.101,coupon,2024-04-06\r\n7133,2102,APAC,grocery,online,82.34,1,0.122,loyalty,2024-03-21\r\n7134,2021,EMEA,home,online,90.46,8,0.207,loyalty,2024-12-12\r\n7135,2195,APAC,fashion,retail,72.33,3,0.067,coupon,2024-10-26\r\n7136,1196,APAC,grocery,retail,44.75,1,0.001,none,2024-10-24\r\n7137,1757,EMEA,sports,online,120.30,4,0.003,coupon,2024-10-13\r\n7138,1666,LATAM,sports,retail,56.36,1,0.229,none,2024-11-20\r\n7139,2029,APAC,toys,retail,122.69,1,0.043,coupon,2024-09-07\r\n7140,1157,LATAM,home,online,76.36,3,0.029,none,2024-11-13\r\n7141,1436,APAC,home,retail,89.58,2,0.245,none,2024-05-14\r\n7142,2386,EMEA,fashion,retail,53.07,6,0.023,none,2024-12-20\r\n7143,1251,EMEA,home,online,49.73,5,0.198,loyalty,2024-10-25\r\n7144,1760,LATAM,electronics,online,56.31,3,0.123,coupon,2024-11-22\r\n7145,1670,EMEA,electronics,online,29.83,4,0.01
4,bundle,2024-03-19\r\n7146,1658,AMER,home,retail,58.89,7,0.190,bundle,2024-10-08\r\n7147,1487,AMER,fashion,retail,106.33,2,0.224,bundle,2024-05-21\r\n7148,1137,APAC,electronics,retail,50.08,5,0.180,none,2024-02-14\r\n7149,1216,APAC,fashion,mobile,53.40,8,0.123,coupon,2024-09-21\r\n7150,1442,EMEA,home,retail,71.82,7,0.096,coupon,2024-12-15\r\n7151,1230,EMEA,grocery,mobile,88.42,7,0.119,none,2024-12-11\r\n7152,2075,LATAM,sports,retail,61.82,6,0.020,none,2024-04-05\r\n7153,1407,LATAM,grocery,retail,23.48,2,0.050,loyalty,2024-04-02\r\n7154,1548,EMEA,grocery,online,148.56,4,0.199,coupon,2024-04-17\r\n7155,2248,LATAM,home,online,79.50,1,0.086,loyalty,2024-03-18\r\n7156,2483,LATAM,home,retail,51.14,6,0.007,none,2024-11-03\r\n7157,2408,EMEA,electronics,retail,55.30,6,0.185,coupon,2024-02-10\r\n7158,1761,EMEA,grocery,online,62.20,5,0.136,coupon,2024-11-27\r\n7159,1199,APAC,electronics,online,26.76,5,0.202,none,2024-02-10\r\n7160,2137,LATAM,grocery,retail,27.54,5,0.204,none,2024-11-03\r\n7161,2079,EMEA,fashion,mobile,74.36,4,0.031,none,2024-12-18\r\n7162,2370,EMEA,electronics,retail,46.85,8,0.212,bundle,2024-03-08\r\n7163,1104,APAC,home,online,49.43,4,0.024,coupon,2024-09-02\r\n7164,1773,LATAM,home,retail,59.53,7,0.152,none,2024-12-16\r\n7165,1807,EMEA,home,online,63.86,1,0.182,none,2024-10-13\r\n7166,2298,APAC,fashion,retail,93.74,3,0.096,bundle,2024-06-21\r\n7167,1734,AMER,electronics,online,130.33,5,0.129,none,2024-05-03\r\n7168,1775,EMEA,sports,retail,117.49,5,0.054,coupon,2024-05-17\r\n7169,2206,AMER,home,retail,57.14,1,0.222,coupon,2024-12-02\r\n7170,2239,EMEA,electronics,retail,27.91,1,0.128,none,2024-03-09\r\n7171,1914,EMEA,sports,online,29.11,2,0.118,none,2024-09-19\r\n7172,1539,LATAM,grocery,mobile,62.88,4,0.110,coupon,2024-06-28\r\n7173,1442,EMEA,home,online,71.64,8,0.068,none,2024-03-04\r\n7174,2097,AMER,home,mobile,132.21,7,0.067,loyalty,2024-12-13\r\n7175,1652,APAC,grocery,retail,70.91,8,0.166,none,2024-11-21\r\n7176,1557,LATAM,electronics,retail,85.88,8,0.130,
none,2024-10-09\r\n7177,2021,EMEA,toys,retail,137.64,7,0.167,bundle,2024-12-06\r\n7178,1157,LATAM,home,retail,30.49,2,0.197,coupon,2024-01-14\r\n7179,1545,AMER,electronics,online,47.46,3,0.026,bundle,2024-02-11\r\n7180,1418,LATAM,grocery,online,52.19,4,0.140,none,2024-09-21\r\n7181,1830,EMEA,toys,retail,99.99,3,0.002,none,2024-10-28\r\n7182,1328,APAC,sports,mobile,25.91,4,0.074,loyalty,2024-05-04\r\n7183,1541,APAC,grocery,retail,39.31,8,0.190,none,2024-10-20\r\n7184,1154,LATAM,electronics,mobile,75.06,2,0.241,none,2024-12-12\r\n7185,1929,LATAM,fashion,mobile,59.54,6,0.151,coupon,2024-04-19\r\n7186,1839,APAC,electronics,retail,32.63,5,0.208,none,2024-05-05\r\n7187,2160,LATAM,electronics,online,49.42,4,0.008,coupon,2024-10-07\r\n7188,1888,LATAM,home,retail,74.15,3,0.117,none,2024-12-11\r\n7189,1034,EMEA,grocery,retail,33.33,6,0.071,bundle,2024-03-14\r\n7190,1880,LATAM,fashion,mobile,58.74,5,0.249,coupon,2024-08-07\r\n7191,1109,APAC,grocery,retail,28.48,8,0.145,none,2024-11-23\r\n7192,1990,EMEA,grocery,mobile,67.91,5,0.001,coupon,2024-06-17\r\n7193,1210,LATAM,home,online,23.78,1,0.106,none,2024-11-25\r\n7194,2054,AMER,home,retail,135.98,3,0.211,coupon,2024-01-24\r\n7195,1065,AMER,home,online,27.20,2,0.053,none,2024-06-24\r\n7196,2145,AMER,electronics,partner,55.04,5,0.068,bundle,2024-02-11\r\n7197,1477,APAC,grocery,online,55.02,2,0.184,none,2024-07-10\r\n7198,1934,EMEA,fashion,partner,29.81,8,0.049,coupon,2024-01-24\r\n7199,1393,LATAM,toys,partner,55.52,4,0.099,none,2024-07-27\r\n7200,2498,LATAM,grocery,online,48.83,7,0.142,coupon,2024-08-17\r\n7201,1066,AMER,home,retail,40.32,7,0.189,none,2024-08-07\r\n7202,2497,AMER,grocery,online,102.00,4,0.155,none,2024-01-11\r\n7203,1336,APAC,toys,mobile,169.85,7,0.199,loyalty,2024-09-17\r\n7204,2008,APAC,grocery,retail,126.53,4,0.104,bundle,2024-07-02\r\n7205,1432,APAC,grocery,online,62.63,4,0.240,coupon,2024-01-24\r\n7206,1080,LATAM,toys,online,85.00,5,0.137,bundle,2024-06-23\r\n7207,1742,AMER,grocery,retail,63.71,4,0.080,loyalt
y,2024-09-25\r\n7208,1131,APAC,fashion,retail,55.02,5,0.185,loyalty,2024-11-18\r\n7209,2003,LATAM,home,retail,59.74,6,0.170,none,2024-10-18\r\n7210,1871,APAC,fashion,mobile,46.03,4,0.046,none,2024-09-14\r\n7211,1429,APAC,grocery,retail,111.27,4,0.156,coupon,2024-03-10\r\n7212,2241,APAC,electronics,online,109.99,7,0.041,none,2024-07-11\r\n7213,1226,AMER,electronics,online,48.66,2,0.207,none,2024-10-18\r\n7214,2175,AMER,home,online,55.24,5,0.137,loyalty,2024-07-13\r\n7215,1387,AMER,sports,mobile,36.79,2,0.070,bundle,2024-12-02\r\n7216,2066,APAC,toys,online,23.29,5,0.054,loyalty,2024-01-24\r\n7217,1870,EMEA,grocery,partner,36.33,8,0.013,none,2024-04-02\r\n7218,2147,LATAM,home,online,83.58,6,0.182,none,2024-10-12\r\n7219,2161,LATAM,electronics,online,29.60,4,0.044,coupon,2024-03-12\r\n7220,2156,AMER,toys,retail,85.66,6,0.150,none,2024-10-20\r\n7221,1980,LATAM,toys,mobile,28.79,7,0.072,none,2024-09-17\r\n7222,1285,EMEA,home,online,101.79,7,0.164,none,2024-01-18\r\n7223,1680,LATAM,grocery,retail,38.32,7,0.226,none,2024-10-27\r\n7224,1368,EMEA,home,online,40.78,8,0.002,none,2024-04-07\r\n7225,1140,LATAM,fashion,retail,31.80,5,0.188,coupon,2024-01-12\r\n7226,1356,LATAM,electronics,retail,27.33,7,0.127,none,2024-10-25\r\n7227,1782,LATAM,toys,online,89.09,7,0.026,bundle,2024-05-01\r\n7228,1083,AMER,grocery,mobile,27.80,1,0.071,none,2024-01-01\r\n7229,1960,EMEA,home,online,36.35,8,0.234,none,2024-01-02\r\n7230,2468,EMEA,electronics,mobile,46.16,8,0.006,none,2024-11-06\r\n7231,1971,EMEA,sports,retail,32.44,1,0.063,none,2024-04-03\r\n7232,1362,AMER,fashion,partner,91.66,1,0.187,none,2024-09-24\r\n7233,2159,AMER,grocery,partner,97.03,4,0.197,bundle,2024-12-23\r\n7234,1317,EMEA,grocery,online,55.41,6,0.198,none,2024-12-03\r\n7235,1779,APAC,fashion,online,82.42,7,0.064,none,2024-07-17\r\n7236,2272,EMEA,fashion,retail,97.65,8,0.157,none,2024-05-12\r\n7237,1792,AMER,fashion,online,40.14,2,0.063,bundle,2024-03-21\r\n7238,1062,EMEA,fashion,online,73.75,6,0.097,bundle,2024-04-09\r\n7239
,1181,LATAM,electronics,retail,84.08,3,0.250,loyalty,2024-03-08\r\n7240,2036,APAC,toys,mobile,118.65,3,0.116,bundle,2024-07-04\r\n7241,1938,APAC,home,online,36.73,7,0.081,none,2024-07-21\r\n7242,2234,LATAM,electronics,retail,108.70,4,0.145,loyalty,2024-07-26\r\n7243,2162,EMEA,electronics,online,83.07,4,0.147,none,2024-04-21\r\n7244,1348,AMER,grocery,online,113.73,2,0.045,coupon,2024-02-11\r\n7245,1839,APAC,fashion,retail,25.44,7,0.129,none,2024-11-10\r\n7246,1200,EMEA,home,online,48.15,3,0.203,none,2024-11-10\r\n7247,1724,LATAM,fashion,retail,318.91,6,0.040,loyalty,2024-09-23\r\n7248,2046,APAC,sports,partner,14.43,2,0.211,none,2024-10-26\r\n7249,1007,APAC,home,retail,40.26,4,0.109,loyalty,2024-06-06\r\n7250,1038,APAC,grocery,online,50.48,5,0.078,none,2024-03-27\r\n7251,1245,APAC,fashion,online,16.22,1,0.222,none,2024-03-22\r\n7252,1256,LATAM,electronics,online,30.36,4,0.215,none,2024-01-14\r\n7253,1836,LATAM,electronics,online,137.94,6,0.119,none,2024-07-10\r\n7254,1332,APAC,grocery,online,119.24,1,0.222,none,2024-09-27\r\n7255,2056,LATAM,electronics,online,39.81,1,0.003,bundle,2024-11-19\r\n7256,1887,LATAM,toys,retail,48.76,6,0.109,none,2024-09-03\r\n7257,2067,LATAM,grocery,retail,39.65,3,0.022,loyalty,2024-06-27\r\n7258,1214,EMEA,sports,online,57.90,3,0.229,none,2024-10-11\r\n7259,1284,APAC,toys,retail,100.06,4,0.082,none,2024-09-16\r\n7260,2122,AMER,home,retail,19.22,8,0.137,none,2024-06-23\r\n7261,1405,LATAM,electronics,retail,66.09,2,0.009,none,2024-07-05\r\n7262,1860,EMEA,sports,retail,37.73,7,0.130,coupon,2024-10-17\r\n7263,2476,APAC,fashion,retail,49.56,3,0.243,none,2024-11-19\r\n7264,1587,LATAM,grocery,partner,45.87,5,0.151,none,2024-05-07\r\n7265,1973,EMEA,grocery,retail,66.52,1,0.002,coupon,2024-01-08\r\n7266,2135,EMEA,electronics,online,60.43,3,0.022,loyalty,2024-04-27\r\n7267,2283,AMER,grocery,online,36.65,2,0.058,bundle,2024-06-16\r\n7268,2450,EMEA,toys,retail,107.05,2,0.221,none,2024-04-17\r\n7269,1100,AMER,sports,retail,61.39,6,0.158,loyalty,2024-07-
08\r\n7270,1869,AMER,grocery,mobile,56.79,5,0.145,none,2024-04-08\r\n7271,1558,EMEA,grocery,mobile,74.73,1,0.022,coupon,2024-10-03\r\n7272,2202,APAC,grocery,retail,54.26,3,0.017,none,2024-05-24\r\n7273,1363,EMEA,fashion,mobile,76.88,6,0.096,none,2024-06-25\r\n7274,1400,EMEA,grocery,retail,97.96,3,0.248,loyalty,2024-02-20\r\n7275,1349,APAC,grocery,retail,77.51,1,0.240,none,2024-10-20\r\n7276,2438,AMER,home,retail,85.11,4,0.050,loyalty,2024-08-15\r\n7277,1876,LATAM,home,mobile,87.67,3,0.219,none,2024-02-17\r\n7278,2367,AMER,toys,online,53.39,6,0.250,bundle,2024-01-28\r\n7279,2296,AMER,toys,online,33.65,1,0.096,loyalty,2024-02-07\r\n7280,1354,AMER,grocery,retail,65.68,7,0.192,none,2024-10-20\r\n7281,1500,EMEA,home,online,36.55,3,0.016,loyalty,2024-10-04\r\n7282,1563,EMEA,fashion,online,40.08,5,0.223,none,2024-12-17\r\n7283,2162,EMEA,grocery,online,45.38,7,0.207,none,2024-05-10\r\n7284,2235,AMER,toys,retail,22.58,4,0.164,none,2024-07-25\r\n7285,2196,AMER,electronics,online,70.88,7,0.068,loyalty,2024-06-07\r\n7286,1554,AMER,toys,partner,59.62,6,0.085,none,2024-03-27\r\n7287,2306,AMER,grocery,retail,140.32,7,0.234,loyalty,2024-12-09\r\n7288,1704,AMER,toys,retail,36.89,8,0.137,none,2024-01-26\r\n7289,2016,LATAM,electronics,online,55.82,7,0.150,none,2024-10-02\r\n7290,1267,EMEA,grocery,retail,48.42,4,0.058,none,2024-12-12\r\n7291,1337,APAC,sports,retail,81.50,4,0.138,loyalty,2024-07-03\r\n7292,1197,LATAM,home,online,20.22,4,0.152,bundle,2024-09-27\r\n7293,2344,LATAM,fashion,mobile,22.72,1,0.067,none,2024-08-05\r\n7294,2098,AMER,toys,retail,34.47,2,0.197,bundle,2024-08-19\r\n7295,1318,LATAM,grocery,online,84.20,1,0.057,coupon,2024-01-22\r\n7296,1284,APAC,electronics,online,102.86,4,0.112,loyalty,2024-07-26\r\n7297,2150,APAC,fashion,mobile,58.76,7,0.243,bundle,2024-12-27\r\n7298,1795,EMEA,fashion,retail,60.55,3,0.176,none,2024-10-02\r\n7299,1343,LATAM,grocery,retail,92.31,5,0.127,coupon,2024-07-15\r\n7300,2391,EMEA,grocery,retail,74.11,4,0.003,none,2024-08-23\r\n7301,2476,APA
C,toys,online,41.56,6,0.066,none,2024-11-01\r\n7302,1538,AMER,sports,retail,51.31,6,0.055,loyalty,2024-12-05\r\n7303,1215,LATAM,home,online,158.97,5,0.202,none,2024-07-12\r\n7304,2490,AMER,grocery,retail,40.63,1,0.187,loyalty,2024-10-27\r\n7305,2441,EMEA,electronics,online,114.03,4,0.017,bundle,2024-08-14\r\n7306,1605,APAC,electronics,online,59.23,8,0.235,coupon,2024-06-13\r\n7307,1266,AMER,sports,retail,40.04,4,0.087,none,2024-09-03\r\n7308,2379,AMER,home,retail,41.31,1,0.033,none,2024-02-24\r\n7309,2015,APAC,fashion,retail,78.88,3,0.049,loyalty,2024-05-07\r\n7310,1552,EMEA,home,mobile,34.97,5,0.172,loyalty,2024-11-09\r\n7311,1917,LATAM,grocery,online,49.44,1,0.096,none,2024-08-01\r\n7312,1559,EMEA,grocery,retail,28.39,5,0.072,none,2024-06-26\r\n7313,2186,LATAM,home,mobile,62.59,3,0.236,coupon,2024-05-13\r\n7314,1818,AMER,grocery,online,33.44,7,0.014,none,2024-02-28\r\n7315,2117,EMEA,sports,mobile,28.57,1,0.135,none,2024-06-06\r\n7316,1685,AMER,grocery,partner,24.26,6,0.043,none,2024-04-13\r\n7317,1415,AMER,electronics,retail,38.13,4,0.131,coupon,2024-12-12\r\n7318,2326,LATAM,home,online,31.81,4,0.010,none,2024-12-09\r\n7319,1255,AMER,grocery,retail,75.92,3,0.027,none,2024-12-18\r\n7320,1279,EMEA,electronics,retail,29.98,6,0.049,none,2024-10-12\r\n7321,1346,AMER,home,mobile,61.35,2,0.211,none,2024-09-25\r\n7322,2140,AMER,grocery,online,56.46,8,0.003,bundle,2024-10-10\r\n7323,2277,EMEA,home,online,48.93,3,0.226,none,2024-08-07\r\n7324,1391,LATAM,electronics,online,77.79,3,0.120,none,2024-02-21\r\n7325,1680,LATAM,electronics,partner,116.61,2,0.244,none,2024-08-10\r\n7326,2226,EMEA,home,online,40.46,3,0.147,none,2024-10-26\r\n7327,2096,LATAM,electronics,online,51.37,6,0.051,none,2024-12-09\r\n7328,1836,LATAM,electronics,retail,63.82,4,0.181,none,2024-07-16\r\n7329,1948,EMEA,toys,retail,51.56,7,0.020,none,2024-01-04\r\n7330,2246,AMER,electronics,online,139.59,4,0.016,coupon,2024-11-07\r\n7331,1332,APAC,grocery,retail,100.33,6,0.182,none,2024-03-04\r\n7332,1222,AMER,gro
cery,online,43.56,7,0.150,coupon,2024-08-24\r\n7333,1671,APAC,grocery,online,39.05,5,0.199,none,2024-06-01\r\n7334,2482,EMEA,home,mobile,55.95,8,0.075,coupon,2024-06-26\r\n7335,1523,LATAM,toys,online,65.87,4,0.198,none,2024-06-27\r\n7336,1604,EMEA,fashion,retail,26.65,5,0.122,none,2024-04-05\r\n7337,1612,LATAM,fashion,online,55.96,4,0.143,none,2024-09-03\r\n7338,1811,APAC,grocery,retail,42.54,2,0.199,none,2024-08-03\r\n7339,1903,LATAM,grocery,retail,39.45,2,0.044,none,2024-02-15\r\n7340,1508,LATAM,sports,retail,115.83,6,0.010,none,2024-05-19\r\n7341,1868,AMER,home,retail,67.27,7,0.155,coupon,2024-06-04\r\n7342,1218,AMER,home,online,51.39,7,0.015,none,2024-10-01\r\n7343,1210,LATAM,grocery,retail,129.84,5,0.053,none,2024-11-02\r\n7344,1877,LATAM,home,online,50.98,8,0.234,none,2024-06-26\r\n7345,1054,EMEA,electronics,retail,116.07,6,0.079,coupon,2024-09-03\r\n7346,1143,LATAM,grocery,retail,50.00,2,0.184,none,2024-12-22\r\n7347,2149,EMEA,electronics,online,42.02,5,0.220,bundle,2024-07-28\r\n7348,1462,LATAM,grocery,retail,65.71,6,0.195,none,2024-02-11\r\n7349,1594,LATAM,fashion,online,44.48,4,0.187,bundle,2024-03-10\r\n7350,1620,LATAM,fashion,retail,88.41,5,0.032,none,2024-01-27\r\n7351,1964,EMEA,sports,retail,67.41,2,0.221,bundle,2024-07-19\r\n7352,1258,EMEA,electronics,retail,41.73,7,0.067,none,2024-04-12\r\n7353,2246,AMER,home,online,11.78,6,0.119,none,2024-08-12\r\n7354,1214,EMEA,grocery,retail,126.44,2,0.010,bundle,2024-01-02\r\n7355,1826,LATAM,fashion,online,52.49,7,0.220,none,2024-06-17\r\n7356,1834,AMER,home,mobile,68.49,3,0.040,none,2024-07-10\r\n7357,2195,APAC,sports,retail,67.72,8,0.154,none,2024-02-05\r\n7358,1914,EMEA,toys,partner,112.49,8,0.231,none,2024-05-07\r\n7359,1015,AMER,grocery,retail,34.62,7,0.023,none,2024-12-11\r\n7360,1597,APAC,home,partner,35.80,6,0.177,none,2024-02-25\r\n7361,1528,EMEA,home,retail,34.85,6,0.124,none,2024-06-18\r\n7362,1764,LATAM,electronics,mobile,32.17,1,0.192,bundle,2024-10-24\r\n7363,1928,AMER,electronics,retail,61.11,1,0.2
02,coupon,2024-02-22\r\n7364,2354,LATAM,toys,partner,110.63,7,0.125,none,2024-04-28\r\n7365,1954,APAC,grocery,mobile,33.42,6,0.162,loyalty,2024-01-08\r\n7366,1831,APAC,home,online,90.62,7,0.111,none,2024-03-08\r\n7367,1696,LATAM,sports,retail,31.98,3,0.234,bundle,2024-12-01\r\n7368,1340,LATAM,grocery,retail,51.10,2,0.206,coupon,2024-09-04\r\n7369,1979,APAC,grocery,retail,37.63,8,0.199,bundle,2024-09-19\r\n7370,2020,AMER,sports,online,18.44,8,0.053,none,2024-05-03\r\n7371,2223,EMEA,home,retail,79.28,6,0.238,bundle,2024-04-02\r\n7372,1208,AMER,grocery,online,86.02,4,0.065,loyalty,2024-02-05\r\n7373,1491,EMEA,fashion,partner,33.02,2,0.073,bundle,2024-12-18\r\n7374,1147,EMEA,fashion,mobile,109.22,1,0.093,bundle,2024-08-20\r\n7375,1368,EMEA,fashion,online,69.79,2,0.242,none,2024-11-14\r\n7376,2131,APAC,fashion,online,96.31,7,0.012,bundle,2024-10-21\r\n7377,1506,EMEA,sports,retail,83.61,6,0.053,none,2024-11-14\r\n7378,1993,APAC,grocery,retail,91.83,2,0.250,coupon,2024-07-08\r\n7379,1624,AMER,home,online,204.80,2,0.112,none,2024-10-06\r\n7380,2401,LATAM,electronics,retail,53.83,1,0.147,coupon,2024-08-03\r\n7381,1217,EMEA,sports,online,61.41,2,0.020,none,2024-12-18\r\n7382,1294,APAC,electronics,online,72.49,5,0.209,none,2024-08-28\r\n7383,2366,APAC,electronics,retail,74.65,2,0.200,none,2024-12-27\r\n7384,1555,AMER,home,mobile,53.36,6,0.229,loyalty,2024-09-16\r\n7385,1221,LATAM,home,online,25.09,1,0.192,bundle,2024-08-21\r\n7386,1611,EMEA,fashion,mobile,102.79,6,0.237,bundle,2024-06-24\r\n7387,2214,AMER,fashion,retail,96.46,3,0.114,none,2024-10-14\r\n7388,1955,AMER,home,mobile,72.14,5,0.162,bundle,2024-09-12\r\n7389,1460,LATAM,toys,retail,70.59,7,0.059,none,2024-12-27\r\n7390,1227,AMER,toys,online,8.87,6,0.234,coupon,2024-07-01\r\n7391,2192,APAC,toys,online,49.29,3,0.195,loyalty,2024-06-25\r\n7392,1264,APAC,fashion,retail,34.47,7,0.231,loyalty,2024-07-08\r\n7393,1148,AMER,fashion,mobile,47.32,7,0.027,bundle,2024-02-26\r\n7394,1339,EMEA,toys,mobile,58.08,3,0.084,none,2024-12-
09\r\n7395,1124,AMER,toys,online,87.81,7,0.064,none,2024-11-05\r\n7396,2411,EMEA,home,online,88.57,5,0.138,none,2024-08-06\r\n7397,1235,EMEA,electronics,online,51.02,2,0.134,none,2024-10-08\r\n7398,1461,LATAM,grocery,retail,58.43,6,0.053,loyalty,2024-06-09\r\n7399,1852,AMER,fashion,online,70.43,6,0.131,none,2024-04-17\r\n7400,1111,APAC,home,online,52.72,1,0.143,coupon,2024-06-12\r\n7401,1986,LATAM,electronics,partner,43.81,8,0.134,none,2024-02-28\r\n7402,2100,APAC,electronics,partner,29.52,7,0.179,coupon,2024-01-12\r\n7403,1393,LATAM,sports,retail,44.77,1,0.147,none,2024-01-03\r\n7404,1724,LATAM,home,online,55.38,5,0.181,none,2024-06-07\r\n7405,1740,EMEA,sports,retail,34.59,5,0.085,none,2024-09-06\r\n7406,1795,EMEA,grocery,mobile,59.43,1,0.127,none,2024-11-04\r\n7407,1340,LATAM,home,retail,49.91,8,0.057,none,2024-03-14\r\n7408,2078,APAC,toys,online,150.36,8,0.031,none,2024-04-03\r\n7409,2150,APAC,fashion,retail,47.41,6,0.011,none,2024-09-18\r\n7410,2482,EMEA,fashion,online,62.18,3,0.108,none,2024-03-05\r\n7411,1597,APAC,grocery,online,74.46,2,0.218,none,2024-04-19\r\n7412,1955,AMER,home,online,60.97,1,0.203,bundle,2024-03-20\r\n7413,1403,APAC,electronics,retail,49.95,4,0.020,bundle,2024-02-17\r\n7414,1614,EMEA,grocery,retail,45.30,6,0.016,none,2024-05-04\r\n7415,1500,EMEA,grocery,retail,57.67,6,0.034,none,2024-06-06\r\n7416,1440,AMER,electronics,mobile,35.96,5,0.135,none,2024-12-17\r\n7417,1532,APAC,electronics,online,55.22,4,0.166,bundle,2024-09-12\r\n7418,1460,LATAM,electronics,retail,28.94,4,0.218,bundle,2024-10-07\r\n7419,1433,EMEA,electronics,mobile,65.54,3,0.059,none,2024-12-15\r\n7420,1527,AMER,home,online,55.92,2,0.046,none,2024-11-23\r\n7421,1724,LATAM,grocery,retail,93.22,4,0.208,bundle,2024-03-20\r\n7422,1644,EMEA,sports,online,152.71,3,0.099,loyalty,2024-01-17\r\n7423,1790,AMER,grocery,retail,34.09,3,0.233,none,2024-09-04\r\n7424,1096,EMEA,electronics,online,98.14,5,0.099,bundle,2024-12-10\r\n7425,1272,AMER,electronics,online,43.11,3,0.114,none,2024-07-1
1\r\n7426,1355,EMEA,grocery,online,41.26,4,0.100,none,2024-04-03\r\n7427,1236,AMER,fashion,retail,79.66,6,0.087,none,2024-04-28\r\n7428,1802,AMER,home,retail,51.10,4,0.121,coupon,2024-11-15\r\n7429,1575,APAC,fashion,mobile,52.04,7,0.055,loyalty,2024-04-03\r\n7430,2228,EMEA,electronics,retail,37.07,4,0.151,none,2024-06-07\r\n7431,1778,LATAM,home,online,35.09,5,0.059,coupon,2024-10-09\r\n7432,1144,APAC,toys,retail,48.53,7,0.032,none,2024-01-13\r\n7433,1113,EMEA,home,online,67.56,7,0.115,none,2024-09-28\r\n7434,1884,APAC,sports,online,88.80,6,0.054,none,2024-10-07\r\n7435,1308,EMEA,fashion,online,74.84,2,0.213,bundle,2024-10-11\r\n7436,2249,LATAM,fashion,online,69.69,3,0.248,none,2024-04-17\r\n7437,2310,EMEA,fashion,online,35.92,5,0.057,none,2024-08-03\r\n7438,2166,AMER,toys,retail,49.27,5,0.158,bundle,2024-06-27\r\n7439,1288,LATAM,sports,mobile,71.42,2,0.208,loyalty,2024-12-28\r\n7440,1909,APAC,fashion,online,73.29,1,0.113,none,2024-11-24\r\n7441,2112,LATAM,grocery,online,98.50,7,0.023,none,2024-01-06\r\n7442,1007,APAC,grocery,online,52.80,1,0.099,none,2024-02-11\r\n7443,1857,LATAM,grocery,online,71.85,4,0.166,none,2024-05-11\r\n7444,2394,EMEA,toys,retail,40.07,4,0.118,coupon,2024-01-03\r\n7445,1821,LATAM,sports,online,89.57,6,0.070,none,2024-10-14\r\n7446,2025,EMEA,home,online,349.24,8,0.237,coupon,2024-12-26\r\n7447,2188,EMEA,toys,retail,53.72,3,0.189,none,2024-05-17\r\n7448,2090,AMER,home,retail,76.77,4,0.006,bundle,2024-03-15\r\n7449,2385,APAC,home,retail,50.05,3,0.241,bundle,2024-11-21\r\n7450,2040,LATAM,grocery,online,62.58,1,0.217,coupon,2024-10-07\r\n7451,1510,EMEA,fashion,online,62.07,1,0.106,none,2024-04-08\r\n7452,1036,EMEA,grocery,retail,51.83,2,0.135,loyalty,2024-01-21\r\n7453,2210,APAC,grocery,retail,214.51,5,0.193,none,2024-06-08\r\n7454,1021,AMER,fashion,online,98.89,8,0.229,none,2024-01-22\r\n7455,2489,LATAM,sports,online,41.38,3,0.146,bundle,2024-03-10\r\n7456,2111,EMEA,electronics,retail,77.43,4,0.083,bundle,2024-03-05\r\n7457,1221,LATAM,toys,partne
r,39.83,3,0.117,bundle,2024-08-15\r\n7458,1788,AMER,grocery,online,56.54,1,0.137,none,2024-02-19\r\n7459,1978,AMER,electronics,online,69.53,2,0.147,coupon,2024-01-10\r\n7460,1350,LATAM,grocery,mobile,49.93,5,0.247,bundle,2024-09-13\r\n7461,1983,LATAM,electronics,retail,36.26,7,0.219,none,2024-09-22\r\n7462,1732,LATAM,toys,online,79.48,3,0.183,coupon,2024-08-15\r\n7463,1144,APAC,home,online,23.19,2,0.113,none,2024-07-21\r\n7464,1596,EMEA,toys,online,56.96,1,0.143,bundle,2024-04-13\r\n7465,2189,LATAM,sports,online,70.46,7,0.068,none,2024-09-17\r\n7466,1178,EMEA,electronics,retail,98.37,5,0.011,none,2024-01-24\r\n7467,1326,AMER,electronics,online,60.55,3,0.223,none,2024-12-27\r\n7468,1543,AMER,home,online,45.79,7,0.214,none,2024-11-11\r\n7469,1368,EMEA,grocery,retail,83.79,2,0.180,none,2024-04-03\r\n7470,1693,EMEA,toys,retail,52.11,8,0.103,none,2024-02-02\r\n7471,1083,AMER,electronics,retail,110.72,1,0.194,loyalty,2024-05-19\r\n7472,1679,APAC,grocery,online,54.43,7,0.217,coupon,2024-04-12\r\n7473,2231,LATAM,home,mobile,80.02,1,0.038,none,2024-12-19\r\n7474,2372,AMER,toys,online,28.01,7,0.012,none,2024-10-28\r\n7475,1331,AMER,home,retail,97.54,5,0.161,coupon,2024-07-15\r\n7476,1526,EMEA,fashion,retail,52.97,6,0.022,coupon,2024-01-23\r\n7477,1023,APAC,toys,online,160.35,6,0.167,none,2024-05-08\r\n7478,2235,AMER,home,retail,31.33,4,0.141,coupon,2024-07-08\r\n7479,2463,AMER,home,retail,81.35,3,0.225,coupon,2024-02-20\r\n7480,1006,AMER,home,online,61.63,3,0.126,none,2024-10-13\r\n7481,1138,AMER,grocery,online,36.51,1,0.173,none,2024-04-28\r\n7482,2260,EMEA,toys,online,41.62,8,0.145,none,2024-09-05\r\n7483,2448,APAC,grocery,online,74.97,7,0.191,loyalty,2024-09-14\r\n7484,1243,AMER,toys,retail,90.01,7,0.063,none,2024-10-06\r\n7485,1562,AMER,toys,mobile,54.19,6,0.085,none,2024-11-07\r\n7486,1639,APAC,electronics,partner,76.75,7,0.061,none,2024-10-18\r\n7487,1571,EMEA,grocery,online,29.03,5,0.031,none,2024-03-21\r\n7488,1619,APAC,fashion,retail,35.35,2,0.240,coupon,2024-08-02\r
\n7489,1998,APAC,electronics,mobile,71.11,6,0.101,none,2024-08-04\r\n7490,2110,LATAM,sports,retail,94.75,4,0.248,none,2024-10-01\r\n7491,1114,APAC,sports,online,120.20,5,0.239,none,2024-11-23\r\n7492,2328,EMEA,toys,online,54.31,5,0.226,none,2024-06-19\r\n7493,2171,EMEA,electronics,retail,41.58,8,0.085,none,2024-05-11\r\n7494,2275,LATAM,grocery,retail,81.71,4,0.030,none,2024-02-24\r\n7495,2241,APAC,electronics,retail,54.87,5,0.197,coupon,2024-02-05\r\n7496,1857,LATAM,grocery,retail,28.61,5,0.242,none,2024-03-08\r\n7497,2057,APAC,sports,retail,60.37,4,0.240,none,2024-08-02\r\n7498,2193,AMER,grocery,online,16.25,6,0.008,coupon,2024-02-22\r\n7499,1752,APAC,electronics,retail,26.99,4,0.012,coupon,2024-10-11\r\n7500,2404,EMEA,toys,online,118.24,2,0.068,coupon,2024-05-18\r\n7501,1607,LATAM,electronics,online,47.30,7,0.142,none,2024-09-10\r\n7502,1997,APAC,electronics,retail,29.63,1,0.022,loyalty,2024-12-21\r\n7503,1503,APAC,electronics,online,81.80,1,0.085,none,2024-12-14\r\n7504,1010,EMEA,home,partner,75.88,1,0.102,none,2024-03-06\r\n7505,1123,LATAM,sports,online,84.22,7,0.133,coupon,2024-01-16\r\n7506,2365,LATAM,home,online,30.17,3,0.002,bundle,2024-07-15\r\n7507,2256,AMER,grocery,online,18.47,6,0.018,none,2024-11-14\r\n7508,1898,EMEA,home,partner,65.35,6,0.048,none,2024-03-10\r\n7509,1954,APAC,home,retail,33.97,1,0.112,loyalty,2024-06-03\r\n7510,1458,APAC,home,online,20.20,8,0.191,none,2024-07-19\r\n7511,2197,LATAM,sports,retail,98.87,2,0.159,none,2024-06-25\r\n7512,2088,EMEA,home,online,47.85,8,0.161,none,2024-08-17\r\n7513,1706,EMEA,electronics,mobile,142.45,6,0.185,loyalty,2024-10-06\r\n7514,1684,EMEA,grocery,online,150.94,7,0.099,none,2024-02-01\r\n7515,1158,LATAM,sports,online,55.34,6,0.015,none,2024-02-15\r\n7516,1723,LATAM,toys,retail,92.18,2,0.094,loyalty,2024-04-01\r\n7517,2488,EMEA,toys,retail,98.32,3,0.202,coupon,2024-12-09\r\n7518,2331,APAC,electronics,mobile,55.51,7,0.247,none,2024-01-24\r\n7519,2210,APAC,grocery,retail,33.03,6,0.190,none,2024-10-18\r\n7520
,1908,AMER,grocery,online,50.59,5,0.105,coupon,2024-02-12\r\n7521,1880,LATAM,home,online,47.07,8,0.246,none,2024-10-13\r\n7522,1245,APAC,fashion,retail,25.23,4,0.016,bundle,2024-10-22\r\n7523,1883,LATAM,grocery,retail,69.20,6,0.121,coupon,2024-12-16\r\n7524,1167,EMEA,electronics,mobile,22.90,3,0.047,none,2024-09-02\r\n7525,2061,EMEA,grocery,retail,82.01,3,0.114,none,2024-02-09\r\n7526,2010,APAC,electronics,online,106.92,1,0.129,coupon,2024-09-26\r\n7527,1145,AMER,toys,online,73.95,3,0.053,none,2024-03-19\r\n7528,1211,EMEA,electronics,retail,90.84,2,0.138,none,2024-06-25\r\n7529,1067,APAC,home,online,48.19,6,0.235,none,2024-01-24\r\n7530,1164,EMEA,fashion,online,71.38,7,0.140,coupon,2024-05-08\r\n7531,1139,EMEA,home,retail,48.96,1,0.240,none,2024-09-15\r\n7532,2276,AMER,home,partner,39.34,7,0.161,coupon,2024-08-22\r\n7533,1054,EMEA,electronics,mobile,35.31,6,0.062,none,2024-04-23\r\n7534,2074,AMER,home,retail,89.36,8,0.117,none,2024-11-15\r\n7535,1208,AMER,electronics,online,75.54,2,0.034,none,2024-09-06\r\n7536,1200,EMEA,grocery,online,35.72,7,0.053,coupon,2024-02-17\r\n7537,2017,EMEA,toys,retail,71.22,8,0.224,coupon,2024-12-20\r\n7538,1732,LATAM,toys,mobile,151.55,4,0.119,none,2024-02-02\r\n7539,1736,AMER,fashion,online,118.23,5,0.244,none,2024-06-16\r\n7540,1079,LATAM,home,retail,40.38,8,0.139,bundle,2024-04-11\r\n7541,1673,AMER,toys,online,53.93,2,0.106,coupon,2024-05-25\r\n7542,2208,AMER,grocery,retail,45.19,5,0.126,coupon,2024-11-14\r\n7543,1283,APAC,grocery,mobile,19.42,5,0.109,none,2024-02-17\r\n7544,1742,AMER,home,online,64.97,3,0.052,loyalty,2024-12-10\r\n7545,1098,APAC,fashion,online,77.53,2,0.120,bundle,2024-02-01\r\n7546,2180,AMER,toys,partner,89.83,4,0.037,bundle,2024-06-12\r\n7547,2214,AMER,electronics,mobile,40.90,8,0.007,bundle,2024-03-14\r\n7548,2182,AMER,home,online,64.81,2,0.143,coupon,2024-12-06\r\n7549,1483,EMEA,sports,retail,34.70,6,0.199,none,2024-03-01\r\n7550,1746,LATAM,electronics,mobile,85.03,2,0.107,bundle,2024-05-07\r\n7551,1700,EMEA,ele
ctronics,retail,37.26,1,0.130,none,2024-11-18\r\n7552,1232,LATAM,home,online,50.30,4,0.132,none,2024-07-18\r\n7553,1879,EMEA,toys,online,35.94,6,0.006,bundle,2024-11-09\r\n7554,2286,AMER,grocery,retail,68.15,4,0.081,bundle,2024-05-14\r\n7555,2129,APAC,toys,online,56.46,8,0.067,coupon,2024-09-21\r\n7556,1488,AMER,toys,retail,110.99,5,0.227,none,2024-07-17\r\n7557,2282,EMEA,fashion,mobile,47.33,5,0.156,none,2024-12-19\r\n7558,2329,LATAM,grocery,online,74.23,5,0.230,none,2024-03-02\r\n7559,1196,APAC,grocery,online,55.96,8,0.022,none,2024-01-28\r\n7560,1155,EMEA,home,online,17.47,5,0.156,none,2024-08-02\r\n7561,1265,APAC,fashion,online,33.61,4,0.054,none,2024-01-25\r\n7562,2262,APAC,fashion,retail,44.25,7,0.156,none,2024-07-01\r\n7563,1191,EMEA,grocery,online,64.69,4,0.223,coupon,2024-07-01\r\n7564,1097,EMEA,sports,online,42.01,2,0.037,none,2024-08-16\r\n7565,1553,LATAM,toys,online,33.79,5,0.025,none,2024-09-23\r\n7566,1656,LATAM,fashion,online,58.42,6,0.189,bundle,2024-08-19\r\n7567,1850,APAC,grocery,retail,79.42,2,0.215,coupon,2024-10-02\r\n7568,1496,AMER,electronics,retail,52.02,4,0.061,none,2024-07-06\r\n7569,2435,AMER,electronics,online,83.47,7,0.238,none,2024-07-16\r\n7570,1732,LATAM,grocery,online,58.28,8,0.102,loyalty,2024-10-23\r\n7571,2167,APAC,grocery,retail,70.72,4,0.104,coupon,2024-07-05\r\n7572,2059,AMER,electronics,online,32.92,2,0.181,none,2024-11-07\r\n7573,1335,APAC,grocery,mobile,80.44,2,0.104,none,2024-02-18\r\n7574,1806,APAC,electronics,mobile,49.04,5,0.010,coupon,2024-08-14\r\n7575,1898,EMEA,home,online,48.38,6,0.023,bundle,2024-03-17\r\n7576,1867,AMER,fashion,online,118.41,3,0.000,bundle,2024-02-10\r\n7577,1625,EMEA,electronics,partner,48.11,4,0.149,bundle,2024-04-28\r\n7578,2388,LATAM,sports,mobile,62.23,3,0.082,loyalty,2024-12-17\r\n7579,1697,APAC,grocery,online,33.93,6,0.074,none,2024-08-14\r\n7580,1928,AMER,electronics,online,126.85,2,0.172,none,2024-09-09\r\n7581,2488,EMEA,toys,online,38.90,4,0.076,coupon,2024-05-20\r\n7582,1791,LATAM,grocery
,retail,30.39,5,0.211,none,2024-03-15\r\n7583,1222,AMER,grocery,online,84.20,1,0.142,loyalty,2024-08-24\r\n7584,2346,LATAM,grocery,online,47.25,5,0.053,bundle,2024-06-28\r\n7585,1281,AMER,toys,mobile,146.18,5,0.156,loyalty,2024-04-10\r\n7586,2384,LATAM,grocery,retail,38.67,1,0.121,bundle,2024-05-26\r\n7587,1656,LATAM,home,online,36.48,8,0.145,coupon,2024-04-01\r\n7588,1178,EMEA,grocery,retail,77.27,7,0.209,none,2024-08-14\r\n7589,2237,EMEA,fashion,online,49.98,1,0.104,coupon,2024-09-23\r\n7590,2440,APAC,sports,online,80.14,5,0.048,none,2024-09-04\r\n7591,1387,AMER,grocery,online,32.41,7,0.018,loyalty,2024-12-17\r\n7592,2480,APAC,electronics,retail,41.21,8,0.058,none,2024-07-01\r\n7593,2023,LATAM,grocery,online,89.29,1,0.203,coupon,2024-06-22\r\n7594,1876,LATAM,toys,online,64.16,3,0.141,loyalty,2024-07-02\r\n7595,1501,AMER,electronics,retail,17.17,6,0.015,none,2024-03-17\r\n7596,1919,EMEA,electronics,online,47.13,8,0.097,loyalty,2024-07-19\r\n7597,1206,EMEA,grocery,retail,63.67,6,0.250,none,2024-02-08\r\n7598,1874,LATAM,home,online,26.71,1,0.108,coupon,2024-05-21\r\n7599,1302,LATAM,sports,online,69.95,5,0.218,none,2024-04-06\r\n7600,1992,LATAM,electronics,online,85.60,1,0.004,none,2024-06-06\r\n7601,1645,EMEA,electronics,retail,46.82,3,0.153,none,2024-09-08\r\n7602,1691,LATAM,sports,online,86.94,8,0.163,bundle,2024-01-18\r\n7603,2238,AMER,home,mobile,210.72,5,0.017,bundle,2024-10-15\r\n7604,1781,LATAM,home,mobile,33.04,2,0.040,none,2024-12-11\r\n7605,2248,LATAM,grocery,partner,38.84,4,0.037,none,2024-05-02\r\n7606,2120,AMER,home,retail,62.23,2,0.088,loyalty,2024-03-25\r\n7607,1586,LATAM,home,online,66.97,1,0.013,none,2024-12-24\r\n7608,1854,AMER,grocery,retail,96.47,4,0.038,none,2024-03-04\r\n7609,2476,APAC,grocery,online,38.36,3,0.070,none,2024-01-09\r\n7610,1325,APAC,electronics,retail,51.45,2,0.087,none,2024-09-02\r\n7611,1288,LATAM,electronics,mobile,80.47,5,0.010,coupon,2024-03-07\r\n7612,2295,EMEA,home,online,73.57,2,0.066,none,2024-06-23\r\n7613,1960,EMEA,home
,online,68.41,5,0.245,none,2024-03-20\r\n7614,1641,EMEA,toys,mobile,54.60,7,0.025,bundle,2024-08-21\r\n7615,2171,EMEA,home,online,82.90,5,0.014,none,2024-02-14\r\n7616,1999,EMEA,fashion,retail,61.78,3,0.164,none,2024-07-19\r\n7617,2035,LATAM,toys,retail,66.12,6,0.072,none,2024-08-27\r\n7618,1651,LATAM,grocery,online,159.03,6,0.241,none,2024-04-04\r\n7619,1673,AMER,fashion,mobile,186.20,8,0.243,none,2024-06-17\r\n7620,1345,AMER,fashion,retail,82.46,8,0.144,none,2024-04-18\r\n7621,1908,AMER,electronics,retail,26.49,1,0.012,bundle,2024-06-27\r\n7622,1399,AMER,sports,retail,105.17,1,0.220,none,2024-05-09\r\n7623,1059,AMER,electronics,mobile,41.50,6,0.009,bundle,2024-05-27\r\n7624,1794,AMER,sports,online,31.53,1,0.168,none,2024-03-26\r\n7625,1733,LATAM,electronics,online,43.58,5,0.100,none,2024-03-05\r\n7626,1809,APAC,fashion,online,66.80,7,0.062,none,2024-01-14\r\n7627,1646,APAC,fashion,mobile,101.12,2,0.248,none,2024-10-27\r\n7628,1849,EMEA,electronics,online,132.02,7,0.179,loyalty,2024-11-26\r\n7629,1291,EMEA,sports,online,48.71,1,0.044,none,2024-07-27\r\n7630,1510,EMEA,grocery,online,43.45,3,0.198,none,2024-07-22\r\n7631,2289,APAC,electronics,mobile,23.80,5,0.225,coupon,2024-02-12\r\n7632,1201,LATAM,home,mobile,36.49,6,0.145,none,2024-12-12\r\n7633,1233,AMER,grocery,online,25.41,7,0.060,loyalty,2024-07-06\r\n7634,1944,AMER,grocery,online,63.54,2,0.119,none,2024-04-03\r\n7635,2308,AMER,grocery,retail,37.69,7,0.026,none,2024-07-17\r\n7636,1748,APAC,fashion,partner,53.57,8,0.196,none,2024-08-25\r\n7637,1640,APAC,grocery,online,33.71,4,0.020,bundle,2024-03-16\r\n7638,1897,AMER,home,online,74.34,4,0.107,none,2024-11-09\r\n7639,2282,EMEA,electronics,online,41.85,2,0.187,none,2024-11-26\r\n7640,1237,LATAM,electronics,partner,55.52,2,0.215,none,2024-08-14\r\n7641,1179,APAC,toys,online,44.42,6,0.052,coupon,2024-06-06\r\n7642,1162,AMER,home,online,44.03,8,0.062,none,2024-06-10\r\n7643,1586,LATAM,electronics,mobile,27.84,6,0.065,none,2024-10-11\r\n7644,1592,LATAM,grocery,retail
,23.48,4,0.247,none,2024-10-27\r\n7645,2010,APAC,grocery,retail,118.13,6,0.036,bundle,2024-05-26\r\n7646,2353,AMER,sports,online,53.05,6,0.129,none,2024-04-10\r\n7647,2435,AMER,grocery,mobile,152.09,1,0.104,none,2024-12-11\r\n7648,1600,AMER,grocery,mobile,71.69,7,0.047,none,2024-08-03\r\n7649,1172,APAC,grocery,mobile,63.91,2,0.202,none,2024-03-21\r\n7650,2008,APAC,electronics,online,136.38,5,0.115,bundle,2024-08-19\r\n7651,2170,EMEA,electronics,mobile,16.24,2,0.107,bundle,2024-03-14\r\n7652,1051,EMEA,fashion,mobile,31.33,3,0.107,loyalty,2024-04-28\r\n7653,1869,AMER,sports,online,20.96,1,0.237,coupon,2024-12-24\r\n7654,1058,LATAM,electronics,retail,92.36,4,0.070,coupon,2024-03-08\r\n7655,2292,EMEA,electronics,mobile,37.31,5,0.216,coupon,2024-11-24\r\n7656,1523,LATAM,electronics,online,168.67,1,0.128,bundle,2024-04-04\r\n7657,2101,APAC,home,retail,28.76,5,0.192,coupon,2024-02-08\r\n7658,1138,AMER,grocery,mobile,110.21,6,0.215,bundle,2024-10-16\r\n7659,1850,APAC,electronics,online,72.91,6,0.056,none,2024-01-03\r\n7660,1922,EMEA,electronics,mobile,68.49,1,0.033,loyalty,2024-10-17\r\n7661,2440,APAC,home,online,60.54,4,0.048,none,2024-10-01\r\n7662,1571,EMEA,grocery,online,67.68,3,0.130,none,2024-05-28\r\n7663,2014,EMEA,electronics,mobile,47.17,7,0.194,coupon,2024-07-23\r\n7664,1131,APAC,sports,online,20.03,1,0.111,loyalty,2024-07-23\r\n7665,1073,AMER,electronics,partner,47.48,1,0.190,none,2024-09-04\r\n7666,2072,AMER,toys,online,64.30,2,0.026,coupon,2024-12-19\r\n7667,1589,AMER,sports,online,79.13,7,0.243,bundle,2024-12-20\r\n7668,1809,APAC,electronics,online,113.63,4,0.039,coupon,2024-09-12\r\n7669,2006,APAC,home,retail,24.63,3,0.175,none,2024-09-12\r\n7670,2107,APAC,grocery,online,75.41,7,0.047,none,2024-05-24\r\n7671,2127,LATAM,sports,online,70.10,3,0.028,loyalty,2024-07-24\r\n7672,1519,APAC,home,online,68.93,7,0.109,none,2024-12-24\r\n7673,1595,AMER,fashion,partner,189.39,6,0.061,coupon,2024-01-17\r\n7674,1880,LATAM,electronics,retail,80.94,2,0.033,coupon,2024-01-04\
r\n7675,1757,EMEA,grocery,mobile,67.47,7,0.006,bundle,2024-06-26\r\n7676,1463,EMEA,electronics,online,65.56,4,0.101,none,2024-11-19\r\n7677,1934,EMEA,electronics,retail,49.63,7,0.073,none,2024-06-21\r\n7678,1056,LATAM,grocery,mobile,43.23,7,0.238,none,2024-08-18\r\n7679,1096,EMEA,home,online,57.91,3,0.038,none,2024-04-06\r\n7680,1750,LATAM,electronics,retail,40.03,6,0.191,none,2024-02-19\r\n7681,1105,AMER,fashion,partner,72.72,5,0.012,none,2024-01-09\r\n7682,1423,EMEA,sports,retail,36.60,2,0.184,none,2024-02-12\r\n7683,1272,AMER,electronics,online,84.84,7,0.163,none,2024-01-17\r\n7684,1135,APAC,toys,mobile,99.50,7,0.061,none,2024-07-18\r\n7685,2185,EMEA,home,online,107.99,6,0.172,none,2024-03-08\r\n7686,1645,EMEA,grocery,online,60.48,5,0.217,loyalty,2024-07-14\r\n7687,1053,AMER,fashion,mobile,47.80,3,0.055,none,2024-08-06\r\n7688,2370,EMEA,grocery,online,52.06,4,0.060,coupon,2024-04-25\r\n7689,2414,EMEA,grocery,partner,22.75,8,0.093,bundle,2024-03-28\r\n7690,1885,EMEA,electronics,online,73.18,2,0.014,none,2024-09-12\r\n7691,1314,AMER,grocery,retail,52.22,6,0.130,coupon,2024-11-07\r\n7692,2063,APAC,electronics,online,36.76,5,0.163,loyalty,2024-09-04\r\n7693,2189,LATAM,electronics,online,83.60,4,0.099,bundle,2024-04-13\r\n7694,2408,EMEA,grocery,mobile,110.88,1,0.221,none,2024-06-11\r\n7695,1424,APAC,home,mobile,92.76,2,0.123,none,2024-01-11\r\n7696,1852,AMER,grocery,retail,52.80,3,0.110,loyalty,2024-08-11\r\n7697,2485,AMER,fashion,retail,111.90,6,0.125,coupon,2024-09-26\r\n7698,1683,AMER,home,online,69.84,4,0.091,coupon,2024-05-14\r\n7699,1923,LATAM,sports,mobile,53.84,6,0.172,none,2024-11-18\r\n7700,1329,APAC,fashion,online,26.55,5,0.049,coupon,2024-07-09\r\n7701,2386,EMEA,fashion,online,53.94,8,0.035,bundle,2024-08-06\r\n7702,1163,AMER,electronics,retail,57.28,5,0.145,none,2024-12-26\r\n7703,1083,AMER,electronics,retail,18.69,4,0.217,coupon,2024-04-01\r\n7704,1446,AMER,home,online,59.83,4,0.174,coupon,2024-06-14\r\n7705,1297,AMER,grocery,mobile,28.98,5,0.039,coupon,
2024-01-12\r\n7706,2371,LATAM,grocery,online,44.51,2,0.231,loyalty,2024-07-19\r\n7707,1121,EMEA,toys,mobile,75.41,4,0.132,none,2024-10-02\r\n7708,1017,AMER,fashion,retail,85.02,4,0.096,none,2024-06-01\r\n7709,1349,APAC,electronics,online,100.15,5,0.176,none,2024-04-26\r\n7710,1203,AMER,toys,online,33.07,7,0.180,none,2024-01-17\r\n7711,1959,EMEA,fashion,online,71.89,6,0.190,loyalty,2024-09-18\r\n7712,2377,AMER,grocery,retail,96.07,6,0.214,coupon,2024-04-19\r\n7713,2278,APAC,home,mobile,39.20,1,0.238,none,2024-08-06\r\n7714,1287,AMER,fashion,online,62.23,3,0.156,coupon,2024-04-16\r\n7715,2134,AMER,electronics,retail,62.97,4,0.244,coupon,2024-05-23\r\n7716,1411,LATAM,home,retail,43.24,8,0.106,none,2024-01-09\r\n7717,1981,EMEA,grocery,retail,24.79,2,0.110,loyalty,2024-08-11\r\n7718,1644,EMEA,electronics,retail,30.34,1,0.171,coupon,2024-11-07\r\n7719,2489,LATAM,home,retail,40.71,6,0.003,bundle,2024-11-25\r\n7720,1316,APAC,sports,online,28.13,4,0.214,none,2024-09-01\r\n7721,2114,AMER,electronics,retail,50.40,3,0.107,bundle,2024-02-25\r\n7722,2144,EMEA,home,mobile,110.13,2,0.046,none,2024-06-13\r\n7723,2291,EMEA,electronics,online,81.54,5,0.211,none,2024-07-21\r\n7724,2179,LATAM,home,retail,80.72,6,0.016,coupon,2024-05-15\r\n7725,2110,LATAM,grocery,retail,63.18,3,0.223,none,2024-11-14\r\n7726,2205,AMER,electronics,online,118.45,5,0.046,coupon,2024-01-18\r\n7727,2092,AMER,grocery,partner,115.19,5,0.141,coupon,2024-05-06\r\n7728,1383,AMER,toys,online,61.90,6,0.201,bundle,2024-08-04\r\n7729,1194,APAC,grocery,retail,82.81,8,0.058,loyalty,2024-02-17\r\n7730,2399,LATAM,electronics,mobile,52.62,4,0.221,none,2024-06-09\r\n7731,1381,LATAM,grocery,mobile,56.72,6,0.017,loyalty,2024-06-21\r\n7732,1789,EMEA,grocery,online,65.23,2,0.040,bundle,2024-10-18\r\n7733,1244,LATAM,home,partner,59.48,2,0.100,bundle,2024-05-28\r\n7734,1555,AMER,home,mobile,64.38,7,0.060,none,2024-09-13\r\n7735,2250,AMER,home,partner,45.33,3,0.158,bundle,2024-10-04\r\n7736,2311,LATAM,fashion,online,34.80,6,0.212,c
oupon,2024-09-16\r\n7737,1225,APAC,electronics,retail,35.29,4,0.101,coupon,2024-12-19\r\n7738,1778,LATAM,sports,online,26.90,5,0.055,none,2024-08-19\r\n7739,1853,APAC,electronics,online,37.20,3,0.000,none,2024-04-15\r\n7740,1850,APAC,fashion,partner,52.69,8,0.022,none,2024-10-12\r\n7741,2066,APAC,electronics,mobile,63.41,4,0.118,none,2024-04-12\r\n7742,2453,AMER,home,online,102.27,7,0.142,none,2024-04-01\r\n7743,1544,LATAM,fashion,retail,105.39,4,0.238,none,2024-02-06\r\n7744,2016,LATAM,home,retail,139.21,5,0.099,bundle,2024-07-09\r\n7745,1248,APAC,fashion,mobile,35.12,1,0.204,none,2024-07-13\r\n7746,1090,AMER,home,online,61.73,3,0.077,none,2024-08-06\r\n7747,1307,AMER,sports,partner,61.18,3,0.057,none,2024-02-12\r\n7748,1962,APAC,toys,retail,142.95,1,0.176,coupon,2024-08-07\r\n7749,2417,LATAM,electronics,online,50.74,6,0.124,none,2024-04-24\r\n7750,1147,EMEA,electronics,retail,50.85,2,0.020,none,2024-08-23\r\n7751,2256,AMER,electronics,mobile,54.03,3,0.228,none,2024-11-15\r\n7752,1008,AMER,fashion,online,74.58,7,0.174,coupon,2024-09-10\r\n7753,2162,EMEA,sports,online,62.75,7,0.052,bundle,2024-11-19\r\n7754,2214,AMER,grocery,online,43.84,3,0.108,none,2024-11-04\r\n7755,1186,APAC,grocery,online,60.15,1,0.011,none,2024-04-23\r\n7756,1138,AMER,electronics,online,59.02,5,0.161,coupon,2024-08-21\r\n7757,1139,EMEA,grocery,retail,24.16,1,0.123,none,2024-05-05\r\n7758,1174,APAC,home,online,75.03,7,0.004,loyalty,2024-07-19\r\n7759,1627,LATAM,fashion,online,27.24,7,0.040,none,2024-06-13\r\n7760,1212,LATAM,fashion,online,98.46,6,0.178,none,2024-11-09\r\n7761,1321,EMEA,home,retail,40.65,7,0.089,none,2024-05-06\r\n7762,2189,LATAM,electronics,mobile,48.11,1,0.172,none,2024-08-03\r\n7763,1050,AMER,home,retail,48.04,7,0.014,none,2024-10-19\r\n7764,1452,LATAM,sports,online,76.90,1,0.119,loyalty,2024-05-08\r\n7765,1414,APAC,electronics,online,45.16,2,0.151,none,2024-06-15\r\n7766,2416,LATAM,sports,online,79.46,2,0.143,coupon,2024-05-24\r\n7767,2186,LATAM,toys,online,60.56,8,0.107,cou
pon,2024-12-15\r\n7768,2344,LATAM,sports,retail,50.05,2,0.088,none,2024-08-24\r\n7769,1170,AMER,grocery,retail,44.59,2,0.235,coupon,2024-04-11\r\n7770,1208,AMER,grocery,online,28.94,1,0.118,none,2024-09-12\r\n7771,1494,AMER,grocery,online,29.45,1,0.195,coupon,2024-10-26\r\n7772,1319,EMEA,grocery,retail,45.65,6,0.000,none,2024-10-08\r\n7773,1446,AMER,grocery,retail,51.68,5,0.204,bundle,2024-04-02\r\n7774,1435,AMER,fashion,retail,76.75,3,0.199,coupon,2024-08-27\r\n7775,2162,EMEA,electronics,online,61.10,5,0.106,none,2024-02-09\r\n7776,2439,AMER,grocery,mobile,179.15,7,0.210,none,2024-06-06\r\n7777,1224,APAC,grocery,online,40.30,4,0.053,none,2024-10-12\r\n7778,2451,APAC,grocery,retail,54.49,2,0.187,none,2024-04-28\r\n7779,1166,AMER,toys,retail,63.25,4,0.030,none,2024-06-10\r\n7780,2093,LATAM,grocery,online,90.00,7,0.185,none,2024-10-15\r\n7781,1282,LATAM,fashion,online,86.67,5,0.235,bundle,2024-01-23\r\n7782,2458,EMEA,fashion,online,96.11,4,0.084,none,2024-10-28\r\n7783,1322,AMER,electronics,retail,61.23,4,0.142,none,2024-10-18\r\n7784,2321,APAC,electronics,mobile,78.05,2,0.130,none,2024-09-23\r\n7785,2180,AMER,grocery,online,60.76,7,0.166,none,2024-08-11\r\n7786,1722,EMEA,home,retail,68.75,1,0.097,none,2024-06-28\r\n7787,2004,LATAM,electronics,online,100.27,2,0.074,bundle,2024-07-04\r\n7788,1261,APAC,toys,partner,53.90,1,0.055,bundle,2024-03-22\r\n7789,2037,LATAM,fashion,online,44.66,4,0.221,none,2024-08-10\r\n7790,1273,AMER,home,online,38.73,6,0.247,bundle,2024-04-27\r\n7791,1285,EMEA,sports,retail,175.86,1,0.019,none,2024-02-28\r\n7792,1947,EMEA,sports,online,35.01,5,0.243,bundle,2024-09-09\r\n7793,2140,AMER,sports,retail,52.33,7,0.228,none,2024-06-13\r\n7794,1366,APAC,sports,online,42.23,3,0.215,none,2024-06-25\r\n7795,2367,AMER,fashion,retail,64.68,7,0.165,none,2024-03-23\r\n7796,2215,LATAM,electronics,online,62.29,7,0.102,none,2024-02-12\r\n7797,2000,APAC,toys,online,45.38,1,0.197,none,2024-05-01\r\n7798,1770,AMER,toys,online,61.38,6,0.180,none,2024-11-09\r\n7799
,1870,EMEA,toys,online,85.78,3,0.133,bundle,2024-10-18\r\n7800,1056,LATAM,sports,online,65.24,3,0.175,none,2024-06-23\r\n7801,1474,LATAM,toys,mobile,34.84,5,0.027,none,2024-04-08\r\n7802,2139,AMER,home,mobile,55.62,1,0.003,bundle,2024-08-18\r\n7803,1686,LATAM,electronics,online,85.06,7,0.108,coupon,2024-06-19\r\n7804,1703,AMER,electronics,online,38.06,1,0.166,loyalty,2024-05-19\r\n7805,2451,APAC,toys,online,22.04,6,0.018,none,2024-01-13\r\n7806,1605,APAC,home,online,28.83,1,0.248,coupon,2024-02-10\r\n7807,2392,EMEA,toys,online,36.85,1,0.132,loyalty,2024-11-24\r\n7808,1997,APAC,grocery,retail,153.00,5,0.139,none,2024-11-26\r\n7809,2205,AMER,fashion,online,35.51,2,0.110,none,2024-02-25\r\n7810,1662,LATAM,toys,online,87.32,5,0.092,none,2024-07-17\r\n7811,2490,AMER,toys,online,40.60,3,0.021,none,2024-11-25\r\n7812,1193,APAC,sports,partner,70.33,2,0.056,none,2024-11-23\r\n7813,1463,EMEA,fashion,mobile,72.49,8,0.046,none,2024-04-25\r\n7814,1749,LATAM,grocery,mobile,103.45,4,0.227,none,2024-12-25\r\n7815,1789,EMEA,electronics,retail,39.67,7,0.222,loyalty,2024-10-14\r\n7816,1473,LATAM,fashion,online,30.56,6,0.176,none,2024-12-19\r\n7817,1854,AMER,grocery,online,81.33,4,0.004,none,2024-05-23\r\n7818,1632,LATAM,grocery,online,59.91,3,0.174,none,2024-05-10\r\n7819,1629,LATAM,grocery,online,46.07,5,0.194,none,2024-02-25\r\n7820,2090,AMER,toys,mobile,173.60,4,0.015,none,2024-08-18\r\n7821,1727,APAC,sports,partner,35.80,2,0.058,none,2024-06-05\r\n7822,1597,APAC,home,online,103.11,8,0.024,none,2024-04-07\r\n7823,1442,EMEA,electronics,retail,61.77,7,0.064,none,2024-04-08\r\n7824,1340,LATAM,electronics,online,62.67,2,0.121,loyalty,2024-05-08\r\n7825,1787,APAC,home,online,24.78,4,0.159,none,2024-03-13\r\n7826,2466,APAC,toys,mobile,20.48,1,0.195,loyalty,2024-11-12\r\n7827,1439,LATAM,home,retail,101.59,3,0.018,none,2024-09-24\r\n7828,2000,APAC,fashion,online,83.90,4,0.109,coupon,2024-03-26\r\n7829,2419,LATAM,grocery,online,45.43,5,0.052,none,2024-04-06\r\n7830,1780,APAC,sports,online,5
3.98,3,0.052,none,2024-04-04\r\n7831,1168,APAC,electronics,partner,45.97,6,0.132,coupon,2024-06-04\r\n7832,1083,AMER,toys,online,95.04,7,0.105,bundle,2024-07-19\r\n7833,1170,AMER,electronics,retail,27.49,1,0.232,none,2024-08-22\r\n7834,1611,EMEA,home,online,61.05,2,0.006,bundle,2024-01-13\r\n7835,1082,EMEA,fashion,online,58.34,1,0.066,none,2024-01-17\r\n7836,1616,APAC,fashion,retail,76.63,4,0.172,coupon,2024-07-01\r\n7837,1882,AMER,fashion,online,74.14,6,0.035,none,2024-03-19\r\n7838,1293,AMER,home,retail,36.76,5,0.208,none,2024-08-07\r\n7839,1456,APAC,grocery,retail,51.09,6,0.049,none,2024-12-23\r\n7840,1954,APAC,sports,online,34.62,1,0.183,coupon,2024-03-10\r\n7841,2366,APAC,sports,online,56.92,8,0.142,loyalty,2024-01-03\r\n7842,1015,AMER,fashion,online,89.47,1,0.016,none,2024-10-18\r\n7843,1336,APAC,sports,online,103.91,4,0.006,none,2024-04-20\r\n7844,2100,APAC,grocery,retail,68.75,3,0.169,none,2024-06-01\r\n7845,1527,AMER,electronics,online,85.42,1,0.127,coupon,2024-10-09\r\n7846,1564,APAC,sports,mobile,62.61,5,0.201,none,2024-07-11\r\n7847,1072,LATAM,sports,mobile,134.02,7,0.178,coupon,2024-01-18\r\n7848,2473,EMEA,grocery,retail,129.89,2,0.149,none,2024-02-16\r\n7849,1245,APAC,toys,mobile,159.93,4,0.229,none,2024-06-17\r\n7850,2206,AMER,home,online,60.90,1,0.149,none,2024-03-08\r\n7851,1135,APAC,home,retail,30.95,7,0.021,none,2024-03-10\r\n7852,1242,LATAM,home,online,75.68,8,0.024,none,2024-03-07\r\n7853,1796,LATAM,home,retail,44.69,2,0.200,none,2024-08-10\r\n7854,1538,AMER,home,retail,44.40,6,0.034,none,2024-08-21\r\n7855,1335,APAC,fashion,online,34.89,3,0.040,none,2024-02-27\r\n7856,2035,LATAM,fashion,mobile,65.54,3,0.054,loyalty,2024-10-08\r\n7857,2449,LATAM,sports,retail,17.15,6,0.153,none,2024-09-04\r\n7858,2443,LATAM,home,retail,36.74,3,0.152,coupon,2024-04-13\r\n7859,1154,LATAM,grocery,online,45.25,8,0.124,none,2024-07-03\r\n7860,1870,EMEA,grocery,retail,56.93,2,0.147,coupon,2024-05-27\r\n7861,1741,AMER,grocery,online,72.00,6,0.107,none,2024-09-01\r\n786
2,1479,AMER,sports,online,22.01,7,0.203,none,2024-09-01\r\n7863,2328,EMEA,electronics,mobile,33.31,1,0.087,none,2024-07-07\r\n7864,1478,EMEA,home,online,69.44,5,0.104,none,2024-08-21\r\n7865,1386,AMER,grocery,online,81.56,8,0.100,loyalty,2024-07-05\r\n7866,2266,LATAM,fashion,retail,27.25,5,0.221,bundle,2024-08-16\r\n7867,1183,AMER,electronics,online,91.88,8,0.110,bundle,2024-05-02\r\n7868,2450,EMEA,electronics,retail,85.73,8,0.047,coupon,2024-07-18\r\n7869,2479,EMEA,sports,online,61.75,4,0.247,none,2024-02-15\r\n7870,2222,LATAM,sports,online,70.68,5,0.153,none,2024-01-23\r\n7871,1440,AMER,sports,retail,59.74,5,0.007,none,2024-10-15\r\n7872,1853,APAC,home,online,33.17,3,0.216,none,2024-10-05\r\n7873,2495,EMEA,grocery,online,45.11,8,0.126,none,2024-10-12\r\n7874,2420,EMEA,toys,online,31.47,2,0.166,none,2024-12-24\r\n7875,1555,AMER,electronics,online,40.14,3,0.156,none,2024-03-07\r\n7876,1647,LATAM,fashion,retail,32.41,2,0.129,coupon,2024-12-02\r\n7877,1752,APAC,grocery,retail,53.59,4,0.019,coupon,2024-01-20\r\n7878,1329,APAC,toys,online,67.34,3,0.035,none,2024-02-02\r\n7879,2117,EMEA,home,online,45.05,3,0.177,none,2024-05-15\r\n7880,2366,APAC,sports,retail,63.81,3,0.231,coupon,2024-09-25\r\n7881,1056,LATAM,grocery,online,64.87,4,0.045,coupon,2024-04-04\r\n7882,1012,LATAM,electronics,retail,72.84,5,0.195,bundle,2024-10-12\r\n7883,2449,LATAM,home,retail,33.35,5,0.151,none,2024-09-14\r\n7884,1100,AMER,home,online,33.90,7,0.216,none,2024-04-02\r\n7885,1594,LATAM,grocery,mobile,35.90,6,0.099,none,2024-06-12\r\n7886,1652,APAC,grocery,retail,72.90,8,0.014,none,2024-04-13\r\n7887,1124,AMER,grocery,mobile,83.41,6,0.005,loyalty,2024-10-14\r\n7888,1142,EMEA,electronics,retail,41.96,6,0.012,bundle,2024-11-27\r\n7889,2462,EMEA,toys,retail,41.70,5,0.204,none,2024-11-20\r\n7890,1225,APAC,fashion,mobile,59.53,4,0.084,coupon,2024-09-27\r\n7891,1571,EMEA,electronics,retail,41.94,4,0.143,none,2024-08-25\r\n7892,1025,EMEA,toys,online,64.59,8,0.064,none,2024-06-13\r\n7893,2173,LATAM,fashi
on,retail,52.52,3,0.148,none,2024-12-11\r\n7894,1967,EMEA,grocery,online,69.27,8,0.190,none,2024-09-28\r\n7895,1445,APAC,grocery,online,33.40,2,0.099,none,2024-06-10\r\n7896,1091,EMEA,fashion,online,59.10,7,0.226,loyalty,2024-10-10\r\n7897,2348,EMEA,fashion,online,70.76,4,0.033,loyalty,2024-06-27\r\n7898,1951,LATAM,home,online,27.37,5,0.182,none,2024-07-16\r\n7899,2191,AMER,sports,online,74.98,6,0.083,coupon,2024-11-15\r\n7900,1819,AMER,home,online,75.10,2,0.229,loyalty,2024-02-04\r\n7901,1384,LATAM,grocery,partner,99.70,2,0.218,none,2024-11-18\r\n7902,1685,AMER,home,online,41.65,4,0.216,coupon,2024-10-16\r\n7903,1513,APAC,grocery,retail,83.08,8,0.152,none,2024-12-20\r\n7904,2185,EMEA,home,retail,53.23,4,0.159,none,2024-03-21\r\n7905,1759,EMEA,electronics,retail,60.60,8,0.070,coupon,2024-04-25\r\n7906,1248,APAC,grocery,online,95.30,1,0.121,coupon,2024-09-23\r\n7907,2299,EMEA,electronics,partner,191.33,7,0.212,bundle,2024-08-08\r\n7908,1051,EMEA,grocery,online,19.46,8,0.063,coupon,2024-06-09\r\n7909,1443,EMEA,home,retail,37.71,7,0.035,none,2024-05-26\r\n7910,2353,AMER,grocery,retail,61.66,1,0.133,loyalty,2024-11-15\r\n7911,1527,AMER,home,online,67.01,5,0.042,loyalty,2024-03-27\r\n7912,1630,APAC,electronics,retail,50.42,2,0.048,none,2024-11-17\r\n7913,2288,AMER,grocery,retail,33.31,6,0.194,none,2024-02-02\r\n7914,1161,AMER,home,online,69.09,8,0.057,none,2024-03-19\r\n7915,1793,LATAM,home,retail,20.75,1,0.090,bundle,2024-06-02\r\n7916,1286,EMEA,fashion,retail,73.07,6,0.133,none,2024-04-25\r\n7917,1764,LATAM,home,mobile,17.92,3,0.158,none,2024-07-22\r\n7918,1401,LATAM,electronics,online,110.26,6,0.201,none,2024-03-01\r\n7919,1524,LATAM,grocery,retail,44.88,3,0.095,none,2024-09-02\r\n7920,2103,LATAM,grocery,retail,29.26,2,0.205,none,2024-03-08\r\n7921,1483,EMEA,grocery,retail,79.68,2,0.030,bundle,2024-05-09\r\n7922,1285,EMEA,fashion,retail,86.27,4,0.068,none,2024-09-22\r\n7923,2141,AMER,toys,online,68.10,1,0.103,none,2024-05-23\r\n7924,2106,LATAM,grocery,mobile,28.00,8,0
.027,coupon,2024-12-05\r\n7925,1829,EMEA,home,retail,37.59,3,0.165,bundle,2024-06-11\r\n7926,2024,AMER,fashion,retail,35.79,1,0.247,coupon,2024-04-08\r\n7927,1105,AMER,electronics,online,70.65,3,0.127,coupon,2024-09-23\r\n7928,1910,LATAM,grocery,mobile,81.92,7,0.068,none,2024-07-10\r\n7929,1115,AMER,grocery,retail,45.05,6,0.134,none,2024-08-02\r\n7930,1794,AMER,electronics,online,88.92,5,0.091,none,2024-09-06\r\n7931,1752,APAC,toys,retail,44.13,5,0.065,coupon,2024-10-19\r\n7932,1602,EMEA,home,online,45.25,6,0.159,none,2024-07-07\r\n7933,1193,APAC,grocery,retail,69.56,3,0.215,coupon,2024-12-06\r\n7934,2022,LATAM,electronics,retail,50.94,4,0.134,none,2024-07-08\r\n7935,2008,APAC,grocery,online,173.01,6,0.111,none,2024-08-09\r\n7936,1241,APAC,home,online,33.81,5,0.117,coupon,2024-04-28\r\n7937,1102,APAC,sports,partner,36.76,1,0.135,none,2024-06-09\r\n7938,1044,EMEA,sports,partner,27.19,6,0.171,loyalty,2024-01-07\r\n7939,1208,AMER,grocery,retail,71.89,2,0.223,bundle,2024-11-24\r\n7940,2085,AMER,home,retail,34.74,7,0.248,coupon,2024-10-21\r\n7941,1230,EMEA,sports,mobile,50.55,3,0.240,loyalty,2024-01-14\r\n7942,1230,EMEA,fashion,online,39.57,3,0.192,bundle,2024-07-24\r\n7943,1030,EMEA,electronics,retail,45.26,6,0.053,none,2024-12-24\r\n7944,1361,LATAM,grocery,retail,39.52,2,0.223,none,2024-04-05\r\n7945,1220,LATAM,fashion,mobile,72.37,8,0.136,coupon,2024-09-19\r\n7946,1109,APAC,home,online,81.87,2,0.043,coupon,2024-07-23\r\n7947,1821,LATAM,home,online,46.45,1,0.227,none,2024-11-07\r\n7948,1144,APAC,home,online,58.39,5,0.235,none,2024-05-19\r\n7949,1226,AMER,fashion,retail,144.20,5,0.115,bundle,2024-06-16\r\n7950,1234,AMER,fashion,partner,72.61,6,0.141,none,2024-09-02\r\n7951,1304,LATAM,home,online,87.98,3,0.181,none,2024-12-25\r\n7952,1885,EMEA,grocery,online,89.13,6,0.234,none,2024-10-05\r\n7953,2153,APAC,toys,online,70.81,8,0.223,coupon,2024-01-24\r\n7954,2355,EMEA,sports,retail,23.34,5,0.166,none,2024-04-04\r\n7955,1933,EMEA,electronics,online,46.49,5,0.186,coupon,2024
-07-25\r\n7956,1883,LATAM,sports,mobile,17.04,3,0.240,bundle,2024-05-11\r\n7957,1712,LATAM,grocery,retail,85.75,2,0.232,none,2024-05-23\r\n7958,1557,LATAM,home,online,78.08,1,0.126,coupon,2024-05-12\r\n7959,2394,EMEA,fashion,mobile,59.39,3,0.160,coupon,2024-11-16\r\n7960,1942,APAC,toys,online,77.07,6,0.073,none,2024-08-09\r\n7961,1963,AMER,electronics,retail,139.70,1,0.056,none,2024-02-21\r\n7962,1830,EMEA,home,online,66.60,8,0.204,none,2024-12-11\r\n7963,1738,LATAM,grocery,mobile,40.07,8,0.113,loyalty,2024-06-18\r\n7964,1164,EMEA,home,online,31.51,3,0.057,none,2024-04-27\r\n7965,1693,EMEA,electronics,online,81.40,5,0.201,coupon,2024-01-25\r\n7966,1630,APAC,fashion,online,92.46,5,0.241,coupon,2024-09-24\r\n7967,1837,LATAM,sports,online,45.15,3,0.115,none,2024-09-24\r\n7968,1277,AMER,toys,retail,69.36,5,0.056,coupon,2024-11-05\r\n7969,2112,LATAM,home,mobile,47.30,4,0.061,none,2024-01-14\r\n7970,1367,AMER,toys,partner,58.08,8,0.013,none,2024-04-14\r\n7971,2229,APAC,grocery,retail,41.05,2,0.192,coupon,2024-03-09\r\n7972,1893,APAC,grocery,retail,54.28,7,0.158,none,2024-07-08\r\n7973,2187,EMEA,electronics,mobile,57.57,2,0.190,none,2024-06-12\r\n7974,2345,LATAM,sports,retail,46.74,3,0.241,none,2024-04-27\r\n7975,2097,AMER,fashion,retail,50.47,7,0.155,none,2024-07-28\r\n7976,2246,AMER,toys,online,59.53,6,0.013,none,2024-03-23\r\n7977,1408,AMER,sports,online,44.14,8,0.199,none,2024-05-13\r\n7978,2424,LATAM,grocery,online,33.41,3,0.131,bundle,2024-01-01\r\n7979,1830,EMEA,home,online,30.91,7,0.082,none,2024-08-22\r\n7980,2368,AMER,grocery,retail,30.29,1,0.199,none,2024-12-13\r\n7981,2112,LATAM,home,retail,31.16,1,0.178,none,2024-02-24\r\n7982,1643,EMEA,toys,retail,44.85,6,0.151,coupon,2024-08-06\r\n7983,1495,LATAM,grocery,online,51.08,5,0.188,bundle,2024-05-09\r\n7984,1926,AMER,sports,partner,86.67,4,0.085,none,2024-07-08\r\n7985,1176,EMEA,fashion,partner,40.48,3,0.179,none,2024-06-23\r\n7986,1847,LATAM,electronics,mobile,56.18,1,0.199,none,2024-12-08\r\n7987,2225,EMEA,electr
onics,online,134.27,1,0.044,none,2024-12-17\r\n7988,1417,APAC,fashion,online,46.76,5,0.090,coupon,2024-01-25\r\n7989,1972,LATAM,fashion,online,58.04,6,0.234,bundle,2024-09-25\r\n7990,1985,AMER,toys,mobile,101.56,5,0.157,coupon,2024-01-16\r\n7991,2376,LATAM,electronics,online,45.61,7,0.031,none,2024-01-13\r\n7992,2307,LATAM,home,online,123.47,6,0.242,loyalty,2024-10-26\r\n7993,1988,AMER,fashion,online,41.22,4,0.104,none,2024-12-15\r\n7994,2085,AMER,fashion,retail,56.13,4,0.179,bundle,2024-02-03\r\n7995,1058,LATAM,home,online,103.66,3,0.054,loyalty,2024-11-12\r\n7996,1870,EMEA,electronics,online,49.92,7,0.077,none,2024-07-26\r\n7997,1606,AMER,fashion,retail,44.31,4,0.089,none,2024-01-04\r\n7998,1868,AMER,electronics,online,24.59,3,0.016,loyalty,2024-03-25\r\n7999,1378,APAC,fashion,retail,75.43,6,0.197,none,2024-02-21\r\n8000,2407,EMEA,home,online,34.46,7,0.046,none,2024-07-19\r\n8001,1800,APAC,grocery,online,130.24,7,0.139,none,2024-12-28\r\n8002,1878,EMEA,grocery,online,122.32,8,0.099,none,2024-01-17\r\n8003,1694,APAC,electronics,online,57.74,7,0.203,none,2024-01-08\r\n8004,1528,EMEA,electronics,online,66.59,1,0.065,none,2024-02-10\r\n8005,1575,APAC,home,online,92.43,1,0.087,none,2024-05-13\r\n8006,2016,LATAM,fashion,online,61.84,7,0.035,none,2024-03-04\r\n8007,1930,AMER,electronics,online,22.00,8,0.017,bundle,2024-06-14\r\n8008,1679,APAC,grocery,mobile,57.47,1,0.170,none,2024-12-20\r\n8009,1224,APAC,grocery,partner,155.20,6,0.026,none,2024-03-11\r\n8010,2478,AMER,grocery,mobile,36.87,7,0.172,bundle,2024-12-08\r\n8011,1295,EMEA,electronics,retail,113.75,2,0.046,none,2024-05-06\r\n8012,2044,APAC,grocery,online,34.80,6,0.127,none,2024-05-26\r\n8013,1485,APAC,electronics,online,17.82,3,0.031,none,2024-05-21\r\n8014,1742,AMER,grocery,mobile,25.16,8,0.180,none,2024-12-23\r\n8015,1596,EMEA,sports,online,71.56,5,0.110,none,2024-08-12\r\n8016,2408,EMEA,toys,mobile,131.16,1,0.098,bundle,2024-07-27\r\n8017,1587,LATAM,grocery,retail,53.25,7,0.166,none,2024-09-22\r\n8018,1620,LA
TAM,home,mobile,203.51,8,0.068,coupon,2024-05-21\r\n8019,1904,APAC,grocery,retail,102.38,5,0.131,none,2024-07-18\r\n8020,2029,APAC,grocery,retail,69.44,7,0.238,none,2024-06-06\r\n8021,1259,EMEA,electronics,online,31.42,7,0.049,loyalty,2024-10-21\r\n8022,1943,AMER,fashion,online,49.35,3,0.173,loyalty,2024-12-05\r\n8023,1993,APAC,sports,online,28.51,3,0.012,coupon,2024-09-20\r\n8024,1978,AMER,home,online,79.45,7,0.056,none,2024-02-13\r\n8025,2450,EMEA,toys,retail,43.37,5,0.176,none,2024-12-24\r\n8026,2209,AMER,toys,retail,55.10,2,0.146,none,2024-01-20\r\n8027,2304,LATAM,home,partner,21.24,4,0.008,none,2024-09-28\r\n8028,1344,EMEA,home,online,36.45,7,0.101,coupon,2024-11-18\r\n8029,1123,LATAM,home,retail,59.98,4,0.192,none,2024-12-05\r\n8030,1747,EMEA,toys,retail,25.60,8,0.195,none,2024-07-13\r\n8031,1122,AMER,sports,online,101.54,8,0.183,none,2024-05-22\r\n8032,2400,EMEA,sports,online,33.16,8,0.195,bundle,2024-07-23\r\n8033,2462,EMEA,home,online,76.50,8,0.017,coupon,2024-05-05\r\n8034,1982,EMEA,toys,online,70.18,2,0.244,coupon,2024-11-15\r\n8035,2199,LATAM,sports,retail,49.24,3,0.027,bundle,2024-01-14\r\n8036,2225,EMEA,home,online,110.91,8,0.189,none,2024-10-07\r\n8037,1656,LATAM,home,partner,48.45,1,0.216,none,2024-09-13\r\n8038,1004,LATAM,home,online,49.13,6,0.047,bundle,2024-07-15\r\n8039,2040,LATAM,home,mobile,38.60,1,0.239,loyalty,2024-05-02\r\n8040,1537,LATAM,home,retail,98.42,2,0.081,coupon,2024-06-03\r\n8041,1186,APAC,fashion,online,197.58,2,0.223,none,2024-07-16\r\n8042,1408,AMER,sports,retail,26.30,2,0.151,none,2024-09-24\r\n8043,1914,EMEA,home,online,193.96,1,0.107,coupon,2024-08-10\r\n8044,1254,APAC,home,retail,101.57,5,0.017,none,2024-06-05\r\n8045,2396,AMER,home,mobile,89.89,1,0.148,none,2024-04-20\r\n8046,2180,AMER,fashion,retail,88.70,4,0.234,none,2024-06-10\r\n8047,1842,LATAM,home,mobile,85.36,5,0.133,none,2024-09-24\r\n8048,1018,APAC,sports,online,51.09,7,0.076,none,2024-11-21\r\n8049,1617,AMER,toys,retail,21.34,3,0.021,coupon,2024-01-27\r\n8050,1559
,EMEA,fashion,mobile,45.25,8,0.107,none,2024-07-11\r\n8051,1087,AMER,grocery,retail,30.88,7,0.022,bundle,2024-03-07\r\n8052,1953,EMEA,home,retail,20.66,8,0.152,bundle,2024-12-07\r\n8053,1233,AMER,sports,retail,42.73,1,0.115,none,2024-08-06\r\n8054,1284,APAC,fashion,mobile,29.48,1,0.033,coupon,2024-09-22\r\n8055,1369,AMER,electronics,online,144.68,3,0.213,coupon,2024-11-02\r\n8056,1549,APAC,toys,online,53.88,7,0.089,none,2024-02-01\r\n8057,1018,APAC,toys,online,36.47,8,0.084,none,2024-02-13\r\n8058,2124,AMER,electronics,online,106.59,4,0.050,loyalty,2024-07-26\r\n8059,1420,APAC,grocery,retail,291.52,6,0.178,none,2024-01-08\r\n8060,1471,EMEA,grocery,retail,54.93,5,0.114,bundle,2024-11-16\r\n8061,2092,AMER,toys,retail,69.97,4,0.200,loyalty,2024-10-12\r\n8062,1458,APAC,fashion,online,65.17,5,0.173,none,2024-05-04\r\n8063,1514,LATAM,fashion,online,53.49,2,0.036,none,2024-11-17\r\n8064,2338,AMER,electronics,online,63.04,1,0.001,coupon,2024-08-03\r\n8065,2413,AMER,grocery,online,63.85,1,0.149,none,2024-12-20\r\n8066,1459,LATAM,sports,online,32.01,3,0.246,none,2024-11-25\r\n8067,1449,EMEA,fashion,retail,35.79,5,0.158,none,2024-02-14\r\n8068,2036,APAC,grocery,online,75.78,5,0.036,bundle,2024-05-10\r\n8069,1728,AMER,fashion,mobile,61.04,4,0.241,none,2024-08-09\r\n8070,1493,APAC,grocery,online,44.49,6,0.217,coupon,2024-12-08\r\n8071,2060,LATAM,home,online,113.99,6,0.244,coupon,2024-07-26\r\n8072,1611,EMEA,toys,retail,47.67,6,0.060,none,2024-04-16\r\n8073,1304,LATAM,grocery,online,48.37,7,0.203,bundle,2024-08-28\r\n8074,1074,LATAM,electronics,online,68.58,1,0.157,none,2024-11-01\r\n8075,1828,EMEA,home,partner,55.62,4,0.191,coupon,2024-09-03\r\n8076,2297,EMEA,toys,online,35.76,2,0.201,none,2024-07-05\r\n8077,1385,LATAM,home,retail,89.33,2,0.138,none,2024-03-16\r\n8078,1092,AMER,electronics,online,96.32,3,0.100,none,2024-07-13\r\n8079,2221,LATAM,grocery,retail,27.98,7,0.222,loyalty,2024-08-18\r\n8080,1057,LATAM,toys,retail,59.64,7,0.197,none,2024-08-03\r\n8081,1930,AMER,electroni
cs,online,45.74,8,0.060,bundle,2024-12-18\r\n8082,2102,APAC,fashion,retail,66.72,8,0.032,none,2024-09-23\r\n8083,1514,LATAM,electronics,retail,46.84,6,0.218,none,2024-05-13\r\n8084,1224,APAC,home,online,102.93,5,0.136,coupon,2024-12-04\r\n8085,2257,AMER,home,partner,64.66,6,0.161,bundle,2024-01-04\r\n8086,1921,LATAM,electronics,online,79.75,3,0.043,bundle,2024-02-03\r\n8087,1755,APAC,grocery,mobile,47.23,4,0.063,bundle,2024-06-24\r\n8088,2352,APAC,fashion,retail,47.09,7,0.191,none,2024-09-11\r\n8089,1638,EMEA,fashion,online,88.11,4,0.248,coupon,2024-10-01\r\n8090,2107,APAC,grocery,mobile,149.62,1,0.234,coupon,2024-06-13\r\n8091,1337,APAC,home,online,59.81,6,0.114,none,2024-09-13\r\n8092,1219,LATAM,electronics,mobile,38.36,2,0.186,none,2024-09-27\r\n8093,1757,EMEA,fashion,online,105.72,3,0.219,loyalty,2024-05-08\r\n8094,2088,EMEA,grocery,retail,46.25,4,0.054,loyalty,2024-12-12\r\n8095,2293,LATAM,toys,online,63.42,7,0.086,loyalty,2024-01-08\r\n8096,1937,APAC,fashion,online,55.62,3,0.095,bundle,2024-01-01\r\n8097,2089,EMEA,sports,online,54.96,5,0.134,none,2024-10-04\r\n8098,2306,AMER,electronics,online,86.50,8,0.058,none,2024-06-24\r\n8099,1886,LATAM,electronics,online,49.21,7,0.054,bundle,2024-02-24\r\n8100,1818,AMER,toys,online,95.04,6,0.051,coupon,2024-12-12\r\n8101,1915,LATAM,electronics,mobile,35.71,6,0.081,none,2024-04-27\r\n8102,1265,APAC,toys,online,86.96,6,0.151,none,2024-04-22\r\n8103,1111,APAC,fashion,online,63.59,7,0.005,none,2024-03-28\r\n8104,2339,AMER,home,online,75.24,3,0.205,bundle,2024-04-10\r\n8105,1919,EMEA,grocery,retail,37.78,4,0.156,none,2024-03-22\r\n8106,1674,LATAM,toys,retail,58.90,6,0.032,bundle,2024-06-02\r\n8107,2043,EMEA,grocery,retail,47.92,3,0.048,none,2024-02-04\r\n8108,2218,EMEA,electronics,online,37.12,7,0.047,none,2024-12-08\r\n8109,1598,EMEA,grocery,online,73.56,1,0.025,coupon,2024-02-14\r\n8110,2225,EMEA,home,mobile,44.25,8,0.134,none,2024-08-12\r\n8111,1721,EMEA,electronics,retail,70.40,8,0.016,none,2024-09-25\r\n8112,1759,EMEA,fa
shion,online,63.07,6,0.103,none,2024-11-16\r\n8113,1116,LATAM,home,online,30.31,3,0.218,none,2024-07-10\r\n8114,2485,AMER,electronics,partner,86.17,6,0.175,loyalty,2024-07-04\r\n8115,1286,EMEA,home,mobile,51.54,3,0.194,bundle,2024-02-16\r\n8116,1186,APAC,home,partner,58.06,6,0.054,coupon,2024-04-27\r\n8117,1894,APAC,electronics,retail,88.02,8,0.185,loyalty,2024-11-11\r\n8118,1108,EMEA,electronics,retail,25.27,8,0.215,loyalty,2024-08-21\r\n8119,1149,LATAM,fashion,online,71.86,8,0.030,none,2024-05-07\r\n8120,1942,APAC,fashion,online,140.39,3,0.104,bundle,2024-02-27\r\n8121,1197,LATAM,electronics,online,69.14,2,0.209,none,2024-04-14\r\n8122,1040,LATAM,home,retail,34.32,3,0.037,none,2024-11-06\r\n8123,2082,APAC,fashion,retail,20.46,3,0.063,none,2024-06-11\r\n8124,1655,LATAM,home,online,35.60,6,0.099,none,2024-09-04\r\n8125,1371,AMER,fashion,online,76.60,6,0.164,none,2024-06-05\r\n8126,1518,AMER,home,online,52.86,1,0.219,coupon,2024-01-18\r\n8127,1149,LATAM,sports,online,49.07,7,0.156,none,2024-09-21\r\n8128,1765,EMEA,sports,online,21.75,3,0.101,none,2024-11-04\r\n8129,1290,EMEA,sports,online,79.30,5,0.233,loyalty,2024-08-08\r\n8130,2048,LATAM,toys,retail,61.79,7,0.095,none,2024-10-22\r\n8131,2386,EMEA,home,online,75.33,8,0.094,none,2024-04-13\r\n8132,1065,AMER,electronics,online,154.45,2,0.019,coupon,2024-02-25\r\n8133,1098,APAC,sports,partner,38.17,6,0.060,bundle,2024-09-22\r\n8134,1193,APAC,grocery,online,51.99,1,0.170,none,2024-12-13\r\n8135,1187,AMER,fashion,retail,59.25,1,0.236,coupon,2024-09-18\r\n8136,1432,APAC,sports,mobile,74.95,6,0.201,none,2024-08-10\r\n8137,2320,LATAM,grocery,online,25.70,5,0.100,none,2024-04-09\r\n8138,2490,AMER,grocery,mobile,119.01,6,0.030,none,2024-01-27\r\n8139,1265,APAC,electronics,online,51.25,2,0.250,none,2024-01-23\r\n8140,1109,APAC,home,online,65.66,2,0.068,coupon,2024-08-24\r\n8141,2001,EMEA,grocery,online,73.36,8,0.026,none,2024-06-20\r\n8142,2357,EMEA,electronics,partner,41.78,4,0.041,coupon,2024-04-16\r\n8143,1213,EMEA,fashion,
retail,73.99,4,0.172,none,2024-01-17\r\n8144,2287,EMEA,home,retail,46.32,2,0.083,bundle,2024-12-14\r\n8145,2013,APAC,electronics,retail,41.68,3,0.137,coupon,2024-02-21\r\n8146,1691,LATAM,home,online,50.62,1,0.168,loyalty,2024-02-26\r\n8147,2283,AMER,electronics,retail,63.96,6,0.165,none,2024-07-17\r\n8148,1528,EMEA,home,online,91.65,5,0.018,none,2024-07-25\r\n8149,1824,LATAM,sports,mobile,47.81,2,0.080,bundle,2024-08-19\r\n8150,1485,APAC,grocery,retail,35.96,3,0.101,loyalty,2024-02-11\r\n8151,1351,APAC,electronics,retail,59.39,5,0.029,loyalty,2024-01-25\r\n8152,2034,LATAM,fashion,retail,39.81,7,0.190,none,2024-11-04\r\n8153,1402,EMEA,grocery,online,60.60,3,0.165,none,2024-07-25\r\n8154,2262,APAC,toys,online,46.67,5,0.086,none,2024-05-20\r\n8155,2003,LATAM,toys,mobile,52.24,8,0.119,bundle,2024-03-16\r\n8156,2317,LATAM,home,retail,104.85,3,0.033,none,2024-08-11\r\n8157,2309,AMER,grocery,online,26.17,5,0.159,none,2024-03-14\r\n8158,1026,APAC,sports,online,98.90,2,0.220,none,2024-05-14\r\n8159,1718,EMEA,electronics,online,39.38,6,0.071,bundle,2024-08-17\r\n8160,1697,APAC,home,retail,38.15,8,0.243,coupon,2024-06-09\r\n8161,1189,AMER,grocery,mobile,13.87,5,0.133,bundle,2024-11-11\r\n8162,2219,LATAM,home,retail,123.17,3,0.068,loyalty,2024-02-17\r\n8163,2337,AMER,fashion,mobile,88.54,7,0.089,none,2024-04-06\r\n8164,2359,LATAM,home,online,62.28,1,0.056,none,2024-07-19\r\n8165,1846,APAC,grocery,online,45.93,1,0.128,none,2024-10-12\r\n8166,2354,LATAM,electronics,partner,63.51,7,0.024,none,2024-02-15\r\n8167,1269,LATAM,grocery,online,110.14,5,0.207,none,2024-01-04\r\n8168,1208,AMER,sports,online,57.94,3,0.009,coupon,2024-04-23\r\n8169,2283,AMER,sports,retail,85.25,8,0.004,bundle,2024-06-12\r\n8170,2045,LATAM,fashion,online,32.92,3,0.028,bundle,2024-12-20\r\n8171,1550,APAC,grocery,online,68.11,4,0.205,none,2024-09-19\r\n8172,1306,LATAM,grocery,online,50.30,7,0.232,coupon,2024-11-10\r\n8173,1827,EMEA,electronics,online,47.08,7,0.005,none,2024-01-18\r\n8174,1347,APAC,sports,retail
,76.45,6,0.078,bundle,2024-12-11\r\n8175,1003,APAC,home,online,26.27,3,0.119,none,2024-04-12\r\n8176,1758,AMER,home,online,81.99,5,0.008,loyalty,2024-08-06\r\n8177,1528,EMEA,grocery,retail,131.56,2,0.022,coupon,2024-05-21\r\n8178,1845,AMER,electronics,retail,72.51,1,0.186,coupon,2024-04-07\r\n8179,1645,EMEA,fashion,retail,43.93,4,0.041,none,2024-07-16\r\n8180,2118,AMER,home,retail,56.23,1,0.245,coupon,2024-08-21\r\n8181,1595,AMER,grocery,online,59.95,8,0.045,none,2024-11-06\r\n8182,1466,AMER,sports,online,62.36,8,0.130,none,2024-02-04\r\n8183,1737,AMER,electronics,online,95.40,3,0.194,none,2024-04-07\r\n8184,2335,EMEA,grocery,online,109.99,6,0.089,coupon,2024-11-06\r\n8185,1850,APAC,grocery,online,54.23,8,0.188,bundle,2024-12-17\r\n8186,1823,EMEA,home,mobile,57.26,5,0.060,bundle,2024-09-01\r\n8187,2186,LATAM,grocery,retail,36.48,5,0.205,coupon,2024-08-18\r\n8188,1608,AMER,fashion,online,74.10,2,0.218,bundle,2024-12-18\r\n8189,1877,LATAM,fashion,retail,65.57,2,0.208,none,2024-02-21\r\n8190,1008,AMER,grocery,mobile,31.61,5,0.181,bundle,2024-12-03\r\n8191,2137,LATAM,electronics,online,89.83,4,0.021,none,2024-03-13\r\n8192,1291,EMEA,electronics,partner,63.89,5,0.017,none,2024-01-25\r\n8193,1896,EMEA,grocery,mobile,77.28,3,0.024,none,2024-10-03\r\n8194,1940,APAC,sports,mobile,67.97,6,0.023,none,2024-11-20\r\n8195,2119,AMER,toys,online,57.62,4,0.110,coupon,2024-06-04\r\n8196,1784,EMEA,grocery,mobile,63.86,1,0.232,bundle,2024-12-14\r\n8197,1399,AMER,home,online,22.52,4,0.028,loyalty,2024-08-27\r\n8198,2005,APAC,sports,retail,45.78,2,0.055,bundle,2024-12-05\r\n8199,1376,EMEA,grocery,retail,92.81,8,0.111,bundle,2024-04-19\r\n8200,2062,EMEA,toys,retail,72.03,3,0.125,none,2024-05-03\r\n8201,2242,AMER,grocery,retail,16.48,1,0.240,none,2024-04-09\r\n8202,1687,APAC,sports,retail,71.62,8,0.225,none,2024-09-24\r\n8203,1965,LATAM,fashion,online,33.34,2,0.096,none,2024-04-27\r\n8204,2209,AMER,home,mobile,73.39,4,0.169,bundle,2024-09-02\r\n8205,1430,EMEA,sports,retail,55.06,5,0.067,no
ne,2024-04-24\r\n8206,1163,AMER,home,retail,100.65,8,0.064,none,2024-06-01\r\n8207,1504,AMER,fashion,retail,49.83,6,0.227,none,2024-01-17\r\n8208,2386,EMEA,sports,online,42.89,3,0.130,coupon,2024-01-15\r\n8209,1026,APAC,fashion,online,112.51,8,0.098,none,2024-12-18\r\n8210,1597,APAC,electronics,online,47.63,7,0.082,none,2024-03-14\r\n8211,1368,EMEA,grocery,retail,66.18,7,0.204,none,2024-12-14\r\n8212,1154,LATAM,grocery,mobile,95.60,2,0.235,none,2024-04-04\r\n8213,1448,EMEA,electronics,online,248.83,3,0.144,none,2024-10-11\r\n8214,1541,APAC,home,online,30.75,2,0.136,bundle,2024-12-08\r\n8215,1967,EMEA,sports,mobile,155.78,8,0.139,coupon,2024-09-25\r\n8216,2422,APAC,grocery,partner,108.81,7,0.080,none,2024-08-17\r\n8217,1807,EMEA,electronics,online,69.46,5,0.157,none,2024-03-12\r\n8218,2219,LATAM,sports,online,69.61,3,0.026,coupon,2024-03-17\r\n8219,1083,AMER,grocery,online,45.82,6,0.061,coupon,2024-06-15\r\n8220,1779,APAC,grocery,partner,55.60,8,0.154,coupon,2024-03-20\r\n8221,1305,EMEA,electronics,online,129.90,1,0.189,none,2024-09-04\r\n8222,1170,AMER,home,retail,52.89,4,0.052,bundle,2024-03-21\r\n8223,1285,EMEA,toys,retail,80.12,6,0.044,none,2024-08-20\r\n8224,2330,EMEA,home,partner,22.55,2,0.033,bundle,2024-09-20\r\n8225,2462,EMEA,grocery,online,60.87,3,0.122,none,2024-03-17\r\n8226,2391,EMEA,fashion,retail,48.03,1,0.065,none,2024-03-24\r\n8227,1952,EMEA,electronics,online,59.05,2,0.175,none,2024-08-06\r\n8228,1514,LATAM,toys,retail,96.64,5,0.192,none,2024-11-07\r\n8229,1322,AMER,sports,partner,176.11,6,0.100,none,2024-03-01\r\n8230,1311,APAC,grocery,retail,63.73,3,0.057,loyalty,2024-06-21\r\n8231,1437,EMEA,home,retail,33.93,3,0.107,loyalty,2024-01-15\r\n8232,2346,LATAM,sports,retail,41.62,5,0.232,bundle,2024-07-07\r\n8233,1364,EMEA,home,retail,43.56,5,0.176,none,2024-10-18\r\n8234,2383,APAC,grocery,mobile,47.74,5,0.121,none,2024-12-14\r\n8235,1943,AMER,grocery,retail,24.18,5,0.054,none,2024-10-19\r\n8236,1036,EMEA,electronics,online,201.96,8,0.114,none,2024-10-2
3\r\n8237,1625,EMEA,grocery,online,24.73,4,0.017,none,2024-07-28\r\n8238,1394,LATAM,grocery,online,40.81,2,0.176,coupon,2024-10-14\r\n8239,1778,LATAM,electronics,retail,64.81,1,0.096,coupon,2024-04-13\r\n8240,1429,APAC,grocery,online,46.21,2,0.067,bundle,2024-10-02\r\n8241,1570,AMER,sports,online,57.99,6,0.101,none,2024-06-01\r\n8242,2479,EMEA,electronics,online,48.43,7,0.061,none,2024-01-24\r\n8243,1979,APAC,toys,retail,93.97,1,0.059,none,2024-10-10\r\n8244,2094,AMER,fashion,partner,109.64,2,0.230,none,2024-11-13\r\n8245,1539,LATAM,electronics,mobile,123.86,1,0.240,none,2024-01-28\r\n8246,2476,APAC,grocery,retail,76.38,1,0.125,none,2024-10-12\r\n8247,2393,LATAM,sports,retail,62.80,7,0.119,bundle,2024-08-27\r\n8248,2037,LATAM,grocery,online,73.35,3,0.021,none,2024-12-02\r\n8249,1658,AMER,home,online,57.11,4,0.241,coupon,2024-02-17\r\n8250,1419,APAC,sports,mobile,76.11,1,0.158,bundle,2024-03-27\r\n8251,2291,EMEA,sports,retail,29.91,8,0.062,coupon,2024-04-06\r\n8252,2246,AMER,fashion,retail,26.45,4,0.223,none,2024-03-21\r\n8253,1435,AMER,grocery,online,121.22,2,0.111,none,2024-12-14\r\n8254,1456,APAC,electronics,online,36.14,7,0.046,none,2024-07-14\r\n8255,1326,AMER,home,partner,40.02,8,0.009,none,2024-03-07\r\n8256,1812,EMEA,grocery,online,74.64,5,0.249,none,2024-05-16\r\n8257,2141,AMER,electronics,online,56.45,3,0.006,none,2024-11-09\r\n8258,1234,AMER,electronics,retail,23.43,8,0.173,none,2024-11-28\r\n8259,1645,EMEA,electronics,online,37.39,6,0.095,none,2024-01-16\r\n8260,1206,EMEA,fashion,online,100.20,8,0.061,coupon,2024-12-03\r\n8261,2496,EMEA,grocery,retail,47.50,5,0.025,loyalty,2024-01-17\r\n8262,1150,LATAM,sports,online,76.77,2,0.246,coupon,2024-10-05\r\n8263,1556,AMER,home,online,67.43,5,0.228,coupon,2024-12-28\r\n8264,1536,LATAM,grocery,retail,56.39,3,0.014,coupon,2024-10-26\r\n8265,1202,APAC,fashion,retail,86.34,2,0.011,none,2024-06-02\r\n8266,1526,EMEA,fashion,retail,58.91,7,0.111,none,2024-09-14\r\n8267,1347,APAC,sports,mobile,66.76,6,0.166,none,2024-08-
23\r\n8268,2203,APAC,home,online,47.85,2,0.117,bundle,2024-06-11\r\n8269,1074,LATAM,toys,mobile,77.65,3,0.139,none,2024-06-13\r\n8270,1761,EMEA,home,mobile,37.15,6,0.087,none,2024-08-25\r\n8271,2204,AMER,electronics,online,76.47,4,0.056,coupon,2024-02-15\r\n8272,1202,APAC,electronics,mobile,65.36,2,0.171,none,2024-07-15\r\n8273,2426,AMER,fashion,online,35.71,8,0.111,bundle,2024-09-12\r\n8274,1639,APAC,fashion,online,69.83,8,0.143,none,2024-01-15\r\n8275,2054,AMER,electronics,retail,50.43,1,0.026,coupon,2024-11-05\r\n8276,2382,LATAM,electronics,retail,47.70,4,0.203,none,2024-03-13\r\n8277,1382,LATAM,grocery,online,36.62,8,0.036,none,2024-06-19\r\n8278,1308,EMEA,grocery,online,33.17,5,0.147,none,2024-11-21\r\n8279,1446,AMER,electronics,online,73.47,8,0.136,none,2024-10-19\r\n8280,2260,EMEA,electronics,retail,97.12,3,0.028,none,2024-12-05\r\n8281,2426,AMER,grocery,retail,75.05,3,0.182,none,2024-06-28\r\n8282,1575,APAC,home,retail,60.82,5,0.054,coupon,2024-10-20\r\n8283,2087,LATAM,fashion,retail,61.92,8,0.082,none,2024-01-24\r\n8284,2069,AMER,grocery,mobile,37.78,8,0.121,none,2024-02-11\r\n8285,1719,LATAM,electronics,online,31.10,2,0.183,none,2024-12-23\r\n8286,1432,APAC,home,online,34.44,3,0.173,loyalty,2024-12-20\r\n8287,1875,EMEA,sports,mobile,69.26,4,0.048,none,2024-11-25\r\n8288,1714,APAC,fashion,online,79.83,4,0.027,bundle,2024-03-23\r\n8289,1595,AMER,electronics,online,13.47,2,0.086,loyalty,2024-08-06\r\n8290,2410,EMEA,grocery,online,55.68,3,0.182,loyalty,2024-05-16\r\n8291,1210,LATAM,sports,partner,73.37,3,0.177,coupon,2024-04-19\r\n8292,2307,LATAM,home,online,31.45,4,0.241,loyalty,2024-01-17\r\n8293,2433,APAC,grocery,partner,28.07,1,0.210,coupon,2024-05-27\r\n8294,1837,LATAM,grocery,retail,44.67,3,0.069,coupon,2024-08-20\r\n8295,1605,APAC,electronics,mobile,51.45,7,0.009,loyalty,2024-07-28\r\n8296,1213,EMEA,home,online,77.21,3,0.244,coupon,2024-06-24\r\n8297,1204,AMER,grocery,online,84.25,2,0.152,loyalty,2024-07-07\r\n8298,1088,LATAM,electronics,retail,58.37,6,
0.211,none,2024-02-06\r\n8299,1803,LATAM,grocery,retail,41.20,3,0.201,loyalty,2024-01-28\r\n8300,2150,APAC,sports,retail,52.58,4,0.010,none,2024-12-28\r\n8301,1716,LATAM,grocery,mobile,72.28,7,0.037,none,2024-04-17\r\n8302,1273,AMER,electronics,retail,56.65,6,0.210,coupon,2024-11-16\r\n8303,1565,AMER,electronics,online,57.07,8,0.082,loyalty,2024-11-26\r\n8304,1503,APAC,electronics,online,90.64,7,0.038,coupon,2024-05-19\r\n8305,2344,LATAM,home,mobile,76.36,4,0.167,none,2024-04-26\r\n8306,1933,EMEA,fashion,retail,41.34,1,0.206,coupon,2024-01-20\r\n8307,1879,EMEA,sports,retail,96.09,8,0.019,none,2024-06-20\r\n8308,1614,EMEA,toys,online,42.49,3,0.249,none,2024-05-19\r\n8309,1715,AMER,sports,retail,199.05,7,0.143,bundle,2024-09-02\r\n8310,1512,APAC,electronics,retail,65.74,5,0.042,coupon,2024-12-01\r\n8311,1210,LATAM,electronics,online,71.10,3,0.121,coupon,2024-11-15\r\n8312,1764,LATAM,electronics,mobile,42.62,6,0.133,none,2024-03-14\r\n8313,1908,AMER,grocery,retail,95.57,1,0.102,loyalty,2024-11-16\r\n8314,2015,APAC,toys,online,81.01,7,0.178,none,2024-11-14\r\n8315,1474,LATAM,sports,online,75.28,4,0.053,none,2024-04-27\r\n8316,2499,LATAM,fashion,online,97.61,7,0.057,coupon,2024-07-08\r\n8317,1738,LATAM,home,partner,81.37,2,0.174,bundle,2024-10-15\r\n8318,1547,AMER,grocery,online,56.65,7,0.082,coupon,2024-09-18\r\n8319,1483,EMEA,grocery,online,64.73,6,0.088,none,2024-05-18\r\n8320,2050,APAC,toys,mobile,41.28,2,0.042,none,2024-06-20\r\n8321,1855,APAC,fashion,retail,69.19,2,0.057,none,2024-12-26\r\n8322,2380,AMER,grocery,retail,46.64,5,0.053,none,2024-03-12\r\n8323,1397,LATAM,home,retail,70.83,6,0.094,none,2024-12-01\r\n8324,1129,LATAM,home,retail,37.12,4,0.080,coupon,2024-09-21\r\n8325,1854,AMER,sports,online,36.91,1,0.012,none,2024-03-06\r\n8326,1937,APAC,sports,retail,27.87,4,0.174,none,2024-02-20\r\n8327,1025,EMEA,electronics,mobile,19.88,3,0.204,none,2024-05-05\r\n8328,1261,APAC,grocery,online,50.58,5,0.195,none,2024-10-12\r\n8329,2104,EMEA,grocery,online,38.87,5,0.241
,coupon,2024-01-16\r\n8330,2334,LATAM,fashion,online,58.85,2,0.140,coupon,2024-02-24\r\n8331,1602,EMEA,home,retail,28.23,5,0.188,none,2024-11-21\r\n8332,1261,APAC,grocery,mobile,41.95,2,0.189,none,2024-11-20\r\n8333,1093,APAC,electronics,retail,129.38,4,0.092,none,2024-03-15\r\n8334,1710,APAC,home,retail,67.09,2,0.205,none,2024-02-11\r\n8335,1312,EMEA,grocery,mobile,59.65,2,0.019,bundle,2024-01-16\r\n8336,1027,APAC,grocery,partner,37.16,4,0.197,none,2024-03-02\r\n8337,1566,EMEA,home,mobile,120.69,3,0.205,bundle,2024-08-18\r\n8338,2028,APAC,electronics,retail,55.71,3,0.182,coupon,2024-10-16\r\n8339,1208,AMER,grocery,mobile,61.18,7,0.184,none,2024-05-06\r\n8340,1617,AMER,fashion,retail,79.91,1,0.199,none,2024-02-18\r\n8341,2094,AMER,home,online,33.20,8,0.236,coupon,2024-08-07\r\n8342,2100,APAC,grocery,retail,62.69,8,0.205,none,2024-07-13\r\n8343,1541,APAC,sports,online,24.63,8,0.236,none,2024-04-04\r\n8344,1992,LATAM,electronics,online,31.63,1,0.201,none,2024-02-24\r\n8345,1257,APAC,electronics,retail,62.39,6,0.068,none,2024-07-06\r\n8346,1606,AMER,electronics,retail,56.09,2,0.030,none,2024-06-11\r\n8347,2421,AMER,fashion,online,62.15,8,0.161,coupon,2024-09-21\r\n8348,1853,APAC,toys,retail,85.77,8,0.017,none,2024-06-10\r\n8349,1837,LATAM,home,mobile,61.39,2,0.049,coupon,2024-01-11\r\n8350,1365,LATAM,home,retail,30.92,6,0.152,loyalty,2024-01-16\r\n8351,1977,APAC,electronics,online,27.47,4,0.022,none,2024-07-02\r\n8352,2477,APAC,toys,partner,62.79,2,0.114,none,2024-09-22\r\n8353,2128,EMEA,sports,retail,82.36,7,0.232,none,2024-03-28\r\n8354,2265,APAC,fashion,online,57.89,6,0.110,coupon,2024-05-04\r\n8355,2190,LATAM,fashion,retail,38.18,8,0.177,coupon,2024-06-23\r\n8356,1099,LATAM,electronics,online,112.89,8,0.083,none,2024-07-02\r\n8357,1954,APAC,toys,retail,88.02,7,0.094,coupon,2024-07-19\r\n8358,2454,LATAM,fashion,online,23.74,3,0.158,none,2024-02-28\r\n8359,1981,EMEA,grocery,online,95.24,5,0.048,none,2024-01-26\r\n8360,2450,EMEA,electronics,retail,46.01,7,0.089,bundle
,2024-03-22\r\n8361,1104,APAC,electronics,retail,46.71,1,0.201,coupon,2024-05-19\r\n8362,1933,EMEA,grocery,mobile,95.27,7,0.222,none,2024-02-27\r\n8363,1782,LATAM,electronics,partner,44.82,7,0.103,bundle,2024-04-21\r\n8364,1311,APAC,home,mobile,69.67,8,0.059,none,2024-12-14\r\n8365,1944,AMER,electronics,retail,44.57,2,0.012,none,2024-02-27\r\n8366,2165,AMER,fashion,retail,109.47,3,0.042,none,2024-02-18\r\n8367,1649,APAC,fashion,online,236.98,5,0.092,none,2024-04-11\r\n8368,1293,AMER,fashion,online,59.41,7,0.123,none,2024-02-19\r\n8369,1787,APAC,sports,online,30.05,8,0.054,none,2024-02-24\r\n8370,2296,AMER,home,online,41.96,8,0.173,none,2024-12-08\r\n8371,1742,AMER,fashion,online,79.38,6,0.016,none,2024-02-10\r\n8372,1525,APAC,grocery,partner,53.93,3,0.116,none,2024-05-21\r\n8373,2053,AMER,electronics,online,95.66,1,0.141,none,2024-04-23\r\n8374,1973,EMEA,grocery,online,58.57,5,0.121,none,2024-03-25\r\n8375,1091,EMEA,home,retail,33.22,5,0.184,loyalty,2024-03-14\r\n8376,1724,LATAM,electronics,online,112.96,5,0.027,bundle,2024-12-27\r\n8377,1441,LATAM,toys,partner,165.31,2,0.134,bundle,2024-05-28\r\n8378,1913,LATAM,grocery,online,15.13,3,0.214,none,2024-04-18\r\n8379,2496,EMEA,sports,online,80.69,3,0.018,coupon,2024-09-13\r\n8380,1630,APAC,grocery,retail,104.68,1,0.218,coupon,2024-08-18\r\n8381,1430,EMEA,fashion,retail,39.33,8,0.076,bundle,2024-01-16\r\n8382,2455,AMER,home,partner,37.34,7,0.162,loyalty,2024-12-01\r\n8383,1531,EMEA,fashion,mobile,32.50,5,0.079,none,2024-10-25\r\n8384,2475,AMER,home,online,208.16,8,0.163,bundle,2024-04-16\r\n8385,1386,AMER,home,retail,59.99,4,0.112,coupon,2024-01-23\r\n8386,2407,EMEA,fashion,retail,28.62,5,0.115,coupon,2024-02-28\r\n8387,1900,APAC,grocery,online,116.93,7,0.099,none,2024-07-08\r\n8388,1714,APAC,home,online,49.77,7,0.097,none,2024-11-27\r\n8389,1845,AMER,grocery,online,55.71,4,0.248,none,2024-05-13\r\n8390,1640,APAC,electronics,retail,114.52,7,0.209,loyalty,2024-06-11\r\n8391,1687,APAC,grocery,mobile,95.96,3,0.110,bundle,2
024-09-16\r\n8392,1059,AMER,fashion,online,74.74,3,0.151,none,2024-11-21\r\n8393,2266,LATAM,electronics,partner,113.84,7,0.051,coupon,2024-09-09\r\n8394,1269,LATAM,sports,retail,61.41,5,0.025,bundle,2024-04-01\r\n8395,1279,EMEA,fashion,mobile,54.18,1,0.057,none,2024-08-28\r\n8396,1638,EMEA,electronics,retail,144.67,4,0.014,bundle,2024-10-13\r\n8397,1757,EMEA,home,retail,24.07,3,0.150,coupon,2024-01-13\r\n8398,1569,APAC,sports,retail,104.09,4,0.182,none,2024-08-11\r\n8399,1107,APAC,sports,retail,37.19,8,0.233,bundle,2024-09-20\r\n8400,2405,AMER,home,online,73.13,6,0.038,bundle,2024-04-13\r\n8401,1074,LATAM,grocery,mobile,51.04,5,0.134,none,2024-07-06\r\n8402,2256,AMER,grocery,online,114.05,8,0.190,none,2024-08-21\r\n8403,1238,AMER,electronics,online,43.41,4,0.196,coupon,2024-09-18\r\n8404,1145,AMER,electronics,retail,72.99,8,0.167,none,2024-05-19\r\n8405,2343,EMEA,grocery,retail,25.77,1,0.222,none,2024-04-17\r\n8406,2085,AMER,fashion,online,68.03,4,0.206,coupon,2024-10-15\r\n8407,1286,EMEA,grocery,partner,215.70,8,0.067,none,2024-05-20\r\n8408,1666,LATAM,electronics,online,54.15,4,0.136,bundle,2024-08-13\r\n8409,1184,AMER,sports,retail,66.71,2,0.086,none,2024-08-06\r\n8410,1532,APAC,grocery,retail,56.03,1,0.051,coupon,2024-05-13\r\n8411,1193,APAC,fashion,retail,35.75,4,0.055,none,2024-05-28\r\n8412,1277,AMER,grocery,online,30.74,6,0.127,none,2024-06-16\r\n8413,1416,EMEA,electronics,online,89.46,5,0.042,none,2024-11-25\r\n8414,1633,EMEA,fashion,retail,83.43,4,0.118,loyalty,2024-03-09\r\n8415,2263,AMER,electronics,retail,211.73,4,0.159,none,2024-09-02\r\n8416,2216,AMER,electronics,online,107.78,2,0.009,none,2024-12-13\r\n8417,1908,AMER,home,online,26.94,7,0.074,none,2024-01-09\r\n8418,1143,LATAM,electronics,retail,53.04,6,0.006,coupon,2024-11-10\r\n8419,1064,AMER,home,retail,110.56,4,0.248,none,2024-11-21\r\n8420,1348,AMER,grocery,mobile,95.84,3,0.176,none,2024-08-11\r\n8421,1300,EMEA,grocery,retail,63.04,2,0.132,coupon,2024-12-18\r\n8422,1090,AMER,grocery,online,98.77
,2,0.002,none,2024-06-26\r\n8423,1736,AMER,electronics,online,61.49,6,0.182,none,2024-09-11\r\n8424,1212,LATAM,electronics,mobile,106.07,2,0.070,coupon,2024-03-12\r\n8425,2085,AMER,home,online,68.65,4,0.028,coupon,2024-02-07\r\n8426,1419,APAC,home,online,17.25,8,0.090,none,2024-07-14\r\n8427,1586,LATAM,home,online,78.21,7,0.011,none,2024-02-26\r\n8428,2499,LATAM,sports,online,54.45,8,0.129,coupon,2024-11-16\r\n8429,1728,AMER,sports,retail,76.97,6,0.067,none,2024-06-02\r\n8430,2209,AMER,fashion,retail,31.31,7,0.157,none,2024-06-13\r\n8431,1133,EMEA,electronics,online,102.29,4,0.026,loyalty,2024-05-14\r\n8432,1792,AMER,electronics,online,70.25,3,0.178,none,2024-09-23\r\n8433,2115,APAC,electronics,mobile,84.74,4,0.161,coupon,2024-10-01\r\n8434,1163,AMER,fashion,retail,54.19,1,0.176,none,2024-07-24\r\n8435,2149,EMEA,grocery,online,397.53,7,0.167,coupon,2024-12-14\r\n8436,2396,AMER,toys,online,62.89,1,0.221,loyalty,2024-03-15\r\n8437,1882,AMER,electronics,mobile,113.74,6,0.124,none,2024-01-17\r\n8438,1839,APAC,sports,retail,35.56,4,0.152,none,2024-10-14\r\n8439,1196,APAC,grocery,partner,51.84,2,0.245,none,2024-07-02\r\n8440,1601,APAC,home,online,26.87,6,0.045,none,2024-09-12\r\n8441,1891,APAC,grocery,retail,79.24,4,0.212,coupon,2024-07-02\r\n8442,2015,APAC,grocery,online,78.96,6,0.116,none,2024-04-27\r\n8443,1537,LATAM,electronics,partner,93.97,2,0.211,none,2024-10-17\r\n8444,2409,APAC,home,online,62.26,6,0.108,none,2024-10-24\r\n8445,1556,AMER,grocery,mobile,60.38,4,0.135,none,2024-11-11\r\n8446,1313,EMEA,fashion,online,48.44,7,0.146,none,2024-09-26\r\n8447,2164,AMER,grocery,partner,51.41,5,0.098,none,2024-02-28\r\n8448,1295,EMEA,electronics,online,22.61,6,0.128,none,2024-04-19\r\n8449,2179,LATAM,grocery,online,39.52,2,0.182,none,2024-02-24\r\n8450,1450,EMEA,electronics,online,56.85,6,0.051,coupon,2024-12-09\r\n8451,1238,AMER,fashion,retail,54.74,3,0.214,coupon,2024-04-13\r\n8452,1849,EMEA,grocery,retail,45.92,7,0.075,coupon,2024-04-24\r\n8453,1907,EMEA,sports,online,87
.79,4,0.067,coupon,2024-11-19\r\n8454,2114,AMER,grocery,online,59.31,1,0.150,none,2024-11-11\r\n8455,1864,EMEA,sports,online,147.13,3,0.195,none,2024-10-11\r\n8456,1997,APAC,grocery,mobile,48.25,5,0.092,coupon,2024-06-21\r\n8457,2091,LATAM,home,mobile,89.89,3,0.238,coupon,2024-01-12\r\n8458,2007,LATAM,toys,online,89.23,8,0.174,none,2024-07-27\r\n8459,1456,APAC,sports,online,46.60,5,0.188,none,2024-02-05\r\n8460,1077,AMER,sports,online,50.22,8,0.083,none,2024-08-17\r\n8461,1226,AMER,toys,mobile,65.30,7,0.073,bundle,2024-12-04\r\n8462,1804,AMER,sports,online,120.80,1,0.164,bundle,2024-04-01\r\n8463,1568,AMER,fashion,mobile,48.14,3,0.019,none,2024-08-20\r\n8464,2074,AMER,electronics,online,107.75,2,0.196,coupon,2024-08-19\r\n8465,1695,LATAM,toys,online,40.63,1,0.007,none,2024-03-04\r\n8466,1896,EMEA,grocery,mobile,47.36,1,0.016,none,2024-04-07\r\n8467,1402,EMEA,grocery,retail,35.61,2,0.049,none,2024-08-22\r\n8468,2379,AMER,toys,mobile,105.45,8,0.186,none,2024-06-03\r\n8469,1926,AMER,sports,retail,140.28,1,0.241,none,2024-07-13\r\n8470,2241,APAC,fashion,retail,61.16,4,0.049,bundle,2024-09-03\r\n8471,1364,EMEA,electronics,retail,58.74,4,0.016,none,2024-05-19\r\n8472,1253,AMER,electronics,online,45.19,8,0.088,bundle,2024-02-23\r\n8473,1076,LATAM,grocery,mobile,95.62,1,0.076,bundle,2024-05-01\r\n8474,1755,APAC,electronics,online,56.37,7,0.213,none,2024-06-13\r\n8475,1942,APAC,home,retail,37.70,3,0.111,loyalty,2024-05-09\r\n8476,1422,LATAM,sports,retail,29.50,6,0.037,coupon,2024-08-08\r\n8477,1591,APAC,toys,online,54.06,4,0.200,none,2024-08-14\r\n8478,2286,AMER,grocery,retail,34.15,4,0.247,coupon,2024-05-18\r\n8479,1309,EMEA,grocery,retail,41.21,6,0.030,loyalty,2024-05-25\r\n8480,1873,EMEA,home,online,26.17,3,0.015,coupon,2024-03-12\r\n8481,1447,LATAM,grocery,online,47.74,5,0.152,none,2024-02-19\r\n8482,1917,LATAM,electronics,partner,76.12,6,0.241,none,2024-07-20\r\n8483,2090,AMER,electronics,retail,43.74,8,0.055,none,2024-02-14\r\n8484,1824,LATAM,grocery,online,59.78,3,0.0
10,coupon,2024-06-15\r\n8485,1459,LATAM,sports,retail,25.17,6,0.125,none,2024-07-05\r\n8486,2327,EMEA,grocery,mobile,119.20,6,0.103,none,2024-05-01\r\n8487,2380,AMER,toys,online,39.33,8,0.117,none,2024-09-05\r\n8488,1077,AMER,fashion,mobile,48.91,8,0.148,none,2024-01-21\r\n8489,2281,AMER,fashion,retail,21.26,3,0.042,coupon,2024-11-12\r\n8490,1159,LATAM,home,mobile,23.85,8,0.196,none,2024-02-19\r\n8491,2316,EMEA,home,online,56.24,4,0.097,none,2024-09-10\r\n8492,1935,EMEA,electronics,mobile,28.55,1,0.165,none,2024-06-23\r\n8493,2049,LATAM,toys,retail,25.76,2,0.090,loyalty,2024-09-12\r\n8494,2167,APAC,fashion,partner,65.48,5,0.041,none,2024-03-24\r\n8495,1732,LATAM,fashion,online,28.82,4,0.141,none,2024-08-12\r\n8496,1162,AMER,electronics,partner,86.11,4,0.080,coupon,2024-09-01\r\n8497,1358,APAC,sports,retail,190.44,7,0.217,coupon,2024-12-01\r\n8498,1203,AMER,grocery,retail,93.48,7,0.160,bundle,2024-03-04\r\n8499,1746,LATAM,electronics,online,69.18,1,0.081,coupon,2024-12-08\r\n8500,1977,APAC,electronics,online,29.92,5,0.208,none,2024-08-02\r\n8501,2279,LATAM,grocery,online,34.44,1,0.145,none,2024-12-09\r\n8502,1837,LATAM,fashion,mobile,70.54,5,0.217,loyalty,2024-11-06\r\n8503,1752,APAC,fashion,online,48.26,4,0.209,coupon,2024-12-09\r\n8504,1815,APAC,fashion,retail,39.12,8,0.161,none,2024-11-24\r\n8505,1922,EMEA,grocery,online,33.76,6,0.130,bundle,2024-12-22\r\n8506,1398,APAC,electronics,mobile,49.87,2,0.076,coupon,2024-03-12\r\n8507,1994,LATAM,grocery,online,21.27,2,0.244,none,2024-03-13\r\n8508,2306,AMER,home,partner,128.18,2,0.064,coupon,2024-05-28\r\n8509,2448,APAC,home,online,38.99,6,0.156,none,2024-08-24\r\n8510,1080,LATAM,toys,retail,41.22,5,0.098,none,2024-04-21\r\n8511,1604,EMEA,home,retail,40.64,7,0.193,coupon,2024-11-25\r\n8512,1924,AMER,sports,retail,53.11,7,0.127,none,2024-01-02\r\n8513,1434,EMEA,home,partner,42.83,6,0.181,bundle,2024-08-02\r\n8514,2163,EMEA,sports,online,42.56,5,0.222,coupon,2024-10-17\r\n8515,1717,AMER,grocery,online,84.08,4,0.192,none,20
24-04-28\r\n8516,2063,APAC,grocery,partner,24.50,5,0.069,none,2024-01-19\r\n8517,1935,EMEA,fashion,mobile,24.47,6,0.117,none,2024-11-15\r\n8518,2225,EMEA,home,online,41.21,4,0.233,none,2024-01-21\r\n8519,1261,APAC,fashion,online,20.00,5,0.066,loyalty,2024-09-28\r\n8520,1645,EMEA,grocery,online,71.70,5,0.095,bundle,2024-10-02\r\n8521,2467,AMER,fashion,retail,47.14,2,0.070,none,2024-03-22\r\n8522,1651,LATAM,grocery,mobile,47.28,2,0.250,none,2024-04-26\r\n8523,1145,AMER,sports,online,27.91,5,0.246,coupon,2024-04-19\r\n8524,2163,EMEA,grocery,online,57.92,1,0.087,coupon,2024-05-27\r\n8525,1734,AMER,grocery,online,152.30,8,0.073,none,2024-04-18\r\n8526,1479,AMER,electronics,partner,134.51,7,0.153,none,2024-04-19\r\n8527,1954,APAC,home,retail,57.93,6,0.093,coupon,2024-06-21\r\n8528,1263,AMER,fashion,retail,93.76,6,0.042,none,2024-12-08\r\n8529,1058,LATAM,home,mobile,54.92,4,0.139,loyalty,2024-12-20\r\n8530,2046,APAC,home,online,58.39,7,0.024,none,2024-03-18\r\n8531,1034,EMEA,grocery,partner,43.00,8,0.133,none,2024-11-17\r\n8532,1441,LATAM,grocery,retail,109.07,7,0.136,coupon,2024-03-15\r\n8533,2403,LATAM,home,online,42.86,6,0.134,coupon,2024-07-10\r\n8534,2483,LATAM,home,retail,48.20,2,0.122,none,2024-04-07\r\n8535,1186,APAC,grocery,retail,76.86,4,0.202,none,2024-02-06\r\n8536,1965,LATAM,electronics,retail,60.94,7,0.066,none,2024-02-19\r\n8537,2043,EMEA,fashion,retail,48.01,8,0.031,coupon,2024-08-19\r\n8538,2135,EMEA,electronics,online,36.01,2,0.019,none,2024-11-09\r\n8539,1162,AMER,electronics,online,43.77,2,0.162,coupon,2024-09-08\r\n8540,1025,EMEA,toys,online,84.38,4,0.006,none,2024-03-21\r\n8541,1630,APAC,electronics,retail,85.21,2,0.108,bundle,2024-04-13\r\n8542,2152,EMEA,electronics,retail,81.87,7,0.222,coupon,2024-02-28\r\n8543,1637,APAC,fashion,retail,122.09,4,0.075,none,2024-10-18\r\n8544,1198,AMER,electronics,online,165.67,3,0.136,none,2024-11-21\r\n8545,1635,APAC,toys,online,26.73,4,0.208,none,2024-07-23\r\n8546,2260,EMEA,sports,retail,75.99,1,0.088,none,2024-09
-18\r\n8547,2112,LATAM,fashion,online,30.89,1,0.038,loyalty,2024-05-20\r\n8548,2292,EMEA,fashion,retail,45.07,7,0.127,none,2024-12-12\r\n8549,2453,AMER,electronics,partner,35.90,8,0.184,coupon,2024-05-07\r\n8550,1905,APAC,grocery,retail,45.96,7,0.125,none,2024-08-19\r\n8551,2014,EMEA,grocery,online,44.76,3,0.237,none,2024-04-22\r\n8552,1798,AMER,grocery,partner,43.04,7,0.128,none,2024-04-25\r\n8553,2128,EMEA,toys,retail,79.66,5,0.022,none,2024-11-14\r\n8554,1401,LATAM,fashion,mobile,39.66,7,0.126,coupon,2024-11-22\r\n8555,2390,AMER,electronics,online,51.05,4,0.166,none,2024-03-27\r\n8556,1126,LATAM,fashion,partner,53.64,1,0.090,coupon,2024-08-01\r\n8557,1676,LATAM,sports,online,28.20,2,0.250,bundle,2024-11-17\r\n8558,1684,EMEA,electronics,retail,48.03,6,0.102,coupon,2024-07-26\r\n8559,2125,LATAM,electronics,retail,50.77,2,0.224,coupon,2024-03-18\r\n8560,1725,APAC,home,online,23.03,8,0.121,loyalty,2024-10-13\r\n8561,2110,LATAM,grocery,online,46.47,4,0.188,none,2024-02-25\r\n8562,1614,EMEA,electronics,online,23.61,6,0.221,none,2024-12-14\r\n8563,2235,AMER,electronics,online,41.36,8,0.100,none,2024-07-14\r\n8564,1310,AMER,sports,online,37.24,2,0.245,coupon,2024-09-24\r\n8565,2463,AMER,grocery,online,35.19,4,0.200,bundle,2024-02-11\r\n8566,1482,AMER,fashion,online,88.20,7,0.075,none,2024-05-10\r\n8567,2213,APAC,fashion,online,104.40,3,0.146,none,2024-11-07\r\n8568,1079,LATAM,toys,online,24.75,6,0.049,bundle,2024-10-16\r\n8569,1982,EMEA,electronics,online,24.15,2,0.137,bundle,2024-10-26\r\n8570,1326,AMER,sports,mobile,32.65,2,0.046,bundle,2024-03-07\r\n8571,2248,LATAM,fashion,online,84.05,8,0.104,none,2024-07-14\r\n8572,2164,AMER,toys,retail,125.17,1,0.085,loyalty,2024-09-15\r\n8573,1423,EMEA,grocery,retail,45.75,7,0.180,none,2024-09-04\r\n8574,1548,EMEA,fashion,online,11.81,3,0.220,loyalty,2024-06-05\r\n8575,2371,LATAM,home,retail,43.29,5,0.220,none,2024-11-13\r\n8576,1154,LATAM,sports,online,208.61,1,0.184,coupon,2024-01-06\r\n8577,1155,EMEA,fashion,online,36.51,7,0.14
1,coupon,2024-05-20\r\n8578,1155,EMEA,fashion,online,66.05,6,0.047,loyalty,2024-05-02\r\n8579,1691,LATAM,grocery,retail,71.81,2,0.106,none,2024-10-15\r\n8580,1382,LATAM,grocery,retail,55.17,7,0.172,bundle,2024-11-22\r\n8581,1503,APAC,grocery,online,48.60,8,0.148,none,2024-06-02\r\n8582,2354,LATAM,toys,retail,80.44,1,0.116,none,2024-08-06\r\n8583,1938,APAC,sports,online,67.76,4,0.096,none,2024-03-24\r\n8584,2314,EMEA,home,mobile,38.06,1,0.017,none,2024-07-08\r\n8585,1906,APAC,fashion,mobile,66.49,6,0.200,coupon,2024-05-10\r\n8586,1722,EMEA,toys,online,97.06,5,0.090,none,2024-09-07\r\n8587,1829,EMEA,electronics,online,20.93,8,0.002,bundle,2024-07-10\r\n8588,1153,AMER,electronics,online,50.67,7,0.154,bundle,2024-05-14\r\n8589,1101,AMER,home,mobile,103.51,3,0.006,loyalty,2024-06-09\r\n8590,1317,EMEA,fashion,mobile,23.42,7,0.243,none,2024-03-05\r\n8591,2236,APAC,grocery,mobile,90.72,7,0.068,none,2024-01-02\r\n8592,1998,APAC,grocery,online,28.90,2,0.028,none,2024-04-23\r\n8593,2405,AMER,grocery,online,30.53,2,0.247,coupon,2024-06-16\r\n8594,1955,AMER,fashion,online,54.14,3,0.042,none,2024-01-04\r\n8595,2239,EMEA,sports,retail,30.41,6,0.050,bundle,2024-04-27\r\n8596,2294,EMEA,sports,retail,48.86,6,0.243,coupon,2024-11-22\r\n8597,2282,EMEA,electronics,retail,44.14,7,0.091,none,2024-12-26\r\n8598,1607,LATAM,grocery,retail,38.77,6,0.111,none,2024-04-06\r\n8599,2210,APAC,sports,online,16.65,2,0.157,none,2024-04-04\r\n8600,1409,APAC,home,retail,65.69,7,0.019,bundle,2024-08-16\r\n8601,1577,AMER,toys,online,35.62,8,0.246,none,2024-04-18\r\n8602,2017,EMEA,grocery,online,93.16,5,0.124,none,2024-11-13\r\n8603,2407,EMEA,grocery,online,32.50,1,0.198,coupon,2024-05-20\r\n8604,1767,AMER,electronics,retail,64.15,8,0.042,none,2024-07-05\r\n8605,1657,LATAM,home,online,30.42,5,0.233,none,2024-07-20\r\n8606,1928,AMER,toys,retail,99.86,4,0.026,none,2024-03-01\r\n8607,1404,EMEA,home,retail,35.14,7,0.184,coupon,2024-01-28\r\n8608,1084,AMER,electronics,retail,61.54,4,0.204,none,2024-08-03\r\n860
9,1119,LATAM,grocery,online,71.24,6,0.207,bundle,2024-05-26\r\n8610,2415,AMER,toys,online,84.44,2,0.039,coupon,2024-01-02\r\n8611,2488,EMEA,home,online,52.74,3,0.223,none,2024-11-03\r\n8612,1169,LATAM,sports,retail,132.35,3,0.146,none,2024-11-17\r\n8613,2095,EMEA,home,retail,25.76,5,0.179,none,2024-04-07\r\n8614,1392,AMER,sports,retail,21.16,3,0.065,none,2024-03-06\r\n8615,1555,AMER,toys,mobile,24.73,5,0.036,none,2024-08-02\r\n8616,2327,EMEA,fashion,online,67.01,2,0.109,loyalty,2024-10-19\r\n8617,2084,LATAM,electronics,retail,32.88,7,0.003,none,2024-07-24\r\n8618,1809,APAC,fashion,partner,29.46,7,0.187,loyalty,2024-04-17\r\n8619,1100,AMER,fashion,retail,33.81,4,0.231,none,2024-09-28\r\n8620,2483,LATAM,fashion,partner,70.68,4,0.213,none,2024-06-11\r\n8621,1823,EMEA,electronics,online,65.50,3,0.225,none,2024-04-11\r\n8622,2465,EMEA,grocery,retail,110.96,7,0.049,none,2024-08-24\r\n8623,2008,APAC,sports,partner,123.67,5,0.209,none,2024-05-04\r\n8624,1955,AMER,sports,mobile,144.69,3,0.232,none,2024-12-01\r\n8625,1480,APAC,sports,mobile,73.10,4,0.175,bundle,2024-11-05\r\n8626,2388,LATAM,electronics,retail,54.94,5,0.164,none,2024-05-06\r\n8627,1824,LATAM,sports,retail,51.95,2,0.017,coupon,2024-08-06\r\n8628,2430,APAC,toys,retail,26.98,6,0.164,none,2024-05-15\r\n8629,2432,AMER,fashion,online,83.93,6,0.096,none,2024-11-05\r\n8630,1392,AMER,grocery,online,49.89,8,0.154,none,2024-05-16\r\n8631,1844,APAC,sports,online,81.21,6,0.002,bundle,2024-10-08\r\n8632,2344,LATAM,grocery,online,128.50,3,0.076,none,2024-06-07\r\n8633,1777,AMER,fashion,retail,20.72,4,0.003,coupon,2024-08-08\r\n8634,1721,EMEA,grocery,online,72.95,5,0.168,none,2024-10-04\r\n8635,1472,AMER,sports,retail,53.59,1,0.201,none,2024-08-18\r\n8636,1505,EMEA,sports,online,56.35,1,0.237,none,2024-02-08\r\n8637,2206,AMER,electronics,online,39.58,3,0.011,loyalty,2024-04-28\r\n8638,2411,EMEA,toys,retail,74.87,7,0.158,none,2024-10-08\r\n8639,1044,EMEA,home,online,45.54,8,0.228,none,2024-01-20\r\n8640,2339,AMER,sports,online
,33.17,7,0.142,none,2024-10-21\r\n8641,2085,AMER,fashion,retail,37.10,6,0.164,coupon,2024-03-17\r\n8642,2444,EMEA,fashion,online,59.93,3,0.121,none,2024-02-18\r\n8643,1064,AMER,fashion,online,63.15,6,0.087,none,2024-05-09\r\n8644,1533,APAC,fashion,retail,70.33,6,0.053,coupon,2024-09-13\r\n8645,1028,EMEA,electronics,retail,114.54,8,0.157,bundle,2024-09-14\r\n8646,1744,EMEA,toys,online,36.41,1,0.239,coupon,2024-10-25\r\n8647,1623,AMER,grocery,retail,36.82,7,0.206,none,2024-01-04\r\n8648,1475,LATAM,fashion,retail,84.86,5,0.012,bundle,2024-06-22\r\n8649,1553,LATAM,electronics,mobile,58.80,5,0.070,bundle,2024-03-08\r\n8650,1090,AMER,grocery,online,38.33,5,0.216,coupon,2024-04-05\r\n8651,2269,EMEA,grocery,online,63.78,7,0.066,coupon,2024-06-22\r\n8652,1909,APAC,grocery,online,32.64,7,0.113,none,2024-12-22\r\n8653,1925,LATAM,grocery,retail,26.56,3,0.178,none,2024-06-03\r\n8654,1342,LATAM,fashion,retail,139.80,1,0.106,coupon,2024-05-13\r\n8655,1220,LATAM,home,retail,47.02,2,0.065,coupon,2024-04-24\r\n8656,1806,APAC,sports,partner,23.70,2,0.248,none,2024-01-22\r\n8657,1570,AMER,fashion,online,39.19,3,0.094,coupon,2024-08-03\r\n8658,2166,AMER,electronics,retail,76.58,6,0.002,none,2024-06-05\r\n8659,1569,APAC,home,retail,63.59,3,0.032,coupon,2024-06-18\r\n8660,1877,LATAM,sports,online,72.91,4,0.059,none,2024-02-04\r\n8661,1820,AMER,toys,mobile,70.70,7,0.102,none,2024-09-02\r\n8662,1868,AMER,fashion,mobile,50.42,7,0.226,none,2024-01-24\r\n8663,1903,LATAM,electronics,retail,57.29,1,0.172,none,2024-05-15\r\n8664,1760,LATAM,grocery,mobile,61.11,1,0.077,none,2024-09-08\r\n8665,2305,AMER,home,online,227.55,2,0.070,none,2024-05-26\r\n8666,2447,AMER,grocery,retail,84.00,2,0.202,coupon,2024-01-05\r\n8667,2466,APAC,electronics,mobile,39.53,2,0.091,none,2024-11-27\r\n8668,1877,LATAM,sports,retail,85.62,4,0.003,loyalty,2024-08-22\r\n8669,1110,LATAM,fashion,retail,66.07,4,0.081,coupon,2024-11-07\r\n8670,1055,AMER,fashion,online,86.96,7,0.189,none,2024-11-25\r\n8671,2295,EMEA,sports,online,
116.59,6,0.213,bundle,2024-10-20\r\n8672,2027,EMEA,grocery,retail,66.28,5,0.187,bundle,2024-10-02\r\n8673,2065,EMEA,fashion,mobile,28.56,4,0.055,none,2024-10-25\r\n8674,1728,AMER,home,retail,102.39,4,0.074,none,2024-09-22\r\n8675,1975,EMEA,fashion,mobile,63.79,4,0.147,coupon,2024-09-05\r\n8676,1384,LATAM,fashion,online,57.36,2,0.240,none,2024-06-28\r\n8677,1465,AMER,home,online,170.06,5,0.174,none,2024-01-22\r\n8678,1426,AMER,fashion,retail,22.33,1,0.149,none,2024-04-07\r\n8679,1100,AMER,sports,retail,50.74,2,0.153,loyalty,2024-01-23\r\n8680,2247,LATAM,grocery,online,39.53,7,0.015,bundle,2024-03-02\r\n8681,1361,LATAM,electronics,online,62.09,5,0.040,none,2024-05-04\r\n8682,2290,LATAM,toys,retail,51.68,7,0.059,none,2024-01-28\r\n8683,1792,AMER,grocery,retail,78.66,2,0.226,none,2024-02-04\r\n8684,1392,AMER,fashion,retail,52.19,4,0.118,none,2024-05-19\r\n8685,1095,APAC,fashion,online,35.33,6,0.092,none,2024-02-03\r\n8686,1035,EMEA,sports,online,62.57,5,0.018,loyalty,2024-06-14\r\n8687,1488,AMER,toys,retail,68.39,4,0.169,loyalty,2024-04-27\r\n8688,1402,EMEA,grocery,online,70.68,2,0.122,none,2024-02-05\r\n8689,2139,AMER,sports,online,17.68,4,0.205,none,2024-07-20\r\n8690,2439,AMER,fashion,retail,42.34,7,0.085,none,2024-09-24\r\n8691,2316,EMEA,toys,retail,49.38,5,0.085,none,2024-02-13\r\n8692,1427,EMEA,electronics,mobile,67.68,5,0.181,none,2024-10-08\r\n8693,1248,APAC,grocery,mobile,78.13,7,0.245,none,2024-06-28\r\n8694,1913,LATAM,grocery,online,51.20,4,0.151,none,2024-11-01\r\n8695,2375,AMER,sports,online,71.57,3,0.014,none,2024-04-20\r\n8696,1352,AMER,toys,online,20.38,2,0.126,none,2024-11-13\r\n8697,1550,APAC,fashion,retail,76.07,1,0.106,bundle,2024-07-03\r\n8698,1977,APAC,home,mobile,62.98,4,0.249,none,2024-09-20\r\n8699,1932,EMEA,sports,mobile,33.98,2,0.097,coupon,2024-02-12\r\n8700,1150,LATAM,grocery,online,186.18,2,0.095,coupon,2024-11-24\r\n8701,1052,LATAM,home,retail,60.25,6,0.151,none,2024-05-13\r\n8702,2124,AMER,grocery,retail,140.01,3,0.184,none,2024-02-12\r\n
8703,2497,AMER,home,online,48.60,5,0.029,loyalty,2024-03-09\r\n8704,1732,LATAM,home,retail,39.50,2,0.063,none,2024-07-28\r\n8705,2033,LATAM,toys,online,65.35,5,0.051,loyalty,2024-04-02\r\n8706,1612,LATAM,fashion,online,88.54,3,0.158,none,2024-08-14\r\n8707,2365,LATAM,home,retail,60.30,6,0.123,coupon,2024-07-04\r\n8708,1537,LATAM,home,online,112.97,4,0.249,coupon,2024-09-25\r\n8709,1173,LATAM,fashion,online,138.23,4,0.245,loyalty,2024-04-08\r\n8710,1079,LATAM,electronics,online,24.99,7,0.158,loyalty,2024-09-09\r\n8711,1023,APAC,electronics,online,32.46,2,0.005,coupon,2024-05-26\r\n8712,1491,EMEA,toys,retail,52.67,7,0.086,none,2024-02-13\r\n8713,1073,AMER,electronics,online,105.09,5,0.026,none,2024-06-16\r\n8714,1116,LATAM,grocery,retail,85.30,8,0.113,coupon,2024-09-01\r\n8715,1760,LATAM,home,online,27.20,1,0.041,loyalty,2024-11-25\r\n8716,1591,APAC,fashion,online,130.21,3,0.110,none,2024-07-07\r\n8717,1551,APAC,home,online,90.15,6,0.209,coupon,2024-03-03\r\n8718,1926,AMER,home,retail,81.14,2,0.050,loyalty,2024-09-14\r\n8719,1652,APAC,fashion,online,46.02,3,0.092,none,2024-01-12\r\n8720,2267,AMER,toys,retail,99.69,7,0.172,none,2024-04-17\r\n8721,1200,EMEA,fashion,online,76.49,7,0.097,loyalty,2024-09-12\r\n8722,1731,AMER,sports,online,103.69,4,0.231,none,2024-01-10\r\n8723,1910,LATAM,electronics,mobile,38.96,1,0.122,coupon,2024-05-16\r\n8724,2124,AMER,fashion,retail,144.50,2,0.165,none,2024-08-14\r\n8725,2197,LATAM,grocery,online,120.43,4,0.089,none,2024-10-10\r\n8726,2255,AMER,grocery,mobile,62.32,1,0.123,coupon,2024-02-11\r\n8727,1561,EMEA,electronics,online,67.19,2,0.245,none,2024-03-24\r\n8728,2221,LATAM,fashion,retail,26.34,1,0.010,loyalty,2024-07-03\r\n8729,1593,AMER,grocery,partner,123.64,4,0.025,none,2024-07-26\r\n8730,1648,APAC,home,online,43.25,3,0.208,coupon,2024-07-21\r\n8731,1754,EMEA,sports,retail,68.75,8,0.211,none,2024-03-15\r\n8732,2015,APAC,sports,retail,79.53,7,0.155,bundle,2024-11-22\r\n8733,1287,AMER,grocery,retail,18.43,2,0.212,none,2024-11-08\r\n
8734,1713,EMEA,home,online,37.86,1,0.139,bundle,2024-01-12\r\n8735,1527,AMER,toys,online,65.40,8,0.060,loyalty,2024-12-03\r\n8736,1200,EMEA,home,mobile,82.98,1,0.165,none,2024-07-05\r\n8737,1013,LATAM,toys,mobile,89.48,2,0.198,none,2024-01-19\r\n8738,1106,AMER,fashion,mobile,156.92,8,0.078,bundle,2024-01-23\r\n8739,2396,AMER,toys,retail,105.11,8,0.169,none,2024-08-25\r\n8740,1677,EMEA,home,retail,88.57,6,0.170,coupon,2024-12-11\r\n8741,1290,EMEA,sports,online,120.51,3,0.247,none,2024-07-01\r\n8742,1329,APAC,grocery,online,45.25,8,0.039,bundle,2024-07-24\r\n8743,1715,AMER,grocery,online,63.02,8,0.228,none,2024-08-23\r\n8744,1764,LATAM,grocery,retail,44.23,7,0.185,none,2024-07-20\r\n8745,2087,LATAM,grocery,online,34.98,1,0.038,coupon,2024-05-01\r\n8746,1788,AMER,fashion,retail,18.29,8,0.111,none,2024-12-04\r\n8747,1376,EMEA,fashion,mobile,34.59,2,0.153,coupon,2024-10-11\r\n8748,1758,AMER,electronics,retail,71.96,3,0.126,none,2024-01-27\r\n8749,2396,AMER,toys,online,92.66,5,0.034,bundle,2024-07-09\r\n8750,2021,EMEA,sports,online,51.36,3,0.105,coupon,2024-09-12\r\n8751,1423,EMEA,grocery,mobile,60.23,5,0.047,none,2024-04-17\r\n8752,1989,LATAM,toys,retail,81.47,8,0.245,bundle,2024-01-25\r\n8753,1165,AMER,grocery,online,54.82,5,0.238,coupon,2024-05-12\r\n8754,2171,EMEA,toys,online,74.81,8,0.157,loyalty,2024-08-11\r\n8755,2430,APAC,electronics,online,87.30,2,0.122,coupon,2024-08-10\r\n8756,1045,LATAM,grocery,retail,51.64,4,0.146,none,2024-06-18\r\n8757,1362,AMER,electronics,retail,35.80,8,0.090,coupon,2024-11-09\r\n8758,2295,EMEA,toys,mobile,24.49,7,0.094,none,2024-11-17\r\n8759,1326,AMER,home,retail,52.94,1,0.236,bundle,2024-07-18\r\n8760,1778,LATAM,electronics,retail,34.77,5,0.248,bundle,2024-03-17\r\n8761,1664,LATAM,grocery,online,19.89,7,0.241,coupon,2024-06-28\r\n8762,2249,LATAM,fashion,online,47.75,1,0.052,none,2024-12-25\r\n8763,1415,AMER,grocery,online,67.76,2,0.036,loyalty,2024-02-13\r\n8764,1255,AMER,toys,online,87.12,3,0.048,coupon,2024-04-17\r\n8765,1309,EMEA,to
ys,retail,161.76,8,0.075,none,2024-07-12\r\n8766,2118,AMER,home,online,42.48,6,0.108,none,2024-08-25\r\n8767,1786,APAC,sports,online,22.81,8,0.112,coupon,2024-07-03\r\n8768,1197,LATAM,home,mobile,74.82,7,0.150,none,2024-12-18\r\n8769,1511,EMEA,grocery,retail,50.87,6,0.172,none,2024-04-14\r\n8770,2493,APAC,toys,online,31.36,6,0.070,bundle,2024-04-12\r\n8771,2170,EMEA,fashion,retail,49.17,3,0.206,none,2024-02-13\r\n8772,1354,AMER,fashion,online,52.78,2,0.158,coupon,2024-06-27\r\n8773,1875,EMEA,grocery,retail,49.77,8,0.049,coupon,2024-10-20\r\n8774,2029,APAC,fashion,retail,47.08,2,0.060,none,2024-10-28\r\n8775,2207,APAC,electronics,online,54.09,4,0.009,loyalty,2024-06-17\r\n8776,2430,APAC,electronics,online,48.41,4,0.148,none,2024-07-14\r\n8777,2022,LATAM,grocery,online,24.54,3,0.227,none,2024-03-10\r\n8778,1911,LATAM,fashion,online,26.07,5,0.179,none,2024-07-04\r\n8779,2467,AMER,electronics,retail,43.74,2,0.084,none,2024-11-07\r\n8780,1410,AMER,electronics,online,55.35,1,0.083,loyalty,2024-08-21\r\n8781,1048,EMEA,home,retail,24.28,3,0.035,bundle,2024-03-28\r\n8782,2113,LATAM,toys,retail,30.03,1,0.039,loyalty,2024-09-18\r\n8783,1638,EMEA,grocery,retail,80.22,7,0.218,none,2024-07-28\r\n8784,1771,AMER,fashion,online,55.82,3,0.083,none,2024-06-22\r\n8785,2236,APAC,sports,retail,25.51,7,0.079,coupon,2024-11-13\r\n8786,1719,LATAM,home,online,82.18,2,0.050,coupon,2024-07-07\r\n8787,2415,AMER,fashion,online,105.47,5,0.218,coupon,2024-03-17\r\n8788,2045,LATAM,grocery,mobile,96.46,8,0.237,bundle,2024-01-23\r\n8789,1472,AMER,sports,retail,36.40,5,0.242,loyalty,2024-10-22\r\n8790,2305,AMER,toys,mobile,72.58,1,0.089,none,2024-07-16\r\n8791,2299,EMEA,electronics,mobile,74.70,4,0.039,none,2024-03-27\r\n8792,1487,AMER,electronics,online,29.48,2,0.023,none,2024-02-02\r\n8793,2259,AMER,fashion,online,76.32,4,0.027,none,2024-06-04\r\n8794,1866,EMEA,fashion,retail,69.24,2,0.083,none,2024-11-01\r\n8795,1586,LATAM,grocery,online,141.04,3,0.004,loyalty,2024-12-27\r\n8796,1767,AMER,sports,re
tail,26.52,8,0.039,none,2024-04-17\r\n8797,1269,LATAM,toys,retail,58.74,2,0.175,loyalty,2024-07-06\r\n8798,1793,LATAM,grocery,online,67.00,5,0.146,none,2024-10-14\r\n8799,1901,AMER,electronics,mobile,47.80,6,0.105,none,2024-06-03\r\n8800,1792,AMER,grocery,mobile,30.52,5,0.197,bundle,2024-06-09\r\n8801,2249,LATAM,sports,mobile,48.20,2,0.055,loyalty,2024-06-24\r\n8802,1320,EMEA,home,online,53.05,6,0.104,none,2024-01-07\r\n8803,1699,APAC,sports,online,29.21,2,0.111,none,2024-01-24\r\n8804,1938,APAC,home,online,65.39,8,0.157,none,2024-08-16\r\n8805,2076,AMER,grocery,online,53.60,4,0.015,none,2024-02-01\r\n8806,2175,AMER,grocery,online,45.51,6,0.202,none,2024-03-17\r\n8807,1491,EMEA,fashion,online,107.85,7,0.206,none,2024-10-11\r\n8808,1783,AMER,grocery,retail,67.86,1,0.196,coupon,2024-11-25\r\n8809,1704,AMER,grocery,retail,42.11,1,0.152,none,2024-08-13\r\n8810,2241,APAC,home,online,39.43,2,0.123,bundle,2024-08-16\r\n8811,1195,AMER,toys,retail,37.39,1,0.211,none,2024-09-28\r\n8812,1017,AMER,home,online,42.75,3,0.235,none,2024-11-23\r\n8813,1635,APAC,sports,mobile,84.67,5,0.117,coupon,2024-04-05\r\n8814,2228,EMEA,electronics,online,42.58,3,0.187,bundle,2024-09-01\r\n8815,1615,LATAM,electronics,mobile,41.07,8,0.017,none,2024-08-02\r\n8816,1108,EMEA,home,mobile,44.28,3,0.208,coupon,2024-11-17\r\n8817,1902,AMER,fashion,retail,52.24,6,0.106,none,2024-04-18\r\n8818,2373,LATAM,electronics,mobile,34.67,3,0.134,none,2024-09-03\r\n8819,1092,AMER,electronics,online,64.63,1,0.198,none,2024-12-08\r\n8820,1451,EMEA,sports,online,28.21,6,0.054,coupon,2024-04-13\r\n8821,1872,LATAM,grocery,online,49.00,3,0.084,none,2024-04-18\r\n8822,1009,APAC,grocery,mobile,38.13,3,0.008,coupon,2024-04-11\r\n8823,2089,EMEA,grocery,online,96.27,6,0.230,coupon,2024-11-17\r\n8824,2389,LATAM,grocery,retail,38.65,5,0.054,none,2024-09-21\r\n8825,2462,EMEA,electronics,online,69.79,5,0.127,coupon,2024-03-03\r\n8826,1566,EMEA,home,retail,39.68,3,0.139,none,2024-11-25\r\n8827,1981,EMEA,fashion,online,137.88,2,0.2
03,loyalty,2024-11-05\r\n8828,2388,LATAM,electronics,retail,47.17,8,0.130,none,2024-09-28\r\n8829,2135,EMEA,electronics,retail,49.08,4,0.113,none,2024-01-09\r\n8830,1956,APAC,sports,mobile,70.45,2,0.011,bundle,2024-09-04\r\n8831,2091,LATAM,toys,online,34.05,7,0.138,coupon,2024-10-20\r\n8832,1558,EMEA,grocery,online,46.58,8,0.195,none,2024-12-25\r\n8833,2108,AMER,sports,online,63.77,1,0.070,none,2024-06-26\r\n8834,1907,EMEA,toys,partner,56.80,8,0.229,none,2024-11-21\r\n8835,2368,AMER,electronics,online,38.10,4,0.129,none,2024-03-09\r\n8836,2365,LATAM,grocery,mobile,48.63,2,0.113,coupon,2024-01-04\r\n8837,1307,AMER,home,partner,95.55,6,0.085,none,2024-11-15\r\n8838,1476,APAC,fashion,retail,36.65,7,0.025,none,2024-06-08\r\n8839,1398,APAC,home,mobile,81.47,3,0.163,none,2024-12-25\r\n8840,1436,APAC,grocery,online,44.05,5,0.118,loyalty,2024-03-20\r\n8841,2059,AMER,electronics,retail,27.40,6,0.149,none,2024-05-14\r\n8842,1018,APAC,toys,online,89.79,8,0.077,none,2024-05-26\r\n8843,2161,LATAM,electronics,mobile,109.91,5,0.073,none,2024-09-26\r\n8844,1163,AMER,toys,online,80.66,2,0.247,none,2024-12-20\r\n8845,2146,APAC,grocery,partner,58.33,3,0.125,coupon,2024-11-07\r\n8846,1984,LATAM,toys,online,163.43,1,0.006,none,2024-10-21\r\n8847,1236,AMER,home,online,25.60,1,0.167,coupon,2024-07-15\r\n8848,1110,LATAM,fashion,retail,72.42,8,0.046,none,2024-05-02\r\n8849,1810,LATAM,grocery,retail,46.35,2,0.058,none,2024-11-28\r\n8850,1462,LATAM,grocery,retail,66.86,8,0.248,none,2024-04-14\r\n8851,1237,LATAM,grocery,online,127.83,7,0.248,none,2024-04-20\r\n8852,1965,LATAM,home,mobile,42.69,6,0.022,none,2024-07-03\r\n8853,2004,LATAM,home,online,53.01,1,0.150,coupon,2024-02-10\r\n8854,1796,LATAM,grocery,online,99.47,5,0.074,none,2024-09-17\r\n8855,1914,EMEA,home,mobile,34.55,6,0.032,none,2024-05-18\r\n8856,1603,EMEA,electronics,retail,121.68,2,0.175,bundle,2024-02-07\r\n8857,2328,EMEA,sports,online,41.40,1,0.127,none,2024-12-13\r\n8858,1385,LATAM,electronics,online,12.00,7,0.241,none,2024-12
-23\r\n8859,1697,APAC,sports,retail,53.83,7,0.017,bundle,2024-11-14\r\n8860,1667,AMER,grocery,partner,82.39,6,0.120,bundle,2024-10-14\r\n8861,2200,LATAM,fashion,retail,63.31,2,0.151,none,2024-12-26\r\n8862,1065,AMER,grocery,online,48.37,1,0.176,bundle,2024-07-15\r\n8863,1196,APAC,sports,online,77.57,3,0.073,none,2024-06-14\r\n8864,2388,LATAM,sports,retail,50.67,1,0.038,none,2024-02-20\r\n8865,1519,APAC,home,retail,37.49,8,0.143,coupon,2024-12-18\r\n8866,1227,AMER,home,online,52.43,1,0.030,none,2024-05-05\r\n8867,1489,AMER,home,retail,60.87,3,0.087,none,2024-07-05\r\n8868,1530,APAC,toys,online,72.62,4,0.170,none,2024-11-15\r\n8869,1184,AMER,electronics,online,69.43,8,0.008,none,2024-07-05\r\n8870,1842,LATAM,grocery,retail,58.02,7,0.012,none,2024-01-07\r\n8871,2312,APAC,grocery,online,75.99,4,0.241,none,2024-07-14\r\n8872,2224,EMEA,grocery,online,47.83,7,0.232,none,2024-02-10\r\n8873,1357,EMEA,grocery,retail,25.72,8,0.036,none,2024-05-08\r\n8874,1627,LATAM,sports,online,53.01,3,0.152,bundle,2024-03-24\r\n8875,2407,EMEA,electronics,retail,168.36,1,0.169,coupon,2024-04-10\r\n8876,1059,AMER,grocery,retail,56.98,6,0.137,loyalty,2024-10-10\r\n8877,2003,LATAM,home,retail,23.52,4,0.094,none,2024-11-18\r\n8878,1515,EMEA,home,online,52.59,3,0.012,loyalty,2024-02-23\r\n8879,1047,APAC,grocery,retail,65.08,5,0.024,none,2024-03-12\r\n8880,2392,EMEA,home,retail,60.91,5,0.221,none,2024-03-21\r\n8881,1546,EMEA,electronics,partner,66.08,6,0.067,none,2024-06-24\r\n8882,2213,APAC,grocery,retail,187.28,3,0.241,none,2024-10-03\r\n8883,1865,LATAM,home,retail,157.95,5,0.167,none,2024-02-22\r\n8884,1390,APAC,electronics,online,24.44,5,0.079,none,2024-05-16\r\n8885,1095,APAC,toys,partner,101.32,4,0.072,coupon,2024-04-13\r\n8886,1214,EMEA,home,online,44.97,4,0.012,coupon,2024-10-20\r\n8887,1787,APAC,grocery,retail,56.61,8,0.065,none,2024-11-19\r\n8888,1153,AMER,fashion,mobile,59.18,1,0.035,none,2024-04-08\r\n8889,1599,APAC,electronics,online,65.84,1,0.053,none,2024-03-04\r\n8890,1845,AMER,elec
tronics,online,54.85,4,0.239,none,2024-12-21\r\n8891,2461,LATAM,toys,online,68.36,2,0.219,coupon,2024-01-11\r\n8892,1004,LATAM,sports,online,30.14,7,0.142,none,2024-11-20\r\n8893,1102,APAC,electronics,mobile,68.21,5,0.138,none,2024-02-22\r\n8894,2288,AMER,grocery,online,34.49,6,0.029,coupon,2024-03-23\r\n8895,1343,LATAM,home,retail,39.30,7,0.095,none,2024-11-17\r\n8896,1067,APAC,grocery,retail,60.88,8,0.054,bundle,2024-08-03\r\n8897,2002,APAC,fashion,retail,83.36,6,0.114,none,2024-11-19\r\n8898,2203,APAC,sports,retail,56.83,2,0.218,loyalty,2024-09-05\r\n8899,2438,AMER,grocery,retail,75.33,1,0.007,none,2024-07-25\r\n8900,2118,AMER,electronics,retail,72.36,5,0.015,coupon,2024-03-19\r\n8901,2198,EMEA,grocery,online,62.64,7,0.041,none,2024-12-24\r\n8902,2181,AMER,sports,partner,84.29,3,0.021,coupon,2024-07-28\r\n8903,1876,LATAM,grocery,retail,34.03,6,0.050,none,2024-02-27\r\n8904,1616,APAC,grocery,mobile,28.21,5,0.224,none,2024-05-26\r\n8905,1042,LATAM,toys,retail,40.61,2,0.234,bundle,2024-10-04\r\n8906,1947,EMEA,grocery,online,75.70,3,0.245,bundle,2024-06-13\r\n8907,2369,LATAM,electronics,online,101.15,3,0.229,none,2024-09-10\r\n8908,1729,AMER,electronics,mobile,57.23,1,0.079,none,2024-04-07\r\n8909,1416,EMEA,grocery,mobile,72.76,7,0.014,none,2024-10-10\r\n8910,1040,LATAM,home,online,96.36,2,0.155,none,2024-12-27\r\n8911,1969,LATAM,toys,online,41.18,7,0.102,none,2024-12-13\r\n8912,2354,LATAM,sports,retail,56.16,8,0.126,none,2024-10-04\r\n8913,2150,APAC,home,online,181.88,5,0.234,none,2024-04-10\r\n8914,2200,LATAM,sports,online,79.83,1,0.232,none,2024-07-19\r\n8915,2234,LATAM,fashion,retail,90.47,1,0.179,none,2024-08-25\r\n8916,2260,EMEA,grocery,retail,48.56,7,0.081,coupon,2024-11-24\r\n8917,1187,AMER,electronics,online,77.18,1,0.042,none,2024-06-13\r\n8918,1161,AMER,sports,online,63.76,6,0.204,coupon,2024-09-23\r\n8919,2493,APAC,grocery,retail,123.87,5,0.171,coupon,2024-12-12\r\n8920,1816,EMEA,home,retail,102.77,5,0.170,none,2024-09-07\r\n8921,1054,EMEA,electronics,par
tner,30.09,2,0.097,bundle,2024-09-27\r\n8922,1454,APAC,grocery,partner,57.67,5,0.250,coupon,2024-04-01\r\n8923,1009,APAC,fashion,retail,24.88,2,0.120,coupon,2024-08-26\r\n8924,1505,EMEA,electronics,retail,59.12,7,0.219,none,2024-01-11\r\n8925,1779,APAC,electronics,retail,43.97,4,0.178,coupon,2024-04-19\r\n8926,1288,LATAM,electronics,online,92.79,3,0.234,none,2024-12-16\r\n8927,1956,APAC,home,retail,72.50,5,0.132,none,2024-10-04\r\n8928,1360,APAC,fashion,online,30.04,6,0.058,none,2024-03-09\r\n8929,2119,AMER,grocery,retail,29.89,8,0.002,coupon,2024-03-26\r\n8930,2398,EMEA,grocery,online,33.66,7,0.078,loyalty,2024-11-26\r\n8931,1768,AMER,sports,retail,41.01,1,0.184,none,2024-03-25\r\n8932,2318,AMER,toys,online,43.71,1,0.107,bundle,2024-10-24\r\n8933,2256,AMER,electronics,online,44.65,2,0.048,none,2024-08-09\r\n8934,2362,AMER,sports,online,72.47,5,0.240,bundle,2024-11-11\r\n8935,2281,AMER,sports,mobile,57.52,5,0.078,bundle,2024-04-23\r\n8936,1689,LATAM,electronics,online,29.66,5,0.230,bundle,2024-02-26\r\n8937,2293,LATAM,electronics,mobile,14.71,3,0.227,none,2024-04-10\r\n8938,1757,EMEA,home,retail,40.26,6,0.076,none,2024-11-11\r\n8939,1730,AMER,sports,online,106.38,2,0.005,none,2024-05-28\r\n8940,1779,APAC,electronics,online,51.46,7,0.097,none,2024-05-20\r\n8941,1676,LATAM,grocery,online,138.10,4,0.217,loyalty,2024-10-11\r\n8942,1039,AMER,grocery,online,100.44,5,0.232,bundle,2024-10-05\r\n8943,2399,LATAM,sports,online,36.11,4,0.035,none,2024-11-17\r\n8944,1351,APAC,fashion,mobile,56.78,7,0.179,coupon,2024-02-19\r\n8945,1544,LATAM,electronics,online,102.86,6,0.233,none,2024-04-18\r\n8946,1797,LATAM,fashion,online,34.41,4,0.077,bundle,2024-09-15\r\n8947,1575,APAC,grocery,mobile,70.03,8,0.197,coupon,2024-10-16\r\n8948,1783,AMER,grocery,online,27.04,5,0.135,none,2024-04-09\r\n8949,2031,AMER,toys,online,36.77,3,0.161,coupon,2024-04-25\r\n8950,2067,LATAM,fashion,mobile,21.60,5,0.089,none,2024-06-19\r\n8951,1186,APAC,toys,online,90.71,8,0.224,loyalty,2024-04-21\r\n8952,2150,
APAC,home,retail,113.26,7,0.076,bundle,2024-03-16\r\n8953,1432,APAC,grocery,retail,89.63,1,0.205,loyalty,2024-04-27\r\n8954,1015,AMER,toys,retail,37.15,8,0.196,none,2024-08-15\r\n8955,1500,EMEA,sports,online,51.71,7,0.182,none,2024-02-10\r\n8956,1658,AMER,sports,online,71.63,7,0.240,coupon,2024-06-12\r\n8957,1463,EMEA,electronics,online,64.30,2,0.219,coupon,2024-12-25\r\n8958,2025,EMEA,toys,retail,83.80,8,0.171,loyalty,2024-11-14\r\n8959,2107,APAC,grocery,online,49.62,5,0.077,none,2024-12-19\r\n8960,1045,LATAM,home,online,58.41,4,0.235,none,2024-03-22\r\n8961,1661,LATAM,grocery,online,24.89,6,0.203,bundle,2024-05-08\r\n8962,1193,APAC,toys,online,51.77,4,0.224,none,2024-12-21\r\n8963,1152,LATAM,sports,partner,25.71,4,0.171,none,2024-12-25\r\n8964,1967,EMEA,electronics,retail,75.42,4,0.155,none,2024-08-21\r\n8965,1128,LATAM,electronics,mobile,70.67,5,0.237,coupon,2024-12-27\r\n8966,1577,AMER,home,retail,49.39,5,0.157,none,2024-10-09\r\n8967,1809,APAC,electronics,online,114.70,4,0.250,none,2024-02-07\r\n8968,2246,AMER,sports,online,69.49,7,0.092,none,2024-06-11\r\n8969,1006,AMER,electronics,online,110.68,2,0.010,none,2024-07-25\r\n8970,1578,LATAM,sports,online,67.30,7,0.090,loyalty,2024-06-16\r\n8971,1833,EMEA,electronics,retail,70.92,2,0.093,none,2024-01-16\r\n8972,1572,LATAM,grocery,retail,71.65,2,0.160,coupon,2024-10-25\r\n8973,1495,LATAM,grocery,online,74.05,6,0.166,coupon,2024-12-24\r\n8974,2469,LATAM,fashion,retail,76.01,5,0.138,none,2024-12-07\r\n8975,1242,LATAM,grocery,online,62.62,1,0.074,bundle,2024-09-19\r\n8976,1813,EMEA,electronics,online,57.95,3,0.052,none,2024-07-07\r\n8977,1436,APAC,grocery,retail,107.48,6,0.104,none,2024-03-11\r\n8978,1678,LATAM,fashion,retail,124.03,2,0.075,loyalty,2024-10-22\r\n8979,1087,AMER,grocery,online,27.24,5,0.140,none,2024-03-12\r\n8980,1836,LATAM,sports,retail,26.61,7,0.143,none,2024-03-22\r\n8981,1145,AMER,electronics,mobile,65.28,5,0.074,none,2024-08-28\r\n8982,1552,EMEA,home,online,40.57,7,0.230,none,2024-01-19\r\n8983,15
63,EMEA,toys,partner,56.88,7,0.242,none,2024-03-17\r\n8984,1444,EMEA,fashion,mobile,75.35,6,0.189,none,2024-05-10\r\n8985,1300,EMEA,fashion,online,53.13,5,0.069,none,2024-08-19\r\n8986,1615,LATAM,electronics,retail,107.05,2,0.194,loyalty,2024-03-09\r\n8987,1669,AMER,fashion,retail,67.64,1,0.077,bundle,2024-12-02\r\n8988,1764,LATAM,grocery,online,82.64,4,0.146,coupon,2024-10-14\r\n8989,1984,LATAM,toys,online,36.11,8,0.096,loyalty,2024-11-07\r\n8990,1557,LATAM,home,mobile,38.09,3,0.028,none,2024-12-01\r\n8991,1517,AMER,electronics,online,168.34,5,0.229,coupon,2024-05-21\r\n8992,1647,LATAM,fashion,online,59.45,6,0.126,coupon,2024-06-15\r\n8993,1031,AMER,sports,online,76.17,7,0.025,none,2024-12-04\r\n8994,1261,APAC,fashion,retail,98.00,4,0.103,none,2024-07-26\r\n8995,2158,APAC,fashion,mobile,15.00,8,0.134,none,2024-10-23\r\n8996,1677,EMEA,grocery,online,148.94,7,0.146,none,2024-10-27\r\n8997,1044,EMEA,fashion,mobile,64.81,8,0.007,bundle,2024-08-08\r\n8998,2124,AMER,electronics,retail,75.90,2,0.102,bundle,2024-07-12\r\n8999,2292,EMEA,home,online,56.01,2,0.043,none,2024-09-21\r\n9000,2438,AMER,home,retail,79.40,5,0.232,none,2024-08-18\r\n9001,2180,AMER,home,retail,82.47,2,0.195,none,2024-06-11\r\n9002,1649,APAC,electronics,online,72.32,8,0.165,bundle,2024-09-01\r\n9003,1592,LATAM,grocery,online,36.90,5,0.230,none,2024-12-20\r\n9004,1517,AMER,sports,partner,54.06,5,0.010,coupon,2024-01-28\r\n9005,2193,AMER,home,retail,70.97,4,0.039,bundle,2024-04-22\r\n9006,2086,APAC,fashion,retail,107.05,2,0.190,none,2024-03-24\r\n9007,1933,EMEA,home,online,87.81,5,0.235,loyalty,2024-05-11\r\n9008,2212,EMEA,toys,mobile,39.69,7,0.242,none,2024-11-18\r\n9009,2286,AMER,fashion,online,86.32,1,0.191,coupon,2024-04-26\r\n9010,2406,EMEA,sports,online,75.48,8,0.024,bundle,2024-09-25\r\n9011,1634,AMER,toys,online,74.61,4,0.019,none,2024-10-02\r\n9012,1447,LATAM,grocery,online,32.26,8,0.158,bundle,2024-08-01\r\n9013,2062,EMEA,grocery,retail,17.47,6,0.034,none,2024-11-27\r\n9014,1987,AMER,grocery,re
tail,117.66,6,0.159,none,2024-06-10\r\n9015,2020,AMER,fashion,online,61.09,1,0.070,coupon,2024-03-04\r\n9016,1076,LATAM,grocery,retail,55.73,6,0.175,none,2024-05-03\r\n9017,1154,LATAM,grocery,partner,57.88,2,0.186,none,2024-12-11\r\n9018,1508,LATAM,toys,online,51.03,6,0.179,none,2024-10-27\r\n9019,2423,LATAM,fashion,retail,100.07,2,0.192,none,2024-04-24\r\n9020,1215,LATAM,grocery,online,29.41,3,0.217,bundle,2024-08-01\r\n9021,1058,LATAM,toys,online,64.40,6,0.230,none,2024-07-20\r\n9022,1853,APAC,electronics,retail,48.02,1,0.050,none,2024-11-01\r\n9023,1833,EMEA,electronics,partner,48.24,5,0.186,loyalty,2024-08-14\r\n9024,2492,LATAM,electronics,online,42.16,3,0.120,none,2024-03-14\r\n9025,2181,AMER,grocery,retail,93.12,5,0.171,none,2024-06-26\r\n9026,1230,EMEA,fashion,retail,52.23,4,0.160,bundle,2024-02-05\r\n9027,1601,APAC,home,retail,42.47,3,0.057,none,2024-12-17\r\n9028,1953,EMEA,grocery,retail,121.41,6,0.031,coupon,2024-02-03\r\n9029,2254,LATAM,home,mobile,142.13,6,0.135,coupon,2024-06-16\r\n9030,1697,APAC,electronics,online,53.67,6,0.109,none,2024-01-16\r\n9031,1571,EMEA,grocery,retail,63.19,7,0.141,bundle,2024-07-03\r\n9032,1201,LATAM,grocery,online,60.44,5,0.019,none,2024-10-03\r\n9033,1354,AMER,electronics,retail,62.85,1,0.235,none,2024-03-19\r\n9034,1521,LATAM,grocery,retail,43.47,7,0.060,coupon,2024-07-01\r\n9035,1078,APAC,grocery,online,123.76,5,0.234,none,2024-08-02\r\n9036,1892,LATAM,grocery,online,46.65,7,0.110,bundle,2024-10-27\r\n9037,1053,AMER,electronics,mobile,75.72,3,0.169,none,2024-11-08\r\n9038,1359,LATAM,home,mobile,29.07,2,0.129,none,2024-06-25\r\n9039,2056,LATAM,toys,online,47.54,8,0.067,coupon,2024-12-22\r\n9040,1118,AMER,sports,retail,54.09,3,0.218,loyalty,2024-10-11\r\n9041,1304,LATAM,grocery,retail,66.71,6,0.071,loyalty,2024-02-23\r\n9042,1394,LATAM,toys,online,81.21,2,0.147,loyalty,2024-07-10\r\n9043,1038,APAC,fashion,retail,67.46,6,0.237,loyalty,2024-04-16\r\n9044,2019,AMER,grocery,mobile,107.80,3,0.016,none,2024-12-11\r\n9045,2257,AMER
,grocery,online,54.64,8,0.047,none,2024-08-12\r\n9046,2051,APAC,toys,online,50.91,6,0.053,loyalty,2024-06-25\r\n9047,1544,LATAM,grocery,online,31.21,5,0.175,none,2024-06-16\r\n9048,2143,AMER,electronics,online,71.34,1,0.056,none,2024-09-19\r\n9049,2011,AMER,electronics,retail,70.19,8,0.103,bundle,2024-12-20\r\n9050,1493,APAC,home,partner,24.63,6,0.208,loyalty,2024-02-22\r\n9051,1693,EMEA,home,mobile,25.46,4,0.026,loyalty,2024-10-24\r\n9052,2278,APAC,home,mobile,41.74,5,0.196,loyalty,2024-05-14\r\n9053,1506,EMEA,sports,retail,45.19,5,0.144,bundle,2024-09-14\r\n9054,1274,LATAM,sports,retail,66.66,8,0.212,coupon,2024-12-10\r\n9055,2157,AMER,electronics,retail,83.33,2,0.075,coupon,2024-07-19\r\n9056,1226,AMER,fashion,retail,47.44,7,0.217,coupon,2024-11-16\r\n9057,2138,APAC,toys,online,39.54,2,0.029,coupon,2024-05-27\r\n9058,1907,EMEA,sports,retail,64.20,6,0.103,bundle,2024-12-20\r\n9059,1813,EMEA,fashion,online,85.00,2,0.025,bundle,2024-10-12\r\n9060,1024,APAC,home,retail,28.36,8,0.182,none,2024-09-01\r\n9061,1454,APAC,grocery,mobile,38.65,4,0.247,none,2024-05-19\r\n9062,1222,AMER,fashion,retail,47.12,7,0.124,none,2024-02-28\r\n9063,1454,APAC,toys,online,32.21,6,0.091,coupon,2024-09-09\r\n9064,1193,APAC,grocery,online,68.59,8,0.231,none,2024-03-06\r\n9065,1672,APAC,home,online,125.68,1,0.055,coupon,2024-03-08\r\n9066,2254,LATAM,grocery,retail,49.85,7,0.120,none,2024-06-12\r\n9067,2085,AMER,grocery,retail,37.39,7,0.204,coupon,2024-11-06\r\n9068,2299,EMEA,grocery,retail,73.21,8,0.199,none,2024-01-11\r\n9069,1715,AMER,sports,mobile,82.23,8,0.158,coupon,2024-07-20\r\n9070,2424,LATAM,toys,retail,52.92,7,0.033,none,2024-05-18\r\n9071,1538,AMER,toys,retail,36.23,2,0.212,none,2024-05-26\r\n9072,1775,EMEA,electronics,mobile,47.18,6,0.093,none,2024-08-13\r\n9073,2008,APAC,grocery,retail,28.77,6,0.024,coupon,2024-01-11\r\n9074,1679,APAC,electronics,mobile,83.93,1,0.195,none,2024-09-07\r\n9075,1543,AMER,grocery,retail,117.53,1,0.078,none,2024-01-02\r\n9076,2392,EMEA,toys,retail,45.
65,2,0.086,bundle,2024-05-11\r\n9077,2130,EMEA,grocery,online,87.27,3,0.071,none,2024-05-17\r\n9078,1089,LATAM,electronics,retail,47.71,5,0.046,none,2024-01-08\r\n9079,2092,AMER,grocery,mobile,53.16,1,0.181,none,2024-01-12\r\n9080,2023,LATAM,electronics,online,43.59,3,0.153,coupon,2024-07-28\r\n9081,2408,EMEA,fashion,online,91.29,8,0.227,none,2024-04-25\r\n9082,1203,AMER,grocery,mobile,17.44,4,0.128,bundle,2024-05-14\r\n9083,1702,AMER,fashion,mobile,32.61,4,0.124,none,2024-05-17\r\n9084,1739,AMER,fashion,online,88.73,3,0.131,none,2024-09-01\r\n9085,1548,EMEA,home,retail,87.88,3,0.074,none,2024-01-09\r\n9086,1127,EMEA,sports,online,34.32,2,0.034,none,2024-12-21\r\n9087,2172,EMEA,fashion,online,68.08,2,0.179,none,2024-06-09\r\n9088,2454,LATAM,electronics,retail,103.71,6,0.013,none,2024-08-22\r\n9089,2456,APAC,grocery,online,102.36,1,0.020,bundle,2024-10-27\r\n9090,1923,LATAM,grocery,mobile,63.07,1,0.139,none,2024-08-03\r\n9091,1695,LATAM,grocery,retail,67.35,3,0.023,bundle,2024-03-04\r\n9092,1517,AMER,fashion,online,72.61,5,0.167,coupon,2024-05-22\r\n9093,2276,AMER,toys,retail,57.23,7,0.059,none,2024-11-21\r\n9094,1765,EMEA,sports,online,41.50,8,0.183,none,2024-10-07\r\n9095,1063,AMER,grocery,mobile,242.91,2,0.026,loyalty,2024-07-15\r\n9096,2088,EMEA,sports,online,112.53,7,0.040,loyalty,2024-09-11\r\n9097,1858,LATAM,electronics,online,76.91,5,0.242,none,2024-08-27\r\n9098,2005,APAC,electronics,retail,72.45,8,0.227,bundle,2024-12-11\r\n9099,2236,APAC,toys,online,80.66,4,0.030,none,2024-01-02\r\n9100,1912,APAC,fashion,online,48.05,4,0.221,none,2024-06-13\r\n9101,1087,AMER,sports,retail,26.71,3,0.249,bundle,2024-10-07\r\n9102,1095,APAC,grocery,online,115.58,3,0.183,coupon,2024-01-03\r\n9103,2256,AMER,electronics,retail,74.83,4,0.203,bundle,2024-10-21\r\n9104,2326,LATAM,grocery,online,21.56,6,0.214,coupon,2024-06-25\r\n9105,1033,APAC,fashion,online,114.93,4,0.157,none,2024-07-02\r\n9106,1350,LATAM,home,retail,57.63,6,0.059,coupon,2024-10-19\r\n9107,1675,LATAM,sports,onlin
e,78.53,8,0.246,bundle,2024-05-11\r\n9108,2413,AMER,grocery,retail,80.84,8,0.239,none,2024-06-09\r\n9109,2071,APAC,grocery,retail,43.43,6,0.080,loyalty,2024-05-17\r\n9110,2108,AMER,sports,online,29.93,2,0.114,none,2024-05-11\r\n9111,2382,LATAM,electronics,retail,95.08,4,0.029,coupon,2024-08-05\r\n9112,2445,APAC,home,online,38.98,8,0.141,coupon,2024-12-25\r\n9113,1068,APAC,electronics,online,165.52,7,0.153,bundle,2024-03-04\r\n9114,1904,APAC,grocery,online,71.43,1,0.205,none,2024-10-20\r\n9115,1812,EMEA,sports,retail,47.07,1,0.119,none,2024-12-06\r\n9116,1334,APAC,sports,retail,54.59,7,0.051,coupon,2024-07-05\r\n9117,1377,APAC,sports,online,24.40,7,0.188,none,2024-06-19\r\n9118,1368,EMEA,electronics,retail,52.98,3,0.222,bundle,2024-05-26\r\n9119,1299,LATAM,sports,retail,13.16,7,0.157,none,2024-02-27\r\n9120,2085,AMER,home,online,61.87,4,0.044,coupon,2024-06-14\r\n9121,1203,AMER,grocery,online,28.74,2,0.140,coupon,2024-06-12\r\n9122,1313,EMEA,grocery,online,106.51,1,0.053,none,2024-10-18\r\n9123,2211,APAC,electronics,retail,53.56,5,0.083,none,2024-06-11\r\n9124,2096,LATAM,grocery,retail,18.93,5,0.220,coupon,2024-09-17\r\n9125,1780,APAC,grocery,retail,46.46,6,0.117,bundle,2024-11-12\r\n9126,1976,AMER,fashion,mobile,34.31,8,0.121,loyalty,2024-10-04\r\n9127,1271,EMEA,electronics,retail,49.09,3,0.080,none,2024-11-28\r\n9128,1971,EMEA,toys,mobile,16.16,7,0.247,none,2024-06-05\r\n9129,1907,EMEA,grocery,retail,68.11,8,0.158,none,2024-06-14\r\n9130,2420,EMEA,home,online,56.24,1,0.177,none,2024-05-25\r\n9131,1937,APAC,home,retail,118.91,6,0.145,none,2024-09-08\r\n9132,1835,AMER,toys,online,17.81,2,0.121,bundle,2024-05-27\r\n9133,1514,LATAM,home,online,90.36,5,0.192,coupon,2024-06-10\r\n9134,1915,LATAM,electronics,retail,43.47,2,0.132,coupon,2024-07-14\r\n9135,2301,EMEA,sports,online,100.28,3,0.008,loyalty,2024-12-03\r\n9136,1261,APAC,electronics,online,38.53,8,0.202,none,2024-12-25\r\n9137,2027,EMEA,sports,online,69.83,4,0.192,none,2024-10-18\r\n9138,2230,LATAM,fashion,online,
40.13,2,0.234,loyalty,2024-01-16\r\n9139,1586,LATAM,grocery,retail,40.46,5,0.068,none,2024-05-10\r\n9140,1363,EMEA,toys,retail,79.36,3,0.016,bundle,2024-09-09\r\n9141,2331,APAC,electronics,retail,16.59,8,0.091,none,2024-01-27\r\n9142,2194,APAC,sports,retail,57.75,4,0.168,loyalty,2024-02-03\r\n9143,1975,EMEA,grocery,retail,36.49,2,0.044,coupon,2024-12-02\r\n9144,1629,LATAM,electronics,online,71.00,4,0.213,none,2024-08-23\r\n9145,2083,LATAM,toys,retail,22.05,1,0.058,loyalty,2024-04-25\r\n9146,1002,EMEA,toys,mobile,45.64,1,0.199,none,2024-01-23\r\n9147,2070,APAC,grocery,retail,41.05,3,0.125,coupon,2024-01-24\r\n9148,1899,APAC,grocery,retail,85.83,7,0.179,coupon,2024-04-23\r\n9149,1913,LATAM,fashion,online,43.44,8,0.023,none,2024-08-16\r\n9150,1253,AMER,fashion,online,72.54,3,0.125,none,2024-12-18\r\n9151,1580,AMER,toys,online,120.51,8,0.008,coupon,2024-01-19\r\n9152,1121,EMEA,sports,mobile,54.06,3,0.200,bundle,2024-11-21\r\n9153,1757,EMEA,toys,mobile,190.13,3,0.245,none,2024-09-01\r\n9154,2359,LATAM,electronics,online,32.15,2,0.194,coupon,2024-12-16\r\n9155,2040,LATAM,electronics,retail,51.60,4,0.215,bundle,2024-07-10\r\n9156,1078,APAC,grocery,online,38.63,4,0.170,none,2024-06-13\r\n9157,1658,AMER,fashion,online,44.53,4,0.083,none,2024-09-24\r\n9158,2082,APAC,electronics,online,64.72,6,0.198,none,2024-12-05\r\n9159,1154,LATAM,electronics,retail,53.94,4,0.055,coupon,2024-04-13\r\n9160,1123,LATAM,electronics,online,31.70,8,0.105,bundle,2024-09-17\r\n9161,1198,AMER,grocery,retail,35.51,6,0.098,bundle,2024-02-09\r\n9162,1600,AMER,grocery,partner,59.24,2,0.246,none,2024-01-11\r\n9163,2128,EMEA,electronics,retail,54.40,6,0.048,none,2024-06-23\r\n9164,1972,LATAM,grocery,retail,46.15,6,0.249,none,2024-01-09\r\n9165,1727,APAC,fashion,online,100.10,2,0.085,bundle,2024-08-27\r\n9166,2392,EMEA,sports,partner,51.37,8,0.044,none,2024-02-12\r\n9167,2364,APAC,fashion,online,50.76,6,0.242,loyalty,2024-11-03\r\n9168,1544,LATAM,electronics,online,53.17,1,0.015,none,2024-12-04\r\n9169,203
3,LATAM,electronics,mobile,40.23,7,0.223,loyalty,2024-09-03\r\n9170,2329,LATAM,home,retail,45.15,8,0.249,none,2024-12-11\r\n9171,1091,EMEA,electronics,retail,40.66,2,0.012,none,2024-11-09\r\n9172,1605,APAC,sports,retail,41.83,5,0.126,coupon,2024-10-17\r\n9173,1820,AMER,grocery,partner,64.69,3,0.179,coupon,2024-01-25\r\n9174,2437,LATAM,grocery,retail,39.80,4,0.242,none,2024-06-26\r\n9175,2338,AMER,grocery,mobile,132.62,2,0.145,coupon,2024-12-16\r\n9176,1516,EMEA,home,online,84.82,2,0.052,none,2024-09-22\r\n9177,2278,APAC,electronics,online,44.61,5,0.071,coupon,2024-11-10\r\n9178,1753,APAC,home,online,39.14,6,0.084,none,2024-04-05\r\n9179,1216,APAC,home,online,33.89,7,0.042,loyalty,2024-12-22\r\n9180,1430,EMEA,home,retail,70.32,7,0.236,none,2024-09-14\r\n9181,1186,APAC,electronics,online,99.73,7,0.152,loyalty,2024-12-16\r\n9182,1701,LATAM,sports,mobile,177.25,3,0.191,none,2024-12-13\r\n9183,1976,AMER,grocery,retail,68.66,8,0.100,none,2024-01-19\r\n9184,1851,EMEA,fashion,online,34.42,7,0.121,loyalty,2024-05-01\r\n9185,1839,APAC,electronics,retail,88.63,3,0.105,coupon,2024-05-13\r\n9186,1871,APAC,home,online,36.16,6,0.069,none,2024-10-25\r\n9187,1903,LATAM,home,retail,53.76,7,0.013,loyalty,2024-03-14\r\n9188,1223,LATAM,grocery,online,94.28,3,0.161,bundle,2024-04-27\r\n9189,2010,APAC,grocery,retail,57.27,4,0.110,none,2024-03-05\r\n9190,1336,APAC,toys,mobile,56.06,8,0.246,loyalty,2024-12-04\r\n9191,2173,LATAM,home,online,59.81,5,0.233,coupon,2024-03-12\r\n9192,2196,AMER,grocery,online,69.16,1,0.220,loyalty,2024-01-27\r\n9193,2403,LATAM,electronics,retail,64.79,6,0.237,none,2024-06-15\r\n9194,2034,LATAM,electronics,retail,70.10,4,0.207,none,2024-06-15\r\n9195,2400,EMEA,grocery,online,132.54,6,0.088,loyalty,2024-08-10\r\n9196,1392,AMER,electronics,retail,65.54,4,0.120,loyalty,2024-04-10\r\n9197,1032,AMER,electronics,retail,33.42,4,0.215,bundle,2024-09-23\r\n9198,1106,AMER,fashion,online,57.81,3,0.211,none,2024-01-15\r\n9199,1826,LATAM,electronics,mobile,63.23,4,0.212,loyalt
y,2024-08-26\r\n9200,2441,EMEA,fashion,mobile,149.16,8,0.042,none,2024-04-26\r\n9201,1975,EMEA,sports,online,28.03,4,0.186,coupon,2024-11-23\r\n9202,1326,AMER,home,mobile,35.97,8,0.050,bundle,2024-06-08\r\n9203,2286,AMER,electronics,retail,24.65,2,0.194,coupon,2024-11-12\r\n9204,2159,AMER,grocery,online,64.67,2,0.197,loyalty,2024-05-15\r\n9205,1054,EMEA,electronics,online,94.15,3,0.239,bundle,2024-04-08\r\n9206,2001,EMEA,grocery,retail,166.63,2,0.089,coupon,2024-12-06\r\n9207,1210,LATAM,fashion,online,44.84,1,0.169,loyalty,2024-09-20\r\n9208,1861,AMER,grocery,retail,50.18,4,0.244,coupon,2024-08-27\r\n9209,1734,AMER,grocery,mobile,27.56,5,0.226,none,2024-07-23\r\n9210,2429,EMEA,electronics,online,74.74,3,0.226,none,2024-07-06\r\n9211,1597,APAC,electronics,retail,23.13,7,0.199,coupon,2024-11-08\r\n9212,1772,EMEA,electronics,online,45.40,1,0.007,loyalty,2024-07-02\r\n9213,1669,AMER,grocery,retail,71.11,6,0.196,bundle,2024-03-07\r\n9214,1206,EMEA,grocery,mobile,33.89,2,0.008,loyalty,2024-07-16\r\n9215,1453,APAC,grocery,mobile,43.36,7,0.012,coupon,2024-10-12\r\n9216,1115,AMER,home,online,81.44,5,0.043,none,2024-05-10\r\n9217,2330,EMEA,toys,retail,16.87,3,0.059,loyalty,2024-04-17\r\n9218,2159,AMER,home,retail,55.35,4,0.112,none,2024-12-27\r\n9219,1172,APAC,electronics,mobile,161.49,2,0.022,coupon,2024-07-08\r\n9220,1496,AMER,home,online,58.26,7,0.097,coupon,2024-11-03\r\n9221,1644,EMEA,home,online,85.33,6,0.186,none,2024-02-16\r\n9222,1000,APAC,sports,mobile,59.14,2,0.214,none,2024-03-23\r\n9223,1642,EMEA,home,retail,58.81,3,0.164,none,2024-05-23\r\n9224,1426,AMER,grocery,retail,53.43,7,0.097,none,2024-12-16\r\n9225,1697,APAC,sports,mobile,67.24,1,0.224,coupon,2024-05-15\r\n9226,2389,LATAM,toys,online,58.33,1,0.050,none,2024-12-22\r\n9227,2284,EMEA,electronics,online,103.08,3,0.001,none,2024-03-13\r\n9228,1125,LATAM,grocery,online,90.59,7,0.095,none,2024-07-08\r\n9229,2406,EMEA,electronics,online,37.78,6,0.181,bundle,2024-09-04\r\n9230,1898,EMEA,home,online,83.14,4,0.157,
none,2024-07-06\r\n9231,1495,LATAM,electronics,retail,71.27,8,0.165,bundle,2024-10-26\r\n9232,2288,AMER,electronics,online,54.44,3,0.135,none,2024-04-15\r\n9233,1447,LATAM,sports,mobile,24.61,2,0.111,none,2024-04-09\r\n9234,2057,APAC,home,retail,90.41,6,0.059,none,2024-05-10\r\n9235,1114,APAC,grocery,online,57.80,6,0.111,bundle,2024-12-19\r\n9236,1918,EMEA,electronics,online,51.95,1,0.170,none,2024-08-17\r\n9237,1889,APAC,electronics,retail,27.27,8,0.146,loyalty,2024-05-22\r\n9238,1927,EMEA,grocery,retail,93.67,7,0.179,none,2024-02-12\r\n9239,2432,AMER,home,retail,19.54,7,0.097,none,2024-11-14\r\n9240,2411,EMEA,grocery,mobile,24.84,8,0.026,none,2024-05-08\r\n9241,2262,APAC,fashion,retail,36.49,6,0.038,none,2024-02-20\r\n9242,2465,EMEA,toys,online,103.08,8,0.015,none,2024-02-05\r\n9243,1143,LATAM,toys,online,198.54,7,0.190,none,2024-10-11\r\n9244,1729,AMER,toys,mobile,20.72,2,0.136,coupon,2024-10-10\r\n9245,1723,LATAM,electronics,retail,75.01,4,0.096,bundle,2024-02-04\r\n9246,1953,EMEA,home,online,68.91,8,0.133,none,2024-02-22\r\n9247,1148,AMER,home,retail,58.63,5,0.084,coupon,2024-08-04\r\n9248,1565,AMER,grocery,retail,88.97,4,0.036,coupon,2024-10-02\r\n9249,1990,EMEA,home,online,111.66,4,0.132,coupon,2024-04-04\r\n9250,1095,APAC,electronics,retail,68.10,4,0.240,none,2024-07-03\r\n9251,2283,AMER,electronics,mobile,68.57,3,0.209,coupon,2024-05-11\r\n9252,2475,AMER,electronics,retail,27.52,7,0.047,coupon,2024-09-24\r\n9253,2178,AMER,home,online,47.36,6,0.033,none,2024-08-11\r\n9254,1227,AMER,home,mobile,47.64,6,0.060,none,2024-09-15\r\n9255,2153,APAC,grocery,online,118.02,8,0.245,none,2024-04-15\r\n9256,1279,EMEA,toys,mobile,83.16,5,0.228,none,2024-07-20\r\n9257,2462,EMEA,grocery,retail,57.56,6,0.188,none,2024-03-13\r\n9258,1124,AMER,electronics,online,30.61,2,0.062,none,2024-12-27\r\n9259,1186,APAC,grocery,online,58.47,3,0.221,bundle,2024-05-12\r\n9260,2450,EMEA,grocery,retail,52.35,1,0.238,bundle,2024-05-16\r\n9261,2023,LATAM,sports,online,43.03,7,0.046,coupon,2024-
12-14\r\n9262,2096,LATAM,electronics,online,117.75,4,0.088,none,2024-06-03\r\n9263,1181,LATAM,grocery,retail,45.65,6,0.090,none,2024-05-05\r\n9264,1630,APAC,home,online,57.61,7,0.080,none,2024-10-16\r\n9265,1425,EMEA,home,online,95.10,1,0.152,coupon,2024-03-19\r\n9266,1187,AMER,electronics,online,61.81,1,0.022,none,2024-02-28\r\n9267,1397,LATAM,home,retail,34.11,3,0.059,bundle,2024-04-25\r\n9268,1133,EMEA,fashion,online,63.30,5,0.234,none,2024-01-25\r\n9269,1814,AMER,grocery,mobile,53.58,6,0.041,none,2024-10-26\r\n9270,1932,EMEA,home,retail,52.73,8,0.089,none,2024-01-10\r\n9271,1004,LATAM,sports,retail,183.00,7,0.027,none,2024-02-16\r\n9272,1343,LATAM,electronics,mobile,38.33,4,0.213,none,2024-03-27\r\n9273,2092,AMER,fashion,mobile,75.87,5,0.061,none,2024-04-19\r\n9274,1679,APAC,grocery,mobile,38.98,3,0.247,loyalty,2024-03-18\r\n9275,1884,APAC,grocery,online,64.92,6,0.069,none,2024-12-22\r\n9276,1449,EMEA,electronics,retail,36.42,5,0.118,coupon,2024-02-25\r\n9277,1922,EMEA,grocery,retail,50.69,3,0.222,none,2024-04-22\r\n9278,2066,APAC,electronics,online,67.60,6,0.212,none,2024-04-23\r\n9279,1152,LATAM,home,online,22.14,7,0.128,none,2024-12-25\r\n9280,2081,APAC,electronics,partner,90.38,4,0.185,coupon,2024-04-14\r\n9281,1114,APAC,sports,online,50.84,5,0.096,none,2024-01-15\r\n9282,1736,AMER,sports,online,37.10,3,0.220,bundle,2024-09-03\r\n9283,1787,APAC,grocery,mobile,32.51,5,0.172,none,2024-04-10\r\n9284,1070,EMEA,fashion,retail,35.64,3,0.172,bundle,2024-09-21\r\n9285,1300,EMEA,grocery,online,111.94,5,0.109,bundle,2024-05-17\r\n9286,2354,LATAM,grocery,retail,32.08,1,0.206,coupon,2024-07-11\r\n9287,1887,LATAM,fashion,partner,86.14,5,0.036,none,2024-06-20\r\n9288,2058,LATAM,sports,retail,28.25,4,0.229,bundle,2024-12-16\r\n9289,2251,APAC,grocery,online,54.11,5,0.155,none,2024-08-21\r\n9290,2310,EMEA,fashion,online,53.42,2,0.224,none,2024-08-26\r\n9291,1364,EMEA,sports,partner,107.90,4,0.171,loyalty,2024-07-16\r\n9292,1976,AMER,grocery,online,44.79,6,0.025,none,2024-04-
03\r\n9293,1260,LATAM,home,retail,33.19,6,0.149,none,2024-10-23\r\n9294,1141,AMER,sports,online,69.84,6,0.012,none,2024-10-04\r\n9295,1338,EMEA,grocery,retail,44.08,2,0.059,none,2024-09-28\r\n9296,1703,AMER,electronics,online,72.38,6,0.019,coupon,2024-11-08\r\n9297,1566,EMEA,sports,online,70.87,5,0.241,bundle,2024-04-18\r\n9298,2039,EMEA,sports,online,78.19,5,0.206,none,2024-12-24\r\n9299,2271,LATAM,electronics,retail,123.72,5,0.166,none,2024-12-15\r\n9300,2384,LATAM,grocery,online,98.11,8,0.172,none,2024-05-26\r\n9301,2074,AMER,grocery,online,21.55,3,0.161,none,2024-08-28\r\n9302,1285,EMEA,electronics,online,180.49,6,0.059,none,2024-04-09\r\n9303,1708,LATAM,sports,mobile,36.11,3,0.040,none,2024-12-05\r\n9304,1273,AMER,electronics,online,36.33,3,0.043,none,2024-10-04\r\n9305,1792,AMER,electronics,retail,31.02,4,0.047,none,2024-07-22\r\n9306,1169,LATAM,toys,mobile,92.82,1,0.136,coupon,2024-07-21\r\n9307,2314,EMEA,fashion,online,45.33,7,0.027,none,2024-06-09\r\n9308,1728,AMER,electronics,partner,47.88,2,0.231,coupon,2024-09-23\r\n9309,1384,LATAM,fashion,retail,32.13,3,0.050,none,2024-02-23\r\n9310,2088,EMEA,sports,online,48.79,3,0.201,coupon,2024-07-12\r\n9311,2356,LATAM,home,retail,61.27,5,0.218,none,2024-03-25\r\n9312,1245,APAC,electronics,online,80.32,3,0.003,none,2024-07-14\r\n9313,1606,AMER,sports,online,44.20,6,0.106,bundle,2024-01-08\r\n9314,2306,AMER,sports,online,112.05,6,0.059,none,2024-11-03\r\n9315,2073,AMER,fashion,retail,99.04,3,0.199,none,2024-06-06\r\n9316,1173,LATAM,fashion,retail,85.67,5,0.058,none,2024-06-21\r\n9317,1000,APAC,toys,online,44.74,3,0.177,none,2024-04-10\r\n9318,1975,EMEA,home,retail,138.92,8,0.247,none,2024-04-05\r\n9319,1780,APAC,sports,online,85.13,5,0.236,coupon,2024-09-09\r\n9320,1011,APAC,grocery,mobile,72.97,4,0.140,none,2024-06-26\r\n9321,1872,LATAM,fashion,mobile,56.70,6,0.126,none,2024-11-05\r\n9322,1425,EMEA,electronics,retail,53.72,6,0.066,none,2024-05-22\r\n9323,1706,EMEA,electronics,online,52.37,8,0.025,none,2024-06-21\r\n
9324,1310,AMER,fashion,retail,69.38,2,0.184,none,2024-05-01\r\n9325,2026,LATAM,electronics,partner,22.19,1,0.060,loyalty,2024-09-03\r\n9326,1837,LATAM,electronics,online,45.32,1,0.023,none,2024-12-10\r\n9327,1743,LATAM,sports,online,184.27,7,0.179,none,2024-03-23\r\n9328,1863,EMEA,home,online,150.53,5,0.072,bundle,2024-07-01\r\n9329,2309,AMER,toys,retail,38.28,2,0.161,none,2024-10-08\r\n9330,1339,EMEA,grocery,retail,47.98,1,0.105,none,2024-04-13\r\n9331,1164,EMEA,electronics,mobile,85.94,1,0.040,none,2024-11-11\r\n9332,1072,LATAM,sports,online,73.42,5,0.085,none,2024-07-27\r\n9333,1566,EMEA,electronics,mobile,57.77,6,0.051,none,2024-09-22\r\n9334,1346,AMER,grocery,retail,101.80,2,0.100,bundle,2024-11-27\r\n9335,2381,AMER,home,online,40.43,5,0.038,bundle,2024-07-25\r\n9336,1547,AMER,home,retail,66.14,6,0.246,none,2024-10-19\r\n9337,2000,APAC,home,retail,18.90,5,0.054,coupon,2024-08-02\r\n9338,1989,LATAM,electronics,online,29.74,5,0.157,none,2024-01-05\r\n9339,1281,AMER,electronics,online,55.21,8,0.243,none,2024-02-22\r\n9340,1312,EMEA,grocery,online,34.63,5,0.229,loyalty,2024-05-12\r\n9341,1383,AMER,electronics,retail,77.26,6,0.184,none,2024-08-18\r\n9342,1222,AMER,sports,retail,68.03,5,0.091,none,2024-08-22\r\n9343,1594,LATAM,grocery,retail,61.99,1,0.141,none,2024-09-21\r\n9344,1887,LATAM,electronics,retail,69.08,7,0.003,none,2024-01-16\r\n9345,1802,AMER,grocery,online,42.87,3,0.230,none,2024-06-03\r\n9346,2358,AMER,fashion,mobile,29.25,6,0.205,coupon,2024-02-21\r\n9347,1909,APAC,electronics,retail,75.25,8,0.132,coupon,2024-10-04\r\n9348,1000,APAC,electronics,online,38.11,2,0.196,coupon,2024-04-03\r\n9349,1026,APAC,home,online,45.20,4,0.156,none,2024-12-13\r\n9350,1203,AMER,electronics,online,50.83,1,0.037,none,2024-12-22\r\n9351,2351,EMEA,sports,online,291.32,4,0.010,loyalty,2024-12-23\r\n9352,1544,LATAM,home,online,82.71,1,0.229,loyalty,2024-05-25\r\n9353,2182,AMER,home,online,74.86,6,0.062,bundle,2024-04-26\r\n9354,1250,APAC,toys,online,80.91,1,0.138,none,2024-01
-14\r\n9355,1458,APAC,home,retail,146.17,5,0.147,none,2024-05-04\r\n9356,2136,AMER,fashion,online,54.75,2,0.249,loyalty,2024-01-22\r\n9357,1370,APAC,home,online,60.26,7,0.142,none,2024-12-15\r\n9358,2140,AMER,sports,mobile,157.65,4,0.051,bundle,2024-01-12\r\n9359,2152,EMEA,home,retail,81.32,1,0.112,coupon,2024-02-24\r\n9360,1037,EMEA,fashion,partner,80.66,1,0.221,bundle,2024-11-24\r\n9361,1853,APAC,electronics,mobile,32.14,8,0.214,none,2024-01-05\r\n9362,1054,EMEA,electronics,online,64.06,4,0.024,bundle,2024-10-19\r\n9363,1811,APAC,electronics,mobile,118.59,1,0.210,none,2024-11-18\r\n9364,1174,APAC,grocery,retail,30.35,4,0.036,none,2024-03-23\r\n9365,1121,EMEA,electronics,retail,57.99,3,0.047,coupon,2024-01-10\r\n9366,1588,LATAM,fashion,retail,28.98,6,0.001,bundle,2024-08-25\r\n9367,1029,EMEA,toys,retail,128.96,4,0.200,none,2024-09-12\r\n9368,1919,EMEA,sports,retail,40.90,2,0.057,none,2024-10-13\r\n9369,1916,AMER,electronics,retail,44.11,1,0.213,bundle,2024-09-20\r\n9370,1772,EMEA,sports,online,41.40,1,0.012,none,2024-12-10\r\n9371,1386,AMER,fashion,online,48.79,8,0.214,bundle,2024-07-24\r\n9372,1361,LATAM,sports,online,71.15,1,0.171,none,2024-09-27\r\n9373,2424,LATAM,toys,online,42.61,7,0.205,loyalty,2024-01-13\r\n9374,1539,LATAM,home,online,57.82,1,0.221,none,2024-10-04\r\n9375,1817,APAC,fashion,retail,48.22,6,0.007,none,2024-01-25\r\n9376,1825,AMER,home,retail,130.62,1,0.062,bundle,2024-05-26\r\n9377,1826,LATAM,grocery,online,91.65,6,0.147,none,2024-12-27\r\n9378,2313,LATAM,toys,online,65.77,6,0.211,coupon,2024-03-10\r\n9379,1958,APAC,home,retail,70.66,1,0.042,coupon,2024-08-08\r\n9380,1189,AMER,sports,retail,27.89,7,0.149,loyalty,2024-10-12\r\n9381,2474,LATAM,grocery,retail,30.96,8,0.171,none,2024-09-14\r\n9382,1064,AMER,toys,online,32.69,4,0.102,coupon,2024-09-25\r\n9383,1866,EMEA,grocery,online,90.32,8,0.179,none,2024-02-26\r\n9384,1448,EMEA,sports,mobile,75.51,4,0.084,none,2024-07-23\r\n9385,2105,APAC,home,online,78.65,2,0.011,loyalty,2024-08-04\r\n9386,2013,
APAC,home,online,83.98,6,0.246,none,2024-01-20\r\n9387,1616,APAC,toys,mobile,73.15,3,0.124,loyalty,2024-08-05\r\n9388,2150,APAC,grocery,retail,25.66,4,0.173,bundle,2024-03-25\r\n9389,2346,LATAM,sports,online,36.57,7,0.032,none,2024-07-05\r\n9390,2302,APAC,grocery,online,90.39,4,0.192,coupon,2024-08-16\r\n9391,1115,AMER,toys,retail,72.23,5,0.024,none,2024-07-23\r\n9392,1942,APAC,toys,online,61.25,4,0.206,none,2024-02-17\r\n9393,2443,LATAM,fashion,retail,73.21,4,0.137,none,2024-11-21\r\n9394,1439,LATAM,electronics,retail,45.59,4,0.159,none,2024-02-06\r\n9395,1065,AMER,fashion,retail,142.59,3,0.191,coupon,2024-06-21\r\n9396,2220,LATAM,home,mobile,76.71,2,0.191,coupon,2024-06-24\r\n9397,1418,LATAM,home,retail,44.61,6,0.065,none,2024-12-09\r\n9398,1351,APAC,home,retail,44.38,4,0.039,bundle,2024-12-25\r\n9399,1241,APAC,grocery,online,132.34,1,0.229,none,2024-12-16\r\n9400,1038,APAC,grocery,retail,53.89,7,0.127,none,2024-04-18\r\n9401,2138,APAC,grocery,online,94.06,6,0.077,loyalty,2024-11-25\r\n9402,1729,AMER,fashion,online,23.23,4,0.224,none,2024-03-06\r\n9403,1208,AMER,grocery,online,38.44,8,0.170,coupon,2024-03-02\r\n9404,1399,AMER,grocery,online,63.50,8,0.198,none,2024-02-12\r\n9405,1634,AMER,fashion,online,55.26,2,0.059,none,2024-09-03\r\n9406,2165,AMER,toys,mobile,77.30,1,0.098,loyalty,2024-07-28\r\n9407,1949,AMER,electronics,online,41.24,4,0.145,none,2024-11-26\r\n9408,1664,LATAM,home,online,87.65,6,0.042,bundle,2024-01-21\r\n9409,1291,EMEA,electronics,mobile,62.42,6,0.159,coupon,2024-11-23\r\n9410,1712,LATAM,grocery,online,53.24,5,0.243,none,2024-02-07\r\n9411,1387,AMER,electronics,online,71.20,3,0.057,none,2024-11-14\r\n9412,2140,AMER,electronics,retail,33.94,6,0.140,coupon,2024-12-07\r\n9413,1141,AMER,fashion,mobile,52.52,7,0.020,none,2024-06-24\r\n9414,1430,EMEA,grocery,online,47.17,6,0.110,bundle,2024-11-25\r\n9415,1037,EMEA,grocery,mobile,104.04,8,0.123,none,2024-04-10\r\n9416,1017,AMER,fashion,online,67.28,1,0.030,coupon,2024-03-25\r\n9417,1690,LATAM,toys,onl
ine,43.67,1,0.232,none,2024-11-06\r\n9418,1951,LATAM,grocery,retail,96.94,5,0.092,none,2024-05-08\r\n9419,1941,AMER,grocery,retail,70.33,2,0.080,bundle,2024-04-28\r\n9420,2453,AMER,grocery,retail,47.67,6,0.186,none,2024-05-05\r\n9421,1184,AMER,home,online,38.36,3,0.223,coupon,2024-12-21\r\n9422,2007,LATAM,fashion,online,68.51,7,0.142,none,2024-11-15\r\n9423,1246,EMEA,grocery,online,37.36,8,0.018,none,2024-06-14\r\n9424,2352,APAC,electronics,online,62.69,4,0.214,none,2024-01-12\r\n9425,1339,EMEA,toys,retail,65.05,2,0.041,none,2024-04-21\r\n9426,1211,EMEA,home,mobile,51.16,8,0.213,none,2024-07-25\r\n9427,1249,EMEA,grocery,online,35.20,5,0.118,coupon,2024-09-01\r\n9428,2406,EMEA,toys,online,74.95,2,0.008,coupon,2024-10-09\r\n9429,2010,APAC,grocery,online,44.33,7,0.093,coupon,2024-02-13\r\n9430,1316,APAC,grocery,online,46.13,2,0.097,coupon,2024-03-24\r\n9431,2236,APAC,home,online,47.10,5,0.124,none,2024-10-28\r\n9432,1438,APAC,home,retail,99.05,4,0.039,none,2024-11-24\r\n9433,1205,APAC,sports,online,38.52,7,0.226,loyalty,2024-04-10\r\n9434,1097,EMEA,home,retail,115.56,5,0.239,coupon,2024-12-25\r\n9435,1974,EMEA,electronics,online,24.02,8,0.029,none,2024-03-18\r\n9436,1775,EMEA,toys,online,39.16,6,0.192,coupon,2024-11-16\r\n9437,1243,AMER,sports,mobile,53.66,1,0.085,loyalty,2024-11-13\r\n9438,2100,APAC,home,retail,56.61,1,0.017,none,2024-02-20\r\n9439,1353,EMEA,electronics,retail,63.92,5,0.058,none,2024-01-24\r\n9440,1057,LATAM,toys,mobile,31.54,6,0.021,none,2024-06-07\r\n9441,1243,AMER,toys,online,69.80,8,0.167,none,2024-04-04\r\n9442,1872,LATAM,grocery,retail,67.10,5,0.193,none,2024-03-16\r\n9443,1552,EMEA,fashion,mobile,35.11,6,0.088,bundle,2024-08-02\r\n9444,1078,APAC,electronics,online,25.07,6,0.113,none,2024-01-21\r\n9445,1978,AMER,grocery,online,81.55,3,0.119,loyalty,2024-09-25\r\n9446,1760,LATAM,electronics,mobile,55.30,8,0.249,none,2024-03-13\r\n9447,1886,LATAM,electronics,retail,116.83,4,0.131,coupon,2024-04-23\r\n9448,1606,AMER,home,online,49.23,4,0.089,coupon
,2024-12-12\r\n9449,1372,APAC,grocery,online,65.72,7,0.232,none,2024-03-27\r\n9450,1729,AMER,home,online,59.96,7,0.223,none,2024-12-03\r\n9451,1852,AMER,grocery,retail,98.82,1,0.006,bundle,2024-06-07\r\n9452,1439,LATAM,grocery,retail,62.31,1,0.221,none,2024-04-07\r\n9453,2271,LATAM,toys,online,27.59,4,0.021,none,2024-12-23\r\n9454,1149,LATAM,toys,online,74.52,2,0.207,none,2024-09-20\r\n9455,1897,AMER,home,retail,75.78,2,0.136,loyalty,2024-05-11\r\n9456,1215,LATAM,electronics,online,65.98,1,0.036,none,2024-08-08\r\n9457,1778,LATAM,electronics,partner,74.67,8,0.019,bundle,2024-06-11\r\n9458,1131,APAC,grocery,online,59.61,3,0.108,none,2024-11-18\r\n9459,2179,LATAM,home,retail,76.76,2,0.022,loyalty,2024-12-18\r\n9460,1395,APAC,electronics,partner,76.25,8,0.107,none,2024-11-01\r\n9461,1777,AMER,electronics,retail,61.02,6,0.170,bundle,2024-11-01\r\n9462,1221,LATAM,grocery,online,35.70,8,0.024,coupon,2024-07-28\r\n9463,1220,LATAM,grocery,retail,36.77,4,0.112,none,2024-03-26\r\n9464,1143,LATAM,fashion,mobile,67.95,6,0.175,none,2024-07-25\r\n9465,1429,APAC,toys,retail,34.32,6,0.039,none,2024-04-25\r\n9466,1460,LATAM,home,mobile,104.25,1,0.047,coupon,2024-02-27\r\n9467,1552,EMEA,toys,mobile,31.16,2,0.056,none,2024-06-24\r\n9468,1705,AMER,fashion,online,68.40,5,0.022,none,2024-09-18\r\n9469,1158,LATAM,sports,mobile,37.77,3,0.137,none,2024-03-08\r\n9470,2313,LATAM,electronics,online,92.90,5,0.030,none,2024-12-01\r\n9471,1795,EMEA,grocery,mobile,142.74,4,0.069,bundle,2024-01-09\r\n9472,2182,AMER,toys,online,109.63,6,0.207,none,2024-03-27\r\n9473,2043,EMEA,fashion,partner,48.39,7,0.049,coupon,2024-03-09\r\n9474,1455,APAC,electronics,mobile,30.85,3,0.069,none,2024-02-18\r\n9475,1530,APAC,electronics,online,167.77,1,0.097,loyalty,2024-09-20\r\n9476,1811,APAC,home,online,78.42,6,0.103,none,2024-03-04\r\n9477,2472,AMER,toys,retail,44.06,6,0.040,none,2024-07-27\r\n9478,1551,APAC,sports,online,109.96,5,0.062,coupon,2024-12-18\r\n9479,1390,APAC,grocery,retail,61.18,3,0.039,none,2024-09-
24\r\n9480,2313,LATAM,grocery,retail,23.17,2,0.104,none,2024-06-05\r\n9481,2183,EMEA,grocery,online,31.70,6,0.047,none,2024-06-06\r\n9482,1590,APAC,toys,retail,35.73,1,0.161,loyalty,2024-02-13\r\n9483,2207,APAC,home,online,51.58,2,0.168,none,2024-10-25\r\n9484,1111,APAC,electronics,retail,121.99,4,0.185,none,2024-08-21\r\n9485,1959,EMEA,home,online,42.52,2,0.054,bundle,2024-12-04\r\n9486,2120,AMER,fashion,retail,59.09,4,0.043,coupon,2024-02-01\r\n9487,2115,APAC,electronics,mobile,93.70,4,0.213,none,2024-03-07\r\n9488,2341,EMEA,electronics,retail,98.01,6,0.239,none,2024-06-25\r\n9489,1684,EMEA,home,online,30.09,7,0.109,loyalty,2024-07-13\r\n9490,1437,EMEA,home,retail,49.03,1,0.076,none,2024-01-26\r\n9491,2225,EMEA,home,online,40.54,5,0.090,coupon,2024-05-28\r\n9492,1318,LATAM,home,mobile,31.59,5,0.240,none,2024-11-06\r\n9493,2263,AMER,fashion,mobile,29.81,4,0.068,loyalty,2024-04-24\r\n9494,1555,AMER,grocery,retail,64.85,5,0.138,coupon,2024-02-19\r\n9495,1205,APAC,fashion,online,29.82,3,0.061,loyalty,2024-11-14\r\n9496,2070,APAC,fashion,online,89.23,1,0.056,none,2024-01-13\r\n9497,1136,EMEA,grocery,online,38.27,1,0.221,none,2024-12-11\r\n9498,1194,APAC,grocery,online,64.53,5,0.047,coupon,2024-05-02\r\n9499,1291,EMEA,grocery,online,69.02,2,0.165,none,2024-09-21\r\n9500,1130,LATAM,fashion,online,52.35,3,0.206,bundle,2024-07-04\r\n9501,1909,APAC,sports,retail,27.94,3,0.100,bundle,2024-07-23\r\n9502,1484,AMER,grocery,retail,69.98,8,0.041,none,2024-09-09\r\n9503,1461,LATAM,home,online,185.67,2,0.044,loyalty,2024-06-11\r\n9504,1801,LATAM,home,online,27.24,6,0.064,none,2024-02-03\r\n9505,2293,LATAM,toys,online,56.26,6,0.037,none,2024-02-09\r\n9506,2093,LATAM,electronics,mobile,45.30,5,0.084,none,2024-03-12\r\n9507,2272,EMEA,sports,online,11.02,8,0.167,none,2024-02-13\r\n9508,2073,AMER,grocery,online,119.81,4,0.211,none,2024-08-27\r\n9509,1211,EMEA,fashion,online,61.30,3,0.126,bundle,2024-05-02\r\n9510,1839,APAC,grocery,mobile,37.69,8,0.072,none,2024-06-13\r\n9511,2293,LATAM,
sports,retail,79.04,7,0.200,bundle,2024-12-27\r\n9512,2059,AMER,electronics,online,54.32,5,0.121,none,2024-01-09\r\n9513,1061,APAC,electronics,mobile,72.72,7,0.162,none,2024-01-22\r\n9514,1489,AMER,home,mobile,38.29,4,0.059,bundle,2024-08-26\r\n9515,1890,LATAM,grocery,online,56.06,7,0.084,loyalty,2024-02-02\r\n9516,1983,LATAM,sports,retail,62.75,2,0.231,loyalty,2024-09-24\r\n9517,2192,APAC,electronics,online,111.86,5,0.051,loyalty,2024-02-09\r\n9518,1409,APAC,fashion,retail,36.44,8,0.121,none,2024-05-07\r\n9519,2436,LATAM,grocery,retail,93.66,6,0.043,bundle,2024-01-20\r\n9520,1427,EMEA,home,mobile,69.34,4,0.150,none,2024-03-26\r\n9521,2001,EMEA,grocery,retail,46.54,6,0.065,none,2024-05-16\r\n9522,1116,LATAM,grocery,retail,20.64,1,0.224,coupon,2024-09-21\r\n9523,2316,EMEA,home,partner,34.88,8,0.168,none,2024-01-01\r\n9524,1056,LATAM,home,retail,28.88,7,0.191,bundle,2024-05-17\r\n9525,1443,EMEA,sports,online,55.01,2,0.204,coupon,2024-11-21\r\n9526,1484,AMER,electronics,retail,117.84,2,0.017,none,2024-05-17\r\n9527,2486,APAC,grocery,retail,39.95,1,0.092,none,2024-09-25\r\n9528,1578,LATAM,grocery,online,37.89,2,0.083,none,2024-05-06\r\n9529,2268,EMEA,electronics,online,84.16,7,0.218,coupon,2024-10-20\r\n9530,2055,AMER,toys,retail,44.89,2,0.005,none,2024-04-02\r\n9531,1725,APAC,grocery,retail,69.12,1,0.134,loyalty,2024-05-27\r\n9532,1863,EMEA,sports,retail,47.28,6,0.144,coupon,2024-04-19\r\n9533,2499,LATAM,fashion,online,81.84,3,0.145,none,2024-08-17\r\n9534,1768,AMER,toys,online,58.42,8,0.188,none,2024-10-06\r\n9535,2341,EMEA,toys,mobile,45.86,5,0.078,none,2024-07-14\r\n9536,2448,APAC,grocery,retail,58.13,5,0.130,loyalty,2024-06-24\r\n9537,1569,APAC,grocery,online,61.26,5,0.013,none,2024-04-15\r\n9538,1713,EMEA,electronics,online,51.91,4,0.046,none,2024-02-17\r\n9539,1507,EMEA,fashion,retail,40.82,5,0.077,none,2024-03-11\r\n9540,2408,EMEA,home,retail,55.21,2,0.059,none,2024-09-18\r\n9541,1553,LATAM,sports,retail,43.25,8,0.054,loyalty,2024-01-09\r\n9542,1096,EMEA,grocery
,online,30.92,5,0.245,bundle,2024-04-08\r\n9543,1152,LATAM,grocery,online,73.60,7,0.206,none,2024-09-03\r\n9544,1337,APAC,grocery,online,94.24,6,0.054,bundle,2024-07-26\r\n9545,1859,AMER,electronics,online,84.41,3,0.010,none,2024-06-13\r\n9546,1562,AMER,home,mobile,26.14,1,0.173,none,2024-11-11\r\n9547,1996,APAC,toys,online,143.67,3,0.244,none,2024-03-26\r\n9548,2047,AMER,grocery,online,30.12,7,0.245,none,2024-11-11\r\n9549,1529,LATAM,fashion,retail,67.21,3,0.092,bundle,2024-06-18\r\n9550,1830,EMEA,sports,retail,87.67,3,0.022,none,2024-06-26\r\n9551,1713,EMEA,grocery,online,82.27,8,0.076,none,2024-11-12\r\n9552,1987,AMER,fashion,online,35.10,5,0.230,loyalty,2024-05-22\r\n9553,1586,LATAM,grocery,retail,132.11,3,0.224,none,2024-11-19\r\n9554,1845,AMER,sports,mobile,64.46,8,0.002,none,2024-05-26\r\n9555,2275,LATAM,grocery,online,28.48,6,0.106,coupon,2024-05-01\r\n9556,1289,LATAM,electronics,online,35.57,8,0.052,coupon,2024-06-10\r\n9557,2353,AMER,fashion,retail,32.21,6,0.007,coupon,2024-03-20\r\n9558,1482,AMER,sports,online,80.95,5,0.198,none,2024-09-04\r\n9559,1418,LATAM,electronics,retail,58.21,6,0.026,none,2024-09-13\r\n9560,2147,LATAM,home,retail,22.32,8,0.100,none,2024-06-01\r\n9561,1160,LATAM,sports,online,118.57,3,0.052,none,2024-05-21\r\n9562,2267,AMER,grocery,retail,63.89,1,0.120,coupon,2024-08-28\r\n9563,1628,EMEA,electronics,online,45.11,7,0.208,none,2024-07-05\r\n9564,1160,LATAM,grocery,retail,37.04,2,0.113,none,2024-04-17\r\n9565,2358,AMER,home,retail,75.25,8,0.236,none,2024-12-08\r\n9566,1431,APAC,electronics,online,100.79,5,0.165,bundle,2024-01-16\r\n9567,2358,AMER,home,online,18.44,3,0.181,none,2024-01-02\r\n9568,2334,LATAM,electronics,retail,13.54,7,0.225,coupon,2024-07-26\r\n9569,2222,LATAM,sports,retail,56.08,4,0.223,none,2024-03-21\r\n9570,1002,EMEA,electronics,retail,93.45,3,0.107,none,2024-02-25\r\n9571,2371,LATAM,grocery,online,60.36,6,0.239,bundle,2024-04-09\r\n9572,1028,EMEA,fashion,retail,49.20,8,0.057,none,2024-09-26\r\n9573,2400,EMEA,grocery
,online,36.75,2,0.247,none,2024-12-26\r\n9574,1394,LATAM,electronics,retail,46.07,5,0.079,none,2024-12-06\r\n9575,2089,EMEA,fashion,online,41.93,4,0.153,none,2024-10-09\r\n9576,1514,LATAM,sports,retail,44.30,4,0.059,coupon,2024-03-15\r\n9577,1809,APAC,electronics,mobile,71.37,1,0.217,coupon,2024-07-16\r\n9578,2324,AMER,home,online,61.93,1,0.215,none,2024-10-20\r\n9579,2390,AMER,grocery,online,49.08,3,0.208,none,2024-08-18\r\n9580,2079,EMEA,grocery,online,69.21,7,0.167,none,2024-05-07\r\n9581,1050,AMER,electronics,retail,28.23,7,0.248,coupon,2024-12-05\r\n9582,1637,APAC,home,retail,24.28,4,0.055,none,2024-12-19\r\n9583,1225,APAC,toys,online,55.78,3,0.206,none,2024-04-21\r\n9584,1555,AMER,electronics,online,64.19,6,0.242,none,2024-03-13\r\n9585,1842,LATAM,electronics,online,132.89,8,0.191,none,2024-02-11\r\n9586,1175,AMER,fashion,mobile,54.05,2,0.007,none,2024-10-05\r\n9587,2299,EMEA,toys,online,70.88,6,0.034,none,2024-07-17\r\n9588,1945,AMER,grocery,online,106.30,5,0.063,bundle,2024-01-28\r\n9589,2476,APAC,fashion,online,150.85,5,0.002,none,2024-10-16\r\n9590,2269,EMEA,grocery,online,124.36,1,0.197,none,2024-01-12\r\n9591,2070,APAC,grocery,online,84.74,2,0.124,none,2024-09-27\r\n9592,2337,AMER,fashion,mobile,23.89,3,0.162,coupon,2024-05-16\r\n9593,2190,LATAM,sports,online,55.00,8,0.160,bundle,2024-05-22\r\n9594,1293,AMER,home,online,87.35,4,0.142,bundle,2024-06-17\r\n9595,2469,LATAM,grocery,online,54.18,7,0.166,coupon,2024-10-23\r\n9596,1780,APAC,grocery,retail,83.44,2,0.048,none,2024-07-10\r\n9597,2136,AMER,home,online,66.16,3,0.239,bundle,2024-06-25\r\n9598,1081,AMER,electronics,online,75.94,7,0.175,loyalty,2024-09-08\r\n9599,1725,APAC,fashion,online,88.07,7,0.172,loyalty,2024-11-12\r\n9600,1025,EMEA,grocery,online,115.60,1,0.003,none,2024-02-01\r\n9601,2037,LATAM,grocery,mobile,134.10,4,0.068,bundle,2024-04-21\r\n9602,1390,APAC,grocery,online,38.83,3,0.071,coupon,2024-02-28\r\n9603,2370,EMEA,grocery,retail,21.62,6,0.042,none,2024-08-22\r\n9604,2227,LATAM,fashion,o
nline,114.32,4,0.076,none,2024-05-19\r\n9605,2048,LATAM,electronics,online,84.47,8,0.035,none,2024-06-17\r\n9606,1543,AMER,grocery,mobile,30.63,6,0.117,coupon,2024-10-11\r\n9607,1695,LATAM,sports,retail,61.58,5,0.238,bundle,2024-01-22\r\n9608,2391,EMEA,electronics,online,77.97,3,0.023,none,2024-05-05\r\n9609,1401,LATAM,fashion,online,42.50,3,0.050,none,2024-06-15\r\n9610,1363,EMEA,sports,partner,81.38,1,0.135,none,2024-01-07\r\n9611,1728,AMER,home,retail,66.93,1,0.069,coupon,2024-11-13\r\n9612,1197,LATAM,toys,partner,70.95,1,0.152,loyalty,2024-06-15\r\n9613,1092,AMER,home,online,66.93,2,0.111,coupon,2024-08-15\r\n9614,2190,LATAM,grocery,retail,47.37,8,0.148,none,2024-03-12\r\n9615,2111,EMEA,sports,retail,91.64,3,0.200,bundle,2024-12-21\r\n9616,1642,EMEA,home,retail,60.31,8,0.168,none,2024-10-07\r\n9617,1468,AMER,grocery,online,69.83,4,0.039,coupon,2024-04-18\r\n9618,2185,EMEA,fashion,online,76.58,3,0.126,none,2024-10-26\r\n9619,1745,APAC,fashion,retail,60.68,3,0.234,none,2024-11-08\r\n9620,1040,LATAM,grocery,mobile,32.62,1,0.003,none,2024-04-19\r\n9621,2118,AMER,grocery,mobile,42.83,8,0.096,coupon,2024-04-17\r\n9622,2412,LATAM,fashion,online,49.48,4,0.204,none,2024-06-28\r\n9623,1974,EMEA,sports,online,31.79,6,0.022,none,2024-04-25\r\n9624,2458,EMEA,fashion,online,58.64,6,0.210,bundle,2024-12-20\r\n9625,1703,AMER,fashion,online,49.21,5,0.123,coupon,2024-03-22\r\n9626,1960,EMEA,fashion,mobile,45.85,8,0.035,none,2024-01-19\r\n9627,2354,LATAM,home,retail,35.39,8,0.165,coupon,2024-07-26\r\n9628,2268,EMEA,home,retail,21.48,4,0.123,none,2024-08-14\r\n9629,1082,EMEA,grocery,retail,74.08,6,0.152,none,2024-02-05\r\n9630,2149,EMEA,grocery,online,47.18,5,0.019,none,2024-06-07\r\n9631,1270,LATAM,electronics,online,59.39,5,0.086,loyalty,2024-08-05\r\n9632,1930,AMER,sports,online,43.66,6,0.083,bundle,2024-10-25\r\n9633,2043,EMEA,home,retail,48.76,1,0.208,loyalty,2024-12-09\r\n9634,2213,APAC,home,online,22.14,7,0.103,none,2024-07-08\r\n9635,1148,AMER,sports,retail,65.59,5,0.208,co
upon,2024-12-13\r\n9636,1555,AMER,fashion,retail,90.36,5,0.017,none,2024-09-26\r\n9637,2406,EMEA,home,retail,55.31,8,0.111,none,2024-12-13\r\n9638,1367,AMER,toys,partner,53.73,3,0.068,none,2024-04-19\r\n9639,1928,AMER,toys,mobile,29.48,2,0.042,bundle,2024-07-14\r\n9640,2077,APAC,home,online,49.99,1,0.122,none,2024-02-11\r\n9641,1242,LATAM,home,online,67.95,3,0.206,none,2024-03-13\r\n9642,1060,LATAM,grocery,online,45.91,8,0.174,loyalty,2024-09-18\r\n9643,1785,EMEA,fashion,online,84.30,6,0.120,coupon,2024-10-17\r\n9644,2106,LATAM,home,retail,49.98,2,0.055,none,2024-01-09\r\n9645,2361,EMEA,electronics,mobile,79.61,2,0.069,coupon,2024-11-26\r\n9646,1179,APAC,sports,partner,95.52,1,0.179,loyalty,2024-12-25\r\n9647,1585,AMER,toys,retail,81.72,3,0.022,bundle,2024-08-26\r\n9648,1048,EMEA,sports,online,62.18,1,0.080,bundle,2024-09-09\r\n9649,1267,EMEA,toys,online,40.69,4,0.007,none,2024-05-19\r\n9650,2102,APAC,grocery,retail,84.68,3,0.073,coupon,2024-07-20\r\n9651,2155,APAC,grocery,online,59.17,6,0.042,none,2024-03-26\r\n9652,2334,LATAM,electronics,retail,65.52,7,0.123,none,2024-12-24\r\n9653,1286,EMEA,home,retail,45.70,7,0.012,loyalty,2024-05-10\r\n9654,2160,LATAM,home,mobile,50.04,8,0.062,none,2024-12-13\r\n9655,1022,APAC,home,online,62.97,7,0.235,none,2024-12-22\r\n9656,1461,LATAM,sports,online,29.62,4,0.069,coupon,2024-03-17\r\n9657,1538,AMER,fashion,retail,22.59,7,0.234,coupon,2024-08-16\r\n9658,2333,APAC,sports,online,82.38,4,0.216,none,2024-08-21\r\n9659,2051,APAC,home,retail,66.69,8,0.050,none,2024-07-17\r\n9660,2294,EMEA,sports,retail,20.19,7,0.150,none,2024-08-12\r\n9661,2091,LATAM,sports,retail,66.97,6,0.086,none,2024-03-05\r\n9662,2048,LATAM,home,mobile,37.87,7,0.103,coupon,2024-05-28\r\n9663,1714,APAC,grocery,retail,74.54,1,0.248,none,2024-03-25\r\n9664,2215,LATAM,sports,partner,47.30,4,0.023,none,2024-04-04\r\n9665,1274,LATAM,fashion,online,83.40,2,0.079,none,2024-07-14\r\n9666,2417,LATAM,grocery,partner,52.02,8,0.096,none,2024-02-05\r\n9667,1883,LATAM,electron
ics,mobile,175.32,4,0.150,bundle,2024-11-01\r\n9668,1656,LATAM,electronics,retail,125.52,1,0.144,none,2024-11-24\r\n9669,1542,APAC,grocery,retail,61.81,1,0.041,none,2024-09-11\r\n9670,1978,AMER,fashion,retail,16.53,2,0.147,loyalty,2024-03-26\r\n9671,1114,APAC,fashion,mobile,21.96,4,0.203,coupon,2024-07-26\r\n9672,1595,AMER,fashion,retail,64.42,8,0.142,coupon,2024-12-26\r\n9673,1595,AMER,grocery,retail,46.46,8,0.228,none,2024-05-12\r\n9674,1586,LATAM,electronics,retail,32.04,8,0.127,coupon,2024-03-11\r\n9675,2433,APAC,grocery,online,32.33,5,0.041,none,2024-09-14\r\n9676,2288,AMER,grocery,retail,80.47,3,0.211,none,2024-12-15\r\n9677,2434,APAC,toys,online,92.91,7,0.031,coupon,2024-02-16\r\n9678,2372,AMER,home,online,39.60,7,0.104,loyalty,2024-12-02\r\n9679,2358,AMER,electronics,partner,202.62,8,0.166,none,2024-03-26\r\n9680,2461,LATAM,sports,online,34.68,7,0.102,none,2024-12-18\r\n9681,2086,APAC,sports,online,39.36,8,0.237,bundle,2024-04-22\r\n9682,1730,AMER,toys,partner,18.01,7,0.127,none,2024-11-03\r\n9683,2279,LATAM,electronics,mobile,24.48,2,0.079,none,2024-02-07\r\n9684,1002,EMEA,toys,retail,65.24,5,0.045,loyalty,2024-11-23\r\n9685,1146,LATAM,home,online,61.92,4,0.218,none,2024-05-07\r\n9686,1966,APAC,sports,mobile,29.17,6,0.021,loyalty,2024-12-26\r\n9687,1042,LATAM,grocery,online,74.64,5,0.028,bundle,2024-08-03\r\n9688,1590,APAC,fashion,online,107.51,8,0.148,bundle,2024-09-15\r\n9689,1038,APAC,grocery,online,52.45,7,0.199,none,2024-05-10\r\n9690,2287,EMEA,fashion,online,52.21,8,0.010,none,2024-09-27\r\n9691,1449,EMEA,fashion,retail,78.19,2,0.034,none,2024-11-14\r\n9692,1801,LATAM,fashion,retail,43.55,5,0.137,none,2024-07-10\r\n9693,2136,AMER,home,partner,70.46,2,0.034,loyalty,2024-02-02\r\n9694,1938,APAC,sports,retail,61.14,4,0.160,none,2024-04-26\r\n9695,1253,AMER,electronics,partner,90.81,7,0.157,bundle,2024-02-07\r\n9696,2494,AMER,sports,online,31.79,1,0.219,none,2024-05-09\r\n9697,1549,APAC,home,retail,64.43,2,0.210,none,2024-04-22\r\n9698,1538,AMER,sports,re
tail,50.80,2,0.126,none,2024-04-02\r\n9699,2451,APAC,grocery,mobile,53.56,1,0.087,bundle,2024-05-08\r\n9700,2398,EMEA,home,online,44.27,7,0.190,none,2024-02-08\r\n9701,2355,EMEA,home,online,31.33,6,0.017,none,2024-02-26\r\n9702,2204,AMER,electronics,online,36.98,3,0.148,none,2024-06-25\r\n9703,2371,LATAM,electronics,partner,64.48,1,0.144,none,2024-12-13\r\n9704,2284,EMEA,grocery,retail,51.51,7,0.211,none,2024-01-19\r\n9705,1942,APAC,fashion,mobile,55.11,7,0.210,none,2024-05-07\r\n9706,1751,AMER,electronics,retail,47.36,7,0.088,bundle,2024-09-25\r\n9707,1824,LATAM,grocery,online,145.67,5,0.173,loyalty,2024-03-17\r\n9708,1692,LATAM,electronics,retail,86.84,1,0.150,none,2024-08-12\r\n9709,1450,EMEA,electronics,online,71.25,2,0.225,coupon,2024-06-09\r\n9710,1849,EMEA,fashion,retail,35.68,8,0.229,none,2024-04-06\r\n9711,1570,AMER,grocery,online,59.37,4,0.237,none,2024-12-11\r\n9712,1134,APAC,home,retail,83.47,1,0.027,none,2024-04-25\r\n9713,1048,EMEA,grocery,online,54.14,3,0.219,coupon,2024-02-28\r\n9714,2470,EMEA,home,online,117.54,6,0.021,none,2024-02-09\r\n9715,1754,EMEA,home,online,30.38,2,0.158,coupon,2024-06-25\r\n9716,2489,LATAM,grocery,retail,54.50,3,0.133,none,2024-10-11\r\n9717,1316,APAC,sports,online,46.89,4,0.172,coupon,2024-12-08\r\n9718,1159,LATAM,home,retail,79.16,1,0.211,none,2024-01-23\r\n9719,1339,EMEA,electronics,retail,81.51,5,0.056,coupon,2024-02-21\r\n9720,2316,EMEA,grocery,retail,73.46,7,0.014,coupon,2024-09-19\r\n9721,1965,LATAM,electronics,online,57.42,7,0.010,none,2024-08-24\r\n9722,1457,EMEA,grocery,retail,41.54,2,0.025,loyalty,2024-07-14\r\n9723,1595,AMER,sports,online,80.80,4,0.171,none,2024-01-05\r\n9724,2196,AMER,fashion,online,215.68,3,0.174,bundle,2024-09-05\r\n9725,1719,LATAM,fashion,partner,99.57,6,0.016,none,2024-11-07\r\n9726,1274,LATAM,grocery,retail,110.98,7,0.078,none,2024-09-09\r\n9727,1064,AMER,home,partner,77.43,8,0.008,none,2024-08-10\r\n9728,1219,LATAM,sports,online,50.04,8,0.144,coupon,2024-04-05\r\n9729,1829,EMEA,toys,retail
,28.10,5,0.005,coupon,2024-02-12\r\n9730,1908,AMER,home,online,53.05,8,0.106,none,2024-12-06\r\n9731,2131,APAC,electronics,retail,26.97,5,0.216,bundle,2024-08-26\r\n9732,2098,AMER,grocery,online,80.38,3,0.167,loyalty,2024-05-23\r\n9733,1562,AMER,fashion,retail,60.38,4,0.032,none,2024-02-13\r\n9734,1308,EMEA,grocery,online,36.53,1,0.155,none,2024-08-15\r\n9735,1506,EMEA,electronics,online,39.42,3,0.018,none,2024-08-20\r\n9736,1995,LATAM,fashion,mobile,70.55,5,0.097,coupon,2024-12-09\r\n9737,2321,APAC,grocery,online,62.46,5,0.105,loyalty,2024-09-01\r\n9738,2310,EMEA,home,online,56.00,8,0.061,none,2024-12-02\r\n9739,1453,APAC,sports,partner,63.09,3,0.079,none,2024-10-16\r\n9740,1756,EMEA,toys,online,60.64,1,0.102,bundle,2024-10-24\r\n9741,2155,APAC,grocery,mobile,34.70,8,0.248,coupon,2024-03-26\r\n9742,2487,LATAM,electronics,retail,61.26,6,0.244,coupon,2024-06-19\r\n9743,1240,EMEA,fashion,retail,89.81,7,0.051,bundle,2024-12-24\r\n9744,1136,EMEA,home,retail,109.57,3,0.119,none,2024-09-02\r\n9745,2145,AMER,grocery,online,35.05,2,0.066,none,2024-03-19\r\n9746,2245,APAC,toys,mobile,39.38,3,0.223,none,2024-04-19\r\n9747,1629,LATAM,toys,retail,35.11,8,0.157,bundle,2024-03-26\r\n9748,1262,APAC,fashion,online,35.65,5,0.139,none,2024-06-19\r\n9749,1688,LATAM,grocery,online,94.42,3,0.059,none,2024-05-22\r\n9750,2368,AMER,fashion,retail,35.14,5,0.143,none,2024-07-27\r\n9751,2057,APAC,grocery,mobile,73.31,7,0.097,coupon,2024-02-14\r\n9752,2161,LATAM,grocery,online,104.17,6,0.161,bundle,2024-07-02\r\n9753,1747,EMEA,grocery,retail,43.49,5,0.085,coupon,2024-03-05\r\n9754,2413,AMER,grocery,online,51.36,3,0.078,bundle,2024-06-04\r\n9755,2235,AMER,home,retail,33.92,1,0.050,none,2024-12-18\r\n9756,1499,EMEA,home,partner,23.11,6,0.022,coupon,2024-09-10\r\n9757,1068,APAC,grocery,online,110.24,4,0.043,bundle,2024-03-09\r\n9758,1282,LATAM,fashion,retail,45.69,1,0.135,loyalty,2024-12-28\r\n9759,2406,EMEA,grocery,online,32.35,2,0.205,none,2024-05-05\r\n9760,2464,LATAM,home,online,68.83,1,0.011
,bundle,2024-02-27\r\n9761,2344,LATAM,electronics,retail,158.95,4,0.091,coupon,2024-02-17\r\n9762,1000,APAC,fashion,online,50.35,4,0.079,loyalty,2024-05-13\r\n9763,1972,LATAM,sports,retail,88.97,2,0.016,none,2024-06-06\r\n9764,2091,LATAM,home,retail,92.99,1,0.172,bundle,2024-03-14\r\n9765,1556,AMER,grocery,online,66.44,8,0.213,none,2024-04-24\r\n9766,2360,EMEA,electronics,retail,112.42,3,0.076,none,2024-12-10\r\n9767,1665,AMER,home,retail,238.11,3,0.086,none,2024-07-19\r\n9768,2415,AMER,grocery,mobile,112.82,1,0.162,none,2024-08-05\r\n9769,1602,EMEA,grocery,mobile,39.84,5,0.144,bundle,2024-06-16\r\n9770,2467,AMER,grocery,online,29.71,1,0.148,none,2024-12-11\r\n9771,2281,AMER,toys,online,109.39,6,0.016,none,2024-07-15\r\n9772,2426,AMER,electronics,retail,38.26,7,0.227,none,2024-03-03\r\n9773,1125,LATAM,grocery,online,98.18,8,0.242,none,2024-12-10\r\n9774,1941,AMER,toys,online,29.08,7,0.223,none,2024-10-20\r\n9775,2467,AMER,home,retail,32.50,6,0.078,none,2024-06-01\r\n9776,2221,LATAM,electronics,online,114.73,6,0.137,coupon,2024-07-28\r\n9777,1706,EMEA,grocery,mobile,55.52,6,0.115,none,2024-12-09\r\n9778,1475,LATAM,sports,online,114.12,3,0.225,bundle,2024-04-07\r\n9779,2306,AMER,toys,online,48.10,6,0.085,none,2024-01-13\r\n9780,1826,LATAM,sports,online,85.38,5,0.233,bundle,2024-10-28\r\n9781,1056,LATAM,electronics,mobile,48.89,1,0.081,none,2024-09-20\r\n9782,1263,AMER,grocery,mobile,88.03,6,0.203,none,2024-05-25\r\n9783,2093,LATAM,grocery,retail,37.83,3,0.001,none,2024-08-04\r\n9784,2178,AMER,sports,online,48.76,4,0.228,coupon,2024-10-22\r\n9785,1066,AMER,home,retail,41.04,5,0.090,none,2024-10-08\r\n9786,2493,APAC,electronics,mobile,101.89,6,0.100,none,2024-07-16\r\n9787,2475,AMER,home,mobile,75.54,1,0.121,none,2024-06-04\r\n9788,1872,LATAM,home,online,49.83,6,0.198,coupon,2024-04-07\r\n9789,2289,APAC,fashion,retail,71.84,6,0.005,none,2024-07-08\r\n9790,1236,AMER,electronics,mobile,34.44,1,0.160,none,2024-01-21\r\n9791,1578,LATAM,toys,online,23.35,4,0.081,none,2024-06
-11\r\n9792,1372,APAC,grocery,online,35.24,6,0.053,none,2024-08-03\r\n9793,2296,AMER,home,retail,32.49,6,0.124,none,2024-02-07\r\n9794,1922,EMEA,home,online,73.49,3,0.206,loyalty,2024-11-09\r\n9795,1814,AMER,grocery,online,41.38,7,0.235,coupon,2024-01-26\r\n9796,1566,EMEA,home,online,31.10,4,0.092,coupon,2024-09-20\r\n9797,2256,AMER,electronics,online,53.41,4,0.241,loyalty,2024-09-12\r\n9798,2294,EMEA,electronics,online,44.59,3,0.162,bundle,2024-06-25\r\n9799,2261,EMEA,fashion,retail,62.95,7,0.136,loyalty,2024-12-16\r\n9800,2087,LATAM,electronics,retail,62.57,3,0.110,bundle,2024-06-19\r\n9801,1098,APAC,toys,online,30.24,6,0.007,bundle,2024-05-15\r\n9802,1762,LATAM,electronics,retail,37.94,1,0.069,none,2024-12-13\r\n9803,1373,LATAM,electronics,online,60.68,4,0.122,loyalty,2024-09-19\r\n9804,1137,APAC,fashion,retail,38.14,4,0.172,none,2024-12-22\r\n9805,1941,AMER,fashion,partner,52.55,7,0.163,none,2024-04-11\r\n9806,1532,APAC,sports,online,102.20,2,0.054,coupon,2024-04-16\r\n9807,1560,AMER,fashion,online,34.31,3,0.096,none,2024-03-26\r\n9808,1158,LATAM,electronics,retail,122.96,4,0.201,coupon,2024-04-04\r\n9809,1852,AMER,electronics,retail,60.12,1,0.062,none,2024-07-22\r\n9810,1390,APAC,electronics,online,68.83,8,0.106,none,2024-12-19\r\n9811,2256,AMER,electronics,partner,41.59,1,0.145,none,2024-11-05\r\n9812,1497,EMEA,electronics,retail,53.18,6,0.166,coupon,2024-07-04\r\n9813,1301,AMER,electronics,mobile,92.52,7,0.003,coupon,2024-01-14\r\n9814,1710,APAC,home,online,24.00,7,0.113,none,2024-01-04\r\n9815,1699,APAC,grocery,online,134.80,7,0.047,none,2024-04-27\r\n9816,1849,EMEA,grocery,mobile,19.69,1,0.039,bundle,2024-12-12\r\n9817,2050,APAC,sports,retail,46.33,8,0.183,none,2024-12-21\r\n9818,1962,APAC,electronics,retail,29.13,4,0.076,loyalty,2024-03-06\r\n9819,1749,LATAM,grocery,mobile,52.44,8,0.217,bundle,2024-03-26\r\n9820,1708,LATAM,fashion,retail,60.15,7,0.083,coupon,2024-06-17\r\n9821,1447,LATAM,grocery,online,45.74,2,0.087,none,2024-11-20\r\n9822,1263,AMER,home,o
nline,93.37,2,0.003,coupon,2024-11-19\r\n9823,2416,LATAM,toys,online,102.87,1,0.034,loyalty,2024-07-06\r\n9824,1561,EMEA,home,online,41.06,6,0.112,bundle,2024-04-25\r\n9825,1346,AMER,grocery,retail,152.30,3,0.143,loyalty,2024-08-22\r\n9826,2440,APAC,grocery,online,23.88,4,0.158,none,2024-07-03\r\n9827,2304,LATAM,grocery,retail,54.22,4,0.212,none,2024-04-06\r\n9828,2201,AMER,sports,online,108.08,6,0.092,none,2024-10-26\r\n9829,1274,LATAM,fashion,retail,54.78,4,0.164,loyalty,2024-06-01\r\n9830,1798,AMER,sports,online,64.09,1,0.023,coupon,2024-11-10\r\n9831,2305,AMER,sports,partner,78.45,4,0.197,loyalty,2024-06-07\r\n9832,1089,LATAM,grocery,retail,29.21,3,0.003,none,2024-05-06\r\n9833,1078,APAC,sports,retail,79.45,3,0.089,none,2024-03-15\r\n9834,1268,EMEA,grocery,mobile,59.58,2,0.053,coupon,2024-10-01\r\n9835,1994,LATAM,grocery,mobile,60.52,7,0.163,none,2024-07-07\r\n9836,1609,LATAM,home,retail,60.45,1,0.182,none,2024-08-18\r\n9837,1601,APAC,home,online,44.30,4,0.150,none,2024-02-12\r\n9838,1223,LATAM,toys,online,99.80,3,0.167,none,2024-09-19\r\n9839,1735,LATAM,grocery,online,109.04,5,0.213,loyalty,2024-03-09\r\n9840,2261,EMEA,fashion,retail,33.99,6,0.196,none,2024-06-04\r\n9841,1972,LATAM,electronics,retail,66.28,2,0.004,loyalty,2024-01-16\r\n9842,2420,EMEA,fashion,online,51.29,3,0.030,none,2024-07-28\r\n9843,1302,LATAM,electronics,retail,35.59,4,0.138,coupon,2024-04-17\r\n9844,1983,LATAM,grocery,retail,63.72,2,0.109,coupon,2024-07-03\r\n9845,1073,AMER,toys,online,173.01,4,0.219,none,2024-07-01\r\n9846,2105,APAC,sports,retail,45.76,6,0.233,none,2024-10-28\r\n9847,1217,EMEA,electronics,online,21.69,1,0.065,loyalty,2024-06-24\r\n9848,2009,LATAM,grocery,retail,52.01,6,0.022,coupon,2024-05-26\r\n9849,2316,EMEA,sports,retail,41.26,7,0.065,none,2024-11-28\r\n9850,2252,EMEA,home,online,54.57,2,0.114,bundle,2024-08-27\r\n9851,1853,APAC,electronics,online,57.59,5,0.176,none,2024-05-07\r\n9852,2275,LATAM,grocery,online,45.37,3,0.141,coupon,2024-09-10\r\n9853,1990,EMEA,grocery,r
etail,114.02,5,0.016,none,2024-04-23\r\n9854,2283,AMER,toys,mobile,43.85,5,0.177,bundle,2024-09-17\r\n9855,1357,EMEA,grocery,online,70.01,8,0.045,coupon,2024-12-26\r\n9856,2279,LATAM,electronics,mobile,80.08,1,0.043,none,2024-04-15\r\n9857,2305,AMER,sports,online,39.17,6,0.059,none,2024-02-02\r\n9858,1933,EMEA,fashion,online,36.02,7,0.231,none,2024-02-27\r\n9859,2258,AMER,grocery,mobile,44.88,2,0.011,none,2024-07-28\r\n9860,1137,APAC,grocery,online,51.25,1,0.236,none,2024-08-01\r\n9861,1243,AMER,sports,online,31.83,3,0.144,none,2024-11-20\r\n9862,1071,AMER,electronics,retail,64.05,8,0.072,none,2024-09-19\r\n9863,2156,AMER,sports,online,196.32,8,0.133,bundle,2024-06-24\r\n9864,2024,AMER,electronics,online,70.06,5,0.171,none,2024-09-11\r\n9865,2070,APAC,toys,online,113.90,7,0.030,none,2024-04-11\r\n9866,1888,LATAM,electronics,retail,52.77,8,0.237,none,2024-07-28\r\n9867,1020,APAC,grocery,partner,46.46,1,0.231,none,2024-04-05\r\n9868,1122,AMER,home,online,61.51,2,0.036,loyalty,2024-02-15\r\n9869,1279,EMEA,home,retail,147.38,7,0.021,coupon,2024-10-16\r\n9870,2462,EMEA,home,online,58.30,4,0.200,bundle,2024-07-05\r\n9871,1332,APAC,home,mobile,60.27,4,0.138,coupon,2024-12-12\r\n9872,1534,EMEA,fashion,online,78.31,7,0.232,coupon,2024-05-05\r\n9873,1637,APAC,grocery,mobile,82.46,3,0.060,coupon,2024-06-10\r\n9874,1650,LATAM,home,online,52.84,3,0.151,none,2024-06-13\r\n9875,1630,APAC,home,mobile,91.43,6,0.151,none,2024-09-09\r\n9876,1942,APAC,home,online,112.42,8,0.114,none,2024-11-17\r\n9877,1998,APAC,sports,retail,109.52,6,0.090,none,2024-06-04\r\n9878,1557,LATAM,fashion,online,59.85,5,0.201,none,2024-04-25\r\n9879,2476,APAC,toys,mobile,68.55,3,0.051,bundle,2024-11-10\r\n9880,1933,EMEA,sports,retail,33.24,8,0.200,loyalty,2024-11-17\r\n9881,1936,EMEA,home,online,85.43,5,0.051,none,2024-04-22\r\n9882,2459,AMER,electronics,mobile,82.83,5,0.053,none,2024-10-05\r\n9883,1660,AMER,grocery,mobile,48.75,8,0.089,none,2024-06-06\r\n9884,1817,APAC,sports,online,42.09,6,0.009,bundle,2024
-08-09\r\n9885,1502,APAC,home,mobile,44.52,8,0.230,loyalty,2024-03-09\r\n9886,1007,APAC,toys,online,101.35,1,0.083,none,2024-11-13\r\n9887,1944,AMER,home,retail,51.03,7,0.087,none,2024-01-08\r\n9888,1975,EMEA,electronics,online,24.89,3,0.216,loyalty,2024-08-24\r\n9889,1300,EMEA,home,online,43.19,7,0.206,coupon,2024-02-16\r\n9890,1012,LATAM,fashion,online,77.41,8,0.125,bundle,2024-02-04\r\n9891,1095,APAC,grocery,online,17.48,3,0.157,none,2024-06-26\r\n9892,1371,AMER,home,retail,65.36,3,0.185,coupon,2024-11-20\r\n9893,1868,AMER,grocery,online,72.71,4,0.116,none,2024-04-05\r\n9894,2078,APAC,electronics,retail,32.93,6,0.213,loyalty,2024-10-03\r\n9895,1978,AMER,electronics,mobile,56.41,8,0.173,bundle,2024-04-25\r\n9896,1302,LATAM,home,retail,49.58,5,0.112,none,2024-01-26\r\n9897,1199,APAC,grocery,mobile,49.23,7,0.005,coupon,2024-08-17\r\n9898,1139,EMEA,electronics,online,41.41,5,0.201,bundle,2024-02-15\r\n9899,1804,AMER,home,online,84.37,1,0.017,coupon,2024-09-12\r\n9900,2208,AMER,sports,partner,44.74,7,0.230,coupon,2024-12-19\r\n9901,2454,LATAM,electronics,retail,96.07,2,0.171,none,2024-07-10\r\n9902,1703,AMER,electronics,online,40.94,3,0.222,none,2024-08-25\r\n9903,2127,LATAM,sports,mobile,44.39,5,0.165,none,2024-08-16\r\n9904,2012,APAC,electronics,online,98.14,5,0.019,coupon,2024-03-23\r\n9905,1509,AMER,home,online,50.30,3,0.101,none,2024-04-15\r\n9906,1710,APAC,electronics,retail,194.20,3,0.073,none,2024-05-10\r\n9907,1729,AMER,home,online,24.64,8,0.223,none,2024-06-06\r\n9908,1635,APAC,grocery,online,50.01,7,0.226,none,2024-09-07\r\n9909,1107,APAC,sports,mobile,41.54,5,0.217,coupon,2024-11-21\r\n9910,2036,APAC,electronics,mobile,55.92,4,0.034,loyalty,2024-07-22\r\n9911,1285,EMEA,fashion,retail,109.86,5,0.066,none,2024-08-27\r\n9912,1957,AMER,fashion,online,58.01,2,0.037,coupon,2024-06-03\r\n9913,2386,EMEA,fashion,partner,21.85,6,0.168,none,2024-12-09\r\n9914,1449,EMEA,toys,online,156.52,2,0.033,loyalty,2024-02-27\r\n9915,1077,AMER,toys,online,43.68,7,0.144,none,2024
-08-12\r\n9916,1232,LATAM,grocery,retail,31.50,2,0.063,bundle,2024-11-08\r\n9917,1517,AMER,grocery,retail,50.64,2,0.202,none,2024-10-28\r\n9918,2159,AMER,toys,retail,53.99,2,0.051,bundle,2024-04-06\r\n9919,1326,AMER,home,online,50.42,6,0.157,coupon,2024-12-17\r\n9920,1385,LATAM,electronics,retail,41.52,6,0.122,coupon,2024-04-05\r\n9921,2290,LATAM,fashion,online,32.43,5,0.039,bundle,2024-01-22\r\n9922,1341,EMEA,home,retail,64.63,5,0.200,bundle,2024-06-01\r\n9923,1728,AMER,toys,online,62.83,3,0.053,none,2024-07-18\r\n9924,2217,LATAM,home,online,69.11,7,0.011,bundle,2024-04-24\r\n9925,1965,LATAM,home,retail,53.64,4,0.013,none,2024-07-26\r\n9926,2271,LATAM,home,online,66.96,6,0.230,none,2024-07-14\r\n9927,2370,EMEA,sports,retail,37.35,7,0.010,none,2024-04-04\r\n9928,1166,AMER,electronics,online,83.44,5,0.153,none,2024-12-18\r\n9929,1974,EMEA,fashion,online,121.18,8,0.070,none,2024-08-19\r\n9930,1107,APAC,sports,online,111.02,4,0.041,bundle,2024-09-02\r\n9931,2412,LATAM,toys,retail,82.93,4,0.161,bundle,2024-03-13\r\n9932,2283,AMER,electronics,online,43.78,2,0.161,coupon,2024-11-14\r\n9933,2332,APAC,toys,online,80.85,6,0.210,bundle,2024-05-11\r\n9934,2180,AMER,sports,retail,111.18,3,0.096,loyalty,2024-08-22\r\n9935,1836,LATAM,electronics,mobile,76.28,3,0.140,coupon,2024-10-15\r\n9936,1017,AMER,electronics,online,76.56,8,0.128,bundle,2024-08-14\r\n9937,2266,LATAM,grocery,online,88.17,1,0.103,bundle,2024-07-20\r\n9938,2489,LATAM,grocery,online,52.45,4,0.088,bundle,2024-04-24\r\n9939,2497,AMER,toys,online,56.78,1,0.197,bundle,2024-07-27\r\n9940,1500,EMEA,home,mobile,46.35,5,0.242,none,2024-08-08\r\n9941,2473,EMEA,sports,online,45.96,7,0.135,coupon,2024-03-23\r\n9942,2413,AMER,sports,retail,36.16,3,0.081,none,2024-10-24\r\n9943,1039,AMER,toys,retail,73.39,7,0.170,none,2024-06-03\r\n9944,1634,AMER,home,online,82.01,5,0.167,bundle,2024-05-20\r\n9945,1303,LATAM,electronics,mobile,73.02,4,0.235,none,2024-08-09\r\n9946,1233,AMER,electronics,mobile,64.29,5,0.132,none,2024-03-16\r\n
9947,2458,EMEA,fashion,online,61.24,5,0.002,coupon,2024-06-04\r\n9948,1311,APAC,home,online,34.97,3,0.086,none,2024-08-12\r\n9949,2048,LATAM,home,retail,82.12,6,0.100,none,2024-01-21\r\n9950,1405,LATAM,grocery,online,29.10,8,0.153,none,2024-05-17\r\n9951,1794,AMER,electronics,retail,26.73,8,0.128,none,2024-05-25\r\n9952,1749,LATAM,fashion,online,44.68,5,0.040,bundle,2024-01-07\r\n9953,2165,AMER,fashion,online,33.65,5,0.068,coupon,2024-12-11\r\n9954,1545,AMER,fashion,retail,25.50,6,0.129,none,2024-10-18\r\n9955,1354,AMER,sports,mobile,24.80,5,0.001,bundle,2024-10-05\r\n9956,1097,EMEA,home,online,100.40,2,0.142,bundle,2024-09-27\r\n9957,1388,AMER,home,online,42.13,5,0.045,bundle,2024-07-20\r\n9958,2005,APAC,home,retail,69.13,8,0.091,none,2024-11-25\r\n9959,1698,EMEA,home,mobile,87.97,4,0.135,coupon,2024-10-01\r\n9960,2249,LATAM,sports,mobile,78.54,8,0.117,coupon,2024-01-12\r\n9961,1644,EMEA,home,mobile,34.02,6,0.157,loyalty,2024-01-20\r\n9962,1449,EMEA,sports,online,71.61,1,0.220,none,2024-03-03\r\n9963,1268,EMEA,fashion,retail,66.51,6,0.063,none,2024-12-11\r\n9964,1811,APAC,fashion,online,33.42,8,0.180,none,2024-04-24\r\n9965,1624,AMER,grocery,online,52.54,4,0.073,coupon,2024-06-14\r\n9966,1439,LATAM,fashion,retail,123.77,6,0.057,none,2024-09-17\r\n9967,2225,EMEA,home,online,50.69,1,0.101,none,2024-05-25\r\n9968,1963,AMER,sports,online,70.48,8,0.088,none,2024-11-21\r\n9969,2497,AMER,sports,online,50.36,5,0.224,none,2024-12-21\r\n9970,1262,APAC,sports,online,24.37,6,0.064,loyalty,2024-06-11\r\n9971,2499,LATAM,fashion,retail,89.56,2,0.107,none,2024-05-02\r\n9972,1843,EMEA,grocery,mobile,43.22,2,0.129,bundle,2024-08-28\r\n9973,1902,AMER,electronics,online,83.94,8,0.113,none,2024-01-25\r\n9974,1001,LATAM,fashion,online,60.74,5,0.227,loyalty,2024-12-11\r\n9975,1128,LATAM,fashion,retail,78.51,7,0.143,none,2024-06-03\r\n9976,1004,LATAM,electronics,online,88.79,7,0.201,none,2024-12-07\r\n9977,1993,APAC,grocery,online,129.88,1,0.074,bundle,2024-06-22\r\n9978,1320,EMEA,grocery
,online,50.73,3,0.175,none,2024-12-12\r\n9979,1492,APAC,toys,online,65.75,8,0.124,none,2024-03-08\r\n9980,1067,APAC,sports,online,72.72,6,0.039,none,2024-11-12\r\n9981,1532,APAC,electronics,mobile,88.37,7,0.092,coupon,2024-07-15\r\n9982,1771,AMER,grocery,online,47.68,2,0.161,none,2024-11-13\r\n9983,2146,APAC,fashion,online,22.87,7,0.194,none,2024-04-17\r\n9984,2248,LATAM,toys,online,38.95,3,0.135,none,2024-12-08\r\n9985,2193,AMER,fashion,online,44.69,1,0.110,none,2024-12-05\r\n9986,2252,EMEA,grocery,online,29.99,6,0.136,loyalty,2024-07-28\r\n9987,1900,APAC,grocery,mobile,85.92,5,0.022,coupon,2024-02-11\r\n9988,2112,LATAM,fashion,mobile,30.39,2,0.141,none,2024-02-27\r\n9989,1328,APAC,sports,retail,59.75,1,0.102,coupon,2024-08-16\r\n9990,1131,APAC,fashion,online,80.69,8,0.054,none,2024-09-06\r\n9991,2203,APAC,home,online,56.81,3,0.045,bundle,2024-06-16\r\n9992,2492,LATAM,sports,retail,175.09,5,0.052,loyalty,2024-02-15\r\n9993,2424,LATAM,electronics,online,58.60,7,0.205,none,2024-02-04\r\n9994,2202,APAC,toys,retail,50.36,2,0.194,none,2024-02-18\r\n9995,2172,EMEA,home,online,37.08,1,0.019,none,2024-03-10\r\n9996,2084,LATAM,home,online,36.91,5,0.045,coupon,2024-01-16\r\n9997,2407,EMEA,sports,retail,55.50,1,0.131,none,2024-01-19\r\n9998,1624,AMER,electronics,mobile,133.89,1,0.147,none,2024-05-24\r\n9999,1313,EMEA,fashion,online,126.59,1,0.063,coupon,2024-06-14\r\n10000,1933,EMEA,electronics,online,297.76,5,0.079,none,2024-06-21\r\n10001,2307,LATAM,fashion,online,39.99,6,0.048,coupon,2024-06-12\r\n10002,2209,AMER,fashion,online,37.75,1,0.140,coupon,2024-02-23\r\n10003,2278,APAC,home,partner,67.85,7,0.232,coupon,2024-12-04\r\n10004,2140,AMER,grocery,online,46.86,5,0.058,coupon,2024-08-23\r\n10005,2161,LATAM,sports,online,72.01,2,0.181,coupon,2024-11-06\r\n10006,2290,LATAM,electronics,mobile,19.60,2,0.145,coupon,2024-12-08\r\n10007,2067,LATAM,grocery,retail,102.16,4,0.002,loyalty,2024-11-08\r\n10008,1072,LATAM,fashion,retail,120.22,8,0.162,coupon,2024-12-26\r\n10009,1855,APA
C,fashion,retail,107.35,6,0.189,none,2024-07-15\r\n10010,1608,AMER,grocery,online,72.10,4,0.146,bundle,2024-04-23\r\n10011,1880,LATAM,electronics,retail,50.91,7,0.090,bundle,2024-06-04\r\n10012,1746,LATAM,fashion,online,81.12,5,0.198,none,2024-07-05\r\n10013,1857,LATAM,fashion,retail,75.49,7,0.124,loyalty,2024-06-14\r\n10014,1934,EMEA,home,online,108.24,2,0.166,none,2024-02-04\r\n10015,1294,APAC,grocery,mobile,55.65,1,0.177,coupon,2024-03-07\r\n10016,1945,AMER,home,online,79.84,6,0.106,none,2024-03-17\r\n10017,1614,EMEA,grocery,retail,47.03,8,0.056,coupon,2024-05-17\r\n10018,1762,LATAM,electronics,online,60.97,6,0.128,none,2024-12-11\r\n10019,2248,LATAM,fashion,retail,113.05,1,0.184,bundle,2024-03-16\r\n10020,1727,APAC,grocery,mobile,67.99,6,0.101,none,2024-11-04\r\n10021,1994,LATAM,toys,online,25.51,7,0.078,none,2024-02-11\r\n10022,2072,AMER,grocery,partner,130.35,1,0.241,none,2024-04-17\r\n10023,1616,APAC,fashion,partner,69.05,8,0.170,none,2024-07-27\r\n10024,1071,AMER,home,online,48.22,5,0.183,loyalty,2024-05-14\r\n10025,1680,LATAM,toys,online,30.41,6,0.042,loyalty,2024-02-19\r\n10026,1335,APAC,electronics,partner,42.31,2,0.099,none,2024-01-04\r\n10027,1935,EMEA,home,retail,79.44,5,0.109,coupon,2024-07-27\r\n10028,1896,EMEA,grocery,online,55.82,7,0.073,none,2024-07-01\r\n10029,1743,LATAM,fashion,online,73.92,5,0.135,coupon,2024-06-09\r\n10030,1345,AMER,home,online,38.20,2,0.195,loyalty,2024-12-14\r\n10031,2048,LATAM,home,online,60.28,1,0.124,coupon,2024-11-25\r\n10032,1487,AMER,grocery,mobile,64.05,1,0.133,none,2024-05-19\r\n10033,1719,LATAM,fashion,online,103.76,6,0.096,none,2024-05-16\r\n10034,2311,LATAM,home,online,56.20,5,0.115,bundle,2024-07-04\r\n10035,2381,AMER,electronics,online,143.03,7,0.231,loyalty,2024-09-28\r\n10036,1589,AMER,toys,partner,93.20,8,0.248,coupon,2024-08-07\r\n10037,1643,EMEA,fashion,mobile,72.89,2,0.045,loyalty,2024-12-10\r\n10038,2267,AMER,fashion,online,107.92,2,0.184,none,2024-06-04\r\n10039,1148,AMER,grocery,online,113.94,1,0.218,lo
yalty,2024-08-20\r\n10040,1283,APAC,toys,partner,50.15,1,0.160,bundle,2024-02-03\r\n10041,1578,LATAM,electronics,online,49.95,3,0.185,bundle,2024-09-23\r\n10042,1139,EMEA,electronics,mobile,72.22,3,0.071,none,2024-11-04\r\n10043,1360,APAC,grocery,online,86.00,4,0.125,bundle,2024-02-16\r\n10044,2032,AMER,grocery,retail,32.41,1,0.010,bundle,2024-05-15\r\n10045,2134,AMER,home,mobile,34.92,3,0.219,loyalty,2024-06-18\r\n10046,1672,APAC,sports,online,113.38,7,0.082,coupon,2024-06-12\r\n10047,1021,AMER,electronics,online,62.96,1,0.077,none,2024-08-17\r\n10048,2281,AMER,fashion,mobile,43.76,8,0.047,loyalty,2024-10-23\r\n10049,1988,AMER,fashion,retail,62.74,1,0.159,none,2024-10-04\r\n10050,1496,AMER,home,mobile,46.98,3,0.185,loyalty,2024-07-03\r\n10051,1677,EMEA,fashion,online,100.47,6,0.032,loyalty,2024-04-22\r\n10052,2397,LATAM,fashion,online,50.83,4,0.040,none,2024-11-07\r\n10053,1639,APAC,fashion,mobile,59.09,1,0.215,none,2024-03-01\r\n10054,1557,LATAM,grocery,retail,12.39,4,0.045,none,2024-07-19\r\n10055,1167,EMEA,grocery,partner,127.34,7,0.158,none,2024-01-19\r\n10056,2117,EMEA,grocery,online,125.46,6,0.083,none,2024-03-24\r\n10057,2420,EMEA,grocery,online,102.72,8,0.207,coupon,2024-03-02\r\n10058,1093,APAC,toys,mobile,103.14,4,0.211,bundle,2024-12-27\r\n10059,1287,AMER,electronics,retail,11.41,8,0.030,coupon,2024-04-26\r\n10060,2104,EMEA,electronics,online,21.73,3,0.105,none,2024-10-24\r\n10061,1751,AMER,grocery,retail,93.50,3,0.059,none,2024-02-04\r\n10062,1417,APAC,grocery,retail,44.21,5,0.006,none,2024-11-27\r\n10063,1833,EMEA,sports,online,46.37,1,0.059,none,2024-06-01\r\n10064,2246,AMER,electronics,retail,27.02,2,0.105,none,2024-10-11\r\n10065,2003,LATAM,electronics,mobile,79.39,7,0.080,none,2024-10-03\r\n10066,2138,APAC,electronics,online,29.48,7,0.096,none,2024-12-16\r\n10067,1602,EMEA,fashion,partner,39.43,2,0.003,none,2024-03-12\r\n10068,2210,APAC,home,online,37.97,8,0.001,none,2024-11-01\r\n10069,1293,AMER,fashion,online,62.43,8,0.140,none,2024-08-27\r\n1007
0,2071,APAC,fashion,online,51.74,1,0.032,loyalty,2024-09-03\r\n10071,1110,LATAM,grocery,retail,36.70,2,0.083,none,2024-08-22\r\n10072,1474,LATAM,grocery,online,83.79,2,0.226,coupon,2024-12-10\r\n10073,1849,EMEA,fashion,mobile,71.48,1,0.117,none,2024-10-06\r\n10074,1066,AMER,sports,retail,25.20,6,0.072,coupon,2024-07-06\r\n10075,1807,EMEA,fashion,online,52.73,6,0.197,none,2024-05-24\r\n10076,1760,LATAM,toys,online,60.77,1,0.154,coupon,2024-10-21\r\n10077,1777,AMER,electronics,mobile,95.92,3,0.072,loyalty,2024-04-27\r\n10078,1760,LATAM,grocery,online,81.02,2,0.143,none,2024-02-26\r\n10079,2210,APAC,grocery,mobile,126.68,3,0.138,bundle,2024-08-07\r\n10080,2182,AMER,toys,retail,93.36,3,0.076,bundle,2024-11-08\r\n10081,1512,APAC,home,online,35.12,6,0.213,none,2024-05-27\r\n10082,1112,APAC,home,online,84.54,1,0.141,coupon,2024-10-05\r\n10083,1278,AMER,sports,online,55.23,1,0.191,coupon,2024-09-25\r\n10084,1982,EMEA,fashion,online,19.03,3,0.219,loyalty,2024-02-11\r\n10085,2356,LATAM,electronics,online,31.31,1,0.132,coupon,2024-01-16\r\n10086,2029,APAC,home,online,83.91,4,0.169,none,2024-04-09\r\n10087,1342,LATAM,home,online,46.58,3,0.105,none,2024-10-06\r\n10088,1182,EMEA,fashion,online,140.61,5,0.125,loyalty,2024-09-13\r\n10089,1714,APAC,grocery,online,42.83,5,0.071,coupon,2024-09-10\r\n10090,2340,EMEA,grocery,retail,37.94,1,0.067,none,2024-04-01\r\n10091,2059,AMER,toys,retail,50.42,4,0.145,none,2024-10-03\r\n10092,1303,LATAM,electronics,mobile,16.03,6,0.093,none,2024-04-19\r\n10093,1904,APAC,home,online,68.14,3,0.029,none,2024-07-18\r\n10094,1275,EMEA,electronics,retail,50.40,7,0.170,none,2024-08-11\r\n10095,1128,LATAM,sports,mobile,150.52,2,0.220,none,2024-09-24\r\n10096,1196,APAC,electronics,online,19.47,8,0.103,bundle,2024-05-02\r\n10097,2185,EMEA,grocery,online,46.59,7,0.042,none,2024-05-10\r\n10098,2286,AMER,grocery,online,59.73,6,0.155,none,2024-11-03\r\n10099,1993,APAC,grocery,retail,43.19,2,0.164,none,2024-03-25\r\n10100,1976,AMER,toys,mobile,46.54,4,0.062,none,2
024-04-21\r\n10101,1441,LATAM,electronics,retail,29.82,1,0.080,none,2024-09-20\r\n10102,1967,EMEA,electronics,mobile,49.75,5,0.113,none,2024-01-04\r\n10103,2038,LATAM,sports,retail,57.35,2,0.196,coupon,2024-09-27\r\n10104,2120,AMER,grocery,retail,33.08,1,0.020,bundle,2024-11-05\r\n10105,1011,APAC,grocery,online,74.88,5,0.094,none,2024-10-02\r\n10106,1249,EMEA,home,online,47.75,8,0.177,none,2024-10-03\r\n10107,1556,AMER,grocery,mobile,91.88,2,0.202,loyalty,2024-04-28\r\n10108,1896,EMEA,home,mobile,30.39,4,0.239,coupon,2024-09-15\r\n10109,1345,AMER,grocery,retail,139.93,3,0.210,none,2024-12-22\r\n10110,1110,LATAM,toys,mobile,75.99,1,0.019,none,2024-01-09\r\n10111,1171,APAC,grocery,retail,21.59,1,0.151,none,2024-03-22\r\n10112,1563,EMEA,home,retail,68.61,8,0.113,none,2024-06-21\r\n10113,1563,EMEA,electronics,retail,62.70,4,0.046,coupon,2024-12-14\r\n10114,2181,AMER,sports,online,49.43,8,0.078,loyalty,2024-01-03\r\n10115,1851,EMEA,electronics,online,81.46,3,0.023,bundle,2024-01-20\r\n10116,1562,AMER,toys,online,153.28,2,0.142,none,2024-06-20\r\n10117,1351,APAC,grocery,online,58.66,8,0.074,none,2024-04-23\r\n10118,1901,AMER,toys,mobile,72.53,2,0.124,none,2024-06-06\r\n10119,1573,AMER,electronics,online,38.39,3,0.094,loyalty,2024-11-04\r\n10120,1639,APAC,home,mobile,324.81,8,0.120,coupon,2024-07-23\r\n10121,1101,AMER,electronics,online,76.84,5,0.128,none,2024-09-25\r\n10122,1767,AMER,home,online,101.62,6,0.054,coupon,2024-01-13\r\n10123,1857,LATAM,grocery,retail,97.28,4,0.215,coupon,2024-08-24\r\n10124,1331,AMER,sports,online,31.19,8,0.132,coupon,2024-04-08\r\n10125,1473,LATAM,electronics,online,30.08,2,0.062,coupon,2024-08-20\r\n10126,1995,LATAM,grocery,online,34.30,4,0.131,none,2024-01-27\r\n10127,2182,AMER,fashion,online,50.14,4,0.180,none,2024-05-28\r\n10128,1690,LATAM,grocery,retail,47.50,6,0.189,none,2024-11-10\r\n10129,2361,EMEA,grocery,online,79.27,1,0.018,none,2024-08-06\r\n10130,2235,AMER,fashion,online,71.30,4,0.104,none,2024-11-23\r\n10131,1863,EMEA,grocery,mo
bile,78.17,5,0.203,bundle,2024-10-24\r\n10132,1369,AMER,sports,retail,73.96,4,0.052,none,2024-10-28\r\n10133,2247,LATAM,grocery,retail,79.56,3,0.032,none,2024-01-12\r\n10134,1039,AMER,electronics,online,70.58,5,0.120,coupon,2024-06-21\r\n10135,1848,EMEA,home,online,59.99,2,0.222,none,2024-11-27\r\n10136,1886,LATAM,fashion,online,97.85,6,0.129,none,2024-12-02\r\n10137,1586,LATAM,electronics,retail,126.89,3,0.071,none,2024-07-17\r\n10138,1329,APAC,grocery,online,35.03,8,0.247,none,2024-08-05\r\n10139,1521,LATAM,home,retail,107.82,1,0.142,none,2024-03-22\r\n10140,1386,AMER,grocery,online,188.92,4,0.104,none,2024-01-17\r\n10141,1724,LATAM,grocery,online,78.12,3,0.165,none,2024-04-11\r\n10142,2318,AMER,grocery,retail,55.35,1,0.170,coupon,2024-05-26\r\n10143,2473,EMEA,grocery,online,54.88,3,0.013,none,2024-06-02\r\n10144,1120,LATAM,toys,mobile,86.47,4,0.054,coupon,2024-12-11\r\n10145,2209,AMER,fashion,retail,74.73,5,0.126,none,2024-09-18\r\n10146,2420,EMEA,grocery,online,35.73,6,0.016,none,2024-03-13\r\n10147,1288,LATAM,home,retail,72.80,5,0.120,coupon,2024-07-25\r\n10148,2165,AMER,grocery,online,11.87,6,0.053,coupon,2024-12-14\r\n10149,2067,LATAM,fashion,online,65.92,2,0.001,none,2024-11-23\r\n10150,1095,APAC,electronics,partner,152.97,8,0.201,none,2024-04-05\r\n10151,2253,AMER,home,mobile,54.12,7,0.121,none,2024-08-19\r\n10152,1305,EMEA,electronics,mobile,29.09,2,0.205,none,2024-03-09\r\n10153,1447,LATAM,electronics,online,59.81,5,0.132,bundle,2024-08-22\r\n10154,1894,APAC,fashion,retail,36.34,5,0.235,bundle,2024-09-02\r\n10155,1460,LATAM,fashion,partner,28.82,1,0.149,bundle,2024-02-17\r\n10156,1548,EMEA,sports,online,22.91,8,0.036,coupon,2024-06-11\r\n10157,1885,EMEA,electronics,retail,60.92,7,0.216,none,2024-07-23\r\n10158,1044,EMEA,toys,mobile,34.49,8,0.066,bundle,2024-11-10\r\n10159,1439,LATAM,home,mobile,66.70,8,0.056,loyalty,2024-04-04\r\n10160,1333,EMEA,fashion,online,26.04,4,0.156,none,2024-06-13\r\n10161,1295,EMEA,electronics,retail,70.23,5,0.052,none,2024-12-0
7\r\n10162,1848,EMEA,home,retail,31.87,4,0.039,none,2024-12-07\r\n10163,2452,LATAM,fashion,retail,50.12,4,0.219,none,2024-10-16\r\n10164,2019,AMER,grocery,mobile,31.14,6,0.176,none,2024-10-18\r\n10165,1866,EMEA,electronics,online,34.08,6,0.161,none,2024-08-14\r\n10166,2048,LATAM,sports,online,84.61,8,0.220,none,2024-05-20\r\n10167,1498,LATAM,fashion,mobile,25.88,2,0.202,none,2024-12-10\r\n10168,1695,LATAM,fashion,mobile,30.68,4,0.064,bundle,2024-11-14\r\n10169,1806,APAC,toys,mobile,31.07,6,0.046,none,2024-02-23\r\n10170,2437,LATAM,fashion,mobile,34.97,3,0.015,bundle,2024-11-17\r\n10171,1009,APAC,grocery,online,43.64,1,0.112,none,2024-07-28\r\n10172,1690,LATAM,toys,retail,26.98,5,0.004,none,2024-06-22\r\n10173,1393,LATAM,grocery,online,51.70,3,0.046,bundle,2024-04-10\r\n10174,1235,EMEA,sports,retail,75.81,7,0.097,coupon,2024-03-11\r\n10175,1995,LATAM,sports,online,61.74,3,0.214,bundle,2024-07-16\r\n10176,1689,LATAM,grocery,mobile,38.19,2,0.124,none,2024-05-19\r\n10177,2055,AMER,home,online,127.29,7,0.145,none,2024-02-26\r\n10178,1579,AMER,sports,mobile,77.17,4,0.126,bundle,2024-08-01\r\n10179,1237,LATAM,toys,online,74.04,1,0.085,coupon,2024-08-27\r\n10180,1151,APAC,sports,mobile,27.22,6,0.076,coupon,2024-05-21\r\n10181,2374,LATAM,sports,mobile,21.96,2,0.189,coupon,2024-12-21\r\n10182,1162,AMER,grocery,online,195.30,2,0.029,bundle,2024-06-17\r\n10183,1637,APAC,grocery,online,22.97,6,0.194,none,2024-12-13\r\n10184,1624,AMER,grocery,mobile,154.83,8,0.102,bundle,2024-05-15\r\n10185,1027,APAC,grocery,partner,68.75,4,0.205,none,2024-05-12\r\n10186,1672,APAC,fashion,online,61.52,7,0.132,loyalty,2024-08-22\r\n10187,2222,LATAM,fashion,online,71.01,3,0.181,none,2024-12-04\r\n10188,1195,AMER,fashion,retail,53.34,7,0.017,none,2024-11-12\r\n10189,1057,LATAM,grocery,online,50.64,7,0.120,loyalty,2024-09-28\r\n10190,1029,EMEA,grocery,online,148.96,5,0.035,none,2024-06-01\r\n10191,2378,LATAM,grocery,online,79.69,8,0.233,none,2024-09-13\r\n10192,1711,APAC,grocery,retail,150.07,7,0.243
,none,2024-01-20\r\n10193,1445,APAC,toys,online,27.83,5,0.123,bundle,2024-09-08\r\n10194,1370,APAC,home,online,152.98,4,0.191,none,2024-03-07\r\n10195,1613,EMEA,home,online,36.27,8,0.027,bundle,2024-04-02\r\n10196,1779,APAC,home,partner,47.41,2,0.009,none,2024-07-06\r\n10197,1867,AMER,grocery,retail,31.84,4,0.106,coupon,2024-04-18\r\n10198,1908,AMER,home,retail,66.87,8,0.181,coupon,2024-09-18\r\n10199,1122,AMER,sports,mobile,46.33,5,0.088,bundle,2024-05-16\r\n10200,1217,EMEA,home,mobile,109.93,2,0.065,none,2024-07-03\r\n10201,2332,APAC,sports,retail,56.15,1,0.218,none,2024-07-22\r\n10202,2355,EMEA,electronics,partner,24.69,1,0.100,none,2024-08-04\r\n10203,1294,APAC,home,retail,44.49,2,0.021,bundle,2024-06-18\r\n10204,2381,AMER,sports,retail,101.75,6,0.046,none,2024-04-10\r\n10205,1930,AMER,toys,mobile,246.54,6,0.008,none,2024-11-11\r\n10206,1587,LATAM,electronics,online,102.52,6,0.151,coupon,2024-11-07\r\n10207,1812,EMEA,home,online,58.89,7,0.048,loyalty,2024-07-09\r\n10208,2338,AMER,sports,online,89.41,7,0.079,none,2024-09-13\r\n10209,2479,EMEA,grocery,online,77.22,6,0.067,none,2024-12-11\r\n10210,1473,LATAM,grocery,online,47.16,1,0.137,none,2024-08-03\r\n10211,1326,AMER,home,online,59.62,3,0.124,bundle,2024-07-15\r\n10212,2175,AMER,toys,online,42.15,6,0.206,none,2024-01-24\r\n10213,1244,LATAM,grocery,retail,55.09,6,0.218,none,2024-07-16\r\n10214,1084,AMER,toys,partner,55.53,8,0.197,coupon,2024-06-06\r\n10215,1948,EMEA,grocery,online,56.12,5,0.082,coupon,2024-06-20\r\n10216,1780,APAC,grocery,online,72.96,8,0.057,coupon,2024-06-01\r\n10217,1322,AMER,grocery,partner,43.04,6,0.192,none,2024-02-20\r\n10218,1117,LATAM,fashion,retail,19.91,6,0.231,none,2024-10-03\r\n10219,1059,AMER,grocery,mobile,38.20,8,0.084,none,2024-12-03\r\n10220,1281,AMER,sports,retail,92.98,7,0.026,none,2024-11-27\r\n10221,1913,LATAM,grocery,mobile,70.43,6,0.047,coupon,2024-06-05\r\n10222,2392,EMEA,home,online,46.09,5,0.132,none,2024-02-01\r\n10223,2398,EMEA,fashion,mobile,96.25,1,0.063,none,2024-
02-25\r\n10224,1618,EMEA,electronics,online,228.20,1,0.180,loyalty,2024-06-06\r\n10225,1239,APAC,home,retail,75.56,8,0.192,none,2024-08-12\r\n10226,2275,LATAM,home,online,41.86,2,0.148,bundle,2024-04-22\r\n10227,2375,AMER,sports,online,55.23,7,0.128,bundle,2024-12-28\r\n10228,1232,LATAM,sports,mobile,25.22,6,0.090,none,2024-07-14\r\n10229,1636,APAC,grocery,retail,35.09,5,0.217,none,2024-02-14\r\n10230,1095,APAC,fashion,online,67.17,8,0.181,loyalty,2024-03-20\r\n10231,2356,LATAM,home,online,49.94,5,0.108,loyalty,2024-10-09\r\n10232,1972,LATAM,home,retail,92.58,8,0.222,none,2024-11-22\r\n10233,1967,EMEA,electronics,retail,41.25,1,0.001,none,2024-12-20\r\n10234,1150,LATAM,fashion,retail,82.41,2,0.210,coupon,2024-06-27\r\n10235,2032,AMER,electronics,retail,20.39,1,0.223,none,2024-01-03\r\n10236,1924,AMER,electronics,online,51.60,5,0.117,none,2024-02-09\r\n10237,1079,LATAM,electronics,mobile,43.44,1,0.178,none,2024-06-01\r\n10238,2257,AMER,grocery,online,78.66,3,0.039,bundle,2024-11-01\r\n10239,2143,AMER,sports,online,23.53,3,0.023,coupon,2024-05-25\r\n10240,1190,EMEA,grocery,online,20.95,8,0.141,none,2024-05-26\r\n10241,1286,EMEA,electronics,online,38.47,2,0.035,none,2024-12-02\r\n10242,1051,EMEA,grocery,retail,33.86,1,0.021,none,2024-05-23\r\n10243,2471,APAC,home,online,45.41,7,0.096,coupon,2024-09-26\r\n10244,1457,EMEA,sports,online,67.94,7,0.134,coupon,2024-06-27\r\n10245,2396,AMER,home,online,20.51,7,0.114,coupon,2024-11-13\r\n10246,1715,AMER,home,retail,63.32,8,0.007,loyalty,2024-07-06\r\n10247,1431,APAC,sports,online,93.89,5,0.099,none,2024-03-24\r\n10248,2090,AMER,fashion,online,48.28,8,0.148,none,2024-02-13\r\n10249,2099,AMER,grocery,mobile,72.01,7,0.164,none,2024-03-08\r\n10250,2242,AMER,electronics,online,71.11,6,0.034,none,2024-06-22\r\n10251,1892,LATAM,electronics,retail,65.28,4,0.208,none,2024-01-02\r\n10252,1798,AMER,home,online,155.97,3,0.146,none,2024-11-10\r\n10253,2401,LATAM,grocery,retail,64.84,4,0.094,none,2024-06-11\r\n10254,2054,AMER,home,online,55
.94,3,0.125,none,2024-05-17\r\n10255,1393,LATAM,fashion,retail,77.69,6,0.194,none,2024-03-27\r\n10256,2041,LATAM,toys,retail,69.26,4,0.078,none,2024-10-12\r\n10257,2226,EMEA,fashion,online,46.98,8,0.220,none,2024-10-22\r\n10258,1142,EMEA,home,retail,35.75,3,0.032,none,2024-09-06\r\n10259,1265,APAC,fashion,online,115.65,5,0.047,none,2024-07-28\r\n10260,2036,APAC,grocery,online,36.26,6,0.083,loyalty,2024-08-17\r\n10261,1884,APAC,sports,online,86.92,8,0.224,bundle,2024-08-27\r\n10262,1716,LATAM,fashion,online,65.97,3,0.009,none,2024-10-06\r\n10263,2422,APAC,electronics,online,18.63,5,0.071,none,2024-06-24\r\n10264,1901,AMER,home,online,50.00,3,0.112,coupon,2024-07-01\r\n10265,1003,APAC,toys,online,185.43,7,0.160,none,2024-08-03\r\n10266,1437,EMEA,toys,online,147.86,5,0.118,coupon,2024-08-27\r\n10267,2412,LATAM,home,online,101.12,6,0.038,bundle,2024-10-07\r\n10268,1383,AMER,fashion,retail,78.54,8,0.076,bundle,2024-12-13\r\n10269,2480,APAC,grocery,retail,56.80,1,0.189,bundle,2024-10-08\r\n10270,2146,APAC,toys,online,25.25,3,0.007,coupon,2024-01-22\r\n10271,1664,LATAM,toys,online,175.91,7,0.104,none,2024-12-12\r\n10272,2048,LATAM,grocery,retail,93.74,2,0.175,bundle,2024-10-01\r\n10273,2150,APAC,grocery,online,117.64,3,0.186,none,2024-01-27\r\n10274,1731,AMER,grocery,retail,88.34,1,0.121,none,2024-06-06\r\n10275,1232,LATAM,grocery,online,50.61,5,0.003,none,2024-10-25\r\n10276,1730,AMER,fashion,online,54.19,8,0.039,none,2024-10-20\r\n10277,1963,AMER,toys,retail,58.37,7,0.226,bundle,2024-04-01\r\n10278,1515,EMEA,fashion,online,75.81,6,0.020,none,2024-05-04\r\n10279,2059,AMER,grocery,online,35.22,3,0.057,none,2024-12-05\r\n10280,1835,AMER,fashion,online,69.01,6,0.145,none,2024-07-23\r\n10281,1618,EMEA,home,online,37.96,7,0.051,coupon,2024-03-21\r\n10282,2307,LATAM,fashion,partner,27.34,3,0.073,none,2024-10-04\r\n10283,1552,EMEA,home,retail,69.33,6,0.195,bundle,2024-08-14\r\n10284,1096,EMEA,toys,mobile,73.96,1,0.245,coupon,2024-04-25\r\n10285,1215,LATAM,grocery,retail,43.26,4,
0.063,loyalty,2024-09-28\r\n10286,1631,APAC,toys,mobile,27.76,2,0.218,none,2024-02-11\r\n10287,1807,EMEA,home,online,16.15,5,0.073,none,2024-08-25\r\n10288,1544,LATAM,fashion,partner,80.37,6,0.075,bundle,2024-01-27\r\n10289,1745,APAC,fashion,online,30.53,1,0.114,coupon,2024-07-27\r\n10290,2299,EMEA,grocery,retail,49.49,5,0.209,none,2024-09-18\r\n10291,2007,LATAM,home,online,47.80,2,0.055,coupon,2024-02-20\r\n10292,1099,LATAM,grocery,online,53.76,7,0.104,none,2024-03-21\r\n10293,1455,APAC,fashion,retail,81.61,7,0.113,none,2024-11-27\r\n10294,1462,LATAM,electronics,retail,96.34,7,0.152,bundle,2024-10-13\r\n10295,2222,LATAM,fashion,retail,44.20,2,0.182,none,2024-01-08\r\n10296,1460,LATAM,home,online,76.48,7,0.170,coupon,2024-01-23\r\n10297,1640,APAC,home,online,116.69,6,0.246,bundle,2024-05-11\r\n10298,1765,EMEA,grocery,mobile,56.40,4,0.188,coupon,2024-02-25\r\n10299,1578,LATAM,grocery,online,70.40,4,0.159,none,2024-06-09\r\n10300,1236,AMER,fashion,retail,41.24,3,0.001,loyalty,2024-11-15\r\n10301,1128,LATAM,fashion,retail,19.65,4,0.185,none,2024-11-05\r\n10302,1481,LATAM,electronics,online,47.87,2,0.060,coupon,2024-03-16\r\n10303,1650,LATAM,grocery,retail,58.68,5,0.242,bundle,2024-06-24\r\n10304,1963,AMER,electronics,mobile,95.49,8,0.037,none,2024-04-15\r\n10305,1075,AMER,fashion,online,87.96,4,0.025,none,2024-01-01\r\n10306,1686,LATAM,sports,online,61.41,8,0.149,loyalty,2024-04-03\r\n10307,1528,EMEA,home,retail,152.05,5,0.081,none,2024-02-21\r\n10308,1055,AMER,electronics,online,75.35,2,0.204,none,2024-12-08\r\n10309,2100,APAC,toys,retail,43.58,4,0.038,loyalty,2024-01-20\r\n10310,1322,AMER,sports,partner,48.58,8,0.066,none,2024-09-28\r\n10311,2315,LATAM,grocery,online,106.41,8,0.073,coupon,2024-03-18\r\n10312,1701,LATAM,electronics,mobile,36.93,8,0.118,none,2024-12-10\r\n10313,1682,EMEA,grocery,online,83.80,7,0.226,loyalty,2024-11-23\r\n10314,1724,LATAM,grocery,online,80.66,2,0.241,coupon,2024-06-19\r\n10315,1388,AMER,fashion,mobile,30.79,1,0.028,coupon,2024-04-11\r\n
10316,1708,LATAM,home,online,164.30,5,0.180,none,2024-09-01\r\n10317,1610,LATAM,grocery,retail,20.33,6,0.210,loyalty,2024-07-20\r\n10318,2323,AMER,home,retail,125.19,4,0.197,coupon,2024-04-19\r\n10319,1256,LATAM,electronics,retail,89.87,1,0.095,bundle,2024-02-07\r\n10320,1563,EMEA,grocery,retail,124.65,3,0.054,none,2024-03-07\r\n10321,1542,APAC,sports,retail,89.69,7,0.206,none,2024-12-03\r\n10322,1641,EMEA,fashion,online,45.55,2,0.056,none,2024-05-13\r\n10323,2031,AMER,electronics,mobile,189.09,1,0.131,bundle,2024-05-25\r\n10324,1891,APAC,grocery,online,65.62,8,0.193,coupon,2024-09-14\r\n10325,2007,LATAM,toys,online,123.18,1,0.173,none,2024-05-13\r\n10326,1132,EMEA,electronics,online,62.61,3,0.233,bundle,2024-10-28\r\n10327,1305,EMEA,sports,online,60.92,1,0.239,none,2024-12-12\r\n10328,1249,EMEA,grocery,retail,55.86,4,0.021,none,2024-08-18\r\n10329,1209,AMER,toys,online,76.35,4,0.144,none,2024-02-27\r\n10330,1977,APAC,toys,retail,40.61,1,0.058,none,2024-02-26\r\n10331,1916,AMER,grocery,mobile,50.12,5,0.173,loyalty,2024-05-12\r\n10332,2420,EMEA,sports,retail,48.56,5,0.189,none,2024-06-26\r\n10333,2139,AMER,fashion,retail,56.67,4,0.084,none,2024-11-28\r\n10334,1610,LATAM,home,online,50.13,4,0.061,none,2024-11-21\r\n10335,1312,EMEA,home,online,39.92,8,0.033,none,2024-02-05\r\n10336,1827,EMEA,grocery,retail,67.92,2,0.133,none,2024-05-20\r\n10337,1186,APAC,electronics,online,39.88,4,0.243,none,2024-10-11\r\n10338,1809,APAC,sports,online,40.22,5,0.024,loyalty,2024-08-12\r\n10339,2148,EMEA,grocery,online,53.42,7,0.096,loyalty,2024-08-08\r\n10340,1407,LATAM,fashion,retail,61.99,1,0.171,none,2024-08-01\r\n10341,2143,AMER,grocery,online,76.15,7,0.196,coupon,2024-10-11\r\n10342,1759,EMEA,electronics,retail,109.44,4,0.123,bundle,2024-03-16\r\n10343,1237,LATAM,home,mobile,93.29,3,0.087,none,2024-06-06\r\n10344,1230,EMEA,electronics,online,43.68,3,0.065,loyalty,2024-02-17\r\n10345,2343,EMEA,fashion,mobile,116.12,3,0.124,bundle,2024-10-22\r\n10346,1570,AMER,home,retail,112.28,1,0.
226,none,2024-06-02\r\n10347,1095,APAC,grocery,online,36.69,8,0.185,none,2024-10-07\r\n10348,2192,APAC,grocery,online,103.01,3,0.126,none,2024-08-11\r\n10349,2334,LATAM,sports,retail,94.49,4,0.146,loyalty,2024-02-11\r\n10350,1472,AMER,fashion,retail,30.36,2,0.099,none,2024-10-05\r\n10351,1586,LATAM,sports,mobile,77.02,7,0.124,coupon,2024-06-27\r\n10352,1908,AMER,grocery,online,20.86,6,0.199,none,2024-06-18\r\n10353,2210,APAC,grocery,online,89.79,1,0.110,bundle,2024-12-05\r\n10354,1887,LATAM,electronics,online,62.72,4,0.122,none,2024-11-18\r\n10355,2484,APAC,grocery,online,36.25,7,0.119,coupon,2024-12-22\r\n10356,1358,APAC,fashion,online,93.97,5,0.244,none,2024-10-21\r\n10357,1783,AMER,home,online,48.58,4,0.107,none,2024-07-23\r\n10358,1665,AMER,fashion,online,93.28,5,0.219,none,2024-08-25\r\n10359,1660,AMER,fashion,online,43.05,7,0.108,coupon,2024-05-05\r\n10360,1888,LATAM,fashion,mobile,41.84,5,0.119,none,2024-06-13\r\n10361,2212,EMEA,grocery,online,113.55,2,0.162,bundle,2024-12-06\r\n10362,2186,LATAM,grocery,online,59.00,1,0.052,coupon,2024-10-07\r\n10363,1202,APAC,fashion,online,29.37,1,0.108,coupon,2024-05-04\r\n10364,2454,LATAM,home,mobile,25.74,4,0.224,none,2024-01-18\r\n10365,1649,APAC,electronics,partner,51.75,5,0.038,none,2024-02-23\r\n10366,1720,AMER,grocery,retail,60.85,6,0.144,coupon,2024-07-16\r\n10367,1944,AMER,fashion,retail,106.55,7,0.144,loyalty,2024-09-04\r\n10368,1420,APAC,grocery,online,45.87,1,0.193,bundle,2024-01-05\r\n10369,1639,APAC,fashion,online,19.89,6,0.219,none,2024-08-22\r\n10370,1844,APAC,home,partner,30.97,4,0.085,none,2024-06-21\r\n10371,1692,LATAM,grocery,mobile,44.68,7,0.192,coupon,2024-05-14\r\n10372,1894,APAC,electronics,retail,43.69,6,0.016,none,2024-01-23\r\n10373,1667,AMER,sports,online,60.88,6,0.137,none,2024-09-22\r\n10374,1029,EMEA,home,mobile,65.15,1,0.164,none,2024-04-18\r\n10375,1616,APAC,electronics,online,75.37,8,0.010,coupon,2024-04-25\r\n10376,1883,LATAM,grocery,retail,70.14,5,0.067,coupon,2024-04-05\r\n10377,1902,AM
ER,toys,retail,39.62,2,0.123,bundle,2024-07-27\r\n10378,1445,APAC,electronics,retail,37.09,1,0.170,none,2024-04-01\r\n10379,1252,APAC,electronics,online,36.72,4,0.070,none,2024-12-06\r\n10380,1041,APAC,toys,online,77.15,1,0.120,none,2024-10-08\r\n10381,1742,AMER,fashion,retail,30.16,2,0.193,none,2024-12-17\r\n10382,1630,APAC,electronics,retail,50.04,3,0.227,bundle,2024-04-13\r\n10383,1450,EMEA,grocery,retail,35.05,1,0.005,none,2024-06-13\r\n10384,1840,LATAM,grocery,mobile,102.99,7,0.019,bundle,2024-04-26\r\n10385,1851,EMEA,home,online,50.54,5,0.023,none,2024-05-14\r\n10386,1042,LATAM,fashion,retail,33.81,5,0.239,bundle,2024-12-01\r\n10387,1786,APAC,electronics,online,52.87,6,0.221,bundle,2024-07-16\r\n10388,2202,APAC,fashion,retail,86.23,7,0.042,none,2024-03-01\r\n10389,1917,LATAM,sports,online,132.81,8,0.167,bundle,2024-04-28\r\n10390,2349,APAC,sports,online,28.99,3,0.075,loyalty,2024-12-01\r\n10391,2454,LATAM,grocery,online,37.30,8,0.142,bundle,2024-05-14\r\n10392,2301,EMEA,sports,online,50.96,6,0.089,loyalty,2024-12-04\r\n10393,1641,EMEA,grocery,retail,17.39,2,0.106,bundle,2024-03-10\r\n10394,1993,APAC,toys,online,67.02,7,0.115,none,2024-03-13\r\n10395,2459,AMER,electronics,retail,38.10,5,0.221,coupon,2024-06-01\r\n10396,1775,EMEA,fashion,online,22.44,7,0.244,coupon,2024-07-17\r\n10397,1429,APAC,grocery,online,53.44,1,0.218,none,2024-12-07\r\n10398,1508,LATAM,grocery,partner,25.34,3,0.031,none,2024-02-19\r\n10399,2297,EMEA,toys,online,66.47,2,0.019,coupon,2024-05-23\r\n10400,1815,APAC,electronics,online,40.49,8,0.174,none,2024-12-28\r\n10401,1669,AMER,fashion,retail,35.68,8,0.018,bundle,2024-11-21\r\n10402,1565,AMER,grocery,online,33.43,6,0.003,none,2024-07-06\r\n10403,2213,APAC,grocery,retail,25.05,2,0.202,coupon,2024-09-08\r\n10404,1655,LATAM,electronics,mobile,45.92,5,0.133,none,2024-04-23\r\n10405,1530,APAC,electronics,online,35.44,8,0.182,none,2024-07-08\r\n10406,1913,LATAM,fashion,online,34.61,7,0.072,none,2024-11-07\r\n10407,2103,LATAM,fashion,online,39.92
,3,0.008,none,2024-05-11\r\n10408,1525,APAC,home,online,25.66,4,0.155,none,2024-12-14\r\n10409,2475,AMER,grocery,retail,61.41,4,0.126,loyalty,2024-06-21\r\n10410,2001,EMEA,fashion,online,57.15,2,0.057,none,2024-10-24\r\n10411,1070,EMEA,fashion,partner,38.54,4,0.237,coupon,2024-11-13\r\n10412,1757,EMEA,sports,retail,32.55,5,0.170,none,2024-05-04\r\n10413,2493,APAC,electronics,retail,78.36,4,0.112,none,2024-12-06\r\n10414,1725,APAC,toys,retail,61.87,2,0.207,coupon,2024-01-16\r\n10415,2017,EMEA,fashion,online,117.52,3,0.071,none,2024-11-17\r\n10416,1077,AMER,fashion,retail,110.53,6,0.006,none,2024-09-26\r\n10417,1242,LATAM,toys,retail,55.38,6,0.246,none,2024-03-26\r\n10418,1373,LATAM,electronics,retail,110.63,5,0.015,coupon,2024-07-01\r\n10419,2285,APAC,electronics,online,59.52,1,0.179,none,2024-09-19\r\n10420,2110,LATAM,grocery,retail,42.84,6,0.189,none,2024-10-17\r\n10421,1781,LATAM,grocery,retail,80.90,6,0.189,coupon,2024-12-17\r\n10422,1518,AMER,electronics,partner,84.51,4,0.190,none,2024-11-14\r\n10423,1806,APAC,fashion,online,35.85,8,0.033,coupon,2024-10-08\r\n10424,2184,APAC,sports,retail,38.37,4,0.195,none,2024-11-20\r\n10425,2177,AMER,electronics,mobile,140.75,2,0.035,none,2024-03-22\r\n10426,2424,LATAM,home,online,31.03,6,0.129,none,2024-02-10\r\n10427,1515,EMEA,grocery,retail,49.23,4,0.166,bundle,2024-02-24\r\n10428,1920,LATAM,electronics,online,71.10,2,0.237,none,2024-02-01\r\n10429,2201,AMER,grocery,online,90.84,1,0.225,loyalty,2024-02-01\r\n10430,2282,EMEA,grocery,online,68.87,5,0.114,none,2024-02-12\r\n10431,1044,EMEA,electronics,retail,123.93,4,0.002,none,2024-03-16\r\n10432,1249,EMEA,grocery,mobile,104.89,6,0.166,coupon,2024-03-02\r\n10433,2200,LATAM,fashion,mobile,33.64,1,0.231,loyalty,2024-10-09\r\n10434,1610,LATAM,fashion,online,39.98,8,0.019,coupon,2024-05-21\r\n10435,2326,LATAM,home,online,76.66,4,0.029,coupon,2024-12-03\r\n10436,2453,AMER,sports,mobile,67.85,4,0.174,none,2024-01-02\r\n10437,2443,LATAM,electronics,online,62.27,4,0.153,none,2024-02
-16\r\n10438,2070,APAC,sports,retail,76.48,8,0.089,none,2024-07-15\r\n10439,1710,APAC,grocery,mobile,44.98,6,0.064,bundle,2024-11-10\r\n10440,1954,APAC,grocery,mobile,59.23,7,0.111,coupon,2024-09-02\r\n10441,2041,LATAM,fashion,online,25.70,6,0.090,coupon,2024-12-09\r\n10442,1530,APAC,electronics,online,31.50,8,0.133,coupon,2024-09-01\r\n10443,1647,LATAM,toys,online,81.57,3,0.084,none,2024-06-20\r\n10444,1394,LATAM,electronics,retail,60.38,3,0.110,none,2024-07-02\r\n10445,2240,LATAM,home,retail,56.57,2,0.059,none,2024-12-06\r\n10446,1702,AMER,fashion,mobile,33.68,2,0.177,coupon,2024-05-24\r\n10447,2353,AMER,sports,online,136.35,2,0.048,none,2024-02-16\r\n10448,1447,LATAM,fashion,retail,66.63,6,0.234,none,2024-03-05\r\n10449,1970,LATAM,electronics,online,51.69,2,0.222,none,2024-08-17\r\n10450,1801,LATAM,home,online,137.63,7,0.002,none,2024-01-06\r\n10451,1407,LATAM,home,retail,117.69,5,0.179,none,2024-04-07\r\n10452,1526,EMEA,home,online,72.79,7,0.147,coupon,2024-08-28\r\n10453,1764,LATAM,home,online,55.52,2,0.005,none,2024-06-21\r\n10454,2005,APAC,fashion,mobile,20.12,5,0.247,none,2024-09-01\r\n10455,1686,LATAM,sports,mobile,119.55,2,0.138,none,2024-01-12\r\n10456,1013,LATAM,grocery,online,56.49,6,0.055,none,2024-08-28\r\n10457,1616,APAC,electronics,mobile,92.41,4,0.139,coupon,2024-11-26\r\n10458,1866,EMEA,home,mobile,50.55,5,0.076,coupon,2024-10-04\r\n10459,1426,AMER,toys,mobile,69.00,5,0.140,coupon,2024-10-18\r\n10460,2313,LATAM,sports,online,33.37,7,0.247,bundle,2024-12-14\r\n10461,1331,AMER,grocery,online,38.98,5,0.125,bundle,2024-06-13\r\n10462,2380,AMER,grocery,online,31.10,1,0.002,bundle,2024-01-25\r\n10463,2170,EMEA,electronics,partner,28.34,2,0.214,bundle,2024-06-04\r\n10464,1309,EMEA,electronics,retail,58.29,2,0.067,none,2024-08-05\r\n10465,1650,LATAM,sports,online,47.66,8,0.169,none,2024-07-02\r\n10466,1705,AMER,sports,partner,88.86,8,0.072,coupon,2024-11-01\r\n10467,2360,EMEA,grocery,online,37.17,3,0.019,coupon,2024-12-18\r\n10468,1260,LATAM,fashion,mobil
e,83.78,5,0.033,none,2024-03-17\r\n10469,1217,EMEA,electronics,mobile,77.39,8,0.063,none,2024-02-17\r\n10470,1804,AMER,toys,retail,82.28,2,0.023,none,2024-05-14\r\n10471,2247,LATAM,home,retail,39.00,6,0.039,loyalty,2024-04-03\r\n10472,1649,APAC,home,mobile,119.02,3,0.213,coupon,2024-04-21\r\n10473,2459,AMER,sports,online,38.47,4,0.053,none,2024-03-04\r\n10474,2101,APAC,grocery,retail,105.61,1,0.048,none,2024-08-27\r\n10475,2022,LATAM,grocery,online,99.49,7,0.175,bundle,2024-03-02\r\n10476,2265,APAC,electronics,online,40.30,2,0.228,bundle,2024-02-14\r\n10477,1989,LATAM,grocery,online,56.83,7,0.070,coupon,2024-06-23\r\n10478,1703,AMER,fashion,retail,59.03,3,0.131,none,2024-08-11\r\n10479,1909,APAC,grocery,online,63.39,3,0.093,none,2024-02-15\r\n10480,1841,AMER,grocery,online,61.74,2,0.025,bundle,2024-06-06\r\n10481,2337,AMER,grocery,mobile,149.42,8,0.084,none,2024-07-20\r\n10482,1513,APAC,electronics,online,39.66,2,0.243,none,2024-01-11\r\n10483,1924,AMER,fashion,online,132.34,7,0.184,coupon,2024-02-17\r\n10484,2021,EMEA,electronics,online,40.28,1,0.164,none,2024-05-11\r\n10485,2409,APAC,home,retail,74.35,5,0.082,bundle,2024-12-16\r\n10486,1029,EMEA,toys,retail,57.17,3,0.023,none,2024-11-23\r\n10487,1959,EMEA,grocery,partner,61.56,4,0.126,none,2024-08-08\r\n10488,2134,AMER,toys,online,127.39,6,0.075,bundle,2024-08-01\r\n10489,1011,APAC,home,partner,21.13,1,0.197,none,2024-08-04\r\n10490,1616,APAC,electronics,mobile,112.04,8,0.159,coupon,2024-10-20\r\n10491,1072,LATAM,grocery,online,38.46,6,0.111,none,2024-09-20\r\n10492,2421,AMER,toys,mobile,41.42,7,0.183,coupon,2024-03-20\r\n10493,1132,EMEA,toys,retail,14.40,4,0.135,loyalty,2024-07-16\r\n10494,2085,AMER,home,online,117.76,2,0.247,none,2024-05-19\r\n10495,2415,AMER,sports,retail,27.51,1,0.192,none,2024-08-10\r\n10496,1323,EMEA,electronics,retail,19.09,6,0.007,coupon,2024-09-22\r\n10497,1686,LATAM,toys,online,98.92,4,0.047,none,2024-11-16\r\n10498,1656,LATAM,grocery,retail,32.30,7,0.070,none,2024-09-22\r\n10499,2218,EM
EA,home,online,95.81,4,0.054,none,2024-01-12\r\n10500,1079,LATAM,home,mobile,21.27,1,0.229,none,2024-08-04\r\n10501,1651,LATAM,fashion,retail,64.23,4,0.099,none,2024-11-03\r\n10502,2221,LATAM,toys,retail,32.82,6,0.035,none,2024-01-15\r\n10503,1618,EMEA,electronics,online,36.29,4,0.135,coupon,2024-10-21\r\n10504,1834,AMER,sports,retail,22.01,6,0.041,bundle,2024-01-15\r\n10505,1861,AMER,electronics,online,42.80,5,0.217,none,2024-12-01\r\n10506,1060,LATAM,toys,online,45.99,8,0.071,none,2024-08-24\r\n10507,1705,AMER,grocery,retail,35.91,5,0.139,coupon,2024-01-08\r\n10508,2102,APAC,home,online,30.90,6,0.125,loyalty,2024-04-04\r\n10509,1565,AMER,fashion,mobile,74.49,8,0.150,bundle,2024-11-09\r\n10510,1101,AMER,toys,partner,46.62,7,0.249,bundle,2024-12-05\r\n10511,1972,LATAM,toys,mobile,26.41,1,0.084,none,2024-07-12\r\n10512,2318,AMER,grocery,mobile,71.42,6,0.193,none,2024-11-15\r\n10513,1396,EMEA,electronics,online,97.62,6,0.118,none,2024-06-13\r\n10514,1029,EMEA,grocery,retail,44.05,5,0.051,none,2024-04-18\r\n10515,1991,APAC,grocery,online,83.59,1,0.049,none,2024-01-11\r\n10516,1289,LATAM,grocery,retail,33.86,7,0.224,none,2024-06-20\r\n10517,1412,AMER,grocery,retail,59.02,3,0.102,loyalty,2024-11-26\r\n10518,1064,AMER,grocery,online,38.01,5,0.184,none,2024-04-05\r\n10519,2305,AMER,home,online,24.67,8,0.199,coupon,2024-08-09\r\n10520,1013,LATAM,home,online,83.42,7,0.083,none,2024-04-06\r\n10521,1156,APAC,grocery,mobile,22.57,8,0.083,bundle,2024-12-12\r\n10522,1861,AMER,electronics,online,116.10,4,0.115,none,2024-05-23\r\n10523,1322,AMER,home,online,71.68,5,0.195,none,2024-07-08\r\n10524,1729,AMER,sports,online,19.09,1,0.204,none,2024-11-07\r\n10525,1453,APAC,home,online,64.82,1,0.040,none,2024-11-25\r\n10526,1314,AMER,home,retail,15.47,3,0.067,coupon,2024-10-17\r\n10527,2287,EMEA,grocery,retail,74.39,7,0.075,none,2024-08-20\r\n10528,1733,LATAM,electronics,online,38.73,4,0.222,none,2024-02-21\r\n10529,1890,LATAM,sports,retail,19.79,7,0.064,loyalty,2024-12-11\r\n10530,2327,E
MEA,home,retail,129.47,3,0.213,none,2024-09-12\r\n10531,1031,AMER,toys,online,108.54,5,0.070,coupon,2024-04-09\r\n10532,1736,AMER,toys,mobile,46.88,1,0.006,none,2024-11-19\r\n10533,1093,APAC,home,partner,71.26,8,0.073,none,2024-03-28\r\n10534,1488,AMER,grocery,retail,170.01,4,0.112,none,2024-06-14\r\n10535,2081,APAC,grocery,online,34.90,5,0.101,loyalty,2024-03-04\r\n10536,1519,APAC,grocery,online,27.91,4,0.021,none,2024-12-26\r\n10537,2068,LATAM,grocery,mobile,40.20,8,0.164,loyalty,2024-02-27\r\n10538,2000,APAC,toys,online,74.35,6,0.141,coupon,2024-07-23\r\n10539,1280,LATAM,grocery,online,53.03,3,0.001,bundle,2024-09-13\r\n10540,2390,AMER,home,online,75.29,1,0.115,none,2024-08-10\r\n10541,2241,APAC,grocery,retail,41.88,8,0.104,none,2024-07-23\r\n10542,2334,LATAM,electronics,retail,35.28,7,0.070,none,2024-10-05\r\n10543,1372,APAC,electronics,retail,73.09,6,0.104,loyalty,2024-12-03\r\n10544,1837,LATAM,fashion,retail,61.58,2,0.175,coupon,2024-06-21\r\n10545,1861,AMER,electronics,retail,27.59,7,0.036,none,2024-02-27\r\n10546,1498,LATAM,grocery,online,75.27,5,0.145,coupon,2024-01-08\r\n10547,1804,AMER,sports,retail,33.31,1,0.157,none,2024-04-26\r\n10548,2263,AMER,grocery,online,103.61,8,0.095,none,2024-08-20\r\n10549,1649,APAC,home,partner,24.79,3,0.250,none,2024-12-25\r\n10550,1570,AMER,home,retail,73.48,6,0.027,bundle,2024-03-11\r\n10551,1483,EMEA,fashion,retail,150.00,1,0.122,coupon,2024-10-06\r\n10552,2443,LATAM,fashion,online,83.03,8,0.176,coupon,2024-12-18\r\n10553,1534,EMEA,home,online,24.61,1,0.176,loyalty,2024-04-16\r\n10554,1413,LATAM,electronics,mobile,90.76,3,0.073,none,2024-07-22\r\n10555,2151,APAC,electronics,mobile,42.39,8,0.172,bundle,2024-01-10\r\n10556,1498,LATAM,home,retail,42.30,2,0.177,none,2024-09-15\r\n10557,1330,EMEA,electronics,retail,42.62,4,0.192,coupon,2024-10-01\r\n10558,2386,EMEA,electronics,mobile,145.72,5,0.176,none,2024-10-08\r\n10559,1623,AMER,sports,retail,36.05,2,0.021,none,2024-11-04\r\n10560,2018,AMER,toys,retail,85.90,1,0.002,none,2
024-06-20\r\n10561,1290,EMEA,sports,online,88.52,6,0.107,none,2024-05-06\r\n10562,1128,LATAM,electronics,retail,31.99,6,0.164,coupon,2024-05-16\r\n10563,1534,EMEA,grocery,mobile,134.76,4,0.062,none,2024-12-18\r\n10564,1384,LATAM,home,online,43.25,1,0.007,coupon,2024-06-24\r\n10565,1645,EMEA,sports,online,26.28,8,0.060,bundle,2024-08-24\r\n10566,1250,APAC,electronics,retail,89.38,8,0.204,loyalty,2024-04-23\r\n10567,2029,APAC,fashion,partner,48.73,3,0.156,none,2024-07-18\r\n10568,1978,AMER,home,retail,58.84,7,0.111,coupon,2024-06-23\r\n10569,1959,EMEA,electronics,partner,31.98,8,0.183,bundle,2024-01-20\r\n10570,1705,AMER,home,retail,68.52,3,0.073,none,2024-11-08\r\n10571,1002,EMEA,toys,online,64.78,3,0.162,loyalty,2024-09-24\r\n10572,1151,APAC,grocery,retail,95.41,1,0.211,coupon,2024-07-26\r\n10573,1855,APAC,sports,partner,48.20,2,0.246,coupon,2024-09-06\r\n10574,1286,EMEA,home,online,48.07,1,0.099,coupon,2024-01-01\r\n10575,1201,LATAM,fashion,retail,112.53,2,0.138,loyalty,2024-02-02\r\n10576,1658,AMER,sports,mobile,64.89,1,0.044,none,2024-08-02\r\n10577,1634,AMER,fashion,online,29.36,8,0.114,loyalty,2024-06-11\r\n10578,1638,EMEA,toys,retail,55.08,6,0.203,loyalty,2024-03-07\r\n10579,1757,EMEA,sports,online,34.94,5,0.035,coupon,2024-07-06\r\n10580,2046,APAC,fashion,online,41.57,8,0.015,none,2024-08-07\r\n10581,1508,LATAM,toys,online,44.61,2,0.044,coupon,2024-06-13\r\n10582,1172,APAC,toys,retail,151.23,6,0.037,none,2024-08-28\r\n10583,1790,AMER,grocery,online,38.57,5,0.085,none,2024-02-11\r\n10584,1272,AMER,fashion,partner,75.25,6,0.063,none,2024-05-14\r\n10585,1152,LATAM,fashion,online,56.18,5,0.174,none,2024-10-25\r\n10586,1045,LATAM,grocery,online,72.60,8,0.105,none,2024-03-08\r\n10587,2095,EMEA,toys,mobile,116.89,7,0.082,loyalty,2024-09-22\r\n10588,1492,APAC,home,online,53.52,4,0.119,none,2024-05-16\r\n10589,1846,APAC,sports,online,77.44,5,0.226,none,2024-06-13\r\n10590,1078,APAC,grocery,online,66.11,4,0.157,none,2024-10-17\r\n10591,2310,EMEA,fashion,retail,34.33,5,
0.199,none,2024-07-24\r\n10592,1140,LATAM,toys,retail,23.45,2,0.015,none,2024-11-27\r\n10593,1630,APAC,toys,retail,56.13,1,0.023,none,2024-08-16\r\n10594,1572,LATAM,electronics,online,57.64,4,0.094,coupon,2024-01-07\r\n10595,1341,EMEA,grocery,retail,104.47,7,0.154,coupon,2024-07-23\r\n10596,1375,AMER,fashion,retail,41.34,7,0.143,none,2024-02-20\r\n10597,1727,APAC,grocery,mobile,44.25,7,0.060,loyalty,2024-04-16\r\n10598,1134,APAC,fashion,online,145.53,1,0.187,coupon,2024-12-13\r\n10599,2383,APAC,grocery,retail,33.09,7,0.201,none,2024-08-19\r\n10600,1468,AMER,fashion,retail,51.04,2,0.245,bundle,2024-03-25\r\n10601,1724,LATAM,toys,online,32.06,2,0.201,coupon,2024-01-23\r\n10602,2222,LATAM,electronics,retail,29.13,7,0.096,none,2024-01-12\r\n10603,1675,LATAM,electronics,mobile,42.14,7,0.105,coupon,2024-04-12\r\n10604,1483,EMEA,toys,online,41.17,2,0.026,coupon,2024-01-08\r\n10605,2402,AMER,home,online,115.13,7,0.120,bundle,2024-03-01\r\n10606,1247,AMER,electronics,retail,102.74,6,0.044,bundle,2024-01-04\r\n10607,1848,EMEA,home,retail,71.98,6,0.232,none,2024-02-08\r\n10608,1940,APAC,home,online,14.58,4,0.132,bundle,2024-04-21\r\n10609,2393,LATAM,electronics,retail,48.49,6,0.105,none,2024-01-27\r\n10610,1227,AMER,electronics,online,23.97,2,0.205,bundle,2024-06-11\r\n10611,2485,AMER,grocery,retail,153.91,7,0.055,none,2024-03-11\r\n10612,2249,LATAM,sports,online,43.69,2,0.217,bundle,2024-11-20\r\n10613,2426,AMER,home,retail,73.37,3,0.084,none,2024-09-11\r\n10614,1214,EMEA,home,online,47.85,3,0.098,none,2024-01-14\r\n10615,1609,LATAM,toys,online,69.45,1,0.133,none,2024-10-14\r\n10616,2426,AMER,fashion,online,84.04,7,0.036,coupon,2024-08-13\r\n10617,1169,LATAM,grocery,partner,43.11,8,0.017,none,2024-02-15\r\n10618,1029,EMEA,sports,retail,94.92,7,0.084,bundle,2024-12-12\r\n10619,1501,AMER,electronics,retail,76.61,1,0.004,none,2024-11-20\r\n10620,2077,APAC,home,online,38.69,2,0.027,coupon,2024-09-03\r\n10621,1623,AMER,sports,retail,61.83,3,0.089,coupon,2024-04-12\r\n10622,1916,AM
ER,home,retail,59.38,2,0.133,none,2024-09-09\r\n10623,1074,LATAM,electronics,retail,45.67,8,0.214,none,2024-02-12\r\n10624,2445,APAC,fashion,online,98.76,8,0.015,none,2024-01-17\r\n10625,1764,LATAM,home,online,19.15,5,0.017,none,2024-02-07\r\n10626,1109,APAC,electronics,online,62.08,6,0.136,none,2024-01-27\r\n10627,2497,AMER,grocery,mobile,50.71,3,0.077,coupon,2024-03-16\r\n10628,2342,AMER,sports,online,44.21,1,0.236,bundle,2024-11-27\r\n10629,2225,EMEA,home,online,35.69,7,0.175,none,2024-02-20\r\n10630,1500,EMEA,home,retail,53.31,3,0.187,none,2024-06-14\r\n10631,1517,AMER,toys,retail,90.17,2,0.026,none,2024-10-09\r\n10632,2148,EMEA,grocery,online,78.72,1,0.020,coupon,2024-02-21\r\n10633,1717,AMER,sports,online,90.58,3,0.235,none,2024-03-27\r\n10634,1790,AMER,home,retail,97.52,7,0.200,none,2024-04-17\r\n10635,1484,AMER,home,online,33.57,7,0.022,coupon,2024-03-25\r\n10636,1638,EMEA,electronics,online,55.90,7,0.203,none,2024-01-09\r\n10637,2150,APAC,electronics,retail,79.72,5,0.134,none,2024-12-16\r\n10638,1102,APAC,grocery,retail,38.08,2,0.212,none,2024-02-28\r\n10639,2451,APAC,sports,online,76.33,1,0.237,none,2024-06-13\r\n10640,2151,APAC,grocery,online,12.77,2,0.019,none,2024-12-15\r\n10641,1893,APAC,fashion,retail,59.40,3,0.127,none,2024-08-15\r\n10642,2196,AMER,fashion,online,137.61,3,0.126,coupon,2024-04-01\r\n10643,1202,APAC,home,retail,26.26,8,0.172,none,2024-11-16\r\n10644,2086,APAC,grocery,online,41.99,8,0.056,coupon,2024-03-27\r\n10645,1479,AMER,grocery,online,24.37,6,0.078,loyalty,2024-04-13\r\n10646,1633,EMEA,home,online,45.53,6,0.088,bundle,2024-03-19\r\n10647,1949,AMER,home,online,59.40,6,0.196,none,2024-02-20\r\n10648,1317,EMEA,toys,online,102.30,4,0.163,none,2024-09-08\r\n10649,1710,APAC,sports,mobile,67.86,6,0.040,coupon,2024-06-13\r\n10650,2158,APAC,home,online,39.64,7,0.026,loyalty,2024-04-11\r\n10651,1318,LATAM,toys,retail,53.70,7,0.042,none,2024-11-13\r\n10652,1348,AMER,grocery,online,92.73,8,0.006,none,2024-04-19\r\n10653,1226,AMER,home,online,2
8.32,6,0.200,none,2024-02-08\r\n10654,1036,EMEA,fashion,retail,33.24,8,0.085,bundle,2024-03-13\r\n10655,1083,AMER,electronics,online,198.41,2,0.126,bundle,2024-10-20\r\n10656,1395,APAC,fashion,mobile,114.31,3,0.141,none,2024-08-04\r\n10657,2280,EMEA,electronics,retail,74.43,5,0.149,coupon,2024-09-05\r\n10658,1781,LATAM,grocery,partner,29.94,7,0.132,loyalty,2024-08-04\r\n10659,1856,EMEA,sports,retail,63.51,7,0.217,none,2024-12-19\r\n10660,1382,LATAM,grocery,online,78.54,6,0.055,loyalty,2024-11-16\r\n10661,2328,EMEA,grocery,online,42.67,1,0.194,none,2024-11-01\r\n10662,1764,LATAM,fashion,retail,54.01,5,0.165,bundle,2024-06-16\r\n10663,2150,APAC,grocery,partner,61.32,6,0.130,none,2024-08-20\r\n10664,1660,AMER,home,online,85.24,3,0.191,coupon,2024-09-12\r\n10665,1646,APAC,grocery,online,34.27,8,0.091,coupon,2024-10-12\r\n10666,1216,APAC,grocery,retail,71.87,1,0.221,none,2024-04-28\r\n10667,2049,LATAM,home,retail,104.44,5,0.037,none,2024-09-15\r\n10668,1542,APAC,grocery,online,109.59,3,0.015,none,2024-05-01\r\n10669,1056,LATAM,home,online,51.01,4,0.219,bundle,2024-06-07\r\n10670,2052,LATAM,grocery,online,58.78,2,0.225,coupon,2024-01-06\r\n10671,1844,APAC,grocery,mobile,94.11,3,0.025,none,2024-09-16\r\n10672,2213,APAC,fashion,retail,30.49,3,0.193,coupon,2024-02-04\r\n10673,1728,AMER,grocery,retail,28.34,5,0.201,none,2024-08-21\r\n10674,1576,EMEA,fashion,online,87.91,6,0.024,bundle,2024-09-04\r\n10675,1023,APAC,grocery,online,154.28,8,0.009,coupon,2024-03-10\r\n10676,2371,LATAM,grocery,online,68.74,3,0.133,none,2024-05-11\r\n10677,1390,APAC,grocery,mobile,65.88,7,0.115,none,2024-10-06\r\n10678,1758,AMER,grocery,retail,48.73,6,0.072,none,2024-07-12\r\n10679,1517,AMER,electronics,retail,42.68,7,0.208,loyalty,2024-02-01\r\n10680,1114,APAC,home,online,86.09,1,0.052,none,2024-11-13\r\n10681,1509,AMER,grocery,partner,13.62,3,0.079,none,2024-02-01\r\n10682,1303,LATAM,toys,online,75.00,8,0.076,coupon,2024-07-15\r\n10683,1446,AMER,sports,mobile,65.49,4,0.070,bundle,2024-04-24\r\n10
684,1465,AMER,home,partner,48.49,4,0.194,none,2024-05-08\r\n10685,1361,LATAM,home,mobile,41.41,6,0.163,loyalty,2024-06-28\r\n10686,2042,LATAM,home,retail,35.89,2,0.235,none,2024-09-06\r\n10687,1222,AMER,sports,retail,34.81,5,0.076,none,2024-04-27\r\n10688,2402,AMER,electronics,online,58.27,4,0.001,none,2024-10-26\r\n10689,1925,LATAM,fashion,retail,59.24,2,0.243,bundle,2024-07-19\r\n10690,1882,AMER,toys,online,50.22,5,0.202,bundle,2024-05-15\r\n10691,1077,AMER,grocery,mobile,108.93,1,0.013,none,2024-06-21\r\n10692,2171,EMEA,electronics,online,46.08,3,0.214,none,2024-07-01\r\n10693,1216,APAC,electronics,retail,153.01,8,0.138,bundle,2024-02-21\r\n10694,2057,APAC,grocery,retail,97.07,6,0.087,coupon,2024-04-20\r\n10695,1968,EMEA,electronics,retail,42.42,8,0.138,none,2024-01-06\r\n10696,1143,LATAM,electronics,retail,122.97,7,0.169,none,2024-10-13\r\n10697,1403,APAC,grocery,online,50.41,6,0.016,coupon,2024-07-02\r\n10698,1034,EMEA,grocery,online,100.76,5,0.094,none,2024-12-16\r\n10699,1337,APAC,grocery,online,57.42,3,0.020,none,2024-04-14\r\n10700,1770,AMER,home,online,42.79,2,0.047,loyalty,2024-07-23\r\n10701,2075,LATAM,electronics,retail,33.22,1,0.071,coupon,2024-01-23\r\n10702,1563,EMEA,electronics,mobile,77.36,1,0.057,bundle,2024-11-28\r\n10703,1584,EMEA,fashion,mobile,109.86,7,0.115,none,2024-12-16\r\n10704,1953,EMEA,grocery,retail,76.83,8,0.181,coupon,2024-11-13\r\n10705,1824,LATAM,grocery,online,44.49,5,0.059,coupon,2024-08-11\r\n10706,1629,LATAM,home,retail,74.59,7,0.105,none,2024-04-12\r\n10707,1881,LATAM,electronics,online,30.02,3,0.022,coupon,2024-08-15\r\n10708,1905,APAC,grocery,mobile,34.65,1,0.213,loyalty,2024-10-11\r\n10709,1252,APAC,electronics,retail,52.32,4,0.084,bundle,2024-08-13\r\n10710,1763,LATAM,fashion,retail,32.60,7,0.126,none,2024-08-03\r\n10711,1584,EMEA,grocery,retail,36.14,1,0.162,coupon,2024-02-11\r\n10712,2117,EMEA,electronics,retail,28.54,4,0.154,bundle,2024-03-28\r\n10713,1350,LATAM,electronics,partner,38.85,8,0.006,none,2024-09-02\r\n10714
,1128,LATAM,grocery,retail,108.83,5,0.229,none,2024-12-22\r\n10715,1799,EMEA,electronics,online,26.69,3,0.135,none,2024-12-07\r\n10716,2132,LATAM,grocery,online,32.72,6,0.170,coupon,2024-09-13\r\n10717,2470,EMEA,electronics,online,39.67,2,0.050,none,2024-04-16\r\n10718,1546,EMEA,grocery,online,65.32,6,0.185,none,2024-11-18\r\n10719,1723,LATAM,fashion,online,82.76,6,0.217,none,2024-01-07\r\n10720,1737,AMER,toys,online,52.68,2,0.153,none,2024-03-12\r\n10721,1500,EMEA,fashion,retail,82.20,8,0.049,bundle,2024-04-09\r\n10722,2349,APAC,grocery,online,47.10,1,0.054,none,2024-02-09\r\n10723,1318,LATAM,fashion,retail,33.42,1,0.034,none,2024-02-28\r\n10724,1271,EMEA,home,retail,48.72,3,0.130,coupon,2024-11-07\r\n10725,1510,EMEA,sports,retail,67.32,1,0.192,none,2024-01-08\r\n10726,2431,LATAM,sports,mobile,71.09,7,0.030,none,2024-04-22\r\n10727,1542,APAC,fashion,online,21.90,7,0.168,none,2024-07-14\r\n10728,2067,LATAM,grocery,retail,37.30,4,0.094,none,2024-01-08\r\n10729,2244,LATAM,home,retail,80.57,6,0.080,none,2024-02-10\r\n10730,2126,APAC,sports,online,157.85,2,0.222,none,2024-08-26\r\n10731,2494,AMER,home,retail,25.88,2,0.179,coupon,2024-01-04\r\n10732,1316,APAC,electronics,online,60.27,6,0.220,none,2024-11-10\r\n10733,2120,AMER,home,online,50.90,6,0.070,coupon,2024-01-14\r\n10734,1540,LATAM,fashion,online,57.65,7,0.019,none,2024-12-07\r\n10735,1158,LATAM,electronics,online,40.85,2,0.076,coupon,2024-11-03\r\n10736,2117,EMEA,electronics,mobile,36.54,3,0.168,none,2024-04-23\r\n10737,2277,EMEA,home,retail,71.92,6,0.248,none,2024-05-19\r\n10738,1658,AMER,electronics,retail,37.96,7,0.176,none,2024-11-11\r\n10739,1228,APAC,sports,online,23.55,8,0.178,none,2024-11-27\r\n10740,2025,EMEA,fashion,mobile,122.62,8,0.214,coupon,2024-04-22\r\n10741,1307,AMER,electronics,retail,28.18,1,0.233,none,2024-12-09\r\n10742,2211,APAC,electronics,mobile,63.20,4,0.205,none,2024-11-03\r\n10743,1906,APAC,fashion,online,51.01,8,0.081,none,2024-09-07\r\n10744,1862,LATAM,fashion,partner,108.44,3,0.235,n
one,2024-03-05\r\n10745,1420,APAC,fashion,retail,117.51,2,0.140,none,2024-06-03\r\n10746,1678,LATAM,toys,mobile,54.57,8,0.061,none,2024-11-19\r\n10747,1255,AMER,electronics,mobile,35.32,1,0.151,none,2024-01-05\r\n10748,1071,AMER,grocery,online,50.39,1,0.114,coupon,2024-01-10\r\n10749,1984,LATAM,toys,online,51.83,3,0.114,bundle,2024-11-08\r\n10750,1488,AMER,home,online,25.97,7,0.092,none,2024-11-24\r\n10751,1714,APAC,grocery,retail,140.08,5,0.190,none,2024-09-10\r\n10752,1328,APAC,toys,online,77.54,7,0.152,none,2024-06-02\r\n10753,2077,APAC,home,online,67.06,1,0.190,bundle,2024-07-01\r\n10754,2418,AMER,toys,retail,75.95,7,0.076,bundle,2024-04-16\r\n10755,1869,AMER,home,online,41.39,5,0.038,none,2024-02-18\r\n10756,1951,LATAM,electronics,online,132.51,7,0.243,none,2024-12-10\r\n10757,1961,EMEA,fashion,online,43.17,1,0.022,bundle,2024-05-15\r\n10758,1953,EMEA,electronics,mobile,53.24,4,0.198,coupon,2024-10-12\r\n10759,2326,LATAM,grocery,online,51.85,2,0.003,loyalty,2024-06-27\r\n10760,1484,AMER,toys,partner,70.25,4,0.203,loyalty,2024-11-22\r\n10761,1273,AMER,electronics,mobile,72.58,3,0.010,none,2024-12-11\r\n10762,1202,APAC,sports,online,41.44,3,0.138,none,2024-08-02\r\n10763,1401,LATAM,grocery,online,44.29,8,0.144,none,2024-10-21\r\n10764,2348,EMEA,sports,retail,49.42,5,0.166,coupon,2024-05-28\r\n10765,1486,LATAM,fashion,online,67.88,2,0.144,loyalty,2024-04-02\r\n10766,1992,LATAM,toys,online,124.23,4,0.248,none,2024-12-16\r\n10767,2354,LATAM,toys,online,95.58,1,0.013,none,2024-07-05\r\n10768,1708,LATAM,electronics,online,94.21,5,0.245,none,2024-08-21\r\n10769,2090,AMER,electronics,online,37.73,5,0.226,bundle,2024-10-19\r\n10770,2329,LATAM,sports,online,56.02,6,0.169,loyalty,2024-01-26\r\n10771,1945,AMER,home,retail,57.97,3,0.200,coupon,2024-07-18\r\n10772,2103,LATAM,fashion,online,61.66,5,0.219,none,2024-03-25\r\n10773,2367,AMER,electronics,online,39.51,6,0.233,coupon,2024-08-17\r\n10774,1491,EMEA,sports,retail,77.97,2,0.183,none,2024-08-24\r\n10775,1161,AMER,sports,
online,55.62,6,0.173,none,2024-05-19\r\n10776,1040,LATAM,toys,online,26.39,8,0.016,none,2024-08-12\r\n10777,2184,APAC,grocery,online,51.19,2,0.051,none,2024-09-15\r\n10778,2440,APAC,electronics,online,50.42,8,0.072,bundle,2024-03-04\r\n10779,1446,AMER,home,retail,51.67,4,0.032,coupon,2024-05-25\r\n10780,1781,LATAM,sports,retail,41.15,2,0.022,coupon,2024-11-02\r\n10781,2351,EMEA,sports,retail,59.65,6,0.084,coupon,2024-11-25\r\n10782,1345,AMER,sports,online,56.42,5,0.117,bundle,2024-10-05\r\n10783,1314,AMER,electronics,online,35.53,6,0.121,none,2024-09-07\r\n10784,1042,LATAM,sports,retail,37.76,5,0.040,coupon,2024-07-07\r\n10785,1701,LATAM,grocery,mobile,67.91,5,0.207,none,2024-06-27\r\n10786,1739,AMER,fashion,mobile,48.23,4,0.131,none,2024-06-26\r\n10787,1769,LATAM,home,retail,59.61,2,0.249,coupon,2024-07-19\r\n10788,2260,EMEA,sports,online,75.81,6,0.065,none,2024-03-12\r\n10789,1469,EMEA,grocery,online,32.64,7,0.177,none,2024-07-03\r\n10790,1351,APAC,electronics,retail,58.09,1,0.196,none,2024-02-11\r\n10791,1669,AMER,electronics,partner,50.02,3,0.110,none,2024-06-27\r\n10792,1399,AMER,grocery,retail,30.86,6,0.197,none,2024-12-25\r\n10793,2138,APAC,fashion,mobile,51.13,6,0.150,none,2024-07-26\r\n10794,1394,LATAM,electronics,online,20.84,2,0.232,coupon,2024-08-02\r\n10795,2166,AMER,grocery,online,52.62,2,0.008,none,2024-02-06\r\n10796,1402,EMEA,grocery,online,58.38,4,0.250,bundle,2024-12-03\r\n10797,1012,LATAM,sports,online,66.71,7,0.124,none,2024-01-20\r\n10798,1809,APAC,toys,retail,35.82,3,0.082,coupon,2024-01-09\r\n10799,1274,LATAM,grocery,online,81.90,5,0.141,none,2024-05-23\r\n10800,1530,APAC,home,online,141.85,8,0.182,none,2024-01-11\r\n10801,1211,EMEA,home,online,101.26,8,0.006,coupon,2024-08-20\r\n10802,1180,AMER,grocery,retail,75.24,1,0.172,none,2024-12-20\r\n10803,2393,LATAM,grocery,online,186.99,5,0.248,loyalty,2024-04-12\r\n10804,1282,LATAM,electronics,retail,38.23,3,0.129,coupon,2024-06-05\r\n10805,1663,LATAM,home,mobile,67.06,7,0.143,none,2024-12-07\r\n1
0806,1255,AMER,grocery,mobile,27.07,2,0.124,bundle,2024-11-14\r\n10807,2374,LATAM,toys,online,73.96,5,0.142,bundle,2024-11-15\r\n10808,1632,LATAM,fashion,online,48.57,8,0.022,none,2024-12-12\r\n10809,1424,APAC,fashion,online,20.00,5,0.243,none,2024-08-07\r\n10810,1980,LATAM,fashion,online,40.91,7,0.160,none,2024-05-16\r\n10811,1776,APAC,electronics,mobile,56.96,4,0.128,none,2024-07-19\r\n10812,1626,EMEA,grocery,online,67.14,8,0.006,none,2024-10-07\r\n10813,1458,APAC,fashion,retail,33.68,4,0.114,none,2024-08-15\r\n10814,2152,EMEA,toys,retail,47.71,4,0.027,bundle,2024-04-13\r\n10815,1057,LATAM,home,online,18.38,1,0.075,loyalty,2024-01-19\r\n10816,1172,APAC,home,online,52.31,6,0.031,bundle,2024-08-11\r\n10817,1522,LATAM,sports,online,178.46,2,0.127,bundle,2024-01-24\r\n10818,1684,EMEA,fashion,mobile,72.27,4,0.150,none,2024-12-21\r\n10819,1321,EMEA,fashion,online,25.55,8,0.066,none,2024-04-09\r\n10820,1615,LATAM,fashion,online,26.60,6,0.000,none,2024-07-16\r\n10821,1130,LATAM,grocery,partner,53.21,6,0.155,none,2024-09-22\r\n10822,1014,EMEA,grocery,mobile,25.11,6,0.203,loyalty,2024-06-22\r\n10823,1275,EMEA,fashion,mobile,35.99,5,0.245,coupon,2024-06-26\r\n10824,1423,EMEA,electronics,online,123.20,4,0.181,none,2024-07-12\r\n10825,1745,APAC,sports,retail,68.06,8,0.015,coupon,2024-08-27\r\n10826,1016,AMER,home,retail,72.83,6,0.049,none,2024-06-12\r\n10827,1590,APAC,fashion,online,25.27,1,0.084,loyalty,2024-08-01\r\n10828,2273,APAC,toys,mobile,41.28,5,0.227,none,2024-09-22\r\n10829,1735,LATAM,sports,retail,44.64,3,0.065,loyalty,2024-05-03\r\n10830,1536,LATAM,home,retail,76.54,1,0.240,none,2024-03-22\r\n10831,1422,LATAM,toys,retail,89.57,5,0.159,coupon,2024-09-14\r\n10832,1549,APAC,electronics,retail,26.95,4,0.013,coupon,2024-12-23\r\n10833,2264,LATAM,electronics,online,28.37,7,0.022,none,2024-03-26\r\n10834,1397,LATAM,sports,retail,44.16,3,0.140,bundle,2024-12-01\r\n10835,1276,AMER,sports,retail,56.84,1,0.203,none,2024-09-01\r\n10836,2027,EMEA,home,online,58.74,5,0.167,none,
2024-01-20\r\n10837,2363,AMER,sports,retail,79.59,8,0.211,none,2024-05-20\r\n10838,2354,LATAM,home,retail,35.29,8,0.241,none,2024-08-03\r\n10839,1934,EMEA,electronics,retail,38.53,5,0.223,none,2024-04-13\r\n10840,1171,APAC,grocery,online,85.81,1,0.187,none,2024-05-06\r\n10841,1329,APAC,home,retail,33.42,5,0.119,coupon,2024-09-12\r\n10842,2086,APAC,electronics,retail,70.35,2,0.156,none,2024-04-11\r\n10843,1829,EMEA,grocery,mobile,54.13,8,0.189,loyalty,2024-01-05\r\n10844,1266,AMER,electronics,online,52.89,2,0.031,bundle,2024-04-06\r\n10845,1616,APAC,home,online,23.32,8,0.082,bundle,2024-02-01\r\n10846,1986,LATAM,grocery,online,54.91,3,0.169,none,2024-06-19\r\n10847,1050,AMER,grocery,online,42.75,2,0.041,none,2024-01-03\r\n10848,2232,EMEA,toys,retail,75.85,5,0.049,coupon,2024-04-10\r\n10849,1141,AMER,toys,retail,53.42,6,0.069,none,2024-10-03\r\n10850,1123,LATAM,home,mobile,138.46,7,0.231,coupon,2024-05-09\r\n10851,1638,EMEA,fashion,retail,67.66,2,0.054,loyalty,2024-05-03\r\n10852,1546,EMEA,fashion,retail,113.08,6,0.001,none,2024-03-02\r\n10853,2151,APAC,electronics,retail,20.35,7,0.127,none,2024-12-12\r\n10854,2228,EMEA,electronics,retail,65.75,2,0.232,none,2024-11-10\r\n10855,1447,LATAM,home,retail,59.88,4,0.081,bundle,2024-11-26\r\n10856,2220,LATAM,electronics,online,38.61,5,0.139,none,2024-12-18\r\n10857,1503,APAC,toys,retail,51.19,7,0.050,none,2024-05-28\r\n10858,1402,EMEA,grocery,retail,43.58,3,0.145,none,2024-12-28\r\n10859,1182,EMEA,toys,online,144.26,2,0.133,none,2024-11-03\r\n10860,1944,AMER,electronics,mobile,43.72,1,0.126,bundle,2024-08-05\r\n10861,2497,AMER,grocery,mobile,59.04,8,0.050,coupon,2024-01-14\r\n10862,1776,APAC,grocery,online,51.17,8,0.003,none,2024-10-18\r\n10863,1490,AMER,sports,online,104.71,1,0.020,loyalty,2024-08-14\r\n10864,2389,LATAM,electronics,retail,62.99,8,0.170,none,2024-11-26\r\n10865,1656,LATAM,home,retail,16.18,2,0.136,none,2024-10-19\r\n10866,1981,EMEA,fashion,online,86.10,1,0.002,bundle,2024-03-21\r\n10867,1103,EMEA,home,online,
87.84,2,0.204,coupon,2024-05-17\r\n10868,1183,AMER,home,partner,128.33,7,0.230,bundle,2024-01-23\r\n10869,2123,AMER,sports,online,73.51,8,0.052,bundle,2024-12-03\r\n10870,2049,LATAM,sports,retail,42.72,7,0.209,bundle,2024-03-03\r\n10871,1009,APAC,fashion,retail,97.22,3,0.233,none,2024-06-27\r\n10872,1107,APAC,toys,mobile,121.66,2,0.154,bundle,2024-11-27\r\n10873,2416,LATAM,fashion,online,37.77,5,0.032,none,2024-07-02\r\n10874,1205,APAC,home,retail,19.94,3,0.123,none,2024-02-01\r\n10875,1420,APAC,grocery,retail,32.60,8,0.250,bundle,2024-09-21\r\n10876,1805,EMEA,sports,retail,54.38,7,0.021,coupon,2024-01-24\r\n10877,1207,APAC,grocery,online,65.57,1,0.152,none,2024-01-19\r\n10878,1821,LATAM,electronics,online,83.07,3,0.193,bundle,2024-08-11\r\n10879,1770,AMER,toys,partner,80.85,2,0.051,coupon,2024-09-04\r\n10880,2311,LATAM,fashion,online,113.10,4,0.069,none,2024-08-18\r\n10881,1158,LATAM,sports,mobile,49.48,2,0.177,none,2024-01-16\r\n10882,1860,EMEA,grocery,online,38.36,6,0.169,none,2024-05-11\r\n10883,2437,LATAM,toys,retail,51.72,5,0.225,none,2024-02-14\r\n10884,1119,LATAM,electronics,online,90.58,6,0.004,none,2024-06-08\r\n10885,1579,AMER,home,online,35.98,4,0.228,bundle,2024-02-10\r\n10886,1789,EMEA,home,retail,17.25,3,0.014,coupon,2024-02-21\r\n10887,1013,LATAM,home,retail,51.90,7,0.250,bundle,2024-02-23\r\n10888,2448,APAC,home,online,61.56,7,0.224,bundle,2024-06-02\r\n10889,1280,LATAM,grocery,retail,66.58,7,0.139,none,2024-10-17\r\n10890,1020,APAC,sports,retail,42.43,6,0.239,loyalty,2024-02-07\r\n10891,1211,EMEA,electronics,online,73.04,4,0.069,bundle,2024-05-03\r\n10892,2226,EMEA,grocery,retail,44.58,2,0.196,coupon,2024-03-09\r\n10893,2243,APAC,grocery,online,65.94,8,0.061,none,2024-04-08\r\n10894,2358,AMER,fashion,online,74.54,5,0.054,coupon,2024-04-24\r\n10895,1331,AMER,electronics,online,24.93,2,0.249,bundle,2024-02-01\r\n10896,2349,APAC,grocery,retail,44.87,8,0.202,coupon,2024-12-07\r\n10897,1926,AMER,fashion,online,83.24,8,0.026,bundle,2024-12-21\r\n10898,14
28,APAC,toys,retail,38.34,2,0.096,none,2024-01-28\r\n10899,2295,EMEA,home,mobile,44.29,3,0.128,none,2024-09-06\r\n10900,2290,LATAM,fashion,online,79.84,8,0.016,none,2024-02-16\r\n10901,2211,APAC,home,online,66.80,5,0.084,none,2024-11-15\r\n10902,1865,LATAM,grocery,online,75.23,1,0.094,coupon,2024-06-20\r\n10903,1544,LATAM,grocery,retail,128.07,2,0.008,none,2024-10-15\r\n10904,1151,APAC,sports,online,33.76,4,0.085,loyalty,2024-03-22\r\n10905,1467,LATAM,sports,mobile,137.25,3,0.170,none,2024-09-16\r\n10906,1939,LATAM,grocery,online,36.55,5,0.243,bundle,2024-04-24\r\n10907,2111,EMEA,fashion,retail,36.96,3,0.089,none,2024-11-06\r\n10908,1240,EMEA,home,online,28.16,6,0.154,none,2024-01-08\r\n10909,2000,APAC,electronics,retail,51.45,8,0.208,loyalty,2024-06-05\r\n10910,2360,EMEA,home,mobile,55.35,2,0.171,bundle,2024-12-19\r\n10911,1022,APAC,toys,retail,45.13,2,0.039,coupon,2024-03-17\r\n10912,1429,APAC,sports,online,55.62,4,0.068,none,2024-06-13\r\n10913,1135,APAC,electronics,online,104.85,3,0.154,loyalty,2024-05-15\r\n10914,1286,EMEA,sports,mobile,47.19,4,0.248,none,2024-07-13\r\n10915,2046,APAC,electronics,retail,47.05,5,0.077,bundle,2024-01-28\r\n10916,1391,LATAM,home,online,61.60,2,0.175,coupon,2024-06-25\r\n10917,1628,EMEA,grocery,mobile,43.79,5,0.067,loyalty,2024-04-27\r\n10918,1061,APAC,sports,retail,55.13,8,0.227,coupon,2024-07-26\r\n10919,2059,AMER,electronics,online,41.11,4,0.085,loyalty,2024-05-18\r\n10920,2222,LATAM,sports,retail,31.90,3,0.208,none,2024-08-21\r\n10921,1677,EMEA,grocery,online,51.03,5,0.122,coupon,2024-10-24\r\n10922,1772,EMEA,grocery,online,34.84,6,0.235,coupon,2024-10-08\r\n10923,1686,LATAM,grocery,online,96.07,6,0.005,coupon,2024-01-01\r\n10924,1920,LATAM,electronics,online,64.39,1,0.037,bundle,2024-05-23\r\n10925,1948,EMEA,toys,online,56.21,5,0.182,bundle,2024-02-18\r\n10926,1398,APAC,sports,online,93.20,6,0.038,bundle,2024-04-23\r\n10927,2150,APAC,fashion,retail,62.41,6,0.013,none,2024-07-01\r\n10928,1143,LATAM,grocery,mobile,50.37,7,0.106,
coupon,2024-06-03\r\n10929,2271,LATAM,home,online,55.32,5,0.011,none,2024-02-26\r\n10930,1018,APAC,fashion,retail,87.27,7,0.240,none,2024-11-04\r\n10931,1624,AMER,sports,retail,69.85,4,0.220,coupon,2024-06-27\r\n10932,1989,LATAM,home,online,103.81,6,0.002,coupon,2024-03-13\r\n10933,1165,AMER,fashion,retail,25.68,4,0.209,coupon,2024-08-09\r\n10934,1972,LATAM,sports,online,92.66,2,0.025,bundle,2024-04-09\r\n10935,1396,EMEA,grocery,retail,51.67,8,0.014,none,2024-06-18\r\n10936,1840,LATAM,fashion,retail,53.21,6,0.133,loyalty,2024-06-15\r\n10937,2206,AMER,toys,retail,20.11,2,0.068,none,2024-02-18\r\n10938,1173,LATAM,grocery,retail,77.29,1,0.125,none,2024-04-05\r\n10939,1424,APAC,toys,online,33.72,6,0.126,bundle,2024-05-10\r\n10940,1361,LATAM,grocery,mobile,41.74,4,0.126,coupon,2024-02-07\r\n10941,2164,AMER,electronics,mobile,38.84,5,0.234,coupon,2024-01-21\r\n10942,1223,LATAM,sports,online,52.41,6,0.153,none,2024-04-06\r\n10943,1570,AMER,sports,online,60.54,4,0.028,coupon,2024-01-26\r\n10944,1438,APAC,home,retail,73.39,6,0.240,bundle,2024-05-20\r\n10945,2497,AMER,sports,mobile,89.65,7,0.078,none,2024-02-04\r\n10946,1015,AMER,grocery,retail,51.68,1,0.234,none,2024-12-23\r\n10947,1423,EMEA,grocery,online,105.88,8,0.099,none,2024-05-26\r\n10948,1300,EMEA,grocery,retail,112.13,3,0.141,coupon,2024-03-09\r\n10949,2035,LATAM,sports,retail,71.44,1,0.169,none,2024-08-22\r\n10950,1286,EMEA,sports,mobile,97.50,8,0.040,none,2024-10-03\r\n10951,2340,EMEA,electronics,online,34.14,1,0.202,coupon,2024-02-18\r\n10952,1782,LATAM,fashion,online,122.39,8,0.161,coupon,2024-04-03\r\n10953,1611,EMEA,grocery,online,61.92,2,0.119,loyalty,2024-03-18\r\n10954,1594,LATAM,home,online,54.35,4,0.139,none,2024-06-28\r\n10955,1483,EMEA,electronics,retail,31.99,6,0.008,coupon,2024-08-07\r\n10956,1425,EMEA,grocery,online,81.02,2,0.166,bundle,2024-11-21\r\n10957,1680,LATAM,home,online,74.81,5,0.065,bundle,2024-06-09\r\n10958,2051,APAC,toys,partner,94.65,5,0.098,coupon,2024-11-18\r\n10959,2020,AMER,home,onl
ine,48.22,8,0.241,none,2024-04-01\r\n10960,2040,LATAM,electronics,retail,70.08,6,0.098,bundle,2024-02-18\r\n10961,1952,EMEA,electronics,online,78.18,2,0.050,loyalty,2024-08-26\r\n10962,2193,AMER,home,online,63.23,4,0.118,loyalty,2024-05-08\r\n10963,1864,EMEA,fashion,online,202.93,1,0.229,coupon,2024-06-02\r\n10964,1891,APAC,home,retail,35.63,6,0.071,none,2024-04-26\r\n10965,2278,APAC,grocery,online,65.46,6,0.017,loyalty,2024-09-21\r\n10966,1939,LATAM,home,retail,29.27,8,0.037,loyalty,2024-03-26\r\n10967,1787,APAC,home,mobile,54.32,2,0.020,none,2024-07-17\r\n10968,2463,AMER,grocery,retail,152.40,7,0.067,bundle,2024-10-28\r\n10969,1640,APAC,toys,mobile,29.89,7,0.144,none,2024-07-28\r\n10970,2450,EMEA,grocery,retail,33.79,6,0.063,coupon,2024-03-15\r\n10971,2403,LATAM,electronics,online,47.91,4,0.021,none,2024-07-14\r\n10972,2297,EMEA,sports,mobile,56.60,7,0.053,coupon,2024-01-25\r\n10973,1334,APAC,grocery,online,82.42,7,0.073,bundle,2024-02-24\r\n10974,2329,LATAM,home,online,66.12,7,0.155,none,2024-11-25\r\n10975,1609,LATAM,sports,retail,59.49,3,0.094,none,2024-11-02\r\n10976,1446,AMER,electronics,retail,46.97,8,0.037,loyalty,2024-12-17\r\n10977,1269,LATAM,home,retail,71.33,1,0.136,none,2024-07-19\r\n10978,2458,EMEA,sports,partner,61.20,1,0.243,loyalty,2024-03-14\r\n10979,2003,LATAM,grocery,online,76.47,2,0.150,loyalty,2024-12-08\r\n10980,1845,AMER,grocery,partner,80.08,2,0.249,none,2024-01-15\r\n10981,1104,APAC,sports,retail,27.00,6,0.073,none,2024-03-03\r\n10982,1660,AMER,electronics,online,32.42,5,0.085,none,2024-12-10\r\n10983,2224,EMEA,grocery,mobile,74.39,8,0.036,none,2024-06-17\r\n10984,1476,APAC,sports,online,49.73,4,0.146,coupon,2024-12-12\r\n10985,1555,AMER,electronics,mobile,77.40,3,0.115,coupon,2024-12-03\r\n10986,1292,LATAM,fashion,retail,138.12,8,0.007,coupon,2024-08-09\r\n10987,1135,APAC,fashion,online,27.66,8,0.052,loyalty,2024-03-07\r\n10988,1463,EMEA,grocery,online,78.84,1,0.066,loyalty,2024-11-23\r\n10989,2438,AMER,home,online,83.04,8,0.086,none,2024
-01-01\r\n10990,1145,AMER,grocery,online,71.28,3,0.025,loyalty,2024-07-26\r\n10991,1257,APAC,sports,retail,107.69,6,0.081,coupon,2024-12-24\r\n10992,2015,APAC,grocery,retail,151.56,3,0.040,none,2024-08-03\r\n10993,2114,AMER,sports,partner,87.66,7,0.061,loyalty,2024-08-03\r\n10994,1219,LATAM,electronics,retail,135.10,6,0.120,bundle,2024-10-01\r\n10995,1886,LATAM,electronics,mobile,50.57,6,0.094,none,2024-01-12\r\n10996,2485,AMER,home,retail,55.70,3,0.223,none,2024-06-08\r\n10997,1073,AMER,fashion,mobile,42.09,6,0.065,bundle,2024-06-26\r\n10998,2419,LATAM,grocery,retail,62.86,1,0.183,bundle,2024-11-24\r\n10999,1331,AMER,grocery,online,85.50,5,0.148,none,2024-01-06\r\n11000,2036,APAC,electronics,retail,46.76,4,0.029,bundle,2024-06-05\r\n11001,1336,APAC,grocery,online,58.03,8,0.170,none,2024-08-20\r\n11002,1874,LATAM,home,online,24.90,1,0.122,none,2024-06-14\r\n11003,2293,LATAM,fashion,retail,49.65,2,0.042,none,2024-10-09\r\n11004,1806,APAC,fashion,retail,70.41,8,0.060,bundle,2024-01-06\r\n11005,1415,AMER,toys,online,102.30,2,0.218,loyalty,2024-11-11\r\n11006,2120,AMER,fashion,retail,103.63,4,0.073,none,2024-05-20\r\n11007,1684,EMEA,grocery,online,47.27,2,0.157,none,2024-05-01\r\n11008,1390,APAC,toys,online,41.00,4,0.218,bundle,2024-01-15\r\n11009,1531,EMEA,sports,mobile,43.82,2,0.103,none,2024-01-13\r\n11010,2178,AMER,home,retail,45.54,6,0.227,none,2024-04-11\r\n11011,1886,LATAM,toys,retail,19.77,3,0.176,loyalty,2024-10-20\r\n11012,1332,APAC,fashion,online,40.42,2,0.102,coupon,2024-07-13\r\n11013,2305,AMER,sports,online,85.85,7,0.066,none,2024-07-11\r\n11014,1475,LATAM,fashion,mobile,78.50,6,0.045,none,2024-06-04\r\n11015,1743,LATAM,electronics,retail,57.83,7,0.165,none,2024-03-21\r\n11016,2492,LATAM,home,retail,83.53,8,0.140,none,2024-01-27\r\n11017,1179,APAC,grocery,retail,131.25,5,0.015,coupon,2024-03-06\r\n11018,2044,APAC,fashion,online,29.21,4,0.188,none,2024-10-12\r\n11019,1450,EMEA,grocery,online,81.87,6,0.214,coupon,2024-05-10\r\n11020,2286,AMER,grocery,mobile,
31.54,3,0.098,none,2024-03-28\r\n11021,1116,LATAM,sports,retail,21.22,4,0.004,bundle,2024-07-02\r\n11022,2413,AMER,grocery,online,45.41,5,0.146,none,2024-09-24\r\n11023,2025,EMEA,grocery,retail,57.48,4,0.009,none,2024-05-20\r\n11024,2101,APAC,toys,retail,69.93,1,0.084,coupon,2024-11-26\r\n11025,1014,EMEA,fashion,retail,145.53,1,0.148,none,2024-03-04\r\n11026,1518,AMER,fashion,retail,44.04,4,0.161,none,2024-05-17\r\n11027,1429,APAC,fashion,retail,66.38,7,0.126,none,2024-01-17\r\n11028,1728,AMER,sports,partner,24.27,1,0.189,none,2024-11-25\r\n11029,2287,EMEA,electronics,online,25.85,4,0.022,none,2024-12-14\r\n11030,2432,AMER,fashion,retail,76.12,3,0.015,bundle,2024-05-03\r\n11031,1253,AMER,home,online,54.30,4,0.146,coupon,2024-06-10\r\n11032,1806,APAC,home,retail,47.17,2,0.015,coupon,2024-10-07\r\n11033,2219,LATAM,home,mobile,104.37,3,0.161,none,2024-09-01\r\n11034,1237,LATAM,grocery,retail,57.34,6,0.170,none,2024-09-22\r\n11035,1377,APAC,home,mobile,89.87,6,0.097,none,2024-09-16\r\n11036,1665,AMER,toys,retail,48.23,7,0.056,none,2024-10-11\r\n11037,1693,EMEA,electronics,online,191.59,2,0.125,none,2024-02-05\r\n11038,1315,AMER,fashion,online,77.17,3,0.084,none,2024-04-04\r\n11039,1508,LATAM,grocery,retail,34.03,3,0.148,none,2024-10-21\r\n11040,2231,LATAM,home,retail,81.51,8,0.144,loyalty,2024-06-16\r\n11041,1513,APAC,electronics,retail,31.39,8,0.032,loyalty,2024-02-18\r\n11042,2330,EMEA,fashion,online,28.35,7,0.096,loyalty,2024-02-13\r\n11043,1365,LATAM,grocery,retail,33.08,5,0.183,coupon,2024-01-15\r\n11044,2291,EMEA,home,retail,60.65,3,0.076,loyalty,2024-07-01\r\n11045,1771,AMER,electronics,retail,36.52,6,0.196,loyalty,2024-02-01\r\n11046,2067,LATAM,grocery,online,60.96,7,0.098,coupon,2024-04-28\r\n11047,2204,AMER,grocery,online,53.52,2,0.082,coupon,2024-06-05\r\n11048,1592,LATAM,home,mobile,58.93,2,0.066,loyalty,2024-02-28\r\n11049,1419,APAC,grocery,online,42.07,8,0.244,none,2024-12-01\r\n11050,1915,LATAM,electronics,mobile,40.45,1,0.138,none,2024-04-04\r\n11051,121
7,EMEA,grocery,online,84.01,7,0.156,none,2024-03-04\r\n11052,1176,EMEA,toys,online,187.36,3,0.064,coupon,2024-09-21\r\n11053,2411,EMEA,grocery,online,46.93,6,0.221,bundle,2024-06-02\r\n11054,1543,AMER,home,online,40.67,2,0.236,none,2024-08-14\r\n11055,2338,AMER,grocery,retail,38.60,7,0.098,coupon,2024-01-25\r\n11056,1198,AMER,sports,retail,46.66,5,0.235,coupon,2024-08-26\r\n11057,1765,EMEA,fashion,mobile,109.72,2,0.247,none,2024-05-16\r\n11058,2436,LATAM,sports,online,40.32,8,0.034,none,2024-04-02\r\n11059,1005,LATAM,sports,mobile,115.42,3,0.085,none,2024-09-26\r\n11060,1009,APAC,grocery,retail,99.16,4,0.092,none,2024-07-05\r\n11061,1029,EMEA,grocery,retail,50.56,6,0.105,none,2024-02-08\r\n11062,1340,LATAM,grocery,retail,197.37,3,0.074,none,2024-06-20\r\n11063,2354,LATAM,home,retail,21.32,6,0.141,none,2024-04-04\r\n11064,1614,EMEA,electronics,online,19.97,6,0.126,none,2024-10-28\r\n11065,2158,APAC,electronics,retail,32.30,1,0.023,coupon,2024-11-11\r\n11066,2013,APAC,fashion,online,64.09,7,0.185,none,2024-11-01\r\n11067,1107,APAC,grocery,online,117.86,7,0.018,none,2024-01-20\r\n11068,1123,LATAM,grocery,retail,162.45,2,0.243,none,2024-09-02\r\n11069,1127,EMEA,fashion,retail,28.54,6,0.079,coupon,2024-06-28\r\n11070,1732,LATAM,home,mobile,75.28,8,0.075,none,2024-02-08\r\n11071,1795,EMEA,toys,mobile,192.45,7,0.085,coupon,2024-04-27\r\n11072,2357,EMEA,home,mobile,40.11,1,0.229,none,2024-07-16\r\n11073,1313,EMEA,electronics,online,75.72,1,0.131,coupon,2024-11-02\r\n11074,2018,AMER,sports,mobile,75.35,5,0.032,bundle,2024-03-19\r\n11075,2187,EMEA,electronics,mobile,185.26,3,0.227,none,2024-09-13\r\n11076,1240,EMEA,electronics,online,91.48,2,0.024,none,2024-04-07\r\n11077,2134,AMER,fashion,online,42.78,3,0.185,none,2024-01-08\r\n11078,1101,AMER,fashion,retail,40.20,5,0.183,none,2024-03-07\r\n11079,1917,LATAM,home,online,90.41,6,0.024,bundle,2024-04-07\r\n11080,1733,LATAM,grocery,retail,61.02,3,0.097,coupon,2024-02-22\r\n11081,2374,LATAM,home,online,50.68,6,0.005,none,2024-08-
11\r\n11082,2470,EMEA,grocery,online,138.13,2,0.108,bundle,2024-01-06\r\n11083,1620,LATAM,grocery,online,46.17,1,0.052,none,2024-02-16\r\n11084,1875,EMEA,electronics,online,28.41,4,0.172,none,2024-07-21\r\n11085,1437,EMEA,home,partner,19.98,8,0.179,none,2024-03-15\r\n11086,2444,EMEA,electronics,online,54.21,6,0.200,none,2024-12-18\r\n11087,2051,APAC,sports,online,90.18,2,0.052,none,2024-05-12\r\n11088,1887,LATAM,grocery,mobile,110.09,4,0.093,loyalty,2024-03-09\r\n11089,2331,APAC,toys,retail,46.92,5,0.004,none,2024-05-26\r\n11090,2175,AMER,electronics,retail,113.34,6,0.232,bundle,2024-01-22\r\n11091,1743,LATAM,grocery,online,82.30,3,0.175,loyalty,2024-11-19\r\n11092,1979,APAC,home,mobile,53.03,8,0.147,none,2024-03-19\r\n11093,1137,APAC,grocery,retail,23.51,6,0.021,bundle,2024-01-28\r\n11094,1503,APAC,electronics,online,69.47,4,0.122,none,2024-03-03\r\n11095,1572,LATAM,electronics,online,102.70,5,0.122,bundle,2024-06-13\r\n11096,2499,LATAM,electronics,online,74.22,4,0.041,none,2024-02-24\r\n11097,1548,EMEA,home,retail,69.61,3,0.105,coupon,2024-07-22\r\n11098,2476,APAC,grocery,retail,124.40,1,0.248,coupon,2024-05-24\r\n11099,2137,LATAM,fashion,online,34.88,7,0.141,coupon,2024-03-05\r\n11100,1141,AMER,toys,retail,43.83,5,0.031,none,2024-04-07\r\n11101,1731,AMER,home,partner,26.26,2,0.183,bundle,2024-09-08\r\n11102,1880,LATAM,grocery,online,63.87,1,0.226,none,2024-05-14\r\n11103,2274,APAC,grocery,online,19.39,2,0.041,none,2024-01-21\r\n11104,1515,EMEA,home,online,41.69,7,0.009,none,2024-09-11\r\n11105,2127,LATAM,electronics,online,23.95,8,0.238,none,2024-09-17\r\n11106,1907,EMEA,fashion,retail,157.81,2,0.134,coupon,2024-06-15\r\n11107,2156,AMER,toys,retail,103.44,8,0.161,none,2024-07-12\r\n11108,1792,AMER,grocery,mobile,147.02,1,0.226,loyalty,2024-04-03\r\n11109,2178,AMER,electronics,partner,37.97,8,0.137,none,2024-01-02\r\n11110,2343,EMEA,home,online,61.96,1,0.183,none,2024-03-12\r\n11111,2097,AMER,fashion,retail,50.64,3,0.030,bundle,2024-09-16\r\n11112,2329,LATAM,groce
ry,retail,63.88,6,0.129,none,2024-11-20\r\n11113,1788,AMER,home,retail,64.66,4,0.181,bundle,2024-02-16\r\n11114,1568,AMER,sports,mobile,75.34,2,0.061,none,2024-10-06\r\n11115,2019,AMER,fashion,retail,47.18,8,0.198,none,2024-05-01\r\n11116,1600,AMER,electronics,retail,48.80,2,0.049,none,2024-12-21\r\n11117,2109,EMEA,grocery,retail,36.99,4,0.102,coupon,2024-03-03\r\n11118,1333,EMEA,sports,partner,40.15,5,0.000,loyalty,2024-08-09\r\n11119,1910,LATAM,home,retail,64.93,7,0.076,none,2024-07-20\r\n11120,2351,EMEA,electronics,retail,32.49,6,0.072,none,2024-05-20\r\n11121,1821,LATAM,electronics,retail,68.03,3,0.056,none,2024-11-05\r\n11122,1734,AMER,sports,partner,38.76,4,0.160,none,2024-06-12\r\n11123,1061,APAC,home,retail,33.47,3,0.023,loyalty,2024-07-15\r\n11124,1799,EMEA,grocery,online,53.22,4,0.030,none,2024-10-26\r\n11125,2045,LATAM,home,retail,98.55,7,0.179,none,2024-07-23\r\n11126,1529,LATAM,toys,retail,59.70,1,0.242,none,2024-10-03\r\n11127,1469,EMEA,sports,mobile,39.32,5,0.183,bundle,2024-12-08\r\n11128,1685,AMER,electronics,retail,56.32,4,0.074,none,2024-06-14\r\n11129,1841,AMER,grocery,online,34.80,4,0.081,none,2024-09-28\r\n11130,1443,EMEA,fashion,online,38.86,1,0.058,none,2024-02-14\r\n11131,1546,EMEA,grocery,online,53.56,4,0.201,loyalty,2024-03-08\r\n11132,2146,APAC,fashion,retail,179.07,6,0.249,none,2024-01-03\r\n11133,1873,EMEA,home,online,71.16,3,0.136,none,2024-10-23\r\n11134,2388,LATAM,fashion,online,53.57,3,0.064,bundle,2024-04-04\r\n11135,1952,EMEA,grocery,retail,71.91,8,0.040,loyalty,2024-12-24\r\n11136,1209,AMER,grocery,mobile,54.40,1,0.109,loyalty,2024-03-19\r\n11137,1975,EMEA,home,retail,93.31,3,0.190,coupon,2024-05-18\r\n11138,1324,LATAM,fashion,retail,28.58,4,0.131,bundle,2024-11-05\r\n11139,1106,AMER,home,online,87.88,2,0.151,coupon,2024-05-04\r\n11140,2010,APAC,sports,mobile,165.52,2,0.008,bundle,2024-10-08\r\n11141,1318,LATAM,grocery,online,131.45,4,0.193,none,2024-10-07\r\n11142,1700,EMEA,grocery,retail,100.92,4,0.219,coupon,2024-12-22\r\n1114
3,1601,APAC,sports,online,84.78,4,0.154,loyalty,2024-12-01\r\n11144,1395,APAC,grocery,mobile,183.75,7,0.088,none,2024-04-07\r\n11145,2125,LATAM,home,online,54.96,2,0.060,bundle,2024-05-19\r\n11146,1241,APAC,home,retail,38.39,3,0.238,coupon,2024-01-04\r\n11147,1982,EMEA,grocery,retail,71.40,8,0.114,bundle,2024-04-12\r\n11148,2477,APAC,electronics,retail,125.97,2,0.180,bundle,2024-06-04\r\n11149,1123,LATAM,electronics,retail,73.70,7,0.221,none,2024-06-15\r\n11150,1714,APAC,electronics,retail,14.92,6,0.192,loyalty,2024-11-24\r\n11151,1849,EMEA,fashion,online,51.69,8,0.228,none,2024-05-08\r\n11152,2289,APAC,home,online,120.17,2,0.233,none,2024-12-04\r\n11153,1667,AMER,toys,partner,46.35,6,0.024,bundle,2024-09-13\r\n11154,1960,EMEA,electronics,partner,51.43,7,0.165,none,2024-02-23\r\n11155,1014,EMEA,fashion,partner,30.24,7,0.027,none,2024-01-19\r\n11156,1882,AMER,sports,online,62.85,5,0.167,none,2024-01-04\r\n11157,2007,LATAM,toys,online,57.63,6,0.157,none,2024-05-09\r\n11158,1967,EMEA,grocery,online,132.14,7,0.008,coupon,2024-05-17\r\n11159,1038,APAC,fashion,mobile,27.78,8,0.223,bundle,2024-04-19\r\n11160,2340,EMEA,fashion,mobile,84.93,1,0.203,coupon,2024-03-26\r\n11161,2164,AMER,electronics,retail,26.16,6,0.187,none,2024-05-04\r\n11162,2377,AMER,grocery,online,94.52,4,0.042,coupon,2024-01-28\r\n11163,1446,AMER,toys,retail,36.43,5,0.237,coupon,2024-01-23\r\n11164,1910,LATAM,home,online,36.48,4,0.156,none,2024-01-11\r\n11165,2199,LATAM,toys,mobile,57.03,7,0.023,coupon,2024-12-28\r\n11166,1745,APAC,grocery,online,54.15,5,0.078,none,2024-11-19\r\n11167,1637,APAC,fashion,retail,60.08,4,0.025,none,2024-11-01\r\n11168,2235,AMER,electronics,mobile,128.55,6,0.146,none,2024-10-26\r\n11169,1779,APAC,fashion,online,27.64,4,0.238,none,2024-10-11\r\n11170,1390,APAC,fashion,retail,61.51,4,0.235,none,2024-03-02\r\n11171,1919,EMEA,grocery,online,35.93,7,0.198,coupon,2024-07-28\r\n11172,1750,LATAM,grocery,online,47.95,5,0.209,none,2024-01-23\r\n11173,1036,EMEA,grocery,online,70.90,4,0.2
18,coupon,2024-10-19\r\n11174,1701,LATAM,toys,online,102.29,2,0.195,none,2024-07-28\r\n11175,2326,LATAM,toys,retail,32.29,6,0.239,coupon,2024-06-04\r\n11176,2050,APAC,grocery,retail,86.80,1,0.145,loyalty,2024-10-11\r\n11177,1869,AMER,electronics,retail,73.46,3,0.063,coupon,2024-12-02\r\n11178,2089,EMEA,grocery,retail,119.99,6,0.122,coupon,2024-04-10\r\n11179,1925,LATAM,home,online,53.61,2,0.074,bundle,2024-05-01\r\n11180,1419,APAC,grocery,retail,30.02,6,0.156,coupon,2024-10-03\r\n11181,1673,AMER,grocery,online,118.77,4,0.084,none,2024-05-19\r\n11182,1703,AMER,toys,retail,38.27,3,0.229,coupon,2024-12-06\r\n11183,2187,EMEA,home,retail,86.19,4,0.025,none,2024-02-23\r\n11184,2096,LATAM,electronics,retail,118.14,5,0.082,loyalty,2024-03-03\r\n11185,1682,EMEA,grocery,retail,90.26,8,0.090,none,2024-09-12\r\n11186,1847,LATAM,grocery,online,47.31,1,0.003,bundle,2024-05-15\r\n11187,2089,EMEA,grocery,retail,58.21,2,0.118,none,2024-10-08\r\n11188,2053,AMER,home,partner,41.31,7,0.074,coupon,2024-05-03\r\n11189,1785,EMEA,grocery,partner,85.79,3,0.175,none,2024-05-02\r\n11190,1568,AMER,sports,retail,47.64,7,0.154,none,2024-04-14\r\n11191,2315,LATAM,toys,retail,134.39,1,0.194,none,2024-10-18\r\n11192,2147,LATAM,grocery,retail,145.91,8,0.109,none,2024-11-04\r\n11193,1811,APAC,home,retail,70.26,4,0.014,none,2024-12-10\r\n11194,1211,EMEA,grocery,retail,68.97,7,0.024,bundle,2024-06-06\r\n11195,2172,EMEA,grocery,online,42.05,2,0.107,none,2024-02-02\r\n11196,2470,EMEA,grocery,partner,26.44,6,0.022,coupon,2024-10-10\r\n11197,1726,EMEA,fashion,online,66.47,8,0.208,coupon,2024-08-02\r\n11198,2360,EMEA,fashion,online,59.37,5,0.085,none,2024-09-09\r\n11199,1636,APAC,electronics,mobile,82.85,4,0.202,coupon,2024-06-01\r\n11200,1756,EMEA,sports,mobile,72.52,7,0.181,coupon,2024-04-09\r\n11201,2269,EMEA,toys,mobile,56.37,3,0.198,loyalty,2024-10-09\r\n11202,1954,APAC,grocery,retail,119.47,2,0.193,none,2024-11-27\r\n11203,1672,APAC,fashion,mobile,54.59,1,0.163,coupon,2024-01-06\r\n11204,1393,LATAM,fa
shion,retail,82.12,7,0.162,loyalty,2024-12-25\r\n11205,1281,AMER,grocery,online,118.84,4,0.125,bundle,2024-01-23\r\n11206,1803,LATAM,home,retail,49.81,5,0.027,none,2024-08-04\r\n11207,1330,EMEA,home,online,70.73,4,0.127,loyalty,2024-02-19\r\n11208,2246,AMER,home,online,67.63,5,0.168,none,2024-07-14\r\n11209,1042,LATAM,home,online,41.19,7,0.071,none,2024-09-02\r\n11210,2008,APAC,grocery,online,69.78,5,0.191,none,2024-05-21\r\n11211,2344,LATAM,electronics,retail,35.78,5,0.059,none,2024-11-19\r\n11212,1881,LATAM,home,online,51.91,1,0.027,none,2024-04-05\r\n11213,1544,LATAM,grocery,retail,107.02,7,0.148,none,2024-01-23\r\n11214,1941,AMER,toys,retail,52.73,6,0.228,none,2024-05-08\r\n11215,2115,APAC,electronics,online,72.46,3,0.198,none,2024-02-19\r\n11216,1353,EMEA,fashion,retail,56.62,2,0.119,none,2024-05-25\r\n11217,2228,EMEA,fashion,online,90.27,4,0.244,none,2024-04-23\r\n11218,1881,LATAM,electronics,online,91.57,6,0.000,coupon,2024-11-13\r\n11219,1358,APAC,grocery,retail,42.72,8,0.225,none,2024-02-15\r\n11220,2265,APAC,electronics,retail,28.18,8,0.057,loyalty,2024-10-11\r\n11221,1143,LATAM,electronics,online,128.28,3,0.062,bundle,2024-09-12\r\n11222,1336,APAC,grocery,online,48.90,2,0.097,none,2024-03-07\r\n11223,2463,AMER,sports,mobile,57.86,5,0.112,loyalty,2024-06-06\r\n11224,2380,AMER,home,online,222.22,6,0.225,none,2024-12-07\r\n11225,1667,AMER,sports,retail,107.23,3,0.107,none,2024-05-19\r\n11226,1357,EMEA,electronics,online,71.24,3,0.234,none,2024-06-05\r\n11227,1997,APAC,toys,online,35.14,8,0.122,coupon,2024-08-14\r\n11228,1683,AMER,fashion,mobile,49.91,6,0.131,none,2024-06-04\r\n11229,2242,AMER,electronics,mobile,44.48,3,0.173,bundle,2024-07-14\r\n11230,2383,APAC,sports,partner,94.70,6,0.130,bundle,2024-07-24\r\n11231,1883,LATAM,sports,retail,47.20,3,0.178,loyalty,2024-05-28\r\n11232,2349,APAC,sports,retail,51.61,4,0.242,bundle,2024-02-02\r\n11233,1555,AMER,electronics,online,63.03,6,0.219,coupon,2024-10-12\r\n11234,2474,LATAM,grocery,retail,40.09,5,0.023,none
,2024-07-22\r\n11235,2051,APAC,electronics,online,87.50,2,0.015,none,2024-07-05\r\n11236,1743,LATAM,fashion,online,57.76,7,0.239,none,2024-03-05\r\n11237,2154,APAC,home,online,46.49,1,0.221,none,2024-03-27\r\n11238,1388,AMER,sports,mobile,49.27,1,0.083,bundle,2024-06-22\r\n11239,2426,AMER,fashion,partner,48.01,5,0.209,coupon,2024-10-01\r\n11240,1804,AMER,home,mobile,32.56,5,0.108,none,2024-05-27\r\n11241,1609,LATAM,grocery,online,35.34,8,0.168,coupon,2024-10-22\r\n11242,1937,APAC,electronics,retail,72.00,5,0.235,none,2024-11-14\r\n11243,2329,LATAM,grocery,retail,46.04,7,0.141,bundle,2024-12-08\r\n11244,1811,APAC,electronics,online,122.05,1,0.131,coupon,2024-06-16\r\n11245,1631,APAC,grocery,online,94.77,5,0.223,loyalty,2024-12-17\r\n11246,1267,EMEA,toys,retail,25.34,5,0.205,none,2024-01-15\r\n11247,1425,EMEA,fashion,retail,22.01,7,0.090,none,2024-02-23\r\n11248,1129,LATAM,electronics,retail,67.67,4,0.030,coupon,2024-06-16\r\n11249,1172,APAC,electronics,retail,26.95,7,0.016,bundle,2024-11-21\r\n11250,1949,AMER,home,partner,151.16,8,0.037,loyalty,2024-08-03\r\n11251,2499,LATAM,home,online,101.05,2,0.234,bundle,2024-11-18\r\n11252,1953,EMEA,fashion,retail,23.97,2,0.008,bundle,2024-03-11\r\n11253,1567,AMER,home,retail,52.75,4,0.157,coupon,2024-11-13\r\n11254,2449,LATAM,sports,retail,98.74,4,0.027,none,2024-06-28\r\n11255,1200,EMEA,fashion,online,67.22,5,0.168,none,2024-06-24\r\n11256,2017,EMEA,home,retail,55.48,4,0.005,none,2024-07-20\r\n11257,1066,AMER,grocery,retail,65.56,8,0.177,none,2024-03-12\r\n11258,1877,LATAM,electronics,online,68.99,2,0.206,none,2024-03-02\r\n11259,1930,AMER,grocery,online,22.01,4,0.218,loyalty,2024-07-22\r\n11260,2491,APAC,grocery,online,55.79,1,0.234,coupon,2024-10-01\r\n11261,1922,EMEA,home,mobile,69.68,8,0.063,coupon,2024-09-03\r\n11262,2208,AMER,toys,partner,54.14,7,0.159,none,2024-05-17\r\n11263,2216,AMER,fashion,retail,83.17,2,0.223,none,2024-01-25\r\n11264,2213,APAC,fashion,retail,60.75,3,0.140,coupon,2024-08-27\r\n11265,1512,APAC,grocer
y,retail,42.98,7,0.126,none,2024-09-18\r\n11266,1926,AMER,fashion,retail,38.25,7,0.017,loyalty,2024-12-19\r\n11267,1465,AMER,home,retail,78.22,8,0.003,none,2024-02-10\r\n11268,2337,AMER,sports,online,63.89,8,0.183,none,2024-10-11\r\n11269,2300,EMEA,fashion,retail,35.24,2,0.062,none,2024-04-04\r\n11270,1110,LATAM,electronics,retail,32.94,6,0.220,none,2024-07-18\r\n11271,2261,EMEA,fashion,retail,23.84,2,0.247,bundle,2024-12-25\r\n11272,2427,LATAM,sports,partner,39.73,8,0.235,coupon,2024-04-17\r\n11273,2383,APAC,electronics,online,100.10,1,0.120,bundle,2024-04-08\r\n11274,1550,APAC,grocery,partner,111.26,5,0.180,loyalty,2024-10-04\r\n11275,1333,EMEA,grocery,online,63.60,4,0.002,none,2024-12-25\r\n11276,1600,AMER,home,retail,38.27,3,0.043,coupon,2024-05-24\r\n11277,2182,AMER,grocery,mobile,70.75,8,0.174,bundle,2024-03-28\r\n11278,1496,AMER,home,partner,41.81,7,0.149,coupon,2024-01-20\r\n11279,2468,EMEA,sports,retail,41.31,1,0.233,none,2024-11-28\r\n11280,2020,AMER,sports,online,38.62,6,0.111,loyalty,2024-09-17\r\n11281,2303,EMEA,fashion,retail,93.68,8,0.087,none,2024-01-13\r\n11282,2460,AMER,sports,retail,116.34,2,0.105,loyalty,2024-06-19\r\n11283,1524,LATAM,fashion,mobile,51.28,1,0.117,coupon,2024-04-04\r\n11284,2376,LATAM,sports,online,44.67,7,0.138,none,2024-05-28\r\n11285,2334,LATAM,grocery,online,34.49,5,0.086,bundle,2024-02-01\r\n11286,2258,AMER,electronics,partner,48.81,4,0.231,bundle,2024-06-01\r\n11287,1064,AMER,fashion,retail,39.48,4,0.050,bundle,2024-06-09\r\n11288,2162,EMEA,electronics,retail,72.66,2,0.231,bundle,2024-10-10\r\n11289,2201,AMER,grocery,retail,114.20,1,0.112,none,2024-01-23\r\n11290,2047,AMER,grocery,online,164.17,8,0.062,none,2024-07-09\r\n11291,1421,APAC,sports,mobile,122.79,1,0.182,none,2024-05-05\r\n11292,1001,LATAM,toys,retail,41.86,2,0.150,none,2024-01-09\r\n11293,1591,APAC,home,retail,236.06,7,0.095,coupon,2024-09-26\r\n11294,1402,EMEA,toys,retail,68.20,3,0.092,bundle,2024-02-06\r\n11295,1407,LATAM,home,retail,56.49,3,0.150,loyalty,2024-
10-18\r\n11296,1392,AMER,fashion,mobile,53.58,2,0.083,none,2024-12-13\r\n11297,1700,EMEA,sports,online,139.37,6,0.090,none,2024-04-12\r\n11298,1269,LATAM,grocery,online,46.74,8,0.245,none,2024-04-27\r\n11299,1632,LATAM,home,online,61.91,8,0.199,none,2024-07-26\r\n11300,1662,LATAM,home,mobile,143.97,8,0.141,coupon,2024-07-02\r\n11301,1553,LATAM,grocery,mobile,44.65,1,0.048,none,2024-07-22\r\n11302,1595,AMER,electronics,online,68.02,4,0.210,coupon,2024-08-02\r\n11303,1818,AMER,home,retail,37.30,7,0.002,none,2024-11-01\r\n11304,1087,AMER,electronics,online,117.20,5,0.102,coupon,2024-11-03\r\n11305,1404,EMEA,electronics,online,55.41,3,0.043,none,2024-06-01\r\n11306,1915,LATAM,sports,retail,52.17,5,0.219,none,2024-01-18\r\n11307,1330,EMEA,fashion,online,61.44,8,0.004,none,2024-03-28\r\n11308,1007,APAC,sports,mobile,24.55,1,0.174,coupon,2024-03-12\r\n11309,1325,APAC,grocery,retail,24.75,4,0.151,none,2024-09-02\r\n11310,2433,APAC,electronics,online,39.09,2,0.164,loyalty,2024-08-16\r\n11311,1787,APAC,toys,retail,27.50,7,0.129,none,2024-05-03\r\n11312,1159,LATAM,electronics,mobile,121.43,6,0.062,coupon,2024-02-25\r\n11313,2061,EMEA,home,online,67.99,1,0.034,none,2024-07-22\r\n11314,1424,APAC,home,online,75.02,5,0.143,none,2024-04-27\r\n11315,1375,AMER,grocery,online,34.66,5,0.203,none,2024-11-03\r\n11316,1498,LATAM,grocery,mobile,62.57,5,0.187,none,2024-07-04\r\n11317,1849,EMEA,home,online,35.75,1,0.234,coupon,2024-02-04\r\n11318,2289,APAC,grocery,online,135.89,1,0.175,bundle,2024-11-15\r\n11319,1992,LATAM,electronics,online,26.62,5,0.044,none,2024-03-15\r\n11320,1607,LATAM,home,partner,58.80,7,0.147,none,2024-02-22\r\n11321,2193,AMER,home,online,54.71,5,0.197,loyalty,2024-03-17\r\n11322,1472,AMER,electronics,partner,63.19,4,0.052,none,2024-12-05\r\n11323,2338,AMER,fashion,retail,88.96,5,0.241,bundle,2024-09-06\r\n11324,2484,APAC,electronics,retail,30.77,7,0.246,none,2024-06-06\r\n11325,1260,LATAM,electronics,mobile,48.64,3,0.026,bundle,2024-04-19\r\n11326,1260,LATAM,grocery
,retail,213.98,3,0.036,none,2024-11-18\r\n11327,1897,AMER,home,online,46.78,7,0.108,loyalty,2024-10-04\r\n11328,2406,EMEA,grocery,retail,68.15,1,0.233,none,2024-07-23\r\n11329,1242,LATAM,grocery,online,80.94,3,0.064,loyalty,2024-05-14\r\n11330,2091,LATAM,electronics,online,75.94,5,0.231,coupon,2024-03-11\r\n11331,2294,EMEA,home,online,46.55,8,0.133,none,2024-06-01\r\n11332,1393,LATAM,home,retail,31.71,3,0.075,none,2024-01-08\r\n11333,2319,AMER,toys,online,77.23,1,0.054,none,2024-04-28\r\n11334,1846,APAC,home,online,24.88,1,0.014,none,2024-01-02\r\n11335,1335,APAC,fashion,online,48.05,2,0.035,coupon,2024-04-16\r\n11336,1590,APAC,fashion,retail,75.12,6,0.164,none,2024-09-12\r\n11337,2107,APAC,electronics,retail,91.11,5,0.144,bundle,2024-09-19\r\n11338,2376,LATAM,fashion,online,94.23,8,0.201,coupon,2024-02-05\r\n11339,1124,AMER,grocery,mobile,30.86,2,0.023,none,2024-06-10\r\n11340,1691,LATAM,electronics,online,62.45,4,0.073,none,2024-07-28\r\n11341,1052,LATAM,toys,mobile,61.93,6,0.060,none,2024-01-15\r\n11342,2177,AMER,home,retail,92.32,2,0.121,none,2024-01-07\r\n11343,1714,APAC,fashion,mobile,43.30,3,0.167,coupon,2024-05-02\r\n11344,1445,APAC,fashion,online,71.82,3,0.107,none,2024-07-16\r\n11345,2349,APAC,electronics,retail,80.45,1,0.123,bundle,2024-02-12\r\n11346,2212,EMEA,home,retail,122.55,4,0.055,none,2024-09-14\r\n11347,1195,AMER,home,online,54.09,4,0.240,none,2024-06-06\r\n11348,1350,LATAM,home,online,69.29,1,0.245,none,2024-03-05\r\n11349,1632,LATAM,grocery,retail,58.96,8,0.074,none,2024-08-26\r\n11350,1722,EMEA,sports,online,129.58,4,0.111,bundle,2024-02-16\r\n11351,1074,LATAM,grocery,retail,83.01,5,0.074,none,2024-10-06\r\n11352,1083,AMER,home,online,71.22,1,0.164,none,2024-12-28\r\n11353,1837,LATAM,home,mobile,54.19,7,0.126,none,2024-03-22\r\n11354,2264,LATAM,sports,partner,44.68,4,0.050,loyalty,2024-12-14\r\n11355,1582,AMER,grocery,retail,63.25,3,0.001,none,2024-01-04\r\n11356,2089,EMEA,electronics,retail,38.97,4,0.040,none,2024-07-14\r\n11357,2389,LATAM,sp
orts,online,110.04,8,0.028,loyalty,2024-02-07\r\n11358,2264,LATAM,home,online,144.86,2,0.066,loyalty,2024-07-27\r\n11359,1973,EMEA,grocery,online,36.63,6,0.145,loyalty,2024-10-10\r\n11360,1926,AMER,home,partner,46.99,7,0.024,none,2024-12-11\r\n11361,2323,AMER,grocery,mobile,81.63,7,0.026,bundle,2024-04-03\r\n11362,1552,EMEA,grocery,partner,77.39,4,0.212,coupon,2024-03-24\r\n11363,1142,EMEA,sports,partner,99.53,5,0.090,loyalty,2024-08-12\r\n11364,1844,APAC,fashion,retail,54.42,2,0.019,none,2024-03-27\r\n11365,2284,EMEA,grocery,online,66.44,5,0.017,coupon,2024-05-04\r\n11366,2325,LATAM,fashion,retail,35.62,3,0.247,coupon,2024-06-25\r\n11367,1095,APAC,electronics,online,29.97,1,0.017,none,2024-07-12\r\n11368,2135,EMEA,toys,online,53.81,8,0.042,bundle,2024-10-28\r\n11369,1873,EMEA,electronics,mobile,46.37,5,0.049,bundle,2024-08-16\r\n11370,1168,APAC,grocery,online,41.75,6,0.158,bundle,2024-04-01\r\n11371,1996,APAC,electronics,mobile,80.23,6,0.148,bundle,2024-09-16\r\n11372,2102,APAC,fashion,retail,56.10,4,0.102,none,2024-07-25\r\n11373,1093,APAC,electronics,retail,140.36,7,0.008,none,2024-01-21\r\n11374,2326,LATAM,electronics,mobile,57.39,6,0.231,coupon,2024-02-20\r\n11375,2002,APAC,sports,online,19.83,5,0.145,none,2024-01-11\r\n11376,1587,LATAM,grocery,retail,138.64,8,0.213,bundle,2024-09-05\r\n11377,2216,AMER,electronics,mobile,51.35,2,0.139,loyalty,2024-07-04\r\n11378,2082,APAC,home,retail,76.38,8,0.120,bundle,2024-09-14\r\n11379,1730,AMER,fashion,online,43.86,7,0.123,loyalty,2024-05-06\r\n11380,1938,APAC,electronics,retail,146.63,8,0.228,none,2024-04-12\r\n11381,1088,LATAM,home,retail,63.30,6,0.072,none,2024-07-20\r\n11382,1310,AMER,home,mobile,30.72,7,0.094,none,2024-01-17\r\n11383,1453,APAC,sports,online,111.42,2,0.165,coupon,2024-07-04\r\n11384,1724,LATAM,sports,online,24.40,2,0.020,bundle,2024-12-02\r\n11385,1370,APAC,home,partner,115.62,2,0.149,coupon,2024-05-18\r\n11386,1689,LATAM,sports,online,58.25,1,0.249,none,2024-05-04\r\n11387,1898,EMEA,electronics,mobil
e,21.66,3,0.049,none,2024-04-18\r\n11388,1487,AMER,grocery,mobile,41.33,6,0.130,bundle,2024-11-20\r\n11389,2404,EMEA,home,retail,51.94,6,0.027,none,2024-04-11\r\n11390,1292,LATAM,fashion,online,45.60,8,0.240,none,2024-07-12\r\n11391,2310,EMEA,electronics,online,112.49,4,0.139,none,2024-10-10\r\n11392,2300,EMEA,grocery,retail,52.27,2,0.194,none,2024-11-27\r\n11393,2058,LATAM,electronics,online,112.29,1,0.195,coupon,2024-07-23\r\n11394,1893,APAC,home,retail,59.15,5,0.086,coupon,2024-04-14\r\n11395,1488,AMER,sports,retail,49.43,5,0.160,none,2024-09-10\r\n11396,2301,EMEA,grocery,online,71.39,4,0.085,coupon,2024-07-19\r\n11397,1872,LATAM,home,online,34.29,4,0.051,none,2024-04-25\r\n11398,1848,EMEA,electronics,online,72.56,1,0.093,none,2024-04-16\r\n11399,1984,LATAM,home,retail,49.97,6,0.171,loyalty,2024-07-23\r\n11400,2075,LATAM,grocery,retail,54.50,4,0.113,coupon,2024-01-19\r\n11401,1850,APAC,grocery,online,49.27,8,0.223,none,2024-11-21\r\n11402,1136,EMEA,fashion,mobile,64.72,3,0.191,none,2024-06-26\r\n11403,1047,APAC,toys,partner,45.91,7,0.051,loyalty,2024-05-26\r\n11404,1658,AMER,fashion,online,75.63,6,0.036,none,2024-12-16\r\n11405,1761,EMEA,fashion,partner,79.26,5,0.159,none,2024-10-17\r\n11406,2410,EMEA,electronics,retail,142.49,3,0.137,coupon,2024-11-17\r\n11407,1404,EMEA,toys,mobile,47.69,8,0.205,none,2024-09-18\r\n11408,1180,AMER,electronics,retail,48.80,4,0.141,none,2024-06-21\r\n11409,2013,APAC,electronics,online,30.03,5,0.021,none,2024-02-14\r\n11410,1892,LATAM,sports,retail,23.60,2,0.027,bundle,2024-12-20\r\n11411,2462,EMEA,home,retail,32.61,8,0.107,none,2024-08-24\r\n11412,2128,EMEA,grocery,retail,96.63,3,0.199,none,2024-11-25\r\n11413,2406,EMEA,grocery,online,33.44,6,0.095,coupon,2024-01-28\r\n11414,2377,AMER,fashion,online,215.38,7,0.201,none,2024-09-09\r\n11415,1450,EMEA,sports,online,83.85,5,0.227,none,2024-11-22\r\n11416,1455,APAC,home,retail,17.73,8,0.179,none,2024-09-14\r\n11417,2207,APAC,electronics,retail,22.09,4,0.016,none,2024-06-01\r\n11418,2030
,EMEA,grocery,retail,102.43,5,0.031,none,2024-05-23\r\n11419,1339,EMEA,electronics,online,60.87,2,0.036,none,2024-11-23\r\n11420,2307,LATAM,grocery,online,71.79,1,0.080,coupon,2024-08-11\r\n11421,1812,EMEA,sports,retail,77.00,2,0.077,none,2024-03-15\r\n11422,1515,EMEA,electronics,online,60.43,5,0.218,bundle,2024-10-08\r\n11423,2022,LATAM,electronics,retail,86.85,8,0.034,none,2024-06-19\r\n11424,2075,LATAM,home,retail,58.90,8,0.072,none,2024-11-18\r\n11425,1204,AMER,toys,retail,105.77,2,0.020,none,2024-11-11\r\n11426,1273,AMER,home,partner,48.48,4,0.005,loyalty,2024-03-03\r\n11427,1562,AMER,grocery,retail,60.08,8,0.201,none,2024-11-12\r\n11428,2011,AMER,home,mobile,49.03,4,0.101,none,2024-01-07\r\n11429,2470,EMEA,grocery,online,43.54,7,0.076,loyalty,2024-06-14\r\n11430,1823,EMEA,sports,retail,125.05,4,0.120,none,2024-09-02\r\n11431,1390,APAC,fashion,online,70.59,8,0.052,none,2024-12-24\r\n11432,1935,EMEA,electronics,mobile,19.55,5,0.091,coupon,2024-07-25\r\n11433,2092,AMER,electronics,retail,81.43,1,0.219,none,2024-03-14\r\n11434,1652,APAC,toys,mobile,184.16,4,0.141,loyalty,2024-01-16\r\n11435,1662,LATAM,home,retail,49.57,3,0.122,bundle,2024-08-12\r\n11436,1192,EMEA,grocery,online,47.52,6,0.007,bundle,2024-04-28\r\n11437,2300,EMEA,toys,online,86.83,3,0.098,none,2024-01-02\r\n11438,2032,AMER,grocery,retail,83.80,8,0.168,bundle,2024-03-27\r\n11439,1057,LATAM,grocery,mobile,67.12,7,0.067,none,2024-12-07\r\n11440,2358,AMER,electronics,mobile,33.00,1,0.203,coupon,2024-04-27\r\n11441,1348,AMER,grocery,mobile,81.11,8,0.217,loyalty,2024-01-19\r\n11442,1919,EMEA,home,retail,123.60,8,0.228,none,2024-06-03\r\n11443,1045,LATAM,electronics,online,219.49,8,0.007,none,2024-06-12\r\n11444,1730,AMER,home,online,19.13,4,0.039,coupon,2024-09-08\r\n11445,1920,LATAM,sports,online,23.29,2,0.239,coupon,2024-10-05\r\n11446,1064,AMER,grocery,retail,64.53,2,0.083,loyalty,2024-04-20\r\n11447,2313,LATAM,fashion,online,42.37,4,0.166,none,2024-09-24\r\n11448,2484,APAC,electronics,online,43.56,3,0
.064,none,2024-08-04\r\n11449,1196,APAC,home,online,38.37,1,0.175,none,2024-03-14\r\n11450,2299,EMEA,home,online,124.80,6,0.106,none,2024-11-12\r\n11451,2324,AMER,electronics,retail,44.07,8,0.203,none,2024-08-24\r\n11452,1043,LATAM,fashion,online,61.96,7,0.013,bundle,2024-09-05\r\n11453,1233,AMER,sports,retail,124.89,3,0.038,none,2024-07-19\r\n11454,1829,EMEA,grocery,retail,59.67,4,0.141,none,2024-07-21\r\n11455,1626,EMEA,toys,mobile,103.82,6,0.221,coupon,2024-08-14\r\n11456,1984,LATAM,electronics,online,82.89,4,0.176,coupon,2024-11-13\r\n11457,1218,AMER,electronics,mobile,133.03,2,0.064,bundle,2024-05-17\r\n11458,1378,APAC,toys,retail,57.60,1,0.091,none,2024-01-24\r\n11459,2013,APAC,grocery,online,47.04,6,0.055,none,2024-12-20\r\n11460,2096,LATAM,grocery,retail,55.99,3,0.018,none,2024-03-27\r\n11461,2276,AMER,grocery,online,79.17,4,0.204,none,2024-02-25\r\n11462,1627,LATAM,home,mobile,57.06,8,0.237,loyalty,2024-01-15\r\n11463,1819,AMER,grocery,retail,85.79,1,0.096,none,2024-05-22\r\n11464,1545,AMER,toys,online,28.39,1,0.108,none,2024-11-08\r\n11465,1642,EMEA,home,retail,33.59,7,0.083,none,2024-02-13\r\n11466,1523,LATAM,sports,online,52.03,6,0.183,none,2024-04-17\r\n11467,2294,EMEA,fashion,retail,29.57,5,0.207,none,2024-01-27\r\n11468,2085,AMER,grocery,online,41.23,7,0.153,bundle,2024-01-03\r\n11469,1049,AMER,toys,retail,67.86,4,0.165,loyalty,2024-11-13\r\n11470,1425,EMEA,grocery,mobile,38.53,2,0.075,loyalty,2024-04-19\r\n11471,1988,AMER,grocery,partner,55.67,1,0.102,none,2024-04-24\r\n11472,1740,EMEA,toys,online,70.59,8,0.158,none,2024-07-14\r\n11473,2355,EMEA,grocery,online,36.75,2,0.229,none,2024-10-15\r\n11474,1213,EMEA,home,retail,77.87,2,0.038,none,2024-11-07\r\n11475,2455,AMER,home,mobile,43.22,5,0.225,loyalty,2024-02-04\r\n11476,2369,LATAM,fashion,online,41.99,1,0.239,none,2024-06-21\r\n11477,2004,LATAM,electronics,retail,44.53,4,0.122,coupon,2024-04-03\r\n11478,2378,LATAM,electronics,partner,54.45,4,0.095,bundle,2024-02-17\r\n11479,1545,AMER,sports,online,5
4.49,5,0.237,none,2024-10-10\r\n11480,1075,AMER,toys,online,69.82,1,0.215,bundle,2024-11-12\r\n11481,1532,APAC,grocery,online,26.56,6,0.089,coupon,2024-08-07\r\n11482,2017,EMEA,electronics,mobile,47.30,2,0.245,bundle,2024-04-17\r\n11483,1328,APAC,home,retail,63.43,4,0.052,coupon,2024-09-09\r\n11484,2473,EMEA,home,retail,80.19,7,0.147,coupon,2024-07-07\r\n11485,1919,EMEA,grocery,online,89.62,3,0.130,bundle,2024-06-07\r\n11486,1724,LATAM,electronics,online,61.90,4,0.092,none,2024-04-13\r\n11487,1120,LATAM,grocery,online,46.14,1,0.206,none,2024-07-22\r\n11488,1093,APAC,sports,online,86.07,7,0.160,bundle,2024-10-10\r\n11489,1459,LATAM,grocery,online,91.49,1,0.159,none,2024-01-17\r\n11490,2222,LATAM,electronics,retail,51.66,3,0.039,none,2024-09-23\r\n11491,1572,LATAM,electronics,retail,51.43,5,0.109,none,2024-08-14\r\n11492,2473,EMEA,grocery,retail,60.42,7,0.153,bundle,2024-02-12\r\n11493,2367,AMER,grocery,online,76.47,2,0.038,coupon,2024-07-05\r\n11494,2042,LATAM,sports,retail,79.22,7,0.195,loyalty,2024-11-10\r\n11495,2118,AMER,home,retail,41.37,7,0.119,none,2024-12-05\r\n11496,2246,AMER,home,retail,62.03,4,0.049,bundle,2024-03-27\r\n11497,1555,AMER,grocery,online,35.50,3,0.018,loyalty,2024-11-09\r\n11498,1220,LATAM,grocery,retail,46.46,3,0.200,none,2024-01-02\r\n11499,2265,APAC,home,online,58.34,1,0.044,bundle,2024-10-06\r\n11500,1990,EMEA,electronics,online,54.37,2,0.223,none,2024-03-05\r\n11501,1996,APAC,grocery,partner,82.30,6,0.156,none,2024-07-28\r\n11502,2079,EMEA,grocery,mobile,69.05,8,0.098,none,2024-10-21\r\n11503,2416,LATAM,electronics,online,47.81,1,0.233,none,2024-12-22\r\n11504,2117,EMEA,electronics,online,199.85,6,0.236,none,2024-06-17\r\n11505,2356,LATAM,home,online,42.10,8,0.015,loyalty,2024-06-21\r\n11506,1774,EMEA,electronics,retail,43.47,8,0.079,coupon,2024-09-01\r\n11507,2148,EMEA,home,mobile,52.23,3,0.074,coupon,2024-12-25\r\n11508,2490,AMER,electronics,online,51.03,6,0.211,coupon,2024-06-13\r\n11509,1378,APAC,electronics,online,114.46,2,0.063,none
,2024-04-22\r\n11510,1338,EMEA,grocery,retail,85.35,6,0.235,none,2024-02-16\r\n11511,1648,APAC,grocery,retail,30.80,1,0.081,none,2024-05-22\r\n11512,1749,LATAM,fashion,retail,46.98,2,0.101,none,2024-05-19\r\n11513,1671,APAC,grocery,mobile,40.28,7,0.198,loyalty,2024-03-01\r\n11514,1660,AMER,grocery,mobile,111.10,8,0.202,none,2024-03-22\r\n11515,2244,LATAM,grocery,online,92.09,7,0.198,none,2024-02-04\r\n11516,2069,AMER,home,online,15.05,6,0.130,bundle,2024-09-12\r\n11517,2273,APAC,grocery,online,91.68,6,0.103,none,2024-02-07\r\n11518,1936,EMEA,grocery,online,175.84,1,0.058,coupon,2024-07-15\r\n11519,2138,APAC,fashion,online,68.77,3,0.185,bundle,2024-06-20\r\n11520,1478,EMEA,electronics,online,49.52,7,0.019,none,2024-04-10\r\n11521,1869,AMER,home,online,77.45,4,0.229,none,2024-08-17\r\n11522,1959,EMEA,toys,online,66.33,4,0.137,none,2024-04-17\r\n11523,1026,APAC,sports,mobile,94.10,7,0.027,none,2024-11-14\r\n11524,1692,LATAM,toys,online,99.55,1,0.108,coupon,2024-09-01\r\n11525,2123,AMER,sports,online,58.10,6,0.234,none,2024-03-28\r\n11526,2321,APAC,fashion,online,45.05,1,0.103,none,2024-09-15\r\n11527,2464,LATAM,home,retail,26.03,7,0.122,none,2024-02-03\r\n11528,2426,AMER,fashion,mobile,45.54,7,0.046,none,2024-05-12\r\n11529,1346,AMER,fashion,retail,21.55,4,0.091,none,2024-12-08\r\n11530,2215,LATAM,sports,retail,145.72,2,0.015,none,2024-04-23\r\n11531,2106,LATAM,grocery,retail,97.15,6,0.022,none,2024-04-05\r\n11532,1657,LATAM,electronics,retail,26.67,8,0.049,none,2024-06-07\r\n11533,1821,LATAM,home,retail,91.10,3,0.053,coupon,2024-08-06\r\n11534,1693,EMEA,fashion,online,47.03,8,0.088,none,2024-01-04\r\n11535,1267,EMEA,sports,retail,74.93,8,0.221,none,2024-05-20\r\n11536,1078,APAC,grocery,mobile,103.76,8,0.139,none,2024-06-07\r\n11537,1350,LATAM,sports,online,123.24,4,0.245,coupon,2024-07-28\r\n11538,1592,LATAM,electronics,partner,84.29,6,0.082,bundle,2024-10-11\r\n11539,1228,APAC,fashion,online,60.89,6,0.024,coupon,2024-08-11\r\n11540,1329,APAC,toys,retail,32.15,5,0.025
,none,2024-10-03\r\n11541,1606,AMER,electronics,retail,54.95,8,0.115,none,2024-12-20\r\n11542,1034,EMEA,electronics,online,32.73,8,0.198,none,2024-02-12\r\n11543,1748,APAC,fashion,retail,79.76,1,0.017,none,2024-10-22\r\n11544,1894,APAC,sports,retail,82.68,5,0.083,bundle,2024-11-03\r\n11545,1982,EMEA,grocery,online,32.87,5,0.124,none,2024-01-02\r\n11546,2367,AMER,sports,online,129.88,6,0.014,none,2024-04-27\r\n11547,1853,APAC,home,partner,23.47,4,0.073,coupon,2024-01-14\r\n11548,1933,EMEA,grocery,online,56.13,5,0.173,none,2024-12-15\r\n11549,1948,EMEA,fashion,online,52.59,8,0.015,none,2024-11-23\r\n11550,1142,EMEA,sports,online,180.08,2,0.038,none,2024-01-03\r\n11551,1002,EMEA,home,online,59.09,3,0.147,coupon,2024-07-03\r\n11552,2252,EMEA,home,online,65.33,3,0.032,loyalty,2024-02-15\r\n11553,1095,APAC,home,retail,112.91,8,0.096,none,2024-11-04\r\n11554,1302,LATAM,grocery,online,41.82,7,0.126,bundle,2024-12-23\r\n11555,2345,LATAM,toys,partner,130.87,5,0.195,none,2024-05-23\r\n11556,1508,LATAM,electronics,online,26.05,8,0.211,none,2024-06-22\r\n11557,1595,AMER,home,retail,29.89,7,0.114,none,2024-09-15\r\n11558,1697,APAC,fashion,online,24.64,7,0.182,coupon,2024-02-23\r\n11559,2170,EMEA,home,online,52.87,5,0.227,loyalty,2024-03-23\r\n11560,2008,APAC,toys,online,49.19,8,0.151,none,2024-07-16\r\n11561,1326,AMER,grocery,retail,57.56,5,0.116,none,2024-07-08\r\n11562,1288,LATAM,home,online,78.27,2,0.078,none,2024-01-21\r\n11563,1316,APAC,fashion,retail,63.68,1,0.008,none,2024-07-01\r\n11564,1844,APAC,home,online,65.22,5,0.181,bundle,2024-02-07\r\n11565,1987,AMER,fashion,online,57.25,6,0.153,coupon,2024-01-27\r\n11566,1974,EMEA,home,online,84.79,8,0.077,none,2024-07-22\r\n11567,1921,LATAM,grocery,online,115.78,5,0.001,none,2024-11-05\r\n11568,1028,EMEA,toys,retail,69.27,3,0.210,none,2024-10-09\r\n11569,2128,EMEA,grocery,online,33.74,6,0.031,coupon,2024-11-03\r\n11570,1824,LATAM,toys,online,71.37,2,0.158,none,2024-03-11\r\n11571,1004,LATAM,home,online,94.90,5,0.017,none,2024-11
-26\r\n11572,1060,LATAM,home,online,37.31,5,0.021,bundle,2024-09-18\r\n11573,1802,AMER,fashion,retail,60.71,6,0.059,bundle,2024-08-18\r\n11574,1929,LATAM,electronics,partner,36.42,5,0.194,coupon,2024-05-14\r\n11575,2452,LATAM,home,online,81.28,4,0.173,coupon,2024-01-14\r\n11576,1404,EMEA,toys,online,47.92,4,0.121,loyalty,2024-11-13\r\n11577,1363,EMEA,electronics,retail,111.68,4,0.025,coupon,2024-02-08\r\n11578,1494,AMER,home,mobile,70.77,3,0.015,none,2024-08-16\r\n11579,1453,APAC,electronics,online,54.22,5,0.081,none,2024-02-24\r\n11580,1955,AMER,electronics,mobile,50.10,4,0.128,coupon,2024-02-06\r\n11581,1265,APAC,grocery,mobile,32.15,7,0.120,coupon,2024-08-25\r\n11582,1824,LATAM,toys,online,101.97,3,0.241,coupon,2024-08-21\r\n11583,1313,EMEA,home,retail,32.91,5,0.153,bundle,2024-04-18\r\n11584,1717,AMER,fashion,online,78.64,2,0.059,none,2024-03-03\r\n11585,2107,APAC,sports,partner,105.04,7,0.118,none,2024-12-02\r\n11586,1920,LATAM,home,online,23.25,1,0.247,bundle,2024-04-25\r\n11587,2423,LATAM,fashion,online,37.86,3,0.068,none,2024-01-24\r\n11588,1008,AMER,toys,retail,80.36,5,0.220,none,2024-01-07\r\n11589,1415,AMER,home,retail,38.37,5,0.102,coupon,2024-05-19\r\n11590,1486,LATAM,home,retail,78.95,5,0.029,coupon,2024-05-22\r\n11591,1760,LATAM,grocery,retail,111.91,7,0.040,none,2024-03-10\r\n11592,1589,AMER,fashion,retail,93.62,6,0.143,none,2024-02-03\r\n11593,1380,AMER,toys,online,37.83,2,0.217,none,2024-09-22\r\n11594,1068,APAC,grocery,retail,81.37,8,0.235,bundle,2024-03-14\r\n11595,2412,LATAM,grocery,mobile,94.92,6,0.216,loyalty,2024-08-04\r\n11596,2319,AMER,home,retail,107.19,6,0.150,none,2024-06-19\r\n11597,1416,EMEA,electronics,mobile,34.97,6,0.190,loyalty,2024-05-13\r\n11598,1831,APAC,home,partner,24.51,4,0.241,loyalty,2024-01-20\r\n11599,1349,APAC,sports,online,59.55,5,0.141,coupon,2024-06-10\r\n11600,1164,EMEA,fashion,online,45.06,2,0.206,none,2024-08-05\r\n11601,1054,EMEA,fashion,retail,49.12,4,0.065,coupon,2024-09-10\r\n11602,2076,AMER,grocery,retail,61.6
0,4,0.238,loyalty,2024-02-15\r\n11603,2270,APAC,fashion,online,39.24,2,0.163,bundle,2024-10-13\r\n11604,2359,LATAM,electronics,retail,36.76,3,0.097,coupon,2024-07-02\r\n11605,1230,EMEA,grocery,retail,23.80,8,0.029,none,2024-10-16\r\n11606,2278,APAC,fashion,mobile,56.23,6,0.139,loyalty,2024-01-02\r\n11607,1024,APAC,fashion,retail,85.41,6,0.064,coupon,2024-12-08\r\n11608,1709,EMEA,grocery,online,49.40,8,0.071,none,2024-12-06\r\n11609,1540,LATAM,toys,online,44.53,2,0.068,none,2024-05-16\r\n11610,2177,AMER,sports,retail,49.44,3,0.213,coupon,2024-07-27\r\n11611,1618,EMEA,fashion,mobile,38.54,8,0.175,none,2024-07-06\r\n11612,2367,AMER,home,retail,101.14,1,0.173,none,2024-05-24\r\n11613,2082,APAC,grocery,online,86.73,1,0.044,none,2024-12-13\r\n11614,1216,APAC,electronics,mobile,56.18,1,0.158,coupon,2024-02-12\r\n11615,2106,LATAM,sports,online,75.21,8,0.025,none,2024-06-05\r\n11616,2011,AMER,electronics,online,141.79,8,0.171,none,2024-08-18\r\n11617,1361,LATAM,home,retail,53.21,2,0.056,none,2024-08-05\r\n11618,1991,APAC,grocery,retail,82.29,1,0.083,none,2024-08-22\r\n11619,1914,EMEA,fashion,retail,54.34,5,0.062,bundle,2024-12-22\r\n11620,1989,LATAM,toys,online,110.07,5,0.041,coupon,2024-11-25\r\n11621,1517,AMER,grocery,retail,105.54,7,0.031,coupon,2024-02-15\r\n11622,1561,EMEA,fashion,online,58.61,4,0.061,loyalty,2024-04-01\r\n11623,1361,LATAM,sports,online,32.92,7,0.073,bundle,2024-02-16\r\n11624,1105,AMER,home,retail,93.95,2,0.160,none,2024-03-04\r\n11625,1628,EMEA,grocery,retail,50.00,2,0.147,none,2024-05-19\r\n11626,1397,LATAM,grocery,online,68.96,7,0.046,none,2024-04-10\r\n11627,1139,EMEA,electronics,mobile,152.68,2,0.092,coupon,2024-05-03\r\n11628,1752,APAC,home,mobile,47.89,6,0.057,none,2024-03-05\r\n11629,1463,EMEA,grocery,retail,95.74,1,0.016,bundle,2024-11-19\r\n11630,1028,EMEA,grocery,retail,137.12,6,0.243,none,2024-07-05\r\n11631,1888,LATAM,grocery,online,59.49,4,0.092,coupon,2024-04-21\r\n11632,2382,LATAM,grocery,mobile,53.96,1,0.001,none,2024-06-16\r\n11633,11
32,EMEA,grocery,mobile,41.41,5,0.205,bundle,2024-07-21\r\n11634,1130,LATAM,grocery,mobile,63.35,6,0.202,coupon,2024-07-05\r\n11635,1618,EMEA,electronics,mobile,76.98,2,0.127,bundle,2024-02-07\r\n11636,1974,EMEA,toys,online,62.75,5,0.083,bundle,2024-02-01\r\n11637,1971,EMEA,electronics,retail,89.54,4,0.191,none,2024-02-13\r\n11638,2303,EMEA,fashion,retail,57.11,6,0.231,none,2024-03-02\r\n11639,2425,APAC,electronics,online,133.57,3,0.158,none,2024-02-05\r\n11640,1008,AMER,toys,mobile,37.76,7,0.126,none,2024-06-25\r\n11641,2427,LATAM,home,online,65.40,8,0.182,bundle,2024-01-13\r\n11642,1792,AMER,grocery,online,32.10,4,0.028,bundle,2024-02-27\r\n11643,2023,LATAM,sports,online,62.57,3,0.036,none,2024-09-24\r\n11644,1892,LATAM,grocery,online,30.05,5,0.181,none,2024-09-15\r\n11645,2482,EMEA,electronics,retail,84.77,7,0.114,loyalty,2024-03-18\r\n11646,1313,EMEA,grocery,retail,38.30,5,0.115,none,2024-06-11\r\n11647,2057,APAC,fashion,online,79.42,3,0.065,loyalty,2024-07-14\r\n11648,1959,EMEA,home,retail,49.51,2,0.198,loyalty,2024-08-14\r\n11649,1293,AMER,toys,retail,74.53,3,0.081,none,2024-11-09\r\n11650,1901,AMER,toys,retail,30.52,6,0.049,none,2024-02-09\r\n11651,1212,LATAM,electronics,retail,48.53,6,0.009,bundle,2024-04-16\r\n11652,1344,EMEA,grocery,online,46.30,7,0.231,coupon,2024-10-03\r\n11653,1544,LATAM,sports,online,46.57,3,0.231,none,2024-11-06\r\n11654,1308,EMEA,grocery,online,62.79,5,0.040,loyalty,2024-12-13\r\n11655,1504,AMER,fashion,online,77.46,8,0.179,loyalty,2024-08-19\r\n11656,1653,APAC,toys,mobile,342.26,3,0.159,coupon,2024-08-26\r\n11657,2427,LATAM,grocery,online,34.94,7,0.056,coupon,2024-12-13\r\n11658,1360,APAC,home,online,48.60,4,0.200,none,2024-10-01\r\n11659,1969,LATAM,fashion,online,59.53,3,0.118,loyalty,2024-09-03\r\n11660,1264,APAC,sports,mobile,74.21,3,0.005,loyalty,2024-07-19\r\n11661,2087,LATAM,fashion,partner,80.22,8,0.069,bundle,2024-11-03\r\n11662,1315,AMER,electronics,online,42.50,6,0.029,bundle,2024-07-19\r\n11663,1223,LATAM,grocery,retail,34
.51,1,0.103,coupon,2024-07-28\r\n11664,2153,APAC,sports,online,39.10,1,0.085,bundle,2024-04-17\r\n11665,1735,LATAM,sports,retail,56.26,1,0.183,coupon,2024-12-12\r\n11666,2099,AMER,sports,retail,34.11,4,0.098,none,2024-04-03\r\n11667,1186,APAC,grocery,online,69.90,5,0.152,none,2024-05-03\r\n11668,2135,EMEA,home,retail,22.61,3,0.112,none,2024-03-27\r\n11669,1384,LATAM,home,retail,45.81,7,0.021,none,2024-10-11\r\n11670,1924,AMER,electronics,online,71.55,1,0.087,bundle,2024-05-04\r\n11671,1063,AMER,fashion,online,66.20,5,0.055,none,2024-10-05\r\n11672,1679,APAC,home,retail,27.20,8,0.117,bundle,2024-05-07\r\n11673,2122,AMER,home,online,66.08,5,0.130,none,2024-05-03\r\n11674,1432,APAC,toys,online,64.43,8,0.169,bundle,2024-07-05\r\n11675,1397,LATAM,electronics,partner,54.35,1,0.154,loyalty,2024-05-25\r\n11676,1204,AMER,grocery,online,55.63,2,0.079,bundle,2024-02-02\r\n11677,1547,AMER,grocery,retail,25.84,1,0.068,none,2024-05-23\r\n11678,1794,AMER,home,mobile,44.86,4,0.220,coupon,2024-10-21\r\n11679,1607,LATAM,electronics,online,74.62,2,0.085,bundle,2024-09-06\r\n11680,1656,LATAM,grocery,retail,91.31,2,0.199,none,2024-09-14\r\n11681,1553,LATAM,grocery,online,48.64,5,0.043,none,2024-01-22\r\n11682,1050,AMER,sports,mobile,36.38,5,0.009,none,2024-04-11\r\n11683,2426,AMER,grocery,online,74.14,3,0.219,none,2024-09-21\r\n11684,2273,APAC,grocery,online,20.75,2,0.211,none,2024-04-02\r\n11685,2344,LATAM,sports,mobile,34.21,5,0.014,coupon,2024-10-26\r\n11686,1683,AMER,toys,retail,110.17,7,0.169,none,2024-08-25\r\n11687,2408,EMEA,grocery,retail,44.12,4,0.134,none,2024-06-12\r\n11688,1731,AMER,electronics,online,70.19,4,0.212,none,2024-07-13\r\n11689,1408,AMER,electronics,retail,30.53,1,0.051,none,2024-10-16\r\n11690,1824,LATAM,home,retail,49.88,2,0.057,none,2024-11-02\r\n11691,1291,EMEA,fashion,partner,19.35,4,0.192,none,2024-02-07\r\n11692,1858,LATAM,fashion,retail,80.46,2,0.154,none,2024-07-12\r\n11693,1702,AMER,grocery,online,53.39,7,0.157,bundle,2024-03-05\r\n11694,2179,LATAM,spor
ts,online,37.09,7,0.097,bundle,2024-01-18\r\n11695,2012,APAC,toys,retail,49.10,7,0.147,bundle,2024-06-20\r\n11696,1450,EMEA,electronics,retail,72.40,6,0.114,loyalty,2024-01-10\r\n11697,1200,EMEA,electronics,mobile,181.64,5,0.078,none,2024-12-18\r\n11698,1346,AMER,fashion,online,95.54,8,0.158,none,2024-11-13\r\n11699,1405,LATAM,electronics,online,37.47,4,0.229,coupon,2024-06-23\r\n11700,2106,LATAM,sports,partner,36.87,8,0.109,none,2024-09-23\r\n11701,1273,AMER,grocery,mobile,33.05,1,0.225,none,2024-07-23\r\n11702,1696,LATAM,electronics,mobile,75.34,5,0.012,none,2024-06-26\r\n11703,2299,EMEA,grocery,online,64.49,6,0.060,bundle,2024-07-27\r\n11704,2225,EMEA,sports,online,74.08,5,0.179,none,2024-05-12\r\n11705,2496,EMEA,grocery,partner,204.31,5,0.034,none,2024-07-15\r\n11706,2350,APAC,fashion,retail,39.15,8,0.222,none,2024-01-08\r\n11707,1081,AMER,sports,online,18.52,8,0.240,coupon,2024-11-10\r\n11708,1011,APAC,sports,online,52.20,5,0.109,coupon,2024-06-23\r\n11709,1871,APAC,home,online,21.77,8,0.009,none,2024-07-02\r\n11710,2028,APAC,home,online,88.03,1,0.219,none,2024-08-02\r\n11711,2046,APAC,grocery,online,29.81,7,0.188,none,2024-12-14\r\n11712,1366,APAC,sports,online,49.23,8,0.185,loyalty,2024-03-28\r\n11713,1963,AMER,toys,online,66.03,7,0.028,none,2024-12-18\r\n11714,1970,LATAM,grocery,online,80.31,7,0.057,none,2024-08-13\r\n11715,2002,APAC,grocery,retail,108.98,5,0.209,none,2024-05-18\r\n11716,2176,AMER,grocery,retail,40.85,1,0.185,coupon,2024-07-03\r\n11717,2123,AMER,home,online,43.68,4,0.029,none,2024-01-25\r\n11718,1096,EMEA,grocery,online,27.35,7,0.103,none,2024-01-27\r\n11719,1689,LATAM,home,online,75.64,5,0.106,loyalty,2024-01-28\r\n11720,2473,EMEA,home,retail,18.23,7,0.120,loyalty,2024-11-11\r\n11721,1399,AMER,home,partner,36.36,1,0.054,none,2024-12-13\r\n11722,1202,APAC,grocery,online,44.21,2,0.102,coupon,2024-12-14\r\n11723,1414,APAC,electronics,mobile,39.37,7,0.053,bundle,2024-04-27\r\n11724,1518,AMER,electronics,online,26.70,2,0.146,none,2024-05-11\r\n1
1725,1916,AMER,home,online,68.40,6,0.195,none,2024-09-08\r\n11726,2344,LATAM,fashion,mobile,72.98,2,0.078,bundle,2024-09-10\r\n11727,2210,APAC,grocery,retail,51.26,4,0.115,none,2024-03-03\r\n11728,1099,LATAM,grocery,online,108.95,1,0.022,bundle,2024-08-26\r\n11729,2256,AMER,electronics,mobile,17.66,3,0.155,none,2024-04-25\r\n11730,2123,AMER,grocery,mobile,52.41,6,0.037,none,2024-05-11\r\n11731,1912,APAC,sports,mobile,59.69,4,0.025,none,2024-05-28\r\n11732,2487,LATAM,electronics,retail,37.13,5,0.223,none,2024-05-27\r\n11733,1036,EMEA,fashion,retail,32.89,3,0.185,none,2024-03-08\r\n11734,1962,APAC,fashion,online,40.19,1,0.209,none,2024-04-02\r\n11735,1264,APAC,grocery,partner,87.63,4,0.012,bundle,2024-11-21\r\n11736,2322,AMER,home,retail,43.54,5,0.091,none,2024-07-26\r\n11737,2391,EMEA,sports,retail,55.00,8,0.038,none,2024-01-15\r\n11738,1593,AMER,toys,online,67.49,3,0.173,bundle,2024-07-22\r\n11739,2011,AMER,home,online,32.76,5,0.097,none,2024-11-15\r\n11740,2072,AMER,home,online,68.89,6,0.107,bundle,2024-11-22\r\n11741,1402,EMEA,home,mobile,34.59,4,0.092,none,2024-11-12\r\n11742,1270,LATAM,grocery,online,73.39,7,0.095,none,2024-11-09\r\n11743,1043,LATAM,home,mobile,82.45,8,0.068,none,2024-03-27\r\n11744,1119,LATAM,sports,online,117.21,5,0.106,none,2024-06-26\r\n11745,1858,LATAM,grocery,online,24.64,6,0.197,none,2024-04-04\r\n11746,2439,AMER,sports,retail,28.50,6,0.085,none,2024-12-05\r\n11747,1794,AMER,electronics,retail,117.04,7,0.023,coupon,2024-03-25\r\n11748,1751,AMER,sports,online,51.93,3,0.225,none,2024-05-03\r\n11749,1758,AMER,toys,online,68.79,1,0.137,bundle,2024-12-02\r\n11750,1893,APAC,electronics,online,39.75,7,0.127,coupon,2024-10-02\r\n11751,1036,EMEA,grocery,retail,94.63,2,0.047,none,2024-09-28\r\n11752,2322,AMER,home,retail,53.49,3,0.107,bundle,2024-09-28\r\n11753,1557,LATAM,home,retail,50.56,4,0.060,none,2024-03-01\r\n11754,2156,AMER,fashion,mobile,36.74,1,0.138,bundle,2024-07-12\r\n11755,1671,APAC,fashion,retail,33.79,5,0.077,coupon,2024-09-14\r\n11
756,2017,EMEA,fashion,online,29.92,6,0.145,none,2024-05-20\r\n11757,1968,EMEA,fashion,retail,61.40,4,0.111,bundle,2024-02-20\r\n11758,1242,LATAM,grocery,online,44.26,7,0.015,none,2024-10-17\r\n11759,1135,APAC,toys,online,33.61,3,0.067,none,2024-08-19\r\n11760,2207,APAC,grocery,online,40.76,7,0.180,none,2024-02-12\r\n11761,2149,EMEA,toys,retail,35.82,3,0.108,none,2024-06-02\r\n11762,1812,EMEA,home,online,51.15,1,0.115,none,2024-10-25\r\n11763,2031,AMER,electronics,online,145.27,8,0.184,none,2024-12-20\r\n11764,1240,EMEA,toys,online,55.84,2,0.237,bundle,2024-04-05\r\n11765,1514,LATAM,fashion,online,46.77,8,0.102,coupon,2024-02-18\r\n11766,2117,EMEA,sports,online,71.92,2,0.124,none,2024-04-23\r\n11767,1219,LATAM,toys,online,44.81,8,0.213,none,2024-12-09\r\n11768,1469,EMEA,sports,retail,89.64,2,0.141,none,2024-04-18\r\n11769,1932,EMEA,sports,retail,102.31,3,0.103,bundle,2024-01-25\r\n11770,1661,LATAM,fashion,retail,143.59,7,0.045,coupon,2024-04-26\r\n11771,2483,LATAM,electronics,online,32.30,8,0.234,loyalty,2024-12-16\r\n11772,1830,EMEA,home,mobile,49.02,6,0.136,bundle,2024-11-08\r\n11773,1671,APAC,grocery,retail,104.12,4,0.202,none,2024-08-12\r\n11774,2053,AMER,sports,retail,45.24,7,0.184,bundle,2024-02-17\r\n11775,1745,APAC,electronics,online,39.41,6,0.127,loyalty,2024-11-26\r\n11776,1538,AMER,fashion,online,41.91,2,0.128,loyalty,2024-08-09\r\n11777,2124,AMER,grocery,retail,112.33,2,0.098,bundle,2024-03-08\r\n11778,2130,EMEA,sports,online,104.60,4,0.065,bundle,2024-02-12\r\n11779,1289,LATAM,electronics,online,76.78,1,0.209,bundle,2024-06-27\r\n11780,1924,AMER,fashion,retail,43.07,7,0.036,bundle,2024-04-21\r\n11781,1635,APAC,home,retail,47.09,8,0.088,coupon,2024-08-14\r\n11782,2051,APAC,grocery,mobile,31.34,4,0.127,loyalty,2024-04-13\r\n11783,1994,LATAM,electronics,partner,65.91,1,0.184,loyalty,2024-06-11\r\n11784,2294,EMEA,fashion,retail,36.60,1,0.165,bundle,2024-04-06\r\n11785,1399,AMER,grocery,retail,66.84,8,0.125,coupon,2024-02-06\r\n11786,1813,EMEA,home,online,65.
35,2,0.013,bundle,2024-12-23\r\n11787,1331,AMER,sports,retail,45.03,8,0.206,none,2024-04-02\r\n11788,1591,APAC,toys,online,59.85,8,0.038,none,2024-02-05\r\n11789,1233,AMER,home,mobile,90.39,5,0.239,none,2024-10-09\r\n11790,1057,LATAM,grocery,online,31.96,4,0.059,bundle,2024-02-10\r\n11791,1827,EMEA,grocery,retail,102.67,4,0.025,coupon,2024-03-04\r\n11792,2208,AMER,home,online,59.78,4,0.107,none,2024-07-26\r\n11793,1952,EMEA,electronics,partner,73.90,5,0.165,coupon,2024-11-22\r\n11794,1728,AMER,toys,online,53.93,4,0.029,none,2024-11-06\r\n11795,2476,APAC,grocery,online,27.08,8,0.094,bundle,2024-07-11\r\n11796,1330,EMEA,grocery,online,35.55,1,0.021,coupon,2024-06-17\r\n11797,1273,AMER,electronics,retail,32.88,6,0.117,none,2024-12-09\r\n11798,1278,AMER,grocery,retail,49.37,6,0.052,none,2024-09-25\r\n11799,1014,EMEA,fashion,online,54.45,4,0.113,coupon,2024-03-27\r\n11800,1398,APAC,grocery,retail,62.95,2,0.166,none,2024-02-08\r\n11801,2124,AMER,sports,retail,36.12,5,0.100,loyalty,2024-09-05\r\n11802,2294,EMEA,sports,online,64.57,8,0.109,none,2024-07-01\r\n11803,1430,EMEA,fashion,online,56.29,8,0.106,bundle,2024-09-04\r\n11804,2089,EMEA,toys,retail,41.16,1,0.070,bundle,2024-08-12\r\n11805,2128,EMEA,electronics,online,88.96,3,0.017,coupon,2024-04-11\r\n11806,2207,APAC,home,online,30.75,3,0.046,none,2024-04-09\r\n11807,1709,EMEA,fashion,online,55.50,8,0.230,none,2024-12-13\r\n11808,1115,AMER,home,online,58.01,5,0.016,none,2024-06-07\r\n11809,2074,AMER,grocery,online,30.98,4,0.106,none,2024-05-23\r\n11810,1145,AMER,grocery,online,28.99,5,0.080,coupon,2024-02-20\r\n11811,1631,APAC,grocery,online,37.61,2,0.044,none,2024-11-10\r\n11812,1623,AMER,electronics,online,71.09,6,0.135,coupon,2024-10-21\r\n11813,1657,LATAM,grocery,mobile,107.83,7,0.097,loyalty,2024-07-20\r\n11814,1679,APAC,sports,retail,89.89,3,0.152,coupon,2024-03-15\r\n11815,1404,EMEA,electronics,retail,90.29,7,0.128,none,2024-10-21\r\n11816,2361,EMEA,sports,retail,74.72,3,0.220,coupon,2024-05-14\r\n11817,2192,APAC,e
lectronics,online,166.32,1,0.123,none,2024-12-18\r\n11818,1830,EMEA,grocery,online,42.29,1,0.056,coupon,2024-09-22\r\n11819,1880,LATAM,fashion,online,26.71,2,0.203,none,2024-02-26\r\n11820,1284,APAC,toys,online,71.07,2,0.160,none,2024-03-04\r\n11821,1815,APAC,home,online,78.03,2,0.059,coupon,2024-09-28\r\n11822,2372,AMER,sports,online,85.35,3,0.051,coupon,2024-11-27\r\n11823,1456,APAC,electronics,online,44.98,5,0.245,none,2024-05-04\r\n11824,2206,AMER,electronics,retail,52.57,3,0.173,none,2024-09-22\r\n11825,1394,LATAM,toys,online,39.24,4,0.020,none,2024-08-23\r\n11826,2077,APAC,electronics,retail,61.99,4,0.126,none,2024-06-04\r\n11827,1491,EMEA,grocery,mobile,73.30,2,0.246,coupon,2024-10-02\r\n11828,2253,AMER,home,retail,94.75,3,0.187,none,2024-02-24\r\n11829,1435,AMER,fashion,retail,22.59,4,0.157,none,2024-09-09\r\n11830,1216,APAC,toys,mobile,57.20,8,0.246,coupon,2024-11-19\r\n11831,1420,APAC,home,partner,70.31,2,0.173,loyalty,2024-03-25\r\n11832,2445,APAC,home,mobile,111.53,7,0.158,none,2024-01-10\r\n11833,1199,APAC,sports,online,135.44,1,0.146,coupon,2024-07-15\r\n11834,1565,AMER,grocery,online,110.54,1,0.136,none,2024-03-02\r\n11835,1432,APAC,fashion,partner,22.66,8,0.101,none,2024-05-23\r\n11836,1157,LATAM,fashion,mobile,185.40,1,0.241,coupon,2024-02-18\r\n11837,1178,EMEA,fashion,online,25.73,5,0.099,bundle,2024-12-28\r\n11838,1963,AMER,electronics,partner,78.87,5,0.105,coupon,2024-01-20\r\n11839,1650,LATAM,electronics,mobile,190.18,1,0.081,none,2024-10-25\r\n11840,2292,EMEA,home,online,32.17,6,0.025,bundle,2024-04-26\r\n11841,1847,LATAM,fashion,online,31.13,3,0.020,bundle,2024-08-21\r\n11842,1928,AMER,sports,retail,26.56,2,0.149,none,2024-04-01\r\n11843,1177,LATAM,toys,retail,73.08,6,0.231,none,2024-03-02\r\n11844,1208,AMER,grocery,mobile,49.52,2,0.207,none,2024-01-21\r\n11845,2117,EMEA,grocery,retail,51.18,3,0.152,none,2024-06-19\r\n11846,1129,LATAM,sports,online,62.52,2,0.042,none,2024-08-13\r\n11847,2323,AMER,electronics,online,58.52,4,0.149,none,2024-12-1
6\r\n11848,2288,AMER,sports,mobile,55.82,4,0.141,none,2024-08-04\r\n11849,2094,AMER,fashion,retail,96.95,1,0.232,none,2024-05-09\r\n11850,1107,APAC,grocery,retail,90.61,5,0.190,none,2024-06-09\r\n11851,1559,EMEA,sports,mobile,95.62,6,0.176,none,2024-02-06\r\n11852,2344,LATAM,fashion,online,60.18,4,0.019,coupon,2024-06-08\r\n11853,2018,AMER,fashion,retail,84.04,1,0.226,coupon,2024-11-25\r\n11854,1848,EMEA,grocery,online,60.61,3,0.204,none,2024-06-13\r\n11855,1970,LATAM,electronics,retail,53.83,3,0.025,coupon,2024-08-21\r\n11856,1789,EMEA,grocery,retail,50.72,5,0.003,loyalty,2024-11-03\r\n11857,1191,EMEA,grocery,retail,59.62,1,0.190,none,2024-06-06\r\n11858,1873,EMEA,home,online,78.94,4,0.016,loyalty,2024-03-21\r\n11859,1400,EMEA,electronics,online,90.20,8,0.164,none,2024-06-18\r\n11860,1863,EMEA,sports,online,22.86,3,0.100,loyalty,2024-01-10\r\n11861,1641,EMEA,grocery,retail,25.49,8,0.179,none,2024-04-24\r\n11862,1453,APAC,sports,online,23.18,1,0.042,bundle,2024-11-06\r\n11863,2227,LATAM,grocery,retail,57.71,4,0.160,bundle,2024-04-13\r\n11864,1661,LATAM,fashion,mobile,109.04,2,0.182,none,2024-01-10\r\n11865,1942,APAC,home,mobile,76.22,1,0.160,bundle,2024-09-08\r\n11866,2368,AMER,fashion,mobile,71.95,6,0.071,coupon,2024-01-20\r\n11867,2123,AMER,sports,online,81.42,8,0.055,bundle,2024-05-04\r\n11868,1975,EMEA,grocery,online,22.70,7,0.100,none,2024-05-09\r\n11869,2099,AMER,electronics,retail,72.29,7,0.111,none,2024-02-07\r\n11870,1958,APAC,grocery,online,87.53,3,0.092,coupon,2024-08-15\r\n11871,1199,APAC,grocery,retail,19.22,8,0.145,none,2024-03-15\r\n11872,2349,APAC,grocery,online,60.76,7,0.100,coupon,2024-08-09\r\n11873,1656,LATAM,fashion,online,36.20,1,0.071,none,2024-09-09\r\n11874,1493,APAC,sports,mobile,36.86,4,0.078,none,2024-04-11\r\n11875,1601,APAC,electronics,online,35.79,3,0.080,none,2024-01-20\r\n11876,2267,AMER,toys,partner,50.79,2,0.157,coupon,2024-12-02\r\n11877,1613,EMEA,fashion,partner,38.94,4,0.032,coupon,2024-09-16\r\n11878,2393,LATAM,electronics,onli
ne,55.50,7,0.176,none,2024-01-21\r\n11879,1621,APAC,home,online,94.37,1,0.248,loyalty,2024-02-08\r\n11880,1279,EMEA,fashion,retail,37.46,4,0.093,coupon,2024-02-07\r\n11881,1806,APAC,home,online,34.16,4,0.051,bundle,2024-07-07\r\n11882,2023,LATAM,home,retail,41.14,1,0.034,none,2024-08-08\r\n11883,1851,EMEA,sports,mobile,41.97,1,0.165,coupon,2024-10-13\r\n11884,2312,APAC,electronics,online,73.34,2,0.214,loyalty,2024-11-04\r\n11885,1330,EMEA,sports,retail,81.72,6,0.236,none,2024-11-11\r\n11886,1006,AMER,grocery,online,58.74,5,0.007,none,2024-06-09\r\n11887,2308,AMER,fashion,retail,45.79,5,0.219,none,2024-12-26\r\n11888,2251,APAC,toys,retail,49.43,1,0.055,none,2024-02-01\r\n11889,1039,AMER,home,partner,63.56,5,0.134,bundle,2024-03-23\r\n11890,1012,LATAM,electronics,retail,62.02,7,0.159,bundle,2024-02-21\r\n11891,1536,LATAM,fashion,online,102.59,5,0.117,none,2024-06-04\r\n11892,1987,AMER,electronics,online,91.84,3,0.012,none,2024-01-02\r\n11893,1932,EMEA,electronics,online,38.82,3,0.039,none,2024-10-11\r\n11894,2015,APAC,sports,online,21.54,2,0.127,none,2024-05-28\r\n11895,1519,APAC,fashion,online,89.11,6,0.146,none,2024-03-24\r\n11896,1627,LATAM,home,online,66.99,5,0.002,none,2024-10-16\r\n11897,1149,LATAM,electronics,retail,20.39,4,0.082,none,2024-03-08\r\n11898,1623,AMER,fashion,mobile,139.65,6,0.014,none,2024-07-26\r\n11899,1599,APAC,fashion,retail,37.40,5,0.090,coupon,2024-06-05\r\n11900,1536,LATAM,electronics,online,55.06,6,0.235,coupon,2024-05-25\r\n11901,2387,EMEA,grocery,mobile,80.65,5,0.237,coupon,2024-09-18\r\n11902,2028,APAC,fashion,online,179.89,6,0.146,none,2024-03-21\r\n11903,1568,AMER,toys,online,53.66,2,0.159,none,2024-01-08\r\n11904,1545,AMER,electronics,retail,44.45,2,0.035,none,2024-07-25\r\n11905,1087,AMER,fashion,retail,43.28,3,0.076,none,2024-07-06\r\n11906,2290,LATAM,electronics,online,68.40,4,0.080,none,2024-11-01\r\n11907,1798,AMER,grocery,retail,128.28,1,0.203,coupon,2024-11-06\r\n11908,1470,LATAM,toys,online,27.18,1,0.187,none,2024-11-11\r\n11
909,1261,APAC,grocery,online,54.31,6,0.167,none,2024-09-12\r\n11910,1252,APAC,electronics,retail,57.48,6,0.115,coupon,2024-08-03\r\n11911,1639,APAC,toys,retail,24.56,1,0.188,none,2024-09-14\r\n11912,1424,APAC,electronics,retail,77.36,2,0.006,coupon,2024-06-28\r\n11913,1504,AMER,grocery,online,49.56,5,0.075,none,2024-08-27\r\n11914,1706,EMEA,fashion,partner,53.70,5,0.034,coupon,2024-05-28\r\n11915,1981,EMEA,electronics,online,58.43,3,0.033,coupon,2024-07-11\r\n11916,2430,APAC,home,retail,29.37,2,0.045,loyalty,2024-09-11\r\n11917,2395,APAC,toys,online,45.65,3,0.085,bundle,2024-06-05\r\n11918,1815,APAC,electronics,mobile,51.44,6,0.043,none,2024-01-12\r\n11919,2085,AMER,electronics,retail,57.46,1,0.031,none,2024-08-16\r\n11920,1149,LATAM,grocery,retail,73.97,3,0.156,coupon,2024-10-09\r\n11921,1973,EMEA,grocery,retail,55.43,1,0.189,none,2024-09-10\r\n11922,2248,LATAM,toys,online,33.88,2,0.131,coupon,2024-02-24\r\n11923,2459,AMER,grocery,retail,68.58,7,0.043,none,2024-01-16\r\n11924,1682,EMEA,home,retail,73.91,2,0.130,coupon,2024-01-19\r\n11925,1587,LATAM,home,online,31.87,4,0.045,bundle,2024-12-28\r\n11926,1836,LATAM,electronics,online,21.47,7,0.150,none,2024-11-26\r\n11927,1357,EMEA,home,online,50.28,7,0.221,bundle,2024-10-11\r\n11928,1677,EMEA,fashion,mobile,22.98,4,0.190,none,2024-01-23\r\n11929,1929,LATAM,sports,mobile,62.63,1,0.186,none,2024-06-25\r\n11930,1067,APAC,electronics,online,50.29,7,0.243,bundle,2024-03-24\r\n11931,2300,EMEA,grocery,online,43.46,3,0.067,bundle,2024-04-03\r\n11932,1531,EMEA,electronics,online,112.78,6,0.145,coupon,2024-02-10\r\n11933,2463,AMER,fashion,retail,73.80,5,0.206,bundle,2024-05-06\r\n11934,2238,AMER,grocery,online,87.04,1,0.062,none,2024-05-15\r\n11935,1648,APAC,sports,mobile,39.78,1,0.202,coupon,2024-11-05\r\n11936,1102,APAC,grocery,retail,33.54,8,0.156,none,2024-01-05\r\n11937,2210,APAC,toys,online,79.41,2,0.059,loyalty,2024-08-12\r\n11938,2465,EMEA,grocery,online,75.22,2,0.178,none,2024-10-19\r\n11939,1183,AMER,electronics,retai
l,44.68,7,0.133,coupon,2024-12-11\r\n11940,1963,AMER,home,online,84.05,2,0.239,none,2024-05-18\r\n11941,2188,EMEA,home,retail,33.37,5,0.114,bundle,2024-06-18\r\n11942,1921,LATAM,fashion,online,102.87,6,0.022,none,2024-01-12\r\n11943,1480,APAC,grocery,online,49.37,3,0.180,none,2024-11-19\r\n11944,2488,EMEA,grocery,retail,36.54,2,0.160,loyalty,2024-11-05\r\n11945,2411,EMEA,home,mobile,54.80,3,0.039,none,2024-09-26\r\n11946,1039,AMER,grocery,online,65.79,2,0.173,loyalty,2024-06-09\r\n11947,1517,AMER,electronics,mobile,33.90,2,0.144,none,2024-11-15\r\n11948,1234,AMER,toys,online,59.06,3,0.163,coupon,2024-10-08\r\n11949,1171,APAC,grocery,retail,55.91,1,0.220,none,2024-08-04\r\n11950,2300,EMEA,sports,retail,44.53,8,0.108,none,2024-03-19\r\n11951,1796,LATAM,grocery,online,69.07,7,0.128,none,2024-11-20\r\n11952,2002,APAC,fashion,mobile,59.50,3,0.042,bundle,2024-12-18\r\n11953,1565,AMER,electronics,mobile,115.89,3,0.038,none,2024-09-16\r\n11954,1177,LATAM,electronics,online,27.24,3,0.160,coupon,2024-12-23\r\n11955,1716,LATAM,toys,online,91.60,8,0.112,none,2024-02-02\r\n11956,2406,EMEA,toys,online,43.42,7,0.054,none,2024-04-01\r\n11957,1938,APAC,fashion,online,53.13,7,0.160,none,2024-02-13\r\n11958,1974,EMEA,home,retail,128.41,1,0.212,none,2024-08-02\r\n11959,2092,AMER,grocery,online,110.08,3,0.227,none,2024-02-26\r\n11960,2433,APAC,home,online,68.62,5,0.173,coupon,2024-10-19\r\n11961,1457,EMEA,electronics,mobile,152.80,1,0.220,loyalty,2024-01-13\r\n11962,1598,EMEA,electronics,retail,36.58,3,0.104,coupon,2024-08-06\r\n11963,1934,EMEA,electronics,online,49.69,3,0.189,bundle,2024-02-17\r\n11964,2175,AMER,grocery,retail,55.56,3,0.158,none,2024-12-24\r\n11965,1078,APAC,toys,retail,87.89,1,0.205,none,2024-02-28\r\n11966,1611,EMEA,fashion,online,44.12,2,0.188,none,2024-06-05\r\n11967,2256,AMER,toys,retail,81.99,3,0.024,none,2024-06-28\r\n11968,1308,EMEA,grocery,mobile,34.86,5,0.143,none,2024-06-22\r\n11969,1698,EMEA,grocery,online,26.01,7,0.228,none,2024-10-04\r\n11970,1244,LATAM,e
lectronics,online,48.27,4,0.122,none,2024-05-22\r\n11971,1050,AMER,sports,retail,85.95,3,0.006,coupon,2024-06-14\r\n11972,1213,EMEA,grocery,retail,35.76,3,0.118,none,2024-06-21\r\n11973,1864,EMEA,toys,online,25.80,3,0.163,coupon,2024-04-06\r\n11974,1567,AMER,sports,retail,98.58,6,0.067,coupon,2024-09-12\r\n11975,2093,LATAM,fashion,partner,42.33,6,0.052,coupon,2024-01-21\r\n11976,2155,APAC,electronics,retail,50.78,1,0.136,bundle,2024-11-16\r\n11977,2404,EMEA,electronics,online,76.08,3,0.235,none,2024-02-17\r\n11978,1557,LATAM,home,online,89.31,2,0.189,coupon,2024-01-11\r\n11979,2004,LATAM,fashion,online,73.10,2,0.039,none,2024-10-18\r\n11980,1511,EMEA,toys,online,66.49,2,0.175,coupon,2024-08-25\r\n11981,2440,APAC,sports,online,45.79,8,0.021,none,2024-06-21\r\n11982,1574,AMER,electronics,online,35.40,1,0.088,none,2024-06-06\r\n11983,1590,APAC,grocery,mobile,95.90,1,0.029,none,2024-03-12\r\n11984,1021,AMER,toys,online,30.08,1,0.065,none,2024-01-04\r\n11985,2042,LATAM,toys,retail,99.22,2,0.173,none,2024-04-02\r\n11986,2496,EMEA,home,online,61.12,7,0.204,none,2024-08-10\r\n11987,2066,APAC,grocery,retail,33.76,5,0.097,none,2024-03-18\r\n11988,1616,APAC,grocery,retail,43.47,8,0.185,none,2024-01-05\r\n11989,1662,LATAM,grocery,mobile,29.22,6,0.076,none,2024-02-07\r\n11990,1931,APAC,toys,retail,31.56,7,0.158,loyalty,2024-03-17\r\n11991,2343,EMEA,toys,retail,70.10,4,0.147,none,2024-02-19\r\n11992,1686,LATAM,sports,mobile,65.16,7,0.202,none,2024-07-12\r\n11993,1305,EMEA,home,mobile,51.60,2,0.215,none,2024-04-11\r\n11994,1926,AMER,grocery,partner,31.92,2,0.233,none,2024-05-22\r\n11995,1357,EMEA,electronics,retail,39.88,4,0.174,coupon,2024-12-04\r\n11996,2195,APAC,electronics,partner,63.72,8,0.162,none,2024-09-14\r\n11997,2285,APAC,electronics,online,276.58,3,0.121,coupon,2024-07-26\r\n11998,1042,LATAM,home,retail,66.24,8,0.167,coupon,2024-12-01\r\n11999,2356,LATAM,grocery,online,93.50,1,0.036,none,2024-06-13\r\n12000,1798,AMER,grocery,online,58.78,8,0.104,coupon,2024-04-03\r\n12
001,1207,APAC,electronics,retail,58.34,5,0.140,none,2024-01-27\r\n12002,2300,EMEA,grocery,online,50.56,6,0.148,none,2024-04-23\r\n12003,1725,APAC,sports,retail,61.22,8,0.013,loyalty,2024-05-12\r\n12004,1031,AMER,home,online,78.65,3,0.191,coupon,2024-12-03\r\n12005,2005,APAC,home,online,42.80,7,0.168,loyalty,2024-10-05\r\n12006,1981,EMEA,fashion,mobile,78.52,2,0.091,none,2024-03-06\r\n12007,1194,APAC,grocery,online,59.96,6,0.201,none,2024-07-26\r\n12008,1768,AMER,home,online,26.21,5,0.127,none,2024-02-10\r\n12009,1795,EMEA,electronics,retail,77.05,2,0.216,coupon,2024-12-24\r\n12010,2168,EMEA,sports,retail,48.95,8,0.004,loyalty,2024-06-18\r\n12011,2225,EMEA,fashion,online,90.07,1,0.022,none,2024-02-16\r\n12012,2074,AMER,grocery,mobile,53.62,1,0.114,coupon,2024-08-12\r\n12013,1987,AMER,grocery,retail,29.79,1,0.015,none,2024-06-21\r\n12014,2142,LATAM,home,retail,25.57,3,0.150,none,2024-12-28\r\n12015,2441,EMEA,grocery,retail,101.40,2,0.074,none,2024-09-06\r\n12016,2056,LATAM,electronics,online,49.16,5,0.174,loyalty,2024-02-07\r\n12017,1317,EMEA,grocery,online,40.80,2,0.151,none,2024-02-26\r\n12018,2477,APAC,grocery,retail,183.35,6,0.236,bundle,2024-02-15\r\n12019,1807,EMEA,electronics,retail,52.67,5,0.224,none,2024-04-28\r\n12020,2172,EMEA,home,online,95.62,4,0.003,none,2024-10-04\r\n12021,1450,EMEA,grocery,online,21.34,5,0.003,coupon,2024-10-24\r\n12022,2024,AMER,electronics,online,45.15,1,0.020,coupon,2024-02-26\r\n12023,2016,LATAM,grocery,online,25.32,1,0.097,bundle,2024-04-23\r\n12024,1106,AMER,home,mobile,95.00,2,0.069,none,2024-09-07\r\n12025,2472,AMER,sports,online,123.77,4,0.044,none,2024-02-26\r\n12026,2485,AMER,sports,online,44.46,1,0.218,none,2024-09-16\r\n12027,1126,LATAM,fashion,online,68.68,5,0.167,none,2024-10-12\r\n12028,1587,LATAM,sports,online,123.01,6,0.186,coupon,2024-08-14\r\n12029,1943,AMER,fashion,online,16.91,5,0.163,coupon,2024-03-25\r\n12030,1207,APAC,toys,retail,32.70,1,0.048,coupon,2024-10-24\r\n12031,2197,LATAM,electronics,retail,160.61,1,0.
070,none,2024-09-03\r\n12032,2130,EMEA,toys,online,99.74,7,0.144,bundle,2024-05-21\r\n12033,1343,LATAM,electronics,mobile,30.95,4,0.113,none,2024-01-28\r\n12034,2355,EMEA,fashion,online,79.84,7,0.003,none,2024-11-11\r\n12035,1200,EMEA,fashion,retail,62.74,6,0.027,none,2024-10-21\r\n12036,2187,EMEA,electronics,mobile,88.23,5,0.227,none,2024-03-27\r\n12037,1431,APAC,fashion,online,66.59,4,0.086,coupon,2024-09-19\r\n12038,2063,APAC,fashion,online,40.66,6,0.224,none,2024-01-06\r\n12039,1780,APAC,grocery,retail,71.50,3,0.225,none,2024-04-25\r\n12040,1452,LATAM,fashion,online,33.18,1,0.142,none,2024-11-19\r\n12041,1760,LATAM,toys,mobile,65.75,6,0.203,bundle,2024-02-25\r\n12042,1813,EMEA,home,online,105.40,1,0.116,bundle,2024-11-23\r\n12043,1138,AMER,sports,online,56.94,3,0.243,bundle,2024-08-02\r\n12044,1508,LATAM,fashion,mobile,126.01,3,0.015,none,2024-06-18\r\n12045,1154,LATAM,home,online,69.36,2,0.086,coupon,2024-08-08\r\n12046,2112,LATAM,grocery,retail,130.29,2,0.070,coupon,2024-11-06\r\n12047,2278,APAC,electronics,retail,39.27,3,0.111,none,2024-09-17\r\n12048,2101,APAC,electronics,online,72.53,4,0.086,coupon,2024-03-24\r\n12049,1707,APAC,home,mobile,123.93,1,0.028,none,2024-05-21\r\n12050,2472,AMER,grocery,retail,41.11,7,0.034,coupon,2024-02-26\r\n12051,1487,AMER,sports,online,63.71,6,0.096,none,2024-04-12\r\n12052,1989,LATAM,grocery,online,45.76,1,0.154,bundle,2024-07-03\r\n12053,2025,EMEA,electronics,online,51.57,5,0.141,none,2024-03-27\r\n12054,2276,AMER,toys,online,57.64,1,0.066,none,2024-03-18\r\n12055,1822,EMEA,grocery,retail,68.13,2,0.185,coupon,2024-03-02\r\n12056,2294,EMEA,electronics,online,14.87,1,0.065,none,2024-10-28\r\n12057,2346,LATAM,home,online,38.12,2,0.137,loyalty,2024-07-04\r\n12058,2363,AMER,fashion,online,55.71,7,0.218,none,2024-12-15\r\n12059,1932,EMEA,toys,retail,79.45,2,0.195,none,2024-02-25\r\n12060,1112,APAC,electronics,online,50.07,8,0.236,loyalty,2024-03-22\r\n12061,1035,EMEA,toys,partner,58.02,7,0.109,none,2024-04-17\r\n12062,2359,LATAM,
grocery,online,68.76,3,0.107,none,2024-04-24\r\n12063,1022,APAC,grocery,retail,31.49,7,0.098,none,2024-06-07\r\n12064,1893,APAC,toys,online,91.10,4,0.098,coupon,2024-10-08\r\n12065,2142,LATAM,electronics,retail,62.65,5,0.021,bundle,2024-12-10\r\n12066,1694,APAC,grocery,mobile,50.09,1,0.053,loyalty,2024-09-11\r\n12067,1098,APAC,grocery,online,75.52,4,0.002,loyalty,2024-05-27\r\n12068,1465,AMER,toys,retail,166.20,3,0.088,none,2024-05-11\r\n12069,1094,LATAM,grocery,online,39.90,1,0.111,none,2024-08-04\r\n12070,1514,LATAM,electronics,retail,44.68,3,0.067,coupon,2024-01-28\r\n12071,2151,APAC,electronics,retail,57.57,2,0.074,none,2024-09-01\r\n12072,2366,APAC,home,mobile,61.13,6,0.144,bundle,2024-02-22\r\n12073,2336,APAC,electronics,online,38.34,3,0.133,none,2024-02-07\r\n12074,2241,APAC,fashion,online,53.50,6,0.145,loyalty,2024-04-21\r\n12075,1655,LATAM,fashion,retail,61.36,6,0.137,bundle,2024-07-10\r\n12076,1624,AMER,home,retail,41.26,7,0.244,bundle,2024-06-13\r\n12077,2130,EMEA,home,online,43.39,6,0.212,none,2024-08-19\r\n12078,2483,LATAM,grocery,mobile,112.33,1,0.014,coupon,2024-09-27\r\n12079,2288,AMER,toys,retail,17.54,8,0.203,none,2024-12-23\r\n12080,1831,APAC,home,retail,86.30,4,0.082,none,2024-12-25\r\n12081,2396,AMER,home,online,66.43,6,0.094,none,2024-04-21\r\n12082,1828,EMEA,toys,online,94.87,6,0.212,bundle,2024-05-26\r\n12083,2182,AMER,sports,online,62.33,1,0.085,coupon,2024-02-25\r\n12084,2038,LATAM,sports,retail,140.80,2,0.146,coupon,2024-03-01\r\n12085,1473,LATAM,grocery,mobile,41.94,2,0.096,none,2024-11-13\r\n12086,1973,EMEA,home,partner,47.83,3,0.059,none,2024-05-24\r\n12087,2376,LATAM,electronics,retail,81.21,4,0.031,none,2024-01-04\r\n12088,2090,AMER,home,online,50.90,7,0.014,none,2024-01-13\r\n12089,1581,APAC,electronics,online,43.74,8,0.197,none,2024-01-27\r\n12090,2008,APAC,grocery,retail,35.75,2,0.146,loyalty,2024-02-14\r\n12091,2094,AMER,toys,retail,32.48,6,0.153,none,2024-12-10\r\n12092,1661,LATAM,electronics,retail,78.04,6,0.131,coupon,2024-02-2
2\r\n12093,1711,APAC,sports,online,49.03,1,0.044,loyalty,2024-08-08\r\n12094,1610,LATAM,home,online,86.49,8,0.162,none,2024-12-14\r\n12095,1011,APAC,grocery,online,50.59,5,0.104,none,2024-06-18\r\n12096,1728,AMER,home,online,60.91,3,0.108,coupon,2024-12-05\r\n12097,1480,APAC,electronics,retail,46.27,2,0.043,none,2024-01-25\r\n12098,2273,APAC,home,retail,74.60,6,0.239,coupon,2024-12-17\r\n12099,1406,LATAM,home,partner,70.41,3,0.158,none,2024-02-20\r\n12100,1404,EMEA,home,online,70.65,4,0.209,bundle,2024-06-01\r\n12101,1520,APAC,sports,mobile,59.46,5,0.122,none,2024-06-06\r\n12102,1070,EMEA,sports,retail,162.41,2,0.081,loyalty,2024-04-03\r\n12103,1937,APAC,grocery,online,97.33,1,0.239,none,2024-08-11\r\n12104,1993,APAC,home,retail,41.51,1,0.206,none,2024-12-19\r\n12105,1306,LATAM,grocery,retail,77.22,7,0.008,none,2024-06-24\r\n12106,1882,AMER,grocery,online,54.85,6,0.153,none,2024-08-26\r\n12107,1792,AMER,grocery,online,42.20,5,0.235,coupon,2024-08-22\r\n12108,1353,EMEA,home,online,28.85,3,0.078,loyalty,2024-05-04\r\n12109,1617,AMER,electronics,online,83.81,3,0.200,loyalty,2024-07-02\r\n12110,1166,AMER,fashion,online,136.23,7,0.091,bundle,2024-01-14\r\n12111,2234,LATAM,home,online,53.25,8,0.129,bundle,2024-12-15\r\n12112,1568,AMER,grocery,retail,104.84,3,0.083,coupon,2024-10-25\r\n12113,1486,LATAM,grocery,online,26.47,5,0.156,none,2024-03-02\r\n12114,1524,LATAM,home,mobile,25.56,3,0.229,coupon,2024-06-01\r\n12115,1684,EMEA,electronics,online,124.39,6,0.239,bundle,2024-07-08\r\n12116,1939,LATAM,grocery,retail,51.55,5,0.243,none,2024-07-21\r\n12117,2359,LATAM,electronics,online,81.09,7,0.096,none,2024-06-09\r\n12118,1198,AMER,sports,retail,31.67,2,0.040,coupon,2024-08-05\r\n12119,2440,APAC,sports,retail,74.57,2,0.173,none,2024-12-12\r\n12120,1853,APAC,sports,online,90.96,8,0.081,none,2024-06-21\r\n12121,1435,AMER,fashion,mobile,63.85,3,0.082,loyalty,2024-04-03\r\n12122,1116,LATAM,electronics,retail,47.32,5,0.004,none,2024-05-05\r\n12123,2127,LATAM,toys,retail,42.58,2,0.
058,none,2024-01-17\r\n12124,2126,APAC,grocery,retail,43.19,8,0.225,none,2024-04-04\r\n12125,1569,APAC,sports,retail,120.19,5,0.173,loyalty,2024-06-12\r\n12126,1508,LATAM,electronics,retail,51.63,4,0.047,bundle,2024-07-12\r\n12127,1115,AMER,sports,retail,33.08,2,0.108,none,2024-10-16\r\n12128,1219,LATAM,home,retail,19.83,4,0.112,none,2024-02-15\r\n12129,2253,AMER,home,online,72.04,6,0.077,none,2024-10-01\r\n12130,2398,EMEA,home,online,109.82,5,0.018,none,2024-02-13\r\n12131,1619,APAC,grocery,mobile,80.08,6,0.243,none,2024-08-22\r\n12132,1749,LATAM,grocery,retail,74.81,8,0.199,none,2024-08-28\r\n12133,1861,AMER,electronics,retail,54.62,4,0.156,bundle,2024-09-17\r\n12134,1518,AMER,grocery,retail,87.71,5,0.036,coupon,2024-03-21\r\n12135,1592,LATAM,electronics,mobile,38.92,4,0.237,bundle,2024-11-18\r\n12136,1958,APAC,electronics,mobile,65.97,4,0.192,none,2024-09-21\r\n12137,1384,LATAM,fashion,online,107.54,3,0.220,loyalty,2024-10-19\r\n12138,1612,LATAM,grocery,online,74.05,7,0.011,none,2024-12-14\r\n12139,1854,AMER,sports,partner,46.13,2,0.234,loyalty,2024-12-23\r\n12140,1641,EMEA,fashion,retail,60.51,1,0.122,bundle,2024-09-12\r\n12141,2338,AMER,grocery,online,40.29,4,0.202,none,2024-03-09\r\n12142,2480,APAC,sports,online,46.69,1,0.112,coupon,2024-06-10\r\n12143,1530,APAC,sports,online,40.11,4,0.134,loyalty,2024-03-04\r\n12144,1774,EMEA,fashion,online,24.95,8,0.036,coupon,2024-05-24\r\n12145,1073,AMER,fashion,online,47.45,8,0.029,coupon,2024-01-24\r\n12146,2145,AMER,electronics,partner,85.53,7,0.113,none,2024-12-03\r\n12147,1307,AMER,home,partner,32.86,8,0.139,bundle,2024-01-17\r\n12148,1945,AMER,fashion,online,64.33,1,0.131,coupon,2024-01-06\r\n12149,2423,LATAM,home,retail,100.80,1,0.046,none,2024-01-04\r\n12150,1321,EMEA,home,mobile,34.30,1,0.184,none,2024-01-08\r\n12151,1886,LATAM,electronics,mobile,22.45,2,0.016,none,2024-09-15\r\n12152,1273,AMER,toys,online,318.37,4,0.118,coupon,2024-03-21\r\n12153,1464,APAC,home,retail,56.88,8,0.149,coupon,2024-06-23\r\n12154,1125
,LATAM,sports,retail,29.28,1,0.025,none,2024-08-18\r\n12155,1415,AMER,home,online,98.82,7,0.098,none,2024-09-16\r\n12156,2386,EMEA,home,retail,81.09,4,0.078,none,2024-12-21\r\n12157,2403,LATAM,home,online,34.31,6,0.149,coupon,2024-08-19\r\n12158,1449,EMEA,grocery,retail,69.22,5,0.044,coupon,2024-12-06\r\n12159,1255,AMER,home,partner,89.87,1,0.119,none,2024-06-24\r\n12160,1972,LATAM,home,online,97.03,3,0.138,none,2024-05-01\r\n12161,2237,EMEA,electronics,online,56.43,2,0.189,none,2024-07-04\r\n12162,1393,LATAM,toys,online,37.95,5,0.132,bundle,2024-06-12\r\n12163,2375,AMER,home,online,26.66,4,0.192,bundle,2024-02-24\r\n12164,2364,APAC,grocery,retail,73.42,4,0.144,none,2024-07-07\r\n12165,1222,AMER,electronics,online,31.79,8,0.172,coupon,2024-11-10\r\n12166,1349,APAC,grocery,retail,89.71,7,0.222,bundle,2024-12-11\r\n12167,2334,LATAM,toys,online,33.27,5,0.150,coupon,2024-04-23\r\n12168,2154,APAC,grocery,retail,29.23,3,0.105,none,2024-10-23\r\n12169,1810,LATAM,fashion,online,154.91,3,0.167,coupon,2024-04-23\r\n12170,2026,LATAM,home,online,35.32,1,0.020,bundle,2024-02-04\r\n12171,1372,APAC,electronics,online,35.19,3,0.165,none,2024-04-02\r\n12172,2093,LATAM,fashion,online,34.30,5,0.093,none,2024-12-09\r\n12173,2095,EMEA,fashion,partner,49.45,8,0.160,none,2024-02-06\r\n12174,1831,APAC,grocery,online,33.26,2,0.194,none,2024-06-12\r\n12175,1064,AMER,electronics,retail,86.44,7,0.245,none,2024-02-24\r\n12176,1299,LATAM,fashion,online,45.20,3,0.027,coupon,2024-06-19\r\n12177,1468,AMER,electronics,online,41.30,6,0.045,none,2024-11-24\r\n12178,1007,APAC,grocery,retail,148.15,2,0.029,none,2024-03-13\r\n12179,1746,LATAM,grocery,online,87.10,2,0.069,loyalty,2024-11-03\r\n12180,2480,APAC,home,retail,100.98,5,0.103,none,2024-10-15\r\n12181,1463,EMEA,fashion,online,42.93,3,0.206,coupon,2024-11-03\r\n12182,1932,EMEA,electronics,online,173.66,4,0.140,none,2024-05-10\r\n12183,1970,LATAM,electronics,online,86.04,6,0.038,loyalty,2024-06-06\r\n12184,1108,EMEA,toys,retail,37.93,4,0.071,coupon
,2024-04-04\r\n12185,2349,APAC,home,retail,82.71,2,0.141,none,2024-07-11\r\n12186,1987,AMER,electronics,mobile,32.86,4,0.093,coupon,2024-01-24\r\n12187,2352,APAC,fashion,online,20.04,5,0.092,none,2024-08-28\r\n12188,2494,AMER,electronics,online,63.24,8,0.157,bundle,2024-11-09\r\n12189,1135,APAC,grocery,retail,77.38,5,0.068,coupon,2024-11-18\r\n12190,1870,EMEA,home,retail,40.32,7,0.127,coupon,2024-09-09\r\n12191,1512,APAC,grocery,online,58.88,7,0.031,loyalty,2024-06-09\r\n12192,1130,LATAM,electronics,retail,110.36,7,0.081,none,2024-10-06\r\n12193,1716,LATAM,electronics,retail,37.37,6,0.021,none,2024-06-14\r\n12194,1216,APAC,electronics,mobile,143.99,4,0.221,none,2024-10-04\r\n12195,1836,LATAM,grocery,online,53.05,6,0.143,none,2024-04-06\r\n12196,2457,EMEA,toys,retail,88.77,1,0.128,none,2024-03-06\r\n12197,1817,APAC,home,retail,24.34,7,0.232,coupon,2024-08-21\r\n12198,1139,EMEA,grocery,mobile,79.17,3,0.074,bundle,2024-10-15\r\n12199,2091,LATAM,home,mobile,39.68,2,0.163,bundle,2024-02-11\r\n12200,1598,EMEA,home,online,47.04,8,0.189,coupon,2024-06-07\r\n12201,1437,EMEA,toys,online,95.15,2,0.050,none,2024-01-07\r\n12202,1844,APAC,electronics,retail,59.13,2,0.024,bundle,2024-10-06\r\n12203,1405,LATAM,fashion,retail,50.19,5,0.141,loyalty,2024-11-02\r\n12204,1466,AMER,electronics,online,151.41,3,0.085,coupon,2024-10-27\r\n12205,1947,EMEA,electronics,online,123.55,5,0.249,loyalty,2024-02-09\r\n12206,2379,AMER,electronics,online,46.11,7,0.044,coupon,2024-11-05\r\n12207,1996,APAC,home,retail,85.61,4,0.247,bundle,2024-03-14\r\n12208,1042,LATAM,home,online,125.62,5,0.090,none,2024-07-03\r\n12209,1291,EMEA,home,retail,46.23,4,0.249,coupon,2024-06-19\r\n12210,1360,APAC,home,retail,47.64,8,0.108,bundle,2024-01-21\r\n12211,1709,EMEA,home,retail,58.24,2,0.169,none,2024-06-28\r\n12212,1554,AMER,fashion,retail,88.61,6,0.166,loyalty,2024-03-14\r\n12213,2000,APAC,grocery,retail,80.80,3,0.012,coupon,2024-04-26\r\n12214,2364,APAC,sports,retail,25.68,3,0.161,loyalty,2024-07-06\r\n12215,1098
,APAC,toys,online,55.53,7,0.019,none,2024-04-06\r\n12216,2011,AMER,grocery,online,56.27,6,0.008,none,2024-11-06\r\n12217,1682,EMEA,electronics,online,56.52,1,0.059,coupon,2024-09-04\r\n12218,1377,APAC,grocery,retail,32.58,8,0.040,loyalty,2024-05-05\r\n12219,2080,LATAM,grocery,retail,146.07,2,0.123,none,2024-05-03\r\n12220,1625,EMEA,grocery,retail,129.52,3,0.126,bundle,2024-04-14\r\n12221,1182,EMEA,grocery,online,50.08,8,0.104,none,2024-11-27\r\n12222,2216,AMER,home,online,31.61,1,0.074,bundle,2024-05-14\r\n12223,1935,EMEA,home,mobile,72.36,4,0.142,loyalty,2024-02-02\r\n12224,1039,AMER,electronics,online,67.39,5,0.241,none,2024-02-28\r\n12225,1060,LATAM,grocery,online,68.30,2,0.068,none,2024-04-20\r\n12226,1612,LATAM,electronics,retail,53.47,1,0.122,none,2024-07-02\r\n12227,2270,APAC,sports,online,45.14,2,0.227,none,2024-07-22\r\n12228,1121,EMEA,toys,online,77.62,1,0.090,bundle,2024-03-01\r\n12229,1636,APAC,home,online,111.04,8,0.227,bundle,2024-01-03\r\n12230,2457,EMEA,home,online,60.20,2,0.243,bundle,2024-09-05\r\n12231,1776,APAC,sports,mobile,22.69,4,0.102,bundle,2024-04-04\r\n12232,1262,APAC,grocery,retail,73.62,5,0.153,none,2024-05-23\r\n12233,1933,EMEA,home,mobile,60.05,3,0.226,none,2024-01-27\r\n12234,1351,APAC,home,online,66.29,1,0.078,loyalty,2024-01-21\r\n12235,2105,APAC,home,online,88.19,5,0.207,coupon,2024-04-26\r\n12236,2134,AMER,home,retail,53.07,4,0.115,bundle,2024-12-15\r\n12237,1814,AMER,home,online,39.31,8,0.218,loyalty,2024-03-23\r\n12238,2058,LATAM,electronics,online,86.54,3,0.060,bundle,2024-07-17\r\n12239,1854,AMER,sports,retail,82.81,6,0.185,coupon,2024-05-20\r\n12240,2479,EMEA,electronics,retail,41.81,5,0.199,bundle,2024-11-06\r\n12241,2350,APAC,sports,retail,37.99,4,0.036,none,2024-07-12\r\n12242,1615,LATAM,fashion,retail,33.28,6,0.011,none,2024-12-24\r\n12243,1939,LATAM,grocery,online,28.32,6,0.063,loyalty,2024-01-09\r\n12244,1914,EMEA,grocery,retail,50.73,6,0.003,coupon,2024-06-06\r\n12245,1259,EMEA,home,online,33.38,2,0.192,none,2024-01-17
\r\n12246,1444,EMEA,sports,retail,97.93,1,0.022,none,2024-08-18\r\n12247,2349,APAC,sports,online,104.34,7,0.103,bundle,2024-09-04\r\n12248,1756,EMEA,fashion,online,77.37,2,0.151,none,2024-05-25\r\n12249,1912,APAC,grocery,online,39.98,5,0.188,coupon,2024-10-04\r\n12250,2145,AMER,home,retail,40.75,2,0.151,none,2024-02-05\r\n12251,1413,LATAM,electronics,retail,40.95,7,0.157,none,2024-02-05\r\n12252,1916,AMER,grocery,online,74.01,3,0.190,none,2024-09-18\r\n12253,2301,EMEA,grocery,mobile,38.45,7,0.222,bundle,2024-11-13\r\n12254,2497,AMER,electronics,online,46.28,6,0.077,none,2024-04-25\r\n12255,1232,LATAM,electronics,online,20.24,5,0.072,none,2024-04-26\r\n12256,2441,EMEA,grocery,retail,32.70,8,0.227,bundle,2024-08-17\r\n12257,2441,EMEA,electronics,retail,31.77,8,0.147,coupon,2024-05-20\r\n12258,1059,AMER,grocery,mobile,118.12,2,0.072,none,2024-03-20\r\n12259,1438,APAC,home,retail,64.02,2,0.065,coupon,2024-06-04\r\n12260,1921,LATAM,fashion,mobile,42.73,4,0.049,bundle,2024-09-28\r\n12261,1912,APAC,grocery,mobile,32.50,8,0.112,none,2024-08-01\r\n12262,2427,LATAM,grocery,online,63.07,1,0.192,coupon,2024-07-23\r\n12263,2350,APAC,grocery,online,94.30,4,0.057,bundle,2024-10-28\r\n12264,1362,AMER,electronics,mobile,36.06,5,0.016,none,2024-10-08\r\n12265,1984,LATAM,fashion,retail,54.22,5,0.145,coupon,2024-03-27\r\n12266,2130,EMEA,fashion,online,74.19,1,0.231,loyalty,2024-08-08\r\n12267,1605,APAC,grocery,online,49.88,5,0.084,loyalty,2024-10-15\r\n12268,2462,EMEA,fashion,online,61.51,6,0.033,none,2024-07-24\r\n12269,1817,APAC,toys,online,22.45,2,0.141,none,2024-11-23\r\n12270,2453,AMER,grocery,retail,54.44,6,0.079,none,2024-09-14\r\n12271,1180,AMER,electronics,partner,54.68,6,0.121,none,2024-07-19\r\n12272,1321,EMEA,home,mobile,29.77,6,0.089,loyalty,2024-08-04\r\n12273,1049,AMER,toys,retail,32.83,2,0.013,coupon,2024-03-16\r\n12274,2049,LATAM,fashion,retail,25.19,5,0.152,none,2024-02-02\r\n12275,1031,AMER,toys,mobile,82.98,1,0.028,none,2024-05-16\r\n12276,1444,EMEA,home,online,40.3
6,5,0.246,none,2024-06-03\r\n12277,2092,AMER,grocery,retail,35.31,3,0.236,none,2024-04-20\r\n12278,1339,EMEA,grocery,retail,33.06,6,0.240,loyalty,2024-03-27\r\n12279,2115,APAC,fashion,retail,40.78,4,0.138,coupon,2024-10-01\r\n12280,1906,APAC,sports,retail,46.98,1,0.092,none,2024-12-28\r\n12281,1483,EMEA,fashion,online,54.76,3,0.127,none,2024-01-27\r\n12282,2031,AMER,sports,retail,29.12,7,0.020,coupon,2024-01-09\r\n12283,1162,AMER,fashion,online,113.97,7,0.221,bundle,2024-11-17\r\n12284,2365,LATAM,toys,partner,32.13,8,0.222,loyalty,2024-01-05\r\n12285,1822,EMEA,fashion,online,98.41,1,0.044,bundle,2024-03-12\r\n12286,1518,AMER,home,online,49.07,3,0.008,none,2024-05-28\r\n12287,1975,EMEA,fashion,online,60.54,2,0.156,loyalty,2024-08-21\r\n12288,1549,APAC,electronics,online,43.38,2,0.065,none,2024-02-22\r\n12289,1303,LATAM,sports,partner,38.13,7,0.134,none,2024-01-23\r\n12290,1648,APAC,toys,retail,51.33,4,0.192,none,2024-11-04\r\n12291,1244,LATAM,home,mobile,40.50,3,0.167,none,2024-04-05\r\n12292,1971,EMEA,fashion,retail,31.52,4,0.085,none,2024-06-12\r\n12293,1363,EMEA,sports,online,24.73,8,0.053,none,2024-09-26\r\n12294,1769,LATAM,fashion,online,113.68,7,0.099,none,2024-12-09\r\n12295,1684,EMEA,fashion,online,68.92,1,0.249,none,2024-06-25\r\n12296,2427,LATAM,grocery,retail,135.41,3,0.247,coupon,2024-04-12\r\n12297,1368,EMEA,home,online,78.18,8,0.204,none,2024-02-22\r\n12298,1474,LATAM,grocery,online,52.09,5,0.188,coupon,2024-10-24\r\n12299,2049,LATAM,fashion,retail,34.64,1,0.144,none,2024-06-19\r\n12300,2310,EMEA,electronics,retail,83.57,5,0.128,none,2024-12-28\r\n12301,1819,AMER,fashion,online,105.40,3,0.018,coupon,2024-10-01\r\n12302,1333,EMEA,toys,mobile,50.62,2,0.229,none,2024-06-06\r\n12303,1143,LATAM,toys,online,39.10,2,0.187,none,2024-02-26\r\n12304,2161,LATAM,grocery,mobile,81.93,1,0.034,none,2024-05-26\r\n12305,2226,EMEA,home,partner,63.68,1,0.121,coupon,2024-10-27\r\n12306,1807,EMEA,grocery,online,58.98,3,0.207,bundle,2024-06-01\r\n12307,1110,LATAM,home,retail
,47.25,3,0.206,none,2024-03-17\r\n12308,1379,EMEA,toys,online,40.89,7,0.078,none,2024-10-16\r\n12309,1194,APAC,electronics,retail,24.02,7,0.009,none,2024-08-18\r\n12310,1657,LATAM,sports,partner,39.06,7,0.094,loyalty,2024-11-02\r\n12311,1972,LATAM,home,retail,41.59,5,0.121,coupon,2024-04-04\r\n12312,2372,AMER,sports,online,99.39,1,0.061,none,2024-10-08\r\n12313,2043,EMEA,home,mobile,41.18,2,0.107,none,2024-12-13\r\n12314,1135,APAC,electronics,retail,43.87,6,0.167,bundle,2024-08-19\r\n12315,1197,LATAM,grocery,online,24.91,3,0.131,none,2024-07-24\r\n12316,2172,EMEA,grocery,online,62.09,7,0.030,none,2024-08-22\r\n12317,1365,LATAM,sports,retail,98.93,3,0.115,bundle,2024-01-02\r\n12318,2325,LATAM,fashion,online,135.20,5,0.143,bundle,2024-07-24\r\n12319,1172,APAC,fashion,retail,131.58,6,0.228,loyalty,2024-01-26\r\n12320,1470,LATAM,sports,online,45.67,8,0.226,coupon,2024-11-17\r\n12321,1885,EMEA,grocery,online,43.09,8,0.182,none,2024-08-02\r\n12322,1733,LATAM,home,mobile,76.55,2,0.143,none,2024-08-12\r\n12323,1420,APAC,grocery,online,113.41,8,0.153,none,2024-05-22\r\n12324,2495,EMEA,fashion,online,43.04,2,0.113,loyalty,2024-06-13\r\n12325,1337,APAC,home,online,93.08,3,0.079,none,2024-10-24\r\n12326,1371,AMER,home,retail,56.29,6,0.023,none,2024-04-18\r\n12327,2201,AMER,electronics,retail,74.86,7,0.070,none,2024-08-15\r\n12328,1432,APAC,toys,online,145.73,5,0.003,none,2024-02-27\r\n12329,1092,AMER,fashion,mobile,45.74,6,0.070,none,2024-03-08\r\n12330,1177,LATAM,sports,retail,26.41,4,0.083,none,2024-10-06\r\n12331,1275,EMEA,fashion,mobile,47.86,5,0.108,bundle,2024-04-21\r\n12332,1288,LATAM,sports,online,55.28,2,0.205,none,2024-10-23\r\n12333,2470,EMEA,fashion,partner,32.20,6,0.242,coupon,2024-11-17\r\n12334,1335,APAC,toys,retail,140.52,2,0.207,bundle,2024-10-11\r\n12335,1080,LATAM,grocery,mobile,62.96,4,0.249,none,2024-05-22\r\n12336,1393,LATAM,toys,mobile,71.25,8,0.033,none,2024-01-27\r\n12337,1841,AMER,home,retail,77.14,2,0.050,none,2024-05-09\r\n12338,1746,LATAM,electronic
s,online,106.76,7,0.134,none,2024-12-25\r\n12339,1494,AMER,electronics,partner,99.55,6,0.174,loyalty,2024-03-14\r\n12340,2399,LATAM,grocery,mobile,58.33,5,0.189,none,2024-05-05\r\n12341,1368,EMEA,home,online,85.55,5,0.095,none,2024-04-15\r\n12342,1014,EMEA,sports,online,123.85,7,0.065,none,2024-03-02\r\n12343,1996,APAC,electronics,online,96.88,2,0.169,none,2024-04-02\r\n12344,2285,APAC,home,retail,82.01,8,0.204,none,2024-11-13\r\n12345,1503,APAC,home,online,57.14,4,0.083,none,2024-09-26\r\n12346,2037,LATAM,grocery,online,41.14,5,0.182,none,2024-06-16\r\n12347,1906,APAC,home,online,78.16,7,0.156,bundle,2024-05-28\r\n12348,1964,EMEA,grocery,online,28.16,7,0.224,none,2024-09-05\r\n12349,2200,LATAM,home,retail,58.22,6,0.172,none,2024-01-08\r\n12350,1483,EMEA,electronics,partner,20.49,8,0.082,none,2024-02-05\r\n12351,2268,EMEA,fashion,retail,99.61,8,0.203,none,2024-04-17\r\n12352,1698,EMEA,fashion,online,78.00,3,0.189,none,2024-01-07\r\n12353,2350,APAC,grocery,partner,60.59,7,0.168,none,2024-11-18\r\n12354,2319,AMER,fashion,retail,149.15,8,0.107,coupon,2024-06-07\r\n12355,1592,LATAM,grocery,mobile,50.59,3,0.035,none,2024-06-07\r\n12356,1887,LATAM,home,partner,46.91,7,0.048,bundle,2024-11-27\r\n12357,1215,LATAM,grocery,mobile,54.79,7,0.012,none,2024-03-10\r\n12358,1487,AMER,grocery,mobile,70.62,4,0.139,loyalty,2024-03-15\r\n12359,1681,LATAM,grocery,online,80.54,2,0.157,none,2024-06-27\r\n12360,2222,LATAM,electronics,retail,58.81,1,0.016,none,2024-08-01\r\n12361,1584,EMEA,home,retail,31.63,6,0.055,coupon,2024-05-14\r\n12362,1000,APAC,grocery,mobile,68.04,1,0.235,loyalty,2024-04-07\r\n12363,2475,AMER,fashion,retail,59.74,4,0.047,none,2024-02-20\r\n12364,1093,APAC,home,online,17.42,3,0.036,none,2024-09-28\r\n12365,1706,EMEA,fashion,retail,101.13,8,0.121,none,2024-09-07\r\n12366,1558,EMEA,electronics,online,67.53,5,0.149,none,2024-01-06\r\n12367,1268,EMEA,grocery,online,29.80,1,0.187,none,2024-04-26\r\n12368,1592,LATAM,electronics,online,94.30,2,0.077,none,2024-10-26\r\n12369
,1724,LATAM,toys,mobile,76.04,1,0.062,coupon,2024-01-16\r\n12370,1541,APAC,grocery,online,53.29,7,0.059,bundle,2024-09-16\r\n12371,1586,LATAM,electronics,online,31.44,1,0.070,coupon,2024-10-14\r\n12372,2430,APAC,sports,retail,184.54,5,0.023,none,2024-01-13\r\n12373,1455,APAC,home,online,43.00,8,0.115,bundle,2024-08-07\r\n12374,2483,LATAM,home,retail,54.88,3,0.037,bundle,2024-11-08\r\n12375,2243,APAC,sports,retail,83.00,1,0.219,none,2024-09-16\r\n12376,1471,EMEA,fashion,retail,47.55,5,0.151,loyalty,2024-07-15\r\n12377,1406,LATAM,electronics,retail,43.00,6,0.122,none,2024-03-28\r\n12378,2315,LATAM,home,mobile,90.88,2,0.101,none,2024-07-26\r\n12379,1169,LATAM,electronics,online,77.09,6,0.079,none,2024-10-20\r\n12380,1422,LATAM,fashion,online,26.79,8,0.024,none,2024-04-05\r\n12381,1758,AMER,electronics,online,49.43,8,0.125,loyalty,2024-09-27\r\n12382,1055,AMER,electronics,online,87.22,8,0.101,bundle,2024-11-23\r\n12383,1817,APAC,fashion,retail,41.11,4,0.048,none,2024-06-25\r\n12384,2126,APAC,electronics,retail,25.49,4,0.165,coupon,2024-09-18\r\n12385,2027,EMEA,fashion,mobile,58.80,8,0.100,coupon,2024-04-06\r\n12386,2031,AMER,grocery,retail,60.10,8,0.041,loyalty,2024-06-27\r\n12387,1302,LATAM,sports,retail,31.51,7,0.192,none,2024-04-25\r\n12388,1921,LATAM,grocery,retail,52.23,1,0.119,none,2024-12-22\r\n12389,2396,AMER,home,retail,50.45,7,0.017,coupon,2024-08-18\r\n12390,1331,AMER,home,online,45.73,1,0.015,none,2024-12-10\r\n12391,1880,LATAM,home,online,45.82,8,0.146,none,2024-12-11\r\n12392,2297,EMEA,grocery,partner,24.97,4,0.080,none,2024-05-21\r\n12393,1777,AMER,sports,online,43.62,3,0.238,none,2024-07-06\r\n12394,2426,AMER,electronics,online,55.55,4,0.055,coupon,2024-04-23\r\n12395,2314,EMEA,home,online,33.27,7,0.013,coupon,2024-07-24\r\n12396,2316,EMEA,grocery,online,46.24,5,0.121,none,2024-07-11\r\n12397,1021,AMER,fashion,retail,62.81,6,0.065,none,2024-04-27\r\n12398,1845,AMER,home,online,115.52,6,0.230,none,2024-05-15\r\n12399,2362,AMER,grocery,online,64.22,7,0.174
,bundle,2024-01-17\r\n12400,1329,APAC,sports,partner,80.54,5,0.191,bundle,2024-06-21\r\n12401,1590,APAC,fashion,retail,51.76,4,0.162,coupon,2024-01-19\r\n12402,1725,APAC,home,online,98.36,3,0.001,loyalty,2024-04-23\r\n12403,1307,AMER,grocery,online,33.90,1,0.222,none,2024-05-07\r\n12404,1211,EMEA,electronics,retail,45.26,7,0.239,loyalty,2024-03-08\r\n12405,2458,EMEA,fashion,retail,50.33,3,0.222,bundle,2024-12-23\r\n12406,1056,LATAM,home,online,44.91,6,0.200,coupon,2024-07-26\r\n12407,1581,APAC,home,online,59.35,8,0.184,none,2024-07-04\r\n12408,2252,EMEA,home,online,72.76,4,0.096,none,2024-08-27\r\n12409,1331,AMER,home,online,73.61,5,0.081,bundle,2024-02-03\r\n12410,2295,EMEA,home,retail,56.70,6,0.116,none,2024-09-23\r\n12411,1347,APAC,toys,mobile,55.38,8,0.089,coupon,2024-08-04\r\n12412,1264,APAC,fashion,online,36.50,7,0.182,coupon,2024-02-20\r\n12413,1749,LATAM,grocery,retail,39.51,6,0.175,none,2024-07-23\r\n12414,1394,LATAM,electronics,online,72.94,5,0.038,none,2024-03-28\r\n12415,1636,APAC,sports,online,81.32,8,0.096,none,2024-05-28\r\n12416,2223,EMEA,toys,retail,128.35,5,0.017,coupon,2024-05-27\r\n12417,1586,LATAM,toys,retail,45.96,7,0.185,bundle,2024-09-01\r\n12418,1691,LATAM,grocery,online,47.47,2,0.118,coupon,2024-09-13\r\n12419,1475,LATAM,home,online,49.29,8,0.122,coupon,2024-01-18\r\n12420,1609,LATAM,fashion,online,24.40,8,0.060,none,2024-10-23\r\n12421,2194,APAC,fashion,online,42.58,7,0.222,none,2024-01-27\r\n12422,1284,APAC,electronics,retail,89.07,8,0.247,none,2024-11-25\r\n12423,2152,EMEA,home,retail,37.48,7,0.033,bundle,2024-05-09\r\n12424,1565,AMER,fashion,online,68.81,4,0.130,none,2024-09-28\r\n12425,1493,APAC,electronics,mobile,44.44,1,0.230,coupon,2024-07-28\r\n12426,1055,AMER,fashion,mobile,100.40,5,0.224,none,2024-12-08\r\n12427,2086,APAC,grocery,retail,191.64,1,0.032,bundle,2024-12-23\r\n12428,1422,LATAM,sports,online,84.39,3,0.210,none,2024-11-17\r\n12429,1136,EMEA,grocery,mobile,68.74,3,0.116,bundle,2024-09-24\r\n12430,1470,LATAM,home,retail,1
02.54,7,0.136,coupon,2024-10-01\r\n12431,2089,EMEA,fashion,mobile,50.29,4,0.051,none,2024-04-20\r\n12432,1951,LATAM,sports,mobile,65.95,2,0.093,none,2024-02-17\r\n12433,1579,AMER,toys,partner,88.45,8,0.078,bundle,2024-01-10\r\n12434,1059,AMER,grocery,mobile,68.21,5,0.026,none,2024-11-09\r\n12435,1609,LATAM,fashion,online,51.04,5,0.131,coupon,2024-04-19\r\n12436,1373,LATAM,electronics,partner,25.88,8,0.202,none,2024-12-13\r\n12437,2277,EMEA,grocery,retail,32.32,5,0.024,bundle,2024-12-07\r\n12438,1469,EMEA,fashion,retail,72.41,8,0.042,coupon,2024-03-17\r\n12439,1174,APAC,home,online,59.04,5,0.021,coupon,2024-07-14\r\n12440,1761,EMEA,grocery,retail,42.10,8,0.157,none,2024-01-07\r\n12441,1970,LATAM,grocery,retail,57.84,8,0.115,loyalty,2024-03-26\r\n12442,2108,AMER,grocery,online,47.98,4,0.195,none,2024-04-09\r\n12443,2355,EMEA,home,mobile,146.31,3,0.051,loyalty,2024-07-07\r\n12444,1792,AMER,toys,online,67.49,4,0.151,none,2024-02-22\r\n12445,1445,APAC,grocery,mobile,21.82,8,0.161,none,2024-11-24\r\n12446,1096,EMEA,fashion,online,83.08,8,0.162,none,2024-10-15\r\n12447,1834,AMER,sports,mobile,34.86,1,0.185,none,2024-08-20\r\n12448,1588,LATAM,electronics,retail,152.74,8,0.120,coupon,2024-12-20\r\n12449,1068,APAC,sports,retail,75.04,3,0.248,none,2024-04-11\r\n12450,1978,AMER,fashion,retail,26.07,4,0.144,none,2024-12-23\r\n12451,1926,AMER,electronics,mobile,63.00,5,0.187,none,2024-06-23\r\n12452,2283,AMER,sports,retail,55.69,1,0.086,none,2024-02-25\r\n12453,1344,EMEA,toys,retail,35.93,1,0.021,coupon,2024-05-16\r\n12454,1347,APAC,grocery,online,86.64,4,0.134,none,2024-10-04\r\n12455,1084,AMER,toys,retail,61.30,8,0.052,loyalty,2024-06-13\r\n12456,2061,EMEA,grocery,retail,87.44,3,0.129,none,2024-03-26\r\n12457,1063,AMER,grocery,online,43.11,3,0.237,coupon,2024-04-02\r\n12458,2158,APAC,electronics,retail,46.57,1,0.241,coupon,2024-07-28\r\n12459,1884,APAC,grocery,retail,69.86,5,0.164,coupon,2024-10-22\r\n12460,1782,LATAM,fashion,retail,16.78,2,0.235,none,2024-04-05\r\n12461,1957,A
MER,sports,online,63.14,6,0.231,none,2024-07-20\r\n12462,2284,EMEA,grocery,retail,67.80,7,0.239,none,2024-02-24\r\n12463,2463,AMER,fashion,online,265.84,1,0.220,loyalty,2024-05-21\r\n12464,1681,LATAM,grocery,online,55.84,8,0.055,none,2024-07-18\r\n12465,1068,APAC,sports,online,77.24,1,0.192,none,2024-07-07\r\n12466,1498,LATAM,grocery,online,50.92,2,0.145,coupon,2024-05-01\r\n12467,1636,APAC,grocery,online,89.91,5,0.209,bundle,2024-02-10\r\n12468,1491,EMEA,fashion,retail,56.01,3,0.069,none,2024-07-21\r\n12469,1471,EMEA,home,online,55.22,8,0.012,coupon,2024-03-04\r\n12470,2310,EMEA,home,online,75.07,4,0.031,none,2024-04-12\r\n12471,2432,AMER,grocery,online,73.76,8,0.001,none,2024-10-18\r\n12472,1295,EMEA,grocery,online,27.97,1,0.173,none,2024-10-24\r\n12473,1850,APAC,grocery,mobile,26.30,8,0.141,none,2024-10-08\r\n12474,1824,LATAM,electronics,mobile,94.41,6,0.179,none,2024-11-26\r\n12475,1728,AMER,fashion,mobile,65.63,4,0.044,none,2024-03-04\r\n12476,1214,EMEA,home,retail,12.94,1,0.227,bundle,2024-09-27\r\n12477,1813,EMEA,grocery,retail,45.24,8,0.080,none,2024-07-23\r\n12478,1851,EMEA,fashion,online,46.94,7,0.129,none,2024-10-28\r\n12479,2213,APAC,grocery,retail,25.15,5,0.183,coupon,2024-09-26\r\n12480,1801,LATAM,home,retail,98.29,6,0.072,none,2024-04-25\r\n12481,1917,LATAM,electronics,online,49.51,3,0.123,coupon,2024-04-23\r\n12482,1162,AMER,grocery,online,34.45,7,0.239,none,2024-06-20\r\n12483,1710,APAC,electronics,online,148.36,2,0.085,bundle,2024-07-18\r\n12484,2450,EMEA,fashion,online,57.23,3,0.040,loyalty,2024-12-20\r\n12485,2429,EMEA,fashion,online,50.13,2,0.232,loyalty,2024-12-14\r\n12486,1751,AMER,grocery,retail,51.04,1,0.246,none,2024-08-09\r\n12487,1646,APAC,sports,online,55.24,8,0.134,none,2024-05-24\r\n12488,1959,EMEA,grocery,online,32.09,3,0.220,none,2024-09-07\r\n12489,1661,LATAM,home,retail,47.18,1,0.091,coupon,2024-11-11\r\n12490,1796,LATAM,home,mobile,55.21,2,0.213,none,2024-06-05\r\n12491,1878,EMEA,electronics,online,42.80,5,0.216,bundle,2024-05-25\
r\n12492,1451,EMEA,grocery,mobile,52.65,6,0.138,none,2024-01-02\r\n12493,1380,AMER,electronics,online,63.84,8,0.042,loyalty,2024-11-15\r\n12494,1302,LATAM,fashion,retail,66.82,1,0.047,none,2024-02-21\r\n12495,2337,AMER,home,retail,46.79,8,0.119,none,2024-06-16\r\n12496,2397,LATAM,electronics,retail,39.96,5,0.135,loyalty,2024-08-04\r\n12497,2296,AMER,toys,mobile,62.40,3,0.225,coupon,2024-07-06\r\n12498,1953,EMEA,electronics,mobile,116.92,4,0.238,bundle,2024-08-25\r\n12499,2301,EMEA,toys,online,75.43,3,0.106,coupon,2024-11-18\r\n12500,1047,APAC,grocery,retail,46.38,2,0.108,none,2024-07-16\r\n12501,2028,APAC,toys,mobile,48.97,4,0.043,none,2024-10-21\r\n12502,1440,AMER,home,online,85.52,2,0.008,none,2024-03-28\r\n12503,2384,LATAM,sports,retail,71.91,1,0.191,none,2024-08-21\r\n12504,1799,EMEA,fashion,retail,52.81,4,0.135,loyalty,2024-09-02\r\n12505,1132,EMEA,electronics,online,51.36,5,0.015,coupon,2024-11-17\r\n12506,2460,AMER,home,online,52.71,7,0.101,none,2024-03-26\r\n12507,2342,AMER,grocery,retail,40.67,3,0.054,coupon,2024-09-01\r\n12508,1066,AMER,fashion,retail,52.49,3,0.082,none,2024-07-15\r\n12509,2125,LATAM,grocery,retail,45.18,8,0.242,none,2024-11-17\r\n12510,1076,LATAM,grocery,partner,66.42,5,0.039,none,2024-08-19\r\n12511,1459,LATAM,home,online,58.50,5,0.184,none,2024-04-15\r\n12512,2390,AMER,home,retail,27.02,7,0.216,none,2024-01-23\r\n12513,1011,APAC,home,partner,25.33,6,0.096,none,2024-12-02\r\n12514,2353,AMER,grocery,mobile,95.97,1,0.243,none,2024-12-08\r\n12515,1231,AMER,fashion,retail,41.55,8,0.208,bundle,2024-09-06\r\n12516,2396,AMER,grocery,online,114.44,6,0.008,none,2024-01-01\r\n12517,1596,EMEA,fashion,online,76.64,2,0.179,none,2024-07-07\r\n12518,1406,LATAM,home,online,62.69,8,0.210,bundle,2024-05-12\r\n12519,1870,EMEA,sports,partner,35.89,1,0.218,none,2024-12-06\r\n12520,1378,APAC,grocery,online,157.90,1,0.216,none,2024-10-05\r\n12521,1088,LATAM,toys,online,33.45,4,0.088,coupon,2024-04-16\r\n12522,1175,AMER,toys,online,45.19,1,0.145,bundle,2024-08-
24\r\n12523,2097,AMER,electronics,online,75.50,3,0.056,none,2024-02-21\r\n12524,1620,LATAM,grocery,online,58.21,4,0.159,bundle,2024-12-21\r\n12525,1849,EMEA,electronics,online,100.46,1,0.132,bundle,2024-06-14\r\n12526,2015,APAC,fashion,online,55.72,4,0.197,none,2024-07-01\r\n12527,2276,AMER,home,mobile,45.11,7,0.106,none,2024-06-17\r\n12528,2345,LATAM,fashion,retail,90.35,6,0.064,none,2024-11-20\r\n12529,1284,APAC,fashion,mobile,99.42,3,0.089,none,2024-03-02\r\n12530,2423,LATAM,electronics,online,50.29,7,0.206,none,2024-04-20\r\n12531,2252,EMEA,fashion,retail,59.59,2,0.036,bundle,2024-10-22\r\n12532,1774,EMEA,grocery,retail,34.50,3,0.180,loyalty,2024-07-25\r\n12533,1535,AMER,electronics,retail,48.06,8,0.159,coupon,2024-09-04\r\n12534,1439,LATAM,fashion,online,60.18,1,0.250,bundle,2024-11-25\r\n12535,1617,AMER,toys,online,39.32,5,0.049,coupon,2024-09-18\r\n12536,1985,AMER,home,retail,87.89,3,0.171,coupon,2024-12-26\r\n12537,2121,APAC,fashion,retail,31.75,7,0.117,loyalty,2024-06-13\r\n12538,1100,AMER,sports,online,64.39,8,0.068,coupon,2024-02-15\r\n12539,1840,LATAM,fashion,retail,50.11,8,0.227,none,2024-11-08\r\n12540,1542,APAC,electronics,retail,39.83,8,0.077,coupon,2024-04-28\r\n12541,1204,AMER,sports,online,76.09,3,0.177,none,2024-09-20\r\n12542,1972,LATAM,toys,online,62.95,7,0.063,none,2024-03-15\r\n12543,2143,AMER,home,retail,33.79,6,0.047,bundle,2024-06-09\r\n12544,2264,LATAM,sports,online,50.46,8,0.232,bundle,2024-05-28\r\n12545,1859,AMER,electronics,retail,67.29,2,0.084,bundle,2024-06-06\r\n12546,1064,AMER,electronics,online,20.74,4,0.144,none,2024-06-23\r\n12547,2451,APAC,fashion,partner,118.88,7,0.140,none,2024-01-21\r\n12548,2213,APAC,grocery,retail,42.57,2,0.160,none,2024-10-05\r\n12549,2392,EMEA,grocery,retail,71.47,4,0.137,none,2024-12-08\r\n12550,1555,AMER,toys,retail,101.91,5,0.089,loyalty,2024-08-09\r\n12551,1054,EMEA,sports,online,85.84,2,0.193,none,2024-08-08\r\n12552,2062,EMEA,grocery,online,28.03,1,0.006,coupon,2024-06-28\r\n12553,2151,APAC,fashio
n,online,23.63,5,0.149,coupon,2024-06-21\r\n12554,1628,EMEA,sports,retail,100.40,5,0.116,coupon,2024-03-02\r\n12555,1649,APAC,grocery,mobile,27.38,3,0.213,none,2024-02-27\r\n12556,2257,AMER,grocery,online,38.30,6,0.033,none,2024-05-20\r\n12557,1144,APAC,electronics,mobile,32.15,5,0.104,coupon,2024-03-03\r\n12558,1240,EMEA,toys,retail,27.63,5,0.124,none,2024-09-04\r\n12559,1445,APAC,grocery,online,21.28,1,0.231,coupon,2024-01-22\r\n12560,1146,LATAM,electronics,online,47.84,2,0.136,none,2024-12-06\r\n12561,2419,LATAM,electronics,retail,122.12,2,0.236,loyalty,2024-07-28\r\n12562,2497,AMER,fashion,mobile,53.19,1,0.172,none,2024-09-20\r\n12563,2008,APAC,sports,retail,174.01,4,0.186,loyalty,2024-03-02\r\n12564,1385,LATAM,sports,online,34.18,6,0.066,coupon,2024-04-21\r\n12565,1450,EMEA,fashion,retail,52.25,8,0.042,none,2024-03-20\r\n12566,2085,AMER,grocery,retail,19.53,6,0.196,none,2024-02-03\r\n12567,2093,LATAM,electronics,online,78.25,8,0.034,none,2024-03-10\r\n12568,1522,LATAM,grocery,mobile,30.31,7,0.163,none,2024-07-21\r\n12569,2495,EMEA,fashion,mobile,32.52,7,0.006,none,2024-02-12\r\n12570,1019,APAC,toys,online,46.88,5,0.116,loyalty,2024-06-02\r\n12571,2231,LATAM,home,online,66.89,8,0.240,none,2024-02-28\r\n12572,1451,EMEA,grocery,partner,25.43,6,0.044,none,2024-09-07\r\n12573,1730,AMER,home,online,61.46,8,0.223,none,2024-01-04\r\n12574,2006,APAC,fashion,online,27.50,1,0.002,loyalty,2024-04-09\r\n12575,1507,EMEA,sports,online,134.12,6,0.016,coupon,2024-07-25\r\n12576,1171,APAC,home,mobile,44.96,4,0.105,none,2024-02-25\r\n12577,2436,LATAM,home,retail,65.14,1,0.180,coupon,2024-07-10\r\n12578,1199,APAC,toys,online,30.46,2,0.184,none,2024-02-11\r\n12579,1580,AMER,fashion,mobile,146.31,7,0.059,none,2024-08-24\r\n12580,1602,EMEA,home,partner,64.32,7,0.134,none,2024-06-14\r\n12581,1949,AMER,grocery,retail,64.99,7,0.058,none,2024-11-14\r\n12582,1237,LATAM,sports,retail,96.59,3,0.210,none,2024-09-18\r\n12583,2209,AMER,home,online,155.26,6,0.027,coupon,2024-09-06\r\n12584,1067
,APAC,grocery,online,56.96,5,0.213,bundle,2024-07-07\r\n12585,1045,LATAM,sports,mobile,38.68,8,0.237,coupon,2024-02-06\r\n12586,1221,LATAM,grocery,online,47.53,2,0.051,bundle,2024-10-25\r\n12587,1798,AMER,fashion,online,52.74,2,0.069,none,2024-06-24\r\n12588,2197,LATAM,grocery,online,38.71,4,0.175,none,2024-08-24\r\n12589,1384,LATAM,fashion,retail,93.57,3,0.183,coupon,2024-01-14\r\n12590,1515,EMEA,electronics,partner,75.43,3,0.086,none,2024-11-16\r\n12591,2352,APAC,electronics,retail,27.94,1,0.115,none,2024-04-23\r\n12592,2427,LATAM,fashion,retail,42.79,3,0.204,loyalty,2024-12-04\r\n12593,2245,APAC,fashion,retail,65.93,2,0.208,none,2024-12-12\r\n12594,1602,EMEA,fashion,online,22.37,3,0.044,coupon,2024-12-06\r\n12595,2157,AMER,electronics,online,101.44,8,0.148,none,2024-05-24\r\n12596,2135,EMEA,fashion,online,56.94,2,0.171,none,2024-03-08\r\n12597,2416,LATAM,grocery,online,68.03,7,0.002,none,2024-07-20\r\n12598,1440,AMER,grocery,retail,31.77,5,0.147,none,2024-09-10\r\n12599,1323,EMEA,electronics,online,41.29,4,0.204,none,2024-12-12\r\n12600,1195,AMER,sports,online,26.09,2,0.181,none,2024-04-27\r\n12601,1338,EMEA,fashion,online,16.70,8,0.005,none,2024-07-14\r\n12602,2277,EMEA,fashion,retail,68.57,4,0.029,none,2024-03-23\r\n12603,2431,LATAM,home,retail,18.19,1,0.066,none,2024-03-21\r\n12604,1221,LATAM,sports,retail,75.28,4,0.005,loyalty,2024-05-14\r\n12605,1738,LATAM,electronics,online,71.81,8,0.073,none,2024-11-21\r\n12606,2263,AMER,toys,mobile,49.52,3,0.059,none,2024-12-13\r\n12607,2246,AMER,home,retail,47.66,4,0.091,coupon,2024-12-21\r\n12608,1531,EMEA,toys,retail,107.35,4,0.047,coupon,2024-07-07\r\n12609,1445,APAC,sports,mobile,48.34,3,0.212,none,2024-12-21\r\n12610,1348,AMER,electronics,mobile,104.68,7,0.207,none,2024-07-21\r\n12611,2494,AMER,toys,mobile,44.63,1,0.086,none,2024-08-13\r\n12612,1031,AMER,sports,mobile,53.36,2,0.238,none,2024-07-16\r\n12613,1017,AMER,grocery,retail,59.94,6,0.051,coupon,2024-05-21\r\n12614,1157,LATAM,grocery,retail,60.98,5,0.138,coupo
n,2024-08-28\r\n12615,1815,APAC,electronics,retail,69.72,2,0.249,none,2024-08-05\r\n12616,2150,APAC,home,online,52.46,5,0.109,none,2024-01-01\r\n12617,1689,LATAM,fashion,online,103.33,1,0.098,none,2024-07-23\r\n12618,2200,LATAM,electronics,mobile,62.23,2,0.028,none,2024-12-18\r\n12619,1310,AMER,grocery,retail,13.17,1,0.194,none,2024-12-09\r\n12620,1143,LATAM,fashion,retail,110.76,6,0.116,none,2024-06-13\r\n12621,1749,LATAM,grocery,online,76.36,5,0.149,coupon,2024-02-25\r\n12622,1107,APAC,home,retail,67.31,4,0.199,loyalty,2024-01-06\r\n12623,2194,APAC,grocery,retail,88.82,2,0.162,none,2024-11-04\r\n12624,2444,EMEA,home,retail,36.07,6,0.199,loyalty,2024-12-13\r\n12625,2199,LATAM,fashion,online,51.46,3,0.247,bundle,2024-04-05\r\n12626,1418,LATAM,electronics,retail,33.38,4,0.041,none,2024-05-11\r\n12627,1488,AMER,fashion,mobile,106.73,6,0.121,bundle,2024-05-22\r\n12628,2241,APAC,electronics,partner,21.83,3,0.071,none,2024-02-13\r\n12629,1597,APAC,home,retail,53.65,5,0.047,coupon,2024-08-18\r\n12630,2143,AMER,fashion,online,26.05,5,0.182,none,2024-12-09\r\n12631,1126,LATAM,electronics,online,103.19,4,0.234,loyalty,2024-03-16\r\n12632,1867,AMER,grocery,online,37.88,6,0.141,none,2024-08-03\r\n12633,2365,LATAM,electronics,online,34.56,7,0.118,bundle,2024-04-20\r\n12634,1734,AMER,fashion,online,93.50,4,0.136,bundle,2024-05-23\r\n12635,1599,APAC,grocery,retail,53.74,4,0.245,none,2024-04-11\r\n12636,2104,EMEA,home,online,165.56,3,0.182,bundle,2024-11-05\r\n12637,1386,AMER,home,online,70.45,1,0.227,loyalty,2024-11-03\r\n12638,1024,APAC,sports,online,62.64,6,0.062,none,2024-08-20\r\n12639,1055,AMER,fashion,online,46.53,4,0.096,none,2024-09-06\r\n12640,1753,APAC,sports,online,35.06,1,0.173,none,2024-01-15\r\n12641,1836,LATAM,fashion,mobile,164.39,4,0.057,coupon,2024-08-02\r\n12642,2476,APAC,electronics,mobile,43.97,4,0.175,none,2024-10-23\r\n12643,1691,LATAM,home,retail,163.33,6,0.062,none,2024-12-27\r\n12644,1794,AMER,grocery,partner,112.93,8,0.079,bundle,2024-02-13\r\n12645,179
7,LATAM,electronics,online,107.73,3,0.046,none,2024-05-21\r\n12646,1876,LATAM,electronics,retail,36.92,7,0.101,none,2024-02-16\r\n12647,1859,AMER,grocery,online,66.20,7,0.214,loyalty,2024-05-27\r\n12648,1961,EMEA,home,online,39.92,3,0.246,none,2024-12-28\r\n12649,1387,AMER,sports,retail,22.21,8,0.142,none,2024-06-13\r\n12650,1215,LATAM,electronics,online,50.78,7,0.119,none,2024-01-21\r\n12651,1215,LATAM,home,online,46.09,4,0.161,none,2024-10-08\r\n12652,1328,APAC,home,online,81.45,6,0.040,none,2024-09-20\r\n12653,2279,LATAM,electronics,retail,74.41,3,0.168,none,2024-07-19\r\n12654,1849,EMEA,grocery,online,49.90,5,0.156,none,2024-02-17\r\n12655,1220,LATAM,grocery,online,113.93,8,0.209,coupon,2024-06-16\r\n12656,1897,AMER,grocery,mobile,52.41,8,0.055,none,2024-11-16\r\n12657,1042,LATAM,electronics,retail,71.06,1,0.089,bundle,2024-11-26\r\n12658,2274,APAC,fashion,retail,63.16,6,0.043,none,2024-04-21\r\n12659,1400,EMEA,electronics,retail,46.49,6,0.066,coupon,2024-06-02\r\n12660,1280,LATAM,fashion,online,18.08,3,0.093,none,2024-07-01\r\n12661,1026,APAC,sports,retail,76.23,6,0.184,bundle,2024-03-14\r\n12662,2390,AMER,toys,online,18.60,3,0.150,bundle,2024-06-15\r\n12663,1859,AMER,grocery,retail,79.27,2,0.034,loyalty,2024-11-17\r\n12664,2043,EMEA,electronics,online,61.74,6,0.009,none,2024-07-23\r\n12665,1213,EMEA,sports,retail,21.35,1,0.110,loyalty,2024-06-21\r\n12666,1785,EMEA,electronics,retail,76.69,1,0.018,coupon,2024-05-06\r\n12667,1678,LATAM,sports,retail,157.67,6,0.112,none,2024-04-21\r\n12668,1642,EMEA,electronics,online,60.80,4,0.180,none,2024-06-03\r\n12669,1873,EMEA,grocery,online,44.06,5,0.135,bundle,2024-05-23\r\n12670,2231,LATAM,sports,mobile,48.69,7,0.166,bundle,2024-02-06\r\n12671,2495,EMEA,fashion,retail,78.78,4,0.099,none,2024-08-28\r\n12672,2046,APAC,grocery,online,49.80,6,0.095,coupon,2024-04-05\r\n12673,1489,AMER,fashion,retail,40.07,8,0.098,coupon,2024-02-19\r\n12674,2175,AMER,toys,online,39.02,2,0.037,none,2024-10-26\r\n12675,2066,APAC,sports,online,1
03.73,6,0.099,loyalty,2024-11-23\r\n12676,2015,APAC,fashion,mobile,168.01,8,0.023,none,2024-01-03\r\n12677,1060,LATAM,grocery,mobile,87.50,4,0.052,coupon,2024-02-23\r\n12678,1144,APAC,electronics,mobile,57.57,5,0.054,coupon,2024-04-19\r\n12679,1784,EMEA,home,online,26.99,8,0.086,none,2024-01-25\r\n12680,1282,LATAM,home,retail,36.54,8,0.002,none,2024-08-03\r\n12681,2173,LATAM,electronics,online,39.55,3,0.227,loyalty,2024-06-01\r\n12682,1461,LATAM,grocery,mobile,152.40,5,0.108,coupon,2024-07-19\r\n12683,1377,APAC,electronics,retail,117.21,1,0.239,none,2024-12-26\r\n12684,1856,EMEA,fashion,retail,90.87,7,0.033,none,2024-04-22\r\n12685,2305,AMER,home,online,32.72,8,0.231,loyalty,2024-08-27\r\n12686,2037,LATAM,electronics,mobile,53.55,2,0.110,loyalty,2024-11-12\r\n12687,2104,EMEA,grocery,retail,55.14,6,0.082,coupon,2024-07-08\r\n12688,2386,EMEA,electronics,online,67.34,5,0.083,coupon,2024-08-02\r\n12689,1847,LATAM,fashion,online,62.15,3,0.040,none,2024-06-02\r\n12690,1476,APAC,fashion,online,71.09,2,0.069,coupon,2024-04-12\r\n12691,2017,EMEA,home,retail,60.77,8,0.036,loyalty,2024-12-04\r\n12692,1318,LATAM,electronics,retail,53.65,8,0.210,coupon,2024-01-17\r\n12693,2083,LATAM,electronics,online,111.47,8,0.126,none,2024-06-03\r\n12694,1298,LATAM,grocery,retail,97.29,8,0.083,coupon,2024-09-12\r\n12695,1230,EMEA,home,online,42.72,5,0.007,bundle,2024-04-13\r\n12696,1265,APAC,electronics,online,28.34,4,0.200,none,2024-05-04\r\n12697,2277,EMEA,electronics,retail,46.80,2,0.136,none,2024-04-12\r\n12698,1969,LATAM,electronics,retail,76.94,6,0.244,bundle,2024-12-10\r\n12699,1804,AMER,grocery,mobile,69.89,2,0.146,none,2024-12-11\r\n12700,2250,AMER,grocery,online,54.12,7,0.067,none,2024-08-17\r\n12701,2027,EMEA,toys,retail,43.02,5,0.084,none,2024-02-13\r\n12702,2048,LATAM,sports,retail,52.99,4,0.025,bundle,2024-10-24\r\n12703,1331,AMER,fashion,retail,53.52,6,0.011,none,2024-11-03\r\n12704,2437,LATAM,grocery,online,20.81,6,0.070,loyalty,2024-11-19\r\n12705,2336,APAC,fashion,online,55.
32,1,0.202,none,2024-10-23\r\n12706,1049,AMER,toys,partner,42.90,5,0.005,none,2024-01-08\r\n12707,1394,LATAM,home,online,28.56,2,0.153,none,2024-09-27\r\n12708,1235,EMEA,grocery,retail,51.34,3,0.054,loyalty,2024-12-12\r\n12709,1476,APAC,electronics,mobile,59.66,5,0.231,loyalty,2024-06-17\r\n12710,1944,AMER,electronics,retail,57.26,2,0.127,bundle,2024-02-11\r\n12711,1506,EMEA,toys,online,26.71,5,0.026,none,2024-08-06\r\n12712,1700,EMEA,home,online,78.21,3,0.200,coupon,2024-02-10\r\n12713,1025,EMEA,electronics,online,86.75,8,0.066,none,2024-05-11\r\n12714,1906,APAC,grocery,retail,44.49,1,0.142,loyalty,2024-06-26\r\n12715,1337,APAC,grocery,online,49.10,5,0.163,none,2024-01-10\r\n12716,1902,AMER,sports,online,56.79,8,0.144,none,2024-09-18\r\n12717,1701,LATAM,fashion,retail,36.62,8,0.127,none,2024-11-28\r\n12718,1848,EMEA,electronics,online,107.39,7,0.102,none,2024-11-23\r\n12719,2325,LATAM,sports,retail,114.93,1,0.076,none,2024-10-17\r\n12720,1833,EMEA,sports,online,64.97,2,0.019,coupon,2024-05-12\r\n12721,2012,APAC,electronics,retail,139.41,5,0.012,coupon,2024-05-01\r\n12722,2093,LATAM,grocery,retail,42.05,5,0.098,bundle,2024-12-04\r\n12723,1030,EMEA,home,retail,64.43,8,0.250,none,2024-01-25\r\n12724,2082,APAC,electronics,online,104.56,5,0.245,none,2024-08-01\r\n12725,2479,EMEA,fashion,retail,41.55,8,0.067,none,2024-09-05\r\n12726,2234,LATAM,home,online,96.93,4,0.192,none,2024-04-26\r\n12727,1014,EMEA,grocery,online,56.92,4,0.158,bundle,2024-10-24\r\n12728,2474,LATAM,fashion,online,66.99,1,0.044,loyalty,2024-12-24\r\n12729,1766,AMER,toys,retail,81.88,3,0.235,none,2024-11-27\r\n12730,1238,AMER,home,mobile,93.27,4,0.059,coupon,2024-11-11\r\n12731,1275,EMEA,grocery,online,60.16,7,0.035,bundle,2024-07-05\r\n12732,1516,EMEA,toys,retail,69.17,8,0.187,bundle,2024-02-01\r\n12733,1525,APAC,grocery,online,171.32,6,0.241,coupon,2024-07-18\r\n12734,1283,APAC,grocery,online,21.98,2,0.049,none,2024-05-19\r\n12735,1008,AMER,grocery,online,112.55,6,0.053,none,2024-02-28\r\n12736,1603,
EMEA,home,retail,222.72,1,0.237,bundle,2024-01-28\r\n12737,1977,APAC,grocery,online,51.29,6,0.133,none,2024-07-02\r\n12738,1368,EMEA,grocery,retail,38.59,6,0.095,coupon,2024-06-01\r\n12739,2423,LATAM,grocery,retail,19.82,5,0.050,none,2024-07-04\r\n12740,2356,LATAM,grocery,partner,63.35,3,0.009,bundle,2024-10-01\r\n12741,1275,EMEA,sports,retail,17.02,6,0.174,coupon,2024-07-23\r\n12742,2012,APAC,grocery,online,37.43,1,0.047,none,2024-01-03\r\n12743,2498,LATAM,toys,online,40.92,1,0.178,coupon,2024-11-27\r\n12744,2162,EMEA,fashion,retail,23.38,4,0.121,none,2024-05-24\r\n12745,1029,EMEA,electronics,online,58.65,6,0.212,loyalty,2024-12-21\r\n12746,2425,APAC,grocery,online,70.68,1,0.161,loyalty,2024-07-10\r\n12747,1910,LATAM,fashion,online,61.96,4,0.001,bundle,2024-11-06\r\n12748,1275,EMEA,grocery,online,69.66,2,0.006,none,2024-09-22\r\n12749,1507,EMEA,electronics,retail,145.06,8,0.133,bundle,2024-10-04\r\n12750,1181,LATAM,electronics,online,127.01,4,0.157,none,2024-07-13\r\n12751,1539,LATAM,sports,online,72.08,8,0.027,none,2024-06-16\r\n12752,1942,APAC,fashion,online,71.53,1,0.117,none,2024-08-03\r\n12753,1819,AMER,toys,retail,44.69,5,0.218,none,2024-12-20\r\n12754,1008,AMER,electronics,online,34.08,7,0.036,none,2024-06-13\r\n12755,1926,AMER,grocery,mobile,41.77,6,0.184,none,2024-12-20\r\n12756,2130,EMEA,fashion,online,69.55,8,0.187,none,2024-01-04\r\n12757,2191,AMER,electronics,retail,55.25,3,0.221,none,2024-08-26\r\n12758,2344,LATAM,home,online,103.23,6,0.072,bundle,2024-09-07\r\n12759,2111,EMEA,grocery,mobile,60.49,4,0.225,bundle,2024-08-09\r\n12760,1038,APAC,electronics,online,52.90,1,0.172,bundle,2024-06-02\r\n12761,1601,APAC,toys,online,63.68,3,0.174,none,2024-04-27\r\n12762,1347,APAC,electronics,retail,136.24,1,0.117,none,2024-11-27\r\n12763,1076,LATAM,fashion,online,82.85,8,0.126,none,2024-08-25\r\n12764,2440,APAC,electronics,retail,45.24,4,0.046,none,2024-08-13\r\n12765,1640,APAC,sports,retail,75.19,8,0.060,coupon,2024-08-27\r\n12766,1102,APAC,home,online,25.43,6
,0.129,none,2024-07-24\r\n12767,2022,LATAM,grocery,online,73.57,2,0.215,coupon,2024-12-17\r\n12768,2326,LATAM,home,online,93.60,4,0.108,coupon,2024-04-03\r\n12769,1429,APAC,electronics,retail,52.07,2,0.171,none,2024-12-20\r\n12770,1452,LATAM,fashion,mobile,59.87,5,0.007,bundle,2024-11-24\r\n12771,1434,EMEA,sports,online,54.96,8,0.187,bundle,2024-10-27\r\n12772,1098,APAC,grocery,retail,34.46,5,0.218,none,2024-06-23\r\n12773,1554,AMER,toys,mobile,70.06,4,0.037,bundle,2024-11-05\r\n12774,1933,EMEA,grocery,online,44.53,5,0.070,coupon,2024-02-01\r\n12775,1378,APAC,toys,retail,71.19,1,0.168,coupon,2024-04-14\r\n12776,1552,EMEA,electronics,retail,79.41,5,0.151,coupon,2024-07-08\r\n12777,1505,EMEA,home,retail,44.79,1,0.117,none,2024-08-26\r\n12778,1158,LATAM,toys,mobile,58.77,3,0.245,none,2024-10-01\r\n12779,1908,AMER,sports,retail,49.55,6,0.227,none,2024-07-17\r\n12780,1183,AMER,home,online,62.31,5,0.092,coupon,2024-10-24\r\n12781,1796,LATAM,home,retail,80.69,7,0.203,loyalty,2024-02-14\r\n12782,1193,APAC,sports,mobile,43.51,6,0.229,coupon,2024-09-03\r\n12783,1421,APAC,home,retail,20.04,1,0.137,none,2024-01-01\r\n12784,1871,APAC,grocery,mobile,57.19,6,0.024,none,2024-07-26\r\n12785,1127,EMEA,grocery,mobile,53.27,1,0.002,bundle,2024-11-03\r\n12786,2240,LATAM,grocery,mobile,65.32,6,0.204,none,2024-07-25\r\n12787,2425,APAC,grocery,retail,80.79,2,0.149,coupon,2024-03-27\r\n12788,2384,LATAM,fashion,online,33.92,1,0.235,bundle,2024-06-05\r\n12789,1288,LATAM,home,online,85.80,5,0.223,none,2024-06-09\r\n12790,2332,APAC,sports,online,30.38,6,0.247,none,2024-01-19\r\n12791,1778,LATAM,toys,retail,85.67,2,0.083,bundle,2024-12-15\r\n12792,2178,AMER,sports,online,34.23,6,0.035,coupon,2024-10-04\r\n12793,1065,AMER,grocery,online,52.91,6,0.034,bundle,2024-11-11\r\n12794,1288,LATAM,toys,retail,54.03,8,0.215,none,2024-03-17\r\n12795,1440,AMER,grocery,online,62.89,6,0.051,none,2024-11-22\r\n12796,1858,LATAM,sports,retail,126.28,2,0.183,none,2024-11-26\r\n12797,1665,AMER,fashion,retail,34.57,4
,0.037,loyalty,2024-11-05\r\n12798,1211,EMEA,sports,retail,76.55,5,0.055,none,2024-04-18\r\n12799,1921,LATAM,sports,mobile,67.45,4,0.191,loyalty,2024-03-25\r\n12800,1680,LATAM,grocery,retail,36.21,6,0.191,bundle,2024-05-23\r\n12801,2478,AMER,sports,mobile,42.47,1,0.081,none,2024-03-06\r\n12802,1290,EMEA,fashion,retail,87.01,2,0.154,none,2024-08-23\r\n12803,1565,AMER,electronics,online,36.36,5,0.084,coupon,2024-06-07\r\n12804,1069,APAC,home,online,104.89,8,0.190,none,2024-05-09\r\n12805,1047,APAC,grocery,retail,47.78,2,0.249,none,2024-07-16\r\n12806,1986,LATAM,electronics,retail,95.68,2,0.084,loyalty,2024-12-07\r\n12807,2042,LATAM,grocery,mobile,63.36,4,0.031,bundle,2024-05-22\r\n12808,1329,APAC,electronics,retail,58.06,2,0.088,coupon,2024-04-12\r\n12809,1131,APAC,grocery,retail,40.18,1,0.096,none,2024-10-16\r\n12810,2150,APAC,home,online,66.46,6,0.093,none,2024-02-12\r\n12811,1130,LATAM,home,mobile,17.73,4,0.019,none,2024-12-20\r\n12812,2097,AMER,home,partner,35.06,2,0.115,none,2024-01-05\r\n12813,1595,AMER,electronics,mobile,39.19,4,0.099,bundle,2024-11-05\r\n12814,1005,LATAM,sports,online,32.94,8,0.117,none,2024-01-25\r\n12815,1427,EMEA,electronics,online,55.85,7,0.142,bundle,2024-07-19\r\n12816,2067,LATAM,toys,online,59.29,3,0.121,none,2024-06-25\r\n12817,1979,APAC,sports,online,52.37,4,0.137,loyalty,2024-09-23\r\n12818,1989,LATAM,grocery,online,47.27,4,0.017,loyalty,2024-01-03\r\n12819,2149,EMEA,grocery,online,44.77,4,0.031,coupon,2024-09-14\r\n12820,2491,APAC,electronics,online,33.47,7,0.119,none,2024-11-14\r\n12821,2458,EMEA,toys,retail,147.65,7,0.186,coupon,2024-07-08\r\n12822,2027,EMEA,fashion,mobile,77.55,8,0.149,none,2024-04-11\r\n12823,2019,AMER,electronics,mobile,30.39,2,0.113,bundle,2024-03-16\r\n12824,1052,LATAM,electronics,online,35.92,1,0.013,loyalty,2024-02-06\r\n12825,1187,AMER,electronics,retail,118.39,7,0.046,coupon,2024-02-08\r\n12826,2147,LATAM,grocery,online,75.06,7,0.028,none,2024-09-21\r\n12827,1017,AMER,fashion,mobile,121.78,5,0.022,loyalty
,2024-12-18\r\n12828,1423,EMEA,grocery,retail,53.75,7,0.019,loyalty,2024-10-09\r\n12829,2128,EMEA,sports,mobile,22.40,6,0.228,none,2024-10-16\r\n12830,1511,EMEA,grocery,retail,71.76,8,0.026,none,2024-05-01\r\n12831,1275,EMEA,electronics,online,42.30,2,0.143,coupon,2024-09-04\r\n12832,2122,AMER,home,retail,85.71,6,0.108,none,2024-05-02\r\n12833,2159,AMER,home,online,63.02,3,0.110,coupon,2024-09-26\r\n12834,1919,EMEA,grocery,online,38.34,6,0.164,loyalty,2024-01-07\r\n12835,1408,AMER,grocery,online,36.40,4,0.055,none,2024-06-01\r\n12836,1464,APAC,electronics,online,128.63,1,0.233,coupon,2024-02-18\r\n12837,1633,EMEA,sports,retail,35.85,1,0.003,coupon,2024-09-18\r\n12838,1683,AMER,electronics,online,91.25,5,0.245,none,2024-08-20\r\n12839,1531,EMEA,home,online,68.28,1,0.033,coupon,2024-12-12\r\n12840,1865,LATAM,grocery,partner,33.41,4,0.155,bundle,2024-12-03\r\n12841,1253,AMER,fashion,mobile,226.78,4,0.027,bundle,2024-06-07\r\n12842,1875,EMEA,grocery,retail,69.05,1,0.051,loyalty,2024-04-15\r\n12843,1442,EMEA,home,online,40.38,2,0.194,loyalty,2024-12-02\r\n12844,1572,LATAM,electronics,mobile,180.52,4,0.179,loyalty,2024-01-20\r\n12845,1420,APAC,toys,online,39.96,3,0.158,none,2024-01-21\r\n12846,1657,LATAM,sports,retail,25.48,7,0.230,none,2024-07-17\r\n12847,2147,LATAM,electronics,retail,57.41,4,0.115,none,2024-12-08\r\n12848,1744,EMEA,home,online,41.53,3,0.156,none,2024-07-08\r\n12849,1330,EMEA,toys,retail,45.32,7,0.131,coupon,2024-03-23\r\n12850,2120,AMER,sports,mobile,65.80,1,0.043,bundle,2024-09-11\r\n12851,2124,AMER,fashion,retail,72.08,7,0.248,none,2024-06-27\r\n12852,1617,AMER,home,retail,45.09,8,0.121,none,2024-02-24\r\n12853,2384,LATAM,fashion,online,132.98,1,0.241,none,2024-06-18\r\n12854,1608,AMER,sports,retail,157.90,1,0.158,none,2024-12-11\r\n12855,2056,LATAM,electronics,mobile,41.94,2,0.122,none,2024-07-14\r\n12856,1968,EMEA,fashion,retail,78.80,2,0.157,none,2024-10-28\r\n12857,1429,APAC,electronics,online,25.34,5,0.065,none,2024-08-06\r\n12858,2475,AMER,elect
ronics,retail,32.58,4,0.042,none,2024-09-15\r\n12859,1640,APAC,electronics,online,58.03,8,0.098,coupon,2024-05-02\r\n12860,1335,APAC,grocery,online,67.55,4,0.100,none,2024-04-13\r\n12861,1501,AMER,home,online,89.37,1,0.165,none,2024-12-09\r\n12862,1750,LATAM,grocery,retail,132.75,3,0.175,bundle,2024-10-16\r\n12863,1279,EMEA,grocery,retail,46.44,7,0.090,none,2024-05-14\r\n12864,1285,EMEA,toys,retail,73.11,8,0.099,none,2024-07-25\r\n12865,1200,EMEA,grocery,online,68.59,5,0.183,bundle,2024-01-26\r\n12866,1960,EMEA,fashion,online,43.64,4,0.207,none,2024-05-19\r\n12867,1611,EMEA,sports,mobile,65.54,1,0.135,none,2024-08-04\r\n12868,2017,EMEA,grocery,online,76.01,3,0.113,none,2024-01-04\r\n12869,1101,AMER,electronics,online,69.97,2,0.042,coupon,2024-06-10\r\n12870,1730,AMER,toys,retail,48.97,1,0.191,coupon,2024-01-16\r\n12871,1350,LATAM,sports,partner,79.03,8,0.115,none,2024-10-16\r\n12872,2139,AMER,electronics,online,86.13,3,0.164,none,2024-10-06\r\n12873,1786,APAC,electronics,partner,115.24,8,0.107,none,2024-03-24\r\n12874,2472,AMER,toys,online,73.75,6,0.184,none,2024-12-01\r\n12875,1180,AMER,electronics,retail,34.14,5,0.216,none,2024-06-15\r\n12876,2205,AMER,home,retail,98.82,6,0.167,coupon,2024-02-25\r\n12877,1976,AMER,toys,online,33.58,2,0.021,loyalty,2024-03-15\r\n12878,1828,EMEA,fashion,retail,128.48,3,0.248,none,2024-05-28\r\n12879,1651,LATAM,electronics,mobile,31.29,1,0.174,none,2024-01-21\r\n12880,2029,APAC,fashion,online,49.42,2,0.070,none,2024-04-27\r\n12881,1477,APAC,electronics,online,30.13,6,0.115,coupon,2024-09-11\r\n12882,2456,APAC,home,online,89.68,7,0.160,none,2024-07-24\r\n12883,1326,AMER,toys,online,71.87,4,0.247,bundle,2024-03-03\r\n12884,1767,AMER,grocery,online,82.64,5,0.207,none,2024-01-19\r\n12885,2085,AMER,grocery,online,28.49,2,0.146,none,2024-02-17\r\n12886,1622,LATAM,sports,online,39.86,1,0.085,none,2024-10-14\r\n12887,1621,APAC,fashion,online,79.33,4,0.200,loyalty,2024-07-23\r\n12888,1519,APAC,home,online,50.34,6,0.104,none,2024-04-21\r\n1288
9,2469,LATAM,grocery,online,56.59,2,0.053,none,2024-08-22\r\n12890,1043,LATAM,electronics,retail,45.06,6,0.073,coupon,2024-11-07\r\n12891,1009,APAC,toys,retail,143.44,4,0.084,none,2024-03-15\r\n12892,2377,AMER,fashion,retail,26.32,6,0.136,loyalty,2024-04-10\r\n12893,1030,EMEA,toys,mobile,105.80,2,0.202,none,2024-06-24\r\n12894,1383,AMER,sports,online,39.06,7,0.070,none,2024-05-12\r\n12895,1927,EMEA,toys,retail,69.13,5,0.020,loyalty,2024-12-08\r\n12896,2180,AMER,sports,partner,65.24,3,0.044,none,2024-04-12\r\n12897,1051,EMEA,grocery,retail,128.61,4,0.111,none,2024-08-13\r\n12898,1176,EMEA,grocery,retail,110.88,2,0.082,none,2024-11-20\r\n12899,1279,EMEA,fashion,retail,126.62,6,0.142,loyalty,2024-04-01\r\n12900,2328,EMEA,electronics,online,48.75,3,0.102,none,2024-12-04\r\n12901,1369,AMER,grocery,partner,57.20,8,0.185,bundle,2024-11-25\r\n12902,2299,EMEA,toys,online,34.97,2,0.184,none,2024-01-25\r\n12903,1568,AMER,electronics,partner,35.93,8,0.123,bundle,2024-05-20\r\n12904,2351,EMEA,grocery,online,39.77,7,0.103,coupon,2024-09-02\r\n12905,1052,LATAM,grocery,retail,81.59,1,0.226,loyalty,2024-08-08\r\n12906,1590,APAC,electronics,online,59.33,4,0.232,bundle,2024-07-28\r\n12907,2265,APAC,home,online,82.97,2,0.007,bundle,2024-05-14\r\n12908,1935,EMEA,home,retail,88.03,7,0.184,none,2024-02-26\r\n12909,2463,AMER,electronics,retail,46.68,8,0.185,bundle,2024-04-03\r\n12910,2233,EMEA,sports,mobile,48.27,7,0.221,none,2024-07-12\r\n12911,1062,EMEA,grocery,retail,70.20,2,0.103,none,2024-03-12\r\n12912,2331,APAC,home,retail,21.63,3,0.014,loyalty,2024-09-25\r\n12913,1027,APAC,fashion,online,131.09,4,0.091,bundle,2024-05-12\r\n12914,1841,AMER,electronics,online,75.31,2,0.049,coupon,2024-05-22\r\n12915,2477,APAC,home,retail,38.93,1,0.197,none,2024-02-08\r\n12916,1903,LATAM,electronics,retail,59.14,8,0.102,none,2024-08-08\r\n12917,1397,LATAM,grocery,mobile,88.48,1,0.032,none,2024-09-02\r\n12918,1330,EMEA,fashion,retail,71.75,3,0.089,bundle,2024-08-04\r\n12919,1423,EMEA,fashion,partner,54
.60,8,0.033,bundle,2024-03-28\r\n12920,1258,EMEA,sports,partner,131.27,3,0.084,coupon,2024-06-11\r\n12921,1563,EMEA,fashion,mobile,41.46,3,0.039,none,2024-04-14\r\n12922,1974,EMEA,home,online,91.38,5,0.110,none,2024-02-10\r\n12923,2041,LATAM,toys,retail,75.82,1,0.087,none,2024-03-04\r\n12924,1828,EMEA,electronics,online,42.16,7,0.057,none,2024-05-03\r\n12925,1768,AMER,sports,retail,61.84,7,0.196,none,2024-01-07\r\n12926,1896,EMEA,fashion,online,106.37,3,0.189,none,2024-01-07\r\n12927,1812,EMEA,sports,retail,43.02,7,0.124,loyalty,2024-10-03\r\n12928,1344,EMEA,grocery,online,38.84,1,0.021,none,2024-07-23\r\n12929,2485,AMER,home,mobile,33.73,8,0.121,none,2024-02-22\r\n12930,2069,AMER,electronics,retail,84.14,7,0.017,none,2024-11-08\r\n12931,1718,EMEA,fashion,retail,34.60,5,0.079,coupon,2024-06-27\r\n12932,1656,LATAM,fashion,online,61.28,3,0.039,none,2024-01-14\r\n12933,1572,LATAM,grocery,retail,34.79,8,0.143,none,2024-09-08\r\n12934,2397,LATAM,fashion,online,86.43,7,0.014,bundle,2024-08-11\r\n12935,1631,APAC,home,online,69.60,4,0.098,none,2024-10-11\r\n12936,1531,EMEA,fashion,mobile,77.98,3,0.197,none,2024-06-04\r\n12937,2168,EMEA,grocery,retail,31.91,7,0.191,none,2024-07-05\r\n12938,1356,LATAM,home,mobile,51.70,5,0.163,none,2024-10-17\r\n12939,2355,EMEA,electronics,mobile,63.93,6,0.059,bundle,2024-11-28\r\n12940,1771,AMER,sports,online,129.23,3,0.097,bundle,2024-08-28\r\n12941,2005,APAC,home,retail,179.59,1,0.095,none,2024-12-01\r\n12942,2106,LATAM,electronics,online,66.70,8,0.158,bundle,2024-12-02\r\n12943,1848,EMEA,grocery,retail,80.23,4,0.023,none,2024-04-06\r\n12944,1624,AMER,toys,online,63.87,3,0.112,loyalty,2024-01-03\r\n12945,1972,LATAM,home,mobile,61.56,7,0.165,none,2024-06-17\r\n12946,1587,LATAM,toys,online,48.21,5,0.025,none,2024-07-08\r\n12947,2373,LATAM,grocery,online,102.09,6,0.176,coupon,2024-01-09\r\n12948,1020,APAC,fashion,online,70.01,3,0.157,loyalty,2024-07-10\r\n12949,1907,EMEA,fashion,retail,41.26,6,0.122,none,2024-02-05\r\n12950,1197,LATAM,home,re
tail,48.71,1,0.031,none,2024-01-04\r\n12951,1746,LATAM,fashion,retail,20.71,8,0.236,bundle,2024-10-24\r\n12952,1778,LATAM,electronics,online,44.13,2,0.108,bundle,2024-11-21\r\n12953,1288,LATAM,grocery,online,60.40,8,0.070,none,2024-05-08\r\n12954,1993,APAC,grocery,online,83.57,6,0.138,none,2024-07-08\r\n12955,1354,AMER,grocery,online,81.85,1,0.240,loyalty,2024-07-09\r\n12956,1818,AMER,grocery,online,75.63,1,0.004,none,2024-02-20\r\n12957,1361,LATAM,home,retail,60.69,4,0.149,none,2024-04-24\r\n12958,2149,EMEA,home,mobile,31.60,5,0.093,loyalty,2024-04-16\r\n12959,1848,EMEA,sports,retail,96.88,7,0.015,bundle,2024-04-17\r\n12960,2107,APAC,electronics,online,119.60,6,0.144,bundle,2024-11-17\r\n12961,1310,AMER,home,online,85.50,1,0.157,bundle,2024-11-28\r\n12962,2201,AMER,electronics,online,37.09,7,0.178,none,2024-12-23\r\n12963,1090,AMER,home,online,77.86,4,0.067,coupon,2024-04-24\r\n12964,1261,APAC,home,mobile,35.96,5,0.045,none,2024-10-24\r\n12965,1776,APAC,home,online,48.09,2,0.069,none,2024-03-15\r\n12966,1386,AMER,electronics,retail,40.81,6,0.218,coupon,2024-05-09\r\n12967,1950,LATAM,grocery,online,50.64,5,0.172,bundle,2024-08-07\r\n12968,1073,AMER,home,mobile,44.42,8,0.038,none,2024-09-14\r\n12969,2049,LATAM,grocery,online,35.97,3,0.154,none,2024-10-27\r\n12970,1043,LATAM,home,retail,40.00,4,0.183,coupon,2024-05-10\r\n12971,2332,APAC,fashion,online,52.17,4,0.141,none,2024-05-07\r\n12972,1390,APAC,fashion,retail,53.90,8,0.121,none,2024-08-12\r\n12973,1630,APAC,fashion,retail,64.98,2,0.084,none,2024-07-17\r\n12974,1742,AMER,electronics,mobile,57.62,8,0.048,coupon,2024-07-15\r\n12975,1162,AMER,grocery,online,22.45,4,0.211,none,2024-07-14\r\n12976,2448,APAC,home,mobile,58.94,5,0.104,none,2024-10-16\r\n12977,1467,LATAM,toys,online,110.54,1,0.070,coupon,2024-02-04\r\n12978,2325,LATAM,toys,retail,39.09,1,0.096,bundle,2024-10-28\r\n12979,2036,APAC,sports,online,57.31,8,0.114,none,2024-04-11\r\n12980,1863,EMEA,grocery,online,105.55,4,0.084,coupon,2024-07-20\r\n12981,2279,LA
TAM,sports,online,49.37,3,0.048,bundle,2024-11-16\r\n12982,1758,AMER,toys,online,24.30,1,0.112,none,2024-10-02\r\n12983,1495,LATAM,home,online,43.93,4,0.014,bundle,2024-05-01\r\n12984,2298,APAC,toys,online,35.67,6,0.228,loyalty,2024-01-21\r\n12985,1898,EMEA,electronics,retail,33.23,4,0.182,coupon,2024-03-15\r\n12986,1953,EMEA,electronics,online,32.71,2,0.142,loyalty,2024-09-21\r\n12987,1572,LATAM,electronics,online,58.22,2,0.141,bundle,2024-10-09\r\n12988,2172,EMEA,grocery,online,93.49,7,0.111,loyalty,2024-10-15\r\n12989,1616,APAC,home,online,32.84,7,0.014,none,2024-08-22\r\n12990,1606,AMER,electronics,mobile,135.00,3,0.151,bundle,2024-12-07\r\n12991,2486,APAC,grocery,retail,118.09,2,0.250,none,2024-04-28\r\n12992,1485,APAC,home,online,80.68,1,0.095,coupon,2024-08-01\r\n12993,2197,LATAM,toys,retail,75.93,7,0.193,none,2024-04-14\r\n12994,1558,EMEA,home,online,34.71,1,0.109,coupon,2024-10-28\r\n12995,1716,LATAM,grocery,online,61.37,7,0.102,none,2024-01-15\r\n12996,1476,APAC,electronics,retail,60.92,2,0.163,none,2024-10-12\r\n12997,1214,EMEA,home,online,105.86,2,0.206,none,2024-11-19\r\n12998,2217,LATAM,electronics,retail,41.94,7,0.126,none,2024-01-19\r\n12999,2181,AMER,sports,online,23.40,5,0.150,bundle,2024-08-23\r\n13000,1226,AMER,toys,online,95.17,5,0.072,loyalty,2024-05-12\r\n13001,1069,APAC,grocery,retail,58.41,5,0.016,loyalty,2024-06-07\r\n13002,1842,LATAM,electronics,online,53.86,6,0.005,none,2024-02-22\r\n13003,1932,EMEA,grocery,online,83.00,8,0.107,none,2024-09-16\r\n13004,2189,LATAM,electronics,online,151.47,6,0.203,none,2024-08-24\r\n13005,2473,EMEA,home,partner,36.77,2,0.026,none,2024-06-23\r\n13006,1629,LATAM,sports,retail,36.95,2,0.009,none,2024-02-14\r\n13007,1725,APAC,toys,retail,61.22,4,0.099,coupon,2024-04-13\r\n13008,1767,AMER,toys,retail,117.58,4,0.038,none,2024-06-17\r\n13009,2149,EMEA,electronics,mobile,98.41,2,0.181,none,2024-04-09\r\n13010,2163,EMEA,grocery,online,49.23,8,0.176,loyalty,2024-04-10\r\n13011,2377,AMER,grocery,online,24.71,3,0.012,
none,2024-12-11\r\n13012,1360,APAC,sports,online,107.39,3,0.020,none,2024-05-19\r\n13013,2113,LATAM,grocery,online,51.76,1,0.079,none,2024-04-05\r\n13014,1924,AMER,grocery,online,59.91,4,0.213,none,2024-07-25\r\n13015,1017,AMER,home,online,88.51,6,0.060,none,2024-04-02\r\n13016,1114,APAC,grocery,partner,40.58,5,0.198,loyalty,2024-06-06\r\n13017,2213,APAC,home,retail,45.27,3,0.162,none,2024-03-18\r\n13018,1137,APAC,home,retail,46.75,7,0.017,bundle,2024-10-19\r\n13019,2474,LATAM,sports,mobile,34.55,3,0.172,none,2024-12-23\r\n13020,2415,AMER,sports,mobile,60.78,2,0.142,bundle,2024-07-06\r\n13021,2048,LATAM,electronics,retail,9.67,4,0.142,coupon,2024-02-24\r\n13022,1634,AMER,home,online,48.64,8,0.176,none,2024-02-02\r\n13023,1335,APAC,fashion,online,107.48,8,0.063,none,2024-01-12\r\n13024,1304,LATAM,electronics,online,44.94,6,0.215,none,2024-01-24\r\n13025,2192,APAC,grocery,online,96.87,7,0.219,bundle,2024-12-19\r\n13026,2381,AMER,electronics,retail,123.87,7,0.019,none,2024-11-01\r\n13027,2357,EMEA,sports,online,26.71,5,0.070,none,2024-12-08\r\n13028,2074,AMER,fashion,online,105.27,7,0.245,bundle,2024-03-18\r\n13029,1520,APAC,home,retail,51.55,7,0.059,none,2024-04-19\r\n13030,1137,APAC,electronics,online,52.83,8,0.067,coupon,2024-10-28\r\n13031,1605,APAC,electronics,online,52.32,2,0.105,bundle,2024-08-06\r\n13032,2224,EMEA,grocery,online,105.06,3,0.102,none,2024-08-08\r\n13033,1222,AMER,toys,retail,41.86,1,0.124,none,2024-10-12\r\n13034,1494,AMER,electronics,online,58.64,3,0.022,coupon,2024-12-21\r\n13035,1934,EMEA,grocery,retail,109.23,5,0.100,none,2024-12-18\r\n13036,1290,EMEA,grocery,online,50.60,7,0.092,none,2024-12-26\r\n13037,2169,EMEA,fashion,online,106.33,4,0.088,bundle,2024-11-28\r\n13038,1368,EMEA,sports,retail,45.92,6,0.038,bundle,2024-08-22\r\n13039,1372,APAC,toys,online,54.73,4,0.043,none,2024-09-09\r\n13040,2193,AMER,home,online,66.59,3,0.192,none,2024-06-15\r\n13041,1281,AMER,fashion,retail,16.29,7,0.187,none,2024-04-19\r\n13042,1083,AMER,electronics,onli
ne,23.53,4,0.153,none,2024-06-25\r\n13043,1375,AMER,grocery,online,70.31,8,0.244,none,2024-11-13\r\n13044,2365,LATAM,sports,retail,23.02,5,0.148,none,2024-02-08\r\n13045,1623,AMER,fashion,retail,77.10,4,0.117,coupon,2024-08-01\r\n13046,2433,APAC,sports,online,34.33,7,0.130,none,2024-02-23\r\n13047,1709,EMEA,electronics,retail,95.55,8,0.099,none,2024-01-25\r\n13048,1444,EMEA,fashion,mobile,54.14,1,0.173,bundle,2024-08-03\r\n13049,2203,APAC,electronics,mobile,27.03,3,0.141,none,2024-02-03\r\n13050,2434,APAC,grocery,retail,77.19,5,0.183,none,2024-07-22\r\n13051,2383,APAC,home,online,57.56,2,0.226,none,2024-08-27\r\n13052,1977,APAC,grocery,retail,78.72,1,0.121,coupon,2024-04-19\r\n13053,1775,EMEA,sports,retail,32.07,6,0.076,loyalty,2024-08-17\r\n13054,2489,LATAM,grocery,retail,34.04,3,0.014,none,2024-09-15\r\n13055,1257,APAC,home,partner,71.73,8,0.026,loyalty,2024-09-25\r\n13056,1874,LATAM,grocery,online,50.43,6,0.052,coupon,2024-08-02\r\n13057,1412,AMER,grocery,mobile,72.50,2,0.238,none,2024-11-16\r\n13058,2316,EMEA,grocery,partner,64.58,2,0.213,loyalty,2024-02-28\r\n13059,2010,APAC,grocery,online,88.10,5,0.136,none,2024-01-17\r\n13060,1885,EMEA,electronics,online,21.85,3,0.199,none,2024-02-25\r\n13061,2008,APAC,sports,retail,18.62,8,0.169,coupon,2024-05-09\r\n13062,2022,LATAM,sports,online,38.02,5,0.111,bundle,2024-11-28\r\n13063,1822,EMEA,sports,mobile,77.62,2,0.126,none,2024-11-27\r\n13064,1766,AMER,fashion,mobile,26.43,2,0.104,none,2024-06-17\r\n13065,2090,AMER,electronics,mobile,56.17,1,0.147,none,2024-05-02\r\n13066,1435,AMER,fashion,retail,71.36,4,0.115,coupon,2024-02-04\r\n13067,1074,LATAM,fashion,online,48.83,6,0.039,coupon,2024-09-15\r\n13068,2275,LATAM,grocery,online,52.41,8,0.086,none,2024-05-03\r\n13069,2470,EMEA,home,online,33.24,2,0.181,none,2024-05-19\r\n13070,1677,EMEA,home,retail,40.42,4,0.188,coupon,2024-10-04\r\n13071,1583,AMER,home,online,64.44,6,0.044,bundle,2024-07-03\r\n13072,1524,LATAM,fashion,mobile,77.00,6,0.113,bundle,2024-11-02\r\n13073,188
6,LATAM,electronics,mobile,152.55,3,0.060,none,2024-08-01\r\n13074,2322,AMER,electronics,online,37.68,8,0.040,bundle,2024-02-26\r\n13075,1684,EMEA,sports,retail,71.74,6,0.021,bundle,2024-02-25\r\n13076,1377,APAC,home,online,177.30,3,0.059,none,2024-06-18\r\n13077,2394,EMEA,sports,online,40.40,4,0.205,bundle,2024-09-13\r\n13078,1543,AMER,electronics,retail,31.72,5,0.214,none,2024-02-15\r\n13079,1102,APAC,fashion,mobile,74.01,2,0.118,bundle,2024-02-17\r\n13080,1670,EMEA,grocery,mobile,57.65,6,0.093,none,2024-06-10\r\n13081,2076,AMER,home,retail,67.84,1,0.222,coupon,2024-09-25\r\n13082,2236,APAC,fashion,mobile,29.74,3,0.121,none,2024-04-06\r\n13083,2460,AMER,grocery,mobile,59.10,3,0.075,none,2024-06-25\r\n13084,1706,EMEA,fashion,mobile,45.42,3,0.015,none,2024-05-13\r\n13085,2395,APAC,grocery,online,88.56,2,0.019,none,2024-08-14\r\n13086,1287,AMER,home,mobile,70.65,8,0.005,none,2024-04-21\r\n13087,1868,AMER,grocery,retail,68.31,5,0.220,loyalty,2024-08-08\r\n13088,1226,AMER,sports,online,26.39,3,0.176,loyalty,2024-01-22\r\n13089,2301,EMEA,grocery,partner,48.93,5,0.099,coupon,2024-10-20\r\n13090,1282,LATAM,home,mobile,50.88,1,0.189,bundle,2024-01-13\r\n13091,1346,AMER,electronics,mobile,83.23,1,0.026,none,2024-08-07\r\n13092,1081,AMER,home,mobile,87.87,2,0.144,bundle,2024-09-24\r\n13093,2126,APAC,grocery,online,164.75,4,0.075,none,2024-12-07\r\n13094,1062,EMEA,electronics,online,75.65,7,0.117,none,2024-07-25\r\n13095,1313,EMEA,sports,online,51.40,1,0.189,loyalty,2024-10-17\r\n13096,1935,EMEA,sports,online,50.00,1,0.212,none,2024-11-06\r\n13097,1094,LATAM,toys,online,118.98,8,0.118,none,2024-01-22\r\n13098,1803,LATAM,fashion,mobile,47.44,5,0.093,none,2024-04-03\r\n13099,2034,LATAM,grocery,retail,61.27,2,0.026,none,2024-07-27\r\n13100,1073,AMER,toys,retail,84.17,2,0.171,coupon,2024-01-11\r\n13101,1937,APAC,electronics,partner,32.27,2,0.093,none,2024-08-28\r\n13102,1264,APAC,fashion,online,47.70,7,0.058,none,2024-01-22\r\n13103,1977,APAC,electronics,online,47.12,1,0.020,bund
le,2024-09-28\r\n13104,1326,AMER,grocery,retail,55.82,6,0.047,none,2024-09-23\r\n13105,2391,EMEA,home,retail,41.46,7,0.173,bundle,2024-02-17\r\n13106,1939,LATAM,home,mobile,89.47,2,0.166,none,2024-09-26\r\n13107,1542,APAC,electronics,online,40.05,6,0.004,bundle,2024-03-10\r\n13108,2296,AMER,home,mobile,136.06,4,0.080,none,2024-06-06\r\n13109,2001,EMEA,toys,mobile,97.09,7,0.153,bundle,2024-09-14\r\n13110,1861,AMER,electronics,partner,120.63,2,0.128,none,2024-03-05\r\n13111,1092,AMER,grocery,retail,30.46,5,0.011,none,2024-10-14\r\n13112,1083,AMER,grocery,online,268.45,7,0.003,none,2024-12-17\r\n13113,2115,APAC,toys,retail,32.93,7,0.115,none,2024-08-02\r\n13114,1768,AMER,fashion,mobile,25.08,4,0.014,none,2024-12-07\r\n13115,1234,AMER,electronics,mobile,49.74,6,0.122,coupon,2024-10-12\r\n13116,1804,AMER,electronics,online,64.22,5,0.120,loyalty,2024-10-24\r\n13117,1825,AMER,home,online,35.58,8,0.037,none,2024-05-15\r\n13118,2355,EMEA,sports,online,65.78,7,0.096,loyalty,2024-10-28\r\n13119,1049,AMER,electronics,online,47.73,2,0.140,coupon,2024-03-06\r\n13120,2244,LATAM,electronics,retail,69.90,4,0.122,none,2024-11-21\r\n13121,1722,EMEA,grocery,retail,59.35,4,0.245,loyalty,2024-06-20\r\n13122,2411,EMEA,home,mobile,62.15,3,0.161,bundle,2024-12-15\r\n13123,2208,AMER,home,retail,44.55,7,0.035,none,2024-06-17\r\n13124,2151,APAC,electronics,retail,102.92,8,0.030,none,2024-09-07\r\n13125,2264,LATAM,home,retail,31.56,2,0.050,coupon,2024-12-05\r\n13126,1469,EMEA,grocery,retail,117.37,8,0.039,none,2024-06-16\r\n13127,1290,EMEA,fashion,retail,38.99,2,0.125,bundle,2024-05-25\r\n13128,2137,LATAM,sports,retail,59.72,3,0.038,none,2024-07-21\r\n13129,1805,EMEA,home,online,33.27,1,0.125,coupon,2024-07-10\r\n13130,1952,EMEA,fashion,mobile,34.82,6,0.236,coupon,2024-04-20\r\n13131,1970,LATAM,fashion,retail,66.67,2,0.084,none,2024-05-28\r\n13132,1164,EMEA,home,online,118.68,7,0.138,none,2024-11-22\r\n13133,1338,EMEA,fashion,mobile,45.70,7,0.219,bundle,2024-02-13\r\n13134,2458,EMEA,toys,online
,35.63,1,0.133,coupon,2024-03-11\r\n13135,1501,AMER,sports,online,42.19,7,0.147,none,2024-01-06\r\n13136,1995,LATAM,grocery,online,56.58,3,0.138,coupon,2024-04-14\r\n13137,1554,AMER,toys,retail,97.91,8,0.231,none,2024-08-02\r\n13138,1933,EMEA,grocery,mobile,77.50,6,0.004,none,2024-12-08\r\n13139,2058,LATAM,electronics,online,33.15,1,0.209,coupon,2024-09-07\r\n13140,1345,AMER,toys,retail,48.93,4,0.062,none,2024-06-15\r\n13141,1378,APAC,grocery,retail,72.30,8,0.210,none,2024-06-16\r\n13142,1460,LATAM,electronics,online,60.10,8,0.184,none,2024-03-03\r\n13143,1210,LATAM,home,online,67.79,8,0.244,bundle,2024-07-08\r\n13144,1277,AMER,toys,online,87.75,6,0.134,none,2024-11-23\r\n13145,1673,AMER,home,retail,55.57,8,0.244,bundle,2024-05-03\r\n13146,1603,EMEA,fashion,online,41.46,2,0.183,none,2024-06-05\r\n13147,2303,EMEA,toys,online,66.51,6,0.196,bundle,2024-11-16\r\n13148,1653,APAC,electronics,mobile,37.62,8,0.157,coupon,2024-06-14\r\n13149,1605,APAC,home,retail,126.00,6,0.119,none,2024-01-18\r\n13150,1733,LATAM,sports,mobile,46.27,8,0.006,none,2024-02-05\r\n13151,1458,APAC,fashion,online,66.17,4,0.207,loyalty,2024-06-06\r\n13152,2429,EMEA,grocery,retail,55.15,7,0.127,bundle,2024-08-16\r\n13153,1184,AMER,home,online,33.81,3,0.235,none,2024-05-19\r\n13154,1034,EMEA,electronics,retail,49.10,5,0.227,none,2024-05-06\r\n13155,1862,LATAM,grocery,retail,61.45,3,0.031,none,2024-03-15\r\n13156,1505,EMEA,fashion,retail,58.82,2,0.048,coupon,2024-11-10\r\n13157,1753,APAC,fashion,online,36.05,2,0.140,none,2024-07-13\r\n13158,1803,LATAM,grocery,mobile,49.19,2,0.068,none,2024-12-28\r\n13159,2105,APAC,sports,online,109.09,5,0.157,bundle,2024-12-10\r\n13160,2400,EMEA,grocery,online,42.58,4,0.220,bundle,2024-11-25\r\n13161,2481,APAC,electronics,online,80.16,7,0.153,none,2024-01-15\r\n13162,1120,LATAM,grocery,retail,28.89,3,0.093,none,2024-04-11\r\n13163,1031,AMER,fashion,online,27.17,2,0.080,none,2024-04-21\r\n13164,1993,APAC,home,online,41.60,7,0.069,bundle,2024-05-17\r\n13165,2486,APAC,ele
ctronics,online,60.84,6,0.195,bundle,2024-10-13\r\n13166,2370,EMEA,grocery,online,53.96,6,0.209,none,2024-09-02\r\n13167,1294,APAC,toys,online,34.78,2,0.177,none,2024-05-22\r\n13168,2010,APAC,electronics,partner,37.86,4,0.201,none,2024-08-05\r\n13169,1981,EMEA,grocery,online,115.24,8,0.017,none,2024-02-20\r\n13170,1838,AMER,electronics,online,35.99,6,0.071,loyalty,2024-01-26\r\n13171,1216,APAC,grocery,retail,45.49,1,0.162,none,2024-06-24\r\n13172,1784,EMEA,toys,online,52.89,4,0.008,none,2024-10-27\r\n13173,1065,AMER,grocery,retail,43.09,5,0.017,none,2024-12-07\r\n13174,2435,AMER,home,online,72.62,6,0.094,none,2024-05-28\r\n13175,1931,APAC,grocery,retail,65.18,8,0.136,none,2024-10-19\r\n13176,1406,LATAM,electronics,retail,27.67,6,0.045,none,2024-12-05\r\n13177,2069,AMER,electronics,online,45.56,3,0.204,coupon,2024-03-26\r\n13178,1488,AMER,grocery,mobile,47.59,2,0.234,coupon,2024-10-13\r\n13179,1094,LATAM,toys,online,29.16,5,0.213,none,2024-09-21\r\n13180,2327,EMEA,home,online,45.17,4,0.128,none,2024-08-20\r\n13181,2314,EMEA,grocery,retail,45.05,3,0.108,bundle,2024-11-13\r\n13182,2174,LATAM,electronics,mobile,58.35,7,0.038,coupon,2024-12-01\r\n13183,2084,LATAM,sports,retail,43.88,3,0.086,loyalty,2024-02-23\r\n13184,1107,APAC,home,online,57.67,6,0.084,coupon,2024-06-17\r\n13185,1262,APAC,sports,online,128.04,5,0.167,none,2024-05-21\r\n13186,2083,LATAM,electronics,retail,52.56,4,0.006,none,2024-06-08\r\n13187,1522,LATAM,electronics,online,28.84,5,0.203,none,2024-01-08\r\n13188,1129,LATAM,sports,online,26.61,1,0.209,none,2024-10-04\r\n13189,1435,AMER,sports,online,47.70,1,0.085,loyalty,2024-07-09\r\n13190,1760,LATAM,home,retail,28.84,1,0.002,coupon,2024-04-15\r\n13191,2499,LATAM,toys,retail,60.85,5,0.228,bundle,2024-03-06\r\n13192,2116,LATAM,grocery,online,127.88,8,0.001,none,2024-12-16\r\n13193,1674,LATAM,electronics,online,70.15,8,0.192,bundle,2024-04-13\r\n13194,1792,AMER,home,online,15.02,4,0.164,coupon,2024-07-18\r\n13195,1487,AMER,electronics,online,76.43,8,0.029,c
oupon,2024-01-11\r\n13196,1594,LATAM,electronics,retail,22.99,5,0.104,bundle,2024-05-22\r\n13197,1811,APAC,sports,online,57.85,5,0.009,none,2024-04-26\r\n13198,2221,LATAM,grocery,retail,43.19,8,0.127,loyalty,2024-04-03\r\n13199,1918,EMEA,grocery,retail,53.48,2,0.193,none,2024-10-22\r\n13200,1128,LATAM,home,online,71.23,4,0.235,coupon,2024-05-26\r\n13201,2401,LATAM,fashion,retail,94.65,6,0.029,none,2024-06-07\r\n13202,1624,AMER,fashion,mobile,38.93,5,0.233,none,2024-06-04\r\n13203,2202,APAC,grocery,online,70.14,1,0.005,loyalty,2024-03-23\r\n13204,1433,EMEA,electronics,mobile,58.35,7,0.237,none,2024-05-06\r\n13205,1096,EMEA,grocery,mobile,35.28,6,0.113,none,2024-10-06\r\n13206,1784,EMEA,grocery,online,89.03,7,0.095,bundle,2024-07-03\r\n13207,1811,APAC,home,online,95.99,7,0.244,none,2024-04-23\r\n13208,2238,AMER,fashion,mobile,70.73,6,0.039,none,2024-02-07\r\n13209,1111,APAC,electronics,mobile,49.44,8,0.169,none,2024-06-28\r\n13210,2073,AMER,fashion,online,22.34,4,0.015,loyalty,2024-03-02\r\n13211,1328,APAC,home,online,91.66,1,0.059,none,2024-01-02\r\n13212,1344,EMEA,fashion,online,72.66,8,0.081,none,2024-01-21\r\n13213,1282,LATAM,home,mobile,66.86,8,0.089,bundle,2024-07-19\r\n13214,2349,APAC,fashion,retail,36.94,1,0.085,none,2024-08-24\r\n13215,2174,LATAM,sports,partner,47.63,6,0.125,none,2024-10-25\r\n13216,1613,EMEA,grocery,online,43.18,3,0.219,none,2024-06-04\r\n13217,2183,EMEA,sports,mobile,24.41,8,0.176,bundle,2024-02-16\r\n13218,1658,AMER,grocery,online,27.05,5,0.152,none,2024-12-11\r\n13219,2486,APAC,home,retail,36.75,2,0.156,coupon,2024-09-14\r\n13220,1400,EMEA,electronics,online,87.72,7,0.178,none,2024-03-17\r\n13221,1710,APAC,fashion,online,156.26,6,0.162,none,2024-06-15\r\n13222,2416,LATAM,toys,online,68.22,7,0.065,none,2024-01-28\r\n13223,1875,EMEA,home,online,50.97,7,0.156,coupon,2024-08-14\r\n13224,1965,LATAM,fashion,online,48.14,5,0.201,coupon,2024-05-20\r\n13225,1534,EMEA,grocery,retail,197.36,3,0.170,none,2024-02-04\r\n13226,1783,AMER,home,mobile,43.4
3,4,0.150,none,2024-03-05\r\n13227,1414,APAC,electronics,online,71.50,1,0.135,coupon,2024-02-08\r\n13228,1785,EMEA,home,online,49.43,1,0.082,loyalty,2024-11-01\r\n13229,2426,AMER,toys,retail,44.50,8,0.159,coupon,2024-02-11\r\n13230,1997,APAC,electronics,online,97.76,5,0.165,none,2024-12-19\r\n13231,1777,AMER,grocery,online,36.73,5,0.135,none,2024-12-16\r\n13232,1182,EMEA,electronics,online,50.57,6,0.166,none,2024-10-14\r\n13233,1014,EMEA,home,retail,62.48,2,0.161,bundle,2024-04-24\r\n13234,1182,EMEA,home,online,23.12,5,0.175,coupon,2024-04-09\r\n13235,1933,EMEA,fashion,retail,71.06,2,0.060,coupon,2024-08-13\r\n13236,2301,EMEA,sports,online,209.79,4,0.011,none,2024-07-23\r\n13237,1150,LATAM,sports,retail,29.93,2,0.198,none,2024-11-26\r\n13238,2145,AMER,grocery,mobile,71.28,3,0.152,bundle,2024-06-09\r\n13239,2158,APAC,home,mobile,172.73,1,0.074,none,2024-05-17\r\n13240,1181,LATAM,grocery,online,54.81,2,0.007,none,2024-04-04\r\n13241,1033,APAC,toys,retail,44.63,1,0.104,none,2024-02-19\r\n13242,1141,AMER,toys,retail,38.13,2,0.236,none,2024-04-26\r\n13243,2290,LATAM,home,online,246.43,6,0.167,bundle,2024-12-16\r\n13244,1036,EMEA,fashion,online,70.50,2,0.243,none,2024-07-26\r\n13245,1311,APAC,fashion,online,107.07,2,0.121,none,2024-09-16\r\n13246,1591,APAC,grocery,online,106.48,7,0.158,none,2024-08-05\r\n13247,1017,AMER,sports,retail,115.70,3,0.221,none,2024-08-14\r\n13248,2380,AMER,home,online,79.54,6,0.139,none,2024-04-04\r\n13249,2375,AMER,grocery,online,45.40,3,0.037,loyalty,2024-01-13\r\n13250,1682,EMEA,electronics,retail,118.31,1,0.017,loyalty,2024-01-05\r\n13251,1808,APAC,fashion,online,64.65,1,0.159,none,2024-05-18\r\n13252,1521,LATAM,electronics,online,46.78,8,0.162,none,2024-09-03\r\n13253,1273,AMER,electronics,retail,54.89,5,0.050,none,2024-01-14\r\n13254,1257,APAC,grocery,retail,74.15,2,0.083,loyalty,2024-09-24\r\n13255,1254,APAC,toys,online,43.81,3,0.195,loyalty,2024-06-02\r\n13256,1551,APAC,home,online,80.47,3,0.110,none,2024-01-10\r\n13257,1368,EMEA,sports,
online,74.75,6,0.182,coupon,2024-08-07\r\n13258,1042,LATAM,grocery,retail,41.45,8,0.063,none,2024-11-22\r\n13259,2308,AMER,electronics,online,49.42,1,0.210,none,2024-01-12\r\n13260,2246,AMER,fashion,online,53.44,3,0.141,coupon,2024-09-05\r\n13261,2170,EMEA,fashion,online,78.13,8,0.212,none,2024-03-16\r\n13262,1881,LATAM,fashion,partner,62.94,8,0.147,bundle,2024-03-25\r\n13263,1504,AMER,home,online,98.18,1,0.041,none,2024-08-05\r\n13264,1905,APAC,grocery,retail,76.85,1,0.230,none,2024-09-14\r\n13265,1188,LATAM,grocery,online,65.55,2,0.021,none,2024-06-14\r\n13266,1012,LATAM,electronics,retail,124.55,5,0.019,coupon,2024-09-10\r\n13267,1284,APAC,grocery,partner,27.48,8,0.192,bundle,2024-10-28\r\n13268,1889,APAC,grocery,online,58.02,8,0.202,none,2024-05-27\r\n13269,1574,AMER,electronics,retail,72.50,4,0.078,none,2024-07-16\r\n13270,1438,APAC,sports,mobile,60.35,1,0.160,none,2024-09-04\r\n13271,1756,EMEA,grocery,online,55.82,7,0.082,bundle,2024-09-17\r\n13272,1537,LATAM,sports,retail,31.01,1,0.057,loyalty,2024-03-12\r\n13273,2284,EMEA,home,retail,86.42,8,0.204,none,2024-02-01\r\n13274,1460,LATAM,electronics,mobile,20.69,6,0.046,none,2024-04-19\r\n13275,1896,EMEA,grocery,online,50.94,2,0.211,bundle,2024-05-17\r\n13276,2163,EMEA,grocery,online,70.61,3,0.209,none,2024-05-23\r\n13277,1273,AMER,sports,online,52.16,3,0.060,none,2024-05-27\r\n13278,2072,AMER,electronics,partner,45.02,3,0.126,coupon,2024-11-20\r\n13279,1818,AMER,fashion,online,108.38,6,0.187,coupon,2024-09-06\r\n13280,2091,LATAM,electronics,online,31.94,1,0.070,coupon,2024-12-06\r\n13281,2413,AMER,toys,retail,34.90,4,0.022,none,2024-10-04\r\n13282,1869,AMER,home,retail,25.97,6,0.248,coupon,2024-05-25\r\n13283,2323,AMER,fashion,retail,24.70,5,0.109,none,2024-04-24\r\n13284,2027,EMEA,fashion,mobile,53.65,2,0.087,none,2024-11-25\r\n13285,1555,AMER,electronics,retail,83.21,6,0.173,none,2024-05-23\r\n13286,1126,LATAM,toys,retail,19.46,2,0.186,none,2024-09-10\r\n13287,1426,AMER,grocery,online,291.31,7,0.249,none,2024-
05-28\r\n13288,2113,LATAM,electronics,online,38.54,2,0.145,none,2024-12-28\r\n13289,2442,APAC,electronics,retail,88.57,5,0.166,none,2024-02-06\r\n13290,1272,AMER,sports,mobile,40.16,2,0.205,bundle,2024-06-28\r\n13291,2258,AMER,fashion,retail,65.59,8,0.149,loyalty,2024-11-28\r\n13292,1370,APAC,electronics,online,48.69,4,0.032,loyalty,2024-06-07\r\n13293,1838,AMER,grocery,retail,46.08,1,0.087,none,2024-09-15\r\n13294,2216,AMER,grocery,online,32.14,3,0.244,none,2024-03-13\r\n13295,1154,LATAM,grocery,online,64.48,1,0.101,bundle,2024-04-13\r\n13296,1687,APAC,home,mobile,131.01,4,0.163,none,2024-02-26\r\n13297,1085,EMEA,fashion,retail,57.82,1,0.209,none,2024-09-03\r\n13298,1947,EMEA,electronics,online,52.25,7,0.143,none,2024-02-05\r\n13299,2003,LATAM,toys,retail,48.78,5,0.108,none,2024-03-18\r\n13300,2255,AMER,fashion,retail,76.47,4,0.238,none,2024-08-19\r\n13301,1876,LATAM,home,online,92.74,7,0.138,coupon,2024-07-08\r\n13302,1004,LATAM,sports,online,56.14,5,0.220,none,2024-02-13\r\n13303,1665,AMER,fashion,retail,114.96,4,0.248,none,2024-06-07\r\n13304,2142,LATAM,sports,online,124.06,1,0.214,none,2024-02-04\r\n13305,1667,AMER,toys,retail,76.69,8,0.236,loyalty,2024-05-22\r\n13306,2376,LATAM,home,mobile,29.52,3,0.123,none,2024-06-13\r\n13307,1766,AMER,grocery,retail,73.58,8,0.084,none,2024-11-22\r\n13308,1749,LATAM,grocery,retail,50.81,4,0.194,loyalty,2024-08-04\r\n13309,1776,APAC,electronics,retail,44.81,8,0.091,coupon,2024-04-06\r\n13310,1652,APAC,electronics,online,87.62,1,0.248,none,2024-08-14\r\n13311,1693,EMEA,grocery,online,81.48,6,0.212,loyalty,2024-09-13\r\n13312,1762,LATAM,electronics,mobile,56.92,6,0.184,none,2024-12-12\r\n13313,1796,LATAM,grocery,retail,54.01,2,0.137,none,2024-01-03\r\n13314,1019,APAC,sports,partner,47.61,5,0.192,none,2024-04-17\r\n13315,2394,EMEA,toys,retail,74.52,8,0.077,none,2024-11-26\r\n13316,2135,EMEA,electronics,mobile,30.67,7,0.123,coupon,2024-06-05\r\n13317,1911,LATAM,grocery,retail,86.50,8,0.109,none,2024-03-07\r\n13318,2261,EMEA,elect
ronics,retail,26.01,1,0.060,none,2024-02-14\r\n13319,2205,AMER,electronics,retail,29.56,3,0.093,none,2024-03-03\r\n13320,1815,APAC,electronics,online,49.42,6,0.214,coupon,2024-02-08\r\n13321,1363,EMEA,grocery,retail,24.66,5,0.090,none,2024-07-19\r\n13322,2397,LATAM,grocery,online,62.27,8,0.201,none,2024-04-25\r\n13323,1834,AMER,grocery,mobile,63.62,7,0.031,coupon,2024-11-21\r\n13324,2329,LATAM,home,online,92.78,5,0.188,none,2024-08-16\r\n13325,1333,EMEA,electronics,online,46.55,8,0.012,coupon,2024-12-26\r\n13326,1720,AMER,electronics,online,66.93,6,0.008,coupon,2024-02-26\r\n13327,2247,LATAM,electronics,retail,68.09,7,0.044,bundle,2024-05-03\r\n13328,1823,EMEA,grocery,retail,20.94,8,0.248,coupon,2024-10-02\r\n13329,1070,EMEA,electronics,online,16.04,4,0.139,bundle,2024-07-01\r\n13330,2126,APAC,home,online,73.65,3,0.045,bundle,2024-07-07\r\n13331,1745,APAC,electronics,online,51.67,7,0.240,none,2024-02-10\r\n13332,2023,LATAM,sports,online,39.49,2,0.024,coupon,2024-12-11\r\n13333,1906,APAC,home,retail,74.83,6,0.248,bundle,2024-04-02\r\n13334,1148,AMER,toys,online,44.25,6,0.247,none,2024-12-25\r\n13335,1175,AMER,home,mobile,41.28,4,0.185,bundle,2024-08-28\r\n13336,1575,APAC,grocery,online,55.23,2,0.019,none,2024-12-19\r\n13337,1592,LATAM,home,online,37.46,6,0.120,bundle,2024-03-27\r\n13338,1627,LATAM,grocery,online,57.29,7,0.208,none,2024-12-07\r\n13339,2461,LATAM,electronics,partner,29.12,8,0.183,none,2024-12-20\r\n13340,1276,AMER,home,online,55.07,4,0.174,coupon,2024-02-14\r\n13341,1205,APAC,fashion,online,88.87,6,0.095,bundle,2024-04-07\r\n13342,2077,APAC,grocery,online,161.45,1,0.125,bundle,2024-06-24\r\n13343,2397,LATAM,toys,online,69.56,2,0.021,loyalty,2024-05-14\r\n13344,1530,APAC,grocery,online,51.76,3,0.037,none,2024-06-19\r\n13345,1638,EMEA,fashion,online,15.89,4,0.242,coupon,2024-12-17\r\n13346,1067,APAC,home,mobile,99.41,4,0.122,none,2024-01-17\r\n13347,1045,LATAM,fashion,online,38.08,1,0.239,bundle,2024-11-18\r\n13348,1517,AMER,electronics,online,27.33,5,0.
207,coupon,2024-07-10\r\n13349,2029,APAC,grocery,online,48.72,3,0.106,loyalty,2024-05-02\r\n13350,2481,APAC,grocery,online,46.70,1,0.175,none,2024-03-02\r\n13351,1007,APAC,toys,retail,40.40,8,0.248,bundle,2024-02-14\r\n13352,1046,EMEA,grocery,partner,62.56,3,0.221,coupon,2024-01-08\r\n13353,1368,EMEA,fashion,online,163.89,6,0.243,none,2024-12-04\r\n13354,1052,LATAM,fashion,mobile,65.64,3,0.168,bundle,2024-06-24\r\n13355,1622,LATAM,fashion,retail,31.71,1,0.205,coupon,2024-02-21\r\n13356,2423,LATAM,electronics,retail,37.78,3,0.224,coupon,2024-11-20\r\n13357,2412,LATAM,grocery,retail,42.53,4,0.175,none,2024-11-20\r\n13358,2406,EMEA,grocery,retail,259.76,4,0.097,none,2024-05-09\r\n13359,1443,EMEA,toys,mobile,39.20,4,0.146,none,2024-12-27\r\n13360,1110,LATAM,grocery,online,83.26,2,0.174,bundle,2024-08-13\r\n13361,2152,EMEA,home,online,36.79,5,0.107,none,2024-12-11\r\n13362,2091,LATAM,home,online,42.99,6,0.010,none,2024-11-26\r\n13363,1446,AMER,electronics,retail,76.42,3,0.060,bundle,2024-03-20\r\n13364,1347,APAC,electronics,online,121.07,5,0.021,coupon,2024-12-25\r\n13365,1366,APAC,grocery,online,38.40,6,0.105,coupon,2024-03-01\r\n13366,1771,AMER,grocery,online,48.67,4,0.103,loyalty,2024-09-03\r\n13367,1443,EMEA,home,online,86.39,3,0.175,coupon,2024-07-10\r\n13368,2479,EMEA,electronics,online,100.30,3,0.075,bundle,2024-03-22\r\n13369,2180,AMER,toys,online,109.31,5,0.070,none,2024-01-08\r\n13370,1900,APAC,fashion,mobile,83.61,2,0.240,coupon,2024-03-01\r\n13371,1568,AMER,electronics,online,59.42,1,0.211,none,2024-07-23\r\n13372,1756,EMEA,electronics,online,83.63,7,0.221,none,2024-07-01\r\n13373,1887,LATAM,grocery,mobile,27.91,4,0.162,loyalty,2024-07-03\r\n13374,1073,AMER,fashion,online,30.17,5,0.156,none,2024-05-05\r\n13375,1414,APAC,sports,retail,63.61,4,0.222,coupon,2024-04-21\r\n13376,1369,AMER,sports,online,30.96,8,0.031,coupon,2024-07-27\r\n13377,2370,EMEA,fashion,retail,64.85,6,0.018,none,2024-08-22\r\n13378,1320,EMEA,fashion,retail,23.55,7,0.218,none,2024-05-22\r\n1
3379,2392,EMEA,toys,retail,63.48,7,0.177,none,2024-08-04\r\n13380,1183,AMER,grocery,online,30.54,7,0.172,coupon,2024-08-22\r\n13381,1940,APAC,home,online,30.73,7,0.011,bundle,2024-12-20\r\n13382,1320,EMEA,electronics,mobile,47.57,1,0.143,coupon,2024-02-10\r\n13383,1940,APAC,fashion,online,40.56,1,0.161,loyalty,2024-10-25\r\n13384,1240,EMEA,grocery,mobile,58.19,8,0.139,none,2024-06-03\r\n13385,1116,LATAM,electronics,mobile,39.80,8,0.010,none,2024-07-05\r\n13386,2317,LATAM,sports,retail,101.76,8,0.151,coupon,2024-09-19\r\n13387,1187,AMER,grocery,online,83.28,5,0.017,none,2024-08-07\r\n13388,2383,APAC,electronics,retail,30.31,7,0.227,none,2024-03-10\r\n13389,1758,AMER,sports,online,90.87,8,0.070,none,2024-03-05\r\n13390,1405,LATAM,electronics,online,71.30,1,0.163,coupon,2024-01-12\r\n13391,1751,AMER,grocery,online,25.48,3,0.226,none,2024-06-15\r\n13392,2065,EMEA,home,retail,49.71,3,0.039,none,2024-06-15\r\n13393,1331,AMER,home,online,47.40,3,0.135,loyalty,2024-04-12\r\n13394,1617,AMER,electronics,mobile,56.64,4,0.035,coupon,2024-04-15\r\n13395,2061,EMEA,home,online,25.62,7,0.070,none,2024-03-08\r\n13396,1141,AMER,grocery,online,99.39,3,0.067,none,2024-02-11\r\n13397,2368,AMER,fashion,online,85.62,8,0.126,coupon,2024-03-18\r\n13398,2180,AMER,sports,retail,42.35,7,0.097,coupon,2024-02-17\r\n13399,2159,AMER,electronics,online,28.66,3,0.235,coupon,2024-07-04\r\n13400,2468,EMEA,home,online,42.61,6,0.056,bundle,2024-08-11\r\n13401,1035,EMEA,electronics,online,44.22,8,0.053,none,2024-11-16\r\n13402,2042,LATAM,sports,mobile,55.50,2,0.153,none,2024-05-18\r\n13403,1906,APAC,grocery,mobile,63.49,5,0.114,coupon,2024-10-05\r\n13404,1273,AMER,grocery,retail,44.73,5,0.106,loyalty,2024-10-26\r\n13405,1232,LATAM,sports,partner,27.69,6,0.236,loyalty,2024-06-10\r\n13406,2428,LATAM,grocery,mobile,52.51,2,0.225,none,2024-05-25\r\n13407,1238,AMER,home,retail,46.75,8,0.225,none,2024-04-25\r\n13408,2011,AMER,home,retail,60.48,1,0.216,none,2024-08-06\r\n13409,1462,LATAM,electronics,retail,33.8
6,4,0.207,none,2024-09-06\r\n13410,2116,LATAM,toys,mobile,37.86,7,0.111,none,2024-03-24\r\n13411,1552,EMEA,electronics,online,41.41,6,0.221,none,2024-08-15\r\n13412,2436,LATAM,grocery,online,72.64,8,0.052,coupon,2024-02-25\r\n13413,1265,APAC,home,retail,84.47,7,0.117,none,2024-11-21\r\n13414,1933,EMEA,grocery,retail,126.10,1,0.176,bundle,2024-03-16\r\n13415,2341,EMEA,home,partner,60.59,5,0.180,bundle,2024-08-15\r\n13416,1579,AMER,home,online,56.10,5,0.150,none,2024-10-08\r\n13417,1594,LATAM,fashion,online,43.65,1,0.245,none,2024-04-21\r\n13418,1477,APAC,grocery,online,117.06,4,0.012,none,2024-08-27\r\n13419,1132,EMEA,sports,online,140.74,3,0.113,bundle,2024-05-09\r\n13420,1999,EMEA,sports,online,56.46,2,0.180,bundle,2024-09-08\r\n13421,1040,LATAM,grocery,online,100.29,2,0.046,bundle,2024-04-28\r\n13422,1899,APAC,grocery,online,42.54,4,0.103,none,2024-07-04\r\n13423,1696,LATAM,home,partner,25.10,7,0.203,none,2024-11-04\r\n13424,1055,AMER,grocery,mobile,85.71,6,0.142,bundle,2024-01-19\r\n13425,1443,EMEA,sports,online,54.66,8,0.248,none,2024-11-04\r\n13426,1856,EMEA,grocery,partner,77.28,4,0.177,coupon,2024-11-19\r\n13427,1243,AMER,grocery,online,32.80,1,0.224,none,2024-03-18\r\n13428,1683,AMER,sports,mobile,58.06,1,0.237,none,2024-01-14\r\n13429,1963,AMER,toys,mobile,32.20,8,0.110,none,2024-11-27\r\n13430,1874,LATAM,sports,retail,31.68,5,0.035,none,2024-09-03\r\n13431,2370,EMEA,sports,retail,51.50,3,0.139,loyalty,2024-12-01\r\n13432,1567,AMER,electronics,online,48.96,6,0.013,coupon,2024-10-20\r\n13433,2066,APAC,fashion,retail,71.71,7,0.152,none,2024-05-21\r\n13434,2270,APAC,home,online,75.43,3,0.243,none,2024-05-18\r\n13435,2416,LATAM,sports,mobile,24.66,6,0.171,none,2024-09-17\r\n13436,1357,EMEA,grocery,retail,33.47,5,0.219,none,2024-07-21\r\n13437,1186,APAC,toys,online,85.26,6,0.231,bundle,2024-03-27\r\n13438,1904,APAC,electronics,retail,30.98,3,0.004,bundle,2024-10-16\r\n13439,1312,EMEA,grocery,mobile,73.66,6,0.030,bundle,2024-03-11\r\n13440,1179,APAC,grocery,onlin
e,66.44,4,0.045,bundle,2024-12-01\r\n13441,1428,APAC,sports,mobile,39.49,4,0.154,none,2024-03-11\r\n13442,1274,LATAM,toys,online,37.24,6,0.232,loyalty,2024-12-25\r\n13443,1342,LATAM,grocery,retail,55.74,6,0.074,loyalty,2024-09-19\r\n13444,1548,EMEA,grocery,online,101.74,2,0.039,loyalty,2024-03-11\r\n13445,1541,APAC,electronics,online,84.88,6,0.080,none,2024-06-07\r\n13446,2096,LATAM,home,partner,72.66,3,0.082,coupon,2024-05-07\r\n13447,2498,LATAM,electronics,mobile,41.14,1,0.231,coupon,2024-08-10\r\n13448,1055,AMER,grocery,retail,32.96,4,0.087,none,2024-10-27\r\n13449,1671,APAC,toys,retail,138.26,4,0.224,bundle,2024-09-17\r\n13450,1310,AMER,toys,online,55.28,1,0.021,loyalty,2024-04-18\r\n13451,1191,EMEA,electronics,online,49.87,8,0.099,none,2024-05-18\r\n13452,1612,LATAM,fashion,retail,78.10,7,0.067,none,2024-06-20\r\n13453,1466,AMER,toys,online,78.87,4,0.053,coupon,2024-07-24\r\n13454,2137,LATAM,electronics,retail,55.98,4,0.016,none,2024-12-27\r\n13455,1746,LATAM,fashion,mobile,34.41,2,0.053,coupon,2024-05-19\r\n13456,2379,AMER,fashion,retail,63.48,5,0.006,none,2024-03-22\r\n13457,2077,APAC,grocery,retail,141.87,1,0.163,none,2024-04-23\r\n13458,2302,APAC,home,retail,43.85,1,0.167,none,2024-06-21\r\n13459,1688,LATAM,grocery,online,67.70,5,0.139,none,2024-01-15\r\n13460,1208,AMER,home,online,29.13,7,0.111,none,2024-03-25\r\n13461,1820,AMER,home,online,45.61,7,0.228,coupon,2024-03-12\r\n13462,1696,LATAM,toys,online,23.15,6,0.230,none,2024-12-23\r\n13463,2185,EMEA,home,online,40.61,5,0.207,bundle,2024-09-13\r\n13464,1910,LATAM,grocery,online,57.92,1,0.135,none,2024-06-14\r\n13465,1215,LATAM,home,retail,35.00,2,0.111,none,2024-02-14\r\n13466,1975,EMEA,electronics,online,96.78,6,0.232,none,2024-11-25\r\n13467,1005,LATAM,toys,retail,53.73,2,0.186,bundle,2024-09-20\r\n13468,2028,APAC,home,online,62.60,2,0.079,coupon,2024-11-01\r\n13469,1497,EMEA,grocery,online,86.54,1,0.178,coupon,2024-09-08\r\n13470,1079,LATAM,home,mobile,58.98,8,0.151,coupon,2024-02-14\r\n13471,1870,EMEA
,fashion,retail,130.16,2,0.016,loyalty,2024-03-12\r\n13472,1041,APAC,home,online,64.23,1,0.037,bundle,2024-07-17\r\n13473,1095,APAC,fashion,retail,17.15,3,0.206,coupon,2024-06-04\r\n13474,2137,LATAM,home,retail,75.29,6,0.022,none,2024-10-04\r\n13475,1243,AMER,home,retail,127.41,6,0.069,loyalty,2024-04-16\r\n13476,2212,EMEA,electronics,retail,48.31,4,0.161,coupon,2024-05-10\r\n13477,2210,APAC,fashion,retail,63.24,1,0.054,bundle,2024-01-05\r\n13478,1448,EMEA,fashion,retail,33.05,1,0.093,none,2024-05-21\r\n13479,1885,EMEA,home,online,57.90,2,0.136,none,2024-02-01\r\n13480,2322,AMER,fashion,online,74.94,1,0.191,none,2024-09-18\r\n13481,2061,EMEA,grocery,online,82.24,1,0.115,bundle,2024-02-24\r\n13482,2196,AMER,sports,online,59.67,6,0.224,none,2024-02-20\r\n13483,2471,APAC,sports,online,41.01,8,0.228,none,2024-01-24\r\n13484,2013,APAC,fashion,partner,120.58,7,0.166,none,2024-02-23\r\n13485,1960,EMEA,home,online,238.08,5,0.075,coupon,2024-03-11\r\n13486,1131,APAC,electronics,retail,98.53,7,0.127,none,2024-02-27\r\n13487,2083,LATAM,electronics,online,100.20,1,0.237,none,2024-07-23\r\n13488,1747,EMEA,grocery,online,35.66,4,0.156,none,2024-11-27\r\n13489,1768,AMER,electronics,online,57.28,7,0.162,none,2024-09-26\r\n13490,2367,AMER,electronics,online,84.40,2,0.029,none,2024-05-17\r\n13491,1257,APAC,electronics,retail,16.37,6,0.074,bundle,2024-09-08\r\n13492,1663,LATAM,sports,online,93.19,2,0.183,coupon,2024-10-17\r\n13493,1148,AMER,home,retail,160.24,1,0.124,bundle,2024-06-15\r\n13494,2233,EMEA,home,retail,46.99,5,0.237,none,2024-08-11\r\n13495,2049,LATAM,sports,online,47.39,5,0.196,coupon,2024-08-06\r\n13496,1048,EMEA,sports,retail,50.26,4,0.225,none,2024-03-01\r\n13497,2321,APAC,toys,online,67.41,4,0.007,coupon,2024-04-04\r\n13498,1095,APAC,home,retail,25.34,3,0.066,bundle,2024-10-28\r\n13499,2066,APAC,electronics,retail,40.58,6,0.070,bundle,2024-10-06\r\n13500,2141,AMER,home,online,134.92,4,0.022,coupon,2024-11-25\r\n13501,1311,APAC,electronics,online,60.27,6,0.019,loyalty
,2024-09-27\r\n13502,2247,LATAM,fashion,retail,96.48,2,0.139,none,2024-07-07\r\n13503,2416,LATAM,fashion,online,56.99,6,0.099,none,2024-12-19\r\n13504,1270,LATAM,electronics,online,39.41,8,0.220,bundle,2024-05-11\r\n13505,2000,APAC,fashion,online,50.93,3,0.222,coupon,2024-09-15\r\n13506,2145,AMER,electronics,online,51.54,6,0.201,none,2024-10-17\r\n13507,1174,APAC,grocery,mobile,52.49,4,0.045,none,2024-04-20\r\n13508,1629,LATAM,electronics,retail,61.09,5,0.025,none,2024-09-02\r\n13509,1831,APAC,fashion,online,29.08,4,0.072,bundle,2024-03-27\r\n13510,2062,EMEA,sports,online,96.61,7,0.143,none,2024-12-17\r\n13511,1311,APAC,grocery,online,106.31,7,0.030,coupon,2024-06-27\r\n13512,2486,APAC,fashion,partner,74.62,8,0.184,none,2024-03-02\r\n13513,1421,APAC,fashion,mobile,62.40,4,0.031,none,2024-07-01\r\n13514,2127,LATAM,sports,partner,19.48,7,0.201,bundle,2024-02-18\r\n13515,1888,LATAM,fashion,mobile,25.44,6,0.071,coupon,2024-05-13\r\n13516,1042,LATAM,electronics,mobile,40.59,3,0.056,none,2024-11-08\r\n13517,2057,APAC,grocery,online,105.27,8,0.162,none,2024-12-25\r\n13518,2419,LATAM,electronics,partner,85.81,8,0.011,none,2024-01-11\r\n13519,2232,EMEA,toys,online,13.67,4,0.008,none,2024-02-02\r\n13520,1073,AMER,electronics,retail,60.38,8,0.233,loyalty,2024-03-08\r\n13521,2382,LATAM,electronics,online,55.77,4,0.136,none,2024-08-28\r\n13522,1442,EMEA,grocery,online,45.88,3,0.104,coupon,2024-07-06\r\n13523,2105,APAC,electronics,online,52.34,8,0.124,coupon,2024-02-06\r\n13524,1366,APAC,grocery,online,53.94,6,0.129,none,2024-05-06\r\n13525,1448,EMEA,grocery,online,172.60,4,0.078,bundle,2024-03-08\r\n13526,1248,APAC,grocery,retail,72.39,5,0.229,loyalty,2024-04-01\r\n13527,1461,LATAM,grocery,retail,65.26,3,0.163,loyalty,2024-01-27\r\n13528,1155,EMEA,grocery,mobile,14.90,7,0.242,coupon,2024-06-03\r\n13529,2390,AMER,fashion,mobile,46.88,8,0.118,bundle,2024-09-01\r\n13530,1477,APAC,grocery,mobile,76.13,2,0.232,none,2024-03-24\r\n13531,2252,EMEA,home,online,110.37,3,0.055,loyalty,2024
-08-25\r\n13532,1057,LATAM,electronics,online,54.91,1,0.189,bundle,2024-03-05\r\n13533,1561,EMEA,fashion,online,83.57,8,0.048,none,2024-07-17\r\n13534,2375,AMER,fashion,online,40.75,5,0.043,none,2024-03-15\r\n13535,1860,EMEA,grocery,online,84.70,1,0.200,none,2024-05-28\r\n13536,1475,LATAM,home,retail,40.22,4,0.072,bundle,2024-10-26\r\n13537,1000,APAC,toys,online,128.42,1,0.076,bundle,2024-04-20\r\n13538,1702,AMER,electronics,online,124.80,1,0.102,none,2024-09-24\r\n13539,1656,LATAM,fashion,online,25.08,8,0.185,coupon,2024-02-26\r\n13540,1426,AMER,electronics,online,99.80,5,0.226,coupon,2024-11-15\r\n13541,2199,LATAM,grocery,retail,24.79,1,0.186,none,2024-07-20\r\n13542,1714,APAC,sports,online,46.85,8,0.237,none,2024-04-22\r\n13543,2157,AMER,home,retail,55.39,6,0.245,none,2024-12-10\r\n13544,2366,APAC,grocery,mobile,81.19,3,0.163,none,2024-05-03\r\n13545,2030,EMEA,fashion,online,36.41,6,0.231,none,2024-10-11\r\n13546,1505,EMEA,grocery,retail,30.28,3,0.094,none,2024-01-24\r\n13547,1606,AMER,electronics,mobile,69.98,6,0.019,bundle,2024-06-18\r\n13548,2368,AMER,home,online,43.17,1,0.064,loyalty,2024-05-24\r\n13549,2309,AMER,toys,online,34.70,3,0.033,none,2024-04-12\r\n13550,1318,LATAM,grocery,retail,34.47,7,0.225,loyalty,2024-10-26\r\n13551,1086,AMER,electronics,retail,53.46,3,0.172,none,2024-02-25\r\n13552,1738,LATAM,electronics,online,67.81,1,0.138,none,2024-06-23\r\n13553,2428,LATAM,grocery,retail,41.11,3,0.007,none,2024-12-02\r\n13554,1641,EMEA,toys,retail,42.08,2,0.108,none,2024-05-11\r\n13555,1418,LATAM,home,mobile,50.64,3,0.055,coupon,2024-02-27\r\n13556,2456,APAC,home,retail,88.63,3,0.029,none,2024-08-27\r\n13557,1845,AMER,sports,online,100.34,3,0.163,none,2024-04-10\r\n13558,2100,APAC,electronics,online,37.56,6,0.123,none,2024-02-26\r\n13559,2233,EMEA,electronics,retail,28.05,8,0.110,bundle,2024-04-03\r\n13560,1968,EMEA,home,retail,316.71,4,0.219,coupon,2024-06-09\r\n13561,1193,APAC,home,partner,28.56,5,0.185,none,2024-11-25\r\n13562,1957,AMER,grocery,retail,39
.54,3,0.154,none,2024-12-08\r\n13563,1879,EMEA,grocery,online,37.83,2,0.160,none,2024-04-08\r\n13564,1446,AMER,electronics,retail,26.09,1,0.121,none,2024-07-12\r\n13565,1686,LATAM,fashion,retail,21.54,4,0.226,none,2024-06-08\r\n13566,2093,LATAM,home,online,41.30,4,0.035,bundle,2024-01-20\r\n13567,2140,AMER,grocery,online,53.06,4,0.231,coupon,2024-09-07\r\n13568,1174,APAC,electronics,retail,43.24,6,0.121,none,2024-07-02\r\n13569,1073,AMER,toys,retail,146.90,6,0.069,none,2024-11-09\r\n13570,2043,EMEA,grocery,online,50.12,4,0.227,coupon,2024-12-20\r\n13571,1489,AMER,grocery,retail,53.00,7,0.053,coupon,2024-02-14\r\n13572,1673,AMER,electronics,online,115.99,6,0.072,none,2024-04-14\r\n13573,1465,AMER,toys,online,43.25,3,0.058,none,2024-12-25\r\n13574,1471,EMEA,electronics,partner,67.41,8,0.141,coupon,2024-11-16\r\n13575,1199,APAC,electronics,retail,65.79,4,0.189,none,2024-11-03\r\n13576,2149,EMEA,sports,retail,132.82,7,0.245,coupon,2024-12-09\r\n13577,1980,LATAM,grocery,mobile,40.09,5,0.142,none,2024-09-26\r\n13578,1562,AMER,home,online,175.77,3,0.031,none,2024-11-23\r\n13579,1917,LATAM,grocery,online,35.22,3,0.138,bundle,2024-06-11\r\n13580,1380,AMER,grocery,online,79.57,2,0.017,coupon,2024-08-16\r\n13581,1738,LATAM,toys,online,135.11,7,0.023,none,2024-07-25\r\n13582,2367,AMER,grocery,retail,93.71,5,0.126,none,2024-01-13\r\n13583,1342,LATAM,electronics,online,66.69,6,0.174,loyalty,2024-05-21\r\n13584,1442,EMEA,grocery,mobile,37.37,8,0.002,none,2024-12-13\r\n13585,2396,AMER,home,online,71.65,1,0.162,loyalty,2024-05-20\r\n13586,1198,AMER,home,online,41.58,2,0.004,none,2024-03-01\r\n13587,1793,LATAM,home,online,62.11,4,0.243,coupon,2024-04-06\r\n13588,2131,APAC,home,retail,20.31,3,0.147,coupon,2024-06-09\r\n13589,1767,AMER,fashion,mobile,13.49,4,0.159,loyalty,2024-01-11\r\n13590,1173,LATAM,grocery,mobile,102.60,6,0.086,none,2024-09-06\r\n13591,2164,AMER,grocery,mobile,26.81,5,0.059,bundle,2024-10-15\r\n13592,1631,APAC,fashion,retail,49.13,5,0.166,loyalty,2024-01-03\r\n1359
3,2459,AMER,sports,mobile,60.99,1,0.153,none,2024-09-03\r\n13594,1506,EMEA,grocery,online,79.71,7,0.116,bundle,2024-02-12\r\n13595,1023,APAC,home,partner,31.15,1,0.164,none,2024-02-16\r\n13596,2024,AMER,home,online,32.94,2,0.080,loyalty,2024-04-04\r\n13597,2201,AMER,home,online,45.65,4,0.120,none,2024-04-27\r\n13598,2438,AMER,fashion,partner,52.01,4,0.022,none,2024-04-22\r\n13599,1367,AMER,electronics,online,50.64,7,0.186,none,2024-06-14\r\n13600,2262,APAC,sports,mobile,177.20,3,0.139,bundle,2024-01-02\r\n13601,2397,LATAM,grocery,mobile,141.82,3,0.149,none,2024-11-12\r\n13602,2162,EMEA,fashion,online,55.09,2,0.231,none,2024-02-13\r\n13603,2098,AMER,toys,online,35.10,6,0.184,loyalty,2024-01-23\r\n13604,1224,APAC,sports,retail,64.09,6,0.080,none,2024-04-12\r\n13605,1308,EMEA,toys,retail,23.54,4,0.029,loyalty,2024-09-27\r\n13606,1182,EMEA,home,online,50.82,6,0.162,none,2024-05-09\r\n13607,1350,LATAM,sports,retail,41.60,5,0.148,none,2024-12-04\r\n13608,2348,EMEA,home,retail,48.61,2,0.081,bundle,2024-11-11\r\n13609,1947,EMEA,sports,retail,44.71,7,0.150,none,2024-03-21\r\n13610,1908,AMER,toys,online,55.13,3,0.213,bundle,2024-09-24\r\n13611,1487,AMER,toys,retail,67.29,3,0.247,none,2024-02-28\r\n13612,2102,APAC,home,online,120.23,5,0.242,none,2024-06-02\r\n13613,1439,LATAM,grocery,retail,32.67,7,0.120,none,2024-04-04\r\n13614,1583,AMER,electronics,retail,42.03,5,0.093,none,2024-06-05\r\n13615,2413,AMER,electronics,online,109.95,6,0.130,none,2024-04-20\r\n13616,1701,LATAM,grocery,online,128.41,3,0.087,none,2024-07-15\r\n13617,2172,EMEA,electronics,online,82.59,7,0.185,none,2024-10-03\r\n13618,2188,EMEA,electronics,retail,54.52,4,0.014,coupon,2024-11-20\r\n13619,1203,AMER,electronics,online,65.71,8,0.189,coupon,2024-01-18\r\n13620,1680,LATAM,electronics,online,24.55,7,0.172,bundle,2024-03-05\r\n13621,2313,LATAM,home,retail,90.91,3,0.133,coupon,2024-08-07\r\n13622,2235,AMER,home,online,41.76,2,0.099,bundle,2024-09-16\r\n13623,1993,APAC,sports,online,37.76,1,0.055,loyalty,2024-
02-25\r\n13624,1663,LATAM,grocery,mobile,46.92,6,0.100,bundle,2024-07-16\r\n13625,1396,EMEA,fashion,retail,39.99,7,0.105,none,2024-04-23\r\n13626,1871,APAC,home,retail,93.46,2,0.144,none,2024-06-13\r\n13627,1670,EMEA,grocery,mobile,30.96,3,0.117,none,2024-02-21\r\n13628,2350,APAC,grocery,retail,59.99,4,0.142,none,2024-09-07\r\n13629,1228,APAC,grocery,retail,33.43,4,0.113,none,2024-06-26\r\n13630,2338,AMER,home,mobile,104.11,5,0.027,loyalty,2024-01-21\r\n13631,1030,EMEA,electronics,retail,27.47,3,0.044,none,2024-06-22\r\n13632,1869,AMER,toys,retail,76.47,8,0.136,none,2024-10-04\r\n13633,2203,APAC,electronics,online,45.86,7,0.206,none,2024-03-12\r\n13634,1568,AMER,grocery,online,67.77,5,0.035,loyalty,2024-06-27\r\n13635,1963,AMER,electronics,retail,54.13,8,0.003,none,2024-06-06\r\n13636,1200,EMEA,fashion,online,44.33,6,0.191,coupon,2024-03-17\r\n13637,2441,EMEA,fashion,online,73.22,8,0.016,none,2024-06-07\r\n13638,1329,APAC,grocery,mobile,84.55,4,0.139,none,2024-03-19\r\n13639,1172,APAC,home,retail,87.93,1,0.158,coupon,2024-11-12\r\n13640,2104,EMEA,fashion,partner,37.44,4,0.040,coupon,2024-05-02\r\n13641,1240,EMEA,grocery,mobile,53.82,3,0.221,none,2024-10-19\r\n13642,1654,EMEA,home,online,92.01,1,0.213,coupon,2024-04-21\r\n13643,2076,AMER,fashion,online,55.95,3,0.195,loyalty,2024-05-25\r\n13644,1812,EMEA,electronics,online,100.96,2,0.095,none,2024-02-11\r\n13645,1089,LATAM,fashion,retail,45.56,3,0.191,none,2024-05-01\r\n13646,1955,AMER,electronics,mobile,137.63,8,0.013,bundle,2024-07-25\r\n13647,1074,LATAM,grocery,mobile,51.93,2,0.240,coupon,2024-01-04\r\n13648,1705,AMER,electronics,mobile,55.32,1,0.244,none,2024-11-09\r\n13649,2379,AMER,grocery,mobile,51.84,5,0.248,none,2024-06-28\r\n13650,2106,LATAM,sports,online,39.29,4,0.144,none,2024-12-28\r\n13651,1740,EMEA,grocery,retail,45.25,2,0.232,none,2024-06-19\r\n13652,1749,LATAM,grocery,online,26.37,4,0.169,bundle,2024-10-17\r\n13653,2457,EMEA,fashion,online,38.39,3,0.037,coupon,2024-07-24\r\n13654,1102,APAC,grocery,mob
ile,51.47,8,0.230,none,2024-10-28\r\n13655,2297,EMEA,fashion,online,122.78,5,0.248,loyalty,2024-06-18\r\n13656,1515,EMEA,fashion,retail,91.93,7,0.176,none,2024-09-25\r\n13657,1245,APAC,toys,retail,30.75,6,0.220,coupon,2024-06-17\r\n13658,1434,EMEA,grocery,online,82.18,6,0.153,coupon,2024-04-24\r\n13659,1654,EMEA,home,mobile,48.90,4,0.179,loyalty,2024-04-21\r\n13660,1788,AMER,toys,online,53.05,6,0.067,loyalty,2024-09-18\r\n13661,1499,EMEA,sports,retail,68.63,2,0.227,none,2024-10-06\r\n13662,2257,AMER,grocery,mobile,22.78,5,0.180,none,2024-12-27\r\n13663,2121,APAC,fashion,retail,109.63,1,0.193,coupon,2024-06-01\r\n13664,1645,EMEA,toys,mobile,44.36,4,0.114,none,2024-06-13\r\n13665,1110,LATAM,electronics,retail,76.30,3,0.052,bundle,2024-01-05\r\n13666,1829,EMEA,sports,online,68.15,8,0.193,none,2024-01-10\r\n13667,1711,APAC,electronics,online,77.54,6,0.174,none,2024-04-25\r\n13668,2129,APAC,toys,partner,42.31,4,0.177,loyalty,2024-05-13\r\n13669,1515,EMEA,grocery,online,41.23,3,0.249,none,2024-09-01\r\n13670,1560,AMER,grocery,online,78.38,6,0.246,coupon,2024-05-03\r\n13671,1879,EMEA,grocery,online,115.38,7,0.225,none,2024-02-07\r\n13672,1872,LATAM,grocery,retail,24.64,1,0.242,bundle,2024-12-16\r\n13673,1246,EMEA,electronics,retail,117.15,8,0.140,coupon,2024-03-13\r\n13674,1594,LATAM,electronics,online,69.41,5,0.034,none,2024-09-03\r\n13675,1442,EMEA,electronics,partner,51.27,1,0.060,none,2024-12-21\r\n13676,2311,LATAM,toys,online,30.43,7,0.118,coupon,2024-12-15\r\n13677,1124,AMER,fashion,online,38.42,8,0.024,none,2024-10-04\r\n13678,2055,AMER,fashion,online,103.63,2,0.011,none,2024-05-21\r\n13679,1605,APAC,home,online,53.42,5,0.071,none,2024-10-05\r\n13680,2343,EMEA,grocery,mobile,157.61,1,0.149,none,2024-06-20\r\n13681,1418,LATAM,grocery,mobile,90.73,4,0.196,none,2024-09-12\r\n13682,1597,APAC,grocery,retail,44.54,4,0.141,none,2024-08-20\r\n13683,1892,LATAM,grocery,partner,53.96,2,0.228,none,2024-04-03\r\n13684,2035,LATAM,grocery,retail,63.64,7,0.128,none,2024-12-04\r\n13
685,2219,LATAM,toys,retail,47.90,6,0.082,loyalty,2024-02-03\r\n13686,1069,APAC,grocery,retail,78.44,4,0.047,bundle,2024-02-08\r\n13687,1755,APAC,grocery,retail,90.30,1,0.187,coupon,2024-11-19\r\n13688,1410,AMER,electronics,mobile,71.57,5,0.114,none,2024-01-16\r\n13689,1181,LATAM,grocery,online,74.66,8,0.247,coupon,2024-05-24\r\n13690,1393,LATAM,sports,retail,54.82,1,0.212,none,2024-01-23\r\n13691,2115,APAC,grocery,online,27.47,5,0.000,coupon,2024-12-24\r\n13692,1955,AMER,fashion,online,64.94,4,0.105,loyalty,2024-11-22\r\n13693,1905,APAC,fashion,online,47.70,5,0.057,none,2024-08-21\r\n13694,1708,LATAM,electronics,retail,74.61,4,0.062,none,2024-12-15\r\n13695,1053,AMER,fashion,retail,45.63,6,0.076,none,2024-09-20\r\n13696,2469,LATAM,grocery,retail,39.66,3,0.032,none,2024-09-14\r\n13697,1005,LATAM,home,online,71.44,8,0.094,none,2024-07-24\r\n13698,1231,AMER,sports,mobile,33.27,7,0.061,coupon,2024-03-18\r\n13699,2345,LATAM,fashion,online,77.01,8,0.165,none,2024-06-21\r\n13700,1598,EMEA,grocery,online,54.45,4,0.142,none,2024-07-09\r\n13701,2195,APAC,sports,online,44.13,1,0.170,loyalty,2024-08-23\r\n13702,2355,EMEA,electronics,online,60.06,1,0.055,coupon,2024-01-23\r\n13703,1205,APAC,home,retail,68.95,2,0.242,bundle,2024-01-10\r\n13704,1902,AMER,grocery,retail,61.87,3,0.168,none,2024-09-14\r\n13705,2433,APAC,fashion,online,76.40,8,0.175,loyalty,2024-05-23\r\n13706,1379,EMEA,electronics,mobile,63.07,5,0.176,none,2024-03-06\r\n13707,1843,EMEA,home,online,67.42,7,0.109,none,2024-07-23\r\n13708,2045,LATAM,electronics,retail,91.89,6,0.148,bundle,2024-12-07\r\n13709,1960,EMEA,electronics,online,38.98,8,0.096,bundle,2024-06-03\r\n13710,1055,AMER,sports,mobile,110.62,7,0.171,none,2024-04-18\r\n13711,1138,AMER,electronics,online,41.47,5,0.235,none,2024-10-08\r\n13712,1271,EMEA,sports,mobile,53.68,1,0.120,loyalty,2024-04-23\r\n13713,2349,APAC,grocery,retail,77.49,5,0.028,none,2024-01-15\r\n13714,1422,LATAM,toys,mobile,12.52,7,0.243,loyalty,2024-03-11\r\n13715,2359,LATAM,home,online
,69.30,1,0.228,coupon,2024-06-24\r\n13716,1844,APAC,fashion,online,98.74,2,0.058,none,2024-12-17\r\n13717,2288,AMER,home,retail,56.95,8,0.210,coupon,2024-05-04\r\n13718,1892,LATAM,grocery,retail,39.61,6,0.233,loyalty,2024-01-10\r\n13719,2192,APAC,electronics,retail,35.09,3,0.085,none,2024-09-22\r\n13720,1627,LATAM,sports,retail,68.34,6,0.148,none,2024-11-12\r\n13721,2376,LATAM,fashion,retail,43.49,8,0.244,coupon,2024-05-20\r\n13722,1491,EMEA,fashion,online,82.53,4,0.176,none,2024-12-04\r\n13723,2222,LATAM,electronics,online,69.62,5,0.107,none,2024-09-02\r\n13724,1825,AMER,electronics,online,72.75,7,0.141,none,2024-02-24\r\n13725,1927,EMEA,fashion,retail,72.02,1,0.085,loyalty,2024-07-25\r\n13726,1649,APAC,home,retail,34.55,1,0.080,coupon,2024-06-28\r\n13727,1592,LATAM,fashion,partner,50.38,2,0.240,coupon,2024-07-10\r\n13728,1511,EMEA,home,online,25.64,7,0.210,none,2024-05-10\r\n13729,2200,LATAM,electronics,online,45.57,1,0.011,none,2024-06-07\r\n13730,1032,AMER,grocery,mobile,95.78,7,0.028,none,2024-11-20\r\n13731,2358,AMER,fashion,mobile,66.55,4,0.136,bundle,2024-08-13\r\n13732,2252,EMEA,toys,retail,88.60,1,0.055,none,2024-08-19\r\n13733,2053,AMER,sports,online,137.08,1,0.208,none,2024-03-12\r\n13734,1516,EMEA,grocery,retail,137.38,5,0.120,none,2024-01-10\r\n13735,1890,LATAM,grocery,online,131.71,6,0.056,coupon,2024-09-24\r\n13736,1278,AMER,home,mobile,68.86,5,0.126,none,2024-08-19\r\n13737,2489,LATAM,electronics,retail,52.18,5,0.081,bundle,2024-04-14\r\n13738,1837,LATAM,fashion,retail,74.84,3,0.062,none,2024-01-26\r\n13739,2189,LATAM,fashion,online,64.73,4,0.138,loyalty,2024-07-08\r\n13740,1989,LATAM,toys,online,84.15,4,0.181,loyalty,2024-05-21\r\n13741,1704,AMER,toys,online,50.46,5,0.185,none,2024-12-14\r\n13742,1961,EMEA,electronics,retail,59.52,4,0.191,coupon,2024-07-15\r\n13743,1471,EMEA,grocery,mobile,30.71,5,0.090,none,2024-07-25\r\n13744,2403,LATAM,electronics,mobile,76.99,1,0.217,none,2024-08-24\r\n13745,1791,LATAM,electronics,retail,86.19,1,0.172,none,2024
-03-09\r\n13746,1645,EMEA,home,online,62.43,3,0.112,none,2024-05-21\r\n13747,2183,EMEA,home,online,34.17,4,0.111,none,2024-07-04\r\n13748,1560,AMER,grocery,retail,55.30,8,0.195,none,2024-09-03\r\n13749,1614,EMEA,grocery,mobile,39.40,7,0.009,none,2024-12-05\r\n13750,2201,AMER,grocery,online,57.67,5,0.028,none,2024-05-02\r\n13751,2452,LATAM,toys,online,120.96,5,0.151,loyalty,2024-01-22\r\n13752,1221,LATAM,grocery,online,96.71,4,0.188,loyalty,2024-12-06\r\n13753,1943,AMER,sports,mobile,97.71,3,0.042,coupon,2024-08-27\r\n13754,1204,AMER,toys,retail,179.08,1,0.019,coupon,2024-09-26\r\n13755,1785,EMEA,grocery,online,31.84,8,0.212,coupon,2024-03-14\r\n13756,1890,LATAM,grocery,retail,85.65,5,0.166,none,2024-12-18\r\n13757,1368,EMEA,grocery,retail,67.68,4,0.165,none,2024-12-26\r\n13758,2273,APAC,sports,retail,76.05,7,0.014,bundle,2024-12-13\r\n13759,2060,LATAM,toys,mobile,41.50,5,0.026,bundle,2024-01-27\r\n13760,2132,LATAM,electronics,retail,37.38,6,0.044,none,2024-10-05\r\n13761,2461,LATAM,grocery,partner,42.30,8,0.109,none,2024-05-22\r\n13762,2188,EMEA,grocery,online,82.17,3,0.129,bundle,2024-07-08\r\n13763,2320,LATAM,sports,online,68.24,6,0.135,loyalty,2024-05-05\r\n13764,1358,APAC,home,online,58.49,3,0.015,none,2024-04-12\r\n13765,2213,APAC,home,online,87.71,1,0.178,loyalty,2024-09-21\r\n13766,1213,EMEA,home,retail,45.32,5,0.127,none,2024-03-06\r\n13767,1996,APAC,grocery,mobile,179.65,7,0.038,bundle,2024-02-21\r\n13768,1255,AMER,sports,online,36.47,2,0.089,none,2024-01-04\r\n13769,1059,AMER,home,retail,46.30,1,0.007,coupon,2024-03-26\r\n13770,1859,AMER,fashion,online,73.89,6,0.097,none,2024-06-05\r\n13771,1712,LATAM,electronics,online,53.11,1,0.145,none,2024-01-08\r\n13772,1913,LATAM,toys,retail,83.57,2,0.117,bundle,2024-09-03\r\n13773,1445,APAC,sports,retail,39.00,1,0.062,bundle,2024-01-27\r\n13774,1065,AMER,electronics,retail,37.32,3,0.057,coupon,2024-08-17\r\n13775,2425,APAC,grocery,online,103.97,3,0.072,none,2024-03-27\r\n13776,2264,LATAM,home,retail,29.46,3,0.203,lo
yalty,2024-08-05\r\n13777,2042,LATAM,fashion,online,48.58,3,0.003,none,2024-02-12\r\n13778,1008,AMER,toys,online,51.28,3,0.046,bundle,2024-05-25\r\n13779,1806,APAC,toys,retail,117.28,1,0.228,none,2024-01-28\r\n13780,2214,AMER,home,online,177.32,5,0.191,coupon,2024-07-17\r\n13781,2101,APAC,electronics,mobile,123.61,5,0.054,none,2024-09-25\r\n13782,2042,LATAM,electronics,online,31.35,6,0.125,none,2024-02-01\r\n13783,2139,AMER,grocery,partner,47.23,5,0.029,loyalty,2024-10-24\r\n13784,2406,EMEA,grocery,online,41.59,3,0.207,coupon,2024-06-27\r\n13785,2221,LATAM,electronics,retail,45.49,7,0.086,none,2024-02-04\r\n13786,2176,AMER,home,retail,86.87,6,0.154,none,2024-04-22\r\n13787,2036,APAC,electronics,retail,36.29,3,0.113,bundle,2024-06-14\r\n13788,1715,AMER,sports,online,49.34,5,0.208,coupon,2024-06-02\r\n13789,1520,APAC,toys,online,40.24,3,0.062,bundle,2024-02-27\r\n13790,1867,AMER,grocery,online,42.48,4,0.073,none,2024-07-17\r\n13791,1509,AMER,home,online,68.67,2,0.098,coupon,2024-10-03\r\n13792,2108,AMER,sports,retail,14.67,7,0.080,bundle,2024-12-09\r\n13793,2235,AMER,home,online,50.04,3,0.243,loyalty,2024-02-22\r\n13794,2134,AMER,grocery,mobile,26.82,3,0.118,bundle,2024-10-28\r\n13795,1069,APAC,electronics,online,65.92,6,0.068,none,2024-09-01\r\n13796,1413,LATAM,electronics,retail,41.77,8,0.218,none,2024-12-10\r\n13797,1904,APAC,grocery,mobile,73.80,7,0.113,none,2024-11-15\r\n13798,2147,LATAM,fashion,retail,45.49,8,0.011,bundle,2024-08-26\r\n13799,2450,EMEA,toys,online,129.80,6,0.102,none,2024-08-05\r\n13800,1637,APAC,grocery,mobile,32.13,6,0.230,none,2024-03-14\r\n13801,1278,AMER,home,online,43.44,4,0.187,loyalty,2024-11-16\r\n13802,2161,LATAM,fashion,retail,36.91,7,0.215,bundle,2024-12-07\r\n13803,2291,EMEA,home,online,70.06,5,0.146,none,2024-12-23\r\n13804,2430,APAC,grocery,retail,101.38,8,0.235,bundle,2024-03-19\r\n13805,1423,EMEA,grocery,online,23.51,8,0.138,none,2024-01-11\r\n13806,2262,APAC,toys,online,85.32,7,0.040,none,2024-01-18\r\n13807,2340,EMEA,grocery,re
tail,20.43,5,0.098,none,2024-11-07\r\n13808,2478,AMER,electronics,retail,50.07,6,0.176,coupon,2024-07-28\r\n13809,2297,EMEA,fashion,mobile,97.62,2,0.107,bundle,2024-12-04\r\n13810,1671,APAC,toys,retail,32.83,3,0.094,none,2024-01-19\r\n13811,2189,LATAM,home,mobile,129.15,5,0.234,loyalty,2024-05-28\r\n13812,2320,LATAM,grocery,mobile,144.59,5,0.180,none,2024-12-13\r\n13813,1651,LATAM,fashion,online,32.06,2,0.139,none,2024-11-05\r\n13814,1055,AMER,electronics,retail,94.47,2,0.002,coupon,2024-04-07\r\n13815,1194,APAC,sports,mobile,30.18,8,0.156,none,2024-12-07\r\n13816,1299,LATAM,electronics,retail,101.76,4,0.056,bundle,2024-06-11\r\n13817,1712,LATAM,home,retail,108.41,7,0.228,none,2024-02-20\r\n13818,2215,LATAM,grocery,online,88.91,2,0.189,none,2024-08-01\r\n13819,2419,LATAM,electronics,online,53.80,5,0.190,loyalty,2024-02-06\r\n13820,1105,AMER,grocery,online,21.46,7,0.215,none,2024-10-22\r\n13821,1148,AMER,grocery,online,50.24,8,0.038,none,2024-11-10\r\n13822,2216,AMER,toys,retail,156.78,7,0.096,none,2024-07-23\r\n13823,1434,EMEA,sports,retail,76.63,7,0.148,none,2024-12-22\r\n13824,1747,EMEA,home,online,41.08,2,0.023,none,2024-06-28\r\n13825,2022,LATAM,grocery,retail,17.34,2,0.041,bundle,2024-07-14\r\n13826,1562,AMER,electronics,online,141.60,2,0.155,loyalty,2024-01-27\r\n13827,1593,AMER,grocery,online,49.62,6,0.194,none,2024-06-21\r\n13828,1069,APAC,electronics,online,211.70,5,0.014,bundle,2024-11-11\r\n13829,1187,AMER,fashion,mobile,42.12,5,0.210,coupon,2024-04-18\r\n13830,1686,LATAM,home,partner,44.49,4,0.097,loyalty,2024-07-25\r\n13831,1666,LATAM,home,mobile,83.65,7,0.159,coupon,2024-03-22\r\n13832,1803,LATAM,grocery,online,49.01,3,0.057,none,2024-06-03\r\n13833,1945,AMER,grocery,online,246.01,4,0.099,bundle,2024-10-27\r\n13834,1846,APAC,home,mobile,43.36,8,0.137,none,2024-04-13\r\n13835,1691,LATAM,fashion,online,51.41,1,0.007,coupon,2024-01-19\r\n13836,1706,EMEA,electronics,online,57.89,7,0.205,coupon,2024-08-18\r\n13837,2384,LATAM,grocery,retail,41.49,1,0.223,non
e,2024-08-23\r\n13838,2494,AMER,sports,retail,20.64,8,0.198,loyalty,2024-12-26\r\n13839,1997,APAC,sports,retail,71.29,5,0.139,none,2024-11-18\r\n13840,1907,EMEA,sports,online,69.75,2,0.145,none,2024-04-23\r\n13841,1774,EMEA,home,online,61.45,7,0.213,none,2024-09-23\r\n13842,2127,LATAM,grocery,online,73.45,4,0.240,none,2024-10-14\r\n13843,2253,AMER,home,online,33.18,5,0.109,coupon,2024-08-06\r\n13844,1502,APAC,fashion,retail,55.48,8,0.190,coupon,2024-09-21\r\n13845,1847,LATAM,home,online,76.78,3,0.173,coupon,2024-02-24\r\n13846,1946,AMER,toys,online,38.71,4,0.180,none,2024-02-23\r\n13847,1231,AMER,electronics,online,41.75,8,0.208,none,2024-08-16\r\n13848,1553,LATAM,grocery,partner,43.20,3,0.237,loyalty,2024-12-18\r\n13849,2329,LATAM,fashion,retail,53.93,7,0.155,none,2024-09-13\r\n13850,1501,AMER,toys,online,115.32,4,0.039,none,2024-11-21\r\n13851,1718,EMEA,toys,partner,38.62,4,0.222,none,2024-12-26\r\n13852,2017,EMEA,home,online,43.30,4,0.071,none,2024-06-19\r\n13853,2397,LATAM,electronics,retail,75.49,2,0.221,coupon,2024-11-06\r\n13854,1344,EMEA,electronics,mobile,41.98,6,0.042,none,2024-01-11\r\n13855,1265,APAC,fashion,online,101.48,8,0.015,coupon,2024-09-08\r\n13856,2464,LATAM,grocery,online,171.58,6,0.081,loyalty,2024-08-23\r\n13857,2303,EMEA,home,online,95.16,2,0.152,bundle,2024-11-28\r\n13858,1624,AMER,grocery,online,47.72,3,0.222,none,2024-03-18\r\n13859,1609,LATAM,grocery,online,38.07,8,0.195,none,2024-05-19\r\n13860,2089,EMEA,sports,online,55.73,4,0.141,loyalty,2024-06-01\r\n13861,1168,APAC,home,partner,32.60,1,0.152,coupon,2024-06-06\r\n13862,1907,EMEA,fashion,partner,26.48,4,0.121,none,2024-10-02\r\n13863,1977,APAC,toys,online,31.49,7,0.247,none,2024-01-02\r\n13864,1502,APAC,toys,mobile,86.95,4,0.091,none,2024-01-03\r\n13865,2124,AMER,grocery,online,33.46,4,0.033,loyalty,2024-05-24\r\n13866,1797,LATAM,electronics,retail,95.87,3,0.103,none,2024-06-15\r\n13867,1057,LATAM,sports,online,63.21,5,0.221,none,2024-09-05\r\n13868,2076,AMER,grocery,retail,70.72,8,0.
011,none,2024-11-25\r\n13869,2059,AMER,grocery,retail,39.22,2,0.084,coupon,2024-12-02\r\n13870,2241,APAC,toys,online,56.13,6,0.165,coupon,2024-01-18\r\n13871,1083,AMER,sports,retail,70.13,4,0.171,none,2024-01-25\r\n13872,1728,AMER,home,mobile,126.73,7,0.239,none,2024-10-02\r\n13873,1425,EMEA,grocery,online,68.60,4,0.041,none,2024-01-22\r\n13874,1436,APAC,home,online,58.17,6,0.091,coupon,2024-11-23\r\n13875,2045,LATAM,toys,online,38.75,3,0.044,coupon,2024-02-08\r\n13876,1140,LATAM,grocery,retail,57.52,3,0.102,coupon,2024-10-05\r\n13877,1258,EMEA,home,online,51.33,4,0.224,none,2024-08-16\r\n13878,1063,AMER,grocery,online,34.69,7,0.050,coupon,2024-06-16\r\n13879,1771,AMER,home,retail,25.81,1,0.075,none,2024-04-01\r\n13880,1715,AMER,sports,retail,80.84,3,0.237,none,2024-03-15\r\n13881,2194,APAC,grocery,online,38.19,2,0.145,coupon,2024-11-11\r\n13882,1175,AMER,grocery,retail,67.06,8,0.092,loyalty,2024-10-27\r\n13883,1547,AMER,grocery,online,89.53,8,0.148,coupon,2024-09-13\r\n13884,1221,LATAM,grocery,online,71.74,2,0.061,none,2024-12-09\r\n13885,1952,EMEA,electronics,online,48.15,3,0.238,none,2024-01-26\r\n13886,1766,AMER,grocery,online,69.41,1,0.136,none,2024-05-02\r\n13887,1833,EMEA,home,online,56.23,7,0.078,none,2024-11-17\r\n13888,2491,APAC,home,retail,32.93,3,0.146,none,2024-01-06\r\n13889,1137,APAC,fashion,retail,44.36,3,0.206,bundle,2024-03-16\r\n13890,2386,EMEA,grocery,retail,62.07,3,0.038,loyalty,2024-01-09\r\n13891,2079,EMEA,sports,mobile,27.76,2,0.229,coupon,2024-04-03\r\n13892,1526,EMEA,grocery,online,74.06,4,0.080,loyalty,2024-06-02\r\n13893,1173,LATAM,toys,retail,89.93,2,0.042,none,2024-03-22\r\n13894,2112,LATAM,grocery,retail,59.79,2,0.109,bundle,2024-01-03\r\n13895,1878,EMEA,sports,online,79.05,4,0.093,coupon,2024-09-09\r\n13896,1939,LATAM,electronics,mobile,39.49,7,0.088,none,2024-02-08\r\n13897,2200,LATAM,electronics,online,78.18,5,0.096,coupon,2024-07-12\r\n13898,1695,LATAM,electronics,online,93.14,2,0.106,loyalty,2024-03-02\r\n13899,1756,EMEA,fashion,o
nline,32.46,8,0.049,coupon,2024-10-09\r\n13900,1773,LATAM,sports,online,79.50,8,0.133,loyalty,2024-06-24\r\n13901,2092,AMER,home,retail,52.97,5,0.246,none,2024-05-20\r\n13902,1577,AMER,grocery,online,48.26,2,0.158,none,2024-03-05\r\n13903,1647,LATAM,sports,mobile,149.10,3,0.175,coupon,2024-11-17\r\n13904,1787,APAC,sports,online,91.53,3,0.037,coupon,2024-09-06\r\n13905,2044,APAC,electronics,mobile,67.81,2,0.081,loyalty,2024-10-08\r\n13906,1518,AMER,toys,online,33.23,4,0.241,coupon,2024-03-08\r\n13907,1186,APAC,fashion,retail,55.24,7,0.120,none,2024-10-18\r\n13908,2476,APAC,grocery,mobile,86.96,4,0.122,loyalty,2024-07-08\r\n13909,1960,EMEA,grocery,retail,28.73,5,0.124,none,2024-05-14\r\n13910,1909,APAC,electronics,mobile,23.60,7,0.210,loyalty,2024-10-12\r\n13911,1450,EMEA,fashion,retail,29.99,2,0.227,none,2024-12-13\r\n13912,2473,EMEA,fashion,online,60.98,4,0.188,coupon,2024-09-16\r\n13913,2485,AMER,electronics,online,81.18,1,0.130,none,2024-09-01\r\n13914,2098,AMER,electronics,retail,24.29,7,0.064,coupon,2024-09-23\r\n13915,1821,LATAM,grocery,online,83.37,7,0.160,coupon,2024-04-25\r\n13916,2116,LATAM,electronics,online,36.24,6,0.084,none,2024-07-13\r\n13917,1141,AMER,electronics,online,51.72,8,0.002,none,2024-04-13\r\n13918,1428,APAC,electronics,online,20.05,3,0.127,coupon,2024-01-21\r\n13919,1882,AMER,grocery,online,58.77,4,0.225,loyalty,2024-05-03\r\n13920,1350,LATAM,home,retail,50.09,5,0.222,coupon,2024-05-14\r\n13921,1957,AMER,grocery,online,78.49,4,0.113,coupon,2024-02-12\r\n13922,2096,LATAM,toys,retail,65.90,2,0.114,coupon,2024-06-25\r\n13923,1572,LATAM,electronics,online,73.74,1,0.001,coupon,2024-07-02\r\n13924,2197,LATAM,grocery,online,65.36,4,0.023,loyalty,2024-06-23\r\n13925,1775,EMEA,home,online,32.12,8,0.179,none,2024-05-05\r\n13926,1074,LATAM,electronics,online,39.60,1,0.185,coupon,2024-07-05\r\n13927,2307,LATAM,sports,retail,47.65,1,0.238,none,2024-03-12\r\n13928,1122,AMER,grocery,retail,64.50,4,0.005,none,2024-07-01\r\n13929,1313,EMEA,home,retail,62.51
,8,0.172,bundle,2024-10-25\r\n13930,2417,LATAM,sports,mobile,68.88,2,0.096,coupon,2024-02-26\r\n13931,1846,APAC,electronics,online,78.83,6,0.087,none,2024-05-23\r\n13932,2205,AMER,toys,online,53.48,3,0.176,coupon,2024-09-07\r\n13933,2375,AMER,sports,retail,176.95,5,0.013,coupon,2024-03-22\r\n13934,2323,AMER,grocery,online,66.67,7,0.241,none,2024-11-12\r\n13935,1011,APAC,toys,retail,100.54,8,0.105,coupon,2024-09-26\r\n13936,1337,APAC,sports,mobile,75.10,1,0.124,none,2024-10-14\r\n13937,1644,EMEA,grocery,retail,59.12,8,0.119,loyalty,2024-06-06\r\n13938,1091,EMEA,sports,online,115.48,7,0.111,none,2024-01-22\r\n13939,2244,LATAM,sports,online,69.35,6,0.068,bundle,2024-07-02\r\n13940,2119,AMER,fashion,online,68.60,7,0.018,coupon,2024-07-27\r\n13941,1186,APAC,electronics,retail,51.11,3,0.143,none,2024-12-14\r\n13942,2320,LATAM,grocery,retail,90.34,8,0.233,coupon,2024-10-09\r\n13943,1827,EMEA,electronics,online,70.70,4,0.185,none,2024-06-12\r\n13944,1702,AMER,grocery,online,67.90,4,0.069,bundle,2024-11-19\r\n13945,2498,LATAM,home,online,118.13,8,0.153,bundle,2024-08-21\r\n13946,2245,APAC,fashion,online,21.45,6,0.217,bundle,2024-10-02\r\n13947,1800,APAC,home,partner,80.61,4,0.101,loyalty,2024-10-27\r\n13948,1296,LATAM,electronics,retail,43.75,1,0.217,none,2024-12-03\r\n13949,1312,EMEA,home,online,39.66,4,0.137,none,2024-08-13\r\n13950,1715,AMER,sports,retail,32.21,8,0.234,bundle,2024-08-23\r\n13951,1243,AMER,home,online,43.18,8,0.041,loyalty,2024-03-05\r\n13952,2179,LATAM,toys,online,150.21,4,0.111,none,2024-06-15\r\n13953,2238,AMER,electronics,mobile,49.50,3,0.129,bundle,2024-04-25\r\n13954,1385,LATAM,home,mobile,68.69,4,0.233,none,2024-06-06\r\n13955,1692,LATAM,sports,online,27.88,1,0.178,coupon,2024-06-20\r\n13956,1061,APAC,toys,retail,29.25,5,0.123,coupon,2024-08-08\r\n13957,1575,APAC,grocery,partner,41.38,7,0.211,none,2024-03-09\r\n13958,1754,EMEA,home,online,50.14,6,0.115,coupon,2024-06-17\r\n13959,1485,APAC,grocery,mobile,31.15,7,0.242,none,2024-04-13\r\n13960,2363,AM
ER,sports,online,127.94,1,0.205,bundle,2024-05-05\r\n13961,2239,EMEA,electronics,online,67.98,5,0.039,coupon,2024-02-09\r\n13962,1760,LATAM,grocery,retail,74.40,3,0.063,none,2024-03-04\r\n13963,1839,APAC,home,retail,63.80,3,0.164,loyalty,2024-07-02\r\n13964,1318,LATAM,sports,online,129.34,4,0.052,none,2024-03-20\r\n13965,1205,APAC,fashion,retail,29.25,3,0.009,coupon,2024-09-21\r\n13966,2057,APAC,fashion,retail,74.55,5,0.245,none,2024-04-09\r\n13967,1309,EMEA,home,partner,99.40,1,0.167,bundle,2024-05-20\r\n13968,2422,APAC,toys,retail,110.62,2,0.232,coupon,2024-12-03\r\n13969,1934,EMEA,fashion,online,57.60,3,0.114,none,2024-09-22\r\n13970,1025,EMEA,electronics,mobile,43.81,8,0.218,none,2024-11-03\r\n13971,1395,APAC,toys,online,49.26,8,0.002,none,2024-06-09\r\n13972,1226,AMER,fashion,retail,42.46,4,0.231,none,2024-04-19\r\n13973,2360,EMEA,fashion,retail,65.83,4,0.153,none,2024-08-27\r\n13974,2408,EMEA,home,mobile,62.17,7,0.151,none,2024-02-16\r\n13975,2021,EMEA,sports,partner,63.96,7,0.120,coupon,2024-04-23\r\n13976,2081,APAC,sports,online,33.51,4,0.168,loyalty,2024-06-17\r\n13977,1848,EMEA,fashion,retail,39.32,8,0.045,none,2024-09-26\r\n13978,1937,APAC,sports,partner,78.90,5,0.107,none,2024-04-05\r\n13979,1378,APAC,grocery,retail,86.15,5,0.120,coupon,2024-09-08\r\n13980,1836,LATAM,electronics,retail,60.35,4,0.237,none,2024-11-19\r\n13981,1768,AMER,fashion,online,36.68,6,0.008,none,2024-02-05\r\n13982,2146,APAC,toys,mobile,48.18,3,0.089,coupon,2024-02-19\r\n13983,1604,EMEA,sports,online,63.43,5,0.023,none,2024-08-16\r\n13984,2356,LATAM,grocery,online,45.38,2,0.080,none,2024-03-18\r\n13985,1848,EMEA,grocery,online,36.29,7,0.054,coupon,2024-10-22\r\n13986,1508,LATAM,grocery,online,63.06,7,0.188,coupon,2024-10-02\r\n13987,1915,LATAM,toys,online,65.88,4,0.093,bundle,2024-02-18\r\n13988,1681,LATAM,home,online,54.90,4,0.169,bundle,2024-02-07\r\n13989,2102,APAC,electronics,mobile,42.29,3,0.107,coupon,2024-07-26\r\n13990,2308,AMER,grocery,mobile,137.97,5,0.089,none,2024-12-14\
r\n13991,1205,APAC,fashion,online,48.66,2,0.237,coupon,2024-10-22\r\n13992,1394,LATAM,grocery,mobile,115.57,6,0.096,coupon,2024-04-12\r\n13993,1354,AMER,sports,online,46.06,1,0.028,none,2024-08-02\r\n13994,1603,EMEA,electronics,retail,58.48,1,0.057,none,2024-01-26\r\n13995,1865,LATAM,toys,retail,30.67,6,0.094,none,2024-04-04\r\n13996,1369,AMER,toys,mobile,52.74,2,0.062,none,2024-07-20\r\n13997,1273,AMER,home,online,41.01,2,0.105,coupon,2024-01-07\r\n13998,1935,EMEA,electronics,mobile,87.48,2,0.115,coupon,2024-11-01\r\n13999,1038,APAC,home,online,56.27,4,0.203,bundle,2024-06-04\r\n14000,1242,LATAM,fashion,online,90.82,5,0.132,none,2024-11-22\r\n14001,1471,EMEA,fashion,retail,109.08,4,0.067,none,2024-03-15\r\n14002,1071,AMER,fashion,mobile,76.15,4,0.186,none,2024-03-24\r\n14003,1649,APAC,grocery,mobile,177.15,6,0.131,none,2024-05-04\r\n14004,2217,LATAM,sports,online,123.03,6,0.161,none,2024-06-26\r\n14005,1226,AMER,fashion,retail,49.86,5,0.056,none,2024-03-13\r\n14006,2033,LATAM,fashion,partner,58.19,7,0.200,bundle,2024-11-20\r\n14007,1752,APAC,home,online,97.30,5,0.202,bundle,2024-06-01\r\n14008,1668,AMER,fashion,retail,36.47,5,0.141,none,2024-04-17\r\n14009,1670,EMEA,home,online,101.66,1,0.073,none,2024-09-19\r\n14010,2386,EMEA,sports,online,46.24,6,0.143,coupon,2024-03-14\r\n14011,1304,LATAM,home,online,43.49,7,0.128,none,2024-01-16\r\n14012,1979,APAC,fashion,online,79.25,3,0.203,bundle,2024-06-07\r\n14013,1406,LATAM,grocery,online,92.19,4,0.209,coupon,2024-09-21\r\n14014,2470,EMEA,grocery,online,40.92,6,0.202,coupon,2024-07-16\r\n14015,2399,LATAM,home,mobile,129.15,6,0.181,none,2024-04-22\r\n14016,1389,LATAM,electronics,online,115.57,3,0.044,none,2024-03-02\r\n14017,1834,AMER,grocery,retail,34.18,4,0.059,none,2024-03-16\r\n14018,1284,APAC,electronics,retail,62.70,7,0.045,none,2024-03-19\r\n14019,2216,AMER,home,online,35.39,3,0.034,coupon,2024-12-02\r\n14020,2262,APAC,sports,retail,80.82,6,0.022,none,2024-05-05\r\n14021,1209,AMER,grocery,retail,49.91,7,0.019,none,2
024-06-01\r\n14022,1517,AMER,electronics,retail,34.13,2,0.004,none,2024-12-17\r\n14023,1592,LATAM,grocery,online,167.86,1,0.097,coupon,2024-03-09\r\n14024,1288,LATAM,electronics,online,61.42,2,0.145,none,2024-09-22\r\n14025,1768,AMER,electronics,online,97.55,5,0.112,none,2024-06-15\r\n14026,1821,LATAM,electronics,online,44.87,5,0.029,bundle,2024-01-26\r\n14027,2280,EMEA,electronics,retail,43.66,5,0.208,none,2024-10-18\r\n14028,1445,APAC,grocery,retail,72.18,1,0.160,loyalty,2024-08-15\r\n14029,2499,LATAM,home,online,53.97,5,0.116,none,2024-11-02\r\n14030,2234,LATAM,grocery,partner,203.47,7,0.142,none,2024-01-27\r\n14031,2369,LATAM,toys,online,37.79,3,0.113,none,2024-02-17\r\n14032,1327,APAC,toys,online,82.83,7,0.114,bundle,2024-09-22\r\n14033,2419,LATAM,fashion,online,103.69,7,0.032,none,2024-07-19\r\n14034,1489,AMER,electronics,partner,48.41,4,0.240,loyalty,2024-11-18\r\n14035,2369,LATAM,electronics,online,89.40,2,0.036,none,2024-01-09\r\n14036,2498,LATAM,grocery,partner,87.82,3,0.171,bundle,2024-01-19\r\n14037,2379,AMER,fashion,online,93.66,2,0.029,none,2024-08-12\r\n14038,2278,APAC,sports,online,25.20,5,0.163,none,2024-01-03\r\n14039,1153,AMER,fashion,online,29.95,8,0.186,none,2024-04-18\r\n14040,1921,LATAM,fashion,online,53.55,8,0.170,none,2024-08-19\r\n14041,1323,EMEA,home,online,33.81,8,0.177,none,2024-02-03\r\n14042,2390,AMER,home,retail,41.06,8,0.026,bundle,2024-02-17\r\n14043,1083,AMER,toys,online,59.29,4,0.171,none,2024-06-25\r\n14044,1060,LATAM,grocery,retail,45.79,5,0.022,loyalty,2024-11-17\r\n14045,2127,LATAM,fashion,online,59.33,1,0.234,none,2024-06-27\r\n14046,1852,AMER,home,online,93.26,2,0.187,none,2024-12-20\r\n14047,2102,APAC,grocery,retail,27.31,6,0.041,none,2024-03-08\r\n14048,2128,EMEA,sports,online,108.50,5,0.019,coupon,2024-03-25\r\n14049,1850,APAC,grocery,online,123.96,5,0.210,coupon,2024-04-23\r\n14050,1550,APAC,fashion,online,61.12,6,0.117,none,2024-12-09\r\n14051,1937,APAC,fashion,online,159.71,3,0.181,none,2024-12-25\r\n14052,1310,AMER,ho
me,online,131.44,4,0.074,none,2024-05-11\r\n14053,1750,LATAM,grocery,online,40.28,6,0.046,none,2024-12-05\r\n14054,1820,AMER,home,mobile,22.81,6,0.083,none,2024-09-19\r\n14055,2344,LATAM,home,retail,32.33,1,0.170,none,2024-11-24\r\n14056,2245,APAC,grocery,retail,21.67,7,0.168,none,2024-01-23\r\n14057,1743,LATAM,home,online,58.00,1,0.028,coupon,2024-06-15\r\n14058,1740,EMEA,grocery,online,54.15,4,0.148,none,2024-04-18\r\n14059,2239,EMEA,grocery,online,44.97,5,0.127,loyalty,2024-07-11\r\n14060,2006,APAC,grocery,online,114.53,7,0.154,coupon,2024-12-10\r\n14061,1963,AMER,fashion,online,93.75,6,0.023,bundle,2024-06-24\r\n14062,1440,AMER,home,online,79.77,8,0.177,none,2024-02-01\r\n14063,2240,LATAM,grocery,online,45.72,5,0.091,bundle,2024-03-16\r\n14064,1531,EMEA,sports,retail,32.53,3,0.233,bundle,2024-10-12\r\n14065,1859,AMER,toys,mobile,81.95,2,0.077,bundle,2024-06-24\r\n14066,1237,LATAM,grocery,online,47.93,4,0.110,none,2024-02-20\r\n14067,1546,EMEA,electronics,retail,61.85,8,0.186,bundle,2024-06-28\r\n14068,2266,LATAM,grocery,retail,89.44,4,0.234,none,2024-10-20\r\n14069,2099,AMER,grocery,online,49.38,7,0.224,none,2024-01-15\r\n14070,1712,LATAM,toys,online,37.60,3,0.172,none,2024-01-06\r\n14071,2032,AMER,electronics,online,62.66,2,0.145,coupon,2024-04-27\r\n14072,1252,APAC,home,online,41.67,7,0.235,none,2024-11-20\r\n14073,2405,AMER,home,retail,69.62,6,0.223,none,2024-09-20\r\n14074,1912,APAC,electronics,online,51.43,6,0.081,loyalty,2024-02-04\r\n14075,2441,EMEA,electronics,mobile,48.68,4,0.056,coupon,2024-01-17\r\n14076,1355,EMEA,grocery,mobile,25.18,8,0.067,none,2024-07-13\r\n14077,1754,EMEA,grocery,online,76.28,3,0.154,none,2024-10-03\r\n14078,1555,AMER,electronics,online,28.87,2,0.067,bundle,2024-11-02\r\n14079,1563,EMEA,home,mobile,31.75,6,0.012,loyalty,2024-01-13\r\n14080,1853,APAC,grocery,online,76.82,3,0.228,none,2024-09-14\r\n14081,1403,APAC,toys,retail,45.19,4,0.112,bundle,2024-06-03\r\n14082,2411,EMEA,electronics,retail,23.82,2,0.182,none,2024-03-21\r\n1408
3,1410,AMER,fashion,retail,49.51,3,0.229,loyalty,2024-07-17\r\n14084,1742,AMER,fashion,retail,36.76,2,0.158,coupon,2024-09-20\r\n14085,1164,EMEA,grocery,online,50.10,1,0.201,none,2024-07-23\r\n14086,2324,AMER,sports,mobile,52.84,5,0.082,none,2024-05-14\r\n14087,1590,APAC,grocery,retail,66.32,3,0.072,bundle,2024-09-09\r\n14088,1857,LATAM,electronics,online,64.01,3,0.025,none,2024-05-28\r\n14089,1199,APAC,fashion,online,22.68,8,0.019,none,2024-01-05\r\n14090,1912,APAC,toys,online,78.63,5,0.041,none,2024-02-17\r\n14091,1559,EMEA,grocery,online,50.37,8,0.127,none,2024-12-15\r\n14092,1237,LATAM,toys,retail,64.23,2,0.058,none,2024-08-06\r\n14093,2459,AMER,electronics,retail,165.24,3,0.012,coupon,2024-07-20\r\n14094,2445,APAC,electronics,online,20.97,5,0.023,loyalty,2024-05-25\r\n14095,1187,AMER,home,online,74.17,2,0.038,loyalty,2024-12-14\r\n14096,2320,LATAM,sports,retail,30.83,6,0.207,none,2024-05-02\r\n14097,1252,APAC,grocery,retail,65.18,5,0.040,none,2024-10-08\r\n14098,1394,LATAM,electronics,online,42.41,8,0.206,none,2024-04-05\r\n14099,1323,EMEA,electronics,online,98.70,3,0.192,coupon,2024-10-13\r\n14100,1693,EMEA,home,online,42.68,6,0.114,coupon,2024-06-12\r\n14101,1422,LATAM,grocery,retail,48.10,5,0.148,coupon,2024-01-27\r\n14102,2010,APAC,electronics,retail,42.27,8,0.127,coupon,2024-02-13\r\n14103,1228,APAC,electronics,online,60.52,8,0.138,none,2024-12-08\r\n14104,1543,AMER,grocery,online,20.32,2,0.160,coupon,2024-11-13\r\n14105,1440,AMER,grocery,retail,45.64,7,0.206,bundle,2024-02-26\r\n14106,1248,APAC,fashion,online,91.02,2,0.171,none,2024-12-22\r\n14107,1637,APAC,grocery,retail,51.47,4,0.120,none,2024-03-10\r\n14108,1903,LATAM,electronics,online,43.03,1,0.084,coupon,2024-07-27\r\n14109,1835,AMER,fashion,online,96.66,5,0.031,none,2024-04-19\r\n14110,1972,LATAM,electronics,retail,35.01,1,0.240,bundle,2024-09-03\r\n14111,1583,AMER,sports,online,115.92,4,0.031,coupon,2024-12-04\r\n14112,1719,LATAM,fashion,online,94.41,3,0.076,none,2024-04-16\r\n14113,1692,LATAM,ele
ctronics,retail,41.41,7,0.069,none,2024-12-04\r\n14114,1920,LATAM,fashion,retail,51.81,7,0.160,coupon,2024-06-09\r\n14115,2407,EMEA,grocery,online,78.66,6,0.156,none,2024-02-06\r\n14116,2389,LATAM,home,retail,53.01,3,0.029,coupon,2024-06-02\r\n14117,1816,EMEA,home,online,31.59,4,0.170,loyalty,2024-08-02\r\n14118,2393,LATAM,grocery,retail,57.85,5,0.076,coupon,2024-06-17\r\n14119,1986,LATAM,electronics,retail,69.14,3,0.081,none,2024-12-23\r\n14120,2076,AMER,fashion,online,62.15,6,0.116,none,2024-07-10\r\n14121,1035,EMEA,grocery,mobile,54.47,4,0.131,none,2024-07-21\r\n14122,1262,APAC,home,online,134.51,8,0.030,bundle,2024-01-02\r\n14123,1927,EMEA,grocery,online,52.90,7,0.187,none,2024-11-16\r\n14124,1534,EMEA,fashion,retail,36.16,4,0.042,coupon,2024-04-22\r\n14125,2427,LATAM,home,mobile,50.35,1,0.084,none,2024-06-20\r\n14126,2421,AMER,grocery,partner,83.40,2,0.076,loyalty,2024-05-08\r\n14127,1822,EMEA,electronics,online,65.50,2,0.043,none,2024-11-28\r\n14128,1668,AMER,grocery,online,40.64,6,0.190,none,2024-04-01\r\n14129,1422,LATAM,home,retail,64.60,1,0.146,none,2024-03-05\r\n14130,2371,LATAM,fashion,online,114.67,2,0.077,none,2024-05-20\r\n14131,2446,LATAM,electronics,retail,96.83,5,0.047,none,2024-08-07\r\n14132,1530,APAC,fashion,mobile,28.61,6,0.109,loyalty,2024-05-15\r\n14133,1513,APAC,sports,retail,30.06,1,0.186,coupon,2024-08-16\r\n14134,1234,AMER,grocery,retail,37.60,7,0.143,coupon,2024-08-28\r\n14135,2207,APAC,sports,online,97.30,6,0.067,none,2024-12-04\r\n14136,1717,AMER,home,retail,67.80,6,0.011,none,2024-09-03\r\n14137,1807,EMEA,fashion,retail,46.49,7,0.175,coupon,2024-03-21\r\n14138,1455,APAC,home,retail,52.90,6,0.052,loyalty,2024-07-04\r\n14139,1565,AMER,electronics,online,33.36,4,0.017,none,2024-01-05\r\n14140,1140,LATAM,electronics,partner,58.66,8,0.189,none,2024-01-11\r\n14141,2125,LATAM,home,mobile,47.76,4,0.201,none,2024-02-03\r\n14142,2393,LATAM,home,online,74.90,4,0.245,none,2024-01-13\r\n14143,1043,LATAM,sports,retail,69.48,1,0.168,none,2024-07-09\
r\n14144,1546,EMEA,fashion,mobile,87.55,8,0.139,none,2024-11-08\r\n14145,1552,EMEA,electronics,online,27.38,7,0.165,bundle,2024-10-05\r\n14146,1420,APAC,home,mobile,39.07,6,0.204,none,2024-08-16\r\n14147,1842,LATAM,grocery,mobile,174.29,5,0.180,none,2024-11-18\r\n14148,2118,AMER,toys,online,24.35,2,0.162,bundle,2024-02-24\r\n14149,2139,AMER,electronics,online,219.31,8,0.124,loyalty,2024-07-01\r\n14150,1996,APAC,grocery,online,74.37,7,0.245,none,2024-05-14\r\n14151,1440,AMER,home,online,106.59,1,0.243,none,2024-04-20\r\n14152,1966,APAC,toys,retail,31.70,7,0.241,none,2024-11-28\r\n14153,2479,EMEA,fashion,mobile,43.89,6,0.024,none,2024-07-07\r\n14154,2056,LATAM,electronics,online,175.45,6,0.022,none,2024-11-04\r\n14155,1794,AMER,electronics,online,60.11,8,0.182,none,2024-06-26\r\n14156,1719,LATAM,electronics,retail,66.21,4,0.017,none,2024-07-01\r\n14157,1344,EMEA,grocery,retail,149.21,1,0.016,loyalty,2024-12-02\r\n14158,2080,LATAM,electronics,retail,71.85,3,0.201,loyalty,2024-10-14\r\n14159,2352,APAC,fashion,retail,19.45,2,0.061,coupon,2024-01-17\r\n14160,1075,AMER,grocery,mobile,159.62,7,0.032,none,2024-05-16\r\n14161,1602,EMEA,sports,online,46.98,1,0.102,none,2024-09-07\r\n14162,1463,EMEA,electronics,online,57.11,6,0.201,none,2024-11-21\r\n14163,1646,APAC,home,mobile,24.55,7,0.223,none,2024-04-05\r\n14164,2130,EMEA,home,online,30.40,2,0.098,bundle,2024-05-20\r\n14165,2131,APAC,sports,online,80.02,2,0.218,coupon,2024-02-20\r\n14166,2134,AMER,fashion,online,95.02,3,0.228,coupon,2024-07-19\r\n14167,1415,AMER,sports,online,36.50,7,0.220,none,2024-08-01\r\n14168,1530,APAC,electronics,retail,26.68,3,0.103,coupon,2024-07-19\r\n14169,2475,AMER,electronics,online,54.12,7,0.060,bundle,2024-12-27\r\n14170,1283,APAC,grocery,retail,177.91,2,0.077,none,2024-05-12\r\n14171,1304,LATAM,home,mobile,45.78,3,0.238,coupon,2024-07-25\r\n14172,2088,EMEA,toys,online,63.95,8,0.140,coupon,2024-10-18\r\n14173,1190,EMEA,home,online,29.74,7,0.208,none,2024-06-18\r\n14174,1434,EMEA,sports,partner
,43.21,8,0.208,loyalty,2024-10-09\r\n14175,1770,AMER,electronics,retail,52.01,3,0.204,coupon,2024-11-27\r\n14176,2407,EMEA,home,retail,36.31,2,0.196,loyalty,2024-10-13\r\n14177,2211,APAC,sports,retail,39.68,4,0.040,loyalty,2024-04-17\r\n14178,1068,APAC,home,retail,102.50,8,0.123,none,2024-04-06\r\n14179,1153,AMER,toys,mobile,47.14,3,0.093,loyalty,2024-06-12\r\n14180,1896,EMEA,home,retail,50.77,5,0.122,none,2024-02-10\r\n14181,1001,LATAM,grocery,retail,161.49,5,0.187,none,2024-12-16\r\n14182,2471,APAC,fashion,retail,47.68,5,0.236,bundle,2024-12-26\r\n14183,1367,AMER,grocery,retail,84.76,4,0.082,loyalty,2024-10-01\r\n14184,1124,AMER,electronics,retail,82.04,5,0.137,coupon,2024-05-26\r\n14185,1364,EMEA,electronics,mobile,72.47,8,0.099,coupon,2024-01-22\r\n14186,2040,LATAM,grocery,retail,46.31,8,0.138,none,2024-10-14\r\n14187,2428,LATAM,home,retail,107.58,1,0.076,none,2024-05-05\r\n14188,1811,APAC,fashion,online,38.83,4,0.059,none,2024-09-13\r\n14189,2006,APAC,home,retail,63.73,6,0.183,coupon,2024-10-10\r\n14190,1289,LATAM,toys,online,50.55,2,0.212,none,2024-07-23\r\n14191,2474,LATAM,electronics,online,32.17,8,0.234,loyalty,2024-10-01\r\n14192,1836,LATAM,grocery,online,36.61,5,0.097,loyalty,2024-10-03\r\n14193,1750,LATAM,sports,online,64.54,1,0.182,bundle,2024-11-03\r\n14194,2298,APAC,sports,retail,100.82,2,0.178,none,2024-12-10\r\n14195,2027,EMEA,sports,retail,79.47,2,0.126,none,2024-02-23\r\n14196,1947,EMEA,home,retail,55.38,3,0.080,none,2024-05-05\r\n14197,2253,AMER,sports,retail,58.55,5,0.198,none,2024-07-24\r\n14198,1490,AMER,grocery,retail,75.50,3,0.138,none,2024-09-11\r\n14199,2125,LATAM,home,retail,33.27,6,0.227,none,2024-01-04\r\n14200,1954,APAC,grocery,online,65.32,6,0.223,bundle,2024-08-03\r\n14201,2106,LATAM,fashion,online,31.82,3,0.191,none,2024-12-17\r\n14202,2394,EMEA,electronics,online,54.01,2,0.224,none,2024-07-08\r\n14203,2404,EMEA,home,retail,36.74,2,0.053,coupon,2024-02-02\r\n14204,2195,APAC,fashion,mobile,26.21,8,0.219,bundle,2024-03-24\r\n14205,223
0,LATAM,electronics,retail,50.19,6,0.214,none,2024-09-10\r\n14206,1622,LATAM,home,mobile,34.46,4,0.124,none,2024-10-18\r\n14207,2173,LATAM,sports,mobile,47.98,7,0.076,loyalty,2024-03-18\r\n14208,1908,AMER,fashion,retail,102.35,4,0.058,none,2024-02-05\r\n14209,2270,APAC,fashion,online,54.09,1,0.134,coupon,2024-05-08\r\n14210,1254,APAC,grocery,online,35.69,7,0.233,none,2024-03-09\r\n14211,2244,LATAM,grocery,retail,26.78,3,0.114,none,2024-10-06\r\n14212,1552,EMEA,grocery,online,134.68,5,0.147,none,2024-01-14\r\n14213,1056,LATAM,electronics,online,112.30,3,0.183,none,2024-08-21\r\n14214,1046,EMEA,electronics,online,32.48,2,0.075,none,2024-12-05\r\n14215,1629,LATAM,grocery,mobile,103.99,8,0.017,bundle,2024-08-04\r\n14216,1325,APAC,grocery,online,62.54,4,0.248,bundle,2024-09-18\r\n14217,1801,LATAM,sports,retail,65.03,2,0.230,none,2024-08-20\r\n14218,1432,APAC,fashion,online,15.98,7,0.065,coupon,2024-12-05\r\n14219,1087,AMER,electronics,retail,33.38,5,0.247,none,2024-03-17\r\n14220,1483,EMEA,fashion,online,59.34,3,0.182,coupon,2024-04-09\r\n14221,2139,AMER,toys,online,61.57,3,0.102,loyalty,2024-10-14\r\n14222,1071,AMER,electronics,retail,50.06,8,0.091,coupon,2024-01-09\r\n14223,1778,LATAM,sports,partner,73.58,3,0.142,none,2024-07-03\r\n14224,1684,EMEA,fashion,mobile,45.37,8,0.188,bundle,2024-07-11\r\n14225,1525,APAC,fashion,online,27.24,8,0.168,none,2024-12-09\r\n14226,1953,EMEA,grocery,mobile,77.46,1,0.110,coupon,2024-07-24\r\n14227,2215,LATAM,grocery,retail,83.74,8,0.169,none,2024-10-19\r\n14228,1763,LATAM,sports,retail,171.91,1,0.118,none,2024-02-05\r\n14229,1492,APAC,home,online,20.99,8,0.154,bundle,2024-02-09\r\n14230,1386,AMER,sports,online,74.05,7,0.214,none,2024-04-02\r\n14231,1812,EMEA,toys,online,12.98,6,0.151,none,2024-04-01\r\n14232,1748,APAC,toys,online,73.18,6,0.150,bundle,2024-04-28\r\n14233,1981,EMEA,electronics,mobile,13.51,1,0.055,none,2024-01-05\r\n14234,1518,AMER,electronics,online,55.24,2,0.198,none,2024-02-15\r\n14235,1211,EMEA,electronics,retail,40.2
7,5,0.071,none,2024-05-23\r\n14236,1672,APAC,grocery,retail,74.74,2,0.075,none,2024-11-17\r\n14237,2253,AMER,grocery,online,24.69,7,0.007,none,2024-01-12\r\n14238,2222,LATAM,grocery,mobile,82.53,8,0.076,coupon,2024-01-17\r\n14239,1171,APAC,toys,retail,58.47,3,0.083,coupon,2024-08-24\r\n14240,1107,APAC,home,online,40.68,8,0.171,none,2024-01-05\r\n14241,1784,EMEA,grocery,online,52.29,4,0.243,none,2024-10-04\r\n14242,1093,APAC,grocery,online,52.12,1,0.224,none,2024-07-06\r\n14243,1630,APAC,toys,online,61.98,8,0.117,loyalty,2024-11-04\r\n14244,1015,AMER,electronics,online,48.21,2,0.071,none,2024-02-08\r\n14245,1187,AMER,electronics,retail,72.27,7,0.053,none,2024-11-23\r\n14246,1576,EMEA,grocery,online,110.95,4,0.082,none,2024-10-11\r\n14247,1944,AMER,grocery,online,117.21,7,0.212,none,2024-12-06\r\n14248,1038,APAC,home,mobile,30.60,6,0.205,none,2024-03-21\r\n14249,2281,AMER,electronics,online,70.46,4,0.125,none,2024-09-04\r\n14250,2255,AMER,home,online,40.49,1,0.024,bundle,2024-05-12\r\n14251,1598,EMEA,grocery,retail,48.06,8,0.103,none,2024-07-11\r\n14252,2327,EMEA,sports,online,47.39,3,0.075,coupon,2024-10-16\r\n14253,2173,LATAM,home,online,37.49,7,0.187,none,2024-02-22\r\n14254,2379,AMER,home,online,51.31,7,0.217,coupon,2024-10-15\r\n14255,2011,AMER,electronics,online,43.55,6,0.023,coupon,2024-09-10\r\n14256,1645,EMEA,home,retail,114.90,8,0.051,bundle,2024-11-04\r\n14257,1455,APAC,sports,online,29.21,8,0.211,coupon,2024-12-03\r\n14258,2300,EMEA,electronics,online,51.89,2,0.140,none,2024-08-27\r\n14259,1724,LATAM,toys,online,87.37,7,0.035,none,2024-03-18\r\n14260,1557,LATAM,electronics,online,138.65,2,0.141,coupon,2024-01-18\r\n14261,1045,LATAM,grocery,online,62.68,8,0.036,none,2024-04-28\r\n14262,2308,AMER,fashion,mobile,30.37,8,0.101,coupon,2024-06-19\r\n14263,1905,APAC,fashion,online,76.81,4,0.081,none,2024-11-23\r\n14264,1144,APAC,toys,mobile,48.93,2,0.183,none,2024-12-06\r\n14265,1808,APAC,sports,mobile,89.88,3,0.017,coupon,2024-02-27\r\n14266,1288,LATAM,sports,pa
rtner,48.42,8,0.033,coupon,2024-10-06\r\n14267,1119,LATAM,home,mobile,56.36,4,0.154,none,2024-10-23\r\n14268,1131,APAC,grocery,retail,100.88,6,0.137,none,2024-10-04\r\n14269,1661,LATAM,home,online,34.58,6,0.116,none,2024-04-22\r\n14270,1840,LATAM,sports,retail,54.95,6,0.092,none,2024-07-08\r\n14271,2026,LATAM,sports,retail,75.10,4,0.069,none,2024-11-01\r\n14272,1517,AMER,toys,online,60.41,6,0.066,none,2024-09-11\r\n14273,1172,APAC,electronics,retail,37.66,5,0.238,bundle,2024-03-08\r\n14274,1144,APAC,home,mobile,58.41,2,0.208,none,2024-06-19\r\n14275,1369,AMER,fashion,online,29.63,1,0.221,none,2024-03-12\r\n14276,1679,APAC,toys,mobile,133.89,1,0.078,coupon,2024-01-11\r\n14277,1246,EMEA,toys,online,68.68,8,0.162,loyalty,2024-12-12\r\n14278,2163,EMEA,fashion,retail,111.82,7,0.205,none,2024-11-27\r\n14279,1087,AMER,toys,online,72.91,7,0.147,loyalty,2024-12-08\r\n14280,2277,EMEA,fashion,mobile,56.39,1,0.191,loyalty,2024-07-04\r\n14281,2276,AMER,fashion,online,42.38,8,0.135,none,2024-12-15\r\n14282,2316,EMEA,electronics,online,124.90,6,0.061,none,2024-07-18\r\n14283,1415,AMER,home,online,41.79,8,0.126,coupon,2024-06-27\r\n14284,1308,EMEA,electronics,retail,48.03,4,0.124,none,2024-03-25\r\n14285,1536,LATAM,electronics,mobile,95.95,4,0.249,coupon,2024-11-25\r\n14286,1523,LATAM,electronics,retail,40.21,6,0.172,coupon,2024-01-17\r\n14287,1788,AMER,grocery,retail,52.89,6,0.234,none,2024-02-06\r\n14288,2278,APAC,home,partner,62.43,6,0.069,none,2024-10-19\r\n14289,2454,LATAM,grocery,online,51.20,7,0.234,loyalty,2024-11-01\r\n14290,1679,APAC,electronics,online,34.52,7,0.195,none,2024-11-02\r\n14291,1483,EMEA,sports,retail,65.26,8,0.170,none,2024-04-11\r\n14292,1981,EMEA,toys,retail,59.84,5,0.192,loyalty,2024-03-13\r\n14293,2133,AMER,fashion,online,41.34,7,0.175,bundle,2024-09-02\r\n14294,1411,LATAM,home,retail,108.27,3,0.070,none,2024-12-05\r\n14295,1179,APAC,electronics,online,24.49,1,0.189,none,2024-04-13\r\n14296,2068,LATAM,grocery,mobile,34.56,3,0.023,none,2024-02-17\r\n14297
,1055,AMER,sports,online,61.71,4,0.166,loyalty,2024-03-01\r\n14298,1182,EMEA,grocery,online,45.20,5,0.124,coupon,2024-09-24\r\n14299,1052,LATAM,electronics,online,15.87,2,0.144,bundle,2024-11-15\r\n14300,1171,APAC,grocery,partner,83.90,5,0.061,none,2024-10-10\r\n14301,2205,AMER,fashion,mobile,51.65,7,0.098,coupon,2024-01-18\r\n14302,1609,LATAM,electronics,online,25.98,7,0.003,none,2024-09-16\r\n14303,1225,APAC,grocery,online,96.50,7,0.184,none,2024-04-10\r\n14304,1244,LATAM,grocery,retail,28.16,5,0.127,coupon,2024-12-01\r\n14305,1448,EMEA,fashion,mobile,38.67,2,0.247,coupon,2024-02-01\r\n14306,1533,APAC,home,partner,26.76,5,0.003,coupon,2024-01-15\r\n14307,2204,AMER,sports,online,171.67,3,0.064,none,2024-04-11\r\n14308,2402,AMER,sports,online,32.36,4,0.091,coupon,2024-12-18\r\n14309,1554,AMER,home,online,32.38,3,0.121,coupon,2024-08-02\r\n14310,1298,LATAM,grocery,retail,134.81,3,0.244,none,2024-12-03\r\n14311,2267,AMER,electronics,partner,82.41,8,0.112,coupon,2024-11-22\r\n14312,1535,AMER,home,mobile,31.82,3,0.118,none,2024-02-01\r\n14313,1104,APAC,home,retail,46.47,3,0.209,bundle,2024-07-25\r\n14314,1603,EMEA,home,retail,49.25,4,0.131,coupon,2024-01-10\r\n14315,2360,EMEA,grocery,retail,54.93,6,0.176,coupon,2024-09-06\r\n14316,1856,EMEA,grocery,online,63.35,6,0.206,none,2024-08-02\r\n14317,2120,AMER,grocery,online,58.30,8,0.196,coupon,2024-08-14\r\n14318,1456,APAC,fashion,mobile,86.18,5,0.051,loyalty,2024-01-24\r\n14319,1996,APAC,grocery,retail,95.71,6,0.028,coupon,2024-02-04\r\n14320,1182,EMEA,electronics,online,49.18,8,0.073,none,2024-01-08\r\n14321,2364,APAC,electronics,retail,55.70,8,0.168,loyalty,2024-06-19\r\n14322,2040,LATAM,home,online,32.79,1,0.026,none,2024-04-01\r\n14323,1548,EMEA,grocery,mobile,103.22,2,0.028,coupon,2024-01-06\r\n14324,1335,APAC,grocery,mobile,50.47,1,0.046,none,2024-02-26\r\n14325,1301,AMER,grocery,retail,83.43,4,0.008,bundle,2024-12-23\r\n14326,1004,LATAM,grocery,online,91.85,1,0.023,none,2024-01-07\r\n14327,1274,LATAM,electronics,reta
il,100.59,4,0.127,none,2024-05-02\r\n14328,1933,EMEA,grocery,online,98.45,5,0.044,none,2024-12-23\r\n14329,1078,APAC,sports,online,54.55,6,0.067,none,2024-10-07\r\n14330,1262,APAC,home,retail,38.97,3,0.111,none,2024-04-05\r\n14331,1240,EMEA,toys,mobile,33.81,3,0.036,none,2024-01-18\r\n14332,1060,LATAM,toys,online,58.65,4,0.200,loyalty,2024-01-09\r\n14333,2060,LATAM,sports,mobile,27.35,2,0.119,bundle,2024-08-05\r\n14334,1484,AMER,electronics,mobile,34.56,5,0.020,none,2024-05-08\r\n14335,1117,LATAM,sports,mobile,61.89,7,0.058,coupon,2024-02-07\r\n14336,1243,AMER,sports,mobile,50.86,3,0.082,none,2024-12-01\r\n14337,1565,AMER,grocery,retail,119.22,4,0.098,none,2024-10-11\r\n14338,1326,AMER,toys,online,38.73,5,0.185,none,2024-01-19\r\n14339,1794,AMER,electronics,online,60.16,8,0.170,none,2024-01-09\r\n14340,2415,AMER,electronics,online,102.23,8,0.029,loyalty,2024-03-14\r\n14341,1792,AMER,toys,retail,31.68,3,0.023,none,2024-11-21\r\n14342,1041,APAC,electronics,retail,29.88,8,0.014,none,2024-11-04\r\n14343,1726,EMEA,sports,online,53.82,7,0.226,loyalty,2024-07-17\r\n14344,1857,LATAM,fashion,online,39.00,1,0.237,none,2024-01-24\r\n14345,1133,EMEA,toys,online,51.77,4,0.132,bundle,2024-10-28\r\n14346,1586,LATAM,electronics,retail,73.44,2,0.207,loyalty,2024-07-18\r\n14347,2159,AMER,home,retail,45.05,3,0.191,none,2024-06-24\r\n14348,1416,EMEA,grocery,online,25.18,6,0.171,none,2024-09-05\r\n14349,1441,LATAM,electronics,online,53.15,6,0.036,none,2024-12-10\r\n14350,1947,EMEA,electronics,online,58.45,3,0.093,none,2024-05-17\r\n14351,1085,EMEA,grocery,online,190.96,1,0.184,none,2024-03-28\r\n14352,1292,LATAM,electronics,retail,51.07,3,0.173,none,2024-04-11\r\n14353,2073,AMER,sports,online,42.61,3,0.028,bundle,2024-08-24\r\n14354,1591,APAC,grocery,mobile,84.83,3,0.127,coupon,2024-06-27\r\n14355,1823,EMEA,home,mobile,61.30,5,0.121,coupon,2024-04-14\r\n14356,1250,APAC,home,retail,65.74,5,0.077,none,2024-07-06\r\n14357,1270,LATAM,electronics,online,53.28,3,0.208,none,2024-01-26\r\n14358
,1414,APAC,toys,online,47.90,1,0.215,none,2024-11-28\r\n14359,1075,AMER,sports,online,15.88,3,0.009,loyalty,2024-12-18\r\n14360,1534,EMEA,grocery,online,50.92,7,0.094,bundle,2024-01-18\r\n14361,1676,LATAM,grocery,retail,162.40,3,0.223,none,2024-11-21\r\n14362,1065,AMER,electronics,retail,57.90,4,0.150,loyalty,2024-06-22\r\n14363,1775,EMEA,home,online,49.40,1,0.011,none,2024-06-04\r\n14364,1966,APAC,home,partner,44.66,4,0.187,none,2024-02-10\r\n14365,2347,AMER,electronics,online,35.35,2,0.239,none,2024-02-18\r\n14366,1969,LATAM,grocery,retail,59.82,8,0.169,none,2024-04-14\r\n14367,2455,AMER,grocery,retail,72.54,7,0.082,none,2024-06-16\r\n14368,1185,LATAM,electronics,online,35.55,4,0.129,none,2024-02-02\r\n14369,1096,EMEA,home,retail,21.13,1,0.063,coupon,2024-09-14\r\n14370,1981,EMEA,electronics,retail,70.93,6,0.233,none,2024-02-03\r\n14371,1584,EMEA,toys,mobile,116.39,8,0.136,none,2024-05-20\r\n14372,2017,EMEA,toys,mobile,76.03,1,0.210,none,2024-09-28\r\n14373,1886,LATAM,fashion,online,51.22,8,0.224,coupon,2024-07-21\r\n14374,1008,AMER,toys,retail,36.66,4,0.063,coupon,2024-12-26\r\n14375,1657,LATAM,grocery,retail,23.49,1,0.045,loyalty,2024-10-10\r\n14376,2159,AMER,grocery,retail,54.54,8,0.210,bundle,2024-09-07\r\n14377,2005,APAC,home,online,139.84,5,0.234,coupon,2024-11-05\r\n14378,2116,LATAM,sports,online,51.99,5,0.077,none,2024-02-22\r\n14379,1222,AMER,sports,retail,52.74,8,0.037,none,2024-09-22\r\n14380,1287,AMER,home,online,104.64,6,0.085,loyalty,2024-08-04\r\n14381,1050,AMER,fashion,online,63.37,8,0.026,none,2024-09-11\r\n14382,1033,APAC,fashion,mobile,34.31,6,0.250,coupon,2024-02-02\r\n14383,2009,LATAM,home,online,79.50,6,0.120,none,2024-03-15\r\n14384,2416,LATAM,grocery,mobile,96.91,5,0.027,bundle,2024-02-06\r\n14385,1143,LATAM,sports,mobile,48.01,8,0.029,none,2024-01-05\r\n14386,1790,AMER,electronics,retail,74.87,7,0.065,coupon,2024-12-26\r\n14387,1438,APAC,home,mobile,237.26,3,0.189,coupon,2024-05-05\r\n14388,2123,AMER,fashion,mobile,119.75,3,0.130,none,2024
-06-11\r\n14389,1896,EMEA,home,online,51.80,4,0.108,loyalty,2024-06-10\r\n14390,2456,APAC,home,retail,50.71,1,0.047,none,2024-12-01\r\n14391,2419,LATAM,electronics,partner,43.57,3,0.006,none,2024-09-24\r\n14392,2023,LATAM,electronics,retail,108.89,3,0.027,bundle,2024-04-20\r\n14393,1731,AMER,home,retail,34.79,3,0.225,none,2024-02-08\r\n14394,2418,AMER,fashion,mobile,25.31,1,0.152,loyalty,2024-11-06\r\n14395,2071,APAC,grocery,online,32.86,5,0.151,none,2024-01-02\r\n14396,2205,AMER,electronics,online,27.61,5,0.206,bundle,2024-04-23\r\n14397,2416,LATAM,home,online,49.18,5,0.187,bundle,2024-01-15\r\n14398,2200,LATAM,toys,online,19.73,6,0.246,loyalty,2024-07-01\r\n14399,2401,LATAM,fashion,online,27.55,6,0.047,loyalty,2024-07-10\r\n14400,1846,APAC,grocery,mobile,49.25,3,0.093,loyalty,2024-07-16\r\n14401,1633,EMEA,grocery,online,160.65,7,0.045,none,2024-11-06\r\n14402,1153,AMER,home,online,33.93,4,0.145,none,2024-12-28\r\n14403,2494,AMER,home,online,57.99,2,0.145,loyalty,2024-02-01\r\n14404,1490,AMER,grocery,online,25.80,4,0.107,coupon,2024-08-20\r\n14405,1822,EMEA,electronics,retail,80.18,2,0.158,none,2024-01-09\r\n14406,1849,EMEA,toys,online,14.43,6,0.199,none,2024-03-08\r\n14407,2245,APAC,grocery,retail,55.62,8,0.234,bundle,2024-10-12\r\n14408,2315,LATAM,fashion,retail,61.16,7,0.017,none,2024-07-15\r\n14409,2241,APAC,grocery,mobile,17.11,2,0.118,none,2024-08-28\r\n14410,1415,AMER,grocery,retail,26.65,5,0.238,none,2024-07-24\r\n14411,1317,EMEA,sports,online,63.69,3,0.118,coupon,2024-12-24\r\n14412,2047,AMER,electronics,partner,51.27,8,0.068,bundle,2024-08-13\r\n14413,1565,AMER,fashion,retail,91.30,6,0.164,none,2024-07-05\r\n14414,2130,EMEA,grocery,mobile,28.23,1,0.237,loyalty,2024-07-05\r\n14415,1497,EMEA,fashion,online,24.98,2,0.224,none,2024-01-27\r\n14416,1655,LATAM,electronics,online,48.85,7,0.210,none,2024-05-21\r\n14417,1776,APAC,electronics,online,77.05,6,0.119,none,2024-04-13\r\n14418,1352,AMER,fashion,retail,36.40,7,0.247,none,2024-09-08\r\n14419,1166,AMER,groce
ry,mobile,88.37,4,0.134,coupon,2024-12-02\r\n14420,2351,EMEA,grocery,online,94.73,5,0.030,none,2024-12-02\r\n14421,1757,EMEA,fashion,partner,68.97,3,0.017,none,2024-11-19\r\n14422,1499,EMEA,grocery,online,22.59,2,0.106,coupon,2024-05-11\r\n14423,1669,AMER,electronics,online,49.91,8,0.146,bundle,2024-05-06\r\n14424,1298,LATAM,sports,mobile,55.67,4,0.092,coupon,2024-04-09\r\n14425,1256,LATAM,grocery,partner,61.14,4,0.113,loyalty,2024-08-22\r\n14426,1940,APAC,home,online,78.02,7,0.048,bundle,2024-11-20\r\n14427,1856,EMEA,grocery,mobile,41.89,8,0.199,none,2024-07-09\r\n14428,2129,APAC,grocery,online,48.12,3,0.147,none,2024-10-09\r\n14429,2425,APAC,fashion,online,65.22,8,0.197,coupon,2024-07-05\r\n14430,1721,EMEA,home,online,93.27,8,0.154,none,2024-08-11\r\n14431,1605,APAC,electronics,retail,117.56,3,0.249,none,2024-05-12\r\n14432,2450,EMEA,sports,online,40.90,7,0.186,bundle,2024-05-19\r\n14433,2111,EMEA,home,mobile,84.11,3,0.002,none,2024-03-18\r\n14434,2302,APAC,grocery,online,42.89,5,0.129,none,2024-06-15\r\n14435,1524,LATAM,toys,retail,63.24,6,0.040,none,2024-11-11\r\n14436,1326,AMER,grocery,online,68.67,7,0.070,none,2024-08-04\r\n14437,1327,APAC,grocery,mobile,87.28,2,0.102,bundle,2024-02-08\r\n14438,1622,LATAM,grocery,mobile,73.51,8,0.069,coupon,2024-11-04\r\n14439,1798,AMER,electronics,mobile,50.09,7,0.151,none,2024-07-09\r\n14440,1238,AMER,fashion,online,52.09,6,0.240,bundle,2024-12-05\r\n14441,2068,LATAM,sports,online,29.36,7,0.170,none,2024-03-16\r\n14442,1091,EMEA,sports,partner,27.94,3,0.017,none,2024-04-07\r\n14443,1124,AMER,grocery,retail,47.91,6,0.150,coupon,2024-08-13\r\n14444,2039,EMEA,electronics,online,70.61,8,0.222,none,2024-11-27\r\n14445,2456,APAC,grocery,online,37.46,8,0.096,bundle,2024-11-05\r\n14446,1017,AMER,sports,retail,55.37,7,0.226,loyalty,2024-01-08\r\n14447,1404,EMEA,fashion,mobile,28.73,1,0.223,none,2024-06-28\r\n14448,1841,AMER,home,online,31.46,2,0.077,bundle,2024-04-09\r\n14449,1495,LATAM,toys,online,77.75,8,0.020,coupon,2024-05-26\r\n
14450,1390,APAC,sports,online,116.19,3,0.032,bundle,2024-11-24\r\n14451,1603,EMEA,fashion,retail,50.39,6,0.173,none,2024-12-17\r\n14452,1135,APAC,toys,retail,71.19,8,0.164,none,2024-11-17\r\n14453,1052,LATAM,sports,online,59.72,2,0.112,none,2024-10-01\r\n14454,1039,AMER,electronics,mobile,27.76,4,0.069,none,2024-02-11\r\n14455,2309,AMER,electronics,retail,65.28,2,0.188,coupon,2024-11-13\r\n14456,1818,AMER,fashion,online,27.50,3,0.123,none,2024-09-20\r\n14457,1140,LATAM,fashion,retail,69.86,4,0.206,loyalty,2024-04-05\r\n14458,1712,LATAM,grocery,retail,27.82,5,0.174,coupon,2024-09-19\r\n14459,1797,LATAM,home,retail,41.93,4,0.209,loyalty,2024-04-11\r\n14460,1837,LATAM,toys,retail,100.02,2,0.070,coupon,2024-03-03\r\n14461,2376,LATAM,home,retail,41.48,7,0.039,coupon,2024-05-14\r\n14462,2110,LATAM,toys,retail,27.77,8,0.005,bundle,2024-11-08\r\n14463,2187,EMEA,fashion,online,32.44,7,0.090,coupon,2024-06-25\r\n14464,1403,APAC,toys,online,59.46,4,0.250,coupon,2024-09-04\r\n14465,1129,LATAM,electronics,retail,51.78,3,0.146,bundle,2024-02-07\r\n14466,1748,APAC,grocery,mobile,63.80,7,0.099,none,2024-12-24\r\n14467,1233,AMER,home,retail,78.30,2,0.005,none,2024-03-25\r\n14468,1428,APAC,fashion,partner,62.65,5,0.172,none,2024-02-03\r\n14469,2266,LATAM,home,retail,87.27,7,0.086,coupon,2024-02-02\r\n14470,2392,EMEA,home,mobile,61.15,6,0.190,none,2024-04-14\r\n14471,1421,APAC,grocery,online,124.60,5,0.150,coupon,2024-08-11\r\n14472,2434,APAC,fashion,mobile,52.33,2,0.177,none,2024-11-22\r\n14473,2415,AMER,toys,online,39.06,2,0.005,none,2024-05-13\r\n14474,2188,EMEA,fashion,online,65.02,2,0.074,none,2024-07-28\r\n14475,1864,EMEA,electronics,retail,60.31,2,0.217,bundle,2024-04-04\r\n14476,1438,APAC,grocery,online,22.60,8,0.242,none,2024-02-27\r\n14477,2025,EMEA,home,retail,64.65,1,0.058,coupon,2024-12-02\r\n14478,1204,AMER,electronics,mobile,103.17,6,0.093,none,2024-01-18\r\n14479,1893,APAC,electronics,retail,44.57,8,0.200,none,2024-05-08\r\n14480,2364,APAC,toys,retail,36.90,7,0.059,bun
dle,2024-01-10\r\n14481,2375,AMER,electronics,online,42.10,1,0.145,none,2024-04-16\r\n14482,2478,AMER,toys,mobile,36.48,2,0.220,bundle,2024-09-19\r\n14483,1495,LATAM,home,online,46.62,7,0.192,bundle,2024-11-28\r\n14484,1450,EMEA,fashion,online,69.59,5,0.063,none,2024-06-11\r\n14485,1967,EMEA,home,online,23.84,4,0.022,coupon,2024-09-15\r\n14486,1490,AMER,electronics,retail,79.17,5,0.056,coupon,2024-10-08\r\n14487,1678,LATAM,sports,retail,53.51,4,0.196,coupon,2024-12-08\r\n14488,1626,EMEA,grocery,online,77.39,4,0.169,none,2024-03-22\r\n14489,1783,AMER,fashion,online,196.56,4,0.062,none,2024-11-10\r\n14490,2167,APAC,sports,retail,100.18,8,0.006,bundle,2024-02-09\r\n14491,1272,AMER,electronics,partner,122.20,5,0.197,none,2024-10-17\r\n14492,1616,APAC,electronics,mobile,42.49,6,0.044,loyalty,2024-09-28\r\n14493,2451,APAC,electronics,retail,57.75,4,0.035,coupon,2024-10-17\r\n14494,1740,EMEA,sports,retail,51.00,6,0.053,none,2024-10-15\r\n14495,1519,APAC,home,retail,53.60,6,0.060,none,2024-06-26\r\n14496,1546,EMEA,home,online,28.05,1,0.048,none,2024-11-22\r\n14497,2385,APAC,home,retail,110.49,8,0.057,bundle,2024-09-04\r\n14498,1156,APAC,home,partner,50.83,4,0.201,none,2024-11-02\r\n14499,1713,EMEA,electronics,mobile,29.70,7,0.129,none,2024-04-16\r\n14500,1079,LATAM,grocery,mobile,132.44,2,0.151,loyalty,2024-01-08\r\n14501,1885,EMEA,toys,mobile,17.25,1,0.240,coupon,2024-01-12\r\n14502,1587,LATAM,grocery,mobile,49.25,4,0.013,none,2024-09-06\r\n14503,1832,APAC,home,online,144.32,5,0.099,coupon,2024-06-06\r\n14504,1632,LATAM,toys,online,97.48,4,0.193,none,2024-07-23\r\n14505,1228,APAC,toys,online,63.03,5,0.207,loyalty,2024-10-26\r\n14506,1694,APAC,home,retail,37.64,8,0.114,none,2024-02-19\r\n14507,1464,APAC,electronics,mobile,38.15,4,0.084,none,2024-04-19\r\n14508,2182,AMER,home,online,57.61,4,0.126,none,2024-09-17\r\n14509,1561,EMEA,electronics,retail,105.06,4,0.109,none,2024-09-06\r\n14510,1144,APAC,grocery,online,62.34,1,0.026,coupon,2024-03-19\r\n14511,1986,LATAM,electronic
s,retail,89.08,2,0.243,coupon,2024-10-05\r\n14512,1389,LATAM,grocery,retail,32.94,1,0.021,none,2024-07-03\r\n14513,2159,AMER,fashion,mobile,153.45,6,0.134,coupon,2024-03-01\r\n14514,2045,LATAM,fashion,online,15.34,3,0.139,none,2024-12-08\r\n14515,2052,LATAM,electronics,partner,72.18,7,0.227,none,2024-06-05\r\n14516,1772,EMEA,sports,partner,29.33,6,0.173,bundle,2024-05-20\r\n14517,1801,LATAM,toys,online,37.95,5,0.074,none,2024-03-22\r\n14518,1907,EMEA,electronics,retail,35.44,2,0.083,none,2024-09-21\r\n14519,1643,EMEA,grocery,online,90.86,3,0.184,none,2024-05-17\r\n14520,1043,LATAM,grocery,mobile,128.83,3,0.168,coupon,2024-10-19\r\n14521,2496,EMEA,toys,online,35.95,8,0.017,none,2024-03-23\r\n14522,1370,APAC,fashion,online,52.20,3,0.013,loyalty,2024-07-09\r\n14523,1046,EMEA,toys,retail,53.63,8,0.187,none,2024-02-05\r\n14524,1913,LATAM,sports,mobile,90.33,7,0.185,coupon,2024-01-18\r\n14525,2064,LATAM,electronics,online,28.71,7,0.202,none,2024-11-23\r\n14526,1973,EMEA,home,retail,66.60,1,0.184,none,2024-08-04\r\n14527,1607,LATAM,home,online,61.08,8,0.061,loyalty,2024-07-16\r\n14528,1053,AMER,electronics,online,33.90,7,0.165,none,2024-02-11\r\n14529,2266,LATAM,grocery,mobile,39.31,3,0.020,none,2024-06-22\r\n14530,1224,APAC,electronics,online,31.68,5,0.182,none,2024-04-19\r\n14531,1074,LATAM,electronics,online,31.12,4,0.068,none,2024-07-05\r\n14532,1006,AMER,fashion,online,33.03,8,0.165,coupon,2024-08-01\r\n14533,1128,LATAM,home,retail,93.91,3,0.079,none,2024-08-04\r\n14534,1730,AMER,electronics,online,104.45,7,0.220,bundle,2024-01-28\r\n14535,2010,APAC,fashion,retail,41.08,8,0.076,coupon,2024-03-15\r\n14536,1602,EMEA,toys,online,110.46,2,0.100,none,2024-01-14\r\n14537,1657,LATAM,grocery,mobile,43.37,5,0.016,coupon,2024-03-12\r\n14538,2307,LATAM,electronics,online,50.95,8,0.239,bundle,2024-11-20\r\n14539,2318,AMER,home,retail,53.39,6,0.012,none,2024-08-22\r\n14540,2357,EMEA,grocery,mobile,67.39,2,0.178,none,2024-11-10\r\n14541,1278,AMER,home,online,77.63,5,0.196,none,2024
-08-09\r\n14542,1557,LATAM,grocery,retail,96.99,6,0.113,bundle,2024-11-24\r\n14543,2163,EMEA,sports,retail,34.89,5,0.111,none,2024-12-24\r\n14544,1767,AMER,toys,online,194.99,2,0.241,coupon,2024-07-22\r\n14545,2465,EMEA,grocery,mobile,39.14,1,0.086,bundle,2024-01-08\r\n14546,1575,APAC,toys,mobile,39.42,4,0.110,loyalty,2024-12-01\r\n14547,1560,AMER,grocery,online,72.90,8,0.081,none,2024-05-05\r\n14548,1429,APAC,sports,mobile,37.91,3,0.052,bundle,2024-11-09\r\n14549,1139,EMEA,fashion,online,27.45,8,0.185,none,2024-06-27\r\n14550,1912,APAC,home,retail,33.05,8,0.023,none,2024-04-26\r\n14551,1090,AMER,electronics,online,44.84,1,0.009,bundle,2024-05-07\r\n14552,1075,AMER,grocery,mobile,53.86,6,0.040,none,2024-07-13\r\n14553,1303,LATAM,grocery,partner,30.24,1,0.034,none,2024-04-11\r\n14554,1096,EMEA,toys,online,60.05,2,0.212,loyalty,2024-09-14\r\n14555,1637,APAC,grocery,online,49.46,6,0.098,none,2024-06-07\r\n14556,1447,LATAM,electronics,mobile,33.92,3,0.141,coupon,2024-04-07\r\n14557,2042,LATAM,sports,partner,35.57,6,0.167,coupon,2024-03-19\r\n14558,1304,LATAM,sports,retail,102.00,3,0.036,none,2024-07-18\r\n14559,2479,EMEA,electronics,online,72.00,4,0.093,coupon,2024-05-12\r\n14560,1788,AMER,electronics,online,74.03,1,0.047,none,2024-02-10\r\n14561,1350,LATAM,home,retail,41.15,7,0.054,none,2024-08-02\r\n14562,1703,AMER,electronics,mobile,57.15,8,0.150,bundle,2024-07-02\r\n14563,2359,LATAM,grocery,retail,38.47,8,0.241,none,2024-03-16\r\n14564,1990,EMEA,fashion,partner,92.20,2,0.129,none,2024-11-10\r\n14565,1856,EMEA,toys,online,36.47,3,0.042,bundle,2024-06-18\r\n14566,2333,APAC,sports,online,56.43,2,0.009,bundle,2024-05-04\r\n14567,1327,APAC,electronics,online,52.68,3,0.017,none,2024-12-14\r\n14568,1129,LATAM,home,online,166.70,6,0.117,coupon,2024-03-15\r\n14569,1610,LATAM,home,online,32.01,6,0.082,coupon,2024-08-10\r\n14570,2106,LATAM,home,online,49.24,5,0.097,bundle,2024-07-14\r\n14571,1616,APAC,grocery,retail,51.05,1,0.012,none,2024-05-04\r\n14572,1593,AMER,toys,online,
64.67,4,0.040,coupon,2024-10-09\r\n14573,2116,LATAM,electronics,retail,68.25,3,0.113,coupon,2024-11-17\r\n14574,2213,APAC,electronics,partner,56.27,8,0.147,bundle,2024-10-20\r\n14575,1931,APAC,home,mobile,78.20,4,0.018,coupon,2024-06-03\r\n14576,2357,EMEA,fashion,mobile,32.21,7,0.057,coupon,2024-11-15\r\n14577,1262,APAC,grocery,online,51.22,7,0.118,none,2024-04-22\r\n14578,2422,APAC,home,online,37.27,3,0.076,none,2024-07-19\r\n14579,1249,EMEA,electronics,mobile,112.55,3,0.189,none,2024-01-10\r\n14580,1411,LATAM,sports,online,216.66,1,0.027,none,2024-02-12\r\n14581,2370,EMEA,toys,retail,72.64,6,0.014,bundle,2024-03-28\r\n14582,1460,LATAM,fashion,online,106.55,1,0.248,loyalty,2024-09-23\r\n14583,2082,APAC,electronics,retail,62.86,3,0.214,bundle,2024-01-23\r\n14584,1706,EMEA,sports,mobile,100.68,4,0.149,none,2024-11-09\r\n14585,1435,AMER,grocery,retail,16.58,3,0.178,none,2024-03-09\r\n14586,1937,APAC,home,online,83.61,6,0.245,bundle,2024-05-10\r\n14587,1920,LATAM,electronics,online,106.63,7,0.043,coupon,2024-08-05\r\n14588,1563,EMEA,sports,retail,33.44,2,0.146,none,2024-12-26\r\n14589,1933,EMEA,grocery,mobile,41.96,8,0.046,loyalty,2024-12-01\r\n14590,2271,LATAM,sports,online,63.20,5,0.224,coupon,2024-12-10\r\n14591,1343,LATAM,fashion,online,35.72,8,0.160,bundle,2024-12-24\r\n14592,1703,AMER,toys,mobile,60.32,4,0.111,coupon,2024-11-05\r\n14593,2477,APAC,toys,retail,25.41,1,0.019,coupon,2024-04-11\r\n14594,2250,AMER,grocery,online,65.99,4,0.120,coupon,2024-10-28\r\n14595,2468,EMEA,grocery,online,39.83,2,0.174,coupon,2024-07-28\r\n14596,1206,EMEA,grocery,retail,25.18,7,0.075,none,2024-02-26\r\n14597,2332,APAC,grocery,online,62.77,5,0.037,none,2024-05-03\r\n14598,1275,EMEA,sports,online,39.62,1,0.164,bundle,2024-01-25\r\n14599,2141,AMER,grocery,retail,86.81,8,0.168,coupon,2024-09-18\r\n14600,1533,APAC,electronics,online,45.53,3,0.100,none,2024-04-19\r\n14601,1388,AMER,toys,online,35.02,5,0.160,none,2024-08-22\r\n14602,1565,AMER,toys,retail,16.79,1,0.145,none,2024-12-22\r\n
14603,2395,APAC,toys,online,111.92,3,0.126,coupon,2024-01-06\r\n14604,2499,LATAM,home,online,38.61,5,0.186,none,2024-08-03\r\n14605,1592,LATAM,grocery,online,115.92,8,0.004,none,2024-01-14\r\n14606,1337,APAC,grocery,partner,52.48,5,0.025,none,2024-02-15\r\n14607,1945,AMER,grocery,online,30.11,2,0.074,none,2024-08-16\r\n14608,2187,EMEA,fashion,online,64.94,1,0.166,bundle,2024-08-05\r\n14609,2135,EMEA,sports,retail,190.39,1,0.171,none,2024-12-03\r\n14610,2093,LATAM,grocery,online,115.37,1,0.063,coupon,2024-07-22\r\n14611,2191,AMER,fashion,retail,61.31,5,0.224,coupon,2024-04-24\r\n14612,1351,APAC,home,online,31.26,8,0.127,none,2024-01-15\r\n14613,2432,AMER,toys,retail,15.91,2,0.163,coupon,2024-11-23\r\n14614,1387,AMER,grocery,online,27.47,7,0.076,none,2024-09-22\r\n14615,1291,EMEA,toys,retail,52.61,7,0.060,loyalty,2024-05-19\r\n14616,1390,APAC,toys,mobile,73.17,1,0.045,none,2024-11-12\r\n14617,1714,APAC,grocery,mobile,62.24,3,0.180,loyalty,2024-10-19\r\n14618,2230,LATAM,electronics,online,52.39,1,0.027,loyalty,2024-10-26\r\n14619,1592,LATAM,grocery,online,75.01,6,0.024,bundle,2024-05-27\r\n14620,2080,LATAM,home,retail,40.60,7,0.040,loyalty,2024-02-27\r\n14621,1274,LATAM,toys,retail,69.98,3,0.128,coupon,2024-08-05\r\n14622,1224,APAC,home,online,28.39,6,0.003,coupon,2024-12-10\r\n14623,1209,AMER,toys,retail,49.65,7,0.244,none,2024-08-24\r\n14624,1019,APAC,electronics,retail,86.50,4,0.006,none,2024-10-22\r\n14625,1097,EMEA,grocery,retail,50.49,7,0.229,bundle,2024-03-06\r\n14626,1980,LATAM,home,online,105.02,8,0.017,loyalty,2024-07-11\r\n14627,1316,APAC,toys,retail,50.16,5,0.224,none,2024-07-24\r\n14628,1541,APAC,electronics,online,67.69,3,0.067,none,2024-01-26\r\n14629,2372,AMER,grocery,retail,100.95,7,0.089,none,2024-05-24\r\n14630,1374,APAC,toys,online,188.83,2,0.108,none,2024-12-08\r\n14631,2186,LATAM,toys,online,49.58,3,0.070,loyalty,2024-12-03\r\n14632,1253,AMER,electronics,online,68.78,6,0.097,none,2024-06-08\r\n14633,2172,EMEA,electronics,online,49.31,2,0.180,coupo
n,2024-02-17\r\n14634,2069,AMER,fashion,online,100.29,5,0.149,bundle,2024-06-07\r\n14635,1453,APAC,electronics,online,49.86,2,0.157,none,2024-09-13\r\n14636,2360,EMEA,toys,partner,25.18,4,0.202,none,2024-05-19\r\n14637,1122,AMER,electronics,online,69.09,4,0.231,none,2024-08-15\r\n14638,1537,LATAM,home,retail,103.21,7,0.225,none,2024-04-03\r\n14639,2016,LATAM,toys,retail,103.11,5,0.044,coupon,2024-08-04\r\n14640,1496,AMER,electronics,mobile,37.60,5,0.249,coupon,2024-05-10\r\n14641,1950,LATAM,toys,retail,83.21,1,0.214,none,2024-11-15\r\n14642,1109,APAC,grocery,online,58.62,5,0.014,bundle,2024-07-03\r\n14643,1998,APAC,fashion,online,38.00,6,0.227,none,2024-05-20\r\n14644,2085,AMER,toys,retail,45.54,8,0.222,none,2024-11-26\r\n14645,1156,APAC,toys,retail,31.18,8,0.019,none,2024-09-13\r\n14646,2321,APAC,home,online,29.21,7,0.012,bundle,2024-06-08\r\n14647,1661,LATAM,home,online,47.21,4,0.217,loyalty,2024-09-22\r\n14648,2104,EMEA,grocery,online,141.76,7,0.133,none,2024-04-15\r\n14649,2329,LATAM,fashion,mobile,67.02,8,0.195,bundle,2024-08-15\r\n14650,1544,LATAM,grocery,mobile,128.22,1,0.023,none,2024-04-16\r\n14651,2460,AMER,home,online,114.26,4,0.074,bundle,2024-01-08\r\n14652,1636,APAC,fashion,retail,67.83,4,0.138,none,2024-01-12\r\n14653,1041,APAC,electronics,retail,38.87,7,0.223,none,2024-02-22\r\n14654,1055,AMER,home,mobile,46.87,4,0.226,bundle,2024-01-07\r\n14655,2450,EMEA,grocery,online,90.77,8,0.023,none,2024-02-25\r\n14656,1733,LATAM,sports,partner,60.59,2,0.163,none,2024-09-05\r\n14657,2142,LATAM,fashion,partner,40.81,8,0.224,loyalty,2024-01-12\r\n14658,2170,EMEA,fashion,retail,42.73,6,0.099,none,2024-02-17\r\n14659,1132,EMEA,electronics,mobile,58.01,7,0.219,none,2024-12-06\r\n14660,2497,AMER,electronics,online,23.16,3,0.246,bundle,2024-01-04\r\n14661,1128,LATAM,toys,mobile,133.20,3,0.224,coupon,2024-03-04\r\n14662,2110,LATAM,toys,online,66.20,5,0.121,loyalty,2024-11-02\r\n14663,1066,AMER,toys,retail,30.94,5,0.137,none,2024-02-18\r\n14664,1868,AMER,sports,online,7
1.88,6,0.207,bundle,2024-06-19\r\n14665,1241,APAC,electronics,retail,31.52,4,0.055,bundle,2024-06-01\r\n14666,1588,LATAM,fashion,retail,30.23,4,0.195,none,2024-02-20\r\n14667,1978,AMER,toys,retail,29.78,4,0.064,bundle,2024-04-19\r\n14668,1255,AMER,grocery,partner,27.62,5,0.104,none,2024-11-11\r\n14669,2298,APAC,grocery,partner,43.04,6,0.236,bundle,2024-03-25\r\n14670,1976,AMER,grocery,retail,169.14,6,0.092,loyalty,2024-08-18\r\n14671,1491,EMEA,electronics,retail,203.05,6,0.219,none,2024-06-06\r\n14672,2048,LATAM,fashion,online,56.23,8,0.074,coupon,2024-03-23\r\n14673,1375,AMER,grocery,mobile,74.85,1,0.195,none,2024-04-01\r\n14674,2463,AMER,electronics,retail,83.32,4,0.187,none,2024-01-27\r\n14675,1059,AMER,grocery,online,48.69,1,0.192,none,2024-01-02\r\n14676,1132,EMEA,electronics,retail,44.46,8,0.005,none,2024-06-18\r\n14677,1788,AMER,electronics,mobile,68.74,6,0.132,loyalty,2024-12-19\r\n14678,1381,LATAM,home,online,42.09,7,0.113,coupon,2024-08-15\r\n14679,1638,EMEA,toys,partner,69.26,7,0.240,bundle,2024-04-13\r\n14680,1596,EMEA,grocery,online,68.17,2,0.109,bundle,2024-11-20\r\n14681,1930,AMER,fashion,online,74.40,6,0.174,bundle,2024-01-10\r\n14682,1729,AMER,toys,mobile,74.67,3,0.157,none,2024-06-22\r\n14683,1396,EMEA,sports,mobile,106.87,5,0.187,bundle,2024-09-24\r\n14684,1616,APAC,sports,online,54.68,3,0.015,none,2024-11-28\r\n14685,1621,APAC,sports,online,97.55,5,0.071,coupon,2024-02-11\r\n14686,1155,EMEA,fashion,retail,56.83,2,0.083,none,2024-05-06\r\n14687,1551,APAC,home,retail,67.22,2,0.096,bundle,2024-08-09\r\n14688,1328,APAC,grocery,online,64.11,8,0.135,coupon,2024-11-04\r\n14689,2272,EMEA,fashion,retail,36.21,2,0.248,coupon,2024-11-27\r\n14690,1294,APAC,electronics,online,19.12,5,0.055,bundle,2024-01-04\r\n14691,1051,EMEA,home,partner,76.22,8,0.230,none,2024-10-22\r\n14692,1525,APAC,electronics,mobile,32.51,3,0.024,none,2024-12-06\r\n14693,1062,EMEA,electronics,online,14.74,5,0.082,none,2024-09-04\r\n14694,1410,AMER,grocery,retail,71.83,2,0.197,none,2024-
01-08\r\n14695,1649,APAC,home,online,108.47,2,0.215,bundle,2024-03-23\r\n14696,1108,EMEA,fashion,retail,29.26,2,0.171,bundle,2024-02-04\r\n14697,1303,LATAM,home,online,50.23,3,0.044,none,2024-11-14\r\n14698,1519,APAC,toys,online,131.47,5,0.042,none,2024-08-19\r\n14699,2445,APAC,electronics,online,34.75,2,0.216,none,2024-12-13\r\n14700,1804,AMER,electronics,online,73.10,4,0.214,none,2024-11-18\r\n14701,2055,AMER,sports,mobile,68.36,8,0.141,none,2024-12-04\r\n14702,1517,AMER,electronics,online,118.94,2,0.102,none,2024-09-23\r\n14703,2282,EMEA,grocery,retail,57.72,2,0.164,coupon,2024-04-16\r\n14704,1495,LATAM,fashion,online,69.60,8,0.235,none,2024-06-23\r\n14705,2440,APAC,fashion,mobile,152.78,8,0.158,bundle,2024-05-03\r\n14706,1319,EMEA,grocery,mobile,93.10,5,0.177,coupon,2024-11-11\r\n14707,1122,AMER,grocery,online,24.75,8,0.076,bundle,2024-04-28\r\n14708,1473,LATAM,grocery,online,68.57,1,0.020,none,2024-12-19\r\n14709,1474,LATAM,sports,online,47.07,1,0.187,bundle,2024-06-18\r\n14710,1834,AMER,toys,online,28.54,3,0.047,bundle,2024-06-14\r\n14711,1797,LATAM,home,retail,41.60,5,0.066,none,2024-07-04\r\n14712,2285,APAC,electronics,retail,69.74,5,0.026,none,2024-06-18\r\n14713,2491,APAC,fashion,partner,63.14,6,0.183,none,2024-07-25\r\n14714,1883,LATAM,electronics,online,117.42,6,0.055,none,2024-03-15\r\n14715,1044,EMEA,grocery,retail,85.04,3,0.149,none,2024-06-13\r\n14716,1850,APAC,home,retail,43.77,6,0.190,none,2024-07-17\r\n14717,1420,APAC,fashion,retail,59.00,7,0.247,none,2024-11-24\r\n14718,1663,LATAM,grocery,online,60.11,1,0.000,none,2024-11-20\r\n14719,1642,EMEA,fashion,retail,56.38,1,0.243,none,2024-04-20\r\n14720,1162,AMER,electronics,retail,100.15,8,0.203,loyalty,2024-01-08\r\n14721,1519,APAC,home,mobile,51.58,4,0.006,coupon,2024-06-26\r\n14722,2226,EMEA,grocery,retail,64.27,4,0.234,none,2024-10-24\r\n14723,2011,AMER,electronics,online,72.10,7,0.012,bundle,2024-10-12\r\n14724,2037,LATAM,grocery,partner,71.18,3,0.060,loyalty,2024-10-10\r\n14725,1953,EMEA,grocery,
online,138.86,3,0.015,none,2024-01-11\r\n14726,1105,AMER,grocery,retail,37.97,5,0.228,bundle,2024-11-08\r\n14727,2453,AMER,electronics,online,106.41,5,0.181,coupon,2024-09-13\r\n14728,1747,EMEA,electronics,online,51.91,5,0.205,none,2024-05-16\r\n14729,1421,APAC,electronics,mobile,37.81,6,0.246,bundle,2024-04-08\r\n14730,1876,LATAM,grocery,retail,77.09,3,0.204,none,2024-04-10\r\n14731,1604,EMEA,home,retail,53.54,6,0.074,none,2024-03-19\r\n14732,1860,EMEA,fashion,retail,125.51,2,0.160,coupon,2024-07-20\r\n14733,1440,AMER,sports,partner,95.73,5,0.105,none,2024-06-02\r\n14734,1541,APAC,home,retail,50.28,4,0.178,coupon,2024-06-20\r\n14735,1829,EMEA,grocery,online,23.48,4,0.249,none,2024-12-04\r\n14736,1889,APAC,grocery,online,107.92,7,0.002,none,2024-02-27\r\n14737,2433,APAC,sports,mobile,90.33,8,0.041,loyalty,2024-07-23\r\n14738,1202,APAC,home,online,37.44,7,0.000,none,2024-06-10\r\n14739,1752,APAC,grocery,online,105.80,8,0.189,none,2024-02-28\r\n14740,2085,AMER,toys,online,49.07,4,0.183,none,2024-10-05\r\n14741,2436,LATAM,toys,online,52.41,4,0.067,bundle,2024-07-11\r\n14742,1904,APAC,sports,online,37.58,4,0.203,none,2024-12-22\r\n14743,2316,EMEA,toys,mobile,57.96,6,0.121,none,2024-12-02\r\n14744,1337,APAC,grocery,online,35.44,3,0.179,loyalty,2024-08-23\r\n14745,2348,EMEA,grocery,online,47.72,5,0.209,coupon,2024-08-27\r\n14746,2017,EMEA,fashion,retail,54.66,7,0.038,none,2024-06-10\r\n14747,2406,EMEA,grocery,retail,33.34,6,0.038,bundle,2024-01-15\r\n14748,1246,EMEA,sports,retail,21.70,3,0.028,none,2024-07-27\r\n14749,1408,AMER,electronics,online,146.42,3,0.166,bundle,2024-05-04\r\n14750,2028,APAC,grocery,online,34.96,6,0.183,coupon,2024-06-24\r\n14751,2291,EMEA,grocery,retail,37.02,8,0.079,loyalty,2024-03-19\r\n14752,2239,EMEA,grocery,mobile,61.37,8,0.065,none,2024-09-06\r\n14753,1347,APAC,sports,online,53.54,4,0.080,none,2024-08-07\r\n14754,1666,LATAM,grocery,retail,151.05,5,0.104,none,2024-12-18\r\n14755,1689,LATAM,electronics,online,25.71,5,0.013,coupon,2024-07-03\r\n
14756,1169,LATAM,sports,retail,46.99,5,0.172,none,2024-04-16\r\n14757,1786,APAC,grocery,mobile,44.80,3,0.198,loyalty,2024-12-23\r\n14758,1286,EMEA,home,retail,36.86,2,0.228,none,2024-03-19\r\n14759,1723,LATAM,electronics,mobile,27.64,7,0.056,none,2024-11-21\r\n14760,2187,EMEA,sports,partner,46.56,3,0.134,none,2024-04-13\r\n14761,2309,AMER,home,retail,100.29,3,0.220,coupon,2024-11-09\r\n14762,2464,LATAM,sports,online,66.19,5,0.226,bundle,2024-08-27\r\n14763,2444,EMEA,electronics,online,84.12,7,0.050,loyalty,2024-01-09\r\n14764,2306,AMER,toys,online,25.22,8,0.128,none,2024-05-03\r\n14765,1141,AMER,sports,online,80.69,5,0.192,loyalty,2024-08-04\r\n14766,2190,LATAM,home,mobile,69.78,8,0.214,none,2024-12-11\r\n14767,1635,APAC,sports,online,29.02,6,0.161,none,2024-01-14\r\n14768,1380,AMER,sports,online,70.09,2,0.222,none,2024-05-22\r\n14769,2013,APAC,electronics,online,81.44,2,0.066,none,2024-05-04\r\n14770,1844,APAC,sports,online,42.15,5,0.056,loyalty,2024-12-05\r\n14771,1198,AMER,grocery,mobile,58.74,3,0.088,none,2024-11-11\r\n14772,2127,LATAM,toys,online,52.81,7,0.199,none,2024-02-16\r\n14773,1136,EMEA,home,retail,92.27,6,0.212,coupon,2024-03-26\r\n14774,1036,EMEA,fashion,online,42.27,3,0.120,coupon,2024-08-22\r\n14775,1919,EMEA,electronics,online,51.04,1,0.226,coupon,2024-07-19\r\n14776,1408,AMER,electronics,online,26.54,3,0.188,coupon,2024-10-20\r\n14777,1391,LATAM,sports,online,61.38,7,0.142,none,2024-08-24\r\n14778,1915,LATAM,electronics,mobile,62.71,1,0.005,none,2024-08-22\r\n14779,1333,EMEA,grocery,mobile,21.53,8,0.208,bundle,2024-04-22\r\n14780,1985,AMER,grocery,online,25.30,1,0.218,coupon,2024-06-06\r\n14781,2246,AMER,sports,mobile,37.44,5,0.054,bundle,2024-12-07\r\n14782,2340,EMEA,grocery,mobile,30.86,1,0.244,none,2024-07-06\r\n14783,2288,AMER,fashion,partner,33.68,6,0.000,coupon,2024-04-06\r\n14784,1827,EMEA,electronics,mobile,41.17,8,0.093,none,2024-12-09\r\n14785,1716,LATAM,electronics,retail,104.17,6,0.193,none,2024-06-22\r\n14786,1462,LATAM,electronics,mo
bile,68.69,3,0.096,none,2024-08-18\r\n14787,1988,AMER,home,retail,56.56,3,0.239,coupon,2024-02-14\r\n14788,1184,AMER,electronics,online,33.55,8,0.211,none,2024-04-24\r\n14789,1350,LATAM,sports,online,27.00,6,0.085,loyalty,2024-06-11\r\n14790,1474,LATAM,fashion,retail,52.21,5,0.085,none,2024-12-14\r\n14791,1005,LATAM,home,mobile,62.45,3,0.063,none,2024-10-04\r\n14792,1457,EMEA,grocery,online,50.24,6,0.004,none,2024-08-14\r\n14793,2126,APAC,grocery,retail,50.60,1,0.163,none,2024-11-18\r\n14794,1884,APAC,home,online,127.85,7,0.058,bundle,2024-01-05\r\n14795,2342,AMER,home,mobile,48.59,5,0.052,bundle,2024-12-23\r\n14796,1170,AMER,electronics,online,45.50,1,0.082,none,2024-02-23\r\n14797,2383,APAC,electronics,mobile,41.03,3,0.066,none,2024-09-06\r\n14798,2137,LATAM,home,online,40.94,6,0.093,bundle,2024-07-27\r\n14799,2405,AMER,electronics,online,130.88,3,0.051,coupon,2024-07-07\r\n14800,2449,LATAM,sports,online,20.23,2,0.200,none,2024-08-09\r\n14801,1037,EMEA,electronics,online,124.52,7,0.049,none,2024-08-03\r\n14802,1639,APAC,sports,online,78.84,1,0.048,none,2024-12-03\r\n14803,1136,EMEA,fashion,mobile,40.02,3,0.011,none,2024-03-15\r\n14804,2221,LATAM,electronics,online,63.93,1,0.058,none,2024-09-05\r\n14805,1885,EMEA,home,retail,97.51,7,0.033,bundle,2024-11-22\r\n14806,1780,APAC,fashion,mobile,91.13,1,0.206,none,2024-05-27\r\n14807,1271,EMEA,fashion,partner,119.55,4,0.230,coupon,2024-01-17\r\n14808,1912,APAC,electronics,retail,68.15,4,0.182,none,2024-12-02\r\n14809,2072,AMER,electronics,partner,85.99,5,0.224,coupon,2024-03-24\r\n14810,1758,AMER,sports,online,110.69,4,0.153,none,2024-05-10\r\n14811,2179,LATAM,electronics,mobile,53.70,2,0.025,bundle,2024-05-02\r\n14812,1515,EMEA,grocery,online,61.23,8,0.123,loyalty,2024-04-22\r\n14813,1070,EMEA,toys,mobile,63.40,2,0.067,none,2024-10-15\r\n14814,1885,EMEA,electronics,partner,66.27,3,0.238,none,2024-09-08\r\n14815,1851,EMEA,toys,online,41.82,2,0.030,bundle,2024-08-21\r\n14816,2247,LATAM,grocery,retail,100.80,2,0.102,bundle
,2024-10-21\r\n14817,1171,APAC,toys,online,50.58,8,0.150,coupon,2024-01-05\r\n14818,1975,EMEA,fashion,online,84.20,3,0.149,none,2024-09-10\r\n14819,1828,EMEA,toys,partner,52.98,6,0.081,coupon,2024-05-03\r\n14820,2301,EMEA,toys,partner,154.43,8,0.158,none,2024-05-22\r\n14821,1769,LATAM,grocery,online,30.80,7,0.075,none,2024-08-02\r\n14822,2317,LATAM,home,online,53.82,5,0.148,bundle,2024-08-26\r\n14823,2376,LATAM,toys,online,96.52,2,0.241,coupon,2024-06-13\r\n14824,2279,LATAM,sports,online,55.48,7,0.103,coupon,2024-02-06\r\n14825,1139,EMEA,home,online,50.92,2,0.201,none,2024-04-14\r\n14826,1365,LATAM,home,retail,97.82,8,0.025,none,2024-09-23\r\n14827,2316,EMEA,electronics,online,78.63,8,0.108,coupon,2024-10-21\r\n14828,1444,EMEA,grocery,mobile,95.85,5,0.197,coupon,2024-12-01\r\n14829,1068,APAC,grocery,retail,47.69,7,0.163,coupon,2024-01-17\r\n14830,1880,LATAM,home,retail,99.10,4,0.064,none,2024-05-21\r\n14831,2186,LATAM,grocery,online,67.61,1,0.065,none,2024-09-17\r\n14832,1548,EMEA,sports,retail,84.16,8,0.069,none,2024-04-13\r\n14833,1106,AMER,electronics,mobile,71.41,4,0.097,none,2024-12-16\r\n14834,1689,LATAM,grocery,retail,74.57,1,0.067,none,2024-06-20\r\n14835,1765,EMEA,fashion,online,33.97,8,0.033,none,2024-06-16\r\n14836,1936,EMEA,grocery,online,19.74,4,0.211,bundle,2024-05-09\r\n14837,1418,LATAM,toys,retail,80.49,5,0.210,none,2024-11-27\r\n14838,1103,EMEA,electronics,online,48.98,4,0.013,none,2024-02-04\r\n14839,1745,APAC,electronics,mobile,54.84,5,0.207,bundle,2024-12-07\r\n14840,1644,EMEA,home,mobile,41.48,2,0.102,coupon,2024-03-19\r\n14841,1262,APAC,home,mobile,94.06,5,0.097,none,2024-01-13\r\n14842,1514,LATAM,fashion,mobile,39.32,4,0.053,none,2024-12-11\r\n14843,1341,EMEA,home,retail,48.26,6,0.133,bundle,2024-09-13\r\n14844,1089,LATAM,grocery,partner,38.66,6,0.159,coupon,2024-07-10\r\n14845,1314,AMER,sports,partner,15.38,1,0.018,coupon,2024-02-17\r\n14846,2492,LATAM,fashion,retail,73.63,6,0.070,none,2024-09-28\r\n14847,1064,AMER,fashion,online,21.66,7,0.18
1,none,2024-04-09\r\n14848,1677,EMEA,toys,mobile,32.31,5,0.025,none,2024-02-24\r\n14849,1216,APAC,electronics,mobile,85.56,3,0.120,loyalty,2024-09-21\r\n14850,1262,APAC,sports,mobile,70.48,8,0.099,bundle,2024-02-17\r\n14851,1033,APAC,electronics,partner,78.49,8,0.036,loyalty,2024-03-26\r\n14852,1468,AMER,fashion,online,88.42,8,0.012,none,2024-11-21\r\n14853,1577,AMER,fashion,online,32.09,5,0.177,none,2024-08-24\r\n14854,1263,AMER,electronics,retail,118.32,4,0.123,none,2024-01-04\r\n14855,1725,APAC,electronics,retail,17.90,3,0.098,none,2024-06-18\r\n14856,1119,LATAM,sports,online,55.14,7,0.031,none,2024-08-16\r\n14857,1490,AMER,grocery,online,104.79,1,0.137,none,2024-03-24\r\n14858,1616,APAC,toys,online,28.26,1,0.172,none,2024-03-20\r\n14859,2284,EMEA,fashion,mobile,37.88,6,0.107,none,2024-08-28\r\n14860,1149,LATAM,home,online,65.74,7,0.089,none,2024-02-15\r\n14861,1294,APAC,grocery,online,132.54,1,0.010,none,2024-11-17\r\n14862,2297,EMEA,toys,retail,66.49,4,0.249,none,2024-02-25\r\n14863,1892,LATAM,grocery,retail,92.34,8,0.031,none,2024-11-28\r\n14864,2077,APAC,grocery,mobile,84.38,4,0.081,coupon,2024-07-09\r\n14865,1327,APAC,grocery,retail,24.90,4,0.212,loyalty,2024-01-09\r\n14866,1711,APAC,electronics,retail,88.17,8,0.190,none,2024-07-21\r\n14867,2210,APAC,electronics,retail,37.33,7,0.068,bundle,2024-02-10\r\n14868,1238,AMER,sports,retail,73.21,3,0.116,bundle,2024-04-09\r\n14869,1835,AMER,electronics,online,88.86,4,0.223,bundle,2024-01-15\r\n14870,2469,LATAM,grocery,online,25.73,5,0.115,bundle,2024-05-27\r\n14871,1044,EMEA,grocery,retail,25.80,7,0.077,none,2024-02-25\r\n14872,1557,LATAM,grocery,retail,33.44,2,0.103,coupon,2024-01-20\r\n14873,2286,AMER,toys,online,73.40,7,0.212,none,2024-01-15\r\n14874,2351,EMEA,toys,partner,53.71,6,0.111,coupon,2024-10-13\r\n14875,1203,AMER,sports,mobile,86.41,5,0.222,loyalty,2024-02-15\r\n14876,1791,LATAM,grocery,online,55.52,3,0.203,bundle,2024-12-15\r\n14877,2070,APAC,toys,mobile,41.32,3,0.209,loyalty,2024-03-20\r\n14878,1405,L
ATAM,toys,mobile,59.50,6,0.228,bundle,2024-01-20\r\n14879,1442,EMEA,home,mobile,98.24,8,0.131,coupon,2024-12-06\r\n14880,2314,EMEA,home,mobile,32.93,2,0.012,loyalty,2024-06-14\r\n14881,2203,APAC,grocery,retail,67.20,3,0.242,none,2024-05-24\r\n14882,1851,EMEA,grocery,online,58.02,7,0.116,coupon,2024-04-17\r\n14883,1621,APAC,fashion,online,81.57,8,0.078,coupon,2024-04-17\r\n14884,2119,AMER,home,retail,23.00,7,0.248,coupon,2024-05-20\r\n14885,1824,LATAM,electronics,retail,157.79,7,0.179,bundle,2024-10-10\r\n14886,1279,EMEA,grocery,retail,42.49,2,0.054,loyalty,2024-12-25\r\n14887,1194,APAC,electronics,retail,20.70,7,0.216,none,2024-06-27\r\n14888,2432,AMER,electronics,mobile,31.20,4,0.157,loyalty,2024-03-22\r\n14889,1531,EMEA,grocery,partner,62.01,7,0.188,none,2024-03-10\r\n14890,1124,AMER,grocery,partner,25.58,4,0.142,none,2024-01-05\r\n14891,1694,APAC,fashion,retail,34.83,3,0.018,none,2024-02-12\r\n14892,2098,AMER,grocery,mobile,42.65,8,0.072,none,2024-08-17\r\n14893,2419,LATAM,fashion,online,68.05,1,0.104,none,2024-03-13\r\n14894,1333,EMEA,grocery,retail,57.58,1,0.204,none,2024-01-13\r\n14895,1803,LATAM,sports,online,139.79,5,0.070,none,2024-01-20\r\n14896,2068,LATAM,electronics,partner,41.60,6,0.188,loyalty,2024-07-17\r\n14897,1952,EMEA,grocery,mobile,58.98,3,0.094,none,2024-10-27\r\n14898,2150,APAC,electronics,retail,48.11,1,0.234,none,2024-08-25\r\n14899,1004,LATAM,electronics,mobile,35.08,2,0.244,none,2024-09-15\r\n14900,2475,AMER,home,online,63.77,6,0.149,none,2024-06-01\r\n14901,2030,EMEA,electronics,online,51.15,2,0.052,none,2024-08-25\r\n14902,1256,LATAM,grocery,mobile,39.54,5,0.195,bundle,2024-11-18\r\n14903,2434,APAC,fashion,retail,58.07,1,0.047,none,2024-03-02\r\n14904,2322,AMER,fashion,retail,57.45,6,0.173,bundle,2024-07-23\r\n14905,2485,AMER,electronics,online,23.62,8,0.153,none,2024-08-19\r\n14906,2373,LATAM,fashion,online,33.97,1,0.189,none,2024-08-16\r\n14907,1066,AMER,electronics,online,56.06,8,0.151,none,2024-03-10\r\n14908,1417,APAC,electronics,mob
ile,92.96,8,0.132,coupon,2024-09-01\r\n14909,2188,EMEA,electronics,online,31.06,5,0.165,coupon,2024-03-23\r\n14910,1901,AMER,electronics,retail,70.71,6,0.140,bundle,2024-12-02\r\n14911,2201,AMER,electronics,online,33.89,2,0.122,bundle,2024-07-21\r\n14912,1579,AMER,grocery,online,93.55,6,0.243,coupon,2024-02-25\r\n14913,2465,EMEA,grocery,online,29.31,6,0.231,none,2024-05-03\r\n14914,2260,EMEA,sports,online,24.26,2,0.173,coupon,2024-09-02\r\n14915,1501,AMER,fashion,retail,35.65,6,0.164,none,2024-02-15\r\n14916,1867,AMER,electronics,online,59.96,1,0.184,none,2024-07-26\r\n14917,2160,LATAM,sports,online,42.13,1,0.024,coupon,2024-09-26\r\n14918,2314,EMEA,electronics,online,197.85,5,0.129,none,2024-06-25\r\n14919,1222,AMER,electronics,retail,45.40,2,0.107,coupon,2024-12-26\r\n14920,1944,AMER,home,online,24.46,4,0.061,none,2024-04-02\r\n14921,2209,AMER,grocery,partner,48.98,8,0.204,none,2024-06-14\r\n14922,1736,AMER,toys,online,45.95,8,0.249,none,2024-11-12\r\n14923,1680,LATAM,toys,online,37.16,8,0.073,none,2024-12-10\r\n14924,1230,EMEA,toys,online,136.62,1,0.225,none,2024-08-07\r\n14925,1553,LATAM,sports,retail,48.87,2,0.037,none,2024-06-10\r\n14926,1190,EMEA,sports,online,63.56,5,0.032,loyalty,2024-11-06\r\n14927,1579,AMER,fashion,retail,37.58,8,0.145,loyalty,2024-08-09\r\n14928,2225,EMEA,toys,online,28.59,1,0.122,none,2024-05-20\r\n14929,1615,LATAM,grocery,online,42.05,4,0.102,none,2024-11-19\r\n14930,1809,APAC,electronics,online,67.95,4,0.211,none,2024-02-06\r\n14931,1600,AMER,fashion,online,70.53,4,0.022,bundle,2024-03-28\r\n14932,2321,APAC,electronics,online,46.00,8,0.124,loyalty,2024-07-14\r\n14933,2375,AMER,electronics,retail,105.92,6,0.144,none,2024-09-27\r\n14934,1001,LATAM,grocery,online,52.30,3,0.172,coupon,2024-09-25\r\n14935,2329,LATAM,electronics,online,85.23,7,0.226,none,2024-12-03\r\n14936,1313,EMEA,home,online,75.45,5,0.149,coupon,2024-08-14\r\n14937,2474,LATAM,grocery,retail,52.08,4,0.088,none,2024-12-12\r\n14938,1364,EMEA,home,online,20.84,1,0.016,coupo
n,2024-04-19\r\n14939,1841,AMER,home,online,59.09,1,0.063,coupon,2024-08-19\r\n14940,2138,APAC,electronics,online,39.56,8,0.035,coupon,2024-03-09\r\n14941,1175,AMER,sports,online,178.85,6,0.093,loyalty,2024-10-24\r\n14942,1796,LATAM,toys,online,73.01,6,0.204,bundle,2024-12-23\r\n14943,1876,LATAM,home,retail,51.02,6,0.249,none,2024-08-10\r\n14944,1436,APAC,electronics,retail,26.85,3,0.201,none,2024-03-27\r\n14945,1909,APAC,fashion,retail,91.15,1,0.005,coupon,2024-08-24\r\n14946,2469,LATAM,toys,mobile,113.90,2,0.060,coupon,2024-03-12\r\n14947,1077,AMER,home,online,49.55,3,0.063,none,2024-10-19\r\n14948,2437,LATAM,fashion,mobile,127.05,3,0.148,none,2024-08-09\r\n14949,1101,AMER,fashion,online,97.38,5,0.003,bundle,2024-04-16\r\n14950,2448,APAC,grocery,online,137.04,4,0.168,none,2024-01-08\r\n14951,2394,EMEA,sports,online,56.75,8,0.105,coupon,2024-06-27\r\n14952,1835,AMER,fashion,mobile,66.77,8,0.105,none,2024-06-26\r\n14953,1155,EMEA,home,online,70.84,3,0.175,coupon,2024-03-03\r\n14954,1029,EMEA,electronics,online,46.50,2,0.247,coupon,2024-08-11\r\n14955,1004,LATAM,electronics,online,57.90,6,0.083,none,2024-02-02\r\n14956,1691,LATAM,home,online,25.46,6,0.237,bundle,2024-11-05\r\n14957,1571,EMEA,home,retail,31.71,4,0.193,none,2024-11-21\r\n14958,1467,LATAM,sports,retail,101.27,7,0.126,none,2024-07-07\r\n14959,1470,LATAM,grocery,online,124.34,7,0.047,bundle,2024-06-25\r\n14960,2092,AMER,grocery,online,65.81,4,0.103,none,2024-12-21\r\n14961,1940,APAC,sports,retail,87.81,7,0.227,coupon,2024-08-17\r\n14962,1234,AMER,grocery,retail,49.75,5,0.214,loyalty,2024-03-04\r\n14963,1309,EMEA,home,retail,48.70,8,0.242,none,2024-09-13\r\n14964,1929,LATAM,grocery,online,66.93,6,0.160,none,2024-09-14\r\n14965,2211,APAC,grocery,online,193.86,6,0.095,none,2024-01-07\r\n14966,2059,AMER,home,online,53.17,1,0.144,none,2024-11-07\r\n14967,1380,AMER,sports,online,56.38,3,0.035,coupon,2024-12-03\r\n14968,1784,EMEA,sports,online,43.69,1,0.166,none,2024-02-18\r\n14969,1276,AMER,electronics,online,3
0.95,2,0.167,none,2024-05-28\r\n14970,1109,APAC,fashion,online,51.47,4,0.046,loyalty,2024-10-20\r\n14971,2223,EMEA,home,partner,49.97,3,0.079,coupon,2024-11-22\r\n14972,1419,APAC,home,retail,93.87,1,0.231,coupon,2024-03-22\r\n14973,1881,LATAM,fashion,online,49.65,6,0.188,none,2024-11-22\r\n14974,1473,LATAM,electronics,online,49.38,6,0.120,none,2024-08-08\r\n14975,1445,APAC,home,retail,104.55,1,0.207,loyalty,2024-07-25\r\n14976,1537,LATAM,home,retail,102.82,2,0.142,none,2024-06-26\r\n14977,1281,AMER,sports,online,114.66,6,0.146,none,2024-04-16\r\n14978,1576,EMEA,grocery,online,38.71,4,0.066,none,2024-11-20\r\n14979,1014,EMEA,sports,retail,72.63,1,0.154,none,2024-10-25\r\n14980,1881,LATAM,sports,retail,46.01,7,0.123,bundle,2024-09-07\r\n14981,1534,EMEA,sports,online,63.92,2,0.247,none,2024-12-17\r\n14982,1846,APAC,sports,retail,40.10,7,0.235,coupon,2024-06-02\r\n14983,1471,EMEA,grocery,partner,52.56,8,0.220,none,2024-06-23\r\n14984,1682,EMEA,electronics,online,68.12,3,0.023,none,2024-04-08\r\n14985,1486,LATAM,electronics,retail,59.69,7,0.118,bundle,2024-06-07\r\n14986,2157,AMER,fashion,online,65.18,2,0.142,loyalty,2024-09-04\r\n14987,2082,APAC,electronics,online,43.11,1,0.102,coupon,2024-05-22\r\n14988,1271,EMEA,home,retail,67.32,8,0.042,bundle,2024-02-17\r\n14989,1678,LATAM,grocery,online,22.77,3,0.187,coupon,2024-05-05\r\n14990,2457,EMEA,grocery,online,62.81,6,0.067,none,2024-11-11\r\n14991,1532,APAC,fashion,retail,56.52,6,0.173,bundle,2024-05-19\r\n14992,1230,EMEA,sports,mobile,62.73,6,0.163,none,2024-10-07\r\n14993,1556,AMER,toys,online,29.11,1,0.040,loyalty,2024-07-06\r\n14994,1109,APAC,grocery,retail,105.60,7,0.025,bundle,2024-07-13\r\n14995,1618,EMEA,fashion,online,107.36,8,0.065,coupon,2024-07-25\r\n14996,1180,AMER,home,mobile,83.51,8,0.033,loyalty,2024-04-26\r\n14997,1177,LATAM,electronics,partner,32.74,8,0.189,coupon,2024-12-21\r\n14998,2409,APAC,fashion,retail,49.05,6,0.059,none,2024-12-26\r\n14999,1450,EMEA,home,online,91.39,2,0.007,none,2024-07-12\r\n1500
0,2014,EMEA,electronics,retail,53.27,6,0.059,loyalty,2024-10-19\r\n15001,2122,AMER,grocery,mobile,23.54,1,0.077,none,2024-10-09\r\n15002,1809,APAC,sports,partner,96.32,8,0.044,none,2024-08-04\r\n15003,1602,EMEA,fashion,online,51.39,6,0.003,none,2024-08-10\r\n15004,2089,EMEA,grocery,online,112.50,1,0.009,bundle,2024-03-25\r\n15005,1350,LATAM,electronics,partner,60.36,5,0.090,coupon,2024-10-24\r\n15006,1198,AMER,home,online,83.33,6,0.161,bundle,2024-01-04\r\n15007,1059,AMER,fashion,online,74.86,3,0.075,bundle,2024-11-23\r\n15008,1687,APAC,grocery,online,56.80,4,0.148,coupon,2024-02-12\r\n15009,1391,LATAM,electronics,retail,33.94,7,0.144,none,2024-05-14\r\n15010,1623,AMER,fashion,mobile,73.74,4,0.130,coupon,2024-11-22\r\n15011,1414,APAC,sports,online,100.33,2,0.008,none,2024-02-07\r\n15012,2099,AMER,fashion,mobile,53.63,1,0.166,none,2024-10-12\r\n15013,2451,APAC,fashion,online,61.94,5,0.127,none,2024-12-07\r\n15014,1899,APAC,home,partner,74.61,5,0.008,none,2024-03-10\r\n15015,1729,AMER,electronics,online,45.79,1,0.100,none,2024-09-12\r\n15016,2424,LATAM,sports,mobile,63.54,3,0.140,none,2024-01-02\r\n15017,1977,APAC,home,retail,51.97,8,0.029,none,2024-12-22\r\n15018,2477,APAC,electronics,online,50.87,7,0.015,none,2024-05-07\r\n15019,2474,LATAM,home,retail,49.12,2,0.018,none,2024-11-10\r\n15020,1819,AMER,toys,online,69.00,3,0.199,loyalty,2024-07-05\r\n15021,1744,EMEA,home,online,95.03,6,0.080,coupon,2024-08-20\r\n15022,2473,EMEA,sports,online,50.31,1,0.063,coupon,2024-02-18\r\n15023,1736,AMER,home,online,75.98,7,0.033,none,2024-08-21\r\n15024,2223,EMEA,toys,retail,57.72,1,0.210,none,2024-09-25\r\n15025,2484,APAC,electronics,retail,115.75,5,0.202,none,2024-08-23\r\n15026,1430,EMEA,sports,online,67.09,8,0.207,bundle,2024-03-28\r\n15027,1472,AMER,electronics,online,65.10,3,0.134,bundle,2024-11-28\r\n15028,1039,AMER,home,online,50.50,4,0.117,coupon,2024-03-10\r\n15029,1577,AMER,electronics,online,74.47,5,0.093,bundle,2024-08-16\r\n15030,2144,EMEA,home,mobile,46.69,6,0.184,lo
yalty,2024-06-27\r\n15031,2011,AMER,grocery,retail,20.25,2,0.014,none,2024-12-26\r\n15032,1117,LATAM,electronics,mobile,142.65,3,0.049,none,2024-07-18\r\n15033,1381,LATAM,grocery,retail,46.79,3,0.103,none,2024-12-15\r\n15034,1297,AMER,home,online,15.07,3,0.217,none,2024-07-16\r\n15035,2484,APAC,electronics,online,71.90,2,0.120,bundle,2024-01-04\r\n15036,1637,APAC,electronics,retail,226.26,6,0.145,coupon,2024-07-26\r\n15037,2406,EMEA,home,retail,54.98,6,0.232,none,2024-09-17\r\n15038,1553,LATAM,electronics,online,147.61,8,0.104,none,2024-06-19\r\n15039,1632,LATAM,toys,retail,79.28,3,0.031,none,2024-04-04\r\n15040,2251,APAC,grocery,online,47.82,4,0.211,none,2024-04-28\r\n15041,1468,AMER,grocery,online,48.91,7,0.114,none,2024-03-09\r\n15042,2087,LATAM,toys,partner,32.30,7,0.219,bundle,2024-11-05\r\n15043,1351,APAC,home,online,78.36,8,0.143,bundle,2024-12-01\r\n15044,2423,LATAM,fashion,retail,65.56,2,0.076,none,2024-12-11\r\n15045,2016,LATAM,fashion,partner,62.60,4,0.021,none,2024-06-24\r\n15046,1748,APAC,fashion,online,71.24,7,0.141,bundle,2024-07-27\r\n15047,2323,AMER,toys,online,68.17,7,0.151,coupon,2024-11-06\r\n15048,1000,APAC,home,mobile,74.13,4,0.094,none,2024-02-16\r\n15049,2237,EMEA,fashion,online,81.54,3,0.232,bundle,2024-05-01\r\n15050,2341,EMEA,grocery,online,69.42,5,0.066,none,2024-03-17\r\n15051,1500,EMEA,electronics,online,34.90,4,0.223,none,2024-07-14\r\n15052,2157,AMER,grocery,online,37.31,1,0.215,coupon,2024-07-23\r\n15053,1078,APAC,toys,retail,65.99,4,0.165,bundle,2024-04-25\r\n15054,1617,AMER,sports,mobile,71.42,7,0.244,none,2024-06-02\r\n15055,1164,EMEA,fashion,online,78.89,1,0.107,none,2024-08-05\r\n15056,1970,LATAM,fashion,retail,124.60,2,0.104,loyalty,2024-05-09\r\n15057,2001,EMEA,sports,retail,128.59,8,0.046,none,2024-07-01\r\n15058,2202,APAC,sports,retail,76.70,7,0.214,none,2024-10-01\r\n15059,1094,LATAM,home,retail,27.75,5,0.010,coupon,2024-12-11\r\n15060,1836,LATAM,grocery,retail,22.48,7,0.182,coupon,2024-06-05\r\n15061,1582,AMER,fashion,reta
il,128.04,8,0.038,coupon,2024-07-05\r\n15062,2229,APAC,home,retail,145.94,8,0.246,none,2024-10-17\r\n15063,1768,AMER,home,retail,57.44,1,0.136,none,2024-04-23\r\n15064,2156,AMER,home,partner,188.99,7,0.116,bundle,2024-05-12\r\n15065,2141,AMER,electronics,online,49.50,7,0.048,none,2024-10-28\r\n15066,2488,EMEA,grocery,retail,61.61,8,0.173,bundle,2024-12-15\r\n15067,1012,LATAM,home,retail,139.14,3,0.188,none,2024-11-20\r\n15068,1513,APAC,sports,online,120.48,8,0.041,none,2024-07-10\r\n15069,1845,AMER,toys,partner,58.67,7,0.001,none,2024-04-12\r\n15070,1516,EMEA,sports,online,86.01,6,0.116,none,2024-11-10\r\n15071,1246,EMEA,grocery,retail,47.48,5,0.142,bundle,2024-09-14\r\n15072,1782,LATAM,home,online,43.01,7,0.223,none,2024-03-28\r\n15073,1785,EMEA,grocery,online,49.98,5,0.039,none,2024-04-13\r\n15074,1503,APAC,grocery,online,236.61,7,0.249,none,2024-08-03\r\n15075,2454,LATAM,fashion,online,106.40,1,0.232,none,2024-04-20\r\n15076,1800,APAC,home,online,82.77,5,0.067,bundle,2024-05-20\r\n15077,2253,AMER,toys,online,117.72,2,0.104,coupon,2024-04-17\r\n15078,1445,APAC,toys,partner,31.68,5,0.224,none,2024-03-04\r\n15079,1364,EMEA,electronics,mobile,29.68,7,0.182,none,2024-11-03\r\n15080,1466,AMER,home,online,54.28,1,0.144,none,2024-10-22\r\n15081,1579,AMER,grocery,online,34.85,6,0.164,none,2024-04-20\r\n15082,2350,APAC,electronics,mobile,118.35,2,0.038,coupon,2024-05-12\r\n15083,1408,AMER,grocery,online,67.07,3,0.166,coupon,2024-06-12\r\n15084,1104,APAC,fashion,online,68.89,4,0.054,bundle,2024-01-22\r\n15085,1646,APAC,sports,online,25.61,3,0.109,coupon,2024-08-08\r\n15086,1098,APAC,sports,mobile,84.25,7,0.124,coupon,2024-02-25\r\n15087,1855,APAC,electronics,retail,56.92,4,0.189,none,2024-04-08\r\n15088,2011,AMER,grocery,online,68.37,4,0.158,coupon,2024-09-09\r\n15089,1198,AMER,grocery,retail,72.76,5,0.130,coupon,2024-08-26\r\n15090,2326,LATAM,electronics,partner,13.34,2,0.162,none,2024-01-17\r\n15091,1819,AMER,sports,mobile,51.13,2,0.116,loyalty,2024-08-07\r\n15092,1365,LA
TAM,electronics,retail,146.20,7,0.192,none,2024-03-10\r\n15093,1283,APAC,toys,online,65.09,2,0.055,coupon,2024-05-06\r\n15094,2298,APAC,fashion,online,87.29,6,0.057,bundle,2024-08-19\r\n15095,2241,APAC,toys,retail,68.52,4,0.170,none,2024-03-11\r\n15096,2135,EMEA,grocery,online,46.20,3,0.084,none,2024-09-26\r\n15097,1776,APAC,home,retail,55.78,8,0.099,coupon,2024-12-02\r\n15098,1470,LATAM,grocery,online,68.11,3,0.088,coupon,2024-02-25\r\n15099,2401,LATAM,sports,mobile,160.81,5,0.087,none,2024-06-24\r\n15100,1685,AMER,toys,mobile,40.52,2,0.026,coupon,2024-06-25\r\n15101,1067,APAC,home,online,45.01,6,0.018,coupon,2024-11-25\r\n15102,1644,EMEA,sports,partner,16.43,8,0.224,bundle,2024-08-06\r\n15103,2006,APAC,electronics,online,49.71,5,0.160,coupon,2024-11-09\r\n15104,1196,APAC,electronics,online,31.44,5,0.033,none,2024-02-11\r\n15105,1392,AMER,home,retail,106.74,4,0.210,none,2024-01-15\r\n15106,1828,EMEA,home,mobile,58.78,5,0.017,none,2024-08-23\r\n15107,1956,APAC,fashion,retail,52.10,3,0.159,coupon,2024-03-17\r\n15108,1471,EMEA,sports,online,55.28,1,0.208,loyalty,2024-01-21\r\n15109,1114,APAC,electronics,retail,50.26,4,0.079,coupon,2024-07-22\r\n15110,2184,APAC,grocery,online,119.42,4,0.015,coupon,2024-08-05\r\n15111,1135,APAC,grocery,mobile,33.69,3,0.046,coupon,2024-02-27\r\n15112,1653,APAC,home,online,93.55,2,0.189,coupon,2024-08-23\r\n15113,2488,EMEA,fashion,retail,29.40,3,0.097,bundle,2024-06-28\r\n15114,1326,AMER,electronics,retail,69.34,3,0.070,none,2024-02-16\r\n15115,2351,EMEA,home,online,102.18,5,0.107,none,2024-05-06\r\n15116,2172,EMEA,sports,mobile,95.99,7,0.125,none,2024-09-05\r\n15117,1788,AMER,toys,retail,82.96,8,0.094,coupon,2024-11-23\r\n15118,1329,APAC,sports,retail,41.35,3,0.152,coupon,2024-03-21\r\n15119,1807,EMEA,fashion,online,35.39,3,0.230,loyalty,2024-02-14\r\n15120,1380,AMER,home,retail,47.38,7,0.153,none,2024-03-23\r\n15121,2058,LATAM,electronics,retail,74.20,2,0.007,none,2024-10-13\r\n15122,1099,LATAM,fashion,online,101.79,7,0.237,none,2024-09
-22\r\n15123,1425,EMEA,home,online,70.67,2,0.115,coupon,2024-08-14\r\n15124,1428,APAC,electronics,retail,65.40,2,0.239,none,2024-05-24\r\n15125,1433,EMEA,grocery,online,48.45,4,0.223,none,2024-01-18\r\n15126,1101,AMER,toys,retail,51.51,1,0.145,none,2024-04-19\r\n15127,1894,APAC,grocery,retail,52.17,6,0.246,none,2024-07-04\r\n15128,1894,APAC,electronics,retail,35.32,1,0.097,none,2024-06-21\r\n15129,1233,AMER,home,retail,65.33,4,0.055,none,2024-10-03\r\n15130,2160,LATAM,toys,online,156.49,4,0.187,none,2024-03-12\r\n15131,1863,EMEA,grocery,online,86.79,4,0.028,bundle,2024-09-09\r\n15132,1474,LATAM,sports,retail,41.55,4,0.227,none,2024-08-26\r\n15133,2352,APAC,grocery,retail,59.20,4,0.188,none,2024-09-03\r\n15134,1535,AMER,sports,online,101.56,4,0.192,coupon,2024-02-23\r\n15135,1930,AMER,electronics,online,59.31,8,0.031,loyalty,2024-01-14\r\n15136,1494,AMER,home,retail,36.02,8,0.167,none,2024-03-04\r\n15137,2349,APAC,electronics,partner,65.98,7,0.249,coupon,2024-11-11\r\n15138,1889,APAC,sports,online,28.62,5,0.203,coupon,2024-05-09\r\n15139,1745,APAC,electronics,online,128.30,2,0.094,coupon,2024-11-13\r\n15140,1576,EMEA,toys,mobile,34.98,2,0.013,coupon,2024-07-24\r\n15141,2201,AMER,grocery,online,63.16,1,0.156,loyalty,2024-09-16\r\n15142,2137,LATAM,grocery,retail,44.29,8,0.133,none,2024-08-23\r\n15143,1932,EMEA,toys,retail,116.39,3,0.245,none,2024-05-18\r\n15144,1217,EMEA,grocery,retail,32.14,1,0.247,none,2024-07-12\r\n15145,1079,LATAM,home,retail,93.31,6,0.013,loyalty,2024-07-18\r\n15146,1342,LATAM,home,online,36.95,8,0.109,none,2024-09-02\r\n15147,2141,AMER,electronics,retail,74.62,6,0.090,none,2024-11-28\r\n15148,1522,LATAM,grocery,online,33.57,8,0.110,none,2024-06-07\r\n15149,1019,APAC,grocery,mobile,100.37,3,0.104,none,2024-06-25\r\n15150,1611,EMEA,fashion,retail,80.11,8,0.221,none,2024-06-07\r\n15151,1831,APAC,home,retail,258.86,6,0.130,none,2024-01-13\r\n15152,1369,AMER,electronics,retail,113.34,6,0.225,loyalty,2024-01-24\r\n15153,1661,LATAM,grocery,retail,50.64,
2,0.052,coupon,2024-08-20\r\n15154,2256,AMER,home,mobile,60.91,4,0.244,bundle,2024-09-16\r\n15155,1077,AMER,electronics,online,107.25,8,0.150,loyalty,2024-06-26\r\n15156,1351,APAC,home,online,40.27,6,0.100,none,2024-05-23\r\n15157,1865,LATAM,grocery,online,89.51,7,0.202,none,2024-06-27\r\n15158,1591,APAC,grocery,mobile,82.69,2,0.051,none,2024-01-23\r\n15159,1472,AMER,grocery,mobile,63.83,4,0.200,none,2024-04-07\r\n15160,1538,AMER,electronics,online,51.10,5,0.231,bundle,2024-08-12\r\n15161,2311,LATAM,fashion,online,113.29,2,0.018,none,2024-10-03\r\n15162,2354,LATAM,toys,online,101.12,8,0.069,loyalty,2024-04-25\r\n15163,1926,AMER,toys,mobile,93.14,7,0.207,none,2024-06-08\r\n15164,1891,APAC,electronics,mobile,64.20,7,0.231,none,2024-12-19\r\n15165,2112,LATAM,sports,online,76.63,1,0.034,none,2024-09-21\r\n15166,1061,APAC,fashion,online,18.33,8,0.221,none,2024-12-12\r\n15167,1734,AMER,home,online,44.94,1,0.204,coupon,2024-02-05\r\n15168,1987,AMER,fashion,retail,61.13,6,0.140,none,2024-05-12\r\n15169,1825,AMER,toys,online,58.81,3,0.214,none,2024-02-24\r\n15170,1168,APAC,grocery,online,54.75,4,0.115,coupon,2024-09-11\r\n15171,2390,AMER,electronics,mobile,102.76,1,0.065,none,2024-02-04\r\n15172,1248,APAC,sports,online,49.64,8,0.141,none,2024-07-16\r\n15173,1560,AMER,electronics,partner,28.85,7,0.164,none,2024-02-14\r\n15174,1427,EMEA,grocery,online,57.04,4,0.234,none,2024-03-02\r\n15175,2329,LATAM,sports,retail,28.63,6,0.126,none,2024-09-18\r\n15176,1995,LATAM,home,retail,100.90,4,0.079,none,2024-07-23\r\n15177,2167,APAC,toys,online,42.77,1,0.093,bundle,2024-03-19\r\n15178,1700,EMEA,fashion,retail,55.49,6,0.114,loyalty,2024-07-07\r\n15179,1184,AMER,home,online,47.63,7,0.199,bundle,2024-01-15\r\n15180,1715,AMER,toys,retail,43.38,7,0.027,coupon,2024-01-12\r\n15181,1179,APAC,sports,retail,67.67,6,0.173,loyalty,2024-08-07\r\n15182,2119,AMER,toys,online,19.21,3,0.189,none,2024-06-28\r\n15183,1787,APAC,sports,mobile,30.88,6,0.164,none,2024-04-20\r\n15184,1351,APAC,grocery,mobile,
68.73,8,0.051,loyalty,2024-10-28\r\n15185,2354,LATAM,grocery,retail,33.72,8,0.153,none,2024-01-28\r\n15186,1572,LATAM,electronics,mobile,46.05,8,0.198,none,2024-11-03\r\n15187,2000,APAC,sports,partner,59.90,1,0.100,none,2024-02-10\r\n15188,1207,APAC,grocery,online,45.45,6,0.214,coupon,2024-06-02\r\n15189,2163,EMEA,sports,retail,70.72,6,0.033,coupon,2024-07-16\r\n15190,1052,LATAM,fashion,online,40.82,6,0.185,coupon,2024-12-24\r\n15191,1175,AMER,sports,online,25.60,5,0.061,none,2024-01-03\r\n15192,1795,EMEA,home,retail,90.21,4,0.213,bundle,2024-10-19\r\n15193,1950,LATAM,sports,online,56.95,4,0.068,none,2024-04-15\r\n15194,1648,APAC,grocery,retail,410.87,1,0.084,none,2024-08-18\r\n15195,1831,APAC,home,partner,53.71,1,0.196,none,2024-10-14\r\n15196,2326,LATAM,toys,retail,65.53,7,0.052,none,2024-09-17\r\n15197,2080,LATAM,home,online,50.49,7,0.206,none,2024-08-17\r\n15198,1702,AMER,electronics,online,108.12,1,0.235,coupon,2024-09-26\r\n15199,1303,LATAM,fashion,retail,37.81,7,0.019,none,2024-01-04\r\n15200,1119,LATAM,electronics,online,35.91,6,0.005,loyalty,2024-06-07\r\n15201,1046,EMEA,sports,retail,34.71,4,0.076,none,2024-06-19\r\n15202,1333,EMEA,electronics,retail,54.27,3,0.182,loyalty,2024-10-03\r\n15203,1369,AMER,electronics,retail,36.94,8,0.068,bundle,2024-01-25\r\n15204,2121,APAC,home,online,64.81,7,0.087,bundle,2024-02-23\r\n15205,2023,LATAM,home,online,41.87,4,0.129,none,2024-12-21\r\n15206,1883,LATAM,grocery,online,95.08,7,0.106,none,2024-07-10\r\n15207,1775,EMEA,home,mobile,76.24,8,0.123,none,2024-01-08\r\n15208,2090,AMER,fashion,online,60.07,7,0.063,bundle,2024-05-12\r\n15209,1429,APAC,grocery,online,73.75,4,0.130,bundle,2024-12-12\r\n15210,2115,APAC,fashion,online,54.71,4,0.028,none,2024-11-12\r\n15211,1545,AMER,toys,online,40.43,1,0.058,coupon,2024-08-12\r\n15212,1837,LATAM,grocery,retail,93.41,1,0.202,coupon,2024-03-02\r\n15213,1333,EMEA,electronics,retail,91.33,6,0.044,none,2024-01-15\r\n15214,1010,EMEA,grocery,retail,66.07,5,0.096,bundle,2024-11-24\r\n1521
5,1487,AMER,grocery,retail,36.39,1,0.201,none,2024-04-20\r\n15216,1294,APAC,toys,online,30.40,6,0.120,none,2024-10-15\r\n15217,1132,EMEA,grocery,retail,64.51,5,0.021,none,2024-08-19\r\n15218,2364,APAC,electronics,partner,73.83,4,0.232,none,2024-12-20\r\n15219,2430,APAC,fashion,mobile,73.85,7,0.009,none,2024-09-13\r\n15220,1390,APAC,toys,mobile,63.60,1,0.194,none,2024-10-17\r\n15221,2394,EMEA,toys,online,115.88,4,0.232,coupon,2024-12-24\r\n15222,1595,AMER,grocery,online,54.41,7,0.229,bundle,2024-11-20\r\n15223,1722,EMEA,home,online,70.72,6,0.088,coupon,2024-02-05\r\n15224,2020,AMER,grocery,mobile,36.16,8,0.210,bundle,2024-05-09\r\n15225,1354,AMER,sports,retail,35.13,3,0.033,none,2024-08-23\r\n15226,1647,LATAM,electronics,retail,53.29,5,0.076,none,2024-06-28\r\n15227,1944,AMER,grocery,online,30.89,8,0.129,none,2024-06-13\r\n15228,1089,LATAM,home,online,67.79,7,0.150,bundle,2024-02-07\r\n15229,1038,APAC,electronics,online,75.88,5,0.029,none,2024-06-24\r\n15230,1141,AMER,toys,retail,124.57,3,0.130,none,2024-11-10\r\n15231,2224,EMEA,fashion,retail,40.75,2,0.240,none,2024-09-04\r\n15232,1067,APAC,electronics,retail,139.48,3,0.060,none,2024-09-15\r\n15233,1827,EMEA,electronics,mobile,88.68,5,0.166,none,2024-04-27\r\n15234,1821,LATAM,electronics,online,33.22,8,0.068,none,2024-09-24\r\n15235,1196,APAC,grocery,mobile,69.03,2,0.062,coupon,2024-11-21\r\n15236,1504,AMER,home,retail,52.75,1,0.095,none,2024-05-23\r\n15237,1137,APAC,home,retail,49.15,1,0.001,none,2024-05-03\r\n15238,2246,AMER,grocery,online,81.97,1,0.186,loyalty,2024-08-15\r\n15239,2466,APAC,home,online,48.53,1,0.157,none,2024-03-21\r\n15240,1770,AMER,electronics,retail,169.03,4,0.116,coupon,2024-06-02\r\n15241,2045,LATAM,electronics,online,52.89,3,0.138,none,2024-04-08\r\n15242,1730,AMER,toys,mobile,46.87,3,0.094,bundle,2024-08-04\r\n15243,1630,APAC,electronics,retail,83.95,6,0.005,loyalty,2024-12-12\r\n15244,1737,AMER,sports,mobile,53.67,7,0.034,coupon,2024-08-18\r\n15245,1407,LATAM,fashion,retail,60.86,5,0.179,n
one,2024-03-10\r\n15246,1163,AMER,home,online,33.49,4,0.236,bundle,2024-04-25\r\n15247,2446,LATAM,electronics,retail,98.69,4,0.108,none,2024-12-04\r\n15248,1355,EMEA,home,online,57.90,6,0.077,none,2024-06-13\r\n15249,2102,APAC,home,retail,93.42,3,0.001,none,2024-02-15\r\n15250,1430,EMEA,electronics,online,51.87,8,0.245,loyalty,2024-08-19\r\n15251,1848,EMEA,toys,mobile,73.17,7,0.143,none,2024-02-10\r\n15252,2274,APAC,fashion,retail,52.43,8,0.217,none,2024-12-10\r\n15253,2025,EMEA,home,retail,27.16,7,0.190,none,2024-04-17\r\n15254,1781,LATAM,grocery,online,109.75,2,0.033,coupon,2024-12-24\r\n15255,2031,AMER,home,online,42.53,6,0.137,bundle,2024-05-02\r\n15256,1510,EMEA,home,online,37.77,6,0.094,loyalty,2024-10-18\r\n15257,1368,EMEA,home,retail,46.29,2,0.182,none,2024-09-03\r\n15258,1781,LATAM,home,online,54.70,4,0.155,bundle,2024-12-23\r\n15259,2128,EMEA,home,mobile,50.47,7,0.111,none,2024-08-05\r\n15260,1005,LATAM,electronics,retail,56.93,2,0.037,none,2024-03-04\r\n15261,1164,EMEA,fashion,retail,58.16,5,0.074,none,2024-11-07\r\n15262,2195,APAC,grocery,online,58.20,5,0.080,coupon,2024-04-22\r\n15263,1148,AMER,grocery,online,26.89,4,0.097,none,2024-06-13\r\n15264,1865,LATAM,sports,online,62.42,5,0.161,coupon,2024-07-20\r\n15265,1875,EMEA,home,online,62.87,3,0.011,none,2024-06-03\r\n15266,1859,AMER,sports,retail,32.18,5,0.163,bundle,2024-04-01\r\n15267,1518,AMER,home,retail,36.05,3,0.187,none,2024-05-20\r\n15268,2071,APAC,home,online,51.30,4,0.132,none,2024-12-17\r\n15269,1399,AMER,home,partner,36.75,5,0.136,none,2024-01-28\r\n15270,1290,EMEA,sports,retail,32.58,1,0.083,none,2024-02-28\r\n15271,1788,AMER,grocery,retail,65.59,7,0.114,coupon,2024-12-22\r\n15272,2173,LATAM,electronics,retail,36.00,7,0.233,none,2024-09-16\r\n15273,2385,APAC,grocery,retail,49.80,1,0.187,none,2024-01-16\r\n15274,2300,EMEA,grocery,online,15.91,1,0.053,coupon,2024-01-06\r\n15275,2397,LATAM,home,retail,30.91,3,0.134,coupon,2024-03-12\r\n15276,2317,LATAM,fashion,retail,31.05,1,0.029,none,2024-05-
05\r\n15277,1388,AMER,sports,online,56.02,3,0.039,coupon,2024-07-11\r\n15278,1806,APAC,electronics,partner,89.54,1,0.195,none,2024-09-02\r\n15279,1708,LATAM,grocery,mobile,62.71,4,0.094,none,2024-11-03\r\n15280,2325,LATAM,grocery,online,70.11,5,0.098,coupon,2024-04-07\r\n15281,2386,EMEA,home,online,59.55,8,0.099,none,2024-07-08\r\n15282,2075,LATAM,sports,mobile,49.98,3,0.026,none,2024-11-06\r\n15283,1821,LATAM,grocery,retail,52.02,5,0.026,none,2024-05-05\r\n15284,1769,LATAM,home,online,61.66,3,0.074,none,2024-09-13\r\n15285,2114,AMER,fashion,online,64.75,7,0.049,bundle,2024-02-26\r\n15286,2498,LATAM,toys,mobile,59.18,1,0.137,none,2024-03-21\r\n15287,1931,APAC,grocery,online,39.90,3,0.144,loyalty,2024-04-23\r\n15288,2032,AMER,grocery,online,37.88,5,0.189,bundle,2024-01-09\r\n15289,1937,APAC,home,retail,83.67,7,0.217,none,2024-10-06\r\n15290,1339,EMEA,fashion,mobile,37.19,2,0.218,none,2024-12-16\r\n15291,1917,LATAM,home,online,33.23,7,0.098,none,2024-08-25\r\n15292,1956,APAC,electronics,mobile,93.90,8,0.096,coupon,2024-11-23\r\n15293,1936,EMEA,electronics,retail,74.33,4,0.110,loyalty,2024-08-03\r\n15294,1507,EMEA,electronics,online,105.24,4,0.043,coupon,2024-06-07\r\n15295,1265,APAC,sports,retail,54.00,6,0.202,coupon,2024-06-23\r\n15296,2404,EMEA,grocery,retail,62.63,2,0.061,coupon,2024-08-08\r\n15297,1108,EMEA,home,retail,108.57,4,0.149,none,2024-02-27\r\n15298,1654,EMEA,toys,online,79.72,5,0.205,bundle,2024-11-12\r\n15299,1822,EMEA,home,retail,51.17,7,0.015,bundle,2024-07-18\r\n15300,1417,APAC,home,online,54.66,5,0.053,bundle,2024-02-06\r\n15301,1466,AMER,fashion,retail,22.91,4,0.020,none,2024-02-24\r\n15302,1437,EMEA,home,retail,57.76,5,0.192,coupon,2024-03-20\r\n15303,1170,AMER,grocery,retail,35.60,6,0.177,none,2024-12-12\r\n15304,1490,AMER,home,online,130.94,8,0.139,none,2024-04-21\r\n15305,1311,APAC,grocery,online,114.21,3,0.131,bundle,2024-09-26\r\n15306,1568,AMER,home,mobile,41.62,6,0.107,bundle,2024-03-07\r\n15307,1095,APAC,electronics,online,158.32,7,0.018,n
one,2024-02-15\r\n15308,1038,APAC,grocery,retail,71.60,4,0.131,none,2024-09-20\r\n15309,1924,AMER,grocery,partner,86.47,7,0.167,loyalty,2024-08-11\r\n15310,1066,AMER,sports,retail,26.63,4,0.155,none,2024-04-21\r\n15311,1787,APAC,fashion,retail,78.86,4,0.098,none,2024-07-23\r\n15312,2131,APAC,sports,online,52.83,5,0.159,none,2024-06-23\r\n15313,1588,LATAM,sports,online,30.92,6,0.128,none,2024-03-26\r\n15314,2049,LATAM,home,online,96.68,3,0.157,none,2024-10-19\r\n15315,1666,LATAM,fashion,online,39.65,4,0.012,none,2024-04-27\r\n15316,2033,LATAM,toys,retail,29.26,2,0.177,loyalty,2024-01-21\r\n15317,1148,AMER,fashion,online,45.48,3,0.142,none,2024-04-16\r\n15318,1161,AMER,home,online,72.91,3,0.044,none,2024-06-21\r\n15319,2458,EMEA,home,online,61.20,4,0.209,none,2024-06-15\r\n15320,1484,AMER,home,online,29.11,4,0.183,none,2024-11-03\r\n15321,2023,LATAM,toys,retail,112.45,6,0.034,coupon,2024-10-21\r\n15322,1829,EMEA,grocery,retail,69.21,7,0.132,bundle,2024-10-23\r\n15323,2256,AMER,electronics,online,25.13,4,0.204,none,2024-02-19\r\n15324,2381,AMER,grocery,retail,96.03,2,0.022,none,2024-06-03\r\n15325,1108,EMEA,electronics,partner,58.73,4,0.069,none,2024-09-15\r\n15326,1056,LATAM,sports,online,64.51,1,0.110,none,2024-01-24\r\n15327,1479,AMER,home,retail,67.68,2,0.073,loyalty,2024-01-18\r\n15328,1256,LATAM,fashion,online,62.44,2,0.115,none,2024-07-13\r\n15329,1649,APAC,toys,online,121.41,1,0.133,bundle,2024-09-08\r\n15330,1253,AMER,toys,online,64.16,1,0.086,bundle,2024-05-27\r\n15331,1012,LATAM,grocery,mobile,48.09,6,0.136,bundle,2024-03-27\r\n15332,2051,APAC,grocery,mobile,127.29,5,0.111,none,2024-08-22\r\n15333,2329,LATAM,sports,partner,56.61,8,0.118,bundle,2024-09-17\r\n15334,1614,EMEA,grocery,mobile,79.01,2,0.025,none,2024-04-16\r\n15335,1891,APAC,grocery,partner,134.75,3,0.111,none,2024-01-17\r\n15336,1619,APAC,electronics,online,70.93,2,0.207,none,2024-01-14\r\n15337,1735,LATAM,sports,online,137.28,5,0.133,bundle,2024-05-16\r\n15338,1741,AMER,electronics,online,64.65,
7,0.080,none,2024-06-24\r\n15339,1196,APAC,grocery,online,38.03,8,0.197,coupon,2024-03-23\r\n15340,1961,EMEA,electronics,retail,77.98,3,0.054,bundle,2024-06-11\r\n15341,1049,AMER,grocery,retail,70.80,3,0.015,none,2024-03-08\r\n15342,1386,AMER,toys,mobile,134.52,5,0.118,coupon,2024-11-11\r\n15343,2483,LATAM,sports,retail,30.76,7,0.025,coupon,2024-03-21\r\n15344,1925,LATAM,sports,retail,104.53,7,0.088,none,2024-04-03\r\n15345,2106,LATAM,home,online,111.26,5,0.070,bundle,2024-03-06\r\n15346,1572,LATAM,electronics,retail,30.01,4,0.004,none,2024-02-06\r\n15347,1140,LATAM,grocery,online,114.20,7,0.152,none,2024-11-21\r\n15348,1580,AMER,home,online,108.53,7,0.175,bundle,2024-03-15\r\n15349,1007,APAC,electronics,mobile,49.69,6,0.101,loyalty,2024-12-03\r\n15350,1231,AMER,fashion,mobile,50.02,1,0.009,none,2024-05-07\r\n15351,1502,APAC,home,online,107.51,8,0.009,coupon,2024-02-24\r\n15352,1880,LATAM,home,mobile,41.01,4,0.244,coupon,2024-07-27\r\n15353,1203,AMER,grocery,mobile,34.79,7,0.203,none,2024-09-27\r\n15354,1373,LATAM,grocery,online,20.50,3,0.086,none,2024-02-14\r\n15355,1869,AMER,toys,online,45.93,3,0.155,none,2024-05-08\r\n15356,1877,LATAM,home,online,40.36,1,0.077,none,2024-03-25\r\n15357,2456,APAC,electronics,online,76.90,8,0.086,coupon,2024-04-24\r\n15358,1737,AMER,sports,online,72.52,3,0.197,none,2024-06-22\r\n15359,1533,APAC,fashion,retail,121.20,1,0.247,none,2024-07-28\r\n15360,2021,EMEA,home,online,28.14,7,0.015,coupon,2024-11-05\r\n15361,2081,APAC,fashion,online,50.34,4,0.246,bundle,2024-04-18\r\n15362,1939,LATAM,home,mobile,84.88,1,0.010,none,2024-11-05\r\n15363,2379,AMER,electronics,retail,54.11,4,0.175,loyalty,2024-04-14\r\n15364,1560,AMER,electronics,partner,32.96,5,0.123,none,2024-01-07\r\n15365,2295,EMEA,grocery,online,61.03,5,0.094,coupon,2024-02-28\r\n15366,1272,AMER,toys,retail,27.87,5,0.139,none,2024-02-13\r\n15367,1946,AMER,grocery,online,26.77,4,0.059,loyalty,2024-05-17\r\n15368,2472,AMER,toys,online,73.90,1,0.141,loyalty,2024-02-12\r\n15369,1666,L
ATAM,grocery,retail,55.87,3,0.095,coupon,2024-09-09\r\n15370,2385,APAC,home,online,46.29,7,0.049,bundle,2024-03-19\r\n15371,1610,LATAM,toys,retail,66.02,3,0.171,none,2024-05-01\r\n15372,2480,APAC,electronics,online,15.65,4,0.149,coupon,2024-10-11\r\n15373,2038,LATAM,sports,online,50.07,4,0.143,none,2024-03-13\r\n15374,2342,AMER,electronics,online,78.67,5,0.014,coupon,2024-07-02\r\n15375,2482,EMEA,fashion,retail,38.84,5,0.081,none,2024-07-26\r\n15376,1003,APAC,fashion,online,113.53,5,0.071,coupon,2024-08-04\r\n15377,1871,APAC,electronics,retail,105.01,6,0.175,none,2024-07-22\r\n15378,1698,EMEA,toys,online,17.64,8,0.095,none,2024-08-15\r\n15379,2244,LATAM,fashion,online,44.36,3,0.224,none,2024-12-13\r\n15380,1169,LATAM,fashion,mobile,53.07,2,0.078,bundle,2024-02-28\r\n15381,1794,AMER,toys,online,65.14,6,0.051,none,2024-07-02\r\n15382,1027,APAC,home,online,131.09,2,0.139,none,2024-08-11\r\n15383,2148,EMEA,fashion,mobile,98.29,2,0.010,none,2024-02-10\r\n15384,1993,APAC,grocery,online,54.78,5,0.057,loyalty,2024-05-09\r\n15385,1976,AMER,fashion,online,97.44,3,0.186,none,2024-08-08\r\n15386,2243,APAC,fashion,retail,36.06,1,0.144,none,2024-12-01\r\n15387,1640,APAC,grocery,online,51.54,1,0.024,bundle,2024-02-18\r\n15388,1857,LATAM,sports,retail,34.38,7,0.226,none,2024-09-24\r\n15389,1373,LATAM,grocery,online,57.78,2,0.052,none,2024-04-05\r\n15390,1683,AMER,grocery,retail,51.21,4,0.076,none,2024-12-04\r\n15391,1282,LATAM,sports,online,63.80,2,0.005,none,2024-06-26\r\n15392,1493,APAC,toys,retail,196.39,7,0.111,coupon,2024-05-08\r\n15393,1357,EMEA,toys,retail,62.20,1,0.231,none,2024-04-22\r\n15394,2492,LATAM,sports,online,58.74,1,0.224,none,2024-04-07\r\n15395,2029,APAC,toys,retail,105.49,1,0.177,none,2024-04-02\r\n15396,1463,EMEA,electronics,retail,19.93,7,0.123,none,2024-11-16\r\n15397,1096,EMEA,electronics,online,91.07,7,0.170,none,2024-09-15\r\n15398,1267,EMEA,electronics,retail,66.32,2,0.165,none,2024-02-22\r\n15399,1838,AMER,electronics,mobile,137.03,6,0.098,none,2024-07-
26\r\n15400,1974,EMEA,electronics,mobile,24.12,8,0.036,none,2024-11-25\r\n15401,2182,AMER,home,retail,78.57,4,0.123,coupon,2024-11-13\r\n15402,1591,APAC,grocery,retail,49.13,5,0.139,none,2024-02-04\r\n15403,2132,LATAM,electronics,online,90.80,6,0.093,loyalty,2024-05-05\r\n15404,1771,AMER,fashion,retail,34.73,6,0.067,none,2024-12-22\r\n15405,1264,APAC,home,retail,65.98,2,0.148,none,2024-09-14\r\n15406,2199,LATAM,sports,retail,57.43,4,0.205,none,2024-07-22\r\n15407,2286,AMER,electronics,online,229.97,6,0.089,none,2024-08-12\r\n15408,2023,LATAM,electronics,online,97.72,2,0.109,none,2024-06-05\r\n15409,2007,LATAM,toys,retail,44.24,5,0.133,loyalty,2024-08-03\r\n15410,2072,AMER,sports,online,109.98,3,0.034,bundle,2024-03-25\r\n15411,1364,EMEA,fashion,retail,137.90,3,0.177,none,2024-11-04\r\n15412,1890,LATAM,sports,online,118.21,2,0.211,coupon,2024-06-13\r\n15413,1967,EMEA,electronics,online,71.07,4,0.197,none,2024-11-03\r\n15414,2294,EMEA,toys,retail,77.41,5,0.080,bundle,2024-06-06\r\n15415,1942,APAC,sports,retail,52.27,6,0.013,coupon,2024-09-10\r\n15416,2295,EMEA,electronics,retail,97.67,4,0.145,coupon,2024-06-17\r\n15417,1800,APAC,sports,retail,110.24,4,0.160,coupon,2024-01-04\r\n15418,2048,LATAM,electronics,online,35.96,5,0.051,coupon,2024-04-07\r\n15419,1987,AMER,electronics,retail,113.30,2,0.244,none,2024-08-02\r\n15420,2497,AMER,electronics,retail,32.61,8,0.031,none,2024-04-26\r\n15421,1568,AMER,toys,retail,136.20,6,0.130,bundle,2024-07-05\r\n15422,1484,AMER,toys,partner,50.55,6,0.087,none,2024-12-03\r\n15423,2203,APAC,fashion,online,62.83,3,0.228,coupon,2024-07-21\r\n15424,2239,EMEA,fashion,online,42.77,5,0.172,coupon,2024-10-15\r\n15425,1400,EMEA,electronics,mobile,61.13,5,0.148,bundle,2024-08-19\r\n15426,1434,EMEA,sports,online,33.32,5,0.107,none,2024-06-24\r\n15427,2348,EMEA,grocery,online,46.60,1,0.112,none,2024-04-17\r\n15428,1198,AMER,fashion,online,147.70,5,0.016,none,2024-06-27\r\n15429,2299,EMEA,home,retail,74.66,6,0.239,none,2024-05-21\r\n15430,2198,EMEA,
sports,online,32.51,4,0.202,none,2024-12-27\r\n15431,1131,APAC,sports,online,62.65,6,0.152,none,2024-02-03\r\n15432,1111,APAC,toys,retail,43.54,4,0.093,none,2024-12-06\r\n15433,1902,AMER,fashion,online,26.40,8,0.222,none,2024-04-20\r\n15434,2144,EMEA,grocery,online,43.44,6,0.243,none,2024-01-19\r\n15435,2174,LATAM,grocery,online,164.55,8,0.161,coupon,2024-09-17\r\n15436,2450,EMEA,grocery,online,65.17,2,0.069,none,2024-01-07\r\n15437,2180,AMER,sports,online,52.65,5,0.102,none,2024-10-06\r\n15438,1807,EMEA,electronics,retail,19.89,1,0.224,none,2024-05-26\r\n15439,2104,EMEA,toys,online,34.81,5,0.191,none,2024-05-25\r\n15440,1396,EMEA,grocery,retail,84.36,1,0.017,coupon,2024-01-09\r\n15441,1299,LATAM,electronics,online,206.71,6,0.225,coupon,2024-10-06\r\n15442,2041,LATAM,home,online,15.77,7,0.211,loyalty,2024-11-17\r\n15443,1717,AMER,home,retail,92.52,4,0.065,loyalty,2024-01-26\r\n15444,2454,LATAM,grocery,retail,39.80,6,0.014,none,2024-05-10\r\n15445,1553,LATAM,grocery,online,111.86,5,0.132,none,2024-11-12\r\n15446,1397,LATAM,electronics,partner,23.19,1,0.129,loyalty,2024-12-04\r\n15447,2131,APAC,electronics,online,127.18,1,0.106,loyalty,2024-01-02\r\n15448,1948,EMEA,fashion,online,47.43,3,0.058,none,2024-06-15\r\n15449,1486,LATAM,electronics,retail,66.37,8,0.118,none,2024-07-12\r\n15450,1430,EMEA,home,retail,126.50,2,0.166,none,2024-01-02\r\n15451,2159,AMER,fashion,online,60.63,3,0.244,bundle,2024-12-10\r\n15452,1292,LATAM,home,retail,120.85,5,0.229,none,2024-08-08\r\n15453,1588,LATAM,home,online,84.70,3,0.180,none,2024-05-19\r\n15454,2331,APAC,electronics,retail,53.87,2,0.215,none,2024-05-06\r\n15455,1433,EMEA,grocery,retail,67.12,6,0.091,bundle,2024-07-17\r\n15456,1354,AMER,grocery,retail,55.27,4,0.086,coupon,2024-01-21\r\n15457,2101,APAC,home,retail,38.59,3,0.082,bundle,2024-06-26\r\n15458,1528,EMEA,grocery,mobile,36.24,7,0.162,none,2024-04-04\r\n15459,1824,LATAM,electronics,retail,117.10,6,0.132,none,2024-05-07\r\n15460,2266,LATAM,grocery,mobile,31.02,4,0.145,none,
2024-12-17\r\n15461,1828,EMEA,toys,partner,44.75,7,0.104,none,2024-03-03\r\n15462,1271,EMEA,grocery,retail,79.18,2,0.207,bundle,2024-04-20\r\n15463,1057,LATAM,fashion,online,70.54,5,0.240,none,2024-08-01\r\n15464,1595,AMER,electronics,retail,60.98,7,0.217,coupon,2024-03-03\r\n15465,2245,APAC,home,online,102.74,7,0.191,none,2024-02-14\r\n15466,2140,AMER,home,online,65.32,3,0.105,none,2024-01-07\r\n15467,1325,APAC,home,retail,72.86,3,0.080,coupon,2024-02-26\r\n15468,1426,AMER,home,mobile,62.05,6,0.158,none,2024-12-23\r\n15469,1805,EMEA,home,retail,81.24,3,0.105,loyalty,2024-12-19\r\n15470,1003,APAC,grocery,partner,33.19,3,0.040,none,2024-06-10\r\n15471,1883,LATAM,electronics,retail,44.84,5,0.026,none,2024-05-06\r\n15472,1181,LATAM,sports,retail,123.67,5,0.235,none,2024-09-02\r\n15473,2441,EMEA,electronics,retail,49.98,4,0.165,coupon,2024-09-21\r\n15474,1294,APAC,electronics,retail,68.44,2,0.174,coupon,2024-11-08\r\n15475,2176,AMER,home,online,42.17,6,0.109,none,2024-03-13\r\n15476,1567,AMER,toys,retail,49.41,5,0.062,coupon,2024-02-13\r\n15477,1147,EMEA,sports,online,19.01,2,0.170,coupon,2024-05-21\r\n15478,1748,APAC,grocery,online,105.55,3,0.216,none,2024-09-01\r\n15479,2086,APAC,fashion,partner,66.85,8,0.228,none,2024-05-09\r\n15480,2076,AMER,toys,online,70.90,1,0.228,none,2024-01-16\r\n15481,1273,AMER,grocery,online,109.66,5,0.104,coupon,2024-05-05\r\n15482,2333,APAC,fashion,online,43.09,2,0.138,none,2024-12-20\r\n15483,1640,APAC,fashion,retail,77.54,8,0.044,none,2024-01-16\r\n15484,1144,APAC,toys,mobile,67.00,5,0.062,coupon,2024-03-01\r\n15485,2250,AMER,fashion,online,144.76,8,0.077,none,2024-03-04\r\n15486,1077,AMER,electronics,partner,65.75,7,0.090,none,2024-10-28\r\n15487,1464,APAC,electronics,retail,61.31,5,0.077,bundle,2024-01-08\r\n15488,2239,EMEA,toys,retail,85.84,7,0.096,none,2024-04-27\r\n15489,1682,EMEA,electronics,online,94.26,4,0.162,none,2024-05-08\r\n15490,1054,EMEA,electronics,retail,19.14,8,0.049,none,2024-02-08\r\n15491,1437,EMEA,toys,retail,37.18,
8,0.136,none,2024-05-18\r\n15492,1676,LATAM,fashion,mobile,29.59,8,0.110,none,2024-12-22\r\n15493,1498,LATAM,toys,online,78.50,8,0.151,bundle,2024-11-25\r\n15494,1050,AMER,grocery,retail,51.63,1,0.023,loyalty,2024-12-18\r\n15495,1721,EMEA,home,online,102.58,2,0.015,none,2024-06-08\r\n15496,2208,AMER,home,online,41.29,3,0.009,coupon,2024-04-07\r\n15497,1633,EMEA,fashion,mobile,29.05,7,0.241,coupon,2024-04-28\r\n15498,1319,EMEA,grocery,retail,109.22,5,0.120,loyalty,2024-02-25\r\n15499,1074,LATAM,grocery,online,88.56,1,0.043,none,2024-02-07\r\n15500,1833,EMEA,grocery,retail,28.53,1,0.156,coupon,2024-01-02\r\n15501,1335,APAC,electronics,retail,67.95,3,0.235,coupon,2024-07-15\r\n15502,1784,EMEA,toys,retail,50.64,6,0.177,none,2024-11-01\r\n15503,2084,LATAM,home,online,61.14,7,0.075,none,2024-06-11\r\n15504,1351,APAC,fashion,online,77.87,8,0.128,loyalty,2024-03-28\r\n15505,1420,APAC,home,online,64.31,8,0.101,none,2024-06-20\r\n15506,1254,APAC,electronics,mobile,56.25,3,0.002,bundle,2024-04-25\r\n15507,1759,EMEA,home,retail,24.95,2,0.087,bundle,2024-04-18\r\n15508,1293,AMER,electronics,online,57.91,4,0.069,loyalty,2024-12-17\r\n15509,1433,EMEA,electronics,retail,25.40,7,0.167,none,2024-12-25\r\n15510,1391,LATAM,electronics,retail,67.55,3,0.045,loyalty,2024-03-14\r\n15511,1395,APAC,toys,retail,66.59,3,0.200,none,2024-08-25\r\n15512,1913,LATAM,electronics,mobile,52.53,7,0.221,coupon,2024-05-27\r\n15513,2159,AMER,grocery,online,86.91,2,0.045,none,2024-07-23\r\n15514,2377,AMER,grocery,online,86.78,2,0.025,bundle,2024-06-01\r\n15515,1017,AMER,electronics,online,22.55,5,0.055,bundle,2024-10-02\r\n15516,2241,APAC,grocery,retail,58.37,3,0.064,none,2024-06-22\r\n15517,2019,AMER,toys,retail,80.39,7,0.020,loyalty,2024-01-09\r\n15518,1171,APAC,sports,mobile,72.11,2,0.085,coupon,2024-04-22\r\n15519,1913,LATAM,electronics,partner,47.85,3,0.217,none,2024-07-26\r\n15520,1972,LATAM,toys,retail,78.81,1,0.188,coupon,2024-03-22\r\n15521,2170,EMEA,electronics,retail,44.56,4,0.240,coupon,2024-08
-08\r\n15522,2459,AMER,electronics,retail,67.93,1,0.156,coupon,2024-09-09\r\n15523,1470,LATAM,home,retail,83.57,7,0.083,coupon,2024-03-22\r\n15524,2478,AMER,toys,retail,59.84,8,0.234,none,2024-08-26\r\n15525,1042,LATAM,home,retail,55.21,7,0.071,loyalty,2024-09-17\r\n15526,1507,EMEA,grocery,online,83.43,1,0.197,none,2024-08-03\r\n15527,2280,EMEA,sports,retail,121.33,8,0.025,none,2024-08-16\r\n15528,1215,LATAM,fashion,online,106.91,1,0.039,none,2024-03-13\r\n15529,2470,EMEA,electronics,retail,44.65,2,0.111,bundle,2024-04-08\r\n15530,2155,APAC,home,retail,53.48,8,0.246,none,2024-10-08\r\n15531,1175,AMER,home,retail,30.15,6,0.215,none,2024-12-11\r\n15532,1013,LATAM,fashion,retail,82.87,8,0.031,none,2024-04-12\r\n15533,2320,LATAM,grocery,retail,36.32,7,0.199,bundle,2024-09-01\r\n15534,1030,EMEA,electronics,mobile,47.04,3,0.068,loyalty,2024-08-19\r\n15535,1105,AMER,electronics,online,42.68,1,0.072,coupon,2024-04-11\r\n15536,1276,AMER,sports,retail,29.73,8,0.135,none,2024-09-25\r\n15537,2262,APAC,electronics,partner,25.01,1,0.140,coupon,2024-11-15\r\n15538,1041,APAC,electronics,mobile,108.45,7,0.040,bundle,2024-06-02\r\n15539,1907,EMEA,electronics,online,32.06,8,0.198,none,2024-04-18\r\n15540,2153,APAC,home,retail,241.71,6,0.240,coupon,2024-10-03\r\n15541,1654,EMEA,grocery,retail,53.19,4,0.098,loyalty,2024-12-01\r\n15542,1915,LATAM,fashion,retail,54.43,8,0.013,coupon,2024-10-17\r\n15543,1773,LATAM,toys,online,20.98,4,0.038,none,2024-11-25\r\n15544,1903,LATAM,electronics,online,23.26,5,0.171,bundle,2024-08-17\r\n15545,1936,EMEA,grocery,online,106.18,4,0.010,coupon,2024-04-14\r\n15546,1303,LATAM,toys,retail,46.01,3,0.168,coupon,2024-03-26\r\n15547,2331,APAC,home,mobile,203.55,3,0.060,bundle,2024-08-05\r\n15548,1635,APAC,grocery,online,54.37,8,0.024,none,2024-01-07\r\n15549,2094,AMER,toys,retail,28.86,1,0.069,loyalty,2024-05-07\r\n15550,1339,EMEA,electronics,online,59.36,4,0.216,none,2024-04-15\r\n15551,1589,AMER,grocery,mobile,27.40,2,0.049,coupon,2024-04-14\r\n15552,1954,AP
AC,fashion,online,59.94,3,0.051,bundle,2024-03-24\r\n15553,2474,LATAM,grocery,online,29.36,4,0.073,none,2024-05-26\r\n15554,2128,EMEA,sports,online,88.59,2,0.216,bundle,2024-07-20\r\n15555,1595,AMER,sports,retail,93.58,8,0.229,loyalty,2024-08-20\r\n15556,1816,EMEA,electronics,mobile,42.75,8,0.094,none,2024-08-15\r\n15557,2382,LATAM,home,online,92.29,6,0.010,none,2024-07-10\r\n15558,2067,LATAM,toys,online,70.27,8,0.177,coupon,2024-04-27\r\n15559,1802,AMER,electronics,online,51.83,4,0.054,bundle,2024-07-21\r\n15560,1943,AMER,electronics,online,54.76,5,0.244,none,2024-03-25\r\n15561,1654,EMEA,home,retail,43.47,4,0.172,none,2024-11-26\r\n15562,1369,AMER,home,online,78.96,8,0.188,loyalty,2024-06-26\r\n15563,1592,LATAM,home,retail,81.38,8,0.030,none,2024-12-28\r\n15564,1256,LATAM,fashion,online,68.02,7,0.021,bundle,2024-08-01\r\n15565,1464,APAC,toys,mobile,60.42,8,0.163,loyalty,2024-08-10\r\n15566,1637,APAC,toys,mobile,72.04,5,0.235,coupon,2024-04-18\r\n15567,1469,EMEA,electronics,online,77.24,5,0.208,none,2024-08-15\r\n15568,2329,LATAM,toys,retail,171.56,7,0.055,coupon,2024-03-22\r\n15569,2481,APAC,home,online,40.42,8,0.052,loyalty,2024-12-23\r\n15570,1117,LATAM,sports,retail,60.09,5,0.186,none,2024-01-25\r\n15571,1917,LATAM,grocery,mobile,72.32,5,0.232,coupon,2024-07-12\r\n15572,1909,APAC,sports,partner,69.19,8,0.223,loyalty,2024-01-08\r\n15573,2194,APAC,grocery,retail,78.96,1,0.141,coupon,2024-02-25\r\n15574,2350,APAC,electronics,retail,68.28,7,0.062,none,2024-03-26\r\n15575,1584,EMEA,grocery,retail,57.94,4,0.213,bundle,2024-09-09\r\n15576,2082,APAC,fashion,online,98.31,1,0.200,none,2024-09-14\r\n15577,1683,AMER,electronics,retail,38.48,2,0.169,none,2024-07-18\r\n15578,1752,APAC,grocery,mobile,17.17,1,0.155,none,2024-12-16\r\n15579,1808,APAC,fashion,online,119.43,4,0.112,none,2024-10-25\r\n15580,1361,LATAM,fashion,online,142.63,8,0.004,coupon,2024-10-21\r\n15581,1071,AMER,home,retail,90.44,5,0.127,loyalty,2024-12-09\r\n15582,2208,AMER,grocery,online,162.57,8,0.014,coup
on,2024-07-22\r\n15583,2014,EMEA,grocery,online,32.44,7,0.240,none,2024-11-05\r\n15584,1019,APAC,home,retail,54.47,5,0.097,none,2024-02-13\r\n15585,1443,EMEA,home,mobile,133.72,6,0.249,none,2024-10-26\r\n15586,1495,LATAM,grocery,online,49.11,4,0.005,coupon,2024-06-13\r\n15587,1523,LATAM,grocery,retail,63.92,7,0.132,none,2024-09-04\r\n15588,1762,LATAM,electronics,online,70.51,1,0.144,none,2024-04-15\r\n15589,1901,AMER,home,retail,51.50,8,0.039,none,2024-06-07\r\n15590,2161,LATAM,electronics,mobile,36.77,3,0.031,coupon,2024-05-11\r\n15591,1531,EMEA,toys,online,65.71,6,0.176,bundle,2024-03-19\r\n15592,1355,EMEA,grocery,retail,49.80,1,0.042,coupon,2024-12-20\r\n15593,1225,APAC,sports,retail,55.35,2,0.110,coupon,2024-01-26\r\n15594,2285,APAC,fashion,online,33.78,1,0.088,none,2024-07-01\r\n15595,2370,EMEA,electronics,mobile,53.46,1,0.170,bundle,2024-08-24\r\n15596,1390,APAC,toys,retail,41.53,4,0.212,none,2024-10-21\r\n15597,1129,LATAM,toys,retail,60.14,5,0.014,none,2024-10-01\r\n15598,2484,APAC,fashion,online,45.57,5,0.048,bundle,2024-01-07\r\n15599,1519,APAC,toys,retail,37.64,2,0.169,none,2024-02-27\r\n15600,2025,EMEA,home,retail,93.35,7,0.194,none,2024-10-08\r\n15601,2194,APAC,electronics,retail,58.04,2,0.220,coupon,2024-04-18\r\n15602,1801,LATAM,fashion,online,135.59,2,0.014,bundle,2024-08-12\r\n15603,2216,AMER,grocery,mobile,24.27,8,0.127,coupon,2024-05-18\r\n15604,1979,APAC,home,online,72.92,6,0.074,bundle,2024-03-21\r\n15605,1460,LATAM,sports,partner,46.05,6,0.116,bundle,2024-11-13\r\n15606,2170,EMEA,sports,retail,36.63,4,0.128,none,2024-05-15\r\n15607,2459,AMER,toys,retail,65.47,2,0.102,coupon,2024-01-20\r\n15608,1017,AMER,home,online,80.37,5,0.091,coupon,2024-04-02\r\n15609,1625,EMEA,home,retail,19.06,8,0.027,none,2024-02-12\r\n15610,1399,AMER,home,retail,45.53,5,0.006,loyalty,2024-06-12\r\n15611,1670,EMEA,electronics,retail,54.69,8,0.086,coupon,2024-05-09\r\n15612,1780,APAC,home,retail,43.96,8,0.197,none,2024-01-19\r\n15613,1527,AMER,home,online,38.52,2,0.108,non
e,2024-06-05\r\n15614,2190,LATAM,fashion,online,92.77,6,0.113,none,2024-01-04\r\n15615,2495,EMEA,fashion,online,95.05,7,0.198,loyalty,2024-05-05\r\n15616,1741,AMER,grocery,mobile,53.98,7,0.132,bundle,2024-02-28\r\n15617,1062,EMEA,toys,online,40.12,1,0.171,coupon,2024-08-08\r\n15618,2164,AMER,grocery,partner,28.75,2,0.041,bundle,2024-02-04\r\n15619,2040,LATAM,home,partner,41.39,4,0.095,none,2024-02-18\r\n15620,1559,EMEA,home,online,23.63,6,0.113,none,2024-08-14\r\n15621,2123,AMER,fashion,online,85.71,5,0.135,none,2024-03-14\r\n15622,2174,LATAM,fashion,retail,28.85,8,0.142,bundle,2024-08-27\r\n15623,2442,APAC,sports,retail,103.45,6,0.170,loyalty,2024-03-05\r\n15624,1264,APAC,electronics,online,84.10,8,0.018,none,2024-12-22\r\n15625,2266,LATAM,electronics,mobile,60.72,4,0.024,bundle,2024-09-18\r\n15626,1691,LATAM,electronics,retail,46.83,2,0.062,none,2024-09-03\r\n15627,1113,EMEA,toys,online,64.77,7,0.185,coupon,2024-04-17\r\n15628,1228,APAC,grocery,partner,71.93,1,0.146,bundle,2024-03-06\r\n15629,1373,LATAM,grocery,online,49.13,8,0.206,none,2024-07-09\r\n15630,2156,AMER,sports,online,101.65,6,0.149,bundle,2024-10-28\r\n15631,2209,AMER,fashion,online,62.28,1,0.025,none,2024-06-19\r\n15632,2005,APAC,fashion,retail,60.77,5,0.013,none,2024-08-23\r\n15633,1312,EMEA,fashion,mobile,128.89,3,0.143,coupon,2024-01-18\r\n15634,1316,APAC,sports,online,60.82,1,0.077,none,2024-11-13\r\n15635,1971,EMEA,electronics,mobile,19.23,7,0.095,loyalty,2024-11-13\r\n15636,1137,APAC,home,retail,37.28,1,0.132,loyalty,2024-03-21\r\n15637,2260,EMEA,home,retail,22.02,5,0.239,coupon,2024-08-23\r\n15638,1928,AMER,grocery,online,67.23,1,0.028,loyalty,2024-08-28\r\n15639,1524,LATAM,grocery,retail,42.66,4,0.237,none,2024-05-14\r\n15640,1115,AMER,sports,online,31.46,3,0.119,none,2024-10-26\r\n15641,1804,AMER,grocery,mobile,109.13,3,0.241,none,2024-02-21\r\n15642,2269,EMEA,grocery,online,61.74,5,0.043,none,2024-01-05\r\n15643,1165,AMER,electronics,online,150.29,1,0.100,none,2024-02-18\r\n15644,2241,APAC,
electronics,retail,37.04,7,0.062,coupon,2024-08-06\r\n15645,2160,LATAM,grocery,online,36.52,3,0.098,none,2024-12-27\r\n15646,2246,AMER,electronics,partner,46.76,2,0.116,none,2024-06-28\r\n15647,1488,AMER,electronics,online,56.45,5,0.154,bundle,2024-08-14\r\n15648,2218,EMEA,grocery,online,28.06,2,0.199,coupon,2024-09-07\r\n15649,2115,APAC,fashion,retail,48.98,8,0.214,none,2024-09-10\r\n15650,1996,APAC,fashion,retail,29.90,3,0.205,none,2024-08-12\r\n15651,1473,LATAM,toys,retail,49.06,7,0.245,none,2024-04-17\r\n15652,2096,LATAM,sports,online,101.81,3,0.117,coupon,2024-06-13\r\n15653,1599,APAC,grocery,online,117.55,3,0.024,none,2024-04-27\r\n15654,1042,LATAM,home,retail,166.41,2,0.220,bundle,2024-04-23\r\n15655,2115,APAC,grocery,online,55.04,4,0.172,coupon,2024-10-28\r\n15656,1926,AMER,home,online,62.98,5,0.158,none,2024-02-26\r\n15657,1375,AMER,toys,online,78.02,2,0.200,coupon,2024-07-09\r\n15658,2378,LATAM,grocery,online,22.00,6,0.103,none,2024-04-11\r\n15659,1845,AMER,grocery,retail,44.01,3,0.170,none,2024-10-05\r\n15660,2490,AMER,grocery,retail,34.10,4,0.180,none,2024-08-23\r\n15661,2309,AMER,fashion,retail,69.23,1,0.020,none,2024-02-18\r\n15662,1478,EMEA,grocery,online,71.67,7,0.147,none,2024-12-20\r\n15663,1421,APAC,fashion,online,16.88,6,0.195,bundle,2024-02-04\r\n15664,1235,EMEA,fashion,online,54.64,1,0.001,none,2024-05-07\r\n15665,1434,EMEA,fashion,online,53.20,5,0.042,none,2024-04-11\r\n15666,2485,AMER,grocery,online,43.51,2,0.034,coupon,2024-12-11\r\n15667,1435,AMER,electronics,mobile,97.38,8,0.135,none,2024-08-15\r\n15668,1689,LATAM,home,online,74.03,2,0.040,none,2024-02-21\r\n15669,1442,EMEA,home,mobile,119.13,2,0.083,none,2024-01-23\r\n15670,2223,EMEA,grocery,online,78.59,7,0.071,bundle,2024-12-16\r\n15671,1612,LATAM,grocery,retail,35.62,3,0.137,coupon,2024-02-01\r\n15672,1201,LATAM,home,online,38.96,3,0.209,bundle,2024-01-12\r\n15673,2360,EMEA,grocery,mobile,66.79,6,0.072,none,2024-12-22\r\n15674,1799,EMEA,toys,mobile,75.46,1,0.219,coupon,2024-11-21\r\n15
675,1045,LATAM,sports,online,21.07,7,0.105,none,2024-04-07\r\n15676,1022,APAC,grocery,retail,48.61,6,0.060,none,2024-05-26\r\n15677,1521,LATAM,fashion,retail,38.71,8,0.194,bundle,2024-06-24\r\n15678,1061,APAC,toys,online,69.26,7,0.236,loyalty,2024-04-08\r\n15679,1471,EMEA,electronics,online,91.66,7,0.153,coupon,2024-01-06\r\n15680,2263,AMER,fashion,online,84.19,3,0.047,loyalty,2024-02-20\r\n15681,2313,LATAM,electronics,retail,74.31,6,0.069,loyalty,2024-06-22\r\n15682,2005,APAC,grocery,retail,26.14,1,0.143,none,2024-04-07\r\n15683,2168,EMEA,fashion,mobile,49.47,6,0.114,loyalty,2024-08-25\r\n15684,1576,EMEA,grocery,online,59.84,1,0.121,none,2024-05-06\r\n15685,1111,APAC,fashion,online,47.47,5,0.124,coupon,2024-03-16\r\n15686,1132,EMEA,toys,online,31.43,5,0.224,none,2024-08-21\r\n15687,1740,EMEA,fashion,mobile,66.73,1,0.236,none,2024-11-14\r\n15688,1308,EMEA,grocery,retail,73.09,7,0.129,none,2024-07-24\r\n15689,1710,APAC,sports,retail,70.30,8,0.185,none,2024-05-05\r\n15690,2302,APAC,grocery,retail,38.09,1,0.102,none,2024-04-14\r\n15691,1621,APAC,electronics,retail,62.58,7,0.120,loyalty,2024-01-22\r\n15692,2271,LATAM,sports,mobile,34.52,3,0.057,coupon,2024-07-25\r\n15693,1107,APAC,fashion,mobile,44.47,3,0.221,none,2024-04-19\r\n15694,1674,LATAM,grocery,online,96.54,5,0.084,none,2024-12-08\r\n15695,1254,APAC,grocery,online,57.30,7,0.146,bundle,2024-02-11\r\n15696,1985,AMER,sports,retail,66.86,3,0.093,coupon,2024-03-26\r\n15697,1095,APAC,grocery,retail,53.53,1,0.090,none,2024-02-27\r\n15698,1216,APAC,toys,online,66.22,2,0.068,loyalty,2024-08-11\r\n15699,2145,AMER,toys,online,46.78,8,0.105,coupon,2024-01-10\r\n15700,1431,APAC,grocery,retail,91.68,6,0.016,loyalty,2024-03-09\r\n15701,2211,APAC,electronics,online,88.33,2,0.038,none,2024-10-23\r\n15702,1565,AMER,sports,online,56.23,1,0.164,coupon,2024-06-09\r\n15703,1336,APAC,toys,retail,37.35,7,0.050,loyalty,2024-01-09\r\n15704,1241,APAC,fashion,retail,35.84,6,0.092,none,2024-06-08\r\n15705,2258,AMER,electronics,online,31.94,
7,0.111,none,2024-05-21\r\n15706,1963,AMER,grocery,online,160.43,1,0.027,loyalty,2024-01-03\r\n15707,2235,AMER,fashion,online,95.90,4,0.172,coupon,2024-12-28\r\n15708,1596,EMEA,grocery,retail,40.94,5,0.007,bundle,2024-05-27\r\n15709,2382,LATAM,toys,online,49.69,4,0.015,none,2024-08-08\r\n15710,1255,AMER,home,retail,52.44,4,0.185,coupon,2024-01-07\r\n15711,1588,LATAM,fashion,retail,68.20,4,0.239,loyalty,2024-01-26\r\n15712,2473,EMEA,sports,online,45.00,5,0.187,none,2024-12-05\r\n15713,1374,APAC,sports,retail,72.52,2,0.020,coupon,2024-09-01\r\n15714,1284,APAC,home,online,19.13,8,0.151,bundle,2024-02-04\r\n15715,1231,AMER,electronics,retail,54.53,8,0.066,none,2024-06-11\r\n15716,1368,EMEA,toys,retail,42.02,2,0.176,none,2024-09-18\r\n15717,2113,LATAM,grocery,online,39.06,4,0.212,none,2024-08-21\r\n15718,1107,APAC,home,retail,94.69,8,0.226,none,2024-09-25\r\n15719,1356,LATAM,grocery,online,22.41,3,0.034,coupon,2024-01-19\r\n15720,2348,EMEA,electronics,retail,44.44,7,0.188,coupon,2024-12-19\r\n15721,2336,APAC,home,online,140.37,5,0.065,none,2024-05-21\r\n15722,1972,LATAM,grocery,retail,98.40,5,0.108,none,2024-08-03\r\n15723,1542,APAC,sports,online,77.71,4,0.001,none,2024-11-08\r\n15724,2036,APAC,sports,online,54.66,3,0.081,none,2024-10-03\r\n15725,1549,APAC,grocery,online,52.93,6,0.165,coupon,2024-07-27\r\n15726,2058,LATAM,sports,online,23.60,2,0.080,bundle,2024-04-11\r\n15727,1160,LATAM,home,online,122.47,7,0.176,none,2024-10-24\r\n15728,1770,AMER,sports,online,55.66,3,0.173,bundle,2024-10-25\r\n15729,2135,EMEA,electronics,online,25.78,2,0.024,none,2024-09-13\r\n15730,2399,LATAM,electronics,online,60.51,7,0.112,none,2024-01-04\r\n15731,1953,EMEA,fashion,retail,41.05,7,0.224,none,2024-12-06\r\n15732,2198,EMEA,grocery,online,33.16,7,0.244,coupon,2024-07-10\r\n15733,2490,AMER,sports,retail,18.88,8,0.220,none,2024-01-26\r\n15734,1043,LATAM,grocery,retail,132.96,3,0.102,none,2024-08-21\r\n15735,2117,EMEA,electronics,online,94.88,3,0.017,none,2024-04-24\r\n15736,1543,AMER,groc
ery,retail,47.99,5,0.024,coupon,2024-09-03\r\n15737,1677,EMEA,electronics,retail,27.26,5,0.091,bundle,2024-11-01\r\n15738,1434,EMEA,toys,retail,41.84,5,0.109,none,2024-12-09\r\n15739,2205,AMER,electronics,online,107.39,6,0.055,none,2024-09-05\r\n15740,2470,EMEA,grocery,online,76.98,4,0.214,bundle,2024-02-02\r\n15741,1073,AMER,grocery,online,81.07,1,0.016,none,2024-05-22\r\n15742,2303,EMEA,home,mobile,50.91,4,0.241,none,2024-11-28\r\n15743,1187,AMER,grocery,online,96.26,5,0.203,coupon,2024-04-08\r\n15744,1842,LATAM,grocery,online,90.59,5,0.161,none,2024-09-08\r\n15745,1754,EMEA,home,partner,88.06,4,0.024,none,2024-06-15\r\n15746,2301,EMEA,fashion,retail,72.71,2,0.019,loyalty,2024-06-21\r\n15747,1484,AMER,grocery,online,112.75,8,0.228,none,2024-02-14\r\n15748,2281,AMER,toys,retail,31.66,1,0.190,none,2024-05-03\r\n15749,1944,AMER,grocery,online,71.27,7,0.108,coupon,2024-11-19\r\n15750,1393,LATAM,grocery,online,32.73,7,0.071,bundle,2024-05-05\r\n15751,1584,EMEA,grocery,retail,49.98,1,0.108,none,2024-09-10\r\n15752,1645,EMEA,sports,online,59.99,1,0.115,none,2024-07-18\r\n15753,2275,LATAM,fashion,retail,30.01,7,0.061,coupon,2024-12-21\r\n15754,2257,AMER,electronics,retail,75.93,2,0.013,none,2024-01-09\r\n15755,1901,AMER,fashion,partner,57.88,3,0.150,coupon,2024-06-15\r\n15756,2049,LATAM,home,retail,52.95,5,0.178,none,2024-04-16\r\n15757,1574,AMER,toys,online,81.18,8,0.184,bundle,2024-02-23\r\n15758,2319,AMER,electronics,retail,79.25,4,0.159,coupon,2024-03-28\r\n15759,1556,AMER,electronics,online,49.63,5,0.068,none,2024-06-02\r\n15760,1106,AMER,sports,online,47.55,5,0.122,none,2024-01-22\r\n15761,1832,APAC,toys,online,56.78,7,0.212,none,2024-02-19\r\n15762,2335,EMEA,sports,retail,66.24,7,0.035,none,2024-08-24\r\n15763,1949,AMER,home,retail,76.75,1,0.147,loyalty,2024-02-14\r\n15764,1383,AMER,home,retail,18.87,8,0.044,none,2024-01-20\r\n15765,1010,EMEA,sports,retail,43.71,8,0.126,bundle,2024-08-27\r\n15766,1079,LATAM,grocery,mobile,72.49,4,0.203,none,2024-05-10\r\n15767,1165
,AMER,sports,online,43.89,5,0.203,none,2024-05-12\r\n15768,2293,LATAM,electronics,online,51.29,8,0.035,bundle,2024-10-17\r\n15769,2231,LATAM,grocery,online,39.46,6,0.080,none,2024-02-09\r\n15770,2308,AMER,toys,retail,106.31,7,0.117,none,2024-05-19\r\n15771,2419,LATAM,home,online,25.23,7,0.229,coupon,2024-04-27\r\n15772,1787,APAC,electronics,online,77.02,1,0.212,none,2024-12-09\r\n15773,1869,AMER,toys,retail,48.39,7,0.190,coupon,2024-03-27\r\n15774,1137,APAC,grocery,online,52.68,4,0.033,none,2024-07-13\r\n15775,2258,AMER,electronics,online,42.49,8,0.183,none,2024-01-28\r\n15776,2494,AMER,fashion,online,91.78,4,0.071,loyalty,2024-01-15\r\n15777,2301,EMEA,home,online,104.94,6,0.158,none,2024-08-17\r\n15778,1975,EMEA,fashion,online,41.75,7,0.109,none,2024-04-22\r\n15779,2413,AMER,grocery,online,39.52,1,0.025,loyalty,2024-09-27\r\n15780,1009,APAC,toys,online,46.60,6,0.116,none,2024-10-15\r\n15781,2420,EMEA,grocery,mobile,34.40,1,0.225,none,2024-04-01\r\n15782,2266,LATAM,sports,mobile,28.22,4,0.142,none,2024-11-08\r\n15783,2006,APAC,sports,online,68.23,6,0.223,none,2024-12-23\r\n15784,1347,APAC,toys,online,20.30,8,0.100,none,2024-11-14\r\n15785,1927,EMEA,fashion,online,153.36,3,0.197,loyalty,2024-09-08\r\n15786,1379,EMEA,home,online,41.45,4,0.209,none,2024-01-05\r\n15787,1618,EMEA,electronics,retail,79.12,1,0.093,none,2024-06-10\r\n15788,2478,AMER,sports,retail,68.48,7,0.047,bundle,2024-05-20\r\n15789,2033,LATAM,grocery,online,91.59,2,0.016,none,2024-12-16\r\n15790,1899,APAC,fashion,mobile,66.62,7,0.181,bundle,2024-06-08\r\n15791,2483,LATAM,home,online,45.30,8,0.014,none,2024-10-12\r\n15792,1465,AMER,electronics,online,44.30,3,0.017,none,2024-12-08\r\n15793,1488,AMER,toys,retail,34.82,8,0.224,bundle,2024-04-11\r\n15794,2306,AMER,sports,partner,160.31,4,0.061,coupon,2024-06-27\r\n15795,1379,EMEA,home,retail,72.48,7,0.015,coupon,2024-09-25\r\n15796,2338,AMER,fashion,online,31.82,2,0.146,none,2024-03-04\r\n15797,2217,LATAM,home,online,40.48,3,0.107,loyalty,2024-12-22\r\n1579
8,1420,APAC,sports,retail,51.11,6,0.227,bundle,2024-08-07\r\n15799,1730,AMER,home,online,132.17,8,0.183,bundle,2024-09-05\r\n15800,2086,APAC,grocery,mobile,28.15,8,0.197,none,2024-05-15\r\n15801,2244,LATAM,sports,partner,76.76,7,0.171,none,2024-10-01\r\n15802,1082,EMEA,sports,online,62.00,6,0.096,none,2024-06-03\r\n15803,1449,EMEA,fashion,mobile,83.65,1,0.115,none,2024-04-12\r\n15804,1962,APAC,electronics,online,64.77,7,0.221,none,2024-12-27\r\n15805,1716,LATAM,home,retail,21.04,3,0.069,none,2024-04-26\r\n15806,1694,APAC,home,partner,74.89,7,0.188,bundle,2024-08-26\r\n15807,2034,LATAM,sports,retail,40.27,1,0.094,bundle,2024-10-11\r\n15808,1512,APAC,fashion,retail,43.96,3,0.056,coupon,2024-08-18\r\n15809,2102,APAC,home,online,49.75,2,0.170,none,2024-11-07\r\n15810,1356,LATAM,toys,mobile,45.98,6,0.045,none,2024-06-08\r\n15811,1308,EMEA,grocery,retail,48.53,1,0.245,none,2024-11-25\r\n15812,2429,EMEA,electronics,online,100.60,1,0.032,bundle,2024-06-03\r\n15813,2019,AMER,electronics,retail,53.95,7,0.053,coupon,2024-06-04\r\n15814,2387,EMEA,grocery,retail,64.44,7,0.010,none,2024-06-06\r\n15815,1108,EMEA,home,online,74.39,5,0.212,coupon,2024-09-18\r\n15816,1023,APAC,fashion,retail,75.46,8,0.080,none,2024-02-19\r\n15817,1273,AMER,grocery,online,38.11,5,0.130,none,2024-06-10\r\n15818,1175,AMER,home,online,100.84,7,0.066,coupon,2024-07-05\r\n15819,1789,EMEA,fashion,online,33.23,6,0.147,coupon,2024-07-23\r\n15820,2349,APAC,home,retail,82.04,7,0.111,none,2024-03-25\r\n15821,1318,LATAM,grocery,retail,25.65,6,0.093,none,2024-04-01\r\n15822,1993,APAC,fashion,online,63.68,5,0.111,none,2024-09-03\r\n15823,2262,APAC,electronics,online,60.67,1,0.102,none,2024-03-19\r\n15824,1274,LATAM,grocery,mobile,31.01,2,0.043,none,2024-12-20\r\n15825,1828,EMEA,fashion,retail,38.70,8,0.019,coupon,2024-03-10\r\n15826,1901,AMER,home,mobile,35.44,1,0.078,none,2024-06-02\r\n15827,2097,AMER,electronics,online,50.35,5,0.152,none,2024-11-13\r\n15828,1149,LATAM,electronics,online,30.09,2,0.077,coupon,2024-
02-21\r\n15829,2441,EMEA,toys,mobile,58.03,5,0.212,none,2024-01-14\r\n15830,1457,EMEA,toys,mobile,33.84,4,0.115,none,2024-11-19\r\n15831,1447,LATAM,electronics,online,25.59,6,0.133,coupon,2024-06-19\r\n15832,1454,APAC,electronics,online,53.86,8,0.127,none,2024-02-02\r\n15833,1575,APAC,grocery,online,37.59,4,0.190,bundle,2024-01-04\r\n15834,1093,APAC,grocery,retail,81.99,2,0.167,none,2024-07-18\r\n15835,2206,AMER,electronics,retail,46.07,2,0.212,coupon,2024-05-28\r\n15836,2133,AMER,grocery,online,37.69,2,0.053,bundle,2024-03-03\r\n15837,1576,EMEA,sports,retail,34.56,5,0.227,none,2024-02-19\r\n15838,2248,LATAM,electronics,retail,127.22,8,0.003,none,2024-01-01\r\n15839,2152,EMEA,electronics,online,106.32,2,0.142,none,2024-04-19\r\n15840,1099,LATAM,grocery,retail,39.61,1,0.135,none,2024-06-24\r\n15841,1065,AMER,fashion,mobile,59.20,5,0.175,none,2024-05-27\r\n15842,1816,EMEA,grocery,mobile,41.78,7,0.230,none,2024-02-09\r\n15843,2417,LATAM,toys,online,37.16,5,0.054,none,2024-04-27\r\n15844,1562,AMER,fashion,online,87.07,4,0.068,bundle,2024-02-04\r\n15845,1612,LATAM,grocery,online,122.90,8,0.112,none,2024-06-13\r\n15846,1663,LATAM,grocery,online,53.72,5,0.172,coupon,2024-10-19\r\n15847,1089,LATAM,grocery,mobile,49.57,6,0.234,none,2024-02-22\r\n15848,1041,APAC,home,mobile,31.10,8,0.169,coupon,2024-03-04\r\n15849,2344,LATAM,grocery,online,36.68,6,0.161,none,2024-07-01\r\n15850,1668,AMER,electronics,online,36.48,8,0.158,bundle,2024-03-15\r\n15851,1149,LATAM,grocery,mobile,63.48,4,0.098,loyalty,2024-01-14\r\n15852,1072,LATAM,fashion,partner,26.01,4,0.002,loyalty,2024-03-24\r\n15853,1940,APAC,grocery,online,46.89,1,0.007,loyalty,2024-04-06\r\n15854,2450,EMEA,grocery,retail,50.26,2,0.052,bundle,2024-05-01\r\n15855,1621,APAC,grocery,retail,78.46,4,0.069,none,2024-09-14\r\n15856,2130,EMEA,toys,mobile,35.89,4,0.234,bundle,2024-08-12\r\n15857,1886,LATAM,fashion,retail,64.35,6,0.123,none,2024-11-28\r\n15858,1549,APAC,grocery,retail,44.27,8,0.012,coupon,2024-02-25\r\n15859,1625,EMEA,e
lectronics,retail,54.17,1,0.246,none,2024-11-13\r\n15860,1568,AMER,electronics,retail,75.70,7,0.184,none,2024-07-13\r\n15861,1914,EMEA,electronics,online,141.02,6,0.244,bundle,2024-10-21\r\n15862,1705,AMER,electronics,partner,66.09,8,0.010,bundle,2024-01-06\r\n15863,1113,EMEA,grocery,online,91.24,8,0.027,none,2024-05-07\r\n15864,2035,LATAM,toys,retail,148.34,5,0.175,bundle,2024-05-22\r\n15865,1763,LATAM,grocery,online,95.11,8,0.032,none,2024-02-12\r\n15866,1464,APAC,fashion,online,90.38,2,0.033,none,2024-01-15\r\n15867,1152,LATAM,grocery,online,52.67,1,0.038,none,2024-12-22\r\n15868,1470,LATAM,home,online,47.78,2,0.034,coupon,2024-12-21\r\n15869,1709,EMEA,electronics,retail,47.50,4,0.161,none,2024-02-15\r\n15870,1586,LATAM,fashion,mobile,50.80,5,0.207,none,2024-08-25\r\n15871,1237,LATAM,fashion,retail,75.58,8,0.009,bundle,2024-06-04\r\n15872,2003,LATAM,grocery,retail,35.74,5,0.141,none,2024-08-27\r\n15873,1022,APAC,sports,online,81.75,7,0.160,bundle,2024-11-11\r\n15874,1628,EMEA,fashion,online,47.08,8,0.093,bundle,2024-02-20\r\n15875,1283,APAC,home,retail,42.09,1,0.003,none,2024-12-13\r\n15876,2244,LATAM,sports,retail,139.16,2,0.027,none,2024-01-18\r\n15877,1504,AMER,grocery,mobile,34.90,2,0.069,none,2024-02-16\r\n15878,1725,APAC,sports,online,71.46,1,0.029,none,2024-01-13\r\n15879,1695,LATAM,grocery,mobile,38.90,5,0.032,none,2024-08-04\r\n15880,2159,AMER,fashion,online,31.79,3,0.078,coupon,2024-01-05\r\n15881,1805,EMEA,grocery,retail,57.88,8,0.144,coupon,2024-02-25\r\n15882,1385,LATAM,grocery,online,88.85,4,0.033,none,2024-10-24\r\n15883,1251,EMEA,grocery,online,38.01,7,0.186,none,2024-03-11\r\n15884,2124,AMER,grocery,mobile,94.21,4,0.055,none,2024-09-16\r\n15885,1267,EMEA,fashion,retail,121.15,2,0.006,none,2024-01-15\r\n15886,1769,LATAM,grocery,mobile,79.53,6,0.029,none,2024-04-19\r\n15887,2059,AMER,fashion,online,15.78,7,0.153,loyalty,2024-06-26\r\n15888,1059,AMER,fashion,partner,119.35,7,0.233,coupon,2024-05-07\r\n15889,1852,AMER,grocery,online,57.73,3,0.200,non
e,2024-05-11\r\n15890,2325,LATAM,grocery,online,117.73,2,0.198,coupon,2024-04-01\r\n15891,2131,APAC,fashion,partner,34.66,5,0.124,bundle,2024-04-10\r\n15892,1267,EMEA,toys,online,17.31,2,0.239,coupon,2024-01-25\r\n15893,2262,APAC,toys,online,133.35,7,0.010,coupon,2024-04-02\r\n15894,1180,AMER,grocery,retail,132.09,1,0.149,none,2024-12-10\r\n15895,1956,APAC,fashion,retail,50.50,4,0.128,none,2024-04-17\r\n15896,1567,AMER,grocery,retail,30.88,1,0.229,none,2024-09-24\r\n15897,1941,AMER,fashion,retail,205.15,7,0.055,coupon,2024-10-13\r\n15898,1183,AMER,electronics,mobile,41.87,7,0.040,bundle,2024-07-19\r\n15899,1184,AMER,toys,retail,203.72,6,0.245,none,2024-06-07\r\n15900,2138,APAC,electronics,retail,64.31,7,0.184,bundle,2024-09-10\r\n15901,1850,APAC,sports,online,82.04,5,0.096,none,2024-10-27\r\n15902,1429,APAC,grocery,mobile,41.73,4,0.020,none,2024-10-19\r\n15903,1417,APAC,grocery,online,43.92,3,0.184,none,2024-10-28\r\n15904,1270,LATAM,electronics,online,34.39,8,0.140,none,2024-11-06\r\n15905,1811,APAC,home,online,66.06,5,0.066,bundle,2024-07-09\r\n15906,1179,APAC,grocery,partner,57.60,6,0.047,none,2024-11-01\r\n15907,2225,EMEA,electronics,online,25.37,7,0.100,none,2024-06-14\r\n15908,2365,LATAM,fashion,online,91.41,1,0.027,coupon,2024-01-25\r\n15909,2085,AMER,electronics,retail,72.19,4,0.074,none,2024-04-15\r\n15910,2177,AMER,electronics,retail,86.03,1,0.245,none,2024-01-03\r\n15911,1539,LATAM,fashion,retail,71.45,8,0.135,none,2024-11-20\r\n15912,1361,LATAM,sports,mobile,61.31,3,0.007,none,2024-08-27\r\n15913,1188,LATAM,electronics,online,66.97,6,0.026,none,2024-09-08\r\n15914,2337,AMER,electronics,mobile,88.38,8,0.181,none,2024-05-01\r\n15915,1693,EMEA,electronics,mobile,106.82,7,0.221,none,2024-11-02\r\n15916,1338,EMEA,home,mobile,92.15,5,0.054,none,2024-04-22\r\n15917,1221,LATAM,electronics,online,64.63,5,0.104,none,2024-09-26\r\n15918,1009,APAC,sports,online,40.06,8,0.193,none,2024-02-21\r\n15919,1151,APAC,home,online,40.64,6,0.215,coupon,2024-08-06\r\n15920,1919
,EMEA,electronics,mobile,55.34,6,0.028,coupon,2024-12-11\r\n15921,2252,EMEA,home,retail,71.31,4,0.192,loyalty,2024-06-26\r\n15922,2240,LATAM,home,retail,54.30,2,0.102,loyalty,2024-04-28\r\n15923,1613,EMEA,grocery,online,28.80,4,0.143,none,2024-04-25\r\n15924,1060,LATAM,grocery,online,38.99,1,0.085,none,2024-08-16\r\n15925,2193,AMER,toys,retail,31.08,5,0.103,none,2024-01-02\r\n15926,2388,LATAM,fashion,retail,86.69,2,0.111,bundle,2024-12-16\r\n15927,1386,AMER,home,mobile,23.70,4,0.003,loyalty,2024-04-13\r\n15928,1573,AMER,grocery,retail,75.68,3,0.078,coupon,2024-06-17\r\n15929,2126,APAC,home,retail,21.28,3,0.076,none,2024-02-09\r\n15930,2001,EMEA,electronics,online,37.12,3,0.222,loyalty,2024-02-23\r\n15931,1041,APAC,fashion,online,78.81,8,0.172,bundle,2024-05-21\r\n15932,2272,EMEA,sports,partner,73.02,1,0.232,loyalty,2024-03-16\r\n15933,1726,EMEA,fashion,mobile,28.91,7,0.178,none,2024-07-21\r\n15934,1903,LATAM,electronics,online,40.24,8,0.152,none,2024-06-06\r\n15935,1906,APAC,grocery,online,39.93,6,0.086,bundle,2024-07-23\r\n15936,2355,EMEA,electronics,online,62.30,1,0.153,coupon,2024-01-05\r\n15937,1796,LATAM,sports,mobile,77.81,5,0.005,none,2024-08-13\r\n15938,2454,LATAM,toys,retail,20.10,3,0.023,none,2024-01-05\r\n15939,1466,AMER,home,retail,98.53,6,0.196,none,2024-04-06\r\n15940,1827,EMEA,fashion,mobile,35.63,8,0.226,none,2024-12-06\r\n15941,2024,AMER,toys,retail,46.51,7,0.002,none,2024-08-09\r\n15942,1914,EMEA,grocery,mobile,119.55,6,0.134,none,2024-11-16\r\n15943,1104,APAC,sports,retail,89.55,7,0.094,bundle,2024-11-13\r\n15944,1093,APAC,grocery,mobile,231.41,6,0.198,coupon,2024-05-22\r\n15945,2459,AMER,electronics,online,109.01,8,0.075,none,2024-11-26\r\n15946,2028,APAC,sports,retail,95.34,4,0.068,none,2024-01-02\r\n15947,1824,LATAM,electronics,retail,59.17,2,0.183,none,2024-12-07\r\n15948,1473,LATAM,grocery,online,44.49,2,0.182,none,2024-10-18\r\n15949,1726,EMEA,toys,online,67.35,1,0.130,none,2024-07-02\r\n15950,1176,EMEA,fashion,online,40.25,1,0.058,none,2024
-03-20\r\n15951,2009,LATAM,sports,mobile,23.75,2,0.126,coupon,2024-08-14\r\n15952,2035,LATAM,home,retail,152.80,1,0.148,none,2024-03-09\r\n15953,1218,AMER,home,online,78.26,8,0.152,none,2024-11-07\r\n15954,2068,LATAM,home,mobile,160.13,8,0.139,loyalty,2024-12-09\r\n15955,1924,AMER,electronics,mobile,106.29,5,0.017,none,2024-06-09\r\n15956,1604,EMEA,home,mobile,51.69,4,0.170,none,2024-10-28\r\n15957,1074,LATAM,home,online,34.82,8,0.203,coupon,2024-03-01\r\n15958,1833,EMEA,sports,mobile,50.21,2,0.116,none,2024-01-23\r\n15959,2097,AMER,grocery,mobile,73.65,7,0.038,coupon,2024-10-10\r\n15960,2309,AMER,electronics,retail,58.26,5,0.161,none,2024-05-25\r\n15961,2058,LATAM,home,retail,64.51,7,0.143,none,2024-08-04\r\n15962,2477,APAC,electronics,retail,67.91,8,0.117,coupon,2024-10-20\r\n15963,2073,AMER,home,retail,68.84,2,0.199,none,2024-04-28\r\n15964,1642,EMEA,electronics,retail,82.74,5,0.042,none,2024-02-28\r\n15965,1932,EMEA,home,retail,58.62,5,0.022,none,2024-10-08\r\n15966,2353,AMER,home,mobile,25.15,2,0.228,none,2024-10-20\r\n15967,1707,APAC,grocery,online,55.36,8,0.062,none,2024-09-04\r\n15968,1703,AMER,home,online,37.85,3,0.084,coupon,2024-02-27\r\n15969,1821,LATAM,electronics,partner,14.70,7,0.052,loyalty,2024-06-27\r\n15970,1815,APAC,electronics,mobile,47.46,6,0.008,none,2024-04-25\r\n15971,1307,AMER,home,online,34.57,6,0.048,none,2024-08-08\r\n15972,1748,APAC,electronics,online,49.85,7,0.040,none,2024-03-06\r\n15973,1841,AMER,electronics,online,55.61,2,0.162,none,2024-04-20\r\n15974,1185,LATAM,fashion,online,85.02,4,0.061,bundle,2024-06-28\r\n15975,1529,LATAM,fashion,retail,24.69,5,0.083,bundle,2024-03-19\r\n15976,1576,EMEA,grocery,retail,68.70,8,0.082,none,2024-12-11\r\n15977,1268,EMEA,electronics,mobile,44.51,4,0.129,none,2024-03-01\r\n15978,2083,LATAM,toys,retail,49.50,6,0.042,loyalty,2024-04-14\r\n15979,2446,LATAM,fashion,online,46.62,2,0.214,none,2024-04-23\r\n15980,1084,AMER,home,online,33.08,7,0.202,none,2024-07-12\r\n15981,2208,AMER,electronics,retail,39.
85,4,0.213,bundle,2024-12-15\r\n15982,1547,AMER,electronics,retail,68.51,4,0.073,none,2024-04-15\r\n15983,1918,EMEA,fashion,online,34.83,6,0.125,bundle,2024-10-06\r\n15984,2090,AMER,grocery,online,38.78,1,0.232,none,2024-12-24\r\n15985,1243,AMER,electronics,online,45.50,5,0.074,bundle,2024-03-23\r\n15986,1379,EMEA,toys,mobile,71.62,5,0.067,bundle,2024-09-25\r\n15987,1908,AMER,home,retail,141.91,3,0.087,coupon,2024-07-19\r\n15988,1314,AMER,electronics,online,74.88,6,0.090,none,2024-08-26\r\n15989,2320,LATAM,home,retail,27.65,1,0.015,none,2024-02-02\r\n15990,1852,AMER,sports,online,68.70,3,0.222,none,2024-04-25\r\n15991,1565,AMER,home,online,49.94,5,0.160,bundle,2024-10-08\r\n15992,1761,EMEA,fashion,retail,23.38,6,0.155,coupon,2024-01-01\r\n15993,2069,AMER,grocery,retail,38.77,3,0.246,bundle,2024-06-12\r\n15994,2482,EMEA,fashion,mobile,79.69,4,0.210,none,2024-02-26\r\n15995,1951,LATAM,grocery,retail,82.95,1,0.092,coupon,2024-09-09\r\n15996,2403,LATAM,electronics,retail,66.71,4,0.227,none,2024-06-13\r\n15997,2094,AMER,sports,online,68.11,1,0.160,none,2024-07-01\r\n15998,2374,LATAM,grocery,online,79.56,2,0.062,bundle,2024-10-22\r\n15999,2144,EMEA,home,retail,85.87,5,0.087,none,2024-03-02\r\n16000,2181,AMER,sports,online,100.09,6,0.053,none,2024-06-21\r\n16001,1991,APAC,electronics,mobile,35.74,7,0.073,coupon,2024-08-09\r\n16002,1634,AMER,electronics,online,31.99,6,0.096,none,2024-12-20\r\n16003,1528,EMEA,electronics,retail,28.88,6,0.139,none,2024-02-26\r\n16004,2257,AMER,sports,retail,29.35,2,0.079,none,2024-01-25\r\n16005,1152,LATAM,sports,retail,28.50,6,0.161,none,2024-05-06\r\n16006,1263,AMER,home,retail,120.87,4,0.008,none,2024-04-05\r\n16007,1069,APAC,grocery,online,36.25,7,0.019,loyalty,2024-04-20\r\n16008,2121,APAC,electronics,mobile,33.28,4,0.030,none,2024-03-23\r\n16009,1459,LATAM,toys,online,59.09,1,0.200,loyalty,2024-03-17\r\n16010,2091,LATAM,sports,retail,42.17,7,0.011,coupon,2024-02-24\r\n16011,1883,LATAM,grocery,online,71.35,2,0.030,loyalty,2024-02-09\r\n1
6012,2114,AMER,home,online,33.73,5,0.196,bundle,2024-12-21\r\n16013,1622,LATAM,electronics,online,67.43,3,0.244,coupon,2024-05-01\r\n16014,2396,AMER,electronics,online,96.77,4,0.162,coupon,2024-02-27\r\n16015,1831,APAC,grocery,online,43.87,6,0.214,coupon,2024-07-16\r\n16016,1966,APAC,sports,mobile,106.80,3,0.185,none,2024-05-11\r\n16017,1047,APAC,grocery,online,65.65,1,0.065,coupon,2024-11-24\r\n16018,1465,AMER,fashion,retail,124.94,5,0.229,bundle,2024-07-07\r\n16019,1035,EMEA,fashion,retail,86.22,4,0.070,none,2024-03-17\r\n16020,2125,LATAM,toys,retail,53.79,6,0.045,bundle,2024-01-23\r\n16021,1682,EMEA,grocery,retail,95.31,5,0.050,bundle,2024-04-27\r\n16022,2232,EMEA,fashion,mobile,121.09,1,0.202,none,2024-04-28\r\n16023,1304,LATAM,electronics,online,44.02,5,0.177,none,2024-11-03\r\n16024,2109,EMEA,electronics,retail,110.45,5,0.073,none,2024-01-20\r\n16025,1845,AMER,grocery,online,47.28,3,0.120,bundle,2024-09-10\r\n16026,1696,LATAM,grocery,online,25.22,8,0.126,none,2024-01-18\r\n16027,1933,EMEA,toys,retail,62.27,7,0.097,coupon,2024-08-10\r\n16028,1936,EMEA,grocery,retail,179.91,8,0.218,none,2024-03-22\r\n16029,1677,EMEA,electronics,retail,43.08,2,0.114,none,2024-07-26\r\n16030,1876,LATAM,fashion,retail,36.64,6,0.089,none,2024-08-02\r\n16031,2409,APAC,toys,online,67.75,8,0.139,none,2024-08-28\r\n16032,1619,APAC,electronics,partner,80.74,1,0.244,coupon,2024-12-20\r\n16033,1989,LATAM,fashion,mobile,23.29,8,0.182,none,2024-08-05\r\n16034,1199,APAC,electronics,partner,62.42,1,0.134,none,2024-03-02\r\n16035,1667,AMER,home,online,66.51,3,0.240,none,2024-11-25\r\n16036,2195,APAC,toys,online,60.28,7,0.032,coupon,2024-10-15\r\n16037,1534,EMEA,fashion,online,78.92,2,0.027,none,2024-10-22\r\n16038,1844,APAC,grocery,mobile,70.54,3,0.057,none,2024-12-10\r\n16039,1669,AMER,home,online,48.72,7,0.172,coupon,2024-07-11\r\n16040,1557,LATAM,toys,retail,38.64,2,0.180,none,2024-10-28\r\n16041,1181,LATAM,home,online,10.76,6,0.149,none,2024-07-24\r\n16042,1741,AMER,toys,retail,43.90,2,0.18
8,coupon,2024-08-16\r\n16043,1821,LATAM,fashion,online,92.67,8,0.245,bundle,2024-09-05\r\n16044,1283,APAC,toys,online,62.64,8,0.248,coupon,2024-01-13\r\n16045,1376,EMEA,electronics,retail,40.38,5,0.131,none,2024-03-21\r\n16046,2171,EMEA,electronics,retail,44.46,3,0.060,none,2024-10-19\r\n16047,1256,LATAM,grocery,retail,86.29,8,0.084,coupon,2024-02-12\r\n16048,1755,APAC,home,online,66.93,3,0.052,none,2024-05-12\r\n16049,1002,EMEA,sports,online,45.29,2,0.201,coupon,2024-09-11\r\n16050,1825,AMER,home,retail,50.17,3,0.116,coupon,2024-08-16\r\n16051,1846,APAC,home,online,100.83,5,0.057,bundle,2024-01-26\r\n16052,2054,AMER,electronics,retail,75.50,8,0.016,coupon,2024-09-16\r\n16053,2294,EMEA,fashion,online,79.67,1,0.122,none,2024-09-17\r\n16054,1488,AMER,electronics,online,62.80,4,0.021,none,2024-01-16\r\n16055,2323,AMER,grocery,online,30.53,5,0.144,loyalty,2024-01-21\r\n16056,1743,LATAM,electronics,online,81.60,1,0.048,bundle,2024-11-18\r\n16057,2303,EMEA,sports,retail,57.90,7,0.170,none,2024-11-26\r\n16058,2409,APAC,sports,online,45.94,6,0.069,loyalty,2024-05-23\r\n16059,1224,APAC,electronics,mobile,50.86,2,0.085,none,2024-02-01\r\n16060,1767,AMER,home,retail,36.12,8,0.207,loyalty,2024-05-23\r\n16061,2179,LATAM,grocery,mobile,34.79,4,0.108,coupon,2024-03-12\r\n16062,2233,EMEA,sports,retail,30.78,1,0.189,coupon,2024-09-03\r\n16063,2103,LATAM,grocery,online,74.32,5,0.030,none,2024-02-24\r\n16064,1470,LATAM,fashion,online,78.42,8,0.049,none,2024-02-25\r\n16065,1474,LATAM,grocery,retail,105.92,5,0.249,none,2024-12-24\r\n16066,2300,EMEA,electronics,online,19.53,5,0.191,none,2024-08-05\r\n16067,1950,LATAM,grocery,retail,79.91,6,0.174,bundle,2024-12-20\r\n16068,1212,LATAM,toys,online,145.73,6,0.218,bundle,2024-05-23\r\n16069,1786,APAC,electronics,online,64.26,2,0.033,coupon,2024-04-19\r\n16070,1578,LATAM,toys,online,76.44,5,0.026,none,2024-09-04\r\n16071,1458,APAC,sports,online,110.56,3,0.069,none,2024-01-23\r\n16072,1343,LATAM,fashion,online,39.28,4,0.204,coupon,2024-10-01\r\
n16073,2358,AMER,sports,online,41.60,3,0.055,none,2024-05-28\r\n16074,1859,AMER,home,retail,31.73,3,0.199,none,2024-04-28\r\n16075,2328,EMEA,fashion,retail,32.01,7,0.217,loyalty,2024-01-08\r\n16076,1790,AMER,grocery,retail,67.95,5,0.097,bundle,2024-01-01\r\n16077,1756,EMEA,sports,retail,90.17,3,0.184,loyalty,2024-05-25\r\n16078,2348,EMEA,fashion,online,125.47,3,0.130,bundle,2024-09-04\r\n16079,1585,AMER,grocery,online,57.41,4,0.087,none,2024-05-24\r\n16080,1899,APAC,toys,mobile,83.39,7,0.024,loyalty,2024-10-23\r\n16081,2147,LATAM,grocery,online,80.13,6,0.093,none,2024-12-27\r\n16082,1527,AMER,home,retail,61.25,3,0.138,coupon,2024-02-05\r\n16083,1772,EMEA,electronics,retail,82.68,1,0.089,none,2024-01-19\r\n16084,2387,EMEA,toys,retail,136.49,2,0.052,coupon,2024-12-12\r\n16085,1207,APAC,fashion,retail,62.57,6,0.036,bundle,2024-03-11\r\n16086,2197,LATAM,home,online,68.06,7,0.013,none,2024-12-08\r\n16087,1876,LATAM,sports,online,48.77,2,0.073,none,2024-03-18\r\n16088,1225,APAC,grocery,online,36.59,2,0.144,none,2024-08-19\r\n16089,1521,LATAM,grocery,retail,49.15,3,0.218,none,2024-11-10\r\n16090,1498,LATAM,grocery,retail,32.65,8,0.208,coupon,2024-03-17\r\n16091,2397,LATAM,electronics,mobile,55.28,2,0.111,coupon,2024-12-01\r\n16092,2283,AMER,sports,online,39.58,6,0.150,none,2024-05-25\r\n16093,1046,EMEA,home,mobile,102.34,3,0.158,none,2024-03-02\r\n16094,1760,LATAM,electronics,online,48.61,1,0.137,none,2024-09-16\r\n16095,2245,APAC,grocery,retail,17.66,5,0.218,bundle,2024-05-06\r\n16096,1053,AMER,fashion,online,44.34,4,0.238,none,2024-11-18\r\n16097,1509,AMER,fashion,mobile,114.89,2,0.244,none,2024-07-27\r\n16098,1907,EMEA,toys,online,46.70,5,0.063,coupon,2024-08-22\r\n16099,1907,EMEA,fashion,online,61.84,6,0.141,bundle,2024-04-23\r\n16100,2374,LATAM,electronics,mobile,62.95,8,0.169,loyalty,2024-07-27\r\n16101,1058,LATAM,electronics,online,62.70,4,0.062,coupon,2024-10-02\r\n16102,1271,EMEA,electronics,partner,53.69,3,0.151,coupon,2024-04-12\r\n16103,1562,AMER,sports,mobile,
104.03,3,0.239,none,2024-07-19\r\n16104,2485,AMER,grocery,mobile,26.90,8,0.181,bundle,2024-03-05\r\n16105,1873,EMEA,home,online,69.19,4,0.197,none,2024-02-20\r\n16106,1929,LATAM,electronics,retail,69.12,5,0.134,loyalty,2024-11-23\r\n16107,1670,EMEA,fashion,online,50.05,5,0.205,coupon,2024-07-20\r\n16108,2239,EMEA,grocery,online,214.14,1,0.099,coupon,2024-05-22\r\n16109,2228,EMEA,toys,retail,89.19,3,0.240,none,2024-01-18\r\n16110,2409,APAC,electronics,online,31.13,8,0.225,bundle,2024-01-21\r\n16111,1216,APAC,home,retail,99.98,8,0.105,none,2024-04-03\r\n16112,1280,LATAM,home,online,67.38,3,0.165,none,2024-08-27\r\n16113,2199,LATAM,sports,online,64.45,2,0.167,coupon,2024-01-13\r\n16114,1273,AMER,toys,mobile,56.41,4,0.052,coupon,2024-03-25\r\n16115,1902,AMER,home,retail,43.78,1,0.219,loyalty,2024-01-23\r\n16116,2144,EMEA,toys,retail,42.05,1,0.216,coupon,2024-06-20\r\n16117,1797,LATAM,sports,retail,56.19,5,0.241,coupon,2024-05-14\r\n16118,2257,AMER,home,online,50.54,8,0.041,none,2024-04-08\r\n16119,1608,AMER,sports,online,44.12,3,0.236,none,2024-08-19\r\n16120,1076,LATAM,sports,mobile,73.84,6,0.151,coupon,2024-02-08\r\n16121,2119,AMER,grocery,retail,73.65,2,0.096,coupon,2024-07-05\r\n16122,1322,AMER,toys,retail,39.42,4,0.044,none,2024-06-18\r\n16123,1111,APAC,electronics,retail,46.79,6,0.052,none,2024-04-03\r\n16124,1139,EMEA,grocery,partner,50.54,4,0.129,none,2024-02-11\r\n16125,1183,AMER,home,retail,173.78,3,0.159,none,2024-05-12\r\n16126,1217,EMEA,sports,retail,48.01,7,0.246,none,2024-06-13\r\n16127,1053,AMER,electronics,online,99.44,6,0.212,coupon,2024-05-04\r\n16128,1302,LATAM,grocery,retail,50.62,5,0.151,none,2024-08-20\r\n16129,1924,AMER,toys,online,58.48,3,0.202,none,2024-04-02\r\n16130,2225,EMEA,sports,online,48.96,2,0.123,none,2024-08-01\r\n16131,1369,AMER,grocery,retail,50.13,8,0.031,bundle,2024-12-05\r\n16132,1757,EMEA,sports,retail,104.71,1,0.093,coupon,2024-10-22\r\n16133,1556,AMER,electronics,online,67.55,1,0.199,none,2024-02-28\r\n16134,1655,LATAM,fashion
,online,20.28,2,0.035,none,2024-01-23\r\n16135,2418,AMER,sports,online,73.95,2,0.131,loyalty,2024-04-16\r\n16136,1501,AMER,electronics,online,89.29,7,0.063,none,2024-12-07\r\n16137,1815,APAC,electronics,retail,16.69,2,0.100,none,2024-07-03\r\n16138,2182,AMER,home,retail,67.22,7,0.207,none,2024-11-05\r\n16139,1734,AMER,toys,online,61.85,6,0.025,bundle,2024-11-03\r\n16140,1661,LATAM,grocery,retail,59.84,6,0.029,none,2024-02-09\r\n16141,2307,LATAM,home,online,35.54,1,0.025,none,2024-01-22\r\n16142,1369,AMER,home,retail,15.81,6,0.030,coupon,2024-05-16\r\n16143,1521,LATAM,home,online,55.04,8,0.158,bundle,2024-02-22\r\n16144,2069,AMER,home,retail,23.93,4,0.184,none,2024-11-09\r\n16145,2124,AMER,toys,mobile,41.31,6,0.101,none,2024-10-04\r\n16146,2074,AMER,sports,online,112.59,3,0.165,none,2024-10-17\r\n16147,1496,AMER,fashion,online,43.50,3,0.004,none,2024-04-01\r\n16148,2209,AMER,sports,online,22.77,4,0.093,coupon,2024-06-08\r\n16149,2117,EMEA,electronics,mobile,45.94,8,0.201,bundle,2024-04-21\r\n16150,2279,LATAM,electronics,retail,45.76,1,0.053,none,2024-10-01\r\n16151,1877,LATAM,home,online,33.92,7,0.003,bundle,2024-05-10\r\n16152,2120,AMER,electronics,mobile,66.58,1,0.072,none,2024-05-11\r\n16153,1925,LATAM,home,online,82.97,8,0.222,none,2024-07-11\r\n16154,1810,LATAM,grocery,online,46.32,7,0.080,none,2024-02-22\r\n16155,1901,AMER,home,retail,67.27,1,0.204,none,2024-01-07\r\n16156,2436,LATAM,electronics,online,38.55,6,0.084,none,2024-01-27\r\n16157,1065,AMER,grocery,online,110.77,1,0.191,none,2024-02-21\r\n16158,1633,EMEA,grocery,online,23.00,6,0.045,none,2024-07-05\r\n16159,1204,AMER,sports,retail,65.11,6,0.127,coupon,2024-12-13\r\n16160,1592,LATAM,toys,retail,51.27,7,0.183,coupon,2024-01-13\r\n16161,1830,EMEA,grocery,retail,31.86,5,0.139,bundle,2024-03-12\r\n16162,1360,APAC,sports,retail,23.34,4,0.190,coupon,2024-09-11\r\n16163,1106,AMER,toys,online,101.19,1,0.167,bundle,2024-07-24\r\n16164,1346,AMER,electronics,online,46.35,3,0.176,none,2024-01-10\r\n16165,2207,APAC
,home,retail,83.11,7,0.242,none,2024-02-27\r\n16166,1006,AMER,sports,retail,88.73,3,0.043,coupon,2024-09-21\r\n16167,1938,APAC,home,retail,98.30,1,0.225,coupon,2024-05-10\r\n16168,1907,EMEA,grocery,online,56.69,7,0.211,none,2024-03-06\r\n16169,2174,LATAM,grocery,online,48.44,6,0.079,loyalty,2024-05-11\r\n16170,1236,AMER,sports,online,44.51,3,0.134,none,2024-04-09\r\n16171,1206,EMEA,grocery,retail,39.52,6,0.001,none,2024-07-05\r\n16172,1308,EMEA,fashion,online,65.61,6,0.055,none,2024-02-27\r\n16173,1491,EMEA,home,partner,79.21,1,0.092,coupon,2024-04-23\r\n16174,1480,APAC,toys,retail,130.44,8,0.152,bundle,2024-08-17\r\n16175,2023,LATAM,fashion,online,110.41,2,0.103,none,2024-05-01\r\n16176,1027,APAC,toys,online,59.23,2,0.024,coupon,2024-08-07\r\n16177,1665,AMER,fashion,online,56.49,5,0.173,bundle,2024-10-09\r\n16178,2132,LATAM,electronics,online,49.20,6,0.168,none,2024-05-14\r\n16179,2228,EMEA,fashion,online,29.23,7,0.169,bundle,2024-04-25\r\n16180,2079,EMEA,electronics,retail,68.23,1,0.014,none,2024-05-09\r\n16181,1619,APAC,toys,online,43.82,6,0.167,none,2024-08-03\r\n16182,1406,LATAM,home,retail,74.13,5,0.156,none,2024-12-14\r\n16183,1011,APAC,electronics,online,27.71,4,0.211,none,2024-12-26\r\n16184,2069,AMER,grocery,online,145.03,5,0.242,coupon,2024-12-27\r\n16185,1400,EMEA,sports,retail,35.65,3,0.171,coupon,2024-07-26\r\n16186,1993,APAC,electronics,retail,61.14,5,0.237,coupon,2024-10-09\r\n16187,2197,LATAM,electronics,mobile,59.99,6,0.211,none,2024-03-14\r\n16188,2042,LATAM,home,online,51.39,7,0.186,loyalty,2024-11-07\r\n16189,1501,AMER,fashion,online,61.75,4,0.191,coupon,2024-03-13\r\n16190,2108,AMER,sports,online,42.35,3,0.233,none,2024-02-05\r\n16191,2285,APAC,grocery,retail,59.07,1,0.018,none,2024-10-20\r\n16192,2378,LATAM,grocery,retail,101.95,3,0.102,none,2024-11-11\r\n16193,2133,AMER,sports,mobile,77.84,1,0.134,none,2024-11-19\r\n16194,1162,AMER,home,retail,54.81,6,0.032,bundle,2024-03-01\r\n16195,1867,AMER,fashion,mobile,53.47,6,0.224,coupon,2024-08-03\r\
n16196,2109,EMEA,home,online,46.82,3,0.041,none,2024-12-05\r\n16197,1985,AMER,fashion,retail,65.45,2,0.207,none,2024-07-22\r\n16198,1864,EMEA,sports,retail,99.72,4,0.228,none,2024-09-11\r\n16199,1821,LATAM,sports,mobile,64.24,3,0.123,bundle,2024-12-04\r\n16200,1499,EMEA,sports,online,42.18,5,0.116,none,2024-07-09\r\n16201,1114,APAC,fashion,online,105.11,4,0.043,none,2024-10-28\r\n16202,1663,LATAM,sports,retail,81.99,7,0.187,none,2024-06-11\r\n16203,1273,AMER,grocery,mobile,55.93,7,0.130,coupon,2024-07-19\r\n16204,1168,APAC,home,retail,29.17,1,0.099,none,2024-10-22\r\n16205,1526,EMEA,fashion,retail,31.67,1,0.209,none,2024-11-25\r\n16206,1545,AMER,grocery,retail,40.95,7,0.034,none,2024-09-14\r\n16207,1768,AMER,home,retail,27.23,4,0.133,none,2024-06-18\r\n16208,2102,APAC,electronics,online,34.71,6,0.087,none,2024-01-13\r\n16209,1624,AMER,grocery,retail,53.98,2,0.037,none,2024-10-05\r\n16210,2462,EMEA,grocery,online,76.61,8,0.232,coupon,2024-03-17\r\n16211,1931,APAC,fashion,retail,50.59,7,0.248,bundle,2024-02-01\r\n16212,2394,EMEA,fashion,online,54.75,6,0.132,none,2024-05-06\r\n16213,1113,EMEA,electronics,online,68.00,4,0.045,none,2024-07-22\r\n16214,1386,AMER,sports,online,37.64,8,0.142,loyalty,2024-10-14\r\n16215,2210,APAC,electronics,mobile,91.84,8,0.017,none,2024-05-12\r\n16216,1448,EMEA,electronics,retail,22.70,5,0.051,none,2024-01-22\r\n16217,1130,LATAM,electronics,retail,32.08,6,0.238,loyalty,2024-10-01\r\n16218,2434,APAC,grocery,retail,86.75,1,0.148,none,2024-01-20\r\n16219,1137,APAC,toys,mobile,53.99,4,0.047,none,2024-04-14\r\n16220,1498,LATAM,sports,online,50.41,5,0.110,coupon,2024-07-11\r\n16221,1803,LATAM,electronics,online,55.58,5,0.190,coupon,2024-09-04\r\n16222,1449,EMEA,grocery,partner,64.21,5,0.149,loyalty,2024-01-21\r\n16223,2299,EMEA,grocery,online,44.55,3,0.118,coupon,2024-06-06\r\n16224,2297,EMEA,fashion,online,118.49,4,0.237,loyalty,2024-07-09\r\n16225,1716,LATAM,grocery,retail,47.95,8,0.212,none,2024-06-03\r\n16226,1820,AMER,grocery,retail,56.48,7
,0.182,none,2024-02-14\r\n16227,2392,EMEA,fashion,online,78.05,8,0.204,none,2024-01-10\r\n16228,2116,LATAM,toys,retail,56.97,3,0.097,none,2024-08-20\r\n16229,2477,APAC,grocery,retail,190.69,3,0.056,coupon,2024-06-18\r\n16230,2223,EMEA,sports,mobile,62.45,3,0.041,coupon,2024-07-10\r\n16231,2017,EMEA,grocery,retail,61.52,1,0.155,coupon,2024-11-25\r\n16232,2202,APAC,sports,online,50.31,3,0.105,none,2024-04-04\r\n16233,1695,LATAM,home,retail,96.09,8,0.088,coupon,2024-01-08\r\n16234,1722,EMEA,fashion,online,35.14,6,0.199,none,2024-09-08\r\n16235,1592,LATAM,grocery,mobile,40.11,6,0.161,coupon,2024-05-11\r\n16236,1511,EMEA,sports,online,52.14,1,0.108,coupon,2024-05-11\r\n16237,1422,LATAM,grocery,online,55.53,6,0.018,none,2024-06-04\r\n16238,1448,EMEA,electronics,retail,50.42,6,0.188,bundle,2024-12-02\r\n16239,2035,LATAM,grocery,online,86.60,4,0.191,none,2024-05-01\r\n16240,1827,EMEA,grocery,retail,31.18,1,0.244,none,2024-02-18\r\n16241,1762,LATAM,home,retail,28.91,1,0.111,loyalty,2024-01-11\r\n16242,1564,APAC,fashion,mobile,52.36,5,0.127,bundle,2024-08-07\r\n16243,2320,LATAM,grocery,online,31.12,1,0.067,none,2024-08-27\r\n16244,1369,AMER,toys,online,42.88,1,0.180,none,2024-05-20\r\n16245,1444,EMEA,electronics,online,103.33,8,0.008,none,2024-04-13\r\n16246,1650,LATAM,home,online,59.36,2,0.100,none,2024-06-22\r\n16247,2038,LATAM,home,mobile,108.94,1,0.144,loyalty,2024-08-10\r\n16248,1082,EMEA,grocery,online,116.11,2,0.247,none,2024-03-10\r\n16249,1249,EMEA,grocery,online,65.83,4,0.132,none,2024-10-07\r\n16250,1885,EMEA,grocery,online,43.15,3,0.172,none,2024-10-27\r\n16251,2350,APAC,home,mobile,87.51,7,0.145,none,2024-08-07\r\n16252,2227,LATAM,grocery,online,58.79,8,0.212,none,2024-04-26\r\n16253,1504,AMER,home,online,88.39,7,0.009,none,2024-01-18\r\n16254,1112,APAC,grocery,retail,52.08,8,0.009,none,2024-06-22\r\n16255,2320,LATAM,grocery,mobile,72.57,3,0.133,none,2024-04-11\r\n16256,1504,AMER,sports,retail,65.92,7,0.113,bundle,2024-05-21\r\n16257,2313,LATAM,grocery,online,86.
10,1,0.134,coupon,2024-07-12\r\n16258,1525,APAC,home,retail,31.02,6,0.206,none,2024-09-17\r\n16259,2200,LATAM,home,mobile,34.25,1,0.028,loyalty,2024-02-11\r\n16260,1726,EMEA,home,mobile,33.01,7,0.142,none,2024-05-13\r\n16261,1193,APAC,electronics,online,40.98,8,0.083,none,2024-01-15\r\n16262,1673,AMER,fashion,online,45.05,4,0.026,bundle,2024-02-11\r\n16263,2291,EMEA,home,online,62.03,3,0.218,none,2024-03-24\r\n16264,2458,EMEA,toys,partner,65.29,4,0.167,bundle,2024-09-28\r\n16265,2454,LATAM,electronics,retail,97.91,3,0.162,coupon,2024-12-15\r\n16266,1195,AMER,grocery,online,115.03,5,0.064,bundle,2024-05-19\r\n16267,1744,EMEA,sports,online,89.70,8,0.129,none,2024-12-01\r\n16268,2298,APAC,fashion,online,23.85,8,0.185,coupon,2024-04-07\r\n16269,1061,APAC,toys,online,37.89,7,0.142,none,2024-03-23\r\n16270,1613,EMEA,grocery,online,151.23,1,0.132,none,2024-09-21\r\n16271,2450,EMEA,toys,retail,175.71,7,0.124,none,2024-05-08\r\n16272,1058,LATAM,home,mobile,135.62,2,0.037,loyalty,2024-02-17\r\n16273,1363,EMEA,grocery,retail,66.22,2,0.077,coupon,2024-07-16\r\n16274,1085,EMEA,home,retail,48.16,6,0.079,coupon,2024-09-20\r\n16275,1493,APAC,toys,retail,37.11,1,0.050,coupon,2024-05-13\r\n16276,1784,EMEA,electronics,mobile,47.55,1,0.031,coupon,2024-10-03\r\n16277,2119,AMER,grocery,retail,29.34,5,0.140,bundle,2024-08-19\r\n16278,1840,LATAM,grocery,online,65.03,4,0.106,none,2024-06-09\r\n16279,2069,AMER,home,mobile,167.60,2,0.138,none,2024-02-13\r\n16280,1647,LATAM,grocery,online,44.38,5,0.164,coupon,2024-11-12\r\n16281,1709,EMEA,home,retail,38.01,7,0.064,bundle,2024-05-08\r\n16282,1329,APAC,toys,online,123.13,6,0.142,none,2024-03-23\r\n16283,1903,LATAM,home,mobile,55.18,3,0.157,coupon,2024-06-17\r\n16284,2202,APAC,grocery,retail,112.83,3,0.046,none,2024-04-02\r\n16285,1304,LATAM,fashion,retail,53.19,5,0.038,coupon,2024-02-03\r\n16286,2229,APAC,grocery,mobile,63.35,2,0.004,none,2024-09-28\r\n16287,1887,LATAM,home,online,90.96,4,0.035,loyalty,2024-08-20\r\n16288,1814,AMER,grocery,retai
l,74.43,7,0.154,bundle,2024-01-25\r\n16289,2479,EMEA,home,retail,125.36,3,0.019,none,2024-05-19\r\n16290,1372,APAC,grocery,partner,61.28,7,0.032,loyalty,2024-05-25\r\n16291,2459,AMER,fashion,online,56.80,8,0.006,bundle,2024-01-19\r\n16292,1045,LATAM,home,online,130.45,8,0.132,coupon,2024-03-28\r\n16293,1279,EMEA,fashion,online,50.79,5,0.241,bundle,2024-04-07\r\n16294,1991,APAC,electronics,retail,46.56,2,0.144,none,2024-09-12\r\n16295,1176,EMEA,toys,online,57.84,1,0.213,coupon,2024-04-01\r\n16296,1367,AMER,sports,online,43.87,6,0.044,none,2024-12-17\r\n16297,1120,LATAM,electronics,online,43.67,4,0.138,none,2024-02-07\r\n16298,2363,AMER,home,retail,40.04,8,0.227,bundle,2024-09-27\r\n16299,2364,APAC,fashion,retail,89.86,7,0.093,coupon,2024-03-26\r\n16300,2089,EMEA,electronics,online,33.00,1,0.037,none,2024-07-01\r\n16301,1802,AMER,sports,retail,158.79,7,0.184,bundle,2024-01-21\r\n16302,2133,AMER,toys,retail,86.13,5,0.171,bundle,2024-10-03\r\n16303,1916,AMER,sports,online,26.83,8,0.042,none,2024-10-28\r\n16304,1457,EMEA,sports,retail,55.47,5,0.196,none,2024-09-02\r\n16305,2064,LATAM,home,retail,67.90,3,0.050,bundle,2024-11-05\r\n16306,1398,APAC,electronics,retail,68.80,2,0.019,none,2024-11-08\r\n16307,2015,APAC,electronics,retail,73.92,4,0.209,none,2024-04-24\r\n16308,1648,APAC,electronics,online,74.05,8,0.018,coupon,2024-09-06\r\n16309,2271,LATAM,sports,online,41.46,1,0.090,bundle,2024-02-22\r\n16310,1518,AMER,electronics,mobile,66.50,6,0.179,none,2024-09-04\r\n16311,1717,AMER,toys,retail,35.17,8,0.157,none,2024-10-18\r\n16312,1549,APAC,electronics,retail,80.90,6,0.029,none,2024-05-15\r\n16313,2176,AMER,home,partner,45.22,4,0.184,none,2024-05-05\r\n16314,2439,AMER,sports,online,60.19,3,0.042,bundle,2024-07-25\r\n16315,1812,EMEA,grocery,mobile,105.70,4,0.165,none,2024-04-09\r\n16316,1578,LATAM,fashion,retail,64.15,3,0.203,none,2024-04-24\r\n16317,1109,APAC,electronics,online,60.87,8,0.009,coupon,2024-01-17\r\n16318,1772,EMEA,sports,retail,67.20,6,0.046,bundle,2024-02-20
\r\n16319,1267,EMEA,toys,online,104.98,6,0.139,none,2024-01-15\r\n16320,2232,EMEA,electronics,online,44.06,8,0.164,coupon,2024-10-02\r\n16321,1487,AMER,electronics,retail,69.61,1,0.160,none,2024-09-10\r\n16322,2134,AMER,home,online,47.96,7,0.229,coupon,2024-04-06\r\n16323,2134,AMER,electronics,online,53.12,6,0.226,coupon,2024-04-04\r\n16324,1794,AMER,grocery,mobile,82.96,3,0.100,loyalty,2024-04-15\r\n16325,1005,LATAM,sports,retail,43.35,6,0.131,coupon,2024-08-01\r\n16326,2274,APAC,grocery,retail,61.29,7,0.228,none,2024-03-03\r\n16327,1022,APAC,electronics,mobile,68.92,2,0.140,bundle,2024-07-11\r\n16328,1324,LATAM,electronics,online,54.66,4,0.139,none,2024-05-09\r\n16329,2051,APAC,electronics,partner,47.72,6,0.174,none,2024-11-03\r\n16330,1143,LATAM,grocery,online,90.54,2,0.204,none,2024-04-06\r\n16331,2185,EMEA,toys,mobile,37.04,3,0.001,coupon,2024-07-01\r\n16332,2295,EMEA,fashion,retail,51.29,2,0.041,none,2024-11-23\r\n16333,1342,LATAM,electronics,mobile,69.79,4,0.188,none,2024-08-28\r\n16334,2217,LATAM,fashion,retail,59.59,8,0.052,coupon,2024-07-24\r\n16335,1176,EMEA,toys,retail,127.21,6,0.123,none,2024-11-19\r\n16336,2431,LATAM,home,retail,34.17,1,0.026,none,2024-01-02\r\n16337,1758,AMER,grocery,retail,54.60,6,0.017,coupon,2024-05-27\r\n16338,1479,AMER,sports,mobile,151.20,7,0.131,none,2024-11-28\r\n16339,1918,EMEA,fashion,retail,99.17,4,0.111,coupon,2024-02-16\r\n16340,1978,AMER,grocery,online,37.82,2,0.192,bundle,2024-03-21\r\n16341,1183,AMER,sports,online,39.47,5,0.104,none,2024-12-09\r\n16342,1471,EMEA,grocery,online,32.45,5,0.040,bundle,2024-06-20\r\n16343,1078,APAC,fashion,mobile,74.38,7,0.084,loyalty,2024-12-25\r\n16344,1542,APAC,grocery,online,74.85,8,0.075,none,2024-04-24\r\n16345,1943,AMER,sports,partner,92.84,7,0.164,loyalty,2024-04-23\r\n16346,1157,LATAM,electronics,mobile,29.16,7,0.125,loyalty,2024-10-28\r\n16347,1393,LATAM,grocery,online,35.96,5,0.020,none,2024-11-12\r\n16348,2042,LATAM,fashion,mobile,47.45,1,0.141,none,2024-08-02\r\n16349,1122,AMER
,grocery,retail,39.89,2,0.094,loyalty,2024-06-23\r\n16350,1952,EMEA,fashion,mobile,132.37,1,0.151,bundle,2024-07-20\r\n16351,1508,LATAM,home,online,122.05,2,0.014,none,2024-08-19\r\n16352,2154,APAC,grocery,online,119.24,4,0.096,coupon,2024-10-26\r\n16353,1246,EMEA,grocery,online,120.72,3,0.035,none,2024-07-13\r\n16354,1118,AMER,grocery,online,57.32,6,0.160,none,2024-08-26\r\n16355,1878,EMEA,sports,online,68.82,7,0.089,none,2024-12-03\r\n16356,1148,AMER,grocery,mobile,56.42,7,0.104,none,2024-04-27\r\n16357,1777,AMER,grocery,online,41.33,6,0.194,coupon,2024-06-09\r\n16358,2078,APAC,home,retail,58.20,1,0.092,none,2024-04-02\r\n16359,2139,AMER,grocery,retail,115.23,6,0.031,none,2024-11-13\r\n16360,1610,LATAM,toys,retail,105.06,3,0.244,none,2024-08-22\r\n16361,2378,LATAM,electronics,mobile,63.52,2,0.054,bundle,2024-06-23\r\n16362,2243,APAC,home,online,171.50,3,0.245,none,2024-11-21\r\n16363,1801,LATAM,grocery,online,92.71,8,0.172,none,2024-12-17\r\n16364,2239,EMEA,electronics,online,60.39,8,0.114,none,2024-07-05\r\n16365,1321,EMEA,grocery,online,55.00,3,0.060,coupon,2024-10-07\r\n16366,1421,APAC,sports,online,27.30,5,0.105,none,2024-02-22\r\n16367,2281,AMER,grocery,retail,55.22,1,0.137,none,2024-09-09\r\n16368,1398,APAC,toys,retail,47.31,4,0.116,none,2024-10-22\r\n16369,1314,AMER,electronics,online,111.18,2,0.059,coupon,2024-03-14\r\n16370,2068,LATAM,grocery,online,84.82,6,0.167,none,2024-10-03\r\n16371,1449,EMEA,toys,online,205.28,4,0.037,none,2024-10-12\r\n16372,1999,EMEA,fashion,online,66.40,3,0.051,coupon,2024-04-02\r\n16373,2403,LATAM,sports,online,89.93,6,0.171,loyalty,2024-09-09\r\n16374,1423,EMEA,home,online,86.21,2,0.009,none,2024-12-24\r\n16375,1864,EMEA,fashion,partner,52.58,8,0.018,none,2024-03-11\r\n16376,2162,EMEA,grocery,retail,28.51,7,0.111,bundle,2024-08-03\r\n16377,1738,LATAM,grocery,retail,17.86,8,0.116,coupon,2024-02-26\r\n16378,1800,APAC,grocery,online,53.53,2,0.161,none,2024-06-01\r\n16379,2056,LATAM,home,online,52.55,8,0.136,none,2024-04-24\r\n1638
0,1833,EMEA,grocery,online,47.18,7,0.022,none,2024-12-10\r\n16381,1941,AMER,electronics,retail,91.44,2,0.137,none,2024-11-08\r\n16382,1172,APAC,fashion,online,74.22,5,0.086,loyalty,2024-12-05\r\n16383,1767,AMER,toys,mobile,33.31,6,0.058,bundle,2024-08-10\r\n16384,1745,APAC,fashion,online,31.74,3,0.241,bundle,2024-06-05\r\n16385,1112,APAC,sports,retail,64.21,4,0.191,none,2024-10-15\r\n16386,1646,APAC,fashion,online,83.30,7,0.043,coupon,2024-01-05\r\n16387,1457,EMEA,electronics,online,61.11,8,0.146,none,2024-05-04\r\n16388,1969,LATAM,electronics,retail,117.05,2,0.023,coupon,2024-08-20\r\n16389,1165,AMER,electronics,online,58.72,3,0.141,none,2024-06-22\r\n16390,1648,APAC,home,retail,106.59,8,0.154,bundle,2024-09-27\r\n16391,2124,AMER,home,retail,29.50,1,0.013,none,2024-09-16\r\n16392,1406,LATAM,home,online,39.47,2,0.038,none,2024-09-06\r\n16393,2101,APAC,electronics,retail,25.52,1,0.082,bundle,2024-10-20\r\n16394,1874,LATAM,toys,retail,54.57,6,0.039,none,2024-07-12\r\n16395,2049,LATAM,sports,retail,113.92,5,0.067,none,2024-10-17\r\n16396,2491,APAC,electronics,online,55.57,5,0.228,loyalty,2024-08-13\r\n16397,1708,LATAM,grocery,online,87.46,2,0.222,bundle,2024-04-04\r\n16398,2231,LATAM,home,mobile,30.64,4,0.077,coupon,2024-07-05\r\n16399,1301,AMER,grocery,online,46.75,8,0.095,none,2024-03-28\r\n16400,1032,AMER,grocery,retail,55.07,8,0.214,coupon,2024-08-07\r\n16401,1373,LATAM,electronics,mobile,34.23,6,0.168,bundle,2024-10-22\r\n16402,2168,EMEA,grocery,partner,22.04,4,0.031,bundle,2024-12-19\r\n16403,1590,APAC,toys,retail,100.89,4,0.138,none,2024-01-01\r\n16404,1072,LATAM,grocery,online,39.84,2,0.060,none,2024-04-06\r\n16405,1905,APAC,grocery,retail,36.93,8,0.076,coupon,2024-09-25\r\n16406,1278,AMER,fashion,mobile,184.64,4,0.197,coupon,2024-02-04\r\n16407,1374,APAC,electronics,online,36.12,1,0.165,loyalty,2024-05-23\r\n16408,2493,APAC,grocery,online,51.77,7,0.169,coupon,2024-03-11\r\n16409,1639,APAC,grocery,online,49.72,6,0.112,none,2024-01-26\r\n16410,1580,AMER,electron
ics,partner,24.39,5,0.118,loyalty,2024-12-20\r\n16411,2030,EMEA,fashion,retail,108.58,7,0.240,none,2024-07-10\r\n16412,1443,EMEA,grocery,mobile,88.58,3,0.133,none,2024-10-20\r\n16413,1200,EMEA,home,retail,42.01,2,0.040,bundle,2024-11-21\r\n16414,1351,APAC,electronics,online,62.02,1,0.159,coupon,2024-08-12\r\n16415,1210,LATAM,fashion,retail,39.52,2,0.027,bundle,2024-04-15\r\n16416,1741,AMER,electronics,online,63.32,3,0.175,none,2024-02-21\r\n16417,1869,AMER,electronics,online,76.08,4,0.239,none,2024-05-08\r\n16418,2271,LATAM,sports,retail,112.99,5,0.072,none,2024-12-23\r\n16419,1326,AMER,home,online,34.95,5,0.244,none,2024-09-14\r\n16420,1974,EMEA,grocery,online,115.84,8,0.236,none,2024-05-09\r\n16421,2350,APAC,electronics,online,49.38,8,0.188,none,2024-12-18\r\n16422,1773,LATAM,electronics,mobile,20.36,5,0.068,loyalty,2024-07-03\r\n16423,1406,LATAM,sports,online,19.60,8,0.244,none,2024-11-12\r\n16424,1848,EMEA,electronics,online,46.28,2,0.075,none,2024-12-16\r\n16425,1529,LATAM,grocery,mobile,21.47,2,0.038,none,2024-02-04\r\n16426,1768,AMER,fashion,mobile,30.62,3,0.212,bundle,2024-05-17\r\n16427,2125,LATAM,sports,online,28.20,2,0.197,none,2024-10-21\r\n16428,1759,EMEA,electronics,retail,101.80,4,0.190,none,2024-02-11\r\n16429,2290,LATAM,home,online,64.54,2,0.220,bundle,2024-04-28\r\n16430,1992,LATAM,home,partner,96.23,6,0.066,coupon,2024-08-13\r\n16431,2280,EMEA,electronics,mobile,40.58,5,0.032,coupon,2024-01-17\r\n16432,1659,APAC,home,online,19.01,1,0.140,none,2024-05-21\r\n16433,2081,APAC,fashion,retail,38.96,2,0.237,bundle,2024-10-21\r\n16434,2483,LATAM,grocery,mobile,44.71,6,0.070,loyalty,2024-05-27\r\n16435,2334,LATAM,sports,online,44.05,2,0.126,bundle,2024-02-06\r\n16436,1020,APAC,electronics,online,105.55,6,0.198,none,2024-03-27\r\n16437,2285,APAC,grocery,retail,46.31,3,0.117,bundle,2024-09-16\r\n16438,2482,EMEA,home,partner,91.21,1,0.203,none,2024-08-13\r\n16439,1219,LATAM,toys,mobile,69.55,6,0.174,none,2024-06-13\r\n16440,1061,APAC,home,online,22.92,4,0.175
,none,2024-10-07\r\n16441,2188,EMEA,grocery,online,52.04,1,0.217,none,2024-03-03\r\n16442,1341,EMEA,electronics,retail,68.95,7,0.160,bundle,2024-09-19\r\n16443,1967,EMEA,toys,online,38.41,8,0.071,coupon,2024-04-10\r\n16444,2041,LATAM,home,retail,51.39,8,0.033,bundle,2024-10-22\r\n16445,1418,LATAM,grocery,retail,63.45,8,0.007,bundle,2024-11-18\r\n16446,2046,APAC,electronics,online,89.97,8,0.101,coupon,2024-05-04\r\n16447,2296,AMER,grocery,partner,34.28,7,0.228,coupon,2024-09-12\r\n16448,2272,EMEA,electronics,retail,67.48,3,0.057,coupon,2024-06-01\r\n16449,1724,LATAM,fashion,retail,57.24,3,0.073,loyalty,2024-05-03\r\n16450,2224,EMEA,sports,online,88.59,1,0.234,bundle,2024-10-03\r\n16451,1247,AMER,toys,retail,38.59,8,0.214,none,2024-12-05\r\n16452,1089,LATAM,sports,mobile,45.08,7,0.037,none,2024-06-09\r\n16453,1239,APAC,electronics,online,56.44,4,0.164,coupon,2024-02-02\r\n16454,1529,LATAM,fashion,retail,92.40,8,0.075,none,2024-02-07\r\n16455,1900,APAC,fashion,retail,44.27,4,0.018,loyalty,2024-03-13\r\n16456,2002,APAC,electronics,online,139.52,8,0.027,none,2024-03-21\r\n16457,1741,AMER,grocery,retail,57.38,2,0.127,none,2024-07-10\r\n16458,1144,APAC,grocery,online,68.90,1,0.212,none,2024-04-21\r\n16459,2490,AMER,home,retail,32.09,8,0.026,bundle,2024-07-22\r\n16460,1675,LATAM,grocery,online,51.64,3,0.068,none,2024-11-22\r\n16461,2128,EMEA,home,online,84.09,2,0.208,loyalty,2024-09-23\r\n16462,2495,EMEA,toys,online,132.67,3,0.122,coupon,2024-08-10\r\n16463,2340,EMEA,grocery,online,43.62,3,0.172,none,2024-09-08\r\n16464,2137,LATAM,electronics,retail,149.33,4,0.178,bundle,2024-12-11\r\n16465,1372,APAC,toys,mobile,58.82,7,0.054,none,2024-06-22\r\n16466,1325,APAC,grocery,online,97.85,3,0.215,coupon,2024-02-25\r\n16467,1491,EMEA,electronics,retail,66.22,4,0.207,none,2024-11-21\r\n16468,2375,AMER,fashion,online,63.58,8,0.147,bundle,2024-10-28\r\n16469,1616,APAC,grocery,retail,85.43,4,0.236,loyalty,2024-05-13\r\n16470,1612,LATAM,home,retail,77.14,1,0.154,coupon,2024-03-02\r\n1647
1,2292,EMEA,toys,mobile,57.35,7,0.192,coupon,2024-06-08\r\n16472,2413,AMER,home,retail,68.04,3,0.099,none,2024-06-13\r\n16473,1343,LATAM,electronics,retail,42.07,4,0.122,coupon,2024-02-08\r\n16474,1769,LATAM,sports,retail,78.02,4,0.136,none,2024-01-15\r\n16475,2371,LATAM,grocery,retail,48.16,4,0.040,none,2024-11-12\r\n16476,1001,LATAM,toys,online,141.23,6,0.085,none,2024-10-16\r\n16477,1901,AMER,toys,retail,85.77,6,0.069,bundle,2024-12-08\r\n16478,2212,EMEA,home,retail,70.65,8,0.241,bundle,2024-10-10\r\n16479,2365,LATAM,electronics,retail,34.12,1,0.128,bundle,2024-06-28\r\n16480,2335,EMEA,grocery,mobile,66.84,2,0.230,loyalty,2024-07-18\r\n16481,1416,EMEA,sports,online,54.57,7,0.243,coupon,2024-06-11\r\n16482,1388,AMER,sports,mobile,108.31,2,0.126,none,2024-04-18\r\n16483,2289,APAC,grocery,retail,17.21,4,0.198,none,2024-05-04\r\n16484,1023,APAC,fashion,retail,94.27,7,0.226,coupon,2024-12-14\r\n16485,1122,AMER,grocery,online,37.76,4,0.077,none,2024-07-10\r\n16486,2209,AMER,electronics,retail,43.72,1,0.069,none,2024-12-04\r\n16487,1640,APAC,electronics,mobile,51.43,5,0.068,none,2024-06-11\r\n16488,1125,LATAM,grocery,online,98.85,7,0.097,none,2024-08-24\r\n16489,2233,EMEA,electronics,retail,43.02,6,0.165,none,2024-09-25\r\n16490,2338,AMER,electronics,online,40.18,1,0.226,none,2024-11-24\r\n16491,2199,LATAM,home,retail,155.09,2,0.229,none,2024-08-01\r\n16492,1380,AMER,grocery,online,59.69,4,0.044,coupon,2024-11-17\r\n16493,1572,LATAM,toys,retail,39.25,1,0.224,coupon,2024-06-15\r\n16494,1903,LATAM,home,online,41.81,1,0.216,none,2024-08-11\r\n16495,2243,APAC,grocery,retail,20.81,3,0.203,none,2024-08-23\r\n16496,1080,LATAM,fashion,online,88.62,3,0.211,bundle,2024-09-11\r\n16497,1225,APAC,fashion,online,33.72,7,0.223,none,2024-01-10\r\n16498,1100,AMER,home,mobile,43.98,5,0.044,bundle,2024-03-21\r\n16499,1504,AMER,electronics,online,46.84,2,0.232,none,2024-11-05\r\n16500,1466,AMER,grocery,mobile,53.30,4,0.023,coupon,2024-06-08\r\n16501,1623,AMER,sports,retail,151.76,8,0.110,b
undle,2024-09-04\r\n16502,2174,LATAM,electronics,online,47.33,8,0.003,loyalty,2024-11-15\r\n16503,1400,EMEA,grocery,online,66.89,1,0.109,none,2024-11-14\r\n16504,1110,LATAM,fashion,online,49.86,7,0.071,coupon,2024-05-27\r\n16505,2162,EMEA,fashion,online,65.23,5,0.224,bundle,2024-02-01\r\n16506,1196,APAC,electronics,retail,77.15,6,0.140,bundle,2024-08-12\r\n16507,1868,AMER,electronics,mobile,66.88,3,0.209,bundle,2024-04-04\r\n16508,1711,APAC,home,retail,21.07,2,0.049,coupon,2024-06-16\r\n16509,1170,AMER,electronics,retail,45.89,3,0.115,bundle,2024-04-06\r\n16510,2312,APAC,home,mobile,85.63,5,0.193,none,2024-10-04\r\n16511,2353,AMER,toys,online,133.81,6,0.240,loyalty,2024-10-15\r\n16512,1877,LATAM,grocery,online,193.52,5,0.085,coupon,2024-03-18\r\n16513,1737,AMER,home,online,90.39,2,0.007,none,2024-08-13\r\n16514,1126,LATAM,sports,online,37.75,1,0.057,none,2024-02-12\r\n16515,1099,LATAM,grocery,retail,64.30,6,0.121,none,2024-02-08\r\n16516,1948,EMEA,fashion,retail,76.26,7,0.023,coupon,2024-09-14\r\n16517,1669,AMER,fashion,online,113.57,4,0.157,none,2024-03-20\r\n16518,1102,APAC,electronics,online,46.57,2,0.212,none,2024-09-17\r\n16519,1523,LATAM,fashion,online,53.46,3,0.198,coupon,2024-03-04\r\n16520,1218,AMER,fashion,retail,21.32,4,0.135,loyalty,2024-10-02\r\n16521,1647,LATAM,home,online,29.50,3,0.018,loyalty,2024-11-13\r\n16522,1088,LATAM,grocery,online,31.60,7,0.042,coupon,2024-04-24\r\n16523,1025,EMEA,fashion,retail,23.97,2,0.186,none,2024-11-02\r\n16524,1052,LATAM,electronics,online,82.21,8,0.008,none,2024-11-10\r\n16525,2049,LATAM,home,retail,17.61,8,0.146,none,2024-09-13\r\n16526,1732,LATAM,electronics,online,63.12,6,0.057,none,2024-02-07\r\n16527,1093,APAC,toys,online,75.93,5,0.189,bundle,2024-11-01\r\n16528,1486,LATAM,sports,online,86.86,8,0.056,loyalty,2024-09-25\r\n16529,1854,AMER,grocery,online,18.45,1,0.073,none,2024-05-01\r\n16530,1018,APAC,home,online,43.92,3,0.122,coupon,2024-02-18\r\n16531,2334,LATAM,electronics,retail,80.45,1,0.149,coupon,2024-06-03\
r\n16532,1808,APAC,electronics,online,61.36,5,0.001,coupon,2024-01-10\r\n16533,1284,APAC,home,online,103.86,8,0.013,none,2024-01-02\r\n16534,1126,LATAM,grocery,retail,79.79,2,0.212,none,2024-07-07\r\n16535,1108,EMEA,home,retail,104.95,6,0.139,none,2024-06-14\r\n16536,1257,APAC,sports,partner,44.19,6,0.238,none,2024-02-28\r\n16537,1740,EMEA,sports,retail,28.74,1,0.112,none,2024-04-01\r\n16538,1253,AMER,home,online,43.30,8,0.179,bundle,2024-11-04\r\n16539,2184,APAC,sports,online,26.13,6,0.051,bundle,2024-04-22\r\n16540,1778,LATAM,grocery,mobile,41.23,8,0.173,coupon,2024-01-17\r\n16541,1095,APAC,toys,retail,93.07,3,0.016,none,2024-05-23\r\n16542,2010,APAC,electronics,online,39.92,1,0.019,none,2024-06-14\r\n16543,1523,LATAM,fashion,online,57.68,7,0.157,none,2024-02-09\r\n16544,2102,APAC,home,online,40.08,2,0.247,bundle,2024-04-10\r\n16545,1079,LATAM,home,online,25.75,2,0.109,none,2024-08-12\r\n16546,2157,AMER,home,mobile,116.22,6,0.143,none,2024-12-13\r\n16547,1780,APAC,grocery,mobile,38.08,5,0.226,coupon,2024-10-02\r\n16548,1325,APAC,grocery,retail,60.59,1,0.135,none,2024-08-20\r\n16549,1063,AMER,toys,retail,50.60,4,0.172,none,2024-02-24\r\n16550,1380,AMER,home,mobile,51.97,7,0.220,none,2024-12-16\r\n16551,1193,APAC,home,online,180.29,7,0.203,none,2024-11-09\r\n16552,1991,APAC,electronics,online,31.41,7,0.218,loyalty,2024-01-16\r\n16553,2209,AMER,grocery,retail,89.37,4,0.044,none,2024-11-26\r\n16554,2449,LATAM,electronics,online,46.79,1,0.016,none,2024-04-05\r\n16555,1470,LATAM,toys,online,180.38,2,0.162,loyalty,2024-06-25\r\n16556,1480,APAC,home,retail,36.55,4,0.059,none,2024-08-07\r\n16557,1706,EMEA,grocery,partner,75.84,2,0.116,none,2024-10-04\r\n16558,1440,AMER,grocery,online,60.65,2,0.025,loyalty,2024-03-05\r\n16559,1353,EMEA,electronics,online,31.29,7,0.170,none,2024-12-14\r\n16560,1432,APAC,sports,retail,59.99,1,0.070,coupon,2024-02-15\r\n16561,1603,EMEA,home,online,140.78,4,0.177,none,2024-11-09\r\n16562,1802,AMER,sports,online,83.23,3,0.011,none,2024-07-25\r\n
16563,1029,EMEA,electronics,retail,39.58,2,0.034,none,2024-08-09\r\n16564,1799,EMEA,grocery,mobile,57.98,7,0.082,none,2024-06-04\r\n16565,2172,EMEA,home,online,76.64,6,0.022,bundle,2024-05-22\r\n16566,2362,AMER,home,retail,71.71,4,0.175,coupon,2024-01-09\r\n16567,2462,EMEA,fashion,online,25.73,2,0.103,none,2024-07-26\r\n16568,1839,APAC,toys,online,56.43,5,0.049,none,2024-03-28\r\n16569,1028,EMEA,toys,online,117.93,4,0.177,none,2024-09-06\r\n16570,2208,AMER,home,retail,73.65,4,0.108,none,2024-02-10\r\n16571,1533,APAC,toys,mobile,29.34,7,0.070,none,2024-11-24\r\n16572,1354,AMER,fashion,online,35.46,4,0.181,bundle,2024-10-20\r\n16573,1108,EMEA,grocery,online,36.72,2,0.081,none,2024-05-24\r\n16574,1653,APAC,grocery,mobile,41.13,5,0.113,none,2024-04-14\r\n16575,1745,APAC,fashion,retail,94.38,6,0.161,none,2024-07-26\r\n16576,1955,AMER,grocery,retail,47.95,1,0.109,none,2024-09-02\r\n16577,1719,LATAM,toys,mobile,66.85,7,0.024,loyalty,2024-01-16\r\n16578,1878,EMEA,grocery,retail,94.59,1,0.221,none,2024-06-05\r\n16579,1338,EMEA,grocery,online,67.24,5,0.020,bundle,2024-04-09\r\n16580,2340,EMEA,grocery,online,69.38,6,0.027,none,2024-01-08\r\n16581,1033,APAC,toys,mobile,59.71,5,0.076,coupon,2024-07-24\r\n16582,1625,EMEA,grocery,mobile,25.05,7,0.065,bundle,2024-02-02\r\n16583,1223,LATAM,grocery,mobile,23.97,7,0.233,coupon,2024-08-22\r\n16584,1056,LATAM,grocery,retail,95.95,1,0.039,bundle,2024-06-13\r\n16585,1534,EMEA,electronics,retail,55.44,4,0.115,coupon,2024-12-05\r\n16586,1791,LATAM,sports,online,71.97,2,0.083,none,2024-04-12\r\n16587,2494,AMER,grocery,retail,62.17,2,0.042,bundle,2024-08-28\r\n16588,1457,EMEA,grocery,online,37.63,2,0.005,coupon,2024-09-15\r\n16589,1608,AMER,home,mobile,80.17,4,0.113,coupon,2024-01-08\r\n16590,1634,AMER,toys,online,23.04,4,0.031,none,2024-06-23\r\n16591,1191,EMEA,sports,retail,64.56,5,0.039,none,2024-10-07\r\n16592,1037,EMEA,grocery,mobile,100.10,8,0.073,none,2024-02-07\r\n16593,1031,AMER,sports,retail,40.95,4,0.088,bundle,2024-01-13\r\n16594,
1154,LATAM,fashion,retail,37.20,8,0.111,none,2024-04-11\r\n16595,1074,LATAM,grocery,retail,31.35,6,0.086,none,2024-01-16\r\n16596,1824,LATAM,electronics,online,37.25,1,0.175,none,2024-02-13\r\n16597,1403,APAC,grocery,retail,73.12,2,0.156,none,2024-09-27\r\n16598,1127,EMEA,home,online,37.03,4,0.241,none,2024-05-27\r\n16599,1069,APAC,sports,online,63.77,3,0.034,coupon,2024-08-22\r\n16600,2322,AMER,home,online,41.53,3,0.166,none,2024-10-22\r\n16601,1569,APAC,grocery,online,64.72,6,0.211,coupon,2024-12-03\r\n16602,2476,APAC,electronics,online,64.57,2,0.070,none,2024-05-19\r\n16603,2418,AMER,fashion,online,105.18,2,0.077,none,2024-09-27\r\n16604,2091,LATAM,fashion,online,60.23,8,0.083,coupon,2024-03-06\r\n16605,2225,EMEA,toys,mobile,58.52,2,0.011,none,2024-04-07\r\n16606,1414,APAC,electronics,retail,23.34,3,0.236,none,2024-02-27\r\n16607,1889,APAC,home,retail,75.07,5,0.178,loyalty,2024-08-02\r\n16608,1752,APAC,home,online,59.36,3,0.242,bundle,2024-11-07\r\n16609,2045,LATAM,grocery,partner,20.97,6,0.072,none,2024-07-11\r\n16610,1678,LATAM,sports,online,26.39,6,0.076,loyalty,2024-10-17\r\n16611,1004,LATAM,grocery,retail,68.13,4,0.068,none,2024-12-08\r\n16612,1606,AMER,toys,online,57.47,3,0.146,none,2024-07-18\r\n16613,1703,AMER,toys,online,98.07,2,0.034,coupon,2024-07-11\r\n16614,1885,EMEA,grocery,retail,67.49,1,0.140,none,2024-03-02\r\n16615,2427,LATAM,sports,retail,32.66,6,0.063,none,2024-09-04\r\n16616,2049,LATAM,home,online,31.29,4,0.003,none,2024-08-12\r\n16617,2063,APAC,grocery,mobile,57.18,2,0.230,none,2024-06-28\r\n16618,1817,APAC,home,online,29.94,7,0.021,none,2024-01-09\r\n16619,2120,AMER,home,retail,64.24,1,0.072,coupon,2024-08-01\r\n16620,1468,AMER,fashion,retail,96.14,4,0.177,none,2024-10-05\r\n16621,1466,AMER,fashion,retail,36.64,6,0.220,none,2024-02-04\r\n16622,1841,AMER,electronics,retail,61.39,2,0.068,coupon,2024-01-22\r\n16623,1378,APAC,toys,online,76.39,8,0.112,none,2024-02-22\r\n16624,1850,APAC,home,mobile,35.54,7,0.116,none,2024-11-16\r\n16625,1145,AME
R,fashion,retail,103.74,3,0.030,loyalty,2024-06-11\r\n16626,1038,APAC,fashion,retail,76.70,4,0.130,none,2024-11-16\r\n16627,1471,EMEA,grocery,retail,50.53,4,0.064,loyalty,2024-03-19\r\n16628,1757,EMEA,toys,online,49.40,3,0.074,coupon,2024-04-14\r\n16629,1233,AMER,home,retail,140.94,7,0.237,none,2024-05-10\r\n16630,2477,APAC,electronics,mobile,80.05,2,0.122,bundle,2024-01-07\r\n16631,2436,LATAM,grocery,online,43.61,2,0.202,bundle,2024-02-20\r\n16632,1077,AMER,home,mobile,51.89,7,0.190,coupon,2024-05-20\r\n16633,2057,APAC,fashion,online,108.09,7,0.025,none,2024-02-12\r\n16634,2198,EMEA,electronics,retail,64.05,7,0.155,none,2024-08-21\r\n16635,1067,APAC,grocery,partner,40.35,8,0.208,none,2024-05-21\r\n16636,1390,APAC,fashion,online,67.64,2,0.039,none,2024-02-04\r\n16637,1009,APAC,electronics,online,75.28,2,0.179,none,2024-02-08\r\n16638,1868,AMER,grocery,online,180.17,8,0.161,none,2024-06-21\r\n16639,2485,AMER,grocery,retail,62.06,3,0.143,none,2024-02-18\r\n16640,1516,EMEA,home,partner,54.12,3,0.225,none,2024-09-18\r\n16641,1964,EMEA,electronics,retail,49.29,6,0.051,none,2024-03-08\r\n16642,1053,AMER,fashion,online,28.48,6,0.189,bundle,2024-06-17\r\n16643,1160,LATAM,toys,retail,54.19,1,0.006,bundle,2024-09-06\r\n16644,2445,APAC,electronics,retail,50.68,2,0.192,none,2024-11-01\r\n16645,1627,LATAM,toys,partner,31.20,4,0.235,none,2024-10-13\r\n16646,1684,EMEA,toys,online,113.70,8,0.121,coupon,2024-09-18\r\n16647,1716,LATAM,home,online,118.10,8,0.031,coupon,2024-11-01\r\n16648,2301,EMEA,electronics,online,202.52,8,0.002,none,2024-06-02\r\n16649,2340,EMEA,sports,mobile,48.70,4,0.021,bundle,2024-10-25\r\n16650,2118,AMER,toys,online,57.64,1,0.097,coupon,2024-05-06\r\n16651,1825,AMER,sports,retail,76.34,5,0.108,none,2024-06-21\r\n16652,2028,APAC,toys,online,26.14,5,0.185,coupon,2024-12-07\r\n16653,1549,APAC,electronics,online,30.39,4,0.056,none,2024-01-07\r\n16654,1527,AMER,sports,online,31.05,3,0.024,loyalty,2024-10-05\r\n16655,1245,APAC,fashion,online,35.78,5,0.239,coupon,20
24-10-24\r\n16656,2137,LATAM,toys,online,31.78,2,0.127,none,2024-05-15\r\n16657,1308,EMEA,grocery,online,20.76,7,0.057,none,2024-05-06\r\n16658,2052,LATAM,home,online,45.40,4,0.186,none,2024-07-19\r\n16659,2199,LATAM,grocery,online,50.20,8,0.114,coupon,2024-07-28\r\n16660,1475,LATAM,fashion,online,47.83,2,0.029,none,2024-03-23\r\n16661,1881,LATAM,electronics,mobile,65.84,1,0.053,coupon,2024-07-04\r\n16662,1361,LATAM,electronics,online,33.62,2,0.061,none,2024-10-27\r\n16663,2373,LATAM,home,online,63.02,6,0.246,loyalty,2024-12-27\r\n16664,2046,APAC,electronics,retail,65.53,4,0.122,none,2024-11-20\r\n16665,2175,AMER,grocery,online,27.51,1,0.179,none,2024-11-05\r\n16666,1360,APAC,grocery,online,68.68,8,0.007,coupon,2024-09-05\r\n16667,1099,LATAM,electronics,mobile,68.45,6,0.109,none,2024-01-01\r\n16668,1139,EMEA,toys,partner,17.27,1,0.054,coupon,2024-08-15\r\n16669,1781,LATAM,electronics,online,58.95,2,0.174,none,2024-05-23\r\n16670,1723,LATAM,grocery,online,51.90,5,0.021,none,2024-04-27\r\n16671,1495,LATAM,grocery,online,73.66,8,0.182,coupon,2024-03-19\r\n16672,1341,EMEA,home,mobile,56.89,4,0.050,loyalty,2024-01-06\r\n16673,2013,APAC,grocery,online,83.83,1,0.163,none,2024-06-10\r\n16674,2006,APAC,electronics,retail,60.95,7,0.063,none,2024-04-01\r\n16675,1719,LATAM,electronics,online,33.45,3,0.017,none,2024-10-19\r\n16676,2093,LATAM,fashion,online,52.36,7,0.047,none,2024-10-01\r\n16677,1010,EMEA,grocery,online,84.83,2,0.160,none,2024-10-19\r\n16678,2200,LATAM,home,retail,60.07,3,0.096,coupon,2024-03-18\r\n16679,2121,APAC,toys,online,99.53,8,0.043,none,2024-04-07\r\n16680,1082,EMEA,grocery,mobile,123.50,7,0.235,none,2024-04-19\r\n16681,2466,APAC,grocery,mobile,41.89,3,0.048,loyalty,2024-08-16\r\n16682,1924,AMER,electronics,online,52.81,1,0.193,none,2024-06-18\r\n16683,2013,APAC,fashion,online,87.05,5,0.248,none,2024-04-20\r\n16684,2460,AMER,grocery,online,120.99,8,0.119,none,2024-04-24\r\n16685,2128,EMEA,sports,mobile,135.67,2,0.060,loyalty,2024-11-09\r\n16686,1469,EMEA,
grocery,retail,24.70,7,0.081,coupon,2024-09-20\r\n16687,1397,LATAM,grocery,retail,81.20,2,0.116,none,2024-04-27\r\n16688,2270,APAC,home,partner,133.99,8,0.186,loyalty,2024-01-22\r\n16689,1623,AMER,grocery,online,19.79,5,0.005,none,2024-07-20\r\n16690,1161,AMER,grocery,online,33.31,6,0.041,coupon,2024-08-05\r\n16691,1964,EMEA,sports,retail,33.07,3,0.215,coupon,2024-01-15\r\n16692,2292,EMEA,sports,online,54.80,7,0.225,none,2024-08-20\r\n16693,1050,AMER,grocery,retail,133.44,4,0.078,bundle,2024-02-11\r\n16694,1827,EMEA,fashion,retail,57.24,1,0.026,none,2024-05-26\r\n16695,1423,EMEA,grocery,retail,41.16,4,0.135,none,2024-10-14\r\n16696,1046,EMEA,grocery,mobile,220.82,1,0.215,bundle,2024-02-27\r\n16697,1743,LATAM,toys,online,111.11,3,0.227,coupon,2024-02-01\r\n16698,1826,LATAM,toys,retail,110.67,2,0.113,bundle,2024-06-10\r\n16699,1854,AMER,toys,online,101.73,5,0.090,bundle,2024-03-27\r\n16700,1135,APAC,electronics,online,41.21,6,0.127,bundle,2024-03-09\r\n16701,1898,EMEA,grocery,retail,87.62,8,0.076,none,2024-01-12\r\n16702,1852,AMER,electronics,online,78.78,1,0.107,none,2024-02-04\r\n16703,1059,AMER,electronics,partner,46.89,7,0.076,none,2024-09-26\r\n16704,2245,APAC,toys,retail,131.57,1,0.225,none,2024-08-22\r\n16705,1046,EMEA,toys,online,124.18,1,0.048,loyalty,2024-10-24\r\n16706,1364,EMEA,sports,online,22.86,7,0.126,coupon,2024-03-01\r\n16707,1069,APAC,sports,retail,34.73,6,0.084,bundle,2024-04-20\r\n16708,1752,APAC,toys,retail,98.28,4,0.167,coupon,2024-08-28\r\n16709,2118,AMER,grocery,retail,50.92,6,0.108,none,2024-01-14\r\n16710,2317,LATAM,fashion,online,78.89,5,0.080,none,2024-12-19\r\n16711,2213,APAC,grocery,mobile,66.30,6,0.187,bundle,2024-10-02\r\n16712,1236,AMER,electronics,retail,60.17,2,0.109,coupon,2024-09-13\r\n16713,1976,AMER,electronics,retail,87.80,8,0.134,none,2024-09-22\r\n16714,1067,APAC,home,partner,85.15,2,0.061,coupon,2024-03-18\r\n16715,1595,AMER,grocery,mobile,77.62,4,0.170,none,2024-11-20\r\n16716,1230,EMEA,fashion,online,107.24,4,0.130,none,20
24-04-22\r\n16717,1641,EMEA,electronics,online,52.14,2,0.248,coupon,2024-05-06\r\n16718,2039,EMEA,electronics,online,214.16,1,0.159,coupon,2024-03-04\r\n16719,2147,LATAM,grocery,retail,38.30,4,0.208,none,2024-08-23\r\n16720,2027,EMEA,grocery,online,74.33,2,0.176,coupon,2024-07-06\r\n16721,2141,AMER,grocery,online,56.70,2,0.147,coupon,2024-02-23\r\n16722,2416,LATAM,fashion,retail,205.72,3,0.190,none,2024-08-28\r\n16723,2117,EMEA,grocery,online,40.13,4,0.156,loyalty,2024-01-02\r\n16724,1752,APAC,home,retail,30.82,8,0.011,bundle,2024-12-25\r\n16725,1378,APAC,grocery,online,61.84,8,0.022,coupon,2024-07-05\r\n16726,1482,AMER,electronics,retail,63.72,5,0.145,coupon,2024-07-13\r\n16727,2149,EMEA,home,online,65.06,6,0.218,coupon,2024-03-02\r\n16728,1714,APAC,sports,online,112.67,8,0.060,loyalty,2024-08-10\r\n16729,1324,LATAM,electronics,retail,85.82,4,0.170,loyalty,2024-07-09\r\n16730,1173,LATAM,electronics,mobile,37.57,1,0.045,coupon,2024-06-26\r\n16731,1354,AMER,electronics,retail,45.91,4,0.024,none,2024-03-08\r\n16732,2108,AMER,electronics,retail,30.79,4,0.235,coupon,2024-03-09\r\n16733,1196,APAC,grocery,retail,91.28,8,0.020,none,2024-06-06\r\n16734,1081,AMER,home,online,84.16,8,0.113,none,2024-12-10\r\n16735,1756,EMEA,sports,partner,57.95,5,0.133,none,2024-10-07\r\n16736,2052,LATAM,home,mobile,33.95,1,0.139,none,2024-10-23\r\n16737,1659,APAC,grocery,online,83.43,3,0.136,none,2024-04-02\r\n16738,2028,APAC,electronics,retail,45.00,2,0.208,none,2024-02-04\r\n16739,1808,APAC,fashion,online,39.63,3,0.086,coupon,2024-03-20\r\n16740,2269,EMEA,fashion,retail,78.76,4,0.051,coupon,2024-10-07\r\n16741,2138,APAC,electronics,online,52.55,4,0.068,none,2024-05-19\r\n16742,1086,AMER,home,retail,123.50,6,0.100,none,2024-07-12\r\n16743,1467,LATAM,grocery,retail,61.38,1,0.074,coupon,2024-09-07\r\n16744,2232,EMEA,electronics,online,98.37,4,0.178,none,2024-10-04\r\n16745,2076,AMER,grocery,online,42.97,8,0.080,bundle,2024-01-15\r\n16746,2059,AMER,electronics,retail,37.65,4,0.078,none,2024-09
-12\r\n16747,1909,APAC,electronics,online,211.89,6,0.218,bundle,2024-05-08\r\n16748,1261,APAC,grocery,retail,140.04,5,0.085,none,2024-02-20\r\n16749,1913,LATAM,home,online,61.51,8,0.212,none,2024-03-08\r\n16750,1116,LATAM,sports,online,37.98,2,0.023,none,2024-10-09\r\n16751,1068,APAC,home,retail,90.21,2,0.054,none,2024-06-20\r\n16752,1361,LATAM,fashion,online,49.13,4,0.003,loyalty,2024-02-01\r\n16753,1013,LATAM,toys,retail,43.71,8,0.049,none,2024-01-05\r\n16754,2083,LATAM,sports,online,56.62,6,0.205,coupon,2024-07-27\r\n16755,1957,AMER,fashion,online,32.55,2,0.135,none,2024-05-01\r\n16756,1562,AMER,fashion,retail,16.73,3,0.018,none,2024-02-02\r\n16757,1115,AMER,home,online,165.65,5,0.036,none,2024-05-05\r\n16758,1345,AMER,grocery,online,40.10,8,0.020,none,2024-06-14\r\n16759,1276,AMER,fashion,mobile,21.81,6,0.095,none,2024-10-23\r\n16760,2471,APAC,fashion,online,74.30,5,0.089,none,2024-03-18\r\n16761,1213,EMEA,home,mobile,42.86,5,0.169,bundle,2024-12-09\r\n16762,1487,AMER,fashion,mobile,54.33,1,0.245,none,2024-10-15\r\n16763,1044,EMEA,home,online,42.43,6,0.194,none,2024-08-15\r\n16764,1438,APAC,toys,retail,121.09,6,0.189,none,2024-01-26\r\n16765,1458,APAC,sports,online,41.76,1,0.228,none,2024-12-25\r\n16766,2150,APAC,electronics,online,34.92,3,0.005,none,2024-02-20\r\n16767,1332,APAC,grocery,online,38.82,5,0.086,none,2024-06-21\r\n16768,1827,EMEA,home,online,22.00,6,0.034,none,2024-06-26\r\n16769,1770,AMER,sports,online,110.43,4,0.241,none,2024-02-24\r\n16770,1669,AMER,grocery,online,70.58,8,0.250,coupon,2024-03-26\r\n16771,1534,EMEA,grocery,online,105.13,1,0.039,none,2024-07-21\r\n16772,1038,APAC,home,retail,83.02,2,0.189,none,2024-05-19\r\n16773,1197,LATAM,sports,online,45.73,2,0.117,none,2024-06-12\r\n16774,1310,AMER,home,online,21.96,2,0.045,none,2024-07-09\r\n16775,1034,EMEA,home,retail,23.04,2,0.138,coupon,2024-11-04\r\n16776,1012,LATAM,grocery,mobile,57.17,1,0.184,coupon,2024-05-22\r\n16777,1284,APAC,electronics,retail,54.13,4,0.119,coupon,2024-03-28\r\n16778
,2018,AMER,sports,online,103.63,2,0.197,coupon,2024-02-04\r\n16779,1644,EMEA,grocery,mobile,48.77,6,0.135,bundle,2024-03-14\r\n16780,1529,LATAM,grocery,online,53.33,5,0.124,bundle,2024-12-03\r\n16781,1837,LATAM,home,retail,38.05,6,0.213,loyalty,2024-08-11\r\n16782,2191,AMER,grocery,retail,83.70,5,0.011,coupon,2024-04-16\r\n16783,2283,AMER,sports,online,80.30,1,0.138,none,2024-09-13\r\n16784,1240,EMEA,toys,retail,44.42,7,0.046,none,2024-09-09\r\n16785,1692,LATAM,electronics,online,99.78,8,0.207,none,2024-12-18\r\n16786,2367,AMER,fashion,online,49.65,6,0.147,coupon,2024-03-17\r\n16787,1387,AMER,fashion,mobile,49.01,5,0.040,none,2024-11-28\r\n16788,1084,AMER,electronics,online,45.09,5,0.065,none,2024-02-23\r\n16789,1416,EMEA,grocery,mobile,102.78,2,0.190,bundle,2024-03-27\r\n16790,2300,EMEA,sports,mobile,37.69,3,0.005,none,2024-10-14\r\n16791,1190,EMEA,sports,mobile,102.45,8,0.088,none,2024-10-03\r\n16792,2157,AMER,fashion,online,114.82,1,0.209,coupon,2024-12-07\r\n16793,1561,EMEA,electronics,online,95.31,6,0.029,none,2024-06-24\r\n16794,1149,LATAM,electronics,retail,99.84,7,0.104,none,2024-11-08\r\n16795,2499,LATAM,grocery,retail,33.81,1,0.191,coupon,2024-07-05\r\n16796,1047,APAC,home,retail,54.79,2,0.140,none,2024-02-13\r\n16797,1060,LATAM,grocery,retail,42.69,2,0.149,none,2024-10-21\r\n16798,2391,EMEA,fashion,online,37.11,6,0.116,none,2024-02-05\r\n16799,1012,LATAM,toys,online,93.64,6,0.107,bundle,2024-02-17\r\n16800,1624,AMER,home,online,49.04,4,0.093,none,2024-03-05\r\n16801,1712,LATAM,electronics,online,59.57,3,0.160,none,2024-07-21\r\n16802,1640,APAC,grocery,online,57.73,5,0.129,coupon,2024-11-28\r\n16803,2135,EMEA,electronics,retail,60.67,8,0.085,none,2024-10-13\r\n16804,1470,LATAM,electronics,retail,42.92,4,0.206,bundle,2024-09-23\r\n16805,1711,APAC,sports,online,55.66,2,0.112,none,2024-05-23\r\n16806,2022,LATAM,fashion,retail,35.96,1,0.004,none,2024-11-15\r\n16807,1861,AMER,home,retail,70.29,4,0.001,none,2024-05-28\r\n16808,1091,EMEA,home,retail,59.93,5,0.039
,bundle,2024-08-24\r\n16809,2442,APAC,grocery,online,33.59,5,0.144,none,2024-08-09\r\n16810,1244,LATAM,toys,retail,68.62,8,0.137,bundle,2024-02-21\r\n16811,2381,AMER,home,online,105.16,6,0.198,bundle,2024-12-16\r\n16812,1166,AMER,fashion,online,47.91,7,0.091,coupon,2024-06-13\r\n16813,2156,AMER,electronics,retail,88.46,2,0.154,none,2024-02-08\r\n16814,2160,LATAM,fashion,online,140.03,2,0.027,loyalty,2024-08-14\r\n16815,1698,EMEA,electronics,online,22.96,3,0.239,none,2024-03-11\r\n16816,1085,EMEA,grocery,retail,65.55,1,0.240,bundle,2024-09-01\r\n16817,2184,APAC,electronics,online,81.60,6,0.007,none,2024-06-03\r\n16818,1872,LATAM,electronics,online,45.88,5,0.134,none,2024-11-16\r\n16819,2152,EMEA,fashion,online,87.36,4,0.225,bundle,2024-05-22\r\n16820,1069,APAC,home,online,162.61,7,0.124,none,2024-09-18\r\n16821,1027,APAC,home,online,118.41,7,0.140,none,2024-05-10\r\n16822,1939,LATAM,electronics,retail,61.83,7,0.095,bundle,2024-01-09\r\n16823,1908,AMER,grocery,mobile,70.14,8,0.211,none,2024-11-27\r\n16824,2390,AMER,home,mobile,41.91,6,0.206,loyalty,2024-01-14\r\n16825,2434,APAC,toys,online,54.62,8,0.090,none,2024-06-21\r\n16826,2241,APAC,grocery,online,34.98,8,0.011,loyalty,2024-02-19\r\n16827,1201,LATAM,sports,retail,43.68,3,0.040,none,2024-01-19\r\n16828,2134,AMER,grocery,online,33.92,3,0.037,none,2024-09-02\r\n16829,2326,LATAM,fashion,retail,99.40,3,0.034,none,2024-04-03\r\n16830,1519,APAC,electronics,online,79.94,5,0.116,coupon,2024-06-14\r\n16831,1947,EMEA,electronics,retail,63.66,8,0.169,coupon,2024-07-10\r\n16832,1131,APAC,grocery,partner,75.13,6,0.245,coupon,2024-09-21\r\n16833,1593,AMER,home,retail,46.79,4,0.222,none,2024-06-28\r\n16834,1805,EMEA,home,mobile,54.95,2,0.150,bundle,2024-05-13\r\n16835,1200,EMEA,grocery,mobile,26.27,2,0.249,none,2024-06-11\r\n16836,1802,AMER,grocery,online,71.69,2,0.214,none,2024-08-12\r\n16837,1112,APAC,fashion,retail,214.62,5,0.125,coupon,2024-06-17\r\n16838,1698,EMEA,home,retail,27.79,4,0.018,none,2024-06-06\r\n16839,1044,EMEA
,electronics,online,77.33,1,0.024,coupon,2024-02-19\r\n16840,1064,AMER,electronics,retail,81.75,5,0.199,bundle,2024-01-22\r\n16841,1970,LATAM,electronics,online,44.17,3,0.156,none,2024-08-21\r\n16842,1392,AMER,grocery,partner,36.63,3,0.243,coupon,2024-09-26\r\n16843,1326,AMER,electronics,online,107.09,3,0.034,loyalty,2024-12-07\r\n16844,1695,LATAM,grocery,online,69.79,6,0.090,none,2024-06-03\r\n16845,1897,AMER,grocery,mobile,39.05,6,0.137,bundle,2024-05-27\r\n16846,1521,LATAM,toys,mobile,40.97,2,0.198,none,2024-09-22\r\n16847,1472,AMER,grocery,partner,43.89,8,0.217,coupon,2024-06-13\r\n16848,2278,APAC,home,mobile,60.12,4,0.107,bundle,2024-05-17\r\n16849,2107,APAC,fashion,retail,44.65,5,0.053,none,2024-07-13\r\n16850,1238,AMER,fashion,online,95.82,1,0.247,loyalty,2024-06-02\r\n16851,1600,AMER,sports,online,58.68,1,0.043,none,2024-06-02\r\n16852,1039,AMER,home,online,54.59,3,0.241,none,2024-01-21\r\n16853,1669,AMER,electronics,retail,73.49,4,0.204,none,2024-05-16\r\n16854,2037,LATAM,sports,online,59.15,4,0.116,bundle,2024-06-09\r\n16855,2032,AMER,grocery,retail,41.68,8,0.016,none,2024-05-18\r\n16856,1236,AMER,grocery,online,41.44,3,0.105,bundle,2024-07-20\r\n16857,1481,LATAM,fashion,online,32.36,2,0.107,coupon,2024-12-14\r\n16858,1597,APAC,electronics,mobile,28.81,4,0.154,none,2024-01-27\r\n16859,1170,AMER,toys,retail,63.56,6,0.019,loyalty,2024-03-14\r\n16860,1089,LATAM,home,online,85.45,4,0.230,bundle,2024-12-24\r\n16861,2119,AMER,sports,online,122.65,1,0.207,none,2024-10-07\r\n16862,2024,AMER,grocery,online,39.51,7,0.029,none,2024-09-05\r\n16863,2130,EMEA,grocery,online,68.78,7,0.048,none,2024-08-01\r\n16864,1364,EMEA,grocery,mobile,99.92,2,0.195,bundle,2024-08-11\r\n16865,1196,APAC,sports,retail,88.27,1,0.111,none,2024-01-04\r\n16866,1244,LATAM,home,online,28.81,8,0.235,none,2024-04-11\r\n16867,1819,AMER,grocery,retail,132.23,7,0.077,bundle,2024-11-08\r\n16868,2388,LATAM,electronics,online,46.73,2,0.235,coupon,2024-01-02\r\n16869,1179,APAC,home,retail,46.40,4,0.104
,none,2024-05-08\r\n16870,1338,EMEA,grocery,online,29.08,7,0.105,coupon,2024-09-21\r\n16871,1194,APAC,toys,online,100.98,6,0.139,none,2024-05-07\r\n16872,1120,LATAM,fashion,online,81.28,4,0.232,coupon,2024-02-09\r\n16873,1002,EMEA,grocery,retail,57.75,4,0.177,none,2024-03-02\r\n16874,2047,AMER,sports,online,40.34,4,0.070,none,2024-12-16\r\n16875,1470,LATAM,toys,mobile,77.49,3,0.234,none,2024-08-13\r\n16876,1354,AMER,fashion,online,97.72,8,0.220,none,2024-10-20\r\n16877,2268,EMEA,fashion,retail,57.16,5,0.172,none,2024-04-23\r\n16878,1963,AMER,home,online,91.81,3,0.022,none,2024-03-25\r\n16879,1048,EMEA,grocery,online,79.46,8,0.183,none,2024-03-15\r\n16880,2403,LATAM,electronics,retail,80.72,8,0.154,coupon,2024-03-17\r\n16881,2171,EMEA,home,retail,129.01,4,0.110,none,2024-08-03\r\n16882,1157,LATAM,grocery,retail,63.51,4,0.157,none,2024-06-17\r\n16883,1259,EMEA,grocery,online,48.85,8,0.120,loyalty,2024-03-04\r\n16884,2397,LATAM,home,mobile,46.01,7,0.096,coupon,2024-08-04\r\n16885,1313,EMEA,electronics,retail,34.42,5,0.239,loyalty,2024-01-20\r\n16886,2391,EMEA,home,online,46.62,8,0.030,none,2024-02-26\r\n16887,1231,AMER,grocery,online,34.32,1,0.181,coupon,2024-02-23\r\n16888,2088,EMEA,electronics,mobile,77.25,4,0.084,coupon,2024-09-15\r\n16889,1986,LATAM,grocery,retail,37.09,4,0.174,none,2024-07-07\r\n16890,2317,LATAM,electronics,online,104.77,6,0.012,bundle,2024-09-27\r\n16891,1449,EMEA,home,online,35.23,1,0.195,coupon,2024-12-06\r\n16892,1690,LATAM,sports,online,39.05,8,0.162,bundle,2024-07-21\r\n16893,1194,APAC,electronics,online,36.64,6,0.240,none,2024-10-15\r\n16894,1970,LATAM,grocery,retail,51.99,4,0.021,coupon,2024-09-25\r\n16895,2137,LATAM,grocery,online,45.98,4,0.114,none,2024-11-25\r\n16896,1583,AMER,electronics,retail,40.94,2,0.143,coupon,2024-05-22\r\n16897,1216,APAC,grocery,retail,37.44,7,0.066,none,2024-09-22\r\n16898,1993,APAC,electronics,retail,118.65,1,0.213,bundle,2024-01-23\r\n16899,1849,EMEA,home,retail,61.09,8,0.164,loyalty,2024-12-03\r\n16900,1477,
APAC,grocery,mobile,55.15,2,0.228,coupon,2024-07-17\r\n16901,1130,LATAM,toys,online,52.62,6,0.049,bundle,2024-08-17\r\n16902,1123,LATAM,electronics,online,36.41,8,0.092,loyalty,2024-07-12\r\n16903,1663,LATAM,home,online,47.33,2,0.090,none,2024-04-26\r\n16904,2060,LATAM,electronics,online,68.91,2,0.119,none,2024-05-11\r\n16905,2007,LATAM,grocery,online,80.67,6,0.145,none,2024-08-26\r\n16906,1201,LATAM,home,online,110.22,6,0.109,none,2024-07-16\r\n16907,1370,APAC,fashion,online,55.93,4,0.161,none,2024-08-08\r\n16908,2457,EMEA,electronics,online,46.09,5,0.024,none,2024-01-20\r\n16909,2257,AMER,home,mobile,97.63,2,0.058,none,2024-05-25\r\n16910,1365,LATAM,sports,online,73.87,7,0.084,none,2024-12-26\r\n16911,2107,APAC,fashion,partner,50.23,6,0.139,loyalty,2024-01-25\r\n16912,1902,AMER,sports,online,89.24,8,0.177,none,2024-08-18\r\n16913,1129,LATAM,grocery,online,28.28,4,0.172,none,2024-07-15\r\n16914,1948,EMEA,sports,mobile,80.15,8,0.002,loyalty,2024-03-17\r\n16915,1025,EMEA,grocery,retail,67.20,6,0.127,none,2024-05-11\r\n16916,1241,APAC,toys,online,50.18,5,0.029,bundle,2024-03-12\r\n16917,1128,LATAM,home,online,32.97,8,0.051,coupon,2024-12-17\r\n16918,1851,EMEA,toys,mobile,76.26,1,0.180,loyalty,2024-09-07\r\n16919,1105,AMER,toys,partner,96.44,8,0.006,none,2024-04-17\r\n16920,2274,APAC,fashion,online,22.12,5,0.164,bundle,2024-05-11\r\n16921,1758,AMER,toys,online,178.34,3,0.169,coupon,2024-02-22\r\n16922,1125,LATAM,electronics,retail,45.12,1,0.234,coupon,2024-12-13\r\n16923,1799,EMEA,grocery,online,28.12,1,0.155,none,2024-01-19\r\n16924,1340,LATAM,home,retail,59.68,1,0.038,none,2024-08-15\r\n16925,1206,EMEA,home,online,42.32,7,0.214,none,2024-09-09\r\n16926,1547,AMER,electronics,retail,20.48,2,0.162,none,2024-08-06\r\n16927,1129,LATAM,grocery,online,85.56,7,0.160,bundle,2024-08-01\r\n16928,1823,EMEA,grocery,retail,33.15,4,0.077,none,2024-03-04\r\n16929,1784,EMEA,electronics,retail,73.11,5,0.160,none,2024-11-05\r\n16930,1027,APAC,toys,online,57.81,1,0.024,none,2024-06-12\r
\n16931,2068,LATAM,electronics,retail,36.02,2,0.064,none,2024-06-17\r\n16932,2109,EMEA,fashion,mobile,93.74,1,0.003,coupon,2024-07-28\r\n16933,1626,EMEA,grocery,mobile,64.70,7,0.237,none,2024-10-18\r\n16934,1019,APAC,home,online,38.10,4,0.208,none,2024-07-18\r\n16935,1666,LATAM,grocery,online,77.89,3,0.245,loyalty,2024-01-26\r\n16936,1680,LATAM,grocery,retail,69.68,8,0.212,bundle,2024-09-14\r\n16937,1297,AMER,home,online,57.93,1,0.164,coupon,2024-07-02\r\n16938,2077,APAC,toys,retail,38.36,6,0.008,coupon,2024-02-19\r\n16939,2310,EMEA,grocery,retail,47.01,3,0.231,none,2024-02-02\r\n16940,2143,AMER,fashion,online,30.15,3,0.159,coupon,2024-08-16\r\n16941,2399,LATAM,electronics,retail,43.13,4,0.142,bundle,2024-04-05\r\n16942,1180,AMER,toys,retail,57.41,3,0.196,none,2024-08-17\r\n16943,2440,APAC,electronics,online,93.13,3,0.011,none,2024-06-04\r\n16944,2105,APAC,toys,online,69.81,4,0.006,coupon,2024-10-18\r\n16945,1659,APAC,toys,partner,32.51,2,0.110,loyalty,2024-07-01\r\n16946,1225,APAC,electronics,retail,48.04,3,0.137,none,2024-05-05\r\n16947,2417,LATAM,toys,mobile,96.17,7,0.244,none,2024-01-14\r\n16948,1385,LATAM,electronics,mobile,26.38,8,0.066,none,2024-08-18\r\n16949,2088,EMEA,toys,mobile,87.78,6,0.187,loyalty,2024-08-18\r\n16950,1741,AMER,home,retail,26.98,1,0.037,bundle,2024-06-19\r\n16951,2224,EMEA,toys,retail,36.50,1,0.048,none,2024-08-15\r\n16952,2342,AMER,home,retail,106.66,2,0.216,coupon,2024-12-22\r\n16953,2439,AMER,fashion,online,113.41,4,0.158,coupon,2024-05-01\r\n16954,2408,EMEA,fashion,mobile,70.40,7,0.220,bundle,2024-05-02\r\n16955,2253,AMER,grocery,retail,36.13,7,0.107,bundle,2024-04-05\r\n16956,1755,APAC,electronics,retail,35.65,3,0.214,none,2024-05-24\r\n16957,1550,APAC,sports,online,57.51,4,0.077,bundle,2024-03-14\r\n16958,1120,LATAM,grocery,online,59.51,2,0.071,none,2024-07-28\r\n16959,1677,EMEA,fashion,retail,94.86,8,0.069,none,2024-07-08\r\n16960,1476,APAC,grocery,retail,39.77,3,0.130,none,2024-06-11\r\n16961,2391,EMEA,electronics,retail,36.40,2,
0.066,none,2024-01-19\r\n16962,2416,LATAM,electronics,mobile,36.32,3,0.144,bundle,2024-03-26\r\n16963,1368,EMEA,toys,online,46.23,1,0.214,none,2024-07-03\r\n16964,1689,LATAM,home,retail,69.87,7,0.015,none,2024-10-04\r\n16965,2149,EMEA,fashion,mobile,50.07,8,0.051,none,2024-08-24\r\n16966,1233,AMER,toys,online,61.18,4,0.012,loyalty,2024-05-16\r\n16967,1723,LATAM,grocery,retail,32.31,2,0.049,loyalty,2024-08-24\r\n16968,2342,AMER,electronics,online,46.32,8,0.249,none,2024-08-24\r\n16969,2312,APAC,electronics,mobile,59.36,7,0.090,bundle,2024-02-04\r\n16970,1535,AMER,fashion,retail,44.53,2,0.193,coupon,2024-03-23\r\n16971,2123,AMER,home,retail,76.60,2,0.025,none,2024-12-07\r\n16972,1515,EMEA,home,online,36.73,1,0.245,none,2024-12-18\r\n16973,1087,AMER,grocery,online,58.52,7,0.166,none,2024-08-08\r\n16974,1871,APAC,sports,online,40.82,5,0.047,none,2024-10-20\r\n16975,1460,LATAM,fashion,retail,73.29,6,0.111,none,2024-09-10\r\n16976,1874,LATAM,sports,mobile,70.71,4,0.055,none,2024-03-19\r\n16977,1015,AMER,grocery,retail,31.02,7,0.149,coupon,2024-09-15\r\n16978,2016,LATAM,fashion,online,37.26,1,0.087,none,2024-02-03\r\n16979,1989,LATAM,toys,retail,66.63,5,0.243,none,2024-11-08\r\n16980,2134,AMER,toys,retail,135.81,7,0.190,loyalty,2024-04-22\r\n16981,1536,LATAM,sports,online,68.30,3,0.205,coupon,2024-02-13\r\n16982,1582,AMER,electronics,mobile,82.64,3,0.245,coupon,2024-06-25\r\n16983,1176,EMEA,sports,retail,78.74,7,0.058,none,2024-08-16\r\n16984,1609,LATAM,toys,retail,80.66,4,0.070,none,2024-04-25\r\n16985,1720,AMER,grocery,retail,56.66,2,0.015,loyalty,2024-04-17\r\n16986,1413,LATAM,toys,online,66.11,8,0.094,coupon,2024-12-22\r\n16987,2426,AMER,toys,retail,77.57,4,0.030,none,2024-08-09\r\n16988,1425,EMEA,electronics,online,109.31,5,0.125,bundle,2024-01-03\r\n16989,1315,AMER,fashion,online,110.20,2,0.168,coupon,2024-07-25\r\n16990,1367,AMER,electronics,online,64.43,8,0.130,loyalty,2024-02-22\r\n16991,2266,LATAM,home,retail,44.48,1,0.235,none,2024-02-06\r\n16992,1966,APAC,sport
s,mobile,93.83,1,0.089,coupon,2024-08-09\r\n16993,1458,APAC,electronics,online,42.07,4,0.045,none,2024-11-02\r\n16994,1395,APAC,grocery,online,93.44,1,0.043,none,2024-05-27\r\n16995,1058,LATAM,home,partner,34.05,5,0.035,coupon,2024-10-20\r\n16996,1042,LATAM,grocery,online,53.99,1,0.091,loyalty,2024-04-21\r\n16997,2330,EMEA,home,retail,37.11,6,0.004,coupon,2024-10-22\r\n16998,1049,AMER,grocery,mobile,29.74,3,0.091,none,2024-07-10\r\n16999,1638,EMEA,grocery,mobile,73.16,7,0.102,none,2024-02-06\r\n17000,1453,APAC,fashion,retail,69.81,2,0.246,none,2024-10-19\r\n17001,1686,LATAM,grocery,retail,39.78,3,0.029,coupon,2024-10-11\r\n17002,1516,EMEA,electronics,online,56.71,5,0.179,none,2024-10-06\r\n17003,1845,AMER,electronics,retail,52.62,4,0.083,loyalty,2024-07-08\r\n17004,1612,LATAM,grocery,online,70.41,7,0.038,coupon,2024-06-19\r\n17005,1480,APAC,home,online,57.51,7,0.184,loyalty,2024-03-17\r\n17006,2473,EMEA,sports,retail,29.97,6,0.237,bundle,2024-10-20\r\n17007,1367,AMER,grocery,retail,30.16,2,0.182,bundle,2024-10-17\r\n17008,1024,APAC,fashion,mobile,65.46,8,0.136,coupon,2024-08-18\r\n17009,1473,LATAM,electronics,online,60.67,1,0.207,none,2024-06-25\r\n17010,1997,APAC,home,online,56.25,3,0.136,none,2024-12-06\r\n17011,2009,LATAM,sports,online,49.18,3,0.004,none,2024-07-20\r\n17012,2326,LATAM,home,retail,21.91,2,0.056,loyalty,2024-08-06\r\n17013,1254,APAC,grocery,retail,71.07,1,0.120,coupon,2024-04-27\r\n17014,1979,APAC,sports,online,28.99,5,0.051,coupon,2024-10-12\r\n17015,2263,AMER,sports,online,41.83,8,0.146,none,2024-09-13\r\n17016,1228,APAC,home,online,36.97,6,0.197,none,2024-07-17\r\n17017,2144,EMEA,grocery,online,78.69,5,0.155,loyalty,2024-01-02\r\n17018,2080,LATAM,grocery,online,34.03,4,0.011,loyalty,2024-11-14\r\n17019,1939,LATAM,fashion,retail,57.73,8,0.174,none,2024-04-06\r\n17020,1305,EMEA,toys,online,31.57,7,0.010,coupon,2024-11-03\r\n17021,2098,AMER,grocery,retail,74.53,2,0.115,none,2024-07-04\r\n17022,1477,APAC,electronics,online,26.91,1,0.044,none,2024-01
-04\r\n17023,1223,LATAM,fashion,partner,78.61,1,0.050,none,2024-03-23\r\n17024,2325,LATAM,grocery,retail,54.10,1,0.235,none,2024-06-14\r\n17025,1446,AMER,grocery,retail,90.81,3,0.205,loyalty,2024-05-15\r\n17026,1353,EMEA,sports,online,44.10,7,0.232,bundle,2024-09-10\r\n17027,1064,AMER,grocery,online,28.18,3,0.043,none,2024-04-07\r\n17028,1761,EMEA,grocery,retail,109.90,1,0.199,loyalty,2024-10-17\r\n17029,1015,AMER,electronics,retail,37.46,1,0.001,none,2024-09-14\r\n17030,2244,LATAM,electronics,online,77.56,3,0.152,none,2024-01-22\r\n17031,1734,AMER,electronics,online,24.16,4,0.152,none,2024-01-01\r\n17032,2077,APAC,grocery,mobile,32.87,8,0.056,none,2024-03-01\r\n17033,1971,EMEA,home,mobile,23.29,4,0.091,none,2024-06-02\r\n17034,1862,LATAM,electronics,retail,92.58,7,0.243,none,2024-07-18\r\n17035,2237,EMEA,fashion,mobile,47.60,6,0.067,none,2024-01-16\r\n17036,2129,APAC,sports,online,93.97,1,0.087,none,2024-12-24\r\n17037,1150,LATAM,home,online,52.39,1,0.130,none,2024-02-15\r\n17038,1761,EMEA,electronics,partner,84.58,4,0.210,none,2024-09-12\r\n17039,2277,EMEA,home,retail,87.08,5,0.047,none,2024-09-13\r\n17040,1136,EMEA,sports,retail,109.60,2,0.045,loyalty,2024-06-18\r\n17041,1972,LATAM,toys,mobile,60.81,4,0.074,coupon,2024-10-15\r\n17042,1892,LATAM,grocery,online,38.80,4,0.195,none,2024-07-15\r\n17043,1827,EMEA,electronics,retail,23.92,4,0.248,none,2024-03-22\r\n17044,2200,LATAM,grocery,retail,35.99,2,0.002,none,2024-06-03\r\n17045,2069,AMER,sports,mobile,57.40,1,0.142,none,2024-10-20\r\n17046,1626,EMEA,electronics,retail,47.23,3,0.215,none,2024-12-18\r\n17047,1290,EMEA,grocery,online,30.50,8,0.178,none,2024-10-02\r\n17048,2286,AMER,sports,online,48.89,5,0.082,none,2024-10-17\r\n17049,2411,EMEA,grocery,online,110.84,1,0.105,loyalty,2024-10-20\r\n17050,1334,APAC,grocery,online,74.81,6,0.190,loyalty,2024-08-25\r\n17051,1536,LATAM,home,retail,55.88,3,0.105,bundle,2024-01-08\r\n17052,1013,LATAM,sports,online,69.18,6,0.141,bundle,2024-04-28\r\n17053,2473,EMEA,grocery,onli
ne,60.82,2,0.127,loyalty,2024-06-04\r\n17054,1327,APAC,grocery,online,70.68,2,0.175,coupon,2024-08-26\r\n17055,2040,LATAM,fashion,online,42.61,1,0.173,loyalty,2024-09-24\r\n17056,1118,AMER,grocery,mobile,37.23,2,0.187,coupon,2024-04-10\r\n17057,1567,AMER,home,mobile,25.73,3,0.137,none,2024-07-06\r\n17058,1946,AMER,home,online,32.12,2,0.134,loyalty,2024-09-11\r\n17059,1990,EMEA,electronics,mobile,40.42,2,0.154,none,2024-09-26\r\n17060,2156,AMER,fashion,online,100.25,2,0.245,none,2024-03-02\r\n17061,2347,AMER,toys,online,49.40,6,0.057,none,2024-07-18\r\n17062,1029,EMEA,home,online,24.04,6,0.109,coupon,2024-05-13\r\n17063,1653,APAC,toys,retail,48.55,3,0.162,none,2024-03-06\r\n17064,1563,EMEA,home,retail,68.64,7,0.084,loyalty,2024-09-08\r\n17065,2164,AMER,electronics,partner,73.20,2,0.003,loyalty,2024-01-24\r\n17066,1488,AMER,home,online,42.48,2,0.069,coupon,2024-06-11\r\n17067,2105,APAC,grocery,online,72.20,1,0.185,none,2024-03-24\r\n17068,1113,EMEA,home,online,18.46,6,0.146,loyalty,2024-08-02\r\n17069,1714,APAC,home,online,97.83,3,0.015,none,2024-03-22\r\n17070,2363,AMER,electronics,retail,50.17,3,0.150,bundle,2024-02-03\r\n17071,1316,APAC,fashion,mobile,59.19,8,0.081,none,2024-10-12\r\n17072,2351,EMEA,toys,retail,23.58,7,0.211,none,2024-05-14\r\n17073,1305,EMEA,fashion,online,56.75,5,0.212,coupon,2024-12-18\r\n17074,1568,AMER,fashion,online,39.07,2,0.142,coupon,2024-01-24\r\n17075,1049,AMER,sports,retail,84.78,7,0.021,coupon,2024-01-22\r\n17076,1160,LATAM,fashion,mobile,116.10,6,0.171,none,2024-11-23\r\n17077,1602,EMEA,fashion,online,48.49,5,0.100,none,2024-06-07\r\n17078,1189,AMER,toys,online,30.21,5,0.113,none,2024-08-18\r\n17079,1417,APAC,grocery,online,23.30,6,0.221,none,2024-05-05\r\n17080,1838,AMER,electronics,retail,55.94,3,0.133,none,2024-03-06\r\n17081,2109,EMEA,home,online,86.70,3,0.121,coupon,2024-03-06\r\n17082,2127,LATAM,home,retail,127.37,1,0.009,none,2024-08-01\r\n17083,2054,AMER,fashion,online,123.55,5,0.166,loyalty,2024-01-18\r\n17084,2304,LATAM,fash
ion,online,56.05,1,0.045,coupon,2024-12-11\r\n17085,1256,LATAM,grocery,partner,48.56,6,0.207,none,2024-12-20\r\n17086,1648,APAC,home,retail,49.62,4,0.083,none,2024-03-18\r\n17087,2141,AMER,home,mobile,140.59,7,0.245,bundle,2024-05-03\r\n17088,1367,AMER,grocery,online,86.89,2,0.038,coupon,2024-01-10\r\n17089,1245,APAC,electronics,online,96.49,4,0.157,coupon,2024-07-03\r\n17090,1232,LATAM,grocery,mobile,65.33,4,0.087,none,2024-07-17\r\n17091,1919,EMEA,home,retail,88.92,1,0.082,coupon,2024-08-26\r\n17092,1135,APAC,home,retail,33.44,6,0.244,coupon,2024-06-18\r\n17093,1323,EMEA,toys,retail,43.63,5,0.209,coupon,2024-12-01\r\n17094,2224,EMEA,grocery,online,64.02,1,0.197,loyalty,2024-08-22\r\n17095,1838,AMER,home,partner,33.41,7,0.018,loyalty,2024-02-27\r\n17096,2182,AMER,home,retail,34.49,3,0.201,loyalty,2024-11-28\r\n17097,1404,EMEA,home,retail,41.26,2,0.148,bundle,2024-07-25\r\n17098,1772,EMEA,grocery,retail,56.28,1,0.212,none,2024-11-08\r\n17099,1364,EMEA,fashion,online,97.10,1,0.074,coupon,2024-01-28\r\n17100,2109,EMEA,home,online,139.49,5,0.052,coupon,2024-09-04\r\n17101,1997,APAC,electronics,partner,35.77,8,0.101,none,2024-06-19\r\n17102,1317,EMEA,grocery,online,25.80,2,0.188,coupon,2024-11-09\r\n17103,2454,LATAM,electronics,retail,44.92,7,0.148,none,2024-02-15\r\n17104,1789,EMEA,fashion,online,49.45,4,0.110,none,2024-07-11\r\n17105,2302,APAC,home,online,100.88,1,0.079,none,2024-12-12\r\n17106,2214,AMER,grocery,partner,179.42,8,0.036,none,2024-11-17\r\n17107,1226,AMER,home,online,81.66,8,0.231,none,2024-02-05\r\n17108,2209,AMER,toys,partner,68.57,6,0.069,none,2024-08-21\r\n17109,1183,AMER,home,online,144.44,3,0.124,none,2024-03-03\r\n17110,2022,LATAM,sports,online,148.54,8,0.023,loyalty,2024-06-26\r\n17111,1015,AMER,grocery,online,45.33,2,0.220,none,2024-08-17\r\n17112,1313,EMEA,grocery,retail,29.07,3,0.224,none,2024-01-14\r\n17113,1899,APAC,grocery,online,26.67,6,0.216,coupon,2024-11-17\r\n17114,1800,APAC,home,online,25.24,5,0.191,none,2024-08-05\r\n17115,1006,AMER,
fashion,partner,53.58,6,0.244,none,2024-08-12\r\n17116,2110,LATAM,toys,retail,61.42,6,0.028,none,2024-09-10\r\n17117,1789,EMEA,fashion,online,123.58,4,0.192,coupon,2024-09-14\r\n17118,2448,APAC,toys,online,29.81,7,0.121,none,2024-05-16\r\n17119,1014,EMEA,grocery,online,54.10,3,0.243,none,2024-02-21\r\n17120,1173,LATAM,home,retail,87.29,6,0.106,bundle,2024-04-19\r\n17121,1806,APAC,electronics,retail,72.94,8,0.203,none,2024-06-03\r\n17122,1954,APAC,home,retail,34.65,7,0.180,none,2024-04-24\r\n17123,2299,EMEA,grocery,retail,26.89,8,0.226,coupon,2024-09-14\r\n17124,2441,EMEA,sports,mobile,56.95,5,0.063,loyalty,2024-07-09\r\n17125,1978,AMER,grocery,retail,36.83,5,0.054,coupon,2024-04-06\r\n17126,1047,APAC,home,online,74.82,8,0.231,none,2024-06-24\r\n17127,1454,APAC,sports,retail,89.69,3,0.153,coupon,2024-01-28\r\n17128,1857,LATAM,grocery,partner,45.72,8,0.221,none,2024-01-21\r\n17129,1025,EMEA,grocery,mobile,59.70,6,0.226,none,2024-04-23\r\n17130,1503,APAC,home,mobile,77.10,7,0.076,coupon,2024-12-19\r\n17131,2129,APAC,grocery,online,61.14,4,0.198,none,2024-06-10\r\n17132,2179,LATAM,fashion,mobile,55.83,8,0.240,loyalty,2024-01-06\r\n17133,1265,APAC,grocery,online,115.24,1,0.030,bundle,2024-07-12\r\n17134,1915,LATAM,sports,online,42.75,4,0.147,none,2024-01-27\r\n17135,1870,EMEA,fashion,online,32.75,7,0.170,coupon,2024-07-05\r\n17136,1539,LATAM,electronics,retail,80.54,4,0.027,loyalty,2024-11-22\r\n17137,2017,EMEA,electronics,partner,93.69,6,0.118,none,2024-03-03\r\n17138,1189,AMER,grocery,online,45.03,4,0.130,coupon,2024-06-17\r\n17139,1740,EMEA,electronics,online,34.90,1,0.090,loyalty,2024-11-01\r\n17140,2245,APAC,sports,mobile,172.13,7,0.208,none,2024-03-01\r\n17141,1587,LATAM,home,partner,54.80,5,0.018,loyalty,2024-09-21\r\n17142,1110,LATAM,sports,retail,104.77,7,0.163,none,2024-02-07\r\n17143,2315,LATAM,fashion,online,24.75,5,0.172,none,2024-02-13\r\n17144,2140,AMER,electronics,retail,39.91,2,0.247,coupon,2024-05-10\r\n17145,2197,LATAM,toys,online,50.75,2,0.069,bundle,
2024-01-14\r\n17146,1574,AMER,grocery,online,54.30,6,0.113,loyalty,2024-01-20\r\n17147,2409,APAC,fashion,online,24.42,1,0.029,loyalty,2024-08-25\r\n17148,1274,LATAM,grocery,partner,44.82,1,0.044,bundle,2024-01-18\r\n17149,1388,AMER,grocery,online,58.22,7,0.107,coupon,2024-12-05\r\n17150,1184,AMER,sports,mobile,87.78,7,0.187,loyalty,2024-12-11\r\n17151,2241,APAC,electronics,online,31.59,8,0.183,none,2024-11-02\r\n17152,1398,APAC,sports,mobile,167.95,6,0.080,none,2024-10-23\r\n17153,1840,LATAM,home,retail,53.35,1,0.125,coupon,2024-06-25\r\n17154,1798,AMER,toys,online,62.24,4,0.102,none,2024-09-05\r\n17155,1737,AMER,home,retail,100.10,2,0.186,none,2024-05-18\r\n17156,1984,LATAM,grocery,online,26.54,1,0.165,none,2024-12-18\r\n17157,2187,EMEA,electronics,online,65.55,3,0.015,none,2024-05-26\r\n17158,1084,AMER,electronics,mobile,65.67,7,0.229,none,2024-10-18\r\n17159,2289,APAC,electronics,retail,44.54,1,0.149,none,2024-10-08\r\n17160,1296,LATAM,home,retail,32.67,5,0.069,coupon,2024-11-15\r\n17161,2263,AMER,sports,retail,19.64,4,0.219,none,2024-12-25\r\n17162,2433,APAC,electronics,retail,38.81,2,0.147,none,2024-09-20\r\n17163,1544,LATAM,fashion,online,49.35,7,0.005,coupon,2024-11-27\r\n17164,1895,AMER,home,retail,94.97,4,0.108,none,2024-08-28\r\n17165,1897,AMER,sports,mobile,54.14,5,0.220,none,2024-01-16\r\n17166,1457,EMEA,grocery,retail,51.90,2,0.225,coupon,2024-04-08\r\n17167,1822,EMEA,grocery,online,209.46,6,0.142,none,2024-03-22\r\n17168,2184,APAC,grocery,retail,58.57,7,0.156,bundle,2024-07-09\r\n17169,1653,APAC,grocery,mobile,79.61,2,0.224,none,2024-11-16\r\n17170,1410,AMER,grocery,retail,35.67,6,0.239,bundle,2024-11-14\r\n17171,2402,AMER,grocery,retail,75.92,5,0.128,none,2024-12-05\r\n17172,1795,EMEA,electronics,retail,80.38,7,0.137,none,2024-08-15\r\n17173,2090,AMER,grocery,online,19.50,1,0.113,none,2024-06-02\r\n17174,1224,APAC,electronics,online,213.11,2,0.204,none,2024-10-15\r\n17175,2073,AMER,sports,partner,79.59,5,0.199,coupon,2024-10-26\r\n17176,1357,EMEA,groc
ery,retail,57.86,7,0.051,coupon,2024-07-14\r\n17177,1734,AMER,home,online,71.74,4,0.231,coupon,2024-11-21\r\n17178,1465,AMER,toys,retail,26.39,4,0.069,coupon,2024-04-11\r\n17179,2331,APAC,electronics,retail,32.39,4,0.055,none,2024-04-02\r\n17180,2123,AMER,grocery,mobile,34.62,3,0.112,none,2024-07-11\r\n17181,1258,EMEA,home,online,167.85,3,0.161,none,2024-05-18\r\n17182,2298,APAC,grocery,online,32.25,8,0.127,bundle,2024-06-25\r\n17183,1795,EMEA,fashion,partner,40.19,8,0.096,loyalty,2024-04-11\r\n17184,1155,EMEA,toys,online,54.80,8,0.106,loyalty,2024-12-09\r\n17185,1900,APAC,home,online,46.64,5,0.165,bundle,2024-10-13\r\n17186,2481,APAC,grocery,mobile,63.90,6,0.170,coupon,2024-02-11\r\n17187,1191,EMEA,home,online,69.57,8,0.186,coupon,2024-02-16\r\n17188,1419,APAC,electronics,retail,27.16,7,0.128,bundle,2024-05-17\r\n17189,1705,AMER,electronics,online,186.67,6,0.169,none,2024-04-11\r\n17190,2475,AMER,fashion,online,48.62,7,0.085,none,2024-04-14\r\n17191,1552,EMEA,home,online,99.03,5,0.066,none,2024-08-18\r\n17192,1261,APAC,grocery,online,25.40,5,0.008,none,2024-06-27\r\n17193,1544,LATAM,fashion,online,91.62,6,0.215,none,2024-04-01\r\n17194,2425,APAC,fashion,retail,45.84,8,0.166,none,2024-07-27\r\n17195,2353,AMER,fashion,online,60.25,6,0.243,coupon,2024-01-24\r\n17196,2262,APAC,fashion,online,119.17,5,0.014,coupon,2024-03-19\r\n17197,1634,AMER,home,mobile,30.96,8,0.121,loyalty,2024-04-06\r\n17198,1250,APAC,grocery,mobile,20.33,6,0.031,bundle,2024-06-19\r\n17199,1762,LATAM,grocery,mobile,66.54,6,0.098,none,2024-09-22\r\n17200,1828,EMEA,grocery,retail,79.24,5,0.178,none,2024-06-16\r\n17201,2042,LATAM,grocery,online,28.84,4,0.080,none,2024-05-28\r\n17202,2189,LATAM,grocery,retail,50.68,2,0.068,none,2024-09-06\r\n17203,2472,AMER,toys,online,60.31,6,0.007,coupon,2024-08-14\r\n17204,1688,LATAM,fashion,online,56.68,5,0.177,none,2024-01-14\r\n17205,1875,EMEA,toys,online,83.33,8,0.192,none,2024-12-17\r\n17206,1041,APAC,electronics,online,42.02,4,0.017,none,2024-09-23\r\n17207,22
95,EMEA,grocery,retail,36.55,6,0.199,coupon,2024-03-11\r\n17208,1588,LATAM,fashion,online,64.54,2,0.157,loyalty,2024-07-13\r\n17209,2340,EMEA,home,mobile,78.40,5,0.158,bundle,2024-08-25\r\n17210,1876,LATAM,grocery,retail,46.84,5,0.179,none,2024-09-25\r\n17211,1708,LATAM,electronics,online,26.50,3,0.066,bundle,2024-01-02\r\n17212,2193,AMER,grocery,retail,93.46,7,0.156,none,2024-02-15\r\n17213,1552,EMEA,grocery,retail,15.53,4,0.020,coupon,2024-05-17\r\n17214,1924,AMER,grocery,online,26.38,8,0.112,none,2024-03-15\r\n17215,2280,EMEA,grocery,online,59.73,1,0.137,none,2024-10-12\r\n17216,2099,AMER,fashion,mobile,68.57,8,0.173,loyalty,2024-04-04\r\n17217,2373,LATAM,sports,online,49.53,2,0.225,bundle,2024-01-23\r\n17218,1590,APAC,home,retail,67.23,2,0.118,none,2024-10-12\r\n17219,2492,LATAM,grocery,retail,68.77,1,0.137,none,2024-09-14\r\n17220,1325,APAC,toys,retail,132.07,5,0.037,none,2024-09-25\r\n17221,1163,AMER,sports,online,53.39,8,0.202,loyalty,2024-03-08\r\n17222,1681,LATAM,grocery,mobile,65.51,7,0.202,none,2024-04-07\r\n17223,2124,AMER,home,online,38.42,1,0.245,none,2024-01-11\r\n17224,1539,LATAM,home,retail,63.44,3,0.003,none,2024-06-25\r\n17225,2011,AMER,grocery,retail,81.30,8,0.098,none,2024-06-10\r\n17226,1392,AMER,grocery,retail,30.10,8,0.191,bundle,2024-05-07\r\n17227,1093,APAC,home,online,44.74,6,0.080,none,2024-04-17\r\n17228,1980,LATAM,home,online,177.17,2,0.195,none,2024-06-28\r\n17229,2417,LATAM,toys,online,75.67,8,0.004,none,2024-09-24\r\n17230,1047,APAC,fashion,online,26.00,8,0.154,coupon,2024-01-20\r\n17231,1810,LATAM,fashion,online,27.74,7,0.125,none,2024-09-13\r\n17232,1418,LATAM,home,retail,46.45,2,0.204,none,2024-07-19\r\n17233,1389,LATAM,toys,online,92.53,3,0.013,none,2024-08-18\r\n17234,1434,EMEA,electronics,online,131.27,8,0.192,none,2024-06-08\r\n17235,2268,EMEA,fashion,retail,58.87,1,0.026,coupon,2024-10-09\r\n17236,1132,EMEA,sports,online,86.35,5,0.051,loyalty,2024-04-22\r\n17237,2488,EMEA,electronics,online,138.22,3,0.183,none,2024-08-16\r\n1
7238,2106,LATAM,electronics,online,101.49,2,0.167,none,2024-03-09\r\n17239,1950,LATAM,fashion,online,26.88,5,0.149,none,2024-11-19\r\n17240,1863,EMEA,home,retail,58.10,2,0.176,none,2024-05-12\r\n17241,2317,LATAM,sports,online,69.72,6,0.210,none,2024-11-01\r\n17242,1042,LATAM,home,online,184.66,1,0.085,coupon,2024-12-06\r\n17243,2234,LATAM,fashion,retail,62.02,1,0.234,none,2024-06-14\r\n17244,1658,AMER,home,mobile,111.18,7,0.201,none,2024-07-04\r\n17245,2394,EMEA,sports,retail,65.79,6,0.116,bundle,2024-01-24\r\n17246,1832,APAC,home,retail,84.99,1,0.113,coupon,2024-12-07\r\n17247,2278,APAC,home,mobile,71.70,5,0.187,loyalty,2024-04-09\r\n17248,1369,AMER,sports,online,32.54,4,0.136,none,2024-06-28\r\n17249,1354,AMER,toys,retail,90.26,8,0.003,coupon,2024-11-16\r\n17250,1712,LATAM,toys,online,65.50,8,0.064,none,2024-06-17\r\n17251,1582,AMER,sports,retail,68.20,5,0.063,none,2024-05-04\r\n17252,1207,APAC,electronics,retail,69.70,3,0.108,bundle,2024-08-10\r\n17253,1094,LATAM,home,retail,41.14,3,0.171,bundle,2024-09-23\r\n17254,1329,APAC,home,online,62.74,5,0.183,none,2024-04-05\r\n17255,1964,EMEA,home,retail,80.03,7,0.132,none,2024-04-11\r\n17256,1055,AMER,grocery,mobile,33.42,5,0.031,none,2024-07-21\r\n17257,1078,APAC,grocery,retail,17.47,6,0.048,none,2024-05-03\r\n17258,1802,AMER,electronics,online,57.44,6,0.242,bundle,2024-05-20\r\n17259,1160,LATAM,sports,retail,116.77,2,0.155,none,2024-06-20\r\n17260,1672,APAC,fashion,mobile,59.09,3,0.189,none,2024-11-02\r\n17261,1703,AMER,grocery,retail,60.22,2,0.048,none,2024-03-11\r\n17262,2291,EMEA,grocery,online,116.09,8,0.025,none,2024-01-10\r\n17263,1852,AMER,grocery,mobile,62.20,4,0.047,coupon,2024-02-24\r\n17264,1728,AMER,toys,online,37.98,2,0.110,coupon,2024-12-02\r\n17265,2437,LATAM,grocery,retail,99.76,2,0.180,none,2024-07-28\r\n17266,2400,EMEA,electronics,retail,44.75,5,0.032,none,2024-08-25\r\n17267,2479,EMEA,home,retail,70.94,7,0.023,none,2024-04-25\r\n17268,1703,AMER,home,retail,32.68,4,0.022,coupon,2024-12-26\r\n17269,21
35,EMEA,home,retail,65.44,2,0.069,none,2024-05-01\r\n17270,1781,LATAM,home,retail,55.19,3,0.132,loyalty,2024-08-13\r\n17271,1247,AMER,home,online,142.70,7,0.016,none,2024-06-18\r\n17272,1424,APAC,electronics,online,38.78,3,0.241,none,2024-06-11\r\n17273,1460,LATAM,home,online,78.78,8,0.024,none,2024-07-01\r\n17274,2459,AMER,fashion,retail,81.95,5,0.181,bundle,2024-07-21\r\n17275,1589,AMER,grocery,online,126.43,6,0.243,bundle,2024-07-27\r\n17276,2118,AMER,electronics,mobile,83.28,4,0.181,none,2024-09-10\r\n17277,2364,APAC,electronics,online,36.16,7,0.106,none,2024-02-09\r\n17278,1481,LATAM,toys,online,99.76,7,0.227,none,2024-03-26\r\n17279,2345,LATAM,grocery,retail,154.36,3,0.179,loyalty,2024-12-10\r\n17280,2327,EMEA,sports,mobile,57.69,5,0.044,none,2024-07-12\r\n17281,1743,LATAM,home,online,23.99,7,0.140,none,2024-01-13\r\n17282,1772,EMEA,electronics,online,81.78,3,0.005,none,2024-09-04\r\n17283,1043,LATAM,electronics,retail,29.06,2,0.221,loyalty,2024-02-20\r\n17284,1249,EMEA,grocery,retail,50.84,5,0.129,none,2024-04-27\r\n17285,1495,LATAM,grocery,online,50.75,6,0.122,loyalty,2024-10-05\r\n17286,1681,LATAM,sports,online,73.30,8,0.144,coupon,2024-01-17\r\n17287,1303,LATAM,home,retail,54.93,2,0.244,none,2024-04-05\r\n17288,1098,APAC,toys,online,17.90,1,0.011,coupon,2024-02-27\r\n17289,1060,LATAM,grocery,retail,43.67,7,0.162,none,2024-09-23\r\n17290,2407,EMEA,electronics,online,32.57,3,0.226,none,2024-03-18\r\n17291,2001,EMEA,toys,retail,36.64,2,0.175,none,2024-01-04\r\n17292,1831,APAC,electronics,online,47.02,7,0.247,none,2024-04-10\r\n17293,1664,LATAM,grocery,mobile,27.71,8,0.191,none,2024-07-23\r\n17294,1061,APAC,fashion,mobile,97.12,2,0.200,coupon,2024-04-07\r\n17295,1298,LATAM,grocery,mobile,29.30,2,0.119,none,2024-06-23\r\n17296,1496,AMER,electronics,mobile,51.20,6,0.243,none,2024-06-18\r\n17297,2041,LATAM,home,online,87.43,4,0.144,none,2024-02-28\r\n17298,1696,LATAM,fashion,retail,36.08,5,0.131,none,2024-05-26\r\n17299,1844,APAC,toys,online,71.59,4,0.074,none,20
24-12-12\r\n17300,2112,LATAM,fashion,online,45.30,7,0.012,none,2024-01-24\r\n17301,1006,AMER,home,online,68.52,2,0.150,none,2024-05-20\r\n17302,1591,APAC,electronics,mobile,75.17,3,0.004,loyalty,2024-02-07\r\n17303,2102,APAC,electronics,partner,210.18,7,0.167,loyalty,2024-03-22\r\n17304,2453,AMER,home,partner,77.38,7,0.173,none,2024-10-28\r\n17305,1106,AMER,electronics,retail,122.51,4,0.201,coupon,2024-03-09\r\n17306,2211,APAC,home,retail,85.28,5,0.029,none,2024-05-11\r\n17307,1575,APAC,home,online,46.80,5,0.207,none,2024-06-01\r\n17308,1248,APAC,sports,online,32.17,6,0.091,bundle,2024-01-06\r\n17309,1404,EMEA,electronics,mobile,31.01,8,0.130,none,2024-08-22\r\n17310,1953,EMEA,fashion,online,76.87,6,0.124,none,2024-09-04\r\n17311,1773,LATAM,electronics,online,121.40,2,0.115,coupon,2024-08-26\r\n17312,2098,AMER,sports,online,57.19,3,0.122,none,2024-09-03\r\n17313,1027,APAC,grocery,online,57.46,5,0.051,none,2024-03-18\r\n17314,2163,EMEA,home,online,53.70,4,0.198,none,2024-06-06\r\n17315,1679,APAC,fashion,mobile,90.97,5,0.128,none,2024-06-24\r\n17316,1080,LATAM,electronics,online,58.36,2,0.101,none,2024-08-06\r\n17317,2256,AMER,electronics,retail,88.34,7,0.193,none,2024-05-08\r\n17318,2215,LATAM,fashion,online,113.39,2,0.178,none,2024-11-13\r\n17319,1332,APAC,fashion,online,33.31,7,0.155,bundle,2024-12-12\r\n17320,2392,EMEA,fashion,online,61.26,8,0.204,none,2024-12-13\r\n17321,1082,EMEA,home,mobile,159.51,2,0.189,none,2024-09-05\r\n17322,2158,APAC,home,mobile,85.87,6,0.165,none,2024-06-14\r\n17323,2002,APAC,sports,online,97.11,4,0.181,coupon,2024-11-08\r\n17324,1976,AMER,home,online,61.17,4,0.108,none,2024-02-05\r\n17325,1742,AMER,home,online,41.95,8,0.225,none,2024-05-20\r\n17326,2144,EMEA,toys,mobile,38.46,3,0.059,coupon,2024-11-02\r\n17327,2233,EMEA,home,retail,49.43,8,0.042,none,2024-08-04\r\n17328,1809,APAC,home,online,69.15,2,0.233,none,2024-08-02\r\n17329,2355,EMEA,fashion,retail,63.19,5,0.030,none,2024-02-19\r\n17330,1917,LATAM,electronics,online,63.89,1,0.009,
coupon,2024-01-04\r\n17331,2234,LATAM,sports,retail,180.81,4,0.072,coupon,2024-12-11\r\n17332,1300,EMEA,electronics,online,157.09,5,0.228,bundle,2024-05-12\r\n17333,1707,APAC,toys,online,38.63,2,0.062,coupon,2024-01-17\r\n17334,1190,EMEA,home,mobile,74.67,7,0.079,coupon,2024-05-18\r\n17335,1048,EMEA,sports,online,205.91,3,0.121,coupon,2024-02-02\r\n17336,1540,LATAM,grocery,online,32.31,8,0.006,coupon,2024-03-13\r\n17337,1031,AMER,sports,online,31.34,6,0.175,bundle,2024-05-18\r\n17338,1947,EMEA,sports,retail,138.30,5,0.224,coupon,2024-11-05\r\n17339,2490,AMER,electronics,online,67.57,6,0.244,loyalty,2024-04-15\r\n17340,2416,LATAM,home,mobile,26.63,4,0.071,none,2024-09-28\r\n17341,1957,AMER,grocery,retail,27.89,3,0.194,none,2024-11-25\r\n17342,1064,AMER,toys,retail,42.12,8,0.208,loyalty,2024-02-06\r\n17343,1106,AMER,electronics,online,36.80,7,0.157,coupon,2024-06-04\r\n17344,1939,LATAM,sports,mobile,76.05,3,0.058,none,2024-11-16\r\n17345,1613,EMEA,grocery,retail,63.46,8,0.011,none,2024-10-26\r\n17346,2313,LATAM,home,mobile,31.01,4,0.094,bundle,2024-03-13\r\n17347,2122,AMER,fashion,retail,22.81,3,0.061,none,2024-03-09\r\n17348,1332,APAC,home,online,49.70,8,0.023,loyalty,2024-06-04\r\n17349,1173,LATAM,home,retail,103.17,3,0.147,none,2024-04-04\r\n17350,1237,LATAM,fashion,online,57.04,3,0.238,bundle,2024-03-23\r\n17351,1887,LATAM,grocery,partner,62.08,7,0.108,coupon,2024-03-01\r\n17352,2241,APAC,grocery,online,79.88,8,0.192,bundle,2024-09-20\r\n17353,2150,APAC,sports,retail,58.55,8,0.154,none,2024-04-20\r\n17354,2295,EMEA,sports,online,22.73,4,0.236,coupon,2024-09-16\r\n17355,2348,EMEA,grocery,retail,67.43,2,0.174,loyalty,2024-11-08\r\n17356,1197,LATAM,home,online,75.42,5,0.239,bundle,2024-11-01\r\n17357,2265,APAC,grocery,online,105.19,7,0.098,bundle,2024-08-06\r\n17358,1698,EMEA,home,partner,11.53,2,0.238,loyalty,2024-07-26\r\n17359,1778,LATAM,grocery,retail,27.90,3,0.187,none,2024-03-16\r\n17360,1489,AMER,home,online,91.25,2,0.214,none,2024-11-10\r\n17361,2442,APAC,hom
e,retail,45.61,1,0.246,none,2024-07-01\r\n17362,1359,LATAM,grocery,online,49.64,2,0.036,coupon,2024-10-01\r\n17363,2289,APAC,toys,retail,58.68,1,0.238,none,2024-01-11\r\n17364,1444,EMEA,electronics,mobile,54.24,8,0.008,bundle,2024-02-12\r\n17365,2246,AMER,sports,online,152.03,6,0.016,none,2024-09-21\r\n17366,1210,LATAM,electronics,retail,56.06,3,0.107,none,2024-09-10\r\n17367,2170,EMEA,grocery,online,60.66,1,0.040,coupon,2024-08-24\r\n17368,1493,APAC,grocery,retail,73.32,5,0.227,coupon,2024-06-09\r\n17369,2179,LATAM,electronics,online,54.88,3,0.115,bundle,2024-01-23\r\n17370,2100,APAC,electronics,online,66.25,8,0.135,coupon,2024-07-11\r\n17371,1377,APAC,sports,retail,68.01,1,0.118,loyalty,2024-10-09\r\n17372,2222,LATAM,toys,online,77.31,1,0.001,none,2024-08-27\r\n17373,1173,LATAM,home,retail,83.32,3,0.041,none,2024-01-06\r\n17374,1001,LATAM,electronics,retail,19.97,5,0.119,coupon,2024-04-22\r\n17375,1378,APAC,toys,online,62.93,5,0.176,bundle,2024-04-19\r\n17376,1307,AMER,electronics,online,44.26,3,0.048,bundle,2024-04-21\r\n17377,2172,EMEA,toys,online,43.68,1,0.005,none,2024-08-14\r\n17378,1578,LATAM,home,online,14.92,8,0.085,none,2024-06-10\r\n17379,2488,EMEA,home,mobile,50.74,4,0.060,coupon,2024-05-09\r\n17380,2239,EMEA,fashion,online,54.82,1,0.121,none,2024-04-05\r\n17381,1325,APAC,fashion,online,29.26,8,0.020,none,2024-01-26\r\n17382,1983,LATAM,grocery,retail,37.69,5,0.086,loyalty,2024-10-13\r\n17383,2354,LATAM,grocery,online,58.06,1,0.239,none,2024-08-20\r\n17384,1156,APAC,toys,online,25.48,6,0.221,none,2024-08-20\r\n17385,2421,AMER,fashion,online,39.61,8,0.060,none,2024-09-20\r\n17386,1711,APAC,home,online,36.85,5,0.110,coupon,2024-05-21\r\n17387,1214,EMEA,electronics,partner,54.96,2,0.105,none,2024-06-08\r\n17388,2154,APAC,grocery,mobile,81.76,3,0.046,bundle,2024-03-13\r\n17389,1902,AMER,home,retail,61.93,4,0.089,none,2024-06-24\r\n17390,1459,LATAM,home,online,54.34,6,0.116,bundle,2024-01-14\r\n17391,1411,LATAM,grocery,online,67.64,7,0.019,bundle,2024-09-17\r
\n17392,2091,LATAM,home,online,61.00,7,0.225,none,2024-05-24\r\n17393,1332,APAC,grocery,online,57.55,5,0.128,bundle,2024-11-25\r\n17394,1478,EMEA,electronics,online,47.45,7,0.066,coupon,2024-03-14\r\n17395,1427,EMEA,electronics,mobile,37.33,7,0.241,loyalty,2024-12-13\r\n17396,2474,LATAM,grocery,retail,34.08,6,0.206,loyalty,2024-09-24\r\n17397,1053,AMER,grocery,retail,24.12,8,0.110,none,2024-06-01\r\n17398,1888,LATAM,toys,online,65.09,7,0.203,coupon,2024-09-15\r\n17399,2477,APAC,grocery,online,42.70,8,0.200,none,2024-09-19\r\n17400,2338,AMER,home,online,86.61,1,0.112,none,2024-08-07\r\n17401,1832,APAC,fashion,online,33.42,7,0.181,coupon,2024-03-25\r\n17402,1803,LATAM,electronics,mobile,27.19,2,0.088,loyalty,2024-05-03\r\n17403,1383,AMER,grocery,retail,21.01,8,0.065,none,2024-10-11\r\n17404,1561,EMEA,sports,online,38.86,4,0.134,none,2024-10-15\r\n17405,1352,AMER,grocery,online,53.25,2,0.102,none,2024-01-23\r\n17406,1319,EMEA,electronics,online,18.32,4,0.154,none,2024-10-07\r\n17407,2166,AMER,toys,online,57.58,8,0.128,none,2024-08-02\r\n17408,2043,EMEA,home,online,99.24,7,0.185,none,2024-11-13\r\n17409,1916,AMER,fashion,mobile,50.33,8,0.163,none,2024-05-07\r\n17410,1866,EMEA,electronics,online,122.36,8,0.139,bundle,2024-07-06\r\n17411,1904,APAC,home,online,105.99,6,0.094,none,2024-11-25\r\n17412,1733,LATAM,home,online,68.10,1,0.088,coupon,2024-01-10\r\n17413,1139,EMEA,grocery,online,64.72,1,0.241,none,2024-04-14\r\n17414,2424,LATAM,fashion,mobile,67.56,1,0.089,coupon,2024-03-23\r\n17415,2317,LATAM,grocery,online,121.26,7,0.006,coupon,2024-09-21\r\n17416,2241,APAC,grocery,retail,19.32,4,0.196,loyalty,2024-11-26\r\n17417,2490,AMER,toys,online,82.88,1,0.142,none,2024-08-15\r\n17418,2427,LATAM,home,online,139.16,3,0.064,none,2024-04-11\r\n17419,2303,EMEA,fashion,retail,56.43,5,0.143,bundle,2024-10-22\r\n17420,1515,EMEA,grocery,mobile,22.98,8,0.176,loyalty,2024-05-09\r\n17421,1083,AMER,fashion,mobile,72.91,7,0.004,coupon,2024-02-04\r\n17422,1951,LATAM,electronics,mobile,33.
35,2,0.247,bundle,2024-11-17\r\n17423,2036,APAC,home,online,20.05,2,0.232,none,2024-08-03\r\n17424,2452,LATAM,sports,online,70.92,6,0.015,none,2024-11-24\r\n17425,1358,APAC,home,online,88.18,3,0.249,none,2024-02-25\r\n17426,1805,EMEA,grocery,mobile,74.04,6,0.101,loyalty,2024-06-17\r\n17427,1470,LATAM,fashion,retail,108.02,7,0.179,none,2024-03-07\r\n17428,1976,AMER,sports,online,38.57,4,0.087,none,2024-09-24\r\n17429,2474,LATAM,electronics,online,21.80,3,0.019,loyalty,2024-08-20\r\n17430,2132,LATAM,electronics,online,93.84,5,0.197,bundle,2024-01-25\r\n17431,1743,LATAM,toys,retail,65.03,2,0.200,none,2024-11-21\r\n17432,1999,EMEA,home,retail,69.27,3,0.200,none,2024-04-22\r\n17433,1968,EMEA,home,retail,36.37,5,0.093,none,2024-02-10\r\n17434,1591,APAC,home,retail,43.45,4,0.116,none,2024-05-23\r\n17435,2301,EMEA,fashion,online,44.63,1,0.240,bundle,2024-05-28\r\n17436,1376,EMEA,sports,online,68.87,5,0.118,coupon,2024-03-06\r\n17437,2467,AMER,electronics,online,55.27,4,0.091,coupon,2024-12-09\r\n17438,2256,AMER,home,retail,34.75,4,0.090,none,2024-04-17\r\n17439,1395,APAC,home,retail,67.68,3,0.069,none,2024-06-22\r\n17440,1307,AMER,electronics,online,42.40,4,0.238,bundle,2024-03-13\r\n17441,1725,APAC,fashion,online,165.04,2,0.089,none,2024-11-02\r\n17442,2124,AMER,electronics,online,51.99,7,0.129,none,2024-11-06\r\n17443,2453,AMER,home,mobile,63.11,5,0.204,none,2024-02-18\r\n17444,1344,EMEA,fashion,online,59.42,2,0.083,bundle,2024-09-03\r\n17445,1278,AMER,home,online,36.85,1,0.019,none,2024-01-13\r\n17446,1975,EMEA,grocery,mobile,61.65,5,0.114,none,2024-03-14\r\n17447,1854,AMER,fashion,retail,35.78,2,0.215,none,2024-03-01\r\n17448,1826,LATAM,home,retail,175.95,3,0.061,none,2024-09-14\r\n17449,1583,AMER,fashion,retail,114.87,3,0.116,none,2024-05-20\r\n17450,1201,LATAM,home,retail,49.55,1,0.165,coupon,2024-05-04\r\n17451,2306,AMER,grocery,mobile,22.90,7,0.219,bundle,2024-01-06\r\n17452,2061,EMEA,electronics,online,61.78,8,0.109,none,2024-01-16\r\n17453,1851,EMEA,toys,online,60
.20,1,0.127,none,2024-07-27\r\n17454,1877,LATAM,fashion,mobile,73.00,7,0.138,bundle,2024-08-17\r\n17455,2427,LATAM,grocery,online,82.84,5,0.102,none,2024-06-08\r\n17456,2157,AMER,grocery,retail,57.61,4,0.211,bundle,2024-04-06\r\n17457,1528,EMEA,toys,online,66.66,4,0.125,bundle,2024-10-26\r\n17458,2246,AMER,grocery,online,53.31,1,0.118,none,2024-06-11\r\n17459,2332,APAC,grocery,online,24.99,1,0.090,none,2024-02-13\r\n17460,1170,AMER,electronics,online,94.07,4,0.105,loyalty,2024-03-20\r\n17461,1171,APAC,electronics,retail,135.97,5,0.096,bundle,2024-08-06\r\n17462,2236,APAC,electronics,retail,57.17,1,0.247,coupon,2024-03-16\r\n17463,1040,LATAM,electronics,retail,113.72,8,0.117,none,2024-09-12\r\n17464,1520,APAC,fashion,retail,72.08,7,0.102,bundle,2024-10-27\r\n17465,1203,AMER,fashion,online,59.07,1,0.081,none,2024-07-10\r\n17466,1442,EMEA,grocery,retail,45.46,5,0.035,none,2024-10-17\r\n17467,2313,LATAM,toys,mobile,65.74,5,0.042,none,2024-01-13\r\n17468,1671,APAC,toys,online,92.28,5,0.241,loyalty,2024-06-11\r\n17469,2311,LATAM,grocery,online,39.76,6,0.075,bundle,2024-09-20\r\n17470,1768,AMER,fashion,online,91.84,1,0.246,none,2024-04-23\r\n17471,2256,AMER,home,retail,52.12,3,0.095,coupon,2024-03-20\r\n17472,1946,AMER,sports,retail,35.35,1,0.005,none,2024-08-25\r\n17473,1536,LATAM,sports,mobile,99.15,2,0.247,coupon,2024-03-08\r\n17474,2451,APAC,grocery,online,108.68,5,0.214,bundle,2024-09-09\r\n17475,1372,APAC,sports,online,26.20,7,0.230,bundle,2024-05-04\r\n17476,1909,APAC,sports,retail,61.86,7,0.098,bundle,2024-09-21\r\n17477,2485,AMER,fashion,retail,65.73,4,0.039,none,2024-12-22\r\n17478,1840,LATAM,sports,online,20.22,3,0.131,bundle,2024-07-11\r\n17479,1149,LATAM,fashion,mobile,47.16,1,0.040,none,2024-09-05\r\n17480,1820,AMER,grocery,online,71.93,8,0.208,bundle,2024-10-23\r\n17481,1319,EMEA,grocery,online,103.39,5,0.129,coupon,2024-06-20\r\n17482,1597,APAC,electronics,retail,47.06,4,0.085,loyalty,2024-11-12\r\n17483,1581,APAC,grocery,retail,22.69,1,0.086,coupon,2024-01
-24\r\n17484,2301,EMEA,home,online,38.70,2,0.146,coupon,2024-09-16\r\n17485,1977,APAC,home,online,76.65,2,0.056,loyalty,2024-01-21\r\n17486,2082,APAC,sports,mobile,64.56,3,0.085,none,2024-04-07\r\n17487,1409,APAC,home,retail,30.11,1,0.194,bundle,2024-04-26\r\n17488,1189,AMER,home,retail,72.53,6,0.132,none,2024-11-11\r\n17489,1495,LATAM,fashion,mobile,54.02,8,0.172,none,2024-02-24\r\n17490,1352,AMER,grocery,retail,69.41,7,0.218,none,2024-08-19\r\n17491,1434,EMEA,electronics,online,59.61,4,0.030,none,2024-04-11\r\n17492,2317,LATAM,grocery,online,105.61,5,0.119,none,2024-03-10\r\n17493,2031,AMER,fashion,retail,54.24,5,0.165,bundle,2024-10-14\r\n17494,2407,EMEA,toys,online,86.80,2,0.208,loyalty,2024-11-28\r\n17495,1260,LATAM,electronics,online,85.09,4,0.185,loyalty,2024-02-13\r\n17496,1621,APAC,electronics,partner,51.50,7,0.186,none,2024-03-07\r\n17497,1220,LATAM,sports,mobile,67.57,2,0.189,none,2024-09-09\r\n17498,1541,APAC,grocery,mobile,117.09,6,0.133,coupon,2024-09-26\r\n17499,1004,LATAM,fashion,retail,68.10,6,0.237,coupon,2024-09-11\r\n17500,1667,AMER,grocery,retail,114.29,4,0.169,coupon,2024-12-20\r\n17501,2183,EMEA,grocery,online,29.60,2,0.059,coupon,2024-12-24\r\n17502,2474,LATAM,grocery,retail,17.71,6,0.037,bundle,2024-09-06\r\n17503,2343,EMEA,fashion,mobile,39.13,3,0.187,coupon,2024-09-03\r\n17504,1408,AMER,home,online,48.12,7,0.166,bundle,2024-06-24\r\n17505,1192,EMEA,sports,online,141.80,1,0.014,coupon,2024-03-03\r\n17506,2297,EMEA,toys,retail,83.86,1,0.158,none,2024-02-16\r\n17507,1158,LATAM,electronics,mobile,107.18,2,0.204,none,2024-10-23\r\n17508,1357,EMEA,grocery,online,255.01,4,0.207,none,2024-11-25\r\n17509,1700,EMEA,grocery,retail,45.74,7,0.226,coupon,2024-01-09\r\n17510,1563,EMEA,home,online,52.79,4,0.213,none,2024-03-17\r\n17511,1102,APAC,sports,online,55.89,7,0.142,bundle,2024-04-20\r\n17512,2095,EMEA,home,retail,94.73,8,0.032,none,2024-06-04\r\n17513,1628,EMEA,toys,mobile,48.29,4,0.192,coupon,2024-08-18\r\n17514,2384,LATAM,fashion,retail,66.53,8,
0.077,none,2024-02-26\r\n17515,2323,AMER,toys,online,76.79,7,0.217,none,2024-09-14\r\n17516,1762,LATAM,sports,retail,63.02,2,0.134,bundle,2024-06-20\r\n17517,2314,EMEA,home,online,63.31,7,0.118,none,2024-03-06\r\n17518,1539,LATAM,toys,retail,110.42,7,0.140,coupon,2024-03-26\r\n17519,1907,EMEA,fashion,retail,74.75,1,0.101,none,2024-09-09\r\n17520,1420,APAC,fashion,retail,84.84,7,0.138,none,2024-12-03\r\n17521,1118,AMER,toys,retail,49.27,7,0.061,loyalty,2024-04-04\r\n17522,1715,AMER,home,online,64.25,5,0.239,none,2024-02-27\r\n17523,2244,LATAM,grocery,retail,32.08,7,0.217,coupon,2024-05-16\r\n17524,2493,APAC,fashion,retail,103.88,4,0.077,none,2024-03-27\r\n17525,1581,APAC,home,online,29.91,7,0.142,none,2024-04-13\r\n17526,1717,AMER,electronics,retail,120.42,4,0.016,none,2024-07-14\r\n17527,1398,APAC,electronics,online,69.38,3,0.240,loyalty,2024-11-13\r\n17528,2234,LATAM,toys,online,25.39,5,0.054,loyalty,2024-01-05\r\n17529,1218,AMER,grocery,retail,73.29,4,0.208,coupon,2024-06-08\r\n17530,2066,APAC,grocery,retail,54.76,3,0.018,none,2024-10-26\r\n17531,1813,EMEA,grocery,online,37.68,4,0.227,none,2024-06-23\r\n17532,1182,EMEA,sports,online,30.51,2,0.010,loyalty,2024-05-10\r\n17533,2279,LATAM,fashion,online,133.26,7,0.093,none,2024-11-22\r\n17534,2441,EMEA,grocery,retail,67.74,2,0.248,none,2024-01-20\r\n17535,1944,AMER,electronics,retail,58.58,4,0.158,none,2024-05-21\r\n17536,2248,LATAM,sports,partner,80.53,1,0.214,none,2024-03-10\r\n17537,1252,APAC,grocery,partner,23.68,2,0.080,none,2024-12-14\r\n17538,1663,LATAM,electronics,retail,101.93,5,0.217,coupon,2024-06-08\r\n17539,1973,EMEA,home,retail,97.30,2,0.058,bundle,2024-09-18\r\n17540,1157,LATAM,grocery,online,20.44,1,0.025,coupon,2024-04-26\r\n17541,1198,AMER,grocery,mobile,34.84,3,0.059,none,2024-02-02\r\n17542,1893,APAC,grocery,online,123.54,8,0.122,loyalty,2024-04-13\r\n17543,1044,EMEA,fashion,partner,145.57,7,0.196,none,2024-07-26\r\n17544,2353,AMER,grocery,retail,42.95,8,0.238,none,2024-03-06\r\n17545,1546,EMEA,ele
ctronics,retail,120.23,7,0.166,coupon,2024-09-07\r\n17546,1214,EMEA,grocery,online,38.92,2,0.040,loyalty,2024-03-15\r\n17547,2049,LATAM,electronics,mobile,20.06,7,0.092,none,2024-05-17\r\n17548,1426,AMER,home,retail,66.27,2,0.055,none,2024-06-22\r\n17549,1604,EMEA,home,online,103.71,6,0.193,bundle,2024-11-17\r\n17550,1486,LATAM,electronics,retail,73.38,1,0.122,none,2024-07-02\r\n17551,1465,AMER,electronics,retail,61.42,2,0.004,none,2024-10-14\r\n17552,1943,AMER,electronics,mobile,37.21,3,0.135,none,2024-11-28\r\n17553,1440,AMER,sports,mobile,77.45,4,0.157,coupon,2024-02-24\r\n17554,1585,AMER,fashion,online,45.49,6,0.070,bundle,2024-09-05\r\n17555,1690,LATAM,grocery,partner,61.09,6,0.212,coupon,2024-01-27\r\n17556,2333,APAC,electronics,mobile,55.63,6,0.084,none,2024-08-06\r\n17557,1593,AMER,grocery,online,144.00,8,0.096,coupon,2024-03-17\r\n17558,2011,AMER,electronics,online,49.60,8,0.157,bundle,2024-12-23\r\n17559,1707,APAC,home,partner,60.41,6,0.225,none,2024-01-04\r\n17560,1429,APAC,grocery,retail,48.01,7,0.158,bundle,2024-04-18\r\n17561,1214,EMEA,grocery,partner,59.70,6,0.233,loyalty,2024-06-18\r\n17562,1719,LATAM,fashion,retail,61.34,3,0.201,coupon,2024-04-15\r\n17563,1497,EMEA,electronics,online,107.38,8,0.097,none,2024-06-16\r\n17564,1380,AMER,grocery,retail,41.83,4,0.009,none,2024-10-24\r\n17565,1896,EMEA,home,retail,77.99,2,0.224,none,2024-01-05\r\n17566,1467,LATAM,electronics,online,37.97,7,0.041,coupon,2024-12-11\r\n17567,2073,AMER,sports,retail,55.06,1,0.055,none,2024-04-23\r\n17568,1068,APAC,electronics,online,13.55,6,0.172,bundle,2024-12-23\r\n17569,1535,AMER,grocery,retail,88.28,8,0.037,none,2024-06-15\r\n17570,2319,AMER,fashion,retail,29.91,8,0.034,bundle,2024-05-04\r\n17571,2019,AMER,toys,online,27.22,2,0.224,bundle,2024-09-10\r\n17572,1571,EMEA,grocery,online,30.88,4,0.005,bundle,2024-09-10\r\n17573,2240,LATAM,home,partner,83.33,3,0.242,none,2024-10-19\r\n17574,1867,AMER,grocery,online,90.31,4,0.173,none,2024-07-24\r\n17575,2441,EMEA,grocery,retail,
177.42,4,0.235,coupon,2024-12-07\r\n17576,1367,AMER,fashion,online,40.51,1,0.224,none,2024-02-02\r\n17577,1019,APAC,grocery,online,38.41,5,0.014,coupon,2024-04-20\r\n17578,2208,AMER,electronics,online,42.15,1,0.159,coupon,2024-07-16\r\n17579,2265,APAC,toys,online,75.84,2,0.002,bundle,2024-11-22\r\n17580,2270,APAC,grocery,online,118.34,5,0.013,none,2024-04-16\r\n17581,1133,EMEA,toys,retail,190.86,1,0.013,loyalty,2024-04-02\r\n17582,2171,EMEA,fashion,partner,195.98,5,0.131,none,2024-04-19\r\n17583,1192,EMEA,fashion,retail,85.76,2,0.234,coupon,2024-05-12\r\n17584,2085,AMER,electronics,online,35.46,4,0.203,loyalty,2024-10-05\r\n17585,1401,LATAM,toys,online,69.96,5,0.249,none,2024-03-22\r\n17586,2039,EMEA,electronics,partner,29.75,8,0.224,coupon,2024-07-22\r\n17587,2030,EMEA,toys,online,67.43,3,0.222,bundle,2024-06-12\r\n17588,2235,AMER,grocery,mobile,117.29,5,0.234,bundle,2024-11-09\r\n17589,2269,EMEA,grocery,retail,60.58,4,0.192,bundle,2024-02-01\r\n17590,2045,LATAM,electronics,retail,53.19,8,0.023,none,2024-12-20\r\n17591,1165,AMER,home,partner,37.15,8,0.196,none,2024-03-04\r\n17592,1326,AMER,fashion,retail,42.49,7,0.128,loyalty,2024-05-07\r\n17593,2392,EMEA,toys,online,52.03,4,0.232,bundle,2024-04-22\r\n17594,2051,APAC,home,online,109.03,7,0.111,none,2024-02-05\r\n17595,1132,EMEA,grocery,retail,98.64,4,0.086,none,2024-06-08\r\n17596,1859,AMER,home,online,72.77,3,0.156,loyalty,2024-05-10\r\n17597,1679,APAC,sports,retail,81.62,5,0.094,none,2024-01-06\r\n17598,1069,APAC,home,retail,69.36,2,0.169,none,2024-12-27\r\n17599,1192,EMEA,sports,mobile,62.11,8,0.026,none,2024-02-25\r\n17600,2258,AMER,sports,retail,48.13,6,0.023,none,2024-09-05\r\n17601,2413,AMER,electronics,online,139.70,2,0.218,none,2024-04-01\r\n17602,2249,LATAM,home,retail,46.28,2,0.058,none,2024-08-14\r\n17603,1647,LATAM,sports,retail,81.75,1,0.125,loyalty,2024-02-03\r\n17604,2037,LATAM,fashion,retail,42.55,6,0.136,coupon,2024-11-04\r\n17605,1800,APAC,electronics,retail,35.76,3,0.223,none,2024-08-20\r\n17606
,1910,LATAM,grocery,retail,96.36,1,0.043,none,2024-11-06\r\n17607,2385,APAC,grocery,retail,76.53,2,0.131,none,2024-08-01\r\n17608,1489,AMER,electronics,mobile,54.84,8,0.103,none,2024-12-01\r\n17609,1664,LATAM,home,online,37.40,6,0.071,coupon,2024-02-11\r\n17610,2115,APAC,home,mobile,35.04,1,0.074,none,2024-06-17\r\n17611,2209,AMER,grocery,online,124.65,2,0.134,none,2024-01-03\r\n17612,1683,AMER,electronics,mobile,50.62,4,0.214,none,2024-03-15\r\n17613,1821,LATAM,sports,mobile,31.24,6,0.179,none,2024-05-27\r\n17614,1824,LATAM,fashion,partner,122.11,6,0.042,none,2024-02-02\r\n17615,1732,LATAM,grocery,mobile,65.66,6,0.026,none,2024-09-01\r\n17616,1382,LATAM,fashion,online,50.22,2,0.027,none,2024-11-12\r\n17617,1173,LATAM,electronics,retail,58.26,8,0.013,none,2024-06-06\r\n17618,1761,EMEA,electronics,retail,47.06,8,0.147,none,2024-01-18\r\n17619,2443,LATAM,home,mobile,92.61,5,0.055,none,2024-10-02\r\n17620,1524,LATAM,fashion,online,34.40,8,0.151,none,2024-06-12\r\n17621,2439,AMER,electronics,retail,26.04,7,0.036,loyalty,2024-04-17\r\n17622,1577,AMER,grocery,online,30.95,7,0.154,none,2024-01-04\r\n17623,1678,LATAM,electronics,retail,76.89,5,0.042,none,2024-01-24\r\n17624,1085,EMEA,fashion,online,38.69,3,0.222,bundle,2024-12-18\r\n17625,1658,AMER,fashion,online,44.68,6,0.177,bundle,2024-02-06\r\n17626,1756,EMEA,grocery,mobile,82.38,3,0.111,none,2024-12-24\r\n17627,1439,LATAM,fashion,online,56.66,3,0.017,loyalty,2024-01-20\r\n17628,1765,EMEA,electronics,online,38.54,3,0.189,bundle,2024-03-28\r\n17629,1546,EMEA,grocery,online,60.46,6,0.059,none,2024-08-02\r\n17630,1182,EMEA,grocery,online,93.88,3,0.128,none,2024-06-28\r\n17631,1702,AMER,fashion,online,78.68,1,0.177,loyalty,2024-07-13\r\n17632,1782,LATAM,toys,retail,61.56,6,0.232,none,2024-03-27\r\n17633,1521,LATAM,home,retail,102.86,1,0.165,coupon,2024-12-28\r\n17634,1103,EMEA,grocery,online,51.35,7,0.106,none,2024-04-01\r\n17635,1391,LATAM,home,mobile,49.76,4,0.031,coupon,2024-07-28\r\n17636,1834,AMER,grocery,retail,43.23,
3,0.231,coupon,2024-02-07\r\n17637,2068,LATAM,sports,online,42.83,5,0.218,bundle,2024-07-22\r\n17638,1067,APAC,home,retail,28.81,4,0.102,none,2024-09-12\r\n17639,1920,LATAM,toys,retail,46.56,5,0.182,coupon,2024-08-25\r\n17640,1891,APAC,fashion,retail,28.11,6,0.123,loyalty,2024-03-09\r\n17641,1445,APAC,grocery,retail,57.95,6,0.189,none,2024-08-10\r\n17642,1885,EMEA,electronics,online,33.80,7,0.226,none,2024-05-05\r\n17643,1167,EMEA,fashion,retail,75.02,2,0.099,bundle,2024-10-01\r\n17644,1110,LATAM,electronics,partner,36.10,2,0.161,none,2024-06-12\r\n17645,1823,EMEA,toys,retail,31.85,4,0.126,bundle,2024-04-21\r\n17646,2429,EMEA,home,mobile,224.01,5,0.093,none,2024-03-02\r\n17647,1610,LATAM,fashion,retail,38.34,8,0.148,loyalty,2024-02-25\r\n17648,1132,EMEA,sports,retail,41.61,6,0.236,coupon,2024-11-02\r\n17649,1536,LATAM,fashion,online,72.67,7,0.225,none,2024-02-10\r\n17650,2466,APAC,toys,retail,55.60,5,0.104,none,2024-07-15\r\n17651,2237,EMEA,sports,retail,56.32,1,0.229,none,2024-01-19\r\n17652,2126,APAC,toys,retail,65.91,6,0.042,coupon,2024-12-28\r\n17653,1335,APAC,sports,online,66.19,4,0.035,none,2024-04-14\r\n17654,1153,AMER,grocery,partner,33.06,7,0.096,coupon,2024-03-17\r\n17655,2180,AMER,fashion,mobile,31.21,4,0.227,none,2024-07-01\r\n17656,1663,LATAM,electronics,online,117.43,2,0.092,coupon,2024-07-19\r\n17657,1194,APAC,toys,retail,125.29,2,0.024,none,2024-01-21\r\n17658,1293,AMER,toys,online,48.85,1,0.059,none,2024-02-09\r\n17659,1536,LATAM,toys,retail,27.98,2,0.224,none,2024-02-16\r\n17660,1923,LATAM,grocery,online,90.51,2,0.215,none,2024-07-15\r\n17661,2416,LATAM,home,partner,57.49,5,0.005,none,2024-04-11\r\n17662,1060,LATAM,grocery,online,56.01,6,0.036,coupon,2024-11-26\r\n17663,2136,AMER,sports,retail,15.37,3,0.122,coupon,2024-08-22\r\n17664,2467,AMER,fashion,online,43.38,3,0.079,none,2024-05-25\r\n17665,1649,APAC,electronics,online,38.01,5,0.155,bundle,2024-01-22\r\n17666,1619,APAC,electronics,online,32.42,6,0.133,none,2024-11-16\r\n17667,1807,EMEA,fashio
n,online,40.92,6,0.078,none,2024-05-20\r\n17668,1521,LATAM,fashion,retail,37.67,1,0.010,bundle,2024-08-07\r\n17669,1006,AMER,electronics,online,31.97,1,0.179,bundle,2024-03-24\r\n17670,1823,EMEA,grocery,retail,69.36,5,0.149,loyalty,2024-08-05\r\n17671,1361,LATAM,grocery,online,61.38,1,0.101,loyalty,2024-12-05\r\n17672,1031,AMER,grocery,retail,55.35,1,0.124,none,2024-01-07\r\n17673,1479,AMER,toys,online,40.06,8,0.222,none,2024-02-21\r\n17674,1849,EMEA,home,online,64.47,8,0.237,none,2024-08-24\r\n17675,1445,APAC,electronics,mobile,37.97,8,0.011,none,2024-07-09\r\n17676,1117,LATAM,electronics,retail,41.29,4,0.092,none,2024-11-14\r\n17677,1983,LATAM,grocery,online,30.22,5,0.157,none,2024-01-02\r\n17678,1922,EMEA,grocery,partner,92.19,1,0.098,coupon,2024-05-24\r\n17679,2012,APAC,sports,online,27.16,1,0.055,bundle,2024-07-24\r\n17680,1215,LATAM,electronics,partner,39.66,5,0.161,none,2024-08-16\r\n17681,1445,APAC,electronics,mobile,91.74,7,0.178,none,2024-12-16\r\n17682,1128,LATAM,fashion,retail,30.51,4,0.062,none,2024-08-10\r\n17683,1454,APAC,home,retail,53.11,5,0.053,loyalty,2024-11-28\r\n17684,1984,LATAM,sports,online,90.70,2,0.010,none,2024-10-11\r\n17685,1434,EMEA,grocery,retail,52.87,2,0.048,none,2024-09-07\r\n17686,1654,EMEA,sports,mobile,51.19,2,0.175,bundle,2024-07-05\r\n17687,1923,LATAM,electronics,online,50.15,5,0.245,none,2024-12-04\r\n17688,1725,APAC,grocery,retail,52.27,6,0.054,none,2024-12-07\r\n17689,1918,EMEA,sports,retail,46.07,3,0.020,none,2024-10-18\r\n17690,2190,LATAM,fashion,retail,81.85,1,0.159,none,2024-04-26\r\n17691,1071,AMER,home,online,65.08,2,0.209,coupon,2024-03-08\r\n17692,1145,AMER,sports,online,99.91,7,0.196,none,2024-11-17\r\n17693,1581,APAC,sports,online,56.27,1,0.153,none,2024-06-28\r\n17694,1502,APAC,electronics,online,87.80,3,0.218,none,2024-01-10\r\n17695,1654,EMEA,sports,mobile,56.53,8,0.038,none,2024-12-22\r\n17696,1880,LATAM,grocery,mobile,79.22,4,0.094,none,2024-06-03\r\n17697,2304,LATAM,electronics,retail,50.29,1,0.119,none,2024-
11-19\r\n17698,2420,EMEA,toys,retail,103.35,6,0.042,loyalty,2024-02-11\r\n17699,1984,LATAM,toys,retail,44.36,3,0.009,coupon,2024-01-22\r\n17700,2487,LATAM,home,retail,38.96,1,0.224,coupon,2024-06-20\r\n17701,1907,EMEA,sports,online,268.88,1,0.155,bundle,2024-10-21\r\n17702,1430,EMEA,home,online,41.58,3,0.197,coupon,2024-07-19\r\n17703,2377,AMER,home,online,31.62,1,0.090,none,2024-10-27\r\n17704,1569,APAC,fashion,mobile,50.57,3,0.250,none,2024-05-12\r\n17705,1310,AMER,home,mobile,32.68,6,0.026,none,2024-09-12\r\n17706,1267,EMEA,grocery,online,64.48,8,0.009,coupon,2024-08-10\r\n17707,2436,LATAM,grocery,retail,55.29,1,0.110,coupon,2024-02-15\r\n17708,1095,APAC,electronics,retail,59.30,7,0.149,bundle,2024-05-21\r\n17709,1852,AMER,home,retail,111.19,7,0.063,bundle,2024-12-15\r\n17710,1717,AMER,grocery,retail,33.69,8,0.203,none,2024-10-02\r\n17711,1500,EMEA,home,partner,49.89,5,0.128,none,2024-02-23\r\n17712,2444,EMEA,sports,online,16.87,2,0.010,none,2024-12-12\r\n17713,2025,EMEA,electronics,online,40.88,5,0.192,bundle,2024-01-27\r\n17714,1372,APAC,toys,mobile,77.43,7,0.149,none,2024-01-08\r\n17715,2398,EMEA,electronics,retail,64.65,8,0.005,none,2024-04-08\r\n17716,2206,AMER,sports,retail,47.83,5,0.038,none,2024-12-13\r\n17717,2042,LATAM,electronics,mobile,29.75,1,0.032,none,2024-05-06\r\n17718,1013,LATAM,electronics,mobile,81.30,5,0.235,none,2024-01-27\r\n17719,1715,AMER,electronics,partner,35.65,8,0.023,none,2024-03-08\r\n17720,2354,LATAM,home,online,75.42,6,0.167,bundle,2024-03-07\r\n17721,1935,EMEA,grocery,online,74.65,2,0.144,loyalty,2024-11-20\r\n17722,2481,APAC,toys,mobile,150.25,8,0.067,loyalty,2024-12-09\r\n17723,2350,APAC,grocery,online,83.02,5,0.210,none,2024-08-11\r\n17724,1802,AMER,home,online,37.14,8,0.223,bundle,2024-12-21\r\n17725,1650,LATAM,grocery,retail,45.45,4,0.072,none,2024-07-09\r\n17726,1360,APAC,home,online,36.49,6,0.141,none,2024-04-08\r\n17727,2117,EMEA,fashion,online,123.37,6,0.016,none,2024-05-10\r\n17728,2342,AMER,home,online,49.75,3,0.204,no
ne,2024-06-21\r\n17729,2323,AMER,electronics,retail,107.99,4,0.092,coupon,2024-04-24\r\n17730,1096,EMEA,grocery,online,47.66,8,0.034,none,2024-03-07\r\n17731,1422,LATAM,electronics,partner,36.88,8,0.241,coupon,2024-09-15\r\n17732,1723,LATAM,home,mobile,39.34,8,0.041,coupon,2024-09-11\r\n17733,2333,APAC,sports,retail,64.05,5,0.220,none,2024-10-04\r\n17734,1819,AMER,electronics,partner,83.16,7,0.102,none,2024-06-19\r\n17735,1966,APAC,electronics,retail,76.93,1,0.201,none,2024-02-13\r\n17736,2358,AMER,home,retail,105.22,7,0.204,none,2024-02-20\r\n17737,1735,LATAM,home,online,61.91,5,0.127,none,2024-01-22\r\n17738,1392,AMER,electronics,partner,54.34,3,0.208,none,2024-06-20\r\n17739,2203,APAC,home,retail,47.18,3,0.082,coupon,2024-07-04\r\n17740,1772,EMEA,grocery,mobile,110.79,4,0.072,coupon,2024-08-13\r\n17741,2190,LATAM,fashion,retail,70.26,6,0.080,none,2024-11-19\r\n17742,1858,LATAM,home,online,74.74,4,0.238,none,2024-03-09\r\n17743,1057,LATAM,fashion,retail,126.25,4,0.023,bundle,2024-03-22\r\n17744,2349,APAC,toys,online,72.21,5,0.169,none,2024-10-06\r\n17745,1034,EMEA,sports,online,82.60,2,0.132,none,2024-08-05\r\n17746,1596,EMEA,electronics,online,30.31,5,0.207,none,2024-07-09\r\n17747,1009,APAC,fashion,online,43.79,1,0.174,coupon,2024-10-07\r\n17748,2496,EMEA,fashion,mobile,17.86,8,0.139,bundle,2024-02-20\r\n17749,1192,EMEA,fashion,online,126.21,4,0.197,coupon,2024-04-16\r\n17750,1760,LATAM,sports,retail,100.85,4,0.141,none,2024-06-02\r\n17751,1047,APAC,sports,online,96.70,5,0.181,coupon,2024-12-11\r\n17752,1271,EMEA,grocery,online,48.19,6,0.004,bundle,2024-03-01\r\n17753,2170,EMEA,grocery,retail,69.81,4,0.126,coupon,2024-09-28\r\n17754,2398,EMEA,fashion,retail,43.57,6,0.146,none,2024-08-14\r\n17755,2388,LATAM,grocery,online,25.82,3,0.006,none,2024-03-14\r\n17756,1281,AMER,grocery,online,24.13,2,0.085,loyalty,2024-02-03\r\n17757,2396,AMER,home,online,48.75,2,0.240,none,2024-06-06\r\n17758,2183,EMEA,home,online,35.33,4,0.228,none,2024-09-04\r\n17759,2425,APAC,sports,
retail,82.65,2,0.021,none,2024-11-10\r\n17760,1862,LATAM,toys,retail,103.97,5,0.106,loyalty,2024-08-21\r\n17761,1019,APAC,electronics,online,87.56,2,0.176,none,2024-11-13\r\n17762,1601,APAC,home,retail,42.82,6,0.231,none,2024-06-15\r\n17763,1993,APAC,home,online,87.18,4,0.134,none,2024-09-14\r\n17764,1305,EMEA,home,retail,181.51,4,0.193,none,2024-08-14\r\n17765,1300,EMEA,grocery,retail,75.45,7,0.169,none,2024-06-15\r\n17766,1489,AMER,fashion,retail,142.37,3,0.030,none,2024-05-18\r\n17767,2282,EMEA,fashion,retail,81.41,2,0.109,coupon,2024-07-24\r\n17768,2425,APAC,home,mobile,21.42,3,0.245,none,2024-07-10\r\n17769,1229,LATAM,grocery,retail,64.75,1,0.233,bundle,2024-04-28\r\n17770,1067,APAC,grocery,online,70.84,5,0.189,none,2024-10-01\r\n17771,2094,AMER,grocery,retail,24.96,2,0.245,loyalty,2024-08-13\r\n17772,2122,AMER,home,online,72.14,8,0.158,bundle,2024-06-10\r\n17773,1930,AMER,electronics,mobile,33.09,8,0.156,loyalty,2024-08-09\r\n17774,1647,LATAM,grocery,online,39.09,1,0.236,coupon,2024-01-24\r\n17775,1274,LATAM,grocery,online,100.62,3,0.249,none,2024-11-18\r\n17776,2143,AMER,electronics,mobile,86.90,1,0.097,coupon,2024-09-12\r\n17777,1713,EMEA,fashion,mobile,27.59,6,0.140,loyalty,2024-09-15\r\n17778,1752,APAC,toys,mobile,56.01,2,0.240,none,2024-05-05\r\n17779,1512,APAC,home,retail,31.88,7,0.155,none,2024-06-01\r\n17780,1297,AMER,grocery,retail,35.13,3,0.004,none,2024-08-03\r\n17781,1634,AMER,home,online,28.46,5,0.151,bundle,2024-04-07\r\n17782,2169,EMEA,grocery,online,26.51,2,0.237,none,2024-12-18\r\n17783,1254,APAC,fashion,retail,79.48,2,0.073,coupon,2024-05-07\r\n17784,2108,AMER,grocery,mobile,124.19,5,0.197,none,2024-07-19\r\n17785,2002,APAC,fashion,online,110.59,6,0.235,bundle,2024-03-26\r\n17786,1942,APAC,toys,retail,89.21,5,0.246,bundle,2024-07-14\r\n17787,2223,EMEA,toys,online,34.00,5,0.184,none,2024-11-24\r\n17788,1865,LATAM,home,mobile,45.15,6,0.169,none,2024-09-24\r\n17789,2104,EMEA,fashion,retail,44.89,8,0.217,none,2024-01-23\r\n17790,2188,EMEA,electro
nics,retail,81.92,7,0.082,bundle,2024-01-15\r\n17791,1414,APAC,electronics,partner,25.74,1,0.232,none,2024-08-17\r\n17792,2401,LATAM,electronics,online,30.77,6,0.030,none,2024-01-06\r\n17793,2350,APAC,sports,online,38.25,5,0.086,none,2024-12-09\r\n17794,1749,LATAM,grocery,online,118.88,1,0.150,coupon,2024-05-25\r\n17795,2485,AMER,toys,retail,62.22,7,0.002,none,2024-10-27\r\n17796,1796,LATAM,grocery,mobile,68.76,3,0.127,loyalty,2024-11-13\r\n17797,2019,AMER,electronics,retail,38.24,6,0.148,coupon,2024-04-06\r\n17798,2326,LATAM,home,online,50.45,5,0.139,coupon,2024-08-12\r\n17799,2271,LATAM,sports,online,17.69,5,0.058,none,2024-03-22\r\n17800,1081,AMER,sports,online,121.60,3,0.190,none,2024-11-02\r\n17801,1862,LATAM,fashion,online,52.08,2,0.018,loyalty,2024-07-21\r\n17802,1444,EMEA,fashion,mobile,60.81,5,0.155,coupon,2024-11-14\r\n17803,1640,APAC,grocery,online,87.78,8,0.095,bundle,2024-08-27\r\n17804,2001,EMEA,home,retail,52.79,8,0.073,coupon,2024-12-06\r\n17805,1526,EMEA,electronics,online,47.03,7,0.027,coupon,2024-01-21\r\n17806,1153,AMER,grocery,retail,72.73,6,0.009,none,2024-03-17\r\n17807,1264,APAC,sports,online,44.17,6,0.149,coupon,2024-07-10\r\n17808,1843,EMEA,electronics,online,49.81,4,0.019,bundle,2024-10-02\r\n17809,1966,APAC,grocery,online,66.46,1,0.139,none,2024-04-28\r\n17810,2460,AMER,toys,retail,51.03,7,0.084,none,2024-10-24\r\n17811,1925,LATAM,sports,online,72.90,8,0.055,none,2024-08-17\r\n17812,1528,EMEA,electronics,online,43.93,3,0.102,none,2024-03-19\r\n17813,1814,AMER,home,online,42.29,3,0.149,loyalty,2024-05-28\r\n17814,1725,APAC,sports,online,40.52,3,0.004,coupon,2024-03-27\r\n17815,1481,LATAM,sports,retail,51.17,7,0.193,coupon,2024-10-18\r\n17816,2263,AMER,grocery,online,51.87,4,0.249,bundle,2024-10-13\r\n17817,1710,APAC,fashion,mobile,29.99,7,0.081,none,2024-11-27\r\n17818,1141,AMER,fashion,retail,33.29,7,0.104,none,2024-07-14\r\n17819,1570,AMER,sports,mobile,86.96,7,0.132,bundle,2024-04-16\r\n17820,1653,APAC,electronics,online,74.44,2,0.133,b
undle,2024-10-02\r\n17821,2047,AMER,fashion,mobile,17.86,4,0.246,bundle,2024-03-24\r\n17822,1375,AMER,fashion,retail,97.24,3,0.218,none,2024-09-10\r\n17823,1049,AMER,home,online,83.55,3,0.133,bundle,2024-02-22\r\n17824,1529,LATAM,grocery,retail,74.11,5,0.107,none,2024-11-18\r\n17825,2483,LATAM,home,online,78.18,1,0.059,none,2024-03-01\r\n17826,1265,APAC,toys,retail,53.91,8,0.095,coupon,2024-02-04\r\n17827,1029,EMEA,grocery,online,68.27,8,0.124,loyalty,2024-02-02\r\n17828,1858,LATAM,home,online,80.81,5,0.127,none,2024-04-17\r\n17829,1077,AMER,grocery,online,93.45,4,0.096,none,2024-10-08\r\n17830,1542,APAC,grocery,mobile,156.17,5,0.220,none,2024-01-08\r\n17831,1707,APAC,fashion,retail,52.62,7,0.104,coupon,2024-03-14\r\n17832,2475,AMER,home,retail,41.67,2,0.075,none,2024-12-20\r\n17833,1644,EMEA,grocery,retail,79.18,7,0.091,none,2024-05-06\r\n17834,2160,LATAM,fashion,mobile,41.38,7,0.171,none,2024-07-28\r\n17835,2396,AMER,fashion,retail,69.30,8,0.060,bundle,2024-02-05\r\n17836,1192,EMEA,electronics,retail,65.80,6,0.022,coupon,2024-09-05\r\n17837,1431,APAC,grocery,online,39.93,1,0.022,bundle,2024-02-27\r\n17838,1517,AMER,home,mobile,54.35,7,0.184,coupon,2024-03-13\r\n17839,1915,LATAM,home,mobile,58.93,6,0.044,none,2024-10-26\r\n17840,2364,APAC,toys,online,34.41,7,0.187,none,2024-09-10\r\n17841,1744,EMEA,grocery,online,92.29,8,0.071,coupon,2024-09-08\r\n17842,2378,LATAM,grocery,mobile,86.56,6,0.220,none,2024-05-28\r\n17843,2192,APAC,grocery,retail,79.80,1,0.015,none,2024-03-05\r\n17844,1858,LATAM,electronics,online,62.05,4,0.170,none,2024-06-14\r\n17845,1753,APAC,sports,mobile,62.62,1,0.118,coupon,2024-11-07\r\n17846,1982,EMEA,sports,retail,54.02,3,0.097,bundle,2024-10-01\r\n17847,1117,LATAM,grocery,online,17.07,7,0.245,none,2024-09-11\r\n17848,2303,EMEA,grocery,online,82.73,4,0.000,coupon,2024-03-14\r\n17849,1042,LATAM,home,online,129.26,8,0.203,coupon,2024-07-09\r\n17850,1925,LATAM,home,retail,67.90,4,0.109,none,2024-11-06\r\n17851,2473,EMEA,electronics,mobile,43.61,6,
0.179,none,2024-06-07\r\n17852,1472,AMER,sports,retail,21.76,1,0.226,none,2024-03-12\r\n17853,2186,LATAM,electronics,retail,43.14,5,0.118,bundle,2024-08-08\r\n17854,1545,AMER,home,online,107.43,4,0.168,none,2024-04-23\r\n17855,1898,EMEA,sports,online,83.17,7,0.154,bundle,2024-03-26\r\n17856,1006,AMER,sports,retail,49.10,3,0.128,none,2024-04-03\r\n17857,2116,LATAM,electronics,online,31.14,6,0.097,none,2024-07-27\r\n17858,1830,EMEA,fashion,retail,47.31,6,0.109,bundle,2024-07-20\r\n17859,2079,EMEA,electronics,online,51.46,1,0.097,coupon,2024-10-24\r\n17860,1533,APAC,sports,retail,73.72,4,0.066,coupon,2024-06-23\r\n17861,2490,AMER,fashion,retail,29.49,8,0.108,none,2024-11-17\r\n17862,1744,EMEA,toys,online,66.34,3,0.230,bundle,2024-02-22\r\n17863,1949,AMER,toys,online,33.77,1,0.035,coupon,2024-10-17\r\n17864,1397,LATAM,electronics,retail,105.16,5,0.197,none,2024-12-17\r\n17865,1779,APAC,home,online,26.68,2,0.117,none,2024-07-11\r\n17866,1080,LATAM,electronics,online,57.46,6,0.077,none,2024-11-17\r\n17867,1011,APAC,sports,mobile,49.14,2,0.001,bundle,2024-09-27\r\n17868,1919,EMEA,grocery,retail,77.81,4,0.102,coupon,2024-05-04\r\n17869,2295,EMEA,home,retail,39.43,5,0.007,coupon,2024-05-01\r\n17870,2176,AMER,grocery,online,50.20,6,0.060,bundle,2024-08-17\r\n17871,1699,APAC,grocery,retail,165.97,7,0.047,coupon,2024-05-24\r\n17872,1406,LATAM,home,mobile,54.41,4,0.030,none,2024-08-27\r\n17873,1363,EMEA,grocery,retail,22.21,2,0.089,none,2024-03-18\r\n17874,1543,AMER,electronics,retail,77.21,6,0.190,none,2024-02-16\r\n17875,1085,EMEA,electronics,online,45.37,2,0.235,bundle,2024-04-25\r\n17876,1592,LATAM,grocery,partner,16.09,7,0.109,none,2024-02-27\r\n17877,2223,EMEA,grocery,retail,61.11,7,0.094,coupon,2024-12-09\r\n17878,1895,AMER,sports,retail,62.12,4,0.037,coupon,2024-01-22\r\n17879,1066,AMER,electronics,mobile,39.34,4,0.023,coupon,2024-10-26\r\n17880,2208,AMER,toys,online,68.12,5,0.058,none,2024-04-26\r\n17881,1938,APAC,grocery,retail,92.11,7,0.004,loyalty,2024-03-01\r\n17882
,2354,LATAM,grocery,partner,87.78,6,0.095,none,2024-09-01\r\n17883,1869,AMER,grocery,retail,57.34,4,0.072,coupon,2024-11-14\r\n17884,2396,AMER,grocery,online,98.70,1,0.032,none,2024-06-02\r\n17885,1513,APAC,home,mobile,70.10,5,0.034,coupon,2024-08-21\r\n17886,2068,LATAM,grocery,partner,35.71,4,0.103,loyalty,2024-05-17\r\n17887,2049,LATAM,toys,retail,57.36,7,0.100,coupon,2024-02-21\r\n17888,1607,LATAM,sports,online,34.69,8,0.011,none,2024-10-19\r\n17889,1711,APAC,home,online,39.96,2,0.081,bundle,2024-09-17\r\n17890,1008,AMER,electronics,retail,94.75,5,0.211,coupon,2024-03-25\r\n17891,1345,AMER,home,online,42.26,4,0.138,none,2024-08-19\r\n17892,1732,LATAM,toys,online,113.16,4,0.124,none,2024-03-23\r\n17893,1422,LATAM,grocery,retail,24.38,1,0.025,coupon,2024-06-25\r\n17894,1932,EMEA,grocery,mobile,32.74,3,0.195,none,2024-06-11\r\n17895,1399,AMER,fashion,online,66.90,1,0.086,none,2024-04-19\r\n17896,1690,LATAM,home,online,14.78,3,0.234,bundle,2024-02-08\r\n17897,1777,AMER,electronics,online,63.40,8,0.165,bundle,2024-11-28\r\n17898,1245,APAC,fashion,retail,42.00,8,0.089,none,2024-07-20\r\n17899,1102,APAC,grocery,partner,23.97,5,0.003,none,2024-07-25\r\n17900,1455,APAC,home,online,51.18,2,0.236,coupon,2024-03-25\r\n17901,2405,AMER,electronics,partner,114.21,3,0.017,none,2024-01-07\r\n17902,1020,APAC,sports,retail,34.43,1,0.012,coupon,2024-01-02\r\n17903,2497,AMER,grocery,online,166.84,3,0.143,none,2024-05-24\r\n17904,1204,AMER,electronics,mobile,80.89,2,0.044,bundle,2024-10-04\r\n17905,1233,AMER,toys,online,76.28,5,0.196,none,2024-09-22\r\n17906,1898,EMEA,electronics,mobile,28.54,8,0.138,loyalty,2024-02-11\r\n17907,1796,LATAM,electronics,online,55.30,4,0.189,loyalty,2024-10-23\r\n17908,2331,APAC,toys,online,63.08,4,0.073,loyalty,2024-02-03\r\n17909,1922,EMEA,electronics,retail,41.94,8,0.246,none,2024-12-20\r\n17910,1634,AMER,home,retail,47.97,8,0.151,coupon,2024-09-23\r\n17911,2366,APAC,home,mobile,53.24,4,0.018,none,2024-12-08\r\n17912,1253,AMER,grocery,retail,64.29,7,0.
003,bundle,2024-11-07\r\n17913,2479,EMEA,home,mobile,42.42,8,0.216,loyalty,2024-02-27\r\n17914,1009,APAC,fashion,online,97.83,7,0.205,none,2024-10-05\r\n17915,1831,APAC,toys,retail,44.61,8,0.174,none,2024-04-21\r\n17916,2364,APAC,fashion,retail,69.08,8,0.068,loyalty,2024-02-17\r\n17917,1927,EMEA,fashion,online,52.74,7,0.218,coupon,2024-01-26\r\n17918,1063,AMER,electronics,online,41.13,1,0.235,coupon,2024-03-16\r\n17919,2242,AMER,grocery,online,59.98,2,0.049,coupon,2024-04-16\r\n17920,2424,LATAM,fashion,retail,32.02,2,0.108,coupon,2024-05-11\r\n17921,2103,LATAM,home,retail,57.87,5,0.156,bundle,2024-03-23\r\n17922,1762,LATAM,electronics,online,113.90,7,0.163,loyalty,2024-10-27\r\n17923,2189,LATAM,grocery,retail,75.03,7,0.060,none,2024-10-13\r\n17924,1086,AMER,grocery,online,171.37,4,0.238,coupon,2024-05-21\r\n17925,1088,LATAM,home,online,18.84,3,0.180,none,2024-06-27\r\n17926,2218,EMEA,toys,online,80.58,6,0.155,loyalty,2024-02-06\r\n17927,1497,EMEA,fashion,retail,57.08,8,0.012,none,2024-02-22\r\n17928,2096,LATAM,home,retail,101.70,3,0.216,none,2024-12-21\r\n17929,2128,EMEA,grocery,online,36.90,3,0.175,none,2024-10-17\r\n17930,1096,EMEA,grocery,partner,48.16,6,0.248,coupon,2024-07-24\r\n17931,1690,LATAM,home,retail,47.18,2,0.043,none,2024-03-02\r\n17932,1484,AMER,fashion,retail,67.98,6,0.094,bundle,2024-07-28\r\n17933,1557,LATAM,fashion,online,59.34,3,0.097,bundle,2024-06-13\r\n17934,2359,LATAM,electronics,mobile,54.71,1,0.025,none,2024-03-15\r\n17935,2107,APAC,electronics,retail,24.89,6,0.034,bundle,2024-12-03\r\n17936,1364,EMEA,fashion,retail,50.73,4,0.031,none,2024-04-16\r\n17937,1972,LATAM,grocery,online,133.81,8,0.154,none,2024-10-24\r\n17938,2186,LATAM,grocery,retail,90.74,1,0.151,coupon,2024-12-11\r\n17939,2004,LATAM,home,mobile,66.16,1,0.182,none,2024-03-07\r\n17940,1133,EMEA,grocery,online,163.99,7,0.108,none,2024-03-20\r\n17941,1097,EMEA,grocery,mobile,40.56,8,0.192,none,2024-03-25\r\n17942,1394,LATAM,grocery,retail,25.83,2,0.116,bundle,2024-04-22\r\n17943,14
09,APAC,grocery,retail,92.82,3,0.201,none,2024-11-26\r\n17944,1653,APAC,home,partner,75.20,6,0.048,none,2024-02-18\r\n17945,1683,AMER,home,online,80.18,6,0.246,none,2024-12-20\r\n17946,1638,EMEA,grocery,online,32.67,5,0.219,none,2024-11-10\r\n17947,1954,APAC,home,retail,35.42,4,0.061,none,2024-06-21\r\n17948,1749,LATAM,grocery,retail,35.97,5,0.010,none,2024-03-16\r\n17949,2358,AMER,grocery,retail,58.54,6,0.183,bundle,2024-08-20\r\n17950,1504,AMER,electronics,retail,31.29,8,0.101,none,2024-08-23\r\n17951,1634,AMER,electronics,retail,94.84,2,0.095,none,2024-12-20\r\n17952,2209,AMER,toys,online,62.11,7,0.169,loyalty,2024-02-03\r\n17953,1330,EMEA,fashion,online,94.06,8,0.037,none,2024-11-16\r\n17954,1672,APAC,grocery,retail,42.38,2,0.046,none,2024-06-22\r\n17955,2416,LATAM,grocery,partner,138.15,7,0.247,none,2024-08-24\r\n17956,1719,LATAM,electronics,online,149.87,5,0.200,bundle,2024-02-14\r\n17957,1027,APAC,fashion,online,31.33,1,0.104,loyalty,2024-03-28\r\n17958,1109,APAC,sports,partner,71.00,8,0.179,bundle,2024-06-14\r\n17959,1084,AMER,fashion,online,75.67,8,0.002,coupon,2024-10-01\r\n17960,1774,EMEA,fashion,online,48.31,7,0.171,coupon,2024-04-23\r\n17961,1109,APAC,fashion,online,34.37,2,0.120,none,2024-01-24\r\n17962,1667,AMER,grocery,retail,33.96,2,0.024,none,2024-06-13\r\n17963,1985,AMER,sports,online,48.63,7,0.010,none,2024-01-03\r\n17964,1069,APAC,grocery,mobile,115.58,7,0.142,loyalty,2024-09-14\r\n17965,1706,EMEA,grocery,retail,129.13,5,0.123,loyalty,2024-05-09\r\n17966,1716,LATAM,grocery,online,68.31,2,0.173,coupon,2024-06-05\r\n17967,2253,AMER,home,retail,47.92,1,0.077,none,2024-12-11\r\n17968,2032,AMER,electronics,online,87.93,2,0.156,none,2024-09-01\r\n17969,1275,EMEA,electronics,online,51.46,7,0.085,none,2024-12-07\r\n17970,1908,AMER,toys,mobile,47.52,7,0.160,none,2024-12-28\r\n17971,1092,AMER,toys,online,107.15,3,0.074,coupon,2024-05-02\r\n17972,1925,LATAM,electronics,mobile,114.45,7,0.052,none,2024-09-15\r\n17973,1021,AMER,home,online,63.16,5,0.189,none,
2024-02-15\r\n17974,2044,APAC,home,online,39.44,5,0.209,none,2024-04-15\r\n17975,1979,APAC,grocery,online,56.52,3,0.058,none,2024-08-07\r\n17976,1063,AMER,fashion,retail,62.10,7,0.163,none,2024-04-17\r\n17977,2035,LATAM,grocery,retail,65.81,2,0.097,coupon,2024-07-20\r\n17978,2245,APAC,home,online,75.00,6,0.021,coupon,2024-08-01\r\n17979,2078,APAC,grocery,retail,30.04,4,0.127,loyalty,2024-09-04\r\n17980,1368,EMEA,electronics,online,119.60,5,0.115,none,2024-02-05\r\n17981,1459,LATAM,grocery,retail,84.84,3,0.186,none,2024-10-01\r\n17982,1845,AMER,grocery,online,52.24,7,0.112,coupon,2024-03-10\r\n17983,1342,LATAM,electronics,online,60.80,5,0.067,coupon,2024-08-26\r\n17984,1850,APAC,home,retail,153.44,7,0.063,none,2024-08-05\r\n17985,1421,APAC,toys,online,36.28,1,0.012,none,2024-03-12\r\n17986,1535,AMER,grocery,online,30.28,6,0.160,none,2024-05-23\r\n17987,1779,APAC,sports,online,89.09,3,0.211,none,2024-11-10\r\n17988,1679,APAC,grocery,online,80.20,8,0.182,coupon,2024-09-23\r\n17989,1051,EMEA,grocery,online,44.16,4,0.231,none,2024-01-17\r\n17990,1141,AMER,grocery,online,46.21,1,0.015,loyalty,2024-05-13\r\n17991,1090,AMER,electronics,retail,70.50,8,0.087,none,2024-02-28\r\n17992,1242,LATAM,toys,mobile,52.67,2,0.031,coupon,2024-10-08\r\n17993,1597,APAC,toys,online,31.52,7,0.191,none,2024-12-22\r\n17994,1140,LATAM,grocery,mobile,40.57,6,0.227,bundle,2024-09-05\r\n17995,1907,EMEA,grocery,mobile,38.76,5,0.075,bundle,2024-08-23\r\n17996,1042,LATAM,toys,online,69.77,7,0.012,none,2024-10-28\r\n17997,1232,LATAM,fashion,online,51.27,1,0.003,none,2024-12-14\r\n17998,1085,EMEA,grocery,retail,58.30,2,0.019,none,2024-02-13\r\n17999,1465,AMER,electronics,retail,106.10,6,0.224,none,2024-03-08\r\n18000,1185,LATAM,home,retail,39.44,4,0.018,coupon,2024-04-06\r\n18001,1942,APAC,grocery,online,78.98,4,0.224,coupon,2024-10-02\r\n18002,1671,APAC,fashion,retail,52.09,1,0.246,coupon,2024-04-04\r\n18003,1145,AMER,fashion,online,37.28,8,0.233,bundle,2024-03-02\r\n18004,1543,AMER,home,retail,58.14,
6,0.115,none,2024-03-06\r\n18005,1051,EMEA,electronics,retail,81.06,5,0.223,none,2024-10-13\r\n18006,1872,LATAM,home,online,43.31,1,0.131,none,2024-09-05\r\n18007,1524,LATAM,electronics,retail,96.67,5,0.224,bundle,2024-06-04\r\n18008,2260,EMEA,fashion,online,42.92,4,0.006,none,2024-05-01\r\n18009,2116,LATAM,home,online,33.43,1,0.126,none,2024-04-22\r\n18010,1136,EMEA,home,online,60.79,1,0.024,coupon,2024-11-15\r\n18011,1545,AMER,grocery,retail,101.83,2,0.091,coupon,2024-08-05\r\n18012,1159,LATAM,fashion,online,36.93,7,0.002,coupon,2024-05-13\r\n18013,1509,AMER,fashion,retail,70.78,2,0.162,none,2024-11-08\r\n18014,2261,EMEA,home,partner,28.49,1,0.050,none,2024-02-01\r\n18015,1665,AMER,sports,online,40.73,6,0.221,none,2024-08-16\r\n18016,1787,APAC,electronics,online,45.49,4,0.132,none,2024-03-27\r\n18017,1685,AMER,grocery,online,168.89,6,0.041,none,2024-08-04\r\n18018,1883,LATAM,fashion,online,51.69,2,0.086,coupon,2024-10-13\r\n18019,1557,LATAM,home,retail,42.48,6,0.149,none,2024-02-17\r\n18020,1985,AMER,electronics,retail,43.14,2,0.069,bundle,2024-11-23\r\n18021,1011,APAC,sports,retail,41.49,6,0.087,bundle,2024-09-08\r\n18022,2241,APAC,grocery,online,54.90,7,0.027,none,2024-07-07\r\n18023,1097,EMEA,grocery,retail,74.46,4,0.097,bundle,2024-02-24\r\n18024,1972,LATAM,electronics,retail,57.16,6,0.115,none,2024-04-26\r\n18025,2331,APAC,sports,online,30.51,7,0.165,none,2024-03-23\r\n18026,1321,EMEA,sports,mobile,50.99,7,0.245,none,2024-01-14\r\n18027,1240,EMEA,home,mobile,130.95,1,0.022,coupon,2024-09-11\r\n18028,1033,APAC,home,mobile,61.77,8,0.150,bundle,2024-08-23\r\n18029,1017,AMER,home,online,17.60,3,0.175,coupon,2024-10-10\r\n18030,1498,LATAM,fashion,online,80.09,3,0.219,loyalty,2024-09-17\r\n18031,2018,AMER,grocery,retail,49.20,6,0.161,bundle,2024-07-02\r\n18032,1539,LATAM,electronics,mobile,47.72,3,0.152,none,2024-09-23\r\n18033,2342,AMER,sports,online,36.04,7,0.087,none,2024-06-02\r\n18034,1561,EMEA,grocery,retail,50.94,4,0.110,none,2024-05-04\r\n18035,1416,EMEA,el
ectronics,online,40.06,2,0.163,none,2024-04-06\r\n18036,1521,LATAM,fashion,retail,57.35,2,0.232,loyalty,2024-09-07\r\n18037,2168,EMEA,grocery,online,28.62,2,0.215,none,2024-07-15\r\n18038,1335,APAC,fashion,retail,92.36,2,0.050,coupon,2024-06-11\r\n18039,1824,LATAM,electronics,online,82.75,2,0.124,none,2024-08-27\r\n18040,1221,LATAM,fashion,online,30.92,2,0.034,coupon,2024-09-03\r\n18041,1333,EMEA,electronics,online,54.53,3,0.199,none,2024-07-16\r\n18042,1481,LATAM,electronics,partner,93.76,1,0.049,none,2024-06-07\r\n18043,2126,APAC,home,online,54.46,6,0.106,loyalty,2024-01-17\r\n18044,1202,APAC,electronics,retail,72.67,7,0.212,loyalty,2024-11-19\r\n18045,1446,AMER,fashion,online,91.15,7,0.201,loyalty,2024-04-25\r\n18046,2483,LATAM,fashion,online,106.78,1,0.038,none,2024-09-05\r\n18047,1944,AMER,home,mobile,38.22,5,0.171,coupon,2024-02-10\r\n18048,2235,AMER,electronics,online,40.37,1,0.198,loyalty,2024-01-07\r\n18049,1866,EMEA,electronics,mobile,11.81,4,0.078,none,2024-03-28\r\n18050,1627,LATAM,electronics,retail,23.16,3,0.117,none,2024-09-17\r\n18051,2101,APAC,home,online,125.86,2,0.185,loyalty,2024-10-21\r\n18052,1175,AMER,grocery,retail,49.87,6,0.030,coupon,2024-02-07\r\n18053,1390,APAC,home,mobile,68.15,2,0.180,bundle,2024-06-13\r\n18054,2203,APAC,grocery,partner,49.50,1,0.082,bundle,2024-10-12\r\n18055,1073,AMER,grocery,online,45.82,1,0.048,none,2024-07-14\r\n18056,1323,EMEA,fashion,online,56.54,4,0.163,coupon,2024-01-22\r\n18057,1557,LATAM,grocery,online,137.45,6,0.185,none,2024-03-12\r\n18058,1837,LATAM,toys,mobile,22.50,2,0.134,none,2024-02-20\r\n18059,2306,AMER,grocery,online,61.75,5,0.038,none,2024-11-28\r\n18060,2161,LATAM,electronics,online,28.03,6,0.094,none,2024-11-09\r\n18061,1192,EMEA,grocery,retail,19.54,2,0.095,bundle,2024-11-08\r\n18062,1407,LATAM,home,online,55.00,8,0.174,none,2024-12-23\r\n18063,1911,LATAM,grocery,online,31.90,3,0.147,coupon,2024-02-25\r\n18064,1826,LATAM,grocery,online,72.65,8,0.219,none,2024-10-28\r\n18065,1834,AMER,home,retail
,38.09,3,0.209,none,2024-04-10\r\n18066,1548,EMEA,sports,online,72.76,2,0.010,loyalty,2024-04-06\r\n18067,1959,EMEA,toys,online,68.83,2,0.068,none,2024-02-28\r\n18068,1265,APAC,electronics,online,46.56,4,0.186,bundle,2024-03-23\r\n18069,1884,APAC,fashion,retail,48.82,3,0.193,none,2024-11-04\r\n18070,1964,EMEA,home,online,88.24,2,0.011,none,2024-05-06\r\n18071,1589,AMER,electronics,online,109.79,2,0.147,loyalty,2024-08-07\r\n18072,2211,APAC,electronics,online,58.08,8,0.137,none,2024-10-21\r\n18073,2317,LATAM,grocery,retail,30.82,4,0.048,none,2024-04-26\r\n18074,1425,EMEA,sports,retail,83.83,5,0.066,loyalty,2024-06-03\r\n18075,2059,AMER,fashion,online,33.14,7,0.167,none,2024-02-21\r\n18076,1500,EMEA,electronics,online,139.41,2,0.228,none,2024-08-10\r\n18077,2032,AMER,toys,online,53.10,2,0.008,bundle,2024-11-21\r\n18078,1010,EMEA,grocery,online,30.73,7,0.035,coupon,2024-02-27\r\n18079,1906,APAC,grocery,mobile,48.43,3,0.137,none,2024-08-05\r\n18080,2216,AMER,home,online,88.03,4,0.053,bundle,2024-10-06\r\n18081,2024,AMER,toys,online,56.59,8,0.149,none,2024-08-09\r\n18082,2359,LATAM,grocery,retail,59.95,1,0.129,none,2024-08-08\r\n18083,2342,AMER,fashion,mobile,37.42,7,0.203,none,2024-03-07\r\n18084,1181,LATAM,grocery,online,97.03,8,0.120,loyalty,2024-09-23\r\n18085,2131,APAC,home,online,43.39,1,0.098,bundle,2024-08-04\r\n18086,2419,LATAM,home,online,42.90,2,0.038,loyalty,2024-08-13\r\n18087,1364,EMEA,electronics,online,55.02,6,0.006,none,2024-09-14\r\n18088,1257,APAC,toys,retail,88.46,1,0.190,none,2024-09-02\r\n18089,2392,EMEA,home,online,126.44,8,0.121,bundle,2024-02-07\r\n18090,2316,EMEA,electronics,online,57.29,4,0.052,bundle,2024-12-12\r\n18091,2160,LATAM,electronics,partner,65.13,5,0.227,coupon,2024-10-08\r\n18092,1302,LATAM,electronics,online,110.88,2,0.062,bundle,2024-08-15\r\n18093,1438,APAC,grocery,mobile,45.95,2,0.158,none,2024-07-22\r\n18094,1699,APAC,toys,online,111.05,4,0.167,none,2024-04-22\r\n18095,1281,AMER,electronics,retail,72.87,6,0.151,none,2024-01-01\
r\n18096,2007,LATAM,home,mobile,73.23,7,0.042,none,2024-11-27\r\n18097,1648,APAC,fashion,partner,47.07,3,0.022,none,2024-05-21\r\n18098,1837,LATAM,fashion,partner,65.96,5,0.135,bundle,2024-06-28\r\n18099,1709,EMEA,grocery,retail,55.32,3,0.239,coupon,2024-03-07\r\n18100,2164,AMER,electronics,online,83.21,1,0.048,coupon,2024-05-13\r\n18101,1821,LATAM,fashion,retail,45.30,7,0.187,bundle,2024-05-26\r\n18102,1859,AMER,grocery,retail,57.51,4,0.134,bundle,2024-03-13\r\n18103,1099,LATAM,home,partner,47.55,4,0.228,none,2024-06-23\r\n18104,1824,LATAM,sports,online,78.91,5,0.124,none,2024-09-19\r\n18105,1788,AMER,toys,retail,31.91,5,0.096,none,2024-04-22\r\n18106,2027,EMEA,grocery,online,58.24,2,0.183,loyalty,2024-10-10\r\n18107,1910,LATAM,electronics,retail,34.42,2,0.053,bundle,2024-01-04\r\n18108,1423,EMEA,electronics,partner,93.85,3,0.045,loyalty,2024-12-13\r\n18109,1066,AMER,home,online,20.75,5,0.115,none,2024-04-15\r\n18110,1552,EMEA,home,retail,91.19,6,0.110,none,2024-03-11\r\n18111,2272,EMEA,electronics,mobile,33.97,3,0.087,bundle,2024-09-13\r\n18112,1058,LATAM,home,online,19.46,6,0.101,none,2024-10-15\r\n18113,2293,LATAM,grocery,online,35.37,5,0.090,none,2024-06-05\r\n18114,1445,APAC,electronics,online,39.83,2,0.055,bundle,2024-01-22\r\n18115,1250,APAC,fashion,retail,82.66,1,0.124,none,2024-11-06\r\n18116,1267,EMEA,electronics,mobile,82.89,8,0.247,none,2024-02-13\r\n18117,2069,AMER,electronics,online,55.69,6,0.033,none,2024-12-02\r\n18118,1485,APAC,electronics,online,25.22,2,0.150,loyalty,2024-03-10\r\n18119,1528,EMEA,electronics,retail,41.96,3,0.119,coupon,2024-08-23\r\n18120,1715,AMER,toys,retail,61.95,2,0.167,coupon,2024-01-07\r\n18121,1340,LATAM,electronics,mobile,62.46,4,0.065,bundle,2024-06-16\r\n18122,1493,APAC,home,online,81.45,8,0.185,none,2024-08-16\r\n18123,1282,LATAM,home,retail,75.68,5,0.044,none,2024-11-05\r\n18124,1050,AMER,grocery,online,52.44,2,0.047,coupon,2024-12-17\r\n18125,2127,LATAM,sports,mobile,23.77,4,0.054,coupon,2024-03-15\r\n18126,1801,LATAM
,home,retail,54.90,5,0.209,bundle,2024-09-19\r\n18127,2445,APAC,electronics,online,118.35,3,0.000,none,2024-10-27\r\n18128,1126,LATAM,electronics,online,51.38,7,0.155,none,2024-06-20\r\n18129,2246,AMER,grocery,retail,89.44,7,0.125,none,2024-01-27\r\n18130,2324,AMER,electronics,online,23.98,4,0.128,bundle,2024-09-17\r\n18131,1344,EMEA,electronics,retail,47.33,1,0.137,none,2024-05-19\r\n18132,1526,EMEA,toys,online,126.23,1,0.204,bundle,2024-05-24\r\n18133,1914,EMEA,home,retail,56.93,3,0.187,none,2024-11-12\r\n18134,1216,APAC,home,online,41.14,2,0.103,coupon,2024-11-11\r\n18135,2333,APAC,grocery,retail,52.75,4,0.132,bundle,2024-03-21\r\n18136,1042,LATAM,electronics,mobile,53.76,2,0.149,bundle,2024-09-12\r\n18137,1038,APAC,fashion,retail,129.78,8,0.152,bundle,2024-05-02\r\n18138,1924,AMER,grocery,retail,28.82,8,0.081,coupon,2024-06-11\r\n18139,2474,LATAM,sports,online,46.42,7,0.235,coupon,2024-08-04\r\n18140,2074,AMER,electronics,retail,37.89,4,0.203,none,2024-01-09\r\n18141,1353,EMEA,sports,retail,31.59,4,0.101,coupon,2024-07-01\r\n18142,2379,AMER,fashion,mobile,53.47,6,0.028,none,2024-11-26\r\n18143,1130,LATAM,grocery,online,70.62,2,0.102,coupon,2024-02-01\r\n18144,1107,APAC,toys,partner,52.59,4,0.135,coupon,2024-04-12\r\n18145,2300,EMEA,electronics,online,21.87,3,0.064,loyalty,2024-02-01\r\n18146,1896,EMEA,home,mobile,64.27,4,0.235,coupon,2024-11-16\r\n18147,1832,APAC,grocery,online,43.02,8,0.080,none,2024-01-07\r\n18148,1790,AMER,fashion,retail,48.54,3,0.235,bundle,2024-05-11\r\n18149,1601,APAC,fashion,partner,172.33,6,0.094,none,2024-04-15\r\n18150,2029,APAC,toys,retail,81.57,6,0.241,bundle,2024-10-27\r\n18151,1322,AMER,sports,online,50.00,8,0.192,none,2024-10-15\r\n18152,2428,LATAM,electronics,retail,91.09,8,0.208,coupon,2024-10-11\r\n18153,2496,EMEA,sports,online,36.29,5,0.034,none,2024-10-18\r\n18154,1133,EMEA,fashion,online,91.47,6,0.054,coupon,2024-03-27\r\n18155,2150,APAC,home,online,64.66,6,0.233,none,2024-08-21\r\n18156,2052,LATAM,sports,retail,60.24,8,0.15
4,bundle,2024-08-23\r\n18157,2137,LATAM,fashion,mobile,55.54,5,0.044,none,2024-04-01\r\n18158,1216,APAC,fashion,online,31.74,3,0.208,coupon,2024-02-28\r\n18159,1103,EMEA,electronics,online,22.11,2,0.080,coupon,2024-09-16\r\n18160,2308,AMER,grocery,retail,59.84,7,0.070,none,2024-10-25\r\n18161,1804,AMER,grocery,retail,125.41,3,0.031,none,2024-02-01\r\n18162,1159,LATAM,electronics,online,29.82,5,0.236,bundle,2024-06-12\r\n18163,2445,APAC,grocery,retail,98.15,6,0.035,none,2024-11-18\r\n18164,1177,LATAM,home,retail,27.50,3,0.098,coupon,2024-03-06\r\n18165,2450,EMEA,toys,retail,106.33,2,0.228,none,2024-10-05\r\n18166,1989,LATAM,electronics,retail,110.98,6,0.243,none,2024-08-28\r\n18167,1949,AMER,electronics,online,44.90,6,0.191,coupon,2024-03-05\r\n18168,1260,LATAM,home,online,31.25,3,0.132,none,2024-11-01\r\n18169,1375,AMER,grocery,retail,52.46,7,0.038,none,2024-07-26\r\n18170,2414,EMEA,grocery,online,105.71,5,0.134,none,2024-02-05\r\n18171,1279,EMEA,grocery,retail,65.41,1,0.156,coupon,2024-07-07\r\n18172,2165,AMER,electronics,retail,80.98,8,0.052,bundle,2024-03-15\r\n18173,2379,AMER,fashion,retail,56.92,3,0.138,none,2024-06-04\r\n18174,1627,LATAM,grocery,retail,60.43,3,0.222,coupon,2024-08-28\r\n18175,1307,AMER,fashion,retail,68.12,2,0.053,coupon,2024-06-19\r\n18176,1211,EMEA,electronics,online,31.91,7,0.069,bundle,2024-09-17\r\n18177,1252,APAC,fashion,online,57.97,2,0.202,coupon,2024-01-24\r\n18178,1381,LATAM,home,retail,35.05,7,0.184,coupon,2024-11-12\r\n18179,2165,AMER,home,mobile,68.76,5,0.104,none,2024-04-04\r\n18180,1148,AMER,home,retail,56.85,8,0.087,none,2024-09-08\r\n18181,1107,APAC,fashion,retail,102.02,6,0.218,none,2024-09-08\r\n18182,1038,APAC,fashion,online,22.87,7,0.091,none,2024-09-18\r\n18183,1153,AMER,fashion,partner,50.97,7,0.036,none,2024-06-04\r\n18184,1230,EMEA,grocery,retail,77.11,2,0.219,bundle,2024-08-23\r\n18185,2433,APAC,toys,online,43.22,5,0.032,loyalty,2024-06-28\r\n18186,2195,APAC,sports,online,52.78,6,0.070,bundle,2024-10-17\r\n18187,1337,
APAC,grocery,partner,128.77,4,0.178,none,2024-02-02\r\n18188,1243,AMER,grocery,retail,28.89,5,0.215,none,2024-07-07\r\n18189,1010,EMEA,grocery,retail,149.77,5,0.147,bundle,2024-02-03\r\n18190,2426,AMER,fashion,mobile,34.12,6,0.129,coupon,2024-04-26\r\n18191,2398,EMEA,grocery,partner,26.55,8,0.001,none,2024-09-28\r\n18192,2059,AMER,toys,online,78.05,4,0.195,none,2024-03-23\r\n18193,1390,APAC,fashion,retail,62.82,6,0.146,none,2024-11-14\r\n18194,2219,LATAM,electronics,online,38.44,3,0.062,bundle,2024-05-14\r\n18195,1480,APAC,toys,mobile,57.24,6,0.010,none,2024-12-18\r\n18196,2277,EMEA,toys,online,100.92,7,0.021,bundle,2024-08-25\r\n18197,1025,EMEA,home,retail,67.33,2,0.138,bundle,2024-07-10\r\n18198,1463,EMEA,fashion,retail,130.08,4,0.148,loyalty,2024-01-06\r\n18199,2335,EMEA,home,online,48.10,4,0.040,bundle,2024-05-04\r\n18200,1116,LATAM,home,mobile,39.29,5,0.064,none,2024-01-28\r\n18201,1981,EMEA,grocery,mobile,129.13,5,0.004,coupon,2024-11-16\r\n18202,1564,APAC,electronics,online,238.01,5,0.083,none,2024-08-06\r\n18203,1051,EMEA,grocery,online,49.86,5,0.090,none,2024-01-02\r\n18204,1060,LATAM,sports,retail,91.46,3,0.052,none,2024-04-26\r\n18205,2422,APAC,grocery,online,20.69,6,0.105,none,2024-06-15\r\n18206,1572,LATAM,electronics,retail,49.15,6,0.205,none,2024-04-21\r\n18207,2487,LATAM,sports,online,95.73,4,0.003,loyalty,2024-10-26\r\n18208,1499,EMEA,fashion,mobile,65.03,8,0.028,bundle,2024-10-13\r\n18209,1658,AMER,sports,retail,112.82,6,0.020,none,2024-10-28\r\n18210,1469,EMEA,electronics,retail,73.71,2,0.175,coupon,2024-09-11\r\n18211,2146,APAC,electronics,online,60.79,4,0.006,none,2024-08-02\r\n18212,2030,EMEA,toys,online,34.88,4,0.095,none,2024-01-11\r\n18213,2376,LATAM,grocery,online,47.97,7,0.047,none,2024-04-24\r\n18214,1736,AMER,electronics,online,68.85,2,0.180,none,2024-12-24\r\n18215,1306,LATAM,grocery,retail,49.68,8,0.090,none,2024-08-15\r\n18216,1751,AMER,home,mobile,31.05,4,0.160,none,2024-11-23\r\n18217,1802,AMER,home,retail,17.98,4,0.202,none,2024-06
-21\r\n18218,2297,EMEA,toys,mobile,20.88,1,0.133,none,2024-10-13\r\n18219,1558,EMEA,electronics,online,36.45,1,0.047,none,2024-05-11\r\n18220,2181,AMER,home,online,100.24,2,0.146,none,2024-10-09\r\n18221,1709,EMEA,home,online,74.35,6,0.130,none,2024-08-26\r\n18222,1776,APAC,home,retail,12.57,7,0.075,none,2024-05-28\r\n18223,1859,AMER,electronics,retail,47.64,3,0.023,none,2024-09-10\r\n18224,1391,LATAM,home,online,64.88,4,0.052,bundle,2024-10-21\r\n18225,1019,APAC,home,mobile,75.08,7,0.183,none,2024-03-20\r\n18226,2213,APAC,grocery,online,39.17,7,0.014,loyalty,2024-03-22\r\n18227,1842,LATAM,grocery,online,121.21,2,0.162,none,2024-02-15\r\n18228,1546,EMEA,sports,online,16.68,1,0.112,none,2024-02-27\r\n18229,1342,LATAM,sports,retail,51.49,1,0.176,coupon,2024-05-05\r\n18230,1576,EMEA,grocery,online,42.52,4,0.078,none,2024-08-27\r\n18231,1806,APAC,grocery,online,66.49,6,0.248,coupon,2024-06-08\r\n18232,1986,LATAM,electronics,retail,59.81,7,0.184,none,2024-07-08\r\n18233,2262,APAC,home,mobile,126.34,3,0.178,none,2024-10-03\r\n18234,1930,AMER,home,online,74.98,5,0.075,none,2024-12-17\r\n18235,2071,APAC,grocery,online,37.42,4,0.202,none,2024-09-10\r\n18236,1537,LATAM,home,online,46.39,1,0.059,none,2024-04-13\r\n18237,2412,LATAM,fashion,retail,52.56,6,0.160,bundle,2024-11-03\r\n18238,2259,AMER,grocery,retail,42.71,2,0.081,none,2024-08-21\r\n18239,1170,AMER,grocery,online,33.22,2,0.064,loyalty,2024-01-25\r\n18240,1398,APAC,grocery,online,207.19,8,0.122,none,2024-09-03\r\n18241,1762,LATAM,sports,online,36.96,3,0.161,none,2024-10-23\r\n18242,1583,AMER,toys,online,70.99,8,0.245,coupon,2024-04-15\r\n18243,2211,APAC,fashion,online,20.59,3,0.035,none,2024-03-01\r\n18244,2269,EMEA,sports,retail,80.77,1,0.022,none,2024-02-11\r\n18245,2425,APAC,toys,online,76.13,4,0.036,coupon,2024-09-27\r\n18246,1367,AMER,electronics,online,109.19,3,0.189,none,2024-04-18\r\n18247,1486,LATAM,grocery,online,81.15,1,0.031,coupon,2024-10-20\r\n18248,1809,APAC,electronics,online,100.01,1,0.214,bundle,2024
-03-22\r\n18249,1421,APAC,fashion,online,21.59,1,0.080,none,2024-11-12\r\n18250,2001,EMEA,sports,retail,49.94,7,0.078,bundle,2024-04-04\r\n18251,2105,APAC,electronics,online,65.30,6,0.083,none,2024-03-28\r\n18252,1724,LATAM,toys,retail,54.19,4,0.169,none,2024-09-18\r\n18253,1571,EMEA,grocery,online,32.77,1,0.005,none,2024-07-21\r\n18254,1978,AMER,sports,partner,41.30,3,0.034,none,2024-02-15\r\n18255,1796,LATAM,home,online,40.78,6,0.045,none,2024-03-10\r\n18256,2071,APAC,electronics,online,56.88,2,0.050,none,2024-08-15\r\n18257,2189,LATAM,home,mobile,57.28,4,0.050,none,2024-07-16\r\n18258,1042,LATAM,electronics,mobile,79.31,3,0.127,coupon,2024-07-01\r\n18259,1887,LATAM,grocery,retail,66.84,7,0.159,none,2024-08-14\r\n18260,2269,EMEA,toys,online,76.45,1,0.215,none,2024-01-03\r\n18261,2019,AMER,sports,online,17.85,8,0.083,bundle,2024-10-25\r\n18262,2316,EMEA,home,online,52.64,6,0.186,loyalty,2024-07-24\r\n18263,1533,APAC,grocery,partner,36.95,7,0.115,none,2024-03-02\r\n18264,1579,AMER,fashion,online,88.32,8,0.241,coupon,2024-05-10\r\n18265,1526,EMEA,sports,online,45.85,3,0.169,bundle,2024-03-21\r\n18266,1438,APAC,toys,online,60.78,3,0.064,none,2024-01-10\r\n18267,1089,LATAM,sports,mobile,52.38,8,0.064,coupon,2024-08-08\r\n18268,1898,EMEA,fashion,online,39.23,5,0.232,bundle,2024-02-07\r\n18269,2315,LATAM,home,mobile,76.12,5,0.155,coupon,2024-11-28\r\n18270,2362,AMER,grocery,online,22.76,2,0.162,none,2024-08-28\r\n18271,1594,LATAM,fashion,retail,48.77,8,0.034,none,2024-09-16\r\n18272,1237,LATAM,electronics,online,16.72,7,0.223,none,2024-05-16\r\n18273,1660,AMER,fashion,online,35.88,8,0.046,bundle,2024-08-23\r\n18274,1825,AMER,grocery,retail,61.52,2,0.150,coupon,2024-03-25\r\n18275,1278,AMER,sports,retail,42.35,5,0.140,none,2024-02-09\r\n18276,1906,APAC,sports,online,55.27,5,0.233,none,2024-06-10\r\n18277,1931,APAC,toys,online,98.00,5,0.093,coupon,2024-08-12\r\n18278,1123,LATAM,fashion,online,36.52,7,0.047,none,2024-07-13\r\n18279,1433,EMEA,home,online,36.09,5,0.233,loyalt
y,2024-05-18\r\n18280,2174,LATAM,electronics,online,79.10,7,0.041,none,2024-02-07\r\n18281,1315,AMER,toys,online,36.11,1,0.130,coupon,2024-11-12\r\n18282,1222,AMER,home,partner,49.91,6,0.181,none,2024-06-10\r\n18283,1161,AMER,electronics,online,35.12,7,0.095,none,2024-09-10\r\n18284,1871,APAC,electronics,mobile,42.24,5,0.221,bundle,2024-07-02\r\n18285,1881,LATAM,grocery,online,112.94,1,0.052,loyalty,2024-11-24\r\n18286,1126,LATAM,sports,mobile,23.96,5,0.102,bundle,2024-09-12\r\n18287,2389,LATAM,electronics,retail,36.61,8,0.061,coupon,2024-06-17\r\n18288,1724,LATAM,toys,online,27.37,4,0.113,loyalty,2024-06-23\r\n18289,2199,LATAM,toys,retail,24.96,2,0.075,none,2024-04-02\r\n18290,1691,LATAM,grocery,mobile,55.28,4,0.036,none,2024-10-22\r\n18291,2088,EMEA,fashion,mobile,24.40,2,0.147,none,2024-05-18\r\n18292,2103,LATAM,fashion,retail,108.32,5,0.052,loyalty,2024-10-11\r\n18293,1710,APAC,home,mobile,88.29,3,0.113,none,2024-08-11\r\n18294,1496,AMER,electronics,retail,80.25,1,0.162,none,2024-09-13\r\n18295,1803,LATAM,toys,mobile,58.04,5,0.128,none,2024-05-15\r\n18296,1057,LATAM,grocery,retail,69.45,4,0.131,none,2024-01-27\r\n18297,2367,AMER,home,online,29.72,6,0.190,none,2024-04-05\r\n18298,1025,EMEA,sports,online,58.64,5,0.212,none,2024-03-28\r\n18299,1505,EMEA,sports,retail,93.29,7,0.169,none,2024-08-11\r\n18300,2490,AMER,fashion,mobile,90.07,1,0.241,coupon,2024-05-14\r\n18301,1982,EMEA,grocery,online,39.99,7,0.130,bundle,2024-10-05\r\n18302,1889,APAC,home,retail,25.75,7,0.215,bundle,2024-12-22\r\n18303,2360,EMEA,grocery,retail,58.32,3,0.239,none,2024-10-07\r\n18304,2284,EMEA,grocery,retail,65.46,7,0.201,none,2024-12-12\r\n18305,1988,AMER,grocery,retail,28.06,5,0.213,none,2024-01-23\r\n18306,2479,EMEA,grocery,retail,64.55,1,0.039,bundle,2024-02-11\r\n18307,1954,APAC,electronics,online,50.64,7,0.030,none,2024-03-16\r\n18308,1505,EMEA,toys,retail,87.57,7,0.237,bundle,2024-06-02\r\n18309,1542,APAC,electronics,retail,81.96,4,0.031,none,2024-04-11\r\n18310,1308,EMEA,fashion,mo
bile,24.94,8,0.026,loyalty,2024-09-02\r\n18311,1519,APAC,sports,retail,52.39,5,0.018,none,2024-07-28\r\n18312,1202,APAC,grocery,online,64.62,3,0.069,none,2024-03-15\r\n18313,1845,AMER,electronics,online,39.86,1,0.021,none,2024-06-01\r\n18314,2207,APAC,grocery,partner,102.96,5,0.149,loyalty,2024-04-14\r\n18315,1496,AMER,grocery,online,52.14,1,0.077,coupon,2024-10-15\r\n18316,1276,AMER,fashion,mobile,120.59,4,0.227,none,2024-04-10\r\n18317,1607,LATAM,grocery,online,50.20,7,0.226,none,2024-08-25\r\n18318,2498,LATAM,grocery,mobile,31.87,8,0.082,none,2024-02-20\r\n18319,1667,AMER,grocery,retail,99.81,3,0.192,none,2024-12-11\r\n18320,2019,AMER,toys,online,53.56,2,0.126,coupon,2024-09-28\r\n18321,2014,EMEA,fashion,partner,34.08,8,0.077,none,2024-12-19\r\n18322,1543,AMER,toys,online,92.93,5,0.042,none,2024-04-09\r\n18323,1031,AMER,sports,online,55.25,6,0.024,coupon,2024-08-14\r\n18324,2329,LATAM,toys,retail,26.09,7,0.026,loyalty,2024-12-14\r\n18325,1434,EMEA,toys,online,46.87,5,0.162,none,2024-02-16\r\n18326,1925,LATAM,home,online,59.41,3,0.245,coupon,2024-08-21\r\n18327,2046,APAC,toys,online,59.23,3,0.216,none,2024-05-05\r\n18328,2446,LATAM,home,online,29.43,2,0.020,bundle,2024-11-24\r\n18329,2363,AMER,sports,online,46.60,4,0.026,none,2024-11-01\r\n18330,2324,AMER,electronics,retail,24.56,6,0.076,none,2024-05-27\r\n18331,1492,APAC,toys,retail,51.54,3,0.048,coupon,2024-06-04\r\n18332,1257,APAC,electronics,retail,17.83,1,0.227,none,2024-12-05\r\n18333,2233,EMEA,grocery,retail,99.60,5,0.163,coupon,2024-06-28\r\n18334,2305,AMER,toys,mobile,20.01,5,0.196,bundle,2024-03-03\r\n18335,1518,AMER,grocery,retail,93.25,2,0.122,none,2024-02-03\r\n18336,1167,EMEA,toys,online,87.65,6,0.109,coupon,2024-07-16\r\n18337,1577,AMER,electronics,retail,43.67,2,0.069,coupon,2024-07-13\r\n18338,1750,LATAM,sports,online,120.35,1,0.248,none,2024-08-16\r\n18339,1570,AMER,toys,mobile,42.28,1,0.073,coupon,2024-01-27\r\n18340,1980,LATAM,grocery,online,107.39,5,0.161,none,2024-05-24\r\n18341,1808,APAC,gro
cery,retail,72.26,3,0.014,none,2024-02-19\r\n18342,1449,EMEA,electronics,online,18.55,8,0.058,none,2024-12-22\r\n18343,1475,LATAM,grocery,retail,34.01,7,0.058,none,2024-10-13\r\n18344,1751,AMER,toys,online,85.95,3,0.029,coupon,2024-10-07\r\n18345,1653,APAC,fashion,online,92.54,8,0.159,coupon,2024-12-04\r\n18346,2273,APAC,electronics,retail,95.68,8,0.172,none,2024-12-23\r\n18347,1175,AMER,sports,online,61.21,6,0.155,bundle,2024-11-06\r\n18348,1034,EMEA,electronics,online,56.13,8,0.232,coupon,2024-03-18\r\n18349,1187,AMER,home,online,77.97,4,0.009,coupon,2024-12-11\r\n18350,1999,EMEA,home,retail,60.93,3,0.243,none,2024-05-28\r\n18351,1542,APAC,electronics,retail,61.17,8,0.164,none,2024-09-04\r\n18352,1307,AMER,toys,online,52.01,6,0.078,none,2024-12-07\r\n18353,1230,EMEA,grocery,online,80.24,1,0.101,none,2024-02-17\r\n18354,1793,LATAM,grocery,online,52.93,1,0.044,none,2024-08-23\r\n18355,1771,AMER,fashion,retail,40.62,4,0.117,none,2024-08-21\r\n18356,1834,AMER,sports,retail,79.48,3,0.008,none,2024-08-08\r\n18357,1395,APAC,electronics,online,30.05,2,0.246,none,2024-01-28\r\n18358,1778,LATAM,electronics,retail,54.13,7,0.109,bundle,2024-03-23\r\n18359,1368,EMEA,grocery,retail,107.67,5,0.001,coupon,2024-12-22\r\n18360,1836,LATAM,sports,online,30.50,6,0.155,none,2024-01-16\r\n18361,1260,LATAM,electronics,online,20.74,8,0.104,none,2024-01-13\r\n18362,2407,EMEA,home,retail,55.13,6,0.215,none,2024-10-24\r\n18363,2321,APAC,home,retail,36.93,4,0.034,bundle,2024-12-24\r\n18364,1839,APAC,grocery,retail,38.13,6,0.087,none,2024-04-20\r\n18365,1272,AMER,grocery,online,99.56,2,0.162,loyalty,2024-07-08\r\n18366,2440,APAC,electronics,mobile,51.04,4,0.073,none,2024-12-18\r\n18367,2382,LATAM,electronics,retail,119.91,8,0.044,coupon,2024-11-22\r\n18368,1590,APAC,grocery,online,51.57,4,0.041,bundle,2024-09-25\r\n18369,2155,APAC,grocery,retail,44.10,2,0.110,coupon,2024-11-01\r\n18370,1344,EMEA,grocery,retail,78.00,8,0.059,coupon,2024-07-04\r\n18371,1211,EMEA,fashion,retail,103.22,7,0.213,non
e,2024-04-01\r\n18372,2179,LATAM,toys,retail,49.69,2,0.196,bundle,2024-04-24\r\n18373,1702,AMER,home,retail,42.52,4,0.018,none,2024-12-09\r\n18374,2025,EMEA,sports,retail,19.81,7,0.203,none,2024-04-12\r\n18375,2264,LATAM,fashion,mobile,49.67,7,0.044,coupon,2024-10-24\r\n18376,1881,LATAM,sports,online,93.98,6,0.165,bundle,2024-08-11\r\n18377,1484,AMER,grocery,online,86.40,6,0.036,bundle,2024-05-21\r\n18378,1748,APAC,fashion,retail,34.60,2,0.057,none,2024-08-23\r\n18379,1934,EMEA,grocery,mobile,35.98,3,0.132,coupon,2024-07-26\r\n18380,2173,LATAM,grocery,online,135.74,5,0.161,none,2024-02-16\r\n18381,2491,APAC,grocery,partner,46.25,8,0.107,none,2024-09-02\r\n18382,2448,APAC,grocery,online,44.24,4,0.198,coupon,2024-02-02\r\n18383,2453,AMER,fashion,retail,38.14,1,0.020,none,2024-09-09\r\n18384,2335,EMEA,grocery,online,87.75,8,0.093,none,2024-08-09\r\n18385,2380,AMER,grocery,online,24.80,2,0.008,loyalty,2024-01-07\r\n18386,1465,AMER,home,retail,145.33,5,0.086,none,2024-09-17\r\n18387,1441,LATAM,grocery,retail,64.85,2,0.249,none,2024-07-22\r\n18388,2093,LATAM,sports,online,35.34,3,0.227,none,2024-09-22\r\n18389,1394,LATAM,grocery,partner,70.21,3,0.043,bundle,2024-09-07\r\n18390,2179,LATAM,grocery,retail,62.87,7,0.110,bundle,2024-06-27\r\n18391,1152,LATAM,fashion,online,61.90,6,0.150,none,2024-07-06\r\n18392,1468,AMER,sports,retail,56.56,8,0.138,none,2024-09-05\r\n18393,1884,APAC,home,mobile,101.20,6,0.073,bundle,2024-03-05\r\n18394,1512,APAC,grocery,mobile,31.44,3,0.116,none,2024-06-07\r\n18395,2345,LATAM,home,partner,33.98,1,0.067,none,2024-04-05\r\n18396,1340,LATAM,electronics,mobile,69.55,3,0.172,none,2024-08-25\r\n18397,2203,APAC,grocery,online,90.91,3,0.067,none,2024-02-09\r\n18398,1574,AMER,home,online,74.16,3,0.026,coupon,2024-01-27\r\n18399,1872,LATAM,electronics,online,39.74,5,0.189,none,2024-09-16\r\n18400,1892,LATAM,electronics,mobile,95.85,2,0.234,none,2024-03-23\r\n18401,2195,APAC,electronics,online,57.53,4,0.195,loyalty,2024-08-24\r\n18402,2482,EMEA,grocery,r
etail,33.73,7,0.039,none,2024-04-11\r\n18403,1769,LATAM,fashion,online,34.11,1,0.078,coupon,2024-05-16\r\n18404,1599,APAC,fashion,online,42.89,7,0.132,coupon,2024-07-03\r\n18405,2028,APAC,grocery,online,37.18,3,0.129,bundle,2024-03-12\r\n18406,2185,EMEA,fashion,retail,64.61,2,0.195,none,2024-04-10\r\n18407,2372,AMER,grocery,online,96.08,1,0.245,bundle,2024-08-25\r\n18408,1598,EMEA,grocery,online,32.28,7,0.190,none,2024-06-09\r\n18409,1703,AMER,toys,mobile,56.02,4,0.030,coupon,2024-04-16\r\n18410,1193,APAC,home,mobile,89.47,8,0.239,loyalty,2024-07-20\r\n18411,2478,AMER,grocery,online,99.76,2,0.184,none,2024-06-15\r\n18412,1746,LATAM,fashion,online,48.66,3,0.049,none,2024-02-26\r\n18413,2133,AMER,electronics,retail,46.18,3,0.186,none,2024-12-20\r\n18414,1467,LATAM,sports,online,95.62,7,0.136,loyalty,2024-07-28\r\n18415,1079,LATAM,home,retail,120.96,4,0.044,none,2024-04-22\r\n18416,1510,EMEA,grocery,online,66.31,5,0.250,coupon,2024-04-07\r\n18417,1436,APAC,grocery,online,99.57,5,0.113,coupon,2024-06-18\r\n18418,1356,LATAM,fashion,retail,65.92,3,0.200,none,2024-10-03\r\n18419,2202,APAC,electronics,retail,42.73,3,0.136,none,2024-09-10\r\n18420,1836,LATAM,home,retail,75.31,1,0.062,none,2024-06-24\r\n18421,1935,EMEA,electronics,online,124.17,2,0.118,coupon,2024-11-14\r\n18422,1536,LATAM,grocery,online,58.52,1,0.227,none,2024-03-04\r\n18423,2273,APAC,sports,online,68.79,7,0.180,none,2024-11-06\r\n18424,1952,EMEA,home,online,44.01,2,0.202,loyalty,2024-10-12\r\n18425,2238,AMER,grocery,partner,55.19,4,0.065,bundle,2024-01-04\r\n18426,2035,LATAM,home,partner,59.79,3,0.014,none,2024-03-19\r\n18427,1201,LATAM,grocery,online,80.59,4,0.111,none,2024-10-14\r\n18428,1113,EMEA,toys,online,85.44,5,0.160,none,2024-06-25\r\n18429,2053,AMER,grocery,online,33.39,8,0.166,coupon,2024-11-07\r\n18430,2076,AMER,home,mobile,43.10,3,0.034,bundle,2024-01-14\r\n18431,1966,APAC,grocery,online,19.48,3,0.079,bundle,2024-10-02\r\n18432,1081,AMER,fashion,retail,42.06,3,0.165,none,2024-02-04\r\n18433,232
2,AMER,fashion,retail,70.32,3,0.018,loyalty,2024-09-09\r\n18434,1331,AMER,grocery,retail,177.04,8,0.155,loyalty,2024-10-25\r\n18435,2220,LATAM,fashion,online,43.14,5,0.157,none,2024-05-13\r\n18436,1892,LATAM,electronics,mobile,77.14,5,0.051,bundle,2024-11-11\r\n18437,2476,APAC,grocery,online,101.35,5,0.201,coupon,2024-03-28\r\n18438,1078,APAC,electronics,mobile,83.17,3,0.136,none,2024-08-08\r\n18439,2268,EMEA,grocery,online,98.46,7,0.164,coupon,2024-04-12\r\n18440,2046,APAC,home,online,94.39,1,0.075,none,2024-02-08\r\n18441,1610,LATAM,grocery,mobile,53.77,7,0.048,none,2024-12-08\r\n18442,1478,EMEA,sports,mobile,106.83,8,0.210,none,2024-03-18\r\n18443,1530,APAC,toys,mobile,105.43,2,0.159,coupon,2024-02-23\r\n18444,1829,EMEA,electronics,online,84.09,7,0.082,bundle,2024-07-22\r\n18445,1894,APAC,toys,retail,44.63,1,0.237,none,2024-06-17\r\n18446,2405,AMER,electronics,retail,85.13,3,0.127,none,2024-04-21\r\n18447,1924,AMER,grocery,mobile,24.74,4,0.057,none,2024-06-24\r\n18448,1741,AMER,home,online,72.15,7,0.006,none,2024-10-23\r\n18449,2351,EMEA,grocery,online,60.16,5,0.060,none,2024-02-25\r\n18450,1268,EMEA,fashion,retail,196.07,4,0.122,none,2024-06-11\r\n18451,1132,EMEA,home,retail,43.26,3,0.243,bundle,2024-09-13\r\n18452,1678,LATAM,grocery,online,72.26,2,0.154,coupon,2024-06-28\r\n18453,1485,APAC,toys,online,50.94,4,0.164,coupon,2024-10-07\r\n18454,1978,AMER,sports,online,54.34,3,0.219,none,2024-08-05\r\n18455,1291,EMEA,toys,retail,49.91,6,0.099,none,2024-03-25\r\n18456,1962,APAC,sports,mobile,64.23,5,0.172,none,2024-06-07\r\n18457,2098,AMER,grocery,mobile,173.80,8,0.081,coupon,2024-07-11\r\n18458,1351,APAC,sports,online,69.26,7,0.149,none,2024-08-27\r\n18459,1414,APAC,grocery,online,23.33,1,0.200,none,2024-01-06\r\n18460,1012,LATAM,grocery,mobile,66.05,3,0.195,none,2024-03-19\r\n18461,2040,LATAM,home,online,72.65,3,0.036,none,2024-04-12\r\n18462,1567,AMER,fashion,retail,71.24,7,0.229,coupon,2024-03-16\r\n18463,1411,LATAM,grocery,retail,43.59,7,0.144,bundle,2024-04-26
\r\n18464,1586,LATAM,grocery,online,42.23,4,0.230,coupon,2024-10-12\r\n18465,2180,AMER,toys,online,62.02,3,0.077,none,2024-10-16\r\n18466,1324,LATAM,home,mobile,60.57,2,0.205,none,2024-07-22\r\n18467,2419,LATAM,home,retail,38.67,2,0.001,coupon,2024-09-03\r\n18468,1585,AMER,grocery,online,46.06,5,0.022,none,2024-09-05\r\n18469,1307,AMER,home,retail,57.20,8,0.233,none,2024-06-06\r\n18470,1869,AMER,home,mobile,159.53,5,0.111,none,2024-09-24\r\n18471,1496,AMER,home,online,33.62,4,0.026,none,2024-11-20\r\n18472,1236,AMER,grocery,online,44.53,2,0.096,loyalty,2024-07-28\r\n18473,1370,APAC,fashion,retail,44.98,2,0.122,coupon,2024-03-14\r\n18474,1936,EMEA,home,retail,51.58,6,0.147,loyalty,2024-03-09\r\n18475,2414,EMEA,grocery,online,12.76,6,0.060,none,2024-11-22\r\n18476,1627,LATAM,grocery,retail,37.44,7,0.177,none,2024-08-16\r\n18477,2194,APAC,home,mobile,44.71,7,0.083,none,2024-05-19\r\n18478,2310,EMEA,electronics,retail,108.62,7,0.100,none,2024-02-12\r\n18479,1776,APAC,toys,online,56.44,4,0.169,none,2024-12-08\r\n18480,1136,EMEA,grocery,online,42.90,3,0.177,loyalty,2024-08-24\r\n18481,1768,AMER,fashion,mobile,40.45,6,0.161,none,2024-10-24\r\n18482,2430,APAC,grocery,online,40.68,6,0.111,none,2024-03-04\r\n18483,1481,LATAM,home,mobile,59.72,2,0.099,none,2024-11-13\r\n18484,1268,EMEA,fashion,online,74.52,1,0.191,none,2024-08-08\r\n18485,1946,AMER,fashion,retail,58.91,1,0.076,none,2024-07-18\r\n18486,1681,LATAM,home,online,84.66,4,0.193,none,2024-02-23\r\n18487,1262,APAC,sports,retail,59.94,8,0.114,none,2024-04-27\r\n18488,1239,APAC,toys,online,43.88,1,0.023,none,2024-10-13\r\n18489,2435,AMER,home,online,84.01,5,0.228,none,2024-12-25\r\n18490,1311,APAC,toys,online,42.15,4,0.091,loyalty,2024-07-09\r\n18491,1178,EMEA,grocery,partner,56.53,4,0.185,none,2024-10-20\r\n18492,2116,LATAM,fashion,retail,45.99,8,0.142,coupon,2024-06-23\r\n18493,1328,APAC,home,partner,61.06,7,0.075,bundle,2024-07-20\r\n18494,1407,LATAM,grocery,retail,60.16,5,0.187,none,2024-04-23\r\n18495,2330,EMEA,groc
ery,retail,68.26,6,0.103,loyalty,2024-08-13\r\n18496,1401,LATAM,toys,online,61.97,4,0.065,coupon,2024-04-27\r\n18497,1920,LATAM,grocery,online,76.32,2,0.179,bundle,2024-12-10\r\n18498,1938,APAC,electronics,online,155.52,7,0.011,none,2024-01-24\r\n18499,1757,EMEA,electronics,mobile,59.09,1,0.011,none,2024-10-15\r\n18500,2323,AMER,toys,retail,78.86,3,0.125,none,2024-12-16\r\n18501,2229,APAC,sports,online,25.15,2,0.102,loyalty,2024-08-02\r\n18502,1820,AMER,grocery,online,37.50,3,0.163,none,2024-12-15\r\n18503,1963,AMER,electronics,online,79.80,7,0.080,loyalty,2024-11-27\r\n18504,1343,LATAM,sports,retail,45.60,7,0.082,coupon,2024-09-16\r\n18505,1987,AMER,home,partner,96.75,2,0.230,bundle,2024-04-23\r\n18506,2389,LATAM,fashion,online,25.31,5,0.220,none,2024-03-07\r\n18507,1654,EMEA,grocery,mobile,116.53,3,0.232,coupon,2024-12-09\r\n18508,1056,LATAM,toys,online,67.97,5,0.103,none,2024-08-22\r\n18509,2423,LATAM,fashion,retail,85.10,6,0.083,none,2024-09-10\r\n18510,2002,APAC,home,online,58.71,6,0.217,coupon,2024-11-26\r\n18511,1637,APAC,home,mobile,56.08,2,0.100,none,2024-12-18\r\n18512,1781,LATAM,electronics,mobile,112.57,4,0.152,none,2024-07-05\r\n18513,1830,EMEA,electronics,retail,75.47,2,0.233,loyalty,2024-08-18\r\n18514,2408,EMEA,grocery,retail,35.25,8,0.237,none,2024-09-24\r\n18515,1077,AMER,grocery,online,16.13,3,0.016,none,2024-12-13\r\n18516,2434,APAC,grocery,retail,83.94,1,0.048,coupon,2024-08-26\r\n18517,2016,LATAM,grocery,mobile,52.58,4,0.071,bundle,2024-01-22\r\n18518,1428,APAC,electronics,retail,78.55,6,0.194,bundle,2024-04-09\r\n18519,2039,EMEA,home,online,55.86,7,0.236,none,2024-12-28\r\n18520,2429,EMEA,fashion,retail,99.59,3,0.138,bundle,2024-11-24\r\n18521,1404,EMEA,fashion,online,67.24,3,0.216,none,2024-09-23\r\n18522,1924,AMER,fashion,retail,53.90,2,0.155,coupon,2024-09-04\r\n18523,1200,EMEA,grocery,online,134.33,3,0.034,none,2024-10-04\r\n18524,2222,LATAM,fashion,mobile,79.45,1,0.118,none,2024-04-21\r\n18525,1422,LATAM,sports,mobile,63.28,6,0.177,bundle
,2024-03-20\r\n18526,1372,APAC,fashion,online,86.10,8,0.021,none,2024-07-23\r\n18527,1247,AMER,electronics,online,34.94,8,0.112,none,2024-09-12\r\n18528,2440,APAC,fashion,online,97.32,1,0.025,none,2024-06-27\r\n18529,2478,AMER,grocery,mobile,30.06,3,0.239,loyalty,2024-05-26\r\n18530,1364,EMEA,electronics,online,49.65,3,0.148,loyalty,2024-09-10\r\n18531,1356,LATAM,toys,retail,58.70,3,0.052,loyalty,2024-02-02\r\n18532,1385,LATAM,home,retail,45.31,8,0.044,bundle,2024-08-02\r\n18533,1724,LATAM,toys,retail,54.14,5,0.157,none,2024-03-24\r\n18534,2046,APAC,sports,mobile,46.32,4,0.102,none,2024-05-27\r\n18535,1133,EMEA,electronics,online,83.85,1,0.224,none,2024-04-28\r\n18536,1378,APAC,fashion,mobile,22.04,8,0.040,none,2024-10-05\r\n18537,2095,EMEA,grocery,retail,61.31,6,0.232,coupon,2024-02-16\r\n18538,1181,LATAM,fashion,mobile,22.87,5,0.014,bundle,2024-05-01\r\n18539,2191,AMER,sports,retail,91.85,1,0.201,none,2024-12-14\r\n18540,1522,LATAM,electronics,retail,45.84,1,0.035,none,2024-10-03\r\n18541,1068,APAC,home,online,119.55,7,0.025,none,2024-06-03\r\n18542,1434,EMEA,sports,retail,40.05,3,0.018,bundle,2024-12-22\r\n18543,1111,APAC,grocery,mobile,69.98,8,0.180,coupon,2024-06-04\r\n18544,2175,AMER,electronics,mobile,64.65,1,0.248,none,2024-03-04\r\n18545,2154,APAC,electronics,retail,54.92,7,0.124,none,2024-05-21\r\n18546,2020,AMER,fashion,retail,104.32,3,0.194,bundle,2024-04-28\r\n18547,2406,EMEA,home,online,45.23,4,0.220,none,2024-06-24\r\n18548,2217,LATAM,electronics,retail,75.39,4,0.013,bundle,2024-06-11\r\n18549,2177,AMER,grocery,online,59.96,1,0.064,none,2024-07-12\r\n18550,1092,AMER,toys,retail,19.60,8,0.239,none,2024-04-03\r\n18551,1863,EMEA,sports,retail,74.00,3,0.243,loyalty,2024-06-03\r\n18552,1183,AMER,fashion,online,149.62,5,0.097,coupon,2024-10-19\r\n18553,1093,APAC,home,online,42.86,5,0.024,loyalty,2024-04-19\r\n18554,1514,LATAM,electronics,retail,59.89,5,0.144,none,2024-05-06\r\n18555,1710,APAC,home,online,42.23,3,0.058,coupon,2024-04-04\r\n18556,1692,LATAM,h
ome,online,73.75,2,0.088,none,2024-06-21\r\n18557,1812,EMEA,grocery,retail,68.72,1,0.164,none,2024-06-16\r\n18558,1485,APAC,electronics,online,49.15,2,0.047,none,2024-12-04\r\n18559,2244,LATAM,grocery,online,114.26,5,0.202,none,2024-10-11\r\n18560,1510,EMEA,grocery,mobile,48.08,5,0.164,bundle,2024-06-25\r\n18561,1580,AMER,home,retail,32.76,1,0.162,none,2024-12-15\r\n18562,2385,APAC,toys,mobile,89.51,1,0.214,bundle,2024-09-26\r\n18563,2265,APAC,electronics,online,63.91,4,0.083,none,2024-05-23\r\n18564,1720,AMER,home,online,30.14,3,0.110,bundle,2024-11-28\r\n18565,2270,APAC,grocery,retail,106.85,1,0.061,bundle,2024-06-15\r\n18566,2141,AMER,electronics,mobile,50.59,6,0.156,none,2024-12-06\r\n18567,1328,APAC,electronics,retail,31.17,6,0.069,bundle,2024-03-27\r\n18568,1178,EMEA,home,mobile,78.09,8,0.038,none,2024-08-22\r\n18569,1777,AMER,fashion,online,22.09,4,0.080,none,2024-12-03\r\n18570,2365,LATAM,fashion,mobile,40.87,3,0.012,none,2024-05-09\r\n18571,2420,EMEA,home,online,30.31,7,0.190,none,2024-03-05\r\n18572,1555,AMER,sports,retail,72.00,6,0.032,none,2024-05-19\r\n18573,1609,LATAM,grocery,online,68.50,8,0.078,none,2024-10-23\r\n18574,2487,LATAM,electronics,online,48.65,7,0.158,none,2024-03-17\r\n18575,1056,LATAM,home,online,30.30,3,0.198,none,2024-03-27\r\n18576,2274,APAC,grocery,partner,27.95,8,0.032,none,2024-06-22\r\n18577,1362,AMER,fashion,retail,43.38,7,0.056,none,2024-12-12\r\n18578,2272,EMEA,home,partner,51.04,7,0.181,none,2024-08-15\r\n18579,1841,AMER,grocery,online,77.52,7,0.055,loyalty,2024-05-25\r\n18580,2013,APAC,fashion,retail,51.31,6,0.083,bundle,2024-07-17\r\n18581,1630,APAC,toys,retail,28.03,1,0.112,none,2024-05-02\r\n18582,1557,LATAM,sports,online,92.68,6,0.176,bundle,2024-05-23\r\n18583,2297,EMEA,home,online,166.48,6,0.191,none,2024-10-20\r\n18584,2086,APAC,grocery,online,25.33,1,0.137,coupon,2024-09-26\r\n18585,1735,LATAM,grocery,mobile,37.91,8,0.141,bundle,2024-06-10\r\n18586,1739,AMER,toys,retail,28.80,6,0.157,none,2024-01-21\r\n18587,2334,LATA
M,home,online,47.77,7,0.010,none,2024-04-12\r\n18588,1632,LATAM,home,retail,38.04,2,0.022,none,2024-10-17\r\n18589,1865,LATAM,sports,online,64.07,2,0.040,bundle,2024-08-09\r\n18590,1111,APAC,grocery,mobile,48.32,3,0.012,none,2024-07-25\r\n18591,1992,LATAM,grocery,mobile,96.24,1,0.214,coupon,2024-09-26\r\n18592,1235,EMEA,electronics,retail,39.76,7,0.004,none,2024-11-18\r\n18593,2441,EMEA,home,online,44.78,8,0.129,bundle,2024-01-24\r\n18594,2325,LATAM,grocery,online,23.69,4,0.224,none,2024-04-01\r\n18595,2409,APAC,sports,retail,95.20,5,0.010,none,2024-05-25\r\n18596,1670,EMEA,fashion,mobile,32.88,2,0.048,bundle,2024-11-27\r\n18597,1194,APAC,grocery,retail,66.80,1,0.074,none,2024-02-14\r\n18598,2358,AMER,electronics,online,62.12,1,0.075,loyalty,2024-09-13\r\n18599,1618,EMEA,fashion,online,57.25,7,0.169,none,2024-10-22\r\n18600,2301,EMEA,electronics,online,34.59,3,0.032,bundle,2024-01-11\r\n18601,1088,LATAM,home,online,22.03,5,0.161,bundle,2024-10-21\r\n18602,1052,LATAM,grocery,online,65.61,7,0.059,none,2024-01-28\r\n18603,2204,AMER,electronics,online,116.52,1,0.236,bundle,2024-08-03\r\n18604,1530,APAC,sports,online,22.82,1,0.104,none,2024-07-01\r\n18605,1945,AMER,grocery,online,44.89,3,0.084,loyalty,2024-08-23\r\n18606,1203,AMER,grocery,online,49.57,4,0.219,coupon,2024-03-06\r\n18607,1922,EMEA,electronics,online,29.11,5,0.061,none,2024-10-09\r\n18608,1626,EMEA,sports,online,18.66,6,0.015,bundle,2024-07-04\r\n18609,2104,EMEA,sports,retail,135.07,2,0.088,loyalty,2024-07-05\r\n18610,1304,LATAM,grocery,partner,22.11,6,0.147,bundle,2024-07-19\r\n18611,2164,AMER,electronics,retail,41.35,8,0.214,none,2024-10-16\r\n18612,1812,EMEA,grocery,online,31.94,4,0.138,coupon,2024-05-16\r\n18613,1719,LATAM,electronics,online,73.91,8,0.153,none,2024-09-01\r\n18614,1599,APAC,electronics,retail,62.73,3,0.143,none,2024-06-01\r\n18615,2434,APAC,electronics,online,60.15,7,0.055,coupon,2024-12-05\r\n18616,1672,APAC,home,online,80.73,5,0.014,coupon,2024-08-16\r\n18617,2120,AMER,home,retail,59.6
1,7,0.046,none,2024-12-07\r\n18618,1310,AMER,home,online,121.28,8,0.180,none,2024-08-17\r\n18619,1764,LATAM,grocery,retail,76.44,3,0.079,none,2024-09-03\r\n18620,1682,EMEA,grocery,retail,50.99,2,0.077,none,2024-02-14\r\n18621,2203,APAC,grocery,online,17.66,8,0.195,none,2024-12-23\r\n18622,2088,EMEA,fashion,online,50.39,8,0.143,loyalty,2024-08-19\r\n18623,1211,EMEA,home,online,111.20,3,0.170,none,2024-10-22\r\n18624,1712,LATAM,fashion,online,43.22,5,0.047,none,2024-03-22\r\n18625,2458,EMEA,fashion,retail,121.76,6,0.080,bundle,2024-01-14\r\n18626,1230,EMEA,sports,online,37.16,5,0.121,none,2024-08-23\r\n18627,1339,EMEA,toys,online,35.14,1,0.124,none,2024-11-20\r\n18628,1519,APAC,home,mobile,53.14,3,0.202,coupon,2024-07-01\r\n18629,1925,LATAM,toys,retail,35.88,6,0.027,none,2024-06-02\r\n18630,2243,APAC,sports,retail,78.76,4,0.188,none,2024-01-02\r\n18631,1722,EMEA,grocery,online,52.58,1,0.227,none,2024-09-03\r\n18632,1564,APAC,home,retail,44.58,8,0.231,none,2024-07-17\r\n18633,1157,LATAM,home,online,43.11,4,0.088,loyalty,2024-12-04\r\n18634,2008,APAC,sports,online,32.07,6,0.101,none,2024-03-28\r\n18635,2377,AMER,fashion,retail,94.38,4,0.065,none,2024-04-25\r\n18636,2276,AMER,fashion,online,55.94,4,0.070,none,2024-07-22\r\n18637,2222,LATAM,grocery,online,81.57,7,0.121,coupon,2024-12-16\r\n18638,1157,LATAM,toys,partner,42.60,4,0.216,none,2024-08-27\r\n18639,1118,AMER,home,retail,70.97,8,0.202,none,2024-05-06\r\n18640,1640,APAC,electronics,online,205.54,6,0.095,none,2024-02-06\r\n18641,2283,AMER,toys,partner,38.18,7,0.016,bundle,2024-01-11\r\n18642,1326,AMER,grocery,retail,163.96,5,0.116,none,2024-07-12\r\n18643,1407,LATAM,electronics,retail,93.95,8,0.217,none,2024-06-20\r\n18644,1192,EMEA,electronics,online,41.63,6,0.148,bundle,2024-06-12\r\n18645,1061,APAC,electronics,retail,27.78,4,0.055,none,2024-01-17\r\n18646,2473,EMEA,grocery,mobile,24.24,8,0.105,coupon,2024-08-05\r\n18647,1731,AMER,electronics,online,91.33,2,0.052,none,2024-12-03\r\n18648,1284,APAC,grocery,retail,2
9.20,7,0.011,none,2024-11-09\r\n18649,1578,LATAM,fashion,mobile,130.68,4,0.071,none,2024-08-12\r\n18650,2020,AMER,grocery,online,74.82,7,0.087,none,2024-11-06\r\n18651,1013,LATAM,toys,retail,168.05,8,0.170,none,2024-09-13\r\n18652,2248,LATAM,electronics,online,58.93,1,0.124,bundle,2024-08-04\r\n18653,1697,APAC,fashion,online,30.92,5,0.009,bundle,2024-07-21\r\n18654,1153,AMER,fashion,retail,94.40,8,0.078,none,2024-10-26\r\n18655,2491,APAC,home,mobile,28.09,6,0.056,none,2024-06-27\r\n18656,2445,APAC,home,online,45.73,1,0.064,coupon,2024-06-14\r\n18657,2402,AMER,sports,online,66.72,2,0.066,bundle,2024-08-19\r\n18658,1195,AMER,fashion,online,110.43,8,0.067,none,2024-10-03\r\n18659,1669,AMER,electronics,partner,41.70,6,0.087,none,2024-09-06\r\n18660,1889,APAC,fashion,online,16.35,1,0.162,none,2024-07-18\r\n18661,1957,AMER,electronics,mobile,102.42,7,0.156,loyalty,2024-08-09\r\n18662,1236,AMER,grocery,retail,43.43,4,0.063,bundle,2024-01-22\r\n18663,2369,LATAM,sports,retail,65.43,1,0.177,coupon,2024-04-28\r\n18664,1177,LATAM,electronics,online,28.65,1,0.158,bundle,2024-10-04\r\n18665,2415,AMER,grocery,online,86.64,8,0.108,none,2024-12-21\r\n18666,1003,APAC,electronics,retail,82.57,3,0.021,none,2024-09-11\r\n18667,1541,APAC,grocery,online,37.79,6,0.133,coupon,2024-05-08\r\n18668,1963,AMER,toys,retail,63.15,5,0.247,coupon,2024-11-14\r\n18669,1459,LATAM,grocery,online,91.94,5,0.029,none,2024-12-23\r\n18670,1763,LATAM,home,retail,57.58,2,0.122,none,2024-07-13\r\n18671,1148,AMER,grocery,online,37.03,7,0.056,none,2024-09-28\r\n18672,2032,AMER,home,retail,59.79,3,0.187,coupon,2024-05-24\r\n18673,2254,LATAM,sports,online,36.46,4,0.126,none,2024-04-24\r\n18674,1964,EMEA,grocery,mobile,31.95,4,0.189,none,2024-06-23\r\n18675,1894,APAC,home,online,49.17,6,0.147,coupon,2024-12-18\r\n18676,1487,AMER,electronics,online,76.93,8,0.125,none,2024-11-17\r\n18677,1586,LATAM,sports,mobile,105.00,4,0.032,none,2024-01-24\r\n18678,2252,EMEA,grocery,mobile,76.13,1,0.105,none,2024-03-20\r\n18679,123
0,EMEA,electronics,retail,67.64,2,0.183,bundle,2024-09-08\r\n18680,1536,LATAM,home,retail,155.04,7,0.195,none,2024-11-24\r\n18681,2196,AMER,grocery,retail,66.38,6,0.217,none,2024-04-13\r\n18682,2086,APAC,home,online,10.80,5,0.037,bundle,2024-01-28\r\n18683,1205,APAC,fashion,online,39.23,4,0.190,none,2024-02-14\r\n18684,2092,AMER,home,online,94.82,3,0.230,none,2024-11-07\r\n18685,1451,EMEA,toys,mobile,51.28,7,0.021,bundle,2024-02-09\r\n18686,1056,LATAM,electronics,mobile,49.61,5,0.124,coupon,2024-03-19\r\n18687,1037,EMEA,sports,mobile,40.11,2,0.154,bundle,2024-07-02\r\n18688,1109,APAC,toys,mobile,85.53,5,0.040,none,2024-05-21\r\n18689,2203,APAC,grocery,online,39.97,6,0.031,coupon,2024-08-24\r\n18690,1105,AMER,sports,online,40.23,8,0.042,loyalty,2024-09-07\r\n18691,2480,APAC,fashion,mobile,52.05,2,0.105,coupon,2024-10-15\r\n18692,1610,LATAM,sports,retail,98.65,4,0.097,loyalty,2024-11-03\r\n18693,1170,AMER,toys,online,98.46,7,0.225,none,2024-10-01\r\n18694,1094,LATAM,grocery,retail,72.13,4,0.062,none,2024-08-01\r\n18695,1652,APAC,electronics,online,52.42,1,0.157,none,2024-10-04\r\n18696,2134,AMER,home,retail,95.16,3,0.060,coupon,2024-08-14\r\n18697,1367,AMER,fashion,online,58.30,7,0.156,bundle,2024-06-11\r\n18698,1009,APAC,grocery,retail,20.11,4,0.135,coupon,2024-04-10\r\n18699,2155,APAC,grocery,online,57.91,7,0.148,bundle,2024-02-24\r\n18700,1335,APAC,home,retail,50.04,3,0.181,none,2024-09-13\r\n18701,1593,AMER,home,online,166.73,8,0.096,none,2024-02-01\r\n18702,1686,LATAM,grocery,mobile,109.90,5,0.125,none,2024-12-02\r\n18703,2086,APAC,grocery,retail,37.01,1,0.089,none,2024-07-09\r\n18704,1882,AMER,sports,partner,49.87,4,0.224,none,2024-05-13\r\n18705,1881,LATAM,grocery,partner,97.13,3,0.046,none,2024-03-14\r\n18706,1360,APAC,grocery,retail,38.27,4,0.119,none,2024-07-09\r\n18707,1758,AMER,electronics,mobile,35.10,8,0.168,none,2024-04-05\r\n18708,1247,AMER,grocery,partner,48.98,4,0.165,none,2024-11-14\r\n18709,2396,AMER,home,mobile,33.45,3,0.029,none,2024-09-16\r\n187
10,1328,APAC,home,online,61.84,1,0.200,none,2024-05-20\r\n18711,1654,EMEA,fashion,partner,60.15,7,0.045,none,2024-02-13\r\n18712,2361,EMEA,fashion,online,110.34,2,0.118,loyalty,2024-05-25\r\n18713,2203,APAC,sports,mobile,47.30,4,0.008,none,2024-12-14\r\n18714,2234,LATAM,electronics,retail,44.58,6,0.246,none,2024-05-09\r\n18715,1892,LATAM,home,retail,35.46,7,0.048,none,2024-08-24\r\n18716,1023,APAC,grocery,retail,21.37,2,0.023,bundle,2024-11-05\r\n18717,1462,LATAM,electronics,online,60.25,6,0.038,coupon,2024-09-03\r\n18718,1476,APAC,grocery,online,34.08,8,0.056,none,2024-12-07\r\n18719,1739,AMER,fashion,mobile,28.92,1,0.069,loyalty,2024-05-13\r\n18720,2310,EMEA,toys,retail,29.95,1,0.146,bundle,2024-09-21\r\n18721,1377,APAC,grocery,online,29.76,2,0.078,none,2024-01-16\r\n18722,1859,AMER,home,retail,35.91,3,0.187,coupon,2024-06-16\r\n18723,1603,EMEA,electronics,retail,19.87,2,0.041,none,2024-07-05\r\n18724,1816,EMEA,electronics,mobile,54.60,3,0.206,none,2024-11-07\r\n18725,2152,EMEA,sports,online,33.12,7,0.039,none,2024-07-14\r\n18726,1951,LATAM,fashion,partner,121.82,3,0.053,bundle,2024-08-15\r\n18727,1243,AMER,grocery,online,106.34,6,0.120,none,2024-12-02\r\n18728,1459,LATAM,grocery,mobile,59.49,2,0.169,none,2024-09-10\r\n18729,2049,LATAM,fashion,online,27.21,3,0.231,loyalty,2024-12-10\r\n18730,1289,LATAM,fashion,online,61.73,6,0.232,coupon,2024-01-02\r\n18731,1553,LATAM,fashion,retail,66.92,8,0.233,bundle,2024-01-16\r\n18732,1612,LATAM,electronics,retail,40.22,1,0.070,coupon,2024-05-05\r\n18733,1572,LATAM,sports,online,57.10,5,0.155,bundle,2024-06-07\r\n18734,1678,LATAM,grocery,online,30.38,7,0.141,none,2024-10-19\r\n18735,2014,EMEA,fashion,retail,35.78,8,0.061,bundle,2024-05-01\r\n18736,1309,EMEA,sports,retail,123.34,3,0.246,coupon,2024-02-28\r\n18737,1582,AMER,home,online,62.04,3,0.129,loyalty,2024-04-13\r\n18738,2354,LATAM,grocery,online,89.83,4,0.144,bundle,2024-06-14\r\n18739,1029,EMEA,grocery,mobile,53.20,7,0.068,bundle,2024-06-19\r\n18740,2376,LATAM,sports,on
line,35.77,8,0.244,none,2024-06-10\r\n18741,2324,AMER,grocery,online,42.84,6,0.210,coupon,2024-08-14\r\n18742,1157,LATAM,grocery,retail,64.37,2,0.004,none,2024-01-17\r\n18743,2448,APAC,fashion,retail,36.16,2,0.043,none,2024-04-17\r\n18744,1447,LATAM,grocery,retail,42.41,6,0.051,loyalty,2024-10-23\r\n18745,2464,LATAM,sports,online,89.41,3,0.109,bundle,2024-12-05\r\n18746,1875,EMEA,grocery,retail,44.86,1,0.056,none,2024-10-11\r\n18747,2137,LATAM,sports,retail,68.67,8,0.024,coupon,2024-04-23\r\n18748,1557,LATAM,home,mobile,55.73,8,0.130,bundle,2024-12-27\r\n18749,1802,AMER,fashion,mobile,13.37,7,0.161,bundle,2024-05-24\r\n18750,1798,AMER,sports,partner,141.17,5,0.009,loyalty,2024-09-03\r\n18751,1701,LATAM,electronics,retail,88.87,3,0.173,coupon,2024-03-22\r\n18752,2158,APAC,grocery,mobile,27.10,4,0.229,none,2024-11-14\r\n18753,2117,EMEA,grocery,retail,45.06,2,0.240,none,2024-08-03\r\n18754,1188,LATAM,electronics,mobile,88.96,3,0.231,none,2024-12-18\r\n18755,1082,EMEA,fashion,retail,54.15,1,0.107,none,2024-01-01\r\n18756,1089,LATAM,fashion,online,66.37,4,0.116,none,2024-10-21\r\n18757,1103,EMEA,electronics,partner,86.72,2,0.130,none,2024-05-13\r\n18758,2329,LATAM,electronics,retail,33.65,7,0.056,none,2024-01-24\r\n18759,1751,AMER,grocery,online,138.65,2,0.221,loyalty,2024-09-01\r\n18760,1800,APAC,grocery,retail,44.68,4,0.249,loyalty,2024-06-27\r\n18761,1842,LATAM,fashion,retail,75.34,2,0.005,none,2024-12-14\r\n18762,1937,APAC,toys,online,91.92,1,0.226,none,2024-03-22\r\n18763,1706,EMEA,home,retail,31.79,4,0.228,none,2024-10-22\r\n18764,2066,APAC,electronics,online,75.81,1,0.040,none,2024-07-14\r\n18765,1381,LATAM,home,online,23.54,4,0.126,none,2024-10-28\r\n18766,1261,APAC,grocery,retail,24.91,3,0.044,bundle,2024-12-12\r\n18767,2498,LATAM,grocery,retail,91.07,2,0.156,none,2024-05-03\r\n18768,2079,EMEA,home,retail,99.41,8,0.082,coupon,2024-10-14\r\n18769,1051,EMEA,home,online,69.06,5,0.229,none,2024-08-23\r\n18770,2414,EMEA,home,online,61.16,5,0.031,none,2024-05-02\r\n18
771,1120,LATAM,electronics,mobile,56.78,7,0.002,bundle,2024-03-22\r\n18772,1102,APAC,home,online,33.59,6,0.060,none,2024-12-22\r\n18773,2347,AMER,fashion,retail,134.82,3,0.119,coupon,2024-07-04\r\n18774,1808,APAC,grocery,retail,90.74,2,0.012,loyalty,2024-11-28\r\n18775,2459,AMER,fashion,retail,35.31,5,0.065,bundle,2024-10-17\r\n18776,2309,AMER,toys,online,70.65,1,0.077,coupon,2024-11-07\r\n18777,2061,EMEA,electronics,mobile,57.31,4,0.183,none,2024-11-19\r\n18778,1006,AMER,sports,online,37.18,2,0.107,none,2024-09-21\r\n18779,2249,LATAM,grocery,retail,26.07,5,0.092,bundle,2024-08-23\r\n18780,1752,APAC,toys,retail,77.70,5,0.116,coupon,2024-11-15\r\n18781,1093,APAC,home,retail,28.67,3,0.068,bundle,2024-12-06\r\n18782,1882,AMER,fashion,online,34.76,1,0.227,none,2024-12-26\r\n18783,1699,APAC,home,online,32.96,8,0.218,loyalty,2024-01-03\r\n18784,1625,EMEA,fashion,partner,39.05,5,0.239,coupon,2024-01-05\r\n18785,2350,APAC,toys,retail,37.92,3,0.169,bundle,2024-12-26\r\n18786,1411,LATAM,grocery,online,68.05,7,0.077,none,2024-03-08\r\n18787,2078,APAC,electronics,retail,57.30,6,0.158,bundle,2024-09-07\r\n18788,2474,LATAM,toys,online,33.54,3,0.205,none,2024-08-13\r\n18789,2288,AMER,toys,mobile,59.06,7,0.211,loyalty,2024-10-27\r\n18790,1229,LATAM,electronics,online,28.77,2,0.159,loyalty,2024-12-03\r\n18791,2039,EMEA,grocery,retail,87.67,1,0.069,none,2024-06-02\r\n18792,1104,APAC,grocery,online,36.61,4,0.064,none,2024-04-26\r\n18793,1411,LATAM,grocery,retail,73.86,3,0.069,bundle,2024-08-26\r\n18794,1496,AMER,sports,mobile,95.58,4,0.017,none,2024-12-20\r\n18795,1110,LATAM,electronics,online,60.02,7,0.184,coupon,2024-02-25\r\n18796,1786,APAC,grocery,retail,77.23,1,0.175,none,2024-11-08\r\n18797,1823,EMEA,grocery,retail,33.43,8,0.004,bundle,2024-10-25\r\n18798,1407,LATAM,grocery,online,65.80,7,0.085,loyalty,2024-02-20\r\n18799,2427,LATAM,sports,retail,28.14,1,0.200,none,2024-05-27\r\n18800,1423,EMEA,grocery,mobile,25.71,2,0.166,bundle,2024-10-24\r\n18801,2359,LATAM,electronics,retail
,60.17,7,0.202,none,2024-07-02\r\n18802,1620,LATAM,electronics,retail,59.88,6,0.218,none,2024-02-06\r\n18803,2184,APAC,home,online,47.48,7,0.145,none,2024-04-21\r\n18804,1041,APAC,electronics,partner,48.82,4,0.147,loyalty,2024-03-24\r\n18805,1772,EMEA,electronics,online,137.35,1,0.230,none,2024-04-19\r\n18806,2406,EMEA,toys,retail,49.37,6,0.207,coupon,2024-11-21\r\n18807,1023,APAC,home,mobile,52.52,3,0.153,coupon,2024-12-17\r\n18808,1036,EMEA,toys,mobile,45.62,5,0.172,coupon,2024-05-15\r\n18809,1454,APAC,fashion,retail,32.65,8,0.097,loyalty,2024-08-09\r\n18810,2160,LATAM,grocery,online,45.94,4,0.228,none,2024-06-27\r\n18811,1019,APAC,home,retail,144.56,4,0.208,none,2024-03-17\r\n18812,1624,AMER,home,retail,24.59,8,0.029,coupon,2024-09-25\r\n18813,2387,EMEA,sports,online,141.25,2,0.215,coupon,2024-06-28\r\n18814,1268,EMEA,grocery,retail,29.15,4,0.220,coupon,2024-09-19\r\n18815,1334,APAC,fashion,online,203.75,1,0.077,coupon,2024-09-19\r\n18816,1570,AMER,electronics,online,138.40,6,0.023,none,2024-02-19\r\n18817,2244,LATAM,fashion,online,38.46,2,0.156,none,2024-01-15\r\n18818,1624,AMER,electronics,mobile,113.82,4,0.030,bundle,2024-05-09\r\n18819,2064,LATAM,grocery,retail,36.10,2,0.240,none,2024-03-06\r\n18820,2383,APAC,electronics,partner,120.91,7,0.032,none,2024-12-15\r\n18821,1821,LATAM,grocery,online,49.51,6,0.119,coupon,2024-01-04\r\n18822,1185,LATAM,electronics,retail,85.77,6,0.081,none,2024-06-17\r\n18823,1271,EMEA,grocery,retail,96.81,6,0.205,loyalty,2024-02-12\r\n18824,1281,AMER,toys,retail,28.61,1,0.129,none,2024-05-05\r\n18825,1744,EMEA,home,mobile,117.63,8,0.085,bundle,2024-06-12\r\n18826,1747,EMEA,sports,online,34.97,5,0.217,loyalty,2024-12-09\r\n18827,1799,EMEA,fashion,mobile,42.54,6,0.098,none,2024-01-09\r\n18828,1268,EMEA,grocery,online,57.24,6,0.220,coupon,2024-04-27\r\n18829,1864,EMEA,toys,online,44.17,8,0.012,coupon,2024-05-26\r\n18830,2471,APAC,grocery,online,94.54,1,0.111,none,2024-06-11\r\n18831,1657,LATAM,toys,retail,38.80,5,0.139,none,2024-10-06\
r\n18832,2290,LATAM,fashion,retail,87.56,6,0.245,none,2024-12-04\r\n18833,1408,AMER,grocery,mobile,83.98,7,0.022,none,2024-10-05\r\n18834,1584,EMEA,sports,mobile,62.79,6,0.009,none,2024-02-02\r\n18835,2405,AMER,electronics,retail,44.07,6,0.188,coupon,2024-12-03\r\n18836,1118,AMER,home,online,99.54,5,0.161,none,2024-02-04\r\n18837,1264,APAC,grocery,online,60.52,8,0.154,coupon,2024-08-03\r\n18838,2486,APAC,toys,retail,56.08,7,0.238,none,2024-04-12\r\n18839,2241,APAC,grocery,retail,66.66,3,0.065,none,2024-01-06\r\n18840,1934,EMEA,electronics,retail,75.63,1,0.011,loyalty,2024-01-03\r\n18841,1788,AMER,grocery,retail,75.02,5,0.155,none,2024-09-18\r\n18842,2118,AMER,grocery,retail,38.91,7,0.109,none,2024-06-24\r\n18843,1517,AMER,grocery,online,87.80,5,0.001,loyalty,2024-11-14\r\n18844,1267,EMEA,home,online,58.79,4,0.111,coupon,2024-06-22\r\n18845,2040,LATAM,grocery,retail,70.77,3,0.168,loyalty,2024-02-07\r\n18846,1686,LATAM,fashion,mobile,38.69,3,0.220,bundle,2024-08-22\r\n18847,1594,LATAM,fashion,online,108.24,1,0.176,coupon,2024-07-06\r\n18848,1402,EMEA,fashion,partner,84.04,4,0.105,none,2024-05-10\r\n18849,2082,APAC,fashion,retail,76.21,1,0.163,coupon,2024-10-10\r\n18850,1940,APAC,grocery,partner,42.44,6,0.047,none,2024-07-12\r\n18851,2495,EMEA,home,mobile,64.45,2,0.237,bundle,2024-10-25\r\n18852,1806,APAC,grocery,retail,13.59,3,0.137,none,2024-10-09\r\n18853,1341,EMEA,grocery,retail,34.80,8,0.191,loyalty,2024-09-10\r\n18854,2016,LATAM,sports,online,46.48,3,0.083,none,2024-05-13\r\n18855,2163,EMEA,electronics,partner,48.48,1,0.169,none,2024-05-26\r\n18856,2123,AMER,electronics,mobile,48.29,7,0.142,coupon,2024-08-28\r\n18857,1689,LATAM,electronics,retail,44.72,8,0.053,loyalty,2024-02-20\r\n18858,1840,LATAM,sports,partner,47.47,1,0.229,none,2024-12-06\r\n18859,1223,LATAM,fashion,partner,121.02,8,0.107,none,2024-04-21\r\n18860,1749,LATAM,fashion,online,78.93,7,0.142,coupon,2024-02-06\r\n18861,1869,AMER,grocery,retail,65.91,5,0.065,bundle,2024-10-27\r\n18862,2472,AMER,fashi
on,mobile,50.14,2,0.096,coupon,2024-04-16\r\n18863,1114,APAC,sports,online,121.65,7,0.048,bundle,2024-06-01\r\n18864,2237,EMEA,grocery,retail,50.34,3,0.233,none,2024-01-15\r\n18865,1017,AMER,electronics,partner,29.88,3,0.084,none,2024-11-20\r\n18866,1438,APAC,toys,online,78.76,3,0.236,coupon,2024-06-19\r\n18867,2005,APAC,grocery,online,38.30,4,0.229,bundle,2024-12-22\r\n18868,1738,LATAM,sports,retail,86.45,1,0.191,bundle,2024-02-18\r\n18869,1111,APAC,home,retail,107.82,5,0.234,none,2024-08-22\r\n18870,1538,AMER,toys,online,22.08,8,0.200,none,2024-02-20\r\n18871,1418,LATAM,toys,online,80.90,7,0.250,loyalty,2024-04-28\r\n18872,1844,APAC,home,retail,59.31,2,0.184,bundle,2024-09-15\r\n18873,1112,APAC,grocery,retail,42.46,7,0.228,bundle,2024-12-14\r\n18874,2036,APAC,electronics,online,50.95,3,0.052,loyalty,2024-06-26\r\n18875,1950,LATAM,home,retail,27.68,2,0.091,coupon,2024-02-11\r\n18876,2358,AMER,toys,retail,94.04,3,0.206,none,2024-08-16\r\n18877,2169,EMEA,electronics,online,39.76,1,0.137,none,2024-08-02\r\n18878,1994,LATAM,sports,online,52.28,2,0.120,loyalty,2024-09-13\r\n18879,2023,LATAM,fashion,online,87.54,5,0.004,none,2024-11-02\r\n18880,1340,LATAM,electronics,retail,54.42,6,0.042,none,2024-06-09\r\n18881,1122,AMER,sports,mobile,96.26,7,0.141,none,2024-07-07\r\n18882,1067,APAC,electronics,online,28.87,7,0.194,none,2024-01-24\r\n18883,1344,EMEA,electronics,online,51.88,7,0.088,none,2024-07-07\r\n18884,1006,AMER,grocery,online,92.27,3,0.072,none,2024-04-26\r\n18885,2031,AMER,electronics,online,76.71,3,0.074,none,2024-02-13\r\n18886,2326,LATAM,fashion,retail,42.50,4,0.099,none,2024-03-26\r\n18887,1141,AMER,home,partner,25.80,8,0.039,coupon,2024-02-06\r\n18888,1282,LATAM,fashion,online,64.07,1,0.219,none,2024-12-25\r\n18889,2082,APAC,electronics,online,49.54,2,0.104,none,2024-02-23\r\n18890,1499,EMEA,sports,online,43.71,5,0.021,loyalty,2024-04-13\r\n18891,1577,AMER,toys,online,38.80,6,0.085,none,2024-08-23\r\n18892,1809,APAC,grocery,retail,38.42,5,0.135,none,2024-03-0
4\r\n18893,2216,AMER,home,online,60.99,7,0.051,none,2024-08-06\r\n18894,1225,APAC,electronics,online,48.38,7,0.217,none,2024-12-07\r\n18895,1247,AMER,home,mobile,27.31,3,0.183,coupon,2024-07-19\r\n18896,1757,EMEA,fashion,retail,73.09,7,0.232,none,2024-08-12\r\n18897,1501,AMER,home,retail,259.16,3,0.206,none,2024-02-19\r\n18898,1771,AMER,grocery,retail,144.44,4,0.120,none,2024-10-17\r\n18899,1682,EMEA,home,online,49.98,8,0.085,bundle,2024-06-03\r\n18900,1318,LATAM,electronics,online,112.24,3,0.245,none,2024-01-15\r\n18901,1019,APAC,sports,online,40.16,5,0.155,none,2024-09-23\r\n18902,2425,APAC,grocery,mobile,47.52,8,0.139,none,2024-08-01\r\n18903,2335,EMEA,electronics,online,32.27,3,0.178,none,2024-10-08\r\n18904,1506,EMEA,fashion,online,58.46,6,0.072,none,2024-09-05\r\n18905,2489,LATAM,electronics,retail,32.96,2,0.125,none,2024-10-28\r\n18906,2242,AMER,grocery,online,27.77,4,0.227,none,2024-05-26\r\n18907,2271,LATAM,sports,retail,65.45,8,0.107,none,2024-12-20\r\n18908,1181,LATAM,electronics,online,90.84,8,0.035,none,2024-02-14\r\n18909,2069,AMER,home,online,141.92,6,0.152,none,2024-08-03\r\n18910,1625,EMEA,grocery,online,65.11,6,0.009,none,2024-10-03\r\n18911,1509,AMER,fashion,retail,29.99,6,0.120,loyalty,2024-07-18\r\n18912,1387,AMER,electronics,mobile,69.35,4,0.124,coupon,2024-05-11\r\n18913,2319,AMER,grocery,online,112.46,2,0.197,none,2024-11-19\r\n18914,2176,AMER,electronics,online,37.33,8,0.110,none,2024-03-04\r\n18915,1103,EMEA,toys,retail,57.64,4,0.009,bundle,2024-10-06\r\n18916,1695,LATAM,fashion,online,101.57,5,0.242,none,2024-07-21\r\n18917,1700,EMEA,sports,online,51.48,2,0.047,bundle,2024-06-26\r\n18918,1250,APAC,toys,partner,123.05,6,0.022,none,2024-09-14\r\n18919,1263,AMER,electronics,mobile,51.78,1,0.190,none,2024-01-12\r\n18920,2203,APAC,electronics,retail,51.15,3,0.114,none,2024-01-04\r\n18921,1314,AMER,fashion,retail,50.26,4,0.099,bundle,2024-11-13\r\n18922,1654,EMEA,home,online,53.05,5,0.248,none,2024-08-24\r\n18923,1389,LATAM,electronics,online,50
.97,3,0.210,none,2024-08-23\r\n18924,2007,LATAM,home,retail,54.32,8,0.194,coupon,2024-06-09\r\n18925,1095,APAC,fashion,retail,69.75,2,0.189,bundle,2024-11-07\r\n18926,1324,LATAM,electronics,online,72.03,4,0.181,none,2024-09-09\r\n18927,1205,APAC,toys,mobile,55.34,8,0.195,none,2024-01-24\r\n18928,1368,EMEA,toys,retail,43.05,7,0.135,bundle,2024-04-09\r\n18929,1071,AMER,electronics,online,50.24,1,0.220,none,2024-10-09\r\n18930,2344,LATAM,sports,mobile,20.92,1,0.156,bundle,2024-09-24\r\n18931,1032,AMER,electronics,online,38.70,4,0.039,coupon,2024-04-04\r\n18932,1088,LATAM,sports,online,65.52,1,0.223,none,2024-09-21\r\n18933,2023,LATAM,electronics,mobile,58.70,7,0.189,none,2024-03-25\r\n18934,1509,AMER,fashion,mobile,52.98,1,0.195,coupon,2024-09-03\r\n18935,1448,EMEA,toys,retail,44.01,5,0.013,loyalty,2024-07-28\r\n18936,1300,EMEA,home,online,49.83,1,0.051,bundle,2024-06-10\r\n18937,1867,AMER,home,online,43.19,4,0.227,bundle,2024-01-05\r\n18938,1327,APAC,grocery,retail,85.63,3,0.239,none,2024-05-28\r\n18939,1336,APAC,grocery,retail,136.78,4,0.083,none,2024-05-19\r\n18940,2199,LATAM,toys,mobile,59.40,1,0.020,none,2024-03-22\r\n18941,1412,AMER,grocery,online,112.03,5,0.012,loyalty,2024-02-01\r\n18942,2473,EMEA,fashion,online,63.91,1,0.070,none,2024-06-14\r\n18943,1626,EMEA,toys,online,17.60,1,0.059,none,2024-08-14\r\n18944,1514,LATAM,electronics,retail,55.28,5,0.032,none,2024-05-01\r\n18945,1752,APAC,sports,mobile,53.68,1,0.050,coupon,2024-01-11\r\n18946,1934,EMEA,sports,mobile,44.12,4,0.035,none,2024-09-22\r\n18947,1677,EMEA,home,online,75.30,7,0.166,coupon,2024-06-08\r\n18948,2251,APAC,fashion,mobile,69.16,3,0.192,coupon,2024-09-21\r\n18949,1984,LATAM,home,online,59.79,3,0.213,none,2024-09-20\r\n18950,1407,LATAM,sports,online,95.07,7,0.208,bundle,2024-07-18\r\n18951,2289,APAC,electronics,retail,50.05,4,0.187,coupon,2024-07-08\r\n18952,1692,LATAM,home,mobile,40.39,6,0.011,bundle,2024-07-09\r\n18953,1394,LATAM,electronics,online,117.32,3,0.080,bundle,2024-03-09\r\n18954,148
5,APAC,grocery,online,49.86,4,0.210,none,2024-12-20\r\n18955,2322,AMER,fashion,online,78.79,6,0.151,none,2024-11-07\r\n18956,1235,EMEA,electronics,retail,14.18,4,0.124,coupon,2024-03-03\r\n18957,1456,APAC,grocery,partner,24.77,7,0.158,coupon,2024-10-28\r\n18958,1219,LATAM,grocery,online,21.31,5,0.159,none,2024-05-10\r\n18959,2135,EMEA,electronics,retail,100.79,4,0.197,coupon,2024-09-05\r\n18960,2444,EMEA,toys,retail,49.16,8,0.006,none,2024-02-27\r\n18961,1620,LATAM,sports,mobile,124.94,1,0.172,none,2024-07-12\r\n18962,1246,EMEA,electronics,online,28.27,4,0.010,none,2024-08-02\r\n18963,2190,LATAM,electronics,retail,52.40,6,0.221,none,2024-05-22\r\n18964,2275,LATAM,grocery,online,72.08,8,0.102,coupon,2024-11-21\r\n18965,2312,APAC,grocery,mobile,72.17,5,0.181,none,2024-04-10\r\n18966,1503,APAC,electronics,retail,53.74,7,0.027,coupon,2024-10-26\r\n18967,1229,LATAM,grocery,retail,74.82,5,0.006,bundle,2024-10-10\r\n18968,1727,APAC,fashion,mobile,81.58,6,0.010,none,2024-12-21\r\n18969,2283,AMER,grocery,retail,85.04,6,0.037,loyalty,2024-05-14\r\n18970,2404,EMEA,grocery,online,108.35,5,0.121,none,2024-07-28\r\n18971,1214,EMEA,home,online,85.28,1,0.163,none,2024-04-28\r\n18972,1664,LATAM,grocery,retail,97.42,3,0.147,none,2024-04-11\r\n18973,1187,AMER,home,online,106.44,8,0.037,none,2024-05-14\r\n18974,1886,LATAM,electronics,mobile,48.37,4,0.025,coupon,2024-09-13\r\n18975,1941,AMER,grocery,retail,65.49,6,0.190,coupon,2024-06-02\r\n18976,2175,AMER,sports,mobile,44.91,3,0.218,coupon,2024-04-05\r\n18977,2410,EMEA,grocery,online,35.57,3,0.002,none,2024-12-23\r\n18978,1348,AMER,sports,retail,77.14,6,0.206,none,2024-11-06\r\n18979,2424,LATAM,electronics,mobile,98.15,5,0.219,loyalty,2024-04-19\r\n18980,2367,AMER,home,online,58.21,5,0.100,none,2024-02-03\r\n18981,1420,APAC,sports,online,134.73,2,0.148,none,2024-05-04\r\n18982,2364,APAC,sports,online,58.67,8,0.098,none,2024-10-10\r\n18983,2265,APAC,home,mobile,86.77,3,0.011,bundle,2024-06-23\r\n18984,2144,EMEA,grocery,online,35.79,7,0.
020,none,2024-05-08\r\n18985,1719,LATAM,grocery,mobile,49.78,7,0.172,none,2024-06-05\r\n18986,2300,EMEA,home,online,32.62,5,0.219,bundle,2024-07-16\r\n18987,1877,LATAM,electronics,retail,64.67,6,0.055,bundle,2024-12-13\r\n18988,1177,LATAM,fashion,online,56.63,3,0.019,none,2024-07-17\r\n18989,1902,AMER,fashion,retail,54.19,1,0.190,none,2024-05-02\r\n18990,2269,EMEA,grocery,mobile,112.39,1,0.128,none,2024-01-27\r\n18991,1378,APAC,fashion,online,44.54,7,0.129,coupon,2024-09-19\r\n18992,1293,AMER,grocery,online,59.33,6,0.111,none,2024-04-24\r\n18993,2349,APAC,toys,online,38.89,8,0.092,coupon,2024-03-18\r\n18994,1107,APAC,home,mobile,37.50,2,0.061,none,2024-06-22\r\n18995,1954,APAC,fashion,mobile,52.08,4,0.094,none,2024-02-16\r\n18996,1838,AMER,fashion,online,30.66,5,0.013,loyalty,2024-06-17\r\n18997,1793,LATAM,electronics,retail,95.50,5,0.216,none,2024-06-01\r\n18998,2168,EMEA,sports,retail,60.59,2,0.067,loyalty,2024-03-22\r\n18999,1340,LATAM,sports,online,30.35,5,0.002,none,2024-07-27\r\n19000,2267,AMER,grocery,retail,56.91,6,0.155,none,2024-02-06\r\n19001,2126,APAC,grocery,online,37.77,3,0.210,none,2024-11-01\r\n19002,1299,LATAM,grocery,online,109.09,1,0.100,none,2024-05-08\r\n19003,1114,APAC,sports,online,22.90,5,0.134,none,2024-12-06\r\n19004,1812,EMEA,home,mobile,46.12,5,0.188,bundle,2024-06-09\r\n19005,1650,LATAM,grocery,online,88.67,7,0.100,bundle,2024-08-11\r\n19006,2370,EMEA,fashion,online,25.60,8,0.165,bundle,2024-08-09\r\n19007,2281,AMER,grocery,retail,198.22,2,0.050,coupon,2024-12-25\r\n19008,2396,AMER,sports,online,65.89,5,0.101,none,2024-11-18\r\n19009,2088,EMEA,home,partner,53.51,5,0.026,none,2024-11-28\r\n19010,1124,AMER,electronics,online,94.17,1,0.123,none,2024-01-09\r\n19011,2020,AMER,home,retail,42.52,5,0.070,none,2024-05-13\r\n19012,1968,EMEA,fashion,online,61.58,5,0.056,none,2024-01-19\r\n19013,1517,AMER,electronics,online,109.35,2,0.022,none,2024-05-16\r\n19014,2140,AMER,sports,retail,25.21,6,0.197,loyalty,2024-03-03\r\n19015,1399,AMER,fashion,ret
ail,146.66,6,0.011,none,2024-04-10\r\n19016,1033,APAC,home,retail,70.61,6,0.195,coupon,2024-04-19\r\n19017,2423,LATAM,fashion,online,51.97,2,0.118,none,2024-11-09\r\n19018,1512,APAC,grocery,online,77.74,6,0.133,none,2024-11-19\r\n19019,1183,AMER,electronics,online,74.60,6,0.228,none,2024-12-09\r\n19020,2356,LATAM,sports,partner,34.62,5,0.175,none,2024-12-24\r\n19021,1444,EMEA,electronics,mobile,38.57,2,0.233,bundle,2024-02-25\r\n19022,2015,APAC,toys,online,40.27,3,0.114,none,2024-03-02\r\n19023,1055,AMER,grocery,mobile,41.39,7,0.198,bundle,2024-08-04\r\n19024,1559,EMEA,fashion,mobile,65.17,7,0.248,bundle,2024-01-19\r\n19025,1870,EMEA,fashion,online,21.40,1,0.054,bundle,2024-09-13\r\n19026,1637,APAC,electronics,retail,99.89,6,0.000,loyalty,2024-05-24\r\n19027,1481,LATAM,grocery,mobile,121.83,8,0.100,none,2024-12-16\r\n19028,2125,LATAM,sports,online,39.42,2,0.177,none,2024-04-03\r\n19029,1564,APAC,fashion,online,34.05,4,0.039,none,2024-11-16\r\n19030,2173,LATAM,fashion,online,28.19,2,0.133,bundle,2024-06-12\r\n19031,1118,AMER,electronics,online,42.33,3,0.148,none,2024-08-23\r\n19032,1870,EMEA,electronics,retail,97.57,4,0.178,none,2024-01-15\r\n19033,2110,LATAM,grocery,online,20.94,1,0.219,coupon,2024-01-25\r\n19034,1721,EMEA,home,retail,70.45,2,0.138,none,2024-07-18\r\n19035,2189,LATAM,grocery,online,77.92,6,0.244,bundle,2024-08-18\r\n19036,2055,AMER,grocery,online,96.68,1,0.164,loyalty,2024-09-08\r\n19037,1742,AMER,home,online,44.62,8,0.226,none,2024-05-13\r\n19038,1983,LATAM,home,online,110.88,8,0.079,none,2024-11-05\r\n19039,1700,EMEA,home,online,96.28,8,0.250,none,2024-06-19\r\n19040,1276,AMER,home,online,49.98,5,0.080,bundle,2024-04-13\r\n19041,1778,LATAM,electronics,online,45.22,1,0.240,none,2024-04-14\r\n19042,1085,EMEA,electronics,mobile,90.20,2,0.168,coupon,2024-12-13\r\n19043,1881,LATAM,home,retail,215.35,7,0.217,bundle,2024-07-17\r\n19044,2455,AMER,sports,online,29.26,6,0.120,coupon,2024-05-09\r\n19045,1633,EMEA,grocery,online,73.99,2,0.207,loyalty,2024-06-
23\r\n19046,2468,EMEA,grocery,retail,80.94,7,0.233,coupon,2024-04-06\r\n19047,1756,EMEA,electronics,online,90.68,4,0.061,none,2024-03-23\r\n19048,2087,LATAM,fashion,mobile,19.01,6,0.224,coupon,2024-01-17\r\n19049,1392,AMER,grocery,online,41.83,7,0.058,none,2024-11-07\r\n19050,1266,AMER,fashion,retail,49.07,7,0.249,bundle,2024-12-17\r\n19051,2140,AMER,electronics,mobile,93.67,6,0.006,none,2024-05-28\r\n19052,1491,EMEA,fashion,retail,12.20,5,0.045,none,2024-06-22\r\n19053,2136,AMER,home,mobile,38.19,3,0.096,bundle,2024-11-05\r\n19054,2445,APAC,fashion,retail,34.08,6,0.162,none,2024-04-28\r\n19055,1596,EMEA,grocery,mobile,49.83,5,0.101,none,2024-08-05\r\n19056,1388,AMER,grocery,online,304.32,5,0.032,coupon,2024-07-05\r\n19057,1770,AMER,electronics,online,89.29,7,0.110,none,2024-02-25\r\n19058,1026,APAC,home,retail,37.28,1,0.041,bundle,2024-04-17\r\n19059,2197,LATAM,grocery,online,32.37,2,0.008,none,2024-10-21\r\n19060,2137,LATAM,grocery,retail,74.58,8,0.244,loyalty,2024-12-17\r\n19061,1294,APAC,grocery,retail,98.21,1,0.167,none,2024-03-08\r\n19062,1462,LATAM,home,retail,60.23,3,0.057,none,2024-07-06\r\n19063,2305,AMER,fashion,online,77.36,4,0.169,none,2024-10-28\r\n19064,1784,EMEA,grocery,retail,31.42,2,0.087,none,2024-07-04\r\n19065,1146,LATAM,sports,retail,31.60,1,0.045,bundle,2024-08-15\r\n19066,1668,AMER,electronics,retail,45.97,4,0.238,none,2024-07-17\r\n19067,1261,APAC,sports,online,15.39,4,0.179,coupon,2024-07-18\r\n19068,1747,EMEA,sports,retail,90.06,8,0.173,none,2024-08-18\r\n19069,1411,LATAM,electronics,online,90.80,7,0.158,bundle,2024-08-28\r\n19070,1447,LATAM,grocery,mobile,51.50,1,0.198,bundle,2024-09-17\r\n19071,2154,APAC,sports,retail,24.86,8,0.054,none,2024-04-24\r\n19072,1422,LATAM,grocery,mobile,35.65,5,0.119,bundle,2024-01-15\r\n19073,2323,AMER,electronics,online,67.95,5,0.039,bundle,2024-09-21\r\n19074,1611,EMEA,toys,online,34.28,8,0.132,coupon,2024-02-28\r\n19075,1221,LATAM,fashion,retail,72.66,4,0.144,coupon,2024-08-21\r\n19076,1758,AMER,fashion,m
obile,103.80,7,0.027,none,2024-06-26\r\n19077,1543,AMER,grocery,retail,69.63,1,0.063,none,2024-02-18\r\n19078,1962,APAC,grocery,online,154.79,1,0.046,none,2024-03-24\r\n19079,2018,AMER,fashion,retail,31.61,7,0.248,loyalty,2024-02-13\r\n19080,1244,LATAM,electronics,retail,56.57,1,0.091,coupon,2024-12-02\r\n19081,1176,EMEA,grocery,mobile,42.51,6,0.024,none,2024-12-11\r\n19082,2209,AMER,sports,online,19.34,2,0.171,loyalty,2024-03-08\r\n19083,1965,LATAM,sports,online,51.36,3,0.054,coupon,2024-01-23\r\n19084,1650,LATAM,sports,partner,65.68,8,0.092,none,2024-12-13\r\n19085,1650,LATAM,home,retail,34.18,8,0.170,none,2024-11-21\r\n19086,1450,EMEA,grocery,mobile,25.62,3,0.007,bundle,2024-07-04\r\n19087,1466,AMER,electronics,retail,143.23,1,0.060,none,2024-04-27\r\n19088,1763,LATAM,fashion,online,86.76,5,0.054,coupon,2024-08-22\r\n19089,2413,AMER,sports,retail,46.20,7,0.154,none,2024-03-13\r\n19090,2104,EMEA,electronics,retail,18.16,7,0.006,coupon,2024-09-27\r\n19091,1928,AMER,home,online,74.31,7,0.125,coupon,2024-01-17\r\n19092,1688,LATAM,electronics,online,31.94,2,0.017,none,2024-05-27\r\n19093,1410,AMER,home,online,36.83,5,0.237,coupon,2024-09-14\r\n19094,1197,LATAM,electronics,online,133.62,8,0.187,coupon,2024-10-09\r\n19095,1053,AMER,fashion,retail,68.31,5,0.189,loyalty,2024-04-24\r\n19096,1686,LATAM,fashion,online,152.90,1,0.141,bundle,2024-05-13\r\n19097,2100,APAC,grocery,retail,89.17,2,0.077,none,2024-06-19\r\n19098,1668,AMER,fashion,online,84.13,4,0.237,none,2024-06-19\r\n19099,1961,EMEA,home,online,118.52,3,0.147,loyalty,2024-06-27\r\n19100,1020,APAC,grocery,retail,45.94,8,0.044,none,2024-08-27\r\n19101,1911,LATAM,fashion,mobile,117.08,1,0.155,bundle,2024-03-09\r\n19102,1498,LATAM,grocery,online,36.64,5,0.012,coupon,2024-09-21\r\n19103,2178,AMER,home,retail,29.92,3,0.061,none,2024-10-07\r\n19104,2154,APAC,home,online,29.56,7,0.197,coupon,2024-01-05\r\n19105,2264,LATAM,electronics,online,84.19,6,0.075,coupon,2024-04-03\r\n19106,1567,AMER,sports,retail,165.87,6,0.226,l
oyalty,2024-10-09\r\n19107,1485,APAC,toys,retail,34.47,7,0.014,bundle,2024-05-02\r\n19108,1888,LATAM,toys,online,55.48,2,0.032,bundle,2024-08-17\r\n19109,1968,EMEA,fashion,retail,50.63,7,0.159,none,2024-05-09\r\n19110,1821,LATAM,sports,online,49.57,1,0.052,none,2024-12-19\r\n19111,1182,EMEA,home,online,21.28,2,0.160,none,2024-01-18\r\n19112,1697,APAC,electronics,online,92.02,1,0.173,none,2024-09-23\r\n19113,2425,APAC,home,online,42.05,3,0.034,none,2024-06-09\r\n19114,1686,LATAM,fashion,online,48.40,1,0.200,none,2024-09-06\r\n19115,1817,APAC,home,online,63.56,5,0.151,loyalty,2024-03-18\r\n19116,2200,LATAM,grocery,online,94.66,5,0.019,coupon,2024-01-19\r\n19117,2471,APAC,home,retail,50.76,3,0.088,none,2024-08-04\r\n19118,2102,APAC,electronics,partner,33.69,1,0.079,none,2024-02-15\r\n19119,1830,EMEA,grocery,online,90.85,2,0.135,coupon,2024-02-13\r\n19120,1352,AMER,electronics,online,39.44,8,0.036,none,2024-01-17\r\n19121,1801,LATAM,grocery,retail,110.69,4,0.208,none,2024-01-25\r\n19122,1870,EMEA,electronics,online,43.69,8,0.219,none,2024-08-23\r\n19123,1663,LATAM,toys,retail,101.05,8,0.218,none,2024-06-07\r\n19124,2404,EMEA,toys,retail,45.10,2,0.130,bundle,2024-07-24\r\n19125,2237,EMEA,toys,partner,132.76,4,0.166,none,2024-02-12\r\n19126,1086,AMER,fashion,retail,54.38,8,0.166,none,2024-10-15\r\n19127,1796,LATAM,toys,mobile,130.45,7,0.081,none,2024-09-06\r\n19128,2110,LATAM,fashion,online,71.87,6,0.232,coupon,2024-03-28\r\n19129,2356,LATAM,grocery,online,37.30,1,0.248,bundle,2024-03-07\r\n19130,1733,LATAM,grocery,online,55.54,7,0.239,none,2024-08-09\r\n19131,2029,APAC,home,online,31.27,7,0.162,none,2024-10-09\r\n19132,1066,AMER,grocery,retail,28.64,4,0.225,none,2024-01-21\r\n19133,1702,AMER,grocery,online,129.48,8,0.015,none,2024-08-10\r\n19134,1001,LATAM,grocery,online,109.57,3,0.033,bundle,2024-12-10\r\n19135,2233,EMEA,grocery,online,50.10,4,0.233,none,2024-10-06\r\n19136,2178,AMER,grocery,retail,50.90,4,0.052,coupon,2024-06-07\r\n19137,2068,LATAM,home,online,102.13,1
,0.203,none,2024-01-28\r\n19138,1213,EMEA,home,retail,43.33,3,0.090,none,2024-09-20\r\n19139,1239,APAC,grocery,retail,142.03,7,0.059,loyalty,2024-10-15\r\n19140,1685,AMER,sports,retail,62.86,7,0.141,none,2024-05-02\r\n19141,1689,LATAM,grocery,online,71.07,6,0.215,none,2024-11-21\r\n19142,1574,AMER,grocery,online,41.20,6,0.170,none,2024-05-08\r\n19143,2408,EMEA,home,retail,77.93,5,0.100,none,2024-04-07\r\n19144,1321,EMEA,home,retail,23.02,3,0.049,none,2024-06-08\r\n19145,1664,LATAM,grocery,online,50.45,4,0.084,loyalty,2024-12-08\r\n19146,1258,EMEA,electronics,online,55.29,3,0.235,none,2024-05-18\r\n19147,1182,EMEA,grocery,retail,37.88,3,0.064,coupon,2024-10-04\r\n19148,1372,APAC,sports,online,87.88,7,0.228,coupon,2024-06-06\r\n19149,1914,EMEA,grocery,retail,48.56,2,0.166,none,2024-01-17\r\n19150,2042,LATAM,electronics,retail,41.64,6,0.128,none,2024-11-27\r\n19151,1926,AMER,sports,retail,35.95,6,0.020,coupon,2024-04-20\r\n19152,1700,EMEA,home,mobile,36.70,1,0.039,loyalty,2024-06-15\r\n19153,2304,LATAM,toys,mobile,107.80,5,0.240,none,2024-07-25\r\n19154,1470,LATAM,fashion,retail,92.18,6,0.036,none,2024-01-13\r\n19155,1906,APAC,fashion,online,29.30,2,0.144,none,2024-02-27\r\n19156,1014,EMEA,fashion,online,44.14,7,0.152,none,2024-12-15\r\n19157,1672,APAC,electronics,online,36.88,3,0.206,none,2024-02-13\r\n19158,2320,LATAM,grocery,online,73.41,1,0.060,loyalty,2024-06-19\r\n19159,1504,AMER,electronics,partner,131.20,3,0.032,none,2024-08-13\r\n19160,1188,LATAM,electronics,online,115.36,5,0.158,bundle,2024-05-07\r\n19161,2299,EMEA,electronics,online,48.80,1,0.142,none,2024-04-24\r\n19162,2144,EMEA,sports,retail,71.42,4,0.107,none,2024-07-24\r\n19163,1849,EMEA,sports,retail,81.31,2,0.104,loyalty,2024-03-10\r\n19164,1717,AMER,sports,online,49.38,1,0.059,none,2024-09-03\r\n19165,1356,LATAM,home,online,55.05,7,0.101,none,2024-10-12\r\n19166,1508,LATAM,home,online,96.81,1,0.083,none,2024-04-09\r\n19167,1358,APAC,home,retail,32.36,3,0.154,none,2024-07-06\r\n19168,2001,EMEA,electro
nics,mobile,52.53,3,0.237,none,2024-01-15\r\n19169,1634,AMER,fashion,mobile,32.76,2,0.015,bundle,2024-02-21\r\n19170,1689,LATAM,grocery,online,42.04,8,0.096,none,2024-03-11\r\n19171,1266,AMER,grocery,retail,64.57,5,0.226,loyalty,2024-06-13\r\n19172,1553,LATAM,home,retail,31.83,6,0.227,none,2024-05-17\r\n19173,1189,AMER,electronics,mobile,84.14,2,0.136,bundle,2024-04-13\r\n19174,1795,EMEA,grocery,online,38.74,3,0.157,none,2024-10-08\r\n19175,2110,LATAM,fashion,online,53.30,1,0.027,loyalty,2024-04-01\r\n19176,2092,AMER,toys,online,27.43,8,0.200,none,2024-03-05\r\n19177,1009,APAC,electronics,online,79.78,4,0.065,none,2024-08-05\r\n19178,1257,APAC,grocery,retail,139.80,5,0.066,none,2024-02-17\r\n19179,2029,APAC,fashion,online,38.33,4,0.225,none,2024-09-10\r\n19180,1341,EMEA,grocery,online,37.60,7,0.001,none,2024-01-16\r\n19181,1659,APAC,grocery,retail,33.39,4,0.120,none,2024-09-03\r\n19182,1796,LATAM,electronics,online,41.06,2,0.184,none,2024-09-05\r\n19183,2026,LATAM,electronics,retail,52.95,3,0.237,loyalty,2024-07-15\r\n19184,2440,APAC,grocery,mobile,40.10,2,0.186,bundle,2024-10-01\r\n19185,1127,EMEA,sports,online,9.93,8,0.246,none,2024-05-04\r\n19186,2231,LATAM,grocery,retail,42.13,4,0.218,none,2024-10-22\r\n19187,1918,EMEA,sports,retail,129.41,3,0.058,none,2024-10-06\r\n19188,1478,EMEA,fashion,online,37.34,3,0.057,none,2024-02-11\r\n19189,1307,AMER,electronics,partner,37.79,5,0.105,loyalty,2024-05-21\r\n19190,1893,APAC,electronics,online,53.50,6,0.042,loyalty,2024-01-04\r\n19191,1082,EMEA,electronics,retail,50.74,8,0.179,loyalty,2024-06-11\r\n19192,1049,AMER,electronics,online,111.98,6,0.118,loyalty,2024-10-26\r\n19193,1833,EMEA,fashion,retail,32.16,6,0.157,bundle,2024-04-06\r\n19194,1148,AMER,grocery,retail,40.86,8,0.110,coupon,2024-08-26\r\n19195,2200,LATAM,sports,retail,73.89,2,0.208,none,2024-02-01\r\n19196,1530,APAC,fashion,online,84.52,4,0.016,none,2024-12-27\r\n19197,2471,APAC,grocery,online,29.13,4,0.230,bundle,2024-06-12\r\n19198,1717,AMER,fashion,online,22
.64,4,0.097,bundle,2024-03-24\r\n19199,2173,LATAM,home,mobile,65.75,3,0.066,none,2024-10-27\r\n19200,1272,AMER,fashion,online,30.82,5,0.208,none,2024-09-20\r\n19201,2446,LATAM,grocery,online,79.58,5,0.082,bundle,2024-04-03\r\n19202,2468,EMEA,toys,online,88.30,3,0.186,coupon,2024-07-01\r\n19203,1513,APAC,grocery,retail,62.14,6,0.087,none,2024-10-10\r\n19204,2360,EMEA,fashion,online,72.20,8,0.173,none,2024-10-04\r\n19205,2383,APAC,home,online,96.66,1,0.201,coupon,2024-01-01\r\n19206,1226,AMER,electronics,retail,142.97,3,0.218,coupon,2024-07-25\r\n19207,2192,APAC,grocery,retail,63.29,2,0.078,coupon,2024-05-10\r\n19208,1177,LATAM,fashion,retail,49.59,6,0.231,bundle,2024-12-10\r\n19209,1748,APAC,home,partner,87.96,6,0.014,loyalty,2024-06-17\r\n19210,1353,EMEA,grocery,online,36.30,8,0.056,loyalty,2024-04-06\r\n19211,2327,EMEA,grocery,online,71.89,8,0.125,bundle,2024-11-04\r\n19212,1797,LATAM,electronics,retail,130.72,7,0.092,bundle,2024-10-22\r\n19213,2006,APAC,electronics,online,29.64,7,0.188,none,2024-05-26\r\n19214,1680,LATAM,home,online,113.53,2,0.131,bundle,2024-08-11\r\n19215,1502,APAC,home,online,32.02,2,0.133,none,2024-12-26\r\n19216,1322,AMER,electronics,retail,41.72,2,0.222,loyalty,2024-10-12\r\n19217,2453,AMER,fashion,retail,34.31,6,0.217,none,2024-08-18\r\n19218,1625,EMEA,home,online,88.16,1,0.065,none,2024-03-25\r\n19219,2237,EMEA,grocery,mobile,37.97,6,0.206,loyalty,2024-04-21\r\n19220,1373,LATAM,electronics,mobile,100.02,4,0.094,none,2024-12-24\r\n19221,1492,APAC,grocery,partner,37.07,6,0.208,none,2024-07-07\r\n19222,2300,EMEA,home,retail,65.86,1,0.165,coupon,2024-09-05\r\n19223,1857,LATAM,fashion,online,79.77,3,0.241,coupon,2024-11-08\r\n19224,1987,AMER,electronics,online,39.27,5,0.043,none,2024-11-14\r\n19225,1883,LATAM,grocery,online,11.64,4,0.062,bundle,2024-10-08\r\n19226,1637,APAC,sports,online,47.30,4,0.246,bundle,2024-02-10\r\n19227,2492,LATAM,home,retail,36.88,5,0.220,none,2024-01-03\r\n19228,1128,LATAM,grocery,mobile,53.72,8,0.120,loyalty,2024-06-
11\r\n19229,1987,AMER,electronics,online,25.05,3,0.105,none,2024-08-11\r\n19230,1890,LATAM,sports,online,38.08,2,0.135,coupon,2024-08-22\r\n19231,1174,APAC,electronics,retail,50.59,5,0.079,bundle,2024-10-10\r\n19232,2401,LATAM,sports,retail,23.31,1,0.189,none,2024-05-24\r\n19233,1216,APAC,grocery,mobile,35.91,7,0.160,none,2024-09-01\r\n19234,1747,EMEA,grocery,online,66.18,5,0.099,none,2024-01-27\r\n19235,2492,LATAM,toys,online,60.72,4,0.176,none,2024-09-20\r\n19236,1565,AMER,electronics,online,62.29,3,0.095,loyalty,2024-06-28\r\n19237,1960,EMEA,fashion,mobile,32.48,7,0.138,coupon,2024-11-15\r\n19238,1976,AMER,grocery,retail,36.08,2,0.176,bundle,2024-12-23\r\n19239,1909,APAC,grocery,retail,86.90,2,0.134,coupon,2024-05-07\r\n19240,2034,LATAM,grocery,online,88.49,5,0.160,coupon,2024-06-18\r\n19241,1343,LATAM,electronics,mobile,35.08,4,0.222,bundle,2024-01-23\r\n19242,2358,AMER,grocery,mobile,34.57,2,0.130,none,2024-06-26\r\n19243,2097,AMER,toys,retail,72.66,3,0.131,coupon,2024-10-18\r\n19244,2492,LATAM,toys,retail,71.93,1,0.169,none,2024-07-05\r\n19245,1699,APAC,home,mobile,72.82,1,0.208,bundle,2024-07-12\r\n19246,1653,APAC,electronics,online,107.59,2,0.213,none,2024-08-15\r\n19247,2317,LATAM,sports,retail,92.37,3,0.166,coupon,2024-02-03\r\n19248,1852,AMER,home,mobile,49.49,7,0.153,none,2024-01-09\r\n19249,1128,LATAM,sports,online,60.22,7,0.060,coupon,2024-09-05\r\n19250,2353,AMER,grocery,retail,117.96,1,0.158,none,2024-11-09\r\n19251,1535,AMER,electronics,retail,42.86,6,0.025,loyalty,2024-11-04\r\n19252,1144,APAC,toys,partner,111.60,4,0.185,coupon,2024-09-07\r\n19253,1042,LATAM,grocery,mobile,59.67,6,0.081,none,2024-04-04\r\n19254,1239,APAC,sports,online,80.15,1,0.099,bundle,2024-11-15\r\n19255,1838,AMER,grocery,mobile,152.49,3,0.116,none,2024-10-27\r\n19256,1416,EMEA,fashion,online,82.10,1,0.242,none,2024-01-15\r\n19257,2369,LATAM,grocery,retail,51.68,8,0.122,coupon,2024-04-22\r\n19258,2084,LATAM,electronics,online,18.77,5,0.014,none,2024-06-23\r\n19259,1735,LATAM,sp
orts,online,38.87,5,0.004,none,2024-01-17\r\n19260,1475,LATAM,sports,online,173.35,7,0.186,coupon,2024-02-02\r\n19261,1442,EMEA,electronics,online,45.80,3,0.130,coupon,2024-05-07\r\n19262,1992,LATAM,home,online,86.90,6,0.082,bundle,2024-03-14\r\n19263,1959,EMEA,home,online,50.12,6,0.173,none,2024-04-20\r\n19264,1504,AMER,sports,online,84.12,5,0.040,bundle,2024-02-05\r\n19265,1331,AMER,fashion,retail,61.18,7,0.039,none,2024-03-20\r\n19266,2179,LATAM,sports,mobile,76.54,8,0.094,none,2024-02-23\r\n19267,2333,APAC,sports,retail,35.37,6,0.071,none,2024-09-01\r\n19268,2181,AMER,electronics,retail,131.69,4,0.143,none,2024-09-19\r\n19269,1056,LATAM,electronics,retail,18.05,8,0.053,none,2024-10-28\r\n19270,1963,AMER,grocery,retail,32.28,6,0.038,none,2024-03-11\r\n19271,1541,APAC,toys,online,77.19,3,0.128,none,2024-02-11\r\n19272,1884,APAC,fashion,retail,89.01,7,0.221,loyalty,2024-03-21\r\n19273,1196,APAC,electronics,online,57.35,1,0.129,none,2024-10-13\r\n19274,1160,LATAM,sports,online,43.09,4,0.199,loyalty,2024-10-16\r\n19275,1670,EMEA,toys,retail,89.23,3,0.233,loyalty,2024-01-15\r\n19276,2244,LATAM,grocery,online,141.91,2,0.105,loyalty,2024-10-05\r\n19277,2297,EMEA,home,online,87.66,3,0.054,none,2024-12-19\r\n19278,2184,APAC,home,retail,45.37,3,0.243,coupon,2024-11-06\r\n19279,1406,LATAM,toys,mobile,74.84,8,0.239,none,2024-07-23\r\n19280,1511,EMEA,grocery,retail,178.31,7,0.143,bundle,2024-07-16\r\n19281,1890,LATAM,home,partner,14.96,5,0.052,bundle,2024-09-20\r\n19282,1085,EMEA,electronics,mobile,71.55,8,0.219,none,2024-11-03\r\n19283,1379,EMEA,sports,retail,77.00,1,0.021,none,2024-10-12\r\n19284,2291,EMEA,toys,online,27.09,2,0.223,coupon,2024-08-23\r\n19285,2272,EMEA,grocery,retail,46.53,5,0.140,none,2024-09-09\r\n19286,2472,AMER,home,retail,61.08,3,0.075,coupon,2024-05-25\r\n19287,1684,EMEA,home,online,77.37,8,0.022,none,2024-02-20\r\n19288,2450,EMEA,electronics,partner,33.42,3,0.118,none,2024-01-24\r\n19289,2035,LATAM,grocery,online,65.82,4,0.049,bundle,2024-04-22\r\n192
90,2019,AMER,grocery,mobile,71.16,2,0.129,coupon,2024-08-14\r\n19291,2356,LATAM,fashion,retail,116.83,8,0.156,bundle,2024-08-15\r\n19292,1333,EMEA,grocery,retail,48.06,7,0.161,none,2024-08-19\r\n19293,2024,AMER,home,online,33.42,3,0.022,loyalty,2024-07-23\r\n19294,1323,EMEA,home,online,50.65,8,0.072,none,2024-02-18\r\n19295,1246,EMEA,sports,online,59.77,8,0.132,none,2024-09-24\r\n19296,1493,APAC,grocery,retail,67.86,5,0.020,bundle,2024-11-28\r\n19297,2109,EMEA,electronics,online,63.02,6,0.121,bundle,2024-05-28\r\n19298,2361,EMEA,grocery,retail,71.85,6,0.084,bundle,2024-07-27\r\n19299,1472,AMER,toys,partner,74.88,5,0.185,bundle,2024-02-01\r\n19300,1133,EMEA,home,online,78.23,4,0.035,none,2024-10-24\r\n19301,2498,LATAM,grocery,online,114.80,1,0.024,none,2024-09-25\r\n19302,1488,AMER,home,online,26.38,1,0.152,loyalty,2024-06-16\r\n19303,1604,EMEA,sports,mobile,117.66,1,0.227,loyalty,2024-08-15\r\n19304,1661,LATAM,sports,mobile,65.39,1,0.034,none,2024-12-01\r\n19305,2055,AMER,sports,retail,70.91,6,0.052,none,2024-07-02\r\n19306,2070,APAC,toys,online,39.11,6,0.165,none,2024-07-06\r\n19307,2169,EMEA,fashion,retail,62.81,2,0.027,none,2024-06-16\r\n19308,1149,LATAM,sports,online,38.88,8,0.004,none,2024-08-07\r\n19309,1415,AMER,electronics,online,50.95,2,0.146,none,2024-01-03\r\n19310,1928,AMER,electronics,online,64.78,5,0.183,none,2024-05-18\r\n19311,1465,AMER,home,online,48.13,7,0.079,loyalty,2024-05-10\r\n19312,1388,AMER,fashion,retail,30.48,4,0.133,loyalty,2024-03-18\r\n19313,1809,APAC,grocery,mobile,116.62,5,0.107,coupon,2024-10-17\r\n19314,1795,EMEA,home,online,20.74,7,0.143,coupon,2024-02-18\r\n19315,1505,EMEA,grocery,mobile,130.36,8,0.228,loyalty,2024-02-22\r\n19316,1110,LATAM,grocery,mobile,52.34,8,0.182,bundle,2024-05-11\r\n19317,1742,AMER,home,mobile,52.22,6,0.126,none,2024-08-03\r\n19318,1628,EMEA,toys,retail,137.81,8,0.224,coupon,2024-05-06\r\n19319,1042,LATAM,electronics,online,45.14,2,0.243,none,2024-04-23\r\n19320,2143,AMER,home,online,50.19,4,0.166,bundle,20
24-08-04\r\n19321,2046,APAC,fashion,retail,71.65,8,0.235,loyalty,2024-12-07\r\n19322,2010,APAC,sports,retail,115.61,4,0.199,bundle,2024-09-11\r\n19323,1165,AMER,toys,retail,32.73,6,0.067,none,2024-01-11\r\n19324,2018,AMER,fashion,retail,127.50,6,0.160,coupon,2024-10-26\r\n19325,1529,LATAM,electronics,retail,100.08,5,0.154,bundle,2024-04-07\r\n19326,1016,AMER,home,mobile,68.89,8,0.029,none,2024-02-08\r\n19327,2186,LATAM,grocery,mobile,41.91,2,0.039,none,2024-10-26\r\n19328,2040,LATAM,electronics,retail,104.23,7,0.202,loyalty,2024-09-19\r\n19329,2239,EMEA,grocery,online,107.10,8,0.097,none,2024-02-28\r\n19330,2347,AMER,grocery,mobile,67.27,4,0.219,loyalty,2024-01-11\r\n19331,2186,LATAM,grocery,mobile,132.68,2,0.099,coupon,2024-05-13\r\n19332,2403,LATAM,home,retail,43.74,5,0.244,bundle,2024-07-15\r\n19333,2316,EMEA,grocery,online,28.69,3,0.203,none,2024-02-23\r\n19334,2021,EMEA,grocery,online,90.39,8,0.098,none,2024-02-05\r\n19335,1746,LATAM,sports,retail,23.79,2,0.223,coupon,2024-06-07\r\n19336,1142,EMEA,grocery,online,33.44,6,0.202,none,2024-09-21\r\n19337,1914,EMEA,home,retail,105.54,2,0.083,loyalty,2024-12-23\r\n19338,1561,EMEA,home,partner,82.70,1,0.028,loyalty,2024-06-25\r\n19339,1857,LATAM,electronics,retail,79.81,2,0.233,bundle,2024-09-25\r\n19340,1953,EMEA,grocery,online,161.94,8,0.033,none,2024-09-05\r\n19341,1357,EMEA,grocery,retail,50.19,3,0.017,bundle,2024-09-22\r\n19342,1605,APAC,electronics,mobile,84.38,3,0.097,bundle,2024-02-25\r\n19343,2079,EMEA,toys,online,80.00,1,0.130,none,2024-01-13\r\n19344,1623,AMER,electronics,online,29.81,1,0.203,bundle,2024-10-07\r\n19345,2148,EMEA,home,mobile,50.47,3,0.118,none,2024-01-20\r\n19346,1559,EMEA,electronics,online,55.70,2,0.022,coupon,2024-10-09\r\n19347,1154,LATAM,electronics,retail,56.90,2,0.119,coupon,2024-03-04\r\n19348,1152,LATAM,grocery,mobile,76.23,3,0.057,coupon,2024-08-28\r\n19349,1537,LATAM,toys,online,67.07,4,0.088,coupon,2024-07-10\r\n19350,1941,AMER,electronics,mobile,75.99,8,0.008,none,2024-04-08\r\n
19351,1587,LATAM,sports,online,86.91,8,0.068,coupon,2024-10-23\r\n19352,1061,APAC,electronics,online,59.78,5,0.123,none,2024-05-20\r\n19353,1530,APAC,electronics,online,102.89,3,0.009,bundle,2024-06-20\r\n19354,2103,LATAM,home,partner,32.75,4,0.205,coupon,2024-06-22\r\n19355,2294,EMEA,grocery,online,27.95,6,0.119,none,2024-07-22\r\n19356,1879,EMEA,sports,retail,52.28,6,0.211,coupon,2024-07-12\r\n19357,2011,AMER,fashion,mobile,103.33,2,0.055,loyalty,2024-01-27\r\n19358,1300,EMEA,toys,online,32.54,5,0.159,none,2024-01-28\r\n19359,2118,AMER,grocery,mobile,102.06,7,0.090,bundle,2024-02-06\r\n19360,2403,LATAM,electronics,retail,113.16,2,0.110,none,2024-01-26\r\n19361,2333,APAC,toys,retail,73.37,6,0.107,coupon,2024-05-12\r\n19362,1003,APAC,toys,online,120.94,6,0.047,bundle,2024-06-02\r\n19363,1659,APAC,fashion,retail,54.80,4,0.133,loyalty,2024-07-06\r\n19364,2332,APAC,toys,online,104.07,8,0.197,none,2024-10-25\r\n19365,1095,APAC,grocery,retail,79.54,1,0.027,bundle,2024-01-04\r\n19366,2061,EMEA,electronics,online,72.91,6,0.168,bundle,2024-09-03\r\n19367,2303,EMEA,sports,online,71.28,5,0.157,none,2024-11-11\r\n19368,1023,APAC,electronics,mobile,138.36,1,0.153,bundle,2024-07-28\r\n19369,2350,APAC,grocery,online,128.17,6,0.246,bundle,2024-12-23\r\n19370,2128,EMEA,grocery,mobile,170.75,6,0.118,none,2024-05-08\r\n19371,2066,APAC,electronics,online,61.05,5,0.187,none,2024-10-14\r\n19372,1731,AMER,toys,online,20.28,5,0.028,coupon,2024-07-02\r\n19373,1315,AMER,grocery,mobile,151.71,5,0.184,none,2024-04-04\r\n19374,1156,APAC,home,online,136.07,2,0.040,none,2024-12-08\r\n19375,1957,AMER,grocery,mobile,24.64,5,0.122,bundle,2024-03-05\r\n19376,1958,APAC,electronics,online,33.15,2,0.042,none,2024-04-02\r\n19377,2269,EMEA,sports,online,72.14,4,0.041,loyalty,2024-01-28\r\n19378,2245,APAC,toys,online,94.32,7,0.232,none,2024-01-16\r\n19379,1985,AMER,electronics,mobile,102.92,7,0.169,coupon,2024-09-18\r\n19380,2383,APAC,toys,online,67.80,2,0.043,coupon,2024-07-09\r\n19381,2274,APAC,electron
ics,online,88.05,8,0.027,none,2024-04-03\r\n19382,1097,EMEA,home,online,44.14,5,0.106,none,2024-02-17\r\n19383,1312,EMEA,fashion,retail,50.01,8,0.211,bundle,2024-09-01\r\n19384,1332,APAC,fashion,online,40.41,8,0.070,loyalty,2024-02-12\r\n19385,2086,APAC,home,online,25.90,1,0.219,loyalty,2024-07-18\r\n19386,1767,AMER,electronics,online,142.09,7,0.147,coupon,2024-09-11\r\n19387,1143,LATAM,grocery,online,45.90,2,0.239,coupon,2024-04-10\r\n19388,2493,APAC,home,online,39.61,2,0.192,none,2024-01-07\r\n19389,2037,LATAM,grocery,retail,71.13,6,0.058,none,2024-10-14\r\n19390,2427,LATAM,electronics,mobile,79.18,8,0.206,coupon,2024-02-27\r\n19391,2455,AMER,grocery,retail,93.17,3,0.085,bundle,2024-07-19\r\n19392,1241,APAC,electronics,online,40.00,1,0.056,coupon,2024-07-09\r\n19393,2218,EMEA,fashion,online,69.60,3,0.076,none,2024-08-06\r\n19394,1669,AMER,electronics,retail,45.52,8,0.217,loyalty,2024-02-07\r\n19395,2041,LATAM,grocery,online,97.75,5,0.245,coupon,2024-03-05\r\n19396,1748,APAC,electronics,retail,88.23,1,0.121,none,2024-03-26\r\n19397,2383,APAC,electronics,mobile,47.15,4,0.057,loyalty,2024-05-26\r\n19398,2279,LATAM,fashion,partner,43.62,2,0.202,bundle,2024-06-19\r\n19399,1347,APAC,grocery,retail,17.86,1,0.217,coupon,2024-06-25\r\n19400,2239,EMEA,electronics,retail,63.69,2,0.067,bundle,2024-10-19\r\n19401,1691,LATAM,home,retail,49.48,2,0.013,coupon,2024-08-11\r\n19402,1911,LATAM,fashion,online,36.88,6,0.206,coupon,2024-04-15\r\n19403,1761,EMEA,fashion,online,63.68,3,0.247,bundle,2024-08-05\r\n19404,1503,APAC,electronics,retail,53.04,4,0.151,none,2024-10-15\r\n19405,1782,LATAM,toys,online,35.23,1,0.087,loyalty,2024-12-06\r\n19406,1514,LATAM,home,online,33.50,3,0.229,coupon,2024-08-24\r\n19407,1453,APAC,electronics,online,102.93,6,0.051,bundle,2024-11-16\r\n19408,2061,EMEA,toys,online,41.13,3,0.132,bundle,2024-10-09\r\n19409,1597,APAC,electronics,online,49.28,1,0.119,bundle,2024-07-06\r\n19410,2178,AMER,fashion,online,36.51,1,0.016,coupon,2024-09-28\r\n19411,2414,EMEA,ho
me,online,58.43,3,0.004,none,2024-03-12\r\n19412,1794,AMER,grocery,online,49.08,4,0.109,none,2024-07-28\r\n19413,1487,AMER,grocery,mobile,213.11,1,0.213,loyalty,2024-11-17\r\n19414,2415,AMER,home,mobile,58.89,8,0.078,none,2024-03-12\r\n19415,1679,APAC,sports,retail,56.53,7,0.057,none,2024-04-27\r\n19416,2470,EMEA,electronics,retail,62.77,5,0.235,bundle,2024-12-28\r\n19417,2014,EMEA,grocery,retail,33.08,4,0.221,loyalty,2024-09-18\r\n19418,1750,LATAM,home,retail,81.20,8,0.233,coupon,2024-12-04\r\n19419,1588,LATAM,grocery,retail,125.59,6,0.025,coupon,2024-10-17\r\n19420,1285,EMEA,grocery,mobile,45.07,2,0.108,none,2024-12-20\r\n19421,1137,APAC,grocery,mobile,98.87,2,0.213,none,2024-01-06\r\n19422,1905,APAC,grocery,retail,85.46,4,0.209,none,2024-12-24\r\n19423,1538,AMER,home,retail,44.05,5,0.071,none,2024-04-03\r\n19424,2357,EMEA,fashion,online,92.83,5,0.104,loyalty,2024-03-19\r\n19425,2382,LATAM,electronics,mobile,70.46,1,0.217,bundle,2024-09-27\r\n19426,2197,LATAM,grocery,online,37.05,5,0.215,none,2024-09-08\r\n19427,2066,APAC,electronics,mobile,112.60,6,0.011,none,2024-09-27\r\n19428,1515,EMEA,grocery,online,29.54,3,0.185,loyalty,2024-02-11\r\n19429,1819,AMER,home,retail,52.55,1,0.084,none,2024-06-25\r\n19430,2270,APAC,toys,retail,68.13,7,0.055,none,2024-01-14\r\n19431,2344,LATAM,electronics,online,52.75,8,0.217,none,2024-08-28\r\n19432,1967,EMEA,home,online,45.02,3,0.223,none,2024-08-16\r\n19433,1363,EMEA,home,online,16.87,4,0.101,bundle,2024-08-08\r\n19434,1639,APAC,toys,mobile,57.00,7,0.194,loyalty,2024-09-18\r\n19435,1762,LATAM,toys,online,55.43,5,0.107,coupon,2024-04-25\r\n19436,2407,EMEA,fashion,retail,70.77,7,0.220,none,2024-08-20\r\n19437,1514,LATAM,grocery,retail,48.29,2,0.031,none,2024-12-14\r\n19438,2001,EMEA,home,online,52.23,5,0.122,coupon,2024-08-14\r\n19439,1132,EMEA,home,partner,24.48,2,0.156,bundle,2024-09-10\r\n19440,2408,EMEA,toys,retail,63.27,7,0.230,loyalty,2024-09-11\r\n19441,1845,AMER,grocery,online,34.85,8,0.072,coupon,2024-10-13\r\n19442,2036,
APAC,home,online,63.32,3,0.158,coupon,2024-07-16\r\n19443,2017,EMEA,home,retail,52.68,2,0.098,none,2024-07-04\r\n19444,2130,EMEA,grocery,online,31.22,4,0.107,loyalty,2024-08-22\r\n19445,1365,LATAM,home,online,58.88,8,0.206,none,2024-12-20\r\n19446,1340,LATAM,home,retail,78.46,5,0.090,none,2024-09-23\r\n19447,2408,EMEA,sports,online,40.32,5,0.033,none,2024-08-26\r\n19448,1404,EMEA,electronics,online,56.74,6,0.051,none,2024-09-14\r\n19449,1984,LATAM,sports,retail,53.41,3,0.225,none,2024-03-09\r\n19450,1682,EMEA,sports,retail,44.27,6,0.117,none,2024-11-09\r\n19451,1133,EMEA,grocery,retail,99.39,1,0.148,coupon,2024-09-06\r\n19452,1552,EMEA,grocery,mobile,104.55,3,0.102,none,2024-02-02\r\n19453,2372,AMER,sports,online,59.48,2,0.228,coupon,2024-07-02\r\n19454,1967,EMEA,fashion,online,60.97,2,0.013,coupon,2024-05-13\r\n19455,1375,AMER,home,online,51.35,4,0.218,none,2024-08-21\r\n19456,1505,EMEA,grocery,partner,70.39,3,0.016,bundle,2024-02-16\r\n19457,1019,APAC,toys,retail,84.64,3,0.193,loyalty,2024-12-02\r\n19458,2322,AMER,toys,retail,73.48,7,0.242,none,2024-01-26\r\n19459,1986,LATAM,electronics,online,39.14,6,0.187,none,2024-04-04\r\n19460,1661,LATAM,electronics,online,73.40,5,0.094,none,2024-03-23\r\n19461,2261,EMEA,toys,online,47.52,4,0.087,bundle,2024-12-05\r\n19462,1068,APAC,home,retail,127.35,6,0.219,coupon,2024-01-21\r\n19463,1912,APAC,grocery,partner,25.24,7,0.159,coupon,2024-06-27\r\n19464,2474,LATAM,fashion,retail,53.69,2,0.042,none,2024-02-25\r\n19465,1772,EMEA,fashion,online,38.47,1,0.095,coupon,2024-09-12\r\n19466,2399,LATAM,toys,mobile,102.61,4,0.233,none,2024-09-19\r\n19467,1324,LATAM,grocery,mobile,135.99,7,0.110,coupon,2024-02-18\r\n19468,1770,AMER,electronics,mobile,96.26,4,0.243,none,2024-01-21\r\n19469,1703,AMER,electronics,mobile,39.45,7,0.036,none,2024-07-10\r\n19470,2409,APAC,fashion,retail,132.04,6,0.032,coupon,2024-06-24\r\n19471,1775,EMEA,grocery,online,62.87,5,0.182,none,2024-05-28\r\n19472,2103,LATAM,fashion,online,105.96,7,0.201,coupon,2024-08-
09\r\n19473,1196,APAC,sports,online,48.67,4,0.049,none,2024-12-26\r\n19474,1791,LATAM,fashion,retail,70.28,6,0.230,coupon,2024-11-15\r\n19475,2253,AMER,fashion,mobile,46.00,3,0.222,none,2024-06-05\r\n19476,1408,AMER,fashion,online,75.31,8,0.111,coupon,2024-12-13\r\n19477,1104,APAC,toys,online,48.59,3,0.142,none,2024-08-07\r\n19478,2014,EMEA,electronics,online,41.70,3,0.075,bundle,2024-06-11\r\n19479,2099,AMER,electronics,retail,134.08,3,0.189,none,2024-09-23\r\n19480,1551,APAC,grocery,retail,29.57,6,0.056,none,2024-09-24\r\n19481,2308,AMER,sports,online,48.03,5,0.072,none,2024-03-21\r\n19482,1173,LATAM,fashion,mobile,64.54,1,0.195,none,2024-05-22\r\n19483,1429,APAC,grocery,retail,26.26,2,0.111,none,2024-03-14\r\n19484,1899,APAC,fashion,mobile,40.77,6,0.096,none,2024-06-18\r\n19485,2398,EMEA,toys,online,174.53,5,0.204,none,2024-01-11\r\n19486,1885,EMEA,toys,retail,45.06,6,0.135,coupon,2024-05-11\r\n19487,2354,LATAM,electronics,mobile,47.14,7,0.096,none,2024-03-08\r\n19488,1080,LATAM,electronics,online,16.13,2,0.220,none,2024-12-26\r\n19489,1087,AMER,grocery,retail,84.62,2,0.062,none,2024-11-01\r\n19490,1906,APAC,fashion,online,101.21,8,0.076,none,2024-03-07\r\n19491,2178,AMER,home,mobile,35.87,6,0.043,none,2024-07-24\r\n19492,1778,LATAM,toys,partner,56.55,2,0.239,none,2024-07-05\r\n19493,1144,APAC,fashion,retail,70.60,8,0.055,none,2024-05-10\r\n19494,2000,APAC,electronics,retail,32.17,4,0.068,none,2024-09-17\r\n19495,1899,APAC,grocery,mobile,65.46,5,0.227,none,2024-01-01\r\n19496,2466,APAC,toys,online,39.46,8,0.144,none,2024-07-06\r\n19497,1022,APAC,grocery,mobile,27.94,2,0.168,none,2024-09-03\r\n19498,1918,EMEA,grocery,partner,71.83,8,0.018,none,2024-07-13\r\n19499,2137,LATAM,home,online,32.73,8,0.124,none,2024-04-27\r\n19500,2188,EMEA,fashion,mobile,33.99,2,0.093,loyalty,2024-07-07\r\n19501,1459,LATAM,electronics,retail,35.49,1,0.101,none,2024-01-04\r\n19502,1301,AMER,electronics,retail,143.06,3,0.069,none,2024-12-16\r\n19503,1140,LATAM,toys,online,46.63,2,0.022,co
upon,2024-06-03\r\n19504,1591,APAC,electronics,online,56.10,4,0.125,bundle,2024-12-15\r\n19505,1233,AMER,electronics,online,59.86,7,0.123,coupon,2024-11-05\r\n19506,1536,LATAM,toys,retail,73.02,1,0.071,none,2024-04-02\r\n19507,2041,LATAM,fashion,mobile,55.74,6,0.250,coupon,2024-04-08\r\n19508,1055,AMER,grocery,online,60.31,6,0.144,none,2024-08-21\r\n19509,1094,LATAM,sports,online,109.18,3,0.020,none,2024-12-02\r\n19510,1472,AMER,home,retail,93.60,1,0.133,coupon,2024-08-05\r\n19511,2065,EMEA,home,mobile,37.70,5,0.135,coupon,2024-10-12\r\n19512,2278,APAC,electronics,mobile,63.50,6,0.002,loyalty,2024-05-03\r\n19513,2254,LATAM,fashion,mobile,33.33,5,0.085,coupon,2024-10-11\r\n19514,1980,LATAM,electronics,mobile,66.34,1,0.158,none,2024-04-23\r\n19515,1752,APAC,grocery,retail,41.54,5,0.185,coupon,2024-07-10\r\n19516,1345,AMER,fashion,online,126.84,8,0.122,none,2024-02-23\r\n19517,2489,LATAM,sports,online,64.91,5,0.065,bundle,2024-08-04\r\n19518,1974,EMEA,fashion,online,63.88,3,0.169,bundle,2024-12-25\r\n19519,1518,AMER,fashion,online,46.24,8,0.206,none,2024-12-03\r\n19520,2081,APAC,electronics,online,24.39,6,0.167,bundle,2024-11-02\r\n19521,1442,EMEA,home,retail,93.49,2,0.012,coupon,2024-06-18\r\n19522,2238,AMER,toys,retail,54.32,2,0.223,coupon,2024-12-23\r\n19523,1316,APAC,home,retail,154.73,2,0.173,none,2024-05-24\r\n19524,1597,APAC,grocery,mobile,22.66,3,0.205,none,2024-06-18\r\n19525,2370,EMEA,sports,retail,78.57,3,0.212,coupon,2024-10-23\r\n19526,1015,AMER,home,online,96.10,3,0.060,none,2024-01-17\r\n19527,1946,AMER,grocery,retail,64.27,3,0.174,none,2024-03-22\r\n19528,1459,LATAM,electronics,mobile,39.72,7,0.207,none,2024-08-09\r\n19529,1964,EMEA,electronics,online,20.73,1,0.111,coupon,2024-08-27\r\n19530,2263,AMER,fashion,online,75.41,1,0.062,none,2024-02-04\r\n19531,1810,LATAM,grocery,retail,44.92,6,0.076,coupon,2024-07-08\r\n19532,1893,APAC,grocery,mobile,83.22,2,0.019,none,2024-12-09\r\n19533,2457,EMEA,fashion,online,41.45,7,0.128,coupon,2024-01-19\r\n19534,1302,
LATAM,electronics,retail,50.42,2,0.248,coupon,2024-12-08\r\n19535,2312,APAC,home,retail,49.11,6,0.170,coupon,2024-09-12\r\n19536,2322,AMER,home,retail,69.71,1,0.070,none,2024-11-04\r\n19537,1911,LATAM,grocery,partner,40.05,7,0.147,none,2024-11-14\r\n19538,2497,AMER,home,retail,85.56,2,0.049,bundle,2024-10-28\r\n19539,1436,APAC,electronics,online,31.29,8,0.189,bundle,2024-03-28\r\n19540,1004,LATAM,toys,retail,69.37,7,0.121,none,2024-10-14\r\n19541,1315,AMER,home,retail,31.50,2,0.227,coupon,2024-08-05\r\n19542,1765,EMEA,grocery,mobile,39.36,2,0.219,bundle,2024-12-24\r\n19543,1703,AMER,home,online,25.10,5,0.062,none,2024-09-16\r\n19544,1788,AMER,toys,retail,91.46,2,0.173,coupon,2024-10-16\r\n19545,1474,LATAM,grocery,retail,206.06,1,0.042,none,2024-06-13\r\n19546,1627,LATAM,grocery,retail,14.30,4,0.174,bundle,2024-12-13\r\n19547,2387,EMEA,home,retail,59.46,5,0.084,none,2024-10-15\r\n19548,1749,LATAM,grocery,online,62.81,1,0.012,none,2024-04-25\r\n19549,2373,LATAM,grocery,online,42.77,2,0.157,coupon,2024-08-09\r\n19550,2225,EMEA,sports,retail,57.75,3,0.232,coupon,2024-04-05\r\n19551,1589,AMER,home,retail,87.67,4,0.020,coupon,2024-09-24\r\n19552,2172,EMEA,electronics,online,46.22,1,0.078,none,2024-09-19\r\n19553,1066,AMER,grocery,mobile,46.37,7,0.211,loyalty,2024-02-08\r\n19554,1258,EMEA,home,retail,33.16,7,0.084,none,2024-01-14\r\n19555,1838,AMER,grocery,online,60.65,6,0.007,coupon,2024-09-13\r\n19556,1219,LATAM,fashion,mobile,75.32,1,0.215,coupon,2024-09-11\r\n19557,1514,LATAM,sports,online,40.58,1,0.178,none,2024-11-17\r\n19558,1719,LATAM,sports,retail,51.05,1,0.010,loyalty,2024-03-22\r\n19559,1009,APAC,home,partner,45.32,3,0.184,none,2024-12-10\r\n19560,1529,LATAM,grocery,online,148.59,7,0.106,coupon,2024-04-09\r\n19561,1859,AMER,sports,retail,73.67,1,0.080,none,2024-03-25\r\n19562,1131,APAC,fashion,online,43.90,4,0.071,none,2024-02-06\r\n19563,1283,APAC,sports,partner,88.56,7,0.220,none,2024-04-07\r\n19564,1004,LATAM,electronics,retail,103.25,2,0.107,none,2024-05-24\
r\n19565,1438,APAC,electronics,retail,80.31,2,0.018,none,2024-07-25\r\n19566,2076,AMER,grocery,online,112.79,1,0.097,none,2024-01-06\r\n19567,1053,AMER,grocery,retail,95.84,2,0.054,none,2024-04-27\r\n19568,2037,LATAM,electronics,retail,59.38,2,0.166,coupon,2024-01-03\r\n19569,1150,LATAM,home,online,34.26,4,0.247,none,2024-02-05\r\n19570,1400,EMEA,fashion,online,66.98,4,0.056,coupon,2024-05-17\r\n19571,1899,APAC,fashion,online,119.07,5,0.131,none,2024-04-12\r\n19572,1241,APAC,electronics,online,59.39,2,0.202,none,2024-12-12\r\n19573,1396,EMEA,grocery,online,133.45,6,0.188,coupon,2024-03-01\r\n19574,1126,LATAM,sports,partner,20.88,5,0.123,coupon,2024-06-10\r\n19575,1281,AMER,grocery,retail,37.83,8,0.004,none,2024-02-23\r\n19576,1495,LATAM,toys,mobile,65.72,6,0.119,coupon,2024-07-12\r\n19577,1921,LATAM,toys,retail,27.87,8,0.069,none,2024-05-22\r\n19578,2294,EMEA,electronics,retail,58.77,6,0.011,coupon,2024-12-25\r\n19579,2046,APAC,home,online,71.74,3,0.166,coupon,2024-11-13\r\n19580,1317,EMEA,electronics,retail,156.40,1,0.067,none,2024-10-03\r\n19581,1596,EMEA,sports,mobile,35.72,1,0.041,none,2024-10-06\r\n19582,2405,AMER,grocery,online,70.80,3,0.052,coupon,2024-01-04\r\n19583,1407,LATAM,fashion,online,35.70,5,0.157,none,2024-02-17\r\n19584,2007,LATAM,electronics,online,74.43,2,0.066,none,2024-12-01\r\n19585,1834,AMER,fashion,retail,108.22,2,0.103,none,2024-06-18\r\n19586,1727,APAC,fashion,online,42.16,3,0.208,loyalty,2024-05-21\r\n19587,1130,LATAM,electronics,online,31.84,1,0.035,coupon,2024-05-13\r\n19588,2006,APAC,fashion,online,112.06,5,0.129,none,2024-07-10\r\n19589,1199,APAC,fashion,partner,16.78,6,0.013,none,2024-05-03\r\n19590,1433,EMEA,toys,mobile,112.22,7,0.100,none,2024-01-02\r\n19591,1149,LATAM,grocery,online,30.30,3,0.020,none,2024-12-26\r\n19592,2256,AMER,electronics,retail,50.48,3,0.021,loyalty,2024-08-10\r\n19593,1799,EMEA,grocery,online,37.00,3,0.017,none,2024-11-03\r\n19594,1573,AMER,electronics,retail,75.42,8,0.032,bundle,2024-09-18\r\n19595,1596,EME
A,grocery,online,31.40,4,0.079,none,2024-04-15\r\n19596,2031,AMER,sports,partner,86.72,7,0.006,none,2024-05-13\r\n19597,1511,EMEA,electronics,retail,97.10,3,0.094,coupon,2024-09-03\r\n19598,1625,EMEA,fashion,retail,75.18,4,0.248,none,2024-04-18\r\n19599,1223,LATAM,electronics,online,47.00,8,0.191,none,2024-07-18\r\n19600,1376,EMEA,electronics,retail,67.54,8,0.084,none,2024-01-08\r\n19601,1987,AMER,home,retail,62.88,7,0.227,bundle,2024-07-27\r\n19602,2050,APAC,fashion,online,122.91,8,0.134,none,2024-11-21\r\n19603,1142,EMEA,electronics,online,88.94,3,0.121,loyalty,2024-10-18\r\n19604,2021,EMEA,electronics,online,53.12,6,0.015,coupon,2024-01-19\r\n19605,2185,EMEA,grocery,retail,45.07,5,0.128,none,2024-02-14\r\n19606,1200,EMEA,toys,retail,50.06,6,0.227,none,2024-06-02\r\n19607,2390,AMER,home,retail,102.28,4,0.019,none,2024-06-10\r\n19608,1787,APAC,home,online,101.86,2,0.000,none,2024-04-05\r\n19609,1509,AMER,toys,partner,188.30,7,0.168,loyalty,2024-04-11\r\n19610,2028,APAC,home,retail,71.19,7,0.063,none,2024-04-16\r\n19611,1861,AMER,electronics,retail,36.20,5,0.044,none,2024-09-28\r\n19612,2282,EMEA,grocery,mobile,33.04,7,0.143,none,2024-02-11\r\n19613,1296,LATAM,sports,retail,44.14,7,0.072,none,2024-05-05\r\n19614,1281,AMER,electronics,retail,37.44,8,0.109,none,2024-11-10\r\n19615,2400,EMEA,grocery,online,128.72,5,0.202,none,2024-06-08\r\n19616,1849,EMEA,home,mobile,40.25,1,0.007,none,2024-05-14\r\n19617,2057,APAC,home,retail,113.38,3,0.228,none,2024-02-02\r\n19618,1093,APAC,grocery,online,64.23,2,0.147,bundle,2024-09-15\r\n19619,1104,APAC,sports,online,39.29,8,0.200,none,2024-09-15\r\n19620,2481,APAC,grocery,partner,25.75,2,0.245,none,2024-12-10\r\n19621,1202,APAC,grocery,retail,60.36,6,0.041,none,2024-11-13\r\n19622,2090,AMER,toys,online,20.82,2,0.006,bundle,2024-03-09\r\n19623,2471,APAC,electronics,retail,24.61,6,0.239,none,2024-01-24\r\n19624,2293,LATAM,electronics,online,84.15,4,0.111,none,2024-10-19\r\n19625,2159,AMER,electronics,online,23.44,4,0.136,bundle,2024
-12-03\r\n19626,1493,APAC,fashion,online,181.06,2,0.149,loyalty,2024-01-02\r\n19627,2038,LATAM,toys,partner,25.97,6,0.008,none,2024-09-20\r\n19628,2485,AMER,sports,online,90.18,3,0.220,coupon,2024-05-27\r\n19629,2427,LATAM,fashion,partner,58.09,1,0.163,none,2024-12-28\r\n19630,2366,APAC,grocery,retail,70.67,1,0.045,none,2024-07-23\r\n19631,1060,LATAM,grocery,retail,95.98,6,0.090,bundle,2024-06-02\r\n19632,2485,AMER,toys,mobile,21.09,2,0.055,none,2024-10-02\r\n19633,2307,LATAM,grocery,mobile,46.71,5,0.043,bundle,2024-10-16\r\n19634,2312,APAC,electronics,retail,53.03,5,0.005,none,2024-03-12\r\n19635,1825,AMER,grocery,online,113.93,2,0.049,bundle,2024-06-09\r\n19636,2423,LATAM,sports,online,67.16,2,0.121,none,2024-09-14\r\n19637,1178,EMEA,home,online,42.16,5,0.066,none,2024-01-01\r\n19638,1678,LATAM,fashion,online,32.67,2,0.043,none,2024-09-13\r\n19639,1674,LATAM,grocery,online,44.20,7,0.191,none,2024-03-26\r\n19640,1795,EMEA,electronics,online,44.11,4,0.234,none,2024-01-05\r\n19641,1665,AMER,grocery,retail,64.86,1,0.053,none,2024-08-27\r\n19642,2346,LATAM,fashion,online,37.94,8,0.084,none,2024-08-04\r\n19643,2031,AMER,toys,mobile,192.54,6,0.109,none,2024-06-15\r\n19644,1374,APAC,sports,partner,59.61,8,0.195,none,2024-04-07\r\n19645,2366,APAC,electronics,retail,39.43,4,0.145,coupon,2024-09-09\r\n19646,2041,LATAM,fashion,retail,32.98,8,0.058,none,2024-12-01\r\n19647,1454,APAC,toys,retail,64.53,5,0.207,coupon,2024-08-03\r\n19648,1309,EMEA,grocery,mobile,92.46,3,0.199,coupon,2024-05-06\r\n19649,1504,AMER,fashion,retail,53.70,7,0.018,none,2024-03-18\r\n19650,1910,LATAM,home,online,29.85,5,0.169,bundle,2024-11-24\r\n19651,1181,LATAM,electronics,online,65.34,3,0.240,none,2024-04-16\r\n19652,2324,AMER,home,online,31.15,7,0.225,loyalty,2024-04-09\r\n19653,1655,LATAM,grocery,online,31.48,5,0.223,none,2024-01-06\r\n19654,2065,EMEA,home,online,69.54,7,0.024,none,2024-08-17\r\n19655,1589,AMER,electronics,retail,122.36,7,0.183,coupon,2024-02-01\r\n19656,1370,APAC,electronics,online
,58.21,3,0.046,none,2024-06-03\r\n19657,2133,AMER,grocery,retail,38.31,1,0.233,loyalty,2024-10-25\r\n19658,2099,AMER,home,mobile,56.56,4,0.104,none,2024-11-16\r\n19659,2101,APAC,home,online,38.83,4,0.210,coupon,2024-04-11\r\n19660,1904,APAC,home,retail,23.62,5,0.152,coupon,2024-05-27\r\n19661,2048,LATAM,grocery,mobile,28.87,3,0.019,bundle,2024-03-16\r\n19662,2331,APAC,grocery,retail,38.99,2,0.116,none,2024-11-01\r\n19663,2259,AMER,grocery,mobile,58.78,7,0.060,bundle,2024-05-15\r\n19664,1688,LATAM,grocery,online,72.90,6,0.092,coupon,2024-01-19\r\n19665,1442,EMEA,grocery,retail,39.85,2,0.099,coupon,2024-09-12\r\n19666,2279,LATAM,grocery,retail,128.64,1,0.002,none,2024-05-01\r\n19667,1332,APAC,home,retail,92.70,3,0.140,loyalty,2024-10-07\r\n19668,1175,AMER,home,retail,132.41,4,0.061,none,2024-06-23\r\n19669,1382,LATAM,electronics,online,75.71,7,0.207,coupon,2024-07-16\r\n19670,1003,APAC,electronics,online,52.00,8,0.236,none,2024-07-08\r\n19671,1524,LATAM,sports,online,59.93,7,0.038,bundle,2024-02-17\r\n19672,1345,AMER,toys,online,92.27,3,0.135,bundle,2024-05-21\r\n19673,1726,EMEA,fashion,retail,49.00,5,0.242,bundle,2024-05-23\r\n19674,2027,EMEA,grocery,online,73.46,2,0.247,none,2024-09-14\r\n19675,1338,EMEA,grocery,online,38.87,4,0.193,loyalty,2024-05-05\r\n19676,1821,LATAM,toys,mobile,20.80,6,0.114,coupon,2024-02-14\r\n19677,1341,EMEA,sports,online,16.29,7,0.233,none,2024-10-14\r\n19678,2164,AMER,grocery,mobile,57.46,4,0.112,none,2024-08-23\r\n19679,1203,AMER,electronics,retail,101.71,6,0.056,coupon,2024-09-07\r\n19680,1860,EMEA,toys,online,161.33,8,0.191,none,2024-07-24\r\n19681,1562,AMER,toys,online,28.23,4,0.163,none,2024-07-12\r\n19682,1555,AMER,electronics,retail,91.85,1,0.022,none,2024-11-06\r\n19683,1661,LATAM,grocery,online,83.48,8,0.156,none,2024-06-03\r\n19684,1091,EMEA,grocery,retail,69.32,2,0.113,loyalty,2024-09-09\r\n19685,1351,APAC,electronics,partner,38.51,3,0.054,none,2024-07-17\r\n19686,2447,AMER,home,online,65.40,2,0.146,none,2024-04-01\r\n19687,1660
,AMER,toys,online,59.75,1,0.129,coupon,2024-06-14\r\n19688,1767,AMER,fashion,partner,50.93,1,0.249,none,2024-03-24\r\n19689,1243,AMER,electronics,online,24.68,6,0.169,none,2024-01-04\r\n19690,1871,APAC,fashion,online,76.99,3,0.203,none,2024-11-14\r\n19691,2216,AMER,home,retail,46.66,6,0.196,coupon,2024-06-17\r\n19692,1445,APAC,sports,retail,38.01,5,0.187,coupon,2024-10-26\r\n19693,1813,EMEA,fashion,online,115.31,2,0.131,none,2024-06-02\r\n19694,1754,EMEA,grocery,retail,80.61,7,0.056,coupon,2024-08-03\r\n19695,1303,LATAM,toys,retail,41.07,8,0.012,none,2024-03-26\r\n19696,1144,APAC,toys,online,56.13,3,0.051,bundle,2024-09-07\r\n19697,2102,APAC,electronics,retail,67.30,7,0.099,coupon,2024-11-19\r\n19698,1173,LATAM,fashion,online,63.78,1,0.117,bundle,2024-04-15\r\n19699,2407,EMEA,grocery,online,111.54,8,0.020,bundle,2024-11-12\r\n19700,1887,LATAM,home,retail,35.02,8,0.098,none,2024-05-16\r\n19701,1467,LATAM,grocery,online,58.47,8,0.038,bundle,2024-07-22\r\n19702,2467,AMER,grocery,online,35.85,2,0.246,none,2024-05-22\r\n19703,1786,APAC,electronics,mobile,25.61,2,0.024,none,2024-09-24\r\n19704,1052,LATAM,home,online,19.09,7,0.110,none,2024-04-15\r\n19705,1283,APAC,home,retail,89.89,7,0.091,none,2024-06-02\r\n19706,1552,EMEA,electronics,online,68.64,4,0.200,none,2024-01-16\r\n19707,2485,AMER,grocery,online,23.10,8,0.052,none,2024-01-09\r\n19708,2205,AMER,grocery,mobile,101.52,4,0.164,bundle,2024-12-27\r\n19709,1169,LATAM,fashion,online,42.06,2,0.041,none,2024-01-04\r\n19710,1633,EMEA,grocery,retail,50.05,8,0.225,none,2024-04-22\r\n19711,1475,LATAM,grocery,retail,139.51,2,0.020,coupon,2024-08-01\r\n19712,2222,LATAM,sports,retail,39.89,8,0.188,coupon,2024-09-27\r\n19713,1933,EMEA,grocery,partner,34.93,3,0.039,none,2024-01-03\r\n19714,2205,AMER,electronics,online,17.35,1,0.150,none,2024-09-13\r\n19715,2120,AMER,fashion,online,47.55,8,0.007,none,2024-11-23\r\n19716,1288,LATAM,home,retail,121.94,1,0.222,none,2024-11-01\r\n19717,1674,LATAM,electronics,retail,74.19,3,0.041,none,2
024-10-14\r\n19718,2324,AMER,grocery,retail,62.49,2,0.018,none,2024-01-04\r\n19719,1203,AMER,sports,online,95.18,8,0.073,none,2024-07-04\r\n19720,1462,LATAM,home,retail,103.69,4,0.049,coupon,2024-04-04\r\n19721,1954,APAC,toys,partner,156.47,3,0.123,none,2024-09-02\r\n19722,1532,APAC,grocery,retail,27.81,4,0.056,loyalty,2024-10-18\r\n19723,2463,AMER,fashion,mobile,94.13,6,0.163,bundle,2024-03-18\r\n19724,2010,APAC,electronics,online,46.68,4,0.012,none,2024-04-27\r\n19725,1472,AMER,electronics,retail,32.48,2,0.104,coupon,2024-11-11\r\n19726,2418,AMER,fashion,online,153.54,6,0.173,coupon,2024-12-05\r\n19727,1095,APAC,grocery,online,20.80,6,0.197,coupon,2024-01-05\r\n19728,2207,APAC,sports,online,71.18,8,0.029,none,2024-08-02\r\n19729,1673,AMER,home,online,70.37,5,0.084,bundle,2024-11-21\r\n19730,1812,EMEA,grocery,online,41.83,4,0.011,coupon,2024-01-03\r\n19731,1548,EMEA,sports,mobile,36.60,1,0.075,none,2024-08-18\r\n19732,2241,APAC,grocery,retail,11.91,8,0.001,bundle,2024-01-03\r\n19733,2465,EMEA,home,retail,38.22,6,0.081,none,2024-08-17\r\n19734,1012,LATAM,grocery,retail,78.72,3,0.090,none,2024-11-06\r\n19735,1316,APAC,home,mobile,72.60,3,0.173,loyalty,2024-01-22\r\n19736,1008,AMER,home,mobile,64.35,1,0.002,bundle,2024-02-05\r\n19737,1284,APAC,grocery,online,89.27,8,0.200,none,2024-06-20\r\n19738,1935,EMEA,grocery,retail,50.82,6,0.018,coupon,2024-10-12\r\n19739,1261,APAC,electronics,mobile,58.53,3,0.063,bundle,2024-03-27\r\n19740,1215,LATAM,electronics,retail,27.68,5,0.167,none,2024-01-06\r\n19741,1387,AMER,sports,retail,47.59,2,0.010,loyalty,2024-03-12\r\n19742,1515,EMEA,toys,online,43.74,3,0.169,none,2024-01-24\r\n19743,1290,EMEA,grocery,retail,85.91,4,0.220,coupon,2024-10-23\r\n19744,1918,EMEA,grocery,online,41.81,5,0.216,none,2024-03-27\r\n19745,2190,LATAM,grocery,retail,146.90,3,0.093,none,2024-03-10\r\n19746,2412,LATAM,grocery,mobile,57.12,2,0.015,loyalty,2024-02-11\r\n19747,2071,APAC,electronics,retail,47.70,6,0.092,none,2024-09-11\r\n19748,2173,LATAM,grocery,o
nline,29.28,1,0.171,bundle,2024-06-07\r\n19749,1359,LATAM,sports,online,39.03,7,0.015,bundle,2024-07-20\r\n19750,2098,AMER,home,online,55.04,8,0.133,none,2024-04-13\r\n19751,1987,AMER,sports,online,44.88,8,0.091,coupon,2024-03-09\r\n19752,1174,APAC,electronics,online,92.35,8,0.086,none,2024-01-11\r\n19753,1734,AMER,grocery,retail,50.51,7,0.235,loyalty,2024-08-20\r\n19754,2269,EMEA,sports,retail,94.57,4,0.142,none,2024-12-01\r\n19755,1561,EMEA,fashion,mobile,89.97,8,0.244,coupon,2024-02-14\r\n19756,1700,EMEA,home,online,114.05,1,0.228,none,2024-04-23\r\n19757,2337,AMER,home,online,37.80,3,0.102,none,2024-05-12\r\n19758,2218,EMEA,home,retail,57.84,7,0.084,bundle,2024-05-11\r\n19759,1314,AMER,sports,online,44.55,2,0.076,coupon,2024-08-22\r\n19760,1023,APAC,sports,online,66.25,1,0.048,coupon,2024-05-01\r\n19761,1800,APAC,grocery,retail,34.35,8,0.067,none,2024-12-24\r\n19762,1287,AMER,grocery,online,44.94,5,0.235,none,2024-06-17\r\n19763,2234,LATAM,toys,retail,33.46,7,0.201,loyalty,2024-12-09\r\n19764,1444,EMEA,sports,retail,35.81,1,0.227,none,2024-11-15\r\n19765,2239,EMEA,sports,online,42.24,4,0.105,none,2024-10-28\r\n19766,2106,LATAM,toys,mobile,80.12,8,0.158,none,2024-07-02\r\n19767,1477,APAC,toys,retail,84.83,4,0.062,none,2024-12-09\r\n19768,1068,APAC,fashion,mobile,67.26,3,0.197,none,2024-10-02\r\n19769,1408,AMER,sports,mobile,20.60,8,0.181,none,2024-12-15\r\n19770,1296,LATAM,electronics,retail,52.65,4,0.168,none,2024-12-19\r\n19771,1169,LATAM,grocery,online,139.18,5,0.187,none,2024-09-07\r\n19772,2201,AMER,electronics,online,154.87,3,0.093,none,2024-05-02\r\n19773,1223,LATAM,grocery,mobile,29.28,7,0.231,bundle,2024-09-05\r\n19774,1637,APAC,home,online,54.67,7,0.057,coupon,2024-04-07\r\n19775,1540,LATAM,home,retail,52.97,6,0.072,none,2024-11-16\r\n19776,1598,EMEA,electronics,mobile,51.48,8,0.018,none,2024-09-01\r\n19777,2437,LATAM,grocery,online,45.91,2,0.054,none,2024-06-04\r\n19778,1364,EMEA,sports,retail,50.42,2,0.154,loyalty,2024-02-20\r\n19779,1614,EMEA,sports,
online,24.57,3,0.057,none,2024-09-08\r\n19780,1318,LATAM,electronics,retail,80.32,2,0.059,loyalty,2024-11-05\r\n19781,1698,EMEA,grocery,online,52.38,5,0.117,none,2024-06-11\r\n19782,1736,AMER,fashion,mobile,35.15,5,0.042,none,2024-09-22\r\n19783,1605,APAC,fashion,retail,80.31,1,0.077,none,2024-11-05\r\n19784,2066,APAC,grocery,online,154.21,6,0.086,none,2024-03-26\r\n19785,1108,EMEA,grocery,retail,48.09,3,0.185,none,2024-04-06\r\n19786,2409,APAC,fashion,online,50.61,2,0.101,none,2024-11-13\r\n19787,2153,APAC,toys,retail,28.73,6,0.180,loyalty,2024-03-23\r\n19788,1091,EMEA,home,retail,29.48,7,0.088,none,2024-12-20\r\n19789,1220,LATAM,grocery,online,41.23,7,0.099,none,2024-06-14\r\n19790,2138,APAC,electronics,mobile,26.06,4,0.129,none,2024-06-24\r\n19791,1267,EMEA,toys,retail,49.43,3,0.047,none,2024-12-07\r\n19792,1731,AMER,toys,partner,33.85,2,0.149,coupon,2024-03-05\r\n19793,2472,AMER,home,online,75.12,8,0.165,none,2024-10-25\r\n19794,2074,AMER,home,retail,89.94,3,0.029,none,2024-04-02\r\n19795,1910,LATAM,sports,mobile,41.75,5,0.202,coupon,2024-08-23\r\n19796,1051,EMEA,electronics,online,67.73,5,0.207,loyalty,2024-11-01\r\n19797,1262,APAC,fashion,retail,17.84,8,0.109,none,2024-02-14\r\n19798,2121,APAC,sports,mobile,116.45,7,0.120,bundle,2024-01-14\r\n19799,2079,EMEA,toys,retail,52.99,1,0.070,bundle,2024-12-04\r\n19800,1080,LATAM,sports,retail,121.66,3,0.115,none,2024-06-13\r\n19801,1223,LATAM,sports,online,97.87,6,0.212,none,2024-06-27\r\n19802,1495,LATAM,fashion,retail,53.36,1,0.013,none,2024-04-21\r\n19803,2455,AMER,home,retail,136.68,6,0.191,coupon,2024-08-09\r\n19804,1659,APAC,fashion,online,69.27,6,0.095,coupon,2024-07-07\r\n19805,1765,EMEA,grocery,online,54.52,8,0.036,none,2024-08-01\r\n19806,1246,EMEA,toys,retail,28.74,4,0.212,coupon,2024-07-17\r\n19807,1745,APAC,sports,retail,24.26,4,0.028,bundle,2024-08-27\r\n19808,2186,LATAM,fashion,retail,44.62,2,0.186,none,2024-11-01\r\n19809,1076,LATAM,sports,mobile,24.57,8,0.210,coupon,2024-07-20\r\n19810,1200,EMEA,groce
ry,retail,78.97,4,0.126,bundle,2024-06-11\r\n19811,2123,AMER,fashion,online,92.43,3,0.125,none,2024-07-17\r\n19812,1846,APAC,home,retail,48.92,8,0.210,coupon,2024-08-17\r\n19813,2374,LATAM,grocery,online,64.74,3,0.103,none,2024-12-25\r\n19814,2289,APAC,home,online,95.58,8,0.146,none,2024-02-09\r\n19815,1942,APAC,electronics,retail,80.65,6,0.033,none,2024-02-28\r\n19816,1205,APAC,grocery,retail,30.02,1,0.033,none,2024-01-10\r\n19817,1661,LATAM,sports,online,33.84,7,0.128,coupon,2024-09-14\r\n19818,1677,EMEA,fashion,mobile,115.95,4,0.032,loyalty,2024-11-09\r\n19819,1032,AMER,toys,mobile,20.88,2,0.070,none,2024-08-25\r\n19820,2205,AMER,fashion,online,45.82,3,0.103,loyalty,2024-11-11\r\n19821,2014,EMEA,electronics,online,38.75,7,0.247,none,2024-11-16\r\n19822,1366,APAC,toys,online,75.59,7,0.104,none,2024-06-16\r\n19823,1965,LATAM,electronics,online,59.94,3,0.056,none,2024-10-05\r\n19824,1786,APAC,toys,online,74.74,7,0.001,none,2024-10-03\r\n19825,2062,EMEA,toys,mobile,46.05,1,0.146,none,2024-11-18\r\n19826,2381,AMER,sports,online,107.33,2,0.059,none,2024-07-11\r\n19827,2223,EMEA,electronics,retail,36.43,6,0.054,none,2024-06-03\r\n19828,1159,LATAM,fashion,online,61.49,4,0.229,none,2024-01-10\r\n19829,1724,LATAM,home,online,42.81,1,0.103,none,2024-05-07\r\n19830,2312,APAC,fashion,partner,39.40,3,0.169,loyalty,2024-10-20\r\n19831,1718,EMEA,home,retail,54.80,2,0.016,bundle,2024-09-24\r\n19832,1893,APAC,electronics,mobile,44.53,5,0.178,bundle,2024-01-17\r\n19833,1485,APAC,grocery,mobile,54.55,6,0.088,bundle,2024-07-21\r\n19834,1168,APAC,electronics,retail,168.31,5,0.080,none,2024-04-26\r\n19835,1760,LATAM,home,online,67.40,8,0.125,coupon,2024-08-18\r\n19836,1279,EMEA,grocery,online,210.49,2,0.227,none,2024-11-20\r\n19837,2087,LATAM,toys,online,51.47,2,0.078,loyalty,2024-05-25\r\n19838,2150,APAC,grocery,online,89.86,3,0.137,none,2024-09-10\r\n19839,1222,AMER,electronics,online,56.08,5,0.055,coupon,2024-10-25\r\n19840,2371,LATAM,home,online,57.51,7,0.097,bundle,2024-05-03\r\n1
9841,1688,LATAM,grocery,retail,51.63,3,0.158,bundle,2024-08-27\r\n19842,1630,APAC,toys,retail,90.72,5,0.141,coupon,2024-11-14\r\n19843,1388,AMER,sports,online,22.88,6,0.220,coupon,2024-01-02\r\n19844,1654,EMEA,home,retail,116.68,8,0.236,bundle,2024-08-07\r\n19845,1187,AMER,electronics,retail,36.81,1,0.137,none,2024-11-23\r\n19846,1652,APAC,sports,online,73.94,5,0.042,none,2024-09-14\r\n19847,2240,LATAM,home,retail,101.28,5,0.074,none,2024-04-02\r\n19848,1188,LATAM,fashion,retail,75.03,3,0.066,coupon,2024-02-06\r\n19849,1134,APAC,home,retail,124.40,1,0.156,coupon,2024-12-19\r\n19850,1873,EMEA,fashion,mobile,33.03,2,0.134,coupon,2024-09-02\r\n19851,1021,AMER,electronics,online,62.48,6,0.027,none,2024-03-20\r\n19852,1262,APAC,grocery,online,101.32,2,0.014,none,2024-05-19\r\n19853,1160,LATAM,grocery,retail,75.09,1,0.168,none,2024-02-13\r\n19854,2488,EMEA,home,online,28.64,8,0.123,loyalty,2024-11-11\r\n19855,1271,EMEA,grocery,online,53.46,4,0.211,none,2024-12-28\r\n19856,2028,APAC,fashion,retail,26.76,5,0.128,coupon,2024-12-05\r\n19857,1152,LATAM,electronics,retail,55.60,4,0.164,none,2024-02-19\r\n19858,2382,LATAM,sports,retail,74.44,7,0.002,none,2024-08-16\r\n19859,1968,EMEA,home,mobile,44.16,1,0.138,coupon,2024-10-08\r\n19860,1639,APAC,fashion,online,103.46,2,0.057,bundle,2024-08-01\r\n19861,2083,LATAM,fashion,online,90.00,3,0.035,coupon,2024-06-06\r\n19862,2354,LATAM,fashion,mobile,41.47,6,0.228,loyalty,2024-06-28\r\n19863,1712,LATAM,toys,online,50.48,8,0.028,coupon,2024-12-04\r\n19864,1007,APAC,electronics,retail,75.63,7,0.106,none,2024-12-05\r\n19865,1596,EMEA,fashion,online,141.56,4,0.206,loyalty,2024-02-18\r\n19866,1272,AMER,home,online,57.18,3,0.244,bundle,2024-08-18\r\n19867,1530,APAC,home,online,110.11,7,0.167,none,2024-02-24\r\n19868,1114,APAC,electronics,mobile,76.28,3,0.186,bundle,2024-01-14\r\n19869,1482,AMER,fashion,online,61.63,1,0.127,none,2024-11-08\r\n19870,1960,EMEA,grocery,online,52.24,7,0.082,none,2024-03-06\r\n19871,2232,EMEA,grocery,retail,74.03,5
,0.240,bundle,2024-12-08\r\n19872,1517,AMER,toys,retail,20.12,8,0.183,none,2024-12-01\r\n19873,2400,EMEA,grocery,retail,52.27,3,0.064,coupon,2024-09-01\r\n19874,1173,LATAM,sports,online,47.46,5,0.076,none,2024-08-18\r\n19875,1494,AMER,electronics,online,216.34,3,0.179,none,2024-01-11\r\n19876,1282,LATAM,electronics,online,53.36,8,0.183,loyalty,2024-06-22\r\n19877,2024,AMER,electronics,retail,71.08,1,0.069,coupon,2024-04-05\r\n19878,2203,APAC,electronics,retail,44.95,1,0.142,none,2024-08-28\r\n19879,2039,EMEA,toys,retail,37.33,5,0.055,none,2024-02-28\r\n19880,1796,LATAM,home,online,51.74,4,0.039,none,2024-05-22\r\n19881,2409,APAC,electronics,retail,36.41,2,0.023,none,2024-04-17\r\n19882,1683,AMER,sports,online,127.30,1,0.090,loyalty,2024-04-17\r\n19883,1233,AMER,electronics,mobile,147.32,1,0.063,coupon,2024-04-07\r\n19884,1523,LATAM,home,retail,23.78,1,0.210,none,2024-06-13\r\n19885,2466,APAC,electronics,mobile,125.75,2,0.044,coupon,2024-05-14\r\n19886,2117,EMEA,grocery,retail,52.43,7,0.018,coupon,2024-11-18\r\n19887,1536,LATAM,grocery,mobile,43.31,6,0.120,bundle,2024-07-18\r\n19888,1574,AMER,home,online,126.82,3,0.037,none,2024-09-05\r\n19889,1867,AMER,electronics,retail,48.35,5,0.049,none,2024-12-14\r\n19890,1148,AMER,fashion,retail,26.83,7,0.148,none,2024-04-16\r\n19891,2114,AMER,fashion,retail,38.17,8,0.003,none,2024-12-10\r\n19892,1172,APAC,grocery,online,70.26,2,0.227,bundle,2024-04-15\r\n19893,1851,EMEA,electronics,online,20.91,2,0.070,none,2024-09-16\r\n19894,2298,APAC,fashion,online,56.81,3,0.097,none,2024-12-08\r\n19895,1323,EMEA,home,online,44.68,2,0.019,none,2024-08-05\r\n19896,2064,LATAM,toys,online,64.19,4,0.084,coupon,2024-12-22\r\n19897,1736,AMER,sports,retail,51.67,7,0.044,none,2024-02-12\r\n19898,1275,EMEA,grocery,online,14.90,4,0.145,none,2024-11-11\r\n19899,2408,EMEA,electronics,retail,81.65,1,0.117,bundle,2024-10-10\r\n19900,1484,AMER,grocery,online,80.50,7,0.230,none,2024-06-11\r\n19901,2204,AMER,fashion,mobile,17.56,4,0.179,bundle,2024-11-05\r\
n19902,2372,AMER,electronics,online,52.08,4,0.124,none,2024-11-06\r\n19903,1877,LATAM,electronics,mobile,36.87,5,0.088,coupon,2024-09-01\r\n19904,2311,LATAM,electronics,online,72.67,2,0.050,bundle,2024-07-12\r\n19905,1154,LATAM,electronics,mobile,79.52,6,0.147,none,2024-05-05\r\n19906,1098,APAC,home,retail,75.82,3,0.001,none,2024-06-15\r\n19907,1781,LATAM,grocery,mobile,133.46,4,0.204,loyalty,2024-06-16\r\n19908,1671,APAC,fashion,retail,44.77,3,0.152,loyalty,2024-10-01\r\n19909,1826,LATAM,toys,online,31.97,5,0.016,coupon,2024-08-12\r\n19910,2447,AMER,home,retail,85.16,7,0.023,coupon,2024-09-18\r\n19911,1289,LATAM,electronics,retail,75.29,2,0.006,none,2024-10-28\r\n19912,2228,EMEA,fashion,mobile,75.64,5,0.051,none,2024-01-11\r\n19913,1031,AMER,toys,mobile,72.22,1,0.113,bundle,2024-12-22\r\n19914,1939,LATAM,fashion,retail,64.24,6,0.088,none,2024-08-11\r\n19915,1272,AMER,electronics,online,83.50,2,0.124,none,2024-08-28\r\n19916,2384,LATAM,sports,online,54.13,6,0.005,bundle,2024-12-04\r\n19917,2423,LATAM,home,retail,154.98,5,0.222,none,2024-05-05\r\n19918,2118,AMER,home,retail,125.18,6,0.134,none,2024-03-01\r\n19919,1868,AMER,electronics,online,41.86,5,0.052,none,2024-02-28\r\n19920,1483,EMEA,grocery,online,64.43,6,0.051,none,2024-06-08\r\n19921,1098,APAC,electronics,online,48.73,1,0.164,none,2024-11-22\r\n19922,1073,AMER,home,mobile,211.38,1,0.050,none,2024-05-08\r\n19923,2465,EMEA,electronics,online,69.90,2,0.119,none,2024-11-07\r\n19924,2156,AMER,electronics,partner,32.91,4,0.043,loyalty,2024-04-11\r\n19925,2099,AMER,grocery,online,63.27,8,0.056,none,2024-12-13\r\n19926,1818,AMER,sports,partner,54.78,8,0.026,none,2024-12-18\r\n19927,1085,EMEA,grocery,online,23.79,6,0.093,bundle,2024-12-26\r\n19928,1121,EMEA,electronics,online,62.89,1,0.211,none,2024-02-23\r\n19929,1507,EMEA,home,retail,164.92,6,0.135,bundle,2024-12-21\r\n19930,1608,AMER,fashion,retail,34.45,1,0.146,bundle,2024-12-02\r\n19931,1687,APAC,toys,partner,32.93,7,0.053,bundle,2024-12-18\r\n19932,1707,APAC,gr
ocery,partner,105.52,6,0.226,none,2024-07-28\r\n19933,2113,LATAM,fashion,retail,43.07,5,0.238,none,2024-07-07\r\n19934,2203,APAC,grocery,mobile,29.66,6,0.008,none,2024-07-02\r\n19935,2140,AMER,home,mobile,58.20,1,0.056,coupon,2024-06-19\r\n19936,2265,APAC,sports,online,24.01,6,0.077,bundle,2024-11-21\r\n19937,1800,APAC,sports,mobile,55.72,6,0.222,none,2024-01-15\r\n19938,2182,AMER,home,retail,78.63,7,0.161,none,2024-06-04\r\n19939,1819,AMER,electronics,online,47.75,3,0.008,bundle,2024-01-13\r\n19940,1883,LATAM,electronics,online,70.52,5,0.248,coupon,2024-07-11\r\n19941,2364,APAC,sports,online,46.58,2,0.009,coupon,2024-02-05\r\n19942,2081,APAC,toys,mobile,70.76,2,0.070,loyalty,2024-07-13\r\n19943,2119,AMER,toys,mobile,23.88,2,0.207,bundle,2024-03-22\r\n19944,1389,LATAM,sports,retail,72.52,3,0.066,none,2024-03-10\r\n19945,1060,LATAM,sports,partner,81.22,7,0.158,bundle,2024-11-22\r\n19946,1326,AMER,sports,mobile,52.82,7,0.032,none,2024-09-27\r\n19947,2218,EMEA,toys,retail,52.45,2,0.230,coupon,2024-09-09\r\n19948,2005,APAC,toys,retail,25.67,2,0.039,bundle,2024-06-20\r\n19949,2416,LATAM,electronics,partner,145.24,5,0.006,none,2024-03-14\r\n19950,1001,LATAM,sports,retail,66.17,1,0.205,coupon,2024-08-06\r\n19951,2141,AMER,sports,retail,50.44,8,0.006,coupon,2024-10-20\r\n19952,2023,LATAM,sports,retail,56.61,7,0.224,bundle,2024-02-05\r\n19953,1907,EMEA,toys,online,27.78,3,0.024,none,2024-05-18\r\n19954,2131,APAC,grocery,retail,156.33,1,0.205,bundle,2024-09-08\r\n19955,1428,APAC,home,mobile,93.05,2,0.083,coupon,2024-02-16\r\n19956,1826,LATAM,sports,online,131.48,7,0.021,loyalty,2024-03-16\r\n19957,1339,EMEA,electronics,online,55.00,8,0.066,coupon,2024-11-06\r\n19958,1920,LATAM,fashion,online,140.18,3,0.091,none,2024-04-10\r\n19959,1436,APAC,home,online,75.10,2,0.148,bundle,2024-06-01\r\n19960,1922,EMEA,home,retail,38.82,2,0.097,none,2024-01-11\r\n19961,1347,APAC,electronics,online,104.47,7,0.017,none,2024-02-25\r\n19962,2028,APAC,electronics,online,34.58,6,0.107,coupon,2024-1
0-04\r\n19963,1755,APAC,home,online,116.59,1,0.155,loyalty,2024-12-20\r\n19964,1362,AMER,toys,retail,38.64,6,0.020,none,2024-05-09\r\n19965,2039,EMEA,home,online,70.90,8,0.150,coupon,2024-11-20\r\n19966,2422,APAC,toys,online,69.38,2,0.042,bundle,2024-11-07\r\n19967,2330,EMEA,fashion,online,39.02,7,0.166,none,2024-04-19\r\n19968,1568,AMER,grocery,online,80.48,4,0.182,none,2024-12-24\r\n19969,2060,LATAM,electronics,online,53.88,3,0.110,bundle,2024-04-05\r\n19970,1461,LATAM,home,online,156.09,6,0.232,none,2024-12-18\r\n19971,1677,EMEA,grocery,online,64.88,6,0.035,none,2024-05-13\r\n19972,2000,APAC,sports,retail,53.10,5,0.142,none,2024-10-04\r\n19973,1294,APAC,grocery,online,29.25,7,0.248,none,2024-08-10\r\n19974,1058,LATAM,grocery,online,52.70,7,0.161,none,2024-07-18\r\n19975,1528,EMEA,home,retail,59.09,6,0.078,none,2024-11-11\r\n19976,1615,LATAM,grocery,retail,42.25,7,0.184,bundle,2024-10-14\r\n19977,1127,EMEA,sports,mobile,91.78,5,0.109,coupon,2024-11-13\r\n19978,2398,EMEA,home,retail,122.98,6,0.124,none,2024-08-12\r\n19979,1696,LATAM,electronics,online,53.09,5,0.166,none,2024-04-10\r\n19980,1499,EMEA,home,online,40.23,2,0.179,none,2024-03-19\r\n19981,1368,EMEA,sports,online,96.67,3,0.184,coupon,2024-07-25\r\n19982,1438,APAC,grocery,retail,57.90,4,0.129,loyalty,2024-05-02\r\n19983,1259,EMEA,grocery,retail,29.72,7,0.223,none,2024-02-03\r\n19984,1542,APAC,fashion,online,66.30,4,0.184,loyalty,2024-02-20\r\n19985,1456,APAC,electronics,online,25.60,5,0.104,loyalty,2024-06-22\r\n19986,1896,EMEA,sports,online,68.10,5,0.250,none,2024-11-22\r\n19987,2325,LATAM,sports,online,69.67,6,0.067,none,2024-01-23\r\n19988,1286,EMEA,toys,online,37.67,4,0.227,none,2024-10-18\r\n19989,1125,LATAM,grocery,retail,77.53,1,0.184,none,2024-10-01\r\n19990,1954,APAC,grocery,retail,162.70,3,0.049,bundle,2024-07-22\r\n19991,1806,APAC,fashion,retail,32.75,5,0.247,bundle,2024-11-27\r\n19992,1610,LATAM,electronics,online,79.89,6,0.137,coupon,2024-04-14\r\n19993,2061,EMEA,fashion,online,68.50,5,0.250,n
one,2024-04-16\r\n19994,2085,AMER,sports,online,71.84,7,0.217,none,2024-09-22\r\n19995,2007,LATAM,fashion,partner,101.63,5,0.127,bundle,2024-05-13\r\n19996,2260,EMEA,toys,online,81.22,2,0.069,none,2024-08-03\r\n19997,1145,AMER,home,retail,33.46,4,0.247,loyalty,2024-04-05\r\n19998,1186,APAC,grocery,online,80.07,7,0.040,coupon,2024-06-04\r\n19999,1319,EMEA,electronics,retail,57.18,8,0.041,none,2024-07-24\r\n20000,2490,AMER,electronics,online,78.43,8,0.129,loyalty,2024-08-26\r\n20001,1945,AMER,home,mobile,45.90,8,0.244,bundle,2024-01-19\r\n20002,1740,EMEA,toys,retail,95.98,8,0.199,loyalty,2024-07-16\r\n20003,1067,APAC,home,retail,47.17,8,0.187,none,2024-10-15\r\n20004,1312,EMEA,sports,online,39.95,8,0.043,none,2024-08-26\r\n20005,1306,LATAM,electronics,online,26.62,7,0.099,coupon,2024-03-04\r\n20006,2308,AMER,toys,retail,36.23,8,0.177,bundle,2024-09-02\r\n20007,1574,AMER,sports,retail,62.51,5,0.007,none,2024-02-13\r\n20008,1318,LATAM,grocery,mobile,43.47,6,0.159,none,2024-03-19\r\n20009,1397,LATAM,fashion,online,43.78,6,0.066,none,2024-05-02\r\n20010,2341,EMEA,electronics,retail,43.85,6,0.145,bundle,2024-11-10\r\n20011,1701,LATAM,sports,partner,79.67,8,0.095,bundle,2024-02-27\r\n20012,2065,EMEA,electronics,online,23.77,3,0.194,loyalty,2024-11-24\r\n20013,1861,AMER,home,retail,80.67,6,0.194,coupon,2024-07-05\r\n20014,2073,AMER,fashion,online,66.83,2,0.189,none,2024-02-21\r\n20015,1344,EMEA,electronics,online,60.68,2,0.169,loyalty,2024-08-22\r\n20016,2230,LATAM,grocery,online,25.79,7,0.222,none,2024-11-20\r\n20017,2364,APAC,sports,online,58.67,5,0.061,none,2024-06-04\r\n20018,2202,APAC,toys,retail,87.05,3,0.114,none,2024-06-11\r\n20019,1652,APAC,grocery,online,66.13,7,0.124,none,2024-04-10\r\n20020,1473,LATAM,grocery,retail,42.23,2,0.193,coupon,2024-05-23\r\n20021,2022,LATAM,fashion,retail,53.11,2,0.233,none,2024-10-11\r\n20022,1257,APAC,home,online,88.00,2,0.166,loyalty,2024-04-09\r\n20023,2050,APAC,grocery,online,37.82,8,0.167,coupon,2024-10-17\r\n20024,1526,EMEA,fashi
on,online,81.56,7,0.049,coupon,2024-08-03\r\n20025,2449,LATAM,home,online,31.30,5,0.181,loyalty,2024-10-28\r\n20026,1710,APAC,sports,online,45.56,5,0.029,coupon,2024-10-04\r\n20027,1735,LATAM,fashion,mobile,41.05,8,0.167,coupon,2024-05-12\r\n20028,2006,APAC,home,retail,41.60,6,0.145,coupon,2024-01-02\r\n20029,2193,AMER,electronics,retail,46.86,5,0.029,bundle,2024-05-02\r\n20030,1719,LATAM,fashion,retail,60.49,2,0.146,none,2024-02-22\r\n20031,1526,EMEA,home,online,56.85,4,0.239,coupon,2024-05-25\r\n20032,1580,AMER,electronics,retail,37.05,6,0.222,none,2024-02-18\r\n20033,1940,APAC,home,retail,67.14,7,0.094,none,2024-01-01\r\n20034,1512,APAC,grocery,retail,101.58,4,0.032,none,2024-12-10\r\n20035,1841,AMER,home,mobile,37.08,6,0.110,bundle,2024-07-12\r\n20036,2216,AMER,sports,mobile,43.57,2,0.036,none,2024-03-09\r\n20037,1048,EMEA,fashion,online,66.51,2,0.242,none,2024-11-06\r\n20038,1418,LATAM,fashion,online,82.14,4,0.224,none,2024-01-06\r\n20039,2499,LATAM,fashion,retail,61.85,7,0.097,none,2024-05-15\r\n20040,2368,AMER,electronics,retail,48.84,1,0.189,coupon,2024-07-22\r\n20041,2212,EMEA,electronics,online,49.07,1,0.151,none,2024-06-02\r\n20042,2151,APAC,grocery,retail,38.24,2,0.074,bundle,2024-06-06\r\n20043,1660,AMER,fashion,partner,24.15,8,0.112,none,2024-12-06\r\n20044,1182,EMEA,fashion,retail,131.14,7,0.057,bundle,2024-01-24\r\n20045,1443,EMEA,toys,retail,37.84,2,0.176,bundle,2024-05-11\r\n20046,1523,LATAM,fashion,online,81.19,7,0.114,none,2024-09-05\r\n20047,1681,LATAM,sports,retail,41.58,3,0.001,coupon,2024-09-21\r\n20048,1673,AMER,grocery,mobile,114.05,5,0.186,none,2024-04-06\r\n20049,1146,LATAM,electronics,retail,63.22,1,0.158,coupon,2024-08-13\r\n20050,2089,EMEA,grocery,online,36.26,7,0.165,none,2024-05-21\r\n20051,1995,LATAM,fashion,online,50.01,1,0.020,none,2024-06-14\r\n20052,2285,APAC,fashion,mobile,97.83,3,0.212,loyalty,2024-11-18\r\n20053,2465,EMEA,electronics,retail,58.28,4,0.019,none,2024-10-19\r\n20054,1004,LATAM,toys,retail,45.04,5,0.135,none,2024-
04-15\r\n20055,1648,APAC,home,online,32.73,7,0.098,bundle,2024-07-20\r\n20056,1705,AMER,home,online,79.05,3,0.017,none,2024-12-18\r\n20057,2186,LATAM,home,online,57.20,7,0.147,none,2024-02-19\r\n20058,1301,AMER,electronics,online,46.64,6,0.247,none,2024-11-24\r\n20059,2188,EMEA,fashion,mobile,91.04,3,0.099,none,2024-03-02\r\n20060,1876,LATAM,grocery,retail,11.10,2,0.003,none,2024-08-07\r\n20061,2219,LATAM,toys,online,57.78,8,0.048,loyalty,2024-11-04\r\n20062,2210,APAC,home,mobile,90.47,8,0.235,none,2024-05-06\r\n20063,2228,EMEA,electronics,online,74.94,1,0.183,none,2024-05-23\r\n20064,1024,APAC,grocery,mobile,74.22,4,0.159,bundle,2024-02-22\r\n20065,1587,LATAM,grocery,retail,90.11,4,0.181,coupon,2024-05-27\r\n20066,2319,AMER,toys,retail,39.64,1,0.093,none,2024-12-24\r\n20067,1809,APAC,sports,retail,89.30,2,0.164,none,2024-10-06\r\n20068,1060,LATAM,home,online,87.11,6,0.094,none,2024-09-25\r\n20069,1411,LATAM,electronics,online,94.46,4,0.108,bundle,2024-04-10\r\n20070,2296,AMER,home,online,99.67,3,0.224,loyalty,2024-02-08\r\n20071,1156,APAC,sports,online,43.65,5,0.119,none,2024-07-14\r\n20072,2075,LATAM,fashion,retail,73.38,5,0.224,coupon,2024-02-22\r\n20073,1248,APAC,fashion,online,55.52,2,0.083,coupon,2024-07-08\r\n20074,2321,APAC,grocery,online,54.63,3,0.043,none,2024-09-04\r\n20075,1712,LATAM,toys,retail,105.24,3,0.173,none,2024-07-01\r\n20076,1745,APAC,fashion,retail,69.06,4,0.086,none,2024-05-08\r\n20077,1537,LATAM,fashion,online,44.82,7,0.224,coupon,2024-12-20\r\n20078,1349,APAC,grocery,online,53.38,4,0.207,coupon,2024-06-10\r\n20079,1457,EMEA,fashion,mobile,77.58,3,0.149,none,2024-09-22\r\n20080,1896,EMEA,toys,online,27.94,6,0.046,none,2024-10-10\r\n20081,1945,AMER,grocery,online,44.65,4,0.050,none,2024-03-06\r\n20082,2260,EMEA,home,mobile,50.16,6,0.064,coupon,2024-02-10\r\n20083,2124,AMER,electronics,retail,46.57,7,0.017,none,2024-11-05\r\n20084,1420,APAC,home,mobile,41.18,1,0.224,none,2024-04-14\r\n20085,1435,AMER,sports,retail,118.90,5,0.018,bundle,2024-04
-15\r\n20086,1013,LATAM,fashion,online,33.30,2,0.064,none,2024-02-23\r\n20087,2181,AMER,electronics,online,46.82,1,0.235,none,2024-12-16\r\n20088,1096,EMEA,home,online,25.79,4,0.061,none,2024-07-28\r\n20089,1808,APAC,sports,online,20.47,6,0.240,none,2024-09-04\r\n20090,1888,LATAM,grocery,online,74.04,3,0.060,bundle,2024-05-06\r\n20091,1349,APAC,sports,online,26.89,4,0.185,none,2024-07-17\r\n20092,2137,LATAM,home,online,29.10,1,0.031,coupon,2024-05-16\r\n20093,1163,AMER,sports,mobile,30.69,8,0.152,coupon,2024-07-03\r\n20094,1715,AMER,toys,retail,81.30,7,0.061,none,2024-09-13\r\n20095,2424,LATAM,fashion,retail,37.18,7,0.082,none,2024-11-15\r\n20096,2089,EMEA,grocery,retail,36.89,4,0.006,none,2024-05-16\r\n20097,1096,EMEA,electronics,online,77.67,4,0.045,loyalty,2024-01-17\r\n20098,1517,AMER,fashion,partner,21.58,5,0.147,none,2024-09-22\r\n20099,1398,APAC,grocery,retail,51.20,4,0.075,none,2024-05-01\r\n20100,1342,LATAM,grocery,retail,84.45,2,0.079,none,2024-03-24\r\n20101,1728,AMER,sports,online,44.26,2,0.085,none,2024-10-18\r\n20102,2242,AMER,grocery,mobile,38.12,5,0.119,none,2024-05-25\r\n20103,1091,EMEA,toys,online,31.16,1,0.089,none,2024-07-15\r\n20104,1008,AMER,fashion,online,98.12,7,0.060,loyalty,2024-10-21\r\n20105,2125,LATAM,grocery,online,100.58,8,0.213,none,2024-02-08\r\n20106,2010,APAC,grocery,retail,101.18,2,0.011,loyalty,2024-12-22\r\n20107,1689,LATAM,sports,retail,120.10,7,0.150,none,2024-03-18\r\n20108,2337,AMER,home,online,76.56,3,0.064,none,2024-11-23\r\n20109,1631,APAC,electronics,retail,44.73,1,0.060,loyalty,2024-09-08\r\n20110,1735,LATAM,sports,online,67.67,5,0.009,loyalty,2024-10-01\r\n20111,1350,LATAM,grocery,online,37.32,7,0.135,none,2024-05-24\r\n20112,2378,LATAM,sports,online,104.19,3,0.186,none,2024-05-11\r\n20113,1981,EMEA,electronics,online,26.51,6,0.158,none,2024-01-03\r\n20114,1807,EMEA,electronics,retail,60.80,7,0.182,coupon,2024-11-05\r\n20115,1360,APAC,electronics,online,35.94,1,0.015,coupon,2024-11-28\r\n20116,1336,APAC,sports,online,2
8.18,7,0.071,coupon,2024-07-08\r\n20117,1419,APAC,grocery,retail,100.12,2,0.028,none,2024-10-05\r\n20118,1518,AMER,grocery,online,69.99,4,0.187,loyalty,2024-06-06\r\n20119,1121,EMEA,home,online,54.21,1,0.194,coupon,2024-04-21\r\n20120,1941,AMER,electronics,online,42.52,5,0.088,none,2024-03-15\r\n20121,1530,APAC,toys,retail,63.88,2,0.170,none,2024-10-14\r\n20122,2070,APAC,fashion,retail,44.38,2,0.055,none,2024-01-11\r\n20123,1505,EMEA,sports,online,120.89,3,0.225,none,2024-03-25\r\n20124,2239,EMEA,sports,online,169.95,3,0.230,none,2024-06-18\r\n20125,2360,EMEA,sports,online,44.45,3,0.021,none,2024-02-03\r\n20126,1455,APAC,grocery,retail,55.15,6,0.178,none,2024-03-13\r\n20127,1182,EMEA,toys,online,52.31,7,0.240,none,2024-07-01\r\n20128,2155,APAC,grocery,online,49.78,2,0.137,none,2024-12-13\r\n20129,1867,AMER,grocery,online,26.84,2,0.045,loyalty,2024-05-24\r\n20130,1991,APAC,toys,retail,66.84,6,0.230,none,2024-03-17\r\n20131,2064,LATAM,electronics,mobile,55.08,4,0.185,coupon,2024-09-17\r\n20132,1055,AMER,grocery,partner,41.23,5,0.101,none,2024-11-01\r\n20133,2050,APAC,grocery,retail,76.46,3,0.161,none,2024-09-20\r\n20134,1900,APAC,sports,online,60.54,1,0.134,coupon,2024-01-21\r\n20135,1867,AMER,fashion,retail,70.48,8,0.099,none,2024-10-14\r\n20136,1849,EMEA,grocery,online,16.48,5,0.010,none,2024-01-26\r\n20137,1224,APAC,electronics,retail,44.71,1,0.161,none,2024-09-09\r\n20138,1903,LATAM,electronics,mobile,48.58,6,0.156,loyalty,2024-01-07\r\n20139,1738,LATAM,sports,online,52.31,5,0.128,coupon,2024-04-07\r\n20140,1878,EMEA,fashion,online,40.17,6,0.174,bundle,2024-10-13\r\n20141,1284,APAC,fashion,retail,59.77,5,0.224,none,2024-10-07\r\n20142,1664,LATAM,grocery,mobile,34.46,7,0.021,coupon,2024-02-08\r\n20143,2326,LATAM,home,retail,117.49,7,0.015,coupon,2024-11-16\r\n20144,1088,LATAM,grocery,online,58.51,1,0.210,none,2024-06-03\r\n20145,1667,AMER,electronics,retail,27.99,3,0.074,loyalty,2024-01-20\r\n20146,2238,AMER,home,retail,44.44,4,0.213,none,2024-01-12\r\n20147,1240,E
MEA,fashion,partner,76.07,2,0.005,none,2024-10-07\r\n20148,2339,AMER,electronics,online,24.96,6,0.028,none,2024-12-08\r\n20149,1001,LATAM,grocery,online,98.87,3,0.085,none,2024-06-07\r\n20150,1407,LATAM,fashion,retail,38.35,5,0.112,coupon,2024-04-24\r\n20151,1985,AMER,grocery,partner,64.09,7,0.022,coupon,2024-01-07\r\n20152,2459,AMER,home,online,39.95,8,0.192,none,2024-02-03\r\n20153,1303,LATAM,sports,online,64.61,5,0.102,none,2024-12-02\r\n20154,1130,LATAM,fashion,online,44.06,8,0.206,none,2024-05-11\r\n20155,1122,AMER,sports,retail,92.25,1,0.246,coupon,2024-10-08\r\n20156,2035,LATAM,grocery,online,130.14,1,0.077,bundle,2024-03-16\r\n20157,1158,LATAM,sports,retail,104.46,5,0.119,none,2024-07-21\r\n20158,1133,EMEA,electronics,retail,58.59,4,0.238,coupon,2024-10-13\r\n20159,1592,LATAM,sports,retail,56.26,5,0.104,none,2024-01-26\r\n20160,1777,AMER,electronics,online,40.16,2,0.177,coupon,2024-09-02\r\n20161,1543,AMER,sports,online,44.83,2,0.133,loyalty,2024-07-21\r\n20162,1961,EMEA,grocery,retail,103.51,2,0.118,none,2024-04-08\r\n20163,1910,LATAM,home,online,102.84,6,0.250,loyalty,2024-09-28\r\n20164,1219,LATAM,electronics,retail,61.47,4,0.181,none,2024-11-11\r\n20165,1385,LATAM,home,online,83.73,5,0.087,bundle,2024-05-07\r\n20166,2265,APAC,home,online,31.85,1,0.217,none,2024-07-13\r\n20167,1151,APAC,sports,online,74.67,2,0.075,none,2024-02-19\r\n20168,2295,EMEA,home,mobile,154.95,3,0.211,bundle,2024-10-07\r\n20169,2247,LATAM,electronics,online,40.40,3,0.012,none,2024-01-06\r\n20170,1366,APAC,fashion,mobile,92.60,3,0.148,coupon,2024-07-04\r\n20171,2109,EMEA,grocery,mobile,63.19,2,0.128,coupon,2024-01-22\r\n20172,1220,LATAM,grocery,mobile,28.44,7,0.180,coupon,2024-01-27\r\n20173,2339,AMER,home,partner,25.36,7,0.123,coupon,2024-03-21\r\n20174,1926,AMER,fashion,retail,53.55,3,0.161,bundle,2024-05-28\r\n20175,1055,AMER,fashion,online,155.23,2,0.222,bundle,2024-08-19\r\n20176,1337,APAC,home,online,95.92,1,0.214,loyalty,2024-12-07\r\n20177,1737,AMER,grocery,mobile,123.30,5,0
.121,none,2024-03-26\r\n20178,2331,APAC,sports,online,53.28,2,0.203,none,2024-08-17\r\n20179,1159,LATAM,grocery,online,49.66,7,0.132,loyalty,2024-06-05\r\n20180,2439,AMER,fashion,retail,80.53,4,0.052,none,2024-10-27\r\n20181,1091,EMEA,fashion,online,36.51,1,0.189,none,2024-09-15\r\n20182,2449,LATAM,home,retail,32.38,1,0.143,bundle,2024-12-21\r\n20183,2410,EMEA,home,retail,150.79,5,0.126,coupon,2024-05-22\r\n20184,1199,APAC,electronics,partner,36.29,3,0.107,none,2024-03-05\r\n20185,2289,APAC,electronics,retail,34.30,1,0.062,none,2024-04-21\r\n20186,2314,EMEA,grocery,online,37.18,4,0.144,none,2024-02-07\r\n20187,1393,LATAM,home,online,60.24,2,0.238,none,2024-05-11\r\n20188,1682,EMEA,fashion,mobile,48.16,8,0.057,none,2024-04-22\r\n20189,1975,EMEA,grocery,online,112.63,7,0.230,none,2024-05-01\r\n20190,1212,LATAM,electronics,mobile,54.62,1,0.194,none,2024-03-25\r\n20191,1439,LATAM,fashion,mobile,168.06,7,0.072,bundle,2024-03-23\r\n20192,2189,LATAM,toys,retail,48.82,2,0.135,none,2024-03-05\r\n20193,1223,LATAM,electronics,mobile,108.03,7,0.153,bundle,2024-05-06\r\n20194,1656,LATAM,electronics,retail,22.10,6,0.185,none,2024-12-04\r\n20195,1076,LATAM,fashion,online,84.41,7,0.152,coupon,2024-07-18\r\n20196,1176,EMEA,home,mobile,35.39,1,0.149,loyalty,2024-03-16\r\n20197,1850,APAC,grocery,online,50.48,8,0.164,loyalty,2024-11-12\r\n20198,2064,LATAM,fashion,online,38.78,1,0.025,loyalty,2024-10-05\r\n20199,1263,AMER,sports,retail,78.66,5,0.238,coupon,2024-04-24\r\n20200,1518,AMER,electronics,retail,78.23,8,0.168,none,2024-11-04\r\n20201,1245,APAC,grocery,online,59.51,4,0.172,none,2024-12-20\r\n20202,2070,APAC,grocery,mobile,36.79,1,0.056,coupon,2024-08-22\r\n20203,2388,LATAM,sports,retail,53.39,4,0.157,bundle,2024-10-18\r\n20204,1857,LATAM,fashion,online,21.27,3,0.010,bundle,2024-11-24\r\n20205,1494,AMER,grocery,online,46.39,2,0.180,coupon,2024-10-04\r\n20206,1706,EMEA,sports,retail,54.69,6,0.066,none,2024-08-24\r\n20207,1000,APAC,grocery,online,42.72,4,0.142,none,2024-01-15\r\n20
208,1993,APAC,electronics,online,30.49,7,0.005,coupon,2024-05-24\r\n20209,1101,AMER,home,online,146.83,5,0.136,loyalty,2024-06-25\r\n20210,2301,EMEA,electronics,retail,32.79,2,0.234,coupon,2024-01-23\r\n20211,2303,EMEA,fashion,online,41.33,7,0.192,coupon,2024-04-18\r\n20212,2147,LATAM,grocery,online,75.01,1,0.053,coupon,2024-08-03\r\n20213,2258,AMER,grocery,online,95.52,3,0.103,bundle,2024-03-14\r\n20214,2319,AMER,electronics,online,44.83,5,0.050,loyalty,2024-06-26\r\n20215,1676,LATAM,home,online,159.21,5,0.164,none,2024-08-21\r\n20216,1065,AMER,grocery,retail,58.27,5,0.003,none,2024-04-27\r\n20217,1128,LATAM,grocery,online,79.35,5,0.129,none,2024-06-25\r\n20218,1167,EMEA,grocery,online,25.06,6,0.230,coupon,2024-07-06\r\n20219,1224,APAC,electronics,online,78.38,2,0.088,none,2024-12-11\r\n20220,1680,LATAM,electronics,online,43.68,3,0.003,none,2024-09-06\r\n20221,1851,EMEA,grocery,online,120.71,2,0.090,loyalty,2024-01-12\r\n20222,1077,AMER,home,retail,43.04,3,0.026,coupon,2024-05-13\r\n20223,1314,AMER,home,online,23.51,3,0.076,loyalty,2024-05-11\r\n20224,1543,AMER,home,retail,33.17,8,0.045,loyalty,2024-09-26\r\n20225,2296,AMER,sports,online,64.76,8,0.168,none,2024-01-20\r\n20226,1762,LATAM,home,online,112.92,8,0.200,loyalty,2024-02-25\r\n20227,1321,EMEA,fashion,retail,66.02,5,0.184,none,2024-06-21\r\n20228,1742,AMER,sports,retail,24.06,7,0.225,coupon,2024-01-19\r\n20229,2308,AMER,sports,retail,49.59,1,0.040,coupon,2024-12-22\r\n20230,2075,LATAM,electronics,online,90.71,2,0.225,none,2024-03-11\r\n20231,1149,LATAM,electronics,online,26.79,6,0.234,bundle,2024-10-13\r\n20232,2111,EMEA,home,online,50.59,7,0.220,bundle,2024-02-01\r\n20233,2371,LATAM,fashion,online,27.14,6,0.079,none,2024-08-25\r\n20234,1756,EMEA,toys,online,46.40,8,0.169,none,2024-03-19\r\n20235,1700,EMEA,home,online,37.83,1,0.142,none,2024-09-11\r\n20236,1443,EMEA,sports,mobile,74.75,8,0.219,loyalty,2024-02-04\r\n20237,1136,EMEA,electronics,online,68.28,3,0.115,coupon,2024-04-18\r\n20238,1697,APAC,sports,r
etail,117.98,4,0.190,none,2024-04-16\r\n20239,1079,LATAM,toys,online,77.95,5,0.194,none,2024-05-09\r\n20240,2033,LATAM,grocery,online,117.94,4,0.082,none,2024-11-17\r\n20241,1516,EMEA,home,retail,18.45,4,0.114,none,2024-06-25\r\n20242,2407,EMEA,electronics,online,51.08,5,0.145,coupon,2024-05-22\r\n20243,2223,EMEA,grocery,online,87.62,5,0.087,coupon,2024-12-13\r\n20244,1622,LATAM,electronics,retail,114.58,8,0.106,coupon,2024-01-10\r\n20245,1066,AMER,home,retail,39.37,2,0.201,coupon,2024-03-13\r\n20246,1470,LATAM,home,online,38.59,3,0.100,coupon,2024-08-06\r\n20247,1912,APAC,electronics,retail,56.45,2,0.005,coupon,2024-03-04\r\n20248,1605,APAC,home,online,44.11,7,0.248,none,2024-03-13\r\n20249,1426,AMER,home,online,33.87,3,0.232,coupon,2024-06-09\r\n20250,1676,LATAM,fashion,online,30.56,8,0.151,none,2024-11-07\r\n20251,1932,EMEA,grocery,retail,31.24,6,0.116,none,2024-05-17\r\n20252,2481,APAC,electronics,online,55.04,6,0.155,loyalty,2024-08-02\r\n20253,2378,LATAM,home,online,43.26,4,0.218,none,2024-12-10\r\n20254,1979,APAC,sports,online,50.49,7,0.037,coupon,2024-09-20\r\n20255,1847,LATAM,electronics,mobile,78.07,7,0.162,coupon,2024-01-22\r\n20256,1354,AMER,electronics,partner,48.40,7,0.097,coupon,2024-01-09\r\n20257,1863,EMEA,grocery,online,34.53,5,0.145,none,2024-01-18\r\n20258,2272,EMEA,home,retail,64.86,3,0.212,none,2024-01-10\r\n20259,1861,AMER,home,online,86.06,4,0.066,none,2024-08-22\r\n20260,1884,APAC,toys,retail,54.35,1,0.045,none,2024-11-10\r\n20261,1525,APAC,sports,mobile,123.43,3,0.162,coupon,2024-08-05\r\n20262,2370,EMEA,grocery,retail,65.06,7,0.200,coupon,2024-12-12\r\n20263,1565,AMER,fashion,online,58.90,6,0.242,coupon,2024-10-28\r\n20264,1214,EMEA,electronics,retail,33.49,2,0.059,none,2024-04-05\r\n20265,2362,AMER,grocery,partner,64.76,6,0.187,none,2024-03-17\r\n20266,1682,EMEA,grocery,online,29.18,1,0.107,none,2024-12-23\r\n20267,2003,LATAM,fashion,online,56.49,1,0.063,none,2024-11-01\r\n20268,1776,APAC,fashion,retail,59.17,6,0.069,loyalty,2024-05-15\r\
n20269,2230,LATAM,home,partner,33.73,6,0.197,none,2024-04-19\r\n20270,2051,APAC,grocery,online,29.11,2,0.094,none,2024-11-03\r\n20271,1427,EMEA,fashion,online,54.61,5,0.086,coupon,2024-11-13\r\n20272,2251,APAC,fashion,mobile,160.71,4,0.156,none,2024-04-03\r\n20273,1166,AMER,home,mobile,29.54,7,0.033,none,2024-01-06\r\n20274,1315,AMER,fashion,online,36.22,6,0.248,none,2024-12-13\r\n20275,1646,APAC,grocery,online,30.17,7,0.008,coupon,2024-03-25\r\n20276,1863,EMEA,home,retail,40.06,7,0.186,bundle,2024-08-26\r\n20277,1889,APAC,electronics,online,105.32,7,0.069,none,2024-02-18\r\n20278,2499,LATAM,toys,retail,101.85,6,0.223,bundle,2024-05-21\r\n20279,2439,AMER,electronics,online,39.29,7,0.152,none,2024-12-01\r\n20280,1684,EMEA,grocery,retail,46.46,4,0.000,coupon,2024-05-04\r\n20281,1796,LATAM,toys,partner,40.15,7,0.216,bundle,2024-01-20\r\n20282,1641,EMEA,fashion,online,35.83,1,0.233,none,2024-05-05\r\n20283,1088,LATAM,grocery,retail,42.73,6,0.172,none,2024-05-22\r\n20284,2298,APAC,fashion,online,49.59,6,0.170,coupon,2024-10-16\r\n20285,1737,AMER,home,mobile,46.66,2,0.003,none,2024-04-26\r\n20286,2236,APAC,electronics,online,22.93,4,0.208,none,2024-02-18\r\n20287,1900,APAC,grocery,online,55.18,4,0.222,none,2024-02-12\r\n20288,2115,APAC,home,online,44.82,3,0.013,bundle,2024-08-11\r\n20289,2114,AMER,sports,retail,59.73,3,0.197,none,2024-10-25\r\n20290,1786,APAC,grocery,online,85.70,3,0.212,none,2024-03-15\r\n20291,1346,AMER,grocery,mobile,33.21,8,0.056,none,2024-10-26\r\n20292,2239,EMEA,sports,retail,66.06,1,0.049,none,2024-10-27\r\n20293,1697,APAC,toys,online,41.34,1,0.134,none,2024-11-19\r\n20294,1374,APAC,fashion,retail,97.94,5,0.108,bundle,2024-01-05\r\n20295,1363,EMEA,toys,retail,28.91,3,0.167,none,2024-10-22\r\n20296,1592,LATAM,electronics,online,33.25,5,0.056,bundle,2024-01-22\r\n20297,1025,EMEA,sports,mobile,44.56,1,0.006,coupon,2024-03-21\r\n20298,1415,AMER,home,retail,55.39,4,0.236,none,2024-01-17\r\n20299,1412,AMER,sports,online,60.13,3,0.061,none,2024-11-28\r\n2
0300,2364,APAC,fashion,retail,42.85,1,0.218,coupon,2024-11-03\r\n20301,1766,AMER,grocery,retail,47.40,5,0.125,bundle,2024-06-25\r\n20302,1338,EMEA,electronics,online,28.64,7,0.016,bundle,2024-01-18\r\n20303,2236,APAC,home,online,43.96,1,0.237,bundle,2024-04-24\r\n20304,1764,LATAM,electronics,retail,37.62,2,0.095,none,2024-04-12\r\n20305,1096,EMEA,sports,online,58.96,2,0.059,none,2024-04-08\r\n20306,1304,LATAM,home,partner,99.75,6,0.052,bundle,2024-05-08\r\n20307,2204,AMER,grocery,online,52.72,6,0.082,coupon,2024-02-08\r\n20308,1565,AMER,home,online,123.27,7,0.192,coupon,2024-02-16\r\n20309,2433,APAC,home,retail,39.37,2,0.167,coupon,2024-04-20\r\n20310,2044,APAC,home,online,53.19,7,0.120,coupon,2024-02-16\r\n20311,1978,AMER,grocery,retail,47.60,8,0.041,none,2024-01-28\r\n20312,1802,AMER,electronics,retail,119.87,4,0.070,none,2024-09-24\r\n20313,1035,EMEA,home,online,46.50,4,0.093,none,2024-09-09\r\n20314,2124,AMER,fashion,partner,74.36,4,0.027,none,2024-06-20\r\n20315,2301,EMEA,home,partner,21.94,6,0.015,none,2024-10-21\r\n20316,2117,EMEA,sports,online,160.24,7,0.027,none,2024-01-25\r\n20317,2238,AMER,fashion,retail,47.77,5,0.200,none,2024-06-03\r\n20318,1290,EMEA,fashion,mobile,29.90,6,0.183,coupon,2024-05-28\r\n20319,1472,AMER,grocery,online,91.66,3,0.063,none,2024-09-15\r\n20320,1994,LATAM,home,mobile,94.38,5,0.195,none,2024-04-17\r\n20321,2163,EMEA,home,retail,53.03,4,0.224,none,2024-03-17\r\n20322,2494,AMER,home,retail,51.49,7,0.057,none,2024-07-03\r\n20323,1148,AMER,grocery,retail,49.37,4,0.100,none,2024-03-06\r\n20324,2115,APAC,sports,mobile,58.26,5,0.151,none,2024-05-06\r\n20325,1551,APAC,fashion,online,49.62,4,0.248,none,2024-05-03\r\n20326,1624,AMER,home,mobile,29.63,5,0.034,none,2024-08-09\r\n20327,1923,LATAM,grocery,retail,109.67,2,0.239,none,2024-02-13\r\n20328,1622,LATAM,grocery,partner,33.24,5,0.220,none,2024-02-07\r\n20329,1705,AMER,grocery,mobile,143.57,3,0.126,none,2024-10-28\r\n20330,1655,LATAM,grocery,online,46.97,1,0.174,loyalty,2024-09-14\r\n203
31,1398,APAC,home,mobile,140.54,4,0.016,none,2024-06-12\r\n20332,1645,EMEA,fashion,online,109.67,4,0.183,none,2024-09-12\r\n20333,2065,EMEA,home,mobile,30.50,7,0.217,none,2024-10-04\r\n20334,1060,LATAM,sports,retail,70.14,4,0.221,none,2024-03-13\r\n20335,1415,AMER,sports,retail,47.30,2,0.064,none,2024-07-06\r\n20336,1756,EMEA,grocery,retail,30.24,1,0.158,none,2024-12-25\r\n20337,1815,APAC,home,online,22.04,2,0.097,bundle,2024-08-10\r\n20338,1233,AMER,home,online,59.60,6,0.213,none,2024-08-21\r\n20339,1065,AMER,toys,mobile,46.83,6,0.191,loyalty,2024-09-19\r\n20340,2368,AMER,grocery,online,54.58,8,0.002,bundle,2024-09-21\r\n20341,2274,APAC,sports,partner,176.86,1,0.085,none,2024-09-18\r\n20342,1433,EMEA,fashion,retail,60.44,3,0.129,none,2024-07-01\r\n20343,2359,LATAM,home,retail,16.12,2,0.089,none,2024-05-18\r\n20344,1432,APAC,electronics,mobile,94.09,6,0.054,none,2024-03-23\r\n20345,1863,EMEA,fashion,mobile,46.36,5,0.170,none,2024-02-15\r\n20346,1878,EMEA,grocery,online,51.28,3,0.117,bundle,2024-10-21\r\n20347,1538,AMER,fashion,online,165.33,4,0.196,none,2024-02-15\r\n20348,2356,LATAM,home,retail,37.34,5,0.122,none,2024-02-27\r\n20349,1359,LATAM,home,retail,90.15,3,0.065,coupon,2024-05-21\r\n20350,1902,AMER,toys,retail,96.06,3,0.180,none,2024-02-20\r\n20351,2063,APAC,fashion,online,40.78,8,0.085,coupon,2024-04-08\r\n20352,2181,AMER,sports,retail,78.32,8,0.013,coupon,2024-12-08\r\n20353,1739,AMER,home,retail,67.03,2,0.172,none,2024-06-09\r\n20354,2424,LATAM,toys,online,76.42,3,0.190,none,2024-09-16\r\n20355,1819,AMER,sports,online,60.63,6,0.166,none,2024-08-20\r\n20356,1514,LATAM,grocery,online,54.88,6,0.247,none,2024-05-25\r\n20357,1227,AMER,electronics,online,40.77,7,0.046,none,2024-01-13\r\n20358,2337,AMER,grocery,online,49.76,6,0.161,coupon,2024-04-06\r\n20359,1410,AMER,electronics,retail,82.56,6,0.231,coupon,2024-04-12\r\n20360,1806,APAC,fashion,online,161.39,1,0.199,none,2024-04-21\r\n20361,2480,APAC,sports,online,69.90,6,0.214,bundle,2024-02-26\r\n20362,1295,EM
EA,sports,online,19.05,6,0.235,coupon,2024-08-04\r\n20363,1752,APAC,home,retail,31.73,8,0.077,none,2024-05-02\r\n20364,1563,EMEA,sports,retail,51.93,5,0.130,none,2024-01-21\r\n20365,2297,EMEA,electronics,retail,113.01,5,0.085,none,2024-06-17\r\n20366,1617,AMER,fashion,retail,38.22,3,0.163,none,2024-09-24\r\n20367,1951,LATAM,electronics,partner,47.37,1,0.120,none,2024-03-20\r\n20368,2359,LATAM,electronics,online,64.59,8,0.006,bundle,2024-05-07\r\n20369,1551,APAC,home,retail,59.47,7,0.062,none,2024-06-03\r\n20370,1715,AMER,home,online,59.29,4,0.130,none,2024-07-20\r\n20371,2406,EMEA,toys,retail,58.84,1,0.018,bundle,2024-07-11\r\n20372,1694,APAC,grocery,retail,63.19,3,0.200,none,2024-12-13\r\n20373,2030,EMEA,grocery,online,86.54,4,0.026,coupon,2024-12-20\r\n20374,2452,LATAM,toys,mobile,75.25,1,0.047,loyalty,2024-11-16\r\n20375,1073,AMER,home,retail,55.86,4,0.207,coupon,2024-04-22\r\n20376,2465,EMEA,grocery,online,39.78,8,0.237,none,2024-11-15\r\n20377,1290,EMEA,grocery,retail,117.75,8,0.223,none,2024-02-23\r\n20378,1449,EMEA,sports,online,49.54,3,0.098,loyalty,2024-12-03\r\n20379,1272,AMER,home,online,27.94,7,0.006,bundle,2024-08-19\r\n20380,1616,APAC,home,online,123.15,5,0.109,none,2024-06-07\r\n20381,2062,EMEA,sports,mobile,49.06,7,0.110,loyalty,2024-03-18\r\n20382,1314,AMER,toys,mobile,99.27,6,0.159,none,2024-07-24\r\n20383,1162,AMER,home,online,150.60,8,0.199,loyalty,2024-06-01\r\n20384,1215,LATAM,electronics,online,78.77,6,0.036,none,2024-03-18\r\n20385,1915,LATAM,sports,online,178.73,6,0.063,bundle,2024-08-08\r\n20386,1141,AMER,home,retail,88.09,3,0.086,none,2024-08-04\r\n20387,1495,LATAM,fashion,retail,81.34,2,0.025,none,2024-03-10\r\n20388,1139,EMEA,home,online,117.80,5,0.054,coupon,2024-04-20\r\n20389,1657,LATAM,electronics,online,65.15,3,0.033,none,2024-01-01\r\n20390,2018,AMER,sports,mobile,70.10,3,0.091,bundle,2024-06-10\r\n20391,2174,LATAM,electronics,partner,40.67,8,0.196,coupon,2024-11-01\r\n20392,1076,LATAM,home,online,46.39,6,0.100,coupon,2024-09-10\r\
n20393,2161,LATAM,home,online,60.45,8,0.202,none,2024-02-15\r\n20394,1926,AMER,fashion,online,43.09,1,0.130,coupon,2024-10-04\r\n20395,1121,EMEA,grocery,retail,65.71,5,0.087,bundle,2024-01-08\r\n20396,2066,APAC,toys,mobile,56.36,7,0.059,none,2024-03-20\r\n20397,1979,APAC,electronics,retail,40.53,5,0.048,none,2024-06-27\r\n20398,2436,LATAM,grocery,online,44.60,6,0.074,none,2024-10-17\r\n20399,2396,AMER,electronics,online,35.38,4,0.009,none,2024-02-17\r\n20400,2259,AMER,grocery,retail,97.59,6,0.240,loyalty,2024-05-21\r\n20401,2352,APAC,home,retail,64.75,1,0.048,none,2024-06-01\r\n20402,1284,APAC,toys,online,32.72,6,0.178,none,2024-02-04\r\n20403,1596,EMEA,sports,online,55.18,7,0.111,loyalty,2024-12-28\r\n20404,1702,AMER,home,retail,26.13,3,0.228,bundle,2024-12-05\r\n20405,2496,EMEA,fashion,online,44.80,8,0.151,coupon,2024-11-17\r\n20406,1707,APAC,fashion,online,127.97,5,0.087,coupon,2024-09-08\r\n20407,2482,EMEA,electronics,mobile,28.76,3,0.114,none,2024-05-22\r\n20408,1224,APAC,grocery,mobile,20.55,3,0.116,none,2024-08-22\r\n20409,2452,LATAM,fashion,retail,56.09,4,0.215,none,2024-10-04\r\n20410,2371,LATAM,grocery,mobile,146.72,7,0.034,none,2024-06-23\r\n20411,1419,APAC,sports,mobile,33.96,7,0.130,bundle,2024-01-04\r\n20412,1341,EMEA,grocery,online,24.70,6,0.076,coupon,2024-12-19\r\n20413,1572,LATAM,grocery,retail,42.68,8,0.094,coupon,2024-07-19\r\n20414,1446,AMER,electronics,retail,39.81,7,0.063,none,2024-08-09\r\n20415,1900,APAC,fashion,online,63.93,2,0.250,none,2024-09-09\r\n20416,1724,LATAM,grocery,retail,50.08,4,0.179,none,2024-12-05\r\n20417,1230,EMEA,grocery,online,98.06,7,0.114,none,2024-04-14\r\n20418,1991,APAC,electronics,online,59.61,4,0.036,none,2024-08-26\r\n20419,2182,AMER,grocery,online,101.32,1,0.070,none,2024-08-12\r\n20420,1159,LATAM,home,retail,81.33,8,0.247,none,2024-05-22\r\n20421,1027,APAC,home,retail,78.68,8,0.124,none,2024-08-08\r\n20422,2410,EMEA,grocery,online,115.66,3,0.098,coupon,2024-04-09\r\n20423,2146,APAC,sports,retail,19.24,8,0.117,bun
dle,2024-03-07\r\n20424,1319,EMEA,grocery,online,93.26,1,0.095,none,2024-08-18\r\n20425,1724,LATAM,home,retail,145.15,3,0.158,bundle,2024-08-27\r\n20426,2348,EMEA,electronics,mobile,94.76,3,0.125,none,2024-08-25\r\n20427,1587,LATAM,fashion,online,77.45,3,0.178,none,2024-04-23\r\n20428,1484,AMER,fashion,mobile,40.25,5,0.045,none,2024-01-25\r\n20429,1124,AMER,grocery,retail,33.47,2,0.069,none,2024-01-17\r\n20430,1317,EMEA,electronics,partner,82.15,5,0.250,none,2024-08-26\r\n20431,1573,AMER,grocery,retail,23.18,4,0.240,none,2024-11-25\r\n20432,1934,EMEA,electronics,retail,127.69,4,0.048,coupon,2024-10-28\r\n20433,1263,AMER,grocery,retail,83.04,1,0.230,none,2024-01-20\r\n20434,2097,AMER,home,online,48.54,6,0.212,bundle,2024-08-23\r\n20435,1631,APAC,home,online,59.12,7,0.143,none,2024-09-02\r\n20436,1321,EMEA,grocery,online,85.60,2,0.085,bundle,2024-11-24\r\n20437,2382,LATAM,home,retail,40.36,4,0.137,loyalty,2024-01-05\r\n20438,1076,LATAM,home,online,47.85,2,0.178,loyalty,2024-06-14\r\n20439,1026,APAC,grocery,online,70.65,4,0.180,none,2024-12-14\r\n20440,1027,APAC,electronics,retail,86.30,2,0.123,none,2024-03-08\r\n20441,1719,LATAM,toys,partner,90.58,1,0.078,bundle,2024-06-28\r\n20442,1661,LATAM,sports,retail,42.68,2,0.039,none,2024-11-26\r\n20443,1364,EMEA,grocery,mobile,67.26,8,0.184,coupon,2024-06-17\r\n20444,1415,AMER,home,partner,54.52,5,0.190,bundle,2024-01-16\r\n20445,1386,AMER,home,online,65.52,2,0.034,bundle,2024-04-23\r\n20446,1314,AMER,electronics,retail,174.20,6,0.235,coupon,2024-02-05\r\n20447,2232,EMEA,electronics,online,129.31,4,0.248,bundle,2024-12-19\r\n20448,1695,LATAM,sports,retail,57.32,2,0.131,none,2024-02-09\r\n20449,1797,LATAM,home,online,66.66,8,0.143,bundle,2024-07-11\r\n20450,2300,EMEA,electronics,retail,66.58,7,0.022,none,2024-05-01\r\n20451,1081,AMER,fashion,online,44.03,2,0.052,bundle,2024-11-04\r\n20452,1410,AMER,toys,mobile,66.44,4,0.037,none,2024-07-12\r\n20453,2307,LATAM,grocery,retail,79.21,8,0.179,none,2024-01-04\r\n20454,2011,AMER,fash
ion,partner,139.67,3,0.049,none,2024-10-06\r\n20455,2267,AMER,toys,mobile,117.67,1,0.004,coupon,2024-11-08\r\n20456,1642,EMEA,grocery,online,56.23,7,0.077,none,2024-01-03\r\n20457,2094,AMER,electronics,retail,82.25,5,0.080,none,2024-09-24\r\n20458,1374,APAC,sports,retail,22.39,8,0.037,coupon,2024-03-08\r\n20459,1157,LATAM,sports,online,66.12,7,0.185,loyalty,2024-05-22\r\n20460,2090,AMER,fashion,mobile,78.17,5,0.212,loyalty,2024-07-04\r\n20461,2107,APAC,grocery,mobile,91.21,6,0.047,bundle,2024-06-20\r\n20462,2484,APAC,sports,online,63.78,3,0.067,none,2024-01-02\r\n20463,1026,APAC,sports,online,36.27,3,0.144,loyalty,2024-10-08\r\n20464,2297,EMEA,fashion,retail,72.04,7,0.121,loyalty,2024-10-04\r\n20465,2047,AMER,home,retail,82.49,4,0.222,none,2024-09-08\r\n20466,2048,LATAM,fashion,retail,36.60,8,0.214,coupon,2024-06-21\r\n20467,1522,LATAM,home,online,67.14,5,0.016,coupon,2024-03-19\r\n20468,2132,LATAM,home,mobile,38.45,5,0.042,none,2024-02-13\r\n20469,1467,LATAM,fashion,mobile,163.88,7,0.218,coupon,2024-12-07\r\n20470,1939,LATAM,grocery,online,50.92,1,0.073,none,2024-12-04\r\n20471,2452,LATAM,grocery,online,132.32,1,0.186,coupon,2024-11-28\r\n20472,2498,LATAM,toys,mobile,98.24,8,0.224,none,2024-03-27\r\n20473,1072,LATAM,sports,online,45.11,6,0.021,none,2024-04-27\r\n20474,1421,APAC,grocery,online,122.05,2,0.082,coupon,2024-06-14\r\n20475,1230,EMEA,grocery,partner,58.82,2,0.009,none,2024-06-10\r\n20476,1437,EMEA,fashion,retail,107.42,6,0.060,coupon,2024-01-05\r\n20477,2417,LATAM,electronics,online,114.89,1,0.227,coupon,2024-12-01\r\n20478,2437,LATAM,grocery,mobile,59.26,4,0.205,coupon,2024-06-21\r\n20479,1275,EMEA,sports,online,132.82,7,0.235,none,2024-03-07\r\n20480,2313,LATAM,home,retail,106.79,1,0.160,coupon,2024-12-11\r\n20481,1713,EMEA,electronics,online,37.51,2,0.043,none,2024-01-07\r\n20482,1922,EMEA,electronics,online,53.51,8,0.125,bundle,2024-06-23\r\n20483,1198,AMER,electronics,online,42.78,3,0.093,bundle,2024-06-09\r\n20484,1482,AMER,sports,mobile,50.73,7,0.1
43,none,2024-01-12\r\n20485,2209,AMER,grocery,mobile,56.74,5,0.207,none,2024-07-13\r\n20486,1169,LATAM,electronics,online,81.26,6,0.014,none,2024-09-24\r\n20487,1085,EMEA,fashion,online,29.53,4,0.087,none,2024-07-15\r\n20488,2111,EMEA,fashion,retail,74.51,4,0.005,none,2024-09-14\r\n20489,2155,APAC,grocery,online,27.83,3,0.079,bundle,2024-04-12\r\n20490,1917,LATAM,toys,online,29.31,4,0.225,none,2024-08-13\r\n20491,1897,AMER,sports,online,33.28,4,0.017,none,2024-09-13\r\n20492,1157,LATAM,electronics,retail,43.26,7,0.057,coupon,2024-01-27\r\n20493,1520,APAC,grocery,retail,37.95,3,0.015,bundle,2024-05-19\r\n20494,1078,APAC,toys,online,34.24,4,0.041,none,2024-03-06\r\n20495,2477,APAC,sports,online,22.16,8,0.110,none,2024-02-21\r\n20496,1505,EMEA,fashion,partner,55.72,5,0.056,loyalty,2024-12-15\r\n20497,1948,EMEA,grocery,online,48.63,5,0.034,coupon,2024-09-25\r\n20498,1879,EMEA,electronics,retail,84.91,3,0.195,none,2024-10-28\r\n20499,1875,EMEA,electronics,retail,86.21,5,0.165,none,2024-10-27\r\n20500,1289,LATAM,home,retail,37.98,1,0.191,loyalty,2024-08-12\r\n20501,1478,EMEA,grocery,retail,50.09,5,0.226,none,2024-04-03\r\n20502,2320,LATAM,sports,online,18.62,1,0.110,bundle,2024-03-07\r\n20503,1902,AMER,sports,retail,64.72,7,0.090,bundle,2024-08-17\r\n20504,1731,AMER,sports,online,45.83,8,0.190,none,2024-05-28\r\n20505,2298,APAC,electronics,online,88.77,6,0.034,loyalty,2024-06-28\r\n20506,1170,AMER,fashion,retail,160.06,2,0.183,none,2024-06-22\r\n20507,1231,AMER,home,retail,15.29,2,0.203,none,2024-06-08\r\n20508,2162,EMEA,electronics,online,47.49,1,0.150,none,2024-04-01\r\n20509,1532,APAC,grocery,mobile,27.98,8,0.161,none,2024-06-01\r\n20510,1835,AMER,grocery,online,120.63,2,0.081,coupon,2024-09-01\r\n20511,1993,APAC,toys,online,118.62,4,0.023,coupon,2024-07-05\r\n20512,1986,LATAM,fashion,partner,18.22,4,0.076,none,2024-01-13\r\n20513,2224,EMEA,sports,online,42.25,8,0.165,bundle,2024-04-14\r\n20514,1387,AMER,electronics,online,97.44,5,0.107,none,2024-08-05\r\n20515,1812,EM
EA,electronics,online,86.54,7,0.091,none,2024-07-27\r\n20516,1883,LATAM,sports,online,93.09,4,0.169,none,2024-05-11\r\n20517,1358,APAC,grocery,online,41.66,6,0.043,none,2024-03-27\r\n20518,1372,APAC,fashion,retail,39.84,1,0.123,none,2024-06-08\r\n20519,1262,APAC,electronics,online,44.70,7,0.210,none,2024-10-04\r\n20520,1034,EMEA,fashion,retail,125.06,2,0.091,coupon,2024-10-13\r\n20521,1667,AMER,grocery,retail,59.45,4,0.133,bundle,2024-11-28\r\n20522,2325,LATAM,home,online,62.08,2,0.028,none,2024-02-12\r\n20523,1275,EMEA,home,online,49.53,4,0.150,none,2024-06-14\r\n20524,1623,AMER,grocery,online,37.61,3,0.132,loyalty,2024-06-14\r\n20525,1256,LATAM,grocery,online,93.06,1,0.142,none,2024-01-17\r\n20526,1662,LATAM,fashion,retail,108.89,1,0.063,loyalty,2024-11-02\r\n20527,2304,LATAM,sports,online,27.41,5,0.248,bundle,2024-11-12\r\n20528,1014,EMEA,grocery,mobile,69.39,6,0.141,none,2024-03-16\r\n20529,1037,EMEA,toys,online,66.31,7,0.219,none,2024-08-15\r\n20530,1644,EMEA,electronics,mobile,61.29,6,0.125,none,2024-03-20\r\n20531,2123,AMER,toys,mobile,126.87,4,0.081,coupon,2024-11-20\r\n20532,2001,EMEA,electronics,mobile,49.70,6,0.088,none,2024-04-17\r\n20533,1544,LATAM,grocery,online,80.23,3,0.247,none,2024-10-22\r\n20534,2450,EMEA,grocery,online,54.56,1,0.001,none,2024-09-03\r\n20535,1659,APAC,toys,partner,37.28,6,0.168,coupon,2024-10-23\r\n20536,1182,EMEA,grocery,online,106.68,6,0.014,none,2024-04-27\r\n20537,2050,APAC,electronics,retail,99.03,4,0.060,none,2024-06-06\r\n20538,1799,EMEA,grocery,online,100.30,4,0.040,none,2024-04-15\r\n20539,1477,APAC,sports,online,74.68,1,0.218,none,2024-02-19\r\n20540,1466,AMER,grocery,partner,44.63,6,0.007,loyalty,2024-12-19\r\n20541,1379,EMEA,electronics,retail,102.22,1,0.123,none,2024-03-17\r\n20542,2295,EMEA,home,online,12.76,1,0.056,bundle,2024-11-06\r\n20543,1480,APAC,home,retail,44.72,7,0.094,none,2024-12-22\r\n20544,1349,APAC,fashion,online,24.26,6,0.031,loyalty,2024-06-07\r\n20545,1963,AMER,sports,online,46.22,2,0.071,coupon,2024
-06-28\r\n20546,1722,EMEA,home,retail,60.78,6,0.231,loyalty,2024-12-01\r\n20547,1002,EMEA,sports,online,32.07,6,0.061,coupon,2024-10-03\r\n20548,1440,AMER,home,mobile,89.16,7,0.227,loyalty,2024-01-12\r\n20549,2212,EMEA,grocery,online,38.53,5,0.024,none,2024-02-28\r\n20550,2337,AMER,sports,online,149.18,8,0.178,loyalty,2024-01-21\r\n20551,2223,EMEA,fashion,mobile,62.27,6,0.239,coupon,2024-09-15\r\n20552,1103,EMEA,fashion,retail,73.76,6,0.054,none,2024-09-27\r\n20553,1200,EMEA,grocery,online,60.81,1,0.131,none,2024-03-26\r\n20554,2294,EMEA,home,retail,75.71,3,0.080,none,2024-06-06\r\n20555,1692,LATAM,fashion,online,50.97,5,0.145,none,2024-11-14\r\n20556,2033,LATAM,sports,retail,75.93,5,0.233,coupon,2024-02-25\r\n20557,1584,EMEA,home,online,124.63,8,0.213,none,2024-08-03\r\n20558,2242,AMER,toys,online,46.43,7,0.067,bundle,2024-06-22\r\n20559,2441,EMEA,electronics,retail,69.28,8,0.106,coupon,2024-01-01\r\n20560,2486,APAC,fashion,retail,55.59,3,0.240,none,2024-03-03\r\n20561,1300,EMEA,grocery,online,44.64,8,0.249,none,2024-04-27\r\n20562,1755,APAC,grocery,online,23.71,5,0.030,none,2024-09-02\r\n20563,1518,AMER,grocery,online,30.49,3,0.198,none,2024-08-22\r\n20564,1398,APAC,toys,online,37.24,8,0.048,none,2024-10-07\r\n20565,1557,LATAM,sports,retail,40.53,3,0.049,loyalty,2024-03-13\r\n20566,2164,AMER,home,retail,40.03,1,0.100,bundle,2024-01-16\r\n20567,1338,EMEA,sports,retail,34.38,8,0.164,none,2024-04-05\r\n20568,1683,AMER,fashion,retail,99.60,3,0.149,none,2024-07-03\r\n20569,1899,APAC,sports,mobile,33.93,3,0.176,bundle,2024-01-24\r\n20570,1385,LATAM,electronics,retail,55.58,2,0.216,coupon,2024-12-07\r\n20571,1933,EMEA,electronics,retail,40.68,4,0.223,none,2024-11-13\r\n20572,1160,LATAM,electronics,mobile,170.67,8,0.155,none,2024-11-05\r\n20573,1272,AMER,electronics,online,46.34,3,0.007,none,2024-08-19\r\n20574,1319,EMEA,fashion,retail,45.47,2,0.092,none,2024-01-24\r\n20575,2468,EMEA,home,retail,71.43,2,0.241,none,2024-09-03\r\n20576,1796,LATAM,home,online,55.76,2,0.204,n
one,2024-05-09\r\n20577,1612,LATAM,sports,retail,117.89,1,0.112,coupon,2024-05-11\r\n20578,1336,APAC,electronics,online,26.43,8,0.074,none,2024-10-16\r\n20579,1441,LATAM,home,partner,68.18,1,0.052,coupon,2024-12-01\r\n20580,1844,APAC,grocery,online,81.51,2,0.099,loyalty,2024-11-02\r\n20581,1099,LATAM,electronics,retail,69.30,3,0.199,none,2024-08-08\r\n20582,2052,LATAM,fashion,mobile,80.58,3,0.088,coupon,2024-07-21\r\n20583,1015,AMER,fashion,retail,67.38,2,0.025,coupon,2024-12-22\r\n20584,1520,APAC,toys,mobile,108.70,7,0.020,bundle,2024-04-12\r\n20585,2079,EMEA,grocery,online,124.84,5,0.215,coupon,2024-10-09\r\n20586,1066,AMER,electronics,retail,82.95,1,0.167,bundle,2024-05-14\r\n20587,1760,LATAM,home,partner,38.04,1,0.140,none,2024-06-21\r\n20588,1998,APAC,electronics,online,69.89,2,0.073,coupon,2024-02-06\r\n20589,1862,LATAM,sports,retail,81.30,5,0.047,loyalty,2024-05-08\r\n20590,1611,EMEA,home,online,59.56,5,0.018,none,2024-01-26\r\n20591,1970,LATAM,fashion,retail,33.44,2,0.205,none,2024-11-18\r\n20592,1988,AMER,electronics,retail,58.32,1,0.110,none,2024-09-15\r\n20593,1109,APAC,grocery,mobile,69.52,6,0.147,none,2024-06-19\r\n20594,2198,EMEA,electronics,retail,62.50,4,0.246,none,2024-08-12\r\n20595,2069,AMER,home,retail,53.66,3,0.031,none,2024-03-20\r\n20596,1994,LATAM,grocery,partner,74.22,4,0.132,none,2024-11-13\r\n20597,1390,APAC,home,retail,75.24,6,0.048,none,2024-04-16\r\n20598,2424,LATAM,home,retail,63.13,1,0.178,none,2024-09-16\r\n20599,2313,LATAM,sports,online,33.75,5,0.218,coupon,2024-12-22\r\n20600,2284,EMEA,electronics,online,102.19,4,0.051,bundle,2024-04-20\r\n20601,1575,APAC,grocery,retail,77.15,7,0.099,loyalty,2024-03-24\r\n20602,1485,APAC,electronics,retail,76.22,6,0.195,coupon,2024-05-26\r\n20603,2300,EMEA,home,retail,43.25,8,0.078,none,2024-04-18\r\n20604,2113,LATAM,home,online,35.75,2,0.107,bundle,2024-04-25\r\n20605,2127,LATAM,home,mobile,66.51,6,0.084,loyalty,2024-08-12\r\n20606,1723,LATAM,electronics,online,106.18,6,0.015,none,2024-02-11\r\n20
607,2042,LATAM,electronics,online,91.58,7,0.207,coupon,2024-10-25\r\n20608,1891,APAC,home,online,36.06,2,0.083,loyalty,2024-10-14\r\n20609,1548,EMEA,toys,online,53.90,1,0.070,loyalty,2024-01-11\r\n20610,1074,LATAM,fashion,retail,42.48,7,0.035,loyalty,2024-07-06\r\n20611,1191,EMEA,electronics,online,59.76,6,0.183,none,2024-10-08\r\n20612,1505,EMEA,grocery,mobile,38.33,6,0.171,loyalty,2024-11-25\r\n20613,2439,AMER,grocery,online,152.60,7,0.006,none,2024-03-04\r\n20614,1570,AMER,grocery,retail,44.74,1,0.121,loyalty,2024-01-01\r\n20615,2150,APAC,grocery,online,153.15,6,0.175,bundle,2024-06-05\r\n20616,1296,LATAM,home,online,116.92,2,0.071,coupon,2024-11-04\r\n20617,1352,AMER,fashion,online,108.30,1,0.189,none,2024-08-28\r\n20618,2497,AMER,electronics,online,43.56,8,0.096,coupon,2024-02-15\r\n20619,1295,EMEA,sports,online,39.59,2,0.036,bundle,2024-09-07\r\n20620,2055,AMER,electronics,mobile,111.49,3,0.088,none,2024-01-16\r\n20621,1053,AMER,home,online,68.51,3,0.129,none,2024-09-12\r\n20622,1404,EMEA,toys,online,31.00,6,0.097,none,2024-10-06\r\n20623,2320,LATAM,home,online,39.78,5,0.000,none,2024-04-19\r\n20624,1567,AMER,home,online,56.74,5,0.196,loyalty,2024-07-21\r\n20625,1253,AMER,grocery,online,61.70,2,0.174,coupon,2024-07-08\r\n20626,1482,AMER,home,online,35.86,1,0.204,none,2024-10-03\r\n20627,1318,LATAM,sports,retail,51.68,7,0.227,none,2024-03-19\r\n20628,2334,LATAM,toys,partner,99.61,2,0.212,coupon,2024-01-13\r\n20629,2462,EMEA,electronics,online,24.44,3,0.156,coupon,2024-11-11\r\n20630,1675,LATAM,grocery,online,59.65,2,0.215,coupon,2024-04-10\r\n20631,1759,EMEA,grocery,online,75.38,3,0.058,bundle,2024-02-12\r\n20632,1514,LATAM,sports,online,55.33,5,0.083,bundle,2024-09-13\r\n20633,1468,AMER,toys,retail,55.81,8,0.002,none,2024-02-06\r\n20634,2340,EMEA,fashion,retail,46.41,1,0.032,none,2024-11-12\r\n20635,2474,LATAM,grocery,retail,59.22,2,0.050,none,2024-08-01\r\n20636,2406,EMEA,grocery,partner,50.10,8,0.079,none,2024-11-03\r\n20637,1447,LATAM,home,retail,40.92,6,0.
113,none,2024-08-11\r\n20638,2338,AMER,sports,online,158.43,4,0.022,none,2024-03-12\r\n20639,1608,AMER,electronics,online,34.36,6,0.093,none,2024-05-07\r\n20640,2156,AMER,sports,online,80.91,8,0.111,bundle,2024-05-14\r\n20641,1915,LATAM,fashion,mobile,76.59,7,0.135,coupon,2024-09-23\r\n20642,2245,APAC,fashion,mobile,54.27,4,0.127,coupon,2024-08-15\r\n20643,2395,APAC,sports,retail,67.13,7,0.210,none,2024-08-23\r\n20644,1958,APAC,electronics,online,81.61,5,0.072,none,2024-07-11\r\n20645,2062,EMEA,grocery,mobile,150.89,6,0.192,none,2024-11-27\r\n20646,2362,AMER,grocery,online,31.55,6,0.023,none,2024-09-10\r\n20647,1898,EMEA,grocery,online,52.00,8,0.134,coupon,2024-09-21\r\n20648,1588,LATAM,electronics,mobile,53.15,2,0.007,none,2024-07-17\r\n20649,1664,LATAM,electronics,online,46.39,5,0.187,loyalty,2024-04-09\r\n20650,1582,AMER,grocery,partner,27.83,5,0.124,none,2024-11-25\r\n20651,1435,AMER,home,online,65.32,6,0.137,none,2024-09-06\r\n20652,1647,LATAM,electronics,mobile,81.73,3,0.060,bundle,2024-09-16\r\n20653,2285,APAC,electronics,retail,134.90,7,0.098,none,2024-09-13\r\n20654,1172,APAC,fashion,retail,54.92,2,0.187,loyalty,2024-11-22\r\n20655,1152,LATAM,grocery,retail,36.32,4,0.054,none,2024-03-17\r\n20656,1248,APAC,sports,online,20.52,7,0.063,none,2024-11-07\r\n20657,1943,AMER,electronics,online,182.63,2,0.175,coupon,2024-09-03\r\n20658,1255,AMER,grocery,online,33.11,6,0.072,bundle,2024-05-18\r\n20659,2268,EMEA,sports,retail,87.47,5,0.025,none,2024-07-08\r\n20660,1440,AMER,sports,retail,73.60,2,0.007,none,2024-12-15\r\n20661,1832,APAC,sports,online,91.37,6,0.082,bundle,2024-03-09\r\n20662,2406,EMEA,electronics,retail,42.66,3,0.164,none,2024-08-07\r\n20663,1172,APAC,grocery,retail,75.10,4,0.221,none,2024-10-27\r\n20664,1322,AMER,fashion,retail,52.94,6,0.130,none,2024-09-01\r\n20665,1233,AMER,grocery,retail,42.14,4,0.184,coupon,2024-05-01\r\n20666,2226,EMEA,electronics,online,53.51,5,0.072,none,2024-03-23\r\n20667,2179,LATAM,electronics,online,76.14,7,0.066,bundle,2024
-03-27\r\n20668,1744,EMEA,grocery,retail,50.95,1,0.187,none,2024-11-05\r\n20669,2058,LATAM,grocery,mobile,88.55,4,0.076,none,2024-09-23\r\n20670,2148,EMEA,grocery,online,65.87,3,0.082,none,2024-06-01\r\n20671,1434,EMEA,electronics,retail,49.59,5,0.212,bundle,2024-04-18\r\n20672,2368,AMER,fashion,online,96.79,2,0.082,none,2024-04-07\r\n20673,2037,LATAM,electronics,retail,126.65,2,0.029,bundle,2024-10-07\r\n20674,2461,LATAM,grocery,online,94.41,7,0.016,coupon,2024-08-19\r\n20675,2286,AMER,sports,online,56.42,7,0.245,bundle,2024-10-12\r\n20676,2114,AMER,toys,partner,80.07,4,0.154,loyalty,2024-12-01\r\n20677,1310,AMER,grocery,online,46.04,3,0.117,coupon,2024-10-09\r\n20678,2178,AMER,sports,online,81.71,8,0.137,bundle,2024-02-08\r\n20679,1646,APAC,grocery,online,26.70,1,0.175,bundle,2024-04-25\r\n20680,2317,LATAM,electronics,online,80.26,2,0.051,coupon,2024-09-14\r\n20681,1870,EMEA,sports,mobile,68.14,5,0.059,none,2024-08-20\r\n20682,1104,APAC,electronics,retail,34.99,8,0.064,none,2024-08-17\r\n20683,1501,AMER,home,online,134.07,2,0.208,coupon,2024-06-06\r\n20684,1085,EMEA,fashion,online,65.99,1,0.200,coupon,2024-01-19\r\n20685,1968,EMEA,home,retail,48.95,4,0.099,coupon,2024-01-02\r\n20686,1313,EMEA,home,mobile,33.63,1,0.136,none,2024-02-10\r\n20687,1648,APAC,electronics,retail,78.57,4,0.177,none,2024-08-25\r\n20688,2321,APAC,grocery,online,63.50,1,0.230,bundle,2024-02-12\r\n20689,1600,AMER,grocery,online,42.88,4,0.029,none,2024-11-21\r\n20690,2441,EMEA,grocery,retail,113.64,2,0.213,none,2024-07-13\r\n20691,1581,APAC,electronics,retail,60.33,3,0.003,none,2024-01-26\r\n20692,2390,AMER,grocery,mobile,77.76,8,0.199,none,2024-02-04\r\n20693,1309,EMEA,sports,retail,41.38,8,0.062,none,2024-02-12\r\n20694,2383,APAC,toys,retail,75.24,6,0.094,none,2024-02-28\r\n20695,1908,AMER,grocery,mobile,40.29,3,0.053,coupon,2024-04-19\r\n20696,1311,APAC,electronics,mobile,54.02,7,0.224,none,2024-02-09\r\n20697,2383,APAC,electronics,retail,35.18,4,0.192,bundle,2024-08-15\r\n20698,1434,EMEA,gr
ocery,retail,22.81,3,0.189,none,2024-06-02\r\n20699,1225,APAC,electronics,online,91.53,1,0.039,coupon,2024-04-28\r\n20700,1853,APAC,electronics,retail,35.67,1,0.013,coupon,2024-08-08\r\n20701,1502,APAC,home,retail,28.40,6,0.143,none,2024-05-17\r\n20702,1793,LATAM,electronics,online,35.27,2,0.203,coupon,2024-05-14\r\n20703,2477,APAC,electronics,online,56.81,5,0.202,none,2024-05-13\r\n20704,1681,LATAM,electronics,retail,31.71,2,0.192,none,2024-04-28\r\n20705,2171,EMEA,electronics,online,92.90,6,0.018,none,2024-03-28\r\n20706,1067,APAC,grocery,retail,83.12,7,0.190,none,2024-06-08\r\n20707,1822,EMEA,sports,online,69.20,1,0.004,coupon,2024-08-02\r\n20708,1661,LATAM,grocery,mobile,65.78,3,0.059,none,2024-04-07\r\n20709,2478,AMER,electronics,retail,26.22,1,0.008,none,2024-11-08\r\n20710,1812,EMEA,home,retail,28.64,8,0.067,none,2024-10-21\r\n20711,2493,APAC,grocery,mobile,66.78,3,0.201,loyalty,2024-08-20\r\n20712,2122,AMER,toys,mobile,48.81,1,0.012,none,2024-09-15\r\n20713,2385,APAC,fashion,partner,83.70,6,0.003,none,2024-04-06\r\n20714,1478,EMEA,grocery,retail,64.61,4,0.129,loyalty,2024-08-18\r\n20715,1592,LATAM,fashion,retail,68.11,2,0.111,none,2024-07-12\r\n20716,1328,APAC,grocery,mobile,50.72,6,0.175,loyalty,2024-11-11\r\n20717,2127,LATAM,grocery,online,20.41,8,0.249,none,2024-10-07\r\n20718,2089,EMEA,toys,online,29.39,6,0.158,none,2024-11-21\r\n20719,1263,AMER,grocery,online,74.02,4,0.219,none,2024-01-26\r\n20720,1342,LATAM,grocery,online,88.56,1,0.032,none,2024-06-07\r\n20721,1110,LATAM,sports,online,83.46,5,0.136,none,2024-02-24\r\n20722,1145,AMER,fashion,partner,90.69,7,0.021,bundle,2024-09-01\r\n20723,1669,AMER,sports,retail,196.35,2,0.035,coupon,2024-10-17\r\n20724,1690,LATAM,sports,retail,156.60,3,0.007,none,2024-05-10\r\n20725,1738,LATAM,grocery,online,68.48,1,0.096,none,2024-02-25\r\n20726,1460,LATAM,grocery,retail,92.86,4,0.010,none,2024-01-23\r\n20727,2192,APAC,electronics,online,86.96,1,0.096,loyalty,2024-12-15\r\n20728,2060,LATAM,home,retail,98.04,7,0.199,n
one,2024-05-19\r\n20729,1623,AMER,sports,retail,37.60,8,0.073,bundle,2024-08-01\r\n20730,1212,LATAM,electronics,online,67.27,1,0.059,none,2024-01-13\r\n20731,1429,APAC,toys,retail,56.46,6,0.050,none,2024-03-17\r\n20732,1651,LATAM,grocery,online,17.98,4,0.120,none,2024-03-21\r\n20733,1853,APAC,grocery,online,51.17,3,0.175,coupon,2024-09-02\r\n20734,1945,AMER,electronics,retail,33.04,2,0.111,coupon,2024-06-24\r\n20735,1806,APAC,electronics,retail,94.07,6,0.025,bundle,2024-09-07\r\n20736,1269,LATAM,fashion,retail,25.95,3,0.065,bundle,2024-02-13\r\n20737,2460,AMER,fashion,online,28.04,7,0.174,bundle,2024-01-09\r\n20738,2000,APAC,toys,online,74.19,8,0.106,bundle,2024-03-23\r\n20739,1970,LATAM,sports,online,49.55,4,0.147,none,2024-08-15\r\n20740,1917,LATAM,home,retail,28.48,5,0.071,coupon,2024-03-24\r\n20741,1480,APAC,grocery,mobile,68.73,5,0.002,none,2024-01-02\r\n20742,1389,LATAM,grocery,online,32.00,4,0.071,bundle,2024-11-18\r\n20743,2086,APAC,electronics,retail,55.62,6,0.053,coupon,2024-01-16\r\n20744,2255,AMER,grocery,online,58.58,5,0.069,none,2024-06-01\r\n20745,2234,LATAM,sports,online,68.89,7,0.144,none,2024-08-14\r\n20746,2388,LATAM,grocery,mobile,51.52,4,0.041,bundle,2024-01-09\r\n20747,1197,LATAM,grocery,online,56.75,8,0.026,none,2024-07-06\r\n20748,1271,EMEA,fashion,online,31.97,6,0.110,bundle,2024-02-22\r\n20749,1654,EMEA,fashion,mobile,35.06,8,0.213,none,2024-09-06\r\n20750,2085,AMER,electronics,retail,140.14,2,0.105,loyalty,2024-12-19\r\n20751,1971,EMEA,sports,online,65.81,8,0.207,none,2024-10-18\r\n20752,1994,LATAM,toys,retail,49.56,6,0.166,none,2024-08-06\r\n20753,1711,APAC,fashion,online,22.42,4,0.011,none,2024-09-10\r\n20754,1162,AMER,fashion,retail,36.45,2,0.069,bundle,2024-09-26\r\n20755,2473,EMEA,sports,online,37.24,5,0.172,none,2024-04-14\r\n20756,2453,AMER,home,mobile,60.71,3,0.092,none,2024-03-05\r\n20757,1486,LATAM,sports,online,40.89,4,0.027,bundle,2024-06-22\r\n20758,1601,APAC,toys,online,50.93,6,0.062,none,2024-03-14\r\n20759,1744,EMEA,fashion
,online,67.16,8,0.170,bundle,2024-09-26\r\n20760,1639,APAC,toys,mobile,36.55,8,0.093,none,2024-08-02\r\n20761,1738,LATAM,grocery,online,13.49,2,0.015,none,2024-12-14\r\n20762,1631,APAC,electronics,retail,107.31,1,0.150,none,2024-10-03\r\n20763,2312,APAC,home,retail,37.60,6,0.123,none,2024-08-22\r\n20764,2211,APAC,electronics,partner,42.09,4,0.082,none,2024-07-04\r\n20765,1604,EMEA,fashion,retail,50.27,4,0.170,none,2024-04-21\r\n20766,1115,AMER,fashion,online,71.15,5,0.162,none,2024-04-05\r\n20767,2252,EMEA,fashion,online,57.18,2,0.125,loyalty,2024-10-20\r\n20768,1486,LATAM,sports,retail,121.44,4,0.192,none,2024-04-02\r\n20769,1281,AMER,fashion,retail,47.63,6,0.188,bundle,2024-10-03\r\n20770,2477,APAC,home,retail,54.25,3,0.164,none,2024-11-14\r\n20771,2145,AMER,fashion,retail,87.12,1,0.134,none,2024-03-28\r\n20772,1411,LATAM,electronics,online,73.40,6,0.205,none,2024-09-15\r\n20773,1404,EMEA,grocery,partner,49.08,3,0.089,none,2024-08-10\r\n20774,1013,LATAM,home,mobile,54.16,5,0.220,coupon,2024-02-21\r\n20775,1919,EMEA,grocery,online,52.13,7,0.031,bundle,2024-08-27\r\n20776,2314,EMEA,electronics,online,61.47,1,0.248,none,2024-12-27\r\n20777,2066,APAC,home,mobile,63.05,4,0.064,none,2024-05-09\r\n20778,1155,EMEA,home,online,35.23,7,0.065,coupon,2024-01-01\r\n20779,2192,APAC,sports,retail,80.92,2,0.158,none,2024-11-09\r\n20780,1876,LATAM,toys,mobile,67.69,1,0.044,none,2024-09-09\r\n20781,2148,EMEA,toys,online,30.95,7,0.221,none,2024-01-24\r\n20782,2419,LATAM,fashion,online,58.04,6,0.145,none,2024-06-06\r\n20783,1049,AMER,grocery,online,63.82,8,0.128,none,2024-08-13\r\n20784,2341,EMEA,electronics,retail,36.67,1,0.075,bundle,2024-10-21\r\n20785,1447,LATAM,home,retail,47.17,8,0.135,coupon,2024-05-01\r\n20786,1318,LATAM,sports,retail,28.26,5,0.033,none,2024-01-07\r\n20787,1907,EMEA,home,mobile,44.23,7,0.086,bundle,2024-07-12\r\n20788,1443,EMEA,home,partner,41.15,7,0.098,none,2024-03-09\r\n20789,1984,LATAM,electronics,online,67.18,8,0.033,coupon,2024-07-11\r\n20790,2412,LATAM
,grocery,partner,35.28,7,0.213,none,2024-02-03\r\n20791,1445,APAC,electronics,retail,64.11,4,0.240,none,2024-04-10\r\n20792,2404,EMEA,home,retail,49.88,5,0.070,loyalty,2024-08-09\r\n20793,1431,APAC,home,retail,35.68,5,0.171,none,2024-04-11\r\n20794,2352,APAC,home,retail,76.56,6,0.132,none,2024-01-28\r\n20795,1091,EMEA,home,retail,67.75,4,0.050,none,2024-06-24\r\n20796,1535,AMER,fashion,mobile,96.63,3,0.065,none,2024-09-08\r\n20797,1718,EMEA,toys,online,40.85,5,0.121,none,2024-07-15\r\n20798,1028,EMEA,fashion,online,27.49,6,0.053,none,2024-10-26\r\n20799,1074,LATAM,toys,retail,58.69,8,0.191,bundle,2024-08-28\r\n20800,2126,APAC,home,retail,28.02,5,0.051,loyalty,2024-01-09\r\n20801,1584,EMEA,grocery,retail,66.65,4,0.239,none,2024-01-07\r\n20802,1617,AMER,home,online,64.83,3,0.152,bundle,2024-07-07\r\n20803,1600,AMER,fashion,mobile,44.49,7,0.242,none,2024-09-22\r\n20804,2003,LATAM,grocery,partner,32.86,4,0.247,none,2024-09-13\r\n20805,2319,AMER,electronics,online,70.49,8,0.036,none,2024-11-27\r\n20806,2297,EMEA,grocery,partner,70.95,3,0.142,none,2024-08-07\r\n20807,1991,APAC,electronics,retail,62.04,4,0.057,loyalty,2024-07-01\r\n20808,2125,LATAM,home,mobile,47.01,5,0.135,none,2024-01-15\r\n20809,1563,EMEA,grocery,retail,40.82,6,0.034,none,2024-05-12\r\n20810,1461,LATAM,fashion,online,200.51,1,0.071,none,2024-11-27\r\n20811,1245,APAC,home,mobile,173.90,6,0.081,none,2024-09-03\r\n20812,1440,AMER,home,online,68.08,7,0.049,none,2024-01-22\r\n20813,1673,AMER,grocery,online,63.19,5,0.225,none,2024-04-18\r\n20814,1122,AMER,home,mobile,42.60,3,0.241,none,2024-10-08\r\n20815,1709,EMEA,toys,retail,137.97,4,0.229,bundle,2024-01-22\r\n20816,1941,AMER,fashion,partner,44.79,4,0.190,loyalty,2024-10-20\r\n20817,1519,APAC,electronics,online,46.89,2,0.072,none,2024-05-12\r\n20818,1162,AMER,grocery,online,96.82,8,0.241,coupon,2024-09-10\r\n20819,2403,LATAM,home,partner,116.38,5,0.001,none,2024-12-24\r\n20820,2165,AMER,fashion,online,48.90,6,0.130,coupon,2024-01-11\r\n20821,2117,EMEA,groce
ry,retail,81.71,3,0.183,none,2024-06-03\r\n20822,1823,EMEA,grocery,mobile,56.08,1,0.188,none,2024-12-20\r\n20823,1420,APAC,sports,online,87.50,6,0.222,loyalty,2024-12-11\r\n20824,1703,AMER,electronics,mobile,62.99,5,0.090,none,2024-03-08\r\n20825,2305,AMER,electronics,retail,103.81,6,0.074,coupon,2024-11-05\r\n20826,2413,AMER,home,retail,106.78,1,0.225,none,2024-06-10\r\n20827,1278,AMER,fashion,online,49.85,6,0.066,coupon,2024-12-14\r\n20828,1316,APAC,electronics,online,37.07,8,0.159,none,2024-08-17\r\n20829,2108,AMER,fashion,online,85.09,8,0.111,coupon,2024-04-17\r\n20830,1086,AMER,fashion,mobile,111.10,6,0.087,none,2024-11-15\r\n20831,1428,APAC,electronics,mobile,43.77,6,0.080,none,2024-11-05\r\n20832,1948,EMEA,grocery,online,60.09,5,0.029,none,2024-01-01\r\n20833,2390,AMER,grocery,online,72.13,7,0.100,none,2024-04-05\r\n20834,1059,AMER,sports,retail,22.74,4,0.085,bundle,2024-01-07\r\n20835,2349,APAC,fashion,retail,154.92,6,0.032,none,2024-11-26\r\n20836,1586,LATAM,grocery,retail,73.84,1,0.085,none,2024-06-16\r\n20837,1822,EMEA,grocery,online,93.81,1,0.081,none,2024-01-21\r\n20838,1133,EMEA,home,online,83.58,7,0.093,bundle,2024-04-01\r\n20839,1518,AMER,electronics,online,76.59,6,0.185,coupon,2024-09-16\r\n20840,1594,LATAM,fashion,online,45.37,5,0.110,bundle,2024-07-18\r\n20841,1787,APAC,sports,online,43.34,3,0.216,coupon,2024-04-19\r\n20842,1787,APAC,toys,mobile,127.01,4,0.119,coupon,2024-02-26\r\n20843,1210,LATAM,home,mobile,40.09,3,0.003,coupon,2024-11-05\r\n20844,1274,LATAM,grocery,mobile,82.37,5,0.060,none,2024-01-18\r\n20845,1485,APAC,toys,partner,40.17,4,0.198,coupon,2024-04-16\r\n20846,1296,LATAM,grocery,mobile,18.33,4,0.203,none,2024-12-15\r\n20847,2309,AMER,grocery,online,61.07,4,0.182,none,2024-01-13\r\n20848,2441,EMEA,electronics,partner,78.80,3,0.067,none,2024-06-18\r\n20849,2197,LATAM,toys,retail,36.35,2,0.083,coupon,2024-09-17\r\n20850,1169,LATAM,fashion,online,37.17,1,0.049,none,2024-07-21\r\n20851,2013,APAC,electronics,mobile,135.53,7,0.019,none,20
24-02-13\r\n20852,1663,LATAM,grocery,online,37.73,8,0.087,bundle,2024-01-12\r\n20853,2155,APAC,home,retail,46.12,1,0.245,bundle,2024-03-04\r\n20854,1046,EMEA,electronics,online,45.94,4,0.094,none,2024-09-04\r\n20855,2134,AMER,sports,online,73.46,4,0.238,bundle,2024-06-12\r\n20856,1478,EMEA,electronics,retail,26.44,2,0.151,none,2024-04-17\r\n20857,1627,LATAM,fashion,retail,50.52,3,0.159,none,2024-07-14\r\n20858,2366,APAC,electronics,retail,20.56,4,0.190,bundle,2024-02-03\r\n20859,1211,EMEA,grocery,retail,55.24,6,0.223,coupon,2024-01-27\r\n20860,1071,AMER,toys,online,59.13,7,0.229,none,2024-07-06\r\n20861,1684,EMEA,grocery,retail,66.61,1,0.204,none,2024-07-12\r\n20862,1962,APAC,grocery,mobile,40.76,1,0.127,none,2024-02-11\r\n20863,1167,EMEA,electronics,retail,31.34,4,0.007,bundle,2024-04-07\r\n20864,1510,EMEA,toys,online,55.44,2,0.067,none,2024-02-20\r\n20865,1543,AMER,fashion,retail,78.49,2,0.202,none,2024-12-02\r\n20866,1094,LATAM,grocery,retail,78.70,3,0.158,none,2024-02-20\r\n20867,2108,AMER,home,online,106.94,8,0.094,loyalty,2024-09-14\r\n20868,1079,LATAM,grocery,retail,76.02,7,0.118,none,2024-01-14\r\n20869,1785,EMEA,home,retail,17.57,1,0.139,none,2024-09-11\r\n20870,2308,AMER,fashion,retail,88.86,5,0.061,none,2024-01-02\r\n20871,2434,APAC,electronics,online,38.40,7,0.149,none,2024-04-16\r\n20872,1771,AMER,grocery,online,17.95,8,0.202,none,2024-11-27\r\n20873,1256,LATAM,electronics,retail,42.49,1,0.166,none,2024-03-09\r\n20874,1089,LATAM,home,online,47.96,1,0.076,coupon,2024-03-25\r\n20875,2158,APAC,electronics,online,160.65,5,0.188,bundle,2024-06-12\r\n20876,2491,APAC,sports,online,93.82,1,0.217,none,2024-07-19\r\n20877,1199,APAC,toys,retail,50.96,6,0.101,loyalty,2024-09-09\r\n20878,2409,APAC,grocery,online,43.13,8,0.192,coupon,2024-06-20\r\n20879,1098,APAC,home,online,73.55,5,0.062,none,2024-10-28\r\n20880,1961,EMEA,electronics,online,46.48,5,0.241,none,2024-12-14\r\n20881,1961,EMEA,grocery,mobile,104.75,6,0.240,none,2024-08-11\r\n20882,1190,EMEA,grocery,onlin
e,41.87,7,0.228,coupon,2024-07-25\r\n20883,1789,EMEA,grocery,online,24.89,7,0.096,none,2024-04-10\r\n20884,2348,EMEA,fashion,partner,44.11,5,0.229,coupon,2024-07-28\r\n20885,1399,AMER,toys,retail,88.32,4,0.059,loyalty,2024-09-04\r\n20886,1992,LATAM,fashion,online,38.20,3,0.243,none,2024-06-24\r\n20887,2187,EMEA,home,retail,40.94,1,0.141,bundle,2024-04-27\r\n20888,1893,APAC,toys,online,72.02,7,0.203,none,2024-01-09\r\n20889,2273,APAC,electronics,mobile,32.91,7,0.046,coupon,2024-10-26\r\n20890,1536,LATAM,grocery,online,69.21,7,0.046,none,2024-06-20\r\n20891,1731,AMER,grocery,retail,73.64,6,0.100,bundle,2024-03-12\r\n20892,2469,LATAM,home,mobile,49.72,4,0.187,coupon,2024-09-15\r\n20893,1312,EMEA,toys,online,64.24,7,0.163,bundle,2024-09-24\r\n20894,1645,EMEA,grocery,online,57.96,4,0.142,none,2024-05-03\r\n20895,1026,APAC,electronics,online,50.67,2,0.201,none,2024-03-24\r\n20896,1833,EMEA,home,online,51.77,3,0.202,none,2024-02-03\r\n20897,2115,APAC,sports,online,67.35,6,0.165,none,2024-12-06\r\n20898,1898,EMEA,electronics,online,144.46,1,0.164,none,2024-10-09\r\n20899,1287,AMER,fashion,retail,120.98,6,0.008,coupon,2024-06-08\r\n20900,1040,LATAM,fashion,retail,32.15,5,0.011,none,2024-10-09\r\n20901,2320,LATAM,home,online,191.06,1,0.012,none,2024-03-09\r\n20902,1218,AMER,sports,mobile,53.43,4,0.036,none,2024-07-10\r\n20903,2376,LATAM,sports,retail,26.47,2,0.090,none,2024-03-27\r\n20904,1063,AMER,toys,retail,47.25,6,0.221,none,2024-05-14\r\n20905,1594,LATAM,home,retail,66.91,4,0.233,coupon,2024-09-07\r\n20906,1524,LATAM,electronics,mobile,106.87,5,0.042,none,2024-10-18\r\n20907,2444,EMEA,grocery,retail,50.46,8,0.098,bundle,2024-10-14\r\n20908,2375,AMER,grocery,retail,63.57,5,0.122,coupon,2024-09-17\r\n20909,1869,AMER,grocery,retail,69.30,7,0.165,none,2024-05-23\r\n20910,2465,EMEA,toys,retail,41.17,7,0.250,coupon,2024-08-08\r\n20911,1060,LATAM,electronics,online,24.34,6,0.238,bundle,2024-02-26\r\n20912,1590,APAC,fashion,online,67.78,1,0.116,none,2024-05-04\r\n20913,1976,AMER
,electronics,online,29.56,6,0.225,loyalty,2024-05-11\r\n20914,1674,LATAM,toys,online,36.38,4,0.193,bundle,2024-09-27\r\n20915,2227,LATAM,fashion,mobile,48.30,8,0.024,loyalty,2024-02-21\r\n20916,2113,LATAM,grocery,online,17.48,3,0.120,coupon,2024-03-13\r\n20917,2071,APAC,home,online,23.11,6,0.187,none,2024-11-16\r\n20918,2108,AMER,sports,partner,24.95,4,0.108,bundle,2024-01-24\r\n20919,1812,EMEA,sports,online,170.10,6,0.240,none,2024-09-25\r\n20920,1799,EMEA,fashion,partner,111.12,8,0.248,none,2024-09-02\r\n20921,1089,LATAM,sports,retail,83.64,8,0.101,loyalty,2024-01-09\r\n20922,2224,EMEA,toys,mobile,22.63,1,0.218,bundle,2024-06-05\r\n20923,1901,AMER,home,online,44.06,7,0.200,bundle,2024-02-01\r\n20924,1014,EMEA,grocery,retail,79.05,3,0.034,none,2024-07-03\r\n20925,1223,LATAM,fashion,retail,63.48,3,0.017,coupon,2024-02-06\r\n20926,1736,AMER,home,retail,29.98,1,0.177,none,2024-12-01\r\n20927,2137,LATAM,sports,online,81.10,3,0.011,bundle,2024-08-01\r\n20928,1990,EMEA,grocery,online,19.02,2,0.122,bundle,2024-09-09\r\n20929,1285,EMEA,fashion,online,40.83,2,0.084,none,2024-09-24\r\n20930,1405,LATAM,fashion,partner,108.45,6,0.122,none,2024-07-24\r\n20931,2047,AMER,toys,online,52.67,4,0.134,coupon,2024-07-21\r\n20932,1130,LATAM,grocery,online,61.92,2,0.127,loyalty,2024-06-24\r\n20933,2186,LATAM,electronics,retail,68.27,4,0.061,bundle,2024-07-01\r\n20934,1342,LATAM,electronics,online,139.53,7,0.101,none,2024-06-07\r\n20935,2137,LATAM,sports,mobile,52.53,4,0.064,none,2024-12-15\r\n20936,2196,AMER,grocery,retail,61.32,7,0.124,loyalty,2024-07-07\r\n20937,1740,EMEA,electronics,mobile,38.83,6,0.104,bundle,2024-12-26\r\n20938,2215,LATAM,sports,retail,38.15,2,0.063,none,2024-05-22\r\n20939,2074,AMER,toys,retail,22.58,4,0.187,bundle,2024-08-22\r\n20940,1447,LATAM,home,retail,84.96,6,0.062,none,2024-07-24\r\n20941,2171,EMEA,grocery,retail,79.87,5,0.210,coupon,2024-09-04\r\n20942,1842,LATAM,electronics,online,78.85,5,0.007,coupon,2024-10-13\r\n20943,2446,LATAM,grocery,mobile,46.28,4,0
.047,none,2024-10-17\r\n20944,2008,APAC,fashion,online,49.92,3,0.249,loyalty,2024-07-25\r\n20945,2416,LATAM,grocery,online,26.69,8,0.190,loyalty,2024-09-06\r\n20946,1243,AMER,grocery,retail,105.64,2,0.041,none,2024-12-25\r\n20947,1297,AMER,toys,online,39.08,4,0.036,loyalty,2024-03-27\r\n20948,2167,APAC,fashion,retail,77.18,8,0.123,none,2024-02-16\r\n20949,1105,AMER,grocery,online,24.59,4,0.162,none,2024-11-23\r\n20950,2407,EMEA,home,retail,33.63,5,0.245,coupon,2024-11-10\r\n20951,2303,EMEA,grocery,retail,94.47,6,0.013,none,2024-05-11\r\n20952,2157,AMER,fashion,retail,52.45,4,0.134,none,2024-02-19\r\n20953,1726,EMEA,grocery,retail,35.97,2,0.015,none,2024-11-05\r\n20954,2201,AMER,electronics,retail,177.62,1,0.026,bundle,2024-07-18\r\n20955,1691,LATAM,grocery,online,48.89,2,0.115,none,2024-04-02\r\n20956,2198,EMEA,sports,mobile,97.70,2,0.048,none,2024-01-13\r\n20957,1267,EMEA,fashion,online,54.79,8,0.175,none,2024-02-23\r\n20958,1940,APAC,sports,online,50.09,8,0.130,none,2024-11-22\r\n20959,1515,EMEA,sports,retail,49.58,8,0.236,none,2024-07-13\r\n20960,2360,EMEA,toys,retail,67.08,4,0.014,none,2024-11-26\r\n20961,1923,LATAM,electronics,online,27.10,8,0.101,bundle,2024-01-22\r\n20962,1898,EMEA,home,online,71.69,8,0.040,none,2024-08-12\r\n20963,1759,EMEA,home,online,26.20,2,0.010,none,2024-05-23\r\n20964,1000,APAC,grocery,mobile,102.56,3,0.181,loyalty,2024-06-05\r\n20965,1147,EMEA,home,online,40.55,4,0.164,coupon,2024-08-12\r\n20966,1127,EMEA,grocery,online,58.84,3,0.046,none,2024-02-22\r\n20967,1941,AMER,electronics,online,64.58,1,0.188,none,2024-05-01\r\n20968,1918,EMEA,electronics,retail,18.37,7,0.157,none,2024-02-11\r\n20969,2164,AMER,home,online,28.27,5,0.111,none,2024-08-24\r\n20970,2295,EMEA,grocery,online,53.82,3,0.068,none,2024-01-21\r\n20971,2129,APAC,sports,online,30.09,3,0.105,none,2024-01-16\r\n20972,2317,LATAM,electronics,online,88.89,1,0.143,none,2024-10-01\r\n20973,2140,AMER,toys,partner,119.38,2,0.060,none,2024-09-15\r\n20974,1052,LATAM,sports,online,84.6
3,4,0.079,coupon,2024-07-05\r\n20975,1469,EMEA,grocery,mobile,73.07,8,0.042,coupon,2024-07-25\r\n20976,2439,AMER,grocery,partner,81.30,2,0.169,none,2024-08-17\r\n20977,1198,AMER,fashion,online,85.49,6,0.249,none,2024-12-04\r\n20978,1918,EMEA,electronics,mobile,108.25,7,0.015,coupon,2024-02-01\r\n20979,2204,AMER,sports,online,63.61,3,0.203,coupon,2024-09-15\r\n20980,1154,LATAM,home,mobile,111.95,4,0.218,none,2024-06-27\r\n20981,2487,LATAM,home,retail,79.32,8,0.229,none,2024-10-26\r\n20982,1928,AMER,electronics,online,103.45,3,0.147,coupon,2024-01-23\r\n20983,1986,LATAM,home,retail,61.08,3,0.028,none,2024-10-27\r\n20984,1253,AMER,electronics,retail,39.21,3,0.186,none,2024-02-24\r\n20985,1515,EMEA,fashion,mobile,14.79,2,0.209,coupon,2024-11-13\r\n20986,2158,APAC,sports,mobile,101.33,2,0.184,loyalty,2024-01-17\r\n20987,2383,APAC,grocery,retail,113.80,1,0.117,none,2024-06-25\r\n20988,1656,LATAM,grocery,retail,62.81,6,0.105,none,2024-11-13\r\n20989,1987,AMER,toys,online,55.40,5,0.087,none,2024-04-11\r\n20990,1593,AMER,electronics,mobile,85.54,8,0.200,none,2024-11-07\r\n20991,1820,AMER,grocery,partner,145.28,7,0.226,none,2024-11-22\r\n20992,2141,AMER,grocery,online,74.58,7,0.240,none,2024-10-15\r\n20993,1406,LATAM,electronics,retail,120.66,6,0.178,none,2024-01-23\r\n20994,2303,EMEA,home,online,87.60,5,0.193,none,2024-12-06\r\n20995,1363,EMEA,sports,online,82.77,3,0.226,none,2024-10-13\r\n20996,1127,EMEA,sports,online,62.59,5,0.249,loyalty,2024-01-27\r\n20997,2469,LATAM,fashion,retail,46.17,4,0.104,none,2024-06-09\r\n20998,1478,EMEA,home,partner,119.09,2,0.007,coupon,2024-04-01\r\n20999,1496,AMER,sports,retail,31.47,5,0.228,none,2024-01-27\r\n21000,1212,LATAM,grocery,retail,37.33,1,0.196,none,2024-11-25\r\n21001,1319,EMEA,fashion,retail,47.89,1,0.188,none,2024-08-07\r\n21002,2033,LATAM,electronics,mobile,147.48,5,0.165,coupon,2024-05-14\r\n21003,1866,EMEA,grocery,online,42.89,5,0.145,none,2024-10-25\r\n21004,1513,APAC,grocery,retail,61.42,5,0.121,none,2024-09-15\r\n21005,14
48,EMEA,fashion,retail,101.61,3,0.143,none,2024-08-07\r\n21006,1313,EMEA,electronics,mobile,22.43,3,0.170,loyalty,2024-06-02\r\n21007,1553,LATAM,home,retail,69.80,6,0.077,none,2024-02-10\r\n21008,1626,EMEA,grocery,online,42.36,4,0.242,none,2024-09-04\r\n21009,1619,APAC,grocery,retail,129.08,1,0.093,none,2024-05-14\r\n21010,1260,LATAM,electronics,online,110.23,8,0.105,bundle,2024-12-17\r\n21011,2003,LATAM,sports,retail,48.63,6,0.212,loyalty,2024-05-27\r\n21012,2264,LATAM,grocery,online,62.54,3,0.116,coupon,2024-10-09\r\n21013,1489,AMER,sports,online,106.11,3,0.101,none,2024-02-08\r\n21014,1763,LATAM,home,online,124.32,4,0.022,loyalty,2024-10-21\r\n21015,1650,LATAM,grocery,retail,119.19,8,0.127,none,2024-11-24\r\n21016,2268,EMEA,electronics,retail,57.37,4,0.065,none,2024-05-27\r\n21017,2306,AMER,toys,online,52.73,8,0.043,none,2024-05-14\r\n21018,1324,LATAM,toys,mobile,62.76,3,0.208,bundle,2024-12-19\r\n21019,2326,LATAM,sports,online,60.65,2,0.167,none,2024-12-27\r\n21020,1509,AMER,grocery,online,50.55,8,0.016,none,2024-03-25\r\n21021,2276,AMER,electronics,retail,48.65,7,0.049,none,2024-04-23\r\n21022,2056,LATAM,home,partner,88.00,7,0.080,loyalty,2024-08-03\r\n21023,1444,EMEA,sports,online,130.79,8,0.120,none,2024-04-04\r\n21024,1738,LATAM,home,retail,41.90,7,0.035,coupon,2024-06-18\r\n21025,1550,APAC,fashion,mobile,48.50,5,0.222,none,2024-04-28\r\n21026,1543,AMER,fashion,retail,65.59,3,0.057,none,2024-03-13\r\n21027,2007,LATAM,electronics,online,61.28,7,0.245,none,2024-04-23\r\n21028,1828,EMEA,sports,retail,137.26,7,0.228,none,2024-03-27\r\n21029,1873,EMEA,fashion,online,107.11,1,0.203,coupon,2024-05-14\r\n21030,1267,EMEA,electronics,retail,80.40,3,0.108,none,2024-11-19\r\n21031,2010,APAC,home,retail,112.23,6,0.019,loyalty,2024-08-20\r\n21032,2390,AMER,home,online,88.22,8,0.116,bundle,2024-05-05\r\n21033,1358,APAC,fashion,retail,96.03,4,0.165,loyalty,2024-12-13\r\n21034,1162,AMER,fashion,retail,158.87,6,0.210,bundle,2024-07-15\r\n21035,2087,LATAM,home,online,31.75,1,0
.246,coupon,2024-04-18\r\n21036,2102,APAC,toys,online,66.42,7,0.057,coupon,2024-07-11\r\n21037,1065,AMER,sports,online,22.97,6,0.015,loyalty,2024-12-05\r\n21038,1840,LATAM,electronics,retail,90.17,2,0.031,none,2024-09-03\r\n21039,1807,EMEA,fashion,online,89.13,4,0.177,none,2024-06-09\r\n21040,1606,AMER,electronics,retail,38.45,5,0.218,none,2024-08-23\r\n21041,1661,LATAM,fashion,online,43.24,2,0.068,none,2024-04-21\r\n21042,1881,LATAM,toys,online,81.36,6,0.101,none,2024-02-08\r\n21043,1847,LATAM,grocery,mobile,39.68,8,0.055,none,2024-05-25\r\n21044,1263,AMER,toys,online,122.66,4,0.113,none,2024-05-20\r\n21045,2180,AMER,sports,online,50.91,1,0.023,none,2024-03-17\r\n21046,1159,LATAM,grocery,online,73.66,4,0.034,coupon,2024-07-12\r\n21047,1397,LATAM,home,mobile,46.19,4,0.100,none,2024-02-09\r\n21048,2497,AMER,electronics,online,134.76,1,0.156,none,2024-07-12\r\n21049,2284,EMEA,sports,online,70.58,1,0.223,none,2024-09-04\r\n21050,1899,APAC,sports,online,40.93,8,0.211,coupon,2024-06-09\r\n21051,1850,APAC,fashion,online,33.29,4,0.122,bundle,2024-01-05\r\n21052,1632,LATAM,home,partner,37.03,7,0.063,coupon,2024-12-06\r\n21053,1531,EMEA,electronics,retail,105.96,3,0.021,none,2024-10-17\r\n21054,2167,APAC,toys,retail,54.71,6,0.105,none,2024-03-14\r\n21055,2106,LATAM,toys,partner,77.12,1,0.050,none,2024-01-19\r\n21056,1829,EMEA,sports,online,51.70,8,0.116,none,2024-12-05\r\n21057,1982,EMEA,fashion,online,33.53,5,0.021,none,2024-02-19\r\n21058,2002,APAC,electronics,online,137.32,3,0.104,bundle,2024-05-04\r\n21059,1233,AMER,grocery,online,67.07,5,0.164,none,2024-10-12\r\n21060,1084,AMER,grocery,online,24.71,6,0.150,none,2024-03-14\r\n21061,1326,AMER,grocery,online,118.61,2,0.044,coupon,2024-06-28\r\n21062,2125,LATAM,grocery,retail,91.26,2,0.209,loyalty,2024-02-21\r\n21063,1891,APAC,electronics,online,148.92,6,0.198,coupon,2024-04-22\r\n21064,1159,LATAM,grocery,mobile,62.62,5,0.160,none,2024-01-18\r\n21065,2338,AMER,sports,online,49.33,7,0.045,none,2024-07-25\r\n21066,2172,EMEA,s
ports,partner,49.22,8,0.026,none,2024-04-01\r\n21067,1558,EMEA,electronics,retail,76.71,3,0.181,none,2024-01-15\r\n21068,1357,EMEA,grocery,online,91.10,3,0.119,none,2024-07-12\r\n21069,1826,LATAM,home,retail,114.75,7,0.029,coupon,2024-07-27\r\n21070,1267,EMEA,sports,online,114.57,4,0.125,coupon,2024-03-20\r\n21071,1482,AMER,electronics,retail,74.41,4,0.195,none,2024-03-09\r\n21072,1163,AMER,sports,mobile,276.08,8,0.185,bundle,2024-02-25\r\n21073,2333,APAC,sports,online,62.03,4,0.157,coupon,2024-10-27\r\n21074,1814,AMER,grocery,partner,61.43,5,0.028,loyalty,2024-10-11\r\n21075,2448,APAC,sports,retail,60.34,3,0.230,none,2024-01-21\r\n21076,1785,EMEA,sports,online,69.23,8,0.239,none,2024-01-08\r\n21077,1269,LATAM,sports,online,82.08,2,0.102,none,2024-07-22\r\n21078,1800,APAC,electronics,online,76.63,3,0.112,none,2024-02-18\r\n21079,2478,AMER,electronics,retail,46.24,5,0.231,bundle,2024-02-13\r\n21080,1643,EMEA,electronics,online,22.24,2,0.144,coupon,2024-05-04\r\n21081,1468,AMER,fashion,mobile,95.18,6,0.107,bundle,2024-02-04\r\n21082,2346,LATAM,sports,retail,84.38,5,0.099,bundle,2024-04-26\r\n21083,2372,AMER,home,retail,42.12,8,0.127,bundle,2024-06-18\r\n21084,2061,EMEA,home,online,64.16,8,0.225,none,2024-05-13\r\n21085,1383,AMER,fashion,online,55.05,7,0.126,loyalty,2024-01-21\r\n21086,1157,LATAM,grocery,online,25.17,3,0.235,none,2024-06-02\r\n21087,1659,APAC,toys,online,100.00,8,0.160,none,2024-11-20\r\n21088,2447,AMER,home,retail,47.79,5,0.086,bundle,2024-10-23\r\n21089,2028,APAC,fashion,mobile,39.23,1,0.099,none,2024-06-04\r\n21090,2491,APAC,fashion,mobile,73.00,2,0.157,coupon,2024-02-08\r\n21091,1726,EMEA,grocery,online,86.40,6,0.021,loyalty,2024-10-28\r\n21092,1398,APAC,grocery,mobile,54.83,7,0.022,none,2024-10-11\r\n21093,2304,LATAM,sports,online,58.88,1,0.248,coupon,2024-05-17\r\n21094,1843,EMEA,home,retail,51.11,2,0.104,none,2024-08-12\r\n21095,1345,AMER,grocery,retail,22.54,8,0.150,none,2024-06-21\r\n21096,2170,EMEA,toys,retail,68.57,5,0.209,bundle,2024-07-11\
r\n21097,2188,EMEA,sports,online,27.38,4,0.096,bundle,2024-11-05\r\n21098,1941,AMER,grocery,partner,56.50,2,0.103,bundle,2024-03-10\r\n21099,1712,LATAM,electronics,online,50.71,3,0.113,none,2024-07-22\r\n21100,1216,APAC,electronics,partner,46.63,8,0.148,none,2024-01-04\r\n21101,1035,EMEA,grocery,online,43.41,3,0.005,bundle,2024-10-25\r\n21102,2472,AMER,grocery,online,31.24,6,0.247,loyalty,2024-09-27\r\n21103,1039,AMER,home,online,119.40,7,0.194,none,2024-02-13\r\n21104,2244,LATAM,electronics,partner,37.84,5,0.142,none,2024-04-17\r\n21105,2469,LATAM,sports,online,159.82,5,0.065,none,2024-08-10\r\n21106,1267,EMEA,home,online,102.21,1,0.038,coupon,2024-11-28\r\n21107,1416,EMEA,fashion,online,37.36,2,0.160,none,2024-11-22\r\n21108,1510,EMEA,fashion,online,29.59,4,0.191,none,2024-11-23\r\n21109,1658,AMER,fashion,retail,58.34,4,0.128,none,2024-01-03\r\n21110,1014,EMEA,electronics,retail,51.43,1,0.004,bundle,2024-02-20\r\n21111,2131,APAC,electronics,retail,51.17,1,0.216,none,2024-04-01\r\n21112,1876,LATAM,electronics,online,90.32,3,0.214,none,2024-01-28\r\n21113,1213,EMEA,home,retail,70.44,4,0.207,loyalty,2024-02-07\r\n21114,1605,APAC,electronics,online,45.27,6,0.043,none,2024-11-23\r\n21115,1953,EMEA,fashion,mobile,97.20,8,0.084,none,2024-05-04\r\n21116,1417,APAC,electronics,online,23.75,1,0.177,none,2024-06-25\r\n21117,2457,EMEA,sports,retail,37.18,8,0.132,coupon,2024-12-25\r\n21118,1389,LATAM,sports,retail,67.42,4,0.062,none,2024-09-03\r\n21119,1696,LATAM,fashion,mobile,45.15,7,0.155,none,2024-01-20\r\n21120,1430,EMEA,sports,online,162.31,5,0.057,loyalty,2024-08-05\r\n21121,2425,APAC,electronics,online,52.34,6,0.168,none,2024-07-11\r\n21122,1272,AMER,home,online,59.69,5,0.204,none,2024-10-07\r\n21123,1279,EMEA,toys,retail,103.72,3,0.212,coupon,2024-02-23\r\n21124,2022,LATAM,sports,retail,55.87,6,0.021,none,2024-01-08\r\n21125,1239,APAC,home,partner,21.30,1,0.035,none,2024-09-08\r\n21126,1547,AMER,grocery,mobile,72.41,6,0.237,bundle,2024-10-13\r\n21127,1219,LATAM,home,mo
bile,19.95,7,0.107,bundle,2024-10-11\r\n21128,1414,APAC,home,retail,20.79,5,0.068,loyalty,2024-08-09\r\n21129,1814,AMER,toys,online,33.64,7,0.093,none,2024-04-26\r\n21130,1694,APAC,toys,online,27.49,3,0.041,coupon,2024-01-11\r\n21131,2011,AMER,grocery,online,35.11,5,0.101,none,2024-04-09\r\n21132,2235,AMER,home,retail,45.37,2,0.034,loyalty,2024-10-19\r\n21133,1353,EMEA,grocery,online,102.46,4,0.215,none,2024-06-03\r\n21134,1520,APAC,sports,partner,48.79,3,0.241,none,2024-10-19\r\n21135,2219,LATAM,electronics,online,63.51,2,0.226,bundle,2024-10-03\r\n21136,1354,AMER,grocery,online,57.38,8,0.129,none,2024-12-27\r\n21137,1143,LATAM,sports,online,43.74,1,0.184,bundle,2024-07-23\r\n21138,1162,AMER,sports,partner,157.01,7,0.045,coupon,2024-06-25\r\n21139,2190,LATAM,grocery,partner,61.20,7,0.020,loyalty,2024-12-01\r\n21140,2400,EMEA,grocery,mobile,431.98,8,0.144,none,2024-01-17\r\n21141,2186,LATAM,electronics,online,29.27,6,0.091,coupon,2024-08-01\r\n21142,1675,LATAM,fashion,online,39.68,1,0.067,none,2024-11-13\r\n21143,1958,APAC,home,retail,40.65,5,0.198,bundle,2024-06-18\r\n21144,1088,LATAM,toys,online,81.26,5,0.245,bundle,2024-05-14\r\n21145,1846,APAC,sports,online,82.90,8,0.103,none,2024-06-06\r\n21146,2176,AMER,toys,partner,33.35,3,0.068,none,2024-08-01\r\n21147,1956,APAC,electronics,online,83.97,2,0.013,none,2024-03-27\r\n21148,1005,LATAM,sports,partner,39.60,6,0.033,bundle,2024-10-13\r\n21149,2302,APAC,home,online,18.93,3,0.058,none,2024-08-13\r\n21150,2259,AMER,electronics,online,75.43,5,0.237,none,2024-04-02\r\n21151,1774,EMEA,fashion,online,19.91,8,0.221,coupon,2024-06-20\r\n21152,1401,LATAM,fashion,online,56.50,5,0.014,none,2024-12-20\r\n21153,1243,AMER,grocery,mobile,30.95,4,0.040,coupon,2024-07-13\r\n21154,2479,EMEA,electronics,retail,61.56,1,0.040,bundle,2024-05-17\r\n21155,1346,AMER,home,online,57.04,1,0.243,coupon,2024-03-26\r\n21156,1841,AMER,electronics,online,65.21,8,0.148,coupon,2024-08-04\r\n21157,1598,EMEA,electronics,online,22.71,1,0.215,coupon,2024-
11-05\r\n21158,1947,EMEA,grocery,online,11.71,7,0.200,none,2024-05-08\r\n21159,1780,APAC,fashion,retail,68.65,4,0.246,none,2024-09-09\r\n21160,2054,AMER,electronics,online,17.78,4,0.086,bundle,2024-06-18\r\n21161,2272,EMEA,fashion,online,37.37,1,0.011,none,2024-10-20\r\n21162,1148,AMER,home,mobile,17.74,6,0.181,coupon,2024-11-13\r\n21163,2053,AMER,fashion,mobile,22.17,3,0.062,none,2024-03-18\r\n21164,1569,APAC,fashion,mobile,42.80,2,0.250,none,2024-11-20\r\n21165,1217,EMEA,fashion,retail,55.28,7,0.066,coupon,2024-04-26\r\n21166,2210,APAC,home,online,110.70,1,0.243,none,2024-11-18\r\n21167,1736,AMER,sports,mobile,54.94,7,0.245,none,2024-10-21\r\n21168,1733,LATAM,electronics,retail,72.85,8,0.056,none,2024-12-21\r\n21169,1020,APAC,home,mobile,145.45,2,0.207,none,2024-06-16\r\n21170,1192,EMEA,fashion,online,41.97,6,0.017,none,2024-07-12\r\n21171,1967,EMEA,fashion,online,37.18,6,0.211,coupon,2024-04-12\r\n21172,1290,EMEA,grocery,mobile,38.09,2,0.108,bundle,2024-01-10\r\n21173,1470,LATAM,sports,online,35.45,1,0.201,none,2024-07-17\r\n21174,1071,AMER,sports,online,52.10,3,0.176,coupon,2024-02-25\r\n21175,2487,LATAM,fashion,mobile,58.34,8,0.164,none,2024-03-19\r\n21176,2260,EMEA,grocery,online,74.86,4,0.226,none,2024-03-07\r\n21177,1622,LATAM,grocery,retail,55.13,1,0.170,coupon,2024-10-22\r\n21178,2173,LATAM,fashion,online,74.70,3,0.121,none,2024-11-13\r\n21179,2184,APAC,grocery,online,65.88,6,0.183,loyalty,2024-10-28\r\n21180,2220,LATAM,toys,online,54.54,2,0.004,none,2024-05-28\r\n21181,1366,APAC,electronics,retail,73.84,3,0.159,none,2024-06-27\r\n21182,2046,APAC,toys,online,95.51,6,0.241,none,2024-06-17\r\n21183,1916,AMER,electronics,retail,72.40,4,0.077,bundle,2024-10-11\r\n21184,1021,AMER,grocery,retail,34.85,8,0.030,bundle,2024-01-25\r\n21185,2148,EMEA,home,online,35.56,7,0.070,none,2024-09-07\r\n21186,1944,AMER,home,online,74.96,3,0.089,none,2024-02-05\r\n21187,2099,AMER,sports,online,48.45,6,0.210,coupon,2024-06-08\r\n21188,1313,EMEA,fashion,online,80.03,7,0.213,none
,2024-03-22\r\n21189,1631,APAC,sports,online,63.27,1,0.230,loyalty,2024-01-24\r\n21190,1762,LATAM,toys,mobile,158.07,3,0.072,none,2024-04-26\r\n21191,1744,EMEA,grocery,online,127.28,6,0.062,bundle,2024-12-01\r\n21192,1982,EMEA,grocery,online,63.58,5,0.077,none,2024-09-21\r\n21193,1997,APAC,grocery,online,67.33,4,0.080,bundle,2024-07-09\r\n21194,1832,APAC,toys,retail,40.71,6,0.066,none,2024-07-26\r\n21195,1766,AMER,grocery,online,117.29,6,0.189,none,2024-12-19\r\n21196,1688,LATAM,fashion,mobile,70.69,8,0.180,none,2024-10-18\r\n21197,2283,AMER,electronics,retail,86.44,5,0.205,loyalty,2024-11-21\r\n21198,2213,APAC,home,online,21.85,7,0.005,none,2024-05-17\r\n21199,1961,EMEA,electronics,online,26.05,4,0.057,coupon,2024-03-11\r\n21200,1830,EMEA,electronics,online,80.44,8,0.200,coupon,2024-02-02\r\n21201,2427,LATAM,grocery,retail,37.61,4,0.169,none,2024-05-13\r\n21202,2278,APAC,fashion,retail,37.50,6,0.132,bundle,2024-06-15\r\n21203,2233,EMEA,toys,mobile,100.02,2,0.063,bundle,2024-07-20\r\n21204,1176,EMEA,home,partner,24.72,6,0.134,none,2024-07-04\r\n21205,1003,APAC,fashion,retail,74.21,5,0.086,none,2024-06-23\r\n21206,1484,AMER,fashion,retail,157.00,5,0.191,none,2024-08-13\r\n21207,2192,APAC,sports,online,119.12,1,0.238,none,2024-07-21\r\n21208,1364,EMEA,toys,mobile,62.20,4,0.179,none,2024-03-10\r\n21209,1358,APAC,fashion,online,28.53,6,0.172,none,2024-08-08\r\n21210,1474,LATAM,grocery,online,56.36,3,0.207,coupon,2024-08-11\r\n21211,1459,LATAM,home,online,66.90,1,0.077,coupon,2024-09-10\r\n21212,2105,APAC,fashion,mobile,112.51,5,0.123,loyalty,2024-10-16\r\n21213,1948,EMEA,fashion,mobile,24.82,6,0.083,none,2024-05-26\r\n21214,2273,APAC,electronics,online,40.48,8,0.093,bundle,2024-08-10\r\n21215,1301,AMER,sports,retail,35.34,7,0.077,coupon,2024-06-01\r\n21216,1504,AMER,grocery,online,104.85,2,0.234,loyalty,2024-02-21\r\n21217,1940,APAC,home,mobile,70.71,6,0.203,bundle,2024-05-18\r\n21218,1991,APAC,fashion,retail,103.47,2,0.225,loyalty,2024-02-22\r\n21219,1671,APAC,fashion,
online,113.06,2,0.101,none,2024-05-12\r\n21220,1911,LATAM,grocery,online,47.70,5,0.073,none,2024-06-05\r\n21221,1140,LATAM,electronics,online,61.47,6,0.031,bundle,2024-12-03\r\n21222,2429,EMEA,grocery,online,88.65,4,0.207,coupon,2024-09-16\r\n21223,1358,APAC,fashion,online,53.04,3,0.166,coupon,2024-03-01\r\n21224,1483,EMEA,grocery,online,58.10,5,0.194,loyalty,2024-01-25\r\n21225,2140,AMER,grocery,retail,52.36,4,0.003,bundle,2024-03-26\r\n21226,2478,AMER,fashion,online,42.58,8,0.087,loyalty,2024-01-08\r\n21227,2289,APAC,grocery,retail,18.75,6,0.152,coupon,2024-05-03\r\n21228,1806,APAC,electronics,online,43.94,3,0.038,loyalty,2024-05-07\r\n21229,2267,AMER,home,online,144.65,6,0.233,none,2024-12-18\r\n21230,1817,APAC,home,mobile,60.32,6,0.139,none,2024-09-06\r\n21231,1574,AMER,electronics,online,27.15,1,0.043,coupon,2024-03-18\r\n21232,1545,AMER,fashion,online,62.00,1,0.139,coupon,2024-03-17\r\n21233,2306,AMER,toys,mobile,78.61,5,0.238,coupon,2024-10-04\r\n21234,1039,AMER,electronics,mobile,84.90,3,0.240,none,2024-08-10\r\n21235,1843,EMEA,fashion,online,73.16,3,0.057,bundle,2024-04-10\r\n21236,1490,AMER,fashion,partner,88.63,8,0.034,coupon,2024-05-13\r\n21237,1728,AMER,grocery,retail,49.27,8,0.071,none,2024-05-23\r\n21238,2057,APAC,fashion,mobile,84.72,8,0.193,loyalty,2024-08-24\r\n21239,2212,EMEA,toys,mobile,95.22,1,0.236,coupon,2024-12-05\r\n21240,1913,LATAM,grocery,online,41.79,8,0.049,loyalty,2024-06-28\r\n21241,2202,APAC,fashion,online,74.12,3,0.046,none,2024-07-06\r\n21242,1696,LATAM,fashion,online,98.72,3,0.225,none,2024-12-26\r\n21243,1097,EMEA,grocery,online,65.98,8,0.188,none,2024-06-08\r\n21244,1414,APAC,electronics,retail,27.54,5,0.173,coupon,2024-12-18\r\n21245,1470,LATAM,electronics,mobile,34.73,8,0.076,none,2024-01-26\r\n21246,1352,AMER,grocery,mobile,37.18,1,0.218,none,2024-02-28\r\n21247,1421,APAC,grocery,mobile,61.64,2,0.223,coupon,2024-04-01\r\n21248,1634,AMER,electronics,mobile,61.86,3,0.036,coupon,2024-06-27\r\n21249,1790,AMER,fashion,online,107.90
,1,0.164,loyalty,2024-12-13\r\n21250,1807,EMEA,electronics,online,36.00,5,0.078,none,2024-11-06\r\n21251,2344,LATAM,grocery,online,211.38,7,0.063,coupon,2024-07-08\r\n21252,1029,EMEA,fashion,online,26.65,8,0.169,coupon,2024-05-15\r\n21253,2429,EMEA,grocery,retail,35.76,8,0.227,none,2024-11-15\r\n21254,1280,LATAM,home,mobile,45.57,4,0.028,none,2024-08-27\r\n21255,2392,EMEA,fashion,retail,88.46,6,0.075,coupon,2024-07-06\r\n21256,1399,AMER,home,online,22.21,5,0.038,none,2024-10-17\r\n21257,1558,EMEA,home,retail,65.73,4,0.205,none,2024-12-21\r\n21258,1077,AMER,toys,online,52.59,4,0.090,none,2024-03-25\r\n21259,2400,EMEA,home,online,29.73,5,0.138,coupon,2024-02-15\r\n21260,1503,APAC,grocery,online,66.45,1,0.247,none,2024-11-21\r\n21261,1941,AMER,sports,online,91.70,1,0.056,loyalty,2024-03-16\r\n21262,1939,LATAM,home,mobile,48.31,2,0.093,none,2024-11-01\r\n21263,1573,AMER,home,online,139.45,6,0.182,bundle,2024-02-02\r\n21264,1891,APAC,home,retail,59.64,6,0.218,none,2024-02-01\r\n21265,1661,LATAM,electronics,online,70.87,2,0.185,bundle,2024-07-25\r\n21266,2203,APAC,grocery,online,72.40,6,0.191,bundle,2024-12-09\r\n21267,1414,APAC,sports,online,68.42,4,0.115,none,2024-01-01\r\n21268,1724,LATAM,fashion,partner,30.62,7,0.167,loyalty,2024-03-12\r\n21269,2255,AMER,sports,retail,159.24,5,0.136,bundle,2024-09-08\r\n21270,1167,EMEA,electronics,online,42.86,3,0.137,bundle,2024-01-03\r\n21271,2253,AMER,home,mobile,44.48,7,0.161,none,2024-10-17\r\n21272,2288,AMER,fashion,retail,39.21,4,0.234,none,2024-07-23\r\n21273,1896,EMEA,sports,online,64.94,8,0.019,coupon,2024-04-25\r\n21274,2482,EMEA,fashion,retail,27.76,5,0.030,none,2024-01-09\r\n21275,1740,EMEA,sports,partner,49.90,6,0.116,coupon,2024-05-08\r\n21276,1650,LATAM,home,online,28.60,2,0.065,coupon,2024-06-28\r\n21277,1804,AMER,toys,online,66.56,1,0.230,none,2024-03-23\r\n21278,2467,AMER,home,mobile,34.04,1,0.103,none,2024-03-18\r\n21279,2175,AMER,electronics,online,23.90,8,0.131,loyalty,2024-03-26\r\n21280,1536,LATAM,grocery,retai
l,77.33,7,0.004,coupon,2024-11-06\r\n21281,1357,EMEA,toys,online,80.15,4,0.115,bundle,2024-03-04\r\n21282,2498,LATAM,toys,mobile,50.72,3,0.154,none,2024-02-24\r\n21283,2291,EMEA,grocery,retail,66.15,7,0.118,none,2024-12-12\r\n21284,1383,AMER,sports,retail,41.98,8,0.143,coupon,2024-04-21\r\n21285,1710,APAC,sports,online,43.26,8,0.228,bundle,2024-12-28\r\n21286,2008,APAC,sports,retail,158.77,5,0.147,bundle,2024-12-04\r\n21287,1655,LATAM,electronics,online,61.72,1,0.208,none,2024-08-09\r\n21288,1101,AMER,electronics,online,44.82,8,0.063,none,2024-02-15\r\n21289,2256,AMER,home,online,41.83,3,0.142,loyalty,2024-04-02\r\n21290,1135,APAC,home,retail,57.13,2,0.240,none,2024-05-08\r\n21291,1882,AMER,toys,mobile,24.39,5,0.153,bundle,2024-11-18\r\n21292,2378,LATAM,home,mobile,58.77,7,0.011,bundle,2024-07-20\r\n21293,1026,APAC,toys,online,31.45,1,0.219,loyalty,2024-06-19\r\n21294,1452,LATAM,electronics,mobile,85.60,3,0.124,none,2024-11-01\r\n21295,1928,AMER,fashion,online,38.56,5,0.099,none,2024-05-27\r\n21296,1487,AMER,fashion,online,50.03,8,0.106,loyalty,2024-05-08\r\n21297,1638,EMEA,grocery,retail,42.10,7,0.170,coupon,2024-08-14\r\n21298,1264,APAC,home,retail,139.93,6,0.193,coupon,2024-08-16\r\n21299,1774,EMEA,sports,retail,163.18,8,0.068,bundle,2024-07-23\r\n21300,1324,LATAM,fashion,online,77.36,7,0.057,none,2024-06-03\r\n21301,1017,AMER,electronics,online,78.98,8,0.125,none,2024-03-03\r\n21302,1275,EMEA,fashion,retail,98.93,5,0.124,none,2024-05-06\r\n21303,1420,APAC,home,mobile,36.57,4,0.235,none,2024-06-01\r\n21304,1339,EMEA,grocery,retail,20.74,3,0.200,bundle,2024-04-16\r\n21305,1356,LATAM,fashion,mobile,39.52,7,0.178,coupon,2024-08-09\r\n21306,1476,APAC,grocery,online,111.31,6,0.227,none,2024-03-22\r\n21307,1904,APAC,fashion,mobile,74.92,2,0.181,bundle,2024-03-07\r\n21308,2136,AMER,electronics,online,63.36,4,0.142,none,2024-08-10\r\n21309,1314,AMER,sports,retail,124.15,8,0.163,none,2024-03-08\r\n21310,2237,EMEA,electronics,retail,44.69,4,0.070,bundle,2024-12-17\r\n21311
,1562,AMER,electronics,online,35.38,6,0.210,coupon,2024-05-02\r\n21312,2333,APAC,grocery,retail,91.44,3,0.039,none,2024-01-08\r\n21313,2103,LATAM,sports,mobile,70.79,1,0.000,bundle,2024-06-07\r\n21314,2030,EMEA,electronics,online,30.28,6,0.214,none,2024-09-03\r\n21315,2487,LATAM,grocery,retail,46.60,2,0.171,coupon,2024-04-18\r\n21316,1002,EMEA,fashion,online,101.21,6,0.085,coupon,2024-06-21\r\n21317,2335,EMEA,home,retail,64.97,2,0.162,coupon,2024-07-08\r\n21318,1394,LATAM,home,online,56.64,6,0.196,bundle,2024-01-22\r\n21319,1653,APAC,grocery,mobile,83.90,5,0.190,none,2024-08-04\r\n21320,1791,LATAM,home,mobile,65.15,6,0.052,none,2024-03-01\r\n21321,1127,EMEA,grocery,retail,67.02,5,0.053,none,2024-04-18\r\n21322,2123,AMER,sports,online,60.55,5,0.050,coupon,2024-03-10\r\n21323,1288,LATAM,grocery,retail,59.40,7,0.096,none,2024-07-12\r\n21324,1449,EMEA,home,online,46.93,4,0.189,loyalty,2024-08-17\r\n21325,1093,APAC,electronics,online,121.17,5,0.071,none,2024-01-16\r\n21326,1129,LATAM,home,online,174.83,8,0.100,none,2024-08-20\r\n21327,1037,EMEA,fashion,online,70.80,3,0.155,coupon,2024-07-13\r\n21328,1776,APAC,grocery,retail,51.62,8,0.169,loyalty,2024-04-09\r\n21329,1069,APAC,fashion,mobile,45.64,4,0.119,none,2024-03-01\r\n21330,2282,EMEA,home,online,46.42,7,0.141,coupon,2024-10-02\r\n21331,1515,EMEA,electronics,mobile,64.47,7,0.002,none,2024-05-12\r\n21332,2297,EMEA,electronics,retail,54.28,5,0.063,none,2024-12-21\r\n21333,1298,LATAM,toys,retail,40.93,8,0.046,none,2024-10-26\r\n21334,1365,LATAM,home,retail,76.63,6,0.182,loyalty,2024-04-15\r\n21335,2482,EMEA,toys,retail,97.64,3,0.064,bundle,2024-03-13\r\n21336,1630,APAC,toys,retail,37.85,4,0.093,none,2024-09-15\r\n21337,2195,APAC,fashion,online,155.22,2,0.240,none,2024-02-07\r\n21338,1503,APAC,grocery,online,101.36,3,0.044,none,2024-06-25\r\n21339,1450,EMEA,fashion,online,29.85,1,0.212,none,2024-12-11\r\n21340,1721,EMEA,grocery,retail,32.44,5,0.059,none,2024-03-22\r\n21341,1179,APAC,grocery,mobile,55.84,8,0.023,coupon,202
4-06-12\r\n21342,1723,LATAM,toys,online,30.28,6,0.239,coupon,2024-07-16\r\n21343,1675,LATAM,sports,retail,44.13,4,0.076,bundle,2024-03-27\r\n21344,1855,APAC,electronics,online,58.10,5,0.232,none,2024-06-19\r\n21345,1671,APAC,grocery,online,49.88,8,0.083,coupon,2024-06-26\r\n21346,2303,EMEA,fashion,mobile,55.43,2,0.048,bundle,2024-08-26\r\n21347,1567,AMER,sports,retail,55.81,4,0.199,coupon,2024-03-26\r\n21348,1527,AMER,electronics,online,34.87,7,0.095,bundle,2024-01-07\r\n21349,2008,APAC,electronics,mobile,37.43,6,0.096,none,2024-03-20\r\n21350,2135,EMEA,electronics,online,44.30,7,0.213,none,2024-05-26\r\n21351,1772,EMEA,home,mobile,51.23,2,0.148,none,2024-03-03\r\n21352,2485,AMER,home,online,49.48,8,0.164,coupon,2024-11-19\r\n21353,1373,LATAM,electronics,mobile,43.10,6,0.189,loyalty,2024-11-06\r\n21354,1359,LATAM,home,retail,48.27,8,0.096,none,2024-09-03\r\n21355,2387,EMEA,home,online,154.58,4,0.135,none,2024-09-27\r\n21356,2163,EMEA,sports,online,74.80,7,0.046,none,2024-08-01\r\n21357,2319,AMER,grocery,retail,54.44,4,0.148,none,2024-02-14\r\n21358,1900,APAC,grocery,online,54.65,4,0.037,none,2024-08-25\r\n21359,1211,EMEA,electronics,mobile,74.44,8,0.064,bundle,2024-10-24\r\n21360,1920,LATAM,home,retail,74.90,7,0.040,none,2024-11-18\r\n21361,2449,LATAM,grocery,mobile,34.82,1,0.115,coupon,2024-08-22\r\n21362,1044,EMEA,grocery,retail,70.20,2,0.247,loyalty,2024-08-17\r\n21363,2096,LATAM,home,mobile,21.89,3,0.057,coupon,2024-07-17\r\n21364,1895,AMER,home,mobile,34.39,1,0.096,none,2024-02-18\r\n21365,2498,LATAM,grocery,mobile,52.98,3,0.126,none,2024-03-07\r\n21366,1123,LATAM,fashion,retail,86.79,3,0.231,loyalty,2024-05-11\r\n21367,2495,EMEA,grocery,retail,64.27,7,0.048,none,2024-08-06\r\n21368,1298,LATAM,grocery,online,51.21,1,0.061,bundle,2024-03-02\r\n21369,1231,AMER,electronics,retail,57.99,4,0.021,none,2024-03-07\r\n21370,2320,LATAM,sports,mobile,54.00,6,0.017,none,2024-06-12\r\n21371,1839,APAC,electronics,online,71.34,5,0.086,coupon,2024-01-21\r\n21372,1014,EMEA,fash
ion,online,121.38,6,0.244,coupon,2024-07-09\r\n21373,1674,LATAM,fashion,online,36.47,2,0.001,coupon,2024-03-23\r\n21374,1268,EMEA,electronics,retail,34.12,1,0.037,coupon,2024-03-15\r\n21375,1279,EMEA,grocery,online,45.98,1,0.219,none,2024-09-12\r\n21376,1153,AMER,grocery,online,37.23,6,0.051,loyalty,2024-10-15\r\n21377,1663,LATAM,fashion,mobile,27.74,7,0.243,coupon,2024-11-21\r\n21378,1880,LATAM,electronics,retail,49.27,2,0.027,loyalty,2024-09-27\r\n21379,2326,LATAM,electronics,retail,27.58,7,0.062,none,2024-02-28\r\n21380,1463,EMEA,sports,mobile,81.52,1,0.216,none,2024-03-04\r\n21381,1822,EMEA,electronics,retail,71.22,1,0.236,none,2024-10-02\r\n21382,1414,APAC,fashion,online,91.82,6,0.110,coupon,2024-06-28\r\n21383,1794,AMER,toys,online,33.19,2,0.025,none,2024-05-16\r\n21384,2242,AMER,toys,online,17.44,4,0.111,none,2024-05-22\r\n21385,2199,LATAM,home,retail,31.27,3,0.146,coupon,2024-08-12\r\n21386,2201,AMER,electronics,online,46.21,7,0.050,bundle,2024-08-19\r\n21387,1064,AMER,electronics,retail,54.83,2,0.030,none,2024-11-24\r\n21388,2188,EMEA,home,mobile,87.42,1,0.236,none,2024-04-10\r\n21389,2391,EMEA,electronics,online,66.71,8,0.173,none,2024-06-17\r\n21390,2107,APAC,grocery,retail,71.36,1,0.155,none,2024-04-14\r\n21391,2080,LATAM,grocery,online,70.78,2,0.091,none,2024-10-07\r\n21392,2142,LATAM,grocery,online,49.41,4,0.195,coupon,2024-12-28\r\n21393,1544,LATAM,toys,mobile,55.98,1,0.189,none,2024-09-05\r\n21394,1445,APAC,home,online,66.40,1,0.214,none,2024-10-18\r\n21395,1379,EMEA,grocery,online,35.25,7,0.219,coupon,2024-04-16\r\n21396,2111,EMEA,home,online,54.59,2,0.231,none,2024-07-07\r\n21397,2145,AMER,sports,retail,29.27,7,0.242,none,2024-08-16\r\n21398,1407,LATAM,home,online,118.73,2,0.072,bundle,2024-02-13\r\n21399,1753,APAC,sports,partner,76.05,5,0.207,none,2024-01-03\r\n21400,1731,AMER,toys,retail,69.98,7,0.183,coupon,2024-07-12\r\n21401,2080,LATAM,fashion,partner,24.60,5,0.036,coupon,2024-01-25\r\n21402,1435,AMER,home,online,83.24,7,0.006,none,2024-01-01\
r\n21403,2472,AMER,grocery,online,56.40,5,0.134,none,2024-05-03\r\n21404,1222,AMER,grocery,retail,151.88,4,0.233,none,2024-03-03\r\n21405,1814,AMER,electronics,retail,143.12,3,0.131,none,2024-04-17\r\n21406,2459,AMER,fashion,retail,111.05,3,0.102,none,2024-04-26\r\n21407,2218,EMEA,electronics,retail,144.58,3,0.061,bundle,2024-05-18\r\n21408,2129,APAC,fashion,retail,56.99,5,0.047,none,2024-09-24\r\n21409,2239,EMEA,electronics,mobile,31.24,3,0.101,coupon,2024-01-18\r\n21410,1047,APAC,home,online,83.99,1,0.180,bundle,2024-07-12\r\n21411,2474,LATAM,grocery,mobile,61.86,1,0.106,none,2024-09-03\r\n21412,1004,LATAM,fashion,online,53.29,6,0.117,coupon,2024-05-21\r\n21413,1753,APAC,fashion,retail,57.70,2,0.146,none,2024-01-21\r\n21414,1655,LATAM,grocery,online,76.21,1,0.095,none,2024-12-16\r\n21415,2462,EMEA,home,online,168.27,6,0.122,loyalty,2024-07-07\r\n21416,1401,LATAM,home,online,176.69,4,0.244,none,2024-12-16\r\n21417,2054,AMER,grocery,online,24.72,5,0.229,none,2024-03-28\r\n21418,2108,AMER,electronics,mobile,57.93,2,0.209,none,2024-10-11\r\n21419,1230,EMEA,grocery,retail,44.36,8,0.033,loyalty,2024-08-13\r\n21420,1756,EMEA,grocery,mobile,16.51,8,0.027,none,2024-01-19\r\n21421,1674,LATAM,fashion,online,149.28,5,0.042,none,2024-03-17\r\n21422,2070,APAC,sports,retail,89.98,1,0.034,bundle,2024-02-22\r\n21423,1359,LATAM,electronics,online,63.92,1,0.003,bundle,2024-03-07\r\n21424,1108,EMEA,home,online,117.66,6,0.150,bundle,2024-02-08\r\n21425,1646,APAC,electronics,retail,33.34,4,0.116,coupon,2024-06-02\r\n21426,2170,EMEA,fashion,partner,56.71,7,0.184,none,2024-11-19\r\n21427,1753,APAC,electronics,online,48.22,5,0.051,none,2024-06-10\r\n21428,1586,LATAM,electronics,online,109.07,8,0.009,coupon,2024-03-17\r\n21429,1172,APAC,sports,online,22.74,4,0.021,loyalty,2024-10-28\r\n21430,1758,AMER,home,retail,22.66,8,0.053,none,2024-12-20\r\n21431,1166,AMER,home,retail,40.75,1,0.054,coupon,2024-09-25\r\n21432,2319,AMER,fashion,retail,25.85,1,0.104,loyalty,2024-08-12\r\n21433,2243,APAC,
home,retail,56.09,4,0.172,coupon,2024-05-09\r\n21434,1579,AMER,electronics,retail,68.96,7,0.170,none,2024-07-21\r\n21435,1868,AMER,grocery,partner,116.10,4,0.162,coupon,2024-12-25\r\n21436,2353,AMER,home,online,95.36,2,0.113,none,2024-03-13\r\n21437,2231,LATAM,grocery,online,84.11,5,0.168,coupon,2024-03-24\r\n21438,1921,LATAM,grocery,online,78.65,1,0.151,none,2024-11-03\r\n21439,1269,LATAM,toys,online,121.87,4,0.079,loyalty,2024-12-14\r\n21440,1819,AMER,sports,mobile,44.03,8,0.115,coupon,2024-08-15\r\n21441,2267,AMER,electronics,partner,42.25,2,0.069,none,2024-10-02\r\n21442,1244,LATAM,grocery,mobile,22.67,2,0.244,none,2024-09-11\r\n21443,1942,APAC,fashion,online,37.73,3,0.113,none,2024-01-11\r\n21444,1939,LATAM,grocery,partner,94.00,4,0.203,none,2024-11-24\r\n21445,1190,EMEA,grocery,online,62.25,7,0.136,coupon,2024-12-28\r\n21446,2414,EMEA,grocery,retail,41.75,4,0.037,loyalty,2024-12-20\r\n21447,2444,EMEA,grocery,partner,98.35,4,0.034,none,2024-03-21\r\n21448,1882,AMER,grocery,online,52.41,6,0.011,loyalty,2024-07-28\r\n21449,2389,LATAM,sports,online,44.38,1,0.062,none,2024-10-18\r\n21450,2245,APAC,grocery,retail,45.83,1,0.029,loyalty,2024-05-04\r\n21451,1231,AMER,home,retail,184.72,8,0.046,none,2024-03-14\r\n21452,1958,APAC,grocery,mobile,44.32,8,0.109,none,2024-05-13\r\n21453,2212,EMEA,grocery,mobile,23.53,1,0.204,none,2024-07-04\r\n21454,1492,APAC,fashion,mobile,76.66,4,0.063,coupon,2024-11-22\r\n21455,1564,APAC,toys,partner,203.07,6,0.137,coupon,2024-04-06\r\n21456,2384,LATAM,electronics,online,25.05,7,0.021,coupon,2024-03-27\r\n21457,2211,APAC,grocery,retail,34.91,6,0.051,loyalty,2024-07-26\r\n21458,1245,APAC,grocery,retail,46.86,3,0.083,none,2024-03-04\r\n21459,1315,AMER,electronics,online,30.16,4,0.007,none,2024-02-23\r\n21460,2332,APAC,home,retail,44.36,6,0.022,none,2024-07-22\r\n21461,1311,APAC,electronics,retail,17.04,7,0.047,none,2024-11-08\r\n21462,1573,AMER,fashion,retail,63.68,7,0.071,coupon,2024-11-27\r\n21463,1346,AMER,fashion,online,43.07,1,0.035,no
ne,2024-01-13\r\n21464,1896,EMEA,home,retail,18.65,8,0.008,none,2024-11-09\r\n21465,2124,AMER,home,retail,64.32,4,0.013,loyalty,2024-05-04\r\n21466,1277,AMER,grocery,online,28.95,6,0.089,none,2024-01-18\r\n21467,1527,AMER,grocery,online,38.94,5,0.222,none,2024-11-12\r\n21468,1880,LATAM,electronics,online,145.93,2,0.176,none,2024-03-14\r\n21469,1203,AMER,home,online,46.37,8,0.170,bundle,2024-01-20\r\n21470,2103,LATAM,fashion,mobile,16.99,1,0.113,loyalty,2024-05-10\r\n21471,1998,APAC,electronics,online,12.78,5,0.232,none,2024-05-19\r\n21472,1053,AMER,electronics,mobile,38.80,3,0.210,loyalty,2024-12-28\r\n21473,2061,EMEA,electronics,online,81.00,8,0.229,coupon,2024-06-26\r\n21474,1845,AMER,sports,mobile,50.85,1,0.174,none,2024-04-19\r\n21475,2029,APAC,home,mobile,73.02,1,0.113,bundle,2024-12-26\r\n21476,1592,LATAM,home,retail,54.39,3,0.026,none,2024-02-02\r\n21477,1848,EMEA,grocery,retail,29.87,5,0.217,none,2024-03-08\r\n21478,1469,EMEA,grocery,online,182.35,8,0.250,none,2024-04-11\r\n21479,1927,EMEA,grocery,retail,68.34,6,0.148,coupon,2024-08-13\r\n21480,2154,APAC,electronics,mobile,26.76,2,0.113,coupon,2024-09-14\r\n21481,1051,EMEA,grocery,online,16.64,6,0.001,coupon,2024-10-27\r\n21482,2208,AMER,sports,online,45.94,2,0.150,none,2024-01-01\r\n21483,1285,EMEA,sports,online,60.97,6,0.227,coupon,2024-12-16\r\n21484,1227,AMER,sports,online,27.56,1,0.081,coupon,2024-10-12\r\n21485,2291,EMEA,grocery,retail,66.95,1,0.230,none,2024-08-05\r\n21486,2005,APAC,toys,retail,54.80,2,0.128,coupon,2024-05-11\r\n21487,1874,LATAM,grocery,retail,37.23,1,0.073,coupon,2024-01-03\r\n21488,2337,AMER,electronics,online,97.98,6,0.247,none,2024-05-11\r\n21489,1817,APAC,fashion,retail,53.34,1,0.021,coupon,2024-04-14\r\n21490,1266,AMER,electronics,retail,99.21,6,0.164,coupon,2024-07-12\r\n21491,1805,EMEA,electronics,online,59.35,4,0.059,none,2024-03-26\r\n21492,2062,EMEA,grocery,online,72.20,2,0.132,bundle,2024-09-09\r\n21493,1831,APAC,electronics,retail,91.43,4,0.031,none,2024-05-08\r\n21494,17
42,AMER,home,retail,71.69,3,0.142,loyalty,2024-10-16\r\n21495,1341,EMEA,toys,retail,30.43,4,0.160,bundle,2024-12-25\r\n21496,1137,APAC,sports,retail,49.59,5,0.189,none,2024-04-23\r\n21497,2148,EMEA,grocery,mobile,84.92,8,0.150,none,2024-12-15\r\n21498,1682,EMEA,fashion,retail,110.18,2,0.205,none,2024-03-07\r\n21499,1741,AMER,grocery,retail,50.73,2,0.169,coupon,2024-04-17\r\n21500,2342,AMER,home,retail,70.39,8,0.164,none,2024-12-25\r\n21501,1510,EMEA,electronics,online,158.98,3,0.214,bundle,2024-06-24\r\n21502,1527,AMER,home,online,140.74,5,0.106,none,2024-07-18\r\n21503,1694,APAC,fashion,online,42.14,2,0.041,none,2024-01-12\r\n21504,1331,AMER,home,mobile,80.92,8,0.034,none,2024-11-28\r\n21505,1794,AMER,home,retail,83.00,6,0.131,none,2024-08-07\r\n21506,1237,LATAM,electronics,online,33.27,8,0.167,none,2024-06-04\r\n21507,1796,LATAM,grocery,retail,38.36,4,0.071,bundle,2024-06-18\r\n21508,1860,EMEA,home,retail,55.89,5,0.151,none,2024-01-08\r\n21509,1558,EMEA,sports,online,61.73,7,0.024,none,2024-07-16\r\n21510,1361,LATAM,grocery,online,43.36,1,0.191,none,2024-06-03\r\n21511,1821,LATAM,fashion,online,25.88,4,0.100,coupon,2024-01-04\r\n21512,2495,EMEA,sports,online,88.31,6,0.028,none,2024-12-08\r\n21513,1950,LATAM,toys,retail,91.21,6,0.034,none,2024-04-12\r\n21514,2490,AMER,home,retail,50.41,4,0.156,loyalty,2024-06-23\r\n21515,2260,EMEA,grocery,retail,64.69,5,0.237,coupon,2024-04-03\r\n21516,1085,EMEA,fashion,online,46.14,2,0.089,loyalty,2024-08-13\r\n21517,2084,LATAM,electronics,online,60.66,1,0.100,none,2024-02-16\r\n21518,1691,LATAM,sports,retail,76.17,8,0.224,none,2024-11-15\r\n21519,2479,EMEA,electronics,online,137.58,8,0.057,none,2024-02-19\r\n21520,2281,AMER,home,retail,45.47,3,0.062,none,2024-05-19\r\n21521,1409,APAC,sports,retail,34.57,5,0.132,none,2024-07-12\r\n21522,1762,LATAM,electronics,online,50.72,7,0.058,loyalty,2024-07-27\r\n21523,1533,APAC,fashion,online,11.13,8,0.079,none,2024-04-26\r\n21524,2452,LATAM,grocery,mobile,108.24,7,0.227,loyalty,2024-08-07\r
\n21525,2183,EMEA,toys,retail,92.80,3,0.008,none,2024-04-18\r\n21526,2182,AMER,sports,retail,40.26,3,0.133,coupon,2024-10-25\r\n21527,1823,EMEA,grocery,online,96.31,2,0.077,coupon,2024-07-12\r\n21528,1868,AMER,electronics,retail,53.11,3,0.061,coupon,2024-04-13\r\n21529,2188,EMEA,electronics,online,37.02,8,0.242,none,2024-03-28\r\n21530,1132,EMEA,grocery,online,76.90,3,0.046,bundle,2024-08-04\r\n21531,1481,LATAM,sports,online,29.10,6,0.128,none,2024-04-06\r\n21532,2179,LATAM,grocery,mobile,71.02,4,0.100,coupon,2024-12-21\r\n21533,1900,APAC,electronics,mobile,98.10,6,0.073,none,2024-01-20\r\n21534,1827,EMEA,home,retail,82.02,7,0.116,none,2024-02-04\r\n21535,1689,LATAM,home,online,74.70,4,0.215,none,2024-04-03\r\n21536,2419,LATAM,sports,online,56.19,7,0.151,loyalty,2024-04-07\r\n21537,1670,EMEA,electronics,online,89.55,3,0.046,bundle,2024-12-05\r\n21538,1493,APAC,toys,online,62.31,1,0.200,coupon,2024-08-23\r\n21539,1046,EMEA,grocery,online,46.14,8,0.132,bundle,2024-09-03\r\n21540,1634,AMER,sports,online,73.12,6,0.071,coupon,2024-10-24\r\n21541,1516,EMEA,sports,online,94.51,2,0.185,none,2024-12-28\r\n21542,2017,EMEA,electronics,retail,43.23,5,0.062,none,2024-06-14\r\n21543,2013,APAC,home,mobile,67.88,2,0.052,none,2024-06-25\r\n21544,1403,APAC,grocery,retail,97.58,5,0.099,none,2024-06-16\r\n21545,1440,AMER,grocery,retail,33.86,2,0.050,coupon,2024-02-11\r\n21546,1028,EMEA,fashion,online,56.53,6,0.052,none,2024-02-16\r\n21547,2403,LATAM,toys,mobile,21.23,7,0.076,none,2024-09-05\r\n21548,2101,APAC,electronics,online,51.27,8,0.066,none,2024-04-04\r\n21549,2128,EMEA,electronics,retail,92.28,5,0.023,none,2024-06-28\r\n21550,1534,EMEA,fashion,retail,32.96,5,0.178,coupon,2024-10-14\r\n21551,2449,LATAM,sports,retail,45.63,3,0.007,none,2024-01-18\r\n21552,1051,EMEA,fashion,partner,104.02,5,0.023,loyalty,2024-02-18\r\n21553,1611,EMEA,sports,online,27.84,7,0.067,none,2024-11-22\r\n21554,1097,EMEA,grocery,partner,96.20,6,0.113,bundle,2024-07-11\r\n21555,1718,EMEA,grocery,retail,43.92
,8,0.135,none,2024-08-06\r\n21556,1201,LATAM,home,online,37.80,1,0.053,none,2024-06-21\r\n21557,1208,AMER,fashion,online,103.49,3,0.124,coupon,2024-12-12\r\n21558,1710,APAC,grocery,mobile,82.69,7,0.114,none,2024-04-20\r\n21559,1653,APAC,electronics,mobile,43.45,8,0.127,bundle,2024-06-20\r\n21560,2122,AMER,electronics,online,85.37,4,0.087,none,2024-08-25\r\n21561,1551,APAC,fashion,online,42.80,3,0.012,none,2024-07-28\r\n21562,1715,AMER,home,retail,47.26,1,0.228,none,2024-05-15\r\n21563,2329,LATAM,home,online,96.54,7,0.119,bundle,2024-06-18\r\n21564,1148,AMER,grocery,online,93.71,3,0.009,coupon,2024-01-25\r\n21565,1834,AMER,sports,online,51.80,4,0.249,coupon,2024-12-11\r\n21566,1609,LATAM,home,partner,61.40,7,0.138,none,2024-11-12\r\n21567,1130,LATAM,fashion,online,61.10,7,0.204,none,2024-10-12\r\n21568,1933,EMEA,home,online,64.69,4,0.220,none,2024-02-14\r\n21569,1314,AMER,fashion,online,79.28,4,0.223,loyalty,2024-08-28\r\n21570,1667,AMER,electronics,online,41.61,6,0.195,none,2024-12-21\r\n21571,1475,LATAM,fashion,online,118.61,3,0.065,none,2024-02-16\r\n21572,1456,APAC,electronics,online,39.05,4,0.190,none,2024-05-04\r\n21573,1827,EMEA,toys,retail,59.84,1,0.199,bundle,2024-04-19\r\n21574,2046,APAC,fashion,online,53.07,8,0.011,none,2024-01-22\r\n21575,2372,AMER,electronics,online,36.36,5,0.236,none,2024-03-06\r\n21576,1041,APAC,home,retail,132.75,4,0.025,none,2024-08-18\r\n21577,1746,LATAM,home,online,51.21,1,0.057,bundle,2024-11-24\r\n21578,1844,APAC,toys,online,91.02,3,0.173,bundle,2024-07-16\r\n21579,1567,AMER,electronics,online,43.83,4,0.120,none,2024-11-13\r\n21580,1144,APAC,electronics,retail,80.36,4,0.101,none,2024-05-18\r\n21581,1235,EMEA,home,retail,87.55,5,0.039,none,2024-07-03\r\n21582,1481,LATAM,grocery,partner,48.94,5,0.186,none,2024-04-20\r\n21583,1781,LATAM,home,partner,56.98,5,0.165,bundle,2024-01-12\r\n21584,1403,APAC,fashion,retail,25.03,2,0.068,none,2024-12-24\r\n21585,1262,APAC,grocery,online,55.86,8,0.078,bundle,2024-05-09\r\n21586,2268,EMEA,groce
ry,retail,39.93,8,0.014,none,2024-03-15\r\n21587,2488,EMEA,toys,retail,51.41,5,0.131,bundle,2024-08-14\r\n21588,2496,EMEA,electronics,retail,116.17,3,0.076,none,2024-10-21\r\n21589,1726,EMEA,electronics,partner,80.95,8,0.001,none,2024-10-15\r\n21590,2104,EMEA,home,retail,49.62,5,0.088,bundle,2024-05-11\r\n21591,2382,LATAM,electronics,online,105.81,5,0.166,none,2024-11-06\r\n21592,2181,AMER,toys,retail,78.50,7,0.185,none,2024-07-19\r\n21593,1833,EMEA,fashion,online,53.58,5,0.184,bundle,2024-02-23\r\n21594,1928,AMER,home,retail,41.46,7,0.095,none,2024-03-11\r\n21595,1792,AMER,grocery,retail,30.27,6,0.194,none,2024-10-24\r\n21596,1942,APAC,fashion,online,84.83,6,0.173,none,2024-03-12\r\n21597,2286,AMER,fashion,retail,121.35,2,0.173,none,2024-10-18\r\n21598,1259,EMEA,home,retail,90.58,1,0.017,coupon,2024-03-07\r\n21599,2233,EMEA,toys,retail,38.45,2,0.113,none,2024-02-14\r\n21600,1242,LATAM,home,retail,58.48,3,0.078,bundle,2024-09-27\r\n21601,2497,AMER,grocery,online,34.61,6,0.089,bundle,2024-12-11\r\n21602,2363,AMER,electronics,mobile,63.93,5,0.172,loyalty,2024-07-14\r\n21603,1207,APAC,sports,online,19.72,2,0.085,bundle,2024-09-24\r\n21604,1571,EMEA,electronics,retail,35.21,1,0.054,coupon,2024-03-21\r\n21605,1375,AMER,sports,online,45.72,6,0.227,none,2024-03-20\r\n21606,1277,AMER,sports,retail,37.26,2,0.037,none,2024-04-10\r\n21607,1319,EMEA,home,online,51.69,8,0.215,none,2024-08-16\r\n21608,1591,APAC,fashion,online,79.31,8,0.102,none,2024-08-02\r\n21609,2352,APAC,toys,online,84.36,2,0.073,none,2024-12-13\r\n21610,2332,APAC,electronics,online,98.67,6,0.235,loyalty,2024-11-10\r\n21611,2461,LATAM,grocery,retail,91.17,8,0.014,none,2024-06-20\r\n21612,1772,EMEA,electronics,retail,35.48,4,0.119,none,2024-12-20\r\n21613,1706,EMEA,sports,retail,35.85,7,0.136,none,2024-10-01\r\n21614,1824,LATAM,electronics,mobile,51.64,4,0.041,none,2024-10-11\r\n21615,2050,APAC,grocery,online,72.63,1,0.209,none,2024-03-04\r\n21616,2409,APAC,sports,retail,65.57,4,0.070,none,2024-08-08\r\n21617,2
383,APAC,fashion,online,101.73,8,0.166,none,2024-04-09\r\n21618,2162,EMEA,home,online,42.54,7,0.178,bundle,2024-05-03\r\n21619,2427,LATAM,grocery,retail,61.94,1,0.077,none,2024-09-28\r\n21620,1405,LATAM,electronics,online,119.59,2,0.088,bundle,2024-04-09\r\n21621,2018,AMER,grocery,retail,27.19,1,0.069,none,2024-06-16\r\n21622,2359,LATAM,fashion,online,106.24,1,0.222,none,2024-08-02\r\n21623,1255,AMER,grocery,online,61.08,5,0.205,loyalty,2024-08-02\r\n21624,1991,APAC,electronics,mobile,44.68,4,0.114,none,2024-12-22\r\n21625,1120,LATAM,electronics,retail,92.81,6,0.088,none,2024-10-11\r\n21626,1843,EMEA,sports,online,18.66,1,0.240,bundle,2024-07-04\r\n21627,2348,EMEA,grocery,online,49.78,4,0.106,none,2024-05-24\r\n21628,1286,EMEA,fashion,retail,43.20,2,0.061,none,2024-03-25\r\n21629,1833,EMEA,fashion,online,45.27,5,0.125,none,2024-11-01\r\n21630,2140,AMER,home,mobile,47.40,4,0.117,none,2024-01-23\r\n21631,2233,EMEA,electronics,mobile,37.64,8,0.052,none,2024-05-27\r\n21632,1393,LATAM,home,online,45.73,6,0.006,none,2024-09-02\r\n21633,1358,APAC,grocery,retail,63.30,7,0.061,none,2024-10-21\r\n21634,2209,AMER,grocery,mobile,98.36,3,0.122,bundle,2024-03-17\r\n21635,2147,LATAM,sports,online,80.71,4,0.217,coupon,2024-04-27\r\n21636,2221,LATAM,home,online,55.12,8,0.172,bundle,2024-05-07\r\n21637,2253,AMER,electronics,online,29.52,5,0.249,none,2024-11-25\r\n21638,2395,APAC,grocery,online,66.60,7,0.132,coupon,2024-07-28\r\n21639,2072,AMER,home,retail,63.98,3,0.177,loyalty,2024-10-01\r\n21640,2236,APAC,electronics,online,33.60,8,0.082,coupon,2024-11-18\r\n21641,1184,AMER,grocery,mobile,52.24,3,0.245,bundle,2024-05-07\r\n21642,2445,APAC,fashion,retail,70.69,2,0.071,none,2024-07-01\r\n21643,1353,EMEA,home,online,100.51,8,0.001,coupon,2024-12-10\r\n21644,2311,LATAM,sports,retail,32.96,1,0.023,coupon,2024-07-01\r\n21645,1226,AMER,electronics,mobile,62.63,1,0.135,none,2024-06-03\r\n21646,1342,LATAM,grocery,mobile,53.61,7,0.053,none,2024-01-12\r\n21647,1025,EMEA,grocery,online,137.97,1
,0.014,loyalty,2024-02-28\r\n21648,1924,AMER,grocery,online,24.03,4,0.211,none,2024-10-10\r\n21649,2131,APAC,home,online,58.12,5,0.077,none,2024-03-24\r\n21650,1638,EMEA,electronics,retail,48.39,2,0.221,none,2024-12-07\r\n21651,1135,APAC,grocery,retail,40.42,1,0.096,bundle,2024-10-08\r\n21652,1652,APAC,grocery,online,79.56,6,0.012,loyalty,2024-01-17\r\n21653,2477,APAC,toys,retail,42.89,5,0.058,coupon,2024-08-22\r\n21654,1626,EMEA,toys,online,58.56,5,0.065,none,2024-04-11\r\n21655,1416,EMEA,fashion,retail,44.12,2,0.062,none,2024-02-04\r\n21656,2350,APAC,sports,online,68.15,2,0.046,coupon,2024-07-15\r\n21657,2336,APAC,fashion,mobile,15.67,6,0.071,coupon,2024-09-01\r\n21658,1327,APAC,fashion,online,50.38,2,0.153,none,2024-07-01\r\n21659,2109,EMEA,home,partner,45.49,6,0.034,none,2024-10-27\r\n21660,1152,LATAM,sports,online,33.63,1,0.034,loyalty,2024-11-26\r\n21661,1972,LATAM,sports,online,54.78,3,0.012,none,2024-01-05\r\n21662,2412,LATAM,sports,retail,46.57,1,0.016,none,2024-07-08\r\n21663,1401,LATAM,electronics,online,40.60,2,0.115,none,2024-05-19\r\n21664,1870,EMEA,electronics,mobile,32.93,1,0.030,bundle,2024-05-28\r\n21665,1823,EMEA,fashion,online,110.14,8,0.172,bundle,2024-01-19\r\n21666,1539,LATAM,sports,online,53.14,4,0.070,none,2024-07-08\r\n21667,1457,EMEA,electronics,online,26.59,7,0.143,bundle,2024-05-17\r\n21668,1118,AMER,grocery,retail,80.50,8,0.022,loyalty,2024-11-16\r\n21669,1799,EMEA,electronics,online,52.15,2,0.113,none,2024-01-04\r\n21670,1794,AMER,sports,online,81.00,7,0.200,none,2024-01-17\r\n21671,1011,APAC,electronics,online,90.30,4,0.142,bundle,2024-01-23\r\n21672,1261,APAC,sports,online,69.64,4,0.055,bundle,2024-12-21\r\n21673,1599,APAC,grocery,retail,115.06,1,0.238,bundle,2024-02-15\r\n21674,1503,APAC,electronics,mobile,45.90,3,0.121,loyalty,2024-05-16\r\n21675,2413,AMER,fashion,mobile,30.67,4,0.224,bundle,2024-12-05\r\n21676,2339,AMER,home,online,34.09,1,0.205,loyalty,2024-08-28\r\n21677,1936,EMEA,toys,online,46.05,1,0.240,none,2024-03-16\r\n216
78,2434,APAC,toys,online,54.28,1,0.025,coupon,2024-11-28\r\n21679,1055,AMER,grocery,online,11.29,4,0.051,none,2024-11-09\r\n21680,1193,APAC,electronics,retail,66.89,8,0.142,coupon,2024-12-26\r\n21681,1785,EMEA,sports,mobile,34.18,3,0.098,none,2024-04-07\r\n21682,1758,AMER,home,online,31.39,4,0.049,none,2024-01-11\r\n21683,2202,APAC,toys,retail,157.01,2,0.013,none,2024-04-09\r\n21684,1560,AMER,home,retail,70.50,4,0.185,none,2024-07-05\r\n21685,2149,EMEA,home,retail,78.96,7,0.232,coupon,2024-10-18\r\n21686,1681,LATAM,sports,mobile,127.98,4,0.142,none,2024-01-18\r\n21687,2104,EMEA,grocery,retail,42.99,4,0.069,none,2024-11-15\r\n21688,1844,APAC,grocery,retail,80.99,4,0.190,none,2024-02-28\r\n21689,1743,LATAM,fashion,retail,71.74,2,0.128,coupon,2024-10-19\r\n21690,1533,APAC,electronics,retail,55.92,3,0.179,bundle,2024-08-04\r\n21691,2001,EMEA,sports,partner,134.13,2,0.118,none,2024-08-28\r\n21692,1833,EMEA,toys,mobile,37.92,1,0.074,none,2024-12-16\r\n21693,2344,LATAM,sports,online,118.95,8,0.168,coupon,2024-01-11\r\n21694,1589,AMER,sports,online,22.80,5,0.206,none,2024-06-25\r\n21695,1088,LATAM,grocery,online,23.34,6,0.003,none,2024-03-18\r\n21696,2281,AMER,fashion,retail,71.42,4,0.013,none,2024-09-15\r\n21697,1007,APAC,home,retail,105.65,7,0.119,coupon,2024-12-21\r\n21698,2255,AMER,fashion,mobile,39.18,3,0.150,coupon,2024-11-11\r\n21699,1229,LATAM,fashion,partner,75.25,3,0.066,none,2024-08-15\r\n21700,1788,AMER,grocery,retail,52.87,4,0.019,bundle,2024-10-26\r\n21701,1733,LATAM,sports,retail,56.21,3,0.130,none,2024-02-05\r\n21702,1425,EMEA,sports,online,55.26,2,0.145,coupon,2024-01-08\r\n21703,1752,APAC,electronics,retail,16.98,3,0.245,coupon,2024-10-19\r\n21704,1107,APAC,home,online,27.00,8,0.221,bundle,2024-06-03\r\n21705,1432,APAC,electronics,mobile,37.58,5,0.036,coupon,2024-08-13\r\n21706,1819,AMER,toys,retail,90.24,6,0.050,none,2024-06-27\r\n21707,2225,EMEA,home,retail,80.47,1,0.225,none,2024-05-24\r\n21708,2362,AMER,sports,online,83.12,5,0.164,coupon,2024-10-26\r\n
21709,1879,EMEA,electronics,retail,65.34,3,0.075,none,2024-11-06\r\n21710,1498,LATAM,toys,retail,43.97,3,0.113,none,2024-07-21\r\n21711,1288,LATAM,sports,online,49.77,5,0.239,loyalty,2024-01-10\r\n21712,2458,EMEA,electronics,online,56.59,4,0.186,coupon,2024-11-13\r\n21713,2062,EMEA,fashion,partner,39.16,3,0.063,none,2024-06-17\r\n21714,2270,APAC,sports,mobile,46.73,8,0.248,loyalty,2024-01-04\r\n21715,1761,EMEA,grocery,online,106.28,2,0.084,none,2024-04-15\r\n21716,1629,LATAM,fashion,mobile,20.75,6,0.229,coupon,2024-03-23\r\n21717,1329,APAC,electronics,online,55.82,1,0.032,coupon,2024-03-01\r\n21718,1607,LATAM,electronics,online,46.10,6,0.089,coupon,2024-12-18\r\n21719,1665,AMER,grocery,retail,66.70,1,0.002,none,2024-03-14\r\n21720,1974,EMEA,sports,retail,93.84,6,0.164,none,2024-07-25\r\n21721,2334,LATAM,grocery,retail,21.25,7,0.237,none,2024-01-02\r\n21722,2009,LATAM,home,mobile,58.88,8,0.107,none,2024-01-01\r\n21723,1282,LATAM,fashion,online,57.83,6,0.162,none,2024-11-22\r\n21724,1192,EMEA,toys,mobile,76.31,3,0.023,none,2024-09-25\r\n21725,1243,AMER,grocery,online,43.72,7,0.024,bundle,2024-01-01\r\n21726,1317,EMEA,electronics,retail,58.49,4,0.046,none,2024-10-23\r\n21727,1896,EMEA,electronics,online,21.73,6,0.236,coupon,2024-05-22\r\n21728,1989,LATAM,toys,retail,45.82,6,0.090,none,2024-03-18\r\n21729,1713,EMEA,sports,online,70.49,2,0.019,none,2024-04-10\r\n21730,1169,LATAM,fashion,retail,59.06,1,0.047,none,2024-11-10\r\n21731,1473,LATAM,sports,online,39.93,1,0.178,bundle,2024-10-27\r\n21732,1456,APAC,home,retail,39.28,5,0.067,none,2024-08-26\r\n21733,1033,APAC,electronics,online,59.53,5,0.137,none,2024-01-10\r\n21734,1726,EMEA,grocery,mobile,19.72,8,0.021,none,2024-03-13\r\n21735,1272,AMER,home,retail,176.85,1,0.217,bundle,2024-09-14\r\n21736,1605,APAC,fashion,online,51.75,2,0.175,none,2024-01-08\r\n21737,2460,AMER,sports,retail,54.17,8,0.054,coupon,2024-06-06\r\n21738,1931,APAC,electronics,retail,48.22,1,0.117,coupon,2024-10-10\r\n21739,1354,AMER,sports,online,128
.00,2,0.238,none,2024-07-26\r\n21740,2151,APAC,sports,mobile,78.06,2,0.149,coupon,2024-10-21\r\n21741,2099,AMER,home,online,50.44,2,0.103,coupon,2024-05-24\r\n21742,1045,LATAM,grocery,retail,57.31,6,0.037,coupon,2024-04-17\r\n21743,1645,EMEA,electronics,online,44.73,2,0.194,bundle,2024-04-17\r\n21744,2313,LATAM,grocery,online,118.06,3,0.174,none,2024-11-07\r\n21745,1383,AMER,grocery,online,67.01,1,0.198,coupon,2024-03-16\r\n21746,2277,EMEA,fashion,retail,81.96,7,0.169,loyalty,2024-11-15\r\n21747,2464,LATAM,grocery,online,47.41,7,0.178,none,2024-07-16\r\n21748,1455,APAC,fashion,partner,78.46,6,0.121,none,2024-01-08\r\n21749,1717,AMER,home,partner,27.61,8,0.203,bundle,2024-08-10\r\n21750,2351,EMEA,home,online,61.92,4,0.176,none,2024-01-13\r\n21751,2018,AMER,electronics,online,36.65,2,0.027,none,2024-10-04\r\n21752,1644,EMEA,grocery,retail,20.67,7,0.007,bundle,2024-06-19\r\n21753,2345,LATAM,home,retail,59.42,7,0.012,none,2024-01-01\r\n21754,1224,APAC,grocery,retail,62.36,1,0.040,none,2024-08-07\r\n21755,2316,EMEA,fashion,retail,17.63,2,0.165,none,2024-10-24\r\n21756,2490,AMER,electronics,partner,37.33,4,0.130,none,2024-02-17\r\n21757,2010,APAC,sports,mobile,36.52,1,0.189,none,2024-03-06\r\n21758,1829,EMEA,electronics,retail,54.81,6,0.017,bundle,2024-10-27\r\n21759,2172,EMEA,home,mobile,123.21,4,0.171,none,2024-02-20\r\n21760,1015,AMER,fashion,retail,73.64,6,0.231,coupon,2024-11-21\r\n21761,2165,AMER,grocery,online,67.45,8,0.238,none,2024-06-05\r\n21762,2202,APAC,sports,online,33.42,4,0.152,coupon,2024-02-13\r\n21763,2269,EMEA,electronics,partner,58.18,7,0.079,none,2024-07-25\r\n21764,1040,LATAM,toys,online,49.19,2,0.225,bundle,2024-03-27\r\n21765,1185,LATAM,grocery,mobile,44.61,3,0.170,loyalty,2024-03-10\r\n21766,1939,LATAM,fashion,retail,60.32,5,0.227,none,2024-11-04\r\n21767,1197,LATAM,grocery,partner,74.71,8,0.093,loyalty,2024-10-09\r\n21768,2300,EMEA,fashion,online,37.16,2,0.156,none,2024-12-26\r\n21769,1330,EMEA,sports,retail,90.34,2,0.024,none,2024-03-19\r\n21770
,1054,EMEA,grocery,online,91.35,8,0.249,loyalty,2024-10-03\r\n21771,1279,EMEA,sports,mobile,44.39,3,0.150,bundle,2024-09-05\r\n21772,1782,LATAM,electronics,online,49.98,3,0.132,none,2024-12-17\r\n21773,1093,APAC,electronics,online,51.70,3,0.002,bundle,2024-12-24\r\n21774,2298,APAC,electronics,online,49.24,6,0.122,none,2024-06-23\r\n21775,1162,AMER,home,retail,46.87,7,0.086,none,2024-09-02\r\n21776,1763,LATAM,home,retail,49.31,5,0.240,coupon,2024-06-23\r\n21777,1735,LATAM,sports,online,75.04,8,0.035,bundle,2024-07-14\r\n21778,1308,EMEA,home,online,204.45,4,0.056,none,2024-12-01\r\n21779,1920,LATAM,electronics,online,124.54,4,0.186,coupon,2024-12-12\r\n21780,1686,LATAM,electronics,online,40.78,4,0.141,none,2024-01-20\r\n21781,1202,APAC,electronics,mobile,102.21,3,0.165,none,2024-02-08\r\n21782,1825,AMER,home,online,28.73,4,0.062,none,2024-01-14\r\n21783,2052,LATAM,electronics,online,27.08,8,0.120,none,2024-06-05\r\n21784,1801,LATAM,electronics,retail,37.03,8,0.069,none,2024-08-06\r\n21785,1642,EMEA,sports,retail,63.06,2,0.157,none,2024-03-22\r\n21786,1185,LATAM,home,online,48.68,2,0.180,none,2024-12-01\r\n21787,1694,APAC,fashion,online,33.74,4,0.002,none,2024-03-18\r\n21788,1750,LATAM,fashion,retail,41.98,6,0.159,none,2024-01-05\r\n21789,1524,LATAM,fashion,retail,45.90,5,0.152,none,2024-10-12\r\n21790,1985,AMER,grocery,online,47.70,1,0.154,none,2024-04-05\r\n21791,2137,LATAM,home,online,53.70,6,0.034,none,2024-11-08\r\n21792,1124,AMER,grocery,retail,67.28,3,0.007,bundle,2024-01-05\r\n21793,1008,AMER,sports,online,80.45,4,0.106,none,2024-07-15\r\n21794,2005,APAC,fashion,retail,106.37,4,0.036,none,2024-01-27\r\n21795,1450,EMEA,sports,online,52.34,6,0.093,coupon,2024-02-19\r\n21796,2258,AMER,toys,retail,140.13,1,0.082,loyalty,2024-08-28\r\n21797,1101,AMER,electronics,online,88.87,2,0.156,coupon,2024-06-09\r\n21798,2376,LATAM,fashion,online,96.26,5,0.168,none,2024-07-17\r\n21799,1087,AMER,grocery,online,53.25,3,0.042,none,2024-05-07\r\n21800,1287,AMER,grocery,partner,8.97
,3,0.191,none,2024-07-06\r\n21801,1387,AMER,toys,retail,32.70,6,0.249,loyalty,2024-06-25\r\n21802,1571,EMEA,electronics,retail,58.86,3,0.116,none,2024-01-09\r\n21803,2080,LATAM,fashion,mobile,88.05,5,0.209,none,2024-10-06\r\n21804,1375,AMER,electronics,mobile,47.57,6,0.055,coupon,2024-09-21\r\n21805,1413,LATAM,grocery,retail,31.40,6,0.176,none,2024-11-27\r\n21806,2352,APAC,toys,mobile,57.31,3,0.022,none,2024-09-03\r\n21807,1692,LATAM,electronics,online,49.49,3,0.013,none,2024-11-05\r\n21808,2458,EMEA,home,online,24.79,5,0.214,bundle,2024-12-05\r\n21809,1640,APAC,electronics,retail,29.04,2,0.207,bundle,2024-02-24\r\n21810,1304,LATAM,electronics,mobile,51.05,7,0.132,none,2024-04-06\r\n21811,1631,APAC,electronics,online,51.46,2,0.155,coupon,2024-02-03\r\n21812,1084,AMER,grocery,retail,69.53,6,0.042,none,2024-09-22\r\n21813,1166,AMER,electronics,mobile,70.60,7,0.017,bundle,2024-03-11\r\n21814,1210,LATAM,fashion,retail,39.87,3,0.165,none,2024-03-24\r\n21815,2351,EMEA,electronics,online,76.52,4,0.030,coupon,2024-02-06\r\n21816,1573,AMER,fashion,partner,47.35,5,0.087,coupon,2024-06-04\r\n21817,1887,LATAM,electronics,online,41.48,8,0.243,none,2024-09-22\r\n21818,1606,AMER,grocery,online,35.43,2,0.038,none,2024-09-21\r\n21819,2031,AMER,home,online,73.59,6,0.209,none,2024-07-19\r\n21820,1261,APAC,grocery,retail,69.83,4,0.064,bundle,2024-08-19\r\n21821,1761,EMEA,home,online,33.79,5,0.205,none,2024-09-13\r\n21822,1229,LATAM,electronics,retail,30.88,6,0.228,none,2024-06-01\r\n21823,1192,EMEA,home,mobile,22.59,4,0.162,none,2024-04-07\r\n21824,1156,APAC,sports,retail,67.12,2,0.172,bundle,2024-02-01\r\n21825,1572,LATAM,sports,online,75.39,3,0.068,none,2024-07-19\r\n21826,1018,APAC,grocery,online,65.48,4,0.116,bundle,2024-11-21\r\n21827,1900,APAC,grocery,retail,79.91,1,0.221,loyalty,2024-04-21\r\n21828,2277,EMEA,fashion,online,32.80,3,0.215,bundle,2024-04-23\r\n21829,2092,AMER,grocery,online,83.65,8,0.157,coupon,2024-04-16\r\n21830,1811,APAC,home,partner,90.46,1,0.176,coupon,2024-06
-27\r\n21831,2092,AMER,home,retail,43.23,4,0.092,bundle,2024-03-18\r\n21832,2486,APAC,home,retail,55.26,3,0.044,none,2024-07-09\r\n21833,2433,APAC,electronics,mobile,144.25,7,0.221,bundle,2024-11-01\r\n21834,1888,LATAM,fashion,partner,58.37,3,0.168,none,2024-04-23\r\n21835,1901,AMER,fashion,retail,51.41,1,0.115,loyalty,2024-04-16\r\n21836,1748,APAC,home,online,53.21,6,0.095,none,2024-08-21\r\n21837,1048,EMEA,electronics,online,44.40,5,0.205,none,2024-02-04\r\n21838,1000,APAC,grocery,retail,43.63,5,0.038,none,2024-12-07\r\n21839,1186,APAC,grocery,retail,153.78,5,0.249,coupon,2024-11-24\r\n21840,1992,LATAM,home,retail,51.47,2,0.050,coupon,2024-12-16\r\n21841,1366,APAC,toys,online,40.91,8,0.249,none,2024-09-25\r\n21842,1643,EMEA,grocery,online,61.17,6,0.185,coupon,2024-02-25\r\n21843,1902,AMER,toys,mobile,75.69,7,0.070,none,2024-04-04\r\n21844,2271,LATAM,electronics,online,57.85,7,0.068,bundle,2024-12-12\r\n21845,2462,EMEA,fashion,retail,56.71,4,0.178,none,2024-02-28\r\n21846,2469,LATAM,home,online,24.43,4,0.021,coupon,2024-01-06\r\n21847,1018,APAC,home,retail,44.85,2,0.227,bundle,2024-05-11\r\n21848,1639,APAC,electronics,online,26.00,8,0.014,none,2024-04-26\r\n21849,1372,APAC,fashion,online,96.13,2,0.095,none,2024-04-07\r\n21850,2436,LATAM,sports,online,55.13,5,0.236,none,2024-05-02\r\n21851,2452,LATAM,sports,online,88.83,4,0.211,loyalty,2024-11-24\r\n21852,1116,LATAM,grocery,retail,33.31,2,0.049,bundle,2024-04-03\r\n21853,1295,EMEA,grocery,mobile,21.92,2,0.037,bundle,2024-11-02\r\n21854,2171,EMEA,grocery,retail,58.80,3,0.033,none,2024-01-07\r\n21855,1653,APAC,electronics,online,30.76,1,0.231,none,2024-10-23\r\n21856,2488,EMEA,home,retail,86.59,1,0.104,loyalty,2024-07-07\r\n21857,1433,EMEA,electronics,mobile,142.71,8,0.148,bundle,2024-02-05\r\n21858,2020,AMER,electronics,online,95.98,6,0.168,bundle,2024-12-26\r\n21859,1931,APAC,fashion,online,83.10,5,0.034,none,2024-04-27\r\n21860,1838,AMER,electronics,online,36.88,8,0.241,none,2024-06-28\r\n21861,1449,EMEA,home,mobil
e,46.28,3,0.227,loyalty,2024-05-06\r\n21862,1479,AMER,home,online,41.75,5,0.058,coupon,2024-07-01\r\n21863,1777,AMER,grocery,online,29.69,8,0.236,coupon,2024-05-26\r\n21864,2431,LATAM,electronics,retail,47.91,7,0.239,none,2024-01-24\r\n21865,2312,APAC,home,retail,104.71,2,0.150,loyalty,2024-01-04\r\n21866,1137,APAC,sports,online,33.50,4,0.143,coupon,2024-09-23\r\n21867,1785,EMEA,electronics,online,30.52,4,0.017,none,2024-11-19\r\n21868,1314,AMER,grocery,partner,114.96,5,0.035,coupon,2024-01-28\r\n21869,2100,APAC,home,online,289.49,7,0.001,none,2024-02-21\r\n21870,1313,EMEA,toys,mobile,29.02,4,0.072,coupon,2024-03-20\r\n21871,2291,EMEA,fashion,online,39.95,8,0.159,none,2024-04-10\r\n21872,2429,EMEA,home,retail,39.58,7,0.195,loyalty,2024-04-07\r\n21873,1243,AMER,fashion,retail,13.12,7,0.153,coupon,2024-01-26\r\n21874,1751,AMER,grocery,online,148.83,2,0.241,loyalty,2024-09-05\r\n21875,2498,LATAM,sports,partner,66.22,1,0.124,bundle,2024-12-19\r\n21876,2016,LATAM,fashion,retail,65.80,7,0.144,coupon,2024-07-21\r\n21877,1375,AMER,toys,retail,35.98,8,0.130,bundle,2024-05-14\r\n21878,2260,EMEA,home,retail,57.57,5,0.031,none,2024-05-12\r\n21879,2049,LATAM,electronics,retail,29.85,8,0.190,none,2024-11-14\r\n21880,1701,LATAM,toys,online,43.38,4,0.157,none,2024-03-25\r\n21881,1902,AMER,grocery,retail,113.18,2,0.107,coupon,2024-10-17\r\n21882,1252,APAC,toys,online,57.55,2,0.186,loyalty,2024-10-08\r\n21883,1523,LATAM,electronics,online,78.90,4,0.220,coupon,2024-05-08\r\n21884,2389,LATAM,grocery,online,117.08,3,0.159,none,2024-08-28\r\n21885,2173,LATAM,sports,online,72.10,3,0.221,loyalty,2024-09-04\r\n21886,1480,APAC,electronics,partner,61.50,4,0.088,none,2024-04-27\r\n21887,1081,AMER,fashion,online,36.15,3,0.159,coupon,2024-08-16\r\n21888,1598,EMEA,grocery,online,62.24,7,0.002,bundle,2024-06-26\r\n21889,1825,AMER,grocery,online,51.15,4,0.148,none,2024-01-18\r\n21890,2086,APAC,grocery,retail,29.50,2,0.068,none,2024-06-05\r\n21891,2129,APAC,toys,mobile,58.42,4,0.228,coupon,2024-06-2
0\r\n21892,1601,APAC,electronics,retail,61.00,5,0.124,none,2024-11-20\r\n21893,1774,EMEA,toys,online,42.33,6,0.146,none,2024-01-09\r\n21894,1104,APAC,toys,retail,23.42,7,0.083,none,2024-07-12\r\n21895,1740,EMEA,electronics,retail,28.09,5,0.212,bundle,2024-06-08\r\n21896,2022,LATAM,home,retail,53.10,6,0.142,coupon,2024-10-16\r\n21897,1731,AMER,fashion,partner,76.87,2,0.217,none,2024-02-24\r\n21898,1978,AMER,electronics,mobile,39.89,8,0.176,bundle,2024-07-25\r\n21899,1654,EMEA,electronics,online,55.53,4,0.036,none,2024-12-16\r\n21900,2065,EMEA,sports,online,41.72,8,0.232,none,2024-11-08\r\n21901,2029,APAC,home,online,46.63,3,0.222,coupon,2024-09-25\r\n21902,1738,LATAM,home,online,46.40,4,0.002,none,2024-05-27\r\n21903,1140,LATAM,grocery,retail,78.43,6,0.175,coupon,2024-06-04\r\n21904,2282,EMEA,home,online,50.23,5,0.232,none,2024-04-13\r\n21905,2360,EMEA,electronics,retail,23.74,8,0.237,loyalty,2024-07-23\r\n21906,2429,EMEA,grocery,online,21.24,1,0.089,bundle,2024-07-26\r\n21907,1407,LATAM,home,mobile,56.90,8,0.127,coupon,2024-08-20\r\n21908,1966,APAC,electronics,online,76.05,7,0.227,bundle,2024-08-21\r\n21909,1672,APAC,toys,online,86.72,6,0.231,none,2024-05-19\r\n21910,2430,APAC,grocery,online,105.79,7,0.026,none,2024-11-01\r\n21911,2201,AMER,electronics,retail,38.96,7,0.013,coupon,2024-09-07\r\n21912,1224,APAC,grocery,retail,319.49,2,0.129,none,2024-09-07\r\n21913,1008,AMER,home,online,57.80,7,0.082,none,2024-10-20\r\n21914,1202,APAC,fashion,online,29.49,7,0.055,bundle,2024-02-19\r\n21915,2292,EMEA,fashion,online,39.67,1,0.011,none,2024-04-12\r\n21916,1073,AMER,toys,mobile,46.14,5,0.248,none,2024-05-04\r\n21917,1109,APAC,grocery,online,22.72,8,0.026,none,2024-11-03\r\n21918,1548,EMEA,grocery,mobile,60.23,8,0.020,none,2024-02-26\r\n21919,2464,LATAM,grocery,mobile,75.67,1,0.184,none,2024-01-13\r\n21920,1219,LATAM,electronics,online,81.50,7,0.213,none,2024-03-14\r\n21921,1080,LATAM,grocery,partner,46.98,5,0.176,none,2024-01-23\r\n21922,2225,EMEA,electronics,mobile,49.43
,3,0.053,bundle,2024-05-09\r\n21923,1466,AMER,grocery,online,47.36,6,0.250,none,2024-01-13\r\n21924,1920,LATAM,fashion,online,65.06,2,0.238,none,2024-10-01\r\n21925,1149,LATAM,sports,online,121.53,8,0.246,none,2024-07-01\r\n21926,1579,AMER,home,retail,74.05,7,0.234,none,2024-06-21\r\n21927,1892,LATAM,grocery,online,116.61,5,0.036,none,2024-07-02\r\n21928,1064,AMER,fashion,online,20.06,4,0.143,none,2024-05-16\r\n21929,1847,LATAM,electronics,online,27.63,2,0.052,bundle,2024-10-16\r\n21930,2237,EMEA,electronics,partner,57.38,6,0.152,none,2024-01-28\r\n21931,1838,AMER,grocery,retail,126.82,5,0.226,none,2024-06-24\r\n21932,1359,LATAM,grocery,retail,151.78,4,0.123,coupon,2024-12-28\r\n21933,1225,APAC,grocery,retail,66.61,8,0.142,none,2024-03-07\r\n21934,2456,APAC,electronics,online,54.03,1,0.004,none,2024-08-08\r\n21935,1238,AMER,electronics,partner,44.03,2,0.178,bundle,2024-12-24\r\n21936,1405,LATAM,fashion,retail,47.33,8,0.245,none,2024-10-17\r\n21937,2063,APAC,fashion,online,41.03,4,0.154,coupon,2024-08-23\r\n21938,2478,AMER,sports,online,76.89,6,0.038,none,2024-07-09\r\n21939,1577,AMER,fashion,partner,51.55,5,0.140,none,2024-09-01\r\n21940,1023,APAC,grocery,retail,63.58,8,0.228,coupon,2024-01-03\r\n21941,1868,AMER,grocery,mobile,60.11,6,0.068,none,2024-07-24\r\n21942,1836,LATAM,fashion,online,99.83,3,0.207,bundle,2024-10-21\r\n21943,2279,LATAM,fashion,online,115.70,1,0.103,none,2024-12-11\r\n21944,1215,LATAM,electronics,online,75.73,2,0.130,none,2024-10-01\r\n21945,1191,EMEA,sports,online,112.31,2,0.075,none,2024-11-22\r\n21946,2197,LATAM,sports,online,49.29,6,0.175,bundle,2024-12-02\r\n21947,2350,APAC,sports,retail,33.99,4,0.097,none,2024-10-23\r\n21948,1300,EMEA,home,online,51.00,4,0.198,bundle,2024-03-18\r\n21949,1963,AMER,toys,online,17.67,1,0.033,none,2024-09-20\r\n21950,1556,AMER,sports,mobile,30.43,5,0.200,loyalty,2024-05-14\r\n21951,1965,LATAM,grocery,online,66.37,5,0.029,none,2024-11-24\r\n21952,1448,EMEA,grocery,online,54.32,7,0.040,coupon,2024-01-26\r\n2195
3,1051,EMEA,toys,online,190.02,1,0.097,none,2024-01-23\r\n21954,2264,LATAM,sports,retail,55.54,2,0.009,none,2024-10-09\r\n21955,2275,LATAM,home,partner,25.34,8,0.052,none,2024-12-08\r\n21956,1625,EMEA,home,mobile,36.37,2,0.027,coupon,2024-11-25\r\n21957,1939,LATAM,grocery,mobile,78.71,3,0.189,none,2024-08-06\r\n21958,1198,AMER,home,online,30.95,7,0.163,bundle,2024-02-07\r\n21959,2156,AMER,fashion,online,13.83,5,0.140,none,2024-08-06\r\n21960,2143,AMER,home,retail,70.03,5,0.061,none,2024-04-25\r\n21961,1389,LATAM,home,retail,127.01,6,0.039,none,2024-10-04\r\n21962,2492,LATAM,toys,retail,71.44,4,0.077,none,2024-08-14\r\n21963,2304,LATAM,grocery,retail,36.20,6,0.209,coupon,2024-12-13\r\n21964,1898,EMEA,grocery,partner,127.18,3,0.188,none,2024-02-15\r\n21965,1963,AMER,fashion,partner,36.29,4,0.049,bundle,2024-02-15\r\n21966,1170,AMER,fashion,partner,44.67,7,0.173,bundle,2024-08-14\r\n21967,1871,APAC,electronics,retail,70.75,4,0.180,none,2024-04-15\r\n21968,2103,LATAM,home,online,46.93,4,0.221,none,2024-08-07\r\n21969,1574,AMER,electronics,online,26.31,7,0.003,coupon,2024-10-11\r\n21970,1925,LATAM,grocery,retail,42.79,2,0.181,none,2024-06-18\r\n21971,2420,EMEA,home,online,55.12,3,0.040,bundle,2024-07-28\r\n21972,1624,AMER,electronics,mobile,16.61,6,0.170,none,2024-11-06\r\n21973,1561,EMEA,toys,retail,97.84,8,0.167,none,2024-03-16\r\n21974,1142,EMEA,fashion,online,57.73,3,0.022,none,2024-08-13\r\n21975,1938,APAC,home,online,50.99,7,0.196,none,2024-11-06\r\n21976,2412,LATAM,home,online,77.54,4,0.120,none,2024-09-25\r\n21977,2020,AMER,electronics,online,178.93,7,0.040,loyalty,2024-03-17\r\n21978,2120,AMER,home,online,104.37,6,0.172,bundle,2024-08-07\r\n21979,1025,EMEA,grocery,online,115.79,5,0.240,bundle,2024-10-22\r\n21980,1376,EMEA,fashion,online,157.74,1,0.079,loyalty,2024-12-12\r\n21981,2062,EMEA,electronics,mobile,50.05,1,0.013,loyalty,2024-03-21\r\n21982,1757,EMEA,sports,retail,109.24,7,0.076,none,2024-10-26\r\n21983,1968,EMEA,grocery,mobile,20.69,5,0.061,none,2024-06
-06\r\n21984,2438,AMER,grocery,online,81.92,2,0.214,coupon,2024-01-08\r\n21985,2050,APAC,fashion,retail,119.43,6,0.037,none,2024-08-17\r\n21986,1203,AMER,home,mobile,86.65,4,0.221,loyalty,2024-11-04\r\n21987,1598,EMEA,home,retail,31.85,2,0.246,coupon,2024-03-12\r\n21988,2185,EMEA,fashion,online,83.41,8,0.209,none,2024-01-27\r\n21989,1325,APAC,grocery,online,52.65,1,0.237,none,2024-06-09\r\n21990,1882,AMER,grocery,retail,33.72,2,0.138,loyalty,2024-10-11\r\n21991,2294,EMEA,fashion,retail,38.14,4,0.056,none,2024-02-16\r\n21992,1349,APAC,home,online,39.99,1,0.198,bundle,2024-09-10\r\n21993,1029,EMEA,sports,online,55.20,2,0.234,none,2024-06-15\r\n21994,2364,APAC,electronics,online,78.20,7,0.044,bundle,2024-06-13\r\n21995,1241,APAC,home,online,74.44,5,0.199,bundle,2024-03-21\r\n21996,1635,APAC,grocery,online,55.41,8,0.204,coupon,2024-02-18\r\n21997,1043,LATAM,sports,online,36.15,1,0.204,none,2024-04-16\r\n21998,1171,APAC,home,retail,75.57,8,0.019,none,2024-08-07\r\n21999,1977,APAC,home,online,30.89,6,0.069,bundle,2024-03-16\r\n22000,2476,APAC,fashion,online,81.25,7,0.061,coupon,2024-08-25\r\n22001,1681,LATAM,fashion,online,75.95,3,0.211,none,2024-06-08\r\n22002,2230,LATAM,home,retail,51.86,1,0.087,bundle,2024-12-20\r\n22003,1496,AMER,toys,partner,55.71,7,0.217,none,2024-03-21\r\n22004,1626,EMEA,home,retail,92.71,8,0.123,none,2024-09-04\r\n22005,1630,APAC,grocery,online,60.36,8,0.047,bundle,2024-01-25\r\n22006,1238,AMER,fashion,retail,130.77,4,0.243,bundle,2024-01-11\r\n22007,1709,EMEA,toys,retail,38.35,7,0.059,none,2024-12-12\r\n22008,1039,AMER,home,online,42.67,5,0.020,loyalty,2024-12-17\r\n22009,1455,APAC,electronics,online,77.49,4,0.104,none,2024-08-16\r\n22010,1246,EMEA,grocery,retail,53.71,3,0.147,loyalty,2024-06-08\r\n22011,1257,APAC,home,online,55.43,7,0.051,coupon,2024-09-15\r\n22012,1549,APAC,sports,online,47.58,8,0.233,none,2024-04-12\r\n22013,1912,APAC,grocery,retail,48.87,2,0.040,coupon,2024-06-07\r\n22014,2051,APAC,fashion,online,33.29,2,0.064,bundle,2024-03-
07\r\n22015,2005,APAC,electronics,online,47.69,7,0.191,coupon,2024-10-19\r\n22016,1223,LATAM,grocery,online,31.28,6,0.010,none,2024-02-04\r\n22017,2415,AMER,electronics,online,94.09,1,0.085,none,2024-12-02\r\n22018,1682,EMEA,toys,online,51.07,5,0.199,coupon,2024-12-27\r\n22019,1126,LATAM,home,retail,51.06,4,0.066,loyalty,2024-02-14\r\n22020,2378,LATAM,grocery,retail,44.06,2,0.046,none,2024-11-18\r\n22021,1881,LATAM,toys,retail,27.04,7,0.053,none,2024-10-02\r\n22022,1188,LATAM,toys,online,52.16,7,0.102,coupon,2024-12-25\r\n22023,1836,LATAM,sports,mobile,22.08,5,0.235,none,2024-01-08\r\n22024,1067,APAC,home,retail,99.45,7,0.165,none,2024-10-21\r\n22025,2060,LATAM,toys,retail,95.92,4,0.085,none,2024-02-19\r\n22026,1883,LATAM,sports,retail,80.67,1,0.136,none,2024-10-14\r\n22027,2360,EMEA,home,online,40.50,3,0.144,loyalty,2024-06-03\r\n22028,1095,APAC,home,partner,51.25,5,0.087,bundle,2024-07-28\r\n22029,1484,AMER,sports,mobile,34.18,2,0.034,loyalty,2024-10-04\r\n22030,2234,LATAM,fashion,online,68.39,4,0.078,none,2024-06-01\r\n22031,2104,EMEA,home,retail,35.05,6,0.065,bundle,2024-06-22\r\n22032,1239,APAC,grocery,online,40.85,8,0.135,none,2024-10-09\r\n22033,1894,APAC,fashion,online,32.64,5,0.089,coupon,2024-05-26\r\n22034,2348,EMEA,grocery,mobile,34.47,3,0.234,none,2024-03-15\r\n22035,1395,APAC,home,online,125.77,2,0.217,coupon,2024-06-03\r\n22036,1713,EMEA,fashion,online,63.17,4,0.156,coupon,2024-04-05\r\n22037,1479,AMER,grocery,online,38.74,4,0.224,none,2024-02-24\r\n22038,2485,AMER,fashion,retail,20.75,2,0.091,none,2024-12-15\r\n22039,2207,APAC,electronics,mobile,63.54,7,0.008,bundle,2024-04-26\r\n22040,2366,APAC,fashion,retail,160.60,8,0.051,none,2024-05-05\r\n22041,1164,EMEA,sports,retail,35.17,2,0.071,none,2024-03-21\r\n22042,1383,AMER,home,online,82.26,5,0.153,coupon,2024-09-10\r\n22043,2029,APAC,home,retail,46.78,4,0.215,none,2024-09-18\r\n22044,1874,LATAM,fashion,online,85.24,8,0.082,bundle,2024-06-03\r\n22045,1216,APAC,grocery,online,125.02,1,0.202,none,2024-07
-02\r\n22046,1471,EMEA,toys,retail,41.04,5,0.057,none,2024-11-19\r\n22047,2376,LATAM,toys,mobile,16.84,5,0.054,bundle,2024-08-17\r\n22048,1659,APAC,electronics,partner,30.35,2,0.054,loyalty,2024-03-21\r\n22049,1070,EMEA,home,online,64.16,5,0.216,bundle,2024-05-23\r\n22050,1526,EMEA,home,online,47.80,1,0.127,none,2024-08-07\r\n22051,1504,AMER,home,online,63.09,7,0.056,none,2024-09-17\r\n22052,1355,EMEA,sports,retail,35.75,5,0.152,none,2024-02-17\r\n22053,2043,EMEA,grocery,mobile,43.19,5,0.000,bundle,2024-05-22\r\n22054,1956,APAC,electronics,online,93.90,7,0.211,none,2024-09-21\r\n22055,1663,LATAM,grocery,online,146.57,4,0.108,coupon,2024-05-04\r\n22056,2481,APAC,fashion,mobile,78.03,5,0.158,bundle,2024-10-14\r\n22057,1365,LATAM,home,retail,69.44,1,0.089,none,2024-04-02\r\n22058,2487,LATAM,grocery,online,39.35,8,0.080,coupon,2024-04-16\r\n22059,1309,EMEA,grocery,retail,58.81,3,0.199,bundle,2024-08-15\r\n22060,1303,LATAM,toys,retail,41.22,1,0.096,bundle,2024-12-03\r\n22061,1538,AMER,sports,online,37.02,6,0.112,none,2024-07-19\r\n22062,2181,AMER,grocery,retail,46.16,2,0.118,none,2024-01-14\r\n22063,1927,EMEA,sports,online,29.19,4,0.016,none,2024-03-17\r\n22064,1820,AMER,grocery,online,136.45,3,0.133,bundle,2024-08-24\r\n22065,2151,APAC,grocery,retail,28.87,2,0.025,bundle,2024-06-21\r\n22066,1706,EMEA,sports,retail,102.91,7,0.198,loyalty,2024-10-16\r\n22067,1602,EMEA,electronics,retail,91.58,2,0.185,bundle,2024-07-06\r\n22068,1497,EMEA,fashion,retail,25.87,1,0.130,none,2024-05-01\r\n22069,1685,AMER,home,partner,28.92,3,0.082,none,2024-07-09\r\n22070,1804,AMER,sports,retail,28.27,1,0.197,coupon,2024-06-23\r\n22071,2090,AMER,grocery,mobile,23.64,1,0.062,loyalty,2024-05-27\r\n22072,1430,EMEA,grocery,mobile,41.62,8,0.236,bundle,2024-01-08\r\n22073,2486,APAC,electronics,online,66.30,4,0.151,none,2024-09-23\r\n22074,1512,APAC,fashion,online,35.28,5,0.088,coupon,2024-02-12\r\n22075,1470,LATAM,home,partner,222.55,2,0.123,none,2024-08-18\r\n22076,1601,APAC,electronics,online,30.1
8,5,0.156,none,2024-08-11\r\n22077,2442,APAC,grocery,retail,45.60,7,0.205,loyalty,2024-04-03\r\n22078,1946,AMER,sports,retail,58.99,6,0.165,none,2024-11-14\r\n22079,2404,EMEA,home,online,83.91,8,0.228,none,2024-09-13\r\n22080,1478,EMEA,fashion,online,39.20,5,0.029,bundle,2024-08-03\r\n22081,2390,AMER,grocery,online,40.79,5,0.193,loyalty,2024-02-22\r\n22082,2255,AMER,grocery,retail,301.98,4,0.172,bundle,2024-08-25\r\n22083,1531,EMEA,toys,online,28.84,4,0.048,coupon,2024-06-17\r\n22084,2339,AMER,home,online,73.11,1,0.158,coupon,2024-09-16\r\n22085,1863,EMEA,home,online,44.77,3,0.238,loyalty,2024-11-24\r\n22086,1666,LATAM,grocery,online,72.59,7,0.213,none,2024-01-27\r\n22087,2173,LATAM,electronics,online,85.27,1,0.179,coupon,2024-01-25\r\n22088,1807,EMEA,grocery,online,86.61,2,0.133,coupon,2024-08-11\r\n22089,2384,LATAM,home,retail,27.25,3,0.202,coupon,2024-09-04\r\n22090,1045,LATAM,sports,online,90.49,7,0.064,none,2024-05-09\r\n22091,1194,APAC,electronics,online,42.26,1,0.159,none,2024-08-22\r\n22092,1600,AMER,home,online,41.28,1,0.211,loyalty,2024-12-25\r\n22093,2203,APAC,fashion,mobile,169.94,5,0.150,none,2024-07-28\r\n22094,1718,EMEA,grocery,online,58.63,7,0.142,coupon,2024-09-26\r\n22095,2080,LATAM,grocery,online,123.19,2,0.221,bundle,2024-11-14\r\n22096,2305,AMER,fashion,partner,59.77,8,0.089,none,2024-11-25\r\n22097,2444,EMEA,electronics,online,17.06,8,0.220,bundle,2024-01-01\r\n22098,1011,APAC,electronics,mobile,49.01,8,0.025,none,2024-10-24\r\n22099,1364,EMEA,home,mobile,44.59,5,0.187,none,2024-12-20\r\n22100,1801,LATAM,grocery,retail,55.26,2,0.052,none,2024-07-22\r\n22101,1706,EMEA,grocery,retail,73.35,3,0.084,none,2024-06-27\r\n22102,1382,LATAM,home,mobile,68.11,6,0.206,loyalty,2024-09-14\r\n22103,2263,AMER,grocery,online,37.45,5,0.029,none,2024-07-19\r\n22104,1671,APAC,electronics,online,36.04,6,0.083,coupon,2024-08-27\r\n22105,2076,AMER,home,online,33.71,2,0.227,none,2024-10-20\r\n22106,2213,APAC,grocery,retail,48.06,3,0.174,bundle,2024-02-13\r\n22107,1965
,LATAM,toys,online,87.46,7,0.006,coupon,2024-08-18\r\n22108,2001,EMEA,sports,mobile,29.54,1,0.030,loyalty,2024-07-15\r\n22109,2315,LATAM,home,online,99.19,1,0.176,none,2024-07-18\r\n22110,2163,EMEA,electronics,mobile,197.23,3,0.039,loyalty,2024-01-26\r\n22111,2069,AMER,fashion,retail,78.67,7,0.146,none,2024-09-15\r\n22112,1645,EMEA,fashion,online,63.66,6,0.119,coupon,2024-10-27\r\n22113,1782,LATAM,sports,online,34.58,7,0.168,bundle,2024-11-02\r\n22114,2080,LATAM,home,online,21.12,1,0.071,coupon,2024-03-24\r\n22115,1403,APAC,grocery,mobile,80.03,5,0.020,bundle,2024-05-23\r\n22116,1085,EMEA,grocery,partner,30.32,8,0.065,coupon,2024-05-24\r\n22117,1541,APAC,toys,retail,45.60,5,0.119,loyalty,2024-12-25\r\n22118,1922,EMEA,fashion,partner,52.25,1,0.097,none,2024-10-05\r\n22119,1609,LATAM,home,online,159.83,4,0.000,loyalty,2024-06-12\r\n22120,2027,EMEA,fashion,online,26.58,7,0.125,none,2024-12-27\r\n22121,2314,EMEA,home,retail,98.88,4,0.230,none,2024-10-22\r\n22122,2406,EMEA,electronics,retail,85.68,4,0.198,none,2024-01-19\r\n22123,1725,APAC,electronics,retail,94.53,1,0.213,bundle,2024-07-08\r\n22124,1252,APAC,fashion,retail,124.83,1,0.148,coupon,2024-12-05\r\n22125,1569,APAC,grocery,online,49.67,6,0.089,none,2024-01-01\r\n22126,1762,LATAM,grocery,online,47.95,6,0.166,none,2024-03-18\r\n22127,1917,LATAM,grocery,retail,107.56,5,0.017,none,2024-11-14\r\n22128,1863,EMEA,toys,mobile,65.97,1,0.212,none,2024-06-23\r\n22129,2000,APAC,grocery,partner,97.27,2,0.229,none,2024-03-19\r\n22130,2256,AMER,home,retail,32.30,6,0.026,none,2024-03-19\r\n22131,1546,EMEA,electronics,retail,65.58,4,0.093,bundle,2024-03-22\r\n22132,1456,APAC,sports,online,50.41,4,0.132,loyalty,2024-07-06\r\n22133,1993,APAC,electronics,retail,101.00,4,0.141,loyalty,2024-09-02\r\n22134,1607,LATAM,sports,online,14.84,7,0.124,none,2024-05-24\r\n22135,2422,APAC,electronics,retail,51.24,4,0.247,none,2024-10-05\r\n22136,1706,EMEA,electronics,online,71.29,3,0.199,none,2024-01-07\r\n22137,1444,EMEA,grocery,retail,77.33,5
,0.226,none,2024-01-26\r\n22138,1055,AMER,sports,online,31.80,6,0.022,none,2024-04-20\r\n22139,2333,APAC,electronics,online,126.37,2,0.231,none,2024-06-04\r\n22140,2476,APAC,grocery,online,32.26,6,0.118,loyalty,2024-10-01\r\n22141,1065,AMER,home,online,36.17,3,0.116,loyalty,2024-02-08\r\n22142,2237,EMEA,home,mobile,26.96,1,0.113,coupon,2024-07-07\r\n22143,1043,LATAM,grocery,retail,50.46,4,0.169,none,2024-12-26\r\n22144,2225,EMEA,grocery,online,40.69,3,0.228,none,2024-08-18\r\n22145,1327,APAC,fashion,online,33.50,3,0.141,coupon,2024-11-12\r\n22146,1911,LATAM,home,retail,61.32,3,0.167,none,2024-08-22\r\n22147,2064,LATAM,sports,retail,138.41,3,0.173,none,2024-12-10\r\n22148,1065,AMER,home,retail,60.56,8,0.019,loyalty,2024-06-15\r\n22149,2340,EMEA,home,mobile,33.76,3,0.119,coupon,2024-11-21\r\n22150,1535,AMER,electronics,mobile,54.69,5,0.240,none,2024-07-27\r\n22151,2216,AMER,fashion,online,38.34,6,0.206,none,2024-08-15\r\n22152,2396,AMER,home,mobile,47.97,7,0.053,none,2024-10-14\r\n22153,1325,APAC,home,online,87.30,8,0.154,none,2024-01-03\r\n22154,1988,AMER,toys,retail,31.84,2,0.052,loyalty,2024-05-11\r\n22155,1004,LATAM,sports,retail,30.16,3,0.036,none,2024-04-09\r\n22156,1818,AMER,electronics,mobile,63.17,6,0.190,loyalty,2024-10-18\r\n22157,1605,APAC,home,online,116.55,1,0.050,none,2024-12-05\r\n22158,1847,LATAM,grocery,retail,113.88,8,0.134,bundle,2024-01-19\r\n22159,1993,APAC,home,online,81.37,5,0.195,none,2024-04-19\r\n22160,1386,AMER,sports,online,109.13,5,0.131,coupon,2024-05-02\r\n22161,1749,LATAM,toys,online,41.17,6,0.075,coupon,2024-02-15\r\n22162,1058,LATAM,grocery,online,65.48,3,0.250,none,2024-05-15\r\n22163,1790,AMER,fashion,retail,56.16,3,0.046,coupon,2024-03-28\r\n22164,1581,APAC,grocery,retail,120.07,6,0.108,none,2024-02-08\r\n22165,2165,AMER,home,mobile,23.85,8,0.013,none,2024-12-06\r\n22166,1604,EMEA,electronics,online,52.47,2,0.129,none,2024-12-12\r\n22167,1749,LATAM,toys,online,93.56,8,0.168,loyalty,2024-04-27\r\n22168,1375,AMER,sports,online,65.12
,8,0.020,loyalty,2024-08-24\r\n22169,2164,AMER,grocery,mobile,52.49,2,0.199,none,2024-01-18\r\n22170,1204,AMER,sports,online,86.27,8,0.140,none,2024-12-25\r\n22171,2276,AMER,home,mobile,46.66,2,0.226,coupon,2024-10-04\r\n22172,1147,EMEA,fashion,online,44.75,1,0.105,none,2024-12-21\r\n22173,1133,EMEA,grocery,online,21.01,3,0.172,none,2024-07-27\r\n22174,1668,AMER,electronics,retail,27.39,2,0.171,none,2024-12-17\r\n22175,1119,LATAM,home,retail,117.30,5,0.235,coupon,2024-03-25\r\n22176,2457,EMEA,sports,online,138.51,2,0.043,none,2024-05-27\r\n22177,2331,APAC,toys,online,39.91,7,0.155,none,2024-07-20\r\n22178,2118,AMER,fashion,online,81.18,8,0.203,coupon,2024-09-19\r\n22179,1006,AMER,grocery,mobile,43.49,7,0.209,none,2024-09-10\r\n22180,1484,AMER,grocery,mobile,62.68,7,0.247,none,2024-04-03\r\n22181,1216,APAC,home,retail,51.00,1,0.190,coupon,2024-03-01\r\n22182,1755,APAC,home,retail,63.71,6,0.236,coupon,2024-06-22\r\n22183,1764,LATAM,grocery,online,42.21,3,0.101,coupon,2024-06-04\r\n22184,2466,APAC,home,retail,27.02,6,0.188,loyalty,2024-02-12\r\n22185,1892,LATAM,sports,mobile,116.96,4,0.054,loyalty,2024-06-26\r\n22186,1133,EMEA,grocery,mobile,98.86,3,0.170,bundle,2024-05-20\r\n22187,1796,LATAM,grocery,online,100.01,6,0.043,bundle,2024-08-02\r\n22188,1703,AMER,electronics,retail,110.81,2,0.131,loyalty,2024-05-18\r\n22189,2321,APAC,fashion,online,47.09,3,0.167,loyalty,2024-02-24\r\n22190,2237,EMEA,toys,mobile,58.95,7,0.200,loyalty,2024-05-06\r\n22191,2064,LATAM,fashion,retail,77.01,6,0.081,bundle,2024-06-28\r\n22192,1492,APAC,sports,online,34.07,6,0.123,bundle,2024-06-03\r\n22193,1591,APAC,sports,online,28.44,8,0.062,none,2024-10-11\r\n22194,1205,APAC,electronics,online,85.39,3,0.141,none,2024-02-12\r\n22195,1601,APAC,home,online,27.52,8,0.009,none,2024-07-22\r\n22196,2328,EMEA,fashion,online,76.69,7,0.016,none,2024-12-15\r\n22197,1854,AMER,grocery,mobile,49.22,6,0.210,none,2024-06-15\r\n22198,2390,AMER,home,online,24.62,1,0.229,coupon,2024-02-01\r\n22199,1830,EMEA,home,r
etail,40.71,2,0.056,coupon,2024-03-04\r\n22200,1221,LATAM,fashion,online,61.61,8,0.051,none,2024-11-14\r\n22201,1968,EMEA,sports,retail,107.22,7,0.232,none,2024-02-12\r\n22202,2071,APAC,home,online,21.06,3,0.029,loyalty,2024-07-20\r\n22203,2073,AMER,grocery,retail,26.04,6,0.150,coupon,2024-09-04\r\n22204,1968,EMEA,sports,online,38.34,8,0.167,loyalty,2024-12-18\r\n22205,1765,EMEA,grocery,online,23.10,2,0.224,none,2024-08-23\r\n22206,1290,EMEA,home,retail,84.38,4,0.024,bundle,2024-05-02\r\n22207,1721,EMEA,home,online,46.22,3,0.236,coupon,2024-04-08\r\n22208,1023,APAC,grocery,retail,55.07,6,0.119,coupon,2024-06-12\r\n22209,1247,AMER,toys,online,29.52,5,0.122,none,2024-08-05\r\n22210,1936,EMEA,sports,retail,73.13,7,0.239,none,2024-04-04\r\n22211,1687,APAC,electronics,retail,104.56,8,0.042,coupon,2024-05-06\r\n22212,1567,AMER,electronics,retail,91.86,6,0.024,coupon,2024-05-28\r\n22213,1046,EMEA,fashion,online,81.33,6,0.069,bundle,2024-06-13\r\n22214,1820,AMER,fashion,online,176.80,6,0.155,loyalty,2024-05-12\r\n22215,2163,EMEA,home,mobile,99.59,8,0.136,none,2024-07-20\r\n22216,2261,EMEA,fashion,retail,35.98,1,0.115,none,2024-11-04\r\n22217,1879,EMEA,sports,retail,95.44,7,0.173,none,2024-01-07\r\n22218,2335,EMEA,toys,online,71.98,6,0.210,none,2024-02-15\r\n22219,1453,APAC,fashion,mobile,76.02,7,0.243,none,2024-06-16\r\n22220,1654,EMEA,home,online,53.99,6,0.108,bundle,2024-10-17\r\n22221,1330,EMEA,electronics,retail,78.24,1,0.236,bundle,2024-11-01\r\n22222,1416,EMEA,fashion,retail,64.77,6,0.117,none,2024-02-07\r\n22223,1808,APAC,fashion,retail,169.99,4,0.103,none,2024-09-11\r\n22224,2162,EMEA,fashion,online,101.35,1,0.020,bundle,2024-07-22\r\n22225,1821,LATAM,fashion,online,56.44,1,0.070,none,2024-09-04\r\n22226,1272,AMER,grocery,online,76.44,1,0.167,loyalty,2024-02-25\r\n22227,1353,EMEA,fashion,online,35.28,3,0.002,none,2024-01-10\r\n22228,1297,AMER,home,online,49.48,5,0.178,none,2024-02-28\r\n22229,1971,EMEA,sports,online,69.72,8,0.148,bundle,2024-07-04\r\n22230,1893,APAC
,home,retail,83.83,2,0.182,none,2024-03-27\r\n22231,1985,AMER,electronics,online,63.70,8,0.226,loyalty,2024-04-23\r\n22232,1167,EMEA,sports,online,68.99,3,0.230,none,2024-03-22\r\n22233,1140,LATAM,electronics,retail,164.08,6,0.180,none,2024-06-22\r\n22234,1583,AMER,fashion,retail,37.41,3,0.041,none,2024-02-04\r\n22235,2329,LATAM,grocery,mobile,23.38,4,0.092,none,2024-08-09\r\n22236,1778,LATAM,fashion,online,48.77,6,0.229,none,2024-07-22\r\n22237,2235,AMER,electronics,partner,84.47,4,0.037,bundle,2024-08-13\r\n22238,1304,LATAM,electronics,mobile,89.29,6,0.086,bundle,2024-08-08\r\n22239,2433,APAC,toys,partner,93.69,3,0.082,none,2024-05-18\r\n22240,1710,APAC,fashion,online,75.40,7,0.149,none,2024-04-07\r\n22241,1380,AMER,fashion,retail,28.47,7,0.039,none,2024-05-04\r\n22242,1216,APAC,sports,online,37.76,5,0.202,none,2024-12-11\r\n22243,2049,LATAM,fashion,online,37.33,4,0.247,none,2024-04-06\r\n22244,1246,EMEA,sports,mobile,57.07,1,0.020,bundle,2024-04-16\r\n22245,1064,AMER,sports,mobile,57.21,8,0.005,loyalty,2024-05-17\r\n22246,2019,AMER,grocery,online,31.51,5,0.031,none,2024-11-11\r\n22247,2250,AMER,electronics,online,29.18,4,0.146,bundle,2024-02-12\r\n22248,2002,APAC,grocery,online,31.08,3,0.223,none,2024-11-25\r\n22249,2197,LATAM,electronics,online,43.67,5,0.012,coupon,2024-10-20\r\n22250,1602,EMEA,toys,online,38.50,5,0.169,none,2024-03-07\r\n22251,1677,EMEA,grocery,mobile,42.77,4,0.084,coupon,2024-02-07\r\n22252,1024,APAC,toys,retail,110.79,6,0.230,none,2024-12-24\r\n22253,1831,APAC,electronics,mobile,59.54,5,0.181,none,2024-08-23\r\n22254,1466,AMER,grocery,online,79.57,1,0.160,coupon,2024-06-17\r\n22255,1535,AMER,fashion,online,57.45,7,0.190,coupon,2024-05-19\r\n22256,1689,LATAM,home,online,25.98,6,0.088,coupon,2024-04-09\r\n22257,1234,AMER,grocery,retail,131.00,3,0.064,none,2024-02-23\r\n22258,1723,LATAM,electronics,retail,107.74,3,0.079,coupon,2024-04-02\r\n22259,1235,EMEA,grocery,mobile,11.64,6,0.139,coupon,2024-08-07\r\n22260,2272,EMEA,grocery,online,36.91,5,0
.142,none,2024-04-19\r\n22261,1976,AMER,home,mobile,57.27,1,0.026,none,2024-02-18\r\n22262,1714,APAC,grocery,partner,52.62,4,0.206,none,2024-01-02\r\n22263,1773,LATAM,fashion,retail,45.89,2,0.151,none,2024-12-16\r\n22264,1276,AMER,sports,partner,87.17,3,0.138,coupon,2024-07-28\r\n22265,1972,LATAM,toys,mobile,72.52,7,0.212,none,2024-04-12\r\n22266,1306,LATAM,grocery,online,46.40,1,0.065,bundle,2024-01-21\r\n22267,1319,EMEA,grocery,online,87.36,4,0.097,coupon,2024-04-15\r\n22268,1605,APAC,sports,online,33.21,2,0.226,none,2024-02-15\r\n22269,1035,EMEA,toys,retail,55.81,8,0.238,none,2024-11-07\r\n22270,2043,EMEA,grocery,retail,19.19,8,0.066,coupon,2024-11-08\r\n22271,1159,LATAM,fashion,online,48.36,5,0.202,none,2024-02-09\r\n22272,1429,APAC,grocery,retail,80.24,8,0.112,none,2024-02-24\r\n22273,2187,EMEA,electronics,online,51.77,2,0.111,none,2024-10-20\r\n22274,1820,AMER,home,partner,68.20,5,0.237,none,2024-03-19\r\n22275,2336,APAC,home,partner,68.24,3,0.123,none,2024-02-01\r\n22276,1781,LATAM,home,online,68.35,3,0.203,none,2024-09-24\r\n22277,1510,EMEA,grocery,retail,33.21,5,0.029,none,2024-06-06\r\n22278,1613,EMEA,toys,retail,109.44,1,0.006,none,2024-04-20\r\n22279,2312,APAC,grocery,online,70.07,4,0.192,none,2024-12-21\r\n22280,1137,APAC,grocery,online,89.80,5,0.008,none,2024-03-11\r\n22281,1295,EMEA,toys,retail,90.83,2,0.126,none,2024-02-12\r\n22282,1747,EMEA,grocery,online,13.21,6,0.174,coupon,2024-08-25\r\n22283,2288,AMER,electronics,retail,122.69,5,0.197,none,2024-04-20\r\n22284,1717,AMER,toys,retail,33.87,8,0.031,bundle,2024-03-06\r\n22285,1038,APAC,sports,partner,24.85,3,0.122,coupon,2024-11-21\r\n22286,2000,APAC,toys,retail,48.04,3,0.167,none,2024-07-24\r\n22287,2031,AMER,grocery,online,38.02,6,0.217,none,2024-08-23\r\n22288,1997,APAC,toys,partner,24.99,2,0.244,loyalty,2024-09-17\r\n22289,1093,APAC,home,mobile,38.78,6,0.239,coupon,2024-04-09\r\n22290,1162,AMER,grocery,online,43.26,7,0.011,bundle,2024-02-02\r\n22291,2041,LATAM,home,online,39.14,1,0.196,coupon,202
4-07-22\r\n22292,2005,APAC,electronics,online,62.60,2,0.191,none,2024-06-18\r\n22293,1590,APAC,electronics,mobile,75.66,4,0.217,loyalty,2024-03-23\r\n22294,1358,APAC,grocery,online,58.69,4,0.165,bundle,2024-11-11\r\n22295,1250,APAC,toys,online,29.26,3,0.155,coupon,2024-03-06\r\n22296,1862,LATAM,electronics,retail,28.55,8,0.060,loyalty,2024-06-06\r\n22297,1267,EMEA,sports,online,73.34,2,0.085,bundle,2024-10-10\r\n22298,1478,EMEA,home,retail,65.18,5,0.059,none,2024-11-01\r\n22299,1836,LATAM,home,online,94.13,1,0.008,none,2024-05-16\r\n22300,1982,EMEA,home,online,66.83,8,0.217,none,2024-06-22\r\n22301,1474,LATAM,electronics,online,111.26,7,0.207,coupon,2024-08-06\r\n22302,1884,APAC,electronics,online,63.65,8,0.246,coupon,2024-11-17\r\n22303,2120,AMER,sports,retail,56.29,6,0.058,coupon,2024-03-10\r\n22304,1468,AMER,grocery,retail,44.75,2,0.137,coupon,2024-02-07\r\n22305,2253,AMER,fashion,online,60.88,2,0.086,none,2024-03-27\r\n22306,1122,AMER,electronics,retail,83.70,7,0.081,none,2024-09-28\r\n22307,1985,AMER,sports,retail,161.01,7,0.121,none,2024-06-06\r\n22308,1286,EMEA,sports,retail,79.28,6,0.041,coupon,2024-04-15\r\n22309,2321,APAC,home,mobile,40.01,2,0.077,none,2024-12-12\r\n22310,1511,EMEA,home,retail,32.04,6,0.242,coupon,2024-10-24\r\n22311,1919,EMEA,grocery,online,159.30,2,0.226,coupon,2024-07-04\r\n22312,2232,EMEA,sports,online,63.52,1,0.096,bundle,2024-03-18\r\n22313,2228,EMEA,toys,online,108.30,6,0.106,none,2024-03-20\r\n22314,1891,APAC,electronics,online,69.00,5,0.122,none,2024-11-01\r\n22315,1829,EMEA,electronics,online,45.49,3,0.135,none,2024-11-03\r\n22316,1409,APAC,toys,online,103.06,3,0.125,none,2024-07-22\r\n22317,2074,AMER,home,retail,50.05,8,0.184,none,2024-06-14\r\n22318,1166,AMER,home,online,81.87,2,0.012,loyalty,2024-09-09\r\n22319,2174,LATAM,grocery,mobile,69.52,3,0.046,coupon,2024-12-25\r\n22320,1293,AMER,grocery,online,20.87,8,0.156,bundle,2024-04-12\r\n22321,2043,EMEA,home,online,55.42,3,0.052,none,2024-04-15\r\n22322,2089,EMEA,toys,partner,48
.94,8,0.220,coupon,2024-03-13\r\n22323,1635,APAC,sports,online,100.46,7,0.027,loyalty,2024-08-25\r\n22324,2353,AMER,home,online,135.00,6,0.082,none,2024-10-22\r\n22325,1046,EMEA,grocery,retail,128.16,8,0.065,coupon,2024-12-28\r\n22326,1895,AMER,grocery,retail,52.75,7,0.217,coupon,2024-02-01\r\n22327,2092,AMER,home,retail,150.26,4,0.089,none,2024-09-07\r\n22328,1820,AMER,fashion,online,35.71,5,0.070,loyalty,2024-07-18\r\n22329,1574,AMER,home,retail,70.62,5,0.116,loyalty,2024-11-22\r\n22330,2274,APAC,electronics,online,20.84,8,0.184,bundle,2024-02-03\r\n22331,1477,APAC,electronics,mobile,87.58,1,0.072,none,2024-02-19\r\n22332,1770,AMER,grocery,online,49.50,8,0.017,bundle,2024-03-14\r\n22333,1556,AMER,fashion,online,84.62,6,0.031,bundle,2024-10-03\r\n22334,1767,AMER,electronics,online,60.32,3,0.179,coupon,2024-02-21\r\n22335,1910,LATAM,home,partner,53.05,4,0.204,none,2024-07-06\r\n22336,1682,EMEA,toys,online,41.16,7,0.189,none,2024-02-04\r\n22337,2375,AMER,sports,retail,81.29,4,0.043,none,2024-06-25\r\n22338,1198,AMER,grocery,online,63.89,3,0.210,coupon,2024-02-19\r\n22339,2069,AMER,grocery,retail,34.98,5,0.039,coupon,2024-05-20\r\n22340,1193,APAC,toys,online,132.27,5,0.073,coupon,2024-03-06\r\n22341,1361,LATAM,grocery,retail,147.74,3,0.192,none,2024-06-02\r\n22342,1309,EMEA,fashion,online,76.72,3,0.016,none,2024-10-25\r\n22343,2230,LATAM,fashion,retail,28.20,5,0.072,none,2024-01-08\r\n22344,1710,APAC,grocery,online,114.68,8,0.205,coupon,2024-11-19\r\n22345,1691,LATAM,sports,retail,48.02,5,0.130,bundle,2024-10-13\r\n22346,2134,AMER,electronics,online,28.04,5,0.168,bundle,2024-01-08\r\n22347,1741,AMER,sports,online,173.20,2,0.166,none,2024-07-02\r\n22348,1604,EMEA,sports,retail,67.20,6,0.000,none,2024-09-12\r\n22349,1162,AMER,grocery,online,32.76,2,0.001,none,2024-06-21\r\n22350,1067,APAC,sports,online,114.75,7,0.028,none,2024-05-26\r\n22351,1753,APAC,sports,online,41.95,2,0.212,none,2024-09-03\r\n22352,1280,LATAM,electronics,online,49.88,6,0.057,bundle,2024-11-02\r\n22
353,2469,LATAM,electronics,online,41.30,1,0.244,coupon,2024-11-25\r\n22354,1460,LATAM,sports,retail,46.71,3,0.020,coupon,2024-01-06\r\n22355,1897,AMER,electronics,online,51.42,2,0.214,coupon,2024-04-19\r\n22356,1524,LATAM,fashion,mobile,74.08,7,0.011,bundle,2024-10-09\r\n22357,2454,LATAM,electronics,mobile,58.67,5,0.083,coupon,2024-12-19\r\n22358,1477,APAC,home,mobile,34.38,8,0.235,coupon,2024-05-13\r\n22359,1583,AMER,sports,partner,27.63,5,0.157,coupon,2024-06-20\r\n22360,1249,EMEA,sports,online,41.70,2,0.155,loyalty,2024-10-08\r\n22361,2017,EMEA,grocery,online,53.98,8,0.016,none,2024-04-15\r\n22362,2028,APAC,electronics,online,90.46,8,0.231,coupon,2024-07-24\r\n22363,2320,LATAM,electronics,online,40.53,6,0.032,coupon,2024-10-24\r\n22364,1094,LATAM,electronics,online,24.86,7,0.121,loyalty,2024-02-14\r\n22365,2416,LATAM,grocery,retail,123.74,4,0.145,coupon,2024-05-20\r\n22366,2360,EMEA,grocery,retail,88.91,1,0.164,coupon,2024-06-02\r\n22367,1694,APAC,grocery,mobile,21.95,1,0.024,none,2024-06-20\r\n22368,2188,EMEA,electronics,online,36.21,7,0.116,none,2024-03-02\r\n22369,2455,AMER,sports,online,27.50,4,0.159,loyalty,2024-11-27\r\n22370,1179,APAC,grocery,retail,80.14,5,0.102,none,2024-02-27\r\n22371,1125,LATAM,electronics,mobile,110.23,6,0.029,loyalty,2024-09-28\r\n22372,1700,EMEA,fashion,online,63.47,4,0.124,none,2024-11-24\r\n22373,1487,AMER,electronics,retail,73.30,8,0.075,none,2024-08-06\r\n22374,2246,AMER,grocery,online,27.99,7,0.186,bundle,2024-03-21\r\n22375,2416,LATAM,sports,mobile,56.15,3,0.177,bundle,2024-05-21\r\n22376,1762,LATAM,grocery,retail,71.33,5,0.227,coupon,2024-10-09\r\n22377,2445,APAC,fashion,online,96.27,3,0.076,coupon,2024-06-10\r\n22378,1545,AMER,sports,online,37.16,5,0.188,none,2024-09-20\r\n22379,1752,APAC,home,online,25.34,1,0.241,coupon,2024-05-03\r\n22380,1471,EMEA,electronics,online,237.34,3,0.186,bundle,2024-03-14\r\n22381,2117,EMEA,home,retail,51.12,8,0.189,none,2024-12-28\r\n22382,1612,LATAM,toys,partner,66.55,3,0.144,none,2024-10-02\r
\n22383,1043,LATAM,grocery,retail,68.98,8,0.091,none,2024-11-22\r\n22384,1443,EMEA,sports,retail,29.79,4,0.113,coupon,2024-04-08\r\n22385,2341,EMEA,grocery,online,96.31,3,0.130,none,2024-12-22\r\n22386,1030,EMEA,toys,online,91.72,1,0.182,coupon,2024-07-24\r\n22387,2176,AMER,grocery,retail,165.48,5,0.135,none,2024-04-12\r\n22388,1610,LATAM,grocery,online,27.66,5,0.143,coupon,2024-09-28\r\n22389,2209,AMER,home,online,42.64,4,0.237,none,2024-05-11\r\n22390,1711,APAC,home,mobile,38.22,3,0.151,coupon,2024-06-18\r\n22391,1122,AMER,grocery,retail,61.78,3,0.055,none,2024-12-15\r\n22392,1857,LATAM,electronics,retail,98.85,1,0.230,coupon,2024-10-08\r\n22393,2303,EMEA,grocery,online,22.56,4,0.019,coupon,2024-07-17\r\n22394,1264,APAC,electronics,retail,100.80,2,0.146,loyalty,2024-04-28\r\n22395,1575,APAC,electronics,partner,35.59,2,0.169,coupon,2024-07-18\r\n22396,1250,APAC,grocery,mobile,34.61,1,0.016,none,2024-04-02\r\n22397,2397,LATAM,electronics,retail,187.21,5,0.091,loyalty,2024-07-18\r\n22398,1011,APAC,home,mobile,68.68,6,0.185,loyalty,2024-10-25\r\n22399,1547,AMER,home,mobile,132.05,7,0.222,none,2024-02-25\r\n22400,1238,AMER,electronics,online,40.65,7,0.162,coupon,2024-03-09\r\n22401,1271,EMEA,home,online,77.75,2,0.027,none,2024-09-18\r\n22402,2050,APAC,home,online,34.00,5,0.216,none,2024-11-13\r\n22403,2114,AMER,fashion,online,72.12,5,0.167,coupon,2024-10-24\r\n22404,1287,AMER,sports,retail,47.00,2,0.209,none,2024-12-17\r\n22405,1286,EMEA,grocery,retail,35.66,7,0.009,none,2024-06-22\r\n22406,2348,EMEA,grocery,retail,145.28,8,0.046,none,2024-06-22\r\n22407,1670,EMEA,grocery,online,32.03,4,0.096,bundle,2024-04-16\r\n22408,1561,EMEA,electronics,mobile,39.62,6,0.186,none,2024-08-13\r\n22409,1863,EMEA,home,retail,20.53,7,0.090,none,2024-06-28\r\n22410,2265,APAC,toys,retail,45.53,7,0.179,none,2024-03-26\r\n22411,1264,APAC,home,online,32.51,3,0.000,none,2024-09-10\r\n22412,1515,EMEA,grocery,online,34.11,8,0.109,none,2024-02-05\r\n22413,2124,AMER,toys,online,38.40,3,0.108,coupo
n,2024-08-07\r\n22414,2467,AMER,home,mobile,31.12,5,0.178,none,2024-09-23\r\n22415,1414,APAC,electronics,online,49.39,7,0.036,bundle,2024-11-12\r\n22416,2389,LATAM,electronics,online,116.41,5,0.058,none,2024-12-27\r\n22417,1600,AMER,home,retail,55.30,3,0.162,none,2024-02-16\r\n22418,1076,LATAM,electronics,online,127.47,8,0.134,coupon,2024-05-10\r\n22419,2267,AMER,toys,online,131.22,4,0.078,coupon,2024-07-01\r\n22420,2356,LATAM,home,online,56.52,6,0.022,none,2024-09-10\r\n22421,1138,AMER,grocery,retail,91.41,8,0.109,none,2024-04-11\r\n22422,2404,EMEA,electronics,online,42.19,2,0.184,coupon,2024-04-06\r\n22423,2213,APAC,toys,mobile,121.69,6,0.058,loyalty,2024-03-16\r\n22424,1996,APAC,toys,mobile,104.43,8,0.212,loyalty,2024-12-04\r\n22425,2320,LATAM,sports,retail,61.53,3,0.095,bundle,2024-07-09\r\n22426,2206,AMER,sports,retail,79.33,2,0.148,bundle,2024-02-15\r\n22427,2464,LATAM,grocery,retail,65.07,8,0.056,none,2024-06-19\r\n22428,1355,EMEA,sports,partner,55.19,4,0.044,none,2024-11-28\r\n22429,2146,APAC,electronics,online,47.88,8,0.022,none,2024-08-07\r\n22430,2001,EMEA,fashion,retail,50.38,6,0.154,bundle,2024-01-01\r\n22431,1685,AMER,sports,retail,71.38,5,0.242,none,2024-12-15\r\n22432,2145,AMER,electronics,mobile,91.02,2,0.019,none,2024-03-01\r\n22433,1032,AMER,fashion,partner,45.26,5,0.088,none,2024-11-15\r\n22434,2445,APAC,home,online,43.02,7,0.078,coupon,2024-06-11\r\n22435,2327,EMEA,grocery,retail,68.72,7,0.166,none,2024-01-26\r\n22436,2254,LATAM,electronics,mobile,29.36,7,0.041,coupon,2024-11-04\r\n22437,1697,APAC,toys,online,44.87,6,0.098,coupon,2024-07-15\r\n22438,1741,AMER,electronics,online,17.70,4,0.002,none,2024-11-09\r\n22439,1271,EMEA,grocery,mobile,47.94,2,0.208,bundle,2024-08-21\r\n22440,2220,LATAM,home,retail,37.78,6,0.056,coupon,2024-04-05\r\n22441,2426,AMER,sports,online,121.00,2,0.196,coupon,2024-02-07\r\n22442,2329,LATAM,fashion,retail,64.83,7,0.106,none,2024-07-12\r\n22443,1896,EMEA,toys,retail,52.55,1,0.191,none,2024-08-01\r\n22444,2433,APAC,fas
hion,online,49.63,1,0.159,coupon,2024-08-26\r\n22445,1277,AMER,home,online,96.44,4,0.189,none,2024-09-25\r\n22446,2468,EMEA,fashion,retail,88.72,2,0.221,none,2024-09-06\r\n22447,1397,LATAM,grocery,online,24.54,2,0.002,coupon,2024-05-12\r\n22448,2138,APAC,grocery,mobile,51.79,7,0.149,none,2024-02-23\r\n22449,1442,EMEA,fashion,partner,44.17,3,0.169,bundle,2024-08-21\r\n22450,2169,EMEA,home,retail,63.13,2,0.208,coupon,2024-06-08\r\n22451,2048,LATAM,grocery,mobile,24.74,6,0.088,loyalty,2024-11-25\r\n22452,1649,APAC,electronics,online,11.84,7,0.017,none,2024-10-20\r\n22453,1120,LATAM,grocery,online,40.99,1,0.142,coupon,2024-05-13\r\n22454,1555,AMER,toys,online,49.54,5,0.097,none,2024-10-06\r\n22455,1426,AMER,sports,mobile,104.37,2,0.148,coupon,2024-11-24\r\n22456,2181,AMER,fashion,online,81.69,8,0.098,coupon,2024-08-21\r\n22457,1819,AMER,home,online,35.77,3,0.013,none,2024-12-02\r\n22458,1316,APAC,grocery,online,30.21,4,0.101,coupon,2024-01-26\r\n22459,1644,EMEA,grocery,retail,26.09,1,0.082,none,2024-08-14\r\n22460,1321,EMEA,fashion,retail,152.52,5,0.167,bundle,2024-08-05\r\n22461,2105,APAC,electronics,retail,81.20,1,0.156,none,2024-05-20\r\n22462,1678,LATAM,grocery,online,65.35,6,0.010,loyalty,2024-01-26\r\n22463,2279,LATAM,fashion,mobile,42.23,1,0.113,none,2024-11-03\r\n22464,1956,APAC,electronics,retail,45.50,1,0.201,none,2024-04-02\r\n22465,1174,APAC,sports,online,51.10,6,0.240,none,2024-06-15\r\n22466,1141,AMER,home,online,41.12,7,0.196,bundle,2024-05-03\r\n22467,1616,APAC,toys,mobile,63.50,5,0.182,none,2024-09-17\r\n22468,1726,EMEA,toys,mobile,34.87,4,0.057,coupon,2024-01-03\r\n22469,1056,LATAM,electronics,retail,13.66,8,0.187,none,2024-08-25\r\n22470,1205,APAC,electronics,retail,48.05,3,0.066,loyalty,2024-03-22\r\n22471,1222,AMER,home,online,72.53,2,0.244,none,2024-11-02\r\n22472,1439,LATAM,home,online,67.34,5,0.046,none,2024-12-23\r\n22473,1767,AMER,grocery,online,45.85,2,0.104,none,2024-11-25\r\n22474,1764,LATAM,home,online,128.33,4,0.040,none,2024-11-25\r\n2247
5,1682,EMEA,electronics,retail,89.47,5,0.001,none,2024-06-01\r\n22476,1878,EMEA,electronics,online,41.25,5,0.115,coupon,2024-01-23\r\n22477,1383,AMER,grocery,online,67.15,8,0.248,bundle,2024-12-18\r\n22478,2178,AMER,electronics,online,134.17,1,0.247,coupon,2024-05-26\r\n22479,1354,AMER,electronics,mobile,47.73,1,0.087,none,2024-10-05\r\n22480,1443,EMEA,toys,online,17.03,2,0.202,none,2024-10-22\r\n22481,1061,APAC,electronics,mobile,14.28,7,0.184,none,2024-03-10\r\n22482,1955,AMER,toys,mobile,40.91,1,0.115,coupon,2024-11-28\r\n22483,1215,LATAM,toys,retail,41.06,8,0.148,coupon,2024-03-14\r\n22484,1389,LATAM,electronics,mobile,111.34,6,0.034,none,2024-10-21\r\n22485,2187,EMEA,toys,online,58.21,6,0.221,none,2024-11-28\r\n22486,1833,EMEA,electronics,mobile,57.49,2,0.048,bundle,2024-05-15\r\n22487,1219,LATAM,electronics,online,56.31,7,0.110,bundle,2024-12-28\r\n22488,2282,EMEA,electronics,online,76.56,3,0.235,bundle,2024-06-07\r\n22489,1582,AMER,toys,online,70.17,2,0.030,coupon,2024-02-23\r\n22490,1616,APAC,electronics,retail,61.62,7,0.098,none,2024-12-07\r\n22491,2495,EMEA,sports,online,96.04,3,0.090,loyalty,2024-03-10\r\n22492,2498,LATAM,fashion,online,67.74,8,0.099,bundle,2024-12-14\r\n22493,1669,AMER,home,online,55.20,7,0.026,none,2024-12-08\r\n22494,2022,LATAM,home,online,18.51,5,0.042,coupon,2024-09-12\r\n22495,1886,LATAM,toys,retail,44.62,3,0.047,coupon,2024-03-17\r\n22496,1750,LATAM,home,retail,35.05,5,0.061,none,2024-01-08\r\n22497,2198,EMEA,electronics,online,76.95,3,0.203,none,2024-10-18\r\n22498,2408,EMEA,home,online,29.80,6,0.016,none,2024-02-02\r\n22499,2184,APAC,sports,partner,87.41,8,0.193,coupon,2024-05-05\r\n22500,1045,LATAM,sports,online,95.96,4,0.058,none,2024-08-21\r\n22501,2246,AMER,sports,retail,47.04,6,0.130,none,2024-07-08\r\n22502,1198,AMER,fashion,mobile,106.33,6,0.164,coupon,2024-11-26\r\n22503,2493,APAC,home,mobile,92.01,6,0.096,none,2024-04-16\r\n22504,1344,EMEA,fashion,retail,34.50,7,0.000,coupon,2024-03-07\r\n22505,2417,LATAM,sports,retail,3
1.81,8,0.246,none,2024-08-07\r\n22506,2329,LATAM,home,mobile,93.53,7,0.086,none,2024-03-08\r\n22507,1648,APAC,sports,retail,85.38,5,0.033,none,2024-06-20\r\n22508,1062,EMEA,fashion,retail,57.87,4,0.218,none,2024-12-08\r\n22509,1199,APAC,toys,mobile,90.96,4,0.145,none,2024-10-04\r\n22510,2142,LATAM,electronics,retail,48.79,6,0.191,bundle,2024-06-12\r\n22511,2294,EMEA,electronics,retail,27.28,1,0.136,coupon,2024-03-03\r\n22512,2019,AMER,home,retail,59.01,1,0.212,none,2024-10-25\r\n22513,1909,APAC,electronics,online,91.48,3,0.055,coupon,2024-03-04\r\n22514,2054,AMER,grocery,retail,40.05,1,0.099,coupon,2024-07-21\r\n22515,1202,APAC,home,online,31.33,1,0.157,bundle,2024-09-16\r\n22516,1261,APAC,electronics,mobile,42.90,6,0.096,none,2024-05-26\r\n22517,1819,AMER,fashion,partner,27.78,4,0.159,none,2024-12-28\r\n22518,1406,LATAM,grocery,online,148.31,3,0.142,none,2024-12-10\r\n22519,1138,AMER,sports,retail,131.96,2,0.119,bundle,2024-12-12\r\n22520,1141,AMER,electronics,online,50.34,2,0.212,coupon,2024-07-13\r\n22521,1351,APAC,toys,retail,43.36,1,0.086,bundle,2024-02-03\r\n22522,2177,AMER,home,retail,25.40,1,0.162,none,2024-07-27\r\n22523,1473,LATAM,sports,partner,72.05,4,0.156,none,2024-09-17\r\n22524,2066,APAC,sports,retail,75.77,7,0.244,none,2024-09-03\r\n22525,2255,AMER,sports,online,53.14,7,0.138,none,2024-02-25\r\n22526,1168,APAC,sports,mobile,295.18,6,0.132,bundle,2024-08-07\r\n22527,1457,EMEA,grocery,online,100.07,3,0.248,bundle,2024-02-10\r\n22528,1383,AMER,fashion,online,155.12,7,0.014,coupon,2024-09-15\r\n22529,1274,LATAM,electronics,online,144.84,3,0.020,coupon,2024-11-28\r\n22530,2106,LATAM,grocery,online,44.94,3,0.040,loyalty,2024-09-23\r\n22531,2141,AMER,grocery,retail,95.31,4,0.006,bundle,2024-01-03\r\n22532,2224,EMEA,electronics,online,55.64,5,0.202,none,2024-06-07\r\n22533,1145,AMER,grocery,online,70.93,2,0.079,none,2024-12-15\r\n22534,1305,EMEA,home,retail,81.82,8,0.002,none,2024-09-14\r\n22535,1819,AMER,grocery,mobile,19.86,7,0.110,bundle,2024-09-26\r\n22
536,1124,AMER,home,online,67.06,4,0.087,none,2024-10-06\r\n22537,1789,EMEA,grocery,online,43.83,4,0.050,none,2024-06-12\r\n22538,1126,LATAM,home,retail,106.78,3,0.037,none,2024-12-27\r\n22539,1074,LATAM,home,mobile,89.66,2,0.023,none,2024-01-24\r\n22540,1402,EMEA,home,retail,46.27,5,0.246,none,2024-11-08\r\n22541,2195,APAC,grocery,retail,70.62,3,0.055,coupon,2024-04-09\r\n22542,1380,AMER,grocery,online,54.76,2,0.030,none,2024-07-07\r\n22543,1552,EMEA,fashion,partner,26.76,8,0.180,coupon,2024-03-02\r\n22544,1703,AMER,home,mobile,72.89,1,0.171,none,2024-10-16\r\n22545,2304,LATAM,sports,online,49.93,3,0.185,coupon,2024-10-03\r\n22546,1346,AMER,sports,online,42.21,2,0.018,none,2024-05-04\r\n22547,1376,EMEA,toys,retail,108.61,2,0.011,none,2024-05-09\r\n22548,2168,EMEA,toys,retail,130.68,8,0.167,bundle,2024-03-12\r\n22549,1557,LATAM,fashion,online,54.02,4,0.241,bundle,2024-11-24\r\n22550,1958,APAC,grocery,mobile,94.27,1,0.179,none,2024-01-21\r\n22551,1987,AMER,home,online,40.32,4,0.164,loyalty,2024-12-24\r\n22552,1053,AMER,sports,online,51.17,8,0.007,loyalty,2024-08-14\r\n22553,1105,AMER,sports,online,35.84,7,0.184,none,2024-11-09\r\n22554,1732,LATAM,fashion,retail,85.51,7,0.082,none,2024-06-21\r\n22555,1400,EMEA,electronics,online,73.69,1,0.179,bundle,2024-11-06\r\n22556,2200,LATAM,electronics,online,42.54,8,0.037,bundle,2024-06-06\r\n22557,2327,EMEA,sports,online,33.84,5,0.180,bundle,2024-05-12\r\n22558,2158,APAC,grocery,mobile,18.64,8,0.017,none,2024-06-06\r\n22559,1874,LATAM,toys,online,45.51,7,0.027,none,2024-11-02\r\n22560,1699,APAC,electronics,retail,37.08,5,0.026,bundle,2024-12-02\r\n22561,1825,AMER,home,partner,67.09,7,0.092,bundle,2024-11-15\r\n22562,1260,LATAM,electronics,partner,111.85,4,0.195,none,2024-06-19\r\n22563,2413,AMER,home,online,39.21,4,0.231,none,2024-04-15\r\n22564,1098,APAC,electronics,retail,30.79,5,0.187,none,2024-03-15\r\n22565,1268,EMEA,home,mobile,44.81,5,0.019,bundle,2024-10-17\r\n22566,1193,APAC,home,online,52.62,3,0.068,coupon,2024-06-19\
r\n22567,2141,AMER,sports,retail,47.13,1,0.149,none,2024-01-20\r\n22568,1052,LATAM,grocery,retail,69.90,7,0.218,coupon,2024-01-21\r\n22569,2186,LATAM,grocery,retail,45.51,6,0.079,bundle,2024-09-08\r\n22570,1421,APAC,home,partner,16.33,8,0.006,none,2024-11-15\r\n22571,1945,AMER,home,online,50.35,6,0.140,bundle,2024-04-08\r\n22572,1216,APAC,grocery,online,72.75,7,0.147,coupon,2024-04-06\r\n22573,1239,APAC,electronics,online,36.36,7,0.124,coupon,2024-10-09\r\n22574,2326,LATAM,sports,online,81.80,7,0.013,none,2024-11-07\r\n22575,1706,EMEA,home,retail,83.01,8,0.005,none,2024-03-19\r\n22576,2014,EMEA,fashion,retail,39.52,8,0.091,loyalty,2024-01-02\r\n22577,1805,EMEA,sports,online,78.57,3,0.161,none,2024-04-23\r\n22578,1210,LATAM,toys,mobile,56.52,2,0.098,none,2024-05-23\r\n22579,2210,APAC,grocery,retail,32.86,7,0.053,none,2024-12-10\r\n22580,2178,AMER,grocery,mobile,106.82,3,0.210,loyalty,2024-02-18\r\n22581,2463,AMER,toys,online,69.15,7,0.069,coupon,2024-03-25\r\n22582,1304,LATAM,home,retail,57.93,4,0.103,none,2024-02-14\r\n22583,1092,AMER,toys,online,68.98,5,0.086,coupon,2024-06-21\r\n22584,1796,LATAM,home,retail,60.29,8,0.064,none,2024-09-15\r\n22585,2491,APAC,fashion,online,54.75,6,0.129,loyalty,2024-05-27\r\n22586,2169,EMEA,home,online,43.00,4,0.242,none,2024-05-19\r\n22587,2329,LATAM,electronics,retail,116.90,5,0.034,coupon,2024-03-11\r\n22588,2494,AMER,grocery,retail,57.48,6,0.140,coupon,2024-01-11\r\n22589,1046,EMEA,home,mobile,28.49,3,0.140,none,2024-04-20\r\n22590,2401,LATAM,sports,online,49.97,8,0.114,coupon,2024-06-23\r\n22591,2404,EMEA,grocery,retail,99.91,7,0.109,coupon,2024-03-25\r\n22592,1967,EMEA,grocery,online,11.42,7,0.092,none,2024-05-22\r\n22593,2039,EMEA,grocery,mobile,35.52,6,0.211,none,2024-02-07\r\n22594,1897,AMER,toys,online,125.40,6,0.181,none,2024-08-09\r\n22595,1501,AMER,fashion,online,26.33,1,0.205,loyalty,2024-07-04\r\n22596,2252,EMEA,fashion,retail,71.05,6,0.090,coupon,2024-06-28\r\n22597,1525,APAC,home,online,34.79,5,0.131,none,2024-04-02\
r\n22598,1794,AMER,home,online,240.56,6,0.022,coupon,2024-07-08\r\n22599,1475,LATAM,home,retail,87.72,2,0.124,none,2024-05-18\r\n22600,2485,AMER,sports,retail,23.81,8,0.036,none,2024-11-12\r\n22601,1769,LATAM,grocery,mobile,38.07,8,0.005,none,2024-07-27\r\n22602,1139,EMEA,home,online,54.75,6,0.181,loyalty,2024-12-26\r\n22603,1165,AMER,home,online,37.98,3,0.211,coupon,2024-01-18\r\n22604,1799,EMEA,electronics,retail,52.03,3,0.057,none,2024-03-10\r\n22605,1687,APAC,toys,retail,149.23,6,0.078,none,2024-08-11\r\n22606,1063,AMER,grocery,retail,79.70,2,0.174,loyalty,2024-02-20\r\n22607,2303,EMEA,home,retail,48.41,5,0.109,none,2024-12-24\r\n22608,1139,EMEA,toys,retail,52.09,6,0.079,none,2024-02-02\r\n22609,1069,APAC,fashion,online,103.76,4,0.236,coupon,2024-06-03\r\n22610,1108,EMEA,sports,online,93.47,1,0.202,none,2024-03-13\r\n22611,1816,EMEA,electronics,online,82.82,5,0.133,none,2024-02-15\r\n22612,2320,LATAM,sports,online,30.87,8,0.080,bundle,2024-06-13\r\n22613,1700,EMEA,grocery,mobile,57.38,4,0.246,none,2024-12-13\r\n22614,2228,EMEA,electronics,online,41.40,4,0.137,none,2024-07-17\r\n22615,1530,APAC,electronics,retail,36.53,4,0.127,none,2024-03-09\r\n22616,1379,EMEA,fashion,retail,52.81,7,0.022,none,2024-07-24\r\n22617,2293,LATAM,toys,online,67.26,2,0.194,loyalty,2024-03-01\r\n22618,1166,AMER,home,mobile,53.23,3,0.029,loyalty,2024-09-22\r\n22619,2214,AMER,grocery,online,117.72,6,0.189,loyalty,2024-04-08\r\n22620,1844,APAC,grocery,mobile,40.68,6,0.136,none,2024-05-21\r\n22621,2252,EMEA,electronics,retail,48.77,3,0.021,none,2024-02-07\r\n22622,1798,AMER,sports,online,52.80,4,0.249,bundle,2024-08-06\r\n22623,1614,EMEA,grocery,retail,60.72,4,0.016,bundle,2024-09-04\r\n22624,2292,EMEA,fashion,retail,79.16,8,0.045,bundle,2024-03-13\r\n22625,2200,LATAM,grocery,retail,13.18,7,0.159,coupon,2024-01-14\r\n22626,1676,LATAM,toys,retail,45.60,8,0.152,coupon,2024-06-25\r\n22627,1732,LATAM,sports,online,27.71,3,0.083,none,2024-08-17\r\n22628,1906,APAC,fashion,retail,50.57,7,0.232,loy
alty,2024-04-10\r\n22629,1256,LATAM,toys,mobile,27.79,5,0.231,coupon,2024-09-16\r\n22630,2118,AMER,home,online,55.83,4,0.248,none,2024-11-10\r\n22631,1799,EMEA,electronics,online,81.22,6,0.177,bundle,2024-05-04\r\n22632,1136,EMEA,grocery,online,51.91,2,0.112,none,2024-08-25\r\n22633,1373,LATAM,fashion,retail,87.41,1,0.007,bundle,2024-09-22\r\n22634,2336,APAC,sports,retail,48.05,6,0.044,coupon,2024-07-01\r\n22635,2172,EMEA,grocery,online,126.81,4,0.116,none,2024-07-19\r\n22636,1032,AMER,sports,retail,18.16,7,0.138,none,2024-09-22\r\n22637,2370,EMEA,fashion,online,49.10,5,0.095,none,2024-05-03\r\n22638,2132,LATAM,grocery,partner,35.84,7,0.024,none,2024-06-02\r\n22639,2134,AMER,grocery,mobile,39.50,6,0.171,coupon,2024-09-23\r\n22640,2414,EMEA,grocery,online,59.62,1,0.038,none,2024-08-13\r\n22641,2297,EMEA,grocery,online,39.28,1,0.011,bundle,2024-06-02\r\n22642,1624,AMER,home,online,46.78,8,0.083,none,2024-01-01\r\n22643,1220,LATAM,grocery,retail,32.63,8,0.060,bundle,2024-06-12\r\n22644,1700,EMEA,sports,online,16.20,1,0.249,none,2024-01-27\r\n22645,1916,AMER,home,retail,52.98,3,0.075,none,2024-05-15\r\n22646,2267,AMER,fashion,online,49.87,3,0.195,coupon,2024-04-09\r\n22647,1671,APAC,fashion,online,25.78,7,0.083,bundle,2024-06-28\r\n22648,1100,AMER,sports,mobile,103.30,5,0.149,none,2024-10-05\r\n22649,1023,APAC,fashion,online,63.13,2,0.066,loyalty,2024-03-17\r\n22650,2424,LATAM,grocery,retail,118.92,6,0.249,coupon,2024-11-14\r\n22651,2033,LATAM,grocery,retail,26.22,7,0.018,coupon,2024-02-03\r\n22652,2443,LATAM,electronics,online,52.94,6,0.146,none,2024-12-15\r\n22653,1514,LATAM,grocery,online,32.71,8,0.010,loyalty,2024-06-17\r\n22654,1125,LATAM,toys,online,86.66,3,0.147,bundle,2024-03-10\r\n22655,2027,EMEA,home,retail,66.18,5,0.232,none,2024-05-20\r\n22656,2438,AMER,grocery,online,145.27,1,0.020,none,2024-04-14\r\n22657,1952,EMEA,home,mobile,61.58,3,0.017,none,2024-06-19\r\n22658,2483,LATAM,fashion,online,86.96,5,0.070,none,2024-03-02\r\n22659,1775,EMEA,sports,online,114
.46,1,0.144,coupon,2024-06-15\r\n22660,1085,EMEA,home,retail,83.99,7,0.207,loyalty,2024-03-08\r\n22661,1182,EMEA,electronics,retail,27.92,8,0.165,none,2024-11-06\r\n22662,1249,EMEA,fashion,mobile,52.72,1,0.043,none,2024-03-13\r\n22663,1185,LATAM,sports,online,46.07,8,0.176,none,2024-05-04\r\n22664,2000,APAC,fashion,online,56.34,4,0.040,coupon,2024-06-16\r\n22665,1135,APAC,home,retail,61.12,3,0.092,none,2024-08-14\r\n22666,2245,APAC,electronics,retail,56.04,3,0.041,none,2024-02-07\r\n22667,2485,AMER,toys,retail,37.96,1,0.127,loyalty,2024-02-11\r\n22668,1091,EMEA,home,mobile,33.69,6,0.218,none,2024-12-24\r\n22669,1950,LATAM,toys,online,87.72,8,0.095,none,2024-02-15\r\n22670,1533,APAC,electronics,mobile,87.64,7,0.128,none,2024-12-16\r\n22671,2295,EMEA,electronics,online,111.60,7,0.144,none,2024-06-28\r\n22672,1668,AMER,electronics,retail,58.85,5,0.234,none,2024-11-28\r\n22673,2270,APAC,sports,online,23.19,3,0.240,none,2024-11-11\r\n22674,2246,AMER,electronics,mobile,133.86,7,0.152,none,2024-08-28\r\n22675,2427,LATAM,home,retail,13.33,7,0.220,bundle,2024-02-10\r\n22676,1032,AMER,toys,mobile,18.78,8,0.118,none,2024-11-24\r\n22677,1750,LATAM,electronics,online,89.81,1,0.162,none,2024-06-16\r\n22678,2248,LATAM,sports,retail,44.07,7,0.163,none,2024-09-13\r\n22679,2332,APAC,fashion,online,70.10,1,0.147,none,2024-01-03\r\n22680,2042,LATAM,electronics,online,68.32,4,0.032,bundle,2024-09-12\r\n22681,1177,LATAM,grocery,retail,38.74,8,0.048,none,2024-01-06\r\n22682,2391,EMEA,electronics,mobile,18.71,3,0.245,coupon,2024-03-02\r\n22683,1229,LATAM,grocery,retail,85.55,3,0.208,bundle,2024-08-27\r\n22684,2225,EMEA,home,online,50.36,2,0.115,none,2024-10-04\r\n22685,1654,EMEA,electronics,retail,50.22,5,0.181,none,2024-10-16\r\n22686,1104,APAC,grocery,online,118.20,8,0.073,none,2024-02-03\r\n22687,1689,LATAM,home,mobile,27.91,7,0.197,none,2024-02-14\r\n22688,1977,APAC,grocery,partner,72.81,3,0.151,loyalty,2024-03-17\r\n22689,1799,EMEA,home,retail,44.12,8,0.218,bundle,2024-08-04\r\n22690,
1683,AMER,fashion,online,103.24,5,0.212,none,2024-10-13\r\n22691,2385,APAC,grocery,partner,62.88,7,0.198,bundle,2024-02-04\r\n22692,2017,EMEA,electronics,online,77.16,2,0.213,coupon,2024-01-09\r\n22693,1383,AMER,home,mobile,33.93,6,0.225,none,2024-12-16\r\n22694,1367,AMER,grocery,retail,40.40,3,0.232,coupon,2024-07-08\r\n22695,2412,LATAM,toys,online,19.40,8,0.144,none,2024-08-05\r\n22696,1845,AMER,grocery,online,64.07,4,0.235,none,2024-05-06\r\n22697,2215,LATAM,sports,retail,22.76,8,0.178,none,2024-08-11\r\n22698,1643,EMEA,home,partner,129.82,6,0.025,loyalty,2024-11-27\r\n22699,1656,LATAM,electronics,retail,72.92,7,0.202,coupon,2024-07-19\r\n22700,2287,EMEA,home,online,64.88,6,0.184,bundle,2024-09-17\r\n22701,1946,AMER,grocery,online,88.44,5,0.112,bundle,2024-11-06\r\n22702,2232,EMEA,electronics,mobile,91.32,4,0.086,none,2024-12-18\r\n22703,1579,AMER,home,online,52.10,3,0.207,coupon,2024-02-28\r\n22704,1611,EMEA,grocery,retail,56.45,1,0.231,none,2024-04-06\r\n22705,1921,LATAM,fashion,retail,158.93,7,0.188,loyalty,2024-04-14\r\n22706,2023,LATAM,fashion,online,130.85,5,0.109,none,2024-12-14\r\n22707,1669,AMER,toys,retail,82.58,8,0.016,none,2024-11-15\r\n22708,2093,LATAM,grocery,mobile,30.30,6,0.155,none,2024-05-04\r\n22709,2145,AMER,sports,retail,44.53,2,0.094,none,2024-04-08\r\n22710,1795,EMEA,grocery,online,45.72,1,0.074,loyalty,2024-12-08\r\n22711,1636,APAC,home,online,43.70,7,0.205,none,2024-01-25\r\n22712,1822,EMEA,electronics,mobile,25.01,2,0.121,coupon,2024-01-01\r\n22713,1545,AMER,grocery,mobile,19.81,7,0.139,coupon,2024-06-10\r\n22714,1526,EMEA,electronics,retail,72.83,8,0.014,bundle,2024-11-13\r\n22715,2173,LATAM,home,mobile,32.99,4,0.098,none,2024-04-17\r\n22716,1620,LATAM,sports,mobile,127.86,5,0.106,none,2024-07-28\r\n22717,2420,EMEA,electronics,online,83.85,1,0.039,none,2024-08-12\r\n22718,1851,EMEA,grocery,online,27.05,2,0.000,coupon,2024-09-21\r\n22719,1203,AMER,grocery,retail,44.75,5,0.091,none,2024-01-08\r\n22720,1135,APAC,electronics,mobile,84.04,3,
0.005,coupon,2024-01-01\r\n22721,1889,APAC,electronics,online,124.52,2,0.233,none,2024-03-27\r\n22722,1434,EMEA,electronics,online,47.69,6,0.230,bundle,2024-06-14\r\n22723,1242,LATAM,grocery,online,51.34,7,0.056,none,2024-02-28\r\n22724,1379,EMEA,sports,retail,73.09,6,0.049,none,2024-01-07\r\n22725,2244,LATAM,toys,online,23.71,7,0.204,none,2024-03-07\r\n22726,2239,EMEA,grocery,retail,40.67,1,0.225,bundle,2024-08-08\r\n22727,1550,APAC,home,online,40.40,4,0.016,none,2024-08-02\r\n22728,1753,APAC,electronics,mobile,131.47,6,0.246,none,2024-08-03\r\n22729,1177,LATAM,electronics,retail,74.32,7,0.104,bundle,2024-10-18\r\n22730,1016,AMER,fashion,retail,41.45,6,0.093,coupon,2024-09-19\r\n22731,1148,AMER,grocery,online,101.42,6,0.022,none,2024-12-13\r\n22732,2231,LATAM,electronics,online,138.12,4,0.210,coupon,2024-12-25\r\n22733,1652,APAC,home,online,87.00,8,0.006,coupon,2024-08-01\r\n22734,1688,LATAM,electronics,retail,52.14,2,0.233,bundle,2024-09-01\r\n22735,1796,LATAM,home,online,62.02,1,0.101,loyalty,2024-04-14\r\n22736,1893,APAC,toys,online,67.33,4,0.023,none,2024-04-15\r\n22737,1153,AMER,toys,retail,64.04,4,0.062,none,2024-06-26\r\n22738,2151,APAC,fashion,retail,76.51,8,0.081,loyalty,2024-05-25\r\n22739,1391,LATAM,home,retail,56.68,2,0.064,coupon,2024-02-04\r\n22740,1706,EMEA,grocery,retail,81.47,4,0.203,none,2024-05-07\r\n22741,2440,APAC,electronics,retail,44.18,7,0.162,none,2024-10-15\r\n22742,1749,LATAM,electronics,retail,68.43,6,0.010,bundle,2024-05-14\r\n22743,1606,AMER,grocery,online,79.37,6,0.232,none,2024-01-10\r\n22744,2099,AMER,home,retail,30.50,8,0.078,coupon,2024-05-26\r\n22745,1990,EMEA,home,partner,34.40,5,0.059,none,2024-05-05\r\n22746,1464,APAC,fashion,mobile,40.00,6,0.039,loyalty,2024-06-26\r\n22747,2471,APAC,grocery,retail,58.91,6,0.015,coupon,2024-09-09\r\n22748,1278,AMER,toys,retail,58.11,2,0.192,none,2024-12-21\r\n22749,2095,EMEA,grocery,retail,45.96,2,0.238,none,2024-08-21\r\n22750,1881,LATAM,fashion,online,40.29,1,0.214,none,2024-06-28\r\n22751,2
324,AMER,electronics,partner,94.46,7,0.157,coupon,2024-07-13\r\n22752,2154,APAC,sports,mobile,45.60,1,0.138,coupon,2024-03-09\r\n22753,1278,AMER,electronics,online,122.75,3,0.085,none,2024-05-07\r\n22754,1712,LATAM,fashion,partner,160.67,5,0.226,none,2024-08-09\r\n22755,1019,APAC,fashion,online,44.69,2,0.006,coupon,2024-05-28\r\n22756,1005,LATAM,fashion,retail,117.75,1,0.110,loyalty,2024-08-05\r\n22757,2110,LATAM,home,retail,63.45,4,0.185,none,2024-10-07\r\n22758,1332,APAC,grocery,online,38.97,4,0.140,coupon,2024-02-19\r\n22759,2011,AMER,grocery,online,29.20,2,0.048,none,2024-08-06\r\n22760,1654,EMEA,sports,online,120.73,3,0.014,coupon,2024-10-13\r\n22761,1301,AMER,sports,online,59.28,6,0.118,none,2024-11-28\r\n22762,1528,EMEA,sports,retail,48.91,7,0.073,none,2024-01-26\r\n22763,1691,LATAM,fashion,online,55.25,7,0.075,none,2024-08-23\r\n22764,1131,APAC,fashion,mobile,23.10,7,0.185,bundle,2024-11-01\r\n22765,1388,AMER,fashion,retail,62.45,4,0.118,none,2024-07-06\r\n22766,1665,AMER,electronics,retail,86.45,6,0.067,loyalty,2024-10-24\r\n22767,1020,APAC,home,online,61.61,8,0.170,coupon,2024-08-26\r\n22768,1208,AMER,electronics,online,112.06,1,0.210,coupon,2024-07-20\r\n22769,2273,APAC,home,online,14.42,6,0.160,none,2024-06-14\r\n22770,2413,AMER,home,online,82.16,8,0.017,none,2024-05-01\r\n22771,1918,EMEA,sports,retail,42.53,1,0.129,none,2024-06-23\r\n22772,1415,AMER,toys,retail,206.86,4,0.026,none,2024-02-27\r\n22773,1912,APAC,electronics,online,105.54,8,0.145,none,2024-05-01\r\n22774,1663,LATAM,fashion,retail,68.26,3,0.037,none,2024-06-14\r\n22775,1817,APAC,grocery,online,56.48,3,0.206,none,2024-06-28\r\n22776,1190,EMEA,sports,online,46.81,2,0.230,none,2024-07-21\r\n22777,1066,AMER,electronics,online,73.01,1,0.224,none,2024-05-08\r\n22778,2263,AMER,electronics,online,61.92,7,0.244,loyalty,2024-05-03\r\n22779,1609,LATAM,electronics,online,61.09,4,0.013,none,2024-07-08\r\n22780,2090,AMER,home,retail,35.89,5,0.179,coupon,2024-01-05\r\n22781,1466,AMER,toys,online,41.99,5,0
.007,none,2024-10-15\r\n22782,1268,EMEA,electronics,mobile,34.19,8,0.013,loyalty,2024-12-18\r\n22783,1624,AMER,home,mobile,31.67,5,0.246,coupon,2024-07-19\r\n22784,1728,AMER,sports,online,41.56,3,0.089,loyalty,2024-10-08\r\n22785,1719,LATAM,toys,retail,55.43,2,0.105,bundle,2024-04-03\r\n22786,1484,AMER,grocery,online,54.30,5,0.089,none,2024-07-19\r\n22787,1897,AMER,sports,online,87.55,7,0.155,none,2024-03-28\r\n22788,1474,LATAM,home,retail,52.00,1,0.063,coupon,2024-06-10\r\n22789,1869,AMER,fashion,partner,26.59,8,0.119,none,2024-01-12\r\n22790,2342,AMER,toys,retail,29.43,4,0.230,none,2024-12-20\r\n22791,1394,LATAM,home,online,51.10,5,0.184,none,2024-01-20\r\n22792,1687,APAC,grocery,mobile,88.13,8,0.083,none,2024-02-22\r\n22793,2313,LATAM,sports,online,51.48,1,0.006,none,2024-07-25\r\n22794,2178,AMER,sports,retail,49.85,2,0.225,coupon,2024-09-28\r\n22795,1758,AMER,grocery,retail,30.83,3,0.008,none,2024-11-25\r\n22796,2180,AMER,electronics,online,81.75,2,0.148,none,2024-08-25\r\n22797,1049,AMER,home,online,30.82,8,0.031,loyalty,2024-07-21\r\n22798,1463,EMEA,electronics,retail,80.60,4,0.017,none,2024-04-18\r\n22799,1934,EMEA,sports,online,52.75,6,0.033,coupon,2024-08-09\r\n22800,1201,LATAM,home,mobile,88.32,8,0.037,none,2024-04-18\r\n22801,1681,LATAM,sports,online,58.14,4,0.105,none,2024-03-16\r\n22802,2494,AMER,grocery,retail,57.23,1,0.062,bundle,2024-12-04\r\n22803,2415,AMER,home,mobile,52.77,3,0.162,none,2024-12-07\r\n22804,1910,LATAM,home,online,117.40,5,0.111,bundle,2024-02-01\r\n22805,1865,LATAM,fashion,online,68.50,6,0.087,loyalty,2024-03-09\r\n22806,2321,APAC,electronics,retail,80.56,2,0.193,bundle,2024-01-24\r\n22807,2445,APAC,fashion,online,19.58,4,0.210,coupon,2024-01-24\r\n22808,2458,EMEA,grocery,mobile,101.54,2,0.074,none,2024-03-16\r\n22809,1737,AMER,grocery,online,63.05,3,0.225,none,2024-10-06\r\n22810,2150,APAC,grocery,online,28.11,5,0.141,bundle,2024-02-05\r\n22811,2258,AMER,fashion,online,87.32,7,0.020,bundle,2024-08-01\r\n22812,1669,AMER,grocery,reta
il,101.77,6,0.219,bundle,2024-10-27\r\n22813,1314,AMER,electronics,online,52.47,1,0.042,none,2024-09-28\r\n22814,2342,AMER,electronics,retail,41.36,1,0.171,none,2024-01-11\r\n22815,1600,AMER,sports,online,35.27,3,0.237,coupon,2024-11-15\r\n22816,2457,EMEA,fashion,retail,59.04,6,0.110,none,2024-08-06\r\n22817,2200,LATAM,fashion,retail,35.86,3,0.169,coupon,2024-12-14\r\n22818,1909,APAC,toys,retail,66.54,1,0.003,none,2024-07-21\r\n22819,2350,APAC,grocery,mobile,140.33,7,0.169,loyalty,2024-04-25\r\n22820,1177,LATAM,electronics,online,52.76,5,0.049,none,2024-09-19\r\n22821,1068,APAC,grocery,retail,52.33,6,0.099,bundle,2024-10-23\r\n22822,2338,AMER,fashion,online,100.11,7,0.234,coupon,2024-07-10\r\n22823,1436,APAC,grocery,online,31.51,8,0.148,none,2024-08-09\r\n22824,2310,EMEA,toys,retail,47.25,5,0.180,coupon,2024-06-01\r\n22825,1651,LATAM,toys,retail,82.61,7,0.222,none,2024-11-03\r\n22826,1279,EMEA,grocery,online,48.32,3,0.108,none,2024-08-19\r\n22827,1351,APAC,fashion,retail,59.22,3,0.091,none,2024-10-16\r\n22828,1401,LATAM,fashion,retail,55.04,3,0.033,none,2024-06-01\r\n22829,1244,LATAM,sports,retail,27.52,5,0.181,coupon,2024-06-08\r\n22830,1867,AMER,home,retail,30.67,8,0.235,coupon,2024-02-24\r\n22831,1226,AMER,sports,retail,68.14,7,0.226,loyalty,2024-07-07\r\n22832,2154,APAC,toys,online,34.93,6,0.013,coupon,2024-05-09\r\n22833,1562,AMER,grocery,online,54.81,2,0.201,none,2024-02-21\r\n22834,1582,AMER,fashion,retail,27.22,8,0.206,loyalty,2024-01-04\r\n22835,1674,LATAM,grocery,retail,30.09,8,0.128,none,2024-09-04\r\n22836,1030,EMEA,home,online,73.70,6,0.088,bundle,2024-07-10\r\n22837,1667,AMER,grocery,partner,19.68,4,0.016,none,2024-12-17\r\n22838,2223,EMEA,electronics,partner,45.36,2,0.247,bundle,2024-04-18\r\n22839,2485,AMER,sports,mobile,70.25,8,0.015,none,2024-02-06\r\n22840,2110,LATAM,electronics,retail,101.37,7,0.023,loyalty,2024-05-07\r\n22841,2349,APAC,grocery,online,53.12,3,0.112,coupon,2024-04-19\r\n22842,2101,APAC,electronics,retail,78.43,3,0.017,none,2024-06
-12\r\n22843,2488,EMEA,electronics,mobile,43.60,8,0.204,none,2024-03-13\r\n22844,1692,LATAM,grocery,retail,71.70,6,0.225,none,2024-09-14\r\n22845,1994,LATAM,sports,mobile,41.22,4,0.015,bundle,2024-01-04\r\n22846,2090,AMER,fashion,online,121.74,7,0.094,bundle,2024-01-14\r\n22847,1121,EMEA,electronics,mobile,75.88,3,0.216,bundle,2024-08-10\r\n22848,1614,EMEA,electronics,online,67.87,1,0.025,loyalty,2024-01-28\r\n22849,2419,LATAM,fashion,retail,75.70,5,0.002,bundle,2024-04-12\r\n22850,1731,AMER,grocery,retail,159.86,1,0.151,none,2024-01-03\r\n22851,1111,APAC,home,online,31.97,5,0.191,none,2024-02-24\r\n22852,1007,APAC,electronics,online,18.32,8,0.096,none,2024-10-03\r\n22853,1341,EMEA,toys,online,8.27,7,0.080,coupon,2024-03-11\r\n22854,1759,EMEA,fashion,retail,56.48,3,0.220,none,2024-05-13\r\n22855,1457,EMEA,grocery,retail,43.97,7,0.001,coupon,2024-01-17\r\n22856,2433,APAC,toys,online,65.48,1,0.174,coupon,2024-05-07\r\n22857,1477,APAC,grocery,retail,62.73,3,0.230,bundle,2024-01-11\r\n22858,2326,LATAM,electronics,retail,85.59,7,0.192,none,2024-10-16\r\n22859,2204,AMER,fashion,retail,62.15,4,0.001,none,2024-10-21\r\n22860,2214,AMER,home,partner,40.15,8,0.126,none,2024-08-28\r\n22861,2498,LATAM,sports,retail,31.53,7,0.236,none,2024-12-12\r\n22862,1285,EMEA,home,mobile,47.65,7,0.079,none,2024-08-28\r\n22863,1385,LATAM,fashion,retail,45.14,6,0.238,bundle,2024-08-26\r\n22864,1161,AMER,grocery,retail,100.55,7,0.205,none,2024-06-13\r\n22865,1246,EMEA,grocery,retail,67.30,1,0.231,none,2024-06-09\r\n22866,1770,AMER,grocery,online,38.35,6,0.201,none,2024-08-21\r\n22867,2170,EMEA,fashion,retail,27.57,6,0.154,none,2024-04-08\r\n22868,2240,LATAM,grocery,mobile,35.26,6,0.192,loyalty,2024-05-27\r\n22869,2256,AMER,toys,retail,38.38,3,0.178,none,2024-07-22\r\n22870,2351,EMEA,fashion,retail,28.32,3,0.013,none,2024-09-15\r\n22871,1612,LATAM,grocery,retail,39.92,8,0.012,bundle,2024-09-06\r\n22872,2299,EMEA,fashion,retail,45.19,8,0.021,none,2024-03-09\r\n22873,1853,APAC,fashion,retail,71.82
,8,0.086,coupon,2024-11-08\r\n22874,2439,AMER,electronics,online,37.31,4,0.246,loyalty,2024-06-12\r\n22875,1541,APAC,sports,online,44.05,7,0.035,none,2024-03-09\r\n22876,1207,APAC,grocery,retail,96.24,3,0.031,none,2024-05-02\r\n22877,1028,EMEA,toys,online,39.92,5,0.152,none,2024-11-03\r\n22878,2266,LATAM,home,online,44.57,3,0.203,coupon,2024-04-10\r\n22879,2308,AMER,electronics,online,61.57,8,0.172,none,2024-07-13\r\n22880,1664,LATAM,home,retail,33.56,8,0.037,none,2024-09-02\r\n22881,2053,AMER,sports,online,94.22,5,0.059,coupon,2024-02-26\r\n22882,1656,LATAM,electronics,partner,108.98,7,0.181,none,2024-12-21\r\n22883,2193,AMER,electronics,retail,51.34,1,0.131,none,2024-02-04\r\n22884,1836,LATAM,grocery,mobile,26.09,1,0.065,none,2024-04-25\r\n22885,1952,EMEA,sports,online,53.53,5,0.139,none,2024-02-18\r\n22886,2191,AMER,grocery,online,28.41,1,0.072,none,2024-02-07\r\n22887,1425,EMEA,sports,mobile,90.70,2,0.223,coupon,2024-07-21\r\n22888,2258,AMER,home,retail,42.92,3,0.106,none,2024-10-08\r\n22889,2249,LATAM,sports,mobile,71.80,8,0.093,bundle,2024-12-14\r\n22890,1958,APAC,toys,online,78.40,3,0.121,bundle,2024-06-12\r\n22891,2330,EMEA,toys,mobile,47.83,5,0.242,coupon,2024-11-05\r\n22892,2288,AMER,sports,mobile,51.23,2,0.201,none,2024-09-04\r\n22893,1314,AMER,electronics,partner,36.98,5,0.169,none,2024-09-24\r\n22894,2157,AMER,home,online,103.12,3,0.120,none,2024-11-05\r\n22895,2329,LATAM,grocery,online,46.30,2,0.192,none,2024-03-01\r\n22896,2470,EMEA,electronics,partner,66.64,1,0.172,coupon,2024-04-02\r\n22897,2482,EMEA,fashion,retail,137.28,1,0.027,coupon,2024-10-23\r\n22898,1717,AMER,electronics,retail,85.12,2,0.165,none,2024-09-04\r\n22899,2257,AMER,grocery,mobile,109.55,7,0.239,none,2024-04-23\r\n22900,1281,AMER,grocery,retail,43.03,6,0.205,loyalty,2024-10-18\r\n22901,2221,LATAM,electronics,online,78.98,4,0.118,none,2024-02-16\r\n22902,2315,LATAM,grocery,retail,44.02,7,0.173,none,2024-08-02\r\n22903,1481,LATAM,electronics,online,190.42,6,0.088,none,2024-08-22\r\n22
904,2049,LATAM,home,retail,119.73,7,0.127,bundle,2024-11-25\r\n22905,1046,EMEA,home,online,36.97,5,0.121,none,2024-12-05\r\n22906,2061,EMEA,grocery,retail,102.90,2,0.003,bundle,2024-08-28\r\n22907,1308,EMEA,toys,online,184.38,1,0.213,none,2024-12-06\r\n22908,1847,LATAM,sports,online,62.65,5,0.110,none,2024-01-02\r\n22909,1305,EMEA,fashion,online,65.44,7,0.188,none,2024-11-21\r\n22910,2202,APAC,grocery,partner,50.81,5,0.034,bundle,2024-03-13\r\n22911,2459,AMER,fashion,online,45.52,6,0.067,none,2024-09-21\r\n22912,1398,APAC,sports,online,30.94,6,0.108,none,2024-03-22\r\n22913,2285,APAC,grocery,online,21.67,3,0.222,coupon,2024-12-13\r\n22914,2037,LATAM,toys,retail,45.23,3,0.160,none,2024-02-11\r\n22915,1622,LATAM,grocery,retail,300.27,4,0.147,none,2024-09-16\r\n22916,2140,AMER,home,mobile,25.38,3,0.133,none,2024-06-09\r\n22917,1478,EMEA,home,retail,86.47,6,0.222,none,2024-10-16\r\n22918,1798,AMER,electronics,online,68.90,7,0.196,coupon,2024-07-24\r\n22919,2488,EMEA,electronics,retail,152.48,8,0.012,none,2024-08-02\r\n22920,1988,AMER,grocery,mobile,76.33,3,0.102,coupon,2024-01-07\r\n22921,1132,EMEA,grocery,mobile,81.46,6,0.146,loyalty,2024-01-15\r\n22922,2087,LATAM,home,online,27.88,6,0.191,bundle,2024-09-22\r\n22923,2319,AMER,grocery,mobile,107.47,7,0.082,coupon,2024-11-19\r\n22924,2103,LATAM,fashion,online,38.11,3,0.174,coupon,2024-04-19\r\n22925,1263,AMER,electronics,retail,57.42,6,0.191,none,2024-02-09\r\n22926,1284,APAC,toys,retail,21.39,5,0.171,bundle,2024-08-16\r\n22927,2003,LATAM,fashion,retail,17.50,5,0.050,none,2024-10-23\r\n22928,2461,LATAM,grocery,online,58.89,7,0.197,coupon,2024-05-28\r\n22929,2337,AMER,toys,online,34.58,3,0.038,coupon,2024-10-16\r\n22930,2465,EMEA,home,online,71.47,6,0.207,coupon,2024-11-02\r\n22931,2101,APAC,grocery,retail,115.34,7,0.123,none,2024-07-06\r\n22932,2404,EMEA,electronics,retail,45.06,2,0.071,loyalty,2024-04-27\r\n22933,1061,APAC,fashion,online,25.39,6,0.092,coupon,2024-02-20\r\n22934,2488,EMEA,fashion,partner,21.67,3,0.150,no
ne,2024-11-10\r\n22935,2280,EMEA,home,online,73.21,5,0.130,bundle,2024-03-01\r\n22936,2089,EMEA,home,retail,50.45,5,0.246,loyalty,2024-12-04\r\n22937,2098,AMER,electronics,online,48.19,1,0.048,none,2024-07-24\r\n22938,1222,AMER,home,online,128.16,7,0.072,none,2024-01-14\r\n22939,1164,EMEA,electronics,retail,74.88,1,0.113,none,2024-09-07\r\n22940,1491,EMEA,grocery,retail,29.67,4,0.182,none,2024-05-07\r\n22941,1459,LATAM,home,online,49.87,1,0.218,none,2024-06-24\r\n22942,1384,LATAM,grocery,retail,82.55,5,0.202,loyalty,2024-04-19\r\n22943,1159,LATAM,fashion,retail,104.43,3,0.181,none,2024-12-04\r\n22944,1028,EMEA,fashion,retail,159.40,2,0.142,none,2024-09-01\r\n22945,1014,EMEA,fashion,online,63.24,2,0.099,none,2024-04-25\r\n22946,1319,EMEA,grocery,retail,97.01,7,0.237,coupon,2024-03-01\r\n22947,2061,EMEA,home,online,29.38,6,0.240,bundle,2024-06-21\r\n22948,1401,LATAM,grocery,online,80.14,4,0.176,coupon,2024-12-25\r\n22949,1679,APAC,sports,online,139.66,8,0.033,none,2024-03-09\r\n22950,1588,LATAM,electronics,retail,62.46,8,0.112,bundle,2024-04-04\r\n22951,1317,EMEA,electronics,mobile,95.62,7,0.170,none,2024-12-02\r\n22952,1154,LATAM,home,online,119.66,5,0.066,none,2024-12-15\r\n22953,1102,APAC,home,online,27.66,3,0.036,none,2024-07-17\r\n22954,1466,AMER,home,retail,37.38,2,0.225,none,2024-09-16\r\n22955,1333,EMEA,home,mobile,82.19,5,0.181,loyalty,2024-08-10\r\n22956,1456,APAC,home,online,59.76,5,0.198,none,2024-02-10\r\n22957,2226,EMEA,grocery,online,47.32,4,0.072,none,2024-07-21\r\n22958,2224,EMEA,electronics,online,43.46,6,0.031,none,2024-07-10\r\n22959,1332,APAC,home,online,44.82,8,0.199,none,2024-01-08\r\n22960,1545,AMER,toys,mobile,36.41,4,0.049,coupon,2024-08-16\r\n22961,1138,AMER,toys,online,54.00,2,0.031,coupon,2024-01-28\r\n22962,1296,LATAM,toys,retail,19.52,5,0.050,none,2024-02-24\r\n22963,1058,LATAM,electronics,retail,55.28,7,0.158,coupon,2024-03-14\r\n22964,2316,EMEA,sports,retail,44.45,4,0.234,none,2024-09-14\r\n22965,1586,LATAM,toys,retail,83.09,3,0.106,lo
yalty,2024-07-18\r\n22966,1126,LATAM,fashion,retail,24.31,3,0.072,loyalty,2024-07-08\r\n22967,2496,EMEA,grocery,retail,101.53,7,0.242,loyalty,2024-07-09\r\n22968,2427,LATAM,sports,partner,23.96,5,0.072,none,2024-01-05\r\n22969,1972,LATAM,electronics,retail,37.16,6,0.123,none,2024-11-09\r\n22970,2352,APAC,electronics,retail,30.02,1,0.232,coupon,2024-07-11\r\n22971,1930,AMER,electronics,online,133.10,7,0.080,bundle,2024-10-24\r\n22972,1849,EMEA,electronics,retail,96.03,6,0.010,none,2024-09-11\r\n22973,1319,EMEA,grocery,online,73.05,1,0.083,coupon,2024-03-24\r\n22974,1674,LATAM,electronics,online,54.35,4,0.058,none,2024-12-27\r\n22975,1855,APAC,fashion,online,38.74,6,0.168,coupon,2024-04-10\r\n22976,1672,APAC,grocery,online,69.82,7,0.233,none,2024-01-23\r\n22977,2085,AMER,home,mobile,50.32,8,0.113,none,2024-05-22\r\n22978,1269,LATAM,home,mobile,133.92,7,0.196,coupon,2024-04-16\r\n22979,2083,LATAM,fashion,retail,14.22,1,0.136,none,2024-08-23\r\n22980,1556,AMER,electronics,online,37.81,5,0.022,loyalty,2024-04-03\r\n22981,1619,APAC,fashion,mobile,60.24,7,0.185,loyalty,2024-08-25\r\n22982,1397,LATAM,toys,partner,33.74,2,0.244,none,2024-11-17\r\n22983,1710,APAC,fashion,retail,54.50,5,0.023,none,2024-09-20\r\n22984,2102,APAC,electronics,mobile,331.46,5,0.082,coupon,2024-08-11\r\n22985,1805,EMEA,fashion,online,21.19,2,0.042,coupon,2024-09-07\r\n22986,1560,AMER,grocery,retail,122.22,7,0.115,none,2024-04-11\r\n22987,1930,AMER,grocery,retail,50.06,8,0.048,none,2024-02-11\r\n22988,2112,LATAM,home,online,76.53,5,0.208,coupon,2024-02-02\r\n22989,1860,EMEA,toys,online,53.68,6,0.025,none,2024-09-28\r\n22990,1585,AMER,electronics,online,26.76,4,0.097,loyalty,2024-02-05\r\n22991,1929,LATAM,home,online,119.79,5,0.163,coupon,2024-03-27\r\n22992,1224,APAC,home,retail,80.09,5,0.187,bundle,2024-05-20\r\n22993,1140,LATAM,home,online,53.06,6,0.215,none,2024-07-09\r\n22994,2343,EMEA,electronics,retail,23.09,8,0.030,coupon,2024-01-23\r\n22995,2361,EMEA,toys,online,105.98,1,0.205,bundle,2024-01-
17\r\n22996,1521,LATAM,sports,online,59.68,8,0.248,coupon,2024-01-08\r\n22997,1145,AMER,sports,mobile,73.27,6,0.171,coupon,2024-07-14\r\n22998,2091,LATAM,grocery,online,60.31,6,0.031,none,2024-03-11\r\n22999,2272,EMEA,grocery,retail,54.94,1,0.141,none,2024-04-19\r\n23000,2324,AMER,grocery,retail,67.82,1,0.134,bundle,2024-05-15\r\n23001,2154,APAC,electronics,retail,77.14,7,0.175,coupon,2024-08-01\r\n23002,2322,AMER,home,partner,42.30,1,0.222,coupon,2024-09-16\r\n23003,1450,EMEA,fashion,partner,50.06,4,0.246,none,2024-11-06\r\n23004,1136,EMEA,fashion,retail,46.25,2,0.123,coupon,2024-12-06\r\n23005,1878,EMEA,electronics,online,32.95,6,0.242,loyalty,2024-02-22\r\n23006,1931,APAC,grocery,partner,40.68,5,0.022,none,2024-02-05\r\n23007,1022,APAC,home,mobile,49.21,6,0.086,none,2024-08-21\r\n23008,1467,LATAM,fashion,online,65.21,5,0.238,none,2024-01-19\r\n23009,1909,APAC,toys,retail,60.29,1,0.086,loyalty,2024-10-11\r\n23010,1539,LATAM,sports,online,65.16,4,0.066,coupon,2024-09-12\r\n23011,2254,LATAM,grocery,retail,49.38,8,0.170,none,2024-03-07\r\n23012,2011,AMER,toys,online,54.32,6,0.055,none,2024-08-07\r\n23013,1615,LATAM,sports,online,43.58,6,0.102,coupon,2024-12-19\r\n23014,1082,EMEA,sports,mobile,24.12,1,0.089,none,2024-02-03\r\n23015,1250,APAC,electronics,online,43.06,1,0.247,loyalty,2024-11-10\r\n23016,1611,EMEA,grocery,mobile,55.38,6,0.200,none,2024-01-28\r\n23017,1903,LATAM,grocery,retail,64.87,6,0.087,coupon,2024-11-04\r\n23018,2328,EMEA,sports,partner,139.76,2,0.166,none,2024-06-05\r\n23019,1724,LATAM,toys,mobile,55.68,2,0.155,coupon,2024-11-15\r\n23020,1972,LATAM,grocery,online,50.70,4,0.081,coupon,2024-07-25\r\n23021,1848,EMEA,grocery,online,58.92,6,0.196,coupon,2024-06-23\r\n23022,2293,LATAM,grocery,online,38.92,5,0.144,coupon,2024-12-03\r\n23023,1292,LATAM,electronics,retail,50.06,3,0.156,none,2024-07-25\r\n23024,1937,APAC,electronics,mobile,203.61,1,0.198,coupon,2024-11-12\r\n23025,2273,APAC,sports,retail,56.62,6,0.120,bundle,2024-04-17\r\n23026,1611,EMEA,toys
,mobile,35.90,5,0.150,none,2024-07-23\r\n23027,1401,LATAM,home,mobile,212.88,3,0.017,coupon,2024-01-09\r\n23028,2235,AMER,fashion,mobile,78.34,7,0.027,coupon,2024-10-24\r\n23029,1622,LATAM,electronics,retail,41.60,1,0.024,none,2024-12-13\r\n23030,1991,APAC,home,retail,92.37,5,0.233,bundle,2024-02-10\r\n23031,1733,LATAM,grocery,retail,55.03,2,0.055,none,2024-09-19\r\n23032,1339,EMEA,electronics,partner,85.98,6,0.020,none,2024-11-26\r\n23033,1871,APAC,electronics,retail,69.17,5,0.035,none,2024-02-28\r\n23034,1682,EMEA,electronics,online,50.42,7,0.060,bundle,2024-05-06\r\n23035,2090,AMER,home,online,63.92,8,0.147,none,2024-08-25\r\n23036,2407,EMEA,electronics,retail,66.64,3,0.130,bundle,2024-10-17\r\n23037,2447,AMER,fashion,retail,82.11,1,0.060,bundle,2024-11-13\r\n23038,2098,AMER,electronics,mobile,40.24,1,0.085,coupon,2024-01-08\r\n23039,1501,AMER,grocery,online,35.23,6,0.072,none,2024-08-20\r\n23040,2179,LATAM,electronics,retail,27.19,4,0.214,none,2024-02-10\r\n23041,1733,LATAM,electronics,retail,54.26,4,0.223,coupon,2024-06-08\r\n23042,1187,AMER,home,retail,164.94,4,0.033,none,2024-08-19\r\n23043,1459,LATAM,electronics,online,60.32,4,0.131,none,2024-10-14\r\n23044,2495,EMEA,home,retail,64.13,4,0.119,coupon,2024-06-26\r\n23045,1206,EMEA,electronics,retail,42.79,8,0.186,none,2024-11-02\r\n23046,1219,LATAM,toys,online,72.65,6,0.203,none,2024-03-03\r\n23047,1375,AMER,electronics,online,44.19,5,0.231,bundle,2024-04-10\r\n23048,1545,AMER,electronics,online,26.19,2,0.230,coupon,2024-04-09\r\n23049,2223,EMEA,grocery,retail,28.45,4,0.210,none,2024-11-03\r\n23050,1921,LATAM,home,retail,86.45,6,0.007,none,2024-06-16\r\n23051,1493,APAC,toys,partner,30.01,4,0.112,none,2024-07-24\r\n23052,1613,EMEA,sports,online,13.97,5,0.181,coupon,2024-04-11\r\n23053,1163,AMER,toys,online,68.39,8,0.067,bundle,2024-02-21\r\n23054,1431,APAC,home,retail,36.21,8,0.098,coupon,2024-05-08\r\n23055,1618,EMEA,grocery,retail,20.17,6,0.038,bundle,2024-01-02\r\n23056,1298,LATAM,electronics,mobile,42.65,6,
0.169,none,2024-04-13\r\n23057,2346,LATAM,grocery,retail,54.17,1,0.165,none,2024-05-02\r\n23058,1666,LATAM,fashion,retail,57.24,1,0.192,coupon,2024-08-07\r\n23059,1933,EMEA,fashion,mobile,31.43,5,0.140,bundle,2024-11-08\r\n23060,2366,APAC,electronics,retail,44.96,6,0.003,none,2024-04-25\r\n23061,2423,LATAM,electronics,online,33.12,2,0.025,loyalty,2024-09-11\r\n23062,1582,AMER,grocery,online,25.88,4,0.144,none,2024-04-26\r\n23063,1564,APAC,grocery,online,69.30,3,0.233,none,2024-11-10\r\n23064,1848,EMEA,home,partner,131.56,6,0.042,coupon,2024-12-03\r\n23065,1652,APAC,grocery,retail,49.64,6,0.192,none,2024-01-13\r\n23066,2035,LATAM,sports,online,91.27,7,0.154,none,2024-06-12\r\n23067,2458,EMEA,grocery,online,60.19,6,0.001,none,2024-10-14\r\n23068,1228,APAC,grocery,mobile,45.10,6,0.042,none,2024-10-12\r\n23069,1997,APAC,electronics,retail,64.40,8,0.145,none,2024-05-27\r\n23070,1457,EMEA,fashion,online,140.09,7,0.066,none,2024-11-03\r\n23071,2139,AMER,grocery,partner,69.72,3,0.123,none,2024-06-28\r\n23072,1832,APAC,grocery,online,79.34,1,0.021,none,2024-06-02\r\n23073,1514,LATAM,sports,online,29.28,2,0.147,none,2024-01-12\r\n23074,2322,AMER,fashion,online,69.58,6,0.125,coupon,2024-11-04\r\n23075,1876,LATAM,home,retail,76.70,1,0.109,none,2024-02-28\r\n23076,1338,EMEA,toys,retail,62.98,1,0.173,loyalty,2024-11-28\r\n23077,1639,APAC,sports,mobile,51.12,1,0.161,coupon,2024-09-06\r\n23078,2137,LATAM,electronics,retail,58.09,3,0.111,coupon,2024-12-24\r\n23079,2256,AMER,grocery,retail,29.18,5,0.081,none,2024-10-28\r\n23080,2080,LATAM,fashion,partner,25.92,3,0.069,bundle,2024-01-10\r\n23081,1201,LATAM,grocery,online,118.37,1,0.072,coupon,2024-04-08\r\n23082,1901,AMER,grocery,online,26.17,8,0.135,none,2024-04-26\r\n23083,2055,AMER,electronics,online,43.56,3,0.101,coupon,2024-11-01\r\n23084,1638,EMEA,home,online,62.20,2,0.154,bundle,2024-04-11\r\n23085,1129,LATAM,grocery,retail,60.22,5,0.092,loyalty,2024-07-22\r\n23086,1073,AMER,grocery,retail,81.40,2,0.024,none,2024-10-19\r\n23087
,2007,LATAM,grocery,retail,55.53,7,0.106,none,2024-05-05\r\n23088,1189,AMER,electronics,online,94.32,6,0.070,none,2024-08-20\r\n23089,2380,AMER,grocery,mobile,104.10,6,0.209,none,2024-03-18\r\n23090,1958,APAC,sports,mobile,75.72,8,0.233,none,2024-12-04\r\n23091,1435,AMER,fashion,mobile,45.17,8,0.242,none,2024-06-08\r\n23092,2393,LATAM,electronics,retail,75.96,6,0.120,coupon,2024-05-20\r\n23093,1754,EMEA,home,online,115.68,2,0.026,none,2024-10-11\r\n23094,2343,EMEA,home,mobile,116.60,5,0.012,none,2024-07-08\r\n23095,1577,AMER,grocery,online,59.20,4,0.163,loyalty,2024-12-26\r\n23096,1811,APAC,electronics,mobile,94.79,1,0.181,loyalty,2024-09-09\r\n23097,2420,EMEA,electronics,retail,75.37,3,0.164,none,2024-08-03\r\n23098,2107,APAC,sports,retail,39.91,3,0.090,none,2024-03-24\r\n23099,1887,LATAM,fashion,online,67.39,7,0.028,none,2024-11-09\r\n23100,1637,APAC,fashion,mobile,91.51,6,0.090,bundle,2024-09-05\r\n23101,1966,APAC,toys,retail,38.75,7,0.127,none,2024-07-06\r\n23102,2219,LATAM,grocery,partner,84.78,5,0.156,none,2024-11-13\r\n23103,2030,EMEA,fashion,partner,85.75,2,0.028,coupon,2024-05-12\r\n23104,1504,AMER,electronics,online,56.58,1,0.006,none,2024-05-25\r\n23105,1289,LATAM,electronics,mobile,46.38,6,0.010,coupon,2024-06-23\r\n23106,1806,APAC,home,mobile,31.98,3,0.018,bundle,2024-08-07\r\n23107,1779,APAC,grocery,mobile,46.76,7,0.187,none,2024-08-19\r\n23108,1507,EMEA,home,online,51.00,7,0.129,none,2024-02-03\r\n23109,1753,APAC,electronics,retail,58.22,5,0.031,none,2024-10-06\r\n23110,2406,EMEA,grocery,online,16.65,7,0.174,none,2024-09-23\r\n23111,2397,LATAM,fashion,online,67.97,6,0.004,loyalty,2024-01-22\r\n23112,2040,LATAM,toys,mobile,36.14,6,0.188,none,2024-03-20\r\n23113,1585,AMER,electronics,mobile,40.31,7,0.086,none,2024-03-27\r\n23114,2386,EMEA,toys,online,59.07,7,0.038,none,2024-05-12\r\n23115,1169,LATAM,grocery,mobile,37.62,3,0.022,none,2024-10-03\r\n23116,1954,APAC,toys,online,84.77,2,0.248,none,2024-08-19\r\n23117,1622,LATAM,sports,online,66.08,5,0.178,lo
yalty,2024-05-25\r\n23118,2090,AMER,toys,online,35.01,5,0.153,coupon,2024-02-05\r\n23119,1745,APAC,grocery,retail,45.36,7,0.065,bundle,2024-08-14\r\n23120,1954,APAC,electronics,online,33.74,6,0.069,coupon,2024-02-01\r\n23121,2402,AMER,electronics,online,177.52,4,0.017,loyalty,2024-10-10\r\n23122,1739,AMER,grocery,mobile,20.94,4,0.233,bundle,2024-02-26\r\n23123,1612,LATAM,toys,retail,53.12,1,0.198,bundle,2024-08-04\r\n23124,1019,APAC,electronics,online,99.33,6,0.226,none,2024-07-21\r\n23125,2001,EMEA,grocery,retail,49.32,3,0.077,none,2024-12-02\r\n23126,1443,EMEA,home,retail,81.11,7,0.243,none,2024-08-03\r\n23127,1195,AMER,fashion,partner,94.52,6,0.145,loyalty,2024-10-24\r\n23128,1725,APAC,toys,online,93.87,3,0.174,loyalty,2024-05-18\r\n23129,1830,EMEA,grocery,online,64.41,6,0.145,none,2024-07-15\r\n23130,1203,AMER,grocery,online,55.60,6,0.140,bundle,2024-02-02\r\n23131,1366,APAC,grocery,mobile,73.98,3,0.214,none,2024-10-21\r\n23132,1194,APAC,home,retail,60.09,1,0.075,bundle,2024-06-24\r\n23133,1549,APAC,sports,online,55.40,3,0.224,coupon,2024-12-18\r\n23134,1682,EMEA,home,online,152.34,1,0.081,none,2024-12-19\r\n23135,2107,APAC,electronics,retail,93.39,5,0.068,loyalty,2024-09-18\r\n23136,1980,LATAM,grocery,partner,47.47,3,0.138,none,2024-01-20\r\n23137,1107,APAC,home,partner,59.63,6,0.067,none,2024-10-28\r\n23138,2239,EMEA,grocery,online,193.95,3,0.183,none,2024-11-07\r\n23139,1085,EMEA,electronics,online,71.37,5,0.127,none,2024-03-04\r\n23140,1538,AMER,sports,online,56.09,5,0.222,bundle,2024-12-12\r\n23141,1476,APAC,home,retail,78.21,3,0.132,none,2024-02-12\r\n23142,2430,APAC,home,mobile,32.05,6,0.064,none,2024-03-14\r\n23143,1280,LATAM,grocery,retail,42.94,4,0.031,coupon,2024-06-01\r\n23144,1123,LATAM,electronics,retail,50.60,5,0.022,none,2024-01-10\r\n23145,1552,EMEA,fashion,retail,106.25,8,0.181,none,2024-06-11\r\n23146,2204,AMER,home,mobile,23.42,5,0.038,none,2024-04-01\r\n23147,1904,APAC,home,mobile,31.76,7,0.185,bundle,2024-10-06\r\n23148,1235,EMEA,sports,ret
ail,40.04,5,0.106,none,2024-06-16\r\n23149,1471,EMEA,home,online,84.25,2,0.124,coupon,2024-04-26\r\n23150,1154,LATAM,grocery,retail,35.52,6,0.007,none,2024-04-23\r\n23151,1420,APAC,grocery,retail,60.49,2,0.026,none,2024-01-13\r\n23152,1234,AMER,electronics,online,80.63,3,0.024,none,2024-01-03\r\n23153,1443,EMEA,home,online,77.70,2,0.115,coupon,2024-01-22\r\n23154,1781,LATAM,fashion,retail,81.24,4,0.082,none,2024-06-25\r\n23155,2004,LATAM,home,online,88.40,3,0.122,loyalty,2024-02-16\r\n23156,1189,AMER,electronics,online,50.01,3,0.155,bundle,2024-08-23\r\n23157,2171,EMEA,electronics,online,56.73,2,0.136,none,2024-07-10\r\n23158,1177,LATAM,home,partner,49.03,3,0.165,coupon,2024-11-16\r\n23159,2232,EMEA,home,online,42.26,8,0.127,coupon,2024-11-10\r\n23160,1442,EMEA,sports,retail,39.40,2,0.142,coupon,2024-05-13\r\n23161,1700,EMEA,grocery,retail,84.49,5,0.148,none,2024-10-12\r\n23162,2091,LATAM,home,online,53.99,6,0.231,none,2024-01-15\r\n23163,1517,AMER,grocery,online,48.07,5,0.242,coupon,2024-12-18\r\n23164,1845,AMER,home,online,34.91,3,0.040,coupon,2024-07-16\r\n23165,2171,EMEA,grocery,mobile,28.99,8,0.242,bundle,2024-09-26\r\n23166,1218,AMER,sports,retail,117.02,8,0.033,none,2024-01-18\r\n23167,2102,APAC,toys,retail,42.33,5,0.190,none,2024-09-23\r\n23168,1207,APAC,electronics,online,134.29,5,0.155,coupon,2024-03-19\r\n23169,2204,AMER,fashion,retail,70.60,4,0.022,none,2024-01-05\r\n23170,2197,LATAM,sports,online,59.85,1,0.165,coupon,2024-06-15\r\n23171,1387,AMER,home,retail,47.46,4,0.225,bundle,2024-05-05\r\n23172,2260,EMEA,grocery,online,29.72,7,0.132,bundle,2024-02-07\r\n23173,2057,APAC,home,online,52.32,3,0.077,none,2024-01-19\r\n23174,1714,APAC,fashion,online,46.08,1,0.084,coupon,2024-06-15\r\n23175,2044,APAC,electronics,retail,92.42,3,0.170,none,2024-12-15\r\n23176,2071,APAC,grocery,online,43.10,8,0.042,none,2024-06-22\r\n23177,1207,APAC,toys,online,59.83,7,0.143,coupon,2024-12-18\r\n23178,2400,EMEA,home,retail,48.05,7,0.106,coupon,2024-05-22\r\n23179,1432,APAC,ho
me,mobile,81.02,4,0.146,none,2024-04-22\r\n23180,1547,AMER,fashion,online,72.70,6,0.164,loyalty,2024-01-05\r\n23181,1600,AMER,grocery,partner,63.92,5,0.046,coupon,2024-01-03\r\n23182,1663,LATAM,electronics,mobile,140.91,2,0.200,none,2024-03-22\r\n23183,2002,APAC,home,mobile,44.90,3,0.058,coupon,2024-01-16\r\n23184,1765,EMEA,fashion,retail,47.22,7,0.099,bundle,2024-07-08\r\n23185,2469,LATAM,electronics,retail,54.05,6,0.095,none,2024-10-17\r\n23186,1982,EMEA,grocery,retail,66.17,5,0.094,none,2024-09-26\r\n23187,1347,APAC,home,retail,59.67,2,0.137,loyalty,2024-05-05\r\n23188,1264,APAC,fashion,online,35.56,1,0.154,coupon,2024-10-24\r\n23189,2026,LATAM,toys,online,63.45,2,0.012,coupon,2024-11-13\r\n23190,2326,LATAM,electronics,retail,77.73,7,0.068,loyalty,2024-03-01\r\n23191,1632,LATAM,electronics,retail,73.18,5,0.062,none,2024-03-11\r\n23192,1207,APAC,home,partner,92.80,7,0.125,none,2024-01-08\r\n23193,2459,AMER,grocery,partner,51.40,8,0.210,none,2024-06-20\r\n23194,2389,LATAM,grocery,online,45.32,7,0.096,loyalty,2024-04-14\r\n23195,1071,AMER,grocery,mobile,33.56,7,0.237,none,2024-02-15\r\n23196,2217,LATAM,sports,online,42.26,6,0.186,none,2024-11-15\r\n23197,1579,AMER,grocery,mobile,48.20,6,0.010,none,2024-06-27\r\n23198,1333,EMEA,grocery,mobile,113.20,4,0.203,none,2024-07-19\r\n23199,1368,EMEA,fashion,retail,143.49,1,0.212,none,2024-01-08\r\n23200,1616,APAC,grocery,online,35.15,5,0.191,none,2024-01-21\r\n23201,1656,LATAM,fashion,retail,46.16,5,0.119,none,2024-10-10\r\n23202,1021,AMER,electronics,online,34.93,5,0.164,loyalty,2024-06-10\r\n23203,2386,EMEA,electronics,mobile,45.40,1,0.184,none,2024-07-18\r\n23204,2496,EMEA,grocery,online,32.01,5,0.127,none,2024-11-02\r\n23205,2360,EMEA,fashion,retail,66.97,7,0.063,bundle,2024-12-14\r\n23206,2297,EMEA,electronics,online,48.96,8,0.228,bundle,2024-03-26\r\n23207,1758,AMER,fashion,retail,55.12,3,0.063,bundle,2024-11-05\r\n23208,1331,AMER,electronics,retail,61.60,5,0.222,none,2024-01-11\r\n23209,1112,APAC,electronics,mobile,97
.25,4,0.052,coupon,2024-05-11\r\n23210,1148,AMER,home,retail,46.19,7,0.136,none,2024-01-05\r\n23211,2417,LATAM,toys,retail,69.09,2,0.011,none,2024-07-01\r\n23212,1894,APAC,fashion,retail,64.98,2,0.006,none,2024-11-19\r\n23213,1547,AMER,sports,partner,32.86,2,0.204,none,2024-03-06\r\n23214,1833,EMEA,electronics,retail,82.97,6,0.094,bundle,2024-01-20\r\n23215,1839,APAC,grocery,retail,56.48,6,0.042,loyalty,2024-08-21\r\n23216,1966,APAC,electronics,retail,35.10,2,0.125,loyalty,2024-02-10\r\n23217,1227,AMER,fashion,retail,37.84,5,0.129,coupon,2024-11-25\r\n23218,2097,AMER,home,online,56.42,7,0.052,none,2024-06-04\r\n23219,1639,APAC,home,retail,54.98,2,0.068,coupon,2024-12-14\r\n23220,1652,APAC,toys,partner,173.10,4,0.200,loyalty,2024-08-04\r\n23221,1721,EMEA,home,online,43.91,8,0.073,none,2024-09-03\r\n23222,1084,AMER,grocery,online,93.87,2,0.130,loyalty,2024-11-04\r\n23223,1436,APAC,home,retail,68.51,3,0.173,loyalty,2024-07-18\r\n23224,2237,EMEA,fashion,online,72.33,7,0.043,none,2024-07-04\r\n23225,1496,AMER,sports,retail,43.30,5,0.005,loyalty,2024-08-24\r\n23226,1821,LATAM,home,online,65.78,8,0.232,bundle,2024-03-15\r\n23227,2094,AMER,fashion,retail,98.63,4,0.043,none,2024-06-11\r\n23228,1933,EMEA,grocery,online,14.14,4,0.036,none,2024-01-28\r\n23229,1453,APAC,sports,retail,153.93,2,0.249,bundle,2024-07-06\r\n23230,1872,LATAM,grocery,mobile,34.46,8,0.100,loyalty,2024-04-18\r\n23231,1703,AMER,grocery,online,39.14,4,0.067,none,2024-05-15\r\n23232,1379,EMEA,grocery,mobile,86.47,6,0.214,coupon,2024-04-22\r\n23233,1160,LATAM,electronics,mobile,27.11,7,0.244,bundle,2024-12-26\r\n23234,1143,LATAM,home,online,35.01,8,0.144,none,2024-04-23\r\n23235,1535,AMER,grocery,retail,161.58,1,0.235,loyalty,2024-10-10\r\n23236,1170,AMER,toys,retail,41.90,6,0.175,none,2024-07-19\r\n23237,2073,AMER,grocery,online,83.88,8,0.104,none,2024-09-10\r\n23238,1870,EMEA,sports,online,41.45,3,0.138,none,2024-04-11\r\n23239,1438,APAC,toys,retail,62.74,4,0.146,none,2024-07-25\r\n23240,1678,LATAM,home,on
line,19.33,5,0.157,none,2024-05-03\r\n23241,1838,AMER,grocery,retail,59.12,6,0.229,coupon,2024-10-20\r\n23242,1407,LATAM,sports,retail,126.08,6,0.155,none,2024-05-16\r\n23243,1778,LATAM,home,retail,34.55,1,0.166,none,2024-11-08\r\n23244,1572,LATAM,fashion,retail,16.27,5,0.201,coupon,2024-04-12\r\n23245,1093,APAC,electronics,online,29.82,4,0.060,loyalty,2024-05-14\r\n23246,1742,AMER,home,online,164.89,5,0.214,none,2024-01-09\r\n23247,1028,EMEA,electronics,online,28.43,3,0.197,none,2024-10-15\r\n23248,1913,LATAM,home,online,136.01,8,0.188,coupon,2024-12-06\r\n23249,1856,EMEA,sports,online,85.88,7,0.240,none,2024-02-09\r\n23250,1074,LATAM,grocery,retail,37.34,8,0.190,bundle,2024-06-10\r\n23251,1739,AMER,grocery,online,64.02,3,0.172,loyalty,2024-05-22\r\n23252,1579,AMER,fashion,online,120.52,2,0.152,coupon,2024-06-08\r\n23253,1640,APAC,electronics,mobile,54.42,1,0.060,bundle,2024-01-28\r\n23254,1306,LATAM,sports,online,49.94,5,0.243,bundle,2024-01-22\r\n23255,2279,LATAM,grocery,online,55.70,8,0.072,none,2024-07-07\r\n23256,1408,AMER,toys,online,79.63,8,0.073,none,2024-09-10\r\n23257,1972,LATAM,grocery,mobile,33.71,1,0.004,coupon,2024-05-05\r\n23258,2180,AMER,home,retail,51.76,3,0.076,loyalty,2024-02-06\r\n23259,1124,AMER,electronics,partner,57.02,8,0.214,bundle,2024-11-26\r\n23260,1283,APAC,grocery,retail,23.82,2,0.086,none,2024-09-04\r\n23261,1662,LATAM,sports,online,107.77,1,0.183,bundle,2024-04-12\r\n23262,1260,LATAM,fashion,online,41.66,8,0.171,coupon,2024-02-12\r\n23263,1476,APAC,grocery,retail,50.15,1,0.125,none,2024-07-04\r\n23264,1281,AMER,home,online,30.14,4,0.085,coupon,2024-08-15\r\n23265,1534,EMEA,sports,online,89.93,5,0.024,none,2024-06-19\r\n23266,1326,AMER,sports,online,21.39,1,0.083,none,2024-12-11\r\n23267,1228,APAC,electronics,online,117.16,3,0.145,coupon,2024-11-11\r\n23268,2413,AMER,grocery,online,45.05,6,0.114,bundle,2024-01-26\r\n23269,1699,APAC,toys,online,45.80,1,0.190,coupon,2024-07-22\r\n23270,1580,AMER,home,mobile,33.58,8,0.217,none,2024-03-01
\r\n23271,1800,APAC,grocery,mobile,46.90,2,0.081,loyalty,2024-02-26\r\n23272,1566,EMEA,sports,online,142.63,3,0.236,none,2024-06-13\r\n23273,1857,LATAM,fashion,partner,83.13,2,0.024,none,2024-01-12\r\n23274,2364,APAC,toys,online,89.97,8,0.216,coupon,2024-12-21\r\n23275,2146,APAC,fashion,retail,66.22,7,0.110,none,2024-10-05\r\n23276,2356,LATAM,grocery,online,42.01,6,0.062,none,2024-12-05\r\n23277,1345,AMER,toys,retail,37.80,1,0.220,none,2024-03-23\r\n23278,1362,AMER,home,online,86.20,3,0.050,loyalty,2024-05-09\r\n23279,1943,AMER,grocery,online,98.33,8,0.168,loyalty,2024-02-13\r\n23280,2373,LATAM,sports,mobile,59.87,3,0.072,loyalty,2024-08-18\r\n23281,2312,APAC,electronics,retail,20.91,8,0.065,none,2024-11-26\r\n23282,1845,AMER,electronics,online,79.44,5,0.073,coupon,2024-06-07\r\n23283,1940,APAC,fashion,online,46.09,8,0.086,none,2024-09-02\r\n23284,1411,LATAM,electronics,online,53.70,8,0.106,none,2024-04-05\r\n23285,1338,EMEA,fashion,retail,49.50,7,0.029,none,2024-07-23\r\n23286,1876,LATAM,sports,retail,38.51,5,0.148,none,2024-11-27\r\n23287,2283,AMER,grocery,online,46.80,5,0.045,coupon,2024-10-27\r\n23288,1854,AMER,electronics,retail,46.78,7,0.212,none,2024-12-16\r\n23289,1688,LATAM,toys,retail,101.14,5,0.148,none,2024-03-07\r\n23290,1834,AMER,fashion,retail,46.71,4,0.070,none,2024-08-26\r\n23291,1459,LATAM,electronics,online,40.81,3,0.112,coupon,2024-11-01\r\n23292,1648,APAC,electronics,online,30.46,2,0.227,coupon,2024-01-18\r\n23293,2030,EMEA,sports,online,111.75,1,0.229,none,2024-03-02\r\n23294,1597,APAC,fashion,retail,59.01,5,0.148,none,2024-06-04\r\n23295,2351,EMEA,home,retail,51.71,2,0.159,none,2024-03-02\r\n23296,1480,APAC,fashion,retail,58.52,6,0.135,loyalty,2024-08-20\r\n23297,1104,APAC,home,online,27.93,7,0.012,coupon,2024-09-25\r\n23298,1539,LATAM,grocery,mobile,70.63,3,0.135,bundle,2024-07-17\r\n23299,1525,APAC,grocery,online,64.67,8,0.220,coupon,2024-09-05\r\n23300,1313,EMEA,sports,retail,50.64,5,0.134,bundle,2024-11-02\r\n23301,1222,AMER,toys,online,61
.00,2,0.183,none,2024-07-28\r\n23302,2413,AMER,grocery,retail,68.49,7,0.134,bundle,2024-11-21\r\n23303,1197,LATAM,grocery,online,22.88,7,0.017,coupon,2024-12-09\r\n23304,1297,AMER,grocery,retail,79.99,5,0.080,coupon,2024-09-09\r\n23305,1748,APAC,grocery,retail,43.16,1,0.043,loyalty,2024-12-06\r\n23306,2346,LATAM,grocery,partner,62.01,6,0.099,none,2024-01-03\r\n23307,2110,LATAM,electronics,retail,23.91,5,0.030,none,2024-07-15\r\n23308,2417,LATAM,sports,retail,30.89,6,0.202,none,2024-03-18\r\n23309,1241,APAC,electronics,online,89.05,4,0.100,none,2024-11-12\r\n23310,2242,AMER,home,mobile,23.74,3,0.071,loyalty,2024-07-17\r\n23311,2052,LATAM,fashion,retail,40.64,2,0.126,none,2024-04-09\r\n23312,1564,APAC,grocery,online,66.91,2,0.075,bundle,2024-02-15\r\n23313,1834,AMER,fashion,partner,68.78,8,0.044,loyalty,2024-06-20\r\n23314,1087,AMER,toys,retail,53.43,8,0.113,bundle,2024-02-19\r\n23315,1739,AMER,sports,mobile,39.38,6,0.126,coupon,2024-08-27\r\n23316,2401,LATAM,home,online,107.69,4,0.008,coupon,2024-01-17\r\n23317,2407,EMEA,grocery,online,76.18,2,0.143,coupon,2024-10-25\r\n23318,2330,EMEA,grocery,retail,48.79,3,0.086,bundle,2024-10-24\r\n23319,2074,AMER,grocery,retail,27.62,3,0.248,none,2024-05-28\r\n23320,1484,AMER,fashion,mobile,35.28,5,0.214,bundle,2024-03-22\r\n23321,1599,APAC,electronics,partner,71.81,1,0.200,coupon,2024-09-15\r\n23322,2477,APAC,electronics,mobile,26.77,2,0.210,none,2024-04-09\r\n23323,1748,APAC,electronics,online,66.03,8,0.061,bundle,2024-10-21\r\n23324,1767,AMER,home,retail,34.73,2,0.097,coupon,2024-11-12\r\n23325,1258,EMEA,grocery,retail,39.22,4,0.073,none,2024-04-28\r\n23326,1865,LATAM,toys,retail,140.66,2,0.122,bundle,2024-04-17\r\n23327,1013,LATAM,sports,online,32.64,5,0.088,none,2024-05-28\r\n23328,2459,AMER,grocery,retail,85.35,1,0.242,none,2024-03-09\r\n23329,1466,AMER,toys,mobile,44.12,2,0.125,none,2024-08-22\r\n23330,2460,AMER,electronics,mobile,74.50,3,0.157,bundle,2024-04-25\r\n23331,1127,EMEA,home,retail,51.30,3,0.111,coupon,2024-07-1
1\r\n23332,1093,APAC,home,online,31.07,2,0.117,none,2024-10-13\r\n23333,2374,LATAM,fashion,online,81.69,3,0.050,none,2024-07-21\r\n23334,2003,LATAM,fashion,mobile,51.66,3,0.174,bundle,2024-06-21\r\n23335,2375,AMER,electronics,partner,41.05,3,0.108,none,2024-08-26\r\n23336,1215,LATAM,electronics,retail,111.00,8,0.015,bundle,2024-09-25\r\n23337,1672,APAC,electronics,retail,66.36,2,0.160,none,2024-08-05\r\n23338,1702,AMER,toys,online,43.18,7,0.172,none,2024-09-25\r\n23339,2193,AMER,grocery,retail,40.58,4,0.020,none,2024-12-13\r\n23340,1704,AMER,fashion,online,37.86,3,0.089,none,2024-06-27\r\n23341,1560,AMER,sports,retail,106.37,6,0.121,none,2024-01-07\r\n23342,2279,LATAM,sports,online,140.52,5,0.116,none,2024-10-06\r\n23343,1023,APAC,home,online,61.11,1,0.097,loyalty,2024-07-03\r\n23344,2259,AMER,fashion,mobile,93.12,1,0.203,none,2024-03-19\r\n23345,2361,EMEA,home,mobile,74.53,6,0.088,coupon,2024-11-10\r\n23346,2338,AMER,electronics,online,35.94,3,0.178,none,2024-02-18\r\n23347,2453,AMER,grocery,retail,62.23,7,0.083,coupon,2024-02-08\r\n23348,1330,EMEA,grocery,online,46.44,1,0.069,bundle,2024-02-09\r\n23349,1761,EMEA,grocery,online,53.27,1,0.106,none,2024-12-26\r\n23350,2370,EMEA,electronics,partner,29.42,1,0.141,none,2024-05-24\r\n23351,2005,APAC,grocery,online,88.82,1,0.001,none,2024-01-14\r\n23352,1351,APAC,electronics,online,91.93,1,0.111,coupon,2024-10-03\r\n23353,1021,AMER,grocery,retail,49.23,3,0.071,none,2024-04-14\r\n23354,1478,EMEA,grocery,online,34.32,1,0.061,none,2024-10-26\r\n23355,2474,LATAM,sports,mobile,48.18,4,0.106,coupon,2024-12-19\r\n23356,1288,LATAM,electronics,retail,94.12,6,0.031,none,2024-11-08\r\n23357,1972,LATAM,home,online,79.87,2,0.135,none,2024-07-18\r\n23358,1789,EMEA,fashion,retail,58.75,4,0.115,none,2024-01-16\r\n23359,1612,LATAM,sports,retail,35.65,3,0.129,none,2024-10-25\r\n23360,2300,EMEA,grocery,online,174.50,3,0.093,none,2024-07-21\r\n23361,1375,AMER,home,mobile,44.87,2,0.205,bundle,2024-03-19\r\n23362,1358,APAC,electronics,retail,4
1.79,3,0.006,bundle,2024-11-13\r\n23363,1117,LATAM,home,online,75.68,3,0.044,none,2024-09-16\r\n23364,1983,LATAM,electronics,mobile,177.89,5,0.103,none,2024-06-06\r\n23365,1717,AMER,sports,online,174.04,4,0.153,none,2024-01-17\r\n23366,1123,LATAM,electronics,mobile,30.72,5,0.192,none,2024-02-09\r\n23367,1255,AMER,fashion,mobile,72.88,3,0.130,coupon,2024-04-24\r\n23368,2104,EMEA,sports,online,69.80,2,0.028,coupon,2024-06-03\r\n23369,1854,AMER,grocery,online,34.98,1,0.072,none,2024-12-26\r\n23370,1870,EMEA,fashion,mobile,80.33,5,0.217,none,2024-11-10\r\n23371,1296,LATAM,home,partner,73.17,7,0.192,loyalty,2024-07-11\r\n23372,1656,LATAM,sports,retail,56.66,2,0.234,none,2024-06-25\r\n23373,2033,LATAM,home,retail,22.05,4,0.026,none,2024-11-05\r\n23374,1946,AMER,electronics,mobile,81.32,2,0.069,bundle,2024-04-15\r\n23375,1338,EMEA,electronics,mobile,58.33,5,0.152,none,2024-06-14\r\n23376,2404,EMEA,home,retail,26.75,2,0.189,none,2024-11-14\r\n23377,1476,APAC,sports,partner,48.50,5,0.192,loyalty,2024-05-04\r\n23378,1065,AMER,fashion,online,32.53,1,0.051,none,2024-04-08\r\n23379,2444,EMEA,sports,mobile,29.86,2,0.143,none,2024-11-14\r\n23380,1956,APAC,electronics,retail,69.36,4,0.176,bundle,2024-11-26\r\n23381,1646,APAC,home,online,27.25,1,0.100,none,2024-04-20\r\n23382,1038,APAC,fashion,retail,52.18,1,0.226,coupon,2024-10-12\r\n23383,2239,EMEA,electronics,online,47.87,1,0.022,none,2024-06-27\r\n23384,1907,EMEA,grocery,online,30.56,3,0.057,coupon,2024-09-21\r\n23385,2085,AMER,electronics,retail,92.76,7,0.149,bundle,2024-10-01\r\n23386,1833,EMEA,grocery,online,56.19,7,0.011,coupon,2024-06-17\r\n23387,2035,LATAM,grocery,retail,56.92,1,0.229,coupon,2024-04-07\r\n23388,2375,AMER,grocery,retail,38.32,2,0.129,coupon,2024-05-23\r\n23389,2471,APAC,electronics,retail,59.90,8,0.143,none,2024-07-05\r\n23390,2319,AMER,grocery,online,61.24,4,0.159,bundle,2024-03-10\r\n23391,2066,APAC,home,retail,46.10,1,0.005,none,2024-02-23\r\n23392,1123,LATAM,sports,mobile,126.11,6,0.156,none,2024-11-10\
r\n23393,1476,APAC,electronics,online,32.65,2,0.248,none,2024-05-09\r\n23394,2327,EMEA,toys,mobile,79.54,2,0.175,none,2024-12-17\r\n23395,1629,LATAM,fashion,mobile,62.64,7,0.179,coupon,2024-07-21\r\n23396,1192,EMEA,grocery,partner,38.90,5,0.063,none,2024-01-22\r\n23397,1180,AMER,fashion,online,28.81,2,0.082,none,2024-06-19\r\n23398,2056,LATAM,home,online,37.48,8,0.087,none,2024-01-02\r\n23399,1607,LATAM,fashion,online,27.80,4,0.082,coupon,2024-10-12\r\n23400,1001,LATAM,home,retail,66.50,6,0.219,none,2024-04-07\r\n23401,2031,AMER,electronics,retail,72.50,8,0.048,loyalty,2024-11-15\r\n23402,1693,EMEA,electronics,online,88.93,4,0.039,coupon,2024-01-28\r\n23403,1050,AMER,fashion,online,141.01,2,0.026,loyalty,2024-04-07\r\n23404,1327,APAC,fashion,online,45.76,4,0.062,coupon,2024-01-03\r\n23405,1470,LATAM,electronics,retail,38.49,2,0.119,none,2024-03-18\r\n23406,2399,LATAM,grocery,retail,25.36,8,0.005,coupon,2024-12-08\r\n23407,1683,AMER,fashion,retail,172.10,8,0.104,none,2024-11-07\r\n23408,1694,APAC,grocery,retail,36.64,1,0.045,none,2024-10-09\r\n23409,1367,AMER,home,mobile,47.44,6,0.006,none,2024-11-11\r\n23410,1468,AMER,home,online,50.17,8,0.192,coupon,2024-02-03\r\n23411,1351,APAC,grocery,retail,85.32,7,0.225,none,2024-11-03\r\n23412,1529,LATAM,fashion,retail,39.33,5,0.179,coupon,2024-02-26\r\n23413,2102,APAC,electronics,online,39.54,6,0.037,loyalty,2024-04-18\r\n23414,2493,APAC,electronics,retail,96.95,2,0.142,loyalty,2024-11-26\r\n23415,1284,APAC,home,retail,69.34,3,0.181,bundle,2024-08-03\r\n23416,1765,EMEA,fashion,retail,54.00,6,0.142,none,2024-01-07\r\n23417,1380,AMER,fashion,retail,85.95,7,0.108,coupon,2024-10-22\r\n23418,1370,APAC,electronics,retail,40.90,5,0.095,none,2024-03-02\r\n23419,1112,APAC,sports,retail,53.33,5,0.043,coupon,2024-09-22\r\n23420,1425,EMEA,fashion,mobile,129.00,7,0.249,bundle,2024-07-13\r\n23421,1901,AMER,grocery,online,15.01,7,0.162,none,2024-07-08\r\n23422,2361,EMEA,electronics,online,39.18,1,0.065,none,2024-01-15\r\n23423,2069,AMER,toy
s,online,30.85,5,0.165,none,2024-08-07\r\n23424,2108,AMER,electronics,online,30.53,2,0.123,none,2024-08-24\r\n23425,2435,AMER,grocery,online,96.44,5,0.175,none,2024-07-25\r\n23426,1505,EMEA,home,mobile,18.18,5,0.039,coupon,2024-12-16\r\n23427,1521,LATAM,grocery,retail,33.01,2,0.195,loyalty,2024-04-17\r\n23428,1744,EMEA,home,online,54.99,3,0.096,none,2024-03-06\r\n23429,1620,LATAM,home,online,31.96,4,0.081,none,2024-06-06\r\n23430,2376,LATAM,fashion,retail,45.91,3,0.149,coupon,2024-06-11\r\n23431,1502,APAC,sports,online,74.47,5,0.088,none,2024-04-07\r\n23432,2100,APAC,sports,online,97.79,6,0.226,none,2024-04-09\r\n23433,1029,EMEA,home,online,83.47,5,0.194,coupon,2024-09-16\r\n23434,2484,APAC,home,online,67.33,7,0.127,bundle,2024-03-03\r\n23435,1241,APAC,grocery,retail,45.25,2,0.097,loyalty,2024-08-23\r\n23436,2439,AMER,grocery,online,43.38,5,0.102,none,2024-03-17\r\n23437,2344,LATAM,electronics,partner,51.32,3,0.015,coupon,2024-05-23\r\n23438,1579,AMER,home,online,26.90,5,0.071,bundle,2024-11-22\r\n23439,1716,LATAM,home,retail,56.01,7,0.074,none,2024-10-18\r\n23440,1224,APAC,grocery,online,197.41,8,0.178,none,2024-02-24\r\n23441,2161,LATAM,fashion,online,25.51,6,0.114,none,2024-01-02\r\n23442,1933,EMEA,home,mobile,53.20,7,0.113,none,2024-08-25\r\n23443,1448,EMEA,electronics,online,64.60,4,0.152,bundle,2024-09-13\r\n23444,2385,APAC,toys,partner,38.51,1,0.056,none,2024-11-25\r\n23445,1709,EMEA,toys,mobile,267.41,6,0.112,none,2024-11-12\r\n23446,2460,AMER,toys,mobile,62.27,7,0.057,none,2024-09-22\r\n23447,1591,APAC,toys,online,66.84,8,0.246,bundle,2024-06-03\r\n23448,1399,AMER,electronics,online,45.25,5,0.242,bundle,2024-08-19\r\n23449,1406,LATAM,fashion,online,114.01,3,0.125,none,2024-06-26\r\n23450,1625,EMEA,home,online,65.08,7,0.114,coupon,2024-05-18\r\n23451,2298,APAC,toys,online,32.92,8,0.161,coupon,2024-04-09\r\n23452,2239,EMEA,grocery,online,18.02,6,0.107,loyalty,2024-10-05\r\n23453,1010,EMEA,grocery,retail,55.93,1,0.227,none,2024-03-07\r\n23454,1142,EMEA,home,on
line,111.04,3,0.227,bundle,2024-01-07\r\n23455,1245,APAC,grocery,online,32.75,1,0.156,loyalty,2024-10-01\r\n23456,2492,LATAM,electronics,partner,135.35,2,0.211,none,2024-03-18\r\n23457,1857,LATAM,toys,online,69.82,1,0.015,none,2024-07-06\r\n23458,1305,EMEA,fashion,online,46.18,2,0.218,none,2024-07-05\r\n23459,1586,LATAM,fashion,online,67.73,3,0.233,bundle,2024-11-20\r\n23460,2289,APAC,fashion,online,116.72,1,0.171,bundle,2024-05-04\r\n23461,2481,APAC,home,partner,74.32,3,0.205,loyalty,2024-12-12\r\n23462,2252,EMEA,home,online,54.04,3,0.178,coupon,2024-06-27\r\n23463,1687,APAC,sports,online,62.92,5,0.033,none,2024-05-16\r\n23464,1974,EMEA,grocery,retail,71.39,7,0.078,coupon,2024-11-13\r\n23465,1542,APAC,fashion,retail,63.31,7,0.022,coupon,2024-05-25\r\n23466,2112,LATAM,grocery,online,49.10,6,0.192,none,2024-10-16\r\n23467,2049,LATAM,sports,partner,46.09,6,0.130,coupon,2024-05-15\r\n23468,1024,APAC,sports,mobile,80.35,2,0.083,none,2024-04-18\r\n23469,1544,LATAM,home,online,48.88,8,0.167,none,2024-10-05\r\n23470,1851,EMEA,home,partner,38.58,5,0.080,bundle,2024-01-26\r\n23471,2194,APAC,home,online,58.02,3,0.135,none,2024-03-22\r\n23472,2175,AMER,fashion,online,85.88,6,0.212,bundle,2024-01-18\r\n23473,1682,EMEA,home,retail,65.52,4,0.142,none,2024-07-14\r\n23474,1833,EMEA,home,retail,21.67,8,0.192,none,2024-02-05\r\n23475,1996,APAC,sports,retail,84.47,3,0.142,coupon,2024-01-27\r\n23476,1711,APAC,toys,online,99.48,2,0.129,none,2024-09-08\r\n23477,1553,LATAM,sports,retail,18.02,7,0.042,none,2024-09-04\r\n23478,1181,LATAM,home,mobile,53.70,6,0.057,none,2024-09-17\r\n23479,2004,LATAM,fashion,retail,178.34,3,0.162,none,2024-11-12\r\n23480,1770,AMER,electronics,retail,37.98,7,0.101,none,2024-08-04\r\n23481,2389,LATAM,toys,partner,64.53,8,0.172,none,2024-12-07\r\n23482,1807,EMEA,sports,online,41.97,2,0.128,none,2024-05-07\r\n23483,2183,EMEA,fashion,online,87.97,1,0.177,coupon,2024-12-23\r\n23484,2044,APAC,grocery,retail,100.85,5,0.113,loyalty,2024-04-24\r\n23485,1055,AMER,home,o
nline,46.91,3,0.066,coupon,2024-11-04\r\n23486,1270,LATAM,fashion,online,65.31,6,0.160,none,2024-09-20\r\n23487,2082,APAC,grocery,online,63.11,5,0.117,none,2024-11-11\r\n23488,1353,EMEA,grocery,online,42.66,6,0.076,bundle,2024-07-08\r\n23489,2119,AMER,electronics,retail,54.62,4,0.198,bundle,2024-02-15\r\n23490,1192,EMEA,toys,mobile,16.14,4,0.249,loyalty,2024-12-27\r\n23491,2313,LATAM,sports,mobile,44.74,6,0.013,none,2024-04-28\r\n23492,2281,AMER,grocery,retail,84.83,7,0.104,none,2024-09-21\r\n23493,1340,LATAM,grocery,retail,97.74,7,0.133,loyalty,2024-07-18\r\n23494,1777,AMER,home,online,94.68,8,0.003,coupon,2024-01-25\r\n23495,2112,LATAM,fashion,partner,50.33,7,0.009,coupon,2024-09-14\r\n23496,1834,AMER,grocery,retail,51.12,1,0.159,none,2024-01-05\r\n23497,2197,LATAM,grocery,retail,32.08,2,0.169,coupon,2024-03-06\r\n23498,1510,EMEA,toys,online,58.36,3,0.210,coupon,2024-05-21\r\n23499,2084,LATAM,home,online,58.72,5,0.142,bundle,2024-02-17\r\n23500,1820,AMER,electronics,online,53.14,3,0.173,none,2024-12-20\r\n23501,2429,EMEA,electronics,online,37.66,7,0.127,none,2024-10-02\r\n23502,2474,LATAM,electronics,retail,64.01,5,0.220,none,2024-11-24\r\n23503,1703,AMER,home,mobile,43.37,6,0.036,none,2024-01-22\r\n23504,1463,EMEA,electronics,online,21.79,3,0.232,none,2024-06-11\r\n23505,1149,LATAM,toys,online,114.99,6,0.178,coupon,2024-06-09\r\n23506,2235,AMER,grocery,online,43.46,1,0.056,bundle,2024-07-17\r\n23507,1128,LATAM,toys,online,46.50,5,0.072,loyalty,2024-03-17\r\n23508,2101,APAC,fashion,mobile,25.47,6,0.059,none,2024-01-26\r\n23509,1935,EMEA,toys,online,108.38,8,0.096,none,2024-04-21\r\n23510,1983,LATAM,grocery,mobile,40.51,8,0.193,coupon,2024-01-01\r\n23511,2216,AMER,grocery,retail,84.49,6,0.006,none,2024-02-01\r\n23512,1032,AMER,grocery,online,23.43,7,0.183,coupon,2024-05-04\r\n23513,1400,EMEA,fashion,retail,39.76,5,0.213,loyalty,2024-03-15\r\n23514,1587,LATAM,sports,mobile,17.73,5,0.167,none,2024-09-20\r\n23515,1151,APAC,home,online,30.14,1,0.249,loyalty,2024-01-22\
r\n23516,1299,LATAM,grocery,online,62.23,8,0.110,none,2024-02-10\r\n23517,1686,LATAM,home,retail,83.72,7,0.161,none,2024-01-28\r\n23518,1991,APAC,home,online,43.01,2,0.018,none,2024-10-12\r\n23519,1655,LATAM,sports,online,40.21,3,0.096,loyalty,2024-11-09\r\n23520,1743,LATAM,fashion,partner,51.41,2,0.147,bundle,2024-08-16\r\n23521,1131,APAC,home,retail,97.50,3,0.113,none,2024-10-21\r\n23522,2146,APAC,grocery,online,50.39,6,0.222,bundle,2024-01-23\r\n23523,2100,APAC,home,mobile,42.24,2,0.187,none,2024-06-19\r\n23524,2105,APAC,home,online,70.98,2,0.118,bundle,2024-09-24\r\n23525,1883,LATAM,electronics,online,124.14,1,0.035,none,2024-03-04\r\n23526,1878,EMEA,grocery,retail,94.51,6,0.000,none,2024-12-15\r\n23527,1763,LATAM,sports,online,95.43,5,0.007,bundle,2024-04-22\r\n23528,1032,AMER,fashion,online,27.44,8,0.079,none,2024-02-13\r\n23529,2256,AMER,grocery,online,28.68,1,0.182,none,2024-11-07\r\n23530,2007,LATAM,fashion,retail,36.67,1,0.102,loyalty,2024-03-27\r\n23531,2236,APAC,electronics,retail,118.63,7,0.181,none,2024-05-26\r\n23532,2215,LATAM,home,mobile,106.87,5,0.036,none,2024-02-08\r\n23533,2355,EMEA,electronics,online,45.60,1,0.040,none,2024-03-22\r\n23534,2359,LATAM,toys,online,43.21,8,0.204,none,2024-04-15\r\n23535,1270,LATAM,grocery,online,60.53,6,0.138,none,2024-04-21\r\n23536,2328,EMEA,grocery,mobile,52.11,7,0.160,bundle,2024-05-20\r\n23537,2485,AMER,electronics,online,17.24,2,0.078,bundle,2024-03-16\r\n23538,1800,APAC,fashion,retail,88.38,3,0.177,coupon,2024-06-09\r\n23539,1533,APAC,toys,online,68.66,4,0.164,none,2024-06-25\r\n23540,1857,LATAM,toys,retail,91.63,2,0.101,coupon,2024-07-02\r\n23541,1479,AMER,fashion,retail,37.35,6,0.094,none,2024-06-22\r\n23542,1661,LATAM,home,retail,34.19,6,0.065,none,2024-11-24\r\n23543,1687,APAC,toys,online,25.20,7,0.213,loyalty,2024-04-20\r\n23544,2197,LATAM,grocery,online,26.46,3,0.170,coupon,2024-03-09\r\n23545,1106,AMER,grocery,online,100.82,2,0.014,coupon,2024-12-23\r\n23546,1913,LATAM,home,online,45.45,1,0.026,none,2
024-11-26\r\n23547,2005,APAC,sports,online,42.73,4,0.092,none,2024-09-22\r\n23548,2081,APAC,grocery,retail,42.02,7,0.069,loyalty,2024-09-28\r\n23549,2454,LATAM,toys,retail,136.98,4,0.080,none,2024-07-27\r\n23550,1335,APAC,grocery,mobile,103.45,4,0.191,none,2024-06-02\r\n23551,1591,APAC,electronics,online,104.00,5,0.211,loyalty,2024-03-13\r\n23552,1191,EMEA,grocery,online,39.92,3,0.060,coupon,2024-10-01\r\n23553,2324,AMER,toys,online,134.58,2,0.183,none,2024-11-21\r\n23554,1915,LATAM,home,retail,71.91,4,0.245,coupon,2024-04-20\r\n23555,1406,LATAM,fashion,retail,58.55,6,0.229,none,2024-09-02\r\n23556,1216,APAC,fashion,online,41.00,7,0.205,none,2024-05-06\r\n23557,2160,LATAM,home,retail,64.00,7,0.219,loyalty,2024-09-03\r\n23558,1923,LATAM,toys,online,39.11,3,0.139,coupon,2024-01-04\r\n23559,1385,LATAM,toys,retail,47.48,1,0.104,bundle,2024-07-23\r\n23560,2359,LATAM,home,retail,175.46,7,0.065,none,2024-10-07\r\n23561,2035,LATAM,grocery,partner,27.95,6,0.231,none,2024-04-26\r\n23562,1628,EMEA,home,retail,40.50,2,0.180,bundle,2024-12-26\r\n23563,2310,EMEA,fashion,online,27.55,8,0.022,coupon,2024-10-13\r\n23564,2022,LATAM,grocery,partner,21.88,8,0.188,bundle,2024-05-21\r\n23565,1427,EMEA,home,online,26.82,7,0.238,bundle,2024-11-13\r\n23566,1180,AMER,electronics,retail,32.53,7,0.064,none,2024-07-06\r\n23567,1385,LATAM,home,online,63.23,3,0.014,bundle,2024-04-02\r\n23568,1590,APAC,grocery,retail,34.25,1,0.108,none,2024-07-23\r\n23569,1194,APAC,electronics,retail,92.60,4,0.136,coupon,2024-01-26\r\n23570,1690,LATAM,grocery,partner,83.66,8,0.250,none,2024-09-05\r\n23571,1570,AMER,electronics,online,64.05,6,0.153,none,2024-03-19\r\n23572,2164,AMER,fashion,retail,75.52,8,0.181,none,2024-11-27\r\n23573,2375,AMER,sports,online,67.31,3,0.081,none,2024-06-12\r\n23574,1263,AMER,grocery,online,36.10,6,0.081,none,2024-05-03\r\n23575,2097,AMER,fashion,online,33.10,2,0.019,none,2024-09-11\r\n23576,1252,APAC,grocery,online,59.96,8,0.032,bundle,2024-07-02\r\n23577,1909,APAC,home,partner,37.5
4,6,0.050,none,2024-09-26\r\n23578,2126,APAC,electronics,retail,52.15,7,0.087,coupon,2024-09-18\r\n23579,2173,LATAM,grocery,mobile,27.42,3,0.156,coupon,2024-08-03\r\n23580,2218,EMEA,electronics,retail,79.27,4,0.161,none,2024-09-11\r\n23581,1162,AMER,electronics,online,88.16,3,0.029,none,2024-12-04\r\n23582,1027,APAC,home,online,62.62,3,0.137,none,2024-09-23\r\n23583,1973,EMEA,home,retail,97.89,4,0.183,none,2024-03-21\r\n23584,1552,EMEA,home,online,84.07,7,0.079,none,2024-11-24\r\n23585,2413,AMER,home,online,82.31,1,0.180,none,2024-02-01\r\n23586,1970,LATAM,home,online,65.50,5,0.105,none,2024-06-14\r\n23587,2163,EMEA,grocery,retail,135.45,3,0.170,none,2024-01-16\r\n23588,1838,AMER,sports,mobile,45.25,2,0.145,bundle,2024-07-14\r\n23589,2171,EMEA,toys,online,74.47,7,0.011,none,2024-10-16\r\n23590,1985,AMER,grocery,online,41.61,3,0.032,none,2024-11-12\r\n23591,2050,APAC,electronics,online,61.42,1,0.050,none,2024-01-19\r\n23592,1580,AMER,grocery,online,56.26,2,0.012,bundle,2024-02-25\r\n23593,2412,LATAM,home,online,53.54,3,0.015,coupon,2024-09-20\r\n23594,1040,LATAM,home,retail,53.30,5,0.141,none,2024-07-19\r\n23595,2360,EMEA,electronics,online,122.71,4,0.029,bundle,2024-01-05\r\n23596,2297,EMEA,toys,online,65.61,2,0.077,coupon,2024-05-17\r\n23597,1573,AMER,home,online,87.20,6,0.112,bundle,2024-09-17\r\n23598,1512,APAC,grocery,mobile,32.68,2,0.080,coupon,2024-10-27\r\n23599,1506,EMEA,sports,online,67.50,3,0.011,bundle,2024-06-04\r\n23600,1316,APAC,home,online,59.82,4,0.246,none,2024-12-25\r\n23601,2267,AMER,electronics,retail,101.54,1,0.133,bundle,2024-11-06\r\n23602,2214,AMER,sports,mobile,47.57,6,0.217,none,2024-04-13\r\n23603,1631,APAC,grocery,online,54.79,1,0.193,none,2024-11-09\r\n23604,1117,LATAM,grocery,online,58.23,2,0.090,bundle,2024-10-18\r\n23605,1449,EMEA,home,mobile,28.42,5,0.065,bundle,2024-01-11\r\n23606,2285,APAC,home,online,43.63,6,0.078,none,2024-03-18\r\n23607,2399,LATAM,electronics,retail,46.97,5,0.119,coupon,2024-03-22\r\n23608,1377,APAC,sports,partn
er,37.52,6,0.037,none,2024-11-18\r\n23609,1183,AMER,electronics,retail,92.97,1,0.019,none,2024-09-19\r\n23610,1937,APAC,grocery,retail,84.97,3,0.021,none,2024-04-09\r\n23611,1565,AMER,electronics,partner,116.88,1,0.210,none,2024-08-25\r\n23612,1040,LATAM,home,retail,81.45,7,0.073,coupon,2024-07-27\r\n23613,2237,EMEA,sports,retail,46.63,2,0.204,coupon,2024-12-21\r\n23614,1210,LATAM,grocery,online,59.32,3,0.181,bundle,2024-01-05\r\n23615,2211,APAC,electronics,retail,48.30,2,0.133,coupon,2024-06-14\r\n23616,1022,APAC,electronics,retail,44.55,7,0.197,none,2024-02-04\r\n23617,2304,LATAM,grocery,online,74.31,5,0.182,loyalty,2024-06-02\r\n23618,2461,LATAM,electronics,online,28.42,1,0.064,coupon,2024-09-15\r\n23619,2089,EMEA,grocery,online,30.67,2,0.160,none,2024-10-11\r\n23620,2464,LATAM,home,mobile,69.87,1,0.170,coupon,2024-08-26\r\n23621,2265,APAC,home,retail,37.46,2,0.032,loyalty,2024-08-13\r\n23622,2138,APAC,sports,mobile,74.06,8,0.005,none,2024-09-28\r\n23623,2479,EMEA,electronics,retail,101.12,6,0.241,loyalty,2024-04-19\r\n23624,1730,AMER,toys,online,49.87,4,0.010,bundle,2024-08-16\r\n23625,1340,LATAM,fashion,mobile,34.94,6,0.061,none,2024-08-08\r\n23626,1682,EMEA,fashion,online,59.97,5,0.006,none,2024-05-16\r\n23627,2191,AMER,grocery,online,109.70,2,0.046,bundle,2024-05-16\r\n23628,1703,AMER,fashion,retail,32.52,8,0.054,none,2024-03-12\r\n23629,1416,EMEA,grocery,online,85.82,1,0.119,coupon,2024-12-22\r\n23630,1132,EMEA,home,online,36.45,6,0.045,none,2024-02-11\r\n23631,1274,LATAM,home,online,64.57,7,0.185,loyalty,2024-09-10\r\n23632,1974,EMEA,grocery,online,106.65,1,0.070,bundle,2024-02-21\r\n23633,2390,AMER,sports,online,74.45,5,0.138,none,2024-04-04\r\n23634,1823,EMEA,fashion,online,45.61,6,0.062,coupon,2024-04-10\r\n23635,1724,LATAM,fashion,online,106.86,5,0.055,none,2024-05-27\r\n23636,1874,LATAM,sports,retail,35.94,3,0.152,coupon,2024-03-21\r\n23637,1351,APAC,electronics,retail,35.21,4,0.224,bundle,2024-07-17\r\n23638,1450,EMEA,grocery,online,48.69,3,0.141,coup
on,2024-08-24\r\n23639,2371,LATAM,sports,online,91.42,3,0.191,none,2024-06-28\r\n23640,1373,LATAM,fashion,retail,48.48,1,0.100,none,2024-06-26\r\n23641,1283,APAC,electronics,mobile,74.04,5,0.206,none,2024-01-16\r\n23642,2122,AMER,toys,retail,45.14,8,0.144,coupon,2024-07-13\r\n23643,2045,LATAM,home,online,27.15,1,0.191,none,2024-06-16\r\n23644,2065,EMEA,grocery,partner,23.89,2,0.117,none,2024-03-07\r\n23645,2078,APAC,home,partner,54.77,7,0.156,bundle,2024-10-13\r\n23646,2095,EMEA,fashion,online,62.62,2,0.151,none,2024-05-23\r\n23647,2199,LATAM,electronics,retail,25.31,4,0.065,coupon,2024-07-28\r\n23648,1897,AMER,sports,online,52.88,7,0.249,none,2024-08-17\r\n23649,2391,EMEA,sports,online,71.34,8,0.167,none,2024-11-13\r\n23650,1311,APAC,grocery,retail,107.32,8,0.031,coupon,2024-09-10\r\n23651,1575,APAC,fashion,mobile,77.05,3,0.001,none,2024-11-03\r\n23652,2274,APAC,grocery,retail,57.49,3,0.154,coupon,2024-06-16\r\n23653,2344,LATAM,home,online,43.57,4,0.217,none,2024-03-22\r\n23654,1500,EMEA,home,online,62.66,3,0.000,none,2024-03-11\r\n23655,1355,EMEA,fashion,retail,121.35,2,0.223,none,2024-10-17\r\n23656,1792,AMER,toys,online,152.33,2,0.216,bundle,2024-05-21\r\n23657,1764,LATAM,grocery,mobile,23.51,6,0.126,coupon,2024-05-20\r\n23658,2315,LATAM,home,online,34.02,4,0.123,coupon,2024-12-18\r\n23659,2003,LATAM,electronics,mobile,86.98,6,0.246,bundle,2024-03-26\r\n23660,1412,AMER,grocery,mobile,120.48,4,0.211,none,2024-12-27\r\n23661,2211,APAC,fashion,online,84.66,4,0.112,none,2024-10-28\r\n23662,1309,EMEA,electronics,online,152.28,7,0.173,none,2024-11-05\r\n23663,1471,EMEA,sports,retail,58.97,5,0.221,none,2024-04-06\r\n23664,1182,EMEA,fashion,mobile,51.63,3,0.212,none,2024-05-11\r\n23665,2092,AMER,grocery,online,24.24,5,0.159,bundle,2024-09-11\r\n23666,2488,EMEA,home,retail,74.34,5,0.173,none,2024-04-01\r\n23667,1937,APAC,home,online,83.15,1,0.245,none,2024-09-05\r\n23668,2430,APAC,fashion,mobile,49.59,6,0.113,coupon,2024-11-15\r\n23669,2103,LATAM,electronics,retail,58.95
,3,0.094,coupon,2024-02-12\r\n23670,1309,EMEA,electronics,online,47.39,7,0.040,bundle,2024-01-04\r\n23671,1585,AMER,sports,retail,22.29,7,0.180,none,2024-12-12\r\n23672,1917,LATAM,fashion,online,91.79,5,0.124,bundle,2024-02-14\r\n23673,2290,LATAM,home,retail,149.47,4,0.219,none,2024-03-14\r\n23674,2288,AMER,home,partner,43.93,1,0.120,none,2024-12-27\r\n23675,2411,EMEA,toys,mobile,23.22,8,0.236,coupon,2024-05-12\r\n23676,1636,APAC,grocery,retail,115.09,4,0.123,bundle,2024-01-03\r\n23677,1448,EMEA,grocery,retail,82.94,8,0.009,loyalty,2024-01-08\r\n23678,1584,EMEA,electronics,online,81.16,4,0.133,loyalty,2024-02-03\r\n23679,1171,APAC,grocery,online,62.53,1,0.163,none,2024-02-24\r\n23680,1640,APAC,fashion,retail,58.22,8,0.109,coupon,2024-10-20\r\n23681,1435,AMER,toys,online,102.91,3,0.091,bundle,2024-04-06\r\n23682,1019,APAC,grocery,online,106.72,1,0.162,none,2024-02-06\r\n23683,2209,AMER,sports,retail,60.36,3,0.120,coupon,2024-06-18\r\n23684,1071,AMER,grocery,online,33.39,2,0.161,none,2024-12-01\r\n23685,1417,APAC,home,retail,68.38,5,0.030,none,2024-03-11\r\n23686,1022,APAC,electronics,mobile,49.47,5,0.143,loyalty,2024-01-16\r\n23687,1158,LATAM,home,retail,61.25,8,0.003,coupon,2024-03-27\r\n23688,2478,AMER,grocery,online,138.14,3,0.068,none,2024-04-06\r\n23689,2043,EMEA,fashion,retail,43.02,6,0.240,none,2024-12-08\r\n23690,2157,AMER,fashion,mobile,114.60,3,0.078,none,2024-07-14\r\n23691,1700,EMEA,grocery,online,84.25,1,0.206,none,2024-07-15\r\n23692,1084,AMER,toys,retail,32.97,6,0.133,none,2024-12-10\r\n23693,2461,LATAM,home,online,40.15,7,0.119,bundle,2024-10-23\r\n23694,1357,EMEA,home,online,42.20,8,0.242,none,2024-09-11\r\n23695,2037,LATAM,electronics,mobile,37.40,5,0.171,none,2024-07-27\r\n23696,2205,AMER,electronics,retail,54.94,2,0.169,coupon,2024-02-15\r\n23697,1313,EMEA,grocery,retail,75.16,4,0.149,none,2024-02-15\r\n23698,1445,APAC,grocery,online,61.52,4,0.216,none,2024-04-02\r\n23699,1297,AMER,home,online,208.36,7,0.148,none,2024-06-28\r\n23700,1682,EMEA,groc
ery,online,28.00,3,0.173,coupon,2024-04-04\r\n23701,2386,EMEA,electronics,retail,108.81,4,0.248,bundle,2024-12-28\r\n23702,1346,AMER,home,online,69.08,6,0.090,none,2024-05-08\r\n23703,2083,LATAM,grocery,retail,104.21,6,0.052,none,2024-01-21\r\n23704,2223,EMEA,grocery,retail,107.69,8,0.244,none,2024-12-20\r\n23705,2375,AMER,fashion,retail,40.22,7,0.232,bundle,2024-12-13\r\n23706,2061,EMEA,fashion,retail,83.10,2,0.228,bundle,2024-12-26\r\n23707,2157,AMER,home,online,52.78,1,0.178,bundle,2024-01-02\r\n23708,1605,APAC,grocery,online,39.22,3,0.224,none,2024-05-11\r\n23709,1651,LATAM,grocery,mobile,45.55,4,0.160,bundle,2024-01-09\r\n23710,1676,LATAM,toys,retail,46.20,4,0.196,bundle,2024-01-25\r\n23711,2044,APAC,sports,partner,42.95,1,0.033,none,2024-07-10\r\n23712,1500,EMEA,home,online,108.14,8,0.032,coupon,2024-01-06\r\n23713,1021,AMER,fashion,retail,40.79,3,0.202,bundle,2024-01-15\r\n23714,2491,APAC,electronics,retail,175.50,2,0.051,none,2024-03-13\r\n23715,1571,EMEA,home,online,50.34,6,0.154,none,2024-04-16\r\n23716,2154,APAC,toys,mobile,68.41,8,0.176,none,2024-05-10\r\n23717,2343,EMEA,sports,retail,111.26,5,0.008,none,2024-02-17\r\n23718,1489,AMER,grocery,online,160.41,5,0.119,loyalty,2024-11-28\r\n23719,1602,EMEA,sports,online,101.19,2,0.118,none,2024-06-12\r\n23720,1891,APAC,sports,retail,94.03,7,0.064,none,2024-10-03\r\n23721,1566,EMEA,electronics,mobile,32.34,8,0.215,none,2024-02-09\r\n23722,2162,EMEA,home,online,93.92,6,0.074,bundle,2024-07-24\r\n23723,1592,LATAM,electronics,online,95.12,6,0.127,none,2024-04-12\r\n23724,1277,AMER,fashion,online,101.02,5,0.150,none,2024-10-19\r\n23725,2342,AMER,home,retail,51.33,5,0.086,none,2024-12-06\r\n23726,1879,EMEA,grocery,retail,77.30,4,0.169,coupon,2024-11-01\r\n23727,1056,LATAM,toys,online,24.01,1,0.217,loyalty,2024-08-22\r\n23728,1241,APAC,sports,retail,60.10,4,0.045,coupon,2024-05-01\r\n23729,2414,EMEA,grocery,retail,157.28,6,0.053,none,2024-10-04\r\n23730,1494,AMER,grocery,mobile,93.53,1,0.099,none,2024-10-25\r\n23731,
2313,LATAM,home,online,51.41,7,0.057,none,2024-01-25\r\n23732,1496,AMER,toys,online,35.34,4,0.105,none,2024-07-08\r\n23733,2310,EMEA,home,online,30.59,8,0.160,bundle,2024-06-18\r\n23734,1842,LATAM,electronics,online,47.78,1,0.245,none,2024-03-20\r\n23735,1816,EMEA,electronics,retail,79.31,1,0.082,coupon,2024-07-14\r\n23736,1650,LATAM,toys,retail,55.57,7,0.007,none,2024-06-15\r\n23737,2357,EMEA,home,online,39.73,5,0.061,loyalty,2024-07-26\r\n23738,1869,AMER,electronics,online,45.64,6,0.119,coupon,2024-04-01\r\n23739,1382,LATAM,home,retail,134.94,8,0.034,none,2024-10-16\r\n23740,1663,LATAM,toys,mobile,38.28,4,0.151,none,2024-10-20\r\n23741,1239,APAC,sports,retail,103.96,6,0.160,none,2024-03-18\r\n23742,1821,LATAM,fashion,retail,54.90,1,0.248,coupon,2024-01-10\r\n23743,1771,AMER,toys,retail,64.83,2,0.046,coupon,2024-02-07\r\n23744,1340,LATAM,fashion,online,26.78,3,0.048,none,2024-09-23\r\n23745,2492,LATAM,home,retail,60.94,7,0.032,none,2024-12-21\r\n23746,2243,APAC,electronics,online,56.97,5,0.078,none,2024-01-12\r\n23747,1708,LATAM,grocery,mobile,57.96,3,0.159,none,2024-03-16\r\n23748,1466,AMER,fashion,retail,138.98,8,0.050,loyalty,2024-01-24\r\n23749,1469,EMEA,home,retail,50.02,6,0.167,none,2024-10-26\r\n23750,1203,AMER,fashion,retail,30.89,1,0.034,none,2024-06-25\r\n23751,2178,AMER,grocery,online,37.05,3,0.093,none,2024-06-22\r\n23752,1459,LATAM,home,online,36.87,1,0.158,bundle,2024-12-07\r\n23753,1548,EMEA,sports,retail,36.17,4,0.172,none,2024-11-03\r\n23754,1153,AMER,toys,retail,25.09,4,0.014,bundle,2024-03-01\r\n23755,2051,APAC,sports,online,52.51,8,0.213,coupon,2024-06-03\r\n23756,1862,LATAM,home,mobile,58.05,7,0.017,bundle,2024-10-24\r\n23757,1998,APAC,grocery,online,45.96,1,0.187,none,2024-03-18\r\n23758,1838,AMER,electronics,retail,26.29,8,0.155,coupon,2024-05-27\r\n23759,1284,APAC,home,online,66.45,4,0.027,none,2024-04-13\r\n23760,1120,LATAM,fashion,retail,96.48,7,0.242,none,2024-09-04\r\n23761,1841,AMER,sports,partner,125.62,5,0.040,none,2024-09-13\r\n23762
,1008,AMER,grocery,retail,51.96,2,0.004,coupon,2024-03-11\r\n23763,1456,APAC,home,online,25.41,7,0.007,none,2024-02-07\r\n23764,1887,LATAM,home,online,116.60,3,0.199,loyalty,2024-05-22\r\n23765,1733,LATAM,fashion,retail,34.91,7,0.026,none,2024-06-12\r\n23766,1649,APAC,electronics,online,111.54,6,0.075,coupon,2024-03-18\r\n23767,1997,APAC,grocery,retail,39.13,6,0.105,none,2024-02-14\r\n23768,2488,EMEA,sports,online,37.20,2,0.207,none,2024-12-06\r\n23769,1212,LATAM,grocery,online,39.55,6,0.068,coupon,2024-04-19\r\n23770,2246,AMER,electronics,online,19.71,8,0.124,none,2024-03-10\r\n23771,1845,AMER,grocery,online,50.35,6,0.166,none,2024-08-28\r\n23772,1690,LATAM,sports,online,71.58,3,0.041,coupon,2024-07-24\r\n23773,2421,AMER,grocery,online,90.32,4,0.140,none,2024-05-22\r\n23774,2484,APAC,grocery,retail,51.87,1,0.219,bundle,2024-05-10\r\n23775,2283,AMER,fashion,online,140.81,3,0.116,none,2024-09-06\r\n23776,1209,AMER,electronics,partner,77.83,2,0.043,loyalty,2024-10-12\r\n23777,1327,APAC,electronics,online,50.60,2,0.213,loyalty,2024-05-11\r\n23778,1691,LATAM,home,retail,157.15,8,0.022,none,2024-02-12\r\n23779,1584,EMEA,home,online,61.64,2,0.013,none,2024-02-13\r\n23780,1038,APAC,grocery,retail,61.36,4,0.166,none,2024-12-14\r\n23781,1088,LATAM,home,online,22.24,7,0.208,loyalty,2024-03-04\r\n23782,1818,AMER,grocery,retail,52.05,4,0.206,coupon,2024-09-09\r\n23783,1679,APAC,home,online,38.08,2,0.155,none,2024-10-24\r\n23784,1929,LATAM,toys,online,72.50,3,0.037,coupon,2024-10-13\r\n23785,2380,AMER,electronics,online,176.53,1,0.157,none,2024-08-13\r\n23786,2367,AMER,grocery,mobile,46.04,7,0.006,bundle,2024-11-28\r\n23787,2090,AMER,grocery,online,49.27,5,0.220,none,2024-02-24\r\n23788,1537,LATAM,electronics,online,76.87,6,0.197,coupon,2024-05-17\r\n23789,1421,APAC,toys,online,77.04,3,0.201,none,2024-06-26\r\n23790,1488,AMER,grocery,online,35.24,3,0.180,none,2024-09-20\r\n23791,2071,APAC,grocery,online,45.90,5,0.057,loyalty,2024-09-01\r\n23792,1259,EMEA,grocery,online,49.09,4,0
.218,none,2024-06-05\r\n23793,1148,AMER,grocery,retail,20.91,4,0.137,coupon,2024-10-06\r\n23794,2262,APAC,grocery,online,77.25,1,0.031,none,2024-06-11\r\n23795,2382,LATAM,toys,online,70.62,8,0.076,none,2024-09-01\r\n23796,1392,AMER,home,retail,43.51,8,0.042,none,2024-02-12\r\n23797,1927,EMEA,grocery,online,67.27,1,0.217,none,2024-01-01\r\n23798,2315,LATAM,fashion,retail,56.05,4,0.009,coupon,2024-02-25\r\n23799,2235,AMER,grocery,online,28.90,2,0.049,none,2024-07-09\r\n23800,2150,APAC,home,online,126.99,3,0.185,bundle,2024-12-13\r\n23801,1217,EMEA,home,online,39.61,8,0.119,coupon,2024-02-24\r\n23802,1103,EMEA,electronics,mobile,28.87,3,0.068,coupon,2024-03-25\r\n23803,1792,AMER,electronics,online,49.93,1,0.087,loyalty,2024-12-21\r\n23804,1117,LATAM,grocery,retail,132.41,6,0.213,bundle,2024-08-23\r\n23805,1501,AMER,electronics,online,29.19,6,0.102,none,2024-05-26\r\n23806,1245,APAC,fashion,online,85.92,6,0.116,none,2024-01-20\r\n23807,2074,AMER,sports,online,36.16,5,0.006,bundle,2024-10-12\r\n23808,1397,LATAM,electronics,retail,70.53,7,0.100,coupon,2024-12-27\r\n23809,1628,EMEA,toys,online,41.32,3,0.155,coupon,2024-08-07\r\n23810,2458,EMEA,home,online,61.32,7,0.120,coupon,2024-07-20\r\n23811,1434,EMEA,grocery,retail,26.30,3,0.221,loyalty,2024-10-18\r\n23812,2120,AMER,grocery,online,79.48,1,0.026,none,2024-10-11\r\n23813,1219,LATAM,fashion,retail,47.69,6,0.229,coupon,2024-04-23\r\n23814,2388,LATAM,electronics,mobile,152.17,5,0.213,none,2024-07-27\r\n23815,2120,AMER,electronics,online,71.22,2,0.032,coupon,2024-03-14\r\n23816,2391,EMEA,electronics,mobile,64.33,1,0.146,none,2024-04-10\r\n23817,1411,LATAM,fashion,online,23.40,1,0.001,none,2024-10-09\r\n23818,1216,APAC,grocery,online,60.03,6,0.250,none,2024-09-18\r\n23819,1772,EMEA,sports,mobile,199.87,6,0.116,none,2024-03-24\r\n23820,2359,LATAM,sports,retail,47.92,3,0.008,coupon,2024-10-02\r\n23821,1562,AMER,sports,online,50.42,8,0.166,bundle,2024-10-02\r\n23822,2024,AMER,fashion,retail,60.34,3,0.070,coupon,2024-01-15\r\n23
823,1801,LATAM,home,online,72.71,6,0.027,none,2024-11-02\r\n23824,1323,EMEA,sports,retail,57.21,6,0.023,none,2024-04-10\r\n23825,2349,APAC,fashion,online,125.67,1,0.198,none,2024-12-04\r\n23826,2181,AMER,electronics,retail,94.11,5,0.193,none,2024-03-01\r\n23827,1256,LATAM,home,retail,91.82,5,0.187,none,2024-04-28\r\n23828,1542,APAC,grocery,online,51.56,3,0.058,none,2024-08-24\r\n23829,2323,AMER,electronics,online,67.42,8,0.133,coupon,2024-05-07\r\n23830,1697,APAC,home,online,31.63,2,0.074,bundle,2024-10-25\r\n23831,1509,AMER,home,online,158.75,4,0.229,none,2024-04-01\r\n23832,2493,APAC,fashion,mobile,36.02,1,0.026,coupon,2024-02-26\r\n23833,1970,LATAM,grocery,retail,114.27,1,0.164,bundle,2024-03-19\r\n23834,1733,LATAM,electronics,retail,46.26,4,0.103,bundle,2024-12-27\r\n23835,1433,EMEA,electronics,online,34.32,1,0.010,loyalty,2024-01-17\r\n23836,1052,LATAM,electronics,partner,68.50,4,0.240,none,2024-05-19\r\n23837,2136,AMER,toys,mobile,64.44,7,0.016,none,2024-04-18\r\n23838,1657,LATAM,electronics,online,50.00,2,0.174,none,2024-06-11\r\n23839,2121,APAC,grocery,partner,47.97,4,0.218,none,2024-08-03\r\n23840,1635,APAC,toys,online,38.52,3,0.056,none,2024-09-02\r\n23841,1560,AMER,grocery,retail,34.95,2,0.105,none,2024-05-07\r\n23842,1239,APAC,toys,retail,67.34,1,0.093,coupon,2024-11-01\r\n23843,1844,APAC,sports,retail,43.43,4,0.043,none,2024-05-03\r\n23844,1138,AMER,home,mobile,23.73,2,0.137,coupon,2024-05-05\r\n23845,2357,EMEA,grocery,online,145.22,1,0.156,coupon,2024-01-17\r\n23846,1459,LATAM,grocery,mobile,17.28,8,0.073,none,2024-07-25\r\n23847,1293,AMER,home,retail,39.48,7,0.153,coupon,2024-10-04\r\n23848,2365,LATAM,grocery,retail,61.20,2,0.226,none,2024-03-13\r\n23849,1846,APAC,grocery,online,78.43,5,0.100,bundle,2024-06-04\r\n23850,1949,AMER,home,online,119.83,5,0.247,none,2024-11-15\r\n23851,1875,EMEA,electronics,retail,114.29,6,0.169,none,2024-11-14\r\n23852,1589,AMER,grocery,retail,123.78,7,0.241,none,2024-03-26\r\n23853,1550,APAC,electronics,partner,87.66,6,0.
226,bundle,2024-07-20\r\n23854,1336,APAC,fashion,retail,88.17,8,0.023,none,2024-12-02\r\n23855,1233,AMER,electronics,online,74.94,5,0.033,coupon,2024-08-26\r\n23856,1433,EMEA,sports,online,48.33,1,0.125,coupon,2024-09-10\r\n23857,1471,EMEA,grocery,online,61.81,4,0.106,loyalty,2024-04-20\r\n23858,1653,APAC,sports,retail,108.67,5,0.236,coupon,2024-06-04\r\n23859,2418,AMER,fashion,retail,49.94,7,0.003,none,2024-11-02\r\n23860,2423,LATAM,electronics,retail,102.78,4,0.056,none,2024-09-03\r\n23861,1939,LATAM,home,online,45.68,8,0.009,none,2024-05-26\r\n23862,1313,EMEA,grocery,mobile,69.30,5,0.058,none,2024-08-06\r\n23863,1395,APAC,grocery,retail,30.27,7,0.200,none,2024-06-01\r\n23864,2477,APAC,grocery,retail,55.17,8,0.054,coupon,2024-07-01\r\n23865,2410,EMEA,toys,online,47.75,3,0.029,none,2024-03-06\r\n23866,1100,AMER,grocery,mobile,75.06,8,0.243,none,2024-03-22\r\n23867,1175,AMER,toys,mobile,21.78,3,0.095,none,2024-10-15\r\n23868,2498,LATAM,fashion,mobile,50.78,4,0.178,loyalty,2024-06-04\r\n23869,1602,EMEA,fashion,retail,25.80,1,0.154,none,2024-11-19\r\n23870,1656,LATAM,home,mobile,227.71,2,0.178,none,2024-08-25\r\n23871,2235,AMER,fashion,retail,65.44,6,0.125,none,2024-05-02\r\n23872,1706,EMEA,electronics,online,83.46,1,0.116,none,2024-10-15\r\n23873,2059,AMER,grocery,retail,74.86,8,0.213,none,2024-05-08\r\n23874,2354,LATAM,home,retail,134.72,8,0.222,none,2024-01-02\r\n23875,2411,EMEA,fashion,retail,69.53,8,0.071,bundle,2024-01-03\r\n23876,1066,AMER,home,retail,106.73,6,0.047,coupon,2024-11-13\r\n23877,1891,APAC,home,partner,126.47,6,0.100,none,2024-12-23\r\n23878,1252,APAC,grocery,online,60.65,4,0.202,none,2024-04-24\r\n23879,2136,AMER,toys,mobile,74.60,3,0.044,none,2024-02-21\r\n23880,2071,APAC,electronics,retail,43.56,7,0.053,loyalty,2024-04-23\r\n23881,1958,APAC,electronics,retail,89.33,5,0.238,none,2024-03-21\r\n23882,1319,EMEA,sports,mobile,51.54,5,0.011,none,2024-12-02\r\n23883,1258,EMEA,toys,online,58.46,1,0.152,none,2024-08-21\r\n23884,1911,LATAM,toys,retail,49.
77,8,0.035,none,2024-03-17\r\n23885,1077,AMER,electronics,retail,47.93,5,0.134,none,2024-03-26\r\n23886,1047,APAC,grocery,online,77.79,5,0.114,bundle,2024-06-16\r\n23887,1932,EMEA,home,mobile,101.87,1,0.002,none,2024-02-01\r\n23888,2290,LATAM,toys,mobile,36.63,2,0.108,none,2024-11-13\r\n23889,1727,APAC,fashion,partner,59.44,7,0.135,none,2024-01-21\r\n23890,1245,APAC,fashion,retail,67.42,8,0.039,none,2024-11-18\r\n23891,2143,AMER,sports,partner,50.12,8,0.182,coupon,2024-05-14\r\n23892,2022,LATAM,electronics,retail,18.84,4,0.186,none,2024-03-03\r\n23893,2470,EMEA,electronics,retail,80.38,8,0.063,bundle,2024-11-10\r\n23894,1340,LATAM,fashion,retail,62.78,2,0.053,none,2024-12-05\r\n23895,1430,EMEA,electronics,retail,49.71,5,0.088,none,2024-09-09\r\n23896,2402,AMER,grocery,online,28.85,3,0.204,none,2024-04-16\r\n23897,2294,EMEA,home,online,37.66,6,0.164,none,2024-08-26\r\n23898,1339,EMEA,home,mobile,34.50,6,0.231,none,2024-06-28\r\n23899,1410,AMER,home,online,88.28,4,0.134,coupon,2024-09-17\r\n23900,1565,AMER,toys,retail,55.34,2,0.095,coupon,2024-07-19\r\n23901,2177,AMER,sports,retail,53.48,4,0.231,coupon,2024-10-12\r\n23902,1898,EMEA,home,online,44.67,8,0.005,coupon,2024-12-08\r\n23903,1094,LATAM,grocery,retail,82.25,1,0.219,none,2024-12-16\r\n23904,1742,AMER,sports,online,156.81,7,0.036,coupon,2024-05-05\r\n23905,2278,APAC,toys,online,32.46,2,0.015,none,2024-02-25\r\n23906,1483,EMEA,toys,online,82.08,1,0.205,coupon,2024-12-04\r\n23907,2304,LATAM,electronics,online,37.84,4,0.117,coupon,2024-07-19\r\n23908,1870,EMEA,toys,partner,17.80,3,0.117,none,2024-09-09\r\n23909,2284,EMEA,electronics,retail,35.01,7,0.015,none,2024-05-02\r\n23910,2398,EMEA,electronics,retail,42.13,6,0.053,none,2024-04-23\r\n23911,1524,LATAM,sports,partner,59.37,7,0.149,coupon,2024-05-11\r\n23912,1162,AMER,home,retail,74.35,5,0.163,none,2024-01-06\r\n23913,1417,APAC,grocery,online,76.18,4,0.207,bundle,2024-11-02\r\n23914,2037,LATAM,toys,mobile,24.92,3,0.158,none,2024-11-05\r\n23915,1743,LATAM,grocery,
online,37.91,5,0.022,none,2024-05-04\r\n23916,1722,EMEA,home,retail,38.22,3,0.002,coupon,2024-11-07\r\n23917,1252,APAC,fashion,mobile,55.46,7,0.149,none,2024-04-01\r\n23918,1579,AMER,electronics,online,36.05,1,0.228,none,2024-09-24\r\n23919,2254,LATAM,fashion,retail,47.14,7,0.142,coupon,2024-04-25\r\n23920,2047,AMER,toys,retail,90.72,3,0.142,bundle,2024-07-14\r\n23921,1695,LATAM,electronics,online,51.04,2,0.088,none,2024-10-13\r\n23922,1522,LATAM,home,mobile,37.53,2,0.147,none,2024-12-23\r\n23923,1544,LATAM,electronics,mobile,44.96,7,0.065,loyalty,2024-09-24\r\n23924,1602,EMEA,electronics,partner,134.60,6,0.147,none,2024-10-07\r\n23925,2125,LATAM,sports,retail,32.19,3,0.105,bundle,2024-09-27\r\n23926,1114,APAC,electronics,retail,64.04,6,0.001,coupon,2024-09-06\r\n23927,1778,LATAM,electronics,retail,84.09,7,0.209,coupon,2024-04-15\r\n23928,2271,LATAM,sports,online,125.05,5,0.106,bundle,2024-04-09\r\n23929,2327,EMEA,grocery,online,63.31,2,0.181,none,2024-04-01\r\n23930,1736,AMER,fashion,online,37.66,3,0.160,none,2024-12-16\r\n23931,2239,EMEA,home,retail,57.06,2,0.061,bundle,2024-03-02\r\n23932,2197,LATAM,grocery,mobile,119.98,4,0.220,bundle,2024-09-26\r\n23933,2290,LATAM,grocery,retail,35.01,8,0.129,coupon,2024-01-19\r\n23934,2162,EMEA,toys,retail,68.67,5,0.146,coupon,2024-03-08\r\n23935,2150,APAC,electronics,retail,84.44,3,0.197,coupon,2024-08-26\r\n23936,1369,AMER,electronics,retail,38.92,5,0.166,none,2024-08-23\r\n23937,1508,LATAM,grocery,retail,157.64,6,0.150,bundle,2024-05-26\r\n23938,2010,APAC,fashion,retail,59.46,4,0.078,coupon,2024-05-18\r\n23939,1704,AMER,sports,online,33.12,3,0.222,none,2024-12-02\r\n23940,1905,APAC,fashion,retail,38.30,3,0.018,bundle,2024-07-27\r\n23941,1267,EMEA,grocery,mobile,62.66,6,0.001,none,2024-06-14\r\n23942,2249,LATAM,electronics,retail,84.94,8,0.183,none,2024-04-18\r\n23943,1226,AMER,fashion,online,47.57,6,0.018,bundle,2024-11-01\r\n23944,1226,AMER,home,retail,61.16,7,0.169,none,2024-05-08\r\n23945,2489,LATAM,electronics,partner,2
3.28,4,0.164,coupon,2024-05-21\r\n23946,1561,EMEA,grocery,retail,55.95,4,0.112,bundle,2024-05-28\r\n23947,1494,AMER,fashion,mobile,56.09,7,0.019,none,2024-07-27\r\n23948,2113,LATAM,grocery,partner,40.60,5,0.219,coupon,2024-03-20\r\n23949,2190,LATAM,grocery,online,59.35,8,0.181,none,2024-09-03\r\n23950,1323,EMEA,fashion,retail,107.49,1,0.066,none,2024-01-25\r\n23951,1122,AMER,grocery,retail,63.31,2,0.187,none,2024-09-14\r\n23952,1702,AMER,grocery,online,66.62,7,0.133,bundle,2024-02-23\r\n23953,1222,AMER,home,online,50.10,1,0.047,none,2024-08-04\r\n23954,1457,EMEA,sports,retail,59.89,8,0.181,none,2024-09-15\r\n23955,1108,EMEA,electronics,retail,57.56,2,0.062,none,2024-12-11\r\n23956,1238,AMER,electronics,retail,39.86,4,0.059,none,2024-11-24\r\n23957,2189,LATAM,toys,online,30.21,3,0.005,loyalty,2024-08-06\r\n23958,2305,AMER,toys,online,108.25,3,0.176,coupon,2024-07-27\r\n23959,2329,LATAM,toys,retail,84.84,7,0.007,none,2024-09-21\r\n23960,1647,LATAM,toys,mobile,57.09,5,0.208,none,2024-03-03\r\n23961,2360,EMEA,grocery,retail,65.46,8,0.182,bundle,2024-06-19\r\n23962,2228,EMEA,sports,online,52.70,2,0.053,none,2024-04-12\r\n23963,1944,AMER,grocery,online,45.17,7,0.158,none,2024-07-26\r\n23964,1043,LATAM,fashion,retail,46.87,3,0.052,none,2024-12-04\r\n23965,2244,LATAM,toys,online,42.36,8,0.098,none,2024-01-02\r\n23966,1165,AMER,electronics,retail,88.92,6,0.081,bundle,2024-12-24\r\n23967,2010,APAC,toys,partner,56.52,1,0.094,coupon,2024-09-21\r\n23968,1187,AMER,fashion,retail,64.04,1,0.102,none,2024-03-10\r\n23969,1857,LATAM,toys,online,104.29,4,0.220,coupon,2024-08-27\r\n23970,2311,LATAM,home,retail,63.29,2,0.112,none,2024-07-15\r\n23971,1710,APAC,toys,online,112.57,4,0.240,none,2024-04-16\r\n23972,1231,AMER,toys,online,62.83,5,0.015,coupon,2024-06-12\r\n23973,1300,EMEA,electronics,retail,24.80,8,0.107,coupon,2024-01-23\r\n23974,2431,LATAM,fashion,online,77.61,5,0.013,none,2024-02-23\r\n23975,2261,EMEA,sports,online,224.16,1,0.223,none,2024-01-06\r\n23976,1468,AMER,electronic
s,online,87.09,6,0.235,coupon,2024-02-28\r\n23977,2401,LATAM,toys,retail,38.79,2,0.022,none,2024-01-15\r\n23978,2329,LATAM,electronics,online,27.01,8,0.190,none,2024-05-10\r\n23979,1162,AMER,sports,online,26.82,1,0.193,none,2024-04-26\r\n23980,2045,LATAM,home,online,38.40,8,0.239,none,2024-02-04\r\n23981,1377,APAC,fashion,online,18.88,4,0.196,none,2024-04-08\r\n23982,1410,AMER,toys,online,45.25,8,0.106,none,2024-10-25\r\n23983,1952,EMEA,fashion,retail,135.57,3,0.157,none,2024-07-18\r\n23984,1731,AMER,sports,online,70.77,5,0.221,bundle,2024-07-08\r\n23985,1739,AMER,fashion,retail,97.10,8,0.245,none,2024-07-28\r\n23986,1204,AMER,grocery,mobile,189.44,1,0.116,none,2024-04-06\r\n23987,1629,LATAM,sports,retail,52.91,3,0.084,none,2024-06-11\r\n23988,2125,LATAM,grocery,online,63.51,4,0.245,coupon,2024-09-26\r\n23989,1224,APAC,home,online,116.18,7,0.053,bundle,2024-05-05\r\n23990,1479,AMER,toys,online,46.72,5,0.000,coupon,2024-01-07\r\n23991,1070,EMEA,sports,retail,127.70,7,0.147,coupon,2024-08-03\r\n23992,1815,APAC,fashion,online,48.01,3,0.016,none,2024-10-20\r\n23993,2293,LATAM,fashion,online,52.56,8,0.022,none,2024-02-21\r\n23994,2493,APAC,toys,retail,60.83,7,0.225,coupon,2024-06-22\r\n23995,1081,AMER,electronics,online,43.14,2,0.236,none,2024-05-10\r\n23996,1166,AMER,fashion,retail,38.85,1,0.038,none,2024-05-17\r\n23997,1625,EMEA,electronics,online,89.29,4,0.181,none,2024-09-19\r\n23998,1734,AMER,sports,online,73.43,7,0.137,none,2024-02-21\r\n23999,1505,EMEA,sports,online,24.95,5,0.052,none,2024-08-02\r\n24000,1066,AMER,electronics,online,157.48,7,0.071,none,2024-11-14\r\n24001,2045,LATAM,home,mobile,66.93,5,0.158,none,2024-04-15\r\n24002,1786,APAC,fashion,retail,38.77,5,0.053,bundle,2024-09-09\r\n24003,1141,AMER,grocery,mobile,43.39,8,0.134,coupon,2024-01-02\r\n24004,1117,LATAM,electronics,online,67.29,1,0.106,none,2024-12-19\r\n24005,1456,APAC,home,retail,53.92,4,0.135,loyalty,2024-04-25\r\n24006,1588,LATAM,toys,retail,49.15,3,0.017,none,2024-05-17\r\n24007,1867,AMER,
home,online,123.97,2,0.248,coupon,2024-03-08\r\n24008,1561,EMEA,sports,online,42.65,1,0.039,none,2024-02-06\r\n24009,1845,AMER,sports,retail,49.90,2,0.132,none,2024-06-24\r\n24010,1722,EMEA,toys,partner,75.42,8,0.228,none,2024-06-23\r\n24011,1954,APAC,grocery,mobile,137.00,3,0.229,none,2024-10-20\r\n24012,2396,AMER,electronics,mobile,47.21,4,0.105,loyalty,2024-10-24\r\n24013,1369,AMER,home,online,75.00,3,0.236,coupon,2024-12-20\r\n24014,2195,APAC,fashion,retail,89.30,4,0.080,loyalty,2024-02-09\r\n24015,2459,AMER,electronics,retail,43.96,4,0.229,none,2024-03-13\r\n24016,1600,AMER,grocery,mobile,36.24,7,0.000,loyalty,2024-10-26\r\n24017,1477,APAC,fashion,online,45.83,3,0.130,none,2024-08-07\r\n24018,1386,AMER,fashion,online,28.77,2,0.093,coupon,2024-08-24\r\n24019,1329,APAC,toys,online,29.88,7,0.086,none,2024-03-05\r\n24020,1644,EMEA,grocery,retail,26.47,8,0.134,none,2024-02-27\r\n24021,1705,AMER,home,retail,44.26,6,0.160,none,2024-05-13\r\n24022,2196,AMER,electronics,online,38.05,5,0.097,coupon,2024-06-02\r\n24023,2251,APAC,electronics,online,36.76,1,0.205,bundle,2024-03-15\r\n24024,1630,APAC,fashion,online,146.83,7,0.084,none,2024-10-18\r\n24025,2220,LATAM,fashion,online,47.33,7,0.234,coupon,2024-04-23\r\n24026,2476,APAC,toys,mobile,99.85,3,0.060,none,2024-03-21\r\n24027,1010,EMEA,toys,retail,119.75,3,0.063,coupon,2024-09-24\r\n24028,1636,APAC,grocery,retail,52.31,1,0.086,coupon,2024-03-22\r\n24029,1353,EMEA,home,retail,22.17,2,0.143,none,2024-01-06\r\n24030,1706,EMEA,fashion,online,38.37,5,0.178,bundle,2024-06-27\r\n24031,1567,AMER,sports,online,58.11,1,0.202,none,2024-11-23\r\n24032,1425,EMEA,sports,mobile,189.48,6,0.049,bundle,2024-04-05\r\n24033,2247,LATAM,toys,online,22.85,8,0.160,none,2024-08-26\r\n24034,1558,EMEA,electronics,online,79.82,3,0.150,coupon,2024-04-20\r\n24035,1953,EMEA,home,retail,60.13,2,0.223,none,2024-02-07\r\n24036,1932,EMEA,home,mobile,20.81,4,0.199,none,2024-05-24\r\n24037,1369,AMER,grocery,retail,51.19,1,0.083,coupon,2024-12-07\r\n24038,21
06,LATAM,home,online,47.09,7,0.167,coupon,2024-08-04\r\n24039,1456,APAC,grocery,online,47.41,1,0.196,none,2024-01-02\r\n24040,2041,LATAM,sports,online,65.92,3,0.148,none,2024-05-03\r\n24041,2156,AMER,grocery,retail,36.54,2,0.018,loyalty,2024-11-18\r\n24042,1020,APAC,fashion,mobile,37.35,5,0.103,none,2024-09-13\r\n24043,1991,APAC,grocery,online,49.08,3,0.089,coupon,2024-01-25\r\n24044,1387,AMER,home,mobile,107.05,4,0.015,none,2024-01-14\r\n24045,2460,AMER,electronics,retail,27.23,4,0.055,none,2024-07-25\r\n24046,1721,EMEA,electronics,partner,97.56,6,0.016,bundle,2024-04-25\r\n24047,2144,EMEA,electronics,online,31.50,7,0.110,none,2024-05-15\r\n24048,1231,AMER,toys,online,50.94,7,0.197,bundle,2024-02-27\r\n24049,1004,LATAM,home,partner,48.51,2,0.093,none,2024-07-02\r\n24050,1737,AMER,grocery,online,85.52,6,0.203,coupon,2024-09-15\r\n24051,1873,EMEA,grocery,online,31.70,7,0.075,none,2024-12-26\r\n24052,2298,APAC,home,mobile,33.47,3,0.181,loyalty,2024-07-06\r\n24053,1632,LATAM,home,retail,134.27,2,0.079,none,2024-11-27\r\n24054,1300,EMEA,grocery,retail,92.75,3,0.053,coupon,2024-02-02\r\n24055,2374,LATAM,electronics,online,42.46,6,0.099,bundle,2024-03-28\r\n24056,2202,APAC,grocery,retail,35.70,1,0.110,none,2024-04-06\r\n24057,1512,APAC,sports,partner,39.69,6,0.162,loyalty,2024-12-16\r\n24058,1096,EMEA,fashion,online,81.78,5,0.067,none,2024-09-17\r\n24059,2181,AMER,home,online,125.70,1,0.092,coupon,2024-07-27\r\n24060,2367,AMER,home,online,62.00,1,0.130,coupon,2024-08-10\r\n24061,1798,AMER,fashion,retail,33.00,1,0.250,bundle,2024-05-08\r\n24062,1124,AMER,home,mobile,26.45,2,0.145,none,2024-02-20\r\n24063,2463,AMER,grocery,online,41.29,5,0.201,none,2024-07-24\r\n24064,1722,EMEA,electronics,online,83.69,1,0.078,none,2024-07-21\r\n24065,2325,LATAM,home,mobile,106.97,2,0.136,none,2024-07-16\r\n24066,1019,APAC,toys,retail,46.60,3,0.091,none,2024-03-14\r\n24067,2181,AMER,electronics,retail,63.21,8,0.187,none,2024-12-25\r\n24068,2160,LATAM,grocery,retail,56.92,6,0.109,none,2024-0
9-03\r\n24069,1387,AMER,fashion,online,30.04,3,0.018,none,2024-04-15\r\n24070,1120,LATAM,electronics,retail,88.58,1,0.041,none,2024-11-26\r\n24071,1619,APAC,fashion,online,42.27,8,0.054,loyalty,2024-08-03\r\n24072,1176,EMEA,electronics,online,44.51,2,0.248,none,2024-08-24\r\n24073,1544,LATAM,sports,mobile,24.64,3,0.246,bundle,2024-06-08\r\n24074,2273,APAC,grocery,retail,51.75,6,0.097,none,2024-08-24\r\n24075,1456,APAC,fashion,mobile,103.57,2,0.002,none,2024-02-24\r\n24076,1908,AMER,sports,retail,44.32,3,0.004,loyalty,2024-11-06\r\n24077,2065,EMEA,fashion,retail,55.80,6,0.046,bundle,2024-01-02\r\n24078,1714,APAC,grocery,online,45.42,2,0.197,none,2024-09-28\r\n24079,1068,APAC,home,online,59.68,1,0.187,none,2024-12-04\r\n24080,2495,EMEA,sports,retail,12.81,1,0.010,bundle,2024-01-28\r\n24081,1952,EMEA,fashion,retail,51.22,2,0.229,none,2024-01-14\r\n24082,2497,AMER,toys,retail,51.91,6,0.013,none,2024-08-16\r\n24083,2322,AMER,home,retail,29.65,7,0.075,none,2024-05-11\r\n24084,1437,EMEA,electronics,online,100.30,3,0.165,none,2024-08-09\r\n24085,2446,LATAM,grocery,mobile,92.40,1,0.092,bundle,2024-07-13\r\n24086,1622,LATAM,grocery,online,46.04,7,0.076,loyalty,2024-11-09\r\n24087,1133,EMEA,home,retail,82.01,2,0.142,none,2024-01-01\r\n24088,2491,APAC,electronics,mobile,32.93,1,0.203,bundle,2024-04-24\r\n24089,2067,LATAM,fashion,online,69.83,4,0.152,none,2024-01-01\r\n24090,1572,LATAM,sports,retail,38.54,5,0.143,none,2024-07-03\r\n24091,1011,APAC,fashion,retail,63.90,7,0.119,coupon,2024-01-25\r\n24092,1007,APAC,grocery,mobile,33.56,2,0.153,bundle,2024-08-16\r\n24093,2036,APAC,toys,retail,59.21,1,0.164,bundle,2024-03-07\r\n24094,1517,AMER,grocery,retail,107.46,1,0.226,none,2024-01-25\r\n24095,1320,EMEA,electronics,retail,91.98,6,0.106,none,2024-07-18\r\n24096,1035,EMEA,home,online,38.47,3,0.058,none,2024-01-12\r\n24097,1405,LATAM,electronics,online,87.57,2,0.040,none,2024-08-28\r\n24098,1749,LATAM,home,online,109.73,8,0.103,none,2024-03-19\r\n24099,2466,APAC,fashion,online,114.6
6,7,0.063,coupon,2024-11-28\r\n24100,1069,APAC,grocery,online,105.22,4,0.203,bundle,2024-08-09\r\n24101,1319,EMEA,grocery,partner,60.24,3,0.250,none,2024-06-28\r\n24102,2004,LATAM,home,online,40.71,1,0.133,loyalty,2024-08-02\r\n24103,1303,LATAM,home,mobile,130.93,3,0.249,none,2024-09-07\r\n24104,1907,EMEA,sports,online,43.83,8,0.248,coupon,2024-11-13\r\n24105,2332,APAC,toys,retail,93.76,6,0.083,bundle,2024-03-14\r\n24106,2124,AMER,toys,retail,53.87,1,0.119,coupon,2024-12-08\r\n24107,1602,EMEA,sports,online,139.04,1,0.043,loyalty,2024-08-16\r\n24108,2480,APAC,fashion,online,61.96,7,0.055,bundle,2024-03-17\r\n24109,2487,LATAM,home,online,127.99,6,0.128,coupon,2024-12-01\r\n24110,1982,EMEA,grocery,mobile,102.61,5,0.200,bundle,2024-04-16\r\n24111,1026,APAC,fashion,retail,65.77,5,0.111,bundle,2024-10-10\r\n24112,1725,APAC,electronics,retail,108.75,4,0.243,bundle,2024-11-22\r\n24113,2420,EMEA,electronics,online,82.91,6,0.245,bundle,2024-09-14\r\n24114,2082,APAC,electronics,online,81.81,3,0.233,bundle,2024-01-09\r\n24115,1429,APAC,grocery,retail,29.66,6,0.176,none,2024-12-28\r\n24116,1470,LATAM,grocery,online,74.15,7,0.181,loyalty,2024-04-14\r\n24117,2403,LATAM,grocery,online,147.47,6,0.095,none,2024-09-10\r\n24118,1474,LATAM,home,online,25.94,1,0.244,none,2024-07-17\r\n24119,1588,LATAM,electronics,online,49.02,6,0.007,coupon,2024-11-19\r\n24120,1056,LATAM,grocery,online,116.93,2,0.133,none,2024-06-08\r\n24121,1653,APAC,grocery,retail,31.43,6,0.057,none,2024-06-06\r\n24122,2154,APAC,home,mobile,32.56,4,0.087,coupon,2024-11-07\r\n24123,1011,APAC,toys,retail,63.33,3,0.179,none,2024-05-03\r\n24124,1251,EMEA,grocery,mobile,20.11,4,0.100,none,2024-06-08\r\n24125,2225,EMEA,grocery,retail,73.94,2,0.234,coupon,2024-04-14\r\n24126,1268,EMEA,grocery,partner,161.33,4,0.089,none,2024-04-05\r\n24127,1440,AMER,toys,partner,52.32,3,0.157,none,2024-12-23\r\n24128,2261,EMEA,fashion,online,111.69,1,0.098,loyalty,2024-10-11\r\n24129,2392,EMEA,fashion,online,48.15,4,0.190,coupon,2024-04-09\r\
n24130,1567,AMER,electronics,mobile,45.53,8,0.045,none,2024-07-05\r\n24131,1905,APAC,fashion,retail,45.72,2,0.208,loyalty,2024-12-11\r\n24132,1291,EMEA,electronics,retail,117.24,3,0.000,loyalty,2024-06-02\r\n24133,1386,AMER,grocery,online,147.41,5,0.075,none,2024-09-23\r\n24134,1095,APAC,fashion,online,54.11,1,0.246,none,2024-04-24\r\n24135,1819,AMER,grocery,partner,78.70,7,0.225,loyalty,2024-02-17\r\n24136,1658,AMER,home,retail,169.67,6,0.060,none,2024-08-12\r\n24137,1127,EMEA,grocery,retail,34.04,3,0.134,none,2024-05-12\r\n24138,2015,APAC,grocery,retail,81.36,3,0.185,none,2024-03-21\r\n24139,1718,EMEA,grocery,online,104.19,5,0.140,none,2024-05-24\r\n24140,1429,APAC,grocery,online,64.32,8,0.162,none,2024-10-21\r\n24141,1873,EMEA,grocery,online,38.83,6,0.032,coupon,2024-10-02\r\n24142,2299,EMEA,grocery,partner,67.03,3,0.213,loyalty,2024-03-04\r\n24143,2067,LATAM,grocery,mobile,39.85,1,0.008,coupon,2024-09-04\r\n24144,1495,LATAM,grocery,retail,43.69,7,0.147,bundle,2024-11-09\r\n24145,1862,LATAM,home,online,114.00,2,0.040,none,2024-11-04\r\n24146,2407,EMEA,sports,mobile,204.24,4,0.115,none,2024-08-23\r\n24147,2214,AMER,home,mobile,60.91,3,0.101,none,2024-04-02\r\n24148,1623,AMER,electronics,retail,62.87,4,0.046,none,2024-05-03\r\n24149,2240,LATAM,toys,retail,46.04,5,0.028,none,2024-03-02\r\n24150,1491,EMEA,electronics,partner,42.77,3,0.181,coupon,2024-05-04\r\n24151,1024,APAC,home,online,31.26,4,0.153,none,2024-12-05\r\n24152,1934,EMEA,sports,mobile,156.06,7,0.083,coupon,2024-06-16\r\n24153,2001,EMEA,grocery,online,35.27,1,0.004,coupon,2024-02-06\r\n24154,2189,LATAM,home,mobile,44.00,8,0.080,none,2024-11-25\r\n24155,2152,EMEA,electronics,retail,66.02,7,0.099,coupon,2024-11-02\r\n24156,2203,APAC,grocery,retail,73.16,1,0.168,loyalty,2024-03-04\r\n24157,2008,APAC,sports,online,74.87,8,0.008,coupon,2024-10-01\r\n24158,2050,APAC,home,online,43.42,3,0.119,coupon,2024-08-12\r\n24159,1230,EMEA,sports,retail,64.23,5,0.232,loyalty,2024-04-22\r\n24160,2134,AMER,electronics,mobil
e,24.01,8,0.179,coupon,2024-03-08\r\n24161,2115,APAC,sports,retail,28.78,3,0.184,none,2024-12-14\r\n24162,1759,EMEA,grocery,online,103.70,8,0.117,none,2024-06-25\r\n24163,2254,LATAM,home,online,79.01,3,0.009,none,2024-05-12\r\n24164,1370,APAC,grocery,online,41.39,8,0.191,loyalty,2024-10-18\r\n24165,2288,AMER,sports,online,40.16,3,0.244,none,2024-09-27\r\n24166,1385,LATAM,fashion,mobile,138.88,3,0.133,none,2024-01-17\r\n24167,1714,APAC,sports,retail,50.95,7,0.186,none,2024-10-09\r\n24168,2186,LATAM,grocery,online,29.19,5,0.027,none,2024-12-17\r\n24169,1037,EMEA,toys,online,120.83,4,0.226,none,2024-05-23\r\n24170,1477,APAC,toys,online,55.21,2,0.075,coupon,2024-03-05\r\n24171,2314,EMEA,electronics,online,117.00,2,0.169,none,2024-06-02\r\n24172,1417,APAC,toys,partner,51.32,8,0.088,coupon,2024-09-16\r\n24173,1867,AMER,fashion,online,119.25,1,0.199,loyalty,2024-03-02\r\n24174,1808,APAC,grocery,retail,70.58,3,0.036,bundle,2024-01-12\r\n24175,1146,LATAM,fashion,retail,51.10,6,0.180,none,2024-12-06\r\n24176,2350,APAC,toys,retail,50.91,2,0.224,none,2024-11-28\r\n24177,2016,LATAM,toys,online,68.77,3,0.039,none,2024-09-25\r\n24178,1254,APAC,grocery,retail,107.08,7,0.194,none,2024-01-06\r\n24179,2473,EMEA,grocery,mobile,84.96,3,0.208,none,2024-10-11\r\n24180,1212,LATAM,fashion,online,52.40,3,0.222,coupon,2024-04-06\r\n24181,2297,EMEA,electronics,online,48.75,5,0.096,loyalty,2024-07-15\r\n24182,1074,LATAM,electronics,online,47.89,7,0.184,none,2024-10-18\r\n24183,1788,AMER,home,online,144.36,1,0.225,coupon,2024-10-15\r\n24184,1021,AMER,electronics,online,59.78,4,0.053,coupon,2024-02-10\r\n24185,2141,AMER,sports,retail,59.70,3,0.148,bundle,2024-08-09\r\n24186,1987,AMER,toys,retail,46.64,1,0.165,bundle,2024-05-28\r\n24187,2481,APAC,electronics,mobile,23.74,4,0.046,loyalty,2024-09-21\r\n24188,1901,AMER,home,retail,64.72,4,0.050,none,2024-10-26\r\n24189,1378,APAC,grocery,partner,65.23,3,0.180,none,2024-05-12\r\n24190,1054,EMEA,toys,online,50.75,8,0.238,none,2024-08-02\r\n24191,1485,AP
AC,home,retail,39.59,2,0.234,none,2024-08-13\r\n24192,2400,EMEA,grocery,online,134.35,8,0.195,coupon,2024-11-24\r\n24193,2097,AMER,electronics,online,45.70,4,0.062,none,2024-03-28\r\n24194,1052,LATAM,home,mobile,70.37,3,0.218,coupon,2024-09-25\r\n24195,1327,APAC,grocery,online,49.57,2,0.233,none,2024-12-22\r\n24196,2462,EMEA,home,online,59.95,1,0.142,loyalty,2024-07-21\r\n24197,1837,LATAM,electronics,online,119.37,2,0.196,none,2024-01-04\r\n24198,1105,AMER,fashion,retail,76.92,1,0.033,none,2024-04-12\r\n24199,1895,AMER,grocery,online,158.56,3,0.041,loyalty,2024-10-09\r\n24200,2118,AMER,grocery,online,68.71,7,0.214,none,2024-06-25\r\n24201,2403,LATAM,grocery,retail,22.57,2,0.113,loyalty,2024-02-03\r\n24202,2177,AMER,fashion,online,56.78,3,0.089,coupon,2024-12-22\r\n24203,1695,LATAM,electronics,online,73.63,1,0.215,coupon,2024-05-25\r\n24204,2323,AMER,home,online,35.21,4,0.144,loyalty,2024-01-23\r\n24205,1198,AMER,electronics,online,28.91,6,0.228,none,2024-06-12\r\n24206,1875,EMEA,grocery,retail,53.25,5,0.141,none,2024-04-22\r\n24207,1871,APAC,toys,mobile,55.88,8,0.170,none,2024-05-04\r\n24208,2254,LATAM,fashion,retail,54.62,8,0.097,coupon,2024-08-06\r\n24209,2052,LATAM,home,mobile,72.27,8,0.045,coupon,2024-09-12\r\n24210,1157,LATAM,grocery,retail,187.09,7,0.177,coupon,2024-10-24\r\n24211,1237,LATAM,electronics,mobile,54.05,6,0.192,none,2024-04-11\r\n24212,2364,APAC,electronics,retail,57.90,8,0.182,none,2024-04-15\r\n24213,1083,AMER,toys,online,54.75,5,0.186,none,2024-02-21\r\n24214,2379,AMER,grocery,retail,119.53,8,0.101,none,2024-02-20\r\n24215,1086,AMER,home,retail,66.46,2,0.248,none,2024-11-27\r\n24216,1636,APAC,electronics,retail,36.75,7,0.076,coupon,2024-06-09\r\n24217,1794,AMER,electronics,mobile,61.78,1,0.084,none,2024-07-19\r\n24218,1594,LATAM,home,retail,63.13,5,0.067,bundle,2024-10-07\r\n24219,1158,LATAM,toys,online,56.41,4,0.109,bundle,2024-11-09\r\n24220,1361,LATAM,grocery,retail,61.93,7,0.089,loyalty,2024-07-04\r\n24221,1022,APAC,toys,online,80.69,6,0.22
2,coupon,2024-09-02\r\n24222,2033,LATAM,home,retail,57.02,2,0.024,loyalty,2024-02-10\r\n24223,1924,AMER,sports,retail,33.60,6,0.204,none,2024-04-04\r\n24224,1705,AMER,toys,online,185.17,5,0.071,bundle,2024-01-18\r\n24225,2166,AMER,electronics,retail,20.45,1,0.249,bundle,2024-11-10\r\n24226,1251,EMEA,fashion,online,81.15,8,0.079,none,2024-02-21\r\n24227,1915,LATAM,home,online,90.81,1,0.148,none,2024-03-11\r\n24228,1767,AMER,grocery,retail,31.81,2,0.080,none,2024-04-03\r\n24229,2072,AMER,fashion,online,26.83,7,0.024,loyalty,2024-09-24\r\n24230,2368,AMER,sports,online,37.63,6,0.224,none,2024-06-21\r\n24231,1633,EMEA,home,retail,50.68,6,0.000,coupon,2024-07-17\r\n24232,2216,AMER,fashion,mobile,65.73,6,0.217,none,2024-07-05\r\n24233,2046,APAC,electronics,online,56.85,7,0.135,coupon,2024-07-12\r\n24234,1095,APAC,electronics,online,50.95,1,0.116,coupon,2024-11-16\r\n24235,1489,AMER,grocery,retail,53.05,7,0.039,none,2024-06-27\r\n24236,1238,AMER,grocery,retail,100.27,2,0.102,none,2024-02-04\r\n24237,1104,APAC,fashion,online,22.16,4,0.056,none,2024-01-18\r\n24238,1835,AMER,grocery,retail,70.64,7,0.240,coupon,2024-10-13\r\n24239,1301,AMER,electronics,mobile,38.82,5,0.213,coupon,2024-08-25\r\n24240,1961,EMEA,toys,mobile,38.18,7,0.011,none,2024-09-12\r\n24241,1167,EMEA,grocery,online,29.43,1,0.043,bundle,2024-11-18\r\n24242,1707,APAC,electronics,retail,95.20,2,0.182,none,2024-09-14\r\n24243,1099,LATAM,fashion,online,26.45,6,0.237,none,2024-12-19\r\n24244,2260,EMEA,toys,mobile,17.10,6,0.220,loyalty,2024-08-04\r\n24245,2274,APAC,toys,mobile,64.93,5,0.152,coupon,2024-07-14\r\n24246,1074,LATAM,electronics,online,43.70,1,0.036,bundle,2024-09-28\r\n24247,1158,LATAM,fashion,mobile,73.69,5,0.107,coupon,2024-02-02\r\n24248,1148,AMER,home,online,56.56,2,0.242,none,2024-09-12\r\n24249,1516,EMEA,sports,online,73.91,4,0.003,none,2024-09-16\r\n24250,1319,EMEA,grocery,online,32.73,3,0.059,none,2024-01-01\r\n24251,2112,LATAM,toys,online,63.32,6,0.211,loyalty,2024-01-06\r\n24252,2380,AMER,groce
ry,online,57.55,5,0.086,loyalty,2024-04-10\r\n24253,1292,LATAM,sports,partner,62.00,7,0.072,none,2024-09-10\r\n24254,2396,AMER,home,mobile,21.55,6,0.196,none,2024-03-01\r\n24255,1768,AMER,electronics,online,34.17,2,0.235,loyalty,2024-07-01\r\n24256,1460,LATAM,sports,online,51.44,3,0.084,none,2024-11-02\r\n24257,2206,AMER,sports,retail,78.77,2,0.012,none,2024-01-12\r\n24258,1752,APAC,electronics,retail,44.73,4,0.090,none,2024-05-07\r\n24259,2499,LATAM,home,retail,38.74,1,0.188,none,2024-03-01\r\n24260,1347,APAC,grocery,partner,69.02,2,0.078,none,2024-05-21\r\n24261,1638,EMEA,electronics,retail,72.49,4,0.127,coupon,2024-10-15\r\n24262,1137,APAC,fashion,retail,38.00,6,0.015,none,2024-08-12\r\n24263,1045,LATAM,grocery,online,62.63,1,0.143,none,2024-08-03\r\n24264,1980,LATAM,fashion,retail,33.31,7,0.227,bundle,2024-05-15\r\n24265,1228,APAC,grocery,retail,74.98,8,0.049,coupon,2024-06-14\r\n24266,1438,APAC,electronics,retail,61.96,2,0.097,bundle,2024-04-20\r\n24267,1085,EMEA,grocery,retail,49.29,1,0.167,none,2024-03-12\r\n24268,1018,APAC,electronics,online,149.80,6,0.238,coupon,2024-04-22\r\n24269,1363,EMEA,sports,online,71.44,5,0.101,none,2024-09-17\r\n24270,1087,AMER,electronics,online,31.49,8,0.168,bundle,2024-07-09\r\n24271,1901,AMER,fashion,online,73.84,8,0.167,none,2024-04-16\r\n24272,2152,EMEA,grocery,retail,32.25,4,0.244,none,2024-09-05\r\n24273,2277,EMEA,electronics,online,38.28,7,0.244,none,2024-09-14\r\n24274,1320,EMEA,fashion,retail,46.30,7,0.025,loyalty,2024-01-04\r\n24275,2321,APAC,fashion,retail,47.09,2,0.239,none,2024-09-26\r\n24276,1599,APAC,sports,mobile,54.81,7,0.186,loyalty,2024-07-06\r\n24277,1791,LATAM,toys,online,98.69,2,0.191,none,2024-03-27\r\n24278,1464,APAC,fashion,retail,50.80,5,0.075,bundle,2024-08-10\r\n24279,1843,EMEA,grocery,retail,69.29,2,0.078,none,2024-03-22\r\n24280,2043,EMEA,grocery,retail,60.71,6,0.077,none,2024-12-18\r\n24281,1440,AMER,electronics,online,89.49,8,0.107,none,2024-12-14\r\n24282,1416,EMEA,fashion,retail,38.80,5,0.057,non
e,2024-10-12\r\n24283,2161,LATAM,home,retail,126.32,4,0.122,none,2024-09-24\r\n24284,1617,AMER,home,online,46.83,8,0.099,coupon,2024-06-15\r\n24285,1112,APAC,electronics,online,154.38,2,0.242,none,2024-06-26\r\n24286,2000,APAC,fashion,online,40.62,5,0.005,coupon,2024-10-28\r\n24287,1964,EMEA,sports,online,55.59,7,0.166,none,2024-02-02\r\n24288,1028,EMEA,fashion,retail,101.15,1,0.214,coupon,2024-03-14\r\n24289,2052,LATAM,fashion,online,54.54,8,0.245,loyalty,2024-09-06\r\n24290,2204,AMER,grocery,retail,67.59,5,0.243,none,2024-05-04\r\n24291,1879,EMEA,toys,online,24.00,5,0.019,coupon,2024-08-03\r\n24292,1652,APAC,sports,retail,112.56,2,0.152,none,2024-04-12\r\n24293,1257,APAC,home,mobile,98.53,2,0.090,none,2024-05-08\r\n24294,1853,APAC,sports,online,16.22,5,0.148,none,2024-08-21\r\n24295,2032,AMER,sports,partner,61.29,6,0.020,none,2024-07-14\r\n24296,2356,LATAM,grocery,online,43.63,3,0.212,bundle,2024-03-19\r\n24297,1754,EMEA,grocery,online,44.73,3,0.220,bundle,2024-04-12\r\n24298,1056,LATAM,toys,retail,75.79,3,0.178,none,2024-12-06\r\n24299,2039,EMEA,grocery,online,52.13,6,0.218,none,2024-08-22\r\n24300,2215,LATAM,electronics,retail,84.19,4,0.003,none,2024-08-16\r\n24301,1513,APAC,grocery,online,69.55,3,0.130,bundle,2024-02-10\r\n24302,1567,AMER,electronics,online,136.87,8,0.208,none,2024-04-12\r\n24303,2422,APAC,fashion,online,31.51,6,0.082,bundle,2024-11-08\r\n24304,1034,EMEA,fashion,retail,37.93,5,0.024,none,2024-06-18\r\n24305,1010,EMEA,fashion,partner,37.28,2,0.242,none,2024-07-17\r\n24306,2279,LATAM,home,online,32.86,7,0.053,coupon,2024-04-06\r\n24307,1302,LATAM,electronics,online,142.52,8,0.000,none,2024-05-15\r\n24308,2258,AMER,grocery,online,78.46,8,0.147,none,2024-02-21\r\n24309,2168,EMEA,toys,online,62.57,5,0.163,none,2024-10-18\r\n24310,1867,AMER,grocery,mobile,68.24,1,0.189,none,2024-08-23\r\n24311,2045,LATAM,fashion,partner,28.08,3,0.234,none,2024-06-24\r\n24312,1824,LATAM,grocery,retail,139.04,6,0.032,none,2024-07-14\r\n24313,1718,EMEA,home,retail,47.72
,5,0.030,none,2024-04-18\r\n24314,1705,AMER,electronics,mobile,39.81,3,0.123,coupon,2024-03-23\r\n24315,2167,APAC,grocery,retail,70.20,6,0.025,none,2024-11-02\r\n24316,1213,EMEA,home,online,30.30,6,0.239,none,2024-06-01\r\n24317,1966,APAC,grocery,mobile,102.78,7,0.127,none,2024-01-20\r\n24318,2152,EMEA,sports,online,42.46,2,0.170,bundle,2024-05-10\r\n24319,2428,LATAM,grocery,retail,37.11,7,0.117,none,2024-11-17\r\n24320,1311,APAC,fashion,online,74.56,3,0.061,loyalty,2024-01-17\r\n24321,1139,EMEA,grocery,retail,77.93,4,0.003,coupon,2024-06-15\r\n24322,2022,LATAM,grocery,online,93.04,7,0.117,coupon,2024-01-25\r\n24323,2004,LATAM,fashion,retail,62.02,5,0.146,none,2024-05-26\r\n24324,1666,LATAM,grocery,online,72.08,1,0.126,coupon,2024-07-07\r\n24325,1236,AMER,toys,online,35.91,1,0.067,none,2024-03-01\r\n24326,2023,LATAM,toys,online,36.25,1,0.010,none,2024-07-14\r\n24327,2484,APAC,home,retail,50.69,5,0.093,loyalty,2024-09-08\r\n24328,1991,APAC,grocery,partner,67.14,8,0.123,coupon,2024-12-14\r\n24329,2203,APAC,fashion,online,130.04,2,0.247,coupon,2024-07-13\r\n24330,2451,APAC,electronics,online,40.93,6,0.188,none,2024-09-21\r\n24331,1041,APAC,home,retail,23.98,8,0.019,loyalty,2024-04-06\r\n24332,2344,LATAM,electronics,online,74.66,2,0.012,none,2024-07-15\r\n24333,1201,LATAM,toys,online,33.75,6,0.024,bundle,2024-11-18\r\n24334,1370,APAC,grocery,online,53.22,6,0.124,coupon,2024-12-05\r\n24335,1701,LATAM,toys,retail,37.45,6,0.032,bundle,2024-10-17\r\n24336,1966,APAC,toys,online,33.73,2,0.121,none,2024-10-09\r\n24337,1173,LATAM,grocery,online,37.91,6,0.145,none,2024-10-08\r\n24338,1262,APAC,home,online,43.29,6,0.195,coupon,2024-02-05\r\n24339,2043,EMEA,sports,online,79.33,7,0.065,bundle,2024-09-21\r\n24340,2230,LATAM,home,retail,56.62,6,0.223,coupon,2024-04-06\r\n24341,2095,EMEA,toys,retail,37.55,8,0.004,none,2024-05-13\r\n24342,1174,APAC,sports,mobile,242.56,4,0.087,coupon,2024-05-21\r\n24343,1020,APAC,electronics,online,124.24,6,0.022,none,2024-10-15\r\n24344,1862,LATAM,gro
cery,online,34.01,7,0.204,none,2024-09-28\r\n24345,1261,APAC,home,retail,82.77,7,0.038,none,2024-03-15\r\n24346,2001,EMEA,electronics,retail,31.31,8,0.224,coupon,2024-02-12\r\n24347,2153,APAC,fashion,mobile,66.94,5,0.153,coupon,2024-02-15\r\n24348,1558,EMEA,fashion,online,75.68,5,0.025,coupon,2024-08-17\r\n24349,1334,APAC,grocery,retail,116.11,2,0.126,none,2024-12-28\r\n24350,1784,EMEA,electronics,online,51.48,5,0.218,none,2024-09-22\r\n24351,1656,LATAM,grocery,retail,90.75,3,0.222,none,2024-12-19\r\n24352,1846,APAC,electronics,retail,63.14,8,0.085,loyalty,2024-12-11\r\n24353,2078,APAC,fashion,online,43.26,5,0.242,coupon,2024-07-08\r\n24354,1591,APAC,grocery,retail,67.82,3,0.051,bundle,2024-05-03\r\n24355,2195,APAC,sports,retail,57.15,3,0.094,none,2024-03-20\r\n24356,2066,APAC,home,online,47.14,5,0.119,none,2024-03-03\r\n24357,1264,APAC,electronics,retail,62.59,3,0.219,none,2024-11-02\r\n24358,1666,LATAM,toys,retail,97.52,7,0.108,none,2024-01-25\r\n24359,1554,AMER,toys,online,70.39,7,0.220,none,2024-09-08\r\n24360,1108,EMEA,grocery,online,46.64,3,0.150,none,2024-07-14\r\n24361,1266,AMER,grocery,online,59.23,8,0.069,none,2024-04-18\r\n24362,1894,APAC,sports,retail,57.51,4,0.158,bundle,2024-01-22\r\n24363,1111,APAC,grocery,online,23.53,5,0.071,none,2024-12-22\r\n24364,1465,AMER,grocery,retail,36.65,6,0.153,bundle,2024-04-23\r\n24365,1556,AMER,electronics,online,64.72,5,0.035,none,2024-11-13\r\n24366,1235,EMEA,fashion,online,92.76,5,0.243,coupon,2024-08-19\r\n24367,1344,EMEA,grocery,online,59.61,8,0.174,bundle,2024-09-20\r\n24368,2016,LATAM,fashion,online,72.07,4,0.086,none,2024-04-23\r\n24369,1960,EMEA,home,retail,59.95,8,0.099,coupon,2024-05-08\r\n24370,2268,EMEA,home,online,70.49,5,0.242,none,2024-02-27\r\n24371,1483,EMEA,home,online,35.48,5,0.054,none,2024-07-09\r\n24372,2136,AMER,sports,online,99.57,3,0.076,bundle,2024-11-10\r\n24373,1408,AMER,sports,mobile,71.84,2,0.072,none,2024-11-01\r\n24374,1446,AMER,sports,online,39.54,5,0.133,none,2024-05-16\r\n24375,2364,A
PAC,fashion,retail,38.56,4,0.194,none,2024-12-20\r\n24376,2472,AMER,electronics,online,74.83,3,0.241,bundle,2024-05-19\r\n24377,2322,AMER,sports,mobile,51.61,2,0.204,none,2024-11-19\r\n24378,1664,LATAM,home,retail,88.02,6,0.072,none,2024-03-28\r\n24379,2434,APAC,grocery,online,74.17,2,0.137,none,2024-06-20\r\n24380,2394,EMEA,electronics,online,81.05,5,0.114,none,2024-01-06\r\n24381,1112,APAC,fashion,online,55.94,8,0.080,none,2024-12-07\r\n24382,1280,LATAM,electronics,retail,83.29,3,0.195,bundle,2024-05-24\r\n24383,1527,AMER,home,retail,52.77,1,0.000,none,2024-03-11\r\n24384,2464,LATAM,home,retail,56.08,4,0.219,coupon,2024-03-12\r\n24385,1327,APAC,fashion,online,54.94,5,0.244,loyalty,2024-11-18\r\n24386,1095,APAC,fashion,mobile,60.59,3,0.154,none,2024-09-10\r\n24387,1288,LATAM,toys,retail,128.30,7,0.216,none,2024-04-01\r\n24388,1292,LATAM,grocery,online,62.12,3,0.183,none,2024-01-04\r\n24389,2149,EMEA,home,online,44.40,8,0.204,coupon,2024-04-10\r\n24390,1134,APAC,grocery,online,73.31,3,0.132,loyalty,2024-05-18\r\n24391,2008,APAC,grocery,mobile,57.55,7,0.049,coupon,2024-09-10\r\n24392,1747,EMEA,grocery,retail,29.28,4,0.175,none,2024-06-16\r\n24393,1277,AMER,electronics,online,71.27,3,0.078,none,2024-08-17\r\n24394,1439,LATAM,home,mobile,50.73,4,0.233,none,2024-03-19\r\n24395,2106,LATAM,toys,online,56.47,3,0.015,none,2024-08-02\r\n24396,2170,EMEA,electronics,mobile,39.60,2,0.103,loyalty,2024-12-19\r\n24397,1555,AMER,grocery,online,60.62,1,0.200,none,2024-06-23\r\n24398,1432,APAC,grocery,online,48.22,1,0.142,none,2024-08-21\r\n24399,1557,LATAM,electronics,retail,134.35,8,0.154,loyalty,2024-03-03\r\n24400,2217,LATAM,sports,retail,116.81,4,0.025,none,2024-09-18\r\n24401,1830,EMEA,fashion,online,56.51,8,0.189,coupon,2024-09-05\r\n24402,2082,APAC,fashion,online,41.34,7,0.013,none,2024-02-01\r\n24403,2201,AMER,electronics,retail,59.06,1,0.213,loyalty,2024-02-04\r\n24404,1281,AMER,grocery,online,39.45,6,0.063,bundle,2024-07-23\r\n24405,1527,AMER,sports,online,27.12,2,0.189,co
upon,2024-04-05\r\n24406,1818,AMER,sports,retail,31.34,1,0.188,none,2024-01-21\r\n24407,2124,AMER,electronics,mobile,32.13,4,0.048,coupon,2024-05-28\r\n24408,1966,APAC,sports,retail,79.77,8,0.224,none,2024-11-25\r\n24409,2210,APAC,grocery,online,41.76,6,0.160,coupon,2024-05-03\r\n24410,2469,LATAM,electronics,mobile,76.67,7,0.140,none,2024-09-25\r\n24411,1615,LATAM,grocery,online,126.21,4,0.071,coupon,2024-08-27\r\n24412,1226,AMER,home,online,53.41,7,0.017,none,2024-10-22\r\n24413,1597,APAC,electronics,retail,171.35,4,0.089,none,2024-10-16\r\n24414,2206,AMER,grocery,retail,43.81,4,0.183,none,2024-06-25\r\n24415,2497,AMER,electronics,partner,54.54,3,0.163,none,2024-06-11\r\n24416,2038,LATAM,sports,online,54.08,6,0.001,none,2024-02-15\r\n24417,2261,EMEA,electronics,online,70.08,2,0.027,loyalty,2024-10-15\r\n24418,2112,LATAM,home,retail,142.55,3,0.065,none,2024-10-06\r\n24419,1041,APAC,toys,retail,20.57,2,0.022,loyalty,2024-07-18\r\n24420,2391,EMEA,fashion,online,42.65,6,0.083,none,2024-03-26\r\n24421,1991,APAC,sports,online,55.80,6,0.007,none,2024-05-10\r\n24422,2117,EMEA,home,online,161.46,1,0.233,coupon,2024-01-06\r\n24423,1836,LATAM,toys,mobile,40.72,2,0.004,coupon,2024-12-16\r\n24424,1665,AMER,toys,online,87.02,5,0.064,bundle,2024-08-03\r\n24425,2019,AMER,fashion,online,111.64,7,0.146,loyalty,2024-05-07\r\n24426,1623,AMER,home,online,30.96,4,0.134,bundle,2024-08-24\r\n24427,1365,LATAM,fashion,online,36.30,8,0.178,coupon,2024-02-02\r\n24428,1708,LATAM,electronics,online,66.43,6,0.049,none,2024-06-09\r\n24429,1872,LATAM,toys,online,45.47,8,0.229,none,2024-03-23\r\n24430,1029,EMEA,grocery,retail,29.82,2,0.240,none,2024-08-23\r\n24431,1174,APAC,sports,mobile,29.61,7,0.242,none,2024-12-20\r\n24432,1741,AMER,home,online,72.81,5,0.010,loyalty,2024-01-15\r\n24433,1970,LATAM,electronics,retail,58.58,5,0.092,none,2024-12-26\r\n24434,2271,LATAM,grocery,online,53.22,4,0.067,none,2024-03-04\r\n24435,1868,AMER,grocery,online,52.03,6,0.218,none,2024-08-28\r\n24436,1967,EMEA,toys,
online,45.05,4,0.220,none,2024-07-27\r\n24437,1239,APAC,sports,partner,55.99,8,0.118,coupon,2024-12-10\r\n24438,1762,LATAM,home,retail,142.94,3,0.190,bundle,2024-11-26\r\n24439,1694,APAC,grocery,retail,69.54,2,0.021,coupon,2024-02-23\r\n24440,1903,LATAM,sports,online,37.48,2,0.245,loyalty,2024-07-07\r\n24441,2437,LATAM,grocery,retail,96.87,8,0.240,bundle,2024-07-23\r\n24442,1051,EMEA,electronics,online,95.11,7,0.025,none,2024-04-09\r\n24443,1305,EMEA,fashion,retail,29.33,7,0.081,coupon,2024-08-18\r\n24444,1777,AMER,grocery,online,69.22,7,0.120,coupon,2024-04-18\r\n24445,1785,EMEA,electronics,mobile,20.05,6,0.059,none,2024-10-17\r\n24446,2492,LATAM,fashion,online,23.69,8,0.122,none,2024-09-23\r\n24447,2132,LATAM,fashion,online,97.52,5,0.158,loyalty,2024-06-25\r\n24448,1065,AMER,sports,retail,41.42,6,0.067,loyalty,2024-09-06\r\n24449,2208,AMER,grocery,online,57.18,4,0.076,bundle,2024-10-05\r\n24450,1588,LATAM,toys,online,83.50,5,0.198,none,2024-09-16\r\n24451,2274,APAC,grocery,online,92.18,3,0.022,bundle,2024-08-13\r\n24452,2025,EMEA,home,mobile,53.05,6,0.162,coupon,2024-12-05\r\n24453,1398,APAC,home,retail,81.42,1,0.011,none,2024-02-09\r\n24454,1478,EMEA,fashion,retail,59.33,3,0.056,coupon,2024-12-16\r\n24455,1053,AMER,fashion,partner,57.22,2,0.040,none,2024-08-10\r\n24456,2391,EMEA,electronics,retail,83.72,6,0.146,loyalty,2024-11-02\r\n24457,1710,APAC,home,retail,67.50,5,0.197,none,2024-01-16\r\n24458,1709,EMEA,fashion,online,26.52,1,0.119,none,2024-04-06\r\n24459,1821,LATAM,electronics,retail,55.45,6,0.180,none,2024-07-22\r\n24460,2215,LATAM,grocery,online,38.09,3,0.047,none,2024-11-25\r\n24461,1508,LATAM,electronics,online,37.62,7,0.071,loyalty,2024-10-14\r\n24462,2030,EMEA,grocery,retail,95.84,2,0.145,bundle,2024-10-20\r\n24463,2302,APAC,home,online,65.19,6,0.157,none,2024-01-13\r\n24464,1861,AMER,grocery,retail,85.34,4,0.011,none,2024-07-02\r\n24465,2182,AMER,home,mobile,85.94,1,0.075,none,2024-07-04\r\n24466,2105,APAC,electronics,mobile,67.67,7,0.018,none,2024-
02-25\r\n24467,2286,AMER,electronics,online,50.64,7,0.141,none,2024-10-03\r\n24468,1774,EMEA,home,online,35.09,7,0.071,coupon,2024-07-27\r\n24469,2246,AMER,grocery,partner,31.09,7,0.219,coupon,2024-05-21\r\n24470,1430,EMEA,grocery,online,45.17,1,0.179,loyalty,2024-10-22\r\n24471,2483,LATAM,electronics,retail,116.56,3,0.093,none,2024-12-19\r\n24472,2125,LATAM,home,online,58.10,2,0.164,none,2024-02-01\r\n24473,1395,APAC,grocery,retail,61.99,4,0.027,coupon,2024-03-02\r\n24474,1388,AMER,grocery,online,47.73,2,0.212,bundle,2024-10-27\r\n24475,1942,APAC,grocery,online,61.41,3,0.201,coupon,2024-08-25\r\n24476,1208,AMER,home,mobile,57.12,7,0.077,none,2024-01-20\r\n24477,1487,AMER,toys,online,92.90,3,0.061,none,2024-05-19\r\n24478,1240,EMEA,toys,online,48.75,4,0.144,bundle,2024-11-27\r\n24479,1993,APAC,grocery,online,35.45,6,0.220,none,2024-08-04\r\n24480,1844,APAC,fashion,online,98.47,2,0.158,none,2024-07-22\r\n24481,1991,APAC,electronics,mobile,46.85,6,0.106,coupon,2024-09-21\r\n24482,1999,EMEA,grocery,mobile,36.96,1,0.033,bundle,2024-07-05\r\n24483,1447,LATAM,grocery,online,69.66,2,0.224,none,2024-10-09\r\n24484,1144,APAC,sports,online,42.49,5,0.040,loyalty,2024-12-09\r\n24485,2236,APAC,grocery,online,45.19,8,0.006,coupon,2024-06-07\r\n24486,2046,APAC,toys,online,38.16,4,0.130,none,2024-01-17\r\n24487,2416,LATAM,toys,retail,78.73,6,0.011,bundle,2024-10-27\r\n24488,2432,AMER,grocery,online,47.24,4,0.199,none,2024-09-23\r\n24489,1125,LATAM,fashion,partner,55.77,1,0.114,none,2024-10-23\r\n24490,2060,LATAM,home,online,83.01,2,0.081,coupon,2024-04-01\r\n24491,1305,EMEA,grocery,online,42.62,8,0.119,none,2024-12-09\r\n24492,1977,APAC,sports,partner,53.90,2,0.171,loyalty,2024-07-10\r\n24493,2487,LATAM,fashion,retail,45.62,2,0.169,none,2024-05-14\r\n24494,1447,LATAM,electronics,partner,54.49,8,0.053,none,2024-06-25\r\n24495,1963,AMER,sports,mobile,88.85,1,0.119,loyalty,2024-11-17\r\n24496,1837,LATAM,grocery,retail,45.85,8,0.154,none,2024-08-08\r\n24497,1350,LATAM,electronics,mobil
e,86.89,2,0.167,none,2024-11-25\r\n24498,1544,LATAM,fashion,online,19.21,2,0.215,none,2024-09-18\r\n24499,2412,LATAM,toys,online,63.74,5,0.114,bundle,2024-09-01\r\n24500,2232,EMEA,toys,online,112.00,6,0.115,bundle,2024-08-25\r\n24501,1808,APAC,electronics,mobile,34.66,7,0.179,none,2024-06-28\r\n24502,2151,APAC,home,retail,55.07,1,0.139,none,2024-01-24\r\n24503,1334,APAC,toys,retail,35.42,8,0.061,none,2024-04-03\r\n24504,1294,APAC,electronics,online,57.58,7,0.124,none,2024-10-17\r\n24505,2490,AMER,fashion,online,88.53,4,0.196,none,2024-10-17\r\n24506,2212,EMEA,grocery,retail,37.53,3,0.100,none,2024-07-07\r\n24507,1315,AMER,electronics,online,76.15,6,0.118,coupon,2024-09-23\r\n24508,2260,EMEA,sports,online,126.53,3,0.045,none,2024-06-24\r\n24509,1558,EMEA,home,online,41.83,5,0.243,coupon,2024-01-27\r\n24510,1267,EMEA,home,online,48.91,4,0.140,none,2024-01-23\r\n24511,2277,EMEA,sports,partner,21.69,8,0.089,bundle,2024-10-08\r\n24512,1412,AMER,fashion,online,53.82,2,0.064,none,2024-09-14\r\n24513,1806,APAC,grocery,online,58.36,5,0.090,coupon,2024-05-05\r\n24514,2454,LATAM,home,retail,36.32,4,0.041,none,2024-05-01\r\n24515,2042,LATAM,fashion,online,55.93,7,0.077,bundle,2024-03-12\r\n24516,2139,AMER,grocery,online,67.33,7,0.120,bundle,2024-08-04\r\n24517,1779,APAC,electronics,online,57.40,7,0.010,none,2024-02-09\r\n24518,1454,APAC,fashion,online,25.26,6,0.249,none,2024-03-22\r\n24519,1948,EMEA,grocery,mobile,86.95,5,0.236,coupon,2024-01-24\r\n24520,1812,EMEA,home,mobile,42.16,5,0.229,coupon,2024-07-27\r\n24521,1197,LATAM,fashion,retail,117.93,5,0.023,none,2024-03-08\r\n24522,1830,EMEA,home,online,22.56,4,0.163,none,2024-07-13\r\n24523,1783,AMER,electronics,mobile,86.54,8,0.085,none,2024-07-23\r\n24524,1612,LATAM,electronics,retail,102.27,1,0.037,bundle,2024-07-24\r\n24525,1348,AMER,sports,retail,27.48,6,0.198,none,2024-10-19\r\n24526,2266,LATAM,home,retail,95.27,6,0.103,loyalty,2024-06-18\r\n24527,1334,APAC,home,online,47.86,1,0.194,none,2024-06-06\r\n24528,1236,AMER,fash
ion,retail,76.85,4,0.138,none,2024-11-23\r\n24529,2403,LATAM,sports,retail,106.34,6,0.228,none,2024-11-22\r\n24530,1603,EMEA,electronics,online,36.20,2,0.127,bundle,2024-08-10\r\n24531,1472,AMER,grocery,online,93.85,8,0.158,none,2024-08-23\r\n24532,1325,APAC,home,retail,20.87,5,0.161,none,2024-06-12\r\n24533,1090,AMER,grocery,online,20.95,7,0.003,coupon,2024-07-07\r\n24534,1861,AMER,home,online,169.50,4,0.114,none,2024-10-12\r\n24535,1993,APAC,grocery,mobile,69.55,7,0.145,bundle,2024-04-06\r\n24536,1707,APAC,home,partner,69.23,3,0.218,coupon,2024-05-27\r\n24537,1287,AMER,home,partner,126.04,8,0.230,bundle,2024-03-13\r\n24538,1083,AMER,electronics,online,33.87,4,0.153,none,2024-12-15\r\n24539,1539,LATAM,sports,online,70.07,5,0.222,none,2024-02-28\r\n24540,1197,LATAM,sports,retail,40.79,2,0.021,loyalty,2024-03-09\r\n24541,1469,EMEA,grocery,retail,70.09,8,0.165,loyalty,2024-02-01\r\n24542,2136,AMER,home,retail,17.99,5,0.073,none,2024-11-24\r\n24543,1208,AMER,toys,mobile,89.83,3,0.185,loyalty,2024-12-20\r\n24544,2410,EMEA,grocery,online,44.37,4,0.122,none,2024-03-06\r\n24545,1873,EMEA,grocery,online,37.25,6,0.184,none,2024-09-03\r\n24546,1185,LATAM,grocery,mobile,108.15,8,0.091,none,2024-04-01\r\n24547,1502,APAC,electronics,retail,61.72,3,0.038,coupon,2024-11-09\r\n24548,2190,LATAM,sports,retail,32.85,6,0.067,none,2024-03-16\r\n24549,1815,APAC,electronics,online,121.05,5,0.173,bundle,2024-09-14\r\n24550,1241,APAC,grocery,mobile,51.92,7,0.189,coupon,2024-01-07\r\n24551,2056,LATAM,sports,online,31.62,5,0.072,none,2024-05-02\r\n24552,1570,AMER,home,online,35.96,7,0.146,none,2024-03-06\r\n24553,1091,EMEA,electronics,retail,36.14,7,0.144,none,2024-03-09\r\n24554,1970,LATAM,sports,retail,39.90,1,0.156,coupon,2024-10-09\r\n24555,1005,LATAM,electronics,retail,105.42,7,0.111,bundle,2024-04-04\r\n24556,2318,AMER,grocery,retail,70.85,7,0.249,none,2024-10-06\r\n24557,2337,AMER,grocery,retail,49.32,5,0.006,bundle,2024-11-25\r\n24558,2103,LATAM,fashion,online,63.31,4,0.135,none,2024-
04-25\r\n24559,1016,AMER,home,online,69.26,2,0.165,bundle,2024-08-19\r\n24560,1210,LATAM,grocery,online,48.49,5,0.247,none,2024-10-12\r\n24561,1002,EMEA,fashion,retail,24.07,6,0.003,coupon,2024-08-24\r\n24562,1356,LATAM,home,retail,64.17,7,0.081,none,2024-12-25\r\n24563,2204,AMER,home,retail,45.73,2,0.129,bundle,2024-07-07\r\n24564,1366,APAC,fashion,online,37.08,8,0.025,coupon,2024-08-15\r\n24565,1896,EMEA,home,retail,195.17,5,0.216,none,2024-04-23\r\n24566,2093,LATAM,toys,retail,45.20,6,0.071,none,2024-09-25\r\n24567,1413,LATAM,fashion,retail,117.85,2,0.139,bundle,2024-03-04\r\n24568,1560,AMER,home,online,85.81,7,0.149,none,2024-03-25\r\n24569,2104,EMEA,grocery,online,15.00,8,0.167,loyalty,2024-12-17\r\n24570,2033,LATAM,sports,online,50.04,8,0.238,none,2024-01-07\r\n24571,1264,APAC,toys,online,44.08,6,0.228,coupon,2024-08-01\r\n24572,1510,EMEA,home,online,72.80,4,0.181,loyalty,2024-04-20\r\n24573,1007,APAC,fashion,retail,37.98,6,0.037,coupon,2024-06-20\r\n24574,2098,AMER,grocery,partner,70.62,3,0.056,none,2024-05-04\r\n24575,1777,AMER,grocery,online,71.96,1,0.133,none,2024-05-18\r\n24576,2268,EMEA,fashion,retail,72.17,6,0.159,none,2024-07-16\r\n24577,1852,AMER,electronics,retail,48.22,7,0.103,coupon,2024-08-09\r\n24578,2491,APAC,fashion,retail,65.32,3,0.139,coupon,2024-11-24\r\n24579,1175,AMER,electronics,online,54.41,8,0.010,none,2024-02-16\r\n24580,1700,EMEA,electronics,retail,55.77,7,0.201,loyalty,2024-09-22\r\n24581,1271,EMEA,home,retail,84.63,2,0.053,bundle,2024-07-25\r\n24582,1599,APAC,home,retail,176.05,3,0.109,none,2024-02-25\r\n24583,1823,EMEA,grocery,online,83.23,1,0.115,loyalty,2024-03-19\r\n24584,1395,APAC,grocery,retail,167.27,2,0.162,none,2024-04-16\r\n24585,1420,APAC,grocery,online,50.19,7,0.070,none,2024-05-23\r\n24586,1049,AMER,home,retail,179.52,2,0.202,none,2024-10-08\r\n24587,1786,APAC,grocery,online,40.52,7,0.233,loyalty,2024-10-17\r\n24588,1457,EMEA,grocery,online,17.07,2,0.220,none,2024-06-13\r\n24589,1571,EMEA,home,online,158.15,2,0.076,none
,2024-04-20\r\n24590,1010,EMEA,grocery,online,54.03,6,0.052,none,2024-11-20\r\n24591,2487,LATAM,fashion,online,63.14,5,0.023,none,2024-04-02\r\n24592,1204,AMER,electronics,online,30.06,8,0.246,none,2024-12-16\r\n24593,1684,EMEA,home,online,36.66,5,0.038,none,2024-10-18\r\n24594,1921,LATAM,fashion,online,52.09,7,0.182,coupon,2024-08-24\r\n24595,2112,LATAM,fashion,retail,53.39,6,0.103,coupon,2024-09-05\r\n24596,1718,EMEA,sports,mobile,64.23,4,0.156,loyalty,2024-10-27\r\n24597,2048,LATAM,home,retail,111.54,2,0.183,none,2024-12-04\r\n24598,1589,AMER,electronics,online,18.08,7,0.083,bundle,2024-11-03\r\n24599,2036,APAC,fashion,mobile,28.48,2,0.178,none,2024-08-16\r\n24600,1231,AMER,grocery,retail,48.05,4,0.055,none,2024-10-13\r\n24601,1744,EMEA,grocery,retail,43.23,3,0.149,none,2024-10-12\r\n24602,2278,APAC,grocery,mobile,44.30,1,0.073,none,2024-05-07\r\n24603,1524,LATAM,grocery,online,36.08,4,0.231,loyalty,2024-12-16\r\n24604,2421,AMER,sports,online,49.31,8,0.036,coupon,2024-09-01\r\n24605,1207,APAC,grocery,online,43.45,4,0.113,none,2024-09-09\r\n24606,1082,EMEA,electronics,online,53.10,2,0.172,bundle,2024-12-19\r\n24607,1546,EMEA,electronics,mobile,63.79,6,0.198,none,2024-01-08\r\n24608,1152,LATAM,home,online,32.26,3,0.247,coupon,2024-05-20\r\n24609,2131,APAC,electronics,online,79.22,7,0.129,bundle,2024-01-19\r\n24610,1449,EMEA,sports,retail,29.57,5,0.219,none,2024-09-16\r\n24611,1638,EMEA,home,retail,35.73,8,0.172,none,2024-08-17\r\n24612,2311,LATAM,electronics,retail,51.01,6,0.037,bundle,2024-02-10\r\n24613,1957,AMER,toys,mobile,69.71,2,0.187,coupon,2024-02-20\r\n24614,2257,AMER,fashion,retail,20.42,2,0.236,bundle,2024-06-28\r\n24615,1742,AMER,grocery,retail,92.12,6,0.091,none,2024-12-05\r\n24616,1306,LATAM,fashion,online,158.22,1,0.214,none,2024-10-18\r\n24617,1317,EMEA,electronics,retail,76.53,2,0.018,loyalty,2024-08-10\r\n24618,1066,AMER,electronics,online,24.71,1,0.135,none,2024-08-23\r\n24619,1649,APAC,electronics,retail,35.36,3,0.099,none,2024-03-07\r\n24620,17
28,AMER,toys,online,65.51,1,0.118,none,2024-02-02\r\n24621,2189,LATAM,electronics,mobile,65.67,3,0.008,none,2024-01-01\r\n24622,1650,LATAM,home,online,62.17,1,0.110,loyalty,2024-04-12\r\n24623,1715,AMER,electronics,online,59.66,1,0.065,none,2024-12-10\r\n24624,2076,AMER,grocery,online,41.92,5,0.053,none,2024-12-07\r\n24625,2190,LATAM,sports,mobile,56.47,3,0.127,none,2024-05-25\r\n24626,1936,EMEA,home,retail,29.11,1,0.247,none,2024-02-26\r\n24627,2392,EMEA,fashion,mobile,49.62,5,0.052,coupon,2024-06-08\r\n24628,2243,APAC,electronics,partner,157.19,3,0.196,coupon,2024-07-18\r\n24629,2303,EMEA,fashion,retail,32.06,1,0.197,none,2024-08-27\r\n24630,2149,EMEA,toys,online,46.93,8,0.043,none,2024-04-21\r\n24631,1744,EMEA,sports,online,71.28,8,0.138,none,2024-11-24\r\n24632,2311,LATAM,fashion,retail,170.39,3,0.015,coupon,2024-03-14\r\n24633,1884,APAC,fashion,online,65.57,3,0.213,none,2024-11-14\r\n24634,2451,APAC,fashion,online,70.03,8,0.095,bundle,2024-07-01\r\n24635,1585,AMER,grocery,mobile,68.43,6,0.214,loyalty,2024-03-06\r\n24636,1900,APAC,grocery,retail,88.53,3,0.132,loyalty,2024-04-17\r\n24637,2467,AMER,grocery,mobile,74.11,3,0.166,none,2024-08-09\r\n24638,2052,LATAM,fashion,mobile,98.39,7,0.077,none,2024-09-21\r\n24639,1018,APAC,electronics,retail,14.85,7,0.045,loyalty,2024-04-21\r\n24640,2438,AMER,toys,retail,36.06,6,0.069,coupon,2024-04-08\r\n24641,2235,AMER,electronics,online,72.52,4,0.015,coupon,2024-08-09\r\n24642,2248,LATAM,electronics,retail,53.58,8,0.050,none,2024-03-20\r\n24643,2066,APAC,toys,retail,89.32,2,0.216,none,2024-02-24\r\n24644,1124,AMER,electronics,online,57.90,5,0.175,coupon,2024-01-09\r\n24645,1649,APAC,electronics,retail,41.91,8,0.034,none,2024-06-24\r\n24646,1670,EMEA,fashion,online,212.50,5,0.138,bundle,2024-11-09\r\n24647,2048,LATAM,sports,online,54.78,3,0.145,none,2024-08-17\r\n24648,2118,AMER,grocery,partner,102.23,5,0.163,coupon,2024-08-01\r\n24649,1487,AMER,fashion,online,39.59,3,0.022,none,2024-05-07\r\n24650,2171,EMEA,grocery,online,54.
33,7,0.043,none,2024-05-22\r\n24651,1678,LATAM,home,online,44.79,4,0.034,none,2024-10-01\r\n24652,1429,APAC,fashion,online,34.92,2,0.109,none,2024-02-10\r\n24653,2331,APAC,sports,retail,45.94,5,0.199,coupon,2024-02-10\r\n24654,1423,EMEA,home,retail,50.43,7,0.243,loyalty,2024-04-23\r\n24655,2035,LATAM,sports,online,52.03,1,0.095,none,2024-07-14\r\n24656,2255,AMER,home,retail,53.92,7,0.056,coupon,2024-08-20\r\n24657,1988,AMER,fashion,online,41.05,5,0.199,bundle,2024-10-13\r\n24658,1513,APAC,grocery,partner,81.83,4,0.177,none,2024-10-10\r\n24659,1909,APAC,grocery,online,53.09,3,0.062,none,2024-04-15\r\n24660,1614,EMEA,fashion,online,56.48,3,0.049,none,2024-05-23\r\n24661,1999,EMEA,toys,online,46.81,8,0.132,bundle,2024-01-04\r\n24662,1647,LATAM,grocery,retail,72.16,8,0.238,bundle,2024-10-23\r\n24663,2165,AMER,grocery,retail,73.31,5,0.172,none,2024-07-17\r\n24664,1565,AMER,home,mobile,68.21,2,0.248,none,2024-09-03\r\n24665,1984,LATAM,home,online,40.06,8,0.037,none,2024-12-24\r\n24666,1720,AMER,home,online,30.11,2,0.044,coupon,2024-02-18\r\n24667,1440,AMER,fashion,online,52.54,8,0.021,bundle,2024-06-01\r\n24668,2072,AMER,sports,online,32.26,4,0.069,coupon,2024-06-23\r\n24669,2092,AMER,toys,online,69.81,2,0.008,bundle,2024-07-12\r\n24670,2269,EMEA,toys,online,70.17,3,0.132,loyalty,2024-09-21\r\n24671,1546,EMEA,home,retail,50.89,4,0.113,none,2024-08-19\r\n24672,2330,EMEA,fashion,mobile,34.38,5,0.205,coupon,2024-12-21\r\n24673,2030,EMEA,electronics,online,17.61,2,0.022,bundle,2024-03-08\r\n24674,1235,EMEA,toys,online,24.73,3,0.124,loyalty,2024-03-05\r\n24675,2373,LATAM,grocery,online,53.20,3,0.020,none,2024-01-14\r\n24676,2194,APAC,electronics,retail,51.56,7,0.026,coupon,2024-04-10\r\n24677,1223,LATAM,home,retail,103.25,3,0.023,loyalty,2024-04-24\r\n24678,2437,LATAM,electronics,online,164.81,6,0.180,coupon,2024-12-02\r\n24679,1830,EMEA,home,retail,52.33,2,0.045,bundle,2024-07-08\r\n24680,1503,APAC,home,retail,129.16,3,0.106,none,2024-01-13\r\n24681,1033,APAC,fashion,online,8
1.10,6,0.086,coupon,2024-08-01\r\n24682,1564,APAC,fashion,mobile,82.91,3,0.025,none,2024-12-24\r\n24683,1573,AMER,sports,retail,47.36,1,0.099,none,2024-01-08\r\n24684,1530,APAC,home,online,59.77,3,0.119,none,2024-02-03\r\n24685,1228,APAC,home,online,93.20,4,0.088,none,2024-05-06\r\n24686,1791,LATAM,electronics,online,25.79,6,0.022,bundle,2024-05-20\r\n24687,1377,APAC,grocery,retail,33.08,2,0.171,none,2024-09-14\r\n24688,2097,AMER,fashion,retail,49.45,3,0.153,none,2024-07-20\r\n24689,2409,APAC,electronics,partner,18.07,2,0.215,bundle,2024-02-04\r\n24690,1342,LATAM,grocery,online,74.37,5,0.114,none,2024-12-24\r\n24691,2386,EMEA,sports,mobile,24.38,2,0.243,none,2024-07-14\r\n24692,2067,LATAM,sports,online,72.50,5,0.139,none,2024-01-17\r\n24693,1968,EMEA,electronics,online,41.12,4,0.059,none,2024-07-20\r\n24694,2486,APAC,toys,retail,156.21,6,0.017,bundle,2024-11-13\r\n24695,1359,LATAM,fashion,retail,31.76,8,0.146,bundle,2024-08-08\r\n24696,1592,LATAM,electronics,online,50.29,1,0.205,none,2024-08-01\r\n24697,2242,AMER,electronics,retail,65.71,5,0.057,coupon,2024-01-15\r\n24698,1018,APAC,home,online,61.19,8,0.177,none,2024-10-09\r\n24699,1040,LATAM,sports,retail,34.77,8,0.086,none,2024-07-11\r\n24700,2265,APAC,grocery,retail,100.57,5,0.069,none,2024-04-13\r\n24701,1903,LATAM,electronics,online,50.99,3,0.214,none,2024-07-19\r\n24702,2396,AMER,electronics,retail,26.56,1,0.119,none,2024-08-17\r\n24703,2027,EMEA,sports,online,42.24,5,0.040,none,2024-02-17\r\n24704,2055,AMER,toys,retail,36.50,4,0.121,none,2024-08-26\r\n24705,1381,LATAM,home,online,61.95,1,0.086,none,2024-12-24\r\n24706,1678,LATAM,sports,retail,64.98,6,0.002,none,2024-05-14\r\n24707,2105,APAC,home,online,35.38,1,0.182,coupon,2024-07-02\r\n24708,1970,LATAM,sports,online,73.69,5,0.159,none,2024-06-06\r\n24709,1352,AMER,grocery,retail,111.78,6,0.155,coupon,2024-12-06\r\n24710,1351,APAC,home,retail,39.51,7,0.081,bundle,2024-03-07\r\n24711,1910,LATAM,fashion,mobile,36.72,2,0.011,loyalty,2024-08-09\r\n24712,1869,AMER
,grocery,retail,53.01,3,0.241,none,2024-08-22\r\n24713,1642,EMEA,home,retail,160.23,6,0.096,coupon,2024-02-06\r\n24714,1598,EMEA,electronics,online,32.40,2,0.231,none,2024-04-23\r\n24715,1478,EMEA,grocery,online,61.90,5,0.009,none,2024-01-10\r\n24716,1848,EMEA,home,retail,40.73,6,0.013,bundle,2024-03-24\r\n24717,2422,APAC,fashion,online,48.47,2,0.007,none,2024-04-23\r\n24718,1350,LATAM,fashion,retail,65.06,8,0.004,bundle,2024-06-27\r\n24719,2218,EMEA,sports,retail,42.92,7,0.128,coupon,2024-10-18\r\n24720,2357,EMEA,fashion,online,84.01,8,0.109,none,2024-05-17\r\n24721,1845,AMER,grocery,online,72.23,8,0.136,bundle,2024-11-28\r\n24722,2314,EMEA,home,partner,74.14,6,0.123,none,2024-12-12\r\n24723,2272,EMEA,electronics,online,50.52,5,0.112,none,2024-04-12\r\n24724,1263,AMER,grocery,partner,71.02,2,0.009,none,2024-11-11\r\n24725,1195,AMER,toys,online,25.48,1,0.244,coupon,2024-06-27\r\n24726,1296,LATAM,home,online,63.49,4,0.007,bundle,2024-08-12\r\n24727,1760,LATAM,home,retail,42.32,6,0.040,none,2024-11-11\r\n24728,1565,AMER,sports,retail,31.58,1,0.053,bundle,2024-04-22\r\n24729,1978,AMER,sports,online,27.14,8,0.194,none,2024-03-21\r\n24730,1866,EMEA,grocery,retail,79.74,8,0.137,none,2024-03-26\r\n24731,2446,LATAM,grocery,online,88.59,3,0.205,bundle,2024-02-18\r\n24732,1933,EMEA,electronics,retail,62.34,6,0.019,none,2024-06-14\r\n24733,1300,EMEA,home,retail,29.19,2,0.102,none,2024-09-11\r\n24734,1359,LATAM,electronics,online,59.84,1,0.175,none,2024-09-11\r\n24735,1410,AMER,fashion,online,35.32,2,0.117,coupon,2024-01-14\r\n24736,1238,AMER,fashion,retail,92.94,5,0.050,none,2024-12-26\r\n24737,1493,APAC,toys,mobile,23.46,6,0.234,none,2024-01-07\r\n24738,2112,LATAM,electronics,online,54.85,3,0.244,none,2024-02-15\r\n24739,1074,LATAM,sports,retail,60.92,1,0.167,none,2024-07-23\r\n24740,1468,AMER,grocery,online,64.12,2,0.139,none,2024-01-14\r\n24741,1370,APAC,grocery,online,42.76,2,0.037,coupon,2024-01-09\r\n24742,1947,EMEA,toys,online,58.98,4,0.030,none,2024-05-17\r\n24743,1499
,EMEA,grocery,online,16.47,5,0.220,coupon,2024-07-14\r\n24744,2117,EMEA,electronics,retail,131.32,6,0.156,none,2024-01-03\r\n24745,1037,EMEA,home,retail,25.25,4,0.082,bundle,2024-02-26\r\n24746,2050,APAC,electronics,mobile,21.52,4,0.125,loyalty,2024-03-09\r\n24747,1940,APAC,sports,mobile,33.52,1,0.128,none,2024-03-06\r\n24748,1401,LATAM,electronics,online,34.88,2,0.029,none,2024-01-18\r\n24749,1504,AMER,grocery,mobile,94.44,3,0.202,bundle,2024-07-05\r\n24750,1502,APAC,electronics,online,55.72,5,0.081,none,2024-11-07\r\n24751,1341,EMEA,electronics,mobile,79.38,5,0.017,none,2024-01-04\r\n24752,2107,APAC,fashion,retail,68.48,2,0.100,none,2024-02-02\r\n24753,1130,LATAM,fashion,online,20.30,3,0.005,coupon,2024-10-14\r\n24754,1233,AMER,grocery,online,21.35,7,0.201,coupon,2024-12-26\r\n24755,2090,AMER,fashion,retail,44.67,7,0.092,coupon,2024-08-02\r\n24756,2301,EMEA,home,online,67.34,6,0.020,none,2024-08-17\r\n24757,2475,AMER,home,retail,57.63,1,0.009,loyalty,2024-07-09\r\n24758,1657,LATAM,electronics,mobile,53.54,7,0.145,none,2024-06-22\r\n24759,1261,APAC,fashion,retail,87.52,2,0.061,none,2024-11-07\r\n24760,1386,AMER,grocery,mobile,39.48,2,0.129,coupon,2024-04-06\r\n24761,1163,AMER,sports,retail,60.22,3,0.118,none,2024-09-13\r\n24762,1299,LATAM,electronics,mobile,100.24,4,0.249,loyalty,2024-02-15\r\n24763,2039,EMEA,grocery,online,40.88,6,0.167,coupon,2024-03-15\r\n24764,2154,APAC,fashion,mobile,46.56,2,0.014,bundle,2024-07-26\r\n24765,2007,LATAM,grocery,mobile,92.22,1,0.169,bundle,2024-09-10\r\n24766,1631,APAC,electronics,retail,96.87,5,0.125,none,2024-06-26\r\n24767,1285,EMEA,fashion,online,111.67,4,0.123,coupon,2024-08-15\r\n24768,1973,EMEA,grocery,mobile,45.16,2,0.060,none,2024-05-17\r\n24769,1295,EMEA,fashion,partner,30.88,7,0.071,none,2024-09-01\r\n24770,1165,AMER,fashion,retail,34.84,6,0.111,none,2024-07-03\r\n24771,1953,EMEA,grocery,mobile,138.99,4,0.141,none,2024-04-21\r\n24772,1859,AMER,grocery,retail,30.16,3,0.058,bundle,2024-09-12\r\n24773,1093,APAC,grocery,pa
rtner,52.03,8,0.070,bundle,2024-10-09\r\n24774,2028,APAC,toys,retail,59.74,8,0.059,none,2024-03-07\r\n24775,1348,AMER,toys,retail,43.24,1,0.193,none,2024-12-21\r\n24776,1586,LATAM,electronics,online,31.27,3,0.031,bundle,2024-11-18\r\n24777,1606,AMER,toys,retail,70.71,4,0.025,bundle,2024-03-08\r\n24778,1741,AMER,grocery,mobile,39.17,1,0.148,loyalty,2024-05-13\r\n24779,1024,APAC,electronics,mobile,39.14,8,0.090,loyalty,2024-01-09\r\n24780,2072,AMER,grocery,online,54.55,1,0.228,none,2024-10-09\r\n24781,2270,APAC,sports,retail,42.96,5,0.119,loyalty,2024-10-19\r\n24782,1457,EMEA,electronics,retail,68.12,8,0.091,none,2024-12-18\r\n24783,2291,EMEA,home,mobile,58.53,6,0.205,none,2024-09-08\r\n24784,1968,EMEA,fashion,online,44.97,4,0.164,coupon,2024-10-06\r\n24785,1930,AMER,home,online,171.38,4,0.190,none,2024-06-04\r\n24786,1067,APAC,electronics,online,131.15,2,0.156,bundle,2024-03-16\r\n24787,2273,APAC,grocery,online,48.35,2,0.121,none,2024-02-26\r\n24788,2315,LATAM,grocery,retail,42.46,5,0.123,none,2024-06-05\r\n24789,1004,LATAM,sports,mobile,27.05,6,0.048,none,2024-01-04\r\n24790,1024,APAC,home,retail,41.60,6,0.185,none,2024-12-06\r\n24791,1378,APAC,electronics,online,35.31,3,0.139,coupon,2024-08-08\r\n24792,2151,APAC,sports,online,58.60,2,0.104,none,2024-06-02\r\n24793,1686,LATAM,electronics,mobile,42.04,3,0.175,coupon,2024-11-14\r\n24794,1394,LATAM,grocery,online,31.62,1,0.097,coupon,2024-12-26\r\n24795,2163,EMEA,fashion,retail,116.55,5,0.059,none,2024-04-11\r\n24796,2452,LATAM,fashion,retail,34.24,5,0.070,none,2024-05-01\r\n24797,1872,LATAM,home,retail,59.69,1,0.020,none,2024-11-15\r\n24798,1677,EMEA,grocery,retail,33.19,4,0.118,none,2024-06-10\r\n24799,1799,EMEA,fashion,online,29.97,8,0.141,coupon,2024-11-22\r\n24800,2054,AMER,sports,retail,55.24,8,0.136,none,2024-12-14\r\n24801,1166,AMER,fashion,partner,63.55,3,0.009,none,2024-12-09\r\n24802,1231,AMER,electronics,retail,57.03,6,0.102,bundle,2024-09-05\r\n24803,2199,LATAM,fashion,online,60.57,3,0.101,coupon,2024-07-2
5\r\n24804,1011,APAC,toys,retail,75.56,4,0.121,none,2024-01-16\r\n24805,1850,APAC,grocery,retail,54.68,4,0.043,none,2024-07-06\r\n24806,2240,LATAM,fashion,retail,121.83,1,0.125,none,2024-08-27\r\n24807,1294,APAC,grocery,online,123.34,1,0.098,none,2024-08-11\r\n24808,2008,APAC,electronics,online,43.58,1,0.127,bundle,2024-12-23\r\n24809,1760,LATAM,electronics,online,34.61,5,0.200,loyalty,2024-01-20\r\n24810,2387,EMEA,home,online,35.90,6,0.207,none,2024-03-11\r\n24811,1025,EMEA,grocery,partner,41.25,7,0.018,none,2024-08-25\r\n24812,1089,LATAM,grocery,mobile,36.53,5,0.108,none,2024-09-11\r\n24813,1066,AMER,grocery,partner,46.41,6,0.087,bundle,2024-06-23\r\n24814,1912,APAC,grocery,mobile,25.90,3,0.242,none,2024-11-03\r\n24815,1831,APAC,fashion,online,52.81,5,0.203,none,2024-02-19\r\n24816,1606,AMER,sports,online,44.27,3,0.145,coupon,2024-07-14\r\n24817,1767,AMER,grocery,retail,145.59,5,0.006,none,2024-12-10\r\n24818,1030,EMEA,toys,online,49.44,1,0.015,bundle,2024-11-10\r\n24819,1087,AMER,grocery,retail,85.14,8,0.056,none,2024-09-07\r\n24820,1573,AMER,fashion,partner,45.81,6,0.161,bundle,2024-01-24\r\n24821,1049,AMER,fashion,retail,103.34,6,0.021,none,2024-08-16\r\n24822,1239,APAC,toys,partner,70.32,3,0.205,bundle,2024-01-25\r\n24823,1559,EMEA,toys,retail,77.13,4,0.094,coupon,2024-02-18\r\n24824,2211,APAC,grocery,online,45.55,6,0.242,none,2024-10-11\r\n24825,2139,AMER,grocery,retail,200.35,5,0.013,none,2024-08-27\r\n24826,2322,AMER,electronics,retail,78.05,8,0.172,bundle,2024-11-03\r\n24827,1216,APAC,home,mobile,85.44,4,0.087,coupon,2024-01-05\r\n24828,2265,APAC,sports,online,59.61,7,0.020,bundle,2024-02-20\r\n24829,2465,EMEA,home,mobile,109.06,3,0.205,none,2024-06-13\r\n24830,1905,APAC,sports,online,40.40,3,0.241,coupon,2024-04-05\r\n24831,1007,APAC,grocery,online,30.27,3,0.080,bundle,2024-01-02\r\n24832,2444,EMEA,electronics,online,51.87,8,0.142,none,2024-05-12\r\n24833,2487,LATAM,grocery,retail,56.82,5,0.034,none,2024-08-06\r\n24834,1023,APAC,toys,online,117.83,4,0.097
,none,2024-07-27\r\n24835,1986,LATAM,grocery,mobile,70.04,1,0.034,none,2024-07-19\r\n24836,1276,AMER,grocery,partner,68.63,1,0.155,none,2024-06-07\r\n24837,1155,EMEA,grocery,online,133.07,8,0.137,coupon,2024-02-12\r\n24838,1690,LATAM,fashion,retail,78.61,2,0.144,none,2024-04-16\r\n24839,1013,LATAM,sports,retail,62.40,7,0.037,none,2024-05-20\r\n24840,2489,LATAM,electronics,retail,51.07,4,0.029,none,2024-07-21\r\n24841,2060,LATAM,fashion,mobile,67.90,8,0.002,none,2024-11-28\r\n24842,1616,APAC,grocery,retail,81.44,1,0.171,coupon,2024-09-14\r\n24843,1648,APAC,grocery,online,20.23,1,0.055,none,2024-11-16\r\n24844,1075,AMER,home,online,13.07,7,0.176,none,2024-10-27\r\n24845,1652,APAC,grocery,online,74.63,7,0.084,none,2024-02-06\r\n24846,2284,EMEA,grocery,retail,73.01,1,0.023,coupon,2024-06-03\r\n24847,1527,AMER,electronics,retail,121.34,3,0.029,none,2024-05-15\r\n24848,1127,EMEA,fashion,online,72.33,2,0.193,coupon,2024-07-18\r\n24849,1015,AMER,electronics,retail,202.06,8,0.239,none,2024-02-20\r\n24850,1941,AMER,grocery,online,77.28,8,0.038,none,2024-11-22\r\n24851,1080,LATAM,toys,online,58.85,5,0.176,loyalty,2024-07-05\r\n24852,1516,EMEA,electronics,online,47.97,6,0.178,none,2024-07-17\r\n24853,2290,LATAM,sports,online,27.71,4,0.250,loyalty,2024-03-05\r\n24854,1323,EMEA,sports,retail,157.55,6,0.090,none,2024-05-03\r\n24855,1951,LATAM,grocery,mobile,54.18,8,0.025,none,2024-06-19\r\n24856,1135,APAC,electronics,online,88.00,7,0.069,loyalty,2024-06-01\r\n24857,2254,LATAM,grocery,retail,58.55,8,0.128,none,2024-12-21\r\n24858,1132,EMEA,fashion,online,36.13,8,0.250,none,2024-03-17\r\n24859,1941,AMER,toys,mobile,22.54,6,0.183,none,2024-12-10\r\n24860,1516,EMEA,grocery,retail,107.45,3,0.056,loyalty,2024-07-13\r\n24861,2292,EMEA,home,online,23.76,4,0.232,bundle,2024-08-03\r\n24862,1133,EMEA,home,mobile,57.21,6,0.203,none,2024-12-28\r\n24863,1388,AMER,fashion,online,45.09,2,0.052,none,2024-09-21\r\n24864,1916,AMER,fashion,partner,60.67,1,0.105,coupon,2024-07-16\r\n24865,1318,LATAM,t
oys,online,26.93,3,0.209,none,2024-01-13\r\n24866,2192,APAC,home,mobile,92.90,7,0.209,none,2024-06-28\r\n24867,2294,EMEA,home,online,52.49,2,0.005,none,2024-10-25\r\n24868,1622,LATAM,fashion,retail,95.70,1,0.136,none,2024-10-01\r\n24869,2228,EMEA,toys,retail,116.43,2,0.039,none,2024-02-27\r\n24870,1455,APAC,sports,mobile,93.67,4,0.025,none,2024-04-08\r\n24871,1370,APAC,fashion,retail,48.12,3,0.197,coupon,2024-07-27\r\n24872,1398,APAC,electronics,online,32.25,4,0.050,loyalty,2024-01-27\r\n24873,2378,LATAM,sports,online,101.74,5,0.217,bundle,2024-02-22\r\n24874,1568,AMER,home,online,101.18,4,0.236,none,2024-11-28\r\n24875,1473,LATAM,home,online,38.54,7,0.225,none,2024-10-19\r\n24876,1926,AMER,fashion,online,38.11,6,0.097,bundle,2024-12-28\r\n24877,1905,APAC,grocery,retail,94.46,5,0.022,none,2024-10-13\r\n24878,1196,APAC,fashion,online,50.54,7,0.016,none,2024-07-21\r\n24879,1451,EMEA,electronics,retail,45.11,3,0.007,none,2024-01-15\r\n24880,2044,APAC,sports,online,123.00,7,0.045,none,2024-11-08\r\n24881,1479,AMER,toys,retail,41.51,5,0.109,coupon,2024-11-20\r\n24882,1362,AMER,toys,online,101.36,2,0.238,coupon,2024-06-15\r\n24883,2450,EMEA,fashion,retail,127.07,4,0.033,none,2024-04-15\r\n24884,1974,EMEA,home,online,51.81,1,0.229,none,2024-11-06\r\n24885,1316,APAC,grocery,retail,81.68,3,0.063,coupon,2024-06-05\r\n24886,2200,LATAM,fashion,retail,60.88,2,0.207,none,2024-12-24\r\n24887,2047,AMER,grocery,retail,107.48,7,0.173,none,2024-09-21\r\n24888,1253,AMER,electronics,retail,121.84,6,0.243,bundle,2024-04-27\r\n24889,1277,AMER,grocery,online,105.02,7,0.227,coupon,2024-05-13\r\n24890,2370,EMEA,grocery,retail,36.84,8,0.041,bundle,2024-08-21\r\n24891,1676,LATAM,home,online,75.85,3,0.077,none,2024-05-27\r\n24892,2349,APAC,toys,online,95.95,6,0.153,none,2024-03-10\r\n24893,2151,APAC,grocery,online,32.05,6,0.104,coupon,2024-01-15\r\n24894,1252,APAC,toys,mobile,42.60,4,0.210,none,2024-08-25\r\n24895,1443,EMEA,sports,online,106.43,6,0.030,bundle,2024-11-20\r\n24896,2485,AMER,groce
ry,retail,124.23,6,0.133,none,2024-02-20\r\n24897,1284,APAC,electronics,retail,100.06,5,0.210,none,2024-04-09\r\n24898,1233,AMER,electronics,retail,87.30,6,0.204,bundle,2024-01-06\r\n24899,1744,EMEA,sports,online,51.79,2,0.128,bundle,2024-12-11\r\n24900,2132,LATAM,electronics,online,84.51,4,0.208,bundle,2024-06-23\r\n24901,1282,LATAM,electronics,partner,101.19,6,0.118,coupon,2024-09-15\r\n24902,1831,APAC,home,online,96.55,2,0.048,none,2024-05-09\r\n24903,1734,AMER,toys,online,35.76,4,0.168,none,2024-01-26\r\n24904,1443,EMEA,grocery,retail,30.68,4,0.176,loyalty,2024-09-09\r\n24905,2255,AMER,grocery,retail,90.14,8,0.180,none,2024-11-16\r\n24906,2166,AMER,grocery,online,80.12,4,0.041,coupon,2024-07-27\r\n24907,1962,APAC,home,mobile,27.04,4,0.133,none,2024-07-03\r\n24908,2165,AMER,electronics,retail,38.40,6,0.173,none,2024-01-03\r\n24909,1973,EMEA,electronics,online,80.98,8,0.049,loyalty,2024-05-24\r\n24910,2119,AMER,grocery,retail,41.39,1,0.072,none,2024-05-14\r\n24911,2171,EMEA,grocery,online,67.96,7,0.002,coupon,2024-06-11\r\n24912,1423,EMEA,home,online,100.26,5,0.062,none,2024-04-09\r\n24913,1934,EMEA,grocery,retail,119.97,5,0.104,loyalty,2024-09-12\r\n24914,1548,EMEA,electronics,mobile,171.48,1,0.041,loyalty,2024-09-10\r\n24915,1318,LATAM,electronics,retail,75.73,8,0.119,coupon,2024-09-02\r\n24916,1001,LATAM,electronics,online,55.58,4,0.107,coupon,2024-12-15\r\n24917,1270,LATAM,home,online,44.01,1,0.114,none,2024-06-10\r\n24918,1875,EMEA,grocery,mobile,74.39,6,0.071,none,2024-11-11\r\n24919,1426,AMER,home,online,65.77,5,0.238,none,2024-07-15\r\n24920,2454,LATAM,electronics,online,45.97,5,0.220,none,2024-11-14\r\n24921,1364,EMEA,electronics,online,40.52,5,0.037,none,2024-03-28\r\n24922,1949,AMER,toys,online,33.86,3,0.071,coupon,2024-06-09\r\n24923,1869,AMER,home,online,61.82,2,0.029,none,2024-10-23\r\n24924,1777,AMER,home,online,38.30,1,0.130,none,2024-07-03\r\n24925,1521,LATAM,toys,online,118.46,6,0.065,none,2024-03-16\r\n24926,1849,EMEA,home,online,91.04,4,0.054,n
one,2024-10-23\r\n24927,2174,LATAM,toys,mobile,74.86,1,0.017,coupon,2024-04-21\r\n24928,2451,APAC,grocery,online,44.66,6,0.201,loyalty,2024-03-15\r\n24929,1127,EMEA,sports,retail,39.50,4,0.078,coupon,2024-12-28\r\n24930,1519,APAC,toys,online,43.01,1,0.119,none,2024-03-10\r\n24931,1664,LATAM,fashion,retail,157.57,8,0.059,bundle,2024-03-01\r\n24932,1910,LATAM,sports,retail,69.12,7,0.176,loyalty,2024-02-25\r\n24933,2242,AMER,home,mobile,43.04,4,0.186,none,2024-11-26\r\n24934,1884,APAC,sports,retail,56.11,7,0.037,none,2024-09-07\r\n24935,2326,LATAM,home,mobile,90.05,1,0.226,none,2024-10-18\r\n24936,1842,LATAM,home,online,62.24,8,0.011,none,2024-05-13\r\n24937,1437,EMEA,electronics,online,62.94,8,0.233,coupon,2024-09-18\r\n24938,2161,LATAM,electronics,online,63.47,3,0.023,loyalty,2024-08-02\r\n24939,1086,AMER,grocery,online,36.84,2,0.054,bundle,2024-10-23\r\n24940,1443,EMEA,home,retail,54.34,2,0.054,none,2024-06-12\r\n24941,2054,AMER,grocery,retail,32.88,8,0.172,none,2024-02-09\r\n24942,1433,EMEA,toys,retail,20.02,1,0.112,coupon,2024-12-24\r\n24943,2458,EMEA,electronics,retail,43.48,5,0.234,none,2024-09-03\r\n24944,1362,AMER,grocery,online,106.90,2,0.225,coupon,2024-10-11\r\n24945,1326,AMER,home,online,64.25,4,0.220,loyalty,2024-06-26\r\n24946,1497,EMEA,grocery,retail,35.13,4,0.132,bundle,2024-06-26\r\n24947,2355,EMEA,toys,retail,44.61,2,0.209,loyalty,2024-05-16\r\n24948,1081,AMER,grocery,online,111.30,8,0.238,none,2024-07-03\r\n24949,2105,APAC,sports,retail,34.86,2,0.058,bundle,2024-10-13\r\n24950,2410,EMEA,electronics,online,106.92,4,0.112,none,2024-10-28\r\n24951,1177,LATAM,grocery,retail,91.26,7,0.141,none,2024-07-28\r\n24952,1697,APAC,toys,online,38.28,2,0.141,bundle,2024-02-08\r\n24953,1045,LATAM,fashion,online,55.59,5,0.134,none,2024-10-13\r\n24954,2456,APAC,grocery,online,79.67,7,0.143,none,2024-09-07\r\n24955,2300,EMEA,home,online,82.36,4,0.214,none,2024-01-28\r\n24956,1051,EMEA,home,mobile,67.97,8,0.055,none,2024-10-08\r\n24957,1795,EMEA,sports,retail,45.57,4,0
.088,bundle,2024-06-18\r\n24958,1609,LATAM,sports,mobile,49.63,5,0.014,none,2024-01-18\r\n24959,2379,AMER,home,retail,37.82,6,0.145,none,2024-05-22\r\n24960,2306,AMER,sports,retail,63.27,4,0.131,none,2024-10-25\r\n24961,1894,APAC,sports,online,147.16,5,0.014,none,2024-05-13\r\n24962,2356,LATAM,home,partner,34.62,2,0.154,none,2024-08-01\r\n24963,2348,EMEA,grocery,retail,38.45,8,0.055,coupon,2024-09-11\r\n24964,1267,EMEA,fashion,online,34.35,7,0.140,coupon,2024-11-28\r\n24965,1664,LATAM,fashion,online,105.97,7,0.058,coupon,2024-09-23\r\n24966,1287,AMER,toys,partner,32.85,8,0.069,none,2024-10-12\r\n24967,1689,LATAM,fashion,mobile,58.65,8,0.027,none,2024-08-17\r\n24968,2335,EMEA,grocery,mobile,77.14,5,0.052,loyalty,2024-02-04\r\n24969,2463,AMER,sports,mobile,73.49,8,0.193,bundle,2024-12-23\r\n24970,2000,APAC,electronics,mobile,41.26,6,0.182,none,2024-02-22\r\n24971,2347,AMER,home,mobile,37.03,1,0.201,bundle,2024-01-04\r\n24972,2398,EMEA,electronics,online,65.33,1,0.157,none,2024-07-09\r\n24973,1972,LATAM,grocery,online,113.51,5,0.166,none,2024-03-06\r\n24974,1284,APAC,home,online,59.05,2,0.085,bundle,2024-01-11\r\n24975,1900,APAC,grocery,online,33.14,6,0.185,coupon,2024-05-25\r\n24976,1813,EMEA,fashion,mobile,107.24,4,0.107,bundle,2024-02-04\r\n24977,2023,LATAM,fashion,online,121.01,5,0.110,coupon,2024-05-07\r\n24978,1247,AMER,grocery,retail,170.70,2,0.234,bundle,2024-03-12\r\n24979,1512,APAC,grocery,mobile,53.47,1,0.100,none,2024-01-19\r\n24980,1860,EMEA,home,retail,86.81,3,0.171,none,2024-03-10\r\n24981,2168,EMEA,sports,online,31.43,5,0.068,none,2024-02-06\r\n24982,1270,LATAM,fashion,mobile,33.51,3,0.161,none,2024-01-19\r\n24983,1541,APAC,toys,retail,35.91,1,0.119,none,2024-08-10\r\n24984,1383,AMER,grocery,retail,20.94,8,0.076,none,2024-10-25\r\n24985,1042,LATAM,grocery,online,50.54,7,0.212,bundle,2024-07-07\r\n24986,2133,AMER,home,online,100.42,3,0.178,none,2024-12-05\r\n24987,1845,AMER,grocery,online,43.72,2,0.210,none,2024-10-10\r\n24988,2165,AMER,sports,retail,42.
09,1,0.183,bundle,2024-10-10\r\n24989,1545,AMER,electronics,retail,26.04,2,0.099,coupon,2024-01-03\r\n24990,2481,APAC,fashion,online,93.08,8,0.135,none,2024-07-21\r\n24991,1302,LATAM,toys,online,35.35,7,0.006,bundle,2024-12-06\r\n24992,1152,LATAM,fashion,partner,56.09,1,0.194,none,2024-04-27\r\n24993,1730,AMER,electronics,mobile,66.09,2,0.204,none,2024-12-17\r\n24994,1548,EMEA,home,online,41.12,2,0.016,bundle,2024-07-08\r\n24995,1275,EMEA,grocery,online,54.89,4,0.062,coupon,2024-06-26\r\n24996,1440,AMER,fashion,online,93.32,3,0.230,loyalty,2024-09-07\r\n24997,2247,LATAM,electronics,online,71.07,6,0.244,bundle,2024-05-06\r\n24998,1681,LATAM,sports,mobile,152.19,7,0.007,coupon,2024-03-14\r\n24999,1507,EMEA,grocery,online,64.94,3,0.199,none,2024-08-07\r\n25000,1087,AMER,grocery,mobile,43.88,6,0.175,none,2024-09-25\r\n25001,2472,AMER,electronics,online,47.14,7,0.152,none,2024-07-03\r\n25002,1601,APAC,sports,partner,30.37,1,0.193,none,2024-01-24\r\n25003,2292,EMEA,sports,retail,31.56,4,0.204,coupon,2024-01-28\r\n25004,1948,EMEA,sports,online,182.68,1,0.208,none,2024-07-02\r\n25005,2255,AMER,grocery,retail,63.86,7,0.025,none,2024-04-22\r\n25006,1519,APAC,fashion,mobile,36.01,3,0.038,coupon,2024-07-03\r\n25007,2435,AMER,grocery,mobile,38.75,8,0.161,coupon,2024-01-25\r\n25008,1445,APAC,grocery,online,27.83,7,0.154,none,2024-11-25\r\n25009,1354,AMER,grocery,retail,40.89,8,0.233,loyalty,2024-05-24\r\n25010,2273,APAC,home,retail,26.40,3,0.013,loyalty,2024-05-14\r\n25011,1033,APAC,fashion,mobile,26.28,5,0.234,none,2024-12-09\r\n25012,1813,EMEA,home,retail,54.46,3,0.129,bundle,2024-08-07\r\n25013,1194,APAC,electronics,retail,90.63,5,0.096,bundle,2024-03-02\r\n25014,1820,AMER,home,retail,50.94,5,0.081,coupon,2024-11-02\r\n25015,1888,LATAM,grocery,online,26.26,7,0.075,coupon,2024-06-09\r\n25016,1286,EMEA,grocery,online,106.31,1,0.070,coupon,2024-01-07\r\n25017,1488,AMER,grocery,retail,36.04,4,0.209,none,2024-01-05\r\n25018,1925,LATAM,electronics,online,38.93,3,0.089,coupon,2024-09
-15\r\n25019,1238,AMER,grocery,online,52.21,8,0.085,none,2024-07-01\r\n25020,1008,AMER,grocery,retail,155.61,6,0.115,none,2024-04-03\r\n25021,2304,LATAM,fashion,partner,38.18,2,0.101,coupon,2024-04-09\r\n25022,1300,EMEA,grocery,online,48.12,7,0.162,coupon,2024-08-21\r\n25023,1512,APAC,grocery,retail,57.34,7,0.025,none,2024-03-09\r\n25024,2409,APAC,electronics,online,68.71,1,0.198,bundle,2024-07-25\r\n25025,1727,APAC,home,online,52.08,8,0.243,coupon,2024-09-05\r\n25026,1621,APAC,electronics,retail,85.60,8,0.100,bundle,2024-03-19\r\n25027,1040,LATAM,sports,retail,208.47,8,0.022,none,2024-09-25\r\n25028,1040,LATAM,fashion,online,34.16,2,0.115,none,2024-02-19\r\n25029,1278,AMER,home,online,69.98,3,0.009,none,2024-08-13\r\n25030,1848,EMEA,sports,online,23.91,4,0.091,none,2024-03-11\r\n25031,1577,AMER,grocery,retail,51.86,7,0.150,none,2024-12-11\r\n25032,1725,APAC,sports,retail,42.28,5,0.158,coupon,2024-04-08\r\n25033,1899,APAC,sports,mobile,74.03,5,0.177,loyalty,2024-12-20\r\n25034,1963,AMER,grocery,partner,93.25,1,0.231,none,2024-10-17\r\n25035,1538,AMER,sports,online,33.43,2,0.016,none,2024-09-25\r\n25036,2090,AMER,electronics,online,149.39,7,0.246,loyalty,2024-11-02\r\n25037,2228,EMEA,electronics,retail,68.85,7,0.105,bundle,2024-09-22\r\n25038,1729,AMER,grocery,online,56.15,7,0.248,coupon,2024-01-22\r\n25039,2281,AMER,grocery,retail,43.61,5,0.241,loyalty,2024-04-20\r\n25040,1365,LATAM,electronics,online,143.09,1,0.063,bundle,2024-07-06\r\n25041,1898,EMEA,sports,online,71.51,7,0.034,loyalty,2024-10-19\r\n25042,1171,APAC,electronics,retail,56.38,1,0.128,coupon,2024-10-12\r\n25043,2032,AMER,sports,online,35.08,5,0.205,coupon,2024-05-14\r\n25044,1793,LATAM,toys,retail,83.04,6,0.040,none,2024-10-24\r\n25045,1036,EMEA,toys,mobile,44.85,7,0.171,bundle,2024-04-21\r\n25046,1823,EMEA,sports,retail,66.29,2,0.016,none,2024-07-12\r\n25047,2011,AMER,fashion,online,76.23,7,0.028,none,2024-08-03\r\n25048,2447,AMER,grocery,retail,43.14,4,0.063,coupon,2024-09-16\r\n25049,1246,EMEA,groc
ery,retail,56.10,1,0.204,none,2024-09-27\r\n25050,1314,AMER,electronics,mobile,67.41,3,0.212,bundle,2024-08-23\r\n25051,1936,EMEA,home,retail,50.51,7,0.134,none,2024-08-04\r\n25052,2394,EMEA,sports,retail,50.58,1,0.245,coupon,2024-12-17\r\n25053,2436,LATAM,toys,online,64.19,5,0.244,coupon,2024-09-19\r\n25054,2272,EMEA,electronics,online,51.78,8,0.117,loyalty,2024-08-19\r\n25055,1930,AMER,electronics,online,38.11,8,0.247,coupon,2024-06-04\r\n25056,2261,EMEA,fashion,retail,61.72,2,0.172,none,2024-08-15\r\n25057,1774,EMEA,fashion,online,85.42,1,0.124,none,2024-11-28\r\n25058,1857,LATAM,toys,mobile,46.15,1,0.170,none,2024-09-28\r\n25059,1369,AMER,electronics,online,43.89,1,0.119,none,2024-07-24\r\n25060,1093,APAC,home,retail,27.30,3,0.126,none,2024-04-13\r\n25061,2479,EMEA,sports,retail,61.26,6,0.186,bundle,2024-12-05\r\n25062,1670,EMEA,fashion,retail,57.65,2,0.189,loyalty,2024-07-14\r\n25063,1230,EMEA,grocery,mobile,48.17,8,0.246,coupon,2024-01-22\r\n25064,1749,LATAM,home,online,46.24,3,0.233,none,2024-06-16\r\n25065,1705,AMER,sports,retail,53.64,2,0.143,none,2024-03-20\r\n25066,1710,APAC,sports,retail,52.66,5,0.164,bundle,2024-09-05\r\n25067,2422,APAC,grocery,retail,76.15,3,0.008,coupon,2024-07-01\r\n25068,1831,APAC,electronics,retail,49.33,4,0.202,loyalty,2024-02-05\r\n25069,1220,LATAM,home,retail,42.13,8,0.140,none,2024-01-24\r\n25070,1121,EMEA,grocery,partner,47.73,1,0.243,coupon,2024-01-02\r\n25071,1420,APAC,fashion,retail,45.11,5,0.218,none,2024-07-01\r\n25072,1090,AMER,fashion,online,31.09,3,0.134,coupon,2024-01-25\r\n25073,1366,APAC,electronics,retail,25.02,2,0.012,coupon,2024-02-02\r\n25074,1180,AMER,grocery,retail,123.74,7,0.199,none,2024-04-12\r\n25075,1096,EMEA,sports,retail,26.22,1,0.047,none,2024-12-11\r\n25076,2127,LATAM,fashion,mobile,39.60,1,0.103,none,2024-05-04\r\n25077,2304,LATAM,sports,mobile,117.36,3,0.105,none,2024-09-06\r\n25078,1725,APAC,sports,online,25.15,7,0.227,bundle,2024-05-06\r\n25079,1044,EMEA,fashion,retail,46.51,8,0.019,coupon,2024-07
-22\r\n25080,1700,EMEA,sports,retail,46.89,3,0.166,loyalty,2024-06-03\r\n25081,1098,APAC,home,online,65.72,7,0.100,coupon,2024-08-26\r\n25082,1093,APAC,toys,online,80.70,6,0.242,none,2024-05-23\r\n25083,1788,AMER,sports,partner,70.45,3,0.117,none,2024-01-01\r\n25084,1871,APAC,grocery,retail,30.15,5,0.059,none,2024-04-15\r\n25085,1569,APAC,sports,online,61.57,8,0.080,none,2024-02-02\r\n25086,1252,APAC,fashion,online,15.25,7,0.187,coupon,2024-07-01\r\n25087,1842,LATAM,home,retail,31.43,6,0.019,none,2024-04-25\r\n25088,1365,LATAM,toys,online,28.37,3,0.057,none,2024-11-15\r\n25089,2303,EMEA,grocery,partner,49.41,3,0.233,none,2024-04-16\r\n25090,2029,APAC,fashion,online,46.09,7,0.240,bundle,2024-03-03\r\n25091,2398,EMEA,fashion,online,105.80,3,0.091,none,2024-11-27\r\n25092,1216,APAC,fashion,retail,57.62,8,0.132,none,2024-06-18\r\n25093,1130,LATAM,electronics,online,70.27,5,0.066,none,2024-03-06\r\n25094,1832,APAC,toys,mobile,81.05,6,0.032,bundle,2024-10-23\r\n25095,1218,AMER,grocery,partner,45.11,6,0.199,coupon,2024-11-23\r\n25096,1456,APAC,home,online,56.36,1,0.164,coupon,2024-05-01\r\n25097,2471,APAC,fashion,online,54.73,8,0.249,none,2024-10-13\r\n25098,2215,LATAM,sports,online,73.31,6,0.026,none,2024-01-25\r\n25099,2106,LATAM,grocery,online,49.00,6,0.159,coupon,2024-01-25\r\n25100,1076,LATAM,home,online,63.37,4,0.152,none,2024-06-20\r\n25101,1920,LATAM,home,retail,55.99,5,0.172,none,2024-08-27\r\n25102,2095,EMEA,home,mobile,29.95,2,0.230,none,2024-01-04\r\n25103,1871,APAC,electronics,online,69.85,3,0.022,loyalty,2024-09-04\r\n25104,1271,EMEA,electronics,mobile,135.72,4,0.143,none,2024-12-15\r\n25105,1499,EMEA,electronics,online,110.17,5,0.046,none,2024-06-18\r\n25106,1242,LATAM,grocery,retail,151.39,1,0.023,none,2024-12-28\r\n25107,1721,EMEA,fashion,online,19.53,5,0.075,none,2024-12-05\r\n25108,1341,EMEA,electronics,online,81.97,8,0.078,coupon,2024-09-09\r\n25109,2484,APAC,home,retail,38.05,4,0.106,coupon,2024-04-11\r\n25110,1126,LATAM,grocery,online,64.01,3,0.172,no
ne,2024-04-28\r\n25111,1594,LATAM,electronics,online,41.05,7,0.089,none,2024-09-04\r\n25112,1517,AMER,home,online,40.22,7,0.027,coupon,2024-03-21\r\n25113,2042,LATAM,electronics,mobile,187.33,7,0.138,bundle,2024-03-25\r\n25114,1476,APAC,toys,online,32.02,7,0.070,none,2024-05-04\r\n25115,1179,APAC,electronics,retail,75.14,8,0.105,coupon,2024-01-25\r\n25116,2355,EMEA,grocery,retail,95.10,7,0.115,none,2024-09-06\r\n25117,1650,LATAM,grocery,online,88.06,7,0.228,loyalty,2024-10-07\r\n25118,1826,LATAM,toys,online,59.42,5,0.056,coupon,2024-05-10\r\n25119,2287,EMEA,sports,online,45.75,4,0.101,none,2024-01-27\r\n25120,2411,EMEA,toys,partner,58.53,4,0.201,none,2024-10-17\r\n25121,2062,EMEA,sports,online,46.68,7,0.250,bundle,2024-06-12\r\n25122,1162,AMER,home,online,35.05,7,0.017,none,2024-01-13\r\n25123,1521,LATAM,grocery,retail,88.45,5,0.166,none,2024-04-28\r\n25124,2261,EMEA,toys,mobile,43.41,3,0.232,none,2024-12-05\r\n25125,1211,EMEA,sports,online,30.99,2,0.230,coupon,2024-05-01\r\n25126,1615,LATAM,home,online,19.00,3,0.102,loyalty,2024-02-05\r\n25127,1582,AMER,grocery,retail,101.73,1,0.056,none,2024-11-10\r\n25128,2010,APAC,fashion,retail,47.26,1,0.066,none,2024-10-18\r\n25129,2141,AMER,electronics,retail,33.14,5,0.153,none,2024-08-12\r\n25130,1012,LATAM,toys,mobile,34.65,6,0.120,none,2024-05-14\r\n25131,2438,AMER,home,retail,20.87,7,0.094,none,2024-12-06\r\n25132,1048,EMEA,electronics,retail,35.40,6,0.224,loyalty,2024-10-28\r\n25133,2440,APAC,fashion,retail,61.08,8,0.035,none,2024-12-17\r\n25134,2476,APAC,electronics,retail,39.05,6,0.091,none,2024-09-18\r\n25135,1558,EMEA,home,retail,84.15,1,0.090,loyalty,2024-05-16\r\n25136,1894,APAC,home,retail,27.17,3,0.136,coupon,2024-11-18\r\n25137,2350,APAC,home,retail,55.93,7,0.042,none,2024-11-20\r\n25138,1771,AMER,home,online,57.75,2,0.022,coupon,2024-12-22\r\n25139,1840,LATAM,grocery,online,90.23,6,0.250,none,2024-06-02\r\n25140,1980,LATAM,home,mobile,75.82,8,0.218,none,2024-06-10\r\n25141,1619,APAC,home,online,120.48,6,0.131,c
oupon,2024-11-11\r\n25142,1547,AMER,electronics,online,51.13,6,0.061,none,2024-08-05\r\n25143,1701,LATAM,electronics,retail,33.13,1,0.130,coupon,2024-04-28\r\n25144,2136,AMER,home,retail,26.52,1,0.230,none,2024-07-10\r\n25145,1994,LATAM,sports,online,144.86,3,0.133,bundle,2024-10-12\r\n25146,1277,AMER,grocery,mobile,27.70,1,0.075,bundle,2024-07-13\r\n25147,1842,LATAM,grocery,mobile,101.88,1,0.140,loyalty,2024-06-15\r\n25148,1115,AMER,home,partner,25.07,2,0.207,none,2024-03-25\r\n25149,1046,EMEA,grocery,mobile,59.18,1,0.211,none,2024-05-14\r\n25150,2382,LATAM,home,online,54.36,8,0.141,bundle,2024-08-24\r\n25151,2232,EMEA,sports,online,59.48,7,0.061,bundle,2024-04-02\r\n25152,2228,EMEA,sports,mobile,39.41,3,0.221,coupon,2024-04-21\r\n25153,1694,APAC,fashion,retail,56.32,8,0.034,coupon,2024-01-16\r\n25154,1430,EMEA,grocery,online,173.74,7,0.057,none,2024-12-02\r\n25155,1998,APAC,electronics,mobile,75.21,8,0.047,none,2024-05-10\r\n25156,1992,LATAM,grocery,online,102.75,6,0.043,loyalty,2024-11-16\r\n25157,1081,AMER,fashion,retail,91.24,4,0.226,none,2024-09-06\r\n25158,2202,APAC,grocery,online,15.89,6,0.127,none,2024-08-19\r\n25159,1963,AMER,home,online,29.34,6,0.153,none,2024-03-18\r\n25160,1953,EMEA,sports,online,58.05,7,0.005,none,2024-08-01\r\n25161,1176,EMEA,electronics,partner,71.16,6,0.015,none,2024-12-09\r\n25162,1436,APAC,electronics,mobile,48.66,4,0.245,none,2024-11-26\r\n25163,1148,AMER,electronics,mobile,75.89,6,0.017,loyalty,2024-12-24\r\n25164,1786,APAC,grocery,online,119.87,6,0.099,none,2024-09-01\r\n25165,1526,EMEA,grocery,online,37.16,6,0.030,none,2024-07-18\r\n25166,2282,EMEA,fashion,retail,61.64,8,0.203,none,2024-05-18\r\n25167,1521,LATAM,home,retail,83.29,4,0.138,loyalty,2024-02-05\r\n25168,2264,LATAM,grocery,online,40.37,2,0.093,none,2024-03-23\r\n25169,1090,AMER,grocery,retail,46.00,5,0.159,loyalty,2024-01-06\r\n25170,1178,EMEA,home,online,133.98,4,0.242,none,2024-09-16\r\n25171,1120,LATAM,electronics,online,68.61,6,0.058,bundle,2024-12-27\r\n25172,2
399,LATAM,sports,retail,63.03,7,0.037,loyalty,2024-04-26\r\n25173,1067,APAC,home,retail,88.75,1,0.223,none,2024-11-14\r\n25174,2170,EMEA,fashion,retail,47.32,4,0.213,none,2024-02-01\r\n25175,1817,APAC,fashion,online,41.33,2,0.202,none,2024-06-27\r\n25176,1081,AMER,grocery,mobile,43.88,5,0.227,loyalty,2024-12-07\r\n25177,1147,EMEA,electronics,retail,67.53,5,0.031,none,2024-02-08\r\n25178,1553,LATAM,grocery,online,77.43,6,0.203,none,2024-04-13\r\n25179,2080,LATAM,grocery,online,35.34,2,0.149,none,2024-05-11\r\n25180,2273,APAC,toys,retail,57.82,6,0.186,none,2024-09-07\r\n25181,1825,AMER,home,online,56.20,6,0.182,coupon,2024-04-01\r\n25182,2074,AMER,sports,online,45.69,8,0.026,none,2024-05-16\r\n25183,1819,AMER,electronics,mobile,83.29,6,0.159,coupon,2024-11-07\r\n25184,2314,EMEA,home,retail,57.24,7,0.217,none,2024-03-28\r\n25185,1950,LATAM,electronics,mobile,73.81,4,0.115,none,2024-09-16\r\n25186,2219,LATAM,home,retail,41.89,2,0.092,bundle,2024-06-23\r\n25187,1533,APAC,grocery,online,45.72,5,0.069,coupon,2024-06-03\r\n25188,2216,AMER,electronics,retail,45.05,3,0.242,none,2024-03-09\r\n25189,2462,EMEA,fashion,mobile,61.97,4,0.011,bundle,2024-07-03\r\n25190,2354,LATAM,home,online,52.10,7,0.238,none,2024-07-13\r\n25191,2281,AMER,toys,online,37.21,4,0.148,bundle,2024-04-24\r\n25192,2479,EMEA,electronics,retail,87.27,4,0.059,coupon,2024-11-27\r\n25193,2276,AMER,fashion,online,34.53,3,0.181,none,2024-04-14\r\n25194,1817,APAC,fashion,online,73.15,3,0.019,none,2024-03-11\r\n25195,1624,AMER,home,retail,148.72,5,0.014,bundle,2024-05-25\r\n25196,1158,LATAM,electronics,online,31.29,8,0.141,bundle,2024-07-18\r\n25197,1722,EMEA,home,online,119.06,6,0.009,loyalty,2024-07-08\r\n25198,1009,APAC,toys,online,53.09,1,0.221,bundle,2024-08-13\r\n25199,1286,EMEA,electronics,online,64.49,6,0.051,none,2024-01-14\r\n25200,1072,LATAM,home,retail,118.54,2,0.237,none,2024-10-21\r\n25201,1497,EMEA,home,online,23.82,7,0.026,none,2024-05-06\r\n25202,1033,APAC,grocery,online,20.91,3,0.218,bundle,2024-
09-23\r\n25203,1288,LATAM,grocery,retail,47.60,8,0.194,none,2024-08-16\r\n25204,2440,APAC,grocery,mobile,71.60,1,0.063,coupon,2024-11-17\r\n25205,1044,EMEA,toys,partner,80.06,7,0.099,none,2024-05-28\r\n25206,2492,LATAM,grocery,online,47.53,8,0.076,coupon,2024-11-11\r\n25207,2218,EMEA,electronics,online,51.32,4,0.001,none,2024-04-09\r\n25208,1305,EMEA,electronics,online,131.79,4,0.114,coupon,2024-01-15\r\n25209,1546,EMEA,toys,online,54.61,6,0.100,none,2024-02-27\r\n25210,1708,LATAM,grocery,online,69.92,1,0.081,none,2024-07-09\r\n25211,1900,APAC,fashion,retail,61.89,3,0.144,loyalty,2024-10-09\r\n25212,2065,EMEA,electronics,partner,82.81,4,0.159,none,2024-03-24\r\n25213,1551,APAC,electronics,retail,46.40,7,0.048,none,2024-12-02\r\n25214,1090,AMER,fashion,mobile,90.78,8,0.172,bundle,2024-08-02\r\n25215,2186,LATAM,fashion,retail,43.29,8,0.123,none,2024-07-27\r\n25216,1218,AMER,toys,online,71.50,6,0.183,none,2024-08-20\r\n25217,2112,LATAM,fashion,retail,50.48,6,0.053,loyalty,2024-12-09\r\n25218,1122,AMER,fashion,online,68.82,2,0.180,coupon,2024-04-02\r\n25219,1422,LATAM,grocery,online,102.34,5,0.129,none,2024-02-02\r\n25220,2133,AMER,grocery,online,81.84,5,0.192,none,2024-08-02\r\n25221,1422,LATAM,electronics,online,66.37,8,0.084,none,2024-12-21\r\n25222,1623,AMER,electronics,retail,66.49,5,0.047,coupon,2024-03-01\r\n25223,1408,AMER,grocery,online,20.21,1,0.052,coupon,2024-08-23\r\n25224,1352,AMER,grocery,online,82.44,8,0.132,none,2024-10-20\r\n25225,1342,LATAM,electronics,online,45.50,4,0.196,none,2024-12-18\r\n25226,2062,EMEA,toys,online,87.52,2,0.015,bundle,2024-08-06\r\n25227,1117,LATAM,home,retail,45.14,1,0.126,bundle,2024-12-22\r\n25228,2381,AMER,grocery,online,199.19,8,0.086,bundle,2024-04-10\r\n25229,1035,EMEA,grocery,online,66.21,3,0.142,bundle,2024-03-04\r\n25230,2328,EMEA,sports,retail,26.57,3,0.087,none,2024-10-16\r\n25231,2014,EMEA,sports,mobile,26.80,5,0.149,bundle,2024-11-12\r\n25232,2400,EMEA,grocery,online,56.11,3,0.118,bundle,2024-02-15\r\n25233,2262,APA
C,toys,online,39.18,1,0.227,bundle,2024-10-03\r\n25234,1599,APAC,toys,retail,58.52,3,0.165,none,2024-04-19\r\n25235,1193,APAC,fashion,retail,86.99,6,0.050,bundle,2024-06-14\r\n25236,1965,LATAM,fashion,retail,46.29,1,0.160,none,2024-03-15\r\n25237,1503,APAC,grocery,mobile,92.73,6,0.229,none,2024-02-13\r\n25238,2478,AMER,fashion,retail,119.18,8,0.082,coupon,2024-01-12\r\n25239,1840,LATAM,grocery,online,139.10,5,0.250,coupon,2024-04-21\r\n25240,1860,EMEA,toys,mobile,24.82,7,0.238,bundle,2024-09-03\r\n25241,2288,AMER,electronics,online,100.79,7,0.206,none,2024-07-11\r\n25242,1688,LATAM,electronics,mobile,28.41,6,0.212,none,2024-01-11\r\n25243,2158,APAC,grocery,online,60.02,5,0.191,coupon,2024-04-16\r\n25244,1296,LATAM,grocery,retail,52.33,4,0.181,none,2024-12-06\r\n25245,2104,EMEA,toys,retail,65.57,3,0.237,none,2024-10-22\r\n25246,2383,APAC,electronics,online,72.74,7,0.241,none,2024-07-06\r\n25247,1076,LATAM,grocery,retail,29.61,5,0.132,bundle,2024-03-22\r\n25248,1564,APAC,fashion,mobile,97.20,8,0.234,none,2024-07-08\r\n25249,2475,AMER,home,online,26.29,5,0.116,loyalty,2024-12-04\r\n25250,1836,LATAM,home,mobile,26.29,8,0.059,bundle,2024-02-15\r\n25251,2003,LATAM,sports,retail,60.24,4,0.036,none,2024-03-10\r\n25252,1203,AMER,fashion,online,28.89,1,0.130,bundle,2024-10-24\r\n25253,2356,LATAM,home,retail,100.09,5,0.194,coupon,2024-06-11\r\n25254,1897,AMER,electronics,retail,49.49,3,0.050,none,2024-12-26\r\n25255,1619,APAC,sports,online,112.96,4,0.154,none,2024-10-21\r\n25256,1668,AMER,home,retail,109.63,2,0.187,coupon,2024-02-07\r\n25257,1953,EMEA,sports,mobile,341.28,6,0.030,none,2024-10-13\r\n25258,2054,AMER,home,online,43.27,1,0.235,none,2024-06-11\r\n25259,1772,EMEA,grocery,online,53.69,6,0.156,none,2024-06-04\r\n25260,1777,AMER,toys,online,39.43,3,0.016,bundle,2024-01-15\r\n25261,1966,APAC,electronics,mobile,34.86,4,0.156,none,2024-02-25\r\n25262,1168,APAC,electronics,retail,134.02,8,0.044,coupon,2024-03-28\r\n25263,1611,EMEA,electronics,online,58.76,1,0.105,none,2024
-09-07\r\n25264,2050,APAC,toys,mobile,27.50,2,0.200,coupon,2024-04-23\r\n25265,1756,EMEA,grocery,retail,60.48,4,0.203,coupon,2024-09-02\r\n25266,2315,LATAM,grocery,online,64.30,2,0.020,none,2024-03-04\r\n25267,1139,EMEA,sports,retail,62.60,3,0.054,coupon,2024-02-07\r\n25268,2368,AMER,fashion,online,55.26,4,0.172,bundle,2024-02-12\r\n25269,1168,APAC,fashion,online,26.00,2,0.210,bundle,2024-07-11\r\n25270,1406,LATAM,grocery,retail,40.28,1,0.065,none,2024-12-19\r\n25271,2356,LATAM,fashion,retail,58.90,1,0.040,none,2024-04-09\r\n25272,1216,APAC,electronics,retail,44.16,7,0.055,none,2024-06-05\r\n25273,2452,LATAM,fashion,retail,41.17,2,0.012,coupon,2024-12-28\r\n25274,1705,AMER,toys,retail,60.18,4,0.135,none,2024-02-28\r\n25275,1490,AMER,grocery,retail,40.93,2,0.059,none,2024-07-10\r\n25276,2117,EMEA,grocery,retail,46.54,5,0.009,none,2024-01-18\r\n25277,1618,EMEA,electronics,online,120.13,2,0.019,none,2024-12-06\r\n25278,2381,AMER,electronics,retail,24.17,5,0.095,loyalty,2024-06-16\r\n25279,1785,EMEA,grocery,mobile,20.25,5,0.070,none,2024-09-08\r\n25280,1159,LATAM,sports,retail,38.63,8,0.243,none,2024-05-21\r\n25281,1353,EMEA,grocery,mobile,29.04,1,0.110,coupon,2024-07-18\r\n25282,1665,AMER,fashion,online,91.22,7,0.193,none,2024-05-11\r\n25283,1256,LATAM,grocery,retail,45.07,3,0.084,none,2024-11-18\r\n25284,1956,APAC,grocery,retail,50.32,3,0.193,bundle,2024-01-28\r\n25285,1403,APAC,fashion,retail,51.39,8,0.051,coupon,2024-10-15\r\n25286,2038,LATAM,fashion,retail,20.50,2,0.227,coupon,2024-05-14\r\n25287,2214,AMER,electronics,retail,49.93,8,0.141,loyalty,2024-01-27\r\n25288,1210,LATAM,grocery,partner,64.44,8,0.188,loyalty,2024-04-26\r\n25289,1084,AMER,grocery,mobile,67.14,2,0.145,coupon,2024-05-03\r\n25290,1423,EMEA,grocery,partner,58.62,4,0.187,bundle,2024-03-06\r\n25291,1394,LATAM,electronics,online,149.89,4,0.181,none,2024-04-02\r\n25292,1752,APAC,fashion,online,102.46,6,0.061,none,2024-02-06\r\n25293,1834,AMER,home,retail,23.95,4,0.240,none,2024-05-13\r\n25294,2219,LAT
AM,home,online,96.15,5,0.023,none,2024-04-03\r\n25295,1331,AMER,toys,retail,59.14,7,0.192,none,2024-12-17\r\n25296,1206,EMEA,sports,online,110.55,2,0.095,none,2024-12-14\r\n25297,1281,AMER,grocery,retail,35.47,6,0.006,coupon,2024-07-24\r\n25298,2452,LATAM,grocery,retail,154.88,6,0.046,bundle,2024-10-03\r\n25299,2265,APAC,electronics,online,18.25,8,0.059,none,2024-12-13\r\n25300,1545,AMER,fashion,retail,67.13,3,0.152,none,2024-02-26\r\n25301,1350,LATAM,fashion,online,44.43,8,0.167,loyalty,2024-11-25\r\n25302,2063,APAC,electronics,retail,121.71,8,0.082,none,2024-12-22\r\n25303,2026,LATAM,grocery,retail,70.89,8,0.073,loyalty,2024-12-14\r\n25304,1387,AMER,electronics,retail,78.30,1,0.200,loyalty,2024-03-10\r\n25305,1307,AMER,grocery,online,59.95,3,0.021,coupon,2024-11-15\r\n25306,1919,EMEA,sports,online,53.29,5,0.232,coupon,2024-10-14\r\n25307,1362,AMER,toys,partner,41.04,1,0.120,coupon,2024-04-25\r\n25308,1885,EMEA,fashion,retail,71.13,8,0.178,none,2024-04-04\r\n25309,1456,APAC,grocery,partner,38.33,1,0.026,none,2024-08-21\r\n25310,1762,LATAM,electronics,partner,134.97,6,0.016,none,2024-09-28\r\n25311,1986,LATAM,fashion,retail,18.74,8,0.008,none,2024-09-13\r\n25312,1263,AMER,electronics,online,50.10,7,0.087,bundle,2024-02-13\r\n25313,1457,EMEA,electronics,retail,26.02,5,0.031,coupon,2024-08-16\r\n25314,1758,AMER,grocery,mobile,73.89,3,0.058,bundle,2024-05-12\r\n25315,1988,AMER,grocery,online,56.08,7,0.167,none,2024-05-27\r\n25316,2487,LATAM,home,retail,121.59,5,0.097,coupon,2024-12-02\r\n25317,2255,AMER,home,retail,51.57,2,0.031,none,2024-06-15\r\n25318,1946,AMER,electronics,online,96.68,2,0.068,loyalty,2024-05-01\r\n25319,1969,LATAM,electronics,online,55.44,2,0.038,coupon,2024-08-08\r\n25320,2400,EMEA,electronics,retail,99.67,6,0.223,none,2024-12-08\r\n25321,1531,EMEA,electronics,mobile,76.25,5,0.120,coupon,2024-06-23\r\n25322,1337,APAC,sports,mobile,45.03,8,0.059,none,2024-02-19\r\n25323,1846,APAC,sports,retail,47.23,6,0.026,loyalty,2024-03-17\r\n25324,1934,EMEA,fash
ion,retail,26.32,2,0.182,bundle,2024-05-11\r\n25325,2298,APAC,home,mobile,66.36,7,0.022,coupon,2024-11-09\r\n25326,1544,LATAM,grocery,retail,47.20,3,0.043,none,2024-07-17\r\n25327,1680,LATAM,sports,partner,47.88,3,0.097,coupon,2024-05-15\r\n25328,2146,APAC,home,online,68.55,3,0.192,bundle,2024-12-15\r\n25329,2198,EMEA,grocery,online,54.72,6,0.211,none,2024-04-08\r\n25330,1597,APAC,electronics,retail,128.90,1,0.045,coupon,2024-02-14\r\n25331,2375,AMER,home,online,81.11,3,0.217,coupon,2024-01-24\r\n25332,1227,AMER,home,online,137.59,2,0.215,none,2024-08-04\r\n25333,1360,APAC,fashion,online,101.61,1,0.207,none,2024-12-25\r\n25334,1782,LATAM,grocery,online,84.28,2,0.229,none,2024-06-04\r\n25335,2353,AMER,electronics,mobile,44.37,8,0.213,none,2024-07-28\r\n25336,2408,EMEA,fashion,mobile,41.11,7,0.116,none,2024-06-02\r\n25337,1451,EMEA,grocery,mobile,104.51,8,0.036,loyalty,2024-08-09\r\n25338,2432,AMER,grocery,retail,71.91,2,0.134,bundle,2024-10-24\r\n25339,1476,APAC,grocery,mobile,14.41,3,0.243,none,2024-05-05\r\n25340,1739,AMER,home,retail,56.15,3,0.085,none,2024-10-25\r\n25341,2297,EMEA,home,online,36.54,6,0.129,coupon,2024-03-28\r\n25342,2034,LATAM,electronics,partner,142.87,4,0.121,none,2024-08-15\r\n25343,1051,EMEA,electronics,online,119.58,5,0.139,none,2024-01-14\r\n25344,1496,AMER,electronics,retail,132.44,4,0.050,none,2024-04-13\r\n25345,2366,APAC,grocery,online,94.87,7,0.148,none,2024-01-17\r\n25346,1160,LATAM,toys,mobile,76.27,4,0.111,none,2024-08-25\r\n25347,2330,EMEA,home,online,65.26,7,0.037,none,2024-07-14\r\n25348,1885,EMEA,sports,online,44.75,6,0.246,bundle,2024-09-16\r\n25349,1651,LATAM,fashion,online,56.70,2,0.174,none,2024-02-22\r\n25350,1434,EMEA,home,online,149.42,2,0.225,loyalty,2024-02-21\r\n25351,1538,AMER,grocery,retail,88.84,4,0.091,loyalty,2024-09-18\r\n25352,1937,APAC,grocery,online,32.48,3,0.083,bundle,2024-02-04\r\n25353,2073,AMER,fashion,online,46.27,1,0.134,none,2024-07-23\r\n25354,2320,LATAM,fashion,online,104.62,7,0.055,none,2024-02-26\r
\n25355,1900,APAC,home,online,22.78,3,0.141,none,2024-08-28\r\n25356,2057,APAC,fashion,online,43.88,6,0.099,none,2024-10-27\r\n25357,2158,APAC,sports,mobile,65.42,3,0.110,none,2024-05-01\r\n25358,2420,EMEA,home,online,76.78,4,0.059,bundle,2024-01-01\r\n25359,2131,APAC,electronics,retail,63.30,7,0.167,loyalty,2024-05-25\r\n25360,1310,AMER,home,online,36.33,1,0.091,loyalty,2024-03-11\r\n25361,1955,AMER,electronics,retail,71.52,1,0.020,none,2024-04-19\r\n25362,1395,APAC,grocery,online,43.63,3,0.189,none,2024-11-22\r\n25363,1381,LATAM,home,online,48.55,5,0.240,coupon,2024-05-10\r\n25364,1037,EMEA,sports,online,47.62,8,0.175,bundle,2024-02-02\r\n25365,2366,APAC,fashion,mobile,29.50,7,0.149,none,2024-01-03\r\n25366,1303,LATAM,fashion,mobile,79.83,3,0.059,none,2024-03-13\r\n25367,1883,LATAM,grocery,retail,38.58,7,0.189,none,2024-05-27\r\n25368,1113,EMEA,grocery,mobile,62.74,6,0.070,none,2024-07-06\r\n25369,2442,APAC,grocery,retail,20.98,4,0.021,none,2024-06-17\r\n25370,2369,LATAM,home,mobile,62.26,1,0.053,none,2024-07-08\r\n25371,2353,AMER,fashion,online,37.56,8,0.096,none,2024-12-05\r\n25372,2068,LATAM,toys,online,98.75,3,0.225,coupon,2024-11-05\r\n25373,1620,LATAM,grocery,retail,53.80,8,0.049,none,2024-07-08\r\n25374,2012,APAC,home,mobile,21.25,1,0.047,none,2024-10-17\r\n25375,1751,AMER,fashion,online,26.12,7,0.097,none,2024-08-19\r\n25376,1069,APAC,toys,partner,37.37,5,0.183,coupon,2024-10-03\r\n25377,1506,EMEA,home,online,48.60,1,0.041,none,2024-10-28\r\n25378,2350,APAC,home,mobile,31.52,5,0.137,bundle,2024-09-04\r\n25379,1112,APAC,electronics,retail,90.28,4,0.150,none,2024-05-25\r\n25380,2042,LATAM,grocery,online,37.63,8,0.011,coupon,2024-07-25\r\n25381,1817,APAC,electronics,online,37.75,7,0.230,none,2024-08-25\r\n25382,2477,APAC,grocery,online,163.72,3,0.249,loyalty,2024-06-26\r\n25383,1459,LATAM,grocery,retail,33.40,3,0.222,coupon,2024-07-21\r\n25384,2277,EMEA,sports,retail,62.01,6,0.023,bundle,2024-02-04\r\n25385,2125,LATAM,fashion,retail,149.86,2,0.228,none,2024-0
2-24\r\n25386,2485,AMER,home,retail,38.09,5,0.109,bundle,2024-09-23\r\n25387,2315,LATAM,sports,retail,28.19,6,0.122,bundle,2024-07-26\r\n25388,1020,APAC,grocery,retail,13.42,5,0.215,bundle,2024-02-15\r\n25389,1954,APAC,electronics,online,33.15,8,0.184,coupon,2024-07-25\r\n25390,2224,EMEA,electronics,retail,69.90,4,0.096,none,2024-11-16\r\n25391,1701,LATAM,electronics,online,22.54,1,0.098,loyalty,2024-05-03\r\n25392,1529,LATAM,fashion,retail,39.24,5,0.010,none,2024-01-03\r\n25393,2128,EMEA,toys,retail,36.76,6,0.110,none,2024-04-06\r\n25394,2017,EMEA,sports,online,32.50,4,0.170,bundle,2024-08-16\r\n25395,1636,APAC,toys,retail,16.68,2,0.106,none,2024-06-06\r\n25396,1420,APAC,electronics,online,29.94,8,0.217,none,2024-05-11\r\n25397,1516,EMEA,grocery,retail,52.99,7,0.066,none,2024-05-28\r\n25398,1733,LATAM,electronics,retail,97.18,7,0.201,none,2024-01-01\r\n25399,2184,APAC,grocery,retail,39.77,5,0.230,none,2024-08-18\r\n25400,1186,APAC,sports,mobile,38.88,2,0.037,coupon,2024-01-24\r\n25401,1021,AMER,sports,online,70.37,6,0.021,none,2024-09-15\r\n25402,2387,EMEA,electronics,mobile,58.51,8,0.047,none,2024-02-02\r\n25403,1395,APAC,home,retail,51.17,8,0.146,none,2024-04-12\r\n25404,1452,LATAM,electronics,online,82.58,8,0.092,bundle,2024-02-09\r\n25405,1964,EMEA,electronics,online,73.74,3,0.122,loyalty,2024-10-25\r\n25406,1366,APAC,fashion,online,72.60,1,0.025,none,2024-08-21\r\n25407,1426,AMER,grocery,retail,31.51,3,0.231,coupon,2024-08-26\r\n25408,1197,LATAM,home,online,74.69,6,0.041,none,2024-09-09\r\n25409,2290,LATAM,toys,online,16.25,4,0.138,bundle,2024-02-15\r\n25410,1713,EMEA,grocery,online,35.26,5,0.223,none,2024-09-13\r\n25411,1077,AMER,fashion,retail,53.71,7,0.067,bundle,2024-04-23\r\n25412,2292,EMEA,home,retail,115.55,2,0.107,loyalty,2024-07-23\r\n25413,2205,AMER,electronics,mobile,54.25,1,0.235,none,2024-09-14\r\n25414,2462,EMEA,grocery,mobile,71.26,6,0.017,bundle,2024-01-14\r\n25415,1946,AMER,grocery,online,151.41,4,0.107,none,2024-01-07\r\n25416,2224,EMEA,fashi
on,partner,29.09,6,0.237,none,2024-09-06\r\n25417,1585,AMER,sports,mobile,43.24,8,0.206,none,2024-08-04\r\n25418,2214,AMER,electronics,retail,79.59,8,0.163,none,2024-12-10\r\n25419,2433,APAC,grocery,online,47.11,2,0.065,none,2024-09-22\r\n25420,1630,APAC,grocery,online,45.59,4,0.187,none,2024-07-02\r\n25421,1583,AMER,grocery,online,38.39,7,0.023,none,2024-02-16\r\n25422,1814,AMER,home,online,34.39,8,0.248,none,2024-05-12\r\n25423,2390,AMER,electronics,mobile,41.17,1,0.176,none,2024-11-13\r\n25424,2095,EMEA,sports,online,94.74,8,0.233,none,2024-06-25\r\n25425,2409,APAC,grocery,online,28.55,3,0.232,none,2024-04-02\r\n25426,1452,LATAM,electronics,retail,130.50,6,0.072,bundle,2024-07-12\r\n25427,1120,LATAM,sports,online,32.63,5,0.051,none,2024-12-08\r\n25428,1409,APAC,grocery,retail,117.26,8,0.213,none,2024-03-26\r\n25429,1000,APAC,toys,retail,56.44,1,0.144,bundle,2024-07-20\r\n25430,1015,AMER,home,retail,46.21,2,0.084,none,2024-09-22\r\n25431,1842,LATAM,fashion,online,149.31,6,0.231,bundle,2024-06-23\r\n25432,1405,LATAM,sports,online,36.86,2,0.196,none,2024-11-10\r\n25433,2100,APAC,electronics,retail,21.90,7,0.107,bundle,2024-06-09\r\n25434,1515,EMEA,electronics,retail,110.48,3,0.061,none,2024-05-05\r\n25435,1341,EMEA,fashion,retail,34.30,6,0.177,coupon,2024-12-19\r\n25436,1439,LATAM,grocery,retail,63.47,8,0.095,loyalty,2024-12-09\r\n25437,2023,LATAM,toys,mobile,97.05,2,0.051,none,2024-01-26\r\n25438,1517,AMER,fashion,retail,71.65,3,0.163,bundle,2024-05-14\r\n25439,2320,LATAM,electronics,retail,77.24,4,0.244,bundle,2024-07-20\r\n25440,2476,APAC,fashion,partner,78.75,1,0.232,none,2024-03-16\r\n25441,2228,EMEA,fashion,retail,61.51,6,0.141,coupon,2024-07-14\r\n25442,1935,EMEA,fashion,mobile,30.31,8,0.154,none,2024-07-06\r\n25443,2413,AMER,sports,retail,68.85,8,0.065,none,2024-05-05\r\n25444,1440,AMER,home,partner,50.27,8,0.025,coupon,2024-09-09\r\n25445,2047,AMER,fashion,online,42.86,4,0.217,none,2024-06-06\r\n25446,1276,AMER,fashion,partner,34.95,6,0.081,none,2024-04-02\
r\n25447,1839,APAC,toys,online,18.68,7,0.189,bundle,2024-11-24\r\n25448,1859,AMER,electronics,online,38.60,8,0.118,none,2024-04-20\r\n25449,1813,EMEA,electronics,retail,74.74,8,0.118,none,2024-05-14\r\n25450,2132,LATAM,grocery,retail,42.55,7,0.101,coupon,2024-09-22\r\n25451,2154,APAC,sports,mobile,32.69,5,0.063,coupon,2024-06-28\r\n25452,2267,AMER,fashion,retail,72.18,8,0.183,none,2024-06-01\r\n25453,2125,LATAM,fashion,retail,16.20,8,0.182,none,2024-04-22\r\n25454,2472,AMER,electronics,online,113.68,3,0.184,coupon,2024-04-09\r\n25455,1495,LATAM,electronics,mobile,33.62,5,0.164,none,2024-09-23\r\n25456,1670,EMEA,home,retail,307.19,7,0.178,none,2024-01-28\r\n25457,1726,EMEA,grocery,partner,70.16,2,0.118,coupon,2024-12-24\r\n25458,2464,LATAM,sports,retail,68.01,4,0.148,none,2024-12-09\r\n25459,2319,AMER,electronics,retail,277.67,5,0.011,loyalty,2024-05-21\r\n25460,2142,LATAM,electronics,retail,56.74,4,0.216,none,2024-08-14\r\n25461,2186,LATAM,sports,online,83.02,6,0.172,none,2024-02-18\r\n25462,2045,LATAM,electronics,partner,22.38,4,0.177,none,2024-06-26\r\n25463,1883,LATAM,electronics,retail,55.74,7,0.201,none,2024-01-18\r\n25464,2400,EMEA,sports,retail,77.64,7,0.062,none,2024-05-04\r\n25465,2401,LATAM,home,online,72.91,4,0.045,none,2024-05-02\r\n25466,2201,AMER,fashion,online,174.57,4,0.110,none,2024-01-07\r\n25467,1546,EMEA,sports,retail,71.68,8,0.244,none,2024-09-09\r\n25468,1735,LATAM,sports,partner,118.14,8,0.088,none,2024-03-07\r\n25469,1164,EMEA,electronics,online,49.70,1,0.142,coupon,2024-08-20\r\n25470,1656,LATAM,electronics,retail,62.37,8,0.035,none,2024-10-12\r\n25471,1106,AMER,fashion,retail,28.70,3,0.186,none,2024-09-04\r\n25472,2232,EMEA,toys,online,99.74,3,0.205,none,2024-04-02\r\n25473,1368,EMEA,grocery,retail,24.96,4,0.196,none,2024-05-19\r\n25474,1489,AMER,electronics,online,146.88,8,0.021,coupon,2024-01-09\r\n25475,1526,EMEA,electronics,retail,77.62,8,0.143,coupon,2024-10-10\r\n25476,1292,LATAM,grocery,online,61.26,4,0.243,none,2024-07-03\r\n25477,1
653,APAC,grocery,retail,45.70,3,0.139,bundle,2024-08-23\r\n25478,2155,APAC,grocery,online,84.86,6,0.027,none,2024-08-23\r\n25479,1141,AMER,grocery,retail,16.96,8,0.096,none,2024-01-16\r\n25480,1033,APAC,sports,retail,51.90,7,0.007,none,2024-12-22\r\n25481,1085,EMEA,grocery,retail,22.11,2,0.200,none,2024-04-23\r\n25482,1730,AMER,grocery,online,38.25,7,0.171,none,2024-02-11\r\n25483,2262,APAC,toys,retail,46.94,8,0.108,none,2024-08-14\r\n25484,1087,AMER,electronics,online,44.84,8,0.005,coupon,2024-03-11\r\n25485,2289,APAC,fashion,online,49.35,4,0.231,coupon,2024-05-05\r\n25486,2271,LATAM,home,mobile,24.50,2,0.081,bundle,2024-06-24\r\n25487,2200,LATAM,fashion,partner,51.98,3,0.069,none,2024-10-15\r\n25488,2197,LATAM,grocery,online,41.24,1,0.003,none,2024-04-18\r\n25489,2031,AMER,electronics,online,30.67,8,0.163,none,2024-08-27\r\n25490,1750,LATAM,fashion,retail,51.68,7,0.099,coupon,2024-02-01\r\n25491,2173,LATAM,electronics,retail,85.58,1,0.062,coupon,2024-07-13\r\n25492,2423,LATAM,fashion,online,57.22,3,0.130,none,2024-04-08\r\n25493,1412,AMER,toys,online,75.27,4,0.198,coupon,2024-08-01\r\n25494,2310,EMEA,electronics,online,170.70,2,0.143,coupon,2024-07-12\r\n25495,2433,APAC,fashion,online,62.58,2,0.042,none,2024-03-13\r\n25496,2411,EMEA,grocery,online,17.39,4,0.144,bundle,2024-11-21\r\n25497,1911,LATAM,electronics,online,99.00,6,0.035,none,2024-12-26\r\n25498,1080,LATAM,toys,online,87.22,2,0.097,bundle,2024-01-23\r\n25499,2327,EMEA,grocery,mobile,163.52,7,0.121,none,2024-12-09\r\n25500,2016,LATAM,electronics,retail,29.01,6,0.160,none,2024-06-07\r\n25501,1424,APAC,home,online,113.19,2,0.124,coupon,2024-10-12\r\n25502,1050,AMER,electronics,online,69.38,4,0.182,none,2024-11-06\r\n25503,1574,AMER,home,mobile,147.59,3,0.239,none,2024-10-22\r\n25504,1949,AMER,grocery,retail,71.52,4,0.083,none,2024-02-18\r\n25505,1730,AMER,grocery,online,25.54,5,0.121,none,2024-05-27\r\n25506,2383,APAC,electronics,mobile,64.32,1,0.096,none,2024-03-13\r\n25507,2075,LATAM,electronics,retail,31
.11,8,0.048,none,2024-03-22\r\n25508,1587,LATAM,fashion,mobile,38.39,3,0.002,loyalty,2024-04-01\r\n25509,1550,APAC,sports,retail,74.63,2,0.128,none,2024-07-05\r\n25510,1650,LATAM,electronics,online,70.37,6,0.035,none,2024-02-26\r\n25511,1032,AMER,grocery,online,57.49,1,0.077,none,2024-05-08\r\n25512,1736,AMER,fashion,online,67.25,8,0.153,loyalty,2024-10-11\r\n25513,1895,AMER,home,mobile,87.05,2,0.188,loyalty,2024-11-11\r\n25514,1512,APAC,electronics,online,75.68,5,0.067,none,2024-09-08\r\n25515,1595,AMER,electronics,online,84.16,7,0.028,none,2024-05-03\r\n25516,2028,APAC,electronics,retail,42.89,8,0.123,none,2024-08-04\r\n25517,1111,APAC,home,retail,105.33,2,0.123,bundle,2024-09-21\r\n25518,2175,AMER,sports,online,10.46,3,0.184,none,2024-01-01\r\n25519,1097,EMEA,grocery,partner,87.94,4,0.141,none,2024-10-21\r\n25520,1341,EMEA,grocery,mobile,53.21,6,0.220,none,2024-10-21\r\n25521,1669,AMER,electronics,online,38.36,1,0.196,loyalty,2024-08-15\r\n25522,1541,APAC,electronics,retail,33.44,6,0.096,coupon,2024-08-20\r\n25523,1718,EMEA,grocery,retail,34.87,8,0.079,none,2024-09-04\r\n25524,1504,AMER,home,online,66.49,6,0.179,coupon,2024-06-19\r\n25525,2177,AMER,home,partner,84.01,1,0.138,none,2024-09-09\r\n25526,1231,AMER,fashion,retail,52.49,1,0.121,none,2024-03-06\r\n25527,1757,EMEA,toys,online,66.34,8,0.025,none,2024-10-24\r\n25528,2109,EMEA,electronics,online,37.00,5,0.056,coupon,2024-05-02\r\n25529,1972,LATAM,electronics,retail,68.20,8,0.095,none,2024-12-01\r\n25530,1612,LATAM,electronics,online,38.84,8,0.063,bundle,2024-06-01\r\n25531,2045,LATAM,toys,retail,49.76,1,0.097,loyalty,2024-04-24\r\n25532,2299,EMEA,electronics,retail,32.85,8,0.226,coupon,2024-10-03\r\n25533,2438,AMER,electronics,online,40.42,3,0.230,coupon,2024-07-09\r\n25534,2017,EMEA,home,online,99.28,4,0.139,bundle,2024-08-24\r\n25535,2313,LATAM,toys,online,82.82,2,0.086,bundle,2024-11-25\r\n25536,2163,EMEA,fashion,retail,49.38,3,0.219,coupon,2024-05-05\r\n25537,1399,AMER,home,retail,59.04,2,0.027,loyalty,2
024-06-05\r\n25538,1675,LATAM,sports,mobile,52.23,8,0.120,none,2024-09-15\r\n25539,2087,LATAM,sports,retail,75.27,5,0.062,none,2024-12-22\r\n25540,1661,LATAM,fashion,retail,93.12,6,0.224,none,2024-05-01\r\n25541,1039,AMER,grocery,online,85.48,5,0.084,bundle,2024-10-19\r\n25542,1419,APAC,home,online,195.89,2,0.191,none,2024-09-19\r\n25543,1889,APAC,electronics,mobile,122.25,5,0.181,none,2024-03-07\r\n25544,1762,LATAM,grocery,online,46.05,8,0.172,coupon,2024-12-13\r\n25545,2129,APAC,electronics,online,54.49,1,0.136,loyalty,2024-02-22\r\n25546,2027,EMEA,electronics,retail,35.15,5,0.115,bundle,2024-12-03\r\n25547,1792,AMER,electronics,retail,86.54,8,0.012,coupon,2024-08-01\r\n25548,1520,APAC,home,partner,62.16,7,0.179,coupon,2024-04-06\r\n25549,2180,AMER,sports,retail,69.29,5,0.111,bundle,2024-08-17\r\n25550,1135,APAC,sports,online,56.83,6,0.104,none,2024-06-21\r\n25551,2331,APAC,sports,partner,29.45,5,0.230,none,2024-10-11\r\n25552,2180,AMER,sports,mobile,99.82,7,0.073,none,2024-03-19\r\n25553,2012,APAC,home,mobile,44.46,4,0.205,loyalty,2024-11-20\r\n25554,1420,APAC,grocery,online,75.33,6,0.184,bundle,2024-01-18\r\n25555,2185,EMEA,grocery,mobile,85.99,5,0.085,coupon,2024-09-18\r\n25556,1254,APAC,electronics,online,99.39,6,0.073,none,2024-11-03\r\n25557,1300,EMEA,sports,retail,27.45,3,0.066,bundle,2024-11-01\r\n25558,1294,APAC,home,retail,25.41,1,0.100,coupon,2024-04-14\r\n25559,1519,APAC,grocery,mobile,26.12,5,0.051,none,2024-05-11\r\n25560,1606,AMER,fashion,partner,75.56,5,0.004,none,2024-09-20\r\n25561,2297,EMEA,fashion,retail,53.95,2,0.013,none,2024-03-14\r\n25562,2174,LATAM,home,online,48.40,7,0.158,loyalty,2024-07-13\r\n25563,1183,AMER,electronics,online,81.95,1,0.024,loyalty,2024-12-18\r\n25564,1143,LATAM,home,online,52.43,2,0.230,coupon,2024-01-21\r\n25565,1889,APAC,grocery,mobile,90.22,4,0.115,loyalty,2024-09-20\r\n25566,1018,APAC,grocery,online,70.26,3,0.031,none,2024-04-03\r\n25567,2103,LATAM,grocery,mobile,92.51,2,0.179,bundle,2024-04-24\r\n25568,1797,LATAM,
sports,partner,37.23,2,0.188,bundle,2024-04-06\r\n25569,1544,LATAM,electronics,online,55.64,5,0.029,loyalty,2024-08-02\r\n25570,1918,EMEA,electronics,mobile,83.33,3,0.015,none,2024-03-23\r\n25571,1787,APAC,grocery,partner,103.19,2,0.187,coupon,2024-08-03\r\n25572,2439,AMER,electronics,mobile,69.36,6,0.080,bundle,2024-11-19\r\n25573,1281,AMER,fashion,online,57.96,1,0.085,none,2024-02-28\r\n25574,1090,AMER,grocery,online,72.09,4,0.004,loyalty,2024-09-25\r\n25575,1507,EMEA,electronics,online,22.27,1,0.203,coupon,2024-06-23\r\n25576,1761,EMEA,home,retail,109.38,4,0.209,none,2024-12-28\r\n25577,1027,APAC,grocery,online,68.51,5,0.086,bundle,2024-09-15\r\n25578,1768,AMER,sports,online,91.14,7,0.185,none,2024-02-01\r\n25579,1713,EMEA,home,mobile,167.38,5,0.224,none,2024-09-19\r\n25580,2162,EMEA,electronics,mobile,36.44,8,0.237,bundle,2024-04-09\r\n25581,2086,APAC,toys,retail,49.38,7,0.239,none,2024-03-26\r\n25582,2134,AMER,sports,online,39.35,2,0.011,loyalty,2024-03-22\r\n25583,1748,APAC,toys,retail,18.32,1,0.098,none,2024-04-01\r\n25584,1427,EMEA,fashion,online,35.26,7,0.237,coupon,2024-03-07\r\n25585,1780,APAC,fashion,online,38.14,1,0.040,bundle,2024-06-23\r\n25586,2432,AMER,fashion,retail,58.78,4,0.146,bundle,2024-09-20\r\n25587,2061,EMEA,fashion,mobile,21.79,5,0.207,coupon,2024-01-08\r\n25588,1646,APAC,grocery,online,66.26,1,0.174,none,2024-04-02\r\n25589,1389,LATAM,home,retail,37.27,5,0.213,bundle,2024-04-25\r\n25590,1131,APAC,electronics,retail,58.92,7,0.021,bundle,2024-09-02\r\n25591,1839,APAC,grocery,online,173.06,3,0.124,loyalty,2024-11-03\r\n25592,1472,AMER,home,mobile,99.82,7,0.103,bundle,2024-05-08\r\n25593,1722,EMEA,sports,partner,64.40,2,0.147,none,2024-12-10\r\n25594,2264,LATAM,home,online,40.71,4,0.225,none,2024-06-24\r\n25595,1214,EMEA,grocery,retail,85.21,6,0.116,none,2024-10-16\r\n25596,1237,LATAM,grocery,retail,42.31,8,0.224,coupon,2024-12-02\r\n25597,2247,LATAM,grocery,online,136.83,8,0.186,bundle,2024-09-13\r\n25598,2290,LATAM,fashion,online,50.53,5,0.
053,none,2024-07-14\r\n25599,1818,AMER,toys,retail,41.14,3,0.109,bundle,2024-12-04\r\n25600,2432,AMER,home,online,62.76,1,0.064,bundle,2024-12-23\r\n25601,1241,APAC,toys,online,134.60,7,0.036,none,2024-10-12\r\n25602,1623,AMER,electronics,mobile,32.76,5,0.169,none,2024-02-08\r\n25603,1352,AMER,home,retail,40.62,8,0.094,none,2024-11-23\r\n25604,2135,EMEA,grocery,retail,70.32,2,0.128,none,2024-03-20\r\n25605,1615,LATAM,grocery,retail,43.77,7,0.006,bundle,2024-09-20\r\n25606,2069,AMER,grocery,online,22.45,6,0.209,none,2024-03-14\r\n25607,2102,APAC,toys,online,59.64,8,0.021,coupon,2024-10-10\r\n25608,2410,EMEA,toys,mobile,80.04,8,0.088,bundle,2024-11-18\r\n25609,1857,LATAM,electronics,partner,69.34,2,0.090,coupon,2024-12-01\r\n25610,1504,AMER,fashion,retail,93.39,2,0.026,coupon,2024-02-13\r\n25611,2066,APAC,toys,online,60.91,8,0.036,coupon,2024-09-02\r\n25612,1673,AMER,home,online,36.45,4,0.095,none,2024-02-10\r\n25613,1951,LATAM,grocery,mobile,35.22,7,0.065,none,2024-11-28\r\n25614,2302,APAC,sports,retail,57.63,3,0.219,loyalty,2024-05-03\r\n25615,1739,AMER,toys,online,98.38,3,0.000,none,2024-03-06\r\n25616,2311,LATAM,toys,mobile,28.92,4,0.157,none,2024-03-15\r\n25617,2253,AMER,electronics,retail,123.72,4,0.059,coupon,2024-01-03\r\n25618,1480,APAC,fashion,retail,35.35,1,0.021,none,2024-11-09\r\n25619,1179,APAC,electronics,retail,35.86,8,0.030,coupon,2024-09-23\r\n25620,2100,APAC,grocery,online,74.99,1,0.203,loyalty,2024-12-20\r\n25621,1204,AMER,grocery,retail,42.33,1,0.132,none,2024-05-22\r\n25622,2374,LATAM,grocery,mobile,49.91,6,0.156,coupon,2024-09-18\r\n25623,2025,EMEA,electronics,retail,62.09,1,0.198,coupon,2024-09-01\r\n25624,1243,AMER,toys,retail,37.31,1,0.003,loyalty,2024-08-02\r\n25625,2332,APAC,electronics,partner,56.21,8,0.044,none,2024-08-19\r\n25626,1440,AMER,grocery,online,57.47,6,0.237,none,2024-11-18\r\n25627,2120,AMER,grocery,mobile,29.53,2,0.183,coupon,2024-04-20\r\n25628,2115,APAC,grocery,online,33.24,7,0.066,none,2024-12-23\r\n25629,2222,LATAM,grocer
y,retail,66.60,6,0.200,none,2024-10-03\r\n25630,1765,EMEA,toys,mobile,63.17,3,0.237,coupon,2024-01-27\r\n25631,2089,EMEA,electronics,online,149.08,4,0.202,bundle,2024-09-25\r\n25632,1961,EMEA,electronics,retail,77.10,3,0.191,loyalty,2024-05-09\r\n25633,2177,AMER,electronics,retail,88.54,1,0.044,loyalty,2024-04-18\r\n25634,1263,AMER,grocery,mobile,102.50,7,0.004,coupon,2024-08-26\r\n25635,1880,LATAM,electronics,online,34.94,5,0.146,coupon,2024-11-25\r\n25636,1315,AMER,electronics,mobile,48.59,3,0.021,bundle,2024-05-15\r\n25637,2377,AMER,electronics,retail,61.61,7,0.025,none,2024-02-26\r\n25638,2211,APAC,home,retail,66.43,6,0.023,coupon,2024-12-09\r\n25639,2427,LATAM,fashion,online,36.44,3,0.165,loyalty,2024-02-01\r\n25640,1098,APAC,home,online,46.61,7,0.168,none,2024-06-20\r\n25641,2132,LATAM,home,retail,61.58,1,0.166,none,2024-05-18\r\n25642,2006,APAC,fashion,online,145.67,2,0.139,bundle,2024-03-26\r\n25643,1600,AMER,home,retail,46.65,5,0.066,coupon,2024-09-12\r\n25644,2016,LATAM,home,retail,60.65,6,0.174,none,2024-08-22\r\n25645,2348,EMEA,electronics,online,62.56,2,0.178,none,2024-05-12\r\n25646,2380,AMER,grocery,mobile,39.49,5,0.120,none,2024-11-18\r\n25647,2009,LATAM,home,online,47.49,2,0.168,none,2024-09-26\r\n25648,2277,EMEA,home,online,158.19,1,0.155,loyalty,2024-08-01\r\n25649,2335,EMEA,sports,online,136.75,7,0.247,coupon,2024-11-25\r\n25650,1658,AMER,fashion,mobile,80.32,2,0.055,coupon,2024-07-24\r\n25651,2142,LATAM,fashion,online,54.07,2,0.039,coupon,2024-09-18\r\n25652,2081,APAC,home,mobile,46.22,2,0.121,none,2024-10-12\r\n25653,1508,LATAM,sports,online,59.89,2,0.054,loyalty,2024-10-21\r\n25654,1689,LATAM,fashion,online,47.54,5,0.076,coupon,2024-04-06\r\n25655,2211,APAC,toys,online,72.56,3,0.093,none,2024-04-15\r\n25656,1398,APAC,toys,online,66.60,6,0.243,bundle,2024-02-25\r\n25657,2020,AMER,electronics,online,33.97,4,0.088,none,2024-01-05\r\n25658,1786,APAC,home,online,103.61,7,0.243,none,2024-04-07\r\n25659,1994,LATAM,electronics,retail,60.57,8,0.141,non
e,2024-12-26\r\n25660,1489,AMER,fashion,online,60.30,7,0.155,none,2024-07-20\r\n25661,1988,AMER,electronics,online,144.67,2,0.010,none,2024-09-23\r\n25662,2148,EMEA,toys,retail,70.97,8,0.042,none,2024-04-01\r\n25663,2267,AMER,home,retail,32.43,7,0.202,none,2024-11-21\r\n25664,1953,EMEA,grocery,online,46.96,5,0.056,none,2024-09-06\r\n25665,1827,EMEA,fashion,online,49.14,7,0.014,none,2024-06-19\r\n25666,1051,EMEA,grocery,online,82.70,3,0.221,bundle,2024-10-09\r\n25667,1681,LATAM,toys,mobile,72.19,7,0.062,none,2024-12-10\r\n25668,1079,LATAM,toys,online,132.40,6,0.008,none,2024-07-05\r\n25669,2065,EMEA,electronics,retail,59.74,6,0.005,none,2024-05-17\r\n25670,1763,LATAM,toys,online,43.46,3,0.170,loyalty,2024-03-01\r\n25671,1988,AMER,toys,online,64.90,8,0.124,none,2024-04-28\r\n25672,1966,APAC,grocery,online,52.62,1,0.147,bundle,2024-06-25\r\n25673,2477,APAC,electronics,mobile,55.26,7,0.140,none,2024-01-11\r\n25674,1352,AMER,fashion,online,128.88,6,0.202,none,2024-04-10\r\n25675,2470,EMEA,fashion,mobile,48.05,5,0.195,none,2024-10-17\r\n25676,1271,EMEA,grocery,retail,34.41,4,0.098,loyalty,2024-04-20\r\n25677,1186,APAC,electronics,online,99.87,4,0.020,none,2024-10-16\r\n25678,1567,AMER,sports,online,63.40,1,0.191,coupon,2024-01-16\r\n25679,2305,AMER,home,online,62.14,5,0.175,none,2024-08-08\r\n25680,1054,EMEA,toys,retail,41.47,4,0.216,none,2024-05-08\r\n25681,1846,APAC,grocery,mobile,52.24,5,0.005,coupon,2024-03-18\r\n25682,1936,EMEA,toys,retail,74.15,6,0.040,bundle,2024-10-17\r\n25683,1132,EMEA,fashion,mobile,75.85,7,0.096,none,2024-11-10\r\n25684,1489,AMER,grocery,online,54.69,3,0.200,none,2024-02-10\r\n25685,1012,LATAM,sports,retail,24.99,5,0.135,none,2024-10-05\r\n25686,1553,LATAM,toys,retail,67.11,4,0.182,coupon,2024-03-23\r\n25687,1892,LATAM,grocery,online,167.24,2,0.079,none,2024-08-13\r\n25688,2008,APAC,grocery,online,93.01,4,0.112,bundle,2024-07-13\r\n25689,1574,AMER,home,mobile,67.35,8,0.002,loyalty,2024-11-13\r\n25690,2498,LATAM,fashion,online,18.06,5,0.067,none
,2024-12-28\r\n25691,1918,EMEA,electronics,retail,63.32,3,0.155,none,2024-10-11\r\n25692,1209,AMER,toys,retail,52.10,1,0.093,none,2024-03-11\r\n25693,2016,LATAM,toys,retail,25.15,6,0.170,none,2024-03-22\r\n25694,1758,AMER,toys,partner,64.77,2,0.132,none,2024-11-04\r\n25695,2259,AMER,home,retail,32.61,7,0.005,none,2024-04-23\r\n25696,2422,APAC,grocery,retail,98.80,7,0.080,bundle,2024-12-07\r\n25697,2094,AMER,home,online,29.80,5,0.122,none,2024-03-27\r\n25698,1672,APAC,toys,online,48.66,5,0.008,none,2024-03-02\r\n25699,2350,APAC,home,online,105.29,4,0.010,none,2024-08-28\r\n25700,2040,LATAM,electronics,online,113.79,5,0.036,coupon,2024-02-05\r\n25701,2240,LATAM,grocery,retail,44.55,6,0.030,none,2024-09-24\r\n25702,1774,EMEA,toys,online,113.69,6,0.224,loyalty,2024-09-17\r\n25703,1259,EMEA,grocery,online,32.04,4,0.198,bundle,2024-01-11\r\n25704,1717,AMER,fashion,online,76.21,6,0.018,coupon,2024-11-10\r\n25705,1887,LATAM,sports,mobile,52.12,1,0.095,none,2024-06-06\r\n25706,1804,AMER,grocery,retail,26.72,4,0.181,none,2024-06-25\r\n25707,2429,EMEA,sports,retail,38.85,4,0.204,coupon,2024-08-19\r\n25708,2288,AMER,home,online,63.97,3,0.013,none,2024-06-13\r\n25709,1381,LATAM,home,retail,25.58,5,0.123,bundle,2024-04-06\r\n25710,1535,AMER,sports,retail,132.66,8,0.089,bundle,2024-02-02\r\n25711,2392,EMEA,electronics,mobile,20.98,7,0.065,none,2024-11-09\r\n25712,2081,APAC,grocery,online,89.63,8,0.243,none,2024-07-14\r\n25713,1876,LATAM,home,online,123.80,1,0.053,none,2024-11-28\r\n25714,1427,EMEA,electronics,retail,45.98,2,0.169,none,2024-01-05\r\n25715,1837,LATAM,electronics,online,58.36,4,0.014,coupon,2024-12-08\r\n25716,1059,AMER,electronics,online,108.77,7,0.100,none,2024-03-20\r\n25717,1183,AMER,fashion,retail,46.65,8,0.102,coupon,2024-06-11\r\n25718,1879,EMEA,fashion,online,39.17,7,0.140,coupon,2024-09-08\r\n25719,1072,LATAM,toys,partner,73.93,1,0.049,none,2024-04-19\r\n25720,1977,APAC,grocery,mobile,85.49,6,0.112,none,2024-12-14\r\n25721,1639,APAC,toys,retail,47.99,1,0.047
,none,2024-07-22\r\n25722,1913,LATAM,sports,retail,22.20,3,0.090,none,2024-03-20\r\n25723,2147,LATAM,grocery,retail,61.88,4,0.169,none,2024-12-08\r\n25724,2102,APAC,sports,retail,23.04,1,0.126,bundle,2024-12-25\r\n25725,1429,APAC,fashion,retail,78.51,3,0.208,none,2024-03-20\r\n25726,1222,AMER,sports,online,127.29,1,0.113,loyalty,2024-03-03\r\n25727,2370,EMEA,fashion,online,93.63,3,0.228,none,2024-06-24\r\n25728,1063,AMER,grocery,online,60.43,3,0.133,bundle,2024-06-07\r\n25729,2033,LATAM,fashion,retail,14.56,8,0.068,coupon,2024-06-17\r\n25730,1995,LATAM,electronics,mobile,160.74,7,0.078,bundle,2024-05-06\r\n25731,1477,APAC,grocery,mobile,77.35,6,0.233,coupon,2024-10-16\r\n25732,2187,EMEA,grocery,online,83.19,8,0.181,bundle,2024-10-09\r\n25733,1146,LATAM,grocery,online,67.49,2,0.138,none,2024-09-03\r\n25734,1121,EMEA,home,mobile,31.44,2,0.004,bundle,2024-06-14\r\n25735,2113,LATAM,grocery,online,67.64,2,0.162,coupon,2024-07-18\r\n25736,1280,LATAM,sports,online,46.40,3,0.007,none,2024-11-18\r\n25737,2445,APAC,toys,retail,16.33,5,0.002,none,2024-03-04\r\n25738,1374,APAC,electronics,online,39.85,1,0.018,coupon,2024-10-26\r\n25739,1968,EMEA,grocery,online,98.67,5,0.042,coupon,2024-03-05\r\n25740,2070,APAC,electronics,retail,47.88,1,0.145,none,2024-03-17\r\n25741,1446,AMER,grocery,retail,45.79,1,0.089,none,2024-05-15\r\n25742,1685,AMER,electronics,online,35.79,3,0.014,bundle,2024-07-15\r\n25743,1869,AMER,fashion,online,30.53,1,0.132,coupon,2024-05-06\r\n25744,1584,EMEA,home,online,31.46,3,0.050,none,2024-07-12\r\n25745,2279,LATAM,electronics,online,68.49,1,0.020,none,2024-06-07\r\n25746,1999,EMEA,home,retail,55.07,2,0.080,none,2024-06-04\r\n25747,1827,EMEA,toys,online,45.98,5,0.135,bundle,2024-02-28\r\n25748,2441,EMEA,sports,online,44.19,5,0.146,none,2024-02-28\r\n25749,1387,AMER,home,retail,71.32,3,0.188,none,2024-11-10\r\n25750,1454,APAC,sports,mobile,32.91,3,0.028,none,2024-04-11\r\n25751,2077,APAC,toys,online,108.52,7,0.120,bundle,2024-04-22\r\n25752,1123,LATAM,electron
ics,online,22.84,1,0.055,coupon,2024-01-21\r\n25753,2169,EMEA,fashion,online,35.03,2,0.177,bundle,2024-07-06\r\n25754,1266,AMER,electronics,partner,44.65,3,0.035,none,2024-07-03\r\n25755,2153,APAC,fashion,mobile,43.84,7,0.110,bundle,2024-03-20\r\n25756,1712,LATAM,sports,online,42.31,7,0.090,bundle,2024-10-13\r\n25757,1576,EMEA,fashion,retail,80.13,4,0.029,bundle,2024-12-05\r\n25758,1095,APAC,grocery,mobile,78.96,5,0.077,loyalty,2024-06-14\r\n25759,2362,AMER,fashion,online,74.23,4,0.119,bundle,2024-10-09\r\n25760,2487,LATAM,toys,retail,31.20,3,0.189,loyalty,2024-05-28\r\n25761,2111,EMEA,grocery,online,24.72,2,0.045,bundle,2024-08-13\r\n25762,1386,AMER,home,online,76.26,2,0.034,coupon,2024-02-05\r\n25763,1413,LATAM,electronics,partner,101.28,1,0.106,coupon,2024-01-11\r\n25764,1862,LATAM,grocery,retail,51.48,1,0.177,none,2024-09-11\r\n25765,1095,APAC,electronics,retail,37.90,4,0.201,none,2024-01-24\r\n25766,2381,AMER,sports,partner,33.84,7,0.003,none,2024-04-11\r\n25767,1999,EMEA,home,retail,77.72,8,0.126,none,2024-11-22\r\n25768,1855,APAC,home,retail,76.78,5,0.219,none,2024-04-22\r\n25769,1491,EMEA,sports,retail,62.89,2,0.133,coupon,2024-09-24\r\n25770,1494,AMER,electronics,retail,31.22,8,0.070,none,2024-12-11\r\n25771,1934,EMEA,home,online,112.79,7,0.156,none,2024-06-25\r\n25772,2050,APAC,grocery,retail,34.71,8,0.048,coupon,2024-09-28\r\n25773,1311,APAC,sports,mobile,26.61,2,0.057,none,2024-06-18\r\n25774,1961,EMEA,electronics,online,66.83,3,0.095,bundle,2024-10-25\r\n25775,1428,APAC,grocery,online,62.52,5,0.095,none,2024-05-28\r\n25776,2245,APAC,sports,retail,52.99,6,0.081,none,2024-05-16\r\n25777,1244,LATAM,sports,retail,57.76,7,0.247,none,2024-01-20\r\n25778,1862,LATAM,home,online,78.60,1,0.034,coupon,2024-02-17\r\n25779,1703,AMER,home,online,65.81,8,0.036,none,2024-06-01\r\n25780,1881,LATAM,home,online,45.75,3,0.198,bundle,2024-10-01\r\n25781,2360,EMEA,electronics,partner,66.25,4,0.242,bundle,2024-12-03\r\n25782,2214,AMER,fashion,mobile,39.61,1,0.129,bundle,2024-
10-23\r\n25783,1366,APAC,fashion,retail,63.81,8,0.187,coupon,2024-06-21\r\n25784,1544,LATAM,grocery,mobile,78.99,4,0.189,none,2024-03-13\r\n25785,1630,APAC,electronics,retail,41.00,5,0.147,none,2024-12-16\r\n25786,1647,LATAM,fashion,retail,60.31,1,0.178,none,2024-12-18\r\n25787,1150,LATAM,sports,online,33.43,1,0.123,none,2024-03-09\r\n25788,1983,LATAM,grocery,retail,113.67,1,0.245,none,2024-09-07\r\n25789,1571,EMEA,toys,retail,62.52,8,0.240,none,2024-03-13\r\n25790,1084,AMER,grocery,mobile,49.79,2,0.012,bundle,2024-10-08\r\n25791,2416,LATAM,electronics,retail,116.57,1,0.173,none,2024-07-15\r\n25792,1317,EMEA,home,mobile,47.82,3,0.005,bundle,2024-02-18\r\n25793,1131,APAC,electronics,online,43.30,3,0.126,bundle,2024-01-15\r\n25794,1860,EMEA,sports,retail,72.67,5,0.086,none,2024-06-23\r\n25795,1642,EMEA,home,online,38.82,8,0.104,bundle,2024-06-01\r\n25796,2421,AMER,fashion,online,74.60,2,0.127,none,2024-03-11\r\n25797,1232,LATAM,grocery,retail,76.95,7,0.238,bundle,2024-07-08\r\n25798,1732,LATAM,home,online,44.15,5,0.010,coupon,2024-03-07\r\n25799,1060,LATAM,electronics,online,73.46,3,0.034,none,2024-07-08\r\n25800,1520,APAC,home,retail,48.92,1,0.114,none,2024-11-09\r\n25801,1180,AMER,grocery,online,24.53,5,0.185,none,2024-05-06\r\n25802,2038,LATAM,toys,retail,40.15,1,0.250,none,2024-03-26\r\n25803,1131,APAC,grocery,retail,38.68,8,0.228,none,2024-08-01\r\n25804,2233,EMEA,home,retail,109.51,1,0.049,none,2024-06-25\r\n25805,2163,EMEA,toys,mobile,91.93,5,0.191,none,2024-07-16\r\n25806,1692,LATAM,grocery,mobile,20.67,4,0.171,bundle,2024-11-02\r\n25807,1359,LATAM,grocery,retail,46.40,8,0.113,coupon,2024-07-08\r\n25808,1586,LATAM,fashion,mobile,58.36,3,0.033,none,2024-12-26\r\n25809,2467,AMER,home,retail,57.36,7,0.248,none,2024-02-09\r\n25810,2101,APAC,electronics,retail,159.08,5,0.051,none,2024-04-04\r\n25811,2007,LATAM,electronics,online,45.87,6,0.011,none,2024-03-14\r\n25812,1201,LATAM,fashion,online,41.37,5,0.149,none,2024-10-20\r\n25813,1431,APAC,fashion,retail,26.46,2,0
.169,none,2024-02-02\r\n25814,2063,APAC,electronics,online,22.07,6,0.193,coupon,2024-06-19\r\n25815,1632,LATAM,grocery,mobile,61.54,1,0.056,none,2024-05-05\r\n25816,2254,LATAM,grocery,mobile,80.90,8,0.050,none,2024-08-09\r\n25817,2109,EMEA,fashion,retail,50.49,8,0.168,bundle,2024-03-08\r\n25818,1383,AMER,grocery,online,21.37,1,0.164,none,2024-06-20\r\n25819,1680,LATAM,grocery,mobile,78.95,6,0.129,none,2024-08-26\r\n25820,1968,EMEA,grocery,online,58.90,2,0.149,coupon,2024-07-13\r\n25821,2382,LATAM,grocery,online,28.15,6,0.180,coupon,2024-05-07\r\n25822,1997,APAC,electronics,retail,80.40,3,0.059,none,2024-04-28\r\n25823,2186,LATAM,fashion,online,85.10,8,0.069,none,2024-02-13\r\n25824,1601,APAC,grocery,retail,79.22,4,0.130,none,2024-03-10\r\n25825,1806,APAC,sports,online,80.40,3,0.175,coupon,2024-04-09\r\n25826,1768,AMER,electronics,retail,78.62,5,0.022,none,2024-08-07\r\n25827,1956,APAC,sports,mobile,48.45,5,0.106,bundle,2024-09-24\r\n25828,1076,LATAM,grocery,online,12.50,2,0.190,none,2024-06-19\r\n25829,2089,EMEA,home,retail,59.49,1,0.199,none,2024-03-22\r\n25830,1110,LATAM,grocery,online,43.96,7,0.170,coupon,2024-05-22\r\n25831,1325,APAC,grocery,retail,124.36,8,0.121,none,2024-06-10\r\n25832,1777,AMER,fashion,online,26.76,7,0.201,bundle,2024-07-24\r\n25833,1842,LATAM,sports,retail,106.33,4,0.143,coupon,2024-02-20\r\n25834,1879,EMEA,toys,retail,289.38,2,0.245,none,2024-01-06\r\n25835,1482,AMER,sports,online,69.51,7,0.154,none,2024-10-21\r\n25836,1607,LATAM,sports,retail,133.98,2,0.034,coupon,2024-02-22\r\n25837,1374,APAC,electronics,retail,69.82,4,0.071,none,2024-07-11\r\n25838,1757,EMEA,home,retail,44.82,4,0.159,none,2024-01-17\r\n25839,2430,APAC,sports,online,40.75,5,0.170,loyalty,2024-09-07\r\n25840,1298,LATAM,grocery,online,58.60,6,0.066,none,2024-11-25\r\n25841,2024,AMER,fashion,retail,32.45,8,0.042,none,2024-08-19\r\n25842,2458,EMEA,grocery,online,66.59,1,0.115,bundle,2024-06-24\r\n25843,2192,APAC,grocery,online,37.15,5,0.076,none,2024-05-17\r\n25844,1717,AMER,
grocery,online,41.99,6,0.059,coupon,2024-05-09\r\n25845,1587,LATAM,grocery,online,46.11,6,0.161,none,2024-12-01\r\n25846,2421,AMER,home,online,45.06,2,0.152,none,2024-06-22\r\n25847,2336,APAC,toys,retail,34.83,4,0.153,bundle,2024-03-13\r\n25848,2384,LATAM,home,online,63.19,3,0.090,none,2024-12-25\r\n25849,1621,APAC,home,online,68.09,1,0.150,coupon,2024-11-22\r\n25850,1982,EMEA,sports,online,66.16,5,0.088,none,2024-08-01\r\n25851,2057,APAC,grocery,retail,77.06,3,0.081,bundle,2024-09-26\r\n25852,1916,AMER,grocery,online,152.80,1,0.112,none,2024-07-25\r\n25853,1967,EMEA,sports,retail,34.77,6,0.034,coupon,2024-09-09\r\n25854,1107,APAC,grocery,retail,57.82,6,0.102,loyalty,2024-07-04\r\n25855,1162,AMER,electronics,online,123.27,7,0.145,coupon,2024-11-24\r\n25856,2257,AMER,electronics,retail,76.89,4,0.185,none,2024-11-21\r\n25857,1036,EMEA,grocery,online,38.12,6,0.202,none,2024-03-24\r\n25858,1827,EMEA,electronics,online,41.77,7,0.233,none,2024-02-13\r\n25859,1339,EMEA,electronics,mobile,67.78,2,0.109,none,2024-11-19\r\n25860,1459,LATAM,electronics,online,88.85,8,0.090,none,2024-08-24\r\n25861,1897,AMER,grocery,online,41.62,3,0.130,none,2024-03-24\r\n25862,2066,APAC,electronics,retail,50.49,5,0.179,none,2024-10-10\r\n25863,2041,LATAM,grocery,online,68.04,2,0.022,loyalty,2024-02-02\r\n25864,1398,APAC,electronics,retail,60.65,8,0.021,none,2024-12-07\r\n25865,1146,LATAM,fashion,mobile,134.80,1,0.089,loyalty,2024-01-24\r\n25866,2008,APAC,grocery,partner,75.96,3,0.153,none,2024-07-12\r\n25867,2003,LATAM,grocery,retail,86.60,7,0.093,none,2024-03-20\r\n25868,1564,APAC,home,online,51.58,2,0.096,none,2024-02-17\r\n25869,1062,EMEA,grocery,partner,63.00,8,0.232,coupon,2024-07-24\r\n25870,1235,EMEA,grocery,online,42.98,3,0.020,coupon,2024-09-25\r\n25871,2312,APAC,home,retail,78.34,7,0.156,none,2024-11-04\r\n25872,2234,LATAM,electronics,online,73.49,4,0.161,none,2024-07-21\r\n25873,1754,EMEA,electronics,online,58.83,5,0.062,loyalty,2024-06-14\r\n25874,2431,LATAM,grocery,retail,34.34,1,
0.131,coupon,2024-09-27\r\n25875,1384,LATAM,grocery,online,36.42,6,0.119,loyalty,2024-12-18\r\n25876,1919,EMEA,toys,retail,160.07,5,0.134,coupon,2024-11-03\r\n25877,1614,EMEA,grocery,online,26.15,4,0.116,coupon,2024-07-05\r\n25878,2391,EMEA,electronics,retail,70.73,6,0.140,none,2024-11-17\r\n25879,1813,EMEA,grocery,online,103.41,2,0.035,none,2024-04-14\r\n25880,1408,AMER,grocery,retail,61.40,2,0.055,none,2024-08-21\r\n25881,1909,APAC,grocery,retail,57.08,8,0.133,coupon,2024-04-14\r\n25882,1881,LATAM,home,online,92.55,3,0.091,none,2024-09-21\r\n25883,1922,EMEA,grocery,retail,70.65,7,0.006,coupon,2024-03-07\r\n25884,2213,APAC,fashion,online,65.08,3,0.229,coupon,2024-07-11\r\n25885,1565,AMER,electronics,retail,92.27,1,0.002,none,2024-11-01\r\n25886,1704,AMER,electronics,online,115.44,7,0.057,none,2024-11-24\r\n25887,2448,APAC,electronics,online,62.71,6,0.135,none,2024-10-10\r\n25888,1281,AMER,fashion,online,44.45,3,0.139,coupon,2024-05-01\r\n25889,1589,AMER,electronics,retail,53.35,6,0.105,none,2024-11-21\r\n25890,1218,AMER,home,retail,69.15,4,0.177,none,2024-01-18\r\n25891,1926,AMER,grocery,mobile,67.60,6,0.068,bundle,2024-12-12\r\n25892,1601,APAC,toys,mobile,26.40,6,0.205,none,2024-01-13\r\n25893,1243,AMER,fashion,partner,47.97,4,0.116,none,2024-02-14\r\n25894,2084,LATAM,grocery,online,53.36,8,0.159,none,2024-03-17\r\n25895,1726,EMEA,grocery,online,71.47,2,0.102,coupon,2024-05-04\r\n25896,1241,APAC,grocery,retail,78.21,1,0.241,coupon,2024-12-28\r\n25897,2436,LATAM,electronics,retail,29.88,3,0.040,bundle,2024-03-02\r\n25898,1139,EMEA,electronics,online,84.71,7,0.177,loyalty,2024-01-11\r\n25899,1497,EMEA,grocery,online,86.11,6,0.001,coupon,2024-11-19\r\n25900,2176,AMER,fashion,retail,190.29,2,0.138,none,2024-12-16\r\n25901,1981,EMEA,home,online,46.80,1,0.080,none,2024-04-19\r\n25902,1349,APAC,electronics,retail,22.06,4,0.067,none,2024-06-01\r\n25903,2355,EMEA,fashion,retail,29.39,6,0.129,none,2024-03-03\r\n25904,1800,APAC,grocery,online,37.42,6,0.188,bundle,2024-05-03\
r\n25905,2342,AMER,fashion,online,120.43,3,0.011,loyalty,2024-04-21\r\n25906,1928,AMER,grocery,retail,149.87,3,0.007,coupon,2024-05-25\r\n25907,1907,EMEA,electronics,online,45.90,8,0.145,coupon,2024-02-02\r\n25908,1460,LATAM,grocery,retail,115.15,2,0.005,loyalty,2024-04-16\r\n25909,2359,LATAM,electronics,retail,25.83,8,0.032,none,2024-03-12\r\n25910,1764,LATAM,toys,mobile,20.12,4,0.007,loyalty,2024-10-06\r\n25911,2076,AMER,toys,online,68.32,5,0.072,bundle,2024-09-11\r\n25912,2256,AMER,sports,online,31.39,1,0.227,none,2024-07-06\r\n25913,1688,LATAM,home,online,53.85,6,0.047,coupon,2024-07-19\r\n25914,1143,LATAM,home,retail,67.61,4,0.071,bundle,2024-06-16\r\n25915,1166,AMER,fashion,online,84.00,4,0.249,none,2024-10-16\r\n25916,1145,AMER,sports,online,77.28,3,0.074,none,2024-02-16\r\n25917,2172,EMEA,sports,retail,54.57,3,0.054,none,2024-04-02\r\n25918,2312,APAC,sports,retail,48.67,4,0.208,none,2024-12-21\r\n25919,1040,LATAM,grocery,online,47.46,8,0.178,none,2024-09-11\r\n25920,1560,AMER,fashion,online,89.10,7,0.107,coupon,2024-10-28\r\n25921,1101,AMER,home,partner,30.52,1,0.148,coupon,2024-09-25\r\n25922,2461,LATAM,grocery,retail,129.88,2,0.130,none,2024-07-27\r\n25923,1290,EMEA,home,partner,109.55,6,0.100,none,2024-02-11\r\n25924,1586,LATAM,electronics,online,35.49,4,0.129,bundle,2024-10-20\r\n25925,1788,AMER,home,online,77.80,3,0.017,bundle,2024-04-22\r\n25926,2172,EMEA,electronics,retail,44.60,8,0.111,loyalty,2024-10-04\r\n25927,1350,LATAM,sports,retail,35.40,8,0.041,coupon,2024-11-24\r\n25928,1757,EMEA,grocery,online,52.03,1,0.020,loyalty,2024-05-27\r\n25929,1381,LATAM,electronics,online,58.66,7,0.075,coupon,2024-05-28\r\n25930,1187,AMER,sports,retail,35.90,6,0.166,loyalty,2024-03-16\r\n25931,1911,LATAM,home,retail,67.69,1,0.005,coupon,2024-03-16\r\n25932,1161,AMER,grocery,online,126.82,1,0.042,none,2024-06-28\r\n25933,1813,EMEA,electronics,retail,105.95,2,0.223,bundle,2024-04-04\r\n25934,1020,APAC,grocery,mobile,53.94,7,0.115,none,2024-02-22\r\n25935,1061,APAC,spo
rts,retail,38.90,6,0.062,none,2024-11-13\r\n25936,1367,AMER,grocery,online,98.29,7,0.008,none,2024-12-17\r\n25937,1599,APAC,fashion,retail,103.52,3,0.035,none,2024-07-01\r\n25938,1885,EMEA,grocery,retail,43.68,2,0.198,bundle,2024-02-23\r\n25939,2048,LATAM,fashion,online,91.49,4,0.179,none,2024-08-24\r\n25940,1406,LATAM,home,mobile,107.16,8,0.132,none,2024-12-10\r\n25941,2303,EMEA,toys,online,65.15,1,0.173,loyalty,2024-09-17\r\n25942,1945,AMER,toys,online,97.82,4,0.210,none,2024-05-13\r\n25943,2312,APAC,electronics,mobile,93.45,4,0.246,none,2024-05-20\r\n25944,2121,APAC,grocery,online,41.49,1,0.110,none,2024-04-16\r\n25945,1014,EMEA,electronics,retail,40.04,6,0.160,coupon,2024-08-21\r\n25946,1892,LATAM,fashion,online,28.65,6,0.025,none,2024-02-20\r\n25947,1981,EMEA,fashion,retail,99.92,4,0.173,bundle,2024-08-13\r\n25948,2385,APAC,fashion,online,37.86,1,0.206,coupon,2024-11-15\r\n25949,2323,AMER,grocery,online,37.88,3,0.125,loyalty,2024-06-01\r\n25950,1195,AMER,grocery,online,48.72,4,0.061,coupon,2024-09-24\r\n25951,1598,EMEA,grocery,online,82.71,5,0.069,none,2024-04-25\r\n25952,2264,LATAM,home,online,38.16,2,0.015,none,2024-05-15\r\n25953,2343,EMEA,grocery,mobile,38.23,7,0.238,none,2024-01-09\r\n25954,1920,LATAM,grocery,online,35.66,1,0.160,none,2024-09-04\r\n25955,2016,LATAM,toys,online,35.19,4,0.232,none,2024-04-27\r\n25956,1504,AMER,fashion,online,45.51,7,0.041,none,2024-11-27\r\n25957,2430,APAC,sports,online,10.66,7,0.131,loyalty,2024-10-23\r\n25958,1515,EMEA,toys,partner,47.54,4,0.182,none,2024-05-02\r\n25959,2200,LATAM,grocery,retail,99.36,1,0.156,coupon,2024-10-06\r\n25960,1409,APAC,sports,retail,86.90,1,0.082,bundle,2024-08-28\r\n25961,2322,AMER,grocery,online,54.52,3,0.059,coupon,2024-12-14\r\n25962,2421,AMER,toys,online,59.97,1,0.114,coupon,2024-12-21\r\n25963,2387,EMEA,sports,online,53.54,5,0.240,none,2024-08-12\r\n25964,2231,LATAM,electronics,retail,326.54,1,0.165,coupon,2024-06-11\r\n25965,1331,AMER,grocery,retail,51.58,7,0.141,bundle,2024-07-08\r\n25966
,2080,LATAM,home,retail,76.78,1,0.162,loyalty,2024-11-07\r\n25967,1882,AMER,home,online,31.40,6,0.038,coupon,2024-12-08\r\n25968,1765,EMEA,fashion,online,35.14,2,0.231,bundle,2024-10-18\r\n25969,2159,AMER,fashion,online,51.87,7,0.065,none,2024-03-20\r\n25970,2221,LATAM,fashion,retail,91.72,6,0.149,bundle,2024-02-08\r\n25971,1331,AMER,electronics,retail,30.37,4,0.071,loyalty,2024-06-15\r\n25972,1947,EMEA,home,retail,286.12,1,0.170,none,2024-04-08\r\n25973,1632,LATAM,home,online,30.55,6,0.183,none,2024-06-02\r\n25974,1564,APAC,sports,online,48.51,5,0.055,coupon,2024-06-28\r\n25975,1722,EMEA,home,online,67.01,1,0.159,bundle,2024-06-24\r\n25976,1280,LATAM,sports,retail,80.04,8,0.130,none,2024-01-14\r\n25977,1433,EMEA,toys,retail,117.26,4,0.188,none,2024-07-14\r\n25978,1397,LATAM,electronics,retail,127.33,5,0.249,coupon,2024-05-21\r\n25979,1920,LATAM,electronics,retail,63.68,3,0.182,none,2024-04-28\r\n25980,1879,EMEA,home,online,107.36,1,0.154,none,2024-07-16\r\n25981,1178,EMEA,fashion,online,40.01,7,0.231,none,2024-04-27\r\n25982,1323,EMEA,grocery,online,67.08,5,0.058,coupon,2024-10-07\r\n25983,1403,APAC,fashion,online,47.90,4,0.228,none,2024-05-03\r\n25984,2290,LATAM,fashion,retail,42.82,1,0.185,none,2024-11-12\r\n25985,2043,EMEA,fashion,retail,80.88,3,0.232,none,2024-03-26\r\n25986,2287,EMEA,grocery,online,55.96,6,0.124,loyalty,2024-11-19\r\n25987,1705,AMER,grocery,online,71.14,2,0.096,none,2024-08-01\r\n25988,1126,LATAM,toys,retail,44.23,7,0.203,bundle,2024-12-20\r\n25989,1594,LATAM,electronics,online,77.72,4,0.212,none,2024-03-22\r\n25990,2280,EMEA,sports,mobile,47.82,1,0.233,none,2024-08-03\r\n25991,1391,LATAM,sports,retail,43.38,4,0.004,bundle,2024-11-28\r\n25992,2443,LATAM,grocery,online,90.87,1,0.161,none,2024-10-27\r\n25993,1836,LATAM,grocery,retail,116.72,8,0.052,none,2024-04-02\r\n25994,1283,APAC,electronics,online,51.40,3,0.102,none,2024-12-08\r\n25995,2396,AMER,sports,retail,41.99,8,0.149,bundle,2024-12-04\r\n25996,1695,LATAM,fashion,online,19.09,2,0.137,no
ne,2024-10-19\r\n25997,1230,EMEA,electronics,online,60.16,7,0.093,none,2024-08-05\r\n25998,2412,LATAM,grocery,online,128.41,3,0.087,none,2024-01-26\r\n25999,1334,APAC,grocery,mobile,41.44,1,0.025,none,2024-07-06\r\n26000,1959,EMEA,home,online,43.02,5,0.219,loyalty,2024-11-20\r\n26001,1715,AMER,home,online,42.16,7,0.014,none,2024-08-07\r\n26002,1971,EMEA,sports,online,31.94,7,0.035,coupon,2024-11-28\r\n26003,1818,AMER,toys,online,47.37,3,0.213,loyalty,2024-02-17\r\n26004,1939,LATAM,home,retail,122.79,6,0.123,bundle,2024-11-24\r\n26005,1455,APAC,home,mobile,82.42,3,0.223,none,2024-02-11\r\n26006,2023,LATAM,electronics,retail,99.57,7,0.126,coupon,2024-02-22\r\n26007,1468,AMER,toys,retail,115.61,3,0.211,loyalty,2024-02-01\r\n26008,1569,APAC,electronics,mobile,48.89,5,0.174,loyalty,2024-07-09\r\n26009,1988,AMER,electronics,online,62.73,6,0.210,loyalty,2024-04-28\r\n26010,2429,EMEA,home,mobile,55.30,6,0.034,coupon,2024-02-02\r\n26011,2061,EMEA,grocery,mobile,87.76,4,0.145,bundle,2024-11-23\r\n26012,1482,AMER,grocery,online,37.54,4,0.078,bundle,2024-06-28\r\n26013,1449,EMEA,grocery,online,83.59,2,0.009,none,2024-08-01\r\n26014,1435,AMER,electronics,retail,32.56,7,0.160,none,2024-02-10\r\n26015,1048,EMEA,home,mobile,39.86,3,0.137,none,2024-10-16\r\n26016,1348,AMER,fashion,online,58.74,6,0.080,coupon,2024-02-04\r\n26017,1665,AMER,home,retail,149.55,3,0.136,none,2024-12-19\r\n26018,1609,LATAM,sports,partner,96.10,2,0.036,coupon,2024-03-03\r\n26019,2062,EMEA,sports,retail,72.74,1,0.057,loyalty,2024-03-27\r\n26020,2234,LATAM,sports,online,84.03,5,0.073,bundle,2024-03-12\r\n26021,1548,EMEA,sports,retail,99.04,8,0.074,loyalty,2024-05-12\r\n26022,2298,APAC,electronics,online,32.54,8,0.074,none,2024-07-22\r\n26023,1064,AMER,home,mobile,32.72,6,0.040,bundle,2024-04-23\r\n26024,1139,EMEA,toys,online,27.60,8,0.115,none,2024-11-14\r\n26025,2100,APAC,toys,retail,124.28,1,0.046,none,2024-03-21\r\n26026,1202,APAC,grocery,partner,70.59,2,0.245,none,2024-10-28\r\n26027,1187,AMER,electronics
,mobile,50.31,8,0.128,none,2024-10-20\r\n26028,1112,APAC,sports,retail,93.79,2,0.026,none,2024-10-15\r\n26029,2087,LATAM,grocery,partner,46.97,7,0.237,none,2024-09-18\r\n26030,2428,LATAM,grocery,retail,30.84,1,0.151,coupon,2024-02-18\r\n26031,1266,AMER,grocery,retail,65.86,2,0.153,coupon,2024-10-10\r\n26032,1922,EMEA,sports,mobile,77.43,7,0.105,bundle,2024-06-19\r\n26033,2333,APAC,fashion,partner,108.91,7,0.001,none,2024-04-26\r\n26034,1546,EMEA,fashion,mobile,59.15,7,0.131,bundle,2024-10-27\r\n26035,2280,EMEA,grocery,online,123.54,1,0.039,none,2024-10-22\r\n26036,1344,EMEA,fashion,online,46.66,2,0.104,none,2024-12-20\r\n26037,2276,AMER,grocery,online,34.09,1,0.196,coupon,2024-03-04\r\n26038,1614,EMEA,home,online,66.62,8,0.243,none,2024-06-19\r\n26039,1721,EMEA,sports,partner,68.45,7,0.176,none,2024-11-09\r\n26040,1334,APAC,grocery,online,39.35,3,0.189,bundle,2024-06-27\r\n26041,1950,LATAM,grocery,mobile,75.52,2,0.173,none,2024-08-15\r\n26042,1427,EMEA,grocery,online,28.09,6,0.093,loyalty,2024-01-10\r\n26043,1029,EMEA,toys,online,273.49,2,0.067,none,2024-05-14\r\n26044,2239,EMEA,grocery,retail,54.51,4,0.165,none,2024-10-18\r\n26045,2272,EMEA,electronics,retail,29.55,8,0.098,none,2024-03-27\r\n26046,1865,LATAM,grocery,online,36.02,5,0.036,none,2024-07-27\r\n26047,2438,AMER,electronics,online,56.42,6,0.232,none,2024-09-16\r\n26048,1007,APAC,fashion,online,58.66,4,0.007,bundle,2024-01-08\r\n26049,2442,APAC,fashion,online,46.68,2,0.177,coupon,2024-04-07\r\n26050,1394,LATAM,electronics,retail,49.50,7,0.107,loyalty,2024-06-06\r\n26051,1923,LATAM,home,online,27.21,6,0.066,loyalty,2024-01-13\r\n26052,1694,APAC,grocery,online,41.82,2,0.095,coupon,2024-08-04\r\n26053,2230,LATAM,fashion,online,50.71,7,0.157,bundle,2024-02-27\r\n26054,2408,EMEA,toys,online,51.58,4,0.184,none,2024-08-17\r\n26055,1602,EMEA,electronics,online,27.93,4,0.026,none,2024-05-02\r\n26056,2460,AMER,toys,retail,36.27,6,0.228,bundle,2024-04-04\r\n26057,2262,APAC,grocery,retail,69.45,1,0.049,none,2024-12-17\
r\n26058,1804,AMER,electronics,retail,87.12,4,0.156,none,2024-01-04\r\n26059,2291,EMEA,electronics,retail,44.59,6,0.146,coupon,2024-01-22\r\n26060,1439,LATAM,electronics,online,34.77,5,0.133,none,2024-01-13\r\n26061,2039,EMEA,toys,retail,42.30,8,0.242,none,2024-11-15\r\n26062,1155,EMEA,electronics,online,53.78,5,0.247,none,2024-03-16\r\n26063,2396,AMER,sports,retail,22.39,1,0.172,coupon,2024-02-09\r\n26064,1003,APAC,home,partner,58.65,7,0.205,bundle,2024-01-11\r\n26065,1359,LATAM,home,mobile,25.25,8,0.100,bundle,2024-04-23\r\n26066,1244,LATAM,fashion,retail,55.10,3,0.230,none,2024-09-06\r\n26067,1614,EMEA,toys,mobile,53.04,8,0.087,none,2024-02-16\r\n26068,2478,AMER,electronics,retail,40.71,7,0.078,none,2024-11-05\r\n26069,1097,EMEA,home,online,83.21,1,0.060,none,2024-05-08\r\n26070,1249,EMEA,toys,online,37.64,7,0.097,none,2024-08-07\r\n26071,2261,EMEA,home,mobile,53.46,2,0.041,none,2024-12-07\r\n26072,2466,APAC,grocery,retail,94.61,3,0.075,none,2024-03-25\r\n26073,1423,EMEA,fashion,online,49.44,3,0.188,none,2024-06-19\r\n26074,2364,APAC,electronics,retail,84.30,3,0.021,bundle,2024-09-12\r\n26075,1896,EMEA,fashion,retail,89.46,4,0.108,none,2024-07-08\r\n26076,2421,AMER,toys,retail,36.28,7,0.188,none,2024-08-02\r\n26077,1025,EMEA,home,retail,34.15,4,0.185,coupon,2024-09-13\r\n26078,1817,APAC,home,online,58.81,2,0.199,none,2024-06-05\r\n26079,2447,AMER,toys,retail,57.61,4,0.180,none,2024-03-06\r\n26080,2024,AMER,grocery,online,72.08,7,0.140,loyalty,2024-03-20\r\n26081,2107,APAC,toys,retail,66.16,2,0.142,coupon,2024-12-13\r\n26082,1563,EMEA,grocery,retail,44.06,5,0.149,loyalty,2024-01-20\r\n26083,1189,AMER,fashion,retail,33.64,4,0.074,bundle,2024-02-06\r\n26084,1648,APAC,home,online,195.74,6,0.018,none,2024-11-23\r\n26085,1545,AMER,sports,online,56.25,5,0.041,none,2024-01-04\r\n26086,1638,EMEA,grocery,retail,42.30,4,0.094,none,2024-12-01\r\n26087,1654,EMEA,grocery,online,63.48,1,0.108,none,2024-12-21\r\n26088,1167,EMEA,grocery,mobile,38.79,1,0.026,bundle,2024-04-17\r\n2
6089,2078,APAC,electronics,online,62.09,6,0.084,none,2024-01-25\r\n26090,2196,AMER,grocery,mobile,33.89,5,0.146,none,2024-05-09\r\n26091,1967,EMEA,fashion,online,85.18,7,0.206,coupon,2024-10-05\r\n26092,1567,AMER,fashion,online,65.43,5,0.001,coupon,2024-10-04\r\n26093,1017,AMER,electronics,retail,33.72,1,0.139,none,2024-01-03\r\n26094,1730,AMER,sports,partner,61.84,3,0.069,bundle,2024-08-05\r\n26095,1694,APAC,sports,online,42.79,3,0.168,coupon,2024-11-18\r\n26096,1532,APAC,home,partner,49.54,1,0.148,none,2024-10-01\r\n26097,1301,AMER,grocery,retail,51.59,4,0.103,none,2024-08-13\r\n26098,2082,APAC,home,partner,52.46,4,0.001,coupon,2024-09-27\r\n26099,2010,APAC,grocery,online,93.98,1,0.226,coupon,2024-02-11\r\n26100,1192,EMEA,grocery,online,50.78,2,0.217,none,2024-09-09\r\n26101,1472,AMER,grocery,mobile,173.49,8,0.036,none,2024-10-10\r\n26102,1560,AMER,grocery,online,25.97,1,0.015,none,2024-07-20\r\n26103,1938,APAC,electronics,retail,115.17,3,0.044,bundle,2024-07-17\r\n26104,1125,LATAM,fashion,online,92.79,7,0.198,none,2024-10-13\r\n26105,2031,AMER,fashion,online,37.79,2,0.037,none,2024-12-28\r\n26106,2036,APAC,electronics,mobile,46.33,7,0.214,none,2024-09-16\r\n26107,2455,AMER,sports,online,45.91,2,0.040,none,2024-04-12\r\n26108,2058,LATAM,grocery,online,23.47,3,0.040,none,2024-11-11\r\n26109,1505,EMEA,home,mobile,62.83,1,0.215,none,2024-06-07\r\n26110,1152,LATAM,home,online,106.73,6,0.053,none,2024-04-03\r\n26111,1889,APAC,fashion,retail,96.98,3,0.046,bundle,2024-05-23\r\n26112,2417,LATAM,sports,retail,43.01,8,0.235,coupon,2024-11-04\r\n26113,2133,AMER,sports,retail,54.32,6,0.177,none,2024-02-18\r\n26114,2288,AMER,grocery,online,56.31,3,0.019,loyalty,2024-08-22\r\n26115,2166,AMER,toys,online,74.35,4,0.109,bundle,2024-08-18\r\n26116,1184,AMER,sports,online,27.37,4,0.100,bundle,2024-09-10\r\n26117,1239,APAC,grocery,retail,170.10,1,0.007,coupon,2024-05-05\r\n26118,1993,APAC,electronics,online,18.51,2,0.135,bundle,2024-09-27\r\n26119,1654,EMEA,fashion,online,113.31,4,0.
229,loyalty,2024-01-11\r\n26120,2285,APAC,grocery,online,35.74,2,0.139,coupon,2024-07-09\r\n26121,2470,EMEA,grocery,retail,19.21,3,0.128,bundle,2024-07-26\r\n26122,2286,AMER,home,online,60.40,2,0.210,loyalty,2024-02-19\r\n26123,1099,LATAM,home,online,49.80,3,0.059,bundle,2024-08-21\r\n26124,1604,EMEA,fashion,mobile,48.72,2,0.148,none,2024-12-14\r\n26125,1409,APAC,grocery,online,66.19,4,0.163,none,2024-08-24\r\n26126,1074,LATAM,electronics,online,60.64,7,0.079,none,2024-07-22\r\n26127,1963,AMER,grocery,online,68.72,1,0.132,none,2024-01-15\r\n26128,1555,AMER,grocery,online,75.77,4,0.187,none,2024-08-09\r\n26129,1577,AMER,electronics,retail,99.80,8,0.091,coupon,2024-06-01\r\n26130,2322,AMER,grocery,online,62.91,8,0.243,coupon,2024-06-07\r\n26131,1878,EMEA,fashion,online,77.27,5,0.118,none,2024-06-21\r\n26132,1802,AMER,grocery,mobile,41.14,6,0.141,none,2024-02-18\r\n26133,1791,LATAM,toys,retail,65.12,1,0.218,none,2024-10-28\r\n26134,1951,LATAM,grocery,online,44.21,5,0.160,none,2024-08-25\r\n26135,1033,APAC,grocery,online,31.57,4,0.229,bundle,2024-11-07\r\n26136,1383,AMER,home,mobile,102.39,2,0.144,none,2024-10-12\r\n26137,1959,EMEA,grocery,online,45.28,6,0.070,none,2024-01-25\r\n26138,1683,AMER,toys,online,83.65,4,0.202,none,2024-01-04\r\n26139,2390,AMER,grocery,retail,97.07,5,0.075,loyalty,2024-11-05\r\n26140,2452,LATAM,toys,retail,67.19,1,0.022,none,2024-04-07\r\n26141,1090,AMER,electronics,retail,90.02,7,0.212,bundle,2024-05-16\r\n26142,2178,AMER,electronics,online,91.28,4,0.106,none,2024-02-12\r\n26143,1582,AMER,electronics,online,79.49,1,0.135,loyalty,2024-04-24\r\n26144,1488,AMER,electronics,mobile,28.88,5,0.205,none,2024-04-08\r\n26145,2035,LATAM,home,online,49.90,7,0.127,none,2024-12-14\r\n26146,1629,LATAM,electronics,mobile,127.58,1,0.210,none,2024-05-22\r\n26147,1033,APAC,home,retail,39.89,1,0.191,none,2024-09-10\r\n26148,1253,AMER,grocery,online,46.62,6,0.139,none,2024-06-13\r\n26149,1070,EMEA,grocery,online,30.33,5,0.080,bundle,2024-11-28\r\n26150,1125,LATAM
,electronics,online,38.27,4,0.132,bundle,2024-02-27\r\n26151,1234,AMER,toys,online,101.62,5,0.057,coupon,2024-04-21\r\n26152,1509,AMER,fashion,retail,42.18,2,0.235,none,2024-12-24\r\n26153,1663,LATAM,home,online,31.65,2,0.038,none,2024-12-15\r\n26154,1804,AMER,home,retail,40.29,3,0.171,none,2024-04-28\r\n26155,2370,EMEA,electronics,retail,59.11,2,0.163,none,2024-01-25\r\n26156,2185,EMEA,electronics,retail,56.53,6,0.079,none,2024-03-01\r\n26157,1950,LATAM,fashion,retail,66.32,1,0.233,none,2024-10-28\r\n26158,2053,AMER,home,online,47.61,8,0.110,none,2024-06-05\r\n26159,1567,AMER,sports,retail,23.57,5,0.249,none,2024-10-15\r\n26160,1251,EMEA,home,partner,50.34,4,0.082,bundle,2024-05-02\r\n26161,2273,APAC,toys,retail,21.16,4,0.216,none,2024-07-15\r\n26162,1399,AMER,fashion,retail,118.88,1,0.207,bundle,2024-05-13\r\n26163,1617,AMER,electronics,retail,62.38,2,0.238,none,2024-06-21\r\n26164,1249,EMEA,sports,retail,140.77,3,0.057,none,2024-11-02\r\n26165,1743,LATAM,toys,retail,45.30,2,0.155,none,2024-11-25\r\n26166,1385,LATAM,grocery,online,81.68,1,0.015,bundle,2024-09-15\r\n26167,1241,APAC,fashion,online,56.56,2,0.249,none,2024-06-03\r\n26168,1857,LATAM,grocery,online,59.97,5,0.125,bundle,2024-12-21\r\n26169,1867,AMER,electronics,retail,70.81,4,0.071,coupon,2024-02-19\r\n26170,1593,AMER,home,retail,39.78,2,0.106,coupon,2024-06-13\r\n26171,2386,EMEA,fashion,partner,32.53,8,0.091,bundle,2024-12-11\r\n26172,1213,EMEA,home,retail,78.37,5,0.063,coupon,2024-01-12\r\n26173,1308,EMEA,grocery,retail,49.87,5,0.120,none,2024-09-05\r\n26174,1058,LATAM,grocery,retail,17.49,5,0.114,none,2024-05-05\r\n26175,1061,APAC,electronics,online,44.78,4,0.129,none,2024-10-28\r\n26176,1191,EMEA,home,online,44.86,4,0.129,none,2024-01-28\r\n26177,1920,LATAM,sports,online,47.74,7,0.178,loyalty,2024-04-15\r\n26178,1449,EMEA,fashion,mobile,39.54,2,0.180,coupon,2024-10-01\r\n26179,1159,LATAM,electronics,partner,44.96,1,0.204,none,2024-12-07\r\n26180,2359,LATAM,toys,online,133.71,6,0.128,none,2024-07-04\r
\n26181,1812,EMEA,sports,retail,20.56,2,0.166,none,2024-12-03\r\n26182,1021,AMER,grocery,retail,36.16,4,0.096,none,2024-11-07\r\n26183,1920,LATAM,fashion,online,70.23,6,0.022,bundle,2024-10-16\r\n26184,1012,LATAM,grocery,online,18.98,6,0.236,loyalty,2024-04-13\r\n26185,2241,APAC,grocery,online,74.69,2,0.200,none,2024-04-19\r\n26186,2055,AMER,electronics,online,28.01,5,0.036,loyalty,2024-01-20\r\n26187,2123,AMER,home,online,85.38,3,0.003,bundle,2024-12-19\r\n26188,1069,APAC,grocery,mobile,125.73,6,0.033,none,2024-09-09\r\n26189,1072,LATAM,grocery,retail,19.26,1,0.171,bundle,2024-03-28\r\n26190,1983,LATAM,home,retail,124.62,1,0.092,none,2024-08-27\r\n26191,2098,AMER,grocery,online,30.88,7,0.200,none,2024-05-16\r\n26192,1301,AMER,electronics,partner,35.65,3,0.222,none,2024-05-10\r\n26193,2068,LATAM,grocery,retail,44.68,7,0.051,loyalty,2024-06-18\r\n26194,1536,LATAM,home,online,145.79,5,0.108,coupon,2024-09-08\r\n26195,1180,AMER,toys,online,37.00,1,0.120,none,2024-06-15\r\n26196,1511,EMEA,electronics,online,90.42,1,0.192,none,2024-04-09\r\n26197,1350,LATAM,grocery,retail,59.70,3,0.010,coupon,2024-06-13\r\n26198,1644,EMEA,home,retail,29.19,8,0.077,none,2024-10-20\r\n26199,1975,EMEA,toys,retail,34.69,8,0.151,none,2024-09-22\r\n26200,1878,EMEA,electronics,retail,35.44,6,0.174,coupon,2024-06-16\r\n26201,1545,AMER,toys,online,73.80,6,0.052,none,2024-05-14\r\n26202,2312,APAC,toys,online,51.46,6,0.072,none,2024-02-21\r\n26203,2086,APAC,fashion,online,35.82,3,0.232,coupon,2024-12-20\r\n26204,1537,LATAM,fashion,retail,64.64,3,0.005,none,2024-06-12\r\n26205,1141,AMER,electronics,online,25.06,4,0.152,none,2024-05-11\r\n26206,1747,EMEA,home,mobile,121.80,4,0.110,none,2024-06-01\r\n26207,1908,AMER,grocery,online,65.40,4,0.138,none,2024-06-06\r\n26208,2318,AMER,electronics,online,44.27,3,0.184,none,2024-04-02\r\n26209,2089,EMEA,grocery,mobile,65.92,2,0.104,none,2024-05-01\r\n26210,1077,AMER,grocery,online,25.75,7,0.170,coupon,2024-04-07\r\n26211,1667,AMER,electronics,retail,69.03,2,0
.222,coupon,2024-04-26\r\n26212,1016,AMER,fashion,online,30.61,2,0.219,loyalty,2024-01-01\r\n26213,1828,EMEA,fashion,online,35.20,1,0.135,coupon,2024-07-08\r\n26214,1637,APAC,electronics,retail,44.91,8,0.026,coupon,2024-12-27\r\n26215,1475,LATAM,home,partner,27.98,7,0.142,none,2024-07-20\r\n26216,1482,AMER,toys,mobile,30.41,8,0.244,none,2024-08-25\r\n26217,1325,APAC,toys,retail,56.37,4,0.126,none,2024-06-01\r\n26218,1057,LATAM,fashion,online,54.17,2,0.129,loyalty,2024-08-27\r\n26219,2180,AMER,toys,retail,38.29,4,0.026,loyalty,2024-05-07\r\n26220,1231,AMER,home,retail,150.55,2,0.194,bundle,2024-03-03\r\n26221,1299,LATAM,grocery,retail,32.73,8,0.132,loyalty,2024-10-14\r\n26222,1056,LATAM,electronics,online,40.65,3,0.061,none,2024-07-15\r\n26223,2197,LATAM,sports,retail,87.46,7,0.035,none,2024-05-12\r\n26224,1539,LATAM,home,retail,42.07,4,0.132,none,2024-04-02\r\n26225,1020,APAC,home,online,58.82,8,0.227,none,2024-09-04\r\n26226,1474,LATAM,electronics,mobile,184.22,4,0.064,bundle,2024-10-08\r\n26227,2289,APAC,grocery,online,52.84,4,0.134,none,2024-07-26\r\n26228,1077,AMER,grocery,retail,71.83,5,0.108,bundle,2024-11-04\r\n26229,1390,APAC,grocery,online,41.32,2,0.247,loyalty,2024-05-27\r\n26230,1605,APAC,grocery,retail,25.47,1,0.205,none,2024-01-03\r\n26231,2411,EMEA,fashion,retail,48.12,4,0.032,loyalty,2024-12-21\r\n26232,2112,LATAM,fashion,retail,45.40,8,0.231,bundle,2024-03-04\r\n26233,1290,EMEA,home,online,45.02,6,0.193,none,2024-02-09\r\n26234,2318,AMER,grocery,online,78.47,2,0.245,loyalty,2024-05-21\r\n26235,1822,EMEA,grocery,partner,141.50,1,0.229,none,2024-05-10\r\n26236,1374,APAC,home,partner,30.44,8,0.021,none,2024-08-08\r\n26237,2318,AMER,toys,online,116.27,2,0.014,loyalty,2024-07-02\r\n26238,1075,AMER,fashion,mobile,74.26,8,0.004,bundle,2024-08-21\r\n26239,2267,AMER,grocery,online,58.88,1,0.036,bundle,2024-09-12\r\n26240,1312,EMEA,electronics,online,41.74,5,0.164,none,2024-04-08\r\n26241,1717,AMER,electronics,mobile,66.57,4,0.180,coupon,2024-09-21\r\n26242,12
90,EMEA,fashion,partner,75.84,6,0.020,none,2024-03-21\r\n26243,1461,LATAM,grocery,retail,86.75,5,0.073,coupon,2024-06-01\r\n26244,2264,LATAM,home,online,53.32,1,0.168,coupon,2024-08-14\r\n26245,2126,APAC,fashion,retail,74.09,1,0.218,bundle,2024-07-25\r\n26246,1512,APAC,toys,retail,57.19,4,0.049,none,2024-04-24\r\n26247,1463,EMEA,grocery,retail,45.15,8,0.070,loyalty,2024-12-23\r\n26248,2084,LATAM,toys,retail,110.91,5,0.023,coupon,2024-04-21\r\n26249,1841,AMER,electronics,online,27.28,6,0.234,none,2024-02-13\r\n26250,1047,APAC,electronics,retail,36.41,8,0.204,none,2024-10-26\r\n26251,1460,LATAM,home,online,50.91,8,0.191,none,2024-01-02\r\n26252,2320,LATAM,grocery,online,41.78,1,0.097,bundle,2024-03-26\r\n26253,1311,APAC,fashion,retail,27.31,3,0.154,bundle,2024-04-18\r\n26254,2328,EMEA,electronics,mobile,61.77,7,0.091,loyalty,2024-10-28\r\n26255,1567,AMER,toys,mobile,52.30,6,0.139,none,2024-04-28\r\n26256,1869,AMER,grocery,retail,92.55,3,0.048,coupon,2024-11-11\r\n26257,1337,APAC,fashion,retail,20.05,2,0.201,none,2024-10-18\r\n26258,2005,APAC,grocery,retail,50.02,1,0.025,none,2024-04-24\r\n26259,1688,LATAM,toys,online,50.41,1,0.203,none,2024-11-11\r\n26260,1098,APAC,fashion,online,80.32,5,0.175,none,2024-07-13\r\n26261,1444,EMEA,toys,online,39.35,3,0.161,loyalty,2024-10-28\r\n26262,2495,EMEA,grocery,mobile,226.97,6,0.018,none,2024-06-16\r\n26263,2238,AMER,electronics,online,123.56,5,0.017,coupon,2024-03-27\r\n26264,2371,LATAM,toys,online,39.08,3,0.026,none,2024-04-27\r\n26265,1624,AMER,home,partner,43.40,7,0.095,none,2024-04-05\r\n26266,1754,EMEA,toys,retail,55.40,8,0.122,bundle,2024-06-18\r\n26267,1452,LATAM,fashion,online,37.67,3,0.239,none,2024-04-19\r\n26268,1125,LATAM,fashion,online,83.50,6,0.134,coupon,2024-02-21\r\n26269,2429,EMEA,fashion,online,63.96,6,0.066,none,2024-02-02\r\n26270,2264,LATAM,grocery,retail,20.08,6,0.236,none,2024-05-13\r\n26271,2171,EMEA,electronics,mobile,74.26,7,0.031,none,2024-03-09\r\n26272,2331,APAC,fashion,online,64.31,3,0.106,loyalty,2
024-10-28\r\n26273,1698,EMEA,grocery,retail,130.86,6,0.241,none,2024-07-23\r\n26274,2176,AMER,fashion,online,24.38,6,0.179,coupon,2024-10-20\r\n26275,2190,LATAM,home,retail,94.82,6,0.149,coupon,2024-05-06\r\n26276,1013,LATAM,home,retail,65.39,7,0.092,none,2024-07-08\r\n26277,2474,LATAM,home,online,60.27,2,0.231,none,2024-02-05\r\n26278,2242,AMER,sports,retail,83.73,2,0.064,none,2024-12-12\r\n26279,1321,EMEA,home,online,34.83,3,0.242,none,2024-02-11\r\n26280,2158,APAC,grocery,retail,30.92,4,0.189,coupon,2024-12-25\r\n26281,1659,APAC,grocery,online,63.09,1,0.201,none,2024-03-20\r\n26282,1684,EMEA,electronics,online,28.01,8,0.191,none,2024-11-21\r\n26283,1145,AMER,home,online,33.16,8,0.123,loyalty,2024-10-07\r\n26284,2251,APAC,grocery,mobile,48.68,1,0.037,none,2024-04-06\r\n26285,1738,LATAM,fashion,online,94.50,8,0.179,coupon,2024-02-28\r\n26286,2275,LATAM,electronics,retail,44.55,8,0.245,coupon,2024-08-27\r\n26287,1618,EMEA,electronics,mobile,75.16,8,0.174,none,2024-06-15\r\n26288,1754,EMEA,sports,online,35.58,2,0.012,coupon,2024-03-22\r\n26289,2088,EMEA,home,online,69.48,5,0.054,none,2024-10-22\r\n26290,1655,LATAM,grocery,mobile,142.20,6,0.247,loyalty,2024-04-17\r\n26291,1419,APAC,home,retail,19.32,6,0.120,coupon,2024-06-08\r\n26292,1174,APAC,sports,partner,35.11,4,0.012,none,2024-07-01\r\n26293,2329,LATAM,grocery,retail,33.42,3,0.034,none,2024-11-05\r\n26294,1043,LATAM,electronics,online,29.80,7,0.194,coupon,2024-11-25\r\n26295,1388,AMER,home,retail,81.92,6,0.144,none,2024-03-21\r\n26296,1781,LATAM,grocery,mobile,60.04,6,0.155,bundle,2024-12-20\r\n26297,1476,APAC,electronics,online,32.58,8,0.053,none,2024-08-16\r\n26298,1867,AMER,home,online,42.47,5,0.130,none,2024-12-11\r\n26299,1414,APAC,fashion,retail,14.53,6,0.164,loyalty,2024-09-22\r\n26300,2082,APAC,electronics,mobile,27.68,4,0.178,none,2024-12-27\r\n26301,1930,AMER,toys,online,35.29,2,0.053,coupon,2024-01-11\r\n26302,1081,AMER,home,retail,107.20,4,0.078,bundle,2024-01-27\r\n26303,1293,AMER,electronics,online,
52.87,3,0.215,none,2024-07-20\r\n26304,2071,APAC,fashion,online,17.57,1,0.110,none,2024-05-05\r\n26305,2011,AMER,electronics,online,108.73,3,0.014,none,2024-08-13\r\n26306,1479,AMER,grocery,online,33.03,7,0.101,bundle,2024-05-10\r\n26307,2009,LATAM,fashion,online,52.35,5,0.119,none,2024-03-08\r\n26308,2062,EMEA,electronics,partner,120.15,4,0.094,none,2024-06-07\r\n26309,2305,AMER,fashion,retail,60.66,4,0.050,coupon,2024-09-15\r\n26310,1695,LATAM,home,online,45.34,1,0.047,coupon,2024-05-03\r\n26311,2025,EMEA,grocery,mobile,96.97,7,0.064,none,2024-09-23\r\n26312,2490,AMER,grocery,retail,84.47,6,0.179,coupon,2024-04-23\r\n26313,2302,APAC,toys,retail,72.15,5,0.119,none,2024-06-27\r\n26314,1773,LATAM,electronics,retail,74.70,3,0.035,coupon,2024-06-20\r\n26315,2324,AMER,electronics,mobile,44.69,2,0.006,loyalty,2024-11-18\r\n26316,1381,LATAM,fashion,partner,78.44,5,0.180,none,2024-03-23\r\n26317,1337,APAC,fashion,online,38.77,1,0.056,none,2024-04-14\r\n26318,1227,AMER,electronics,retail,94.07,1,0.241,none,2024-10-12\r\n26319,1603,EMEA,grocery,retail,106.00,5,0.120,coupon,2024-03-19\r\n26320,1118,AMER,grocery,online,44.34,4,0.185,none,2024-08-24\r\n26321,2121,APAC,fashion,online,73.33,2,0.058,bundle,2024-05-12\r\n26322,1063,AMER,home,online,74.86,3,0.047,none,2024-05-21\r\n26323,1661,LATAM,sports,online,43.71,4,0.215,coupon,2024-07-25\r\n26324,1320,EMEA,sports,online,44.85,2,0.067,none,2024-08-01\r\n26325,2320,LATAM,grocery,online,36.24,8,0.073,none,2024-11-08\r\n26326,1943,AMER,home,online,34.42,6,0.069,coupon,2024-09-03\r\n26327,1386,AMER,toys,online,59.15,4,0.233,coupon,2024-01-06\r\n26328,2418,AMER,home,online,59.87,7,0.118,none,2024-05-10\r\n26329,2313,LATAM,grocery,retail,175.31,7,0.034,bundle,2024-07-20\r\n26330,1488,AMER,sports,retail,44.21,7,0.131,none,2024-05-10\r\n26331,2465,EMEA,electronics,retail,116.11,8,0.004,none,2024-10-02\r\n26332,1847,LATAM,sports,mobile,24.09,5,0.171,none,2024-05-24\r\n26333,1966,APAC,grocery,online,56.86,1,0.060,none,2024-08-22\r\n26334
,2451,APAC,home,retail,38.34,7,0.233,none,2024-02-24\r\n26335,1859,AMER,electronics,online,132.71,4,0.102,none,2024-12-03\r\n26336,1475,LATAM,fashion,online,109.35,4,0.147,coupon,2024-05-23\r\n26337,2133,AMER,fashion,online,45.21,8,0.034,none,2024-09-02\r\n26338,1023,APAC,home,retail,35.79,5,0.033,coupon,2024-03-27\r\n26339,1828,EMEA,sports,retail,87.12,2,0.216,none,2024-09-08\r\n26340,1963,AMER,electronics,retail,119.29,2,0.107,coupon,2024-11-17\r\n26341,1041,APAC,home,online,106.81,7,0.062,bundle,2024-01-20\r\n26342,2421,AMER,fashion,online,25.87,4,0.059,none,2024-10-15\r\n26343,2086,APAC,grocery,partner,49.61,8,0.200,bundle,2024-03-27\r\n26344,1025,EMEA,toys,mobile,57.93,7,0.181,none,2024-12-26\r\n26345,2032,AMER,grocery,retail,55.24,4,0.172,loyalty,2024-02-27\r\n26346,1540,LATAM,sports,mobile,51.06,2,0.198,bundle,2024-07-27\r\n26347,2204,AMER,fashion,online,43.23,1,0.043,none,2024-03-20\r\n26348,2325,LATAM,toys,online,24.89,6,0.237,bundle,2024-03-04\r\n26349,1945,AMER,toys,mobile,31.52,1,0.046,none,2024-03-08\r\n26350,1816,EMEA,grocery,retail,47.09,7,0.128,none,2024-04-27\r\n26351,2122,AMER,electronics,online,76.94,7,0.229,bundle,2024-10-02\r\n26352,1290,EMEA,grocery,online,65.20,1,0.094,loyalty,2024-07-08\r\n26353,2367,AMER,home,mobile,25.18,8,0.017,none,2024-03-27\r\n26354,2079,EMEA,toys,online,87.24,3,0.036,none,2024-03-11\r\n26355,1638,EMEA,toys,online,41.95,1,0.057,coupon,2024-02-03\r\n26356,1778,LATAM,toys,mobile,29.22,4,0.050,none,2024-04-27\r\n26357,2101,APAC,home,online,81.03,2,0.208,coupon,2024-04-26\r\n26358,2207,APAC,grocery,mobile,57.03,1,0.131,none,2024-07-13\r\n26359,1515,EMEA,sports,online,48.20,6,0.024,none,2024-05-16\r\n26360,1175,AMER,sports,retail,64.01,4,0.190,bundle,2024-06-26\r\n26361,1706,EMEA,grocery,online,86.84,8,0.101,coupon,2024-06-18\r\n26362,1933,EMEA,grocery,online,52.18,6,0.003,none,2024-03-18\r\n26363,1833,EMEA,grocery,retail,103.94,6,0.157,none,2024-06-08\r\n26364,1624,AMER,electronics,online,84.98,3,0.089,none,2024-01-15\r\n26
365,1357,EMEA,grocery,online,48.94,6,0.020,coupon,2024-05-11\r\n26366,2303,EMEA,fashion,mobile,41.80,2,0.186,none,2024-08-27\r\n26367,2075,LATAM,fashion,retail,100.71,1,0.241,bundle,2024-04-11\r\n26368,2442,APAC,sports,retail,73.35,3,0.057,none,2024-08-20\r\n26369,1402,EMEA,fashion,online,46.56,3,0.098,none,2024-08-11\r\n26370,1240,EMEA,grocery,retail,97.07,2,0.234,bundle,2024-10-16\r\n26371,2121,APAC,fashion,online,31.20,3,0.054,none,2024-07-23\r\n26372,1340,LATAM,sports,online,34.65,1,0.109,coupon,2024-09-02\r\n26373,1757,EMEA,sports,retail,14.66,4,0.060,none,2024-10-08\r\n26374,2243,APAC,sports,online,82.83,7,0.034,bundle,2024-12-16\r\n26375,1113,EMEA,electronics,mobile,72.47,1,0.003,none,2024-06-15\r\n26376,2336,APAC,home,online,48.64,5,0.156,coupon,2024-06-27\r\n26377,2216,AMER,grocery,mobile,24.47,1,0.035,none,2024-11-24\r\n26378,1066,AMER,grocery,retail,39.71,5,0.111,none,2024-11-14\r\n26379,2195,APAC,sports,online,66.15,5,0.017,none,2024-07-12\r\n26380,1009,APAC,fashion,retail,48.35,4,0.001,bundle,2024-05-13\r\n26381,1109,APAC,sports,partner,117.70,3,0.095,none,2024-02-04\r\n26382,2454,LATAM,sports,mobile,26.75,7,0.129,none,2024-01-15\r\n26383,1851,EMEA,sports,online,137.91,4,0.195,none,2024-06-17\r\n26384,1543,AMER,electronics,online,53.36,3,0.048,none,2024-10-13\r\n26385,1506,EMEA,grocery,online,64.60,5,0.046,loyalty,2024-07-11\r\n26386,1505,EMEA,toys,retail,64.35,8,0.176,none,2024-12-08\r\n26387,2239,EMEA,toys,mobile,100.39,4,0.044,none,2024-03-28\r\n26388,1367,AMER,grocery,retail,54.69,1,0.210,none,2024-07-08\r\n26389,1659,APAC,grocery,online,70.70,8,0.114,bundle,2024-10-01\r\n26390,2277,EMEA,grocery,retail,29.45,5,0.004,none,2024-08-13\r\n26391,2030,EMEA,home,mobile,62.39,6,0.156,bundle,2024-07-08\r\n26392,2401,LATAM,toys,mobile,46.78,2,0.098,none,2024-01-05\r\n26393,1518,AMER,fashion,online,65.29,8,0.033,bundle,2024-12-15\r\n26394,1039,AMER,grocery,online,111.47,6,0.114,none,2024-02-22\r\n26395,1949,AMER,electronics,mobile,44.56,4,0.029,none,2024-04-12
\r\n26396,1317,EMEA,grocery,online,29.80,6,0.126,loyalty,2024-09-09\r\n26397,1764,LATAM,toys,retail,33.10,8,0.030,none,2024-08-11\r\n26398,1306,LATAM,fashion,online,81.79,3,0.084,coupon,2024-09-13\r\n26399,1660,AMER,toys,online,98.96,8,0.098,none,2024-08-24\r\n26400,1387,AMER,sports,partner,55.78,4,0.088,coupon,2024-09-21\r\n26401,1033,APAC,grocery,mobile,51.86,2,0.184,coupon,2024-01-18\r\n26402,2050,APAC,fashion,online,80.45,6,0.064,none,2024-09-04\r\n26403,1847,LATAM,sports,retail,56.09,3,0.027,none,2024-01-15\r\n26404,1279,EMEA,fashion,retail,86.40,1,0.119,none,2024-04-25\r\n26405,1663,LATAM,sports,online,102.86,5,0.222,bundle,2024-12-12\r\n26406,1470,LATAM,toys,retail,134.65,2,0.087,none,2024-12-03\r\n26407,1661,LATAM,electronics,retail,49.18,5,0.240,none,2024-03-10\r\n26408,1649,APAC,electronics,online,76.62,4,0.126,coupon,2024-07-23\r\n26409,2091,LATAM,electronics,mobile,36.93,6,0.100,none,2024-10-12\r\n26410,2422,APAC,fashion,online,36.49,6,0.123,none,2024-04-24\r\n26411,1957,AMER,fashion,online,37.65,3,0.016,none,2024-04-19\r\n26412,1241,APAC,home,mobile,42.77,8,0.039,none,2024-10-01\r\n26413,2082,APAC,home,mobile,59.83,7,0.077,bundle,2024-03-02\r\n26414,1532,APAC,toys,online,35.36,4,0.230,none,2024-09-21\r\n26415,1528,EMEA,grocery,online,172.25,5,0.133,none,2024-03-06\r\n26416,1843,EMEA,grocery,retail,116.69,8,0.049,coupon,2024-01-09\r\n26417,2204,AMER,toys,retail,74.36,4,0.235,coupon,2024-08-02\r\n26418,1199,APAC,grocery,retail,54.62,4,0.097,coupon,2024-05-04\r\n26419,2094,AMER,grocery,online,43.48,2,0.096,none,2024-10-28\r\n26420,2447,AMER,grocery,online,35.37,5,0.088,none,2024-03-01\r\n26421,1631,APAC,home,retail,75.44,6,0.049,coupon,2024-04-14\r\n26422,2106,LATAM,home,retail,64.01,1,0.245,none,2024-06-11\r\n26423,2301,EMEA,grocery,retail,33.20,6,0.153,none,2024-03-03\r\n26424,1971,EMEA,grocery,online,107.50,5,0.225,loyalty,2024-07-07\r\n26425,2384,LATAM,home,retail,54.14,1,0.123,none,2024-09-21\r\n26426,1992,LATAM,home,online,106.55,8,0.087,none,2024-09
-03\r\n26427,1576,EMEA,home,online,65.38,6,0.151,none,2024-09-01\r\n26428,1955,AMER,grocery,retail,85.99,6,0.125,none,2024-08-21\r\n26429,2043,EMEA,grocery,retail,55.64,1,0.164,none,2024-06-09\r\n26430,2001,EMEA,sports,online,42.09,3,0.081,none,2024-09-15\r\n26431,1759,EMEA,grocery,online,109.71,8,0.069,none,2024-05-22\r\n26432,1750,LATAM,sports,retail,66.77,7,0.110,loyalty,2024-04-18\r\n26433,1095,APAC,grocery,retail,62.79,2,0.121,loyalty,2024-12-17\r\n26434,1520,APAC,grocery,retail,77.26,3,0.093,bundle,2024-07-12\r\n26435,1627,LATAM,fashion,online,21.61,1,0.166,none,2024-04-14\r\n26436,1040,LATAM,home,mobile,62.23,4,0.071,coupon,2024-06-11\r\n26437,2225,EMEA,sports,retail,93.48,5,0.157,loyalty,2024-01-07\r\n26438,1399,AMER,home,online,39.58,8,0.104,bundle,2024-04-05\r\n26439,1328,APAC,fashion,online,54.01,1,0.029,bundle,2024-10-11\r\n26440,2498,LATAM,electronics,retail,65.79,3,0.033,bundle,2024-07-19\r\n26441,1403,APAC,fashion,online,23.35,5,0.044,coupon,2024-09-25\r\n26442,1796,LATAM,fashion,online,77.91,3,0.020,loyalty,2024-02-10\r\n26443,1373,LATAM,grocery,partner,41.58,3,0.118,none,2024-04-13\r\n26444,1901,AMER,grocery,online,54.16,5,0.093,none,2024-05-13\r\n26445,1178,EMEA,grocery,online,45.40,5,0.062,coupon,2024-01-19\r\n26446,1869,AMER,electronics,retail,47.73,7,0.002,none,2024-12-16\r\n26447,2355,EMEA,home,online,62.56,1,0.218,none,2024-03-04\r\n26448,1416,EMEA,home,retail,55.66,1,0.186,coupon,2024-07-23\r\n26449,2371,LATAM,home,online,58.72,5,0.077,bundle,2024-05-27\r\n26450,2011,AMER,grocery,retail,62.58,4,0.047,none,2024-11-22\r\n26451,1720,AMER,toys,online,41.11,8,0.181,none,2024-11-16\r\n26452,1304,LATAM,sports,online,24.38,8,0.243,none,2024-09-07\r\n26453,1784,EMEA,electronics,online,52.39,4,0.130,none,2024-01-26\r\n26454,1794,AMER,sports,online,57.76,3,0.192,none,2024-08-09\r\n26455,1754,EMEA,home,online,93.99,8,0.165,none,2024-06-08\r\n26456,2232,EMEA,grocery,retail,53.67,8,0.243,none,2024-10-18\r\n26457,1247,AMER,grocery,retail,37.17,2,0.010,none,
2024-05-03\r\n26458,1689,LATAM,fashion,online,65.48,5,0.107,none,2024-05-02\r\n26459,2355,EMEA,toys,online,74.36,6,0.231,bundle,2024-12-23\r\n26460,2419,LATAM,electronics,mobile,48.23,4,0.119,coupon,2024-10-15\r\n26461,2129,APAC,grocery,online,44.39,4,0.025,none,2024-08-09\r\n26462,2432,AMER,grocery,retail,31.26,3,0.237,bundle,2024-04-17\r\n26463,1160,LATAM,home,online,41.51,8,0.061,coupon,2024-07-05\r\n26464,2400,EMEA,electronics,online,70.00,6,0.217,none,2024-02-05\r\n26465,1729,AMER,grocery,online,20.93,6,0.084,none,2024-04-23\r\n26466,1289,LATAM,grocery,online,106.53,4,0.147,none,2024-05-10\r\n26467,1061,APAC,electronics,partner,58.88,5,0.158,loyalty,2024-12-18\r\n26468,1258,EMEA,fashion,online,25.98,5,0.146,none,2024-05-20\r\n26469,1493,APAC,sports,online,118.35,4,0.241,none,2024-05-23\r\n26470,2450,EMEA,grocery,retail,42.55,4,0.231,bundle,2024-07-28\r\n26471,2093,LATAM,electronics,retail,66.67,6,0.215,coupon,2024-05-07\r\n26472,1487,AMER,home,mobile,63.63,6,0.088,bundle,2024-08-04\r\n26473,1184,AMER,home,retail,64.05,5,0.109,none,2024-11-06\r\n26474,1595,AMER,grocery,retail,74.00,6,0.177,bundle,2024-07-14\r\n26475,1073,AMER,toys,online,54.06,8,0.154,none,2024-05-26\r\n26476,1162,AMER,electronics,online,74.23,1,0.054,none,2024-05-20\r\n26477,1245,APAC,toys,online,61.07,6,0.183,none,2024-06-19\r\n26478,1380,AMER,electronics,mobile,52.93,8,0.146,none,2024-01-05\r\n26479,1537,LATAM,fashion,online,52.96,4,0.242,none,2024-12-02\r\n26480,1277,AMER,grocery,retail,69.70,6,0.108,none,2024-12-13\r\n26481,2421,AMER,electronics,mobile,66.03,3,0.055,loyalty,2024-08-01\r\n26482,1245,APAC,home,online,42.14,5,0.189,coupon,2024-10-23\r\n26483,1354,AMER,sports,online,56.90,6,0.018,none,2024-09-01\r\n26484,1259,EMEA,toys,mobile,33.63,6,0.127,none,2024-02-08\r\n26485,1025,EMEA,grocery,online,33.02,8,0.089,coupon,2024-08-27\r\n26486,1020,APAC,electronics,retail,66.13,6,0.013,none,2024-07-26\r\n26487,1090,AMER,toys,online,178.79,3,0.072,none,2024-07-11\r\n26488,1933,EMEA,electronics
,mobile,67.57,8,0.045,loyalty,2024-07-04\r\n26489,2007,LATAM,electronics,retail,25.86,1,0.018,none,2024-11-04\r\n26490,1314,AMER,electronics,retail,34.28,5,0.118,none,2024-05-15\r\n26491,1732,LATAM,toys,online,16.71,1,0.104,none,2024-03-02\r\n26492,1867,AMER,electronics,online,65.21,8,0.221,coupon,2024-07-21\r\n26493,2416,LATAM,home,mobile,58.51,4,0.093,none,2024-04-06\r\n26494,2397,LATAM,electronics,online,48.81,5,0.204,bundle,2024-05-23\r\n26495,2042,LATAM,home,online,50.26,6,0.020,coupon,2024-08-18\r\n26496,1653,APAC,grocery,mobile,115.02,1,0.207,none,2024-09-15\r\n26497,1415,AMER,grocery,online,47.46,1,0.242,none,2024-09-25\r\n26498,1911,LATAM,electronics,retail,50.69,5,0.091,none,2024-10-07\r\n26499,1592,LATAM,electronics,online,95.14,1,0.208,bundle,2024-05-16\r\n26500,1422,LATAM,grocery,mobile,40.70,6,0.163,none,2024-05-12\r\n26501,2173,LATAM,grocery,online,50.85,5,0.125,coupon,2024-12-13\r\n26502,1334,APAC,home,retail,33.86,8,0.035,bundle,2024-03-01\r\n26503,1374,APAC,sports,online,20.72,2,0.059,none,2024-01-28\r\n26504,1066,AMER,toys,retail,65.47,7,0.113,bundle,2024-03-23\r\n26505,2406,EMEA,toys,mobile,43.89,8,0.091,coupon,2024-09-27\r\n26506,1865,LATAM,grocery,partner,23.01,7,0.135,loyalty,2024-10-19\r\n26507,2205,AMER,toys,online,26.54,3,0.010,coupon,2024-02-22\r\n26508,2208,AMER,toys,online,122.74,1,0.207,coupon,2024-10-24\r\n26509,1694,APAC,fashion,retail,60.10,7,0.072,coupon,2024-01-09\r\n26510,2234,LATAM,toys,online,28.48,6,0.006,coupon,2024-10-04\r\n26511,1075,AMER,electronics,online,99.73,5,0.118,none,2024-11-06\r\n26512,2464,LATAM,fashion,mobile,105.96,8,0.125,none,2024-06-12\r\n26513,2116,LATAM,sports,online,111.43,2,0.177,coupon,2024-12-08\r\n26514,2251,APAC,toys,mobile,49.32,3,0.168,bundle,2024-07-03\r\n26515,2285,APAC,electronics,mobile,60.81,2,0.217,none,2024-05-09\r\n26516,1872,LATAM,sports,mobile,84.04,6,0.134,none,2024-04-02\r\n26517,1718,EMEA,grocery,retail,59.07,2,0.010,none,2024-02-24\r\n26518,1764,LATAM,home,retail,70.68,2,0.169,loyalty,
2024-11-13\r\n26519,1775,EMEA,grocery,online,45.84,6,0.211,none,2024-05-03\r\n26520,2020,AMER,electronics,mobile,76.89,6,0.032,none,2024-04-16\r\n26521,2489,LATAM,toys,online,152.83,4,0.027,none,2024-08-27\r\n26522,1021,AMER,home,retail,74.36,1,0.214,coupon,2024-03-09\r\n26523,1372,APAC,grocery,online,76.82,8,0.242,coupon,2024-05-13\r\n26524,1814,AMER,home,mobile,145.62,7,0.203,none,2024-07-23\r\n26525,1826,LATAM,grocery,online,61.77,1,0.233,bundle,2024-06-21\r\n26526,1127,EMEA,grocery,mobile,33.40,2,0.153,none,2024-08-21\r\n26527,1688,LATAM,grocery,retail,79.10,1,0.039,loyalty,2024-11-14\r\n26528,1923,LATAM,electronics,online,72.56,6,0.093,coupon,2024-02-01\r\n26529,2490,AMER,home,partner,100.69,7,0.127,bundle,2024-10-11\r\n26530,1526,EMEA,grocery,online,54.32,8,0.064,coupon,2024-01-22\r\n26531,2405,AMER,grocery,mobile,61.79,1,0.008,none,2024-07-14\r\n26532,1811,APAC,sports,retail,24.16,1,0.029,loyalty,2024-02-25\r\n26533,1809,APAC,electronics,online,105.71,8,0.049,bundle,2024-11-16\r\n26534,1214,EMEA,electronics,mobile,59.11,2,0.122,none,2024-10-01\r\n26535,1099,LATAM,fashion,online,27.85,6,0.211,none,2024-02-07\r\n26536,1096,EMEA,home,retail,66.56,6,0.105,loyalty,2024-04-05\r\n26537,1469,EMEA,sports,retail,24.15,8,0.168,coupon,2024-04-16\r\n26538,1351,APAC,toys,online,37.67,8,0.250,loyalty,2024-10-13\r\n26539,1743,LATAM,home,online,117.85,5,0.223,none,2024-09-03\r\n26540,1480,APAC,sports,online,140.06,6,0.184,none,2024-08-03\r\n26541,1086,AMER,fashion,online,56.41,7,0.031,coupon,2024-12-17\r\n26542,1723,LATAM,grocery,retail,27.18,4,0.219,none,2024-05-01\r\n26543,2185,EMEA,grocery,partner,71.36,7,0.250,loyalty,2024-01-07\r\n26544,2045,LATAM,electronics,mobile,38.87,6,0.146,none,2024-05-11\r\n26545,1394,LATAM,home,online,73.96,6,0.185,none,2024-12-03\r\n26546,1639,APAC,home,retail,34.96,6,0.025,none,2024-09-17\r\n26547,1014,EMEA,toys,retail,44.14,2,0.119,none,2024-09-25\r\n26548,2232,EMEA,toys,retail,134.53,7,0.031,none,2024-07-19\r\n26549,1088,LATAM,fashion,retail
,76.21,8,0.069,none,2024-08-11\r\n26550,1941,AMER,home,retail,36.29,5,0.117,none,2024-08-14\r\n26551,1518,AMER,electronics,retail,103.73,6,0.218,none,2024-09-27\r\n26552,1148,AMER,toys,retail,28.54,6,0.062,none,2024-06-19\r\n26553,1782,LATAM,electronics,retail,56.92,6,0.018,none,2024-12-09\r\n26554,2391,EMEA,grocery,partner,115.30,7,0.060,none,2024-06-04\r\n26555,1953,EMEA,electronics,online,46.19,6,0.121,none,2024-10-11\r\n26556,1260,LATAM,grocery,mobile,60.63,1,0.107,coupon,2024-09-17\r\n26557,1759,EMEA,electronics,online,151.15,5,0.233,loyalty,2024-10-02\r\n26558,1457,EMEA,electronics,online,118.83,1,0.009,none,2024-04-08\r\n26559,2302,APAC,grocery,online,32.29,3,0.052,none,2024-06-04\r\n26560,1543,AMER,toys,mobile,48.72,5,0.089,coupon,2024-10-06\r\n26561,2321,APAC,grocery,retail,100.66,7,0.102,none,2024-07-20\r\n26562,1636,APAC,grocery,retail,63.36,3,0.128,coupon,2024-04-02\r\n26563,1254,APAC,home,mobile,71.87,8,0.232,loyalty,2024-06-25\r\n26564,1189,AMER,sports,online,85.16,3,0.183,none,2024-01-23\r\n26565,2248,LATAM,home,retail,27.29,7,0.218,bundle,2024-07-21\r\n26566,1700,EMEA,home,retail,56.60,8,0.160,none,2024-11-20\r\n26567,1105,AMER,electronics,online,110.64,1,0.012,none,2024-01-06\r\n26568,1347,APAC,home,retail,46.26,2,0.067,loyalty,2024-07-13\r\n26569,1393,LATAM,grocery,online,76.06,6,0.064,coupon,2024-08-13\r\n26570,1233,AMER,electronics,online,58.30,4,0.181,loyalty,2024-04-13\r\n26571,2179,LATAM,grocery,partner,62.15,8,0.086,loyalty,2024-06-21\r\n26572,1452,LATAM,fashion,mobile,49.47,3,0.150,coupon,2024-05-14\r\n26573,1497,EMEA,sports,online,80.99,5,0.125,bundle,2024-11-28\r\n26574,1727,APAC,fashion,mobile,38.53,5,0.143,none,2024-12-14\r\n26575,1839,APAC,sports,mobile,70.27,4,0.194,none,2024-03-11\r\n26576,1280,LATAM,electronics,retail,58.63,3,0.033,bundle,2024-10-20\r\n26577,1816,EMEA,fashion,online,108.74,5,0.180,none,2024-04-11\r\n26578,1007,APAC,grocery,mobile,78.50,2,0.221,none,2024-11-24\r\n26579,1679,APAC,fashion,retail,19.65,5,0.146,loyalty,20
24-02-07\r\n26580,1996,APAC,grocery,mobile,32.64,8,0.137,none,2024-02-24\r\n26581,1833,EMEA,electronics,retail,38.96,7,0.120,none,2024-01-08\r\n26582,2450,EMEA,sports,retail,54.76,6,0.030,none,2024-03-05\r\n26583,2055,AMER,sports,retail,61.79,4,0.166,bundle,2024-07-16\r\n26584,1945,AMER,sports,online,64.71,2,0.131,none,2024-07-05\r\n26585,2142,LATAM,electronics,retail,97.17,4,0.136,none,2024-11-24\r\n26586,1523,LATAM,home,retail,286.80,5,0.139,none,2024-08-13\r\n26587,2489,LATAM,home,retail,81.46,4,0.167,none,2024-10-09\r\n26588,2045,LATAM,electronics,retail,23.22,3,0.144,bundle,2024-12-24\r\n26589,1110,LATAM,grocery,online,74.07,1,0.060,none,2024-06-13\r\n26590,2105,APAC,grocery,online,84.32,2,0.066,coupon,2024-07-05\r\n26591,1615,LATAM,toys,retail,67.51,4,0.157,bundle,2024-11-22\r\n26592,2498,LATAM,home,online,32.72,2,0.197,none,2024-12-19\r\n26593,1655,LATAM,toys,partner,89.16,5,0.046,bundle,2024-01-17\r\n26594,1002,EMEA,fashion,online,26.84,7,0.014,coupon,2024-05-15\r\n26595,1221,LATAM,home,retail,53.35,7,0.129,none,2024-12-19\r\n26596,2470,EMEA,grocery,retail,65.95,5,0.184,coupon,2024-10-24\r\n26597,1317,EMEA,sports,online,35.85,2,0.032,none,2024-03-22\r\n26598,1101,AMER,fashion,retail,111.49,1,0.038,coupon,2024-06-15\r\n26599,2154,APAC,electronics,online,78.53,7,0.223,none,2024-12-20\r\n26600,2471,APAC,fashion,retail,71.68,7,0.098,none,2024-03-13\r\n26601,2304,LATAM,fashion,mobile,65.14,4,0.158,none,2024-01-27\r\n26602,1450,EMEA,toys,retail,49.48,8,0.112,none,2024-06-20\r\n26603,2409,APAC,electronics,mobile,44.15,1,0.004,coupon,2024-10-10\r\n26604,1250,APAC,electronics,retail,47.79,6,0.194,none,2024-10-26\r\n26605,2448,APAC,sports,retail,84.53,4,0.120,none,2024-10-05\r\n26606,1585,AMER,home,online,88.47,7,0.121,loyalty,2024-11-08\r\n26607,1563,EMEA,home,partner,47.10,2,0.203,none,2024-04-28\r\n26608,1606,AMER,toys,online,60.77,2,0.228,none,2024-06-04\r\n26609,2448,APAC,grocery,retail,23.94,3,0.029,coupon,2024-04-04\r\n26610,1186,APAC,home,retail,28.15,3,0.131,
bundle,2024-04-15\r\n26611,1804,AMER,sports,retail,32.87,3,0.178,none,2024-01-16\r\n26612,1028,EMEA,grocery,retail,38.25,7,0.088,coupon,2024-03-18\r\n26613,1082,EMEA,electronics,online,51.24,1,0.163,none,2024-09-05\r\n26614,2383,APAC,grocery,online,81.58,3,0.202,coupon,2024-11-28\r\n26615,2279,LATAM,home,retail,62.56,7,0.061,bundle,2024-04-17\r\n26616,1671,APAC,home,mobile,100.64,8,0.169,bundle,2024-03-04\r\n26617,1606,AMER,fashion,online,31.38,3,0.196,bundle,2024-10-04\r\n26618,1537,LATAM,toys,online,20.53,4,0.183,none,2024-02-26\r\n26619,1805,EMEA,grocery,online,73.49,5,0.183,none,2024-06-01\r\n26620,1139,EMEA,grocery,partner,95.92,3,0.151,none,2024-06-07\r\n26621,2344,LATAM,grocery,online,102.03,6,0.050,bundle,2024-07-19\r\n26622,1066,AMER,sports,online,55.02,2,0.048,none,2024-12-02\r\n26623,2208,AMER,home,retail,95.61,8,0.210,coupon,2024-12-21\r\n26624,1282,LATAM,grocery,mobile,89.39,8,0.037,coupon,2024-11-06\r\n26625,2324,AMER,grocery,online,53.32,3,0.161,none,2024-05-21\r\n26626,2200,LATAM,fashion,mobile,123.65,3,0.064,loyalty,2024-06-27\r\n26627,1760,LATAM,electronics,retail,38.77,6,0.022,coupon,2024-09-18\r\n26628,1655,LATAM,sports,online,39.44,4,0.011,none,2024-11-04\r\n26629,1349,APAC,sports,retail,28.93,6,0.157,coupon,2024-11-03\r\n26630,1426,AMER,grocery,retail,24.38,7,0.095,none,2024-03-16\r\n26631,1524,LATAM,grocery,online,33.70,2,0.228,bundle,2024-02-19\r\n26632,2140,AMER,electronics,retail,45.76,2,0.139,coupon,2024-11-04\r\n26633,2034,LATAM,home,retail,24.81,4,0.197,loyalty,2024-10-01\r\n26634,2223,EMEA,electronics,retail,74.24,4,0.074,none,2024-01-06\r\n26635,1099,LATAM,grocery,online,69.05,1,0.198,none,2024-03-19\r\n26636,1388,AMER,grocery,partner,51.08,2,0.231,none,2024-08-11\r\n26637,1926,AMER,electronics,online,54.33,5,0.017,none,2024-08-01\r\n26638,1685,AMER,grocery,retail,38.35,1,0.241,none,2024-06-04\r\n26639,1189,AMER,grocery,retail,119.46,8,0.217,none,2024-09-03\r\n26640,1410,AMER,electronics,retail,118.30,8,0.215,coupon,2024-01-08\r\n26641
,1161,AMER,toys,online,51.20,5,0.125,none,2024-12-17\r\n26642,1435,AMER,electronics,retail,38.12,8,0.103,none,2024-05-24\r\n26643,1810,LATAM,toys,mobile,105.64,5,0.119,bundle,2024-01-20\r\n26644,2449,LATAM,fashion,online,38.51,8,0.217,none,2024-12-28\r\n26645,1208,AMER,home,mobile,97.93,4,0.179,none,2024-09-25\r\n26646,1399,AMER,fashion,online,110.44,1,0.014,none,2024-11-03\r\n26647,2489,LATAM,grocery,online,42.22,2,0.055,none,2024-11-24\r\n26648,1187,AMER,grocery,online,24.81,3,0.026,none,2024-08-07\r\n26649,1839,APAC,home,retail,60.50,3,0.142,bundle,2024-09-21\r\n26650,2033,LATAM,fashion,retail,109.73,8,0.204,loyalty,2024-12-24\r\n26651,1918,EMEA,electronics,online,20.35,2,0.118,bundle,2024-11-18\r\n26652,2291,EMEA,sports,retail,27.85,8,0.058,none,2024-06-15\r\n26653,1788,AMER,grocery,retail,113.75,6,0.049,coupon,2024-06-25\r\n26654,1165,AMER,home,retail,35.80,2,0.245,none,2024-06-19\r\n26655,2154,APAC,home,retail,96.73,4,0.184,coupon,2024-06-10\r\n26656,1045,LATAM,home,online,29.37,2,0.227,coupon,2024-10-26\r\n26657,1122,AMER,electronics,mobile,40.34,7,0.194,none,2024-07-14\r\n26658,1642,EMEA,home,online,66.47,7,0.212,none,2024-09-26\r\n26659,2149,EMEA,toys,online,112.26,7,0.161,none,2024-09-10\r\n26660,2205,AMER,home,retail,22.07,1,0.012,none,2024-03-25\r\n26661,2352,APAC,electronics,retail,48.28,3,0.044,coupon,2024-01-06\r\n26662,2404,EMEA,grocery,retail,69.92,3,0.161,none,2024-06-07\r\n26663,1675,LATAM,home,retail,30.92,7,0.099,bundle,2024-08-16\r\n26664,2294,EMEA,sports,online,61.00,8,0.171,coupon,2024-12-25\r\n26665,1748,APAC,electronics,mobile,102.74,6,0.046,none,2024-05-02\r\n26666,2113,LATAM,fashion,online,14.14,5,0.245,bundle,2024-12-16\r\n26667,2416,LATAM,home,retail,128.04,8,0.085,none,2024-12-09\r\n26668,2006,APAC,grocery,retail,32.59,4,0.072,coupon,2024-06-22\r\n26669,2185,EMEA,fashion,mobile,22.15,7,0.161,coupon,2024-08-21\r\n26670,2434,APAC,grocery,retail,88.86,3,0.241,none,2024-01-14\r\n26671,2214,AMER,grocery,retail,75.54,8,0.089,none,2024-09-17\
r\n26672,1614,EMEA,fashion,online,55.66,1,0.021,bundle,2024-05-05\r\n26673,2359,LATAM,grocery,partner,40.07,1,0.246,coupon,2024-03-06\r\n26674,1523,LATAM,grocery,retail,35.95,1,0.116,none,2024-03-08\r\n26675,2254,LATAM,fashion,online,81.70,5,0.135,bundle,2024-11-14\r\n26676,1442,EMEA,sports,online,21.98,6,0.014,none,2024-12-18\r\n26677,1728,AMER,home,mobile,76.19,5,0.082,none,2024-02-28\r\n26678,1878,EMEA,electronics,partner,110.26,7,0.220,coupon,2024-04-19\r\n26679,1842,LATAM,grocery,retail,15.92,7,0.055,none,2024-05-15\r\n26680,1663,LATAM,grocery,partner,72.58,7,0.229,none,2024-05-03\r\n26681,1586,LATAM,home,online,51.85,7,0.081,none,2024-03-25\r\n26682,1301,AMER,electronics,partner,54.40,6,0.235,none,2024-11-07\r\n26683,1123,LATAM,electronics,retail,53.04,4,0.070,coupon,2024-07-09\r\n26684,2087,LATAM,toys,retail,63.72,7,0.245,bundle,2024-01-27\r\n26685,1612,LATAM,fashion,retail,64.67,8,0.045,none,2024-03-23\r\n26686,1824,LATAM,electronics,retail,54.55,4,0.184,none,2024-04-26\r\n26687,1007,APAC,fashion,mobile,39.98,5,0.125,none,2024-02-16\r\n26688,1609,LATAM,sports,online,25.61,2,0.091,none,2024-02-19\r\n26689,1460,LATAM,home,partner,59.29,6,0.117,none,2024-07-28\r\n26690,1394,LATAM,electronics,online,47.63,3,0.195,none,2024-10-04\r\n26691,2099,AMER,electronics,online,44.61,4,0.184,none,2024-12-15\r\n26692,2119,AMER,fashion,mobile,37.20,7,0.203,none,2024-02-11\r\n26693,2262,APAC,grocery,online,79.72,6,0.037,bundle,2024-08-11\r\n26694,2033,LATAM,electronics,online,55.69,6,0.166,bundle,2024-07-14\r\n26695,1803,LATAM,fashion,online,29.97,4,0.238,loyalty,2024-06-04\r\n26696,2356,LATAM,grocery,online,100.76,5,0.055,none,2024-10-22\r\n26697,1503,APAC,toys,retail,46.89,2,0.162,none,2024-03-25\r\n26698,2182,AMER,grocery,retail,168.37,7,0.230,none,2024-05-02\r\n26699,2412,LATAM,sports,retail,39.49,4,0.191,coupon,2024-05-16\r\n26700,2290,LATAM,grocery,online,65.28,3,0.220,bundle,2024-05-23\r\n26701,2052,LATAM,grocery,online,50.33,6,0.137,none,2024-03-20\r\n26702,2073,AMER,g
rocery,online,24.61,1,0.140,none,2024-06-19\r\n26703,2050,APAC,home,mobile,34.32,2,0.127,none,2024-12-27\r\n26704,2322,AMER,fashion,retail,53.08,2,0.148,none,2024-01-20\r\n26705,1166,AMER,grocery,mobile,54.82,2,0.157,none,2024-08-20\r\n26706,1360,APAC,toys,online,52.85,2,0.074,none,2024-11-24\r\n26707,2049,LATAM,sports,online,123.89,7,0.014,none,2024-10-01\r\n26708,2155,APAC,grocery,online,74.67,1,0.134,none,2024-11-04\r\n26709,2332,APAC,electronics,partner,75.51,5,0.029,coupon,2024-10-03\r\n26710,1237,LATAM,sports,partner,40.10,4,0.176,none,2024-11-19\r\n26711,1880,LATAM,grocery,online,60.14,1,0.044,none,2024-02-24\r\n26712,1083,AMER,toys,mobile,115.60,6,0.135,bundle,2024-07-24\r\n26713,1814,AMER,grocery,online,66.23,3,0.219,none,2024-05-21\r\n26714,1585,AMER,fashion,mobile,32.58,8,0.226,loyalty,2024-01-08\r\n26715,2234,LATAM,fashion,retail,45.23,1,0.007,none,2024-09-19\r\n26716,1752,APAC,sports,online,49.25,4,0.098,none,2024-05-24\r\n26717,2115,APAC,fashion,mobile,37.17,6,0.240,none,2024-01-16\r\n26718,1875,EMEA,fashion,online,46.41,3,0.069,none,2024-08-13\r\n26719,2154,APAC,fashion,mobile,56.78,5,0.128,none,2024-05-05\r\n26720,1790,AMER,home,online,40.55,5,0.076,none,2024-03-26\r\n26721,2286,AMER,grocery,partner,37.50,6,0.034,none,2024-10-21\r\n26722,2459,AMER,electronics,online,79.40,5,0.010,coupon,2024-05-11\r\n26723,1622,LATAM,fashion,retail,44.62,3,0.112,none,2024-02-12\r\n26724,1634,AMER,grocery,retail,82.73,6,0.132,none,2024-07-16\r\n26725,1333,EMEA,grocery,mobile,41.37,4,0.107,none,2024-10-02\r\n26726,1447,LATAM,electronics,partner,65.21,5,0.197,coupon,2024-11-17\r\n26727,1969,LATAM,electronics,mobile,58.68,2,0.204,none,2024-12-17\r\n26728,2314,EMEA,grocery,retail,39.49,7,0.098,bundle,2024-11-06\r\n26729,2427,LATAM,electronics,retail,73.08,6,0.049,none,2024-12-10\r\n26730,1931,APAC,sports,retail,51.02,4,0.089,none,2024-12-12\r\n26731,2175,AMER,home,online,63.00,6,0.061,loyalty,2024-05-21\r\n26732,2249,LATAM,electronics,mobile,123.50,3,0.122,loyalty,2024-08
-06\r\n26733,1697,APAC,sports,mobile,60.57,8,0.012,none,2024-08-27\r\n26734,1905,APAC,grocery,online,93.29,4,0.118,none,2024-03-17\r\n26735,1772,EMEA,sports,retail,118.90,2,0.055,none,2024-01-26\r\n26736,2150,APAC,sports,retail,48.57,1,0.084,none,2024-11-21\r\n26737,1762,LATAM,fashion,mobile,116.11,2,0.062,none,2024-01-25\r\n26738,2203,APAC,toys,online,43.90,7,0.069,bundle,2024-09-08\r\n26739,2020,AMER,grocery,retail,106.51,6,0.093,coupon,2024-08-23\r\n26740,1517,AMER,fashion,online,60.24,7,0.191,none,2024-12-04\r\n26741,1099,LATAM,grocery,retail,81.75,6,0.235,bundle,2024-05-12\r\n26742,1035,EMEA,home,retail,61.19,6,0.086,none,2024-05-01\r\n26743,2327,EMEA,sports,online,73.34,4,0.204,none,2024-10-01\r\n26744,2466,APAC,toys,retail,43.42,5,0.174,coupon,2024-11-08\r\n26745,1222,AMER,sports,mobile,96.11,6,0.149,none,2024-03-09\r\n26746,2161,LATAM,fashion,retail,46.78,4,0.166,coupon,2024-12-04\r\n26747,1442,EMEA,toys,mobile,29.59,2,0.015,none,2024-11-15\r\n26748,1945,AMER,home,online,48.70,4,0.122,none,2024-01-03\r\n26749,1288,LATAM,grocery,retail,37.29,5,0.208,bundle,2024-08-20\r\n26750,1402,EMEA,toys,partner,41.50,5,0.061,coupon,2024-12-12\r\n26751,1234,AMER,fashion,retail,69.03,2,0.028,loyalty,2024-03-28\r\n26752,2386,EMEA,grocery,online,73.59,6,0.190,bundle,2024-04-10\r\n26753,1900,APAC,electronics,partner,42.53,2,0.103,bundle,2024-01-13\r\n26754,1739,AMER,electronics,partner,54.65,2,0.034,coupon,2024-11-04\r\n26755,1467,LATAM,fashion,online,51.36,7,0.123,none,2024-04-10\r\n26756,2005,APAC,grocery,online,22.51,6,0.219,loyalty,2024-01-25\r\n26757,1351,APAC,electronics,retail,90.06,5,0.061,none,2024-11-02\r\n26758,2424,LATAM,grocery,retail,33.00,8,0.208,loyalty,2024-01-13\r\n26759,1779,APAC,toys,retail,95.06,6,0.230,none,2024-02-15\r\n26760,2399,LATAM,grocery,online,64.38,1,0.108,coupon,2024-06-07\r\n26761,1543,AMER,home,online,50.26,2,0.218,none,2024-12-03\r\n26762,1626,EMEA,fashion,retail,120.46,5,0.097,bundle,2024-04-02\r\n26763,1354,AMER,fashion,retail,55.33,7,0.06
6,none,2024-10-28\r\n26764,1135,APAC,sports,retail,28.41,6,0.025,coupon,2024-03-26\r\n26765,1671,APAC,grocery,online,75.24,7,0.161,none,2024-09-20\r\n26766,1055,AMER,grocery,online,97.99,2,0.135,none,2024-04-01\r\n26767,1845,AMER,home,online,59.22,4,0.201,loyalty,2024-11-09\r\n26768,1605,APAC,home,online,38.34,1,0.128,coupon,2024-07-17\r\n26769,1354,AMER,home,retail,47.44,8,0.083,none,2024-01-14\r\n26770,1741,AMER,sports,mobile,94.17,8,0.080,none,2024-01-01\r\n26771,2057,APAC,home,online,48.71,4,0.074,coupon,2024-08-13\r\n26772,1918,EMEA,home,online,59.78,4,0.223,none,2024-12-11\r\n26773,2062,EMEA,toys,online,32.76,5,0.212,loyalty,2024-04-09\r\n26774,2267,AMER,grocery,partner,55.14,1,0.032,loyalty,2024-10-16\r\n26775,1314,AMER,home,retail,61.74,5,0.200,none,2024-08-19\r\n26776,1420,APAC,toys,online,54.20,1,0.249,none,2024-05-21\r\n26777,1546,EMEA,home,online,40.56,8,0.019,none,2024-09-18\r\n26778,2298,APAC,grocery,mobile,21.16,5,0.086,none,2024-11-18\r\n26779,1533,APAC,toys,online,26.23,7,0.062,loyalty,2024-05-14\r\n26780,2084,LATAM,home,online,65.89,7,0.142,none,2024-10-03\r\n26781,1273,AMER,sports,online,41.99,4,0.136,bundle,2024-08-22\r\n26782,2437,LATAM,electronics,mobile,55.90,6,0.039,coupon,2024-08-09\r\n26783,2452,LATAM,home,online,55.24,8,0.055,none,2024-08-07\r\n26784,2388,LATAM,electronics,mobile,54.53,6,0.125,none,2024-05-27\r\n26785,1727,APAC,sports,online,129.34,1,0.205,bundle,2024-07-18\r\n26786,2239,EMEA,toys,mobile,82.30,2,0.225,none,2024-08-05\r\n26787,1829,EMEA,home,mobile,40.07,5,0.120,coupon,2024-06-04\r\n26788,1312,EMEA,home,retail,45.17,7,0.141,bundle,2024-08-06\r\n26789,2392,EMEA,sports,online,27.01,1,0.040,none,2024-12-13\r\n26790,2001,EMEA,grocery,retail,137.76,7,0.230,loyalty,2024-05-13\r\n26791,1225,APAC,grocery,online,75.53,8,0.189,loyalty,2024-01-17\r\n26792,2480,APAC,toys,mobile,41.98,8,0.088,none,2024-06-11\r\n26793,1931,APAC,toys,online,14.95,1,0.033,none,2024-10-06\r\n26794,1907,EMEA,grocery,online,63.23,5,0.181,bundle,2024-09-21\r\n
26795,1377,APAC,sports,online,34.93,4,0.114,coupon,2024-06-06\r\n26796,1028,EMEA,sports,retail,68.40,2,0.234,none,2024-03-24\r\n26797,2132,LATAM,home,online,107.92,6,0.154,coupon,2024-08-08\r\n26798,1500,EMEA,grocery,retail,15.92,5,0.063,bundle,2024-03-02\r\n26799,1194,APAC,grocery,retail,137.87,4,0.037,bundle,2024-10-18\r\n26800,1321,EMEA,grocery,online,218.82,4,0.166,coupon,2024-08-11\r\n26801,2221,LATAM,fashion,retail,48.79,8,0.237,coupon,2024-05-22\r\n26802,2401,LATAM,electronics,online,50.26,5,0.022,none,2024-02-01\r\n26803,1861,AMER,home,retail,35.35,5,0.072,none,2024-10-28\r\n26804,1234,AMER,sports,retail,32.03,1,0.050,bundle,2024-06-08\r\n26805,1304,LATAM,grocery,online,48.94,8,0.102,none,2024-08-16\r\n26806,1708,LATAM,grocery,online,76.05,3,0.154,none,2024-01-04\r\n26807,1099,LATAM,home,retail,53.03,4,0.249,none,2024-03-20\r\n26808,2333,APAC,toys,retail,97.01,5,0.208,loyalty,2024-03-13\r\n26809,2357,EMEA,grocery,online,75.29,5,0.214,loyalty,2024-11-05\r\n26810,1614,EMEA,toys,mobile,47.66,6,0.081,none,2024-09-08\r\n26811,2398,EMEA,sports,online,55.24,3,0.209,coupon,2024-09-17\r\n26812,2037,LATAM,sports,online,54.21,7,0.161,bundle,2024-06-26\r\n26813,2337,AMER,electronics,online,64.14,2,0.097,bundle,2024-07-27\r\n26814,2379,AMER,sports,retail,52.49,6,0.043,none,2024-04-23\r\n26815,1116,LATAM,grocery,retail,74.13,5,0.111,coupon,2024-10-16\r\n26816,2153,APAC,home,online,84.76,6,0.035,none,2024-10-26\r\n26817,2281,AMER,fashion,online,53.84,7,0.111,none,2024-10-17\r\n26818,2177,AMER,home,retail,82.25,3,0.096,none,2024-02-28\r\n26819,1622,LATAM,home,online,65.69,3,0.229,coupon,2024-05-26\r\n26820,1766,AMER,home,retail,82.28,6,0.185,loyalty,2024-03-03\r\n26821,2453,AMER,sports,online,96.25,7,0.017,coupon,2024-05-23\r\n26822,1941,AMER,grocery,mobile,45.13,4,0.141,bundle,2024-06-23\r\n26823,1637,APAC,grocery,online,42.25,4,0.122,none,2024-06-09\r\n26824,1006,AMER,electronics,retail,27.38,6,0.212,loyalty,2024-12-28\r\n26825,1766,AMER,grocery,retail,47.92,2,0.074,none,
2024-03-27\r\n26826,1877,LATAM,home,online,70.51,7,0.226,none,2024-05-27\r\n26827,2164,AMER,electronics,retail,70.46,8,0.115,none,2024-03-15\r\n26828,1552,EMEA,electronics,mobile,60.77,8,0.184,none,2024-04-05\r\n26829,1670,EMEA,sports,online,110.85,1,0.184,coupon,2024-03-04\r\n26830,2185,EMEA,home,retail,76.75,4,0.051,none,2024-02-02\r\n26831,1658,AMER,fashion,retail,45.83,3,0.094,none,2024-05-06\r\n26832,2485,AMER,sports,online,65.82,6,0.049,none,2024-03-11\r\n26833,1415,AMER,fashion,partner,45.45,7,0.093,bundle,2024-05-19\r\n26834,1983,LATAM,grocery,online,64.77,1,0.196,none,2024-07-07\r\n26835,1538,AMER,home,retail,38.64,5,0.015,none,2024-12-06\r\n26836,2130,EMEA,fashion,retail,21.10,5,0.089,loyalty,2024-06-14\r\n26837,1246,EMEA,home,online,49.48,3,0.141,bundle,2024-09-05\r\n26838,1689,LATAM,grocery,retail,26.09,2,0.194,none,2024-05-21\r\n26839,1901,AMER,sports,online,33.50,4,0.091,none,2024-09-07\r\n26840,1184,AMER,fashion,mobile,93.47,6,0.115,none,2024-03-15\r\n26841,1401,LATAM,electronics,mobile,36.69,2,0.002,coupon,2024-11-27\r\n26842,2127,LATAM,home,partner,60.59,5,0.024,none,2024-03-10\r\n26843,1882,AMER,home,online,95.99,7,0.071,none,2024-04-21\r\n26844,2379,AMER,sports,online,100.87,5,0.164,loyalty,2024-01-11\r\n26845,1815,APAC,home,retail,62.04,7,0.209,none,2024-03-23\r\n26846,1487,AMER,fashion,mobile,55.67,8,0.019,coupon,2024-06-20\r\n26847,1159,LATAM,fashion,online,66.04,8,0.047,coupon,2024-11-01\r\n26848,2142,LATAM,grocery,online,28.06,3,0.072,bundle,2024-12-10\r\n26849,1851,EMEA,fashion,online,135.95,7,0.169,none,2024-06-05\r\n26850,2338,AMER,grocery,online,54.44,7,0.233,none,2024-01-14\r\n26851,2109,EMEA,toys,retail,50.00,5,0.178,none,2024-09-02\r\n26852,1996,APAC,grocery,online,166.39,4,0.096,none,2024-05-10\r\n26853,2465,EMEA,sports,online,36.61,1,0.205,none,2024-12-25\r\n26854,1646,APAC,electronics,retail,91.37,1,0.089,none,2024-11-04\r\n26855,2003,LATAM,fashion,retail,65.27,6,0.157,none,2024-10-12\r\n26856,2221,LATAM,home,online,39.23,1,0.047,no
ne,2024-04-08\r\n26857,1710,APAC,electronics,online,34.11,1,0.222,coupon,2024-06-28\r\n26858,1263,AMER,fashion,retail,64.31,7,0.244,none,2024-06-20\r\n26859,1506,EMEA,grocery,online,60.87,1,0.240,none,2024-04-28\r\n26860,1249,EMEA,sports,online,172.28,4,0.106,bundle,2024-10-01\r\n26861,2466,APAC,grocery,online,65.07,1,0.069,none,2024-06-20\r\n26862,2400,EMEA,grocery,mobile,48.39,2,0.001,coupon,2024-05-06\r\n26863,1985,AMER,home,retail,42.37,5,0.128,none,2024-09-21\r\n26864,1434,EMEA,sports,retail,46.60,8,0.049,none,2024-06-25\r\n26865,2200,LATAM,home,mobile,87.13,6,0.138,none,2024-06-21\r\n26866,2384,LATAM,home,online,68.95,6,0.114,bundle,2024-06-06\r\n26867,1904,APAC,home,partner,75.37,1,0.121,none,2024-11-12\r\n26868,2129,APAC,grocery,retail,38.26,5,0.069,none,2024-02-13\r\n26869,1293,AMER,electronics,online,133.14,1,0.187,none,2024-12-27\r\n26870,1635,APAC,fashion,retail,101.00,4,0.114,bundle,2024-07-05\r\n26871,1690,LATAM,electronics,mobile,20.13,1,0.040,loyalty,2024-06-12\r\n26872,1191,EMEA,toys,online,119.26,2,0.175,bundle,2024-04-21\r\n26873,1544,LATAM,grocery,retail,133.60,2,0.015,coupon,2024-08-14\r\n26874,2014,EMEA,fashion,online,48.69,5,0.155,none,2024-10-02\r\n26875,2154,APAC,grocery,mobile,147.16,5,0.119,none,2024-12-24\r\n26876,1595,AMER,home,retail,27.95,2,0.094,loyalty,2024-10-20\r\n26877,1169,LATAM,electronics,online,60.48,3,0.227,bundle,2024-08-21\r\n26878,1914,EMEA,fashion,online,109.61,6,0.035,none,2024-08-01\r\n26879,1456,APAC,sports,retail,127.88,2,0.038,none,2024-02-14\r\n26880,2001,EMEA,electronics,online,74.82,8,0.226,none,2024-11-17\r\n26881,2364,APAC,fashion,online,54.50,3,0.069,coupon,2024-04-20\r\n26882,2343,EMEA,sports,retail,41.16,8,0.106,none,2024-11-18\r\n26883,1519,APAC,grocery,mobile,23.41,6,0.229,bundle,2024-04-15\r\n26884,2035,LATAM,home,retail,75.59,5,0.232,none,2024-05-11\r\n26885,1024,APAC,grocery,retail,34.30,6,0.146,none,2024-06-28\r\n26886,1874,LATAM,electronics,retail,142.83,7,0.053,none,2024-08-17\r\n26887,1421,APAC,elect
ronics,retail,110.84,7,0.111,none,2024-10-23\r\n26888,1491,EMEA,fashion,online,123.40,1,0.233,none,2024-01-03\r\n26889,1031,AMER,home,online,24.77,5,0.057,loyalty,2024-06-01\r\n26890,1082,EMEA,electronics,mobile,61.19,1,0.039,none,2024-01-03\r\n26891,1085,EMEA,home,online,70.24,6,0.046,none,2024-03-02\r\n26892,2439,AMER,home,partner,33.80,6,0.032,loyalty,2024-11-13\r\n26893,2241,APAC,grocery,online,54.16,3,0.159,none,2024-05-25\r\n26894,1383,AMER,fashion,mobile,31.47,6,0.201,none,2024-12-03\r\n26895,1398,APAC,sports,online,63.28,5,0.094,coupon,2024-05-18\r\n26896,1205,APAC,home,online,58.16,6,0.182,none,2024-03-26\r\n26897,1892,LATAM,electronics,retail,78.02,6,0.073,coupon,2024-10-04\r\n26898,2180,AMER,fashion,retail,27.80,5,0.238,none,2024-12-28\r\n26899,1029,EMEA,toys,online,59.12,3,0.000,none,2024-05-05\r\n26900,1486,LATAM,electronics,mobile,38.79,1,0.014,none,2024-09-19\r\n26901,1198,AMER,grocery,retail,103.79,7,0.198,none,2024-10-22\r\n26902,1769,LATAM,home,online,56.88,1,0.186,none,2024-06-09\r\n26903,1861,AMER,fashion,mobile,61.49,6,0.242,bundle,2024-03-11\r\n26904,1243,AMER,toys,online,36.93,7,0.062,none,2024-09-17\r\n26905,1396,EMEA,fashion,retail,62.02,4,0.065,loyalty,2024-01-21\r\n26906,2061,EMEA,electronics,online,30.04,1,0.049,bundle,2024-02-02\r\n26907,2217,LATAM,home,retail,85.20,2,0.028,none,2024-10-04\r\n26908,2347,AMER,fashion,online,29.91,1,0.230,none,2024-06-15\r\n26909,1868,AMER,sports,partner,77.75,5,0.058,none,2024-11-06\r\n26910,1036,EMEA,grocery,online,76.29,5,0.069,none,2024-03-04\r\n26911,1066,AMER,fashion,retail,58.22,8,0.088,none,2024-06-04\r\n26912,2263,AMER,electronics,online,45.50,3,0.021,none,2024-04-28\r\n26913,1554,AMER,electronics,mobile,22.67,8,0.143,coupon,2024-06-19\r\n26914,2223,EMEA,grocery,retail,131.32,4,0.190,none,2024-11-24\r\n26915,1893,APAC,electronics,retail,50.52,1,0.166,coupon,2024-11-01\r\n26916,2337,AMER,grocery,online,24.51,4,0.037,bundle,2024-12-14\r\n26917,1761,EMEA,electronics,retail,132.73,7,0.004,none,2024-09
-24\r\n26918,1545,AMER,electronics,retail,49.40,6,0.019,loyalty,2024-10-12\r\n26919,1689,LATAM,grocery,online,70.19,5,0.012,none,2024-03-27\r\n26920,2492,LATAM,electronics,mobile,44.98,3,0.138,none,2024-08-17\r\n26921,1534,EMEA,toys,retail,30.70,7,0.219,loyalty,2024-08-20\r\n26922,2242,AMER,toys,mobile,65.44,1,0.021,none,2024-01-19\r\n26923,2167,APAC,grocery,online,35.36,6,0.245,loyalty,2024-07-10\r\n26924,2133,AMER,grocery,partner,46.40,2,0.047,coupon,2024-09-10\r\n26925,2379,AMER,toys,retail,47.73,6,0.203,loyalty,2024-01-02\r\n26926,2347,AMER,fashion,retail,16.39,4,0.040,none,2024-10-22\r\n26927,1030,EMEA,home,retail,152.46,5,0.230,none,2024-02-14\r\n26928,1885,EMEA,electronics,mobile,87.53,5,0.195,none,2024-06-15\r\n26929,2199,LATAM,sports,retail,56.71,8,0.239,none,2024-01-06\r\n26930,1249,EMEA,toys,retail,36.50,6,0.090,none,2024-12-14\r\n26931,1119,LATAM,fashion,retail,56.98,7,0.048,none,2024-11-24\r\n26932,1358,APAC,electronics,online,61.47,5,0.178,bundle,2024-12-18\r\n26933,1527,AMER,electronics,online,37.05,7,0.137,coupon,2024-01-12\r\n26934,2486,APAC,grocery,retail,57.33,7,0.074,none,2024-11-15\r\n26935,1035,EMEA,toys,online,48.40,3,0.065,loyalty,2024-02-19\r\n26936,1898,EMEA,fashion,retail,89.35,8,0.037,coupon,2024-08-18\r\n26937,2350,APAC,sports,mobile,43.34,3,0.227,none,2024-01-21\r\n26938,2145,AMER,sports,mobile,89.42,3,0.196,none,2024-05-20\r\n26939,1459,LATAM,fashion,online,23.04,5,0.130,coupon,2024-10-14\r\n26940,1635,APAC,toys,online,96.19,8,0.042,none,2024-06-16\r\n26941,1748,APAC,grocery,mobile,32.37,5,0.009,bundle,2024-12-03\r\n26942,1580,AMER,home,online,54.67,4,0.087,none,2024-12-25\r\n26943,1377,APAC,sports,online,58.83,8,0.174,none,2024-05-14\r\n26944,1130,LATAM,toys,retail,29.82,3,0.086,bundle,2024-06-14\r\n26945,1079,LATAM,toys,online,56.71,2,0.231,none,2024-05-08\r\n26946,1875,EMEA,grocery,mobile,30.10,5,0.080,none,2024-11-15\r\n26947,2236,APAC,fashion,online,94.07,2,0.119,bundle,2024-03-14\r\n26948,2178,AMER,electronics,online,54.93,7,0.24
9,none,2024-06-09\r\n26949,1661,LATAM,sports,retail,266.60,6,0.184,coupon,2024-07-14\r\n26950,1017,AMER,sports,mobile,59.38,8,0.178,bundle,2024-04-12\r\n26951,1643,EMEA,home,online,60.54,7,0.152,none,2024-05-02\r\n26952,1623,AMER,home,mobile,35.82,6,0.090,loyalty,2024-07-25\r\n26953,1299,LATAM,grocery,online,36.53,1,0.201,coupon,2024-10-16\r\n26954,2409,APAC,electronics,retail,63.15,6,0.225,none,2024-08-28\r\n26955,1927,EMEA,grocery,retail,45.53,1,0.113,none,2024-01-09\r\n26956,2223,EMEA,grocery,retail,68.25,6,0.037,none,2024-03-16\r\n26957,1937,APAC,toys,mobile,25.48,4,0.175,none,2024-12-28\r\n26958,2352,APAC,electronics,online,23.82,4,0.128,none,2024-11-27\r\n26959,2253,AMER,fashion,online,36.33,4,0.062,loyalty,2024-01-13\r\n26960,1923,LATAM,toys,partner,20.00,1,0.139,loyalty,2024-08-13\r\n26961,2066,APAC,electronics,online,43.13,3,0.162,none,2024-08-05\r\n26962,2479,EMEA,home,partner,94.69,2,0.192,coupon,2024-08-05\r\n26963,2356,LATAM,grocery,retail,87.59,4,0.199,none,2024-08-19\r\n26964,2007,LATAM,grocery,retail,30.81,6,0.062,loyalty,2024-02-25\r\n26965,1427,EMEA,electronics,online,41.46,1,0.195,none,2024-07-18\r\n26966,1513,APAC,grocery,retail,126.19,5,0.018,coupon,2024-01-17\r\n26967,1645,EMEA,home,online,20.21,5,0.167,none,2024-02-09\r\n26968,1528,EMEA,home,mobile,90.44,2,0.055,none,2024-11-23\r\n26969,2452,LATAM,home,retail,66.60,3,0.009,loyalty,2024-12-02\r\n26970,1995,LATAM,grocery,retail,33.07,7,0.073,coupon,2024-06-08\r\n26971,1433,EMEA,sports,retail,84.48,1,0.192,none,2024-11-10\r\n26972,2351,EMEA,home,retail,67.44,2,0.141,none,2024-05-02\r\n26973,2176,AMER,fashion,online,58.59,5,0.210,loyalty,2024-02-21\r\n26974,1303,LATAM,toys,retail,177.86,4,0.133,bundle,2024-10-04\r\n26975,1420,APAC,fashion,online,21.44,3,0.209,coupon,2024-08-26\r\n26976,1897,AMER,grocery,retail,47.38,4,0.092,coupon,2024-12-20\r\n26977,2406,EMEA,home,online,69.63,6,0.106,loyalty,2024-03-23\r\n26978,1832,APAC,fashion,online,37.15,8,0.139,none,2024-11-03\r\n26979,1750,LATAM,fashion,re
tail,110.90,2,0.148,none,2024-10-21\r\n26980,2167,APAC,grocery,retail,35.13,6,0.156,none,2024-11-17\r\n26981,1068,APAC,electronics,online,107.90,3,0.017,none,2024-12-17\r\n26982,1769,LATAM,grocery,online,75.35,7,0.178,coupon,2024-12-07\r\n26983,1703,AMER,home,retail,107.33,3,0.248,loyalty,2024-03-26\r\n26984,1246,EMEA,electronics,online,21.26,6,0.137,coupon,2024-04-15\r\n26985,1104,APAC,grocery,online,73.25,4,0.227,none,2024-11-02\r\n26986,1604,EMEA,fashion,retail,50.39,4,0.139,bundle,2024-08-25\r\n26987,1601,APAC,home,partner,51.22,3,0.046,none,2024-12-16\r\n26988,2169,EMEA,sports,retail,51.37,8,0.240,coupon,2024-08-10\r\n26989,2336,APAC,electronics,retail,42.69,4,0.123,none,2024-05-11\r\n26990,1725,APAC,home,retail,35.27,8,0.061,none,2024-04-03\r\n26991,2396,AMER,grocery,online,51.72,6,0.087,none,2024-02-06\r\n26992,1496,AMER,electronics,online,78.64,5,0.056,none,2024-03-18\r\n26993,1720,AMER,grocery,partner,120.04,6,0.055,none,2024-07-01\r\n26994,1820,AMER,toys,online,83.69,8,0.019,none,2024-09-11\r\n26995,1556,AMER,fashion,mobile,43.75,8,0.111,coupon,2024-11-07\r\n26996,1907,EMEA,grocery,online,33.25,2,0.089,none,2024-02-14\r\n26997,2029,APAC,home,online,37.96,3,0.065,coupon,2024-09-05\r\n26998,1149,LATAM,fashion,online,75.67,4,0.100,loyalty,2024-04-16\r\n26999,1579,AMER,home,retail,47.44,7,0.068,none,2024-02-03\r\n27000,1492,APAC,electronics,online,35.69,5,0.212,none,2024-03-10\r\n27001,2351,EMEA,fashion,online,84.37,2,0.105,loyalty,2024-01-22\r\n27002,2310,EMEA,electronics,retail,49.18,4,0.244,coupon,2024-08-18\r\n27003,2047,AMER,electronics,online,80.81,4,0.054,none,2024-09-01\r\n27004,2043,EMEA,home,online,61.89,1,0.246,loyalty,2024-12-02\r\n27005,2338,AMER,toys,retail,101.69,7,0.107,none,2024-06-12\r\n27006,2398,EMEA,sports,mobile,37.06,6,0.023,none,2024-03-01\r\n27007,1126,LATAM,home,mobile,51.67,3,0.084,none,2024-08-13\r\n27008,1177,LATAM,home,mobile,56.53,1,0.004,none,2024-07-11\r\n27009,2411,EMEA,electronics,online,35.98,2,0.111,coupon,2024-05-01\r\n270
10,1633,EMEA,grocery,retail,46.27,6,0.078,bundle,2024-03-03\r\n27011,2127,LATAM,home,online,138.09,2,0.166,none,2024-01-08\r\n27012,1467,LATAM,electronics,mobile,103.89,3,0.246,loyalty,2024-12-22\r\n27013,1776,APAC,sports,retail,35.48,8,0.222,coupon,2024-05-09\r\n27014,2012,APAC,grocery,retail,24.83,7,0.078,none,2024-09-04\r\n27015,1205,APAC,sports,online,51.39,2,0.216,none,2024-03-18\r\n27016,2111,EMEA,home,online,36.91,5,0.038,bundle,2024-04-01\r\n27017,1066,AMER,home,mobile,66.70,2,0.103,none,2024-01-11\r\n27018,1568,AMER,electronics,mobile,78.71,4,0.027,none,2024-12-08\r\n27019,2385,APAC,electronics,mobile,48.91,8,0.034,none,2024-01-08\r\n27020,1705,AMER,grocery,online,68.20,1,0.014,coupon,2024-09-25\r\n27021,1186,APAC,toys,retail,70.73,3,0.208,none,2024-02-01\r\n27022,2059,AMER,home,online,16.68,3,0.048,coupon,2024-09-24\r\n27023,1799,EMEA,home,online,31.16,8,0.148,bundle,2024-09-09\r\n27024,1675,LATAM,toys,retail,48.52,6,0.006,none,2024-03-12\r\n27025,2314,EMEA,fashion,retail,58.02,5,0.111,none,2024-02-12\r\n27026,1325,APAC,home,mobile,80.07,2,0.098,coupon,2024-09-22\r\n27027,1067,APAC,sports,mobile,33.10,3,0.095,none,2024-06-23\r\n27028,1496,AMER,sports,online,53.72,4,0.184,none,2024-04-09\r\n27029,1751,AMER,sports,online,38.96,7,0.234,none,2024-06-17\r\n27030,1197,LATAM,home,online,62.53,1,0.210,none,2024-01-28\r\n27031,2465,EMEA,home,retail,88.81,2,0.161,none,2024-06-18\r\n27032,1163,AMER,fashion,online,20.14,2,0.124,none,2024-07-26\r\n27033,2194,APAC,electronics,online,89.74,1,0.106,none,2024-12-14\r\n27034,1054,EMEA,grocery,retail,34.33,6,0.126,bundle,2024-01-17\r\n27035,1502,APAC,sports,online,199.28,8,0.152,coupon,2024-10-12\r\n27036,2117,EMEA,toys,retail,37.52,8,0.186,none,2024-08-19\r\n27037,2195,APAC,home,mobile,72.95,8,0.223,none,2024-06-13\r\n27038,1418,LATAM,home,retail,58.86,1,0.031,coupon,2024-12-06\r\n27039,2373,LATAM,home,online,36.69,2,0.235,coupon,2024-01-28\r\n27040,2238,AMER,electronics,retail,76.77,2,0.074,none,2024-10-18\r\n27041,2193,AM
ER,fashion,online,73.57,6,0.177,none,2024-11-05\r\n27042,2086,APAC,fashion,online,31.46,8,0.036,none,2024-08-09\r\n27043,1903,LATAM,sports,retail,89.87,6,0.099,none,2024-11-14\r\n27044,1936,EMEA,grocery,retail,83.31,5,0.151,none,2024-03-02\r\n27045,1111,APAC,home,mobile,92.58,1,0.157,loyalty,2024-02-09\r\n27046,1906,APAC,sports,retail,78.04,6,0.107,none,2024-12-25\r\n27047,2202,APAC,home,mobile,63.76,2,0.029,bundle,2024-12-16\r\n27048,1505,EMEA,toys,online,143.19,8,0.137,coupon,2024-02-05\r\n27049,1715,AMER,grocery,online,44.38,8,0.013,bundle,2024-04-20\r\n27050,1464,APAC,home,mobile,96.49,6,0.160,bundle,2024-05-06\r\n27051,1154,LATAM,electronics,retail,81.56,6,0.124,none,2024-10-12\r\n27052,1402,EMEA,fashion,retail,147.59,1,0.141,none,2024-06-02\r\n27053,1612,LATAM,grocery,mobile,36.62,4,0.191,none,2024-01-03\r\n27054,2430,APAC,grocery,mobile,61.42,5,0.040,none,2024-02-26\r\n27055,2189,LATAM,electronics,retail,31.52,3,0.190,none,2024-08-02\r\n27056,2207,APAC,sports,retail,36.46,1,0.013,loyalty,2024-07-02\r\n27057,1847,LATAM,grocery,partner,32.88,7,0.190,coupon,2024-11-05\r\n27058,2424,LATAM,electronics,retail,93.23,7,0.131,none,2024-05-04\r\n27059,1957,AMER,grocery,online,29.68,8,0.096,none,2024-09-20\r\n27060,2378,LATAM,electronics,online,57.70,7,0.215,none,2024-10-12\r\n27061,1625,EMEA,home,retail,56.84,7,0.123,none,2024-03-17\r\n27062,2107,APAC,electronics,retail,67.86,5,0.195,none,2024-02-15\r\n27063,2338,AMER,electronics,online,111.33,8,0.033,coupon,2024-04-18\r\n27064,1373,LATAM,grocery,online,42.85,1,0.121,none,2024-12-13\r\n27065,2346,LATAM,home,retail,47.38,4,0.037,none,2024-06-13\r\n27066,1971,EMEA,home,retail,95.25,3,0.045,none,2024-01-15\r\n27067,1802,AMER,toys,online,56.59,6,0.211,none,2024-01-03\r\n27068,2455,AMER,home,online,71.22,1,0.160,none,2024-12-16\r\n27069,1594,LATAM,grocery,mobile,53.77,5,0.134,none,2024-01-12\r\n27070,1655,LATAM,electronics,retail,41.89,5,0.044,loyalty,2024-02-04\r\n27071,2326,LATAM,sports,retail,54.19,5,0.083,bundle,2024-11
-01\r\n27072,1293,AMER,home,online,22.77,5,0.210,none,2024-10-18\r\n27073,1283,APAC,home,online,53.36,6,0.142,none,2024-09-15\r\n27074,2037,LATAM,home,retail,38.65,8,0.221,none,2024-09-11\r\n27075,2212,EMEA,home,mobile,96.86,8,0.102,coupon,2024-09-26\r\n27076,1547,AMER,electronics,partner,52.67,3,0.140,bundle,2024-10-17\r\n27077,2490,AMER,toys,mobile,120.27,1,0.113,bundle,2024-10-21\r\n27078,1962,APAC,sports,online,45.84,8,0.105,none,2024-03-18\r\n27079,1194,APAC,electronics,online,86.84,6,0.126,none,2024-07-25\r\n27080,1827,EMEA,sports,online,121.89,6,0.112,coupon,2024-10-25\r\n27081,2388,LATAM,fashion,online,19.42,2,0.030,none,2024-01-28\r\n27082,1862,LATAM,grocery,mobile,99.82,1,0.240,none,2024-07-17\r\n27083,1941,AMER,electronics,online,83.31,6,0.127,loyalty,2024-03-16\r\n27084,1024,APAC,fashion,online,83.73,4,0.015,coupon,2024-07-18\r\n27085,2063,APAC,electronics,retail,67.31,3,0.242,none,2024-12-01\r\n27086,1245,APAC,electronics,online,23.93,6,0.201,loyalty,2024-01-04\r\n27087,1154,LATAM,home,retail,70.76,3,0.210,coupon,2024-10-09\r\n27088,2397,LATAM,sports,online,42.06,6,0.090,coupon,2024-01-05\r\n27089,1312,EMEA,sports,online,69.88,8,0.055,none,2024-03-10\r\n27090,1762,LATAM,sports,retail,75.24,5,0.155,none,2024-07-10\r\n27091,2202,APAC,fashion,retail,37.06,1,0.199,bundle,2024-12-14\r\n27092,1882,AMER,grocery,online,54.31,1,0.181,none,2024-08-15\r\n27093,2359,LATAM,grocery,retail,31.72,4,0.023,bundle,2024-06-03\r\n27094,1883,LATAM,toys,retail,49.08,2,0.191,bundle,2024-04-28\r\n27095,1428,APAC,home,online,102.06,4,0.132,none,2024-09-21\r\n27096,1476,APAC,toys,mobile,49.69,6,0.009,loyalty,2024-07-24\r\n27097,1110,LATAM,fashion,retail,19.97,7,0.098,coupon,2024-07-09\r\n27098,1009,APAC,home,online,32.70,5,0.015,none,2024-08-16\r\n27099,2383,APAC,sports,retail,25.29,2,0.205,coupon,2024-12-02\r\n27100,2026,LATAM,fashion,online,95.02,6,0.174,none,2024-07-18\r\n27101,1978,AMER,grocery,online,77.01,6,0.215,none,2024-06-12\r\n27102,2164,AMER,sports,retail,99.72,1,0.07
8,coupon,2024-01-02\r\n27103,1264,APAC,home,mobile,54.91,6,0.200,bundle,2024-01-22\r\n27104,1472,AMER,electronics,online,81.85,7,0.211,coupon,2024-12-09\r\n27105,2060,LATAM,grocery,online,60.56,4,0.145,none,2024-07-05\r\n27106,2278,APAC,electronics,online,43.68,6,0.016,none,2024-05-23\r\n27107,1403,APAC,grocery,online,104.14,5,0.176,coupon,2024-08-09\r\n27108,1563,EMEA,sports,retail,68.60,3,0.185,coupon,2024-04-17\r\n27109,2203,APAC,fashion,mobile,19.71,2,0.049,none,2024-04-26\r\n27110,2326,LATAM,fashion,online,76.49,3,0.142,none,2024-06-19\r\n27111,2273,APAC,electronics,online,38.70,6,0.030,none,2024-10-15\r\n27112,1891,APAC,grocery,retail,76.06,2,0.192,bundle,2024-11-26\r\n27113,1044,EMEA,electronics,online,51.31,4,0.115,none,2024-02-16\r\n27114,2402,AMER,fashion,mobile,37.54,1,0.071,none,2024-04-17\r\n27115,2279,LATAM,grocery,online,59.98,4,0.088,none,2024-01-27\r\n27116,1230,EMEA,home,online,88.49,8,0.052,coupon,2024-06-20\r\n27117,1845,AMER,grocery,online,27.99,2,0.153,bundle,2024-09-17\r\n27118,2221,LATAM,home,retail,62.11,6,0.072,none,2024-01-05\r\n27119,1572,LATAM,toys,retail,76.22,2,0.126,bundle,2024-11-27\r\n27120,1532,APAC,sports,online,45.62,7,0.118,coupon,2024-06-27\r\n27121,2176,AMER,home,online,156.89,7,0.150,none,2024-01-04\r\n27122,1982,EMEA,grocery,mobile,19.25,8,0.007,none,2024-01-09\r\n27123,1259,EMEA,grocery,retail,61.46,1,0.212,none,2024-11-21\r\n27124,1245,APAC,electronics,online,30.79,2,0.138,bundle,2024-02-27\r\n27125,1296,LATAM,toys,online,85.05,4,0.112,coupon,2024-07-03\r\n27126,1513,APAC,sports,mobile,28.89,8,0.160,none,2024-06-15\r\n27127,1980,LATAM,grocery,retail,31.25,2,0.016,none,2024-05-19\r\n27128,1800,APAC,fashion,partner,50.75,2,0.130,coupon,2024-12-08\r\n27129,1174,APAC,toys,online,50.22,8,0.206,none,2024-12-21\r\n27130,1070,EMEA,electronics,online,90.71,8,0.052,none,2024-09-09\r\n27131,2334,LATAM,grocery,online,48.85,5,0.105,coupon,2024-08-10\r\n27132,2440,APAC,toys,online,57.11,8,0.032,loyalty,2024-11-23\r\n27133,2152,EMEA,groc
ery,mobile,82.39,5,0.221,none,2024-03-16\r\n27134,2128,EMEA,grocery,mobile,123.22,8,0.210,coupon,2024-01-14\r\n27135,2315,LATAM,fashion,online,61.54,8,0.218,none,2024-08-24\r\n27136,1798,AMER,sports,online,70.06,4,0.011,none,2024-03-01\r\n27137,1233,AMER,electronics,online,88.96,4,0.142,none,2024-06-19\r\n27138,1366,APAC,electronics,retail,35.08,4,0.177,none,2024-12-27\r\n27139,2258,AMER,sports,retail,50.24,4,0.019,bundle,2024-03-14\r\n27140,1807,EMEA,electronics,online,39.51,5,0.127,none,2024-12-23\r\n27141,2231,LATAM,home,online,55.97,1,0.238,none,2024-07-09\r\n27142,2401,LATAM,home,online,51.25,4,0.015,bundle,2024-08-11\r\n27143,1718,EMEA,grocery,retail,38.62,7,0.245,none,2024-05-09\r\n27144,2299,EMEA,electronics,online,27.03,6,0.074,loyalty,2024-07-13\r\n27145,1147,EMEA,sports,mobile,153.31,1,0.211,bundle,2024-05-05\r\n27146,1500,EMEA,fashion,retail,84.64,4,0.010,bundle,2024-11-19\r\n27147,1580,AMER,grocery,online,52.47,4,0.023,none,2024-01-18\r\n27148,1647,LATAM,fashion,online,53.60,6,0.144,coupon,2024-03-01\r\n27149,2326,LATAM,toys,retail,44.44,7,0.214,none,2024-10-21\r\n27150,2271,LATAM,fashion,online,67.84,7,0.082,none,2024-09-13\r\n27151,1353,EMEA,sports,retail,60.93,8,0.043,bundle,2024-07-24\r\n27152,2406,EMEA,electronics,online,89.44,2,0.152,none,2024-11-05\r\n27153,1164,EMEA,fashion,online,47.93,4,0.157,none,2024-04-10\r\n27154,2175,AMER,fashion,online,33.00,5,0.022,coupon,2024-01-10\r\n27155,1143,LATAM,home,retail,143.87,2,0.058,none,2024-06-08\r\n27156,2401,LATAM,home,mobile,55.62,3,0.019,coupon,2024-08-06\r\n27157,1508,LATAM,electronics,partner,257.43,2,0.176,none,2024-11-06\r\n27158,2091,LATAM,home,retail,71.87,5,0.111,none,2024-11-17\r\n27159,2441,EMEA,fashion,online,107.11,6,0.007,bundle,2024-12-18\r\n27160,1312,EMEA,home,retail,76.44,1,0.155,loyalty,2024-06-19\r\n27161,1937,APAC,grocery,online,46.63,3,0.139,none,2024-01-12\r\n27162,2380,AMER,toys,retail,51.37,5,0.120,coupon,2024-08-10\r\n27163,1573,AMER,fashion,partner,45.67,1,0.169,none,2024-06-1
7\r\n27164,2303,EMEA,sports,online,21.82,1,0.087,none,2024-04-12\r\n27165,2436,LATAM,electronics,online,56.70,3,0.225,none,2024-03-11\r\n27166,1847,LATAM,grocery,online,37.39,5,0.070,none,2024-02-08\r\n27167,2346,LATAM,electronics,online,74.13,7,0.023,bundle,2024-11-01\r\n27168,2039,EMEA,sports,mobile,48.61,6,0.114,none,2024-11-19\r\n27169,2459,AMER,sports,retail,23.66,1,0.204,none,2024-11-26\r\n27170,1549,APAC,home,online,71.77,3,0.138,loyalty,2024-09-08\r\n27171,2180,AMER,grocery,online,30.44,2,0.119,none,2024-10-04\r\n27172,1480,APAC,home,online,39.09,5,0.213,none,2024-11-27\r\n27173,2459,AMER,fashion,online,79.51,2,0.223,bundle,2024-02-13\r\n27174,2424,LATAM,grocery,retail,61.54,2,0.061,bundle,2024-05-13\r\n27175,1245,APAC,sports,online,141.58,1,0.223,none,2024-10-21\r\n27176,2220,LATAM,grocery,retail,49.65,6,0.126,coupon,2024-12-01\r\n27177,1552,EMEA,grocery,online,69.42,6,0.124,bundle,2024-12-10\r\n27178,2045,LATAM,electronics,online,111.42,8,0.041,none,2024-04-10\r\n27179,2087,LATAM,toys,retail,49.02,3,0.245,coupon,2024-12-19\r\n27180,1570,AMER,toys,online,40.56,4,0.121,bundle,2024-11-15\r\n27181,1993,APAC,home,mobile,27.44,2,0.029,none,2024-03-14\r\n27182,2404,EMEA,sports,online,156.70,6,0.040,none,2024-12-24\r\n27183,1586,LATAM,electronics,retail,50.57,5,0.040,loyalty,2024-11-10\r\n27184,1453,APAC,toys,online,43.76,6,0.000,none,2024-09-04\r\n27185,2053,AMER,electronics,retail,48.45,4,0.039,none,2024-01-22\r\n27186,1828,EMEA,electronics,retail,44.59,2,0.233,none,2024-07-22\r\n27187,1098,APAC,electronics,retail,53.01,1,0.239,none,2024-10-22\r\n27188,2267,AMER,grocery,partner,60.45,2,0.029,none,2024-05-09\r\n27189,1013,LATAM,grocery,mobile,112.07,1,0.096,coupon,2024-02-05\r\n27190,2486,APAC,electronics,mobile,39.79,5,0.097,bundle,2024-01-24\r\n27191,2010,APAC,sports,retail,31.46,6,0.130,coupon,2024-09-14\r\n27192,2154,APAC,fashion,retail,73.41,5,0.242,none,2024-08-08\r\n27193,2075,LATAM,home,online,80.44,3,0.145,none,2024-10-15\r\n27194,1816,EMEA,home,retail,5
7.37,8,0.199,bundle,2024-03-28\r\n27195,1715,AMER,fashion,retail,30.96,6,0.249,bundle,2024-07-01\r\n27196,2142,LATAM,toys,online,106.55,5,0.111,none,2024-01-20\r\n27197,1207,APAC,toys,mobile,102.27,5,0.105,none,2024-12-18\r\n27198,1237,LATAM,grocery,mobile,15.11,8,0.182,none,2024-07-13\r\n27199,2005,APAC,fashion,online,59.31,3,0.193,coupon,2024-03-14\r\n27200,1333,EMEA,toys,online,66.89,7,0.070,none,2024-07-04\r\n27201,1392,AMER,grocery,retail,72.48,4,0.154,coupon,2024-11-16\r\n27202,1329,APAC,grocery,mobile,73.94,7,0.093,coupon,2024-08-13\r\n27203,2202,APAC,fashion,online,36.44,2,0.179,none,2024-10-18\r\n27204,2284,EMEA,grocery,online,65.91,1,0.013,coupon,2024-07-22\r\n27205,2091,LATAM,toys,online,37.66,6,0.129,none,2024-04-27\r\n27206,2289,APAC,home,online,38.00,4,0.092,coupon,2024-04-21\r\n27207,2333,APAC,electronics,retail,52.50,4,0.036,bundle,2024-05-15\r\n27208,1847,LATAM,grocery,mobile,75.24,7,0.039,none,2024-11-05\r\n27209,1095,APAC,fashion,mobile,76.68,6,0.080,bundle,2024-05-19\r\n27210,2460,AMER,toys,mobile,39.04,7,0.069,none,2024-08-27\r\n27211,1568,AMER,electronics,retail,90.77,7,0.145,loyalty,2024-02-16\r\n27212,1419,APAC,fashion,online,29.88,8,0.129,none,2024-10-27\r\n27213,1583,AMER,sports,online,85.81,7,0.200,none,2024-10-28\r\n27214,2114,AMER,fashion,mobile,110.90,2,0.237,loyalty,2024-04-07\r\n27215,1345,AMER,grocery,online,33.52,3,0.003,coupon,2024-06-02\r\n27216,1941,AMER,grocery,online,32.08,8,0.135,loyalty,2024-07-11\r\n27217,1833,EMEA,electronics,retail,60.83,3,0.219,none,2024-03-03\r\n27218,1223,LATAM,sports,online,65.13,5,0.099,none,2024-12-26\r\n27219,1949,AMER,electronics,retail,50.37,4,0.232,loyalty,2024-12-14\r\n27220,1086,AMER,fashion,retail,35.42,1,0.029,none,2024-01-21\r\n27221,1685,AMER,grocery,online,89.83,3,0.167,none,2024-10-23\r\n27222,1141,AMER,fashion,retail,34.69,5,0.049,none,2024-11-01\r\n27223,1930,AMER,home,retail,24.95,7,0.114,none,2024-07-03\r\n27224,1964,EMEA,toys,online,122.57,2,0.195,none,2024-05-05\r\n27225,1607,LATAM,
electronics,retail,65.40,4,0.027,none,2024-06-20\r\n27226,1020,APAC,sports,retail,31.10,6,0.092,none,2024-01-16\r\n27227,1941,AMER,sports,online,25.71,8,0.053,none,2024-04-27\r\n27228,1722,EMEA,sports,retail,34.51,1,0.076,coupon,2024-01-24\r\n27229,1785,EMEA,electronics,online,45.26,2,0.141,none,2024-09-17\r\n27230,1232,LATAM,grocery,retail,18.15,4,0.016,bundle,2024-09-01\r\n27231,2089,EMEA,grocery,retail,50.12,3,0.242,loyalty,2024-03-17\r\n27232,1159,LATAM,toys,online,78.26,5,0.080,bundle,2024-06-02\r\n27233,2416,LATAM,electronics,retail,79.06,1,0.222,none,2024-05-12\r\n27234,1545,AMER,electronics,retail,145.72,3,0.194,none,2024-05-07\r\n27235,1552,EMEA,sports,online,30.69,4,0.182,none,2024-03-13\r\n27236,1627,LATAM,home,mobile,72.04,2,0.147,none,2024-11-17\r\n27237,2364,APAC,fashion,online,129.83,6,0.166,none,2024-09-27\r\n27238,1922,EMEA,sports,mobile,89.04,3,0.159,none,2024-08-24\r\n27239,1053,AMER,grocery,retail,66.30,5,0.176,none,2024-04-02\r\n27240,1750,LATAM,home,online,111.91,2,0.239,none,2024-10-28\r\n27241,1793,LATAM,electronics,online,52.20,8,0.067,none,2024-02-20\r\n27242,1023,APAC,toys,partner,68.49,8,0.013,none,2024-03-07\r\n27243,1336,APAC,grocery,online,124.56,6,0.246,none,2024-02-26\r\n27244,1626,EMEA,home,online,57.57,7,0.005,none,2024-08-02\r\n27245,1317,EMEA,grocery,retail,52.33,7,0.207,none,2024-01-25\r\n27246,1155,EMEA,grocery,online,25.94,7,0.213,coupon,2024-12-04\r\n27247,2039,EMEA,grocery,retail,50.31,1,0.043,coupon,2024-03-18\r\n27248,2295,EMEA,grocery,online,28.29,1,0.248,loyalty,2024-11-18\r\n27249,1221,LATAM,sports,partner,37.47,6,0.153,loyalty,2024-01-16\r\n27250,1054,EMEA,home,mobile,36.36,1,0.205,coupon,2024-10-19\r\n27251,2349,APAC,home,retail,112.10,6,0.037,none,2024-05-02\r\n27252,2309,AMER,electronics,online,86.73,8,0.196,none,2024-11-11\r\n27253,1505,EMEA,grocery,online,67.83,1,0.127,bundle,2024-09-13\r\n27254,2168,EMEA,grocery,retail,37.61,6,0.223,none,2024-10-22\r\n27255,1004,LATAM,fashion,online,48.67,6,0.182,coupon,2024-02-2
3\r\n27256,1303,LATAM,grocery,retail,50.88,2,0.017,bundle,2024-05-15\r\n27257,1087,AMER,grocery,retail,41.97,8,0.016,coupon,2024-06-11\r\n27258,1257,APAC,grocery,online,49.38,5,0.127,bundle,2024-06-25\r\n27259,1890,LATAM,fashion,online,73.61,6,0.001,coupon,2024-03-15\r\n27260,2022,LATAM,home,online,77.75,6,0.093,none,2024-12-04\r\n27261,1530,APAC,home,retail,25.39,1,0.241,bundle,2024-03-02\r\n27262,1958,APAC,grocery,mobile,48.99,1,0.187,none,2024-04-11\r\n27263,1303,LATAM,grocery,retail,49.42,1,0.016,coupon,2024-08-12\r\n27264,1403,APAC,toys,mobile,53.83,2,0.173,none,2024-01-22\r\n27265,2150,APAC,grocery,online,51.99,1,0.165,bundle,2024-02-06\r\n27266,1644,EMEA,grocery,partner,50.84,5,0.131,none,2024-09-27\r\n27267,2377,AMER,sports,online,131.84,2,0.034,none,2024-07-24\r\n27268,2334,LATAM,home,online,23.64,7,0.079,bundle,2024-02-05\r\n27269,1172,APAC,grocery,online,38.15,7,0.191,none,2024-01-24\r\n27270,2468,EMEA,toys,retail,26.80,3,0.076,none,2024-08-18\r\n27271,1070,EMEA,toys,mobile,31.21,5,0.054,none,2024-06-09\r\n27272,1910,LATAM,grocery,partner,97.99,1,0.181,loyalty,2024-08-17\r\n27273,1777,AMER,grocery,online,68.99,1,0.048,none,2024-12-14\r\n27274,1182,EMEA,grocery,online,43.25,6,0.175,none,2024-03-20\r\n27275,1414,APAC,toys,partner,84.16,4,0.230,bundle,2024-05-06\r\n27276,1518,AMER,toys,online,142.80,8,0.086,none,2024-07-20\r\n27277,1124,AMER,toys,online,70.95,5,0.229,none,2024-04-02\r\n27278,1679,APAC,electronics,retail,60.95,7,0.170,coupon,2024-03-19\r\n27279,1614,EMEA,toys,retail,73.93,1,0.079,none,2024-06-07\r\n27280,1211,EMEA,fashion,mobile,146.25,8,0.043,none,2024-12-06\r\n27281,1563,EMEA,electronics,retail,153.89,4,0.110,none,2024-08-26\r\n27282,1969,LATAM,fashion,online,55.81,7,0.081,none,2024-08-06\r\n27283,2195,APAC,fashion,online,119.56,8,0.169,bundle,2024-02-06\r\n27284,1503,APAC,home,retail,36.63,6,0.054,loyalty,2024-08-10\r\n27285,1084,AMER,sports,retail,52.84,6,0.073,none,2024-09-08\r\n27286,2303,EMEA,grocery,retail,50.99,1,0.190,none,2024-07-1
7\r\n27287,1499,EMEA,fashion,partner,69.64,3,0.001,none,2024-08-17\r\n27288,2157,AMER,grocery,online,29.82,5,0.080,loyalty,2024-01-08\r\n27289,1902,AMER,fashion,retail,113.75,7,0.050,coupon,2024-04-23\r\n27290,1015,AMER,toys,retail,61.38,7,0.073,loyalty,2024-10-26\r\n27291,1554,AMER,grocery,online,142.21,7,0.081,none,2024-10-19\r\n27292,2356,LATAM,home,online,87.99,3,0.132,bundle,2024-11-15\r\n27293,1985,AMER,home,online,71.65,7,0.053,none,2024-08-08\r\n27294,2286,AMER,fashion,online,111.70,1,0.048,loyalty,2024-06-26\r\n27295,2044,APAC,grocery,partner,37.85,3,0.186,none,2024-07-01\r\n27296,1747,EMEA,fashion,retail,62.71,3,0.214,none,2024-08-25\r\n27297,1485,APAC,home,retail,66.84,8,0.180,bundle,2024-06-14\r\n27298,1380,AMER,grocery,retail,78.85,3,0.033,none,2024-05-02\r\n27299,2193,AMER,home,partner,201.18,1,0.079,none,2024-11-16\r\n27300,1771,AMER,grocery,mobile,81.87,7,0.078,none,2024-06-23\r\n27301,1436,APAC,grocery,retail,75.56,5,0.111,bundle,2024-06-20\r\n27302,2339,AMER,grocery,retail,26.75,1,0.126,none,2024-11-18\r\n27303,1031,AMER,fashion,retail,67.76,6,0.103,bundle,2024-11-22\r\n27304,2493,APAC,grocery,online,59.53,7,0.186,loyalty,2024-10-05\r\n27305,1303,LATAM,electronics,mobile,54.78,3,0.230,none,2024-05-25\r\n27306,1809,APAC,fashion,online,113.15,2,0.170,none,2024-11-05\r\n27307,1031,AMER,sports,retail,35.71,5,0.174,none,2024-11-12\r\n27308,1040,LATAM,grocery,online,72.70,6,0.027,none,2024-01-03\r\n27309,2399,LATAM,electronics,retail,54.38,5,0.110,none,2024-10-24\r\n27310,1788,AMER,grocery,partner,62.39,8,0.243,coupon,2024-11-11\r\n27311,1771,AMER,electronics,online,13.94,1,0.116,bundle,2024-08-11\r\n27312,1751,AMER,electronics,online,15.54,6,0.072,loyalty,2024-07-06\r\n27313,1986,LATAM,electronics,partner,56.97,3,0.133,coupon,2024-03-22\r\n27314,1186,APAC,electronics,retail,70.14,4,0.150,none,2024-09-18\r\n27315,1791,LATAM,home,online,82.70,6,0.072,none,2024-09-13\r\n27316,2402,AMER,toys,online,20.43,2,0.050,bundle,2024-06-10\r\n27317,2184,APAC,home,mob
ile,46.47,1,0.176,none,2024-03-22\r\n27318,1597,APAC,grocery,online,89.21,7,0.175,coupon,2024-02-02\r\n27319,1092,AMER,fashion,online,90.99,7,0.014,none,2024-05-16\r\n27320,1722,EMEA,grocery,retail,90.71,1,0.019,loyalty,2024-10-01\r\n27321,1340,LATAM,home,mobile,40.56,7,0.172,coupon,2024-04-13\r\n27322,1255,AMER,electronics,mobile,44.93,2,0.241,none,2024-05-05\r\n27323,2439,AMER,grocery,retail,55.85,8,0.060,none,2024-05-04\r\n27324,1723,LATAM,fashion,online,97.74,8,0.208,coupon,2024-08-05\r\n27325,1841,AMER,fashion,online,16.74,4,0.209,none,2024-05-15\r\n27326,1251,EMEA,sports,retail,71.52,3,0.229,none,2024-10-10\r\n27327,1428,APAC,grocery,retail,46.85,3,0.128,none,2024-11-21\r\n27328,1727,APAC,electronics,retail,54.10,5,0.051,none,2024-12-25\r\n27329,2255,AMER,grocery,retail,57.09,5,0.190,none,2024-01-19\r\n27330,1500,EMEA,electronics,online,84.20,3,0.142,bundle,2024-11-02\r\n27331,1453,APAC,fashion,online,59.92,2,0.203,coupon,2024-01-10\r\n27332,1316,APAC,electronics,retail,101.66,1,0.109,none,2024-01-16\r\n27333,1588,LATAM,sports,online,61.18,4,0.218,coupon,2024-09-13\r\n27334,1826,LATAM,toys,mobile,45.28,8,0.209,none,2024-06-13\r\n27335,2112,LATAM,electronics,online,63.46,1,0.183,none,2024-12-15\r\n27336,1906,APAC,fashion,online,49.27,3,0.143,coupon,2024-07-01\r\n27337,1495,LATAM,grocery,mobile,28.77,4,0.143,none,2024-10-23\r\n27338,1735,LATAM,grocery,mobile,46.40,2,0.163,none,2024-01-07\r\n27339,1926,AMER,grocery,mobile,99.18,3,0.037,none,2024-04-16\r\n27340,1796,LATAM,toys,online,62.76,8,0.162,bundle,2024-04-16\r\n27341,2040,LATAM,home,retail,88.17,4,0.208,none,2024-01-04\r\n27342,1319,EMEA,fashion,retail,66.00,1,0.013,coupon,2024-10-23\r\n27343,1733,LATAM,grocery,mobile,59.08,1,0.095,bundle,2024-08-05\r\n27344,1097,EMEA,grocery,retail,52.80,3,0.142,loyalty,2024-11-18\r\n27345,1804,AMER,fashion,online,36.27,6,0.200,bundle,2024-01-12\r\n27346,1916,AMER,home,retail,34.28,2,0.045,bundle,2024-07-10\r\n27347,1553,LATAM,grocery,online,31.33,8,0.110,none,2024-02-13\r
\n27348,1424,APAC,fashion,retail,73.73,8,0.021,none,2024-05-07\r\n27349,2391,EMEA,fashion,mobile,108.40,1,0.043,coupon,2024-06-09\r\n27350,1003,APAC,grocery,online,113.32,5,0.027,none,2024-09-17\r\n27351,2462,EMEA,home,retail,135.89,2,0.192,coupon,2024-08-17\r\n27352,2478,AMER,toys,online,45.87,1,0.170,loyalty,2024-06-25\r\n27353,2322,AMER,toys,retail,45.89,4,0.137,bundle,2024-12-04\r\n27354,2000,APAC,electronics,partner,22.95,3,0.164,loyalty,2024-10-22\r\n27355,2005,APAC,home,online,20.95,7,0.149,none,2024-06-05\r\n27356,1989,LATAM,sports,online,77.51,3,0.036,none,2024-01-09\r\n27357,1503,APAC,fashion,retail,30.40,7,0.054,none,2024-11-11\r\n27358,1082,EMEA,home,online,77.10,4,0.174,none,2024-07-06\r\n27359,2269,EMEA,toys,retail,19.72,8,0.060,none,2024-01-11\r\n27360,2428,LATAM,grocery,retail,83.79,8,0.069,loyalty,2024-12-22\r\n27361,1271,EMEA,electronics,retail,51.75,6,0.145,none,2024-12-27\r\n27362,2043,EMEA,home,retail,38.43,5,0.228,none,2024-01-19\r\n27363,1657,LATAM,grocery,online,70.66,2,0.020,none,2024-05-22\r\n27364,2371,LATAM,home,mobile,128.42,3,0.078,loyalty,2024-06-26\r\n27365,1222,AMER,grocery,online,20.20,8,0.041,none,2024-12-10\r\n27366,1875,EMEA,toys,retail,33.47,7,0.134,bundle,2024-07-17\r\n27367,1951,LATAM,home,partner,46.48,2,0.230,coupon,2024-11-13\r\n27368,2384,LATAM,fashion,retail,91.69,2,0.003,coupon,2024-08-11\r\n27369,1189,AMER,electronics,mobile,39.43,7,0.129,coupon,2024-03-11\r\n27370,1117,LATAM,home,retail,28.92,1,0.002,none,2024-05-27\r\n27371,1797,LATAM,home,online,68.23,3,0.115,none,2024-09-22\r\n27372,1855,APAC,toys,retail,71.97,7,0.137,bundle,2024-09-09\r\n27373,2363,AMER,fashion,retail,125.43,8,0.187,none,2024-11-21\r\n27374,1711,APAC,toys,online,94.29,8,0.000,none,2024-09-11\r\n27375,1246,EMEA,electronics,online,23.63,4,0.064,none,2024-04-18\r\n27376,1411,LATAM,fashion,mobile,46.26,8,0.155,bundle,2024-08-09\r\n27377,1191,EMEA,fashion,online,73.88,8,0.099,none,2024-05-27\r\n27378,1522,LATAM,electronics,retail,71.49,7,0.225,none,2024
-08-20\r\n27379,1426,AMER,home,retail,55.45,4,0.026,loyalty,2024-02-25\r\n27380,2456,APAC,electronics,online,29.38,3,0.233,none,2024-01-17\r\n27381,2046,APAC,electronics,online,135.75,8,0.217,none,2024-08-18\r\n27382,1912,APAC,electronics,mobile,106.87,7,0.115,bundle,2024-01-14\r\n27383,2117,EMEA,grocery,online,57.30,6,0.097,none,2024-12-08\r\n27384,1228,APAC,grocery,partner,59.96,6,0.172,none,2024-06-07\r\n27385,1925,LATAM,home,online,81.41,7,0.233,none,2024-09-19\r\n27386,1081,AMER,electronics,mobile,67.93,6,0.169,loyalty,2024-07-18\r\n27387,2125,LATAM,grocery,online,91.04,6,0.107,coupon,2024-04-01\r\n27388,2056,LATAM,grocery,online,81.98,6,0.017,none,2024-12-18\r\n27389,2400,EMEA,electronics,partner,54.11,6,0.138,none,2024-04-12\r\n27390,1158,LATAM,sports,retail,157.85,6,0.248,coupon,2024-05-13\r\n27391,2354,LATAM,toys,mobile,89.33,4,0.066,none,2024-05-27\r\n27392,2374,LATAM,fashion,retail,61.45,4,0.170,coupon,2024-08-25\r\n27393,1514,LATAM,electronics,retail,89.00,1,0.168,none,2024-07-21\r\n27394,1339,EMEA,grocery,online,110.08,8,0.185,none,2024-08-09\r\n27395,1249,EMEA,grocery,retail,34.80,2,0.156,coupon,2024-07-09\r\n27396,1388,AMER,fashion,retail,34.14,4,0.090,none,2024-05-06\r\n27397,1927,EMEA,grocery,retail,139.87,3,0.171,none,2024-02-02\r\n27398,1651,LATAM,grocery,online,16.29,2,0.049,bundle,2024-12-20\r\n27399,1580,AMER,grocery,online,19.51,7,0.194,loyalty,2024-11-06\r\n27400,2219,LATAM,fashion,retail,52.69,5,0.218,none,2024-03-01\r\n27401,2375,AMER,home,online,49.97,1,0.014,bundle,2024-09-25\r\n27402,1680,LATAM,home,retail,37.74,5,0.245,none,2024-09-24\r\n27403,1243,AMER,fashion,retail,80.67,6,0.192,none,2024-01-01\r\n27404,1716,LATAM,home,retail,59.64,1,0.093,coupon,2024-07-19\r\n27405,1707,APAC,grocery,online,30.09,1,0.045,none,2024-01-12\r\n27406,1171,APAC,home,retail,50.72,4,0.158,bundle,2024-11-03\r\n27407,2153,APAC,sports,retail,80.42,6,0.065,none,2024-11-18\r\n27408,1568,AMER,electronics,retail,72.42,4,0.031,loyalty,2024-07-22\r\n27409,1036,EMEA,g
rocery,retail,29.98,3,0.005,loyalty,2024-07-03\r\n27410,1749,LATAM,home,online,44.07,1,0.120,none,2024-07-23\r\n27411,1346,AMER,grocery,online,121.50,4,0.152,bundle,2024-03-01\r\n27412,2030,EMEA,fashion,partner,95.37,5,0.218,none,2024-06-18\r\n27413,1724,LATAM,electronics,online,63.20,4,0.228,none,2024-01-22\r\n27414,2248,LATAM,grocery,retail,64.53,6,0.247,loyalty,2024-08-17\r\n27415,2174,LATAM,sports,online,69.24,6,0.123,coupon,2024-01-26\r\n27416,1963,AMER,home,retail,63.70,8,0.158,none,2024-06-23\r\n27417,1320,EMEA,toys,retail,141.38,7,0.181,coupon,2024-06-02\r\n27418,1423,EMEA,fashion,retail,50.97,8,0.070,none,2024-08-05\r\n27419,2200,LATAM,home,partner,85.97,6,0.192,none,2024-04-26\r\n27420,2163,EMEA,fashion,online,86.65,2,0.027,loyalty,2024-06-20\r\n27421,1616,APAC,toys,online,32.39,3,0.194,bundle,2024-05-18\r\n27422,2239,EMEA,grocery,retail,62.52,7,0.051,none,2024-12-23\r\n27423,1208,AMER,grocery,mobile,47.99,5,0.135,bundle,2024-03-26\r\n27424,1427,EMEA,grocery,online,76.81,1,0.024,coupon,2024-09-03\r\n27425,1438,APAC,home,online,17.91,3,0.095,none,2024-01-25\r\n27426,1391,LATAM,grocery,online,85.73,1,0.136,none,2024-11-07\r\n27427,1841,AMER,home,online,81.25,6,0.156,coupon,2024-10-16\r\n27428,2148,EMEA,grocery,retail,27.24,4,0.028,coupon,2024-12-06\r\n27429,1364,EMEA,electronics,retail,71.40,8,0.003,bundle,2024-05-18\r\n27430,1801,LATAM,toys,mobile,88.30,8,0.053,none,2024-03-24\r\n27431,1483,EMEA,home,retail,64.87,5,0.063,none,2024-11-17\r\n27432,1189,AMER,electronics,retail,17.04,8,0.191,none,2024-08-08\r\n27433,1060,LATAM,home,retail,60.55,7,0.225,none,2024-07-01\r\n27434,1941,AMER,home,online,53.23,7,0.000,none,2024-04-10\r\n27435,2138,APAC,toys,retail,102.48,7,0.090,none,2024-03-14\r\n27436,1883,LATAM,home,retail,75.75,4,0.167,none,2024-08-01\r\n27437,2221,LATAM,fashion,online,104.43,1,0.153,bundle,2024-06-04\r\n27438,2446,LATAM,grocery,retail,58.40,5,0.228,none,2024-02-10\r\n27439,1816,EMEA,home,retail,57.55,1,0.217,none,2024-02-14\r\n27440,2331,APAC,gr
ocery,online,20.72,2,0.189,loyalty,2024-10-19\r\n27441,1717,AMER,grocery,retail,158.92,6,0.082,none,2024-10-27\r\n27442,1494,AMER,grocery,mobile,52.72,6,0.022,coupon,2024-03-13\r\n27443,1568,AMER,fashion,online,60.35,8,0.057,none,2024-09-18\r\n27444,1032,AMER,grocery,mobile,59.89,5,0.228,none,2024-12-25\r\n27445,2070,APAC,toys,retail,78.71,6,0.149,none,2024-03-18\r\n27446,1565,AMER,fashion,online,63.89,6,0.089,coupon,2024-06-27\r\n27447,2240,LATAM,grocery,partner,58.66,1,0.052,loyalty,2024-03-16\r\n27448,1665,AMER,toys,retail,66.05,5,0.214,none,2024-02-24\r\n27449,1900,APAC,sports,mobile,186.14,6,0.241,none,2024-03-03\r\n27450,1529,LATAM,grocery,mobile,225.19,5,0.057,none,2024-02-17\r\n27451,1087,AMER,home,mobile,199.42,4,0.247,coupon,2024-02-12\r\n27452,1868,AMER,electronics,retail,40.07,7,0.062,bundle,2024-02-11\r\n27453,1668,AMER,electronics,online,157.20,1,0.047,loyalty,2024-02-25\r\n27454,2359,LATAM,fashion,online,33.12,7,0.125,none,2024-02-25\r\n27455,1055,AMER,electronics,retail,63.22,6,0.072,coupon,2024-09-09\r\n27456,2063,APAC,grocery,online,63.90,8,0.121,none,2024-12-20\r\n27457,1062,EMEA,home,online,44.11,8,0.204,coupon,2024-09-13\r\n27458,2256,AMER,toys,retail,42.10,8,0.049,coupon,2024-05-05\r\n27459,1511,EMEA,sports,online,38.79,2,0.133,bundle,2024-05-18\r\n27460,1762,LATAM,electronics,mobile,132.82,2,0.221,none,2024-10-10\r\n27461,1942,APAC,sports,online,45.90,3,0.136,none,2024-05-26\r\n27462,2219,LATAM,toys,mobile,80.02,4,0.192,loyalty,2024-12-11\r\n27463,1006,AMER,home,retail,50.58,7,0.145,loyalty,2024-08-10\r\n27464,2307,LATAM,fashion,retail,52.47,5,0.109,loyalty,2024-03-20\r\n27465,2222,LATAM,electronics,mobile,81.38,5,0.175,none,2024-12-02\r\n27466,1921,LATAM,toys,mobile,25.72,1,0.102,loyalty,2024-10-08\r\n27467,1060,LATAM,grocery,mobile,45.32,2,0.173,none,2024-03-26\r\n27468,1584,EMEA,grocery,online,116.59,2,0.147,none,2024-02-01\r\n27469,2446,LATAM,sports,retail,42.60,8,0.226,bundle,2024-08-02\r\n27470,1390,APAC,toys,online,64.70,6,0.097,coupon,
2024-05-09\r\n27471,2146,APAC,grocery,mobile,23.11,2,0.146,coupon,2024-05-05\r\n27472,2079,EMEA,grocery,online,66.12,4,0.046,none,2024-09-07\r\n27473,1315,AMER,electronics,retail,111.46,3,0.222,none,2024-08-27\r\n27474,2219,LATAM,electronics,online,18.72,1,0.234,none,2024-07-27\r\n27475,2473,EMEA,home,online,75.09,6,0.064,coupon,2024-07-03\r\n27476,1129,LATAM,grocery,online,95.13,7,0.203,none,2024-09-15\r\n27477,1154,LATAM,toys,retail,44.91,8,0.023,bundle,2024-01-03\r\n27478,1122,AMER,home,online,30.63,2,0.161,none,2024-05-15\r\n27479,2134,AMER,fashion,online,53.39,8,0.177,none,2024-06-08\r\n27480,2164,AMER,grocery,online,43.48,2,0.098,coupon,2024-08-26\r\n27481,1009,APAC,grocery,online,39.01,1,0.192,bundle,2024-06-25\r\n27482,1400,EMEA,sports,retail,48.16,4,0.220,loyalty,2024-03-05\r\n27483,2250,AMER,home,online,66.54,1,0.090,coupon,2024-03-28\r\n27484,1906,APAC,grocery,mobile,42.19,6,0.108,none,2024-06-22\r\n27485,2262,APAC,grocery,retail,23.57,4,0.149,none,2024-04-16\r\n27486,1463,EMEA,electronics,online,50.90,5,0.229,none,2024-11-12\r\n27487,2041,LATAM,electronics,retail,23.33,4,0.023,none,2024-08-24\r\n27488,1420,APAC,sports,retail,15.51,1,0.091,bundle,2024-03-03\r\n27489,1525,APAC,sports,online,109.07,7,0.106,coupon,2024-11-01\r\n27490,1232,LATAM,sports,online,20.32,2,0.150,none,2024-02-21\r\n27491,2150,APAC,electronics,online,18.61,2,0.213,bundle,2024-08-06\r\n27492,2060,LATAM,fashion,online,22.02,4,0.196,none,2024-12-11\r\n27493,2027,EMEA,sports,online,42.26,4,0.247,loyalty,2024-12-05\r\n27494,1822,EMEA,sports,online,84.51,3,0.088,none,2024-07-27\r\n27495,2445,APAC,grocery,retail,40.70,2,0.047,bundle,2024-07-28\r\n27496,1993,APAC,grocery,online,137.07,4,0.036,none,2024-02-04\r\n27497,1582,AMER,fashion,mobile,34.44,5,0.241,none,2024-12-19\r\n27498,1003,APAC,toys,retail,61.12,3,0.213,none,2024-02-20\r\n27499,2266,LATAM,grocery,online,55.73,8,0.083,none,2024-10-08\r\n27500,2047,AMER,grocery,online,26.59,5,0.006,coupon,2024-06-17\r\n27501,1217,EMEA,grocery,partn
er,87.03,4,0.032,bundle,2024-09-11\r\n27502,1334,APAC,grocery,online,42.22,6,0.041,loyalty,2024-03-15\r\n27503,1478,EMEA,electronics,partner,34.65,8,0.228,coupon,2024-06-28\r\n27504,2202,APAC,sports,partner,47.05,8,0.198,none,2024-04-19\r\n27505,1430,EMEA,grocery,retail,57.06,7,0.140,bundle,2024-03-07\r\n27506,1041,APAC,electronics,retail,69.81,6,0.064,none,2024-06-07\r\n27507,1149,LATAM,grocery,retail,68.45,7,0.027,none,2024-02-07\r\n27508,1019,APAC,fashion,online,261.97,4,0.092,coupon,2024-03-27\r\n27509,1513,APAC,electronics,partner,102.44,1,0.063,none,2024-02-15\r\n27510,1516,EMEA,fashion,online,69.60,7,0.022,none,2024-06-10\r\n27511,1279,EMEA,electronics,online,36.94,5,0.027,none,2024-06-12\r\n27512,1143,LATAM,toys,online,60.37,5,0.004,coupon,2024-05-22\r\n27513,1795,EMEA,fashion,retail,76.21,5,0.200,none,2024-01-23\r\n27514,1296,LATAM,sports,mobile,20.98,8,0.041,none,2024-04-19\r\n27515,1683,AMER,fashion,online,64.84,6,0.156,none,2024-09-16\r\n27516,2284,EMEA,electronics,online,78.36,6,0.103,bundle,2024-01-08\r\n27517,1482,AMER,grocery,retail,105.28,6,0.107,coupon,2024-10-25\r\n27518,1763,LATAM,grocery,retail,34.72,4,0.013,coupon,2024-10-25\r\n27519,1725,APAC,home,online,45.77,6,0.011,none,2024-05-25\r\n27520,1748,APAC,electronics,retail,68.35,7,0.205,coupon,2024-01-01\r\n27521,2315,LATAM,toys,mobile,128.67,5,0.093,none,2024-10-15\r\n27522,1224,APAC,home,online,62.13,4,0.124,none,2024-12-18\r\n27523,1641,EMEA,fashion,online,44.91,6,0.100,none,2024-07-01\r\n27524,2217,LATAM,grocery,retail,67.43,2,0.074,none,2024-08-15\r\n27525,1594,LATAM,electronics,online,70.72,1,0.101,none,2024-01-05\r\n27526,2357,EMEA,electronics,retail,58.94,4,0.007,none,2024-09-17\r\n27527,2125,LATAM,sports,online,132.70,8,0.023,none,2024-08-24\r\n27528,1237,LATAM,sports,retail,30.29,8,0.208,coupon,2024-03-28\r\n27529,1812,EMEA,electronics,mobile,101.43,1,0.011,none,2024-09-04\r\n27530,1953,EMEA,grocery,partner,104.31,1,0.043,none,2024-11-08\r\n27531,2371,LATAM,grocery,partner,30.28,1,0.10
8,coupon,2024-12-24\r\n27532,2226,EMEA,electronics,retail,84.41,7,0.049,loyalty,2024-03-11\r\n27533,1261,APAC,electronics,online,34.28,7,0.075,none,2024-01-24\r\n27534,1574,AMER,home,online,72.32,7,0.100,none,2024-04-07\r\n27535,1091,EMEA,electronics,retail,69.87,7,0.095,none,2024-03-16\r\n27536,1354,AMER,fashion,online,111.48,6,0.132,none,2024-04-25\r\n27537,1752,APAC,fashion,retail,51.76,3,0.105,none,2024-03-11\r\n27538,1208,AMER,home,retail,40.12,8,0.104,none,2024-04-03\r\n27539,1366,APAC,fashion,online,22.43,7,0.083,none,2024-07-15\r\n27540,2497,AMER,electronics,mobile,55.21,8,0.210,none,2024-09-01\r\n27541,1531,EMEA,home,online,76.39,2,0.015,none,2024-08-03\r\n27542,1402,EMEA,fashion,online,26.63,5,0.038,none,2024-04-20\r\n27543,1484,AMER,fashion,retail,33.90,2,0.226,coupon,2024-11-12\r\n27544,1323,EMEA,grocery,online,37.92,3,0.185,none,2024-12-08\r\n27545,2205,AMER,electronics,online,19.29,2,0.013,loyalty,2024-03-26\r\n27546,1086,AMER,toys,mobile,71.17,3,0.133,none,2024-10-25\r\n27547,1726,EMEA,toys,retail,54.55,7,0.036,none,2024-02-14\r\n27548,1587,LATAM,grocery,online,36.84,5,0.102,loyalty,2024-12-22\r\n27549,2390,AMER,sports,online,127.76,3,0.009,none,2024-06-07\r\n27550,1496,AMER,sports,retail,75.48,7,0.229,none,2024-03-17\r\n27551,1124,AMER,electronics,online,57.12,8,0.248,loyalty,2024-11-14\r\n27552,1390,APAC,electronics,online,45.16,3,0.101,bundle,2024-03-10\r\n27553,2297,EMEA,grocery,retail,29.69,6,0.051,none,2024-11-21\r\n27554,1405,LATAM,sports,online,64.97,1,0.075,none,2024-08-14\r\n27555,2459,AMER,home,online,74.70,7,0.250,coupon,2024-10-14\r\n27556,1389,LATAM,home,online,60.82,7,0.202,none,2024-08-05\r\n27557,1857,LATAM,toys,mobile,74.54,7,0.146,none,2024-02-10\r\n27558,2348,EMEA,sports,retail,23.40,4,0.083,none,2024-12-27\r\n27559,1126,LATAM,grocery,mobile,91.52,7,0.230,none,2024-02-09\r\n27560,2279,LATAM,home,partner,50.66,3,0.078,bundle,2024-04-21\r\n27561,1622,LATAM,grocery,retail,86.05,7,0.025,bundle,2024-07-06\r\n27562,2138,APAC,electronics,
retail,44.03,4,0.116,coupon,2024-05-14\r\n27563,1635,APAC,fashion,retail,49.09,5,0.186,none,2024-01-27\r\n27564,1120,LATAM,toys,retail,67.64,1,0.091,coupon,2024-09-11\r\n27565,2246,AMER,home,online,31.69,3,0.148,coupon,2024-08-15\r\n27566,2414,EMEA,electronics,online,104.80,1,0.028,none,2024-09-02\r\n27567,1784,EMEA,grocery,retail,171.16,2,0.103,bundle,2024-04-09\r\n27568,1435,AMER,home,retail,68.64,5,0.148,none,2024-10-20\r\n27569,2340,EMEA,grocery,retail,48.59,1,0.141,none,2024-01-21\r\n27570,1892,LATAM,sports,online,31.02,6,0.054,none,2024-11-24\r\n27571,1007,APAC,electronics,mobile,64.95,7,0.062,none,2024-09-06\r\n27572,2295,EMEA,home,retail,44.70,8,0.146,none,2024-10-13\r\n27573,1046,EMEA,fashion,partner,33.39,3,0.135,none,2024-09-16\r\n27574,2471,APAC,grocery,retail,42.28,3,0.217,none,2024-10-28\r\n27575,2034,LATAM,grocery,online,33.97,7,0.178,none,2024-07-18\r\n27576,2212,EMEA,sports,partner,18.54,1,0.246,bundle,2024-02-03\r\n27577,1413,LATAM,fashion,online,107.00,5,0.085,loyalty,2024-12-12\r\n27578,1120,LATAM,electronics,mobile,80.90,5,0.074,coupon,2024-10-10\r\n27579,2032,AMER,sports,online,84.17,1,0.202,none,2024-07-15\r\n27580,2432,AMER,fashion,mobile,37.44,6,0.090,none,2024-01-08\r\n27581,1099,LATAM,sports,online,34.95,2,0.085,none,2024-12-07\r\n27582,1318,LATAM,sports,online,63.54,4,0.090,none,2024-01-05\r\n27583,2197,LATAM,home,mobile,41.38,2,0.138,coupon,2024-06-25\r\n27584,2352,APAC,fashion,online,61.81,8,0.245,none,2024-05-27\r\n27585,1833,EMEA,grocery,online,99.30,2,0.127,bundle,2024-04-03\r\n27586,1659,APAC,electronics,online,18.23,2,0.111,none,2024-11-27\r\n27587,1127,EMEA,electronics,online,67.32,1,0.093,none,2024-11-01\r\n27588,1375,AMER,fashion,online,79.00,5,0.180,bundle,2024-10-03\r\n27589,2248,LATAM,sports,mobile,30.34,5,0.037,bundle,2024-12-05\r\n27590,1712,LATAM,home,online,87.00,1,0.248,loyalty,2024-10-02\r\n27591,1567,AMER,sports,retail,61.68,5,0.075,none,2024-02-17\r\n27592,1351,APAC,grocery,online,121.97,7,0.205,bundle,2024-10-16\r\n2
7593,1705,AMER,home,retail,55.13,8,0.201,bundle,2024-12-13\r\n27594,1348,AMER,toys,online,114.73,6,0.103,bundle,2024-09-06\r\n27595,1371,AMER,fashion,online,72.33,2,0.144,none,2024-12-13\r\n27596,2068,LATAM,electronics,retail,97.28,2,0.234,loyalty,2024-03-27\r\n27597,2097,AMER,sports,mobile,80.80,5,0.057,loyalty,2024-05-16\r\n27598,2257,AMER,electronics,retail,31.94,3,0.192,none,2024-11-25\r\n27599,2498,LATAM,sports,partner,60.59,8,0.165,loyalty,2024-07-22\r\n27600,1093,APAC,sports,retail,92.83,4,0.068,coupon,2024-03-18\r\n27601,1720,AMER,fashion,online,75.07,4,0.214,coupon,2024-12-23\r\n27602,1505,EMEA,electronics,online,57.09,6,0.171,bundle,2024-04-12\r\n27603,2437,LATAM,home,online,91.60,8,0.204,none,2024-02-07\r\n27604,1470,LATAM,sports,retail,124.97,7,0.029,none,2024-12-23\r\n27605,2281,AMER,toys,retail,38.31,5,0.028,none,2024-04-14\r\n27606,1664,LATAM,grocery,retail,52.43,5,0.173,none,2024-05-20\r\n27607,1536,LATAM,fashion,retail,72.70,7,0.200,bundle,2024-07-15\r\n27608,1561,EMEA,home,retail,51.05,6,0.071,none,2024-07-05\r\n27609,1276,AMER,fashion,online,48.03,3,0.143,loyalty,2024-07-23\r\n27610,1759,EMEA,toys,mobile,43.35,2,0.135,none,2024-12-25\r\n27611,1563,EMEA,fashion,online,36.51,6,0.067,bundle,2024-08-28\r\n27612,1082,EMEA,sports,online,57.98,1,0.229,coupon,2024-05-27\r\n27613,1900,APAC,toys,retail,130.98,8,0.083,none,2024-01-21\r\n27614,1613,EMEA,grocery,online,36.04,2,0.207,bundle,2024-05-25\r\n27615,1849,EMEA,electronics,online,68.73,7,0.121,none,2024-12-15\r\n27616,1677,EMEA,toys,online,39.73,2,0.062,coupon,2024-07-13\r\n27617,1655,LATAM,toys,online,61.47,8,0.229,coupon,2024-02-12\r\n27618,2096,LATAM,grocery,retail,43.60,5,0.010,none,2024-09-13\r\n27619,2283,AMER,grocery,retail,188.74,7,0.161,none,2024-01-02\r\n27620,2024,AMER,home,retail,75.45,8,0.093,bundle,2024-09-16\r\n27621,1471,EMEA,grocery,online,79.70,4,0.073,none,2024-07-10\r\n27622,2029,APAC,fashion,online,81.42,8,0.221,loyalty,2024-04-15\r\n27623,2064,LATAM,electronics,partner,56.65,2,0.1
07,none,2024-06-28\r\n27624,1473,LATAM,toys,online,100.90,1,0.124,loyalty,2024-06-28\r\n27625,1839,APAC,sports,online,48.91,7,0.207,bundle,2024-08-24\r\n27626,1050,AMER,electronics,mobile,31.25,2,0.095,none,2024-05-16\r\n27627,1752,APAC,grocery,online,46.11,1,0.042,none,2024-01-28\r\n27628,2215,LATAM,sports,online,85.45,6,0.178,none,2024-02-18\r\n27629,1616,APAC,fashion,online,29.91,8,0.135,none,2024-07-12\r\n27630,1991,APAC,grocery,partner,112.03,8,0.060,coupon,2024-01-01\r\n27631,1375,AMER,home,retail,35.79,4,0.143,none,2024-04-20\r\n27632,2351,EMEA,home,online,52.04,4,0.223,none,2024-12-14\r\n27633,1064,AMER,electronics,retail,113.11,7,0.023,coupon,2024-11-02\r\n27634,1559,EMEA,electronics,online,32.73,8,0.022,none,2024-05-04\r\n27635,1321,EMEA,home,retail,201.28,4,0.167,bundle,2024-01-21\r\n27636,2366,APAC,grocery,online,44.40,4,0.094,coupon,2024-12-06\r\n27637,1495,LATAM,fashion,retail,58.00,1,0.185,none,2024-07-23\r\n27638,1736,AMER,electronics,online,238.50,3,0.121,loyalty,2024-04-13\r\n27639,1764,LATAM,fashion,online,78.97,6,0.033,none,2024-01-04\r\n27640,1486,LATAM,electronics,online,57.07,7,0.080,coupon,2024-05-19\r\n27641,2175,AMER,toys,retail,66.37,4,0.182,coupon,2024-07-18\r\n27642,1439,LATAM,electronics,online,106.35,6,0.066,loyalty,2024-12-12\r\n27643,1602,EMEA,home,online,52.19,6,0.010,none,2024-06-26\r\n27644,1690,LATAM,sports,retail,37.14,5,0.182,coupon,2024-07-26\r\n27645,1229,LATAM,home,retail,110.65,2,0.137,coupon,2024-02-18\r\n27646,1635,APAC,toys,partner,73.34,5,0.094,loyalty,2024-12-20\r\n27647,1334,APAC,grocery,online,33.07,1,0.129,none,2024-11-18\r\n27648,2236,APAC,electronics,partner,210.11,3,0.237,loyalty,2024-06-08\r\n27649,1193,APAC,grocery,online,28.73,7,0.211,none,2024-09-06\r\n27650,1023,APAC,home,partner,72.10,7,0.233,bundle,2024-09-26\r\n27651,1288,LATAM,fashion,online,75.32,1,0.068,coupon,2024-09-16\r\n27652,2443,LATAM,toys,online,61.27,7,0.147,loyalty,2024-08-13\r\n27653,1432,APAC,fashion,online,63.72,4,0.106,none,2024-07-26\r\n2
7654,1902,AMER,home,retail,29.92,4,0.215,none,2024-08-12\r\n27655,2462,EMEA,grocery,online,43.56,8,0.182,none,2024-12-16\r\n27656,1978,AMER,home,retail,68.86,2,0.045,loyalty,2024-01-18\r\n27657,1195,AMER,electronics,retail,62.70,5,0.029,loyalty,2024-01-19\r\n27658,1792,AMER,grocery,online,88.56,4,0.002,none,2024-02-04\r\n27659,2148,EMEA,fashion,mobile,69.92,7,0.073,none,2024-05-09\r\n27660,1068,APAC,fashion,retail,84.29,3,0.039,none,2024-04-16\r\n27661,1663,LATAM,grocery,retail,82.08,8,0.071,coupon,2024-11-09\r\n27662,1820,AMER,fashion,retail,44.29,3,0.105,bundle,2024-01-23\r\n27663,2397,LATAM,electronics,partner,66.97,1,0.000,none,2024-08-15\r\n27664,1004,LATAM,home,online,80.57,4,0.065,coupon,2024-05-13\r\n27665,2340,EMEA,grocery,retail,90.27,4,0.057,none,2024-10-05\r\n27666,2404,EMEA,grocery,online,64.70,7,0.248,bundle,2024-03-01\r\n27667,2210,APAC,grocery,online,35.91,6,0.150,none,2024-12-03\r\n27668,2043,EMEA,fashion,online,38.31,1,0.124,loyalty,2024-10-23\r\n27669,1455,APAC,toys,online,100.42,7,0.065,none,2024-05-08\r\n27670,1358,APAC,grocery,online,32.49,8,0.145,coupon,2024-10-01\r\n27671,1277,AMER,grocery,retail,61.43,4,0.123,loyalty,2024-06-22\r\n27672,1453,APAC,grocery,online,64.25,8,0.056,coupon,2024-12-02\r\n27673,1892,LATAM,grocery,retail,50.92,6,0.083,none,2024-12-14\r\n27674,1658,AMER,grocery,online,39.68,7,0.238,none,2024-06-14\r\n27675,2299,EMEA,fashion,online,62.20,3,0.024,coupon,2024-07-13\r\n27676,1559,EMEA,grocery,partner,56.97,1,0.036,loyalty,2024-02-09\r\n27677,2345,LATAM,fashion,retail,34.18,2,0.062,none,2024-04-26\r\n27678,2434,APAC,home,retail,35.18,7,0.141,none,2024-09-18\r\n27679,1750,LATAM,electronics,mobile,81.43,4,0.124,none,2024-12-02\r\n27680,2095,EMEA,grocery,mobile,40.49,5,0.178,none,2024-08-14\r\n27681,1909,APAC,grocery,online,114.30,6,0.155,none,2024-11-26\r\n27682,1049,AMER,grocery,mobile,47.76,5,0.128,none,2024-09-01\r\n27683,2254,LATAM,electronics,online,139.21,6,0.112,coupon,2024-09-22\r\n27684,2334,LATAM,sports,online,43.55,
5,0.074,none,2024-04-01\r\n27685,1729,AMER,electronics,mobile,160.32,6,0.198,coupon,2024-02-03\r\n27686,1704,AMER,grocery,mobile,37.83,5,0.001,none,2024-02-05\r\n27687,1297,AMER,grocery,online,100.81,6,0.235,none,2024-08-14\r\n27688,1354,AMER,toys,retail,187.14,5,0.249,none,2024-06-02\r\n27689,1404,EMEA,toys,online,73.05,8,0.079,loyalty,2024-10-04\r\n27690,2132,LATAM,grocery,online,105.86,6,0.234,coupon,2024-11-12\r\n27691,1757,EMEA,grocery,retail,159.12,6,0.101,none,2024-06-28\r\n27692,1270,LATAM,toys,online,69.98,8,0.079,none,2024-12-01\r\n27693,2003,LATAM,grocery,online,87.19,6,0.243,none,2024-07-21\r\n27694,1661,LATAM,electronics,online,45.56,1,0.168,none,2024-11-02\r\n27695,1541,APAC,sports,online,90.70,7,0.220,coupon,2024-10-18\r\n27696,1524,LATAM,fashion,online,39.10,8,0.129,coupon,2024-04-03\r\n27697,1842,LATAM,electronics,online,99.12,2,0.137,none,2024-12-13\r\n27698,1294,APAC,home,retail,61.73,6,0.146,loyalty,2024-11-02\r\n27699,2249,LATAM,electronics,retail,38.85,8,0.132,coupon,2024-06-04\r\n27700,1460,LATAM,fashion,mobile,90.56,8,0.034,none,2024-10-12\r\n27701,1094,LATAM,grocery,mobile,68.17,6,0.131,none,2024-01-17\r\n27702,1643,EMEA,toys,online,27.75,4,0.038,bundle,2024-08-22\r\n27703,2030,EMEA,home,mobile,50.57,5,0.156,coupon,2024-02-26\r\n27704,2034,LATAM,toys,partner,23.28,1,0.024,none,2024-04-17\r\n27705,1129,LATAM,electronics,online,24.19,7,0.196,none,2024-05-10\r\n27706,1381,LATAM,electronics,retail,43.60,2,0.185,none,2024-08-27\r\n27707,1847,LATAM,grocery,retail,53.79,5,0.195,none,2024-03-16\r\n27708,2149,EMEA,electronics,online,90.51,7,0.066,bundle,2024-10-28\r\n27709,1753,APAC,electronics,retail,53.70,4,0.152,none,2024-04-11\r\n27710,1549,APAC,electronics,mobile,22.46,6,0.143,bundle,2024-12-15\r\n27711,2498,LATAM,electronics,mobile,84.30,2,0.069,loyalty,2024-07-07\r\n27712,1506,EMEA,grocery,retail,19.17,2,0.178,none,2024-01-20\r\n27713,1929,LATAM,home,retail,72.82,6,0.035,none,2024-05-07\r\n27714,1184,AMER,electronics,retail,55.39,4,0.009,none,
2024-06-01\r\n27715,1370,APAC,grocery,partner,123.19,6,0.131,none,2024-01-25\r\n27716,1053,AMER,toys,retail,63.20,1,0.092,bundle,2024-03-04\r\n27717,1954,APAC,sports,partner,104.07,1,0.133,none,2024-12-12\r\n27718,1116,LATAM,home,online,150.65,1,0.185,none,2024-10-08\r\n27719,1995,LATAM,home,online,74.53,5,0.085,none,2024-02-04\r\n27720,2235,AMER,grocery,retail,20.04,4,0.229,none,2024-04-15\r\n27721,1762,LATAM,toys,retail,93.94,8,0.226,none,2024-12-11\r\n27722,2183,EMEA,sports,retail,61.55,8,0.228,none,2024-09-19\r\n27723,1758,AMER,home,retail,61.84,8,0.052,coupon,2024-02-25\r\n27724,1965,LATAM,grocery,mobile,61.89,2,0.002,none,2024-01-06\r\n27725,2192,APAC,grocery,online,38.15,7,0.073,none,2024-12-06\r\n27726,1725,APAC,electronics,retail,41.35,8,0.248,loyalty,2024-08-06\r\n27727,1588,LATAM,grocery,retail,135.70,2,0.207,none,2024-01-10\r\n27728,2421,AMER,fashion,retail,87.52,2,0.097,coupon,2024-02-20\r\n27729,1533,APAC,home,online,76.70,6,0.118,loyalty,2024-02-14\r\n27730,2348,EMEA,home,online,94.32,1,0.078,none,2024-01-22\r\n27731,1907,EMEA,electronics,mobile,39.08,3,0.206,bundle,2024-02-26\r\n27732,1763,LATAM,home,retail,21.14,7,0.159,none,2024-02-12\r\n27733,1983,LATAM,grocery,retail,77.90,1,0.111,coupon,2024-07-24\r\n27734,1899,APAC,home,online,73.95,7,0.130,bundle,2024-12-20\r\n27735,1167,EMEA,fashion,online,34.93,3,0.202,bundle,2024-06-16\r\n27736,1721,EMEA,electronics,mobile,59.30,8,0.087,coupon,2024-10-28\r\n27737,1066,AMER,toys,mobile,40.66,2,0.000,loyalty,2024-03-12\r\n27738,1672,APAC,sports,online,81.47,8,0.145,coupon,2024-05-04\r\n27739,1857,LATAM,toys,online,87.54,8,0.129,bundle,2024-10-10\r\n27740,1337,APAC,electronics,retail,38.16,1,0.103,loyalty,2024-03-13\r\n27741,1974,EMEA,grocery,online,137.72,7,0.165,coupon,2024-10-04\r\n27742,1727,APAC,fashion,online,73.54,6,0.146,bundle,2024-01-26\r\n27743,1789,EMEA,home,online,138.11,1,0.095,none,2024-10-25\r\n27744,1546,EMEA,grocery,retail,42.22,6,0.002,none,2024-08-02\r\n27745,1852,AMER,toys,retail,145.25,5,
0.077,loyalty,2024-07-26\r\n27746,2010,APAC,grocery,mobile,14.91,8,0.222,none,2024-06-01\r\n27747,1007,APAC,toys,retail,61.15,3,0.172,coupon,2024-11-04\r\n27748,1123,LATAM,fashion,online,41.93,5,0.213,none,2024-02-28\r\n27749,1522,LATAM,electronics,retail,48.85,1,0.201,none,2024-03-11\r\n27750,1394,LATAM,sports,mobile,64.71,1,0.095,coupon,2024-07-13\r\n27751,1528,EMEA,fashion,mobile,73.80,1,0.240,loyalty,2024-01-05\r\n27752,1595,AMER,toys,online,34.85,4,0.205,coupon,2024-04-15\r\n27753,2496,EMEA,grocery,online,25.63,2,0.188,none,2024-12-21\r\n27754,1225,APAC,sports,retail,110.69,5,0.194,bundle,2024-03-18\r\n27755,1468,AMER,grocery,online,33.45,6,0.224,bundle,2024-03-19\r\n27756,1080,LATAM,grocery,online,54.76,5,0.236,coupon,2024-05-16\r\n27757,2126,APAC,sports,online,61.57,1,0.150,none,2024-09-01\r\n27758,1650,LATAM,electronics,online,51.35,3,0.089,none,2024-05-15\r\n27759,1546,EMEA,fashion,online,71.37,4,0.134,none,2024-06-25\r\n27760,1427,EMEA,fashion,online,41.75,8,0.093,bundle,2024-02-15\r\n27761,1178,EMEA,home,retail,31.95,4,0.240,none,2024-07-25\r\n27762,1317,EMEA,electronics,retail,36.09,5,0.239,none,2024-03-14\r\n27763,1330,EMEA,home,online,50.26,2,0.209,bundle,2024-01-24\r\n27764,1804,AMER,fashion,retail,116.48,5,0.187,none,2024-02-18\r\n27765,1667,AMER,toys,online,62.37,4,0.149,none,2024-03-28\r\n27766,1799,EMEA,electronics,retail,104.41,5,0.110,coupon,2024-07-25\r\n27767,2497,AMER,grocery,retail,58.59,6,0.224,none,2024-12-03\r\n27768,2412,LATAM,toys,retail,45.06,5,0.231,bundle,2024-11-20\r\n27769,2422,APAC,electronics,mobile,60.25,4,0.136,none,2024-12-18\r\n27770,1017,AMER,electronics,online,18.85,4,0.139,bundle,2024-12-23\r\n27771,1714,APAC,home,mobile,65.04,6,0.042,none,2024-04-11\r\n27772,2496,EMEA,fashion,retail,32.98,3,0.097,none,2024-12-08\r\n27773,1885,EMEA,home,retail,84.23,4,0.250,none,2024-10-07\r\n27774,1684,EMEA,electronics,mobile,59.77,3,0.019,none,2024-02-22\r\n27775,2414,EMEA,sports,online,50.79,6,0.036,none,2024-11-02\r\n27776,2334,LATAM,g
rocery,partner,101.77,5,0.232,bundle,2024-03-12\r\n27777,1036,EMEA,home,retail,45.74,3,0.076,bundle,2024-08-28\r\n27778,1081,AMER,grocery,retail,44.11,2,0.117,none,2024-09-18\r\n27779,2002,APAC,electronics,partner,49.02,7,0.083,coupon,2024-06-14\r\n27780,2164,AMER,sports,online,54.79,5,0.152,none,2024-05-01\r\n27781,1206,EMEA,toys,online,20.58,1,0.141,bundle,2024-08-03\r\n27782,2003,LATAM,fashion,online,31.94,6,0.026,loyalty,2024-08-25\r\n27783,1204,AMER,home,retail,84.53,6,0.048,none,2024-07-15\r\n27784,1508,LATAM,electronics,mobile,26.09,8,0.153,bundle,2024-01-16\r\n27785,1551,APAC,fashion,online,36.91,2,0.052,coupon,2024-09-16\r\n27786,2338,AMER,fashion,retail,99.52,3,0.158,none,2024-04-19\r\n27787,1890,LATAM,fashion,mobile,108.04,2,0.190,none,2024-04-22\r\n27788,2031,AMER,home,online,42.36,8,0.199,none,2024-10-22\r\n27789,1032,AMER,sports,retail,56.69,8,0.132,loyalty,2024-10-28\r\n27790,1693,EMEA,grocery,mobile,22.44,7,0.181,none,2024-02-08\r\n27791,2379,AMER,electronics,retail,70.48,8,0.243,none,2024-11-28\r\n27792,2334,LATAM,fashion,retail,54.58,5,0.108,loyalty,2024-04-05\r\n27793,1739,AMER,home,online,62.39,4,0.035,none,2024-01-24\r\n27794,1224,APAC,home,partner,78.23,3,0.236,none,2024-10-14\r\n27795,1781,LATAM,home,online,76.08,1,0.248,loyalty,2024-05-09\r\n27796,1660,AMER,sports,mobile,131.27,3,0.231,none,2024-01-27\r\n27797,1528,EMEA,toys,online,46.03,1,0.200,loyalty,2024-09-12\r\n27798,1481,LATAM,home,retail,145.98,7,0.019,none,2024-09-15\r\n27799,1469,EMEA,fashion,online,45.28,2,0.082,none,2024-05-19\r\n27800,1136,EMEA,fashion,online,86.04,7,0.146,coupon,2024-12-12\r\n27801,1175,AMER,sports,online,47.07,1,0.017,coupon,2024-05-19\r\n27802,1796,LATAM,home,retail,27.19,5,0.142,none,2024-04-10\r\n27803,1517,AMER,home,online,50.05,4,0.129,none,2024-10-01\r\n27804,2109,EMEA,electronics,retail,89.00,8,0.186,none,2024-11-11\r\n27805,2495,EMEA,grocery,online,44.11,2,0.059,coupon,2024-02-10\r\n27806,1988,AMER,grocery,retail,37.92,5,0.013,none,2024-07-09\r\n27807,1
803,LATAM,toys,retail,76.15,8,0.132,coupon,2024-09-20\r\n27808,1697,APAC,home,online,44.13,3,0.037,none,2024-10-28\r\n27809,2256,AMER,fashion,mobile,108.48,4,0.177,none,2024-09-13\r\n27810,2320,LATAM,electronics,retail,25.17,8,0.090,bundle,2024-12-03\r\n27811,1385,LATAM,electronics,mobile,69.09,8,0.008,bundle,2024-01-09\r\n27812,1414,APAC,electronics,mobile,27.62,6,0.009,none,2024-01-16\r\n27813,1332,APAC,fashion,online,50.33,5,0.038,none,2024-07-28\r\n27814,2312,APAC,electronics,retail,107.74,7,0.017,none,2024-06-19\r\n27815,2377,AMER,home,online,70.98,1,0.086,loyalty,2024-11-09\r\n27816,1780,APAC,sports,mobile,46.43,7,0.077,none,2024-07-16\r\n27817,2112,LATAM,electronics,online,29.66,5,0.027,loyalty,2024-03-14\r\n27818,1987,AMER,home,online,26.78,2,0.117,none,2024-05-21\r\n27819,1664,LATAM,home,online,45.39,4,0.092,none,2024-04-02\r\n27820,1921,LATAM,electronics,partner,77.24,3,0.172,loyalty,2024-08-23\r\n27821,2210,APAC,grocery,retail,84.95,8,0.245,none,2024-09-01\r\n27822,1967,EMEA,home,online,55.01,7,0.049,coupon,2024-12-02\r\n27823,1045,LATAM,grocery,mobile,59.97,1,0.035,none,2024-04-24\r\n27824,1147,EMEA,electronics,retail,84.11,1,0.166,coupon,2024-05-05\r\n27825,1206,EMEA,electronics,retail,54.21,7,0.187,none,2024-09-01\r\n27826,1264,APAC,home,online,76.91,3,0.169,coupon,2024-07-08\r\n27827,1327,APAC,toys,online,45.23,8,0.191,none,2024-11-01\r\n27828,1075,AMER,electronics,online,40.70,1,0.051,bundle,2024-02-14\r\n27829,2167,APAC,electronics,mobile,74.38,4,0.180,bundle,2024-04-04\r\n27830,1385,LATAM,sports,retail,20.47,3,0.184,none,2024-07-04\r\n27831,1981,EMEA,fashion,online,94.13,6,0.160,none,2024-07-13\r\n27832,2429,EMEA,grocery,online,80.79,8,0.141,none,2024-09-07\r\n27833,1751,AMER,grocery,mobile,61.33,6,0.088,none,2024-08-21\r\n27834,1626,EMEA,sports,retail,52.24,7,0.210,coupon,2024-04-20\r\n27835,1630,APAC,fashion,mobile,118.50,5,0.171,none,2024-10-08\r\n27836,1685,AMER,fashion,partner,45.33,8,0.222,coupon,2024-06-28\r\n27837,1916,AMER,grocery,partner,
30.58,1,0.240,bundle,2024-01-04\r\n27838,1765,EMEA,fashion,mobile,108.69,1,0.008,bundle,2024-05-21\r\n27839,1009,APAC,electronics,online,78.78,8,0.131,bundle,2024-08-11\r\n27840,2437,LATAM,home,partner,70.54,5,0.244,loyalty,2024-04-27\r\n27841,1697,APAC,sports,online,49.05,5,0.114,none,2024-02-02\r\n27842,2496,EMEA,grocery,mobile,36.52,8,0.012,none,2024-08-17\r\n27843,1197,LATAM,fashion,online,92.37,4,0.193,bundle,2024-04-09\r\n27844,2428,LATAM,grocery,online,53.72,2,0.013,none,2024-03-13\r\n27845,1504,AMER,home,online,39.26,3,0.176,none,2024-12-08\r\n27846,1509,AMER,sports,retail,121.50,1,0.111,none,2024-05-28\r\n27847,1182,EMEA,sports,partner,44.73,2,0.147,none,2024-02-16\r\n27848,1077,AMER,electronics,mobile,37.59,5,0.192,none,2024-01-27\r\n27849,2214,AMER,toys,retail,57.73,1,0.082,loyalty,2024-07-20\r\n27850,1653,APAC,fashion,online,69.65,2,0.125,none,2024-07-09\r\n27851,1682,EMEA,fashion,online,31.83,4,0.176,none,2024-10-23\r\n27852,1433,EMEA,grocery,mobile,164.70,6,0.238,bundle,2024-08-19\r\n27853,1597,APAC,grocery,retail,52.36,6,0.065,none,2024-07-23\r\n27854,1161,AMER,electronics,online,47.29,2,0.083,none,2024-02-23\r\n27855,1840,LATAM,fashion,online,84.65,5,0.025,none,2024-07-23\r\n27856,2397,LATAM,grocery,retail,55.21,8,0.157,none,2024-04-10\r\n27857,1138,AMER,fashion,retail,34.23,8,0.004,none,2024-06-06\r\n27858,1921,LATAM,grocery,retail,76.51,5,0.026,bundle,2024-02-12\r\n27859,2110,LATAM,grocery,online,55.55,2,0.199,none,2024-12-02\r\n27860,1337,APAC,electronics,online,111.71,2,0.080,loyalty,2024-05-11\r\n27861,1693,EMEA,toys,online,76.06,1,0.060,none,2024-05-07\r\n27862,1290,EMEA,home,retail,53.12,4,0.032,none,2024-04-16\r\n27863,2040,LATAM,toys,retail,43.12,4,0.082,loyalty,2024-08-11\r\n27864,1588,LATAM,electronics,retail,29.73,2,0.145,bundle,2024-05-23\r\n27865,2352,APAC,toys,online,45.64,4,0.118,bundle,2024-01-09\r\n27866,2122,AMER,sports,mobile,64.93,8,0.108,loyalty,2024-10-18\r\n27867,1423,EMEA,home,online,74.84,7,0.245,bundle,2024-05-01\r\n27868,2
019,AMER,toys,online,143.96,7,0.113,none,2024-03-03\r\n27869,1600,AMER,home,mobile,34.89,2,0.052,coupon,2024-09-08\r\n27870,1324,LATAM,electronics,mobile,84.55,2,0.140,bundle,2024-04-24\r\n27871,2497,AMER,home,online,74.57,8,0.028,none,2024-09-20\r\n27872,2306,AMER,toys,retail,130.74,6,0.071,coupon,2024-04-26\r\n27873,2298,APAC,home,online,64.05,5,0.087,none,2024-05-15\r\n27874,1763,LATAM,electronics,online,35.62,2,0.046,none,2024-07-05\r\n27875,1291,EMEA,fashion,online,24.32,5,0.124,bundle,2024-08-13\r\n27876,1288,LATAM,grocery,retail,42.23,1,0.072,coupon,2024-11-23\r\n27877,2415,AMER,grocery,retail,79.64,5,0.105,none,2024-06-05\r\n27878,1920,LATAM,electronics,retail,102.55,8,0.129,none,2024-11-06\r\n27879,1996,APAC,electronics,retail,28.93,7,0.068,loyalty,2024-10-25\r\n27880,1853,APAC,fashion,online,35.12,2,0.218,none,2024-04-04\r\n27881,2367,AMER,fashion,online,99.94,6,0.241,loyalty,2024-03-26\r\n27882,1478,EMEA,toys,mobile,60.69,7,0.118,none,2024-10-03\r\n27883,1221,LATAM,sports,mobile,55.90,4,0.116,coupon,2024-03-18\r\n27884,1806,APAC,grocery,retail,83.24,3,0.142,none,2024-08-05\r\n27885,1109,APAC,fashion,retail,33.60,5,0.072,none,2024-07-23\r\n27886,2040,LATAM,grocery,partner,56.29,5,0.068,bundle,2024-05-22\r\n27887,1229,LATAM,sports,online,78.69,1,0.250,bundle,2024-06-07\r\n27888,2043,EMEA,sports,retail,77.86,3,0.236,coupon,2024-11-23\r\n27889,1481,LATAM,electronics,retail,83.92,8,0.245,none,2024-08-03\r\n27890,1562,AMER,grocery,online,109.31,8,0.130,none,2024-12-19\r\n27891,1096,EMEA,toys,retail,26.94,3,0.224,coupon,2024-12-03\r\n27892,1050,AMER,fashion,retail,101.05,5,0.044,none,2024-10-25\r\n27893,1355,EMEA,electronics,partner,79.52,4,0.214,loyalty,2024-07-16\r\n27894,2419,LATAM,grocery,online,44.82,5,0.007,none,2024-10-27\r\n27895,2177,AMER,fashion,online,63.60,8,0.132,bundle,2024-04-05\r\n27896,1501,AMER,grocery,online,84.79,5,0.185,bundle,2024-08-17\r\n27897,1289,LATAM,sports,retail,49.34,2,0.189,loyalty,2024-06-17\r\n27898,1735,LATAM,grocery,online,82.
24,2,0.199,none,2024-06-02\r\n27899,1293,AMER,electronics,online,73.54,6,0.200,coupon,2024-01-16\r\n27900,2107,APAC,home,online,109.15,1,0.218,coupon,2024-07-12\r\n27901,2093,LATAM,home,online,26.44,6,0.175,bundle,2024-12-11\r\n27902,1517,AMER,grocery,mobile,37.75,7,0.054,bundle,2024-04-28\r\n27903,1653,APAC,toys,retail,75.64,7,0.052,none,2024-11-12\r\n27904,1232,LATAM,sports,retail,43.89,1,0.214,loyalty,2024-04-22\r\n27905,2418,AMER,sports,online,103.65,8,0.034,none,2024-12-13\r\n27906,1584,EMEA,home,online,102.37,5,0.245,bundle,2024-05-01\r\n27907,1341,EMEA,sports,online,48.19,2,0.138,none,2024-06-08\r\n27908,1361,LATAM,fashion,online,50.22,1,0.007,coupon,2024-05-08\r\n27909,1430,EMEA,grocery,online,51.34,2,0.230,bundle,2024-11-10\r\n27910,1949,AMER,electronics,retail,41.56,5,0.243,none,2024-04-09\r\n27911,1353,EMEA,fashion,retail,93.44,4,0.121,none,2024-03-14\r\n27912,1606,AMER,grocery,retail,59.62,3,0.226,none,2024-09-17\r\n27913,1137,APAC,fashion,retail,61.57,3,0.062,none,2024-07-24\r\n27914,2031,AMER,grocery,online,37.28,5,0.128,loyalty,2024-09-20\r\n27915,1964,EMEA,fashion,retail,28.41,3,0.120,none,2024-02-28\r\n27916,1099,LATAM,electronics,online,46.84,1,0.032,none,2024-05-01\r\n27917,1566,EMEA,sports,online,25.92,3,0.109,none,2024-11-27\r\n27918,1100,AMER,electronics,online,47.52,6,0.196,coupon,2024-07-12\r\n27919,1283,APAC,fashion,online,99.32,7,0.070,none,2024-05-28\r\n27920,1267,EMEA,electronics,mobile,181.95,8,0.232,coupon,2024-05-12\r\n27921,2355,EMEA,home,mobile,47.60,6,0.073,coupon,2024-02-22\r\n27922,2499,LATAM,sports,retail,68.68,1,0.197,none,2024-01-23\r\n27923,1188,LATAM,electronics,online,65.15,5,0.182,bundle,2024-07-01\r\n27924,1144,APAC,grocery,online,87.88,6,0.048,bundle,2024-01-26\r\n27925,1947,EMEA,sports,mobile,55.09,1,0.043,bundle,2024-03-24\r\n27926,1901,AMER,home,partner,87.26,2,0.228,none,2024-03-08\r\n27927,1592,LATAM,fashion,online,74.01,1,0.245,none,2024-11-14\r\n27928,2356,LATAM,electronics,retail,80.61,3,0.103,none,2024-12-07\r\n2
7929,1006,AMER,sports,mobile,39.37,8,0.204,none,2024-01-25\r\n27930,2320,LATAM,electronics,online,44.60,2,0.100,none,2024-08-12\r\n27931,2423,LATAM,toys,retail,43.20,5,0.208,bundle,2024-08-28\r\n27932,1987,AMER,electronics,retail,44.52,4,0.053,none,2024-05-04\r\n27933,1844,APAC,home,retail,38.47,4,0.140,none,2024-12-04\r\n27934,2270,APAC,home,retail,44.28,1,0.243,none,2024-06-22\r\n27935,2182,AMER,home,online,57.15,8,0.215,coupon,2024-09-08\r\n27936,2004,LATAM,grocery,mobile,80.53,1,0.238,bundle,2024-10-15\r\n27937,1340,LATAM,sports,retail,113.91,4,0.081,bundle,2024-10-25\r\n27938,1758,AMER,home,online,102.60,8,0.146,none,2024-11-22\r\n27939,1161,AMER,grocery,online,79.46,6,0.089,none,2024-12-21\r\n27940,2487,LATAM,grocery,mobile,43.19,4,0.143,loyalty,2024-04-17\r\n27941,2484,APAC,home,online,61.88,8,0.003,none,2024-05-24\r\n27942,1614,EMEA,grocery,retail,100.53,2,0.152,bundle,2024-03-24\r\n27943,1364,EMEA,sports,mobile,40.39,1,0.169,none,2024-08-08\r\n27944,1599,APAC,grocery,online,49.89,3,0.089,none,2024-01-04\r\n27945,1612,LATAM,fashion,online,41.83,8,0.041,none,2024-09-07\r\n27946,2123,AMER,electronics,online,191.23,4,0.084,none,2024-08-08\r\n27947,1071,AMER,home,online,157.96,5,0.119,none,2024-10-09\r\n27948,2346,LATAM,sports,retail,78.11,3,0.079,coupon,2024-05-06\r\n27949,1416,EMEA,fashion,retail,84.42,3,0.056,bundle,2024-07-07\r\n27950,1164,EMEA,home,retail,32.29,3,0.124,none,2024-06-19\r\n27951,1070,EMEA,grocery,retail,43.16,1,0.085,none,2024-04-12\r\n27952,2055,AMER,grocery,online,29.54,7,0.002,none,2024-01-10\r\n27953,2115,APAC,fashion,retail,63.59,5,0.161,loyalty,2024-07-28\r\n27954,2421,AMER,electronics,online,32.83,5,0.081,none,2024-05-19\r\n27955,1700,EMEA,toys,online,50.26,4,0.103,none,2024-02-28\r\n27956,1610,LATAM,home,retail,59.82,2,0.106,none,2024-11-04\r\n27957,1738,LATAM,grocery,retail,76.44,6,0.013,none,2024-04-17\r\n27958,2172,EMEA,fashion,online,35.30,7,0.185,none,2024-10-19\r\n27959,1183,AMER,electronics,online,50.19,3,0.095,loyalty,2024-05-
05\r\n27960,2315,LATAM,sports,online,112.70,1,0.041,none,2024-05-12\r\n27961,2424,LATAM,fashion,retail,39.87,5,0.117,none,2024-08-12\r\n27962,2260,EMEA,electronics,retail,64.85,4,0.122,coupon,2024-08-28\r\n27963,1094,LATAM,electronics,online,33.63,1,0.246,none,2024-02-06\r\n27964,1472,AMER,electronics,retail,53.16,4,0.114,loyalty,2024-03-11\r\n27965,1036,EMEA,home,mobile,50.52,7,0.166,none,2024-05-28\r\n27966,1301,AMER,grocery,retail,35.15,2,0.109,loyalty,2024-01-20\r\n27967,1799,EMEA,electronics,online,91.65,1,0.119,loyalty,2024-06-05\r\n27968,1826,LATAM,toys,online,48.23,5,0.105,coupon,2024-09-09\r\n27969,1078,APAC,electronics,retail,28.61,4,0.127,none,2024-09-21\r\n27970,2015,APAC,grocery,mobile,44.73,6,0.170,bundle,2024-02-28\r\n27971,2031,AMER,grocery,online,56.00,7,0.097,bundle,2024-09-17\r\n27972,2369,LATAM,toys,mobile,86.62,2,0.110,bundle,2024-06-20\r\n27973,2100,APAC,sports,retail,53.75,4,0.104,coupon,2024-05-21\r\n27974,2021,EMEA,grocery,retail,17.08,6,0.093,coupon,2024-08-27\r\n27975,2059,AMER,electronics,mobile,55.44,6,0.071,none,2024-06-16\r\n27976,2168,EMEA,fashion,mobile,79.21,7,0.206,none,2024-08-02\r\n27977,1211,EMEA,electronics,mobile,48.11,6,0.011,none,2024-03-10\r\n27978,1349,APAC,home,retail,66.30,3,0.040,none,2024-08-14\r\n27979,1554,AMER,grocery,online,134.30,5,0.108,none,2024-03-23\r\n27980,1056,LATAM,home,retail,48.07,7,0.220,none,2024-03-06\r\n27981,1831,APAC,fashion,online,119.52,5,0.221,coupon,2024-06-10\r\n27982,1045,LATAM,electronics,retail,85.11,3,0.197,none,2024-12-03\r\n27983,2463,AMER,electronics,online,71.55,2,0.030,bundle,2024-09-15\r\n27984,1448,EMEA,sports,mobile,56.49,6,0.106,none,2024-10-13\r\n27985,1069,APAC,grocery,retail,30.97,5,0.203,none,2024-05-06\r\n27986,1111,APAC,toys,online,22.71,7,0.234,bundle,2024-07-23\r\n27987,1944,AMER,home,online,79.76,4,0.242,loyalty,2024-10-06\r\n27988,1502,APAC,grocery,retail,93.70,3,0.105,none,2024-01-19\r\n27989,1830,EMEA,home,online,31.73,5,0.123,none,2024-10-07\r\n27990,1702,AMER,electro
nics,online,40.42,2,0.191,none,2024-06-27\r\n27991,1945,AMER,grocery,mobile,49.16,6,0.133,none,2024-02-28\r\n27992,2337,AMER,sports,retail,27.37,8,0.187,none,2024-09-23\r\n27993,1835,AMER,grocery,retail,43.76,1,0.104,none,2024-04-20\r\n27994,2106,LATAM,home,retail,90.12,2,0.096,none,2024-04-22\r\n27995,1000,APAC,toys,online,113.96,7,0.156,coupon,2024-10-10\r\n27996,1811,APAC,grocery,retail,90.09,1,0.220,loyalty,2024-02-25\r\n27997,1492,APAC,sports,retail,49.46,3,0.010,none,2024-10-13\r\n27998,1673,AMER,sports,retail,41.78,4,0.205,loyalty,2024-04-11\r\n27999,1956,APAC,home,retail,75.75,2,0.016,loyalty,2024-12-07\r\n28000,1597,APAC,grocery,retail,57.72,8,0.105,none,2024-03-19\r\n28001,2379,AMER,fashion,retail,44.53,5,0.046,none,2024-05-18\r\n28002,1570,AMER,grocery,mobile,117.48,1,0.114,none,2024-06-21\r\n28003,2009,LATAM,grocery,online,83.15,3,0.084,coupon,2024-04-20\r\n28004,2169,EMEA,home,retail,28.27,1,0.082,loyalty,2024-05-05\r\n28005,2235,AMER,fashion,online,54.29,2,0.118,none,2024-05-09\r\n28006,2008,APAC,toys,online,67.01,4,0.247,none,2024-12-01\r\n28007,1479,AMER,home,mobile,59.91,6,0.040,none,2024-07-16\r\n28008,2405,AMER,home,retail,67.61,8,0.143,none,2024-07-21\r\n28009,1355,EMEA,electronics,retail,60.60,5,0.189,none,2024-07-16\r\n28010,1378,APAC,home,online,49.38,8,0.024,none,2024-02-06\r\n28011,1689,LATAM,electronics,retail,13.10,7,0.161,coupon,2024-04-17\r\n28012,1043,LATAM,electronics,online,39.54,4,0.040,loyalty,2024-11-21\r\n28013,2222,LATAM,toys,partner,53.36,2,0.205,none,2024-10-09\r\n28014,1690,LATAM,home,retail,42.80,7,0.092,none,2024-02-16\r\n28015,2080,LATAM,home,retail,128.50,7,0.091,none,2024-05-19\r\n28016,1921,LATAM,fashion,retail,36.72,5,0.169,none,2024-09-26\r\n28017,1320,EMEA,fashion,mobile,46.17,1,0.007,none,2024-11-22\r\n28018,1884,APAC,grocery,online,60.21,7,0.149,none,2024-06-13\r\n28019,1382,LATAM,grocery,online,63.24,1,0.182,coupon,2024-10-12\r\n28020,2436,LATAM,grocery,online,48.62,5,0.081,none,2024-03-01\r\n28021,1189,AMER,grocer
y,online,28.14,3,0.117,bundle,2024-07-06\r\n28022,1765,EMEA,toys,retail,89.26,8,0.246,bundle,2024-04-11\r\n28023,1596,EMEA,toys,online,21.96,7,0.077,bundle,2024-12-11\r\n28024,2293,LATAM,home,online,40.33,3,0.080,none,2024-04-17\r\n28025,2144,EMEA,fashion,online,113.02,4,0.094,none,2024-01-26\r\n28026,1294,APAC,grocery,online,35.14,2,0.153,loyalty,2024-06-18\r\n28027,1578,LATAM,electronics,retail,51.60,6,0.039,none,2024-08-17\r\n28028,1976,AMER,fashion,online,58.38,3,0.046,none,2024-01-24\r\n28029,1792,AMER,grocery,mobile,109.13,8,0.072,coupon,2024-03-05\r\n28030,2429,EMEA,fashion,retail,49.19,6,0.081,bundle,2024-12-18\r\n28031,1677,EMEA,home,online,57.28,6,0.041,none,2024-09-12\r\n28032,1575,APAC,grocery,mobile,31.03,6,0.215,coupon,2024-11-22\r\n28033,1302,LATAM,grocery,retail,108.05,5,0.128,none,2024-03-04\r\n28034,1789,EMEA,electronics,online,27.18,2,0.110,none,2024-08-10\r\n28035,1405,LATAM,electronics,retail,67.78,5,0.003,bundle,2024-05-07\r\n28036,2140,AMER,home,online,18.65,5,0.111,coupon,2024-08-08\r\n28037,2132,LATAM,electronics,retail,49.40,4,0.010,none,2024-01-12\r\n28038,1860,EMEA,sports,partner,63.96,6,0.130,none,2024-08-19\r\n28039,2342,AMER,electronics,mobile,52.42,6,0.193,coupon,2024-01-25\r\n28040,2363,AMER,grocery,retail,177.59,5,0.138,loyalty,2024-09-16\r\n28041,1460,LATAM,home,online,52.44,4,0.052,coupon,2024-04-19\r\n28042,2276,AMER,grocery,online,34.60,2,0.118,coupon,2024-09-15\r\n28043,1226,AMER,home,retail,102.85,2,0.185,none,2024-10-01\r\n28044,1578,LATAM,grocery,online,28.48,2,0.133,none,2024-01-15\r\n28045,1123,LATAM,grocery,partner,26.65,6,0.099,bundle,2024-05-25\r\n28046,2499,LATAM,home,online,93.13,6,0.183,bundle,2024-10-05\r\n28047,2171,EMEA,grocery,online,42.19,1,0.229,loyalty,2024-05-14\r\n28048,1723,LATAM,home,retail,114.54,7,0.247,none,2024-08-28\r\n28049,1103,EMEA,toys,retail,26.11,2,0.249,bundle,2024-12-20\r\n28050,1994,LATAM,grocery,retail,134.59,3,0.038,none,2024-10-26\r\n28051,1915,LATAM,home,online,59.62,6,0.168,none,2024-11-
25\r\n28052,2466,APAC,fashion,retail,133.94,5,0.108,none,2024-01-14\r\n28053,2141,AMER,home,retail,65.90,5,0.226,coupon,2024-06-19\r\n28054,1403,APAC,electronics,retail,77.67,6,0.185,none,2024-07-21\r\n28055,2345,LATAM,home,retail,57.39,6,0.060,none,2024-05-14\r\n28056,2471,APAC,electronics,online,26.77,6,0.013,loyalty,2024-06-17\r\n28057,1026,APAC,grocery,mobile,47.01,1,0.160,none,2024-05-24\r\n28058,1946,AMER,electronics,mobile,46.85,7,0.082,coupon,2024-06-25\r\n28059,1380,AMER,grocery,partner,84.80,1,0.082,none,2024-11-12\r\n28060,1400,EMEA,home,online,61.05,5,0.170,coupon,2024-05-07\r\n28061,2243,APAC,fashion,retail,32.76,2,0.054,none,2024-03-23\r\n28062,1780,APAC,fashion,online,55.89,7,0.205,none,2024-09-05\r\n28063,1577,AMER,electronics,mobile,14.33,7,0.063,none,2024-04-20\r\n28064,2240,LATAM,grocery,retail,58.81,3,0.029,loyalty,2024-06-07\r\n28065,1408,AMER,grocery,online,86.76,4,0.206,none,2024-03-28\r\n28066,1776,APAC,grocery,online,33.41,5,0.055,bundle,2024-01-24\r\n28067,2497,AMER,grocery,online,63.58,7,0.138,none,2024-01-26\r\n28068,2297,EMEA,grocery,online,60.33,6,0.200,bundle,2024-01-16\r\n28069,2080,LATAM,grocery,online,44.69,8,0.039,none,2024-03-18\r\n28070,1452,LATAM,toys,mobile,63.07,8,0.221,none,2024-10-05\r\n28071,1993,APAC,grocery,online,63.44,7,0.149,none,2024-03-14\r\n28072,2256,AMER,electronics,retail,40.47,7,0.246,loyalty,2024-09-09\r\n28073,1865,LATAM,toys,online,81.55,7,0.237,none,2024-03-03\r\n28074,1683,AMER,home,online,86.24,4,0.166,none,2024-06-11\r\n28075,1914,EMEA,grocery,online,84.58,2,0.116,bundle,2024-10-04\r\n28076,1036,EMEA,sports,mobile,42.25,2,0.146,loyalty,2024-09-14\r\n28077,2411,EMEA,grocery,online,37.78,8,0.139,none,2024-09-02\r\n28078,1158,LATAM,grocery,retail,38.10,5,0.079,none,2024-01-03\r\n28079,1042,LATAM,toys,online,24.31,7,0.230,coupon,2024-06-06\r\n28080,1694,APAC,fashion,partner,48.94,1,0.157,loyalty,2024-08-02\r\n28081,1507,EMEA,fashion,online,127.22,5,0.073,none,2024-09-21\r\n28082,1512,APAC,grocery,online,175.4
6,8,0.076,coupon,2024-07-01\r\n28083,1061,APAC,electronics,retail,63.53,8,0.249,none,2024-12-19\r\n28084,1805,EMEA,fashion,retail,47.11,3,0.035,coupon,2024-04-04\r\n28085,2325,LATAM,fashion,online,69.58,5,0.050,none,2024-03-02\r\n28086,1982,EMEA,grocery,retail,17.65,6,0.082,none,2024-01-11\r\n28087,1690,LATAM,electronics,retail,78.01,4,0.079,none,2024-06-26\r\n28088,1514,LATAM,grocery,partner,47.76,2,0.186,coupon,2024-03-22\r\n28089,2225,EMEA,electronics,retail,25.33,3,0.216,none,2024-03-27\r\n28090,1365,LATAM,electronics,retail,38.29,7,0.186,bundle,2024-12-15\r\n28091,1631,APAC,fashion,retail,32.74,1,0.161,none,2024-07-03\r\n28092,2332,APAC,home,partner,35.09,4,0.133,loyalty,2024-07-23\r\n28093,2051,APAC,grocery,online,75.01,1,0.104,none,2024-08-12\r\n28094,1310,AMER,home,mobile,62.03,6,0.090,none,2024-03-23\r\n28095,1559,EMEA,grocery,retail,75.58,7,0.058,none,2024-01-12\r\n28096,2204,AMER,grocery,online,87.25,7,0.094,none,2024-12-05\r\n28097,1201,LATAM,grocery,mobile,103.20,5,0.154,coupon,2024-06-10\r\n28098,1420,APAC,sports,retail,137.97,3,0.109,loyalty,2024-12-20\r\n28099,1263,AMER,toys,retail,54.99,3,0.065,none,2024-12-01\r\n28100,2075,LATAM,grocery,mobile,46.83,6,0.079,coupon,2024-05-24\r\n28101,1595,AMER,home,online,51.06,1,0.173,loyalty,2024-10-19\r\n28102,1802,AMER,toys,online,29.95,3,0.158,coupon,2024-01-01\r\n28103,2135,EMEA,electronics,online,58.58,1,0.011,bundle,2024-06-28\r\n28104,1859,AMER,electronics,retail,72.37,3,0.107,bundle,2024-11-21\r\n28105,2439,AMER,home,partner,33.80,4,0.096,loyalty,2024-11-22\r\n28106,1643,EMEA,grocery,online,13.31,6,0.000,coupon,2024-08-28\r\n28107,1684,EMEA,sports,mobile,49.76,2,0.160,none,2024-05-16\r\n28108,1680,LATAM,electronics,mobile,33.51,7,0.141,none,2024-03-22\r\n28109,2166,AMER,sports,partner,100.71,3,0.033,none,2024-02-05\r\n28110,1614,EMEA,home,mobile,30.71,6,0.176,none,2024-11-16\r\n28111,2249,LATAM,home,retail,46.77,4,0.132,none,2024-09-08\r\n28112,1024,APAC,fashion,retail,48.07,7,0.223,none,2024-08-28\r\n281
13,1613,EMEA,sports,retail,68.04,5,0.077,coupon,2024-07-06\r\n28114,2403,LATAM,fashion,mobile,75.64,3,0.156,none,2024-11-04\r\n28115,1564,APAC,electronics,retail,45.60,1,0.219,none,2024-08-26\r\n28116,1826,LATAM,home,retail,13.54,7,0.185,none,2024-10-01\r\n28117,2470,EMEA,electronics,retail,53.25,5,0.200,coupon,2024-02-04\r\n28118,1056,LATAM,grocery,online,105.47,4,0.074,none,2024-05-14\r\n28119,1758,AMER,grocery,retail,15.98,3,0.007,bundle,2024-06-08\r\n28120,1207,APAC,grocery,retail,56.85,3,0.164,bundle,2024-11-18\r\n28121,1480,APAC,fashion,online,91.51,1,0.168,none,2024-09-18\r\n28122,1525,APAC,electronics,retail,54.84,2,0.104,none,2024-01-01\r\n28123,1653,APAC,grocery,mobile,63.02,6,0.040,loyalty,2024-11-05\r\n28124,1374,APAC,electronics,mobile,46.18,4,0.182,none,2024-01-05\r\n28125,2234,LATAM,electronics,retail,122.44,6,0.244,bundle,2024-05-13\r\n28126,1474,LATAM,home,retail,63.27,2,0.137,coupon,2024-04-06\r\n28127,1942,APAC,electronics,retail,31.73,1,0.145,none,2024-01-17\r\n28128,1990,EMEA,electronics,online,32.25,8,0.209,none,2024-05-10\r\n28129,1944,AMER,electronics,retail,32.64,3,0.138,bundle,2024-05-13\r\n28130,1236,AMER,electronics,partner,65.25,8,0.075,none,2024-07-07\r\n28131,1280,LATAM,sports,retail,61.52,8,0.193,coupon,2024-01-10\r\n28132,1955,AMER,electronics,online,76.87,4,0.206,none,2024-02-01\r\n28133,1473,LATAM,sports,mobile,31.02,1,0.185,coupon,2024-02-03\r\n28134,1640,APAC,home,retail,59.13,7,0.212,none,2024-02-10\r\n28135,2338,AMER,grocery,retail,33.73,5,0.178,bundle,2024-04-02\r\n28136,1017,AMER,sports,mobile,32.29,7,0.156,none,2024-09-16\r\n28137,1096,EMEA,home,online,27.85,2,0.080,none,2024-04-08\r\n28138,2310,EMEA,grocery,partner,31.64,1,0.134,none,2024-03-27\r\n28139,1814,AMER,toys,online,69.34,2,0.233,coupon,2024-08-18\r\n28140,2096,LATAM,fashion,online,50.33,5,0.239,none,2024-12-14\r\n28141,1191,EMEA,grocery,online,54.11,2,0.204,none,2024-05-04\r\n28142,1884,APAC,home,online,56.24,2,0.123,coupon,2024-01-03\r\n28143,1430,EMEA,grocery,re
tail,101.87,6,0.018,none,2024-05-27\r\n28144,1122,AMER,sports,online,32.43,7,0.197,loyalty,2024-07-15\r\n28145,1876,LATAM,grocery,online,28.25,3,0.156,none,2024-11-19\r\n28146,2042,LATAM,home,retail,174.42,3,0.038,bundle,2024-01-12\r\n28147,1855,APAC,grocery,mobile,54.69,7,0.105,none,2024-06-15\r\n28148,1821,LATAM,electronics,retail,96.36,5,0.119,none,2024-02-01\r\n28149,1995,LATAM,grocery,retail,56.69,3,0.143,none,2024-09-11\r\n28150,2288,AMER,grocery,online,65.76,1,0.120,none,2024-01-19\r\n28151,2456,APAC,electronics,mobile,29.25,3,0.086,coupon,2024-05-13\r\n28152,1330,EMEA,sports,retail,40.59,1,0.041,none,2024-09-11\r\n28153,1362,AMER,home,online,85.63,8,0.196,none,2024-04-23\r\n28154,1777,AMER,sports,retail,101.92,6,0.184,coupon,2024-01-16\r\n28155,2071,APAC,sports,retail,26.40,7,0.085,coupon,2024-10-28\r\n28156,1687,APAC,electronics,retail,115.44,6,0.008,none,2024-03-17\r\n28157,2158,APAC,grocery,online,140.02,5,0.126,none,2024-02-11\r\n28158,2376,LATAM,sports,mobile,37.26,4,0.022,coupon,2024-11-19\r\n28159,1225,APAC,grocery,online,36.01,4,0.077,bundle,2024-01-18\r\n28160,1415,AMER,grocery,online,80.06,6,0.062,coupon,2024-08-24\r\n28161,1152,LATAM,electronics,online,67.96,8,0.210,none,2024-05-03\r\n28162,1182,EMEA,home,online,68.34,6,0.211,none,2024-09-19\r\n28163,2433,APAC,fashion,retail,62.91,1,0.058,none,2024-10-25\r\n28164,1808,APAC,home,mobile,67.52,6,0.004,none,2024-10-19\r\n28165,1849,EMEA,fashion,retail,98.79,7,0.183,none,2024-12-21\r\n28166,1125,LATAM,fashion,online,47.51,6,0.231,none,2024-11-18\r\n28167,2095,EMEA,toys,retail,40.51,2,0.183,coupon,2024-08-02\r\n28168,1139,EMEA,sports,retail,61.52,2,0.144,coupon,2024-05-07\r\n28169,2093,LATAM,grocery,retail,39.56,1,0.209,none,2024-03-18\r\n28170,2289,APAC,grocery,partner,81.06,7,0.182,loyalty,2024-06-09\r\n28171,2379,AMER,fashion,online,66.75,5,0.020,none,2024-07-13\r\n28172,2348,EMEA,electronics,mobile,62.49,1,0.004,coupon,2024-08-23\r\n28173,1837,LATAM,home,online,49.21,3,0.116,loyalty,2024-03-24\r\n28
174,1543,AMER,grocery,online,93.36,5,0.212,none,2024-03-20\r\n28175,2101,APAC,electronics,mobile,45.36,2,0.116,coupon,2024-05-22\r\n28176,1516,EMEA,home,online,62.76,7,0.227,coupon,2024-06-24\r\n28177,2230,LATAM,grocery,retail,34.76,4,0.013,coupon,2024-02-03\r\n28178,1365,LATAM,home,mobile,93.68,1,0.069,none,2024-04-01\r\n28179,2166,AMER,grocery,retail,49.79,8,0.173,coupon,2024-11-17\r\n28180,1942,APAC,electronics,online,37.85,3,0.080,coupon,2024-10-19\r\n28181,2456,APAC,fashion,mobile,137.49,4,0.110,none,2024-09-16\r\n28182,1174,APAC,grocery,online,36.80,3,0.108,bundle,2024-11-15\r\n28183,1436,APAC,toys,retail,24.69,5,0.041,none,2024-12-21\r\n28184,2038,LATAM,home,mobile,22.94,3,0.102,none,2024-11-28\r\n28185,2350,APAC,home,online,47.16,8,0.025,none,2024-12-07\r\n28186,1747,EMEA,fashion,online,12.06,5,0.109,none,2024-01-26\r\n28187,1493,APAC,toys,online,62.46,3,0.224,coupon,2024-06-28\r\n28188,2273,APAC,sports,online,52.77,5,0.248,bundle,2024-04-16\r\n28189,2161,LATAM,home,retail,93.58,2,0.089,none,2024-03-03\r\n28190,1742,AMER,home,retail,47.37,2,0.121,none,2024-12-07\r\n28191,1310,AMER,sports,retail,74.72,5,0.131,loyalty,2024-02-05\r\n28192,1812,EMEA,electronics,mobile,23.14,6,0.172,none,2024-02-15\r\n28193,2253,AMER,fashion,online,73.59,5,0.087,none,2024-04-18\r\n28194,2154,APAC,grocery,retail,63.30,2,0.017,coupon,2024-08-03\r\n28195,1131,APAC,grocery,online,77.84,1,0.197,coupon,2024-09-20\r\n28196,1663,LATAM,home,partner,61.11,1,0.245,none,2024-04-08\r\n28197,1733,LATAM,fashion,online,58.27,3,0.058,loyalty,2024-10-02\r\n28198,1595,AMER,fashion,retail,52.17,1,0.225,bundle,2024-11-25\r\n28199,1808,APAC,grocery,online,33.74,3,0.102,coupon,2024-12-15\r\n28200,1653,APAC,home,mobile,43.30,1,0.027,coupon,2024-07-07\r\n28201,2122,AMER,electronics,online,39.58,1,0.246,none,2024-12-20\r\n28202,1198,AMER,electronics,mobile,81.15,4,0.147,bundle,2024-01-20\r\n28203,1405,LATAM,electronics,retail,161.15,1,0.138,none,2024-05-23\r\n28204,2451,APAC,grocery,online,103.56,4,0.101,
none,2024-02-13\r\n28205,2029,APAC,electronics,mobile,19.64,7,0.048,coupon,2024-03-09\r\n28206,1529,LATAM,sports,online,109.92,7,0.121,loyalty,2024-02-07\r\n28207,1045,LATAM,home,online,37.56,1,0.103,none,2024-07-18\r\n28208,1119,LATAM,sports,retail,58.81,2,0.171,none,2024-04-07\r\n28209,1073,AMER,grocery,online,107.51,3,0.132,none,2024-06-07\r\n28210,2003,LATAM,home,retail,43.00,4,0.126,none,2024-12-20\r\n28211,1196,APAC,electronics,partner,48.21,5,0.063,bundle,2024-03-13\r\n28212,1119,LATAM,electronics,retail,54.81,5,0.098,none,2024-01-16\r\n28213,1027,APAC,fashion,online,46.92,3,0.132,bundle,2024-02-17\r\n28214,1452,LATAM,fashion,retail,47.00,3,0.017,none,2024-10-12\r\n28215,1287,AMER,toys,retail,95.54,6,0.226,none,2024-05-07\r\n28216,1575,APAC,electronics,online,158.97,8,0.118,coupon,2024-02-18\r\n28217,1489,AMER,fashion,mobile,115.68,8,0.236,none,2024-12-19\r\n28218,2114,AMER,home,online,31.06,6,0.200,coupon,2024-01-10\r\n28219,2035,LATAM,sports,partner,60.76,6,0.008,bundle,2024-09-12\r\n28220,2020,AMER,electronics,retail,97.10,5,0.236,loyalty,2024-12-19\r\n28221,1170,AMER,home,retail,46.23,8,0.170,none,2024-02-21\r\n28222,2187,EMEA,toys,retail,62.67,5,0.029,coupon,2024-10-18\r\n28223,2009,LATAM,grocery,online,37.76,7,0.083,none,2024-07-20\r\n28224,2249,LATAM,grocery,retail,70.99,4,0.088,bundle,2024-07-08\r\n28225,2443,LATAM,sports,online,49.70,1,0.235,none,2024-01-11\r\n28226,1280,LATAM,fashion,retail,30.26,6,0.007,loyalty,2024-07-15\r\n28227,2275,LATAM,toys,partner,24.51,8,0.023,loyalty,2024-10-09\r\n28228,1656,LATAM,toys,online,51.26,5,0.080,none,2024-05-08\r\n28229,2207,APAC,grocery,mobile,37.10,7,0.090,none,2024-01-21\r\n28230,1445,APAC,home,retail,75.52,8,0.230,coupon,2024-01-17\r\n28231,2332,APAC,home,online,28.55,4,0.146,bundle,2024-04-18\r\n28232,2341,EMEA,fashion,mobile,63.74,2,0.218,coupon,2024-11-21\r\n28233,1340,LATAM,grocery,online,58.41,7,0.059,coupon,2024-01-28\r\n28234,2444,EMEA,toys,online,81.07,6,0.004,none,2024-07-23\r\n28235,1694,APAC,sport
s,online,47.26,6,0.180,none,2024-11-14\r\n28236,2059,AMER,grocery,retail,32.85,6,0.235,bundle,2024-08-05\r\n28237,1305,EMEA,electronics,online,77.91,1,0.051,none,2024-02-11\r\n28238,1095,APAC,toys,online,49.85,5,0.145,bundle,2024-03-03\r\n28239,2461,LATAM,fashion,mobile,61.11,4,0.147,bundle,2024-04-10\r\n28240,1396,EMEA,sports,online,45.81,4,0.218,loyalty,2024-11-16\r\n28241,1990,EMEA,toys,mobile,58.62,6,0.179,none,2024-06-13\r\n28242,1383,AMER,grocery,online,49.06,8,0.134,none,2024-08-21\r\n28243,1310,AMER,sports,retail,53.61,4,0.155,none,2024-08-02\r\n28244,1942,APAC,toys,online,25.45,1,0.007,none,2024-02-10\r\n28245,1663,LATAM,grocery,online,57.68,8,0.205,none,2024-02-24\r\n28246,1410,AMER,electronics,retail,80.40,7,0.099,none,2024-10-25\r\n28247,1604,EMEA,electronics,retail,19.54,6,0.192,none,2024-11-16\r\n28248,2103,LATAM,fashion,mobile,83.67,6,0.100,loyalty,2024-01-03\r\n28249,1830,EMEA,grocery,online,25.72,2,0.023,none,2024-09-07\r\n28250,1035,EMEA,toys,online,66.28,1,0.247,coupon,2024-07-13\r\n28251,1836,LATAM,fashion,mobile,44.56,4,0.194,none,2024-09-25\r\n28252,2372,AMER,sports,retail,72.07,5,0.140,bundle,2024-08-28\r\n28253,1120,LATAM,fashion,online,97.74,8,0.029,coupon,2024-12-07\r\n28254,1851,EMEA,electronics,retail,95.93,7,0.070,none,2024-01-16\r\n28255,1136,EMEA,grocery,mobile,156.73,6,0.240,bundle,2024-08-05\r\n28256,1409,APAC,electronics,online,24.48,4,0.107,none,2024-09-06\r\n28257,1480,APAC,grocery,retail,49.20,4,0.231,loyalty,2024-12-09\r\n28258,1229,LATAM,toys,online,44.64,7,0.116,coupon,2024-02-19\r\n28259,2144,EMEA,fashion,retail,120.82,5,0.218,none,2024-04-18\r\n28260,2471,APAC,grocery,retail,104.10,2,0.214,bundle,2024-03-25\r\n28261,2119,AMER,home,partner,51.09,7,0.237,none,2024-04-16\r\n28262,1036,EMEA,grocery,retail,65.18,7,0.208,none,2024-05-05\r\n28263,1994,LATAM,home,online,56.09,8,0.012,bundle,2024-11-16\r\n28264,2489,LATAM,grocery,mobile,42.16,5,0.015,bundle,2024-04-18\r\n28265,1607,LATAM,sports,retail,59.84,4,0.231,coupon,2024-08-22\
r\n28266,1830,EMEA,grocery,mobile,63.53,1,0.203,none,2024-05-03\r\n28267,2365,LATAM,fashion,online,108.73,7,0.030,none,2024-09-20\r\n28268,1764,LATAM,home,retail,53.71,5,0.129,none,2024-12-03\r\n28269,1618,EMEA,grocery,retail,41.58,2,0.211,loyalty,2024-11-22\r\n28270,2179,LATAM,electronics,online,49.60,5,0.035,bundle,2024-06-18\r\n28271,2190,LATAM,grocery,mobile,68.22,8,0.015,loyalty,2024-02-11\r\n28272,1693,EMEA,grocery,online,29.38,6,0.127,none,2024-07-22\r\n28273,2187,EMEA,toys,retail,43.73,7,0.036,none,2024-03-16\r\n28274,2430,APAC,fashion,online,39.42,7,0.116,none,2024-04-28\r\n28275,1536,LATAM,electronics,retail,95.11,6,0.027,coupon,2024-12-02\r\n28276,1850,APAC,grocery,online,28.79,7,0.024,none,2024-11-04\r\n28277,1665,AMER,grocery,online,64.68,7,0.043,none,2024-09-20\r\n28278,1344,EMEA,electronics,retail,62.98,4,0.018,bundle,2024-01-10\r\n28279,2493,APAC,electronics,retail,30.06,5,0.075,none,2024-11-17\r\n28280,1975,EMEA,electronics,online,65.65,3,0.179,bundle,2024-03-21\r\n28281,1875,EMEA,grocery,online,39.53,6,0.006,none,2024-12-25\r\n28282,1587,LATAM,home,online,53.91,6,0.237,none,2024-11-15\r\n28283,1917,LATAM,sports,partner,50.07,5,0.114,coupon,2024-02-02\r\n28284,2223,EMEA,grocery,retail,61.12,1,0.231,none,2024-11-17\r\n28285,1508,LATAM,electronics,mobile,106.34,8,0.246,none,2024-04-07\r\n28286,2143,AMER,electronics,mobile,32.48,6,0.025,none,2024-03-12\r\n28287,1293,AMER,home,retail,31.09,2,0.249,none,2024-11-20\r\n28288,1238,AMER,home,mobile,252.16,6,0.114,none,2024-04-07\r\n28289,1590,APAC,fashion,online,16.44,3,0.081,none,2024-02-01\r\n28290,1067,APAC,electronics,online,51.75,5,0.092,bundle,2024-12-07\r\n28291,1279,EMEA,grocery,online,49.37,3,0.225,none,2024-12-16\r\n28292,1791,LATAM,home,online,62.54,3,0.193,none,2024-04-20\r\n28293,1852,AMER,electronics,online,26.18,1,0.230,loyalty,2024-03-01\r\n28294,1772,EMEA,home,mobile,39.92,6,0.046,loyalty,2024-07-13\r\n28295,1273,AMER,home,online,24.88,4,0.060,coupon,2024-12-16\r\n28296,2180,AMER,fashion,ret
ail,22.95,2,0.181,none,2024-05-09\r\n28297,1836,LATAM,sports,online,51.35,6,0.119,loyalty,2024-10-16\r\n28298,2009,LATAM,grocery,partner,39.15,5,0.071,none,2024-01-13\r\n28299,1975,EMEA,grocery,retail,41.53,3,0.141,none,2024-01-10\r\n28300,1071,AMER,grocery,retail,55.82,1,0.230,bundle,2024-05-04\r\n28301,2086,APAC,fashion,online,29.29,8,0.038,none,2024-11-01\r\n28302,1128,LATAM,fashion,retail,41.43,2,0.077,none,2024-10-16\r\n28303,2142,LATAM,sports,online,45.79,4,0.175,loyalty,2024-02-11\r\n28304,1404,EMEA,toys,retail,46.51,4,0.109,none,2024-05-14\r\n28305,1437,EMEA,electronics,retail,54.94,6,0.064,loyalty,2024-07-06\r\n28306,2406,EMEA,fashion,online,60.02,5,0.216,bundle,2024-02-24\r\n28307,1031,AMER,fashion,online,67.33,7,0.017,none,2024-10-09\r\n28308,1200,EMEA,grocery,retail,130.71,3,0.193,bundle,2024-06-23\r\n28309,1873,EMEA,toys,online,37.67,3,0.226,none,2024-05-08\r\n28310,2105,APAC,home,retail,151.80,2,0.100,coupon,2024-04-28\r\n28311,1029,EMEA,home,online,63.99,2,0.183,none,2024-04-23\r\n28312,1249,EMEA,grocery,online,59.16,3,0.227,none,2024-01-10\r\n28313,2423,LATAM,home,retail,54.68,8,0.207,coupon,2024-12-03\r\n28314,2028,APAC,home,retail,24.13,4,0.164,none,2024-02-08\r\n28315,1271,EMEA,home,online,37.41,8,0.086,none,2024-02-21\r\n28316,1918,EMEA,grocery,mobile,59.77,4,0.014,none,2024-03-23\r\n28317,1489,AMER,home,retail,46.80,5,0.203,none,2024-09-28\r\n28318,2367,AMER,grocery,retail,41.38,8,0.131,bundle,2024-03-12\r\n28319,1345,AMER,grocery,online,69.79,6,0.109,none,2024-07-15\r\n28320,2394,EMEA,sports,mobile,59.28,8,0.007,none,2024-09-27\r\n28321,1947,EMEA,electronics,mobile,34.56,3,0.149,none,2024-06-26\r\n28322,1453,APAC,fashion,partner,86.62,2,0.163,none,2024-01-21\r\n28323,1085,EMEA,grocery,retail,66.51,3,0.110,bundle,2024-06-07\r\n28324,2114,AMER,fashion,retail,139.73,8,0.248,none,2024-03-03\r\n28325,2216,AMER,home,online,72.89,2,0.107,none,2024-09-05\r\n28326,1253,AMER,home,online,55.46,5,0.151,none,2024-12-13\r\n28327,1279,EMEA,sports,retail,18.23
,5,0.018,coupon,2024-03-09\r\n28328,1975,EMEA,grocery,online,93.95,6,0.011,none,2024-03-12\r\n28329,1705,AMER,grocery,retail,27.22,8,0.219,none,2024-11-03\r\n28330,2256,AMER,home,retail,39.52,7,0.242,none,2024-04-27\r\n28331,1184,AMER,fashion,online,67.58,2,0.075,loyalty,2024-08-10\r\n28332,1573,AMER,grocery,online,14.31,7,0.226,bundle,2024-08-25\r\n28333,1881,LATAM,sports,online,66.69,1,0.150,none,2024-04-18\r\n28334,1026,APAC,grocery,retail,34.50,1,0.168,bundle,2024-04-26\r\n28335,2250,AMER,home,mobile,33.92,2,0.190,none,2024-06-14\r\n28336,1400,EMEA,home,online,56.51,2,0.213,coupon,2024-04-18\r\n28337,2271,LATAM,electronics,online,45.75,3,0.199,coupon,2024-06-22\r\n28338,2436,LATAM,sports,online,55.56,6,0.084,none,2024-04-20\r\n28339,1123,LATAM,sports,retail,80.04,1,0.154,none,2024-08-19\r\n28340,1862,LATAM,grocery,online,33.70,2,0.176,bundle,2024-09-04\r\n28341,1291,EMEA,home,retail,21.99,2,0.047,loyalty,2024-07-17\r\n28342,1298,LATAM,home,online,98.25,8,0.199,none,2024-08-05\r\n28343,1632,LATAM,home,online,40.13,5,0.001,bundle,2024-05-08\r\n28344,2015,APAC,sports,online,74.18,5,0.135,none,2024-07-01\r\n28345,2005,APAC,toys,retail,69.62,5,0.070,none,2024-04-13\r\n28346,1795,EMEA,grocery,online,51.36,6,0.035,none,2024-07-20\r\n28347,2090,AMER,sports,partner,75.46,6,0.236,none,2024-04-15\r\n28348,2194,APAC,home,retail,19.26,6,0.219,none,2024-02-21\r\n28349,1290,EMEA,grocery,retail,132.28,2,0.179,coupon,2024-05-20\r\n28350,1118,AMER,grocery,online,100.94,1,0.179,coupon,2024-04-11\r\n28351,1924,AMER,electronics,online,95.28,7,0.176,none,2024-02-07\r\n28352,1203,AMER,electronics,retail,52.02,6,0.211,bundle,2024-10-03\r\n28353,1215,LATAM,electronics,retail,140.77,2,0.160,none,2024-08-16\r\n28354,2095,EMEA,sports,retail,19.15,7,0.017,loyalty,2024-10-26\r\n28355,2282,EMEA,sports,online,65.02,7,0.140,bundle,2024-10-10\r\n28356,2250,AMER,electronics,online,26.85,1,0.076,none,2024-11-19\r\n28357,1485,APAC,grocery,online,173.82,7,0.159,none,2024-04-12\r\n28358,1296,LATAM,el
ectronics,retail,58.91,3,0.173,none,2024-03-22\r\n28359,2001,EMEA,sports,online,32.54,7,0.038,none,2024-05-28\r\n28360,1265,APAC,sports,retail,30.27,5,0.168,bundle,2024-06-23\r\n28361,1118,AMER,fashion,retail,62.79,5,0.082,none,2024-10-14\r\n28362,1173,LATAM,fashion,retail,38.58,3,0.243,bundle,2024-08-23\r\n28363,1986,LATAM,electronics,mobile,23.97,1,0.041,none,2024-11-26\r\n28364,1690,LATAM,electronics,retail,53.23,7,0.250,none,2024-08-03\r\n28365,1160,LATAM,sports,retail,15.44,4,0.193,coupon,2024-05-01\r\n28366,2220,LATAM,grocery,retail,55.69,6,0.237,none,2024-08-23\r\n28367,1803,LATAM,grocery,retail,70.11,2,0.082,none,2024-12-04\r\n28368,1600,AMER,fashion,retail,31.88,6,0.141,coupon,2024-12-14\r\n28369,1871,APAC,home,mobile,118.69,8,0.051,bundle,2024-12-13\r\n28370,1224,APAC,grocery,mobile,46.81,1,0.037,none,2024-09-23\r\n28371,1282,LATAM,home,online,61.16,5,0.137,coupon,2024-10-20\r\n28372,2157,AMER,electronics,mobile,97.02,7,0.049,bundle,2024-12-28\r\n28373,2211,APAC,grocery,online,77.73,3,0.196,none,2024-07-26\r\n28374,1307,AMER,home,online,28.51,1,0.006,none,2024-08-04\r\n28375,1603,EMEA,sports,mobile,68.13,5,0.209,loyalty,2024-05-02\r\n28376,1306,LATAM,toys,mobile,50.04,2,0.141,none,2024-08-15\r\n28377,1031,AMER,electronics,retail,49.70,1,0.066,none,2024-08-12\r\n28378,1400,EMEA,fashion,mobile,99.55,1,0.032,bundle,2024-07-08\r\n28379,2403,LATAM,electronics,online,196.32,8,0.240,bundle,2024-01-15\r\n28380,1676,LATAM,electronics,retail,25.89,5,0.177,coupon,2024-12-20\r\n28381,2206,AMER,home,mobile,36.71,4,0.090,coupon,2024-10-16\r\n28382,1259,EMEA,electronics,mobile,36.73,1,0.084,coupon,2024-05-25\r\n28383,1343,LATAM,electronics,online,52.11,1,0.061,none,2024-01-11\r\n28384,1326,AMER,grocery,retail,112.46,5,0.074,none,2024-12-05\r\n28385,1842,LATAM,sports,online,31.63,5,0.123,none,2024-01-16\r\n28386,2069,AMER,sports,mobile,50.72,1,0.091,none,2024-06-23\r\n28387,2356,LATAM,home,online,89.03,3,0.217,none,2024-04-09\r\n28388,2325,LATAM,electronics,online,46.51,7
,0.110,coupon,2024-06-28\r\n28389,2301,EMEA,electronics,mobile,67.44,8,0.010,coupon,2024-11-13\r\n28390,1285,EMEA,grocery,retail,28.15,7,0.080,none,2024-10-26\r\n28391,1735,LATAM,sports,online,80.15,3,0.052,coupon,2024-06-14\r\n28392,1892,LATAM,home,mobile,25.72,6,0.088,none,2024-03-22\r\n28393,2129,APAC,fashion,retail,33.31,8,0.221,none,2024-11-08\r\n28394,1661,LATAM,sports,mobile,65.10,2,0.151,bundle,2024-07-06\r\n28395,2199,LATAM,sports,mobile,38.94,7,0.079,coupon,2024-01-09\r\n28396,1172,APAC,fashion,online,145.66,2,0.004,none,2024-01-20\r\n28397,1568,AMER,home,retail,168.86,3,0.179,none,2024-06-20\r\n28398,1574,AMER,sports,retail,46.35,6,0.079,none,2024-12-08\r\n28399,1167,EMEA,toys,online,130.86,5,0.202,none,2024-01-14\r\n28400,2336,APAC,sports,retail,60.84,6,0.080,none,2024-07-10\r\n28401,1307,AMER,grocery,retail,94.07,5,0.079,none,2024-03-22\r\n28402,1881,LATAM,electronics,mobile,56.19,3,0.171,loyalty,2024-09-20\r\n28403,1989,LATAM,home,partner,142.92,5,0.191,coupon,2024-01-03\r\n28404,1899,APAC,home,mobile,41.35,8,0.066,loyalty,2024-12-04\r\n28405,1293,AMER,home,online,73.01,7,0.206,none,2024-07-02\r\n28406,2056,LATAM,grocery,retail,59.63,2,0.084,coupon,2024-06-28\r\n28407,1656,LATAM,home,online,51.13,6,0.118,none,2024-02-19\r\n28408,1455,APAC,toys,online,31.22,6,0.079,loyalty,2024-04-16\r\n28409,1533,APAC,grocery,mobile,46.32,8,0.075,coupon,2024-11-15\r\n28410,1864,EMEA,grocery,online,47.02,5,0.042,coupon,2024-11-02\r\n28411,1739,AMER,home,mobile,65.85,8,0.117,none,2024-09-15\r\n28412,1186,APAC,grocery,online,100.52,6,0.110,none,2024-01-16\r\n28413,1869,AMER,toys,online,166.11,6,0.067,none,2024-05-14\r\n28414,2098,AMER,fashion,mobile,49.53,1,0.050,coupon,2024-11-13\r\n28415,2126,APAC,grocery,online,24.27,7,0.107,none,2024-04-07\r\n28416,2459,AMER,home,online,241.08,5,0.117,loyalty,2024-10-11\r\n28417,2123,AMER,electronics,online,109.30,1,0.043,coupon,2024-12-06\r\n28418,2434,APAC,grocery,retail,80.45,2,0.060,bundle,2024-01-02\r\n28419,1085,EMEA,grocery,onl
ine,79.28,8,0.191,none,2024-12-02\r\n28420,1844,APAC,grocery,online,219.04,5,0.209,bundle,2024-07-25\r\n28421,1548,EMEA,electronics,retail,20.98,2,0.147,none,2024-12-05\r\n28422,1776,APAC,fashion,partner,40.82,7,0.237,none,2024-02-07\r\n28423,1220,LATAM,grocery,partner,60.67,2,0.049,none,2024-06-16\r\n28424,2348,EMEA,electronics,retail,61.86,2,0.047,none,2024-02-03\r\n28425,1076,LATAM,electronics,retail,36.18,7,0.247,bundle,2024-04-15\r\n28426,2107,APAC,grocery,online,87.32,5,0.193,none,2024-02-02\r\n28427,1559,EMEA,electronics,online,31.00,2,0.052,none,2024-09-13\r\n28428,1536,LATAM,sports,online,55.41,6,0.080,none,2024-06-26\r\n28429,1606,AMER,electronics,mobile,15.85,7,0.037,none,2024-07-20\r\n28430,2228,EMEA,fashion,retail,53.68,2,0.210,none,2024-06-05\r\n28431,1652,APAC,fashion,retail,82.60,1,0.114,none,2024-07-01\r\n28432,1535,AMER,home,online,74.39,8,0.202,none,2024-07-10\r\n28433,2011,AMER,grocery,retail,22.64,4,0.219,bundle,2024-12-28\r\n28434,1372,APAC,sports,online,103.20,5,0.172,bundle,2024-08-17\r\n28435,1455,APAC,sports,mobile,79.57,4,0.221,bundle,2024-04-16\r\n28436,1002,EMEA,electronics,retail,110.35,2,0.117,none,2024-02-15\r\n28437,1732,LATAM,grocery,retail,28.17,6,0.032,bundle,2024-05-03\r\n28438,1992,LATAM,electronics,retail,66.92,2,0.142,none,2024-04-03\r\n28439,1175,AMER,fashion,retail,14.49,4,0.240,coupon,2024-02-24\r\n28440,1819,AMER,fashion,online,23.74,7,0.030,bundle,2024-09-23\r\n28441,1232,LATAM,grocery,online,74.72,7,0.243,none,2024-06-10\r\n28442,2424,LATAM,fashion,online,63.16,7,0.034,none,2024-07-20\r\n28443,2026,LATAM,grocery,retail,61.52,5,0.159,none,2024-02-19\r\n28444,1196,APAC,grocery,retail,50.10,3,0.047,none,2024-07-11\r\n28445,1595,AMER,home,retail,23.79,6,0.121,none,2024-08-28\r\n28446,2050,APAC,electronics,online,33.95,1,0.007,none,2024-08-10\r\n28447,2023,LATAM,home,retail,20.50,8,0.069,bundle,2024-06-24\r\n28448,1864,EMEA,electronics,retail,51.27,2,0.005,none,2024-03-24\r\n28449,1753,APAC,home,retail,48.05,2,0.063,loyalty,2
024-06-26\r\n28450,1640,APAC,home,retail,102.75,8,0.229,loyalty,2024-08-28\r\n28451,1723,LATAM,sports,online,102.49,2,0.013,none,2024-08-06\r\n28452,1189,AMER,home,mobile,98.86,2,0.249,none,2024-11-17\r\n28453,2081,APAC,fashion,retail,54.48,3,0.250,none,2024-02-06\r\n28454,2142,LATAM,electronics,online,18.89,8,0.084,none,2024-12-11\r\n28455,1374,APAC,grocery,online,71.32,7,0.186,loyalty,2024-07-12\r\n28456,1409,APAC,electronics,online,30.48,5,0.187,coupon,2024-10-04\r\n28457,2240,LATAM,electronics,retail,52.11,1,0.138,coupon,2024-09-26\r\n28458,2216,AMER,home,retail,44.76,6,0.188,none,2024-10-27\r\n28459,1665,AMER,fashion,online,68.34,2,0.043,bundle,2024-10-25\r\n28460,2480,APAC,grocery,online,99.96,5,0.120,none,2024-09-28\r\n28461,2458,EMEA,sports,mobile,91.86,8,0.151,none,2024-08-23\r\n28462,1778,LATAM,grocery,mobile,42.48,4,0.066,none,2024-06-22\r\n28463,1335,APAC,electronics,online,139.04,2,0.208,none,2024-12-02\r\n28464,1927,EMEA,electronics,online,74.63,3,0.120,loyalty,2024-12-14\r\n28465,2064,LATAM,electronics,partner,39.75,4,0.162,bundle,2024-11-06\r\n28466,2448,APAC,grocery,online,61.89,8,0.232,bundle,2024-06-02\r\n28467,2048,LATAM,home,mobile,65.93,3,0.224,bundle,2024-11-19\r\n28468,1360,APAC,grocery,mobile,85.53,5,0.209,bundle,2024-09-04\r\n28469,2057,APAC,electronics,retail,22.98,8,0.224,none,2024-01-07\r\n28470,1878,EMEA,home,retail,38.06,2,0.061,coupon,2024-01-04\r\n28471,2017,EMEA,home,online,29.80,6,0.233,bundle,2024-03-10\r\n28472,1811,APAC,fashion,online,35.53,6,0.125,none,2024-11-19\r\n28473,1921,LATAM,home,online,128.04,4,0.013,none,2024-12-08\r\n28474,1153,AMER,sports,retail,96.86,4,0.039,bundle,2024-11-15\r\n28475,1437,EMEA,home,mobile,43.11,7,0.032,none,2024-05-09\r\n28476,1016,AMER,toys,online,179.14,3,0.162,bundle,2024-02-04\r\n28477,2371,LATAM,electronics,partner,25.06,7,0.132,none,2024-02-11\r\n28478,1537,LATAM,sports,online,68.88,7,0.053,none,2024-07-14\r\n28479,1883,LATAM,grocery,partner,53.75,7,0.136,none,2024-08-07\r\n28480,1793,LATAM,
electronics,retail,33.47,7,0.000,none,2024-11-19\r\n28481,2347,AMER,grocery,partner,60.66,7,0.035,none,2024-03-07\r\n28482,2470,EMEA,electronics,online,105.74,2,0.117,bundle,2024-10-23\r\n28483,2191,AMER,grocery,online,18.74,4,0.119,none,2024-01-01\r\n28484,1878,EMEA,grocery,online,46.48,8,0.191,loyalty,2024-09-17\r\n28485,1262,APAC,toys,online,16.65,2,0.195,none,2024-07-28\r\n28486,1263,AMER,electronics,mobile,81.71,5,0.146,none,2024-10-04\r\n28487,2037,LATAM,grocery,online,52.17,5,0.190,none,2024-02-18\r\n28488,1140,LATAM,electronics,retail,80.55,3,0.200,none,2024-06-26\r\n28489,1068,APAC,grocery,online,25.56,5,0.191,none,2024-11-06\r\n28490,1749,LATAM,toys,partner,32.99,6,0.170,none,2024-07-21\r\n28491,1825,AMER,grocery,online,45.58,8,0.031,bundle,2024-06-12\r\n28492,1779,APAC,grocery,retail,53.55,5,0.083,none,2024-12-13\r\n28493,2169,EMEA,fashion,retail,87.95,7,0.073,none,2024-08-28\r\n28494,1847,LATAM,sports,partner,48.78,8,0.027,coupon,2024-07-27\r\n28495,1086,AMER,home,online,60.54,8,0.041,none,2024-10-11\r\n28496,1033,APAC,electronics,retail,91.33,2,0.235,coupon,2024-05-27\r\n28497,2221,LATAM,fashion,retail,96.56,1,0.192,coupon,2024-10-17\r\n28498,2252,EMEA,home,partner,80.15,7,0.067,bundle,2024-06-09\r\n28499,1709,EMEA,home,mobile,94.74,8,0.095,bundle,2024-07-11\r\n28500,1540,LATAM,fashion,online,75.57,5,0.070,none,2024-12-07\r\n28501,2410,EMEA,toys,online,49.82,3,0.065,loyalty,2024-04-17\r\n28502,1156,APAC,fashion,retail,49.95,5,0.248,none,2024-10-13\r\n28503,1432,APAC,electronics,mobile,61.97,3,0.011,coupon,2024-05-13\r\n28504,2095,EMEA,sports,online,43.21,7,0.220,none,2024-09-23\r\n28505,1359,LATAM,home,online,89.35,2,0.173,none,2024-11-01\r\n28506,2295,EMEA,home,online,69.85,8,0.104,coupon,2024-03-10\r\n28507,1437,EMEA,sports,online,33.75,4,0.056,coupon,2024-07-06\r\n28508,1063,AMER,grocery,mobile,76.30,4,0.230,loyalty,2024-11-22\r\n28509,1990,EMEA,grocery,online,42.86,3,0.135,coupon,2024-12-23\r\n28510,2444,EMEA,electronics,online,91.29,6,0.073,coupon,
2024-02-02\r\n28511,1099,LATAM,electronics,retail,81.72,5,0.214,none,2024-12-20\r\n28512,2291,EMEA,electronics,online,58.59,7,0.173,bundle,2024-10-10\r\n28513,2156,AMER,grocery,partner,54.63,1,0.193,coupon,2024-05-05\r\n28514,1482,AMER,electronics,retail,40.88,3,0.121,loyalty,2024-07-01\r\n28515,1057,LATAM,grocery,retail,51.86,1,0.167,none,2024-06-06\r\n28516,1835,AMER,fashion,online,32.15,8,0.117,none,2024-02-26\r\n28517,1504,AMER,grocery,online,47.69,4,0.158,bundle,2024-07-18\r\n28518,1651,LATAM,fashion,online,22.42,6,0.089,loyalty,2024-08-12\r\n28519,1299,LATAM,fashion,online,73.99,5,0.236,coupon,2024-09-06\r\n28520,1251,EMEA,fashion,mobile,24.95,4,0.076,none,2024-01-10\r\n28521,1795,EMEA,grocery,online,52.60,5,0.167,loyalty,2024-08-20\r\n28522,1548,EMEA,sports,retail,25.45,3,0.017,none,2024-09-16\r\n28523,2368,AMER,fashion,partner,81.77,5,0.143,none,2024-03-23\r\n28524,2033,LATAM,toys,retail,39.98,8,0.205,bundle,2024-01-19\r\n28525,1415,AMER,electronics,online,56.06,2,0.177,none,2024-11-09\r\n28526,1341,EMEA,sports,online,41.32,5,0.109,none,2024-07-21\r\n28527,1182,EMEA,home,retail,67.46,6,0.027,bundle,2024-07-27\r\n28528,1681,LATAM,grocery,online,36.43,1,0.135,bundle,2024-04-08\r\n28529,1822,EMEA,sports,partner,69.07,6,0.221,coupon,2024-06-05\r\n28530,1204,AMER,fashion,retail,53.07,5,0.036,none,2024-09-04\r\n28531,1134,APAC,grocery,partner,71.63,7,0.170,bundle,2024-08-10\r\n28532,2000,APAC,grocery,mobile,60.64,7,0.230,none,2024-09-25\r\n28533,1675,LATAM,home,retail,69.18,5,0.203,none,2024-02-06\r\n28534,1355,EMEA,electronics,retail,58.76,2,0.136,none,2024-12-15\r\n28535,2238,AMER,fashion,retail,30.96,5,0.108,loyalty,2024-11-01\r\n28536,1917,LATAM,grocery,online,95.95,1,0.004,none,2024-11-10\r\n28537,1561,EMEA,home,mobile,37.18,5,0.185,coupon,2024-10-25\r\n28538,1032,AMER,grocery,retail,69.95,8,0.121,none,2024-06-10\r\n28539,1943,AMER,grocery,retail,65.35,8,0.191,none,2024-11-13\r\n28540,2065,EMEA,toys,online,66.15,5,0.194,none,2024-07-23\r\n28541,2027,EMEA,spor
ts,retail,59.86,2,0.015,none,2024-08-09\r\n28542,1962,APAC,toys,online,25.70,5,0.003,coupon,2024-09-03\r\n28543,2461,LATAM,toys,online,37.65,1,0.231,loyalty,2024-10-11\r\n28544,1201,LATAM,toys,mobile,37.21,4,0.189,none,2024-01-25\r\n28545,1493,APAC,electronics,online,37.59,4,0.005,none,2024-01-15\r\n28546,1077,AMER,grocery,mobile,39.08,7,0.033,none,2024-06-13\r\n28547,1918,EMEA,grocery,partner,57.78,5,0.094,none,2024-03-16\r\n28548,1183,AMER,grocery,online,54.44,6,0.028,none,2024-08-02\r\n28549,1909,APAC,fashion,retail,46.77,7,0.164,none,2024-04-28\r\n28550,1665,AMER,fashion,retail,105.44,4,0.089,none,2024-12-23\r\n28551,1099,LATAM,electronics,partner,35.49,2,0.053,none,2024-08-18\r\n28552,1167,EMEA,grocery,retail,79.53,4,0.061,coupon,2024-06-19\r\n28553,1760,LATAM,electronics,online,81.88,5,0.148,loyalty,2024-11-14\r\n28554,1566,EMEA,toys,mobile,70.84,4,0.023,none,2024-02-01\r\n28555,2012,APAC,electronics,retail,46.11,7,0.230,none,2024-08-06\r\n28556,2466,APAC,fashion,retail,39.48,3,0.188,none,2024-01-04\r\n28557,2042,LATAM,sports,mobile,22.19,7,0.180,none,2024-01-05\r\n28558,1724,LATAM,grocery,mobile,42.90,2,0.004,none,2024-02-01\r\n28559,1980,LATAM,electronics,online,21.34,6,0.193,bundle,2024-05-13\r\n28560,1764,LATAM,home,retail,56.63,5,0.100,coupon,2024-11-27\r\n28561,2183,EMEA,sports,online,52.44,7,0.013,none,2024-12-05\r\n28562,2434,APAC,fashion,mobile,58.03,8,0.175,none,2024-06-09\r\n28563,1146,LATAM,sports,online,40.86,1,0.195,coupon,2024-10-02\r\n28564,2092,AMER,electronics,retail,66.90,6,0.147,none,2024-07-26\r\n28565,1626,EMEA,sports,retail,68.95,3,0.153,none,2024-07-18\r\n28566,1460,LATAM,fashion,online,35.16,6,0.153,bundle,2024-03-18\r\n28567,1800,APAC,toys,online,54.58,3,0.224,none,2024-07-26\r\n28568,1799,EMEA,fashion,retail,94.01,2,0.000,coupon,2024-03-23\r\n28569,2139,AMER,grocery,online,69.46,3,0.012,coupon,2024-06-22\r\n28570,2007,LATAM,home,online,64.24,3,0.117,none,2024-02-17\r\n28571,1280,LATAM,grocery,online,34.55,2,0.012,none,2024-11-13\r\n2
8572,1416,EMEA,home,online,58.04,1,0.239,none,2024-08-07\r\n28573,1469,EMEA,grocery,retail,128.99,6,0.107,none,2024-04-27\r\n28574,1318,LATAM,grocery,online,72.17,5,0.068,none,2024-04-27\r\n28575,1718,EMEA,toys,retail,42.24,5,0.175,loyalty,2024-12-03\r\n28576,2220,LATAM,electronics,online,97.24,3,0.162,bundle,2024-05-06\r\n28577,1049,AMER,grocery,retail,60.56,4,0.063,bundle,2024-05-22\r\n28578,1140,LATAM,electronics,retail,70.15,5,0.138,none,2024-10-07\r\n28579,1472,AMER,electronics,mobile,74.51,3,0.184,coupon,2024-01-15\r\n28580,2006,APAC,electronics,online,31.82,1,0.129,coupon,2024-02-06\r\n28581,1882,AMER,fashion,retail,70.54,7,0.092,none,2024-11-07\r\n28582,1613,EMEA,fashion,mobile,62.69,4,0.214,none,2024-01-12\r\n28583,2361,EMEA,toys,partner,67.20,4,0.006,bundle,2024-10-13\r\n28584,1526,EMEA,home,retail,30.42,1,0.029,none,2024-04-22\r\n28585,2349,APAC,grocery,mobile,28.75,3,0.107,none,2024-08-07\r\n28586,1957,AMER,electronics,mobile,35.31,5,0.079,none,2024-08-09\r\n28587,1431,APAC,grocery,online,65.51,5,0.159,none,2024-07-11\r\n28588,1458,APAC,fashion,partner,41.36,5,0.148,bundle,2024-02-07\r\n28589,1548,EMEA,sports,retail,52.00,2,0.155,none,2024-10-09\r\n28590,1338,EMEA,electronics,mobile,30.15,8,0.038,none,2024-11-16\r\n28591,1436,APAC,electronics,online,98.43,2,0.160,bundle,2024-04-20\r\n28592,1944,AMER,electronics,online,76.48,6,0.244,coupon,2024-09-07\r\n28593,1304,LATAM,home,online,34.23,5,0.096,bundle,2024-05-26\r\n28594,1845,AMER,electronics,online,119.76,3,0.155,loyalty,2024-05-14\r\n28595,1160,LATAM,home,online,68.64,4,0.069,none,2024-11-05\r\n28596,2086,APAC,home,retail,34.03,5,0.103,none,2024-07-07\r\n28597,1965,LATAM,home,mobile,42.63,5,0.136,none,2024-02-14\r\n28598,1857,LATAM,grocery,retail,23.04,3,0.173,none,2024-09-25\r\n28599,2342,AMER,toys,retail,56.99,1,0.123,loyalty,2024-06-13\r\n28600,1880,LATAM,fashion,retail,55.87,3,0.061,loyalty,2024-02-10\r\n28601,1816,EMEA,home,retail,130.69,8,0.024,none,2024-01-24\r\n28602,2190,LATAM,electronics,onli
ne,56.74,1,0.025,none,2024-12-18\r\n28603,1261,APAC,fashion,online,51.50,3,0.057,coupon,2024-01-22\r\n28604,1129,LATAM,grocery,online,48.77,7,0.147,loyalty,2024-10-21\r\n28605,1915,LATAM,electronics,retail,72.53,6,0.201,coupon,2024-01-04\r\n28606,2214,AMER,grocery,retail,40.50,6,0.023,none,2024-04-25\r\n28607,1309,EMEA,home,retail,46.74,6,0.114,none,2024-10-13\r\n28608,1207,APAC,grocery,mobile,159.03,4,0.004,none,2024-07-22\r\n28609,2483,LATAM,electronics,mobile,66.29,2,0.049,none,2024-02-20\r\n28610,1499,EMEA,grocery,mobile,42.08,3,0.090,none,2024-03-05\r\n28611,1470,LATAM,home,retail,102.19,1,0.020,none,2024-05-25\r\n28612,1832,APAC,toys,retail,60.69,3,0.070,none,2024-11-05\r\n28613,1166,AMER,fashion,online,46.20,6,0.227,none,2024-12-10\r\n28614,1473,LATAM,grocery,partner,88.93,3,0.209,loyalty,2024-03-04\r\n28615,1975,EMEA,fashion,online,85.86,1,0.190,coupon,2024-11-09\r\n28616,1293,AMER,electronics,retail,46.73,7,0.057,coupon,2024-08-21\r\n28617,1685,AMER,electronics,retail,99.80,5,0.073,none,2024-09-17\r\n28618,1503,APAC,toys,retail,49.46,7,0.064,coupon,2024-10-18\r\n28619,1370,APAC,home,online,83.96,3,0.126,none,2024-07-19\r\n28620,1307,AMER,fashion,online,71.17,6,0.145,coupon,2024-12-09\r\n28621,2302,APAC,grocery,retail,28.82,6,0.220,none,2024-10-05\r\n28622,2352,APAC,grocery,mobile,67.03,5,0.024,coupon,2024-10-25\r\n28623,1448,EMEA,grocery,partner,60.65,8,0.024,none,2024-04-19\r\n28624,1946,AMER,electronics,online,65.02,4,0.247,none,2024-07-23\r\n28625,1120,LATAM,fashion,online,26.26,3,0.177,bundle,2024-05-21\r\n28626,1605,APAC,grocery,online,30.21,5,0.193,none,2024-06-19\r\n28627,2034,LATAM,fashion,retail,19.62,3,0.140,none,2024-12-08\r\n28628,1289,LATAM,toys,online,18.83,3,0.134,none,2024-08-08\r\n28629,1313,EMEA,fashion,retail,89.61,6,0.186,coupon,2024-08-03\r\n28630,2093,LATAM,grocery,retail,92.73,2,0.192,none,2024-03-08\r\n28631,1770,AMER,electronics,retail,31.29,7,0.002,loyalty,2024-11-08\r\n28632,2084,LATAM,electronics,retail,50.79,6,0.019,none,2024-03
-25\r\n28633,1648,APAC,sports,online,52.24,6,0.114,loyalty,2024-06-10\r\n28634,1108,EMEA,fashion,retail,32.42,8,0.102,bundle,2024-01-24\r\n28635,1379,EMEA,fashion,mobile,36.44,1,0.123,none,2024-02-06\r\n28636,2351,EMEA,sports,retail,106.81,5,0.158,none,2024-01-13\r\n28637,1627,LATAM,fashion,retail,23.16,6,0.165,loyalty,2024-05-27\r\n28638,1525,APAC,sports,online,26.50,6,0.066,none,2024-06-15\r\n28639,1324,LATAM,toys,retail,112.53,2,0.226,none,2024-01-03\r\n28640,2320,LATAM,home,online,84.90,5,0.011,none,2024-10-22\r\n28641,2042,LATAM,toys,retail,50.75,4,0.048,coupon,2024-08-24\r\n28642,1262,APAC,electronics,retail,61.28,6,0.212,none,2024-05-17\r\n28643,1105,AMER,sports,online,52.89,2,0.230,none,2024-08-15\r\n28644,2050,APAC,electronics,mobile,73.44,5,0.156,none,2024-06-09\r\n28645,1314,AMER,electronics,retail,66.53,3,0.146,none,2024-07-26\r\n28646,2261,EMEA,electronics,online,68.96,2,0.203,coupon,2024-01-28\r\n28647,2349,APAC,electronics,online,117.96,7,0.126,none,2024-05-09\r\n28648,1181,LATAM,toys,partner,50.75,6,0.172,none,2024-03-26\r\n28649,2157,AMER,grocery,retail,47.21,2,0.168,none,2024-06-14\r\n28650,2179,LATAM,grocery,online,85.28,5,0.076,none,2024-11-02\r\n28651,1487,AMER,home,partner,86.80,4,0.070,none,2024-03-16\r\n28652,1506,EMEA,grocery,mobile,33.24,4,0.076,none,2024-01-18\r\n28653,1040,LATAM,home,online,28.38,1,0.232,coupon,2024-05-12\r\n28654,1894,APAC,grocery,retail,77.62,7,0.208,none,2024-08-09\r\n28655,2303,EMEA,electronics,retail,63.35,6,0.228,none,2024-10-04\r\n28656,1176,EMEA,home,online,69.28,4,0.214,coupon,2024-10-02\r\n28657,1811,APAC,grocery,retail,71.90,7,0.109,loyalty,2024-12-04\r\n28658,1268,EMEA,electronics,online,35.50,8,0.079,bundle,2024-05-13\r\n28659,1715,AMER,toys,online,44.02,7,0.237,coupon,2024-12-24\r\n28660,1013,LATAM,electronics,online,132.56,2,0.078,none,2024-07-06\r\n28661,1667,AMER,fashion,online,29.94,2,0.186,bundle,2024-10-22\r\n28662,1252,APAC,sports,mobile,76.56,8,0.101,loyalty,2024-05-20\r\n28663,1064,AMER,electronics,
retail,53.46,2,0.033,none,2024-03-06\r\n28664,1882,AMER,fashion,partner,37.53,5,0.063,none,2024-04-10\r\n28665,2380,AMER,home,online,65.25,5,0.026,none,2024-05-21\r\n28666,2176,AMER,grocery,online,95.36,3,0.224,none,2024-02-24\r\n28667,2144,EMEA,grocery,mobile,60.28,8,0.071,loyalty,2024-04-02\r\n28668,1777,AMER,fashion,retail,58.14,8,0.244,coupon,2024-06-04\r\n28669,2034,LATAM,fashion,online,56.13,5,0.203,bundle,2024-03-26\r\n28670,1905,APAC,toys,retail,47.35,1,0.037,none,2024-05-12\r\n28671,2047,AMER,electronics,partner,76.22,5,0.080,coupon,2024-08-12\r\n28672,1297,AMER,sports,online,49.80,6,0.237,none,2024-08-15\r\n28673,1855,APAC,electronics,retail,56.88,2,0.146,none,2024-07-12\r\n28674,1618,EMEA,sports,online,89.47,4,0.024,loyalty,2024-03-26\r\n28675,1687,APAC,grocery,retail,24.41,1,0.165,coupon,2024-04-17\r\n28676,2192,APAC,toys,retail,103.86,8,0.157,none,2024-11-20\r\n28677,2017,EMEA,electronics,mobile,50.34,2,0.237,none,2024-04-22\r\n28678,1214,EMEA,grocery,online,46.68,4,0.186,loyalty,2024-12-23\r\n28679,1244,LATAM,grocery,retail,88.28,4,0.180,none,2024-11-11\r\n28680,1971,EMEA,grocery,online,89.26,8,0.225,none,2024-04-21\r\n28681,1392,AMER,grocery,retail,80.08,1,0.165,coupon,2024-08-27\r\n28682,2291,EMEA,electronics,mobile,64.79,8,0.196,none,2024-07-09\r\n28683,1440,AMER,electronics,online,71.42,2,0.075,none,2024-07-06\r\n28684,1602,EMEA,electronics,mobile,220.08,7,0.190,loyalty,2024-10-25\r\n28685,1821,LATAM,grocery,retail,57.98,4,0.142,none,2024-03-11\r\n28686,1208,AMER,grocery,online,71.22,3,0.177,none,2024-11-13\r\n28687,1101,AMER,sports,partner,67.65,5,0.096,loyalty,2024-07-10\r\n28688,1458,APAC,home,retail,32.30,5,0.094,bundle,2024-12-08\r\n28689,1109,APAC,grocery,online,282.23,2,0.044,none,2024-09-19\r\n28690,1246,EMEA,home,retail,73.16,3,0.002,none,2024-01-20\r\n28691,2442,APAC,electronics,retail,39.23,2,0.039,none,2024-02-11\r\n28692,2211,APAC,fashion,retail,37.34,2,0.021,bundle,2024-12-22\r\n28693,2234,LATAM,electronics,retail,106.98,1,0.062,none,
2024-11-22\r\n28694,2313,LATAM,grocery,partner,112.91,2,0.038,none,2024-01-08\r\n28695,1284,APAC,home,online,16.48,1,0.074,none,2024-01-17\r\n28696,1945,AMER,electronics,retail,20.02,2,0.134,coupon,2024-01-16\r\n28697,2023,LATAM,grocery,retail,104.17,5,0.180,coupon,2024-12-26\r\n28698,1597,APAC,fashion,online,49.26,2,0.149,loyalty,2024-12-19\r\n28699,2041,LATAM,home,mobile,54.96,5,0.114,bundle,2024-04-21\r\n28700,1887,LATAM,electronics,retail,76.21,7,0.128,none,2024-08-27\r\n28701,1590,APAC,grocery,online,18.29,4,0.017,none,2024-06-12\r\n28702,1784,EMEA,grocery,retail,62.62,7,0.222,none,2024-05-21\r\n28703,2421,AMER,grocery,retail,60.25,4,0.127,coupon,2024-11-13\r\n28704,1635,APAC,home,retail,76.05,6,0.142,coupon,2024-02-01\r\n28705,2420,EMEA,home,retail,246.11,4,0.175,bundle,2024-08-20\r\n28706,1010,EMEA,toys,online,130.08,1,0.179,bundle,2024-11-01\r\n28707,1328,APAC,home,retail,36.99,3,0.128,none,2024-12-06\r\n28708,2060,LATAM,grocery,mobile,31.88,4,0.001,coupon,2024-11-24\r\n28709,2008,APAC,fashion,retail,35.83,2,0.024,loyalty,2024-07-15\r\n28710,2197,LATAM,electronics,retail,83.08,5,0.112,coupon,2024-06-26\r\n28711,1182,EMEA,grocery,online,71.03,3,0.047,none,2024-01-27\r\n28712,2277,EMEA,electronics,partner,46.43,2,0.108,none,2024-01-13\r\n28713,1440,AMER,home,online,111.73,8,0.022,none,2024-01-28\r\n28714,1449,EMEA,grocery,mobile,48.16,7,0.113,none,2024-06-24\r\n28715,1809,APAC,grocery,mobile,37.70,8,0.222,coupon,2024-03-02\r\n28716,2081,APAC,sports,online,162.00,1,0.135,none,2024-03-03\r\n28717,1990,EMEA,grocery,online,26.20,5,0.085,bundle,2024-08-03\r\n28718,2085,AMER,grocery,partner,78.15,2,0.190,coupon,2024-02-09\r\n28719,2439,AMER,grocery,retail,64.74,6,0.159,none,2024-07-19\r\n28720,1715,AMER,home,online,63.66,6,0.041,loyalty,2024-01-26\r\n28721,1233,AMER,home,online,48.16,4,0.054,coupon,2024-03-24\r\n28722,1730,AMER,home,online,96.87,1,0.144,bundle,2024-09-26\r\n28723,1515,EMEA,sports,online,29.90,6,0.058,none,2024-08-24\r\n28724,1864,EMEA,grocery,retail
,45.77,3,0.145,bundle,2024-04-19\r\n28725,1080,LATAM,electronics,online,78.66,8,0.217,none,2024-02-16\r\n28726,1370,APAC,grocery,retail,48.59,6,0.173,none,2024-08-01\r\n28727,1299,LATAM,home,retail,75.44,3,0.092,none,2024-02-22\r\n28728,1889,APAC,sports,mobile,48.05,6,0.045,none,2024-04-20\r\n28729,1302,LATAM,electronics,online,124.57,1,0.035,none,2024-10-21\r\n28730,2239,EMEA,sports,retail,48.79,8,0.225,coupon,2024-02-13\r\n28731,1349,APAC,electronics,partner,59.33,1,0.178,none,2024-01-13\r\n28732,1962,APAC,grocery,mobile,90.60,2,0.244,bundle,2024-09-18\r\n28733,1764,LATAM,fashion,online,70.89,6,0.200,none,2024-04-10\r\n28734,1858,LATAM,electronics,retail,45.22,6,0.040,none,2024-09-28\r\n28735,1583,AMER,sports,retail,67.84,3,0.191,none,2024-05-06\r\n28736,1087,AMER,fashion,retail,63.12,4,0.177,loyalty,2024-09-28\r\n28737,1954,APAC,sports,retail,42.55,6,0.059,none,2024-01-15\r\n28738,1321,EMEA,grocery,online,168.11,3,0.183,coupon,2024-08-01\r\n28739,1479,AMER,grocery,online,247.48,6,0.119,none,2024-12-20\r\n28740,2332,APAC,toys,retail,81.14,5,0.036,none,2024-01-04\r\n28741,1942,APAC,electronics,retail,22.72,2,0.159,none,2024-05-26\r\n28742,1274,LATAM,grocery,online,99.09,2,0.071,bundle,2024-01-26\r\n28743,2085,AMER,toys,retail,41.79,7,0.179,bundle,2024-06-28\r\n28744,1871,APAC,fashion,mobile,30.38,7,0.073,none,2024-06-23\r\n28745,1654,EMEA,sports,online,61.16,2,0.228,none,2024-05-12\r\n28746,1082,EMEA,sports,online,60.70,3,0.084,coupon,2024-04-03\r\n28747,1128,LATAM,toys,retail,154.07,2,0.003,coupon,2024-11-19\r\n28748,2393,LATAM,electronics,online,35.30,6,0.088,none,2024-07-04\r\n28749,1209,AMER,fashion,mobile,59.58,6,0.024,none,2024-04-03\r\n28750,1316,APAC,sports,online,89.50,7,0.142,bundle,2024-01-25\r\n28751,2231,LATAM,grocery,retail,82.81,1,0.056,none,2024-10-24\r\n28752,1086,AMER,toys,retail,71.10,1,0.221,none,2024-01-19\r\n28753,1969,LATAM,sports,retail,108.33,6,0.026,coupon,2024-01-27\r\n28754,1919,EMEA,fashion,retail,25.13,4,0.181,coupon,2024-11-22\r\n2875
5,2354,LATAM,electronics,mobile,131.00,8,0.051,none,2024-06-01\r\n28756,1356,LATAM,grocery,online,30.84,7,0.041,none,2024-04-20\r\n28757,2288,AMER,home,online,33.04,1,0.107,bundle,2024-05-19\r\n28758,1749,LATAM,home,retail,40.25,8,0.067,none,2024-02-01\r\n28759,2386,EMEA,home,online,83.19,3,0.030,loyalty,2024-10-16\r\n28760,2314,EMEA,electronics,mobile,105.62,7,0.126,loyalty,2024-08-10\r\n28761,1977,APAC,home,mobile,55.95,4,0.214,bundle,2024-08-13\r\n28762,1810,LATAM,fashion,retail,37.24,2,0.202,bundle,2024-12-11\r\n28763,1104,APAC,grocery,retail,46.97,3,0.185,loyalty,2024-12-23\r\n28764,1468,AMER,grocery,retail,76.65,4,0.053,none,2024-03-08\r\n28765,2018,AMER,sports,online,67.58,4,0.169,bundle,2024-08-03\r\n28766,2200,LATAM,fashion,retail,66.75,5,0.041,loyalty,2024-01-25\r\n28767,1747,EMEA,grocery,online,58.80,1,0.022,none,2024-05-03\r\n28768,1156,APAC,electronics,retail,42.30,4,0.033,loyalty,2024-05-03\r\n28769,2442,APAC,grocery,retail,38.47,5,0.215,coupon,2024-05-18\r\n28770,2327,EMEA,sports,online,36.61,5,0.152,none,2024-02-24\r\n28771,1554,AMER,grocery,online,47.21,6,0.139,none,2024-07-10\r\n28772,1300,EMEA,home,online,55.61,8,0.228,bundle,2024-09-25\r\n28773,1370,APAC,electronics,online,39.68,5,0.036,loyalty,2024-05-20\r\n28774,2164,AMER,electronics,retail,27.87,2,0.099,coupon,2024-12-02\r\n28775,2209,AMER,fashion,online,77.16,7,0.157,none,2024-04-04\r\n28776,1701,LATAM,electronics,online,79.93,6,0.116,none,2024-08-14\r\n28777,2157,AMER,sports,online,63.85,8,0.180,none,2024-11-03\r\n28778,2148,EMEA,electronics,retail,191.82,2,0.056,coupon,2024-05-21\r\n28779,2235,AMER,fashion,online,30.71,3,0.020,loyalty,2024-10-28\r\n28780,1880,LATAM,grocery,online,62.34,8,0.244,none,2024-11-10\r\n28781,1572,LATAM,grocery,online,25.61,1,0.111,none,2024-09-18\r\n28782,1166,AMER,sports,partner,76.68,6,0.119,none,2024-11-05\r\n28783,1221,LATAM,grocery,retail,73.89,1,0.158,none,2024-03-25\r\n28784,1158,LATAM,toys,retail,69.79,1,0.044,loyalty,2024-12-12\r\n28785,1801,LATAM,fashion
,retail,163.43,6,0.117,none,2024-09-03\r\n28786,2431,LATAM,grocery,retail,61.23,1,0.250,none,2024-01-06\r\n28787,1844,APAC,grocery,online,93.39,2,0.013,none,2024-11-07\r\n28788,1964,EMEA,grocery,retail,72.34,8,0.019,none,2024-06-02\r\n28789,1662,LATAM,electronics,partner,86.55,5,0.141,bundle,2024-09-19\r\n28790,1671,APAC,sports,online,58.70,2,0.236,none,2024-09-19\r\n28791,2372,AMER,electronics,retail,34.14,8,0.172,coupon,2024-10-03\r\n28792,1013,LATAM,fashion,retail,93.51,3,0.216,none,2024-08-12\r\n28793,1154,LATAM,home,retail,83.24,6,0.061,coupon,2024-10-19\r\n28794,1957,AMER,sports,retail,25.16,2,0.238,bundle,2024-12-20\r\n28795,1922,EMEA,electronics,online,158.04,3,0.203,bundle,2024-10-19\r\n28796,2376,LATAM,electronics,retail,28.08,4,0.143,coupon,2024-09-18\r\n28797,1941,AMER,grocery,online,36.72,2,0.120,none,2024-07-20\r\n28798,1536,LATAM,electronics,retail,21.37,4,0.013,coupon,2024-12-22\r\n28799,1741,AMER,grocery,retail,58.93,6,0.075,none,2024-06-05\r\n28800,2356,LATAM,grocery,online,45.23,3,0.009,coupon,2024-03-21\r\n28801,1609,LATAM,fashion,retail,30.28,6,0.229,coupon,2024-02-23\r\n28802,2401,LATAM,grocery,mobile,87.83,8,0.183,coupon,2024-09-02\r\n28803,1438,APAC,toys,online,37.42,7,0.239,none,2024-04-10\r\n28804,1635,APAC,fashion,online,24.52,5,0.036,none,2024-03-13\r\n28805,1219,LATAM,electronics,online,103.43,1,0.007,coupon,2024-03-01\r\n28806,1241,APAC,home,retail,78.32,6,0.084,none,2024-07-25\r\n28807,1456,APAC,fashion,online,24.89,5,0.086,coupon,2024-09-18\r\n28808,2013,APAC,home,mobile,144.73,4,0.189,loyalty,2024-10-27\r\n28809,1091,EMEA,fashion,online,29.19,8,0.248,bundle,2024-02-17\r\n28810,1586,LATAM,grocery,online,42.83,3,0.027,none,2024-02-04\r\n28811,2211,APAC,grocery,online,55.32,1,0.068,none,2024-04-12\r\n28812,1006,AMER,grocery,online,59.49,6,0.041,none,2024-09-09\r\n28813,1151,APAC,sports,online,38.97,1,0.039,none,2024-01-02\r\n28814,1601,APAC,grocery,online,44.36,1,0.053,none,2024-05-06\r\n28815,2261,EMEA,sports,retail,120.67,5,0.238,loya
lty,2024-01-09\r\n28816,1983,LATAM,grocery,online,195.40,7,0.146,bundle,2024-02-15\r\n28817,1567,AMER,home,online,82.46,6,0.085,coupon,2024-04-08\r\n28818,1491,EMEA,electronics,retail,44.80,3,0.224,none,2024-04-24\r\n28819,1803,LATAM,electronics,mobile,33.69,3,0.184,none,2024-04-27\r\n28820,1721,EMEA,electronics,online,31.55,5,0.177,none,2024-01-07\r\n28821,1372,APAC,electronics,online,81.43,1,0.101,loyalty,2024-09-20\r\n28822,1762,LATAM,grocery,retail,49.19,8,0.150,none,2024-03-16\r\n28823,1971,EMEA,electronics,online,72.97,2,0.065,none,2024-04-09\r\n28824,1669,AMER,electronics,partner,49.06,4,0.191,bundle,2024-01-13\r\n28825,2394,EMEA,home,retail,48.26,5,0.218,loyalty,2024-12-14\r\n28826,1730,AMER,electronics,retail,22.17,8,0.177,none,2024-05-23\r\n28827,1664,LATAM,electronics,retail,41.65,8,0.213,none,2024-09-05\r\n28828,1283,APAC,grocery,online,76.61,3,0.148,loyalty,2024-08-03\r\n28829,1163,AMER,grocery,online,46.44,3,0.119,bundle,2024-04-04\r\n28830,1173,LATAM,grocery,online,71.65,2,0.116,coupon,2024-07-01\r\n28831,1376,EMEA,toys,online,54.17,8,0.004,none,2024-08-11\r\n28832,2087,LATAM,grocery,mobile,33.54,2,0.212,none,2024-02-22\r\n28833,1562,AMER,fashion,mobile,30.75,5,0.180,bundle,2024-03-12\r\n28834,1275,EMEA,electronics,retail,66.48,1,0.133,none,2024-04-10\r\n28835,1514,LATAM,electronics,retail,97.00,8,0.065,none,2024-03-17\r\n28836,1749,LATAM,grocery,retail,118.00,3,0.085,none,2024-11-03\r\n28837,1388,AMER,toys,retail,71.35,6,0.107,bundle,2024-08-07\r\n28838,1204,AMER,fashion,online,29.61,7,0.084,none,2024-02-14\r\n28839,1702,AMER,grocery,online,74.39,2,0.196,bundle,2024-06-02\r\n28840,1547,AMER,grocery,online,193.13,6,0.094,coupon,2024-12-21\r\n28841,1854,AMER,electronics,online,96.45,8,0.195,none,2024-02-10\r\n28842,2196,AMER,toys,retail,66.49,6,0.225,none,2024-02-06\r\n28843,1703,AMER,grocery,partner,41.59,7,0.072,loyalty,2024-09-07\r\n28844,1756,EMEA,grocery,retail,56.96,8,0.063,coupon,2024-02-22\r\n28845,2259,AMER,grocery,online,101.52,6,0.207,none,2
024-05-20\r\n28846,1650,LATAM,grocery,online,36.71,7,0.207,none,2024-10-02\r\n28847,1548,EMEA,fashion,online,62.15,1,0.006,none,2024-04-14\r\n28848,1580,AMER,grocery,mobile,68.36,5,0.170,none,2024-10-25\r\n28849,1098,APAC,home,mobile,52.77,5,0.102,none,2024-08-20\r\n28850,1746,LATAM,electronics,online,128.13,1,0.134,loyalty,2024-12-21\r\n28851,2141,AMER,grocery,retail,244.48,5,0.135,bundle,2024-08-18\r\n28852,1002,EMEA,home,retail,29.92,7,0.109,loyalty,2024-03-09\r\n28853,1152,LATAM,home,online,52.60,6,0.235,bundle,2024-04-25\r\n28854,1369,AMER,home,mobile,61.34,8,0.079,none,2024-04-17\r\n28855,1340,LATAM,fashion,online,54.69,8,0.049,none,2024-07-16\r\n28856,1135,APAC,toys,mobile,83.19,4,0.067,none,2024-09-17\r\n28857,1483,EMEA,electronics,online,47.63,3,0.122,bundle,2024-03-15\r\n28858,1474,LATAM,sports,online,75.50,5,0.143,coupon,2024-09-18\r\n28859,2327,EMEA,grocery,retail,23.43,1,0.106,bundle,2024-02-20\r\n28860,1462,LATAM,grocery,online,18.96,7,0.209,none,2024-11-16\r\n28861,2024,AMER,grocery,online,43.24,7,0.140,coupon,2024-12-13\r\n28862,1705,AMER,grocery,mobile,36.17,3,0.118,none,2024-02-03\r\n28863,2269,EMEA,toys,retail,62.88,7,0.242,none,2024-03-10\r\n28864,1250,APAC,fashion,retail,117.71,1,0.150,none,2024-02-28\r\n28865,2093,LATAM,electronics,retail,75.84,3,0.091,bundle,2024-02-18\r\n28866,1062,EMEA,electronics,online,78.23,1,0.035,none,2024-06-13\r\n28867,1287,AMER,sports,retail,41.99,1,0.016,coupon,2024-05-27\r\n28868,1998,APAC,sports,mobile,27.92,6,0.238,none,2024-07-23\r\n28869,2066,APAC,toys,retail,27.18,6,0.103,none,2024-06-11\r\n28870,2248,LATAM,home,online,66.40,4,0.165,none,2024-05-16\r\n28871,2290,LATAM,sports,online,55.52,2,0.246,bundle,2024-06-07\r\n28872,1996,APAC,sports,online,15.25,7,0.218,none,2024-11-18\r\n28873,2060,LATAM,fashion,retail,82.75,4,0.183,coupon,2024-02-17\r\n28874,1063,AMER,fashion,online,129.00,8,0.097,none,2024-09-01\r\n28875,1003,APAC,home,online,73.05,8,0.182,none,2024-06-05\r\n28876,2001,EMEA,electronics,online,58.10,6,
0.081,none,2024-11-09\r\n28877,1629,LATAM,grocery,mobile,52.87,1,0.232,coupon,2024-04-14\r\n28878,2204,AMER,grocery,partner,39.76,3,0.095,coupon,2024-01-11\r\n28879,2081,APAC,fashion,retail,16.95,2,0.041,bundle,2024-01-10\r\n28880,1728,AMER,sports,online,28.91,3,0.036,none,2024-12-28\r\n28881,1657,LATAM,grocery,online,38.72,1,0.210,none,2024-01-14\r\n28882,2183,EMEA,home,online,31.25,5,0.086,bundle,2024-10-01\r\n28883,2361,EMEA,sports,online,36.38,5,0.126,coupon,2024-10-08\r\n28884,1289,LATAM,electronics,retail,35.21,3,0.125,none,2024-12-21\r\n28885,1202,APAC,grocery,partner,23.06,4,0.129,none,2024-06-23\r\n28886,2147,LATAM,sports,retail,66.10,3,0.020,none,2024-07-06\r\n28887,2117,EMEA,fashion,mobile,35.19,3,0.024,none,2024-06-21\r\n28888,1820,AMER,home,partner,37.36,7,0.043,none,2024-06-22\r\n28889,2182,AMER,electronics,online,33.54,3,0.167,none,2024-02-11\r\n28890,1030,EMEA,sports,retail,86.42,1,0.019,none,2024-06-21\r\n28891,1531,EMEA,sports,retail,56.68,6,0.196,none,2024-06-13\r\n28892,1709,EMEA,grocery,retail,50.73,2,0.046,none,2024-06-19\r\n28893,1307,AMER,electronics,online,42.68,4,0.175,loyalty,2024-10-11\r\n28894,2026,LATAM,grocery,mobile,86.92,2,0.077,none,2024-04-21\r\n28895,1897,AMER,fashion,online,18.22,1,0.104,none,2024-02-03\r\n28896,1794,AMER,home,online,31.70,7,0.075,none,2024-10-14\r\n28897,2029,APAC,toys,retail,54.20,3,0.000,loyalty,2024-10-25\r\n28898,2087,LATAM,grocery,online,24.34,1,0.033,loyalty,2024-07-11\r\n28899,1037,EMEA,sports,online,58.93,6,0.007,none,2024-11-02\r\n28900,1499,EMEA,electronics,mobile,48.92,4,0.079,none,2024-01-27\r\n28901,1314,AMER,grocery,online,50.99,2,0.100,loyalty,2024-05-06\r\n28902,1055,AMER,grocery,retail,128.39,8,0.103,loyalty,2024-07-07\r\n28903,1423,EMEA,fashion,mobile,77.07,5,0.012,none,2024-09-08\r\n28904,1989,LATAM,electronics,retail,37.85,8,0.148,loyalty,2024-08-26\r\n28905,2231,LATAM,electronics,retail,34.36,3,0.032,coupon,2024-10-14\r\n28906,1582,AMER,electronics,online,40.84,8,0.191,none,2024-02-18\r\n289
07,1603,EMEA,home,partner,31.95,8,0.092,coupon,2024-01-19\r\n28908,1877,LATAM,grocery,partner,54.01,1,0.125,coupon,2024-03-03\r\n28909,2192,APAC,home,retail,86.27,5,0.169,none,2024-08-27\r\n28910,1923,LATAM,fashion,online,120.85,3,0.208,none,2024-08-20\r\n28911,1071,AMER,toys,online,86.70,4,0.005,loyalty,2024-05-26\r\n28912,1194,APAC,grocery,online,139.41,7,0.002,none,2024-02-21\r\n28913,2075,LATAM,home,online,97.17,5,0.180,loyalty,2024-02-16\r\n28914,1582,AMER,grocery,retail,277.73,4,0.097,none,2024-10-10\r\n28915,1766,AMER,home,online,38.19,3,0.245,coupon,2024-04-08\r\n28916,1208,AMER,fashion,online,66.90,2,0.081,loyalty,2024-04-19\r\n28917,2098,AMER,sports,retail,92.94,7,0.156,none,2024-05-23\r\n28918,1022,APAC,sports,online,54.92,1,0.180,none,2024-07-09\r\n28919,1019,APAC,grocery,online,25.71,7,0.076,bundle,2024-07-22\r\n28920,1011,APAC,home,online,58.18,7,0.038,none,2024-08-16\r\n28921,2022,LATAM,electronics,online,74.25,1,0.155,none,2024-01-27\r\n28922,1648,APAC,electronics,mobile,32.73,6,0.092,coupon,2024-10-14\r\n28923,2483,LATAM,toys,mobile,53.43,1,0.115,none,2024-01-25\r\n28924,2177,AMER,grocery,retail,81.41,1,0.014,none,2024-07-17\r\n28925,2057,APAC,electronics,mobile,99.73,2,0.164,bundle,2024-09-02\r\n28926,2086,APAC,fashion,retail,56.55,3,0.024,loyalty,2024-08-04\r\n28927,1233,AMER,home,retail,52.18,1,0.056,none,2024-05-17\r\n28928,1595,AMER,electronics,online,55.01,6,0.020,none,2024-04-13\r\n28929,1448,EMEA,home,online,29.46,3,0.060,bundle,2024-07-28\r\n28930,1985,AMER,home,online,45.67,7,0.108,bundle,2024-12-03\r\n28931,1904,APAC,fashion,mobile,45.80,4,0.023,coupon,2024-06-19\r\n28932,1954,APAC,electronics,retail,34.69,5,0.146,none,2024-01-11\r\n28933,2418,AMER,sports,online,40.23,8,0.156,coupon,2024-04-05\r\n28934,2237,EMEA,electronics,online,48.74,6,0.131,coupon,2024-02-11\r\n28935,2059,AMER,electronics,retail,40.22,2,0.082,none,2024-02-16\r\n28936,1841,AMER,grocery,retail,126.64,5,0.173,loyalty,2024-12-25\r\n28937,1394,LATAM,sports,retail,75.68,6,0
.155,bundle,2024-05-15\r\n28938,2396,AMER,home,retail,50.81,8,0.237,coupon,2024-12-18\r\n28939,2133,AMER,home,retail,75.63,1,0.213,coupon,2024-09-16\r\n28940,1514,LATAM,home,retail,47.76,6,0.047,loyalty,2024-10-13\r\n28941,1618,EMEA,home,online,47.29,6,0.056,coupon,2024-07-12\r\n28942,2257,AMER,sports,retail,70.63,2,0.047,coupon,2024-06-01\r\n28943,1348,AMER,grocery,retail,64.25,3,0.076,none,2024-06-17\r\n28944,2418,AMER,grocery,online,43.52,8,0.064,none,2024-04-09\r\n28945,1360,APAC,electronics,retail,137.72,5,0.138,none,2024-04-23\r\n28946,1691,LATAM,grocery,retail,93.45,6,0.191,none,2024-06-25\r\n28947,1597,APAC,electronics,online,60.67,2,0.087,bundle,2024-08-22\r\n28948,1961,EMEA,electronics,mobile,21.14,6,0.010,none,2024-06-24\r\n28949,1266,AMER,grocery,retail,61.33,5,0.220,none,2024-12-13\r\n28950,1242,LATAM,home,retail,64.31,4,0.005,coupon,2024-02-08\r\n28951,2013,APAC,electronics,online,66.84,2,0.055,none,2024-03-20\r\n28952,1082,EMEA,grocery,retail,54.60,8,0.018,coupon,2024-06-26\r\n28953,1812,EMEA,home,online,15.49,7,0.243,none,2024-04-08\r\n28954,1557,LATAM,home,mobile,35.35,5,0.034,none,2024-11-11\r\n28955,1518,AMER,fashion,retail,58.87,7,0.002,none,2024-11-28\r\n28956,1711,APAC,sports,retail,69.39,5,0.051,coupon,2024-02-10\r\n28957,1958,APAC,electronics,online,27.59,1,0.020,none,2024-02-03\r\n28958,1159,LATAM,toys,retail,27.32,2,0.092,none,2024-12-16\r\n28959,1129,LATAM,fashion,online,42.17,8,0.078,none,2024-06-12\r\n28960,1058,LATAM,electronics,retail,62.18,1,0.167,none,2024-12-12\r\n28961,1556,AMER,home,mobile,109.13,1,0.116,none,2024-04-10\r\n28962,2252,EMEA,fashion,mobile,88.76,1,0.177,coupon,2024-01-08\r\n28963,1749,LATAM,sports,retail,163.36,4,0.070,none,2024-12-05\r\n28964,2067,LATAM,electronics,mobile,57.14,8,0.225,none,2024-09-05\r\n28965,1921,LATAM,grocery,retail,66.83,8,0.002,bundle,2024-10-22\r\n28966,2480,APAC,sports,retail,52.39,3,0.093,none,2024-09-21\r\n28967,2020,AMER,home,retail,73.08,2,0.103,none,2024-01-08\r\n28968,1153,AMER,grocery,
online,61.53,2,0.066,bundle,2024-03-12\r\n28969,1222,AMER,fashion,retail,139.94,5,0.087,coupon,2024-09-06\r\n28970,2211,APAC,grocery,partner,23.85,6,0.185,coupon,2024-07-14\r\n28971,1223,LATAM,grocery,online,47.62,8,0.248,none,2024-09-26\r\n28972,2095,EMEA,toys,online,44.70,1,0.058,loyalty,2024-01-26\r\n28973,2030,EMEA,grocery,retail,46.20,7,0.220,none,2024-10-11\r\n28974,1299,LATAM,home,retail,124.63,1,0.038,loyalty,2024-03-12\r\n28975,1604,EMEA,sports,mobile,24.38,3,0.085,none,2024-05-25\r\n28976,1602,EMEA,sports,retail,25.48,4,0.134,none,2024-05-01\r\n28977,1920,LATAM,sports,retail,82.00,3,0.019,none,2024-10-01\r\n28978,1477,APAC,electronics,online,63.24,1,0.178,none,2024-01-09\r\n28979,1096,EMEA,grocery,mobile,53.31,8,0.033,none,2024-02-17\r\n28980,1399,AMER,grocery,retail,36.40,6,0.216,none,2024-03-20\r\n28981,1820,AMER,home,retail,98.22,6,0.081,none,2024-10-15\r\n28982,1898,EMEA,home,retail,69.79,2,0.159,none,2024-10-16\r\n28983,1615,LATAM,electronics,mobile,42.81,7,0.140,none,2024-03-04\r\n28984,2398,EMEA,fashion,online,40.60,1,0.029,none,2024-11-25\r\n28985,1447,LATAM,home,online,158.69,4,0.115,bundle,2024-02-24\r\n28986,2257,AMER,home,mobile,121.31,7,0.118,none,2024-09-12\r\n28987,1276,AMER,fashion,partner,60.81,8,0.041,none,2024-12-13\r\n28988,1080,LATAM,electronics,retail,78.88,2,0.116,none,2024-12-04\r\n28989,1669,AMER,sports,online,124.34,8,0.006,bundle,2024-02-22\r\n28990,1166,AMER,home,partner,45.37,5,0.180,bundle,2024-06-16\r\n28991,2003,LATAM,electronics,mobile,46.87,7,0.006,none,2024-08-12\r\n28992,1726,EMEA,grocery,retail,65.77,3,0.164,none,2024-01-25\r\n28993,1530,APAC,grocery,online,41.93,3,0.195,bundle,2024-02-16\r\n28994,1227,AMER,electronics,retail,70.96,7,0.170,none,2024-08-10\r\n28995,2053,AMER,toys,retail,84.31,7,0.072,none,2024-07-01\r\n28996,2084,LATAM,home,online,77.08,6,0.206,none,2024-04-10\r\n28997,1530,APAC,grocery,online,29.92,1,0.207,none,2024-12-28\r\n28998,2206,AMER,fashion,retail,54.01,2,0.238,loyalty,2024-05-17\r\n28999,1642,E
MEA,sports,retail,57.19,6,0.080,none,2024-03-12\r\n29000,2085,AMER,toys,retail,45.53,5,0.003,coupon,2024-05-25\r\n29001,1497,EMEA,home,retail,42.49,6,0.233,coupon,2024-09-25\r\n29002,1778,LATAM,grocery,online,79.42,6,0.180,none,2024-12-11\r\n29003,1701,LATAM,home,retail,46.99,6,0.204,loyalty,2024-09-24\r\n29004,1953,EMEA,electronics,mobile,147.33,8,0.169,coupon,2024-02-18\r\n29005,1323,EMEA,grocery,online,49.28,6,0.189,coupon,2024-11-14\r\n29006,1074,LATAM,sports,partner,109.62,7,0.064,coupon,2024-12-07\r\n29007,2242,AMER,fashion,online,37.12,8,0.178,coupon,2024-09-16\r\n29008,1097,EMEA,electronics,retail,49.40,5,0.126,none,2024-11-17\r\n29009,1703,AMER,grocery,online,67.56,5,0.170,none,2024-10-28\r\n29010,2198,EMEA,electronics,retail,71.09,1,0.122,bundle,2024-05-14\r\n29011,1525,APAC,grocery,mobile,76.76,8,0.222,none,2024-12-20\r\n29012,2212,EMEA,home,retail,26.31,4,0.121,none,2024-06-03\r\n29013,1400,EMEA,grocery,online,68.91,3,0.104,none,2024-11-12\r\n29014,1293,AMER,electronics,online,66.18,7,0.034,none,2024-08-27\r\n29015,1537,LATAM,electronics,online,22.51,1,0.174,none,2024-06-27\r\n29016,1414,APAC,grocery,retail,48.08,2,0.208,bundle,2024-11-20\r\n29017,2403,LATAM,grocery,mobile,97.18,6,0.209,none,2024-12-04\r\n29018,2251,APAC,fashion,retail,49.88,8,0.243,loyalty,2024-02-22\r\n29019,1202,APAC,toys,retail,30.63,3,0.159,none,2024-08-22\r\n29020,2297,EMEA,home,retail,48.15,2,0.166,none,2024-05-10\r\n29021,2023,LATAM,electronics,mobile,131.96,3,0.193,none,2024-04-11\r\n29022,2317,LATAM,home,online,42.73,7,0.105,bundle,2024-02-24\r\n29023,2014,EMEA,home,online,189.83,3,0.221,coupon,2024-12-27\r\n29024,1285,EMEA,grocery,retail,53.79,7,0.160,bundle,2024-05-17\r\n29025,1933,EMEA,home,online,44.40,4,0.212,coupon,2024-09-27\r\n29026,1389,LATAM,electronics,mobile,26.63,4,0.228,loyalty,2024-06-06\r\n29027,1621,APAC,grocery,mobile,67.84,6,0.132,coupon,2024-10-19\r\n29028,1813,EMEA,electronics,online,61.14,6,0.056,none,2024-06-20\r\n29029,2470,EMEA,electronics,online,44.67,
7,0.110,coupon,2024-06-19\r\n29030,1648,APAC,fashion,retail,109.17,4,0.036,coupon,2024-05-17\r\n29031,1383,AMER,fashion,online,28.45,2,0.103,none,2024-01-09\r\n29032,2209,AMER,grocery,retail,49.46,4,0.200,none,2024-06-11\r\n29033,1518,AMER,sports,retail,22.28,1,0.182,none,2024-03-26\r\n29034,1577,AMER,home,online,34.90,3,0.223,coupon,2024-10-20\r\n29035,1219,LATAM,home,online,67.40,1,0.233,none,2024-07-01\r\n29036,1951,LATAM,fashion,online,44.31,8,0.179,none,2024-11-23\r\n29037,1769,LATAM,grocery,partner,52.27,8,0.021,none,2024-08-09\r\n29038,1223,LATAM,home,mobile,27.26,5,0.129,coupon,2024-09-15\r\n29039,2357,EMEA,grocery,retail,114.90,2,0.117,loyalty,2024-01-03\r\n29040,1096,EMEA,home,online,75.06,1,0.140,none,2024-03-28\r\n29041,1585,AMER,grocery,mobile,42.41,4,0.228,none,2024-03-13\r\n29042,1937,APAC,grocery,mobile,19.75,3,0.003,coupon,2024-10-21\r\n29043,2201,AMER,sports,retail,66.60,2,0.006,none,2024-04-06\r\n29044,1097,EMEA,grocery,retail,18.45,3,0.085,coupon,2024-07-24\r\n29045,1111,APAC,toys,mobile,17.18,7,0.161,none,2024-06-24\r\n29046,1396,EMEA,grocery,mobile,87.10,4,0.083,none,2024-01-09\r\n29047,1143,LATAM,home,retail,81.72,1,0.078,loyalty,2024-08-15\r\n29048,2265,APAC,toys,online,52.13,2,0.142,none,2024-07-17\r\n29049,1230,EMEA,home,retail,70.08,7,0.247,coupon,2024-01-09\r\n29050,1931,APAC,sports,retail,85.60,1,0.075,none,2024-09-21\r\n29051,1084,AMER,sports,mobile,20.50,2,0.048,none,2024-01-01\r\n29052,2036,APAC,grocery,online,36.24,5,0.110,coupon,2024-01-05\r\n29053,1120,LATAM,toys,retail,38.26,4,0.145,none,2024-05-11\r\n29054,1843,EMEA,grocery,online,81.16,1,0.019,none,2024-02-18\r\n29055,1194,APAC,home,online,59.99,2,0.009,none,2024-04-18\r\n29056,2274,APAC,fashion,online,62.21,3,0.072,none,2024-11-21\r\n29057,1170,AMER,electronics,retail,64.75,7,0.192,loyalty,2024-02-23\r\n29058,1283,APAC,fashion,online,93.51,3,0.085,none,2024-10-09\r\n29059,1553,LATAM,grocery,mobile,66.19,7,0.062,none,2024-04-16\r\n29060,2391,EMEA,fashion,online,50.94,1,0.016,non
e,2024-08-04\r\n29061,2104,EMEA,grocery,mobile,47.24,4,0.051,loyalty,2024-06-16\r\n29062,2189,LATAM,home,online,107.58,7,0.184,none,2024-06-28\r\n29063,1561,EMEA,sports,online,24.85,5,0.025,bundle,2024-12-14\r\n29064,1318,LATAM,electronics,online,44.74,6,0.189,bundle,2024-01-17\r\n29065,1681,LATAM,toys,online,74.22,3,0.121,none,2024-10-07\r\n29066,1774,EMEA,grocery,online,60.46,8,0.104,coupon,2024-12-01\r\n29067,2353,AMER,toys,partner,68.80,6,0.187,bundle,2024-06-13\r\n29068,2092,AMER,electronics,online,70.04,5,0.078,loyalty,2024-04-21\r\n29069,1254,APAC,home,partner,76.97,3,0.070,none,2024-04-11\r\n29070,2414,EMEA,electronics,retail,65.03,5,0.081,none,2024-11-16\r\n29071,1454,APAC,toys,mobile,47.21,1,0.091,loyalty,2024-06-06\r\n29072,2032,AMER,fashion,retail,55.91,8,0.092,coupon,2024-11-19\r\n29073,1806,APAC,electronics,partner,66.99,2,0.192,none,2024-02-27\r\n29074,1915,LATAM,sports,retail,28.70,6,0.160,none,2024-07-20\r\n29075,2480,APAC,fashion,online,54.70,6,0.062,bundle,2024-06-23\r\n29076,2037,LATAM,fashion,online,68.18,1,0.147,none,2024-12-21\r\n29077,1133,EMEA,grocery,retail,52.77,8,0.009,none,2024-12-06\r\n29078,1139,EMEA,electronics,online,46.76,1,0.178,loyalty,2024-06-11\r\n29079,2209,AMER,electronics,online,52.98,8,0.041,none,2024-08-06\r\n29080,1521,LATAM,grocery,retail,29.01,3,0.101,none,2024-06-08\r\n29081,1998,APAC,electronics,online,57.18,6,0.199,none,2024-06-21\r\n29082,2316,EMEA,home,online,59.36,2,0.199,none,2024-11-15\r\n29083,1092,AMER,home,online,37.98,5,0.017,coupon,2024-08-05\r\n29084,1673,AMER,fashion,retail,52.37,2,0.138,bundle,2024-05-06\r\n29085,2270,APAC,electronics,mobile,50.69,5,0.088,loyalty,2024-10-19\r\n29086,1837,LATAM,electronics,retail,49.00,3,0.026,none,2024-08-13\r\n29087,2220,LATAM,grocery,retail,42.53,3,0.054,bundle,2024-10-04\r\n29088,1181,LATAM,electronics,mobile,27.67,8,0.118,coupon,2024-01-14\r\n29089,1741,AMER,toys,online,31.25,1,0.032,coupon,2024-05-11\r\n29090,1619,APAC,toys,retail,116.57,3,0.239,coupon,2024-12-22\r\n
29091,2043,EMEA,fashion,online,73.05,3,0.132,none,2024-11-07\r\n29092,1863,EMEA,fashion,online,33.48,2,0.152,coupon,2024-12-20\r\n29093,1695,LATAM,grocery,mobile,63.71,2,0.176,none,2024-11-06\r\n29094,1076,LATAM,fashion,online,51.69,3,0.232,none,2024-09-25\r\n29095,1932,EMEA,electronics,online,64.87,4,0.030,bundle,2024-04-25\r\n29096,1868,AMER,electronics,mobile,118.61,5,0.223,none,2024-05-19\r\n29097,2226,EMEA,grocery,online,31.11,2,0.012,none,2024-01-04\r\n29098,1520,APAC,toys,mobile,20.29,7,0.168,bundle,2024-05-13\r\n29099,1669,AMER,fashion,online,37.05,7,0.075,none,2024-02-17\r\n29100,1515,EMEA,fashion,online,31.81,8,0.086,none,2024-04-03\r\n29101,2230,LATAM,sports,online,115.41,4,0.129,none,2024-04-19\r\n29102,2389,LATAM,sports,online,57.19,5,0.198,bundle,2024-02-22\r\n29103,2351,EMEA,grocery,retail,102.96,1,0.160,none,2024-09-02\r\n29104,1208,AMER,grocery,online,36.97,6,0.096,none,2024-05-19\r\n29105,2447,AMER,grocery,online,38.01,8,0.227,coupon,2024-04-12\r\n29106,2208,AMER,grocery,retail,30.06,7,0.078,none,2024-02-10\r\n29107,1511,EMEA,grocery,online,62.45,2,0.059,none,2024-09-05\r\n29108,2070,APAC,toys,mobile,78.49,5,0.210,none,2024-04-01\r\n29109,1542,APAC,electronics,online,40.56,3,0.106,none,2024-06-16\r\n29110,1640,APAC,fashion,retail,98.53,1,0.049,none,2024-10-17\r\n29111,1674,LATAM,electronics,partner,39.82,2,0.040,loyalty,2024-02-20\r\n29112,1555,AMER,fashion,retail,27.84,5,0.071,loyalty,2024-03-09\r\n29113,1159,LATAM,electronics,online,22.20,1,0.055,none,2024-05-15\r\n29114,1800,APAC,home,retail,84.65,4,0.215,none,2024-07-22\r\n29115,2061,EMEA,electronics,retail,59.70,6,0.112,none,2024-10-27\r\n29116,2450,EMEA,home,partner,77.79,4,0.013,none,2024-08-01\r\n29117,1900,APAC,home,online,120.65,1,0.122,none,2024-06-28\r\n29118,1811,APAC,electronics,online,33.62,4,0.171,none,2024-12-23\r\n29119,2474,LATAM,sports,retail,159.71,1,0.151,none,2024-10-01\r\n29120,1325,APAC,sports,online,27.59,7,0.056,bundle,2024-05-06\r\n29121,2236,APAC,grocery,retail,16.47,4,
0.088,coupon,2024-09-25\r\n29122,1119,LATAM,electronics,online,29.08,2,0.021,coupon,2024-10-04\r\n29123,1070,EMEA,grocery,retail,51.65,3,0.173,bundle,2024-07-25\r\n29124,2147,LATAM,sports,online,69.89,8,0.091,none,2024-11-10\r\n29125,1613,EMEA,home,mobile,49.32,6,0.150,coupon,2024-01-28\r\n29126,1882,AMER,fashion,mobile,67.01,6,0.233,none,2024-05-27\r\n29127,1031,AMER,home,retail,104.50,1,0.180,none,2024-10-03\r\n29128,1417,APAC,home,online,63.14,2,0.119,loyalty,2024-07-26\r\n29129,2114,AMER,electronics,retail,28.60,8,0.243,loyalty,2024-10-18\r\n29130,1164,EMEA,grocery,retail,39.69,4,0.249,none,2024-12-01\r\n29131,1436,APAC,electronics,retail,37.87,3,0.205,loyalty,2024-04-12\r\n29132,1729,AMER,sports,retail,23.08,5,0.178,none,2024-10-05\r\n29133,2339,AMER,electronics,mobile,78.68,3,0.033,none,2024-08-14\r\n29134,1018,APAC,sports,online,72.41,3,0.003,none,2024-08-09\r\n29135,1356,LATAM,toys,online,124.97,5,0.189,none,2024-12-28\r\n29136,2399,LATAM,fashion,retail,50.48,2,0.027,coupon,2024-07-16\r\n29137,1049,AMER,grocery,retail,55.09,6,0.111,none,2024-04-21\r\n29138,1837,LATAM,fashion,online,96.21,5,0.027,bundle,2024-02-11\r\n29139,1164,EMEA,electronics,online,81.57,1,0.057,coupon,2024-08-27\r\n29140,1795,EMEA,home,retail,62.14,4,0.139,coupon,2024-11-16\r\n29141,1814,AMER,toys,mobile,33.80,6,0.059,none,2024-10-26\r\n29142,1181,LATAM,electronics,retail,114.09,2,0.187,none,2024-01-06\r\n29143,1840,LATAM,electronics,online,62.32,3,0.159,none,2024-11-20\r\n29144,2020,AMER,electronics,online,113.52,2,0.149,coupon,2024-09-03\r\n29145,1098,APAC,home,online,53.52,1,0.051,coupon,2024-03-23\r\n29146,1524,LATAM,fashion,online,52.57,5,0.089,coupon,2024-11-11\r\n29147,2400,EMEA,home,online,106.02,5,0.188,loyalty,2024-05-04\r\n29148,2043,EMEA,grocery,partner,28.84,3,0.226,none,2024-02-01\r\n29149,1087,AMER,grocery,mobile,97.18,8,0.080,bundle,2024-02-13\r\n29150,1661,LATAM,electronics,partner,64.25,8,0.018,loyalty,2024-03-27\r\n29151,2039,EMEA,fashion,retail,29.86,4,0.042,none,2024-
08-26\r\n29152,1014,EMEA,grocery,retail,57.57,3,0.085,none,2024-11-08\r\n29153,1883,LATAM,fashion,online,24.29,2,0.080,none,2024-05-25\r\n29154,1495,LATAM,sports,retail,146.65,7,0.064,none,2024-01-22\r\n29155,1626,EMEA,home,online,26.87,4,0.035,coupon,2024-09-11\r\n29156,1426,AMER,fashion,online,48.74,8,0.151,none,2024-05-11\r\n29157,1736,AMER,fashion,online,83.64,5,0.194,coupon,2024-11-24\r\n29158,1744,EMEA,home,retail,41.64,7,0.006,bundle,2024-07-16\r\n29159,1472,AMER,electronics,online,52.50,1,0.209,none,2024-09-18\r\n29160,1673,AMER,electronics,retail,69.64,6,0.190,none,2024-01-18\r\n29161,2375,AMER,fashion,retail,30.29,2,0.008,loyalty,2024-03-05\r\n29162,2090,AMER,toys,mobile,34.59,1,0.215,none,2024-01-27\r\n29163,2281,AMER,sports,mobile,37.48,5,0.174,none,2024-07-08\r\n29164,1072,LATAM,sports,mobile,24.16,7,0.096,none,2024-01-03\r\n29165,1359,LATAM,home,online,59.65,5,0.023,none,2024-05-24\r\n29166,2418,AMER,home,retail,109.19,8,0.216,coupon,2024-02-09\r\n29167,1563,EMEA,fashion,retail,34.31,4,0.183,none,2024-03-18\r\n29168,1047,APAC,fashion,online,65.69,3,0.153,bundle,2024-10-01\r\n29169,2239,EMEA,electronics,mobile,18.55,6,0.027,coupon,2024-05-12\r\n29170,1200,EMEA,home,mobile,84.03,5,0.048,loyalty,2024-09-01\r\n29171,1479,AMER,home,online,129.66,6,0.127,none,2024-12-01\r\n29172,1150,LATAM,fashion,online,196.91,6,0.070,none,2024-08-09\r\n29173,2295,EMEA,fashion,online,140.67,3,0.066,coupon,2024-02-01\r\n29174,2052,LATAM,grocery,retail,87.16,7,0.098,none,2024-10-22\r\n29175,2473,EMEA,fashion,mobile,56.82,5,0.141,none,2024-11-11\r\n29176,2039,EMEA,sports,online,44.77,3,0.055,bundle,2024-06-07\r\n29177,1076,LATAM,home,online,40.98,1,0.212,coupon,2024-07-02\r\n29178,1537,LATAM,grocery,retail,109.77,8,0.063,none,2024-12-12\r\n29179,2257,AMER,sports,online,20.29,1,0.189,none,2024-04-13\r\n29180,2021,EMEA,fashion,mobile,38.03,7,0.042,coupon,2024-09-22\r\n29181,2194,APAC,fashion,online,53.41,5,0.112,coupon,2024-12-18\r\n29182,2089,EMEA,grocery,online,38.15,3,0.226,b
undle,2024-02-11\r\n29183,1247,AMER,home,online,57.78,2,0.068,bundle,2024-12-07\r\n29184,2244,LATAM,home,mobile,39.95,3,0.010,coupon,2024-04-06\r\n29185,1138,AMER,electronics,retail,155.37,1,0.185,none,2024-10-15\r\n29186,1333,EMEA,grocery,retail,107.34,1,0.010,coupon,2024-02-09\r\n29187,2317,LATAM,electronics,online,66.92,5,0.087,none,2024-07-13\r\n29188,2314,EMEA,electronics,online,87.77,3,0.004,none,2024-01-06\r\n29189,1780,APAC,fashion,retail,66.86,7,0.194,none,2024-11-04\r\n29190,1375,AMER,electronics,online,31.10,7,0.192,none,2024-06-26\r\n29191,2235,AMER,sports,online,189.76,5,0.246,none,2024-07-23\r\n29192,1769,LATAM,grocery,retail,109.44,3,0.033,none,2024-04-20\r\n29193,1348,AMER,grocery,online,31.08,6,0.119,loyalty,2024-12-22\r\n29194,1339,EMEA,grocery,mobile,41.02,2,0.245,coupon,2024-08-05\r\n29195,1567,AMER,grocery,online,47.08,8,0.146,loyalty,2024-03-26\r\n29196,1721,EMEA,grocery,online,113.84,4,0.213,loyalty,2024-12-18\r\n29197,1418,LATAM,electronics,online,47.05,7,0.240,loyalty,2024-02-20\r\n29198,2308,AMER,electronics,retail,53.70,6,0.223,none,2024-09-26\r\n29199,1216,APAC,sports,mobile,43.49,3,0.054,coupon,2024-01-09\r\n29200,1722,EMEA,grocery,retail,65.57,7,0.202,coupon,2024-10-12\r\n29201,2402,AMER,grocery,retail,72.01,8,0.121,none,2024-10-21\r\n29202,1870,EMEA,sports,retail,84.59,6,0.114,bundle,2024-08-09\r\n29203,1626,EMEA,fashion,retail,50.30,6,0.194,none,2024-12-12\r\n29204,2029,APAC,electronics,retail,124.20,5,0.244,none,2024-07-09\r\n29205,2056,LATAM,electronics,online,72.71,4,0.019,none,2024-01-09\r\n29206,1183,AMER,grocery,mobile,57.37,4,0.112,none,2024-01-11\r\n29207,1349,APAC,fashion,retail,42.72,4,0.188,none,2024-06-07\r\n29208,2232,EMEA,fashion,partner,116.10,1,0.109,loyalty,2024-03-20\r\n29209,2265,APAC,toys,retail,36.60,2,0.234,bundle,2024-04-15\r\n29210,1647,LATAM,grocery,partner,27.14,6,0.165,none,2024-06-08\r\n29211,1590,APAC,electronics,retail,168.40,4,0.105,none,2024-06-25\r\n29212,1345,AMER,electronics,online,72.33,4,0.069,none
,2024-01-25\r\n29213,1294,APAC,fashion,online,42.52,7,0.177,none,2024-07-06\r\n29214,1250,APAC,grocery,online,60.20,6,0.024,none,2024-12-21\r\n29215,1891,APAC,toys,retail,29.80,2,0.149,loyalty,2024-04-22\r\n29216,1921,LATAM,grocery,partner,57.67,2,0.074,loyalty,2024-01-05\r\n29217,1068,APAC,home,online,60.88,4,0.040,loyalty,2024-01-05\r\n29218,2477,APAC,grocery,mobile,99.69,5,0.142,loyalty,2024-03-26\r\n29219,1393,LATAM,sports,online,48.51,2,0.096,bundle,2024-01-09\r\n29220,1574,AMER,home,partner,78.79,8,0.177,coupon,2024-08-06\r\n29221,1650,LATAM,electronics,retail,80.76,2,0.217,loyalty,2024-09-17\r\n29222,1333,EMEA,home,online,20.70,3,0.145,none,2024-09-22\r\n29223,1587,LATAM,fashion,online,108.89,3,0.194,none,2024-12-21\r\n29224,2244,LATAM,fashion,retail,80.28,4,0.128,none,2024-01-08\r\n29225,1229,LATAM,home,retail,123.59,4,0.047,none,2024-04-05\r\n29226,1327,APAC,home,online,59.64,2,0.142,none,2024-11-24\r\n29227,1887,LATAM,grocery,retail,53.17,4,0.210,loyalty,2024-09-08\r\n29228,1922,EMEA,home,online,70.66,7,0.192,coupon,2024-11-27\r\n29229,2064,LATAM,electronics,retail,33.26,1,0.042,none,2024-05-21\r\n29230,1587,LATAM,fashion,online,43.83,5,0.244,coupon,2024-01-13\r\n29231,1279,EMEA,grocery,online,22.74,1,0.207,none,2024-10-21\r\n29232,1979,APAC,sports,online,29.96,1,0.249,none,2024-12-18\r\n29233,2456,APAC,sports,online,56.74,7,0.189,none,2024-08-21\r\n29234,1407,LATAM,electronics,online,43.58,1,0.015,none,2024-02-22\r\n29235,1598,EMEA,grocery,online,57.84,1,0.058,none,2024-09-22\r\n29236,1836,LATAM,sports,online,36.54,2,0.177,none,2024-05-14\r\n29237,1924,AMER,grocery,retail,242.63,6,0.140,none,2024-07-17\r\n29238,1640,APAC,grocery,retail,53.04,3,0.011,none,2024-11-09\r\n29239,2402,AMER,electronics,retail,94.40,7,0.249,coupon,2024-10-17\r\n29240,1546,EMEA,home,online,53.83,2,0.192,bundle,2024-02-21\r\n29241,1173,LATAM,grocery,online,67.07,2,0.194,loyalty,2024-07-09\r\n29242,1828,EMEA,grocery,online,119.60,2,0.237,bundle,2024-06-13\r\n29243,1315,AMER,toys,onl
ine,33.51,1,0.073,none,2024-10-15\r\n29244,1927,EMEA,grocery,mobile,34.95,7,0.226,none,2024-06-02\r\n29245,1999,EMEA,grocery,mobile,61.24,5,0.131,none,2024-05-03\r\n29246,1640,APAC,electronics,online,43.15,5,0.195,none,2024-07-24\r\n29247,1649,APAC,toys,mobile,65.80,7,0.174,none,2024-04-01\r\n29248,1188,LATAM,electronics,partner,40.33,7,0.212,loyalty,2024-07-12\r\n29249,1137,APAC,electronics,online,70.04,2,0.173,none,2024-03-04\r\n29250,1110,LATAM,grocery,mobile,48.36,3,0.103,none,2024-01-24\r\n29251,1022,APAC,sports,online,30.30,1,0.141,none,2024-01-24\r\n29252,1978,AMER,grocery,retail,51.86,6,0.121,none,2024-10-12\r\n29253,1878,EMEA,toys,online,70.26,6,0.130,coupon,2024-09-01\r\n29254,1314,AMER,home,online,44.12,4,0.049,none,2024-07-12\r\n29255,1125,LATAM,electronics,online,87.53,1,0.116,coupon,2024-08-01\r\n29256,1632,LATAM,grocery,retail,53.79,8,0.068,none,2024-12-27\r\n29257,1982,EMEA,electronics,mobile,84.38,2,0.034,bundle,2024-01-14\r\n29258,1043,LATAM,electronics,online,40.03,1,0.019,coupon,2024-03-28\r\n29259,1016,AMER,electronics,online,49.44,1,0.193,coupon,2024-12-21\r\n29260,2044,APAC,electronics,online,50.95,3,0.149,none,2024-01-19\r\n29261,1178,EMEA,fashion,retail,44.76,7,0.092,loyalty,2024-11-25\r\n29262,2229,APAC,grocery,online,50.84,8,0.067,none,2024-05-22\r\n29263,2275,LATAM,grocery,mobile,20.02,2,0.009,bundle,2024-08-13\r\n29264,1858,LATAM,home,mobile,38.85,7,0.223,coupon,2024-08-20\r\n29265,2018,AMER,fashion,online,81.70,4,0.173,none,2024-05-20\r\n29266,1264,APAC,home,retail,44.12,6,0.051,loyalty,2024-01-09\r\n29267,1497,EMEA,fashion,retail,59.49,2,0.190,none,2024-05-04\r\n29268,1015,AMER,grocery,online,86.69,8,0.182,bundle,2024-06-16\r\n29269,2364,APAC,toys,mobile,35.24,7,0.036,bundle,2024-07-03\r\n29270,2492,LATAM,grocery,retail,51.16,4,0.030,coupon,2024-10-26\r\n29271,2202,APAC,fashion,retail,66.47,6,0.073,none,2024-10-21\r\n29272,1820,AMER,fashion,retail,26.30,5,0.007,none,2024-07-11\r\n29273,1658,AMER,grocery,online,44.92,8,0.002,none,2024-0
1-04\r\n29274,2311,LATAM,home,online,65.43,7,0.152,none,2024-12-18\r\n29275,2337,AMER,electronics,retail,140.91,3,0.152,none,2024-12-15\r\n29276,2346,LATAM,grocery,online,70.65,8,0.132,none,2024-05-26\r\n29277,1070,EMEA,toys,online,33.70,6,0.139,loyalty,2024-02-03\r\n29278,1660,AMER,sports,mobile,239.09,3,0.088,none,2024-04-05\r\n29279,1678,LATAM,toys,retail,30.51,5,0.051,bundle,2024-11-22\r\n29280,1770,AMER,home,online,56.29,6,0.124,none,2024-11-03\r\n29281,2190,LATAM,sports,online,40.33,7,0.088,none,2024-01-12\r\n29282,2008,APAC,grocery,online,48.61,2,0.215,none,2024-02-03\r\n29283,1187,AMER,home,mobile,71.54,1,0.208,none,2024-09-12\r\n29284,2382,LATAM,electronics,retail,65.81,2,0.205,none,2024-07-28\r\n29285,2024,AMER,electronics,online,35.71,1,0.040,none,2024-06-18\r\n29286,2462,EMEA,fashion,partner,45.57,4,0.127,coupon,2024-11-25\r\n29287,2278,APAC,sports,online,95.07,6,0.156,none,2024-01-11\r\n29288,1575,APAC,grocery,retail,112.77,1,0.162,loyalty,2024-04-15\r\n29289,1786,APAC,home,retail,36.42,3,0.020,loyalty,2024-11-06\r\n29290,1334,APAC,home,online,64.20,8,0.191,none,2024-09-17\r\n29291,1867,AMER,electronics,online,68.80,6,0.157,none,2024-02-20\r\n29292,1750,LATAM,grocery,retail,42.90,4,0.218,coupon,2024-01-10\r\n29293,2311,LATAM,sports,retail,30.64,7,0.186,none,2024-10-23\r\n29294,1362,AMER,grocery,online,78.42,2,0.229,none,2024-04-13\r\n29295,1120,LATAM,toys,online,58.53,7,0.173,none,2024-04-01\r\n29296,2115,APAC,electronics,mobile,121.66,5,0.012,none,2024-02-04\r\n29297,1313,EMEA,electronics,retail,28.78,2,0.228,bundle,2024-01-04\r\n29298,2434,APAC,sports,retail,51.77,8,0.231,none,2024-03-21\r\n29299,1162,AMER,electronics,online,62.34,8,0.143,none,2024-02-24\r\n29300,2011,AMER,toys,retail,91.02,8,0.067,loyalty,2024-11-22\r\n29301,2019,AMER,grocery,retail,75.40,2,0.248,bundle,2024-05-24\r\n29302,2046,APAC,grocery,retail,55.51,2,0.090,none,2024-04-28\r\n29303,1516,EMEA,home,online,78.25,1,0.117,none,2024-09-04\r\n29304,1888,LATAM,home,retail,44.94,6,0.150,l
oyalty,2024-12-15\r\n29305,1918,EMEA,sports,mobile,76.24,1,0.191,none,2024-12-24\r\n29306,1932,EMEA,toys,retail,51.43,6,0.021,none,2024-01-22\r\n29307,1403,APAC,sports,retail,141.78,5,0.092,none,2024-02-08\r\n29308,1892,LATAM,home,retail,30.95,6,0.060,none,2024-09-09\r\n29309,2393,LATAM,electronics,retail,34.53,6,0.153,none,2024-02-16\r\n29310,1412,AMER,grocery,retail,17.25,1,0.148,none,2024-11-20\r\n29311,1217,EMEA,home,partner,101.79,2,0.250,none,2024-11-24\r\n29312,1098,APAC,electronics,retail,61.23,3,0.081,none,2024-02-06\r\n29313,1853,APAC,electronics,online,73.49,8,0.002,loyalty,2024-01-03\r\n29314,1085,EMEA,sports,mobile,97.48,8,0.002,loyalty,2024-08-28\r\n29315,2306,AMER,grocery,retail,86.25,8,0.090,none,2024-01-02\r\n29316,1685,AMER,fashion,online,53.70,1,0.162,none,2024-05-18\r\n29317,2475,AMER,electronics,mobile,61.88,1,0.045,loyalty,2024-03-25\r\n29318,1059,AMER,electronics,retail,68.37,5,0.147,coupon,2024-05-13\r\n29319,1506,EMEA,sports,online,30.74,3,0.051,none,2024-07-28\r\n29320,2325,LATAM,fashion,online,28.85,4,0.070,none,2024-05-06\r\n29321,1916,AMER,electronics,online,49.34,4,0.069,bundle,2024-06-03\r\n29322,2475,AMER,home,retail,246.40,6,0.061,none,2024-10-04\r\n29323,1552,EMEA,sports,retail,50.94,2,0.177,loyalty,2024-02-16\r\n29324,1677,EMEA,grocery,retail,38.22,7,0.183,none,2024-06-12\r\n29325,2362,AMER,grocery,online,82.39,4,0.117,coupon,2024-02-27\r\n29326,2182,AMER,grocery,online,76.60,2,0.044,none,2024-05-15\r\n29327,1410,AMER,grocery,online,55.22,2,0.093,none,2024-03-15\r\n29328,1453,APAC,grocery,retail,37.58,5,0.068,coupon,2024-10-17\r\n29329,1616,APAC,grocery,online,64.59,6,0.184,none,2024-03-10\r\n29330,2390,AMER,grocery,mobile,56.46,6,0.128,coupon,2024-10-14\r\n29331,2452,LATAM,grocery,online,120.91,3,0.227,none,2024-03-01\r\n29332,1905,APAC,grocery,retail,64.71,2,0.176,loyalty,2024-07-09\r\n29333,2216,AMER,electronics,online,59.83,8,0.023,coupon,2024-05-11\r\n29334,1780,APAC,toys,online,141.29,7,0.236,loyalty,2024-10-03\r\n29335,2227,
LATAM,grocery,retail,37.96,5,0.039,none,2024-04-05\r\n29336,1290,EMEA,sports,online,28.65,2,0.076,none,2024-03-11\r\n29337,1478,EMEA,grocery,online,76.22,6,0.133,coupon,2024-05-01\r\n29338,1615,LATAM,home,online,52.77,8,0.087,coupon,2024-03-22\r\n29339,1043,LATAM,electronics,online,162.39,4,0.202,coupon,2024-06-07\r\n29340,1385,LATAM,sports,online,59.88,5,0.146,none,2024-01-04\r\n29341,2336,APAC,sports,online,56.96,6,0.236,coupon,2024-02-05\r\n29342,1849,EMEA,electronics,partner,115.90,2,0.213,none,2024-09-24\r\n29343,1605,APAC,grocery,retail,53.59,6,0.194,bundle,2024-05-04\r\n29344,1055,AMER,grocery,online,64.50,3,0.087,none,2024-05-28\r\n29345,1702,AMER,home,mobile,51.55,8,0.141,loyalty,2024-07-10\r\n29346,2403,LATAM,fashion,mobile,30.83,8,0.139,coupon,2024-07-27\r\n29347,2125,LATAM,electronics,online,70.62,1,0.004,none,2024-08-26\r\n29348,1828,EMEA,fashion,retail,55.31,3,0.230,coupon,2024-01-24\r\n29349,1298,LATAM,fashion,online,39.76,3,0.234,none,2024-07-11\r\n29350,2299,EMEA,grocery,online,66.63,7,0.225,none,2024-05-10\r\n29351,1688,LATAM,grocery,mobile,47.64,5,0.023,coupon,2024-10-09\r\n29352,1996,APAC,grocery,online,47.62,1,0.144,none,2024-08-21\r\n29353,1443,EMEA,fashion,online,109.62,4,0.007,none,2024-03-14\r\n29354,1010,EMEA,toys,retail,77.88,5,0.006,none,2024-07-23\r\n29355,1686,LATAM,home,mobile,51.75,4,0.077,coupon,2024-11-07\r\n29356,1027,APAC,electronics,online,102.35,2,0.145,coupon,2024-09-12\r\n29357,1706,EMEA,toys,online,113.66,3,0.246,none,2024-09-11\r\n29358,1937,APAC,fashion,retail,44.32,5,0.008,bundle,2024-11-09\r\n29359,2486,APAC,fashion,online,57.37,1,0.012,none,2024-10-09\r\n29360,1728,AMER,fashion,online,38.28,4,0.192,none,2024-10-10\r\n29361,1059,AMER,electronics,retail,135.51,7,0.153,bundle,2024-01-27\r\n29362,1274,LATAM,grocery,retail,74.93,2,0.179,coupon,2024-01-10\r\n29363,1999,EMEA,fashion,retail,28.70,8,0.180,bundle,2024-10-25\r\n29364,2404,EMEA,fashion,retail,47.25,3,0.197,coupon,2024-08-02\r\n29365,1922,EMEA,home,retail,82.81,5,0.1
48,none,2024-07-16\r\n29366,2480,APAC,grocery,mobile,23.40,3,0.244,bundle,2024-09-06\r\n29367,2368,AMER,home,mobile,93.40,6,0.180,bundle,2024-12-01\r\n29368,1175,AMER,home,retail,68.18,2,0.114,bundle,2024-07-09\r\n29369,1709,EMEA,sports,online,68.62,1,0.024,bundle,2024-04-22\r\n29370,2434,APAC,home,retail,74.25,8,0.054,bundle,2024-03-23\r\n29371,1666,LATAM,home,retail,82.61,6,0.008,none,2024-12-15\r\n29372,1210,LATAM,grocery,online,73.78,1,0.166,none,2024-10-04\r\n29373,2003,LATAM,fashion,online,103.48,2,0.131,loyalty,2024-02-06\r\n29374,1113,EMEA,fashion,online,59.16,4,0.125,none,2024-11-21\r\n29375,2267,AMER,grocery,online,128.28,6,0.108,coupon,2024-11-23\r\n29376,2056,LATAM,home,online,74.22,5,0.072,none,2024-08-11\r\n29377,2303,EMEA,sports,online,86.56,4,0.193,coupon,2024-06-25\r\n29378,1381,LATAM,sports,online,105.44,8,0.114,none,2024-05-03\r\n29379,2283,AMER,grocery,online,83.88,4,0.077,coupon,2024-02-23\r\n29380,2192,APAC,fashion,retail,108.50,8,0.046,coupon,2024-07-23\r\n29381,1342,LATAM,electronics,online,30.90,1,0.197,none,2024-07-06\r\n29382,1304,LATAM,sports,retail,85.06,7,0.135,none,2024-02-07\r\n29383,1830,EMEA,toys,retail,43.40,3,0.246,none,2024-12-27\r\n29384,1396,EMEA,home,retail,57.72,3,0.119,coupon,2024-09-05\r\n29385,1302,LATAM,fashion,online,24.03,1,0.067,none,2024-05-03\r\n29386,1282,LATAM,electronics,online,150.53,7,0.111,none,2024-04-05\r\n29387,1060,LATAM,home,retail,82.38,8,0.077,loyalty,2024-06-03\r\n29388,1996,APAC,home,retail,29.03,1,0.052,none,2024-02-10\r\n29389,2277,EMEA,fashion,online,29.55,4,0.114,bundle,2024-11-14\r\n29390,1258,EMEA,electronics,online,50.28,3,0.168,coupon,2024-11-26\r\n29391,1005,LATAM,home,retail,54.54,3,0.063,loyalty,2024-06-25\r\n29392,1031,AMER,grocery,retail,26.88,7,0.032,coupon,2024-06-04\r\n29393,1854,AMER,sports,online,27.03,3,0.230,none,2024-05-22\r\n29394,1969,LATAM,toys,retail,67.21,7,0.100,none,2024-07-14\r\n29395,1441,LATAM,home,online,38.04,6,0.215,coupon,2024-08-02\r\n29396,1220,LATAM,toys,mobile,52.
06,1,0.231,coupon,2024-10-25\r\n29397,1044,EMEA,electronics,retail,94.26,3,0.001,bundle,2024-11-09\r\n29398,1140,LATAM,grocery,retail,33.72,2,0.122,bundle,2024-06-02\r\n29399,1025,EMEA,electronics,mobile,66.68,2,0.099,none,2024-01-09\r\n29400,2329,LATAM,grocery,retail,207.78,2,0.220,bundle,2024-06-04\r\n29401,2284,EMEA,toys,mobile,93.91,6,0.106,coupon,2024-06-15\r\n29402,1051,EMEA,home,online,21.25,5,0.105,none,2024-07-12\r\n29403,1398,APAC,sports,mobile,168.93,6,0.053,bundle,2024-02-01\r\n29404,1896,EMEA,sports,retail,63.15,3,0.221,none,2024-03-17\r\n29405,1464,APAC,electronics,retail,67.66,5,0.138,none,2024-12-10\r\n29406,2001,EMEA,electronics,retail,19.72,5,0.101,none,2024-05-05\r\n29407,1286,EMEA,electronics,retail,58.11,4,0.020,bundle,2024-04-17\r\n29408,1896,EMEA,sports,online,15.33,3,0.099,coupon,2024-10-10\r\n29409,2206,AMER,grocery,retail,74.42,3,0.007,none,2024-01-03\r\n29410,2140,AMER,grocery,online,34.57,1,0.022,loyalty,2024-02-08\r\n29411,2307,LATAM,grocery,online,42.49,8,0.135,none,2024-02-02\r\n29412,1188,LATAM,fashion,online,81.72,2,0.016,bundle,2024-12-09\r\n29413,1891,APAC,electronics,retail,87.85,1,0.235,bundle,2024-11-18\r\n29414,2349,APAC,grocery,partner,55.64,1,0.196,coupon,2024-05-19\r\n29415,1564,APAC,grocery,retail,105.63,1,0.243,loyalty,2024-07-11\r\n29416,1132,EMEA,toys,online,125.34,6,0.079,none,2024-07-28\r\n29417,1155,EMEA,home,partner,72.29,8,0.128,none,2024-01-05\r\n29418,1679,APAC,electronics,online,70.38,6,0.225,coupon,2024-03-03\r\n29419,2053,AMER,toys,mobile,30.55,1,0.014,coupon,2024-06-04\r\n29420,1336,APAC,toys,retail,95.24,5,0.243,none,2024-09-09\r\n29421,1423,EMEA,sports,online,54.06,4,0.206,bundle,2024-12-21\r\n29422,1897,AMER,electronics,online,43.09,7,0.219,none,2024-11-06\r\n29423,2432,AMER,grocery,online,47.58,5,0.193,bundle,2024-05-16\r\n29424,1250,APAC,home,online,46.97,4,0.239,none,2024-05-23\r\n29425,1780,APAC,fashion,partner,61.18,4,0.029,none,2024-07-02\r\n29426,1238,AMER,electronics,online,64.06,2,0.146,bundle,2024
-10-06\r\n29427,1490,AMER,grocery,mobile,41.45,6,0.028,loyalty,2024-01-03\r\n29428,1804,AMER,fashion,retail,54.66,3,0.066,none,2024-03-24\r\n29429,1000,APAC,grocery,mobile,38.49,8,0.228,coupon,2024-02-14\r\n29430,1401,LATAM,home,mobile,41.19,5,0.192,coupon,2024-08-09\r\n29431,1103,EMEA,sports,retail,74.71,3,0.117,none,2024-10-22\r\n29432,2090,AMER,fashion,online,79.77,1,0.110,none,2024-11-25\r\n29433,2015,APAC,grocery,retail,20.31,6,0.067,none,2024-09-17\r\n29434,1259,EMEA,home,online,15.30,3,0.136,none,2024-06-01\r\n29435,1600,AMER,fashion,online,35.30,7,0.205,loyalty,2024-05-01\r\n29436,2178,AMER,grocery,retail,25.90,7,0.241,none,2024-12-23\r\n29437,2042,LATAM,home,retail,124.88,5,0.217,none,2024-10-18\r\n29438,1014,EMEA,toys,online,60.85,8,0.026,none,2024-02-27\r\n29439,1446,AMER,electronics,online,61.41,5,0.025,loyalty,2024-06-21\r\n29440,1957,AMER,grocery,retail,47.34,1,0.225,bundle,2024-06-16\r\n29441,2455,AMER,electronics,online,92.09,7,0.038,coupon,2024-03-13\r\n29442,2490,AMER,grocery,online,110.34,8,0.113,coupon,2024-05-19\r\n29443,1240,EMEA,sports,online,49.63,4,0.218,none,2024-09-11\r\n29444,1246,EMEA,grocery,mobile,64.48,6,0.212,none,2024-03-18\r\n29445,1890,LATAM,electronics,online,96.60,8,0.122,bundle,2024-10-06\r\n29446,2037,LATAM,grocery,online,57.50,5,0.063,loyalty,2024-09-20\r\n29447,1977,APAC,fashion,online,73.20,8,0.150,bundle,2024-10-25\r\n29448,2208,AMER,fashion,retail,96.17,4,0.135,none,2024-06-05\r\n29449,1290,EMEA,electronics,retail,49.04,5,0.041,none,2024-01-01\r\n29450,1849,EMEA,fashion,mobile,58.43,2,0.209,bundle,2024-11-15\r\n29451,2041,LATAM,fashion,retail,58.90,4,0.060,bundle,2024-06-06\r\n29452,1580,AMER,fashion,mobile,28.40,5,0.131,none,2024-05-13\r\n29453,1165,AMER,sports,online,48.76,1,0.247,none,2024-12-21\r\n29454,2224,EMEA,toys,online,58.23,8,0.200,none,2024-11-06\r\n29455,1895,AMER,fashion,partner,50.99,2,0.137,bundle,2024-05-05\r\n29456,1372,APAC,toys,retail,64.80,3,0.024,none,2024-09-19\r\n29457,2099,AMER,grocery,retail,89.9
8,3,0.006,none,2024-08-04\r\n29458,1119,LATAM,electronics,online,38.32,4,0.182,none,2024-01-28\r\n29459,2085,AMER,fashion,retail,77.03,2,0.160,none,2024-06-19\r\n29460,1955,AMER,sports,online,96.60,3,0.083,none,2024-04-19\r\n29461,1859,AMER,electronics,online,73.39,2,0.039,coupon,2024-09-09\r\n29462,1401,LATAM,grocery,retail,39.67,1,0.108,coupon,2024-03-03\r\n29463,1626,EMEA,electronics,retail,97.95,5,0.138,none,2024-06-21\r\n29464,2185,EMEA,fashion,online,47.50,7,0.164,none,2024-01-07\r\n29465,2062,EMEA,electronics,partner,45.54,4,0.030,bundle,2024-10-17\r\n29466,1197,LATAM,grocery,mobile,60.27,2,0.235,coupon,2024-09-01\r\n29467,1296,LATAM,grocery,online,99.01,7,0.226,none,2024-10-17\r\n29468,1536,LATAM,electronics,online,47.44,7,0.221,coupon,2024-01-05\r\n29469,1362,AMER,grocery,online,52.36,6,0.210,none,2024-05-07\r\n29470,1804,AMER,grocery,online,89.75,2,0.085,none,2024-01-25\r\n29471,2456,APAC,grocery,online,40.70,2,0.094,bundle,2024-12-22\r\n29472,1363,EMEA,grocery,online,49.64,7,0.164,bundle,2024-02-21\r\n29473,1212,LATAM,electronics,online,64.10,5,0.023,bundle,2024-10-14\r\n29474,2190,LATAM,sports,online,55.88,3,0.243,coupon,2024-12-27\r\n29475,1647,LATAM,grocery,retail,40.46,5,0.250,none,2024-03-18\r\n29476,1674,LATAM,grocery,online,76.51,6,0.077,none,2024-11-21\r\n29477,1991,APAC,toys,online,53.85,5,0.042,none,2024-10-08\r\n29478,1094,LATAM,sports,mobile,275.20,8,0.179,none,2024-06-19\r\n29479,1836,LATAM,electronics,online,83.06,5,0.223,none,2024-07-17\r\n29480,1964,EMEA,grocery,online,47.87,5,0.138,coupon,2024-11-15\r\n29481,1372,APAC,fashion,retail,43.63,8,0.027,coupon,2024-01-07\r\n29482,2367,AMER,toys,mobile,82.88,7,0.164,bundle,2024-03-18\r\n29483,1000,APAC,toys,mobile,43.34,4,0.161,coupon,2024-08-06\r\n29484,1695,LATAM,sports,online,45.71,3,0.227,coupon,2024-08-20\r\n29485,1326,AMER,electronics,online,62.18,7,0.192,none,2024-04-27\r\n29486,1417,APAC,fashion,mobile,134.34,3,0.045,coupon,2024-09-28\r\n29487,1965,LATAM,sports,mobile,38.69,4,0.125,loyalt
y,2024-08-01\r\n29488,1692,LATAM,electronics,partner,54.64,8,0.174,none,2024-04-18\r\n29489,1597,APAC,grocery,retail,89.46,1,0.212,loyalty,2024-01-10\r\n29490,1280,LATAM,electronics,online,163.86,7,0.199,bundle,2024-10-27\r\n29491,1434,EMEA,fashion,online,41.82,5,0.019,coupon,2024-07-11\r\n29492,1514,LATAM,home,mobile,27.99,1,0.015,coupon,2024-02-14\r\n29493,1267,EMEA,grocery,retail,53.38,3,0.100,bundle,2024-09-07\r\n29494,2411,EMEA,grocery,online,28.15,3,0.214,loyalty,2024-10-28\r\n29495,1794,AMER,grocery,mobile,31.84,6,0.197,none,2024-04-09\r\n29496,1096,EMEA,grocery,retail,48.67,7,0.204,coupon,2024-09-16\r\n29497,1644,EMEA,sports,retail,48.35,6,0.096,coupon,2024-10-23\r\n29498,1430,EMEA,fashion,retail,49.25,2,0.250,none,2024-10-11\r\n29499,2405,AMER,fashion,retail,48.82,1,0.024,coupon,2024-01-24\r\n29500,1247,AMER,grocery,retail,39.25,3,0.078,none,2024-03-20\r\n29501,2464,LATAM,electronics,online,58.91,7,0.030,coupon,2024-02-05\r\n29502,1462,LATAM,electronics,online,82.57,1,0.141,none,2024-09-18\r\n29503,1303,LATAM,home,retail,33.71,2,0.215,none,2024-11-07\r\n29504,1791,LATAM,electronics,retail,64.09,3,0.025,none,2024-08-13\r\n29505,1790,AMER,electronics,mobile,113.17,7,0.149,bundle,2024-01-15\r\n29506,1989,LATAM,fashion,online,40.59,3,0.238,none,2024-06-20\r\n29507,2136,AMER,electronics,mobile,45.88,8,0.117,loyalty,2024-03-03\r\n29508,1443,EMEA,grocery,online,109.43,2,0.075,none,2024-09-11\r\n29509,1033,APAC,electronics,online,71.36,5,0.198,none,2024-02-28\r\n29510,1002,EMEA,toys,retail,36.66,7,0.215,coupon,2024-12-12\r\n29511,1107,APAC,grocery,mobile,64.39,4,0.122,none,2024-12-03\r\n29512,1845,AMER,electronics,retail,106.78,4,0.023,none,2024-07-12\r\n29513,1414,APAC,toys,partner,70.49,3,0.134,none,2024-07-03\r\n29514,1253,AMER,toys,online,78.16,1,0.216,none,2024-10-12\r\n29515,1485,APAC,sports,mobile,12.82,5,0.153,coupon,2024-04-26\r\n29516,2086,APAC,fashion,online,24.54,5,0.191,none,2024-09-27\r\n29517,1328,APAC,electronics,mobile,78.60,2,0.027,none,2024-05-19
\r\n29518,2419,LATAM,grocery,retail,36.35,6,0.177,none,2024-05-21\r\n29519,1915,LATAM,home,partner,52.80,1,0.170,none,2024-05-12\r\n29520,2429,EMEA,home,retail,126.16,7,0.227,loyalty,2024-09-16\r\n29521,1543,AMER,grocery,online,76.14,1,0.145,none,2024-01-18\r\n29522,1544,LATAM,home,mobile,65.80,8,0.239,none,2024-11-28\r\n29523,2121,APAC,toys,online,65.76,6,0.243,none,2024-01-11\r\n29524,2213,APAC,home,online,24.47,2,0.218,none,2024-09-03\r\n29525,2328,EMEA,sports,retail,41.97,1,0.207,coupon,2024-12-16\r\n29526,2490,AMER,grocery,mobile,75.25,4,0.064,coupon,2024-05-11\r\n29527,2204,AMER,electronics,online,17.54,6,0.092,none,2024-06-01\r\n29528,1847,LATAM,sports,mobile,25.54,2,0.141,loyalty,2024-08-14\r\n29529,1485,APAC,home,retail,36.44,2,0.067,none,2024-04-26\r\n29530,1254,APAC,toys,online,19.63,8,0.017,bundle,2024-01-07\r\n29531,2276,AMER,home,online,78.75,7,0.142,coupon,2024-02-14\r\n29532,1235,EMEA,home,online,212.54,2,0.223,coupon,2024-12-13\r\n29533,2457,EMEA,home,online,104.41,4,0.129,coupon,2024-07-12\r\n29534,2165,AMER,sports,mobile,34.72,2,0.175,coupon,2024-08-04\r\n29535,2322,AMER,grocery,online,54.77,2,0.059,coupon,2024-04-26\r\n29536,1352,AMER,home,mobile,38.64,7,0.180,none,2024-04-10\r\n29537,2075,LATAM,electronics,online,66.90,6,0.157,none,2024-08-19\r\n29538,2389,LATAM,home,online,168.27,8,0.156,none,2024-11-26\r\n29539,1715,AMER,grocery,online,31.61,3,0.035,none,2024-07-11\r\n29540,1731,AMER,sports,mobile,35.83,4,0.058,loyalty,2024-06-01\r\n29541,1356,LATAM,electronics,mobile,45.85,2,0.103,none,2024-12-11\r\n29542,1812,EMEA,home,retail,45.25,7,0.127,none,2024-01-13\r\n29543,2377,AMER,toys,retail,42.12,6,0.060,none,2024-12-23\r\n29544,2350,APAC,grocery,online,34.97,4,0.038,none,2024-10-16\r\n29545,1522,LATAM,grocery,retail,81.72,2,0.161,bundle,2024-03-01\r\n29546,1267,EMEA,sports,online,89.39,6,0.138,none,2024-04-21\r\n29547,1814,AMER,home,retail,90.61,6,0.056,loyalty,2024-07-19\r\n29548,1747,EMEA,electronics,mobile,98.79,8,0.046,none,2024-10-18\r\n295
49,2150,APAC,sports,online,63.83,3,0.196,none,2024-05-16\r\n29550,2060,LATAM,electronics,retail,54.59,7,0.011,coupon,2024-02-16\r\n29551,1851,EMEA,sports,online,51.74,8,0.146,coupon,2024-01-22\r\n29552,1952,EMEA,sports,retail,88.35,4,0.050,none,2024-09-13\r\n29553,2060,LATAM,grocery,retail,38.64,3,0.185,none,2024-05-01\r\n29554,1619,APAC,fashion,retail,49.04,8,0.248,none,2024-05-17\r\n29555,1271,EMEA,fashion,online,54.85,1,0.010,none,2024-09-21\r\n29556,1264,APAC,home,retail,60.13,2,0.207,bundle,2024-01-08\r\n29557,2472,AMER,electronics,online,89.08,8,0.104,none,2024-12-19\r\n29558,1657,LATAM,fashion,retail,97.14,3,0.109,loyalty,2024-12-17\r\n29559,1278,AMER,sports,retail,66.23,1,0.210,none,2024-11-12\r\n29560,2429,EMEA,fashion,online,44.61,1,0.039,none,2024-05-11\r\n29561,1222,AMER,electronics,partner,106.22,7,0.003,none,2024-09-15\r\n29562,1374,APAC,home,retail,24.62,6,0.193,none,2024-11-03\r\n29563,2474,LATAM,fashion,retail,40.71,5,0.079,bundle,2024-04-13\r\n29564,1446,AMER,toys,retail,97.39,4,0.064,none,2024-11-01\r\n29565,1736,AMER,toys,online,49.58,7,0.107,coupon,2024-08-24\r\n29566,1801,LATAM,electronics,retail,76.42,3,0.021,none,2024-10-14\r\n29567,1754,EMEA,electronics,retail,77.14,3,0.058,none,2024-04-08\r\n29568,1668,AMER,sports,online,23.87,4,0.140,loyalty,2024-06-04\r\n29569,1938,APAC,grocery,retail,50.59,3,0.113,coupon,2024-06-06\r\n29570,2129,APAC,home,online,44.46,8,0.153,loyalty,2024-04-28\r\n29571,2087,LATAM,grocery,partner,74.17,7,0.217,none,2024-01-08\r\n29572,1382,LATAM,electronics,retail,58.42,2,0.146,none,2024-11-11\r\n29573,2391,EMEA,electronics,mobile,95.95,5,0.219,none,2024-03-18\r\n29574,1004,LATAM,electronics,online,27.10,3,0.115,none,2024-09-11\r\n29575,1634,AMER,grocery,partner,133.77,4,0.151,none,2024-12-28\r\n29576,1483,EMEA,sports,retail,63.91,5,0.022,coupon,2024-06-11\r\n29577,2432,AMER,home,retail,79.77,7,0.148,none,2024-02-03\r\n29578,1691,LATAM,home,retail,81.91,8,0.129,none,2024-04-13\r\n29579,2182,AMER,home,online,75.62,2,0.230
,none,2024-08-05\r\n29580,1842,LATAM,grocery,online,65.72,3,0.020,coupon,2024-01-26\r\n29581,2062,EMEA,fashion,retail,91.21,7,0.188,none,2024-06-09\r\n29582,2304,LATAM,fashion,online,52.52,6,0.228,none,2024-08-28\r\n29583,1521,LATAM,electronics,retail,52.88,2,0.104,none,2024-05-23\r\n29584,2056,LATAM,grocery,online,69.32,3,0.100,none,2024-05-07\r\n29585,1215,LATAM,home,retail,34.20,3,0.102,none,2024-03-23\r\n29586,1877,LATAM,grocery,retail,162.61,6,0.235,loyalty,2024-04-15\r\n29587,2347,AMER,grocery,online,20.22,2,0.142,coupon,2024-01-22\r\n29588,2422,APAC,fashion,mobile,42.05,7,0.029,none,2024-02-05\r\n29589,2499,LATAM,fashion,online,80.33,7,0.226,none,2024-02-21\r\n29590,1667,AMER,grocery,online,72.82,3,0.024,loyalty,2024-10-08\r\n29591,1349,APAC,home,online,84.26,5,0.198,none,2024-04-21\r\n29592,1339,EMEA,home,retail,71.29,4,0.010,none,2024-11-08\r\n29593,1904,APAC,home,retail,41.75,1,0.080,bundle,2024-04-14\r\n29594,2382,LATAM,home,retail,68.56,8,0.182,none,2024-08-08\r\n29595,2093,LATAM,sports,online,32.63,5,0.089,coupon,2024-04-17\r\n29596,1147,EMEA,electronics,retail,26.39,4,0.091,none,2024-10-05\r\n29597,1176,EMEA,electronics,retail,60.82,3,0.248,none,2024-11-26\r\n29598,1424,APAC,home,retail,127.35,1,0.124,none,2024-11-20\r\n29599,1523,LATAM,electronics,mobile,81.24,8,0.213,none,2024-01-28\r\n29600,1377,APAC,grocery,retail,53.47,1,0.218,none,2024-02-19\r\n29601,1688,LATAM,home,retail,53.91,8,0.237,none,2024-08-05\r\n29602,1062,EMEA,grocery,online,23.67,8,0.201,none,2024-02-20\r\n29603,2302,APAC,sports,online,39.21,1,0.038,none,2024-03-14\r\n29604,1283,APAC,electronics,mobile,22.03,2,0.056,none,2024-02-13\r\n29605,2173,LATAM,sports,online,171.75,4,0.141,none,2024-01-16\r\n29606,1705,AMER,fashion,online,91.36,5,0.060,none,2024-06-12\r\n29607,1955,AMER,fashion,online,73.70,2,0.172,none,2024-03-01\r\n29608,1472,AMER,toys,retail,58.06,8,0.130,coupon,2024-02-22\r\n29609,1547,AMER,grocery,retail,89.74,6,0.077,none,2024-06-13\r\n29610,1707,APAC,electronics,online,2
3.93,5,0.097,none,2024-02-19\r\n29611,2105,APAC,electronics,partner,48.85,6,0.178,none,2024-06-11\r\n29612,1029,EMEA,sports,retail,66.53,4,0.093,bundle,2024-03-27\r\n29613,1140,LATAM,fashion,retail,68.05,4,0.243,none,2024-03-23\r\n29614,1358,APAC,fashion,online,24.76,2,0.050,none,2024-05-16\r\n29615,2229,APAC,home,online,51.61,2,0.153,bundle,2024-03-28\r\n29616,1461,LATAM,fashion,retail,26.68,3,0.154,none,2024-10-18\r\n29617,1713,EMEA,fashion,online,24.55,7,0.098,none,2024-02-03\r\n29618,2192,APAC,electronics,online,15.19,1,0.182,none,2024-09-05\r\n29619,1071,AMER,grocery,online,44.64,3,0.237,bundle,2024-12-04\r\n29620,1250,APAC,fashion,mobile,83.93,2,0.193,none,2024-11-24\r\n29621,1246,EMEA,grocery,retail,76.06,5,0.178,none,2024-09-24\r\n29622,2020,AMER,grocery,online,51.30,8,0.142,none,2024-04-06\r\n29623,1293,AMER,grocery,online,60.90,4,0.125,bundle,2024-08-02\r\n29624,1884,APAC,electronics,retail,43.06,8,0.020,none,2024-11-23\r\n29625,1508,LATAM,sports,retail,34.62,1,0.112,none,2024-06-25\r\n29626,1286,EMEA,toys,retail,86.40,1,0.172,none,2024-10-17\r\n29627,1861,AMER,sports,retail,53.41,2,0.220,none,2024-04-14\r\n29628,1197,LATAM,sports,online,58.90,4,0.181,none,2024-12-16\r\n29629,1780,APAC,grocery,retail,54.57,2,0.041,none,2024-08-18\r\n29630,1569,APAC,grocery,online,51.70,3,0.135,bundle,2024-10-01\r\n29631,1997,APAC,electronics,online,58.44,1,0.027,none,2024-10-21\r\n29632,2121,APAC,fashion,online,65.59,3,0.175,loyalty,2024-08-11\r\n29633,1574,AMER,home,mobile,61.60,5,0.112,bundle,2024-12-02\r\n29634,2134,AMER,electronics,online,86.01,5,0.079,none,2024-10-20\r\n29635,1979,APAC,toys,mobile,45.87,4,0.070,none,2024-03-08\r\n29636,1388,AMER,fashion,online,38.51,4,0.061,none,2024-01-04\r\n29637,2230,LATAM,home,online,24.37,4,0.153,bundle,2024-11-14\r\n29638,1490,AMER,home,online,67.98,4,0.191,none,2024-02-27\r\n29639,1750,LATAM,grocery,online,26.15,6,0.218,bundle,2024-05-08\r\n29640,1737,AMER,home,mobile,49.73,1,0.047,none,2024-07-11\r\n29641,1361,LATAM,home,retai
l,69.27,8,0.119,bundle,2024-10-16\r\n29642,1302,LATAM,home,partner,20.21,5,0.093,none,2024-09-20\r\n29643,2370,EMEA,home,retail,39.01,6,0.147,none,2024-07-14\r\n29644,2324,AMER,grocery,online,38.89,1,0.006,none,2024-01-16\r\n29645,2267,AMER,toys,partner,66.52,2,0.150,none,2024-01-18\r\n29646,2372,AMER,electronics,online,36.81,7,0.018,coupon,2024-01-03\r\n29647,2367,AMER,toys,online,44.57,7,0.204,loyalty,2024-06-12\r\n29648,2278,APAC,fashion,retail,68.21,1,0.086,none,2024-10-13\r\n29649,1377,APAC,fashion,online,28.82,5,0.008,none,2024-04-08\r\n29650,2334,LATAM,grocery,retail,65.97,1,0.125,none,2024-06-11\r\n29651,1708,LATAM,sports,partner,58.67,1,0.208,coupon,2024-09-07\r\n29652,2222,LATAM,electronics,mobile,82.39,1,0.224,none,2024-11-11\r\n29653,1208,AMER,grocery,retail,21.08,4,0.138,none,2024-08-01\r\n29654,1958,APAC,electronics,retail,102.62,3,0.110,coupon,2024-06-01\r\n29655,1964,EMEA,grocery,online,30.79,4,0.055,none,2024-09-16\r\n29656,1168,APAC,toys,online,53.44,4,0.115,none,2024-10-02\r\n29657,2212,EMEA,grocery,mobile,62.58,5,0.168,bundle,2024-01-23\r\n29658,1744,EMEA,toys,online,11.84,2,0.063,coupon,2024-02-08\r\n29659,2332,APAC,fashion,online,47.14,5,0.146,none,2024-11-15\r\n29660,1798,AMER,grocery,online,38.16,7,0.090,coupon,2024-11-14\r\n29661,1790,AMER,electronics,online,33.43,5,0.226,bundle,2024-08-09\r\n29662,1191,EMEA,electronics,retail,83.43,3,0.030,none,2024-09-05\r\n29663,1295,EMEA,grocery,online,37.78,8,0.049,none,2024-05-05\r\n29664,1689,LATAM,grocery,online,46.76,2,0.131,none,2024-10-19\r\n29665,1712,LATAM,grocery,online,134.05,7,0.074,none,2024-05-13\r\n29666,1221,LATAM,home,online,46.32,7,0.195,none,2024-07-28\r\n29667,2294,EMEA,home,mobile,88.42,1,0.069,none,2024-03-19\r\n29668,1632,LATAM,fashion,retail,61.40,8,0.048,none,2024-12-11\r\n29669,1990,EMEA,electronics,online,33.75,3,0.071,none,2024-09-21\r\n29670,2351,EMEA,home,partner,65.66,3,0.194,none,2024-11-05\r\n29671,2331,APAC,toys,online,126.12,3,0.041,none,2024-12-12\r\n29672,1131,APAC,to
ys,mobile,47.10,1,0.034,coupon,2024-08-14\r\n29673,1843,EMEA,grocery,online,70.58,2,0.156,none,2024-10-05\r\n29674,1848,EMEA,fashion,online,57.99,8,0.117,bundle,2024-04-15\r\n29675,2469,LATAM,grocery,online,86.82,5,0.165,none,2024-04-14\r\n29676,1816,EMEA,fashion,mobile,36.56,5,0.092,none,2024-06-08\r\n29677,2361,EMEA,toys,online,81.76,2,0.242,coupon,2024-11-01\r\n29678,2159,AMER,grocery,online,49.81,5,0.004,none,2024-07-10\r\n29679,1456,APAC,electronics,retail,60.64,1,0.073,none,2024-12-10\r\n29680,1702,AMER,fashion,online,51.09,3,0.009,none,2024-01-08\r\n29681,2305,AMER,toys,online,34.87,5,0.235,none,2024-02-27\r\n29682,2497,AMER,sports,retail,32.63,3,0.110,none,2024-06-07\r\n29683,1975,EMEA,electronics,retail,33.66,7,0.159,none,2024-12-18\r\n29684,1194,APAC,grocery,mobile,44.83,7,0.156,loyalty,2024-10-06\r\n29685,1033,APAC,electronics,retail,75.47,4,0.052,coupon,2024-04-11\r\n29686,1941,AMER,home,online,68.65,6,0.026,coupon,2024-04-18\r\n29687,1651,LATAM,fashion,retail,21.91,4,0.163,loyalty,2024-01-02\r\n29688,2109,EMEA,fashion,online,61.15,6,0.015,none,2024-05-17\r\n29689,1719,LATAM,home,online,171.46,3,0.247,none,2024-01-21\r\n29690,1649,APAC,grocery,online,81.13,5,0.003,none,2024-04-07\r\n29691,1375,AMER,electronics,online,118.41,2,0.241,coupon,2024-08-23\r\n29692,1295,EMEA,fashion,retail,24.33,3,0.003,coupon,2024-03-01\r\n29693,1208,AMER,electronics,online,71.19,2,0.125,bundle,2024-10-10\r\n29694,2341,EMEA,toys,retail,84.82,6,0.103,none,2024-01-07\r\n29695,1455,APAC,sports,online,27.96,2,0.040,none,2024-09-20\r\n29696,1863,EMEA,toys,online,53.06,3,0.087,none,2024-07-23\r\n29697,2364,APAC,sports,online,48.17,3,0.012,none,2024-07-02\r\n29698,1839,APAC,fashion,online,45.22,3,0.204,coupon,2024-04-27\r\n29699,1103,EMEA,toys,online,25.60,8,0.213,loyalty,2024-04-11\r\n29700,1935,EMEA,fashion,online,56.50,5,0.155,bundle,2024-08-17\r\n29701,2116,LATAM,sports,retail,81.64,1,0.186,none,2024-02-20\r\n29702,2047,AMER,toys,online,67.03,4,0.107,none,2024-05-20\r\n29703,2444
,EMEA,toys,retail,60.79,5,0.056,none,2024-06-03\r\n29704,2398,EMEA,toys,online,33.59,8,0.046,none,2024-01-15\r\n29705,1444,EMEA,electronics,online,71.94,1,0.113,none,2024-02-18\r\n29706,1941,AMER,home,retail,58.27,8,0.167,none,2024-09-26\r\n29707,1213,EMEA,sports,online,42.02,2,0.124,none,2024-01-27\r\n29708,1215,LATAM,electronics,retail,101.12,7,0.226,none,2024-07-06\r\n29709,2444,EMEA,grocery,retail,39.99,2,0.003,none,2024-09-27\r\n29710,1635,APAC,grocery,online,39.55,2,0.110,loyalty,2024-07-11\r\n29711,2096,LATAM,home,online,20.27,2,0.187,none,2024-03-01\r\n29712,1591,APAC,electronics,retail,44.75,7,0.030,loyalty,2024-08-23\r\n29713,2416,LATAM,sports,partner,78.56,6,0.107,coupon,2024-04-05\r\n29714,1621,APAC,grocery,retail,27.07,8,0.100,loyalty,2024-01-14\r\n29715,1165,AMER,home,partner,94.35,6,0.110,none,2024-05-06\r\n29716,2248,LATAM,grocery,online,21.77,4,0.105,coupon,2024-12-23\r\n29717,2022,LATAM,sports,mobile,44.34,2,0.015,none,2024-12-20\r\n29718,2164,AMER,grocery,online,34.10,8,0.054,coupon,2024-04-25\r\n29719,2106,LATAM,fashion,retail,95.43,2,0.138,bundle,2024-05-06\r\n29720,2084,LATAM,grocery,online,20.22,3,0.223,coupon,2024-05-22\r\n29721,1542,APAC,toys,retail,93.41,4,0.088,none,2024-06-14\r\n29722,1249,EMEA,grocery,mobile,81.59,7,0.032,none,2024-01-09\r\n29723,1774,EMEA,grocery,retail,85.87,2,0.114,coupon,2024-10-05\r\n29724,1153,AMER,home,online,39.24,6,0.171,coupon,2024-03-16\r\n29725,1632,LATAM,grocery,online,41.04,2,0.134,coupon,2024-08-25\r\n29726,2272,EMEA,electronics,retail,28.30,1,0.230,bundle,2024-06-24\r\n29727,1873,EMEA,sports,online,62.87,1,0.175,none,2024-04-16\r\n29728,1956,APAC,electronics,online,40.61,1,0.021,coupon,2024-11-05\r\n29729,1439,LATAM,electronics,retail,61.04,5,0.042,none,2024-09-26\r\n29730,1659,APAC,toys,online,59.36,2,0.212,coupon,2024-09-15\r\n29731,1980,LATAM,sports,partner,63.15,5,0.128,loyalty,2024-03-19\r\n29732,2041,LATAM,grocery,retail,31.25,4,0.239,bundle,2024-01-28\r\n29733,1735,LATAM,electronics,retail,46.32,8,
0.145,coupon,2024-12-10\r\n29734,2134,AMER,grocery,online,28.99,7,0.045,bundle,2024-02-01\r\n29735,1754,EMEA,home,retail,23.20,4,0.169,loyalty,2024-08-12\r\n29736,1173,LATAM,electronics,online,77.62,4,0.233,coupon,2024-11-24\r\n29737,2323,AMER,grocery,online,73.84,3,0.025,coupon,2024-07-02\r\n29738,1174,APAC,fashion,retail,66.44,1,0.141,coupon,2024-04-03\r\n29739,1498,LATAM,grocery,retail,47.12,1,0.186,none,2024-05-19\r\n29740,1151,APAC,grocery,retail,38.92,4,0.228,none,2024-08-28\r\n29741,1079,LATAM,electronics,online,49.95,3,0.011,none,2024-09-12\r\n29742,1330,EMEA,toys,mobile,46.05,4,0.031,none,2024-06-13\r\n29743,1003,APAC,toys,mobile,48.98,2,0.165,bundle,2024-10-21\r\n29744,1123,LATAM,grocery,retail,36.35,6,0.154,none,2024-12-11\r\n29745,1366,APAC,grocery,online,33.13,7,0.096,none,2024-11-09\r\n29746,1924,AMER,home,retail,39.68,7,0.068,none,2024-02-10\r\n29747,2348,EMEA,grocery,online,54.77,5,0.221,none,2024-03-10\r\n29748,2072,AMER,toys,online,49.92,7,0.061,none,2024-06-22\r\n29749,1227,AMER,home,retail,41.97,4,0.148,none,2024-08-16\r\n29750,2186,LATAM,fashion,online,27.37,1,0.153,none,2024-11-25\r\n29751,1590,APAC,grocery,online,24.27,5,0.117,loyalty,2024-08-19\r\n29752,1371,AMER,fashion,mobile,53.04,2,0.009,none,2024-08-24\r\n29753,1416,EMEA,grocery,online,54.03,7,0.129,none,2024-06-09\r\n29754,1322,AMER,grocery,mobile,66.15,1,0.041,loyalty,2024-04-09\r\n29755,1304,LATAM,home,retail,19.32,6,0.119,none,2024-03-22\r\n29756,1224,APAC,fashion,retail,39.42,8,0.103,loyalty,2024-09-27\r\n29757,1667,AMER,home,online,45.15,5,0.185,none,2024-03-09\r\n29758,2095,EMEA,electronics,online,90.49,2,0.222,bundle,2024-08-15\r\n29759,1980,LATAM,electronics,online,48.07,2,0.206,none,2024-01-18\r\n29760,1024,APAC,fashion,retail,94.71,2,0.217,none,2024-01-26\r\n29761,1885,EMEA,sports,mobile,45.15,2,0.081,none,2024-02-01\r\n29762,1859,AMER,fashion,retail,32.65,7,0.052,bundle,2024-02-25\r\n29763,1632,LATAM,electronics,mobile,35.73,5,0.038,none,2024-11-05\r\n29764,2257,AMER,home,ret
ail,34.89,6,0.067,coupon,2024-01-06\r\n29765,2037,LATAM,grocery,retail,78.06,5,0.123,coupon,2024-01-18\r\n29766,2326,LATAM,fashion,online,59.66,5,0.023,none,2024-06-23\r\n29767,2463,AMER,toys,online,49.26,3,0.206,none,2024-03-24\r\n29768,1582,AMER,grocery,online,78.99,8,0.006,none,2024-04-12\r\n29769,1299,LATAM,grocery,online,26.52,3,0.031,coupon,2024-10-18\r\n29770,1158,LATAM,fashion,mobile,76.19,6,0.199,coupon,2024-08-22\r\n29771,1844,APAC,home,online,40.69,5,0.074,bundle,2024-12-24\r\n29772,1495,LATAM,grocery,retail,43.75,5,0.245,loyalty,2024-01-06\r\n29773,1248,APAC,home,retail,74.64,2,0.132,none,2024-09-22\r\n29774,1854,AMER,grocery,retail,212.00,1,0.073,none,2024-12-09\r\n29775,1713,EMEA,toys,online,39.91,4,0.039,bundle,2024-06-14\r\n29776,1926,AMER,electronics,retail,54.96,2,0.154,bundle,2024-09-14\r\n29777,1789,EMEA,sports,partner,58.02,1,0.114,none,2024-09-15\r\n29778,2198,EMEA,home,retail,54.27,2,0.094,none,2024-05-20\r\n29779,1136,EMEA,grocery,partner,68.85,7,0.216,bundle,2024-10-09\r\n29780,1719,LATAM,toys,online,26.62,1,0.168,none,2024-10-22\r\n29781,2456,APAC,grocery,retail,57.90,3,0.249,coupon,2024-09-06\r\n29782,1499,EMEA,home,partner,63.48,4,0.207,coupon,2024-02-18\r\n29783,1109,APAC,grocery,partner,16.73,1,0.162,none,2024-01-11\r\n29784,1378,APAC,grocery,retail,42.47,5,0.032,none,2024-11-03\r\n29785,1168,APAC,fashion,retail,32.36,8,0.177,none,2024-06-13\r\n29786,2437,LATAM,electronics,retail,54.18,1,0.137,coupon,2024-01-05\r\n29787,2335,EMEA,home,online,41.27,4,0.158,loyalty,2024-06-17\r\n29788,1029,EMEA,sports,mobile,173.90,4,0.177,none,2024-11-10\r\n29789,1050,AMER,grocery,online,35.07,6,0.082,none,2024-08-20\r\n29790,1452,LATAM,electronics,online,23.16,5,0.031,bundle,2024-07-28\r\n29791,2070,APAC,electronics,online,61.93,3,0.006,none,2024-03-12\r\n29792,1588,LATAM,electronics,mobile,40.39,6,0.075,coupon,2024-10-25\r\n29793,2363,AMER,electronics,retail,72.53,6,0.013,none,2024-06-14\r\n29794,1441,LATAM,grocery,online,72.39,7,0.108,bundle,2024-05-0
3\r\n29795,2414,EMEA,home,retail,62.80,4,0.143,none,2024-01-15\r\n29796,1650,LATAM,toys,retail,83.08,8,0.056,none,2024-09-06\r\n29797,1371,AMER,grocery,online,34.52,4,0.195,none,2024-04-27\r\n29798,1769,LATAM,toys,retail,100.02,2,0.219,bundle,2024-02-14\r\n29799,1201,LATAM,home,partner,197.61,7,0.239,coupon,2024-08-06\r\n29800,1752,APAC,home,retail,88.20,5,0.188,none,2024-01-04\r\n29801,1477,APAC,home,retail,98.23,6,0.230,coupon,2024-11-10\r\n29802,1856,EMEA,home,online,133.86,7,0.105,coupon,2024-06-18\r\n29803,2342,AMER,grocery,retail,56.33,8,0.004,none,2024-03-16\r\n29804,1416,EMEA,home,partner,76.22,2,0.045,bundle,2024-09-09\r\n29805,1045,LATAM,home,partner,105.57,4,0.149,none,2024-03-04\r\n29806,1929,LATAM,home,online,34.66,4,0.049,none,2024-10-27\r\n29807,1809,APAC,sports,retail,89.41,5,0.066,none,2024-01-07\r\n29808,2058,LATAM,toys,mobile,60.55,3,0.001,none,2024-03-24\r\n29809,1126,LATAM,home,online,30.77,8,0.210,none,2024-10-21\r\n29810,1566,EMEA,home,online,87.27,7,0.185,loyalty,2024-07-03\r\n29811,2144,EMEA,toys,mobile,63.72,7,0.242,none,2024-06-12\r\n29812,2204,AMER,sports,mobile,46.38,1,0.219,bundle,2024-05-03\r\n29813,1103,EMEA,home,retail,52.82,3,0.100,none,2024-07-21\r\n29814,2203,APAC,toys,mobile,20.47,6,0.058,bundle,2024-06-18\r\n29815,1784,EMEA,sports,retail,62.07,1,0.182,none,2024-03-17\r\n29816,1840,LATAM,grocery,mobile,59.56,6,0.133,coupon,2024-07-17\r\n29817,2012,APAC,grocery,retail,36.42,6,0.085,none,2024-07-04\r\n29818,1984,LATAM,electronics,online,58.78,7,0.161,none,2024-03-09\r\n29819,2074,AMER,grocery,retail,71.54,7,0.241,none,2024-12-04\r\n29820,2330,EMEA,electronics,online,54.13,5,0.088,none,2024-02-20\r\n29821,1613,EMEA,home,online,63.86,4,0.001,none,2024-03-13\r\n29822,2394,EMEA,fashion,online,56.96,7,0.179,bundle,2024-11-28\r\n29823,1115,AMER,grocery,online,40.39,8,0.197,bundle,2024-02-21\r\n29824,2158,APAC,home,mobile,47.44,7,0.150,none,2024-03-04\r\n29825,1717,AMER,fashion,retail,84.03,5,0.079,none,2024-03-20\r\n29826,2181,AMER,fashi
on,online,35.29,6,0.202,coupon,2024-11-18\r\n29827,1151,APAC,sports,online,143.58,5,0.210,none,2024-07-07\r\n29828,1995,LATAM,grocery,mobile,94.29,1,0.095,none,2024-10-09\r\n29829,1606,AMER,fashion,online,92.93,6,0.045,coupon,2024-07-21\r\n29830,1334,APAC,home,online,57.58,1,0.080,bundle,2024-05-10\r\n29831,1577,AMER,home,mobile,162.47,8,0.166,none,2024-05-10\r\n29832,1886,LATAM,sports,online,78.86,3,0.089,loyalty,2024-12-16\r\n29833,1447,LATAM,electronics,online,31.18,4,0.059,coupon,2024-03-13\r\n29834,2434,APAC,grocery,retail,36.51,4,0.192,bundle,2024-09-04\r\n29835,1902,AMER,home,retail,40.33,8,0.120,loyalty,2024-07-04\r\n29836,1095,APAC,toys,retail,49.06,4,0.159,none,2024-10-17\r\n29837,2013,APAC,electronics,online,43.29,6,0.150,none,2024-09-12\r\n29838,2259,AMER,sports,partner,102.21,4,0.039,loyalty,2024-07-27\r\n29839,1553,LATAM,fashion,mobile,63.72,2,0.101,none,2024-05-11\r\n29840,2295,EMEA,sports,online,36.40,8,0.240,none,2024-04-17\r\n29841,1750,LATAM,grocery,online,41.25,4,0.044,none,2024-11-22\r\n29842,1078,APAC,electronics,online,38.45,2,0.022,loyalty,2024-08-07\r\n29843,2123,AMER,home,mobile,82.35,2,0.102,loyalty,2024-03-25\r\n29844,2239,EMEA,sports,online,52.46,2,0.176,bundle,2024-07-12\r\n29845,2406,EMEA,electronics,retail,56.09,5,0.182,coupon,2024-11-19\r\n29846,1384,LATAM,fashion,retail,44.37,6,0.239,none,2024-05-06\r\n29847,2388,LATAM,electronics,retail,62.14,1,0.016,none,2024-10-11\r\n29848,1760,LATAM,grocery,online,51.12,3,0.219,loyalty,2024-06-10\r\n29849,1582,AMER,grocery,retail,91.98,7,0.241,coupon,2024-09-25\r\n29850,2041,LATAM,home,retail,104.19,1,0.066,none,2024-11-02\r\n29851,1266,AMER,grocery,retail,49.46,3,0.031,bundle,2024-09-12\r\n29852,1803,LATAM,electronics,online,40.74,5,0.241,coupon,2024-07-05\r\n29853,1334,APAC,home,online,18.79,1,0.108,none,2024-07-09\r\n29854,1951,LATAM,fashion,online,91.25,5,0.006,coupon,2024-05-14\r\n29855,1363,EMEA,electronics,retail,71.89,3,0.212,loyalty,2024-01-04\r\n29856,2415,AMER,sports,partner,36.63,2,0
.106,none,2024-05-24\r\n29857,1145,AMER,toys,retail,86.02,6,0.073,bundle,2024-04-09\r\n29858,1214,EMEA,grocery,online,113.97,5,0.143,none,2024-04-08\r\n29859,2448,APAC,home,online,27.57,4,0.161,bundle,2024-12-23\r\n29860,2204,AMER,fashion,retail,41.68,3,0.037,none,2024-09-26\r\n29861,1018,APAC,grocery,retail,32.38,5,0.018,none,2024-04-08\r\n29862,1101,AMER,home,retail,25.97,4,0.045,coupon,2024-10-06\r\n29863,1212,LATAM,home,partner,132.67,6,0.222,coupon,2024-09-25\r\n29864,1902,AMER,home,online,53.78,6,0.024,none,2024-08-11\r\n29865,2091,LATAM,electronics,online,63.06,4,0.158,bundle,2024-07-10\r\n29866,2009,LATAM,home,partner,100.64,1,0.010,coupon,2024-12-15\r\n29867,1758,AMER,sports,retail,64.04,3,0.190,loyalty,2024-08-12\r\n29868,1063,AMER,fashion,online,55.91,7,0.046,none,2024-08-14\r\n29869,1256,LATAM,sports,mobile,57.11,5,0.169,none,2024-10-19\r\n29870,1999,EMEA,grocery,online,73.22,3,0.142,coupon,2024-05-12\r\n29871,2004,LATAM,grocery,online,22.55,8,0.028,none,2024-02-18\r\n29872,1689,LATAM,grocery,retail,23.04,1,0.156,coupon,2024-10-12\r\n29873,2339,AMER,electronics,retail,64.73,2,0.184,none,2024-11-16\r\n29874,1371,AMER,sports,retail,60.77,8,0.053,loyalty,2024-07-27\r\n29875,1937,APAC,fashion,online,36.80,2,0.116,none,2024-02-09\r\n29876,2401,LATAM,sports,retail,19.28,4,0.216,none,2024-03-02\r\n29877,1976,AMER,grocery,online,57.09,7,0.113,none,2024-01-23\r\n29878,2116,LATAM,sports,retail,39.50,2,0.149,coupon,2024-02-02\r\n29879,1706,EMEA,fashion,retail,59.74,3,0.164,loyalty,2024-05-24\r\n29880,1372,APAC,toys,online,40.39,3,0.189,none,2024-10-25\r\n29881,1041,APAC,grocery,online,84.17,7,0.226,loyalty,2024-04-28\r\n29882,2431,LATAM,sports,online,60.99,6,0.178,loyalty,2024-07-16\r\n29883,1349,APAC,grocery,online,26.40,7,0.140,none,2024-09-06\r\n29884,1209,AMER,home,retail,91.24,6,0.243,coupon,2024-02-07\r\n29885,1798,AMER,home,online,31.75,4,0.138,coupon,2024-08-18\r\n29886,2478,AMER,fashion,online,44.19,6,0.050,none,2024-08-04\r\n29887,1539,LATAM,toys,online,3
0.39,5,0.248,none,2024-04-18\r\n29888,2294,EMEA,electronics,online,38.26,8,0.244,none,2024-02-28\r\n29889,1261,APAC,home,online,46.22,1,0.109,bundle,2024-03-02\r\n29890,2303,EMEA,electronics,online,36.15,2,0.012,bundle,2024-12-28\r\n29891,2107,APAC,home,mobile,110.17,1,0.204,none,2024-04-01\r\n29892,1776,APAC,grocery,online,46.19,5,0.088,none,2024-11-12\r\n29893,1143,LATAM,sports,retail,37.62,2,0.163,loyalty,2024-09-14\r\n29894,1952,EMEA,home,mobile,57.90,5,0.184,none,2024-08-04\r\n29895,1494,AMER,sports,retail,48.68,7,0.186,none,2024-08-20\r\n29896,2331,APAC,grocery,online,51.13,4,0.164,coupon,2024-05-25\r\n29897,2147,LATAM,grocery,retail,84.25,7,0.127,coupon,2024-07-16\r\n29898,1326,AMER,electronics,mobile,69.33,8,0.237,none,2024-05-03\r\n29899,1084,AMER,electronics,retail,106.86,1,0.097,none,2024-12-23\r\n29900,1809,APAC,sports,retail,84.08,3,0.152,none,2024-04-02\r\n29901,2410,EMEA,grocery,retail,63.28,4,0.245,none,2024-02-25\r\n29902,1177,LATAM,fashion,mobile,74.75,5,0.051,none,2024-04-01\r\n29903,2460,AMER,home,online,67.47,6,0.118,bundle,2024-09-18\r\n29904,1319,EMEA,electronics,online,27.61,8,0.032,none,2024-10-27\r\n29905,2445,APAC,fashion,retail,69.03,5,0.214,none,2024-01-01\r\n29906,2394,EMEA,toys,online,70.20,4,0.151,coupon,2024-03-11\r\n29907,1017,AMER,grocery,retail,53.89,6,0.182,none,2024-12-06\r\n29908,1133,EMEA,electronics,retail,36.45,4,0.236,bundle,2024-07-25\r\n29909,1182,EMEA,grocery,retail,30.58,5,0.197,none,2024-07-23\r\n29910,2372,AMER,toys,online,40.05,5,0.056,loyalty,2024-01-04\r\n29911,2326,LATAM,home,retail,105.74,4,0.227,none,2024-09-03\r\n29912,2080,LATAM,electronics,retail,34.69,4,0.094,none,2024-07-13\r\n29913,2096,LATAM,fashion,online,54.84,3,0.120,coupon,2024-07-25\r\n29914,1929,LATAM,home,online,30.24,8,0.078,coupon,2024-04-06\r\n29915,1292,LATAM,fashion,partner,72.27,8,0.136,none,2024-01-19\r\n29916,1903,LATAM,home,online,32.79,3,0.163,none,2024-11-10\r\n29917,1812,EMEA,fashion,partner,55.11,7,0.119,none,2024-10-02\r\n29918,2322,A
MER,home,online,72.72,8,0.062,none,2024-05-18\r\n29919,2192,APAC,grocery,online,67.19,7,0.161,none,2024-11-11\r\n29920,2353,AMER,sports,online,58.06,3,0.125,bundle,2024-05-03\r\n29921,2319,AMER,sports,online,49.35,1,0.127,none,2024-01-07\r\n29922,2052,LATAM,electronics,online,70.05,1,0.105,none,2024-12-25\r\n29923,2415,AMER,electronics,online,45.68,8,0.113,bundle,2024-05-16\r\n29924,2318,AMER,fashion,retail,28.12,1,0.246,bundle,2024-03-13\r\n29925,1958,APAC,electronics,retail,69.77,1,0.176,none,2024-08-11\r\n29926,1362,AMER,electronics,online,30.06,7,0.044,none,2024-09-23\r\n29927,1850,APAC,electronics,partner,84.95,1,0.147,none,2024-09-20\r\n29928,1830,EMEA,toys,online,31.22,2,0.169,none,2024-02-19\r\n29929,2349,APAC,grocery,online,37.76,8,0.146,none,2024-02-05\r\n29930,1917,LATAM,electronics,online,107.75,2,0.224,bundle,2024-10-15\r\n29931,2094,AMER,fashion,online,101.15,8,0.032,coupon,2024-01-18\r\n29932,1532,APAC,fashion,retail,43.00,1,0.190,none,2024-03-08\r\n29933,1787,APAC,grocery,retail,27.71,6,0.071,coupon,2024-08-10\r\n29934,1646,APAC,home,online,91.06,2,0.130,none,2024-06-20\r\n29935,1418,LATAM,grocery,partner,97.93,7,0.152,coupon,2024-11-02\r\n29936,1435,AMER,grocery,online,23.15,6,0.137,coupon,2024-12-10\r\n29937,1035,EMEA,electronics,online,129.80,1,0.229,coupon,2024-08-25\r\n29938,1400,EMEA,home,online,117.13,2,0.002,coupon,2024-06-02\r\n29939,2155,APAC,home,retail,42.06,8,0.059,none,2024-07-12\r\n29940,1873,EMEA,home,retail,142.67,4,0.113,none,2024-04-26\r\n29941,1202,APAC,home,online,66.99,8,0.025,none,2024-05-02\r\n29942,2230,LATAM,home,online,45.73,8,0.096,none,2024-06-22\r\n29943,1353,EMEA,grocery,retail,157.87,4,0.035,none,2024-12-05\r\n29944,1056,LATAM,home,mobile,27.40,1,0.220,none,2024-04-25\r\n29945,1867,AMER,sports,mobile,46.78,8,0.140,loyalty,2024-06-22\r\n29946,1795,EMEA,home,retail,44.47,7,0.128,none,2024-03-16\r\n29947,1494,AMER,electronics,retail,21.57,7,0.062,none,2024-12-26\r\n29948,2082,APAC,fashion,online,155.37,3,0.195,none,2024-0
9-19\r\n29949,2471,APAC,electronics,retail,72.15,7,0.153,coupon,2024-08-01\r\n29950,2241,APAC,fashion,mobile,24.16,1,0.033,none,2024-05-11\r\n29951,1200,EMEA,fashion,online,66.05,6,0.090,loyalty,2024-02-28\r\n29952,1096,EMEA,sports,online,34.70,6,0.145,none,2024-03-13\r\n29953,1787,APAC,grocery,retail,16.58,6,0.159,none,2024-10-23\r\n29954,2331,APAC,fashion,online,54.68,5,0.043,bundle,2024-03-25\r\n29955,1315,AMER,home,online,12.41,4,0.047,none,2024-07-26\r\n29956,1720,AMER,grocery,retail,58.62,8,0.200,none,2024-11-03\r\n29957,1603,EMEA,toys,online,85.98,2,0.030,none,2024-09-02\r\n29958,1694,APAC,home,retail,32.05,2,0.135,loyalty,2024-03-12\r\n29959,1742,AMER,fashion,partner,48.20,4,0.074,bundle,2024-01-21\r\n29960,2156,AMER,fashion,retail,62.36,4,0.078,none,2024-04-24\r\n29961,2264,LATAM,grocery,retail,29.60,6,0.115,bundle,2024-01-25\r\n29962,1685,AMER,grocery,retail,48.40,1,0.235,none,2024-01-27\r\n29963,1677,EMEA,toys,retail,36.07,2,0.075,none,2024-01-09\r\n29964,1113,EMEA,electronics,retail,66.85,1,0.112,none,2024-09-09\r\n29965,1784,EMEA,sports,online,41.43,5,0.003,none,2024-05-07\r\n29966,1220,LATAM,grocery,online,76.90,1,0.010,none,2024-09-23\r\n29967,2075,LATAM,fashion,online,39.59,7,0.201,none,2024-04-19\r\n29968,2456,APAC,sports,online,57.16,5,0.242,none,2024-04-10\r\n29969,1891,APAC,grocery,online,107.10,7,0.202,bundle,2024-06-13\r\n29970,1115,AMER,home,mobile,56.81,7,0.146,none,2024-12-25\r\n29971,2444,EMEA,sports,retail,25.82,8,0.152,none,2024-07-24\r\n29972,1772,EMEA,electronics,retail,124.08,6,0.005,bundle,2024-12-17\r\n29973,1256,LATAM,grocery,online,25.23,3,0.247,none,2024-07-22\r\n29974,1952,EMEA,home,online,45.56,1,0.005,none,2024-03-02\r\n29975,2350,APAC,toys,online,97.13,3,0.205,none,2024-11-27\r\n29976,1150,LATAM,sports,online,74.30,5,0.217,none,2024-10-13\r\n29977,2207,APAC,grocery,online,32.65,6,0.066,none,2024-05-19\r\n29978,1571,EMEA,toys,partner,43.71,1,0.216,coupon,2024-03-18\r\n29979,1274,LATAM,electronics,online,89.31,8,0.217,none,2024-
06-16\r\n29980,1202,APAC,toys,online,41.17,5,0.191,none,2024-02-20\r\n29981,1249,EMEA,electronics,retail,74.68,8,0.081,coupon,2024-09-16\r\n29982,2372,AMER,fashion,online,52.21,8,0.003,none,2024-05-14\r\n29983,1594,LATAM,fashion,retail,30.20,5,0.188,none,2024-01-17\r\n29984,1964,EMEA,electronics,online,75.64,2,0.148,none,2024-01-03\r\n29985,1014,EMEA,grocery,online,79.53,4,0.020,loyalty,2024-12-16\r\n29986,1810,LATAM,sports,retail,21.94,6,0.139,none,2024-06-17\r\n29987,1437,EMEA,sports,mobile,54.48,8,0.157,none,2024-05-10\r\n29988,2011,AMER,electronics,retail,103.47,5,0.135,none,2024-08-22\r\n29989,1473,LATAM,sports,retail,42.88,6,0.205,none,2024-11-19\r\n29990,1638,EMEA,toys,online,46.52,1,0.022,bundle,2024-02-02\r\n29991,2375,AMER,electronics,retail,110.58,8,0.207,none,2024-06-25\r\n29992,1556,AMER,toys,retail,101.51,5,0.063,none,2024-07-17\r\n29993,1265,APAC,electronics,online,53.78,5,0.020,loyalty,2024-06-12\r\n29994,1863,EMEA,fashion,retail,43.26,2,0.155,bundle,2024-03-02\r\n29995,1877,LATAM,home,mobile,28.15,8,0.097,bundle,2024-10-23\r\n29996,1714,APAC,fashion,retail,36.14,5,0.123,coupon,2024-02-08\r\n29997,2038,LATAM,fashion,online,50.20,3,0.191,coupon,2024-03-16\r\n29998,1425,EMEA,electronics,online,56.75,2,0.098,none,2024-05-16\r\n29999,1599,APAC,grocery,partner,27.25,7,0.031,none,2024-10-18\r\n30000,2403,LATAM,home,retail,64.00,4,0.019,none,2024-12-05\r\n30001,2259,AMER,home,online,40.73,8,0.015,bundle,2024-07-08\r\n30002,1682,EMEA,home,online,57.78,8,0.104,none,2024-06-23\r\n30003,2374,LATAM,toys,retail,26.48,2,0.201,coupon,2024-08-12\r\n30004,1500,EMEA,fashion,retail,46.01,3,0.055,none,2024-02-10\r\n30005,1899,APAC,toys,online,106.29,1,0.026,bundle,2024-10-01\r\n30006,1477,APAC,grocery,retail,43.54,7,0.141,coupon,2024-01-18\r\n30007,1267,EMEA,fashion,partner,43.54,6,0.104,none,2024-10-26\r\n30008,1916,AMER,fashion,online,98.10,4,0.142,coupon,2024-06-28\r\n30009,1851,EMEA,electronics,online,54.17,2,0.111,coupon,2024-12-13\r\n30010,2401,LATAM,home,retail,3
9.60,4,0.185,bundle,2024-03-23\r\n30011,1646,APAC,grocery,retail,49.88,7,0.226,bundle,2024-09-18\r\n30012,1950,LATAM,sports,mobile,113.38,8,0.156,bundle,2024-08-06\r\n30013,2072,AMER,toys,mobile,47.07,2,0.083,none,2024-02-19\r\n30014,1673,AMER,home,online,57.75,6,0.154,loyalty,2024-12-13\r\n30015,1469,EMEA,home,online,22.17,2,0.185,coupon,2024-06-13\r\n30016,1332,APAC,grocery,online,89.50,4,0.216,coupon,2024-06-22\r\n30017,2273,APAC,electronics,online,49.34,5,0.081,none,2024-03-01\r\n30018,1072,LATAM,grocery,retail,82.23,5,0.142,bundle,2024-07-12\r\n30019,1920,LATAM,home,retail,28.29,1,0.204,none,2024-11-19\r\n30020,1978,AMER,home,retail,26.78,4,0.229,bundle,2024-10-09\r\n30021,1504,AMER,home,partner,107.42,1,0.072,loyalty,2024-08-16\r\n30022,1548,EMEA,home,online,53.64,3,0.137,bundle,2024-10-22\r\n30023,2160,LATAM,grocery,online,27.86,2,0.179,bundle,2024-11-19\r\n30024,2364,APAC,electronics,retail,67.61,5,0.230,bundle,2024-11-20\r\n30025,1583,AMER,electronics,retail,38.43,4,0.138,loyalty,2024-07-20\r\n30026,1831,APAC,sports,mobile,117.36,2,0.052,none,2024-04-13\r\n30027,1691,LATAM,electronics,retail,21.72,3,0.194,none,2024-07-09\r\n30028,1336,APAC,toys,online,88.64,8,0.157,none,2024-08-26\r\n30029,1830,EMEA,grocery,online,49.37,3,0.201,coupon,2024-06-09\r\n30030,2448,APAC,electronics,online,63.38,8,0.143,none,2024-01-05\r\n30031,1815,APAC,home,retail,164.09,1,0.159,none,2024-06-17\r\n30032,1987,AMER,electronics,retail,86.94,7,0.025,coupon,2024-07-24\r\n30033,2377,AMER,home,retail,69.60,2,0.059,coupon,2024-09-06\r\n30034,1163,AMER,grocery,online,171.92,8,0.103,coupon,2024-12-26\r\n30035,2413,AMER,home,partner,78.94,7,0.058,none,2024-07-19\r\n30036,1025,EMEA,electronics,online,74.07,1,0.075,none,2024-08-07\r\n30037,1091,EMEA,grocery,online,40.71,2,0.001,none,2024-06-11\r\n30038,1707,APAC,home,retail,27.85,1,0.079,coupon,2024-01-18\r\n30039,2244,LATAM,electronics,retail,54.79,1,0.215,none,2024-07-24\r\n30040,2073,AMER,sports,retail,38.16,7,0.133,bundle,2024-03-05\r\n3
0041,1989,LATAM,home,retail,75.79,8,0.156,coupon,2024-12-23\r\n30042,1622,LATAM,toys,retail,47.58,7,0.173,none,2024-06-25\r\n30043,2117,EMEA,electronics,online,74.30,5,0.079,coupon,2024-03-07\r\n30044,2102,APAC,home,mobile,133.52,3,0.208,bundle,2024-09-25\r\n30045,2012,APAC,sports,partner,72.12,1,0.073,bundle,2024-04-07\r\n30046,2058,LATAM,fashion,retail,50.49,7,0.046,none,2024-04-12\r\n30047,1508,LATAM,electronics,retail,56.00,3,0.185,bundle,2024-05-18\r\n30048,1673,AMER,home,online,69.19,1,0.108,none,2024-04-19\r\n30049,2419,LATAM,electronics,online,21.70,2,0.050,loyalty,2024-08-16\r\n30050,1178,EMEA,sports,online,53.28,1,0.246,none,2024-02-22\r\n30051,1957,AMER,home,mobile,53.32,4,0.108,none,2024-10-10\r\n30052,1364,EMEA,grocery,partner,115.56,4,0.172,none,2024-05-23\r\n30053,1421,APAC,grocery,mobile,41.29,1,0.081,none,2024-10-08\r\n30054,1790,AMER,fashion,retail,64.85,4,0.160,none,2024-09-19\r\n30055,2192,APAC,toys,online,114.67,2,0.245,none,2024-02-10\r\n30056,1883,LATAM,electronics,retail,34.24,8,0.143,none,2024-10-28\r\n30057,1588,LATAM,electronics,online,141.74,4,0.087,none,2024-09-04\r\n30058,1716,LATAM,toys,retail,140.40,5,0.079,coupon,2024-02-22\r\n30059,1680,LATAM,toys,mobile,53.55,6,0.208,none,2024-07-12\r\n30060,1443,EMEA,home,online,37.82,6,0.189,coupon,2024-05-22\r\n30061,1929,LATAM,grocery,online,39.47,5,0.174,none,2024-04-16\r\n30062,1058,LATAM,home,online,81.14,2,0.032,none,2024-03-09\r\n30063,1952,EMEA,toys,online,20.51,8,0.025,coupon,2024-01-28\r\n30064,2079,EMEA,grocery,online,107.51,6,0.166,none,2024-10-02\r\n30065,1446,AMER,grocery,retail,69.89,7,0.170,none,2024-04-06\r\n30066,1875,EMEA,home,mobile,150.05,6,0.124,loyalty,2024-10-20\r\n30067,1021,AMER,fashion,retail,135.44,5,0.193,bundle,2024-10-06\r\n30068,2055,AMER,fashion,retail,46.74,8,0.006,none,2024-09-12\r\n30069,1470,LATAM,home,online,50.31,5,0.017,none,2024-10-17\r\n30070,1975,EMEA,home,online,74.33,7,0.218,none,2024-02-03\r\n30071,1439,LATAM,grocery,online,88.70,2,0.018,none,2024-10-
22\r\n30072,1787,APAC,home,mobile,89.48,7,0.089,none,2024-12-18\r\n30073,2089,EMEA,grocery,retail,42.30,2,0.206,none,2024-10-24\r\n30074,1793,LATAM,electronics,retail,66.69,7,0.016,none,2024-10-10\r\n30075,1808,APAC,grocery,online,32.62,2,0.180,none,2024-09-02\r\n30076,1814,AMER,toys,online,117.16,7,0.195,bundle,2024-10-27\r\n30077,1904,APAC,grocery,online,73.99,3,0.036,none,2024-06-21\r\n30078,1773,LATAM,grocery,retail,83.10,1,0.027,coupon,2024-11-26\r\n30079,1117,LATAM,toys,retail,54.45,2,0.060,none,2024-06-28\r\n30080,1082,EMEA,toys,retail,127.69,8,0.222,none,2024-06-23\r\n30081,2039,EMEA,toys,partner,101.83,7,0.190,bundle,2024-12-25\r\n30082,2166,AMER,grocery,retail,79.86,3,0.130,none,2024-10-02\r\n30083,1523,LATAM,electronics,mobile,89.28,8,0.175,none,2024-11-27\r\n30084,1261,APAC,home,retail,60.67,8,0.081,coupon,2024-01-26\r\n30085,2437,LATAM,grocery,online,20.69,2,0.064,bundle,2024-08-16\r\n30086,1495,LATAM,grocery,online,37.85,7,0.164,loyalty,2024-04-28\r\n30087,1912,APAC,home,retail,44.94,5,0.186,bundle,2024-01-16\r\n30088,1323,EMEA,fashion,mobile,66.68,7,0.216,none,2024-05-28\r\n30089,2245,APAC,grocery,retail,43.55,4,0.064,none,2024-06-13\r\n30090,1106,AMER,electronics,online,57.38,7,0.117,none,2024-11-25\r\n30091,2065,EMEA,home,partner,106.10,2,0.230,none,2024-11-05\r\n30092,2217,LATAM,sports,mobile,30.17,6,0.083,coupon,2024-11-07\r\n30093,1956,APAC,electronics,retail,64.45,5,0.073,coupon,2024-12-09\r\n30094,2095,EMEA,home,retail,82.37,2,0.131,coupon,2024-04-25\r\n30095,2401,LATAM,grocery,online,62.14,4,0.183,coupon,2024-04-22\r\n30096,1006,AMER,home,online,110.07,3,0.063,none,2024-02-22\r\n30097,1360,APAC,sports,retail,32.50,5,0.207,none,2024-08-10\r\n30098,1600,AMER,fashion,online,34.05,1,0.121,none,2024-10-05\r\n30099,1486,LATAM,electronics,online,38.09,4,0.102,none,2024-08-19\r\n30100,1531,EMEA,grocery,partner,32.02,6,0.127,bundle,2024-10-17\r\n30101,1635,APAC,home,online,18.75,8,0.143,bundle,2024-02-16\r\n30102,1671,APAC,grocery,retail,22.74,6,0.135,
none,2024-09-14\r\n30103,2298,APAC,grocery,online,50.18,6,0.029,none,2024-10-12\r\n30104,1476,APAC,fashion,mobile,79.23,7,0.001,none,2024-02-28\r\n30105,2267,AMER,grocery,online,25.14,8,0.145,loyalty,2024-10-05\r\n30106,1051,EMEA,electronics,online,33.62,7,0.158,bundle,2024-01-28\r\n30107,1675,LATAM,grocery,mobile,24.68,2,0.223,coupon,2024-09-22\r\n30108,1175,AMER,electronics,online,41.63,5,0.145,none,2024-01-08\r\n30109,1331,AMER,sports,retail,81.10,1,0.122,none,2024-01-07\r\n30110,1646,APAC,fashion,retail,74.57,8,0.153,bundle,2024-01-28\r\n30111,1725,APAC,fashion,partner,57.21,7,0.036,none,2024-10-20\r\n30112,1114,APAC,fashion,mobile,50.90,1,0.109,none,2024-08-17\r\n30113,1681,LATAM,home,online,132.78,5,0.099,none,2024-11-16\r\n30114,2076,AMER,sports,online,179.50,7,0.174,none,2024-05-18\r\n30115,1003,APAC,electronics,online,94.21,3,0.179,none,2024-09-20\r\n30116,1718,EMEA,electronics,online,77.99,7,0.070,none,2024-05-04\r\n30117,1714,APAC,electronics,retail,37.37,1,0.075,bundle,2024-07-18\r\n30118,1689,LATAM,home,online,62.56,2,0.150,none,2024-04-25\r\n30119,1528,EMEA,grocery,online,53.73,4,0.133,none,2024-05-19\r\n30120,2007,LATAM,grocery,online,59.67,6,0.249,coupon,2024-01-06\r\n30121,2135,EMEA,home,mobile,61.83,6,0.137,bundle,2024-03-28\r\n30122,2154,APAC,grocery,online,204.44,2,0.139,none,2024-03-05\r\n30123,1430,EMEA,fashion,retail,42.20,5,0.201,coupon,2024-03-26\r\n30124,1880,LATAM,toys,online,34.13,4,0.012,loyalty,2024-03-24\r\n30125,1776,APAC,electronics,online,68.59,2,0.234,coupon,2024-11-08\r\n30126,2274,APAC,home,online,17.15,1,0.058,none,2024-12-24\r\n30127,2080,LATAM,grocery,online,42.78,2,0.161,bundle,2024-05-07\r\n30128,1391,LATAM,toys,online,103.34,1,0.201,bundle,2024-06-21\r\n30129,1671,APAC,electronics,retail,136.15,6,0.057,none,2024-03-18\r\n30130,2400,EMEA,sports,retail,70.79,2,0.136,none,2024-08-21\r\n30131,2071,APAC,home,mobile,86.04,5,0.057,loyalty,2024-12-05\r\n30132,1770,AMER,electronics,retail,49.84,2,0.116,none,2024-11-14\r\n30133,1429,
APAC,grocery,online,78.81,5,0.149,loyalty,2024-11-23\r\n30134,1146,LATAM,electronics,online,35.00,5,0.187,coupon,2024-04-23\r\n30135,2181,AMER,sports,retail,23.64,5,0.131,none,2024-03-18\r\n30136,1613,EMEA,toys,retail,60.37,5,0.212,none,2024-07-16\r\n30137,1233,AMER,grocery,online,27.52,3,0.016,none,2024-06-21\r\n30138,2485,AMER,electronics,online,42.71,2,0.054,loyalty,2024-09-26\r\n30139,2047,AMER,electronics,retail,24.48,3,0.016,loyalty,2024-10-17\r\n30140,1257,APAC,grocery,online,79.52,8,0.129,none,2024-03-03\r\n30141,1002,EMEA,home,online,107.28,5,0.234,coupon,2024-01-22\r\n30142,1146,LATAM,home,online,22.41,4,0.118,none,2024-05-15\r\n30143,1940,APAC,fashion,online,59.86,7,0.237,bundle,2024-05-20\r\n30144,2407,EMEA,electronics,online,29.24,3,0.071,bundle,2024-08-01\r\n30145,1713,EMEA,home,online,34.28,2,0.243,loyalty,2024-08-19\r\n30146,1341,EMEA,toys,online,39.22,7,0.101,none,2024-09-08\r\n30147,2355,EMEA,electronics,mobile,36.74,1,0.065,none,2024-08-01\r\n30148,2020,AMER,fashion,retail,34.94,7,0.079,none,2024-02-10\r\n30149,1791,LATAM,grocery,retail,52.70,2,0.150,none,2024-09-21\r\n30150,1466,AMER,sports,online,71.21,5,0.078,none,2024-12-19\r\n30151,1282,LATAM,grocery,retail,76.77,3,0.119,none,2024-12-14\r\n30152,2102,APAC,fashion,retail,45.28,1,0.222,coupon,2024-10-09\r\n30153,2004,LATAM,electronics,mobile,46.61,8,0.158,none,2024-03-12\r\n30154,1249,EMEA,grocery,retail,57.80,1,0.235,none,2024-12-12\r\n30155,1895,AMER,electronics,mobile,108.56,1,0.009,none,2024-06-28\r\n30156,1968,EMEA,home,mobile,44.66,1,0.131,none,2024-12-05\r\n30157,1059,AMER,grocery,retail,61.07,4,0.184,coupon,2024-11-13\r\n30158,1420,APAC,sports,online,41.89,4,0.112,coupon,2024-10-05\r\n30159,2059,AMER,grocery,retail,72.23,3,0.145,none,2024-09-11\r\n30160,1623,AMER,grocery,online,62.44,6,0.204,none,2024-05-03\r\n30161,1063,AMER,home,retail,39.57,5,0.035,none,2024-01-07\r\n30162,1607,LATAM,toys,online,38.42,6,0.128,coupon,2024-07-17\r\n30163,2042,LATAM,grocery,online,133.19,6,0.089,coupon,
2024-08-15\r\n30164,1029,EMEA,fashion,online,63.19,5,0.196,none,2024-11-13\r\n30165,1472,AMER,home,mobile,56.83,8,0.241,loyalty,2024-06-04\r\n30166,1005,LATAM,electronics,online,50.89,4,0.167,none,2024-09-08\r\n30167,2385,APAC,sports,retail,55.05,6,0.182,none,2024-05-26\r\n30168,2203,APAC,grocery,partner,44.80,8,0.073,none,2024-04-05\r\n30169,2012,APAC,electronics,retail,58.99,2,0.129,coupon,2024-10-19\r\n30170,2462,EMEA,home,retail,119.38,2,0.124,coupon,2024-05-10\r\n30171,1428,APAC,grocery,retail,56.84,8,0.078,coupon,2024-03-27\r\n30172,1246,EMEA,grocery,retail,39.67,3,0.053,bundle,2024-03-23\r\n30173,2382,LATAM,electronics,mobile,67.94,3,0.021,bundle,2024-08-03\r\n30174,2381,AMER,electronics,online,69.17,3,0.149,none,2024-09-26\r\n30175,1627,LATAM,grocery,retail,42.45,2,0.244,loyalty,2024-10-14\r\n30176,1278,AMER,grocery,online,35.85,2,0.011,bundle,2024-05-25\r\n30177,1574,AMER,grocery,retail,41.53,2,0.244,bundle,2024-12-05\r\n30178,2323,AMER,electronics,retail,39.01,4,0.115,bundle,2024-12-12\r\n30179,1839,APAC,grocery,online,48.01,5,0.049,bundle,2024-02-14\r\n30180,1836,LATAM,sports,online,55.92,4,0.114,none,2024-09-10\r\n30181,2213,APAC,sports,retail,61.33,5,0.131,coupon,2024-03-11\r\n30182,1311,APAC,fashion,mobile,113.79,7,0.014,loyalty,2024-11-23\r\n30183,1095,APAC,electronics,online,29.71,6,0.066,coupon,2024-12-21\r\n30184,2491,APAC,electronics,online,47.06,6,0.181,none,2024-10-13\r\n30185,2480,APAC,home,retail,36.70,2,0.083,coupon,2024-04-16\r\n30186,1418,LATAM,fashion,retail,41.07,6,0.045,none,2024-01-16\r\n30187,1484,AMER,grocery,mobile,69.48,5,0.166,none,2024-02-11\r\n30188,1921,LATAM,electronics,retail,120.81,3,0.132,none,2024-06-08\r\n30189,2365,LATAM,sports,mobile,13.51,2,0.209,coupon,2024-08-20\r\n30190,2129,APAC,electronics,mobile,71.90,1,0.205,bundle,2024-12-01\r\n30191,1705,AMER,sports,retail,87.97,8,0.052,none,2024-06-03\r\n30192,1965,LATAM,grocery,online,92.71,7,0.053,none,2024-10-19\r\n30193,1266,AMER,electronics,online,69.35,2,0.011,none,2024-
05-20\r\n30194,2363,AMER,home,online,40.19,4,0.232,none,2024-11-13\r\n30195,1932,EMEA,fashion,online,39.32,6,0.006,none,2024-06-13\r\n30196,1683,AMER,sports,retail,50.63,1,0.167,coupon,2024-03-21\r\n30197,2038,LATAM,fashion,mobile,70.57,3,0.038,bundle,2024-02-27\r\n30198,1992,LATAM,grocery,online,54.47,4,0.126,none,2024-10-28\r\n30199,1213,EMEA,home,online,129.66,2,0.144,none,2024-05-12\r\n30200,1275,EMEA,toys,retail,21.76,8,0.222,coupon,2024-12-28\r\n30201,1456,APAC,sports,retail,187.23,6,0.249,bundle,2024-10-25\r\n30202,2363,AMER,toys,retail,65.58,2,0.118,loyalty,2024-05-21\r\n30203,2053,AMER,home,mobile,50.58,2,0.053,bundle,2024-09-19\r\n30204,1977,APAC,home,mobile,19.83,4,0.184,bundle,2024-03-26\r\n30205,1361,LATAM,grocery,partner,58.10,1,0.057,coupon,2024-07-08\r\n30206,2037,LATAM,fashion,online,65.86,7,0.113,none,2024-04-25\r\n30207,1545,AMER,grocery,online,106.62,6,0.017,bundle,2024-03-20\r\n30208,2240,LATAM,sports,retail,115.22,6,0.241,none,2024-07-11\r\n30209,2012,APAC,electronics,retail,53.53,5,0.173,loyalty,2024-08-19\r\n30210,2449,LATAM,grocery,retail,69.58,8,0.106,loyalty,2024-10-21\r\n30211,1174,APAC,home,mobile,32.63,4,0.229,none,2024-01-16\r\n30212,1279,EMEA,home,mobile,48.68,4,0.149,none,2024-03-24\r\n30213,1704,AMER,grocery,retail,68.28,8,0.188,none,2024-10-11\r\n30214,2240,LATAM,home,online,33.25,4,0.033,bundle,2024-01-27\r\n30215,1043,LATAM,fashion,retail,82.76,1,0.224,none,2024-01-22\r\n30216,1604,EMEA,grocery,online,56.08,7,0.019,none,2024-04-14\r\n30217,1363,EMEA,electronics,online,90.33,8,0.190,none,2024-02-21\r\n30218,1791,LATAM,electronics,retail,80.74,3,0.049,bundle,2024-11-20\r\n30219,2473,EMEA,grocery,retail,93.91,1,0.241,loyalty,2024-07-07\r\n30220,2480,APAC,sports,retail,105.88,7,0.103,coupon,2024-11-22\r\n30221,1917,LATAM,toys,retail,74.55,2,0.111,loyalty,2024-11-24\r\n30222,1544,LATAM,electronics,online,47.16,6,0.168,none,2024-12-22\r\n30223,2483,LATAM,home,online,48.53,5,0.175,none,2024-06-17\r\n30224,1005,LATAM,fashion,online,85.05
,2,0.237,none,2024-10-24\r\n30225,1403,APAC,fashion,mobile,53.47,3,0.070,coupon,2024-09-16\r\n30226,1052,LATAM,fashion,online,107.47,6,0.202,bundle,2024-12-09\r\n30227,1364,EMEA,grocery,online,54.63,6,0.050,bundle,2024-02-26\r\n30228,1685,AMER,fashion,retail,221.50,1,0.076,none,2024-09-20\r\n30229,1963,AMER,grocery,retail,91.57,8,0.045,coupon,2024-04-13\r\n30230,1311,APAC,home,online,52.57,5,0.125,loyalty,2024-05-22\r\n30231,2403,LATAM,fashion,retail,68.68,7,0.122,loyalty,2024-12-10\r\n30232,2492,LATAM,electronics,retail,36.44,1,0.087,loyalty,2024-01-15\r\n30233,1043,LATAM,home,mobile,35.17,5,0.052,none,2024-10-08\r\n30234,1089,LATAM,home,online,62.69,7,0.238,coupon,2024-01-20\r\n30235,1015,AMER,fashion,online,137.07,5,0.122,none,2024-06-19\r\n30236,1276,AMER,electronics,online,95.52,6,0.133,coupon,2024-08-14\r\n30237,1873,EMEA,fashion,online,17.36,7,0.144,none,2024-06-17\r\n30238,1371,AMER,home,partner,35.52,6,0.038,none,2024-09-07\r\n30239,1638,EMEA,sports,retail,70.32,8,0.059,none,2024-05-12\r\n30240,1646,APAC,grocery,partner,120.03,7,0.007,none,2024-06-21\r\n30241,2494,AMER,fashion,online,204.38,3,0.108,none,2024-06-03\r\n30242,1986,LATAM,grocery,online,47.72,1,0.033,coupon,2024-11-01\r\n30243,1758,AMER,home,retail,62.38,8,0.061,none,2024-06-07\r\n30244,1521,LATAM,electronics,online,64.98,6,0.208,coupon,2024-11-26\r\n30245,1395,APAC,toys,retail,80.88,4,0.152,bundle,2024-07-06\r\n30246,2117,EMEA,electronics,online,42.99,1,0.247,bundle,2024-02-05\r\n30247,1046,EMEA,fashion,retail,32.53,2,0.211,coupon,2024-04-26\r\n30248,1948,EMEA,sports,mobile,77.62,4,0.247,coupon,2024-04-09\r\n30249,1663,LATAM,fashion,online,105.05,2,0.242,none,2024-09-28\r\n30250,1998,APAC,electronics,mobile,40.34,2,0.036,none,2024-05-08\r\n30251,1456,APAC,grocery,online,19.36,4,0.104,bundle,2024-11-21\r\n30252,1898,EMEA,fashion,retail,33.02,7,0.111,none,2024-04-23\r\n30253,1945,AMER,grocery,online,44.81,4,0.193,none,2024-11-12\r\n30254,1141,AMER,home,retail,41.04,5,0.091,coupon,2024-07-07\r\n30
255,2115,APAC,electronics,online,79.17,1,0.105,none,2024-10-17\r\n30256,1037,EMEA,home,online,89.10,6,0.150,none,2024-08-01\r\n30257,1209,AMER,grocery,online,45.93,3,0.162,bundle,2024-10-05\r\n30258,2263,AMER,home,online,41.62,3,0.206,none,2024-04-24\r\n30259,1720,AMER,electronics,online,54.97,4,0.004,none,2024-10-06\r\n30260,1784,EMEA,home,mobile,31.21,5,0.213,none,2024-03-24\r\n30261,1685,AMER,home,online,54.45,3,0.148,none,2024-05-13\r\n30262,2395,APAC,fashion,online,48.28,4,0.095,bundle,2024-11-06\r\n30263,1065,AMER,home,mobile,24.77,7,0.208,loyalty,2024-04-06\r\n30264,1480,APAC,electronics,retail,139.45,8,0.132,none,2024-04-27\r\n30265,1737,AMER,grocery,online,46.65,8,0.220,none,2024-05-11\r\n30266,2273,APAC,electronics,online,50.15,8,0.235,loyalty,2024-10-15\r\n30267,1258,EMEA,electronics,retail,38.50,3,0.216,none,2024-04-24\r\n30268,1769,LATAM,electronics,retail,80.72,7,0.178,bundle,2024-09-26\r\n30269,1169,LATAM,sports,retail,33.41,2,0.147,none,2024-01-09\r\n30270,2451,APAC,home,mobile,80.33,3,0.056,none,2024-12-02\r\n30271,1368,EMEA,electronics,online,36.49,3,0.180,none,2024-04-11\r\n30272,1076,LATAM,fashion,retail,44.04,2,0.050,none,2024-02-17\r\n30273,1058,LATAM,fashion,online,44.76,3,0.041,none,2024-11-28\r\n30274,1212,LATAM,electronics,online,45.93,6,0.169,none,2024-04-09\r\n30275,2471,APAC,sports,online,153.33,3,0.133,none,2024-06-11\r\n30276,2347,AMER,electronics,online,40.50,8,0.213,none,2024-10-05\r\n30277,2058,LATAM,grocery,mobile,74.47,8,0.152,coupon,2024-08-01\r\n30278,1268,EMEA,grocery,online,28.47,4,0.181,coupon,2024-03-09\r\n30279,1946,AMER,electronics,online,44.35,5,0.148,none,2024-10-04\r\n30280,1229,LATAM,sports,online,71.57,3,0.032,none,2024-07-14\r\n30281,1385,LATAM,electronics,retail,73.18,6,0.007,none,2024-05-02\r\n30282,1761,EMEA,electronics,online,77.93,8,0.020,none,2024-04-12\r\n30283,1413,LATAM,home,online,95.75,4,0.021,bundle,2024-02-01\r\n30284,1966,APAC,sports,online,21.69,7,0.104,none,2024-05-18\r\n30285,2194,APAC,electronics,re
tail,76.47,4,0.024,none,2024-04-03\r\n30286,2497,AMER,sports,online,22.55,2,0.185,loyalty,2024-01-15\r\n30287,1939,LATAM,grocery,mobile,117.53,5,0.158,none,2024-04-16\r\n30288,2145,AMER,sports,online,60.86,5,0.030,none,2024-11-02\r\n30289,1176,EMEA,electronics,retail,81.28,7,0.015,none,2024-06-04\r\n30290,2183,EMEA,electronics,retail,43.84,2,0.048,bundle,2024-05-15\r\n30291,1128,LATAM,grocery,online,99.90,7,0.140,none,2024-08-06\r\n30292,1127,EMEA,home,retail,110.21,5,0.235,none,2024-06-06\r\n30293,1003,APAC,home,mobile,63.19,8,0.008,none,2024-12-10\r\n30294,2293,LATAM,electronics,retail,55.71,5,0.121,none,2024-05-17\r\n30295,1271,EMEA,fashion,online,68.42,2,0.220,none,2024-10-22\r\n30296,1705,AMER,electronics,online,50.66,5,0.085,bundle,2024-11-20\r\n30297,1340,LATAM,home,online,68.71,4,0.241,none,2024-06-19\r\n30298,1630,APAC,home,online,45.31,1,0.240,coupon,2024-11-15\r\n30299,2266,LATAM,home,online,93.89,7,0.102,loyalty,2024-10-27\r\n30300,1497,EMEA,toys,online,59.42,5,0.021,none,2024-07-02\r\n30301,1867,AMER,electronics,retail,81.42,3,0.038,none,2024-06-19\r\n30302,2382,LATAM,home,online,53.68,8,0.174,none,2024-01-18\r\n30303,1823,EMEA,home,mobile,20.90,6,0.143,bundle,2024-08-17\r\n30304,1718,EMEA,home,retail,51.62,2,0.194,coupon,2024-10-04\r\n30305,1561,EMEA,fashion,online,28.11,7,0.145,none,2024-07-15\r\n30306,1372,APAC,toys,retail,37.03,5,0.024,none,2024-12-05\r\n30307,1010,EMEA,fashion,online,40.44,8,0.055,none,2024-05-27\r\n30308,1030,EMEA,grocery,retail,25.15,8,0.219,coupon,2024-12-05\r\n30309,2410,EMEA,electronics,mobile,34.18,2,0.010,none,2024-01-05\r\n30310,1589,AMER,electronics,online,35.86,5,0.195,none,2024-10-12\r\n30311,1018,APAC,grocery,mobile,43.99,8,0.067,loyalty,2024-12-20\r\n30312,1477,APAC,home,online,70.25,2,0.050,none,2024-04-03\r\n30313,1146,LATAM,fashion,online,72.60,8,0.141,none,2024-08-11\r\n30314,1157,LATAM,grocery,online,88.68,7,0.058,none,2024-10-15\r\n30315,1409,APAC,home,online,51.03,1,0.182,loyalty,2024-10-21\r\n30316,2473,EMEA,to
ys,mobile,79.15,5,0.102,none,2024-01-13\r\n30317,1860,EMEA,home,online,43.05,1,0.171,bundle,2024-12-12\r\n30318,2194,APAC,electronics,online,73.37,2,0.075,none,2024-09-07\r\n30319,1471,EMEA,sports,online,61.25,6,0.186,loyalty,2024-05-12\r\n30320,1929,LATAM,home,online,35.77,4,0.027,none,2024-10-03\r\n30321,2249,LATAM,electronics,mobile,78.50,5,0.005,none,2024-09-15\r\n30322,1283,APAC,toys,online,85.24,4,0.096,none,2024-08-27\r\n30323,1090,AMER,home,online,72.11,5,0.067,none,2024-06-12\r\n30324,2074,AMER,home,online,44.30,1,0.194,coupon,2024-08-19\r\n30325,2276,AMER,sports,online,88.02,8,0.003,coupon,2024-04-15\r\n30326,1529,LATAM,home,online,66.00,1,0.184,loyalty,2024-09-14\r\n30327,2033,LATAM,grocery,online,69.19,1,0.002,coupon,2024-05-06\r\n30328,1874,LATAM,sports,online,34.16,7,0.178,none,2024-06-23\r\n30329,2286,AMER,fashion,online,97.71,3,0.153,loyalty,2024-11-16\r\n30330,1693,EMEA,electronics,online,99.92,4,0.240,bundle,2024-02-03\r\n30331,2114,AMER,grocery,online,39.31,2,0.241,none,2024-12-04\r\n30332,2268,EMEA,toys,online,85.86,1,0.121,none,2024-03-28\r\n30333,1350,LATAM,grocery,retail,97.22,4,0.096,none,2024-08-28\r\n30334,2425,APAC,toys,retail,29.50,1,0.020,coupon,2024-10-18\r\n30335,1417,APAC,sports,online,90.32,6,0.121,none,2024-11-26\r\n30336,1433,EMEA,grocery,retail,132.01,7,0.030,none,2024-02-27\r\n30337,1352,AMER,home,online,52.25,6,0.227,none,2024-10-17\r\n30338,2484,APAC,electronics,online,25.47,7,0.144,none,2024-06-10\r\n30339,2120,AMER,electronics,retail,33.29,7,0.060,none,2024-10-25\r\n30340,1193,APAC,toys,online,30.74,3,0.129,none,2024-11-25\r\n30341,1675,LATAM,sports,mobile,20.29,8,0.076,none,2024-02-25\r\n30342,1041,APAC,fashion,online,44.19,4,0.206,coupon,2024-02-13\r\n30343,1574,AMER,home,online,42.91,4,0.005,coupon,2024-11-26\r\n30344,1193,APAC,home,mobile,71.19,1,0.059,none,2024-06-24\r\n30345,1756,EMEA,grocery,mobile,73.62,2,0.053,none,2024-06-02\r\n30346,1460,LATAM,home,retail,62.22,6,0.197,none,2024-08-25\r\n30347,1515,EMEA,sports,onli
ne,17.11,1,0.199,none,2024-03-13\r\n30348,1588,LATAM,sports,retail,50.50,7,0.073,none,2024-04-18\r\n30349,1292,LATAM,toys,online,42.55,2,0.068,none,2024-09-04\r\n30350,2218,EMEA,home,online,72.40,2,0.059,none,2024-06-05\r\n30351,1504,AMER,toys,mobile,25.78,2,0.190,none,2024-07-24\r\n30352,1015,AMER,home,online,91.36,4,0.246,bundle,2024-02-27\r\n30353,2405,AMER,grocery,retail,52.13,5,0.237,none,2024-07-21\r\n30354,1387,AMER,grocery,online,62.90,7,0.053,coupon,2024-10-11\r\n30355,1353,EMEA,fashion,retail,93.14,4,0.136,coupon,2024-03-24\r\n30356,1915,LATAM,home,mobile,79.54,1,0.120,coupon,2024-09-22\r\n30357,1938,APAC,toys,partner,24.73,6,0.030,coupon,2024-02-02\r\n30358,1837,LATAM,sports,partner,36.05,1,0.174,loyalty,2024-12-07\r\n30359,2086,APAC,home,mobile,143.89,4,0.231,bundle,2024-12-15\r\n30360,1319,EMEA,sports,mobile,65.15,2,0.005,none,2024-08-24\r\n30361,2204,AMER,toys,retail,55.05,7,0.147,none,2024-03-13\r\n30362,2096,LATAM,grocery,retail,55.82,4,0.175,none,2024-11-12\r\n30363,1126,LATAM,sports,online,70.27,7,0.003,none,2024-01-26\r\n30364,1851,EMEA,home,retail,57.95,5,0.227,none,2024-08-05\r\n30365,2118,AMER,sports,mobile,33.04,3,0.025,none,2024-02-16\r\n30366,2413,AMER,grocery,mobile,53.18,6,0.116,coupon,2024-11-07\r\n30367,2403,LATAM,fashion,online,71.22,4,0.069,none,2024-03-28\r\n30368,1771,AMER,sports,partner,59.06,4,0.000,bundle,2024-05-20\r\n30369,1884,APAC,fashion,retail,66.54,6,0.238,none,2024-04-06\r\n30370,1415,AMER,grocery,mobile,48.31,3,0.178,none,2024-10-15\r\n30371,2132,LATAM,electronics,retail,83.54,5,0.093,loyalty,2024-07-11\r\n30372,1032,AMER,grocery,partner,50.79,3,0.208,bundle,2024-12-10\r\n30373,2448,APAC,electronics,retail,57.64,1,0.159,coupon,2024-08-27\r\n30374,1346,AMER,electronics,partner,81.26,6,0.073,none,2024-02-17\r\n30375,1277,AMER,grocery,online,172.64,5,0.082,none,2024-02-21\r\n30376,1685,AMER,grocery,online,83.99,7,0.157,none,2024-10-25\r\n30377,1062,EMEA,grocery,retail,95.13,7,0.174,coupon,2024-11-01\r\n30378,1463,EMEA,home,m
obile,143.13,1,0.086,coupon,2024-01-20\r\n30379,1684,EMEA,electronics,online,90.00,6,0.225,none,2024-01-12\r\n30380,1849,EMEA,toys,partner,52.32,4,0.029,none,2024-07-21\r\n30381,1357,EMEA,electronics,mobile,126.27,7,0.110,none,2024-05-14\r\n30382,1792,AMER,electronics,retail,125.41,4,0.234,loyalty,2024-12-18\r\n30383,1950,LATAM,grocery,mobile,60.82,6,0.145,none,2024-04-26\r\n30384,2352,APAC,electronics,online,19.69,5,0.163,none,2024-05-02\r\n30385,1948,EMEA,sports,retail,69.22,7,0.131,none,2024-08-19\r\n30386,1166,AMER,grocery,mobile,49.82,1,0.207,none,2024-02-27\r\n30387,2059,AMER,toys,retail,117.71,7,0.181,none,2024-01-19\r\n30388,1625,EMEA,sports,partner,23.42,3,0.089,none,2024-06-13\r\n30389,1509,AMER,sports,online,69.66,3,0.094,none,2024-05-02\r\n30390,1652,APAC,grocery,mobile,45.48,3,0.090,none,2024-04-01\r\n30391,1381,LATAM,sports,retail,46.00,4,0.055,coupon,2024-11-21\r\n30392,1896,EMEA,home,online,35.15,5,0.082,coupon,2024-03-27\r\n30393,2111,EMEA,home,mobile,27.38,6,0.030,coupon,2024-06-27\r\n30394,1175,AMER,grocery,retail,43.07,1,0.045,none,2024-04-01\r\n30395,2351,EMEA,fashion,online,26.17,8,0.020,none,2024-11-19\r\n30396,1264,APAC,electronics,online,193.73,7,0.059,none,2024-08-27\r\n30397,1095,APAC,fashion,retail,47.20,1,0.198,none,2024-02-18\r\n30398,2496,EMEA,grocery,partner,69.45,8,0.111,none,2024-03-21\r\n30399,1576,EMEA,grocery,online,14.30,3,0.090,none,2024-07-09\r\n30400,1919,EMEA,grocery,online,46.31,7,0.232,coupon,2024-12-27\r\n30401,2176,AMER,fashion,mobile,57.87,2,0.023,coupon,2024-11-03\r\n30402,1812,EMEA,grocery,online,103.84,8,0.109,bundle,2024-04-04\r\n30403,1170,AMER,grocery,partner,67.92,7,0.242,loyalty,2024-02-23\r\n30404,1350,LATAM,electronics,online,46.91,6,0.143,none,2024-09-05\r\n30405,1910,LATAM,fashion,partner,146.39,6,0.099,coupon,2024-07-20\r\n30406,1743,LATAM,electronics,retail,66.05,6,0.222,none,2024-09-23\r\n30407,2456,APAC,electronics,online,50.26,4,0.168,bundle,2024-11-16\r\n30408,2379,AMER,fashion,retail,53.33,7,0.057,cou
pon,2024-04-10\r\n30409,1384,LATAM,electronics,online,40.08,7,0.241,coupon,2024-11-12\r\n30410,1814,AMER,electronics,retail,28.06,2,0.219,coupon,2024-05-01\r\n30411,1649,APAC,electronics,online,48.93,8,0.004,loyalty,2024-02-02\r\n30412,2310,EMEA,electronics,retail,50.40,3,0.206,coupon,2024-03-15\r\n30413,2300,EMEA,sports,mobile,36.88,7,0.116,loyalty,2024-07-17\r\n30414,2241,APAC,toys,online,101.91,3,0.149,coupon,2024-04-10\r\n30415,2029,APAC,sports,retail,18.14,3,0.218,none,2024-06-22\r\n30416,2105,APAC,toys,online,81.47,7,0.216,bundle,2024-11-15\r\n30417,1031,AMER,grocery,retail,47.00,8,0.213,loyalty,2024-11-22\r\n30418,1081,AMER,electronics,mobile,24.02,3,0.085,loyalty,2024-04-02\r\n30419,2475,AMER,sports,online,49.61,2,0.201,bundle,2024-05-23\r\n30420,1619,APAC,toys,online,69.99,6,0.234,none,2024-03-06\r\n30421,1104,APAC,toys,online,65.30,2,0.011,none,2024-01-07\r\n30422,1027,APAC,grocery,online,46.24,5,0.153,none,2024-07-08\r\n30423,1555,AMER,toys,retail,42.43,1,0.216,none,2024-01-28\r\n30424,2055,AMER,electronics,retail,47.05,7,0.069,none,2024-11-15\r\n30425,1245,APAC,grocery,online,52.48,5,0.219,none,2024-02-13\r\n30426,1262,APAC,electronics,online,91.87,3,0.103,loyalty,2024-01-11\r\n30427,1607,LATAM,home,retail,17.59,5,0.222,none,2024-04-17\r\n30428,1936,EMEA,home,partner,13.36,7,0.091,bundle,2024-09-17\r\n30429,1705,AMER,grocery,online,40.99,2,0.225,none,2024-08-17\r\n30430,2424,LATAM,electronics,online,97.95,4,0.057,none,2024-11-22\r\n30431,1185,LATAM,grocery,retail,49.85,7,0.043,bundle,2024-04-25\r\n30432,1188,LATAM,fashion,online,28.68,7,0.131,coupon,2024-06-14\r\n30433,2035,LATAM,grocery,online,176.10,4,0.163,coupon,2024-07-10\r\n30434,1775,EMEA,electronics,online,53.05,3,0.212,coupon,2024-01-04\r\n30435,1039,AMER,home,retail,60.48,5,0.146,none,2024-03-12\r\n30436,1020,APAC,sports,retail,41.59,8,0.226,none,2024-10-21\r\n30437,1019,APAC,home,mobile,29.15,3,0.171,none,2024-08-15\r\n30438,1605,APAC,electronics,mobile,56.68,3,0.201,none,2024-12-09\r\n30439,2
434,APAC,toys,online,32.58,3,0.054,none,2024-08-17\r\n30440,1549,APAC,sports,mobile,122.19,4,0.199,none,2024-09-06\r\n30441,2381,AMER,electronics,online,135.92,6,0.077,none,2024-08-06\r\n30442,1729,AMER,grocery,online,36.20,6,0.023,coupon,2024-12-14\r\n30443,1892,LATAM,home,retail,46.15,8,0.199,none,2024-11-23\r\n30444,2221,LATAM,fashion,partner,65.21,1,0.187,none,2024-05-09\r\n30445,2310,EMEA,grocery,online,25.39,4,0.156,bundle,2024-05-16\r\n30446,2459,AMER,home,online,31.13,2,0.214,none,2024-11-03\r\n30447,1093,APAC,electronics,mobile,74.01,5,0.029,bundle,2024-10-10\r\n30448,1923,LATAM,fashion,partner,66.78,1,0.088,none,2024-04-18\r\n30449,1735,LATAM,fashion,retail,29.57,4,0.140,none,2024-09-08\r\n30450,1753,APAC,electronics,retail,39.25,8,0.133,none,2024-12-23\r\n30451,1613,EMEA,toys,mobile,69.12,1,0.074,none,2024-10-12\r\n30452,2257,AMER,sports,online,48.38,3,0.042,none,2024-07-21\r\n30453,2227,LATAM,home,partner,65.23,3,0.069,coupon,2024-08-13\r\n30454,1106,AMER,electronics,partner,70.41,4,0.162,coupon,2024-12-18\r\n30455,1401,LATAM,fashion,online,91.08,1,0.129,none,2024-02-02\r\n30456,2182,AMER,toys,online,78.49,2,0.214,coupon,2024-10-05\r\n30457,1482,AMER,electronics,online,39.38,5,0.233,none,2024-10-18\r\n30458,1059,AMER,grocery,retail,49.78,2,0.106,none,2024-10-24\r\n30459,2305,AMER,grocery,online,24.78,6,0.055,none,2024-05-21\r\n30460,1154,LATAM,grocery,partner,102.44,2,0.179,loyalty,2024-04-24\r\n30461,1313,EMEA,grocery,retail,32.84,1,0.010,none,2024-12-22\r\n30462,1160,LATAM,home,retail,188.55,1,0.250,coupon,2024-08-28\r\n30463,1792,AMER,grocery,retail,47.05,7,0.243,none,2024-07-24\r\n30464,1985,AMER,home,retail,57.42,3,0.044,none,2024-07-28\r\n30465,1235,EMEA,electronics,retail,46.15,7,0.206,none,2024-12-14\r\n30466,2094,AMER,electronics,mobile,24.74,1,0.013,bundle,2024-10-22\r\n30467,2131,APAC,home,mobile,88.57,6,0.222,bundle,2024-07-13\r\n30468,2289,APAC,sports,online,48.42,6,0.191,none,2024-12-28\r\n30469,1048,EMEA,electronics,online,38.99,4,0.085,no
ne,2024-04-25\r\n30470,2044,APAC,electronics,online,75.07,8,0.135,none,2024-10-09\r\n30471,1914,EMEA,electronics,retail,34.70,1,0.221,bundle,2024-10-12\r\n30472,1118,AMER,grocery,online,66.05,1,0.230,loyalty,2024-12-01\r\n30473,1667,AMER,electronics,online,72.98,3,0.084,bundle,2024-03-19\r\n30474,1584,EMEA,grocery,retail,24.28,3,0.040,loyalty,2024-04-19\r\n30475,2018,AMER,electronics,retail,43.54,6,0.080,bundle,2024-07-16\r\n30476,1275,EMEA,home,mobile,22.61,3,0.234,none,2024-09-06\r\n30477,2463,AMER,grocery,retail,34.39,5,0.035,none,2024-01-16\r\n30478,1074,LATAM,fashion,retail,31.19,2,0.038,loyalty,2024-05-13\r\n30479,1545,AMER,toys,online,33.69,1,0.053,coupon,2024-10-04\r\n30480,1264,APAC,toys,online,80.83,4,0.218,none,2024-10-19\r\n30481,1186,APAC,fashion,mobile,37.10,2,0.169,none,2024-10-15\r\n30482,1729,AMER,grocery,online,33.48,7,0.060,none,2024-09-16\r\n30483,1733,LATAM,sports,retail,62.80,3,0.037,none,2024-07-28\r\n30484,2049,LATAM,home,retail,35.93,7,0.069,none,2024-07-26\r\n30485,1948,EMEA,fashion,retail,44.09,3,0.028,none,2024-01-15\r\n30486,1480,APAC,toys,online,43.85,3,0.186,none,2024-03-12\r\n30487,1980,LATAM,electronics,retail,23.46,5,0.097,none,2024-01-13\r\n30488,1198,AMER,grocery,retail,258.77,4,0.018,none,2024-12-26\r\n30489,1427,EMEA,grocery,online,77.44,8,0.063,none,2024-07-10\r\n30490,1148,AMER,grocery,online,31.59,2,0.147,none,2024-12-21\r\n30491,1354,AMER,sports,retail,30.82,5,0.056,none,2024-04-19\r\n30492,1354,AMER,grocery,online,20.37,1,0.248,none,2024-10-13\r\n30493,2371,LATAM,electronics,retail,41.59,6,0.014,none,2024-09-18\r\n30494,1642,EMEA,home,retail,72.06,4,0.062,none,2024-09-17\r\n30495,1857,LATAM,grocery,retail,56.87,4,0.195,none,2024-04-04\r\n30496,1764,LATAM,fashion,online,135.54,5,0.213,none,2024-02-12\r\n30497,1783,AMER,grocery,retail,57.28,2,0.160,bundle,2024-08-27\r\n30498,2411,EMEA,toys,online,107.19,3,0.125,loyalty,2024-12-18\r\n30499,1327,APAC,home,retail,43.06,5,0.228,none,2024-07-11\r\n30500,1081,AMER,electronics,onlin
e,63.41,4,0.146,none,2024-08-17\r\n30501,2127,LATAM,sports,retail,96.47,5,0.155,loyalty,2024-04-04\r\n30502,2398,EMEA,electronics,mobile,54.72,1,0.243,none,2024-02-04\r\n30503,1754,EMEA,toys,online,53.42,4,0.094,none,2024-02-17\r\n30504,2331,APAC,electronics,online,33.02,1,0.214,none,2024-02-20\r\n30505,2146,APAC,sports,online,51.91,8,0.164,none,2024-09-23\r\n30506,1678,LATAM,grocery,retail,32.31,4,0.233,none,2024-11-23\r\n30507,2342,AMER,toys,online,44.95,1,0.058,coupon,2024-03-08\r\n30508,1750,LATAM,grocery,retail,72.38,3,0.232,none,2024-03-28\r\n30509,1548,EMEA,toys,online,56.41,1,0.085,coupon,2024-06-04\r\n30510,1460,LATAM,grocery,online,68.42,4,0.137,coupon,2024-12-16\r\n30511,1418,LATAM,electronics,partner,207.25,7,0.177,none,2024-08-24\r\n30512,1724,LATAM,grocery,retail,205.89,7,0.221,none,2024-02-10\r\n30513,2031,AMER,sports,partner,58.24,2,0.153,coupon,2024-01-01\r\n30514,1980,LATAM,electronics,online,35.69,5,0.029,none,2024-12-01\r\n30515,1845,AMER,toys,online,76.89,4,0.031,none,2024-07-07\r\n30516,1808,APAC,electronics,mobile,62.33,6,0.115,none,2024-11-17\r\n30517,1510,EMEA,grocery,mobile,33.96,6,0.145,none,2024-02-02\r\n30518,2461,LATAM,toys,retail,16.76,5,0.174,bundle,2024-04-28\r\n30519,1623,AMER,toys,retail,57.15,1,0.016,none,2024-05-27\r\n30520,2128,EMEA,grocery,online,104.18,1,0.166,coupon,2024-04-10\r\n30521,1271,EMEA,grocery,online,40.93,3,0.038,none,2024-05-26\r\n30522,1862,LATAM,toys,online,58.07,3,0.043,loyalty,2024-03-12\r\n30523,2243,APAC,grocery,retail,28.27,1,0.028,none,2024-10-27\r\n30524,1692,LATAM,fashion,retail,58.72,1,0.219,none,2024-06-16\r\n30525,1921,LATAM,toys,online,48.55,3,0.181,none,2024-12-10\r\n30526,2475,AMER,home,online,95.98,5,0.155,none,2024-09-15\r\n30527,2251,APAC,electronics,retail,118.03,4,0.246,none,2024-10-26\r\n30528,1474,LATAM,grocery,online,60.83,7,0.036,coupon,2024-05-22\r\n30529,1935,EMEA,fashion,online,41.95,2,0.045,none,2024-05-23\r\n30530,1505,EMEA,sports,retail,64.19,2,0.014,none,2024-10-01\r\n30531,1529,LAT
AM,grocery,mobile,77.24,8,0.087,bundle,2024-02-02\r\n30532,1657,LATAM,fashion,retail,187.66,4,0.147,none,2024-01-03\r\n30533,1875,EMEA,electronics,mobile,53.61,8,0.027,none,2024-01-26\r\n30534,1099,LATAM,home,retail,24.38,3,0.078,none,2024-10-05\r\n30535,1950,LATAM,electronics,retail,67.61,6,0.143,bundle,2024-08-15\r\n30536,1998,APAC,fashion,online,27.89,7,0.070,none,2024-08-03\r\n30537,1412,AMER,electronics,online,39.99,5,0.094,none,2024-03-15\r\n30538,1065,AMER,toys,online,69.81,2,0.149,none,2024-12-06\r\n30539,1480,APAC,electronics,retail,55.16,8,0.052,none,2024-01-22\r\n30540,2210,APAC,home,retail,51.88,6,0.022,bundle,2024-04-04\r\n30541,1951,LATAM,grocery,partner,61.60,3,0.112,loyalty,2024-01-01\r\n30542,1204,AMER,sports,retail,49.57,6,0.154,none,2024-06-11\r\n30543,2306,AMER,grocery,partner,159.24,1,0.193,none,2024-05-26\r\n30544,2115,APAC,home,mobile,61.78,4,0.040,bundle,2024-09-05\r\n30545,1251,EMEA,grocery,online,50.73,8,0.209,none,2024-03-16\r\n30546,1354,AMER,grocery,retail,37.82,5,0.211,none,2024-04-22\r\n30547,1425,EMEA,toys,online,97.09,2,0.146,none,2024-02-04\r\n30548,2074,AMER,grocery,online,46.72,7,0.020,none,2024-10-08\r\n30549,1289,LATAM,electronics,retail,43.78,3,0.107,coupon,2024-01-10\r\n30550,1137,APAC,fashion,online,94.76,2,0.161,coupon,2024-02-23\r\n30551,2013,APAC,toys,retail,50.98,7,0.138,none,2024-08-28\r\n30552,2215,LATAM,sports,retail,46.49,3,0.175,bundle,2024-06-26\r\n30553,1516,EMEA,grocery,mobile,34.78,2,0.082,none,2024-02-19\r\n30554,2371,LATAM,toys,retail,51.48,5,0.190,none,2024-09-10\r\n30555,1883,LATAM,sports,mobile,78.86,2,0.213,coupon,2024-05-26\r\n30556,1242,LATAM,home,online,59.44,3,0.161,none,2024-06-26\r\n30557,2112,LATAM,grocery,partner,47.81,4,0.193,none,2024-06-15\r\n30558,2345,LATAM,grocery,online,106.18,3,0.207,none,2024-01-26\r\n30559,1312,EMEA,electronics,retail,81.27,3,0.024,coupon,2024-10-20\r\n30560,1089,LATAM,electronics,mobile,22.70,3,0.124,coupon,2024-09-25\r\n30561,1404,EMEA,sports,online,40.50,2,0.056,none,20
24-01-09\r\n30562,2478,AMER,grocery,partner,33.68,8,0.020,none,2024-06-06\r\n30563,2473,EMEA,grocery,partner,26.34,8,0.012,loyalty,2024-08-20\r\n30564,1023,APAC,sports,retail,128.56,5,0.185,none,2024-09-01\r\n30565,1021,AMER,grocery,online,50.46,3,0.033,none,2024-03-16\r\n30566,1806,APAC,home,online,89.79,8,0.202,none,2024-06-26\r\n30567,1457,EMEA,grocery,online,111.18,6,0.191,none,2024-03-26\r\n30568,1114,APAC,fashion,online,32.95,8,0.192,coupon,2024-03-27\r\n30569,1111,APAC,grocery,online,106.09,2,0.054,bundle,2024-07-28\r\n30570,1504,AMER,home,retail,31.45,8,0.046,coupon,2024-12-06\r\n30571,2369,LATAM,home,retail,57.65,5,0.056,none,2024-06-04\r\n30572,2492,LATAM,grocery,online,46.89,3,0.173,loyalty,2024-03-27\r\n30573,2307,LATAM,electronics,online,41.62,8,0.042,none,2024-09-11\r\n30574,2282,EMEA,electronics,online,63.71,6,0.163,none,2024-12-01\r\n30575,1856,EMEA,electronics,retail,41.46,5,0.197,none,2024-01-11\r\n30576,2324,AMER,fashion,mobile,35.78,2,0.199,none,2024-06-25\r\n30577,1715,AMER,electronics,retail,70.06,3,0.076,none,2024-04-04\r\n30578,1626,EMEA,grocery,retail,96.79,3,0.130,none,2024-02-04\r\n30579,2349,APAC,electronics,online,61.18,1,0.104,loyalty,2024-08-13\r\n30580,1833,EMEA,fashion,online,64.14,4,0.151,coupon,2024-07-09\r\n30581,1623,AMER,fashion,retail,51.61,8,0.246,none,2024-08-26\r\n30582,1393,LATAM,sports,online,39.69,3,0.042,none,2024-09-17\r\n30583,2144,EMEA,grocery,retail,51.89,8,0.164,none,2024-12-23\r\n30584,1593,AMER,fashion,online,53.55,6,0.200,none,2024-08-27\r\n30585,1241,APAC,grocery,online,54.04,5,0.150,bundle,2024-12-09\r\n30586,1075,AMER,fashion,mobile,71.03,5,0.105,none,2024-06-11\r\n30587,2005,APAC,grocery,retail,37.62,6,0.248,none,2024-06-24\r\n30588,1483,EMEA,grocery,online,33.46,3,0.207,none,2024-09-25\r\n30589,2459,AMER,toys,online,119.14,6,0.138,coupon,2024-12-13\r\n30590,1148,AMER,home,online,43.82,7,0.177,bundle,2024-12-13\r\n30591,1391,LATAM,sports,mobile,64.68,1,0.073,none,2024-12-03\r\n30592,1124,AMER,grocery,online,1
16.10,1,0.117,none,2024-08-11\r\n30593,2341,EMEA,fashion,online,105.94,7,0.209,none,2024-06-10\r\n30594,1623,AMER,electronics,retail,82.42,1,0.124,bundle,2024-09-06\r\n30595,1513,APAC,home,retail,44.93,6,0.216,bundle,2024-09-16\r\n30596,1315,AMER,fashion,online,57.42,5,0.054,none,2024-03-22\r\n30597,2260,EMEA,home,partner,50.73,1,0.173,none,2024-12-26\r\n30598,1316,APAC,grocery,online,55.90,6,0.055,loyalty,2024-03-09\r\n30599,2369,LATAM,sports,retail,34.72,6,0.087,none,2024-01-21\r\n30600,1815,APAC,sports,retail,43.54,4,0.019,none,2024-09-03\r\n30601,1662,LATAM,grocery,mobile,89.13,8,0.171,bundle,2024-01-08\r\n30602,1362,AMER,grocery,retail,21.59,6,0.045,none,2024-06-17\r\n30603,1952,EMEA,toys,mobile,30.53,4,0.202,loyalty,2024-01-10\r\n30604,1014,EMEA,fashion,online,51.89,6,0.120,none,2024-10-01\r\n30605,2070,APAC,electronics,retail,18.36,4,0.227,coupon,2024-07-07\r\n30606,1641,EMEA,sports,retail,65.62,7,0.185,coupon,2024-10-25\r\n30607,1863,EMEA,fashion,online,40.90,8,0.057,none,2024-11-03\r\n30608,2291,EMEA,grocery,online,104.04,6,0.222,none,2024-03-04\r\n30609,1525,APAC,electronics,online,75.00,4,0.242,none,2024-04-14\r\n30610,1218,AMER,grocery,retail,46.54,8,0.165,coupon,2024-12-12\r\n30611,1729,AMER,fashion,retail,104.94,2,0.105,loyalty,2024-11-08\r\n30612,1415,AMER,grocery,retail,61.59,7,0.250,none,2024-02-12\r\n30613,1813,EMEA,sports,online,83.60,5,0.012,coupon,2024-10-23\r\n30614,2257,AMER,fashion,retail,82.72,4,0.005,bundle,2024-06-08\r\n30615,1063,AMER,electronics,retail,22.13,6,0.108,none,2024-10-09\r\n30616,1052,LATAM,grocery,online,48.46,4,0.081,none,2024-09-04\r\n30617,1615,LATAM,fashion,mobile,97.34,3,0.149,none,2024-10-23\r\n30618,1643,EMEA,home,online,56.48,7,0.238,none,2024-05-28\r\n30619,1672,APAC,sports,online,71.90,2,0.048,loyalty,2024-11-06\r\n30620,1849,EMEA,electronics,mobile,29.57,5,0.061,none,2024-02-12\r\n30621,1165,AMER,fashion,online,28.23,8,0.205,loyalty,2024-11-15\r\n30622,1468,AMER,grocery,online,49.62,5,0.159,coupon,2024-08-23\r\n306
23,1112,APAC,fashion,mobile,98.67,6,0.158,bundle,2024-11-03\r\n30624,2258,AMER,fashion,online,38.61,2,0.214,coupon,2024-02-06\r\n30625,1583,AMER,home,mobile,26.61,3,0.075,none,2024-11-16\r\n30626,1474,LATAM,electronics,retail,106.17,3,0.150,none,2024-07-06\r\n30627,2183,EMEA,home,retail,104.11,5,0.106,coupon,2024-10-14\r\n30628,1745,APAC,electronics,online,59.79,2,0.026,loyalty,2024-07-21\r\n30629,1292,LATAM,electronics,online,27.18,6,0.012,none,2024-10-14\r\n30630,1594,LATAM,sports,retail,47.87,3,0.114,loyalty,2024-07-05\r\n30631,2477,APAC,home,mobile,27.61,4,0.221,coupon,2024-01-26\r\n30632,1729,AMER,electronics,retail,23.07,4,0.146,none,2024-07-01\r\n30633,1996,APAC,electronics,partner,42.76,4,0.226,none,2024-02-17\r\n30634,1076,LATAM,sports,mobile,105.20,3,0.159,none,2024-06-19\r\n30635,1880,LATAM,electronics,online,44.38,4,0.028,none,2024-07-22\r\n30636,1130,LATAM,toys,online,71.93,3,0.103,bundle,2024-05-17\r\n30637,2304,LATAM,sports,retail,67.44,2,0.165,none,2024-12-26\r\n30638,1237,LATAM,fashion,retail,122.26,5,0.162,bundle,2024-01-16\r\n30639,1733,LATAM,electronics,retail,38.24,5,0.020,none,2024-08-15\r\n30640,1086,AMER,toys,partner,44.43,2,0.112,none,2024-07-28\r\n30641,2408,EMEA,toys,retail,66.55,3,0.116,none,2024-05-16\r\n30642,2359,LATAM,toys,online,65.34,1,0.036,bundle,2024-02-13\r\n30643,1840,LATAM,home,retail,39.64,8,0.215,none,2024-12-04\r\n30644,1211,EMEA,sports,online,24.41,8,0.207,bundle,2024-08-23\r\n30645,2293,LATAM,sports,online,49.61,3,0.153,none,2024-07-15\r\n30646,1032,AMER,sports,online,36.99,5,0.035,none,2024-06-08\r\n30647,1780,APAC,electronics,online,34.73,4,0.157,none,2024-05-16\r\n30648,1379,EMEA,grocery,retail,162.09,6,0.023,none,2024-11-15\r\n30649,2079,EMEA,toys,online,63.69,1,0.148,none,2024-06-27\r\n30650,2277,EMEA,electronics,retail,22.74,7,0.138,none,2024-02-25\r\n30651,1868,AMER,electronics,mobile,24.11,3,0.225,loyalty,2024-10-12\r\n30652,1672,APAC,fashion,mobile,36.86,4,0.152,loyalty,2024-10-24\r\n30653,1468,AMER,fashion,retai
l,89.53,6,0.105,none,2024-08-09\r\n30654,2401,LATAM,home,retail,47.16,1,0.049,bundle,2024-01-01\r\n30655,1646,APAC,home,online,53.91,1,0.205,none,2024-03-06\r\n30656,1927,EMEA,sports,retail,42.63,2,0.049,none,2024-03-11\r\n30657,1238,AMER,grocery,retail,128.98,6,0.152,none,2024-08-11\r\n30658,1280,LATAM,toys,retail,108.69,6,0.112,none,2024-04-09\r\n30659,1990,EMEA,fashion,retail,38.61,8,0.028,none,2024-05-14\r\n30660,2345,LATAM,grocery,mobile,41.53,7,0.229,coupon,2024-07-10\r\n30661,2448,APAC,grocery,online,181.02,3,0.021,none,2024-12-27\r\n30662,1558,EMEA,grocery,retail,69.75,1,0.128,none,2024-06-15\r\n30663,1030,EMEA,electronics,online,161.09,7,0.153,none,2024-03-05\r\n30664,1994,LATAM,electronics,retail,36.84,3,0.186,coupon,2024-06-13\r\n30665,1247,AMER,grocery,partner,34.04,5,0.084,none,2024-12-23\r\n30666,2428,LATAM,home,retail,28.86,1,0.228,bundle,2024-01-18\r\n30667,2333,APAC,grocery,online,70.50,3,0.239,bundle,2024-10-09\r\n30668,1476,APAC,fashion,online,67.39,5,0.129,bundle,2024-09-07\r\n30669,2453,AMER,grocery,mobile,52.24,5,0.073,bundle,2024-07-24\r\n30670,1492,APAC,sports,retail,96.26,2,0.095,none,2024-02-03\r\n30671,1799,EMEA,grocery,retail,88.35,4,0.183,bundle,2024-03-11\r\n30672,2366,APAC,fashion,retail,51.99,3,0.052,coupon,2024-05-15\r\n30673,1254,APAC,electronics,retail,49.84,4,0.198,none,2024-08-22\r\n30674,1433,EMEA,electronics,retail,50.61,3,0.159,coupon,2024-11-22\r\n30675,1610,LATAM,grocery,retail,56.53,5,0.180,bundle,2024-03-03\r\n30676,1689,LATAM,fashion,online,47.83,1,0.050,none,2024-06-23\r\n30677,2135,EMEA,grocery,retail,53.07,7,0.247,none,2024-09-17\r\n30678,1558,EMEA,home,partner,59.95,5,0.063,none,2024-01-13\r\n30679,1129,LATAM,fashion,online,35.82,8,0.148,none,2024-01-03\r\n30680,1829,EMEA,fashion,retail,41.47,2,0.157,coupon,2024-07-10\r\n30681,2358,AMER,grocery,partner,147.71,8,0.038,loyalty,2024-03-26\r\n30682,1884,APAC,fashion,retail,38.38,4,0.099,none,2024-05-22\r\n30683,1421,APAC,home,online,39.70,5,0.241,coupon,2024-09-22\r\n3068
4,1824,LATAM,home,mobile,76.56,7,0.233,coupon,2024-05-20\r\n30685,1798,AMER,fashion,partner,62.43,7,0.022,none,2024-12-08\r\n30686,2231,LATAM,electronics,online,49.62,8,0.158,none,2024-06-26\r\n30687,2148,EMEA,grocery,mobile,63.04,5,0.106,none,2024-07-13\r\n30688,1595,AMER,toys,online,89.97,7,0.171,coupon,2024-04-11\r\n30689,2016,LATAM,grocery,retail,26.95,5,0.059,none,2024-04-26\r\n30690,2434,APAC,home,mobile,46.15,1,0.247,none,2024-12-06\r\n30691,2344,LATAM,fashion,retail,71.07,4,0.172,none,2024-09-11\r\n30692,1777,AMER,electronics,retail,37.89,2,0.065,none,2024-10-10\r\n30693,1113,EMEA,grocery,retail,23.00,5,0.058,coupon,2024-11-08\r\n30694,1685,AMER,fashion,retail,86.81,7,0.122,none,2024-05-02\r\n30695,2313,LATAM,home,retail,82.77,6,0.003,none,2024-05-21\r\n30696,2487,LATAM,electronics,mobile,38.55,3,0.167,coupon,2024-07-08\r\n30697,2244,LATAM,home,online,99.59,7,0.098,coupon,2024-10-24\r\n30698,2476,APAC,toys,partner,107.70,5,0.243,none,2024-10-18\r\n30699,1730,AMER,toys,retail,58.24,2,0.082,none,2024-10-24\r\n30700,2026,LATAM,home,retail,42.66,6,0.223,none,2024-04-24\r\n30701,2143,AMER,fashion,online,129.03,5,0.181,none,2024-07-24\r\n30702,2229,APAC,electronics,retail,70.27,5,0.002,none,2024-03-09\r\n30703,1497,EMEA,electronics,online,114.41,3,0.189,bundle,2024-05-01\r\n30704,2336,APAC,fashion,retail,22.23,5,0.121,bundle,2024-01-25\r\n30705,1976,AMER,electronics,retail,56.16,2,0.004,bundle,2024-07-15\r\n30706,1458,APAC,toys,retail,54.17,6,0.023,none,2024-06-24\r\n30707,2034,LATAM,grocery,partner,147.89,3,0.169,none,2024-12-05\r\n30708,2443,LATAM,electronics,mobile,115.67,7,0.051,coupon,2024-01-19\r\n30709,1009,APAC,home,online,130.39,6,0.013,loyalty,2024-04-27\r\n30710,1672,APAC,home,mobile,28.98,5,0.248,none,2024-09-23\r\n30711,1545,AMER,electronics,mobile,95.45,6,0.204,coupon,2024-03-19\r\n30712,1714,APAC,fashion,online,112.68,7,0.182,none,2024-03-08\r\n30713,2253,AMER,sports,online,53.12,6,0.028,loyalty,2024-11-16\r\n30714,1745,APAC,electronics,retail,52.52
,2,0.123,coupon,2024-07-13\r\n30715,1063,AMER,home,partner,79.51,1,0.205,none,2024-08-13\r\n30716,2007,LATAM,sports,online,41.25,4,0.080,none,2024-11-03\r\n30717,1252,APAC,fashion,mobile,71.99,3,0.169,bundle,2024-02-10\r\n30718,2201,AMER,home,retail,56.70,3,0.077,bundle,2024-08-20\r\n30719,2025,EMEA,sports,retail,52.90,5,0.123,none,2024-10-14\r\n30720,2483,LATAM,electronics,online,67.75,7,0.141,bundle,2024-03-02\r\n30721,1664,LATAM,grocery,retail,108.72,2,0.013,bundle,2024-07-02\r\n30722,1605,APAC,sports,partner,33.54,6,0.166,bundle,2024-09-11\r\n30723,1080,LATAM,sports,online,25.64,6,0.135,loyalty,2024-08-08\r\n30724,2448,APAC,sports,retail,74.45,6,0.170,none,2024-09-22\r\n30725,2089,EMEA,sports,retail,52.01,4,0.122,coupon,2024-08-18\r\n30726,2147,LATAM,grocery,retail,48.74,3,0.162,none,2024-03-07\r\n30727,1600,AMER,toys,online,70.21,8,0.055,none,2024-07-28\r\n30728,2390,AMER,toys,online,61.39,6,0.230,coupon,2024-08-07\r\n30729,1642,EMEA,grocery,retail,19.65,6,0.014,none,2024-10-18\r\n30730,1948,EMEA,grocery,retail,19.85,5,0.205,none,2024-10-20\r\n30731,1105,AMER,fashion,online,88.00,5,0.072,none,2024-05-17\r\n30732,1017,AMER,grocery,online,82.58,1,0.237,bundle,2024-12-07\r\n30733,1460,LATAM,sports,online,31.22,8,0.213,loyalty,2024-05-12\r\n30734,2446,LATAM,sports,online,41.51,4,0.061,none,2024-10-02\r\n30735,1029,EMEA,home,online,22.34,4,0.224,none,2024-05-08\r\n30736,1481,LATAM,electronics,retail,49.11,8,0.219,coupon,2024-02-08\r\n30737,1471,EMEA,grocery,retail,131.07,6,0.160,coupon,2024-01-26\r\n30738,1409,APAC,home,retail,44.37,7,0.107,none,2024-11-22\r\n30739,1505,EMEA,fashion,retail,86.68,2,0.064,none,2024-11-13\r\n30740,1220,LATAM,toys,online,39.33,5,0.228,none,2024-04-14\r\n30741,1476,APAC,home,online,82.83,5,0.075,coupon,2024-06-26\r\n30742,2234,LATAM,electronics,mobile,44.84,6,0.200,none,2024-06-06\r\n30743,1356,LATAM,grocery,retail,88.63,3,0.183,bundle,2024-06-21\r\n30744,2355,EMEA,sports,retail,79.19,8,0.248,none,2024-01-01\r\n30745,1341,EMEA,grocery,re
tail,33.49,3,0.030,none,2024-10-15\r\n30746,2218,EMEA,electronics,mobile,66.39,2,0.076,loyalty,2024-09-22\r\n30747,2404,EMEA,electronics,retail,20.23,6,0.028,coupon,2024-05-19\r\n30748,1210,LATAM,electronics,retail,171.93,8,0.213,none,2024-10-06\r\n30749,2459,AMER,electronics,partner,43.03,1,0.237,none,2024-06-18\r\n30750,2238,AMER,home,mobile,67.37,6,0.155,bundle,2024-12-01\r\n30751,1262,APAC,toys,retail,24.82,8,0.064,none,2024-09-07\r\n30752,1480,APAC,sports,mobile,18.77,2,0.060,none,2024-06-20\r\n30753,2236,APAC,sports,mobile,39.68,7,0.093,coupon,2024-02-27\r\n30754,2051,APAC,grocery,mobile,34.70,6,0.203,bundle,2024-09-28\r\n30755,1854,AMER,fashion,online,50.60,2,0.219,none,2024-09-08\r\n30756,1541,APAC,toys,online,41.96,3,0.172,none,2024-03-13\r\n30757,1747,EMEA,electronics,retail,67.94,1,0.093,none,2024-02-08\r\n30758,1654,EMEA,grocery,online,18.28,8,0.108,none,2024-11-12\r\n30759,2217,LATAM,grocery,online,36.58,7,0.207,coupon,2024-03-21\r\n30760,2237,EMEA,fashion,online,67.36,1,0.177,loyalty,2024-07-20\r\n30761,1397,LATAM,electronics,online,31.71,5,0.017,coupon,2024-01-16\r\n30762,1989,LATAM,grocery,online,47.00,3,0.087,bundle,2024-10-24\r\n30763,1316,APAC,fashion,partner,53.93,4,0.185,none,2024-12-04\r\n30764,1548,EMEA,sports,partner,83.78,4,0.076,none,2024-05-25\r\n30765,2424,LATAM,toys,retail,37.90,6,0.142,bundle,2024-10-22\r\n30766,1745,APAC,electronics,retail,72.90,7,0.131,loyalty,2024-05-13\r\n30767,1321,EMEA,grocery,retail,41.57,3,0.224,coupon,2024-10-18\r\n30768,2482,EMEA,electronics,online,71.93,8,0.140,none,2024-07-25\r\n30769,1940,APAC,home,retail,73.94,1,0.233,bundle,2024-09-13\r\n30770,1281,AMER,sports,online,180.22,6,0.029,loyalty,2024-07-19\r\n30771,1469,EMEA,home,online,66.06,6,0.059,none,2024-10-07\r\n30772,1952,EMEA,home,online,62.62,7,0.213,loyalty,2024-10-09\r\n30773,1225,APAC,home,online,65.75,5,0.194,loyalty,2024-10-02\r\n30774,1266,AMER,grocery,online,85.51,6,0.228,coupon,2024-06-05\r\n30775,1854,AMER,grocery,mobile,67.43,2,0.232,none,20
24-03-02\r\n30776,1379,EMEA,grocery,retail,43.14,6,0.085,loyalty,2024-09-06\r\n30777,1192,EMEA,grocery,online,152.51,6,0.104,none,2024-03-05\r\n30778,1988,AMER,home,retail,33.61,5,0.052,bundle,2024-12-14\r\n30779,1694,APAC,electronics,retail,35.21,7,0.067,coupon,2024-05-17\r\n30780,1959,EMEA,electronics,mobile,62.76,8,0.018,none,2024-07-17\r\n30781,1087,AMER,grocery,retail,80.13,8,0.131,none,2024-06-05\r\n30782,2013,APAC,fashion,retail,56.91,5,0.192,none,2024-05-22\r\n30783,2331,APAC,electronics,online,23.25,5,0.074,bundle,2024-10-13\r\n30784,1309,EMEA,grocery,partner,54.56,1,0.101,coupon,2024-03-18\r\n30785,2050,APAC,home,mobile,201.46,2,0.119,loyalty,2024-11-26\r\n30786,1194,APAC,toys,mobile,90.16,4,0.026,coupon,2024-08-13\r\n30787,1674,LATAM,electronics,online,65.01,3,0.240,none,2024-10-27\r\n30788,1717,AMER,grocery,retail,46.85,7,0.039,bundle,2024-01-26\r\n30789,2492,LATAM,electronics,online,88.05,6,0.030,coupon,2024-11-14\r\n30790,1898,EMEA,electronics,retail,34.17,6,0.019,none,2024-12-20\r\n30791,1506,EMEA,fashion,online,63.22,8,0.076,none,2024-10-28\r\n30792,2444,EMEA,toys,retail,74.08,3,0.161,none,2024-08-12\r\n30793,2362,AMER,electronics,retail,30.27,1,0.139,none,2024-01-07\r\n30794,1485,APAC,electronics,online,96.45,1,0.241,none,2024-12-13\r\n30795,2386,EMEA,grocery,partner,69.08,1,0.136,bundle,2024-10-12\r\n30796,1330,EMEA,home,online,47.57,8,0.151,none,2024-11-08\r\n30797,2029,APAC,grocery,retail,24.46,1,0.106,coupon,2024-02-26\r\n30798,1495,LATAM,home,retail,81.61,1,0.141,none,2024-07-03\r\n30799,2347,AMER,toys,retail,60.46,6,0.039,none,2024-01-19\r\n30800,1489,AMER,home,retail,29.81,1,0.096,bundle,2024-03-16\r\n30801,1209,AMER,fashion,online,81.64,5,0.081,none,2024-05-13\r\n30802,1169,LATAM,electronics,online,108.21,5,0.114,bundle,2024-02-06\r\n30803,2141,AMER,grocery,retail,131.47,2,0.094,none,2024-09-18\r\n30804,1677,EMEA,toys,retail,54.96,4,0.164,none,2024-11-08\r\n30805,1368,EMEA,sports,online,42.97,2,0.116,none,2024-07-09\r\n30806,1300,EMEA,home,o
nline,53.74,1,0.127,none,2024-02-03\r\n30807,2385,APAC,grocery,online,49.42,3,0.037,bundle,2024-12-09\r\n30808,2138,APAC,home,online,47.90,8,0.095,none,2024-12-07\r\n30809,1286,EMEA,grocery,mobile,89.26,2,0.154,coupon,2024-07-20\r\n30810,1060,LATAM,home,retail,65.96,8,0.084,none,2024-12-19\r\n30811,1119,LATAM,grocery,mobile,92.42,4,0.029,coupon,2024-09-16\r\n30812,1875,EMEA,grocery,retail,41.93,7,0.130,bundle,2024-11-01\r\n30813,1501,AMER,sports,retail,74.16,5,0.030,none,2024-01-06\r\n30814,1241,APAC,grocery,online,60.17,8,0.012,loyalty,2024-08-01\r\n30815,2379,AMER,electronics,mobile,146.64,6,0.121,coupon,2024-12-21\r\n30816,1111,APAC,home,online,44.33,1,0.109,coupon,2024-09-11\r\n30817,2291,EMEA,toys,online,50.04,4,0.045,none,2024-10-16\r\n30818,1581,APAC,electronics,retail,79.01,2,0.182,none,2024-05-24\r\n30819,1933,EMEA,electronics,online,23.05,3,0.188,none,2024-05-24\r\n30820,1173,LATAM,home,retail,45.05,1,0.092,none,2024-07-17\r\n30821,2147,LATAM,grocery,online,108.15,7,0.237,none,2024-06-20\r\n30822,1563,EMEA,electronics,online,111.22,3,0.076,loyalty,2024-04-14\r\n30823,1348,AMER,electronics,online,98.41,8,0.184,none,2024-09-11\r\n30824,1628,EMEA,grocery,online,48.33,1,0.247,coupon,2024-12-15\r\n30825,2068,LATAM,sports,mobile,23.23,5,0.215,none,2024-09-03\r\n30826,1810,LATAM,fashion,online,87.16,4,0.184,none,2024-01-14\r\n30827,1760,LATAM,home,mobile,57.63,4,0.060,none,2024-03-27\r\n30828,1352,AMER,electronics,online,42.65,1,0.045,none,2024-12-21\r\n30829,1057,LATAM,fashion,online,79.25,7,0.068,loyalty,2024-07-08\r\n30830,1968,EMEA,fashion,mobile,38.31,8,0.014,bundle,2024-08-11\r\n30831,2128,EMEA,home,retail,51.17,5,0.142,coupon,2024-01-26\r\n30832,2058,LATAM,toys,retail,46.99,4,0.043,none,2024-10-24\r\n30833,2452,LATAM,sports,online,98.89,3,0.117,none,2024-11-26\r\n30834,1165,AMER,electronics,retail,82.61,8,0.243,loyalty,2024-09-05\r\n30835,2107,APAC,electronics,online,51.18,6,0.172,none,2024-04-04\r\n30836,2191,AMER,sports,online,104.89,8,0.008,loyalty,2024
-10-28\r\n30837,1035,EMEA,home,retail,27.19,6,0.148,none,2024-09-18\r\n30838,2394,EMEA,electronics,mobile,39.51,2,0.179,none,2024-11-09\r\n30839,1949,AMER,sports,retail,44.40,7,0.099,none,2024-08-16\r\n30840,2170,EMEA,grocery,mobile,112.48,8,0.046,none,2024-01-26\r\n30841,1553,LATAM,electronics,online,54.46,2,0.179,loyalty,2024-02-28\r\n30842,1621,APAC,grocery,online,98.25,6,0.139,none,2024-10-16\r\n30843,1716,LATAM,electronics,online,41.78,7,0.166,coupon,2024-05-03\r\n30844,1978,AMER,sports,online,23.70,6,0.196,coupon,2024-04-09\r\n30845,1616,APAC,fashion,partner,67.38,4,0.101,none,2024-12-01\r\n30846,2064,LATAM,grocery,retail,160.15,2,0.082,coupon,2024-02-10\r\n30847,1903,LATAM,toys,online,53.79,7,0.067,coupon,2024-03-10\r\n30848,1081,AMER,fashion,online,36.40,1,0.102,none,2024-11-16\r\n30849,1653,APAC,electronics,retail,56.72,7,0.134,loyalty,2024-10-23\r\n30850,1156,APAC,electronics,online,33.22,4,0.234,coupon,2024-05-22\r\n30851,1335,APAC,grocery,online,52.32,8,0.114,coupon,2024-12-25\r\n30852,1056,LATAM,toys,retail,192.44,1,0.017,none,2024-05-17\r\n30853,1847,LATAM,grocery,mobile,47.40,7,0.072,none,2024-01-10\r\n30854,1183,AMER,fashion,online,77.07,1,0.087,coupon,2024-02-04\r\n30855,2258,AMER,grocery,online,47.31,1,0.135,none,2024-03-23\r\n30856,1720,AMER,toys,retail,96.04,8,0.215,none,2024-04-10\r\n30857,1811,APAC,electronics,retail,48.75,1,0.040,none,2024-05-10\r\n30858,2029,APAC,grocery,retail,44.95,4,0.141,none,2024-04-01\r\n30859,2239,EMEA,toys,online,61.52,5,0.003,none,2024-01-13\r\n30860,2380,AMER,grocery,mobile,128.81,5,0.183,none,2024-01-26\r\n30861,1197,LATAM,grocery,retail,58.09,8,0.097,coupon,2024-06-03\r\n30862,1602,EMEA,toys,online,75.67,2,0.191,coupon,2024-03-23\r\n30863,2386,EMEA,home,retail,36.61,5,0.147,bundle,2024-03-07\r\n30864,1430,EMEA,sports,retail,109.16,2,0.047,coupon,2024-08-22\r\n30865,1909,APAC,sports,retail,67.21,4,0.115,none,2024-07-03\r\n30866,1886,LATAM,grocery,online,117.08,6,0.052,bundle,2024-11-24\r\n30867,1845,AMER,sports,ret
ail,52.24,3,0.072,none,2024-03-15\r\n30868,1542,APAC,sports,mobile,130.09,7,0.029,none,2024-10-09\r\n30869,2394,EMEA,grocery,partner,23.03,3,0.207,bundle,2024-02-23\r\n30870,1348,AMER,toys,online,70.92,6,0.212,none,2024-02-20\r\n30871,1345,AMER,electronics,mobile,123.45,7,0.235,coupon,2024-12-24\r\n30872,1715,AMER,electronics,retail,37.68,3,0.125,none,2024-09-06\r\n30873,2328,EMEA,home,retail,34.08,7,0.226,none,2024-10-05\r\n30874,2292,EMEA,sports,online,88.90,3,0.055,loyalty,2024-08-01\r\n30875,2020,AMER,electronics,mobile,58.40,4,0.043,coupon,2024-04-02\r\n30876,1137,APAC,grocery,retail,107.04,4,0.163,coupon,2024-10-17\r\n30877,1293,AMER,fashion,online,45.32,4,0.236,bundle,2024-09-05\r\n30878,2128,EMEA,toys,retail,143.19,6,0.060,none,2024-12-24\r\n30879,2132,LATAM,home,partner,47.13,8,0.236,coupon,2024-07-15\r\n30880,1023,APAC,home,online,56.76,8,0.000,bundle,2024-10-10\r\n30881,1663,LATAM,home,retail,80.40,7,0.008,none,2024-05-25\r\n30882,1403,APAC,grocery,online,61.63,3,0.221,coupon,2024-07-26\r\n30883,2149,EMEA,fashion,retail,105.15,8,0.086,none,2024-10-07\r\n30884,1583,AMER,grocery,online,47.26,7,0.052,none,2024-07-26\r\n30885,1100,AMER,toys,online,78.99,2,0.244,none,2024-05-11\r\n30886,1065,AMER,grocery,retail,25.36,8,0.131,bundle,2024-05-21\r\n30887,1526,EMEA,electronics,online,47.99,1,0.249,coupon,2024-11-07\r\n30888,1020,APAC,toys,online,39.60,6,0.015,none,2024-10-15\r\n30889,1846,APAC,grocery,online,54.48,3,0.115,coupon,2024-09-03\r\n30890,2155,APAC,sports,partner,32.74,4,0.185,none,2024-10-04\r\n30891,1132,EMEA,sports,retail,63.41,3,0.212,none,2024-05-05\r\n30892,2131,APAC,sports,retail,31.31,8,0.185,loyalty,2024-12-07\r\n30893,1796,LATAM,home,online,75.96,4,0.086,none,2024-02-22\r\n30894,1885,EMEA,sports,mobile,25.33,2,0.146,coupon,2024-06-05\r\n30895,1433,EMEA,home,online,38.50,2,0.085,coupon,2024-08-27\r\n30896,1942,APAC,grocery,retail,52.00,4,0.081,loyalty,2024-07-26\r\n30897,1102,APAC,fashion,retail,55.20,8,0.123,bundle,2024-10-03\r\n30898,2004,LATA
M,sports,online,92.18,6,0.019,none,2024-01-26\r\n30899,2054,AMER,electronics,online,37.03,4,0.248,none,2024-12-11\r\n30900,2161,LATAM,home,retail,65.54,3,0.109,coupon,2024-02-06\r\n30901,1999,EMEA,grocery,online,110.80,8,0.057,bundle,2024-02-09\r\n30902,1682,EMEA,toys,retail,19.18,5,0.005,none,2024-09-27\r\n30903,1795,EMEA,home,partner,29.75,5,0.070,none,2024-06-11\r\n30904,1900,APAC,sports,mobile,55.43,1,0.182,none,2024-01-19\r\n30905,1014,EMEA,electronics,mobile,79.85,2,0.092,coupon,2024-01-08\r\n30906,1884,APAC,fashion,retail,41.90,5,0.239,bundle,2024-04-28\r\n30907,1123,LATAM,grocery,partner,53.86,3,0.108,none,2024-07-23\r\n30908,2391,EMEA,electronics,retail,71.93,7,0.210,bundle,2024-12-08\r\n30909,1510,EMEA,sports,online,68.84,8,0.026,coupon,2024-05-18\r\n30910,1746,LATAM,sports,retail,143.24,8,0.097,none,2024-11-14\r\n30911,1703,AMER,grocery,online,38.24,6,0.153,none,2024-03-05\r\n30912,1963,AMER,electronics,retail,47.30,3,0.070,none,2024-12-24\r\n30913,2366,APAC,grocery,online,43.70,1,0.041,none,2024-11-11\r\n30914,2284,EMEA,home,retail,78.05,4,0.050,coupon,2024-10-13\r\n30915,1031,AMER,electronics,partner,29.35,7,0.128,coupon,2024-03-22\r\n30916,1516,EMEA,electronics,retail,51.68,7,0.248,none,2024-08-21\r\n30917,2332,APAC,sports,online,126.41,2,0.230,none,2024-01-25\r\n30918,1208,AMER,sports,online,38.98,7,0.007,loyalty,2024-01-26\r\n30919,2314,EMEA,toys,online,24.51,1,0.118,bundle,2024-05-14\r\n30920,2282,EMEA,sports,online,110.43,6,0.082,loyalty,2024-07-24\r\n30921,1124,AMER,toys,retail,69.36,7,0.141,none,2024-06-02\r\n30922,1275,EMEA,grocery,online,50.56,2,0.067,none,2024-03-07\r\n30923,1670,EMEA,electronics,partner,115.26,1,0.193,bundle,2024-05-11\r\n30924,2261,EMEA,grocery,online,71.54,3,0.168,none,2024-06-13\r\n30925,2228,EMEA,grocery,partner,76.18,3,0.124,none,2024-09-21\r\n30926,1644,EMEA,electronics,online,46.89,2,0.092,bundle,2024-01-10\r\n30927,1546,EMEA,electronics,mobile,49.71,4,0.166,none,2024-08-03\r\n30928,1545,AMER,home,mobile,24.84,8,0.045,
coupon,2024-02-16\r\n30929,1411,LATAM,electronics,retail,20.16,1,0.040,coupon,2024-11-13\r\n30930,1676,LATAM,home,retail,50.39,8,0.172,none,2024-05-08\r\n30931,1429,APAC,home,online,88.25,5,0.240,bundle,2024-08-19\r\n30932,1201,LATAM,electronics,partner,133.30,6,0.164,loyalty,2024-06-25\r\n30933,1115,AMER,electronics,retail,30.03,8,0.038,none,2024-10-06\r\n30934,1732,LATAM,home,mobile,40.88,6,0.021,none,2024-10-10\r\n30935,1911,LATAM,electronics,retail,57.04,6,0.100,none,2024-08-27\r\n30936,2003,LATAM,home,partner,38.94,6,0.073,none,2024-05-24\r\n30937,1360,APAC,electronics,retail,59.23,5,0.222,none,2024-12-19\r\n30938,1599,APAC,home,online,43.53,3,0.225,none,2024-03-15\r\n30939,2324,AMER,home,mobile,68.70,2,0.133,coupon,2024-04-07\r\n30940,1087,AMER,grocery,retail,24.89,7,0.097,bundle,2024-04-19\r\n30941,1594,LATAM,grocery,online,86.33,8,0.048,none,2024-02-01\r\n30942,2382,LATAM,sports,retail,40.87,5,0.224,none,2024-03-15\r\n30943,1137,APAC,grocery,retail,63.54,3,0.223,coupon,2024-12-10\r\n30944,1194,APAC,toys,retail,51.03,2,0.150,coupon,2024-12-19\r\n30945,1379,EMEA,home,retail,43.99,5,0.077,loyalty,2024-11-27\r\n30946,1622,LATAM,home,partner,54.41,5,0.198,none,2024-02-14\r\n30947,1342,LATAM,electronics,retail,34.40,8,0.212,none,2024-12-10\r\n30948,2012,APAC,home,retail,34.17,2,0.098,none,2024-02-17\r\n30949,2279,LATAM,grocery,online,32.66,5,0.192,none,2024-10-09\r\n30950,1790,AMER,home,online,54.60,1,0.039,coupon,2024-09-04\r\n30951,1765,EMEA,toys,online,46.75,6,0.233,none,2024-03-15\r\n30952,1408,AMER,grocery,online,37.60,8,0.145,none,2024-07-07\r\n30953,1253,AMER,electronics,partner,35.80,5,0.014,none,2024-10-04\r\n30954,1504,AMER,sports,online,61.19,7,0.072,none,2024-03-01\r\n30955,2170,EMEA,sports,online,90.62,3,0.245,none,2024-10-17\r\n30956,2372,AMER,grocery,online,35.47,6,0.230,none,2024-08-26\r\n30957,1990,EMEA,grocery,retail,77.93,6,0.195,loyalty,2024-02-16\r\n30958,1992,LATAM,electronics,online,55.93,1,0.111,loyalty,2024-03-25\r\n30959,1272,AMER,toys,re
tail,51.46,6,0.184,none,2024-07-09\r\n30960,1541,APAC,electronics,retail,41.40,4,0.207,bundle,2024-03-07\r\n30961,2009,LATAM,sports,partner,30.07,6,0.162,none,2024-02-12\r\n30962,2019,AMER,grocery,retail,129.11,5,0.208,loyalty,2024-02-13\r\n30963,1366,APAC,grocery,online,39.70,8,0.200,bundle,2024-02-24\r\n30964,1961,EMEA,electronics,retail,31.80,6,0.177,none,2024-07-28\r\n30965,1183,AMER,electronics,online,42.04,7,0.126,none,2024-03-15\r\n30966,1913,LATAM,home,online,29.24,2,0.192,none,2024-01-06\r\n30967,1945,AMER,electronics,online,38.35,1,0.175,none,2024-10-03\r\n30968,1315,AMER,home,mobile,66.69,5,0.227,none,2024-06-04\r\n30969,1116,LATAM,fashion,online,43.80,5,0.001,none,2024-05-03\r\n30970,2155,APAC,grocery,retail,25.38,2,0.094,none,2024-03-26\r\n30971,1660,AMER,home,retail,96.85,3,0.008,coupon,2024-10-27\r\n30972,1312,EMEA,grocery,online,61.43,6,0.020,none,2024-11-19\r\n30973,1551,APAC,electronics,online,62.31,5,0.194,none,2024-05-01\r\n30974,1528,EMEA,grocery,retail,99.94,4,0.112,loyalty,2024-04-10\r\n30975,1890,LATAM,electronics,retail,45.30,7,0.143,none,2024-03-09\r\n30976,1248,APAC,fashion,mobile,96.91,2,0.039,none,2024-03-12\r\n30977,1575,APAC,home,online,53.27,8,0.250,loyalty,2024-06-03\r\n30978,1860,EMEA,home,online,45.62,8,0.146,none,2024-11-12\r\n30979,1784,EMEA,sports,retail,50.34,5,0.024,coupon,2024-05-24\r\n30980,1227,AMER,grocery,retail,49.28,2,0.154,none,2024-08-27\r\n30981,1398,APAC,fashion,mobile,61.11,1,0.185,coupon,2024-10-23\r\n30982,2286,AMER,grocery,retail,62.67,6,0.082,coupon,2024-10-05\r\n30983,1255,AMER,fashion,online,62.79,5,0.019,none,2024-02-14\r\n30984,1711,APAC,grocery,retail,53.53,4,0.197,none,2024-02-04\r\n30985,1015,AMER,grocery,retail,90.20,6,0.045,none,2024-09-03\r\n30986,2296,AMER,grocery,mobile,60.28,4,0.055,none,2024-01-12\r\n30987,1824,LATAM,sports,retail,59.57,2,0.166,none,2024-01-15\r\n30988,2362,AMER,electronics,online,38.18,6,0.057,bundle,2024-01-08\r\n30989,2327,EMEA,electronics,retail,64.92,7,0.017,none,2024-01-21\r
\n30990,2180,AMER,electronics,mobile,72.91,4,0.143,none,2024-03-28\r\n30991,1025,EMEA,electronics,online,49.17,6,0.016,coupon,2024-02-10\r\n30992,1592,LATAM,toys,mobile,39.93,5,0.109,loyalty,2024-05-22\r\n30993,1398,APAC,grocery,online,41.26,1,0.061,none,2024-12-23\r\n30994,1974,EMEA,electronics,retail,60.66,7,0.250,none,2024-03-04\r\n30995,1834,AMER,fashion,online,51.32,8,0.055,none,2024-07-28\r\n30996,2051,APAC,home,retail,37.41,6,0.079,none,2024-03-19\r\n30997,2197,LATAM,electronics,mobile,55.89,8,0.195,bundle,2024-06-18\r\n30998,2119,AMER,home,online,30.68,3,0.121,loyalty,2024-07-19\r\n30999,1246,EMEA,grocery,online,54.36,3,0.002,none,2024-03-28\r\n31000,1655,LATAM,electronics,retail,38.36,3,0.088,coupon,2024-11-18\r\n31001,2065,EMEA,grocery,retail,39.72,3,0.245,none,2024-07-04\r\n31002,1082,EMEA,grocery,partner,27.18,7,0.130,none,2024-05-18\r\n31003,1458,APAC,sports,mobile,45.33,7,0.226,none,2024-03-09\r\n31004,2014,EMEA,grocery,online,46.88,4,0.168,none,2024-02-07\r\n31005,1160,LATAM,sports,online,77.57,4,0.093,bundle,2024-03-17\r\n31006,1345,AMER,electronics,online,58.89,8,0.208,coupon,2024-03-24\r\n31007,1549,APAC,sports,online,45.53,5,0.048,none,2024-05-17\r\n31008,1917,LATAM,grocery,retail,145.68,4,0.135,none,2024-02-20\r\n31009,1700,EMEA,electronics,mobile,15.38,7,0.107,none,2024-08-08\r\n31010,1172,APAC,toys,retail,42.18,2,0.114,none,2024-01-01\r\n31011,1171,APAC,electronics,online,84.60,4,0.007,none,2024-08-07\r\n31012,2125,LATAM,grocery,mobile,40.00,6,0.144,none,2024-05-15\r\n31013,2098,AMER,grocery,partner,35.96,4,0.123,none,2024-10-23\r\n31014,1923,LATAM,home,online,66.92,5,0.119,loyalty,2024-10-05\r\n31015,2262,APAC,sports,retail,85.98,3,0.024,bundle,2024-03-06\r\n31016,1499,EMEA,grocery,online,47.98,6,0.109,none,2024-09-07\r\n31017,1465,AMER,home,online,62.32,3,0.059,loyalty,2024-06-27\r\n31018,1299,LATAM,electronics,retail,85.71,4,0.216,bundle,2024-10-12\r\n31019,1299,LATAM,grocery,online,116.65,3,0.032,coupon,2024-10-04\r\n31020,1775,EMEA,grocery
,mobile,23.19,8,0.043,bundle,2024-03-07\r\n31021,2362,AMER,grocery,retail,37.97,4,0.138,coupon,2024-07-07\r\n31022,1844,APAC,grocery,retail,43.94,3,0.229,none,2024-11-06\r\n31023,1233,AMER,home,retail,43.40,1,0.062,coupon,2024-02-08\r\n31024,1013,LATAM,sports,online,63.42,1,0.166,coupon,2024-02-02\r\n31025,1437,EMEA,grocery,retail,34.00,1,0.144,bundle,2024-11-10\r\n31026,2418,AMER,toys,retail,48.28,6,0.238,none,2024-02-28\r\n31027,1830,EMEA,toys,retail,47.20,5,0.030,coupon,2024-04-27\r\n31028,1834,AMER,electronics,mobile,71.43,4,0.035,bundle,2024-05-26\r\n31029,2407,EMEA,electronics,retail,112.96,5,0.192,bundle,2024-11-10\r\n31030,2045,LATAM,fashion,retail,14.74,2,0.191,none,2024-09-04\r\n31031,1426,AMER,electronics,mobile,168.95,3,0.091,none,2024-05-07\r\n31032,2128,EMEA,home,online,13.11,3,0.079,none,2024-01-14\r\n31033,2334,LATAM,sports,online,60.11,6,0.223,none,2024-02-16\r\n31034,1310,AMER,fashion,online,37.12,4,0.146,none,2024-11-16\r\n31035,2290,LATAM,fashion,retail,49.51,6,0.101,coupon,2024-12-17\r\n31036,1299,LATAM,grocery,mobile,83.68,1,0.223,coupon,2024-02-21\r\n31037,2097,AMER,grocery,online,127.34,5,0.057,coupon,2024-03-25\r\n31038,2289,APAC,home,partner,63.21,4,0.101,none,2024-09-10\r\n31039,2269,EMEA,home,online,40.61,2,0.033,bundle,2024-02-13\r\n31040,1130,LATAM,fashion,retail,55.25,8,0.123,loyalty,2024-12-26\r\n31041,2481,APAC,grocery,retail,18.66,4,0.231,none,2024-09-07\r\n31042,2311,LATAM,fashion,online,33.43,3,0.220,none,2024-12-20\r\n31043,1090,AMER,home,online,106.21,3,0.142,coupon,2024-11-05\r\n31044,1735,LATAM,electronics,online,47.66,6,0.060,none,2024-09-17\r\n31045,2315,LATAM,home,mobile,62.43,3,0.088,bundle,2024-11-01\r\n31046,1024,APAC,sports,online,35.75,5,0.100,none,2024-02-16\r\n31047,1734,AMER,home,online,34.82,5,0.179,none,2024-07-01\r\n31048,1856,EMEA,home,online,15.79,5,0.189,bundle,2024-09-13\r\n31049,2132,LATAM,home,mobile,142.74,6,0.121,none,2024-05-14\r\n31050,2062,EMEA,fashion,retail,52.78,4,0.211,coupon,2024-04-24\r\n31051,12
78,AMER,sports,retail,113.37,4,0.039,none,2024-08-26\r\n31052,1749,LATAM,electronics,online,57.05,8,0.230,none,2024-04-18\r\n31053,2029,APAC,electronics,retail,48.30,2,0.127,bundle,2024-02-02\r\n31054,1019,APAC,electronics,retail,104.88,1,0.071,loyalty,2024-12-08\r\n31055,1823,EMEA,fashion,mobile,89.96,4,0.060,none,2024-12-23\r\n31056,2353,AMER,electronics,online,69.28,1,0.247,none,2024-01-13\r\n31057,1769,LATAM,fashion,online,59.56,5,0.180,none,2024-08-06\r\n31058,1585,AMER,fashion,online,55.19,7,0.221,bundle,2024-06-15\r\n31059,1490,AMER,sports,partner,66.86,5,0.130,coupon,2024-03-13\r\n31060,1085,EMEA,sports,online,40.10,3,0.011,coupon,2024-06-25\r\n31061,2261,EMEA,grocery,online,35.79,5,0.015,loyalty,2024-01-16\r\n31062,1565,AMER,grocery,online,58.77,8,0.189,none,2024-03-07\r\n31063,1254,APAC,fashion,retail,63.22,8,0.080,none,2024-10-20\r\n31064,1290,EMEA,toys,online,67.41,1,0.225,coupon,2024-03-22\r\n31065,1492,APAC,sports,retail,72.77,7,0.181,none,2024-11-08\r\n31066,2269,EMEA,home,mobile,38.13,1,0.128,none,2024-03-23\r\n31067,1080,LATAM,grocery,mobile,29.37,3,0.219,none,2024-08-02\r\n31068,1209,AMER,toys,retail,31.74,5,0.176,bundle,2024-06-25\r\n31069,1243,AMER,electronics,online,88.24,7,0.046,coupon,2024-09-19\r\n31070,2356,LATAM,sports,retail,79.50,8,0.004,coupon,2024-06-03\r\n31071,1248,APAC,sports,retail,43.26,1,0.219,loyalty,2024-09-22\r\n31072,1552,EMEA,grocery,online,92.06,8,0.197,none,2024-11-12\r\n31073,1456,APAC,grocery,online,42.51,6,0.247,loyalty,2024-05-12\r\n31074,1422,LATAM,sports,online,66.87,7,0.051,none,2024-11-10\r\n31075,2222,LATAM,sports,mobile,46.16,6,0.043,coupon,2024-09-13\r\n31076,2238,AMER,fashion,retail,47.64,8,0.130,coupon,2024-07-22\r\n31077,1542,APAC,grocery,online,61.63,5,0.043,coupon,2024-10-09\r\n31078,2151,APAC,fashion,mobile,132.45,7,0.136,none,2024-05-14\r\n31079,1979,APAC,fashion,retail,72.17,3,0.152,none,2024-05-10\r\n31080,1054,EMEA,home,retail,79.71,4,0.191,bundle,2024-12-19\r\n31081,2423,LATAM,sports,retail,54.97,8,0.0
83,bundle,2024-12-08\r\n31082,2447,AMER,grocery,online,47.69,1,0.117,none,2024-08-01\r\n31083,1870,EMEA,electronics,online,35.29,5,0.198,none,2024-06-27\r\n31084,1204,AMER,grocery,retail,35.95,6,0.043,none,2024-03-08\r\n31085,1517,AMER,grocery,online,65.53,8,0.075,none,2024-08-09\r\n31086,1408,AMER,toys,retail,26.52,1,0.174,coupon,2024-01-02\r\n31087,1542,APAC,grocery,partner,34.51,3,0.145,none,2024-05-11\r\n31088,1823,EMEA,toys,retail,129.75,4,0.245,loyalty,2024-09-04\r\n31089,2237,EMEA,home,online,65.07,1,0.128,none,2024-10-21\r\n31090,1911,LATAM,sports,partner,92.67,1,0.185,none,2024-04-16\r\n31091,1073,AMER,grocery,mobile,34.42,4,0.120,loyalty,2024-06-21\r\n31092,2112,LATAM,electronics,online,29.00,5,0.164,none,2024-12-27\r\n31093,1211,EMEA,fashion,retail,55.11,1,0.210,none,2024-02-28\r\n31094,1000,APAC,grocery,online,79.20,3,0.096,none,2024-07-17\r\n31095,2448,APAC,grocery,online,106.20,1,0.229,loyalty,2024-04-12\r\n31096,1291,EMEA,toys,retail,163.41,4,0.112,none,2024-11-16\r\n31097,2402,AMER,home,mobile,103.94,7,0.168,bundle,2024-07-24\r\n31098,2122,AMER,sports,retail,34.26,4,0.046,none,2024-01-25\r\n31099,2090,AMER,fashion,retail,123.05,6,0.175,coupon,2024-10-27\r\n31100,2491,APAC,grocery,mobile,87.35,8,0.094,bundle,2024-03-08\r\n31101,1603,EMEA,sports,retail,108.24,2,0.168,none,2024-09-12\r\n31102,1692,LATAM,grocery,online,49.98,5,0.249,coupon,2024-06-20\r\n31103,1556,AMER,electronics,online,39.02,5,0.050,coupon,2024-12-28\r\n31104,1882,AMER,grocery,online,15.60,5,0.108,none,2024-05-20\r\n31105,1868,AMER,electronics,online,85.56,5,0.238,bundle,2024-04-18\r\n31106,1459,LATAM,fashion,online,67.31,6,0.120,none,2024-08-03\r\n31107,1398,APAC,toys,online,43.31,6,0.212,none,2024-08-27\r\n31108,2460,AMER,home,online,54.09,5,0.233,loyalty,2024-02-03\r\n31109,1059,AMER,electronics,retail,35.39,3,0.020,none,2024-09-07\r\n31110,2334,LATAM,grocery,partner,48.36,7,0.071,none,2024-08-15\r\n31111,1967,EMEA,sports,retail,39.42,4,0.166,none,2024-07-10\r\n31112,2192,APAC,elect
ronics,mobile,65.28,4,0.116,none,2024-05-23\r\n31113,1116,LATAM,home,retail,13.82,8,0.198,none,2024-04-13\r\n31114,1599,APAC,electronics,online,51.70,7,0.132,coupon,2024-05-18\r\n31115,1894,APAC,sports,partner,78.84,8,0.079,none,2024-06-03\r\n31116,2090,AMER,electronics,retail,32.41,8,0.056,none,2024-01-20\r\n31117,1838,AMER,fashion,retail,99.82,6,0.137,coupon,2024-09-08\r\n31118,2417,LATAM,grocery,online,35.13,3,0.120,none,2024-05-13\r\n31119,2004,LATAM,toys,retail,142.72,4,0.188,bundle,2024-12-03\r\n31120,1864,EMEA,grocery,online,56.50,3,0.089,coupon,2024-03-01\r\n31121,1997,APAC,grocery,online,78.30,7,0.210,coupon,2024-06-12\r\n31122,1800,APAC,sports,retail,47.33,7,0.239,coupon,2024-11-03\r\n31123,1520,APAC,sports,retail,148.76,1,0.080,none,2024-01-17\r\n31124,1495,LATAM,grocery,mobile,88.52,4,0.093,none,2024-08-11\r\n31125,1455,APAC,fashion,online,43.83,3,0.047,none,2024-10-01\r\n31126,2393,LATAM,sports,online,63.53,2,0.137,none,2024-07-17\r\n31127,1177,LATAM,grocery,retail,76.58,5,0.206,none,2024-06-21\r\n31128,2105,APAC,toys,mobile,53.22,2,0.141,none,2024-06-19\r\n31129,1789,EMEA,electronics,retail,43.26,6,0.233,none,2024-01-08\r\n31130,2244,LATAM,home,retail,62.83,6,0.096,none,2024-12-15\r\n31131,1377,APAC,sports,partner,84.25,8,0.193,coupon,2024-08-17\r\n31132,2379,AMER,grocery,online,28.27,1,0.105,none,2024-02-23\r\n31133,2415,AMER,home,online,67.80,1,0.240,none,2024-05-12\r\n31134,1571,EMEA,toys,online,65.40,6,0.239,none,2024-05-08\r\n31135,2255,AMER,grocery,retail,89.42,5,0.071,bundle,2024-02-23\r\n31136,2013,APAC,fashion,mobile,70.88,3,0.082,coupon,2024-12-19\r\n31137,1724,LATAM,toys,mobile,58.74,7,0.122,none,2024-04-27\r\n31138,1478,EMEA,electronics,online,44.46,7,0.005,coupon,2024-05-21\r\n31139,1201,LATAM,toys,online,55.85,5,0.128,coupon,2024-08-20\r\n31140,2142,LATAM,grocery,mobile,23.71,2,0.006,none,2024-04-26\r\n31141,2135,EMEA,toys,partner,38.12,7,0.160,none,2024-10-18\r\n31142,1020,APAC,toys,online,78.48,4,0.199,coupon,2024-04-23\r\n31143,1183,AM
ER,home,retail,64.31,6,0.070,loyalty,2024-03-01\r\n31144,2431,LATAM,grocery,online,73.52,1,0.231,none,2024-04-23\r\n31145,1887,LATAM,fashion,online,70.24,7,0.210,coupon,2024-01-15\r\n31146,1303,LATAM,grocery,online,69.59,6,0.164,none,2024-08-14\r\n31147,1781,LATAM,grocery,partner,42.38,3,0.054,none,2024-02-20\r\n31148,1277,AMER,grocery,retail,54.95,1,0.179,none,2024-05-12\r\n31149,1463,EMEA,sports,retail,24.38,4,0.077,none,2024-03-07\r\n31150,2228,EMEA,grocery,retail,42.21,2,0.179,coupon,2024-10-07\r\n31151,2058,LATAM,home,online,51.26,3,0.211,none,2024-03-27\r\n31152,1242,LATAM,sports,online,31.79,4,0.232,coupon,2024-02-20\r\n31153,1284,APAC,electronics,retail,26.89,7,0.008,loyalty,2024-05-02\r\n31154,2006,APAC,sports,mobile,92.89,8,0.114,none,2024-10-23\r\n31155,2232,EMEA,sports,online,36.95,7,0.140,bundle,2024-08-14\r\n31156,1944,AMER,grocery,retail,67.75,2,0.111,none,2024-04-07\r\n31157,1382,LATAM,grocery,retail,24.44,3,0.123,coupon,2024-10-11\r\n31158,1806,APAC,toys,online,90.97,3,0.221,loyalty,2024-01-07\r\n31159,2333,APAC,grocery,retail,83.56,3,0.193,none,2024-09-22\r\n31160,1865,LATAM,toys,retail,74.14,8,0.162,none,2024-07-21\r\n31161,1237,LATAM,fashion,online,23.84,3,0.222,none,2024-04-19\r\n31162,1736,AMER,grocery,retail,44.53,8,0.248,none,2024-02-18\r\n31163,1223,LATAM,home,mobile,52.61,1,0.137,none,2024-10-19\r\n31164,1615,LATAM,grocery,retail,44.79,7,0.118,coupon,2024-12-14\r\n31165,2162,EMEA,home,online,22.52,1,0.018,none,2024-04-22\r\n31166,2499,LATAM,grocery,online,88.14,7,0.064,none,2024-08-02\r\n31167,2114,AMER,home,retail,18.56,7,0.220,none,2024-05-05\r\n31168,1157,LATAM,fashion,online,74.90,5,0.164,coupon,2024-03-22\r\n31169,1693,EMEA,fashion,retail,46.82,5,0.038,none,2024-10-25\r\n31170,1187,AMER,grocery,retail,59.15,2,0.055,none,2024-02-21\r\n31171,2458,EMEA,home,mobile,44.28,4,0.102,none,2024-10-08\r\n31172,1760,LATAM,grocery,retail,104.39,5,0.028,none,2024-04-22\r\n31173,1910,LATAM,grocery,online,49.31,8,0.143,coupon,2024-01-28\r\n31174,1234,
AMER,grocery,online,51.05,3,0.003,bundle,2024-05-06\r\n31175,1497,EMEA,electronics,retail,80.84,2,0.190,none,2024-03-01\r\n31176,2111,EMEA,home,retail,58.70,6,0.084,none,2024-05-07\r\n31177,1257,APAC,home,retail,44.02,6,0.250,coupon,2024-05-14\r\n31178,1489,AMER,electronics,retail,98.64,6,0.011,coupon,2024-11-28\r\n31179,1768,AMER,fashion,online,52.40,6,0.061,none,2024-09-26\r\n31180,2355,EMEA,home,online,119.06,1,0.198,bundle,2024-08-10\r\n31181,1408,AMER,fashion,online,31.19,4,0.147,coupon,2024-09-25\r\n31182,1124,AMER,grocery,online,88.31,8,0.013,bundle,2024-02-06\r\n31183,2455,AMER,grocery,mobile,117.67,8,0.011,bundle,2024-08-11\r\n31184,1517,AMER,home,retail,103.14,8,0.037,none,2024-08-17\r\n31185,1713,EMEA,home,online,38.67,1,0.239,none,2024-09-05\r\n31186,1854,AMER,home,retail,68.98,5,0.212,coupon,2024-03-06\r\n31187,1598,EMEA,electronics,online,41.46,1,0.039,none,2024-10-22\r\n31188,1594,LATAM,toys,retail,45.50,6,0.213,bundle,2024-11-02\r\n31189,2103,LATAM,grocery,online,139.47,2,0.178,bundle,2024-08-17\r\n31190,2234,LATAM,sports,online,40.16,5,0.157,bundle,2024-05-25\r\n31191,2401,LATAM,sports,online,55.84,1,0.098,none,2024-09-19\r\n31192,2179,LATAM,electronics,online,36.17,7,0.197,coupon,2024-02-16\r\n31193,1734,AMER,sports,retail,127.23,2,0.218,none,2024-09-26\r\n31194,2323,AMER,electronics,online,42.68,7,0.035,bundle,2024-06-09\r\n31195,1957,AMER,electronics,online,89.12,5,0.189,none,2024-04-03\r\n31196,2394,EMEA,fashion,mobile,47.22,4,0.055,none,2024-02-18\r\n31197,1594,LATAM,sports,mobile,40.72,5,0.166,none,2024-09-12\r\n31198,1030,EMEA,toys,partner,59.35,8,0.239,none,2024-03-05\r\n31199,1106,AMER,home,online,25.50,7,0.102,none,2024-08-06\r\n31200,1037,EMEA,fashion,retail,38.80,3,0.204,bundle,2024-11-23\r\n31201,1503,APAC,electronics,online,60.92,7,0.003,loyalty,2024-07-12\r\n31202,1084,AMER,grocery,online,63.14,5,0.050,none,2024-11-16\r\n31203,1610,LATAM,home,online,47.87,4,0.131,none,2024-09-24\r\n31204,1340,LATAM,toys,online,84.10,2,0.035,coupon,202
4-04-27\r\n31205,1394,LATAM,electronics,retail,79.29,8,0.033,none,2024-05-24\r\n31206,1283,APAC,home,online,57.96,7,0.184,loyalty,2024-05-21\r\n31207,1629,LATAM,electronics,retail,32.21,3,0.158,none,2024-08-03\r\n31208,2484,APAC,sports,online,120.01,8,0.194,none,2024-08-25\r\n31209,1787,APAC,toys,mobile,40.62,7,0.058,none,2024-10-19\r\n31210,1011,APAC,grocery,retail,57.33,1,0.191,none,2024-09-01\r\n31211,1940,APAC,fashion,online,81.31,1,0.123,coupon,2024-06-01\r\n31212,2428,LATAM,home,online,45.39,4,0.083,none,2024-04-10\r\n31213,2490,AMER,electronics,retail,74.10,8,0.040,bundle,2024-10-06\r\n31214,1457,EMEA,grocery,online,21.76,1,0.153,none,2024-04-27\r\n31215,2335,EMEA,home,partner,79.16,1,0.076,coupon,2024-12-04\r\n31216,2247,LATAM,grocery,online,44.40,5,0.145,none,2024-08-16\r\n31217,1282,LATAM,electronics,online,89.10,2,0.226,none,2024-03-03\r\n31218,1027,APAC,grocery,online,38.48,6,0.150,none,2024-03-21\r\n31219,2028,APAC,electronics,retail,82.80,3,0.159,coupon,2024-01-18\r\n31220,1420,APAC,home,retail,57.87,8,0.229,coupon,2024-11-07\r\n31221,2358,AMER,electronics,retail,151.06,5,0.073,bundle,2024-12-13\r\n31222,2038,LATAM,home,retail,65.93,7,0.093,none,2024-04-25\r\n31223,2090,AMER,electronics,online,44.48,2,0.005,none,2024-09-09\r\n31224,2027,EMEA,grocery,mobile,216.82,3,0.137,none,2024-06-27\r\n31225,1496,AMER,fashion,online,30.67,6,0.061,none,2024-09-24\r\n31226,1191,EMEA,electronics,partner,26.10,5,0.036,none,2024-11-26\r\n31227,1605,APAC,grocery,retail,36.66,3,0.178,none,2024-09-23\r\n31228,1524,LATAM,grocery,retail,47.13,7,0.110,none,2024-05-16\r\n31229,2131,APAC,fashion,online,19.24,1,0.205,none,2024-08-02\r\n31230,1422,LATAM,home,online,111.51,8,0.047,coupon,2024-02-15\r\n31231,1975,EMEA,toys,partner,38.85,8,0.058,none,2024-03-16\r\n31232,1522,LATAM,grocery,online,107.21,5,0.208,loyalty,2024-04-17\r\n31233,1804,AMER,toys,retail,42.28,1,0.076,loyalty,2024-06-17\r\n31234,2241,APAC,sports,online,33.70,1,0.222,none,2024-09-26\r\n31235,1062,EMEA,grocery,re
tail,93.05,6,0.139,none,2024-03-07\r\n31236,1103,EMEA,home,mobile,40.41,8,0.167,none,2024-12-08\r\n31237,1999,EMEA,fashion,online,134.56,8,0.226,bundle,2024-01-27\r\n31238,2283,AMER,electronics,retail,81.60,6,0.225,none,2024-07-08\r\n31239,1427,EMEA,grocery,mobile,42.22,3,0.042,none,2024-11-15\r\n31240,1289,LATAM,fashion,online,46.40,1,0.000,bundle,2024-01-13\r\n31241,2349,APAC,electronics,online,21.06,3,0.140,bundle,2024-03-09\r\n31242,1572,LATAM,grocery,retail,46.52,1,0.227,none,2024-06-17\r\n31243,1578,LATAM,grocery,retail,35.82,5,0.145,loyalty,2024-12-13\r\n31244,2128,EMEA,electronics,retail,29.44,2,0.202,coupon,2024-09-25\r\n31245,1755,APAC,toys,retail,42.35,2,0.153,none,2024-12-26\r\n31246,1955,AMER,grocery,mobile,54.38,4,0.211,bundle,2024-05-27\r\n31247,2034,LATAM,home,retail,63.60,8,0.007,none,2024-04-26\r\n31248,1141,AMER,home,retail,85.50,8,0.102,none,2024-07-19\r\n31249,1744,EMEA,fashion,online,109.67,1,0.235,coupon,2024-02-07\r\n31250,1849,EMEA,grocery,retail,48.18,7,0.097,bundle,2024-03-24\r\n31251,1490,AMER,grocery,online,58.08,5,0.112,none,2024-12-21\r\n31252,1699,APAC,grocery,online,42.01,1,0.202,bundle,2024-03-10\r\n31253,1065,AMER,electronics,retail,50.65,6,0.072,none,2024-04-18\r\n31254,1176,EMEA,home,online,54.49,3,0.133,loyalty,2024-04-23\r\n31255,1805,EMEA,grocery,online,62.05,6,0.113,none,2024-09-04\r\n31256,1911,LATAM,grocery,online,63.02,6,0.100,none,2024-04-28\r\n31257,1530,APAC,electronics,online,30.01,4,0.158,loyalty,2024-04-10\r\n31258,1696,LATAM,home,mobile,109.67,6,0.064,none,2024-05-07\r\n31259,2472,AMER,electronics,online,53.28,8,0.045,none,2024-08-17\r\n31260,1149,LATAM,home,retail,62.51,7,0.201,none,2024-04-16\r\n31261,1116,LATAM,sports,mobile,51.12,1,0.086,none,2024-10-09\r\n31262,1439,LATAM,grocery,retail,108.01,7,0.070,none,2024-08-17\r\n31263,1027,APAC,sports,online,25.13,5,0.219,none,2024-02-06\r\n31264,2075,LATAM,grocery,online,86.70,7,0.030,none,2024-03-04\r\n31265,1756,EMEA,grocery,mobile,55.98,6,0.148,bundle,2024-04-14\r\n
31266,1144,APAC,toys,mobile,66.99,1,0.004,none,2024-07-12\r\n31267,1783,AMER,home,online,50.99,7,0.107,loyalty,2024-11-24\r\n31268,1942,APAC,home,mobile,21.88,6,0.116,loyalty,2024-08-10\r\n31269,1608,AMER,grocery,mobile,53.36,8,0.209,none,2024-02-27\r\n31270,2325,LATAM,fashion,online,47.83,4,0.211,coupon,2024-04-23\r\n31271,2456,APAC,grocery,mobile,28.02,1,0.228,none,2024-04-28\r\n31272,2292,EMEA,fashion,online,67.58,2,0.189,none,2024-09-19\r\n31273,1762,LATAM,grocery,mobile,67.58,4,0.091,none,2024-02-28\r\n31274,1504,AMER,toys,mobile,22.80,3,0.146,none,2024-05-07\r\n31275,2038,LATAM,grocery,online,37.31,8,0.231,none,2024-08-18\r\n31276,1822,EMEA,toys,online,42.89,8,0.214,none,2024-10-10\r\n31277,1464,APAC,fashion,online,15.60,3,0.070,none,2024-08-13\r\n31278,2052,LATAM,grocery,retail,122.73,8,0.003,bundle,2024-03-28\r\n31279,1015,AMER,electronics,retail,28.85,8,0.122,coupon,2024-03-08\r\n31280,1250,APAC,toys,retail,31.01,5,0.159,coupon,2024-05-07\r\n31281,1811,APAC,electronics,online,63.89,7,0.141,none,2024-01-24\r\n31282,2111,EMEA,toys,online,34.56,3,0.069,none,2024-06-08\r\n31283,1937,APAC,home,mobile,53.43,1,0.094,coupon,2024-10-09\r\n31284,2410,EMEA,home,online,67.56,3,0.107,bundle,2024-01-06\r\n31285,2207,APAC,electronics,mobile,32.82,2,0.207,coupon,2024-05-07\r\n31286,1399,AMER,fashion,partner,36.15,1,0.115,coupon,2024-01-04\r\n31287,1104,APAC,fashion,online,108.31,7,0.084,none,2024-05-21\r\n31288,1550,APAC,electronics,online,57.34,4,0.039,coupon,2024-08-13\r\n31289,1787,APAC,toys,online,13.11,3,0.101,loyalty,2024-03-07\r\n31290,2223,EMEA,grocery,retail,69.86,2,0.080,none,2024-06-05\r\n31291,1815,APAC,fashion,partner,54.71,5,0.050,none,2024-03-18\r\n31292,1386,AMER,sports,retail,62.75,4,0.127,loyalty,2024-07-08\r\n31293,1147,EMEA,sports,online,47.95,2,0.078,bundle,2024-08-04\r\n31294,2119,AMER,sports,online,59.27,6,0.021,coupon,2024-06-09\r\n31295,2101,APAC,electronics,retail,51.78,4,0.038,none,2024-08-18\r\n31296,2128,EMEA,home,online,34.94,5,0.143,none,2024
-03-11\r\n31297,1198,AMER,toys,mobile,59.90,6,0.081,none,2024-11-13\r\n31298,1916,AMER,grocery,online,17.87,1,0.220,coupon,2024-04-02\r\n31299,1904,APAC,sports,retail,38.50,7,0.169,none,2024-10-06\r\n31300,2043,EMEA,toys,retail,53.16,8,0.133,none,2024-06-27\r\n31301,1836,LATAM,home,retail,70.72,7,0.039,none,2024-06-12\r\n31302,2157,AMER,fashion,online,46.29,1,0.164,loyalty,2024-04-14\r\n31303,2463,AMER,sports,mobile,51.33,2,0.005,coupon,2024-11-03\r\n31304,1390,APAC,sports,retail,39.05,2,0.169,coupon,2024-11-02\r\n31305,1846,APAC,grocery,retail,68.48,7,0.041,none,2024-02-09\r\n31306,1722,EMEA,fashion,retail,59.97,4,0.061,none,2024-07-05\r\n31307,2083,LATAM,grocery,online,44.47,2,0.012,none,2024-05-12\r\n31308,1025,EMEA,sports,online,160.32,5,0.167,coupon,2024-02-05\r\n31309,1456,APAC,electronics,online,47.94,3,0.117,none,2024-07-18\r\n31310,1558,EMEA,electronics,online,92.76,7,0.138,loyalty,2024-10-05\r\n31311,1946,AMER,home,online,96.16,5,0.064,coupon,2024-12-22\r\n31312,2273,APAC,electronics,mobile,47.54,3,0.073,none,2024-07-18\r\n31313,2260,EMEA,electronics,retail,109.32,7,0.132,coupon,2024-02-27\r\n31314,1471,EMEA,grocery,online,100.53,1,0.018,none,2024-08-01\r\n31315,1998,APAC,home,mobile,66.72,6,0.161,none,2024-11-26\r\n31316,1409,APAC,sports,online,48.76,5,0.059,coupon,2024-10-01\r\n31317,1924,AMER,grocery,online,53.51,5,0.217,none,2024-01-22\r\n31318,2016,LATAM,fashion,online,76.06,1,0.043,none,2024-12-26\r\n31319,2160,LATAM,electronics,online,166.05,7,0.088,none,2024-08-22\r\n31320,1715,AMER,home,mobile,26.41,4,0.123,bundle,2024-11-10\r\n31321,2417,LATAM,grocery,retail,98.29,1,0.143,none,2024-02-19\r\n31322,1591,APAC,grocery,mobile,29.83,2,0.040,loyalty,2024-10-01\r\n31323,1108,EMEA,fashion,retail,70.66,1,0.221,none,2024-09-26\r\n31324,2451,APAC,sports,mobile,76.97,8,0.182,coupon,2024-10-20\r\n31325,1095,APAC,sports,online,71.12,2,0.179,none,2024-03-04\r\n31326,1282,LATAM,fashion,partner,72.38,2,0.196,loyalty,2024-06-20\r\n31327,1744,EMEA,toys,online,61.86,
7,0.048,coupon,2024-03-17\r\n31328,1492,APAC,fashion,retail,108.02,8,0.239,bundle,2024-10-24\r\n31329,2114,AMER,electronics,online,63.15,2,0.223,none,2024-11-07\r\n31330,2063,APAC,fashion,online,142.62,5,0.005,none,2024-08-21\r\n31331,1872,LATAM,grocery,online,85.56,7,0.229,bundle,2024-06-24\r\n31332,1929,LATAM,toys,retail,72.20,2,0.175,none,2024-04-06\r\n31333,2191,AMER,grocery,online,52.96,3,0.179,none,2024-06-03\r\n31334,1058,LATAM,grocery,retail,83.11,7,0.012,none,2024-02-18\r\n31335,1170,AMER,electronics,mobile,55.20,4,0.026,loyalty,2024-12-13\r\n31336,1005,LATAM,electronics,retail,113.32,7,0.245,coupon,2024-10-05\r\n31337,1490,AMER,grocery,partner,75.38,2,0.228,none,2024-06-12\r\n31338,2247,LATAM,fashion,online,69.49,5,0.157,bundle,2024-12-22\r\n31339,1499,EMEA,home,online,41.28,2,0.113,none,2024-10-23\r\n31340,1360,APAC,home,online,30.35,8,0.193,bundle,2024-07-17\r\n31341,1893,APAC,toys,mobile,71.82,3,0.143,none,2024-01-04\r\n31342,1698,EMEA,toys,online,122.66,2,0.041,bundle,2024-07-19\r\n31343,1407,LATAM,electronics,mobile,51.00,3,0.195,none,2024-03-06\r\n31344,2111,EMEA,grocery,mobile,83.06,6,0.042,loyalty,2024-09-03\r\n31345,1101,AMER,electronics,online,79.98,3,0.146,none,2024-03-17\r\n31346,1023,APAC,toys,mobile,46.21,8,0.129,none,2024-01-12\r\n31347,1071,AMER,toys,online,56.09,6,0.050,none,2024-06-09\r\n31348,1949,AMER,fashion,online,79.08,2,0.001,coupon,2024-08-24\r\n31349,1695,LATAM,sports,online,60.26,2,0.038,bundle,2024-10-12\r\n31350,1850,APAC,home,mobile,48.08,8,0.199,none,2024-05-22\r\n31351,1180,AMER,toys,retail,46.31,4,0.142,none,2024-12-22\r\n31352,1042,LATAM,electronics,retail,39.41,5,0.015,none,2024-10-26\r\n31353,2387,EMEA,sports,online,81.05,5,0.065,coupon,2024-07-11\r\n31354,2159,AMER,sports,retail,59.22,8,0.126,none,2024-10-10\r\n31355,1354,AMER,sports,online,129.21,1,0.184,none,2024-07-15\r\n31356,1695,LATAM,home,online,37.80,8,0.229,none,2024-03-18\r\n31357,2174,LATAM,grocery,retail,76.29,1,0.026,bundle,2024-02-22\r\n31358,1246,EMEA,spo
rts,online,141.62,3,0.001,loyalty,2024-02-22\r\n31359,1595,AMER,grocery,online,58.81,1,0.183,none,2024-10-27\r\n31360,1955,AMER,grocery,online,66.79,3,0.163,bundle,2024-07-01\r\n31361,1652,APAC,grocery,online,36.01,6,0.218,coupon,2024-08-28\r\n31362,1752,APAC,home,retail,60.77,2,0.138,none,2024-07-13\r\n31363,1577,AMER,fashion,retail,42.00,5,0.079,coupon,2024-08-20\r\n31364,1058,LATAM,toys,retail,85.11,2,0.069,none,2024-01-23\r\n31365,2137,LATAM,fashion,online,47.45,3,0.241,none,2024-07-09\r\n31366,1708,LATAM,grocery,retail,37.38,5,0.141,none,2024-01-28\r\n31367,1191,EMEA,home,retail,56.27,1,0.231,bundle,2024-04-07\r\n31368,1611,EMEA,home,retail,66.55,6,0.106,none,2024-07-10\r\n31369,1229,LATAM,home,online,26.81,7,0.046,none,2024-03-16\r\n31370,1345,AMER,electronics,retail,46.74,1,0.021,coupon,2024-11-16\r\n31371,1415,AMER,fashion,online,95.96,4,0.250,coupon,2024-06-18\r\n31372,1912,APAC,electronics,online,196.90,4,0.144,none,2024-12-07\r\n31373,1776,APAC,electronics,mobile,111.32,1,0.226,coupon,2024-05-07\r\n31374,1157,LATAM,grocery,online,97.41,3,0.101,none,2024-04-15\r\n31375,1549,APAC,home,retail,150.08,5,0.166,coupon,2024-04-12\r\n31376,1480,APAC,home,retail,76.36,5,0.143,loyalty,2024-07-12\r\n31377,1779,APAC,grocery,online,106.74,6,0.009,coupon,2024-01-06\r\n31378,2449,LATAM,grocery,partner,20.86,2,0.132,none,2024-02-09\r\n31379,2463,AMER,grocery,mobile,53.77,1,0.139,bundle,2024-01-02\r\n31380,1695,LATAM,grocery,online,19.59,4,0.025,coupon,2024-07-23\r\n31381,1090,AMER,electronics,online,31.53,1,0.149,none,2024-04-03\r\n31382,1386,AMER,sports,mobile,107.31,5,0.049,none,2024-07-12\r\n31383,2336,APAC,toys,retail,61.36,4,0.082,coupon,2024-01-24\r\n31384,2253,AMER,sports,partner,81.22,3,0.125,none,2024-06-13\r\n31385,1817,APAC,grocery,online,24.70,5,0.125,none,2024-09-12\r\n31386,1420,APAC,home,mobile,30.01,1,0.167,none,2024-01-02\r\n31387,2336,APAC,electronics,online,67.11,2,0.156,bundle,2024-02-28\r\n31388,1968,EMEA,sports,online,56.95,7,0.037,none,2024-03-08\r\
n31389,1325,APAC,grocery,online,78.25,3,0.158,none,2024-12-15\r\n31390,2297,EMEA,electronics,online,42.25,6,0.014,none,2024-01-20\r\n31391,1947,EMEA,home,retail,45.82,8,0.006,none,2024-10-21\r\n31392,2348,EMEA,grocery,online,48.36,7,0.003,coupon,2024-06-12\r\n31393,1999,EMEA,toys,retail,41.78,7,0.136,none,2024-05-27\r\n31394,1359,LATAM,sports,online,37.61,3,0.228,none,2024-05-27\r\n31395,1051,EMEA,grocery,partner,58.92,3,0.122,none,2024-07-08\r\n31396,2349,APAC,home,mobile,39.45,4,0.065,loyalty,2024-12-03\r\n31397,1970,LATAM,home,mobile,48.21,8,0.088,none,2024-01-15\r\n31398,1182,EMEA,electronics,retail,125.34,6,0.201,coupon,2024-10-02\r\n31399,1204,AMER,home,online,56.24,5,0.057,bundle,2024-12-25\r\n31400,1212,LATAM,toys,online,70.12,6,0.021,none,2024-08-19\r\n31401,1331,AMER,electronics,retail,38.20,8,0.024,none,2024-10-26\r\n31402,1118,AMER,sports,online,122.67,8,0.192,coupon,2024-10-20\r\n31403,1515,EMEA,grocery,retail,58.80,4,0.226,coupon,2024-12-21\r\n31404,1162,AMER,electronics,online,60.30,4,0.099,loyalty,2024-02-23\r\n31405,1432,APAC,sports,mobile,19.95,3,0.229,none,2024-06-22\r\n31406,2494,AMER,grocery,retail,68.76,6,0.238,none,2024-02-26\r\n31407,2489,LATAM,fashion,partner,82.09,2,0.188,none,2024-10-13\r\n31408,1376,EMEA,fashion,online,54.15,7,0.209,none,2024-05-02\r\n31409,2034,LATAM,fashion,online,48.34,2,0.153,none,2024-08-10\r\n31410,1008,AMER,electronics,online,61.46,8,0.104,none,2024-03-18\r\n31411,1345,AMER,grocery,partner,71.69,8,0.223,none,2024-04-08\r\n31412,1701,LATAM,toys,online,51.22,6,0.043,none,2024-06-24\r\n31413,1146,LATAM,electronics,mobile,43.41,1,0.058,coupon,2024-06-11\r\n31414,1863,EMEA,home,online,24.63,8,0.076,none,2024-02-28\r\n31415,1238,AMER,electronics,retail,91.93,6,0.240,bundle,2024-04-26\r\n31416,1230,EMEA,electronics,online,47.89,6,0.129,none,2024-09-07\r\n31417,2080,LATAM,fashion,online,72.28,2,0.011,none,2024-11-13\r\n31418,1267,EMEA,grocery,online,68.21,8,0.214,coupon,2024-05-04\r\n31419,1044,EMEA,toys,retail,61.99,2,0.2
41,coupon,2024-04-10\r\n31420,1893,APAC,electronics,mobile,28.42,8,0.024,bundle,2024-11-22\r\n31421,1004,LATAM,grocery,mobile,60.67,1,0.004,bundle,2024-10-13\r\n31422,2373,LATAM,toys,retail,74.02,8,0.136,none,2024-12-11\r\n31423,1275,EMEA,grocery,online,104.46,4,0.161,none,2024-12-01\r\n31424,2337,AMER,home,online,44.88,3,0.050,none,2024-12-24\r\n31425,1429,APAC,electronics,retail,93.64,4,0.034,coupon,2024-12-09\r\n31426,2054,AMER,grocery,retail,16.90,7,0.109,none,2024-04-03\r\n31427,1021,AMER,electronics,partner,59.88,2,0.056,none,2024-07-13\r\n31428,1450,EMEA,toys,online,98.83,4,0.193,none,2024-01-22\r\n31429,1572,LATAM,fashion,online,32.82,5,0.087,none,2024-02-22\r\n31430,1274,LATAM,grocery,online,109.58,4,0.167,none,2024-08-07\r\n31431,1690,LATAM,grocery,online,103.32,6,0.145,bundle,2024-12-27\r\n31432,1910,LATAM,home,online,57.87,7,0.131,none,2024-06-05\r\n31433,1720,AMER,toys,retail,45.30,7,0.106,coupon,2024-03-10\r\n31434,1875,EMEA,electronics,online,47.64,3,0.067,none,2024-07-09\r\n31435,1600,AMER,electronics,mobile,85.36,2,0.042,none,2024-04-14\r\n31436,1239,APAC,grocery,online,31.31,7,0.231,none,2024-10-16\r\n31437,1052,LATAM,fashion,retail,28.78,7,0.168,none,2024-09-16\r\n31438,1070,EMEA,electronics,online,53.27,5,0.006,none,2024-02-02\r\n31439,1560,AMER,electronics,online,117.16,8,0.083,none,2024-01-05\r\n31440,1014,EMEA,fashion,online,20.63,8,0.162,bundle,2024-06-01\r\n31441,1668,AMER,grocery,online,43.38,8,0.179,bundle,2024-07-05\r\n31442,1675,LATAM,grocery,retail,68.15,5,0.031,loyalty,2024-07-04\r\n31443,1617,AMER,grocery,partner,28.30,8,0.236,coupon,2024-11-14\r\n31444,1977,APAC,grocery,online,99.57,2,0.069,none,2024-05-23\r\n31445,1197,LATAM,electronics,retail,198.47,5,0.012,none,2024-06-25\r\n31446,1807,EMEA,toys,retail,190.99,6,0.175,coupon,2024-08-09\r\n31447,1016,AMER,sports,online,35.85,7,0.023,none,2024-12-05\r\n31448,1484,AMER,sports,online,66.82,7,0.097,bundle,2024-08-19\r\n31449,1321,EMEA,electronics,mobile,31.35,5,0.168,none,2024-04-19\r\n
31450,1579,AMER,home,online,31.94,5,0.017,coupon,2024-10-25\r\n31451,1370,APAC,toys,online,40.14,2,0.011,bundle,2024-06-21\r\n31452,1820,AMER,electronics,retail,63.41,8,0.163,bundle,2024-11-12\r\n31453,1875,EMEA,fashion,mobile,96.48,1,0.165,none,2024-05-15\r\n31454,1669,AMER,fashion,mobile,29.47,2,0.122,coupon,2024-08-28\r\n31455,2322,AMER,grocery,online,87.33,3,0.176,bundle,2024-12-04\r\n31456,1963,AMER,sports,retail,37.97,5,0.152,loyalty,2024-08-27\r\n31457,2445,APAC,electronics,online,72.47,5,0.088,loyalty,2024-09-21\r\n31458,1433,EMEA,grocery,online,58.83,4,0.118,loyalty,2024-07-11\r\n31459,2238,AMER,grocery,mobile,21.48,1,0.200,none,2024-06-10\r\n31460,2139,AMER,electronics,retail,87.46,4,0.079,none,2024-04-07\r\n31461,2021,EMEA,grocery,partner,77.18,4,0.114,none,2024-09-24\r\n31462,1525,APAC,electronics,online,104.64,6,0.225,none,2024-09-19\r\n31463,1132,EMEA,grocery,online,61.24,5,0.198,none,2024-09-03\r\n31464,1125,LATAM,home,retail,17.01,8,0.001,none,2024-09-03\r\n31465,1472,AMER,grocery,retail,82.89,8,0.049,none,2024-01-15\r\n31466,1429,APAC,home,online,40.34,3,0.041,none,2024-07-21\r\n31467,1806,APAC,home,retail,25.47,1,0.066,loyalty,2024-02-01\r\n31468,2424,LATAM,toys,online,56.98,6,0.170,coupon,2024-04-05\r\n31469,1139,EMEA,grocery,retail,46.53,6,0.234,none,2024-12-11\r\n31470,2051,APAC,grocery,partner,29.57,3,0.159,bundle,2024-05-13\r\n31471,1318,LATAM,sports,mobile,43.56,3,0.079,loyalty,2024-04-04\r\n31472,1636,APAC,fashion,retail,80.70,4,0.203,none,2024-07-02\r\n31473,1351,APAC,grocery,retail,26.95,2,0.218,none,2024-06-25\r\n31474,1597,APAC,grocery,online,54.86,5,0.032,bundle,2024-06-01\r\n31475,2315,LATAM,home,retail,41.46,5,0.084,none,2024-09-22\r\n31476,2082,APAC,grocery,online,39.05,2,0.066,coupon,2024-02-18\r\n31477,1299,LATAM,electronics,retail,24.23,8,0.063,bundle,2024-10-26\r\n31478,1031,AMER,fashion,retail,33.62,4,0.124,none,2024-11-07\r\n31479,1890,LATAM,electronics,online,44.92,5,0.209,coupon,2024-02-19\r\n31480,1715,AMER,fashion,online,13
3.82,8,0.094,coupon,2024-09-10\r\n31481,1851,EMEA,grocery,retail,42.76,6,0.073,none,2024-07-24\r\n31482,1334,APAC,sports,retail,139.90,2,0.211,none,2024-01-20\r\n31483,1374,APAC,electronics,online,61.57,3,0.058,coupon,2024-01-07\r\n31484,1037,EMEA,toys,retail,101.68,8,0.195,loyalty,2024-03-07\r\n31485,1634,AMER,fashion,mobile,111.65,4,0.090,none,2024-04-25\r\n31486,1305,EMEA,sports,online,25.09,7,0.210,none,2024-09-21\r\n31487,2168,EMEA,fashion,online,82.51,8,0.220,none,2024-11-19\r\n31488,1247,AMER,home,retail,92.39,1,0.130,coupon,2024-12-09\r\n31489,1921,LATAM,electronics,retail,50.26,4,0.027,none,2024-12-27\r\n31490,2497,AMER,grocery,retail,29.95,4,0.222,none,2024-02-12\r\n31491,2023,LATAM,grocery,online,77.87,2,0.088,none,2024-07-23\r\n31492,1383,AMER,electronics,online,91.87,5,0.072,none,2024-11-19\r\n31493,1909,APAC,electronics,mobile,55.65,1,0.048,coupon,2024-05-13\r\n31494,1102,APAC,grocery,partner,70.24,7,0.020,none,2024-10-07\r\n31495,1058,LATAM,grocery,mobile,100.31,1,0.206,bundle,2024-01-25\r\n31496,1272,AMER,grocery,online,63.61,4,0.185,coupon,2024-03-18\r\n31497,1989,LATAM,home,retail,31.23,7,0.181,none,2024-12-27\r\n31498,1729,AMER,grocery,partner,21.21,4,0.068,none,2024-09-19\r\n31499,2442,APAC,grocery,online,27.14,3,0.085,bundle,2024-02-09\r\n31500,2117,EMEA,grocery,mobile,53.37,3,0.051,coupon,2024-06-02\r\n31501,2410,EMEA,sports,partner,20.91,2,0.082,loyalty,2024-09-19\r\n31502,1496,AMER,fashion,mobile,89.82,1,0.023,none,2024-02-07\r\n31503,2124,AMER,electronics,partner,57.69,8,0.021,loyalty,2024-06-11\r\n31504,1739,AMER,home,online,80.34,2,0.233,coupon,2024-08-18\r\n31505,1820,AMER,grocery,mobile,39.18,7,0.191,none,2024-01-25\r\n31506,1746,LATAM,fashion,mobile,37.67,3,0.051,none,2024-11-03\r\n31507,1363,EMEA,grocery,online,120.80,4,0.004,none,2024-12-09\r\n31508,1808,APAC,sports,retail,30.99,3,0.245,none,2024-12-22\r\n31509,1009,APAC,electronics,online,83.64,6,0.246,none,2024-10-11\r\n31510,2160,LATAM,electronics,online,65.75,7,0.072,none,2024-10-
26\r\n31511,2091,LATAM,grocery,retail,35.69,3,0.041,none,2024-12-18\r\n31512,1878,EMEA,sports,online,98.39,8,0.121,none,2024-07-02\r\n31513,2138,APAC,electronics,online,18.36,7,0.171,none,2024-01-13\r\n31514,1515,EMEA,home,retail,102.34,8,0.195,none,2024-01-24\r\n31515,1880,LATAM,fashion,retail,104.76,2,0.140,coupon,2024-05-15\r\n31516,1680,LATAM,electronics,online,35.78,2,0.055,bundle,2024-11-26\r\n31517,1112,APAC,grocery,online,28.79,7,0.078,coupon,2024-12-17\r\n31518,1057,LATAM,home,online,39.88,4,0.172,none,2024-01-27\r\n31519,1762,LATAM,grocery,online,46.21,6,0.161,none,2024-02-27\r\n31520,1274,LATAM,electronics,online,45.46,1,0.177,loyalty,2024-05-13\r\n31521,1718,EMEA,fashion,retail,49.03,6,0.066,coupon,2024-10-13\r\n31522,1493,APAC,grocery,online,47.17,3,0.101,loyalty,2024-06-19\r\n31523,2288,AMER,grocery,online,108.66,7,0.139,none,2024-02-06\r\n31524,1458,APAC,home,online,88.42,1,0.232,none,2024-07-05\r\n31525,1776,APAC,electronics,mobile,121.20,6,0.139,none,2024-03-15\r\n31526,2059,AMER,grocery,retail,24.87,7,0.207,none,2024-08-12\r\n31527,2385,APAC,electronics,retail,74.89,3,0.240,coupon,2024-03-07\r\n31528,1292,LATAM,electronics,retail,32.94,4,0.114,bundle,2024-03-24\r\n31529,1762,LATAM,electronics,retail,60.03,1,0.188,none,2024-05-19\r\n31530,1800,APAC,home,mobile,128.41,4,0.227,none,2024-01-02\r\n31531,1680,LATAM,fashion,online,77.13,3,0.189,none,2024-07-09\r\n31532,2460,AMER,electronics,online,69.95,1,0.123,none,2024-11-10\r\n31533,1032,AMER,home,online,50.78,1,0.238,none,2024-07-20\r\n31534,1287,AMER,grocery,retail,85.63,2,0.221,none,2024-01-16\r\n31535,1515,EMEA,electronics,retail,76.22,6,0.064,bundle,2024-08-05\r\n31536,1705,AMER,electronics,retail,43.11,1,0.240,none,2024-06-05\r\n31537,2025,EMEA,home,online,33.06,3,0.104,coupon,2024-12-12\r\n31538,2344,LATAM,sports,online,41.31,5,0.079,none,2024-11-26\r\n31539,2391,EMEA,grocery,partner,116.42,8,0.133,bundle,2024-11-27\r\n31540,2415,AMER,home,mobile,66.78,5,0.190,coupon,2024-06-25\r\n31541,2198,EME
A,grocery,retail,53.34,5,0.026,bundle,2024-03-25\r\n31542,1659,APAC,grocery,mobile,67.45,2,0.150,bundle,2024-05-22\r\n31543,1248,APAC,fashion,online,35.45,7,0.025,none,2024-09-01\r\n31544,2335,EMEA,electronics,retail,14.21,4,0.068,none,2024-12-15\r\n31545,2411,EMEA,grocery,retail,49.39,3,0.096,none,2024-05-06\r\n31546,2016,LATAM,grocery,retail,52.72,6,0.031,none,2024-04-14\r\n31547,1153,AMER,toys,retail,70.51,7,0.200,coupon,2024-02-07\r\n31548,1152,LATAM,fashion,mobile,79.79,6,0.126,coupon,2024-04-22\r\n31549,1818,AMER,grocery,online,83.27,7,0.027,bundle,2024-04-13\r\n31550,1394,LATAM,fashion,online,32.38,7,0.061,none,2024-04-23\r\n31551,1723,LATAM,home,online,60.78,3,0.021,coupon,2024-09-18\r\n31552,1750,LATAM,grocery,retail,37.50,4,0.088,none,2024-10-22\r\n31553,2392,EMEA,electronics,retail,32.15,6,0.034,none,2024-01-09\r\n31554,2406,EMEA,sports,mobile,48.86,4,0.230,loyalty,2024-03-08\r\n31555,2185,EMEA,electronics,retail,71.99,7,0.011,coupon,2024-09-28\r\n31556,1493,APAC,electronics,online,44.59,8,0.240,loyalty,2024-11-11\r\n31557,1038,APAC,grocery,mobile,48.02,3,0.164,bundle,2024-03-04\r\n31558,1901,AMER,electronics,online,83.31,4,0.019,coupon,2024-04-15\r\n31559,1303,LATAM,grocery,mobile,32.55,8,0.024,none,2024-12-25\r\n31560,2282,EMEA,toys,online,116.48,7,0.108,loyalty,2024-07-08\r\n31561,1515,EMEA,toys,retail,66.04,4,0.122,none,2024-03-07\r\n31562,1174,APAC,fashion,online,68.80,2,0.005,none,2024-11-02\r\n31563,2050,APAC,fashion,mobile,134.47,3,0.214,none,2024-08-03\r\n31564,1753,APAC,grocery,retail,129.56,4,0.095,coupon,2024-03-03\r\n31565,2251,APAC,electronics,online,55.57,2,0.138,none,2024-02-12\r\n31566,1884,APAC,grocery,online,55.67,4,0.128,none,2024-11-28\r\n31567,1605,APAC,electronics,online,43.14,7,0.166,none,2024-10-25\r\n31568,1132,EMEA,fashion,partner,36.31,3,0.076,none,2024-05-20\r\n31569,2284,EMEA,fashion,online,84.86,7,0.202,none,2024-09-22\r\n31570,1196,APAC,toys,online,74.05,1,0.083,none,2024-02-07\r\n31571,1140,LATAM,sports,mobile,60.84,5,0.14
1,coupon,2024-09-01\r\n31572,1724,LATAM,sports,retail,75.78,4,0.175,loyalty,2024-06-26\r\n31573,2394,EMEA,electronics,online,36.01,4,0.227,loyalty,2024-08-19\r\n31574,1655,LATAM,home,retail,59.29,1,0.194,none,2024-01-20\r\n31575,2141,AMER,grocery,online,45.23,8,0.198,none,2024-02-22\r\n31576,1312,EMEA,electronics,partner,49.34,7,0.149,none,2024-05-22\r\n31577,1302,LATAM,grocery,retail,47.63,2,0.094,none,2024-11-06\r\n31578,1671,APAC,grocery,online,106.77,8,0.076,coupon,2024-12-14\r\n31579,1846,APAC,sports,online,66.03,6,0.128,coupon,2024-10-13\r\n31580,1699,APAC,toys,retail,86.31,3,0.150,loyalty,2024-11-06\r\n31581,2110,LATAM,home,online,29.43,3,0.106,none,2024-11-13\r\n31582,2033,LATAM,electronics,online,49.95,8,0.145,none,2024-03-10\r\n31583,2258,AMER,grocery,retail,50.55,7,0.031,none,2024-10-12\r\n31584,1514,LATAM,home,online,93.15,3,0.042,none,2024-10-24\r\n31585,1130,LATAM,grocery,retail,28.81,3,0.013,none,2024-11-10\r\n31586,1017,AMER,electronics,online,51.32,1,0.198,loyalty,2024-11-13\r\n31587,1650,LATAM,electronics,online,170.19,5,0.072,none,2024-02-15\r\n31588,1910,LATAM,home,online,70.85,8,0.116,bundle,2024-02-11\r\n31589,1532,APAC,fashion,online,52.85,2,0.080,none,2024-08-06\r\n31590,2039,EMEA,fashion,partner,90.53,3,0.171,none,2024-10-06\r\n31591,1179,APAC,fashion,retail,58.88,6,0.115,none,2024-05-14\r\n31592,1950,LATAM,electronics,retail,38.77,3,0.134,none,2024-05-11\r\n31593,1514,LATAM,home,online,53.94,8,0.028,none,2024-07-22\r\n31594,2316,EMEA,electronics,retail,83.13,3,0.193,bundle,2024-01-23\r\n31595,1748,APAC,home,online,27.96,8,0.228,none,2024-05-26\r\n31596,2131,APAC,home,retail,78.21,3,0.171,coupon,2024-12-11\r\n31597,1794,AMER,fashion,partner,34.02,1,0.192,coupon,2024-11-14\r\n31598,1054,EMEA,electronics,retail,47.39,8,0.089,none,2024-12-04\r\n31599,2069,AMER,fashion,retail,41.66,6,0.033,none,2024-01-11\r\n31600,1053,AMER,sports,mobile,46.66,5,0.145,none,2024-06-27\r\n31601,2375,AMER,grocery,retail,120.12,2,0.051,loyalty,2024-09-09\r\n31602,15
70,AMER,grocery,online,53.70,3,0.100,none,2024-12-11\r\n31603,1464,APAC,grocery,mobile,71.51,2,0.007,none,2024-01-18\r\n31604,2002,APAC,sports,online,29.64,3,0.084,loyalty,2024-09-22\r\n31605,1454,APAC,grocery,retail,129.24,5,0.157,bundle,2024-02-08\r\n31606,2478,AMER,fashion,online,32.79,4,0.236,bundle,2024-04-18\r\n31607,1861,AMER,sports,online,21.39,6,0.229,none,2024-12-25\r\n31608,2115,APAC,sports,retail,34.43,5,0.211,coupon,2024-02-07\r\n31609,2022,LATAM,grocery,online,25.14,1,0.214,none,2024-01-07\r\n31610,2197,LATAM,electronics,mobile,38.10,6,0.087,none,2024-04-08\r\n31611,1309,EMEA,sports,online,38.59,4,0.193,none,2024-09-09\r\n31612,2052,LATAM,sports,mobile,45.44,4,0.083,coupon,2024-03-16\r\n31613,2187,EMEA,electronics,online,30.52,6,0.079,bundle,2024-06-15\r\n31614,1848,EMEA,sports,online,80.39,1,0.125,bundle,2024-03-06\r\n31615,1093,APAC,home,online,104.36,3,0.059,bundle,2024-02-01\r\n31616,1584,EMEA,sports,partner,22.14,1,0.004,coupon,2024-11-19\r\n31617,1231,AMER,fashion,online,55.30,1,0.047,bundle,2024-04-26\r\n31618,1069,APAC,fashion,partner,46.46,2,0.203,bundle,2024-08-27\r\n31619,1819,AMER,toys,mobile,61.15,4,0.039,loyalty,2024-07-20\r\n31620,2289,APAC,grocery,retail,90.97,7,0.172,none,2024-04-16\r\n31621,1629,LATAM,sports,online,29.77,3,0.236,none,2024-08-06\r\n31622,1918,EMEA,fashion,retail,38.94,7,0.085,none,2024-12-03\r\n31623,2168,EMEA,electronics,retail,42.46,7,0.054,coupon,2024-06-01\r\n31624,1224,APAC,toys,online,108.34,1,0.218,coupon,2024-09-23\r\n31625,1817,APAC,toys,retail,208.16,4,0.185,none,2024-12-08\r\n31626,2498,LATAM,toys,partner,60.94,7,0.097,none,2024-06-28\r\n31627,1352,AMER,fashion,retail,78.41,4,0.164,coupon,2024-07-10\r\n31628,2401,LATAM,fashion,retail,56.63,3,0.166,none,2024-05-10\r\n31629,1362,AMER,electronics,online,78.10,6,0.181,none,2024-02-20\r\n31630,2319,AMER,toys,mobile,49.61,5,0.078,none,2024-09-25\r\n31631,1424,APAC,home,online,45.06,6,0.059,none,2024-11-19\r\n31632,1853,APAC,toys,retail,59.06,3,0.056,none,2024-07-2
5\r\n31633,1739,AMER,grocery,online,101.25,5,0.136,none,2024-02-17\r\n31634,1269,LATAM,electronics,retail,91.58,7,0.209,none,2024-10-01\r\n31635,2422,APAC,home,online,97.09,6,0.236,coupon,2024-04-04\r\n31636,1193,APAC,grocery,online,63.69,6,0.088,none,2024-03-01\r\n31637,1373,LATAM,toys,retail,48.02,8,0.088,none,2024-11-26\r\n31638,2102,APAC,home,retail,67.53,7,0.198,none,2024-10-20\r\n31639,1189,AMER,grocery,online,34.41,2,0.070,coupon,2024-01-04\r\n31640,1327,APAC,grocery,online,63.44,5,0.077,none,2024-11-26\r\n31641,1665,AMER,electronics,online,32.02,2,0.089,none,2024-02-12\r\n31642,1733,LATAM,sports,online,82.90,2,0.206,none,2024-07-16\r\n31643,2148,EMEA,grocery,mobile,107.00,7,0.045,none,2024-08-07\r\n31644,2456,APAC,sports,retail,97.42,6,0.004,bundle,2024-10-27\r\n31645,1312,EMEA,fashion,retail,47.26,7,0.169,coupon,2024-11-01\r\n31646,2019,AMER,grocery,retail,49.43,1,0.102,none,2024-11-23\r\n31647,2102,APAC,toys,online,77.14,4,0.029,coupon,2024-07-08\r\n31648,1716,LATAM,electronics,online,47.16,8,0.084,loyalty,2024-11-10\r\n31649,1135,APAC,sports,retail,30.45,3,0.212,none,2024-02-22\r\n31650,2080,LATAM,sports,partner,38.94,3,0.215,coupon,2024-09-28\r\n31651,1680,LATAM,fashion,online,61.69,7,0.025,none,2024-01-28\r\n31652,1057,LATAM,toys,partner,95.24,7,0.013,loyalty,2024-10-19\r\n31653,2253,AMER,toys,online,44.89,5,0.083,none,2024-08-04\r\n31654,1562,AMER,electronics,mobile,51.57,1,0.062,none,2024-01-17\r\n31655,1021,AMER,home,online,82.31,8,0.048,bundle,2024-01-15\r\n31656,1457,EMEA,grocery,online,72.90,8,0.206,coupon,2024-12-05\r\n31657,2141,AMER,sports,retail,47.35,5,0.233,none,2024-10-10\r\n31658,1554,AMER,electronics,online,90.09,7,0.138,bundle,2024-02-25\r\n31659,1790,AMER,home,retail,51.52,7,0.047,bundle,2024-12-04\r\n31660,1618,EMEA,fashion,online,37.49,4,0.199,none,2024-01-06\r\n31661,1051,EMEA,electronics,online,63.20,1,0.182,none,2024-03-09\r\n31662,1998,APAC,fashion,retail,83.86,1,0.099,coupon,2024-01-19\r\n31663,1608,AMER,grocery,mobile,68.73,7,0.
210,none,2024-02-17\r\n31664,1113,EMEA,grocery,retail,86.95,2,0.191,none,2024-12-24\r\n31665,2133,AMER,home,partner,103.77,7,0.073,none,2024-01-05\r\n31666,1481,LATAM,grocery,online,34.42,5,0.131,loyalty,2024-10-14\r\n31667,2116,LATAM,grocery,online,46.76,5,0.174,loyalty,2024-08-06\r\n31668,2242,AMER,fashion,online,44.99,7,0.010,none,2024-06-21\r\n31669,1822,EMEA,grocery,partner,40.55,7,0.146,loyalty,2024-07-27\r\n31670,2425,APAC,toys,retail,34.01,3,0.025,none,2024-05-24\r\n31671,2480,APAC,fashion,retail,76.58,1,0.066,none,2024-01-24\r\n31672,2435,AMER,grocery,retail,52.50,8,0.185,none,2024-09-10\r\n31673,2443,LATAM,toys,online,56.30,6,0.142,none,2024-08-14\r\n31674,2497,AMER,grocery,retail,57.09,4,0.138,loyalty,2024-07-04\r\n31675,2076,AMER,grocery,online,37.14,6,0.079,bundle,2024-12-14\r\n31676,1029,EMEA,grocery,partner,75.18,3,0.003,none,2024-12-05\r\n31677,2070,APAC,toys,online,9.57,1,0.028,bundle,2024-02-11\r\n31678,2300,EMEA,fashion,online,51.04,5,0.146,bundle,2024-05-10\r\n31679,1655,LATAM,electronics,mobile,86.75,4,0.030,none,2024-06-15\r\n31680,1919,EMEA,grocery,online,48.27,8,0.179,none,2024-06-09\r\n31681,1233,AMER,home,retail,120.07,6,0.206,coupon,2024-12-10\r\n31682,1933,EMEA,home,online,80.38,2,0.010,none,2024-05-24\r\n31683,2103,LATAM,sports,retail,83.48,5,0.033,coupon,2024-12-15\r\n31684,1246,EMEA,toys,mobile,99.19,7,0.201,none,2024-08-17\r\n31685,2475,AMER,grocery,online,19.35,2,0.202,none,2024-10-05\r\n31686,2323,AMER,electronics,mobile,140.97,6,0.191,loyalty,2024-07-01\r\n31687,2351,EMEA,grocery,online,125.05,6,0.122,none,2024-05-14\r\n31688,2308,AMER,fashion,online,67.60,3,0.074,coupon,2024-10-04\r\n31689,1872,LATAM,grocery,online,15.96,7,0.168,bundle,2024-12-23\r\n31690,1462,LATAM,sports,retail,136.64,4,0.202,none,2024-05-26\r\n31691,2118,AMER,grocery,online,41.68,1,0.032,none,2024-05-04\r\n31692,1408,AMER,electronics,retail,96.87,5,0.230,none,2024-06-20\r\n31693,1644,EMEA,toys,retail,58.77,7,0.018,none,2024-10-22\r\n31694,1829,EMEA,fashion,part
ner,32.03,6,0.150,none,2024-11-20\r\n31695,1603,EMEA,home,mobile,26.05,4,0.034,loyalty,2024-04-13\r\n31696,2439,AMER,fashion,retail,57.69,8,0.061,coupon,2024-06-15\r\n31697,2224,EMEA,grocery,retail,97.59,5,0.178,loyalty,2024-04-05\r\n31698,1449,EMEA,home,retail,92.06,5,0.163,none,2024-02-23\r\n31699,2491,APAC,grocery,mobile,65.18,7,0.188,bundle,2024-09-22\r\n31700,1420,APAC,electronics,online,59.00,8,0.227,none,2024-08-24\r\n31701,1527,AMER,electronics,retail,42.23,6,0.160,none,2024-07-22\r\n31702,1759,EMEA,grocery,retail,56.34,5,0.117,coupon,2024-12-10\r\n31703,2393,LATAM,home,retail,130.31,2,0.012,none,2024-02-01\r\n31704,1018,APAC,home,online,64.73,2,0.048,loyalty,2024-05-16\r\n31705,1218,AMER,toys,online,73.94,4,0.093,coupon,2024-04-09\r\n31706,1081,AMER,fashion,online,85.16,7,0.194,none,2024-06-02\r\n31707,1526,EMEA,home,retail,49.65,7,0.183,none,2024-09-11\r\n31708,1173,LATAM,home,online,61.95,5,0.013,bundle,2024-09-26\r\n31709,1883,LATAM,grocery,online,125.72,5,0.017,loyalty,2024-11-14\r\n31710,2062,EMEA,grocery,mobile,31.60,4,0.228,none,2024-08-19\r\n31711,2483,LATAM,sports,online,36.81,6,0.047,none,2024-11-19\r\n31712,1582,AMER,home,retail,63.39,5,0.094,bundle,2024-11-02\r\n31713,2366,APAC,grocery,retail,27.40,2,0.185,none,2024-11-17\r\n31714,1641,EMEA,grocery,retail,63.53,5,0.200,none,2024-01-20\r\n31715,1895,AMER,home,online,97.35,6,0.207,none,2024-10-19\r\n31716,2133,AMER,sports,retail,69.92,3,0.026,loyalty,2024-11-15\r\n31717,2251,APAC,grocery,retail,40.14,8,0.243,none,2024-08-08\r\n31718,1049,AMER,electronics,retail,48.19,4,0.171,bundle,2024-05-25\r\n31719,1489,AMER,electronics,online,39.38,6,0.220,loyalty,2024-04-12\r\n31720,1197,LATAM,grocery,retail,16.46,5,0.141,none,2024-11-02\r\n31721,2302,APAC,fashion,retail,65.92,8,0.190,coupon,2024-09-23\r\n31722,1750,LATAM,home,online,42.91,3,0.065,none,2024-08-17\r\n31723,1620,LATAM,home,online,32.95,4,0.008,none,2024-04-11\r\n31724,2005,APAC,electronics,retail,45.12,4,0.123,coupon,2024-10-16\r\n31725,1075,AM
ER,electronics,mobile,57.26,8,0.206,bundle,2024-11-14\r\n31726,1332,APAC,sports,online,28.32,5,0.224,none,2024-08-02\r\n31727,1669,AMER,fashion,retail,67.54,2,0.121,none,2024-03-11\r\n31728,1912,APAC,sports,partner,64.47,5,0.190,coupon,2024-09-16\r\n31729,2349,APAC,home,online,34.22,6,0.205,coupon,2024-07-01\r\n31730,1704,AMER,fashion,online,66.82,3,0.229,none,2024-11-11\r\n31731,2390,AMER,toys,online,15.69,6,0.082,loyalty,2024-05-03\r\n31732,2179,LATAM,electronics,mobile,112.08,2,0.058,none,2024-06-16\r\n31733,1891,APAC,sports,mobile,62.70,6,0.244,none,2024-10-19\r\n31734,1922,EMEA,electronics,online,100.72,1,0.040,none,2024-01-08\r\n31735,1386,AMER,toys,online,59.90,3,0.138,bundle,2024-02-23\r\n31736,1524,LATAM,grocery,online,61.20,1,0.126,none,2024-08-02\r\n31737,1517,AMER,home,retail,58.67,3,0.249,none,2024-12-12\r\n31738,2340,EMEA,toys,retail,99.57,5,0.177,none,2024-04-13\r\n31739,1055,AMER,electronics,online,63.85,6,0.184,none,2024-07-07\r\n31740,1977,APAC,electronics,online,30.19,5,0.102,none,2024-04-11\r\n31741,1215,LATAM,home,retail,33.33,7,0.072,none,2024-05-06\r\n31742,1599,APAC,home,online,72.76,1,0.008,bundle,2024-01-22\r\n31743,1929,LATAM,grocery,online,49.85,3,0.211,none,2024-07-02\r\n31744,1549,APAC,toys,online,55.12,2,0.071,none,2024-04-04\r\n31745,2237,EMEA,sports,mobile,46.09,3,0.231,none,2024-03-17\r\n31746,1243,AMER,home,online,36.96,4,0.047,none,2024-12-11\r\n31747,2148,EMEA,electronics,retail,86.45,4,0.201,loyalty,2024-01-17\r\n31748,1481,LATAM,electronics,retail,99.72,4,0.074,coupon,2024-01-03\r\n31749,2494,AMER,grocery,retail,53.04,6,0.142,none,2024-03-26\r\n31750,2240,LATAM,grocery,online,23.62,7,0.018,none,2024-05-09\r\n31751,2135,EMEA,grocery,online,36.19,6,0.142,coupon,2024-11-21\r\n31752,1269,LATAM,electronics,online,37.11,6,0.099,coupon,2024-08-03\r\n31753,1738,LATAM,home,retail,64.32,7,0.194,none,2024-11-22\r\n31754,1550,APAC,home,retail,78.42,8,0.022,none,2024-11-21\r\n31755,1920,LATAM,fashion,partner,63.24,6,0.068,none,2024-05-22\r\
n31756,2164,AMER,toys,mobile,29.97,1,0.207,none,2024-07-23\r\n31757,1715,AMER,grocery,mobile,63.70,1,0.118,none,2024-10-12\r\n31758,2367,AMER,sports,mobile,54.28,3,0.116,none,2024-02-06\r\n31759,1806,APAC,fashion,mobile,11.67,8,0.128,loyalty,2024-03-23\r\n31760,1900,APAC,fashion,online,50.05,3,0.250,none,2024-01-05\r\n31761,1931,APAC,grocery,online,90.10,4,0.108,none,2024-09-27\r\n31762,1924,AMER,toys,retail,47.93,2,0.167,none,2024-06-27\r\n31763,1495,LATAM,fashion,mobile,78.24,7,0.072,none,2024-01-03\r\n31764,1484,AMER,fashion,retail,55.65,1,0.065,none,2024-02-11\r\n31765,1188,LATAM,grocery,online,61.59,3,0.130,none,2024-12-17\r\n31766,2360,EMEA,grocery,mobile,66.75,3,0.162,coupon,2024-02-15\r\n31767,1719,LATAM,electronics,online,53.47,5,0.008,none,2024-09-05\r\n31768,2164,AMER,grocery,online,87.83,8,0.128,none,2024-07-01\r\n31769,2096,LATAM,home,online,40.92,2,0.127,none,2024-11-15\r\n31770,1916,AMER,grocery,retail,34.49,4,0.003,none,2024-12-02\r\n31771,1163,AMER,electronics,retail,86.59,5,0.012,loyalty,2024-04-01\r\n31772,1632,LATAM,electronics,online,47.21,5,0.133,bundle,2024-10-04\r\n31773,2461,LATAM,fashion,online,64.73,7,0.117,none,2024-02-21\r\n31774,1175,AMER,fashion,online,65.63,7,0.050,coupon,2024-12-08\r\n31775,1676,LATAM,sports,retail,42.01,2,0.239,none,2024-12-22\r\n31776,1704,AMER,fashion,mobile,101.11,3,0.158,coupon,2024-01-25\r\n31777,1649,APAC,home,retail,31.74,4,0.180,none,2024-12-17\r\n31778,2189,LATAM,fashion,retail,72.43,4,0.035,none,2024-10-20\r\n31779,1608,AMER,grocery,retail,43.44,4,0.238,loyalty,2024-04-05\r\n31780,1217,EMEA,electronics,online,33.43,3,0.154,coupon,2024-05-15\r\n31781,1977,APAC,home,online,59.38,8,0.203,coupon,2024-04-01\r\n31782,1912,APAC,grocery,online,110.36,2,0.245,bundle,2024-11-23\r\n31783,2372,AMER,toys,online,73.74,5,0.147,none,2024-10-11\r\n31784,1154,LATAM,sports,online,59.91,5,0.184,none,2024-10-24\r\n31785,1229,LATAM,electronics,online,28.19,5,0.099,none,2024-06-27\r\n31786,2111,EMEA,electronics,online,43.71,3,0.
169,none,2024-10-10\r\n31787,1500,EMEA,grocery,online,88.41,2,0.033,loyalty,2024-07-04\r\n31788,2281,AMER,electronics,online,57.58,4,0.010,none,2024-02-27\r\n31789,2296,AMER,grocery,retail,60.80,1,0.128,none,2024-07-16\r\n31790,1187,AMER,home,mobile,38.50,2,0.043,none,2024-06-11\r\n31791,1698,EMEA,home,retail,70.96,5,0.049,none,2024-04-07\r\n31792,1198,AMER,toys,retail,89.07,8,0.179,coupon,2024-05-15\r\n31793,1196,APAC,electronics,mobile,37.68,2,0.110,none,2024-07-24\r\n31794,2318,AMER,toys,retail,24.74,1,0.238,coupon,2024-10-11\r\n31795,1887,LATAM,grocery,retail,73.64,2,0.170,none,2024-09-05\r\n31796,1685,AMER,grocery,online,31.22,4,0.206,bundle,2024-11-20\r\n31797,1389,LATAM,grocery,retail,45.39,4,0.187,coupon,2024-08-14\r\n31798,2410,EMEA,fashion,retail,33.44,2,0.030,none,2024-01-23\r\n31799,2002,APAC,electronics,online,99.98,1,0.019,coupon,2024-03-22\r\n31800,1273,AMER,electronics,mobile,18.93,5,0.155,none,2024-03-03\r\n31801,1965,LATAM,fashion,mobile,100.82,6,0.068,loyalty,2024-12-16\r\n31802,1851,EMEA,grocery,online,80.59,4,0.060,none,2024-05-10\r\n31803,2144,EMEA,electronics,online,71.74,3,0.183,none,2024-06-15\r\n31804,1853,APAC,home,online,114.45,5,0.073,loyalty,2024-05-02\r\n31805,1125,LATAM,electronics,mobile,95.94,4,0.222,none,2024-09-17\r\n31806,1908,AMER,electronics,retail,92.89,1,0.134,none,2024-02-11\r\n31807,2046,APAC,electronics,online,68.84,2,0.057,none,2024-04-05\r\n31808,2041,LATAM,home,retail,42.55,7,0.012,none,2024-07-21\r\n31809,1888,LATAM,home,retail,51.13,6,0.154,coupon,2024-10-09\r\n31810,2468,EMEA,sports,mobile,69.99,4,0.033,coupon,2024-01-26\r\n31811,1406,LATAM,home,retail,36.03,6,0.208,bundle,2024-01-25\r\n31812,1402,EMEA,fashion,online,47.52,5,0.095,none,2024-04-11\r\n31813,1516,EMEA,sports,online,59.57,8,0.039,none,2024-12-10\r\n31814,1696,LATAM,sports,mobile,34.51,7,0.114,none,2024-09-08\r\n31815,1701,LATAM,grocery,online,146.73,3,0.023,none,2024-05-23\r\n31816,1157,LATAM,toys,online,34.73,8,0.230,bundle,2024-10-05\r\n31817,1262,APAC
,grocery,online,67.68,2,0.231,none,2024-07-13\r\n31818,2196,AMER,fashion,online,87.44,7,0.139,coupon,2024-01-21\r\n31819,2312,APAC,fashion,online,159.77,2,0.043,loyalty,2024-06-23\r\n31820,1932,EMEA,grocery,retail,42.44,2,0.226,none,2024-03-03\r\n31821,1899,APAC,toys,online,67.52,2,0.015,coupon,2024-04-05\r\n31822,1027,APAC,sports,retail,76.21,8,0.214,coupon,2024-11-05\r\n31823,1534,EMEA,home,retail,34.69,3,0.170,none,2024-01-12\r\n31824,1792,AMER,grocery,mobile,48.01,7,0.209,none,2024-01-27\r\n31825,1731,AMER,grocery,retail,86.92,8,0.001,bundle,2024-10-02\r\n31826,1726,EMEA,toys,retail,104.91,1,0.184,none,2024-12-20\r\n31827,1663,LATAM,home,online,102.73,8,0.184,bundle,2024-07-25\r\n31828,2231,LATAM,home,retail,69.54,8,0.125,none,2024-11-26\r\n31829,1744,EMEA,fashion,retail,37.22,2,0.110,none,2024-08-13\r\n31830,1428,APAC,fashion,retail,51.97,3,0.133,none,2024-12-23\r\n31831,1444,EMEA,grocery,retail,64.35,3,0.091,coupon,2024-07-23\r\n31832,1901,AMER,toys,online,61.66,6,0.190,coupon,2024-09-03\r\n31833,1653,APAC,electronics,online,85.76,3,0.141,bundle,2024-06-04\r\n31834,2218,EMEA,fashion,retail,57.48,2,0.209,none,2024-10-06\r\n31835,1171,APAC,sports,retail,115.49,4,0.018,none,2024-03-06\r\n31836,2147,LATAM,sports,retail,35.46,5,0.181,bundle,2024-04-15\r\n31837,2481,APAC,electronics,online,43.06,3,0.182,loyalty,2024-06-12\r\n31838,1904,APAC,sports,online,25.79,3,0.109,none,2024-10-28\r\n31839,1469,EMEA,electronics,retail,93.06,2,0.024,none,2024-12-06\r\n31840,2454,LATAM,grocery,online,62.63,4,0.002,bundle,2024-12-01\r\n31841,2019,AMER,electronics,retail,138.28,1,0.229,coupon,2024-05-07\r\n31842,1128,LATAM,electronics,online,59.56,1,0.161,bundle,2024-07-09\r\n31843,1387,AMER,grocery,retail,50.25,7,0.127,none,2024-10-20\r\n31844,1200,EMEA,home,online,51.40,5,0.058,loyalty,2024-08-18\r\n31845,2006,APAC,electronics,online,56.29,6,0.210,none,2024-06-28\r\n31846,1575,APAC,toys,retail,53.53,6,0.189,none,2024-12-18\r\n31847,1495,LATAM,grocery,partner,47.61,6,0.189,none,2024
-07-15\r\n31848,2392,EMEA,fashion,online,59.36,5,0.185,coupon,2024-06-26\r\n31849,1851,EMEA,electronics,online,114.53,1,0.156,none,2024-08-05\r\n31850,1544,LATAM,sports,mobile,90.87,3,0.113,none,2024-10-26\r\n31851,1218,AMER,home,online,45.01,6,0.030,none,2024-08-03\r\n31852,1089,LATAM,fashion,online,59.44,8,0.013,none,2024-02-10\r\n31853,1830,EMEA,electronics,mobile,271.82,2,0.033,loyalty,2024-07-07\r\n31854,1176,EMEA,home,partner,81.67,8,0.073,coupon,2024-12-18\r\n31855,2134,AMER,fashion,retail,22.78,7,0.037,bundle,2024-05-25\r\n31856,1867,AMER,home,retail,48.16,6,0.155,bundle,2024-02-22\r\n31857,1083,AMER,home,online,48.72,8,0.046,none,2024-11-05\r\n31858,1234,AMER,fashion,mobile,54.24,5,0.019,none,2024-12-13\r\n31859,1211,EMEA,electronics,retail,46.20,6,0.165,bundle,2024-09-03\r\n31860,1368,EMEA,grocery,online,189.48,8,0.110,bundle,2024-10-23\r\n31861,1927,EMEA,grocery,retail,31.97,5,0.032,none,2024-11-25\r\n31862,2490,AMER,fashion,online,63.80,6,0.236,coupon,2024-02-24\r\n31863,1898,EMEA,sports,retail,34.96,2,0.033,coupon,2024-02-08\r\n31864,1203,AMER,sports,mobile,77.55,4,0.099,none,2024-08-23\r\n31865,1459,LATAM,sports,online,35.05,1,0.214,none,2024-12-21\r\n31866,2381,AMER,grocery,retail,104.38,7,0.137,bundle,2024-03-20\r\n31867,1551,APAC,sports,online,84.82,3,0.240,none,2024-12-05\r\n31868,1262,APAC,home,online,47.66,7,0.102,bundle,2024-02-19\r\n31869,1248,APAC,grocery,online,57.19,8,0.200,none,2024-08-04\r\n31870,2484,APAC,home,retail,69.20,6,0.157,none,2024-05-28\r\n31871,1800,APAC,toys,online,70.13,8,0.030,none,2024-03-27\r\n31872,1539,LATAM,electronics,online,148.78,3,0.237,none,2024-03-27\r\n31873,1478,EMEA,home,retail,90.46,8,0.224,bundle,2024-07-19\r\n31874,1013,LATAM,fashion,retail,107.32,5,0.204,loyalty,2024-10-11\r\n31875,1074,LATAM,electronics,online,39.53,2,0.008,none,2024-10-19\r\n31876,2060,LATAM,electronics,partner,63.62,6,0.114,bundle,2024-03-27\r\n31877,1859,AMER,toys,mobile,115.47,3,0.059,loyalty,2024-10-03\r\n31878,1946,AMER,grocery,retai
l,60.20,4,0.206,bundle,2024-04-24\r\n31879,2286,AMER,home,mobile,45.17,6,0.050,none,2024-09-14\r\n31880,1370,APAC,home,retail,67.93,6,0.151,none,2024-05-19\r\n31881,2309,AMER,grocery,retail,79.48,8,0.150,none,2024-07-28\r\n31882,1938,APAC,toys,online,90.73,3,0.172,bundle,2024-03-08\r\n31883,1023,APAC,electronics,online,31.52,7,0.228,bundle,2024-05-19\r\n31884,2353,AMER,grocery,mobile,35.50,5,0.108,none,2024-05-07\r\n31885,1510,EMEA,grocery,mobile,67.36,3,0.066,none,2024-10-04\r\n31886,1776,APAC,electronics,retail,44.13,8,0.171,none,2024-03-18\r\n31887,2438,AMER,grocery,mobile,56.13,8,0.013,none,2024-12-02\r\n31888,2199,LATAM,grocery,retail,40.32,5,0.001,none,2024-04-12\r\n31889,2313,LATAM,home,partner,62.85,8,0.180,bundle,2024-01-14\r\n31890,1489,AMER,grocery,retail,47.36,5,0.155,none,2024-12-13\r\n31891,1620,LATAM,grocery,online,27.01,6,0.100,none,2024-01-26\r\n31892,2137,LATAM,grocery,retail,101.09,6,0.054,none,2024-01-08\r\n31893,1260,LATAM,home,retail,55.83,3,0.114,coupon,2024-07-21\r\n31894,1300,EMEA,electronics,online,83.74,7,0.033,coupon,2024-05-23\r\n31895,2229,APAC,grocery,mobile,32.11,5,0.150,none,2024-05-05\r\n31896,1172,APAC,toys,online,64.86,4,0.242,coupon,2024-08-25\r\n31897,1929,LATAM,grocery,online,41.16,7,0.081,none,2024-01-25\r\n31898,1286,EMEA,grocery,retail,37.05,6,0.230,loyalty,2024-02-28\r\n31899,1365,LATAM,sports,retail,75.36,5,0.080,loyalty,2024-01-10\r\n31900,2123,AMER,electronics,mobile,176.23,2,0.042,none,2024-09-25\r\n31901,1174,APAC,home,partner,60.23,3,0.053,none,2024-02-24\r\n31902,1872,LATAM,grocery,mobile,91.42,6,0.091,coupon,2024-11-01\r\n31903,2072,AMER,fashion,retail,42.03,5,0.125,coupon,2024-04-28\r\n31904,1600,AMER,home,online,78.76,7,0.014,loyalty,2024-05-16\r\n31905,2177,AMER,grocery,mobile,61.51,1,0.073,none,2024-02-14\r\n31906,1799,EMEA,fashion,retail,28.11,4,0.233,coupon,2024-01-09\r\n31907,2189,LATAM,home,online,43.12,2,0.133,none,2024-09-19\r\n31908,1887,LATAM,home,mobile,131.68,2,0.126,coupon,2024-05-13\r\n31909,1313,EME
A,grocery,mobile,129.51,2,0.245,coupon,2024-03-04\r\n31910,1052,LATAM,grocery,online,66.40,1,0.023,none,2024-01-04\r\n31911,2057,APAC,home,online,12.22,7,0.220,none,2024-12-09\r\n31912,1837,LATAM,fashion,retail,99.60,6,0.249,coupon,2024-02-14\r\n31913,1485,APAC,grocery,retail,49.95,2,0.162,none,2024-12-28\r\n31914,1540,LATAM,grocery,online,86.16,1,0.128,bundle,2024-01-10\r\n31915,1935,EMEA,toys,retail,163.18,7,0.048,none,2024-02-05\r\n31916,1051,EMEA,grocery,mobile,42.50,4,0.129,none,2024-09-01\r\n31917,2183,EMEA,fashion,mobile,100.26,5,0.075,none,2024-06-20\r\n31918,1376,EMEA,grocery,online,59.44,4,0.026,coupon,2024-05-22\r\n31919,2468,EMEA,home,online,56.90,5,0.035,none,2024-08-10\r\n31920,2091,LATAM,home,retail,46.81,6,0.195,none,2024-03-23\r\n31921,2045,LATAM,grocery,online,27.96,6,0.084,none,2024-01-01\r\n31922,2167,APAC,grocery,retail,79.46,4,0.026,none,2024-09-25\r\n31923,2455,AMER,grocery,retail,45.59,6,0.059,coupon,2024-07-26\r\n31924,1641,EMEA,home,online,68.90,4,0.220,none,2024-02-21\r\n31925,2003,LATAM,grocery,mobile,46.50,7,0.220,none,2024-01-08\r\n31926,1917,LATAM,grocery,retail,39.92,5,0.245,none,2024-01-27\r\n31927,2331,APAC,grocery,online,73.23,1,0.075,bundle,2024-01-14\r\n31928,1325,APAC,electronics,mobile,46.46,4,0.198,none,2024-09-27\r\n31929,1555,AMER,sports,retail,87.76,2,0.163,none,2024-04-01\r\n31930,1724,LATAM,electronics,online,48.73,8,0.028,loyalty,2024-10-03\r\n31931,1233,AMER,toys,online,17.42,5,0.025,coupon,2024-04-24\r\n31932,1117,LATAM,fashion,online,27.05,6,0.178,coupon,2024-08-21\r\n31933,2296,AMER,fashion,online,65.22,7,0.160,none,2024-05-21\r\n31934,1148,AMER,grocery,online,35.28,8,0.229,loyalty,2024-02-05\r\n31935,1667,AMER,electronics,online,62.69,8,0.117,none,2024-11-01\r\n31936,1533,APAC,electronics,online,86.96,3,0.157,loyalty,2024-01-17\r\n31937,1827,EMEA,grocery,retail,79.07,2,0.030,bundle,2024-06-23\r\n31938,1789,EMEA,grocery,online,107.32,2,0.012,none,2024-11-24\r\n31939,2271,LATAM,grocery,mobile,41.93,2,0.087,none,2024-0
1-28\r\n31940,2327,EMEA,electronics,online,43.81,8,0.143,none,2024-02-25\r\n31941,1984,LATAM,home,online,93.06,7,0.195,none,2024-10-21\r\n31942,1697,APAC,fashion,online,49.32,3,0.220,coupon,2024-12-16\r\n31943,2464,LATAM,home,online,48.18,2,0.060,bundle,2024-05-11\r\n31944,1454,APAC,electronics,partner,81.08,3,0.157,none,2024-01-11\r\n31945,2465,EMEA,home,retail,74.86,5,0.212,bundle,2024-09-01\r\n31946,1603,EMEA,electronics,online,35.35,2,0.197,coupon,2024-04-12\r\n31947,1595,AMER,grocery,online,22.05,4,0.119,none,2024-05-04\r\n31948,1008,AMER,fashion,online,51.42,5,0.023,coupon,2024-12-20\r\n31949,2249,LATAM,electronics,retail,113.76,1,0.082,none,2024-08-02\r\n31950,1536,LATAM,grocery,online,50.89,3,0.006,none,2024-09-24\r\n31951,1559,EMEA,fashion,online,101.22,6,0.224,coupon,2024-11-19\r\n31952,1688,LATAM,fashion,partner,21.44,6,0.004,bundle,2024-01-07\r\n31953,2053,AMER,grocery,retail,79.38,6,0.187,bundle,2024-06-08\r\n31954,2240,LATAM,electronics,retail,25.33,8,0.184,none,2024-05-12\r\n31955,2024,AMER,grocery,online,22.74,4,0.017,bundle,2024-11-09\r\n31956,1340,LATAM,grocery,retail,48.94,8,0.250,bundle,2024-07-21\r\n31957,2477,APAC,electronics,online,58.10,6,0.072,none,2024-12-07\r\n31958,1824,LATAM,fashion,online,56.12,8,0.025,bundle,2024-01-15\r\n31959,1730,AMER,electronics,retail,126.14,1,0.104,loyalty,2024-09-24\r\n31960,1052,LATAM,sports,retail,129.41,1,0.038,none,2024-06-17\r\n31961,2071,APAC,grocery,online,63.22,1,0.160,none,2024-09-16\r\n31962,1058,LATAM,electronics,retail,83.85,2,0.244,none,2024-12-08\r\n31963,2282,EMEA,fashion,partner,59.77,8,0.110,bundle,2024-07-21\r\n31964,2220,LATAM,grocery,partner,114.61,3,0.098,coupon,2024-04-02\r\n31965,2484,APAC,toys,retail,140.96,5,0.147,coupon,2024-08-16\r\n31966,1942,APAC,grocery,online,30.93,8,0.039,coupon,2024-12-01\r\n31967,2443,LATAM,electronics,retail,55.19,3,0.131,bundle,2024-08-07\r\n31968,2306,AMER,home,online,23.95,4,0.222,none,2024-07-18\r\n31969,1537,LATAM,electronics,online,357.60,5,0.000,coupon,2
024-10-15\r\n31970,1206,EMEA,sports,retail,37.97,4,0.087,none,2024-07-16\r\n31971,1247,AMER,grocery,online,27.87,7,0.240,none,2024-10-26\r\n31972,1958,APAC,grocery,mobile,21.47,8,0.170,none,2024-11-26\r\n31973,1335,APAC,fashion,mobile,75.04,3,0.238,none,2024-03-04\r\n31974,2249,LATAM,grocery,online,75.21,2,0.110,none,2024-10-17\r\n31975,1180,AMER,home,online,33.52,2,0.108,bundle,2024-10-07\r\n31976,1005,LATAM,toys,retail,73.55,1,0.216,none,2024-08-25\r\n31977,2464,LATAM,fashion,online,26.97,5,0.075,none,2024-06-28\r\n31978,1746,LATAM,sports,retail,36.64,8,0.098,none,2024-04-18\r\n31979,1322,AMER,grocery,online,27.38,7,0.064,bundle,2024-08-14\r\n31980,2077,APAC,fashion,online,33.59,2,0.075,coupon,2024-10-25\r\n31981,1049,AMER,fashion,retail,82.13,1,0.082,none,2024-05-28\r\n31982,1806,APAC,grocery,online,23.84,8,0.110,none,2024-05-21\r\n31983,1521,LATAM,sports,retail,76.98,8,0.137,none,2024-11-03\r\n31984,1051,EMEA,sports,online,22.15,5,0.229,none,2024-11-20\r\n31985,2303,EMEA,sports,retail,234.96,7,0.091,coupon,2024-02-11\r\n31986,1968,EMEA,grocery,online,121.00,8,0.160,none,2024-09-10\r\n31987,2371,LATAM,electronics,retail,44.43,2,0.004,bundle,2024-04-23\r\n31988,2121,APAC,sports,online,72.56,8,0.199,none,2024-11-17\r\n31989,2362,AMER,electronics,online,88.46,3,0.248,none,2024-02-04\r\n31990,1409,APAC,electronics,retail,31.84,8,0.056,bundle,2024-04-28\r\n31991,1501,AMER,electronics,online,24.81,6,0.012,bundle,2024-01-09\r\n31992,1414,APAC,sports,retail,61.68,7,0.121,none,2024-04-24\r\n31993,2124,AMER,sports,online,49.88,3,0.224,bundle,2024-04-24\r\n31994,1182,EMEA,electronics,online,23.07,4,0.094,none,2024-02-11\r\n31995,2499,LATAM,electronics,online,36.46,4,0.193,none,2024-09-20\r\n31996,2158,APAC,fashion,partner,40.03,6,0.142,none,2024-04-07\r\n31997,2437,LATAM,fashion,online,35.31,1,0.128,none,2024-04-26\r\n31998,1820,AMER,fashion,mobile,60.14,5,0.000,bundle,2024-05-08\r\n31999,1566,EMEA,electronics,online,50.17,1,0.069,loyalty,2024-06-17\r\n32000,1585,AMER,elect
ronics,online,62.37,3,0.112,loyalty,2024-01-21\r\n32001,2487,LATAM,grocery,mobile,89.30,6,0.124,none,2024-04-11\r\n32002,2200,LATAM,home,retail,29.18,4,0.240,none,2024-07-19\r\n32003,1446,AMER,electronics,online,81.92,1,0.204,bundle,2024-11-24\r\n32004,1225,APAC,fashion,online,76.60,2,0.165,none,2024-07-02\r\n32005,1622,LATAM,grocery,retail,56.58,2,0.121,coupon,2024-03-24\r\n32006,2139,AMER,toys,partner,75.85,5,0.080,none,2024-12-22\r\n32007,1821,LATAM,electronics,mobile,98.71,8,0.141,bundle,2024-10-05\r\n32008,1799,EMEA,sports,online,77.73,2,0.211,coupon,2024-04-15\r\n32009,1367,AMER,grocery,mobile,102.61,5,0.076,none,2024-11-08\r\n32010,2237,EMEA,electronics,retail,47.50,1,0.076,coupon,2024-04-13\r\n32011,1459,LATAM,home,retail,67.07,4,0.031,none,2024-11-21\r\n32012,1533,APAC,grocery,online,34.81,7,0.065,none,2024-10-27\r\n32013,1008,AMER,sports,online,76.04,8,0.142,none,2024-03-12\r\n32014,2281,AMER,toys,retail,22.51,1,0.224,none,2024-01-22\r\n32015,2102,APAC,electronics,retail,78.04,2,0.046,loyalty,2024-08-03\r\n32016,1242,LATAM,electronics,online,104.46,4,0.213,none,2024-02-26\r\n32017,2491,APAC,home,mobile,39.71,7,0.173,none,2024-11-12\r\n32018,1020,APAC,grocery,partner,26.36,6,0.040,bundle,2024-08-03\r\n32019,1880,LATAM,toys,mobile,68.00,8,0.131,loyalty,2024-03-23\r\n32020,2098,AMER,grocery,retail,82.52,3,0.108,none,2024-10-12\r\n32021,2037,LATAM,home,online,61.15,1,0.154,none,2024-06-12\r\n32022,2421,AMER,grocery,retail,33.66,3,0.130,none,2024-08-11\r\n32023,2319,AMER,toys,mobile,51.62,8,0.103,none,2024-05-27\r\n32024,1370,APAC,grocery,online,56.57,6,0.084,coupon,2024-12-02\r\n32025,1719,LATAM,grocery,online,73.28,1,0.002,none,2024-03-09\r\n32026,1393,LATAM,toys,retail,52.46,3,0.170,bundle,2024-10-10\r\n32027,1334,APAC,electronics,online,30.65,5,0.081,none,2024-08-21\r\n32028,1793,LATAM,electronics,retail,42.18,6,0.126,none,2024-10-12\r\n32029,1832,APAC,sports,retail,43.72,2,0.138,none,2024-08-28\r\n32030,2133,AMER,sports,online,52.23,7,0.220,none,2024-05-24
\r\n32031,1322,AMER,home,online,58.90,8,0.151,none,2024-02-25\r\n32032,1092,AMER,grocery,online,35.42,8,0.096,coupon,2024-09-02\r\n32033,1064,AMER,electronics,online,60.90,4,0.038,none,2024-02-18\r\n32034,1062,EMEA,home,online,61.15,6,0.057,loyalty,2024-09-23\r\n32035,2169,EMEA,toys,online,67.97,4,0.177,bundle,2024-05-08\r\n32036,1868,AMER,fashion,online,67.15,5,0.174,none,2024-02-19\r\n32037,1560,AMER,grocery,retail,40.37,8,0.150,none,2024-06-09\r\n32038,2075,LATAM,electronics,mobile,48.35,6,0.153,bundle,2024-05-10\r\n32039,1730,AMER,fashion,online,111.30,2,0.176,none,2024-05-05\r\n32040,1076,LATAM,sports,retail,76.61,3,0.121,coupon,2024-11-21\r\n32041,1842,LATAM,home,retail,51.53,4,0.112,coupon,2024-01-26\r\n32042,2009,LATAM,home,online,46.44,5,0.058,bundle,2024-10-11\r\n32043,1344,EMEA,toys,online,75.30,1,0.159,none,2024-03-05\r\n32044,1096,EMEA,electronics,online,114.63,4,0.101,none,2024-05-01\r\n32045,1912,APAC,electronics,retail,50.78,5,0.180,coupon,2024-11-21\r\n32046,1525,APAC,fashion,online,41.54,6,0.067,none,2024-12-19\r\n32047,2061,EMEA,grocery,retail,57.41,7,0.137,coupon,2024-02-09\r\n32048,1312,EMEA,toys,retail,56.46,8,0.155,coupon,2024-09-09\r\n32049,1725,APAC,home,online,92.06,6,0.161,none,2024-09-21\r\n32050,1329,APAC,home,mobile,96.04,3,0.129,none,2024-06-16\r\n32051,1649,APAC,sports,partner,64.55,7,0.155,loyalty,2024-08-18\r\n32052,1251,EMEA,grocery,mobile,74.28,6,0.016,bundle,2024-07-16\r\n32053,1200,EMEA,electronics,partner,82.92,4,0.056,none,2024-07-07\r\n32054,1807,EMEA,fashion,online,34.56,1,0.072,coupon,2024-02-16\r\n32055,1047,APAC,grocery,retail,32.87,3,0.189,none,2024-11-27\r\n32056,1064,AMER,grocery,retail,81.76,7,0.066,none,2024-03-15\r\n32057,1155,EMEA,grocery,online,69.58,8,0.144,loyalty,2024-02-27\r\n32058,1294,APAC,grocery,online,29.28,8,0.108,none,2024-11-06\r\n32059,1277,AMER,home,mobile,61.25,8,0.123,none,2024-09-20\r\n32060,2339,AMER,fashion,online,54.47,3,0.221,coupon,2024-09-23\r\n32061,1011,APAC,home,retail,50.53,1,0.130,none,
2024-02-28\r\n32062,2394,EMEA,home,retail,39.41,1,0.022,bundle,2024-06-01\r\n32063,1390,APAC,fashion,online,67.87,8,0.180,bundle,2024-10-14\r\n32064,2356,LATAM,toys,retail,168.33,7,0.186,none,2024-02-24\r\n32065,1301,AMER,home,partner,157.52,6,0.020,bundle,2024-05-27\r\n32066,2292,EMEA,electronics,retail,24.28,6,0.062,coupon,2024-11-11\r\n32067,1866,EMEA,sports,online,47.72,2,0.205,none,2024-11-08\r\n32068,1827,EMEA,grocery,retail,93.78,7,0.242,bundle,2024-05-05\r\n32069,1549,APAC,sports,online,72.49,3,0.142,none,2024-09-08\r\n32070,1912,APAC,electronics,online,96.37,6,0.187,loyalty,2024-08-18\r\n32071,1787,APAC,grocery,mobile,51.42,2,0.199,none,2024-06-03\r\n32072,2031,AMER,grocery,retail,67.07,3,0.068,loyalty,2024-03-20\r\n32073,1103,EMEA,grocery,retail,33.09,1,0.070,none,2024-06-11\r\n32074,1394,LATAM,home,mobile,143.73,2,0.156,bundle,2024-04-03\r\n32075,1070,EMEA,electronics,online,24.11,2,0.151,loyalty,2024-10-16\r\n32076,1834,AMER,fashion,online,50.32,6,0.031,none,2024-08-21\r\n32077,1559,EMEA,fashion,online,40.48,4,0.178,none,2024-03-15\r\n32078,1949,AMER,electronics,retail,51.22,5,0.062,none,2024-06-21\r\n32079,1448,EMEA,grocery,online,119.30,2,0.164,bundle,2024-12-24\r\n32080,1630,APAC,fashion,online,59.37,3,0.204,none,2024-02-14\r\n32081,2023,LATAM,sports,mobile,59.97,2,0.124,none,2024-03-23\r\n32082,1714,APAC,electronics,online,71.89,4,0.056,coupon,2024-10-23\r\n32083,1049,AMER,toys,retail,75.69,2,0.086,coupon,2024-03-03\r\n32084,2461,LATAM,sports,retail,53.78,5,0.243,bundle,2024-09-14\r\n32085,1674,LATAM,grocery,online,145.54,1,0.145,none,2024-12-14\r\n32086,1176,EMEA,electronics,partner,124.55,5,0.089,bundle,2024-12-19\r\n32087,2292,EMEA,home,online,73.31,3,0.239,none,2024-05-19\r\n32088,1042,LATAM,grocery,retail,62.37,5,0.109,coupon,2024-07-16\r\n32089,2204,AMER,toys,online,80.27,6,0.214,none,2024-08-28\r\n32090,1404,EMEA,grocery,online,81.99,4,0.032,none,2024-02-14\r\n32091,1387,AMER,fashion,online,134.75,7,0.211,none,2024-12-18\r\n32092,2034,LATAM,fa
shion,online,66.94,5,0.094,none,2024-03-15\r\n32093,1735,LATAM,home,online,36.95,4,0.242,none,2024-10-23\r\n32094,2231,LATAM,grocery,online,93.41,8,0.195,none,2024-04-26\r\n32095,1307,AMER,grocery,online,93.44,2,0.236,coupon,2024-07-09\r\n32096,1055,AMER,electronics,retail,81.15,7,0.001,coupon,2024-09-24\r\n32097,1528,EMEA,grocery,mobile,86.58,2,0.107,none,2024-10-28\r\n32098,2165,AMER,grocery,mobile,63.71,2,0.224,bundle,2024-09-15\r\n32099,1971,EMEA,electronics,retail,19.60,8,0.019,bundle,2024-07-03\r\n32100,1180,AMER,sports,mobile,40.85,1,0.042,coupon,2024-08-11\r\n32101,1546,EMEA,electronics,online,132.32,2,0.087,loyalty,2024-01-21\r\n32102,2274,APAC,sports,retail,39.59,8,0.172,none,2024-07-11\r\n32103,2128,EMEA,sports,online,34.69,8,0.220,none,2024-11-11\r\n32104,1053,AMER,sports,online,43.47,4,0.102,loyalty,2024-09-24\r\n32105,1772,EMEA,home,online,27.76,4,0.029,coupon,2024-06-06\r\n32106,1768,AMER,fashion,online,72.00,8,0.208,none,2024-12-03\r\n32107,1680,LATAM,fashion,online,47.64,3,0.183,loyalty,2024-02-07\r\n32108,1801,LATAM,grocery,mobile,34.59,1,0.108,none,2024-03-04\r\n32109,1935,EMEA,electronics,mobile,35.47,7,0.224,coupon,2024-03-04\r\n32110,1197,LATAM,grocery,online,39.66,1,0.043,none,2024-01-12\r\n32111,1644,EMEA,sports,online,43.71,2,0.085,none,2024-01-07\r\n32112,2048,LATAM,electronics,online,118.12,3,0.022,none,2024-07-19\r\n32113,1173,LATAM,electronics,online,80.25,6,0.089,bundle,2024-11-05\r\n32114,1637,APAC,home,retail,32.95,6,0.192,coupon,2024-12-10\r\n32115,1160,LATAM,home,retail,144.70,8,0.089,loyalty,2024-07-24\r\n32116,1281,AMER,home,online,49.67,7,0.065,none,2024-12-28\r\n32117,2320,LATAM,fashion,mobile,55.20,6,0.047,none,2024-02-13\r\n32118,1491,EMEA,fashion,retail,136.81,1,0.247,none,2024-05-15\r\n32119,2260,EMEA,fashion,mobile,105.24,8,0.155,bundle,2024-02-26\r\n32120,1637,APAC,electronics,online,95.45,3,0.223,none,2024-06-12\r\n32121,1797,LATAM,fashion,online,56.37,1,0.184,none,2024-06-04\r\n32122,1038,APAC,grocery,online,47.86,1,0.11
7,loyalty,2024-05-14\r\n32123,1455,APAC,home,mobile,41.15,7,0.216,none,2024-01-08\r\n32124,2303,EMEA,home,online,52.44,3,0.056,none,2024-12-27\r\n32125,1230,EMEA,electronics,online,66.41,6,0.240,none,2024-09-21\r\n32126,1148,AMER,electronics,online,30.53,1,0.198,bundle,2024-02-11\r\n32127,1698,EMEA,grocery,online,178.74,8,0.225,bundle,2024-08-22\r\n32128,1493,APAC,fashion,mobile,48.41,8,0.032,coupon,2024-10-09\r\n32129,1906,APAC,fashion,online,56.05,4,0.038,loyalty,2024-12-18\r\n32130,2136,AMER,fashion,online,30.68,5,0.123,bundle,2024-06-06\r\n32131,1357,EMEA,sports,retail,77.30,6,0.173,none,2024-05-17\r\n32132,1825,AMER,sports,retail,125.82,4,0.218,coupon,2024-05-22\r\n32133,2455,AMER,sports,online,37.67,5,0.185,none,2024-10-10\r\n32134,2296,AMER,electronics,mobile,55.46,4,0.239,none,2024-08-20\r\n32135,1425,EMEA,grocery,mobile,37.90,5,0.228,none,2024-12-09\r\n32136,2316,EMEA,home,online,58.50,4,0.019,coupon,2024-02-03\r\n32137,1880,LATAM,grocery,retail,117.43,2,0.116,none,2024-10-19\r\n32138,2185,EMEA,home,retail,80.08,4,0.137,coupon,2024-11-16\r\n32139,1995,LATAM,grocery,online,47.68,6,0.134,none,2024-06-26\r\n32140,1831,APAC,electronics,online,28.58,6,0.222,coupon,2024-10-05\r\n32141,1500,EMEA,grocery,partner,59.36,1,0.153,none,2024-02-07\r\n32142,1798,AMER,fashion,online,88.99,3,0.000,coupon,2024-08-11\r\n32143,2423,LATAM,electronics,partner,55.91,3,0.059,none,2024-08-09\r\n32144,1956,APAC,grocery,online,65.33,1,0.223,coupon,2024-09-04\r\n32145,2328,EMEA,grocery,online,72.27,3,0.141,bundle,2024-10-09\r\n32146,2497,AMER,electronics,partner,118.11,6,0.245,none,2024-07-08\r\n32147,1496,AMER,electronics,mobile,69.83,2,0.007,none,2024-10-28\r\n32148,1217,EMEA,fashion,retail,49.32,4,0.023,bundle,2024-03-12\r\n32149,1376,EMEA,sports,retail,41.09,8,0.238,none,2024-07-11\r\n32150,1432,APAC,grocery,retail,88.73,8,0.097,none,2024-03-22\r\n32151,1035,EMEA,electronics,retail,25.70,2,0.221,coupon,2024-02-22\r\n32152,2237,EMEA,toys,online,73.71,2,0.228,bundle,2024-08-02\r\n32
153,1731,AMER,sports,online,31.32,2,0.163,bundle,2024-06-27\r\n32154,1838,AMER,home,retail,133.62,6,0.036,bundle,2024-07-23\r\n32155,1170,AMER,grocery,online,44.71,8,0.098,loyalty,2024-03-04\r\n32156,2083,LATAM,electronics,mobile,20.13,6,0.229,none,2024-10-07\r\n32157,1252,APAC,toys,online,62.30,5,0.250,loyalty,2024-09-04\r\n32158,1044,EMEA,home,retail,63.65,8,0.203,none,2024-02-11\r\n32159,2358,AMER,grocery,mobile,83.94,1,0.113,none,2024-04-01\r\n32160,1888,LATAM,toys,mobile,24.03,2,0.153,coupon,2024-06-09\r\n32161,2278,APAC,toys,retail,84.69,1,0.049,none,2024-11-16\r\n32162,1668,AMER,electronics,retail,43.27,3,0.129,coupon,2024-08-13\r\n32163,2207,APAC,sports,online,60.87,2,0.088,loyalty,2024-09-03\r\n32164,2279,LATAM,grocery,online,66.98,3,0.103,none,2024-12-19\r\n32165,1736,AMER,sports,retail,38.10,7,0.124,none,2024-04-12\r\n32166,1799,EMEA,grocery,online,39.17,2,0.032,coupon,2024-06-22\r\n32167,1121,EMEA,sports,retail,72.53,2,0.184,none,2024-10-28\r\n32168,1111,APAC,home,mobile,68.07,2,0.230,bundle,2024-02-11\r\n32169,2074,AMER,electronics,retail,96.30,5,0.083,none,2024-11-05\r\n32170,2169,EMEA,electronics,retail,24.01,6,0.197,bundle,2024-11-07\r\n32171,1739,AMER,home,retail,14.73,5,0.047,bundle,2024-07-21\r\n32172,1659,APAC,sports,online,77.03,4,0.190,none,2024-02-19\r\n32173,2322,AMER,home,online,50.73,8,0.039,none,2024-02-23\r\n32174,2088,EMEA,electronics,online,42.71,1,0.174,coupon,2024-09-18\r\n32175,1141,AMER,sports,online,31.64,4,0.161,none,2024-11-09\r\n32176,1849,EMEA,fashion,mobile,43.30,3,0.008,none,2024-11-03\r\n32177,1091,EMEA,grocery,online,37.48,3,0.162,none,2024-05-01\r\n32178,1301,AMER,fashion,retail,63.26,8,0.246,none,2024-01-09\r\n32179,1896,EMEA,electronics,mobile,66.89,8,0.238,loyalty,2024-06-22\r\n32180,1402,EMEA,electronics,online,81.82,6,0.066,coupon,2024-10-07\r\n32181,1315,AMER,electronics,mobile,74.17,8,0.027,coupon,2024-10-01\r\n32182,1990,EMEA,fashion,online,75.47,3,0.051,coupon,2024-07-15\r\n32183,2062,EMEA,toys,mobile,45.67,6,0.16
2,none,2024-09-06\r\n32184,1609,LATAM,fashion,online,60.25,3,0.139,none,2024-03-05\r\n32185,1321,EMEA,electronics,mobile,61.16,2,0.197,none,2024-10-28\r\n32186,2196,AMER,grocery,retail,31.97,6,0.092,none,2024-10-11\r\n32187,1094,LATAM,fashion,online,33.82,6,0.051,none,2024-10-22\r\n32188,2212,EMEA,home,retail,46.83,1,0.040,none,2024-12-16\r\n32189,1671,APAC,fashion,online,86.21,2,0.048,none,2024-05-21\r\n32190,2036,APAC,sports,online,75.60,7,0.180,none,2024-04-18\r\n32191,1525,APAC,grocery,online,51.10,6,0.110,coupon,2024-01-19\r\n32192,1353,EMEA,grocery,online,23.95,7,0.057,none,2024-10-23\r\n32193,1672,APAC,sports,online,98.18,8,0.178,none,2024-11-20\r\n32194,2262,APAC,electronics,retail,20.37,2,0.242,coupon,2024-02-05\r\n32195,1669,AMER,electronics,retail,117.78,1,0.064,none,2024-11-12\r\n32196,1137,APAC,electronics,online,35.84,4,0.047,none,2024-06-10\r\n32197,2279,LATAM,electronics,retail,29.31,7,0.214,none,2024-12-09\r\n32198,1866,EMEA,electronics,retail,52.04,1,0.245,none,2024-05-05\r\n32199,1656,LATAM,home,online,9.21,3,0.053,none,2024-12-10\r\n32200,2055,AMER,electronics,retail,25.92,3,0.179,none,2024-12-13\r\n32201,2293,LATAM,home,retail,47.36,7,0.160,coupon,2024-02-21\r\n32202,2182,AMER,fashion,online,157.65,5,0.097,none,2024-06-14\r\n32203,1445,APAC,electronics,online,81.05,2,0.040,none,2024-12-25\r\n32204,2130,EMEA,toys,online,24.21,4,0.199,none,2024-04-17\r\n32205,2438,AMER,home,online,28.27,3,0.239,coupon,2024-01-06\r\n32206,1250,APAC,grocery,retail,49.64,4,0.068,none,2024-06-01\r\n32207,1589,AMER,grocery,mobile,195.52,5,0.239,none,2024-12-22\r\n32208,1001,LATAM,sports,online,34.26,7,0.100,coupon,2024-07-06\r\n32209,1668,AMER,home,retail,76.58,8,0.129,coupon,2024-06-21\r\n32210,1368,EMEA,grocery,online,58.44,6,0.164,none,2024-05-16\r\n32211,2098,AMER,grocery,retail,22.81,8,0.024,none,2024-03-05\r\n32212,1703,AMER,home,online,53.35,5,0.119,loyalty,2024-01-03\r\n32213,1591,APAC,grocery,mobile,133.45,5,0.167,coupon,2024-09-28\r\n32214,2335,EMEA,grocery,r
etail,54.69,4,0.078,none,2024-11-09\r\n32215,1527,AMER,fashion,retail,30.17,8,0.192,coupon,2024-08-08\r\n32216,1185,LATAM,grocery,online,24.21,5,0.040,none,2024-02-21\r\n32217,1607,LATAM,fashion,online,46.91,2,0.054,none,2024-07-14\r\n32218,1983,LATAM,sports,online,99.96,5,0.112,none,2024-06-20\r\n32219,1649,APAC,electronics,online,159.56,4,0.120,none,2024-01-12\r\n32220,1462,LATAM,sports,mobile,29.57,5,0.216,coupon,2024-06-16\r\n32221,2428,LATAM,electronics,online,65.86,2,0.061,loyalty,2024-08-05\r\n32222,1874,LATAM,grocery,retail,83.36,4,0.183,loyalty,2024-12-19\r\n32223,2030,EMEA,toys,online,70.66,6,0.109,none,2024-06-21\r\n32224,2189,LATAM,home,retail,46.27,5,0.129,bundle,2024-12-11\r\n32225,1360,APAC,home,online,64.57,6,0.042,coupon,2024-05-28\r\n32226,2185,EMEA,toys,partner,35.79,7,0.005,bundle,2024-06-27\r\n32227,1963,AMER,sports,online,50.73,2,0.136,none,2024-12-01\r\n32228,1883,LATAM,grocery,retail,21.16,5,0.223,coupon,2024-05-11\r\n32229,1495,LATAM,grocery,online,34.05,8,0.169,coupon,2024-11-03\r\n32230,1483,EMEA,toys,online,131.63,6,0.151,bundle,2024-12-21\r\n32231,1671,APAC,grocery,retail,31.21,7,0.040,none,2024-01-01\r\n32232,1725,APAC,electronics,retail,72.49,8,0.104,bundle,2024-02-14\r\n32233,1820,AMER,grocery,online,52.64,1,0.036,bundle,2024-07-16\r\n32234,1763,LATAM,home,online,36.11,4,0.202,none,2024-08-16\r\n32235,1504,AMER,grocery,online,59.91,8,0.116,bundle,2024-05-10\r\n32236,1372,APAC,sports,online,97.89,8,0.220,none,2024-02-20\r\n32237,1808,APAC,home,online,50.96,8,0.201,bundle,2024-10-19\r\n32238,1235,EMEA,fashion,mobile,82.54,1,0.011,coupon,2024-04-09\r\n32239,1490,AMER,electronics,mobile,58.07,4,0.050,none,2024-01-27\r\n32240,1677,EMEA,home,mobile,42.78,1,0.109,none,2024-07-26\r\n32241,1286,EMEA,home,retail,30.00,1,0.020,none,2024-12-17\r\n32242,1769,LATAM,toys,mobile,120.38,4,0.028,bundle,2024-05-22\r\n32243,1532,APAC,grocery,online,56.52,4,0.217,none,2024-10-24\r\n32244,2489,LATAM,toys,retail,27.16,6,0.186,none,2024-07-12\r\n32245,1865,L
ATAM,fashion,mobile,74.13,4,0.170,loyalty,2024-05-03\r\n32246,1497,EMEA,fashion,online,43.29,2,0.020,coupon,2024-11-18\r\n32247,2140,AMER,grocery,mobile,40.34,3,0.111,loyalty,2024-05-27\r\n32248,2059,AMER,electronics,retail,33.05,8,0.033,coupon,2024-01-15\r\n32249,1354,AMER,grocery,retail,49.12,8,0.231,none,2024-05-09\r\n32250,1330,EMEA,grocery,retail,48.91,4,0.130,bundle,2024-08-19\r\n32251,2063,APAC,toys,retail,42.88,5,0.220,loyalty,2024-05-21\r\n32252,1607,LATAM,electronics,retail,64.66,1,0.197,loyalty,2024-12-08\r\n32253,1737,AMER,sports,retail,122.49,6,0.155,bundle,2024-09-24\r\n32254,1305,EMEA,sports,online,41.66,5,0.001,coupon,2024-02-01\r\n32255,1609,LATAM,toys,mobile,16.86,2,0.078,none,2024-06-23\r\n32256,1107,APAC,electronics,retail,143.36,3,0.212,none,2024-10-19\r\n32257,2157,AMER,grocery,online,34.70,6,0.099,none,2024-10-19\r\n32258,2356,LATAM,electronics,online,103.02,8,0.250,loyalty,2024-08-14\r\n32259,1116,LATAM,home,retail,68.98,6,0.174,none,2024-11-03\r\n32260,1679,APAC,grocery,retail,38.55,2,0.127,none,2024-11-07\r\n32261,2415,AMER,toys,online,56.67,1,0.011,coupon,2024-07-08\r\n32262,1863,EMEA,fashion,retail,55.66,1,0.137,none,2024-07-13\r\n32263,2376,LATAM,toys,retail,58.74,5,0.160,none,2024-05-10\r\n32264,1658,AMER,grocery,online,37.56,3,0.165,none,2024-09-17\r\n32265,2023,LATAM,grocery,mobile,89.50,8,0.117,none,2024-12-06\r\n32266,1289,LATAM,grocery,mobile,39.16,6,0.190,none,2024-03-28\r\n32267,2142,LATAM,home,online,34.62,1,0.236,bundle,2024-04-01\r\n32268,1005,LATAM,toys,partner,69.60,8,0.206,coupon,2024-10-16\r\n32269,2240,LATAM,grocery,online,30.71,2,0.245,none,2024-02-26\r\n32270,2206,AMER,sports,retail,46.74,3,0.107,none,2024-09-07\r\n32271,1433,EMEA,grocery,retail,36.88,5,0.026,none,2024-05-28\r\n32272,2223,EMEA,home,online,116.51,1,0.010,none,2024-04-18\r\n32273,1178,EMEA,sports,online,126.04,2,0.170,none,2024-11-08\r\n32274,1475,LATAM,sports,online,77.02,5,0.035,loyalty,2024-02-15\r\n32275,2311,LATAM,home,mobile,141.45,3,0.032,coupon,20
24-11-01\r\n32276,1998,APAC,electronics,mobile,54.30,5,0.164,coupon,2024-06-14\r\n32277,1214,EMEA,fashion,retail,61.68,2,0.142,bundle,2024-02-01\r\n32278,1905,APAC,fashion,retail,114.32,8,0.156,none,2024-09-07\r\n32279,2199,LATAM,grocery,partner,53.89,1,0.195,bundle,2024-02-10\r\n32280,1468,AMER,electronics,mobile,52.88,1,0.226,bundle,2024-02-27\r\n32281,2360,EMEA,fashion,retail,37.87,3,0.217,bundle,2024-01-19\r\n32282,1817,APAC,home,retail,179.34,3,0.116,coupon,2024-01-28\r\n32283,1289,LATAM,home,retail,29.71,4,0.163,none,2024-07-24\r\n32284,2126,APAC,grocery,mobile,18.30,2,0.127,coupon,2024-07-18\r\n32285,2059,AMER,grocery,partner,78.43,6,0.155,none,2024-08-21\r\n32286,2420,EMEA,home,retail,98.87,5,0.146,none,2024-10-15\r\n32287,1127,EMEA,fashion,online,39.91,5,0.238,loyalty,2024-04-19\r\n32288,1871,APAC,electronics,online,56.72,6,0.128,coupon,2024-08-15\r\n32289,1264,APAC,grocery,retail,23.90,2,0.022,bundle,2024-11-11\r\n32290,2007,LATAM,grocery,mobile,51.40,4,0.100,none,2024-09-08\r\n32291,1086,AMER,grocery,mobile,31.88,1,0.144,none,2024-11-14\r\n32292,1861,AMER,toys,online,111.12,6,0.179,bundle,2024-06-18\r\n32293,1342,LATAM,electronics,retail,156.62,7,0.187,bundle,2024-02-11\r\n32294,1889,APAC,electronics,online,30.18,7,0.066,bundle,2024-12-23\r\n32295,1283,APAC,home,online,50.44,4,0.040,coupon,2024-12-15\r\n32296,2341,EMEA,home,online,66.43,2,0.211,none,2024-01-12\r\n32297,2013,APAC,electronics,online,51.55,4,0.054,bundle,2024-01-13\r\n32298,1912,APAC,fashion,online,51.68,4,0.180,none,2024-11-04\r\n32299,2191,AMER,home,online,67.31,4,0.137,none,2024-08-10\r\n32300,2272,EMEA,grocery,retail,34.23,8,0.058,none,2024-01-24\r\n32301,1592,LATAM,electronics,online,80.17,1,0.140,coupon,2024-09-16\r\n32302,1792,AMER,grocery,online,36.12,6,0.157,none,2024-04-27\r\n32303,1893,APAC,home,retail,87.31,5,0.188,loyalty,2024-04-06\r\n32304,1513,APAC,electronics,online,77.75,2,0.029,none,2024-11-09\r\n32305,1109,APAC,sports,online,97.85,5,0.067,coupon,2024-04-07\r\n32306,2354,L
ATAM,sports,retail,41.97,1,0.225,none,2024-04-27\r\n32307,1621,APAC,electronics,online,71.05,6,0.200,none,2024-12-09\r\n32308,1123,LATAM,grocery,retail,38.25,5,0.068,coupon,2024-02-28\r\n32309,2335,EMEA,fashion,online,272.32,3,0.163,none,2024-01-13\r\n32310,1555,AMER,grocery,retail,72.22,2,0.085,none,2024-04-19\r\n32311,1822,EMEA,electronics,retail,75.78,6,0.225,none,2024-02-13\r\n32312,2257,AMER,grocery,online,102.41,6,0.238,coupon,2024-06-08\r\n32313,1001,LATAM,toys,online,52.44,1,0.037,coupon,2024-10-22\r\n32314,1846,APAC,sports,online,37.81,6,0.040,none,2024-02-24\r\n32315,2079,EMEA,home,mobile,55.26,6,0.204,none,2024-03-17\r\n32316,1177,LATAM,home,retail,221.09,5,0.012,coupon,2024-12-03\r\n32317,2259,AMER,grocery,online,27.35,8,0.191,loyalty,2024-12-25\r\n32318,2279,LATAM,home,online,117.21,7,0.004,none,2024-02-18\r\n32319,1895,AMER,home,online,77.74,7,0.063,coupon,2024-03-03\r\n32320,2278,APAC,electronics,online,47.19,4,0.110,coupon,2024-05-15\r\n32321,1395,APAC,toys,online,62.91,1,0.169,bundle,2024-03-18\r\n32322,1427,EMEA,home,online,13.17,3,0.180,loyalty,2024-09-09\r\n32323,2373,LATAM,grocery,online,46.93,5,0.137,none,2024-10-27\r\n32324,1098,APAC,electronics,online,55.31,1,0.234,none,2024-12-08\r\n32325,2313,LATAM,grocery,partner,33.75,5,0.109,coupon,2024-02-23\r\n32326,2386,EMEA,grocery,retail,61.92,7,0.026,none,2024-12-26\r\n32327,2080,LATAM,home,mobile,59.63,2,0.053,none,2024-02-23\r\n32328,1553,LATAM,home,mobile,113.96,5,0.008,none,2024-08-03\r\n32329,1777,AMER,electronics,retail,56.00,7,0.238,none,2024-02-16\r\n32330,1211,EMEA,electronics,online,44.93,6,0.063,none,2024-10-14\r\n32331,2409,APAC,toys,online,111.63,5,0.028,coupon,2024-01-21\r\n32332,1855,APAC,electronics,retail,61.13,8,0.189,bundle,2024-01-04\r\n32333,1003,APAC,home,online,83.63,6,0.019,none,2024-11-12\r\n32334,2333,APAC,fashion,retail,41.30,2,0.157,none,2024-01-21\r\n32335,1549,APAC,grocery,online,50.66,6,0.091,none,2024-10-12\r\n32336,1380,AMER,electronics,online,211.79,1,0.228,none,20
24-03-20\r\n32337,2428,LATAM,home,mobile,180.38,7,0.180,bundle,2024-04-26\r\n32338,2305,AMER,sports,mobile,20.38,5,0.082,none,2024-03-15\r\n32339,1120,LATAM,fashion,retail,27.15,5,0.115,bundle,2024-02-10\r\n32340,1695,LATAM,fashion,retail,69.63,4,0.058,coupon,2024-04-15\r\n32341,1799,EMEA,toys,retail,37.49,8,0.077,bundle,2024-06-03\r\n32342,1496,AMER,electronics,retail,18.99,1,0.214,none,2024-11-02\r\n32343,1700,EMEA,electronics,online,38.19,8,0.056,coupon,2024-06-06\r\n32344,1986,LATAM,electronics,retail,27.90,7,0.232,none,2024-12-09\r\n32345,1168,APAC,toys,online,104.60,3,0.114,none,2024-12-16\r\n32346,1038,APAC,electronics,retail,76.20,7,0.147,bundle,2024-10-13\r\n32347,2120,AMER,toys,retail,45.80,2,0.065,none,2024-10-28\r\n32348,1066,AMER,electronics,online,42.17,4,0.061,none,2024-10-22\r\n32349,1543,AMER,electronics,online,58.94,1,0.056,bundle,2024-06-05\r\n32350,1344,EMEA,toys,retail,32.95,3,0.166,bundle,2024-07-02\r\n32351,1300,EMEA,home,online,107.76,5,0.076,bundle,2024-06-07\r\n32352,1524,LATAM,grocery,online,43.54,3,0.047,loyalty,2024-06-22\r\n32353,2096,LATAM,toys,retail,65.87,1,0.049,none,2024-12-18\r\n32354,1066,AMER,electronics,online,52.47,7,0.040,none,2024-02-17\r\n32355,1312,EMEA,grocery,retail,112.58,4,0.117,loyalty,2024-03-12\r\n32356,1808,APAC,fashion,online,52.67,5,0.177,none,2024-03-18\r\n32357,2360,EMEA,grocery,retail,86.01,6,0.194,none,2024-12-11\r\n32358,1390,APAC,home,retail,41.92,7,0.022,coupon,2024-06-10\r\n32359,2102,APAC,electronics,online,96.99,7,0.173,bundle,2024-11-13\r\n32360,2214,AMER,home,online,63.02,7,0.050,coupon,2024-06-11\r\n32361,1351,APAC,home,retail,27.15,8,0.010,none,2024-02-12\r\n32362,1834,AMER,home,retail,76.02,7,0.081,coupon,2024-06-07\r\n32363,1970,LATAM,grocery,retail,74.01,8,0.230,none,2024-07-17\r\n32364,2475,AMER,home,retail,107.58,2,0.088,coupon,2024-01-11\r\n32365,2419,LATAM,home,online,28.84,4,0.228,coupon,2024-08-27\r\n32366,1409,APAC,fashion,online,89.07,1,0.214,coupon,2024-12-21\r\n32367,1365,LATAM,electron
ics,retail,72.80,2,0.000,none,2024-08-11\r\n32368,1524,LATAM,toys,online,58.32,5,0.167,loyalty,2024-11-05\r\n32369,1068,APAC,electronics,online,112.56,6,0.105,none,2024-02-06\r\n32370,2000,APAC,sports,retail,64.34,2,0.231,none,2024-12-07\r\n32371,2126,APAC,grocery,retail,45.98,4,0.096,none,2024-04-27\r\n32372,1887,LATAM,sports,mobile,95.29,4,0.208,none,2024-05-26\r\n32373,1066,AMER,grocery,partner,18.88,7,0.112,none,2024-09-10\r\n32374,1962,APAC,electronics,online,135.40,2,0.245,coupon,2024-01-21\r\n32375,1733,LATAM,toys,retail,115.18,8,0.182,loyalty,2024-05-07\r\n32376,1120,LATAM,sports,retail,52.26,4,0.241,loyalty,2024-10-04\r\n32377,2105,APAC,grocery,mobile,81.69,4,0.224,none,2024-02-11\r\n32378,1681,LATAM,home,online,10.13,2,0.131,none,2024-03-06\r\n32379,1189,AMER,grocery,retail,84.46,8,0.037,coupon,2024-11-08\r\n32380,1146,LATAM,home,online,48.82,5,0.065,none,2024-09-02\r\n32381,1020,APAC,electronics,online,40.63,5,0.155,loyalty,2024-02-08\r\n32382,1119,LATAM,fashion,retail,70.52,4,0.009,none,2024-01-12\r\n32383,1872,LATAM,grocery,online,55.88,3,0.022,coupon,2024-07-03\r\n32384,2418,AMER,home,retail,46.21,8,0.122,none,2024-12-08\r\n32385,2420,EMEA,grocery,retail,44.69,4,0.154,none,2024-12-16\r\n32386,1401,LATAM,fashion,online,63.37,7,0.100,none,2024-10-03\r\n32387,2456,APAC,grocery,online,63.43,4,0.112,none,2024-10-07\r\n32388,1654,EMEA,sports,online,226.75,8,0.169,coupon,2024-07-17\r\n32389,1137,APAC,electronics,retail,79.16,4,0.001,coupon,2024-07-01\r\n32390,1964,EMEA,grocery,retail,56.34,2,0.070,coupon,2024-02-02\r\n32391,2153,APAC,toys,mobile,80.62,4,0.014,bundle,2024-02-15\r\n32392,1471,EMEA,electronics,mobile,23.58,2,0.080,none,2024-08-17\r\n32393,1689,LATAM,grocery,online,85.47,1,0.022,none,2024-02-16\r\n32394,1330,EMEA,home,online,39.20,7,0.116,none,2024-03-22\r\n32395,1249,EMEA,grocery,retail,26.46,4,0.131,none,2024-06-24\r\n32396,1940,APAC,electronics,retail,15.40,8,0.016,coupon,2024-08-22\r\n32397,2351,EMEA,grocery,retail,67.17,6,0.088,loyalty,2024-
05-07\r\n32398,1541,APAC,fashion,online,44.74,8,0.124,none,2024-04-08\r\n32399,2008,APAC,fashion,online,20.37,1,0.079,loyalty,2024-08-05\r\n32400,1798,AMER,toys,partner,50.46,7,0.171,coupon,2024-04-20\r\n32401,1024,APAC,fashion,online,18.13,4,0.057,coupon,2024-04-15\r\n32402,1883,LATAM,fashion,retail,49.57,7,0.052,coupon,2024-10-11\r\n32403,2135,EMEA,grocery,online,115.74,3,0.142,none,2024-01-10\r\n32404,1827,EMEA,grocery,online,79.85,5,0.120,coupon,2024-08-10\r\n32405,2167,APAC,home,partner,40.80,6,0.166,none,2024-08-20\r\n32406,1618,EMEA,fashion,mobile,35.39,2,0.013,coupon,2024-05-03\r\n32407,2100,APAC,fashion,retail,19.70,4,0.014,none,2024-11-06\r\n32408,2096,LATAM,toys,retail,109.60,8,0.057,none,2024-01-20\r\n32409,1059,AMER,grocery,mobile,35.79,2,0.153,none,2024-12-25\r\n32410,1940,APAC,grocery,partner,44.11,1,0.209,none,2024-03-28\r\n32411,1093,APAC,sports,retail,102.21,2,0.214,coupon,2024-10-18\r\n32412,2207,APAC,home,online,30.53,5,0.101,none,2024-10-09\r\n32413,1409,APAC,sports,partner,69.25,3,0.117,none,2024-09-17\r\n32414,2059,AMER,home,partner,119.55,5,0.167,none,2024-07-09\r\n32415,2370,EMEA,grocery,online,18.46,2,0.167,none,2024-06-14\r\n32416,1359,LATAM,fashion,retail,53.63,6,0.084,coupon,2024-07-10\r\n32417,2404,EMEA,electronics,online,51.47,2,0.039,none,2024-10-08\r\n32418,1248,APAC,grocery,online,67.98,3,0.039,none,2024-04-27\r\n32419,2416,LATAM,electronics,mobile,49.46,3,0.006,none,2024-11-06\r\n32420,1667,AMER,grocery,online,57.55,7,0.052,coupon,2024-08-07\r\n32421,1146,LATAM,sports,online,80.09,3,0.189,none,2024-09-21\r\n32422,1881,LATAM,home,online,102.88,6,0.123,coupon,2024-10-16\r\n32423,1498,LATAM,grocery,online,32.95,5,0.070,loyalty,2024-08-18\r\n32424,1011,APAC,grocery,online,67.69,7,0.183,bundle,2024-12-20\r\n32425,1709,EMEA,toys,online,27.79,8,0.032,loyalty,2024-04-03\r\n32426,1117,LATAM,grocery,online,100.20,8,0.146,coupon,2024-10-25\r\n32427,1685,AMER,fashion,online,57.66,3,0.194,none,2024-08-27\r\n32428,2360,EMEA,electronics,online,66
.77,2,0.095,none,2024-04-21\r\n32429,2310,EMEA,grocery,retail,42.90,2,0.042,none,2024-05-26\r\n32430,2039,EMEA,home,retail,88.52,4,0.106,loyalty,2024-11-28\r\n32431,2346,LATAM,grocery,online,98.71,2,0.237,coupon,2024-12-15\r\n32432,1348,AMER,electronics,retail,78.62,4,0.089,none,2024-12-09\r\n32433,1147,EMEA,electronics,partner,33.99,2,0.011,none,2024-11-13\r\n32434,1821,LATAM,grocery,partner,25.99,2,0.047,coupon,2024-07-24\r\n32435,2105,APAC,home,mobile,54.57,6,0.121,coupon,2024-09-09\r\n32436,1151,APAC,toys,retail,86.73,6,0.147,loyalty,2024-01-13\r\n32437,1306,LATAM,home,mobile,35.66,1,0.168,loyalty,2024-06-17\r\n32438,1771,AMER,home,online,63.72,1,0.188,coupon,2024-12-22\r\n32439,2259,AMER,home,retail,56.95,4,0.236,none,2024-01-25\r\n32440,1973,EMEA,sports,retail,60.82,1,0.039,none,2024-08-22\r\n32441,1872,LATAM,electronics,online,176.27,6,0.124,none,2024-11-20\r\n32442,1697,APAC,fashion,online,35.50,7,0.042,none,2024-12-14\r\n32443,1396,EMEA,electronics,online,96.09,1,0.043,bundle,2024-04-08\r\n32444,1692,LATAM,grocery,retail,57.07,8,0.175,none,2024-01-15\r\n32445,1103,EMEA,grocery,retail,66.35,6,0.249,bundle,2024-05-25\r\n32446,1913,LATAM,electronics,retail,41.94,8,0.050,loyalty,2024-05-11\r\n32447,1864,EMEA,fashion,mobile,29.09,3,0.230,none,2024-12-10\r\n32448,1782,LATAM,home,mobile,101.64,8,0.119,coupon,2024-07-25\r\n32449,1504,AMER,home,retail,116.99,3,0.002,none,2024-11-21\r\n32450,2183,EMEA,fashion,online,43.32,8,0.222,bundle,2024-12-15\r\n32451,1117,LATAM,fashion,mobile,18.84,7,0.132,none,2024-08-20\r\n32452,1449,EMEA,fashion,online,36.21,6,0.016,none,2024-03-10\r\n32453,2217,LATAM,electronics,online,167.99,1,0.039,none,2024-08-11\r\n32454,1541,APAC,home,online,44.75,4,0.183,coupon,2024-09-28\r\n32455,2465,EMEA,electronics,retail,46.80,6,0.161,none,2024-04-08\r\n32456,1954,APAC,home,online,49.45,8,0.227,coupon,2024-10-07\r\n32457,2390,AMER,sports,online,75.20,7,0.118,bundle,2024-05-14\r\n32458,2017,EMEA,home,retail,36.37,2,0.140,none,2024-03-10\r\n32459,1
031,AMER,home,retail,60.07,2,0.159,none,2024-03-01\r\n32460,2231,LATAM,fashion,online,146.35,5,0.038,coupon,2024-10-21\r\n32461,1646,APAC,electronics,mobile,93.91,7,0.173,none,2024-04-10\r\n32462,2109,EMEA,toys,retail,19.13,4,0.198,bundle,2024-01-13\r\n32463,2066,APAC,electronics,retail,138.24,8,0.191,none,2024-03-05\r\n32464,2007,LATAM,home,mobile,30.08,1,0.040,none,2024-07-25\r\n32465,1953,EMEA,grocery,mobile,32.01,8,0.067,none,2024-03-17\r\n32466,2498,LATAM,electronics,online,43.84,4,0.006,none,2024-10-25\r\n32467,1626,EMEA,grocery,online,45.60,7,0.047,none,2024-06-16\r\n32468,2104,EMEA,sports,retail,33.94,7,0.051,none,2024-04-22\r\n32469,1226,AMER,electronics,retail,30.15,3,0.071,none,2024-03-09\r\n32470,1941,AMER,grocery,online,51.85,5,0.076,bundle,2024-09-22\r\n32471,1550,APAC,home,retail,145.61,6,0.192,bundle,2024-01-21\r\n32472,1541,APAC,home,online,49.81,1,0.038,none,2024-09-23\r\n32473,1676,LATAM,fashion,online,34.33,2,0.231,none,2024-10-05\r\n32474,1361,LATAM,toys,retail,79.29,4,0.135,loyalty,2024-07-05\r\n32475,2434,APAC,home,mobile,30.55,5,0.079,coupon,2024-06-08\r\n32476,1706,EMEA,home,partner,167.77,6,0.047,none,2024-10-02\r\n32477,1661,LATAM,toys,retail,107.63,2,0.186,coupon,2024-01-01\r\n32478,1300,EMEA,sports,online,27.69,5,0.027,none,2024-07-08\r\n32479,1526,EMEA,fashion,mobile,16.59,1,0.089,coupon,2024-07-09\r\n32480,1384,LATAM,home,online,84.95,6,0.173,none,2024-01-17\r\n32481,1392,AMER,fashion,retail,51.64,6,0.192,none,2024-10-06\r\n32482,1092,AMER,home,mobile,56.71,5,0.097,loyalty,2024-04-18\r\n32483,1175,AMER,electronics,online,58.97,8,0.080,none,2024-09-02\r\n32484,2104,EMEA,toys,online,68.60,3,0.087,none,2024-11-18\r\n32485,1544,LATAM,toys,mobile,117.57,8,0.184,none,2024-08-11\r\n32486,1786,APAC,grocery,partner,24.84,2,0.107,none,2024-10-22\r\n32487,1147,EMEA,electronics,retail,32.77,2,0.062,loyalty,2024-07-05\r\n32488,1260,LATAM,home,online,105.67,8,0.238,none,2024-07-27\r\n32489,1807,EMEA,fashion,retail,77.00,4,0.230,none,2024-08-12\r\n32
490,1143,LATAM,fashion,online,24.67,2,0.048,none,2024-04-24\r\n32491,1294,APAC,electronics,mobile,40.77,6,0.039,none,2024-05-17\r\n32492,1133,EMEA,toys,retail,61.04,1,0.160,none,2024-01-11\r\n32493,1955,AMER,grocery,online,55.22,6,0.192,loyalty,2024-05-09\r\n32494,1823,EMEA,sports,retail,76.87,1,0.043,coupon,2024-09-04\r\n32495,1525,APAC,home,online,36.08,5,0.008,bundle,2024-09-27\r\n32496,2335,EMEA,home,retail,38.82,6,0.158,none,2024-10-08\r\n32497,1581,APAC,home,online,124.95,4,0.147,bundle,2024-09-03\r\n32498,2259,AMER,electronics,retail,33.99,1,0.160,bundle,2024-01-23\r\n32499,1451,EMEA,sports,online,71.02,2,0.037,coupon,2024-07-15\r\n32500,2291,EMEA,toys,mobile,59.71,5,0.143,loyalty,2024-05-11\r\n32501,1938,APAC,fashion,partner,77.19,8,0.140,coupon,2024-02-11\r\n32502,2403,LATAM,home,mobile,23.75,7,0.184,none,2024-10-12\r\n32503,1196,APAC,grocery,retail,55.95,5,0.223,none,2024-02-21\r\n32504,1003,APAC,electronics,online,62.09,1,0.090,loyalty,2024-09-03\r\n32505,2372,AMER,toys,retail,44.28,3,0.135,none,2024-02-27\r\n32506,1153,AMER,grocery,retail,74.34,6,0.185,bundle,2024-02-13\r\n32507,2058,LATAM,grocery,mobile,45.92,1,0.103,bundle,2024-06-04\r\n32508,1635,APAC,home,online,99.70,6,0.063,coupon,2024-10-17\r\n32509,1216,APAC,electronics,online,50.45,1,0.150,coupon,2024-10-05\r\n32510,2213,APAC,home,retail,61.59,4,0.084,none,2024-11-05\r\n32511,1848,EMEA,sports,retail,36.85,4,0.227,coupon,2024-12-26\r\n32512,1539,LATAM,home,retail,39.07,6,0.039,none,2024-02-12\r\n32513,2089,EMEA,grocery,mobile,37.89,8,0.059,none,2024-12-05\r\n32514,1204,AMER,grocery,retail,81.27,1,0.056,coupon,2024-08-25\r\n32515,2356,LATAM,sports,online,26.35,3,0.084,coupon,2024-03-25\r\n32516,2448,APAC,grocery,retail,81.75,4,0.188,none,2024-02-27\r\n32517,1615,LATAM,grocery,online,87.76,3,0.053,bundle,2024-12-11\r\n32518,1523,LATAM,home,retail,54.29,4,0.096,none,2024-01-06\r\n32519,1034,EMEA,grocery,online,97.50,8,0.180,none,2024-06-16\r\n32520,1352,AMER,electronics,retail,38.01,5,0.086,coupon,2
024-02-05\r\n32521,1692,LATAM,home,retail,33.76,4,0.195,none,2024-04-19\r\n32522,1242,LATAM,electronics,online,38.81,1,0.143,none,2024-04-02\r\n32523,1891,APAC,home,retail,43.38,6,0.138,none,2024-04-20\r\n32524,1572,LATAM,sports,online,68.16,1,0.226,none,2024-03-17\r\n32525,2059,AMER,sports,retail,56.77,5,0.236,bundle,2024-01-26\r\n32526,1662,LATAM,fashion,retail,43.60,5,0.010,none,2024-02-23\r\n32527,2411,EMEA,electronics,mobile,33.39,4,0.016,none,2024-06-24\r\n32528,2025,EMEA,toys,online,36.79,5,0.062,coupon,2024-10-06\r\n32529,1469,EMEA,electronics,mobile,85.87,7,0.190,coupon,2024-05-03\r\n32530,2178,AMER,fashion,online,40.26,5,0.053,none,2024-06-26\r\n32531,1186,APAC,grocery,mobile,94.33,6,0.123,coupon,2024-07-12\r\n32532,1782,LATAM,fashion,retail,45.73,5,0.021,none,2024-05-17\r\n32533,1459,LATAM,electronics,online,55.83,1,0.035,none,2024-04-07\r\n32534,1145,AMER,electronics,retail,93.84,7,0.119,none,2024-04-15\r\n32535,1036,EMEA,home,online,63.33,7,0.119,none,2024-02-15\r\n32536,2476,APAC,electronics,retail,190.29,1,0.144,coupon,2024-05-12\r\n32537,2041,LATAM,grocery,online,39.81,3,0.218,none,2024-05-11\r\n32538,1494,AMER,home,retail,91.55,6,0.037,none,2024-03-23\r\n32539,1384,LATAM,grocery,online,40.35,8,0.036,bundle,2024-06-17\r\n32540,1151,APAC,sports,mobile,43.14,3,0.166,bundle,2024-07-14\r\n32541,1339,EMEA,fashion,online,50.36,1,0.176,none,2024-07-23\r\n32542,1020,APAC,sports,retail,24.06,4,0.177,none,2024-10-13\r\n32543,2401,LATAM,sports,online,56.32,3,0.046,none,2024-11-18\r\n32544,2079,EMEA,grocery,online,40.22,1,0.007,none,2024-08-28\r\n32545,1537,LATAM,grocery,retail,58.01,4,0.142,coupon,2024-08-28\r\n32546,1139,EMEA,grocery,retail,86.53,7,0.147,coupon,2024-12-02\r\n32547,1492,APAC,toys,retail,36.64,3,0.178,bundle,2024-03-13\r\n32548,1798,AMER,grocery,retail,57.88,2,0.061,bundle,2024-06-07\r\n32549,1866,EMEA,fashion,retail,72.89,2,0.142,none,2024-06-07\r\n32550,1367,AMER,home,retail,72.70,1,0.048,none,2024-10-04\r\n32551,1606,AMER,grocery,online,81.14
,1,0.208,none,2024-02-07\r\n32552,2372,AMER,grocery,online,79.13,6,0.138,none,2024-09-08\r\n32553,1240,EMEA,fashion,online,59.28,5,0.035,loyalty,2024-09-11\r\n32554,1866,EMEA,sports,retail,73.19,8,0.009,none,2024-07-19\r\n32555,2151,APAC,grocery,online,108.59,5,0.207,bundle,2024-10-25\r\n32556,1353,EMEA,fashion,online,19.51,5,0.002,none,2024-12-18\r\n32557,2121,APAC,home,mobile,35.25,1,0.247,none,2024-03-14\r\n32558,1383,AMER,grocery,retail,61.66,8,0.162,none,2024-11-20\r\n32559,1779,APAC,electronics,retail,36.12,1,0.194,loyalty,2024-06-17\r\n32560,2491,APAC,electronics,mobile,56.29,4,0.244,coupon,2024-02-23\r\n32561,1140,LATAM,grocery,retail,87.48,7,0.120,none,2024-09-06\r\n32562,2288,AMER,home,online,86.84,7,0.210,none,2024-04-01\r\n32563,1619,APAC,grocery,retail,53.79,3,0.208,bundle,2024-04-07\r\n32564,1523,LATAM,home,retail,103.56,4,0.053,none,2024-03-08\r\n32565,1872,LATAM,grocery,online,50.55,4,0.186,none,2024-03-04\r\n32566,1471,EMEA,grocery,retail,19.40,3,0.241,none,2024-02-01\r\n32567,1070,EMEA,toys,online,83.71,2,0.243,none,2024-02-12\r\n32568,2031,AMER,home,mobile,85.47,2,0.008,none,2024-06-03\r\n32569,2072,AMER,electronics,partner,79.93,2,0.189,loyalty,2024-02-25\r\n32570,1924,AMER,grocery,online,53.29,2,0.099,none,2024-01-06\r\n32571,2291,EMEA,grocery,partner,90.27,4,0.142,none,2024-06-05\r\n32572,1013,LATAM,grocery,retail,58.33,4,0.036,none,2024-03-02\r\n32573,2390,AMER,fashion,online,83.69,5,0.123,none,2024-04-08\r\n32574,1717,AMER,grocery,partner,27.94,1,0.149,none,2024-04-10\r\n32575,1266,AMER,home,mobile,95.16,7,0.221,none,2024-05-10\r\n32576,1156,APAC,grocery,mobile,40.50,3,0.203,none,2024-11-06\r\n32577,1444,EMEA,toys,retail,96.39,4,0.178,none,2024-04-12\r\n32578,1638,EMEA,fashion,retail,78.61,5,0.140,none,2024-10-23\r\n32579,1971,EMEA,electronics,online,135.82,8,0.160,bundle,2024-07-25\r\n32580,1194,APAC,electronics,mobile,68.72,4,0.067,none,2024-09-16\r\n32581,1858,LATAM,sports,mobile,53.05,3,0.092,bundle,2024-10-06\r\n32582,2462,EMEA,fashion,o
nline,82.50,1,0.097,none,2024-12-15\r\n32583,2184,APAC,grocery,mobile,57.74,7,0.072,none,2024-03-22\r\n32584,1372,APAC,grocery,mobile,128.74,4,0.156,none,2024-06-05\r\n32585,1329,APAC,fashion,online,44.99,5,0.122,none,2024-07-11\r\n32586,1179,APAC,home,retail,40.67,6,0.038,none,2024-04-01\r\n32587,2362,AMER,electronics,mobile,45.09,6,0.184,none,2024-01-10\r\n32588,1046,EMEA,home,online,55.21,2,0.217,bundle,2024-01-24\r\n32589,1904,APAC,electronics,retail,58.66,8,0.036,coupon,2024-03-13\r\n32590,1660,AMER,grocery,online,37.70,4,0.212,coupon,2024-04-07\r\n32591,1338,EMEA,grocery,retail,53.90,7,0.028,none,2024-09-16\r\n32592,2422,APAC,electronics,online,87.51,5,0.105,loyalty,2024-03-17\r\n32593,1686,LATAM,sports,retail,74.69,7,0.245,coupon,2024-03-20\r\n32594,1192,EMEA,home,partner,49.02,2,0.220,bundle,2024-02-07\r\n32595,2413,AMER,sports,mobile,41.13,2,0.016,none,2024-08-20\r\n32596,2265,APAC,sports,retail,63.82,4,0.094,none,2024-10-13\r\n32597,1625,EMEA,electronics,retail,106.83,7,0.036,loyalty,2024-09-02\r\n32598,1267,EMEA,electronics,mobile,90.58,7,0.078,coupon,2024-03-01\r\n32599,1479,AMER,grocery,mobile,51.97,6,0.217,none,2024-07-01\r\n32600,1663,LATAM,sports,retail,68.23,1,0.195,none,2024-03-15\r\n32601,1410,AMER,electronics,mobile,66.52,4,0.233,none,2024-02-19\r\n32602,1746,LATAM,sports,online,130.12,1,0.210,bundle,2024-01-03\r\n32603,2452,LATAM,electronics,retail,36.33,6,0.062,none,2024-06-01\r\n32604,1488,AMER,grocery,online,43.25,3,0.163,loyalty,2024-02-20\r\n32605,1195,AMER,grocery,online,211.99,8,0.122,loyalty,2024-10-14\r\n32606,1125,LATAM,toys,online,74.57,1,0.117,none,2024-01-21\r\n32607,2157,AMER,home,mobile,150.73,5,0.147,none,2024-11-07\r\n32608,1523,LATAM,grocery,retail,38.20,3,0.037,bundle,2024-03-19\r\n32609,1545,AMER,fashion,retail,98.19,4,0.172,none,2024-10-02\r\n32610,1630,APAC,grocery,online,68.68,5,0.049,none,2024-04-06\r\n32611,1172,APAC,fashion,retail,77.39,7,0.005,coupon,2024-03-12\r\n32612,1748,APAC,home,retail,140.17,1,0.150,coupon,2024-
05-17\r\n32613,2496,EMEA,home,online,73.18,3,0.056,bundle,2024-10-22\r\n32614,2436,LATAM,home,retail,85.95,4,0.086,none,2024-06-11\r\n32615,2073,AMER,fashion,online,47.11,6,0.144,none,2024-04-10\r\n32616,1674,LATAM,grocery,mobile,51.49,8,0.161,none,2024-12-27\r\n32617,1017,AMER,toys,retail,73.91,5,0.147,loyalty,2024-02-16\r\n32618,1678,LATAM,sports,online,81.01,1,0.013,none,2024-11-20\r\n32619,1210,LATAM,toys,online,31.44,6,0.185,none,2024-10-14\r\n32620,1122,AMER,home,retail,83.91,6,0.143,coupon,2024-08-09\r\n32621,1855,APAC,grocery,online,80.94,6,0.065,none,2024-02-17\r\n32622,2339,AMER,grocery,partner,79.93,4,0.133,coupon,2024-11-12\r\n32623,1423,EMEA,grocery,mobile,56.31,8,0.147,bundle,2024-09-12\r\n32624,1479,AMER,home,online,107.62,3,0.064,loyalty,2024-05-06\r\n32625,1576,EMEA,electronics,retail,76.35,6,0.009,coupon,2024-08-04\r\n32626,1697,APAC,grocery,mobile,38.00,4,0.247,coupon,2024-07-05\r\n32627,1391,LATAM,home,retail,57.41,8,0.034,none,2024-04-07\r\n32628,1820,AMER,electronics,retail,62.64,8,0.150,none,2024-08-08\r\n32629,2445,APAC,electronics,online,69.38,5,0.079,none,2024-10-17\r\n32630,1923,LATAM,toys,retail,66.89,5,0.223,bundle,2024-04-27\r\n32631,2245,APAC,sports,retail,41.85,5,0.006,none,2024-05-23\r\n32632,2467,AMER,electronics,online,78.92,6,0.175,coupon,2024-12-26\r\n32633,2050,APAC,home,online,47.59,8,0.242,none,2024-01-21\r\n32634,1600,AMER,home,mobile,70.64,8,0.231,coupon,2024-06-02\r\n32635,2235,AMER,electronics,online,43.09,2,0.064,none,2024-05-14\r\n32636,1581,APAC,toys,partner,51.70,6,0.051,bundle,2024-01-22\r\n32637,1578,LATAM,electronics,retail,50.63,7,0.048,coupon,2024-12-02\r\n32638,1729,AMER,grocery,online,29.62,4,0.034,none,2024-01-25\r\n32639,1733,LATAM,grocery,retail,71.71,5,0.072,none,2024-10-17\r\n32640,1640,APAC,grocery,online,52.87,7,0.189,none,2024-04-03\r\n32641,1915,LATAM,fashion,retail,50.32,7,0.194,none,2024-01-01\r\n32642,2435,AMER,grocery,retail,85.80,2,0.084,none,2024-05-10\r\n32643,1748,APAC,fashion,mobile,89.01,6,0.0
19,coupon,2024-01-09\r\n32644,1918,EMEA,grocery,retail,37.08,5,0.087,none,2024-09-25\r\n32645,2473,EMEA,home,retail,39.56,7,0.034,none,2024-06-11\r\n32646,1955,AMER,electronics,retail,131.72,6,0.239,none,2024-02-06\r\n32647,1579,AMER,sports,online,86.16,4,0.070,none,2024-11-10\r\n32648,1713,EMEA,fashion,online,44.99,1,0.149,coupon,2024-08-13\r\n32649,1227,AMER,grocery,mobile,79.08,2,0.233,bundle,2024-10-13\r\n32650,1449,EMEA,fashion,retail,96.54,5,0.138,none,2024-05-22\r\n32651,2172,EMEA,sports,mobile,77.99,3,0.075,none,2024-11-26\r\n32652,1113,EMEA,toys,mobile,71.56,5,0.080,coupon,2024-11-18\r\n32653,1505,EMEA,home,online,72.87,8,0.051,none,2024-05-15\r\n32654,2046,APAC,toys,mobile,34.23,4,0.152,none,2024-02-06\r\n32655,2135,EMEA,fashion,online,85.21,5,0.243,none,2024-11-21\r\n32656,1115,AMER,grocery,online,116.09,8,0.191,none,2024-03-03\r\n32657,1990,EMEA,home,mobile,66.81,2,0.101,none,2024-10-16\r\n32658,1520,APAC,grocery,mobile,173.27,3,0.197,none,2024-01-13\r\n32659,2202,APAC,home,retail,96.87,2,0.008,none,2024-08-02\r\n32660,2486,APAC,grocery,online,154.04,7,0.166,none,2024-02-18\r\n32661,2314,EMEA,toys,online,60.88,5,0.196,none,2024-03-23\r\n32662,1887,LATAM,grocery,online,25.11,5,0.204,loyalty,2024-08-26\r\n32663,2105,APAC,electronics,retail,90.66,2,0.203,none,2024-11-06\r\n32664,2270,APAC,sports,retail,54.53,4,0.092,coupon,2024-05-08\r\n32665,2114,AMER,electronics,online,136.55,7,0.214,none,2024-08-08\r\n32666,1598,EMEA,electronics,retail,19.75,4,0.120,none,2024-04-01\r\n32667,1819,AMER,grocery,online,23.60,4,0.102,coupon,2024-05-16\r\n32668,1653,APAC,electronics,online,39.99,8,0.166,none,2024-10-26\r\n32669,2219,LATAM,fashion,mobile,37.26,1,0.116,coupon,2024-02-20\r\n32670,1004,LATAM,grocery,retail,43.86,5,0.193,coupon,2024-04-18\r\n32671,2175,AMER,electronics,online,44.20,7,0.194,none,2024-10-04\r\n32672,1782,LATAM,electronics,online,138.30,8,0.036,none,2024-01-20\r\n32673,2415,AMER,home,retail,67.11,5,0.099,none,2024-04-20\r\n32674,2093,LATAM,sports,onli
ne,80.64,5,0.014,none,2024-02-21\r\n32675,2062,EMEA,home,retail,103.03,1,0.225,coupon,2024-03-13\r\n32676,1160,LATAM,grocery,online,37.77,2,0.083,coupon,2024-04-26\r\n32677,1605,APAC,home,online,151.32,6,0.022,loyalty,2024-03-09\r\n32678,2408,EMEA,toys,retail,66.85,7,0.060,bundle,2024-01-10\r\n32679,1519,APAC,grocery,online,121.17,3,0.047,bundle,2024-06-01\r\n32680,2013,APAC,grocery,online,86.60,7,0.178,loyalty,2024-03-07\r\n32681,1629,LATAM,grocery,partner,23.73,5,0.198,coupon,2024-05-12\r\n32682,2364,APAC,fashion,online,34.87,6,0.206,none,2024-01-28\r\n32683,1826,LATAM,sports,mobile,108.34,6,0.210,none,2024-10-10\r\n32684,2089,EMEA,grocery,online,66.68,1,0.195,loyalty,2024-02-21\r\n32685,2279,LATAM,electronics,online,49.19,3,0.041,none,2024-12-07\r\n32686,1865,LATAM,grocery,online,71.69,8,0.245,none,2024-02-07\r\n32687,2471,APAC,electronics,online,41.23,2,0.244,bundle,2024-11-10\r\n32688,2451,APAC,electronics,partner,29.20,8,0.138,none,2024-10-25\r\n32689,1959,EMEA,home,retail,42.47,5,0.002,coupon,2024-02-17\r\n32690,2306,AMER,fashion,online,111.37,4,0.221,none,2024-07-11\r\n32691,2350,APAC,grocery,online,95.10,5,0.107,coupon,2024-07-25\r\n32692,1505,EMEA,grocery,retail,34.95,4,0.139,none,2024-04-16\r\n32693,1824,LATAM,toys,online,37.92,7,0.149,none,2024-11-15\r\n32694,1052,LATAM,electronics,partner,102.49,5,0.002,none,2024-07-16\r\n32695,2308,AMER,toys,online,28.66,1,0.155,coupon,2024-04-28\r\n32696,1721,EMEA,home,online,28.58,7,0.160,coupon,2024-09-06\r\n32697,2118,AMER,toys,online,51.45,4,0.134,none,2024-11-01\r\n32698,2060,LATAM,fashion,retail,62.35,4,0.027,none,2024-01-09\r\n32699,2170,EMEA,fashion,online,33.84,5,0.035,bundle,2024-10-13\r\n32700,1928,AMER,toys,online,49.81,1,0.147,none,2024-01-16\r\n32701,2115,APAC,sports,retail,98.49,4,0.233,none,2024-06-21\r\n32702,1138,AMER,fashion,online,43.10,6,0.244,coupon,2024-08-19\r\n32703,1871,APAC,home,online,59.87,1,0.146,none,2024-06-25\r\n32704,2400,EMEA,sports,online,50.54,1,0.003,loyalty,2024-10-25\r\n32705,11
11,APAC,home,retail,48.95,7,0.192,none,2024-05-04\r\n32706,2396,AMER,fashion,retail,29.87,8,0.193,none,2024-09-01\r\n32707,1559,EMEA,grocery,online,49.53,3,0.077,coupon,2024-04-18\r\n32708,2159,AMER,electronics,retail,35.82,5,0.201,none,2024-04-15\r\n32709,2382,LATAM,electronics,retail,38.71,7,0.213,none,2024-07-19\r\n32710,1393,LATAM,sports,retail,62.71,2,0.032,none,2024-08-01\r\n32711,2144,EMEA,home,retail,47.02,7,0.215,coupon,2024-10-23\r\n32712,1791,LATAM,home,online,31.34,8,0.218,none,2024-05-21\r\n32713,1731,AMER,electronics,online,65.29,1,0.194,none,2024-05-05\r\n32714,1919,EMEA,fashion,online,71.87,1,0.138,bundle,2024-12-04\r\n32715,1980,LATAM,toys,online,118.23,4,0.014,none,2024-10-09\r\n32716,1687,APAC,toys,retail,55.61,4,0.193,loyalty,2024-03-03\r\n32717,2141,AMER,electronics,online,41.19,3,0.086,loyalty,2024-10-19\r\n32718,1063,AMER,electronics,retail,30.76,2,0.103,loyalty,2024-06-12\r\n32719,1276,AMER,electronics,retail,64.37,4,0.175,none,2024-01-20\r\n32720,1525,APAC,toys,retail,30.57,6,0.181,coupon,2024-02-14\r\n32721,1000,APAC,fashion,retail,45.78,8,0.130,none,2024-12-01\r\n32722,2192,APAC,sports,online,40.52,1,0.016,bundle,2024-08-13\r\n32723,1571,EMEA,toys,mobile,52.05,3,0.081,none,2024-09-26\r\n32724,2405,AMER,grocery,retail,43.67,4,0.019,none,2024-05-19\r\n32725,2110,LATAM,fashion,retail,34.32,1,0.131,bundle,2024-07-28\r\n32726,2289,APAC,electronics,mobile,73.04,3,0.055,none,2024-05-03\r\n32727,2224,EMEA,toys,retail,44.96,6,0.222,coupon,2024-12-14\r\n32728,2396,AMER,grocery,online,31.95,1,0.249,none,2024-05-02\r\n32729,2446,LATAM,fashion,retail,23.50,7,0.141,none,2024-09-24\r\n32730,1871,APAC,toys,online,112.36,7,0.229,bundle,2024-05-24\r\n32731,2280,EMEA,fashion,retail,103.67,5,0.199,bundle,2024-03-04\r\n32732,1789,EMEA,home,online,80.81,5,0.027,none,2024-02-21\r\n32733,2202,APAC,home,online,71.34,1,0.172,none,2024-01-01\r\n32734,1247,AMER,home,retail,43.56,4,0.055,none,2024-11-06\r\n32735,1001,LATAM,fashion,retail,51.96,7,0.098,loyalty,2024-09-
08\r\n32736,2231,LATAM,grocery,online,89.53,7,0.249,bundle,2024-01-21\r\n32737,1127,EMEA,toys,online,86.16,4,0.162,coupon,2024-05-24\r\n32738,1578,LATAM,electronics,online,33.20,5,0.249,none,2024-11-01\r\n32739,1165,AMER,electronics,partner,49.61,5,0.075,none,2024-09-19\r\n32740,1102,APAC,home,partner,81.74,7,0.112,none,2024-06-26\r\n32741,1435,AMER,toys,retail,53.51,1,0.131,none,2024-06-09\r\n32742,2308,AMER,grocery,retail,50.22,3,0.150,none,2024-01-11\r\n32743,1392,AMER,sports,online,51.20,4,0.156,none,2024-08-01\r\n32744,2082,APAC,fashion,partner,58.59,5,0.074,coupon,2024-11-01\r\n32745,1046,EMEA,home,online,65.58,2,0.222,coupon,2024-12-01\r\n32746,2306,AMER,electronics,retail,79.61,8,0.195,none,2024-10-21\r\n32747,1879,EMEA,home,mobile,108.55,5,0.083,none,2024-09-16\r\n32748,1852,AMER,electronics,retail,35.18,1,0.190,none,2024-06-03\r\n32749,1248,APAC,electronics,retail,98.97,6,0.122,none,2024-11-02\r\n32750,1085,EMEA,home,online,32.81,7,0.196,none,2024-01-05\r\n32751,1891,APAC,home,online,43.50,6,0.214,coupon,2024-01-18\r\n32752,1529,LATAM,grocery,retail,65.19,6,0.200,none,2024-02-21\r\n32753,1594,LATAM,grocery,retail,141.54,7,0.199,none,2024-02-28\r\n32754,2352,APAC,electronics,retail,31.01,2,0.003,bundle,2024-05-12\r\n32755,1655,LATAM,fashion,retail,36.81,4,0.056,none,2024-05-17\r\n32756,2460,AMER,grocery,retail,52.25,4,0.238,none,2024-08-03\r\n32757,1195,AMER,grocery,online,141.53,7,0.013,coupon,2024-12-07\r\n32758,1347,APAC,fashion,retail,44.85,4,0.160,none,2024-01-16\r\n32759,2277,EMEA,fashion,retail,41.68,5,0.223,none,2024-06-14\r\n32760,2146,APAC,fashion,online,92.05,5,0.212,coupon,2024-04-01\r\n32761,1057,LATAM,grocery,online,48.26,4,0.039,none,2024-04-22\r\n32762,2283,AMER,electronics,retail,121.04,6,0.038,none,2024-05-11\r\n32763,2320,LATAM,electronics,retail,39.58,4,0.191,bundle,2024-02-01\r\n32764,1807,EMEA,electronics,online,54.07,1,0.071,none,2024-12-15\r\n32765,1956,APAC,sports,online,65.39,3,0.199,loyalty,2024-10-01\r\n32766,1020,APAC,home,retai
l,30.12,8,0.142,loyalty,2024-12-12\r\n32767,1718,EMEA,grocery,retail,81.50,5,0.235,none,2024-07-18\r\n32768,2078,APAC,sports,partner,143.22,4,0.081,coupon,2024-09-03\r\n32769,1440,AMER,home,retail,30.89,6,0.213,coupon,2024-12-13\r\n32770,2134,AMER,home,retail,45.95,1,0.024,none,2024-06-24\r\n32771,2386,EMEA,grocery,online,69.81,4,0.244,coupon,2024-07-23\r\n32772,1367,AMER,home,online,90.25,5,0.231,none,2024-07-12\r\n32773,1149,LATAM,sports,retail,89.71,8,0.158,none,2024-06-24\r\n32774,2230,LATAM,toys,retail,48.75,5,0.244,none,2024-10-13\r\n32775,1708,LATAM,electronics,retail,54.39,5,0.204,none,2024-09-17\r\n32776,1172,APAC,sports,partner,124.16,7,0.130,loyalty,2024-05-15\r\n32777,2386,EMEA,fashion,online,78.53,2,0.248,coupon,2024-10-27\r\n32778,1206,EMEA,fashion,retail,62.14,5,0.180,coupon,2024-12-10\r\n32779,1520,APAC,toys,retail,29.51,5,0.010,none,2024-12-08\r\n32780,1835,AMER,home,online,65.52,7,0.183,none,2024-01-04\r\n32781,1680,LATAM,toys,online,31.14,4,0.215,none,2024-01-04\r\n32782,1874,LATAM,grocery,retail,30.17,6,0.042,coupon,2024-03-05\r\n32783,2386,EMEA,home,online,84.04,5,0.227,none,2024-02-06\r\n32784,1562,AMER,fashion,online,37.82,7,0.238,bundle,2024-05-16\r\n32785,1522,LATAM,fashion,online,65.24,8,0.035,coupon,2024-09-03\r\n32786,2391,EMEA,electronics,online,54.80,6,0.192,bundle,2024-09-02\r\n32787,1481,LATAM,home,retail,172.45,3,0.102,none,2024-02-04\r\n32788,1171,APAC,grocery,online,41.24,5,0.125,none,2024-03-18\r\n32789,2028,APAC,sports,online,80.81,7,0.189,coupon,2024-10-17\r\n32790,2406,EMEA,electronics,mobile,159.84,4,0.062,bundle,2024-11-28\r\n32791,1140,LATAM,home,retail,50.57,4,0.126,none,2024-06-23\r\n32792,2002,APAC,sports,mobile,71.61,3,0.186,none,2024-05-03\r\n32793,1239,APAC,electronics,online,42.17,1,0.078,coupon,2024-09-07\r\n32794,2163,EMEA,fashion,mobile,70.81,1,0.154,none,2024-08-02\r\n32795,1857,LATAM,grocery,mobile,19.86,3,0.198,none,2024-03-24\r\n32796,1899,APAC,fashion,online,52.50,2,0.075,loyalty,2024-09-20\r\n32797,1521,LATAM
,electronics,online,42.24,2,0.186,none,2024-10-07\r\n32798,1732,LATAM,fashion,online,24.96,6,0.236,bundle,2024-06-03\r\n32799,1008,AMER,grocery,mobile,35.78,3,0.070,coupon,2024-09-22\r\n32800,1658,AMER,sports,retail,85.04,3,0.123,coupon,2024-09-02\r\n32801,1456,APAC,home,retail,35.14,2,0.151,coupon,2024-11-07\r\n32802,1818,AMER,grocery,online,86.23,6,0.094,none,2024-08-11\r\n32803,1620,LATAM,sports,online,86.76,6,0.191,none,2024-12-03\r\n32804,1126,LATAM,grocery,retail,48.34,8,0.168,none,2024-02-28\r\n32805,1961,EMEA,toys,online,70.28,8,0.190,coupon,2024-09-03\r\n32806,1086,AMER,grocery,mobile,55.03,5,0.072,none,2024-02-22\r\n32807,1384,LATAM,fashion,online,29.49,4,0.129,none,2024-06-20\r\n32808,2179,LATAM,grocery,retail,46.14,4,0.216,none,2024-04-08\r\n32809,1777,AMER,grocery,online,100.90,3,0.070,none,2024-02-07\r\n32810,2463,AMER,grocery,mobile,30.88,1,0.127,none,2024-04-07\r\n32811,1190,EMEA,sports,online,44.48,2,0.167,none,2024-02-28\r\n32812,2364,APAC,home,retail,28.98,6,0.166,coupon,2024-05-22\r\n32813,1916,AMER,electronics,retail,73.29,3,0.094,coupon,2024-05-19\r\n32814,1941,AMER,toys,retail,67.05,2,0.224,bundle,2024-11-20\r\n32815,1723,LATAM,home,online,73.83,8,0.094,coupon,2024-08-11\r\n32816,2246,AMER,grocery,retail,28.97,5,0.230,loyalty,2024-11-24\r\n32817,1272,AMER,electronics,online,35.80,7,0.185,none,2024-04-26\r\n32818,2314,EMEA,electronics,retail,77.75,7,0.103,coupon,2024-05-10\r\n32819,2192,APAC,toys,online,115.06,5,0.017,none,2024-02-13\r\n32820,2421,AMER,electronics,mobile,71.70,2,0.020,bundle,2024-11-27\r\n32821,2321,APAC,electronics,retail,65.17,4,0.028,loyalty,2024-05-27\r\n32822,1116,LATAM,sports,online,52.06,4,0.055,bundle,2024-12-22\r\n32823,2481,APAC,grocery,partner,82.13,7,0.183,coupon,2024-02-18\r\n32824,1923,LATAM,home,online,64.40,7,0.139,none,2024-02-19\r\n32825,1081,AMER,sports,online,99.56,8,0.239,none,2024-05-15\r\n32826,2291,EMEA,electronics,retail,49.01,3,0.230,bundle,2024-02-10\r\n32827,1494,AMER,home,online,29.26,8,0.165,none,2
024-08-22\r\n32828,2142,LATAM,home,online,36.62,3,0.101,none,2024-05-05\r\n32829,1147,EMEA,home,retail,51.83,1,0.218,coupon,2024-02-21\r\n32830,1485,APAC,grocery,online,38.51,5,0.091,none,2024-02-07\r\n32831,1688,LATAM,electronics,retail,45.49,3,0.180,bundle,2024-10-17\r\n32832,1703,AMER,electronics,mobile,56.89,3,0.037,bundle,2024-06-20\r\n32833,2214,AMER,sports,mobile,26.09,5,0.044,bundle,2024-05-17\r\n32834,1628,EMEA,home,mobile,114.50,5,0.038,none,2024-02-16\r\n32835,1735,LATAM,toys,retail,73.19,8,0.168,bundle,2024-08-16\r\n32836,2390,AMER,home,online,50.13,2,0.165,bundle,2024-09-15\r\n32837,2288,AMER,fashion,online,68.55,8,0.241,coupon,2024-06-02\r\n32838,2226,EMEA,grocery,online,73.81,3,0.194,none,2024-01-11\r\n32839,1685,AMER,home,online,116.78,7,0.246,none,2024-01-24\r\n32840,1593,AMER,electronics,retail,48.43,8,0.088,none,2024-06-13\r\n32841,1155,EMEA,grocery,mobile,91.72,2,0.133,coupon,2024-12-27\r\n32842,2300,EMEA,grocery,online,55.20,4,0.141,bundle,2024-11-02\r\n32843,1969,LATAM,home,online,43.55,2,0.226,bundle,2024-09-20\r\n32844,1691,LATAM,home,online,51.99,2,0.100,none,2024-11-28\r\n32845,1314,AMER,home,mobile,34.82,7,0.139,bundle,2024-11-06\r\n32846,1104,APAC,home,online,79.87,3,0.087,none,2024-06-10\r\n32847,2356,LATAM,toys,online,43.81,2,0.177,none,2024-07-25\r\n32848,1785,EMEA,fashion,mobile,112.29,4,0.154,none,2024-02-23\r\n32849,1034,EMEA,sports,online,55.30,3,0.019,none,2024-04-14\r\n32850,1427,EMEA,fashion,mobile,84.33,2,0.236,coupon,2024-04-24\r\n32851,1868,AMER,grocery,online,95.03,5,0.157,none,2024-04-28\r\n32852,1779,APAC,home,mobile,65.98,7,0.178,none,2024-11-14\r\n32853,1741,AMER,toys,online,97.63,6,0.228,none,2024-05-23\r\n32854,1981,EMEA,electronics,retail,44.79,8,0.229,loyalty,2024-11-02\r\n32855,2042,LATAM,sports,mobile,34.95,8,0.003,coupon,2024-08-02\r\n32856,1336,APAC,home,retail,79.81,7,0.059,none,2024-10-28\r\n32857,2228,EMEA,fashion,mobile,77.44,6,0.247,none,2024-10-21\r\n32858,1947,EMEA,grocery,partner,96.85,4,0.062,none,2024-0
6-17\r\n32859,2137,LATAM,toys,retail,39.87,8,0.022,bundle,2024-04-16\r\n32860,1564,APAC,grocery,retail,119.25,3,0.033,coupon,2024-04-04\r\n32861,1677,EMEA,fashion,online,40.06,3,0.087,none,2024-08-08\r\n32862,2225,EMEA,grocery,mobile,70.14,2,0.169,bundle,2024-11-08\r\n32863,2367,AMER,grocery,retail,127.89,3,0.092,bundle,2024-09-12\r\n32864,1671,APAC,sports,retail,72.60,8,0.104,coupon,2024-08-21\r\n32865,2042,LATAM,fashion,online,20.39,2,0.143,loyalty,2024-06-09\r\n32866,1023,APAC,grocery,online,50.16,7,0.013,coupon,2024-09-06\r\n32867,1092,AMER,electronics,retail,22.08,2,0.131,none,2024-06-06\r\n32868,1376,EMEA,grocery,mobile,64.16,8,0.005,none,2024-11-25\r\n32869,1681,LATAM,grocery,online,88.83,7,0.152,none,2024-01-14\r\n32870,2404,EMEA,fashion,retail,58.18,7,0.005,none,2024-07-15\r\n32871,1246,EMEA,toys,online,52.25,5,0.240,none,2024-03-25\r\n32872,2475,AMER,electronics,online,111.40,2,0.133,none,2024-06-24\r\n32873,1911,LATAM,grocery,mobile,79.99,4,0.094,none,2024-04-06\r\n32874,1200,EMEA,electronics,retail,61.14,7,0.212,none,2024-07-01\r\n32875,1934,EMEA,home,retail,35.10,6,0.168,none,2024-01-23\r\n32876,2264,LATAM,grocery,retail,61.25,5,0.209,coupon,2024-05-27\r\n32877,2337,AMER,home,mobile,43.66,8,0.165,none,2024-08-06\r\n32878,1013,LATAM,grocery,mobile,35.40,1,0.150,none,2024-06-02\r\n32879,1902,AMER,home,online,86.19,7,0.003,none,2024-03-24\r\n32880,2157,AMER,toys,online,30.85,4,0.130,coupon,2024-04-02\r\n32881,1991,APAC,grocery,retail,49.38,1,0.187,loyalty,2024-03-14\r\n32882,1512,APAC,electronics,online,24.56,1,0.149,coupon,2024-04-28\r\n32883,1870,EMEA,home,retail,43.71,7,0.194,none,2024-03-28\r\n32884,1312,EMEA,toys,retail,43.82,3,0.170,none,2024-07-14\r\n32885,2075,LATAM,fashion,partner,150.83,2,0.119,none,2024-07-01\r\n32886,1100,AMER,grocery,online,32.96,4,0.193,coupon,2024-12-08\r\n32887,1838,AMER,grocery,retail,84.14,3,0.074,bundle,2024-12-17\r\n32888,1714,APAC,grocery,online,81.40,3,0.248,none,2024-10-13\r\n32889,1383,AMER,fashion,mobile,37.44,1,0.
207,coupon,2024-05-02\r\n32890,1242,LATAM,grocery,online,25.07,8,0.091,none,2024-06-22\r\n32891,1215,LATAM,fashion,online,43.69,6,0.238,loyalty,2024-02-04\r\n32892,2157,AMER,toys,partner,98.44,1,0.217,loyalty,2024-11-15\r\n32893,2475,AMER,sports,online,67.81,8,0.212,none,2024-09-19\r\n32894,1121,EMEA,sports,retail,34.47,2,0.230,none,2024-10-17\r\n32895,1683,AMER,electronics,online,38.56,3,0.048,none,2024-12-24\r\n32896,1777,AMER,grocery,retail,53.74,1,0.064,none,2024-05-07\r\n32897,1141,AMER,grocery,mobile,103.09,5,0.004,none,2024-04-21\r\n32898,2359,LATAM,electronics,retail,23.52,4,0.104,none,2024-04-18\r\n32899,2058,LATAM,sports,retail,108.63,4,0.083,bundle,2024-05-28\r\n32900,1219,LATAM,home,online,38.72,4,0.003,coupon,2024-11-19\r\n32901,1261,APAC,home,online,141.96,4,0.238,coupon,2024-10-05\r\n32902,1152,LATAM,grocery,retail,51.73,7,0.105,none,2024-08-11\r\n32903,1442,EMEA,toys,online,26.05,4,0.222,none,2024-02-10\r\n32904,1988,AMER,grocery,retail,116.37,2,0.060,none,2024-01-24\r\n32905,2461,LATAM,electronics,retail,57.33,3,0.015,coupon,2024-02-16\r\n32906,1563,EMEA,home,mobile,38.90,4,0.195,none,2024-04-14\r\n32907,1731,AMER,electronics,online,64.34,7,0.056,none,2024-04-11\r\n32908,1232,LATAM,electronics,mobile,70.05,6,0.049,none,2024-03-07\r\n32909,1673,AMER,electronics,retail,104.10,4,0.099,loyalty,2024-10-15\r\n32910,1362,AMER,electronics,retail,69.11,6,0.159,none,2024-04-26\r\n32911,1135,APAC,home,online,92.53,3,0.020,none,2024-08-11\r\n32912,2399,LATAM,electronics,mobile,73.57,3,0.045,none,2024-08-26\r\n32913,1485,APAC,home,online,66.69,2,0.049,none,2024-11-07\r\n32914,2169,EMEA,grocery,mobile,40.91,4,0.090,none,2024-11-22\r\n32915,1956,APAC,fashion,retail,93.99,8,0.149,loyalty,2024-11-19\r\n32916,2081,APAC,sports,mobile,30.44,5,0.126,none,2024-11-07\r\n32917,1490,AMER,fashion,online,110.16,4,0.120,coupon,2024-06-08\r\n32918,1857,LATAM,fashion,mobile,69.38,8,0.068,bundle,2024-05-26\r\n32919,1007,APAC,toys,retail,40.11,3,0.019,none,2024-02-26\r\n32920,1197
,LATAM,fashion,retail,44.40,4,0.091,none,2024-01-09\r\n32921,1275,EMEA,fashion,mobile,50.09,7,0.013,none,2024-08-04\r\n32922,1579,AMER,grocery,retail,33.39,8,0.014,none,2024-04-26\r\n32923,1502,APAC,grocery,mobile,132.50,5,0.096,bundle,2024-10-09\r\n32924,1251,EMEA,home,partner,51.48,2,0.159,loyalty,2024-02-05\r\n32925,1369,AMER,toys,retail,67.94,8,0.166,coupon,2024-07-14\r\n32926,1502,APAC,toys,retail,75.28,2,0.010,none,2024-02-23\r\n32927,1377,APAC,grocery,retail,55.80,1,0.162,none,2024-12-01\r\n32928,2263,AMER,home,online,53.13,7,0.007,bundle,2024-07-11\r\n32929,1845,AMER,home,retail,39.69,3,0.198,none,2024-10-01\r\n32930,1155,EMEA,grocery,retail,51.02,5,0.228,none,2024-12-14\r\n32931,2282,EMEA,grocery,online,47.73,7,0.132,coupon,2024-02-03\r\n32932,2229,APAC,electronics,online,121.97,1,0.022,none,2024-04-03\r\n32933,1087,AMER,electronics,retail,82.88,3,0.056,none,2024-09-09\r\n32934,1245,APAC,grocery,online,45.83,4,0.024,bundle,2024-10-23\r\n32935,1198,AMER,grocery,mobile,34.51,1,0.197,bundle,2024-12-22\r\n32936,2109,EMEA,sports,retail,50.25,2,0.226,none,2024-08-10\r\n32937,1016,AMER,electronics,retail,30.71,5,0.144,coupon,2024-11-07\r\n32938,1355,EMEA,fashion,retail,82.58,5,0.139,coupon,2024-03-01\r\n32939,1852,AMER,home,mobile,29.78,6,0.214,coupon,2024-05-07\r\n32940,1600,AMER,home,online,29.81,4,0.141,coupon,2024-01-15\r\n32941,2439,AMER,toys,retail,37.27,4,0.104,none,2024-06-13\r\n32942,1324,LATAM,toys,retail,78.54,7,0.197,none,2024-11-16\r\n32943,1724,LATAM,toys,online,34.98,4,0.136,loyalty,2024-03-02\r\n32944,2235,AMER,fashion,retail,127.84,7,0.072,none,2024-01-01\r\n32945,2289,APAC,sports,online,76.12,3,0.093,coupon,2024-03-15\r\n32946,2425,APAC,grocery,online,65.03,1,0.180,none,2024-07-03\r\n32947,1001,LATAM,electronics,partner,121.35,7,0.054,none,2024-04-17\r\n32948,1286,EMEA,electronics,retail,32.10,7,0.170,bundle,2024-02-25\r\n32949,1068,APAC,fashion,retail,112.49,4,0.173,coupon,2024-12-01\r\n32950,1056,LATAM,toys,mobile,51.42,6,0.205,bundle,2024-12-1
7\r\n32951,1603,EMEA,electronics,online,52.91,7,0.074,none,2024-01-13\r\n32952,2468,EMEA,home,retail,42.74,4,0.233,none,2024-08-25\r\n32953,1288,LATAM,toys,online,58.80,6,0.202,loyalty,2024-01-15\r\n32954,1980,LATAM,electronics,retail,56.21,3,0.132,none,2024-12-28\r\n32955,2374,LATAM,fashion,online,34.65,1,0.190,none,2024-12-25\r\n32956,1895,AMER,home,mobile,53.65,3,0.118,coupon,2024-10-28\r\n32957,1404,EMEA,toys,mobile,38.90,7,0.235,none,2024-08-23\r\n32958,1398,APAC,fashion,partner,269.03,6,0.114,none,2024-04-12\r\n32959,1855,APAC,fashion,retail,93.73,2,0.016,none,2024-12-25\r\n32960,2232,EMEA,sports,online,65.74,7,0.207,none,2024-06-21\r\n32961,2046,APAC,fashion,retail,45.73,6,0.099,none,2024-12-15\r\n32962,2054,AMER,sports,online,28.39,1,0.110,none,2024-12-05\r\n32963,1489,AMER,sports,online,43.11,8,0.036,coupon,2024-03-23\r\n32964,2479,EMEA,sports,online,201.97,7,0.237,none,2024-12-16\r\n32965,1744,EMEA,electronics,mobile,71.36,2,0.087,none,2024-08-07\r\n32966,1739,AMER,grocery,online,30.73,1,0.084,none,2024-11-03\r\n32967,1323,EMEA,grocery,online,86.73,1,0.038,coupon,2024-05-16\r\n32968,1643,EMEA,fashion,retail,29.62,7,0.159,coupon,2024-06-24\r\n32969,1627,LATAM,electronics,online,28.48,8,0.135,none,2024-10-25\r\n32970,2490,AMER,grocery,online,29.43,3,0.180,none,2024-09-01\r\n32971,1171,APAC,sports,online,44.58,2,0.208,none,2024-08-27\r\n32972,2443,LATAM,sports,online,46.22,3,0.118,none,2024-03-12\r\n32973,2111,EMEA,fashion,retail,79.35,5,0.236,loyalty,2024-06-09\r\n32974,1415,AMER,electronics,mobile,102.52,2,0.181,coupon,2024-10-07\r\n32975,2382,LATAM,toys,online,19.21,8,0.196,coupon,2024-01-04\r\n32976,1297,AMER,grocery,online,41.22,2,0.147,coupon,2024-03-26\r\n32977,2305,AMER,sports,retail,28.00,6,0.069,bundle,2024-07-17\r\n32978,1664,LATAM,electronics,online,39.63,8,0.052,none,2024-04-28\r\n32979,1991,APAC,grocery,online,57.69,7,0.231,bundle,2024-07-18\r\n32980,2377,AMER,fashion,online,43.02,5,0.015,coupon,2024-02-12\r\n32981,1378,APAC,fashion,retail,185.9
8,7,0.121,coupon,2024-01-15\r\n32982,1624,AMER,grocery,online,43.64,4,0.102,none,2024-03-09\r\n32983,2219,LATAM,grocery,retail,43.44,5,0.010,none,2024-02-23\r\n32984,1948,EMEA,electronics,retail,19.99,1,0.115,none,2024-12-04\r\n32985,1492,APAC,electronics,online,166.31,5,0.095,bundle,2024-02-01\r\n32986,1862,LATAM,sports,online,37.15,1,0.127,coupon,2024-07-01\r\n32987,2140,AMER,sports,online,49.91,1,0.125,bundle,2024-09-07\r\n32988,2222,LATAM,home,partner,167.50,8,0.231,none,2024-07-26\r\n32989,2021,EMEA,electronics,online,72.90,1,0.049,none,2024-08-21\r\n32990,1401,LATAM,electronics,mobile,46.61,3,0.211,none,2024-11-01\r\n32991,2068,LATAM,electronics,partner,94.65,6,0.170,bundle,2024-10-16\r\n32992,1144,APAC,home,retail,58.98,3,0.088,none,2024-01-07\r\n32993,2097,AMER,toys,online,51.62,3,0.245,none,2024-03-12\r\n32994,2451,APAC,grocery,online,76.35,7,0.183,bundle,2024-05-20\r\n32995,1280,LATAM,fashion,retail,157.14,7,0.089,coupon,2024-07-07\r\n32996,2425,APAC,toys,online,67.65,5,0.087,loyalty,2024-08-06\r\n32997,2004,LATAM,electronics,retail,45.28,8,0.224,none,2024-08-21\r\n32998,2319,AMER,fashion,retail,55.51,8,0.196,coupon,2024-02-18\r\n32999,1340,LATAM,sports,online,38.25,8,0.060,coupon,2024-06-06\r\n33000,1132,EMEA,home,online,25.40,8,0.026,none,2024-02-27\r\n33001,2166,AMER,sports,online,72.73,2,0.070,none,2024-06-22\r\n33002,1608,AMER,sports,retail,57.99,4,0.177,coupon,2024-09-18\r\n33003,1805,EMEA,fashion,retail,42.50,6,0.038,none,2024-01-18\r\n33004,2195,APAC,grocery,mobile,44.50,4,0.192,bundle,2024-11-19\r\n33005,1656,LATAM,toys,retail,133.14,8,0.123,none,2024-11-24\r\n33006,1931,APAC,sports,online,76.37,2,0.034,none,2024-05-08\r\n33007,2406,EMEA,toys,partner,31.17,2,0.211,none,2024-06-15\r\n33008,1515,EMEA,home,online,33.57,7,0.111,none,2024-10-17\r\n33009,2060,LATAM,grocery,online,50.06,7,0.045,bundle,2024-01-20\r\n33010,1766,AMER,sports,retail,51.84,1,0.113,coupon,2024-07-04\r\n33011,2157,AMER,grocery,retail,43.21,6,0.104,none,2024-10-14\r\n33012,2318,A
MER,sports,online,71.43,6,0.012,none,2024-05-16\r\n33013,1187,AMER,grocery,partner,78.43,2,0.021,none,2024-11-06\r\n33014,1421,APAC,grocery,online,99.19,5,0.053,coupon,2024-06-23\r\n33015,1765,EMEA,electronics,mobile,205.94,6,0.025,coupon,2024-02-03\r\n33016,1666,LATAM,toys,retail,82.89,4,0.201,none,2024-09-15\r\n33017,1016,AMER,electronics,online,121.59,6,0.068,none,2024-05-11\r\n33018,1083,AMER,grocery,partner,62.27,4,0.218,none,2024-06-26\r\n33019,2069,AMER,fashion,online,82.23,4,0.185,loyalty,2024-07-01\r\n33020,2209,AMER,electronics,retail,56.34,3,0.109,none,2024-08-13\r\n33021,2257,AMER,home,retail,173.79,1,0.055,none,2024-03-22\r\n33022,2171,EMEA,grocery,online,154.99,8,0.147,coupon,2024-11-19\r\n33023,2363,AMER,home,retail,39.51,8,0.223,none,2024-07-07\r\n33024,1991,APAC,electronics,mobile,70.15,5,0.152,none,2024-12-07\r\n33025,2328,EMEA,electronics,online,115.01,6,0.097,none,2024-02-04\r\n33026,1395,APAC,home,mobile,38.47,7,0.229,none,2024-07-14\r\n33027,1223,LATAM,fashion,online,30.84,6,0.248,coupon,2024-05-09\r\n33028,1588,LATAM,electronics,mobile,41.38,7,0.199,none,2024-02-12\r\n33029,1445,APAC,electronics,retail,41.10,3,0.033,none,2024-12-13\r\n33030,2184,APAC,fashion,partner,40.10,4,0.113,none,2024-07-06\r\n33031,1914,EMEA,electronics,partner,52.48,5,0.199,none,2024-05-08\r\n33032,1357,EMEA,toys,mobile,43.41,4,0.153,none,2024-01-05\r\n33033,1825,AMER,toys,mobile,97.80,2,0.057,none,2024-08-11\r\n33034,1550,APAC,electronics,online,95.92,5,0.218,none,2024-09-25\r\n33035,2003,LATAM,fashion,online,29.19,3,0.157,none,2024-05-09\r\n33036,1173,LATAM,electronics,online,56.34,7,0.047,none,2024-11-02\r\n33037,1230,EMEA,grocery,online,98.35,3,0.162,none,2024-06-28\r\n33038,2221,LATAM,home,retail,109.66,2,0.011,bundle,2024-09-10\r\n33039,1152,LATAM,grocery,online,73.92,7,0.184,none,2024-06-16\r\n33040,1567,AMER,electronics,online,117.11,1,0.056,none,2024-05-24\r\n33041,2057,APAC,grocery,online,31.76,2,0.136,coupon,2024-11-04\r\n33042,2462,EMEA,fashion,partner,57.78
,8,0.075,loyalty,2024-12-22\r\n33043,1357,EMEA,grocery,online,168.24,1,0.046,none,2024-11-19\r\n33044,2108,AMER,home,retail,62.64,6,0.087,none,2024-07-01\r\n33045,2092,AMER,fashion,retail,55.38,8,0.234,none,2024-10-22\r\n33046,1465,AMER,sports,retail,47.85,6,0.154,none,2024-03-10\r\n33047,1613,EMEA,sports,retail,60.01,8,0.211,coupon,2024-05-13\r\n33048,2496,EMEA,toys,retail,15.20,3,0.011,bundle,2024-10-12\r\n33049,1466,AMER,electronics,retail,86.74,4,0.095,none,2024-12-24\r\n33050,2217,LATAM,home,online,30.11,8,0.119,none,2024-07-27\r\n33051,1342,LATAM,home,retail,20.24,5,0.057,coupon,2024-12-03\r\n33052,1464,APAC,fashion,retail,109.94,2,0.222,none,2024-12-22\r\n33053,1467,LATAM,home,online,56.39,8,0.234,none,2024-07-23\r\n33054,2150,APAC,electronics,online,59.22,4,0.228,coupon,2024-08-07\r\n33055,1821,LATAM,grocery,online,16.40,2,0.228,none,2024-03-05\r\n33056,1994,LATAM,home,partner,58.22,4,0.214,none,2024-08-02\r\n33057,2422,APAC,toys,partner,66.50,5,0.241,bundle,2024-11-19\r\n33058,1057,LATAM,grocery,online,128.49,5,0.212,bundle,2024-09-14\r\n33059,2466,APAC,home,online,56.02,2,0.022,bundle,2024-06-18\r\n33060,1736,AMER,grocery,online,27.84,5,0.198,loyalty,2024-01-10\r\n33061,1536,LATAM,toys,online,19.52,8,0.144,none,2024-08-03\r\n33062,1299,LATAM,fashion,retail,100.94,5,0.127,none,2024-01-18\r\n33063,1976,AMER,sports,online,32.42,5,0.078,coupon,2024-07-14\r\n33064,1031,AMER,electronics,online,51.54,5,0.016,coupon,2024-01-05\r\n33065,1321,EMEA,toys,partner,57.43,6,0.225,bundle,2024-09-02\r\n33066,1887,LATAM,grocery,online,95.13,7,0.240,none,2024-11-19\r\n33067,1985,AMER,fashion,mobile,61.81,2,0.044,bundle,2024-02-21\r\n33068,1589,AMER,sports,online,21.65,3,0.096,loyalty,2024-05-12\r\n33069,1534,EMEA,home,online,26.45,4,0.171,none,2024-11-13\r\n33070,2429,EMEA,grocery,partner,36.97,8,0.129,none,2024-08-28\r\n33071,1273,AMER,grocery,online,29.54,5,0.178,none,2024-06-14\r\n33072,1904,APAC,electronics,retail,70.90,1,0.047,coupon,2024-04-08\r\n33073,1928,AMER,home,on
line,60.86,2,0.002,none,2024-11-06\r\n33074,1520,APAC,electronics,retail,54.58,6,0.200,loyalty,2024-07-27\r\n33075,1736,AMER,grocery,retail,86.39,2,0.189,bundle,2024-02-10\r\n33076,2193,AMER,sports,online,27.22,2,0.133,none,2024-02-27\r\n33077,1722,EMEA,sports,retail,75.01,2,0.002,bundle,2024-01-18\r\n33078,1533,APAC,home,retail,96.38,4,0.247,coupon,2024-09-05\r\n33079,1768,AMER,fashion,online,34.73,5,0.158,loyalty,2024-09-05\r\n33080,1455,APAC,fashion,partner,67.32,4,0.064,bundle,2024-04-01\r\n33081,1403,APAC,fashion,online,46.94,5,0.139,none,2024-09-10\r\n33082,1619,APAC,home,online,23.39,4,0.198,loyalty,2024-02-22\r\n33083,1588,LATAM,home,retail,48.36,2,0.197,bundle,2024-05-16\r\n33084,1119,LATAM,fashion,online,60.85,6,0.005,none,2024-09-03\r\n33085,2484,APAC,home,online,82.72,3,0.135,coupon,2024-10-05\r\n33086,1699,APAC,home,online,45.05,6,0.005,none,2024-03-06\r\n33087,1367,AMER,home,mobile,41.55,2,0.072,coupon,2024-11-17\r\n33088,2282,EMEA,fashion,mobile,40.97,4,0.213,none,2024-04-06\r\n33089,1611,EMEA,toys,retail,54.36,1,0.138,loyalty,2024-07-23\r\n33090,1914,EMEA,home,online,53.70,7,0.213,none,2024-07-25\r\n33091,2296,AMER,grocery,online,125.31,6,0.220,none,2024-01-22\r\n33092,1638,EMEA,grocery,online,110.54,3,0.125,coupon,2024-07-13\r\n33093,2144,EMEA,grocery,mobile,32.88,5,0.206,none,2024-07-08\r\n33094,1019,APAC,home,retail,40.90,6,0.033,none,2024-04-13\r\n33095,1032,AMER,electronics,online,113.72,6,0.228,none,2024-01-02\r\n33096,1880,LATAM,sports,online,75.80,7,0.060,bundle,2024-06-09\r\n33097,1766,AMER,home,mobile,45.33,5,0.126,coupon,2024-11-10\r\n33098,1338,EMEA,grocery,online,68.99,1,0.216,none,2024-03-12\r\n33099,2426,AMER,grocery,partner,104.41,3,0.110,coupon,2024-09-15\r\n33100,1037,EMEA,grocery,online,100.94,4,0.173,none,2024-04-25\r\n33101,2090,AMER,grocery,online,43.29,4,0.063,coupon,2024-10-09\r\n33102,2179,LATAM,grocery,mobile,42.23,4,0.049,none,2024-02-17\r\n33103,1985,AMER,fashion,retail,6.20,5,0.154,bundle,2024-08-21\r\n33104,2143,AMER,spo
rts,online,38.64,7,0.231,coupon,2024-12-13\r\n33105,2244,LATAM,sports,online,52.41,2,0.070,none,2024-06-19\r\n33106,1052,LATAM,fashion,online,35.24,7,0.049,none,2024-06-11\r\n33107,2448,APAC,grocery,online,57.58,4,0.057,loyalty,2024-04-09\r\n33108,1460,LATAM,home,mobile,16.12,5,0.039,bundle,2024-07-17\r\n33109,1715,AMER,sports,online,63.56,3,0.201,bundle,2024-12-23\r\n33110,1292,LATAM,toys,mobile,107.80,7,0.083,none,2024-11-13\r\n33111,2310,EMEA,grocery,retail,40.85,1,0.211,bundle,2024-03-14\r\n33112,1543,AMER,home,retail,50.86,3,0.007,none,2024-10-09\r\n33113,1549,APAC,electronics,mobile,43.29,1,0.015,none,2024-05-27\r\n33114,1426,AMER,toys,partner,142.45,4,0.006,coupon,2024-02-18\r\n33115,1819,AMER,sports,online,61.86,8,0.112,loyalty,2024-08-06\r\n33116,2339,AMER,home,retail,54.04,2,0.093,loyalty,2024-02-07\r\n33117,1056,LATAM,home,online,21.74,1,0.111,none,2024-05-14\r\n33118,2147,LATAM,sports,partner,53.86,8,0.104,loyalty,2024-01-20\r\n33119,1868,AMER,toys,online,76.04,8,0.199,loyalty,2024-07-21\r\n33120,1690,LATAM,grocery,online,25.06,6,0.215,coupon,2024-01-08\r\n33121,2056,LATAM,grocery,retail,38.01,3,0.190,loyalty,2024-12-18\r\n33122,1012,LATAM,electronics,mobile,37.62,2,0.206,none,2024-11-05\r\n33123,1316,APAC,grocery,online,59.62,1,0.096,none,2024-09-03\r\n33124,1653,APAC,sports,partner,103.28,6,0.058,loyalty,2024-08-18\r\n33125,2273,APAC,electronics,online,22.25,4,0.175,bundle,2024-07-10\r\n33126,1895,AMER,toys,online,151.28,8,0.051,loyalty,2024-08-17\r\n33127,2192,APAC,fashion,retail,78.67,2,0.211,bundle,2024-01-04\r\n33128,1688,LATAM,grocery,retail,173.28,6,0.117,none,2024-05-12\r\n33129,1350,LATAM,electronics,retail,61.00,2,0.126,none,2024-05-16\r\n33130,2018,AMER,electronics,retail,31.88,8,0.094,bundle,2024-06-01\r\n33131,1866,EMEA,grocery,online,53.05,2,0.222,loyalty,2024-04-28\r\n33132,1845,AMER,home,online,46.09,7,0.202,coupon,2024-11-05\r\n33133,2096,LATAM,fashion,online,92.10,7,0.156,none,2024-03-21\r\n33134,2243,APAC,grocery,mobile,85.96,3,0.108,
none,2024-09-28\r\n33135,2278,APAC,sports,online,56.97,7,0.050,coupon,2024-06-24\r\n33136,1974,EMEA,electronics,retail,47.44,5,0.215,none,2024-02-14\r\n33137,1299,LATAM,sports,online,68.25,2,0.004,coupon,2024-08-24\r\n33138,2494,AMER,electronics,online,57.99,7,0.128,none,2024-07-06\r\n33139,2212,EMEA,sports,retail,38.19,3,0.219,coupon,2024-07-06\r\n33140,2499,LATAM,sports,online,175.68,7,0.071,none,2024-01-21\r\n33141,1833,EMEA,toys,mobile,122.04,1,0.153,none,2024-06-20\r\n33142,1021,AMER,toys,retail,62.70,1,0.022,none,2024-06-19\r\n33143,1719,LATAM,home,retail,81.50,8,0.080,none,2024-07-08\r\n33144,1257,APAC,grocery,retail,44.72,4,0.140,none,2024-05-08\r\n33145,1157,LATAM,toys,retail,68.45,2,0.217,none,2024-11-14\r\n33146,1739,AMER,home,online,26.83,1,0.057,none,2024-11-19\r\n33147,1404,EMEA,home,retail,59.09,6,0.153,none,2024-09-17\r\n33148,1660,AMER,toys,online,171.90,8,0.127,bundle,2024-05-22\r\n33149,1534,EMEA,fashion,online,38.88,5,0.207,none,2024-02-23\r\n33150,1163,AMER,sports,online,51.29,5,0.184,none,2024-06-17\r\n33151,2234,LATAM,toys,partner,34.32,8,0.131,coupon,2024-02-09\r\n33152,2126,APAC,grocery,online,90.99,5,0.021,none,2024-11-20\r\n33153,1863,EMEA,fashion,retail,47.48,6,0.068,coupon,2024-10-07\r\n33154,1220,LATAM,electronics,mobile,91.14,4,0.155,coupon,2024-03-16\r\n33155,1827,EMEA,grocery,online,40.87,4,0.077,loyalty,2024-06-15\r\n33156,1404,EMEA,home,retail,59.74,5,0.189,none,2024-05-26\r\n33157,2065,EMEA,sports,retail,62.86,5,0.095,none,2024-11-20\r\n33158,1485,APAC,grocery,retail,32.97,7,0.197,none,2024-03-06\r\n33159,1330,EMEA,fashion,online,61.83,7,0.076,none,2024-02-16\r\n33160,2001,EMEA,grocery,online,57.15,1,0.032,none,2024-05-13\r\n33161,1632,LATAM,grocery,mobile,62.25,5,0.023,none,2024-12-12\r\n33162,2163,EMEA,electronics,retail,64.63,5,0.024,none,2024-07-28\r\n33163,1523,LATAM,home,partner,64.01,4,0.035,none,2024-08-28\r\n33164,2133,AMER,fashion,online,37.26,3,0.112,bundle,2024-09-20\r\n33165,1949,AMER,home,mobile,57.40,8,0.219,coupon,
2024-07-06\r\n33166,1154,LATAM,fashion,retail,60.83,6,0.100,coupon,2024-02-07\r\n33167,1657,LATAM,fashion,retail,32.74,8,0.057,none,2024-10-19\r\n33168,1372,APAC,fashion,online,49.19,3,0.085,none,2024-02-24\r\n33169,1734,AMER,home,online,40.55,4,0.244,bundle,2024-01-08\r\n33170,1122,AMER,grocery,retail,38.37,3,0.136,bundle,2024-06-05\r\n33171,2192,APAC,grocery,retail,54.99,7,0.057,none,2024-05-02\r\n33172,1984,LATAM,fashion,retail,76.25,5,0.156,none,2024-04-19\r\n33173,1802,AMER,toys,retail,33.45,3,0.157,none,2024-03-11\r\n33174,2016,LATAM,electronics,online,37.46,3,0.200,none,2024-03-07\r\n33175,2098,AMER,sports,retail,128.62,1,0.232,none,2024-05-04\r\n33176,1421,APAC,sports,online,48.96,7,0.205,coupon,2024-01-02\r\n33177,1603,EMEA,electronics,online,86.03,6,0.165,loyalty,2024-04-12\r\n33178,2133,AMER,grocery,online,91.07,3,0.111,none,2024-04-01\r\n33179,1015,AMER,fashion,online,83.70,4,0.008,none,2024-05-18\r\n33180,2378,LATAM,fashion,retail,38.34,2,0.152,coupon,2024-12-12\r\n33181,1734,AMER,electronics,mobile,65.54,6,0.156,none,2024-02-21\r\n33182,2063,APAC,electronics,retail,77.93,2,0.012,none,2024-10-06\r\n33183,2348,EMEA,fashion,online,32.50,4,0.067,none,2024-04-12\r\n33184,2040,LATAM,fashion,partner,31.94,8,0.215,none,2024-02-07\r\n33185,1727,APAC,sports,retail,80.38,5,0.159,bundle,2024-11-02\r\n33186,2193,AMER,grocery,online,24.94,2,0.054,bundle,2024-12-25\r\n33187,1277,AMER,home,retail,52.29,7,0.014,none,2024-11-11\r\n33188,2292,EMEA,grocery,mobile,22.92,3,0.168,none,2024-08-26\r\n33189,2054,AMER,toys,partner,183.49,3,0.208,loyalty,2024-04-03\r\n33190,2098,AMER,sports,retail,107.60,4,0.120,none,2024-03-14\r\n33191,1295,EMEA,grocery,retail,88.30,5,0.192,none,2024-11-23\r\n33192,1895,AMER,grocery,online,95.84,3,0.141,none,2024-02-05\r\n33193,2108,AMER,home,online,97.93,7,0.122,none,2024-11-01\r\n33194,1108,EMEA,grocery,mobile,49.98,3,0.033,none,2024-08-10\r\n33195,2396,AMER,grocery,retail,103.55,4,0.082,none,2024-11-09\r\n33196,1633,EMEA,grocery,retail,203.88
,7,0.192,none,2024-03-01\r\n33197,2271,LATAM,electronics,online,108.81,6,0.148,coupon,2024-11-16\r\n33198,2330,EMEA,electronics,online,22.94,3,0.044,none,2024-05-18\r\n33199,1576,EMEA,home,online,33.40,4,0.139,bundle,2024-11-28\r\n33200,1098,APAC,home,online,55.59,3,0.152,bundle,2024-02-07\r\n33201,1241,APAC,grocery,mobile,81.50,2,0.231,coupon,2024-12-11\r\n33202,2253,AMER,fashion,retail,47.09,2,0.065,loyalty,2024-03-17\r\n33203,1414,APAC,electronics,online,42.49,1,0.059,bundle,2024-12-16\r\n33204,1581,APAC,fashion,online,138.12,6,0.099,none,2024-12-22\r\n33205,1990,EMEA,sports,retail,27.40,7,0.101,none,2024-06-24\r\n33206,1369,AMER,grocery,online,13.84,4,0.187,coupon,2024-07-06\r\n33207,1213,EMEA,sports,mobile,52.92,3,0.205,none,2024-06-04\r\n33208,1042,LATAM,toys,retail,80.75,3,0.121,none,2024-12-23\r\n33209,1311,APAC,sports,retail,25.38,2,0.144,coupon,2024-07-06\r\n33210,2371,LATAM,grocery,online,50.72,5,0.193,none,2024-12-23\r\n33211,1934,EMEA,electronics,mobile,90.59,6,0.240,loyalty,2024-03-14\r\n33212,1840,LATAM,grocery,retail,60.40,3,0.089,none,2024-06-17\r\n33213,1819,AMER,fashion,online,85.69,6,0.017,none,2024-06-22\r\n33214,2230,LATAM,grocery,retail,82.66,7,0.039,none,2024-07-02\r\n33215,1401,LATAM,grocery,online,67.43,5,0.066,bundle,2024-07-03\r\n33216,1397,LATAM,electronics,online,33.87,4,0.146,loyalty,2024-06-13\r\n33217,2325,LATAM,home,online,60.71,2,0.034,none,2024-11-12\r\n33218,2217,LATAM,grocery,retail,61.08,4,0.084,bundle,2024-10-03\r\n33219,1808,APAC,home,retail,26.24,2,0.173,coupon,2024-12-22\r\n33220,1905,APAC,home,online,54.23,8,0.104,none,2024-09-24\r\n33221,1970,LATAM,grocery,online,72.17,7,0.120,none,2024-08-09\r\n33222,1319,EMEA,electronics,partner,20.80,4,0.226,none,2024-06-04\r\n33223,1314,AMER,sports,retail,17.90,6,0.206,none,2024-10-10\r\n33224,1289,LATAM,fashion,partner,61.57,7,0.038,none,2024-04-03\r\n33225,1365,LATAM,toys,online,47.71,8,0.183,coupon,2024-02-13\r\n33226,1584,EMEA,fashion,online,89.70,1,0.115,bundle,2024-04-16\r\n3322
7,1122,AMER,fashion,retail,92.34,3,0.199,coupon,2024-02-23\r\n33228,2338,AMER,electronics,online,26.39,4,0.207,none,2024-10-20\r\n33229,1484,AMER,toys,mobile,62.53,6,0.104,none,2024-12-14\r\n33230,1611,EMEA,fashion,mobile,39.67,2,0.158,coupon,2024-01-17\r\n33231,1034,EMEA,sports,online,76.89,7,0.093,none,2024-03-16\r\n33232,1111,APAC,sports,online,74.75,1,0.003,bundle,2024-01-26\r\n33233,1304,LATAM,grocery,retail,61.19,7,0.091,bundle,2024-07-25\r\n33234,1220,LATAM,electronics,online,25.57,5,0.117,bundle,2024-11-11\r\n33235,1769,LATAM,grocery,partner,48.12,2,0.086,coupon,2024-01-23\r\n33236,1579,AMER,home,retail,89.02,6,0.225,none,2024-08-24\r\n33237,1279,EMEA,grocery,mobile,57.88,1,0.056,none,2024-01-19\r\n33238,2177,AMER,fashion,online,44.01,4,0.110,none,2024-02-23\r\n33239,1025,EMEA,grocery,retail,107.70,7,0.040,none,2024-04-07\r\n33240,1939,LATAM,electronics,retail,76.57,1,0.073,coupon,2024-04-03\r\n33241,1480,APAC,grocery,online,50.06,7,0.032,coupon,2024-12-03\r\n33242,1385,LATAM,sports,retail,85.36,2,0.097,none,2024-10-09\r\n33243,2207,APAC,fashion,retail,27.43,1,0.245,loyalty,2024-04-14\r\n33244,1333,EMEA,home,retail,52.43,3,0.214,bundle,2024-07-13\r\n33245,1404,EMEA,home,retail,59.11,4,0.244,none,2024-02-11\r\n33246,2204,AMER,toys,online,28.29,6,0.248,none,2024-10-23\r\n33247,1486,LATAM,fashion,online,115.83,1,0.142,coupon,2024-05-20\r\n33248,1751,AMER,electronics,partner,58.29,5,0.009,bundle,2024-06-07\r\n33249,2141,AMER,home,retail,30.75,4,0.029,none,2024-10-08\r\n33250,2018,AMER,grocery,retail,92.43,7,0.207,loyalty,2024-09-20\r\n33251,1300,EMEA,grocery,mobile,53.04,6,0.048,loyalty,2024-06-24\r\n33252,2315,LATAM,sports,retail,34.08,1,0.196,coupon,2024-02-21\r\n33253,1576,EMEA,toys,mobile,75.55,7,0.218,none,2024-10-26\r\n33254,1051,EMEA,sports,online,29.80,2,0.024,none,2024-03-08\r\n33255,1341,EMEA,fashion,online,73.28,7,0.211,none,2024-11-19\r\n33256,1218,AMER,home,online,52.39,6,0.055,none,2024-07-24\r\n33257,1425,EMEA,home,mobile,60.19,3,0.031,none,2024-0
5-17\r\n33258,1487,AMER,grocery,retail,25.66,8,0.210,none,2024-05-17\r\n33259,1893,APAC,grocery,retail,46.31,6,0.096,loyalty,2024-06-14\r\n33260,1529,LATAM,fashion,retail,124.05,8,0.035,none,2024-03-07\r\n33261,1058,LATAM,grocery,mobile,50.51,6,0.092,coupon,2024-01-04\r\n33262,1412,AMER,home,retail,46.00,7,0.225,none,2024-06-07\r\n33263,1348,AMER,toys,online,36.95,1,0.240,bundle,2024-01-14\r\n33264,1405,LATAM,home,retail,72.68,3,0.132,none,2024-01-13\r\n33265,1989,LATAM,electronics,online,37.74,7,0.205,loyalty,2024-07-26\r\n33266,1364,EMEA,sports,retail,50.94,4,0.064,coupon,2024-09-12\r\n33267,2376,LATAM,grocery,retail,77.77,4,0.010,coupon,2024-12-03\r\n33268,1479,AMER,sports,retail,84.47,1,0.206,loyalty,2024-03-03\r\n33269,1981,EMEA,home,mobile,82.55,7,0.088,none,2024-12-18\r\n33270,1755,APAC,grocery,online,47.03,5,0.249,bundle,2024-08-16\r\n33271,2015,APAC,home,mobile,66.92,1,0.100,coupon,2024-09-21\r\n33272,2320,LATAM,home,mobile,39.53,3,0.188,none,2024-11-03\r\n33273,1446,AMER,grocery,mobile,64.56,7,0.244,none,2024-02-06\r\n33274,1713,EMEA,grocery,retail,110.90,3,0.080,bundle,2024-03-02\r\n33275,2097,AMER,grocery,retail,28.97,1,0.129,none,2024-09-12\r\n33276,1701,LATAM,grocery,partner,45.66,8,0.012,none,2024-08-04\r\n33277,1466,AMER,sports,online,38.74,5,0.126,none,2024-03-17\r\n33278,1326,AMER,fashion,online,46.04,1,0.207,none,2024-01-22\r\n33279,1699,APAC,electronics,retail,71.47,3,0.184,none,2024-05-06\r\n33280,1446,AMER,toys,retail,59.62,5,0.161,none,2024-12-12\r\n33281,1342,LATAM,home,online,107.19,4,0.113,none,2024-06-28\r\n33282,1562,AMER,grocery,online,104.36,7,0.219,coupon,2024-05-21\r\n33283,1573,AMER,home,partner,51.60,8,0.250,loyalty,2024-09-22\r\n33284,2191,AMER,sports,partner,41.02,7,0.158,none,2024-11-21\r\n33285,1169,LATAM,toys,retail,53.62,3,0.185,none,2024-12-08\r\n33286,1241,APAC,grocery,online,79.52,5,0.213,none,2024-07-20\r\n33287,1248,APAC,sports,retail,76.16,1,0.042,none,2024-07-21\r\n33288,2078,APAC,fashion,retail,19.36,1,0.163,loyalty,20
24-08-01\r\n33289,1662,LATAM,toys,retail,55.17,7,0.143,none,2024-04-26\r\n33290,1761,EMEA,grocery,mobile,35.56,8,0.054,loyalty,2024-12-01\r\n33291,1823,EMEA,grocery,retail,50.54,6,0.187,none,2024-10-10\r\n33292,2366,APAC,sports,retail,65.66,3,0.131,coupon,2024-09-27\r\n33293,1287,AMER,toys,retail,27.23,4,0.195,none,2024-12-26\r\n33294,2409,APAC,home,online,53.16,3,0.116,coupon,2024-03-16\r\n33295,2236,APAC,grocery,online,71.81,3,0.157,none,2024-12-02\r\n33296,1577,AMER,fashion,online,63.02,2,0.204,bundle,2024-05-12\r\n33297,1140,LATAM,electronics,retail,106.08,3,0.062,none,2024-07-07\r\n33298,1752,APAC,electronics,online,46.95,3,0.147,none,2024-05-22\r\n33299,2408,EMEA,home,retail,34.92,6,0.011,none,2024-01-23\r\n33300,1979,APAC,grocery,online,77.42,2,0.249,none,2024-04-12\r\n33301,2428,LATAM,electronics,mobile,17.01,6,0.063,bundle,2024-07-14\r\n33302,1494,AMER,home,online,80.72,8,0.228,none,2024-06-18\r\n33303,1305,EMEA,electronics,online,40.25,1,0.249,bundle,2024-04-24\r\n33304,1042,LATAM,home,online,62.79,8,0.217,none,2024-05-12\r\n33305,1397,LATAM,sports,retail,72.96,6,0.075,none,2024-01-16\r\n33306,2222,LATAM,fashion,retail,137.20,8,0.113,none,2024-05-25\r\n33307,2243,APAC,fashion,retail,36.24,4,0.073,coupon,2024-01-28\r\n33308,1294,APAC,home,retail,34.08,1,0.025,none,2024-04-02\r\n33309,1646,APAC,fashion,online,46.50,6,0.055,coupon,2024-09-01\r\n33310,1151,APAC,home,online,73.07,5,0.057,none,2024-06-08\r\n33311,1590,APAC,toys,online,105.38,7,0.168,none,2024-12-03\r\n33312,1648,APAC,electronics,online,30.14,5,0.236,none,2024-05-06\r\n33313,1991,APAC,sports,partner,156.04,1,0.241,none,2024-01-05\r\n33314,1605,APAC,fashion,retail,129.87,4,0.247,none,2024-04-13\r\n33315,2374,LATAM,sports,online,53.94,6,0.040,none,2024-05-17\r\n33316,1960,EMEA,sports,online,48.30,4,0.132,coupon,2024-02-27\r\n33317,1635,APAC,home,retail,21.57,1,0.197,none,2024-02-01\r\n33318,2300,EMEA,fashion,online,51.35,7,0.186,none,2024-05-06\r\n33319,2188,EMEA,grocery,retail,28.38,1,0.201,bundle
,2024-08-11\r\n33320,1454,APAC,fashion,retail,86.28,2,0.143,none,2024-04-01\r\n33321,1735,LATAM,electronics,mobile,215.32,3,0.145,bundle,2024-05-02\r\n33322,1571,EMEA,home,retail,61.66,2,0.159,none,2024-04-13\r\n33323,1805,EMEA,sports,online,66.04,8,0.130,bundle,2024-11-27\r\n33324,1859,AMER,electronics,partner,45.53,1,0.132,none,2024-03-20\r\n33325,1586,LATAM,toys,online,53.32,8,0.240,none,2024-08-14\r\n33326,2362,AMER,electronics,online,103.05,1,0.034,none,2024-07-23\r\n33327,1847,LATAM,electronics,online,41.23,5,0.129,coupon,2024-09-15\r\n33328,1922,EMEA,home,mobile,88.51,4,0.054,none,2024-02-01\r\n33329,1952,EMEA,sports,online,30.31,8,0.133,bundle,2024-07-18\r\n33330,2178,AMER,electronics,online,46.65,4,0.081,none,2024-07-05\r\n33331,1930,AMER,grocery,online,51.60,7,0.085,none,2024-02-13\r\n33332,1358,APAC,home,retail,65.84,8,0.155,none,2024-01-26\r\n33333,2127,LATAM,grocery,online,42.85,2,0.066,none,2024-11-05\r\n33334,1874,LATAM,grocery,retail,24.38,7,0.227,none,2024-02-08\r\n33335,2108,AMER,electronics,online,50.96,4,0.205,none,2024-05-22\r\n33336,1226,AMER,electronics,partner,90.34,5,0.046,coupon,2024-10-09\r\n33337,2209,AMER,fashion,partner,59.33,3,0.135,bundle,2024-05-09\r\n33338,1418,LATAM,grocery,online,61.06,6,0.175,loyalty,2024-06-21\r\n33339,2201,AMER,fashion,mobile,147.92,4,0.076,none,2024-07-15\r\n33340,1482,AMER,grocery,retail,84.12,4,0.170,coupon,2024-04-27\r\n33341,2249,LATAM,home,online,29.28,7,0.038,coupon,2024-11-08\r\n33342,2313,LATAM,electronics,retail,49.04,3,0.194,none,2024-08-17\r\n33343,1956,APAC,grocery,retail,43.79,5,0.035,none,2024-01-20\r\n33344,2126,APAC,grocery,retail,83.65,6,0.131,none,2024-11-26\r\n33345,2020,AMER,grocery,retail,34.40,6,0.024,none,2024-12-22\r\n33346,2059,AMER,home,online,19.73,6,0.076,loyalty,2024-01-09\r\n33347,1461,LATAM,sports,partner,44.35,4,0.103,bundle,2024-02-10\r\n33348,1188,LATAM,grocery,mobile,61.62,5,0.047,none,2024-02-12\r\n33349,2321,APAC,grocery,retail,62.18,8,0.139,none,2024-11-17\r\n33350,1736,AM
ER,home,retail,56.25,3,0.151,none,2024-05-22\r\n33351,1931,APAC,sports,online,131.26,3,0.166,none,2024-11-20\r\n33352,2290,LATAM,home,online,37.71,5,0.029,bundle,2024-08-12\r\n33353,2207,APAC,home,partner,39.06,1,0.007,bundle,2024-10-05\r\n33354,1085,EMEA,fashion,online,59.75,6,0.046,none,2024-11-10\r\n33355,2096,LATAM,home,mobile,55.77,7,0.173,none,2024-05-18\r\n33356,1103,EMEA,electronics,mobile,67.12,8,0.145,coupon,2024-03-08\r\n33357,2292,EMEA,electronics,retail,70.17,8,0.131,loyalty,2024-07-14\r\n33358,2082,APAC,toys,retail,61.22,4,0.047,none,2024-05-09\r\n33359,2317,LATAM,electronics,mobile,128.36,2,0.198,none,2024-09-08\r\n33360,1855,APAC,sports,retail,30.85,6,0.047,none,2024-12-17\r\n33361,1829,EMEA,fashion,retail,85.71,5,0.184,none,2024-09-14\r\n33362,1486,LATAM,grocery,mobile,59.90,3,0.052,none,2024-12-15\r\n33363,1374,APAC,home,retail,58.45,2,0.059,bundle,2024-08-17\r\n33364,1185,LATAM,fashion,mobile,96.03,3,0.185,none,2024-11-25\r\n33365,1562,AMER,electronics,retail,50.85,3,0.052,coupon,2024-01-22\r\n33366,2099,AMER,electronics,online,18.86,4,0.202,none,2024-10-09\r\n33367,2121,APAC,sports,online,40.60,4,0.210,loyalty,2024-04-14\r\n33368,1389,LATAM,electronics,retail,53.00,2,0.132,none,2024-04-12\r\n33369,2118,AMER,electronics,online,44.03,4,0.083,coupon,2024-03-09\r\n33370,2174,LATAM,electronics,mobile,90.71,6,0.020,none,2024-04-17\r\n33371,2444,EMEA,electronics,online,104.49,5,0.130,bundle,2024-04-10\r\n33372,1594,LATAM,home,online,83.48,1,0.116,loyalty,2024-12-14\r\n33373,2414,EMEA,toys,retail,69.50,5,0.008,coupon,2024-11-10\r\n33374,1562,AMER,grocery,online,51.56,6,0.207,bundle,2024-08-25\r\n33375,1209,AMER,sports,retail,70.73,5,0.195,none,2024-05-25\r\n33376,1191,EMEA,home,online,53.13,4,0.247,none,2024-08-10\r\n33377,1365,LATAM,toys,mobile,47.01,5,0.162,none,2024-05-28\r\n33378,1452,LATAM,sports,online,57.23,8,0.156,none,2024-06-13\r\n33379,2103,LATAM,home,online,99.42,8,0.122,bundle,2024-04-14\r\n33380,1644,EMEA,sports,online,46.80,1,0.194,none,20
24-02-02\r\n33381,1636,APAC,sports,online,46.39,6,0.167,none,2024-06-05\r\n33382,1586,LATAM,sports,retail,35.11,5,0.045,none,2024-11-03\r\n33383,2192,APAC,fashion,mobile,42.53,5,0.175,none,2024-10-05\r\n33384,1757,EMEA,toys,online,71.85,8,0.143,bundle,2024-05-26\r\n33385,2322,AMER,grocery,partner,72.65,2,0.196,none,2024-03-16\r\n33386,2079,EMEA,grocery,retail,63.60,4,0.032,none,2024-07-13\r\n33387,1577,AMER,grocery,online,26.96,1,0.076,none,2024-11-08\r\n33388,1652,APAC,grocery,online,85.65,5,0.000,none,2024-02-23\r\n33389,1318,LATAM,grocery,mobile,92.16,3,0.134,bundle,2024-11-10\r\n33390,1636,APAC,grocery,mobile,45.05,2,0.235,none,2024-09-06\r\n33391,1316,APAC,fashion,retail,29.49,2,0.058,none,2024-10-13\r\n33392,1904,APAC,grocery,online,47.61,5,0.047,none,2024-12-24\r\n33393,1776,APAC,home,online,35.61,3,0.021,bundle,2024-01-10\r\n33394,2036,APAC,toys,retail,46.48,4,0.074,none,2024-01-27\r\n33395,2098,AMER,grocery,retail,37.22,6,0.029,none,2024-11-05\r\n33396,1042,LATAM,grocery,online,59.51,8,0.137,none,2024-12-15\r\n33397,2498,LATAM,electronics,retail,99.80,2,0.045,none,2024-07-01\r\n33398,2268,EMEA,grocery,mobile,157.03,7,0.140,none,2024-10-02\r\n33399,1594,LATAM,grocery,retail,45.98,3,0.226,none,2024-01-07\r\n33400,1047,APAC,home,retail,63.46,1,0.157,loyalty,2024-12-01\r\n33401,1573,AMER,grocery,retail,60.52,6,0.241,none,2024-05-19\r\n33402,1270,LATAM,sports,online,68.30,5,0.212,none,2024-01-27\r\n33403,1851,EMEA,home,online,49.72,2,0.097,none,2024-01-12\r\n33404,1120,LATAM,fashion,retail,149.10,3,0.058,bundle,2024-03-25\r\n33405,1223,LATAM,electronics,online,151.02,5,0.190,none,2024-10-19\r\n33406,2135,EMEA,toys,online,105.55,2,0.029,bundle,2024-12-11\r\n33407,1587,LATAM,electronics,online,30.98,7,0.018,none,2024-05-24\r\n33408,2494,AMER,home,retail,140.64,3,0.181,none,2024-06-02\r\n33409,1911,LATAM,sports,partner,76.34,1,0.050,none,2024-07-07\r\n33410,1690,LATAM,electronics,mobile,178.94,1,0.096,none,2024-02-10\r\n33411,2395,APAC,home,online,79.87,7,0.082,non
e,2024-06-27\r\n33412,1369,AMER,electronics,retail,112.01,4,0.165,none,2024-03-15\r\n33413,2075,LATAM,grocery,retail,39.61,7,0.222,loyalty,2024-11-01\r\n33414,1438,APAC,electronics,online,43.50,7,0.071,none,2024-11-04\r\n33415,2370,EMEA,home,online,32.52,6,0.058,none,2024-04-22\r\n33416,1751,AMER,home,online,16.07,4,0.033,coupon,2024-04-06\r\n33417,2017,EMEA,grocery,online,30.21,6,0.245,none,2024-10-05\r\n33418,2018,AMER,grocery,online,66.20,4,0.248,loyalty,2024-09-20\r\n33419,1816,EMEA,grocery,online,42.92,1,0.219,none,2024-10-07\r\n33420,1868,AMER,grocery,retail,34.83,1,0.033,none,2024-02-13\r\n33421,2416,LATAM,home,mobile,74.19,8,0.081,none,2024-02-05\r\n33422,2364,APAC,toys,mobile,85.17,1,0.065,coupon,2024-03-01\r\n33423,1451,EMEA,grocery,retail,76.67,7,0.161,none,2024-09-17\r\n33424,1969,LATAM,grocery,online,52.26,3,0.098,none,2024-05-24\r\n33425,1622,LATAM,electronics,online,26.30,1,0.208,none,2024-01-21\r\n33426,1700,EMEA,toys,online,127.13,4,0.214,none,2024-06-21\r\n33427,1897,AMER,fashion,online,73.57,5,0.181,loyalty,2024-08-22\r\n33428,1831,APAC,toys,mobile,60.96,7,0.058,none,2024-11-06\r\n33429,1022,APAC,sports,online,64.79,1,0.103,loyalty,2024-11-25\r\n33430,1997,APAC,sports,online,124.45,2,0.163,none,2024-06-03\r\n33431,1854,AMER,toys,online,69.92,7,0.070,coupon,2024-10-18\r\n33432,1966,APAC,toys,online,57.33,5,0.113,none,2024-03-14\r\n33433,1834,AMER,home,retail,76.28,7,0.009,coupon,2024-11-07\r\n33434,1984,LATAM,home,retail,37.98,7,0.057,none,2024-08-01\r\n33435,1963,AMER,toys,mobile,43.28,5,0.195,none,2024-10-26\r\n33436,1311,APAC,toys,retail,51.21,4,0.015,none,2024-09-17\r\n33437,2015,APAC,grocery,online,70.00,1,0.017,loyalty,2024-08-26\r\n33438,1152,LATAM,electronics,partner,20.78,7,0.011,bundle,2024-01-03\r\n33439,2331,APAC,sports,online,42.41,1,0.164,bundle,2024-09-28\r\n33440,2363,AMER,grocery,partner,61.48,5,0.113,none,2024-01-14\r\n33441,1498,LATAM,electronics,online,32.26,3,0.114,loyalty,2024-10-12\r\n33442,1342,LATAM,home,online,50.67,7,0.20
8,none,2024-10-02\r\n33443,1809,APAC,electronics,online,44.90,7,0.087,loyalty,2024-01-27\r\n33444,1766,AMER,toys,online,44.72,8,0.063,none,2024-10-06\r\n33445,1439,LATAM,grocery,online,79.17,3,0.241,none,2024-07-02\r\n33446,1123,LATAM,electronics,retail,61.25,6,0.173,none,2024-04-15\r\n33447,2266,LATAM,fashion,online,42.90,1,0.233,none,2024-09-12\r\n33448,1472,AMER,home,mobile,75.27,1,0.120,loyalty,2024-10-23\r\n33449,1708,LATAM,toys,online,37.75,2,0.184,bundle,2024-02-04\r\n33450,1186,APAC,grocery,retail,35.65,1,0.171,none,2024-11-08\r\n33451,1625,EMEA,electronics,online,101.06,3,0.142,none,2024-12-22\r\n33452,2175,AMER,fashion,online,54.36,8,0.183,none,2024-08-26\r\n33453,2064,LATAM,home,partner,62.21,4,0.160,none,2024-08-05\r\n33454,2286,AMER,fashion,mobile,120.58,7,0.166,none,2024-04-11\r\n33455,1793,LATAM,grocery,online,47.21,2,0.152,bundle,2024-11-08\r\n33456,1294,APAC,home,online,104.16,1,0.051,loyalty,2024-07-12\r\n33457,1310,AMER,electronics,mobile,67.46,7,0.080,bundle,2024-07-03\r\n33458,1472,AMER,home,retail,78.31,7,0.050,none,2024-03-20\r\n33459,1830,EMEA,grocery,online,30.22,7,0.223,none,2024-09-09\r\n33460,1377,APAC,toys,online,32.30,7,0.119,none,2024-07-11\r\n33461,1096,EMEA,electronics,online,38.08,3,0.180,none,2024-10-02\r\n33462,1085,EMEA,fashion,online,56.45,2,0.061,none,2024-03-13\r\n33463,1016,AMER,grocery,retail,47.79,6,0.195,none,2024-02-20\r\n33464,2433,APAC,electronics,online,19.62,2,0.106,none,2024-06-09\r\n33465,1568,AMER,grocery,online,45.16,3,0.105,bundle,2024-05-15\r\n33466,1272,AMER,grocery,partner,39.08,1,0.181,none,2024-01-02\r\n33467,1396,EMEA,home,online,60.24,8,0.131,none,2024-07-12\r\n33468,2173,LATAM,electronics,online,52.61,1,0.039,bundle,2024-12-19\r\n33469,1280,LATAM,electronics,online,58.60,4,0.111,none,2024-06-21\r\n33470,1178,EMEA,electronics,online,58.31,4,0.114,coupon,2024-09-22\r\n33471,1410,AMER,home,online,66.18,8,0.039,none,2024-07-20\r\n33472,2203,APAC,home,online,73.64,2,0.210,loyalty,2024-12-08\r\n33473,1791,LATAM
,grocery,online,51.05,2,0.202,loyalty,2024-06-16\r\n33474,1205,APAC,electronics,online,41.53,6,0.210,coupon,2024-08-14\r\n33475,2441,EMEA,home,online,66.95,3,0.233,none,2024-04-11\r\n33476,2155,APAC,fashion,online,74.21,2,0.215,none,2024-03-25\r\n33477,1706,EMEA,electronics,online,60.90,7,0.058,coupon,2024-07-26\r\n33478,2246,AMER,fashion,online,66.96,2,0.189,none,2024-09-15\r\n33479,2489,LATAM,grocery,online,56.67,5,0.061,none,2024-01-25\r\n33480,2418,AMER,sports,online,62.91,7,0.147,none,2024-03-26\r\n33481,1530,APAC,sports,online,31.61,8,0.119,bundle,2024-11-24\r\n33482,1405,LATAM,home,online,67.81,3,0.098,none,2024-10-17\r\n33483,2269,EMEA,fashion,retail,58.65,6,0.002,coupon,2024-06-08\r\n33484,1294,APAC,fashion,retail,59.43,7,0.023,bundle,2024-04-20\r\n33485,2394,EMEA,grocery,retail,35.09,4,0.132,none,2024-05-17\r\n33486,1185,LATAM,grocery,retail,35.70,8,0.159,none,2024-07-27\r\n33487,2208,AMER,grocery,retail,74.37,3,0.049,coupon,2024-05-08\r\n33488,1808,APAC,toys,mobile,41.35,5,0.242,loyalty,2024-01-23\r\n33489,1961,EMEA,electronics,online,172.92,6,0.057,none,2024-02-14\r\n33490,2241,APAC,home,mobile,63.36,4,0.012,coupon,2024-12-01\r\n33491,1006,AMER,electronics,partner,53.82,5,0.035,none,2024-08-07\r\n33492,2310,EMEA,toys,online,59.85,5,0.176,loyalty,2024-03-09\r\n33493,2401,LATAM,grocery,online,33.42,5,0.102,none,2024-07-25\r\n33494,1338,EMEA,sports,online,78.62,2,0.222,none,2024-07-07\r\n33495,1124,AMER,electronics,online,42.51,7,0.079,bundle,2024-03-07\r\n33496,1303,LATAM,electronics,mobile,67.59,4,0.248,none,2024-07-14\r\n33497,1254,APAC,grocery,online,47.22,1,0.098,coupon,2024-11-25\r\n33498,1248,APAC,toys,retail,50.61,1,0.172,none,2024-10-12\r\n33499,2268,EMEA,grocery,online,54.43,3,0.207,none,2024-07-08\r\n33500,1107,APAC,grocery,retail,44.93,5,0.189,none,2024-08-28\r\n33501,1631,APAC,sports,online,74.13,4,0.011,loyalty,2024-04-24\r\n33502,1741,AMER,home,retail,31.36,4,0.095,none,2024-04-13\r\n33503,2429,EMEA,fashion,retail,56.31,7,0.032,none,2024-05-0
8\r\n33504,1612,LATAM,grocery,mobile,57.34,8,0.191,loyalty,2024-11-01\r\n33505,2329,LATAM,grocery,online,31.67,7,0.095,coupon,2024-08-07\r\n33506,1305,EMEA,toys,mobile,55.60,8,0.205,none,2024-01-12\r\n33507,1460,LATAM,fashion,retail,30.35,5,0.029,none,2024-08-25\r\n33508,2266,LATAM,sports,online,19.92,1,0.061,none,2024-12-20\r\n33509,1056,LATAM,toys,partner,18.33,7,0.109,none,2024-10-15\r\n33510,1918,EMEA,home,online,31.36,8,0.245,none,2024-07-02\r\n33511,1119,LATAM,grocery,online,65.53,3,0.015,none,2024-05-26\r\n33512,2278,APAC,fashion,online,38.51,8,0.105,none,2024-10-02\r\n33513,1634,AMER,electronics,retail,108.27,4,0.246,none,2024-10-26\r\n33514,2091,LATAM,home,partner,70.58,2,0.153,none,2024-09-24\r\n33515,1485,APAC,grocery,online,143.35,5,0.250,none,2024-01-03\r\n33516,1946,AMER,sports,online,20.38,3,0.159,none,2024-03-11\r\n33517,1889,APAC,grocery,partner,85.21,2,0.221,none,2024-04-05\r\n33518,1908,AMER,grocery,retail,70.18,3,0.212,coupon,2024-03-12\r\n33519,2426,AMER,electronics,retail,46.26,1,0.118,none,2024-12-07\r\n33520,1105,AMER,home,retail,69.73,8,0.152,none,2024-10-06\r\n33521,1954,APAC,grocery,mobile,94.47,4,0.112,none,2024-01-09\r\n33522,1218,AMER,home,retail,46.85,8,0.147,none,2024-09-18\r\n33523,2280,EMEA,grocery,retail,47.62,8,0.156,loyalty,2024-05-24\r\n33524,1603,EMEA,home,online,153.40,3,0.012,none,2024-09-17\r\n33525,1624,AMER,home,retail,118.63,1,0.197,none,2024-12-24\r\n33526,1443,EMEA,grocery,online,71.30,2,0.217,coupon,2024-03-17\r\n33527,2167,APAC,electronics,retail,124.33,2,0.223,none,2024-08-25\r\n33528,1509,AMER,home,retail,48.38,8,0.240,coupon,2024-11-27\r\n33529,1537,LATAM,grocery,mobile,50.30,4,0.075,none,2024-05-23\r\n33530,2152,EMEA,fashion,retail,69.66,6,0.247,loyalty,2024-10-22\r\n33531,2096,LATAM,fashion,retail,18.82,2,0.064,none,2024-06-17\r\n33532,2007,LATAM,electronics,retail,46.51,1,0.080,none,2024-12-19\r\n33533,2365,LATAM,fashion,mobile,27.91,7,0.075,loyalty,2024-02-15\r\n33534,1900,APAC,fashion,retail,41.03,8,0.017,loya
lty,2024-09-23\r\n33535,2029,APAC,grocery,retail,123.39,3,0.006,bundle,2024-05-28\r\n33536,1287,AMER,grocery,online,69.91,4,0.060,none,2024-06-25\r\n33537,2275,LATAM,toys,online,31.31,5,0.052,none,2024-02-03\r\n33538,1996,APAC,toys,retail,52.70,2,0.134,none,2024-03-25\r\n33539,2286,AMER,home,retail,75.83,6,0.136,none,2024-02-23\r\n33540,1219,LATAM,sports,online,53.98,2,0.201,coupon,2024-01-08\r\n33541,1471,EMEA,sports,online,53.90,3,0.233,none,2024-09-11\r\n33542,2253,AMER,electronics,mobile,149.22,6,0.197,coupon,2024-07-15\r\n33543,1170,AMER,electronics,online,49.62,5,0.174,bundle,2024-09-22\r\n33544,2016,LATAM,grocery,mobile,61.37,3,0.206,coupon,2024-08-15\r\n33545,2258,AMER,home,online,80.58,6,0.189,none,2024-08-23\r\n33546,1934,EMEA,sports,mobile,23.73,8,0.194,coupon,2024-10-03\r\n33547,1840,LATAM,grocery,retail,70.89,8,0.186,none,2024-10-16\r\n33548,1481,LATAM,fashion,online,34.49,7,0.211,coupon,2024-01-11\r\n33549,1993,APAC,grocery,retail,72.12,6,0.207,bundle,2024-07-01\r\n33550,1384,LATAM,home,retail,21.95,6,0.177,none,2024-07-28\r\n33551,1694,APAC,electronics,retail,23.13,5,0.139,none,2024-08-20\r\n33552,1272,AMER,grocery,online,85.51,2,0.214,coupon,2024-09-18\r\n33553,1629,LATAM,electronics,online,38.94,8,0.031,coupon,2024-07-07\r\n33554,1980,LATAM,home,retail,96.76,6,0.168,bundle,2024-12-27\r\n33555,1602,EMEA,grocery,online,20.47,5,0.239,loyalty,2024-05-12\r\n33556,1928,AMER,fashion,retail,53.27,3,0.037,coupon,2024-05-05\r\n33557,2067,LATAM,grocery,mobile,65.82,2,0.038,loyalty,2024-06-17\r\n33558,2127,LATAM,fashion,online,23.42,2,0.066,none,2024-04-04\r\n33559,1805,EMEA,electronics,retail,39.11,6,0.148,loyalty,2024-09-25\r\n33560,2185,EMEA,fashion,retail,42.05,7,0.238,none,2024-11-14\r\n33561,1089,LATAM,grocery,retail,29.76,2,0.013,loyalty,2024-02-24\r\n33562,1789,EMEA,grocery,retail,103.68,5,0.221,coupon,2024-08-21\r\n33563,1855,APAC,home,retail,78.91,2,0.064,none,2024-10-05\r\n33564,1942,APAC,sports,online,63.16,8,0.243,coupon,2024-07-24\r\n33565,1548,EM
EA,grocery,retail,48.42,4,0.111,none,2024-01-20\r\n33566,2418,AMER,home,retail,25.34,2,0.013,coupon,2024-09-07\r\n33567,1310,AMER,electronics,retail,52.77,1,0.049,loyalty,2024-02-07\r\n33568,2401,LATAM,electronics,retail,138.59,2,0.236,none,2024-08-06\r\n33569,2330,EMEA,electronics,retail,63.95,5,0.190,none,2024-11-12\r\n33570,2408,EMEA,home,partner,61.96,3,0.128,none,2024-03-15\r\n33571,1765,EMEA,grocery,mobile,44.97,8,0.069,none,2024-08-19\r\n33572,1574,AMER,sports,online,71.46,8,0.113,none,2024-08-27\r\n33573,2340,EMEA,fashion,partner,97.80,4,0.233,coupon,2024-07-04\r\n33574,1229,LATAM,toys,online,30.18,2,0.160,coupon,2024-11-15\r\n33575,1057,LATAM,sports,mobile,113.51,6,0.020,none,2024-03-06\r\n33576,1459,LATAM,sports,online,49.16,8,0.010,loyalty,2024-06-25\r\n33577,1520,APAC,sports,retail,29.37,2,0.039,none,2024-11-06\r\n33578,1191,EMEA,home,online,56.21,7,0.061,none,2024-06-05\r\n33579,1002,EMEA,sports,online,24.55,4,0.038,none,2024-02-09\r\n33580,2014,EMEA,grocery,online,54.75,3,0.074,bundle,2024-10-16\r\n33581,1337,APAC,fashion,retail,38.54,7,0.080,none,2024-04-05\r\n33582,2191,AMER,toys,online,65.23,3,0.234,none,2024-07-16\r\n33583,1616,APAC,electronics,online,58.22,5,0.216,none,2024-07-16\r\n33584,2330,EMEA,sports,partner,42.00,7,0.237,none,2024-07-19\r\n33585,1165,AMER,home,online,77.06,5,0.186,bundle,2024-10-22\r\n33586,1025,EMEA,grocery,online,75.35,1,0.033,loyalty,2024-01-24\r\n33587,1230,EMEA,sports,retail,59.11,8,0.044,none,2024-06-10\r\n33588,2363,AMER,toys,online,106.16,3,0.095,loyalty,2024-10-01\r\n33589,2169,EMEA,grocery,retail,31.74,1,0.039,bundle,2024-12-06\r\n33590,1875,EMEA,home,online,68.40,3,0.002,bundle,2024-01-06\r\n33591,1806,APAC,sports,retail,79.33,5,0.206,none,2024-01-19\r\n33592,1695,LATAM,home,online,39.32,7,0.202,coupon,2024-06-25\r\n33593,2338,AMER,sports,mobile,37.44,4,0.128,loyalty,2024-02-05\r\n33594,2034,LATAM,grocery,online,76.81,8,0.171,coupon,2024-02-03\r\n33595,1582,AMER,electronics,retail,158.84,1,0.045,coupon,2024-05-22\
r\n33596,1071,AMER,grocery,mobile,54.75,8,0.186,none,2024-10-02\r\n33597,1831,APAC,home,online,49.40,4,0.174,coupon,2024-06-12\r\n33598,1790,AMER,electronics,partner,34.44,4,0.244,coupon,2024-05-06\r\n33599,1976,AMER,fashion,online,100.66,2,0.197,none,2024-04-18\r\n33600,2463,AMER,grocery,online,98.94,3,0.212,none,2024-02-14\r\n33601,2235,AMER,home,online,81.81,2,0.066,loyalty,2024-11-19\r\n33602,2196,AMER,electronics,retail,70.66,5,0.133,none,2024-10-21\r\n33603,2403,LATAM,electronics,partner,50.93,7,0.135,coupon,2024-11-06\r\n33604,2286,AMER,sports,online,60.57,6,0.206,none,2024-01-22\r\n33605,2305,AMER,fashion,online,33.64,1,0.221,none,2024-02-27\r\n33606,1900,APAC,toys,online,92.56,5,0.220,bundle,2024-08-08\r\n33607,1790,AMER,electronics,retail,81.97,8,0.014,bundle,2024-06-22\r\n33608,1809,APAC,electronics,online,47.46,8,0.225,none,2024-05-22\r\n33609,2035,LATAM,grocery,retail,60.30,7,0.214,loyalty,2024-07-09\r\n33610,1854,AMER,toys,online,68.34,8,0.172,none,2024-02-01\r\n33611,2011,AMER,sports,online,73.12,3,0.155,none,2024-08-12\r\n33612,1881,LATAM,grocery,online,55.14,1,0.104,coupon,2024-09-15\r\n33613,2428,LATAM,grocery,retail,144.23,1,0.079,bundle,2024-03-20\r\n33614,2225,EMEA,fashion,retail,41.35,2,0.174,none,2024-12-07\r\n33615,2220,LATAM,home,partner,80.61,2,0.183,none,2024-06-23\r\n33616,1511,EMEA,fashion,online,57.05,5,0.160,none,2024-01-09\r\n33617,2064,LATAM,electronics,online,63.16,4,0.027,none,2024-05-18\r\n33618,2340,EMEA,sports,online,59.80,1,0.097,coupon,2024-02-22\r\n33619,2277,EMEA,toys,retail,106.18,5,0.240,none,2024-06-20\r\n33620,1398,APAC,grocery,retail,77.61,6,0.050,bundle,2024-05-24\r\n33621,1985,AMER,toys,retail,31.26,3,0.173,bundle,2024-02-12\r\n33622,1624,AMER,toys,retail,66.61,2,0.078,coupon,2024-01-08\r\n33623,1768,AMER,toys,online,141.45,5,0.169,coupon,2024-12-08\r\n33624,1141,AMER,home,retail,67.45,8,0.036,bundle,2024-01-06\r\n33625,1559,EMEA,electronics,mobile,156.38,6,0.245,loyalty,2024-02-15\r\n33626,1544,LATAM,grocery,retail,1
0.72,7,0.105,none,2024-03-07\r\n33627,1004,LATAM,grocery,online,87.86,2,0.241,none,2024-02-20\r\n33628,1891,APAC,sports,online,70.19,7,0.178,none,2024-04-16\r\n33629,1533,APAC,electronics,online,53.49,8,0.110,coupon,2024-08-12\r\n33630,1597,APAC,electronics,online,123.22,3,0.044,none,2024-08-10\r\n33631,1396,EMEA,electronics,retail,33.24,8,0.111,none,2024-03-20\r\n33632,1040,LATAM,grocery,retail,34.62,1,0.115,bundle,2024-02-02\r\n33633,1673,AMER,fashion,online,106.27,4,0.092,coupon,2024-12-02\r\n33634,2408,EMEA,grocery,retail,70.82,2,0.154,coupon,2024-12-05\r\n33635,1534,EMEA,toys,retail,69.91,7,0.065,none,2024-06-10\r\n33636,2260,EMEA,toys,online,99.91,1,0.248,coupon,2024-11-25\r\n33637,1650,LATAM,fashion,retail,33.33,3,0.127,none,2024-02-19\r\n33638,2216,AMER,electronics,online,50.94,1,0.153,coupon,2024-06-22\r\n33639,2418,AMER,toys,partner,73.79,7,0.231,none,2024-03-06\r\n33640,2252,EMEA,toys,online,46.12,3,0.124,none,2024-03-19\r\n33641,1592,LATAM,home,retail,118.14,7,0.083,coupon,2024-11-26\r\n33642,1052,LATAM,home,online,82.51,2,0.181,none,2024-09-22\r\n33643,2155,APAC,grocery,online,97.51,2,0.216,coupon,2024-08-01\r\n33644,1947,EMEA,grocery,retail,26.00,3,0.079,none,2024-11-24\r\n33645,2279,LATAM,grocery,mobile,86.64,4,0.200,coupon,2024-04-04\r\n33646,2477,APAC,home,online,38.77,5,0.196,coupon,2024-09-10\r\n33647,1234,AMER,fashion,online,69.59,5,0.114,bundle,2024-11-02\r\n33648,1899,APAC,grocery,retail,17.96,1,0.080,coupon,2024-01-22\r\n33649,1395,APAC,fashion,online,53.23,5,0.152,bundle,2024-11-21\r\n33650,1001,LATAM,grocery,online,72.92,5,0.131,none,2024-10-25\r\n33651,1598,EMEA,toys,online,96.60,1,0.077,coupon,2024-01-06\r\n33652,2152,EMEA,grocery,retail,67.35,6,0.164,none,2024-08-27\r\n33653,1639,APAC,fashion,online,58.45,6,0.118,none,2024-02-25\r\n33654,1107,APAC,electronics,retail,103.96,8,0.245,coupon,2024-10-20\r\n33655,1339,EMEA,grocery,retail,22.66,8,0.087,none,2024-12-04\r\n33656,2434,APAC,grocery,retail,34.18,5,0.135,bundle,2024-10-04\r\n33657,210
8,AMER,home,retail,89.97,6,0.187,none,2024-10-02\r\n33658,1526,EMEA,grocery,retail,24.49,5,0.213,none,2024-12-26\r\n33659,2437,LATAM,electronics,online,45.80,3,0.176,loyalty,2024-10-23\r\n33660,1798,AMER,fashion,online,43.48,1,0.151,loyalty,2024-07-15\r\n33661,2470,EMEA,home,online,43.29,3,0.066,none,2024-12-19\r\n33662,2360,EMEA,grocery,online,38.53,1,0.090,bundle,2024-12-24\r\n33663,2137,LATAM,toys,mobile,86.53,2,0.078,bundle,2024-12-04\r\n33664,1340,LATAM,electronics,retail,66.36,8,0.198,none,2024-01-06\r\n33665,1347,APAC,electronics,mobile,40.30,1,0.046,bundle,2024-01-24\r\n33666,1218,AMER,grocery,mobile,71.32,5,0.039,none,2024-07-22\r\n33667,1452,LATAM,sports,retail,123.12,7,0.004,coupon,2024-04-02\r\n33668,1125,LATAM,grocery,online,65.15,5,0.217,coupon,2024-04-25\r\n33669,1815,APAC,toys,online,33.75,2,0.208,coupon,2024-02-13\r\n33670,1818,AMER,grocery,online,48.46,5,0.075,coupon,2024-01-05\r\n33671,2263,AMER,grocery,online,18.54,4,0.010,none,2024-11-18\r\n33672,1991,APAC,grocery,online,32.13,5,0.049,none,2024-05-16\r\n33673,2310,EMEA,grocery,retail,72.78,8,0.235,coupon,2024-01-23\r\n33674,1503,APAC,sports,retail,100.47,1,0.141,none,2024-09-09\r\n33675,2344,LATAM,home,online,77.60,5,0.248,none,2024-11-02\r\n33676,1323,EMEA,fashion,online,44.16,4,0.242,coupon,2024-11-03\r\n33677,1727,APAC,electronics,online,80.46,1,0.216,none,2024-12-13\r\n33678,1972,LATAM,fashion,partner,47.07,3,0.191,none,2024-11-15\r\n33679,2005,APAC,home,retail,35.34,5,0.149,loyalty,2024-03-09\r\n33680,1719,LATAM,sports,online,109.23,1,0.112,none,2024-01-14\r\n33681,2477,APAC,home,retail,18.15,4,0.013,coupon,2024-08-25\r\n33682,2028,APAC,home,partner,51.18,6,0.227,bundle,2024-02-15\r\n33683,2225,EMEA,sports,mobile,31.75,1,0.111,bundle,2024-10-08\r\n33684,1532,APAC,grocery,online,49.26,4,0.165,loyalty,2024-10-11\r\n33685,1690,LATAM,home,retail,42.05,4,0.160,none,2024-12-14\r\n33686,2444,EMEA,grocery,partner,53.31,3,0.056,coupon,2024-09-10\r\n33687,2486,APAC,sports,online,68.29,5,0.225,bundle,
2024-02-08\r\n33688,1246,EMEA,electronics,mobile,80.19,5,0.143,coupon,2024-05-04\r\n33689,1520,APAC,grocery,online,35.15,1,0.243,bundle,2024-07-08\r\n33690,1397,LATAM,home,retail,41.57,2,0.097,bundle,2024-08-04\r\n33691,2083,LATAM,grocery,retail,149.26,7,0.115,none,2024-05-01\r\n33692,2042,LATAM,home,mobile,31.37,7,0.136,none,2024-10-22\r\n33693,2455,AMER,fashion,online,183.20,4,0.007,none,2024-12-24\r\n33694,1804,AMER,toys,retail,28.98,6,0.155,coupon,2024-08-11\r\n33695,1707,APAC,fashion,online,59.95,1,0.245,coupon,2024-09-10\r\n33696,1111,APAC,home,retail,59.67,4,0.214,none,2024-06-28\r\n33697,1018,APAC,home,retail,69.37,6,0.019,none,2024-03-10\r\n33698,1618,EMEA,electronics,mobile,25.95,8,0.159,none,2024-06-17\r\n33699,1059,AMER,home,retail,29.50,4,0.127,none,2024-12-14\r\n33700,2149,EMEA,electronics,online,50.81,1,0.081,bundle,2024-02-03\r\n33701,1123,LATAM,fashion,online,152.08,1,0.050,none,2024-06-13\r\n33702,1594,LATAM,grocery,retail,24.43,3,0.203,none,2024-09-18\r\n33703,2041,LATAM,grocery,mobile,58.48,6,0.156,none,2024-03-13\r\n33704,1498,LATAM,toys,online,86.63,7,0.200,none,2024-06-10\r\n33705,2060,LATAM,electronics,online,72.01,7,0.153,loyalty,2024-09-09\r\n33706,2072,AMER,electronics,online,117.24,7,0.213,bundle,2024-01-16\r\n33707,1296,LATAM,grocery,online,53.62,1,0.041,none,2024-01-09\r\n33708,1621,APAC,fashion,retail,65.63,6,0.171,none,2024-01-03\r\n33709,2095,EMEA,home,retail,32.21,6,0.179,coupon,2024-04-08\r\n33710,2237,EMEA,fashion,retail,94.74,4,0.155,bundle,2024-10-07\r\n33711,1370,APAC,home,retail,76.31,7,0.020,coupon,2024-06-11\r\n33712,1311,APAC,electronics,partner,56.81,1,0.059,none,2024-02-09\r\n33713,1450,EMEA,electronics,online,40.74,1,0.065,none,2024-10-25\r\n33714,2064,LATAM,toys,retail,77.33,7,0.034,bundle,2024-05-02\r\n33715,1347,APAC,electronics,retail,32.26,6,0.063,bundle,2024-05-25\r\n33716,1635,APAC,home,mobile,45.75,2,0.155,coupon,2024-07-15\r\n33717,2221,LATAM,fashion,retail,101.71,2,0.100,loyalty,2024-03-12\r\n33718,1154,LATAM,s
ports,online,50.86,4,0.116,coupon,2024-06-10\r\n33719,1465,AMER,toys,online,46.30,5,0.016,none,2024-02-23\r\n33720,2319,AMER,home,mobile,24.33,7,0.014,coupon,2024-10-19\r\n33721,1766,AMER,electronics,online,61.22,3,0.172,none,2024-11-15\r\n33722,2191,AMER,electronics,online,51.15,8,0.241,none,2024-07-22\r\n33723,2286,AMER,sports,online,66.04,8,0.233,loyalty,2024-11-15\r\n33724,1726,EMEA,fashion,online,42.37,7,0.199,none,2024-11-15\r\n33725,1495,LATAM,home,online,82.96,5,0.204,none,2024-04-23\r\n33726,1393,LATAM,sports,retail,65.14,3,0.235,none,2024-09-23\r\n33727,1376,EMEA,home,mobile,34.36,7,0.019,none,2024-08-13\r\n33728,2143,AMER,grocery,partner,22.63,2,0.171,none,2024-05-19\r\n33729,2158,APAC,home,retail,64.85,8,0.069,none,2024-10-18\r\n33730,2426,AMER,fashion,retail,133.84,3,0.018,none,2024-09-01\r\n33731,1057,LATAM,electronics,online,166.54,1,0.118,coupon,2024-11-07\r\n33732,2206,AMER,electronics,retail,63.96,5,0.124,none,2024-10-10\r\n33733,1431,APAC,electronics,online,63.64,2,0.217,coupon,2024-06-22\r\n33734,1239,APAC,grocery,retail,34.38,1,0.007,none,2024-10-05\r\n33735,2272,EMEA,home,online,58.42,2,0.087,none,2024-01-01\r\n33736,1820,AMER,grocery,retail,104.20,6,0.044,none,2024-03-01\r\n33737,1627,LATAM,home,online,19.53,5,0.065,coupon,2024-08-14\r\n33738,1190,EMEA,fashion,retail,71.12,8,0.126,none,2024-04-16\r\n33739,2390,AMER,home,online,76.80,2,0.057,none,2024-06-17\r\n33740,1642,EMEA,grocery,retail,37.60,8,0.040,bundle,2024-07-06\r\n33741,1608,AMER,electronics,retail,25.73,5,0.131,none,2024-08-22\r\n33742,2269,EMEA,fashion,online,19.69,6,0.050,none,2024-06-23\r\n33743,2203,APAC,electronics,retail,158.13,7,0.076,none,2024-02-11\r\n33744,1297,AMER,home,retail,145.13,3,0.145,none,2024-03-22\r\n33745,1950,LATAM,home,retail,31.35,4,0.206,none,2024-05-27\r\n33746,2184,APAC,electronics,retail,28.98,3,0.182,none,2024-08-22\r\n33747,1842,LATAM,sports,mobile,59.73,8,0.184,none,2024-01-08\r\n33748,1204,AMER,grocery,online,115.84,1,0.066,loyalty,2024-07-10\r\n3374
9,1632,LATAM,grocery,online,173.18,1,0.098,none,2024-11-11\r\n33750,2245,APAC,fashion,online,38.98,8,0.198,loyalty,2024-12-09\r\n33751,1031,AMER,electronics,retail,20.62,5,0.158,loyalty,2024-03-20\r\n33752,1490,AMER,home,mobile,31.90,3,0.059,none,2024-04-16\r\n33753,1479,AMER,home,online,32.63,8,0.129,none,2024-01-27\r\n33754,1655,LATAM,home,retail,33.68,7,0.205,bundle,2024-01-24\r\n33755,2416,LATAM,electronics,online,29.77,6,0.173,none,2024-11-27\r\n33756,1767,AMER,electronics,retail,108.36,8,0.171,coupon,2024-02-11\r\n33757,1900,APAC,electronics,online,32.39,1,0.009,none,2024-08-17\r\n33758,2452,LATAM,grocery,retail,64.34,4,0.173,none,2024-02-24\r\n33759,1477,APAC,home,online,84.08,7,0.157,none,2024-03-04\r\n33760,1191,EMEA,grocery,partner,17.92,5,0.177,loyalty,2024-03-23\r\n33761,2494,AMER,home,online,56.98,2,0.123,none,2024-01-19\r\n33762,1856,EMEA,electronics,online,36.22,1,0.189,coupon,2024-07-03\r\n33763,1253,AMER,grocery,retail,80.48,4,0.114,bundle,2024-11-04\r\n33764,1197,LATAM,grocery,retail,79.33,6,0.161,coupon,2024-02-13\r\n33765,1957,AMER,home,online,33.35,7,0.032,none,2024-01-19\r\n33766,2110,LATAM,sports,mobile,36.65,3,0.182,none,2024-10-18\r\n33767,2033,LATAM,electronics,online,20.62,3,0.132,loyalty,2024-06-10\r\n33768,1265,APAC,electronics,retail,152.43,6,0.179,none,2024-10-18\r\n33769,1743,LATAM,grocery,mobile,45.70,7,0.034,none,2024-10-26\r\n33770,2274,APAC,fashion,online,33.08,2,0.076,none,2024-10-06\r\n33771,2092,AMER,toys,retail,62.53,1,0.064,none,2024-12-24\r\n33772,2069,AMER,electronics,retail,28.94,7,0.102,coupon,2024-06-15\r\n33773,2388,LATAM,electronics,online,52.89,3,0.177,none,2024-07-08\r\n33774,2365,LATAM,grocery,retail,28.99,2,0.248,none,2024-05-24\r\n33775,2091,LATAM,electronics,online,45.24,8,0.111,none,2024-10-11\r\n33776,2218,EMEA,home,retail,56.91,1,0.087,none,2024-07-14\r\n33777,1589,AMER,home,online,96.49,3,0.169,none,2024-10-15\r\n33778,1652,APAC,fashion,retail,38.61,6,0.246,coupon,2024-11-08\r\n33779,1414,APAC,toys,retail,25.
11,8,0.021,none,2024-07-17\r\n33780,1187,AMER,fashion,online,82.05,3,0.201,none,2024-12-14\r\n33781,1652,APAC,home,mobile,53.32,6,0.165,coupon,2024-06-02\r\n33782,1765,EMEA,fashion,mobile,49.10,1,0.049,coupon,2024-12-23\r\n33783,2126,APAC,fashion,online,210.30,4,0.156,coupon,2024-01-04\r\n33784,2455,AMER,fashion,online,95.97,4,0.092,coupon,2024-02-27\r\n33785,1528,EMEA,toys,mobile,89.37,2,0.227,coupon,2024-05-02\r\n33786,2487,LATAM,grocery,mobile,49.89,7,0.168,none,2024-02-17\r\n33787,2482,EMEA,toys,online,59.14,1,0.226,coupon,2024-04-08\r\n33788,2343,EMEA,grocery,online,31.95,3,0.142,none,2024-06-02\r\n33789,1715,AMER,grocery,online,23.82,7,0.198,none,2024-01-01\r\n33790,1499,EMEA,fashion,retail,26.22,7,0.056,none,2024-06-27\r\n33791,1969,LATAM,home,retail,31.47,6,0.224,none,2024-12-23\r\n33792,2356,LATAM,home,online,20.93,7,0.033,loyalty,2024-06-25\r\n33793,2069,AMER,fashion,retail,130.17,5,0.165,coupon,2024-02-16\r\n33794,2007,LATAM,grocery,retail,20.43,8,0.152,coupon,2024-01-20\r\n33795,2459,AMER,sports,online,43.58,2,0.120,coupon,2024-03-13\r\n33796,2214,AMER,fashion,retail,58.72,5,0.132,bundle,2024-10-25\r\n33797,1700,EMEA,sports,online,100.29,5,0.248,coupon,2024-12-26\r\n33798,2258,AMER,fashion,retail,41.63,7,0.220,none,2024-05-01\r\n33799,2266,LATAM,grocery,online,38.33,3,0.084,bundle,2024-02-21\r\n33800,2244,LATAM,electronics,mobile,42.35,4,0.130,none,2024-08-12\r\n33801,1249,EMEA,grocery,partner,49.90,1,0.237,none,2024-11-06\r\n33802,1267,EMEA,sports,mobile,39.39,6,0.001,none,2024-08-03\r\n33803,2252,EMEA,home,retail,60.86,7,0.246,bundle,2024-05-24\r\n33804,1542,APAC,electronics,online,54.50,2,0.009,none,2024-04-04\r\n33805,1759,EMEA,grocery,online,159.39,8,0.135,none,2024-06-28\r\n33806,1190,EMEA,toys,retail,95.08,7,0.090,coupon,2024-07-02\r\n33807,1515,EMEA,electronics,online,27.81,5,0.043,none,2024-09-05\r\n33808,2362,AMER,grocery,retail,34.48,7,0.073,none,2024-12-14\r\n33809,1187,AMER,grocery,online,53.29,8,0.019,bundle,2024-10-23\r\n33810,1491,EMEA,gr
ocery,online,53.53,6,0.034,none,2024-03-10\r\n33811,2224,EMEA,grocery,retail,72.57,1,0.205,bundle,2024-01-02\r\n33812,2414,EMEA,fashion,retail,46.34,5,0.054,none,2024-09-02\r\n33813,1899,APAC,electronics,retail,70.05,2,0.039,none,2024-11-28\r\n33814,2021,EMEA,home,mobile,54.62,6,0.055,loyalty,2024-03-18\r\n33815,2006,APAC,fashion,online,103.00,3,0.022,coupon,2024-03-22\r\n33816,1591,APAC,electronics,online,32.19,5,0.165,none,2024-06-22\r\n33817,2415,AMER,toys,retail,78.08,5,0.235,coupon,2024-01-09\r\n33818,1291,EMEA,grocery,retail,97.64,2,0.108,none,2024-12-04\r\n33819,1424,APAC,grocery,retail,24.66,2,0.175,none,2024-04-02\r\n33820,2328,EMEA,toys,retail,68.16,4,0.177,none,2024-05-12\r\n33821,1226,AMER,fashion,online,43.06,6,0.133,none,2024-10-08\r\n33822,2154,APAC,home,online,40.41,3,0.229,none,2024-09-14\r\n33823,2284,EMEA,toys,retail,54.91,1,0.241,none,2024-04-23\r\n33824,1908,AMER,fashion,retail,98.56,3,0.051,bundle,2024-09-26\r\n33825,2261,EMEA,home,online,39.31,3,0.014,none,2024-06-01\r\n33826,2013,APAC,toys,online,130.20,1,0.201,none,2024-05-03\r\n33827,2205,AMER,fashion,online,37.18,7,0.019,coupon,2024-06-16\r\n33828,1066,AMER,fashion,online,68.37,4,0.203,none,2024-10-11\r\n33829,2463,AMER,home,retail,24.14,8,0.230,none,2024-04-16\r\n33830,1919,EMEA,electronics,online,82.14,7,0.129,bundle,2024-10-21\r\n33831,2259,AMER,home,retail,89.72,5,0.184,none,2024-02-03\r\n33832,2107,APAC,home,partner,53.55,1,0.153,none,2024-10-09\r\n33833,2109,EMEA,fashion,retail,75.76,6,0.241,none,2024-09-27\r\n33834,1402,EMEA,home,online,61.16,3,0.083,none,2024-09-02\r\n33835,1079,LATAM,fashion,online,23.09,3,0.108,loyalty,2024-08-19\r\n33836,1551,APAC,sports,partner,109.51,3,0.150,none,2024-03-18\r\n33837,1496,AMER,electronics,retail,82.40,4,0.137,none,2024-02-11\r\n33838,1592,LATAM,grocery,retail,91.22,2,0.060,none,2024-11-15\r\n33839,1415,AMER,home,online,89.69,5,0.106,none,2024-12-18\r\n33840,2035,LATAM,home,online,72.00,4,0.243,coupon,2024-01-26\r\n33841,1817,APAC,toys,retail,25
.15,3,0.025,coupon,2024-03-01\r\n33842,1910,LATAM,electronics,online,24.54,3,0.057,none,2024-01-17\r\n33843,1820,AMER,electronics,online,105.32,1,0.036,bundle,2024-02-27\r\n33844,1745,APAC,toys,online,85.18,3,0.020,bundle,2024-04-07\r\n33845,1310,AMER,fashion,online,84.32,5,0.147,coupon,2024-07-09\r\n33846,1722,EMEA,fashion,retail,38.74,8,0.241,coupon,2024-03-27\r\n33847,2333,APAC,home,online,39.99,7,0.227,bundle,2024-10-13\r\n33848,1372,APAC,grocery,online,86.99,4,0.066,none,2024-08-17\r\n33849,1167,EMEA,sports,retail,43.47,2,0.187,none,2024-07-12\r\n33850,1325,APAC,grocery,online,17.62,7,0.250,loyalty,2024-11-04\r\n33851,2270,APAC,electronics,online,37.22,7,0.225,coupon,2024-07-28\r\n33852,2324,AMER,toys,retail,65.81,8,0.051,none,2024-08-26\r\n33853,2393,LATAM,toys,retail,29.75,1,0.200,coupon,2024-09-02\r\n33854,1395,APAC,grocery,online,34.14,7,0.028,none,2024-12-10\r\n33855,2239,EMEA,grocery,online,45.85,2,0.191,coupon,2024-06-16\r\n33856,1298,LATAM,toys,retail,130.18,2,0.163,none,2024-08-03\r\n33857,2405,AMER,sports,retail,36.59,3,0.142,none,2024-11-18\r\n33858,1977,APAC,sports,online,115.59,4,0.012,bundle,2024-11-17\r\n33859,1604,EMEA,fashion,online,114.85,5,0.083,coupon,2024-08-19\r\n33860,1309,EMEA,electronics,online,95.86,5,0.179,coupon,2024-08-13\r\n33861,1027,APAC,electronics,partner,135.88,3,0.009,bundle,2024-10-13\r\n33862,2142,LATAM,fashion,retail,61.29,2,0.120,none,2024-08-06\r\n33863,1619,APAC,toys,online,42.58,7,0.145,coupon,2024-11-26\r\n33864,1483,EMEA,electronics,online,36.32,7,0.103,none,2024-08-11\r\n33865,1478,EMEA,home,mobile,94.31,2,0.092,bundle,2024-05-07\r\n33866,1118,AMER,fashion,retail,72.87,7,0.198,loyalty,2024-08-06\r\n33867,2428,LATAM,fashion,online,31.71,1,0.056,bundle,2024-11-25\r\n33868,1804,AMER,home,retail,51.19,6,0.005,bundle,2024-12-26\r\n33869,2403,LATAM,fashion,online,108.88,7,0.017,none,2024-04-02\r\n33870,1296,LATAM,electronics,online,82.84,8,0.208,none,2024-03-23\r\n33871,1995,LATAM,electronics,online,30.98,2,0.053,none,202
4-03-02\r\n33872,1675,LATAM,sports,mobile,89.23,3,0.204,bundle,2024-07-03\r\n33873,1590,APAC,fashion,online,75.23,7,0.140,none,2024-01-04\r\n33874,1261,APAC,grocery,online,87.76,3,0.197,none,2024-09-07\r\n33875,2233,EMEA,electronics,online,103.83,3,0.053,coupon,2024-05-23\r\n33876,1339,EMEA,electronics,mobile,57.78,3,0.150,bundle,2024-07-06\r\n33877,1905,APAC,electronics,retail,52.46,8,0.179,bundle,2024-02-11\r\n33878,2159,AMER,grocery,retail,36.82,5,0.249,coupon,2024-11-20\r\n33879,2055,AMER,toys,mobile,22.27,5,0.017,none,2024-03-10\r\n33880,1009,APAC,sports,retail,41.69,6,0.050,coupon,2024-11-03\r\n33881,1117,LATAM,grocery,online,76.86,4,0.240,bundle,2024-06-16\r\n33882,2022,LATAM,fashion,online,61.57,8,0.228,coupon,2024-04-16\r\n33883,1732,LATAM,grocery,retail,40.02,7,0.124,none,2024-06-07\r\n33884,2414,EMEA,sports,retail,19.67,4,0.018,none,2024-11-22\r\n33885,1975,EMEA,grocery,retail,71.20,2,0.053,none,2024-02-15\r\n33886,1909,APAC,grocery,online,28.12,7,0.207,none,2024-01-28\r\n33887,1793,LATAM,home,online,31.86,8,0.015,none,2024-10-01\r\n33888,2337,AMER,sports,online,50.54,6,0.249,bundle,2024-06-14\r\n33889,1204,AMER,toys,online,71.29,1,0.061,none,2024-05-11\r\n33890,1607,LATAM,sports,mobile,70.87,4,0.246,coupon,2024-03-05\r\n33891,2077,APAC,home,retail,76.44,1,0.245,loyalty,2024-01-02\r\n33892,1122,AMER,sports,retail,124.13,3,0.226,none,2024-11-27\r\n33893,1190,EMEA,sports,mobile,34.97,3,0.047,loyalty,2024-03-16\r\n33894,1278,AMER,fashion,retail,69.58,1,0.137,none,2024-01-08\r\n33895,1133,EMEA,home,retail,58.82,2,0.229,coupon,2024-12-12\r\n33896,1861,AMER,home,online,54.17,4,0.056,none,2024-08-19\r\n33897,1505,EMEA,home,online,77.10,1,0.234,none,2024-01-04\r\n33898,1376,EMEA,sports,online,30.07,5,0.192,none,2024-02-14\r\n33899,1016,AMER,home,online,33.51,3,0.076,bundle,2024-05-07\r\n33900,1873,EMEA,sports,retail,86.32,1,0.157,none,2024-04-10\r\n33901,2095,EMEA,fashion,online,23.81,6,0.192,none,2024-06-04\r\n33902,1702,AMER,grocery,retail,50.57,3,0.207,coupon,
2024-11-01\r\n33903,1841,AMER,grocery,online,18.70,3,0.015,none,2024-04-17\r\n33904,2294,EMEA,home,online,36.81,4,0.130,coupon,2024-05-09\r\n33905,2497,AMER,electronics,online,276.14,1,0.198,loyalty,2024-10-16\r\n33906,1385,LATAM,fashion,online,81.57,2,0.077,none,2024-01-21\r\n33907,1544,LATAM,home,online,53.73,6,0.078,none,2024-04-06\r\n33908,1332,APAC,sports,online,10.95,1,0.005,bundle,2024-08-06\r\n33909,1701,LATAM,electronics,retail,80.67,7,0.198,none,2024-07-10\r\n33910,2482,EMEA,fashion,online,34.94,7,0.159,none,2024-11-23\r\n33911,1019,APAC,sports,mobile,42.54,1,0.214,none,2024-01-20\r\n33912,1636,APAC,fashion,online,122.66,8,0.147,none,2024-06-21\r\n33913,1546,EMEA,grocery,retail,57.48,7,0.218,coupon,2024-10-18\r\n33914,1809,APAC,home,online,186.69,3,0.032,none,2024-07-06\r\n33915,2053,AMER,grocery,retail,124.22,7,0.232,none,2024-11-21\r\n33916,1912,APAC,toys,mobile,86.90,6,0.166,loyalty,2024-01-19\r\n33917,2286,AMER,electronics,online,80.34,3,0.166,coupon,2024-01-01\r\n33918,1033,APAC,home,retail,140.94,2,0.179,none,2024-08-04\r\n33919,1359,LATAM,electronics,online,48.95,5,0.219,coupon,2024-08-01\r\n33920,1766,AMER,electronics,mobile,16.94,6,0.202,none,2024-01-09\r\n33921,2350,APAC,electronics,online,114.33,1,0.144,none,2024-01-14\r\n33922,1729,AMER,fashion,online,104.22,4,0.183,none,2024-03-27\r\n33923,2142,LATAM,electronics,online,36.12,2,0.223,none,2024-09-03\r\n33924,1580,AMER,toys,online,114.31,3,0.224,none,2024-06-02\r\n33925,2418,AMER,toys,online,24.07,3,0.147,bundle,2024-05-28\r\n33926,2488,EMEA,home,online,76.46,5,0.094,bundle,2024-02-26\r\n33927,1599,APAC,grocery,online,86.57,1,0.106,loyalty,2024-07-10\r\n33928,1759,EMEA,home,online,120.78,4,0.145,loyalty,2024-03-15\r\n33929,2068,LATAM,sports,online,30.69,5,0.231,coupon,2024-01-18\r\n33930,1278,AMER,fashion,online,25.38,1,0.096,bundle,2024-08-08\r\n33931,1967,EMEA,sports,mobile,92.41,8,0.039,none,2024-07-09\r\n33932,2411,EMEA,grocery,retail,48.53,3,0.058,none,2024-03-08\r\n33933,1004,LATAM,grocery
,online,46.43,4,0.242,none,2024-09-01\r\n33934,2237,EMEA,sports,retail,44.27,6,0.205,none,2024-03-11\r\n33935,2375,AMER,sports,partner,56.59,2,0.233,coupon,2024-11-21\r\n33936,1668,AMER,grocery,mobile,13.22,7,0.149,loyalty,2024-10-04\r\n33937,1575,APAC,grocery,partner,36.68,1,0.136,none,2024-12-20\r\n33938,1633,EMEA,sports,online,65.33,2,0.070,coupon,2024-08-24\r\n33939,1284,APAC,electronics,retail,62.53,5,0.199,coupon,2024-07-07\r\n33940,2278,APAC,home,online,66.73,4,0.192,none,2024-02-14\r\n33941,2155,APAC,electronics,online,65.64,6,0.240,none,2024-04-14\r\n33942,2375,AMER,electronics,retail,32.26,5,0.076,none,2024-06-13\r\n33943,2413,AMER,home,retail,52.59,8,0.091,none,2024-03-15\r\n33944,1844,APAC,grocery,retail,89.45,4,0.195,none,2024-07-21\r\n33945,1144,APAC,grocery,retail,97.19,1,0.100,loyalty,2024-01-24\r\n33946,1195,AMER,electronics,online,55.21,4,0.236,none,2024-01-12\r\n33947,1327,APAC,toys,retail,156.37,3,0.188,coupon,2024-12-23\r\n33948,1415,AMER,grocery,online,105.94,3,0.212,coupon,2024-03-02\r\n33949,1185,LATAM,sports,mobile,60.29,4,0.238,none,2024-08-17\r\n33950,2000,APAC,home,online,46.48,2,0.165,none,2024-09-26\r\n33951,2323,AMER,electronics,online,28.36,5,0.099,none,2024-05-04\r\n33952,1746,LATAM,electronics,mobile,104.57,8,0.170,none,2024-12-08\r\n33953,1988,AMER,home,online,45.39,2,0.065,none,2024-02-15\r\n33954,2133,AMER,electronics,retail,145.19,4,0.051,none,2024-09-13\r\n33955,1510,EMEA,fashion,retail,19.01,6,0.151,coupon,2024-02-28\r\n33956,2239,EMEA,sports,online,41.75,5,0.110,bundle,2024-09-04\r\n33957,2301,EMEA,sports,retail,67.50,4,0.064,bundle,2024-01-18\r\n33958,2209,AMER,home,retail,71.15,7,0.244,none,2024-03-11\r\n33959,1314,AMER,toys,retail,29.96,6,0.121,none,2024-11-23\r\n33960,2171,EMEA,electronics,partner,44.95,5,0.126,none,2024-12-08\r\n33961,1367,AMER,grocery,retail,19.77,7,0.033,coupon,2024-04-22\r\n33962,2399,LATAM,toys,partner,36.37,5,0.002,none,2024-02-21\r\n33963,1044,EMEA,home,online,42.27,3,0.086,none,2024-06-28\r\n33964
,2050,APAC,grocery,mobile,50.30,1,0.230,none,2024-12-27\r\n33965,2413,AMER,fashion,online,71.76,1,0.187,coupon,2024-02-28\r\n33966,1366,APAC,home,retail,74.36,5,0.153,bundle,2024-01-26\r\n33967,2408,EMEA,sports,online,69.46,2,0.155,none,2024-10-25\r\n33968,1456,APAC,fashion,retail,49.07,7,0.189,none,2024-05-22\r\n33969,1339,EMEA,fashion,online,53.26,7,0.209,coupon,2024-08-02\r\n33970,2033,LATAM,home,partner,61.43,1,0.198,none,2024-12-28\r\n33971,1321,EMEA,electronics,online,67.21,8,0.162,loyalty,2024-09-16\r\n33972,2056,LATAM,grocery,online,88.99,4,0.022,loyalty,2024-03-11\r\n33973,1122,AMER,home,partner,87.56,4,0.010,none,2024-07-22\r\n33974,1359,LATAM,sports,retail,73.58,8,0.047,none,2024-06-08\r\n33975,2257,AMER,fashion,retail,59.13,3,0.032,coupon,2024-12-22\r\n33976,2398,EMEA,grocery,retail,59.89,4,0.112,none,2024-10-09\r\n33977,2358,AMER,fashion,online,135.11,3,0.208,coupon,2024-11-28\r\n33978,2352,APAC,electronics,retail,47.18,3,0.165,none,2024-12-19\r\n33979,2276,AMER,sports,online,111.24,8,0.216,none,2024-09-02\r\n33980,1969,LATAM,home,retail,39.28,6,0.176,none,2024-04-17\r\n33981,2236,APAC,toys,retail,31.50,2,0.225,none,2024-05-27\r\n33982,1673,AMER,home,retail,54.15,8,0.096,none,2024-02-10\r\n33983,1394,LATAM,home,online,44.78,6,0.020,none,2024-02-18\r\n33984,1659,APAC,home,retail,51.17,3,0.198,coupon,2024-02-19\r\n33985,1836,LATAM,electronics,online,49.66,6,0.096,none,2024-10-06\r\n33986,1385,LATAM,fashion,retail,61.41,3,0.216,bundle,2024-07-18\r\n33987,1062,EMEA,toys,partner,43.00,7,0.038,bundle,2024-09-08\r\n33988,1854,AMER,home,retail,86.38,8,0.175,coupon,2024-04-01\r\n33989,1650,LATAM,electronics,retail,69.98,8,0.210,none,2024-03-13\r\n33990,1130,LATAM,fashion,online,66.07,3,0.223,none,2024-07-28\r\n33991,1117,LATAM,home,mobile,35.36,1,0.107,none,2024-05-20\r\n33992,2497,AMER,fashion,mobile,41.97,8,0.042,bundle,2024-08-07\r\n33993,1113,EMEA,fashion,online,71.15,7,0.214,bundle,2024-04-21\r\n33994,2352,APAC,grocery,retail,78.90,2,0.065,none,2024-06-25\r
\n33995,1415,AMER,grocery,mobile,19.81,8,0.236,loyalty,2024-11-27\r\n33996,1036,EMEA,fashion,online,131.14,5,0.012,none,2024-11-09\r\n33997,1549,APAC,home,online,49.37,5,0.111,none,2024-03-09\r\n33998,2403,LATAM,fashion,online,45.72,4,0.046,none,2024-06-24\r\n33999,1856,EMEA,grocery,online,33.66,6,0.111,none,2024-11-24\r\n34000,1101,AMER,sports,retail,41.03,6,0.186,coupon,2024-07-19\r\n34001,1173,LATAM,fashion,retail,113.50,3,0.153,none,2024-11-17\r\n34002,2485,AMER,home,mobile,53.83,4,0.043,none,2024-02-10\r\n34003,1144,APAC,grocery,retail,48.35,8,0.212,none,2024-08-24\r\n34004,1433,EMEA,fashion,retail,69.93,1,0.233,coupon,2024-04-11\r\n34005,2185,EMEA,fashion,online,55.53,8,0.066,none,2024-03-28\r\n34006,1695,LATAM,grocery,online,13.13,7,0.183,coupon,2024-05-08\r\n34007,1878,EMEA,fashion,retail,31.62,1,0.135,bundle,2024-08-21\r\n34008,1897,AMER,toys,online,89.14,3,0.187,none,2024-05-15\r\n34009,2009,LATAM,grocery,retail,109.98,6,0.001,loyalty,2024-04-04\r\n34010,1780,APAC,grocery,mobile,91.00,6,0.140,coupon,2024-05-25\r\n34011,2135,EMEA,electronics,online,36.61,7,0.158,coupon,2024-12-17\r\n34012,2346,LATAM,home,online,38.77,8,0.102,coupon,2024-09-08\r\n34013,2051,APAC,sports,retail,14.73,5,0.172,none,2024-10-03\r\n34014,1524,LATAM,toys,retail,123.20,8,0.143,none,2024-12-20\r\n34015,1798,AMER,electronics,retail,15.81,6,0.155,none,2024-03-19\r\n34016,1424,APAC,fashion,retail,40.23,5,0.082,none,2024-04-08\r\n34017,2060,LATAM,toys,online,116.25,6,0.127,coupon,2024-03-03\r\n34018,1932,EMEA,toys,online,74.82,7,0.200,coupon,2024-04-22\r\n34019,2106,LATAM,electronics,mobile,137.48,7,0.244,coupon,2024-07-08\r\n34020,2270,APAC,grocery,online,49.58,4,0.224,none,2024-02-09\r\n34021,2049,LATAM,sports,online,65.18,5,0.030,none,2024-10-14\r\n34022,1672,APAC,grocery,online,55.70,8,0.184,none,2024-06-16\r\n34023,1331,AMER,home,online,82.92,4,0.080,coupon,2024-11-13\r\n34024,1549,APAC,electronics,online,89.19,5,0.180,none,2024-06-18\r\n34025,2238,AMER,grocery,retail,47.98,3,0.047,b
undle,2024-11-18\r\n34026,1343,LATAM,sports,retail,174.31,1,0.207,none,2024-01-15\r\n34027,2451,APAC,sports,mobile,35.63,7,0.103,bundle,2024-04-03\r\n34028,1810,LATAM,fashion,mobile,56.98,6,0.085,coupon,2024-11-15\r\n34029,1981,EMEA,fashion,partner,38.63,5,0.182,none,2024-08-19\r\n34030,2005,APAC,electronics,retail,129.44,6,0.210,none,2024-05-27\r\n34031,1165,AMER,toys,mobile,41.77,8,0.160,none,2024-08-25\r\n34032,1542,APAC,toys,online,48.68,5,0.042,loyalty,2024-01-26\r\n34033,1300,EMEA,sports,online,48.35,2,0.039,coupon,2024-08-17\r\n34034,1995,LATAM,grocery,online,150.40,2,0.073,none,2024-06-12\r\n34035,2130,EMEA,toys,online,76.68,7,0.231,loyalty,2024-04-07\r\n34036,1872,LATAM,electronics,mobile,114.33,1,0.171,none,2024-02-16\r\n34037,1682,EMEA,grocery,online,63.06,3,0.059,none,2024-10-14\r\n34038,1556,AMER,electronics,online,160.05,7,0.096,none,2024-03-26\r\n34039,1427,EMEA,fashion,retail,78.04,8,0.040,none,2024-04-07\r\n34040,1149,LATAM,grocery,retail,35.03,5,0.086,loyalty,2024-02-24\r\n34041,1134,APAC,fashion,retail,85.86,6,0.001,none,2024-04-28\r\n34042,2123,AMER,grocery,online,44.45,5,0.048,loyalty,2024-11-11\r\n34043,1679,APAC,toys,retail,74.14,7,0.201,coupon,2024-06-21\r\n34044,2061,EMEA,grocery,online,63.09,4,0.216,none,2024-06-03\r\n34045,1145,AMER,fashion,mobile,78.03,4,0.240,coupon,2024-03-04\r\n34046,2007,LATAM,fashion,online,30.34,1,0.067,coupon,2024-04-11\r\n34047,1581,APAC,electronics,online,77.80,3,0.202,loyalty,2024-10-11\r\n34048,1538,AMER,fashion,retail,92.25,7,0.004,loyalty,2024-08-25\r\n34049,1299,LATAM,grocery,online,39.62,8,0.125,coupon,2024-04-25\r\n34050,2343,EMEA,electronics,retail,45.35,8,0.118,none,2024-03-13\r\n34051,1335,APAC,sports,online,93.02,7,0.044,coupon,2024-06-21\r\n34052,1320,EMEA,grocery,online,39.37,7,0.219,none,2024-07-18\r\n34053,1834,AMER,electronics,retail,22.29,5,0.228,bundle,2024-09-12\r\n34054,2467,AMER,grocery,retail,80.12,1,0.020,none,2024-02-20\r\n34055,1608,AMER,grocery,online,75.64,8,0.091,bundle,2024-07-05\r\n3
4056,2120,AMER,home,online,58.02,3,0.071,loyalty,2024-01-10\r\n34057,1082,EMEA,home,online,85.77,3,0.197,none,2024-01-22\r\n34058,1294,APAC,home,mobile,63.68,8,0.144,loyalty,2024-12-06\r\n34059,1708,LATAM,grocery,retail,89.76,1,0.161,loyalty,2024-01-28\r\n34060,2435,AMER,home,online,104.97,3,0.235,none,2024-11-22\r\n34061,1381,LATAM,sports,mobile,86.99,8,0.073,coupon,2024-02-25\r\n34062,2385,APAC,sports,online,36.05,2,0.012,coupon,2024-06-22\r\n34063,2210,APAC,home,mobile,58.66,3,0.150,coupon,2024-11-07\r\n34064,1148,AMER,fashion,online,84.12,3,0.074,coupon,2024-10-01\r\n34065,2043,EMEA,electronics,retail,47.42,4,0.108,none,2024-06-16\r\n34066,1482,AMER,electronics,mobile,30.18,7,0.151,none,2024-02-09\r\n34067,1526,EMEA,sports,retail,88.19,4,0.171,none,2024-07-05\r\n34068,1930,AMER,sports,partner,39.75,6,0.034,none,2024-07-09\r\n34069,2270,APAC,toys,online,63.32,5,0.225,coupon,2024-10-10\r\n34070,1614,EMEA,grocery,retail,158.18,6,0.204,bundle,2024-09-22\r\n34071,1297,AMER,electronics,online,50.75,5,0.089,coupon,2024-09-18\r\n34072,1604,EMEA,fashion,online,42.64,6,0.128,bundle,2024-05-02\r\n34073,1214,EMEA,toys,online,24.52,1,0.108,coupon,2024-12-19\r\n34074,1156,APAC,grocery,retail,34.40,6,0.125,coupon,2024-06-05\r\n34075,1762,LATAM,fashion,mobile,91.13,7,0.164,none,2024-03-24\r\n34076,1125,LATAM,electronics,online,117.01,5,0.134,loyalty,2024-02-04\r\n34077,2444,EMEA,electronics,retail,35.30,2,0.202,loyalty,2024-11-14\r\n34078,2447,AMER,home,mobile,181.24,2,0.100,coupon,2024-09-19\r\n34079,1536,LATAM,sports,mobile,62.23,2,0.199,none,2024-05-03\r\n34080,1118,AMER,electronics,mobile,247.59,4,0.206,coupon,2024-11-03\r\n34081,1589,AMER,electronics,online,86.56,5,0.091,none,2024-04-15\r\n34082,1784,EMEA,electronics,retail,29.23,7,0.042,coupon,2024-07-25\r\n34083,1793,LATAM,grocery,mobile,36.65,5,0.203,none,2024-09-11\r\n34084,2329,LATAM,sports,retail,56.61,4,0.170,coupon,2024-04-16\r\n34085,1061,APAC,sports,mobile,55.87,3,0.165,bundle,2024-12-12\r\n34086,2470,EMEA,grocer
y,retail,44.89,7,0.156,none,2024-06-23\r\n34087,2129,APAC,sports,retail,77.79,7,0.088,bundle,2024-11-01\r\n34088,2081,APAC,fashion,mobile,45.68,4,0.036,none,2024-05-20\r\n34089,2402,AMER,sports,online,88.45,6,0.217,none,2024-04-18\r\n34090,2418,AMER,home,retail,204.86,1,0.102,coupon,2024-06-13\r\n34091,2345,LATAM,sports,retail,132.20,7,0.098,loyalty,2024-01-09\r\n34092,2173,LATAM,toys,online,41.52,4,0.048,loyalty,2024-06-05\r\n34093,2021,EMEA,fashion,online,86.94,8,0.163,none,2024-08-26\r\n34094,1834,AMER,grocery,retail,296.41,3,0.163,none,2024-06-28\r\n34095,1202,APAC,toys,online,14.56,7,0.039,bundle,2024-02-26\r\n34096,1611,EMEA,fashion,retail,49.65,7,0.190,bundle,2024-11-11\r\n34097,1441,LATAM,grocery,retail,14.46,4,0.138,bundle,2024-09-20\r\n34098,1999,EMEA,electronics,online,33.23,5,0.052,none,2024-04-02\r\n34099,1560,AMER,electronics,online,20.57,1,0.183,bundle,2024-03-02\r\n34100,1729,AMER,electronics,online,33.63,4,0.045,none,2024-10-16\r\n34101,1887,LATAM,sports,online,87.54,5,0.211,none,2024-07-17\r\n34102,2486,APAC,grocery,online,44.46,3,0.178,coupon,2024-07-12\r\n34103,2159,AMER,grocery,retail,47.02,8,0.176,bundle,2024-09-17\r\n34104,1056,LATAM,home,online,68.72,8,0.194,none,2024-11-19\r\n34105,1908,AMER,toys,online,74.70,2,0.083,none,2024-03-18\r\n34106,2270,APAC,grocery,retail,36.21,1,0.220,coupon,2024-04-17\r\n34107,1829,EMEA,electronics,mobile,39.54,8,0.183,bundle,2024-02-23\r\n34108,1339,EMEA,electronics,retail,50.04,2,0.187,none,2024-11-12\r\n34109,1464,APAC,fashion,online,54.85,3,0.045,none,2024-07-28\r\n34110,2359,LATAM,sports,retail,63.34,4,0.086,none,2024-11-14\r\n34111,1411,LATAM,grocery,retail,140.00,3,0.024,coupon,2024-05-01\r\n34112,1588,LATAM,grocery,online,79.29,8,0.213,none,2024-07-01\r\n34113,2227,LATAM,fashion,mobile,60.41,4,0.045,none,2024-11-13\r\n34114,2258,AMER,electronics,online,36.03,6,0.222,none,2024-08-15\r\n34115,1641,EMEA,fashion,online,32.07,6,0.037,none,2024-05-03\r\n34116,1576,EMEA,grocery,retail,96.51,3,0.211,none,2024-02
-20\r\n34117,2231,LATAM,fashion,retail,107.83,2,0.221,none,2024-10-11\r\n34118,1734,AMER,toys,partner,65.13,8,0.244,coupon,2024-04-11\r\n34119,1828,EMEA,home,partner,105.73,4,0.234,bundle,2024-04-07\r\n34120,1765,EMEA,toys,online,47.76,1,0.239,none,2024-07-23\r\n34121,2274,APAC,grocery,retail,62.55,5,0.106,none,2024-01-20\r\n34122,1417,APAC,toys,mobile,34.23,6,0.241,none,2024-06-20\r\n34123,1918,EMEA,fashion,online,28.56,8,0.184,coupon,2024-07-08\r\n34124,1729,AMER,electronics,online,83.40,8,0.088,loyalty,2024-07-03\r\n34125,2217,LATAM,home,retail,115.53,8,0.029,coupon,2024-08-08\r\n34126,1724,LATAM,sports,retail,41.69,4,0.083,none,2024-07-25\r\n34127,2223,EMEA,electronics,online,84.10,1,0.097,loyalty,2024-10-11\r\n34128,2038,LATAM,grocery,online,75.51,3,0.197,none,2024-05-09\r\n34129,2415,AMER,electronics,retail,61.76,3,0.155,none,2024-08-10\r\n34130,2038,LATAM,home,retail,18.99,6,0.102,bundle,2024-11-13\r\n34131,2313,LATAM,sports,retail,62.44,8,0.088,none,2024-02-18\r\n34132,2078,APAC,fashion,retail,77.12,8,0.122,coupon,2024-05-12\r\n34133,1283,APAC,home,online,56.49,7,0.090,none,2024-03-06\r\n34134,1173,LATAM,electronics,retail,40.48,4,0.232,none,2024-07-10\r\n34135,1477,APAC,home,retail,31.24,8,0.202,bundle,2024-02-21\r\n34136,1659,APAC,fashion,retail,47.49,7,0.041,none,2024-05-22\r\n34137,1342,LATAM,sports,mobile,36.86,7,0.160,none,2024-01-08\r\n34138,1872,LATAM,fashion,retail,120.96,2,0.231,coupon,2024-10-21\r\n34139,1525,APAC,grocery,online,67.04,3,0.096,coupon,2024-08-02\r\n34140,1074,LATAM,toys,online,63.45,8,0.136,loyalty,2024-03-02\r\n34141,1169,LATAM,home,online,40.05,7,0.137,bundle,2024-07-14\r\n34142,1390,APAC,fashion,online,108.45,4,0.015,coupon,2024-12-05\r\n34143,1575,APAC,grocery,online,23.51,3,0.167,none,2024-08-10\r\n34144,2422,APAC,grocery,retail,56.30,3,0.236,none,2024-01-10\r\n34145,2309,AMER,grocery,retail,36.66,7,0.111,none,2024-02-19\r\n34146,1877,LATAM,toys,online,50.37,2,0.200,coupon,2024-05-05\r\n34147,1814,AMER,grocery,partner,112.14,1,
0.210,none,2024-02-04\r\n34148,1489,AMER,electronics,partner,48.79,7,0.180,none,2024-01-22\r\n34149,1612,LATAM,electronics,retail,48.14,7,0.127,loyalty,2024-09-16\r\n34150,1607,LATAM,electronics,mobile,65.02,1,0.005,none,2024-11-22\r\n34151,1874,LATAM,electronics,retail,73.19,1,0.102,none,2024-10-14\r\n34152,1121,EMEA,grocery,online,79.66,5,0.212,none,2024-10-16\r\n34153,2233,EMEA,toys,retail,53.82,1,0.210,loyalty,2024-01-20\r\n34154,1883,LATAM,fashion,retail,76.01,6,0.178,coupon,2024-03-08\r\n34155,1788,AMER,fashion,online,25.01,7,0.028,none,2024-10-02\r\n34156,1467,LATAM,grocery,online,26.94,6,0.024,none,2024-09-19\r\n34157,1006,AMER,electronics,retail,51.64,4,0.217,none,2024-01-22\r\n34158,2347,AMER,home,online,44.47,4,0.249,none,2024-07-17\r\n34159,1930,AMER,grocery,online,24.97,5,0.157,bundle,2024-03-15\r\n34160,2234,LATAM,electronics,online,39.19,1,0.131,none,2024-05-24\r\n34161,2388,LATAM,toys,mobile,60.31,6,0.165,bundle,2024-11-03\r\n34162,1232,LATAM,fashion,retail,38.82,5,0.243,none,2024-04-22\r\n34163,2128,EMEA,grocery,retail,66.48,3,0.064,none,2024-04-28\r\n34164,2434,APAC,home,online,79.74,3,0.220,none,2024-12-18\r\n34165,1940,APAC,electronics,retail,46.80,7,0.018,none,2024-12-25\r\n34166,1484,AMER,electronics,mobile,183.86,3,0.147,none,2024-01-17\r\n34167,1625,EMEA,fashion,retail,47.12,5,0.019,none,2024-08-28\r\n34168,1904,APAC,home,online,162.94,6,0.092,none,2024-02-10\r\n34169,2264,LATAM,toys,mobile,43.93,5,0.241,none,2024-08-27\r\n34170,1646,APAC,electronics,partner,133.34,2,0.122,coupon,2024-01-15\r\n34171,2093,LATAM,electronics,retail,93.57,6,0.113,none,2024-10-06\r\n34172,2017,EMEA,home,retail,36.56,2,0.181,none,2024-12-07\r\n34173,1405,LATAM,toys,mobile,65.62,2,0.221,bundle,2024-05-08\r\n34174,1133,EMEA,grocery,online,28.37,5,0.010,coupon,2024-01-23\r\n34175,2384,LATAM,fashion,online,39.37,7,0.212,none,2024-02-26\r\n34176,1108,EMEA,home,retail,141.57,1,0.060,none,2024-12-01\r\n34177,1737,AMER,home,online,70.02,4,0.111,none,2024-06-20\r\n34178,150
9,AMER,home,retail,114.98,7,0.227,bundle,2024-04-06\r\n34179,1825,AMER,fashion,mobile,30.27,8,0.068,loyalty,2024-04-03\r\n34180,1818,AMER,sports,online,40.94,7,0.214,loyalty,2024-04-16\r\n34181,1860,EMEA,home,retail,74.79,1,0.188,none,2024-02-16\r\n34182,1096,EMEA,fashion,online,120.47,6,0.149,none,2024-05-24\r\n34183,1812,EMEA,fashion,online,94.35,7,0.014,none,2024-12-03\r\n34184,1970,LATAM,grocery,online,40.48,8,0.038,none,2024-06-22\r\n34185,1933,EMEA,fashion,online,33.03,7,0.120,none,2024-12-18\r\n34186,1645,EMEA,electronics,online,34.92,7,0.150,none,2024-04-28\r\n34187,1168,APAC,sports,online,57.66,4,0.231,coupon,2024-03-17\r\n34188,1140,LATAM,grocery,online,92.26,8,0.090,bundle,2024-09-22\r\n34189,2458,EMEA,grocery,retail,29.12,6,0.181,none,2024-10-14\r\n34190,2077,APAC,toys,online,45.87,4,0.164,bundle,2024-03-27\r\n34191,1156,APAC,fashion,retail,53.28,1,0.244,bundle,2024-03-03\r\n34192,1085,EMEA,grocery,retail,56.06,5,0.215,none,2024-10-25\r\n34193,2047,AMER,grocery,retail,45.04,3,0.056,none,2024-05-22\r\n34194,2269,EMEA,toys,online,92.54,5,0.170,coupon,2024-12-21\r\n34195,2415,AMER,electronics,online,26.08,1,0.131,none,2024-04-12\r\n34196,2358,AMER,home,online,34.47,1,0.027,none,2024-07-25\r\n34197,1179,APAC,fashion,retail,24.21,1,0.033,none,2024-08-12\r\n34198,1449,EMEA,toys,online,75.98,5,0.194,bundle,2024-09-23\r\n34199,1472,AMER,sports,online,47.70,8,0.048,none,2024-11-23\r\n34200,1221,LATAM,fashion,online,48.48,8,0.173,none,2024-03-25\r\n34201,1175,AMER,sports,retail,136.24,8,0.075,bundle,2024-02-20\r\n34202,2015,APAC,grocery,partner,86.43,8,0.129,loyalty,2024-02-21\r\n34203,1098,APAC,electronics,online,36.63,2,0.155,none,2024-08-22\r\n34204,1544,LATAM,electronics,partner,59.08,1,0.098,none,2024-06-22\r\n34205,1011,APAC,electronics,partner,27.54,7,0.115,loyalty,2024-02-01\r\n34206,1256,LATAM,fashion,partner,26.19,1,0.219,none,2024-03-17\r\n34207,1333,EMEA,sports,online,62.78,5,0.152,coupon,2024-03-03\r\n34208,1825,AMER,grocery,online,26.54,3,0.127,bundl
e,2024-09-01\r\n34209,1376,EMEA,sports,online,53.89,4,0.122,bundle,2024-03-08\r\n34210,1559,EMEA,grocery,retail,40.99,1,0.241,coupon,2024-02-19\r\n34211,1837,LATAM,electronics,online,61.39,2,0.005,none,2024-10-26\r\n34212,1694,APAC,electronics,mobile,44.69,6,0.086,none,2024-06-09\r\n34213,1985,AMER,home,retail,42.89,7,0.105,bundle,2024-12-09\r\n34214,2467,AMER,electronics,retail,52.32,2,0.072,none,2024-05-25\r\n34215,1729,AMER,sports,retail,92.42,4,0.060,none,2024-02-07\r\n34216,2301,EMEA,sports,retail,52.52,5,0.231,bundle,2024-06-27\r\n34217,2033,LATAM,grocery,online,35.67,3,0.031,coupon,2024-04-03\r\n34218,2198,EMEA,grocery,online,76.99,2,0.018,bundle,2024-08-23\r\n34219,1961,EMEA,home,online,45.57,7,0.026,none,2024-06-24\r\n34220,2353,AMER,grocery,online,64.01,3,0.201,coupon,2024-08-23\r\n34221,1746,LATAM,grocery,partner,33.70,5,0.125,none,2024-08-14\r\n34222,2068,LATAM,grocery,mobile,39.82,8,0.168,none,2024-02-26\r\n34223,2246,AMER,sports,retail,99.80,2,0.232,none,2024-05-14\r\n34224,2031,AMER,electronics,retail,80.55,3,0.009,none,2024-12-14\r\n34225,1392,AMER,fashion,online,60.54,8,0.106,none,2024-03-10\r\n34226,2254,LATAM,toys,online,21.47,3,0.126,bundle,2024-09-05\r\n34227,1860,EMEA,grocery,retail,46.79,2,0.189,coupon,2024-02-18\r\n34228,2422,APAC,fashion,online,21.77,8,0.047,none,2024-02-17\r\n34229,1491,EMEA,grocery,online,39.49,6,0.067,none,2024-04-24\r\n34230,1065,AMER,sports,online,33.58,3,0.152,none,2024-05-28\r\n34231,1013,LATAM,sports,online,25.80,6,0.036,none,2024-08-20\r\n34232,1548,EMEA,electronics,online,16.40,3,0.113,coupon,2024-07-09\r\n34233,1368,EMEA,home,retail,31.41,7,0.221,coupon,2024-05-03\r\n34234,1274,LATAM,fashion,retail,60.01,4,0.082,coupon,2024-02-26\r\n34235,1533,APAC,electronics,online,114.11,7,0.118,coupon,2024-11-26\r\n34236,1882,AMER,toys,retail,149.76,1,0.212,none,2024-11-11\r\n34237,2316,EMEA,home,online,69.20,6,0.145,none,2024-04-27\r\n34238,1117,LATAM,grocery,mobile,107.11,4,0.199,loyalty,2024-12-19\r\n34239,1864,EMEA,electro
nics,mobile,35.55,3,0.121,bundle,2024-05-13\r\n34240,1889,APAC,grocery,mobile,70.37,3,0.239,none,2024-09-17\r\n34241,1746,LATAM,sports,mobile,58.16,4,0.130,none,2024-07-26\r\n34242,2417,LATAM,home,retail,32.11,7,0.131,loyalty,2024-07-19\r\n34243,1229,LATAM,electronics,retail,59.60,5,0.238,coupon,2024-01-08\r\n34244,2029,APAC,fashion,mobile,90.54,4,0.003,none,2024-08-07\r\n34245,2200,LATAM,electronics,online,50.84,4,0.248,none,2024-10-17\r\n34246,1082,EMEA,toys,online,54.45,5,0.025,bundle,2024-02-27\r\n34247,1665,AMER,toys,retail,69.80,2,0.243,bundle,2024-09-01\r\n34248,1981,EMEA,home,online,86.34,4,0.142,coupon,2024-09-17\r\n34249,1010,EMEA,home,mobile,106.03,2,0.122,bundle,2024-10-17\r\n34250,2056,LATAM,grocery,mobile,114.04,3,0.118,none,2024-09-26\r\n34251,2360,EMEA,electronics,retail,27.58,5,0.199,coupon,2024-09-26\r\n34252,1048,EMEA,toys,online,61.11,4,0.028,coupon,2024-02-08\r\n34253,1797,LATAM,home,retail,72.75,3,0.100,loyalty,2024-10-13\r\n34254,1032,AMER,electronics,retail,90.40,2,0.194,none,2024-05-01\r\n34255,2291,EMEA,home,retail,70.80,7,0.212,none,2024-04-15\r\n34256,1526,EMEA,home,retail,67.30,5,0.004,bundle,2024-12-06\r\n34257,1072,LATAM,home,online,49.95,8,0.109,none,2024-04-26\r\n34258,1328,APAC,home,retail,76.81,4,0.159,none,2024-02-27\r\n34259,1565,AMER,sports,online,60.93,2,0.095,none,2024-05-05\r\n34260,2079,EMEA,home,retail,119.02,3,0.204,loyalty,2024-11-06\r\n34261,2130,EMEA,electronics,retail,112.70,8,0.083,none,2024-11-14\r\n34262,1855,APAC,grocery,retail,86.81,4,0.024,none,2024-11-15\r\n34263,1634,AMER,grocery,mobile,48.39,4,0.230,none,2024-06-13\r\n34264,2464,LATAM,home,retail,46.24,5,0.108,none,2024-03-28\r\n34265,1390,APAC,sports,retail,102.83,3,0.068,bundle,2024-10-27\r\n34266,1427,EMEA,grocery,mobile,57.01,7,0.247,coupon,2024-09-01\r\n34267,1642,EMEA,fashion,online,67.23,6,0.200,bundle,2024-03-17\r\n34268,1406,LATAM,grocery,online,43.98,5,0.122,none,2024-06-14\r\n34269,1706,EMEA,grocery,retail,46.50,2,0.212,loyalty,2024-03-07\r\n34270,1
437,EMEA,grocery,retail,264.81,1,0.240,bundle,2024-05-12\r\n34271,1631,APAC,fashion,retail,55.10,2,0.044,bundle,2024-12-27\r\n34272,2111,EMEA,sports,online,72.25,3,0.119,none,2024-10-07\r\n34273,1155,EMEA,toys,online,35.38,3,0.169,none,2024-09-17\r\n34274,2087,LATAM,fashion,retail,60.36,1,0.231,coupon,2024-07-16\r\n34275,1190,EMEA,home,online,76.56,2,0.195,loyalty,2024-02-15\r\n34276,1264,APAC,toys,retail,203.67,8,0.118,bundle,2024-08-01\r\n34277,1712,LATAM,fashion,online,46.80,6,0.069,none,2024-05-18\r\n34278,1553,LATAM,fashion,retail,81.51,2,0.116,bundle,2024-04-28\r\n34279,2175,AMER,electronics,online,68.55,2,0.075,none,2024-06-21\r\n34280,2295,EMEA,toys,online,161.54,8,0.030,none,2024-04-04\r\n34281,1916,AMER,sports,partner,68.20,4,0.225,coupon,2024-09-14\r\n34282,1105,AMER,grocery,retail,106.45,3,0.178,coupon,2024-05-05\r\n34283,2390,AMER,fashion,retail,79.92,7,0.042,loyalty,2024-08-27\r\n34284,1792,AMER,fashion,online,50.74,8,0.101,coupon,2024-12-16\r\n34285,1457,EMEA,home,retail,96.63,8,0.052,coupon,2024-05-24\r\n34286,1613,EMEA,electronics,online,27.97,8,0.023,none,2024-04-07\r\n34287,1711,APAC,electronics,retail,42.83,3,0.169,none,2024-07-04\r\n34288,2214,AMER,fashion,online,58.38,8,0.106,bundle,2024-03-07\r\n34289,1979,APAC,sports,mobile,73.12,7,0.213,none,2024-01-04\r\n34290,1159,LATAM,fashion,retail,106.15,5,0.044,none,2024-09-18\r\n34291,1071,AMER,fashion,online,29.73,1,0.146,none,2024-04-09\r\n34292,1004,LATAM,sports,mobile,38.66,2,0.234,loyalty,2024-05-15\r\n34293,2070,APAC,grocery,online,39.74,4,0.151,none,2024-01-05\r\n34294,1265,APAC,grocery,mobile,101.75,5,0.198,bundle,2024-11-27\r\n34295,1148,AMER,grocery,retail,31.05,6,0.010,coupon,2024-02-19\r\n34296,2292,EMEA,electronics,retail,111.64,7,0.012,none,2024-04-18\r\n34297,1576,EMEA,grocery,online,61.83,3,0.212,coupon,2024-03-13\r\n34298,1799,EMEA,home,online,66.22,2,0.230,none,2024-01-12\r\n34299,1520,APAC,home,online,42.02,2,0.235,none,2024-09-03\r\n34300,1394,LATAM,sports,retail,88.40,8,0.026,non
e,2024-08-20\r\n34301,1420,APAC,electronics,online,25.98,6,0.151,loyalty,2024-03-24\r\n34302,1142,EMEA,electronics,retail,80.96,4,0.221,none,2024-03-13\r\n34303,2398,EMEA,home,mobile,53.25,4,0.013,bundle,2024-10-02\r\n34304,1214,EMEA,toys,online,50.80,6,0.191,none,2024-11-12\r\n34305,2228,EMEA,sports,retail,55.41,3,0.213,loyalty,2024-11-09\r\n34306,1756,EMEA,toys,online,62.03,4,0.229,coupon,2024-08-12\r\n34307,1766,AMER,grocery,online,34.90,4,0.151,coupon,2024-02-01\r\n34308,2187,EMEA,toys,online,45.97,2,0.085,none,2024-10-09\r\n34309,1118,AMER,home,online,71.51,2,0.205,loyalty,2024-09-22\r\n34310,2391,EMEA,fashion,online,66.78,1,0.137,none,2024-08-03\r\n34311,1513,APAC,home,online,58.75,6,0.174,coupon,2024-06-28\r\n34312,1452,LATAM,electronics,retail,41.60,8,0.058,none,2024-12-24\r\n34313,1101,AMER,grocery,online,45.89,3,0.033,coupon,2024-11-13\r\n34314,1515,EMEA,home,retail,87.21,4,0.002,none,2024-12-26\r\n34315,2361,EMEA,electronics,mobile,66.44,8,0.171,none,2024-08-15\r\n34316,1550,APAC,electronics,mobile,118.07,2,0.241,none,2024-11-02\r\n34317,1956,APAC,grocery,retail,43.58,1,0.143,loyalty,2024-12-08\r\n34318,2355,EMEA,home,retail,103.38,5,0.028,coupon,2024-03-01\r\n34319,1984,LATAM,grocery,online,35.50,4,0.147,loyalty,2024-05-16\r\n34320,1265,APAC,grocery,online,31.01,8,0.100,bundle,2024-05-04\r\n34321,1605,APAC,grocery,mobile,57.51,5,0.050,none,2024-04-19\r\n34322,2496,EMEA,fashion,retail,44.00,5,0.121,loyalty,2024-05-13\r\n34323,1352,AMER,home,online,66.24,5,0.181,none,2024-04-01\r\n34324,2274,APAC,fashion,partner,80.64,2,0.079,bundle,2024-07-01\r\n34325,2138,APAC,grocery,retail,74.61,1,0.219,none,2024-02-23\r\n34326,1333,EMEA,sports,retail,108.91,7,0.114,coupon,2024-04-20\r\n34327,1296,LATAM,electronics,retail,72.33,3,0.136,coupon,2024-10-04\r\n34328,1863,EMEA,fashion,online,81.07,3,0.039,none,2024-12-27\r\n34329,1816,EMEA,grocery,online,71.84,7,0.174,none,2024-10-02\r\n34330,1084,AMER,toys,online,20.72,7,0.049,coupon,2024-12-27\r\n34331,2446,LATAM,grocery,
retail,29.17,1,0.227,none,2024-02-10\r\n34332,1273,AMER,home,retail,61.12,8,0.124,coupon,2024-04-06\r\n34333,2495,EMEA,electronics,online,30.41,7,0.240,none,2024-01-03\r\n34334,2217,LATAM,sports,partner,52.16,2,0.199,none,2024-02-22\r\n34335,1195,AMER,toys,retail,132.75,3,0.024,none,2024-05-21\r\n34336,1503,APAC,electronics,online,57.18,8,0.147,none,2024-07-04\r\n34337,1236,AMER,fashion,online,95.45,6,0.023,none,2024-07-21\r\n34338,2324,AMER,electronics,retail,74.95,5,0.157,loyalty,2024-03-22\r\n34339,2189,LATAM,electronics,online,37.61,7,0.092,loyalty,2024-05-18\r\n34340,1596,EMEA,toys,mobile,59.05,1,0.057,loyalty,2024-02-12\r\n34341,1323,EMEA,electronics,online,58.80,8,0.185,loyalty,2024-06-20\r\n34342,1723,LATAM,toys,retail,34.15,7,0.094,none,2024-03-19\r\n34343,1973,EMEA,electronics,online,72.84,2,0.122,none,2024-03-24\r\n34344,1112,APAC,sports,retail,53.19,4,0.152,bundle,2024-04-17\r\n34345,2144,EMEA,sports,mobile,61.49,2,0.177,none,2024-02-26\r\n34346,2288,AMER,toys,retail,29.13,8,0.237,bundle,2024-07-19\r\n34347,2389,LATAM,electronics,mobile,32.45,7,0.037,none,2024-03-16\r\n34348,1054,EMEA,home,online,104.72,7,0.234,coupon,2024-07-04\r\n34349,1611,EMEA,home,retail,22.90,3,0.238,none,2024-12-05\r\n34350,1588,LATAM,sports,online,83.74,4,0.030,none,2024-04-13\r\n34351,1762,LATAM,grocery,partner,46.25,1,0.229,none,2024-10-11\r\n34352,2430,APAC,grocery,online,62.55,8,0.209,none,2024-05-22\r\n34353,1604,EMEA,sports,online,76.35,3,0.016,none,2024-09-17\r\n34354,2363,AMER,toys,mobile,51.95,3,0.095,none,2024-05-05\r\n34355,1784,EMEA,grocery,retail,97.64,3,0.007,none,2024-08-23\r\n34356,1896,EMEA,toys,partner,57.10,2,0.246,coupon,2024-01-12\r\n34357,1921,LATAM,grocery,retail,100.50,1,0.099,coupon,2024-08-25\r\n34358,1231,AMER,home,mobile,76.87,7,0.194,none,2024-05-04\r\n34359,2034,LATAM,sports,mobile,77.30,3,0.214,coupon,2024-09-03\r\n34360,1058,LATAM,fashion,retail,95.80,5,0.125,none,2024-11-08\r\n34361,1827,EMEA,grocery,retail,92.51,3,0.018,loyalty,2024-05-06\r\n3436
2,2471,APAC,grocery,retail,38.05,2,0.082,none,2024-07-18\r\n34363,1109,APAC,fashion,retail,27.89,5,0.141,none,2024-08-14\r\n34364,1101,AMER,grocery,online,81.92,2,0.215,none,2024-04-22\r\n34365,1270,LATAM,home,online,45.66,6,0.120,none,2024-01-25\r\n34366,1172,APAC,electronics,online,65.50,3,0.233,coupon,2024-07-12\r\n34367,2131,APAC,toys,online,51.11,8,0.026,none,2024-04-04\r\n34368,1865,LATAM,electronics,retail,123.89,2,0.173,none,2024-12-28\r\n34369,1505,EMEA,fashion,online,26.23,3,0.097,none,2024-10-02\r\n34370,1146,LATAM,sports,online,28.99,6,0.010,coupon,2024-06-22\r\n34371,1374,APAC,electronics,online,72.23,8,0.183,bundle,2024-01-19\r\n34372,1537,LATAM,fashion,online,122.69,6,0.069,none,2024-04-05\r\n34373,1860,EMEA,home,partner,137.27,1,0.119,coupon,2024-05-12\r\n34374,2323,AMER,electronics,partner,86.00,8,0.205,none,2024-08-16\r\n34375,1676,LATAM,home,retail,125.81,5,0.175,none,2024-10-05\r\n34376,1061,APAC,fashion,online,108.44,6,0.068,none,2024-02-05\r\n34377,1683,AMER,home,online,49.31,7,0.245,none,2024-12-05\r\n34378,2367,AMER,toys,online,38.68,8,0.197,coupon,2024-01-07\r\n34379,1939,LATAM,grocery,online,42.40,6,0.091,none,2024-01-15\r\n34380,1190,EMEA,fashion,retail,24.78,2,0.108,coupon,2024-09-23\r\n34381,2453,AMER,fashion,online,90.42,6,0.008,loyalty,2024-06-12\r\n34382,1160,LATAM,electronics,retail,35.12,6,0.047,loyalty,2024-04-06\r\n34383,1150,LATAM,electronics,online,89.55,5,0.107,none,2024-05-25\r\n34384,2334,LATAM,grocery,partner,119.66,6,0.132,coupon,2024-07-25\r\n34385,1509,AMER,toys,retail,103.98,5,0.052,bundle,2024-08-06\r\n34386,1320,EMEA,sports,online,70.00,4,0.103,none,2024-08-16\r\n34387,1036,EMEA,grocery,online,68.91,5,0.123,coupon,2024-10-23\r\n34388,1896,EMEA,fashion,online,194.19,6,0.226,none,2024-12-05\r\n34389,2020,AMER,fashion,retail,52.71,5,0.171,none,2024-07-17\r\n34390,1316,APAC,grocery,mobile,44.55,8,0.215,none,2024-05-17\r\n34391,1509,AMER,sports,mobile,24.90,8,0.148,bundle,2024-06-04\r\n34392,2348,EMEA,sports,retail,84.56,7,
0.240,bundle,2024-02-14\r\n34393,1421,APAC,sports,online,87.68,1,0.186,none,2024-01-02\r\n34394,1458,APAC,grocery,online,85.17,3,0.190,coupon,2024-12-04\r\n34395,1527,AMER,home,online,43.90,8,0.074,bundle,2024-07-11\r\n34396,1213,EMEA,grocery,online,78.73,3,0.068,bundle,2024-01-13\r\n34397,2271,LATAM,grocery,mobile,72.62,4,0.096,bundle,2024-04-28\r\n34398,2271,LATAM,toys,mobile,46.13,5,0.180,none,2024-01-09\r\n34399,1018,APAC,toys,online,55.16,1,0.014,none,2024-06-23\r\n34400,1697,APAC,sports,online,40.34,2,0.198,bundle,2024-01-26\r\n34401,2283,AMER,grocery,online,38.63,6,0.028,coupon,2024-11-16\r\n34402,2162,EMEA,electronics,online,125.84,4,0.100,none,2024-02-01\r\n34403,1140,LATAM,sports,retail,63.75,8,0.024,loyalty,2024-06-20\r\n34404,1236,AMER,grocery,retail,40.36,3,0.222,none,2024-02-13\r\n34405,1453,APAC,home,online,46.40,1,0.117,coupon,2024-02-01\r\n34406,2269,EMEA,toys,online,80.59,2,0.033,bundle,2024-11-18\r\n34407,1423,EMEA,electronics,online,50.70,8,0.105,none,2024-04-04\r\n34408,1609,LATAM,home,retail,68.80,7,0.077,coupon,2024-02-23\r\n34409,1691,LATAM,home,mobile,57.74,7,0.102,none,2024-08-12\r\n34410,1517,AMER,home,partner,67.24,2,0.237,none,2024-04-06\r\n34411,1722,EMEA,electronics,retail,33.62,1,0.139,loyalty,2024-08-28\r\n34412,1848,EMEA,electronics,retail,95.47,2,0.179,loyalty,2024-04-18\r\n34413,2101,APAC,grocery,online,55.56,7,0.049,none,2024-01-04\r\n34414,2299,EMEA,toys,online,97.26,2,0.196,none,2024-02-13\r\n34415,1952,EMEA,electronics,online,40.02,4,0.150,none,2024-12-10\r\n34416,1424,APAC,grocery,online,31.52,8,0.171,coupon,2024-11-01\r\n34417,1847,LATAM,sports,online,40.62,8,0.102,none,2024-12-02\r\n34418,1484,AMER,fashion,partner,68.77,7,0.049,loyalty,2024-02-04\r\n34419,1289,LATAM,home,retail,70.05,3,0.180,coupon,2024-02-17\r\n34420,1935,EMEA,sports,retail,68.73,6,0.143,none,2024-11-05\r\n34421,1523,LATAM,electronics,mobile,42.33,5,0.019,coupon,2024-10-02\r\n34422,2007,LATAM,electronics,online,44.02,7,0.129,none,2024-05-27\r\n34423,1758,A
MER,fashion,mobile,48.24,6,0.185,none,2024-01-28\r\n34424,1044,EMEA,grocery,online,63.55,6,0.168,none,2024-10-28\r\n34425,2128,EMEA,toys,retail,63.79,3,0.106,loyalty,2024-01-03\r\n34426,2103,LATAM,home,retail,46.86,4,0.034,none,2024-02-16\r\n34427,2389,LATAM,grocery,online,85.65,1,0.110,coupon,2024-02-09\r\n34428,1614,EMEA,electronics,mobile,51.11,1,0.153,none,2024-01-24\r\n34429,1552,EMEA,home,retail,38.05,6,0.044,none,2024-07-24\r\n34430,1597,APAC,toys,mobile,85.03,3,0.097,none,2024-09-18\r\n34431,1827,EMEA,electronics,retail,28.61,5,0.144,loyalty,2024-07-04\r\n34432,1628,EMEA,sports,online,38.64,3,0.177,none,2024-12-12\r\n34433,2255,AMER,grocery,retail,66.64,3,0.224,none,2024-11-02\r\n34434,1311,APAC,electronics,partner,90.76,1,0.203,none,2024-03-26\r\n34435,2154,APAC,electronics,online,80.86,2,0.022,none,2024-01-18\r\n34436,1052,LATAM,sports,mobile,53.44,2,0.123,none,2024-10-04\r\n34437,2236,APAC,grocery,mobile,119.20,4,0.027,bundle,2024-06-23\r\n34438,1398,APAC,grocery,online,87.29,1,0.223,coupon,2024-08-12\r\n34439,1208,AMER,sports,online,75.45,6,0.064,bundle,2024-02-02\r\n34440,2447,AMER,fashion,online,146.04,2,0.025,none,2024-12-05\r\n34441,2101,APAC,grocery,online,45.39,8,0.113,none,2024-03-08\r\n34442,2209,AMER,sports,retail,93.66,7,0.186,none,2024-11-12\r\n34443,1204,AMER,grocery,retail,72.93,5,0.159,coupon,2024-10-07\r\n34444,1085,EMEA,electronics,retail,33.71,4,0.082,coupon,2024-09-28\r\n34445,1514,LATAM,grocery,retail,70.71,1,0.144,none,2024-10-18\r\n34446,1821,LATAM,grocery,retail,43.65,5,0.127,coupon,2024-12-05\r\n34447,2425,APAC,toys,online,85.28,3,0.074,bundle,2024-02-18\r\n34448,2384,LATAM,fashion,online,36.35,4,0.125,none,2024-06-12\r\n34449,1441,LATAM,fashion,retail,196.11,5,0.228,none,2024-04-22\r\n34450,2391,EMEA,electronics,online,90.17,6,0.133,bundle,2024-05-08\r\n34451,2186,LATAM,fashion,online,63.07,7,0.179,coupon,2024-05-17\r\n34452,1573,AMER,home,mobile,31.11,2,0.097,none,2024-06-06\r\n34453,2397,LATAM,grocery,retail,70.34,3,0.188,none,2
024-05-17\r\n34454,1094,LATAM,toys,mobile,64.20,7,0.013,none,2024-06-20\r\n34455,1173,LATAM,toys,retail,76.11,8,0.148,none,2024-12-02\r\n34456,1157,LATAM,electronics,retail,45.36,6,0.047,none,2024-04-25\r\n34457,2077,APAC,toys,online,34.72,1,0.160,coupon,2024-10-20\r\n34458,2494,AMER,electronics,retail,115.20,2,0.148,none,2024-10-04\r\n34459,2164,AMER,electronics,online,42.49,2,0.146,bundle,2024-01-21\r\n34460,1427,EMEA,grocery,retail,56.57,8,0.100,bundle,2024-01-10\r\n34461,1027,APAC,sports,retail,70.18,4,0.123,none,2024-12-09\r\n34462,2090,AMER,home,online,51.98,4,0.218,bundle,2024-12-17\r\n34463,1352,AMER,home,mobile,83.63,3,0.036,none,2024-05-01\r\n34464,1270,LATAM,home,online,58.76,5,0.011,loyalty,2024-07-16\r\n34465,2178,AMER,grocery,online,36.12,7,0.042,none,2024-07-06\r\n34466,1389,LATAM,fashion,mobile,33.85,8,0.152,none,2024-05-17\r\n34467,2193,AMER,electronics,online,110.57,6,0.172,none,2024-06-21\r\n34468,2382,LATAM,home,online,132.49,2,0.042,coupon,2024-06-23\r\n34469,2326,LATAM,home,online,40.86,7,0.189,bundle,2024-02-13\r\n34470,1468,AMER,sports,online,54.46,6,0.022,none,2024-02-07\r\n34471,2054,AMER,sports,online,28.53,7,0.203,none,2024-03-04\r\n34472,1356,LATAM,electronics,online,76.60,7,0.022,bundle,2024-09-23\r\n34473,1569,APAC,grocery,online,32.03,1,0.102,coupon,2024-12-12\r\n34474,2316,EMEA,home,retail,47.71,5,0.009,bundle,2024-10-22\r\n34475,1756,EMEA,grocery,online,99.41,6,0.170,bundle,2024-04-02\r\n34476,1492,APAC,grocery,online,29.98,3,0.125,none,2024-02-28\r\n34477,1832,APAC,electronics,online,53.99,4,0.113,none,2024-02-24\r\n34478,2326,LATAM,grocery,partner,30.69,1,0.153,none,2024-05-02\r\n34479,1772,EMEA,grocery,retail,106.76,2,0.062,none,2024-07-20\r\n34480,1325,APAC,home,online,22.39,5,0.059,none,2024-09-11\r\n34481,1128,LATAM,grocery,retail,97.15,2,0.006,none,2024-09-03\r\n34482,2329,LATAM,electronics,retail,86.63,5,0.113,none,2024-06-16\r\n34483,2406,EMEA,fashion,partner,113.92,5,0.074,coupon,2024-06-20\r\n34484,1250,APAC,toys,online,5
7.34,2,0.180,none,2024-06-21\r\n34485,1464,APAC,grocery,online,57.04,6,0.165,bundle,2024-09-04\r\n34486,1060,LATAM,grocery,online,45.94,1,0.099,none,2024-02-17\r\n34487,1383,AMER,electronics,retail,95.61,5,0.173,coupon,2024-04-17\r\n34488,1743,LATAM,fashion,online,44.66,4,0.084,bundle,2024-12-22\r\n34489,2297,EMEA,electronics,retail,27.01,3,0.239,none,2024-11-18\r\n34490,1487,AMER,fashion,online,171.33,6,0.109,loyalty,2024-05-01\r\n34491,1461,LATAM,fashion,retail,34.12,8,0.203,bundle,2024-04-06\r\n34492,1952,EMEA,sports,retail,121.07,6,0.087,loyalty,2024-03-17\r\n34493,1673,AMER,electronics,online,69.06,2,0.207,none,2024-05-01\r\n34494,1331,AMER,electronics,mobile,22.79,6,0.154,none,2024-09-18\r\n34495,1230,EMEA,sports,retail,190.33,1,0.228,coupon,2024-08-07\r\n34496,1581,APAC,grocery,online,48.38,5,0.235,coupon,2024-12-09\r\n34497,1841,AMER,fashion,retail,95.75,6,0.141,coupon,2024-01-15\r\n34498,1046,EMEA,fashion,mobile,43.83,5,0.237,coupon,2024-10-24\r\n34499,2378,LATAM,home,partner,155.69,3,0.163,coupon,2024-12-01\r\n34500,1676,LATAM,home,retail,76.24,8,0.190,bundle,2024-01-05\r\n34501,1940,APAC,home,retail,55.53,2,0.156,none,2024-02-18\r\n34502,1808,APAC,home,online,40.10,6,0.116,coupon,2024-04-18\r\n34503,2265,APAC,electronics,online,44.73,2,0.009,coupon,2024-07-18\r\n34504,1029,EMEA,fashion,retail,68.47,7,0.223,loyalty,2024-03-05\r\n34505,1297,AMER,toys,retail,89.08,3,0.118,none,2024-05-10\r\n34506,1909,APAC,sports,retail,53.55,5,0.012,none,2024-04-02\r\n34507,2377,AMER,fashion,online,52.83,6,0.031,none,2024-12-26\r\n34508,1402,EMEA,grocery,retail,50.57,2,0.151,none,2024-02-13\r\n34509,1307,AMER,electronics,online,68.39,7,0.087,coupon,2024-11-28\r\n34510,2371,LATAM,home,online,52.89,7,0.035,none,2024-01-20\r\n34511,1863,EMEA,home,mobile,44.29,3,0.019,none,2024-09-24\r\n34512,1111,APAC,sports,retail,60.87,5,0.145,bundle,2024-01-06\r\n34513,1124,AMER,home,online,52.67,5,0.222,none,2024-12-20\r\n34514,2468,EMEA,grocery,online,87.83,2,0.245,none,2024-09-04\r\n3451
5,1738,LATAM,home,online,39.14,8,0.006,coupon,2024-11-27\r\n34516,1015,AMER,grocery,online,126.68,6,0.250,bundle,2024-07-21\r\n34517,1283,APAC,fashion,online,26.00,5,0.078,none,2024-09-24\r\n34518,1941,AMER,electronics,retail,90.29,7,0.094,loyalty,2024-01-22\r\n34519,1519,APAC,grocery,online,86.46,6,0.061,coupon,2024-07-20\r\n34520,1203,AMER,home,retail,52.30,8,0.134,none,2024-08-02\r\n34521,1652,APAC,grocery,retail,66.06,7,0.087,none,2024-01-25\r\n34522,2438,AMER,fashion,online,31.38,8,0.036,none,2024-03-11\r\n34523,2206,AMER,toys,mobile,84.27,4,0.228,none,2024-09-20\r\n34524,1257,APAC,fashion,mobile,50.93,8,0.241,none,2024-03-07\r\n34525,1778,LATAM,electronics,mobile,48.68,2,0.003,none,2024-08-19\r\n34526,2076,AMER,home,online,47.12,1,0.247,none,2024-11-22\r\n34527,1742,AMER,toys,online,22.54,1,0.011,none,2024-03-13\r\n34528,2413,AMER,grocery,retail,36.51,4,0.087,bundle,2024-01-12\r\n34529,1607,LATAM,sports,online,35.55,4,0.229,bundle,2024-08-02\r\n34530,1662,LATAM,grocery,online,76.94,7,0.128,none,2024-01-03\r\n34531,1459,LATAM,grocery,online,67.97,2,0.227,bundle,2024-11-26\r\n34532,1504,AMER,grocery,retail,29.30,5,0.217,coupon,2024-04-21\r\n34533,1408,AMER,home,online,50.12,5,0.069,coupon,2024-03-16\r\n34534,1580,AMER,home,online,65.43,1,0.082,none,2024-09-14\r\n34535,1816,EMEA,fashion,retail,46.66,3,0.011,coupon,2024-07-23\r\n34536,1038,APAC,fashion,retail,78.95,8,0.245,coupon,2024-10-15\r\n34537,2057,APAC,fashion,online,72.96,7,0.043,loyalty,2024-03-17\r\n34538,2353,AMER,fashion,retail,107.95,5,0.150,loyalty,2024-08-04\r\n34539,1464,APAC,grocery,online,100.38,5,0.038,none,2024-09-21\r\n34540,1829,EMEA,sports,mobile,39.36,4,0.158,bundle,2024-02-21\r\n34541,1016,AMER,sports,retail,109.01,2,0.232,none,2024-01-01\r\n34542,1748,APAC,grocery,retail,60.65,1,0.156,none,2024-09-10\r\n34543,2178,AMER,home,online,107.43,8,0.004,none,2024-11-18\r\n34544,1305,EMEA,home,online,30.41,4,0.185,none,2024-07-09\r\n34545,1626,EMEA,home,online,50.37,6,0.077,none,2024-12-06\r\n3454
6,1154,LATAM,toys,retail,126.27,2,0.103,bundle,2024-07-20\r\n34547,2164,AMER,fashion,online,55.27,6,0.240,coupon,2024-03-25\r\n34548,1833,EMEA,grocery,retail,46.61,6,0.116,coupon,2024-12-08\r\n34549,1165,AMER,home,mobile,242.40,5,0.019,none,2024-05-13\r\n34550,2357,EMEA,grocery,partner,28.69,4,0.146,loyalty,2024-04-01\r\n34551,1378,APAC,home,online,36.68,8,0.129,loyalty,2024-04-09\r\n34552,1515,EMEA,grocery,online,120.80,1,0.047,none,2024-01-08\r\n34553,1391,LATAM,grocery,mobile,29.14,5,0.160,none,2024-04-08\r\n34554,1308,EMEA,sports,online,36.53,1,0.109,none,2024-09-02\r\n34555,1584,EMEA,electronics,retail,137.83,4,0.149,none,2024-04-06\r\n34556,2095,EMEA,home,online,102.06,5,0.163,none,2024-07-28\r\n34557,1499,EMEA,grocery,online,72.67,6,0.095,coupon,2024-04-11\r\n34558,2145,AMER,grocery,online,34.97,3,0.101,none,2024-01-02\r\n34559,2348,EMEA,fashion,online,26.81,4,0.094,none,2024-08-03\r\n34560,1765,EMEA,home,mobile,63.71,8,0.168,none,2024-02-28\r\n34561,2271,LATAM,electronics,retail,52.89,6,0.089,bundle,2024-04-21\r\n34562,1681,LATAM,fashion,mobile,100.34,1,0.158,none,2024-11-04\r\n34563,1402,EMEA,grocery,retail,39.94,7,0.105,none,2024-12-18\r\n34564,1974,EMEA,grocery,online,69.26,8,0.242,coupon,2024-08-15\r\n34565,1920,LATAM,toys,retail,67.37,7,0.005,coupon,2024-02-22\r\n34566,1701,LATAM,toys,retail,43.92,8,0.163,none,2024-11-22\r\n34567,1920,LATAM,grocery,retail,29.25,7,0.236,bundle,2024-01-13\r\n34568,2467,AMER,grocery,mobile,37.80,8,0.127,coupon,2024-06-15\r\n34569,1946,AMER,home,mobile,26.66,3,0.082,none,2024-06-26\r\n34570,1003,APAC,electronics,online,49.01,4,0.093,bundle,2024-06-10\r\n34571,1114,APAC,home,retail,86.93,1,0.025,coupon,2024-01-20\r\n34572,1337,APAC,fashion,mobile,65.87,4,0.048,coupon,2024-09-25\r\n34573,1103,EMEA,toys,online,59.13,2,0.005,none,2024-08-13\r\n34574,1087,AMER,electronics,retail,31.00,3,0.171,loyalty,2024-09-08\r\n34575,2091,LATAM,electronics,retail,37.98,2,0.193,none,2024-04-25\r\n34576,2213,APAC,home,online,87.92,3,0.025,none,
2024-11-27\r\n34577,2076,AMER,fashion,online,71.19,8,0.067,loyalty,2024-04-24\r\n34578,2128,EMEA,grocery,partner,60.32,7,0.060,none,2024-04-21\r\n34579,2096,LATAM,home,online,93.30,7,0.204,none,2024-11-25\r\n34580,2017,EMEA,sports,retail,62.29,4,0.040,bundle,2024-08-16\r\n34581,1859,AMER,electronics,retail,44.36,2,0.052,none,2024-08-26\r\n34582,1610,LATAM,grocery,online,78.73,6,0.080,bundle,2024-09-08\r\n34583,2260,EMEA,fashion,mobile,22.47,7,0.179,loyalty,2024-02-14\r\n34584,1950,LATAM,sports,online,56.74,7,0.073,coupon,2024-12-10\r\n34585,1322,AMER,electronics,online,67.70,8,0.221,none,2024-10-23\r\n34586,2303,EMEA,fashion,retail,41.62,1,0.221,none,2024-10-14\r\n34587,2470,EMEA,sports,partner,45.06,3,0.205,none,2024-11-06\r\n34588,2089,EMEA,electronics,retail,71.93,6,0.046,bundle,2024-01-18\r\n34589,1553,LATAM,electronics,online,72.36,1,0.040,none,2024-09-14\r\n34590,1222,AMER,electronics,retail,51.84,4,0.171,bundle,2024-09-26\r\n34591,2466,APAC,electronics,mobile,50.69,7,0.206,none,2024-06-23\r\n34592,1219,LATAM,grocery,online,19.03,5,0.217,none,2024-04-20\r\n34593,1914,EMEA,sports,mobile,52.18,4,0.224,coupon,2024-08-04\r\n34594,1568,AMER,grocery,retail,53.52,6,0.060,none,2024-02-15\r\n34595,2084,LATAM,fashion,retail,31.23,8,0.026,coupon,2024-11-20\r\n34596,2178,AMER,toys,online,47.34,1,0.039,none,2024-09-15\r\n34597,2141,AMER,home,mobile,130.50,5,0.242,none,2024-04-28\r\n34598,1528,EMEA,sports,mobile,82.40,3,0.009,none,2024-08-01\r\n34599,1017,AMER,fashion,online,26.00,7,0.142,bundle,2024-03-28\r\n34600,2112,LATAM,fashion,online,87.11,3,0.198,none,2024-07-08\r\n34601,2143,AMER,fashion,mobile,91.03,6,0.132,none,2024-02-19\r\n34602,2099,AMER,fashion,online,49.54,5,0.178,none,2024-04-05\r\n34603,1141,AMER,grocery,online,61.65,5,0.089,none,2024-08-22\r\n34604,1887,LATAM,electronics,online,36.40,7,0.068,bundle,2024-03-23\r\n34605,2374,LATAM,home,online,110.76,5,0.174,none,2024-08-02\r\n34606,1180,AMER,home,retail,50.68,5,0.102,none,2024-06-24\r\n34607,2304,LATAM,elec
tronics,retail,26.58,1,0.243,none,2024-01-22\r\n34608,1932,EMEA,home,retail,61.86,2,0.185,loyalty,2024-07-15\r\n34609,2167,APAC,electronics,online,39.96,2,0.020,none,2024-02-20\r\n34610,1756,EMEA,home,online,52.32,6,0.065,loyalty,2024-10-22\r\n34611,2366,APAC,toys,online,58.79,4,0.129,coupon,2024-07-26\r\n34612,1501,AMER,home,retail,68.27,4,0.241,none,2024-10-08\r\n34613,1005,LATAM,toys,retail,99.31,3,0.244,coupon,2024-01-15\r\n34614,2108,AMER,home,online,59.75,8,0.203,none,2024-01-27\r\n34615,1670,EMEA,electronics,online,82.81,6,0.187,coupon,2024-05-08\r\n34616,1794,AMER,fashion,mobile,107.13,7,0.104,none,2024-04-10\r\n34617,1577,AMER,home,retail,43.72,7,0.246,coupon,2024-06-20\r\n34618,1660,AMER,toys,retail,60.65,3,0.130,coupon,2024-10-16\r\n34619,2263,AMER,electronics,retail,35.26,7,0.223,bundle,2024-12-19\r\n34620,1988,AMER,grocery,partner,39.17,2,0.168,none,2024-01-21\r\n34621,1591,APAC,home,online,50.25,4,0.183,bundle,2024-10-25\r\n34622,1906,APAC,grocery,retail,55.47,7,0.202,none,2024-12-08\r\n34623,1312,EMEA,fashion,online,37.65,7,0.202,none,2024-01-10\r\n34624,1276,AMER,electronics,online,48.46,3,0.069,none,2024-05-18\r\n34625,1130,LATAM,fashion,retail,44.76,3,0.023,coupon,2024-02-18\r\n34626,1300,EMEA,grocery,online,50.44,7,0.088,none,2024-06-06\r\n34627,1761,EMEA,grocery,online,30.63,4,0.041,loyalty,2024-10-01\r\n34628,1794,AMER,electronics,online,93.70,1,0.058,none,2024-07-01\r\n34629,1474,LATAM,electronics,mobile,66.59,4,0.005,bundle,2024-04-27\r\n34630,1708,LATAM,fashion,online,67.76,1,0.199,none,2024-02-20\r\n34631,1045,LATAM,grocery,online,66.70,2,0.202,none,2024-12-20\r\n34632,1575,APAC,sports,online,43.24,8,0.246,none,2024-12-09\r\n34633,1081,AMER,grocery,online,132.66,8,0.161,none,2024-02-18\r\n34634,2065,EMEA,home,retail,58.50,8,0.052,none,2024-07-22\r\n34635,2046,APAC,sports,online,26.76,3,0.116,bundle,2024-05-19\r\n34636,2488,EMEA,sports,retail,78.44,2,0.102,none,2024-06-02\r\n34637,1613,EMEA,sports,retail,107.64,1,0.208,none,2024-11-09\r\n3463
8,1936,EMEA,electronics,partner,32.80,3,0.111,coupon,2024-09-07\r\n34639,1123,LATAM,grocery,online,53.02,8,0.027,none,2024-01-24\r\n34640,1990,EMEA,fashion,retail,55.27,3,0.082,none,2024-07-20\r\n34641,1517,AMER,grocery,retail,64.85,5,0.111,loyalty,2024-06-22\r\n34642,2405,AMER,grocery,mobile,21.79,5,0.218,bundle,2024-03-17\r\n34643,1904,APAC,toys,retail,61.13,6,0.062,none,2024-11-11\r\n34644,1509,AMER,electronics,online,44.23,4,0.006,bundle,2024-08-05\r\n34645,1800,APAC,toys,retail,58.27,4,0.139,none,2024-10-23\r\n34646,1151,APAC,grocery,online,60.34,2,0.247,none,2024-09-25\r\n34647,2441,EMEA,fashion,online,85.07,1,0.106,loyalty,2024-04-11\r\n34648,1900,APAC,electronics,retail,90.28,7,0.227,none,2024-12-22\r\n34649,2138,APAC,grocery,online,31.73,6,0.001,none,2024-12-14\r\n34650,2479,EMEA,sports,online,28.46,4,0.218,coupon,2024-07-13\r\n34651,1360,APAC,grocery,retail,32.62,5,0.163,bundle,2024-07-07\r\n34652,1280,LATAM,fashion,partner,67.92,3,0.225,none,2024-03-24\r\n34653,2463,AMER,grocery,retail,48.21,3,0.100,bundle,2024-09-03\r\n34654,1389,LATAM,electronics,online,36.57,7,0.198,none,2024-06-09\r\n34655,1069,APAC,electronics,mobile,50.16,3,0.106,none,2024-11-21\r\n34656,1902,AMER,sports,retail,53.08,4,0.046,none,2024-08-22\r\n34657,1455,APAC,fashion,online,116.12,4,0.149,none,2024-07-08\r\n34658,1378,APAC,sports,online,64.12,6,0.190,none,2024-02-22\r\n34659,1635,APAC,home,retail,26.20,1,0.125,none,2024-09-12\r\n34660,1972,LATAM,grocery,online,65.49,6,0.148,none,2024-04-21\r\n34661,1619,APAC,electronics,mobile,63.77,8,0.097,bundle,2024-11-28\r\n34662,1626,EMEA,home,online,41.70,4,0.241,none,2024-11-04\r\n34663,1183,AMER,electronics,online,66.23,3,0.243,bundle,2024-02-08\r\n34664,1512,APAC,fashion,online,29.07,1,0.055,none,2024-05-11\r\n34665,1468,AMER,toys,retail,54.89,2,0.037,coupon,2024-08-05\r\n34666,1729,AMER,toys,retail,44.93,8,0.063,none,2024-09-27\r\n34667,1157,LATAM,sports,retail,26.64,5,0.033,none,2024-11-05\r\n34668,2029,APAC,sports,retail,69.60,2,0.173,no
ne,2024-08-08\r\n34669,2249,LATAM,electronics,retail,56.10,5,0.046,none,2024-10-06\r\n34670,2232,EMEA,electronics,online,81.31,6,0.147,coupon,2024-05-21\r\n34671,1236,AMER,grocery,online,45.98,5,0.240,coupon,2024-12-23\r\n34672,2137,LATAM,home,online,125.90,5,0.243,none,2024-02-14\r\n34673,2267,AMER,home,online,59.62,6,0.151,bundle,2024-10-09\r\n34674,1538,AMER,sports,online,19.76,6,0.170,bundle,2024-03-22\r\n34675,1976,AMER,grocery,online,152.04,3,0.060,coupon,2024-08-22\r\n34676,2052,LATAM,fashion,mobile,28.07,8,0.205,loyalty,2024-12-26\r\n34677,2103,LATAM,electronics,retail,76.34,7,0.221,bundle,2024-06-16\r\n34678,1424,APAC,electronics,online,31.88,4,0.125,none,2024-07-14\r\n34679,2263,AMER,toys,mobile,29.91,6,0.099,none,2024-05-25\r\n34680,1873,EMEA,grocery,retail,30.78,1,0.206,coupon,2024-03-27\r\n34681,1275,EMEA,fashion,online,27.23,2,0.238,none,2024-10-20\r\n34682,1902,AMER,fashion,online,30.04,1,0.146,bundle,2024-11-04\r\n34683,2384,LATAM,electronics,retail,27.00,6,0.106,none,2024-04-06\r\n34684,1632,LATAM,grocery,online,48.58,6,0.162,coupon,2024-12-19\r\n34685,1800,APAC,fashion,online,62.58,6,0.142,none,2024-09-08\r\n34686,2065,EMEA,sports,online,45.92,4,0.181,coupon,2024-08-03\r\n34687,1865,LATAM,home,online,97.33,8,0.134,loyalty,2024-09-02\r\n34688,1499,EMEA,grocery,online,44.02,8,0.119,none,2024-04-13\r\n34689,2058,LATAM,electronics,partner,45.23,2,0.115,none,2024-10-02\r\n34690,2094,AMER,home,online,40.86,8,0.120,none,2024-11-08\r\n34691,1887,LATAM,grocery,retail,78.53,7,0.054,none,2024-04-04\r\n34692,1741,AMER,fashion,online,67.17,4,0.180,none,2024-05-03\r\n34693,2075,LATAM,fashion,online,20.76,3,0.203,none,2024-01-25\r\n34694,1256,LATAM,home,retail,42.95,8,0.037,coupon,2024-05-27\r\n34695,1850,APAC,toys,mobile,61.99,7,0.031,coupon,2024-01-08\r\n34696,2146,APAC,grocery,online,66.32,4,0.115,none,2024-09-27\r\n34697,1415,AMER,electronics,retail,25.14,1,0.218,coupon,2024-06-10\r\n34698,1910,LATAM,home,partner,38.57,1,0.043,none,2024-01-13\r\n34699,2008,AP
AC,sports,online,69.61,2,0.012,coupon,2024-06-27\r\n34700,1738,LATAM,fashion,online,56.10,7,0.089,none,2024-10-25\r\n34701,1522,LATAM,grocery,retail,43.97,8,0.058,coupon,2024-09-25\r\n34702,1292,LATAM,sports,online,39.36,2,0.227,bundle,2024-08-02\r\n34703,2119,AMER,fashion,partner,88.81,6,0.160,bundle,2024-05-10\r\n34704,1325,APAC,grocery,online,68.87,6,0.127,bundle,2024-11-09\r\n34705,1846,APAC,home,online,66.21,7,0.186,bundle,2024-10-14\r\n34706,1381,LATAM,sports,mobile,70.15,1,0.045,none,2024-11-28\r\n34707,2233,EMEA,grocery,mobile,70.44,1,0.158,loyalty,2024-09-27\r\n34708,2340,EMEA,sports,retail,68.61,6,0.126,none,2024-05-11\r\n34709,1729,AMER,home,online,34.33,7,0.140,none,2024-12-13\r\n34710,2108,AMER,grocery,retail,41.10,7,0.099,none,2024-08-09\r\n34711,2196,AMER,sports,retail,35.16,3,0.029,none,2024-02-07\r\n34712,1696,LATAM,toys,retail,46.53,6,0.165,coupon,2024-11-21\r\n34713,1715,AMER,home,mobile,18.32,4,0.115,loyalty,2024-01-01\r\n34714,2244,LATAM,toys,mobile,15.18,3,0.061,coupon,2024-10-07\r\n34715,1671,APAC,sports,online,58.45,4,0.160,none,2024-12-02\r\n34716,2413,AMER,grocery,partner,56.58,6,0.090,coupon,2024-12-20\r\n34717,2337,AMER,grocery,online,132.90,1,0.122,none,2024-03-28\r\n34718,2319,AMER,grocery,retail,79.62,6,0.156,none,2024-04-16\r\n34719,1775,EMEA,home,partner,82.49,4,0.220,none,2024-11-09\r\n34720,1562,AMER,electronics,retail,59.63,2,0.165,none,2024-09-01\r\n34721,2277,EMEA,fashion,online,35.89,6,0.149,coupon,2024-08-21\r\n34722,2001,EMEA,electronics,online,61.95,6,0.189,none,2024-06-19\r\n34723,1949,AMER,grocery,online,75.49,8,0.026,none,2024-08-12\r\n34724,1417,APAC,electronics,retail,59.35,6,0.073,coupon,2024-11-10\r\n34725,1362,AMER,electronics,retail,113.32,7,0.191,bundle,2024-01-02\r\n34726,1628,EMEA,home,retail,66.48,6,0.006,none,2024-02-01\r\n34727,1380,AMER,grocery,online,90.67,3,0.190,none,2024-09-16\r\n34728,1453,APAC,grocery,online,107.26,8,0.116,none,2024-11-19\r\n34729,1787,APAC,electronics,online,56.85,6,0.150,coupon,2024-0
3-23\r\n34730,1946,AMER,electronics,online,46.83,5,0.027,bundle,2024-09-20\r\n34731,1177,LATAM,grocery,retail,77.86,4,0.120,loyalty,2024-06-26\r\n34732,1368,EMEA,home,retail,89.00,4,0.241,coupon,2024-06-21\r\n34733,1620,LATAM,home,mobile,41.16,3,0.151,coupon,2024-11-23\r\n34734,1501,AMER,fashion,online,55.70,2,0.020,bundle,2024-01-24\r\n34735,1546,EMEA,fashion,retail,32.22,6,0.141,none,2024-12-15\r\n34736,1945,AMER,sports,mobile,83.72,2,0.025,coupon,2024-07-26\r\n34737,1181,LATAM,sports,online,80.19,8,0.230,bundle,2024-04-27\r\n34738,2271,LATAM,sports,retail,61.82,4,0.245,none,2024-09-07\r\n34739,2216,AMER,home,partner,88.12,7,0.069,none,2024-05-03\r\n34740,1064,AMER,toys,retail,115.60,2,0.090,none,2024-11-18\r\n34741,1537,LATAM,grocery,online,68.04,8,0.073,none,2024-12-13\r\n34742,2085,AMER,grocery,mobile,57.12,1,0.234,bundle,2024-07-03\r\n34743,1775,EMEA,sports,retail,65.13,1,0.185,none,2024-03-04\r\n34744,2099,AMER,electronics,online,57.68,8,0.064,none,2024-09-20\r\n34745,1572,LATAM,electronics,online,18.95,6,0.176,none,2024-02-08\r\n34746,1836,LATAM,fashion,retail,40.63,4,0.039,none,2024-11-06\r\n34747,1739,AMER,home,mobile,49.07,7,0.154,coupon,2024-12-04\r\n34748,1726,EMEA,toys,retail,15.77,1,0.165,coupon,2024-11-24\r\n34749,2140,AMER,electronics,online,39.75,1,0.199,none,2024-11-23\r\n34750,2242,AMER,grocery,retail,45.91,4,0.101,none,2024-05-12\r\n34751,1488,AMER,electronics,online,186.50,1,0.034,loyalty,2024-04-27\r\n34752,1363,EMEA,sports,online,132.42,5,0.170,coupon,2024-08-28\r\n34753,1678,LATAM,grocery,mobile,65.26,6,0.159,none,2024-04-27\r\n34754,2005,APAC,grocery,retail,119.30,1,0.076,none,2024-11-13\r\n34755,2399,LATAM,home,retail,93.06,3,0.026,none,2024-11-12\r\n34756,1252,APAC,grocery,retail,73.34,3,0.248,bundle,2024-01-14\r\n34757,1085,EMEA,electronics,online,17.96,3,0.191,coupon,2024-08-23\r\n34758,1811,APAC,grocery,retail,32.44,5,0.168,bundle,2024-07-13\r\n34759,1392,AMER,grocery,retail,41.38,6,0.137,none,2024-07-18\r\n34760,1733,LATAM,sports,onli
ne,100.89,3,0.082,coupon,2024-03-05\r\n34761,1173,LATAM,grocery,online,71.71,7,0.089,none,2024-11-14\r\n34762,1969,LATAM,sports,retail,44.43,8,0.072,bundle,2024-02-08\r\n34763,1846,APAC,home,retail,54.26,3,0.158,loyalty,2024-10-15\r\n34764,2482,EMEA,electronics,online,45.65,8,0.162,none,2024-10-27\r\n34765,1781,LATAM,electronics,online,71.37,6,0.166,loyalty,2024-06-18\r\n34766,1891,APAC,fashion,online,75.37,2,0.030,coupon,2024-07-11\r\n34767,2322,AMER,electronics,online,26.52,5,0.164,none,2024-04-11\r\n34768,2411,EMEA,electronics,retail,140.30,5,0.006,none,2024-02-15\r\n34769,1942,APAC,toys,online,80.93,7,0.224,none,2024-04-01\r\n34770,1545,AMER,grocery,mobile,31.45,4,0.056,coupon,2024-04-18\r\n34771,1642,EMEA,toys,online,65.31,5,0.048,none,2024-04-18\r\n34772,2172,EMEA,grocery,online,123.07,3,0.077,none,2024-04-17\r\n34773,1289,LATAM,electronics,online,54.79,6,0.093,none,2024-06-18\r\n34774,1383,AMER,home,online,63.34,3,0.173,loyalty,2024-09-02\r\n34775,1331,AMER,sports,online,56.45,5,0.166,loyalty,2024-03-07\r\n34776,2050,APAC,electronics,online,31.79,8,0.211,none,2024-01-28\r\n34777,1964,EMEA,toys,online,53.09,8,0.232,none,2024-01-21\r\n34778,1147,EMEA,electronics,online,50.15,2,0.006,none,2024-04-19\r\n34779,2448,APAC,electronics,online,64.14,5,0.029,bundle,2024-09-26\r\n34780,1860,EMEA,home,online,50.69,7,0.246,none,2024-02-08\r\n34781,1962,APAC,home,retail,74.15,1,0.029,loyalty,2024-02-08\r\n34782,2169,EMEA,fashion,mobile,80.45,7,0.073,none,2024-03-05\r\n34783,1001,LATAM,fashion,retail,41.01,4,0.157,none,2024-03-27\r\n34784,1215,LATAM,sports,mobile,39.15,1,0.105,none,2024-09-12\r\n34785,1334,APAC,fashion,online,123.76,8,0.025,coupon,2024-12-08\r\n34786,1915,LATAM,sports,partner,80.22,4,0.233,none,2024-06-09\r\n34787,1511,EMEA,toys,mobile,13.69,1,0.172,none,2024-02-26\r\n34788,1204,AMER,fashion,retail,139.90,1,0.217,none,2024-11-14\r\n34789,1658,AMER,toys,online,34.99,8,0.114,coupon,2024-09-10\r\n34790,1042,LATAM,toys,retail,36.85,7,0.092,none,2024-09-19\r\n347
91,1348,AMER,toys,retail,32.96,2,0.127,none,2024-03-09\r\n34792,1944,AMER,electronics,online,70.02,1,0.054,coupon,2024-03-26\r\n34793,1300,EMEA,fashion,mobile,107.38,8,0.195,none,2024-05-06\r\n34794,1793,LATAM,grocery,online,123.74,2,0.168,none,2024-10-20\r\n34795,1475,LATAM,fashion,online,78.94,1,0.007,bundle,2024-11-13\r\n34796,1345,AMER,grocery,mobile,59.79,3,0.240,none,2024-03-15\r\n34797,2093,LATAM,grocery,partner,60.95,7,0.072,coupon,2024-04-10\r\n34798,1950,LATAM,grocery,online,72.26,1,0.102,none,2024-05-04\r\n34799,1179,APAC,grocery,retail,80.90,6,0.163,bundle,2024-05-12\r\n34800,1805,EMEA,fashion,online,93.01,7,0.155,coupon,2024-10-25\r\n34801,1011,APAC,toys,retail,60.47,1,0.155,none,2024-03-23\r\n34802,1101,AMER,electronics,partner,30.04,1,0.209,none,2024-06-07\r\n34803,2464,LATAM,grocery,online,86.75,2,0.123,none,2024-10-10\r\n34804,1133,EMEA,home,online,73.67,5,0.110,none,2024-07-08\r\n34805,1746,LATAM,sports,mobile,56.67,8,0.232,none,2024-12-03\r\n34806,2348,EMEA,home,online,192.69,8,0.058,coupon,2024-05-09\r\n34807,1886,LATAM,fashion,mobile,92.76,6,0.077,none,2024-03-25\r\n34808,1243,AMER,toys,retail,39.85,1,0.231,none,2024-01-22\r\n34809,2245,APAC,grocery,retail,75.87,8,0.027,loyalty,2024-11-17\r\n34810,2356,LATAM,toys,online,51.43,2,0.224,bundle,2024-09-20\r\n34811,1723,LATAM,home,mobile,62.11,3,0.197,none,2024-11-11\r\n34812,1643,EMEA,grocery,mobile,68.78,4,0.062,none,2024-06-27\r\n34813,1655,LATAM,grocery,mobile,28.80,6,0.085,none,2024-09-08\r\n34814,2280,EMEA,fashion,online,72.35,7,0.015,bundle,2024-05-14\r\n34815,2444,EMEA,grocery,mobile,47.15,6,0.021,none,2024-01-27\r\n34816,2432,AMER,toys,partner,60.31,4,0.135,none,2024-08-28\r\n34817,1140,LATAM,sports,mobile,57.13,7,0.204,coupon,2024-09-08\r\n34818,1574,AMER,sports,online,44.83,6,0.168,coupon,2024-10-08\r\n34819,1793,LATAM,toys,retail,56.00,1,0.019,none,2024-08-06\r\n34820,1337,APAC,electronics,mobile,89.78,5,0.039,none,2024-03-27\r\n34821,1909,APAC,grocery,retail,62.76,1,0.105,none,2024-01-15
\r\n34822,2284,EMEA,electronics,online,178.82,4,0.248,none,2024-03-01\r\n34823,2424,LATAM,grocery,online,85.05,7,0.173,bundle,2024-03-05\r\n34824,1109,APAC,sports,online,80.86,3,0.106,none,2024-07-27\r\n34825,2146,APAC,sports,online,84.22,2,0.015,bundle,2024-03-20\r\n34826,1627,LATAM,electronics,online,29.20,5,0.039,none,2024-07-18\r\n34827,1303,LATAM,fashion,online,66.24,3,0.095,coupon,2024-03-04\r\n34828,1227,AMER,toys,online,23.49,6,0.138,none,2024-09-01\r\n34829,1266,AMER,sports,online,114.68,7,0.163,none,2024-02-14\r\n34830,1188,LATAM,electronics,retail,71.02,5,0.068,none,2024-05-08\r\n34831,1506,EMEA,fashion,online,95.66,6,0.193,coupon,2024-12-19\r\n34832,1060,LATAM,fashion,online,93.80,7,0.176,loyalty,2024-05-11\r\n34833,1987,AMER,electronics,online,28.09,8,0.199,loyalty,2024-05-16\r\n34834,2150,APAC,home,retail,76.27,6,0.172,coupon,2024-11-11\r\n34835,2296,AMER,electronics,mobile,17.04,6,0.052,none,2024-09-28\r\n34836,2344,LATAM,toys,mobile,81.61,7,0.039,none,2024-03-22\r\n34837,2410,EMEA,sports,retail,84.30,5,0.201,none,2024-03-24\r\n34838,1110,LATAM,fashion,online,69.13,2,0.204,none,2024-06-26\r\n34839,1661,LATAM,grocery,retail,109.24,6,0.152,none,2024-05-22\r\n34840,1120,LATAM,grocery,online,35.68,3,0.075,none,2024-05-10\r\n34841,1757,EMEA,toys,retail,55.09,2,0.158,coupon,2024-12-16\r\n34842,1447,LATAM,fashion,partner,61.23,3,0.160,none,2024-12-08\r\n34843,2496,EMEA,grocery,online,55.45,5,0.235,coupon,2024-05-02\r\n34844,1109,APAC,electronics,online,43.21,1,0.185,none,2024-05-20\r\n34845,2407,EMEA,grocery,online,94.93,3,0.146,none,2024-10-17\r\n34846,1345,AMER,sports,mobile,114.69,2,0.226,none,2024-01-17\r\n34847,2173,LATAM,electronics,online,87.28,7,0.233,loyalty,2024-05-05\r\n34848,1174,APAC,grocery,online,65.29,7,0.098,none,2024-10-20\r\n34849,2261,EMEA,sports,online,101.50,3,0.231,none,2024-03-05\r\n34850,1119,LATAM,home,online,18.68,1,0.145,loyalty,2024-06-03\r\n34851,1170,AMER,home,online,39.03,5,0.029,none,2024-12-06\r\n34852,1105,AMER,grocery,onli
ne,71.95,7,0.031,coupon,2024-03-28\r\n34853,1624,AMER,grocery,online,55.08,1,0.016,bundle,2024-10-15\r\n34854,1204,AMER,electronics,retail,55.34,4,0.078,none,2024-02-27\r\n34855,1380,AMER,toys,online,68.50,3,0.058,bundle,2024-11-17\r\n34856,2144,EMEA,sports,online,43.15,7,0.244,coupon,2024-01-22\r\n34857,1878,EMEA,grocery,online,24.57,7,0.085,none,2024-06-03\r\n34858,1544,LATAM,electronics,retail,32.98,8,0.246,none,2024-03-08\r\n34859,2329,LATAM,electronics,online,78.94,7,0.076,coupon,2024-08-03\r\n34860,1549,APAC,fashion,online,56.14,7,0.180,none,2024-10-17\r\n34861,2107,APAC,electronics,online,135.69,2,0.135,none,2024-12-22\r\n34862,2402,AMER,electronics,retail,99.07,4,0.022,none,2024-01-16\r\n34863,2240,LATAM,sports,partner,11.93,4,0.185,bundle,2024-01-06\r\n34864,1246,EMEA,grocery,retail,39.93,1,0.110,none,2024-10-28\r\n34865,1826,LATAM,toys,online,53.40,5,0.221,bundle,2024-04-10\r\n34866,1994,LATAM,toys,online,89.84,8,0.122,none,2024-11-09\r\n34867,1154,LATAM,grocery,retail,38.25,8,0.175,none,2024-05-02\r\n34868,1954,APAC,sports,online,39.57,3,0.011,none,2024-06-05\r\n34869,1339,EMEA,grocery,online,39.63,8,0.068,bundle,2024-12-01\r\n34870,1072,LATAM,home,online,42.75,7,0.158,none,2024-03-27\r\n34871,1247,AMER,electronics,online,61.56,7,0.220,coupon,2024-11-07\r\n34872,1701,LATAM,home,online,27.18,4,0.168,bundle,2024-09-27\r\n34873,1920,LATAM,fashion,retail,50.25,6,0.223,none,2024-10-15\r\n34874,1051,EMEA,sports,online,69.29,7,0.223,none,2024-09-01\r\n34875,1959,EMEA,toys,online,108.74,8,0.119,coupon,2024-08-09\r\n34876,2374,LATAM,sports,retail,77.57,5,0.030,none,2024-04-15\r\n34877,1378,APAC,home,mobile,47.37,8,0.170,none,2024-12-09\r\n34878,1246,EMEA,fashion,online,112.56,3,0.094,none,2024-10-04\r\n34879,1418,LATAM,fashion,online,32.86,5,0.023,none,2024-03-04\r\n34880,1824,LATAM,fashion,online,52.55,1,0.004,coupon,2024-04-17\r\n34881,2360,EMEA,home,online,42.54,6,0.016,coupon,2024-01-09\r\n34882,2172,EMEA,grocery,retail,58.48,1,0.225,bundle,2024-02-18\r\n34883
,1813,EMEA,toys,retail,59.53,6,0.113,bundle,2024-06-16\r\n34884,1655,LATAM,electronics,online,45.48,6,0.216,loyalty,2024-03-08\r\n34885,1125,LATAM,fashion,retail,95.50,6,0.178,none,2024-12-02\r\n34886,1663,LATAM,sports,retail,73.49,4,0.162,none,2024-12-02\r\n34887,2280,EMEA,home,mobile,52.17,4,0.051,loyalty,2024-09-27\r\n34888,1135,APAC,home,online,69.47,1,0.229,coupon,2024-09-25\r\n34889,1450,EMEA,electronics,retail,104.31,7,0.023,none,2024-05-20\r\n34890,1716,LATAM,grocery,mobile,46.58,3,0.124,none,2024-10-25\r\n34891,1618,EMEA,sports,retail,42.59,6,0.104,coupon,2024-07-06\r\n34892,2281,AMER,fashion,retail,30.07,5,0.179,none,2024-04-27\r\n34893,1548,EMEA,sports,online,16.67,8,0.051,none,2024-01-26\r\n34894,1251,EMEA,fashion,online,69.55,8,0.230,none,2024-08-24\r\n34895,2049,LATAM,electronics,retail,53.25,4,0.119,loyalty,2024-02-23\r\n34896,2492,LATAM,fashion,mobile,34.07,5,0.046,coupon,2024-04-03\r\n34897,1210,LATAM,fashion,online,94.74,2,0.114,none,2024-02-02\r\n34898,1182,EMEA,electronics,retail,65.39,6,0.205,none,2024-09-19\r\n34899,1317,EMEA,electronics,online,71.32,2,0.103,none,2024-01-23\r\n34900,1429,APAC,grocery,online,34.59,2,0.021,coupon,2024-05-06\r\n34901,2319,AMER,grocery,online,43.50,2,0.179,loyalty,2024-02-25\r\n34902,1028,EMEA,home,retail,65.32,1,0.020,none,2024-11-20\r\n34903,1321,EMEA,grocery,online,67.25,5,0.148,loyalty,2024-02-01\r\n34904,1288,LATAM,grocery,online,29.17,4,0.196,none,2024-07-09\r\n34905,1350,LATAM,home,mobile,31.63,6,0.022,coupon,2024-03-21\r\n34906,1443,EMEA,home,online,40.37,2,0.085,none,2024-12-12\r\n34907,1963,AMER,electronics,online,50.88,4,0.236,none,2024-07-04\r\n34908,1180,AMER,electronics,online,74.19,4,0.244,coupon,2024-10-22\r\n34909,1590,APAC,home,mobile,152.77,1,0.002,none,2024-02-09\r\n34910,1541,APAC,home,online,56.39,6,0.013,none,2024-01-18\r\n34911,2099,AMER,toys,retail,96.80,2,0.019,none,2024-09-03\r\n34912,1720,AMER,home,online,56.50,5,0.210,coupon,2024-06-05\r\n34913,1350,LATAM,home,retail,19.36,6,0.143,none,
2024-09-13\r\n34914,2149,EMEA,electronics,online,29.44,6,0.040,none,2024-05-05\r\n34915,1058,LATAM,home,retail,23.86,4,0.203,none,2024-08-25\r\n34916,1623,AMER,fashion,retail,68.72,6,0.186,none,2024-12-09\r\n34917,2478,AMER,fashion,mobile,86.40,6,0.146,coupon,2024-09-14\r\n34918,1783,AMER,fashion,online,28.04,5,0.230,none,2024-09-10\r\n34919,1928,AMER,grocery,online,42.80,2,0.247,none,2024-04-27\r\n34920,1950,LATAM,fashion,retail,47.85,2,0.072,none,2024-03-14\r\n34921,2283,AMER,electronics,mobile,35.80,8,0.103,none,2024-10-24\r\n34922,2102,APAC,electronics,retail,69.12,3,0.158,bundle,2024-04-23\r\n34923,1132,EMEA,electronics,mobile,37.77,8,0.109,coupon,2024-07-19\r\n34924,2340,EMEA,toys,online,36.03,5,0.121,coupon,2024-09-11\r\n34925,2058,LATAM,grocery,online,97.62,7,0.182,coupon,2024-12-01\r\n34926,2339,AMER,electronics,partner,26.10,7,0.129,loyalty,2024-05-26\r\n34927,1258,EMEA,grocery,online,78.02,5,0.171,coupon,2024-11-15\r\n34928,2384,LATAM,grocery,online,112.15,8,0.200,none,2024-03-13\r\n34929,1057,LATAM,home,online,143.25,8,0.033,none,2024-09-21\r\n34930,2069,AMER,electronics,mobile,66.72,3,0.197,none,2024-11-05\r\n34931,1010,EMEA,fashion,online,85.92,5,0.076,bundle,2024-07-09\r\n34932,2433,APAC,sports,mobile,58.90,7,0.184,none,2024-12-11\r\n34933,2234,LATAM,electronics,retail,21.60,1,0.148,coupon,2024-01-09\r\n34934,2313,LATAM,sports,mobile,103.37,5,0.019,none,2024-02-08\r\n34935,1972,LATAM,grocery,online,78.89,6,0.077,bundle,2024-03-10\r\n34936,2127,LATAM,fashion,online,76.52,7,0.238,none,2024-07-03\r\n34937,1739,AMER,home,mobile,67.20,1,0.051,loyalty,2024-12-01\r\n34938,1937,APAC,toys,mobile,142.39,5,0.188,bundle,2024-04-01\r\n34939,2206,AMER,electronics,mobile,81.24,2,0.104,loyalty,2024-05-01\r\n34940,2178,AMER,electronics,retail,79.52,3,0.138,none,2024-09-23\r\n34941,1872,LATAM,fashion,retail,70.09,2,0.157,bundle,2024-08-05\r\n34942,1487,AMER,fashion,mobile,128.22,7,0.065,bundle,2024-03-18\r\n34943,2368,AMER,sports,online,61.32,5,0.028,none,2024-07-14\r\
n34944,2480,APAC,toys,retail,75.84,1,0.051,none,2024-10-12\r\n34945,2101,APAC,home,retail,42.37,7,0.103,bundle,2024-06-17\r\n34946,1625,EMEA,fashion,online,42.91,5,0.084,none,2024-06-04\r\n34947,2019,AMER,home,retail,117.92,7,0.166,coupon,2024-03-19\r\n34948,1096,EMEA,electronics,online,47.22,4,0.189,bundle,2024-11-11\r\n34949,2494,AMER,electronics,retail,127.78,3,0.168,coupon,2024-01-14\r\n34950,2498,LATAM,grocery,mobile,125.72,1,0.042,coupon,2024-12-11\r\n34951,2017,EMEA,grocery,mobile,311.23,6,0.150,coupon,2024-04-18\r\n34952,1342,LATAM,toys,online,90.55,3,0.231,bundle,2024-02-24\r\n34953,2237,EMEA,sports,online,97.04,6,0.068,coupon,2024-01-12\r\n34954,2185,EMEA,grocery,retail,61.15,6,0.054,loyalty,2024-05-26\r\n34955,1027,APAC,home,retail,29.10,6,0.080,coupon,2024-08-18\r\n34956,1705,AMER,fashion,online,89.45,1,0.213,none,2024-03-27\r\n34957,1802,AMER,home,online,52.76,5,0.019,none,2024-01-17\r\n34958,1298,LATAM,fashion,online,57.22,2,0.232,none,2024-01-20\r\n34959,2231,LATAM,grocery,online,43.11,4,0.133,none,2024-10-23\r\n34960,1701,LATAM,grocery,partner,56.52,2,0.116,loyalty,2024-07-21\r\n34961,1199,APAC,sports,partner,22.97,6,0.070,loyalty,2024-04-22\r\n34962,1719,LATAM,home,online,60.79,3,0.102,none,2024-10-20\r\n34963,1313,EMEA,electronics,retail,54.93,1,0.026,none,2024-09-17\r\n34964,2066,APAC,fashion,online,81.05,6,0.151,none,2024-08-11\r\n34965,2020,AMER,fashion,retail,83.67,4,0.076,none,2024-12-03\r\n34966,1910,LATAM,fashion,mobile,44.05,4,0.010,none,2024-09-15\r\n34967,1394,LATAM,toys,partner,77.48,5,0.019,none,2024-02-28\r\n34968,1254,APAC,grocery,online,47.19,6,0.167,bundle,2024-03-23\r\n34969,2456,APAC,grocery,online,45.19,6,0.186,none,2024-09-23\r\n34970,1239,APAC,fashion,mobile,124.20,3,0.199,none,2024-12-19\r\n34971,1523,LATAM,grocery,mobile,43.40,6,0.067,none,2024-01-13\r\n34972,2294,EMEA,electronics,retail,29.87,7,0.228,bundle,2024-04-16\r\n34973,1500,EMEA,toys,mobile,43.96,2,0.157,bundle,2024-10-25\r\n34974,1990,EMEA,electronics,retail,71.61,6
,0.236,coupon,2024-08-15\r\n34975,1275,EMEA,home,online,171.30,6,0.176,coupon,2024-05-26\r\n34976,1105,AMER,home,retail,28.54,7,0.174,bundle,2024-04-16\r\n34977,2372,AMER,sports,retail,29.81,2,0.186,none,2024-02-03\r\n34978,2116,LATAM,sports,mobile,80.01,7,0.207,none,2024-12-17\r\n34979,1621,APAC,grocery,online,46.20,5,0.056,loyalty,2024-12-10\r\n34980,1957,AMER,toys,online,77.32,3,0.011,none,2024-04-27\r\n34981,2284,EMEA,electronics,mobile,45.13,1,0.058,none,2024-06-22\r\n34982,2165,AMER,electronics,online,45.81,4,0.026,none,2024-10-13\r\n34983,2217,LATAM,grocery,online,35.54,5,0.093,loyalty,2024-09-09\r\n34984,2266,LATAM,electronics,retail,58.80,6,0.153,none,2024-04-06\r\n34985,1275,EMEA,fashion,online,46.75,5,0.054,none,2024-03-01\r\n34986,1792,AMER,fashion,mobile,125.97,8,0.104,none,2024-12-19\r\n34987,2047,AMER,home,mobile,112.13,6,0.187,none,2024-08-24\r\n34988,1325,APAC,electronics,online,81.95,6,0.244,coupon,2024-12-16\r\n34989,2306,AMER,grocery,mobile,56.43,8,0.064,none,2024-12-28\r\n34990,1460,LATAM,sports,retail,89.66,3,0.159,none,2024-10-02\r\n34991,1284,APAC,toys,online,102.10,4,0.076,bundle,2024-03-17\r\n34992,1829,EMEA,grocery,online,125.62,6,0.069,none,2024-07-11\r\n34993,1229,LATAM,home,online,38.10,1,0.096,coupon,2024-11-09\r\n34994,1776,APAC,sports,mobile,114.44,5,0.105,bundle,2024-08-06\r\n34995,1928,AMER,home,retail,95.45,4,0.178,none,2024-11-26\r\n34996,1540,LATAM,grocery,retail,92.82,2,0.028,none,2024-07-19\r\n34997,1235,EMEA,grocery,retail,55.73,5,0.118,none,2024-03-19\r\n34998,1206,EMEA,sports,online,224.59,6,0.209,coupon,2024-08-11\r\n34999,2461,LATAM,grocery,online,65.57,7,0.152,none,2024-05-15\r\n35000,1531,EMEA,electronics,partner,88.42,3,0.054,none,2024-12-15\r\n35001,2376,LATAM,grocery,retail,51.67,8,0.188,none,2024-04-10\r\n35002,1407,LATAM,electronics,retail,25.52,6,0.243,none,2024-08-07\r\n35003,2373,LATAM,home,retail,59.47,6,0.153,none,2024-12-21\r\n35004,2367,AMER,fashion,online,100.77,2,0.153,bundle,2024-05-02\r\n35005,1639,APAC,
grocery,partner,59.24,3,0.087,none,2024-07-14\r\n35006,1058,LATAM,fashion,online,80.26,5,0.032,none,2024-05-03\r\n35007,1316,APAC,fashion,online,34.03,1,0.004,none,2024-08-24\r\n35008,1720,AMER,sports,online,83.62,3,0.239,coupon,2024-07-01\r\n35009,2382,LATAM,home,retail,39.41,1,0.019,none,2024-08-21\r\n35010,2017,EMEA,grocery,online,56.14,8,0.159,coupon,2024-04-13\r\n35011,1643,EMEA,electronics,mobile,56.87,1,0.012,none,2024-12-26\r\n35012,2218,EMEA,home,online,35.75,1,0.212,none,2024-01-08\r\n35013,1854,AMER,fashion,online,95.67,8,0.015,none,2024-07-28\r\n35014,1550,APAC,grocery,online,54.49,3,0.000,none,2024-01-15\r\n35015,2151,APAC,home,online,38.30,2,0.149,none,2024-05-24\r\n35016,1310,AMER,toys,online,43.95,6,0.043,loyalty,2024-10-01\r\n35017,1116,LATAM,grocery,online,78.92,5,0.193,bundle,2024-02-10\r\n35018,1056,LATAM,grocery,online,42.63,2,0.076,none,2024-04-28\r\n35019,2360,EMEA,fashion,online,63.33,1,0.161,none,2024-05-28\r\n35020,1522,LATAM,electronics,partner,42.29,3,0.009,none,2024-06-10\r\n35021,1888,LATAM,grocery,online,28.05,1,0.183,none,2024-12-17\r\n35022,2222,LATAM,fashion,mobile,89.87,7,0.024,coupon,2024-05-12\r\n35023,1780,APAC,electronics,retail,119.67,1,0.242,coupon,2024-09-06\r\n35024,1707,APAC,grocery,mobile,113.59,8,0.210,coupon,2024-05-25\r\n35025,1001,LATAM,electronics,retail,52.81,7,0.108,loyalty,2024-11-14\r\n35026,1684,EMEA,home,online,131.11,6,0.039,none,2024-04-27\r\n35027,2347,AMER,home,online,73.42,2,0.043,none,2024-11-23\r\n35028,1632,LATAM,home,mobile,45.37,6,0.019,coupon,2024-03-22\r\n35029,2243,APAC,sports,online,30.71,6,0.074,none,2024-06-15\r\n35030,1761,EMEA,grocery,mobile,22.97,3,0.095,none,2024-04-20\r\n35031,1504,AMER,grocery,retail,92.18,8,0.245,none,2024-05-20\r\n35032,2224,EMEA,grocery,online,74.41,1,0.040,none,2024-11-16\r\n35033,1859,AMER,fashion,partner,34.63,5,0.148,bundle,2024-10-20\r\n35034,1786,APAC,fashion,online,57.78,7,0.153,none,2024-11-20\r\n35035,2393,LATAM,sports,online,79.93,7,0.142,coupon,2024-02-05\r\n
35036,1588,LATAM,home,retail,26.34,7,0.004,coupon,2024-03-08\r\n35037,2262,APAC,fashion,online,32.15,4,0.141,loyalty,2024-11-22\r\n35038,1273,AMER,grocery,online,48.81,1,0.126,none,2024-07-09\r\n35039,1803,LATAM,toys,retail,51.39,2,0.114,none,2024-10-01\r\n35040,2195,APAC,sports,online,54.98,3,0.236,none,2024-02-08\r\n35041,1704,AMER,grocery,mobile,56.93,7,0.216,coupon,2024-05-02\r\n35042,2288,AMER,fashion,retail,49.70,5,0.028,coupon,2024-01-15\r\n35043,1069,APAC,fashion,online,45.53,4,0.007,coupon,2024-09-24\r\n35044,1503,APAC,home,online,23.69,1,0.108,none,2024-10-20\r\n35045,1225,APAC,electronics,partner,55.18,1,0.126,bundle,2024-10-28\r\n35046,1149,LATAM,electronics,retail,51.02,8,0.053,coupon,2024-05-27\r\n35047,1324,LATAM,home,online,57.12,2,0.179,bundle,2024-07-26\r\n35048,1519,APAC,grocery,retail,46.11,6,0.001,none,2024-09-22\r\n35049,2288,AMER,toys,retail,43.95,2,0.077,coupon,2024-05-20\r\n35050,1309,EMEA,grocery,online,30.79,2,0.207,loyalty,2024-02-26\r\n35051,1294,APAC,toys,retail,62.66,5,0.028,none,2024-02-15\r\n35052,1656,LATAM,grocery,online,45.91,5,0.033,loyalty,2024-05-10\r\n35053,2137,LATAM,fashion,online,42.41,8,0.103,coupon,2024-02-25\r\n35054,1867,AMER,grocery,retail,48.66,7,0.020,none,2024-04-22\r\n35055,2454,LATAM,sports,online,22.48,2,0.228,none,2024-07-28\r\n35056,1306,LATAM,electronics,mobile,29.44,3,0.085,none,2024-10-16\r\n35057,2034,LATAM,home,online,37.91,8,0.169,none,2024-07-01\r\n35058,2185,EMEA,fashion,online,16.04,3,0.033,coupon,2024-08-10\r\n35059,2421,AMER,grocery,online,75.17,4,0.200,none,2024-08-15\r\n35060,1079,LATAM,home,retail,92.71,7,0.167,none,2024-10-19\r\n35061,1699,APAC,toys,online,50.06,1,0.095,none,2024-02-05\r\n35062,2338,AMER,sports,online,39.27,2,0.215,coupon,2024-10-27\r\n35063,1468,AMER,sports,online,33.75,3,0.216,none,2024-06-07\r\n35064,2031,AMER,fashion,mobile,49.97,7,0.170,none,2024-05-09\r\n35065,1697,APAC,grocery,retail,81.46,7,0.092,bundle,2024-09-10\r\n35066,2030,EMEA,grocery,online,51.37,2,0.213,bundle,202
4-04-13\r\n35067,1440,AMER,toys,mobile,32.46,8,0.106,bundle,2024-01-06\r\n35068,2468,EMEA,toys,retail,132.23,2,0.213,bundle,2024-06-20\r\n35069,2080,LATAM,fashion,retail,56.46,6,0.163,coupon,2024-12-28\r\n35070,1374,APAC,home,retail,63.40,5,0.229,loyalty,2024-03-17\r\n35071,1509,AMER,electronics,mobile,80.54,4,0.116,coupon,2024-09-14\r\n35072,2081,APAC,sports,retail,51.44,4,0.169,coupon,2024-10-10\r\n35073,1864,EMEA,electronics,retail,38.65,8,0.032,bundle,2024-11-10\r\n35074,1150,LATAM,electronics,retail,59.78,4,0.068,none,2024-04-11\r\n35075,2397,LATAM,grocery,online,55.57,3,0.066,none,2024-11-03\r\n35076,2205,AMER,electronics,online,41.88,7,0.127,none,2024-10-13\r\n35077,1288,LATAM,grocery,online,23.20,7,0.237,coupon,2024-05-07\r\n35078,1546,EMEA,fashion,retail,89.94,7,0.157,none,2024-03-15\r\n35079,2375,AMER,grocery,online,52.41,7,0.148,none,2024-12-18\r\n35080,2266,LATAM,fashion,online,139.87,5,0.228,bundle,2024-12-06\r\n35081,1950,LATAM,grocery,online,58.82,4,0.074,none,2024-11-22\r\n35082,1314,AMER,home,retail,66.48,5,0.019,bundle,2024-10-09\r\n35083,1521,LATAM,electronics,online,59.99,4,0.144,bundle,2024-08-04\r\n35084,1022,APAC,grocery,mobile,56.06,2,0.013,none,2024-04-14\r\n35085,1232,LATAM,home,online,98.50,3,0.048,none,2024-02-23\r\n35086,1407,LATAM,fashion,mobile,63.44,8,0.214,none,2024-10-23\r\n35087,2229,APAC,home,retail,57.50,6,0.081,none,2024-11-28\r\n35088,2404,EMEA,sports,retail,53.43,2,0.079,none,2024-10-07\r\n35089,1643,EMEA,fashion,mobile,110.57,5,0.093,coupon,2024-11-13\r\n35090,1534,EMEA,home,online,26.80,1,0.048,none,2024-11-17\r\n35091,2188,EMEA,fashion,retail,92.97,7,0.148,none,2024-08-27\r\n35092,1005,LATAM,grocery,online,39.12,1,0.025,bundle,2024-05-09\r\n35093,2146,APAC,toys,mobile,55.86,7,0.031,none,2024-12-19\r\n35094,1272,AMER,grocery,online,58.46,3,0.123,none,2024-11-07\r\n35095,1353,EMEA,fashion,online,48.97,1,0.144,bundle,2024-04-22\r\n35096,1826,LATAM,grocery,retail,80.40,8,0.235,coupon,2024-06-27\r\n35097,2338,AMER,toys,retail,43
.16,5,0.109,none,2024-08-23\r\n35098,2292,EMEA,sports,retail,52.93,4,0.206,loyalty,2024-07-18\r\n35099,1443,EMEA,fashion,retail,23.72,3,0.159,none,2024-04-15\r\n35100,2195,APAC,toys,retail,64.23,6,0.243,bundle,2024-08-16\r\n35101,1559,EMEA,home,online,69.87,5,0.227,coupon,2024-03-17\r\n35102,1893,APAC,toys,mobile,63.05,6,0.210,none,2024-12-02\r\n35103,1585,AMER,electronics,retail,67.31,3,0.099,bundle,2024-01-26\r\n35104,1283,APAC,electronics,mobile,40.01,1,0.162,none,2024-06-03\r\n35105,2379,AMER,electronics,online,36.99,1,0.234,none,2024-11-07\r\n35106,2442,APAC,sports,mobile,39.87,5,0.079,none,2024-06-05\r\n35107,1971,EMEA,grocery,online,50.61,6,0.200,none,2024-02-08\r\n35108,2451,APAC,fashion,retail,46.47,3,0.064,none,2024-06-06\r\n35109,1614,EMEA,grocery,online,31.85,3,0.167,bundle,2024-03-24\r\n35110,1301,AMER,sports,online,44.04,7,0.199,none,2024-03-17\r\n35111,1151,APAC,electronics,retail,48.63,3,0.130,coupon,2024-12-17\r\n35112,2199,LATAM,grocery,online,26.09,5,0.166,coupon,2024-07-22\r\n35113,1947,EMEA,sports,mobile,73.41,2,0.094,bundle,2024-01-20\r\n35114,2225,EMEA,grocery,online,54.48,7,0.208,loyalty,2024-12-04\r\n35115,2355,EMEA,home,online,106.83,8,0.148,none,2024-05-06\r\n35116,1874,LATAM,fashion,online,58.76,3,0.248,bundle,2024-12-23\r\n35117,1694,APAC,fashion,retail,85.86,4,0.022,loyalty,2024-11-06\r\n35118,1783,AMER,sports,online,91.96,5,0.039,bundle,2024-07-15\r\n35119,1472,AMER,home,retail,60.80,7,0.202,none,2024-08-15\r\n35120,1921,LATAM,home,online,30.29,3,0.060,loyalty,2024-11-07\r\n35121,2314,EMEA,home,retail,26.85,3,0.169,loyalty,2024-06-12\r\n35122,1214,EMEA,home,retail,91.06,4,0.109,coupon,2024-11-26\r\n35123,2423,LATAM,fashion,retail,77.15,8,0.201,none,2024-01-19\r\n35124,1049,AMER,fashion,online,54.35,8,0.130,bundle,2024-05-04\r\n35125,2119,AMER,sports,mobile,58.85,4,0.109,none,2024-02-12\r\n35126,1035,EMEA,grocery,partner,45.72,8,0.141,bundle,2024-09-22\r\n35127,1461,LATAM,sports,mobile,72.92,8,0.054,none,2024-01-13\r\n35128,2462,EMEA,fa
shion,mobile,39.88,4,0.016,none,2024-05-22\r\n35129,2167,APAC,home,retail,49.75,7,0.004,none,2024-07-26\r\n35130,2096,LATAM,electronics,partner,55.77,1,0.108,coupon,2024-10-04\r\n35131,1148,AMER,home,mobile,70.63,3,0.194,coupon,2024-12-26\r\n35132,2172,EMEA,home,online,58.73,3,0.030,none,2024-09-28\r\n35133,2290,LATAM,fashion,online,45.05,7,0.241,coupon,2024-06-18\r\n35134,2454,LATAM,home,mobile,40.25,5,0.042,coupon,2024-02-04\r\n35135,2007,LATAM,grocery,online,30.70,8,0.109,none,2024-03-19\r\n35136,2230,LATAM,electronics,retail,79.48,3,0.225,none,2024-04-01\r\n35137,1497,EMEA,grocery,retail,21.88,2,0.068,none,2024-08-23\r\n35138,2179,LATAM,fashion,retail,35.41,2,0.146,none,2024-01-25\r\n35139,2450,EMEA,electronics,retail,80.06,6,0.231,bundle,2024-03-25\r\n35140,2154,APAC,electronics,retail,20.48,4,0.151,none,2024-05-18\r\n35141,2355,EMEA,electronics,retail,91.81,4,0.113,coupon,2024-06-11\r\n35142,1122,AMER,fashion,online,45.38,8,0.048,bundle,2024-02-28\r\n35143,2455,AMER,electronics,retail,56.77,1,0.117,coupon,2024-06-14\r\n35144,2126,APAC,home,online,67.53,8,0.013,bundle,2024-12-11\r\n35145,1952,EMEA,grocery,online,153.10,5,0.089,loyalty,2024-09-08\r\n35146,1976,AMER,electronics,online,52.21,6,0.017,bundle,2024-01-15\r\n35147,1869,AMER,home,online,37.04,7,0.244,none,2024-01-08\r\n35148,1764,LATAM,electronics,retail,87.96,4,0.235,none,2024-04-13\r\n35149,1699,APAC,home,partner,56.43,3,0.038,none,2024-06-26\r\n35150,1232,LATAM,electronics,partner,44.96,5,0.191,coupon,2024-07-22\r\n35151,2463,AMER,grocery,retail,57.42,4,0.207,bundle,2024-11-11\r\n35152,2462,EMEA,fashion,online,72.18,3,0.170,none,2024-12-03\r\n35153,2031,AMER,fashion,online,50.46,4,0.103,none,2024-08-12\r\n35154,1407,LATAM,grocery,retail,37.00,5,0.149,none,2024-08-04\r\n35155,2371,LATAM,grocery,online,52.62,6,0.025,none,2024-06-13\r\n35156,2142,LATAM,grocery,mobile,52.54,4,0.182,none,2024-03-13\r\n35157,1991,APAC,sports,retail,73.30,8,0.101,none,2024-10-22\r\n35158,1918,EMEA,fashion,online,42.13,7,0.0
83,none,2024-12-04\r\n35159,1161,AMER,toys,retail,104.14,6,0.040,coupon,2024-07-27\r\n35160,2292,EMEA,grocery,online,23.38,1,0.197,none,2024-06-04\r\n35161,1116,LATAM,fashion,mobile,67.64,8,0.100,none,2024-06-23\r\n35162,2463,AMER,grocery,retail,30.61,5,0.214,none,2024-12-28\r\n35163,1735,LATAM,grocery,retail,26.58,4,0.145,none,2024-12-06\r\n35164,1558,EMEA,sports,online,20.17,6,0.125,loyalty,2024-12-17\r\n35165,1611,EMEA,electronics,online,51.77,4,0.156,none,2024-02-15\r\n35166,1693,EMEA,electronics,retail,38.27,5,0.193,none,2024-08-27\r\n35167,1081,AMER,grocery,mobile,28.46,7,0.151,none,2024-03-18\r\n35168,1698,EMEA,grocery,retail,66.67,4,0.050,none,2024-09-21\r\n35169,2234,LATAM,sports,mobile,61.84,8,0.236,none,2024-08-20\r\n35170,1076,LATAM,grocery,mobile,55.24,4,0.031,bundle,2024-04-10\r\n35171,2241,APAC,home,mobile,106.02,8,0.024,none,2024-04-21\r\n35172,1725,APAC,grocery,online,80.26,1,0.021,none,2024-09-12\r\n35173,1857,LATAM,home,online,63.32,8,0.121,coupon,2024-09-17\r\n35174,1912,APAC,grocery,online,69.00,7,0.107,none,2024-10-21\r\n35175,2285,APAC,sports,retail,30.68,8,0.068,none,2024-01-14\r\n35176,1863,EMEA,home,retail,31.22,6,0.156,none,2024-09-24\r\n35177,2300,EMEA,fashion,online,77.11,5,0.068,none,2024-11-01\r\n35178,1561,EMEA,fashion,online,57.99,6,0.006,none,2024-11-05\r\n35179,1867,AMER,electronics,online,43.13,5,0.153,loyalty,2024-07-08\r\n35180,2207,APAC,sports,partner,48.85,4,0.042,none,2024-01-24\r\n35181,1484,AMER,electronics,retail,48.61,5,0.068,bundle,2024-07-26\r\n35182,2152,EMEA,fashion,retail,55.21,3,0.062,none,2024-03-14\r\n35183,1870,EMEA,fashion,mobile,46.92,3,0.238,coupon,2024-07-22\r\n35184,2059,AMER,electronics,retail,36.15,2,0.039,bundle,2024-12-24\r\n35185,1674,LATAM,home,online,83.89,8,0.036,none,2024-05-05\r\n35186,1718,EMEA,electronics,online,59.73,6,0.030,none,2024-12-23\r\n35187,2282,EMEA,fashion,online,75.78,6,0.000,bundle,2024-10-02\r\n35188,1690,LATAM,home,online,27.87,8,0.194,loyalty,2024-12-25\r\n35189,2411,EMEA,grocery
,online,47.06,8,0.026,bundle,2024-12-13\r\n35190,1803,LATAM,electronics,retail,54.57,4,0.129,none,2024-11-16\r\n35191,2320,LATAM,electronics,online,62.34,5,0.201,none,2024-05-13\r\n35192,2011,AMER,electronics,mobile,130.50,7,0.157,coupon,2024-04-11\r\n35193,1732,LATAM,sports,retail,86.19,8,0.119,loyalty,2024-09-06\r\n35194,1613,EMEA,grocery,partner,36.29,4,0.233,bundle,2024-05-26\r\n35195,1538,AMER,fashion,retail,72.71,8,0.243,bundle,2024-07-27\r\n35196,2285,APAC,electronics,retail,68.76,5,0.047,none,2024-01-03\r\n35197,1874,LATAM,grocery,retail,69.00,7,0.065,none,2024-07-19\r\n35198,1266,AMER,fashion,retail,32.89,4,0.123,none,2024-07-07\r\n35199,1411,LATAM,grocery,online,44.72,7,0.193,bundle,2024-12-18\r\n35200,2397,LATAM,grocery,online,30.07,2,0.160,coupon,2024-09-02\r\n35201,2240,LATAM,home,online,118.51,4,0.066,coupon,2024-06-10\r\n35202,2306,AMER,grocery,online,48.65,7,0.186,coupon,2024-02-20\r\n35203,1130,LATAM,sports,retail,145.18,1,0.104,coupon,2024-07-24\r\n35204,1164,EMEA,fashion,online,75.29,2,0.077,none,2024-04-27\r\n35205,2305,AMER,fashion,online,40.11,4,0.237,bundle,2024-07-21\r\n35206,1456,APAC,grocery,online,24.57,6,0.132,none,2024-10-15\r\n35207,2184,APAC,grocery,online,85.78,2,0.246,none,2024-11-11\r\n35208,1803,LATAM,electronics,online,57.81,3,0.100,none,2024-02-28\r\n35209,2337,AMER,electronics,retail,54.78,7,0.073,coupon,2024-03-21\r\n35210,2168,EMEA,toys,online,102.61,3,0.220,loyalty,2024-03-07\r\n35211,1562,AMER,grocery,online,48.72,6,0.089,none,2024-05-18\r\n35212,1327,APAC,grocery,online,58.86,4,0.074,none,2024-02-23\r\n35213,1903,LATAM,toys,mobile,44.44,4,0.126,bundle,2024-05-09\r\n35214,1666,LATAM,electronics,retail,79.36,5,0.155,none,2024-11-14\r\n35215,1483,EMEA,electronics,partner,61.92,3,0.219,none,2024-03-06\r\n35216,1063,AMER,sports,online,38.85,1,0.083,loyalty,2024-12-28\r\n35217,2443,LATAM,home,mobile,29.77,7,0.016,none,2024-07-13\r\n35218,1645,EMEA,home,online,67.77,1,0.066,coupon,2024-01-28\r\n35219,1306,LATAM,grocery,retail,33.0
1,1,0.039,coupon,2024-12-06\r\n35220,1251,EMEA,home,online,98.46,7,0.241,none,2024-08-23\r\n35221,1581,APAC,grocery,online,59.72,3,0.235,bundle,2024-05-08\r\n35222,2166,AMER,fashion,retail,71.90,3,0.013,none,2024-04-18\r\n35223,2041,LATAM,sports,retail,34.65,7,0.238,none,2024-02-05\r\n35224,2445,APAC,grocery,online,51.88,2,0.068,bundle,2024-12-25\r\n35225,1814,AMER,home,online,52.84,1,0.246,none,2024-01-13\r\n35226,2363,AMER,sports,mobile,35.83,3,0.215,none,2024-07-09\r\n35227,2428,LATAM,sports,retail,33.31,5,0.168,bundle,2024-07-28\r\n35228,2466,APAC,home,partner,68.76,3,0.232,none,2024-10-05\r\n35229,1053,AMER,grocery,online,32.49,6,0.076,none,2024-01-20\r\n35230,1821,LATAM,home,online,77.08,2,0.049,none,2024-04-28\r\n35231,2075,LATAM,fashion,online,43.46,7,0.200,none,2024-08-28\r\n35232,1106,AMER,home,retail,46.01,8,0.122,none,2024-01-20\r\n35233,2247,LATAM,toys,retail,26.95,7,0.169,none,2024-11-18\r\n35234,1538,AMER,grocery,retail,54.62,8,0.143,none,2024-05-07\r\n35235,1002,EMEA,grocery,online,33.34,1,0.067,none,2024-12-23\r\n35236,1575,APAC,toys,online,60.77,7,0.193,bundle,2024-07-01\r\n35237,2367,AMER,fashion,online,66.46,6,0.107,loyalty,2024-09-19\r\n35238,1164,EMEA,home,partner,104.72,3,0.127,loyalty,2024-10-08\r\n35239,1602,EMEA,sports,retail,103.13,4,0.069,bundle,2024-02-04\r\n35240,1804,AMER,grocery,online,39.83,8,0.086,loyalty,2024-10-23\r\n35241,2393,LATAM,home,partner,25.63,5,0.065,none,2024-10-02\r\n35242,1819,AMER,home,online,64.28,3,0.182,bundle,2024-02-06\r\n35243,1643,EMEA,sports,retail,161.08,5,0.245,bundle,2024-02-17\r\n35244,1753,APAC,fashion,retail,51.13,5,0.031,bundle,2024-02-21\r\n35245,1520,APAC,home,online,82.47,6,0.150,none,2024-09-15\r\n35246,1645,EMEA,electronics,online,62.61,5,0.223,none,2024-05-23\r\n35247,1346,AMER,electronics,retail,88.80,2,0.038,coupon,2024-02-11\r\n35248,1720,AMER,fashion,online,68.21,5,0.111,none,2024-11-22\r\n35249,1112,APAC,fashion,online,66.43,7,0.072,none,2024-01-02\r\n35250,1538,AMER,grocery,online,65.16,2,0
.240,bundle,2024-12-28\r\n35251,2148,EMEA,toys,online,56.76,5,0.068,none,2024-03-15\r\n35252,1287,AMER,home,retail,58.50,8,0.087,bundle,2024-12-19\r\n35253,1373,LATAM,home,retail,61.87,3,0.029,none,2024-08-09\r\n35254,2142,LATAM,toys,online,55.77,6,0.225,none,2024-04-04\r\n35255,2446,LATAM,electronics,online,99.74,4,0.224,none,2024-09-19\r\n35256,1437,EMEA,electronics,retail,46.57,5,0.113,none,2024-04-11\r\n35257,1448,EMEA,fashion,mobile,48.40,4,0.164,none,2024-05-21\r\n35258,1369,AMER,fashion,retail,27.76,7,0.218,none,2024-10-17\r\n35259,1041,APAC,electronics,retail,133.48,2,0.043,none,2024-07-24\r\n35260,2106,LATAM,home,retail,52.07,2,0.205,bundle,2024-02-06\r\n35261,1088,LATAM,home,online,70.35,3,0.083,none,2024-01-22\r\n35262,1752,APAC,home,online,39.40,7,0.144,loyalty,2024-08-05\r\n35263,1701,LATAM,grocery,online,117.66,7,0.172,none,2024-06-14\r\n35264,1607,LATAM,home,mobile,34.02,6,0.056,bundle,2024-05-02\r\n35265,1079,LATAM,grocery,online,156.28,3,0.166,coupon,2024-05-07\r\n35266,1307,AMER,grocery,online,74.58,3,0.148,loyalty,2024-10-16\r\n35267,1096,EMEA,grocery,online,59.39,4,0.052,bundle,2024-04-16\r\n35268,1491,EMEA,grocery,retail,256.06,1,0.159,coupon,2024-02-14\r\n35269,1325,APAC,fashion,partner,47.33,4,0.123,none,2024-01-22\r\n35270,1814,AMER,electronics,online,112.62,7,0.236,none,2024-10-27\r\n35271,2472,AMER,sports,online,53.84,7,0.193,coupon,2024-07-10\r\n35272,1629,LATAM,fashion,online,26.45,7,0.243,bundle,2024-12-06\r\n35273,1685,AMER,toys,online,62.22,4,0.209,none,2024-10-15\r\n35274,2255,AMER,grocery,retail,77.98,1,0.033,coupon,2024-10-11\r\n35275,1162,AMER,home,retail,118.43,7,0.013,none,2024-09-08\r\n35276,2055,AMER,grocery,retail,124.93,4,0.078,bundle,2024-04-14\r\n35277,2268,EMEA,sports,retail,29.63,7,0.108,none,2024-05-06\r\n35278,1908,AMER,grocery,retail,27.65,5,0.107,none,2024-02-02\r\n35279,1181,LATAM,grocery,online,62.66,6,0.011,coupon,2024-10-20\r\n35280,1735,LATAM,electronics,mobile,30.94,7,0.248,coupon,2024-01-11\r\n35281,2430,APAC,f
ashion,online,42.28,4,0.129,none,2024-05-26\r\n35282,1774,EMEA,grocery,online,91.07,6,0.242,none,2024-12-10\r\n35283,1306,LATAM,electronics,mobile,21.78,6,0.195,none,2024-10-09\r\n35284,1350,LATAM,electronics,online,35.93,4,0.201,none,2024-04-11\r\n35285,1608,AMER,electronics,retail,48.63,2,0.001,none,2024-11-28\r\n35286,2286,AMER,grocery,retail,34.62,6,0.228,none,2024-11-04\r\n35287,2311,LATAM,home,retail,51.04,5,0.127,none,2024-11-13\r\n35288,2114,AMER,electronics,online,99.96,2,0.048,none,2024-10-26\r\n35289,1177,LATAM,home,online,92.06,6,0.180,none,2024-03-09\r\n35290,1338,EMEA,sports,online,61.21,8,0.066,none,2024-07-07\r\n35291,2131,APAC,fashion,online,10.70,6,0.074,coupon,2024-05-01\r\n35292,2418,AMER,grocery,retail,42.60,6,0.199,none,2024-10-20\r\n35293,1344,EMEA,home,retail,61.28,5,0.010,none,2024-04-28\r\n35294,1921,LATAM,grocery,mobile,29.57,3,0.094,coupon,2024-05-24\r\n35295,2247,LATAM,toys,online,80.38,8,0.247,none,2024-07-06\r\n35296,2377,AMER,electronics,mobile,52.73,4,0.122,none,2024-07-11\r\n35297,1507,EMEA,grocery,online,54.40,4,0.246,none,2024-07-21\r\n35298,1747,EMEA,home,mobile,55.91,6,0.168,coupon,2024-05-12\r\n35299,1114,APAC,electronics,mobile,47.17,4,0.239,none,2024-09-08\r\n35300,1061,APAC,fashion,retail,22.35,5,0.174,none,2024-07-12\r\n35301,2432,AMER,electronics,online,56.08,1,0.225,none,2024-09-27\r\n35302,1494,AMER,home,online,65.28,1,0.208,none,2024-07-26\r\n35303,1236,AMER,sports,online,57.50,6,0.055,none,2024-04-08\r\n35304,1257,APAC,grocery,retail,95.29,7,0.133,none,2024-10-22\r\n35305,1341,EMEA,electronics,retail,77.20,5,0.152,none,2024-12-12\r\n35306,1586,LATAM,fashion,mobile,28.92,2,0.047,none,2024-09-14\r\n35307,2459,AMER,toys,retail,31.16,7,0.049,none,2024-02-12\r\n35308,2018,AMER,grocery,online,77.95,8,0.207,bundle,2024-10-22\r\n35309,1636,APAC,sports,retail,141.48,8,0.161,none,2024-01-04\r\n35310,1653,APAC,electronics,online,87.21,7,0.181,bundle,2024-06-04\r\n35311,1459,LATAM,fashion,retail,29.42,6,0.165,none,2024-11-04\r\n35
312,2213,APAC,sports,mobile,39.60,2,0.046,none,2024-07-24\r\n35313,2123,AMER,grocery,mobile,36.78,8,0.088,bundle,2024-07-03\r\n35314,2014,EMEA,sports,retail,35.83,1,0.029,none,2024-07-26\r\n35315,2135,EMEA,electronics,online,29.90,5,0.116,none,2024-07-01\r\n35316,1334,APAC,sports,online,72.43,5,0.107,bundle,2024-03-01\r\n35317,1827,EMEA,grocery,retail,22.00,5,0.112,none,2024-05-04\r\n35318,2067,LATAM,toys,online,70.55,4,0.088,coupon,2024-12-15\r\n35319,1153,AMER,grocery,mobile,127.44,2,0.229,none,2024-09-28\r\n35320,1380,AMER,fashion,mobile,35.76,6,0.197,none,2024-07-16\r\n35321,2330,EMEA,grocery,retail,65.48,8,0.032,none,2024-06-06\r\n35322,1809,APAC,sports,retail,61.05,8,0.225,loyalty,2024-12-25\r\n35323,2170,EMEA,home,online,84.97,4,0.115,none,2024-08-19\r\n35324,2017,EMEA,electronics,retail,147.22,4,0.106,coupon,2024-10-20\r\n35325,1528,EMEA,home,retail,120.48,2,0.144,coupon,2024-08-17\r\n35326,1308,EMEA,home,online,26.90,3,0.225,none,2024-01-21\r\n35327,1867,AMER,sports,partner,24.58,5,0.004,none,2024-08-17\r\n35328,1519,APAC,electronics,online,120.21,7,0.186,none,2024-01-24\r\n35329,2122,AMER,electronics,online,80.20,8,0.194,none,2024-02-17\r\n35330,2310,EMEA,fashion,online,39.68,7,0.033,bundle,2024-11-20\r\n35331,2349,APAC,grocery,retail,39.61,2,0.033,none,2024-08-19\r\n35332,2202,APAC,electronics,online,19.65,5,0.020,none,2024-01-19\r\n35333,1323,EMEA,fashion,mobile,68.82,1,0.204,none,2024-11-22\r\n35334,2079,EMEA,sports,retail,66.15,7,0.175,none,2024-03-14\r\n35335,1135,APAC,home,mobile,142.99,8,0.072,none,2024-12-26\r\n35336,1866,EMEA,fashion,online,106.36,1,0.075,none,2024-05-07\r\n35337,2390,AMER,toys,partner,79.15,8,0.015,coupon,2024-06-04\r\n35338,1700,EMEA,fashion,retail,21.27,6,0.224,loyalty,2024-02-26\r\n35339,1481,LATAM,electronics,retail,66.27,6,0.001,none,2024-06-06\r\n35340,2027,EMEA,toys,online,47.96,4,0.054,none,2024-04-22\r\n35341,2113,LATAM,home,online,83.31,3,0.005,none,2024-02-07\r\n35342,1219,LATAM,toys,online,173.61,5,0.220,none,2024-02-
18\r\n35343,1416,EMEA,sports,mobile,30.34,4,0.199,bundle,2024-11-12\r\n35344,1786,APAC,fashion,online,67.95,3,0.030,none,2024-12-27\r\n35345,2274,APAC,electronics,retail,163.05,5,0.063,coupon,2024-09-01\r\n35346,1852,AMER,fashion,online,70.45,6,0.063,bundle,2024-02-19\r\n35347,1461,LATAM,electronics,online,36.10,8,0.056,loyalty,2024-01-26\r\n35348,2418,AMER,grocery,retail,54.75,1,0.138,coupon,2024-04-28\r\n35349,1246,EMEA,sports,online,71.53,2,0.003,none,2024-05-02\r\n35350,2082,APAC,home,retail,79.16,7,0.061,none,2024-11-20\r\n35351,1725,APAC,home,retail,58.59,2,0.126,bundle,2024-03-10\r\n35352,2345,LATAM,home,online,57.93,2,0.240,loyalty,2024-03-15\r\n35353,1210,LATAM,sports,mobile,62.65,7,0.144,none,2024-05-11\r\n35354,1049,AMER,toys,retail,35.64,8,0.080,none,2024-05-05\r\n35355,1541,APAC,toys,retail,61.59,3,0.086,coupon,2024-11-17\r\n35356,2250,AMER,electronics,mobile,84.36,4,0.209,bundle,2024-05-17\r\n35357,2398,EMEA,home,retail,52.64,6,0.115,bundle,2024-08-01\r\n35358,1167,EMEA,fashion,retail,25.99,3,0.217,none,2024-04-13\r\n35359,1434,EMEA,toys,online,86.12,3,0.237,coupon,2024-06-02\r\n35360,1502,APAC,home,online,40.33,6,0.203,bundle,2024-12-16\r\n35361,1572,LATAM,sports,retail,135.96,1,0.037,none,2024-08-23\r\n35362,1554,AMER,toys,online,31.14,2,0.116,none,2024-12-10\r\n35363,1360,APAC,electronics,online,48.80,7,0.069,none,2024-08-23\r\n35364,1389,LATAM,home,mobile,39.57,1,0.246,coupon,2024-02-14\r\n35365,1748,APAC,home,online,52.96,8,0.072,none,2024-04-25\r\n35366,2301,EMEA,grocery,online,55.58,6,0.114,bundle,2024-12-27\r\n35367,2462,EMEA,grocery,partner,41.38,5,0.142,coupon,2024-04-22\r\n35368,1964,EMEA,electronics,mobile,99.28,7,0.196,none,2024-12-17\r\n35369,1817,APAC,sports,online,70.58,6,0.138,none,2024-11-14\r\n35370,2291,EMEA,electronics,online,84.85,4,0.177,none,2024-05-01\r\n35371,1761,EMEA,fashion,retail,49.58,1,0.112,none,2024-12-11\r\n35372,2312,APAC,sports,online,37.04,6,0.013,coupon,2024-09-12\r\n35373,1396,EMEA,fashion,retail,99.20,7,0.013,no
ne,2024-07-18\r\n35374,2277,EMEA,fashion,online,71.94,4,0.095,coupon,2024-01-03\r\n35375,2032,AMER,sports,online,57.13,5,0.009,loyalty,2024-04-24\r\n35376,1812,EMEA,electronics,online,91.72,8,0.012,none,2024-08-16\r\n35377,2144,EMEA,grocery,online,40.38,6,0.018,none,2024-10-13\r\n35378,1578,LATAM,fashion,retail,305.37,5,0.129,none,2024-01-01\r\n35379,1707,APAC,sports,retail,55.02,5,0.192,none,2024-04-27\r\n35380,1779,APAC,sports,retail,59.44,1,0.006,none,2024-02-11\r\n35381,2297,EMEA,electronics,mobile,32.53,4,0.106,bundle,2024-08-15\r\n35382,1516,EMEA,sports,mobile,89.78,8,0.246,none,2024-08-13\r\n35383,1736,AMER,electronics,online,101.71,7,0.216,none,2024-12-13\r\n35384,1687,APAC,toys,retail,28.75,3,0.248,none,2024-10-10\r\n35385,1947,EMEA,grocery,online,86.84,3,0.237,none,2024-08-01\r\n35386,1697,APAC,toys,retail,68.39,1,0.215,none,2024-11-27\r\n35387,1243,AMER,electronics,retail,32.97,8,0.114,none,2024-12-14\r\n35388,2205,AMER,fashion,retail,65.57,6,0.201,bundle,2024-07-02\r\n35389,2144,EMEA,toys,retail,146.24,4,0.014,none,2024-06-16\r\n35390,1939,LATAM,electronics,online,34.81,4,0.138,none,2024-06-17\r\n35391,1067,APAC,home,retail,74.97,6,0.036,none,2024-05-07\r\n35392,2255,AMER,fashion,online,31.95,5,0.220,bundle,2024-10-07\r\n35393,1132,EMEA,sports,online,28.72,4,0.127,none,2024-01-23\r\n35394,2188,EMEA,sports,retail,22.96,8,0.124,none,2024-01-26\r\n35395,1833,EMEA,home,mobile,44.49,8,0.134,loyalty,2024-05-27\r\n35396,2062,EMEA,electronics,retail,78.74,7,0.158,coupon,2024-10-14\r\n35397,2277,EMEA,fashion,mobile,50.73,6,0.108,loyalty,2024-09-24\r\n35398,1364,EMEA,grocery,online,71.88,3,0.080,none,2024-12-06\r\n35399,2149,EMEA,fashion,partner,65.33,7,0.241,none,2024-02-05\r\n35400,2276,AMER,sports,online,44.83,7,0.114,coupon,2024-09-17\r\n35401,2476,APAC,home,online,40.08,1,0.213,none,2024-06-20\r\n35402,1242,LATAM,fashion,online,31.22,8,0.072,none,2024-01-24\r\n35403,2442,APAC,grocery,online,31.98,3,0.103,none,2024-04-24\r\n35404,1151,APAC,home,online,35.20,4,
0.048,none,2024-12-13\r\n35405,1735,LATAM,electronics,online,36.83,3,0.050,none,2024-07-12\r\n35406,1400,EMEA,toys,online,61.00,5,0.233,loyalty,2024-12-11\r\n35407,1057,LATAM,electronics,online,59.62,2,0.052,none,2024-06-18\r\n35408,1052,LATAM,grocery,online,52.98,3,0.161,bundle,2024-05-14\r\n35409,2310,EMEA,sports,retail,198.04,2,0.140,bundle,2024-09-02\r\n35410,1225,APAC,home,mobile,26.71,8,0.153,coupon,2024-10-06\r\n35411,1547,AMER,grocery,retail,72.29,3,0.199,loyalty,2024-07-25\r\n35412,2057,APAC,home,online,73.38,5,0.217,none,2024-02-15\r\n35413,2307,LATAM,grocery,online,73.65,5,0.001,none,2024-01-03\r\n35414,1985,AMER,fashion,retail,28.35,4,0.175,bundle,2024-04-24\r\n35415,1866,EMEA,electronics,online,66.76,8,0.168,none,2024-08-07\r\n35416,2417,LATAM,electronics,online,103.38,6,0.106,coupon,2024-05-14\r\n35417,2468,EMEA,home,online,55.08,7,0.081,loyalty,2024-05-13\r\n35418,1690,LATAM,grocery,online,35.77,7,0.180,coupon,2024-12-09\r\n35419,2211,APAC,fashion,mobile,48.27,1,0.176,loyalty,2024-09-04\r\n35420,1520,APAC,electronics,mobile,66.30,2,0.095,coupon,2024-09-01\r\n35421,1438,APAC,home,retail,79.85,3,0.081,bundle,2024-11-14\r\n35422,2371,LATAM,toys,retail,75.01,8,0.010,bundle,2024-09-15\r\n35423,2300,EMEA,toys,mobile,108.68,6,0.120,none,2024-01-26\r\n35424,1377,APAC,grocery,online,246.77,1,0.201,none,2024-06-09\r\n35425,2473,EMEA,home,online,51.39,1,0.149,coupon,2024-06-26\r\n35426,1161,AMER,home,retail,36.76,2,0.006,none,2024-01-15\r\n35427,1425,EMEA,home,online,43.79,4,0.052,loyalty,2024-12-16\r\n35428,1055,AMER,sports,online,34.65,8,0.117,none,2024-02-11\r\n35429,1719,LATAM,sports,retail,31.86,8,0.248,none,2024-04-27\r\n35430,1286,EMEA,home,retail,63.90,2,0.127,loyalty,2024-02-24\r\n35431,1405,LATAM,electronics,retail,89.44,8,0.022,none,2024-08-11\r\n35432,2208,AMER,fashion,online,97.52,3,0.193,none,2024-12-14\r\n35433,2073,AMER,electronics,mobile,34.99,1,0.016,coupon,2024-05-05\r\n35434,2155,APAC,grocery,online,38.20,8,0.017,none,2024-06-03\r\n35435,1104
,APAC,fashion,online,77.13,5,0.119,none,2024-06-05\r\n35436,1216,APAC,grocery,retail,55.92,6,0.130,none,2024-04-04\r\n35437,1191,EMEA,electronics,retail,76.48,8,0.053,none,2024-01-11\r\n35438,2244,LATAM,toys,mobile,74.02,1,0.236,none,2024-05-09\r\n35439,2462,EMEA,fashion,online,44.54,5,0.037,none,2024-06-07\r\n35440,1450,EMEA,fashion,retail,78.45,4,0.246,bundle,2024-05-06\r\n35441,2226,EMEA,home,online,36.50,7,0.055,coupon,2024-08-01\r\n35442,1402,EMEA,home,mobile,53.72,4,0.158,none,2024-07-17\r\n35443,2437,LATAM,sports,mobile,168.45,3,0.029,none,2024-01-28\r\n35444,1299,LATAM,grocery,retail,75.61,1,0.228,bundle,2024-12-17\r\n35445,1308,EMEA,fashion,online,55.26,1,0.016,none,2024-04-06\r\n35446,1045,LATAM,sports,online,99.69,6,0.195,bundle,2024-12-24\r\n35447,1684,EMEA,sports,retail,36.00,4,0.172,none,2024-09-16\r\n35448,1459,LATAM,sports,retail,36.57,5,0.184,none,2024-06-11\r\n35449,1123,LATAM,home,online,83.59,8,0.239,coupon,2024-03-20\r\n35450,1462,LATAM,electronics,retail,57.55,8,0.095,none,2024-09-28\r\n35451,2233,EMEA,home,retail,53.27,7,0.090,coupon,2024-04-06\r\n35452,1761,EMEA,home,mobile,202.29,3,0.050,none,2024-06-04\r\n35453,1234,AMER,grocery,mobile,54.80,5,0.135,coupon,2024-10-28\r\n35454,2016,LATAM,home,retail,47.14,7,0.071,none,2024-10-27\r\n35455,2221,LATAM,fashion,online,55.49,7,0.110,none,2024-08-08\r\n35456,1618,EMEA,fashion,online,74.84,1,0.125,none,2024-11-06\r\n35457,2485,AMER,fashion,mobile,58.52,7,0.131,coupon,2024-03-23\r\n35458,2310,EMEA,grocery,online,34.39,7,0.168,bundle,2024-09-15\r\n35459,1906,APAC,sports,online,40.92,3,0.023,bundle,2024-06-13\r\n35460,1575,APAC,grocery,retail,129.59,1,0.113,none,2024-09-04\r\n35461,2192,APAC,electronics,retail,66.30,5,0.180,none,2024-04-24\r\n35462,1144,APAC,sports,online,19.79,2,0.011,none,2024-11-12\r\n35463,1664,LATAM,electronics,online,63.87,7,0.095,coupon,2024-07-12\r\n35464,1578,LATAM,home,online,91.91,3,0.026,bundle,2024-02-14\r\n35465,1612,LATAM,sports,online,66.19,6,0.016,none,2024-12-08\r\n35
466,2113,LATAM,electronics,partner,42.64,4,0.039,none,2024-12-07\r\n35467,1517,AMER,home,retail,46.04,5,0.126,coupon,2024-09-25\r\n35468,2078,APAC,sports,retail,34.27,5,0.236,coupon,2024-01-28\r\n35469,2282,EMEA,toys,online,71.55,7,0.013,none,2024-06-24\r\n35470,1035,EMEA,home,online,52.83,4,0.222,none,2024-02-04\r\n35471,1683,AMER,sports,online,106.20,4,0.155,bundle,2024-01-11\r\n35472,1398,APAC,grocery,retail,42.35,2,0.216,none,2024-01-07\r\n35473,1837,LATAM,toys,online,67.64,2,0.033,none,2024-09-05\r\n35474,1519,APAC,fashion,online,65.62,7,0.070,none,2024-06-14\r\n35475,1715,AMER,fashion,online,85.51,1,0.070,loyalty,2024-11-08\r\n35476,1831,APAC,toys,partner,64.67,4,0.103,none,2024-12-12\r\n35477,1232,LATAM,electronics,mobile,32.73,6,0.186,coupon,2024-11-13\r\n35478,1905,APAC,electronics,mobile,55.02,4,0.086,none,2024-11-18\r\n35479,1445,APAC,sports,retail,140.97,2,0.187,coupon,2024-09-22\r\n35480,1554,AMER,electronics,retail,35.83,6,0.066,none,2024-02-01\r\n35481,2240,LATAM,home,retail,67.31,4,0.208,none,2024-10-08\r\n35482,2213,APAC,home,retail,41.11,4,0.035,bundle,2024-01-15\r\n35483,1067,APAC,electronics,online,40.16,1,0.019,none,2024-09-26\r\n35484,2384,LATAM,grocery,online,26.36,4,0.091,loyalty,2024-05-01\r\n35485,2061,EMEA,grocery,online,73.13,6,0.175,none,2024-01-12\r\n35486,1639,APAC,grocery,online,46.30,3,0.155,none,2024-04-26\r\n35487,2396,AMER,home,retail,152.90,5,0.035,bundle,2024-06-07\r\n35488,2093,LATAM,fashion,online,136.38,8,0.020,none,2024-02-27\r\n35489,1832,APAC,grocery,online,97.54,5,0.112,none,2024-04-07\r\n35490,2393,LATAM,home,online,57.62,7,0.224,none,2024-04-16\r\n35491,1947,EMEA,electronics,online,113.55,2,0.052,coupon,2024-08-14\r\n35492,1258,EMEA,grocery,mobile,49.55,1,0.181,bundle,2024-01-26\r\n35493,1861,AMER,grocery,mobile,77.39,1,0.159,coupon,2024-09-15\r\n35494,1175,AMER,grocery,retail,52.09,4,0.069,coupon,2024-11-03\r\n35495,2174,LATAM,grocery,retail,46.62,2,0.188,bundle,2024-12-09\r\n35496,1159,LATAM,home,retail,30.59,4,0.199,
bundle,2024-11-04\r\n35497,2415,AMER,fashion,retail,39.17,2,0.082,none,2024-10-21\r\n35498,1740,EMEA,home,online,141.93,2,0.080,none,2024-01-08\r\n35499,1534,EMEA,toys,online,78.75,2,0.111,none,2024-09-06\r\n35500,1092,AMER,fashion,online,19.50,7,0.198,none,2024-06-06\r\n35501,1038,APAC,sports,retail,82.98,2,0.204,none,2024-01-02\r\n35502,2496,EMEA,grocery,online,18.67,8,0.216,bundle,2024-09-17\r\n35503,2151,APAC,grocery,mobile,63.54,2,0.107,none,2024-02-26\r\n35504,1460,LATAM,toys,retail,61.87,3,0.043,loyalty,2024-03-18\r\n35505,1808,APAC,sports,partner,51.29,8,0.234,none,2024-12-07\r\n35506,1294,APAC,home,online,91.85,7,0.162,bundle,2024-03-16\r\n35507,2308,AMER,grocery,mobile,39.56,8,0.100,bundle,2024-01-10\r\n35508,2262,APAC,grocery,retail,95.67,1,0.089,none,2024-04-07\r\n35509,1280,LATAM,home,retail,40.18,6,0.067,coupon,2024-11-25\r\n35510,1567,AMER,sports,retail,71.77,7,0.237,none,2024-10-26\r\n35511,1464,APAC,home,retail,106.68,1,0.138,bundle,2024-12-03\r\n35512,1722,EMEA,grocery,partner,37.96,5,0.013,coupon,2024-10-05\r\n35513,1525,APAC,electronics,retail,52.41,6,0.227,loyalty,2024-11-28\r\n35514,1586,LATAM,electronics,online,29.06,5,0.147,coupon,2024-02-11\r\n35515,2412,LATAM,home,online,114.42,3,0.130,bundle,2024-04-03\r\n35516,2318,AMER,grocery,online,122.56,6,0.022,loyalty,2024-10-21\r\n35517,2113,LATAM,grocery,online,123.00,8,0.081,none,2024-09-02\r\n35518,2099,AMER,grocery,online,26.17,3,0.033,loyalty,2024-08-12\r\n35519,2473,EMEA,toys,online,131.36,4,0.214,coupon,2024-12-12\r\n35520,1162,AMER,fashion,mobile,65.87,8,0.224,none,2024-06-09\r\n35521,1280,LATAM,sports,retail,95.23,2,0.141,coupon,2024-12-12\r\n35522,2034,LATAM,grocery,online,84.75,2,0.113,none,2024-09-04\r\n35523,1116,LATAM,grocery,online,123.07,2,0.182,none,2024-02-25\r\n35524,1239,APAC,grocery,retail,134.77,5,0.097,none,2024-05-03\r\n35525,1708,LATAM,toys,retail,51.24,3,0.151,none,2024-03-02\r\n35526,1562,AMER,electronics,online,79.02,2,0.148,none,2024-02-24\r\n35527,2299,EMEA,fashion,onl
ine,100.03,5,0.228,coupon,2024-05-16\r\n35528,1501,AMER,grocery,mobile,63.17,8,0.013,coupon,2024-03-03\r\n35529,1358,APAC,toys,retail,54.47,5,0.129,none,2024-08-09\r\n35530,1432,APAC,electronics,online,24.36,6,0.069,coupon,2024-01-14\r\n35531,1776,APAC,electronics,online,144.94,6,0.239,bundle,2024-02-16\r\n35532,1482,AMER,sports,mobile,30.16,5,0.101,none,2024-12-07\r\n35533,1025,EMEA,sports,mobile,47.72,4,0.197,bundle,2024-03-10\r\n35534,1968,EMEA,sports,online,55.54,5,0.020,loyalty,2024-03-11\r\n35535,1301,AMER,grocery,mobile,68.58,1,0.131,none,2024-07-28\r\n35536,1714,APAC,grocery,retail,143.22,7,0.168,loyalty,2024-07-17\r\n35537,2413,AMER,home,online,76.00,5,0.207,loyalty,2024-02-17\r\n35538,2025,EMEA,electronics,online,75.40,4,0.073,bundle,2024-08-23\r\n35539,1281,AMER,home,partner,182.00,3,0.066,coupon,2024-08-23\r\n35540,2329,LATAM,grocery,online,38.99,4,0.094,none,2024-01-07\r\n35541,1229,LATAM,grocery,online,40.96,4,0.087,coupon,2024-01-08\r\n35542,1910,LATAM,toys,online,96.78,6,0.023,loyalty,2024-02-03\r\n35543,1828,EMEA,electronics,online,69.74,5,0.157,bundle,2024-05-16\r\n35544,1792,AMER,electronics,online,22.88,4,0.100,none,2024-12-11\r\n35545,1732,LATAM,home,mobile,145.50,7,0.086,none,2024-06-19\r\n35546,1856,EMEA,electronics,retail,53.08,4,0.026,bundle,2024-07-11\r\n35547,1721,EMEA,grocery,mobile,77.23,2,0.243,bundle,2024-01-17\r\n35548,2135,EMEA,toys,online,49.01,8,0.225,none,2024-10-23\r\n35549,2076,AMER,fashion,partner,42.00,6,0.163,coupon,2024-03-06\r\n35550,1791,LATAM,home,retail,144.83,5,0.172,loyalty,2024-07-23\r\n35551,1926,AMER,fashion,online,43.38,7,0.135,none,2024-02-19\r\n35552,2199,LATAM,electronics,mobile,57.35,1,0.168,coupon,2024-08-23\r\n35553,2006,APAC,fashion,retail,45.43,4,0.036,bundle,2024-07-28\r\n35554,2142,LATAM,grocery,online,47.90,8,0.155,coupon,2024-03-26\r\n35555,1016,AMER,grocery,online,28.19,6,0.090,coupon,2024-07-24\r\n35556,1239,APAC,grocery,retail,50.53,2,0.102,none,2024-12-14\r\n35557,1646,APAC,electronics,online,88.38,
4,0.181,coupon,2024-11-24\r\n35558,1688,LATAM,grocery,online,76.63,6,0.182,coupon,2024-11-24\r\n35559,1674,LATAM,fashion,online,23.27,7,0.197,bundle,2024-09-22\r\n35560,1592,LATAM,home,online,96.97,2,0.139,loyalty,2024-12-08\r\n35561,1357,EMEA,grocery,online,68.52,5,0.192,coupon,2024-01-19\r\n35562,1120,LATAM,grocery,online,46.03,4,0.027,none,2024-05-14\r\n35563,1189,AMER,sports,online,54.21,7,0.162,loyalty,2024-10-22\r\n35564,2228,EMEA,grocery,retail,85.01,8,0.032,bundle,2024-06-22\r\n35565,1058,LATAM,grocery,retail,49.42,8,0.168,none,2024-04-17\r\n35566,1343,LATAM,home,online,65.49,2,0.008,none,2024-08-03\r\n35567,2139,AMER,electronics,retail,47.39,7,0.246,coupon,2024-04-17\r\n35568,1961,EMEA,sports,online,73.84,1,0.088,loyalty,2024-10-27\r\n35569,1136,EMEA,home,online,88.51,2,0.153,coupon,2024-06-09\r\n35570,2332,APAC,grocery,online,90.80,2,0.144,bundle,2024-12-05\r\n35571,1432,APAC,sports,partner,42.52,3,0.105,none,2024-07-24\r\n35572,2374,LATAM,toys,retail,65.36,6,0.130,bundle,2024-08-16\r\n35573,1860,EMEA,grocery,retail,33.53,6,0.008,bundle,2024-08-18\r\n35574,2056,LATAM,electronics,online,27.56,6,0.109,none,2024-06-11\r\n35575,1459,LATAM,home,online,61.80,2,0.195,none,2024-06-23\r\n35576,2226,EMEA,home,online,51.91,6,0.082,none,2024-10-13\r\n35577,1840,LATAM,grocery,online,72.84,7,0.155,none,2024-12-25\r\n35578,2215,LATAM,electronics,online,33.82,7,0.085,none,2024-01-17\r\n35579,1261,APAC,sports,retail,61.79,3,0.221,bundle,2024-09-07\r\n35580,1602,EMEA,electronics,retail,69.15,8,0.033,loyalty,2024-11-13\r\n35581,2329,LATAM,sports,retail,54.66,1,0.041,bundle,2024-12-25\r\n35582,1026,APAC,grocery,online,76.95,4,0.046,bundle,2024-12-11\r\n35583,1267,EMEA,electronics,online,77.48,5,0.045,coupon,2024-01-22\r\n35584,1786,APAC,electronics,retail,26.33,4,0.013,none,2024-08-08\r\n35585,2042,LATAM,electronics,retail,67.96,1,0.227,coupon,2024-09-15\r\n35586,2244,LATAM,grocery,retail,73.44,3,0.017,bundle,2024-09-27\r\n35587,1003,APAC,grocery,retail,53.85,6,0.168,none,202
4-06-24\r\n35588,1916,AMER,sports,partner,73.94,4,0.227,none,2024-05-22\r\n35589,2054,AMER,grocery,online,40.71,4,0.081,bundle,2024-10-15\r\n35590,1692,LATAM,grocery,online,123.22,1,0.189,none,2024-11-09\r\n35591,1168,APAC,home,partner,81.13,8,0.080,bundle,2024-12-04\r\n35592,2019,AMER,sports,retail,62.53,3,0.024,bundle,2024-04-01\r\n35593,2169,EMEA,sports,mobile,96.36,5,0.134,bundle,2024-06-09\r\n35594,2067,LATAM,electronics,mobile,48.22,3,0.033,loyalty,2024-05-11\r\n35595,1722,EMEA,home,retail,62.75,7,0.228,none,2024-07-17\r\n35596,1353,EMEA,home,online,64.39,6,0.062,none,2024-05-26\r\n35597,2251,APAC,grocery,retail,40.02,4,0.154,none,2024-08-06\r\n35598,2125,LATAM,grocery,retail,29.92,1,0.196,coupon,2024-05-01\r\n35599,1384,LATAM,fashion,online,81.83,1,0.193,coupon,2024-06-28\r\n35600,1125,LATAM,fashion,online,70.78,6,0.094,none,2024-02-11\r\n35601,1132,EMEA,home,retail,17.72,1,0.241,none,2024-08-10\r\n35602,2194,APAC,sports,retail,78.15,4,0.082,none,2024-12-10\r\n35603,1303,LATAM,sports,mobile,64.54,1,0.039,loyalty,2024-04-05\r\n35604,2274,APAC,electronics,retail,31.04,1,0.076,loyalty,2024-09-03\r\n35605,2140,AMER,electronics,online,125.50,4,0.047,none,2024-06-28\r\n35606,2283,AMER,electronics,retail,70.61,6,0.134,none,2024-04-18\r\n35607,1891,APAC,toys,partner,102.21,3,0.223,bundle,2024-09-20\r\n35608,2305,AMER,home,retail,66.13,7,0.085,none,2024-03-24\r\n35609,1795,EMEA,home,retail,101.05,1,0.151,bundle,2024-07-09\r\n35610,2253,AMER,toys,online,80.56,3,0.032,none,2024-10-15\r\n35611,1792,AMER,fashion,retail,182.72,1,0.064,bundle,2024-05-25\r\n35612,2080,LATAM,fashion,mobile,67.70,2,0.061,bundle,2024-04-16\r\n35613,2296,AMER,grocery,retail,85.10,3,0.144,bundle,2024-09-24\r\n35614,2323,AMER,toys,online,74.94,1,0.090,bundle,2024-07-16\r\n35615,2357,EMEA,electronics,retail,151.65,5,0.139,none,2024-05-09\r\n35616,2495,EMEA,grocery,online,87.48,3,0.242,coupon,2024-04-21\r\n35617,1786,APAC,electronics,retail,25.38,6,0.190,coupon,2024-11-27\r\n35618,1773,LATAM,grocery
,online,77.78,5,0.081,loyalty,2024-06-18\r\n35619,1745,APAC,grocery,online,21.51,1,0.076,loyalty,2024-04-10\r\n35620,1269,LATAM,fashion,online,20.24,7,0.183,none,2024-07-02\r\n35621,1246,EMEA,sports,retail,51.64,5,0.057,none,2024-01-08\r\n35622,1427,EMEA,home,retail,122.50,5,0.130,bundle,2024-03-18\r\n35623,2136,AMER,fashion,online,80.98,4,0.120,none,2024-04-07\r\n35624,1102,APAC,fashion,mobile,50.44,3,0.059,none,2024-06-24\r\n35625,1957,AMER,electronics,retail,39.69,1,0.249,coupon,2024-12-18\r\n35626,1478,EMEA,toys,online,31.33,2,0.099,coupon,2024-08-22\r\n35627,1809,APAC,fashion,online,95.21,5,0.172,coupon,2024-09-05\r\n35628,1588,LATAM,electronics,mobile,38.26,5,0.100,none,2024-07-10\r\n35629,2375,AMER,electronics,online,48.79,2,0.167,none,2024-12-24\r\n35630,1327,APAC,sports,mobile,90.02,4,0.068,coupon,2024-12-05\r\n35631,2121,APAC,home,mobile,73.89,6,0.212,none,2024-09-11\r\n35632,1181,LATAM,electronics,mobile,82.72,7,0.111,bundle,2024-04-15\r\n35633,1140,LATAM,grocery,online,60.00,7,0.095,none,2024-07-26\r\n35634,1745,APAC,grocery,mobile,34.59,6,0.036,none,2024-05-21\r\n35635,1253,AMER,fashion,online,50.60,8,0.009,none,2024-03-21\r\n35636,2484,APAC,sports,online,35.40,6,0.080,coupon,2024-11-10\r\n35637,1518,AMER,fashion,online,50.27,8,0.012,none,2024-01-18\r\n35638,2181,AMER,grocery,retail,67.42,5,0.031,none,2024-05-03\r\n35639,1474,LATAM,home,retail,24.54,4,0.110,none,2024-01-13\r\n35640,1903,LATAM,grocery,retail,35.35,2,0.088,coupon,2024-02-20\r\n35641,1562,AMER,grocery,retail,56.84,6,0.151,bundle,2024-08-11\r\n35642,2247,LATAM,toys,retail,14.34,2,0.004,bundle,2024-10-04\r\n35643,1498,LATAM,grocery,partner,82.65,3,0.219,loyalty,2024-07-02\r\n35644,1274,LATAM,electronics,retail,52.53,4,0.219,none,2024-03-10\r\n35645,2210,APAC,fashion,online,19.41,7,0.003,loyalty,2024-01-12\r\n35646,1294,APAC,home,online,38.41,4,0.112,none,2024-04-11\r\n35647,1413,LATAM,grocery,partner,29.01,7,0.218,none,2024-07-28\r\n35648,1498,LATAM,fashion,retail,43.34,4,0.247,coupon,2024-1
2-11\r\n35649,2278,APAC,sports,retail,88.91,3,0.065,none,2024-06-27\r\n35650,1325,APAC,grocery,online,63.26,5,0.074,coupon,2024-11-21\r\n35651,1483,EMEA,home,online,39.12,1,0.167,none,2024-10-19\r\n35652,1278,AMER,electronics,retail,20.15,2,0.135,bundle,2024-04-09\r\n35653,1550,APAC,grocery,retail,44.03,4,0.180,coupon,2024-10-19\r\n35654,1086,AMER,electronics,retail,75.70,1,0.076,coupon,2024-05-07\r\n35655,2265,APAC,toys,mobile,30.61,3,0.098,none,2024-01-25\r\n35656,1210,LATAM,home,retail,65.75,5,0.150,loyalty,2024-04-28\r\n35657,2274,APAC,electronics,retail,43.43,1,0.167,none,2024-09-13\r\n35658,1005,LATAM,fashion,online,54.67,3,0.149,loyalty,2024-06-19\r\n35659,1952,EMEA,toys,retail,127.81,3,0.086,bundle,2024-01-06\r\n35660,1718,EMEA,grocery,retail,110.97,6,0.171,none,2024-01-02\r\n35661,2310,EMEA,fashion,online,40.06,7,0.013,bundle,2024-02-10\r\n35662,2026,LATAM,home,online,45.37,2,0.022,coupon,2024-02-25\r\n35663,1933,EMEA,sports,mobile,65.30,4,0.174,bundle,2024-10-27\r\n35664,1401,LATAM,grocery,online,18.24,4,0.091,coupon,2024-12-17\r\n35665,1493,APAC,toys,online,45.27,7,0.237,none,2024-08-21\r\n35666,1589,AMER,grocery,mobile,41.04,8,0.175,none,2024-05-16\r\n35667,2093,LATAM,home,mobile,39.17,1,0.099,none,2024-09-25\r\n35668,2300,EMEA,electronics,online,46.11,4,0.223,none,2024-03-27\r\n35669,2360,EMEA,home,retail,42.65,1,0.116,none,2024-10-20\r\n35670,1157,LATAM,home,retail,40.54,3,0.102,none,2024-04-22\r\n35671,1043,LATAM,home,online,26.62,3,0.095,none,2024-08-15\r\n35672,2309,AMER,home,online,52.64,3,0.232,none,2024-05-14\r\n35673,1833,EMEA,electronics,retail,33.83,6,0.105,none,2024-06-02\r\n35674,1821,LATAM,grocery,online,46.92,7,0.223,coupon,2024-12-01\r\n35675,2358,AMER,electronics,online,19.15,4,0.003,none,2024-11-05\r\n35676,2257,AMER,home,mobile,88.57,6,0.116,none,2024-07-07\r\n35677,2324,AMER,grocery,partner,54.02,2,0.242,coupon,2024-06-26\r\n35678,2342,AMER,toys,retail,49.33,3,0.023,none,2024-05-26\r\n35679,1394,LATAM,toys,retail,60.00,3,0.065,coupon,
2024-04-24\r\n35680,1715,AMER,home,online,87.40,4,0.070,bundle,2024-08-14\r\n35681,1464,APAC,grocery,partner,40.37,5,0.155,none,2024-12-17\r\n35682,2339,AMER,fashion,retail,69.83,7,0.096,bundle,2024-08-17\r\n35683,1720,AMER,sports,online,81.98,6,0.018,coupon,2024-09-25\r\n35684,2063,APAC,sports,mobile,39.74,5,0.024,none,2024-08-22\r\n35685,1503,APAC,grocery,retail,63.26,7,0.205,coupon,2024-05-27\r\n35686,1040,LATAM,electronics,online,28.19,5,0.192,none,2024-03-08\r\n35687,1721,EMEA,home,retail,118.09,6,0.014,bundle,2024-08-15\r\n35688,1216,APAC,home,online,53.67,4,0.207,bundle,2024-05-11\r\n35689,2466,APAC,electronics,online,48.83,6,0.060,none,2024-03-19\r\n35690,2331,APAC,home,online,68.26,7,0.061,bundle,2024-10-12\r\n35691,1258,EMEA,electronics,retail,103.37,3,0.196,bundle,2024-10-09\r\n35692,1757,EMEA,fashion,retail,24.18,2,0.170,loyalty,2024-12-11\r\n35693,1673,AMER,electronics,retail,40.26,1,0.221,bundle,2024-08-09\r\n35694,1203,AMER,home,retail,83.30,2,0.182,none,2024-04-21\r\n35695,1998,APAC,fashion,retail,87.75,6,0.130,none,2024-11-13\r\n35696,1801,LATAM,grocery,retail,91.96,5,0.048,bundle,2024-02-19\r\n35697,1719,LATAM,home,retail,94.44,2,0.240,none,2024-12-12\r\n35698,1220,LATAM,fashion,retail,69.26,1,0.044,bundle,2024-07-03\r\n35699,2126,APAC,home,online,10.05,3,0.010,none,2024-09-16\r\n35700,1044,EMEA,sports,retail,99.39,3,0.176,none,2024-11-19\r\n35701,1692,LATAM,fashion,partner,61.32,1,0.223,loyalty,2024-04-27\r\n35702,2053,AMER,grocery,online,65.42,2,0.095,coupon,2024-10-17\r\n35703,2492,LATAM,home,retail,15.46,1,0.037,none,2024-03-21\r\n35704,1586,LATAM,electronics,retail,72.40,5,0.018,coupon,2024-12-17\r\n35705,2026,LATAM,electronics,retail,153.99,4,0.054,coupon,2024-07-13\r\n35706,1763,LATAM,fashion,retail,22.07,4,0.046,none,2024-05-10\r\n35707,2381,AMER,electronics,retail,97.05,4,0.054,bundle,2024-12-17\r\n35708,1109,APAC,sports,retail,18.63,4,0.118,loyalty,2024-08-08\r\n35709,1086,AMER,fashion,online,68.50,7,0.174,none,2024-06-12\r\n35710,1150,LA
TAM,home,partner,84.05,8,0.125,none,2024-03-06\r\n35711,1905,APAC,grocery,online,64.78,1,0.029,loyalty,2024-01-18\r\n35712,2397,LATAM,sports,online,48.47,3,0.128,none,2024-08-05\r\n35713,1672,APAC,fashion,retail,177.66,7,0.248,coupon,2024-08-24\r\n35714,2013,APAC,grocery,online,196.86,3,0.074,loyalty,2024-10-08\r\n35715,1526,EMEA,grocery,retail,97.60,7,0.225,none,2024-03-19\r\n35716,1313,EMEA,sports,retail,57.55,1,0.164,none,2024-09-25\r\n35717,1596,EMEA,grocery,partner,59.68,1,0.003,none,2024-12-14\r\n35718,1259,EMEA,toys,retail,63.94,3,0.218,none,2024-01-11\r\n35719,1708,LATAM,electronics,mobile,28.61,5,0.180,loyalty,2024-11-02\r\n35720,1512,APAC,fashion,partner,57.79,2,0.197,none,2024-08-02\r\n35721,2105,APAC,fashion,retail,59.42,1,0.092,none,2024-04-26\r\n35722,2349,APAC,sports,partner,28.52,7,0.040,none,2024-11-14\r\n35723,1587,LATAM,electronics,online,55.35,1,0.107,none,2024-02-13\r\n35724,1589,AMER,fashion,partner,42.67,8,0.176,none,2024-07-27\r\n35725,2203,APAC,fashion,online,49.80,5,0.133,none,2024-10-11\r\n35726,1100,AMER,grocery,retail,52.47,5,0.219,none,2024-07-05\r\n35727,1717,AMER,electronics,retail,41.67,5,0.148,none,2024-11-25\r\n35728,1426,AMER,sports,retail,117.47,4,0.141,loyalty,2024-01-20\r\n35729,2059,AMER,grocery,mobile,18.56,7,0.048,none,2024-06-21\r\n35730,1524,LATAM,grocery,online,42.62,6,0.020,none,2024-12-19\r\n35731,1814,AMER,toys,online,136.75,4,0.195,none,2024-09-02\r\n35732,1719,LATAM,electronics,retail,80.75,2,0.141,none,2024-12-18\r\n35733,1481,LATAM,home,online,61.81,7,0.249,coupon,2024-11-13\r\n35734,1212,LATAM,grocery,retail,30.10,7,0.039,bundle,2024-12-18\r\n35735,1987,AMER,grocery,online,45.18,5,0.100,none,2024-07-26\r\n35736,1976,AMER,home,online,32.60,8,0.219,bundle,2024-06-22\r\n35737,1105,AMER,grocery,mobile,46.76,7,0.187,loyalty,2024-12-01\r\n35738,1042,LATAM,fashion,online,67.89,5,0.012,bundle,2024-06-22\r\n35739,1026,APAC,grocery,retail,51.46,1,0.111,none,2024-11-27\r\n35740,1786,APAC,grocery,mobile,41.85,5,0.137,bundle,2
024-02-08\r\n35741,1180,AMER,toys,online,41.11,5,0.196,bundle,2024-05-26\r\n35742,1518,AMER,sports,mobile,75.55,2,0.010,none,2024-09-13\r\n35743,1609,LATAM,electronics,retail,86.00,8,0.215,none,2024-05-01\r\n35744,1386,AMER,toys,online,67.73,7,0.129,none,2024-06-27\r\n35745,1116,LATAM,toys,retail,29.41,2,0.127,none,2024-03-23\r\n35746,1051,EMEA,home,retail,120.39,1,0.111,none,2024-05-21\r\n35747,2227,LATAM,electronics,online,41.28,5,0.240,loyalty,2024-01-25\r\n35748,1732,LATAM,electronics,mobile,51.98,1,0.189,none,2024-06-05\r\n35749,2027,EMEA,sports,online,36.59,2,0.011,none,2024-06-22\r\n35750,1016,AMER,fashion,online,41.07,2,0.245,bundle,2024-01-25\r\n35751,1506,EMEA,grocery,retail,79.71,8,0.206,none,2024-12-02\r\n35752,1468,AMER,electronics,online,203.24,8,0.226,none,2024-03-21\r\n35753,1274,LATAM,home,partner,137.45,4,0.057,coupon,2024-12-22\r\n35754,2199,LATAM,sports,retail,41.24,3,0.242,loyalty,2024-03-01\r\n35755,1571,EMEA,electronics,online,35.17,1,0.158,bundle,2024-10-18\r\n35756,1306,LATAM,toys,retail,99.01,5,0.203,none,2024-01-23\r\n35757,2275,LATAM,grocery,retail,41.37,6,0.114,none,2024-10-02\r\n35758,1982,EMEA,fashion,online,72.70,1,0.038,coupon,2024-07-01\r\n35759,1256,LATAM,home,mobile,40.37,3,0.153,none,2024-07-03\r\n35760,2243,APAC,grocery,retail,24.39,1,0.100,none,2024-01-21\r\n35761,2248,LATAM,toys,online,34.34,4,0.208,none,2024-05-07\r\n35762,1424,APAC,home,mobile,57.85,5,0.111,coupon,2024-07-07\r\n35763,1879,EMEA,grocery,mobile,55.42,3,0.214,bundle,2024-10-08\r\n35764,1451,EMEA,sports,retail,64.03,5,0.188,none,2024-02-13\r\n35765,1403,APAC,fashion,retail,61.38,5,0.076,none,2024-12-14\r\n35766,1229,LATAM,sports,retail,22.64,2,0.183,none,2024-03-28\r\n35767,2495,EMEA,electronics,mobile,46.93,3,0.126,none,2024-09-03\r\n35768,2054,AMER,electronics,mobile,43.41,6,0.129,coupon,2024-06-21\r\n35769,1319,EMEA,grocery,online,149.57,8,0.203,none,2024-06-08\r\n35770,1080,LATAM,grocery,online,39.93,4,0.054,none,2024-10-05\r\n35771,1898,EMEA,sports,retail,30
.27,2,0.103,coupon,2024-05-15\r\n35772,2149,EMEA,home,online,22.20,1,0.019,none,2024-06-10\r\n35773,1780,APAC,electronics,online,61.14,5,0.057,coupon,2024-11-03\r\n35774,1184,AMER,grocery,retail,82.41,2,0.202,none,2024-12-09\r\n35775,1874,LATAM,grocery,online,47.57,4,0.204,none,2024-08-07\r\n35776,1493,APAC,sports,retail,47.76,4,0.168,none,2024-11-11\r\n35777,2403,LATAM,fashion,retail,44.30,1,0.194,none,2024-01-13\r\n35778,1048,EMEA,electronics,online,37.78,5,0.222,coupon,2024-10-01\r\n35779,1631,APAC,grocery,online,59.64,1,0.243,none,2024-12-14\r\n35780,2425,APAC,sports,online,57.55,2,0.141,none,2024-05-05\r\n35781,1790,AMER,toys,online,88.04,4,0.191,coupon,2024-09-07\r\n35782,1495,LATAM,toys,online,91.86,2,0.011,none,2024-08-23\r\n35783,2167,APAC,sports,retail,62.70,6,0.209,loyalty,2024-04-22\r\n35784,1158,LATAM,grocery,retail,44.78,4,0.238,none,2024-02-26\r\n35785,2121,APAC,home,retail,60.21,6,0.081,none,2024-04-18\r\n35786,2097,AMER,fashion,online,67.69,2,0.138,none,2024-08-17\r\n35787,2409,APAC,home,online,68.38,8,0.040,bundle,2024-02-10\r\n35788,1768,AMER,sports,online,59.64,6,0.199,coupon,2024-01-02\r\n35789,2398,EMEA,home,online,37.74,8,0.021,bundle,2024-12-08\r\n35790,1624,AMER,home,online,131.97,6,0.138,bundle,2024-01-20\r\n35791,2071,APAC,home,retail,28.33,3,0.069,none,2024-09-02\r\n35792,1135,APAC,grocery,retail,127.11,4,0.089,none,2024-04-25\r\n35793,1063,AMER,home,retail,59.06,2,0.021,none,2024-10-14\r\n35794,1218,AMER,sports,online,174.00,6,0.075,loyalty,2024-11-17\r\n35795,2173,LATAM,toys,retail,38.94,5,0.077,none,2024-03-09\r\n35796,1422,LATAM,fashion,online,56.65,6,0.035,loyalty,2024-10-22\r\n35797,1711,APAC,toys,online,41.34,7,0.059,bundle,2024-10-01\r\n35798,1173,LATAM,grocery,mobile,79.92,1,0.124,none,2024-12-27\r\n35799,2010,APAC,home,partner,126.76,6,0.014,none,2024-07-17\r\n35800,1592,LATAM,fashion,retail,168.69,8,0.028,bundle,2024-12-20\r\n35801,2090,AMER,sports,retail,125.36,2,0.191,loyalty,2024-02-20\r\n35802,1808,APAC,fashion,partner,75.4
3,3,0.024,none,2024-03-01\r\n35803,1532,APAC,grocery,mobile,52.20,5,0.023,none,2024-03-12\r\n35804,2273,APAC,electronics,online,66.18,8,0.205,loyalty,2024-05-27\r\n35805,2134,AMER,sports,online,43.14,3,0.201,none,2024-06-23\r\n35806,1034,EMEA,grocery,retail,91.88,8,0.006,none,2024-02-01\r\n35807,1343,LATAM,grocery,online,48.07,5,0.015,bundle,2024-03-08\r\n35808,1647,LATAM,electronics,online,27.18,3,0.143,none,2024-02-03\r\n35809,2285,APAC,home,online,55.86,6,0.229,none,2024-06-16\r\n35810,1036,EMEA,grocery,online,27.25,7,0.092,none,2024-03-03\r\n35811,2485,AMER,home,online,35.24,8,0.004,none,2024-05-24\r\n35812,1743,LATAM,electronics,retail,56.58,5,0.220,bundle,2024-12-02\r\n35813,2480,APAC,grocery,mobile,183.60,1,0.141,none,2024-07-28\r\n35814,1711,APAC,grocery,online,41.81,2,0.080,none,2024-12-18\r\n35815,2128,EMEA,home,online,91.16,5,0.009,loyalty,2024-05-15\r\n35816,1476,APAC,grocery,retail,59.13,8,0.200,none,2024-11-06\r\n35817,2163,EMEA,electronics,retail,62.37,7,0.051,none,2024-08-12\r\n35818,1548,EMEA,home,online,60.32,7,0.213,none,2024-04-12\r\n35819,1581,APAC,fashion,online,126.62,2,0.002,coupon,2024-07-01\r\n35820,2121,APAC,toys,retail,38.93,2,0.189,none,2024-10-12\r\n35821,2416,LATAM,grocery,mobile,65.07,3,0.171,coupon,2024-12-25\r\n35822,1690,LATAM,grocery,mobile,77.74,8,0.055,coupon,2024-07-22\r\n35823,1021,AMER,grocery,retail,33.79,2,0.056,none,2024-02-16\r\n35824,1658,AMER,home,partner,45.48,4,0.247,none,2024-09-12\r\n35825,1272,AMER,grocery,retail,33.08,6,0.036,coupon,2024-07-02\r\n35826,1528,EMEA,toys,mobile,68.88,7,0.103,coupon,2024-06-15\r\n35827,1946,AMER,home,mobile,100.68,2,0.099,bundle,2024-11-15\r\n35828,2418,AMER,grocery,online,71.03,5,0.033,bundle,2024-06-03\r\n35829,2227,LATAM,toys,online,94.70,6,0.118,none,2024-02-05\r\n35830,2272,EMEA,fashion,online,55.84,1,0.221,none,2024-11-12\r\n35831,1291,EMEA,fashion,retail,105.92,5,0.082,none,2024-05-03\r\n35832,1538,AMER,electronics,mobile,72.39,6,0.208,none,2024-11-19\r\n35833,1938,APAC,home,onl
ine,24.58,2,0.109,none,2024-06-21\r\n35834,1565,AMER,electronics,partner,22.47,6,0.077,bundle,2024-01-21\r\n35835,1165,AMER,fashion,retail,105.32,6,0.013,none,2024-07-20\r\n35836,1205,APAC,toys,online,86.12,4,0.142,none,2024-07-11\r\n35837,1216,APAC,grocery,mobile,61.65,8,0.216,loyalty,2024-09-23\r\n35838,2361,EMEA,sports,mobile,21.87,7,0.012,none,2024-08-20\r\n35839,1735,LATAM,grocery,retail,110.03,8,0.146,coupon,2024-07-17\r\n35840,1223,LATAM,fashion,online,39.76,4,0.065,none,2024-12-03\r\n35841,1794,AMER,home,online,64.36,5,0.061,bundle,2024-01-04\r\n35842,2307,LATAM,electronics,online,14.41,4,0.079,coupon,2024-09-14\r\n35843,1117,LATAM,grocery,retail,58.39,3,0.230,none,2024-03-23\r\n35844,1030,EMEA,fashion,online,38.04,2,0.190,coupon,2024-07-04\r\n35845,2193,AMER,home,online,56.93,4,0.226,none,2024-05-15\r\n35846,2447,AMER,home,mobile,42.64,7,0.110,none,2024-07-11\r\n35847,2197,LATAM,grocery,mobile,45.81,2,0.246,none,2024-01-18\r\n35848,1008,AMER,electronics,retail,52.85,7,0.231,bundle,2024-05-08\r\n35849,1242,LATAM,sports,retail,101.42,3,0.044,none,2024-07-05\r\n35850,1511,EMEA,electronics,online,32.11,8,0.203,bundle,2024-01-24\r\n35851,2423,LATAM,toys,mobile,116.20,8,0.058,loyalty,2024-12-28\r\n35852,1321,EMEA,electronics,online,54.97,8,0.134,none,2024-07-27\r\n35853,1841,AMER,home,retail,15.55,6,0.021,none,2024-05-10\r\n35854,2324,AMER,electronics,mobile,54.91,4,0.243,coupon,2024-06-14\r\n35855,2411,EMEA,toys,retail,29.99,7,0.125,coupon,2024-04-26\r\n35856,1026,APAC,sports,retail,158.25,6,0.153,none,2024-07-12\r\n35857,1254,APAC,grocery,mobile,63.20,7,0.086,none,2024-11-27\r\n35858,1790,AMER,fashion,online,48.25,8,0.068,none,2024-01-25\r\n35859,1398,APAC,sports,online,83.73,5,0.124,coupon,2024-08-19\r\n35860,1690,LATAM,grocery,online,49.90,6,0.083,coupon,2024-11-08\r\n35861,2307,LATAM,home,online,77.31,3,0.232,none,2024-02-28\r\n35862,2351,EMEA,grocery,online,109.54,3,0.246,none,2024-02-08\r\n35863,1298,LATAM,grocery,online,83.09,6,0.227,none,2024-08-16\r\n35
864,2072,AMER,grocery,retail,38.56,2,0.241,coupon,2024-06-07\r\n35865,1239,APAC,home,online,92.82,1,0.020,none,2024-02-26\r\n35866,1681,LATAM,grocery,online,37.56,2,0.073,none,2024-04-21\r\n35867,1899,APAC,electronics,online,103.55,7,0.089,none,2024-06-27\r\n35868,2455,AMER,fashion,online,51.47,8,0.077,none,2024-01-03\r\n35869,1810,LATAM,grocery,mobile,120.50,3,0.206,none,2024-08-24\r\n35870,1238,AMER,electronics,retail,64.68,3,0.023,none,2024-09-02\r\n35871,1729,AMER,grocery,online,36.46,6,0.036,none,2024-06-25\r\n35872,1238,AMER,home,retail,48.25,4,0.023,none,2024-02-07\r\n35873,2492,LATAM,home,mobile,88.32,7,0.012,none,2024-06-25\r\n35874,2436,LATAM,sports,online,74.92,4,0.142,none,2024-03-11\r\n35875,1864,EMEA,home,mobile,137.63,8,0.221,none,2024-10-04\r\n35876,2050,APAC,grocery,online,66.56,4,0.167,none,2024-01-19\r\n35877,2020,AMER,grocery,online,77.71,3,0.141,bundle,2024-08-10\r\n35878,1215,LATAM,toys,retail,37.99,2,0.075,coupon,2024-07-14\r\n35879,1985,AMER,toys,online,65.25,7,0.248,coupon,2024-01-18\r\n35880,1854,AMER,fashion,online,25.40,3,0.007,none,2024-07-05\r\n35881,2490,AMER,electronics,online,61.25,2,0.037,none,2024-10-23\r\n35882,1684,EMEA,toys,mobile,22.25,7,0.237,loyalty,2024-09-12\r\n35883,1407,LATAM,home,online,71.94,5,0.247,none,2024-05-23\r\n35884,1772,EMEA,electronics,retail,48.83,1,0.041,none,2024-02-04\r\n35885,1389,LATAM,fashion,retail,56.09,3,0.175,none,2024-12-07\r\n35886,1358,APAC,home,retail,59.65,4,0.105,coupon,2024-09-25\r\n35887,1371,AMER,electronics,partner,46.31,4,0.174,coupon,2024-10-24\r\n35888,1464,APAC,fashion,retail,112.77,8,0.214,coupon,2024-10-04\r\n35889,1446,AMER,sports,online,56.09,2,0.032,bundle,2024-02-18\r\n35890,1286,EMEA,electronics,retail,57.79,7,0.248,coupon,2024-03-07\r\n35891,1725,APAC,electronics,partner,42.30,3,0.046,none,2024-09-20\r\n35892,1481,LATAM,toys,retail,27.87,4,0.206,none,2024-02-06\r\n35893,1621,APAC,toys,retail,83.91,2,0.242,coupon,2024-10-02\r\n35894,2406,EMEA,toys,retail,84.05,6,0.063,bundle,202
4-05-07\r\n35895,2339,AMER,toys,mobile,64.76,2,0.163,none,2024-12-23\r\n35896,1253,AMER,grocery,retail,72.72,4,0.201,none,2024-12-10\r\n35897,2336,APAC,home,mobile,39.42,2,0.074,bundle,2024-06-15\r\n35898,1656,LATAM,fashion,online,72.19,6,0.170,coupon,2024-12-07\r\n35899,1208,AMER,electronics,retail,54.58,1,0.241,none,2024-04-02\r\n35900,1041,APAC,grocery,online,120.40,5,0.096,bundle,2024-10-22\r\n35901,1684,EMEA,electronics,online,47.21,3,0.186,none,2024-08-23\r\n35902,1657,LATAM,home,online,48.35,5,0.150,coupon,2024-05-09\r\n35903,1097,EMEA,toys,online,51.09,8,0.112,coupon,2024-07-03\r\n35904,1370,APAC,grocery,retail,73.42,7,0.000,none,2024-05-24\r\n35905,2489,LATAM,grocery,online,76.18,6,0.237,none,2024-08-11\r\n35906,2385,APAC,fashion,mobile,46.89,6,0.046,none,2024-05-12\r\n35907,1238,AMER,sports,retail,32.03,7,0.090,coupon,2024-12-24\r\n35908,1913,LATAM,grocery,partner,59.92,6,0.065,none,2024-06-22\r\n35909,1167,EMEA,toys,retail,41.27,3,0.241,coupon,2024-11-23\r\n35910,1884,APAC,toys,retail,142.69,6,0.043,loyalty,2024-12-20\r\n35911,1989,LATAM,toys,retail,31.90,4,0.220,coupon,2024-08-25\r\n35912,2140,AMER,fashion,mobile,77.97,3,0.070,none,2024-12-27\r\n35913,2469,LATAM,sports,retail,81.78,2,0.144,coupon,2024-06-17\r\n35914,2425,APAC,sports,retail,70.74,1,0.168,none,2024-06-03\r\n35915,2084,LATAM,grocery,online,45.11,4,0.167,coupon,2024-07-19\r\n35916,1909,APAC,home,online,39.53,2,0.003,none,2024-02-13\r\n35917,2315,LATAM,grocery,online,45.01,5,0.122,loyalty,2024-03-10\r\n35918,1760,LATAM,sports,online,25.08,2,0.101,none,2024-07-18\r\n35919,2250,AMER,electronics,online,136.09,5,0.165,loyalty,2024-04-11\r\n35920,2009,LATAM,electronics,retail,74.98,4,0.090,none,2024-06-11\r\n35921,1701,LATAM,electronics,online,60.51,4,0.232,bundle,2024-04-07\r\n35922,1015,AMER,electronics,online,52.91,8,0.092,none,2024-04-15\r\n35923,2393,LATAM,electronics,online,27.35,8,0.223,none,2024-08-24\r\n35924,1392,AMER,electronics,online,36.10,5,0.023,coupon,2024-01-12\r\n35925,1986,LATAM
,fashion,online,65.08,7,0.216,none,2024-12-28\r\n35926,1690,LATAM,sports,mobile,72.59,6,0.132,none,2024-05-08\r\n35927,2419,LATAM,grocery,retail,36.34,8,0.137,none,2024-02-25\r\n35928,1174,APAC,electronics,online,108.28,1,0.045,coupon,2024-03-26\r\n35929,1292,LATAM,electronics,online,38.01,7,0.006,bundle,2024-04-15\r\n35930,1035,EMEA,grocery,mobile,77.15,8,0.096,coupon,2024-05-21\r\n35931,1519,APAC,home,mobile,28.69,6,0.028,none,2024-05-27\r\n35932,1776,APAC,home,mobile,32.51,6,0.193,loyalty,2024-07-13\r\n35933,2424,LATAM,toys,online,71.00,6,0.115,none,2024-06-05\r\n35934,2100,APAC,fashion,partner,41.93,8,0.011,none,2024-06-26\r\n35935,1172,APAC,sports,online,52.21,8,0.066,none,2024-02-22\r\n35936,1156,APAC,grocery,online,108.57,4,0.240,none,2024-10-04\r\n35937,1271,EMEA,toys,retail,68.42,4,0.024,none,2024-10-09\r\n35938,1859,AMER,home,partner,81.31,8,0.016,none,2024-11-14\r\n35939,2347,AMER,toys,online,51.03,5,0.188,none,2024-08-03\r\n35940,1428,APAC,fashion,online,24.77,8,0.189,coupon,2024-09-08\r\n35941,1658,AMER,toys,online,100.68,8,0.059,coupon,2024-10-24\r\n35942,1753,APAC,sports,mobile,78.50,4,0.217,none,2024-06-09\r\n35943,1306,LATAM,home,mobile,33.42,1,0.018,coupon,2024-02-26\r\n35944,1956,APAC,electronics,partner,101.50,5,0.009,bundle,2024-01-05\r\n35945,1545,AMER,grocery,online,47.74,1,0.087,none,2024-10-25\r\n35946,2398,EMEA,sports,online,53.29,2,0.078,none,2024-03-25\r\n35947,1712,LATAM,electronics,online,43.75,2,0.196,loyalty,2024-12-22\r\n35948,2252,EMEA,grocery,retail,68.49,6,0.245,loyalty,2024-08-02\r\n35949,2345,LATAM,electronics,retail,35.75,7,0.035,none,2024-03-20\r\n35950,1283,APAC,home,retail,29.50,6,0.024,none,2024-03-28\r\n35951,1183,AMER,electronics,retail,62.69,4,0.121,none,2024-09-10\r\n35952,1040,LATAM,grocery,mobile,42.85,5,0.213,none,2024-01-09\r\n35953,1123,LATAM,home,online,39.29,8,0.077,coupon,2024-09-13\r\n35954,1993,APAC,electronics,retail,38.54,3,0.003,none,2024-06-19\r\n35955,1144,APAC,home,online,38.24,2,0.178,none,2024-01-07\r\
n35956,2441,EMEA,home,online,131.72,1,0.026,loyalty,2024-09-04\r\n35957,2039,EMEA,toys,online,50.39,4,0.157,none,2024-02-04\r\n35958,1278,AMER,grocery,online,25.85,3,0.092,coupon,2024-10-20\r\n35959,2391,EMEA,grocery,online,28.87,4,0.059,coupon,2024-03-08\r\n35960,2385,APAC,electronics,retail,40.82,6,0.145,loyalty,2024-04-05\r\n35961,1064,AMER,electronics,online,70.80,7,0.168,none,2024-02-08\r\n35962,1610,LATAM,home,online,65.05,8,0.072,bundle,2024-10-24\r\n35963,1234,AMER,grocery,online,92.07,3,0.218,bundle,2024-03-10\r\n35964,1273,AMER,sports,online,100.77,8,0.084,loyalty,2024-11-13\r\n35965,2154,APAC,toys,online,53.12,7,0.236,bundle,2024-07-02\r\n35966,1225,APAC,sports,mobile,113.58,6,0.195,none,2024-03-14\r\n35967,1211,EMEA,toys,online,138.14,8,0.060,none,2024-03-09\r\n35968,1820,AMER,grocery,mobile,47.83,7,0.090,none,2024-07-16\r\n35969,2197,LATAM,grocery,retail,125.22,6,0.034,none,2024-06-19\r\n35970,1745,APAC,home,online,20.55,4,0.103,none,2024-02-08\r\n35971,1856,EMEA,fashion,retail,60.97,1,0.246,none,2024-11-19\r\n35972,1874,LATAM,fashion,retail,38.01,4,0.014,none,2024-10-05\r\n35973,1966,APAC,home,online,62.06,5,0.065,none,2024-05-08\r\n35974,2090,AMER,home,retail,64.96,6,0.002,none,2024-04-26\r\n35975,1453,APAC,grocery,partner,97.69,1,0.127,none,2024-05-09\r\n35976,2124,AMER,sports,retail,120.23,4,0.011,coupon,2024-09-19\r\n35977,1469,EMEA,grocery,online,88.90,2,0.176,loyalty,2024-02-10\r\n35978,1444,EMEA,home,partner,26.86,7,0.097,none,2024-05-15\r\n35979,1285,EMEA,home,online,19.43,4,0.135,none,2024-03-20\r\n35980,2428,LATAM,grocery,retail,43.55,6,0.167,none,2024-12-27\r\n35981,1395,APAC,grocery,mobile,36.22,8,0.075,none,2024-01-01\r\n35982,2089,EMEA,sports,online,78.18,8,0.122,none,2024-01-03\r\n35983,1315,AMER,home,partner,81.92,8,0.074,coupon,2024-10-12\r\n35984,2382,LATAM,fashion,online,61.39,8,0.058,coupon,2024-08-08\r\n35985,2495,EMEA,electronics,retail,23.98,2,0.085,none,2024-03-19\r\n35986,1847,LATAM,grocery,retail,77.67,3,0.148,bundle,2024-02-1
4\r\n35987,2312,APAC,grocery,online,47.80,2,0.124,none,2024-09-28\r\n35988,2219,LATAM,grocery,partner,62.27,2,0.026,none,2024-12-05\r\n35989,1476,APAC,electronics,online,72.11,5,0.142,none,2024-05-27\r\n35990,1492,APAC,electronics,online,127.49,7,0.196,none,2024-08-09\r\n35991,2351,EMEA,home,online,39.55,8,0.048,bundle,2024-03-20\r\n35992,2421,AMER,fashion,retail,40.35,3,0.053,coupon,2024-10-28\r\n35993,2123,AMER,sports,mobile,207.33,5,0.095,loyalty,2024-12-16\r\n35994,1016,AMER,grocery,partner,46.83,6,0.098,none,2024-08-28\r\n35995,1445,APAC,electronics,retail,81.25,6,0.117,loyalty,2024-06-01\r\n35996,2218,EMEA,grocery,retail,52.72,7,0.193,none,2024-08-11\r\n35997,1091,EMEA,sports,online,25.67,5,0.064,none,2024-07-13\r\n35998,1186,APAC,grocery,online,47.70,8,0.182,none,2024-10-25\r\n35999,2120,AMER,fashion,online,31.97,7,0.148,coupon,2024-06-16\r\n36000,1117,LATAM,grocery,partner,103.72,6,0.138,coupon,2024-01-05\r\n36001,2044,APAC,electronics,online,90.04,4,0.198,none,2024-01-06\r\n36002,1937,APAC,fashion,online,30.54,2,0.041,loyalty,2024-09-04\r\n36003,1566,EMEA,home,online,28.37,1,0.248,none,2024-04-26\r\n36004,1115,AMER,electronics,online,71.86,4,0.138,none,2024-12-04\r\n36005,1143,LATAM,home,online,32.59,1,0.250,none,2024-02-19\r\n36006,1084,AMER,grocery,retail,51.41,4,0.140,coupon,2024-03-24\r\n36007,1948,EMEA,sports,online,32.61,6,0.030,coupon,2024-10-21\r\n36008,1714,APAC,grocery,retail,84.34,7,0.227,none,2024-06-12\r\n36009,1531,EMEA,sports,online,77.42,7,0.088,none,2024-06-21\r\n36010,2213,APAC,home,retail,127.61,1,0.035,none,2024-10-24\r\n36011,1593,AMER,toys,partner,59.84,2,0.037,none,2024-07-19\r\n36012,1864,EMEA,grocery,online,95.17,1,0.191,bundle,2024-08-16\r\n36013,1327,APAC,home,online,33.15,8,0.082,bundle,2024-04-01\r\n36014,2333,APAC,grocery,mobile,49.80,2,0.191,bundle,2024-07-24\r\n36015,2193,AMER,electronics,retail,82.01,1,0.092,none,2024-03-19\r\n36016,1590,APAC,sports,retail,65.98,3,0.137,none,2024-09-18\r\n36017,1303,LATAM,fashion,online,18.1
3,1,0.194,none,2024-07-18\r\n36018,2108,AMER,grocery,mobile,32.39,7,0.187,none,2024-11-24\r\n36019,1833,EMEA,home,retail,21.52,1,0.055,none,2024-03-20\r\n36020,1049,AMER,grocery,online,78.34,2,0.206,none,2024-12-26\r\n36021,1131,APAC,fashion,online,37.79,4,0.045,coupon,2024-02-09\r\n36022,2318,AMER,grocery,mobile,75.36,1,0.059,none,2024-09-26\r\n36023,1389,LATAM,grocery,online,69.91,4,0.010,bundle,2024-12-26\r\n36024,2092,AMER,electronics,online,53.18,1,0.163,coupon,2024-10-09\r\n36025,2210,APAC,electronics,online,69.09,8,0.198,loyalty,2024-07-28\r\n36026,1131,APAC,grocery,retail,74.44,6,0.071,coupon,2024-02-27\r\n36027,2077,APAC,toys,retail,71.29,3,0.038,none,2024-03-16\r\n36028,1989,LATAM,electronics,online,78.80,1,0.033,none,2024-06-16\r\n36029,1007,APAC,home,retail,30.58,1,0.112,none,2024-06-24\r\n36030,2196,AMER,toys,retail,169.08,1,0.113,coupon,2024-12-22\r\n36031,1030,EMEA,fashion,retail,44.07,7,0.161,bundle,2024-10-27\r\n36032,1325,APAC,electronics,mobile,36.50,3,0.031,bundle,2024-01-12\r\n36033,2400,EMEA,toys,mobile,32.82,7,0.004,none,2024-10-08\r\n36034,1767,AMER,sports,retail,25.92,7,0.125,coupon,2024-11-10\r\n36035,2079,EMEA,home,online,195.41,4,0.043,loyalty,2024-03-26\r\n36036,1907,EMEA,sports,retail,46.13,7,0.120,coupon,2024-04-18\r\n36037,1310,AMER,electronics,online,116.43,3,0.069,none,2024-05-17\r\n36038,1819,AMER,sports,online,57.00,8,0.146,none,2024-07-09\r\n36039,1967,EMEA,fashion,retail,106.45,1,0.137,bundle,2024-04-28\r\n36040,1756,EMEA,fashion,mobile,57.32,5,0.120,coupon,2024-12-24\r\n36041,2108,AMER,grocery,mobile,44.30,6,0.168,loyalty,2024-07-24\r\n36042,2098,AMER,grocery,retail,77.56,5,0.002,coupon,2024-07-23\r\n36043,1659,APAC,fashion,retail,55.84,2,0.044,bundle,2024-03-21\r\n36044,1201,LATAM,home,online,54.76,3,0.029,none,2024-03-05\r\n36045,2423,LATAM,electronics,online,37.91,3,0.185,none,2024-08-21\r\n36046,1713,EMEA,sports,online,65.19,5,0.069,bundle,2024-08-25\r\n36047,1132,EMEA,grocery,retail,115.87,1,0.140,coupon,2024-05-09\r\n3604
8,1958,APAC,home,online,53.76,2,0.168,loyalty,2024-10-08\r\n36049,2032,AMER,electronics,mobile,52.42,4,0.058,none,2024-06-08\r\n36050,2426,AMER,home,online,48.88,7,0.199,none,2024-11-25\r\n36051,1223,LATAM,electronics,retail,42.13,1,0.198,bundle,2024-10-05\r\n36052,2151,APAC,fashion,online,57.07,8,0.167,none,2024-01-26\r\n36053,1535,AMER,grocery,online,125.57,2,0.107,loyalty,2024-09-07\r\n36054,2267,AMER,sports,mobile,59.70,3,0.223,none,2024-08-15\r\n36055,2192,APAC,electronics,retail,67.03,5,0.004,loyalty,2024-07-28\r\n36056,2262,APAC,toys,online,87.49,5,0.005,none,2024-03-13\r\n36057,1991,APAC,grocery,online,59.00,5,0.116,loyalty,2024-03-18\r\n36058,1874,LATAM,electronics,online,56.02,3,0.170,none,2024-08-23\r\n36059,1666,LATAM,sports,online,79.48,1,0.181,none,2024-11-16\r\n36060,2163,EMEA,home,mobile,127.47,5,0.211,none,2024-01-10\r\n36061,2359,LATAM,toys,online,32.62,6,0.233,none,2024-06-04\r\n36062,1818,AMER,fashion,online,54.28,8,0.022,bundle,2024-07-18\r\n36063,1845,AMER,electronics,retail,52.15,3,0.163,bundle,2024-05-07\r\n36064,1653,APAC,grocery,retail,40.24,6,0.225,loyalty,2024-07-12\r\n36065,2163,EMEA,electronics,retail,152.40,4,0.094,none,2024-12-16\r\n36066,1485,APAC,fashion,online,63.25,5,0.041,bundle,2024-12-09\r\n36067,2491,APAC,sports,retail,37.82,4,0.010,none,2024-01-11\r\n36068,2474,LATAM,sports,online,147.99,3,0.116,coupon,2024-11-05\r\n36069,2248,LATAM,electronics,online,36.92,6,0.112,none,2024-03-11\r\n36070,1253,AMER,fashion,partner,26.57,7,0.220,none,2024-11-25\r\n36071,1583,AMER,grocery,retail,79.52,5,0.198,coupon,2024-05-08\r\n36072,1691,LATAM,sports,online,69.60,2,0.039,bundle,2024-12-21\r\n36073,2492,LATAM,electronics,retail,179.99,3,0.043,none,2024-01-03\r\n36074,1558,EMEA,fashion,retail,59.90,8,0.127,none,2024-02-11\r\n36075,2231,LATAM,toys,online,110.58,8,0.127,coupon,2024-03-25\r\n36076,2214,AMER,home,retail,102.80,8,0.041,none,2024-11-03\r\n36077,1695,LATAM,electronics,retail,37.12,4,0.223,coupon,2024-03-13\r\n36078,1070,EMEA,home,on
line,35.24,7,0.097,none,2024-11-15\r\n36079,1852,AMER,home,online,20.77,3,0.228,none,2024-05-25\r\n36080,1967,EMEA,sports,online,79.93,4,0.237,none,2024-01-02\r\n36081,1930,AMER,home,retail,27.28,5,0.031,coupon,2024-10-06\r\n36082,1042,LATAM,grocery,online,100.01,3,0.110,none,2024-01-15\r\n36083,1495,LATAM,sports,mobile,82.42,7,0.141,none,2024-01-11\r\n36084,1578,LATAM,grocery,online,31.42,8,0.006,none,2024-04-07\r\n36085,1014,EMEA,electronics,retail,28.54,1,0.215,coupon,2024-09-23\r\n36086,2368,AMER,grocery,retail,137.76,4,0.174,bundle,2024-02-14\r\n36087,2013,APAC,sports,retail,109.17,2,0.028,none,2024-06-03\r\n36088,1898,EMEA,electronics,online,41.64,3,0.211,coupon,2024-10-20\r\n36089,1747,EMEA,fashion,retail,83.72,7,0.055,coupon,2024-01-12\r\n36090,2323,AMER,toys,retail,52.42,3,0.118,loyalty,2024-09-13\r\n36091,1513,APAC,electronics,retail,65.38,4,0.246,coupon,2024-05-18\r\n36092,2090,AMER,fashion,mobile,31.86,7,0.072,coupon,2024-09-07\r\n36093,2087,LATAM,fashion,online,205.25,6,0.010,bundle,2024-02-11\r\n36094,2261,EMEA,grocery,online,36.38,6,0.042,none,2024-07-01\r\n36095,1709,EMEA,toys,online,138.40,4,0.139,loyalty,2024-11-19\r\n36096,1075,AMER,grocery,online,69.09,1,0.163,bundle,2024-04-04\r\n36097,1186,APAC,grocery,mobile,35.73,4,0.126,coupon,2024-07-15\r\n36098,1539,LATAM,sports,retail,45.02,6,0.175,bundle,2024-02-23\r\n36099,2053,AMER,grocery,mobile,42.53,8,0.228,coupon,2024-02-09\r\n36100,1868,AMER,grocery,retail,47.24,2,0.182,coupon,2024-10-26\r\n36101,1280,LATAM,home,retail,102.91,7,0.249,none,2024-04-07\r\n36102,1858,LATAM,grocery,retail,49.25,3,0.203,none,2024-03-21\r\n36103,1817,APAC,grocery,partner,51.81,2,0.194,none,2024-09-12\r\n36104,2412,LATAM,electronics,online,37.60,1,0.148,coupon,2024-03-19\r\n36105,2092,AMER,grocery,online,52.24,3,0.206,bundle,2024-09-07\r\n36106,1554,AMER,home,online,52.08,4,0.199,coupon,2024-12-08\r\n36107,2127,LATAM,grocery,retail,98.08,3,0.076,none,2024-03-23\r\n36108,2152,EMEA,sports,online,78.18,8,0.059,none,2024-03-2
6\r\n36109,1826,LATAM,electronics,online,29.48,1,0.076,coupon,2024-07-27\r\n36110,1814,AMER,home,mobile,30.80,2,0.171,none,2024-07-15\r\n36111,2219,LATAM,home,online,16.92,2,0.237,none,2024-11-19\r\n36112,1319,EMEA,grocery,online,49.12,5,0.084,coupon,2024-04-14\r\n36113,1168,APAC,sports,retail,39.42,6,0.158,none,2024-10-11\r\n36114,2448,APAC,grocery,online,25.91,6,0.084,none,2024-03-28\r\n36115,1692,LATAM,grocery,retail,52.09,5,0.139,loyalty,2024-10-26\r\n36116,2450,EMEA,grocery,retail,72.96,1,0.016,coupon,2024-01-04\r\n36117,1531,EMEA,grocery,mobile,31.72,2,0.044,coupon,2024-10-23\r\n36118,2471,APAC,home,retail,64.13,1,0.017,none,2024-04-02\r\n36119,1708,LATAM,sports,retail,20.40,2,0.223,none,2024-11-05\r\n36120,2077,APAC,home,online,60.44,3,0.057,none,2024-04-13\r\n36121,1195,AMER,grocery,retail,85.94,5,0.120,none,2024-04-28\r\n36122,1031,AMER,electronics,mobile,59.52,3,0.034,none,2024-07-11\r\n36123,1758,AMER,fashion,online,123.64,8,0.023,loyalty,2024-04-19\r\n36124,1070,EMEA,sports,retail,19.39,7,0.145,coupon,2024-07-10\r\n36125,2083,LATAM,electronics,retail,23.59,5,0.029,none,2024-05-19\r\n36126,1643,EMEA,home,retail,68.38,8,0.180,none,2024-05-10\r\n36127,1754,EMEA,sports,online,157.17,5,0.032,none,2024-07-08\r\n36128,1874,LATAM,grocery,retail,69.62,6,0.095,none,2024-04-12\r\n36129,1573,AMER,grocery,retail,75.54,4,0.036,none,2024-06-07\r\n36130,1721,EMEA,grocery,retail,24.91,2,0.062,none,2024-08-07\r\n36131,1954,APAC,home,online,20.01,7,0.185,loyalty,2024-05-25\r\n36132,1460,LATAM,home,retail,55.51,7,0.009,none,2024-02-06\r\n36133,2319,AMER,electronics,partner,62.14,7,0.062,coupon,2024-03-02\r\n36134,1771,AMER,toys,mobile,140.43,7,0.113,coupon,2024-10-17\r\n36135,2382,LATAM,grocery,online,54.63,6,0.047,none,2024-10-18\r\n36136,2009,LATAM,grocery,retail,68.99,3,0.117,none,2024-04-16\r\n36137,1733,LATAM,grocery,partner,35.77,1,0.093,none,2024-04-01\r\n36138,1890,LATAM,electronics,retail,48.13,4,0.206,none,2024-09-10\r\n36139,2462,EMEA,electronics,online,57.55,1,0
.124,none,2024-11-26\r\n36140,2307,LATAM,toys,online,63.65,4,0.140,none,2024-08-11\r\n36141,1894,APAC,electronics,online,115.06,5,0.164,bundle,2024-11-28\r\n36142,2147,LATAM,electronics,mobile,95.16,1,0.031,none,2024-04-01\r\n36143,1078,APAC,grocery,online,28.89,3,0.037,bundle,2024-04-14\r\n36144,1683,AMER,fashion,partner,69.38,8,0.042,none,2024-09-20\r\n36145,1476,APAC,grocery,online,35.47,4,0.008,bundle,2024-12-28\r\n36146,1294,APAC,home,retail,70.11,3,0.119,none,2024-02-24\r\n36147,1943,AMER,home,mobile,57.45,3,0.153,none,2024-10-23\r\n36148,1351,APAC,fashion,retail,102.05,5,0.206,none,2024-02-20\r\n36149,1577,AMER,sports,mobile,92.93,4,0.220,none,2024-10-15\r\n36150,2353,AMER,fashion,online,45.64,5,0.146,none,2024-09-22\r\n36151,1760,LATAM,toys,online,49.97,8,0.224,coupon,2024-11-18\r\n36152,1293,AMER,grocery,retail,56.39,7,0.211,none,2024-06-28\r\n36153,1493,APAC,fashion,mobile,65.70,7,0.172,coupon,2024-01-24\r\n36154,1429,APAC,toys,online,61.37,1,0.044,none,2024-02-04\r\n36155,1162,AMER,grocery,online,41.11,7,0.065,bundle,2024-03-18\r\n36156,2321,APAC,grocery,online,51.66,7,0.116,none,2024-04-26\r\n36157,1141,AMER,grocery,online,31.39,6,0.183,bundle,2024-05-19\r\n36158,1798,AMER,toys,retail,38.98,2,0.172,none,2024-07-18\r\n36159,1944,AMER,grocery,retail,38.87,2,0.085,bundle,2024-12-13\r\n36160,2490,AMER,electronics,mobile,23.61,7,0.196,none,2024-07-09\r\n36161,1056,LATAM,grocery,online,47.56,3,0.122,none,2024-01-12\r\n36162,2194,APAC,grocery,online,38.27,2,0.133,bundle,2024-08-25\r\n36163,2355,EMEA,grocery,mobile,66.10,8,0.246,none,2024-04-25\r\n36164,2131,APAC,toys,online,48.52,2,0.151,none,2024-06-09\r\n36165,2254,LATAM,fashion,retail,90.51,7,0.224,coupon,2024-09-07\r\n36166,2499,LATAM,electronics,retail,25.45,2,0.049,none,2024-05-27\r\n36167,2003,LATAM,electronics,online,49.27,7,0.223,none,2024-09-13\r\n36168,1757,EMEA,grocery,partner,32.90,7,0.238,loyalty,2024-10-15\r\n36169,2412,LATAM,grocery,mobile,104.13,1,0.008,bundle,2024-03-17\r\n36170,2452,LATAM,spo
rts,retail,29.54,1,0.094,coupon,2024-05-02\r\n36171,1898,EMEA,sports,online,47.51,5,0.126,coupon,2024-06-21\r\n36172,2273,APAC,toys,retail,41.29,2,0.032,loyalty,2024-01-01\r\n36173,1271,EMEA,sports,mobile,37.20,8,0.101,bundle,2024-07-22\r\n36174,1722,EMEA,electronics,online,76.75,1,0.115,none,2024-02-09\r\n36175,1877,LATAM,grocery,online,52.59,7,0.178,coupon,2024-11-17\r\n36176,2221,LATAM,grocery,online,66.36,5,0.002,coupon,2024-05-13\r\n36177,1834,AMER,grocery,retail,63.53,1,0.124,none,2024-08-07\r\n36178,2151,APAC,toys,online,50.40,1,0.196,coupon,2024-05-06\r\n36179,2244,LATAM,home,mobile,64.27,4,0.016,bundle,2024-04-04\r\n36180,1640,APAC,electronics,mobile,45.01,2,0.241,none,2024-07-22\r\n36181,1729,AMER,fashion,retail,53.78,2,0.175,none,2024-12-10\r\n36182,2226,EMEA,grocery,online,41.98,4,0.212,none,2024-03-13\r\n36183,1469,EMEA,home,mobile,37.57,1,0.238,loyalty,2024-09-19\r\n36184,1172,APAC,fashion,retail,50.82,8,0.199,none,2024-06-26\r\n36185,2093,LATAM,sports,online,29.02,6,0.097,coupon,2024-04-15\r\n36186,2419,LATAM,grocery,retail,30.96,3,0.128,coupon,2024-10-28\r\n36187,1390,APAC,sports,online,33.19,8,0.079,none,2024-08-17\r\n36188,1256,LATAM,home,retail,39.85,6,0.196,coupon,2024-07-25\r\n36189,1042,LATAM,sports,online,39.60,8,0.228,bundle,2024-10-03\r\n36190,1101,AMER,sports,retail,79.61,6,0.195,none,2024-09-26\r\n36191,2149,EMEA,electronics,online,23.55,7,0.014,none,2024-08-23\r\n36192,1031,AMER,electronics,mobile,11.15,6,0.102,none,2024-02-17\r\n36193,1410,AMER,home,retail,57.19,7,0.092,none,2024-10-14\r\n36194,2492,LATAM,grocery,retail,31.84,6,0.008,coupon,2024-10-26\r\n36195,1028,EMEA,grocery,mobile,55.56,3,0.167,none,2024-02-07\r\n36196,2140,AMER,home,online,63.87,2,0.232,coupon,2024-12-05\r\n36197,1621,APAC,sports,mobile,26.15,5,0.005,bundle,2024-10-22\r\n36198,1501,AMER,sports,retail,65.35,6,0.094,coupon,2024-10-15\r\n36199,1313,EMEA,toys,mobile,52.93,4,0.195,coupon,2024-03-14\r\n36200,1174,APAC,grocery,mobile,49.85,1,0.097,none,2024-06-24\r\n36201,
2080,LATAM,fashion,retail,56.65,5,0.063,bundle,2024-06-02\r\n36202,1868,AMER,toys,retail,150.22,7,0.138,none,2024-10-23\r\n36203,2443,LATAM,grocery,online,44.61,3,0.190,none,2024-08-07\r\n36204,2235,AMER,grocery,retail,46.13,6,0.115,coupon,2024-10-25\r\n36205,2085,AMER,electronics,online,54.54,1,0.205,none,2024-08-12\r\n36206,2462,EMEA,electronics,online,31.20,4,0.036,bundle,2024-10-16\r\n36207,1692,LATAM,electronics,retail,30.70,7,0.211,none,2024-02-14\r\n36208,1960,EMEA,sports,retail,23.98,5,0.161,coupon,2024-01-14\r\n36209,1835,AMER,fashion,retail,45.90,5,0.195,none,2024-02-14\r\n36210,2127,LATAM,electronics,retail,88.18,2,0.147,none,2024-02-08\r\n36211,2343,EMEA,sports,online,47.97,4,0.019,none,2024-04-25\r\n36212,1753,APAC,grocery,mobile,48.50,6,0.062,none,2024-08-08\r\n36213,1622,LATAM,home,retail,25.72,5,0.047,none,2024-08-12\r\n36214,2177,AMER,grocery,online,36.82,8,0.008,bundle,2024-09-12\r\n36215,1574,AMER,grocery,online,52.06,3,0.067,none,2024-11-25\r\n36216,2213,APAC,electronics,retail,48.20,7,0.059,coupon,2024-01-06\r\n36217,2042,LATAM,home,retail,53.54,3,0.052,none,2024-05-13\r\n36218,1296,LATAM,electronics,retail,45.43,4,0.014,none,2024-08-22\r\n36219,1320,EMEA,fashion,online,42.84,3,0.201,none,2024-07-01\r\n36220,2168,EMEA,home,online,48.32,1,0.118,coupon,2024-01-16\r\n36221,2462,EMEA,fashion,mobile,17.07,4,0.235,none,2024-12-27\r\n36222,1935,EMEA,grocery,retail,72.46,8,0.190,none,2024-09-08\r\n36223,1341,EMEA,electronics,mobile,52.29,3,0.117,none,2024-11-05\r\n36224,1054,EMEA,electronics,online,27.11,5,0.145,coupon,2024-03-21\r\n36225,2436,LATAM,fashion,online,71.85,3,0.106,coupon,2024-08-18\r\n36226,2139,AMER,grocery,retail,98.62,5,0.014,none,2024-12-23\r\n36227,1197,LATAM,fashion,online,82.57,4,0.153,bundle,2024-05-21\r\n36228,1881,LATAM,electronics,retail,47.10,2,0.231,none,2024-07-25\r\n36229,1141,AMER,sports,online,53.32,8,0.063,coupon,2024-01-11\r\n36230,2150,APAC,home,online,61.10,2,0.245,none,2024-02-19\r\n36231,1890,LATAM,grocery,retail,35.
87,6,0.139,none,2024-05-24\r\n36232,1855,APAC,electronics,mobile,37.74,7,0.176,coupon,2024-07-22\r\n36233,1123,LATAM,grocery,retail,144.06,4,0.035,bundle,2024-03-28\r\n36234,1640,APAC,toys,online,71.03,6,0.210,none,2024-06-18\r\n36235,1964,EMEA,home,online,81.99,5,0.060,coupon,2024-03-10\r\n36236,2335,EMEA,electronics,online,89.77,6,0.061,loyalty,2024-03-23\r\n36237,2172,EMEA,fashion,retail,23.71,4,0.013,none,2024-06-08\r\n36238,2056,LATAM,home,mobile,67.45,5,0.022,loyalty,2024-09-06\r\n36239,1757,EMEA,fashion,online,71.93,4,0.176,none,2024-03-09\r\n36240,1578,LATAM,grocery,online,79.67,6,0.006,none,2024-06-07\r\n36241,1800,APAC,electronics,retail,37.50,6,0.114,none,2024-01-24\r\n36242,1813,EMEA,grocery,retail,50.80,8,0.167,none,2024-03-21\r\n36243,1273,AMER,toys,online,18.17,1,0.050,bundle,2024-04-02\r\n36244,1781,LATAM,fashion,mobile,153.63,3,0.089,coupon,2024-08-04\r\n36245,2140,AMER,fashion,online,49.02,8,0.132,none,2024-06-26\r\n36246,1198,AMER,fashion,mobile,113.14,1,0.132,coupon,2024-12-28\r\n36247,1245,APAC,electronics,online,67.51,7,0.146,none,2024-01-09\r\n36248,1881,LATAM,grocery,retail,103.51,2,0.178,none,2024-11-12\r\n36249,2059,AMER,home,retail,122.10,7,0.243,coupon,2024-10-09\r\n36250,1318,LATAM,electronics,retail,65.74,6,0.032,none,2024-10-08\r\n36251,2196,AMER,home,online,41.96,5,0.167,none,2024-08-25\r\n36252,1593,AMER,sports,online,101.55,8,0.090,none,2024-12-18\r\n36253,2318,AMER,electronics,mobile,91.98,1,0.037,none,2024-12-20\r\n36254,2370,EMEA,grocery,online,82.58,2,0.233,none,2024-09-04\r\n36255,1597,APAC,home,partner,33.06,6,0.112,bundle,2024-02-19\r\n36256,1471,EMEA,toys,online,57.38,5,0.107,none,2024-07-03\r\n36257,1235,EMEA,fashion,retail,22.10,3,0.151,bundle,2024-01-07\r\n36258,1070,EMEA,home,retail,67.53,6,0.198,loyalty,2024-02-02\r\n36259,2062,EMEA,home,retail,23.63,1,0.228,bundle,2024-04-28\r\n36260,1812,EMEA,electronics,partner,49.67,5,0.129,none,2024-03-04\r\n36261,1093,APAC,electronics,retail,33.23,2,0.022,bundle,2024-03-08\r\n3626
2,2488,EMEA,toys,online,134.38,7,0.129,bundle,2024-07-16\r\n36263,1391,LATAM,grocery,mobile,48.49,6,0.056,none,2024-03-28\r\n36264,1270,LATAM,fashion,online,104.16,4,0.147,bundle,2024-08-21\r\n36265,1662,LATAM,toys,retail,76.65,7,0.154,coupon,2024-01-09\r\n36266,2092,AMER,home,online,48.06,2,0.112,bundle,2024-07-28\r\n36267,1033,APAC,fashion,retail,51.85,4,0.075,coupon,2024-04-28\r\n36268,1101,AMER,grocery,retail,55.03,8,0.185,none,2024-11-01\r\n36269,1077,AMER,electronics,online,91.13,4,0.211,coupon,2024-03-03\r\n36270,2045,LATAM,sports,retail,52.10,4,0.085,coupon,2024-06-15\r\n36271,2287,EMEA,toys,online,92.45,5,0.028,bundle,2024-10-10\r\n36272,1210,LATAM,fashion,mobile,106.55,5,0.047,none,2024-03-05\r\n36273,1017,AMER,grocery,online,58.89,4,0.168,bundle,2024-11-04\r\n36274,2125,LATAM,home,retail,56.68,6,0.160,coupon,2024-10-24\r\n36275,1601,APAC,sports,online,137.06,8,0.147,none,2024-08-27\r\n36276,2397,LATAM,electronics,online,30.51,7,0.203,none,2024-10-11\r\n36277,1920,LATAM,electronics,retail,30.34,8,0.103,none,2024-11-28\r\n36278,1902,AMER,electronics,online,36.52,3,0.246,none,2024-10-16\r\n36279,2374,LATAM,toys,retail,29.47,6,0.102,loyalty,2024-04-09\r\n36280,2241,APAC,electronics,retail,29.44,4,0.158,bundle,2024-11-07\r\n36281,2497,AMER,electronics,online,84.09,3,0.155,none,2024-09-27\r\n36282,2297,EMEA,fashion,retail,52.12,4,0.205,none,2024-01-26\r\n36283,1594,LATAM,toys,online,59.04,2,0.084,loyalty,2024-03-06\r\n36284,1839,APAC,grocery,online,56.28,7,0.170,coupon,2024-08-21\r\n36285,2230,LATAM,electronics,retail,76.54,1,0.231,loyalty,2024-11-24\r\n36286,1621,APAC,electronics,retail,133.17,8,0.221,none,2024-11-23\r\n36287,1323,EMEA,home,retail,34.44,6,0.101,none,2024-08-22\r\n36288,1963,AMER,electronics,online,76.14,6,0.202,none,2024-01-11\r\n36289,1991,APAC,home,online,23.28,1,0.209,bundle,2024-10-01\r\n36290,2450,EMEA,electronics,online,43.71,2,0.140,bundle,2024-12-16\r\n36291,1861,AMER,sports,retail,116.35,8,0.054,coupon,2024-05-22\r\n36292,1847,LATAM,t
oys,retail,25.43,4,0.228,coupon,2024-10-26\r\n36293,2291,EMEA,electronics,online,57.85,8,0.199,none,2024-02-14\r\n36294,1323,EMEA,fashion,retail,87.50,2,0.090,none,2024-10-15\r\n36295,1244,LATAM,electronics,retail,89.97,4,0.131,none,2024-09-03\r\n36296,2440,APAC,electronics,online,113.23,7,0.044,none,2024-01-24\r\n36297,1066,AMER,sports,retail,32.04,3,0.168,none,2024-01-12\r\n36298,1940,APAC,fashion,retail,76.54,4,0.148,coupon,2024-07-25\r\n36299,2218,EMEA,home,online,59.63,8,0.119,coupon,2024-02-24\r\n36300,1762,LATAM,fashion,online,49.42,6,0.200,none,2024-09-20\r\n36301,1204,AMER,electronics,online,41.35,4,0.007,coupon,2024-03-25\r\n36302,2308,AMER,toys,retail,41.17,5,0.243,none,2024-11-15\r\n36303,1919,EMEA,toys,mobile,76.10,1,0.142,coupon,2024-07-14\r\n36304,2461,LATAM,sports,online,63.75,2,0.076,none,2024-07-19\r\n36305,1512,APAC,electronics,online,51.71,2,0.076,loyalty,2024-11-07\r\n36306,1427,EMEA,sports,online,67.96,1,0.141,coupon,2024-04-04\r\n36307,1755,APAC,electronics,online,113.08,3,0.186,none,2024-02-02\r\n36308,1744,EMEA,fashion,online,41.51,2,0.233,coupon,2024-11-20\r\n36309,1784,EMEA,grocery,mobile,65.85,2,0.038,loyalty,2024-05-12\r\n36310,1159,LATAM,home,online,66.13,2,0.145,none,2024-11-08\r\n36311,2017,EMEA,electronics,retail,22.73,1,0.111,none,2024-09-18\r\n36312,1990,EMEA,electronics,retail,51.43,8,0.156,none,2024-01-22\r\n36313,1864,EMEA,fashion,mobile,60.95,8,0.056,coupon,2024-02-05\r\n36314,1255,AMER,home,online,46.05,4,0.123,loyalty,2024-01-25\r\n36315,1496,AMER,grocery,mobile,62.20,8,0.029,none,2024-10-22\r\n36316,1555,AMER,fashion,retail,46.77,3,0.071,none,2024-10-14\r\n36317,1790,AMER,fashion,retail,40.40,4,0.083,loyalty,2024-06-18\r\n36318,1288,LATAM,grocery,retail,94.04,4,0.166,loyalty,2024-04-17\r\n36319,2186,LATAM,electronics,retail,84.64,1,0.222,bundle,2024-08-27\r\n36320,2011,AMER,electronics,mobile,32.22,3,0.193,bundle,2024-03-27\r\n36321,2284,EMEA,home,online,42.24,6,0.022,none,2024-10-08\r\n36322,1963,AMER,toys,online,46.97,6,0.
117,coupon,2024-07-07\r\n36323,1940,APAC,grocery,online,30.34,6,0.235,bundle,2024-09-28\r\n36324,1244,LATAM,toys,online,35.37,4,0.027,none,2024-07-12\r\n36325,1244,LATAM,fashion,retail,39.78,5,0.128,none,2024-07-16\r\n36326,1825,AMER,grocery,retail,71.31,5,0.186,none,2024-06-02\r\n36327,2387,EMEA,home,retail,168.12,6,0.174,coupon,2024-12-10\r\n36328,2096,LATAM,sports,retail,42.39,5,0.087,bundle,2024-04-08\r\n36329,2222,LATAM,fashion,retail,41.28,3,0.225,loyalty,2024-01-11\r\n36330,1523,LATAM,fashion,partner,75.40,3,0.052,bundle,2024-08-19\r\n36331,2109,EMEA,fashion,online,67.17,2,0.135,none,2024-05-24\r\n36332,1470,LATAM,home,online,77.99,7,0.240,none,2024-07-20\r\n36333,2180,AMER,grocery,online,76.18,5,0.210,none,2024-12-26\r\n36334,2065,EMEA,electronics,retail,34.27,3,0.092,none,2024-03-05\r\n36335,2151,APAC,grocery,retail,42.03,2,0.115,bundle,2024-03-19\r\n36336,2452,LATAM,electronics,online,28.05,8,0.032,none,2024-09-07\r\n36337,2381,AMER,grocery,online,105.58,7,0.116,none,2024-01-26\r\n36338,1523,LATAM,grocery,online,43.97,4,0.083,none,2024-03-28\r\n36339,1262,APAC,home,online,147.97,5,0.197,none,2024-08-23\r\n36340,1743,LATAM,fashion,online,49.94,2,0.074,none,2024-02-15\r\n36341,1746,LATAM,electronics,retail,78.30,1,0.079,bundle,2024-07-05\r\n36342,1424,APAC,electronics,retail,99.61,2,0.006,none,2024-07-03\r\n36343,1975,EMEA,fashion,online,37.93,3,0.131,none,2024-03-07\r\n36344,1813,EMEA,toys,mobile,85.63,6,0.219,none,2024-10-20\r\n36345,2214,AMER,home,online,35.07,4,0.128,none,2024-04-02\r\n36346,2461,LATAM,electronics,online,72.30,6,0.036,none,2024-03-25\r\n36347,2402,AMER,toys,mobile,64.06,5,0.053,bundle,2024-06-05\r\n36348,2376,LATAM,sports,mobile,66.37,7,0.100,none,2024-02-07\r\n36349,2471,APAC,home,retail,21.52,3,0.218,bundle,2024-04-16\r\n36350,1971,EMEA,toys,retail,54.98,4,0.229,loyalty,2024-05-19\r\n36351,1507,EMEA,grocery,retail,50.79,8,0.103,none,2024-02-23\r\n36352,1227,AMER,electronics,online,75.07,7,0.116,coupon,2024-12-05\r\n36353,2464,LATAM,toy
s,mobile,74.03,5,0.201,coupon,2024-03-05\r\n36354,2101,APAC,home,mobile,65.86,1,0.236,none,2024-10-05\r\n36355,1519,APAC,home,retail,128.33,2,0.246,bundle,2024-03-02\r\n36356,2108,AMER,fashion,online,46.92,7,0.135,none,2024-09-03\r\n36357,2153,APAC,sports,mobile,55.26,6,0.060,none,2024-05-26\r\n36358,2364,APAC,grocery,online,103.16,6,0.128,coupon,2024-09-05\r\n36359,1359,LATAM,fashion,online,17.05,6,0.026,none,2024-11-22\r\n36360,1638,EMEA,sports,online,40.49,1,0.092,coupon,2024-12-02\r\n36361,2149,EMEA,electronics,retail,103.76,7,0.250,none,2024-04-01\r\n36362,1514,LATAM,grocery,retail,63.25,2,0.049,loyalty,2024-03-09\r\n36363,2141,AMER,home,retail,108.39,3,0.224,bundle,2024-10-10\r\n36364,1951,LATAM,fashion,retail,86.46,5,0.182,none,2024-04-06\r\n36365,1030,EMEA,grocery,online,74.50,7,0.077,none,2024-05-22\r\n36366,2143,AMER,electronics,retail,114.71,1,0.086,none,2024-05-13\r\n36367,2338,AMER,home,retail,57.22,2,0.068,none,2024-06-19\r\n36368,2042,LATAM,home,online,77.79,8,0.210,none,2024-06-06\r\n36369,1552,EMEA,home,retail,79.87,5,0.017,none,2024-06-27\r\n36370,2170,EMEA,sports,retail,54.23,7,0.089,none,2024-06-04\r\n36371,2263,AMER,grocery,retail,70.15,1,0.215,coupon,2024-09-11\r\n36372,1328,APAC,grocery,retail,46.16,6,0.206,none,2024-03-27\r\n36373,2307,LATAM,toys,retail,44.89,7,0.196,none,2024-05-07\r\n36374,2047,AMER,grocery,partner,133.41,1,0.026,none,2024-10-16\r\n36375,2098,AMER,fashion,retail,319.00,6,0.227,loyalty,2024-07-16\r\n36376,1303,LATAM,sports,retail,119.51,8,0.247,none,2024-10-03\r\n36377,1630,APAC,home,mobile,41.36,6,0.110,none,2024-02-12\r\n36378,2269,EMEA,electronics,online,61.12,3,0.081,bundle,2024-08-26\r\n36379,1062,EMEA,grocery,retail,63.94,2,0.063,none,2024-01-17\r\n36380,1574,AMER,fashion,retail,33.14,2,0.133,none,2024-09-03\r\n36381,1312,EMEA,fashion,mobile,63.90,8,0.223,none,2024-08-19\r\n36382,2170,EMEA,grocery,online,101.83,1,0.249,coupon,2024-02-01\r\n36383,2419,LATAM,sports,retail,62.04,6,0.186,none,2024-04-27\r\n36384,2212,EMEA,
home,online,107.85,5,0.058,none,2024-12-15\r\n36385,1610,LATAM,grocery,mobile,68.47,2,0.069,none,2024-02-11\r\n36386,2254,LATAM,electronics,online,25.68,1,0.013,none,2024-10-02\r\n36387,1750,LATAM,sports,online,47.04,4,0.051,bundle,2024-08-28\r\n36388,2009,LATAM,sports,online,99.07,2,0.107,none,2024-11-28\r\n36389,1485,APAC,home,online,47.67,1,0.080,bundle,2024-09-14\r\n36390,2372,AMER,sports,partner,56.45,8,0.222,none,2024-09-02\r\n36391,1404,EMEA,sports,online,119.05,2,0.016,none,2024-03-09\r\n36392,1536,LATAM,grocery,online,112.38,2,0.114,none,2024-04-01\r\n36393,1184,AMER,fashion,mobile,186.38,3,0.020,none,2024-04-11\r\n36394,1376,EMEA,toys,online,50.87,3,0.090,none,2024-12-28\r\n36395,2341,EMEA,electronics,retail,92.48,8,0.048,none,2024-08-25\r\n36396,1685,AMER,fashion,mobile,91.82,3,0.122,bundle,2024-04-07\r\n36397,1958,APAC,home,retail,71.87,5,0.200,coupon,2024-05-12\r\n36398,2214,AMER,electronics,retail,106.68,8,0.108,none,2024-07-25\r\n36399,2150,APAC,sports,online,44.29,5,0.209,coupon,2024-04-14\r\n36400,1515,EMEA,grocery,retail,61.85,6,0.205,none,2024-10-09\r\n36401,2352,APAC,grocery,retail,66.74,2,0.019,none,2024-05-11\r\n36402,1765,EMEA,grocery,mobile,45.28,1,0.088,none,2024-04-12\r\n36403,2307,LATAM,grocery,online,31.31,4,0.040,coupon,2024-12-03\r\n36404,1065,AMER,grocery,online,61.36,5,0.205,loyalty,2024-11-14\r\n36405,1551,APAC,fashion,retail,36.49,6,0.229,loyalty,2024-11-26\r\n36406,1911,LATAM,electronics,online,50.31,1,0.063,none,2024-08-22\r\n36407,2209,AMER,toys,retail,75.66,4,0.191,none,2024-06-20\r\n36408,1635,APAC,grocery,retail,45.51,1,0.148,loyalty,2024-10-14\r\n36409,1087,AMER,home,retail,96.56,1,0.158,none,2024-03-11\r\n36410,1758,AMER,home,mobile,53.54,3,0.142,none,2024-05-07\r\n36411,1408,AMER,sports,online,56.38,5,0.104,coupon,2024-02-09\r\n36412,2049,LATAM,home,online,95.34,8,0.003,none,2024-11-21\r\n36413,1166,AMER,toys,mobile,70.55,2,0.042,coupon,2024-10-12\r\n36414,1166,AMER,toys,online,24.94,7,0.005,none,2024-04-17\r\n36415,1202,AP
AC,home,retail,44.99,8,0.049,none,2024-05-14\r\n36416,1587,LATAM,fashion,online,56.68,8,0.131,coupon,2024-12-16\r\n36417,1699,APAC,electronics,online,55.70,6,0.173,none,2024-07-23\r\n36418,1168,APAC,home,retail,72.24,2,0.097,none,2024-11-13\r\n36419,1271,EMEA,home,partner,30.79,2,0.163,bundle,2024-12-16\r\n36420,1964,EMEA,grocery,online,103.91,5,0.116,none,2024-12-13\r\n36421,1329,APAC,grocery,retail,23.59,5,0.029,bundle,2024-12-05\r\n36422,1101,AMER,home,online,87.33,2,0.171,coupon,2024-03-20\r\n36423,1354,AMER,sports,online,55.56,7,0.188,none,2024-12-20\r\n36424,1066,AMER,grocery,online,24.90,5,0.185,bundle,2024-04-25\r\n36425,1662,LATAM,fashion,online,30.03,3,0.030,bundle,2024-04-01\r\n36426,1088,LATAM,fashion,retail,177.12,7,0.101,none,2024-09-02\r\n36427,2193,AMER,fashion,retail,93.78,3,0.152,bundle,2024-02-23\r\n36428,1427,EMEA,toys,online,41.40,4,0.081,loyalty,2024-10-23\r\n36429,1525,APAC,electronics,online,245.49,2,0.209,coupon,2024-05-10\r\n36430,1852,AMER,home,retail,98.82,6,0.122,none,2024-12-24\r\n36431,1833,EMEA,sports,online,44.78,6,0.166,none,2024-07-21\r\n36432,1456,APAC,home,online,39.68,1,0.054,loyalty,2024-02-26\r\n36433,1885,EMEA,home,mobile,20.12,7,0.092,coupon,2024-12-19\r\n36434,1358,APAC,electronics,online,85.64,6,0.007,none,2024-03-09\r\n36435,1551,APAC,grocery,retail,79.00,6,0.053,coupon,2024-01-04\r\n36436,1109,APAC,toys,retail,45.30,5,0.002,none,2024-03-03\r\n36437,2175,AMER,grocery,retail,56.96,4,0.049,none,2024-08-06\r\n36438,1333,EMEA,fashion,online,62.40,7,0.243,bundle,2024-10-27\r\n36439,1041,APAC,grocery,retail,84.47,1,0.087,none,2024-07-19\r\n36440,2369,LATAM,grocery,mobile,18.78,4,0.129,bundle,2024-05-25\r\n36441,1351,APAC,sports,retail,25.41,7,0.081,coupon,2024-11-10\r\n36442,1827,EMEA,toys,online,45.97,3,0.081,coupon,2024-04-07\r\n36443,1060,LATAM,electronics,online,84.61,5,0.155,loyalty,2024-08-04\r\n36444,1395,APAC,toys,retail,122.50,5,0.147,none,2024-04-06\r\n36445,2446,LATAM,fashion,online,67.61,7,0.183,none,2024-04-26\r\n3
6446,1925,LATAM,electronics,online,73.34,2,0.071,none,2024-05-24\r\n36447,1794,AMER,fashion,mobile,60.36,1,0.200,coupon,2024-11-10\r\n36448,2230,LATAM,grocery,retail,77.58,5,0.150,bundle,2024-05-06\r\n36449,1153,AMER,home,mobile,56.33,3,0.153,coupon,2024-07-10\r\n36450,1276,AMER,sports,online,34.03,4,0.058,none,2024-09-08\r\n36451,1806,APAC,grocery,retail,28.26,2,0.043,coupon,2024-01-01\r\n36452,1374,APAC,grocery,retail,94.84,8,0.068,coupon,2024-08-02\r\n36453,2499,LATAM,fashion,online,58.75,5,0.165,none,2024-10-05\r\n36454,2028,APAC,fashion,online,59.86,5,0.240,coupon,2024-10-17\r\n36455,1933,EMEA,home,online,29.35,6,0.047,none,2024-02-15\r\n36456,1617,AMER,sports,online,41.55,3,0.231,none,2024-04-07\r\n36457,1752,APAC,toys,retail,69.35,6,0.232,none,2024-10-15\r\n36458,1594,LATAM,home,retail,63.01,6,0.077,bundle,2024-08-20\r\n36459,1523,LATAM,fashion,retail,112.66,6,0.204,none,2024-01-03\r\n36460,1967,EMEA,home,online,87.35,6,0.216,bundle,2024-05-26\r\n36461,2258,AMER,fashion,online,82.75,3,0.063,coupon,2024-04-03\r\n36462,1740,EMEA,electronics,online,96.37,4,0.060,none,2024-10-21\r\n36463,1623,AMER,toys,retail,134.09,5,0.156,coupon,2024-03-24\r\n36464,1424,APAC,electronics,partner,104.49,2,0.244,coupon,2024-10-26\r\n36465,2432,AMER,sports,online,21.96,2,0.176,none,2024-12-19\r\n36466,1018,APAC,toys,mobile,36.65,7,0.108,none,2024-08-27\r\n36467,2402,AMER,home,retail,32.62,2,0.169,none,2024-11-13\r\n36468,1378,APAC,home,retail,178.62,6,0.105,bundle,2024-10-25\r\n36469,1185,LATAM,grocery,retail,56.28,4,0.137,none,2024-04-28\r\n36470,1137,APAC,fashion,retail,34.41,5,0.121,none,2024-11-22\r\n36471,1807,EMEA,electronics,online,35.17,2,0.100,none,2024-08-02\r\n36472,2133,AMER,grocery,online,40.04,6,0.215,coupon,2024-08-02\r\n36473,2348,EMEA,grocery,retail,51.37,1,0.126,loyalty,2024-07-14\r\n36474,2493,APAC,home,mobile,46.50,7,0.084,none,2024-09-20\r\n36475,1843,EMEA,electronics,retail,103.35,3,0.042,coupon,2024-10-03\r\n36476,1962,APAC,home,online,132.94,5,0.153,coupon,2
024-07-09\r\n36477,2198,EMEA,sports,retail,29.73,8,0.030,none,2024-04-15\r\n36478,1462,LATAM,toys,retail,73.08,3,0.028,none,2024-01-11\r\n36479,1091,EMEA,sports,retail,93.57,5,0.240,coupon,2024-06-15\r\n36480,1411,LATAM,grocery,online,94.90,3,0.198,loyalty,2024-03-02\r\n36481,1628,EMEA,home,online,89.28,5,0.041,none,2024-08-13\r\n36482,1039,AMER,electronics,mobile,43.64,2,0.054,bundle,2024-02-20\r\n36483,1434,EMEA,sports,mobile,53.42,6,0.150,none,2024-08-09\r\n36484,1144,APAC,electronics,online,48.84,5,0.144,coupon,2024-07-20\r\n36485,2396,AMER,fashion,online,36.95,5,0.036,none,2024-10-09\r\n36486,1803,LATAM,fashion,online,61.49,7,0.025,loyalty,2024-05-10\r\n36487,1498,LATAM,grocery,retail,77.49,6,0.081,none,2024-04-22\r\n36488,1017,AMER,grocery,online,61.92,1,0.045,loyalty,2024-10-13\r\n36489,1985,AMER,toys,online,28.82,2,0.191,loyalty,2024-02-20\r\n36490,1481,LATAM,electronics,online,55.22,8,0.229,none,2024-09-06\r\n36491,2484,APAC,electronics,online,66.72,4,0.192,none,2024-04-25\r\n36492,1099,LATAM,home,online,118.09,8,0.038,bundle,2024-02-07\r\n36493,1037,EMEA,home,online,20.49,1,0.190,coupon,2024-10-05\r\n36494,1346,AMER,electronics,retail,34.79,2,0.184,none,2024-03-12\r\n36495,2334,LATAM,grocery,online,18.09,1,0.002,none,2024-01-24\r\n36496,1070,EMEA,grocery,retail,85.78,8,0.035,coupon,2024-06-27\r\n36497,2254,LATAM,toys,retail,90.80,2,0.170,bundle,2024-03-18\r\n36498,1912,APAC,toys,partner,49.06,5,0.030,coupon,2024-07-22\r\n36499,1903,LATAM,electronics,online,36.92,4,0.205,none,2024-04-05\r\n36500,1379,EMEA,grocery,retail,71.14,7,0.164,bundle,2024-06-04\r\n36501,1106,AMER,electronics,online,26.93,4,0.240,none,2024-10-09\r\n36502,1434,EMEA,toys,online,56.29,8,0.043,coupon,2024-01-07\r\n36503,1763,LATAM,fashion,online,40.69,3,0.196,coupon,2024-06-05\r\n36504,2349,APAC,grocery,partner,108.35,4,0.157,bundle,2024-10-12\r\n36505,2058,LATAM,grocery,mobile,19.98,4,0.176,none,2024-01-20\r\n36506,2258,AMER,grocery,online,57.86,2,0.172,coupon,2024-02-16\r\n36507,2319,AM
ER,grocery,retail,32.96,3,0.219,loyalty,2024-10-13\r\n36508,2251,APAC,home,online,70.16,2,0.056,none,2024-03-20\r\n36509,1352,AMER,sports,online,55.14,6,0.229,none,2024-10-25\r\n36510,1598,EMEA,electronics,online,59.36,5,0.042,bundle,2024-01-02\r\n36511,2087,LATAM,home,retail,95.13,2,0.164,none,2024-04-27\r\n36512,2137,LATAM,grocery,partner,50.00,4,0.029,coupon,2024-07-06\r\n36513,1841,AMER,grocery,online,35.47,7,0.113,loyalty,2024-05-07\r\n36514,2403,LATAM,home,online,43.84,5,0.143,none,2024-03-11\r\n36515,1931,APAC,toys,online,54.77,8,0.146,none,2024-01-23\r\n36516,1777,AMER,electronics,mobile,69.10,1,0.172,loyalty,2024-08-28\r\n36517,1345,AMER,grocery,mobile,35.81,8,0.177,bundle,2024-08-13\r\n36518,1778,LATAM,grocery,online,69.25,7,0.123,none,2024-09-12\r\n36519,1965,LATAM,home,retail,88.78,5,0.172,coupon,2024-12-16\r\n36520,2072,AMER,home,mobile,43.23,3,0.155,bundle,2024-04-04\r\n36521,1464,APAC,home,online,59.35,3,0.048,coupon,2024-05-12\r\n36522,2494,AMER,grocery,retail,83.15,5,0.124,none,2024-08-24\r\n36523,2380,AMER,sports,retail,44.65,1,0.219,coupon,2024-12-24\r\n36524,2323,AMER,home,online,50.69,4,0.036,bundle,2024-04-02\r\n36525,2132,LATAM,toys,retail,44.87,7,0.128,none,2024-12-28\r\n36526,2283,AMER,electronics,online,61.64,3,0.101,none,2024-03-07\r\n36527,1457,EMEA,fashion,retail,51.33,6,0.105,none,2024-09-05\r\n36528,1029,EMEA,sports,mobile,80.85,8,0.166,loyalty,2024-02-19\r\n36529,2075,LATAM,sports,retail,30.86,1,0.226,none,2024-09-01\r\n36530,2145,AMER,toys,mobile,173.60,6,0.103,bundle,2024-02-11\r\n36531,1628,EMEA,grocery,retail,28.22,4,0.163,bundle,2024-06-14\r\n36532,1127,EMEA,fashion,partner,48.85,1,0.022,none,2024-05-21\r\n36533,2161,LATAM,electronics,retail,47.55,3,0.086,bundle,2024-02-15\r\n36534,1135,APAC,grocery,mobile,13.09,7,0.159,loyalty,2024-11-10\r\n36535,2151,APAC,sports,online,32.45,5,0.060,none,2024-07-02\r\n36536,2298,APAC,grocery,online,40.72,5,0.191,none,2024-02-02\r\n36537,2181,AMER,grocery,retail,99.55,6,0.021,bundle,2024-06-23\r
\n36538,2285,APAC,electronics,online,43.21,7,0.012,loyalty,2024-04-02\r\n36539,2196,AMER,grocery,online,54.17,5,0.087,coupon,2024-12-15\r\n36540,1031,AMER,grocery,online,98.86,6,0.088,loyalty,2024-08-24\r\n36541,2372,AMER,sports,mobile,66.42,6,0.068,none,2024-02-26\r\n36542,1112,APAC,fashion,retail,80.32,6,0.156,none,2024-01-05\r\n36543,2274,APAC,home,online,19.75,7,0.176,none,2024-02-28\r\n36544,1128,LATAM,sports,retail,18.99,1,0.124,none,2024-12-20\r\n36545,2260,EMEA,toys,retail,29.29,4,0.176,coupon,2024-01-11\r\n36546,1386,AMER,sports,retail,56.20,2,0.155,coupon,2024-05-24\r\n36547,1178,EMEA,fashion,retail,52.75,2,0.071,none,2024-06-11\r\n36548,2060,LATAM,toys,online,52.94,8,0.137,none,2024-04-19\r\n36549,2012,APAC,fashion,online,38.47,7,0.045,bundle,2024-01-04\r\n36550,1738,LATAM,home,retail,96.20,6,0.197,loyalty,2024-07-04\r\n36551,2380,AMER,toys,online,156.70,1,0.229,none,2024-08-06\r\n36552,1694,APAC,fashion,mobile,37.30,3,0.015,none,2024-04-13\r\n36553,1061,APAC,home,online,110.39,6,0.219,bundle,2024-12-05\r\n36554,2417,LATAM,fashion,online,35.19,5,0.105,none,2024-08-26\r\n36555,2067,LATAM,home,online,28.30,7,0.037,none,2024-12-25\r\n36556,1365,LATAM,toys,mobile,82.33,4,0.182,bundle,2024-05-19\r\n36557,1702,AMER,grocery,mobile,80.03,7,0.150,none,2024-01-17\r\n36558,1597,APAC,electronics,retail,39.24,3,0.061,none,2024-03-10\r\n36559,2018,AMER,sports,online,106.35,5,0.015,none,2024-11-22\r\n36560,1791,LATAM,home,online,66.50,2,0.231,none,2024-05-03\r\n36561,2293,LATAM,fashion,online,61.08,2,0.067,coupon,2024-10-14\r\n36562,2445,APAC,toys,online,114.19,3,0.026,bundle,2024-06-26\r\n36563,2302,APAC,sports,online,114.84,3,0.242,none,2024-07-27\r\n36564,2304,LATAM,toys,retail,133.98,8,0.243,none,2024-04-04\r\n36565,2203,APAC,home,online,126.27,7,0.160,coupon,2024-12-01\r\n36566,1516,EMEA,grocery,online,34.95,3,0.238,loyalty,2024-01-05\r\n36567,2241,APAC,fashion,retail,33.83,1,0.161,coupon,2024-08-27\r\n36568,2270,APAC,grocery,retail,48.94,4,0.244,coupon,2024-03-02\
r\n36569,1461,LATAM,sports,mobile,31.45,8,0.166,bundle,2024-03-01\r\n36570,1169,LATAM,home,online,56.30,4,0.179,none,2024-04-04\r\n36571,1620,LATAM,grocery,retail,77.61,3,0.167,coupon,2024-12-20\r\n36572,1410,AMER,sports,mobile,34.43,4,0.228,coupon,2024-06-02\r\n36573,2461,LATAM,home,online,51.22,5,0.206,loyalty,2024-01-21\r\n36574,1946,AMER,grocery,retail,33.91,3,0.037,bundle,2024-04-19\r\n36575,2239,EMEA,home,mobile,53.66,4,0.205,bundle,2024-05-13\r\n36576,2342,AMER,electronics,retail,69.98,7,0.219,coupon,2024-10-12\r\n36577,1354,AMER,grocery,retail,46.04,5,0.180,none,2024-08-19\r\n36578,2162,EMEA,grocery,retail,47.57,5,0.133,coupon,2024-12-20\r\n36579,1843,EMEA,home,online,57.26,6,0.168,coupon,2024-01-23\r\n36580,1460,LATAM,toys,retail,27.16,5,0.140,coupon,2024-03-09\r\n36581,1871,APAC,fashion,retail,68.60,6,0.117,loyalty,2024-07-04\r\n36582,2275,LATAM,fashion,online,64.17,8,0.100,none,2024-05-14\r\n36583,2239,EMEA,home,online,82.94,6,0.249,loyalty,2024-04-04\r\n36584,1262,APAC,fashion,mobile,84.47,3,0.130,coupon,2024-12-24\r\n36585,1658,AMER,grocery,online,94.46,2,0.051,none,2024-03-09\r\n36586,2149,EMEA,home,online,103.49,3,0.168,coupon,2024-11-06\r\n36587,2479,EMEA,fashion,mobile,33.36,2,0.225,none,2024-12-09\r\n36588,1441,LATAM,home,mobile,76.25,3,0.141,none,2024-01-17\r\n36589,1536,LATAM,sports,online,70.52,6,0.035,bundle,2024-08-25\r\n36590,1100,AMER,electronics,online,81.16,1,0.119,none,2024-07-23\r\n36591,1900,APAC,grocery,mobile,49.35,4,0.058,none,2024-04-04\r\n36592,1477,APAC,toys,retail,95.98,6,0.221,none,2024-10-15\r\n36593,1330,EMEA,grocery,retail,76.89,7,0.084,bundle,2024-01-09\r\n36594,1257,APAC,sports,online,47.94,4,0.244,none,2024-07-21\r\n36595,1306,LATAM,grocery,retail,51.23,7,0.143,none,2024-10-15\r\n36596,1572,LATAM,grocery,retail,26.51,2,0.232,none,2024-08-10\r\n36597,2081,APAC,grocery,online,51.11,4,0.024,none,2024-06-19\r\n36598,1983,LATAM,sports,online,40.67,7,0.228,none,2024-06-06\r\n36599,1794,AMER,sports,partner,34.48,4,0.038,none,2024
-09-17\r\n36600,2202,APAC,sports,retail,36.94,2,0.048,none,2024-11-07\r\n36601,2342,AMER,electronics,mobile,57.55,5,0.069,none,2024-04-26\r\n36602,1490,AMER,sports,retail,55.51,5,0.119,none,2024-02-24\r\n36603,1605,APAC,electronics,online,43.80,7,0.123,bundle,2024-06-21\r\n36604,1772,EMEA,electronics,retail,34.81,6,0.113,none,2024-02-19\r\n36605,1556,AMER,electronics,retail,72.65,6,0.209,bundle,2024-05-27\r\n36606,1499,EMEA,grocery,mobile,57.87,2,0.024,coupon,2024-04-25\r\n36607,1677,EMEA,fashion,retail,77.72,6,0.142,coupon,2024-06-18\r\n36608,1397,LATAM,grocery,online,40.15,2,0.217,none,2024-08-09\r\n36609,2010,APAC,fashion,online,59.02,1,0.136,coupon,2024-11-17\r\n36610,1489,AMER,grocery,retail,29.96,1,0.002,none,2024-08-27\r\n36611,1246,EMEA,fashion,online,26.24,6,0.198,coupon,2024-11-20\r\n36612,1381,LATAM,grocery,mobile,76.73,8,0.043,none,2024-03-13\r\n36613,1157,LATAM,electronics,mobile,114.61,2,0.164,loyalty,2024-08-06\r\n36614,1234,AMER,fashion,mobile,82.31,2,0.152,none,2024-12-02\r\n36615,1375,AMER,sports,mobile,32.08,2,0.224,loyalty,2024-01-03\r\n36616,2209,AMER,electronics,online,66.87,4,0.037,bundle,2024-10-06\r\n36617,2146,APAC,toys,online,110.43,3,0.199,loyalty,2024-03-02\r\n36618,1386,AMER,electronics,retail,17.75,4,0.136,none,2024-03-02\r\n36619,1381,LATAM,electronics,retail,92.87,4,0.188,none,2024-02-21\r\n36620,1226,AMER,toys,online,53.22,7,0.240,coupon,2024-09-28\r\n36621,1912,APAC,sports,online,132.28,1,0.005,bundle,2024-04-21\r\n36622,1123,LATAM,electronics,online,34.00,6,0.198,loyalty,2024-02-18\r\n36623,1671,APAC,home,online,52.54,6,0.247,coupon,2024-06-27\r\n36624,1488,AMER,grocery,online,50.22,8,0.203,none,2024-04-19\r\n36625,1651,LATAM,electronics,online,73.54,4,0.186,bundle,2024-04-22\r\n36626,1017,AMER,home,retail,122.40,6,0.011,coupon,2024-10-16\r\n36627,1484,AMER,grocery,online,66.49,3,0.199,bundle,2024-08-15\r\n36628,1981,EMEA,home,online,95.88,8,0.209,bundle,2024-12-13\r\n36629,2050,APAC,grocery,retail,75.27,5,0.250,none,2024-11-12\r\
n36630,2141,AMER,grocery,online,73.69,6,0.149,bundle,2024-08-20\r\n36631,1971,EMEA,electronics,online,21.77,1,0.026,none,2024-02-25\r\n36632,1913,LATAM,electronics,online,41.71,4,0.235,none,2024-06-06\r\n36633,1541,APAC,toys,online,61.12,3,0.184,none,2024-12-21\r\n36634,1667,AMER,sports,mobile,48.55,8,0.145,coupon,2024-03-07\r\n36635,1442,EMEA,electronics,online,45.12,2,0.097,loyalty,2024-08-27\r\n36636,1739,AMER,toys,online,35.50,5,0.161,coupon,2024-10-04\r\n36637,2440,APAC,toys,retail,45.97,3,0.038,none,2024-02-21\r\n36638,1149,LATAM,electronics,partner,48.79,1,0.066,none,2024-04-21\r\n36639,2240,LATAM,grocery,mobile,85.73,4,0.155,loyalty,2024-02-15\r\n36640,1847,LATAM,home,retail,43.23,6,0.017,none,2024-06-23\r\n36641,1299,LATAM,grocery,mobile,134.73,6,0.060,coupon,2024-06-09\r\n36642,1173,LATAM,toys,online,34.58,5,0.088,bundle,2024-02-12\r\n36643,1743,LATAM,grocery,mobile,58.12,5,0.131,none,2024-01-20\r\n36644,2422,APAC,fashion,mobile,190.38,3,0.246,none,2024-02-10\r\n36645,1478,EMEA,home,online,39.69,8,0.197,none,2024-10-24\r\n36646,1599,APAC,toys,online,30.61,8,0.233,loyalty,2024-09-21\r\n36647,1929,LATAM,grocery,retail,32.14,1,0.013,none,2024-10-23\r\n36648,1417,APAC,home,mobile,45.57,3,0.243,none,2024-01-18\r\n36649,1078,APAC,sports,retail,86.66,6,0.199,none,2024-05-03\r\n36650,1877,LATAM,grocery,online,77.55,1,0.081,none,2024-02-19\r\n36651,2317,LATAM,sports,mobile,91.28,7,0.241,bundle,2024-01-14\r\n36652,1703,AMER,sports,online,52.98,1,0.115,bundle,2024-02-18\r\n36653,1075,AMER,grocery,online,90.05,6,0.003,none,2024-04-23\r\n36654,1894,APAC,toys,retail,96.79,3,0.035,bundle,2024-03-25\r\n36655,1706,EMEA,grocery,online,30.70,8,0.235,none,2024-09-05\r\n36656,1845,AMER,grocery,online,66.10,8,0.220,none,2024-12-26\r\n36657,1046,EMEA,sports,online,53.55,3,0.210,none,2024-11-28\r\n36658,2395,APAC,toys,online,50.81,1,0.244,coupon,2024-04-26\r\n36659,1467,LATAM,home,mobile,52.75,7,0.201,bundle,2024-07-14\r\n36660,1069,APAC,grocery,online,43.05,8,0.046,none,2024-12-
19\r\n36661,1570,AMER,electronics,retail,32.82,3,0.107,none,2024-02-01\r\n36662,1538,AMER,grocery,retail,59.51,4,0.093,none,2024-04-01\r\n36663,2024,AMER,fashion,online,46.48,6,0.193,none,2024-01-17\r\n36664,2365,LATAM,toys,online,65.93,2,0.004,none,2024-10-04\r\n36665,1283,APAC,sports,online,95.34,6,0.087,none,2024-01-17\r\n36666,1991,APAC,electronics,retail,61.46,8,0.061,none,2024-07-25\r\n36667,2168,EMEA,home,retail,46.09,1,0.160,coupon,2024-04-17\r\n36668,1022,APAC,grocery,online,72.48,3,0.038,coupon,2024-07-15\r\n36669,1032,AMER,home,online,37.78,6,0.029,bundle,2024-09-23\r\n36670,1424,APAC,home,online,113.80,4,0.004,none,2024-07-28\r\n36671,2089,EMEA,fashion,retail,90.64,4,0.098,coupon,2024-09-16\r\n36672,2049,LATAM,sports,online,111.21,6,0.016,coupon,2024-03-06\r\n36673,1453,APAC,toys,online,62.99,1,0.050,none,2024-09-27\r\n36674,2454,LATAM,home,online,61.26,5,0.229,coupon,2024-09-27\r\n36675,2299,EMEA,sports,retail,51.94,4,0.126,coupon,2024-05-17\r\n36676,1976,AMER,grocery,retail,53.82,4,0.035,none,2024-07-03\r\n36677,2130,EMEA,grocery,online,47.86,5,0.125,none,2024-02-03\r\n36678,1687,APAC,home,retail,61.36,4,0.065,none,2024-07-23\r\n36679,2162,EMEA,grocery,retail,71.15,1,0.006,none,2024-04-15\r\n36680,1025,EMEA,toys,online,32.19,5,0.228,coupon,2024-12-05\r\n36681,1976,AMER,grocery,retail,61.64,7,0.210,none,2024-01-28\r\n36682,1221,LATAM,sports,online,26.92,8,0.218,none,2024-07-05\r\n36683,1612,LATAM,electronics,online,48.28,6,0.052,none,2024-05-12\r\n36684,1771,AMER,grocery,online,39.03,2,0.002,loyalty,2024-12-05\r\n36685,1633,EMEA,grocery,mobile,67.59,3,0.088,coupon,2024-06-22\r\n36686,1515,EMEA,fashion,retail,84.40,7,0.213,none,2024-10-01\r\n36687,1942,APAC,fashion,retail,112.81,4,0.166,none,2024-02-28\r\n36688,2184,APAC,home,online,149.41,4,0.191,none,2024-04-12\r\n36689,1428,APAC,sports,retail,67.07,3,0.118,coupon,2024-09-15\r\n36690,1507,EMEA,home,retail,76.32,4,0.203,coupon,2024-07-02\r\n36691,1633,EMEA,electronics,mobile,59.74,6,0.006,coupon,2024-06
-11\r\n36692,1151,APAC,toys,online,64.86,4,0.060,loyalty,2024-05-01\r\n36693,2379,AMER,fashion,online,64.33,6,0.245,none,2024-10-24\r\n36694,1306,LATAM,fashion,retail,67.11,6,0.033,coupon,2024-02-24\r\n36695,2029,APAC,electronics,mobile,77.49,1,0.240,coupon,2024-08-05\r\n36696,1796,LATAM,home,retail,110.52,7,0.183,coupon,2024-07-25\r\n36697,1255,AMER,grocery,partner,23.57,2,0.136,loyalty,2024-01-08\r\n36698,2113,LATAM,grocery,online,29.30,5,0.000,loyalty,2024-05-11\r\n36699,1712,LATAM,home,retail,29.90,8,0.202,none,2024-09-27\r\n36700,1622,LATAM,grocery,retail,72.03,8,0.125,none,2024-10-14\r\n36701,1159,LATAM,home,retail,46.37,2,0.011,coupon,2024-02-08\r\n36702,1332,APAC,fashion,mobile,37.77,1,0.082,none,2024-01-05\r\n36703,2421,AMER,sports,online,38.47,6,0.065,coupon,2024-12-10\r\n36704,2319,AMER,grocery,online,69.91,5,0.083,none,2024-04-13\r\n36705,2474,LATAM,grocery,mobile,125.63,5,0.178,none,2024-12-02\r\n36706,2053,AMER,grocery,online,21.74,4,0.038,loyalty,2024-11-10\r\n36707,2244,LATAM,grocery,retail,52.50,5,0.035,coupon,2024-04-16\r\n36708,1312,EMEA,home,retail,23.79,6,0.142,none,2024-10-24\r\n36709,1496,AMER,grocery,online,20.92,1,0.080,loyalty,2024-05-08\r\n36710,2428,LATAM,home,online,110.99,6,0.120,coupon,2024-08-11\r\n36711,1609,LATAM,sports,online,31.84,6,0.155,coupon,2024-06-14\r\n36712,2231,LATAM,home,online,38.87,6,0.003,bundle,2024-06-15\r\n36713,2124,AMER,fashion,retail,56.70,7,0.056,none,2024-01-26\r\n36714,1306,LATAM,grocery,online,82.12,6,0.084,none,2024-08-08\r\n36715,1286,EMEA,fashion,online,38.83,3,0.191,none,2024-01-07\r\n36716,2347,AMER,sports,retail,44.38,5,0.145,none,2024-08-16\r\n36717,1750,LATAM,electronics,online,54.69,8,0.106,coupon,2024-05-03\r\n36718,1560,AMER,toys,retail,78.52,2,0.231,none,2024-08-27\r\n36719,1540,LATAM,grocery,online,37.43,6,0.197,none,2024-08-13\r\n36720,1129,LATAM,sports,online,14.84,4,0.231,none,2024-01-20\r\n36721,2397,LATAM,fashion,retail,70.80,7,0.051,none,2024-09-05\r\n36722,1539,LATAM,home,online,134.25,8,
0.011,none,2024-11-07\r\n36723,1404,EMEA,electronics,mobile,14.26,1,0.207,none,2024-10-25\r\n36724,1587,LATAM,electronics,retail,58.18,3,0.144,bundle,2024-08-03\r\n36725,1167,EMEA,toys,mobile,77.51,4,0.085,none,2024-05-22\r\n36726,2033,LATAM,sports,online,36.11,7,0.092,loyalty,2024-04-19\r\n36727,1602,EMEA,home,retail,47.06,5,0.130,bundle,2024-06-04\r\n36728,2183,EMEA,fashion,online,29.77,8,0.100,none,2024-09-16\r\n36729,1370,APAC,sports,online,51.96,5,0.221,none,2024-01-06\r\n36730,2095,EMEA,grocery,retail,56.73,8,0.144,none,2024-11-07\r\n36731,1389,LATAM,grocery,online,42.63,4,0.236,loyalty,2024-07-05\r\n36732,1570,AMER,electronics,online,28.66,5,0.154,none,2024-01-20\r\n36733,2111,EMEA,home,online,75.50,3,0.043,coupon,2024-09-04\r\n36734,1733,LATAM,sports,retail,126.00,4,0.026,none,2024-08-16\r\n36735,2360,EMEA,sports,online,116.29,4,0.206,none,2024-12-21\r\n36736,2494,AMER,grocery,online,168.05,7,0.109,none,2024-06-12\r\n36737,1694,APAC,grocery,online,53.16,6,0.029,loyalty,2024-03-20\r\n36738,2028,APAC,grocery,retail,79.14,4,0.102,none,2024-12-13\r\n36739,2445,APAC,toys,online,42.70,7,0.244,none,2024-01-16\r\n36740,2091,LATAM,electronics,retail,118.81,7,0.018,none,2024-04-13\r\n36741,1719,LATAM,electronics,online,54.95,5,0.035,coupon,2024-02-17\r\n36742,1378,APAC,grocery,online,60.19,2,0.011,none,2024-11-17\r\n36743,2354,LATAM,grocery,mobile,116.32,2,0.075,none,2024-10-08\r\n36744,1767,AMER,home,retail,65.72,4,0.216,bundle,2024-05-07\r\n36745,1361,LATAM,home,retail,67.75,5,0.004,coupon,2024-03-21\r\n36746,1046,EMEA,grocery,retail,69.52,6,0.054,bundle,2024-09-01\r\n36747,1140,LATAM,electronics,retail,55.28,3,0.108,coupon,2024-12-06\r\n36748,1008,AMER,sports,online,67.04,3,0.179,none,2024-12-04\r\n36749,2498,LATAM,toys,retail,38.04,2,0.210,none,2024-06-17\r\n36750,2231,LATAM,toys,retail,79.61,6,0.134,loyalty,2024-05-28\r\n36751,1282,LATAM,sports,online,35.47,1,0.229,none,2024-06-05\r\n36752,2152,EMEA,electronics,mobile,118.85,7,0.114,none,2024-08-16\r\n36753,1873,
EMEA,grocery,partner,28.63,7,0.200,loyalty,2024-05-14\r\n36754,1156,APAC,electronics,online,24.57,7,0.236,loyalty,2024-01-01\r\n36755,1594,LATAM,fashion,retail,84.05,8,0.057,none,2024-07-15\r\n36756,2279,LATAM,fashion,retail,88.57,4,0.126,bundle,2024-08-21\r\n36757,2164,AMER,grocery,online,64.94,7,0.242,none,2024-12-17\r\n36758,2326,LATAM,fashion,online,53.11,3,0.186,bundle,2024-07-05\r\n36759,1304,LATAM,grocery,retail,42.05,3,0.149,none,2024-12-15\r\n36760,1021,AMER,grocery,retail,97.78,7,0.024,none,2024-01-09\r\n36761,1938,APAC,home,retail,79.39,5,0.061,coupon,2024-09-19\r\n36762,1186,APAC,sports,retail,27.73,8,0.184,bundle,2024-09-25\r\n36763,2285,APAC,grocery,online,35.19,1,0.080,loyalty,2024-12-09\r\n36764,1590,APAC,fashion,online,55.62,6,0.139,none,2024-01-27\r\n36765,1597,APAC,electronics,retail,40.18,6,0.081,coupon,2024-11-23\r\n36766,1299,LATAM,fashion,retail,49.04,7,0.154,none,2024-11-09\r\n36767,1627,LATAM,electronics,online,81.82,1,0.151,none,2024-02-16\r\n36768,1507,EMEA,grocery,online,65.66,7,0.171,none,2024-01-13\r\n36769,1725,APAC,grocery,retail,49.64,6,0.065,none,2024-06-07\r\n36770,2029,APAC,grocery,retail,94.29,4,0.054,coupon,2024-01-11\r\n36771,2379,AMER,grocery,retail,102.16,7,0.156,coupon,2024-06-18\r\n36772,1439,LATAM,sports,online,32.54,1,0.200,bundle,2024-09-04\r\n36773,2365,LATAM,home,mobile,30.55,1,0.104,none,2024-12-25\r\n36774,1734,AMER,sports,online,29.76,6,0.179,coupon,2024-02-25\r\n36775,1691,LATAM,fashion,retail,12.46,3,0.091,coupon,2024-06-13\r\n36776,1257,APAC,sports,online,92.43,7,0.127,coupon,2024-01-14\r\n36777,1444,EMEA,grocery,online,37.33,4,0.057,bundle,2024-05-21\r\n36778,2275,LATAM,grocery,retail,62.90,5,0.241,none,2024-04-23\r\n36779,1741,AMER,grocery,online,101.72,2,0.017,none,2024-11-10\r\n36780,1864,EMEA,home,online,29.33,3,0.199,none,2024-01-23\r\n36781,2421,AMER,toys,online,49.56,2,0.226,none,2024-02-20\r\n36782,1679,APAC,grocery,online,115.15,7,0.110,none,2024-03-06\r\n36783,1986,LATAM,toys,retail,48.36,6,0.005,none,
2024-10-24\r\n36784,2186,LATAM,electronics,retail,95.78,5,0.000,none,2024-12-03\r\n36785,2444,EMEA,sports,retail,99.00,7,0.040,none,2024-05-01\r\n36786,1045,LATAM,sports,mobile,82.38,7,0.236,none,2024-02-20\r\n36787,1454,APAC,fashion,retail,38.81,6,0.067,coupon,2024-11-24\r\n36788,2085,AMER,grocery,online,64.76,3,0.158,none,2024-03-08\r\n36789,1976,AMER,fashion,online,43.75,3,0.102,loyalty,2024-04-19\r\n36790,1074,LATAM,home,online,61.45,4,0.006,none,2024-03-21\r\n36791,2128,EMEA,home,retail,55.17,1,0.189,none,2024-08-18\r\n36792,2421,AMER,electronics,mobile,96.33,4,0.200,none,2024-12-14\r\n36793,1700,EMEA,electronics,retail,48.42,1,0.120,bundle,2024-09-03\r\n36794,1603,EMEA,fashion,online,79.41,6,0.105,none,2024-03-10\r\n36795,1079,LATAM,toys,mobile,62.00,6,0.189,none,2024-12-21\r\n36796,2245,APAC,home,online,218.83,7,0.064,none,2024-09-03\r\n36797,1156,APAC,fashion,online,48.96,2,0.070,loyalty,2024-09-06\r\n36798,2436,LATAM,grocery,online,68.37,5,0.220,none,2024-09-28\r\n36799,1362,AMER,grocery,retail,79.56,4,0.097,none,2024-02-11\r\n36800,2057,APAC,electronics,retail,57.19,4,0.105,bundle,2024-10-03\r\n36801,1220,LATAM,grocery,mobile,42.32,2,0.180,none,2024-03-19\r\n36802,1953,EMEA,electronics,retail,50.01,6,0.166,coupon,2024-12-23\r\n36803,1483,EMEA,electronics,online,74.19,4,0.025,none,2024-12-16\r\n36804,2069,AMER,fashion,partner,43.56,4,0.002,coupon,2024-12-08\r\n36805,1227,AMER,sports,online,164.04,1,0.238,none,2024-05-24\r\n36806,2441,EMEA,fashion,online,201.31,6,0.025,coupon,2024-08-26\r\n36807,1537,LATAM,home,online,37.61,1,0.205,coupon,2024-07-25\r\n36808,1616,APAC,grocery,online,61.42,1,0.012,bundle,2024-10-15\r\n36809,2221,LATAM,home,online,49.28,2,0.249,none,2024-02-11\r\n36810,1198,AMER,sports,retail,60.00,5,0.039,none,2024-12-07\r\n36811,1187,AMER,grocery,partner,124.27,1,0.112,none,2024-04-01\r\n36812,1491,EMEA,sports,mobile,66.69,5,0.165,none,2024-01-09\r\n36813,2428,LATAM,toys,retail,136.94,4,0.163,none,2024-07-06\r\n36814,1388,AMER,fashion,online
,42.07,3,0.056,none,2024-09-02\r\n36815,2298,APAC,fashion,online,53.13,5,0.142,none,2024-03-28\r\n36816,2084,LATAM,fashion,retail,41.66,2,0.218,none,2024-08-27\r\n36817,1440,AMER,fashion,online,72.33,1,0.162,none,2024-10-22\r\n36818,1980,LATAM,electronics,online,61.51,1,0.246,coupon,2024-02-12\r\n36819,2313,LATAM,grocery,mobile,85.49,7,0.101,coupon,2024-08-06\r\n36820,2494,AMER,grocery,online,58.19,2,0.221,none,2024-12-02\r\n36821,2283,AMER,electronics,retail,86.83,8,0.048,none,2024-05-26\r\n36822,2287,EMEA,fashion,online,42.63,7,0.165,none,2024-02-16\r\n36823,1774,EMEA,home,online,77.89,7,0.046,coupon,2024-03-06\r\n36824,2313,LATAM,toys,retail,59.37,1,0.061,none,2024-04-08\r\n36825,1175,AMER,home,online,64.28,5,0.089,loyalty,2024-03-23\r\n36826,1974,EMEA,home,retail,104.35,1,0.231,none,2024-10-09\r\n36827,2367,AMER,electronics,online,138.25,1,0.048,coupon,2024-09-03\r\n36828,1406,LATAM,home,retail,50.83,7,0.036,coupon,2024-04-12\r\n36829,1088,LATAM,home,retail,11.93,7,0.043,none,2024-12-15\r\n36830,1756,EMEA,toys,mobile,113.54,7,0.104,none,2024-10-25\r\n36831,1181,LATAM,toys,retail,40.81,3,0.163,none,2024-12-08\r\n36832,1867,AMER,electronics,online,109.22,5,0.060,loyalty,2024-08-18\r\n36833,2040,LATAM,sports,retail,39.09,2,0.175,none,2024-03-13\r\n36834,1738,LATAM,grocery,online,66.51,3,0.150,none,2024-02-01\r\n36835,2374,LATAM,grocery,mobile,47.03,7,0.221,none,2024-08-20\r\n36836,2152,EMEA,grocery,retail,56.09,7,0.047,none,2024-11-28\r\n36837,2218,EMEA,grocery,online,65.50,3,0.092,none,2024-12-26\r\n36838,1688,LATAM,electronics,retail,111.58,6,0.193,none,2024-07-26\r\n36839,2445,APAC,grocery,retail,67.08,4,0.223,none,2024-03-23\r\n36840,1842,LATAM,sports,mobile,56.13,1,0.128,coupon,2024-04-27\r\n36841,1826,LATAM,fashion,retail,41.02,8,0.189,none,2024-04-04\r\n36842,2263,AMER,fashion,retail,59.15,8,0.244,none,2024-08-21\r\n36843,2304,LATAM,grocery,online,69.30,5,0.149,none,2024-04-15\r\n36844,1806,APAC,grocery,retail,59.34,8,0.017,none,2024-04-12\r\n36845,2402,AMER
,toys,online,95.55,5,0.128,bundle,2024-10-05\r\n36846,1327,APAC,grocery,online,90.01,2,0.130,coupon,2024-04-24\r\n36847,1920,LATAM,fashion,online,112.32,3,0.189,bundle,2024-07-05\r\n36848,2294,EMEA,sports,online,48.19,4,0.180,loyalty,2024-06-17\r\n36849,1144,APAC,electronics,mobile,46.46,8,0.148,coupon,2024-10-25\r\n36850,2437,LATAM,grocery,online,99.40,8,0.218,bundle,2024-06-22\r\n36851,2240,LATAM,fashion,online,66.45,1,0.015,coupon,2024-08-05\r\n36852,1957,AMER,grocery,mobile,34.55,7,0.236,coupon,2024-02-23\r\n36853,1177,LATAM,grocery,mobile,110.36,7,0.054,coupon,2024-08-01\r\n36854,1626,EMEA,grocery,retail,45.35,4,0.158,loyalty,2024-06-25\r\n36855,1667,AMER,home,online,95.79,5,0.224,none,2024-09-08\r\n36856,2450,EMEA,toys,online,44.36,3,0.120,bundle,2024-11-22\r\n36857,1795,EMEA,grocery,retail,450.51,4,0.192,coupon,2024-06-26\r\n36858,2300,EMEA,grocery,retail,77.99,1,0.034,none,2024-04-20\r\n36859,2398,EMEA,grocery,retail,113.79,3,0.042,none,2024-07-26\r\n36860,1413,LATAM,electronics,retail,52.50,2,0.088,loyalty,2024-07-02\r\n36861,1092,AMER,fashion,retail,68.67,3,0.009,coupon,2024-08-27\r\n36862,1814,AMER,fashion,online,50.32,5,0.058,none,2024-06-28\r\n36863,2100,APAC,sports,retail,67.27,7,0.107,none,2024-03-19\r\n36864,2274,APAC,fashion,retail,43.16,7,0.133,bundle,2024-04-28\r\n36865,1767,AMER,grocery,online,53.84,1,0.107,none,2024-10-24\r\n36866,1644,EMEA,grocery,partner,46.81,5,0.084,loyalty,2024-05-19\r\n36867,2384,LATAM,fashion,retail,38.34,1,0.008,none,2024-10-12\r\n36868,1262,APAC,grocery,online,59.41,8,0.079,coupon,2024-04-08\r\n36869,1739,AMER,toys,online,45.57,3,0.005,none,2024-09-28\r\n36870,1195,AMER,grocery,online,38.48,3,0.227,none,2024-01-21\r\n36871,2107,APAC,grocery,retail,92.50,3,0.190,none,2024-08-28\r\n36872,1104,APAC,fashion,retail,53.68,7,0.095,coupon,2024-09-02\r\n36873,2312,APAC,electronics,mobile,48.09,6,0.243,none,2024-09-05\r\n36874,1939,LATAM,home,online,263.07,1,0.025,bundle,2024-02-03\r\n36875,2052,LATAM,electronics,mobile,93.64,7,0
.043,coupon,2024-11-23\r\n36876,1134,APAC,grocery,mobile,71.91,4,0.219,none,2024-01-27\r\n36877,1930,AMER,home,retail,34.17,7,0.237,none,2024-07-02\r\n36878,1215,LATAM,grocery,partner,81.58,3,0.187,coupon,2024-08-18\r\n36879,2217,LATAM,grocery,online,71.38,8,0.086,none,2024-12-03\r\n36880,1803,LATAM,home,online,53.79,6,0.027,none,2024-10-13\r\n36881,1775,EMEA,toys,online,68.68,4,0.223,none,2024-06-07\r\n36882,1031,AMER,grocery,retail,29.24,5,0.122,none,2024-02-15\r\n36883,1118,AMER,home,retail,50.17,4,0.009,none,2024-10-03\r\n36884,1568,AMER,grocery,online,77.53,6,0.008,bundle,2024-07-18\r\n36885,1087,AMER,grocery,online,92.96,3,0.043,none,2024-06-26\r\n36886,1660,AMER,home,mobile,52.30,7,0.052,none,2024-02-02\r\n36887,2104,EMEA,home,mobile,122.37,5,0.209,none,2024-10-02\r\n36888,2133,AMER,home,mobile,66.72,3,0.001,bundle,2024-09-13\r\n36889,2451,APAC,sports,online,44.28,1,0.172,none,2024-11-03\r\n36890,2104,EMEA,toys,retail,54.69,3,0.164,none,2024-05-13\r\n36891,2314,EMEA,home,online,104.62,3,0.162,coupon,2024-03-25\r\n36892,1920,LATAM,grocery,retail,46.21,8,0.159,loyalty,2024-12-07\r\n36893,1831,APAC,grocery,online,22.85,7,0.103,coupon,2024-10-09\r\n36894,1232,LATAM,grocery,partner,24.83,1,0.163,none,2024-07-24\r\n36895,1472,AMER,fashion,retail,43.32,7,0.055,none,2024-06-13\r\n36896,1642,EMEA,fashion,online,62.63,6,0.051,none,2024-10-09\r\n36897,1397,LATAM,electronics,online,46.97,7,0.173,bundle,2024-08-20\r\n36898,1960,EMEA,home,mobile,66.62,3,0.137,none,2024-02-12\r\n36899,1766,AMER,toys,online,24.15,6,0.134,none,2024-07-08\r\n36900,2288,AMER,home,online,95.85,6,0.030,coupon,2024-08-02\r\n36901,1107,APAC,home,partner,22.02,5,0.109,bundle,2024-07-25\r\n36902,1943,AMER,grocery,partner,61.51,7,0.080,none,2024-09-07\r\n36903,1372,APAC,toys,retail,80.55,3,0.056,coupon,2024-12-02\r\n36904,1677,EMEA,grocery,retail,54.07,3,0.015,none,2024-06-15\r\n36905,1416,EMEA,electronics,retail,47.07,4,0.013,coupon,2024-02-22\r\n36906,1946,AMER,grocery,retail,29.37,6,0.161,none,2024
-09-14\r\n36907,2072,AMER,grocery,retail,60.61,8,0.001,bundle,2024-07-04\r\n36908,1624,AMER,fashion,retail,35.35,1,0.040,bundle,2024-10-26\r\n36909,1055,AMER,grocery,retail,67.03,4,0.064,none,2024-11-09\r\n36910,1681,LATAM,electronics,online,43.98,3,0.205,none,2024-07-01\r\n36911,1776,APAC,grocery,mobile,34.53,4,0.196,bundle,2024-01-18\r\n36912,2212,EMEA,toys,online,70.16,6,0.187,none,2024-10-24\r\n36913,1899,APAC,sports,retail,31.76,4,0.071,coupon,2024-04-09\r\n36914,1396,EMEA,home,mobile,156.70,8,0.171,none,2024-01-26\r\n36915,2102,APAC,home,retail,28.07,2,0.165,none,2024-02-09\r\n36916,1691,LATAM,electronics,online,69.56,2,0.067,coupon,2024-07-13\r\n36917,2186,LATAM,home,partner,68.62,2,0.173,none,2024-01-19\r\n36918,1448,EMEA,fashion,retail,59.27,2,0.135,coupon,2024-11-15\r\n36919,1312,EMEA,grocery,online,38.92,6,0.081,none,2024-12-10\r\n36920,1133,EMEA,electronics,online,35.29,8,0.106,coupon,2024-12-17\r\n36921,1850,APAC,home,online,98.14,3,0.195,none,2024-08-23\r\n36922,1795,EMEA,electronics,online,48.49,8,0.011,none,2024-11-24\r\n36923,2274,APAC,grocery,online,49.04,8,0.100,none,2024-12-19\r\n36924,1160,LATAM,home,online,56.57,2,0.193,none,2024-07-22\r\n36925,2495,EMEA,grocery,online,44.24,2,0.247,none,2024-02-11\r\n36926,1154,LATAM,toys,online,41.45,1,0.052,none,2024-12-13\r\n36927,1658,AMER,sports,mobile,100.30,1,0.061,coupon,2024-09-27\r\n36928,2297,EMEA,grocery,online,60.92,7,0.127,none,2024-01-05\r\n36929,2385,APAC,grocery,mobile,60.07,7,0.121,coupon,2024-10-15\r\n36930,1832,APAC,fashion,mobile,75.87,8,0.216,coupon,2024-12-05\r\n36931,1503,APAC,grocery,mobile,49.29,7,0.246,coupon,2024-06-21\r\n36932,1850,APAC,toys,mobile,72.89,2,0.165,none,2024-07-28\r\n36933,1601,APAC,sports,online,77.55,8,0.135,none,2024-04-16\r\n36934,1904,APAC,fashion,mobile,51.09,8,0.141,coupon,2024-06-01\r\n36935,2428,LATAM,fashion,retail,25.41,3,0.228,none,2024-09-27\r\n36936,2364,APAC,sports,online,107.26,6,0.016,none,2024-04-25\r\n36937,1635,APAC,electronics,retail,46.81,7,0.053
,none,2024-07-06\r\n36938,1328,APAC,toys,retail,42.87,3,0.247,none,2024-03-20\r\n36939,1521,LATAM,sports,online,37.97,5,0.130,loyalty,2024-06-26\r\n36940,1406,LATAM,electronics,online,71.40,3,0.229,bundle,2024-01-10\r\n36941,1166,AMER,electronics,online,57.49,6,0.006,none,2024-06-15\r\n36942,1121,EMEA,toys,online,55.82,6,0.208,coupon,2024-12-16\r\n36943,1820,AMER,home,online,57.06,1,0.009,bundle,2024-06-04\r\n36944,2241,APAC,toys,retail,46.19,2,0.012,none,2024-01-21\r\n36945,2396,AMER,electronics,retail,104.29,8,0.222,none,2024-03-11\r\n36946,1879,EMEA,sports,mobile,40.00,5,0.207,none,2024-11-17\r\n36947,1026,APAC,electronics,retail,56.30,5,0.213,none,2024-01-11\r\n36948,1782,LATAM,sports,online,35.70,7,0.123,none,2024-09-12\r\n36949,2039,EMEA,home,online,88.27,8,0.012,none,2024-01-17\r\n36950,1121,EMEA,toys,online,64.66,6,0.010,coupon,2024-12-11\r\n36951,1850,APAC,fashion,online,63.08,5,0.057,bundle,2024-01-24\r\n36952,1254,APAC,grocery,retail,105.96,8,0.185,coupon,2024-08-17\r\n36953,1217,EMEA,fashion,retail,102.51,6,0.066,loyalty,2024-03-12\r\n36954,2159,AMER,sports,retail,13.44,2,0.186,bundle,2024-04-25\r\n36955,1565,AMER,grocery,retail,26.02,2,0.049,loyalty,2024-05-05\r\n36956,1452,LATAM,grocery,retail,64.87,6,0.192,none,2024-09-13\r\n36957,1356,LATAM,grocery,online,66.49,8,0.091,none,2024-01-07\r\n36958,1078,APAC,fashion,mobile,77.44,3,0.043,none,2024-01-10\r\n36959,1242,LATAM,home,online,35.54,7,0.036,bundle,2024-01-21\r\n36960,1415,AMER,electronics,online,82.04,7,0.129,coupon,2024-10-11\r\n36961,1801,LATAM,toys,retail,58.86,2,0.227,none,2024-09-11\r\n36962,2444,EMEA,home,online,36.27,8,0.234,coupon,2024-08-26\r\n36963,1395,APAC,electronics,online,102.01,5,0.153,none,2024-08-14\r\n36964,2134,AMER,electronics,online,69.57,6,0.090,bundle,2024-01-28\r\n36965,2420,EMEA,toys,mobile,67.47,1,0.143,none,2024-11-03\r\n36966,1266,AMER,fashion,online,111.22,6,0.090,none,2024-06-05\r\n36967,2361,EMEA,sports,online,43.31,5,0.007,none,2024-08-15\r\n36968,1934,EMEA,fashion,
online,28.56,1,0.111,bundle,2024-12-04\r\n36969,2375,AMER,sports,mobile,54.84,5,0.196,none,2024-10-12\r\n36970,1683,AMER,fashion,online,37.80,7,0.026,loyalty,2024-03-02\r\n36971,1019,APAC,grocery,mobile,41.33,2,0.220,none,2024-06-12\r\n36972,1003,APAC,grocery,online,19.46,3,0.220,none,2024-01-06\r\n36973,1552,EMEA,grocery,retail,89.06,4,0.015,none,2024-02-12\r\n36974,1934,EMEA,electronics,mobile,53.57,5,0.132,none,2024-04-26\r\n36975,2104,EMEA,sports,online,64.99,3,0.148,loyalty,2024-06-24\r\n36976,2476,APAC,electronics,retail,82.45,6,0.188,none,2024-01-26\r\n36977,1620,LATAM,electronics,online,57.63,1,0.049,coupon,2024-04-26\r\n36978,1682,EMEA,home,mobile,83.00,4,0.099,coupon,2024-07-22\r\n36979,1646,APAC,home,retail,103.35,5,0.225,coupon,2024-11-05\r\n36980,2189,LATAM,home,online,94.17,7,0.224,bundle,2024-10-27\r\n36981,1585,AMER,grocery,retail,65.44,8,0.186,coupon,2024-03-05\r\n36982,2328,EMEA,sports,mobile,69.14,6,0.140,coupon,2024-06-24\r\n36983,1350,LATAM,electronics,online,34.71,5,0.017,none,2024-07-15\r\n36984,1743,LATAM,toys,mobile,29.83,2,0.225,none,2024-10-17\r\n36985,1017,AMER,home,online,108.93,8,0.000,bundle,2024-02-26\r\n36986,1337,APAC,grocery,partner,35.37,6,0.058,none,2024-10-08\r\n36987,2386,EMEA,fashion,retail,93.80,4,0.036,loyalty,2024-04-25\r\n36988,1429,APAC,home,online,52.60,1,0.009,none,2024-08-27\r\n36989,1864,EMEA,sports,online,29.53,2,0.163,none,2024-03-14\r\n36990,1628,EMEA,electronics,online,103.95,2,0.187,coupon,2024-04-02\r\n36991,1735,LATAM,fashion,online,69.97,6,0.217,none,2024-08-28\r\n36992,2117,EMEA,fashion,mobile,60.37,7,0.130,bundle,2024-09-19\r\n36993,1272,AMER,electronics,online,101.91,7,0.053,none,2024-02-13\r\n36994,1064,AMER,grocery,online,51.63,6,0.072,coupon,2024-09-02\r\n36995,1744,EMEA,sports,online,65.62,6,0.129,bundle,2024-02-22\r\n36996,1702,AMER,electronics,online,82.87,7,0.124,none,2024-10-13\r\n36997,1805,EMEA,electronics,online,185.74,6,0.248,bundle,2024-04-05\r\n36998,2478,AMER,electronics,partner,43.78,1,0.020
,none,2024-03-03\r\n36999,2399,LATAM,sports,retail,37.73,5,0.125,none,2024-05-21\r\n37000,2248,LATAM,toys,online,118.06,1,0.208,coupon,2024-09-20\r\n37001,1315,AMER,fashion,retail,61.87,5,0.012,coupon,2024-11-07\r\n37002,1328,APAC,fashion,retail,35.40,6,0.192,coupon,2024-02-25\r\n37003,1654,EMEA,electronics,online,71.24,3,0.108,none,2024-09-25\r\n37004,1440,AMER,grocery,online,56.93,1,0.148,none,2024-11-06\r\n37005,1915,LATAM,grocery,online,81.41,2,0.194,bundle,2024-03-28\r\n37006,2017,EMEA,fashion,online,57.67,8,0.025,none,2024-05-09\r\n37007,2051,APAC,fashion,mobile,44.49,4,0.193,none,2024-09-03\r\n37008,1489,AMER,home,mobile,46.10,6,0.045,coupon,2024-04-11\r\n37009,1619,APAC,sports,retail,60.11,2,0.095,none,2024-04-06\r\n37010,1796,LATAM,grocery,mobile,63.94,2,0.141,loyalty,2024-05-19\r\n37011,1832,APAC,home,online,46.86,2,0.030,none,2024-05-19\r\n37012,1198,AMER,electronics,retail,51.31,4,0.178,loyalty,2024-01-13\r\n37013,2034,LATAM,home,mobile,56.83,3,0.021,coupon,2024-06-11\r\n37014,2270,APAC,home,online,25.63,6,0.015,coupon,2024-07-15\r\n37015,2394,EMEA,fashion,online,101.47,8,0.235,coupon,2024-05-20\r\n37016,2118,AMER,toys,online,67.87,6,0.149,coupon,2024-07-27\r\n37017,1409,APAC,grocery,retail,53.64,3,0.239,bundle,2024-09-17\r\n37018,1133,EMEA,grocery,retail,85.54,8,0.208,loyalty,2024-02-15\r\n37019,2196,AMER,electronics,mobile,72.22,5,0.245,coupon,2024-05-28\r\n37020,1472,AMER,sports,retail,75.79,4,0.170,none,2024-02-02\r\n37021,1436,APAC,home,mobile,72.06,1,0.226,bundle,2024-05-21\r\n37022,2078,APAC,grocery,retail,82.01,3,0.078,loyalty,2024-11-11\r\n37023,1780,APAC,home,online,71.77,3,0.225,none,2024-02-08\r\n37024,1877,LATAM,sports,online,67.75,6,0.242,coupon,2024-09-17\r\n37025,1051,EMEA,sports,online,49.32,7,0.115,none,2024-03-02\r\n37026,1966,APAC,toys,mobile,110.87,5,0.210,none,2024-07-10\r\n37027,1944,AMER,toys,retail,63.12,2,0.133,bundle,2024-06-23\r\n37028,1354,AMER,toys,online,40.22,7,0.241,none,2024-09-06\r\n37029,2413,AMER,fashion,retail,51.51,
6,0.062,none,2024-02-25\r\n37030,2143,AMER,home,online,126.00,4,0.088,coupon,2024-10-23\r\n37031,2316,EMEA,electronics,retail,28.04,5,0.002,none,2024-03-24\r\n37032,2095,EMEA,electronics,online,95.35,3,0.231,loyalty,2024-09-07\r\n37033,2265,APAC,grocery,partner,26.23,7,0.034,coupon,2024-02-05\r\n37034,1756,EMEA,fashion,online,38.86,4,0.017,none,2024-10-05\r\n37035,1742,AMER,grocery,mobile,32.75,6,0.115,none,2024-09-25\r\n37036,1058,LATAM,grocery,online,70.66,2,0.247,none,2024-09-04\r\n37037,1633,EMEA,sports,retail,30.30,2,0.133,coupon,2024-11-20\r\n37038,1408,AMER,home,retail,19.85,7,0.199,loyalty,2024-09-15\r\n37039,1379,EMEA,toys,online,80.87,1,0.069,loyalty,2024-06-02\r\n37040,1119,LATAM,toys,mobile,129.70,3,0.151,none,2024-12-13\r\n37041,1465,AMER,electronics,online,38.78,7,0.082,coupon,2024-06-19\r\n37042,2403,LATAM,grocery,retail,51.98,5,0.211,bundle,2024-06-21\r\n37043,2321,APAC,home,retail,36.78,3,0.115,none,2024-06-17\r\n37044,1431,APAC,sports,retail,66.32,2,0.108,loyalty,2024-05-08\r\n37045,2228,EMEA,grocery,retail,78.26,2,0.229,none,2024-10-26\r\n37046,1600,AMER,electronics,retail,62.91,7,0.211,coupon,2024-06-20\r\n37047,1936,EMEA,home,retail,27.44,8,0.143,coupon,2024-08-20\r\n37048,2315,LATAM,grocery,retail,55.29,1,0.197,bundle,2024-10-15\r\n37049,1748,APAC,fashion,retail,113.58,3,0.207,none,2024-06-10\r\n37050,1041,APAC,toys,online,25.37,8,0.153,none,2024-12-07\r\n37051,2316,EMEA,grocery,retail,44.83,5,0.209,bundle,2024-12-21\r\n37052,1666,LATAM,grocery,online,28.44,1,0.123,none,2024-09-09\r\n37053,1528,EMEA,toys,online,133.47,4,0.006,coupon,2024-01-16\r\n37054,1857,LATAM,electronics,retail,58.55,5,0.040,bundle,2024-04-04\r\n37055,1420,APAC,home,retail,165.33,8,0.244,none,2024-01-23\r\n37056,1534,EMEA,electronics,retail,107.74,4,0.191,none,2024-05-04\r\n37057,1786,APAC,grocery,online,68.53,7,0.038,bundle,2024-04-21\r\n37058,1191,EMEA,grocery,retail,59.85,6,0.070,bundle,2024-08-11\r\n37059,1299,LATAM,sports,partner,33.57,2,0.089,none,2024-07-10\r\n37060,
2290,LATAM,grocery,retail,37.65,7,0.025,none,2024-08-02\r\n37061,1859,AMER,electronics,mobile,50.86,6,0.040,coupon,2024-08-14\r\n37062,1708,LATAM,fashion,online,222.00,7,0.028,bundle,2024-09-14\r\n37063,2460,AMER,home,online,80.60,6,0.093,bundle,2024-12-05\r\n37064,2262,APAC,home,retail,14.47,4,0.117,coupon,2024-02-14\r\n37065,1058,LATAM,sports,retail,73.79,5,0.234,none,2024-04-06\r\n37066,1085,EMEA,home,online,97.46,8,0.138,none,2024-06-20\r\n37067,1515,EMEA,grocery,retail,142.03,5,0.116,none,2024-10-18\r\n37068,2378,LATAM,home,retail,73.84,5,0.113,coupon,2024-01-25\r\n37069,1270,LATAM,grocery,mobile,38.23,4,0.235,none,2024-09-09\r\n37070,2452,LATAM,electronics,retail,66.94,1,0.141,none,2024-05-27\r\n37071,1106,AMER,grocery,online,35.59,3,0.199,none,2024-10-19\r\n37072,2185,EMEA,sports,online,63.17,6,0.115,none,2024-09-06\r\n37073,1711,APAC,electronics,mobile,20.78,6,0.076,none,2024-02-10\r\n37074,1554,AMER,sports,online,76.23,3,0.155,none,2024-10-10\r\n37075,1788,AMER,electronics,online,87.01,6,0.166,none,2024-03-23\r\n37076,1301,AMER,sports,retail,45.81,5,0.010,coupon,2024-07-08\r\n37077,1624,AMER,sports,online,36.42,8,0.226,loyalty,2024-05-07\r\n37078,2210,APAC,grocery,online,83.22,1,0.099,loyalty,2024-08-17\r\n37079,1952,EMEA,sports,online,23.70,3,0.214,none,2024-02-01\r\n37080,1243,AMER,home,online,47.42,7,0.152,bundle,2024-04-25\r\n37081,1352,AMER,toys,online,67.54,1,0.197,loyalty,2024-12-19\r\n37082,2395,APAC,home,retail,37.27,5,0.171,none,2024-08-07\r\n37083,2348,EMEA,home,retail,62.26,8,0.049,none,2024-03-01\r\n37084,1936,EMEA,electronics,retail,59.76,5,0.189,loyalty,2024-07-05\r\n37085,1294,APAC,home,online,54.56,4,0.022,coupon,2024-01-23\r\n37086,1335,APAC,grocery,mobile,144.12,5,0.180,loyalty,2024-11-11\r\n37087,1561,EMEA,grocery,partner,93.96,4,0.043,bundle,2024-02-16\r\n37088,2368,AMER,home,online,23.33,8,0.173,none,2024-10-12\r\n37089,1013,LATAM,home,retail,70.36,4,0.064,none,2024-07-08\r\n37090,1046,EMEA,grocery,online,43.05,4,0.070,none,2024-03-05\
r\n37091,1783,AMER,sports,online,37.79,5,0.177,coupon,2024-05-24\r\n37092,1118,AMER,sports,online,91.00,4,0.228,none,2024-07-03\r\n37093,1139,EMEA,fashion,online,48.87,2,0.210,none,2024-10-14\r\n37094,1180,AMER,grocery,online,62.57,5,0.173,coupon,2024-07-16\r\n37095,1990,EMEA,fashion,online,44.07,3,0.151,none,2024-08-19\r\n37096,1147,EMEA,fashion,online,38.86,8,0.144,bundle,2024-01-08\r\n37097,1179,APAC,fashion,online,20.94,4,0.097,none,2024-08-26\r\n37098,2430,APAC,home,mobile,50.47,4,0.246,none,2024-10-11\r\n37099,2003,LATAM,sports,retail,61.07,8,0.053,none,2024-05-03\r\n37100,1921,LATAM,grocery,online,45.50,7,0.127,none,2024-01-23\r\n37101,2005,APAC,grocery,online,30.04,2,0.185,none,2024-01-19\r\n37102,1612,LATAM,sports,retail,102.07,8,0.210,none,2024-03-21\r\n37103,1583,AMER,electronics,retail,75.65,2,0.075,loyalty,2024-04-08\r\n37104,1668,AMER,electronics,mobile,58.27,8,0.154,coupon,2024-11-02\r\n37105,2038,LATAM,electronics,online,56.85,3,0.046,none,2024-12-18\r\n37106,2364,APAC,toys,mobile,83.96,1,0.102,bundle,2024-12-20\r\n37107,1658,AMER,toys,online,80.55,2,0.102,coupon,2024-05-18\r\n37108,2329,LATAM,sports,partner,39.47,2,0.027,none,2024-12-25\r\n37109,2357,EMEA,toys,online,61.00,5,0.165,loyalty,2024-02-16\r\n37110,2445,APAC,home,online,60.73,3,0.239,bundle,2024-07-07\r\n37111,1240,EMEA,grocery,retail,48.81,3,0.160,coupon,2024-10-21\r\n37112,1737,AMER,grocery,mobile,82.58,8,0.194,coupon,2024-05-14\r\n37113,1058,LATAM,electronics,online,30.82,6,0.015,bundle,2024-08-23\r\n37114,1959,EMEA,sports,partner,58.77,7,0.231,coupon,2024-06-06\r\n37115,1638,EMEA,grocery,online,93.13,4,0.057,loyalty,2024-11-23\r\n37116,1011,APAC,sports,retail,59.59,7,0.219,none,2024-03-09\r\n37117,2450,EMEA,home,retail,94.83,5,0.250,loyalty,2024-12-12\r\n37118,1776,APAC,grocery,online,70.03,7,0.012,none,2024-02-10\r\n37119,2061,EMEA,home,retail,43.72,8,0.011,none,2024-08-02\r\n37120,2206,AMER,fashion,online,129.88,1,0.127,coupon,2024-03-28\r\n37121,1506,EMEA,fashion,retail,125.38,2,0.1
09,none,2024-02-13\r\n37122,1488,AMER,electronics,online,45.96,2,0.234,none,2024-08-14\r\n37123,1544,LATAM,electronics,online,59.99,7,0.025,none,2024-06-07\r\n37124,1814,AMER,toys,online,34.80,8,0.103,none,2024-07-08\r\n37125,2126,APAC,fashion,online,87.49,6,0.013,coupon,2024-01-23\r\n37126,1963,AMER,electronics,mobile,49.36,2,0.094,coupon,2024-10-26\r\n37127,1101,AMER,toys,retail,51.87,3,0.126,none,2024-03-17\r\n37128,1558,EMEA,home,online,97.52,5,0.157,bundle,2024-10-23\r\n37129,2358,AMER,fashion,mobile,36.08,8,0.244,bundle,2024-10-17\r\n37130,1017,AMER,electronics,online,59.49,8,0.193,bundle,2024-04-19\r\n37131,2356,LATAM,grocery,online,55.56,5,0.121,bundle,2024-11-21\r\n37132,1908,AMER,home,retail,46.00,7,0.053,loyalty,2024-11-11\r\n37133,2379,AMER,fashion,retail,42.15,2,0.148,none,2024-05-21\r\n37134,2486,APAC,home,partner,73.10,2,0.171,bundle,2024-11-05\r\n37135,1913,LATAM,home,retail,48.61,4,0.035,none,2024-12-13\r\n37136,2101,APAC,electronics,online,117.62,5,0.249,bundle,2024-02-26\r\n37137,1622,LATAM,sports,mobile,134.73,4,0.177,none,2024-12-07\r\n37138,2215,LATAM,fashion,online,80.79,5,0.023,none,2024-05-28\r\n37139,1784,EMEA,sports,online,33.00,1,0.150,loyalty,2024-04-02\r\n37140,1859,AMER,electronics,mobile,52.24,7,0.067,none,2024-07-14\r\n37141,1511,EMEA,fashion,online,39.05,4,0.216,coupon,2024-11-23\r\n37142,1703,AMER,grocery,online,101.21,4,0.020,coupon,2024-04-23\r\n37143,2077,APAC,grocery,mobile,34.63,1,0.098,none,2024-03-27\r\n37144,1524,LATAM,home,online,56.87,2,0.085,none,2024-01-23\r\n37145,1360,APAC,fashion,online,43.94,1,0.192,none,2024-11-17\r\n37146,1332,APAC,grocery,online,60.31,2,0.089,none,2024-04-23\r\n37147,1604,EMEA,electronics,partner,54.52,3,0.238,none,2024-10-15\r\n37148,1619,APAC,electronics,retail,36.91,4,0.199,none,2024-11-11\r\n37149,1086,AMER,toys,online,59.30,8,0.010,none,2024-10-21\r\n37150,1794,AMER,home,mobile,98.63,4,0.128,coupon,2024-08-27\r\n37151,2092,AMER,fashion,retail,37.77,3,0.050,none,2024-10-12\r\n37152,1522,LATAM
,sports,online,29.93,5,0.138,bundle,2024-11-18\r\n37153,2067,LATAM,sports,mobile,109.48,2,0.056,none,2024-11-08\r\n37154,2082,APAC,electronics,mobile,192.13,2,0.100,coupon,2024-12-02\r\n37155,2465,EMEA,toys,retail,63.39,2,0.162,none,2024-09-24\r\n37156,1372,APAC,fashion,online,78.63,4,0.041,none,2024-03-11\r\n37157,1412,AMER,home,online,58.23,5,0.114,bundle,2024-12-15\r\n37158,2440,APAC,electronics,online,105.23,2,0.157,none,2024-02-02\r\n37159,2032,AMER,electronics,online,32.80,8,0.178,bundle,2024-07-19\r\n37160,2411,EMEA,home,retail,61.47,8,0.158,none,2024-05-24\r\n37161,2431,LATAM,sports,retail,64.01,4,0.022,loyalty,2024-01-14\r\n37162,2126,APAC,fashion,retail,27.23,4,0.182,bundle,2024-07-10\r\n37163,1043,LATAM,sports,retail,121.09,1,0.093,none,2024-07-18\r\n37164,2025,EMEA,toys,retail,149.64,6,0.035,bundle,2024-03-04\r\n37165,1367,AMER,sports,retail,157.76,7,0.178,loyalty,2024-02-07\r\n37166,1665,AMER,home,retail,104.48,4,0.122,none,2024-07-04\r\n37167,2286,AMER,fashion,online,95.76,1,0.039,none,2024-06-14\r\n37168,1190,EMEA,electronics,mobile,129.00,7,0.102,none,2024-11-08\r\n37169,1150,LATAM,toys,online,147.14,8,0.052,none,2024-08-03\r\n37170,1473,LATAM,electronics,online,51.77,6,0.081,none,2024-05-12\r\n37171,2401,LATAM,electronics,mobile,22.96,1,0.005,bundle,2024-05-20\r\n37172,1120,LATAM,home,online,193.95,7,0.030,none,2024-02-01\r\n37173,1939,LATAM,electronics,partner,58.57,5,0.002,none,2024-04-07\r\n37174,1671,APAC,home,retail,29.55,7,0.152,none,2024-10-12\r\n37175,1354,AMER,toys,retail,73.90,5,0.032,none,2024-04-24\r\n37176,2125,LATAM,sports,online,105.19,6,0.160,none,2024-11-08\r\n37177,1215,LATAM,home,retail,30.84,7,0.002,coupon,2024-08-11\r\n37178,1983,LATAM,home,retail,46.42,3,0.073,coupon,2024-06-01\r\n37179,1129,LATAM,home,online,66.42,4,0.165,none,2024-03-17\r\n37180,1714,APAC,grocery,mobile,22.68,6,0.166,coupon,2024-04-21\r\n37181,1049,AMER,fashion,online,50.39,4,0.025,none,2024-12-24\r\n37182,1703,AMER,grocery,retail,78.50,5,0.090,bundle,2024-11
-21\r\n37183,1637,APAC,electronics,retail,29.22,5,0.099,none,2024-05-21\r\n37184,1565,AMER,electronics,retail,34.29,7,0.114,loyalty,2024-07-03\r\n37185,1433,EMEA,grocery,mobile,91.28,8,0.175,coupon,2024-05-13\r\n37186,1438,APAC,home,online,69.37,3,0.203,none,2024-11-24\r\n37187,1292,LATAM,grocery,online,40.60,5,0.120,none,2024-11-28\r\n37188,1275,EMEA,grocery,online,121.89,2,0.105,bundle,2024-08-24\r\n37189,1647,LATAM,electronics,online,80.82,1,0.244,none,2024-04-25\r\n37190,1148,AMER,grocery,online,54.44,4,0.244,loyalty,2024-04-01\r\n37191,1657,LATAM,grocery,retail,79.51,1,0.082,none,2024-02-22\r\n37192,1149,LATAM,sports,retail,48.20,1,0.231,loyalty,2024-05-09\r\n37193,2179,LATAM,grocery,mobile,107.61,7,0.009,none,2024-07-04\r\n37194,2306,AMER,sports,online,34.82,3,0.171,coupon,2024-07-01\r\n37195,1556,AMER,grocery,retail,73.24,4,0.081,none,2024-04-13\r\n37196,1085,EMEA,home,mobile,34.96,6,0.033,none,2024-09-18\r\n37197,1883,LATAM,home,online,60.76,8,0.241,none,2024-07-10\r\n37198,2371,LATAM,fashion,retail,56.46,1,0.158,bundle,2024-03-17\r\n37199,1935,EMEA,electronics,retail,16.00,7,0.236,coupon,2024-08-07\r\n37200,2484,APAC,grocery,online,52.85,8,0.016,none,2024-08-15\r\n37201,2215,LATAM,electronics,online,57.17,1,0.124,loyalty,2024-07-14\r\n37202,1807,EMEA,grocery,retail,29.49,4,0.006,loyalty,2024-11-17\r\n37203,1306,LATAM,fashion,retail,127.76,6,0.193,none,2024-09-12\r\n37204,1608,AMER,fashion,retail,39.70,4,0.117,loyalty,2024-09-21\r\n37205,1018,APAC,home,online,68.92,8,0.143,none,2024-07-22\r\n37206,1157,LATAM,grocery,retail,69.45,2,0.209,coupon,2024-04-15\r\n37207,1147,EMEA,grocery,online,73.58,8,0.062,none,2024-02-04\r\n37208,2498,LATAM,fashion,retail,52.97,7,0.105,none,2024-12-18\r\n37209,1468,AMER,grocery,retail,50.75,7,0.092,bundle,2024-09-23\r\n37210,1062,EMEA,fashion,online,40.20,6,0.098,none,2024-03-02\r\n37211,1656,LATAM,electronics,retail,28.39,1,0.125,none,2024-07-21\r\n37212,1350,LATAM,fashion,mobile,67.00,5,0.225,bundle,2024-03-16\r\n37213,2355,EM
EA,toys,retail,49.92,4,0.242,coupon,2024-01-01\r\n37214,2339,AMER,grocery,online,52.18,3,0.085,none,2024-02-24\r\n37215,1279,EMEA,sports,retail,79.42,6,0.101,none,2024-10-22\r\n37216,1248,APAC,home,retail,83.62,7,0.104,loyalty,2024-04-11\r\n37217,2123,AMER,toys,mobile,85.16,4,0.192,loyalty,2024-05-18\r\n37218,1506,EMEA,home,partner,83.72,8,0.123,coupon,2024-07-03\r\n37219,2043,EMEA,electronics,mobile,113.80,2,0.119,bundle,2024-05-18\r\n37220,1835,AMER,grocery,online,63.09,8,0.162,none,2024-02-24\r\n37221,1822,EMEA,grocery,retail,101.77,2,0.181,none,2024-03-16\r\n37222,1364,EMEA,fashion,mobile,130.42,7,0.222,none,2024-01-25\r\n37223,1439,LATAM,electronics,retail,37.17,5,0.191,bundle,2024-04-23\r\n37224,2386,EMEA,home,online,25.14,7,0.020,none,2024-08-04\r\n37225,2424,LATAM,electronics,online,66.84,2,0.064,coupon,2024-07-07\r\n37226,1702,AMER,grocery,online,45.12,6,0.115,coupon,2024-09-17\r\n37227,2236,APAC,fashion,retail,51.39,1,0.164,none,2024-08-24\r\n37228,1422,LATAM,home,retail,32.35,4,0.050,loyalty,2024-09-14\r\n37229,1964,EMEA,home,mobile,77.14,6,0.218,loyalty,2024-11-17\r\n37230,2413,AMER,grocery,online,27.05,2,0.220,none,2024-06-04\r\n37231,1031,AMER,electronics,online,86.09,3,0.219,bundle,2024-02-21\r\n37232,1735,LATAM,grocery,mobile,131.74,4,0.023,coupon,2024-10-19\r\n37233,2262,APAC,home,retail,50.75,2,0.165,none,2024-08-06\r\n37234,1661,LATAM,home,retail,43.58,5,0.046,none,2024-03-07\r\n37235,1702,AMER,toys,online,43.98,8,0.097,bundle,2024-10-21\r\n37236,1046,EMEA,home,mobile,73.98,8,0.180,coupon,2024-02-13\r\n37237,1545,AMER,sports,retail,22.93,4,0.033,coupon,2024-11-02\r\n37238,2441,EMEA,sports,retail,28.55,8,0.183,none,2024-12-12\r\n37239,1095,APAC,toys,online,48.31,7,0.185,coupon,2024-08-25\r\n37240,1114,APAC,fashion,retail,43.10,6,0.013,coupon,2024-04-22\r\n37241,1010,EMEA,fashion,online,49.79,8,0.197,none,2024-09-21\r\n37242,2142,LATAM,fashion,online,33.41,7,0.110,bundle,2024-01-04\r\n37243,1252,APAC,home,online,112.36,6,0.114,none,2024-09-07\r\n372
44,2151,APAC,electronics,retail,55.52,7,0.073,loyalty,2024-10-04\r\n37245,2392,EMEA,sports,retail,62.53,6,0.188,none,2024-07-24\r\n37246,2137,LATAM,grocery,online,100.88,2,0.216,bundle,2024-09-02\r\n37247,1923,LATAM,grocery,retail,24.39,1,0.018,none,2024-06-22\r\n37248,1230,EMEA,toys,retail,27.63,6,0.060,coupon,2024-02-24\r\n37249,2223,EMEA,sports,retail,91.28,7,0.177,none,2024-09-28\r\n37250,1661,LATAM,electronics,online,65.30,5,0.169,none,2024-06-01\r\n37251,1065,AMER,fashion,retail,89.02,2,0.065,coupon,2024-09-05\r\n37252,1741,AMER,grocery,mobile,40.49,7,0.211,none,2024-07-22\r\n37253,2259,AMER,fashion,online,105.89,8,0.043,none,2024-06-15\r\n37254,1374,APAC,electronics,online,42.42,5,0.015,bundle,2024-12-14\r\n37255,1392,AMER,home,online,59.17,6,0.134,none,2024-06-04\r\n37256,1457,EMEA,electronics,retail,106.56,6,0.032,none,2024-04-21\r\n37257,1781,LATAM,grocery,online,119.70,4,0.199,none,2024-03-06\r\n37258,1196,APAC,grocery,online,46.79,4,0.051,loyalty,2024-05-19\r\n37259,1310,AMER,electronics,online,56.11,4,0.204,none,2024-10-14\r\n37260,1233,AMER,electronics,online,103.39,7,0.143,none,2024-01-01\r\n37261,1552,EMEA,grocery,retail,88.43,7,0.049,none,2024-09-15\r\n37262,2345,LATAM,sports,online,13.75,8,0.119,bundle,2024-06-17\r\n37263,1857,LATAM,grocery,online,45.27,4,0.005,bundle,2024-06-08\r\n37264,1399,AMER,electronics,mobile,37.21,5,0.218,none,2024-07-21\r\n37265,1661,LATAM,fashion,retail,47.50,6,0.094,bundle,2024-11-12\r\n37266,2322,AMER,toys,online,34.22,8,0.185,loyalty,2024-09-01\r\n37267,2053,AMER,fashion,online,45.74,7,0.198,loyalty,2024-03-01\r\n37268,1182,EMEA,fashion,retail,68.67,8,0.035,none,2024-12-09\r\n37269,2072,AMER,electronics,mobile,70.17,3,0.033,coupon,2024-08-10\r\n37270,2365,LATAM,home,online,75.93,8,0.049,coupon,2024-11-15\r\n37271,1085,EMEA,electronics,online,61.70,3,0.202,coupon,2024-09-25\r\n37272,2149,EMEA,sports,retail,34.68,5,0.096,none,2024-09-23\r\n37273,1325,APAC,grocery,retail,39.01,4,0.079,loyalty,2024-09-28\r\n37274,1118,AMER
,home,retail,61.83,8,0.210,none,2024-01-06\r\n37275,2209,AMER,fashion,online,52.18,5,0.198,none,2024-01-06\r\n37276,1525,APAC,grocery,mobile,135.75,8,0.175,bundle,2024-06-26\r\n37277,1949,AMER,sports,retail,83.51,1,0.101,none,2024-03-13\r\n37278,1647,LATAM,sports,online,99.37,3,0.084,none,2024-06-28\r\n37279,1268,EMEA,fashion,online,26.65,1,0.242,coupon,2024-12-20\r\n37280,1206,EMEA,fashion,online,81.93,6,0.014,none,2024-04-23\r\n37281,2171,EMEA,electronics,online,36.71,1,0.139,coupon,2024-12-03\r\n37282,1295,EMEA,grocery,online,99.73,2,0.162,none,2024-10-14\r\n37283,1674,LATAM,grocery,retail,51.53,6,0.051,none,2024-02-05\r\n37284,1212,LATAM,sports,retail,134.12,7,0.114,loyalty,2024-05-13\r\n37285,1217,EMEA,sports,mobile,164.05,1,0.005,loyalty,2024-10-16\r\n37286,1272,AMER,sports,online,43.45,7,0.085,loyalty,2024-03-14\r\n37287,1392,AMER,electronics,retail,47.51,4,0.147,bundle,2024-02-23\r\n37288,1971,EMEA,sports,retail,43.21,6,0.054,none,2024-10-20\r\n37289,2406,EMEA,grocery,online,73.34,6,0.124,none,2024-01-03\r\n37290,2380,AMER,electronics,online,78.03,2,0.079,none,2024-01-17\r\n37291,1376,EMEA,sports,mobile,44.22,3,0.166,coupon,2024-03-02\r\n37292,2147,LATAM,sports,online,106.72,3,0.213,none,2024-08-17\r\n37293,1743,LATAM,grocery,online,87.47,6,0.220,coupon,2024-08-23\r\n37294,2182,AMER,electronics,online,97.45,2,0.202,bundle,2024-02-27\r\n37295,2162,EMEA,sports,online,30.74,3,0.075,coupon,2024-09-28\r\n37296,1715,AMER,grocery,online,99.36,1,0.108,loyalty,2024-03-26\r\n37297,1322,AMER,grocery,retail,32.07,2,0.063,none,2024-10-16\r\n37298,2004,LATAM,electronics,online,139.95,6,0.052,none,2024-07-27\r\n37299,1874,LATAM,electronics,online,85.91,1,0.109,loyalty,2024-09-12\r\n37300,1045,LATAM,toys,retail,65.11,2,0.237,coupon,2024-07-19\r\n37301,1802,AMER,fashion,online,45.28,3,0.203,coupon,2024-09-20\r\n37302,1182,EMEA,fashion,online,98.82,1,0.233,coupon,2024-03-06\r\n37303,1597,APAC,fashion,online,40.53,1,0.056,none,2024-01-14\r\n37304,1533,APAC,electronics,retail,4
8.16,4,0.008,bundle,2024-04-17\r\n37305,1588,LATAM,fashion,online,135.76,4,0.072,none,2024-07-18\r\n37306,2221,LATAM,home,mobile,146.67,4,0.002,none,2024-07-27\r\n37307,1111,APAC,sports,online,53.81,4,0.156,loyalty,2024-05-14\r\n37308,1056,LATAM,fashion,mobile,94.94,2,0.178,none,2024-08-16\r\n37309,2083,LATAM,electronics,online,62.77,8,0.094,loyalty,2024-12-26\r\n37310,1698,EMEA,home,mobile,17.82,8,0.054,none,2024-11-18\r\n37311,1043,LATAM,fashion,online,114.31,8,0.032,coupon,2024-11-10\r\n37312,2154,APAC,electronics,retail,53.01,3,0.116,bundle,2024-01-09\r\n37313,1038,APAC,electronics,mobile,56.35,6,0.001,loyalty,2024-02-10\r\n37314,2227,LATAM,sports,online,22.45,1,0.209,none,2024-01-02\r\n37315,1438,APAC,sports,retail,88.31,7,0.040,none,2024-05-08\r\n37316,2304,LATAM,home,retail,102.15,8,0.248,none,2024-01-14\r\n37317,1435,AMER,toys,online,73.68,5,0.001,coupon,2024-03-23\r\n37318,2165,AMER,sports,online,32.29,6,0.081,loyalty,2024-05-05\r\n37319,1142,EMEA,grocery,retail,54.97,6,0.065,none,2024-10-25\r\n37320,1084,AMER,grocery,mobile,31.45,7,0.188,none,2024-01-05\r\n37321,1370,APAC,home,online,81.42,5,0.154,none,2024-02-19\r\n37322,2036,APAC,home,online,33.70,1,0.003,none,2024-08-24\r\n37323,1119,LATAM,sports,retail,141.41,3,0.230,none,2024-02-20\r\n37324,2181,AMER,fashion,mobile,90.62,5,0.170,none,2024-03-20\r\n37325,2428,LATAM,electronics,online,73.09,2,0.159,coupon,2024-03-08\r\n37326,2062,EMEA,sports,online,68.60,3,0.231,none,2024-09-09\r\n37327,1035,EMEA,home,retail,17.07,3,0.119,none,2024-03-18\r\n37328,1566,EMEA,home,partner,103.45,3,0.200,none,2024-10-21\r\n37329,2356,LATAM,fashion,online,42.48,8,0.062,none,2024-04-17\r\n37330,1925,LATAM,fashion,mobile,32.95,1,0.147,none,2024-10-03\r\n37331,1978,AMER,fashion,retail,40.11,2,0.061,none,2024-09-07\r\n37332,1924,AMER,grocery,online,27.33,7,0.154,loyalty,2024-10-06\r\n37333,1753,APAC,grocery,retail,82.09,8,0.088,none,2024-02-24\r\n37334,2069,AMER,electronics,online,51.73,6,0.046,none,2024-09-25\r\n37335,2454,LATA
M,toys,online,35.56,7,0.239,coupon,2024-03-17\r\n37336,1590,APAC,grocery,online,77.25,7,0.159,none,2024-02-17\r\n37337,1839,APAC,electronics,online,63.78,4,0.043,none,2024-01-15\r\n37338,1281,AMER,home,retail,74.43,5,0.205,bundle,2024-12-15\r\n37339,1836,LATAM,grocery,online,41.65,6,0.049,coupon,2024-03-02\r\n37340,1274,LATAM,electronics,partner,53.80,5,0.241,none,2024-09-13\r\n37341,2138,APAC,toys,retail,58.43,4,0.141,bundle,2024-07-17\r\n37342,1320,EMEA,home,online,88.47,4,0.121,none,2024-07-19\r\n37343,1113,EMEA,fashion,mobile,49.75,2,0.206,none,2024-01-04\r\n37344,2245,APAC,grocery,online,68.36,4,0.147,coupon,2024-07-14\r\n37345,1547,AMER,grocery,online,115.00,8,0.064,bundle,2024-08-02\r\n37346,2329,LATAM,electronics,partner,151.00,7,0.181,coupon,2024-07-10\r\n37347,1710,APAC,toys,partner,63.46,2,0.037,none,2024-04-23\r\n37348,2159,AMER,electronics,retail,44.11,8,0.056,none,2024-10-04\r\n37349,1070,EMEA,sports,online,64.82,7,0.020,loyalty,2024-11-04\r\n37350,1582,AMER,fashion,online,71.38,5,0.200,loyalty,2024-06-21\r\n37351,1611,EMEA,electronics,online,25.77,6,0.139,none,2024-12-04\r\n37352,1345,AMER,grocery,online,41.29,3,0.037,none,2024-09-03\r\n37353,1679,APAC,electronics,online,39.93,1,0.086,loyalty,2024-01-23\r\n37354,1343,LATAM,grocery,online,61.53,6,0.095,loyalty,2024-08-14\r\n37355,2191,AMER,fashion,online,42.18,4,0.209,loyalty,2024-07-19\r\n37356,1367,AMER,electronics,online,93.37,6,0.234,none,2024-05-24\r\n37357,2255,AMER,electronics,mobile,56.36,7,0.163,none,2024-05-14\r\n37358,1369,AMER,home,retail,70.90,5,0.054,coupon,2024-02-08\r\n37359,2409,APAC,electronics,online,54.22,4,0.116,none,2024-10-07\r\n37360,1939,LATAM,fashion,retail,24.61,7,0.146,loyalty,2024-04-28\r\n37361,2241,APAC,sports,partner,75.94,1,0.114,coupon,2024-05-02\r\n37362,1546,EMEA,electronics,online,51.79,2,0.205,none,2024-11-19\r\n37363,1417,APAC,electronics,online,40.37,6,0.108,none,2024-08-10\r\n37364,2361,EMEA,fashion,retail,36.06,7,0.127,none,2024-02-16\r\n37365,2006,APAC,fashion
,retail,56.71,8,0.012,coupon,2024-07-23\r\n37366,1053,AMER,fashion,online,22.73,5,0.088,loyalty,2024-11-01\r\n37367,1120,LATAM,sports,retail,68.65,3,0.106,none,2024-11-05\r\n37368,2315,LATAM,grocery,online,98.43,8,0.123,coupon,2024-04-21\r\n37369,2403,LATAM,electronics,mobile,51.53,3,0.236,none,2024-01-05\r\n37370,1003,APAC,grocery,online,46.14,8,0.113,bundle,2024-05-09\r\n37371,1757,EMEA,fashion,retail,74.46,6,0.029,coupon,2024-02-18\r\n37372,1365,LATAM,grocery,online,40.29,5,0.197,none,2024-12-14\r\n37373,2159,AMER,home,online,121.22,4,0.180,none,2024-09-04\r\n37374,2156,AMER,fashion,retail,33.19,3,0.115,loyalty,2024-12-12\r\n37375,1190,EMEA,sports,retail,31.43,8,0.198,none,2024-08-25\r\n37376,1825,AMER,fashion,online,100.24,4,0.112,none,2024-07-02\r\n37377,1015,AMER,fashion,retail,83.75,1,0.095,none,2024-09-02\r\n37378,2406,EMEA,home,partner,68.21,3,0.140,none,2024-12-02\r\n37379,1429,APAC,toys,mobile,67.00,1,0.155,coupon,2024-04-06\r\n37380,1916,AMER,grocery,mobile,27.38,1,0.247,none,2024-10-23\r\n37381,1654,EMEA,home,mobile,64.70,8,0.248,coupon,2024-10-22\r\n37382,2139,AMER,grocery,online,41.87,1,0.126,none,2024-08-26\r\n37383,1476,APAC,sports,retail,83.50,5,0.055,none,2024-04-02\r\n37384,2299,EMEA,grocery,retail,45.87,2,0.031,loyalty,2024-03-10\r\n37385,1184,AMER,home,retail,222.64,2,0.218,bundle,2024-03-17\r\n37386,2442,APAC,electronics,partner,47.19,2,0.136,coupon,2024-08-17\r\n37387,2353,AMER,electronics,online,98.17,7,0.117,coupon,2024-11-03\r\n37388,1408,AMER,sports,partner,55.66,3,0.163,none,2024-02-23\r\n37389,1937,APAC,fashion,online,25.66,5,0.168,loyalty,2024-11-11\r\n37390,1970,LATAM,home,online,85.57,3,0.070,none,2024-03-26\r\n37391,1110,LATAM,toys,online,69.34,1,0.142,coupon,2024-01-16\r\n37392,1913,LATAM,sports,mobile,73.09,2,0.161,none,2024-08-13\r\n37393,1180,AMER,home,online,79.79,7,0.112,none,2024-03-19\r\n37394,1069,APAC,home,mobile,26.60,2,0.008,none,2024-08-09\r\n37395,1779,APAC,toys,mobile,61.93,7,0.007,none,2024-03-26\r\n37396,2320,LATAM,
fashion,retail,44.56,8,0.096,none,2024-06-03\r\n37397,2250,AMER,grocery,retail,52.27,2,0.023,none,2024-12-21\r\n37398,2314,EMEA,grocery,online,53.51,6,0.175,none,2024-09-05\r\n37399,2027,EMEA,grocery,mobile,61.60,6,0.191,loyalty,2024-01-18\r\n37400,1326,AMER,electronics,retail,38.05,2,0.041,none,2024-05-21\r\n37401,1235,EMEA,sports,retail,25.56,7,0.028,coupon,2024-02-22\r\n37402,2019,AMER,sports,online,51.55,7,0.020,none,2024-04-14\r\n37403,1918,EMEA,toys,mobile,31.83,4,0.094,coupon,2024-06-22\r\n37404,1182,EMEA,grocery,retail,88.56,4,0.221,none,2024-11-18\r\n37405,1026,APAC,home,mobile,30.68,6,0.141,none,2024-01-04\r\n37406,2360,EMEA,toys,retail,56.02,4,0.047,bundle,2024-01-17\r\n37407,1235,EMEA,grocery,retail,28.71,2,0.118,none,2024-03-17\r\n37408,1910,LATAM,electronics,retail,13.01,7,0.009,none,2024-11-04\r\n37409,1375,AMER,electronics,online,56.95,2,0.139,coupon,2024-12-19\r\n37410,1399,AMER,sports,retail,58.13,4,0.224,none,2024-08-04\r\n37411,2493,APAC,toys,retail,59.32,1,0.175,none,2024-05-24\r\n37412,1430,EMEA,home,online,40.76,2,0.105,none,2024-09-23\r\n37413,1497,EMEA,sports,mobile,56.05,5,0.149,bundle,2024-05-27\r\n37414,2183,EMEA,grocery,retail,42.26,1,0.124,none,2024-03-14\r\n37415,2021,EMEA,sports,partner,21.77,1,0.242,coupon,2024-05-20\r\n37416,2428,LATAM,grocery,partner,26.92,3,0.188,coupon,2024-10-05\r\n37417,2090,AMER,sports,online,158.49,3,0.077,coupon,2024-08-12\r\n37418,1212,LATAM,toys,retail,62.16,1,0.132,none,2024-11-24\r\n37419,2453,AMER,home,online,47.72,1,0.048,coupon,2024-08-23\r\n37420,1943,AMER,home,online,65.37,7,0.194,bundle,2024-04-08\r\n37421,2135,EMEA,home,online,29.81,7,0.123,none,2024-03-12\r\n37422,1698,EMEA,electronics,retail,72.45,1,0.120,bundle,2024-03-18\r\n37423,1884,APAC,toys,retail,32.49,7,0.015,none,2024-05-17\r\n37424,2054,AMER,fashion,retail,37.72,4,0.183,bundle,2024-02-06\r\n37425,1141,AMER,toys,retail,123.04,8,0.048,coupon,2024-06-26\r\n37426,1868,AMER,grocery,retail,84.76,6,0.040,none,2024-11-23\r\n37427,2204,AMER,fas
hion,mobile,65.88,3,0.007,bundle,2024-11-17\r\n37428,1765,EMEA,electronics,online,40.01,2,0.103,coupon,2024-05-07\r\n37429,1650,LATAM,grocery,online,25.33,6,0.077,none,2024-12-04\r\n37430,1055,AMER,sports,mobile,82.75,8,0.017,none,2024-10-17\r\n37431,2333,APAC,electronics,online,61.78,2,0.137,loyalty,2024-11-02\r\n37432,1286,EMEA,grocery,online,51.89,2,0.096,coupon,2024-03-17\r\n37433,2033,LATAM,grocery,online,46.84,4,0.128,none,2024-03-25\r\n37434,2293,LATAM,home,mobile,39.46,5,0.033,none,2024-05-17\r\n37435,2016,LATAM,grocery,retail,72.73,2,0.070,bundle,2024-11-10\r\n37436,1559,EMEA,grocery,online,75.99,4,0.018,none,2024-05-24\r\n37437,2489,LATAM,grocery,online,56.83,4,0.141,bundle,2024-11-25\r\n37438,1432,APAC,toys,retail,54.77,3,0.074,none,2024-01-14\r\n37439,1217,EMEA,home,retail,84.34,8,0.163,bundle,2024-02-23\r\n37440,1551,APAC,grocery,online,57.21,3,0.163,coupon,2024-01-17\r\n37441,1987,AMER,electronics,online,31.60,7,0.217,none,2024-06-09\r\n37442,1604,EMEA,sports,online,44.32,2,0.156,none,2024-12-13\r\n37443,2241,APAC,home,retail,76.66,7,0.122,bundle,2024-02-03\r\n37444,1465,AMER,grocery,retail,69.95,5,0.050,coupon,2024-04-04\r\n37445,1846,APAC,home,mobile,99.05,1,0.247,none,2024-02-02\r\n37446,1945,AMER,toys,mobile,84.02,5,0.208,none,2024-06-25\r\n37447,1737,AMER,toys,online,81.96,8,0.152,loyalty,2024-09-27\r\n37448,1928,AMER,sports,retail,97.64,1,0.081,bundle,2024-07-07\r\n37449,1907,EMEA,fashion,retail,48.93,3,0.139,loyalty,2024-08-21\r\n37450,1864,EMEA,grocery,mobile,172.15,2,0.083,coupon,2024-04-02\r\n37451,2269,EMEA,grocery,retail,80.27,6,0.158,none,2024-06-17\r\n37452,2055,AMER,grocery,retail,41.44,3,0.125,loyalty,2024-08-04\r\n37453,1790,AMER,sports,retail,37.94,1,0.193,coupon,2024-03-22\r\n37454,1578,LATAM,home,retail,53.63,6,0.189,coupon,2024-10-08\r\n37455,2061,EMEA,grocery,online,86.63,1,0.133,none,2024-11-01\r\n37456,2273,APAC,grocery,retail,143.37,4,0.111,loyalty,2024-08-08\r\n37457,1591,APAC,grocery,retail,27.62,7,0.135,coupon,2024-02-28\r\n
37458,1820,AMER,electronics,online,49.79,2,0.250,none,2024-01-10\r\n37459,2054,AMER,home,online,36.12,2,0.005,loyalty,2024-09-23\r\n37460,2150,APAC,grocery,online,40.57,6,0.203,coupon,2024-03-24\r\n37461,2390,AMER,home,online,36.45,5,0.178,coupon,2024-11-05\r\n37462,1775,EMEA,fashion,retail,72.30,3,0.034,none,2024-12-11\r\n37463,2263,AMER,home,retail,31.59,8,0.016,none,2024-06-27\r\n37464,2208,AMER,home,online,83.41,6,0.090,none,2024-03-22\r\n37465,1835,AMER,toys,online,70.56,1,0.037,loyalty,2024-10-19\r\n37466,2316,EMEA,electronics,online,122.16,6,0.151,none,2024-07-02\r\n37467,1758,AMER,electronics,online,58.28,2,0.059,coupon,2024-03-25\r\n37468,1947,EMEA,fashion,online,41.49,7,0.004,none,2024-03-10\r\n37469,1770,AMER,electronics,retail,53.20,1,0.078,loyalty,2024-08-19\r\n37470,1423,EMEA,fashion,retail,49.01,8,0.082,none,2024-05-04\r\n37471,2251,APAC,electronics,retail,109.72,2,0.028,bundle,2024-01-07\r\n37472,2477,APAC,electronics,mobile,29.63,6,0.068,none,2024-10-16\r\n37473,1522,LATAM,home,retail,39.31,6,0.049,none,2024-08-21\r\n37474,1164,EMEA,toys,online,116.29,7,0.220,loyalty,2024-12-16\r\n37475,2346,LATAM,grocery,mobile,34.46,4,0.031,none,2024-10-08\r\n37476,1264,APAC,grocery,online,11.60,1,0.040,none,2024-08-01\r\n37477,2180,AMER,fashion,mobile,78.93,1,0.082,none,2024-05-14\r\n37478,2157,AMER,grocery,online,61.31,8,0.018,none,2024-04-02\r\n37479,1438,APAC,electronics,mobile,105.37,6,0.070,coupon,2024-03-12\r\n37480,1744,EMEA,sports,mobile,67.90,1,0.032,coupon,2024-08-08\r\n37481,1589,AMER,electronics,online,14.27,5,0.089,bundle,2024-06-13\r\n37482,1585,AMER,grocery,retail,19.53,1,0.189,coupon,2024-10-13\r\n37483,2295,EMEA,grocery,retail,31.16,2,0.249,none,2024-08-06\r\n37484,2210,APAC,fashion,partner,53.05,8,0.115,none,2024-09-25\r\n37485,2022,LATAM,electronics,retail,56.36,8,0.089,none,2024-12-11\r\n37486,1463,EMEA,fashion,online,21.29,8,0.212,none,2024-12-09\r\n37487,1458,APAC,grocery,partner,88.74,6,0.247,coupon,2024-05-20\r\n37488,1171,APAC,home,online
,58.78,1,0.187,coupon,2024-05-18\r\n37489,1279,EMEA,sports,mobile,75.86,6,0.115,none,2024-09-14\r\n37490,1191,EMEA,fashion,online,71.45,3,0.017,none,2024-12-16\r\n37491,1743,LATAM,grocery,mobile,35.63,1,0.164,bundle,2024-11-11\r\n37492,2302,APAC,grocery,online,27.78,1,0.147,bundle,2024-05-11\r\n37493,1229,LATAM,electronics,online,96.05,2,0.235,none,2024-09-10\r\n37494,1745,APAC,electronics,retail,53.03,8,0.030,coupon,2024-12-19\r\n37495,2495,EMEA,electronics,online,96.81,1,0.147,bundle,2024-03-13\r\n37496,1877,LATAM,electronics,retail,36.39,2,0.110,none,2024-01-15\r\n37497,1100,AMER,home,online,201.65,4,0.068,none,2024-09-09\r\n37498,1122,AMER,home,retail,45.62,4,0.100,none,2024-03-22\r\n37499,2308,AMER,home,retail,271.26,2,0.146,none,2024-11-15\r\n37500,1185,LATAM,electronics,online,89.36,6,0.104,none,2024-08-19\r\n37501,1629,LATAM,electronics,online,33.06,3,0.246,loyalty,2024-07-21\r\n37502,1953,EMEA,grocery,retail,123.30,8,0.011,none,2024-10-13\r\n37503,2164,AMER,home,retail,97.62,6,0.175,coupon,2024-09-07\r\n37504,1936,EMEA,fashion,retail,51.49,3,0.062,none,2024-07-16\r\n37505,2370,EMEA,toys,retail,67.14,2,0.061,bundle,2024-04-25\r\n37506,1235,EMEA,sports,retail,63.66,7,0.229,none,2024-08-22\r\n37507,1760,LATAM,fashion,online,84.55,4,0.159,bundle,2024-05-26\r\n37508,1868,AMER,sports,mobile,35.81,4,0.004,coupon,2024-03-13\r\n37509,1645,EMEA,toys,online,71.36,8,0.029,none,2024-09-10\r\n37510,2104,EMEA,grocery,retail,33.05,4,0.100,none,2024-02-21\r\n37511,1013,LATAM,toys,online,77.36,3,0.107,none,2024-02-06\r\n37512,1383,AMER,grocery,online,67.78,8,0.174,none,2024-10-23\r\n37513,1223,LATAM,electronics,retail,49.17,5,0.190,bundle,2024-10-08\r\n37514,1023,APAC,fashion,online,49.61,5,0.092,none,2024-09-14\r\n37515,2303,EMEA,electronics,retail,54.95,7,0.143,none,2024-09-03\r\n37516,1653,APAC,home,retail,82.25,7,0.001,loyalty,2024-02-27\r\n37517,1401,LATAM,electronics,online,28.37,7,0.210,bundle,2024-07-20\r\n37518,2323,AMER,electronics,online,88.38,7,0.055,bundle,2024-
03-20\r\n37519,2451,APAC,home,retail,67.74,8,0.020,coupon,2024-05-22\r\n37520,2295,EMEA,grocery,mobile,68.65,2,0.131,loyalty,2024-11-26\r\n37521,1274,LATAM,grocery,online,69.99,8,0.120,none,2024-11-27\r\n37522,1063,AMER,grocery,retail,51.68,7,0.098,none,2024-04-10\r\n37523,1993,APAC,fashion,online,55.09,1,0.011,none,2024-04-06\r\n37524,2257,AMER,fashion,online,49.09,2,0.223,loyalty,2024-09-09\r\n37525,2304,LATAM,electronics,online,53.81,4,0.189,none,2024-12-15\r\n37526,2231,LATAM,fashion,partner,45.02,7,0.045,none,2024-02-01\r\n37527,1530,APAC,grocery,mobile,39.10,7,0.229,none,2024-03-11\r\n37528,1522,LATAM,electronics,online,45.92,3,0.040,bundle,2024-09-28\r\n37529,2027,EMEA,grocery,retail,96.96,8,0.105,coupon,2024-05-19\r\n37530,1451,EMEA,electronics,mobile,47.41,5,0.125,none,2024-02-09\r\n37531,2295,EMEA,electronics,retail,78.56,2,0.010,bundle,2024-11-07\r\n37532,1129,LATAM,grocery,partner,48.81,2,0.186,none,2024-07-08\r\n37533,1317,EMEA,fashion,online,33.97,3,0.185,coupon,2024-09-19\r\n37534,1057,LATAM,electronics,online,89.04,5,0.020,bundle,2024-12-27\r\n37535,1732,LATAM,home,partner,89.98,1,0.143,coupon,2024-04-24\r\n37536,1716,LATAM,fashion,online,41.98,1,0.069,coupon,2024-05-13\r\n37537,1531,EMEA,electronics,online,91.98,7,0.147,bundle,2024-08-23\r\n37538,2423,LATAM,grocery,retail,34.30,7,0.248,none,2024-02-17\r\n37539,1710,APAC,home,retail,136.05,7,0.007,coupon,2024-06-28\r\n37540,2025,EMEA,grocery,retail,49.79,3,0.153,bundle,2024-05-22\r\n37541,1798,AMER,home,retail,56.81,1,0.026,coupon,2024-08-25\r\n37542,1285,EMEA,fashion,retail,64.22,3,0.126,none,2024-05-23\r\n37543,2456,APAC,electronics,retail,65.07,3,0.098,none,2024-06-03\r\n37544,2318,AMER,electronics,mobile,93.25,7,0.088,none,2024-09-25\r\n37545,1917,LATAM,sports,partner,55.55,1,0.041,coupon,2024-07-09\r\n37546,1170,AMER,electronics,online,52.26,7,0.161,none,2024-03-25\r\n37547,2445,APAC,home,retail,32.51,8,0.139,bundle,2024-02-21\r\n37548,1527,AMER,sports,retail,25.35,8,0.098,none,2024-09-20\r\n375
49,2055,AMER,sports,mobile,89.59,8,0.201,bundle,2024-09-05\r\n37550,1285,EMEA,electronics,retail,61.04,6,0.096,none,2024-02-25\r\n37551,1448,EMEA,fashion,online,75.89,6,0.147,none,2024-06-16\r\n37552,2114,AMER,grocery,retail,127.77,8,0.225,coupon,2024-09-27\r\n37553,2472,AMER,home,mobile,107.54,5,0.107,coupon,2024-11-09\r\n37554,1520,APAC,sports,retail,70.99,3,0.026,none,2024-01-02\r\n37555,2203,APAC,sports,retail,66.74,3,0.085,none,2024-06-06\r\n37556,1743,LATAM,sports,partner,22.04,5,0.170,none,2024-10-19\r\n37557,2175,AMER,grocery,online,61.83,3,0.045,none,2024-11-20\r\n37558,1492,APAC,fashion,mobile,98.99,7,0.157,loyalty,2024-07-19\r\n37559,2155,APAC,electronics,online,28.96,7,0.115,none,2024-11-21\r\n37560,1283,APAC,sports,online,57.97,3,0.220,coupon,2024-07-09\r\n37561,2451,APAC,sports,retail,62.89,5,0.239,coupon,2024-02-25\r\n37562,1369,AMER,home,retail,140.58,2,0.162,loyalty,2024-01-27\r\n37563,1231,AMER,home,mobile,59.86,3,0.072,none,2024-09-17\r\n37564,1513,APAC,home,online,71.46,1,0.137,none,2024-06-16\r\n37565,1121,EMEA,home,retail,45.82,4,0.219,none,2024-08-24\r\n37566,2281,AMER,home,partner,57.62,8,0.124,bundle,2024-05-16\r\n37567,2060,LATAM,grocery,retail,80.48,1,0.035,none,2024-11-17\r\n37568,2117,EMEA,fashion,partner,56.52,1,0.202,loyalty,2024-08-11\r\n37569,1946,AMER,home,retail,61.33,1,0.070,none,2024-08-09\r\n37570,2346,LATAM,toys,mobile,56.97,8,0.234,bundle,2024-02-24\r\n37571,1393,LATAM,electronics,mobile,106.78,3,0.091,none,2024-04-04\r\n37572,1462,LATAM,grocery,retail,55.84,5,0.194,none,2024-10-02\r\n37573,1238,AMER,fashion,online,75.85,4,0.116,coupon,2024-10-25\r\n37574,1005,LATAM,grocery,online,121.77,3,0.241,none,2024-06-04\r\n37575,2108,AMER,home,mobile,64.17,8,0.164,none,2024-02-13\r\n37576,2357,EMEA,toys,online,46.88,2,0.174,coupon,2024-05-02\r\n37577,1943,AMER,grocery,online,13.43,6,0.059,none,2024-06-03\r\n37578,1662,LATAM,home,retail,33.54,1,0.162,none,2024-04-07\r\n37579,2242,AMER,toys,online,64.78,5,0.193,loyalty,2024-07-17\r\n3758
0,2369,LATAM,toys,online,89.74,7,0.087,none,2024-07-17\r\n37581,2207,APAC,electronics,online,42.38,2,0.117,loyalty,2024-05-15\r\n37582,1450,EMEA,electronics,online,69.04,4,0.046,bundle,2024-06-19\r\n37583,2141,AMER,grocery,online,73.48,2,0.189,loyalty,2024-10-08\r\n37584,1912,APAC,electronics,retail,102.89,8,0.139,loyalty,2024-10-03\r\n37585,1669,AMER,fashion,online,28.08,7,0.173,coupon,2024-10-20\r\n37586,1849,EMEA,home,retail,83.69,1,0.007,coupon,2024-09-10\r\n37587,2436,LATAM,grocery,online,82.01,2,0.013,none,2024-04-09\r\n37588,1019,APAC,fashion,retail,45.21,5,0.076,bundle,2024-04-10\r\n37589,2457,EMEA,home,retail,43.99,7,0.239,coupon,2024-02-17\r\n37590,1413,LATAM,electronics,retail,110.79,3,0.088,none,2024-12-07\r\n37591,1734,AMER,electronics,retail,49.58,2,0.137,bundle,2024-10-15\r\n37592,2490,AMER,electronics,mobile,74.79,3,0.167,bundle,2024-03-10\r\n37593,1251,EMEA,home,online,59.30,3,0.034,none,2024-06-24\r\n37594,1741,AMER,grocery,online,73.19,5,0.040,none,2024-03-12\r\n37595,2145,AMER,fashion,online,80.25,6,0.082,bundle,2024-05-03\r\n37596,1884,APAC,grocery,retail,48.32,5,0.059,loyalty,2024-03-28\r\n37597,1899,APAC,sports,retail,58.40,1,0.209,bundle,2024-07-14\r\n37598,1999,EMEA,electronics,mobile,46.86,7,0.044,coupon,2024-07-07\r\n37599,1260,LATAM,electronics,online,30.78,5,0.092,loyalty,2024-03-05\r\n37600,2405,AMER,electronics,online,30.25,5,0.114,none,2024-04-09\r\n37601,1579,AMER,grocery,partner,48.36,1,0.222,loyalty,2024-08-06\r\n37602,1965,LATAM,grocery,online,126.83,3,0.093,none,2024-08-21\r\n37603,2016,LATAM,electronics,mobile,51.37,6,0.043,none,2024-02-11\r\n37604,2164,AMER,electronics,online,38.84,3,0.008,bundle,2024-02-24\r\n37605,2333,APAC,home,online,129.67,5,0.200,none,2024-08-20\r\n37606,2224,EMEA,home,partner,29.18,3,0.069,bundle,2024-05-07\r\n37607,1422,LATAM,grocery,retail,29.89,4,0.126,none,2024-12-16\r\n37608,1541,APAC,sports,mobile,48.77,2,0.140,none,2024-05-03\r\n37609,1021,AMER,electronics,retail,85.51,2,0.099,coupon,2024-01-19\r\
n37610,2022,LATAM,electronics,mobile,75.09,4,0.085,none,2024-05-01\r\n37611,1634,AMER,electronics,retail,70.87,2,0.048,coupon,2024-11-04\r\n37612,1053,AMER,fashion,online,144.58,4,0.112,none,2024-09-10\r\n37613,1258,EMEA,grocery,partner,83.52,1,0.199,loyalty,2024-10-23\r\n37614,1322,AMER,grocery,retail,68.11,7,0.009,none,2024-12-24\r\n37615,1015,AMER,electronics,online,49.34,5,0.003,coupon,2024-01-26\r\n37616,1377,APAC,sports,online,64.12,6,0.210,none,2024-11-02\r\n37617,1658,AMER,toys,online,203.44,5,0.005,none,2024-10-06\r\n37618,1924,AMER,grocery,online,32.90,6,0.025,bundle,2024-10-20\r\n37619,2166,AMER,fashion,retail,68.29,8,0.239,bundle,2024-08-17\r\n37620,1066,AMER,home,online,76.81,7,0.176,bundle,2024-12-17\r\n37621,1417,APAC,electronics,online,228.22,7,0.043,bundle,2024-12-14\r\n37622,1229,LATAM,grocery,mobile,49.09,7,0.034,loyalty,2024-10-21\r\n37623,1546,EMEA,home,online,50.29,1,0.195,loyalty,2024-08-25\r\n37624,2199,LATAM,sports,online,35.39,5,0.134,none,2024-09-16\r\n37625,1919,EMEA,fashion,online,81.09,7,0.215,loyalty,2024-06-21\r\n37626,2053,AMER,grocery,online,24.96,6,0.132,bundle,2024-09-18\r\n37627,1410,AMER,fashion,retail,71.00,5,0.133,bundle,2024-08-13\r\n37628,2056,LATAM,fashion,retail,42.62,8,0.034,none,2024-04-25\r\n37629,1093,APAC,fashion,retail,30.92,4,0.097,none,2024-06-22\r\n37630,1541,APAC,grocery,retail,46.60,6,0.118,coupon,2024-08-21\r\n37631,1803,LATAM,sports,retail,98.54,5,0.241,bundle,2024-11-09\r\n37632,1399,AMER,fashion,online,104.50,4,0.075,bundle,2024-10-09\r\n37633,2362,AMER,electronics,online,103.78,5,0.056,none,2024-01-16\r\n37634,1932,EMEA,home,online,51.86,6,0.038,coupon,2024-07-03\r\n37635,1278,AMER,grocery,online,37.47,2,0.095,bundle,2024-03-03\r\n37636,1996,APAC,electronics,retail,33.94,8,0.128,none,2024-04-06\r\n37637,1253,AMER,home,retail,120.25,1,0.177,none,2024-10-11\r\n37638,1861,AMER,grocery,online,81.77,3,0.108,none,2024-09-22\r\n37639,1720,AMER,electronics,online,46.30,5,0.139,bundle,2024-03-10\r\n37640,1466,AMER,t
oys,online,27.53,3,0.171,coupon,2024-01-01\r\n37641,1738,LATAM,home,retail,54.90,3,0.154,none,2024-07-28\r\n37642,1782,LATAM,home,online,50.10,3,0.177,none,2024-02-19\r\n37643,2459,AMER,home,online,33.33,2,0.209,coupon,2024-05-28\r\n37644,2000,APAC,home,partner,221.79,8,0.007,none,2024-04-14\r\n37645,1656,LATAM,electronics,mobile,47.71,1,0.092,none,2024-05-05\r\n37646,1059,AMER,sports,online,92.30,7,0.226,bundle,2024-11-28\r\n37647,1006,AMER,grocery,online,115.06,7,0.052,none,2024-05-21\r\n37648,2130,EMEA,electronics,retail,27.83,6,0.193,none,2024-05-06\r\n37649,1544,LATAM,sports,mobile,31.87,3,0.178,none,2024-05-07\r\n37650,1751,AMER,toys,online,26.35,1,0.245,loyalty,2024-07-04\r\n37651,1480,APAC,grocery,retail,65.33,2,0.197,none,2024-09-15\r\n37652,1696,LATAM,grocery,retail,108.76,5,0.001,none,2024-01-08\r\n37653,2401,LATAM,sports,online,41.16,2,0.092,coupon,2024-12-13\r\n37654,1607,LATAM,toys,online,48.36,5,0.150,coupon,2024-05-21\r\n37655,1432,APAC,grocery,retail,37.78,4,0.006,bundle,2024-03-16\r\n37656,1290,EMEA,sports,retail,92.16,1,0.222,none,2024-04-24\r\n37657,2128,EMEA,toys,retail,29.87,4,0.147,none,2024-07-17\r\n37658,1269,LATAM,fashion,mobile,97.48,7,0.056,none,2024-09-19\r\n37659,2045,LATAM,fashion,retail,33.00,5,0.020,none,2024-11-04\r\n37660,2431,LATAM,fashion,online,75.22,4,0.077,none,2024-04-21\r\n37661,1252,APAC,fashion,online,62.47,3,0.019,loyalty,2024-02-09\r\n37662,1215,LATAM,sports,retail,51.81,7,0.181,coupon,2024-06-22\r\n37663,1023,APAC,toys,online,32.21,3,0.132,none,2024-08-13\r\n37664,1698,EMEA,sports,online,55.16,3,0.073,coupon,2024-12-17\r\n37665,1335,APAC,electronics,retail,138.45,5,0.175,none,2024-01-15\r\n37666,2296,AMER,toys,online,91.75,6,0.035,none,2024-10-13\r\n37667,2379,AMER,fashion,retail,60.22,7,0.087,bundle,2024-01-24\r\n37668,1731,AMER,toys,retail,103.36,5,0.059,coupon,2024-03-04\r\n37669,2381,AMER,sports,retail,94.58,6,0.220,none,2024-09-04\r\n37670,2008,APAC,home,retail,90.16,8,0.144,coupon,2024-01-14\r\n37671,1288,LATAM,to
ys,online,31.18,7,0.011,none,2024-09-07\r\n37672,2294,EMEA,home,online,32.76,8,0.020,loyalty,2024-08-05\r\n37673,2411,EMEA,sports,mobile,51.00,5,0.105,coupon,2024-12-04\r\n37674,1249,EMEA,electronics,mobile,34.30,3,0.060,loyalty,2024-09-24\r\n37675,2141,AMER,home,online,60.40,8,0.031,none,2024-08-08\r\n37676,1423,EMEA,grocery,retail,71.50,6,0.207,bundle,2024-06-11\r\n37677,2294,EMEA,home,mobile,22.98,4,0.004,none,2024-03-07\r\n37678,1576,EMEA,grocery,online,64.27,1,0.050,coupon,2024-09-10\r\n37679,1415,AMER,sports,online,94.27,8,0.185,coupon,2024-02-16\r\n37680,1765,EMEA,grocery,online,107.38,6,0.142,bundle,2024-10-04\r\n37681,2267,AMER,sports,retail,61.36,2,0.163,none,2024-06-11\r\n37682,2085,AMER,grocery,retail,133.74,5,0.103,none,2024-05-06\r\n37683,1368,EMEA,toys,partner,59.25,1,0.145,loyalty,2024-12-07\r\n37684,1492,APAC,toys,mobile,137.70,4,0.023,coupon,2024-07-14\r\n37685,1231,AMER,toys,online,67.19,4,0.214,none,2024-09-24\r\n37686,1165,AMER,toys,online,101.95,7,0.190,loyalty,2024-10-23\r\n37687,1035,EMEA,toys,online,138.48,1,0.203,coupon,2024-06-03\r\n37688,2110,LATAM,grocery,online,33.40,4,0.174,none,2024-02-15\r\n37689,2454,LATAM,toys,retail,93.41,5,0.039,coupon,2024-05-23\r\n37690,2228,EMEA,grocery,retail,44.90,7,0.183,none,2024-02-12\r\n37691,1279,EMEA,grocery,online,86.03,6,0.189,coupon,2024-07-18\r\n37692,1700,EMEA,electronics,retail,74.64,6,0.096,coupon,2024-05-10\r\n37693,2234,LATAM,grocery,online,127.90,1,0.089,none,2024-12-07\r\n37694,1089,LATAM,toys,retail,40.31,3,0.023,coupon,2024-06-05\r\n37695,2007,LATAM,electronics,online,64.72,6,0.049,none,2024-09-02\r\n37696,2084,LATAM,fashion,online,30.17,2,0.088,bundle,2024-08-01\r\n37697,2251,APAC,home,retail,38.20,4,0.134,loyalty,2024-05-04\r\n37698,1207,APAC,toys,online,54.09,8,0.013,bundle,2024-02-09\r\n37699,1770,AMER,electronics,retail,31.94,8,0.241,none,2024-08-28\r\n37700,2326,LATAM,fashion,partner,135.78,1,0.142,none,2024-07-27\r\n37701,2194,APAC,electronics,online,98.22,1,0.209,none,2024-09-01\r\
n37702,1693,EMEA,toys,retail,54.36,7,0.001,none,2024-10-03\r\n37703,1121,EMEA,grocery,retail,38.90,6,0.030,coupon,2024-09-26\r\n37704,1574,AMER,home,online,32.54,2,0.126,bundle,2024-04-11\r\n37705,2466,APAC,toys,partner,49.25,4,0.198,loyalty,2024-08-28\r\n37706,2415,AMER,home,mobile,114.08,2,0.056,none,2024-04-04\r\n37707,2370,EMEA,grocery,online,52.07,1,0.103,none,2024-02-16\r\n37708,1440,AMER,sports,mobile,62.83,8,0.050,bundle,2024-04-15\r\n37709,1992,LATAM,electronics,online,37.77,2,0.089,none,2024-09-16\r\n37710,1480,APAC,toys,online,27.50,1,0.180,none,2024-01-11\r\n37711,1699,APAC,sports,online,89.96,8,0.082,none,2024-12-01\r\n37712,2183,EMEA,grocery,retail,32.28,5,0.088,none,2024-09-23\r\n37713,2005,APAC,electronics,online,47.74,6,0.137,loyalty,2024-01-03\r\n37714,1401,LATAM,electronics,online,108.46,6,0.168,loyalty,2024-05-04\r\n37715,1164,EMEA,grocery,online,92.99,8,0.153,loyalty,2024-08-14\r\n37716,2062,EMEA,home,retail,31.58,8,0.158,none,2024-07-22\r\n37717,1600,AMER,home,online,23.13,5,0.061,none,2024-12-04\r\n37718,2294,EMEA,fashion,retail,35.54,3,0.155,none,2024-04-23\r\n37719,1207,APAC,grocery,online,123.12,4,0.021,none,2024-01-18\r\n37720,1591,APAC,electronics,online,59.52,1,0.247,none,2024-04-09\r\n37721,1315,AMER,fashion,online,32.13,1,0.018,none,2024-03-17\r\n37722,1667,AMER,grocery,retail,60.48,6,0.182,none,2024-10-21\r\n37723,2195,APAC,grocery,mobile,117.53,7,0.222,none,2024-12-27\r\n37724,1458,APAC,home,retail,44.58,4,0.139,loyalty,2024-04-10\r\n37725,1453,APAC,electronics,mobile,39.68,7,0.193,none,2024-01-14\r\n37726,1085,EMEA,sports,retail,47.83,7,0.001,none,2024-12-10\r\n37727,1178,EMEA,fashion,retail,48.47,8,0.063,coupon,2024-06-21\r\n37728,1693,EMEA,grocery,online,75.60,4,0.038,bundle,2024-12-17\r\n37729,1501,AMER,electronics,online,71.58,2,0.134,bundle,2024-02-10\r\n37730,2036,APAC,home,online,30.59,4,0.096,none,2024-07-23\r\n37731,1485,APAC,electronics,online,39.40,6,0.114,loyalty,2024-12-05\r\n37732,1743,LATAM,grocery,mobile,33.62,6,0.24
0,bundle,2024-09-02\r\n37733,1935,EMEA,grocery,online,30.11,3,0.137,bundle,2024-10-06\r\n37734,2418,AMER,grocery,retail,36.39,8,0.032,none,2024-04-24\r\n37735,1139,EMEA,grocery,online,42.44,4,0.080,none,2024-06-25\r\n37736,1817,APAC,fashion,mobile,54.22,1,0.235,loyalty,2024-07-04\r\n37737,1850,APAC,home,online,66.12,6,0.051,none,2024-08-11\r\n37738,1473,LATAM,grocery,retail,47.60,5,0.013,none,2024-07-08\r\n37739,1694,APAC,fashion,retail,71.43,7,0.162,loyalty,2024-03-18\r\n37740,2090,AMER,home,online,331.89,3,0.190,none,2024-09-27\r\n37741,1529,LATAM,sports,online,48.36,1,0.084,none,2024-12-16\r\n37742,1446,AMER,grocery,online,29.25,4,0.172,none,2024-01-02\r\n37743,1205,APAC,electronics,mobile,73.40,2,0.159,loyalty,2024-01-06\r\n37744,1149,LATAM,home,retail,65.56,6,0.122,coupon,2024-12-04\r\n37745,1596,EMEA,sports,retail,85.49,1,0.237,coupon,2024-01-07\r\n37746,1890,LATAM,sports,online,92.18,2,0.010,none,2024-01-03\r\n37747,2114,AMER,fashion,online,33.96,6,0.161,none,2024-11-15\r\n37748,2474,LATAM,electronics,online,61.28,2,0.059,none,2024-08-25\r\n37749,1024,APAC,sports,retail,84.33,6,0.094,none,2024-01-03\r\n37750,2421,AMER,home,online,22.56,4,0.018,none,2024-06-28\r\n37751,1148,AMER,fashion,online,204.93,5,0.094,loyalty,2024-08-04\r\n37752,1192,EMEA,home,retail,30.90,5,0.181,bundle,2024-07-13\r\n37753,2394,EMEA,fashion,online,173.82,4,0.139,none,2024-09-03\r\n37754,1946,AMER,electronics,retail,105.39,7,0.216,none,2024-11-14\r\n37755,1755,APAC,home,retail,54.44,1,0.217,none,2024-01-23\r\n37756,2262,APAC,toys,online,22.93,3,0.041,bundle,2024-09-24\r\n37757,1475,LATAM,grocery,online,89.81,7,0.033,none,2024-10-03\r\n37758,1729,AMER,grocery,partner,44.03,4,0.184,none,2024-10-22\r\n37759,1723,LATAM,toys,online,40.45,1,0.131,bundle,2024-03-18\r\n37760,1217,EMEA,fashion,online,71.90,2,0.124,none,2024-08-27\r\n37761,1838,AMER,home,mobile,38.93,5,0.244,none,2024-10-16\r\n37762,1831,APAC,electronics,online,102.61,5,0.139,bundle,2024-10-05\r\n37763,1449,EMEA,grocery,mobile,35
.73,3,0.089,none,2024-08-12\r\n37764,1161,AMER,sports,online,33.28,3,0.129,bundle,2024-12-17\r\n37765,2133,AMER,toys,mobile,24.66,1,0.156,coupon,2024-05-15\r\n37766,1267,EMEA,toys,online,82.89,3,0.190,none,2024-05-08\r\n37767,1065,AMER,home,mobile,53.54,7,0.063,coupon,2024-08-24\r\n37768,1017,AMER,grocery,online,50.36,4,0.192,none,2024-03-06\r\n37769,1550,APAC,electronics,mobile,25.06,7,0.156,loyalty,2024-11-18\r\n37770,2229,APAC,fashion,retail,74.16,7,0.225,coupon,2024-11-13\r\n37771,2238,AMER,home,mobile,52.04,5,0.231,coupon,2024-12-01\r\n37772,2237,EMEA,electronics,mobile,21.46,4,0.228,none,2024-08-07\r\n37773,1906,APAC,grocery,online,103.52,4,0.203,coupon,2024-01-12\r\n37774,2176,AMER,sports,online,40.91,5,0.085,bundle,2024-12-04\r\n37775,2122,AMER,grocery,online,153.92,4,0.081,none,2024-05-09\r\n37776,1518,AMER,grocery,online,107.90,4,0.129,none,2024-04-20\r\n37777,2165,AMER,fashion,retail,41.75,1,0.191,coupon,2024-09-02\r\n37778,2359,LATAM,grocery,online,93.54,3,0.030,bundle,2024-06-11\r\n37779,2239,EMEA,grocery,online,145.42,1,0.151,none,2024-12-06\r\n37780,1249,EMEA,fashion,online,52.94,3,0.051,none,2024-01-02\r\n37781,1273,AMER,grocery,online,59.05,5,0.097,coupon,2024-02-21\r\n37782,1559,EMEA,sports,partner,55.29,3,0.025,none,2024-12-06\r\n37783,1815,APAC,fashion,online,95.73,7,0.247,none,2024-04-26\r\n37784,1359,LATAM,home,retail,106.80,2,0.042,none,2024-04-06\r\n37785,1459,LATAM,grocery,online,42.02,5,0.163,bundle,2024-05-25\r\n37786,1139,EMEA,toys,online,37.33,1,0.161,none,2024-10-08\r\n37787,1655,LATAM,grocery,retail,118.52,1,0.052,coupon,2024-12-09\r\n37788,1642,EMEA,home,online,47.82,5,0.048,none,2024-07-12\r\n37789,2265,APAC,sports,retail,39.46,6,0.049,none,2024-03-11\r\n37790,2470,EMEA,grocery,retail,89.51,8,0.060,none,2024-10-17\r\n37791,1332,APAC,toys,retail,24.21,8,0.093,none,2024-05-19\r\n37792,1032,AMER,sports,retail,51.75,1,0.055,none,2024-10-16\r\n37793,2116,LATAM,grocery,mobile,26.25,4,0.210,none,2024-11-20\r\n37794,2182,AMER,home,retail,42.
92,1,0.168,none,2024-03-26\r\n37795,2202,APAC,sports,mobile,54.47,1,0.192,none,2024-11-03\r\n37796,2384,LATAM,fashion,retail,82.34,8,0.233,none,2024-05-23\r\n37797,2081,APAC,fashion,retail,48.72,1,0.080,coupon,2024-09-02\r\n37798,1052,LATAM,home,retail,30.76,6,0.002,coupon,2024-02-05\r\n37799,1873,EMEA,sports,retail,60.30,4,0.194,bundle,2024-05-23\r\n37800,2388,LATAM,electronics,retail,15.77,8,0.086,none,2024-10-06\r\n37801,2093,LATAM,home,mobile,71.91,5,0.054,bundle,2024-04-27\r\n37802,1925,LATAM,grocery,retail,68.82,1,0.034,none,2024-08-22\r\n37803,1812,EMEA,fashion,retail,106.45,1,0.057,coupon,2024-10-24\r\n37804,2425,APAC,grocery,retail,111.75,1,0.080,coupon,2024-06-20\r\n37805,1143,LATAM,grocery,online,53.48,3,0.059,bundle,2024-03-07\r\n37806,1182,EMEA,home,mobile,16.99,8,0.247,none,2024-01-17\r\n37807,1439,LATAM,home,retail,142.34,2,0.075,bundle,2024-03-05\r\n37808,2420,EMEA,electronics,online,23.86,8,0.131,none,2024-02-18\r\n37809,1338,EMEA,fashion,online,29.97,6,0.238,loyalty,2024-09-04\r\n37810,1408,AMER,sports,online,68.55,8,0.064,none,2024-05-21\r\n37811,1734,AMER,grocery,retail,62.20,4,0.057,none,2024-07-02\r\n37812,1355,EMEA,electronics,online,45.18,8,0.009,none,2024-09-01\r\n37813,1395,APAC,sports,retail,15.36,6,0.233,coupon,2024-03-11\r\n37814,2038,LATAM,home,partner,45.64,4,0.128,coupon,2024-10-16\r\n37815,1496,AMER,fashion,online,55.72,7,0.218,loyalty,2024-07-15\r\n37816,1951,LATAM,toys,mobile,51.12,7,0.226,none,2024-08-17\r\n37817,2459,AMER,grocery,online,71.96,8,0.118,none,2024-02-07\r\n37818,2031,AMER,home,retail,70.84,4,0.141,none,2024-04-16\r\n37819,1353,EMEA,electronics,retail,54.75,4,0.249,none,2024-05-01\r\n37820,1344,EMEA,grocery,retail,50.08,7,0.111,none,2024-10-06\r\n37821,1428,APAC,home,partner,32.23,5,0.025,none,2024-08-01\r\n37822,2319,AMER,fashion,online,29.49,7,0.219,coupon,2024-02-18\r\n37823,1577,AMER,home,mobile,82.83,4,0.200,coupon,2024-01-23\r\n37824,1323,EMEA,grocery,retail,34.09,5,0.174,loyalty,2024-01-21\r\n37825,2441,EMEA,gr
ocery,online,41.71,1,0.112,loyalty,2024-05-09\r\n37826,1121,EMEA,home,retail,27.44,4,0.179,loyalty,2024-03-15\r\n37827,1714,APAC,sports,online,68.67,1,0.195,loyalty,2024-10-26\r\n37828,1057,LATAM,grocery,online,91.11,6,0.034,coupon,2024-03-05\r\n37829,1965,LATAM,sports,retail,97.66,7,0.008,none,2024-02-21\r\n37830,1834,AMER,toys,retail,25.11,2,0.065,none,2024-06-14\r\n37831,1656,LATAM,electronics,online,100.27,3,0.134,none,2024-02-12\r\n37832,1446,AMER,electronics,online,26.69,6,0.123,none,2024-03-09\r\n37833,1229,LATAM,sports,online,52.29,8,0.129,none,2024-08-28\r\n37834,1451,EMEA,grocery,online,32.29,8,0.138,none,2024-08-06\r\n37835,1513,APAC,electronics,online,231.52,1,0.055,none,2024-11-10\r\n37836,2263,AMER,electronics,online,38.06,4,0.190,bundle,2024-01-20\r\n37837,2350,APAC,electronics,online,95.32,2,0.104,none,2024-04-06\r\n37838,1732,LATAM,home,online,138.94,7,0.243,none,2024-10-03\r\n37839,2210,APAC,fashion,online,20.52,7,0.149,none,2024-01-14\r\n37840,2081,APAC,home,retail,32.12,2,0.122,none,2024-07-25\r\n37841,2436,LATAM,toys,online,10.75,3,0.024,coupon,2024-10-24\r\n37842,1761,EMEA,electronics,mobile,52.94,4,0.062,none,2024-11-18\r\n37843,2216,AMER,home,retail,48.15,7,0.214,coupon,2024-12-27\r\n37844,2487,LATAM,grocery,online,43.66,7,0.000,none,2024-05-15\r\n37845,2086,APAC,toys,online,66.50,6,0.224,loyalty,2024-11-06\r\n37846,1012,LATAM,fashion,retail,46.20,4,0.201,none,2024-06-09\r\n37847,1544,LATAM,grocery,online,57.72,2,0.025,none,2024-02-10\r\n37848,1991,APAC,electronics,retail,60.97,1,0.153,loyalty,2024-09-17\r\n37849,2168,EMEA,grocery,mobile,51.74,6,0.199,none,2024-07-17\r\n37850,1321,EMEA,home,online,144.54,7,0.078,none,2024-09-01\r\n37851,1758,AMER,home,online,64.93,4,0.244,loyalty,2024-06-10\r\n37852,1649,APAC,electronics,retail,48.78,7,0.094,coupon,2024-07-01\r\n37853,1064,AMER,electronics,retail,63.93,3,0.034,none,2024-02-04\r\n37854,1590,APAC,home,retail,62.32,1,0.228,none,2024-05-05\r\n37855,2414,EMEA,home,mobile,71.59,1,0.022,none,2024-12
-21\r\n37856,2411,EMEA,electronics,retail,33.96,6,0.137,coupon,2024-12-22\r\n37857,1899,APAC,electronics,mobile,83.12,7,0.110,none,2024-04-21\r\n37858,2222,LATAM,electronics,online,63.88,7,0.050,none,2024-12-11\r\n37859,2375,AMER,sports,online,91.38,1,0.229,none,2024-01-02\r\n37860,1508,LATAM,home,mobile,97.21,8,0.028,none,2024-11-14\r\n37861,2431,LATAM,toys,partner,43.84,1,0.085,none,2024-10-20\r\n37862,1706,EMEA,toys,retail,109.19,5,0.228,none,2024-05-07\r\n37863,1013,LATAM,grocery,mobile,77.60,7,0.056,loyalty,2024-11-27\r\n37864,2233,EMEA,toys,retail,121.09,1,0.026,coupon,2024-03-18\r\n37865,1939,LATAM,fashion,retail,86.53,8,0.147,loyalty,2024-07-01\r\n37866,1280,LATAM,grocery,retail,32.71,4,0.206,loyalty,2024-06-09\r\n37867,1461,LATAM,toys,retail,51.18,4,0.148,loyalty,2024-08-13\r\n37868,1823,EMEA,home,retail,69.03,5,0.050,none,2024-12-28\r\n37869,2178,AMER,grocery,mobile,66.20,4,0.085,none,2024-08-02\r\n37870,1590,APAC,home,retail,40.22,3,0.146,loyalty,2024-01-06\r\n37871,1924,AMER,sports,partner,36.49,8,0.205,none,2024-02-01\r\n37872,1350,LATAM,fashion,online,26.73,7,0.193,none,2024-11-16\r\n37873,1544,LATAM,electronics,retail,82.04,8,0.059,none,2024-05-20\r\n37874,1849,EMEA,grocery,online,44.37,7,0.199,coupon,2024-07-24\r\n37875,2248,LATAM,fashion,retail,36.06,5,0.099,loyalty,2024-06-24\r\n37876,1862,LATAM,home,retail,47.96,5,0.180,none,2024-04-12\r\n37877,1540,LATAM,sports,online,44.75,7,0.137,none,2024-11-04\r\n37878,1516,EMEA,electronics,mobile,46.29,5,0.249,bundle,2024-03-21\r\n37879,2082,APAC,sports,retail,43.03,5,0.024,coupon,2024-08-02\r\n37880,1270,LATAM,grocery,online,70.70,6,0.092,none,2024-10-20\r\n37881,2403,LATAM,sports,online,81.68,3,0.038,loyalty,2024-10-25\r\n37882,1896,EMEA,toys,retail,99.03,3,0.108,none,2024-01-02\r\n37883,1245,APAC,grocery,online,19.37,2,0.226,none,2024-06-02\r\n37884,2162,EMEA,toys,online,69.42,6,0.116,none,2024-06-23\r\n37885,2440,APAC,sports,retail,29.93,4,0.027,none,2024-02-15\r\n37886,1806,APAC,electronics,mobile,87.88
,3,0.246,none,2024-06-18\r\n37887,2198,EMEA,sports,online,58.42,4,0.216,loyalty,2024-12-27\r\n37888,1884,APAC,grocery,online,33.29,7,0.137,bundle,2024-01-23\r\n37889,1410,AMER,home,mobile,34.22,8,0.018,none,2024-08-16\r\n37890,1614,EMEA,home,partner,64.76,1,0.016,coupon,2024-02-02\r\n37891,2183,EMEA,grocery,online,43.69,5,0.126,loyalty,2024-06-04\r\n37892,1401,LATAM,home,retail,33.09,5,0.015,none,2024-09-11\r\n37893,1056,LATAM,home,mobile,97.21,8,0.224,none,2024-11-16\r\n37894,1131,APAC,toys,online,86.85,3,0.148,none,2024-10-13\r\n37895,1117,LATAM,home,retail,111.03,1,0.150,none,2024-02-28\r\n37896,1283,APAC,electronics,partner,44.71,7,0.078,coupon,2024-06-19\r\n37897,1430,EMEA,home,retail,52.88,3,0.173,bundle,2024-11-16\r\n37898,1905,APAC,electronics,online,35.47,3,0.054,none,2024-01-08\r\n37899,1398,APAC,electronics,retail,27.36,5,0.082,none,2024-05-10\r\n37900,1918,EMEA,fashion,retail,32.85,3,0.028,none,2024-09-21\r\n37901,2385,APAC,grocery,retail,89.45,1,0.130,none,2024-02-20\r\n37902,1732,LATAM,fashion,online,46.48,2,0.027,coupon,2024-06-13\r\n37903,1657,LATAM,electronics,mobile,65.61,7,0.197,coupon,2024-07-14\r\n37904,2184,APAC,electronics,retail,49.61,5,0.100,bundle,2024-03-06\r\n37905,1425,EMEA,home,online,66.93,7,0.067,bundle,2024-06-22\r\n37906,1660,AMER,grocery,online,60.93,8,0.126,coupon,2024-04-01\r\n37907,1116,LATAM,electronics,retail,37.49,2,0.114,none,2024-12-01\r\n37908,2213,APAC,grocery,retail,72.89,7,0.095,bundle,2024-04-15\r\n37909,2469,LATAM,fashion,online,153.21,8,0.245,none,2024-07-05\r\n37910,1452,LATAM,home,online,53.70,2,0.049,none,2024-10-20\r\n37911,2169,EMEA,electronics,online,73.37,2,0.174,none,2024-10-20\r\n37912,1944,AMER,grocery,online,58.52,3,0.051,none,2024-02-13\r\n37913,1919,EMEA,electronics,online,54.87,6,0.181,loyalty,2024-10-01\r\n37914,1561,EMEA,toys,retail,51.45,6,0.221,coupon,2024-07-25\r\n37915,2062,EMEA,electronics,retail,66.32,5,0.066,coupon,2024-09-27\r\n37916,2109,EMEA,electronics,partner,59.45,8,0.099,loyalty,2024-11-
05\r\n37917,2340,EMEA,toys,online,41.33,5,0.161,bundle,2024-08-19\r\n37918,1382,LATAM,electronics,mobile,79.43,7,0.105,none,2024-09-05\r\n37919,1204,AMER,grocery,online,34.13,4,0.204,coupon,2024-05-01\r\n37920,1809,APAC,toys,retail,63.80,6,0.073,none,2024-05-28\r\n37921,1266,AMER,grocery,online,60.05,6,0.125,bundle,2024-11-12\r\n37922,1821,LATAM,home,mobile,58.21,2,0.044,coupon,2024-03-16\r\n37923,1060,LATAM,sports,online,51.37,6,0.163,bundle,2024-01-01\r\n37924,2145,AMER,sports,online,96.71,3,0.156,none,2024-08-08\r\n37925,1087,AMER,home,online,65.04,2,0.239,none,2024-12-25\r\n37926,1704,AMER,home,retail,51.64,4,0.049,none,2024-12-09\r\n37927,2390,AMER,home,online,80.12,5,0.030,loyalty,2024-11-17\r\n37928,2323,AMER,home,retail,151.04,7,0.016,coupon,2024-05-20\r\n37929,1969,LATAM,home,online,63.57,8,0.126,none,2024-06-19\r\n37930,1612,LATAM,grocery,online,56.45,8,0.010,none,2024-05-24\r\n37931,1881,LATAM,sports,online,105.21,2,0.199,none,2024-08-28\r\n37932,1530,APAC,electronics,online,51.30,8,0.209,none,2024-11-04\r\n37933,1125,LATAM,toys,online,63.04,4,0.180,loyalty,2024-10-14\r\n37934,1761,EMEA,toys,retail,29.89,2,0.012,none,2024-07-03\r\n37935,1279,EMEA,home,online,47.28,8,0.209,none,2024-12-24\r\n37936,1707,APAC,sports,retail,112.68,3,0.119,loyalty,2024-11-28\r\n37937,1046,EMEA,sports,online,36.02,1,0.237,none,2024-04-03\r\n37938,2194,APAC,grocery,online,51.91,5,0.178,loyalty,2024-08-08\r\n37939,1685,AMER,grocery,online,61.54,4,0.050,none,2024-11-20\r\n37940,2178,AMER,home,retail,97.06,3,0.242,coupon,2024-05-25\r\n37941,1562,AMER,home,online,38.69,4,0.118,none,2024-02-22\r\n37942,2088,EMEA,sports,online,43.93,3,0.222,coupon,2024-02-03\r\n37943,1241,APAC,home,retail,43.75,7,0.156,none,2024-09-04\r\n37944,2348,EMEA,home,retail,46.71,1,0.213,none,2024-12-20\r\n37945,1756,EMEA,electronics,online,116.68,2,0.176,none,2024-10-17\r\n37946,1302,LATAM,home,online,80.88,7,0.033,none,2024-11-13\r\n37947,1231,AMER,fashion,retail,199.20,7,0.007,none,2024-05-14\r\n37948,1982,
EMEA,sports,online,126.09,1,0.101,bundle,2024-01-19\r\n37949,2263,AMER,grocery,partner,39.28,1,0.206,none,2024-03-07\r\n37950,1131,APAC,grocery,retail,84.65,3,0.236,none,2024-10-05\r\n37951,1153,AMER,home,online,81.19,2,0.099,coupon,2024-04-22\r\n37952,1563,EMEA,electronics,retail,37.68,1,0.143,coupon,2024-08-04\r\n37953,1983,LATAM,home,online,93.26,8,0.096,loyalty,2024-11-21\r\n37954,1097,EMEA,toys,online,30.37,8,0.073,bundle,2024-12-28\r\n37955,1004,LATAM,fashion,retail,49.50,2,0.022,none,2024-03-18\r\n37956,1896,EMEA,home,online,176.71,8,0.134,none,2024-10-28\r\n37957,1457,EMEA,sports,mobile,206.28,2,0.178,coupon,2024-01-11\r\n37958,2290,LATAM,electronics,mobile,89.85,1,0.057,coupon,2024-07-26\r\n37959,2101,APAC,sports,online,74.61,8,0.157,none,2024-03-18\r\n37960,1681,LATAM,home,online,86.62,3,0.050,none,2024-01-09\r\n37961,2129,APAC,toys,online,24.59,3,0.067,none,2024-03-01\r\n37962,2225,EMEA,fashion,retail,29.64,4,0.167,none,2024-09-02\r\n37963,1081,AMER,sports,online,94.41,1,0.223,coupon,2024-10-18\r\n37964,2058,LATAM,electronics,online,79.43,6,0.241,coupon,2024-03-04\r\n37965,2188,EMEA,toys,mobile,39.99,4,0.103,none,2024-02-12\r\n37966,2252,EMEA,sports,retail,58.88,5,0.119,none,2024-12-16\r\n37967,1175,AMER,grocery,online,72.61,2,0.171,loyalty,2024-10-12\r\n37968,1556,AMER,electronics,online,45.84,8,0.012,none,2024-12-09\r\n37969,2262,APAC,electronics,online,46.29,8,0.243,loyalty,2024-11-11\r\n37970,1929,LATAM,fashion,online,101.40,7,0.233,none,2024-07-04\r\n37971,2460,AMER,home,online,41.24,7,0.214,none,2024-06-13\r\n37972,1060,LATAM,home,online,40.57,5,0.120,none,2024-04-10\r\n37973,1647,LATAM,home,mobile,28.22,6,0.194,coupon,2024-10-25\r\n37974,1663,LATAM,home,online,72.40,8,0.024,none,2024-07-22\r\n37975,1165,AMER,home,online,34.61,3,0.179,loyalty,2024-03-09\r\n37976,2425,APAC,grocery,partner,70.40,5,0.161,none,2024-12-24\r\n37977,2454,LATAM,sports,retail,138.73,7,0.042,none,2024-06-17\r\n37978,1888,LATAM,home,retail,55.38,3,0.189,none,2024-08-12\r\n3797
9,1095,APAC,electronics,online,69.92,8,0.169,loyalty,2024-02-09\r\n37980,2380,AMER,sports,retail,36.57,7,0.045,bundle,2024-09-09\r\n37981,1488,AMER,grocery,online,39.06,1,0.037,none,2024-02-09\r\n37982,2130,EMEA,sports,mobile,44.26,2,0.221,none,2024-09-19\r\n37983,1041,APAC,grocery,online,57.76,1,0.135,coupon,2024-12-11\r\n37984,1996,APAC,fashion,retail,82.02,8,0.162,coupon,2024-10-19\r\n37985,1154,LATAM,electronics,retail,47.66,7,0.213,none,2024-03-06\r\n37986,1449,EMEA,electronics,mobile,146.86,2,0.059,none,2024-09-28\r\n37987,2257,AMER,sports,retail,27.07,8,0.048,none,2024-05-03\r\n37988,1291,EMEA,grocery,retail,33.46,6,0.124,loyalty,2024-10-11\r\n37989,1199,APAC,electronics,mobile,134.74,6,0.009,none,2024-11-27\r\n37990,1247,AMER,electronics,online,11.03,5,0.125,bundle,2024-01-28\r\n37991,1954,APAC,electronics,mobile,71.66,6,0.190,none,2024-03-28\r\n37992,1499,EMEA,electronics,retail,168.33,7,0.132,none,2024-11-07\r\n37993,1582,AMER,sports,retail,58.42,7,0.188,none,2024-05-14\r\n37994,1367,AMER,electronics,online,81.77,4,0.122,coupon,2024-02-14\r\n37995,2451,APAC,toys,online,54.68,5,0.060,coupon,2024-11-03\r\n37996,1312,EMEA,grocery,online,94.13,1,0.230,none,2024-02-24\r\n37997,2303,EMEA,home,retail,61.60,3,0.117,none,2024-03-28\r\n37998,2308,AMER,electronics,online,43.52,4,0.145,none,2024-08-17\r\n37999,1083,AMER,grocery,online,38.54,6,0.032,bundle,2024-07-23\r\n38000,2132,LATAM,fashion,retail,77.05,6,0.084,loyalty,2024-05-10\r\n38001,2197,LATAM,home,online,47.27,1,0.063,none,2024-10-08\r\n38002,1921,LATAM,grocery,online,69.69,5,0.016,none,2024-11-11\r\n38003,1398,APAC,toys,online,89.31,1,0.093,none,2024-07-05\r\n38004,1452,LATAM,fashion,online,158.33,7,0.222,none,2024-03-18\r\n38005,2416,LATAM,electronics,mobile,60.63,1,0.147,none,2024-12-23\r\n38006,1638,EMEA,electronics,retail,26.55,3,0.003,coupon,2024-04-23\r\n38007,1925,LATAM,grocery,online,47.73,1,0.201,none,2024-06-25\r\n38008,1191,EMEA,grocery,mobile,86.76,3,0.087,none,2024-06-11\r\n38009,1953,EMEA,spor
ts,retail,69.72,5,0.192,none,2024-09-16\r\n38010,1246,EMEA,home,online,41.85,8,0.062,none,2024-08-18\r\n38011,1350,LATAM,electronics,online,74.12,4,0.064,bundle,2024-02-15\r\n38012,1698,EMEA,electronics,retail,37.22,2,0.050,bundle,2024-09-12\r\n38013,1524,LATAM,toys,retail,54.46,3,0.073,coupon,2024-06-02\r\n38014,1332,APAC,toys,online,58.67,2,0.091,coupon,2024-01-25\r\n38015,2311,LATAM,electronics,online,105.70,3,0.018,none,2024-03-28\r\n38016,1064,AMER,grocery,online,24.68,8,0.002,bundle,2024-07-21\r\n38017,2235,AMER,fashion,retail,65.84,6,0.056,coupon,2024-05-14\r\n38018,2134,AMER,electronics,retail,89.75,4,0.175,none,2024-06-20\r\n38019,1024,APAC,sports,online,117.07,3,0.093,none,2024-07-16\r\n38020,1388,AMER,grocery,online,74.65,4,0.244,bundle,2024-04-17\r\n38021,1038,APAC,grocery,mobile,44.99,4,0.114,none,2024-01-25\r\n38022,1735,LATAM,grocery,online,66.29,1,0.045,none,2024-02-16\r\n38023,1616,APAC,sports,retail,11.53,4,0.215,coupon,2024-03-07\r\n38024,1156,APAC,sports,retail,44.50,7,0.075,none,2024-07-27\r\n38025,1847,LATAM,home,mobile,75.40,3,0.102,none,2024-06-21\r\n38026,2310,EMEA,home,retail,40.74,1,0.144,loyalty,2024-06-08\r\n38027,2222,LATAM,electronics,retail,15.50,7,0.008,coupon,2024-02-26\r\n38028,1048,EMEA,toys,online,56.52,6,0.135,coupon,2024-12-17\r\n38029,1702,AMER,electronics,online,15.14,8,0.158,loyalty,2024-02-12\r\n38030,1702,AMER,grocery,retail,28.06,1,0.037,none,2024-12-18\r\n38031,2271,LATAM,home,online,86.43,7,0.122,loyalty,2024-12-08\r\n38032,2298,APAC,home,mobile,94.90,2,0.089,loyalty,2024-12-01\r\n38033,2087,LATAM,toys,online,49.16,6,0.192,loyalty,2024-07-05\r\n38034,1221,LATAM,grocery,mobile,44.25,5,0.052,none,2024-09-04\r\n38035,1989,LATAM,grocery,retail,44.51,3,0.066,bundle,2024-08-16\r\n38036,2125,LATAM,grocery,retail,25.75,3,0.041,none,2024-08-26\r\n38037,2424,LATAM,toys,partner,50.45,4,0.237,loyalty,2024-02-26\r\n38038,2329,LATAM,grocery,retail,50.98,2,0.201,none,2024-02-14\r\n38039,1634,AMER,fashion,online,55.00,5,0.016,none,2024
-04-25\r\n38040,1141,AMER,toys,mobile,46.78,3,0.172,coupon,2024-10-19\r\n38041,2228,EMEA,electronics,retail,39.26,2,0.207,loyalty,2024-10-21\r\n38042,1798,AMER,home,online,64.93,3,0.190,none,2024-01-13\r\n38043,1337,APAC,toys,retail,36.34,5,0.066,loyalty,2024-07-07\r\n38044,1136,EMEA,toys,online,70.99,8,0.034,bundle,2024-03-21\r\n38045,1472,AMER,home,partner,28.24,3,0.161,bundle,2024-06-05\r\n38046,1547,AMER,grocery,retail,60.13,6,0.065,none,2024-01-12\r\n38047,2449,LATAM,grocery,retail,75.89,4,0.063,none,2024-10-09\r\n38048,1120,LATAM,sports,online,85.25,1,0.164,coupon,2024-09-19\r\n38049,2040,LATAM,grocery,retail,107.94,6,0.083,bundle,2024-04-07\r\n38050,2334,LATAM,sports,mobile,16.60,7,0.041,bundle,2024-10-21\r\n38051,2343,EMEA,sports,retail,79.71,5,0.163,bundle,2024-06-09\r\n38052,1052,LATAM,electronics,online,99.77,4,0.080,coupon,2024-05-03\r\n38053,1683,AMER,grocery,retail,61.59,4,0.235,none,2024-10-25\r\n38054,2428,LATAM,toys,online,53.60,2,0.107,none,2024-06-01\r\n38055,1511,EMEA,home,retail,47.03,5,0.012,none,2024-04-02\r\n38056,2401,LATAM,grocery,retail,120.62,4,0.090,none,2024-10-13\r\n38057,1176,EMEA,sports,mobile,64.87,3,0.130,none,2024-04-21\r\n38058,2245,APAC,fashion,online,51.59,2,0.248,none,2024-09-26\r\n38059,1677,EMEA,grocery,online,68.66,5,0.046,none,2024-07-20\r\n38060,1101,AMER,toys,retail,48.62,3,0.215,bundle,2024-09-20\r\n38061,2098,AMER,fashion,retail,108.54,1,0.147,loyalty,2024-11-21\r\n38062,2094,AMER,grocery,retail,97.84,4,0.171,none,2024-06-24\r\n38063,1202,APAC,electronics,retail,142.03,1,0.027,none,2024-08-18\r\n38064,2323,AMER,fashion,online,72.80,6,0.188,bundle,2024-12-06\r\n38065,1166,AMER,toys,online,108.84,7,0.102,none,2024-09-05\r\n38066,1023,APAC,grocery,online,93.56,3,0.072,loyalty,2024-02-17\r\n38067,2397,LATAM,electronics,retail,90.61,5,0.207,coupon,2024-05-24\r\n38068,2224,EMEA,home,mobile,76.18,3,0.159,coupon,2024-09-28\r\n38069,1416,EMEA,toys,retail,90.60,5,0.025,bundle,2024-08-05\r\n38070,1071,AMER,home,online,141.30,8,0.
083,none,2024-12-11\r\n38071,1787,APAC,grocery,online,71.75,2,0.147,none,2024-06-06\r\n38072,1660,AMER,electronics,retail,45.39,6,0.099,none,2024-06-11\r\n38073,1371,AMER,fashion,mobile,105.26,4,0.034,coupon,2024-04-28\r\n38074,1410,AMER,electronics,retail,53.76,8,0.012,loyalty,2024-07-05\r\n38075,2092,AMER,fashion,retail,50.68,3,0.247,none,2024-10-24\r\n38076,1580,AMER,grocery,online,77.94,3,0.096,loyalty,2024-03-21\r\n38077,2109,EMEA,toys,online,128.82,5,0.010,coupon,2024-07-11\r\n38078,2352,APAC,electronics,retail,28.29,8,0.227,none,2024-01-24\r\n38079,1741,AMER,home,online,69.15,8,0.096,coupon,2024-10-06\r\n38080,1107,APAC,toys,online,27.26,2,0.054,none,2024-11-05\r\n38081,1208,AMER,sports,online,41.19,3,0.214,none,2024-04-13\r\n38082,1945,AMER,sports,retail,37.18,7,0.006,bundle,2024-10-25\r\n38083,1342,LATAM,grocery,retail,71.17,6,0.002,none,2024-09-11\r\n38084,2216,AMER,toys,retail,85.57,2,0.079,bundle,2024-06-04\r\n38085,2410,EMEA,grocery,retail,42.06,2,0.087,none,2024-01-05\r\n38086,1119,LATAM,fashion,online,84.24,2,0.245,bundle,2024-05-10\r\n38087,1439,LATAM,grocery,online,62.98,1,0.112,coupon,2024-01-28\r\n38088,1266,AMER,home,retail,159.36,5,0.001,none,2024-02-11\r\n38089,1324,LATAM,home,retail,53.83,3,0.112,bundle,2024-11-19\r\n38090,2181,AMER,home,retail,29.41,5,0.211,none,2024-11-02\r\n38091,1317,EMEA,grocery,online,73.39,6,0.150,none,2024-05-14\r\n38092,2236,APAC,fashion,online,81.09,4,0.024,coupon,2024-03-22\r\n38093,1176,EMEA,electronics,online,40.29,5,0.147,coupon,2024-07-08\r\n38094,2091,LATAM,electronics,online,32.00,3,0.230,coupon,2024-12-12\r\n38095,1531,EMEA,sports,retail,75.44,5,0.144,none,2024-11-03\r\n38096,1969,LATAM,electronics,retail,40.03,4,0.163,coupon,2024-07-14\r\n38097,1484,AMER,grocery,retail,44.98,5,0.084,none,2024-10-03\r\n38098,1551,APAC,electronics,retail,129.64,5,0.224,none,2024-01-22\r\n38099,2243,APAC,grocery,online,34.40,8,0.211,none,2024-06-13\r\n38100,2322,AMER,sports,mobile,41.58,5,0.160,loyalty,2024-11-13\r\n38101,1227,
AMER,sports,online,57.34,3,0.035,bundle,2024-03-21\r\n38102,1352,AMER,electronics,online,129.79,8,0.066,none,2024-06-09\r\n38103,1534,EMEA,fashion,retail,45.55,3,0.051,loyalty,2024-09-06\r\n38104,1908,AMER,fashion,retail,28.53,4,0.031,none,2024-01-11\r\n38105,1950,LATAM,home,online,37.68,6,0.202,none,2024-09-22\r\n38106,1632,LATAM,grocery,retail,78.00,3,0.074,none,2024-01-20\r\n38107,1490,AMER,home,online,38.72,7,0.165,loyalty,2024-12-13\r\n38108,1999,EMEA,fashion,online,63.77,4,0.052,none,2024-10-23\r\n38109,2351,EMEA,electronics,retail,37.08,2,0.027,none,2024-02-20\r\n38110,2167,APAC,toys,mobile,55.51,7,0.049,coupon,2024-03-24\r\n38111,2137,LATAM,grocery,retail,44.10,2,0.134,bundle,2024-04-23\r\n38112,1734,AMER,home,retail,125.43,5,0.006,loyalty,2024-11-14\r\n38113,2434,APAC,grocery,retail,35.30,8,0.230,bundle,2024-10-17\r\n38114,1860,EMEA,electronics,retail,32.66,6,0.108,bundle,2024-05-01\r\n38115,1688,LATAM,toys,online,35.99,7,0.165,coupon,2024-06-24\r\n38116,1730,AMER,fashion,online,50.84,4,0.214,loyalty,2024-04-28\r\n38117,2403,LATAM,fashion,retail,43.98,8,0.074,coupon,2024-01-06\r\n38118,1198,AMER,grocery,online,84.84,6,0.060,none,2024-02-13\r\n38119,1784,EMEA,home,online,79.56,3,0.138,none,2024-04-02\r\n38120,1477,APAC,grocery,online,60.70,4,0.134,bundle,2024-07-01\r\n38121,1867,AMER,grocery,online,61.39,6,0.135,none,2024-07-24\r\n38122,1373,LATAM,grocery,retail,67.87,2,0.107,bundle,2024-03-02\r\n38123,1342,LATAM,home,mobile,74.75,2,0.091,coupon,2024-11-18\r\n38124,2277,EMEA,grocery,partner,86.54,3,0.230,none,2024-10-24\r\n38125,1359,LATAM,electronics,online,50.04,5,0.171,none,2024-05-01\r\n38126,1097,EMEA,fashion,mobile,35.37,7,0.234,bundle,2024-09-26\r\n38127,2046,APAC,fashion,partner,62.73,6,0.224,none,2024-06-05\r\n38128,1983,LATAM,electronics,online,38.50,2,0.220,none,2024-09-28\r\n38129,1413,LATAM,fashion,online,22.38,8,0.176,none,2024-02-13\r\n38130,2144,EMEA,sports,online,89.98,2,0.110,none,2024-09-14\r\n38131,2104,EMEA,electronics,retail,42.93,2,0.0
40,none,2024-02-11\r\n38132,1817,APAC,sports,retail,119.23,2,0.005,coupon,2024-11-24\r\n38133,2181,AMER,grocery,mobile,62.04,1,0.248,loyalty,2024-06-03\r\n38134,1021,AMER,toys,online,43.81,2,0.247,coupon,2024-10-23\r\n38135,1789,EMEA,grocery,retail,42.42,4,0.064,none,2024-06-14\r\n38136,1273,AMER,fashion,mobile,23.27,1,0.132,none,2024-06-20\r\n38137,2476,APAC,toys,online,105.08,2,0.109,none,2024-04-14\r\n38138,1454,APAC,grocery,online,86.80,1,0.026,none,2024-07-28\r\n38139,1001,LATAM,grocery,partner,80.84,2,0.066,none,2024-02-08\r\n38140,1445,APAC,electronics,online,73.93,2,0.224,coupon,2024-01-25\r\n38141,1577,AMER,electronics,mobile,51.58,8,0.233,none,2024-01-02\r\n38142,1460,LATAM,home,retail,77.95,1,0.068,bundle,2024-03-18\r\n38143,1571,EMEA,grocery,partner,101.04,1,0.010,none,2024-01-11\r\n38144,1250,APAC,grocery,mobile,42.61,2,0.031,coupon,2024-11-10\r\n38145,1148,AMER,grocery,retail,33.66,4,0.217,none,2024-01-09\r\n38146,2041,LATAM,electronics,online,54.86,2,0.247,none,2024-11-22\r\n38147,2128,EMEA,home,mobile,121.37,3,0.174,none,2024-06-21\r\n38148,2465,EMEA,fashion,retail,34.58,7,0.045,none,2024-04-16\r\n38149,2035,LATAM,toys,online,136.65,1,0.250,bundle,2024-02-07\r\n38150,1447,LATAM,electronics,online,62.92,7,0.047,none,2024-11-10\r\n38151,2310,EMEA,grocery,mobile,75.00,2,0.067,none,2024-11-10\r\n38152,1268,EMEA,fashion,retail,77.59,3,0.135,loyalty,2024-03-19\r\n38153,1902,AMER,electronics,online,76.85,2,0.183,coupon,2024-11-27\r\n38154,1281,AMER,grocery,mobile,63.81,4,0.023,coupon,2024-04-02\r\n38155,2458,EMEA,sports,retail,24.99,7,0.073,none,2024-12-21\r\n38156,1277,AMER,electronics,retail,44.39,8,0.189,bundle,2024-01-26\r\n38157,1986,LATAM,sports,partner,119.72,2,0.053,none,2024-10-04\r\n38158,2187,EMEA,electronics,mobile,85.39,2,0.166,none,2024-08-15\r\n38159,1844,APAC,toys,retail,22.32,5,0.172,none,2024-11-27\r\n38160,2419,LATAM,fashion,online,56.32,2,0.192,none,2024-06-12\r\n38161,2013,APAC,sports,retail,80.02,7,0.113,coupon,2024-01-06\r\n38162,1713
,EMEA,home,online,111.66,5,0.194,none,2024-05-28\r\n38163,2304,LATAM,home,mobile,115.75,6,0.000,bundle,2024-04-19\r\n38164,1231,AMER,electronics,mobile,60.71,2,0.205,none,2024-08-20\r\n38165,2071,APAC,grocery,mobile,75.59,4,0.096,none,2024-11-28\r\n38166,1659,APAC,grocery,partner,114.93,3,0.244,none,2024-05-24\r\n38167,1944,AMER,toys,online,91.48,5,0.178,none,2024-04-07\r\n38168,1865,LATAM,electronics,mobile,26.69,2,0.014,bundle,2024-01-16\r\n38169,1653,APAC,electronics,online,33.60,3,0.184,none,2024-12-01\r\n38170,1667,AMER,fashion,retail,28.55,4,0.045,loyalty,2024-11-15\r\n38171,1013,LATAM,grocery,online,125.80,3,0.204,bundle,2024-08-02\r\n38172,1303,LATAM,home,online,22.67,2,0.231,none,2024-05-26\r\n38173,2020,AMER,home,online,54.64,1,0.180,coupon,2024-05-23\r\n38174,1839,APAC,fashion,online,25.49,4,0.207,none,2024-11-18\r\n38175,2474,LATAM,electronics,online,43.03,2,0.165,none,2024-01-26\r\n38176,1810,LATAM,grocery,retail,82.72,5,0.162,coupon,2024-01-17\r\n38177,1310,AMER,electronics,retail,86.41,4,0.210,none,2024-05-22\r\n38178,1260,LATAM,home,retail,41.04,8,0.143,bundle,2024-04-03\r\n38179,1057,LATAM,grocery,online,70.40,3,0.038,bundle,2024-08-09\r\n38180,1168,APAC,toys,retail,79.26,6,0.014,bundle,2024-02-19\r\n38181,1399,AMER,electronics,online,46.77,3,0.130,none,2024-03-20\r\n38182,2016,LATAM,electronics,online,91.17,8,0.101,coupon,2024-09-01\r\n38183,2302,APAC,electronics,retail,30.09,4,0.019,bundle,2024-01-04\r\n38184,1473,LATAM,electronics,online,72.27,3,0.086,loyalty,2024-03-06\r\n38185,2464,LATAM,fashion,online,38.18,6,0.007,none,2024-03-05\r\n38186,1106,AMER,fashion,retail,27.79,8,0.229,none,2024-12-01\r\n38187,2450,EMEA,home,retail,99.79,2,0.116,none,2024-10-01\r\n38188,1214,EMEA,grocery,retail,78.36,8,0.050,none,2024-06-27\r\n38189,1254,APAC,sports,retail,126.12,1,0.070,coupon,2024-07-02\r\n38190,1450,EMEA,electronics,retail,60.57,5,0.137,none,2024-09-15\r\n38191,1213,EMEA,toys,online,69.66,2,0.067,none,2024-06-26\r\n38192,1885,EMEA,toys,online,39.78
,2,0.007,none,2024-10-22\r\n38193,1605,APAC,grocery,retail,132.24,4,0.191,none,2024-07-10\r\n38194,1139,EMEA,home,mobile,65.21,8,0.133,coupon,2024-02-04\r\n38195,1204,AMER,electronics,mobile,107.88,3,0.064,loyalty,2024-11-09\r\n38196,2258,AMER,electronics,online,35.80,6,0.098,none,2024-07-04\r\n38197,1171,APAC,fashion,mobile,26.62,3,0.138,coupon,2024-01-26\r\n38198,1412,AMER,electronics,online,57.04,2,0.199,bundle,2024-09-19\r\n38199,1227,AMER,sports,retail,29.23,8,0.037,none,2024-07-11\r\n38200,1397,LATAM,home,online,143.74,5,0.171,coupon,2024-05-20\r\n38201,1112,APAC,fashion,mobile,31.75,1,0.086,none,2024-11-23\r\n38202,1131,APAC,home,mobile,44.59,2,0.111,loyalty,2024-04-13\r\n38203,2103,LATAM,fashion,retail,42.54,4,0.044,none,2024-02-02\r\n38204,2371,LATAM,sports,online,43.80,4,0.044,coupon,2024-10-03\r\n38205,1205,APAC,grocery,retail,75.94,3,0.224,coupon,2024-12-13\r\n38206,1925,LATAM,home,retail,51.75,6,0.247,none,2024-02-01\r\n38207,2329,LATAM,sports,online,25.85,1,0.185,loyalty,2024-12-02\r\n38208,1766,AMER,sports,online,117.83,7,0.023,coupon,2024-10-07\r\n38209,2131,APAC,fashion,partner,26.29,1,0.247,none,2024-05-10\r\n38210,2063,APAC,fashion,online,58.27,8,0.217,none,2024-12-15\r\n38211,1995,LATAM,fashion,partner,46.34,1,0.051,none,2024-07-13\r\n38212,1978,AMER,grocery,retail,48.86,1,0.050,none,2024-12-06\r\n38213,2472,AMER,home,online,246.62,3,0.101,none,2024-06-07\r\n38214,2207,APAC,sports,mobile,54.95,6,0.192,none,2024-03-27\r\n38215,1631,APAC,toys,online,83.07,7,0.099,none,2024-09-15\r\n38216,2494,AMER,sports,online,38.93,5,0.241,coupon,2024-01-16\r\n38217,1091,EMEA,electronics,online,64.97,2,0.072,none,2024-03-14\r\n38218,1219,LATAM,grocery,online,54.05,7,0.046,bundle,2024-12-19\r\n38219,2082,APAC,fashion,online,86.51,8,0.171,coupon,2024-11-13\r\n38220,1761,EMEA,grocery,retail,49.20,5,0.102,none,2024-10-01\r\n38221,1317,EMEA,fashion,retail,121.73,7,0.199,none,2024-12-12\r\n38222,1493,APAC,toys,online,54.54,6,0.244,none,2024-08-13\r\n38223,2403,LATAM,ho
me,retail,51.93,1,0.100,bundle,2024-09-14\r\n38224,2195,APAC,sports,retail,61.35,2,0.147,none,2024-06-15\r\n38225,1518,AMER,home,retail,55.99,1,0.097,none,2024-01-05\r\n38226,2122,AMER,electronics,online,92.20,8,0.035,coupon,2024-12-11\r\n38227,1646,APAC,grocery,retail,238.14,6,0.059,coupon,2024-11-03\r\n38228,1543,AMER,sports,retail,24.72,6,0.002,none,2024-02-11\r\n38229,2411,EMEA,home,retail,84.88,6,0.097,none,2024-02-28\r\n38230,1715,AMER,fashion,retail,104.89,8,0.137,none,2024-11-15\r\n38231,2302,APAC,electronics,online,22.50,6,0.157,none,2024-10-12\r\n38232,1805,EMEA,electronics,retail,35.36,4,0.171,coupon,2024-03-08\r\n38233,1300,EMEA,sports,online,189.71,5,0.002,none,2024-05-25\r\n38234,1364,EMEA,fashion,retail,24.43,3,0.129,none,2024-10-24\r\n38235,2044,APAC,sports,online,53.39,1,0.013,bundle,2024-01-28\r\n38236,2157,AMER,fashion,retail,64.23,7,0.176,none,2024-08-26\r\n38237,1728,AMER,toys,retail,95.91,4,0.050,none,2024-02-20\r\n38238,1284,APAC,sports,mobile,57.30,2,0.094,none,2024-06-04\r\n38239,1912,APAC,toys,online,45.77,6,0.053,coupon,2024-01-22\r\n38240,2473,EMEA,fashion,retail,37.10,4,0.135,coupon,2024-07-13\r\n38241,2063,APAC,electronics,retail,52.82,8,0.080,none,2024-06-10\r\n38242,2066,APAC,sports,online,88.12,8,0.134,none,2024-10-11\r\n38243,2341,EMEA,electronics,online,48.45,8,0.004,none,2024-01-07\r\n38244,1853,APAC,grocery,online,84.82,8,0.133,coupon,2024-03-20\r\n38245,2046,APAC,grocery,online,25.12,6,0.112,coupon,2024-12-21\r\n38246,1817,APAC,electronics,retail,69.13,7,0.174,none,2024-09-08\r\n38247,1542,APAC,sports,online,71.91,4,0.127,none,2024-01-02\r\n38248,1760,LATAM,home,online,85.36,8,0.112,coupon,2024-01-05\r\n38249,1727,APAC,home,online,48.20,2,0.148,none,2024-02-27\r\n38250,1657,LATAM,home,mobile,47.34,3,0.130,none,2024-10-19\r\n38251,1883,LATAM,grocery,mobile,68.32,8,0.181,none,2024-11-17\r\n38252,2311,LATAM,grocery,retail,81.70,5,0.032,none,2024-06-28\r\n38253,2144,EMEA,fashion,online,42.79,2,0.136,none,2024-04-07\r\n38254,2203,APA
C,electronics,retail,65.54,6,0.168,bundle,2024-06-18\r\n38255,1404,EMEA,toys,retail,29.09,7,0.211,bundle,2024-04-08\r\n38256,1739,AMER,grocery,online,37.78,6,0.183,coupon,2024-03-22\r\n38257,1263,AMER,grocery,retail,31.58,8,0.045,bundle,2024-03-16\r\n38258,1725,APAC,electronics,retail,28.42,6,0.143,none,2024-07-19\r\n38259,2273,APAC,sports,retail,76.02,3,0.097,bundle,2024-07-14\r\n38260,2072,AMER,electronics,retail,55.47,5,0.184,none,2024-09-06\r\n38261,1394,LATAM,home,retail,55.97,2,0.071,coupon,2024-01-23\r\n38262,1207,APAC,grocery,mobile,62.68,1,0.060,coupon,2024-03-11\r\n38263,1366,APAC,grocery,retail,82.40,4,0.223,none,2024-12-28\r\n38264,1323,EMEA,home,online,29.73,1,0.077,bundle,2024-05-17\r\n38265,2047,AMER,home,online,60.32,2,0.208,none,2024-05-28\r\n38266,2398,EMEA,home,online,29.98,2,0.155,none,2024-03-09\r\n38267,1910,LATAM,home,retail,45.52,6,0.055,loyalty,2024-12-24\r\n38268,2090,AMER,electronics,online,11.36,1,0.041,none,2024-05-02\r\n38269,1166,AMER,home,retail,120.25,7,0.019,bundle,2024-01-17\r\n38270,1232,LATAM,fashion,online,78.02,4,0.011,bundle,2024-04-15\r\n38271,1342,LATAM,grocery,online,40.15,7,0.246,coupon,2024-12-23\r\n38272,1570,AMER,grocery,retail,68.35,7,0.143,bundle,2024-11-14\r\n38273,1621,APAC,home,online,118.42,6,0.215,coupon,2024-11-04\r\n38274,2404,EMEA,electronics,online,44.66,5,0.090,coupon,2024-07-02\r\n38275,1131,APAC,fashion,retail,95.26,7,0.140,none,2024-08-08\r\n38276,2129,APAC,home,online,83.36,5,0.029,coupon,2024-07-13\r\n38277,1164,EMEA,fashion,online,41.87,6,0.250,none,2024-09-22\r\n38278,2148,EMEA,electronics,online,97.87,7,0.042,none,2024-05-03\r\n38279,1982,EMEA,fashion,mobile,27.14,6,0.010,none,2024-11-16\r\n38280,1096,EMEA,toys,online,31.11,5,0.013,none,2024-10-18\r\n38281,2456,APAC,home,retail,35.98,8,0.223,none,2024-03-02\r\n38282,2212,EMEA,sports,retail,53.06,8,0.033,none,2024-02-09\r\n38283,2042,LATAM,sports,mobile,60.48,8,0.153,bundle,2024-01-03\r\n38284,2034,LATAM,grocery,online,35.52,4,0.185,none,2024-10-19\r\
n38285,2453,AMER,sports,retail,53.35,4,0.140,coupon,2024-12-03\r\n38286,1245,APAC,fashion,retail,118.06,3,0.089,none,2024-12-27\r\n38287,1298,LATAM,grocery,online,66.04,5,0.128,bundle,2024-01-15\r\n38288,2019,AMER,sports,online,79.21,8,0.054,bundle,2024-06-20\r\n38289,1143,LATAM,grocery,online,69.51,7,0.076,coupon,2024-03-22\r\n38290,2351,EMEA,home,retail,66.08,3,0.206,none,2024-09-12\r\n38291,2278,APAC,fashion,retail,40.08,6,0.196,none,2024-04-01\r\n38292,2344,LATAM,grocery,online,79.32,3,0.024,loyalty,2024-03-22\r\n38293,1623,AMER,toys,mobile,66.91,2,0.046,coupon,2024-05-23\r\n38294,1403,APAC,grocery,retail,95.23,4,0.192,coupon,2024-10-16\r\n38295,1173,LATAM,fashion,online,45.47,4,0.085,none,2024-06-20\r\n38296,1003,APAC,fashion,retail,58.77,5,0.209,bundle,2024-01-13\r\n38297,1092,AMER,electronics,retail,46.94,5,0.038,none,2024-12-08\r\n38298,1964,EMEA,home,online,16.05,2,0.065,none,2024-04-19\r\n38299,1992,LATAM,home,online,80.80,4,0.003,bundle,2024-02-03\r\n38300,1451,EMEA,fashion,mobile,44.00,8,0.061,loyalty,2024-12-06\r\n38301,1328,APAC,fashion,retail,35.74,2,0.014,none,2024-04-23\r\n38302,1669,AMER,fashion,retail,87.47,3,0.195,none,2024-05-16\r\n38303,1828,EMEA,fashion,retail,71.89,5,0.103,none,2024-12-22\r\n38304,2259,AMER,fashion,retail,62.31,8,0.156,none,2024-04-23\r\n38305,1738,LATAM,grocery,online,73.36,3,0.043,none,2024-12-15\r\n38306,1206,EMEA,sports,retail,64.22,6,0.101,none,2024-06-01\r\n38307,1280,LATAM,electronics,online,57.46,6,0.056,none,2024-06-10\r\n38308,1306,LATAM,home,retail,66.11,4,0.215,bundle,2024-03-11\r\n38309,1354,AMER,sports,online,38.12,2,0.082,none,2024-01-26\r\n38310,1348,AMER,grocery,online,40.36,2,0.126,none,2024-10-03\r\n38311,1691,LATAM,grocery,mobile,50.85,4,0.103,none,2024-02-17\r\n38312,1283,APAC,sports,online,136.96,1,0.148,loyalty,2024-02-09\r\n38313,1053,AMER,home,retail,56.13,2,0.166,bundle,2024-03-25\r\n38314,1823,EMEA,toys,online,57.92,2,0.032,coupon,2024-04-08\r\n38315,2380,AMER,fashion,online,64.02,4,0.178,coupon,202
4-12-17\r\n38316,1191,EMEA,sports,retail,37.14,7,0.211,none,2024-07-08\r\n38317,1817,APAC,grocery,retail,47.58,7,0.063,none,2024-11-02\r\n38318,1164,EMEA,grocery,partner,75.68,4,0.131,bundle,2024-09-11\r\n38319,1573,AMER,fashion,online,66.52,2,0.135,bundle,2024-08-11\r\n38320,1241,APAC,electronics,retail,45.79,6,0.051,none,2024-09-04\r\n38321,1947,EMEA,fashion,retail,102.06,1,0.185,loyalty,2024-11-20\r\n38322,2387,EMEA,sports,online,22.13,7,0.119,none,2024-03-12\r\n38323,1984,LATAM,electronics,online,21.86,1,0.240,none,2024-02-07\r\n38324,2333,APAC,fashion,mobile,137.66,1,0.104,none,2024-12-13\r\n38325,2159,AMER,home,online,38.10,8,0.241,bundle,2024-09-08\r\n38326,1072,LATAM,home,retail,42.23,6,0.167,none,2024-03-26\r\n38327,1557,LATAM,home,online,29.12,2,0.181,bundle,2024-08-12\r\n38328,2120,AMER,fashion,retail,22.61,5,0.096,none,2024-06-05\r\n38329,2244,LATAM,home,partner,55.02,8,0.216,coupon,2024-02-21\r\n38330,1729,AMER,fashion,online,33.85,6,0.033,bundle,2024-11-24\r\n38331,1951,LATAM,sports,retail,66.44,7,0.204,none,2024-07-18\r\n38332,1135,APAC,home,retail,31.97,5,0.105,coupon,2024-09-02\r\n38333,2392,EMEA,electronics,partner,119.71,3,0.048,none,2024-02-24\r\n38334,1932,EMEA,grocery,online,25.33,2,0.072,coupon,2024-04-13\r\n38335,1760,LATAM,home,online,66.93,6,0.182,none,2024-11-14\r\n38336,1396,EMEA,grocery,mobile,37.74,3,0.072,none,2024-04-10\r\n38337,1937,APAC,grocery,retail,36.68,8,0.205,none,2024-01-04\r\n38338,2489,LATAM,grocery,online,81.73,2,0.210,loyalty,2024-01-27\r\n38339,1853,APAC,home,online,71.81,4,0.008,none,2024-04-16\r\n38340,1695,LATAM,toys,retail,62.86,3,0.086,coupon,2024-10-10\r\n38341,1924,AMER,electronics,online,23.66,4,0.015,bundle,2024-02-27\r\n38342,1970,LATAM,grocery,mobile,51.09,1,0.174,coupon,2024-05-21\r\n38343,2028,APAC,grocery,retail,21.94,3,0.049,bundle,2024-07-25\r\n38344,2373,LATAM,grocery,retail,63.64,3,0.012,none,2024-09-10\r\n38345,2473,EMEA,grocery,online,48.75,7,0.058,none,2024-08-10\r\n38346,2020,AMER,grocery,retail,95.
31,5,0.147,none,2024-08-02\r\n38347,2077,APAC,electronics,mobile,133.09,5,0.167,none,2024-09-20\r\n38348,2273,APAC,home,partner,56.47,1,0.154,none,2024-01-17\r\n38349,1053,AMER,sports,online,98.85,3,0.132,none,2024-03-22\r\n38350,1998,APAC,toys,online,66.70,2,0.127,loyalty,2024-04-23\r\n38351,2228,EMEA,grocery,online,84.16,8,0.080,coupon,2024-05-17\r\n38352,1522,LATAM,grocery,online,86.09,5,0.157,none,2024-03-23\r\n38353,1927,EMEA,grocery,retail,26.55,3,0.043,none,2024-06-26\r\n38354,1483,EMEA,grocery,retail,100.71,7,0.003,none,2024-11-08\r\n38355,1576,EMEA,grocery,mobile,58.39,4,0.204,none,2024-02-20\r\n38356,2035,LATAM,grocery,mobile,55.80,7,0.005,none,2024-09-12\r\n38357,2468,EMEA,grocery,retail,68.88,1,0.012,bundle,2024-10-14\r\n38358,2217,LATAM,home,retail,48.61,2,0.205,none,2024-02-24\r\n38359,1114,APAC,grocery,retail,35.31,1,0.006,none,2024-02-16\r\n38360,1870,EMEA,grocery,online,26.05,2,0.030,coupon,2024-01-02\r\n38361,2077,APAC,fashion,mobile,26.04,7,0.016,none,2024-01-12\r\n38362,1108,EMEA,fashion,online,37.49,1,0.038,none,2024-02-08\r\n38363,1234,AMER,fashion,retail,118.29,1,0.042,none,2024-11-08\r\n38364,2495,EMEA,home,online,105.87,2,0.076,loyalty,2024-02-02\r\n38365,1600,AMER,sports,online,46.50,2,0.012,none,2024-09-10\r\n38366,1767,AMER,fashion,retail,45.83,8,0.018,none,2024-05-27\r\n38367,1546,EMEA,grocery,mobile,40.68,2,0.221,none,2024-08-24\r\n38368,2177,AMER,home,retail,48.27,5,0.249,none,2024-09-10\r\n38369,1591,APAC,sports,online,100.93,7,0.060,none,2024-08-03\r\n38370,1459,LATAM,electronics,partner,84.25,4,0.125,none,2024-03-10\r\n38371,2027,EMEA,grocery,online,113.40,3,0.039,none,2024-07-27\r\n38372,2016,LATAM,sports,retail,67.34,6,0.155,none,2024-09-27\r\n38373,1131,APAC,grocery,mobile,71.80,2,0.171,none,2024-12-12\r\n38374,2165,AMER,fashion,online,70.22,5,0.236,none,2024-05-19\r\n38375,1630,APAC,grocery,online,65.14,8,0.092,none,2024-08-02\r\n38376,1412,AMER,toys,retail,37.76,1,0.197,coupon,2024-05-09\r\n38377,1366,APAC,sports,retail,101.42,
4,0.126,coupon,2024-03-03\r\n38378,1590,APAC,home,online,53.70,5,0.117,loyalty,2024-11-10\r\n38379,1522,LATAM,grocery,online,72.45,4,0.100,none,2024-04-08\r\n38380,1664,LATAM,toys,mobile,35.01,1,0.132,none,2024-10-15\r\n38381,1544,LATAM,home,online,61.03,6,0.194,loyalty,2024-11-06\r\n38382,2349,APAC,grocery,online,50.83,3,0.152,none,2024-02-27\r\n38383,1090,AMER,sports,retail,61.14,5,0.223,coupon,2024-10-26\r\n38384,1889,APAC,home,online,97.54,7,0.017,bundle,2024-12-23\r\n38385,2478,AMER,home,online,117.94,6,0.063,none,2024-07-01\r\n38386,1341,EMEA,home,online,80.48,2,0.102,loyalty,2024-07-19\r\n38387,1492,APAC,fashion,online,138.50,6,0.217,none,2024-03-12\r\n38388,1310,AMER,toys,online,75.52,1,0.227,coupon,2024-09-27\r\n38389,2391,EMEA,sports,mobile,54.07,1,0.066,coupon,2024-04-11\r\n38390,1323,EMEA,home,online,60.60,6,0.178,none,2024-09-18\r\n38391,1181,LATAM,grocery,mobile,71.24,1,0.127,none,2024-08-06\r\n38392,1074,LATAM,electronics,online,42.80,3,0.107,coupon,2024-10-20\r\n38393,1969,LATAM,toys,retail,40.77,4,0.249,none,2024-03-04\r\n38394,2023,LATAM,home,retail,27.19,6,0.094,coupon,2024-04-19\r\n38395,1085,EMEA,grocery,mobile,67.39,8,0.045,coupon,2024-06-26\r\n38396,1837,LATAM,grocery,retail,38.31,5,0.163,bundle,2024-08-21\r\n38397,1187,AMER,grocery,online,86.21,2,0.087,none,2024-06-09\r\n38398,2098,AMER,grocery,online,51.03,7,0.212,none,2024-10-06\r\n38399,1950,LATAM,home,retail,121.57,1,0.229,none,2024-11-06\r\n38400,1783,AMER,grocery,retail,63.87,4,0.056,coupon,2024-08-22\r\n38401,1590,APAC,grocery,retail,81.41,1,0.035,none,2024-07-15\r\n38402,1588,LATAM,grocery,online,34.30,3,0.179,none,2024-11-12\r\n38403,1372,APAC,home,partner,49.48,8,0.102,coupon,2024-01-19\r\n38404,1872,LATAM,fashion,retail,66.61,4,0.155,none,2024-12-02\r\n38405,2169,EMEA,grocery,online,51.04,1,0.181,bundle,2024-08-16\r\n38406,1138,AMER,home,online,58.70,3,0.192,coupon,2024-10-01\r\n38407,1159,LATAM,toys,online,127.40,4,0.182,bundle,2024-12-15\r\n38408,1370,APAC,home,online,162.13,7,0.
132,none,2024-01-11\r\n38409,1646,APAC,grocery,retail,67.49,2,0.163,none,2024-12-05\r\n38410,1642,EMEA,toys,online,57.71,6,0.192,coupon,2024-07-13\r\n38411,2326,LATAM,grocery,retail,44.30,5,0.111,none,2024-05-10\r\n38412,2449,LATAM,fashion,retail,75.80,1,0.171,coupon,2024-12-17\r\n38413,1543,AMER,home,retail,49.25,2,0.030,none,2024-07-14\r\n38414,1415,AMER,sports,online,40.92,8,0.242,loyalty,2024-04-17\r\n38415,2046,APAC,grocery,online,47.93,6,0.218,coupon,2024-07-23\r\n38416,2430,APAC,grocery,retail,81.73,2,0.021,none,2024-05-19\r\n38417,1477,APAC,home,retail,43.61,8,0.147,coupon,2024-10-19\r\n38418,2354,LATAM,toys,online,60.05,1,0.091,coupon,2024-10-05\r\n38419,2325,LATAM,electronics,retail,33.85,3,0.023,none,2024-12-06\r\n38420,1794,AMER,fashion,mobile,60.85,8,0.202,none,2024-04-24\r\n38421,1737,AMER,electronics,mobile,83.29,2,0.002,none,2024-04-05\r\n38422,1574,AMER,fashion,online,82.62,1,0.110,none,2024-05-09\r\n38423,1227,AMER,home,mobile,68.28,7,0.033,coupon,2024-08-06\r\n38424,1942,APAC,electronics,retail,145.64,6,0.077,coupon,2024-06-08\r\n38425,2483,LATAM,electronics,retail,152.71,7,0.039,none,2024-04-22\r\n38426,2415,AMER,grocery,online,41.19,1,0.199,none,2024-01-10\r\n38427,1046,EMEA,electronics,mobile,51.23,2,0.091,none,2024-07-02\r\n38428,1727,APAC,grocery,partner,69.52,7,0.016,none,2024-12-14\r\n38429,2309,AMER,fashion,retail,63.30,6,0.250,bundle,2024-06-14\r\n38430,1591,APAC,electronics,online,49.83,4,0.106,none,2024-04-18\r\n38431,1353,EMEA,grocery,mobile,56.94,4,0.140,none,2024-07-15\r\n38432,1750,LATAM,home,mobile,100.52,8,0.013,loyalty,2024-09-08\r\n38433,1975,EMEA,grocery,partner,55.24,8,0.192,none,2024-06-15\r\n38434,2424,LATAM,home,retail,60.86,2,0.144,none,2024-03-02\r\n38435,1999,EMEA,sports,retail,154.85,8,0.077,none,2024-05-02\r\n38436,1486,LATAM,electronics,online,35.38,7,0.205,bundle,2024-01-26\r\n38437,2094,AMER,electronics,retail,72.99,7,0.078,none,2024-05-02\r\n38438,2436,LATAM,sports,retail,96.19,7,0.131,none,2024-11-23\r\n38439,1828
,EMEA,home,retail,44.96,4,0.033,loyalty,2024-03-19\r\n38440,1187,AMER,home,retail,82.53,2,0.183,loyalty,2024-03-21\r\n38441,1950,LATAM,grocery,retail,53.83,6,0.203,loyalty,2024-04-19\r\n38442,2324,AMER,fashion,retail,30.44,7,0.060,bundle,2024-07-08\r\n38443,1431,APAC,fashion,online,105.76,6,0.116,none,2024-09-14\r\n38444,1761,EMEA,electronics,retail,68.28,8,0.128,none,2024-03-10\r\n38445,1746,LATAM,fashion,online,57.28,3,0.085,none,2024-03-10\r\n38446,1251,EMEA,grocery,retail,55.68,8,0.243,loyalty,2024-10-09\r\n38447,1035,EMEA,fashion,partner,46.23,2,0.242,none,2024-04-08\r\n38448,1339,EMEA,toys,online,32.76,3,0.086,none,2024-11-09\r\n38449,2146,APAC,fashion,online,73.81,6,0.198,none,2024-05-25\r\n38450,1573,AMER,home,mobile,54.39,4,0.061,coupon,2024-05-10\r\n38451,1337,APAC,grocery,online,30.80,1,0.187,none,2024-09-22\r\n38452,2460,AMER,fashion,retail,63.36,1,0.078,none,2024-09-05\r\n38453,2060,LATAM,toys,retail,56.04,7,0.060,none,2024-08-21\r\n38454,1931,APAC,home,online,239.78,3,0.207,bundle,2024-10-23\r\n38455,1998,APAC,grocery,retail,77.98,1,0.155,none,2024-02-19\r\n38456,1961,EMEA,home,retail,27.73,2,0.143,none,2024-07-09\r\n38457,2340,EMEA,toys,mobile,60.91,8,0.056,none,2024-06-27\r\n38458,1120,LATAM,fashion,retail,137.54,7,0.096,coupon,2024-07-06\r\n38459,1400,EMEA,electronics,online,50.04,1,0.086,loyalty,2024-02-17\r\n38460,1162,AMER,electronics,partner,64.44,6,0.172,loyalty,2024-01-11\r\n38461,1383,AMER,electronics,mobile,17.99,7,0.242,bundle,2024-10-04\r\n38462,2137,LATAM,grocery,retail,55.07,7,0.199,loyalty,2024-02-05\r\n38463,1379,EMEA,sports,online,123.53,2,0.052,none,2024-12-05\r\n38464,1033,APAC,sports,partner,93.16,4,0.089,coupon,2024-03-01\r\n38465,1639,APAC,home,partner,48.27,5,0.012,none,2024-05-19\r\n38466,1272,AMER,toys,retail,57.47,3,0.014,none,2024-10-04\r\n38467,2289,APAC,electronics,partner,93.73,2,0.191,loyalty,2024-04-25\r\n38468,2073,AMER,fashion,online,36.33,3,0.202,bundle,2024-04-27\r\n38469,1771,AMER,sports,online,81.61,1,0.139,none,2
024-11-26\r\n38470,1380,AMER,grocery,online,30.28,7,0.045,none,2024-08-26\r\n38471,1151,APAC,fashion,online,40.90,4,0.181,none,2024-12-09\r\n38472,1857,LATAM,fashion,online,38.24,4,0.052,coupon,2024-10-01\r\n38473,1775,EMEA,fashion,online,67.17,7,0.241,loyalty,2024-07-11\r\n38474,1935,EMEA,home,retail,34.88,8,0.041,none,2024-07-01\r\n38475,1777,AMER,toys,online,46.17,1,0.099,none,2024-11-25\r\n38476,1603,EMEA,toys,online,61.89,3,0.117,coupon,2024-12-19\r\n38477,1313,EMEA,electronics,retail,80.85,5,0.173,none,2024-10-16\r\n38478,2445,APAC,sports,mobile,27.79,5,0.142,coupon,2024-02-10\r\n38479,2450,EMEA,electronics,online,51.70,5,0.203,loyalty,2024-10-01\r\n38480,1493,APAC,electronics,retail,77.25,7,0.046,coupon,2024-02-15\r\n38481,1954,APAC,grocery,mobile,19.13,5,0.076,none,2024-10-21\r\n38482,1007,APAC,electronics,retail,85.27,6,0.131,none,2024-02-06\r\n38483,2126,APAC,home,online,41.38,4,0.027,bundle,2024-12-06\r\n38484,1770,AMER,electronics,retail,32.79,4,0.211,loyalty,2024-06-10\r\n38485,1344,EMEA,electronics,retail,44.36,7,0.063,none,2024-10-27\r\n38486,2392,EMEA,electronics,partner,96.19,5,0.169,none,2024-07-25\r\n38487,2401,LATAM,electronics,online,65.38,7,0.088,none,2024-08-02\r\n38488,1973,EMEA,fashion,retail,63.17,1,0.107,none,2024-02-12\r\n38489,1974,EMEA,home,mobile,81.39,5,0.126,loyalty,2024-04-10\r\n38490,1576,EMEA,electronics,retail,70.19,5,0.048,none,2024-02-01\r\n38491,1839,APAC,grocery,online,94.34,1,0.242,coupon,2024-12-10\r\n38492,2486,APAC,electronics,mobile,75.73,1,0.166,none,2024-10-26\r\n38493,1995,LATAM,home,retail,61.33,1,0.069,none,2024-03-10\r\n38494,2175,AMER,electronics,online,61.74,6,0.107,none,2024-06-26\r\n38495,1538,AMER,grocery,online,83.96,3,0.199,coupon,2024-12-01\r\n38496,2484,APAC,home,partner,15.59,7,0.052,coupon,2024-06-09\r\n38497,1183,AMER,home,online,101.10,8,0.238,none,2024-06-28\r\n38498,1056,LATAM,grocery,mobile,40.15,4,0.059,none,2024-02-19\r\n38499,2189,LATAM,grocery,mobile,63.31,7,0.060,none,2024-06-17\r\n38500,1868,A
MER,home,mobile,88.34,4,0.063,coupon,2024-12-07\r\n38501,2323,AMER,electronics,retail,72.75,3,0.157,none,2024-07-12\r\n38502,2056,LATAM,fashion,retail,145.96,1,0.032,none,2024-09-01\r\n38503,2090,AMER,home,online,38.87,4,0.165,none,2024-08-24\r\n38504,1926,AMER,electronics,retail,57.55,3,0.124,coupon,2024-09-23\r\n38505,2342,AMER,electronics,retail,69.01,6,0.170,none,2024-10-13\r\n38506,1619,APAC,home,online,86.83,5,0.223,none,2024-02-08\r\n38507,1356,LATAM,sports,retail,207.74,3,0.229,none,2024-11-21\r\n38508,1165,AMER,grocery,online,61.47,7,0.241,none,2024-06-20\r\n38509,2150,APAC,sports,mobile,67.23,5,0.238,bundle,2024-10-06\r\n38510,1342,LATAM,toys,online,70.35,3,0.127,coupon,2024-12-15\r\n38511,2226,EMEA,grocery,mobile,44.86,7,0.190,none,2024-03-02\r\n38512,1571,EMEA,fashion,retail,107.72,2,0.077,none,2024-01-15\r\n38513,1734,AMER,fashion,online,29.93,2,0.054,coupon,2024-01-28\r\n38514,2160,LATAM,fashion,mobile,40.04,6,0.016,none,2024-09-08\r\n38515,1432,APAC,electronics,mobile,60.40,1,0.043,none,2024-03-22\r\n38516,1316,APAC,sports,retail,40.42,8,0.082,none,2024-09-27\r\n38517,2188,EMEA,electronics,retail,78.95,3,0.172,none,2024-09-07\r\n38518,1858,LATAM,fashion,retail,34.53,5,0.046,none,2024-09-03\r\n38519,2480,APAC,fashion,retail,45.03,1,0.057,none,2024-06-04\r\n38520,1065,AMER,grocery,partner,82.17,4,0.051,none,2024-07-15\r\n38521,1980,LATAM,grocery,retail,98.63,2,0.227,bundle,2024-01-06\r\n38522,1693,EMEA,grocery,online,20.59,5,0.163,coupon,2024-07-28\r\n38523,2155,APAC,fashion,retail,133.39,2,0.241,none,2024-01-06\r\n38524,1763,LATAM,grocery,online,65.54,2,0.056,none,2024-08-17\r\n38525,1135,APAC,sports,online,46.23,7,0.229,none,2024-11-26\r\n38526,1730,AMER,fashion,online,131.93,7,0.022,coupon,2024-04-25\r\n38527,2210,APAC,sports,online,30.23,3,0.070,none,2024-02-10\r\n38528,1992,LATAM,toys,online,112.95,2,0.215,none,2024-07-07\r\n38529,1247,AMER,grocery,mobile,68.76,6,0.004,none,2024-03-11\r\n38530,1636,APAC,home,retail,32.88,8,0.107,none,2024-10-16\r\n
38531,1905,APAC,fashion,retail,47.69,6,0.028,none,2024-07-04\r\n38532,2287,EMEA,home,retail,62.62,2,0.038,bundle,2024-05-04\r\n38533,2048,LATAM,electronics,online,38.56,7,0.163,loyalty,2024-02-27\r\n38534,2436,LATAM,grocery,mobile,59.18,8,0.233,coupon,2024-10-13\r\n38535,1900,APAC,home,online,99.50,8,0.179,loyalty,2024-12-27\r\n38536,1668,AMER,sports,retail,61.87,4,0.184,bundle,2024-01-18\r\n38537,2495,EMEA,grocery,online,35.45,8,0.084,none,2024-02-03\r\n38538,1422,LATAM,home,online,56.34,2,0.083,none,2024-06-28\r\n38539,2266,LATAM,toys,online,40.41,8,0.033,bundle,2024-12-24\r\n38540,1389,LATAM,fashion,mobile,132.29,1,0.069,loyalty,2024-10-06\r\n38541,1387,AMER,grocery,online,82.36,6,0.165,bundle,2024-04-05\r\n38542,1959,EMEA,home,mobile,42.96,3,0.240,none,2024-01-27\r\n38543,1586,LATAM,toys,retail,97.32,5,0.026,none,2024-03-09\r\n38544,2226,EMEA,fashion,online,76.98,8,0.026,coupon,2024-04-02\r\n38545,2197,LATAM,home,online,38.46,3,0.035,none,2024-11-07\r\n38546,2116,LATAM,sports,online,61.70,3,0.185,none,2024-07-13\r\n38547,1554,AMER,home,online,70.58,5,0.198,none,2024-01-13\r\n38548,2467,AMER,fashion,mobile,85.98,7,0.128,coupon,2024-02-02\r\n38549,1739,AMER,toys,online,67.99,7,0.065,none,2024-10-24\r\n38550,1560,AMER,electronics,retail,54.28,2,0.201,none,2024-01-24\r\n38551,1679,APAC,grocery,retail,26.33,8,0.020,coupon,2024-11-22\r\n38552,1489,AMER,home,retail,41.95,2,0.225,none,2024-09-08\r\n38553,1779,APAC,electronics,online,101.94,1,0.221,loyalty,2024-03-18\r\n38554,1335,APAC,sports,online,43.24,1,0.217,coupon,2024-11-06\r\n38555,1902,AMER,electronics,partner,41.57,6,0.031,loyalty,2024-09-11\r\n38556,2052,LATAM,grocery,partner,26.71,4,0.179,loyalty,2024-09-05\r\n38557,1033,APAC,grocery,online,42.17,3,0.088,none,2024-10-08\r\n38558,2310,EMEA,sports,mobile,86.01,4,0.197,none,2024-09-22\r\n38559,2150,APAC,grocery,online,35.45,3,0.036,coupon,2024-12-27\r\n38560,1592,LATAM,grocery,retail,73.00,8,0.107,none,2024-03-06\r\n38561,1041,APAC,grocery,mobile,28.87,7,0.065,n
one,2024-04-11\r\n38562,2255,AMER,fashion,online,63.47,6,0.067,none,2024-04-12\r\n38563,2057,APAC,home,retail,203.96,4,0.043,loyalty,2024-04-01\r\n38564,1174,APAC,electronics,online,21.89,8,0.019,none,2024-06-24\r\n38565,2204,AMER,home,mobile,55.79,4,0.137,none,2024-02-19\r\n38566,1831,APAC,grocery,partner,71.90,8,0.098,none,2024-12-22\r\n38567,2297,EMEA,fashion,partner,54.42,1,0.213,none,2024-02-24\r\n38568,1827,EMEA,grocery,online,23.03,1,0.172,coupon,2024-07-21\r\n38569,1001,LATAM,sports,mobile,66.67,2,0.142,none,2024-01-09\r\n38570,1351,APAC,grocery,online,26.23,8,0.154,none,2024-05-06\r\n38571,1265,APAC,sports,online,57.68,5,0.087,coupon,2024-11-22\r\n38572,1339,EMEA,electronics,online,22.62,8,0.141,none,2024-04-18\r\n38573,1559,EMEA,home,online,50.56,6,0.180,none,2024-08-03\r\n38574,1332,APAC,sports,online,101.02,7,0.119,none,2024-02-01\r\n38575,2053,AMER,fashion,mobile,57.70,8,0.181,coupon,2024-08-02\r\n38576,1473,LATAM,grocery,retail,107.30,7,0.002,none,2024-10-04\r\n38577,2414,EMEA,electronics,online,48.50,4,0.224,bundle,2024-05-14\r\n38578,2293,LATAM,electronics,online,36.37,3,0.068,none,2024-06-24\r\n38579,2320,LATAM,fashion,retail,96.66,5,0.160,loyalty,2024-12-21\r\n38580,1035,EMEA,toys,retail,65.96,5,0.039,bundle,2024-05-20\r\n38581,1452,LATAM,fashion,retail,42.04,3,0.096,none,2024-06-27\r\n38582,2204,AMER,fashion,retail,112.64,7,0.027,none,2024-01-22\r\n38583,2023,LATAM,electronics,retail,54.34,3,0.201,none,2024-05-13\r\n38584,2290,LATAM,fashion,online,62.25,2,0.222,none,2024-05-23\r\n38585,1039,AMER,home,retail,128.76,2,0.147,bundle,2024-06-17\r\n38586,1581,APAC,grocery,partner,57.55,6,0.189,none,2024-10-08\r\n38587,1279,EMEA,sports,online,34.84,4,0.059,coupon,2024-05-02\r\n38588,1045,LATAM,fashion,online,33.73,5,0.065,none,2024-01-24\r\n38589,1206,EMEA,home,online,33.17,5,0.172,coupon,2024-02-12\r\n38590,1153,AMER,grocery,retail,68.41,5,0.081,loyalty,2024-12-21\r\n38591,2227,LATAM,fashion,partner,98.62,4,0.012,none,2024-11-03\r\n38592,1052,LATAM,home
,online,53.19,4,0.233,coupon,2024-01-16\r\n38593,2153,APAC,fashion,retail,92.18,4,0.221,loyalty,2024-08-19\r\n38594,1769,LATAM,sports,retail,43.71,5,0.237,bundle,2024-12-06\r\n38595,1729,AMER,toys,online,54.14,3,0.235,none,2024-04-27\r\n38596,2068,LATAM,home,retail,74.07,6,0.174,bundle,2024-06-23\r\n38597,2014,EMEA,grocery,online,64.47,7,0.051,coupon,2024-02-22\r\n38598,1653,APAC,electronics,online,100.14,8,0.095,none,2024-08-18\r\n38599,2245,APAC,sports,retail,55.09,7,0.050,none,2024-10-26\r\n38600,1483,EMEA,grocery,retail,51.12,6,0.124,coupon,2024-11-27\r\n38601,1761,EMEA,grocery,retail,70.49,7,0.163,loyalty,2024-11-25\r\n38602,2338,AMER,sports,mobile,46.12,3,0.230,none,2024-12-21\r\n38603,1070,EMEA,grocery,partner,34.57,6,0.081,none,2024-10-04\r\n38604,1376,EMEA,sports,retail,74.99,7,0.216,none,2024-10-09\r\n38605,1878,EMEA,electronics,mobile,84.87,1,0.157,none,2024-03-06\r\n38606,1804,AMER,grocery,retail,97.03,7,0.119,bundle,2024-04-25\r\n38607,2184,APAC,grocery,online,101.29,3,0.143,coupon,2024-02-06\r\n38608,1384,LATAM,fashion,mobile,50.75,3,0.180,coupon,2024-10-27\r\n38609,1586,LATAM,grocery,online,86.03,1,0.195,none,2024-06-11\r\n38610,2380,AMER,sports,online,175.11,2,0.118,none,2024-12-18\r\n38611,1450,EMEA,electronics,retail,54.35,5,0.143,none,2024-07-09\r\n38612,1883,LATAM,grocery,retail,24.05,6,0.234,bundle,2024-07-05\r\n38613,1501,AMER,toys,online,70.26,6,0.088,none,2024-05-23\r\n38614,2494,AMER,grocery,retail,27.05,3,0.152,none,2024-01-25\r\n38615,1791,LATAM,home,retail,49.87,8,0.241,bundle,2024-06-03\r\n38616,1388,AMER,grocery,online,47.18,5,0.083,none,2024-07-25\r\n38617,2254,LATAM,toys,retail,31.68,1,0.219,none,2024-11-16\r\n38618,2103,LATAM,electronics,partner,123.56,2,0.077,none,2024-09-03\r\n38619,2293,LATAM,grocery,online,62.40,4,0.029,bundle,2024-01-11\r\n38620,1726,EMEA,home,online,53.93,8,0.236,loyalty,2024-08-07\r\n38621,2311,LATAM,grocery,mobile,30.48,4,0.134,none,2024-08-10\r\n38622,1349,APAC,grocery,online,23.36,1,0.146,bundle,2024-10-01\
r\n38623,1181,LATAM,grocery,partner,8.71,5,0.105,bundle,2024-01-07\r\n38624,1778,LATAM,sports,online,48.63,4,0.211,coupon,2024-12-17\r\n38625,1323,EMEA,grocery,mobile,36.14,7,0.216,none,2024-06-16\r\n38626,1048,EMEA,sports,retail,65.17,6,0.159,none,2024-02-24\r\n38627,2054,AMER,sports,online,28.51,3,0.055,coupon,2024-08-11\r\n38628,2386,EMEA,home,online,55.09,7,0.144,none,2024-04-13\r\n38629,1124,AMER,home,online,26.09,7,0.103,none,2024-03-06\r\n38630,2340,EMEA,toys,partner,116.41,7,0.041,bundle,2024-07-28\r\n38631,1762,LATAM,grocery,retail,109.13,4,0.024,none,2024-02-08\r\n38632,2042,LATAM,electronics,online,27.29,7,0.050,none,2024-02-25\r\n38633,1043,LATAM,grocery,mobile,88.60,2,0.144,none,2024-10-25\r\n38634,1425,EMEA,sports,retail,61.05,5,0.115,coupon,2024-09-16\r\n38635,1145,AMER,home,mobile,19.13,4,0.081,none,2024-06-07\r\n38636,1906,APAC,grocery,online,58.01,6,0.154,bundle,2024-09-15\r\n38637,1261,APAC,toys,online,91.00,8,0.172,bundle,2024-07-18\r\n38638,2428,LATAM,home,retail,31.30,1,0.130,none,2024-08-12\r\n38639,2116,LATAM,electronics,online,45.59,4,0.150,loyalty,2024-08-22\r\n38640,2358,AMER,sports,online,30.83,7,0.087,coupon,2024-08-04\r\n38641,1186,APAC,fashion,online,72.18,3,0.157,none,2024-07-26\r\n38642,1837,LATAM,home,mobile,78.01,7,0.072,none,2024-09-04\r\n38643,1743,LATAM,grocery,online,55.93,5,0.086,coupon,2024-12-24\r\n38644,1939,LATAM,home,retail,73.99,3,0.189,loyalty,2024-11-22\r\n38645,1100,AMER,fashion,retail,29.04,7,0.162,bundle,2024-11-23\r\n38646,1896,EMEA,grocery,online,42.69,6,0.078,loyalty,2024-01-11\r\n38647,1517,AMER,fashion,retail,98.70,6,0.020,none,2024-11-22\r\n38648,2110,LATAM,home,retail,60.05,7,0.075,none,2024-02-11\r\n38649,1420,APAC,electronics,partner,105.49,1,0.047,coupon,2024-09-28\r\n38650,1651,LATAM,toys,retail,12.30,5,0.013,none,2024-06-26\r\n38651,1874,LATAM,electronics,mobile,100.97,7,0.068,none,2024-05-01\r\n38652,1492,APAC,fashion,online,89.67,7,0.199,none,2024-04-04\r\n38653,1125,LATAM,grocery,online,38.35,4,0.146,
none,2024-03-18\r\n38654,1551,APAC,toys,online,61.83,4,0.008,bundle,2024-07-05\r\n38655,1676,LATAM,toys,retail,93.30,6,0.065,bundle,2024-06-22\r\n38656,1200,EMEA,grocery,mobile,100.04,8,0.045,coupon,2024-02-17\r\n38657,1109,APAC,electronics,mobile,58.97,2,0.154,bundle,2024-05-01\r\n38658,2052,LATAM,sports,online,28.33,5,0.172,coupon,2024-04-11\r\n38659,1293,AMER,electronics,online,93.05,8,0.112,coupon,2024-02-13\r\n38660,1125,LATAM,electronics,online,49.41,7,0.066,none,2024-03-24\r\n38661,1373,LATAM,home,retail,47.23,1,0.061,none,2024-08-08\r\n38662,1857,LATAM,grocery,online,52.61,6,0.056,none,2024-02-16\r\n38663,1538,AMER,toys,online,80.73,1,0.162,none,2024-02-23\r\n38664,2080,LATAM,grocery,online,45.09,4,0.035,none,2024-02-17\r\n38665,1924,AMER,home,mobile,97.10,6,0.104,none,2024-05-15\r\n38666,2378,LATAM,grocery,retail,68.39,8,0.044,none,2024-02-23\r\n38667,2072,AMER,grocery,retail,34.94,3,0.205,coupon,2024-10-04\r\n38668,1388,AMER,grocery,retail,45.12,5,0.187,none,2024-06-03\r\n38669,1201,LATAM,home,online,66.42,7,0.181,bundle,2024-09-13\r\n38670,1589,AMER,electronics,online,26.97,2,0.244,none,2024-05-06\r\n38671,1392,AMER,home,mobile,79.02,1,0.158,coupon,2024-03-09\r\n38672,1154,LATAM,grocery,online,43.70,4,0.053,coupon,2024-03-05\r\n38673,1793,LATAM,grocery,mobile,111.20,2,0.247,bundle,2024-09-02\r\n38674,1963,AMER,grocery,online,39.05,6,0.121,none,2024-10-17\r\n38675,1110,LATAM,grocery,online,15.17,5,0.133,none,2024-08-06\r\n38676,1692,LATAM,home,retail,34.81,3,0.039,loyalty,2024-02-26\r\n38677,1244,LATAM,electronics,online,105.24,6,0.059,none,2024-05-12\r\n38678,1961,EMEA,electronics,retail,78.66,8,0.153,none,2024-01-20\r\n38679,2032,AMER,fashion,online,59.46,7,0.196,none,2024-12-18\r\n38680,2207,APAC,grocery,online,55.60,2,0.120,none,2024-03-10\r\n38681,1267,EMEA,electronics,online,26.91,2,0.148,none,2024-04-18\r\n38682,2123,AMER,fashion,online,62.84,7,0.179,none,2024-03-14\r\n38683,1200,EMEA,fashion,retail,39.35,8,0.193,none,2024-03-17\r\n38684,1941,AMER,f
ashion,online,91.10,6,0.078,none,2024-10-20\r\n38685,1059,AMER,home,mobile,51.10,2,0.049,coupon,2024-06-15\r\n38686,2371,LATAM,fashion,retail,49.96,6,0.119,none,2024-01-19\r\n38687,1489,AMER,grocery,online,56.38,2,0.118,none,2024-07-03\r\n38688,1995,LATAM,electronics,mobile,30.44,7,0.214,none,2024-02-21\r\n38689,1208,AMER,fashion,online,59.75,2,0.232,none,2024-06-02\r\n38690,2397,LATAM,grocery,mobile,60.21,8,0.192,none,2024-02-22\r\n38691,2445,APAC,fashion,online,43.95,6,0.114,bundle,2024-06-24\r\n38692,2134,AMER,home,retail,81.89,8,0.186,none,2024-04-04\r\n38693,2474,LATAM,grocery,online,13.22,8,0.075,none,2024-06-12\r\n38694,1020,APAC,sports,retail,89.13,4,0.011,none,2024-04-16\r\n38695,1600,AMER,grocery,online,49.20,4,0.168,none,2024-04-23\r\n38696,2321,APAC,grocery,online,43.28,5,0.234,none,2024-04-23\r\n38697,1632,LATAM,sports,retail,143.99,8,0.079,none,2024-09-22\r\n38698,2095,EMEA,home,online,33.82,3,0.161,none,2024-09-07\r\n38699,1298,LATAM,fashion,online,84.63,6,0.205,none,2024-08-10\r\n38700,1333,EMEA,electronics,partner,65.45,7,0.079,none,2024-01-19\r\n38701,1325,APAC,grocery,retail,120.21,2,0.067,none,2024-10-26\r\n38702,1333,EMEA,sports,online,133.84,6,0.126,none,2024-05-11\r\n38703,2378,LATAM,grocery,retail,36.18,4,0.014,none,2024-12-27\r\n38704,1600,AMER,grocery,retail,23.42,3,0.002,none,2024-05-10\r\n38705,2228,EMEA,fashion,online,42.05,4,0.039,coupon,2024-01-11\r\n38706,1451,EMEA,sports,online,275.37,7,0.226,bundle,2024-12-18\r\n38707,2136,AMER,grocery,online,138.66,1,0.098,bundle,2024-05-03\r\n38708,1055,AMER,toys,retail,25.93,6,0.143,loyalty,2024-08-06\r\n38709,1645,EMEA,grocery,retail,79.23,7,0.083,coupon,2024-06-01\r\n38710,1870,EMEA,grocery,online,83.58,8,0.249,none,2024-06-02\r\n38711,1775,EMEA,sports,mobile,71.62,7,0.001,none,2024-03-01\r\n38712,1596,EMEA,home,retail,34.18,1,0.149,none,2024-11-01\r\n38713,1911,LATAM,home,online,79.01,1,0.054,none,2024-01-07\r\n38714,1484,AMER,grocery,retail,86.26,4,0.024,none,2024-12-13\r\n38715,1145,AMER,ele
ctronics,retail,55.34,4,0.110,none,2024-03-13\r\n38716,1876,LATAM,home,online,39.94,8,0.243,none,2024-11-01\r\n38717,1826,LATAM,grocery,retail,24.60,5,0.106,coupon,2024-11-16\r\n38718,1546,EMEA,electronics,retail,64.57,8,0.089,coupon,2024-06-15\r\n38719,2365,LATAM,grocery,mobile,88.23,1,0.053,none,2024-09-04\r\n38720,1230,EMEA,grocery,partner,32.64,6,0.006,none,2024-04-27\r\n38721,2142,LATAM,sports,online,99.20,7,0.049,none,2024-09-04\r\n38722,2093,LATAM,home,online,53.61,1,0.183,none,2024-02-17\r\n38723,1913,LATAM,toys,online,106.12,4,0.189,none,2024-09-18\r\n38724,1933,EMEA,sports,online,36.86,4,0.174,none,2024-01-17\r\n38725,1574,AMER,sports,online,36.44,4,0.055,none,2024-11-13\r\n38726,1021,AMER,home,online,59.29,8,0.108,loyalty,2024-05-15\r\n38727,1120,LATAM,electronics,retail,33.67,7,0.216,coupon,2024-02-11\r\n38728,1984,LATAM,fashion,retail,50.95,4,0.040,loyalty,2024-11-23\r\n38729,2412,LATAM,sports,retail,99.89,7,0.040,none,2024-12-16\r\n38730,1028,EMEA,fashion,partner,58.21,3,0.107,none,2024-11-03\r\n38731,2465,EMEA,fashion,online,28.84,5,0.133,none,2024-08-05\r\n38732,2186,LATAM,grocery,online,50.06,3,0.151,none,2024-10-20\r\n38733,1744,EMEA,grocery,retail,31.12,3,0.051,none,2024-10-18\r\n38734,2089,EMEA,home,mobile,52.78,3,0.188,coupon,2024-09-01\r\n38735,1278,AMER,electronics,retail,41.65,8,0.174,none,2024-03-26\r\n38736,2065,EMEA,sports,retail,56.81,8,0.167,loyalty,2024-11-14\r\n38737,1118,AMER,home,retail,77.68,8,0.187,none,2024-12-14\r\n38738,1723,LATAM,toys,mobile,24.50,4,0.164,none,2024-12-23\r\n38739,1466,AMER,home,online,86.62,6,0.073,bundle,2024-06-05\r\n38740,1206,EMEA,grocery,online,44.62,4,0.245,none,2024-03-12\r\n38741,1893,APAC,sports,online,35.10,1,0.137,bundle,2024-01-15\r\n38742,2056,LATAM,electronics,retail,72.37,7,0.199,coupon,2024-09-25\r\n38743,1288,LATAM,fashion,retail,79.68,3,0.087,none,2024-04-03\r\n38744,2188,EMEA,fashion,online,43.98,6,0.027,none,2024-06-09\r\n38745,1340,LATAM,toys,online,76.20,7,0.138,none,2024-07-01\r\n38746,23
24,AMER,grocery,retail,49.17,6,0.091,bundle,2024-02-26\r\n38747,2015,APAC,electronics,online,46.17,4,0.139,loyalty,2024-06-19\r\n38748,1295,EMEA,fashion,online,80.91,6,0.144,coupon,2024-10-22\r\n38749,1483,EMEA,grocery,online,56.92,7,0.219,bundle,2024-02-01\r\n38750,1158,LATAM,electronics,retail,94.58,8,0.009,none,2024-02-15\r\n38751,1264,APAC,home,online,115.32,2,0.212,coupon,2024-10-04\r\n38752,1294,APAC,sports,mobile,83.34,7,0.076,loyalty,2024-12-13\r\n38753,2071,APAC,fashion,online,94.43,5,0.075,coupon,2024-07-12\r\n38754,1600,AMER,grocery,mobile,64.81,1,0.225,none,2024-10-12\r\n38755,2177,AMER,home,online,49.45,2,0.151,bundle,2024-12-12\r\n38756,1306,LATAM,electronics,mobile,166.23,5,0.216,loyalty,2024-11-02\r\n38757,2235,AMER,home,retail,65.28,4,0.059,none,2024-07-28\r\n38758,1111,APAC,home,online,24.84,6,0.242,none,2024-05-27\r\n38759,1794,AMER,toys,mobile,134.71,1,0.083,none,2024-08-28\r\n38760,1984,LATAM,home,retail,60.12,7,0.054,none,2024-10-26\r\n38761,1654,EMEA,electronics,retail,43.27,8,0.185,none,2024-11-05\r\n38762,1119,LATAM,sports,online,33.53,5,0.146,none,2024-07-13\r\n38763,1549,APAC,home,retail,127.95,6,0.029,coupon,2024-09-26\r\n38764,1464,APAC,fashion,online,62.61,2,0.046,coupon,2024-02-04\r\n38765,2271,LATAM,home,retail,73.59,7,0.240,coupon,2024-05-18\r\n38766,1316,APAC,home,mobile,31.66,4,0.118,none,2024-11-01\r\n38767,1152,LATAM,grocery,mobile,71.89,2,0.168,none,2024-08-18\r\n38768,1949,AMER,electronics,online,39.30,4,0.201,bundle,2024-12-23\r\n38769,1276,AMER,fashion,retail,23.31,8,0.042,loyalty,2024-10-25\r\n38770,1578,LATAM,sports,mobile,40.84,4,0.002,coupon,2024-06-18\r\n38771,1321,EMEA,grocery,online,42.26,6,0.227,none,2024-07-18\r\n38772,2005,APAC,fashion,mobile,50.80,5,0.130,loyalty,2024-02-01\r\n38773,2325,LATAM,fashion,retail,62.52,7,0.088,none,2024-08-22\r\n38774,1546,EMEA,electronics,online,37.61,3,0.212,bundle,2024-03-04\r\n38775,1682,EMEA,electronics,online,72.47,8,0.049,coupon,2024-04-04\r\n38776,2223,EMEA,sports,retail,47.30,5
,0.042,loyalty,2024-07-10\r\n38777,1378,APAC,toys,mobile,53.49,3,0.074,none,2024-02-12\r\n38778,1490,AMER,home,retail,82.26,1,0.004,coupon,2024-07-27\r\n38779,2323,AMER,home,partner,46.10,3,0.168,none,2024-08-26\r\n38780,1302,LATAM,home,retail,105.74,6,0.199,none,2024-06-03\r\n38781,1845,AMER,grocery,online,61.74,2,0.085,coupon,2024-03-05\r\n38782,2095,EMEA,home,mobile,55.31,3,0.147,none,2024-01-28\r\n38783,1730,AMER,home,retail,49.67,2,0.249,none,2024-04-20\r\n38784,1059,AMER,grocery,retail,60.75,1,0.140,loyalty,2024-09-01\r\n38785,2487,LATAM,grocery,retail,75.72,8,0.114,none,2024-11-15\r\n38786,2015,APAC,sports,online,30.89,5,0.117,bundle,2024-04-27\r\n38787,1549,APAC,grocery,mobile,150.41,8,0.079,none,2024-02-18\r\n38788,1790,AMER,sports,online,66.56,3,0.013,none,2024-10-18\r\n38789,1618,EMEA,fashion,partner,149.99,4,0.144,none,2024-01-16\r\n38790,1028,EMEA,home,retail,77.46,5,0.216,none,2024-05-01\r\n38791,1696,LATAM,sports,online,91.66,1,0.148,coupon,2024-04-27\r\n38792,2494,AMER,sports,online,37.80,5,0.132,none,2024-06-02\r\n38793,2010,APAC,grocery,mobile,61.36,5,0.183,coupon,2024-06-05\r\n38794,1569,APAC,electronics,online,103.86,7,0.174,none,2024-05-06\r\n38795,1416,EMEA,sports,retail,69.88,5,0.179,bundle,2024-01-20\r\n38796,1740,EMEA,grocery,retail,82.37,3,0.114,none,2024-05-17\r\n38797,1662,LATAM,home,retail,72.50,3,0.087,bundle,2024-09-13\r\n38798,1706,EMEA,fashion,online,22.01,1,0.137,coupon,2024-11-13\r\n38799,1420,APAC,toys,retail,35.02,2,0.142,none,2024-03-20\r\n38800,2154,APAC,grocery,retail,86.93,1,0.009,none,2024-10-27\r\n38801,1471,EMEA,grocery,online,66.86,5,0.196,none,2024-05-22\r\n38802,2001,EMEA,toys,mobile,53.61,8,0.249,none,2024-07-25\r\n38803,1074,LATAM,grocery,online,48.01,7,0.238,none,2024-12-01\r\n38804,1451,EMEA,grocery,retail,102.32,4,0.162,coupon,2024-05-10\r\n38805,1989,LATAM,grocery,retail,73.58,1,0.049,none,2024-07-04\r\n38806,2017,EMEA,toys,retail,80.30,2,0.052,bundle,2024-08-19\r\n38807,1053,AMER,grocery,mobile,124.16,8,0.175,non
e,2024-02-15\r\n38808,2266,LATAM,home,partner,69.78,6,0.173,none,2024-09-11\r\n38809,1191,EMEA,fashion,retail,48.55,3,0.127,none,2024-05-03\r\n38810,1702,AMER,sports,mobile,149.46,2,0.082,none,2024-12-13\r\n38811,2263,AMER,grocery,retail,93.84,7,0.059,coupon,2024-03-05\r\n38812,1960,EMEA,grocery,online,38.55,8,0.011,none,2024-01-13\r\n38813,2070,APAC,fashion,online,57.93,1,0.058,coupon,2024-08-05\r\n38814,2498,LATAM,toys,online,40.01,4,0.212,bundle,2024-12-24\r\n38815,2039,EMEA,sports,online,100.91,3,0.010,bundle,2024-07-03\r\n38816,1418,LATAM,grocery,online,104.85,8,0.108,none,2024-09-08\r\n38817,2414,EMEA,grocery,mobile,41.45,5,0.230,bundle,2024-08-05\r\n38818,1974,EMEA,sports,online,54.21,8,0.006,none,2024-08-12\r\n38819,2105,APAC,home,online,30.59,5,0.018,none,2024-09-09\r\n38820,2213,APAC,sports,retail,79.21,5,0.054,coupon,2024-05-08\r\n38821,1086,AMER,fashion,retail,39.62,8,0.001,none,2024-08-23\r\n38822,2335,EMEA,electronics,retail,32.58,6,0.016,coupon,2024-10-27\r\n38823,1815,APAC,electronics,retail,193.47,7,0.205,bundle,2024-01-17\r\n38824,2286,AMER,electronics,mobile,87.78,2,0.022,none,2024-02-21\r\n38825,1434,EMEA,home,retail,62.28,1,0.184,none,2024-04-14\r\n38826,2268,EMEA,grocery,mobile,32.84,4,0.203,coupon,2024-04-09\r\n38827,1501,AMER,home,retail,15.65,6,0.082,none,2024-09-18\r\n38828,1204,AMER,electronics,mobile,34.05,6,0.017,none,2024-10-18\r\n38829,2472,AMER,grocery,online,72.46,3,0.007,none,2024-05-06\r\n38830,1958,APAC,electronics,online,57.48,7,0.088,coupon,2024-11-20\r\n38831,2232,EMEA,home,mobile,37.75,8,0.035,bundle,2024-10-10\r\n38832,1462,LATAM,grocery,online,19.36,5,0.160,none,2024-07-14\r\n38833,1614,EMEA,home,retail,109.72,6,0.011,loyalty,2024-03-10\r\n38834,2454,LATAM,grocery,online,69.00,2,0.028,none,2024-10-04\r\n38835,2237,EMEA,grocery,retail,45.27,3,0.204,none,2024-03-10\r\n38836,1747,EMEA,fashion,partner,37.81,3,0.161,none,2024-02-07\r\n38837,1549,APAC,home,mobile,33.50,5,0.033,bundle,2024-10-02\r\n38838,1530,APAC,fashion,retail,59
.35,4,0.244,none,2024-02-07\r\n38839,1680,LATAM,home,retail,136.72,4,0.180,none,2024-02-13\r\n38840,2202,APAC,sports,online,142.91,7,0.035,coupon,2024-11-23\r\n38841,1232,LATAM,grocery,retail,41.40,3,0.211,none,2024-06-10\r\n38842,1458,APAC,grocery,mobile,101.74,6,0.183,none,2024-08-19\r\n38843,1792,AMER,fashion,partner,103.26,8,0.215,coupon,2024-03-09\r\n38844,2029,APAC,home,online,35.67,8,0.207,none,2024-02-28\r\n38845,1317,EMEA,electronics,retail,47.30,5,0.173,bundle,2024-04-17\r\n38846,2070,APAC,electronics,online,87.39,6,0.236,none,2024-07-23\r\n38847,1189,AMER,electronics,online,22.42,6,0.015,none,2024-05-23\r\n38848,1834,AMER,toys,partner,47.99,6,0.172,none,2024-10-11\r\n38849,2045,LATAM,fashion,retail,98.72,3,0.180,none,2024-03-21\r\n38850,1580,AMER,toys,mobile,108.36,3,0.118,none,2024-04-25\r\n38851,1682,EMEA,fashion,online,56.96,8,0.026,none,2024-09-28\r\n38852,1694,APAC,home,retail,76.39,3,0.050,bundle,2024-08-27\r\n38853,1420,APAC,home,retail,20.79,6,0.044,coupon,2024-06-13\r\n38854,2028,APAC,fashion,mobile,63.27,4,0.164,none,2024-11-08\r\n38855,1128,LATAM,home,mobile,175.63,7,0.044,none,2024-08-08\r\n38856,1004,LATAM,grocery,online,63.38,1,0.061,none,2024-12-16\r\n38857,1797,LATAM,electronics,retail,48.70,5,0.085,none,2024-12-11\r\n38858,1312,EMEA,sports,retail,106.13,8,0.101,none,2024-11-15\r\n38859,1490,AMER,fashion,retail,22.78,2,0.091,none,2024-08-24\r\n38860,1760,LATAM,electronics,online,79.89,7,0.147,bundle,2024-12-15\r\n38861,2428,LATAM,toys,online,197.30,7,0.183,loyalty,2024-09-28\r\n38862,2299,EMEA,grocery,mobile,94.34,1,0.224,bundle,2024-10-11\r\n38863,1173,LATAM,home,online,31.28,2,0.029,none,2024-01-26\r\n38864,1706,EMEA,sports,retail,115.17,8,0.071,coupon,2024-03-09\r\n38865,2423,LATAM,sports,online,25.32,7,0.137,coupon,2024-10-07\r\n38866,2285,APAC,home,online,47.81,2,0.097,none,2024-03-17\r\n38867,1990,EMEA,electronics,online,52.87,6,0.213,none,2024-04-28\r\n38868,1856,EMEA,grocery,retail,19.14,5,0.041,loyalty,2024-01-06\r\n38869,1202,APA
C,electronics,retail,24.36,4,0.050,bundle,2024-07-24\r\n38870,2421,AMER,fashion,retail,67.14,2,0.027,none,2024-11-09\r\n38871,2252,EMEA,fashion,mobile,38.36,6,0.020,none,2024-01-07\r\n38872,2106,LATAM,fashion,retail,32.98,3,0.242,none,2024-09-21\r\n38873,1794,AMER,grocery,retail,130.55,5,0.091,none,2024-11-12\r\n38874,1478,EMEA,grocery,retail,66.29,2,0.034,coupon,2024-06-14\r\n38875,1860,EMEA,grocery,online,102.36,6,0.205,none,2024-05-01\r\n38876,1743,LATAM,grocery,retail,48.59,7,0.054,none,2024-12-06\r\n38877,2090,AMER,fashion,retail,61.05,3,0.097,none,2024-12-20\r\n38878,1131,APAC,fashion,partner,42.33,5,0.004,none,2024-09-15\r\n38879,1157,LATAM,sports,partner,62.68,8,0.049,none,2024-05-27\r\n38880,1599,APAC,home,mobile,45.71,5,0.174,none,2024-07-01\r\n38881,2471,APAC,electronics,online,34.17,8,0.012,bundle,2024-02-09\r\n38882,2052,LATAM,sports,retail,88.08,6,0.131,none,2024-02-24\r\n38883,2477,APAC,electronics,mobile,32.63,4,0.192,none,2024-10-15\r\n38884,2248,LATAM,toys,online,56.47,4,0.121,coupon,2024-01-20\r\n38885,1586,LATAM,sports,online,38.08,7,0.233,none,2024-10-11\r\n38886,1490,AMER,electronics,retail,82.57,8,0.195,none,2024-10-15\r\n38887,2309,AMER,fashion,retail,126.09,7,0.114,none,2024-08-24\r\n38888,2318,AMER,grocery,retail,112.32,4,0.223,loyalty,2024-06-12\r\n38889,2033,LATAM,toys,retail,42.87,6,0.210,coupon,2024-07-05\r\n38890,2402,AMER,grocery,mobile,47.82,2,0.241,none,2024-03-26\r\n38891,1517,AMER,toys,online,56.77,2,0.096,none,2024-01-15\r\n38892,2219,LATAM,fashion,online,39.95,6,0.080,bundle,2024-01-28\r\n38893,1510,EMEA,home,online,16.61,8,0.103,none,2024-09-05\r\n38894,1407,LATAM,fashion,mobile,54.70,2,0.004,none,2024-08-05\r\n38895,2432,AMER,grocery,retail,56.37,2,0.064,bundle,2024-07-24\r\n38896,2172,EMEA,home,online,75.62,8,0.118,none,2024-11-20\r\n38897,2363,AMER,grocery,retail,247.48,8,0.061,loyalty,2024-01-07\r\n38898,2225,EMEA,fashion,retail,79.10,5,0.247,none,2024-12-02\r\n38899,1855,APAC,fashion,online,58.37,8,0.211,coupon,2024-08-24\
r\n38900,1595,AMER,sports,online,108.97,6,0.123,coupon,2024-10-06\r\n38901,1077,AMER,fashion,online,33.23,4,0.134,none,2024-12-16\r\n38902,2295,EMEA,toys,partner,45.08,4,0.110,none,2024-07-17\r\n38903,1820,AMER,fashion,partner,41.87,7,0.082,coupon,2024-12-15\r\n38904,2494,AMER,home,online,101.38,1,0.053,none,2024-10-25\r\n38905,2238,AMER,fashion,retail,92.25,3,0.047,none,2024-06-09\r\n38906,1364,EMEA,fashion,retail,47.41,5,0.225,none,2024-04-27\r\n38907,1531,EMEA,toys,online,73.89,2,0.116,none,2024-01-05\r\n38908,2086,APAC,fashion,online,41.02,6,0.074,loyalty,2024-01-04\r\n38909,1039,AMER,grocery,retail,85.81,7,0.133,none,2024-02-12\r\n38910,1051,EMEA,grocery,online,27.75,2,0.017,bundle,2024-08-08\r\n38911,1528,EMEA,toys,online,51.24,4,0.057,coupon,2024-12-25\r\n38912,1686,LATAM,grocery,online,57.41,6,0.110,coupon,2024-06-03\r\n38913,2322,AMER,fashion,retail,78.77,7,0.064,none,2024-10-17\r\n38914,1683,AMER,electronics,online,39.58,4,0.147,bundle,2024-04-03\r\n38915,1307,AMER,fashion,retail,46.04,2,0.035,none,2024-03-09\r\n38916,1811,APAC,grocery,retail,35.61,3,0.196,none,2024-12-08\r\n38917,1337,APAC,grocery,mobile,40.23,4,0.091,none,2024-11-18\r\n38918,1726,EMEA,toys,partner,81.91,1,0.189,none,2024-10-16\r\n38919,2459,AMER,fashion,mobile,82.95,5,0.035,none,2024-08-04\r\n38920,1173,LATAM,electronics,online,15.65,5,0.021,coupon,2024-07-15\r\n38921,1771,AMER,electronics,mobile,50.79,2,0.043,coupon,2024-03-09\r\n38922,2132,LATAM,home,retail,63.04,3,0.064,none,2024-12-22\r\n38923,1945,AMER,sports,retail,67.04,7,0.176,none,2024-05-24\r\n38924,1148,AMER,electronics,retail,64.11,4,0.213,coupon,2024-05-06\r\n38925,1269,LATAM,home,online,35.46,2,0.234,none,2024-09-27\r\n38926,2289,APAC,grocery,online,100.02,2,0.206,coupon,2024-05-24\r\n38927,1196,APAC,home,online,42.28,1,0.229,coupon,2024-07-25\r\n38928,1712,LATAM,electronics,mobile,34.64,5,0.038,coupon,2024-02-15\r\n38929,2287,EMEA,electronics,partner,42.59,4,0.126,none,2024-06-15\r\n38930,2250,AMER,electronics,online,34.24
,8,0.236,loyalty,2024-06-06\r\n38931,2483,LATAM,fashion,mobile,34.10,6,0.231,none,2024-11-20\r\n38932,2415,AMER,sports,online,44.22,8,0.176,none,2024-11-12\r\n38933,1245,APAC,grocery,online,54.03,4,0.153,coupon,2024-07-21\r\n38934,1587,LATAM,electronics,online,41.67,4,0.202,coupon,2024-03-10\r\n38935,1956,APAC,fashion,online,60.26,2,0.244,none,2024-04-11\r\n38936,2295,EMEA,electronics,online,88.88,8,0.227,none,2024-04-04\r\n38937,1255,AMER,electronics,online,83.03,8,0.167,bundle,2024-05-25\r\n38938,1225,APAC,sports,online,32.51,5,0.226,loyalty,2024-06-04\r\n38939,1283,APAC,home,retail,39.59,1,0.066,coupon,2024-01-07\r\n38940,2119,AMER,grocery,online,126.71,7,0.154,none,2024-06-05\r\n38941,2366,APAC,electronics,partner,70.60,4,0.114,none,2024-06-18\r\n38942,1106,AMER,sports,online,105.95,1,0.027,bundle,2024-11-14\r\n38943,1076,LATAM,toys,online,33.17,1,0.062,none,2024-11-12\r\n38944,1830,EMEA,electronics,retail,48.20,4,0.130,coupon,2024-02-09\r\n38945,1542,APAC,grocery,retail,54.55,7,0.199,none,2024-02-16\r\n38946,1509,AMER,toys,mobile,85.94,6,0.177,loyalty,2024-01-14\r\n38947,2337,AMER,toys,retail,75.32,2,0.246,none,2024-09-12\r\n38948,1094,LATAM,electronics,online,83.87,5,0.114,coupon,2024-12-03\r\n38949,1964,EMEA,electronics,online,108.15,8,0.015,bundle,2024-07-10\r\n38950,2233,EMEA,home,retail,61.81,8,0.066,coupon,2024-03-21\r\n38951,1389,LATAM,grocery,online,45.07,1,0.162,coupon,2024-01-07\r\n38952,2030,EMEA,electronics,online,47.20,4,0.190,loyalty,2024-06-21\r\n38953,1646,APAC,grocery,retail,98.93,5,0.070,none,2024-11-09\r\n38954,2375,AMER,toys,online,49.80,8,0.082,none,2024-07-19\r\n38955,1217,EMEA,sports,retail,133.25,2,0.172,none,2024-07-28\r\n38956,1300,EMEA,fashion,online,43.49,3,0.157,coupon,2024-12-27\r\n38957,2028,APAC,grocery,online,47.52,6,0.027,none,2024-12-27\r\n38958,2444,EMEA,fashion,online,93.85,5,0.207,none,2024-12-09\r\n38959,2080,LATAM,fashion,retail,32.71,7,0.058,none,2024-02-05\r\n38960,2427,LATAM,grocery,retail,77.28,8,0.206,none,2024-12-07
\r\n38961,2349,APAC,electronics,retail,42.88,3,0.151,bundle,2024-10-07\r\n38962,2140,AMER,grocery,online,99.97,1,0.119,none,2024-08-15\r\n38963,1499,EMEA,electronics,online,29.03,5,0.024,loyalty,2024-01-07\r\n38964,2132,LATAM,grocery,online,45.74,7,0.065,none,2024-12-20\r\n38965,1150,LATAM,electronics,mobile,46.67,4,0.229,none,2024-09-27\r\n38966,2118,AMER,home,retail,91.39,7,0.017,none,2024-04-18\r\n38967,1987,AMER,toys,online,87.40,4,0.193,none,2024-09-22\r\n38968,1703,AMER,home,retail,33.41,8,0.142,none,2024-09-14\r\n38969,1715,AMER,electronics,online,104.99,8,0.232,coupon,2024-03-12\r\n38970,1503,APAC,grocery,mobile,60.74,3,0.086,none,2024-09-18\r\n38971,1977,APAC,grocery,online,63.94,4,0.110,none,2024-04-18\r\n38972,1535,AMER,home,partner,61.83,7,0.031,loyalty,2024-05-21\r\n38973,2210,APAC,grocery,retail,40.04,8,0.182,none,2024-05-28\r\n38974,1187,AMER,home,mobile,55.71,6,0.065,none,2024-10-19\r\n38975,2497,AMER,home,retail,99.56,7,0.055,coupon,2024-02-04\r\n38976,1768,AMER,home,retail,33.05,8,0.019,none,2024-06-13\r\n38977,1148,AMER,grocery,online,63.12,3,0.155,loyalty,2024-08-24\r\n38978,1651,LATAM,electronics,mobile,24.62,8,0.054,none,2024-01-02\r\n38979,1827,EMEA,sports,retail,57.23,7,0.086,none,2024-07-17\r\n38980,1417,APAC,electronics,mobile,41.47,1,0.187,coupon,2024-06-03\r\n38981,1975,EMEA,toys,online,69.09,5,0.104,none,2024-04-13\r\n38982,2085,AMER,sports,retail,87.37,5,0.232,bundle,2024-06-06\r\n38983,2324,AMER,fashion,online,62.87,6,0.126,none,2024-03-04\r\n38984,1334,APAC,grocery,online,29.36,7,0.197,coupon,2024-09-09\r\n38985,1944,AMER,fashion,online,26.82,7,0.248,bundle,2024-05-24\r\n38986,1607,LATAM,fashion,retail,25.95,5,0.043,none,2024-11-17\r\n38987,1650,LATAM,fashion,retail,13.30,5,0.187,none,2024-06-05\r\n38988,1898,EMEA,grocery,online,65.75,2,0.196,none,2024-07-07\r\n38989,1979,APAC,toys,online,120.22,1,0.110,coupon,2024-10-21\r\n38990,1701,LATAM,electronics,online,66.55,7,0.108,none,2024-07-06\r\n38991,2448,APAC,electronics,retail,63.46,5,
0.155,none,2024-09-15\r\n38992,2360,EMEA,home,mobile,18.24,6,0.229,none,2024-07-22\r\n38993,1134,APAC,electronics,online,45.41,2,0.017,none,2024-03-07\r\n38994,2172,EMEA,toys,retail,83.64,8,0.046,none,2024-11-15\r\n38995,1067,APAC,electronics,online,145.73,1,0.009,none,2024-08-25\r\n38996,2291,EMEA,grocery,online,54.60,1,0.204,loyalty,2024-07-03\r\n38997,1335,APAC,fashion,online,65.57,7,0.002,none,2024-07-27\r\n38998,1378,APAC,sports,retail,89.22,6,0.149,none,2024-12-10\r\n38999,1837,LATAM,grocery,retail,81.55,1,0.078,none,2024-04-24\r\n39000,1687,APAC,electronics,partner,34.29,8,0.152,none,2024-03-08\r\n39001,1313,EMEA,toys,retail,42.79,3,0.088,coupon,2024-08-27\r\n39002,2092,AMER,fashion,online,60.85,6,0.038,none,2024-04-20\r\n39003,1344,EMEA,home,mobile,34.40,6,0.179,none,2024-03-20\r\n39004,2115,APAC,sports,online,65.72,6,0.246,none,2024-06-13\r\n39005,1430,EMEA,grocery,online,47.80,7,0.138,bundle,2024-10-19\r\n39006,2093,LATAM,sports,retail,24.44,5,0.102,none,2024-04-16\r\n39007,1897,AMER,grocery,retail,134.22,6,0.065,coupon,2024-06-07\r\n39008,1785,EMEA,electronics,online,73.16,8,0.013,none,2024-04-03\r\n39009,1260,LATAM,home,retail,30.54,8,0.100,coupon,2024-05-09\r\n39010,1399,AMER,toys,online,39.18,7,0.194,none,2024-10-17\r\n39011,1259,EMEA,fashion,online,77.23,2,0.137,loyalty,2024-04-15\r\n39012,2329,LATAM,electronics,retail,31.00,3,0.172,none,2024-07-22\r\n39013,2388,LATAM,electronics,retail,52.21,5,0.185,none,2024-10-13\r\n39014,1220,LATAM,toys,online,44.10,3,0.095,bundle,2024-01-11\r\n39015,2333,APAC,electronics,mobile,62.56,2,0.177,none,2024-05-24\r\n39016,1742,AMER,grocery,online,61.51,6,0.230,none,2024-05-23\r\n39017,1169,LATAM,toys,retail,40.73,5,0.039,bundle,2024-07-16\r\n39018,1385,LATAM,grocery,online,45.01,1,0.220,none,2024-06-18\r\n39019,1587,LATAM,home,retail,22.64,5,0.198,none,2024-09-28\r\n39020,2369,LATAM,home,mobile,59.63,4,0.033,coupon,2024-12-20\r\n39021,1146,LATAM,grocery,mobile,43.05,8,0.142,none,2024-12-03\r\n39022,2435,AMER,fashion,re
tail,91.20,1,0.176,none,2024-03-28\r\n39023,1086,AMER,electronics,mobile,11.32,8,0.169,coupon,2024-10-10\r\n39024,1678,LATAM,grocery,retail,42.77,2,0.130,coupon,2024-10-11\r\n39025,1060,LATAM,fashion,online,33.60,2,0.103,none,2024-02-15\r\n39026,2288,AMER,fashion,online,77.77,8,0.071,coupon,2024-01-01\r\n39027,1999,EMEA,grocery,online,71.01,2,0.073,coupon,2024-03-07\r\n39028,1871,APAC,grocery,retail,48.13,5,0.189,none,2024-06-05\r\n39029,1802,AMER,electronics,retail,125.12,8,0.223,none,2024-03-24\r\n39030,1590,APAC,grocery,mobile,46.26,3,0.021,none,2024-10-08\r\n39031,1571,EMEA,grocery,online,45.36,2,0.205,bundle,2024-07-02\r\n39032,2254,LATAM,fashion,mobile,55.19,2,0.154,coupon,2024-11-23\r\n39033,2330,EMEA,electronics,online,22.63,8,0.082,none,2024-08-24\r\n39034,1214,EMEA,grocery,partner,53.17,8,0.107,loyalty,2024-03-04\r\n39035,1728,AMER,toys,retail,67.35,8,0.033,coupon,2024-09-12\r\n39036,2373,LATAM,sports,partner,33.92,4,0.179,none,2024-05-27\r\n39037,1442,EMEA,home,retail,64.99,4,0.131,bundle,2024-02-07\r\n39038,1293,AMER,grocery,retail,44.83,1,0.092,none,2024-06-10\r\n39039,2365,LATAM,grocery,online,29.48,4,0.080,none,2024-06-22\r\n39040,1644,EMEA,electronics,retail,50.60,3,0.119,none,2024-07-04\r\n39041,2179,LATAM,grocery,retail,63.00,8,0.180,none,2024-09-13\r\n39042,2085,AMER,grocery,partner,50.64,3,0.113,bundle,2024-04-23\r\n39043,1733,LATAM,toys,retail,57.22,4,0.179,none,2024-03-21\r\n39044,2091,LATAM,fashion,online,25.95,6,0.040,none,2024-10-11\r\n39045,1776,APAC,sports,online,35.23,5,0.132,coupon,2024-07-25\r\n39046,1764,LATAM,electronics,mobile,96.94,1,0.098,none,2024-01-26\r\n39047,2296,AMER,toys,retail,145.91,3,0.044,none,2024-09-21\r\n39048,2286,AMER,electronics,online,120.27,5,0.203,none,2024-12-22\r\n39049,1493,APAC,grocery,retail,88.90,6,0.167,bundle,2024-07-15\r\n39050,1160,LATAM,home,online,43.97,3,0.217,bundle,2024-10-11\r\n39051,2283,AMER,sports,online,91.81,1,0.172,none,2024-02-24\r\n39052,1282,LATAM,sports,online,232.07,8,0.173,coupon,2024
-07-02\r\n39053,1360,APAC,electronics,online,85.50,6,0.199,none,2024-07-05\r\n39054,1933,EMEA,home,partner,64.18,7,0.146,none,2024-02-05\r\n39055,2346,LATAM,home,retail,42.38,5,0.215,coupon,2024-07-25\r\n39056,1966,APAC,toys,retail,37.76,4,0.019,loyalty,2024-10-05\r\n39057,1652,APAC,electronics,online,41.12,5,0.247,loyalty,2024-04-08\r\n39058,2437,LATAM,grocery,online,47.11,7,0.111,none,2024-07-24\r\n39059,2164,AMER,home,online,23.98,5,0.018,none,2024-10-27\r\n39060,1645,EMEA,grocery,retail,43.66,7,0.057,none,2024-02-03\r\n39061,1721,EMEA,electronics,online,84.61,7,0.207,bundle,2024-11-16\r\n39062,1131,APAC,sports,online,66.24,3,0.235,none,2024-12-04\r\n39063,2281,AMER,home,online,132.06,6,0.077,bundle,2024-03-21\r\n39064,2265,APAC,home,retail,39.14,8,0.055,none,2024-05-08\r\n39065,1415,AMER,home,online,94.24,5,0.080,none,2024-06-10\r\n39066,1252,APAC,home,partner,66.88,1,0.128,none,2024-03-14\r\n39067,2037,LATAM,grocery,retail,21.39,2,0.010,none,2024-06-21\r\n39068,2123,AMER,electronics,retail,162.15,1,0.046,none,2024-08-05\r\n39069,1045,LATAM,fashion,online,29.71,4,0.220,bundle,2024-10-01\r\n39070,1394,LATAM,home,online,27.59,7,0.180,none,2024-01-08\r\n39071,2383,APAC,toys,mobile,76.89,1,0.212,none,2024-02-21\r\n39072,1431,APAC,toys,online,39.28,2,0.106,none,2024-08-26\r\n39073,1225,APAC,electronics,retail,56.30,7,0.093,coupon,2024-04-24\r\n39074,1370,APAC,grocery,mobile,60.26,3,0.230,coupon,2024-08-17\r\n39075,1146,LATAM,toys,online,50.16,1,0.136,none,2024-11-02\r\n39076,1897,AMER,grocery,online,35.81,2,0.088,none,2024-02-24\r\n39077,2124,AMER,home,retail,45.36,4,0.042,coupon,2024-07-01\r\n39078,2027,EMEA,sports,online,47.56,2,0.194,none,2024-01-28\r\n39079,1422,LATAM,home,online,39.19,8,0.104,coupon,2024-09-01\r\n39080,1387,AMER,electronics,mobile,69.79,8,0.167,coupon,2024-01-03\r\n39081,1900,APAC,grocery,online,88.23,3,0.073,coupon,2024-10-06\r\n39082,1640,APAC,grocery,online,46.73,6,0.089,none,2024-12-06\r\n39083,2499,LATAM,fashion,retail,61.07,1,0.134,bundle,
2024-06-09\r\n39084,1469,EMEA,grocery,online,30.60,5,0.166,none,2024-05-28\r\n39085,1975,EMEA,fashion,online,46.46,2,0.182,none,2024-11-01\r\n39086,1556,AMER,home,retail,31.23,8,0.016,coupon,2024-11-11\r\n39087,1194,APAC,grocery,partner,97.32,5,0.201,none,2024-10-18\r\n39088,2421,AMER,fashion,mobile,22.45,6,0.159,none,2024-06-10\r\n39089,2427,LATAM,home,online,67.41,8,0.136,bundle,2024-09-13\r\n39090,2196,AMER,toys,retail,62.81,5,0.191,bundle,2024-01-22\r\n39091,1909,APAC,home,online,76.10,3,0.231,none,2024-02-05\r\n39092,1378,APAC,sports,online,43.36,1,0.006,none,2024-10-24\r\n39093,1217,EMEA,grocery,retail,47.10,7,0.126,none,2024-10-06\r\n39094,1957,AMER,sports,retail,25.72,6,0.035,coupon,2024-01-16\r\n39095,1309,EMEA,grocery,online,44.71,3,0.195,bundle,2024-03-16\r\n39096,1510,EMEA,grocery,online,169.46,7,0.127,none,2024-02-28\r\n39097,1666,LATAM,electronics,partner,47.47,7,0.169,bundle,2024-05-13\r\n39098,2244,LATAM,grocery,mobile,75.70,4,0.218,none,2024-08-21\r\n39099,1010,EMEA,grocery,retail,67.13,8,0.191,none,2024-12-16\r\n39100,1022,APAC,sports,online,95.40,7,0.200,bundle,2024-01-09\r\n39101,2221,LATAM,electronics,retail,63.32,5,0.044,loyalty,2024-06-13\r\n39102,1698,EMEA,sports,online,23.82,4,0.053,none,2024-11-13\r\n39103,1617,AMER,grocery,retail,58.70,6,0.123,coupon,2024-02-15\r\n39104,2020,AMER,toys,mobile,96.32,7,0.174,none,2024-08-03\r\n39105,1215,LATAM,toys,mobile,34.00,2,0.120,loyalty,2024-04-18\r\n39106,2410,EMEA,electronics,online,30.89,2,0.160,none,2024-09-15\r\n39107,2070,APAC,toys,retail,36.63,5,0.066,loyalty,2024-11-03\r\n39108,1132,EMEA,fashion,online,33.99,2,0.034,none,2024-09-05\r\n39109,1035,EMEA,grocery,mobile,92.28,7,0.217,none,2024-09-11\r\n39110,2107,APAC,home,retail,88.51,1,0.175,none,2024-02-03\r\n39111,1895,AMER,sports,retail,29.80,4,0.241,coupon,2024-10-10\r\n39112,1954,APAC,grocery,retail,44.98,5,0.113,coupon,2024-03-07\r\n39113,1431,APAC,toys,online,26.17,5,0.014,none,2024-10-21\r\n39114,1868,AMER,electronics,mobile,52.47,8,0.031,
none,2024-09-10\r\n39115,1610,LATAM,fashion,partner,232.89,3,0.209,none,2024-01-10\r\n39116,1125,LATAM,electronics,retail,84.74,5,0.218,none,2024-03-10\r\n39117,2454,LATAM,sports,partner,31.30,2,0.201,none,2024-02-22\r\n39118,2392,EMEA,home,online,70.95,5,0.247,coupon,2024-08-19\r\n39119,2258,AMER,fashion,retail,81.74,4,0.181,bundle,2024-06-26\r\n39120,1919,EMEA,electronics,mobile,97.71,7,0.082,coupon,2024-11-13\r\n39121,2001,EMEA,electronics,online,48.67,3,0.041,none,2024-03-01\r\n39122,1919,EMEA,grocery,retail,99.42,2,0.188,none,2024-04-13\r\n39123,1911,LATAM,grocery,mobile,86.10,5,0.030,none,2024-02-10\r\n39124,1266,AMER,grocery,retail,51.25,8,0.114,none,2024-03-20\r\n39125,2120,AMER,sports,online,37.84,8,0.148,none,2024-10-03\r\n39126,1480,APAC,grocery,retail,89.70,1,0.059,coupon,2024-11-05\r\n39127,1691,LATAM,grocery,mobile,58.30,1,0.200,coupon,2024-11-18\r\n39128,1508,LATAM,home,retail,35.73,2,0.018,none,2024-12-06\r\n39129,1368,EMEA,electronics,online,40.95,8,0.135,bundle,2024-01-18\r\n39130,2032,AMER,sports,online,69.93,6,0.222,bundle,2024-02-13\r\n39131,2471,APAC,sports,retail,35.89,8,0.192,bundle,2024-12-11\r\n39132,2462,EMEA,grocery,online,43.98,8,0.181,coupon,2024-08-22\r\n39133,2364,APAC,grocery,online,76.84,2,0.148,none,2024-03-03\r\n39134,2149,EMEA,sports,online,53.57,1,0.127,coupon,2024-09-22\r\n39135,1424,APAC,fashion,retail,33.76,1,0.186,coupon,2024-08-17\r\n39136,1100,AMER,home,retail,28.72,7,0.195,none,2024-10-19\r\n39137,2101,APAC,fashion,online,86.04,3,0.212,none,2024-07-25\r\n39138,1428,APAC,sports,online,23.94,1,0.241,bundle,2024-12-20\r\n39139,1023,APAC,grocery,online,137.74,5,0.190,none,2024-05-23\r\n39140,1995,LATAM,electronics,online,53.95,1,0.141,coupon,2024-08-19\r\n39141,2128,EMEA,sports,retail,21.81,3,0.248,bundle,2024-07-23\r\n39142,2364,APAC,home,mobile,33.72,3,0.131,coupon,2024-01-16\r\n39143,1297,AMER,grocery,online,35.50,3,0.023,none,2024-10-03\r\n39144,1945,AMER,grocery,retail,95.23,3,0.119,none,2024-01-07\r\n39145,1034,EMEA,gro
cery,online,95.81,5,0.138,none,2024-10-23\r\n39146,1135,APAC,electronics,retail,52.26,7,0.032,none,2024-05-06\r\n39147,2325,LATAM,toys,retail,43.64,8,0.169,none,2024-07-25\r\n39148,1351,APAC,home,mobile,74.83,2,0.090,none,2024-02-28\r\n39149,2081,APAC,sports,retail,53.58,4,0.219,bundle,2024-09-21\r\n39150,1531,EMEA,sports,online,82.59,8,0.213,none,2024-05-02\r\n39151,1100,AMER,electronics,mobile,27.61,8,0.087,coupon,2024-08-22\r\n39152,2280,EMEA,grocery,online,29.39,2,0.159,none,2024-11-15\r\n39153,1752,APAC,fashion,online,43.77,5,0.229,none,2024-12-20\r\n39154,1892,LATAM,fashion,retail,55.41,6,0.110,none,2024-05-23\r\n39155,2189,LATAM,grocery,mobile,60.11,2,0.031,coupon,2024-12-12\r\n39156,1862,LATAM,fashion,retail,73.03,2,0.168,none,2024-10-08\r\n39157,1315,AMER,fashion,online,34.39,2,0.109,none,2024-08-17\r\n39158,1579,AMER,grocery,retail,62.34,3,0.042,none,2024-07-18\r\n39159,2026,LATAM,grocery,online,54.95,8,0.155,coupon,2024-04-08\r\n39160,1563,EMEA,grocery,online,32.41,6,0.161,none,2024-03-01\r\n39161,2051,APAC,electronics,online,41.65,8,0.051,coupon,2024-10-23\r\n39162,1294,APAC,grocery,online,47.68,3,0.096,none,2024-04-17\r\n39163,1369,AMER,fashion,retail,94.23,8,0.015,coupon,2024-11-25\r\n39164,1846,APAC,grocery,online,43.97,6,0.240,none,2024-03-24\r\n39165,2479,EMEA,toys,online,53.58,1,0.113,bundle,2024-08-10\r\n39166,1588,LATAM,fashion,online,40.64,1,0.070,coupon,2024-05-20\r\n39167,2051,APAC,grocery,retail,54.79,8,0.156,none,2024-11-14\r\n39168,1725,APAC,home,online,62.28,8,0.122,none,2024-12-21\r\n39169,1978,AMER,home,retail,150.76,1,0.093,coupon,2024-12-20\r\n39170,2413,AMER,fashion,online,48.39,4,0.154,none,2024-05-16\r\n39171,1533,APAC,grocery,mobile,41.13,7,0.133,none,2024-07-06\r\n39172,1431,APAC,toys,mobile,32.98,3,0.215,loyalty,2024-07-12\r\n39173,1145,AMER,home,online,27.44,7,0.164,none,2024-06-26\r\n39174,1798,AMER,grocery,retail,54.34,5,0.113,none,2024-08-21\r\n39175,2344,LATAM,fashion,online,51.51,2,0.097,bundle,2024-11-13\r\n39176,2043,EMEA
,toys,online,53.12,2,0.249,loyalty,2024-06-23\r\n39177,1584,EMEA,grocery,online,38.09,5,0.213,none,2024-05-06\r\n39178,2100,APAC,grocery,retail,54.00,7,0.124,bundle,2024-09-05\r\n39179,1608,AMER,home,mobile,80.80,2,0.236,none,2024-05-10\r\n39180,1442,EMEA,grocery,online,47.13,7,0.169,none,2024-09-22\r\n39181,1287,AMER,home,online,55.72,5,0.175,none,2024-11-14\r\n39182,1245,APAC,electronics,mobile,39.98,3,0.046,none,2024-09-16\r\n39183,1798,AMER,home,online,39.78,2,0.217,coupon,2024-12-15\r\n39184,1724,LATAM,toys,mobile,52.86,6,0.222,bundle,2024-05-22\r\n39185,1966,APAC,home,online,107.73,8,0.091,none,2024-03-13\r\n39186,1000,APAC,electronics,online,120.41,8,0.147,bundle,2024-06-17\r\n39187,1111,APAC,electronics,online,29.85,2,0.119,none,2024-10-25\r\n39188,1937,APAC,toys,retail,65.86,3,0.005,none,2024-03-28\r\n39189,1965,LATAM,sports,retail,70.73,7,0.113,none,2024-10-03\r\n39190,2068,LATAM,home,mobile,123.00,1,0.038,loyalty,2024-02-10\r\n39191,1159,LATAM,grocery,online,39.40,8,0.197,coupon,2024-07-12\r\n39192,2127,LATAM,fashion,retail,35.51,7,0.171,loyalty,2024-05-10\r\n39193,2474,LATAM,grocery,online,46.76,7,0.232,coupon,2024-11-04\r\n39194,1143,LATAM,electronics,online,62.60,5,0.192,none,2024-05-17\r\n39195,1267,EMEA,electronics,online,72.44,7,0.190,none,2024-03-17\r\n39196,1544,LATAM,electronics,online,68.24,5,0.180,loyalty,2024-09-22\r\n39197,1401,LATAM,sports,retail,23.26,2,0.222,coupon,2024-05-04\r\n39198,2464,LATAM,grocery,online,80.29,6,0.234,loyalty,2024-03-04\r\n39199,2342,AMER,home,online,156.33,4,0.111,coupon,2024-06-26\r\n39200,1026,APAC,electronics,online,73.76,1,0.143,none,2024-10-12\r\n39201,1071,AMER,toys,online,39.47,4,0.062,bundle,2024-07-21\r\n39202,1199,APAC,grocery,retail,72.15,8,0.111,none,2024-02-05\r\n39203,1460,LATAM,home,retail,46.89,3,0.183,none,2024-09-11\r\n39204,1528,EMEA,grocery,retail,76.75,3,0.034,none,2024-03-25\r\n39205,2126,APAC,electronics,partner,160.75,7,0.204,none,2024-04-16\r\n39206,1563,EMEA,home,retail,60.67,2,0.232,none,2
024-01-15\r\n39207,1092,AMER,fashion,online,32.10,1,0.172,none,2024-12-27\r\n39208,1347,APAC,sports,retail,109.85,1,0.160,none,2024-03-25\r\n39209,1928,AMER,grocery,online,20.85,5,0.006,coupon,2024-02-10\r\n39210,1962,APAC,home,retail,44.27,2,0.035,loyalty,2024-08-23\r\n39211,2452,LATAM,electronics,online,79.39,8,0.041,coupon,2024-05-20\r\n39212,1458,APAC,grocery,retail,37.20,2,0.179,none,2024-06-15\r\n39213,1748,APAC,sports,online,74.41,4,0.208,none,2024-01-23\r\n39214,1069,APAC,fashion,retail,32.55,4,0.101,coupon,2024-07-07\r\n39215,2297,EMEA,grocery,retail,50.92,1,0.121,loyalty,2024-10-27\r\n39216,1827,EMEA,toys,retail,20.58,1,0.224,none,2024-10-12\r\n39217,2466,APAC,electronics,online,103.07,5,0.061,none,2024-10-17\r\n39218,1744,EMEA,electronics,retail,43.93,2,0.113,none,2024-07-14\r\n39219,1272,AMER,electronics,online,84.74,2,0.016,coupon,2024-11-08\r\n39220,2450,EMEA,grocery,retail,60.93,2,0.209,coupon,2024-04-18\r\n39221,1277,AMER,home,online,77.47,7,0.051,coupon,2024-06-02\r\n39222,2460,AMER,fashion,retail,50.72,3,0.032,coupon,2024-08-06\r\n39223,2185,EMEA,electronics,online,67.79,4,0.035,none,2024-10-28\r\n39224,1084,AMER,sports,retail,184.53,1,0.039,loyalty,2024-12-22\r\n39225,2258,AMER,electronics,mobile,31.56,1,0.056,loyalty,2024-04-02\r\n39226,2028,APAC,electronics,online,62.50,8,0.193,coupon,2024-02-06\r\n39227,2134,AMER,toys,online,23.31,4,0.150,none,2024-08-02\r\n39228,1459,LATAM,electronics,retail,135.59,8,0.014,bundle,2024-04-08\r\n39229,1791,LATAM,toys,online,80.53,1,0.110,loyalty,2024-07-13\r\n39230,2097,AMER,grocery,online,47.52,1,0.177,none,2024-02-19\r\n39231,2154,APAC,toys,online,45.45,7,0.211,none,2024-05-03\r\n39232,2210,APAC,electronics,retail,68.50,3,0.074,bundle,2024-01-14\r\n39233,1198,AMER,grocery,online,60.50,6,0.191,none,2024-09-21\r\n39234,2012,APAC,fashion,online,70.40,7,0.083,none,2024-11-28\r\n39235,2201,AMER,grocery,online,23.41,5,0.242,loyalty,2024-11-22\r\n39236,1296,LATAM,grocery,mobile,76.26,7,0.070,loyalty,2024-06-10\r\n392
37,2093,LATAM,grocery,online,77.32,5,0.099,coupon,2024-05-23\r\n39238,2264,LATAM,toys,online,104.67,6,0.223,coupon,2024-11-12\r\n39239,1574,AMER,toys,retail,32.51,2,0.213,none,2024-07-20\r\n39240,1297,AMER,toys,mobile,133.69,4,0.141,none,2024-11-18\r\n39241,1927,EMEA,electronics,mobile,138.28,6,0.155,coupon,2024-05-21\r\n39242,1298,LATAM,sports,online,35.30,3,0.071,none,2024-03-18\r\n39243,2493,APAC,sports,retail,110.68,3,0.241,none,2024-08-13\r\n39244,1953,EMEA,electronics,online,45.75,7,0.042,none,2024-12-07\r\n39245,1624,AMER,grocery,partner,57.41,1,0.036,none,2024-08-23\r\n39246,1080,LATAM,fashion,online,37.06,7,0.060,none,2024-02-25\r\n39247,1240,EMEA,home,retail,161.82,1,0.214,none,2024-08-27\r\n39248,2225,EMEA,sports,online,53.13,2,0.142,none,2024-01-17\r\n39249,1970,LATAM,home,mobile,87.22,7,0.024,coupon,2024-03-16\r\n39250,1367,AMER,grocery,online,47.79,7,0.224,coupon,2024-07-23\r\n39251,1115,AMER,electronics,online,48.49,2,0.172,coupon,2024-06-20\r\n39252,1441,LATAM,grocery,partner,54.93,2,0.183,coupon,2024-05-24\r\n39253,2080,LATAM,grocery,online,26.52,5,0.211,coupon,2024-01-14\r\n39254,2478,AMER,home,online,55.11,6,0.119,none,2024-10-09\r\n39255,1287,AMER,fashion,retail,19.18,5,0.246,none,2024-12-23\r\n39256,1529,LATAM,sports,retail,100.18,4,0.171,none,2024-01-28\r\n39257,1254,APAC,home,mobile,32.05,7,0.230,none,2024-05-24\r\n39258,2051,APAC,toys,online,80.49,3,0.136,none,2024-03-02\r\n39259,2380,AMER,sports,online,31.29,4,0.153,bundle,2024-08-28\r\n39260,1010,EMEA,sports,retail,52.80,8,0.001,coupon,2024-07-02\r\n39261,2190,LATAM,toys,online,21.90,3,0.167,none,2024-02-17\r\n39262,1110,LATAM,grocery,retail,60.62,1,0.126,none,2024-01-15\r\n39263,2381,AMER,grocery,online,83.32,3,0.201,none,2024-12-04\r\n39264,1315,AMER,electronics,mobile,45.22,6,0.015,none,2024-12-16\r\n39265,2074,AMER,home,retail,56.37,1,0.222,bundle,2024-05-07\r\n39266,2126,APAC,sports,retail,62.28,8,0.089,coupon,2024-04-04\r\n39267,2429,EMEA,grocery,online,72.36,5,0.027,none,2024-10-07\r
\n39268,1734,AMER,grocery,mobile,26.82,6,0.218,none,2024-01-01\r\n39269,1423,EMEA,sports,online,114.39,6,0.226,bundle,2024-02-08\r\n39270,1602,EMEA,fashion,retail,56.52,5,0.199,none,2024-12-20\r\n39271,1580,AMER,grocery,retail,29.08,3,0.017,none,2024-04-06\r\n39272,1064,AMER,fashion,online,36.54,2,0.230,none,2024-02-17\r\n39273,1920,LATAM,electronics,retail,40.44,5,0.050,none,2024-01-13\r\n39274,2414,EMEA,toys,partner,69.53,5,0.020,none,2024-10-20\r\n39275,1219,LATAM,fashion,online,71.00,2,0.190,none,2024-08-17\r\n39276,1100,AMER,electronics,retail,161.81,6,0.030,loyalty,2024-11-09\r\n39277,2034,LATAM,electronics,retail,37.95,3,0.250,none,2024-10-13\r\n39278,1716,LATAM,fashion,online,71.10,2,0.138,none,2024-03-04\r\n39279,1339,EMEA,fashion,partner,82.20,2,0.093,coupon,2024-12-27\r\n39280,1918,EMEA,electronics,retail,64.72,2,0.014,none,2024-07-02\r\n39281,1898,EMEA,home,online,50.89,5,0.134,none,2024-07-07\r\n39282,2355,EMEA,grocery,online,45.57,6,0.036,none,2024-07-27\r\n39283,2329,LATAM,toys,online,75.85,8,0.047,coupon,2024-11-19\r\n39284,2149,EMEA,electronics,online,63.38,2,0.037,none,2024-06-28\r\n39285,1158,LATAM,toys,online,106.84,1,0.141,none,2024-08-04\r\n39286,1252,APAC,grocery,retail,49.13,5,0.155,none,2024-07-21\r\n39287,1308,EMEA,home,retail,110.66,6,0.043,none,2024-12-06\r\n39288,1793,LATAM,grocery,retail,36.91,6,0.231,none,2024-04-14\r\n39289,2271,LATAM,electronics,online,84.58,4,0.102,bundle,2024-06-03\r\n39290,2076,AMER,toys,mobile,123.22,5,0.145,bundle,2024-10-04\r\n39291,1869,AMER,home,mobile,58.97,4,0.086,coupon,2024-04-13\r\n39292,1845,AMER,electronics,online,24.83,2,0.166,none,2024-10-05\r\n39293,2075,LATAM,fashion,retail,20.35,7,0.100,none,2024-07-27\r\n39294,1297,AMER,sports,online,47.32,4,0.089,coupon,2024-01-07\r\n39295,2048,LATAM,home,retail,36.26,4,0.104,none,2024-02-28\r\n39296,1988,AMER,grocery,mobile,26.23,4,0.053,none,2024-06-03\r\n39297,1556,AMER,toys,retail,67.59,5,0.142,coupon,2024-02-11\r\n39298,1986,LATAM,electronics,online,80.66,2
,0.180,coupon,2024-10-26\r\n39299,1620,LATAM,electronics,retail,170.98,7,0.069,coupon,2024-12-12\r\n39300,2302,APAC,electronics,online,71.50,1,0.198,bundle,2024-07-26\r\n39301,2449,LATAM,grocery,online,56.22,3,0.148,none,2024-07-24\r\n39302,1324,LATAM,home,mobile,41.06,8,0.054,bundle,2024-05-04\r\n39303,2392,EMEA,electronics,mobile,56.83,8,0.062,loyalty,2024-02-16\r\n39304,1320,EMEA,sports,online,99.46,8,0.142,coupon,2024-12-24\r\n39305,1318,LATAM,fashion,online,34.08,6,0.001,loyalty,2024-05-01\r\n39306,1547,AMER,grocery,online,49.46,3,0.048,bundle,2024-12-13\r\n39307,1902,AMER,electronics,partner,62.37,3,0.092,loyalty,2024-05-04\r\n39308,1043,LATAM,grocery,online,43.15,5,0.109,none,2024-08-07\r\n39309,2461,LATAM,sports,online,66.46,2,0.221,coupon,2024-05-17\r\n39310,2217,LATAM,fashion,mobile,57.53,3,0.206,loyalty,2024-10-03\r\n39311,1143,LATAM,sports,retail,26.46,7,0.049,loyalty,2024-05-23\r\n39312,2221,LATAM,home,retail,65.05,8,0.134,coupon,2024-03-22\r\n39313,1252,APAC,fashion,mobile,153.09,4,0.165,bundle,2024-05-27\r\n39314,2035,LATAM,fashion,retail,49.25,8,0.021,none,2024-08-14\r\n39315,1149,LATAM,grocery,online,106.71,3,0.066,none,2024-12-21\r\n39316,1647,LATAM,fashion,online,81.61,7,0.164,none,2024-05-20\r\n39317,2141,AMER,home,online,65.16,6,0.021,none,2024-07-24\r\n39318,2314,EMEA,electronics,mobile,120.25,1,0.009,none,2024-06-21\r\n39319,1523,LATAM,grocery,retail,42.71,3,0.020,none,2024-10-20\r\n39320,2086,APAC,home,online,41.09,6,0.010,coupon,2024-06-26\r\n39321,1571,EMEA,grocery,retail,87.94,3,0.145,none,2024-07-15\r\n39322,1396,EMEA,toys,partner,30.32,5,0.178,none,2024-05-17\r\n39323,2215,LATAM,fashion,mobile,46.19,8,0.148,none,2024-08-03\r\n39324,1336,APAC,sports,partner,88.21,3,0.207,none,2024-11-23\r\n39325,2312,APAC,grocery,online,61.80,1,0.091,coupon,2024-04-13\r\n39326,1861,AMER,electronics,online,36.48,8,0.052,none,2024-03-11\r\n39327,2214,AMER,fashion,retail,134.10,2,0.150,none,2024-04-04\r\n39328,1054,EMEA,sports,retail,43.32,1,0.167,none,2024-
12-20\r\n39329,1609,LATAM,electronics,mobile,122.00,1,0.159,none,2024-08-13\r\n39330,1764,LATAM,grocery,retail,79.10,3,0.249,none,2024-06-21\r\n39331,1678,LATAM,grocery,mobile,128.11,8,0.177,none,2024-08-22\r\n39332,1144,APAC,grocery,mobile,38.97,3,0.192,loyalty,2024-11-26\r\n39333,1716,LATAM,fashion,mobile,75.42,5,0.008,none,2024-06-12\r\n39334,1265,APAC,home,online,91.06,7,0.021,bundle,2024-12-08\r\n39335,1216,APAC,grocery,online,51.23,6,0.163,coupon,2024-09-09\r\n39336,1156,APAC,grocery,retail,33.33,6,0.194,none,2024-02-25\r\n39337,1200,EMEA,home,online,51.28,6,0.089,bundle,2024-11-08\r\n39338,2120,AMER,fashion,mobile,96.60,7,0.225,none,2024-06-03\r\n39339,2163,EMEA,toys,retail,67.62,5,0.227,none,2024-02-12\r\n39340,1895,AMER,sports,retail,48.55,1,0.081,coupon,2024-12-09\r\n39341,2365,LATAM,fashion,online,23.94,3,0.206,bundle,2024-07-06\r\n39342,1395,APAC,sports,mobile,62.19,2,0.185,loyalty,2024-06-12\r\n39343,2272,EMEA,fashion,retail,69.93,2,0.042,bundle,2024-04-16\r\n39344,2297,EMEA,electronics,retail,24.01,3,0.020,none,2024-03-04\r\n39345,2181,AMER,fashion,online,55.90,2,0.214,bundle,2024-07-11\r\n39346,1259,EMEA,fashion,online,50.60,6,0.214,none,2024-08-24\r\n39347,1389,LATAM,home,online,97.92,2,0.053,loyalty,2024-08-27\r\n39348,1669,AMER,home,mobile,72.72,7,0.010,none,2024-11-18\r\n39349,1425,EMEA,home,retail,43.11,8,0.053,none,2024-07-14\r\n39350,1268,EMEA,sports,retail,49.80,2,0.049,coupon,2024-01-16\r\n39351,2270,APAC,home,online,110.22,5,0.221,none,2024-05-10\r\n39352,1240,EMEA,grocery,online,137.99,6,0.043,none,2024-01-27\r\n39353,2306,AMER,home,mobile,116.64,5,0.032,none,2024-01-26\r\n39354,2180,AMER,electronics,online,34.50,7,0.079,none,2024-03-01\r\n39355,2283,AMER,electronics,retail,51.42,2,0.140,none,2024-06-15\r\n39356,1399,AMER,sports,online,46.65,3,0.098,bundle,2024-01-13\r\n39357,1679,APAC,home,retail,56.69,1,0.170,none,2024-02-08\r\n39358,1504,AMER,home,mobile,53.81,6,0.223,none,2024-10-18\r\n39359,1236,AMER,home,retail,89.65,6,0.082,coupon,20
24-05-09\r\n39360,1155,EMEA,grocery,online,56.19,7,0.140,coupon,2024-11-21\r\n39361,2034,LATAM,grocery,online,107.23,4,0.138,bundle,2024-09-17\r\n39362,1038,APAC,toys,online,80.92,2,0.095,none,2024-06-22\r\n39363,2215,LATAM,fashion,online,43.71,3,0.096,none,2024-08-17\r\n39364,1202,APAC,home,online,60.58,5,0.116,bundle,2024-06-03\r\n39365,2023,LATAM,grocery,online,56.62,6,0.089,bundle,2024-07-22\r\n39366,1125,LATAM,fashion,retail,22.06,1,0.118,none,2024-03-19\r\n39367,2094,AMER,home,online,127.91,2,0.017,loyalty,2024-01-15\r\n39368,2338,AMER,electronics,online,52.21,2,0.029,none,2024-02-04\r\n39369,1923,LATAM,grocery,online,86.55,5,0.203,none,2024-05-26\r\n39370,2058,LATAM,grocery,partner,77.59,5,0.059,bundle,2024-12-15\r\n39371,1626,EMEA,sports,retail,144.92,8,0.140,none,2024-08-20\r\n39372,1157,LATAM,fashion,partner,98.13,8,0.067,loyalty,2024-01-27\r\n39373,1963,AMER,grocery,online,128.30,3,0.137,coupon,2024-02-25\r\n39374,1603,EMEA,toys,retail,34.00,4,0.141,bundle,2024-02-13\r\n39375,2166,AMER,fashion,online,17.95,2,0.017,none,2024-02-01\r\n39376,1533,APAC,sports,retail,56.82,2,0.166,coupon,2024-02-05\r\n39377,1346,AMER,fashion,mobile,149.08,3,0.103,none,2024-08-11\r\n39378,2104,EMEA,toys,mobile,49.94,7,0.018,bundle,2024-12-07\r\n39379,1177,LATAM,toys,mobile,52.29,7,0.086,coupon,2024-06-17\r\n39380,1704,AMER,toys,online,43.23,2,0.076,bundle,2024-01-09\r\n39381,1592,LATAM,grocery,retail,37.74,3,0.216,none,2024-05-12\r\n39382,1695,LATAM,home,retail,27.46,8,0.103,none,2024-09-19\r\n39383,1060,LATAM,fashion,online,61.08,2,0.077,none,2024-09-08\r\n39384,1919,EMEA,grocery,online,49.53,5,0.018,bundle,2024-09-19\r\n39385,1474,LATAM,electronics,online,35.02,3,0.050,none,2024-09-10\r\n39386,1051,EMEA,grocery,partner,64.16,8,0.072,bundle,2024-05-16\r\n39387,2209,AMER,grocery,online,32.77,3,0.099,none,2024-02-17\r\n39388,1074,LATAM,grocery,retail,63.60,2,0.040,none,2024-08-09\r\n39389,2339,AMER,grocery,retail,25.26,6,0.052,none,2024-06-13\r\n39390,2367,AMER,grocery,retail,35
.80,2,0.157,none,2024-04-14\r\n39391,2348,EMEA,home,online,101.02,1,0.021,none,2024-04-18\r\n39392,2415,AMER,fashion,retail,117.22,6,0.192,none,2024-06-26\r\n39393,2207,APAC,grocery,online,51.23,6,0.053,coupon,2024-05-18\r\n39394,1417,APAC,fashion,online,24.09,7,0.233,bundle,2024-03-24\r\n39395,1375,AMER,toys,mobile,45.52,1,0.209,bundle,2024-11-11\r\n39396,1306,LATAM,home,mobile,34.95,7,0.236,bundle,2024-09-27\r\n39397,2458,EMEA,home,online,50.81,2,0.130,coupon,2024-12-03\r\n39398,2406,EMEA,toys,partner,60.58,8,0.053,none,2024-03-05\r\n39399,2097,AMER,grocery,online,69.38,2,0.047,coupon,2024-10-13\r\n39400,1258,EMEA,electronics,retail,31.35,3,0.232,none,2024-10-25\r\n39401,2422,APAC,home,mobile,46.07,5,0.063,coupon,2024-06-25\r\n39402,1373,LATAM,home,online,54.00,5,0.116,none,2024-02-01\r\n39403,1780,APAC,fashion,mobile,48.58,8,0.040,bundle,2024-10-12\r\n39404,1631,APAC,electronics,online,95.15,6,0.084,coupon,2024-07-17\r\n39405,1003,APAC,fashion,online,36.69,4,0.013,coupon,2024-01-23\r\n39406,1177,LATAM,grocery,retail,24.60,7,0.093,loyalty,2024-05-15\r\n39407,2164,AMER,grocery,retail,33.54,6,0.185,none,2024-06-19\r\n39408,1490,AMER,fashion,retail,32.42,8,0.201,none,2024-10-02\r\n39409,1199,APAC,home,retail,83.76,3,0.000,coupon,2024-11-15\r\n39410,2290,LATAM,grocery,retail,75.27,4,0.110,none,2024-02-25\r\n39411,1225,APAC,sports,online,53.52,1,0.241,coupon,2024-07-18\r\n39412,2050,APAC,electronics,mobile,29.17,6,0.123,none,2024-04-20\r\n39413,1033,APAC,electronics,retail,107.42,3,0.161,none,2024-01-21\r\n39414,1816,EMEA,fashion,mobile,94.07,8,0.011,none,2024-02-16\r\n39415,1169,LATAM,electronics,online,32.46,7,0.147,coupon,2024-02-24\r\n39416,2453,AMER,sports,online,73.84,1,0.087,none,2024-02-07\r\n39417,1528,EMEA,sports,retail,100.73,2,0.137,bundle,2024-07-08\r\n39418,2074,AMER,grocery,retail,94.43,7,0.008,none,2024-08-09\r\n39419,1899,APAC,fashion,online,59.10,4,0.204,bundle,2024-01-06\r\n39420,1088,LATAM,sports,online,115.23,7,0.044,none,2024-09-16\r\n39421,1355,E
MEA,fashion,online,71.34,2,0.049,coupon,2024-12-22\r\n39422,2390,AMER,toys,retail,124.81,8,0.007,none,2024-08-16\r\n39423,1751,AMER,home,mobile,30.97,7,0.233,bundle,2024-10-06\r\n39424,1694,APAC,grocery,retail,41.92,6,0.212,coupon,2024-07-02\r\n39425,2414,EMEA,sports,online,88.70,1,0.146,bundle,2024-10-07\r\n39426,1556,AMER,toys,online,43.79,8,0.100,none,2024-09-21\r\n39427,2035,LATAM,toys,retail,59.18,3,0.064,none,2024-07-16\r\n39428,1950,LATAM,home,retail,34.41,7,0.086,none,2024-10-28\r\n39429,1565,AMER,home,online,61.72,4,0.060,none,2024-07-12\r\n39430,1597,APAC,home,retail,26.33,5,0.141,none,2024-09-07\r\n39431,1986,LATAM,electronics,retail,30.48,1,0.189,none,2024-08-15\r\n39432,1829,EMEA,home,retail,51.86,2,0.172,loyalty,2024-01-07\r\n39433,2276,AMER,toys,retail,42.24,1,0.072,none,2024-01-22\r\n39434,1170,AMER,home,retail,57.15,8,0.134,loyalty,2024-02-28\r\n39435,1931,APAC,sports,partner,67.12,5,0.220,none,2024-10-07\r\n39436,1469,EMEA,fashion,online,63.02,8,0.245,none,2024-12-13\r\n39437,1117,LATAM,electronics,retail,31.97,7,0.075,bundle,2024-06-14\r\n39438,1618,EMEA,electronics,online,33.87,4,0.099,none,2024-07-15\r\n39439,1937,APAC,electronics,mobile,63.03,6,0.182,bundle,2024-01-09\r\n39440,1096,EMEA,electronics,retail,77.22,4,0.100,none,2024-12-22\r\n39441,1344,EMEA,electronics,retail,36.61,5,0.087,coupon,2024-03-04\r\n39442,2476,APAC,home,online,45.59,2,0.048,none,2024-10-28\r\n39443,1014,EMEA,sports,mobile,105.08,6,0.096,bundle,2024-08-11\r\n39444,2373,LATAM,home,retail,88.84,8,0.074,bundle,2024-05-09\r\n39445,1149,LATAM,electronics,retail,30.08,3,0.161,loyalty,2024-10-06\r\n39446,1321,EMEA,fashion,online,32.64,5,0.218,none,2024-02-13\r\n39447,1683,AMER,electronics,retail,64.65,6,0.048,none,2024-04-17\r\n39448,2109,EMEA,grocery,online,94.29,3,0.137,coupon,2024-04-21\r\n39449,1120,LATAM,home,mobile,48.04,5,0.110,none,2024-06-03\r\n39450,1900,APAC,home,retail,105.12,4,0.091,none,2024-04-09\r\n39451,1573,AMER,fashion,partner,81.73,5,0.074,coupon,2024-11-16\r
\n39452,1708,LATAM,grocery,online,50.77,3,0.189,loyalty,2024-06-06\r\n39453,1033,APAC,home,online,38.71,6,0.091,none,2024-02-24\r\n39454,1329,APAC,electronics,retail,63.60,6,0.176,none,2024-11-25\r\n39455,2387,EMEA,sports,online,60.31,5,0.188,loyalty,2024-01-25\r\n39456,2269,EMEA,sports,online,68.99,2,0.194,coupon,2024-11-27\r\n39457,1103,EMEA,sports,mobile,100.65,5,0.195,none,2024-08-16\r\n39458,1035,EMEA,electronics,online,55.83,4,0.121,none,2024-03-19\r\n39459,1435,AMER,grocery,online,53.11,3,0.004,bundle,2024-11-25\r\n39460,1967,EMEA,sports,retail,86.25,3,0.162,bundle,2024-07-12\r\n39461,1556,AMER,grocery,mobile,72.36,6,0.091,bundle,2024-11-13\r\n39462,2109,EMEA,home,retail,60.85,4,0.183,none,2024-01-06\r\n39463,1135,APAC,home,retail,90.55,4,0.243,coupon,2024-06-16\r\n39464,1073,AMER,electronics,online,43.66,4,0.206,bundle,2024-03-07\r\n39465,1031,AMER,fashion,online,201.73,4,0.143,coupon,2024-05-18\r\n39466,1862,LATAM,home,retail,60.36,7,0.214,none,2024-09-11\r\n39467,2470,EMEA,grocery,retail,30.22,1,0.213,none,2024-11-23\r\n39468,1212,LATAM,home,online,61.00,8,0.236,bundle,2024-09-22\r\n39469,1885,EMEA,fashion,online,56.42,2,0.093,coupon,2024-10-21\r\n39470,1240,EMEA,home,mobile,175.37,3,0.150,none,2024-06-11\r\n39471,1245,APAC,grocery,retail,96.84,4,0.026,none,2024-11-04\r\n39472,1331,AMER,grocery,mobile,29.84,3,0.129,none,2024-06-10\r\n39473,1877,LATAM,home,retail,68.38,4,0.128,coupon,2024-12-25\r\n39474,1811,APAC,grocery,online,54.19,5,0.086,loyalty,2024-02-26\r\n39475,1286,EMEA,home,partner,42.21,3,0.138,coupon,2024-05-17\r\n39476,2400,EMEA,home,retail,118.09,3,0.054,none,2024-02-26\r\n39477,1964,EMEA,toys,partner,36.05,5,0.154,none,2024-03-05\r\n39478,2429,EMEA,grocery,retail,99.14,2,0.116,none,2024-06-16\r\n39479,1995,LATAM,sports,online,34.43,5,0.040,coupon,2024-05-25\r\n39480,1976,AMER,fashion,online,80.75,4,0.091,bundle,2024-05-16\r\n39481,1878,EMEA,electronics,retail,61.56,5,0.160,none,2024-04-10\r\n39482,1700,EMEA,electronics,retail,91.81,1,0.166,co
upon,2024-05-03\r\n39483,2247,LATAM,fashion,online,48.78,5,0.063,none,2024-10-18\r\n39484,2024,AMER,sports,online,58.41,2,0.007,bundle,2024-02-17\r\n39485,2060,LATAM,electronics,online,50.00,5,0.003,none,2024-03-19\r\n39486,2321,APAC,home,online,41.20,2,0.096,none,2024-12-05\r\n39487,1780,APAC,grocery,online,51.58,1,0.233,coupon,2024-03-14\r\n39488,1559,EMEA,home,partner,73.45,7,0.092,bundle,2024-08-21\r\n39489,1891,APAC,fashion,retail,52.68,8,0.193,coupon,2024-01-10\r\n39490,1123,LATAM,grocery,retail,18.68,8,0.104,none,2024-02-07\r\n39491,2429,EMEA,home,online,78.28,3,0.144,none,2024-10-14\r\n39492,2087,LATAM,fashion,online,57.04,4,0.132,none,2024-06-02\r\n39493,1011,APAC,home,retail,84.03,1,0.179,none,2024-02-01\r\n39494,1901,AMER,electronics,online,83.66,8,0.076,none,2024-04-16\r\n39495,1840,LATAM,sports,partner,56.04,5,0.235,none,2024-01-12\r\n39496,1481,LATAM,home,retail,36.93,1,0.082,bundle,2024-10-04\r\n39497,2449,LATAM,grocery,online,32.61,1,0.001,bundle,2024-05-05\r\n39498,2162,EMEA,grocery,partner,42.84,2,0.205,none,2024-07-01\r\n39499,2088,EMEA,sports,retail,41.05,8,0.081,loyalty,2024-12-08\r\n39500,2209,AMER,toys,mobile,63.64,1,0.223,none,2024-04-10\r\n39501,1067,APAC,grocery,online,49.50,5,0.226,none,2024-05-11\r\n39502,1877,LATAM,grocery,mobile,36.19,4,0.050,loyalty,2024-01-01\r\n39503,1003,APAC,grocery,retail,33.50,4,0.064,none,2024-01-22\r\n39504,1257,APAC,electronics,retail,56.58,4,0.000,loyalty,2024-01-26\r\n39505,2005,APAC,grocery,retail,54.48,4,0.152,none,2024-05-19\r\n39506,2296,AMER,electronics,retail,93.78,6,0.117,none,2024-08-01\r\n39507,1340,LATAM,fashion,online,113.54,5,0.050,coupon,2024-07-22\r\n39508,1024,APAC,sports,retail,30.72,8,0.084,none,2024-03-09\r\n39509,2247,LATAM,fashion,mobile,30.38,7,0.126,coupon,2024-05-22\r\n39510,1902,AMER,grocery,retail,41.11,2,0.247,none,2024-06-19\r\n39511,2457,EMEA,home,online,26.74,1,0.241,none,2024-07-22\r\n39512,1150,LATAM,electronics,retail,79.54,3,0.235,coupon,2024-05-08\r\n39513,2285,APAC,electron
ics,online,89.83,2,0.184,none,2024-08-17\r\n39514,2456,APAC,electronics,retail,89.28,7,0.225,none,2024-06-12\r\n39515,2221,LATAM,fashion,online,49.93,6,0.221,none,2024-12-24\r\n39516,2268,EMEA,grocery,online,74.70,8,0.095,none,2024-06-22\r\n39517,2045,LATAM,toys,online,43.91,4,0.008,loyalty,2024-12-21\r\n39518,1128,LATAM,electronics,retail,42.17,2,0.193,none,2024-09-18\r\n39519,1568,AMER,home,retail,67.63,8,0.110,none,2024-11-08\r\n39520,2276,AMER,electronics,partner,25.94,2,0.081,bundle,2024-10-15\r\n39521,2013,APAC,electronics,mobile,42.14,4,0.147,loyalty,2024-06-11\r\n39522,1868,AMER,sports,mobile,51.59,1,0.031,none,2024-09-06\r\n39523,2017,EMEA,grocery,retail,51.00,1,0.133,none,2024-09-01\r\n39524,1466,AMER,home,mobile,144.81,7,0.203,loyalty,2024-12-23\r\n39525,1469,EMEA,toys,mobile,62.82,7,0.222,none,2024-12-13\r\n39526,1225,APAC,fashion,online,83.54,6,0.242,none,2024-01-18\r\n39527,2489,LATAM,fashion,mobile,33.25,8,0.070,coupon,2024-04-27\r\n39528,1731,AMER,toys,retail,57.61,4,0.240,bundle,2024-05-21\r\n39529,2436,LATAM,sports,online,20.43,7,0.014,coupon,2024-11-23\r\n39530,2319,AMER,sports,retail,28.79,2,0.249,none,2024-08-06\r\n39531,2129,APAC,toys,online,167.11,6,0.195,coupon,2024-12-15\r\n39532,2366,APAC,electronics,retail,124.35,8,0.023,coupon,2024-03-01\r\n39533,1513,APAC,grocery,retail,53.56,4,0.227,none,2024-12-28\r\n39534,1328,APAC,fashion,online,37.41,5,0.031,none,2024-11-25\r\n39535,1299,LATAM,fashion,retail,100.47,2,0.068,coupon,2024-04-16\r\n39536,1849,EMEA,fashion,online,47.30,6,0.133,bundle,2024-09-08\r\n39537,1922,EMEA,electronics,online,19.34,3,0.036,none,2024-08-08\r\n39538,1960,EMEA,grocery,retail,19.62,6,0.155,none,2024-04-20\r\n39539,2100,APAC,fashion,online,36.19,7,0.128,none,2024-02-14\r\n39540,2441,EMEA,grocery,mobile,105.86,8,0.187,none,2024-05-12\r\n39541,1554,AMER,sports,mobile,81.83,7,0.046,none,2024-05-24\r\n39542,2498,LATAM,fashion,retail,172.97,5,0.064,none,2024-10-07\r\n39543,1318,LATAM,grocery,online,19.40,3,0.161,none,2024-01-
22\r\n39544,1748,APAC,fashion,mobile,54.84,5,0.176,coupon,2024-06-22\r\n39545,1234,AMER,sports,retail,164.80,4,0.059,none,2024-09-20\r\n39546,1438,APAC,electronics,retail,30.67,3,0.082,none,2024-10-25\r\n39547,1355,EMEA,grocery,mobile,24.29,2,0.173,none,2024-10-23\r\n39548,2364,APAC,electronics,online,35.85,7,0.016,none,2024-04-23\r\n39549,1343,LATAM,grocery,mobile,32.18,5,0.129,none,2024-04-15\r\n39550,1982,EMEA,fashion,retail,33.01,5,0.073,coupon,2024-07-18\r\n39551,2060,LATAM,grocery,online,134.79,2,0.229,none,2024-05-06\r\n39552,1503,APAC,fashion,retail,69.89,2,0.026,none,2024-09-05\r\n39553,1525,APAC,toys,online,92.95,2,0.206,coupon,2024-06-16\r\n39554,1012,LATAM,grocery,online,38.65,5,0.161,bundle,2024-05-04\r\n39555,2214,AMER,home,mobile,38.33,6,0.098,bundle,2024-10-25\r\n39556,2149,EMEA,home,mobile,65.14,7,0.235,none,2024-04-22\r\n39557,2239,EMEA,home,mobile,67.61,8,0.046,loyalty,2024-01-11\r\n39558,2425,APAC,grocery,retail,67.54,2,0.071,loyalty,2024-12-06\r\n39559,1052,LATAM,home,partner,35.19,8,0.115,bundle,2024-01-13\r\n39560,1339,EMEA,fashion,online,130.30,4,0.241,loyalty,2024-11-22\r\n39561,1704,AMER,electronics,retail,34.37,3,0.063,none,2024-07-24\r\n39562,1018,APAC,grocery,online,32.63,1,0.219,loyalty,2024-11-17\r\n39563,1211,EMEA,fashion,retail,55.63,8,0.220,none,2024-01-05\r\n39564,2372,AMER,grocery,partner,54.58,4,0.212,none,2024-03-03\r\n39565,1540,LATAM,electronics,retail,64.81,4,0.119,loyalty,2024-07-11\r\n39566,2108,AMER,electronics,retail,18.38,5,0.177,none,2024-03-26\r\n39567,1360,APAC,fashion,online,58.45,2,0.213,none,2024-05-27\r\n39568,1969,LATAM,sports,online,113.29,2,0.169,none,2024-09-06\r\n39569,2479,EMEA,fashion,retail,101.42,7,0.024,none,2024-02-01\r\n39570,2185,EMEA,grocery,retail,17.64,6,0.242,coupon,2024-08-24\r\n39571,2166,AMER,toys,online,79.85,7,0.092,bundle,2024-06-18\r\n39572,1095,APAC,grocery,retail,73.07,4,0.121,none,2024-03-18\r\n39573,1429,APAC,fashion,online,25.70,4,0.157,none,2024-05-03\r\n39574,1716,LATAM,toys,online,8
7.89,4,0.023,none,2024-01-18\r\n39575,2009,LATAM,electronics,online,162.50,8,0.029,none,2024-02-28\r\n39576,1562,AMER,home,retail,29.80,6,0.104,bundle,2024-07-10\r\n39577,1802,AMER,grocery,retail,50.39,1,0.107,none,2024-01-06\r\n39578,1185,LATAM,grocery,online,30.77,3,0.186,coupon,2024-01-06\r\n39579,1841,AMER,grocery,online,115.94,3,0.203,none,2024-02-07\r\n39580,1589,AMER,toys,online,46.88,5,0.149,none,2024-12-15\r\n39581,1326,AMER,toys,retail,69.86,3,0.104,bundle,2024-08-06\r\n39582,1827,EMEA,toys,online,58.61,3,0.181,none,2024-09-25\r\n39583,1677,EMEA,grocery,mobile,79.44,6,0.105,coupon,2024-01-13\r\n39584,1542,APAC,fashion,online,80.48,3,0.234,coupon,2024-07-12\r\n39585,2014,EMEA,electronics,retail,184.29,2,0.227,none,2024-01-27\r\n39586,1029,EMEA,sports,partner,44.29,7,0.033,none,2024-01-05\r\n39587,2109,EMEA,grocery,mobile,41.23,7,0.023,none,2024-10-05\r\n39588,2328,EMEA,fashion,retail,75.91,1,0.115,none,2024-05-02\r\n39589,2443,LATAM,electronics,online,31.89,8,0.077,bundle,2024-03-28\r\n39590,2227,LATAM,electronics,retail,37.64,2,0.106,coupon,2024-08-04\r\n39591,1488,AMER,fashion,online,74.73,4,0.021,loyalty,2024-01-25\r\n39592,1515,EMEA,electronics,retail,36.26,7,0.165,coupon,2024-11-11\r\n39593,1522,LATAM,home,retail,52.24,7,0.078,none,2024-04-12\r\n39594,1252,APAC,home,mobile,68.37,8,0.071,coupon,2024-06-07\r\n39595,1374,APAC,grocery,online,67.87,4,0.139,none,2024-06-14\r\n39596,2061,EMEA,electronics,mobile,30.09,8,0.240,none,2024-06-26\r\n39597,1147,EMEA,grocery,online,37.49,2,0.020,none,2024-04-22\r\n39598,2278,APAC,toys,online,43.86,6,0.104,coupon,2024-10-11\r\n39599,1272,AMER,toys,retail,47.72,8,0.095,none,2024-12-03\r\n39600,1790,AMER,grocery,partner,45.71,6,0.198,coupon,2024-07-14\r\n39601,1175,AMER,electronics,mobile,87.82,4,0.193,coupon,2024-12-21\r\n39602,1192,EMEA,electronics,retail,57.39,6,0.155,none,2024-03-27\r\n39603,1280,LATAM,grocery,partner,57.14,1,0.087,bundle,2024-09-20\r\n39604,1011,APAC,electronics,retail,27.81,3,0.246,none,2024-03-22
\r\n39605,1163,AMER,toys,online,123.27,8,0.110,none,2024-10-03\r\n39606,1863,EMEA,electronics,retail,46.98,4,0.143,none,2024-09-08\r\n39607,1694,APAC,grocery,online,94.72,6,0.166,none,2024-04-04\r\n39608,1231,AMER,fashion,retail,75.27,8,0.134,coupon,2024-11-21\r\n39609,2089,EMEA,fashion,mobile,66.76,4,0.189,bundle,2024-08-27\r\n39610,1164,EMEA,grocery,retail,113.87,7,0.047,bundle,2024-07-09\r\n39611,1401,LATAM,home,online,16.27,4,0.194,coupon,2024-12-27\r\n39612,1890,LATAM,grocery,mobile,51.26,8,0.084,none,2024-04-03\r\n39613,1708,LATAM,fashion,online,83.47,1,0.069,coupon,2024-09-23\r\n39614,1940,APAC,electronics,mobile,38.04,8,0.008,none,2024-08-25\r\n39615,2046,APAC,home,mobile,38.69,4,0.054,bundle,2024-01-03\r\n39616,1012,LATAM,sports,online,51.97,5,0.190,none,2024-08-20\r\n39617,1838,AMER,sports,online,39.73,2,0.171,bundle,2024-03-27\r\n39618,1759,EMEA,toys,online,35.83,7,0.064,none,2024-01-28\r\n39619,2024,AMER,grocery,online,59.54,7,0.067,bundle,2024-08-06\r\n39620,1074,LATAM,grocery,retail,109.57,3,0.034,none,2024-04-19\r\n39621,2321,APAC,grocery,online,62.88,8,0.090,none,2024-11-03\r\n39622,2049,LATAM,fashion,online,54.72,3,0.050,none,2024-05-15\r\n39623,2494,AMER,electronics,online,113.07,1,0.233,bundle,2024-11-07\r\n39624,2239,EMEA,home,online,19.93,1,0.189,coupon,2024-08-04\r\n39625,2280,EMEA,home,retail,107.60,3,0.052,none,2024-01-12\r\n39626,1620,LATAM,toys,online,54.50,1,0.084,bundle,2024-01-22\r\n39627,2139,AMER,fashion,online,29.02,1,0.061,loyalty,2024-06-10\r\n39628,1375,AMER,electronics,retail,74.46,1,0.234,bundle,2024-12-13\r\n39629,1860,EMEA,toys,mobile,37.49,1,0.247,coupon,2024-09-20\r\n39630,1158,LATAM,fashion,retail,82.08,6,0.052,coupon,2024-01-17\r\n39631,1480,APAC,fashion,online,41.54,6,0.042,none,2024-08-03\r\n39632,1343,LATAM,toys,online,164.63,8,0.071,none,2024-09-21\r\n39633,2062,EMEA,sports,retail,76.94,1,0.022,none,2024-09-01\r\n39634,1258,EMEA,grocery,online,54.17,3,0.149,none,2024-03-21\r\n39635,1066,AMER,grocery,retail,33.68,7,0.049
,loyalty,2024-12-05\r\n39636,1159,LATAM,grocery,mobile,67.16,4,0.056,none,2024-08-06\r\n39637,2452,LATAM,grocery,retail,92.81,4,0.078,bundle,2024-08-26\r\n39638,2013,APAC,electronics,retail,46.69,2,0.089,none,2024-07-17\r\n39639,2418,AMER,fashion,mobile,31.38,6,0.153,coupon,2024-09-24\r\n39640,2470,EMEA,home,retail,17.73,3,0.190,none,2024-03-08\r\n39641,2167,APAC,home,mobile,115.95,3,0.065,bundle,2024-06-04\r\n39642,1024,APAC,home,online,65.45,8,0.056,loyalty,2024-12-10\r\n39643,1054,EMEA,grocery,retail,13.79,8,0.121,bundle,2024-09-16\r\n39644,1419,APAC,home,online,54.09,1,0.092,coupon,2024-05-23\r\n39645,1540,LATAM,toys,online,71.90,5,0.027,coupon,2024-04-18\r\n39646,1558,EMEA,electronics,retail,39.66,7,0.031,none,2024-06-08\r\n39647,1302,LATAM,electronics,online,80.06,4,0.181,none,2024-06-19\r\n39648,2441,EMEA,toys,mobile,48.54,3,0.085,none,2024-06-10\r\n39649,2329,LATAM,electronics,retail,84.33,2,0.084,none,2024-11-17\r\n39650,1686,LATAM,sports,online,44.77,5,0.018,coupon,2024-04-09\r\n39651,2429,EMEA,fashion,retail,57.66,6,0.120,none,2024-08-24\r\n39652,2262,APAC,fashion,retail,18.28,3,0.247,none,2024-06-16\r\n39653,1447,LATAM,fashion,online,44.18,2,0.027,none,2024-10-16\r\n39654,1479,AMER,fashion,retail,75.26,3,0.203,bundle,2024-02-09\r\n39655,1585,AMER,fashion,retail,42.20,5,0.011,none,2024-06-09\r\n39656,2255,AMER,sports,partner,68.04,3,0.005,none,2024-07-04\r\n39657,2211,APAC,home,online,31.58,6,0.009,none,2024-02-23\r\n39658,1167,EMEA,grocery,online,130.81,5,0.191,none,2024-02-10\r\n39659,2158,APAC,grocery,retail,102.87,4,0.022,bundle,2024-03-20\r\n39660,1142,EMEA,electronics,retail,88.88,1,0.120,coupon,2024-05-27\r\n39661,1350,LATAM,grocery,retail,79.10,1,0.109,coupon,2024-02-02\r\n39662,1504,AMER,electronics,retail,25.32,5,0.220,coupon,2024-06-08\r\n39663,1905,APAC,sports,online,168.87,3,0.211,none,2024-08-04\r\n39664,2269,EMEA,fashion,retail,158.29,1,0.196,none,2024-01-08\r\n39665,2195,APAC,home,online,50.04,3,0.127,bundle,2024-08-05\r\n39666,1229,LATAM,
grocery,online,36.27,5,0.170,loyalty,2024-01-06\r\n39667,1082,EMEA,grocery,online,144.71,3,0.167,none,2024-03-18\r\n39668,1938,APAC,home,retail,71.78,1,0.053,bundle,2024-04-26\r\n39669,1139,EMEA,home,online,24.43,4,0.033,bundle,2024-02-05\r\n39670,1102,APAC,home,online,65.02,5,0.160,none,2024-07-02\r\n39671,1624,AMER,home,mobile,43.33,3,0.094,none,2024-12-06\r\n39672,1782,LATAM,electronics,online,95.60,1,0.124,none,2024-06-11\r\n39673,1135,APAC,toys,online,48.63,8,0.068,none,2024-08-20\r\n39674,2209,AMER,electronics,online,64.23,2,0.039,coupon,2024-10-24\r\n39675,2298,APAC,toys,mobile,123.40,6,0.230,coupon,2024-04-07\r\n39676,1622,LATAM,home,online,38.90,4,0.143,none,2024-11-25\r\n39677,1943,AMER,electronics,online,71.62,8,0.016,none,2024-02-19\r\n39678,2041,LATAM,grocery,partner,42.75,1,0.035,none,2024-02-08\r\n39679,2314,EMEA,grocery,online,32.59,6,0.113,none,2024-07-08\r\n39680,2123,AMER,electronics,retail,109.36,2,0.160,none,2024-09-13\r\n39681,2146,APAC,grocery,online,74.65,4,0.143,bundle,2024-09-15\r\n39682,1546,EMEA,grocery,retail,164.03,4,0.055,coupon,2024-08-13\r\n39683,1415,AMER,home,retail,56.39,3,0.126,none,2024-06-15\r\n39684,1041,APAC,toys,online,39.94,4,0.020,none,2024-11-03\r\n39685,1630,APAC,grocery,retail,120.20,4,0.112,bundle,2024-02-05\r\n39686,1714,APAC,fashion,retail,102.98,7,0.197,bundle,2024-08-03\r\n39687,1221,LATAM,electronics,online,43.53,4,0.104,loyalty,2024-05-10\r\n39688,1659,APAC,home,retail,44.87,5,0.121,bundle,2024-11-07\r\n39689,1753,APAC,electronics,retail,34.51,2,0.091,none,2024-11-27\r\n39690,1688,LATAM,toys,online,91.09,1,0.190,none,2024-09-09\r\n39691,2493,APAC,home,retail,87.82,8,0.121,none,2024-11-08\r\n39692,1830,EMEA,grocery,retail,68.33,4,0.103,none,2024-09-11\r\n39693,2294,EMEA,grocery,retail,51.71,6,0.240,none,2024-11-22\r\n39694,1860,EMEA,grocery,online,82.19,6,0.226,none,2024-01-13\r\n39695,1258,EMEA,fashion,online,35.86,7,0.026,loyalty,2024-04-02\r\n39696,2176,AMER,home,mobile,63.30,4,0.235,none,2024-05-03\r\n39697,13
21,EMEA,grocery,retail,47.33,5,0.036,none,2024-12-25\r\n39698,1193,APAC,home,partner,42.14,7,0.168,none,2024-01-03\r\n39699,1852,AMER,grocery,retail,131.66,4,0.190,bundle,2024-12-08\r\n39700,2194,APAC,electronics,online,65.92,4,0.190,loyalty,2024-05-08\r\n39701,1430,EMEA,electronics,mobile,69.56,8,0.236,loyalty,2024-07-27\r\n39702,2289,APAC,fashion,retail,59.51,1,0.112,none,2024-11-07\r\n39703,2353,AMER,fashion,online,28.14,7,0.032,coupon,2024-09-10\r\n39704,2450,EMEA,fashion,mobile,43.77,7,0.106,coupon,2024-05-16\r\n39705,1173,LATAM,electronics,retail,33.86,1,0.048,loyalty,2024-09-21\r\n39706,1098,APAC,electronics,online,32.41,8,0.031,coupon,2024-06-17\r\n39707,2035,LATAM,toys,online,86.20,2,0.082,loyalty,2024-01-06\r\n39708,2012,APAC,grocery,mobile,50.98,7,0.247,none,2024-09-07\r\n39709,1705,AMER,fashion,retail,47.87,2,0.225,coupon,2024-10-22\r\n39710,1704,AMER,grocery,retail,34.18,7,0.054,none,2024-04-01\r\n39711,1630,APAC,grocery,online,40.64,6,0.045,none,2024-01-18\r\n39712,1774,EMEA,grocery,retail,38.11,8,0.007,none,2024-12-19\r\n39713,1154,LATAM,grocery,mobile,82.76,2,0.051,bundle,2024-06-19\r\n39714,1564,APAC,sports,online,67.67,3,0.148,bundle,2024-03-09\r\n39715,1948,EMEA,fashion,mobile,51.05,8,0.189,coupon,2024-04-25\r\n39716,1453,APAC,toys,online,85.06,3,0.093,none,2024-04-05\r\n39717,1186,APAC,home,retail,60.85,7,0.032,none,2024-10-02\r\n39718,1276,AMER,electronics,mobile,27.94,2,0.203,coupon,2024-04-22\r\n39719,2353,AMER,electronics,online,39.30,6,0.149,coupon,2024-09-02\r\n39720,2367,AMER,grocery,retail,24.01,6,0.122,loyalty,2024-11-01\r\n39721,1887,LATAM,sports,online,107.47,8,0.123,none,2024-09-08\r\n39722,2460,AMER,home,online,65.74,1,0.029,coupon,2024-11-08\r\n39723,2350,APAC,electronics,online,45.07,3,0.148,bundle,2024-07-18\r\n39724,1885,EMEA,fashion,retail,73.73,1,0.173,none,2024-07-14\r\n39725,2245,APAC,fashion,retail,86.53,3,0.009,coupon,2024-07-20\r\n39726,2148,EMEA,home,online,43.25,3,0.043,coupon,2024-02-13\r\n39727,2146,APAC,fashion,online
,42.71,8,0.163,none,2024-04-15\r\n39728,2259,AMER,grocery,online,58.29,8,0.105,bundle,2024-08-28\r\n39729,1907,EMEA,sports,retail,63.49,3,0.162,coupon,2024-08-04\r\n39730,1525,APAC,sports,retail,77.60,2,0.051,coupon,2024-09-19\r\n39731,2331,APAC,home,retail,34.78,7,0.216,none,2024-02-10\r\n39732,2409,APAC,toys,mobile,98.03,5,0.229,none,2024-11-22\r\n39733,2303,EMEA,home,retail,182.22,5,0.123,none,2024-08-15\r\n39734,2402,AMER,grocery,online,124.27,2,0.110,none,2024-11-08\r\n39735,1662,LATAM,home,online,35.01,1,0.141,none,2024-11-23\r\n39736,1118,AMER,electronics,mobile,44.12,6,0.180,none,2024-05-18\r\n39737,2082,APAC,toys,online,26.50,8,0.084,none,2024-09-01\r\n39738,1273,AMER,toys,retail,86.67,4,0.094,none,2024-03-25\r\n39739,1521,LATAM,fashion,online,61.51,1,0.052,loyalty,2024-04-14\r\n39740,2303,EMEA,grocery,partner,23.02,3,0.011,bundle,2024-07-15\r\n39741,2361,EMEA,home,online,56.45,5,0.157,bundle,2024-06-20\r\n39742,2491,APAC,grocery,retail,57.05,1,0.062,none,2024-07-03\r\n39743,1806,APAC,home,partner,65.59,7,0.130,coupon,2024-06-18\r\n39744,1832,APAC,grocery,mobile,70.48,7,0.177,none,2024-09-28\r\n39745,2274,APAC,toys,mobile,56.76,2,0.070,bundle,2024-02-20\r\n39746,2188,EMEA,fashion,retail,69.33,7,0.242,none,2024-10-26\r\n39747,2095,EMEA,sports,online,33.63,7,0.169,coupon,2024-05-08\r\n39748,2191,AMER,toys,online,35.61,8,0.249,none,2024-01-18\r\n39749,1964,EMEA,sports,online,25.98,5,0.148,coupon,2024-03-17\r\n39750,1652,APAC,sports,partner,76.65,5,0.003,bundle,2024-08-22\r\n39751,1401,LATAM,grocery,retail,29.48,4,0.086,coupon,2024-01-25\r\n39752,1822,EMEA,toys,mobile,23.06,8,0.154,none,2024-07-22\r\n39753,1145,AMER,home,retail,34.85,7,0.011,bundle,2024-03-01\r\n39754,2117,EMEA,fashion,online,63.21,4,0.040,coupon,2024-03-06\r\n39755,1426,AMER,grocery,retail,100.95,6,0.126,coupon,2024-07-06\r\n39756,1697,APAC,grocery,retail,199.30,7,0.246,none,2024-09-21\r\n39757,1461,LATAM,fashion,retail,90.87,8,0.126,none,2024-07-22\r\n39758,1634,AMER,sports,mobile,43.75,7,0.0
33,none,2024-05-26\r\n39759,2461,LATAM,sports,online,64.37,4,0.197,none,2024-08-16\r\n39760,1002,EMEA,toys,retail,63.41,8,0.136,none,2024-12-04\r\n39761,2429,EMEA,grocery,retail,40.83,6,0.021,none,2024-06-19\r\n39762,2165,AMER,grocery,online,68.36,5,0.017,none,2024-10-03\r\n39763,1427,EMEA,fashion,retail,92.90,5,0.201,none,2024-04-16\r\n39764,1715,AMER,grocery,online,66.51,1,0.209,bundle,2024-02-16\r\n39765,1525,APAC,home,retail,193.98,1,0.099,none,2024-09-05\r\n39766,2470,EMEA,home,online,150.54,8,0.192,bundle,2024-03-14\r\n39767,1891,APAC,electronics,online,57.82,2,0.180,none,2024-05-15\r\n39768,2394,EMEA,sports,mobile,77.75,3,0.000,none,2024-02-03\r\n39769,1535,AMER,grocery,retail,61.08,2,0.202,none,2024-08-10\r\n39770,1241,APAC,grocery,retail,46.42,4,0.213,bundle,2024-02-27\r\n39771,1739,AMER,home,online,70.21,7,0.203,none,2024-02-23\r\n39772,1435,AMER,electronics,online,65.30,5,0.189,bundle,2024-03-16\r\n39773,2492,LATAM,grocery,online,44.39,3,0.070,none,2024-06-27\r\n39774,1917,LATAM,home,retail,82.44,8,0.052,coupon,2024-01-14\r\n39775,1507,EMEA,grocery,retail,80.76,4,0.122,coupon,2024-08-16\r\n39776,1981,EMEA,toys,online,110.13,1,0.205,coupon,2024-05-06\r\n39777,2469,LATAM,grocery,online,36.42,1,0.036,coupon,2024-01-07\r\n39778,1646,APAC,grocery,online,92.00,2,0.107,bundle,2024-08-27\r\n39779,2258,AMER,grocery,retail,46.04,6,0.038,none,2024-12-09\r\n39780,1000,APAC,home,online,37.38,4,0.235,bundle,2024-06-05\r\n39781,1760,LATAM,electronics,online,19.43,7,0.070,coupon,2024-06-08\r\n39782,1159,LATAM,sports,retail,83.85,6,0.186,none,2024-07-04\r\n39783,2347,AMER,electronics,retail,120.72,5,0.030,none,2024-08-13\r\n39784,2310,EMEA,toys,online,41.08,7,0.163,bundle,2024-07-28\r\n39785,1166,AMER,sports,online,72.07,1,0.075,none,2024-02-21\r\n39786,2055,AMER,toys,retail,57.74,1,0.146,none,2024-02-15\r\n39787,2025,EMEA,grocery,online,39.22,2,0.123,loyalty,2024-06-19\r\n39788,1731,AMER,sports,retail,49.20,6,0.192,none,2024-02-22\r\n39789,2198,EMEA,electronics,online,55
.51,8,0.208,loyalty,2024-03-06\r\n39790,2253,AMER,grocery,online,66.30,5,0.120,loyalty,2024-06-01\r\n39791,1594,LATAM,electronics,mobile,40.92,2,0.030,coupon,2024-12-02\r\n39792,1901,AMER,grocery,partner,51.53,1,0.237,none,2024-12-21\r\n39793,2445,APAC,grocery,mobile,45.26,4,0.018,none,2024-01-01\r\n39794,1558,EMEA,grocery,partner,122.41,8,0.233,none,2024-02-22\r\n39795,2354,LATAM,home,online,108.14,5,0.192,coupon,2024-05-26\r\n39796,1333,EMEA,sports,online,43.36,7,0.208,bundle,2024-11-10\r\n39797,1648,APAC,electronics,online,26.74,8,0.023,coupon,2024-08-14\r\n39798,1988,AMER,home,partner,167.88,6,0.020,none,2024-02-05\r\n39799,1594,LATAM,grocery,online,72.73,2,0.105,none,2024-09-27\r\n39800,1929,LATAM,fashion,partner,134.30,4,0.070,bundle,2024-06-13\r\n39801,2174,LATAM,electronics,mobile,56.78,8,0.237,none,2024-12-14\r\n39802,1188,LATAM,electronics,retail,51.66,8,0.108,coupon,2024-03-04\r\n39803,1873,EMEA,home,retail,106.29,8,0.145,none,2024-08-24\r\n39804,2154,APAC,fashion,retail,140.55,2,0.222,none,2024-08-01\r\n39805,1373,LATAM,toys,online,48.12,3,0.049,loyalty,2024-05-18\r\n39806,1364,EMEA,fashion,online,88.74,4,0.066,coupon,2024-07-04\r\n39807,1889,APAC,grocery,retail,106.97,5,0.249,none,2024-06-16\r\n39808,2158,APAC,toys,online,33.63,8,0.086,none,2024-11-14\r\n39809,1343,LATAM,toys,retail,40.17,6,0.033,none,2024-01-28\r\n39810,2370,EMEA,home,online,95.01,7,0.192,none,2024-02-02\r\n39811,1763,LATAM,toys,online,72.26,4,0.031,bundle,2024-06-06\r\n39812,2131,APAC,sports,mobile,86.45,5,0.022,none,2024-03-08\r\n39813,1261,APAC,electronics,retail,35.98,6,0.010,none,2024-09-07\r\n39814,2302,APAC,grocery,partner,70.12,3,0.132,none,2024-06-23\r\n39815,1869,AMER,grocery,retail,106.08,6,0.054,none,2024-02-20\r\n39816,1641,EMEA,electronics,retail,84.45,2,0.219,coupon,2024-10-04\r\n39817,1989,LATAM,electronics,retail,52.77,6,0.031,none,2024-10-24\r\n39818,1450,EMEA,grocery,online,31.23,4,0.091,bundle,2024-12-10\r\n39819,2097,AMER,grocery,retail,78.07,3,0.144,none,2024-07-2
2\r\n39820,1578,LATAM,sports,online,68.93,7,0.190,none,2024-05-15\r\n39821,1476,APAC,sports,online,30.83,6,0.147,coupon,2024-04-28\r\n39822,2432,AMER,home,online,36.63,3,0.108,none,2024-07-02\r\n39823,1469,EMEA,home,retail,41.21,8,0.108,coupon,2024-06-06\r\n39824,1962,APAC,grocery,mobile,40.72,2,0.143,loyalty,2024-12-06\r\n39825,2103,LATAM,grocery,online,42.92,5,0.119,none,2024-01-26\r\n39826,2077,APAC,grocery,online,33.71,1,0.019,none,2024-02-14\r\n39827,2176,AMER,grocery,retail,35.24,4,0.091,coupon,2024-06-11\r\n39828,1595,AMER,sports,mobile,77.69,4,0.109,bundle,2024-08-08\r\n39829,1972,LATAM,grocery,online,29.01,8,0.032,coupon,2024-04-15\r\n39830,1202,APAC,sports,retail,88.67,4,0.123,none,2024-11-14\r\n39831,2491,APAC,home,online,50.31,3,0.066,bundle,2024-03-20\r\n39832,1776,APAC,home,online,42.60,8,0.181,none,2024-04-11\r\n39833,2359,LATAM,fashion,mobile,77.97,4,0.057,none,2024-05-18\r\n39834,1883,LATAM,sports,retail,81.41,3,0.070,bundle,2024-08-06\r\n39835,2109,EMEA,grocery,retail,81.59,6,0.137,bundle,2024-06-16\r\n39836,2359,LATAM,grocery,online,34.54,5,0.227,bundle,2024-12-05\r\n39837,1602,EMEA,sports,online,47.35,8,0.174,loyalty,2024-04-26\r\n39838,1020,APAC,grocery,online,91.05,5,0.135,coupon,2024-12-06\r\n39839,1293,AMER,electronics,online,69.49,6,0.087,bundle,2024-07-07\r\n39840,1143,LATAM,grocery,online,29.22,6,0.026,none,2024-06-04\r\n39841,1081,AMER,grocery,online,89.32,4,0.226,none,2024-07-03\r\n39842,2354,LATAM,grocery,online,16.28,1,0.174,bundle,2024-01-07\r\n39843,2051,APAC,sports,partner,78.86,5,0.014,loyalty,2024-10-22\r\n39844,2061,EMEA,home,mobile,30.05,6,0.162,none,2024-11-19\r\n39845,1121,EMEA,grocery,online,42.16,2,0.170,loyalty,2024-11-20\r\n39846,1052,LATAM,toys,retail,97.99,1,0.206,none,2024-09-05\r\n39847,1442,EMEA,sports,online,32.81,8,0.048,coupon,2024-11-10\r\n39848,2119,AMER,electronics,online,68.24,4,0.219,none,2024-08-22\r\n39849,2060,LATAM,toys,online,130.58,5,0.011,none,2024-01-24\r\n39850,1114,APAC,electronics,retail,28.23,5,0.0
29,loyalty,2024-06-15\r\n39851,1130,LATAM,electronics,mobile,60.78,5,0.027,none,2024-04-15\r\n39852,1682,EMEA,grocery,online,106.02,7,0.029,bundle,2024-10-22\r\n39853,2174,LATAM,sports,online,27.60,3,0.229,bundle,2024-12-11\r\n39854,1149,LATAM,home,online,90.91,4,0.211,none,2024-08-07\r\n39855,1988,AMER,home,online,69.28,2,0.017,none,2024-07-26\r\n39856,2333,APAC,grocery,retail,54.25,6,0.120,none,2024-01-19\r\n39857,1255,AMER,fashion,online,93.30,6,0.095,coupon,2024-02-12\r\n39858,1626,EMEA,toys,mobile,23.40,8,0.137,bundle,2024-08-22\r\n39859,1754,EMEA,fashion,partner,64.91,4,0.024,none,2024-02-17\r\n39860,1801,LATAM,grocery,retail,61.36,5,0.149,coupon,2024-04-13\r\n39861,1187,AMER,grocery,mobile,64.66,3,0.105,loyalty,2024-01-01\r\n39862,2398,EMEA,electronics,online,35.01,5,0.142,none,2024-06-08\r\n39863,1094,LATAM,fashion,online,59.39,1,0.163,none,2024-01-24\r\n39864,2117,EMEA,toys,online,64.81,4,0.158,none,2024-11-03\r\n39865,2152,EMEA,electronics,online,62.43,2,0.146,none,2024-04-15\r\n39866,1412,AMER,fashion,mobile,53.09,5,0.135,bundle,2024-12-09\r\n39867,1984,LATAM,toys,online,61.59,6,0.015,none,2024-01-08\r\n39868,2009,LATAM,grocery,online,75.84,8,0.148,none,2024-11-01\r\n39869,2448,APAC,fashion,retail,76.66,4,0.174,none,2024-08-19\r\n39870,1558,EMEA,home,retail,133.87,3,0.227,coupon,2024-03-24\r\n39871,2383,APAC,fashion,online,49.51,5,0.162,coupon,2024-05-28\r\n39872,1694,APAC,toys,retail,69.58,6,0.013,none,2024-02-10\r\n39873,2476,APAC,fashion,mobile,139.86,6,0.246,none,2024-11-25\r\n39874,1404,EMEA,sports,mobile,173.43,4,0.149,none,2024-01-24\r\n39875,2016,LATAM,electronics,retail,56.04,5,0.028,none,2024-05-24\r\n39876,2058,LATAM,sports,retail,126.66,4,0.239,loyalty,2024-12-05\r\n39877,1707,APAC,fashion,retail,53.25,5,0.009,bundle,2024-12-11\r\n39878,1936,EMEA,grocery,retail,44.52,4,0.048,bundle,2024-05-04\r\n39879,1944,AMER,fashion,retail,40.13,5,0.106,coupon,2024-09-03\r\n39880,1013,LATAM,grocery,online,54.25,1,0.220,bundle,2024-12-25\r\n39881,1439,LATAM,
grocery,online,41.49,7,0.074,bundle,2024-12-18\r\n39882,1894,APAC,toys,online,67.63,1,0.095,none,2024-09-16\r\n39883,1166,AMER,fashion,online,36.50,7,0.101,none,2024-04-12\r\n39884,2257,AMER,grocery,retail,153.92,6,0.100,none,2024-01-04\r\n39885,2182,AMER,sports,retail,61.16,3,0.016,none,2024-07-21\r\n39886,1977,APAC,grocery,mobile,66.39,2,0.200,coupon,2024-04-12\r\n39887,1149,LATAM,grocery,online,31.54,7,0.094,bundle,2024-09-08\r\n39888,2257,AMER,sports,online,31.82,6,0.050,bundle,2024-11-23\r\n39889,2199,LATAM,home,online,28.88,1,0.202,none,2024-05-10\r\n39890,1931,APAC,grocery,online,82.47,3,0.072,coupon,2024-03-01\r\n39891,1284,APAC,sports,mobile,34.71,4,0.087,coupon,2024-04-25\r\n39892,2264,LATAM,electronics,online,59.34,6,0.041,loyalty,2024-01-05\r\n39893,1163,AMER,electronics,retail,120.59,5,0.043,none,2024-05-16\r\n39894,1006,AMER,sports,online,113.70,4,0.114,coupon,2024-04-22\r\n39895,2182,AMER,grocery,online,59.64,3,0.224,none,2024-09-06\r\n39896,1134,APAC,home,retail,64.86,2,0.111,none,2024-06-11\r\n39897,1623,AMER,grocery,online,62.16,5,0.114,none,2024-10-14\r\n39898,1911,LATAM,grocery,online,97.12,6,0.007,none,2024-12-05\r\n39899,1313,EMEA,grocery,online,89.37,7,0.089,none,2024-09-02\r\n39900,1921,LATAM,home,retail,26.99,1,0.136,coupon,2024-08-11\r\n39901,2126,APAC,grocery,retail,26.15,5,0.031,loyalty,2024-05-04\r\n39902,2168,EMEA,toys,online,42.95,1,0.016,none,2024-09-11\r\n39903,1601,APAC,grocery,retail,118.44,2,0.086,bundle,2024-05-07\r\n39904,1587,LATAM,electronics,online,42.64,7,0.115,bundle,2024-10-17\r\n39905,1609,LATAM,sports,retail,95.66,4,0.068,coupon,2024-09-10\r\n39906,1192,EMEA,electronics,online,130.14,7,0.242,coupon,2024-07-10\r\n39907,1947,EMEA,fashion,online,33.18,8,0.197,none,2024-06-14\r\n39908,1991,APAC,fashion,retail,59.80,2,0.199,coupon,2024-06-11\r\n39909,2373,LATAM,home,retail,51.30,6,0.037,bundle,2024-09-21\r\n39910,1576,EMEA,electronics,online,58.96,7,0.210,bundle,2024-01-07\r\n39911,1859,AMER,home,retail,50.59,5,0.240,none,202
4-07-18\r\n39912,1833,EMEA,grocery,online,49.32,2,0.144,coupon,2024-08-18\r\n39913,1350,LATAM,grocery,retail,69.62,4,0.185,none,2024-01-12\r\n39914,1478,EMEA,toys,online,54.03,3,0.081,none,2024-12-13\r\n39915,1533,APAC,grocery,retail,59.75,5,0.153,coupon,2024-07-28\r\n39916,2276,AMER,grocery,retail,32.69,4,0.161,none,2024-04-06\r\n39917,1523,LATAM,toys,online,135.13,2,0.073,none,2024-03-07\r\n39918,2241,APAC,home,online,86.39,1,0.009,none,2024-08-15\r\n39919,2064,LATAM,grocery,retail,54.84,4,0.018,bundle,2024-07-09\r\n39920,2027,EMEA,grocery,online,46.84,1,0.082,loyalty,2024-05-07\r\n39921,2286,AMER,fashion,online,57.98,3,0.148,none,2024-03-13\r\n39922,1681,LATAM,grocery,mobile,58.58,5,0.240,none,2024-09-11\r\n39923,2288,AMER,electronics,mobile,95.22,6,0.140,none,2024-02-11\r\n39924,2485,AMER,grocery,mobile,48.11,7,0.020,coupon,2024-04-13\r\n39925,2196,AMER,electronics,online,87.06,6,0.222,none,2024-11-19\r\n39926,1137,APAC,sports,online,44.79,6,0.124,none,2024-12-16\r\n39927,1801,LATAM,grocery,online,70.90,3,0.105,bundle,2024-02-03\r\n39928,1508,LATAM,grocery,online,108.75,6,0.147,none,2024-12-14\r\n39929,1097,EMEA,grocery,online,19.13,3,0.109,bundle,2024-01-03\r\n39930,1992,LATAM,sports,mobile,50.17,1,0.063,none,2024-01-25\r\n39931,2391,EMEA,home,online,23.48,6,0.025,none,2024-11-19\r\n39932,1080,LATAM,electronics,mobile,31.57,6,0.244,loyalty,2024-02-05\r\n39933,2404,EMEA,fashion,mobile,31.03,5,0.203,none,2024-10-20\r\n39934,2395,APAC,sports,partner,74.23,5,0.248,none,2024-08-26\r\n39935,1282,LATAM,grocery,online,62.33,3,0.194,none,2024-01-06\r\n39936,2127,LATAM,fashion,mobile,71.87,5,0.136,none,2024-09-06\r\n39937,1653,APAC,grocery,online,68.41,3,0.027,none,2024-05-16\r\n39938,1828,EMEA,electronics,mobile,130.42,4,0.098,bundle,2024-09-01\r\n39939,1352,AMER,electronics,online,51.86,5,0.233,none,2024-04-13\r\n39940,1803,LATAM,grocery,online,51.10,7,0.082,none,2024-05-04\r\n39941,1656,LATAM,home,online,38.25,5,0.142,bundle,2024-06-28\r\n39942,1418,LATAM,grocery,onli
ne,230.55,7,0.040,bundle,2024-05-05\r\n39943,1917,LATAM,home,online,42.29,2,0.119,coupon,2024-07-05\r\n39944,2021,EMEA,fashion,retail,85.56,7,0.169,none,2024-03-03\r\n39945,1684,EMEA,grocery,retail,44.87,5,0.126,coupon,2024-06-28\r\n39946,1232,LATAM,toys,retail,42.62,7,0.219,none,2024-07-14\r\n39947,1234,AMER,grocery,retail,188.27,5,0.015,none,2024-10-25\r\n39948,1696,LATAM,fashion,online,29.37,7,0.160,none,2024-07-02\r\n39949,2318,AMER,home,online,47.04,1,0.085,loyalty,2024-05-23\r\n39950,1437,EMEA,fashion,retail,90.63,8,0.032,coupon,2024-10-21\r\n39951,1785,EMEA,electronics,online,40.99,4,0.153,none,2024-09-18\r\n39952,2177,AMER,grocery,retail,46.53,5,0.109,coupon,2024-11-17\r\n39953,2433,APAC,grocery,retail,59.13,1,0.198,loyalty,2024-12-17\r\n39954,2388,LATAM,sports,retail,46.58,3,0.112,coupon,2024-11-03\r\n39955,1837,LATAM,electronics,retail,36.70,8,0.149,none,2024-11-21\r\n39956,2493,APAC,grocery,retail,44.23,6,0.240,none,2024-03-05\r\n39957,1801,LATAM,electronics,retail,31.94,2,0.123,none,2024-05-26\r\n39958,1942,APAC,fashion,online,35.33,4,0.043,coupon,2024-06-11\r\n39959,2242,AMER,grocery,online,38.28,5,0.006,none,2024-11-08\r\n39960,1363,EMEA,grocery,online,60.67,5,0.198,none,2024-10-14\r\n39961,1636,APAC,fashion,online,41.34,7,0.207,none,2024-04-19\r\n39962,1436,APAC,grocery,retail,87.02,6,0.158,bundle,2024-11-05\r\n39963,2012,APAC,sports,retail,88.86,7,0.040,none,2024-12-26\r\n39964,1281,AMER,toys,retail,116.29,3,0.039,none,2024-04-02\r\n39965,1201,LATAM,grocery,retail,145.43,1,0.164,none,2024-12-26\r\n39966,2193,AMER,grocery,retail,40.96,7,0.012,bundle,2024-08-28\r\n39967,1450,EMEA,sports,mobile,23.97,4,0.077,none,2024-02-24\r\n39968,1573,AMER,home,retail,48.55,2,0.178,none,2024-06-05\r\n39969,2018,AMER,sports,online,61.99,8,0.208,none,2024-03-03\r\n39970,2146,APAC,sports,retail,40.32,1,0.174,none,2024-07-18\r\n39971,2426,AMER,fashion,retail,101.18,6,0.127,none,2024-01-09\r\n39972,1067,APAC,electronics,online,91.43,3,0.244,none,2024-08-10\r\n39973,2001,E
MEA,fashion,retail,69.39,2,0.048,bundle,2024-06-25\r\n39974,1790,AMER,electronics,online,33.20,8,0.064,bundle,2024-07-12\r\n39975,2125,LATAM,sports,retail,54.59,6,0.190,none,2024-09-09\r\n39976,1682,EMEA,sports,mobile,47.29,8,0.129,none,2024-05-02\r\n39977,1295,EMEA,fashion,mobile,53.51,2,0.218,loyalty,2024-08-05\r\n39978,1057,LATAM,grocery,mobile,96.82,1,0.171,coupon,2024-12-20\r\n39979,2288,AMER,grocery,online,38.41,5,0.232,none,2024-10-18\r\n39980,1774,EMEA,home,online,80.18,8,0.110,bundle,2024-07-25\r\n39981,2229,APAC,grocery,online,50.32,8,0.033,coupon,2024-11-01\r\n39982,2107,APAC,fashion,retail,137.57,6,0.071,coupon,2024-09-10\r\n39983,1368,EMEA,fashion,retail,56.96,8,0.104,none,2024-12-09\r\n39984,1577,AMER,home,mobile,177.87,3,0.144,none,2024-10-24\r\n39985,1115,AMER,fashion,online,72.17,3,0.207,none,2024-06-08\r\n39986,2037,LATAM,grocery,retail,88.82,2,0.042,bundle,2024-08-06\r\n39987,2052,LATAM,fashion,retail,66.17,5,0.063,bundle,2024-01-02\r\n39988,2139,AMER,fashion,retail,50.94,6,0.046,none,2024-03-20\r\n39989,1744,EMEA,electronics,retail,44.00,4,0.052,bundle,2024-02-26\r\n39990,1306,LATAM,fashion,online,116.64,2,0.163,none,2024-06-20\r\n39991,2360,EMEA,toys,online,47.61,4,0.217,bundle,2024-10-18\r\n39992,1279,EMEA,fashion,online,125.61,2,0.006,loyalty,2024-12-22\r\n39993,2122,AMER,grocery,online,84.11,5,0.173,none,2024-11-17\r\n39994,1435,AMER,electronics,online,54.06,4,0.101,none,2024-10-21\r\n39995,2389,LATAM,sports,online,94.93,7,0.200,none,2024-12-13\r\n39996,1925,LATAM,home,online,141.62,6,0.209,none,2024-10-28\r\n39997,1673,AMER,electronics,online,98.64,1,0.043,none,2024-06-25\r\n39998,1695,LATAM,toys,retail,37.15,1,0.186,none,2024-08-06\r\n39999,1496,AMER,fashion,online,52.98,2,0.137,coupon,2024-11-17\r\n40000,1721,EMEA,fashion,partner,37.17,8,0.102,none,2024-10-22\r\n"
  },
  {
    "path": "packages/talon/bench/dune",
    "content": "(executable\n (name bench_talon)\n (libraries talon talon.csv thumper))\n\n(rule\n (alias runtest)\n (action\n  (progn\n   (run %{exe:bench_talon.exe} -q)\n   (diff? talon.thumper talon.thumper.corrected))))\n"
  },
  {
    "path": "packages/talon/bench/scripts/generate_fixtures.py",
    "content": "\"\"\"Generate synthetic fixtures for the Talon dataframe benchmarks.\"\"\"\n\nfrom __future__ import annotations\n\nimport csv\nimport random\nfrom dataclasses import dataclass\nfrom pathlib import Path\nfrom typing import Iterable, List\n\n\n@dataclass(frozen=True)\nclass Customer:\n    customer_id: int\n    region: str\n    segment: str\n    status: str\n    loyalty_score: float\n    tenure_years: int\n\n\ndef _create_customers(rng: random.Random, base_id: int = 1000, count: int = 1500) -> List[Customer]:\n    regions = [\"EMEA\", \"AMER\", \"APAC\", \"LATAM\"]\n    segments = [\"Enterprise\", \"Growth\", \"SMB\", \"Consumer\"]\n    statuses = [\"active\", \"at_risk\", \"inactive\"]\n\n    customers: List[Customer] = []\n    for offset in range(count):\n        customer_id = base_id + offset\n        region = rng.choice(regions)\n        segment = rng.choices(segments, weights=[0.25, 0.2, 0.3, 0.25])[0]\n        status = rng.choices(statuses, weights=[0.65, 0.2, 0.15])[0]\n        loyalty_score = round(rng.uniform(20.0, 98.0), 2)\n        tenure_years = rng.randint(1, 12)\n        customers.append(\n            Customer(\n                customer_id=customer_id,\n                region=region,\n                segment=segment,\n                status=status,\n                loyalty_score=loyalty_score,\n                tenure_years=tenure_years,\n            )\n        )\n    return customers\n\n\ndef _save_customers(path: Path, rows: Iterable[Customer]) -> None:\n    with path.open(\"w\", newline=\"\", encoding=\"utf-8\") as handle:\n        writer = csv.writer(handle)\n        writer.writerow([\n            \"customer_id\",\n            \"segment\",\n            \"region\",\n            \"status\",\n            \"loyalty_score\",\n            \"tenure_years\",\n        ])\n        for row in rows:\n            writer.writerow(\n                [\n                    row.customer_id,\n                    row.segment,\n                    
row.region,\n                    row.status,\n                    f\"{row.loyalty_score:.2f}\",\n                    row.tenure_years,\n                ]\n            )\n\n\ndef _write_transactions(path: Path, customers: List[Customer], rng: random.Random, count: int = 40000) -> None:\n    categories = [\n        \"electronics\",\n        \"grocery\",\n        \"fashion\",\n        \"home\",\n        \"sports\",\n        \"toys\",\n    ]\n    channels = [\"online\", \"retail\", \"mobile\", \"partner\"]\n    promo_flags = [\"none\", \"coupon\", \"bundle\", \"loyalty\"]\n\n    with path.open(\"w\", newline=\"\", encoding=\"utf-8\") as handle:\n        writer = csv.writer(handle)\n        writer.writerow(\n            [\n                \"transaction_id\",\n                \"customer_id\",\n                \"region\",\n                \"category\",\n                \"channel\",\n                \"amount\",\n                \"quantity\",\n                \"discount\",\n                \"promo\",\n                \"event_date\",\n            ]\n        )\n\n        for tx_id in range(1, count + 1):\n            customer = rng.choice(customers)\n            category = rng.choices(categories, weights=[0.2, 0.25, 0.15, 0.18, 0.12, 0.1])[0]\n            channel = rng.choices(channels, weights=[0.45, 0.35, 0.15, 0.05])[0]\n            quantity = rng.randint(1, 8)\n            base_amount = rng.lognormvariate(4.0, 0.5)\n            seasonal_factor = 0.85 + rng.random() * 0.4\n            amount = round(base_amount * seasonal_factor, 2)\n            discount = round(rng.uniform(0.0, 0.25), 3)\n            promo = rng.choices(promo_flags, weights=[0.55, 0.2, 0.15, 0.1])[0]\n            month = rng.randint(1, 12)\n            day = rng.randint(1, 28)\n            event_date = f\"2024-{month:02d}-{day:02d}\"\n\n            writer.writerow(\n                [\n                    tx_id,\n                    customer.customer_id,\n                    customer.region,\n              
      category,\n                    channel,\n                    f\"{amount:.2f}\",\n                    quantity,\n                    f\"{discount:.3f}\",\n                    promo,\n                    event_date,\n                ]\n            )\n\n\ndef main() -> None:\n    data_dir = Path(__file__).resolve().parents[1] / \"data\"\n    data_dir.mkdir(parents=True, exist_ok=True)\n\n    rng = random.Random(20240127)\n\n    customers = _create_customers(rng)\n    _save_customers(data_dir / \"customers.csv\", customers)\n    _write_transactions(data_dir / \"transactions.csv\", customers, rng)\n\n    print(f\"Generated Talon fixtures in {data_dir}\")\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "packages/talon/bench/talon.thumper",
    "content": "# thumper baseline\n# version: 1\n# suite_name: talon\n# host: 1480401c3b76ed18\n# cpu: Apple M1 Max\n# ocaml: 5.4.1\n# git: 31747323\n# dirty: true\n# command: /Users/tmattio/Workspace/raven/_build/default/packages/talon/bench/bench_talon.exe --bless --quick\n\ntalon/filter_high_value\talloc_words\t5.829060e+05\t5.829060e+05\t5.829060e+05\t0.000000e+00\t5\t0\ntalon/filter_high_value\tcpu_time\t3.465705e-03\t3.398246e-03\t3.551483e-03\t2.210760e-02\t5\t0\ntalon/filter_high_value\twall_time\t3.471535e-03\t3.408034e-03\t3.535678e-03\t1.838436e-02\t5\t0\ntalon/group_category_region\talloc_words\t1.970381e+06\t1.970381e+06\t1.970381e+06\t0.000000e+00\t5\t1\ntalon/group_category_region\tcpu_time\t1.762447e-02\t1.746038e-02\t1.799537e-02\t1.517769e-02\t5\t1\ntalon/group_category_region\twall_time\t1.763611e-02\t1.745952e-02\t1.811992e-02\t1.872291e-02\t5\t1\ntalon/join_customer_lookup\talloc_words\t3.093605e+06\t3.093605e+06\t3.093605e+06\t0.000000e+00\t5\t1\ntalon/join_customer_lookup\tcpu_time\t2.282571e-02\t2.273004e-02\t2.290595e-02\t3.853322e-03\t5\t0\ntalon/join_customer_lookup\twall_time\t2.283701e-02\t2.275199e-02\t2.291955e-02\t3.668491e-03\t5\t0\ntalon/sort_amount_desc\talloc_words\t1.991202e+06\t1.991202e+06\t1.991202e+06\t0.000000e+00\t5\t1\ntalon/sort_amount_desc\tcpu_time\t2.935218e-02\t2.877947e-02\t2.979120e-02\t1.723426e-02\t5\t2\ntalon/sort_amount_desc\twall_time\t2.937058e-02\t2.873829e-02\t2.981201e-02\t1.827888e-02\t5\t2\n"
  },
  {
    "path": "packages/talon/doc/01-getting-started.md",
    "content": "# Getting Started with Talon\n\n## installation\n\nTalon is part of the Raven ecosystem and will be available through OPAM:\n\n<!-- $MDX skip -->\n```bash\nopam install talon\n```\n\nFor now, you can build from source:\n\n<!-- $MDX skip -->\n```bash\ngit clone https://github.com/raven-ml/raven.git\ncd raven\nopam install . --deps-only\ndune build\n```\n\n## your first dataframe\n\nLet's create a simple dataframe and explore its features:\n\n```ocaml\nopen Talon\n\n(* Create a dataframe from arrays *)\nlet df = create [\n  (\"name\", Col.string [|\"Alice\"; \"Bob\"; \"Charlie\"; \"Dana\"|]);\n  (\"age\", Col.int32 [|25l; 30l; 35l; 28l|]);\n  (\"height\", Col.float64 [|1.70; 1.80; 1.75; 1.65|]);\n  (\"weight\", Col.float64 [|65.0; 82.0; 90.0; 70.0|]);\n  (\"active\", Col.bool [|true; false; true; true|])\n]\n\n(* Check the shape *)\nlet () = Printf.printf \"Rows: %d, Columns: %d\\n\"\n  (num_rows df) (num_columns df)\n\n(* Print the dataframe *)\nlet () = print df\n```\n\n## adding computed columns\n\nOne of Talon's strengths is type-safe row operations:\n\n```ocaml\n(* Calculate BMI: weight / (height^2) *)\nlet df = with_column df \"bmi\" Nx.float64\n  Row.(map2 (number \"weight\") (number \"height\")\n    ~f:(fun w h -> w /. 
(h ** 2.)))\n\n(* Add a categorical column based on BMI *)\nlet categories =\n  match to_array Nx.float64 df \"bmi\" with\n  | Some arr ->\n    Array.map (fun bmi ->\n      if bmi < 18.5 then \"underweight\"\n      else if bmi < 25.0 then \"normal\"\n      else if bmi < 30.0 then \"overweight\"\n      else \"obese\") arr\n  | None -> [||]\n\nlet df = add_column df \"category\" (Col.string categories)\n```\n\n## row-wise operations\n\nTalon excels at operations across many columns:\n\n```ocaml\n(* Sum across multiple columns *)\nlet df_scores = create [\n  (\"student\", Col.string [|\"Alice\"; \"Bob\"|]);\n  (\"math\", Col.float64 [|92.0; 85.0|]);\n  (\"science\", Col.float64 [|88.0; 92.0|]);\n  (\"history\", Col.float64 [|95.0; 78.0|]);\n  (\"english\", Col.float64 [|90.0; 88.0|])\n]\n\n(* Calculate total score *)\nlet scores = Agg.row_sum df_scores\n  ~names:[\"math\"; \"science\"; \"history\"; \"english\"]\nlet df_scores = add_column df_scores \"total\" scores\n\n(* Calculate average score *)\nlet avg = Agg.row_mean df_scores\n  ~names:[\"math\"; \"science\"; \"history\"; \"english\"]\nlet df_scores = add_column df_scores \"average\" avg\n```\n\n## filtering and sorting\n\n```ocaml\n(* Filter students with average >= 90 *)\nlet top_students = filter_by df_scores\n  Row.(map (number \"average\") ~f:(fun avg -> avg >= 90.0))\n\n(* Sort by total score descending *)\nlet sorted = sort_values ~ascending:false df_scores \"total\"\n```\n\n## working with column selectors\n\nTalon provides composable column selection via `select_columns`:\n\n```ocaml\n(* Select all numeric columns *)\nlet numeric_cols = select_columns df `Numeric\n\n(* Select columns by prefix using standard list operations *)\nlet price_cols =\n  List.filter (fun n -> String.starts_with ~prefix:\"price_\" n) (column_names df)\n\n(* Select all except specific columns *)\nlet feature_cols =\n  List.filter (fun n -> not (List.mem n [\"id\"; \"name\"; \"timestamp\"]))\n    (column_names df)\n\n(* Use 
selectors in operations *)\nlet row_totals = Agg.row_sum df ~names:numeric_cols\n```\n\n## functional transformations\n\nUse applicative functors for elegant row transformations:\n\n```ocaml\nlet df = create [\n  (\"a\", Col.float64 [|1.0; 2.0; 3.0|]);\n  (\"b\", Col.float64 [|4.0; 5.0; 6.0|]);\n  (\"c\", Col.float64 [|7.0; 8.0; 9.0|]);\n  (\"x\", Col.float64 [|10.0; 20.0; 30.0|]);\n  (\"y\", Col.float64 [|0.5; 0.5; 0.5|]);\n  (\"z\", Col.float64 [|1.0; 2.0; 3.0|])\n]\n\n(* Add multiple computed columns *)\nlet df = with_column df \"sum\" Nx.float64\n  Row.(map3 (number \"a\") (number \"b\") (number \"c\")\n    ~f:(fun a b c -> a +. b +. c))\n\nlet df = with_column df \"product\" Nx.float64\n  Row.(map3 (number \"a\") (number \"b\") (number \"c\")\n    ~f:(fun a b c -> a *. b *. c))\n\n(* Use applicative operations *)\nlet df = with_column df \"result\" Nx.float64\n  Row.(map3 (number \"x\") (number \"y\") (number \"z\")\n    ~f:(fun a b c -> a *. b +. c))\n```\n\n## data manipulation\n\n### joins\n\n```ocaml\nlet df1 = create [\n  (\"id\", Col.int32 [|1l; 2l; 3l|]);\n  (\"name\", Col.string [|\"Alice\"; \"Bob\"; \"Charlie\"|])\n]\n\nlet df2 = create [\n  (\"id\", Col.int32 [|2l; 3l; 4l|]);\n  (\"score\", Col.float64 [|85.0; 92.0; 88.0|])\n]\n\n(* Inner join *)\nlet joined = join df1 df2 ~on:\"id\" ~how:`Inner ()\n\n(* Left join *)\nlet left_joined = join df1 df2 ~on:\"id\" ~how:`Left ()\n```\n\n### pivot tables\n\n```ocaml\nlet sales = create [\n  (\"date\", Col.string [|\"2024-01\"; \"2024-01\"; \"2024-02\"; \"2024-02\"|]);\n  (\"product\", Col.string [|\"A\"; \"B\"; \"A\"; \"B\"|]);\n  (\"amount\", Col.float64 [|100.0; 150.0; 120.0; 180.0|])\n]\n\nlet pivoted = pivot sales ~index:\"date\" ~columns:\"product\" ~values:\"amount\" ()\n```\n\n## loading and saving data\n\nUse `Talon_csv` to control types and null handling on I/O:\n\n<!-- $MDX skip -->\n```ocaml\nlet df = Talon_csv.read ~dtype_spec:[\"id\", `Int64; \"score\", `Float64] \"data.csv\"\nlet () = 
Talon_csv.write \"clean.csv\" df\n```\n\n## next steps\n\nCheck out the [pandas Comparison](/docs/talon/pandas-comparison/) to see how Talon's functional approach differs from pandas.\n"
  },
  {
    "path": "packages/talon/doc/02-row-operations.md",
    "content": "# Row Operations\n\nTalon's `Row` module is an applicative functor for type-safe, row-wise computations. This is Talon's most distinctive feature compared to pandas, where column operations are typically stringly-typed.\n\n## The Row Applicative\n\nA `'a row` is a computation that, when executed against a dataframe, produces a value of type `'a` for each row. You build row computations declaratively, then apply them with `with_column`, `filter_by`, or `map`.\n\n### Column accessors\n\nExtract typed values from named columns:\n\n<!-- $MDX skip -->\n```ocaml\nopen Talon\n\nRow.float64 \"height\"         (* float row — from a float64 column *)\nRow.int32 \"age\"              (* int32 row — from an int32 column *)\nRow.string \"name\"            (* string row — from a string column *)\nRow.bool \"active\"            (* bool row — from a bool column *)\nRow.number \"score\"           (* float row — coerces any numeric type to float *)\n```\n\n`number` is the most flexible accessor — it works with any numeric column type (int32, int64, float32, float64) and coerces to `float`.\n\n### Transforming with map\n\nApply a function to each row's value:\n\n<!-- $MDX skip -->\n```ocaml\n(* Double every value *)\nlet doubled = Row.map (Row.float64 \"price\") ~f:(fun x -> x *. 2.)\n\n(* Combine two columns *)\nlet full_name =\n  Row.map2 (Row.string \"first\") (Row.string \"last\")\n    ~f:(fun f l -> f ^ \" \" ^ l)\n\n(* Three columns at once *)\nlet weighted_score =\n  Row.map3\n    (Row.number \"math\") (Row.number \"science\") (Row.number \"english\")\n    ~f:(fun m s e -> m *. 0.4 +. s *. 0.35 +. e *. 0.25)\n```\n\n## Adding Computed Columns\n\n`with_column` applies a row computation to create a new column:\n\n<!-- $MDX skip -->\n```ocaml\n(* BMI = weight / height² *)\nlet df = with_column df \"bmi\" Nx.Float64\n  Row.(map2 (number \"weight\") (number \"height\")\n    ~f:(fun w h -> w /. (h *. 
h)))\n\n(* Category from numeric value *)\nlet df = with_column df \"grade\" Nx.Int32\n  Row.(map (number \"score\") ~f:(fun s ->\n    if s >= 90. then 4l\n    else if s >= 80. then 3l\n    else if s >= 70. then 2l\n    else 1l))\n```\n\nFor multiple computed columns in one pass, use `with_columns_map`:\n\n<!-- $MDX skip -->\n```ocaml\nlet df = with_columns_map df [\n  (\"bmi\", Nx.Float64,\n    Row.(map2 (number \"weight\") (number \"height\")\n      ~f:(fun w h -> w /. (h *. h))));\n  (\"is_tall\", Nx.Int32,\n    Row.(map (number \"height\") ~f:(fun h ->\n      if h > 1.80 then 1l else 0l)));\n]\n```\n\n## Filtering\n\n`filter_by` keeps rows where a row computation returns `true`:\n\n<!-- $MDX skip -->\n```ocaml\n(* Simple filter *)\nlet adults = filter_by df\n  Row.(map (int32 \"age\") ~f:(fun a -> a >= 18l))\n\n(* Compound filter *)\nlet qualified = filter_by df\n  Row.(map2 (number \"score\") (bool \"active\")\n    ~f:(fun s a -> s >= 80. && a))\n```\n\n## Working with Multiple Columns\n\n### numbers and map_list\n\nFor operations across a dynamic list of columns:\n\n<!-- $MDX skip -->\n```ocaml\n(* Average across all score columns *)\nlet score_cols = [\"math\"; \"science\"; \"english\"; \"history\"]\nlet avg = Row.map_list (Row.numbers score_cols) ~f:(fun scores ->\n  List.fold_left (+.) 0. scores /. float (List.length scores))\n\nlet df = with_column df \"average\" Nx.Float64 avg\n```\n\n### fold_list\n\nMore memory-efficient than `map_list` for reductions:\n\n<!-- $MDX skip -->\n```ocaml\n(* Total across quarterly columns *)\nlet total = Row.fold_list\n  (Row.numbers [\"q1\"; \"q2\"; \"q3\"; \"q4\"])\n  ~init:0. 
~f:(+.)\n```\n\n### sequence\n\nCollect values from multiple computations into a list:\n\n<!-- $MDX skip -->\n```ocaml\nlet all_scores = Row.sequence [\n  Row.number \"math\";\n  Row.number \"science\";\n  Row.number \"english\";\n]\n(* all_scores : float list row *)\n```\n\n## Row-wise Aggregations (Row.Agg)\n\n`Row.Agg` provides efficient horizontal aggregations — computing statistics across columns within each row:\n\n<!-- $MDX skip -->\n```ocaml\n(* Sum across score columns *)\nlet total = Row.Agg.sum (Row.numbers [\"math\"; \"science\"; \"english\"])\n\n(* Mean, ignoring nulls *)\nlet avg = Row.Agg.mean ~skipna:true (Row.numbers [\"q1\"; \"q2\"; \"q3\"; \"q4\"])\n\n(* Min and max across columns *)\nlet best = Row.Agg.max (Row.numbers [\"test1\"; \"test2\"; \"test3\"])\nlet worst = Row.Agg.min (Row.numbers [\"test1\"; \"test2\"; \"test3\"])\n\n(* Weighted sum *)\nlet weights = Nx.create Nx.Float64 [|3|] [|0.4; 0.35; 0.25|]\nlet weighted = Row.Agg.dot weights (Row.numbers [\"math\"; \"science\"; \"english\"])\n\n(* Boolean reductions *)\nlet all_pass = Row.Agg.all [\n  Row.(map (number \"math\") ~f:(fun x -> x >= 50.));\n  Row.(map (number \"science\") ~f:(fun x -> x >= 50.));\n]\n```\n\nThese are more efficient than `map_list` followed by manual reduction because they use vectorized Nx operations internally.\n\n## Row Metadata\n\n### index\n\nAccess the current row index:\n\n<!-- $MDX skip -->\n```ocaml\nlet df = with_column df \"row_num\" Nx.Int32\n  Row.(map index ~f:Int32.of_int)\n```\n\n## Nullable Columns\n\nFor columns that may contain null values, use the `_opt` accessors:\n\n<!-- $MDX skip -->\n```ocaml\n(* Returns float option row instead of float row *)\nlet maybe_score = Row.float64_opt \"score\"\n\n(* Handle nulls explicitly *)\nlet filled = Row.map maybe_score ~f:(function\n  | Some v -> v\n  | None -> 0.)\n```\n\n## Column-wise Aggregations (Agg)\n\nFor completeness, Talon also provides column-wise aggregations via the top-level `Agg` module. 
These produce scalar results from entire columns. All numeric aggregations coerce to float, so the same functions work for any numeric column:\n\n<!-- $MDX skip -->\n```ocaml\n(* Column-wise: single value from entire column *)\nlet avg_score = Agg.mean df \"score\"\nlet total = Agg.sum df \"revenue\"\nlet maximum = Agg.max df \"temperature\"\nlet non_null = Agg.count df \"name\"  (* number of non-null values *)\n```\n\n## Next Steps\n\n- [Getting Started](/docs/talon/getting-started/) — basic DataFrame creation and manipulation\n- [pandas Comparison](/docs/talon/pandas-comparison/) — side-by-side reference\n"
  },
  {
    "path": "packages/talon/doc/03-pandas-comparison.md",
    "content": "# Talon vs. pandas – A Practical Comparison\n\nThis guide explains how Talon's dataframe model relates to Python's [pandas](https://pandas.pydata.org/), focusing on:\n\n* How core concepts map (DataFrame, Series, dtypes, nulls)\n* Where the APIs feel similar vs. deliberately different\n* How to translate common pandas patterns into Talon\n\nIf you already use pandas, this should be enough to become productive in Talon quickly.\n\n---\n\n## 1. Big-Picture Differences\n\n| Aspect          | pandas (Python)                                           | Talon (OCaml)                                                       |\n| --------------- | --------------------------------------------------------- | ------------------------------------------------------------------- |\n| Language        | Dynamic, interpreted                                      | Statically typed, compiled                                          |\n| Core table type | `pd.DataFrame`                                            | `Talon.t`                                                           |\n| Column type     | `pd.Series`                                               | `Talon.Col.t` (abstract)                                            |\n| Numeric backend | NumPy                                                     | Nx                                                                  |\n| Typing model    | Dtypes checked at runtime                                 | Dtypes tracked at type & value-level via GADTs                      |\n| Null semantics  | NaN, `pd.NA`, object `None`, nullable dtypes              | Explicit null masks for numerics, `option` values for strings/bools |\n| Row-wise logic  | `DataFrame.apply`, vectorized ops                         | `Row` applicative combinators, compiled to loops                    |\n| Groupby / joins | `groupby`, `merge`, `join`                                | `group_by`, `join`                                                  
|\n| Reshaping       | `pivot`, `melt`, `stack`, `unstack`                       | `pivot`, `melt`                                                     |\n| I/O             | `read_csv`, `to_csv`, `read_json`, `to_json`, etc.        | `Talon_csv.read/write`                                              |\n| Mutability      | DataFrames mutably changed by convention (but not always) | Immutable `Talon.t`; operations return new dataframes               |\n\n**Talon semantics to know (read once):**\n- Null keys never match in joins; inner joins drop null-key rows.\n- Row-wise reducers default to `skipna=true`; set `~skipna:false` to propagate nulls.\n- `Row.number` coerces numerics to float64; use `Row.float64`/`int32`/`int64` to avoid upcasting.\n- Dataframes are immutable; every operation returns a new `Talon.t`.\n\n---\n\n## 2. Core Data Model: DataFrame & Column\n\n### 2.1 DataFrame\n\n**pandas**\n\n```python\nimport pandas as pd\n\ndf = pd.DataFrame({\n    \"name\": [\"Alice\", \"Bob\"],\n    \"age\":  [25, 30],\n})\n```\n\n**Talon**\n\n```ocaml\nopen Talon\n\nlet df =\n  create\n    [\n      (\"name\", Col.string [| \"Alice\"; \"Bob\" |]);\n      (\"age\",  Col.int32   [| 25l; 30l |]);\n    ]\n```\n\nKey parallels:\n\n* Both are *row-oriented* logical tables with named, homogeneous columns.\n* `Talon.t` is immutable; every transformation returns a new dataframe.\n\n### 2.2 Column Representation\n\n**pandas** columns are `Series`, dynamically typed: dtype is metadata, but Python won't stop you from doing `df[\"name\"] + 1` until runtime.\n\n**Talon** columns are opaque (`Col.t`), internally storing:\n\n* Numeric data backed by 1D Nx tensors with an optional null mask\n* String data as `string option array`\n* Boolean data as `bool option array`\n\nThis gives Talon:\n\n* Runtime knowledge of *exact* numeric dtype.\n* Explicit representation of nulls instead of overloading special values.\n\n---\n\n## 3. 
Dtypes & Type Safety\n\n### 3.1 Creating Columns\n\n**pandas**\n\n```python\npd.Series([1.0, 2.0, 3.0], dtype=\"float64\")\npd.Series([1, 2, 3], dtype=\"int32\")\npd.Series([\"a\", \"b\"], dtype=\"string[python]\")\n```\n\n**Talon**\n\n```ocaml\nlet _ = Col.float64 [| 1.0; 2.0; 3.0 |]\nlet _ = Col.int32   [| 1l; 2l; 3l |]\nlet _ = Col.string  [| \"a\"; \"b\" |]\n```\n\nNullable equivalents:\n\n```ocaml\nlet _ = Col.float64_opt [| Some 1.0; None; Some 3.0 |]\nlet _ = Col.int32_opt   [| Some 42l; None; Some 100l |]\nlet _ = Col.string_opt  [| Some \"x\"; None; Some \"y\" |]\nlet _ = Col.bool_opt    [| Some true; None; Some false |]\n```\n\n### 3.2 Consequences of Strong Typing\n\n* Talon will fail fast if you try to use a column with the wrong type accessor (e.g. `Row.int32 \"name\"`).\n* Numeric aggregations (`Agg.sum`, `Agg.mean`, etc.) coerce any numeric column to float, so you don't need separate modules for float vs int aggregations.\n* String and boolean operations live in dedicated sub-modules (`Agg.String`, `Agg.Bool`).\n\nWhere pandas often says \"this is probably fine, let's try\", Talon tends to say \"pick the right function for this dtype\".\n\n---\n\n## 4. Null / Missing Data Semantics\n\nThis is one of the biggest conceptual differences.\n\n### 4.1 Representation\n\n**pandas**\n\n* Historically: NaN for numeric, `None`/`np.nan` for object; increasingly `pd.NA` and nullable dtypes.\n* Null semantics vary slightly by dtype (especially between legacy and new nullable dtypes).\n\n**Talon**\n\n* **Numeric columns**: explicit optional boolean mask; payload values (including `nan`, `Int32.min_int`, etc.) 
are treated as *regular data* unless masked.\n* **String / Bool columns**: `None` = null, `Some v` = non-null.\n\nSo the \"source of truth\" for missingness is:\n\n* Numeric: the mask attached to the column.\n* String/Bool: `option` constructors.\n\n### 4.2 Column-level utilities\n\n**pandas**\n\n```python\ndf[\"score\"].isna()\ndf[\"score\"].notna()\ndf[\"score\"].dropna()\ndf[\"score\"].fillna(0.0)\n```\n\n**Talon**\n\n<!-- $MDX skip -->\n```ocaml\n(* Column-level *)\nlet has_nulls   = Col.has_nulls col\nlet null_count  = Col.null_count col\nlet no_nulls    = Col.drop_nulls col\n\n(* Fill nulls with a single-element column of the same type *)\nlet filled = Col.fill_nulls col ~value:(Col.float64 [| 0.0 |])\n```\n\n### 4.3 DataFrame-level null handling\n\n**pandas**\n\n```python\ndf.dropna()                 # drop rows with any null\ndf.dropna(subset=[\"col1\"])  # only check some columns\ndf[\"col\"].isna()            # mask\ndf[\"col\"].fillna(0)\n```\n\n**Talon**\n\n<!-- $MDX skip -->\n```ocaml\nlet cleaned   = drop_nulls df              (* all columns *)\nlet cleaned_x = drop_nulls ~subset:[\"x\"] df\n\nlet col = get_column_exn df \"x\"\nlet has = Col.has_nulls col\nlet n   = Col.null_count col\n\nlet df' = fill_null df \"score\" ~with_value:(`Float 0.0)\n```\n\n`drop_nulls` and `fill_null` are the closest conceptual equivalents to `dropna` and `fillna`.\n\n---\n\n## 5. 
Constructing DataFrames (and I/O)\n\n### 5.1 From in-memory data\n\n**pandas**\n\n```python\ndf = pd.DataFrame(\n    {\"name\": [\"Alice\", \"Bob\"], \"age\": [25, 30], \"score\": [85.5, 92.0]},\n)\n```\n\n**Talon**\n\n```ocaml\nlet df =\n  create\n    [\n      (\"name\",  Col.string  [| \"Alice\"; \"Bob\" |]);\n      (\"age\",   Col.int32   [| 25l; 30l |]);\n      (\"score\", Col.float64 [| 85.5; 92.0 |]);\n    ]\n```\n\nFrom Nx tensors:\n\n<!-- $MDX skip -->\n```ocaml\nlet t1 = Nx.create Nx.float64 [| 3 |] [| 1.0; 2.0; 3.0 |]\nlet t2 = Nx.create Nx.float64 [| 3 |] [| 4.0; 5.0; 6.0 |]\nlet df = of_tensors ~names:[ \"x\"; \"y\" ] [ t1; t2 ]\n```\n\nFrom a 2D tensor:\n\n<!-- $MDX skip -->\n```ocaml\nlet t = Nx.create Nx.float64 [| 2; 3 |] [| 1.; 2.; 3.; 4.; 5.; 6. |]\nlet df = of_nx ~names:[ \"x\"; \"y\"; \"z\" ] t\n```\n\n### 5.2 CSV I/O\n\n**pandas**\n\n```python\ndf = pd.read_csv(\"data.csv\")\ndf.to_csv(\"out.csv\", index=False)\n```\n\n**Talon**\n\n<!-- $MDX skip -->\n```ocaml\nlet df =\n  Talon_csv.read\n    ~sep:','\n    ~na_values:[\"\"; \"NA\"; \"N/A\"; \"null\"; \"NULL\"]\n    \"data.csv\"\n\nlet () = Talon_csv.write ~sep:',' \"out.csv\" df\n```\n\nFrom/to string:\n\n<!-- $MDX skip -->\n```ocaml\nlet csv = Talon_csv.to_string df\nlet df' = Talon_csv.of_string csv\n```\n\n---\n\n## 6. 
Selecting & Inspecting Columns\n\n### 6.1 Column discovery\n\n**pandas**\n\n```python\ndf.columns\ndf.dtypes\nlen(df)\ndf.empty\n```\n\n**Talon**\n\n```ocaml\nlet (rows, cols) = shape df\nlet n_rows       = num_rows df\nlet n_cols       = num_columns df\nlet names        = column_names df\n\nlet types : (string * [ `Float32 | `Float64 | `Int32 | `Int64 | `Bool | `String | `Other ]) list =\n  column_types df\n\nlet is_empty = is_empty df\n```\n\nType-based selection (roughly `df.select_dtypes`):\n\n```ocaml\nlet numeric_cols = select_columns df `Numeric   (* floats + ints *)\nlet float_cols   = select_columns df `Float     (* float32/64 *)\nlet int_cols     = select_columns df `Int\nlet bool_cols    = select_columns df `Bool\nlet string_cols  = select_columns df `String\n```\n\nName-based selection uses standard list operations:\n\n<!-- $MDX skip -->\n```ocaml\nlet prefixed = List.filter (fun n -> String.starts_with ~prefix:\"temp_\" n)\n  (column_names df)\nlet suffixed = List.filter (fun n -> String.ends_with ~suffix:\"_score\" n)\n  (column_names df)\nlet others = List.filter (fun n -> not (List.mem n [\"id\"; \"target\"]))\n  (column_names df)\n```\n\n### 6.2 Getting and manipulating single columns\n\n**pandas**\n\n```python\nage_series = df[\"age\"]\ndf[\"ratio\"] = df[\"a\"] / df[\"b\"]\n```\n\n**Talon**\n\n```ocaml\nlet age_col   = get_column_exn df \"age\"\n\nlet df' =\n  add_column df \"ratio\"\n    (Col.float64 [| 1.0; 2.0 |])  (* or use with_column, see below *)\n```\n\nDrop / rename:\n\n```ocaml\nlet df_no_age   = drop_column df \"age\"\nlet df_relabel  = rename_column df ~old_name:\"age\" ~new_name:\"age_years\"\nlet df_pruned   = drop_columns df [ \"name\"; \"score\" ]\n```\n\nSelect subsets:\n\n```ocaml\nlet df_small  = select df [ \"name\"; \"age\" ]        (* error if missing *)\nlet df_loose  = select ~strict:false df [ \"name\"; \"maybe\" ] (* ignores missing *)\nlet df_reordered = reorder_columns df [ \"age\"; \"name\" ]\n```\n\nExtract as 
arrays, like `df[\"x\"].to_numpy()`:\n\n<!-- $MDX skip -->\n```ocaml\nlet xs : float array option    = to_array Nx.float64 df \"x\"\nlet ys : int32 array option    = to_array Nx.int32   df \"y\"\nlet zs : string option array option = to_string_array df \"z\"\n```\n\n---\n\n## 7. Row-wise Computations\n\npandas often uses:\n\n* vectorized operations (`df[\"a\"] + df[\"b\"]`)\n* `DataFrame.apply` / `Series.apply`.\n\nTalon uses the `Row` applicative to define per-row computations.\n\n### 7.1 Basic accessors\n\n**pandas**\n\n```python\n# per-row access\ndf.apply(lambda row: row[\"a\"] + row[\"b\"], axis=1)\n```\n\n**Talon**\n\n<!-- $MDX skip -->\n```ocaml\nopen Row\n\nlet sum_ab : float row =\n  map2 (float64 \"a\") (float64 \"b\") ~f:( +. )\n```\n\nUse this with `map` / `with_column`:\n\n<!-- $MDX skip -->\n```ocaml\nlet df' =\n  with_column df \"sum_ab\" Nx.float64 sum_ab\n```\n\nAvailable accessors:\n\n* `float32`, `float64`, `int32`, `int64`\n* `string`, `bool`\n* `number` – numeric column coerced to float\n* Option-aware variants: `float64_opt`, `int32_opt`, `string_opt`, `bool_opt`\n* `index` – row index\n* Helpers: `map`, `map2`, `map3`, `both`, `sequence`, `fold_list`\n\n### 7.2 Filtering rows\n\n**pandas**\n\n```python\nadults = df[df[\"age\"] > 25]\n```\n\n**Talon**\n\n```ocaml\nlet adults =\n  filter_by df\n    Row.(\n      map (int32 \"age\") ~f:(fun age -> age > 25l)\n    )\n```\n\nOr with boolean mask like `df[df[\"mask\"]]`:\n\n<!-- $MDX skip -->\n```ocaml\nlet mask : bool array = [|true; false; true|]\nlet filtered = filter df mask\n```\n\n### 7.3 Adding multiple derived columns\n\n**pandas**\n\n```python\ndf[\"sum\"]   = df[\"a\"] + df[\"b\"]\ndf[\"ratio\"] = df[\"a\"] / df[\"b\"]\n```\n\n**Talon**\n\n<!-- $MDX skip -->\n```ocaml\nlet df' = df\n  |> fun df -> with_column df \"sum\" Nx.float64\n    Row.(map2 (float64 \"a\") (float64 \"b\") ~f:( +. 
))\n  |> fun df -> with_column df \"ratio\" Nx.float64\n    Row.(map2 (float64 \"a\") (float64 \"b\") ~f:( /. ))\n```\n\nOr add multiple pre-computed columns at once with `with_columns`:\n\n<!-- $MDX skip -->\n```ocaml\nlet df' = with_columns df [\n  (\"col1\", Col.float64 [| 1.0; 2.0 |]);\n  (\"col2\", Col.float64 [| 3.0; 4.0 |]);\n]\n```\n\n---\n\n## 8. Column-wise Aggregations & Descriptives\n\n### 8.1 Simple aggregations\n\n**pandas**\n\n```python\ndf[\"score\"].sum()\ndf[\"score\"].mean()\ndf[\"score\"].std()\ndf[\"score\"].min()\ndf[\"score\"].max()\ndf[\"score\"].median()\ndf[\"score\"].quantile(0.25)\n```\n\n**Talon**\n\nAll numeric aggregations coerce to float, so a single set of functions works for any numeric column:\n\n```ocaml\nlet sum_score   = Agg.sum  df \"score\"\nlet mean_score  = Agg.mean df \"score\"\nlet std_score   = Agg.std  df \"score\"\nlet min_score   = Agg.min  df \"score\"\nlet max_score   = Agg.max  df \"score\"\nlet median      = Agg.median   df \"score\"\nlet q25         = Agg.quantile df \"score\" ~q:0.25\n```\n\nInteger columns work with the same functions:\n\n```ocaml\nlet total  = Agg.sum  df \"age\"   (* returns float *)\nlet min_a  = Agg.min  df \"age\"   (* returns float option *)\nlet mean_a = Agg.mean df \"age\"   (* returns float *)\n```\n\n### 8.2 Strings and booleans\n\n**pandas**\n\n```python\ndf[\"name\"].min()\ndf[\"name\"].max()\ndf[\"name\"].mode()\ndf[\"name\"].nunique()\n(df[\"flag\"]).all()\n(df[\"flag\"]).any()\n(df[\"flag\"]).mean()  # proportion true\n```\n\n**Talon**\n\n<!-- $MDX skip -->\n```ocaml\nlet s_min    = Agg.String.min     df \"name\"\nlet s_max    = Agg.String.max     df \"name\"\nlet s_mode   = Agg.String.mode    df \"name\"\nlet s_unique = Agg.String.unique  df \"name\"  (* string array *)\nlet s_nuniq  = Agg.String.nunique df \"name\"\n\nlet b_all    = Agg.Bool.all  df \"flag\"\nlet b_any    = Agg.Bool.any  df \"flag\"\nlet b_sum    = Agg.Bool.sum  df \"flag\"\nlet b_mean   = Agg.Bool.mean df 
\"flag\" (* proportion true *)\n```\n\n### 8.3 Generic quantities\n\n**pandas**\n\n```python\ndf[\"x\"].count()\ndf[\"x\"].nunique()\ndf[\"x\"].value_counts()\ndf[\"x\"].isna()\n```\n\n**Talon**\n\n<!-- $MDX skip -->\n```ocaml\nlet count     = Agg.count df \"x\"\nlet nunique   = Agg.nunique df \"x\"\n\nlet vc = value_counts df \"x\"\n(* vc is a dataframe with \"value\" and \"count\" columns *)\n\nlet null_col : Col.t = is_null df \"x\"\n(* null_col is a boolean column where true indicates null *)\n```\n\n### 8.4 `describe`\n\n**pandas**\n\n```python\ndf.describe()\n```\n\n**Talon**\n\n<!-- $MDX skip -->\n```ocaml\nlet stats_df = describe df\n```\n\n* `describe` in Talon returns a `Talon.t` whose rows are `\"count\"`, `\"mean\"`, `\"std\"`, `\"min\"`, `\"25%\"`, `\"50%\"`, `\"75%\"`, `\"max\"` and columns are numeric column names.\n\n---\n\n## 9. Row-wise Aggregations (`axis=1` in pandas)\n\n**pandas**\n\n```python\ndf[\"row_sum\"]   = df[[\"a\", \"b\", \"c\"]].sum(axis=1)\ndf[\"row_mean\"]  = df[[\"a\", \"b\", \"c\"]].mean(axis=1)\ndf[\"row_max\"]   = df[[\"a\", \"b\", \"c\"]].max(axis=1)\ndf[\"dot\"]       = df[[\"x\", \"y\"]] @ np.array([0.2, 0.8])\ndf[\"any_flag\"]  = df[[\"f1\", \"f2\", \"f3\"]].any(axis=1)\ndf[\"all_flag\"]  = df[[\"f1\", \"f2\", \"f3\"]].all(axis=1)\n```\n\n**Talon**\n\nUse `Agg.row_*` (vectorized across columns):\n\n```ocaml\nlet df_row =\n  create\n    [\n      (\"a\", Col.int32 [| 1l; 2l; 3l |]);\n      (\"b\", Col.int32 [| 4l; 5l; 6l |]);\n      (\"c\", Col.int32 [| 7l; 8l; 9l |]);\n      (\"x\", Col.float32 [| 1.0; 2.0; 3.0 |]);\n      (\"y\", Col.float32 [| 0.2; 0.8; 1.0 |]);\n      (\"f1\", Col.bool [| true; false; true |]);\n      (\"f2\", Col.bool [| true; true; false |]);\n      (\"f3\", Col.bool [| false; true; true |]);\n    ]\n\nlet row_sum_col   = Agg.row_sum  df_row ~names:[ \"a\"; \"b\"; \"c\" ]\nlet row_mean_col  = Agg.row_mean df_row ~names:[ \"a\"; \"b\"; \"c\" ]\nlet row_max_col   = Agg.row_max  df_row ~names:[ \"a\"; 
\"b\"; \"c\" ]\n\nlet dot_col =\n  Agg.dot df_row ~names:[ \"x\"; \"y\" ] ~weights:[| 0.2; 0.8 |]\n\nlet any_flag_col  = Agg.row_any df_row ~names:[ \"f1\"; \"f2\"; \"f3\" ]\nlet all_flag_col  = Agg.row_all df_row ~names:[ \"f1\"; \"f2\"; \"f3\" ]\n\nlet df' =\n  with_columns df_row\n    [\n      (\"row_sum\",  row_sum_col);\n      (\"dot\",      dot_col);\n      (\"any_flag\", any_flag_col);\n    ]\n```\n\nThese are direct analogues of `axis=1` aggregations in pandas, implemented with Nx for performance.\n\n---\n\n## 10. Sorting, Sampling, and Slicing\n\n### 10.1 Sorting\n\n**pandas**\n\n```python\ndf.sort_values(\"age\")\ndf.sort_values(\"age\", ascending=False)\n```\n\n**Talon**\n\n```ocaml\nlet df_sorted     = sort_values df \"age\"\nlet df_descending = sort_values ~ascending:false df \"age\"\n```\n\nCustom key sort (like `df.sort_values(key=...)`):\n\n```ocaml\nlet people =\n  create\n    [\n      (\"first\", Col.string [| \"Ada\"; \"Bob\"; \"Cara\"; \"Dan\" |]);\n      (\"last\", Col.string [| \"Zane\"; \"Young\"; \"Zane\"; \"Xue\" |]);\n    ]\n\nlet df_sorted_by_composite =\n  sort people\n    Row.(\n      map2 (string \"last\") (string \"first\")\n        ~f:(fun l f -> l ^ \", \" ^ f)\n    )\n    ~compare:String.compare\n```\n\n### 10.2 Sampling\n\n**pandas**\n\n```python\ndf.sample(n=10, replace=True, random_state=42)\ndf.sample(frac=0.1)\n```\n\n**Talon**\n\n<!-- $MDX skip -->\n```ocaml\nlet s1 = sample ~n:10   ~replace:true  ~seed:42 df\nlet s2 = sample ~frac:0.1              df\n```\n\nExactly one of `n` or `frac` must be provided.\n\n### 10.3 Head / tail / slice\n\n**pandas**\n\n```python\ndf.head(5)\ndf.tail(5)\ndf.iloc[10:20]\n```\n\n**Talon**\n\n```ocaml\nlet df_slice =\n  create\n    [\n      ( \"age\",\n        Col.int32 [| 18l; 22l; 25l; 27l; 30l; 31l; 35l; 40l; 42l; 44l; 48l; 50l |]\n      );\n    ]\n\nlet first5  = head df_slice          (* default n=5 *)\nlet last5   = tail df_slice\nlet mid     = slice df_slice ~start:2 
~stop:8\n```\n\n---\n\n## 11. Grouping\n\n### 11.1 Group by existing column\n\n**pandas**\n\n```python\nfor key, group in df.groupby(\"category\"):\n    ...\n```\n\n**Talon**\n\n```ocaml\nlet grouped =\n  create\n    [\n      (\"category\", Col.string [| \"A\"; \"A\"; \"B\"; \"B\"; \"C\" |]);\n      (\"score\", Col.float64 [| 85.; 92.; 78.; 88.; 95. |]);\n    ]\n\nlet groups : (string * t) list = group_by grouped (Row.string \"category\")\n\nlet () =\n  List.iter\n    (fun (key, group_df) ->\n      Printf.printf \"Group %s: rows=%d\\n\" key (num_rows group_df)\n    )\n    groups\n```\n\n### 11.2 Group by computed key\n\n**pandas**\n\n```python\ndf.groupby(df[\"score\"].apply(lambda s: \"A\" if s >= 90 else \"B\"))\n```\n\n**Talon**\n\n```ocaml\nlet scored =\n  create\n    [\n      (\"score\", Col.float64 [| 85.; 92.; 78.; 88.; 95. |]);\n    ]\n\nlet groups =\n  group_by scored\n    Row.(\n      map (float64 \"score\") ~f:(fun s ->\n        if s >= 90.0 then \"A\"\n        else if s >= 80.0 then \"B\"\n        else \"C\")\n    )\n(* groups : (string * t) list *)\n```\n\n`group_by` takes a `Row` computation as the key, which covers both column-based and computed grouping.\n\n---\n\n## 12. 
Joins and Merges\n\n### 12.1 API shape\n\n**pandas**\n\n```python\ndf1.merge(df2, on=\"id\", how=\"inner\")\ndf1.merge(df2, left_on=\"a\", right_on=\"b\", how=\"left\")\ndf1.join(df2.set_index(\"id\"), on=\"id\", how=\"outer\")\n```\n\n**Talon**\n\n```ocaml\nlet df1 =\n  create\n    [\n      (\"id\", Col.int32 [| 1l; 2l |]);\n      (\"value\", Col.float64 [| 10.0; 20.0 |]);\n    ]\n\nlet df2 =\n  create\n    [\n      (\"id\", Col.int32 [| 1l; 2l |]);\n      (\"value\", Col.float64 [| 100.0; 200.0 |]);\n    ]\n\n(* Same key name on both sides *)\nlet joined =\n  join df1 df2 ~on:\"id\" ~how:`Inner ()\n\n(* Different key names *)\nlet df_left =\n  create\n    [\n      (\"a\", Col.int32 [| 1l; 2l |]);\n      (\"val1\", Col.float64 [| 10.0; 20.0 |]);\n    ]\n\nlet df_right =\n  create\n    [\n      (\"b\", Col.int32 [| 1l; 2l |]);\n      (\"val2\", Col.float64 [| 100.0; 200.0 |]);\n    ]\n\nlet merged =\n  join df_left df_right\n    ~on:\"a\" ~right_on:\"b\"\n    ~how:`Left\n    ()\n```\n\nJoin types: `` `Inner | `Left | `Right | `Outer ``\n\nColumn name collisions:\n\n* The join key appears once (for `join` on same name).\n* Other duplicate names get suffixes `(\"_x\", \"_y\")` by default.\n* Customize via `~suffixes:(\"_left\", \"_right\")`.\n\nNull semantics for join keys:\n\n* Null keys never match each other (similar to SQL semantics; different from some pandas corner cases).\n* Inner joins drop null-keyed rows entirely.\n* Outer joins keep null-keyed rows, but they don't match across sides.\n\n---\n\n## 13. 
Reshaping: Pivot & Melt\n\n### 13.1 Pivot\n\n**pandas**\n\n```python\npd.pivot_table(\n    df,\n    index=\"date\",\n    columns=\"product\",\n    values=\"amount\",\n    aggfunc=\"sum\"\n)\n```\n\n**Talon**\n\n```ocaml\nlet df_pivot =\n  create\n    [\n      (\"date\", Col.string [| \"2024-01\"; \"2024-01\"; \"2024-02\"; \"2024-02\" |]);\n      (\"product\", Col.string [| \"A\"; \"B\"; \"A\"; \"B\" |]);\n      (\"amount\", Col.float64 [| 100.0; 150.0; 120.0; 180.0 |]);\n    ]\n\nlet pivoted =\n  pivot df_pivot\n    ~index:\"date\"\n    ~columns:\"product\"\n    ~values:\"amount\"\n    ~agg_func:`Sum\n    ()\n```\n\nSupported `agg_func`: `` `Sum | `Mean | `Count | `Min | `Max ``.\n\n### 13.2 Melt\n\n**pandas**\n\n```python\npd.melt(\n    df,\n    id_vars=[\"id\"],\n    value_vars=[\"A\", \"B\"],\n    var_name=\"variable\",\n    value_name=\"value\",\n)\n```\n\n**Talon**\n\n```ocaml\nlet df_melt =\n  create\n    [\n      (\"id\", Col.int32 [| 1l; 2l |]);\n      (\"A\", Col.float64 [| 10.0; 20.0 |]);\n      (\"B\", Col.float64 [| 30.0; 40.0 |]);\n    ]\n\nlet melted =\n  melt df_melt\n    ~id_vars:[\"id\"]\n    ~value_vars:[\"A\"; \"B\"]\n    ~var_name:\"variable\"\n    ~value_name:\"value\"\n    ()\n```\n\nIf `value_vars` is omitted, Talon uses all columns not in `id_vars`, just like pandas.\n\n---\n\n## 14. Converting to Nx (vs NumPy)\n\n**pandas**\n\n```python\narr = df[[\"x\", \"y\"]].to_numpy(dtype=\"float32\")\n```\n\n**Talon**\n\n<!-- $MDX skip -->\n```ocaml\nlet tensor : (float, Bigarray.float32_elt) Nx.t =\n  to_nx df\n```\n\n* `to_nx` stacks **numeric** columns only (floats and ints).\n* All numeric columns are cast to `float32`.\n* Nulls become `NaN`.\n\nFor more control, extract specific columns and use `Nx.stack` manually.\n\n---\n\n## 15. 
When to Reach for Talon vs pandas\n\n**Use Talon when:**\n\n* You're writing OCaml (obviously) and want a dataframe story compatible with Nx and type-safe numeric code.\n* You want null semantics that are explicit and consistent across operations.\n* You care about compile-time guidance: you'd rather have `Agg.String.min` only accept strings than debug runtime dtype errors.\n* You like functional, immutable pipelines and row computations expressed as pure combinators.\n\n**Use pandas when:**\n\n* You're in Python, especially in a notebook-heavy, exploratory environment.\n* You need the huge ecosystem around pandas (plotting, scikit-learn, statsmodels, etc.).\n* You rely on advanced pandas features Talon doesn't yet model (MultiIndex, time-series index semantics, categorical dtypes, etc.).\n\n---\n\n## 16. Quick Cheat Sheet\n\n
| Task                      | pandas                                | Talon                                                                          |\n| ------------------------- | ------------------------------------- | ------------------------------------------------------------------------------ |\n| Create DF from columns    | `pd.DataFrame({...})`                 | `create [ (\"col\", Col.float64 [\\| ... \\|]); ... ]`                            |\n| Read CSV                  | `pd.read_csv(\"file.csv\")`             | `Talon_csv.read \"file.csv\"`                                                    |\n| Filter rows               | `df[df[\"age\"] > 25]`                  | `filter_by df Row.(map (int32 \"age\") ~f:(fun a -> a > 25l))`                   |\n| Select columns            | `df[[\"a\", \"b\"]]`                      | `select df [\"a\"; \"b\"]`                                                         |\n| Drop null rows            | `df.dropna()`                         | `drop_nulls df`                                                                |\n| Fill nulls                | `df[\"x\"].fillna(0)`                   | ``fill_null df \"x\" ~with_value:(`Float 0.0)``                                  |\n| Column sum                | `df[\"x\"].sum()`                       | `Agg.sum df \"x\"`                                                               |\n| Value counts              | `df[\"x\"].value_counts()`              | `value_counts df \"x\"`                                                          |\n| Group by column           | `df.groupby(\"key\")`                   | `group_by df (Row.string \"key\")`                                               |\n| Join on column            | `df1.merge(df2, on=\"id\", how=\"left\")` | ``join df1 df2 ~on:\"id\" ~how:`Left ()``                                        |\n
| Pivot                     | `pd.pivot_table(df, index=..., ...)`  | `pivot df ~index ~columns ~values ~agg_func ()`                                |\n| Melt                      | `pd.melt(df, ...)`                    | `melt df ~id_vars ~value_vars ()`                                              |\n| Describe numeric columns  | `df.describe()`                       | `describe df`                                                                  |\n| Head / tail               | `df.head(5)`, `df.tail(5)`            | `head ~n:5 df`, `tail ~n:5 df`                                                 |\n| Row sum (axis=1)          | `df[cols].sum(axis=1)`                | `let s = Agg.row_sum df ~names:cols in add_column df \"row_sum\" s`              |\n| Convert to numeric matrix | `df[cols].to_numpy(dtype=\"float32\")`  | `to_nx df`                                                                     |\n"
  },
  {
    "path": "packages/talon/doc/dune",
    "content": "(mdx\n (files *.md)\n (package talon)\n (libraries talon talon.csv nx))\n"
  },
  {
    "path": "packages/talon/doc/index.md",
    "content": "# talon\n\nTalon provides type-safe DataFrames for OCaml, built on Nx arrays. It is the Raven ecosystem's equivalent of pandas and Polars.\n\n## Features\n\n- **Heterogeneous columns** — mix strings, floats, integers, and booleans\n- **Applicative Row operations** — type-safe, composable row-wise computations\n- **First-class null handling** — explicit null masks for numeric columns, Option types for strings and bools\n- **Vectorized aggregations** — column-wise and row-wise reductions backed by Nx\n- **CSV I/O** — read and write CSV files with auto-detection\n- **Built on Nx** — columns are 1-D Nx tensors\n\n## Quick Start\n\n```ocaml\nopen Talon\n\nlet () =\n  let df = create [\n    (\"name\", Col.string [|\"Alice\"; \"Bob\"; \"Charlie\"|]);\n    (\"age\", Col.int32 [|25l; 30l; 35l|]);\n    (\"score\", Col.float64 [|92.5; 87.3; 95.1|]);\n  ] in\n  print df\n```\nShape: (3, 3)\nname\tage\tscore\nAlice\t25\t92.5\nBob\t30\t87.3\nCharlie\t35\t95.1\n\n## Next Steps\n\n- [Getting Started](/docs/talon/getting-started/) — installation, creating and inspecting DataFrames\n- [Row Operations](/docs/talon/row-operations/) — the applicative Row system, computed columns, filtering\n- [pandas Comparison](/docs/talon/pandas-comparison/) — side-by-side reference\n"
  },
  {
    "path": "packages/talon/examples/01-quickstart/README.md",
    "content": "# Quickstart\n\nBasic dataframe creation and column operations. Creates a dataframe from literal data, adds computed columns (BMI, fitness score), computes aggregations, and prints the result.\n"
  },
  {
    "path": "packages/talon/examples/01-quickstart/dune",
    "content": "(executable\n (name main)\n (libraries talon nx))\n"
  },
  {
    "path": "packages/talon/examples/01-quickstart/main.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Talon\n\nlet () =\n  (* Create a simple dataframe *)\n  let df =\n    create\n      [\n        (\"name\", Col.string [| \"Alice\"; \"Bob\"; \"Charlie\"; \"Dana\" |]);\n        (\"age\", Col.int32 [| 25l; 30l; 35l; 28l |]);\n        (\"height\", Col.float64 [| 1.70; 1.80; 1.75; 1.65 |]);\n        (\"weight\", Col.float64 [| 65.0; 82.0; 77.0; 55.0 |]);\n        (\"active\", Col.bool [| true; false; true; true |]);\n      ]\n  in\n\n  Printf.printf \"== quickstart ==\\n\";\n  Printf.printf \"shape: %d rows x %d cols\\n\" (num_rows df) (num_columns df);\n\n  (* Add a BMI column: weight / (height^2) *)\n  let df =\n    with_column df \"bmi\" Nx.float64\n      Row.(\n        map2 (number \"weight\") (number \"height\") ~f:(fun w h -> w /. (h ** 2.)))\n  in\n\n  (* Row-wise \"fitness score\": BMI inverse + activity boost *)\n  let df =\n    with_column df \"fitness\" Nx.float64\n      Row.(\n        map2 (number \"bmi\") (bool \"active\") ~f:(fun bmi active ->\n            (1. /. bmi) +. if active then 0.2 else 0.))\n  in\n\n  (* Column aggregations (operate on a single column) *)\n  let avg_bmi = Agg.mean df \"bmi\" in\n  Printf.printf \"avg BMI: %.3f\\n\" avg_bmi;\n\n  (* Show the head *)\n  print ~max_rows:10 df;\n  ()\n"
  },
  {
    "path": "packages/talon/examples/02-wide-features/README.md",
    "content": "# Wide Features\n\nWorking with wide datasets that have many numeric columns. Selects feature columns by prefix, computes row-wise sums and weighted dot products, and sorts by score.\n"
  },
  {
    "path": "packages/talon/examples/02-wide-features/dune",
    "content": "(executable\n (name main)\n (libraries talon))\n"
  },
  {
    "path": "packages/talon/examples/02-wide-features/main.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Talon\n\nlet () =\n  (* A \"wide\" frame: 5 rows x 8 numeric features *)\n  let df =\n    create\n      [\n        (\"id\", Col.string [| \"u1\"; \"u2\"; \"u3\"; \"u4\"; \"u5\" |]);\n        (\"feat_1\", Col.float64 [| 1.; 4.; 2.; 3.; 1. |]);\n        (\"feat_2\", Col.float64 [| 0.; 1.; 1.; 1.; 2. |]);\n        (\"feat_3\", Col.float64 [| 3.; 0.; 1.; 2.; 0. |]);\n        (\"feat_4\", Col.float64 [| 5.; 2.; 0.; 1.; 3. |]);\n        (\"feat_5\", Col.float64 [| 2.; 2.; 2.; 2.; 2. |]);\n        (\"feat_6\", Col.float64 [| 1.; 0.; 1.; 0.; 1. |]);\n        (\"feat_7\", Col.float64 [| 0.5; 0.2; 0.1; 0.3; 0.4 |]);\n        (\"feat_8\", Col.float64 [| 10.; 9.; 7.; 13.; 8. |]);\n      ]\n  in\n\n  (* Select all feature columns by prefix *)\n  let feats =\n    List.filter\n      (fun n -> String.starts_with ~prefix:\"feat_\" n)\n      (column_names df)\n  in\n\n  (* Row-wise sum across many columns (vectorized) *)\n  let df = add_column df \"row_sum\" (Agg.row_sum ~skipna:true df ~names:feats) in\n\n  (* Weighted score (dot product) *)\n  let weights = [| 0.1; 0.1; 0.1; 0.1; 0.1; 0.05; 0.05; 0.4 |] in\n  let df = add_column df \"score\" (Agg.dot df ~names:feats ~weights) in\n\n  (* Sort by score descending *)\n  let df = sort_values ~ascending:false df \"score\" in\n\n  print ~max_rows:10 df\n"
  },
  {
    "path": "packages/talon/examples/03-selectors/README.md",
    "content": "# Selectors\n\nColumn selection using multiple strategies: by type (numeric, float), by name pattern (prefix, suffix, regex), and by exclusion. Useful for operating on subsets of columns in wide dataframes.\n"
  },
  {
    "path": "packages/talon/examples/03-selectors/dune",
    "content": "(executable\n (name main)\n (libraries talon))\n"
  },
  {
    "path": "packages/talon/examples/03-selectors/main.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Talon\n\nlet () =\n  let df =\n    create\n      [\n        (\"id\", Col.string [| \"a\"; \"b\"; \"c\"; \"d\" |]);\n        (\"age\", Col.int32 [| 20l; 30l; 40l; 50l |]);\n        (\"height_cm\", Col.float64 [| 170.; 180.; 165.; 175. |]);\n        (\"is_member\", Col.bool [| true; false; true; false |]);\n        (\"note\", Col.string [| \"x\"; \"y\"; \"z\"; \"\" |]);\n      ]\n  in\n\n  let numeric = select_columns df `Numeric in\n  let floats = select_columns df `Float in\n  let names = column_names df in\n  let by_prefix =\n    List.filter (fun n -> String.starts_with ~prefix:\"he\" n) names\n  in\n  let by_suffix =\n    List.filter (fun n -> String.ends_with ~suffix:\"_cm\" n) names\n  in\n  let numeric_except_id =\n    List.filter (fun n -> n <> \"id\") (select_columns df `Numeric)\n  in\n\n  Printf.printf \"numeric: [%s]\\n\" (String.concat \", \" numeric);\n  Printf.printf \"float:   [%s]\\n\" (String.concat \", \" floats);\n  Printf.printf \"prefix 'he': [%s]\\n\" (String.concat \", \" by_prefix);\n  Printf.printf \"suffix '_cm': [%s]\\n\" (String.concat \", \" by_suffix);\n  Printf.printf \"numeric except id: [%s]\\n\"\n    (String.concat \", \" numeric_except_id)\n"
  },
  {
    "path": "packages/talon/examples/04-row-reduce/README.md",
    "content": "# Row Reduce\n\nRow-wise aggregations with NaN handling. Demonstrates `Row.Agg.sum` and `Row.Agg.mean` across numeric columns with both `skipna:true` (skip NaN values) and `skipna:false` (propagate NaN) semantics.\n"
  },
  {
    "path": "packages/talon/examples/04-row-reduce/dune",
    "content": "(executable\n (name main)\n (libraries talon))\n"
  },
  {
    "path": "packages/talon/examples/04-row-reduce/main.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Talon\n\nlet () =\n  (* Demonstrate row-wise reductions and skipna semantics *)\n  let nan = Stdlib.nan in\n  let df =\n    create\n      [\n        (\"a\", Col.float64 [| 1.; nan; 3.; 4. |]);\n        (\"b\", Col.float64 [| 0.; 2.; nan; 1. |]);\n        (* ints can use sentinels to represent nulls in Talon (see docs). Here\n           we keep them valid for simplicity. *)\n        (\"c\", Col.int32 [| 10l; 20l; 30l; 40l |]);\n      ]\n  in\n\n  let nums = select_columns df `Numeric in\n\n  (* Row-wise sum/mean across all numeric columns *)\n  let df =\n    add_column df \"sum_skipna\" (Agg.row_sum ~skipna:true df ~names:nums)\n  in\n  let df =\n    add_column df \"mean_skipna\" (Agg.row_mean ~skipna:true df ~names:nums)\n  in\n\n  (* Strict variant (NaN participates) *)\n  let df =\n    add_column df \"sum_strict\" (Agg.row_sum ~skipna:false df ~names:nums)\n  in\n  let df =\n    add_column df \"mean_strict\" (Agg.row_mean ~skipna:false df ~names:nums)\n  in\n\n  print ~max_rows:10 df\n"
  },
  {
    "path": "packages/talon/examples/05-sorting-and-grouping/README.md",
    "content": "# Sorting and Grouping\n\nSorting dataframes by column values and grouping rows by a key column. Demonstrates `sort_values` for ordering and `group_by_column` for split-apply-combine aggregations.\n"
  },
  {
    "path": "packages/talon/examples/05-sorting-and-grouping/dune",
    "content": "(executable\n (name main)\n (libraries talon))\n"
  },
  {
    "path": "packages/talon/examples/05-sorting-and-grouping/main.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Talon\n\nlet () =\n  let df =\n    create\n      [\n        (\"city\", Col.string [| \"Paris\"; \"Paris\"; \"Lyon\"; \"Lyon\"; \"Nice\" |]);\n        (\"sales\", Col.float64 [| 1200.; 800.; 450.; 900.; 500. |]);\n        (\"units\", Col.int32 [| 10l; 8l; 5l; 9l; 6l |]);\n      ]\n  in\n\n  Printf.printf \"== original ==\\n\";\n  print df;\n\n  Printf.printf \"\\n== sorted by sales desc ==\\n\";\n  let df_sorted = sort_values ~ascending:false df \"sales\" in\n  print df_sorted;\n\n  Printf.printf \"\\n== group by city, show sums ==\\n\";\n  let groups = group_by df (Row.string \"city\") in\n  List.iter\n    (fun (city_name, sub) ->\n      let total_sales = Agg.sum sub \"sales\" in\n      let total_units = Agg.sum sub \"units\" in\n      Printf.printf \"- %s: sales=%.0f units=%.0f\\n\" city_name total_sales\n        total_units)\n    groups;\n  ()\n"
  },
  {
    "path": "packages/talon/lib/col.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\ntype t =\n  | P : ('a, 'b) Nx.dtype * ('a, 'b) Nx.t * bool array option -> t\n  | S : string option array -> t\n  | B : bool option array -> t\n\n(* Internal helpers *)\n\nlet normalize_mask = function\n  | Some mask when Array.exists Fun.id mask -> Some (Array.copy mask)\n  | _ -> None\n\nlet count_none arr =\n  Array.fold_left (fun acc x -> if Option.is_none x then acc + 1 else acc) 0 arr\n\nlet fill_options arr varr =\n  let result = Array.copy arr in\n  Array.iteri (fun i x -> if Option.is_none x then result.(i) <- varr.(0)) arr;\n  result\n\nlet reindex_nullable_options arr indices len =\n  Array.init len (fun i ->\n      let idx = indices.(i) in\n      if idx >= 0 && idx < Array.length arr then arr.(idx) else None)\n\n(* Constructors *)\n\nlet numeric_default : type a b. 
(a, b) Nx.dtype -> a = function\n  | Nx.Float16 -> Float.nan\n  | Nx.Float32 -> Float.nan\n  | Nx.Float64 -> Float.nan\n  | Nx.BFloat16 -> Float.nan\n  | Nx.Float8_e4m3 -> Float.nan\n  | Nx.Float8_e5m2 -> Float.nan\n  | Nx.Int4 -> 0\n  | Nx.UInt4 -> 0\n  | Nx.Int8 -> 0\n  | Nx.UInt8 -> 0\n  | Nx.Int16 -> 0\n  | Nx.UInt16 -> 0\n  | Nx.Int32 -> Int32.min_int\n  | Nx.UInt32 -> Int32.min_int\n  | Nx.Int64 -> Int64.min_int\n  | Nx.UInt64 -> Int64.min_int\n  | Nx.Complex64 -> Complex.zero\n  | Nx.Complex128 -> Complex.zero\n  | Nx.Bool -> false\n\nlet numeric (type a b) (dtype : (a, b) Nx.dtype) (arr : a array) =\n  let tensor = Nx.create dtype [| Array.length arr |] arr in\n  P (dtype, tensor, None)\n\nlet numeric_opt (type a b) (dtype : (a, b) Nx.dtype) (arr : a option array) =\n  let default = numeric_default dtype in\n  let data = Array.map (fun x -> Option.value x ~default) arr in\n  let mask = Array.map Option.is_none arr in\n  let tensor = Nx.create dtype [| Array.length data |] data in\n  P (dtype, tensor, normalize_mask (Some mask))\n\nlet string arr = S (Array.map (fun x -> Some x) arr)\nlet string_opt arr = S arr\nlet bool arr = B (Array.map (fun x -> Some x) arr)\nlet bool_opt arr = B arr\nlet float32 arr = numeric Nx.float32 arr\nlet float64 arr = numeric Nx.float64 arr\nlet int32 arr = numeric Nx.int32 arr\nlet int64 arr = numeric Nx.int64 arr\nlet float32_opt arr = numeric_opt Nx.float32 arr\nlet float64_opt arr = numeric_opt Nx.float64 arr\nlet int32_opt arr = numeric_opt Nx.int32 arr\nlet int64_opt arr = numeric_opt Nx.int64 arr\n\nlet of_tensor (type a b) (t : (a, b) Nx.t) =\n  match Nx.shape t with\n  | [| _ |] -> P (Nx.dtype t, t, None)\n  | _ -> invalid_arg \"of_tensor: tensor must be 1D\"\n\n(* Properties *)\n\nlet length = function\n  | P (_, t, _) -> Nx.size t\n  | S arr -> Array.length arr\n  | B arr -> Array.length arr\n\nlet has_nulls = function\n  | P (_, _, Some mask) -> Array.exists Fun.id mask\n  | P _ -> false\n  | S arr -> Array.exists 
Option.is_none arr\n  | B arr -> Array.exists Option.is_none arr\n\nlet null_count = function\n  | P (_, _, Some mask) ->\n      Array.fold_left (fun acc b -> if b then acc + 1 else acc) 0 mask\n  | P _ -> 0\n  | S arr -> count_none arr\n  | B arr -> count_none arr\n\nlet null_mask = function P (_, _, mask) -> mask | _ -> None\n\nlet dtype = function\n  | P (Nx.Float32, _, _) -> `Float32\n  | P (Nx.Float64, _, _) -> `Float64\n  | P (Nx.Int32, _, _) -> `Int32\n  | P (Nx.Int64, _, _) -> `Int64\n  | S _ -> `String\n  | B _ -> `Bool\n  | P _ -> `Other\n\nlet is_null_at col i =\n  match col with\n  | P (_, _, Some mask) -> mask.(i)\n  | P _ -> false\n  | S arr -> Option.is_none arr.(i)\n  | B arr -> Option.is_none arr.(i)\n\n(* Generic dtype helpers *)\n\nlet element_to_string (type a b) (dtype : (a, b) Nx.dtype) : a -> string =\n  match dtype with\n  | Nx.Float32 -> string_of_float\n  | Nx.Float64 -> string_of_float\n  | Nx.Float16 -> string_of_float\n  | Nx.BFloat16 -> string_of_float\n  | Nx.Float8_e4m3 -> string_of_float\n  | Nx.Float8_e5m2 -> string_of_float\n  | Nx.Int32 -> Int32.to_string\n  | Nx.UInt32 -> Int32.to_string\n  | Nx.Int64 -> Int64.to_string\n  | Nx.UInt64 -> Int64.to_string\n  | Nx.Int4 -> string_of_int\n  | Nx.UInt4 -> string_of_int\n  | Nx.Int8 -> string_of_int\n  | Nx.UInt8 -> string_of_int\n  | Nx.Int16 -> string_of_int\n  | Nx.UInt16 -> string_of_int\n  | Nx.Complex64 -> fun c -> Printf.sprintf \"%g+%gi\" c.Complex.re c.Complex.im\n  | Nx.Complex128 -> fun c -> Printf.sprintf \"%g+%gi\" c.Complex.re c.Complex.im\n  | Nx.Bool -> string_of_bool\n\nlet element_to_float (type a b) (dtype : (a, b) Nx.dtype) : a -> float =\n  match dtype with\n  | Nx.Float32 -> Fun.id\n  | Nx.Float64 -> Fun.id\n  | Nx.Float16 -> Fun.id\n  | Nx.BFloat16 -> Fun.id\n  | Nx.Float8_e4m3 -> Fun.id\n  | Nx.Float8_e5m2 -> Fun.id\n  | Nx.Int32 -> Int32.to_float\n  | Nx.UInt32 -> Int32.to_float\n  | Nx.Int64 -> Int64.to_float\n  | Nx.UInt64 -> Int64.to_float\n  | Nx.Int4 -> 
float_of_int\n  | Nx.UInt4 -> float_of_int\n  | Nx.Int8 -> float_of_int\n  | Nx.UInt8 -> float_of_int\n  | Nx.Int16 -> float_of_int\n  | Nx.UInt16 -> float_of_int\n  | Nx.Complex64 -> failwith \"element_to_float: complex not supported\"\n  | Nx.Complex128 -> failwith \"element_to_float: complex not supported\"\n  | Nx.Bool -> failwith \"element_to_float: bool not supported\"\n\n(* Null handling *)\n\nlet drop_nulls col =\n  match col with\n  | P (dtype, tensor, Some mask) ->\n      let arr = Nx.to_array tensor in\n      let n = Array.length arr in\n      let count = ref 0 in\n      for i = 0 to n - 1 do\n        if not mask.(i) then incr count\n      done;\n      let result = Array.make !count arr.(0) in\n      let j = ref 0 in\n      for i = 0 to n - 1 do\n        if not mask.(i) then (\n          result.(!j) <- arr.(i);\n          incr j)\n      done;\n      P (dtype, Nx.create dtype [| !count |] result, None)\n  | P (_, _, None) -> col\n  | S arr ->\n      let filtered = Array.to_list arr |> List.filter_map Fun.id in\n      string (Array.of_list filtered)\n  | B arr ->\n      let filtered = Array.to_list arr |> List.filter_map Fun.id in\n      bool (Array.of_list filtered)\n\nlet fill_nulls_p (type a b) (dtype : (a, b) Nx.dtype) tensor mask_opt\n    (varr : a array) =\n  match mask_opt with\n  | None -> P (dtype, tensor, None)\n  | Some mask ->\n      let arr : a array = Nx.to_array tensor in\n      let result = Array.copy arr in\n      let new_mask = Array.copy mask in\n      Array.iteri\n        (fun i is_null ->\n          if is_null then (\n            result.(i) <- varr.(0);\n            new_mask.(i) <- false))\n        mask;\n      P\n        ( dtype,\n          Nx.create dtype [| Array.length result |] result,\n          normalize_mask (Some new_mask) )\n\nlet fill_nulls col ~value =\n  match (col, value) with\n  | P (dtype, t, m), P (vdtype, vt, _) -> (\n      match Nx_core.Dtype.equal_witness dtype vdtype with\n      | Some Type.Equal -> fill_nulls_p 
dtype t m (Nx.to_array vt)\n      | None ->\n          invalid_arg \"Col.fill_nulls: value type doesn't match column type\")\n  | S arr, S varr -> S (fill_options arr varr)\n  | B arr, B varr -> B (fill_options arr varr)\n  | _ -> invalid_arg \"Col.fill_nulls: value type doesn't match column type\"\n\n(* Extraction *)\n\nlet to_tensor (type a b) (dtype : (a, b) Nx.dtype) col =\n  match col with\n  | P (col_dtype, tensor, _) -> (\n      match Nx_core.Dtype.equal_witness dtype col_dtype with\n      | Some Type.Equal -> Some (tensor : (a, b) Nx.t)\n      | None -> None)\n  | _ -> None\n\nlet to_string_array = function S arr -> Some arr | _ -> None\nlet to_bool_array = function B arr -> Some arr | _ -> None\n\n(* Internal: extract any numeric column as a float array, filtering by mask *)\n\nlet col_as_float_array col =\n  match col with\n  | P (dtype, tensor, mask) -> (\n      match dtype with\n      | Nx.Complex64 -> failwith \"col_as_float_array: complex not supported\"\n      | Nx.Complex128 -> failwith \"col_as_float_array: complex not supported\"\n      | Nx.Bool -> failwith \"col_as_float_array: bool not supported\"\n      | _ -> (\n          let arr : float array = Nx.to_array (Nx.cast Nx.float64 tensor) in\n          match mask with\n          | Some m ->\n              let collected = ref [] in\n              let count = ref 0 in\n              for i = Array.length arr - 1 downto 0 do\n                if not m.(i) then (\n                  collected := arr.(i) :: !collected;\n                  incr count)\n              done;\n              (Array.of_list !collected, !count)\n          | None -> (arr, Array.length arr)))\n  | _ -> failwith \"col_as_float_array: column must be numeric\"\n\n(* Display: returns a closure that formats the value at index i as a string. The\n   underlying array is extracted once so repeated calls are O(1). 
*)\n\nlet to_string_fn ?(null = \"<null>\") col =\n  match col with\n  | P (dtype, tensor, mask) ->\n      let is_null =\n        match mask with Some m -> fun i -> m.(i) | None -> fun _ -> false\n      in\n      let to_s = element_to_string dtype in\n      let arr = Nx.to_array tensor in\n      fun i -> if is_null i then null else to_s arr.(i)\n  | S arr -> ( fun i -> match arr.(i) with Some s -> s | None -> null)\n  | B arr -> (\n      fun i -> match arr.(i) with Some b -> string_of_bool b | None -> null)\n\n(* Internal: reindex a column by an array of non-negative indices *)\n\nlet reindex col indices =\n  match col with\n  | P (dtype, tensor, mask_opt) ->\n      let n = Array.length indices in\n      if n = 0 then P (dtype, Nx.empty dtype [| 0 |], None)\n      else\n        let idx_tensor =\n          Nx.create Nx.int32 [| n |] (Array.map Int32.of_int indices)\n        in\n        let gathered = Nx.take ~axis:0 idx_tensor tensor in\n        let mask =\n          match mask_opt with\n          | Some m ->\n              let sub = Array.map (fun i -> m.(i)) indices in\n              if Array.exists Fun.id sub then Some sub else None\n          | None -> None\n        in\n        P (dtype, gathered, mask)\n  | S arr -> S (Array.map (fun i -> arr.(i)) indices)\n  | B arr -> B (Array.map (fun i -> arr.(i)) indices)\n\n(* Internal: reindex with nullable indices (-1 means null) *)\n\nlet reindex_nullable col indices n_source =\n  let has_null = Array.exists (fun idx -> idx < 0) indices in\n  if not has_null then reindex col indices\n  else\n    let len = Array.length indices in\n    match col with\n    | P (dtype, tensor, mask_opt) ->\n        let source = Nx.to_array tensor in\n        let result = Array.copy source in\n        let result =\n          if len = Array.length result then result\n          else Array.make len (if n_source > 0 then source.(0) else result.(0))\n        in\n        let mask =\n          Array.init len (fun i ->\n              let idx = 
indices.(i) in\n              if idx < 0 || idx >= n_source then true\n              else\n                let is_null =\n                  match mask_opt with Some m -> m.(idx) | None -> false\n                in\n                if not is_null then result.(i) <- source.(idx);\n                is_null)\n        in\n        let mask_opt = if Array.exists Fun.id mask then Some mask else None in\n        P (dtype, Nx.create dtype [| len |] result, mask_opt)\n    | S arr -> S (reindex_nullable_options arr indices len)\n    | B arr -> B (reindex_nullable_options arr indices len)\n\n(* Internal: slice a column from start to start+length *)\n\nlet slice_col col start length =\n  match col with\n  | P (dtype, tensor, mask_opt) ->\n      let sliced = Nx.slice [ Nx.R (start, start + length) ] tensor in\n      let mask =\n        match mask_opt with\n        | Some m ->\n            let sub = Array.sub m start length in\n            if Array.exists Fun.id sub then Some sub else None\n        | None -> None\n      in\n      P (dtype, sliced, mask)\n  | S arr -> S (Array.sub arr start length)\n  | B arr -> B (Array.sub arr start length)\n\n(* Internal: concatenate columns of the same type *)\n\nlet combine_masks arrays_masks =\n  if List.exists (fun (_, m) -> Option.is_some m) arrays_masks then\n    let mask_arrays =\n      List.map\n        (fun (arr, mask_opt) ->\n          match mask_opt with\n          | Some m -> Array.copy m\n          | None -> Array.make (Array.length arr) false)\n        arrays_masks\n    in\n    let concatenated = Array.concat mask_arrays in\n    if Array.exists Fun.id concatenated then Some concatenated else None\n  else None\n\nlet concat_p (type a b) (dtype : (a, b) Nx.dtype) cols =\n  let arrays_masks =\n    List.map\n      (function\n        | P (_, t, mask) ->\n            let arr : a array = Nx.to_array (Nx.cast dtype t) in\n            (arr, mask)\n        | _ -> failwith \"concat: column type mismatch\")\n      cols\n  in\n  let arrays = 
List.map fst arrays_masks in\n  let all_data : a array = Array.concat arrays in\n  let combined_mask = combine_masks arrays_masks in\n  P (dtype, Nx.create dtype [| Array.length all_data |] all_data, combined_mask)\n\nlet concat_cols cols =\n  match cols with\n  | [] -> invalid_arg \"concat_cols: empty list\"\n  | first :: _ -> (\n      match first with\n      | P (dtype, _, _) -> concat_p dtype cols\n      | S _ ->\n          let arrays =\n            List.map\n              (function S arr -> arr | _ -> failwith \"concat: type mismatch\")\n              cols\n          in\n          S (Array.concat arrays)\n      | B _ ->\n          let arrays =\n            List.map\n              (function B arr -> arr | _ -> failwith \"concat: type mismatch\")\n              cols\n          in\n          B (Array.concat arrays))\n\n(* Column transforms *)\n\nlet via_float64 f col =\n  match col with\n  | P (dtype, tensor, _) ->\n      let arr = Nx.to_array (Nx.cast Nx.float64 tensor) in\n      let result = f arr in\n      let result_tensor =\n        Nx.create Nx.float64 [| Array.length result |] result\n      in\n      P (dtype, Nx.cast dtype result_tensor, None)\n  | _ -> failwith \"column must be numeric\"\n\nlet cumsum col =\n  via_float64\n    (fun arr ->\n      let result = Array.copy arr in\n      for i = 1 to Array.length result - 1 do\n        result.(i) <- result.(i - 1) +. result.(i)\n      done;\n      result)\n    col\n\nlet cumprod col =\n  via_float64\n    (fun arr ->\n      let result = Array.copy arr in\n      for i = 1 to Array.length result - 1 do\n        result.(i) <- result.(i - 1) *. result.(i)\n      done;\n      result)\n    col\n\nlet diff ?(periods = 1) col =\n  via_float64\n    (fun arr ->\n      let n = Array.length arr in\n      let result = Array.make n 0. in\n      for i = periods to n - 1 do\n        result.(i) <- arr.(i) -. 
arr.(i - periods)\n      done;\n      result)\n    col\n\nlet pct_change ?(periods = 1) col =\n  match col with\n  | P (_, tensor, _) ->\n      let arr = Nx.to_array (Nx.cast Nx.float64 tensor) in\n      let n = Array.length arr in\n      let result = Array.make n Float.nan in\n      for i = periods to n - 1 do\n        let prev = arr.(i - periods) in\n        let curr = arr.(i) in\n        result.(i) <- (if prev = 0. then Float.nan else (curr -. prev) /. prev)\n      done;\n      float64 result\n  | _ -> failwith \"pct_change: column must be numeric\"\n\nlet shift_option_array ~periods arr =\n  let n = Array.length arr in\n  let result = Array.make n None in\n  if periods > 0 then\n    for i = periods to n - 1 do\n      result.(i) <- arr.(i - periods)\n    done\n  else\n    for i = 0 to n - 1 + periods do\n      result.(i) <- arr.(i - periods)\n    done;\n  result\n\nlet shift ~periods col =\n  match col with\n  | P (dtype, tensor, _) ->\n      let n = (Nx.shape tensor).(0) in\n      if periods = 0 then col\n      else\n        let abs_p = abs periods in\n        if abs_p >= n then\n          P (dtype, Nx.zeros dtype [| n |], Some (Array.make n true))\n        else\n          let data, pad =\n            if periods > 0 then\n              ( Nx.slice [ Nx.R (0, n - abs_p) ] tensor,\n                Nx.zeros dtype [| abs_p |] )\n            else\n              (Nx.slice [ Nx.R (abs_p, n) ] tensor, Nx.zeros dtype [| abs_p |])\n          in\n          let result =\n            if periods > 0 then Nx.concatenate ~axis:0 [ pad; data ]\n            else Nx.concatenate ~axis:0 [ data; pad ]\n          in\n          let mask =\n            Array.init n (fun i ->\n                if periods > 0 then i < abs_p else i >= n - abs_p)\n          in\n          P (dtype, result, Some mask)\n  | S arr -> S (shift_option_array ~periods arr)\n  | B arr -> B (shift_option_array ~periods arr)\n\n(* Formatting *)\n\nlet pp ppf col =\n  let len = length col in\n  let to_s = to_string_fn 
col in\n  let dtype_str =\n    match dtype col with\n    | `Float32 -> \"float32\"\n    | `Float64 -> \"float64\"\n    | `Int32 -> \"int32\"\n    | `Int64 -> \"int64\"\n    | `String -> \"string\"\n    | `Bool -> \"bool\"\n    | `Other -> \"other\"\n  in\n  Format.fprintf ppf \"@[<hov 2>Col(%s, %d)[\" dtype_str len;\n  let show = min 5 len in\n  for i = 0 to show - 1 do\n    if i > 0 then Format.fprintf ppf \",@ \";\n    Format.fprintf ppf \"%s\" (to_s i)\n  done;\n  if len > show then Format.fprintf ppf \",@ ...\";\n  Format.fprintf ppf \"]@]\"\n"
  },
  {
    "path": "packages/talon/lib/csv/csv_io.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Minimal RFC 4180 CSV parser and writer. *)\n\nlet parse_row separator line =\n  let len = String.length line in\n  let fields = ref [] in\n  let buf = Buffer.create 64 in\n  let i = ref 0 in\n  while !i < len do\n    if line.[!i] = '\"' then (\n      incr i;\n      let in_quotes = ref true in\n      while !i < len && !in_quotes do\n        if line.[!i] = '\"' then\n          if !i + 1 < len && line.[!i + 1] = '\"' then (\n            Buffer.add_char buf '\"';\n            i := !i + 2)\n          else (\n            in_quotes := false;\n            incr i)\n        else (\n          Buffer.add_char buf line.[!i];\n          incr i)\n      done;\n      if !i < len && line.[!i] = separator then incr i;\n      fields := Buffer.contents buf :: !fields;\n      Buffer.clear buf)\n    else if line.[!i] = separator then (\n      fields := Buffer.contents buf :: !fields;\n      Buffer.clear buf;\n      incr i)\n    else (\n      Buffer.add_char buf line.[!i];\n      incr i)\n  done;\n  fields := Buffer.contents buf :: !fields;\n  List.rev !fields\n\nlet strip_cr line =\n  let len = String.length line in\n  if len > 0 && line.[len - 1] = '\\r' then String.sub line 0 (len - 1) else line\n\nlet parse ?(separator = ',') content =\n  let lines = String.split_on_char '\\n' content in\n  let lines = List.map strip_cr lines in\n  let lines = List.filter (fun l -> l <> \"\") lines in\n  List.map (parse_row separator) lines\n\nlet needs_quoting separator field =\n  let len = String.length field in\n  let rec check i =\n    if i >= len then false\n    else\n      let c = field.[i] in\n      c = separator || c = '\"' || c = '\\n' || c = '\\r' || check (i + 1)\n  in\n  check 0\n\nlet quote_field separator field =\n  
if needs_quoting separator field then (\n    let buf = Buffer.create (String.length field + 4) in\n    Buffer.add_char buf '\"';\n    String.iter\n      (fun c ->\n        if c = '\"' then Buffer.add_string buf \"\\\"\\\"\" else Buffer.add_char buf c)\n      field;\n    Buffer.add_char buf '\"';\n    Buffer.contents buf)\n  else field\n\nlet write_row buf separator fields =\n  List.iteri\n    (fun i field ->\n      if i > 0 then Buffer.add_char buf separator;\n      Buffer.add_string buf (quote_field separator field))\n    fields;\n  Buffer.add_char buf '\\n'\n\nlet serialize ?(separator = ',') rows =\n  let buf = Buffer.create 1024 in\n  List.iter (write_row buf separator) rows;\n  Buffer.contents buf\n\nlet write_row_to_channel oc separator fields =\n  let buf = Buffer.create 256 in\n  write_row buf separator fields;\n  output_string oc (Buffer.contents buf)\n"
  },
  {
    "path": "packages/talon/lib/csv/dune",
    "content": "(library\n (name talon_csv)\n (public_name talon.csv)\n (libraries talon nx))\n"
  },
  {
    "path": "packages/talon/lib/csv/talon_csv.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\ntype dtype_spec =\n  (string * [ `Float32 | `Float64 | `Int32 | `Int64 | `Bool | `String ]) list\n\nlet default_na_values = [ \"\"; \"NA\"; \"N/A\"; \"null\"; \"NULL\"; \"nan\"; \"NaN\" ]\nlet is_null_value na_values s = List.mem s na_values\n\nlet detect_dtype na_values values =\n  let non_null_values =\n    List.filter (fun v -> not (is_null_value na_values v)) values\n  in\n  if List.length non_null_values = 0 then `String\n  else\n    let all_bool =\n      List.for_all\n        (fun v ->\n          match String.lowercase_ascii v with\n          | \"true\" | \"t\" | \"yes\" | \"y\" | \"1\" | \"false\" | \"f\" | \"no\" | \"n\" | \"0\"\n            ->\n              true\n          | _ -> false)\n        non_null_values\n    in\n    if all_bool then `Bool\n    else\n      let all_int, needs_int64 =\n        List.fold_left\n          (fun (all_ok, overflow) v ->\n            if not all_ok then (false, overflow)\n            else\n              try\n                let i64 = Int64.of_string v in\n                let too_big =\n                  i64 > Int64.of_int32 Int32.max_int\n                  || i64 < Int64.of_int32 Int32.min_int\n                in\n                (true, overflow || too_big)\n              with _ -> (false, overflow))\n          (true, false) non_null_values\n      in\n      if all_int then if needs_int64 then `Int64 else `Int32\n      else\n        let all_float =\n          List.for_all\n            (fun v ->\n              try\n                ignore (float_of_string v);\n                true\n              with _ -> false)\n            non_null_values\n        in\n        if all_float then `Float64 else `String\n\nlet columns_of_rows na_values dtype_spec column_names 
data_rows =\n  let num_cols = List.length column_names in\n  let columns_data = Array.init num_cols (fun _ -> []) in\n  List.iter\n    (fun row ->\n      List.iteri\n        (fun i value ->\n          if i < num_cols then columns_data.(i) <- value :: columns_data.(i))\n        row)\n    data_rows;\n  Array.iteri (fun i lst -> columns_data.(i) <- List.rev lst) columns_data;\n  List.mapi\n    (fun i name ->\n      let values = columns_data.(i) in\n      let dtype =\n        match dtype_spec with\n        | Some specs -> (\n            try List.assoc name specs\n            with Not_found -> detect_dtype na_values values)\n        | None -> detect_dtype na_values values\n      in\n      let parse_col values ~parse ~make =\n        let arr =\n          List.map\n            (fun v ->\n              if is_null_value na_values v then None\n              else try Some (parse v) with _ -> None)\n            values\n          |> Array.of_list\n        in\n        make arr\n      in\n      let column =\n        match dtype with\n        | `Float32 ->\n            parse_col values ~parse:float_of_string ~make:Talon.Col.float32_opt\n        | `Float64 ->\n            parse_col values ~parse:float_of_string ~make:Talon.Col.float64_opt\n        | `Int32 ->\n            parse_col values ~parse:Int32.of_string ~make:Talon.Col.int32_opt\n        | `Int64 ->\n            parse_col values ~parse:Int64.of_string ~make:Talon.Col.int64_opt\n        | `Bool ->\n            parse_col values ~make:Talon.Col.bool_opt ~parse:(fun v ->\n                match String.lowercase_ascii v with\n                | \"true\" | \"t\" | \"yes\" | \"y\" | \"1\" -> true\n                | \"false\" | \"f\" | \"no\" | \"n\" | \"0\" -> false\n                | _ -> raise Exit)\n        | `String -> parse_col values ~parse:Fun.id ~make:Talon.Col.string_opt\n      in\n      (name, column))\n    column_names\n\nlet col_string_fns na_repr df =\n  List.map\n    (fun name ->\n      Talon.Col.to_string_fn 
~null:na_repr (Talon.get_column_exn df name))\n    (Talon.column_names df)\n\nlet df_of_rows ?names ?(na_values = default_na_values) ?dtype_spec rows =\n  match names with\n  | Some column_names -> (\n      match rows with\n      | [] ->\n          let columns =\n            List.map (fun name -> (name, Talon.Col.string [||])) column_names\n          in\n          Talon.create columns\n      | _ ->\n          columns_of_rows na_values dtype_spec column_names rows |> Talon.create\n      )\n  | None -> (\n      match rows with\n      | [] -> Talon.empty\n      | [ header ] ->\n          let columns =\n            List.map (fun name -> (name, Talon.Col.string [||])) header\n          in\n          Talon.create columns\n      | header :: data ->\n          columns_of_rows na_values dtype_spec header data |> Talon.create)\n\nlet of_string ?(sep = ',') ?names ?na_values ?dtype_spec s =\n  df_of_rows ?names ?na_values ?dtype_spec (Csv_io.parse ~separator:sep s)\n\nlet to_string ?(sep = ',') ?(na_repr = \"\") df =\n  let buf = Buffer.create 1024 in\n  let fns = col_string_fns na_repr df in\n  let n_rows = Talon.num_rows df in\n  Csv_io.write_row buf sep (Talon.column_names df);\n  for i = 0 to n_rows - 1 do\n    Csv_io.write_row buf sep (List.map (fun f -> f i) fns)\n  done;\n  Buffer.contents buf\n\nlet read ?(sep = ',') ?names ?na_values ?dtype_spec path =\n  In_channel.with_open_text path @@ fun ic ->\n  let rows = ref [] in\n  (try\n     while true do\n       let line = Csv_io.strip_cr (input_line ic) in\n       if line <> \"\" then rows := Csv_io.parse_row sep line :: !rows\n     done\n   with End_of_file -> ());\n  df_of_rows ?names ?na_values ?dtype_spec (List.rev !rows)\n\nlet write ?(sep = ',') ?(na_repr = \"\") path df =\n  Out_channel.with_open_text path @@ fun oc ->\n  let buf = Buffer.create 256 in\n  let fns = col_string_fns na_repr df in\n  let n_rows = Talon.num_rows df in\n  Csv_io.write_row buf sep (Talon.column_names df);\n  output_string oc 
(Buffer.contents buf);\n  for i = 0 to n_rows - 1 do\n    Buffer.clear buf;\n    Csv_io.write_row buf sep (List.map (fun f -> f i) fns);\n    output_string oc (Buffer.contents buf)\n  done\n"
  },
  {
    "path": "packages/talon/lib/csv/talon_csv.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** CSV codec for Talon dataframes.\n\n    {[\n    (* From string *)\n    let df = Talon_csv.of_string csv_text\n\n    (* From file (streaming) *)\n    let df =\n      Talon_csv.read \"data.csv\"\n        (* To file (streaming) *)\n        Talon_csv.write \"out.csv\" df\n    ]} *)\n\ntype dtype_spec =\n  (string * [ `Float32 | `Float64 | `Int32 | `Int64 | `Bool | `String ]) list\n(** Column type specifications. Columns not listed are auto-detected. *)\n\nval of_string :\n  ?sep:char ->\n  ?names:string list ->\n  ?na_values:string list ->\n  ?dtype_spec:dtype_spec ->\n  string ->\n  Talon.t\n(** [of_string s] parses CSV text into a dataframe. The first row is used as\n    column names unless [names] is provided, in which case all rows are treated\n    as data.\n\n    @param sep delimiter character (default [','])\n    @param names explicit column names; when given, all rows are data\n    @param na_values\n      strings treated as null (default\n      [[\"\"; \"NA\"; \"N/A\"; \"null\"; \"NULL\"; \"nan\"; \"NaN\"]])\n    @param dtype_spec\n      explicit column types; unspecified columns are auto-detected *)\n\nval to_string : ?sep:char -> ?na_repr:string -> Talon.t -> string\n(** [to_string df] serializes a dataframe to CSV text. 
The first row of the\n    output is the column names.\n\n    @param sep delimiter character (default [','])\n    @param na_repr string for null values (default [\"\"]) *)\n\nval read :\n  ?sep:char ->\n  ?names:string list ->\n  ?na_values:string list ->\n  ?dtype_spec:dtype_spec ->\n  string ->\n  Talon.t\n(** [read path] reads a CSV file into a dataframe, streaming line by line.\n\n    @param sep delimiter character (default [','])\n    @param names explicit column names; when given, all rows are data\n    @param na_values strings treated as null\n    @param dtype_spec explicit column types *)\n\nval write : ?sep:char -> ?na_repr:string -> string -> Talon.t -> unit\n(** [write path df] writes a dataframe to a CSV file, streaming row by row.\n\n    @param sep delimiter character (default [','])\n    @param na_repr string for null values (default [\"\"]) *)\n"
  },
  {
    "path": "packages/talon/lib/dune",
    "content": "(library\n (name talon)\n (public_name talon)\n (libraries nx nx_core))\n"
  },
  {
    "path": "packages/talon/lib/talon.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nlet list_take n l =\n  let[@tail_mod_cons] rec aux n l =\n    match (n, l) with 0, _ | _, [] -> [] | n, x :: l -> x :: aux (n - 1) l\n  in\n  if n <= 0 then [] else aux n l\n\nmodule Col = Col\n\ntype t = {\n  columns : (string * Col.t) list;\n  column_map : (string, Col.t) Hashtbl.t;\n}\n\ntype 'a row = { f : t -> int -> 'a }\n\n(* Internal helpers *)\n\nlet get_column t name = Hashtbl.find_opt t.column_map name\n\nlet get_column_exn t name =\n  match get_column t name with Some col -> col | None -> raise Not_found\n\n(* Creation *)\n\nlet empty = { columns = []; column_map = Hashtbl.create 0 }\n\nlet create pairs =\n  if pairs = [] then empty\n  else\n    let first_length = Col.length (snd (List.hd pairs)) in\n    let all_same_length =\n      List.for_all (fun (_, col) -> Col.length col = first_length) pairs\n    in\n    if not all_same_length then\n      invalid_arg \"create: all columns must have the same length\"\n    else\n      let names = List.map fst pairs in\n      let unique_names = List.sort_uniq String.compare names in\n      if List.length names <> List.length unique_names then\n        invalid_arg \"create: duplicate column names\"\n      else\n        let column_map = Hashtbl.create (List.length pairs) in\n        List.iter (fun (name, col) -> Hashtbl.add column_map name col) pairs;\n        { columns = pairs; column_map }\n\nlet of_tensors ?names tensors =\n  if tensors = [] then empty\n  else\n    let first_shape = Nx.shape (List.hd tensors) in\n    if Array.length first_shape <> 1 then\n      invalid_arg \"of_tensors: all tensors must be 1D\"\n    else\n      let all_same_shape =\n        List.for_all (fun t -> Nx.shape t = first_shape) tensors\n      in\n      if not 
all_same_shape then\n        invalid_arg \"of_tensors: all tensors must have the same shape\"\n      else\n        let names =\n          match names with\n          | Some n when List.length n = List.length tensors -> n\n          | Some _ -> invalid_arg \"of_tensors: wrong number of names\"\n          | None -> List.mapi (fun i _ -> Printf.sprintf \"col%d\" i) tensors\n        in\n        let pairs =\n          List.map2 (fun name t -> (name, Col.of_tensor t)) names tensors\n        in\n        create pairs\n\nlet of_nx ?names tensor =\n  match Nx.shape tensor with\n  | [| _rows; cols |] ->\n      let tensors =\n        List.init cols (fun col_i -> Nx.slice [ Nx.A; Nx.I col_i ] tensor)\n      in\n      of_tensors ?names tensors\n  | _ -> invalid_arg \"of_nx: tensor must be 2D\"\n\n(* Shape *)\n\nlet shape t =\n  match t.columns with\n  | [] -> (0, 0)\n  | (_, col) :: _ -> (Col.length col, List.length t.columns)\n\nlet num_rows t = fst (shape t)\nlet num_columns t = snd (shape t)\nlet column_names t = List.map fst t.columns\n\nlet column_types t =\n  List.map\n    (fun (name, col) ->\n      let typ =\n        match col with\n        | Col.P (Nx.Float32, _, _) -> `Float32\n        | Col.P (Nx.Float64, _, _) -> `Float64\n        | Col.P (Nx.Int32, _, _) -> `Int32\n        | Col.P (Nx.Int64, _, _) -> `Int64\n        | Col.P _ -> `Other\n        | Col.S _ -> `String\n        | Col.B _ -> `Bool\n      in\n      (name, typ))\n    t.columns\n\nlet is_empty t = num_rows t = 0\n\nlet select_columns t category =\n  List.filter_map\n    (fun (name, typ) ->\n      let keep =\n        match (category, typ) with\n        | `Numeric, (`Float32 | `Float64 | `Int32 | `Int64) -> true\n        | `Float, (`Float32 | `Float64) -> true\n        | `Int, (`Int32 | `Int64) -> true\n        | `Bool, `Bool -> true\n        | `String, `String -> true\n        | _ -> false\n      in\n      if keep then Some name else None)\n    (column_types t)\n\n(* Column access *)\n\nlet has_column t name 
= Hashtbl.mem t.column_map name\n\nlet add_column t name col =\n  let expected_length = num_rows t in\n  if expected_length > 0 && Col.length col <> expected_length then\n    invalid_arg \"add_column: column length doesn't match dataframe rows\"\n  else\n    let columns =\n      List.filter (fun (n, _) -> n <> name) t.columns @ [ (name, col) ]\n    in\n    let column_map = Hashtbl.copy t.column_map in\n    Hashtbl.replace column_map name col;\n    { columns; column_map }\n\nlet drop_column t name =\n  let columns = List.filter (fun (n, _) -> n <> name) t.columns in\n  let column_map = Hashtbl.copy t.column_map in\n  Hashtbl.remove column_map name;\n  { columns; column_map }\n\nlet drop_columns t names = List.fold_left drop_column t names\n\nlet rename_column t ~old_name ~new_name =\n  match get_column t old_name with\n  | None -> raise Not_found\n  | Some _ ->\n      if has_column t new_name then\n        invalid_arg \"rename_column: new_name already exists\"\n      else\n        let columns =\n          List.map\n            (fun (n, c) -> if n = old_name then (new_name, c) else (n, c))\n            t.columns\n        in\n        let column_map = Hashtbl.create (Hashtbl.length t.column_map) in\n        List.iter (fun (name, col) -> Hashtbl.add column_map name col) columns;\n        { columns; column_map }\n\nlet select ?(strict = true) t names =\n  if strict then\n    let columns =\n      List.map\n        (fun name ->\n          match get_column t name with\n          | Some col -> (name, col)\n          | None -> raise Not_found)\n        names\n    in\n    create columns\n  else\n    let columns =\n      List.filter_map\n        (fun name ->\n          match get_column t name with\n          | Some col -> Some (name, col)\n          | None -> None)\n        names\n    in\n    create columns\n\nlet reorder_columns t names =\n  let requested = List.map (fun name -> (name, get_column_exn t name)) names in\n  let remaining =\n    List.filter (fun (name, _) -> not 
(List.mem name names)) t.columns\n  in\n  create (requested @ remaining)\n\nlet cast_column t name dtype =\n  match get_column t name with\n  | Some (Col.P (_, tensor, mask)) ->\n      let casted = Nx.astype dtype tensor in\n      add_column t name (Col.P (dtype, casted, mask))\n  | _ -> invalid_arg \"cast_column: conversion not possible\"\n\n(* Extraction *)\n\nlet to_array (type a b) (dtype : (a, b) Nx.dtype) t name =\n  get_column t name\n  |> Option.map (fun col -> Col.to_tensor dtype col)\n  |> Option.join |> Option.map Nx.to_array\n\nlet to_opt_array (type a b) (dtype : (a, b) Nx.dtype) t name =\n  get_column t name\n  |> Option.map (fun col ->\n      match Col.to_tensor dtype col with\n      | None -> None\n      | Some tensor ->\n          let arr = Nx.to_array tensor in\n          let mask = Col.null_mask col in\n          Some\n            (Array.mapi\n               (fun i v ->\n                 match mask with Some m when m.(i) -> None | _ -> Some v)\n               arr))\n  |> Option.join\n\nlet to_bool_array t name =\n  get_column t name |> Option.map Col.to_bool_array |> Option.join\n\nlet to_string_array t name =\n  get_column t name |> Option.map Col.to_string_array |> Option.join\n\n(* Row module *)\n\nmodule Row = struct\n  let return x = { f = (fun _ _ -> x) }\n\n  let apply ff fx =\n    {\n      f =\n        (fun df i ->\n          let f = ff.f df i in\n          let x = fx.f df i in\n          f x);\n    }\n\n  let map x ~f = { f = (fun df i -> f (x.f df i)) }\n  let map2 x y ~f = { f = (fun df i -> f (x.f df i) (y.f df i)) }\n  let map3 x y z ~f = { f = (fun df i -> f (x.f df i) (y.f df i) (z.f df i)) }\n  let both x y = { f = (fun df i -> (x.f df i, y.f df i)) }\n\n  let cached ~extract =\n    let cache = ref None in\n    {\n      f =\n        (fun df i ->\n          let a =\n            match !cache with\n            | Some (df', a) when df' == df -> a\n            | _ ->\n                let a = extract df in\n                cache := Some 
(df, a);\n                a\n          in\n          a.(i));\n    }\n\n  let cached_masked ~extract =\n    let cache = ref None in\n    {\n      f =\n        (fun df i ->\n          let a, mask_opt =\n            match !cache with\n            | Some (df', a, m) when df' == df -> (a, m)\n            | _ ->\n                let a, m = extract df in\n                cache := Some (df, a, m);\n                (a, m)\n          in\n          match mask_opt with\n          | Some mask when mask.(i) -> None\n          | _ -> Some a.(i));\n    }\n\n  let col (type a b) (dtype : (a, b) Nx.dtype) name =\n    cached ~extract:(fun df ->\n        match Col.to_tensor dtype (get_column_exn df name) with\n        | Some tensor -> (Nx.to_array tensor : a array)\n        | None -> failwith (\"Column \" ^ name ^ \" has incompatible dtype\"))\n\n  let col_opt (type a b) (dtype : (a, b) Nx.dtype) name =\n    cached_masked ~extract:(fun df ->\n        let c = get_column_exn df name in\n        match Col.to_tensor dtype c with\n        | Some tensor -> ((Nx.to_array tensor : a array), Col.null_mask c)\n        | None -> failwith (\"Column \" ^ name ^ \" has incompatible dtype\"))\n\n  let string name =\n    map\n      (cached ~extract:(fun df ->\n           match get_column df name with\n           | Some (Col.S a) -> a\n           | _ -> failwith (\"Column \" ^ name ^ \" is not string\")))\n      ~f:(Option.value ~default:\"\")\n\n  let string_opt name =\n    cached ~extract:(fun df ->\n        match get_column df name with\n        | Some (Col.S a) -> a\n        | _ -> failwith (\"Column \" ^ name ^ \" is not string\"))\n\n  let bool name =\n    map\n      (cached ~extract:(fun df ->\n           match get_column df name with\n           | Some (Col.B a) -> a\n           | _ -> failwith (\"Column \" ^ name ^ \" is not bool\")))\n      ~f:(Option.value ~default:false)\n\n  let bool_opt name =\n    cached ~extract:(fun df ->\n        match get_column df name with\n        | Some (Col.B 
a) -> a\n        | _ -> failwith (\"Column \" ^ name ^ \" is not bool\"))\n\n  let number name =\n    cached ~extract:(fun df ->\n        match get_column df name with\n        | Some (Col.P (Nx.Float32, tensor, _)) ->\n            (Nx.to_array tensor : float array)\n        | Some (Col.P (Nx.Float64, tensor, _)) ->\n            (Nx.to_array tensor : float array)\n        | Some (Col.P (Nx.Int32, tensor, _)) ->\n            Array.map Int32.to_float (Nx.to_array tensor)\n        | Some (Col.P (Nx.Int64, tensor, _)) ->\n            Array.map Int64.to_float (Nx.to_array tensor)\n        | Some _ -> failwith (\"Column \" ^ name ^ \" is not numeric\")\n        | None -> failwith (\"Column \" ^ name ^ \" not found\"))\n\n  let float32 name = col Nx.float32 name\n  let float64 name = col Nx.float64 name\n  let int32 name = col Nx.int32 name\n  let int64 name = col Nx.int64 name\n  let float32_opt name = col_opt Nx.float32 name\n  let float64_opt name = col_opt Nx.float64 name\n  let int32_opt name = col_opt Nx.int32 name\n  let int64_opt name = col_opt Nx.int64 name\n  let index = { f = (fun _ i -> i) }\n  let sequence xs = { f = (fun df i -> List.map (fun x -> x.f df i) xs) }\n\n  let fold_list xs ~init ~f =\n    { f = (fun df i -> List.fold_left (fun acc x -> f acc (x.f df i)) init xs) }\nend\n\n(* Internal: reindex rows by array of non-negative indices *)\n\nlet reindex_rows t indices =\n  List.map (fun (name, col) -> (name, Col.reindex col indices)) t.columns\n  |> create\n\nlet take t indices =\n  let n = num_rows t in\n  Array.iter\n    (fun i ->\n      if i < 0 || i >= n then\n        invalid_arg\n          (Printf.sprintf \"Talon.take: index %d out of bounds for %d rows\" i n))\n    indices;\n  reindex_rows t indices\n\n(* Slicing and filtering *)\n\nlet head ?(n = 5) t =\n  let actual_n = min n (num_rows t) in\n  let columns =\n    List.map (fun (name, col) -> (name, Col.slice_col col 0 actual_n)) t.columns\n  in\n  create columns\n\nlet tail ?(n = 5) t =\n  let 
n_rows = num_rows t in\n  let actual_n = min n n_rows in\n  let start = n_rows - actual_n in\n  let columns =\n    List.map\n      (fun (name, col) -> (name, Col.slice_col col start actual_n))\n      t.columns\n  in\n  create columns\n\nlet slice t ~start ~stop =\n  let n_rows = num_rows t in\n  let start = max 0 start in\n  let stop = min stop n_rows in\n  let length = max 0 (stop - start) in\n  let columns =\n    List.map\n      (fun (name, col) -> (name, Col.slice_col col start length))\n      t.columns\n  in\n  create columns\n\nlet sample ?n ?frac ?replace ?seed t =\n  let n_rows = num_rows t in\n  let sample_size =\n    match (n, frac) with\n    | Some n, None -> n\n    | None, Some f -> int_of_float (f *. float_of_int n_rows)\n    | _ -> invalid_arg \"sample: either n or frac must be specified\"\n  in\n  let replace = Option.value replace ~default:false in\n  let state =\n    match seed with\n    | Some s -> Random.State.make [| s |]\n    | None -> Random.State.make_self_init ()\n  in\n  let indices =\n    if replace then\n      Array.init sample_size (fun _ -> Random.State.int state n_rows)\n    else\n      let all_indices = Array.init n_rows Fun.id in\n      for i = n_rows - 1 downto 1 do\n        let j = Random.State.int state (i + 1) in\n        let temp = all_indices.(i) in\n        all_indices.(i) <- all_indices.(j);\n        all_indices.(j) <- temp\n      done;\n      Array.sub all_indices 0 (min sample_size n_rows)\n  in\n  reindex_rows t indices\n\nlet filter t mask =\n  let n_rows = num_rows t in\n  if Array.length mask <> n_rows then\n    invalid_arg \"filter: mask length must match num_rows\"\n  else\n    let indices = ref [] in\n    Array.iteri (fun i b -> if b then indices := i :: !indices) mask;\n    let indices = Array.of_list (List.rev !indices) in\n    reindex_rows t indices\n\nlet filter_by t pred =\n  let n_rows = num_rows t in\n  let mask = Array.init n_rows (fun i -> pred.f t i) in\n  filter t mask\n\nlet drop_nulls ?subset t =\n  let 
cols_to_check =\n    match subset with Some cols -> cols | None -> column_names t\n  in\n  let mask = Array.make (num_rows t) true in\n  List.iter\n    (fun col_name ->\n      match get_column t col_name with\n      | Some (Col.P (_, _, Some null_mask)) ->\n          Array.iteri\n            (fun i is_null -> if is_null then mask.(i) <- false)\n            null_mask\n      | Some (Col.P (_, _, None)) -> ()\n      | Some (Col.S arr) ->\n          Array.iteri\n            (fun i v -> if Option.is_none v then mask.(i) <- false)\n            arr\n      | Some (Col.B arr) ->\n          Array.iteri\n            (fun i v -> if Option.is_none v then mask.(i) <- false)\n            arr\n      | None -> ())\n    cols_to_check;\n  filter t mask\n\nlet fill_null t col_name ~with_value =\n  match get_column t col_name with\n  | None -> invalid_arg (\"fill_null: column \" ^ col_name ^ \" not found\")\n  | Some col ->\n      let value_col =\n        match (with_value, Col.dtype col) with\n        | `Float v, `Float32 -> Col.float32 [| v |]\n        | `Float v, `Float64 -> Col.float64 [| v |]\n        | `Float v, _ -> Col.float64 [| v |]\n        | `Int32 v, _ -> Col.int32 [| v |]\n        | `Int64 v, _ -> Col.int64 [| v |]\n        | `String v, _ -> Col.string [| v |]\n        | `Bool v, _ -> Col.bool [| v |]\n      in\n      let filled = Col.fill_nulls col ~value:value_col in\n      add_column t col_name filled\n\nlet drop_duplicates ?subset t =\n  let cols_to_check =\n    match subset with None -> column_names t | Some names -> names\n  in\n  let n_rows = num_rows t in\n  let seen = Hashtbl.create n_rows in\n  let unique_indices = ref [] in\n  let fmts =\n    List.map\n      (fun name ->\n        match get_column t name with\n        | Some col -> Col.to_string_fn col\n        | None -> fun _ -> \"\")\n      cols_to_check\n  in\n  for i = 0 to n_rows - 1 do\n    let key_str = String.concat \"\\x00\" (List.map (fun f -> f i) fmts) in\n    if not (Hashtbl.mem seen key_str) then 
(\n      Hashtbl.add seen key_str ();\n      unique_indices := i :: !unique_indices)\n  done;\n  let indices = Array.of_list (List.rev !unique_indices) in\n  reindex_rows t indices\n\n(* Transforms *)\n\nlet concat ~axis dfs =\n  match axis with\n  | `Rows ->\n      if dfs = [] then empty\n      else\n        let first = List.hd dfs in\n        let names = column_names first in\n        let all_same_columns =\n          List.for_all (fun df -> column_names df = names) dfs\n        in\n        if not all_same_columns then\n          invalid_arg\n            \"concat: all dataframes must have the same columns for row \\\n             concatenation\"\n        else\n          let columns =\n            List.map\n              (fun name ->\n                let cols = List.map (fun df -> get_column_exn df name) dfs in\n                (name, Col.concat_cols cols))\n              names\n          in\n          create columns\n  | `Columns ->\n      if dfs = [] then empty\n      else\n        let first_rows = num_rows (List.hd dfs) in\n        let all_same_rows =\n          List.for_all (fun df -> num_rows df = first_rows) dfs\n        in\n        if not all_same_rows then\n          invalid_arg\n            \"concat: all dataframes must have the same number of rows for \\\n             column concatenation\"\n        else\n          let all_columns = List.concat_map (fun df -> df.columns) dfs in\n          create all_columns\n\nlet map (type a b) t (dtype : (a, b) Nx.dtype) (f : a row) : (a, b) Nx.t =\n  let n_rows = num_rows t in\n  let data = Array.init n_rows (fun i -> f.f t i) in\n  Nx.create dtype [| n_rows |] data\n\nlet with_column t name dtype f =\n  let tensor = map t dtype f in\n  add_column t name (Col.of_tensor tensor)\n\nlet with_string_column t name f =\n  let n_rows = num_rows t in\n  let data = Array.init n_rows (fun i -> Some (f.f t i)) in\n  add_column t name (Col.S data)\n\nlet with_bool_column t name f =\n  let n_rows = num_rows t in\n  let data = 
Array.init n_rows (fun i -> Some (f.f t i)) in\n  add_column t name (Col.B data)\n\nlet with_columns t cols =\n  List.fold_left (fun df (name, col) -> add_column df name col) t cols\n\nlet iter t f =\n  let n_rows = num_rows t in\n  for i = 0 to n_rows - 1 do\n    f.f t i\n  done\n\nlet fold t ~init ~f =\n  let n_rows = num_rows t in\n  let rec loop i acc =\n    if i >= n_rows then acc\n    else\n      let update_fn = f.f t i in\n      let next_acc = update_fn acc in\n      loop (i + 1) next_acc\n  in\n  loop 0 init\n\n(* Sorting and grouping *)\n\nlet sort t key ~compare =\n  let n_rows = num_rows t in\n  let keys = Array.init n_rows (fun i -> (i, key.f t i)) in\n  Array.sort (fun (_, k1) (_, k2) -> compare k1 k2) keys;\n  let indices = Array.map fst keys in\n  reindex_rows t indices\n\nlet sort_values ?(ascending = true) t name =\n  match get_column t name with\n  | None -> raise Not_found\n  | Some col -> (\n      let cmp = if ascending then compare else fun a b -> compare b a in\n      match col with\n      | Col.P (Nx.Float32, _, _) | Col.P (Nx.Float64, _, _) ->\n          sort t (Row.number name) ~compare:cmp\n      | Col.P (Nx.Int32, _, _) -> sort t (Row.col Nx.int32 name) ~compare:cmp\n      | Col.P (Nx.Int64, _, _) -> sort t (Row.col Nx.int64 name) ~compare:cmp\n      | Col.S _ -> sort t (Row.string name) ~compare:cmp\n      | Col.B _ -> sort t (Row.bool name) ~compare:cmp\n      | _ -> failwith \"sort_values: unsupported column type\")\n\nlet group_by t key =\n  let n_rows = num_rows t in\n  let groups = Hashtbl.create 16 in\n  for i = 0 to n_rows - 1 do\n    let k = key.f t i in\n    let indices =\n      match Hashtbl.find_opt groups k with None -> [] | Some lst -> lst\n    in\n    Hashtbl.replace groups k (i :: indices)\n  done;\n  Hashtbl.fold\n    (fun k indices acc ->\n      let indices = Array.of_list (List.rev indices) in\n      (k, reindex_rows t indices) :: acc)\n    groups []\n\n(* Column transforms — delegate to Col, return dataframe *)\n\nlet 
cumsum t name = add_column t name (Col.cumsum (get_column_exn t name))\nlet cumprod t name = add_column t name (Col.cumprod (get_column_exn t name))\n\nlet diff t name ?periods () =\n  add_column t name (Col.diff ?periods (get_column_exn t name))\n\nlet pct_change t name ?periods () =\n  add_column t name (Col.pct_change ?periods (get_column_exn t name))\n\nlet shift t name ~periods =\n  add_column t name (Col.shift ~periods (get_column_exn t name))\n\n(* Column inspection *)\n\nlet is_null t name =\n  match get_column t name with\n  | Some (Col.P (_, _, Some mask)) -> Col.B (Array.map (fun b -> Some b) mask)\n  | Some (Col.P _) -> Col.B (Array.make (num_rows t) (Some false))\n  | Some (Col.S arr) -> Col.B (Array.map (fun x -> Some (Option.is_none x)) arr)\n  | Some (Col.B arr) -> Col.B (Array.map (fun x -> Some (Option.is_none x)) arr)\n  | None -> Col.B [||]\n\nlet value_counts_typed (type a) (tbl : (a, int) Hashtbl.t) arr\n    (mask_opt : bool array option) =\n  let is_null i = match mask_opt with Some m -> m.(i) | None -> false in\n  Array.iteri\n    (fun i x ->\n      if not (is_null i) then\n        let c = Option.value (Hashtbl.find_opt tbl x) ~default:0 in\n        Hashtbl.replace tbl x (c + 1))\n    arr;\n  let items = Hashtbl.fold (fun k v acc -> (k, v) :: acc) tbl [] in\n  let items = List.sort (fun (_, c1) (_, c2) -> compare c2 c1) items in\n  (Array.of_list (List.map fst items), Array.of_list (List.map snd items))\n\nlet count_options ~wrap arr =\n  let tbl = Hashtbl.create 16 in\n  Array.iter\n    (function\n      | Some x ->\n          let c = Option.value (Hashtbl.find_opt tbl x) ~default:0 in\n          Hashtbl.replace tbl x (c + 1)\n      | None -> ())\n    arr;\n  let items = Hashtbl.fold (fun k v acc -> (k, v) :: acc) tbl [] in\n  let items = List.sort (fun (_, c1) (_, c2) -> compare c2 c1) items in\n  let values = Array.of_list (List.map (fun (x, _) -> Some x) items) in\n  let counts = Array.of_list (List.map snd items) in\n  create\n    [\n    
  (\"value\", wrap values);\n      (\"count\", Col.int32 (Array.map Int32.of_int counts));\n    ]\n\nlet value_counts t name =\n  match get_column t name with\n  | Some col -> (\n      match col with\n      | Col.P (dtype, tensor, mask_opt) ->\n          let arr = Nx.to_array tensor in\n          let tbl = Hashtbl.create 16 in\n          let values, counts_arr = value_counts_typed tbl arr mask_opt in\n          let counts_int32 = Array.map Int32.of_int counts_arr in\n          create\n            [\n              ( \"value\",\n                Col.P\n                  (dtype, Nx.create dtype [| Array.length values |] values, None)\n              );\n              (\"count\", Col.int32 counts_int32);\n            ]\n      | Col.S arr -> count_options ~wrap:(fun a -> Col.S a) arr\n      | Col.B arr -> count_options ~wrap:(fun a -> Col.B a) arr)\n  | None -> empty\n\n(* Aggregations *)\n\nmodule Agg = struct\n  let sum t name =\n    let filtered, _ = Col.col_as_float_array (get_column_exn t name) in\n    Array.fold_left ( +. ) 0. filtered\n\n  let mean t name =\n    let filtered, count = Col.col_as_float_array (get_column_exn t name) in\n    if count = 0 then Float.nan\n    else Array.fold_left ( +. ) 0. filtered /. float_of_int count\n\n  let variance_of col =\n    let filtered, count = Col.col_as_float_array col in\n    if count = 0 then Float.nan\n    else\n      let n = float_of_int count in\n      let sum = ref 0. in\n      let sum_sq = ref 0. in\n      for i = 0 to Array.length filtered - 1 do\n        let x = filtered.(i) in\n        sum := !sum +. x;\n        sum_sq := !sum_sq +. (x *. x)\n      done;\n      let mean = !sum /. n in\n      (!sum_sq /. n) -. (mean *. 
mean)\n\n  let std t name = sqrt (variance_of (get_column_exn t name))\n  let var t name = variance_of (get_column_exn t name)\n\n  let min t name =\n    let filtered, count = Col.col_as_float_array (get_column_exn t name) in\n    if count = 0 then None else Some (Array.fold_left min max_float filtered)\n\n  let max t name =\n    let filtered, count = Col.col_as_float_array (get_column_exn t name) in\n    if count = 0 then None else Some (Array.fold_left max min_float filtered)\n\n  let median t name =\n    let filtered, count = Col.col_as_float_array (get_column_exn t name) in\n    if count = 0 then Float.nan\n    else (\n      Array.sort compare filtered;\n      let n = Array.length filtered in\n      if n mod 2 = 0 then (filtered.((n / 2) - 1) +. filtered.(n / 2)) /. 2.\n      else filtered.(n / 2))\n\n  let quantile t name ~q =\n    let filtered, count = Col.col_as_float_array (get_column_exn t name) in\n    if count = 0 then Float.nan\n    else (\n      Array.sort compare filtered;\n      let n = Array.length filtered in\n      let pos = q *. float_of_int (n - 1) in\n      let lower = int_of_float pos in\n      let upper = Stdlib.min (lower + 1) (n - 1) in\n      let weight = pos -. float_of_int lower in\n      (filtered.(lower) *. (1. -. weight)) +. (filtered.(upper) *. 
weight))\n\n  let count t name =\n    match get_column t name with\n    | Some col -> Col.length col - Col.null_count col\n    | None -> 0\n\n  let count_unique_options arr =\n    let seen = Hashtbl.create 16 in\n    Array.iter (function Some x -> Hashtbl.replace seen x () | None -> ()) arr;\n    Hashtbl.length seen\n\n  let nunique t name =\n    let col = get_column t name in\n    match col with\n    | None -> 0\n    | Some (Col.S arr) -> count_unique_options arr\n    | Some (Col.B arr) -> count_unique_options arr\n    | Some (Col.P _) ->\n        (* Use col_as_float_array which already handles all numeric dtypes and\n           respects the null mask *)\n        let filtered, count = Col.col_as_float_array (Option.get col) in\n        if count = 0 then 0\n        else\n          let seen = Hashtbl.create 16 in\n          Array.iter\n            (fun x -> Hashtbl.replace seen (Int64.bits_of_float x) ())\n            filtered;\n          Hashtbl.length seen\n\n  (* Row-wise (horizontal) reductions *)\n\n  let collect_as_float64 t names =\n    List.fold_left\n      (fun (acc : (float, Bigarray.float64_elt) Nx.t list) name ->\n        match get_column t name with\n        | Some (Col.P (_, tensor, mask_opt)) ->\n            let casted = Nx.cast Nx.float64 tensor in\n            let result =\n              match mask_opt with\n              | Some mask ->\n                  let mask_tensor =\n                    Nx.create Nx.uint8\n                      [| Array.length mask |]\n                      (Array.map (fun b -> if b then 1 else 0) mask)\n                  in\n                  let mask_float = Nx.cast Nx.float64 mask_tensor in\n                  let nan_tensor = Nx.full_like casted Float.nan in\n                  Nx.where (Nx.cast Nx.bool mask_float) nan_tensor casted\n              | None -> casted\n            in\n            result :: acc\n        | _ -> acc)\n      [] names\n    |> List.rev\n\n  let dot t ~names ~weights =\n    if List.length names <> 
Array.length weights then\n      invalid_arg \"dot: number of columns must match number of weights\";\n    let tensors = collect_as_float64 t names in\n    if tensors = [] then Col.float64 (Array.make (num_rows t) 0.)\n    else\n      let n_rows = num_rows t in\n      let n_cols = List.length tensors in\n      let arrs = List.map Nx.to_array tensors in\n      let result = Array.make n_rows 0. in\n      for i = 0 to n_rows - 1 do\n        let sum = ref 0. in\n        List.iteri\n          (fun j arr ->\n            if j < n_cols then\n              let v = arr.(i) in\n              if Float.is_finite v then sum := !sum +. (v *. weights.(j)))\n          arrs;\n        result.(i) <- !sum\n      done;\n      Col.float64 result\n\n  let row_sum ?(skipna = true) t ~names =\n    let tensors = collect_as_float64 t names in\n    if tensors = [] then Col.numeric Nx.float64 (Array.make (num_rows t) 0.0)\n    else\n      let stacked = Nx.stack tensors ~axis:0 in\n      let result =\n        if skipna then\n          let nan_mask = Nx.isnan stacked in\n          let zeros = Nx.zeros_like stacked in\n          let cleaned = Nx.where nan_mask zeros stacked in\n          Nx.sum cleaned ~axes:[ 0 ]\n        else Nx.sum stacked ~axes:[ 0 ]\n      in\n      Col.of_tensor result\n\n  let row_mean ?(skipna = true) t ~names =\n    let tensors = collect_as_float64 t names in\n    if tensors = [] then\n      Col.numeric Nx.float64 (Array.make (num_rows t) Float.nan)\n    else\n      let stacked = Nx.stack tensors ~axis:0 in\n      let result =\n        if skipna then\n          let nan_mask = Nx.isnan stacked in\n          let zeros = Nx.zeros_like stacked in\n          let cleaned = Nx.where nan_mask zeros stacked in\n          let ones = Nx.ones_like stacked in\n          let valid_mask = Nx.where nan_mask zeros ones in\n          let sum = Nx.sum cleaned ~axes:[ 0 ] in\n          let count = Nx.sum valid_mask ~axes:[ 0 ] in\n          let safe_count = Nx.maximum count (Nx.ones_like 
count) in\n          let mean = Nx.div sum safe_count in\n          let all_nan = Nx.equal count (Nx.zeros_like count) in\n          Nx.where all_nan (Nx.full_like mean Float.nan) mean\n        else Nx.mean stacked ~axes:[ 0 ]\n      in\n      Col.of_tensor result\n\n  let row_min ?(skipna = true) t ~names =\n    let tensors = collect_as_float64 t names in\n    if tensors = [] then\n      Col.numeric Nx.float64 (Array.make (num_rows t) Float.nan)\n    else\n      let stacked = Nx.stack tensors ~axis:0 in\n      let result =\n        if skipna then\n          let nan_mask = Nx.isnan stacked in\n          let inf = Nx.full_like stacked Float.infinity in\n          let cleaned = Nx.where nan_mask inf stacked in\n          let min_vals = Nx.min cleaned ~axes:[ 0 ] in\n          let is_inf =\n            Nx.equal min_vals (Nx.full_like min_vals Float.infinity)\n          in\n          Nx.where is_inf (Nx.full_like min_vals Float.nan) min_vals\n        else Nx.min stacked ~axes:[ 0 ]\n      in\n      Col.of_tensor result\n\n  let row_max ?(skipna = true) t ~names =\n    let tensors = collect_as_float64 t names in\n    if tensors = [] then\n      Col.numeric Nx.float64 (Array.make (num_rows t) Float.nan)\n    else\n      let stacked = Nx.stack tensors ~axis:0 in\n      let result =\n        if skipna then\n          let nan_mask = Nx.isnan stacked in\n          let neg_inf = Nx.full_like stacked Float.neg_infinity in\n          let cleaned = Nx.where nan_mask neg_inf stacked in\n          let max_vals = Nx.max cleaned ~axes:[ 0 ] in\n          let is_neg_inf =\n            Nx.equal max_vals (Nx.full_like max_vals Float.neg_infinity)\n          in\n          Nx.where is_neg_inf (Nx.full_like max_vals Float.nan) max_vals\n        else Nx.max stacked ~axes:[ 0 ]\n      in\n      Col.of_tensor result\n\n  module String = struct\n    let get_strings t name =\n      match get_column t name with\n      | Some (Col.S arr) -> arr\n      | _ -> failwith (\"Agg.String: column \" ^ 
name ^ \" is not a string column\")\n\n    let min t name =\n      let arr = get_strings t name in\n      Array.fold_left\n        (fun acc x ->\n          match (acc, x) with\n          | None, v -> v\n          | Some a, Some b -> Some (Stdlib.min a b)\n          | v, None -> v)\n        None arr\n\n    let max t name =\n      let arr = get_strings t name in\n      Array.fold_left\n        (fun acc x ->\n          match (acc, x) with\n          | None, v -> v\n          | Some a, Some b -> Some (Stdlib.max a b)\n          | v, None -> v)\n        None arr\n\n    let concat t name ?(sep = \"\") () =\n      let arr = get_strings t name in\n      let parts =\n        Array.fold_left\n          (fun acc x -> match x with Some s -> s :: acc | None -> acc)\n          [] arr\n      in\n      Stdlib.String.concat sep (List.rev parts)\n\n    let unique t name =\n      let arr = get_strings t name in\n      let seen = Hashtbl.create 16 in\n      Array.iter\n        (function Some s -> Hashtbl.replace seen s () | None -> ())\n        arr;\n      Hashtbl.fold (fun k () acc -> k :: acc) seen [] |> Array.of_list\n\n    let nunique t name =\n      let arr = get_strings t name in\n      let seen = Hashtbl.create 16 in\n      Array.iter\n        (function Some s -> Hashtbl.replace seen s () | None -> ())\n        arr;\n      Hashtbl.length seen\n\n    let mode t name =\n      let arr = get_strings t name in\n      let counts = Hashtbl.create 16 in\n      Array.iter\n        (function\n          | Some s ->\n              let c = Option.value (Hashtbl.find_opt counts s) ~default:0 in\n              Hashtbl.replace counts s (c + 1)\n          | None -> ())\n        arr;\n      Hashtbl.fold\n        (fun k v acc ->\n          match acc with\n          | None -> Some (k, v)\n          | Some (_, best) when v > best -> Some (k, v)\n          | _ -> acc)\n        counts None\n      |> Option.map fst\n  end\n\n  module Bool = struct\n    let get_bools t name =\n      match get_column t 
name with\n      | Some (Col.B arr) -> arr\n      | _ -> failwith (\"Agg.Bool: column \" ^ name ^ \" is not a boolean column\")\n\n    let all t name =\n      let arr = get_bools t name in\n      Array.for_all (function Some b -> b | None -> true) arr\n\n    let any t name =\n      let arr = get_bools t name in\n      Array.exists (function Some true -> true | _ -> false) arr\n\n    let sum t name =\n      let arr = get_bools t name in\n      Array.fold_left\n        (fun acc x -> match x with Some true -> acc + 1 | _ -> acc)\n        0 arr\n\n    let mean t name =\n      let arr = get_bools t name in\n      let total = ref 0 in\n      let count = ref 0 in\n      Array.iter\n        (function\n          | Some b ->\n              if b then incr total;\n              incr count\n          | None -> ())\n        arr;\n      if !count = 0 then Float.nan\n      else float_of_int !total /. float_of_int !count\n  end\n\n  let collect_bool_arrays t names =\n    List.map\n      (fun name ->\n        match get_column t name with\n        | Some (Col.B arr) -> arr\n        | _ ->\n            failwith\n              (\"Agg.row_all/row_any: column \" ^ name\n             ^ \" is not a boolean column\"))\n      names\n\n  let row_all t ~names =\n    let arrays = collect_bool_arrays t names in\n    let n_rows = num_rows t in\n    let result =\n      Array.init n_rows (fun i ->\n          let value =\n            List.for_all\n              (fun arr -> match arr.(i) with Some true -> true | _ -> false)\n              arrays\n          in\n          Some value)\n    in\n    Col.B result\n\n  let row_any t ~names =\n    let arrays = collect_bool_arrays t names in\n    let n_rows = num_rows t in\n    let result =\n      Array.init n_rows (fun i ->\n          let value =\n            List.exists\n              (fun arr -> match arr.(i) with Some true -> true | _ -> false)\n              arrays\n          in\n          Some value)\n    in\n    Col.B result\nend\n\n(* Joins 
*)\n\nmodule Join_key = struct\n  type t =\n    | Int32 of int32\n    | Int64 of int64\n    | Float of float\n    | String of string\n    | Null\n\n  let equal a b =\n    match (a, b) with\n    | Int32 x, Int32 y -> Int32.equal x y\n    | Int64 x, Int64 y -> Int64.equal x y\n    | Float x, Float y -> Float.equal x y\n    | String x, String y -> String.equal x y\n    | Null, Null -> true\n    | _ -> false\n\n  let hash = Hashtbl.hash\nend\n\nmodule Join_key_tbl = Hashtbl.Make (struct\n  type t = Join_key.t\n\n  let equal = Join_key.equal\n  let hash = Join_key.hash\nend)\n\nlet get_key_array col =\n  match col with\n  | Col.P (dtype, tensor, mask_opt) -> (\n      let is_null i =\n        match mask_opt with Some mask -> mask.(i) | None -> false\n      in\n      match dtype with\n      | Nx.Int32 ->\n          let arr : int32 array = Nx.to_array tensor in\n          Array.mapi\n            (fun i v -> if is_null i then Join_key.Null else Join_key.Int32 v)\n            arr\n      | Nx.Int64 ->\n          let arr : int64 array = Nx.to_array tensor in\n          Array.mapi\n            (fun i v -> if is_null i then Join_key.Null else Join_key.Int64 v)\n            arr\n      | Nx.Float32 ->\n          let arr : float array = Nx.to_array tensor in\n          Array.mapi\n            (fun i v -> if is_null i then Join_key.Null else Join_key.Float v)\n            arr\n      | Nx.Float64 ->\n          let arr : float array = Nx.to_array tensor in\n          Array.mapi\n            (fun i v -> if is_null i then Join_key.Null else Join_key.Float v)\n            arr\n      | _ -> failwith \"Unsupported column type for join\")\n  | Col.S arr ->\n      Array.map\n        (function Some s -> Join_key.String s | None -> Join_key.Null)\n        arr\n  | _ -> failwith \"Unsupported column type for join\"\n\nlet build_index keys =\n  let tmp = Join_key_tbl.create (max 16 (Array.length keys)) in\n  Array.iteri\n    (fun idx key ->\n      let existing =\n        match 
Join_key_tbl.find_opt tmp key with Some lst -> lst | None -> []\n      in\n      Join_key_tbl.replace tmp key (idx :: existing))\n    keys;\n  let final_tbl = Join_key_tbl.create (Join_key_tbl.length tmp + 1) in\n  Join_key_tbl.iter\n    (fun key lst ->\n      Join_key_tbl.add final_tbl key (Array.of_list (List.rev lst)))\n    tmp;\n  final_tbl\n\nlet join t1 t2 ~on ?right_on ~how ?(suffixes = (\"_x\", \"_y\")) () =\n  let right_key = Option.value right_on ~default:on in\n  let t2, right_key_col =\n    if right_key <> on then\n      let t2' = rename_column t2 ~old_name:right_key ~new_name:on in\n      (t2', on)\n    else (t2, on)\n  in\n  let left_col = get_column_exn t1 on in\n  let right_col = get_column_exn t2 right_key_col in\n\n  let left_keys = get_key_array left_col in\n  let right_keys = get_key_array right_col in\n  let right_index = build_index right_keys in\n\n  let left_indices = ref [] in\n  let right_indices = ref [] in\n  let append_pair l r =\n    left_indices := l :: !left_indices;\n    right_indices := r :: !right_indices\n  in\n  let matched_right = Array.make (Array.length right_keys) false in\n\n  (match how with\n  | `Inner ->\n      for i = 0 to Array.length left_keys - 1 do\n        match Join_key_tbl.find_opt right_index left_keys.(i) with\n        | Some matches ->\n            Array.iter\n              (fun j ->\n                append_pair i j;\n                matched_right.(j) <- true)\n              matches\n        | None -> ()\n      done\n  | `Left ->\n      for i = 0 to Array.length left_keys - 1 do\n        match Join_key_tbl.find_opt right_index left_keys.(i) with\n        | Some matches ->\n            Array.iter\n              (fun j ->\n                append_pair i j;\n                matched_right.(j) <- true)\n              matches\n        | None -> append_pair i (-1)\n      done\n  | `Right ->\n      let left_index = build_index left_keys in\n      for j = 0 to Array.length right_keys - 1 do\n        match 
Join_key_tbl.find_opt left_index right_keys.(j) with\n        | Some matches -> Array.iter (fun i -> append_pair i j) matches\n        | None -> append_pair (-1) j\n      done\n  | `Outer ->\n      for i = 0 to Array.length left_keys - 1 do\n        match Join_key_tbl.find_opt right_index left_keys.(i) with\n        | Some matches ->\n            Array.iter\n              (fun j ->\n                append_pair i j;\n                matched_right.(j) <- true)\n              matches\n        | None -> append_pair i (-1)\n      done;\n      for j = 0 to Array.length right_keys - 1 do\n        if not matched_right.(j) then append_pair (-1) j\n      done);\n\n  let left_idx = Array.of_list (List.rev !left_indices) in\n  let right_idx = Array.of_list (List.rev !right_indices) in\n\n  let result_cols = ref [] in\n  let left_suffix, right_suffix = suffixes in\n\n  List.iter\n    (fun name ->\n      let col = get_column_exn t1 name in\n      let new_col = Col.reindex_nullable col left_idx (num_rows t1) in\n      let final_name =\n        if name <> on && has_column t2 name then name ^ left_suffix else name\n      in\n      result_cols := (final_name, new_col) :: !result_cols)\n    (column_names t1);\n\n  List.iter\n    (fun name ->\n      if name <> on then\n        let col = get_column_exn t2 name in\n        let new_col = Col.reindex_nullable col right_idx (num_rows t2) in\n        let final_name =\n          if has_column t1 name then name ^ right_suffix else name\n        in\n        result_cols := (final_name, new_col) :: !result_cols)\n    (column_names t2);\n\n  create (List.rev !result_cols)\n\n(* Pivot and reshape *)\n\nlet pivot t ~index ~columns ~values ?(agg_func = `Sum) () =\n  let col_col = get_column_exn t columns in\n  let col_fmt = Col.to_string_fn col_col in\n  let idx_col = get_column_exn t index in\n  let idx_fmt = Col.to_string_fn idx_col in\n  let n = num_rows t in\n  let unique_cols =\n    let seen = Hashtbl.create 16 in\n    let result = ref [] in\n  
  for i = 0 to n - 1 do\n      let s = col_fmt i in\n      if not (Hashtbl.mem seen s) then (\n        Hashtbl.add seen s ();\n        result := s :: !result)\n    done;\n    List.rev !result\n  in\n  let unique_indices =\n    let seen = Hashtbl.create 16 in\n    let result = ref [] in\n    for i = 0 to n - 1 do\n      let s = idx_fmt i in\n      if not (Hashtbl.mem seen s) then (\n        Hashtbl.add seen s ();\n        result := s :: !result)\n    done;\n    List.rev !result\n  in\n  let groups = Hashtbl.create 16 in\n  let val_arr, _ = Col.col_as_float_array (get_column_exn t values) in\n  for i = 0 to n - 1 do\n    let idx_key = idx_fmt i in\n    let col_key = col_fmt i in\n    let key = (idx_key, col_key) in\n    let current = try Hashtbl.find groups key with Not_found -> [] in\n    Hashtbl.replace groups key (val_arr.(i) :: current)\n  done;\n  let aggregate values =\n    match agg_func with\n    | `Sum -> List.fold_left ( +. ) 0. values\n    | `Mean ->\n        let s = List.fold_left ( +. ) 0. values in\n        s /. 
float_of_int (List.length values)\n    | `Count -> float_of_int (List.length values)\n    | `Min -> List.fold_left min Float.infinity values\n    | `Max -> List.fold_left max Float.neg_infinity values\n  in\n  let result_cols =\n    ref [ (index, Col.string (Array.of_list unique_indices)) ]\n  in\n  List.iter\n    (fun col_name ->\n      let col_values =\n        List.map\n          (fun idx ->\n            try\n              let values = Hashtbl.find groups (idx, col_name) in\n              aggregate values\n            with Not_found -> Float.nan)\n          unique_indices\n      in\n      result_cols :=\n        (col_name, Col.float64 (Array.of_list col_values)) :: !result_cols)\n    unique_cols;\n  create (List.rev !result_cols)\n\nlet melt t ?(id_vars = []) ?(value_vars = []) ?(var_name = \"variable\")\n    ?(value_name = \"value\") () =\n  let value_columns =\n    if value_vars = [] then\n      List.filter (fun name -> not (List.mem name id_vars)) (column_names t)\n    else value_vars\n  in\n  let n_rows = num_rows t in\n  let n_value_cols = List.length value_columns in\n  let total_rows = n_rows * n_value_cols in\n  let result_cols = ref [] in\n  List.iter\n    (fun id_name ->\n      let col = get_column_exn t id_name in\n      let new_col =\n        Col.reindex col (Array.init total_rows (fun i -> i / n_value_cols))\n      in\n      result_cols := (id_name, new_col) :: !result_cols)\n    id_vars;\n  let value_columns_arr = Array.of_list value_columns in\n  let var_col_values =\n    Array.init total_rows (fun i -> Some value_columns_arr.(i mod n_value_cols))\n  in\n  result_cols := (var_name, Col.S var_col_values) :: !result_cols;\n  let value_arrays =\n    Array.of_list\n      (List.map\n         (fun col_name ->\n           let arr, _ = Col.col_as_float_array (get_column_exn t col_name) in\n           arr)\n         value_columns)\n  in\n  let value_col_data =\n    Array.init total_rows (fun i ->\n        let row = i / n_value_cols in\n        let c = i 
mod n_value_cols in\n        value_arrays.(c).(row))\n  in\n  result_cols := (value_name, Col.float64 value_col_data) :: !result_cols;\n  create (List.rev !result_cols)\n\n(* Conversion *)\n\nlet to_nx t =\n  let numeric_tensors =\n    List.filter_map\n      (fun (_name, col) ->\n        match col with\n        | Col.P (_, tensor, _) -> Some (Nx.cast Nx.float32 tensor)\n        | _ -> None)\n      t.columns\n  in\n  if numeric_tensors = [] then invalid_arg \"to_nx: no numeric columns\"\n  else Nx.stack numeric_tensors ~axis:1\n\n(* Display *)\n\nlet pp ?(max_rows = 10) ?(max_cols = 10) ppf t =\n  let n_rows = num_rows t in\n  let n_cols = num_columns t in\n  let rows_to_show = min max_rows n_rows in\n  let cols_to_show = min max_cols n_cols in\n  let names = column_names t in\n  let names_to_show = list_take cols_to_show names in\n  let fmts =\n    List.map\n      (fun name ->\n        match get_column t name with\n        | Some col -> Col.to_string_fn col\n        | None -> fun _ -> \"\")\n      names_to_show\n  in\n  Format.fprintf ppf \"Shape: (%d, %d)@\\n\" n_rows n_cols;\n  Format.fprintf ppf \"%s@\\n\" (String.concat \"\\t\" names_to_show);\n  for i = 0 to rows_to_show - 1 do\n    Format.fprintf ppf \"%s@\\n\"\n      (String.concat \"\\t\" (List.map (fun f -> f i) fmts))\n  done\n\nlet to_string ?max_rows ?max_cols t =\n  Format.asprintf \"%a\" (pp ?max_rows ?max_cols) t\n\nlet print ?max_rows ?max_cols t = print_string (to_string ?max_rows ?max_cols t)\n\nlet describe t =\n  let numeric_cols =\n    List.filter_map\n      (fun (name, col) -> match col with Col.P _ -> Some name | _ -> None)\n      t.columns\n  in\n  let stats = [ \"count\"; \"mean\"; \"std\"; \"min\"; \"25%\"; \"50%\"; \"75%\"; \"max\" ] in\n  let data =\n    List.map\n      (fun stat ->\n        ( stat,\n          Col.string\n            (Array.of_list\n               (List.map\n                  (fun col_name ->\n                    match stat with\n                    | \"count\" -> 
string_of_int (Agg.count t col_name)\n                    | \"mean\" -> string_of_float (Agg.mean t col_name)\n                    | \"std\" -> string_of_float (Agg.std t col_name)\n                    | \"min\" -> (\n                        match Agg.min t col_name with\n                        | Some v -> string_of_float v\n                        | None -> \"NaN\")\n                    | \"25%\" -> string_of_float (Agg.quantile t col_name ~q:0.25)\n                    | \"50%\" -> string_of_float (Agg.median t col_name)\n                    | \"75%\" -> string_of_float (Agg.quantile t col_name ~q:0.75)\n                    | \"max\" -> (\n                        match Agg.max t col_name with\n                        | Some v -> string_of_float v\n                        | None -> \"NaN\")\n                    | _ -> \"\")\n                  numeric_cols)) ))\n      stats\n  in\n  create data\n\nlet pp_info ppf t =\n  let n_rows, n_cols = shape t in\n  Format.fprintf ppf \"DataFrame info:@\\n\";\n  Format.fprintf ppf \"  Rows: %d@\\n\" n_rows;\n  Format.fprintf ppf \"  Columns: %d@\\n\" n_cols;\n  Format.fprintf ppf \"@\\nColumn types:@\\n\";\n  List.iter\n    (fun (name, typ) ->\n      let typ_str =\n        match typ with\n        | `Float32 -> \"float32\"\n        | `Float64 -> \"float64\"\n        | `Int32 -> \"int32\"\n        | `Int64 -> \"int64\"\n        | `Bool -> \"bool\"\n        | `String -> \"string\"\n        | `Other -> \"other\"\n      in\n      Format.fprintf ppf \"  %s: %s@\\n\" name typ_str)\n    (column_types t)\n\nlet info t = pp_info Format.std_formatter t\n\n(* ───── Rich display ───── *)\n\nlet html_escape s =\n  let buf = Buffer.create (String.length s + 8) in\n  String.iter\n    (function\n      | '<' -> Buffer.add_string buf \"&lt;\"\n      | '>' -> Buffer.add_string buf \"&gt;\"\n      | '&' -> Buffer.add_string buf \"&amp;\"\n      | '\"' -> Buffer.add_string buf \"&quot;\"\n      | c -> Buffer.add_char buf c)\n    s;\n  
Buffer.contents buf\n\nlet to_html ?(max_rows = 20) ?(max_cols = 10) t =\n  let n_rows = num_rows t in\n  let n_cols = num_columns t in\n  let rows_to_show = min max_rows n_rows in\n  let cols_to_show = min max_cols n_cols in\n  let names = column_names t in\n  let names_to_show = list_take cols_to_show names in\n  let fmts =\n    List.map\n      (fun name ->\n        match get_column t name with\n        | Some col -> Col.to_string_fn col\n        | None -> fun _ -> \"\")\n      names_to_show\n  in\n  let buf = Buffer.create 512 in\n  Buffer.add_string buf \"<table>\\n<thead><tr>\";\n  List.iter\n    (fun name ->\n      Buffer.add_string buf \"<th>\";\n      Buffer.add_string buf (html_escape name);\n      Buffer.add_string buf \"</th>\")\n    names_to_show;\n  if n_cols > cols_to_show then Buffer.add_string buf \"<th>\\xe2\\x80\\xa6</th>\";\n  Buffer.add_string buf \"</tr></thead>\\n<tbody>\\n\";\n  for i = 0 to rows_to_show - 1 do\n    Buffer.add_string buf \"<tr>\";\n    List.iter\n      (fun f ->\n        Buffer.add_string buf \"<td>\";\n        Buffer.add_string buf (html_escape (f i));\n        Buffer.add_string buf \"</td>\")\n      fmts;\n    if n_cols > cols_to_show then Buffer.add_string buf \"<td>\\xe2\\x80\\xa6</td>\";\n    Buffer.add_string buf \"</tr>\\n\"\n  done;\n  if n_rows > rows_to_show then begin\n    Buffer.add_string buf \"<tr>\";\n    for _ = 1 to List.length names_to_show do\n      Buffer.add_string buf \"<td>\\xe2\\x80\\xa6</td>\"\n    done;\n    if n_cols > cols_to_show then Buffer.add_string buf \"<td>\\xe2\\x80\\xa6</td>\";\n    Buffer.add_string buf \"</tr>\\n\"\n  end;\n  Buffer.add_string buf \"</tbody>\\n</table>\\n\";\n  Buffer.add_string buf \"<p><small>\";\n  Buffer.add_string buf (string_of_int n_rows);\n  Buffer.add_string buf \" rows \\xc3\\x97 \";\n  Buffer.add_string buf (string_of_int n_cols);\n  Buffer.add_string buf \" columns</small></p>\";\n  Buffer.contents buf\n\nlet base64_encode input =\n  let alphabet =\n    
\"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/\"\n  in\n  let len = String.length input in\n  let out_len = (len + 2) / 3 * 4 in\n  let out = Bytes.create out_len in\n  let rec loop i j =\n    if i < len then begin\n      let b0 = Char.code (String.unsafe_get input i) in\n      let b1 =\n        if i + 1 < len then Char.code (String.unsafe_get input (i + 1)) else 0\n      in\n      let b2 =\n        if i + 2 < len then Char.code (String.unsafe_get input (i + 2)) else 0\n      in\n      Bytes.unsafe_set out j (String.unsafe_get alphabet (b0 lsr 2));\n      Bytes.unsafe_set out (j + 1)\n        (String.unsafe_get alphabet (((b0 land 3) lsl 4) lor (b1 lsr 4)));\n      Bytes.unsafe_set out (j + 2)\n        (if i + 1 < len then\n           String.unsafe_get alphabet (((b1 land 0xf) lsl 2) lor (b2 lsr 6))\n         else '=');\n      Bytes.unsafe_set out (j + 3)\n        (if i + 2 < len then String.unsafe_get alphabet (b2 land 0x3f) else '=');\n      loop (i + 3) (j + 4)\n    end\n  in\n  loop 0 0;\n  Bytes.unsafe_to_string out\n\nlet pp_display ppf t =\n  let html = to_html t in\n  let b64 = base64_encode html in\n  Format.fprintf ppf \"![](data:text/html;base64,%s)\" b64\n"
  },
  {
    "path": "packages/talon/lib/talon.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Dataframe library for tabular data manipulation.\n\n    Dataframes are immutable collections of named, typed columns with equal\n    length. Columns can hold numeric tensors (via {!Nx}), strings, or booleans,\n    each with explicit null semantics. *)\n\ntype t\n(** The type for dataframes.\n\n    Dataframes are immutable tabular data structures with named, typed columns.\n    All columns in a dataframe have the same length. *)\n\ntype 'a row\n(** The type for row-wise computations producing values of type ['a].\n\n    Row computations form an applicative functor, allowing composition of\n    independent computations from multiple columns. *)\n\n(** {1:columns Columns} *)\n\nmodule Col : sig\n  (** Column creation and manipulation.\n\n      Columns are the building blocks of dataframes, each storing a homogeneous\n      sequence of values with consistent null handling. *)\n\n  type t\n  (** The type for columns.\n\n      Columns store homogeneous data with consistent null handling:\n      - Numeric data backed by 1D {!Nx} tensors with an optional null mask.\n      - String data as [string option array].\n      - Boolean data as [bool option array]. *)\n\n  (** {2:generic_constructors Generic constructors} *)\n\n  val numeric : ('a, 'b) Nx.dtype -> 'a array -> t\n  (** [numeric dtype arr] is a numeric column from [arr] with dtype [dtype]. *)\n\n  val numeric_opt : ('a, 'b) Nx.dtype -> 'a option array -> t\n  (** [numeric_opt dtype arr] is a nullable numeric column from [arr] with dtype\n      [dtype]. [None] values are recorded in the null mask. *)\n\n  (** {2:non_nullable From arrays (non-nullable)}\n\n      Create columns from arrays without introducing null masks. 
Values are\n      taken literally; to represent missing data, use the [_opt] constructors\n      instead. *)\n\n  val float32 : float array -> t\n  (** [float32 arr] is a non-nullable float32 column from [arr].\n\n      The resulting column has no null mask. All values, including [nan], are\n      treated as regular data. *)\n\n  val float64 : float array -> t\n  (** [float64 arr] is a non-nullable float64 column from [arr].\n\n      The resulting column has no null mask. All values, including [nan], are\n      treated as regular data. *)\n\n  val int32 : int32 array -> t\n  (** [int32 arr] is a non-nullable int32 column from [arr].\n\n      The resulting column has no null mask. *)\n\n  val int64 : int64 array -> t\n  (** [int64 arr] is a non-nullable int64 column from [arr].\n\n      The resulting column has no null mask. *)\n\n  val bool : bool array -> t\n  (** [bool arr] is a non-nullable boolean column from [arr].\n\n      All values are wrapped as [Some value], creating a column with no nulls.\n  *)\n\n  val string : string array -> t\n  (** [string arr] is a non-nullable string column from [arr].\n\n      All values are wrapped as [Some value], creating a column with no nulls.\n  *)\n\n  (** {2:nullable From option arrays (nullable)}\n\n      Create columns from option arrays with explicit null representation.\n      Numeric types attach a null mask (while storing placeholder values in the\n      tensor), whereas string and boolean types preserve the option structure.\n  *)\n\n  val float32_opt : float option array -> t\n  (** [float32_opt arr] is a nullable float32 column from [arr].\n\n      [None] values are recorded in the null mask. Placeholder [nan] values are\n      stored in the tensor; callers must rely on the mask (via option accessors\n      or {!module:Agg} helpers) to detect nulls. 
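\n\n      For example:\n      {[\n        let col = Col.float32_opt [| Some 1.0; None; Some 3.0 |] in\n        Col.null_count col = 1\n      ]} 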
*)\n\n  val float64_opt : float option array -> t\n  (** [float64_opt arr] is a nullable float64 column from [arr].\n\n      [None] values are recorded in the null mask. Placeholder [nan] values are\n      stored in the tensor; callers must rely on the mask to detect nulls. *)\n\n  val int32_opt : int32 option array -> t\n  (** [int32_opt arr] is a nullable int32 column from [arr].\n\n      [None] values are recorded in the null mask. The tensor stores\n      [Int32.min_int] as placeholder, but the mask is authoritative when\n      checking for nulls. *)\n\n  val int64_opt : int64 option array -> t\n  (** [int64_opt arr] is a nullable int64 column from [arr].\n\n      [None] values are recorded in the null mask. The tensor stores\n      [Int64.min_int] as placeholder, but the mask is authoritative when\n      checking for nulls. *)\n\n  val bool_opt : bool option array -> t\n  (** [bool_opt arr] is a nullable boolean column from [arr].\n\n      The option array is used directly without conversion. O(1). *)\n\n  val string_opt : string option array -> t\n  (** [string_opt arr] is a nullable string column from [arr].\n\n      The option array is used directly without conversion. O(1). *)\n\n  (** {2:properties Properties} *)\n\n  val length : t -> int\n  (** [length col] is the number of elements in [col]. *)\n\n  val null_mask : t -> bool array option\n  (** [null_mask col] is the null mask of [col], if any.\n\n      Returns [Some mask] when an explicit mask was attached via a nullable\n      constructor, [None] otherwise. *)\n\n  val dtype :\n    t -> [ `Float32 | `Float64 | `Int32 | `Int64 | `Bool | `String | `Other ]\n  (** [dtype col] is the column's data type as a poly-variant tag. *)\n\n  val is_null_at : t -> int -> bool\n  (** [is_null_at col i] is [true] iff the value at index [i] is null.\n\n      Checks the null mask for numeric columns, or tests for [None] in\n      string/boolean columns. 
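\n\n      For example:\n      {[\n        let col = Col.string_opt [| Some \"a\"; None |] in\n        Col.is_null_at col 1 = true\n      ]} 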
*)\n\n  (** {2:of_tensor From tensors} *)\n\n  val of_tensor : ('a, 'b) Nx.t -> t\n  (** [of_tensor t] is a non-nullable column from the 1D tensor [t].\n\n      The tensor's dtype is preserved. Existing payload values (including NaNs\n      or extremal integers) remain regular data.\n\n      Raises [Invalid_argument] if [t] is not 1D. O(1) — the tensor is used\n      directly without copying. *)\n\n  (** {2:nulls Null handling} *)\n\n  val has_nulls : t -> bool\n  (** [has_nulls col] is [true] iff [col] contains at least one null value.\n\n      Checks the null mask for numeric columns, or scans for [None] in\n      string/boolean columns. *)\n\n  val null_count : t -> int\n  (** [null_count col] is the number of null values in [col]. *)\n\n  val drop_nulls : t -> t\n  (** [drop_nulls col] is [col] with all null values removed.\n\n      The column type is preserved. *)\n\n  val fill_nulls : t -> value:t -> t\n  (** [fill_nulls col ~value] is [col] with null values replaced by the first\n      element of [value].\n\n      [value] must be a single-element column of the same type as [col]. Raises\n      [Invalid_argument] if column types don't match.\n\n      See also {!Talon.fill_null} for a more convenient scalar-based API at the\n      dataframe level. *)\n\n  (** {2:col_transforms Column transforms} *)\n\n  val cumsum : t -> t\n  (** [cumsum col] is the cumulative sum of [col], preserving the dtype. *)\n\n  val cumprod : t -> t\n  (** [cumprod col] is the cumulative product of [col], preserving the dtype. *)\n\n  val diff : ?periods:int -> t -> t\n  (** [diff ?periods col] is the element-wise difference between consecutive\n      values. [periods] defaults to [1]. *)\n\n  val pct_change : ?periods:int -> t -> t\n  (** [pct_change ?periods col] is the fractional change between consecutive\n      values. [nan] where the previous value is zero. [periods] defaults to [1].\n      Result is always float64. 
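\n\n      For example:\n      {[\n        let col = Col.float64 [| 100.; 110.; 99. |] in\n        Col.pct_change col\n        (* the first element has no predecessor; the remaining fractional\n           changes are 0.1 and -0.1 *)\n      ]} 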
*)\n\n  val shift : periods:int -> t -> t\n  (** [shift ~periods col] is [col] with values shifted by [periods] positions.\n      Positive shifts move values down (inserting nulls at the top), negative\n      shifts move values up. *)\n\n  (** {2:extraction Extraction} *)\n\n  val to_tensor : ('a, 'b) Nx.dtype -> t -> ('a, 'b) Nx.t option\n  (** [to_tensor dtype col] is the underlying tensor if [col] is numeric and its\n      dtype matches [dtype]. *)\n\n  val to_string_array : t -> string option array option\n  (** [to_string_array col] is the underlying string option array if [col] is a\n      string column. *)\n\n  val to_bool_array : t -> bool option array option\n  (** [to_bool_array col] is the underlying bool option array if [col] is a\n      boolean column. *)\n\n  val to_string_fn : ?null:string -> t -> int -> string\n  (** [to_string_fn ?null col] is a function that formats the value at index [i]\n      as a string.\n\n      The underlying array is extracted once so repeated calls are O(1). [null]\n      defaults to [\"<null>\"]. *)\n\n  val pp : Format.formatter -> t -> unit\n  (** [pp] formats a column for inspection. Shows the dtype, length, and up to 5\n      values. *)\nend\n\n(** {1:creation DataFrame creation} *)\n\nval empty : t\n(** [empty] is an empty dataframe with no rows or columns.\n\n    Neutral element for {!concat}. *)\n\nval create : (string * Col.t) list -> t\n(** [create pairs] is a dataframe from [(name, column)] pairs.\n\n    Column names must be unique (case-sensitive) and all columns must have the\n    same length.\n\n    Raises [Invalid_argument] if duplicate column names exist or column lengths\n    differ. *)\n\nval of_tensors : ?names:string list -> ('a, 'b) Nx.t list -> t\n(** [of_tensors ?names tensors] is a dataframe from 1D tensors.\n\n    All tensors must have the same shape and dtype. 
[names] defaults to\n    [\"col0\"], [\"col1\"], etc.\n\n    Raises [Invalid_argument] if tensors have inconsistent shapes, any tensor is\n    not 1D, names are not unique, or the wrong number of names is provided. *)\n\nval of_nx : ?names:string list -> ('a, 'b) Nx.t -> t\n(** [of_nx ?names tensor] is a dataframe from a 2D tensor.\n\n    Each column of the tensor becomes a dataframe column. [names] defaults to\n    [\"col0\"], [\"col1\"], etc.\n\n    Raises [Invalid_argument] if [tensor] is not 2D or names are not unique. *)\n\n(** {1:inspection Shape and inspection} *)\n\nval shape : t -> int * int\n(** [shape df] is [(rows, columns)]. *)\n\nval num_rows : t -> int\n(** [num_rows df] is the number of rows in [df]. *)\n\nval num_columns : t -> int\n(** [num_columns df] is the number of columns in [df]. *)\n\nval column_names : t -> string list\n(** [column_names df] is the column names of [df] in order. *)\n\nval column_types :\n  t ->\n  (string\n  * [ `Float32 | `Float64 | `Int32 | `Int64 | `Bool | `String | `Other ])\n  list\n(** [column_types df] is the column names paired with their detected types. *)\n\nval select_columns :\n  t -> [ `Numeric | `Float | `Int | `Bool | `String ] -> string list\n(** [select_columns df category] is the column names matching [category].\n\n    Categories:\n    - [`Numeric]: all numeric types (float32, float64, int32, int64)\n    - [`Float]: floating-point types only (float32, float64)\n    - [`Int]: integer types only (int32, int64)\n    - [`Bool]: boolean columns\n    - [`String]: string columns *)\n\nval is_empty : t -> bool\n(** [is_empty df] is [true] iff [df] has no rows.\n\n    {b Note.} A dataframe can have columns but zero rows and still be considered\n    empty. *)\n\n(** {1:col_access Column access and manipulation} *)\n\nval get_column : t -> string -> Col.t option\n(** [get_column df name] is the column named [name] in [df], if any. 
*)\n\nval get_column_exn : t -> string -> Col.t\n(** [get_column_exn df name] is the column named [name] in [df].\n\n    Raises [Not_found] if the column does not exist. *)\n\nval to_array : ('a, 'b) Nx.dtype -> t -> string -> 'a array option\n(** [to_array dtype df name] is the numeric column [name] as a typed array if\n    the column exists and matches [dtype].\n\n    Null values retain their placeholder representation (NaN for floats,\n    sentinel values for integers). See {!to_opt_array} to distinguish nulls from\n    data. *)\n\nval to_opt_array : ('a, 'b) Nx.dtype -> t -> string -> 'a option array option\n(** [to_opt_array dtype df name] is the numeric column [name] as an option array\n    if the column exists and matches [dtype]. [None] for null elements, [Some v]\n    for present values. *)\n\nval to_bool_array : t -> string -> bool option array option\n(** [to_bool_array df name] is the bool option array for column [name] if it\n    exists and is bool type. *)\n\nval to_string_array : t -> string -> string option array option\n(** [to_string_array df name] is the string option array for column [name] if it\n    exists and is string type. *)\n\nval has_column : t -> string -> bool\n(** [has_column df name] is [true] iff [df] has a column named [name]. *)\n\nval add_column : t -> string -> Col.t -> t\n(** [add_column df name col] is [df] with column [name] added or replaced.\n\n    Raises [Invalid_argument] if [Col.length col] differs from [num_rows df]. *)\n\nval drop_column : t -> string -> t\n(** [drop_column df name] is [df] without column [name].\n\n    {b Note.} Returns [df] unchanged if the column does not exist. *)\n\nval drop_columns : t -> string list -> t\n(** [drop_columns df names] is [df] without the named columns.\n\n    Non-existent columns are silently ignored. 
*)\n\nval rename_column : t -> old_name:string -> new_name:string -> t\n(** [rename_column df ~old_name ~new_name] is [df] with column [old_name]\n    renamed to [new_name].\n\n    Raises [Not_found] if [old_name] does not exist. Raises [Invalid_argument]\n    if [new_name] already exists as a different column. *)\n\nval select : ?strict:bool -> t -> string list -> t\n(** [select ?strict df names] is the sub-dataframe with only the named columns,\n    in the order given by [names].\n\n    [strict] defaults to [true]: raises [Not_found] if any name is missing. When\n    [false], missing columns are silently skipped. *)\n\nval reorder_columns : t -> string list -> t\n(** [reorder_columns df names] is [df] with columns reordered so that [names]\n    appear first (in that order), followed by any remaining columns in their\n    original relative order.\n\n    Raises [Not_found] if any name in the list does not exist. *)\n\n(** {1:row_ops Row-wise operations}\n\n    The {!Row} module provides a declarative way to express computations over\n    dataframe rows. *)\n\nmodule Row : sig\n  (** Row-wise computations using an applicative interface. *)\n\n  (** {2:applicative Applicative interface} *)\n\n  val return : 'a -> 'a row\n  (** [return x] is a computation that produces [x] for every row. *)\n\n  val apply : ('a -> 'b) row -> 'a row -> 'b row\n  (** [apply f x] is the computation that applies [f] to [x] for each row. *)\n\n  val map : 'a row -> f:('a -> 'b) -> 'b row\n  (** [map x ~f] is the computation that applies [f] to each row's value from\n      [x]. *)\n\n  val map2 : 'a row -> 'b row -> f:('a -> 'b -> 'c) -> 'c row\n  (** [map2 x y ~f] is the computation that applies [f] to corresponding values\n      from [x] and [y]. *)\n\n  val map3 : 'a row -> 'b row -> 'c row -> f:('a -> 'b -> 'c -> 'd) -> 'd row\n  (** [map3 x y z ~f] combines three computations with [f]. 
*)\n\n  val both : 'a row -> 'b row -> ('a * 'b) row\n  (** [both x y] is the computation that pairs values from [x] and [y]. *)\n\n  (** {2:accessors Column accessors} *)\n\n  val float32 : string -> float row\n  (** [float32 name] extracts float32 values from column [name].\n\n      Raises [Not_found] if the column does not exist. Raises [Invalid_argument]\n      if the column is not float32 type. *)\n\n  val float64 : string -> float row\n  (** [float64 name] extracts float64 values from column [name].\n\n      Raises [Not_found] if the column does not exist. Raises [Invalid_argument]\n      if the column is not float64 type. *)\n\n  val int32 : string -> int32 row\n  (** [int32 name] extracts int32 values from column [name].\n\n      Raises [Not_found] if the column does not exist. Raises [Invalid_argument]\n      if the column is not int32 type. *)\n\n  val int64 : string -> int64 row\n  (** [int64 name] extracts int64 values from column [name].\n\n      Raises [Not_found] if the column does not exist. Raises [Invalid_argument]\n      if the column is not int64 type. *)\n\n  val string : string -> string row\n  (** [string name] extracts string values from column [name].\n\n      {b Note.} Null values are converted to empty strings.\n\n      Raises [Not_found] if the column does not exist. Raises [Invalid_argument]\n      if the column is not string type. *)\n\n  val bool : string -> bool row\n  (** [bool name] extracts boolean values from column [name].\n\n      {b Note.} Null values are converted to [false].\n\n      Raises [Not_found] if the column does not exist. Raises [Invalid_argument]\n      if the column is not boolean type. *)\n\n  val number : string -> float row\n  (** [number name] extracts numeric values from column [name], coercing all\n      numeric types to float.\n\n      {b Note.} Null values become [nan].\n\n      Raises [Not_found] if the column does not exist. Raises [Invalid_argument]\n      if the column is not a numeric type. 
*)\n\n  (** {2:row_info Row information} *)\n\n  val index : int row\n  (** [index] is the current row index (0-based). *)\n\n  val sequence : 'a row list -> 'a list row\n  (** [sequence xs] is the computation that collects values from all\n      computations in [xs] into a list. *)\n\n  val fold_list : 'a row list -> init:'b -> f:('b -> 'a -> 'b) -> 'b row\n  (** [fold_list xs ~init ~f] folds [f] over the computations in [xs] without\n      creating an intermediate list. *)\n\n  (** {2:opt_accessors Option-based accessors}\n\n      These accessors return [None] for null values instead of using placeholder\n      values. Use these when you need to distinguish genuine values from missing\n      data. *)\n\n  val float32_opt : string -> float option row\n  (** [float32_opt name] extracts float32 values as options from column [name].\n      [None] for null values.\n\n      Raises [Not_found] if the column does not exist. Raises [Invalid_argument]\n      if the column is not float32 type. *)\n\n  val float64_opt : string -> float option row\n  (** [float64_opt name] extracts float64 values as options from column [name].\n      [None] for null values.\n\n      Raises [Not_found] if the column does not exist. Raises [Invalid_argument]\n      if the column is not float64 type. *)\n\n  val int32_opt : string -> int32 option row\n  (** [int32_opt name] extracts int32 values as options from column [name].\n      [None] for null values.\n\n      Raises [Not_found] if the column does not exist. Raises [Invalid_argument]\n      if the column is not int32 type. *)\n\n  val int64_opt : string -> int64 option row\n  (** [int64_opt name] extracts int64 values as options from column [name].\n      [None] for null values.\n\n      Raises [Not_found] if the column does not exist. Raises [Invalid_argument]\n      if the column is not int64 type. 
*)\n\n  val string_opt : string -> string option row\n  (** [string_opt name] extracts string values as options from column [name].\n      [None] for null values.\n\n      Raises [Not_found] if the column does not exist. Raises [Invalid_argument]\n      if the column is not string type. *)\n\n  val bool_opt : string -> bool option row\n  (** [bool_opt name] extracts boolean values as options from column [name].\n      [None] for null values.\n\n      Raises [Not_found] if the column does not exist. Raises [Invalid_argument]\n      if the column is not boolean type. *)\nend\n\n(** {1:filtering Row filtering and transformation} *)\n\nval head : ?n:int -> t -> t\n(** [head ?n df] is the first [n] rows of [df]. [n] defaults to [5].\n\n    If [n] exceeds the number of rows, returns the entire dataframe. *)\n\nval tail : ?n:int -> t -> t\n(** [tail ?n df] is the last [n] rows of [df]. [n] defaults to [5].\n\n    If [n] exceeds the number of rows, returns the entire dataframe. *)\n\nval slice : t -> start:int -> stop:int -> t\n(** [slice df ~start ~stop] is the rows from [start] (inclusive) to [stop]\n    (exclusive).\n\n    Raises [Invalid_argument] if [start < 0], [stop < start], or indices are out\n    of bounds. *)\n\nval take : t -> int array -> t\n(** [take df indices] is the rows of [df] at the given 0-based [indices].\n\n    Indices may repeat (to duplicate rows) and need not be sorted.\n\n    Raises [Invalid_argument] if any index is out of bounds. 
*)\n\nval sample : ?n:int -> ?frac:float -> ?replace:bool -> ?seed:int -> t -> t\n(** [sample ?n ?frac ?replace ?seed df] is a random sample of rows from [df].\n\n    Exactly one of [n] or [frac] must be specified:\n    - [n]: exact number of rows to sample.\n    - [frac]: fraction of rows to sample (in \\[[0];[1]\\]).\n    - [replace] defaults to [false].\n    - [seed]: random seed for reproducible sampling.\n\n    Raises [Invalid_argument] if both [n] and [frac] are specified, neither is\n    specified, [frac] is outside \\[[0];[1]\\], or [n > num_rows df] when\n    [replace] is [false]. *)\n\nval filter : t -> bool array -> t\n(** [filter df mask] is the rows of [df] where [mask] is [true].\n\n    Raises [Invalid_argument] if [Array.length mask] differs from [num_rows df].\n*)\n\nval filter_by : t -> bool row -> t\n(** [filter_by df pred] is the rows of [df] where [pred] is [true].\n\n    Raises the same exceptions as the column accessors used in [pred]. *)\n\nval drop_nulls : ?subset:string list -> t -> t\n(** [drop_nulls ?subset df] is [df] with rows containing null values removed.\n\n    When [subset] is provided, only those columns are checked for nulls.\n    Otherwise all columns are checked. A row is dropped if any checked column is\n    null at that position. *)\n\nval fill_null :\n  t ->\n  string ->\n  with_value:\n    [ `Float of float\n    | `Int32 of int32\n    | `Int64 of int64\n    | `String of string\n    | `Bool of bool ] ->\n  t\n(** [fill_null df col_name ~with_value] is [df] with null values in column\n    [col_name] replaced by [with_value].\n\n    The value type must match the column type. Raises [Invalid_argument] if the\n    column does not exist or the types do not match. 
*)\n\nval drop_duplicates : ?subset:string list -> t -> t\n(** [drop_duplicates ?subset df] is [df] with duplicate rows removed, keeping\n    the first occurrence.\n\n    When [subset] is provided, only those columns are considered for equality.\n    Raises [Not_found] if any column in [subset] does not exist. *)\n\nval concat : axis:[ `Rows | `Columns ] -> t list -> t\n(** [concat ~axis dfs] is the concatenation of [dfs].\n\n    - [`Rows]: all dataframes must have the same columns; rows are stacked.\n    - [`Columns]: all dataframes must have the same number of rows; columns are\n      combined. Column names must be unique across dataframes.\n\n    Raises [Invalid_argument] if [dfs] is empty or the dataframes are\n    incompatible for the chosen axis. *)\n\nval map : t -> ('a, 'b) Nx.dtype -> 'a row -> ('a, 'b) Nx.t\n(** [map df dtype f] is a 1D tensor of the given [dtype] obtained by applying\n    [f] to each row of [df]. *)\n\nval with_column : t -> string -> ('a, 'b) Nx.dtype -> 'a row -> t\n(** [with_column df name dtype f] is [df] with a column [name] whose values are\n    produced by applying [f] to each row. If a column named [name] already\n    exists, it is replaced. *)\n\nval with_string_column : t -> string -> string row -> t\n(** [with_string_column df name f] is [df] with a string column [name] whose\n    values are produced by [f]. *)\n\nval with_bool_column : t -> string -> bool row -> t\n(** [with_bool_column df name f] is [df] with a boolean column [name] whose\n    values are produced by [f]. *)\n\nval with_columns : t -> (string * Col.t) list -> t\n(** [with_columns df cols] is [df] with the given columns added or replaced.\n\n    Raises [Invalid_argument] if any column length differs from [num_rows df].\n*)\n\nval iter : t -> unit row -> unit\n(** [iter df f] applies [f] to each row of [df] for side effects. 
*)\n\nval fold : t -> init:'acc -> f:('acc -> 'acc) row -> 'acc\n(** [fold df ~init ~f] folds [f] over the rows of [df] with accumulator [init].\n*)\n\n(** {1:sorting Sorting and grouping} *)\n\nval sort : t -> 'a row -> compare:('a -> 'a -> int) -> t\n(** [sort df key ~compare] is [df] with rows sorted by the values produced by\n    [key], ordered according to [compare].\n\n    O(n log n) in the number of rows. *)\n\nval sort_values : ?ascending:bool -> t -> string -> t\n(** [sort_values ?ascending df name] is [df] with rows sorted by column [name].\n\n    [ascending] defaults to [true]. Null values are always sorted to the end\n    regardless of direction.\n\n    Raises [Not_found] if the column does not exist. O(n log n). *)\n\nval group_by : t -> 'key row -> ('key * t) list\n(** [group_by df key] is the list of [(k, sub_df)] pairs obtained by grouping\n    the rows of [df] by the values produced by [key].\n\n    The order of groups is not guaranteed. Rows within each group maintain their\n    original relative order. *)\n\n(** {1:transforms Column transforms} *)\n\nval cumsum : t -> string -> t\n(** [cumsum df name] is [df] with column [name] replaced by its cumulative sum,\n    preserving the column's dtype.\n\n    Raises [Not_found] if the column does not exist. *)\n\nval cumprod : t -> string -> t\n(** [cumprod df name] is [df] with column [name] replaced by its cumulative\n    product, preserving the column's dtype.\n\n    Raises [Not_found] if the column does not exist. *)\n\nval diff : t -> string -> ?periods:int -> unit -> t\n(** [diff df name ?periods ()] is [df] with column [name] replaced by the\n    element-wise difference between consecutive values. [periods] defaults to\n    [1].\n\n    Raises [Not_found] if the column does not exist. *)\n\nval pct_change : t -> string -> ?periods:int -> unit -> t\n(** [pct_change df name ?periods ()] is [df] with column [name] replaced by the\n    fractional change between consecutive values. 
[nan] where the previous value\n    is zero. [periods] defaults to [1]. Result column is always float64.\n\n    Raises [Not_found] if the column does not exist. *)\n\nval shift : t -> string -> periods:int -> t\n(** [shift df name ~periods] is [df] with column [name] shifted by [periods]\n    positions. Positive shifts move values down (inserting nulls at the top),\n    negative shifts move values up.\n\n    Raises [Not_found] if the column does not exist. *)\n\n(** {1:col_inspect Column inspection} *)\n\nval is_null : t -> string -> Col.t\n(** [is_null df name] is a boolean column where [true] indicates a null value at\n    that position. *)\n\nval value_counts : t -> string -> t\n(** [value_counts df name] is a two-column dataframe with columns [\"value\"] and\n    [\"count\"], containing the unique non-null values and their frequencies,\n    sorted by count descending. *)\n\n(** {1:aggregations Aggregations} *)\n\nmodule Agg : sig\n  (** Column-wise aggregation operations.\n\n      All numeric aggregations coerce any numeric column to float, eliminating\n      the need for type-specific sub-modules. *)\n\n  (** {2:scalar Scalar aggregations}\n\n      These reduce a column to a single scalar value. *)\n\n  val sum : t -> string -> float\n  (** [sum df name] is the sum of non-null values in column [name] as float. *)\n\n  val mean : t -> string -> float\n  (** [mean df name] is the arithmetic mean of non-null values in column [name].\n  *)\n\n  val std : t -> string -> float\n  (** [std df name] is the population standard deviation of column [name]\n      (divides by [n], not [n-1]). *)\n\n  val var : t -> string -> float\n  (** [var df name] is the population variance of column [name] (divides by [n],\n      not [n-1]). *)\n\n  val min : t -> string -> float option\n  (** [min df name] is the minimum non-null value, or [None] if the column is\n      empty or all null. 
*)\n\n  val max : t -> string -> float option\n  (** [max df name] is the maximum non-null value, or [None] if the column is\n      empty or all null. *)\n\n  val median : t -> string -> float\n  (** [median df name] is the median (50th percentile) of column [name]. *)\n\n  val quantile : t -> string -> q:float -> float\n  (** [quantile df name ~q] is the [q]-th quantile of column [name] ([q] in\n      \\[[0];[1]\\]). *)\n\n  (** {2:generic Generic aggregations} *)\n\n  val count : t -> string -> int\n  (** [count df name] is the number of non-null values in column [name]. *)\n\n  val nunique : t -> string -> int\n  (** [nunique df name] is the number of unique non-null values in column\n      [name]. *)\n\n  (** {2:row_agg Row-wise (horizontal) aggregations}\n\n      These compute aggregations across columns for each row. *)\n\n  val row_sum : ?skipna:bool -> t -> names:string list -> Col.t\n  (** [row_sum ?skipna df ~names] is the row-wise sum across the named columns.\n      [skipna] defaults to [true]: skip null values. When [false], any null in a\n      row makes the entire row result null. *)\n\n  val row_mean : ?skipna:bool -> t -> names:string list -> Col.t\n  (** [row_mean ?skipna df ~names] is the row-wise mean across the named\n      columns. [skipna] defaults to [true]. *)\n\n  val row_min : ?skipna:bool -> t -> names:string list -> Col.t\n  (** [row_min ?skipna df ~names] is the row-wise minimum across the named\n      columns. [skipna] defaults to [true]. *)\n\n  val row_max : ?skipna:bool -> t -> names:string list -> Col.t\n  (** [row_max ?skipna df ~names] is the row-wise maximum across the named\n      columns. [skipna] defaults to [true]. *)\n\n  val dot : t -> names:string list -> weights:float array -> Col.t\n  (** [dot df ~names ~weights] is the weighted sum (dot product) across the\n      named columns for each row.\n\n      [weights] must have the same length as [names]. 
Raises [Invalid_argument]\n      if lengths differ or columns are not numeric. *)\n\n  val row_all : t -> names:string list -> Col.t\n  (** [row_all df ~names] is the row-wise logical AND across the named boolean\n      columns.\n\n      Each row is [true] only if all values are [Some true]. [None] and\n      [Some false] both count as false. Raises [Invalid_argument] if any column\n      is not boolean or does not exist. *)\n\n  val row_any : t -> names:string list -> Col.t\n  (** [row_any df ~names] is the row-wise logical OR across the named boolean\n      columns.\n\n      Each row is [true] if any value is [Some true]. Raises [Invalid_argument]\n      if any column is not boolean or does not exist. *)\n\n  (** {2:string_agg String aggregations} *)\n\n  module String : sig\n    val min : t -> string -> string option\n    (** [min df name] is the lexicographically smallest non-null string, or\n        [None] if the column is empty or all null. *)\n\n    val max : t -> string -> string option\n    (** [max df name] is the lexicographically largest non-null string, or\n        [None] if the column is empty or all null. *)\n\n    val concat : t -> string -> ?sep:string -> unit -> string\n    (** [concat df name ?sep ()] is the concatenation of all non-null strings.\n        [sep] defaults to [\"\"]. Empty string if all values are null. *)\n\n    val unique : t -> string -> string array\n    (** [unique df name] is the array of unique non-null values.\n\n        Order is not guaranteed. *)\n\n    val nunique : t -> string -> int\n    (** [nunique df name] is the number of unique non-null values. *)\n\n    val mode : t -> string -> string option\n    (** [mode df name] is the most frequent non-null value, or [None] if the\n        column is empty or all null. 
*)\n  end\n\n  (** {2:bool_agg Boolean aggregations} *)\n\n  module Bool : sig\n    val all : t -> string -> bool\n    (** [all df name] is [true] iff all non-null values are [true].\n\n        Returns [true] for columns with only null values (vacuous truth). *)\n\n    val any : t -> string -> bool\n    (** [any df name] is [true] iff any non-null value is [true].\n\n        Returns [false] for columns with only null values. *)\n\n    val sum : t -> string -> int\n    (** [sum df name] is the number of [true] values. Nulls are excluded. *)\n\n    val mean : t -> string -> float\n    (** [mean df name] is the proportion of [true] values among non-null. [nan]\n        if all values are null. *)\n  end\nend\n\n(** {1:joins Joins and merges} *)\n\nval join :\n  t ->\n  t ->\n  on:string ->\n  ?right_on:string ->\n  how:[ `Inner | `Left | `Right | `Outer ] ->\n  ?suffixes:string * string ->\n  unit ->\n  t\n(** [join df1 df2 ~on ?right_on ~how ?suffixes ()] joins two dataframes on key\n    columns.\n\n    [on] names the key column in [df1]. [right_on] names the key column in\n    [df2]; when omitted, [on] must name a column in both dataframes.\n\n    Join types:\n    - [`Inner]: rows where key exists in both dataframes.\n    - [`Left]: all rows from [df1], null-filled for missing [df2] rows.\n    - [`Right]: all rows from [df2], null-filled for missing [df1] rows.\n    - [`Outer]: all rows from both, null-filled where missing.\n\n    Null keys never match (null != null). Duplicate column names receive\n    [suffixes] (default: [\"_x\"], [\"_y\"]).\n\n    Raises [Not_found] if a key column is missing. Raises [Invalid_argument] if\n    key columns have incompatible types. 
*)\n\n(** {1:reshape Pivot and reshape} *)\n\nval pivot :\n  t ->\n  index:string ->\n  columns:string ->\n  values:string ->\n  ?agg_func:[ `Sum | `Mean | `Count | `Min | `Max ] ->\n  unit ->\n  t\n(** [pivot df ~index ~columns ~values ?agg_func ()] is a pivot table from [df].\n\n    - [index]: column whose values become row identifiers.\n    - [columns]: column whose unique values become new column names.\n    - [values]: column containing the data to fill the table.\n    - [agg_func] defaults to [`Sum] for numeric, [`Count] for others.\n\n    Raises [Not_found] if any specified column does not exist. Raises\n    [Invalid_argument] if [values] is incompatible with [agg_func]. *)\n\nval melt :\n  t ->\n  ?id_vars:string list ->\n  ?value_vars:string list ->\n  ?var_name:string ->\n  ?value_name:string ->\n  unit ->\n  t\n(** [melt df ?id_vars ?value_vars ?var_name ?value_name ()] unpivots [df] from\n    wide to long format.\n\n    - [id_vars]: columns to keep as identifiers (default: all non-[value_vars]).\n    - [value_vars]: columns to melt (default: all non-[id_vars]).\n    - [var_name] defaults to [\"variable\"].\n    - [value_name] defaults to [\"value\"].\n\n    Raises [Not_found] if any specified column does not exist. Raises\n    [Invalid_argument] if [id_vars] and [value_vars] overlap. *)\n\n(** {1:converting Converting} *)\n\nval to_nx : t -> (float, Bigarray.float32_elt) Nx.t\n(** [to_nx df] is a 2D float32 tensor from the numeric columns of [df].\n\n    Rows correspond to dataframe rows, columns to numeric dataframe columns (in\n    order). All numeric types are cast to float32. Null values become [nan].\n    String and boolean columns are ignored.\n\n    Raises [Invalid_argument] if [df] contains no numeric columns. *)\n\n(** {1:fmt Formatting and inspecting} *)\n\nval pp : ?max_rows:int -> ?max_cols:int -> Format.formatter -> t -> unit\n(** [pp ?max_rows ?max_cols ppf df] formats [df] as a table on [ppf].\n\n    [max_rows] defaults to [10]. 
[max_cols] defaults to [10]. *)\n\nval to_string : ?max_rows:int -> ?max_cols:int -> t -> string\n(** [to_string ?max_rows ?max_cols df] is [df] formatted as a table string. *)\n\nval print : ?max_rows:int -> ?max_cols:int -> t -> unit\n(** [print ?max_rows ?max_cols df] is\n    [pp ?max_rows ?max_cols Format.std_formatter df]. *)\n\nval describe : t -> t\n(** [describe df] is a dataframe of summary statistics for the numeric columns\n    of [df].\n\n    Rows are: count, mean, std, min, 25%, 50%, 75%, max. String and boolean\n    columns are ignored. *)\n\nval cast_column : t -> string -> ('a, 'b) Nx.dtype -> t\n(** [cast_column df name dtype] is [df] with column [name] converted to the\n    numeric [dtype].\n\n    Null values are preserved through the conversion.\n\n    Raises [Not_found] if the column does not exist. Raises [Invalid_argument]\n    if the source column is not numeric. *)\n\nval to_html : ?max_rows:int -> ?max_cols:int -> t -> string\n(** [to_html ?max_rows ?max_cols df] is [df] formatted as an HTML table string.\n\n    Generates a [<table>] element with [<thead>] and [<tbody>]. Truncated rows\n    and columns show ellipsis markers. A trailing [<p>] shows the shape when the\n    table is truncated.\n\n    [max_rows] defaults to [20]. [max_cols] defaults to [10]. *)\n\nval pp_display : Format.formatter -> t -> unit\n(** [pp_display ppf df] formats [df] as a data URI containing an HTML table.\n\n    This printer is intended for notebook environments (Quill) where the data\n    URI pattern is detected and rendered as rich HTML output. Uses fixed\n    defaults (max_rows=20, max_cols=10). For custom limits, use {!to_html}\n    directly. *)\n\nval pp_info : Format.formatter -> t -> unit\n(** [pp_info ppf df] formats detailed information about [df] on [ppf]: shape,\n    column names and types, null counts, and memory usage. *)\n\nval info : t -> unit\n(** [info df] is [pp_info Format.std_formatter df]. *)\n"
  },
  {
    "path": "packages/talon/test/dune",
    "content": "(test\n (name test_talon)\n (package talon)\n (libraries talon windtrap nx))\n\n(test\n (name test_talon_csv)\n (package talon)\n (libraries talon talon.csv windtrap nx))\n"
  },
  {
    "path": "packages/talon/test/test_talon.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Talon\nopen Windtrap\n\nlet check_int msg = equal ~msg int\nlet check_float msg = equal ~msg (float 1e-6)\nlet check_bool msg = equal ~msg bool\nlet check_string msg = equal ~msg string\nlet check_option_float msg = equal ~msg (option (float 1e-6))\nlet check_option_string msg = equal ~msg (option string)\nlet check_option_bool_array msg = equal ~msg (option (array bool))\nlet mask_of_column df name = Col.null_mask (get_column_exn df name)\n\n(* ───── Test Column Creation ───── *)\n\nlet test_col_creation () =\n  let df1 = create [ (\"c\", Col.float32 [| 1.0; 2.0; 3.0 |]) ] in\n  check_int \"float32 col rows\" 3 (num_rows df1);\n  let c1 = get_column_exn df1 \"c\" in\n  check_bool \"float32 no nulls\" false (Col.has_nulls c1);\n\n  let df2 = create [ (\"c\", Col.int32 [| 1l; 2l; 3l |]) ] in\n  check_int \"int32 col rows\" 3 (num_rows df2);\n\n  let df3 = create [ (\"c\", Col.string [| \"a\"; \"b\"; \"c\" |]) ] in\n  check_int \"string col rows\" 3 (num_rows df3);\n\n  let df4 = create [ (\"c\", Col.bool [| true; false; true |]) ] in\n  check_int \"bool col rows\" 3 (num_rows df4)\n\nlet test_col_nulls () =\n  let c1 = Col.float32_opt [| Some 1.0; None; Some 3.0 |] in\n  check_bool \"has nulls\" true (Col.has_nulls c1);\n  check_int \"null count\" 1 (Col.null_count c1);\n\n  let c2 = Col.drop_nulls c1 in\n  let df = create [ (\"c\", c2) ] in\n  check_int \"after drop_nulls\" 2 (num_rows df);\n  check_bool \"no nulls after drop\" false (Col.has_nulls c2)\n\nlet test_col_null_mask () =\n  let col = Col.float32_opt [| Some 1.0; None |] in\n  check_option_bool_array \"mask kept\"\n    (Some [| false; true |])\n    (Col.null_mask col);\n  let plain = Col.float32 [| 1.0; 2.0 |] in\n  
check_option_bool_array \"no mask\" None (Col.null_mask plain)\n\nlet test_drop_nulls_preserves_data_with_mask () =\n  let col = Col.int32_opt [| Some Int32.min_int; None |] in\n  let dropped = Col.drop_nulls col in\n  match Col.to_tensor Nx.int32 dropped with\n  | Some tensor ->\n      let arr : int32 array = Nx.to_array tensor in\n      check_int \"length after drop\" 1 (Array.length arr);\n      check_bool \"sentinel retained\" true (arr.(0) = Int32.min_int);\n      check_option_bool_array \"mask cleared\" None (Col.null_mask dropped)\n  | None -> fail \"expected int32 column\"\n\nlet test_fill_nulls_respects_mask () =\n  let col = Col.int32_opt [| Some 42l; None |] in\n  let filled = Col.fill_nulls col ~value:(Col.int32 [| 0l |]) in\n  match Col.to_tensor Nx.int32 filled with\n  | Some tensor ->\n      let arr : int32 array = Nx.to_array tensor in\n      check_bool \"first value untouched\" true (arr.(0) = 42l);\n      check_bool \"null filled\" true (arr.(1) = 0l)\n  | None -> fail \"expected int32 column\"\n\n(* ───── Test Dataframe Creation ───── *)\n\nlet test_df_creation () =\n  let df =\n    create\n      [\n        (\"a\", Col.int32 [| 1l; 2l; 3l |]);\n        (\"b\", Col.float64 [| 1.5; 2.5; 3.5 |]);\n        (\"c\", Col.string [| \"x\"; \"y\"; \"z\" |]);\n      ]\n  in\n\n  let rows, cols = shape df in\n  check_int \"rows\" 3 rows;\n  check_int \"cols\" 3 cols;\n  check_bool \"not empty\" false (is_empty df);\n\n  let names = column_names df in\n  equal ~msg:\"column names\" (list string) [ \"a\"; \"b\"; \"c\" ] names\n\nlet test_df_empty () =\n  let df = empty in\n  let rows, cols = shape df in\n  check_int \"empty rows\" 0 rows;\n  check_int \"empty cols\" 0 cols;\n  check_bool \"is empty\" true (is_empty df)\n\n(* ───── Test Column Operations ───── *)\n\nlet test_column_access () =\n  let df =\n    create\n      [\n        (\"x\", Col.int32 [| 1l; 2l; 3l |]); (\"y\", Col.float32 [| 1.0; 2.0; 3.0 |]);\n      ]\n  in\n\n  check_bool \"has column x\" 
true (has_column df \"x\");\n  check_bool \"has column y\" true (has_column df \"y\");\n  check_bool \"no column z\" false (has_column df \"z\");\n\n  match get_column df \"x\" with\n  | Some _col -> check_int \"df has 3 rows\" 3 (num_rows df)\n  | None -> fail \"column x should exist\"\n\nlet test_column_add_drop () =\n  let df = create [ (\"a\", Col.int32 [| 1l; 2l |]) ] in\n\n  let df2 = add_column df \"b\" (Col.float32 [| 1.0; 2.0 |]) in\n  check_int \"cols after add\" 2 (num_columns df2);\n  check_bool \"has new column\" true (has_column df2 \"b\");\n\n  let df3 = drop_column df2 \"a\" in\n  check_int \"cols after drop\" 1 (num_columns df3);\n  check_bool \"column dropped\" false (has_column df3 \"a\")\n\nlet test_rename_column () =\n  let df = create [ (\"old\", Col.int32 [| 1l; 2l |]) ] in\n  let df2 = rename_column df ~old_name:\"old\" ~new_name:\"new\" in\n\n  check_bool \"old name gone\" false (has_column df2 \"old\");\n  check_bool \"new name exists\" true (has_column df2 \"new\")\n\nlet test_select () =\n  let df =\n    create\n      [\n        (\"a\", Col.int32 [| 1l |]);\n        (\"b\", Col.int32 [| 2l |]);\n        (\"c\", Col.int32 [| 3l |]);\n      ]\n  in\n\n  let df2 = select df [ \"c\"; \"a\" ] in\n  equal ~msg:\"selected cols\" (list string) [ \"c\"; \"a\" ] (column_names df2);\n\n  let df3 = select ~strict:false df [ \"a\"; \"missing\"; \"c\" ] in\n  equal ~msg:\"loose select\" (list string) [ \"a\"; \"c\" ] (column_names df3)\n\n(* ───── Test Row Operations ───── *)\n\nlet test_head_tail () =\n  let df = create [ (\"x\", Col.int32 [| 1l; 2l; 3l; 4l; 5l |]) ] in\n\n  let h = head ~n:2 df in\n  check_int \"head size\" 2 (num_rows h);\n\n  let t = tail ~n:2 df in\n  check_int \"tail size\" 2 (num_rows t)\n\nlet test_slice () =\n  let df = create [ (\"x\", Col.int32 [| 0l; 1l; 2l; 3l; 4l |]) ] in\n  let s = slice df ~start:1 ~stop:4 in\n  check_int \"slice size\" 3 (num_rows s)\n\nlet test_filter () =\n  let df =\n    create\n      [\n        
(\"x\", Col.int32 [| 1l; 2l; 3l; 4l |]);\n        (\"y\", Col.string [| \"a\"; \"b\"; \"c\"; \"d\" |]);\n      ]\n  in\n\n  let mask = [| true; false; true; false |] in\n  let filtered = filter df mask in\n  check_int \"filtered rows\" 2 (num_rows filtered)\n\nlet test_mask_projection_ops () =\n  let df =\n    create [ (\"x\", Col.float32_opt [| Some 1.0; None; Some 3.0; None |]) ]\n  in\n  let head_df = head ~n:3 df in\n  check_option_bool_array \"head mask\"\n    (Some [| false; true; false |])\n    (mask_of_column head_df \"x\");\n  let slice_df = slice df ~start:1 ~stop:4 in\n  check_option_bool_array \"slice mask\"\n    (Some [| true; false; true |])\n    (mask_of_column slice_df \"x\");\n  let filtered = filter df [| false; true; false; true |] in\n  check_option_bool_array \"filter mask\"\n    (Some [| true; true |])\n    (mask_of_column filtered \"x\")\n\nlet test_concat_mask_combines () =\n  let df1 = create [ (\"x\", Col.float32_opt [| Some 1.0; None |]) ] in\n  let df2 = create [ (\"x\", Col.float32_opt [| None; Some 4.0 |]) ] in\n  let combined = concat ~axis:`Rows [ df1; df2 ] in\n  check_option_bool_array \"concat mask\"\n    (Some [| false; true; true; false |])\n    (mask_of_column combined \"x\")\n\nlet test_cast_preserves_mask () =\n  let df = create [ (\"x\", Col.float32_opt [| Some 1.0; None |]) ] in\n  let df_cast = cast_column df \"x\" Nx.float64 in\n  check_option_bool_array \"cast mask\"\n    (Some [| false; true |])\n    (mask_of_column df_cast \"x\")\n\nlet test_pct_change_has_no_mask () =\n  let df = create [ (\"x\", Col.float32_opt [| Some 1.0; Some 2.0; None |]) ] in\n  let df' = pct_change df \"x\" () in\n  let col = get_column_exn df' \"x\" in\n  check_option_bool_array \"pct_change mask\" None (Col.null_mask col)\n\nlet test_filter_by () =\n  let df =\n    create\n      [\n        (\"x\", Col.int32 [| 1l; 2l; 3l; 4l |]);\n        (\"y\", Col.float32 [| 1.0; 2.0; 3.0; 4.0 |]);\n      ]\n  in\n\n  let filtered = filter_by df Row.(map 
(int32 \"x\") ~f:(fun x -> x > 2l)) in\n  check_int \"filtered rows\" 2 (num_rows filtered)\n\nlet test_drop_duplicates () =\n  let df =\n    create\n      [\n        (\"x\", Col.int32 [| 1l; 1l; 2l; 2l; 3l |]);\n        (\"y\", Col.string [| \"a\"; \"a\"; \"b\"; \"b\"; \"c\" |]);\n      ]\n  in\n\n  let unique = drop_duplicates df in\n  check_int \"unique rows\" 3 (num_rows unique)\n\n(* ───── Test Concatenation ───── *)\n\nlet test_concat_rows () =\n  let df1 = create [ (\"x\", Col.int32 [| 1l; 2l |]) ] in\n  let df2 = create [ (\"x\", Col.int32 [| 3l; 4l |]) ] in\n\n  let combined = concat ~axis:`Rows [ df1; df2 ] in\n  check_int \"concat rows\" 4 (num_rows combined)\n\nlet test_concat_cols () =\n  let df1 = create [ (\"a\", Col.int32 [| 1l; 2l |]) ] in\n  let df2 = create [ (\"b\", Col.int32 [| 3l; 4l |]) ] in\n\n  let combined = concat ~axis:`Columns [ df1; df2 ] in\n  check_int \"concat cols\" 2 (num_columns combined)\n\n(* ───── Test Row Module ───── *)\n\nlet test_row_accessors () =\n  let df =\n    create\n      [\n        (\"i32\", Col.int32 [| 42l; 24l |]);\n        (\"f32\", Col.float32 [| 3.14; 2.71 |]);\n        (\"str\", Col.string [| \"hello\"; \"world\" |]);\n        (\"bool\", Col.bool [| true; false |]);\n      ]\n  in\n\n  (* Test by filtering based on row values *)\n  let filtered_i32 =\n    filter_by df Row.(map (int32 \"i32\") ~f:(fun x -> x = 42l))\n  in\n  check_int \"filtered by i32\" 1 (num_rows filtered_i32);\n\n  let filtered_str =\n    filter_by df Row.(map (string \"str\") ~f:(fun s -> s = \"hello\"))\n  in\n  check_int \"filtered by string\" 1 (num_rows filtered_str);\n\n  let filtered_bool = filter_by df Row.(bool \"bool\") in\n  check_int \"filtered by bool\" 1 (num_rows filtered_bool)\n\nlet test_row_map () =\n  let df = create [ (\"x\", Col.int32 [| 1l; 2l; 3l |]) ] in\n\n  (* Use with_column to create a new column *)\n  let df2 =\n    with_column df \"doubled\" Nx.int32\n      Row.(map (int32 \"x\") ~f:(fun x -> Int32.mul x 
2l))\n  in\n\n  match to_array Nx.int32 df2 \"doubled\" with\n  | Some arr -> check_bool \"mapped values\" true (arr = [| 2l; 4l; 6l |])\n  | None -> fail \"doubled column should exist\"\n\n(* ───── Test Sorting ───── *)\n\nlet test_sort () =\n  let df =\n    create\n      [\n        (\"x\", Col.int32 [| 3l; 1l; 2l |]); (\"y\", Col.string [| \"c\"; \"a\"; \"b\" |]);\n      ]\n  in\n\n  let sorted = sort_values df \"x\" in\n  match to_array Nx.int32 sorted \"x\" with\n  | Some arr -> check_bool \"sorted\" true (arr = [| 1l; 2l; 3l |])\n  | None -> fail \"column should exist\"\n\nlet test_group_by () =\n  let df =\n    create\n      [\n        (\"key\", Col.string [| \"a\"; \"b\"; \"a\"; \"b\" |]);\n        (\"val\", Col.int32 [| 1l; 2l; 3l; 4l |]);\n      ]\n  in\n\n  let groups = group_by df (Row.string \"key\") in\n  check_int \"group count\" 2 (List.length groups)\n\n(* ───── Test Aggregations ───── *)\n\nlet test_agg_float () =\n  let df = create [ (\"x\", Col.float32 [| 1.0; 2.0; 3.0; 4.0 |]) ] in\n\n  check_float \"sum\" 10.0 (Agg.sum df \"x\");\n  check_float \"mean\" 2.5 (Agg.mean df \"x\");\n  check_float \"std\" 1.118034 (Agg.std df \"x\");\n  check_option_float \"min\" (Some 1.0) (Agg.min df \"x\");\n  check_option_float \"max\" (Some 4.0) (Agg.max df \"x\");\n  check_float \"median\" 2.5 (Agg.median df \"x\")\n\nlet test_agg_int () =\n  let df = create [ (\"x\", Col.int32 [| 1l; 2l; 3l; 4l |]) ] in\n\n  check_float \"sum\" 10.0 (Agg.sum df \"x\");\n  check_float \"mean\" 2.5 (Agg.mean df \"x\");\n  check_option_float \"min\" (Some 1.0) (Agg.min df \"x\");\n  check_option_float \"max\" (Some 4.0) (Agg.max df \"x\")\n\nlet test_agg_string () =\n  let df = create [ (\"x\", Col.string [| \"b\"; \"a\"; \"c\"; \"b\" |]) ] in\n\n  check_option_string \"min\" (Some \"a\") (Agg.String.min df \"x\");\n  check_option_string \"max\" (Some \"c\") (Agg.String.max df \"x\");\n  check_string \"concat\" \"bacb\" (Agg.String.concat df \"x\" ());\n  check_int \"nunique\" 3 
(Agg.String.nunique df \"x\");\n  check_option_string \"mode\" (Some \"b\") (Agg.String.mode df \"x\")\n\nlet test_agg_bool () =\n  let df = create [ (\"x\", Col.bool [| true; false; true; true |]) ] in\n\n  check_bool \"all\" false (Agg.Bool.all df \"x\");\n  check_bool \"any\" true (Agg.Bool.any df \"x\");\n  check_int \"sum\" 3 (Agg.Bool.sum df \"x\");\n  check_float \"mean\" 0.75 (Agg.Bool.mean df \"x\")\n\nlet test_agg_cumulative () =\n  let df = create [ (\"x\", Col.int32 [| 1l; 2l; 3l |]) ] in\n\n  let df_cumsum = cumsum df \"x\" in\n  check_int \"cumsum length\" 3 (num_rows df_cumsum);\n\n  let df_diff = diff df \"x\" () in\n  check_int \"diff length\" 3 (num_rows df_diff)\n\nlet test_agg_nulls () =\n  let df = create [ (\"x\", Col.float32_opt [| Some 1.0; None; Some 3.0 |]) ] in\n\n  let nulls_col = is_null df \"x\" in\n  (match Col.to_bool_array nulls_col with\n  | Some arr ->\n      check_bool \"null detection\" true (arr.(1) = Some true);\n      check_bool \"non-null\" true (arr.(0) = Some false)\n  | None -> fail \"is_null should return bool column\");\n  check_int \"count non-null\" 2 (Agg.count df \"x\")\n\n(* ───── Test Type Conversions ───── *)\n\nlet test_to_arrays () =\n  let df =\n    create\n      [\n        (\"f32\", Col.float32 [| 1.0; 2.0 |]);\n        (\"i32\", Col.int32 [| 1l; 2l |]);\n        (\"str\", Col.string [| \"a\"; \"b\" |]);\n        (\"bool\", Col.bool [| true; false |]);\n      ]\n  in\n\n  (match to_array Nx.float32 df \"f32\" with\n  | Some arr -> check_bool \"float32 array\" true (arr = [| 1.0; 2.0 |])\n  | None -> fail \"should get float32 array\");\n\n  (match to_array Nx.int32 df \"i32\" with\n  | Some arr -> check_bool \"int32 array\" true (arr = [| 1l; 2l |])\n  | None -> fail \"should get int32 array\");\n\n  (match to_string_array df \"str\" with\n  | Some arr -> check_bool \"string array\" true (arr = [| Some \"a\"; Some \"b\" |])\n  | None -> fail \"should get string array\");\n\n  match to_bool_array df \"bool\" 
with\n  | Some arr -> check_bool \"bool array\" true (arr = [| Some true; Some false |])\n  | None -> fail \"should get bool array\"\n\nlet test_to_nx () =\n  let df =\n    create\n      [ (\"a\", Col.float32 [| 1.0; 2.0 |]); (\"b\", Col.float32 [| 3.0; 4.0 |]) ]\n  in\n\n  let tensor = to_nx df in\n  let shape = Nx.shape tensor in\n  check_bool \"to_nx shape\" true (shape = [| 2; 2 |])\n\nlet test_of_nx () =\n  (* Test basic of_nx functionality *)\n  let tensor =\n    Nx.create Nx.float32 [| 2; 3 |] [| 1.0; 2.0; 3.0; 4.0; 5.0; 6.0 |]\n  in\n  let df = of_nx tensor in\n\n  (* Just check that we get a dataframe with the right number of columns *)\n  check_int \"of_nx cols\" 3 (num_columns df);\n  check_bool \"of_nx not empty\" false (is_empty df)\n\n(* ───── Test Edge Cases ───── *)\n\nlet test_empty_operations () =\n  let df = empty in\n\n  let df2 = head df in\n  check_bool \"head of empty\" true (is_empty df2);\n\n  let df3 = filter df [||] in\n  check_bool \"filter empty\" true (is_empty df3);\n\n  let df4 = concat ~axis:`Rows [] in\n  check_bool \"concat empty list\" true (is_empty df4)\n\nlet test_single_row () =\n  let df = create [ (\"x\", Col.int32 [| 42l |]) ] in\n\n  check_int \"single row\" 1 (num_rows df);\n\n  let h = head ~n:10 df in\n  check_int \"head larger than df\" 1 (num_rows h)\n\nlet test_cast_column () =\n  let df = create [ (\"x\", Col.int32 [| 1l; 2l; 3l |]) ] in\n  let df2 = cast_column df \"x\" Nx.float32 in\n\n  (* Check that we can extract as float array after casting *)\n  match to_array Nx.float32 df2 \"x\" with\n  | Some arr -> check_bool \"cast to float32\" true (Array.length arr = 3)\n  | None -> fail \"should be able to extract as float32 after cast\"\n\n(* ───── Test Suites ───── *)\n\nlet col_tests =\n  [\n    test \"creation\" test_col_creation;\n    test \"nulls\" test_col_nulls;\n    test \"null mask\" test_col_null_mask;\n    test \"drop_nulls mask\" test_drop_nulls_preserves_data_with_mask;\n    test \"fill_nulls mask\" 
test_fill_nulls_respects_mask;\n  ]\n\nlet creation_tests =\n  [ test \"basic\" test_df_creation; test \"empty\" test_df_empty ]\n\nlet column_tests =\n  [\n    test \"access\" test_column_access;\n    test \"add_drop\" test_column_add_drop;\n    test \"rename\" test_rename_column;\n    test \"select\" test_select;\n  ]\n\n(* Test option-based accessors *)\nlet test_row_opt_accessors () =\n  let df =\n    create\n      [\n        (\"float_col\", Col.float64_opt [| Some 1.0; None; Some 3.0 |]);\n        (\"int_col\", Col.int32_opt [| Some 10l; None; Some 30l |]);\n        (\"string_col\", Col.string_opt [| Some \"a\"; None; Some \"c\" |]);\n        (\"bool_col\", Col.bool_opt [| Some true; None; Some false |]);\n      ]\n  in\n  (* Test float64_opt *)\n  let float_values =\n    map df Nx.float64\n      (Row.map (Row.float64_opt \"float_col\") ~f:(function\n        | Some v -> v\n        | None -> -1.0))\n  in\n  check_float \"float_opt row 0\" 1.0 (Nx.to_array float_values).(0);\n  check_float \"float_opt row 1 (null)\" (-1.0) (Nx.to_array float_values).(1);\n  check_float \"float_opt row 2\" 3.0 (Nx.to_array float_values).(2);\n  (* Test int32_opt *)\n  let int_values =\n    map df Nx.int32\n      (Row.map (Row.int32_opt \"int_col\") ~f:(function\n        | Some v -> v\n        | None -> -1l))\n  in\n  check_int \"int_opt row 0\" 10 (Int32.to_int (Nx.to_array int_values).(0));\n  check_int \"int_opt row 1 (null)\" (-1)\n    (Int32.to_int (Nx.to_array int_values).(1));\n  check_int \"int_opt row 2\" 30 (Int32.to_int (Nx.to_array int_values).(2))\n\nlet test_drop_nulls_helper () =\n  let df =\n    create\n      [\n        (\"a\", Col.float64_opt [| Some 1.0; None; Some 3.0; Some 4.0 |]);\n        (\"b\", Col.int32 [| 10l; 20l; 30l; 40l |]);\n      ]\n  in\n  (* Drop rows with any nulls *)\n  let cleaned = drop_nulls df in\n  check_int \"drop_nulls all\" 3 (num_rows cleaned);\n  (* Drop only checking column \"b\" (which has no nulls) *)\n  let partial = drop_nulls df 
~subset:[ \"b\" ] in\n  check_int \"drop_nulls subset\" 4 (num_rows partial)\n\nlet test_fill_null_helper () =\n  let df = create [ (\"x\", Col.float64_opt [| Some 1.0; None; Some 3.0 |]) ] in\n  let filled = fill_null df \"x\" ~with_value:(`Float 0.0) in\n  match to_array Nx.float64 filled \"x\" with\n  | Some arr ->\n      check_float \"filled 0\" 1.0 arr.(0);\n      check_float \"filled 1 (was null)\" 0.0 arr.(1);\n      check_float \"filled 2\" 3.0 arr.(2)\n  | None -> fail \"Expected float array\"\n\nlet test_fillna_replaces_nulls () =\n  let df =\n    create\n      [\n        (\"a\", Col.float32_opt [| Some 1.0; None; Some 3.0 |]);\n        (\"b\", Col.int32_opt [| Some 10l; None; Some 30l |]);\n      ]\n  in\n  let df_a = fill_null df \"a\" ~with_value:(`Float 0.0) in\n  let filled_a = get_column_exn df_a \"a\" in\n  (match Col.to_tensor Nx.float32 filled_a with\n  | Some tensor ->\n      let arr : float array = Nx.to_array tensor in\n      check_float \"filled a[0]\" 1.0 arr.(0);\n      check_float \"filled a[1]\" 0.0 arr.(1);\n      check_float \"filled a[2]\" 3.0 arr.(2);\n      check_option_bool_array \"mask cleared for a\" None (Col.null_mask filled_a)\n  | None -> fail \"Expected float32 column\");\n  let df_b = fill_null df \"b\" ~with_value:(`Int32 0l) in\n  let filled_b = get_column_exn df_b \"b\" in\n  match Col.to_tensor Nx.int32 filled_b with\n  | Some tensor ->\n      let arr : int32 array = Nx.to_array tensor in\n      check_int \"filled b[0]\" 10 (Int32.to_int arr.(0));\n      check_int \"filled b[1]\" 0 (Int32.to_int arr.(1));\n      check_int \"filled b[2]\" 30 (Int32.to_int arr.(2));\n      check_option_bool_array \"mask cleared for b\" None (Col.null_mask filled_b)\n  | None -> fail \"Expected int32 column\"\n\nlet test_null_count_helper () =\n  let df =\n    create\n      [\n        (\"a\", Col.float64_opt [| Some 1.0; None; Some 3.0 |]);\n        (\"b\", Col.int32 [| 10l; 20l; 30l |]);\n      ]\n  in\n  let col_a = get_column_exn df 
\"a\" in\n  let col_b = get_column_exn df \"b\" in\n  check_int \"null_count a\" 1 (Col.null_count col_a);\n  check_int \"null_count b\" 0 (Col.null_count col_b);\n  check_bool \"has_nulls a\" true (Col.has_nulls col_a);\n  check_bool \"has_nulls b\" false (Col.has_nulls col_b)\n\nlet test_mask_aware_aggregations () =\n  let df =\n    create\n      [ (\"values\", Col.float64_opt [| Some 1.0; None; Some 3.0; Some 5.0 |]) ]\n  in\n  (* Sum should skip the null *)\n  let sum_result = Agg.sum df \"values\" in\n  check_float \"masked sum\" 9.0 sum_result;\n  (* Mean should compute over non-null values only *)\n  let mean_result = Agg.mean df \"values\" in\n  check_float \"masked mean\" 3.0 mean_result;\n  (* Min/max should skip nulls *)\n  (match Agg.min df \"values\" with\n  | Some v -> check_float \"masked min\" 1.0 v\n  | None -> fail \"Expected Some min\");\n  match Agg.max df \"values\" with\n  | Some v -> check_float \"masked max\" 5.0 v\n  | None -> fail \"Expected Some max\"\n\nlet option_tests =\n  [\n    test \"row_opt_accessors\" test_row_opt_accessors;\n    test \"drop_nulls\" test_drop_nulls_helper;\n    test \"fill_null\" test_fill_null_helper;\n    test \"fillna\" test_fillna_replaces_nulls;\n    test \"null_count\" test_null_count_helper;\n    test \"mask_aware_agg\" test_mask_aware_aggregations;\n  ]\n\nlet mask_tests =\n  [\n    test \"projection\" test_mask_projection_ops;\n    test \"concat\" test_concat_mask_combines;\n    test \"cast\" test_cast_preserves_mask;\n    test \"pct_change\" test_pct_change_has_no_mask;\n  ]\n\nlet row_tests =\n  [\n    test \"head_tail\" test_head_tail;\n    test \"slice\" test_slice;\n    test \"filter\" test_filter;\n    test \"filter_by\" test_filter_by;\n    test \"drop_duplicates\" test_drop_duplicates;\n  ]\n\nlet concat_tests =\n  [ test \"rows\" test_concat_rows; test \"columns\" test_concat_cols ]\n\nlet row_module_tests =\n  [ test \"accessors\" test_row_accessors; test \"map\" test_row_map ]\n\nlet 
sort_group_tests = [ test \"sort\" test_sort; test \"group_by\" test_group_by ]\n\nlet agg_tests =\n  [\n    test \"float\" test_agg_float;\n    test \"int\" test_agg_int;\n    test \"string\" test_agg_string;\n    test \"bool\" test_agg_bool;\n    test \"cumulative\" test_agg_cumulative;\n    test \"nulls\" test_agg_nulls;\n  ]\n\nlet conversion_tests =\n  [\n    test \"to_arrays\" test_to_arrays;\n    test \"to_nx\" test_to_nx;\n    test \"of_nx\" test_of_nx;\n    test \"cast_column\" test_cast_column;\n  ]\n\nlet edge_tests =\n  [\n    test \"empty_operations\" test_empty_operations;\n    test \"single_row\" test_single_row;\n  ]\n\n(* ───── Test Wide Operations ───── *)\n\nlet test_map_list_product () =\n  (* Create a dataframe with 6 int32 columns *)\n  let df =\n    create\n      [\n        (\"A\", Col.int32 [| 2l; 3l; 4l |]);\n        (\"B\", Col.int32 [| 1l; 2l; 3l |]);\n        (\"C\", Col.int32 [| 3l; 1l; 2l |]);\n        (\"D\", Col.int32 [| 1l; 1l; 1l |]);\n        (\"E\", Col.int32 [| 2l; 2l; 2l |]);\n        (\"F\", Col.int32 [| 1l; 3l; 1l |]);\n      ]\n  in\n\n  (* Compute product of all 6 columns using sequence + map *)\n  let df2 =\n    with_column df \"allMul\" Nx.int32\n      Row.(\n        map\n          (sequence (List.map int32 [ \"A\"; \"B\"; \"C\"; \"D\"; \"E\"; \"F\" ]))\n          ~f:(List.fold_left Int32.mul 1l))\n  in\n\n  (* Check the results *)\n  match to_array Nx.int32 df2 \"allMul\" with\n  | Some arr ->\n      (* Row 1: 2*1*3*1*2*1 = 12 *)\n      (* Row 2: 3*2*1*1*2*3 = 36 *)\n      (* Row 3: 4*3*2*1*2*1 = 48 *)\n      check_bool \"Product of 6 columns\" true (arr = [| 12l; 36l; 48l |])\n  | None -> fail \"allMul column should exist\"\n\nlet test_sequence_sum () =\n  (* Create a dataframe with float columns *)\n  let df =\n    create\n      [\n        (\"A\", Col.float64 [| 1.0; 2.0; 3.0 |]);\n        (\"B\", Col.float64 [| 2.0; 3.0; 4.0 |]);\n        (\"C\", Col.float64 [| 3.0; 4.0; 5.0 |]);\n      ]\n  in\n\n  (* Sum all columns 
using sequence + map *)\n  let df2 =\n    with_column df \"total\" Nx.float64\n      Row.(\n        map\n          (sequence (List.map float64 [ \"A\"; \"B\"; \"C\" ]))\n          ~f:(List.fold_left ( +. ) 0.))\n  in\n\n  match to_array Nx.float64 df2 \"total\" with\n  | Some arr ->\n      check_float \"Row 1 sum\" 6.0 arr.(0);\n      check_float \"Row 2 sum\" 9.0 arr.(1);\n      check_float \"Row 3 sum\" 12.0 arr.(2)\n  | None -> fail \"total column should exist\"\n\nlet test_weighted_sum () =\n  (* Test weighted sum example *)\n  let df =\n    create\n      [\n        (\"A\", Col.float64 [| 1.0; 2.0; 3.0 |]);\n        (\"B\", Col.float64 [| 2.0; 3.0; 4.0 |]);\n        (\"C\", Col.float64 [| 3.0; 4.0; 5.0 |]);\n        (\"D\", Col.float64 [| 4.0; 5.0; 6.0 |]);\n        (\"E\", Col.float64 [| 5.0; 6.0; 7.0 |]);\n        (\"F\", Col.float64 [| 6.0; 7.0; 8.0 |]);\n      ]\n  in\n\n  let feats = [ \"A\"; \"B\"; \"C\"; \"D\"; \"E\"; \"F\" ] in\n  let weights = [ 0.2; 0.3; 0.1; 0.1; 0.1; 0.2 ] in\n\n  let df' =\n    with_column df \"score\" Nx.float64\n      Row.(\n        map\n          (sequence (List.map float64 feats))\n          ~f:(fun xs ->\n            List.fold_left2 (fun acc wi xi -> acc +. (wi *. xi)) 0. 
weights xs))\n  in\n\n  match to_array Nx.float64 df' \"score\" with\n  | Some scores ->\n      (* First row: 0.2*1 + 0.3*2 + 0.1*3 + 0.1*4 + 0.1*5 + 0.2*6 = 3.2 *)\n      check_float \"First weighted sum\" 3.2 scores.(0);\n      check_float \"Second weighted sum\" 4.2 scores.(1);\n      check_float \"Third weighted sum\" 5.2 scores.(2)\n  | None -> fail \"score column should exist\"\n\nlet test_select_columns () =\n  let df =\n    create\n      [\n        (\"name\", Col.string [| \"Alice\"; \"Bob\"; \"Charlie\" |]);\n        (\"age\", Col.int32 [| 25l; 30l; 35l |]);\n        (\"score\", Col.float64 [| 85.5; 92.0; 78.5 |]);\n        (\"active\", Col.bool [| true; false; true |]);\n        (\"height\", Col.float32 [| 1.75; 1.80; 1.70 |]);\n        (\"id\", Col.int64 [| 1L; 2L; 3L |]);\n      ]\n  in\n\n  let numeric_cols = select_columns df `Numeric in\n  let expected = [ \"age\"; \"score\"; \"height\"; \"id\" ] in\n\n  (* Sort both lists for comparison since order might vary *)\n  let sorted_numeric = List.sort String.compare numeric_cols in\n  let sorted_expected = List.sort String.compare expected in\n\n  equal ~msg:\"Numeric column names\" (list string) sorted_expected sorted_numeric;\n\n  let float_cols = List.sort String.compare (select_columns df `Float) in\n  equal ~msg:\"Float columns\" (list string) [ \"height\"; \"score\" ] float_cols;\n\n  let int_cols = List.sort String.compare (select_columns df `Int) in\n  equal ~msg:\"Int columns\" (list string) [ \"age\"; \"id\" ] int_cols;\n\n  let bool_cols = select_columns df `Bool in\n  equal ~msg:\"Bool columns\" (list string) [ \"active\" ] bool_cols;\n\n  let string_cols = select_columns df `String in\n  equal ~msg:\"String columns\" (list string) [ \"name\" ] string_cols\n\nlet test_row_helpers () =\n  let df =\n    create\n      [\n        (\"a\", Col.int32 [| 1l; 2l |]);\n        (\"b\", Col.int32 [| 3l; 4l |]);\n        (\"c\", Col.float64 [| 5.0; 6.0 |]);\n      ]\n  in\n\n  (* Test using list map with 
Row accessors + sequence *)\n  let df2 =\n    with_column df \"sum\" Nx.float64\n      Row.(\n        map\n          (sequence (List.map float64 [ \"c\" ]))\n          ~f:(fun xs ->\n            match xs with\n            | [ x ] -> x *. 2.0\n            | _ -> failwith \"Expected exactly one column\"))\n  in\n\n  match to_array Nx.float64 df2 \"sum\" with\n  | Some arr ->\n      check_float \"First doubled\" 10.0 arr.(0);\n      check_float \"Second doubled\" 12.0 arr.(1)\n  | None -> fail \"sum column should exist\"\n\nlet test_sequence_equivalence () =\n  (* Test that sequence works correctly *)\n  let df =\n    create\n      [\n        (\"x\", Col.int32 [| 1l; 2l |]);\n        (\"y\", Col.int32 [| 10l; 20l |]);\n        (\"z\", Col.int32 [| 100l; 200l |]);\n      ]\n  in\n\n  let df_seq =\n    with_column df \"sum_seq\" Nx.int32\n      Row.(\n        map\n          (sequence (List.map int32 [ \"x\"; \"y\"; \"z\" ]))\n          ~f:(List.fold_left Int32.add 0l))\n  in\n\n  match to_array Nx.int32 df_seq \"sum_seq\" with\n  | Some arr -> check_bool \"sequence results\" true (arr = [| 111l; 222l |])\n  | None -> fail \"columns should exist\"\n\nlet wide_tests =\n  [\n    test \"map via sequence\" test_map_list_product;\n    test \"sequence sum\" test_sequence_sum;\n    test \"weighted sum\" test_weighted_sum;\n    test \"select_columns\" test_select_columns;\n    test \"row helpers\" test_row_helpers;\n    test \"sequence\" test_sequence_equivalence;\n  ]\n\n(* ───── Test Ergonomic APIs ───── *)\n\nlet test_with_columns () =\n  let df =\n    create\n      [\n        (\"x\", Col.float64 [| 1.0; 2.0; 3.0 |]);\n        (\"y\", Col.float64 [| 4.0; 5.0; 6.0 |]);\n      ]\n  in\n\n  let df2 =\n    with_columns df\n      [\n        (\"z\", Col.float64 [| 7.0; 8.0; 9.0 |]);\n        (\"sum\", Col.float64 [| 5.0; 7.0; 9.0 |]);\n      ]\n  in\n\n  check_int \"columns added\" 4 (num_columns df2);\n  check_bool \"has z\" true (has_column df2 \"z\");\n  check_bool \"has sum\" 
true (has_column df2 \"sum\")\n\nlet test_column_selectors () =\n  let df =\n    create\n      [\n        (\"feat_1\", Col.float64 [| 1.0 |]);\n        (\"feat_2\", Col.float64 [| 2.0 |]);\n        (\"id\", Col.int32 [| 1l |]);\n        (\"name\", Col.string [| \"test\" |]);\n        (\"score_a\", Col.float64 [| 3.0 |]);\n        (\"score_b\", Col.float64 [| 4.0 |]);\n      ]\n  in\n\n  (* Test select_columns *)\n  let numeric = select_columns df `Numeric in\n  check_int \"numeric columns\" 5 (List.length numeric);\n\n  let strings = select_columns df `String in\n  equal ~msg:\"string columns\" (list string) [ \"name\" ] strings\n\nlet test_rowagg_sum () =\n  let df =\n    create\n      [\n        (\"a\", Col.float64_opt [| Some 1.0; Some 2.0; None |]);\n        (\"b\", Col.float64 [| 3.0; 4.0; 5.0 |]);\n        (\"c\", Col.int32 [| 5l; 6l; 7l |]);\n      ]\n  in\n\n  (* Test sum with skipna=true (default) *)\n  let sum_col = Agg.row_sum df ~names:[ \"a\"; \"b\"; \"c\" ] in\n  let df2 = add_column df \"row_sum\" sum_col in\n\n  match to_array Nx.float64 df2 \"row_sum\" with\n  | Some arr ->\n      check_float \"Row 0 sum\" 9.0 arr.(0);\n      (* 1 + 3 + 5 *)\n      check_float \"Row 1 sum\" 12.0 arr.(1);\n      (* 2 + 4 + 6 *)\n      check_float \"Row 2 sum\" 12.0 arr.(2)\n      (* None + 5 + 7, missing skipped *)\n  | None -> fail \"row_sum should exist\"\n\nlet rowagg_skipna_fixture () =\n  create\n    [\n      (\"float_opt\", Col.float64_opt [| Some 1.0; None; Some 5.0 |]);\n      (\"int_opt\", Col.int32_opt [| Some 2l; Some 3l; None |]);\n      (\"baseline\", Col.float64 [| 10.0; 20.0; 30.0 |]);\n    ]\n\nlet unpack_float64_column col label =\n  match Col.to_tensor Nx.float64 col with\n  | Some tensor -> (Nx.to_array tensor : float array)\n  | None -> fail (\"expected float64 column for \" ^ label)\n\nlet test_rowagg_sum_skipna_false () =\n  let df = rowagg_skipna_fixture () in\n  let names = [ \"float_opt\"; \"int_opt\"; \"baseline\" ] in\n  let 
sum_skipna_true = Agg.row_sum df ~names in\n  let sum_skipna_false = Agg.row_sum df ~skipna:false ~names in\n  let true_arr =\n    unpack_float64_column sum_skipna_true \"Agg.row_sum skipna=true\"\n  in\n  let false_arr =\n    unpack_float64_column sum_skipna_false \"Agg.row_sum skipna=false\"\n  in\n  check_float \"skipna=true row0 sum\" 13.0 true_arr.(0);\n  check_float \"skipna=true row1 sum\" 23.0 true_arr.(1);\n  check_float \"skipna=true row2 sum\" 35.0 true_arr.(2);\n  check_float \"skipna=false row0 sum\" 13.0 false_arr.(0);\n  check_bool \"skipna=false row1 sum is nan\" true (Float.is_nan false_arr.(1));\n  check_bool \"skipna=false row2 sum is nan\" true (Float.is_nan false_arr.(2))\n\nlet test_rowagg_mean_skipna_false () =\n  let df = rowagg_skipna_fixture () in\n  let names = [ \"float_opt\"; \"int_opt\"; \"baseline\" ] in\n  let mean_skipna_true = Agg.row_mean df ~names in\n  let mean_skipna_false = Agg.row_mean df ~skipna:false ~names in\n  let true_arr =\n    unpack_float64_column mean_skipna_true \"Agg.row_mean skipna=true\"\n  in\n  let false_arr =\n    unpack_float64_column mean_skipna_false \"Agg.row_mean skipna=false\"\n  in\n  check_float \"skipna=true row0 mean\" (13. /. 3.) true_arr.(0);\n  check_float \"skipna=true row1 mean\" 11.5 true_arr.(1);\n  check_float \"skipna=true row2 mean\" 17.5 true_arr.(2);\n  check_float \"skipna=false row0 mean\" (13. /. 3.) 
false_arr.(0);\n  check_bool \"skipna=false row1 mean is nan\" true (Float.is_nan false_arr.(1));\n  check_bool \"skipna=false row2 mean is nan\" true (Float.is_nan false_arr.(2))\n\nlet test_rowagg_min_skipna_false () =\n  let df = rowagg_skipna_fixture () in\n  let names = [ \"float_opt\"; \"int_opt\"; \"baseline\" ] in\n  let min_skipna_true = Agg.row_min df ~names in\n  let min_skipna_false = Agg.row_min df ~skipna:false ~names in\n  let true_arr =\n    unpack_float64_column min_skipna_true \"Agg.row_min skipna=true\"\n  in\n  let false_arr =\n    unpack_float64_column min_skipna_false \"Agg.row_min skipna=false\"\n  in\n  check_float \"skipna=true row0 min\" 1.0 true_arr.(0);\n  check_float \"skipna=true row1 min\" 3.0 true_arr.(1);\n  check_float \"skipna=true row2 min\" 5.0 true_arr.(2);\n  check_float \"skipna=false row0 min\" 1.0 false_arr.(0);\n  check_bool \"skipna=false row1 min is nan\" true (Float.is_nan false_arr.(1));\n  check_bool \"skipna=false row2 min is nan\" true (Float.is_nan false_arr.(2))\n\nlet test_rowagg_max_skipna_false () =\n  let df = rowagg_skipna_fixture () in\n  let names = [ \"float_opt\"; \"int_opt\"; \"baseline\" ] in\n  let max_skipna_true = Agg.row_max df ~names in\n  let max_skipna_false = Agg.row_max df ~skipna:false ~names in\n  let true_arr =\n    unpack_float64_column max_skipna_true \"Agg.row_max skipna=true\"\n  in\n  let false_arr =\n    unpack_float64_column max_skipna_false \"Agg.row_max skipna=false\"\n  in\n  check_float \"skipna=true row0 max\" 10.0 true_arr.(0);\n  check_float \"skipna=true row1 max\" 20.0 true_arr.(1);\n  check_float \"skipna=true row2 max\" 30.0 true_arr.(2);\n  check_float \"skipna=false row0 max\" 10.0 false_arr.(0);\n  check_bool \"skipna=false row1 max is nan\" true (Float.is_nan false_arr.(1));\n  check_bool \"skipna=false row2 max is nan\" true (Float.is_nan false_arr.(2))\n\nlet test_rowagg_bool_reducers () =\n  let df =\n    create\n      [\n        (\"flag_a\", Col.bool_opt [| Some 
true; Some true; Some false |]);\n        (\"flag_b\", Col.bool_opt [| Some true; Some false; Some false |]);\n      ]\n  in\n  let all_col = Agg.row_all df ~names:[ \"flag_a\"; \"flag_b\" ] in\n  let any_col = Agg.row_any df ~names:[ \"flag_a\"; \"flag_b\" ] in\n  match (Col.to_bool_array all_col, Col.to_bool_array any_col) with\n  | Some all_arr, Some any_arr ->\n      let expect_bool msg expected = function\n        | Some value -> check_bool msg expected value\n        | None -> fail (msg ^ \" should be Some\")\n      in\n      expect_bool \"Agg.row_all row0\" true all_arr.(0);\n      expect_bool \"Agg.row_all row1\" false all_arr.(1);\n      expect_bool \"Agg.row_all row2\" false all_arr.(2);\n      expect_bool \"Agg.row_any row0\" true any_arr.(0);\n      expect_bool \"Agg.row_any row1\" true any_arr.(1);\n      expect_bool \"Agg.row_any row2\" false any_arr.(2)\n  | _ -> fail \"expected boolean option columns\"\n\nlet test_row_number () =\n  let df =\n    create\n      [\n        (\"i32\", Col.int32 [| 1l; 2l; 3l |]);\n        (\"i64\", Col.int64 [| 10L; 20L; 30L |]);\n        (\"f32\", Col.float32 [| 1.5; 2.5; 3.5 |]);\n        (\"f64\", Col.float64 [| 100.0; 200.0; 300.0 |]);\n      ]\n  in\n\n  (* Test Row.number coerces all numeric types to float *)\n  let df2 = with_column df \"i32_as_float\" Nx.float64 Row.(number \"i32\") in\n  match to_array Nx.float64 df2 \"i32_as_float\" with\n  | Some arr ->\n      check_float \"Int32 as float\" 1.0 arr.(0);\n      check_float \"Int32 as float\" 2.0 arr.(1);\n      check_float \"Int32 as float\" 3.0 arr.(2)\n  | None -> fail \"i32_as_float should exist\"\n\nlet test_row_fold_list () =\n  let df =\n    create\n      [\n        (\"a\", Col.float64 [| 1.0; 2.0; 3.0 |]);\n        (\"b\", Col.float64 [| 10.0; 20.0; 30.0 |]);\n        (\"c\", Col.float64 [| 100.0; 200.0; 300.0 |]);\n      ]\n  in\n\n  (* Test fold_list to compute sum without intermediate list *)\n  let df2 =\n    with_column df \"sum\" Nx.float64\n      
Row.(fold_list (List.map number [ \"a\"; \"b\"; \"c\" ]) ~init:0. ~f:( +. ))\n  in\n\n  match to_array Nx.float64 df2 \"sum\" with\n  | Some arr ->\n      check_float \"Row 0 sum\" 111.0 arr.(0);\n      check_float \"Row 1 sum\" 222.0 arr.(1);\n      check_float \"Row 2 sum\" 333.0 arr.(2)\n  | None -> fail \"sum should exist\"\n\nlet test_columns_except () =\n  let df =\n    create\n      [\n        (\"keep1\", Col.int32 [| 1l |]);\n        (\"drop1\", Col.int32 [| 2l |]);\n        (\"keep2\", Col.int32 [| 3l |]);\n        (\"drop2\", Col.int32 [| 4l |]);\n        (\"keep3\", Col.int32 [| 5l |]);\n      ]\n  in\n\n  let drop_set = [ \"drop1\"; \"drop2\" ] in\n  let kept =\n    List.filter (fun n -> not (List.mem n drop_set)) (column_names df)\n  in\n  equal ~msg:\"columns except\" (list string)\n    [ \"keep1\"; \"keep2\"; \"keep3\" ]\n    (List.sort String.compare kept)\n\nlet test_rowagg_dot () =\n  let df =\n    create\n      [\n        (\"x\", Col.float64 [| 1.0; 2.0; 3.0 |]);\n        (\"y\", Col.float64 [| 4.0; 5.0; 6.0 |]);\n        (\"z\", Col.float64 [| 7.0; 8.0; 9.0 |]);\n      ]\n  in\n\n  let weights = [| 0.2; 0.3; 0.5 |] in\n  let score = Agg.dot df ~names:[ \"x\"; \"y\"; \"z\" ] ~weights in\n  let df2 = add_column df \"score\" score in\n\n  match to_array Nx.float64 df2 \"score\" with\n  | Some arr ->\n      check_float \"Row 0 weighted\" 4.9 arr.(0);\n      (* 0.2*1 + 0.3*4 + 0.5*7 = 0.2 + 1.2 + 3.5 = 4.9 *)\n      check_float \"Row 1 weighted\" 5.9 arr.(1);\n      (* 0.2*2 + 0.3*5 + 0.5*8 = 0.4 + 1.5 + 4.0 = 5.9 *)\n      check_float \"Row 2 weighted\" 6.9 arr.(2)\n      (* 0.2*3 + 0.3*6 + 0.5*9 = 0.6 + 1.8 + 4.5 = 6.9 *)\n  | None -> fail \"score should exist\"\n\nlet test_join_inner () =\n  let df1 =\n    create\n      [\n        (\"id\", Col.int32 [| 1l; 2l; 3l |]);\n        (\"name\", Col.string [| \"Alice\"; \"Bob\"; \"Charlie\" |]);\n      ]\n  in\n  let df2 =\n    create\n      [\n        (\"id\", Col.int32 [| 2l; 3l; 4l |]);\n        
(\"score\", Col.float64 [| 85.0; 90.0; 95.0 |]);\n      ]\n  in\n\n  let result = join df1 df2 ~on:\"id\" ~how:`Inner () in\n  check_int \"inner join rows\" 2 (num_rows result);\n  check_bool \"has name column\" true (has_column result \"name\");\n  check_bool \"has score column\" true (has_column result \"score\")\n\nlet test_join_left () =\n  let df1 =\n    create\n      [\n        (\"key\", Col.string [| \"a\"; \"b\"; \"c\" |]);\n        (\"val1\", Col.int32 [| 1l; 2l; 3l |]);\n      ]\n  in\n  let df2 =\n    create\n      [\n        (\"key\", Col.string [| \"b\"; \"c\"; \"d\" |]);\n        (\"val2\", Col.int32 [| 20l; 30l; 40l |]);\n      ]\n  in\n\n  let result = join df1 df2 ~on:\"key\" ~how:`Left () in\n  check_int \"left join rows\" 3 (num_rows result);\n\n  (* Check that all left keys are present *)\n  match to_string_array result \"key\" with\n  | Some arr ->\n      check_option_string \"first key\" (Some \"a\") arr.(0);\n      check_option_string \"second key\" (Some \"b\") arr.(1);\n      check_option_string \"third key\" (Some \"c\") arr.(2)\n  | None -> fail \"key column should exist\"\n\nlet test_merge () =\n  let df1 =\n    create\n      [ (\"id\", Col.int32 [| 1l; 2l |]); (\"x\", Col.float64 [| 10.0; 20.0 |]) ]\n  in\n  let df2 =\n    create\n      [\n        (\"code\", Col.int32 [| 1l; 2l |]); (\"y\", Col.float64 [| 100.0; 200.0 |]);\n      ]\n  in\n\n  let result = join df1 df2 ~on:\"id\" ~right_on:\"code\" ~how:`Inner () in\n  check_int \"merge rows\" 2 (num_rows result);\n  check_bool \"has x column\" true (has_column result \"x\");\n  check_bool \"has y column\" true (has_column result \"y\")\n\nlet test_join_preserves_null_masks () =\n  let left =\n    create\n      [\n        (\"id\", Col.int32 [| 1l; 2l |]);\n        (\"left_val\", Col.int32_opt [| Some 10l; None |]);\n      ]\n  in\n  let right =\n    create\n      [\n        (\"id\", Col.int32 [| 1l |]); (\"right_val\", Col.int32_opt [| Some 100l |]);\n      ]\n  in\n  let joined = join 
left right ~on:\"id\" ~how:`Left () in\n  check_option_bool_array \"left mask preserved\"\n    (Some [| false; true |])\n    (mask_of_column joined \"left_val\");\n  check_option_bool_array \"right mask populated\"\n    (Some [| false; true |])\n    (mask_of_column joined \"right_val\")\n\nlet test_pivot () =\n  let df =\n    create\n      [\n        (\"date\", Col.string [| \"2024-01\"; \"2024-01\"; \"2024-02\"; \"2024-02\" |]);\n        (\"product\", Col.string [| \"A\"; \"B\"; \"A\"; \"B\" |]);\n        (\"sales\", Col.float64 [| 100.0; 150.0; 120.0; 180.0 |]);\n      ]\n  in\n\n  let pivoted =\n    pivot df ~index:\"date\" ~columns:\"product\" ~values:\"sales\" ~agg_func:`Sum ()\n  in\n  check_int \"pivot rows\" 2 (num_rows pivoted);\n  check_bool \"has A column\" true (has_column pivoted \"A\");\n  check_bool \"has B column\" true (has_column pivoted \"B\");\n\n  (* Check aggregated values *)\n  match (to_array Nx.float64 pivoted \"A\", to_array Nx.float64 pivoted \"B\") with\n  | Some a_vals, Some b_vals ->\n      check_float \"Jan A sales\" 100.0 a_vals.(0);\n      check_float \"Jan B sales\" 150.0 b_vals.(0);\n      check_float \"Feb A sales\" 120.0 a_vals.(1);\n      check_float \"Feb B sales\" 180.0 b_vals.(1)\n  | _ -> fail \"pivot columns should exist\"\n\nlet test_pivot_numeric_index () =\n  let df =\n    create\n      [\n        (\"id\", Col.int32 [| 1l; 1l; 2l; 2l |]);\n        (\"category\", Col.string [| \"A\"; \"B\"; \"A\"; \"B\" |]);\n        (\"value\", Col.float64 [| 1.0; 2.0; 3.0; 4.0 |]);\n      ]\n  in\n  let pivoted =\n    pivot df ~index:\"id\" ~columns:\"category\" ~values:\"value\" ~agg_func:`Sum ()\n  in\n  check_int \"numeric pivot rows\" 2 (num_rows pivoted);\n  (match Col.to_string_array (get_column_exn pivoted \"id\") with\n  | Some arr ->\n      check_option_string \"first id\" (Some \"1\") arr.(0);\n      check_option_string \"second id\" (Some \"2\") arr.(1)\n  | None -> fail \"expected string index column\");\n  match (to_array 
Nx.float64 pivoted \"A\", to_array Nx.float64 pivoted \"B\") with\n  | Some a_vals, Some b_vals ->\n      check_float \"id=1 A sum\" 1.0 a_vals.(0);\n      check_float \"id=2 A sum\" 3.0 a_vals.(1);\n      check_float \"id=1 B sum\" 2.0 b_vals.(0);\n      check_float \"id=2 B sum\" 4.0 b_vals.(1)\n  | _ -> fail \"pivot numeric columns should exist\"\n\nlet test_melt () =\n  let df =\n    create\n      [\n        (\"id\", Col.int32 [| 1l; 2l |]);\n        (\"A\", Col.float64 [| 10.0; 20.0 |]);\n        (\"B\", Col.float64 [| 30.0; 40.0 |]);\n        (\"C\", Col.float64 [| 50.0; 60.0 |]);\n      ]\n  in\n\n  let melted = melt df ~id_vars:[ \"id\" ] ~value_vars:[ \"A\"; \"B\"; \"C\" ] () in\n  check_int \"melt rows\" 6 (num_rows melted);\n  (* 2 rows * 3 columns = 6 *)\n  check_bool \"has id column\" true (has_column melted \"id\");\n  check_bool \"has variable column\" true (has_column melted \"variable\");\n  check_bool \"has value column\" true (has_column melted \"value\");\n\n  (* Check melted structure *)\n  match to_string_array melted \"variable\" with\n  | Some vars ->\n      check_option_string \"first var\" (Some \"A\") vars.(0);\n      check_option_string \"second var\" (Some \"B\") vars.(1);\n      check_option_string \"third var\" (Some \"C\") vars.(2);\n      check_option_string \"fourth var\" (Some \"A\") vars.(3);\n      check_option_string \"fifth var\" (Some \"B\") vars.(4);\n      check_option_string \"sixth var\" (Some \"C\") vars.(5)\n  | None -> fail \"variable column should exist\"\n\nlet test_join_with_suffixes () =\n  let df1 =\n    create\n      [\n        (\"id\", Col.int32 [| 1l; 2l |]); (\"value\", Col.float64 [| 10.0; 20.0 |]);\n      ]\n  in\n  let df2 =\n    create\n      [\n        (\"id\", Col.int32 [| 1l; 2l |]); (\"value\", Col.float64 [| 100.0; 200.0 |]);\n      ]\n  in\n\n  let result =\n    join df1 df2 ~on:\"id\" ~how:`Inner ~suffixes:(\"_left\", \"_right\") ()\n  in\n  check_bool \"has value_left column\" true (has_column 
result \"value_left\");\n  check_bool \"has value_right column\" true (has_column result \"value_right\");\n\n  match\n    ( to_array Nx.float64 result \"value_left\",\n      to_array Nx.float64 result \"value_right\" )\n  with\n  | Some left, Some right ->\n      check_float \"left value 1\" 10.0 left.(0);\n      check_float \"right value 1\" 100.0 right.(0);\n      check_float \"left value 2\" 20.0 left.(1);\n      check_float \"right value 2\" 200.0 right.(1)\n  | _ -> fail \"value columns should exist\"\n\nlet join_reshape_tests =\n  [\n    test \"join inner\" test_join_inner;\n    test \"join left\" test_join_left;\n    test \"merge\" test_merge;\n    test \"join preserves masks\" test_join_preserves_null_masks;\n    test \"pivot\" test_pivot;\n    test \"pivot numeric index\" test_pivot_numeric_index;\n    test \"melt\" test_melt;\n    test \"join with suffixes\" test_join_with_suffixes;\n  ]\n\nlet ergonomic_tests =\n  [\n    test \"with_columns\" test_with_columns;\n    test \"column_selectors\" test_column_selectors;\n    test \"columns_except\" test_columns_except;\n    test \"Row.number\" test_row_number;\n    test \"Row.fold_list\" test_row_fold_list;\n    test \"Agg.row_sum\" test_rowagg_sum;\n    test \"rowagg_sum_skipna_false\" test_rowagg_sum_skipna_false;\n    test \"rowagg_mean_skipna_false\" test_rowagg_mean_skipna_false;\n    test \"rowagg_min_skipna_false\" test_rowagg_min_skipna_false;\n    test \"rowagg_max_skipna_false\" test_rowagg_max_skipna_false;\n    test \"rowagg_bool_reducers\" test_rowagg_bool_reducers;\n    test \"Agg.dot\" test_rowagg_dot;\n  ]\n\nlet () =\n  run \"Talon\"\n    [\n      group \"Col\" col_tests;\n      group \"Creation\" creation_tests;\n      group \"Columns\" column_tests;\n      group \"Null masks\" mask_tests;\n      group \"Option accessors\" option_tests;\n      group \"Rows\" row_tests;\n      group \"Concatenation\" concat_tests;\n      group \"Row module\" row_module_tests;\n      group \"Sort & Group\" 
sort_group_tests;\n      group \"Aggregations\" agg_tests;\n      group \"Conversions\" conversion_tests;\n      group \"Edge cases\" edge_tests;\n      group \"Wide operations\" wide_tests;\n      group \"Ergonomic APIs\" ergonomic_tests;\n      group \"Join & Reshape\" join_reshape_tests;\n    ]\n"
  },
  {
    "path": "packages/talon/test/test_talon_csv.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Talon\nopen Windtrap\n\nlet check_int msg = equal ~msg int\nlet check_bool msg = equal ~msg bool\n\nlet test_of_string_basic () =\n  let csv = \"name,age,score\\nAlice,25,85.5\\nBob,30,92.0\\nCharlie,35,78.5\" in\n  let df = Talon_csv.of_string csv in\n  check_int \"rows\" 3 (num_rows df);\n  check_int \"cols\" 3 (num_columns df);\n  let names = column_names df in\n  equal ~msg:\"column names\" (list string) [ \"name\"; \"age\"; \"score\" ] names\n\nlet test_of_string_custom_sep () =\n  let csv = \"name;age\\nAlice;25\\nBob;30\" in\n  let df = Talon_csv.of_string ~sep:';' csv in\n  check_int \"rows\" 2 (num_rows df);\n  check_int \"cols\" 2 (num_columns df)\n\nlet test_of_string_with_nulls () =\n  let csv = \"name,value\\nAlice,1.5\\nBob,NA\\nCharlie,2.5\" in\n  let df = Talon_csv.of_string csv in\n  check_int \"rows\" 3 (num_rows df);\n  let col = get_column_exn df \"value\" in\n  check_bool \"has nulls\" true (Col.has_nulls col);\n  check_int \"null count\" 1 (Col.null_count col)\n\nlet test_of_string_dtype_spec () =\n  let csv = \"id,flag\\n1,true\\n2,false\\n3,true\" in\n  let dtype_spec = [ (\"id\", `Int32); (\"flag\", `Bool) ] in\n  let df = Talon_csv.of_string ~dtype_spec csv in\n  check_int \"rows\" 3 (num_rows df);\n  match to_bool_array df \"flag\" with\n  | Some arr ->\n      check_bool \"bool values\" true\n        (arr = [| Some true; Some false; Some true |])\n  | None -> fail \"flag column should be bool\"\n\nlet test_of_string_header_only () =\n  let csv = \"col1,col2,col3\" in\n  let df = Talon_csv.of_string csv in\n  check_int \"empty df rows\" 0 (num_rows df);\n  check_int \"empty df cols\" 3 (num_columns df)\n\nlet test_to_string_basic () =\n  let df =\n    create\n      
[\n        (\"name\", Col.string [| \"Alice\"; \"Bob\" |]);\n        (\"age\", Col.int32 [| 25l; 30l |]);\n      ]\n  in\n  let csv = Talon_csv.to_string df in\n  check_bool \"has header\" true (String.contains csv 'n');\n  check_bool \"has name\" true (String.contains csv 'A')\n\nlet test_to_string_custom_sep () =\n  let df =\n    create [ (\"a\", Col.int32 [| 1l; 2l |]); (\"b\", Col.int32 [| 3l; 4l |]) ]\n  in\n  let csv = Talon_csv.to_string ~sep:';' df in\n  check_bool \"has semicolon\" true (String.contains csv ';');\n  check_bool \"no comma\" false (String.contains csv ',')\n\nlet test_to_string_with_nulls () =\n  let df =\n    create [ (\"values\", Col.float32_opt [| Some 1.; None; Some 3. |]) ]\n  in\n  let csv = Talon_csv.to_string ~na_repr:\"NULL\" df in\n  check_bool \"has NULL\" true (String.contains csv 'N')\n\nlet test_round_trip () =\n  let df1 =\n    create\n      [\n        (\"id\", Col.int32 [| 1l; 2l; 3l |]);\n        (\"value\", Col.float32 [| 1.5; 2.5; 3.5 |]);\n        (\"label\", Col.string [| \"a\"; \"b\"; \"c\" |]);\n      ]\n  in\n  let csv = Talon_csv.to_string df1 in\n  let df2 = Talon_csv.of_string csv in\n  check_int \"same rows\" (num_rows df1) (num_rows df2);\n  check_int \"same cols\" (num_columns df1) (num_columns df2);\n  let names1 = column_names df1 in\n  let names2 = column_names df2 in\n  equal ~msg:\"same names\" (list string) names1 names2\n\nlet test_auto_detect_dtypes () =\n  let csv =\n    \"int_col,float_col,bool_col,str_col\\n\\\n     42,3.14,true,hello\\n\\\n     100,2.71,false,world\"\n  in\n  let df = Talon_csv.of_string csv in\n  check_int \"rows\" 2 (num_rows df);\n  match to_array Nx.int32 df \"int_col\" with\n  | Some arr -> check_bool \"int values\" true (arr = [| 42l; 100l |])\n  | None -> fail \"int_col should be int32\"\n\nlet test_mixed_nulls () =\n  let csv = \"a,b,c\\n1,2.5,foo\\n,NA,\\n3,4.5,bar\" in\n  let df = Talon_csv.of_string csv in\n  check_int \"rows\" 3 (num_rows df);\n  let col_a = 
get_column_exn df \"a\" in\n  let col_b = get_column_exn df \"b\" in\n  let col_c = get_column_exn df \"c\" in\n  check_bool \"col a has nulls\" true (Col.has_nulls col_a);\n  check_bool \"col b has nulls\" true (Col.has_nulls col_b);\n  check_bool \"col c has nulls\" true (Col.has_nulls col_c)\n\nlet test_big_int_detection () =\n  let csv = \"id\\n9223372036854775806\" in\n  let df = Talon_csv.of_string csv in\n  check_int \"rows\" 1 (num_rows df);\n  let col = get_column_exn df \"id\" in\n  let is_int64 = Col.to_tensor Nx.int64 col <> None in\n  check_bool \"big int detected as Int64\" true is_int64;\n  match to_array Nx.int64 df \"id\" with\n  | Some arr ->\n      check_int \"array length\" 1 (Array.length arr);\n      check_bool \"correct value\" true (arr.(0) = 9223372036854775806L)\n  | None -> fail \"to_array Nx.int64 should return Some for Int64 column\"\n\nlet reading_tests =\n  [\n    test \"basic\" test_of_string_basic;\n    test \"custom_sep\" test_of_string_custom_sep;\n    test \"with_nulls\" test_of_string_with_nulls;\n    test \"dtype_spec\" test_of_string_dtype_spec;\n    test \"header_only\" test_of_string_header_only;\n    test \"auto_detect\" test_auto_detect_dtypes;\n    test \"mixed_nulls\" test_mixed_nulls;\n    test \"big_int_detection\" test_big_int_detection;\n  ]\n\nlet writing_tests =\n  [\n    test \"basic\" test_to_string_basic;\n    test \"custom_sep\" test_to_string_custom_sep;\n    test \"with_nulls\" test_to_string_with_nulls;\n  ]\n\nlet integration_tests = [ test \"round_trip\" test_round_trip ]\n\nlet () =\n  run \"Talon_csv\"\n    [\n      group \"Reading\" reading_tests;\n      group \"Writing\" writing_tests;\n      group \"Integration\" integration_tests;\n    ]\n"
  },
  {
    "path": "packages/tolk/.gitignore",
    "content": "AGENTS.md"
  },
  {
    "path": "packages/tolk/.ocamlformat",
    "content": "disable\n"
  },
  {
    "path": "packages/tolk/LICENSE-tinygrad",
    "content": "Copyright (c) 2024, the tiny corp\n\nPermission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n"
  },
  {
    "path": "packages/tolk/README.md",
    "content": "# tolk\n\nA port of [tinygrad](https://github.com/tinygrad/tinygrad) in OCaml. A minimal, readable ML compiler for the [Raven](https://github.com/raven-ml/raven) ecosystem.\n\n## Build\n\n```bash\ndune build\ndune test\n```\n\n## Reference\n\n- [tinygrad](https://github.com/tinygrad/tinygrad) — the original project this is based on\n\n## License\n\nISC\n"
  },
  {
    "path": "packages/tolk/doc/dune",
    "content": "(mdx\n (files *.md)\n (package tolk)\n (libraries tolk tolk_ir))\n"
  },
  {
    "path": "packages/tolk/doc/index.md",
    "content": "# tolk\n\nTolk is a port of [tinygrad](https://github.com/tinygrad/tinygrad) in OCaml — a minimal compiler for GPU tensor computation. It takes tensor-level computation graphs, optimizes them, and emits efficient kernels for CPU (via Clang), Metal, CUDA, and OpenCL backends.\n\n## Features\n\n- **Three-level IR** — tensor graphs, kernel DAGs, and linear programs with shared conventions (sub-axes, tagging, map\\_children)\n- **Symbolic simplification** — three-phase algebraic pipeline for index expressions with div/mod folding\n- **Hardware decompositions** — transcendentals, int64 emulation, float type promotion, and late op rewrites\n- **Codegen pipeline** — range simplification, GPU dimension mapping, beam search optimization, and linearization\n- **Schedule pipeline** — tensor-to-kernel graph transformation with range analysis and multi-device sharding\n- **JIT integration** — used by Rune's `jit` transformation to compile and dispatch kernels at runtime\n\n## Architecture\n\nTolk follows a layered compilation pipeline:\n\n1. **Tensor IR** — high-level operation graph (reductions, reshapes, movement ops)\n2. **Schedule** — transforms tensor graphs into kernel graphs via rangeify and indexing\n3. **Codegen** — optimizes kernel structure (range simplification, GPU dims, beam search)\n4. **Lowering** — lowers to linear program IR (devectorization, expansion, decompositions)\n5. **Renderer** — emits backend-specific source code (C, Metal, CUDA, OpenCL)\n6. **Runtime** — compiles and dispatches kernels on target devices\n\n## Libraries\n\n- `tolk` — codegen pipeline, renderer, device abstraction, and runtime\n- `tolk.ir` — IR definitions (tensor, kernel, program), symbolic simplification, decompositions\n- `tolk.cpu` — CPU backend (Clang compilation, ELF loading)\n- `tolk.metal` — Metal backend (macOS GPU)\n"
  },
  {
    "path": "packages/tolk/lib/codegen/codegen.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\n(* Codegen entry point — optimization dispatch + lowering. *)\n\nopen Tolk_ir\nmodule K = Kernel\n\n(* Environment *)\n\nlet debug = Helpers.getenv \"DEBUG\" 0\nlet beam = Helpers.getenv \"BEAM\" 0\nlet beam_estimate = Helpers.getenv \"BEAM_ESTIMATE\" 1\nlet noopt = Helpers.getenv \"NOOPT\" 0\n\n(* Allocate raw buffers for beam search from the kernel's Param nodes. *)\nlet make_beam_search device beam_width =\n  Option.map (fun dev k ->\n    let rawbufs = List.map (fun p -> match K.view p with\n      | Param { dtype = pty; _ } ->\n          Device.create_buffer ~size:(Dtype.Ptr.size pty)\n            ~dtype:(Dtype.Val (Dtype.Ptr.base pty)) dev\n      | _ -> assert false)\n      (Postrange.bufs_from_ast (Postrange.ast k)) in\n    Search.beam_search ~allow_test_size:(beam_estimate <> 0)\n      k rawbufs beam_width dev)\n    device\n\n(* Optimize and lower a kernel AST to a form ready for linearization.\n   When [optimize] is true, runs load collapse, range splitting,\n   symbolic simplification, range tightening, and dispatches to\n   beam search or hand-coded optimizations via Postrange. 
*)\nlet full_rewrite_to_sink ?(optimize = true) ?device ren sink =\n  let sink =\n    if optimize then begin\n      let sink = Simplify.pm_load_collapse sink in\n      let sink = Simplify.pm_split_ranges sink in\n      let sink = K.graph_rewrite ~name:\"initial symbolic\"\n        (K.first_match [Symbolic.sym; Simplify.flatten_range]) sink in\n      let sink = Simplify.pm_simplify_ranges sink in\n      let beam_search =\n        if beam >= 1 then make_beam_search device beam else None in\n      let hand_coded_optimizations =\n        if noopt = 0 then Some Heuristic.hand_coded_optimizations\n        else None in\n      Postrange.apply_opts ?beam_search ?hand_coded_optimizations sink ren\n    end else sink\n  in\n  Codegen_lower.lower ren sink\n\n(* Full pipeline: optimize + lower + linearize + render + compile. *)\nlet get_program ?(optimize = true) ?device dev ren sink =\n  let sink = full_rewrite_to_sink ~optimize ?device ren sink in\n  let ki = match K.view sink with\n    | Sink { kernel_info = Some ki; _ } -> ki\n    | _ -> { K.name = \"kernel\"; axis_kinds = []; dont_use_locals = false;\n             applied_opts = []; opts_to_apply = None; estimates = None } in\n  let program = Linearizer.linearize sink in\n  let estimates = match ki.estimates with\n    | Some e -> Program_spec.Estimates.of_kernel e\n    | None -> Program_spec.Estimates.of_program program in\n  let compiled =\n    Device.compile_program dev ~name:ki.name ~applied_opts:ki.applied_opts\n      ~estimates program in\n  if debug >= 3 && ki.applied_opts <> [] then\n    Printf.eprintf \"%s\\n%!\"\n      (String.concat \", \" (List.map K.Opt.to_string ki.applied_opts));\n  if debug >= 4 then\n    Printf.eprintf \"%s\\n%!\" (Program_spec.src compiled);\n  compiled\n"
  },
  {
    "path": "packages/tolk/lib/codegen/codegen.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\n(** Codegen entry point — optimization dispatch and lowering.\n\n    {!get_program} is the main entry point: it optimizes a kernel AST\n    (load collapse, range splitting/simplification, beam search or\n    hand-coded optimizations), lowers it (expansion, devectorization,\n    GPU dims, decompositions), linearizes, renders, and compiles to a\n    {!Program_spec.t}. *)\n\nval full_rewrite_to_sink :\n  ?optimize:bool ->\n  ?device:Device.t ->\n  Renderer.t ->\n  Tolk_ir.Kernel.t ->\n  Tolk_ir.Kernel.t\n(** [full_rewrite_to_sink ?optimize ?device ren sink] optimizes and\n    lowers kernel [sink] to a linearizer-ready form.\n\n    When [optimize] is [true] (default), runs load collapse, range\n    splitting, symbolic simplification, range tightening, and dispatches\n    to beam search or hand-coded optimizations. When [false], skips\n    directly to lowering.\n\n    [device] enables beam search when [BEAM >= 1] is set. *)\n\nval get_program :\n  ?optimize:bool ->\n  ?device:Device.t ->\n  Device.t ->\n  Renderer.t ->\n  Tolk_ir.Kernel.t ->\n  Program_spec.t\n(** [get_program ?optimize ?device dev ren sink] compiles kernel [sink]\n    to a {!Program_spec.t}. Calls {!full_rewrite_to_sink} then\n    linearizes, renders, and compiles. *)\n"
  },
  {
    "path": "packages/tolk/lib/codegen/codegen_lower.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\n(* Codegen lowering — all passes after optimization, up to linearization.\n\n   This module has no dependency on Search, Postrange, or Heuristic,\n   so beam search can safely call [lower_and_linearize] without cycles. *)\n\nopen Tolk_ir\nmodule K = Kernel\n\n(* Environment *)\n\nlet debug = Helpers.getenv \"DEBUG\" 0\nlet devectorize = Helpers.getenv \"DEVECTORIZE\" 1\n\n(* Index dtype lowering — replace abstract index dtype with concrete int32. *)\n\nlet is_index_dtype dt = Dtype.equal (Dtype.scalarize dt) Dtype.index\n\nlet concrete_index_dtype dt =\n  if is_index_dtype dt then Dtype.vec (Dtype.count dt) Dtype.int32 else dt\n\nlet lower_index_dtype_rule node =\n  match K.view node with\n  | Cast { dtype; _ } when is_index_dtype dtype ->\n      Some (K.replace node ~dtype:(concrete_index_dtype dtype) ())\n  | Const { dtype; _ } | Range { dtype; _ }\n  | Unary { dtype; _ } | Binary { dtype; _ } | Ternary { dtype; _ }\n  | Gep { dtype; _ }\n  | Special { dtype; _ } | Define_var { dtype; _ }\n    when is_index_dtype (Dtype.Val dtype) ->\n      Some (K.replace node ~dtype:(concrete_index_dtype (Dtype.Val dtype)) ())\n  | Vectorize { dtype; _ } when is_index_dtype dtype ->\n      Some (K.replace node ~dtype:(concrete_index_dtype dtype) ())\n  | Invalid_index { dtype } ->\n      let cdt = concrete_index_dtype (Dtype.Val dtype) in\n      let scalar_zero = K.const (Const.int (Dtype.val_of (Dtype.scalarize cdt)) 0) in\n      if Dtype.Val.count dtype > 1 then Some (K.broadcast scalar_zero (Dtype.Val.count dtype))\n      else Some scalar_zero\n  | _ -> None\n\n(* Strip index casts from Sink/End children. 
*)\nlet strip_index_cast_children node =\n  match K.view node with\n  | Sink _ | End _ ->\n      let children = K.children node in\n      let stripped = List.map (fun c -> match K.view c with\n        | Cast { src; dtype = Dtype.Val dt } when is_index_dtype (Dtype.Val dt) -> src\n        | Cast { src; dtype = Dtype.Val dt } ->\n            (match K.dtype_opt src with\n             | Some src_dt when Dtype.equal src_dt (Dtype.Val dt) -> src | _ -> c)\n        | _ -> c) children in\n      if List.for_all2 (fun a b -> a == b) children stripped then None\n      else Some (K.replace node ~children:stripped ())\n  | _ -> None\n\n(* Strip index casts from INDEX children. *)\nlet strip_index_cast_from_index node =\n  match K.view node with\n  | Index { ptr; idxs; gate; dtype = Dtype.Ptr _ } ->\n      let stripped = List.map (fun idx -> match K.view idx with\n        | Cast { src; dtype = Dtype.Val dt } when Dtype.Val.is_int dt ->\n            (match K.dtype_opt src with\n             | Some src_dt when Dtype.is_int src_dt -> src | _ -> idx)\n        | _ -> idx) idxs in\n      if List.for_all2 (fun a b -> a == b) idxs stripped then None\n      else Some (K.index ~ptr ~idxs:stripped ?gate ())\n  | _ -> None\n\nlet pm_lower_index_dtype =\n  K.first_match [lower_index_dtype_rule; strip_index_cast_children;\n                 strip_index_cast_from_index]\n\n(* Bufferize lowering — convert Bufferize nodes to\n   DEFINE_LOCAL + INDEX + STORE + END + BARRIER. 
*)\nlet bufferize_range_size ranges =\n  List.fold_left (fun acc r -> match K.view r with\n    | Range { size; _ } ->\n        (match K.const_arg size with\n         | Some (Int i) -> acc * Int64.to_int i\n         | _ -> failwith \"bufferize_range_size: non-constant range extent\")\n    | _ -> acc) 1 ranges\n\nlet add_buffers_local_rule node =\n  match K.view node with\n  | Bufferize { src; ranges; dtype; _ } ->\n      let size = bufferize_range_size ranges in\n      if size <= 0 then None\n      else\n        let sorted_rngs = List.sort (fun a b ->\n          compare (K.range_axis a) (K.range_axis b)) ranges in\n        let range_ids = List.filter K.is_range sorted_rngs in\n        let ptr_dt = Dtype.Ptr.create (Dtype.Ptr.base dtype) ~addrspace:Dtype.Local ~size in\n        let def_local = K.define_local ~size ~dtype:ptr_dt in\n        let idx = K.index ~ptr:def_local ~idxs:sorted_rngs () in\n        let store = K.store ~dst:idx ~value:src ~ranges:[] in\n        let ended = K.end_ ~value:store ~ranges:range_ids () in\n        let bar = K.after ~src:K.barrier ~deps:[ended] in\n        Some (K.after ~src:def_local ~deps:[bar])\n  | _ -> None\n\n(* Lower an optimized kernel AST to a form ready for linearization.\n   Runs expansion, devectorization, GPU dimension mapping, index dtype\n   concretization, operation decomposition, and renderer-specific rewrites. *)\nlet lower ren sink =\n  (* Normalize symbolic expressions before expansion. *)\n  let sink = K.graph_rewrite ~name:\"postopt symbolic\"\n    (K.first_match [Symbolic.sym; Symbolic.pm_move_where_on_load]) sink in\n\n  (* Expand UPCAST/UNROLL ranges into explicit vector operations. *)\n  let sink = Expander.expand sink in\n\n  (* Convert Bufferize nodes into DEFINE_LOCAL + INDEX + STORE + END + BARRIER. *)\n  let sink = K.graph_rewrite ~name:\"add local buffers\"\n    (K.first_match [add_buffers_local_rule]) sink in\n\n  (* Scalarize reductions: lower Reduce nodes and push GEPs through. 
*)\n  let sink = Devectorizer.pm_reduce sink in\n\n  (* Map logical ranges to physical GPU grid dimensions (SPECIAL nodes). *)\n  let sink = Gpudims.pm_add_gpudims ren sink in\n\n  (* Insert explicit Load/Store operations from Index nodes. *)\n  let sink = Devectorizer.pm_add_loads sink in\n\n  (* Scalarize remaining vector operations for targets without native vectors. *)\n  let sink = Devectorizer.pm_devectorize ren sink in\n\n  (* Lower image Param_image loads/stores to read_imagef/write_imagef. *)\n  let sink = Images.rewrite ren sink in\n\n  (* Replace abstract index dtype with concrete int32. *)\n  let sink = K.graph_rewrite ~name:\"lower all index dtypes\"\n    (K.first_match [pm_lower_index_dtype; Devectorizer.load_store_indexing;\n                    Symbolic.gep_pushing]) sink in\n  let sink = K.graph_rewrite ~name:\"post index symbolic\"\n    (K.first_match [Symbolic.symbolic]) sink in\n\n  (* Apply renderer-specific pre-processing if provided. *)\n  let sink = match Renderer.pre_matcher ren with\n    | Some pm -> K.graph_rewrite ~name:\"pre_matcher\" pm sink\n    | None -> sink in\n\n  (* Decompose compound operations into primitives supported by the target. *)\n  let ops =\n    let base = Renderer.supported_ops ren in\n    let disable_fast_idiv = Helpers.getenv \"DISABLE_FAST_IDIV\" 0 <> 0 in\n    { base with disable_fast_idiv } in\n  let pm_decomp = K.first_match [\n    Symbolic.symbolic_simple;\n    Decompositions.get_late_rewrite_patterns ops] in\n  let sink = K.graph_rewrite ~name:\"decompositions\" pm_decomp sink in\n\n  (* Lower unsupported dtypes: int64 → int32, emulated floats. 
*)\n  let sink =\n    if Renderer.supports_dtype ren Dtype.int64 then sink\n    else K.graph_rewrite ~name:\"decomp long -> int\"\n           Decompositions.pm_long_decomp sink in\n  let sink = List.fold_left (fun sink (fr, to_) ->\n    let ctx : Decompositions.float_decomp_ctx =\n      { from_dtype = fr; to_dtype = to_ } in\n    K.graph_rewrite (Decompositions.pm_float_decomp ctx) sink)\n    sink (Renderer.emulated_float_dtypes ren) in\n\n  (* Expand transcendental functions (exp2, log2, sin, etc.). *)\n  let sink = K.graph_rewrite ~name:\"transcendental\"\n    (K.first_match [\n      Symbolic.symbolic_simple;\n      Decompositions.get_transcendental_patterns ops]) sink in\n\n  (* Final cleanup: re-apply decompositions, renderer emit rules, and\n     split multi-range End nodes into nested single-range Ends. *)\n  let extra = match Renderer.extra_matcher ren with\n    | Some m -> [m] | None -> [] in\n  let sink = K.graph_rewrite ~name:\"final rewrite\"\n    (K.first_match\n       ([pm_decomp; Devectorizer.pm_render_rule] @ extra\n        @ [Linearizer.do_split_ends]))\n    sink in\n\n  (* Add control-flow ordering edges between sibling loops. *)\n  let sink = Linearizer.pm_add_control_flow sink in\n\n  if debug >= 6 then K.print_uops ~label:\"lower\" sink;\n  sink\n\nlet lower_and_linearize ren sink =\n  Linearizer.linearize (lower ren sink)\n\nlet compile dev ren sink =\n  Device.compile_program dev (lower_and_linearize ren sink)\n"
  },
  {
    "path": "packages/tolk/lib/codegen/codegen_lower.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\n(** Codegen lowering — all passes after optimization, up to linearization.\n\n    {!lower} runs expansion, devectorization, GPU dimension mapping, image\n    lowering, index dtype concretization, decompositions, and renderer-specific\n    rewrites.\n\n    This module has no dependency on Search, Postrange, or Heuristic,\n    so beam search can safely call {!lower_and_linearize} without cycles. *)\n\nval lower : Renderer.t -> Tolk_ir.Kernel.t -> Tolk_ir.Kernel.t\n(** [lower renderer sink] runs all non-optimization codegen passes on an\n    optimized kernel AST. Returns a linearizer-ready {!Tolk_ir.Kernel.t}. *)\n"
  },
  {
    "path": "packages/tolk/lib/codegen/gpudims.ml",
"content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\n(* GPU dimension mapping.\n\n   Maps logical kernel ranges to physical GPU grid dimensions (SPECIAL nodes)\n   via grouping, splitting, and contraction. *)\n\nopen Tolk_ir\nmodule K = Kernel\n\nlet pp_ints a =\n  String.concat \"; \" (Array.to_list (Array.map string_of_int a))\n\nlet err_limit idims max_sizes =\n  Printf.sprintf \"cannot limit dim [%s], max_sizes=[%s]\"\n    (pp_ints idims) (pp_ints max_sizes)\n\nlet dim_max (d : K.t) : int =\n  match K.const_arg d with\n  | Some (Int n) -> Int64.to_int n\n  | _ -> failwith \"dim_max: non-constant dimension not yet supported\"\n\nlet dim_of_prefix prefix i = match prefix with\n  | \"gidx\" -> Special_dim.Group_id i\n  | \"lidx\" -> Special_dim.Local_id i\n  | \"idx\" -> Special_dim.Global_idx i\n  | s -> failwith (Printf.sprintf \"unknown dim prefix: %s\" s)\n\n(* Smallest non-trivial factor of n, searched up to sqrt n.\n   Returns 1 when n is prime or n <= 1, which callers treat as unsplittable. *)\nlet smallest_factor n =\n  let limit = int_of_float (ceil (sqrt (float_of_int n))) in\n  let rec loop f =\n    if f > limit then 1\n    else if n mod f = 0 then f\n    else loop (f + 1)\n  in\n  loop 2\n\nlet array_rev a =\n  let n = Array.length a in\n  Array.init n (fun i -> a.(n - 1 - i))\n\n(* Merge adjacent dims until they fit within max_sizes. 
*)\nlet group_dims dims max_sizes =\n  let dims = ref (Array.copy dims) in\n  let rec loop () =\n    let d = !dims in\n    let n = Array.length d in\n    let nm = Array.length max_sizes in\n    if n <= nm\n       && not (Array.exists2 (fun d m -> d > m)\n                 d (Array.sub max_sizes 0 (min n nm)))\n    then Some d\n    else\n      let rec try_merge i =\n        if i >= nm || i >= n - 1 then None\n        else if d.(i) * d.(i + 1) <= max_sizes.(i) then begin\n          dims := Array.init (n - 1) (fun j ->\n            if j < i then d.(j)\n            else if j = i then d.(i) * d.(i + 1)\n            else d.(j + 1));\n          loop ()\n        end else try_merge (i + 1)\n      in\n      try_merge 0\n  in\n  loop ()\n\n(* Split dims that exceed max_sizes by factoring into adjacent slots. *)\nlet split_dims dims max_sizes =\n  if Array.for_all2 (fun d m -> d <= m)\n       dims (Array.sub max_sizes 0 (Array.length dims))\n  then dims\n  else begin\n    let d = Array.make 3 1 in\n    for i = 0 to min (Array.length dims) 3 - 1 do d.(i) <- dims.(i) done;\n    for i = 0 to 2 do\n      while d.(i) > max_sizes.(i) do\n        let div = smallest_factor d.(i) in\n        if div = 1 then failwith (err_limit dims max_sizes);\n        let next = (i + 1) mod 3 in\n        d.(next) <- d.(next) * div;\n        d.(i) <- d.(i) / div\n      done\n    done;\n    (* Trim trailing unit dims: check the two-trailing-ones case first, since\n       d.(2) = 1 alone would otherwise shadow it. *)\n    if d.(1) = 1 && d.(2) = 1 then Array.sub d 0 1\n    else if d.(2) = 1 then Array.sub d 0 2\n    else d\n  end\n\n(* Flatten SPECIAL raw indices to 1D, then decompose back to original dims. 
*)\nlet flatten_and_decompose raw limited dims =\n  let open K.O in\n  let flat = match Array.length limited with\n    | 2 -> raw.(0) * int_ limited.(1) + raw.(1)\n    | _ -> raw.(0) * int_ Stdlib.(limited.(1) * limited.(2))\n            + raw.(1) * int_ limited.(2) + raw.(2)\n  in\n  match Array.length dims with\n  | 1 -> [flat]\n  | 2 -> [flat / int_ dims.(1); flat mod int_ dims.(1)]\n  | _ -> [flat / int_ Stdlib.(dims.(2) * dims.(1));\n          flat / int_ dims.(2) mod int_ dims.(1);\n          flat mod int_ dims.(2)]\n\n(* Map logical range sizes to physical GPU dimensions (SPECIAL nodes). *)\nlet rec get_grouped_dims prefix dims max_sizes ~reverse =\n  if reverse then\n    List.rev (get_grouped_dims prefix (array_rev dims) max_sizes ~reverse:false)\n  else\n    let idims = Array.map dim_max dims in\n    let limited = match max_sizes with\n      | None -> idims\n      | Some max_sizes ->\n          let max_sizes = Array.of_list max_sizes in\n          let limited = match group_dims idims max_sizes with\n            | Some g -> g | None -> idims in\n          if Array.length limited > Array.length max_sizes then\n            failwith (err_limit idims max_sizes);\n          if limited = idims then split_dims idims max_sizes else limited\n    in\n    let raw = Array.mapi (fun i s ->\n      K.special ~dim:(dim_of_prefix prefix i) ~size:(K.O.int_ s) ()) limited in\n    let nl = Array.length limited and nd = Array.length idims in\n    if nl < nd then\n      match Helpers.get_contraction idims limited with\n      | None ->\n          failwith (Printf.sprintf\n            \"get_contraction should not be None dims=[%s] limited=[%s]\"\n            (pp_ints idims) (pp_ints limited))\n      | Some contraction ->\n          let open K.O in\n          let ret = ref [] in\n          List.iteri (fun i group ->\n            let cur = ref raw.(i) in\n            let group = Array.of_list group in\n            for j = 0 to Array.length group - 2 do\n              let c = 
dims.(group.(j)) in\n              ret := (!cur mod c) :: !ret;\n              cur := !cur / c\n            done;\n            ret := !cur :: !ret) contraction;\n          List.rev !ret\n    else if nl > nd then\n      let open K.O in\n      if nl = 2 && nd = 1 then\n        [raw.(0) * int_ limited.(1) + raw.(1)]\n      else if nl = 3 && nd = 1 then\n        [(raw.(0) * int_ limited.(1) + raw.(1)) * int_ limited.(2) + raw.(2)]\n      else if limited <> idims then\n        flatten_and_decompose raw limited idims\n      else Array.to_list raw\n    else if limited <> idims then\n      flatten_and_decompose raw limited idims\n    else Array.to_list raw\n\n(* Range key: (axis, sub) — everything except the kind. *)\nmodule Range_key = struct\n  type t = int * int list\n  let compare = Stdlib.compare\n  let of_range r = (K.range_axis r, K.range_sub r)\nend\n\nmodule Rkmap = Map.Make (Range_key)\n\n(* Substitute ranges with SPECIAL-based GPU dimension indices. *)\nlet add_gpudims (ctx : Renderer.t) (s : K.t) : K.t option =\n  match K.view s with\n  | Sink { kernel_info = None; _ } -> None\n  | Sink { kernel_info = Some ki; _ } ->\n      let s_topo = K.toposort s in\n      if List.exists (fun x ->\n        match K.view x with Special _ -> true | _ -> false) s_topo\n      then None\n      else\n        (* Collect all ranges keyed by (axis, sub). 
*)\n        let all_ranges = List.fold_left (fun acc x ->\n          if K.is_range x then Rkmap.add (Range_key.of_range x) x acc\n          else acc) Rkmap.empty s_topo in\n        let extract_keys pred =\n          Rkmap.fold (fun key x acc ->\n            if pred (K.range_kind x) then key :: acc else acc)\n            all_ranges []\n          |> List.sort Range_key.compare in\n        let global_dims = extract_keys (function\n          | Axis_kind.Global | Thread -> true | _ -> false) in\n        let local_dims = extract_keys (function\n          | Axis_kind.Warp | Local | Group_reduce -> true | _ -> false) in\n        if global_dims = [] && local_dims = [] then None\n        else\n          let shape_of keys = Array.of_list (List.map (fun k ->\n            K.range_size (Rkmap.find k all_ranges)) keys) in\n          let global_shape = shape_of global_dims in\n          let local_shape = shape_of local_dims in\n          (* Compute per-range index expressions. *)\n          let idxs =\n            if Renderer.has_threads ctx then begin\n              assert (Array.length global_shape > 0);\n              let hi = dim_max global_shape.(0) - 1 in\n              let core = K.define_var ~name:\"core_id\" ~lo:0 ~hi\n                ~dtype:Dtype.Val.int32 () in\n              [K.cast ~src:core ~dtype:Dtype.index]\n            end else if ki.dont_use_locals then begin\n              assert (local_dims = []);\n              get_grouped_dims \"idx\" global_shape\n                (Renderer.global_max ctx) ~reverse:true\n            end else begin\n              let local_idxs = get_grouped_dims \"lidx\" local_shape\n                (Renderer.local_max ctx) ~reverse:false in\n              let hw_local = List.filter_map (fun u ->\n                match K.view u with\n                | Special { size; _ } -> Some (dim_max size)\n                | _ -> None) local_idxs in\n              let global_max = match Renderer.global_prod_max ctx with\n                | None -> 
Renderer.global_max ctx\n                | Some pm ->\n                    let gm = match Renderer.global_max ctx with\n                      | Some g -> g | None -> pm in\n                    let rec zip3 gs ps ls = match gs, ps, ls with\n                      | g :: gs, p :: ps, l :: ls ->\n                          min g (p / l) :: zip3 gs ps ls\n                      | g :: gs, p :: ps, [] ->\n                          min g p :: zip3 gs ps []\n                      | _ -> [] in\n                    Some (zip3 gm pm (hw_local @ [1; 1; 1])) in\n              get_grouped_dims \"gidx\" global_shape global_max ~reverse:true\n              @ local_idxs\n            end\n          in\n          (* Build substitution map. *)\n          let lr_tbl = K.live_ranges_tbl s in\n          let all_dim_keys = global_dims @ local_dims in\n          let dim_idx = List.fold_left (fun (acc, i) k ->\n            (Rkmap.add k i acc, i + 1)) (Rkmap.empty, 0) all_dim_keys\n            |> fst in\n          let subs = ref [] in\n          List.iter (fun r ->\n            (* Guard global stores against missing local ranges. 
*)\n            (match K.view r with\n             | Store { dst = idx; _ } ->\n                 (match K.view idx with\n                  | Index { ptr; idxs = idx_srcs; gate; dtype = Dtype.Ptr idx_pty }\n                    when Dtype.Ptr.addrspace idx_pty = Dtype.Global ->\n                      let idx_ranges = Option.value ~default:[]\n                        (K.Ref_tbl.find_opt lr_tbl idx) in\n                      let missing = List.filter_map (fun rk ->\n                        let rng = Rkmap.find rk all_ranges in\n                        if List.exists (fun x -> x == rng) idx_ranges\n                        then None else Some rng) local_dims in\n                      if missing <> [] then begin\n                        assert (gate = None);\n                        let open K.O in\n                        let mask = List.fold_left (fun acc x ->\n                          K.binary ~op:`And ~lhs:acc ~rhs:(eq x (int_ 0)))\n                          (eq (List.hd missing) (int_ 0))\n                          (List.tl missing) in\n                        let value = match idx_srcs with\n                          | [] -> K.const_int 0 | [v] -> v\n                          | first :: rest ->\n                              List.fold_left (fun a x ->\n                                K.binary ~op:`Add ~lhs:a ~rhs:x) first rest in\n                        let dt = K.dtype value in\n                        let gated = K.O.where\n                          (K.broadcast mask (Dtype.count dt)) value\n                          (K.invalid_index ~lanes:(Dtype.count dt) ()) in\n                        subs := (idx, K.index ~ptr ~idxs:[gated] ()) :: !subs\n                      end\n                  | _ -> ())\n             | _ -> ());\n            (* Substitute non-reduce ranges with their idx expression. 
*)\n            if K.is_range r then begin\n              let key = Range_key.of_range r in\n              match Rkmap.find_opt key dim_idx with\n              | Some ii when K.range_kind r <> Axis_kind.Reduce ->\n                  subs := (r, List.nth idxs ii) :: !subs\n              | _ -> ()\n            end)\n            s_topo;\n          if !subs = [] then None\n          else Some (K.substitute !subs s)\n  | _ -> None\n\nlet pm_add_gpudims (ctx : Renderer.t) (root : K.t) : K.t =\n  K.graph_rewrite ~name:\"add gpudims\" (fun node -> add_gpudims ctx node) root\n"
  },
  {
    "path": "packages/tolk/lib/codegen/gpudims.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\n(** GPU dimension mapping.\n\n    Maps logical kernel ranges to physical GPU grid dimensions\n    ({!Tolk_ir.Kernel.view.Special} nodes) via grouping, splitting, and\n    contraction.\n\n    The pass replaces {!Tolk_ir.Kernel.view.Range} nodes of Global, Thread,\n    Warp, Local, and Group_reduce kinds with SPECIAL hardware index nodes,\n    adjusting for renderer grid size limits. For threaded backends, a single\n    [core_id] variable replaces all global ranges. Missing local ranges on\n    global stores are gated with validity masks. *)\n\nval get_grouped_dims :\n  string ->\n  Tolk_ir.Kernel.t array ->\n  int list option ->\n  reverse:bool ->\n  Tolk_ir.Kernel.t list\n(** [get_grouped_dims prefix dims max_sizes ~reverse] maps logical [dims]\n    to physical SPECIAL dimension nodes.\n\n    [prefix] is [\"gidx\"], [\"lidx\"], or [\"idx\"]. [max_sizes] constrains\n    physical dimensions ([None] for no constraint). When [reverse], dims\n    are reversed before mapping and the result reversed back.\n\n    Raises [Failure] if dims cannot be grouped or split to fit\n    [max_sizes]. *)\n\nval pm_add_gpudims : Renderer.t -> Tolk_ir.Kernel.t -> Tolk_ir.Kernel.t\n(** [pm_add_gpudims renderer root] replaces GPU-mappable ranges in [root]\n    with SPECIAL dimension nodes sized to the renderer's grid limits.\n\n    Returns [root] unchanged when the kernel has no GPU-mappable ranges\n    or already contains SPECIAL nodes. *)\n"
  },
  {
    "path": "packages/tolk/lib/codegen/late/devectorizer.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\nopen Tolk_ir\nmodule K = Kernel\n\n(* Helpers *)\n\n(* Partition a sorted int list into runs of consecutive values.\n   [0;1;2;5;6] -> [[0;1;2]; [5;6]] *)\nlet group_consecutive = function\n  | [] -> []\n  | x :: rest ->\n      let rec go acc cur = function\n        | [] -> List.rev (List.rev cur :: acc)\n        | x :: rest ->\n            match cur with\n            | prev :: _ when x = prev + 1 -> go acc (x :: cur) rest\n            | _ -> go (List.rev cur :: acc) [x] rest\n      in\n      go [] [x] rest\n\n(* Grouping key for fold_expanded_index: physical identity for nodes,\n   structural equality for constants, a sentinel for Invalid. *)\ntype root_key =\n  | Root_node of K.t\n  | Root_invalid\n  | Root_const\n\n(* Split idx into (root, constant_offset).  ADD(e, c) -> (e, c). *)\nlet decompose_idx idx =\n  match K.view idx with\n  | Binary { op = `Add; lhs; rhs; _ } -> (\n      match K.const_arg rhs with\n      | Some (Int c) -> Root_node lhs, Int64.to_int c\n      | _ -> match K.const_arg lhs with\n        | Some (Int c) -> Root_node rhs, Int64.to_int c\n        | _ -> Root_node idx, 0)\n  | Const { value; _ } -> (\n      match Const.view value with\n      | Int c -> Root_const, Int64.to_int c\n      | _ -> Root_node idx, 0)\n  | Invalid_index _ -> Root_invalid, 0\n  | _ -> Root_node idx, 0\n\n(* load_store_indexing *)\n\n(* Is this node a monotonically increasing function of its inputs? 
*)\nlet rec is_increasing node = match K.view node with\n  | Const _ | Define_var _ | Special _ | Range _ -> true\n  | Binary { op = `Add; lhs; rhs; _ } -> is_increasing lhs && is_increasing rhs\n  | Binary { op = `Mul | `Idiv; lhs; rhs; _ }\n    when K.is_const rhs && K.vmin rhs >= 0 -> is_increasing lhs\n  | _ -> false\n\n(* For image indexes: determine which validity clauses can be dropped\n   because the index is provably out of bounds when the clause is false. *)\nlet drop_valid_stmts valid idx height width =\n  let open Symbolic in\n  List.filter (fun stmt ->\n    match parse_valid stmt with\n    | None -> false\n    | Some (x, is_upper, c) ->\n        (* For X0 + X1 + ... >= 1, check out-of-bound when all Xi = 0. *)\n        if not is_upper && c = 1\n           && List.for_all (fun u -> Symbolic.is_irreducible u && K.vmin u = 0)\n                (Divandmod.split_add x)\n        then\n          let testidx = List.fold_left (fun nowidx u ->\n            K.substitute [(u, K.const_int 0)] nowidx)\n            idx (Divandmod.split_add x) in\n          K.vmax (K.gep ~src:testidx ~idx:0) < 0\n          || K.vmax (K.gep ~src:testidx ~idx:1) < 0\n        else\n          (* If X <= c, check out-of-bound at X = c+1.\n             If X >= c, check out-of-bound at X = c-1. *)\n          let test_value = if is_upper then c + 1 else c - 1 in\n          let dims = [width; height] in\n          let srcs = K.children idx in\n          List.exists2 (fun i b ->\n            if is_increasing i then\n              let rw = K.substitute [(x, K.const_int test_value)] i in\n              K.vmin rw >= b || K.vmax rw < 0\n            else false)\n            (List.filteri (fun j _ -> j < List.length dims) srcs) dims)\n    (Symbolic.split_and valid)\n\n(* Simplify an INDEX validity gate.  For non-image buffers, runs\n   uop_given_valid on the index; for Param_image buffers, also drops\n   redundant validity clauses proved by image dimension bounds. 
*)\nlet simplify_valid_load buf start_idx valid =\n  let idx = Symbolic.uop_given_valid valid start_idx in\n  match K.view buf with\n  | Param_image { width; height; _ } ->\n      (* Wait for image-indexed form (2-component coords). *)\n      if Dtype.count (K.dtype start_idx) <> 2 then None\n      else\n        let drop = drop_valid_stmts valid idx height width in\n        if drop = [] && idx == start_idx then None\n        else\n          let remaining = List.filter (fun s ->\n            not (List.exists (fun d -> d == s) drop))\n            (Symbolic.split_and valid) in\n          let gated_idx = match remaining with\n            | [] -> idx\n            | _ ->\n                let new_valid = List.fold_left (fun acc s ->\n                  K.binary ~op:`And ~lhs:acc ~rhs:s)\n                  (List.hd remaining) (List.tl remaining) in\n                K.ternary ~op:`Where ~a:new_valid ~b:idx\n                  ~c:(K.invalid_index ~lanes:(Dtype.count (K.dtype idx)) ())\n          in\n          Some (K.index ~ptr:buf ~idxs:[gated_idx] ())\n  | _ ->\n      if idx == start_idx then None\n      else\n        let gated_idx = K.ternary ~op:`Where ~a:valid ~b:idx\n          ~c:(K.invalid_index ~lanes:(Dtype.count (K.dtype idx)) ()) in\n        Some (K.index ~ptr:buf ~idxs:[gated_idx] ())\n\n(* Remove a gate that is the constant [true]. *)\nlet drop_true_gate (node : K.t) : K.t option =\n  match K.view node with\n  | Index { ptr; idxs; gate = Some g; _ }\n    when K.const_arg g = Some (Bool true) ->\n      Some (K.index ~ptr ~idxs ())\n  | _ -> None\n\n(* Match INDEX(buf, where(cond, x, Invalid)) or INDEX(buf, x:long, c:bool)\n   and simplify the validity gate via uop_given_valid. 
*)\nlet simplify_valid_index (node : K.t) : K.t option =\n  match K.view node with\n  | Index { ptr; idxs = [idx]; gate = None; _ } ->\n      (* Pattern 1: INDEX(buf, where(cond, x, Invalid)) *)\n      (match K.view idx with\n       | Ternary { op = `Where; a = cond; b = x; c = inv; _ }\n         when (match K.view inv with Invalid_index _ -> true | _ -> false) ->\n           simplify_valid_load ptr x cond\n       | _ -> None)\n  | Index { ptr; idxs = [x]; gate = Some c; _ } ->\n      (* Pattern 2: INDEX(buf, x, c:bool) — after index dtype lowered *)\n      (match K.dtype_opt x with\n       | Some dt when Dtype.scalar dt = Dtype.Int64 ->\n           simplify_valid_load ptr x c\n       | _ -> None)\n  | _ -> None\n\nlet load_store_indexing node =\n  match simplify_valid_index node with\n  | Some _ as r -> r\n  | None -> drop_true_gate node\n\n(* load/store grouping *)\n\n(* Expand Index(Vectorize(buf,...), vec) into Vectorize of per-lane indexes. *)\nlet expand_index (node : K.t) : K.t option =\n  match K.view node with\n  | Index { ptr; idxs = [vec]; gate = None; _ } -> (\n      match K.view ptr with\n      | Vectorize { srcs = buf :: _; _ } ->\n          let n = Dtype.count (K.dtype vec) in\n          let lanes = List.init n (fun i ->\n            K.index ~ptr:buf ~idxs:[K.gep ~src:vec ~idx:i] ()) in\n          Some (K.vectorize ~srcs:lanes)\n      | _ -> None)\n  | _ -> None\n\n(* Fold Vectorize(Index(buf,i0),...,Index(buf,iN)) back into grouped\n   pointer-cast accesses: consecutive offsets share a single wide ptr. 
*)\nlet fold_expanded_index (node : K.t) : K.t option =\n  match K.view node with\n  | Vectorize { srcs; _ } when srcs <> [] ->\n      let first = match K.view (List.hd srcs) with\n        | Index { ptr; dtype = Dtype.Ptr pty; _ } -> Some (ptr, pty)\n        | _ -> None in\n      (match first with\n       | None -> None\n       | Some (buf, buf_pty) ->\n      if not (List.for_all (fun s -> match K.view s with\n        | Index { ptr; _ } -> ptr == buf | _ -> false) srcs) then None\n      else if not (List.for_all K.is_ptr srcs) then None\n      else begin\n        let n = List.length srcs in\n        (* Ordered map with physical-equality keys on K.t nodes. *)\n        let offsets :\n            (K.t option * root_key, (int, int list) Hashtbl.t) Hashtbl.t =\n          Hashtbl.create 4 in\n        let key_order = ref [] in\n        let find_or_create valid root =\n          let eq_key (v, r) =\n            (match v, valid with\n             | None, None -> true | Some a, Some b -> a == b | _ -> false)\n            && (match r, root with\n                | Root_node a, Root_node b -> a == b\n                | Root_invalid, Root_invalid -> true\n                | Root_const, Root_const -> true\n                | _ -> false)\n          in\n          match List.find_opt eq_key !key_order with\n          | Some k -> Hashtbl.find offsets k\n          | None ->\n              let k = (valid, root) in\n              let tbl = Hashtbl.create 4 in\n              Hashtbl.replace offsets k tbl;\n              key_order := k :: !key_order;\n              tbl\n        in\n        (* Collect per-src offsets keyed by (gate, root). 
*)\n        List.iteri (fun i s ->\n          match K.view s with\n          | Index { idxs = [idx]; gate; _ } ->\n              let root, arg = decompose_idx idx in\n              let tbl = find_or_create gate root in\n              let prev = match Hashtbl.find_opt tbl arg with\n                | Some l -> l | None -> [] in\n              Hashtbl.replace tbl arg (i :: prev)\n          | _ -> ()) srcs;\n        (* Group consecutive offsets and widen ptrs. *)\n        let group_and_widen () =\n          let ret = ref [] in\n          let idxs = Array.make n (-1) in\n          let global_offset = ref 0 in\n          List.iter (fun key ->\n            let tbl = Hashtbl.find offsets key in\n            let sorted_args = List.sort compare\n              (Hashtbl.fold (fun k _ acc -> k :: acc) tbl []) in\n            List.iter (fun grp ->\n              let grp_len = List.length grp in\n              let first_off = List.hd grp in\n              let first_orig = List.hd (Hashtbl.find tbl first_off) in\n              let lidx = List.nth srcs first_orig in\n              let lidx =\n                if grp_len > 1 then\n                  let scalar = Dtype.Val.scalarize (Dtype.Ptr.base buf_pty) in\n                  let wide_pty = Dtype.Ptr.with_base\n                    (Dtype.Val.vec grp_len scalar) buf_pty in\n                  K.cast ~src:lidx ~dtype:(Dtype.Ptr wide_pty)\n                else lidx\n              in\n              List.iteri (fun lane_i g ->\n                List.iter (fun oi ->\n                  idxs.(oi) <- !global_offset + lane_i)\n                  (Hashtbl.find tbl g)) grp;\n              ret := lidx :: !ret;\n              global_offset := !global_offset + grp_len)\n              (group_consecutive sorted_args))\n            (List.rev !key_order);\n          if Array.exists (fun x -> x < 0) idxs then None\n          else Some (List.rev !ret, Array.to_list idxs, !global_offset)\n        in\n        (* Assemble PTRCAT + GEP result. 
*)\n        match group_and_widen () with\n        | None | Some (_, _, 0) -> None\n        | Some (ret, idxs_list, total) ->\n            let scalar = Dtype.Val.scalarize (Dtype.Ptr.base buf_pty) in\n            let cat_pty = Dtype.Ptr.with_base scalar buf_pty in\n            let post_cat = K.ptrcat ~srcs:ret\n              ~dtype:(Dtype.Ptr.vec total cat_pty) in\n            Some (K.gep_multi ~src:post_cat ~idxs:idxs_list)\n      end)\n  | _ -> None\n\n(* Push GEP through LOAD: Load(GEP(x, arg)) -> GEP(Load(x, wider_dtype), arg). *)\nlet gep_after_load (node : K.t) : K.t option =\n  match K.view node with\n  | Load { src; alt; dtype } -> (\n      match K.view src with\n      | Gep { src = inner; idxs; dtype = gep_dt } ->\n          let wide_dt = Dtype.vec (Dtype.Val.count gep_dt) (Dtype.scalarize (Dtype.Val dtype)) in\n          let wide_load = K.replace node\n            ~children:(inner :: Option.to_list alt)\n            ~dtype:wide_dt () in\n          Some (K.gep_multi ~src:wide_load ~idxs)\n      | _ -> None)\n  | _ -> None\n\n(* Push GEP through STORE: Store(GEP(x, perm), val) -> Store(x, GEP(val, inv_perm)).\n   XXX does not handle expanding (duplicate) GEPs — same as tinygrad. *)\nlet gep_on_store (node : K.t) : K.t option =\n  match K.view node with\n  | Store { dst; value; ranges } -> (\n      match K.view dst with\n      | Gep { src = inner; idxs; _ } ->\n          let n = List.length idxs in\n          let inv = Array.make n 0 in\n          List.iteri (fun i x -> if x >= 0 && x < n then inv.(x) <- i) idxs;\n          Some (K.store ~dst:inner\n                  ~value:(K.gep_multi ~src:value ~idxs:(Array.to_list inv))\n                  ~ranges)\n      | _ -> None)\n  | _ -> None\n\n(* Split Load(Ptrcat(p0,...,pN)) into Vcat(Load(p0),...,Load(pN)). 
*)\nlet ptrcat_after_load (node : K.t) : K.t option =\n  match K.view node with\n  | Load { src; alt; _ } -> (\n      match K.view src with\n      | Ptrcat { srcs; _ } ->\n          Some (K.vcat ~srcs:(List.map (fun p -> K.load ~src:p ?alt ()) srcs))\n      | _ -> None)\n  | _ -> None\n\n(* Split Store(Ptrcat(p0,...,pN), data) into Group(Store(p0, slice0), ...). *)\nlet ptrcat_after_store (node : K.t) : K.t option =\n  match K.view node with\n  | Store { dst; value; ranges } -> (\n      match K.view dst with\n      | Ptrcat { srcs; _ } ->\n          let rec go acc offset = function\n            | [] -> Some (K.group (List.rev acc))\n            | p :: rest ->\n                let n = Dtype.count (K.dtype p) in\n                let chunk = K.gep_multi ~src:value\n                  ~idxs:(List.init n (fun j -> offset + j)) in\n                go (K.store ~dst:p ~value:chunk ~ranges :: acc) (offset + n) rest\n          in\n          go [] 0 srcs\n      | _ -> None)\n  | _ -> None\n\n(* correct load/store *)\n\n(* Extract (ptr, idxs, gate, buf_pty, sz) from a Cast(Index(...)) src. *)\nlet extract_cast_index src =\n  match K.view src with\n  | Cast { src = idx; dtype = Dtype.Ptr pty } -> (\n      match K.view idx with\n      | Index { ptr; idxs; gate; _ } ->\n          let sz = Dtype.Val.count (Dtype.Ptr.base pty) in\n          if not (K.is_ptr ptr) then None\n          else\n            let buf_pty = K.ptr_dtype ptr in\n            if sz = 1 || Dtype.Ptr.addrspace buf_pty = Dtype.Reg then None\n            else Some (ptr, idxs, gate, buf_pty, sz)\n      | _ -> None)\n  | _ -> None\n\n(* Split wide Load/Store(Cast(Index)) into renderer-supported widths.\n   Image and DSP paths omitted. *)\nlet split_load_store (ren : Renderer.t) (node : K.t) : K.t option =\n  (* Determine fold widths, filter by divisibility, split into chunks. 
*)\n  let split ptr idxs gate buf_pty sz mk_item =\n    let base_scalar = Dtype.Val.scalarize (Dtype.Ptr.base buf_pty) in\n    let widths = match Dtype.Val.scalar base_scalar with\n      | Float32 | Float16 | Fp8e4m3 | Fp8e5m2\n        when Renderer.supports_float4 ren ->\n          if Dtype.Val.scalar base_scalar = Float16 && Helpers.allow_half8\n          then [8; 4; 2]\n          else if Helpers.amx then [16; 8; 4; 2]\n          else [4; 2]\n      | _ -> []\n    in\n    let offset = List.hd idxs in\n    let lengths = List.filter (fun x ->\n      K.divides offset x <> None) (widths @ [1]) in\n    let rec go acc off =\n      if off >= sz then List.rev acc\n      else match List.find_opt (fun fl -> off + fl <= sz) lengths with\n      | None -> List.rev acc\n      | Some fl ->\n          let new_idxs =\n            if off = 0 then idxs\n            else List.map (fun i ->\n              K.binary ~op:`Add ~lhs:i ~rhs:(K.const_int off)) idxs in\n          let base_idx = K.index ~ptr ~idxs:new_idxs ?gate () in\n          let lidx =\n            if fl > 1 then\n              let wide_pty = Dtype.Ptr.with_base\n                (Dtype.Val.vec fl base_scalar) buf_pty in\n              K.cast ~src:base_idx ~dtype:(Dtype.Ptr wide_pty)\n            else base_idx in\n          go (mk_item lidx fl off :: acc) (off + fl)\n    in\n    match go [] 0 with [] | [_] -> None | ret -> Some ret\n  in\n  match K.view node with\n  | Load { src; alt; dtype } -> (\n      match extract_cast_index src with\n      | None -> None\n      | Some (ptr, idxs, gate, buf_pty, sz) ->\n          Option.map (fun ret -> K.vcat ~srcs:ret)\n            (split ptr idxs gate buf_pty sz (fun lidx fl _off ->\n               K.replace node ~children:(lidx :: Option.to_list alt)\n                 ~dtype:(Dtype.vec fl (Dtype.scalarize (Dtype.Val dtype))) ())))\n  | Store { dst; value; ranges } -> (\n      match extract_cast_index dst with\n      | None -> None\n      | Some (ptr, idxs, gate, buf_pty, sz) ->\n     
     Option.map K.group\n            (split ptr idxs gate buf_pty sz (fun lidx fl off ->\n               K.store ~dst:lidx ~ranges\n                 ~value:(K.gep_multi ~src:value\n                           ~idxs:(List.init fl (fun j -> off + j))))))\n  | _ -> None\n\nlet pm_correct_load_store_rule ren = split_load_store ren\n\n(* devectorize *)\n\nlet prod lst = List.fold_left ( * ) 1 lst\n\n(* Break a wide WMMA into multiple smaller WMMAs matching the upcast chunk size. *)\nlet no_vectorized_wmma (node : K.t) : K.t option =\n  match K.view node with\n  | Wmma { a; b; c; dtype; upcast_axes = ua, ub, uc; _ } ->\n      let out_sz = prod (List.map snd uc) in\n      if Dtype.Val.count dtype = out_sz then None\n      else\n        let chunk src axes =\n          let ssz = prod (List.map snd axes) in\n          let cnt = Dtype.count (K.dtype src) in\n          List.init (cnt / ssz) (fun g ->\n            K.gep_multi ~src ~idxs:(List.init ssz (fun j -> g * ssz + j)))\n        in\n        let wmma_dt = Dtype.vec out_sz (Dtype.scalarize (Dtype.Val dtype)) in\n        let wmmas = List.map2 (fun (a, b) c ->\n          K.replace node ~children:[a; b; c] ~dtype:wmma_dt ())\n          (List.combine (chunk a ua) (chunk b ub)) (chunk c uc) in\n        let srcs = List.concat_map (fun w ->\n          List.init out_sz (fun i -> K.gep ~src:w ~idx:i)) wmmas in\n        Some (K.vectorize ~srcs)\n  | _ -> None\n\n(* Scalarize vectorized ALU/Cast/Bitcast by extracting each lane. 
*)\nlet no_vectorized_alu (node : K.t) : K.t option =\n  match K.view node with\n  (* WHERE with Invalid 3rd arg: image index pattern, skip *)\n  | Ternary { op = `Where; c; _ }\n    when (match K.view c with Invalid_index _ -> true | _ -> false) -> None\n  | Unary _ | Binary _ | Ternary _ | Cast _ | Bitcast _ ->\n      let adt = K.dtype node in\n      let vc = Dtype.vcount adt in\n      if vc <= 1 then None\n      else\n        let children = K.children node in\n        let scalar_dt = Dtype.scalarize adt in\n        let srcs = List.init vc (fun i ->\n          K.replace node\n            ~children:(List.map (fun s -> K.gep ~src:s ~idx:i) children)\n            ~dtype:scalar_dt ()) in\n        Some (K.vectorize ~srcs)\n  | _ -> None\n\n(* Scalarize DEFINE_LOCAL/DEFINE_REG with vector base: widen size, scalarize\n   base, cast back. *)\nlet no_vectorized_buf (node : K.t) : K.t option =\n  let scalarize size dtype mk =\n    let cnt = Dtype.Val.count (Dtype.Ptr.base dtype) in\n    let scalar_pty = Dtype.Ptr.with_size (Dtype.Ptr.size dtype * cnt)\n      (Dtype.Ptr.with_base (Dtype.Val.scalarize (Dtype.Ptr.base dtype)) dtype) in\n    Some (K.cast ~src:(mk (size * cnt) scalar_pty)\n      ~dtype:(Dtype.Ptr dtype))\n  in\n  match K.view node with\n  | Define_local { size; dtype } when Dtype.Val.count (Dtype.Ptr.base dtype) > 1 ->\n      scalarize size dtype (fun size dtype -> K.define_local ~size ~dtype)\n  | Define_reg { size; dtype; slot } when Dtype.Val.count (Dtype.Ptr.base dtype) > 1 ->\n      scalarize size dtype (fun size dtype -> K.define_reg ~size ~dtype ~slot)\n  | _ -> None\n\n(* Scalarize a vector Index on local/reg memory.\n   Handles three ptr shapes matching tinygrad's devectorize_buf_and_index:\n   1. Cast(buf).index(idx)              — plain scalar index\n   2. Cast(buf).broadcast(b).index(idx) — broadcast index\n   3. 
Cast(buf).gep(g).index(idx)       — GEP-selected lanes *)\nlet no_vectorized_index (node : K.t) : K.t option =\n  let rec is_local_or_reg n = match K.view n with\n    | After { src; _ } -> is_local_or_reg src\n    | Define_local _ | Define_reg _ -> true\n    | _ -> false\n  in\n  let check_cast n = match K.view n with\n    | Cast { src = buf; dtype = Dtype.Ptr cp; _ }\n      when is_local_or_reg buf -> Some (buf, cp)\n    | _ -> None\n  in\n  match K.view node with\n  | Index { ptr; idxs; dtype = Dtype.Ptr pty; _ }\n    when Dtype.Val.count (Dtype.Ptr.base pty) > 1 ->\n      (* Decompose ptr into (buf, cast_pty, bcast_kind) *)\n      let found = match K.view ptr with\n        | Cast _ ->\n            Option.map (fun (buf, cp) -> (buf, cp, `Plain)) (check_cast ptr)\n        | Vectorize { srcs = s :: _; _ } ->\n            Option.map (fun (buf, cp) -> (buf, cp, `Broadcast ptr)) (check_cast s)\n        | Gep { src = inner; idxs = gep_idxs; _ } ->\n            Option.map (fun (buf, cp) -> (buf, cp, `Gep gep_idxs)) (check_cast inner)\n        | _ -> None\n      in\n      Option.bind found (fun (buf, cast_pty, bcast) ->\n        let cnt = Dtype.Val.count (Dtype.Ptr.base cast_pty) in\n        let pairs = match bcast with\n          | `Gep gep_idxs ->\n              let vc = Dtype.Ptr.v cast_pty in\n              let n_gep = List.length gep_idxs in\n              List.init (vc * n_gep) (fun i ->\n                (i mod n_gep, i / n_gep + List.nth gep_idxs (i mod n_gep)))\n          | `Broadcast bnode ->\n              let bvc = Dtype.vcount (K.dtype bnode) in\n              List.init (cnt * bvc) (fun i -> (i mod bvc, i / bvc))\n          | `Plain ->\n              List.init cnt (fun c -> (0, c))\n        in\n        let n = List.length pairs in\n        let open K.O in\n        let idx = match idxs with\n          | [] -> int_ 0\n          | first :: rest -> List.fold_left ( + ) first rest in\n        let lane_sel = K.gep_multi ~src:idx ~idxs:(List.map fst pairs) in\n      
  let stride = K.broadcast (int_ cnt) n in\n        let off = K.vectorize ~srcs:(List.map (fun (_, o) -> int_ o) pairs) in\n        let wide_idx = lane_sel * stride + off in\n        Some (K.index ~ptr:(K.broadcast buf n) ~idxs:[wide_idx] ()))\n  | _ -> None\n\n(* Move Cast out of After: After(Cast(x, dt), deps) -> Cast(After(x, deps), dt). *)\nlet cast_after_after (node : K.t) : K.t option =\n  match K.view node with\n  | After { src; deps } -> (\n      match K.view src with\n      | Cast { src = inner; dtype } ->\n          Some (K.cast ~src:(K.after ~src:inner ~deps) ~dtype)\n      | _ -> None)\n  | _ -> None\n\n(* pm_render *)\n\n(* Expand vector Const into Vectorize of scalar copies. *)\nlet expand_vector_const (node : K.t) : K.t option =\n  match K.view node with\n  | Const { value; dtype } when Dtype.Val.count dtype > 1 ->\n      let c = K.const value in\n      Some (K.vectorize ~srcs:(List.init (Dtype.Val.count dtype) (fun _ -> c)))\n  | _ -> None\n\n(* Expand Vconst into Vectorize of per-lane scalar constants. *)\nlet expand_vconst (node : K.t) : K.t option =\n  match K.view node with\n  | Vconst { values; _ } ->\n      Some (K.vectorize ~srcs:(List.map K.const values))\n  | _ -> None\n\n(* Expand multi-element GEP into Vectorize of single-element GEPs. *)\nlet expand_multi_gep (node : K.t) : K.t option =\n  match K.view node with\n  | Gep { src; idxs; _ } when List.length idxs > 1 ->\n      Some (K.vectorize ~srcs:(List.map (fun x -> K.gep ~src ~idx:x) idxs))\n  | _ -> None\n\n(* Remove trivial GEP(x, 0) when x is scalar. *)\nlet trivial_gep (node : K.t) : K.t option =\n  match K.view node with\n  | Gep { src; idxs = [0]; _ } ->\n      if Dtype.vcount (K.dtype src) = 1 then Some src\n      else None\n  | _ -> None\n\n(* Remove single-element Vectorize. *)\nlet trivial_vectorize (node : K.t) : K.t option =\n  match K.view node with\n  | Vectorize { srcs = [src]; _ } -> Some src\n  | _ -> None\n\n(* Find the INDEX gate through Cast/Bitcast wrappers. 
*)\nlet rec find_gate n = match K.view n with\n  | Index { gate = Some g; _ } -> Some g\n  | Cast { src; _ } | Bitcast { src; _ } -> find_gate src\n  | _ -> None\n\n(* Give gated loads a zero alt value when they don't have one yet.\n   Tinygrad also checks for CUSTOM/STORE/BARRIER in the alt position;\n   the OCaml IR uses a typed [alt : t option] field so effect nodes\n   cannot appear there — matching [alt = None] is sufficient. *)\nlet masked_load_alt (node : K.t) : K.t option =\n  match K.view node with\n  | Load { src; alt = None; _ } when find_gate src <> None ->\n      Some (K.load ~src ~alt:(K.zero_like node) ())\n  | _ -> None\n\n(* Is [gate] the logical negation of [cond]?  i.e. gate = xor(cond, true). *)\nlet is_negated cond gate = match K.view gate with\n  | Binary { op = `Xor; lhs; rhs; _ } ->\n      (lhs == cond && K.const_arg rhs = Some (Bool true))\n      || (rhs == cond && K.const_arg lhs = Some (Bool true))\n  | _ -> false\n\n(* Fold Where(cond, Load(gated), fallback) into Load(gated, alt=fallback)\n   when the INDEX gate matches or negates the WHERE condition. *)\nlet where_after_gated_load (node : K.t) : K.t option =\n  let try_fold cond load_side alt_side ~negated =\n    let inner, wrap_dt = match K.view load_side with\n      | Cast { src; dtype } -> src, Some dtype\n      | _ -> load_side, None\n    in\n    match K.view inner with\n    | Load { src; dtype = load_dt; _ } -> (\n        match find_gate src with\n        | Some gate\n          when (if negated then is_negated cond gate else cond == gate) ->\n            (* Unwrap Cast if inner already matches load dtype, avoiding\n               a roundtrip cast (e.g. uint->float->uint). 
*)\n            let alt = match K.view alt_side with\n              | Cast { src = inner_alt; _ }\n                when K.dtype_opt inner_alt = Some (Dtype.Val load_dt) -> inner_alt\n              | _ -> K.cast ~src:alt_side ~dtype:(Dtype.Val load_dt)\n            in\n            let load = K.load ~src ~alt () in\n            let result_dt = match wrap_dt with\n              | Some dt -> dt | None -> Dtype.Val load_dt in\n            Some (K.cast ~src:load ~dtype:result_dt)\n        | _ -> None)\n    | _ -> None\n  in\n  match K.view node with\n  | Ternary { op = `Where; a = cond; b = true_side; c = false_side; _ } -> (\n      match try_fold cond true_side false_side ~negated:false with\n      | Some _ as r -> r\n      | None -> try_fold cond false_side true_side ~negated:true)\n  | _ -> None\n\n(* Reduce lowering *)\n\nlet identity_element = Const.identity_element\n\n(* Split horizontal reduction lanes when input is wider than output. *)\nlet horizontal_reduce (inp : K.t) (out_dtype : Dtype.t) : K.t list =\n  let inp_dt = K.dtype inp in\n  if Dtype.equal inp_dt out_dtype then [inp]\n  else\n    let amount = Dtype.count inp_dt / Dtype.count out_dtype in\n    List.init amount (fun i ->\n      K.gep_multi ~src:inp\n        ~idxs:(List.init (Dtype.count out_dtype) (fun j -> i + j * amount)))\n\n(* Reduce a list with a binary op: [a; b; c] -> op(op(a, b), c). *)\nlet reduce_fold op = function\n  | [] -> failwith \"reduce_fold: empty list\"\n  | first :: rest ->\n      List.fold_left\n        (fun a x -> K.binary ~op:(op :> Op.binary) ~lhs:a ~rhs:x)\n        first rest\n\ntype reduce_ctx = { mutable acc_num : int }\n\n(* Lower Reduce into an explicit register accumulator with END loop. 
*)\nlet reduce_to_acc (ctx : reduce_ctx) (node : K.t) : K.t option =\n  match K.view node with\n  | Reduce { op; src = inp; ranges = reduce_range; dtype } ->\n      let lst = horizontal_reduce inp (Dtype.Val dtype) in\n      if reduce_range = [] then Some (reduce_fold op lst)\n      else begin\n        let topo = K.toposort inp in\n        let ended = K.Ref_tbl.create 16 in\n        List.iter (fun n -> match K.view n with\n          | End { ranges; _ } ->\n              List.iter (fun r -> K.Ref_tbl.replace ended r ()) ranges\n          | _ -> ()) topo;\n        let reduce_set = K.Ref_tbl.create 8 in\n        List.iter (fun r -> K.Ref_tbl.replace reduce_set r ()) reduce_range;\n        let input_ranges = List.filter (fun n ->\n          K.is_range n\n          && not (K.Ref_tbl.mem reduce_set n)\n          && not (K.Ref_tbl.mem ended n)) topo in\n        let identity = K.broadcast\n          (K.const (identity_element op (Dtype.Val.scalarize dtype)))\n          (Dtype.Val.count dtype) in\n        let acc_pty = Dtype.Ptr.create dtype ~addrspace:Dtype.Reg ~size:1 in\n        let acc = K.define_reg ~size:1 ~dtype:acc_pty ~slot:ctx.acc_num in\n        ctx.acc_num <- ctx.acc_num + 1;\n        let zero = K.const_int 0 in\n        let idx ptr = K.index ~ptr ~idxs:[zero] ~as_ptr:false () in\n        let acc_after_input = match input_ranges with\n          | [] -> acc | deps -> K.after ~src:acc ~deps in\n        let acc_init =\n          K.store ~dst:(idx acc_after_input) ~value:identity ~ranges:[] in\n        let acc_in_loop =\n          K.after ~src:acc ~deps:(acc_init :: reduce_range) in\n        let ret = reduce_fold op (idx acc_in_loop :: lst) in\n        let store_back = K.store ~dst:(idx acc) ~value:ret ~ranges:[] in\n        let end_node =\n          K.end_ ~value:store_back ~ranges:reduce_range ~tag:\"mergeable\" () in\n        Some (idx (K.after ~src:acc ~deps:[end_node]))\n      end\n  | _ -> None\n\n(* Merge END nodes that share the same ranges (created by 
reduce_to_acc). *)\nlet merge_reduce_ends (_ctx : reduce_ctx) (node : K.t) : K.t option =\n  match K.view node with\n  | Sink _ ->\n      let by_ranges : (K.t list, K.t list) Hashtbl.t = Hashtbl.create 8 in\n      List.iter (fun n ->\n        match K.view n with\n        | End { ranges; _ } when K.tag n = Some \"mergeable\" ->\n            let prev = match Hashtbl.find_opt by_ranges ranges with\n              | Some l -> l | None -> [] in\n            Hashtbl.replace by_ranges ranges (n :: prev)\n        | _ -> ()) (K.toposort node);\n      let mappings = Hashtbl.fold (fun ranges ends acc ->\n        if List.length ends <= 1 then acc\n        else\n          let stores = List.map (fun e ->\n            match K.view e with\n            | End { value; _ } -> value | _ -> assert false) ends in\n          let merged = K.end_ ~value:(K.group stores) ~ranges () in\n          List.fold_left (fun acc old -> (old, merged) :: acc) acc ends)\n        by_ranges [] in\n      (match mappings with\n       | [] -> None\n       | _ -> Some (K.substitute mappings node))\n  | _ -> None\n\n(* Fold ADD(WMMA, x) into WMMA's accumulator: WMMA(a, b, c+x). *)\nlet wmma_accumulate (node : K.t) : K.t option =\n  match K.view node with\n  | Binary { op = `Add; lhs; rhs; _ } ->\n      let try_fold wmma other = match K.view wmma with\n        | Wmma { a; b; c; _ } ->\n            Some (K.replace wmma ~children:[a; b; K.O.( + ) c other] ())\n        | _ -> None\n      in\n      (match try_fold lhs rhs with\n       | Some _ as r -> r\n       | None -> try_fold rhs lhs)\n  | _ -> None\n\n(* Insert Load for value-typed Index; collapse Store(Load(x), v) -> Store(x, v). 
*)\nlet add_loads_rule (node : K.t) : K.t option =\n  match K.view node with\n  | Index { dtype = Dtype.Val _; ptr; idxs; gate; _ } ->\n      let ptr_pty = K.ptr_dtype ptr in\n      let ptr_idx =\n        K.index_raw ~ptr ~idxs ?gate ~dtype:(Dtype.Ptr ptr_pty) () in\n      Some (K.load ~src:ptr_idx ())\n  | Index { dtype = Dtype.Ptr _; _ } -> None\n  | Store { dst; value; ranges } -> (\n      match K.view dst with\n      | Load { src; _ } -> Some (K.store ~dst:src ~value ~ranges)\n      | _ -> None)\n  | _ -> None\n\n(* Passes *)\n\nlet pm_reduce (root : K.t) : K.t =\n  let ctx = { acc_num = 0 } in\n  K.graph_rewrite ~name:\"remove_reduce\"\n    (K.first_match [\n      reduce_to_acc ctx; wmma_accumulate;\n      merge_reduce_ends ctx;\n      Symbolic.gep_pushing;\n    ]) root\n\nlet pm_add_loads (root : K.t) : K.t =\n  K.graph_rewrite ~name:\"** add loads (code)\" add_loads_rule root\n\nlet pm_devectorize (ren : Renderer.t) (root : K.t) : K.t =\n  K.graph_rewrite ~name:\"devectorize\"\n    (K.first_match [\n      Symbolic.sym;\n      cast_after_after; no_vectorized_alu; no_vectorized_wmma;\n      no_vectorized_buf; no_vectorized_index;\n      expand_index; fold_expanded_index;\n      gep_after_load; gep_on_store;\n      ptrcat_after_load; ptrcat_after_store;\n      split_load_store ren;\n      load_store_indexing;\n    ]) root\n\nlet pm_render_rule =\n  K.first_match [\n    expand_vector_const; expand_vconst; expand_multi_gep; trivial_gep;\n    trivial_vectorize; masked_load_alt; where_after_gated_load;\n  ]\n\nlet pm_render (root : K.t) : K.t =\n  K.graph_rewrite pm_render_rule root\n"
  },
  {
    "path": "packages/tolk/lib/codegen/late/devectorizer.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\n(** Late reduction lowering and devectorization.\n\n    Transforms Kernel IR from abstract buffer references into concrete\n    {!Tolk_ir.Kernel.view.Load}/{!Tolk_ir.Kernel.view.Store} operations,\n    scalarises wide vector operations, and folds load/store grouping before\n    linearisation.\n\n    Image-related passes are omitted; Tolk handles images separately via\n    {!Images}.\n\n    The passes run in this order (composed by {!Lowering.lower}):\n    + {!pm_reduce} — lower reductions to accumulator loops.\n    + {!pm_add_loads} — insert explicit loads.\n    + {!pm_devectorize} — scalarise, fold, correct, and simplify.\n    + {!pm_render} — prepare for rendering.\n\n    See also {!Expander}, {!Linearizer}. *)\n\n(** {1:passes Passes} *)\n\nval pm_reduce : Tolk_ir.Kernel.t -> Tolk_ir.Kernel.t\n(** [pm_reduce root] lowers {!Tolk_ir.Kernel.view.Reduce} nodes to\n    explicit {!Tolk_ir.Kernel.view.Define_reg} accumulator loops with\n    {!Tolk_ir.Kernel.view.End}. Parallel reductions that share the same\n    range are merged into a single {!Tolk_ir.Kernel.view.End} via\n    {!Tolk_ir.Kernel.view.Group}. Also folds\n    [{!Tolk_ir.Kernel.view.Wmma} + add] into the WMMA accumulator.\n    Includes GEP pushing ({!Symbolic.gep_pushing}) in the same fixpoint,\n    matching tinygrad's [pm_reduce+gep_pushing] composition. *)\n\nval pm_add_loads : Tolk_ir.Kernel.t -> Tolk_ir.Kernel.t\n(** [pm_add_loads root] inserts explicit {!Tolk_ir.Kernel.view.Load}\n    for value-typed {!Tolk_ir.Kernel.view.Index} nodes, and collapses\n    [Store(Load(x), v)] to [Store(x, v)]. 
*)\n\nval pm_devectorize : Renderer.t -> Tolk_ir.Kernel.t -> Tolk_ir.Kernel.t\n(** [pm_devectorize renderer root] runs a single fixpoint that:\n\n    - Scalarises vectorised ALU, Cast, Bitcast, and WMMA.\n    - Scalarises {!Tolk_ir.Kernel.view.Define_local}/\n      {!Tolk_ir.Kernel.view.Define_reg} with vector base types.\n    - Scalarises vector {!Tolk_ir.Kernel.view.Index} on local/reg\n      memory (plain, broadcast, and GEP patterns).\n    - Reorders {!Tolk_ir.Kernel.view.Cast} through\n      {!Tolk_ir.Kernel.view.After}.\n    - Expands and folds vectorised INDEX for load/store grouping\n      (consecutive offsets share a single wide pointer).\n    - Pushes {!Tolk_ir.Kernel.view.Gep} through Load/Store.\n    - Spreads {!Tolk_ir.Kernel.view.Ptrcat} across Load/Store.\n    - Splits oversized Load/Store for [renderer]\n      (as reported by {!Renderer.supports_float4}).\n    - Drops trivially-true gates from\n      {!Tolk_ir.Kernel.view.Index}.\n    - Applies symbolic simplification. *)\n\nval load_store_indexing : Tolk_ir.Kernel.t -> Tolk_ir.Kernel.t option\n(** [load_store_indexing node] simplifies INDEX validity gates:\n    drops always-true gates, simplifies gated indexes via\n    {!Symbolic.uop_given_valid}, and for image buffers drops redundant\n    validity clauses proved by image dimension bounds. *)\n\nval no_vectorized_alu : Tolk_ir.Kernel.t -> Tolk_ir.Kernel.t option\n(** [no_vectorized_alu node] scalarizes a vectorized ALU, Cast, or Bitcast\n    by extracting each lane via GEP, applying the scalar operation, and\n    re-vectorizing.  Returns [None] for scalar nodes or image-index WHERE\n    patterns.  Used by renderer [extra_pm] to devectorize bool-typed ops\n    and WHERE in the final rewrite fixpoint. 
*)\n\n(** {1:render Render preparation} *)\n\nval pm_render_rule : Tolk_ir.Kernel.t -> Tolk_ir.Kernel.t option\n(** [pm_render_rule node] is the individual render-preparation rewrite\n    rule, suitable for composition with decomposition and renderer rules\n    in a final fixpoint.\n\n    Expands vector {!Tolk_ir.Kernel.view.Const},\n    {!Tolk_ir.Kernel.view.Vconst}, and multi-element\n    {!Tolk_ir.Kernel.view.Gep} to {!Tolk_ir.Kernel.view.Vectorize};\n    removes trivial GEP and single-element Vectorize; gives gated loads\n    a zero alt value; folds [Where(cond, gated_load, fallback)] into the\n    load's alt. *)\n\nval pm_render : Tolk_ir.Kernel.t -> Tolk_ir.Kernel.t\n(** [pm_render root] runs {!pm_render_rule} to fixpoint. *)\n"
  },
  {
    "path": "packages/tolk/lib/codegen/late/expander.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\nopen Tolk_ir\nmodule K = Kernel\n\n(* Helpers *)\n\nlet prod lst = List.fold_left ( * ) 1 lst\n\n(* Flatten a multi-axis position into a linear index.\n   args = [(axis, size); ...], rpk = [(axis, value); ...].\n   Rightmost axis varies fastest. *)\nlet expand_arg_to_idx args rpk =\n  List.fold_right\n    (fun (axis, m) (idx, mul) ->\n      let v = List.assoc axis rpk in\n      (v * mul + idx, mul * m))\n    args (0, 1)\n  |> fst\n\n(* Cartesian product of all axis values.\n   [(0,2); (1,3)] -> [{0:0,1:0}; {0:0,1:1}; ...; {0:1,1:2}] as assoc lists. *)\nlet choices_from_args args =\n  List.fold_left\n    (fun acc (axis, m) ->\n      List.concat_map (fun rest ->\n        List.init m (fun v -> (axis, v) :: rest)) acc)\n    [[]] args\n\n(* For each choice in cargs, compute the flat index into eargs's space.\n   Excluded axes are zeroed.\n   XXX tinygrad memoizes this with @functools.cache. *)\nlet swizzle_args cargs eargs exclude_args =\n  List.map\n    (fun rpk ->\n      let rpk =\n        if exclude_args = [] then rpk\n        else List.map (fun x -> (x, 0)) exclude_args @ rpk\n      in\n      expand_arg_to_idx eargs rpk)\n    (choices_from_args cargs)\n\nlet all_same = function\n  | [] -> true\n  | x :: xs -> List.for_all (( = ) x) xs\n\nlet is_unroll n = match K.view n with Unroll _ -> true | _ -> false\n\nlet unroll_axes nodes =\n  List.concat_map (fun n ->\n    match K.view n with Unroll { axes; _ } -> axes | _ -> []) nodes\n\n(* Expand an op's Unroll children into a single wider vector operation. 
*)\nlet do_expand root =\n  let expandable = match K.view root with\n    | Unary _ | Binary _ | Ternary _ | Cast _ | Bitcast _ | Gep _\n    | Wmma _ | Load _ | Store _ | Index _ | Bufferize _\n    | Vectorize _ | Reduce _ | End _ | After _ -> true\n    | _ -> false in\n  if not expandable then None\n  else\n  let children = K.children root in\n  let expands = List.filter is_unroll children in\n  if expands = [] then None\n  else\n    let root_view = K.view root in\n    let root_dt = match K.dtype_opt root with Some dt -> dt | None -> Dtype.void in\n    let dcount n = match K.sort n with\n      | Effect -> 1 | _ -> Dtype.count (K.dtype n) in\n    let exclude_args = match root_view with\n      | Wmma { reduce_axes; upcast_axes = ua, ub, uc; _ } ->\n          List.sort_uniq compare (reduce_axes @ List.map fst (ua @ ub @ uc))\n      | _ -> [] in\n    let expands_args = List.map (fun e ->\n      match K.view e with Unroll { axes; _ } -> axes | _ -> []) expands in\n    let expand_args =\n      if all_same expands_args && exclude_args = [] then List.hd expands_args\n      else\n        List.filter (fun (a, _) -> not (List.mem a exclude_args))\n          (List.sort_uniq compare (List.concat expands_args)) in\n    let expand_sz = prod (List.map snd expand_args) in\n    let is_non_ptr_index = match root_view with\n      | Index { dtype = Dtype.Val _; _ } -> true | _ -> false in\n    (* Build new sources *)\n    let new_srcs = List.mapi (fun i src ->\n      match K.view src with\n      | Unroll { src = inner; axes = src_axes; _ } ->\n          if expand_args = src_axes then inner\n          else\n            let lst = swizzle_args expand_args src_axes exclude_args in\n            let sc = dcount src in\n            let lst = if sc > 1 then\n              List.concat_map (fun idx ->\n                List.init sc (fun j -> idx * sc + j)) lst\n              else lst in\n            K.gep_multi ~src:inner ~idxs:lst\n      | _ ->\n          let is_passthrough =\n            (match 
K.range_start root with\n             | Some rs -> i >= rs | None -> false)\n            || (is_non_ptr_index && i >= 1) in\n          if is_passthrough then src\n          else if dcount src > 1 then\n            K.vcat ~srcs:(List.init expand_sz (fun _ -> src))\n          else K.broadcast src expand_sz)\n      children in\n    (* Build result node — one match for all cases *)\n    let buf_reg = K.is_ptr (List.hd children) &&\n      (let pty = K.ptr_dtype (List.hd children) in\n       Dtype.Ptr.addrspace pty = Dtype.Reg) in\n    let nsrc = match root_view with\n      | Index { dtype = Dtype.Val _; _ } when buf_reg ->\n          (* REG buffer: expand into individual scalar INDEXes *)\n          K.vectorize ~srcs:(List.init expand_sz (fun j ->\n            K.replace root ~children:(List.map (fun s ->\n              if K.is_ptr s || dcount s > 1\n              then K.gep ~src:s ~idx:j else s) new_srcs) ()))\n      | Gep { idxs = [gep_idx]; _ } ->\n          assert (Dtype.count root_dt = 1);\n          let src0 = List.hd new_srcs in\n          let stride = dcount src0 / expand_sz in\n          K.gep_multi ~src:src0\n            ~idxs:(List.init expand_sz (fun k -> gep_idx + k * stride))\n      | Index { dtype = Dtype.Ptr pty; idxs; gate; _ } ->\n          let rest = List.tl new_srcs in\n          let n = List.length idxs in\n          K.index_raw ~ptr:(List.hd new_srcs)\n            ~idxs:(List.filteri (fun i _ -> i < n) rest)\n            ?gate:(Option.map (fun _ -> List.nth rest n) gate)\n            ~dtype:(Dtype.vec expand_sz (Dtype.Ptr pty)) ()\n      | _ when root_dt = Dtype.void ->\n          K.replace root ~children:new_srcs ()\n      | _ ->\n          K.replace root ~children:new_srcs\n            ~dtype:(Dtype.vec (Dtype.count root_dt * expand_sz)\n                      (Dtype.scalarize root_dt)) ()\n    in\n    Some (K.unroll ~src:nsrc ~axes:expand_args ~dtype:(Dtype.val_of root_dt))\n\n(* Contract an Unroll back to scalar form via GEP index permutations. 
*)\nlet do_contract con =\n  match K.view con with\n  | Contract { src = ex; axes = con_axes; dtype = con_dt } ->\n      (match K.view ex with\n       | Unroll { src = inner; axes = ex_axes; _ } ->\n           assert (Dtype.Val.equal con_dt Dtype.Val.void\n                   || Dtype.Val.count con_dt = prod (List.map snd con_axes));\n           let new_ex_args =\n             List.filter (fun x -> not (List.mem x con_axes)) ex_axes in\n           let idxs =\n             List.concat_map (fun rpk ->\n               List.map\n                 (fun lrpk -> expand_arg_to_idx ex_axes (rpk @ lrpk))\n                 (choices_from_args con_axes))\n               (choices_from_args new_ex_args) in\n           Some (K.unroll ~src:(K.gep_multi ~src:inner ~idxs)\n                   ~axes:new_ex_args ~dtype:con_dt)\n       | _ ->\n           if Dtype.Val.equal con_dt Dtype.Val.void then Some ex\n           else Some (K.vectorize\n                   ~srcs:(List.init (Dtype.Val.count con_dt) (fun _ -> ex))))\n  | _ -> None\n\n(* Wrap END's value in CONTRACT for any Unroll ranges. *)\nlet end_unrolls node =\n  match K.view node with\n  | End { value; ranges } ->\n      let unrolls, rest = List.partition is_unroll ranges in\n      if unrolls = [] then None\n      else\n        let axes = unroll_axes unrolls in\n        Some (K.end_ ~value:(K.contract ~src:value ~axes ~dtype:Dtype.Val.void)\n                ~ranges:rest ())\n  | _ -> None\n\n(* Detect Vectorize that is a broadcast (all srcs physically identical). *)\nlet peel_broadcast n =\n  match K.view n with\n  | Vectorize { srcs = (x :: _) as srcs; _ }\n    when List.for_all (fun y -> x == y) srcs -> Some (x, List.length srcs)\n  | _ -> None\n\n(* Push broadcast through After: After(Broadcast(x, n), deps) -> Broadcast(After(x, deps), n). 
*)\nlet broadcast_after node =\n  match K.view node with\n  | After { src; deps } ->\n      Option.map (fun (x, n) ->\n        K.broadcast (K.after ~src:x ~deps) n) (peel_broadcast src)\n  | _ -> None\n\n(* Push broadcast through End: End(Broadcast(x, n), ranges) -> Broadcast(End(x, ranges), n). *)\nlet broadcast_end node =\n  match K.view node with\n  | End { value; ranges } ->\n      Option.map (fun (x, n) ->\n        K.broadcast (K.end_ ~value:x ~ranges ()) n) (peel_broadcast value)\n  | _ -> None\n\n(* Bufferize(Unroll, Unroll) contracts the range Unroll. *)\nlet bufferize_contract node =\n  match K.view node with\n  | Bufferize { src = val_src; ranges = [range_src]; _ } -> (\n      match K.view val_src, K.view range_src with\n      | Unroll _, Unroll { src = inner; axes; _ } ->\n          let cnt = Dtype.count (K.dtype inner) in\n          Some (K.replace node ~children:(List.map (fun s ->\n            match K.view s with\n            | Unroll { dtype = s_dt; _ } ->\n                K.contract ~src:s ~axes\n                  ~dtype:(Dtype.Val.vec cnt (Dtype.Val.scalarize s_dt))\n            | _ -> s) (K.children node)) ())\n      | _ -> None)\n  | _ -> None\n\n(* Flatten nested Unroll(Unroll(x, inner_axes), outer_axes) -> Unroll(x, inner+outer). *)\nlet double_unroll node =\n  match K.view node with\n  | Unroll { src = inner; axes = outer_axes; dtype = outer_dt } -> (\n      match K.view inner with\n      | Unroll { src = deepest; axes = inner_axes; _ } ->\n          Some (K.unroll ~src:deepest ~axes:(inner_axes @ outer_axes)\n                  ~dtype:outer_dt)\n      | _ -> None)\n  | _ -> None\n\n(* Empty Unroll is a no-op. *)\nlet empty_unroll node =\n  match K.view node with\n  | Unroll { src; axes = []; _ } -> Some src\n  | _ -> None\n\n(* Core expander: broadcast pushing, unroll/contract engine, and\n   expansion of ALU/Cast/Index/etc with Unroll children. 
*)\nlet expander_rule =\n  K.first_match [\n    broadcast_after; broadcast_end;\n    end_unrolls;\n    bufferize_contract;\n    double_unroll;\n    do_expand;\n    do_contract;\n    empty_unroll;\n  ]\n\n(* pre-expander *)\n\n(* Rewrite UPCAST/UNROLL Range to Unroll(Vconst(0..s-1)). *)\nlet range_to_unroll node =\n  match K.view node with\n  | Range { size; dtype; axis; kind = (Axis_kind.Upcast | Axis_kind.Unroll); _ } -> (\n      match K.const_arg size with\n      | Some (Int n) ->\n          let s = Int64.to_int n in\n          let scalar = Dtype.Val.scalarize dtype in\n          let vec = K.vconst\n            ~values:(List.init s (fun i -> Const.int scalar i))\n            ~dtype:(Dtype.Val.vec s scalar) in\n          Some (K.unroll ~src:vec ~axes:[(axis, s)] ~dtype)\n      | _ -> None)\n  | _ -> None\n\n(* Partition Reduce ranges into RANGE and UNROLL, contract the UNROLLs. *)\nlet fix_reduce_unroll node =\n  match K.view node with\n  | Reduce { op; src; ranges; dtype } ->\n      let reduce_range, reduce_expand = List.partition K.is_range ranges in\n      if reduce_expand = [] then None\n      else\n        let reduce_expand =\n          List.filter (fun x -> K.const_arg x = None) reduce_expand in\n        let axes = unroll_axes reduce_expand in\n        let ret =\n          if axes <> [] then\n            K.with_tag \"1\" (K.contract ~src ~axes\n              ~dtype:(Dtype.Val.vec (prod (List.map snd axes))\n                        (Dtype.Val.scalarize dtype)))\n          else src in\n        Some (K.reduce ~op ~src:ret ~ranges:reduce_range ~dtype)\n  | _ -> None\n\n(* Partition Store ranges into UNROLL and rest, contract the UNROLLs. 
*)\nlet fix_store_unroll node =\n  match K.view node with\n  | Store { dst; value; ranges } ->\n      let store_expand, store_range = List.partition is_unroll ranges in\n      if store_expand = [] then None\n      else\n        let axes = unroll_axes store_expand in\n        let inner = K.store ~dst ~value ~ranges:store_range in\n        Some (K.with_tag \"1\"\n                (K.contract ~src:inner ~axes ~dtype:Dtype.Val.void))\n  | _ -> None\n\n(* Convert Reduce with Group_reduce ranges into a local-buffer reduction. *)\nlet fix_group_for_reduce node =\n  match K.view node with\n  | Reduce { op; src; ranges; dtype } ->\n      let reduce_gfr, reduce_r =\n        List.partition (fun u ->\n          match K.view u with\n          | Range { kind = Axis_kind.Group_reduce; _ } -> true\n          | _ -> false)\n          ranges in\n      if reduce_gfr = [] then None\n      else\n        let upstream_locals =\n          List.filter\n            (fun u -> K.is_range u && K.range_kind u = Axis_kind.Local)\n            (K.toposort node) in\n        let ret = K.reduce ~op ~src ~ranges:reduce_r ~dtype in\n        let reduce_loop =\n          List.map (fun r ->\n            K.range ~size:(K.range_size r) ~axis:(K.range_axis r + 100)\n              ~kind:Axis_kind.Reduce ~dtype:(Dtype.val_of (K.dtype r)) ())\n            reduce_gfr in\n        let gfr_axis = K.range_axis (List.hd reduce_gfr) in\n        let buf_dt = K.dtype ret in\n        let buf =\n          K.bufferize ~src:ret ~ranges:(upstream_locals @ reduce_gfr)\n            ~dtype:(Dtype.Ptr.create (Dtype.val_of buf_dt) ~addrspace:Dtype.Local ~size:(-1))\n            ~opts:{ device = Some (Device_index gfr_axis);\n                    addrspace = Dtype.Local; removable = true } in\n        let idx =\n          K.index ~ptr:buf ~idxs:(upstream_locals @ reduce_loop) () in\n        Some (K.reduce ~op ~src:idx ~ranges:reduce_loop ~dtype)\n  | _ -> None\n\nlet pre_expander_rule =\n  K.first_match [range_to_unroll; 
fix_reduce_unroll; fix_store_unroll]\n\n(* Run all expander passes to fixpoint: symbolic simplification,\n   range-to-unroll conversion, group-for-reduce lowering, and the\n   main expand/contract engine. *)\nlet expand root =\n  K.graph_rewrite ~name:\"expander\"\n    (K.first_match [\n      Symbolic.sym;\n      pre_expander_rule;\n      fix_group_for_reduce;\n      expander_rule;\n    ]) root\n"
  },
  {
    "path": "packages/tolk/lib/codegen/late/expander.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\n(** Late expansion of structured vector markers.\n\n    This pass owns late vector-shape realisation between post-range\n    optimisation and devectorisation.  It converts scheduler-introduced\n    {!Tolk_ir.Kernel.view.Unroll}/{!Tolk_ir.Kernel.view.Contract}\n    structure into ordinary late-kernel vector operations, rewrites\n    grouped reductions through {!Tolk_ir.Kernel.view.Bufferize}, and\n    consumes [Upcast]/[Unroll] ranges on {!Tolk_ir.Kernel.view.Reduce},\n    {!Tolk_ir.Kernel.view.Store}, and {!Tolk_ir.Kernel.view.End} before\n    later lowering.\n\n    The result is still late Kernel IR, not render-ready IR.  Later\n    stages may still see transient vectorised reduce sources or\n    vectorised pointer indexing forms that are consumed by\n    {!Devectorizer}.\n\n    See also {!Devectorizer}, {!Linearizer}. *)\n\nval expand : Tolk_ir.Kernel.t -> Tolk_ir.Kernel.t\n(** [expand root] runs symbolic simplification, range-to-unroll\n    conversion, group-for-reduce lowering, and the main\n    expand/contract engine as a single fixpoint.\n\n    Replaces structured expansion markers\n    ({!Tolk_ir.Kernel.view.Unroll}, {!Tolk_ir.Kernel.view.Contract})\n    with ordinary late-kernel vector structure while keeping range\n    fields scalar-only. *)\n"
  },
  {
    "path": "packages/tolk/lib/codegen/late/images.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Lower Param_image nodes into OpenCL image intrinsics (read_imagef /\n   write_imagef).  Runs on Kernel IR before linearization.\n\n   This replaces tinygrad's ImageDType handling which threads image types\n   through the entire pipeline.  Tolk keeps images as Param_image nodes\n   and lowers them here in a single late pass. *)\n\nopen Tolk_ir\nmodule K = Kernel\n\nlet strf = Printf.sprintf\n\nlet float4 = Dtype.Val.vec 4 Dtype.Val.float32\nlet int2 = Dtype.Val.vec 2 Dtype.Val.int32\n\nlet is_image node = match K.view node with\n  | Param_image _ -> true | _ -> false\n\n(* Decompose an image Index into (image_param, coord_node, gate).\n   The coord_node is a custom_inline \"(int2)({0}, {1})\" built from the\n   two index operands.  Validates dtype and index count. *)\nlet decompose_image_index node = match K.view node with\n  | Index { ptr; idxs; gate; _ } when is_image ptr ->\n      (match K.dtype_opt ptr with\n       | Some dt ->\n           (match Dtype.scalar dt with\n            | Float16 | Float32 -> ()\n            | s -> failwith (strf\n                \"images: unsupported base dtype %s\" (Dtype.scalar_to_string s)));\n           if List.length idxs <> 2 then\n             failwith \"images: image access requires exactly two coordinates\"\n       | None -> ());\n      let coords = K.custom_inline ~fmt:\"(int2)({0}, {1})\" ~args:idxs ~dtype:int2 in\n      Some (ptr, coords, gate)\n  | _ -> None\n\n(* Load/Store on image params become read_imagef/write_imagef intrinsics.\n   Index nodes that feed non-image consumers are left unchanged. 
*)\nlet rewrite_rule (node : K.t) : K.t option =\n  match K.view node with\n  | Load { src; alt; dtype } -> (\n      match decompose_image_index src with\n      | None -> None\n      | Some (ptr, coords, gate) ->\n          if not (Dtype.Val.equal dtype float4) then\n            failwith \"images: image loads must produce float4\";\n          Some (match gate, alt with\n            | None, None ->\n                K.custom_inline ~fmt:\"read_imagef({0}, smp, {1})\"\n                  ~args:[ptr; coords] ~dtype\n            | Some g, Some a ->\n                K.custom_inline\n                  ~fmt:\"({2}?read_imagef({0}, smp, {1}):{3})\"\n                  ~args:[ptr; coords; g; a] ~dtype\n            | Some _, None ->\n                failwith \"images: gated image load requires alt value\"\n            | None, Some _ ->\n                failwith \"images: image load alt requires gated index\"))\n  | Store { dst; value; _ } -> (\n      match decompose_image_index dst with\n      | None -> None\n      | Some (ptr, coords, gate) ->\n          if not (Dtype.equal (K.dtype value) (Dtype.Val float4)) then\n            failwith \"images: image stores must write float4\";\n          Some (match gate with\n            | None ->\n                K.custom ~fmt:\"write_imagef({0}, {1}, {2});\"\n                  ~args:[ptr; coords; value]\n            | Some g ->\n                K.custom ~fmt:\"if ({3}) write_imagef({0}, {1}, {2});\"\n                  ~args:[ptr; coords; value; g]))\n  | _ -> None\n\nlet supports_images renderer =\n  match Renderer.device renderer with \"CL\" | \"QCOM\" -> true | _ -> false\n\nlet rewrite renderer root =\n  let has_images = List.exists (fun n -> match K.view n with\n    | Param_image _ -> true | _ -> false) (K.toposort root) in\n  if has_images && not (supports_images renderer) then\n    failwith (strf \"images: renderer %s does not support images\"\n      (Renderer.name renderer));\n  K.graph_rewrite rewrite_rule root\n"
  },
  {
    "path": "packages/tolk/lib/codegen/late/images.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Late lowering for OpenCL image operations.\n\n    See also {!Devectorizer} and {!Linearizer}. *)\n\nval rewrite : Renderer.t -> Tolk_ir.Kernel.t -> Tolk_ir.Kernel.t\n(** [rewrite renderer root] lowers {!Tolk_ir.Kernel.view.Param_image}-based\n    memory operations into explicit OpenCL image builtins for [renderer].\n\n    Raises [Failure] if [root] uses images and [renderer] does not\n    support them. *)\n"
  },
  {
    "path": "packages/tolk/lib/codegen/late/linearizer.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\nopen Tolk_ir\nmodule K = Kernel\n\n(* Priority-based topological sort.\n\n   Assigns each node a priority (run_count, op_priority, extra) and produces\n   a linear order that respects dependencies while keeping nodes with similar\n   priorities adjacent.  Lower priority numbers appear earlier in output. *)\n\n(* Run count for a range node: the product of its extent. *)\nlet range_extent node = match K.view node with\n  | Range { size; _ } -> (\n      match K.const_arg size with Some (Int n) -> Int64.to_int n | _ -> 1)\n  | _ -> 1\n\n(* Priority triple: (run_count, op_priority, extra).\n   Constructor order matters: OCaml's generic [compare] on this type gives\n   the same lexicographic ordering as tinygrad's Python tuple comparison. *)\ntype extra = No_extra | Idx of int | Name of string\n\nlet priority_of live node =\n  let run_count = List.fold_left (fun acc r -> acc * range_extent r) 1\n    (match K.Ref_tbl.find_opt live node with Some rs -> rs | None -> []) in\n  let op_pri, extra = match K.view node with\n    | Param { idx; _ }       -> -20, Idx idx\n    | Define_var { name; _ } -> -19, Name name\n    | Define_local _         -> -18, No_extra\n    | Define_reg _           -> -17, No_extra\n    | Load _                 ->  -1, No_extra\n    | Store _                ->   1, No_extra\n    | Range _                ->   5, No_extra\n    | End _                  ->  -5, No_extra\n    | _                      ->   0, No_extra\n  in\n  (run_count, op_pri, extra)\n\n(* Heap for priority extraction during toposort.\n   Keys are unique (assigned by List.iteri), so int comparison suffices. 
*)\nmodule Heap = Set.Make (struct\n  type t = int * K.t\n  let compare (a, _) (b, _) = compare a b\nend)\n\nlet linearize_order topo =\n  let n = List.length topo in\n  let sink = match List.rev topo with\n    | x :: _ -> x | [] -> failwith \"Linearizer: empty topo\" in\n  let live = K.live_ranges_tbl sink in\n  (* Assign priorities and compute ideal ordering *)\n  let priorities = K.Ref_tbl.create n in\n  List.iter (fun u ->\n    K.Ref_tbl.replace priorities u (priority_of live u)) topo;\n  let nkey = K.Ref_tbl.create n in\n  List.iteri (fun i u -> K.Ref_tbl.replace nkey u i)\n    (List.stable_sort (fun a b ->\n       compare (K.Ref_tbl.find priorities a) (K.Ref_tbl.find priorities b))\n       topo);\n  (* Compute out-degrees *)\n  let out_degree = K.Ref_tbl.create n in\n  List.iter (fun u ->\n    List.iter (fun s ->\n      let d = match K.Ref_tbl.find_opt out_degree s with\n        | Some d -> d | None -> 0 in\n      K.Ref_tbl.replace out_degree s (d + 1))\n      (K.children u)) topo;\n  (* Heap-based toposort: work backwards from sink, release nodes when\n     all consumers are placed, prefer nodes closest to ideal position. 
*)\n  let get_nkey u = match K.Ref_tbl.find_opt nkey u with\n    | Some k -> k | None -> 0 in\n  let heap = ref (Heap.singleton (- get_nkey sink, sink)) in\n  let result = ref [] in\n  while not (Heap.is_empty !heap) do\n    let ((_, u) as elt) = Heap.min_elt !heap in\n    heap := Heap.remove elt !heap;\n    result := u :: !result;\n    List.iter (fun v ->\n      let d = (match K.Ref_tbl.find_opt out_degree v with\n        | Some d -> d | None -> 0) - 1 in\n      K.Ref_tbl.replace out_degree v d;\n      if d = 0 then heap := Heap.add (- get_nkey v, v) !heap)\n      (K.children u)\n  done;\n  !result\n\n(* Control-flow context: ordering edges between sibling loops.\n\n   For each pair of sibling END nodes (loops nested under the same parent),\n   adds an edge from the later loop's RANGE to the earlier loop's END,\n   ensuring sequential emission of loops that must not interleave. *)\n\nlet end_range node = match K.view node with\n  | End { ranges = [r]; _ } when K.is_range r -> Some r\n  | _ -> None\n\ntype cfg_context = { edges : K.t K.Ref_tbl.t }\n\nlet build_cfg_context topo =\n  let n = List.length topo in\n  (* Phase 1: compute transitive deps and find nesting relationships. 
*)\n  let deps = K.Ref_tbl.create n in\n  let nesting = K.Ref_tbl.create 32 in\n  List.iter (fun node ->\n    let cdeps = K.Ref_tbl.create 16 in\n    List.iter (fun child ->\n      match K.Ref_tbl.find_opt deps child with\n      | Some s -> K.Ref_tbl.iter (fun k () -> K.Ref_tbl.replace cdeps k ()) s\n      | None -> ())\n      (K.children node);\n    (match K.view node with\n     | End _ | Sink _ ->\n         K.Ref_tbl.iter (fun x () ->\n           match K.view x with\n           | End _ when not (K.Ref_tbl.mem nesting x) ->\n               let is_nested = match K.view node with\n                 | Sink _ -> true\n                 | _ -> (match end_range node, K.Ref_tbl.find_opt deps x with\n                   | Some rr, Some xd -> K.Ref_tbl.mem xd rr\n                   | _ -> false) in\n               if is_nested then K.Ref_tbl.replace nesting x node\n           | _ -> ()) cdeps\n     | _ -> ());\n    (match K.view node with\n     | Range _ | End _ -> K.Ref_tbl.replace cdeps node ()\n     | _ -> ());\n    K.Ref_tbl.replace deps node cdeps) topo;\n  (* Phase 2: group siblings and build ordering edges. 
*)\n  let siblings = K.Ref_tbl.create 32 in\n  K.Ref_tbl.iter (fun child parent ->\n    let cur = match K.Ref_tbl.find_opt siblings parent with\n      | Some l -> l | None -> [] in\n    K.Ref_tbl.replace siblings parent (child :: cur)) nesting;\n  let edges = K.Ref_tbl.create 16 in\n  K.Ref_tbl.iter (fun parent ends ->\n    let dep_count node =\n      match K.Ref_tbl.find_opt deps node with\n      | Some nd ->\n          List.fold_left (fun acc u ->\n            if K.Ref_tbl.mem nd u then acc + 1 else acc) 0 ends\n      | None -> 0 in\n    let order = List.sort (fun a b -> compare (dep_count a) (dep_count b)) ends in\n    let add_edge rn pred =\n      assert (not (K.in_backward_slice rn pred));\n      K.Ref_tbl.replace edges rn pred in\n    let rec chain prev = function\n      | y :: ys ->\n          (match end_range y with\n           | Some rr -> add_edge rr prev; chain y ys\n           | None -> chain prev ys)\n      | [] -> () in\n    (match K.view parent with\n     | Sink _ -> (match order with x :: rest -> chain x rest | [] -> ())\n     | _ -> (match end_range parent with\n       | Some rr -> chain rr order | None -> ())))\n    siblings;\n  { edges }\n\n(* Split multi-range END into nested single-range ENDs.\n   Extracts actual RANGE nodes from the ranges' dependency graph, sorts\n   by axis (descending), and nests innermost-first. *)\nlet do_split_ends node = match K.view node with\n  | End { value; ranges } ->\n      let result =\n        K.toposort (K.sink ranges)\n        |> List.filter K.is_range\n        |> List.sort (fun a b -> compare (K.range_axis b) (K.range_axis a))\n        |> List.fold_left (fun v r -> K.end_ ~value:v ~ranges:[r] ()) value in\n      if result == node then None else Some result\n  | _ -> None\n\nlet pm_split_ends root = K.graph_rewrite do_split_ends root\n\n(* Kernel -> Program emission *)\n\nmodule P = Program\n\n(* Resolve the dtype of an After/Group/End chain (transparent wrappers). 
*)\nlet rec after_dtype node = match K.view node with\n  | Barrier | Store _ -> Some Dtype.void\n  | End { value; _ } | After { src = value; _ } -> after_dtype value\n  | Group { srcs = src :: _ } -> after_dtype src\n  | Group { srcs = [] } -> None\n  | _ -> K.dtype_opt node\n\n(* Walk through Cast/Bitcast/After to find a gated Index. *)\nlet rec find_gate node = match K.view node with\n  | Index { gate = Some g; _ } -> Some g\n  | After { src; _ } | Cast { src; _ } | Bitcast { src; _ } -> find_gate src\n  | _ -> None\n\ntype emitter = {\n  builder : P.builder;\n  k2p : P.id K.Ref_tbl.t;           (* kernel node -> program id *)\n  mutable open_ranges : K.t list;    (* ranges opened but not yet closed *)\n}\n\nlet lookup em node =\n  match K.Ref_tbl.find_opt em.k2p node with\n  | Some id -> id\n  | None -> failwith \"Linearizer: missing kernel ref mapping\"\n\nlet emit_instr em node instr =\n  let id = P.emit em.builder instr in\n  K.Ref_tbl.replace em.k2p node id\n\nlet maps em = List.map (lookup em)\n\n(* Alias: transparent node maps to another node's program id. *)\nlet alias em node target =\n  K.Ref_tbl.replace em.k2p node (lookup em target)\n\n(* Open a range: emit if not yet emitted, track as open. *)\nlet ensure_range em node =\n  match K.Ref_tbl.find_opt em.k2p node with\n  | Some id -> id\n  | None -> match K.view node with\n    | Range { size; dtype; axis; sub; kind } ->\n        em.open_ranges <- node :: em.open_ranges;\n        emit_instr em node\n          (Range { size = lookup em size; dtype; axis; sub; kind });\n        lookup em node\n    | _ -> failwith \"Linearizer: expected Range node\"\n\n(* Resolve a child: ranges are opened lazily, everything else is looked up. *)\nlet resolve em node = match K.view node with\n  | Range _ -> ensure_range em node\n  | _ -> lookup em node\n\nlet emit em node =\n  let m = lookup em and ms = maps em in\n  match K.view node with\n  (* Transparent: alias to source, produce no instruction. 
*)\n  | Sink _ -> ()\n  | Group { srcs = src :: _ } -> alias em node src\n  | Group { srcs = [] } -> failwith \"Linearizer: empty Group\"\n  | After { src; _ } when K.is_ptr src -> alias em node src\n  | After { src; deps } ->\n      let dtype = match after_dtype src with\n        | Some dt -> Dtype.val_of dt | None -> failwith \"Linearizer: After src has no dtype\" in\n      emit_instr em node (After { src = m src; deps = ms deps; dtype })\n\n  (* Range lifecycle *)\n  | Range _ -> ignore (ensure_range em node)\n  | End { value; ranges = [] } -> alias em node value\n  | End { value; ranges = [range] } ->\n      let dep = resolve em value in\n      let range_id = ensure_range em range in\n      ignore (P.emit em.builder (End_range { dep; range = range_id }));\n      em.open_ranges <-\n        List.filter (fun r -> not (r == range)) em.open_ranges;\n      K.Ref_tbl.replace em.k2p node dep\n  | End _ -> failwith \"Linearizer: End must have 0 or 1 range after split\"\n\n  (* Gated store: wrap in If/Endif *)\n  | Store { dst; value; _ } -> (\n      match find_gate dst with\n      | Some gate ->\n          let gate_id = m gate and dst_id = m dst in\n          let if_id = P.emit em.builder\n            (If { cond = gate_id; idx_for_dedup = dst_id }) in\n          emit_instr em node (Store { dst = dst_id; value = m value });\n          ignore (P.emit em.builder (Endif { if_ = if_id }))\n      | None ->\n          emit_instr em node (Store { dst = m dst; value = m value }))\n\n  (* 1:1 translations *)\n  | Param { idx; dtype } ->\n      emit_instr em node (Param { idx; dtype })\n  | Param_image { idx; dtype; width; height } ->\n      emit_instr em node (Param_image { idx; dtype; width; height })\n  | Define_local { size; dtype } ->\n      emit_instr em node (Define_local { size; dtype })\n  | Define_reg { size; dtype; _ } ->\n      emit_instr em node (Define_reg { size; dtype })\n  | Define_var { name; lo; hi; dtype } ->\n      emit_instr em node (Define_var { name; lo; 
hi; dtype })\n  | Const { value; dtype } ->\n      emit_instr em node (Const { value; dtype })\n  | Index { ptr; idxs; gate; dtype = Dtype.Ptr pty } ->\n      emit_instr em node\n        (Index { ptr = m ptr; idxs = ms idxs;\n                 gate = Option.map m gate; dtype = pty })\n  | Index { dtype = Dtype.Val _; _ } ->\n      failwith \"Linearizer: Index must be ptr-typed after pm_add_loads\"\n  | Load { src; alt; dtype } ->\n      let has_gate = find_gate src <> None in\n      if has_gate && alt = None then\n        failwith \"Linearizer: gated loads require an alt value before linearize\";\n      if (not has_gate) && alt <> None then\n        failwith \"Linearizer: Load alt requires gated Index\";\n      emit_instr em node\n        (Load { src = m src; alt = Option.map m alt; dtype })\n  | Unary { op; src; dtype } ->\n      emit_instr em node (Unary { op; src = m src; dtype })\n  | Binary { op; lhs; rhs; dtype } ->\n      emit_instr em node\n        (Binary { op; lhs = m lhs; rhs = m rhs; dtype })\n  | Ternary { op; a; b; c; dtype } ->\n      emit_instr em node\n        (Ternary { op; a = m a; b = m b; c = m c; dtype })\n  | Cast { src; dtype } ->\n      emit_instr em node\n        (Cast { src = m src; dtype = Dtype.val_of dtype })\n  | Bitcast { src; dtype } ->\n      emit_instr em node (Bitcast { src = m src; dtype })\n  | Vectorize { srcs; dtype } ->\n      emit_instr em node\n        (Vectorize { srcs = ms srcs; dtype = Dtype.val_of dtype })\n  | Gep { src; idxs; dtype } ->\n      emit_instr em node (Gep { src = m src; idxs; dtype })\n  | Barrier -> emit_instr em node Barrier\n  | Special { dim; size; dtype } ->\n      emit_instr em node (Special { dim; size = m size; dtype })\n  | Wmma { name; a; b; c; dtype; dims; dtype_in; dtype_out;\n           device; threads; upcast_axes; reduce_axes } ->\n      emit_instr em node\n        (Wmma { name; a = m a; b = m b; c = m c; dtype;\n                dims; dtype_in; dtype_out; device; threads;\n                
upcast_axes; reduce_axes })\n  | Custom { fmt; args } ->\n      emit_instr em node (Custom { fmt; args = ms args })\n  | Custom_inline { fmt; args; dtype } ->\n      emit_instr em node (Custom_inline { fmt; args = ms args; dtype })\n\n  (* Must be lowered before linearization *)\n  | Invalid_index _ | Vconst _ | Ptrcat _ | Vcat _\n  | Reduce _ | Unroll _ | Contract _ | Bufferize _ ->\n      failwith (\"Linearizer: \" ^ K.view_op_name (K.view node)\n                ^ \" must be lowered before linearize\")\n\n(* Add control-flow edges: RANGE nodes gain a dependency on the\n   predecessor END/RANGE determined by build_cfg_context. *)\nlet add_control_flow cfg node = match K.view node with\n  | Range _ -> (\n      match K.Ref_tbl.find_opt cfg.edges node with\n      | Some pred ->\n          let children = K.children node in\n          Some (K.replace node ~children:(children @ [pred]) ())\n      | None -> None)\n  | _ -> None\n\nlet pm_add_control_flow sink =\n  let cfg = build_cfg_context (K.toposort sink) in\n  K.graph_rewrite ~name:\"add control flow\" (add_control_flow cfg) sink\n\n(* Priority-based topological ordering followed by Kernel → Program emission.\n   The input must already have split Ends and control-flow edges applied. *)\n\nlet linearize sink =\n  let topo = K.toposort sink in\n  let order = linearize_order topo in\n  let em = {\n    builder = P.create ();\n    k2p = K.Ref_tbl.create (List.length topo);\n    open_ranges = [];\n  } in\n  List.iter (emit em) order;\n  if em.open_ranges <> [] then\n    failwith \"Linearizer: unclosed ranges after emission (missing End?)\";\n  P.finish em.builder\n"
  },
  {
    "path": "packages/tolk/lib/codegen/late/linearizer.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\n(** Kernel-to-program linearization.\n\n    Provides rewrite passes for splitting multi-range Ends and adding\n    control-flow edges, plus the final priority-based toposort and\n    Kernel → Program emission.\n\n    The input kernel must be in late codegen form: exactly one\n    {!Tolk_ir.Kernel.view.Sink}, no unlowered nodes\n    ({!Tolk_ir.Kernel.view.Reduce}, {!Tolk_ir.Kernel.view.Bufferize},\n    {!Tolk_ir.Kernel.view.Ptrcat}, {!Tolk_ir.Kernel.view.Vcat},\n    {!Tolk_ir.Kernel.view.Unroll}, {!Tolk_ir.Kernel.view.Contract}),\n    and gated loads must already carry an alternate value.\n\n    See also {!Devectorizer}, {!Expander}. *)\n\nval do_split_ends : Tolk_ir.Kernel.t -> Tolk_ir.Kernel.t option\n(** [do_split_ends node] splits a multi-range End into nested single-range\n    Ends. Returns [None] for non-End nodes or single-range Ends. *)\n\nval pm_split_ends : Tolk_ir.Kernel.t -> Tolk_ir.Kernel.t\n(** [pm_split_ends root] splits multi-range {!Tolk_ir.Kernel.view.End}\n    nodes into nested single-range Ends, sorted by axis (descending). *)\n\nval pm_add_control_flow : Tolk_ir.Kernel.t -> Tolk_ir.Kernel.t\n(** [pm_add_control_flow sink] computes control-flow dependencies between\n    sibling loops and adds ordering edges to {!Tolk_ir.Kernel.view.Range}\n    nodes. *)\n\nval linearize : Tolk_ir.Kernel.t -> Tolk_ir.Program.t\n(** [linearize sink] performs priority-based topological ordering and\n    emits a flat {!Tolk_ir.Program.t}.  Expects [pm_split_ends] and\n    [pm_add_control_flow] to have already been applied.\n\n    Raises [Failure] if [sink] contains unlowered nodes or has\n    unclosed ranges after emission. 
*)\n"
  },
  {
    "path": "packages/tolk/lib/codegen/opt/heuristic.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\nopen Tolk_ir\nmodule K = Kernel\nmodule P = Postrange\n\n(* Environment *)\n\nlet use_tc = Helpers.getenv \"USE_TC\" 1\nlet tc_select = Helpers.getenv \"TC_SELECT\" (-1)\nlet tc_opt = Helpers.getenv \"TC_OPT\" 0\nlet amx = Helpers.getenv \"AMX\" 0 <> 0\nlet mv_blocksize = Helpers.getenv \"MV_BLOCKSIZE\" 4\nlet mv_threads_per_row = Helpers.getenv \"MV_THREADS_PER_ROW\" 8\nlet mv_rows_per_thread = Helpers.getenv \"MV_ROWS_PER_THREAD\" 4\nlet mv = Helpers.getenv \"MV\" 1\nlet nolocals_var = Helpers.Context_var.int ~key:\"NOLOCALS\" ~default:0\n\n(* Helpers *)\n\nlet const_int_or default node =\n  match K.const_arg node with Some (Int n) -> Int64.to_int n | _ -> default\n\nlet last lst = List.nth lst (List.length lst - 1)\n\nlet prod_at shape axes =\n  List.fold_left (fun acc a ->\n    const_int_or 1 (List.nth shape a) * acc) 1 axes\n\nlet divides_by rng n = K.divides (K.range_size rng) n <> None\n\nlet index_of_rng rngs rng =\n  match List.find_index (fun r -> r == rng) rngs with\n  | Some i -> i | None -> -1\n\n(* Unwrap Where/Invalid guard from an Index's first idx. *)\nlet get_idx buf = match K.view buf with\n  | Index { idxs = idx :: _; _ } -> Some (\n      match K.view idx with\n      | Ternary { op = `Where; b; c; _ }\n        when (match K.view c with Invalid_index _ -> true | _ -> false) -> b\n      | _ -> idx)\n  | _ -> None\n\n(* Flatten ADD tree into a list of addends. 
*)\nlet split_add node =\n  let rec go acc = function\n    | [] -> List.rev acc\n    | n :: rest -> match K.view n with\n      | Binary { op = `Add; lhs; rhs; _ } -> go acc (lhs :: rhs :: rest)\n      | _ -> go (n :: acc) rest\n  in\n  go [] [node]\n\nlet try_apply k opt =\n  try ignore (P.apply_opt k opt : _ option) with P.Opt_error _ -> ()\n\n(* Find the first size in [sizes] that divides [rng], apply [mk_opt]\n   at that axis, and return the shift_to result. *)\nlet try_opt_on_rng tk rng sizes mk_opt =\n  match List.find_opt (divides_by rng) sizes with\n  | Some sz ->\n      let axis = index_of_rng (P.rngs tk) rng in\n      if axis >= 0 then P.apply_opt tk (mk_opt axis sz) else None\n  | None -> None\n\n(* Try tensor core optimization.  Returns Some k on success. *)\nlet try_tensor_cores k =\n  if use_tc <= 0\n     || (List.length (P.axes_of k [Axis_kind.Group_reduce; Axis_kind.Reduce]) <> 1\n         && tc_opt < 1)\n  then None\n  else\n    let tk = P.copy k in\n    let tc_result =\n      try P.apply_opt tk (K.Opt.Tc { axis = 0; tc_select; tc_opt; use_tc })\n      with P.Opt_error _ -> None in\n    match tc_result with\n    | Some (n_rng, m_rng) when not amx ->\n        let rngs = [| n_rng; m_rng |] in\n        List.iter (fun d ->\n          match try_opt_on_rng tk rngs.(d) [5;4;3;2]\n            (fun axis amount -> K.Opt.Upcast { axis; amount }) with\n          | Some (replaced, _) -> rngs.(d) <- replaced\n          | None -> ()) [1; 0];\n        ignore (try_opt_on_rng tk rngs.(0) [4; 2]\n          (fun axis amount -> K.Opt.Local { axis; amount }));\n        Some tk\n    | _ -> None\n\n(* Upcast float4 image axes.  Must run early before locals are added. 
*)\nlet is_image_buf buf = match K.view buf with\n  | Index { ptr; _ } -> (match K.view ptr with Param_image _ -> true | _ -> false)\n  | _ -> false\n\nlet upcast_images k =\n  List.iter (fun buf ->\n    if is_image_buf buf then\n      Option.iter (fun idx ->\n        let axes = List.filter_map (fun c ->\n          if K.is_range c && const_int_or 0 (K.range_size c) mod 4 = 0\n          then match index_of_rng (P.rngs k) c with\n            | i when i >= 0 -> Some i | _ -> None\n          else None) (split_add idx) in\n        match axes with\n        | axis :: _ when List.mem axis (P.upcastable_dims k) ->\n            ignore (P.apply_opt k (K.Opt.Upcast { axis; amount = 4 }))\n        | axis :: _ ->\n            (match List.find_index (( = ) axis) (P.unrollable_dims k) with\n             | Some ui -> ignore (P.apply_opt k (K.Opt.Unroll { axis = ui; amount = 4 }))\n             | None -> ())\n        | [] -> ())\n        (get_idx buf))\n    (P.bufs k)\n\n(* Detect matrix-vector pattern: reduce(add, mul(INDEX, INDEX)) where\n   the first reduce range appears as an addend in idx0, and all idx0\n   ranges appear in idx1.  Returns the first reduce range on match. 
*)\nlet detect_matvec k =\n  let open Option in\n  bind (P.reduceop k) (fun red ->\n  match K.view red with\n  | Reduce { op = `Add; src = mul_src; _ } -> (\n    match K.view mul_src with\n    | Binary { op = `Mul; lhs = in0; rhs = in1; _ }\n      when (match K.view in0 with Index _ -> true | _ -> false)\n        && (match K.view in1 with Index _ -> true | _ -> false) ->\n        bind (get_idx in0) (fun idx0 ->\n        bind (get_idx in1) (fun _idx1 ->\n        match P.ranges_of k [Axis_kind.Reduce] with\n        | first_red :: _ ->\n            let idx0_rngs = List.filter K.is_range (K.backward_slice idx0) in\n            let idx1_rngs = List.filter K.is_range (K.backward_slice _idx1) in\n            if List.exists (fun u -> u == first_red) (split_add idx0)\n               && List.for_all (fun r -> List.memq r idx1_rngs) idx0_rngs\n            then Some first_red\n            else None\n        | [] -> None))\n    | _ -> None)\n  | _ -> None)\n\n(* Apply matvec opts (GROUP + LOCAL + UPCAST) if the pattern matches. 
*)\nlet try_matvec k =\n  if not (Renderer.has_local (P.ren k)) || mv = 0\n     || (mv_blocksize <= 1 && mv_threads_per_row <= 1 && mv_rows_per_thread <= 1)\n     || List.length (P.full_shape k) < 2\n     || not (Renderer.has_shared (P.ren k))\n  then None\n  else match detect_matvec k with\n  | None -> None\n  | Some first_red ->\n      let gi = List.find_opt (fun gi ->\n        divides_by first_red mv_threads_per_row\n        && const_int_or 0 (List.nth (P.full_shape k) gi)\n           mod (mv_blocksize * mv_rows_per_thread) = 0)\n        (P.axes_of k [Axis_kind.Global]) in\n      match gi with\n      | None -> None\n      | Some gi ->\n          if mv_threads_per_row > 1 then\n            try_apply k (K.Opt.Group { axis = 0; amount = mv_threads_per_row });\n          if mv_blocksize > 1 then\n            ignore (P.apply_opt k (K.Opt.Local { axis = gi; amount = mv_blocksize }));\n          if mv_rows_per_thread > 1 then\n            ignore (P.apply_opt k (K.Opt.Upcast { axis = gi; amount = mv_rows_per_thread }));\n          Some k\n\n(* Try GROUPTOP if output shape is small. *)\nlet try_grouping k =\n  let threshold = if Helpers.Context_var.get nolocals_var <> 0 then 240 else 2048 in\n  if prod_at (P.output_shape k) (P.upcastable_dims k) <= threshold then\n    (try List.iter (fun axis ->\n       try\n         ignore (P.apply_opt k (K.Opt.Grouptop { axis; amount = 16 }));\n         raise_notrace Exit\n       with P.Opt_error _ -> ()) [0; 1; 2]\n     with Exit -> ());\n  P.group_for_reduces k > 0\n\n(* Upcast small masked dims (e.g. from Tensor.stack). 
*)\nlet upcast_masked k =\n  let ast_slice = K.backward_slice (P.ast k) in\n  let is_masked rng = List.exists (fun u -> match K.view u with\n    | Ternary { op = `Where; a = cond; _ } ->\n        List.exists (fun n -> n == rng) (K.backward_slice cond)\n    | _ -> false) ast_slice in\n  let to_upcast = List.fold_left (fun acc axis ->\n    let sz = const_int_or 0 (List.nth (P.full_shape k) axis) in\n    if sz > 7 then acc\n    else if is_masked (List.nth (P.rngs k) axis)\n         && prod_at (P.full_shape k) acc * sz <= 49\n    then acc @ [axis] else acc)\n    [] (P.upcastable_dims k) in\n  List.iter (fun axis ->\n    ignore (P.apply_opt k (K.Opt.Upcast { axis; amount = 0 })))\n    (List.rev to_upcast)\n\n(* Upcast non-reduce axes based on stride analysis: prefer axes where\n   some buffer broadcasts (stride 0) while all upcast/unroll axes have\n   nonzero stride.  Pick the axis with fewest strides first. *)\nlet upcast_heuristic k =\n  let is_dsp = Renderer.device (P.ren k) = \"DSP\" in\n  let upcasted = Hashtbl.create 8 in\n  let continue_ = ref true in\n  while !continue_\n        && prod_at (P.output_shape k) (P.upcastable_dims k) >= 1024\n        && P.upcast_size k < 32 do\n    let upcast_amounts =\n      if is_dsp then (if Hashtbl.length upcasted = 0 then [128] else [])\n      else [3; 4] in\n    let xb = List.fold_left (fun acc axis ->\n      List.fold_left (fun acc amt ->\n        if Hashtbl.mem upcasted axis then acc\n        else if const_int_or 0 (List.nth (P.full_shape k) axis) mod amt <> 0 then acc\n        else\n          let rng = List.nth (P.rngs k) axis in\n          let upcast_unroll = P.ranges_of k [Axis_kind.Upcast; Axis_kind.Unroll] in\n          let has_broadcast = List.exists (fun b ->\n            match get_idx b with\n            | Some idx ->\n                let bslice = K.backward_slice idx in\n                not (List.memq rng bslice)\n                && List.for_all (fun r2 -> List.memq r2 bslice) upcast_unroll\n            | None -> 
false) (P.bufs k) in\n          if not has_broadcast then acc\n          else\n            let num_strides = ref 0 in\n            let sum_strides = ref 0 in\n            List.iter (fun b ->\n              match get_idx b with\n              | Some idx ->\n                  if List.memq rng (K.backward_slice idx) then\n                    incr num_strides;\n                  List.iter (fun c ->\n                    if c == rng then incr sum_strides\n                    else match K.view c with\n                    | Binary { op = `Mul; lhs; rhs; _ } ->\n                        if lhs == rng && K.is_const rhs then\n                          sum_strides := !sum_strides + K.const_to_int rhs\n                        else if rhs == rng && K.is_const lhs then\n                          sum_strides := !sum_strides + K.const_to_int lhs\n                    | _ -> ()) (split_add idx)\n              | None -> ()) (P.bufs k);\n            (!num_strides, !sum_strides, axis, amt) :: acc)\n        acc upcast_amounts)\n      [] (P.upcastable_dims k)\n      |> List.sort compare in\n    match xb with\n    | (_, _, axis, amt) :: _ ->\n        ignore (P.apply_opt k (K.Opt.Upcast { axis; amount = amt }));\n        Hashtbl.replace upcasted axis ()\n    | [] -> continue_ := false\n  done\n\n(* Unroll last reduce dim if small. 
*)\nlet unroll_reduce k =\n  try\n    let ud = P.unrollable_dims k in\n    if ud <> []\n       && (P.upcast_size k <= 4 || P.axes_of k [Axis_kind.Unroll] = [])\n       && P.upcast_size k < 64\n    then begin\n      let s = const_int_or 0 (List.nth (P.full_shape k) (last ud)) in\n      if s <= 32 then begin\n        ignore (P.apply_opt k\n          (K.Opt.Unroll { axis = List.length ud - 1; amount = 0 }));\n        let ud2 = P.unrollable_dims k in\n        if ud2 <> [] && s <= 3\n           && const_int_or 0 (List.nth (P.full_shape k) (last ud2)) <= 3 then\n          ignore (P.apply_opt k\n            (K.Opt.Unroll { axis = List.length ud2 - 1; amount = 0 }))\n      end else if const_int_or 0 (List.nth (P.full_shape k) (last ud)) mod 4 = 0 then\n        ignore (P.apply_opt k\n          (K.Opt.Unroll { axis = List.length ud - 1; amount = 4 }))\n    end\n  with P.Opt_error _ -> ()\n\n(* Upcast by 4 if nothing is upcasted yet. *)\nlet upcast_default k =\n  let ud = P.upcastable_dims k in\n  if P.upcasted k = 0 && ud <> []\n     && const_int_or 0 (List.nth (P.full_shape k) (last ud)) mod 4 = 0 then\n    ignore (P.apply_opt k (K.Opt.Upcast { axis = last ud; amount = 4 }))\n\n(* Choose local sizes for global/loop axes, prioritising expand axes. *)\nlet apply_locals k =\n  if not (Renderer.has_local (P.ren k)) then ()\n  else if Helpers.Context_var.get nolocals_var <> 0 then\n    ignore (P.apply_opt k K.Opt.Nolocals)\n  else begin\n    (* Rank axes: expand axes (broadcast in some buffer) sort first. 
*)\n    let ranking = List.filter_map (fun axis ->\n      let rng = List.nth (P.rngs k) axis in\n      if not (K.is_const (K.range_size rng)) then None\n      else\n        let is_expand = List.exists (fun b ->\n          match get_idx b with\n          | Some idx -> not (List.memq rng (K.backward_slice idx))\n          | None -> false) (P.bufs k) in\n        Some (is_expand, axis))\n      (P.axes_of k [Axis_kind.Global; Axis_kind.Loop]) in\n    let sorted_ranking = List.sort (fun (e1, a1) (e2, a2) ->\n      let c = compare e2 e1 in\n      if c <> 0 then c else compare a2 a1) ranking in\n    (* Pick a local size for each axis, respecting the 128-thread budget. *)\n    let to_local = List.fold_left (fun acc (_, axis) ->\n      let local_size = List.fold_left (fun p (_, sz) -> p * sz) 1 acc in\n      let candidates =\n        (if axis = 0 then [32] else []) @ [16; 8; 4; 3; 2] in\n      let ax_sz = const_int_or 0 (List.nth (P.full_shape k) axis) in\n      match List.find_opt (fun x ->\n        ax_sz mod x = 0 && local_size * x <= 128) candidates with\n      | Some sz -> (axis, sz) :: acc\n      | None -> acc) [] sorted_ranking in\n    (* Apply at most 3 locals, sorted by axis, adjusting for deleted shapes. *)\n    let to_apply = to_local\n      |> List.rev |> List.filteri (fun i _ -> i < 3)\n      |> List.sort (fun (a1, _) (a2, _) -> compare a1 a2) in\n    let deleted = ref 0 in\n    List.iter (fun (axis, local_sz) ->\n      let axis = axis - !deleted in\n      let will_delete = const_int_or 0 (List.nth (P.full_shape k) axis) = local_sz in\n      ignore (P.apply_opt k (K.Opt.Local { axis; amount = local_sz }));\n      if will_delete then incr deleted) to_apply\n  end\n\n(* Pick a thread count for LOOP axes. 
*)\nlet apply_threading k =\n  if Renderer.has_threads (P.ren k) then\n    match Renderer.global_max (P.ren k) with\n    | Some (gmax :: _) ->\n        let total = List.fold_left (fun acc s -> const_int_or 1 s * acc) 1\n          (P.full_shape k) in\n        (try List.iter (fun threads ->\n          if threads <= gmax && total / (128 lsl 10) >= threads then begin\n            (try List.iter (fun axis ->\n              if const_int_or 0 (List.nth (P.full_shape k) axis) mod threads = 0\n              then begin\n                try_apply k (K.Opt.Thread { axis; amount = threads });\n                raise_notrace Exit\n              end) (P.axes_of k [Axis_kind.Loop])\n             with Exit -> ());\n            let opts = P.applied_opts k in\n            if opts <> [] && (match last opts with\n              K.Opt.Thread _ -> true | _ -> false)\n            then raise_notrace Exit\n          end) [32; 16; 12; 8; 6; 5; 4; 3; 2]\n         with Exit -> ())\n    | _ -> ()\n\nlet hand_coded_optimizations k =\n  match try_tensor_cores k with Some k -> k | None ->\n  let k = P.copy k in\n  upcast_images k;\n  match try_matvec k with Some k -> k | None ->\n  if try_grouping k then k\n  else begin\n    upcast_masked k;\n    upcast_heuristic k;\n    unroll_reduce k;\n    upcast_default k;\n    apply_locals k;\n    apply_threading k;\n    k\n  end\n"
  },
  {
    "path": "packages/tolk/lib/codegen/opt/heuristic.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\n(** Heuristic-based kernel optimizations.\n\n    Applies a sequence of hand-coded optimization steps to a kernel scheduler:\n    tensor cores, image upcasts, matvec detection, grouping, masked upcasts,\n    broadcast-based upcasts, reduce unrolling, local groups, and threading.\n\n    *)\n\nval nolocals_var : int Helpers.Context_var.t\n(** Runtime override for [NOLOCALS] environment variable. *)\n\nval hand_coded_optimizations : Postrange.t -> Postrange.t\n(** [hand_coded_optimizations k] applies heuristic-based optimizations to the\n    kernel scheduler [k]. Returns the (possibly mutated) scheduler. *)\n"
  },
  {
    "path": "packages/tolk/lib/codegen/opt/postrange.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\nopen Tolk_ir\nmodule K = Kernel\n\nlet strf = Printf.sprintf\nlet prod lst = List.fold_left ( * ) 1 lst\n\nlet const_int_or default node = match K.const_arg node with\n  | Some (Int n) -> Int64.to_int n | _ -> default\n\nexception Opt_error of string\n\nlet check cond msg = if not cond then raise (Opt_error msg)\n\nlet nth_or_error lst i msg =\n  match List.nth_opt lst i with Some v -> v | None -> raise (Opt_error msg)\n\n(* Cached shape data — recomputed after every AST mutation. *)\ntype shape = {\n  rngs : K.t list;\n  axis_types : Axis_kind.t list;\n  full_shape : K.t list;\n  shape_str : string list;\n}\n\nlet compute_shape ast =\n  let rngs =\n    K.toposort ast\n    |> List.filter (fun u -> K.is_range u && const_int_or 0 (K.range_size u) > 1)\n    |> List.sort (fun a b ->\n         compare (Axis_kind.to_pos (K.range_kind a), K.range_axis a)\n                 (Axis_kind.to_pos (K.range_kind b), K.range_axis b)) in\n  let axis_types = List.map K.range_kind rngs in\n  let full_shape = List.map K.range_size rngs in\n  let cnt = Hashtbl.create 8 in\n  let shape_str = List.map (fun at ->\n    let n = match Hashtbl.find_opt cnt at with Some n -> n | None -> 0 in\n    Hashtbl.replace cnt at (n + 1);\n    strf \"%s%d\" (Axis_kind.letter at) n) axis_types in\n  { rngs; axis_types; full_shape; shape_str }\n\n(* Scheduler state: wraps a kernel AST and tracks applied optimisations. 
*)\ntype t = {\n  mutable ast : K.t;\n  ren : Renderer.t;\n  mutable dont_use_locals : bool;\n  mutable applied_opts : K.Opt.t list;\n  mutable tensor_core : Tc.t option;\n  mutable opt_range : int;\n  mutable shape : shape;\n}\n\nlet refresh t = t.shape <- compute_shape t.ast\n\nlet create ast ren =\n  let dont_use_locals, applied_opts = match K.view ast with\n    | Sink { kernel_info = Some ki; _ } -> ki.dont_use_locals, ki.applied_opts\n    | _ -> false, [] in\n  let all_rngs = K.find_nodes K.is_range ast in\n  let max_axis = List.fold_left (fun acc r -> max acc (K.range_axis r)) 0 all_rngs in\n  let shape = compute_shape ast in\n  { ast; ren; dont_use_locals; applied_opts;\n    tensor_core = None; opt_range = max_axis + 1; shape }\n\n(* Accessors — read from cached shape. *)\nlet rngs t = t.shape.rngs\nlet shape_len t = List.length t.shape.rngs\nlet full_shape t = t.shape.full_shape\nlet axis_types t = t.shape.axis_types\nlet shape_str t = t.shape.shape_str\n\nlet shape_str_to_axis t nms =\n  List.map (fun nm ->\n    match List.find_index (fun s -> s = nm) t.shape.shape_str with\n    | Some i -> i\n    | None -> failwith (strf \"shape_str_to_axis: %S not found\" nm))\n    nms\n\nlet ast t = t.ast\nlet ren t = t.ren\nlet applied_opts t = t.applied_opts\nlet tensor_core t = t.tensor_core\n\nlet copy t = { t with ast = t.ast }\n\n(* Ranges that appear in END nodes with non-Reduce axis type. *)\nlet output_rngs t =\n  List.concat_map (fun s ->\n    match K.view s with\n    | End { ranges; _ } ->\n        let sink_ranges = K.find_nodes K.is_range (K.sink ranges) in\n        List.filter (fun r -> K.range_kind r <> Axis_kind.Reduce) sink_ranges\n    | _ -> [])\n    (K.children t.ast)\n\n(* Loop ranges eligible for promotion to Global: must appear in all\n   BUFFERIZE nodes' ranges. 
*)\nlet globalizable_rngs t =\n  let out = List.filter (fun r ->\n    K.range_kind r = Axis_kind.Loop) (output_rngs t) in\n  List.fold_left (fun acc node ->\n    match K.view node with\n    | Bufferize { ranges; _ } ->\n        List.filter (fun r -> List.memq r ranges) acc\n    | _ -> acc)\n    out (K.toposort t.ast)\n\n(* Promote eligible Loop ranges to Global. *)\nlet convert_loop_to_global t =\n  if Renderer.has_local t.ren then begin\n    let glob = globalizable_rngs t in\n    let subs = List.filter_map (fun r ->\n      if List.memq r glob then\n        Some (r, K.range ~size:(K.range_size r)\n                   ~axis:(K.range_axis r) ~sub:(K.range_sub r)\n                   ~kind:Axis_kind.Global\n                   ~dtype:(Dtype.val_of (K.dtype r)) ())\n      else None)\n      (rngs t) in\n    if subs <> [] then (t.ast <- K.substitute subs t.ast; refresh t)\n  end\n\nlet reduceop t =\n  List.find_opt (fun x -> match K.view x with Reduce _ -> true | _ -> false)\n    (K.backward_slice t.ast)\n\nlet colors t =\n  let out = output_rngs t in\n  let glob = globalizable_rngs t in\n  List.map2 (fun at r ->\n    if t.dont_use_locals && at = Axis_kind.Global then \"BLUE\"\n    else if at = Axis_kind.Loop && not (List.memq r out) then \"BLACK\"\n    else if at = Axis_kind.Loop && not (List.memq r glob) then \"white\"\n    else Axis_kind.color at)\n    (axis_types t) (rngs t)\n\nlet render_size sz = match K.const_arg sz with\n  | Some (Int n) -> Int64.to_string n | _ -> \"s\"\n\nlet colored_shape t =\n  String.concat \" \" (List.map2 (fun rng color ->\n    strf \"%4s:%s\" (render_size (K.range_size rng)) color)\n    (rngs t) (colors t))\n\n(* Sanitise a kernel name to a valid identifier. *)\nlet to_function_name s =\n  let buf = Buffer.create (String.length s) in\n  String.iter (fun c -> match c with\n    | 'a' .. 'z' | 'A' .. 'Z' | '0' .. 
'9' | '_' -> Buffer.add_char buf c\n    | _ -> Buffer.add_string buf (strf \"%02X\" (Char.code c))) s;\n  Buffer.contents buf\n\nlet kernel_cnt : (string, int) Hashtbl.t = Hashtbl.create 16\n\n(* Finalize the kernel: generate a debug name, flatten ranges,\n   and attach updated kernel_info with a tag marking it as optimized. *)\nlet get_optimized_ast ?name_override t =\n  let name = match name_override with\n    | Some n -> n\n    | None ->\n        let k_type = if reduceop t <> None then \"r\" else \"E\" in\n        let specials = List.sort (fun a b ->\n            match K.view a, K.view b with\n            | Special { dim = da; _ }, Special { dim = db; _ } ->\n                Special_dim.compare da db\n            | _ -> 0)\n          (List.filter (fun n -> match K.view n with Special _ -> true | _ -> false)\n             (K.toposort t.ast)) in\n        let special_strs = List.map (fun s -> match K.view s with\n          | Special { size; _ } -> render_size size | _ -> \"?\") specials in\n        let rng_strs = List.map (fun rng ->\n          render_size (K.range_size rng)) (rngs t) in\n        let raw = k_type ^ \"_\" ^ String.concat \"_\"\n          (special_strs @ rng_strs) in\n        let fn = to_function_name raw in\n        let cnt = 1 + (match Hashtbl.find_opt kernel_cnt fn with\n          | Some c -> c | None -> 0) in\n        Hashtbl.replace kernel_cnt fn cnt;\n        raw ^ (if cnt > 1 then strf \"n%d\" (cnt - 1) else \"\")\n  in\n  t.ast <- Simplify.pm_flatten_range t.ast; refresh t;\n  let ki = { K.name; axis_kinds = []; dont_use_locals = t.dont_use_locals;\n             applied_opts = t.applied_opts; opts_to_apply = None;\n             estimates = None } in\n  K.with_tag \"1\"\n    (K.sink ~kernel_info:ki\n       (match K.view t.ast with Sink { srcs; _ } -> srcs | _ -> [t.ast]))\n\n(* Split [rng] by [amount]: the original range shrinks to size/amount,\n   a new range of [amount] is created with [new_kind].  
When [top] is true\n   the new range is the high part; otherwise it is the low part. *)\nlet shift_to ?(top = false) ?input_new_rng t rng amount new_kind =\n  let size = K.range_size rng in\n  let old_sz = match K.divides size amount with\n    | Some q -> q\n    | None -> raise (Opt_error (strf \"shift_to: %d can't divide range\" amount)) in\n  let new_rng = match input_new_rng with\n    | Some r -> r\n    | None ->\n        let axis = t.opt_range in\n        t.opt_range <- t.opt_range + 1;\n        K.range ~size:(K.const_int amount) ~axis ~kind:new_kind () in\n  let replaced_rng = K.range ~size:old_sz\n    ~axis:(K.range_axis rng) ~sub:(K.range_sub rng)\n    ~kind:(K.range_kind rng) ~dtype:(Dtype.val_of (K.dtype rng)) () in\n  let open K.O in\n  let sub_axis =\n    if top then new_rng * old_sz + replaced_rng\n    else replaced_rng * K.const_int amount + new_rng in\n  t.ast <- K.substitute [(rng, sub_axis)] t.ast; refresh t;\n  (replaced_rng, new_rng)\n\nlet ranges_of t kinds =\n  List.filter (fun r -> List.mem (K.range_kind r) kinds) (rngs t)\n\nlet axes_of t kinds =\n  axis_types t\n  |> List.mapi (fun i at -> if List.mem at kinds then Some i else None)\n  |> List.filter_map Fun.id\n\n(* Axes of the given kinds whose full_shape entry is a constant > 1. 
*)\nlet const_dims t kinds =\n  let fs = full_shape t in\n  List.filter (fun i -> const_int_or 0 (List.nth fs i) > 1) (axes_of t kinds)\n\nlet upcast_size t =\n  let fs = full_shape t in\n  prod (List.map (fun a -> const_int_or 1 (List.nth fs a))\n    (axes_of t [Axis_kind.Upcast; Axis_kind.Unroll]))\n\nlet upcastable_dims t =\n  const_dims t [Axis_kind.Global; Axis_kind.Local; Axis_kind.Loop]\n\nlet unrollable_dims t =\n  const_dims t [Axis_kind.Group_reduce; Axis_kind.Reduce]\n\nlet bufs t =\n  List.rev (List.filter (fun x ->\n    match K.view x with Index _ -> true | _ -> false)\n    (K.toposort t.ast))\n\nlet output_shape t =\n  List.map2 (fun s at ->\n    match at with\n    | Axis_kind.Reduce | Axis_kind.Unroll | Axis_kind.Group_reduce ->\n        K.const_int 1\n    | _ -> s)\n    (full_shape t) (axis_types t)\n\nlet upcasted t =\n  List.length (axes_of t [Axis_kind.Upcast; Axis_kind.Unroll])\n\nlet group_for_reduces t =\n  List.length (axes_of t [Axis_kind.Group_reduce])\n\n(* Resolve an opt's axis to a real range index. *)\nlet real_axis t op axis =\n  match op, axis with\n  | _, None | K.Opt.Tc _, _ -> -1\n  | K.Opt.Unroll _, Some a ->\n      nth_or_error (unrollable_dims t) a \"invalid unroll axis\"\n  | K.Opt.Group _, Some a | K.Opt.Grouptop _, Some a ->\n      nth_or_error (axes_of t [Axis_kind.Reduce]) a \"invalid group axis\"\n  | _, Some a ->\n      check (a < shape_len t) \"invalid axis\"; a\n\nlet range_int_size rng = const_int_or 0 (K.range_size rng)\n\nlet allow_tf32 = match Sys.getenv_opt \"ALLOW_TF32\" with\n  | Some \"1\" -> true | _ -> false\n\nlet argsort perm =\n  let n = List.length perm in\n  let inv = Array.make n 0 in\n  List.iteri (fun i x -> if x >= 0 && x < n then inv.(x) <- i) perm;\n  Array.to_list inv\n\n(* Apply TC opt/reduce splits, return the new ranges (reversed). 
*)\nlet apply_tc_shifts t axes (tc : Tc.t) =\n  let warp = ref (K.range ~size:(K.const_int tc.threads) ~axis:(-1)\n                    ~kind:Axis_kind.Warp ()) in\n  let ne = ref [] in\n  List.iter (fun opt_str ->\n    let dim_idx = Char.code opt_str.[1] - Char.code '0' in\n    if opt_str.[0] = 'l' then begin\n      let replaced, new_rng = shift_to t axes.(dim_idx) 2\n        Axis_kind.Local ~input_new_rng:(K.binary ~op:`Mod\n          ~lhs:!warp ~rhs:(K.const_int 2)) in\n      axes.(dim_idx) <- replaced;\n      warp := K.binary ~op:`Idiv ~lhs:!warp ~rhs:(K.const_int 2);\n      ne := new_rng :: !ne\n    end else if opt_str.[0] = 'u' then begin\n      let replaced, new_rng = shift_to t axes.(dim_idx) 2\n        Axis_kind.Upcast in\n      axes.(dim_idx) <- replaced;\n      ne := new_rng :: !ne\n    end else\n      failwith (strf \"unsupported tc opt: %c\" opt_str.[0]))\n    tc.opts;\n  List.iter (fun (_, amt) ->\n    let replaced, new_rng = shift_to t axes.(2) amt Axis_kind.Unroll in\n    axes.(2) <- replaced;\n    ne := new_rng :: !ne)\n    (Tc.get_reduce_axes tc);\n  List.rev !ne\n\n(* Build the WMMA node and substitute it for the tagged reduce. 
*)\nlet build_wmma_node t (tc : Tc.t) ne =\n  let tagged_red = List.find (fun x ->\n    match K.view x with Reduce _ -> K.tag x = Some \"TC\" | _ -> false)\n    (K.toposort t.ast) in\n  let tne = List.map (K.with_tag \"1\") ne in\n  let ret = K.substitute (List.combine ne tne) tagged_red in\n  let ret_src = match K.view ret with\n    | Reduce { src; _ } -> src | _ -> assert false in\n  let mul_src = match K.view ret_src with\n    | Cast { src; _ } -> src | _ -> ret_src in\n  let srcs = K.children mul_src in\n  let perm0, perm1 = Tc.permutes_for_shape_str tc (Tc.base_shape_str tc) in\n  let srcs = List.mapi (fun i src ->\n    let p = if i = 0 then perm0 else perm1 in\n    K.substitute (List.combine tne\n      (List.map (fun j -> List.nth ne j) (argsort p))) src) srcs in\n  (* Compute upcast/reduce axes *)\n  let n_reduce = List.length (Tc.get_reduce_axes tc) in\n  let tc_reduce_axes = shape_str_to_axis t\n    (List.init n_reduce (fun i -> strf \"r%d\" i)) in\n  let base_ua = List.map (fun s -> (s, 2))\n    (shape_str_to_axis t (Tc.base_upcast_axes tc)) in\n  let log2 n = int_of_float (log (float_of_int n) /. 
log 2.0) in\n  let a_ept, b_ept, c_ept = tc.elements_per_thread in\n  let tc_upcast_axes = List.init 3 (fun i ->\n    let n = log2 [|a_ept; b_ept; c_ept|].(i) in\n    List.filteri (fun j _ -> j < n) base_ua) in\n  (* Convert axes to range numbers *)\n  let rngs_now = rngs t in\n  let tc_upcast_axes = List.map (fun v ->\n    List.map (fun (a, sz) ->\n      (K.range_axis (List.nth rngs_now a), sz)) v) tc_upcast_axes in\n  let tc_reduce_axes = List.map (fun a ->\n    K.range_axis (List.nth rngs_now a)) tc_reduce_axes in\n  (* Build the WMMA node *)\n  let out_dt = Dtype.of_scalar tc.dtype_out in\n  let src0, src1 = match srcs with [a; b] -> a, b | _ -> assert false in\n  let ua, ub, uc = match tc_upcast_axes with\n    | [a; b; c] -> a, b, c | _ -> assert false in\n  let contract_src src axes ept =\n    K.with_tag \"1\" (K.contract ~src ~axes\n      ~dtype:(Dtype.Val.vec ept (Dtype.val_of (Dtype.scalarize (K.dtype src))))) in\n  let wmma = K.with_tag \"1\" (K.wmma\n    ~name:(Tc.to_string tc)\n    ~a:(contract_src src0 ua a_ept)\n    ~b:(contract_src src1 ub b_ept)\n    ~c:(K.broadcast (K.const (Const.float (Dtype.val_of out_dt) 0.0)) c_ept)\n    ~dtype:(Dtype.Val.vec c_ept (Dtype.val_of out_dt))\n    ~dims:tc.dims ~dtype_in:tc.dtype_in ~dtype_out:tc.dtype_out\n    ~device:(Renderer.device t.ren) ~threads:tc.threads\n    ~upcast_axes:(ua, ub, uc) ~reduce_axes:[]) in\n  let tc_uop = K.with_tag \"1\" (K.unroll ~src:wmma ~axes:uc ~dtype:(Dtype.val_of out_dt)) in\n  (* Preserve extra reduces *)\n  let red_range_nodes = K.find_nodes K.is_range\n    (K.sink (match K.view tagged_red with\n      | Reduce { ranges; _ } -> ranges | _ -> [])) in\n  let extra_reduces = List.filter (fun x ->\n    not (List.mem (K.range_axis x) tc_reduce_axes)) red_range_nodes in\n  let tc_uop = if extra_reduces <> [] then\n    K.reduce ~op:`Add ~src:tc_uop ~ranges:extra_reduces\n      ~dtype:(Dtype.val_of (K.dtype tc_uop))\n  else tc_uop in\n  t.ast <- K.substitute [(tagged_red, tc_uop)] t.ast; 
refresh t\n\n(* Shared memory size check for group/reduce opts. *)\nlet check_shared_memory t opt amt red_opt =\n  let is_group = match opt with K.Opt.Group _ | Grouptop _ -> true | _ -> false in\n  if red_opt <> None\n     && (is_group || (group_for_reduces t > 0\n         && (match opt with Nolocals | Padto _ -> false | _ -> true)))\n  then begin\n    let fs = full_shape t in\n    let upcast_local_sz = prod (List.map (fun a ->\n      const_int_or 1 (List.nth fs a))\n      (axes_of t [Axis_kind.Upcast; Axis_kind.Warp;\n                  Axis_kind.Local; Axis_kind.Group_reduce])) in\n    let red = match red_opt with Some r -> r | None -> assert false in\n    let red_dt = K.dtype red in\n    let smem_sz = amt * upcast_local_sz * Dtype.itemsize red_dt in\n    check (smem_sz <= Renderer.shared_max t.ren)\n      (strf \"exceeds shared memory: needs %d, max %d\"\n         smem_sz (Renderer.shared_max t.ren))\n  end\n\n(* Check that a GROUP_REDUCE is not inside another reduce. *)\nlet check_no_nested_group t r red_opt =\n  if red_opt <> None then begin\n    (* Find the REDUCE whose range sources contain r *)\n    let reduce_node = List.find_opt (fun u ->\n      match K.view u with\n      | Reduce { ranges; _ } ->\n          let range_nodes = K.find_nodes K.is_range (K.sink ranges) in\n          List.memq r range_nodes\n      | _ -> false) (K.toposort t.ast) in\n    match reduce_node with\n    | Some red_node ->\n        (* Check enclosing (live) ranges, not the reduce's own range inputs *)\n        let live = K.live_ranges red_node in\n        check (not (List.exists (fun u ->\n          let k = K.range_kind u in\n          k = Axis_kind.Reduce || k = Axis_kind.Unroll\n          || k = Axis_kind.Group_reduce) live))\n          \"cannot have a GROUP_REDUCE inside another reduce\"\n    | None -> ()\n  end\n\n(* Per-opt validation for shift_to opts. 
*)\nlet validate_shift_opt t opt r amt rng_kind =\n  match opt with\n  | K.Opt.Unroll _ ->\n      check (amt <= 32) \"don't unroll more than 32\";\n      check (rng_kind = Axis_kind.Group_reduce || rng_kind = Axis_kind.Reduce)\n        \"unroll is for GROUP_REDUCE/REDUCE\"\n  | Upcast _ ->\n      check (Renderer.device t.ren = \"DSP\" || amt <= 16)\n        \"don't upcast more than 16\";\n      check (rng_kind = Axis_kind.Global || rng_kind = Axis_kind.Local\n             || rng_kind = Axis_kind.Loop)\n        \"upcast is for GLOBAL/LOCAL/LOOP\"\n  | Local _ ->\n      check (not t.dont_use_locals) \"can't use locals\";\n      check (rng_kind = Axis_kind.Global || rng_kind = Axis_kind.Loop)\n        \"local is for globals\"\n  | Thread _ ->\n      check (Renderer.has_threads t.ren) \"target does not support threads\";\n      (match Renderer.global_max t.ren with\n       | Some (gm :: _) -> check (amt <= gm) \"too many threads\" | _ -> ());\n      check (List.for_all (fun at -> at <> Axis_kind.Thread)\n               (axis_types t)) \"already threaded\";\n      check (List.memq r (globalizable_rngs t))\n        \"can't apply thread to this dim\"\n  | Group _ | Grouptop _ ->\n      check (List.for_all (fun o ->\n        match o with K.Opt.Tc _ -> false | _ -> true) t.applied_opts)\n        \"no grouping with tensor cores\";\n      check (not t.dont_use_locals) \"can't use locals\";\n      check (rng_kind = Axis_kind.Reduce) \"group is for reduce\"\n  | _ -> ()\n\n(* Pad a range to a multiple of [amount]. 
*)\nlet apply_padto t r amount red_opt =\n  check (K.const_arg (K.range_size r) <> None) \"only pad const axes\";\n  let rng_kind = K.range_kind r in\n  check (rng_kind <> Axis_kind.Upcast && rng_kind <> Axis_kind.Unroll)\n    \"cannot pad upcasted\";\n  check (rng_kind <> Axis_kind.Thread) \"cannot pad thread\";\n  (match red_opt with\n   | Some red when rng_kind = Axis_kind.Group_reduce\n                 || rng_kind = Axis_kind.Reduce ->\n       let red_op = match K.view red with\n         | Reduce { op; _ } -> op | _ -> assert false in\n       check (red_op = `Add)\n         (strf \"cannot pad %s\" (K.view_op_name (K.view red)));\n       let has_unsafe = List.exists (fun u ->\n         match K.view u with\n         | Unary { op = (`Recip | `Log2 | `Exp2); _ }\n         | Binary { op = (`Idiv | `Pow); _ } -> true\n         | _ -> false) (K.toposort red) in\n       check (not has_unsafe)\n         (strf \"cannot pad %s\" (K.view_op_name (K.view red)))\n   | _ -> ());\n  let old_size = range_int_size r in\n  let new_sz = (old_size + amount - 1) / amount * amount in\n  check (old_size > new_sz / 4) \"pad adds more than quadruple the work\";\n  let replaced_rng = K.range ~size:(K.const_int new_sz)\n    ~axis:(K.range_axis r) ~sub:(K.range_sub r)\n    ~kind:(K.range_kind r) ~dtype:(Dtype.val_of (K.dtype r)) () in\n  let valid = K.binary ~op:`Cmplt\n    ~lhs:replaced_rng ~rhs:(K.const_int old_size) in\n  let subs = [(r, replaced_rng)] in\n  let subs = List.fold_left (fun acc b ->\n    match K.view b with\n    | Index { ptr; idxs; gate; _ } when K.in_backward_slice r b ->\n        let combined_valid = match gate with\n          | Some g -> K.binary ~op:`And ~lhs:valid ~rhs:g\n          | None -> valid in\n        let guarded_idx = K.index ~ptr ~idxs ~gate:combined_valid () in\n        let where = K.ternary ~op:`Where\n          ~a:combined_valid ~b:guarded_idx ~c:(K.invalid_index ()) in\n        (b, where) :: acc\n    | _ -> acc) subs (bufs t) in\n  t.ast <- 
K.substitute subs t.ast; refresh t\n\n(* Swap two global ranges' axis numbers. *)\nlet apply_swap t r with_axis =\n  let altrng = nth_or_error (rngs t) with_axis \"invalid swap axis\" in\n  check (K.range_kind r = Axis_kind.Global\n         && K.range_kind altrng = Axis_kind.Global)\n    \"swap only for globals\";\n  let r' = K.with_tag \"1\" (K.range\n    ~size:(K.range_size r) ~sub:(K.range_sub r)\n    ~axis:(K.range_axis altrng) ~kind:(K.range_kind r)\n    ~dtype:(Dtype.val_of (K.dtype r)) ()) in\n  let alt' = K.with_tag \"1\" (K.range\n    ~size:(K.range_size altrng) ~sub:(K.range_sub altrng)\n    ~axis:(K.range_axis r) ~kind:(K.range_kind altrng)\n    ~dtype:(Dtype.val_of (K.dtype altrng)) ()) in\n  t.ast <- K.substitute [(r, r'); (altrng, alt')] t.ast;\n  t.ast <- K.graph_rewrite (fun node ->\n    match K.tag node with Some _ -> Some (K.replace node ()) | None -> None)\n    t.ast;\n  refresh t\n\n(* Mutual recursion: apply_opt <-> apply_tc_opt <-> pad_tc_axes *)\n\n(* Pad each TC axis to a multiple of tc.dims[i]. Returns false on failure. *)\nlet rec pad_tc_axes t axes (tc : Tc.t) tc_opt =\n  let pad_ok = ref true in\n  (try\n    for i = 0 to 2 do\n      let a = axes.(i) in\n      let idx = match List.find_index (fun r -> r == a) (rngs t) with\n        | Some j -> j | None -> raise (Opt_error \"range not found\") in\n      let dim = let n, m, k = tc.dims in [|n; m; k|].(i) in\n      if range_int_size a mod dim <> 0 then begin\n        if tc_opt < 2 then raise (Opt_error \"tc padding requires opt_level >= 2\");\n        ignore (apply_opt ~append_opt:false t\n          (K.Opt.Padto { axis = idx; amount = dim }));\n        axes.(i) <- List.nth (rngs t) idx\n      end\n    done\n  with Opt_error _ -> pad_ok := false);\n  !pad_ok\n\n(* Apply tensor core optimisation.  Returns Some axes on success, None\n   if no matching TC was found. 
*)\nand apply_tc_opt t use_tc axis tc_select tc_opt =\n  let red = match List.find_opt (fun x ->\n    match K.view x with Reduce _ -> true | _ -> false) (K.toposort t.ast) with\n    | Some r -> r | None -> raise (Opt_error \"no reduce ops for TensorCore\") in\n  let red_op, red_src = match K.view red with\n    | Reduce { op; src; _ } -> op, src | _ -> assert false in\n  if use_tc = 0 || red_op <> `Add then None\n  else\n    let mul = match K.view red_src with Cast { src; _ } -> src | _ -> red_src in\n    match K.view mul with\n    | Binary { op = `Mul; lhs = in0; rhs = in1; _ } ->\n        let tcs =\n          if tc_select = -1 then Renderer.tensor_cores t.ren\n          else match List.nth_opt (Renderer.tensor_cores t.ren) tc_select with\n            | Some tc -> [tc] | None -> raise (Opt_error \"invalid tensor core choice\") in\n        let in0_dt = Dtype.val_of (Dtype.scalarize (K.dtype in0)) in\n        let in1_dt = Dtype.val_of (Dtype.scalarize (K.dtype in1)) in\n        let red_dt = Dtype.val_of (Dtype.scalarize (K.dtype red)) in\n        let in0_ranges = K.find_nodes K.is_range in0 in\n        let in1_ranges = K.find_nodes K.is_range in1 in\n        let red_ranges = match K.view red with\n          Reduce { ranges; _ } -> ranges | _ -> [] in\n        let sort_desc = List.sort (fun a b ->\n          compare (K.range_axis b) (K.range_axis a)) in\n        let try_tc (tc : Tc.t) =\n          if (Renderer.device t.ren = \"CUDA\" || Renderer.device t.ren = \"NV\")\n             && tc.dtype_in = Dtype.Float32 && not allow_tf32 then None\n          else if Dtype.Val.scalar in0_dt <> tc.dtype_in\n               || Dtype.Val.scalar in1_dt <> tc.dtype_in\n               || Dtype.Val.scalar red_dt <> tc.dtype_out then None\n          else\n            let in0_r = sort_desc (List.filter (fun u ->\n              not (List.memq u in1_ranges)) in0_ranges) in\n            let in1_r = sort_desc (List.filter (fun u ->\n              not (List.memq u in0_ranges)) in1_ranges) 
in\n            let red_r = sort_desc red_ranges in\n            if in0_r = [] || in1_r = [] || red_r = [] then None\n            else\n              let choices = List.concat_map (fun a ->\n                List.concat_map (fun b ->\n                  List.map (fun c -> [a; b; c]) red_r) in0_r) in1_r in\n              if axis >= List.length choices then None\n              else begin\n                let axes = Array.of_list (List.nth choices axis) in\n                t.ast <- K.substitute [(red, K.with_tag \"TC\" red)] t.ast;\n                refresh t;\n                if not (pad_tc_axes t axes tc tc_opt) then None\n                else begin\n                  let ne = apply_tc_shifts t axes tc in\n                  if use_tc <> 2 then build_wmma_node t tc ne;\n                  t.tensor_core <- Some tc;\n                  Some (Array.to_list axes)\n                end\n              end\n        in\n        List.find_map try_tc tcs\n    | _ -> None\n\nand apply_opt ?(append_opt = true) t opt =\n  let ret = match opt with\n  | K.Opt.Nolocals ->\n      check (List.for_all (fun at ->\n        at <> Axis_kind.Warp && at <> Axis_kind.Local\n        && at <> Axis_kind.Group_reduce) (axis_types t))\n        \"no locals can't have locals\";\n      if append_opt then t.applied_opts <- t.applied_opts @ [opt];\n      t.dont_use_locals <- true;\n      None\n\n  | Tc { axis; tc_select; tc_opt; use_tc } ->\n      check (t.applied_opts = []) \"tensor core opts must be first\";\n      check (tc_select >= -1\n             && tc_select < List.length (Renderer.tensor_cores t.ren))\n        \"invalid tc_select\";\n      check (tc_opt >= 0 && tc_opt <= 2) \"invalid tc_opt\";\n      check (use_tc > 0 && use_tc <= 2) \"invalid use_tc\";\n      let axes = apply_tc_opt t use_tc axis tc_select tc_opt in\n      check (Option.is_some axes) \"no tensor core available\";\n      Option.bind axes (fun l -> match l with\n        | a :: b :: _ -> Some (a, b) | _ -> None)\n\n  | Padto { axis = 
_; amount } ->\n      let ra = real_axis t opt (K.Opt.axis opt) in\n      let r = List.nth (rngs t) ra in\n      apply_padto t r amount (reduceop t);\n      None\n\n  | Swap { axis = _; with_axis } ->\n      let ra = real_axis t opt (K.Opt.axis opt) in\n      let r = List.nth (rngs t) ra in\n      apply_swap t r with_axis;\n      None\n\n  | _ ->\n      let ra = real_axis t opt (K.Opt.axis opt) in\n      let r = List.nth (rngs t) ra in\n      let red_opt = reduceop t in\n      (match opt with\n       | Local _ | Group _ | Grouptop _ ->\n           check (Renderer.has_local t.ren) \"locals needed for opt\"\n       | _ -> ());\n      let new_kind = match opt with\n        | Local _ -> Axis_kind.Local | Upcast _ -> Axis_kind.Upcast\n        | Unroll _ -> Axis_kind.Unroll\n        | Group _ | Grouptop _ -> Axis_kind.Group_reduce\n        | Thread _ -> Axis_kind.Thread\n        | _ -> assert false in\n      let amt = match K.Opt.amount opt with\n        | Some 0 -> range_int_size r | Some a -> a\n        | None -> range_int_size r in\n      check_shared_memory t opt amt red_opt;\n      (match opt with Group _ | Grouptop _ ->\n        check_no_nested_group t r red_opt | _ -> ());\n      validate_shift_opt t opt r amt (K.range_kind r);\n      let top = match opt with Grouptop _ | Thread _ -> true | _ -> false in\n      Some (shift_to ~top t r amt new_kind)\n  in\n  if append_opt then t.applied_opts <- t.applied_opts @ [opt];\n  ret\n\n(* Extract sorted Param nodes from an AST.  Returns raw Param nodes;\n   the caller (Pipeline) constructs device buffers. *)\nlet bufs_from_ast ast =\n  List.sort (fun a b ->\n    match K.view a, K.view b with\n    | Param { idx = ia; _ }, Param { idx = ib; _ } -> compare ia ib\n    | _ -> 0)\n    (List.filter (fun n -> match K.view n with Param _ -> true | _ -> false)\n       (K.backward_slice ast))\n\n(* Top-level optimization dispatch.  Strategy closures are passed by the\n   caller (Pipeline) to break circular module dependencies. 
*)\nlet apply_opts ?beam_search ?hand_coded_optimizations ast ren =\n  if K.tag ast <> None then ast\n  else\n    let ki = match K.view ast with\n      | Sink { kernel_info = Some ki; _ } -> Some ki | _ -> None in\n    let k = create ast ren in\n    convert_loop_to_global k;\n    let optimize k =\n      match beam_search with\n      | Some bs -> bs k\n      | None ->\n          match hand_coded_optimizations with\n          | Some f when k.applied_opts = []\n            && not (List.exists (fun n ->\n                 match K.view n with Bufferize _ -> true | _ -> false)\n                 (K.backward_slice ast)) -> f k\n          | _ -> k\n    in\n    let k = match ki with\n      | Some { opts_to_apply = Some opts; _ } ->\n          List.iter (fun opt -> ignore (apply_opt k opt)) opts; k\n      | _ -> optimize k\n    in\n    let name_override = match ki with\n      | Some ki when ki.name <> \"\" && ki.name <> \"test\" -> Some ki.name\n      | _ -> None in\n    get_optimized_ast ?name_override k\n"
  },
  {
    "path": "packages/tolk/lib/codegen/opt/postrange.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\n(** Post-range kernel optimisation scheduler.\n\n    Wraps a late {!Tolk_ir.Kernel.t} AST and applies optimisation passes\n    (upcast, local, unroll, group, tensor core, padding, swap) that\n    reshape the iteration space before expansion and devectorisation.\n\n    See also {!Tc}, {!Heuristic}, {!Search}. *)\n\n(** {1:types Types} *)\n\ntype t\n(** Mutable scheduler state wrapping a kernel AST, renderer, and\n    optimisation history. *)\n\nexception Opt_error of string\n(** Raised when an optimisation precondition fails. *)\n\n(** {1:lifecycle Lifecycle} *)\n\nval create : Tolk_ir.Kernel.t -> Renderer.t -> t\n(** [create ast ren] is a fresh scheduler for [ast] with renderer\n    [ren]. *)\n\nval copy : t -> t\n(** [copy t] is a shallow copy with independent mutable state. *)\n\nval apply_opt :\n  ?append_opt:bool -> t -> Tolk_ir.Kernel.Opt.t ->\n  (Tolk_ir.Kernel.t * Tolk_ir.Kernel.t) option\n(** [apply_opt t opt] applies [opt] to [t] and returns\n    [Some (replaced_rng, new_rng)] for shift_to opts, the first two\n    TC axes as a pair for TC opts, or [None] otherwise.\n    [append_opt] defaults to [true].\n\n    Raises {!Opt_error} on precondition failure. *)\n\nval get_optimized_ast :\n  ?name_override:string -> t -> Tolk_ir.Kernel.t\n(** [get_optimized_ast t] finalises [t]: flattens ranges, generates a\n    debug name, and returns the AST with updated kernel info. *)\n\n(** {1:pipeline Pipeline} *)\n\nval apply_opts :\n  ?beam_search:(t -> t) ->\n  ?hand_coded_optimizations:(t -> t) ->\n  Tolk_ir.Kernel.t -> Renderer.t -> Tolk_ir.Kernel.t\n(** [apply_opts ast ren] optimises [ast] for [ren].  
Returns [ast]\n    unchanged if already tagged.  Strategy callbacks break circular\n    dependencies with {!Search} and {!Heuristic}. *)\n\nval bufs_from_ast : Tolk_ir.Kernel.t -> Tolk_ir.Kernel.t list\n(** [bufs_from_ast ast] is the Param nodes of [ast] sorted by index. *)\n\n(** {1:accessors Accessors} *)\n\nval ast : t -> Tolk_ir.Kernel.t\n(** [ast t] is [t]'s current kernel AST. *)\n\nval ren : t -> Renderer.t\n(** [ren t] is [t]'s renderer. *)\n\nval applied_opts : t -> Tolk_ir.Kernel.Opt.t list\n(** [applied_opts t] is the opts applied so far. *)\n\nval tensor_core : t -> Tc.t option\n(** [tensor_core t] is the active tensor core, if any. *)\n\n(** {1:shape Shape queries} *)\n\nval rngs : t -> Tolk_ir.Kernel.t list\n(** Active ranges sorted by (axis kind, axis number). *)\n\nval shape_len : t -> int\n(** Number of active ranges. *)\n\nval full_shape : t -> Tolk_ir.Kernel.t list\n(** Size node of each active range. *)\n\nval axis_types : t -> Tolk_ir.Axis_kind.t list\n(** Axis kind of each active range. *)\n\nval shape_str : t -> string list\n(** Labelled axis names ([\"g0\"; \"l0\"; \"r0\"; …]). *)\n\nval shape_str_to_axis : t -> string list -> int list\n(** Map axis label names to indices.  Raises [Failure] if not found. *)\n\nval axes_of : t -> Tolk_ir.Axis_kind.t list -> int list\n(** Indices of ranges whose kind is in the given list. *)\n\nval ranges_of : t -> Tolk_ir.Axis_kind.t list -> Tolk_ir.Kernel.t list\n(** Ranges whose kind is in the given list. *)\n\nval upcastable_dims : t -> int list\n(** Global/Local/Loop axes with constant size > 1. *)\n\nval unrollable_dims : t -> int list\n(** Group_reduce/Reduce axes with constant size > 1. *)\n\nval upcast_size : t -> int\n(** Product of Upcast and Unroll shape sizes. *)\n\nval output_shape : t -> Tolk_ir.Kernel.t list\n(** {!full_shape} with reduce/unroll/group axes replaced by [1]. *)\n\nval upcasted : t -> int\n(** Number of Upcast and Unroll axes. 
*)\n\nval group_for_reduces : t -> int\n(** Number of Group_reduce axes. *)\n\n(** {1:queries Queries} *)\n\nval reduceop : t -> Tolk_ir.Kernel.t option\n(** First Reduce node in the AST, if any. *)\n\nval bufs : t -> Tolk_ir.Kernel.t list\n(** Index nodes in the AST, reversed. *)\n\nval colored_shape : t -> string\n(** Debug string of range sizes with axis-kind labels. *)\n\nval range_int_size : Tolk_ir.Kernel.t -> int\n(** Constant integer size of a range node, or [0]. *)\n\nval real_axis :\n  t -> Tolk_ir.Kernel.Opt.t -> int option -> int\n(** [real_axis t op axis] resolves [axis] for [op] to a range index.\n    Returns [-1] when [axis] is [None] or [op] is TC.\n\n    Raises {!Opt_error} on invalid axis. *)\n\n(** {1:transforms Transforms} *)\n\nval convert_loop_to_global : t -> unit\n(** Promote eligible Loop ranges to Global. *)\n\nval shift_to :\n  ?top:bool -> ?input_new_rng:Tolk_ir.Kernel.t ->\n  t -> Tolk_ir.Kernel.t -> int -> Tolk_ir.Axis_kind.t ->\n  Tolk_ir.Kernel.t * Tolk_ir.Kernel.t\n(** [shift_to t rng amount kind] splits [rng] by [amount].  Returns\n    [(replaced_rng, new_rng)].  [top] defaults to [false].\n    Raises [Failure] if [amount] does not divide the range. *)\n"
  },
  {
    "path": "packages/tolk/lib/codegen/opt/search.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\n(* Beam search kernel optimizer. *)\n\nopen Tolk_ir\nmodule K = Kernel\nmodule P = Postrange\n\n(* Environment *)\n\nlet beam_padto = Helpers.getenv \"BEAM_PADTO\" 0 <> 0\nlet beam_upcast_max = Helpers.getenv \"BEAM_UPCAST_MAX\" 256\nlet beam_local_max = Helpers.getenv \"BEAM_LOCAL_MAX\" 1024\nlet beam_log_surpass_max = Helpers.getenv \"BEAM_LOG_SURPASS_MAX\" 0 <> 0\nlet tc = Helpers.getenv \"TC\" 1\nlet tc_opt = Helpers.getenv \"TC_OPT\" 2\nlet nolocals = Helpers.getenv \"NOLOCALS\" 0 <> 0\nlet beam_uops_max = Helpers.getenv \"BEAM_UOPS_MAX\" 3000\nlet beam_timeout_sec = Helpers.getenv \"BEAM_TIMEOUT_SEC\" 10\nlet beam_strict_mode = Helpers.getenv \"BEAM_STRICT_MODE\" 0 <> 0\nlet debug = Helpers.getenv \"DEBUG\" 0\nlet beam_dev_timeout = Helpers.getenv \"BEAM_DEV_TIMEOUT\" 1 <> 0\nlet beam_debug = Helpers.getenv \"BEAM_DEBUG\" 0\nlet cachelevel = Helpers.getenv \"CACHELEVEL\" 1\nlet ignore_beam_cache = Helpers.getenv \"IGNORE_BEAM_CACHE\" 0 <> 0\nlet beam_min_progress =\n  (match Sys.getenv_opt \"BEAM_MIN_PROGRESS\" with\n   | Some s -> (try Float.of_string s with Failure _ -> 0.01)\n   | None -> 0.01) /. 1e6\n\n(* All candidate optimizations tried during beam search. 
*)\nlet actions =\n  let open K.Opt in\n  let acc = ref [] in\n  let add opt = acc := opt :: !acc in\n  let gen mk max_axis amounts =\n    List.iter (fun amount ->\n      for axis = 0 to max_axis do add (mk axis amount) done) amounts\n  in\n  gen (fun axis amount -> Upcast { axis; amount }) 7\n    [0; 2; 3; 4; 5; 7];\n  gen (fun axis amount -> Unroll { axis; amount }) 4\n    [0; 4; 7];\n  gen (fun axis amount -> Local { axis; amount }) 5\n    [2; 3; 4; 8; 13; 16; 29];\n  gen (fun axis amount -> Grouptop { axis; amount }) 2\n    [13; 16; 28; 29; 32; 49; 64; 256];\n  gen (fun axis amount -> Group { axis; amount }) 2\n    [0; 4; 8; 16];\n  if beam_padto then\n    gen (fun axis amount -> Padto { axis; amount }) 6 [32];\n  add (Local { axis = 0; amount = 32 });\n  add (Local { axis = 6; amount = 2 });\n  add (Tc { axis = 0; tc_select = -1; tc_opt = 0; use_tc = tc });\n  for axis = 0 to 8 do\n    add (Tc { axis; tc_select = -1; tc_opt; use_tc = tc })\n  done;\n  for axis_0 = 0 to 4 do\n    for axis_1 = axis_0 + 1 to 4 do\n      add (Swap { axis = axis_0; with_axis = axis_1 })\n    done\n  done;\n  gen (fun axis amount -> Thread { axis; amount }) 2\n    [2; 3; 4; 5; 8; 12; 16; 24; 32; 64];\n  if nolocals then add Nolocals;\n  List.rev !acc\n\nlet const_to_int_opt n =\n  match K.const_arg n with Some (Int v) -> Some (Int64.to_int v) | _ -> None\n\n(* Skip actions that are equivalent to the zero-variant already in the list. *)\nlet is_noop a ax full_shape =\n  ax < List.length full_shape\n  && (match K.Opt.amount a, const_to_int_opt (List.nth full_shape ax) with\n      | Some amt, Some sz when sz = amt -> List.mem (K.Opt.with_amount a 0) actions\n      | _ -> false)\n\nlet is_tc = function K.Opt.Tc _ -> true | _ -> false\n\n(* Return valid actions for a scheduler state as (index, scheduler) pairs. 
*)\nlet get_kernel_actions ?(include_0 = true) ?max_up s =\n  let max_up = match max_up with\n    | Some n -> n | None -> beam_upcast_max in\n  let max_lcl = beam_local_max in\n  let acted = ref (if include_0 then [(0, s)] else []) in\n  List.iteri (fun i a ->\n    let dominated = ref false in\n    (* Pre-filter: resolve axis, skip if out of range or noop. *)\n    (match K.Opt.axis a with\n     | Some _ when not (is_tc a) ->\n         (match P.real_axis s a (K.Opt.axis a) with\n          | ax ->\n              if ax >= P.shape_len s || is_noop a ax (P.full_shape s) then\n                dominated := true\n          | exception P.Opt_error _ -> dominated := true)\n     | _ -> ());\n    if not !dominated then begin\n      let s2 = P.copy s in\n      try\n        ignore (P.apply_opt s2 a : _ option);\n        (* Check upcast/local budget. *)\n        let up = ref 1 and lcl = ref 1 in\n        let tc_up = match P.tensor_core s2 with\n          | Some (tc : Tc.t) ->\n              let m, n, k = tc.dims in m * n * k / tc.threads\n          | None -> 1 in\n        List.iter2 (fun x t ->\n          match const_to_int_opt x with\n          | None -> ()\n          | Some sz ->\n              if t = Axis_kind.Upcast || t = Axis_kind.Unroll then\n                up := !up * sz\n              else if t = Axis_kind.Warp || t = Axis_kind.Local\n                      || t = Axis_kind.Group_reduce then\n                lcl := !lcl * sz)\n          (P.full_shape s2) (P.axis_types s2);\n        if !up / tc_up > max_up || !lcl > max_lcl then begin\n          if beam_log_surpass_max then\n            Printf.eprintf\n              \"too many upcast/local. 
up/tc_up=%d, max_up=%d, lcl=%d, max_lcl=%d\\n%!\"\n              (!up / tc_up) max_up !lcl max_lcl\n        end else\n          acted := (i + 1, s2) :: !acted\n      with P.Opt_error _ -> ()\n    end)\n    actions;\n  List.rev !acted\n\n(* Resolve symbolic global dims and shrink until they fit max_global_size\n   by halving dims > 16 from the end.  Returns (scaled_size, factor). *)\nlet get_test_global_size global_size var_vals max_global_size =\n  let test = Array.map (fun sz -> K.sym_infer sz var_vals) global_size in\n  let input_size = Array.fold_left ( * ) 1 test in\n  let cont = ref true in\n  while !cont && Array.fold_left ( * ) 1 test > max_global_size do\n    cont := false;\n    for j = Array.length test - 1 downto 0 do\n      if not !cont && test.(j) > 16 then begin\n        test.(j) <- test.(j) / 2;\n        cont := true\n      end\n    done\n  done;\n  let scaled = Array.fold_left ( * ) 1 test in\n  (test, Float.of_int input_size /. Float.of_int (max scaled 1))\n\n(* Compile result from try_compile. *)\ntype compiled = {\n  program : Program_spec.t;\n  compile_time : float;\n}\n\nexception Compile_timeout\n\n(* Compile a single candidate: optimize → lower → check uop count → compile.\n   Takes (index, scheduler) and returns (index, result) so callers can\n   dispatch candidates in parallel and match results back.\n   XXX compilation is sequential; should be parallelised. 
*)\nlet try_compile ~use_timeout ((idx, s) : int * P.t) (device : Device.t)\n    : int * compiled option =\n  let ren = P.ren s in\n  let prev_handler =\n    if use_timeout then begin\n      let h = Sys.signal Sys.sigalrm\n        (Sys.Signal_handle (fun _ -> raise Compile_timeout)) in\n      ignore (Unix.alarm beam_timeout_sec);\n      Some h\n    end else None\n  in\n  let cleanup () = match prev_handler with\n    | Some h -> ignore (Unix.alarm 0); Sys.set_signal Sys.sigalrm h\n    | None -> ()\n  in\n  let result =\n    try\n      let ast = P.get_optimized_ast ~name_override:\"test\" (P.copy s) in\n      let ir = Linearizer.linearize (Codegen_lower.lower ren ast) in\n      let uop_count = Program.length ir in\n      if beam_uops_max > 0 && uop_count >= beam_uops_max then begin\n        if beam_log_surpass_max then\n          Printf.eprintf \"too many uops. uop_count=%d, uops_max=%d\\n%!\"\n            uop_count beam_uops_max;\n        None\n      end else begin\n        let st = Unix.gettimeofday () in\n        let prog = Device.compile_program device ir in\n        let compile_time = Unix.gettimeofday () -. st in\n        Some { program = prog; compile_time }\n      end\n    with\n    | Compile_timeout ->\n        if debug >= 2 then\n          Printf.eprintf \"*** BEAM COMPILE TIMEOUT\\n%!\";\n        None\n    | (Out_of_memory | Stack_overflow) as exn ->\n        cleanup (); raise exn\n    | Failure _ | Invalid_argument _ ->\n        if debug >= 4 then\n          Printf.eprintf \"%s\\n%!\" (Printexc.get_backtrace ());\n        None\n    | exn ->\n        if beam_strict_mode then (cleanup (); raise exn)\n        else None\n  in\n  cleanup ();\n  (idx, result)\n\n(* Time a compiled program on device.  Returns a list of timing samples. *)\nlet time_program ~device p rawbufs var_vals ~early_stop ~cnt\n    ~clear_l2 ~allow_test_size ~dev_timeout =\n  let timeout =\n    if dev_timeout && Float.is_finite early_stop then\n      Some (max 1 (Float.to_int (early_stop *. 
1e3)))\n    else None\n  in\n  let factor = ref 1.0 in\n  let p =\n    if allow_test_size then\n      let scaled_global, f =\n        get_test_global_size (Program_spec.global_size p) var_vals 65536 in\n      factor := f;\n      Program_spec.with_global_dims scaled_global p\n    else p\n  in\n  let car = Realize.Compiled_runner.create ~device p in\n  let input_bufs =\n    List.map (List.nth rawbufs) (Program_spec.globals p) in\n  let tms = ref [] in\n  let stopped = ref false in\n  for _ = 1 to cnt do\n    if not !stopped then begin\n      if clear_l2 then Device.invalidate_caches device;\n      let tm =\n        match Realize.Compiled_runner.call car input_bufs var_vals\n                ~wait:true ~timeout with\n        | Some t -> t *. !factor\n        | None -> infinity\n      in\n      tms := tm :: !tms;\n      if early_stop < List.fold_left min infinity !tms then stopped := true\n    end\n  done;\n  List.rev !tms\n\n(* Build name-keyed var_vals from Define_var nodes in the AST,\n   using the midpoint of each variable's range. 
*)\nlet build_var_vals ast =\n  K.find_nodes (fun n -> match K.view n with Define_var _ -> true | _ -> false) ast\n  |> List.map (fun n -> match K.view n with\n       | Define_var { name; lo; hi; _ } -> (name, (lo + hi) / 2)\n       | _ -> assert false)\n\nlet beam_search ?(allow_test_size = true) ?(disable_cache = ignore_beam_cache)\n    (s : P.t) (rawbufs : Device.Buffer.t list) (amt : int)\n    (device : Device.t) : P.t =\n  let ren = P.ren s in\n  (* Disk cache lookup *)\n  let cache_key =\n    let ast_key =\n      Digest.to_hex (Digest.string (Marshal.to_string (P.ast s) [])) in\n    Printf.sprintf \"%s_%d_%b_%s_%s\" ast_key amt allow_test_size\n      (Renderer.device ren) (Renderer.name ren)\n  in\n  let cache_enabled = not disable_cache && cachelevel >= 1 in\n  (match\n     if cache_enabled then\n       (try Diskcache.get ~table:\"beam_search\" ~key:cache_key\n        with _ -> None)\n     else None\n   with\n   | Some cached_opts ->\n       let ret = P.copy s in\n       let skip = List.length (P.applied_opts s) in\n       List.iteri (fun i opt ->\n         if i >= skip then ignore (P.apply_opt ret opt)) cached_opts;\n       ret\n   | None ->\n  let beam = ref [(s, infinity)] in\n  let seen_libs : (bytes, unit) Hashtbl.t = Hashtbl.create 256 in\n  if beam_debug > 0 then\n    Format.eprintf \"BEAM_SEARCH:@\\n%a@.\" K.pp (P.ast s);\n  if debug >= 2 then\n    Printf.eprintf \"   0.00s:                from   1 ->   1 actions %s\\n%!\"\n      (P.colored_shape s);\n  List.iter Device.Buffer.ensure_allocated rawbufs;\n  let var_vals = build_var_vals (P.ast s) in\n  let st = Unix.gettimeofday () in\n  let exiting = ref false in\n  (try while not !exiting do\n    let candidates =\n      List.concat_map (fun (si, _) ->\n        List.map snd (get_kernel_actions ~include_0:false si))\n        !beam\n    in\n    let timed = ref [] in\n    let least_compute_ops = ref infinity in\n    let n_candidates = List.length candidates in\n    (* XXX: tinygrad uses a multiprocessing 
pool for parallel compilation.\n       We compile sequentially; parallelise with Domains when needed. *)\n    List.iteri (fun i cand ->\n      match try_compile ~use_timeout:true (i, cand) device with\n      | _, None -> ()\n      | _, Some { program; compile_time } ->\n          let lib = match Program_spec.lib program with\n            | Some l -> l | None -> assert false in\n          if Hashtbl.mem seen_libs lib then ()\n          else begin\n            let this_ops = match (Program_spec.estimates program).ops with\n              | Program_spec.Estimates.Int n -> Float.of_int n\n              | Symbolic node -> Float.of_int (K.sym_infer node var_vals) in\n            least_compute_ops := Float.min this_ops !least_compute_ops;\n            if !least_compute_ops *. 1000.0 < this_ops then begin\n              if beam_log_surpass_max then\n                Printf.eprintf \"too much compute. this=%e, least=%e\\n%!\"\n                  this_ops !least_compute_ops\n            end else begin\n              Hashtbl.replace seen_libs lib ();\n              let early_stop = match !beam with\n                | (_, best) :: _ -> best *. 3.0 | [] -> 1.0 in\n              (match\n                 time_program ~device program rawbufs var_vals ~early_stop\n                   ~cnt:3 ~clear_l2:true ~allow_test_size\n                   ~dev_timeout:beam_dev_timeout\n               with\n               | tms ->\n                   let best_tm = List.fold_left min infinity tms in\n                   timed := (cand, best_tm) :: !timed;\n                   if beam_debug > 1 then\n                     Printf.eprintf\n                       \"%7.2fs: %5d %12e compile/%12e run      %4d/%4d   %s\\n%!\"\n                       (Unix.gettimeofday () -. 
st) i compile_time best_tm\n                       (List.length !timed) n_candidates (P.colored_shape cand)\n                   else if debug >= 2 then\n                     Printf.eprintf \"\\r%7.2fs: %12e      %4d/%4d         %s%!\"\n                       (Unix.gettimeofday () -. st) best_tm\n                       (List.length !timed) n_candidates (P.colored_shape cand)\n               | exception exn ->\n                   if beam_debug > 0 then\n                     Printf.eprintf \"BEAM failed for opts: %s\\n%s\\n%!\"\n                       (String.concat \", \"\n                          (List.map K.Opt.to_string (P.applied_opts cand)))\n                       (Printexc.to_string exn);\n                   match exn with\n                   | Failure _ | Invalid_argument _ -> ()\n                   | _ -> raise exn)\n            end\n          end)\n      candidates;\n    (* Select best candidates *)\n    let opts = List.sort (fun (_, t1) (_, t2) -> Float.compare t1 t2) !timed in\n    let should_exit = match opts, !beam with\n      | [], _ -> true\n      | (_, t) :: _, _ when t < beam_min_progress -> true\n      | (_, ot) :: _, (_, bt) :: _ when bt -. ot < beam_min_progress -> true\n      | _ -> false in\n    exiting := should_exit;\n    if not should_exit then\n      beam := List.filteri (fun i _ -> i < amt) opts\n    else (match opts, !beam with\n      | (s_best, t_best) :: _, (_, t_beam) :: _ when t_best < t_beam ->\n          beam := [(s_best, t_best)]\n      | _ -> ());\n    if debug >= 2 then\n      Printf.eprintf \"\\r%7.2fs: %12e from %3d -> %3d actions %s\\n%!\"\n        (Unix.gettimeofday () -. 
st) (snd (List.hd !beam))\n        n_candidates (List.length opts) (P.colored_shape (fst (List.hd !beam)))\n  done\n   with Sys.Break -> raise Sys.Break);\n  let result = fst (List.hd !beam) in\n  if cache_enabled then\n    Diskcache.put ~table:\"beam_search\" ~key:cache_key (P.applied_opts result);\n  if beam_debug > 0 then\n    Printf.eprintf \"BEAM_SEARCH: final tm=%e, applied_opts=%s\\n%!\"\n      (snd (List.hd !beam))\n      (String.concat \", \" (List.map K.Opt.to_string (P.applied_opts result)));\n  result)\n"
  },
  {
    "path": "packages/tolk/lib/codegen/opt/search.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\n(** Beam search kernel optimiser.\n\n    Explores the space of kernel optimisations by compiling, timing, and\n    selecting the best candidates over multiple rounds. *)\n\nval beam_search :\n  ?allow_test_size:bool ->\n  ?disable_cache:bool ->\n  Postrange.t ->\n  Device.Buffer.t list ->\n  int ->\n  Device.t ->\n  Postrange.t\n(** [beam_search s rawbufs amt device] optimises scheduler [s] using\n    beam search with beam width [amt].\n\n    - [allow_test_size] (default [true]) scales down global dimensions\n      during timing to stay within hardware limits.\n    - [disable_cache] (default from [IGNORE_BEAM_CACHE] env) skips the\n      on-disk result cache.\n\n    Returns the best scheduler found. *)\n"
  },
  {
    "path": "packages/tolk/lib/codegen/opt/tc.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\nopen Tolk_ir\n\nlet strf = Printf.sprintf\nlet pow2 n = 1 lsl n\nlet log2 n = int_of_float (log (float_of_int n) /. log 2.0)\nlet labels prefix n = List.init n (fun i -> strf \"%s%d\" prefix i)\n\n(* D = A * B + C.  A is (M x K), B is (K x N), C and D are (M x N).\n   dims = (N, M, K).  All axes have size 2. *)\ntype t = {\n  dims : int * int * int;\n  threads : int;\n  elements_per_thread : int * int * int;\n  dtype_in : Dtype.scalar;\n  dtype_out : Dtype.scalar;\n  opts : string list;\n  swizzle :\n    (string list * string list * string list)\n    * (string list * string list * string list);\n}\n\nlet get_reduce_axes (tc : t) =\n  let _, _, k = tc.dims in\n  List.init (log2 k) (fun i -> (i, 2))\n\nlet get_upcast_axes (tc : t) =\n  List.filter (fun o -> o.[0] = 'u') tc.opts\n\nlet get_local_axes (tc : t) =\n  List.filter (fun o -> o.[0] = 'l') tc.opts\n\n(* Shape string before the reduce UNROLL: numbered local/upcast labels\n   from opts, then reduce labels. *)\nlet base_shape_str (tc : t) =\n  let u = ref 0 and l = ref 0 in\n  let opts_labels = List.map (fun o ->\n    let c = o.[0] in\n    let cnt = if c = 'u' then u else l in\n    let s = strf \"%c%d\" c !cnt in\n    incr cnt; s) tc.opts in\n  opts_labels @ labels \"r\" (List.length (get_reduce_axes tc))\n\n(* Upcast + reduce axis names in reverse order, used to define the UNROLL\n   axes after the opts are applied. 
*)\nlet base_upcast_axes (tc : t) =\n  List.rev (labels \"r\" (List.length (get_reduce_axes tc))\n            @ labels \"u\" (List.length (get_upcast_axes tc)))\n\n(* Build remap tables from the canonical axis order (l0..lN, u0..uN, r0..rN)\n   to the swizzled order for operands A and B. *)\nlet remaps (tc : t) =\n  let n_local = List.length (get_local_axes tc) in\n  let n_upcast = List.length (get_upcast_axes tc) in\n  let n_reduce = List.length (get_reduce_axes tc) in\n  let fwd = labels \"l\" n_local @ labels \"u\" n_upcast @ labels \"r\" n_reduce in\n  let (s0_l, s0_u, s0_r), (s1_l, s1_u, s1_r) = tc.swizzle in\n  let make flat =\n    let tbl = Hashtbl.create (List.length fwd) in\n    List.iter2 (Hashtbl.replace tbl) fwd flat;\n    tbl in\n  (make (s0_l @ s0_u @ s0_r), make (s1_l @ s1_u @ s1_r))\n\n(* Compute the two permutation vectors (for A and B) that reorder\n   shape_str according to the swizzle. *)\nlet permutes_for_shape_str (tc : t) shape_str =\n  let r0, r1 = remaps tc in\n  let perm remap =\n    List.mapi (fun i ss ->\n      match Hashtbl.find_opt remap ss with\n      | Some mapped ->\n          let rec find j = function\n            | [] -> failwith (strf \"permutes_for_shape_str: %S not found\" mapped)\n            | x :: _ when x = mapped -> j\n            | _ :: rest -> find (j + 1) rest in\n          find 0 shape_str\n      | None -> i) shape_str in\n  (perm r0, perm r1)\n\nlet to_string (tc : t) =\n  let n, m, k = tc.dims in\n  strf \"WMMA_%d_%d_%d_%s_%s\" n m k\n    (Dtype.scalar_cname tc.dtype_in) (Dtype.scalar_cname tc.dtype_out)\n\nlet validate (tc : t) =\n  let n_local = List.length (get_local_axes tc) in\n  let n_upcast = List.length (get_upcast_axes tc) in\n  let n_reduce = List.length (get_reduce_axes tc) in\n  let n, m, _ = tc.dims in\n  let a_ept, b_ept, c_ept = tc.elements_per_thread in\n  let check cond msg = if not cond then failwith msg in\n  check (n * m = pow2 (n_local + n_upcast))\n    (strf \"N(%d) x M(%d) != local(%d) x 
upcast(%d)\"\n       n m (pow2 n_local) (pow2 n_upcast));\n  check (pow2 n_local = tc.threads)\n    (strf \"%d threads but found %d locals\" tc.threads (pow2 n_local));\n  check (pow2 n_upcast = c_ept)\n    (strf \"%d C elements but found %d upcasts\" c_ept (pow2 n_upcast));\n  let count_dim d = List.length (List.filter (fun o -> o.[1] = d) tc.opts) in\n  check (n = pow2 (count_dim '0'))\n    (strf \"opts wrong on dims[0]: %d vs %d\" n (pow2 (count_dim '0')));\n  check (m = pow2 (count_dim '1'))\n    (strf \"opts wrong on dims[1]: %d vs %d\" m (pow2 (count_dim '1')));\n  let (s0_l, s0_u, s0_r), (s1_l, s1_u, s1_r) = tc.swizzle in\n  let len = List.length in\n  check (len s0_l = n_local && len s1_l = n_local) \"local swizzle size wrong\";\n  check (len s0_u = n_upcast && len s1_u = n_upcast) \"upcast swizzle size wrong\";\n  check (len s0_r = n_reduce && len s1_r = n_reduce) \"reduce swizzle size wrong\";\n  let total = n_local + n_upcast + n_reduce in\n  let r0, r1 = remaps tc in\n  check (Hashtbl.length r0 = total && Hashtbl.length r1 = total) \"remaps wrong size\";\n  let u = ref 0 and l = ref 0 in\n  let zs0 = ref [] and zs1 = ref [] in\n  List.iter (fun o ->\n    let label = strf \"%c%d\" o.[0] (if o.[0] = 'u' then !u else !l) in\n    if o.[1] = '0' then zs0 := label :: !zs0;\n    if o.[1] = '1' then zs1 := label :: !zs1;\n    if o.[0] = 'u' then incr u else incr l) tc.opts;\n  let non_local_non_zero zs x = not (List.mem x zs) && x.[0] <> 'l' in\n  let upcasted_0 = List.filter (non_local_non_zero !zs0) (s0_u @ s0_r) in\n  let upcasted_1 = List.filter (non_local_non_zero !zs1) (s1_u @ s1_r) in\n  check (pow2 (len upcasted_0) = a_ept)\n    (strf \"elements_per_thread[0] mismatch: %d vs %d\" (pow2 (len upcasted_0)) a_ept);\n  check (pow2 (len upcasted_1) = b_ept)\n    (strf \"elements_per_thread[1] mismatch: %d vs %d\" (pow2 (len upcasted_1)) b_ept)\n\n(* Tensor core definitions *)\n\nlet mk ~dims ~threads ~ept ~opts ~swizzle dtypes =\n  List.map (fun (dtype_in, 
dtype_out) ->\n    { dims; threads; elements_per_thread = ept;\n      dtype_in; dtype_out; opts; swizzle }) dtypes\n\n(* NVIDIA *)\n\nlet cuda_tc_opts = [\"u0\";\"l0\";\"l0\";\"l1\";\"l1\";\"l1\";\"u1\"]\n\nlet cuda_81616 = mk ~dims:(8,16,16) ~threads:32 ~ept:(8,4,4) ~opts:cuda_tc_opts\n  ~swizzle:(([\"r1\";\"r2\";\"l2\";\"l3\";\"l4\"], [\"u1\";\"r3\"], [\"l0\";\"l1\";\"u0\";\"r0\"]),\n            ([\"r1\";\"r2\";\"u0\";\"l0\";\"l1\"], [\"r0\";\"r3\"], [\"l2\";\"l3\";\"l4\";\"u1\"]))\n  Dtype.[(Float16, Float32); (Bfloat16, Float32); (Float16, Float16)]\n\nlet cuda_81632_f8 = mk ~dims:(8,16,32) ~threads:32 ~ept:(16,8,4) ~opts:cuda_tc_opts\n  ~swizzle:(([\"r2\";\"r3\";\"l2\";\"l3\";\"l4\"], [\"u1\";\"r4\"], [\"l0\";\"l1\";\"u0\";\"r0\";\"r1\"]),\n            ([\"r2\";\"r3\";\"u0\";\"l0\";\"l1\"], [\"r1\";\"r4\"], [\"l2\";\"l3\";\"l4\";\"u1\";\"r0\"]))\n  Dtype.[(Fp8e4m3, Float32); (Fp8e5m2, Float32)]\n\nlet cuda_8168_f16 = mk ~dims:(8,16,8) ~threads:32 ~ept:(4,2,4) ~opts:cuda_tc_opts\n  ~swizzle:(([\"r1\";\"r2\";\"l2\";\"l3\";\"l4\"], [\"r0\";\"u1\"], [\"l0\";\"l1\";\"u0\"]),\n            ([\"r1\";\"r2\";\"u0\";\"l0\";\"l1\"], [\"u1\";\"r0\"], [\"l2\";\"l3\";\"l4\"]))\n  Dtype.[(Float16, Float32); (Float16, Float16)]\n\nlet cuda_8168_tf32 = mk ~dims:(8,16,8) ~threads:32 ~ept:(4,2,4) ~opts:cuda_tc_opts\n  ~swizzle:(([\"r0\";\"r1\";\"l2\";\"l3\";\"l4\"], [\"u1\";\"r2\"], [\"l0\";\"l1\";\"u0\"]),\n            ([\"r0\";\"r1\";\"u0\";\"l0\";\"l1\"], [\"u1\";\"r2\"], [\"l2\";\"l3\";\"l4\"]))\n  Dtype.[(Float32, Float32)]\n\nlet cuda_sm75 = cuda_8168_f16\nlet cuda_sm80 = cuda_81616 @ cuda_8168_f16 @ cuda_8168_tf32\nlet cuda_sm89 = cuda_sm80 @ cuda_81632_f8\n\n(* AMD *)\n\nlet amd_rdna3 = mk ~dims:(16,16,16) ~threads:32 ~ept:(16,16,8)\n  ~opts:[\"l0\";\"l0\";\"l0\";\"l0\";\"l1\";\"u1\";\"u1\";\"u1\"]\n  ~swizzle:(([\"l4\";\"u0\";\"u1\";\"u2\";\"l0\"], [\"r1\";\"r2\";\"r3\"], [\"l1\";\"l2\";\"l3\";\"r0\"]),\n            ([\"l0\";\"l1\";\"l2\";\"l3\";\"l4\"], 
[\"r1\";\"r2\";\"r3\"], [\"u0\";\"u1\";\"u2\";\"r0\"]))\n  Dtype.[(Float16, Float32); (Float16, Float16); (Bfloat16, Float32)]\n\nlet amd_rdna4 = mk ~dims:(16,16,16) ~threads:32 ~ept:(8,8,8)\n  ~opts:[\"l0\";\"l0\";\"l0\";\"l0\";\"u1\";\"u1\";\"u1\";\"l1\"]\n  ~swizzle:(([\"u0\";\"u1\";\"u2\";\"l4\";\"r2\"], [\"r0\";\"r1\";\"r3\"], [\"l0\";\"l1\";\"l2\";\"l3\"]),\n            ([\"l0\";\"l1\";\"l2\";\"l3\";\"r2\"], [\"r0\";\"r1\";\"r3\"], [\"l4\";\"u0\";\"u1\";\"u2\"]))\n  Dtype.[(Float16, Float32); (Float16, Float16); (Bfloat16, Float32); (Bfloat16, Bfloat16)]\n\nlet amd_cdna_161616 = mk ~dims:(16,16,16) ~threads:64 ~ept:(4,4,4)\n  ~opts:[\"l0\";\"l0\";\"l0\";\"l0\";\"u1\";\"u1\";\"l1\";\"l1\"]\n  ~swizzle:(([\"u0\";\"u1\";\"l4\";\"l5\";\"r2\";\"r3\"], [\"r0\";\"r1\"], [\"l0\";\"l1\";\"l2\";\"l3\"]),\n            ([\"l0\";\"l1\";\"l2\";\"l3\";\"r2\";\"r3\"], [\"r0\";\"r1\"], [\"l4\";\"l5\";\"u0\";\"u1\"]))\n  Dtype.[(Float16, Float32); (Bfloat16, Float32)]\n\nlet amd_cdna_161632 = mk ~dims:(16,16,32) ~threads:64 ~ept:(8,8,4)\n  ~opts:[\"l0\";\"l0\";\"l0\";\"l0\";\"u1\";\"u1\";\"l1\";\"l1\"]\n  ~swizzle:(([\"u0\";\"u1\";\"l4\";\"l5\";\"r3\";\"r4\"], [\"r0\";\"r1\"], [\"l0\";\"l1\";\"l2\";\"l3\";\"r2\"]),\n            ([\"l0\";\"l1\";\"l2\";\"l3\";\"r3\";\"r4\"], [\"r0\";\"r1\"], [\"l4\";\"l5\";\"u0\";\"u1\";\"r2\"]))\n  Dtype.[(Fp8e5m2, Float32); (Fp8e4m3, Float32); (Float16, Float32); (Bfloat16, Float32)]\n\nlet amd_cdna_1616128 = mk ~dims:(16,16,128) ~threads:64 ~ept:(32,32,4)\n  ~opts:[\"l0\";\"l0\";\"l0\";\"l0\";\"u1\";\"u1\";\"l1\";\"l1\"]\n  ~swizzle:(([\"u0\";\"u1\";\"l4\";\"l5\";\"r5\";\"r6\"], [\"r0\";\"r1\"], [\"l0\";\"l1\";\"l2\";\"l3\";\"r2\";\"r3\";\"r4\"]),\n            ([\"l0\";\"l1\";\"l2\";\"l3\";\"r5\";\"r6\"], [\"r0\";\"r1\"], [\"l4\";\"l5\";\"u0\";\"u1\";\"r2\";\"r3\";\"r4\"]))\n  Dtype.[(Fp8e5m2, Float32); (Fp8e4m3, Float32)]\n\nlet amd_cdna3 = List.filteri (fun i _ -> i < 2) amd_cdna_161632 @ amd_cdna_161616\nlet amd_cdna4 = amd_cdna_1616128 @ 
amd_cdna_161632 @ amd_cdna_161616\n\n(* Apple Metal *)\n\nlet metal = mk ~dims:(8,8,8) ~threads:32 ~ept:(2,2,2)\n  ~opts:[\"u0\";\"l0\";\"l1\";\"l1\";\"l0\";\"l1\"]\n  ~swizzle:(([\"r1\";\"l1\";\"l2\";\"r2\";\"l4\"], [\"r0\"], [\"u0\";\"l0\";\"l3\"]),\n            ([\"l0\";\"r0\";\"r1\";\"l3\";\"r2\"], [\"u0\"], [\"l1\";\"l2\";\"l4\"]))\n  Dtype.[(Float32, Float32); (Float16, Float32); (Float16, Float16);\n         (Bfloat16, Float32); (Bfloat16, Bfloat16)]\n\n(* Apple AMX *)\n\nlet amx =\n  let sz = 64 / Dtype.itemsize Dtype.float32 in\n  mk ~dims:(sz,sz,1) ~threads:1 ~ept:(sz, sz, sz*sz)\n    ~opts:[\"u0\";\"u0\";\"u0\";\"u0\";\"u1\";\"u1\";\"u1\";\"u1\"]\n    ~swizzle:(([], [\"u0\";\"u1\";\"u2\";\"u3\";\"u4\";\"u5\";\"u6\";\"u7\"], []),\n              ([], [\"u4\";\"u5\";\"u6\";\"u7\";\"u0\";\"u1\";\"u2\";\"u3\"], []))\n    Dtype.[(Float32, Float32)]\n\n(* Intel *)\n\nlet intel = mk ~dims:(8,8,16) ~threads:8 ~ept:(16,16,8)\n  ~opts:[\"l0\";\"l0\";\"l0\";\"u1\";\"u1\";\"u1\"]\n  ~swizzle:(([\"r1\";\"r2\";\"r3\"], [\"u0\";\"u1\";\"u2\"], [\"l0\";\"l1\";\"l2\";\"r0\"]),\n            ([\"l0\";\"l1\";\"l2\"], [\"r1\";\"r2\";\"r3\"], [\"u0\";\"u1\";\"u2\";\"r0\"]))\n  Dtype.[(Float16, Float32)]\n\n(* Validate all definitions at load time. *)\nlet () =\n  List.iter (List.iter validate)\n    [cuda_sm75; cuda_sm80; cuda_sm89;\n     amd_rdna3; amd_rdna4; amd_cdna3; amd_cdna4;\n     metal; amx; intel]\n"
  },
  {
    "path": "packages/tolk/lib/codegen/opt/tc.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\n(** Tensor core definitions and swizzle helpers.\n\n    Defines hardware WMMA configurations for NVIDIA, AMD, Apple, and Intel\n    and provides the axis-remapping logic needed by {!Postrange} to lower\n    matmuls into tensor core instructions.\n\n    See also {!Renderer.tensor_core}, {!Postrange}. *)\n\n(** {1:types Types} *)\n\n(** The type for tensor core (WMMA/MFMA) configurations.\n\n    Describes a hardware matrix-multiply-accumulate instruction\n    [D = A * B + C] where A is (M x K), B is (K x N), and C/D are\n    (M x N).  The configuration specifies tile geometry, thread mapping,\n    dtype requirements, and the dimension swizzle needed to lay data out\n    for the instruction. *)\ntype t = {\n  dims : int * int * int;\n      (** [(n, m, k)] matrix-multiply tile dimensions. *)\n  threads : int;\n      (** Number of threads cooperating on one tile. *)\n  elements_per_thread : int * int * int;\n      (** [(a, b, c)] elements each thread contributes for operands A, B,\n          and accumulator C. *)\n  dtype_in : Tolk_ir.Dtype.scalar;\n      (** Element type of the A and B input operands. *)\n  dtype_out : Tolk_ir.Dtype.scalar;\n      (** Element type of the C accumulator operand. *)\n  opts : string list;\n      (** Scheduling option strings ([\"u0\"], [\"l1\"], …) applied when this\n          tensor core is active.  Passed to the kernel optimiser to\n          configure tiling and unrolling. 
*)\n  swizzle :\n    (string list * string list * string list)\n    * (string list * string list * string list);\n      (** Operand layout remapping as\n          [((a_local, a_upcast, a_reduce), (b_local, b_upcast, b_reduce))].\n          Each triple contains (local, upcast, reduce) dimension index\n          strings describing the physical layout required by the hardware\n          instruction. *)\n}\n\n(** {1:helpers Helpers} *)\n\nval get_reduce_axes : t -> (int * int) list\n(** [get_reduce_axes tc] is the reduce axes for [tc]: one [(i, 2)] pair\n    per power-of-two factor in the K dimension. *)\n\nval base_shape_str : t -> string list\n(** [base_shape_str tc] is the shape string before the reduce UNROLL:\n    numbered local/upcast labels from [tc.opts], then reduce labels. *)\n\nval base_upcast_axes : t -> string list\n(** [base_upcast_axes tc] is the upcast + reduce axis names in reverse\n    order, used to define the UNROLL axes after opts are applied. *)\n\nval permutes_for_shape_str : t -> string list -> int list * int list\n(** [permutes_for_shape_str tc shape_str] is the two permutation vectors\n    (for operands A and B) that reorder [shape_str] according to the\n    swizzle. *)\n\nval to_string : t -> string\n(** [to_string tc] is [\"WMMA_N_M_K_in_out\"]. *)\n\nval validate : t -> unit\n(** [validate tc] checks all invariants of [tc].\n\n    Raises [Failure] on mismatch. Called at module load time for all\n    built-in definitions. *)\n\n(** {1:definitions Definitions}\n\n    Each list contains one {!t} per supported dtype pair.  All entries\n    are validated at module load time. *)\n\nval cuda_sm75 : t list\n(** NVIDIA SM 7.5 (Turing). *)\n\nval cuda_sm80 : t list\n(** NVIDIA SM 8.0 (Ampere). *)\n\nval cuda_sm89 : t list\n(** NVIDIA SM 8.9 (Ada Lovelace). *)\n\nval amd_rdna3 : t list\n(** AMD RDNA 3 WMMA. *)\n\nval amd_rdna4 : t list\n(** AMD RDNA 4 WMMA. *)\n\nval amd_cdna3 : t list\n(** AMD CDNA 3 MFMA. 
*)\n\nval amd_cdna4 : t list\n(** AMD CDNA 4 MFMA. *)\n\nval metal : t list\n(** Apple Metal simdgroup_matrix. *)\n\nval amx : t list\n(** Apple AMX. *)\n\nval intel : t list\n(** Intel Xe DPAS. *)\n"
  },
  {
    "path": "packages/tolk/lib/codegen/simplify.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\nopen Tolk_ir\nmodule K = Kernel\n\n(* Helpers *)\n\nlet no_range u =\n  not (List.exists K.is_range (K.backward_slice u))\n\nlet no_load u =\n  not (List.exists (fun n ->\n    match K.view n with Index _ -> true | _ -> false)\n    (K.backward_slice u))\n\nlet is_divmod n = match K.view n with\n  | Binary { op = `Idiv | `Mod; _ } -> true | _ -> false\n\nlet count_divmod x =\n  List.length (List.filter (fun n -> n != x && is_divmod n)\n    (K.backward_slice x))\n\nlet is_zero n = match K.const_arg n with\n  | Some (Int 0L) | Some (Bool false) -> true\n  | Some (Float f) -> f = 0.0\n  | _ -> false\n\nlet peel_cast n = match K.view n with Cast { src; _ } -> src | _ -> n\n\nlet minimum a b =\n  K.ternary ~op:`Where ~a:(K.binary ~op:`Cmplt ~lhs:a ~rhs:b) ~b:a ~c:b\n\nlet maximum a b = K.binary ~op:`Max ~lhs:a ~rhs:b\n\nlet mem_phys x xs = List.exists (fun y -> y == x) xs\n\nlet split_and c =\n  let rec go c = match K.view c with\n    | Binary { op = `And; lhs; rhs; _ } -> go lhs @ go rhs\n    | _ -> [c] in\n  go c\n\nlet rec list_take n = function\n  | _ when n <= 0 -> []\n  | x :: xs -> x :: list_take (n - 1) xs\n  | [] -> []\n\nlet rec list_drop n = function\n  | l when n <= 0 -> l\n  | _ :: xs -> list_drop (n - 1) xs\n  | [] -> []\n\n(* Toposort-reorder range children of Reduce/Store/End. 
*)\nlet flatten_range node =\n  match K.view node with\n  | Reduce _ | Store _ | End _ ->\n      (match K.range_start node with\n       | None -> None\n       | Some off ->\n           let ch = K.children node in\n           let rngs = list_drop off ch in\n           if rngs = [] then None\n           else\n             let new_rngs =\n               List.filter K.is_range (K.toposort (K.sink rngs)) in\n             let result =\n               K.replace node ~children:(list_take off ch @ new_rngs) () in\n             if result = node then None else Some result)\n  | _ -> None\n\nlet pm_flatten_range root = K.graph_rewrite flatten_range root\n\n(* Apply substitutions from ctx, clear ctx, simplify result. *)\nlet do_substitute ctx x sub_fxn =\n  let mappings = K.Ref_tbl.fold (fun k v acc ->\n    match v with Some v -> (k, sub_fxn k v) :: acc | None -> acc) ctx [] in\n  K.Ref_tbl.reset ctx;\n  if mappings = [] then None\n  else\n    let ret = K.graph_rewrite Symbolic.symbolic (K.substitute mappings x) in\n    if ret = x then None else Some ret\n\n(* Merge two adjacent ranges into one whose size is the product of the\n   originals.  Kept only when divmod count does not increase. 
*)\nlet simplify_merge_adjacent u =\n  match K.view u with\n  | End _ | Reduce _ ->\n      let u_ended = K.ended_ranges u in\n      if u_ended = [] then None\n      else begin\n        let reduce_ranges =\n          List.filter_map (fun x -> match K.view x with\n            | Reduce { ranges; _ } -> Some ranges | _ -> None)\n            (K.backward_slice u) in\n        let pairs = match K.view u with\n          | End _ ->\n              let rec adj = function\n                | a :: (b :: _ as rest) -> (a, b) :: adj rest | _ -> [] in\n              adj u_ended\n          | _ ->\n              List.concat_map (fun r0 ->\n                List.filter_map (fun r1 ->\n                  if r0 == r1 then None else Some (r0, r1)) u_ended)\n                u_ended in\n        let result = ref u in\n        List.iter (fun (r0, r1) ->\n          if K.range_kind r0 = K.range_kind r1\n             && List.for_all (fun rngs ->\n                  mem_phys r0 rngs = mem_phys r1 rngs) reduce_ranges\n          then begin\n            let open K.O in\n            let s0 = K.range_size r0 and s1 = K.range_size r1 in\n            let merged = K.range ~size:(s0 * s1) ~axis:(K.range_axis r0)\n              ~kind:(K.range_kind r0)\n              ~dtype:(Dtype.val_of (K.dtype r0)) () in\n            let nidx = K.substitute\n              [(r0, merged / s1); (r1, merged mod s1)] !result in\n            let nidx = K.graph_rewrite\n              (K.first_match [Symbolic.symbolic; flatten_range]) nidx in\n            if count_divmod nidx <= count_divmod !result then\n              result := nidx\n          end) pairs;\n        if !result == u then None else Some !result\n      end\n  | _ -> None\n\n(* Extract r<C guards from gated Index nodes, track the tightest bound\n   per range, mark reduce ranges unshrinkable, substitute at Sink. 
*)\nlet simplify_ranges root =\n  let ctx : K.t option K.Ref_tbl.t = K.Ref_tbl.create 16 in\n  let mark_unshrinkable r =\n    K.Ref_tbl.replace ctx r (Some (K.range_size r)) in\n  let extract_guards idx_value =\n    match K.view idx_value with\n    | Ternary { op = `Where; a = cond; b = x; c = invalid; _ } ->\n        (match K.view invalid with\n         | Invalid_index _ ->\n             let tbl = K.Ref_tbl.create 8 in\n             List.iter (fun v -> match K.view v with\n               | Binary { op = `Cmplt; lhs = r; rhs = c; _ }\n                 when K.is_range r && K.is_const c ->\n                   K.Ref_tbl.replace tbl r c\n               | _ -> ()) (split_and cond);\n             (x, tbl)\n         | _ -> (idx_value, K.Ref_tbl.create 0))\n    | _ -> (idx_value, K.Ref_tbl.create 0) in\n  let rule node =\n    match K.view node with\n    | End _ | Reduce _ ->\n        (match simplify_merge_adjacent node with\n         | Some _ as merged -> merged\n         | None ->\n             (match K.view node with\n              | Reduce { ranges; _ } -> List.iter mark_unshrinkable ranges\n              | _ -> ());\n             None)\n    | Index _ ->\n        let ch = K.children node in\n        let idx_value = match ch with _ :: v :: _ -> v | _ -> List.hd ch in\n        let x, guards = extract_guards idx_value in\n        let x = if K.Ref_tbl.length guards = 0 then node else x in\n        K.Ref_tbl.iter (fun r c ->\n          let dominated = match K.Ref_tbl.find_opt ctx r with\n            | Some (Some existing) ->\n                (match K.const_arg existing, K.const_arg c with\n                 | Some (Int ei), Some (Int ci) -> Int64.compare ci ei <= 0\n                 | _ -> true)\n            | Some None -> true\n            | None -> false in\n          if not dominated then K.Ref_tbl.replace ctx r (Some c)) guards;\n        List.iter (fun r ->\n          if not (K.Ref_tbl.mem guards r) then mark_unshrinkable r)\n          (K.live_ranges x);\n        None\n   
 | Sink _ ->\n        do_substitute ctx node (fun r c ->\n          K.range ~size:c ~axis:(K.range_axis r) ~kind:(K.range_kind r)\n            ~dtype:(Dtype.val_of (K.dtype r)) ())\n    | _ -> None\n  in\n  K.graph_rewrite ~name:\"simplify ranges\"\n    (fun node ->\n      match rule node with Some _ as r -> r | None -> flatten_range node)\n    root\n\nlet pm_simplify_ranges root =\n  let rec loop node =\n    let node' = simplify_ranges node in\n    if node' = node then node else loop node'\n  in\n  loop root\n\nlet is_image_store node = match K.view node with\n  | Store { dst; _ } ->\n      (match K.view dst with\n       | Index { ptr; _ } ->\n           (match K.view ptr with Param_image _ -> true | _ -> false)\n       | _ -> false)\n  | _ -> false\n\nlet can_split_range r c =\n  K.is_range r && K.is_const c\n  && K.range_kind r <> Axis_kind.Warp\n  && K.is_const (K.range_size r)\n  && K.divides (K.range_size r) (K.const_to_int c) <> None\n\n(* Split ranges where range_size divides the modulus constant.\n   range(N) % C where N|C becomes outer(N/C)*C + inner(C). 
*)\nlet split_ranges root =\n  let ctx : K.t option K.Ref_tbl.t = K.Ref_tbl.create 16 in\n  let rule node =\n    match K.view node with\n    | Binary { op = `Mod; lhs = r; rhs = c; _ }\n      when can_split_range r c && not (K.Ref_tbl.mem ctx r) ->\n        K.Ref_tbl.replace ctx r (Some c); None\n    | _ when is_image_store node ->\n        let dst = List.hd (K.children node) in\n        List.iter (fun r -> K.Ref_tbl.replace ctx r None)\n          (K.live_ranges dst);\n        None\n    | Sink _ ->\n        do_substitute ctx node (fun k v ->\n          let open K.O in\n          let size = K.range_size k and axis = K.range_axis k in\n          let sub = K.range_sub k and kind = K.range_kind k in\n          let dt = Dtype.val_of (K.dtype k) in\n          let outer = K.range ~size:(size / v) ~axis\n            ~sub:(sub @ [0]) ~kind ~dtype:dt () in\n          let inner = K.range ~size:v ~axis\n            ~sub:(sub @ [1]) ~kind ~dtype:dt () in\n          (outer * v) + inner)\n    | _ -> None\n  in\n  K.graph_rewrite ~name:\"split ranges\"\n    (fun node ->\n      match rule node with Some _ as r -> r | None -> flatten_range node)\n    root\n\nlet pm_split_ranges root =\n  let rec loop node =\n    let node' = split_ranges node in\n    if node' = node then node else loop node'\n  in\n  loop root\n\n(* Remove ranges from a Reduce that aren't referenced in the source.\n   Compensate: ADD → multiply by range size, MUL → exponentiate. 
*)\nlet reduce_unparented node =\n  match K.view node with\n  | Reduce { op; src; ranges; dtype }\n    when op = `Add || op = `Max || op = `Mul ->\n      assert (List.for_all K.is_range ranges);\n      let src_ranges = K.live_ranges src in\n      let parented, unparented =\n        List.partition (fun r -> mem_phys r src_ranges) ranges in\n      if unparented = [] then None\n      else\n        let ret =\n          if parented <> [] || not (Dtype.equal (Dtype.Val dtype) (K.dtype src))\n          then K.reduce ~op ~src ~ranges:parented ~dtype\n          else src in\n        let range_size_broadcast r =\n          let s = K.cast ~src:(K.range_size r)\n            ~dtype:(Dtype.scalarize (Dtype.Val dtype)) in\n          K.broadcast s (Dtype.Val.count dtype) in\n        let compensate binop acc r =\n          K.binary ~op:binop ~lhs:acc ~rhs:(range_size_broadcast r) in\n        let ret = match op with\n          | `Add -> List.fold_left (compensate `Mul) ret unparented\n          | `Mul -> List.fold_left (compensate `Pow) ret unparented\n          | _ -> ret in\n        Some ret\n  | _ -> None\n\nlet pm_reduce_unparented root = K.graph_rewrite reduce_unparented root\n\n(* Gated toposort: only follow children where gate holds. *)\nlet toposort_gated gate root =\n  let visited = K.Ref_tbl.create 64 in\n  let order = ref [] in\n  let rec visit node =\n    if not (K.Ref_tbl.mem visited node) && gate node then begin\n      K.Ref_tbl.replace visited node ();\n      List.iter visit (K.children node);\n      order := node :: !order\n    end in\n  visit root;\n  List.rev !order\n\n(* Fold rules for single-range reduce(add, where(r<cut, ...)) patterns.\n   Each rule computes a count expression and returns cast(count, val.dtype) * val. 
*)\nlet fold_result count v =\n  let dt = K.dtype v in\n  K.binary ~op:`Mul\n    ~lhs:(K.cast ~src:count ~dtype:dt) ~rhs:v\n\nlet reduce_fold_rule r _dtype src =\n  let open K.O in\n  let r_size = K.range_size r in\n  match K.view src with\n  | Ternary { op = `Where; a = cond; b = val_true; c = val_false; _ } ->\n      (match K.view cond with\n       | Binary { op = `Cmplt; lhs = cond_r; rhs = cut; _ }\n         when cond_r == r && is_zero val_false && no_range val_true ->\n           Some (fold_result (minimum (maximum cut (int_ 0)) r_size) val_true)\n       | Binary { op = `Cmplt; lhs = cond_r; rhs = cut; _ }\n         when cond_r == r && is_zero val_true && no_range val_false ->\n           Some (fold_result\n             (minimum (maximum (r_size + neg cut) (int_ 0)) r_size) val_false)\n       | Binary { op = `And; lhs; rhs; _ } when is_zero val_false ->\n           (match K.view lhs, K.view rhs with\n            | Binary { op = `Cmpeq; lhs = lower_cond; rhs = false_const; _ },\n              Binary { op = `Cmplt; lhs = upper_r; rhs = upper; _ } ->\n                (match K.view lower_cond with\n                 | Binary { op = `Cmplt; lhs = lower_r; rhs = lower; _ }\n                   when lower_r == r && upper_r == r\n                        && is_zero false_const && no_range val_true ->\n                     let count = minimum\n                       (maximum (minimum upper r_size + neg (maximum lower (int_ 0)))\n                          (int_ 0)) r_size in\n                     Some (fold_result count val_true)\n                 | _ -> None)\n            | _ -> None)\n       | _ -> None)\n  | _ -> None\n\n(* General reduce rules: split ADD across reduce, AND-WHERE factoring. 
*)\nlet reduce_general_rule ranges dtype src =\n  match K.view src with\n  | Binary { op = `Add; lhs = x; rhs = y; _ } ->\n      Some (K.binary ~op:`Add\n        ~lhs:(K.reduce ~op:`Add ~src:x ~ranges ~dtype)\n        ~rhs:(K.reduce ~op:`Add ~src:y ~ranges ~dtype))\n  | Ternary { op = `Where; a = cond; b = val_true; c = val_false; _ }\n    when is_zero val_false ->\n      (match K.view cond with\n       | Binary { op = `And; lhs = dv; rhs = rest; _ } ->\n           (match K.view dv with\n            | Define_var _ ->\n                let inner = K.ternary ~op:`Where\n                  ~a:rest ~b:val_true ~c:val_false in\n                Some (K.binary ~op:`Mul\n                  ~lhs:(K.reduce ~op:`Add ~src:inner ~ranges ~dtype)\n                  ~rhs:(K.cast ~src:dv\n                    ~dtype:(K.dtype val_true)))\n            | _ -> None)\n       | _ -> None)\n  | _ -> None\n\n(* Lift addition/multiplication out of comparisons for reduce collapse. *)\nlet lift_add_from_cmp ~cmp_op lhs c =\n  let inner = peel_cast lhs in\n  match K.view inner with\n  | Binary { op = `Add; lhs = x; rhs = y; _ }\n    when no_range y && no_range c ->\n      let open K.O in\n      let y_dt = K.dtype y in\n      Some (K.binary ~op:cmp_op ~lhs:x\n        ~rhs:(K.cast ~src:c ~dtype:y_dt + neg y))\n  | _ -> None\n\n(* Combined reduce collapse rewrite rule. 
*)\nlet pm_reduce_collapse_rule node =\n  match K.view node with\n  | Binary { op = `Cmplt; lhs; rhs = c; _ } ->\n      (match lift_add_from_cmp ~cmp_op:`Cmplt lhs c with\n       | Some _ as r -> r\n       | None ->\n           match K.view lhs with\n           | Binary { op = `Mul; lhs = x; rhs = y; _ }\n             when no_range y && no_range c\n                  && Dtype.is_int (K.dtype y)\n                  && K.vmin y > 0 ->\n               let open K.O in\n               Some (x < ((c + y + neg (int_ 1)) / y))\n           | _ -> None)\n  | Reduce { op = `Add; src; ranges; dtype } when ranges <> [] ->\n      let folded = match ranges with\n        | [r] -> reduce_fold_rule r dtype src | _ -> None in\n      (match folded with\n       | Some _ -> folded\n       | None -> reduce_general_rule ranges dtype src)\n  | Binary { op = `Mul; lhs = x; rhs = gate_cast; _ } ->\n      (match K.view gate_cast with\n       | Cast { src = gate; _ } ->\n           (match K.dtype_opt gate with\n            | Some dt when Dtype.scalar dt = Dtype.Bool ->\n                Some (K.ternary ~op:`Where ~a:gate ~b:x\n                  ~c:(K.zero_like x))\n            | _ -> None)\n       | _ -> None)\n  | _ -> None\n\n(* Nodes that don't need proxy replacement in reduce collapse. *)\nlet is_leaf n = match K.view n with\n  | Const _ | Vconst _ | Define_var _ | Param _ | Define_local _ -> true\n  | _ -> false\n\nlet has_store_or_reduce nodes =\n  List.exists (fun x -> match K.view x with\n    | Store _ | Reduce _ -> true | _ -> false) nodes\n\n(* Isolate range-dependent subgraph, replace externals with define_var\n   proxies, build a standalone Reduce, simplify, substitute back. 
*)\nlet reduce_collapse_inner ~pm red u =\n  match K.view red with\n  | Reduce { op = `Add; ranges; _ } ->\n      let result = ref u in\n      let failed = ref false in\n      List.iter (fun r ->\n        if not !failed then begin\n          let lr_tbl = K.live_ranges_tbl !result in\n          let included = toposort_gated (fun x ->\n            match K.Ref_tbl.find_opt lr_tbl x with\n            | Some rngs -> mem_phys r rngs | None -> false) !result in\n          if has_store_or_reduce included then failed := true\n          else begin\n            let in_set = K.Ref_tbl.create 32 in\n            List.iter (fun x -> K.Ref_tbl.replace in_set x ()) included;\n            let proxies = K.Ref_tbl.create 16 in\n            let n = ref 0 in\n            List.iter (fun u_node ->\n              List.iter (fun s ->\n                if not (K.Ref_tbl.mem in_set s || K.Ref_tbl.mem proxies s\n                        || is_leaf s) then begin\n                  K.Ref_tbl.replace proxies s\n                    (K.define_var ~name:(Printf.sprintf \"in%d\" !n)\n                       ~lo:(K.vmin s) ~hi:(K.vmax s)\n                       ~dtype:(Dtype.val_of (K.dtype s)) ());\n                  incr n\n                end) (K.children u_node)) included;\n            let fwd = K.Ref_tbl.fold (fun k v acc -> (k, v) :: acc) proxies [] in\n            let collapse_fxn = K.reduce ~op:`Add\n              ~src:(K.substitute fwd !result) ~ranges:[r]\n              ~dtype:(Dtype.val_of (K.dtype !result)) in\n            let sink = K.graph_rewrite\n              (K.first_match [reduce_unparented; pm; Symbolic.symbolic])\n              collapse_fxn in\n            if not (no_range sink) then failed := true\n            else\n              let rev = K.Ref_tbl.fold (fun k v acc -> (v, k) :: acc) proxies [] in\n              result := K.substitute rev sink\n          end\n        end) ranges;\n      if !failed || !result == u then None else Some !result\n  | _ -> None\n\nlet reduce_collapse red 
u =\n  reduce_collapse_inner ~pm:pm_reduce_collapse_rule red u\n\n(* idx >= 0 & idx < size, as a validity condition for index substitution. *)\nlet valid_index_cond idx_cast r_dt r_size =\n  let open K.O in\n  let ge_zero = K.binary ~op:`Cmpeq\n    ~lhs:(idx_cast < K.cast ~src:(int_ 0) ~dtype:r_dt)\n    ~rhs:(K.const_bool false) in\n  K.binary ~op:`And ~lhs:ge_zero ~rhs:(idx_cast < r_size)\n\n(* Load-specific collapse rules: lift ne, gated-load substitution. *)\nlet pm_reduce_load_collapse_rule node =\n  match K.view node with\n  | Binary { op = `Cmpne; lhs; rhs = c; _ } ->\n      lift_add_from_cmp ~cmp_op:`Cmpne lhs c\n  | Reduce { op = `Add; src; ranges = [r]; _ } ->\n      (match K.view src with\n       | Ternary { op = `Where; a = cond; b = zero; c = expr; _ }\n         when is_zero zero ->\n           (match K.view cond with\n            | Binary { op = `Cmpne; lhs = idx; rhs = ne_rhs; _ }\n              when peel_cast ne_rhs == r ->\n                let r_dt = K.dtype r in\n                let idx_cast = K.cast ~src:idx ~dtype:r_dt in\n                let valid = valid_index_cond idx_cast r_dt (K.range_size r) in\n                let valid_idx = K.ternary ~op:`Where ~a:valid\n                  ~b:idx_cast ~c:(K.invalid_index ()) in\n                Some (K.ternary ~op:`Where ~a:valid\n                  ~b:(K.substitute [(r, valid_idx)] expr)\n                  ~c:(K.zero_like expr))\n            | _ -> None)\n       | _ -> pm_reduce_collapse_rule node)\n  | _ -> pm_reduce_collapse_rule node\n\nlet reduce_load_collapse red u =\n  reduce_collapse_inner ~pm:pm_reduce_load_collapse_rule red u\n\n(* pm_reduce_simplify: reduce_unparented + reduce_collapse. 
*)\nlet pm_reduce_simplify_rule node =\n  match reduce_unparented node with\n  | Some _ as r -> r\n  | None -> match K.view node with\n    | Reduce { op = `Add; src = u; ranges; _ } when ranges <> [] ->\n        reduce_collapse node u\n    | _ -> None\n\nlet pm_reduce_simplify root = K.graph_rewrite pm_reduce_simplify_rule root\n\n(* pm_load_collapse: reduce_load_collapse + lift rule for loaded indices. *)\nlet pm_load_collapse_rule node =\n  match K.view node with\n  | Reduce { op = `Add; src = u; ranges = [_]; _ } ->\n      reduce_load_collapse node u\n  | Binary { op = `Cmplt; lhs = x; rhs = c; _ } ->\n      (match K.view x with\n       | Binary { op = `Add; lhs = x_inner; rhs = y; _ } ->\n           (match K.dtype_opt x_inner with\n            | Some dt when Dtype.scalar dt = Dtype.Index ->\n                if no_load y && no_load c && not (no_load x_inner) then\n                  let open K.O in\n                  Some (x_inner < (c + neg y))\n                else None\n            | _ -> None)\n       | _ -> None)\n  | _ -> None\n\nlet pm_load_collapse root =\n  K.graph_rewrite ~name:\"load collapse\" pm_load_collapse_rule root\n"
  },
  {
    "path": "packages/tolk/lib/codegen/simplify.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\n(** Range simplification passes.\n\n    Each pass takes a Kernel root and returns the transformed root after\n    running its rewrite rules to fixpoint via {!Tolk_ir.Kernel.graph_rewrite}.\n\n    Passes are composed in the codegen pipeline in this order:\n    {!pm_load_collapse}, {!pm_split_ranges}, initial symbolic +\n    {!flatten_range}, {!pm_simplify_ranges}. *)\n\nval flatten_range : Tolk_ir.Kernel.t -> Tolk_ir.Kernel.t option\n(** [flatten_range node] toposorts the range children of a Reduce, Store,\n    or End node and reattaches them in sorted order. Returns [None] for\n    other nodes or when already sorted. *)\n\nval pm_flatten_range : Tolk_ir.Kernel.t -> Tolk_ir.Kernel.t\n(** [pm_flatten_range root] applies {!flatten_range} to all nodes in the\n    DAG. *)\n\nval pm_split_ranges : Tolk_ir.Kernel.t -> Tolk_ir.Kernel.t\n(** [pm_split_ranges root] splits ranges where [range % C] appears and\n    the range size divides [C]. Each qualifying range with divisor [C]\n    becomes [outer(size/C) * C + inner(C)].\n\n    Image stores are excluded. *)\n\nval pm_simplify_ranges : Tolk_ir.Kernel.t -> Tolk_ir.Kernel.t\n(** [pm_simplify_ranges root] merges adjacent ranges with the same kind\n    into a single range when it does not increase the divmod count, and\n    shrinks gated ranges based on [r < C] guards extracted from Index\n    nodes. Reduce ranges are never shrunk. *)\n\nval pm_reduce_unparented : Tolk_ir.Kernel.t -> Tolk_ir.Kernel.t\n(** [pm_reduce_unparented root] removes reduce ranges not referenced in\n    the reduce source. 
For ADD reduces the removed range size is multiplied\n    into the result; for MUL it is exponentiated. *)\n\nval pm_reduce_simplify : Tolk_ir.Kernel.t -> Tolk_ir.Kernel.t\n(** [pm_reduce_simplify root] combines {!pm_reduce_unparented} with\n    reduce collapse: algebraic simplification that eliminates ranges from\n    ADD reduces when possible. *)\n\nval pm_load_collapse : Tolk_ir.Kernel.t -> Tolk_ir.Kernel.t\n(** [pm_load_collapse root] collapses reduces over gated loads (tensor\n    indexing patterns) and includes an undo rule to prevent arithmetic\n    on loaded index values that could overflow. *)\n"
  },
  {
    "path": "packages/tolk/lib/compiler.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\ntype t = {\n  name : string;\n  cachekey : string option;\n  compile : string -> bytes;\n}\n\nexception Compile_error of string\n\nlet ccache = Helpers.getenv \"CCACHE\" 1\n\nlet make ~name ?cachekey ~compile () =\n  let cachekey = if ccache <> 0 then cachekey else None in\n  { name; cachekey; compile }\n\nlet name t = t.name\n\nlet compile t src = t.compile src\n\nlet compile_cached t src =\n  match t.cachekey with\n  | None -> t.compile src\n  | Some table ->\n      match Diskcache.get ~table ~key:src with\n      | Some lib -> lib\n      | None ->\n          let lib = t.compile src in\n          Diskcache.put ~table ~key:src lib;\n          lib\n"
  },
  {
    "path": "packages/tolk/lib/compiler.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\n(** Kernel source compiler.\n\n    A compiler turns rendered source code into a compiled binary. Each\n    backend provides its own compiler (e.g. clang for CPU, nvcc for\n    CUDA). Renderers carry an optional compiler via\n    {!Renderer.compiler}; the device selects the active renderer and\n    uses its compiler at {!Device.compile_program} time.\n\n    {!compile_cached} is the primary entry point: it checks the\n    on-disk cache before invoking the underlying compiler. Disk\n    caching is controlled by the [CCACHE] environment variable\n    (default [1], set to [0] to disable). *)\n\n(** {1:types Types} *)\n\ntype t\n(** The type for kernel compilers. *)\n\nexception Compile_error of string\n(** Raised by {!compile} when compilation fails. The payload is a\n    human-readable error message. *)\n\n(** {1:constructors Constructors} *)\n\nval make :\n  name:string ->\n  ?cachekey:string ->\n  compile:(string -> bytes) ->\n  unit ->\n  t\n(** [make ~name ?cachekey ~compile ()] is a compiler with the given\n    name and compilation function.\n\n    [cachekey] is the disk cache table name (e.g., [\"compile_clang_jit\"]).\n    When [None] (default) or when the [CCACHE] environment variable is\n    [0], {!compile_cached} bypasses the disk cache. *)\n\n(** {1:accessors Accessors} *)\n\nval name : t -> string\n(** [name c] is [c]'s name. *)\n\n(** {1:compiling Compiling} *)\n\nval compile : t -> string -> bytes\n(** [compile c src] compiles [src] using [c], bypassing the disk\n    cache. *)\n\nval compile_cached : t -> string -> bytes\n(** [compile_cached c src] compiles [src] using [c]. 
Checks the\n    disk cache first when a [cachekey] was provided and [CCACHE] is\n    enabled; stores the result on cache miss. Falls back to\n    {!compile} when caching is disabled. *)\n"
  },
  {
    "path": "packages/tolk/lib/device.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\nopen Tolk_ir\n\n(* Buffer + Allocators *)\n\nmodule Buffer_spec = struct\n  type t = {\n    uncached : bool;\n    cpu_access : bool;\n    host : bool;\n    nolru : bool;\n    external_ptr : nativeint option;\n  }\n\n  let default =\n    {\n      uncached = false;\n      cpu_access = false;\n      host = false;\n      nolru = false;\n      external_ptr = None;\n    }\nend\n\nmodule Allocator = struct\n  type 'buf transfer = dest:'buf -> src:'buf -> int -> unit\n\n  type 'buf t = {\n    alloc : int -> Buffer_spec.t -> 'buf;\n    free : 'buf -> int -> Buffer_spec.t -> unit;\n    copyin : 'buf -> bytes -> unit;\n    copyout : bytes -> 'buf -> unit;\n    addr : 'buf -> nativeint;\n    offset : ('buf -> int -> int -> 'buf) option;\n    transfer : 'buf transfer option;\n    supports_transfer : bool;\n    copy_from_disk : ('buf -> 'buf -> int -> unit) option;\n    supports_copy_from_disk : bool;\n  }\n\n  type packed = Pack : 'buf t -> packed\nend\n\nmodule Lru_allocator = struct\n  let lru_var = Helpers.Context_var.int ~key:\"LRU\" ~default:1\n\n  let wrap (inner : 'buf Allocator.t) : 'buf Allocator.t =\n    let cache : (int * Buffer_spec.t * 'buf) list ref = ref [] in\n    let free_cache () =\n      List.iter (fun (size, spec, buf) -> inner.free buf size spec) !cache;\n      cache := []\n    in\n    {\n      inner with\n      alloc =\n        (fun size spec ->\n          let rec find acc = function\n            | (s, sp, buf) :: rest when s = size && sp = spec ->\n                cache := List.rev_append acc rest;\n                buf\n            | entry :: rest -> find (entry :: acc) rest\n            | [] -> (\n                
try inner.alloc size spec\n                with exn -> (\n                  free_cache ();\n                  try inner.alloc size spec with _ -> raise exn))\n          in\n          find [] !cache);\n      free =\n        (fun buf size spec ->\n          if Helpers.Context_var.get lru_var <> 0\n             && (not spec.Buffer_spec.nolru)\n             && Option.is_none spec.external_ptr\n          then cache := (size, spec, buf) :: !cache\n          else inner.free buf size spec);\n    }\nend\n\nmodule Buffer = struct\n  type 'buf raw = {\n    id : int;\n    device : string;\n    size : int;\n    dtype : Dtype.t;\n    spec : Buffer_spec.t;\n    allocator : 'buf Allocator.t;\n    mutable buf : 'buf option;\n    base : 'buf raw option;\n    offset : int;\n    mutable uop_refcount : int;\n    mutable allocated_views : int;\n  }\n\n  type t = Pack : 'buf raw -> t\n\n  let next_id = Atomic.make 0\n  let fresh_id () = Atomic.fetch_and_add next_id 1\n\n  let rec base_raw (buf : 'buf raw) =\n    match buf.base with None -> buf | Some base -> base_raw base\n\n  let base (Pack buf as t) =\n    match buf.base with None -> t | Some _ -> Pack (base_raw buf)\n\n  let offset (Pack buf) = buf.offset\n  let uop_refcount (Pack buf) = (base_raw buf).uop_refcount\n  let id (Pack buf) = buf.id\n  let base_id (Pack buf) = (base_raw buf).id\n\n  let add_ref (Pack buf as t) cnt =\n    let base = base_raw buf in\n    base.uop_refcount <- base.uop_refcount + cnt;\n    t\n\n  let is_allocated (Pack buf) = Option.is_some (base_raw buf).buf\n  let is_initialized (Pack buf) = Option.is_some buf.buf\n  let nbytes (Pack buf) = buf.size * Dtype.itemsize buf.dtype\n\n  let rec allocate (Pack buf as t) =\n    if Option.is_some buf.buf then invalid_arg \"buffer already allocated\";\n    match buf.base with\n    | None -> buf.buf <- Some (buf.allocator.alloc (nbytes t) buf.spec)\n    | Some base ->\n        ensure_allocated (Pack base);\n        base.allocated_views <- base.allocated_views + 1;\n    
    let offset =\n          match buf.allocator.offset with\n          | None -> invalid_arg \"allocator offset is required for buffer views\"\n          | Some f -> f\n        in\n        let base_buf =\n          match base.buf with Some b -> b | None -> assert false\n        in\n        buf.buf <- Some (offset base_buf (nbytes t) buf.offset)\n\n  and ensure_allocated t = if not (is_initialized t) then allocate t\n\n  let rec deallocate (Pack buf as t) =\n    match (buf.base, buf.buf) with\n    | _, None -> ()\n    | None, Some raw ->\n        (* Catch use-after-free early: freeing a base while views still\n           reference it would leave dangling pointers. *)\n        if buf.allocated_views <> 0 then\n          invalid_arg \"base buffer still has allocated views\";\n        buf.allocator.free raw (nbytes t) buf.spec;\n        buf.buf <- None\n    | Some base, Some _ ->\n        buf.buf <- None;\n        base.allocated_views <- base.allocated_views - 1\n\n  let create ~device ~size ~dtype ?spec allocator =\n    let spec = Option.value spec ~default:Buffer_spec.default in\n    match allocator with\n    | Allocator.Pack alloc ->\n        let raw =\n          {\n            id = fresh_id ();\n            device;\n            size;\n            dtype;\n            spec;\n            allocator = alloc;\n            buf = None;\n            base = None;\n            offset = 0;\n            uop_refcount = 0;\n            allocated_views = 0;\n          }\n        in\n        Gc.finalise (fun raw -> deallocate (Pack raw)) raw;\n        Pack raw\n\n  let device (Pack b) = b.device\n  let size (Pack b) = b.size\n  let dtype (Pack b) = b.dtype\n  let spec (Pack b) = b.spec\n  let supports_offset (Pack b) = Option.is_some b.allocator.offset\n  let allocator (Pack b) = Allocator.Pack (base_raw b).allocator\n\n  let ensure_size t bytes =\n    let expected = nbytes t in\n    if Bytes.length bytes <> expected then\n      invalid_arg\n        (Printf.sprintf \"buffer size 
mismatch: got %d bytes, expected %d\"\n           (Bytes.length bytes) expected)\n\n  let copyin (Pack b as t) bytes =\n    ensure_size t bytes;\n    match b.buf with\n    | None -> invalid_arg \"buffer is not allocated\"\n    | Some raw -> b.allocator.copyin raw bytes\n\n  let copyout (Pack b as t) bytes =\n    ensure_size t bytes;\n    match b.buf with\n    | None -> invalid_arg \"buffer is not allocated\"\n    | Some raw -> b.allocator.copyout bytes raw\n\n  let as_bytes t =\n    let buf = Bytes.create (nbytes t) in\n    copyout t buf;\n    buf\n\n  let view (Pack b as t) ~size ~dtype ~offset =\n    if offset < 0 then invalid_arg \"buffer view offset must be non-negative\";\n    if offset >= nbytes t then\n      invalid_arg \"buffer view offset must be less than nbytes\";\n    let base = base_raw b in\n    let raw =\n      {\n        id = fresh_id ();\n        device = base.device;\n        size;\n        dtype;\n        spec = base.spec;\n        allocator = base.allocator;\n        buf = None;\n        base = Some base;\n        offset = base.offset + offset;\n        uop_refcount = 0;\n        allocated_views = 0;\n      }\n    in\n    Gc.finalise (fun raw -> deallocate (Pack raw)) raw;\n    Pack raw\n\n  let addr (Pack b as t) =\n    ensure_allocated t;\n    match b.buf with Some raw -> b.allocator.addr raw | None -> assert false\n\n  (* XXX: copy_between belongs in the engine layer, not the device layer.\n     tinygrad's BufferCopy and BufferXfer live in realize.py with fast paths\n     (disk, zero-copy via _as_buffer, device-to-device _transfer).  This\n     naive CPU bounce should move when tolk gains an engine/realize module. 
*)\n  let copy_between ~dst ~src =\n    if size dst <> size src then invalid_arg \"buffer copy size mismatch\";\n    if not (Dtype.equal (dtype dst) (dtype src)) then\n      invalid_arg \"buffer copy dtype mismatch\";\n    ensure_allocated dst;\n    ensure_allocated src;\n    let tmp = Bytes.create (nbytes src) in\n    copyout src tmp;\n    copyin dst tmp\nend\n\n(* Compiled devices *)\n\ntype prog = {\n  call :\n    nativeint array -> global:int array -> local:int array option ->\n    vals:int64 array -> wait:bool -> timeout:int option -> float option;\n  free : unit -> unit;\n}\n\ntype runtime = string -> bytes -> runtimevars:(string * int) list -> prog\n\nmodule Renderer_set = struct\n  type entry = {\n    renderer : Renderer.t;\n    ctrl : int Helpers.Context_var.t option;\n  }\n\n  type t = { entries : entry list; ctrl : string Helpers.Context_var.t option }\n\n  let make ?ctrl entries =\n    { entries = List.map (fun (renderer, ctrl) -> { renderer; ctrl }) entries;\n      ctrl }\n\n  let entry_name (e : entry) =\n    match Renderer.compiler e.renderer with\n    | Some comp -> String.uppercase_ascii (Compiler.name comp)\n    | None -> String.uppercase_ascii (Renderer.name e.renderer)\n\n  let ctrl_value (e : entry) = Option.map Helpers.Context_var.get e.ctrl\n\n  let select set =\n    let pick = function\n      | [] -> invalid_arg \"no available renderers\"\n      | [ e ] -> e\n      | _ -> invalid_arg \"multiple renderers forced\"\n    in\n    let by_priority () =\n      let forced = List.filter (fun e -> ctrl_value e = Some 1) set.entries in\n      match forced with\n      | _ :: _ -> pick forced\n      | [] ->\n          pick (List.filter (fun e -> ctrl_value e <> Some 0) set.entries)\n    in\n    let selected =\n      match Option.map Helpers.Context_var.get set.ctrl with\n      | None -> by_priority ()\n      | Some name -> (\n          let name = String.uppercase_ascii name in\n          match List.find_opt (fun e -> entry_name e = name) set.entries 
with\n          | None ->\n              invalid_arg (Printf.sprintf \"unknown renderer selection: %s\" name)\n          | Some entry -> entry)\n    in\n    selected.renderer\nend\n\ntype t = {\n  name : string;\n  allocator : Allocator.packed;\n  renderer_set : Renderer_set.t;\n  runtime : runtime;\n  synchronize : unit -> unit;\n  invalidate_caches_fn : (unit -> unit) option;\n}\n\ntype device = t\n\n(* XXX: Program_cache belongs in the engine layer, not the device layer.\n   tinygrad's method_cache and get_runner live in realize.py.  Move this\n   when tolk gains an engine/realize module. *)\nmodule Program_cache = struct\n  type renderer_context =\n    string * bool * bool * bool * int list option * int list option * int\n\n  type key = {\n    device : string;\n    compiler : string;\n    kernel_key : string;\n    context : renderer_context;\n    entry_name : string;\n    estimates : Program_spec.Estimates.t;\n    base : bool;\n  }\n\n  module Key = struct\n    type t = key\n\n    let equal = ( = )\n    let hash = Hashtbl.hash\n  end\n\n  module Cache = Hashtbl.Make (Key)\n\n  let cache : Program_spec.t Cache.t = Cache.create 64\n\n  let base_device name =\n    match String.split_on_char ':' name with [] -> name | head :: _ -> head\n\n  let renderer_context renderer =\n    ( Renderer.name renderer,\n      Renderer.has_local renderer,\n      Renderer.has_threads renderer,\n      Renderer.has_shared renderer,\n      Renderer.global_max renderer,\n      Renderer.local_max renderer,\n      Renderer.shared_max renderer )\n\n  let kernel_key (program : Tolk_ir.Program.t) =\n    Digest.to_hex (Digest.string (Marshal.to_string program []))\n\n  let mutex = Mutex.create ()\nend\n\nlet make ~name ~allocator ~renderer_set ~runtime ~synchronize\n    ?invalidate_caches () =\n  { name; allocator; renderer_set; runtime; synchronize;\n    invalidate_caches_fn = invalidate_caches }\n\nlet name d = d.name\nlet renderer d = Renderer_set.select d.renderer_set\nlet runtime d = 
d.runtime\nlet synchronize d = d.synchronize ()\n\n(* Two-level program cache: compiles the kernel once for a \"base\" device\n   (e.g. the first GPU) and clones the template for other devices sharing\n   the same compiler and renderer context, avoiding redundant render+compile\n   work in multi-device setups. *)\nlet compile_program d ?name ?(applied_opts = []) ?(estimates = Program_spec.Estimates.zero) program =\n  let ren = Renderer_set.select d.renderer_set in\n  let comp = match Renderer.compiler ren with\n    | Some c -> c\n    | None -> invalid_arg \"device has no compiler\"\n  in\n  let kernel_name = Option.value name ~default:\"kern\" in\n  let kkey = Program_cache.kernel_key program in\n  let make_key ~device ~base =\n    Program_cache.\n      {\n        device;\n        compiler = Compiler.name comp;\n        kernel_key = kkey;\n        context = Program_cache.renderer_context ren;\n        entry_name = kernel_name;\n        estimates;\n        base;\n      }\n  in\n  let ckey = make_key ~device:d.name ~base:false in\n  Mutex.lock Program_cache.mutex;\n  Fun.protect\n    ~finally:(fun () -> Mutex.unlock Program_cache.mutex)\n    (fun () ->\n      match Program_cache.Cache.find_opt Program_cache.cache ckey with\n      | Some cached -> cached\n      | None ->\n          let bkey =\n            make_key ~device:(Program_cache.base_device d.name) ~base:true\n          in\n          let build_spec () =\n            let src = Renderer.render ren ~name:kernel_name program in\n            let lib = Compiler.compile_cached comp src in\n            Program_spec.of_program ~name:kernel_name ~src ~device:d.name\n              ~lib ~applied_opts ~estimates program\n          in\n          let spec =\n            match Program_cache.Cache.find_opt Program_cache.cache bkey with\n            | Some cached -> cached\n            | None ->\n                let s = build_spec () in\n                Program_cache.Cache.add Program_cache.cache bkey s;\n                s\n    
      in\n          Program_cache.Cache.add Program_cache.cache ckey spec;\n          spec)\n\nlet create_buffer ~size ~dtype ?spec d =\n  Buffer.create ~device:d.name ~size ~dtype ?spec d.allocator\n\nlet invalidate_caches d = Option.iter (fun f -> f ()) d.invalidate_caches_fn\n\nmodule Multi_buffer = struct\n  type t = { bufs : Buffer.t list }\n\n  let create ~devices ~size ~dtype ?spec () =\n    if devices = [] then invalid_arg \"multi buffer requires at least one device\";\n    let bufs =\n      List.map (fun device -> create_buffer ~size ~dtype ?spec device) devices\n    in\n    { bufs }\n\n  let bufs t = t.bufs\n\n  let first t =\n    match t.bufs with\n    | [] -> invalid_arg \"multi buffer is empty\"\n    | buf :: _ -> buf\n\n  let size t = Buffer.size (first t)\n  let dtype t = Buffer.dtype (first t)\n\n  let add_ref t cnt =\n    List.iter (fun buf -> ignore (Buffer.add_ref buf cnt)) t.bufs;\n    t\n\n  let is_allocated t = List.for_all Buffer.is_allocated t.bufs\n\n  let copy_between ~dst ~src =\n    let dst_bufs = dst.bufs in\n    let src_bufs = src.bufs in\n    if List.length dst_bufs <> List.length src_bufs then\n      invalid_arg \"multi buffer copy device count mismatch\";\n    List.iter2 (fun d s -> Buffer.copy_between ~dst:d ~src:s) dst_bufs src_bufs\nend\n"
  },
  {
    "path": "packages/tolk/lib/device.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\n(** Device runtime abstraction.\n\n    A {e device} bundles the pieces needed to run compiled kernels on a specific\n    backend: an {!Allocator.packed} for buffer management, a {!Renderer_set.t}\n    for renderer/compiler selection, a {!Queue.t} for kernel dispatch, and a\n    preparation hook for device-specific program setup.\n\n    {!Buffer.t} values are existentially packed so that the concrete backend\n    buffer type does not leak into consumer code. {!Program.t} values carry\n    compiled binaries together with their runtime metadata. Compiled programs\n    are cached per device and compiler context. *)\n\n(** {1:types Types} *)\n\ntype t\n(** The type for compiled device runtimes. *)\n\ntype device = t\n(** Alias for {!t}, used in signatures where [device] reads better than\n    [Device.t]. *)\n\n(** {1:buffer_spec Buffer specification} *)\n\n(** Buffer allocation options.\n\n    A {!t} describes allocation constraints for a device buffer: memory\n    location, caching policy, and optional external backing. *)\nmodule Buffer_spec : sig\n  type t = {\n    uncached : bool;  (** [true] to request uncached memory. *)\n    cpu_access : bool;  (** [true] to request CPU-accessible device memory. *)\n    host : bool;  (** [true] to allocate in host memory. *)\n    nolru : bool;  (** [true] to bypass the LRU allocator cache on free. *)\n    external_ptr : nativeint option;\n        (** External backing pointer, or [None] to let the allocator choose.\n            Buffers with an external pointer bypass LRU caching on free. *)\n  }\n  (** Buffer allocation options. 
*)\n\n  val default : t\n  (** [default] is\n      [{uncached = false; cpu_access = false; host = false; nolru = false;\n       external_ptr = None}]. *)\nend\n\n(** {1:allocator Allocator} *)\n\n(** Backend allocator interface.\n\n    An allocator manages device buffer lifecycle: allocation, data transfer,\n    addressing, and optional features such as offset views and device-to-device\n    copies. The buffer type ['buf] is backend-specific and hidden behind\n    {!packed} at the device level.\n\n    See {!Lru_allocator} for LRU caching on top of a raw allocator. *)\nmodule Allocator : sig\n  (** {1:types Types} *)\n\n  type 'buf transfer = dest:'buf -> src:'buf -> int -> unit\n  (** The type for device-to-device transfers. [transfer ~dest ~src nbytes]\n      copies [nbytes] from [src] to [dest]. Both buffers belong to the same\n      backend. *)\n\n  type 'buf t = {\n    alloc : int -> Buffer_spec.t -> 'buf;\n        (** [alloc nbytes spec] allocates a device buffer of [nbytes] bytes with\n            options [spec]. *)\n    free : 'buf -> int -> Buffer_spec.t -> unit;\n        (** [free buf nbytes spec] releases [buf]. [nbytes] and [spec] must\n            match the values passed to {!field-alloc}. *)\n    copyin : 'buf -> bytes -> unit;\n        (** [copyin buf src] copies [src] into [buf]. *)\n    copyout : bytes -> 'buf -> unit;\n        (** [copyout dst buf] copies [buf] into [dst]. *)\n    addr : 'buf -> nativeint;  (** [addr buf] is the device address of [buf]. *)\n    offset : ('buf -> int -> int -> 'buf) option;\n        (** [offset buf nbytes byte_offset] is a view into [buf] starting at\n            [byte_offset] and spanning [nbytes], or [None] if the backend does\n            not support offset views. *)\n    transfer : 'buf transfer option;\n        (** Device-to-device transfer, or [None] if unsupported. *)\n    supports_transfer : bool;  (** [true] iff {!field-transfer} is [Some _]. 
*)\n    copy_from_disk : ('buf -> 'buf -> int -> unit) option;\n        (** Direct disk-to-device copy, or [None] if unsupported. *)\n    supports_copy_from_disk : bool;\n        (** [true] iff {!field-copy_from_disk} is [Some _]. *)\n  }\n  (** The type for backend allocators parameterised by the buffer representation\n      ['buf]. *)\n\n  type packed =\n    | Pack : 'buf t -> packed\n        (** Existential wrapper hiding the backend buffer type. *)\nend\n\n(** {1:lru_allocator LRU allocator} *)\n\n(** LRU buffer reuse layer.\n\n    Wraps a raw allocator so that freed buffers are cached by [(size, spec)] and\n    reused on subsequent allocations. Buffers marked {!Buffer_spec.nolru} or\n    carrying an {!Buffer_spec.external_ptr} bypass the cache and are freed\n    immediately. When a fresh allocation fails, the entire cache is flushed and\n    the allocation is retried once. *)\nmodule Lru_allocator : sig\n  val wrap : 'buf Allocator.t -> 'buf Allocator.t\n  (** [wrap alloc] is [alloc] augmented with LRU buffer reuse. *)\nend\n\n(** {1:buffer Buffers} *)\n\n(** Existentially-packed device buffers.\n\n    A buffer is either a {e base buffer} (directly allocated) or a {e view} into\n    a base buffer at a byte offset. Views share the base buffer's backing\n    storage.\n\n    Buffers start unallocated. Call {!allocate} or {!ensure_allocated} to\n    materialise backing storage. Each buffer has a globally unique {!id}\n    assigned at creation. A GC finaliser calls {!deallocate} when the buffer\n    becomes unreachable.\n\n    Reference counting ({!uop_refcount}, {!add_ref}) is managed externally by\n    the compiler runtime and is not used for deallocation. *)\nmodule Buffer : sig\n  (** {1:types Types} *)\n\n  type t\n  (** The type for existentially-packed device buffers. 
*)\n\n  (** {1:constructors Constructors} *)\n\n  val create :\n    device:string ->\n    size:int ->\n    dtype:Tolk_ir.Dtype.t ->\n    ?spec:Buffer_spec.t ->\n    Allocator.packed ->\n    t\n  (** [create ~device ~size ~dtype ?spec allocator] is an unallocated base\n      buffer for [size] elements of [dtype] on [device].\n\n      [spec] defaults to {!Buffer_spec.default}. *)\n\n  val view : t -> size:int -> dtype:Tolk_ir.Dtype.t -> offset:int -> t\n  (** [view b ~size ~dtype ~offset] is a view into [b] starting at byte [offset]\n      and spanning [size] elements of [dtype]. The view shares the base buffer's\n      allocator and spec.\n\n      Raises [Invalid_argument] if [offset] is negative or [>= nbytes b]. *)\n\n  (** {1:identity Identity and metadata} *)\n\n  val id : t -> int\n  (** [id b] is [b]'s globally unique identifier. *)\n\n  val base_id : t -> int\n  (** [base_id b] is the unique identifier of [b]'s root base buffer. Equal to\n      [id b] when [b] is itself a base buffer. *)\n\n  val device : t -> string\n  (** [device b] is the device name [b] is bound to. *)\n\n  val size : t -> int\n  (** [size b] is the element count. *)\n\n  val dtype : t -> Tolk_ir.Dtype.t\n  (** [dtype b] is the element dtype. *)\n\n  val spec : t -> Buffer_spec.t\n  (** [spec b] is the buffer specification. *)\n\n  val nbytes : t -> int\n  (** [nbytes b] is the size in bytes ([size b * Dtype.itemsize (dtype b)]). *)\n\n  val base : t -> t\n  (** [base b] is the root base buffer. If [b] is already a base buffer,\n      [base b] is [b] itself. *)\n\n  val offset : t -> int\n  (** [offset b] is the byte offset into the base buffer. [0] for base buffers.\n  *)\n\n  (** {1:allocation Allocation} *)\n\n  val allocate : t -> unit\n  (** [allocate b] materialises backing storage for [b]. 
For views, ensures the\n      base buffer is allocated first, then creates the offset view via the\n      allocator.\n\n      Raises [Invalid_argument] if [b] is already allocated, or if [b] is a view\n      and the allocator does not support {!Allocator.offset}. *)\n\n  val ensure_allocated : t -> unit\n  (** [ensure_allocated b] calls {!allocate} if [b] is not yet initialised.\n      No-op otherwise. *)\n\n  val is_allocated : t -> bool\n  (** [is_allocated b] is [true] iff the base buffer's backing storage exists.\n  *)\n\n  val is_initialized : t -> bool\n  (** [is_initialized b] is [true] iff this specific buffer or view has its own\n      storage pointer set. A view can be uninitialised even when the base buffer\n      is allocated. *)\n\n  val deallocate : t -> unit\n  (** [deallocate b] releases backing storage if allocated. For base buffers,\n      frees via the allocator. For views, detaches from the base buffer. No-op\n      if already deallocated.\n\n      Raises [Invalid_argument] if [b] is a base buffer that still has allocated\n      views. *)\n\n  val supports_offset : t -> bool\n  (** [supports_offset b] is [true] iff [b]'s allocator provides offset views.\n  *)\n\n  val allocator : t -> Allocator.packed\n  (** [allocator b] is the allocator of [b]'s base buffer. *)\n\n  (** {1:refcount Reference counting} *)\n\n  val uop_refcount : t -> int\n  (** [uop_refcount b] is the base buffer's UOp reference count. *)\n\n  val add_ref : t -> int -> t\n  (** [add_ref b cnt] increments the base buffer's UOp reference count by [cnt]\n      and returns [b]. *)\n\n  (** {1:data_transfer Data transfer} *)\n\n  val copyin : t -> bytes -> unit\n  (** [copyin b src] copies [src] into [b].\n\n      Raises [Invalid_argument] if [Bytes.length src <> nbytes b] or if [b] is\n      not allocated. 
*)\n\n  val copyout : t -> bytes -> unit\n  (** [copyout b dst] copies the contents of [b] into [dst].\n\n      Raises [Invalid_argument] if [Bytes.length dst <> nbytes b] or if [b] is\n      not allocated. *)\n\n  val as_bytes : t -> bytes\n  (** [as_bytes b] is a fresh [bytes] value containing the contents of [b].\n      Equivalent to allocating [Bytes.create (nbytes b)] and calling {!copyout}.\n  *)\n\n  val copy_between : dst:t -> src:t -> unit\n  (** [copy_between ~dst ~src] copies the contents of [src] into [dst] via a\n      host-memory bounce buffer. Both buffers are allocated if needed.\n\n      Raises [Invalid_argument] if [size dst <> size src] or\n      [dtype dst <> dtype src]. *)\n\n  val addr : t -> nativeint\n  (** [addr b] is the device address of [b]. Allocates [b] if needed. *)\nend\n\n(** {1:prog Runtime program handle} *)\n\ntype prog = {\n  call :\n    nativeint array -> global:int array -> local:int array option ->\n    vals:int64 array -> wait:bool -> timeout:int option -> float option;\n  free : unit -> unit;\n}\n(** A device-specific dispatch handle. *)\n\ntype runtime = string -> bytes -> runtimevars:(string * int) list -> prog\n(** [runtime name lib ~runtimevars] creates a dispatch handle for [lib]\n    with entry point [name]. [runtimevars] maps variable names (e.g.\n    [\"core_id\"]) to their index in the vals array. *)\n\n(** {1:renderer_set Renderer selection} *)\n\n(** Available renderers for a device.\n\n    Each renderer carries its own {!Compiler.t} via {!Renderer.compiler}.\n    The active renderer is chosen at {!Device.compile_program} time:\n    explicit environment override takes priority, then forced entries\n    ([ctrl = 1]), then the first non-disabled entry. *)\nmodule Renderer_set : sig\n  type t\n  (** The type for renderer sets. *)\n\n  val make :\n    ?ctrl:string Helpers.Context_var.t ->\n    (Renderer.t * int Helpers.Context_var.t option) list ->\n    t\n  (** [make ?ctrl entries] is a renderer set from [entries]. 
Each entry\n      pairs a renderer with an optional environment variable control\n      ([1] forces selection, [0] disables). [ctrl] is a global override\n      that selects by compiler name (case-insensitive). *)\nend\n\n(** {1:device_operations Device operations} *)\n\nval make :\n  name:string ->\n  allocator:Allocator.packed ->\n  renderer_set:Renderer_set.t ->\n  runtime:runtime ->\n  synchronize:(unit -> unit) ->\n  ?invalidate_caches:(unit -> unit) ->\n  unit ->\n  t\n(** [make ~name ~allocator ~renderer_set ~runtime ~synchronize\n    ?invalidate_caches ()] is a device runtime.\n\n    [runtime name lib ~runtimevars] loads a compiled binary and returns a\n    dispatch handle.\n\n    [synchronize ()] blocks until all pending work on the device completes. *)\n\nval name : t -> string\n(** [name d] is [d]'s device name. *)\n\nval renderer : t -> Renderer.t\n(** [renderer d] is the active renderer. *)\n\nval runtime : t -> runtime\n(** [runtime d] is [d]'s runtime factory. *)\n\nval synchronize : t -> unit\n(** [synchronize d] blocks until all pending work on [d] completes. *)\n\nval compile_program :\n  t ->\n  ?name:string ->\n  ?applied_opts:Tolk_ir.Kernel.Opt.t list ->\n  ?estimates:Program_spec.Estimates.t ->\n  Tolk_ir.Program.t ->\n  Program_spec.t\n(** [compile_program d ?name ?applied_opts ?estimates program] renders and\n    compiles [program] for [d], returning a prepared {!Program_spec.t}.\n\n    Results are cached by device name, compiler name, kernel content digest,\n    renderer context, entry name, and estimates. A program compiled for a\n    device variant (e.g. [\"CUDA:1\"]) reuses the entry cached for its base\n    device when the compiler and renderer context match.\n\n    [name] defaults to [\"kern\"]. [applied_opts] defaults to [[]].\n    [estimates] defaults to {!Program_spec.Estimates.zero}. *)\n\nval create_buffer :\n  size:int -> dtype:Tolk_ir.Dtype.t -> ?spec:Buffer_spec.t -> t -> Buffer.t\n(** [create_buffer ~size ~dtype ?spec d] is an unallocated buffer for [size]\n    elements of [dtype] on [d].\n\n    [spec] defaults to {!Buffer_spec.default}. 
*)\n\nval invalidate_caches : t -> unit\n(** [invalidate_caches d] flushes device caches (e.g., L2) if the device\n    supports it. No-op if [~invalidate_caches] was not provided to {!make}.\n    Called by beam search between timing runs for consistent measurements. *)\n\n(** {1:multi_buffer Multi-device buffers} *)\n\n(** Buffers spanning multiple devices.\n\n    A multi-device buffer holds one {!Buffer.t} per device, all sharing the same\n    size and dtype. Operations apply element-wise across the per-device buffers.\n*)\nmodule Multi_buffer : sig\n  (** {1:types Types} *)\n\n  type t\n  (** The type for multi-device buffers. *)\n\n  (** {1:constructors Constructors} *)\n\n  val create :\n    devices:device list ->\n    size:int ->\n    dtype:Tolk_ir.Dtype.t ->\n    ?spec:Buffer_spec.t ->\n    unit ->\n    t\n  (** [create ~devices ~size ~dtype ?spec ()] is a multi-device buffer with one\n      underlying buffer per device in [devices].\n\n      [spec] defaults to {!Buffer_spec.default}. The trailing [unit] argument is\n      needed because [spec] is optional.\n\n      Raises [Invalid_argument] if [devices] is empty. *)\n\n  (** {1:accessors Accessors} *)\n\n  val bufs : t -> Buffer.t list\n  (** [bufs t] is the underlying per-device buffers, one per device in the order\n      given to {!create}. *)\n\n  val size : t -> int\n  (** [size t] is the element count (same across all buffers). *)\n\n  val dtype : t -> Tolk_ir.Dtype.t\n  (** [dtype t] is the element dtype (same across all buffers). *)\n\n  val is_allocated : t -> bool\n  (** [is_allocated t] is [true] iff all underlying buffers are allocated. *)\n\n  (** {1:operations Operations} *)\n\n  val add_ref : t -> int -> t\n  (** [add_ref t cnt] increments the UOp reference count on all underlying\n      buffers by [cnt] and returns [t]. 
*)\n\n  val copy_between : dst:t -> src:t -> unit\n  (** [copy_between ~dst ~src] copies pairwise across the underlying buffers.\n\n      Raises [Invalid_argument] if [dst] and [src] have different numbers of\n      devices. *)\nend\n"
  },
  {
    "path": "packages/tolk/lib/diskcache.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Simple file-based disk cache.\n   Uses Marshal for serialization and individual files per key. *)\n\nlet cache_version = 1\n\nlet cache_dir =\n  let base =\n    match Sys.getenv_opt \"XDG_CACHE_HOME\" with\n    | Some dir when dir <> \"\" -> dir\n    | _ -> (\n        match Sys.getenv_opt \"HOME\" with\n        | Some home -> (\n            match Sys.os_type with\n            | \"Unix\" ->\n                (* macOS uses ~/Library/Caches, Linux uses ~/.cache *)\n                let macos_dir = Filename.concat home \"Library/Caches\" in\n                if Sys.file_exists macos_dir then macos_dir\n                else Filename.concat home \".cache\"\n            | _ -> Filename.concat home \".cache\")\n        | None -> Filename.current_dir_name)\n  in\n  Filename.concat base \"tolk\"\n\nlet ensure_dir dir =\n  if not (Sys.file_exists dir) then begin\n    (* Create parent dirs recursively *)\n    let rec mkdir_p d =\n      if not (Sys.file_exists d) then begin\n        mkdir_p (Filename.dirname d);\n        (try Unix.mkdir d 0o755 with Unix.Unix_error (Unix.EEXIST, _, _) -> ())\n      end\n    in\n    mkdir_p dir\n  end\n\nlet cache_path ~table ~key =\n  let dir = Filename.concat cache_dir table in\n  let hash = Digest.to_hex (Digest.string key) in\n  Filename.concat dir (hash ^ \".cache\")\n\nlet get ~table ~key =\n  let path = cache_path ~table ~key in\n  if not (Sys.file_exists path) then None\n  else\n    try\n      let ic = open_in_bin path in\n      Fun.protect\n        ~finally:(fun () -> close_in ic)\n        (fun () ->\n          let version : int = Marshal.from_channel ic in\n          if version <> cache_version then None\n          else\n            let value = Marshal.from_channel ic in\n   
         Some value)\n    with _ -> None\n\nlet put ~table ~key value =\n  let path = cache_path ~table ~key in\n  ensure_dir (Filename.dirname path);\n  try\n    let oc = open_out_bin path in\n    Fun.protect\n      ~finally:(fun () -> close_out oc)\n      (fun () ->\n        Marshal.to_channel oc cache_version [];\n        Marshal.to_channel oc value [])\n  with _ -> ()\n"
  },
  {
    "path": "packages/tolk/lib/diskcache.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Simple file-based disk cache.\n\n    Stores marshalled OCaml values keyed by [(table, key)] pairs. Each entry\n    is a separate file under the platform cache directory. Values are\n    automatically invalidated when the cache version changes.\n\n    Uses Marshal for serialization and individual files per key. *)\n\nval get : table:string -> key:string -> 'a option\n(** [get ~table ~key] retrieves a cached value, or [None] if the key is\n    absent or the cache file is corrupt/stale. *)\n\nval put : table:string -> key:string -> 'a -> unit\n(** [put ~table ~key value] stores [value] in the cache. Creates the cache\n    directory if needed. *)\n"
  },
  {
    "path": "packages/tolk/lib/dune",
    "content": "(include_subdirs unqualified)\n\n(library\n (name tolk)\n (public_name tolk)\n (libraries unix tolk_ir))\n"
  },
  {
    "path": "packages/tolk/lib/engine/allocations.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\n(* Buffer allocation.\n\n   Transforms a tensor-level SINK into a CALL with explicit buffer\n   allocations and a buffer_map tracking which original tensor nodes\n   map to which allocated buffers.\n\n   Three phases:\n   1. Tag nodes that need realization (CONTIGUOUS, AFTER+STORE, bases).\n   2. Replace tagged nodes with explicit buffer allocations.\n   3. Finalize: strip tags, collect assigns, replace buffers with PARAMs. *)\n\nopen Tolk_ir\nmodule T = Tensor\nmodule D = Dtype\nmodule C = Const\n\n(* Helpers *)\n\nlet int_ n = T.const (C.int D.Val.index n) D.index\nlet shape_prod = List.fold_left ( * ) 1\nlet dtype_or_void n = match T.dtype n with Some d -> d | None -> D.void\n\nlet shape_node dims =\n  match List.map int_ dims with [d] -> d | ds -> T.vectorize ~srcs:ds\n\n(* Follow movement ops (not MULTI, not DETACH) plus DETACH to the\n   underlying node.  Equivalent to tinygrad's UOp.multibase. *)\nlet rec multibase x = match T.view x with\n  | Reshape { src; _ } | Expand { src; _ } | Pad { src; _ }\n  | Shrink { src; _ } | Permute { src; _ } | Flip { src; _ }\n  | Detach { src; _ } -> multibase src\n  | _ -> x\n\n(* Follow AFTER chains to the underlying source. *)\nlet rec base_through_after x = match T.view x with\n  | After { src; _ } -> base_through_after src\n  | _ -> x\n\n(* Is the base of [x] a buffer or buffer-view? *)\nlet has_buffer_identity x = match T.view (T.base x) with\n  | Buffer _ | Buffer_view _ -> true\n  | _ -> false\n\n(* Ops that do not need buffer realization. 
*)\nlet dont_realize = function\n  | T.Const _ | T.Buffer _ | T.Bind _ | T.Define_var _ | T.After _ -> true\n  | _ -> false\n\n(* Shrink [src] to [target_shape].  Each dimension is kept from 0 to\n   the target size — a no-op when shapes already match. *)\nlet shrink_to shapes src target_shape =\n  match shapes src with\n  | Some s when s = target_shape -> src\n  | _ ->\n      let before = shape_node (List.map (fun _ -> 0) target_shape) in\n      let after = shape_node target_shape in\n      T.shrink ~src ~before ~after\n\n(* If movement ops on [src] collapse to a contiguous range backed by a\n   buffer, return the element offset.  Returns [None] when the view is\n   non-contiguous or too complex to analyse statically. *)\nlet contiguous_view_offset shapes src =\n  (* Walk the movement-op chain and track whether the view stays\n     contiguous.  We handle the common patterns; the full analysis\n     would require the rangeify index pipeline. *)\n  let rec walk node = match T.view node with\n    | Buffer _ | Buffer_view _ -> Some 0\n    | Reshape { src; _ } -> walk src\n    | Shrink { src; _ } ->\n        let inner = match shapes src with Some s -> s | None -> [] in\n        if inner = [] then None\n        else\n          let pairs = match T.extract_marg_pairs (T.view node) with\n            | Some p -> p | None -> [] in\n          if pairs = [] then None\n          else\n            let n = List.length pairs in\n            (* All leading dimensions must be kept in full. *)\n            let all_full = List.for_all2 (fun (b, e) d ->\n              b = 0 && e = d) (List.filteri (fun i _ -> i < n - 1) pairs)\n              (List.filteri (fun i _ -> i < n - 1) inner) in\n            if not all_full then None\n            else\n              let last_b = fst (List.nth pairs (n - 1)) in\n              if last_b = 0 then walk src\n              else\n                (* Contiguous slice starting at last_b. 
*)\n                let strides = List.rev (List.fold_left (fun acc d ->\n                  (List.hd acc * d) :: acc) [1]\n                  (List.rev (List.tl (List.rev inner)))) in\n                let offset = last_b * List.nth strides (n - 1) in\n                (match walk src with\n                 | Some base_off -> Some (base_off + offset)\n                 | None -> None)\n    | _ -> None\n  in\n  let base = T.base src in\n  match T.view base with\n  | Buffer _ | Buffer_view _ -> walk src\n  | _ -> None\n\n(* Context *)\n\ntype ctx = {\n  uop_tbl : (int, T.t) Hashtbl.t;\n  mutable uop_count : int;\n  buffer_map : (int, T.t) Hashtbl.t;\n  bases : (int, unit) Hashtbl.t;\n  mutable assigns : T.t list;\n  mutable replacements : T.t list;\n  tags : (int, int list) Hashtbl.t;\n  shapes : T.t -> int list option;\n  devices : T.t -> T.device option;\n  mutable uid : int;\n}\n\n(* Tag side-table *)\n\nlet get_tags ctx n = Hashtbl.find_opt ctx.tags (T.tag n)\n\nlet get_tags_or_empty ctx n =\n  match Hashtbl.find_opt ctx.tags (T.tag n) with\n  | Some t -> t | None -> []\n\nlet has_tag ctx n = Hashtbl.mem ctx.tags (T.tag n)\nlet set_tags ctx n ts = Hashtbl.replace ctx.tags (T.tag n) ts\nlet remove_tags ctx n = Hashtbl.remove ctx.tags (T.tag n)\n\n(* When graph_rewrite rebuilds a node with new children, propagate its\n   tag entry to the replacement. *)\nlet propagate_tags ctx ~old_n ~new_n =\n  if old_n != new_n then\n    match Hashtbl.find_opt ctx.tags (T.tag old_n) with\n    | Some t -> Hashtbl.replace ctx.tags (T.tag new_n) t\n    | None -> ()\n\n(* Assign the next tag index to [x] and record it. *)\nlet tag_uop ctx x =\n  if has_tag ctx x then ()\n  else begin\n    let idx = ctx.uop_count in\n    ctx.uop_count <- ctx.uop_count + 1;\n    Hashtbl.replace ctx.uop_tbl idx x;\n    set_tags ctx x [idx]\n  end\n\n(* Phase 1 — add_tags *)\n\n(* Number the nodes that need realization and populate buffer_map for\n   plain AFTER nodes.  
Runs bottom-up so children are tagged before\n   parents. *)\nlet add_tags ctx node =\n  match T.view node with\n  | After { src; deps; _ } ->\n      if List.exists (fun d ->\n        match T.view d with Store _ -> true | _ -> false) deps\n      then tag_uop ctx node;\n      Hashtbl.replace ctx.buffer_map (T.tag node) (base_through_after src);\n      None\n  | Contiguous _ ->\n      tag_uop ctx node;\n      None\n  | _ when Hashtbl.mem ctx.bases (T.tag node) ->\n      tag_uop ctx node;\n      None\n  | _ -> None\n\n(* Phase 2 — early transform *)\n\n(* Create a fresh buffer matching [src]'s device, shape, and [dtype].\n   For multi-device tensors the buffer covers one shard and is wrapped\n   in MULTI. *)\nlet buffer_like ctx src dtype =\n  let shape = match ctx.shapes src with\n    | Some s -> s | None -> failwith \"buffer_like: unknown shape\" in\n  let dev = match ctx.devices src with\n    | Some d -> d | None -> failwith \"buffer_like: unknown device\" in\n  let axis = match T.view src with\n    | Multi { axis; _ } -> Some axis | _ -> None in\n  let ndev = match dev with\n    | T.Multi ds -> List.length ds | T.Single _ -> 1 in\n  (* Per-shard shape: divide the sharding axis by the device count. *)\n  let shard_shape = match axis with\n    | Some ax when ndev > 1 ->\n        List.mapi (fun i d -> if i = ax then d / ndev else d) shape\n    | _ -> shape in\n  let size = shape_prod shard_shape in\n  let dev_node = T.device dev in\n  let uid = ctx.uid in\n  ctx.uid <- ctx.uid + 1;\n  let buf = T.buffer ~unique:(T.unique ~id:uid) ~device:dev_node ~size ~dtype in\n  let buf = T.reshape ~src:buf ~shape:(shape_node shard_shape) in\n  (* Shrink to actual shard shape when it differs from max shard shape.\n     For evenly divisible axes this is a no-op. 
*)\n  let buf = shrink_to ctx.shapes buf shard_shape in\n  match axis with\n  | Some ax when ndev > 1 -> T.multi ~src:buf ~axis:ax\n  | _ -> buf\n\n(* If movement ops on [src] collapse to a contiguous range, return a\n   BUFFER_VIEW reshaped to [src]'s shape. *)\nlet make_buffer_view shapes src =\n  match contiguous_view_offset shapes src with\n  | None -> None\n  | Some offset ->\n      let base = T.base src in\n      let size = match shapes src with\n        | Some s -> shape_prod s | None -> 0 in\n      (* Chain BUFFER_VIEW offsets when the base is already a view. *)\n      let offset, buf = match T.view base with\n        | Buffer_view { offset = bv_off; src = bv_src; _ } ->\n            offset + bv_off, bv_src\n        | _ -> offset, base\n      in\n      let bv_dtype = dtype_or_void src in\n      let bv = T.buffer_view ~src:buf ~size ~offset ~dtype:bv_dtype in\n      let shape = match shapes src with Some s -> s | None -> [] in\n      Some (T.reshape ~src:bv ~shape:(shape_node shape))\n\n(* CONTIGUOUS(movement-ops(BUFFER)) → CONTIGUOUS(BUFFER_VIEW) when the\n   movement ops collapse to a contiguous range. *)\nlet contiguous_mops_to_view ctx node =\n  match T.view node with\n  | Contiguous { src; _ } ->\n      let base = T.base src in\n      (match T.view base with\n       | Buffer _ | Buffer_view _ ->\n           (* RESHAPE directly on a buffer already has buffer identity,\n              handled by merge_contiguous_after — skip. *)\n           let trivial_reshape = match T.view src with\n             | Reshape { src = inner; _ } ->\n                 (match T.view inner with\n                  | Buffer _ | Buffer_view _ -> true | _ -> false)\n             | _ -> false in\n           if trivial_reshape then None\n           else if ctx.shapes node = None then None (* symbolic shapes *)\n           else\n             (* XXX: should check that the device allocator supports\n                offset views.  
All current tolk devices (CPU, Metal)\n                do, so we skip the check for now. *)\n             (match make_buffer_view ctx.shapes src with\n              | None -> None\n              | Some view ->\n                  let c = T.contiguous ~src:view () in\n                  (match get_tags ctx node with\n                   | Some ts -> set_tags ctx c ts\n                   | None -> ());\n                  Some c)\n       | _ -> None)\n  | _ -> None\n\n(* Transform precompiled CALL nodes to have explicit output buffers.\n   Currently only single-output (SINK body) precompiled calls exist in\n   tolk; multi-output calls would need TUPLE/GETTUPLE IR support. *)\nlet transform_precompiled_call _ctx node =\n  match T.view node with\n  | Call { info; callee = Ref body; _ } when info.precompile ->\n      (match T.view body with\n       | Sink _ -> None\n       | _ -> None)\n  | _ -> None\n\n(* Rule: tagged non-CONTIGUOUS/AFTER/STORE → wrap in CONTIGUOUS and\n   move the tag onto it. *)\nlet wrap_tagged ctx node =\n  match T.view node with\n  | Contiguous _ | After _ | Store _ -> None\n  | _ ->\n      (match get_tags ctx node with\n       | Some ts ->\n           remove_tags ctx node;\n           let c = T.contiguous ~src:node () in\n           set_tags ctx c ts;\n           Some c\n       | None -> None)\n\n(* Rule: CONTIGUOUS(AFTER) where AFTER's source has buffer identity →\n   remove the redundant CONTIGUOUS and merge tags into the AFTER. *)\nlet merge_contiguous_after ctx node =\n  match T.view node with\n  | Contiguous { src = a; _ } ->\n      (match T.view a with\n       | After { src = a_src; _ } when has_buffer_identity a_src ->\n           let merged = get_tags_or_empty ctx a @ get_tags_or_empty ctx node in\n           remove_tags ctx node;\n           set_tags ctx a merged;\n           Some a\n       | _ -> None)\n  | _ -> None\n\n(* Rule: AFTER(_, STORE(_, src)) → CONTIGUOUS(src) when the store's\n   target is not a BUFFER. 
*)\nlet revert_store_to_contiguous ctx node =\n  match T.view node with\n  | After { deps; _ } ->\n      let store_src = List.find_map (fun d ->\n        match T.view d with\n        | Store { value; _ } -> Some value\n        | _ -> None) deps in\n      (match store_src with\n       | None -> None\n       | Some src ->\n           let rec find_target n = match T.view n with\n             | Bitcast { src; _ } | After { src; _ } -> find_target (T.base src)\n             | _ -> n\n           in\n           let target = find_target node in\n           (match T.view target with\n            | Buffer _ -> None\n            | _ ->\n                let c = T.contiguous ~src () in\n                (match get_tags ctx node with\n                 | Some ts -> set_tags ctx c ts\n                 | None -> ());\n                Some c))\n  | _ -> None\n\n(* Rule: CONTIGUOUS → BUFFER + STORE + AFTER.  The core allocation. *)\nlet contig_to_store_after ctx node =\n  match T.view node with\n  | Contiguous { src; dtype; _ } ->\n      let has_dev = ctx.devices src <> None in\n      if not has_dev then None\n      else\n        let shape = match ctx.shapes src with Some s -> s | None -> [] in\n        if shape_prod shape = 0 then Some src\n        else begin\n          let buf = buffer_like ctx src dtype in\n          let store = T.store ~dst:buf ~value:src in\n          let result = T.after ~src:buf ~deps:[store] in\n          (match get_tags ctx node with\n           | Some ts -> set_tags ctx result ts\n           | None -> ());\n          Some result\n        end\n  | _ -> None\n\n(* Rule: remove DETACH / CONTIGUOUS_BACKWARD. *)\nlet remove_detach node =\n  match T.view node with\n  | Detach { src; _ } | Contiguous_backward { src; _ } -> Some src\n  | _ -> None\n\n(* Phase 3 — finalize *)\n\n(* Strip tags, map each original numbered node to its final buffer,\n   and collect assigns. 
*)\nlet pm_finalize ctx node =\n  match T.view node with\n  | After _ ->\n      (match get_tags ctx node with\n       | Some tag_indices ->\n           remove_tags ctx node;\n           let replace_uop = base_through_after node in\n           List.iter (fun t ->\n             let original = Hashtbl.find ctx.uop_tbl t in\n             let original_shape =\n               match ctx.shapes original with Some s -> s | None -> [] in\n             let buf = shrink_to ctx.shapes replace_uop original_shape in\n             Hashtbl.replace ctx.buffer_map (T.tag original) buf)\n             tag_indices\n       | None -> ());\n      ctx.assigns <- node :: ctx.assigns;\n      None\n  | Const { value; dtype; srcs = [u; d] }\n    when (match T.view u with Unique _ -> true | _ -> false)\n      && (match T.view d with Device _ -> true | _ -> false) ->\n      Some (T.const ~srcs:[d] value dtype)\n  | _ -> None\n\n(* Replace BUFFER, BUFFER_VIEW, and BIND with PARAM for cache-key\n   normalisation. *)\nlet pm_replace_buf ctx node =\n  let replace_input b =\n    ctx.replacements <- b :: ctx.replacements;\n    let slot = List.length ctx.replacements - 1 in\n    let dtype = dtype_or_void b in\n    let device = match T.view b with\n      | Buffer { device; _ } -> Some device\n      | _ -> None\n    in\n    Some (T.param ~slot ~dtype ?device ())\n  in\n  match T.view node with\n  | Buffer { unique; device; _ }\n    when (match T.view unique with Unique _ -> true | _ -> false)\n      && (match T.view device with Device _ -> true | _ -> false) ->\n      replace_input node\n  | Buffer_view { src; _ }\n    when (match T.view src with Buffer _ -> true | _ -> false) ->\n      replace_input node\n  | Bind { var; value = Some v; _ }\n    when (match T.view var with Define_var _ -> true | _ -> false)\n      && (match T.view v with Const _ -> true | _ -> false) ->\n      replace_input node\n  | _ -> None\n\n(* Entry point *)\n\nlet transform_to_call (big_sink : T.t) : T.t * (int, T.t) Hashtbl.t =\n 
 let shapes = T.compute_shapes big_sink in\n  let devices = T.compute_devices big_sink in\n  let bases = Hashtbl.create 16 in\n  (match T.view big_sink with\n   | Sink { srcs; _ } ->\n       List.iter (fun x ->\n         if not (dont_realize (T.view (T.base x))) then\n           Hashtbl.replace bases (T.tag (multibase x)) ())\n         srcs\n   | _ -> ());\n  let uid_start =\n    List.fold_left (fun acc x ->\n      match T.view x with\n      | Unique { id; _ } -> max acc (id + 1)\n      | _ -> acc) 0 (T.toposort big_sink)\n  in\n  let ctx = {\n    uop_tbl = Hashtbl.create 64;\n    uop_count = 0;\n    buffer_map = Hashtbl.create 64;\n    bases;\n    assigns = [];\n    replacements = [];\n    tags = Hashtbl.create 64;\n    shapes;\n    devices;\n    uid = uid_start;\n  } in\n  (* Phase 1: number the nodes that need realization. *)\n  let big_sink =\n    T.graph_rewrite ~name:\"add_tags\" (add_tags ctx) big_sink in\n  (* Phase 2: replace tagged nodes with buffer allocations. *)\n  let big_sink =\n    T.graph_rewrite ~name:\"early_transform\"\n      ~on_rebuild:(propagate_tags ctx)\n      (T.first_match [\n        transform_precompiled_call ctx;\n        contiguous_mops_to_view ctx;\n        wrap_tagged ctx;\n        merge_contiguous_after ctx;\n        revert_store_to_contiguous ctx;\n        contig_to_store_after ctx;\n        remove_detach;\n      ]) big_sink in\n  (* Phase 3a: finalize — strip tags and collect assigns. *)\n  ignore (T.graph_rewrite ~name:\"finalize\" (pm_finalize ctx) big_sink);\n  (* Phase 3b: replace buffers with PARAMs and wrap in a CALL. *)\n  let assigns_sink = T.sink (List.rev ctx.assigns) in\n  let body =\n    T.graph_rewrite ~name:\"replace_bufs\" (pm_replace_buf ctx) assigns_sink in\n  let args = List.rev ctx.replacements in\n  let dtype = dtype_or_void body in\n  let info =\n    { T.grad_fxn = None; metadata = []; name = None; precompile = false } in\n  let ret = T.call ~callee:(Ref body) ~args ~info ~dtype in\n  (ret, ctx.buffer_map)\n"
  },
  {
    "path": "packages/tolk/lib/engine/allocations.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\n(** Buffer allocation for tensor graphs.\n\n    Decides which tensor computations need explicit buffer allocations\n    and transforms a lazy tensor-level {!Tolk_ir.Tensor.view.Sink}\n    into a {!Tolk_ir.Tensor.view.Call} with allocated buffers.\n\n    The transformation runs in three phases:\n    {ol\n    {- {e Tag.}  Identify nodes that need realization\n       ({!Tolk_ir.Tensor.view.Contiguous},\n       {!Tolk_ir.Tensor.view.After}+{!Tolk_ir.Tensor.view.Store},\n       and non-trivial bases of the sink's children).}\n    {- {e Allocate.}  Replace tagged nodes with explicit\n       {!Tolk_ir.Tensor.view.Buffer} +\n       {!Tolk_ir.Tensor.view.Store} +\n       {!Tolk_ir.Tensor.view.After} sequences.  When movement ops\n       on a buffer collapse to a contiguous range, a\n       {!Tolk_ir.Tensor.view.Buffer_view} is used instead.}\n    {- {e Finalize.}  Strip internal bookkeeping, collect the\n       resulting stores, replace input buffers with\n       {!Tolk_ir.Tensor.view.Param} nodes for cache-key\n       normalisation, and wrap everything in a\n       {!Tolk_ir.Tensor.view.Call}.}}\n\n    The returned [buffer_map] tracks which original tensor nodes map\n    to which allocated buffers, keyed by {!Tolk_ir.Tensor.tag}. 
*)\n\nval transform_to_call :\n  Tolk_ir.Tensor.t ->\n  Tolk_ir.Tensor.t * (int, Tolk_ir.Tensor.t) Hashtbl.t\n(** [transform_to_call big_sink] is [(call, buffer_map)].\n\n    [big_sink] must be a {!Tolk_ir.Tensor.view.Sink} node\n    representing the lazy tensor graph to be realized.\n\n    [call] is a {!Tolk_ir.Tensor.view.Call} whose callee is a\n    parameterised sink (input buffers replaced by\n    {!Tolk_ir.Tensor.view.Param} nodes) and whose arguments are\n    the original buffer and bind nodes.\n\n    [buffer_map] maps original tensor nodes to their allocated\n    buffers, keyed by {!Tolk_ir.Tensor.tag}.  Downstream scheduling\n    uses this to resolve tensor references to concrete buffers. *)\n"
  },
  {
    "path": "packages/tolk/lib/engine/jit.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\n(* JIT compilation.\n\n   Three-phase execution: warmup (cnt=0) runs eagerly, capture (cnt=1)\n   records the computation schedule, exec (cnt>=2) replays the compiled\n   schedule with fresh input buffers.  On the first replay, the schedule\n   may be condensed into graph executors when the device supports it. *)\n\nopen Tolk_ir\nmodule K = Kernel\nmodule T = Tensor\nmodule B = Device.Buffer\n\nlet strf = Printf.sprintf\nlet debug = Helpers.getenv \"DEBUG\" 0\nlet jit_level = Helpers.getenv \"JIT\" 2\nlet jit_batch_size = Helpers.getenv \"JIT_BATCH_SIZE\" 0\n\n(* Exceptions *)\n\nexception Graph_exn of string\nexception Jit_error of string\n\n(* Types *)\n\nlet next_uid = ref 0\nlet fresh_uid () = let i = !next_uid in incr next_uid; i\n\n(* Runner kind — replaces Python isinstance dispatch on Runner subclasses.\n   Each variant carries enough to dispatch and to extract kind-specific\n   data (e.g. Program_spec from a compiled kernel). *)\ntype prg =\n  | Compiled of Realize.Compiled_runner.t\n  | View_op of Realize.Runner.t\n  | Buffer_copy of Realize.Runner.t\n  | Buffer_xfer of Realize.Runner.t\n  | Enc_dec of Realize.Runner.t\n  | Graph of graph_runner\n\n(* Execution item with mutable buffer slots for input substitution.\n   [uid] provides stable identity across list rebuilds (replaces Python\n   id() on ExecItem objects). 
*)\nand exec_item = {\n  uid : int;\n  bufs : B.t option array;\n  prg : prg;\n  fixedvars : string list;\n}\n\n(* Graph runner — batches multiple kernels for accelerated dispatch.\n   Stores precomputed replacement tables so the device graph only needs\n   to update the values that actually change between calls. *)\nand graph_runner = {\n  gr_cache : exec_item list;\n  gr_input_replace : ((int * int), int) Hashtbl.t;\n  gr_var_replace : (int, (int * int) list) Hashtbl.t;\n  gr_dims_replace : (int, int option * int option) Hashtbl.t;\n  gr_dims_base : (int, int array * int array) Hashtbl.t;\n  gr_vars : string array;\n  gr_sym_dims : K.t array list;\n  gr_w_dep : (int, (int * int * int) list) Hashtbl.t;\n  gr_r_dep : (int, (int * int * int) list) Hashtbl.t;\n  gr_runner : Realize.Runner.t;\n}\n\n(* A view input is a sub-buffer of an existing input that must be\n   reconstructed from the base on every call. *)\ntype view_input = {\n  vi_base_idx : int;\n  vi_offset : int;\n  vi_device : string;\n  vi_size : int;\n  vi_dtype : Dtype.t;\n}\n\n(* Validation token for ensuring inputs don't change shape between calls. *)\ntype input_info = {\n  ii_size : int;\n  ii_dtype : Dtype.t;\n  ii_device : string;\n}\n\n(* Exec item helpers *)\n\nlet runner_of_prg = function\n  | Compiled cr -> Realize.Compiled_runner.runner cr\n  | View_op r | Buffer_copy r | Buffer_xfer r | Enc_dec r -> r\n  | Graph gr -> gr.gr_runner\n\nlet run_ei ei var_vals ~jit =\n  let runner = runner_of_prg ei.prg in\n  let bufs = Array.to_list ei.bufs |> List.filter_map (fun b ->\n    Option.map (fun buf -> B.ensure_allocated buf; buf) b) in\n  ignore (Realize.Runner.call runner bufs var_vals\n    ~wait:(not jit || debug >= 2) ~timeout:None)\n\n(* Lower a Realize.Exec_item into our richer exec_item.  Compiles\n   kernels via [get_runner] and wraps the result in the appropriate\n   [prg] variant so we can dispatch on runner kind later. 
*)\nlet lower_realize_ei ~device ~get_program (rei : Realize.Exec_item.t)\n    : exec_item =\n  let bufs = Array.of_list (Realize.Exec_item.bufs rei) in\n  let live_bufs = Array.to_list bufs |> List.filter_map Fun.id in\n  let prg =\n    match T.view (Realize.Exec_item.ast rei) with\n    | Call { callee = Ast kernel; _ } ->\n        Compiled (Realize.get_runner ~device ~get_program kernel)\n    | Call { callee = Ref ref_node; _ } -> begin\n        match T.view ref_node with\n        | Buffer_view _ ->\n            View_op (Realize.view_op ~device (List.hd live_bufs))\n        | Copy _ ->\n            let dest = List.nth live_bufs 0 in\n            let src = List.nth live_bufs 1 in\n            Buffer_copy (Realize.buffer_copy ~device\n              ~total_sz:(B.nbytes dest)\n              ~dest_device:(B.device dest)\n              ~src_device:(B.device src))\n        | _ -> failwith \"lower_realize_ei: unsupported Ref callee\"\n      end\n    | _ -> failwith \"lower_realize_ei: expected Call node\"\n  in\n  let fixedvars = List.map fst (Realize.Exec_item.var_vals rei) in\n  { uid = fresh_uid (); bufs; prg; fixedvars }\n\n(* Output buffers *)\n\n(* Buffers written by an exec item.  For compiled kernels, output\n   parameters that are not also inputs; for copies, the destination. *)\nlet get_out_buffers ei =\n  match ei.prg with\n  | Compiled cr ->\n      let p = Realize.Compiled_runner.p cr in\n      let ins = Program_spec.ins p in\n      List.filter_map (fun out ->\n        if List.mem out ins then None else ei.bufs.(out))\n        (Program_spec.outs p)\n  | Buffer_copy _ | Buffer_xfer _ | Enc_dec _ ->\n      Option.to_list ei.bufs.(0)\n  | View_op _ | Graph _ -> []\n\n(* Buffer set *)\n\n(* Set of buffers keyed by id, with an optional None sentinel for\n   tracking \"unknown\" / cleared slots. 
*)\ntype buf_set = {\n  mutable has_none : bool;\n  tbl : (int, B.t) Hashtbl.t;\n}\n\nlet buf_set () = { has_none = false; tbl = Hashtbl.create 32 }\n\nlet buf_set_mem s = function\n  | None -> s.has_none\n  | Some b -> Hashtbl.mem s.tbl (B.id b)\n\nlet buf_set_add s b = Hashtbl.replace s.tbl (B.id b) b\n\n(* Propagate buffer dependencies forward through a cache: any exec item\n   whose inputs overlap the seed set has its outputs added. *)\nlet update_depends depends cache =\n  List.iter (fun ei ->\n    if Array.exists (buf_set_mem depends) ei.bufs then\n      List.iter (buf_set_add depends) (get_out_buffers ei))\n    cache\n\n(* Input replacement *)\n\n(* Build (cache_idx, buf_idx) -> input_idx map.\n   When [orig_valid_positions] is provided (keyed by exec_item uid),\n   only positions valid during the original capture are included — this\n   prevents aliasing bugs when graph batching reuses buffer slots. *)\nlet get_input_replace cache (input_bufs : B.t array)\n    ?orig_valid_positions () =\n  let idx_of_buf : (int, int) Hashtbl.t = Hashtbl.create 32 in\n  Array.iteri (fun i buf ->\n    Hashtbl.replace idx_of_buf (B.id buf) i) input_bufs;\n  let result = Hashtbl.create 64 in\n  List.iteri (fun j ei ->\n    Array.iteri (fun i b ->\n      match b with\n      | None -> ()\n      | Some buf ->\n          match Hashtbl.find_opt idx_of_buf (B.id buf) with\n          | None -> ()\n          | Some idx ->\n              let valid = match orig_valid_positions with\n                | None -> true\n                | Some vp ->\n                    match Hashtbl.find_opt vp ei.uid with\n                    | None -> false\n                    | Some set -> List.mem i set\n              in\n              if valid then Hashtbl.replace result (j, i) idx)\n      ei.bufs)\n    cache;\n  result\n\n(* Graph runner *)\n\nlet is_sym_dim dim =\n  let rec loop i =\n    i < Array.length dim &&\n    (match K.const_arg dim.(i) with None -> true | Some _ -> loop (i + 1))\n  in\n  loop 
0\n\nlet dim_eq a b =\n  Array.length a = Array.length b &&\n  let rec loop i =\n    i >= Array.length a || (K.tag a.(i) = K.tag b.(i) && loop (i + 1))\n  in\n  loop 0\n\nlet is_runtime_var p name =\n  match Program_spec.core_id p with\n  | Some ci ->\n      let vars = Program_spec.vars p in\n      ci.var_index < List.length vars &&\n      (List.nth vars ci.var_index).name = name\n  | None -> false\n\nlet create_graph_runner cache (input_bufs : B.t array)\n    (var_vals : (string * int) list) ?orig_valid_positions () =\n  let input_replace =\n    get_input_replace cache input_bufs ?orig_valid_positions () in\n  let vars =\n    List.sort_uniq String.compare (List.map fst var_vals)\n    |> Array.of_list in\n  let var_index name =\n    let rec loop i =\n      if i >= Array.length vars then\n        failwith (strf \"graph_runner: unknown variable %S\" name)\n      else if String.equal vars.(i) name then i\n      else loop (i + 1)\n    in\n    loop 0\n  in\n  (* Collect unique symbolic launch dimension vectors. *)\n  let sym_dims = ref [] in\n  let add_if_sym dim =\n    if is_sym_dim dim && not (List.exists (dim_eq dim) !sym_dims) then\n      sym_dims := dim :: !sym_dims\n  in\n  List.iter (fun ei ->\n    match ei.prg with\n    | Compiled cr ->\n        let p = Realize.Compiled_runner.p cr in\n        (match Program_spec.local_size p with\n         | Some ls -> add_if_sym ls\n         | None -> ());\n        add_if_sym (Program_spec.global_size p)\n    | _ -> ())\n    cache;\n  let sym_dims = List.rev !sym_dims in\n  let find_sym_idx dim =\n    if not (is_sym_dim dim) then None\n    else\n      let rec loop i = function\n        | [] -> None\n        | d :: rest -> if dim_eq d dim then Some i else loop (i + 1) rest\n      in\n      loop 0 sym_dims\n  in\n  (* Build per-kernel replacement tables. 
*)\n  let var_replace = Hashtbl.create 16 in\n  let dims_replace = Hashtbl.create 16 in\n  let dims_base = Hashtbl.create 16 in\n  let total_est = ref Program_spec.Estimates.zero in\n  List.iteri (fun j ei ->\n    total_est := Program_spec.Estimates.( + ) !total_est\n      (Realize.Runner.estimates (runner_of_prg ei.prg));\n    match ei.prg with\n    | Compiled cr ->\n        let p = Realize.Compiled_runner.p cr in\n        (* Variables needing runtime substitution: not fixed, not runtime. *)\n        let replace = ref [] in\n        List.iteri (fun i (v : Program_spec.var) ->\n          if not (List.mem v.name ei.fixedvars) &&\n             not (is_runtime_var p v.name)\n          then replace := (i, var_index v.name) :: !replace)\n          (Program_spec.vars p);\n        if !replace <> [] then\n          Hashtbl.replace var_replace j (List.rev !replace);\n        (* Symbolic launch dims. *)\n        let g = Program_spec.global_size p in\n        let gi = find_sym_idx g in\n        let li = match Program_spec.local_size p with\n          | Some ls -> find_sym_idx ls | None -> None in\n        if gi <> None || li <> None then begin\n          Hashtbl.replace dims_replace j (gi, li);\n          let eval d = Array.map (fun s -> K.sym_infer s var_vals) d in\n          let base_l = match Program_spec.local_size p with\n            | Some ls -> eval ls | None -> [| 1; 1; 1 |] in\n          Hashtbl.replace dims_base j (eval g, base_l)\n        end\n    | _ -> ())\n    cache;\n  let dev = Realize.Runner.dev (runner_of_prg (List.hd cache).prg) in\n  (* Base runner — device-specific graph implementations override call. 
*)\n  let runner = Realize.Runner.make\n    ~display_name:(strf \"<batched %d>\" (List.length cache))\n    ~device:dev ~estimates:!total_est\n    (fun _bufs _var_vals ~wait:_ ~timeout:_ -> None) in\n  { gr_cache = cache; gr_input_replace = input_replace;\n    gr_var_replace = var_replace; gr_dims_replace = dims_replace;\n    gr_dims_base = dims_base; gr_vars = vars; gr_sym_dims = sym_dims;\n    gr_w_dep = Hashtbl.create 0; gr_r_dep = Hashtbl.create 0;\n    gr_runner = runner }\n\n(* (cache_idx, program_var_idx, value) for runtime variable updates. *)\nlet updated_vars gr var_vals =\n  let vals = Array.map (fun name -> List.assoc name var_vals) gr.gr_vars in\n  let acc = ref [] in\n  Hashtbl.iter (fun j vidxs ->\n    List.iter (fun (i, v) -> acc := (j, i, vals.(v)) :: !acc) vidxs)\n    gr.gr_var_replace;\n  !acc\n\n(* (cache_idx, global, local) for symbolic launch dimension updates. *)\nlet updated_launch_dims gr var_vals =\n  let dims = List.map (fun dim ->\n    Array.map (fun s -> K.sym_infer s var_vals) dim)\n    gr.gr_sym_dims |> Array.of_list in\n  let acc = ref [] in\n  Hashtbl.iter (fun j (gi, li) ->\n    let base_g, base_l = Hashtbl.find gr.gr_dims_base j in\n    let g = match gi with Some i -> dims.(i) | None -> base_g in\n    let l = match li with Some i -> dims.(i) | None -> base_l in\n    acc := (j, g, l) :: !acc)\n    gr.gr_dims_replace;\n  !acc\n\n(* Interval-based read/write dependency tracking for suballocated buffers.\n   Device-specific graph implementations call this to discover which\n   previously-launched kernels a new dispatch must wait on. *)\nlet access_resources gr bufs ~write new_dep =\n  let get tbl key =\n    match Hashtbl.find_opt tbl key with Some l -> l | None -> [] in\n  let overlaps st en s e = st < e && s < en in\n  (* Phase 1: collect wait dependencies from overlapping ranges. 
*)\n  let wait = Hashtbl.create 8 in\n  Array.iteri (fun i buf ->\n    let key = B.base_id buf in\n    let s = B.offset buf in\n    let e = s + B.nbytes buf in\n    List.iter (fun (st, en, dep) ->\n      if overlaps st en s e then Hashtbl.replace wait dep dep)\n      (get gr.gr_w_dep key);\n    if List.mem i write then\n      List.iter (fun (st, en, dep) ->\n        if overlaps st en s e then Hashtbl.replace wait dep dep)\n        (get gr.gr_r_dep key))\n    bufs;\n  (* Phase 2: clip written intervals and insert new dependency. *)\n  let clip entries s e =\n    List.concat_map (fun (st, en, dep) ->\n      (if st < min s en then [(st, min s en, dep)] else []) @\n      (if max e st < en then [(max e st, en, dep)] else []))\n      entries\n  in\n  Array.iteri (fun i buf ->\n    let key = B.base_id buf in\n    let s = B.offset buf in\n    let e = s + B.nbytes buf in\n    if List.mem i write then begin\n      Hashtbl.replace gr.gr_w_dep key\n        (clip (get gr.gr_w_dep key) s e @ [(s, e, new_dep)]);\n      Hashtbl.replace gr.gr_r_dep key\n        (clip (get gr.gr_r_dep key) s e)\n    end else\n      Hashtbl.replace gr.gr_r_dep key\n        (get gr.gr_r_dep key @ [(s, e, new_dep)]))\n    bufs;\n  Hashtbl.fold (fun _ dep acc -> dep :: acc) wait []\n\nlet supports_exec_item devs ei =\n  match ei.prg with\n  | Compiled _ ->\n      let n = List.length (List.sort_uniq (fun a b ->\n        String.compare (Device.name a) (Device.name b)) devs) in\n      n = 1\n  | _ -> false\n\n(* Multi-device variant: all devices must be the same backend type. 
*)\nlet multi_supports_exec_item devs ei =\n  let backend name =\n    match String.split_on_char ':' name with t :: _ -> t | [] -> name in\n  match ei.prg with\n  | Compiled _ | Buffer_xfer _ ->\n      let buf_types = Array.to_list ei.bufs |> List.filter_map (fun b ->\n        Option.map (fun buf -> backend (B.device buf)) b) in\n      let dev_types = List.map (fun d -> backend (Device.name d)) devs in\n      List.length (List.sort_uniq String.compare (buf_types @ dev_types)) = 1\n  | _ -> false\n\n(* Graph batching *)\n\n(* Split the jit cache into batches for graph execution.  Consecutive\n   compatible kernels are condensed into a single graph executor when\n   the device provides a graph implementation.  The batch size doubles\n   after each successful graph, allowing the accelerator to update\n   later graphs while early ones are still running. *)\nlet apply_graph_to_jit cache (input_bufs : B.t array)\n    (var_vals : (string * int) list) ?orig_valid_positions\n    ?(max_batch_size = 0) () =\n  let graph_one = Helpers.getenv \"GRAPH_ONE_KERNEL\" 0 <> 0 in\n  let graphed = ref [] in\n  let batch = ref [] in\n  let batch_devs : Device.t list ref = ref [] in\n  let max_bs = ref max_batch_size in\n  let dedup_devs ds =\n    List.sort_uniq (fun a b ->\n      String.compare (Device.name a) (Device.name b)) ds in\n  let flush () =\n    begin try\n      if !batch_devs = [] then\n        raise (Graph_exn \"no device for graph\");\n      if List.length !batch <= 1 && not graph_one then\n        raise (Graph_exn \"only one kernel doesn't graph\");\n      let dev = List.hd !batch_devs in\n      (* Device graph construction: dev.graph(batch, input_bufs, var_vals).\n         When graph support is added, the device will provide a constructor\n         that returns a graph_runner wrapping the batched kernels. 
*)\n      ignore (dev, input_bufs, var_vals, orig_valid_positions);\n      raise (Graph_exn \"device graph not yet implemented\")\n    with Graph_exn e ->\n      graphed := List.rev_append !batch !graphed;\n      if debug >= 2 then\n        Printf.eprintf \"JIT GRAPHing failed batch with %d kernels: %s\\n%!\"\n          (List.length !batch) e\n    end;\n    batch := [];\n    batch_devs := []\n  in\n  List.iter (fun ei ->\n    let ji_dev = match ei.prg with\n      | Compiled cr ->\n          Some (Realize.Runner.dev (Realize.Compiled_runner.runner cr))\n      | View_op _ -> None  (* silently skipped *)\n      | _ -> None\n    in\n    (* Graphability requires a device with graph support.  When a device\n       implements [graph], this check also calls [supports_exec_item]. *)\n    let can_graph = match ji_dev with\n      | Some _dev -> false  (* no device graph support yet *)\n      | None -> false\n    in\n    let can_share = can_graph && !batch_devs <> [] in\n    let can_extend = can_share &&\n      (!max_bs = 0 || List.length !batch < !max_bs) in\n    if not can_extend && !batch <> [] then flush ();\n    if can_graph then begin\n      batch := ei :: !batch;\n      batch_devs := dedup_devs (match ji_dev with\n        | Some d -> d :: !batch_devs | None -> !batch_devs)\n    end else begin\n      graphed := ei :: !graphed;\n      batch_devs := []\n    end)\n    cache;\n  if !batch <> [] then flush ();\n  ignore max_bs;\n  List.rev !graphed\n\n(* Memory planning *)\n\n(* Apply the internal memory planner to a jit cache, returning a new\n   cache with buffer assignments optimized.  Buffers absent from the\n   planner's assignment table keep their original allocation. 
*)\nlet plan_jit_memory jit_cache =\n  let copies = List.filter_map (fun ei ->\n    match ei.prg with\n    | Buffer_copy _ | Buffer_xfer _ | Enc_dec _ ->\n        (match ei.bufs.(0), (if Array.length ei.bufs > 1 then ei.bufs.(1)\n                             else None) with\n         | Some dst, Some src -> Some (dst, src)\n         | _ -> None)\n    | _ -> None) jit_cache in\n  let buffers = List.map (fun ei ->\n    Array.to_list ei.bufs |> List.filter_map Fun.id) jit_cache in\n  let assigned =\n    Memory.internal_memory_planner ~copies ~debug_prefix:\"JIT \" buffers in\n  List.map (fun ei ->\n    let new_bufs = Array.map (function\n      | None -> None\n      | Some buf ->\n          let repl = match Hashtbl.find_opt assigned (B.id buf) with\n            | Some b -> b | None -> buf in\n          B.ensure_allocated repl;\n          Some repl) ei.bufs in\n    { ei with bufs = new_bufs; uid = fresh_uid () }) jit_cache\n\n(* Captured JIT *)\n\ntype 'a captured_jit = {\n  ret : 'a;\n  jit_cache : exec_item array;\n  input_replace : ((int * int), int) Hashtbl.t;\n  extra_view_inputs : view_input list;\n  expected_input_info : input_info array;\n  mutable live_cache : exec_item list;\n  mutable live_replace : ((int * int), int) Hashtbl.t;\n  mutable first_run : bool;\n  output_to_writer : (int, int) Hashtbl.t;\n  input_to_max_reader : (int, int) Hashtbl.t;\n}\n\n(* Null out input buffer slots so their memory can be reused. *)\nlet clear_inputs t =\n  Hashtbl.iter (fun (j, i) _ ->\n    (List.nth t.live_cache j).bufs.(i) <- None)\n    t.live_replace\n\n(* Precompute read-after-write hazard detection tables.\n   output_to_writer: buffer_id -> cache index that writes it.\n   input_to_max_reader: input buffer index -> latest cache index\n   that reads it (only when the buffer is NOT also an output of\n   that same kernel, since same-kernel overlap is always safe). 
*)\nlet init_hazard_tables t =\n  Hashtbl.clear t.output_to_writer;\n  Array.iteri (fun j ei ->\n    List.iter (fun b ->\n      Hashtbl.replace t.output_to_writer (B.id b) j)\n      (get_out_buffers ei))\n    t.jit_cache;\n  Hashtbl.clear t.input_to_max_reader;\n  Hashtbl.iter (fun (j, i) idx ->\n    let ei = t.jit_cache.(j) in\n    let outs = get_out_buffers ei in\n    let is_own_output = match ei.bufs.(i) with\n      | None -> false\n      | Some b ->\n          List.exists (fun o -> B.id o = B.id b) outs\n    in\n    if not is_own_output then begin\n      let prev = match Hashtbl.find_opt t.input_to_max_reader idx with\n        | Some n -> n | None -> -1 in\n      if j > prev then Hashtbl.replace t.input_to_max_reader idx j\n    end)\n    t.input_replace\n\nlet create_captured ret jit_cache input_replace extra_view_inputs\n    expected_input_info =\n  let jit_cache = Array.of_list jit_cache in\n  let t = {\n    ret; jit_cache; input_replace; extra_view_inputs; expected_input_info;\n    live_cache = Array.to_list jit_cache;\n    live_replace = input_replace;\n    first_run = true;\n    output_to_writer = Hashtbl.create 32;\n    input_to_max_reader = Hashtbl.create 16;\n  } in\n  init_hazard_tables t;\n  clear_inputs t;\n  t\n\nlet free_intermediates t =\n  let dep = buf_set () in\n  dep.has_none <- true;\n  update_depends dep (Array.to_list t.jit_cache);\n  Hashtbl.iter (fun _ buf ->\n    if B.is_allocated buf then B.deallocate buf)\n    dep.tbl;\n  (* Reset execution state. *)\n  t.live_cache <- Array.to_list t.jit_cache;\n  t.live_replace <- t.input_replace;\n  t.first_run <- true;\n  init_hazard_tables t;\n  clear_inputs t\n\nlet replan_buffers_memory_layout t =\n  (* Snapshot old buffers so we can copy data after remapping. 
*)\n  let old_bufs : (int, B.t) Hashtbl.t = Hashtbl.create 32 in\n  Array.iter (fun ei ->\n    Array.iter (function\n      | None -> ()\n      | Some buf -> Hashtbl.replace old_bufs (B.id buf) buf)\n      ei.bufs)\n    t.jit_cache;\n  (* Run memory planner over all buffers with ignore_checks. *)\n  let all = [Array.to_list t.jit_cache |> List.concat_map (fun ei ->\n    Array.to_list ei.bufs |> List.filter_map Fun.id)] in\n  let assigned =\n    Memory.internal_memory_planner ~ignore_checks:true all in\n  (* Remap jit_cache buffers. *)\n  let new_cache = Array.map (fun ei ->\n    let new_bufs = Array.map (function\n      | None -> None\n      | Some buf ->\n          Some (match Hashtbl.find_opt assigned (B.id buf) with\n                | Some b -> b | None -> buf))\n      ei.bufs in\n    { ei with bufs = new_bufs }) t.jit_cache in\n  (* Copy data from old to new for any reassigned buffer. *)\n  Hashtbl.iter (fun old_id new_buf ->\n    match Hashtbl.find_opt old_bufs old_id with\n    | Some old_buf when B.is_allocated old_buf ->\n        B.ensure_allocated new_buf;\n        let tmp = Bytes.create (B.nbytes old_buf) in\n        B.copyout old_buf tmp;\n        B.copyin new_buf tmp\n    | _ -> ())\n    assigned;\n  (* Reinitialize with the new cache. *)\n  let cache_list = Array.to_list new_cache in\n  Array.blit new_cache 0 t.jit_cache 0 (Array.length new_cache);\n  t.live_cache <- cache_list;\n  t.live_replace <- t.input_replace;\n  t.first_run <- true;\n  init_hazard_tables t;\n  clear_inputs t\n\n(* Execute the captured schedule with fresh input buffers. *)\nlet exec_captured t ~device (input_bufs : B.t array)\n    (var_vals : (string * int) list) =\n  (* Validate inputs match what was captured. 
*)\n  let n_expected = Array.length t.expected_input_info in\n  if Array.length input_bufs <> n_expected then\n    raise (Jit_error (strf \"input count mismatch: expected %d, got %d\"\n      n_expected (Array.length input_bufs)));\n  Array.iteri (fun i info ->\n    let buf = input_bufs.(i) in\n    if B.size buf <> info.ii_size\n       || not (Dtype.equal (B.dtype buf) info.ii_dtype)\n       || B.device buf <> info.ii_device\n    then\n      raise (Jit_error (strf\n        \"input %d mismatch: expected (%d, %s, %s), got (%d, %s, %s)\" i\n        info.ii_size (Dtype.to_string info.ii_dtype) info.ii_device\n        (B.size buf) (Dtype.to_string (B.dtype buf)) (B.device buf))))\n    t.expected_input_info;\n  (* Extend input_bufs with view inputs reconstructed from base buffers. *)\n  let n = Array.length input_bufs in\n  let n_extra = List.length t.extra_view_inputs in\n  let bufs = Array.init (n + n_extra) (fun i ->\n    if i < n then input_bufs.(i) else input_bufs.(0)) in\n  Array.blit input_bufs 0 bufs 0 n;\n  List.iteri (fun k vi ->\n    let base = bufs.(vi.vi_base_idx) in\n    let view = B.view base\n      ~size:vi.vi_size ~dtype:vi.vi_dtype\n      ~offset:(vi.vi_offset * Dtype.itemsize vi.vi_dtype) in\n    B.ensure_allocated view;\n    bufs.(n + k) <- view)\n    t.extra_view_inputs;\n  (* Copy aliased inputs to prevent read-after-write hazards.\n     When an input is also written by a kernel and a later kernel\n     reads the same input, snapshot the input before execution. 
*)\n  for i = 0 to Array.length bufs - 1 do\n    let ib = bufs.(i) in\n    match Hashtbl.find_opt t.output_to_writer (B.id ib) with\n    | None -> ()\n    | Some writer ->\n        let max_reader =\n          match Hashtbl.find_opt t.input_to_max_reader i with\n          | Some n -> n | None -> -1 in\n        if max_reader >= writer then begin\n          let copy = Device.create_buffer\n            ~size:(B.size ib) ~dtype:(B.dtype ib) device in\n          B.ensure_allocated copy;\n          let tmp = Bytes.create (B.nbytes ib) in\n          B.copyout ib tmp;\n          B.copyin copy tmp;\n          bufs.(i) <- copy\n        end\n  done;\n  (* Assign input buffers into their live cache slots. *)\n  Hashtbl.iter (fun (j, i) idx ->\n    (List.nth t.live_cache j).bufs.(i) <- Some bufs.(idx))\n    t.live_replace;\n  (* First run: allocate intermediates and try graph batching. *)\n  if t.first_run then begin\n    Array.iter (fun ei ->\n      Array.iter (function\n        | Some buf -> B.ensure_allocated buf\n        | None -> ())\n        ei.bufs)\n      t.jit_cache;\n    if jit_level < 2 then begin\n      (* Build valid positions from the capture-time input_replace. *)\n      let orig_valid : (int, int list) Hashtbl.t = Hashtbl.create 32 in\n      Hashtbl.iter (fun (j, i) _ ->\n        let uid = t.jit_cache.(j).uid in\n        let prev = match Hashtbl.find_opt orig_valid uid with\n          | Some l -> l | None -> [] in\n        if not (List.mem i prev) then\n          Hashtbl.replace orig_valid uid (i :: prev))\n        t.input_replace;\n      t.live_cache <- apply_graph_to_jit (Array.to_list t.jit_cache)\n        bufs var_vals ~orig_valid_positions:orig_valid\n        ~max_batch_size:jit_batch_size ();\n      (* Recompute input_replace: graph items have all positions valid,\n         non-graph items keep their original valid positions. 
*)\n      let valid : (int, int list) Hashtbl.t = Hashtbl.create 32 in\n      List.iter (fun ei ->\n        let positions = match ei.prg with\n          | Graph _ -> List.init (Array.length ei.bufs) Fun.id\n          | _ ->\n              match Hashtbl.find_opt orig_valid ei.uid with\n              | Some l -> l | None -> [] in\n        Hashtbl.replace valid ei.uid positions)\n        t.live_cache;\n      t.live_replace <-\n        get_input_replace t.live_cache bufs ~orig_valid_positions:valid ()\n    end;\n    t.first_run <- false\n  end;\n  if debug >= 1 && List.length t.live_cache >= 10 then\n    Printf.eprintf \"jit execs %d kernels\\n%!\" (List.length t.live_cache);\n  List.iter (fun ei -> run_ei ei var_vals ~jit:true) t.live_cache;\n  clear_inputs t;\n  t.ret\n\n(* Capture state *)\n\n(* Non-empty during JIT capture.  The schedule machinery should call\n   [add_linear] to record each linear into the active capture. *)\nlet capturing : T.t list ref option ref = ref None\n\nlet is_capturing () = Option.is_some !capturing\n\nlet add_linear linear =\n  match !capturing with\n  | None -> failwith \"add_linear: not inside a JIT capture\"\n  | Some linears -> linears := linear :: !linears\n\n(* TinyJit *)\n\ntype 'a tiny_jit = {\n  fxn : (B.t array -> (string * int) list -> 'a) option;\n  device : Device.t;\n  get_program : Kernel.t -> Program_spec.t;\n  prune : bool;\n  optimize : bool;\n  mutable captured : 'a captured_jit option;\n  mutable cnt : int;\n}\n\nlet captured t = t.captured\nlet jit_cache t = t.jit_cache\n\nlet create ~device ~get_program ?fxn ?captured\n    ?(prune = false) ?(optimize = false) () =\n  let cnt = if fxn = None then 2 else 0 in\n  { fxn; device; get_program; prune; optimize; captured; cnt }\n\nlet reset t =\n  if t.fxn = None then invalid_arg \"can't reset without function\";\n  t.cnt <- 0;\n  t.captured <- None\n\nlet call t (input_bufs : B.t array)\n    (var_vals : (string * int) list)\n    ~(buffers : T.t -> B.t option) =\n  let ret 
=\n    if jit_level = 0 || t.cnt = 0 then begin\n      (* Warmup: execute eagerly. *)\n      let fxn = Option.get t.fxn in\n      fxn input_bufs var_vals\n    end\n    else if t.cnt = 1 then begin\n      (* Capture: record the computation schedule. *)\n      let fxn = Option.get t.fxn in\n      if is_capturing () then\n        raise (Jit_error \"nested TinyJit is not supported\");\n      let linears = ref [] in\n      capturing := Some linears;\n      let ret = Fun.protect\n        ~finally:(fun () -> capturing := None)\n        (fun () -> fxn input_bufs var_vals) in\n      let linears = List.rev !linears in\n      if linears = [] then raise (Jit_error \"didn't JIT anything!\");\n      if debug >= 1 then\n        Printf.eprintf \"JIT captured %d linears with %d inputs\\n%!\"\n          (List.length linears) (Array.length input_bufs);\n      (* Combine captured linears into a single schedule. *)\n      let linear = T.linear (List.concat_map (fun l ->\n        match T.view l with\n        | Linear { srcs } -> srcs\n        | _ -> [l]) linears) in\n      (* Convert to exec items via schedule + lower. *)\n      let realize_eis =\n        Schedule.linear_to_schedule linear ~buffers in\n      let jit_cache = List.map\n        (lower_realize_ei ~device:t.device ~get_program:t.get_program)\n        realize_eis in\n      (* Track view inputs: sub-buffers whose base is an input. 
*)\n      let extra_views = ref [] in\n      let all_bufs = ref (Array.to_list input_bufs) in\n      List.iter (fun ei ->\n        Array.iter (fun b ->\n          match b with\n          | None -> ()\n          | Some buf ->\n              let base = B.base buf in\n              if B.id buf <> B.id base then begin\n                let base_idx = ref (-1) in\n                List.iteri (fun k ib ->\n                  if B.id ib = B.id base then base_idx := k)\n                  !all_bufs;\n                if !base_idx >= 0 then begin\n                  all_bufs := !all_bufs @ [buf];\n                  extra_views :=\n                    { vi_base_idx = !base_idx;\n                      vi_offset = B.offset buf;\n                      vi_device = B.device buf;\n                      vi_size = B.size buf;\n                      vi_dtype = B.dtype buf } :: !extra_views\n                end\n              end)\n          ei.bufs)\n        jit_cache;\n      (* Prune independent kernels (optional). *)\n      let jit_cache =\n        if t.prune then begin\n          let dep = buf_set () in\n          Array.iter (buf_set_add dep) input_bufs;\n          update_depends dep jit_cache;\n          let pruned, onetime = List.partition (fun ei ->\n            List.exists (fun b -> Hashtbl.mem dep.tbl (B.id b))\n              (get_out_buffers ei))\n            jit_cache in\n          if debug >= 1 then\n            Printf.eprintf \"pruned from %d -> %d kernels\\n%!\"\n              (List.length jit_cache) (List.length pruned);\n          (* Synchronize devices before running onetime kernels. 
*)\n          let seen_devs = Hashtbl.create 4 in\n          List.iter (fun ei ->\n            Array.iter (function\n              | None -> ()\n              | Some buf ->\n                  let dname = B.device buf in\n                  if not (Hashtbl.mem seen_devs dname) then begin\n                    Hashtbl.replace seen_devs dname ();\n                    Device.synchronize t.device\n                  end)\n              ei.bufs)\n            onetime;\n          (* Run onetime kernels now; they won't be replayed. *)\n          List.iter (fun ei -> run_ei ei var_vals ~jit:true) onetime;\n          pruned\n        end else jit_cache\n      in\n      (* Memory planning. *)\n      let jit_cache = plan_jit_memory jit_cache in\n      let input_arr = Array.of_list !all_bufs in\n      let input_replace = get_input_replace jit_cache input_arr () in\n      if debug >= 1 then begin\n        let n_unique =\n          let s = Hashtbl.create 16 in\n          Hashtbl.iter (fun _ v -> Hashtbl.replace s v ()) input_replace;\n          Hashtbl.length s in\n        if n_unique <> Array.length input_bufs then\n          Printf.eprintf \"WARNING: some input tensors not found\\n%!\"\n      end;\n      (* Execute the schedule. *)\n      List.iter (fun ei -> run_ei ei var_vals ~jit:false) jit_cache;\n      (* Record input shapes for validation on subsequent calls. *)\n      let expected_input_info = Array.map (fun buf ->\n        { ii_size = B.size buf; ii_dtype = B.dtype buf;\n          ii_device = B.device buf })\n        input_bufs in\n      t.captured <- Some (create_captured ret jit_cache input_replace\n        (List.rev !extra_views) expected_input_info);\n      if t.optimize then\n        replan_buffers_memory_layout (Option.get t.captured);\n      ret\n    end\n    else begin\n      (* Exec: replay the captured schedule. 
*)\n      let captured = Option.get t.captured in\n      exec_captured captured ~device:t.device input_bufs var_vals\n    end\n  in\n  t.cnt <- t.cnt + 1;\n  ret\n"
  },
  {
    "path": "packages/tolk/lib/engine/jit.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\n(** JIT compilation and replay.\n\n    A {e JIT} ({!Tiny_jit}) wraps a function and transparently captures\n    its computation schedule on the second call, then replays it on all\n    subsequent calls.  Three phases:\n\n    {ul\n    {- {e Warmup} (cnt=0): execute eagerly.}\n    {- {e Capture} (cnt=1): record the schedule, compile kernels, plan\n       memory, execute, and store the result as a {!Captured_jit}.}\n    {- {e Exec} (cnt>=2): validate inputs, substitute fresh buffers,\n       and replay the compiled schedule.}}\n\n    On the first replay, the schedule may be condensed into\n    {!Graph_runner} executors when the device supports batched\n    dispatch.\n\n    See also {!Realize} for the underlying runner and exec-item\n    types. *)\n\n(** {1:exceptions Exceptions} *)\n\nexception Graph_exn of string\n(** Raised when graph batching fails for a batch of kernels.\n    The string describes why (e.g. too few kernels, unsupported\n    device). *)\n\nexception Jit_error of string\n(** Raised for JIT-specific errors: nested capture, empty capture,\n    input mismatch on replay. *)\n\n(** {1:types Types} *)\n\n(** Runner kind.\n\n    Discriminates the runner attached to an {!exec_item} so that\n    JIT internals can dispatch on runner kind (compiled kernel vs\n    buffer copy vs graph batch) without runtime type introspection. *)\ntype prg =\n  | Compiled of Realize.Compiled_runner.t\n      (** Compiled kernel.  Carries the full {!Program_spec.t} via\n          {!Realize.Compiled_runner.p}. *)\n  | View_op of Realize.Runner.t\n      (** Buffer view (zero-copy reshape). 
*)\n  | Buffer_copy of Realize.Runner.t\n      (** Host-bounce buffer copy. *)\n  | Buffer_xfer of Realize.Runner.t\n      (** Device-to-device transfer. *)\n  | Enc_dec of Realize.Runner.t\n      (** Hardware encode/decode. *)\n  | Graph of graph_runner\n      (** Batched graph executor. *)\n\n(** Execution item with mutable buffer slots.\n\n    Buffer slots are stored as an array so that input substitution\n    on replay is O(1).  The {!uid} field provides stable identity\n    across list rebuilds (e.g. after graph batching). *)\nand exec_item = {\n  uid : int;\n  bufs : Device.Buffer.t option array;\n  prg : prg;\n  fixedvars : string list;\n      (** Variable names bound at schedule time.  These are excluded\n          from runtime substitution in the {!Graph_runner}. *)\n}\n\n(** Graph runner.\n\n    Batches multiple kernels for accelerated dispatch on devices\n    that support graph APIs (e.g. CUDA graphs, Metal command\n    buffers).  Precomputes replacement tables for variables and\n    launch dimensions so the device graph only needs to update the\n    values that actually change between calls.\n\n    Device-specific graph implementations construct this via\n    {!create_graph_runner} and override the runner's call function\n    to perform the actual graph dispatch. *)\nand graph_runner = {\n  gr_cache : exec_item list;\n      (** Exec items in this batch (kept alive for the graph). *)\n  gr_input_replace : ((int * int), int) Hashtbl.t;\n      (** [(j, i) -> k]: buffer slot [i] of cache entry [j] is\n          input buffer [k]. *)\n  gr_var_replace : (int, (int * int) list) Hashtbl.t;\n      (** [j -> \\[(prog_var_idx, global_var_idx); ...\\]]: for\n          cache entry [j], which program variables need runtime\n          substitution and their index into {!field-gr_vars}. 
*)\n  gr_dims_replace : (int, int option * int option) Hashtbl.t;\n      (** [j -> (global_sym_idx, local_sym_idx)]: for cache entry\n          [j], indices into {!field-gr_sym_dims} for symbolic\n          launch dimensions.  [None] means the dimension is\n          constant. *)\n  gr_dims_base : (int, int array * int array) Hashtbl.t;\n      (** [j -> (global, local)]: concrete base launch dimensions\n          for cache entry [j], used as fallback for non-symbolic\n          dimensions. *)\n  gr_vars : string array;\n      (** Sorted unique variable names across all kernels. *)\n  gr_sym_dims : Tolk_ir.Kernel.t array list;\n      (** Unique symbolic launch dimension vectors.  Evaluated\n          via {!Tolk_ir.Kernel.sym_infer} at dispatch time. *)\n  gr_w_dep : (int, (int * int * int) list) Hashtbl.t;\n      (** Write dependency map for suballocated buffers.  Keyed\n          by base buffer id; values are [(start, end, dep)]\n          interval triples.  Populated by {!access_resources}. *)\n  gr_r_dep : (int, (int * int * int) list) Hashtbl.t;\n      (** Read dependency map.  Same structure as\n          {!field-gr_w_dep}. *)\n  gr_runner : Realize.Runner.t;\n      (** Underlying runner for dispatch.  Device graph\n          implementations provide the call function. *)\n}\n\n(** View input descriptor.\n\n    Records a sub-buffer relationship so that view inputs can be\n    reconstructed from base input buffers on every replay call. *)\ntype view_input = {\n  vi_base_idx : int;  (** Index of the base buffer in the input array. *)\n  vi_offset : int;  (** Byte offset from the base. *)\n  vi_device : string;  (** Device name. *)\n  vi_size : int;  (** Element count. *)\n  vi_dtype : Tolk_ir.Dtype.t;  (** Element type. *)\n}\n\n(** Input validation descriptor.\n\n    Captured at the end of the capture phase and checked on every\n    replay call to ensure inputs have not changed shape, dtype, or\n    device. 
*)\ntype input_info = {\n  ii_size : int;  (** Element count. *)\n  ii_dtype : Tolk_ir.Dtype.t;  (** Element type. *)\n  ii_device : string;  (** Device name. *)\n}\n\n(** {1:exec_items Exec items} *)\n\nval runner_of_prg : prg -> Realize.Runner.t\n(** [runner_of_prg prg] is the underlying {!Realize.Runner.t}\n    for [prg], regardless of runner kind. *)\n\nval run_ei :\n  exec_item -> (string * int) list -> jit:bool -> unit\n(** [run_ei ei var_vals ~jit] dispatches [ei] with variable\n    bindings [var_vals].  Buffers are allocated on demand.\n    When [jit] is [true], execution does not wait for\n    completion. *)\n\nval lower_realize_ei :\n  device:Device.t ->\n  get_program:(Tolk_ir.Kernel.t -> Program_spec.t) ->\n  Realize.Exec_item.t ->\n  exec_item\n(** [lower_realize_ei ~device ~get_program rei] compiles [rei]\n    and wraps the result as an {!exec_item} with the appropriate\n    {!prg} variant.\n\n    Kernel ASTs are compiled via {!Realize.get_runner}.\n    Buffer views become {!View_op}; copies become\n    {!Buffer_copy}.\n\n    Raises [Failure] if the AST node is not a supported\n    [Call] variant. *)\n\nval get_out_buffers : exec_item -> Device.Buffer.t list\n(** [get_out_buffers ei] is the list of buffers written by [ei].\n    For compiled kernels, output parameters not also read; for\n    copies, the destination buffer.  Empty for views and graph\n    runners. *)\n\n(** {1:dependencies Buffer dependencies} *)\n\n(** Mutable set of buffers keyed by identity, with an optional\n    [None] sentinel. *)\ntype buf_set = {\n  mutable has_none : bool;\n  tbl : (int, Device.Buffer.t) Hashtbl.t;\n}\n\nval buf_set : unit -> buf_set\n(** [buf_set ()] is a fresh empty set. *)\n\nval buf_set_mem : buf_set -> Device.Buffer.t option -> bool\n(** [buf_set_mem s b] is [true] iff [b] is in [s].  [None]\n    matches the sentinel. *)\n\nval buf_set_add : buf_set -> Device.Buffer.t -> unit\n(** [buf_set_add s b] adds [b] to [s]. 
*)\n\nval update_depends : buf_set -> exec_item list -> unit\n(** [update_depends depends cache] propagates buffer dependencies\n    forward: for each exec item in [cache] whose inputs overlap\n    [depends], the item's output buffers are added to [depends]. *)\n\nval get_input_replace :\n  exec_item list ->\n  Device.Buffer.t array ->\n  ?orig_valid_positions:(int, int list) Hashtbl.t ->\n  unit ->\n  ((int * int), int) Hashtbl.t\n(** [get_input_replace cache input_bufs ?orig_valid_positions ()]\n    maps input buffer positions in [cache].\n\n    Returns a table where key [(j, i)] means buffer slot [i] of\n    cache entry [j] holds input buffer at index [v].\n\n    When [orig_valid_positions] is provided (keyed by\n    {!exec_item.uid}), only positions present in that table are\n    included.  This prevents aliasing bugs when graph batching\n    reuses buffer slots. *)\n\n(** {1:graph_runner Graph runner} *)\n\nval create_graph_runner :\n  exec_item list ->\n  Device.Buffer.t array ->\n  (string * int) list ->\n  ?orig_valid_positions:(int, int list) Hashtbl.t ->\n  unit ->\n  graph_runner\n(** [create_graph_runner cache input_bufs var_vals\n    ?orig_valid_positions ()] builds a graph runner for [cache].\n\n    Precomputes variable and launch-dimension replacement tables\n    from the compiled kernels in [cache].  The base runner's call\n    function is a no-op; device graph implementations should\n    replace it. *)\n\nval updated_vars :\n  graph_runner -> (string * int) list -> (int * int * int) list\n(** [updated_vars gr var_vals] is the list of\n    [(cache_idx, program_var_idx, value)] triples for all\n    variables in [gr] that need runtime substitution given\n    [var_vals]. 
*)\n\nval updated_launch_dims :\n  graph_runner -> (string * int) list ->\n  (int * int array * int array) list\n(** [updated_launch_dims gr var_vals] is the list of\n    [(cache_idx, global, local)] triples for all kernels in [gr]\n    with symbolic launch dimensions, evaluated against\n    [var_vals]. *)\n\nval access_resources :\n  graph_runner ->\n  Device.Buffer.t array ->\n  write:int list ->\n  int ->\n  int list\n(** [access_resources gr bufs ~write new_dep] updates the\n    interval-based read/write dependency maps in [gr] and returns\n    the list of prior dependencies that [bufs] must wait on.\n\n    [write] is the list of buffer indices (into [bufs]) that are\n    written.  [new_dep] is the dependency handle for this\n    dispatch. *)\n\nval supports_exec_item : Device.t list -> exec_item -> bool\n(** [supports_exec_item devs ei] is [true] iff [ei] is a\n    compiled kernel and all devices in [devs] are the same. *)\n\nval multi_supports_exec_item : Device.t list -> exec_item -> bool\n(** [multi_supports_exec_item devs ei] is [true] iff [ei] is a\n    compiled kernel or device transfer and all devices (from\n    [devs] and [ei]'s buffers) share the same backend type. *)\n\n(** {1:graph_batching Graph batching} *)\n\nval apply_graph_to_jit :\n  exec_item list ->\n  Device.Buffer.t array ->\n  (string * int) list ->\n  ?orig_valid_positions:(int, int list) Hashtbl.t ->\n  ?max_batch_size:int ->\n  unit ->\n  exec_item list\n(** [apply_graph_to_jit cache input_bufs var_vals\n    ?orig_valid_positions ?max_batch_size ()] splits [cache]\n    into batches for graph execution.\n\n    Consecutive compatible kernels are condensed into a single\n    {!Graph} exec item when the device supports batched dispatch.\n    The batch size doubles after each successful graph.\n\n    Returns [cache] unchanged when no device graph support is\n    available.\n\n    [max_batch_size] defaults to [0] (unlimited). 
*)\n\n(** {1:memory Memory planning} *)\n\nval plan_jit_memory : exec_item list -> exec_item list\n(** [plan_jit_memory cache] runs the internal memory planner over\n    [cache], returning a new cache with optimized buffer\n    assignments.  Buffers not reassigned by the planner keep\n    their original allocation; reassigned buffers are allocated\n    eagerly. *)\n\n(** {1:captured Captured JIT} *)\n\n(** A captured computation schedule ready for replay.\n\n    Created at the end of the capture phase, a {!captured_jit}\n    holds the compiled schedule, the input-to-buffer mapping, and\n    precomputed read-after-write hazard tables.  On the first\n    replay, graph batching is attempted; subsequent replays reuse\n    the batched schedule. *)\ntype 'a captured_jit\n\nval create_captured :\n  'a ->\n  exec_item list ->\n  ((int * int), int) Hashtbl.t ->\n  view_input list ->\n  input_info array ->\n  'a captured_jit\n(** [create_captured ret cache input_replace views input_info]\n    is a captured JIT holding return value [ret], schedule\n    [cache], input mapping [input_replace], view input\n    descriptors [views], and input validation info [input_info].\n\n    Initializes hazard-detection tables and clears input buffer\n    slots. *)\n\nval clear_inputs : 'a captured_jit -> unit\n(** [clear_inputs t] sets all input buffer slots to [None] so\n    their memory can be freed or reused between calls. *)\n\nval free_intermediates : 'a captured_jit -> unit\n(** [free_intermediates t] deallocates all intermediate buffers\n    reachable from cleared input slots and resets execution\n    state.  The next replay will re-allocate intermediates and\n    re-attempt graph batching. *)\n\nval replan_buffers_memory_layout : 'a captured_jit -> unit\n(** [replan_buffers_memory_layout t] re-runs the memory planner\n    over [t]'s schedule with relaxed checks, remaps buffer\n    assignments, copies data from old to new buffers, and resets\n    execution state. 
*)\n\nval exec_captured :\n  'a captured_jit ->\n  device:Device.t ->\n  Device.Buffer.t array ->\n  (string * int) list ->\n  'a\n(** [exec_captured t ~device input_bufs var_vals] executes the\n    captured schedule with fresh [input_bufs] and [var_vals],\n    returning the captured return value.\n\n    On the first call, intermediates are allocated and graph\n    batching is attempted.  Input buffer slots are cleared after\n    execution.\n\n    Raises {!Jit_error} if [input_bufs] does not match the\n    captured input count, sizes, dtypes, or devices. *)\n\n(** {1:capture Capture state} *)\n\nval is_capturing : unit -> bool\n(** [is_capturing ()] is [true] iff a {!Tiny_jit} capture is\n    in progress. *)\n\nval add_linear : Tolk_ir.Tensor.t -> unit\n(** [add_linear linear] records [linear] into the active capture.\n\n    Raises [Failure] if no capture is in progress. *)\n\n(** {1:tiny_jit TinyJit} *)\n\n(** The JIT wrapper.\n\n    Wraps a function and transparently captures its computation\n    schedule on the second call.  Subsequent calls replay the\n    compiled schedule with fresh input buffers. *)\ntype 'a tiny_jit\n\nval captured : 'a tiny_jit -> 'a captured_jit option\n(** [captured t] is [t]'s captured schedule, or [None] if [t] has\n    not yet completed the capture phase. *)\n\nval jit_cache : 'a captured_jit -> exec_item array\n(** [jit_cache t] is [t]'s compiled schedule.  Buffer slots in\n    these items are updated in-place on each replay. *)\n\nval create :\n  device:Device.t ->\n  get_program:(Tolk_ir.Kernel.t -> Program_spec.t) ->\n  ?fxn:(Device.Buffer.t array -> (string * int) list -> 'a) ->\n  ?captured:'a captured_jit ->\n  ?prune:bool ->\n  ?optimize:bool ->\n  unit ->\n  'a tiny_jit\n(** [create ~device ~get_program ?fxn ?captured ?prune\n    ?optimize ()] is a JIT wrapper.\n\n    Provide either [fxn] (the function to JIT) or [captured]\n    (a pre-captured schedule).  
When [captured] is provided,\n    execution starts at the replay phase (cnt=2).\n\n    {ul\n    {- [prune] removes kernels whose outputs are not reachable\n       from the inputs.  Defaults to [false].}\n    {- [optimize] re-runs the memory planner after capture for\n       tighter allocation.  Defaults to [false].}}\n\n    Raises [Invalid_argument] if neither [fxn] nor [captured]\n    is provided. *)\n\nval reset : 'a tiny_jit -> unit\n(** [reset t] resets [t] to the warmup phase, discarding any\n    captured schedule.\n\n    Raises [Invalid_argument] if [t] was created without a\n    function. *)\n\nval call :\n  'a tiny_jit ->\n  Device.Buffer.t array ->\n  (string * int) list ->\n  buffers:(Tolk_ir.Tensor.t -> Device.Buffer.t option) ->\n  'a\n(** [call t input_bufs var_vals ~buffers] executes [t] with\n    [input_bufs], variable bindings [var_vals], and tensor-to-buffer\n    mapping [buffers].\n\n    {ul\n    {- {e Warmup} (cnt=0): calls the wrapped function eagerly.}\n    {- {e Capture} (cnt=1): calls the function under the capture\n       handler, converts recorded linears to a compiled schedule,\n       runs memory planning, executes, and stores a\n       {!captured_jit}.}\n    {- {e Exec} (cnt>=2): validates inputs against the capture\n       and replays via {!exec_captured}.}}\n\n    [buffers] maps tensor IR nodes to device buffers.  It is used\n    during the capture phase to resolve the schedule; ignored on\n    warmup and replay.\n\n    Raises {!Jit_error} if:\n    {ul\n    {- capture is attempted while another capture is in progress,}\n    {- the capture produces no linears,}\n    {- inputs mismatch on replay (count, size, dtype, or\n       device).}} *)\n"
  },
  {
    "path": "packages/tolk/lib/engine/memory.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\n(* Memory planning.\n\n   Reduces peak memory by reusing buffers whose lifetimes don't overlap.\n   Each schedule step lists the buffers it touches; we compute live ranges per\n   base buffer, then either suballocate from a per-lane TLSF arena (when the\n   device supports offset views) or recycle freed buffers from a pool keyed by\n   (device, dtype, spec, nbytes).\n\n   Copy and compute buffers live in separate lanes so freeing a copy buffer\n   never forces a dependency between the copy and compute queues. *)\n\nmodule B = Device.Buffer\n\nlet debug = Helpers.getenv \"DEBUG\" 0\nlet no_memory_planner = Helpers.getenv \"NO_MEMORY_PLANNER\" 0 <> 0\n\nlet round_up n align = (n + align - 1) / align * align\nlet blk = 0x1000\n\n(* Lane key: (device, is_copy_lane). *)\ntype lane_key = string * int\n\n(* [buffers] is a list of per-step buffer lists (one per schedule item).\n   [copies] is the (dst, src) pairs from copy operations. Returns a hashtable\n   mapping buffer ids to replacement buffers. Buffers absent from the table\n   keep their original allocation. 
*)\nlet internal_memory_planner ?(copies = []) ?(ignore_checks = false)\n    ?(debug_prefix = \"\") buffers =\n  let assigned = Hashtbl.create 64 in\n  if no_memory_planner then assigned\n  else begin\n\n    (* Live ranges *)\n\n    let first = Hashtbl.create 64 in\n    let last = Hashtbl.create 64 in\n    let bases = Hashtbl.create 64 in\n    let to_opt = Hashtbl.create 64 in\n    List.iteri (fun step bufs ->\n      List.iter (fun buf ->\n        let base = B.base buf in\n        let bid = B.base_id buf in\n        if ignore_checks\n           || not (B.is_allocated buf || B.is_allocated base\n                  || B.uop_refcount buf > 0)\n        then begin\n          if not (Hashtbl.mem first bid) then begin\n            Hashtbl.replace first bid step;\n            Hashtbl.replace bases bid base\n          end;\n          Hashtbl.replace last bid step;\n          Hashtbl.replace to_opt (B.id buf) buf\n        end) bufs) buffers;\n\n    (* Lane separation *)\n\n    (* Copy buffers are held for an extra lifetime so their free is deferred\n       past any compute work sitting between two copy steps. *)\n    let copy_set = Hashtbl.create 16 in\n    List.iter (fun (dst, src) ->\n      Hashtbl.replace copy_set (B.base_id dst) ();\n      Hashtbl.replace copy_set (B.base_id src) ()) copies;\n    let is_copy bid = Hashtbl.mem copy_set bid in\n    let lane_key bid =\n      (B.device (Hashtbl.find bases bid), if is_copy bid then 1 else 0) in\n    let hold bid =\n      if is_copy bid then Hashtbl.find last bid - Hashtbl.find first bid + 1\n      else 0 in\n\n    (* Sorted alloc/free timeline *)\n\n    (* Encoding: 0 = free, 1 = alloc. Sorting places frees before allocs at\n       the same step so recycled buffers are immediately available. 
*)\n    let events =\n      Hashtbl.fold (fun bid _ acc ->\n        ((Hashtbl.find first bid, 1), bid)\n        :: ((Hashtbl.find last bid + 1 + hold bid, 0), bid)\n        :: acc)\n        bases []\n      |> List.sort (fun (k1, _) (k2, _) -> compare k1 k2)\n    in\n\n    (* Allocate or reuse *)\n\n    let total_memory =\n      Hashtbl.fold (fun bid _ acc ->\n        acc + round_up (B.nbytes (Hashtbl.find bases bid)) blk)\n        bases 0\n      * 2 (* 2x headroom for fragmentation *)\n    in\n    (* Per-lane TLSF arena for devices that support suballocation;\n       maps lane_key to (high_water_mark, allocator). *)\n    let global_planner : (lane_key, int * Tlsf.t) Hashtbl.t =\n      Hashtbl.create 8 in\n    let get_planner lk =\n      match Hashtbl.find_opt global_planner lk with\n      | Some p -> p\n      | None ->\n          let p =\n            (0, Tlsf.create ~size:total_memory ~block_size:blk\n                  ~lv2_cnt:32 ()) in\n          Hashtbl.replace global_planner lk p; p in\n    (* One template buffer per lane to extract the allocator from. 
*)\n    let lane_template : (lane_key, B.t) Hashtbl.t = Hashtbl.create 8 in\n    let replace : (int, B.t option * int option) Hashtbl.t =\n      Hashtbl.create 64 in\n    let pool = Hashtbl.create 64 in\n    List.iter (fun ((_, is_alloc), bid) ->\n      let base = Hashtbl.find bases bid in\n      if B.supports_offset base then begin\n        let lk = lane_key bid in\n        if not (Hashtbl.mem lane_template lk) then\n          Hashtbl.replace lane_template lk base;\n        let max_sz, tlsf = get_planner lk in\n        let off =\n          if is_alloc = 1 then begin\n            let off =\n              Tlsf.alloc tlsf (round_up (B.nbytes base) blk) () in\n            Hashtbl.replace replace bid (None, Some off);\n            off\n          end else begin\n            let off = match snd (Hashtbl.find replace bid) with\n              | Some o -> o | None -> assert false in\n            Tlsf.free tlsf off;\n            off\n          end\n        in\n        Hashtbl.replace global_planner lk\n          (max max_sz (off + B.nbytes base), tlsf)\n      end else begin\n        let key =\n          (lane_key bid, B.dtype base, B.spec base, B.nbytes base) in\n        if is_alloc = 1 then begin\n          let repl = match Hashtbl.find_opt pool key with\n            | Some (b :: rest) -> Hashtbl.replace pool key rest; b\n            | _ -> base in\n          Hashtbl.replace replace bid (Some repl, None)\n        end else begin\n          let repl =\n            match fst (Hashtbl.find replace bid) with\n            | Some b -> b | None -> assert false in\n          let freed =\n            match Hashtbl.find_opt pool key with\n            | Some l -> l | None -> [] in\n          Hashtbl.replace pool key (repl :: freed)\n        end\n      end) events;\n\n    (* Global arena buffers *)\n\n    (* One shared int8 buffer per lane for all suballocated regions. 
*)\n    let global_bufs : (lane_key, B.t) Hashtbl.t = Hashtbl.create 8 in\n    Hashtbl.iter (fun lk (sz, _) ->\n      if sz > 0 then begin\n        let template = Hashtbl.find lane_template lk in\n        let gb =\n          B.create ~device:(fst lk) ~size:(round_up sz blk)\n            ~dtype:Tolk_ir.Dtype.int8 (B.allocator template) in\n        Hashtbl.replace global_bufs lk gb\n      end) global_planner;\n\n    (* Resolve suballocated entries: None base → global arena buffer. *)\n    let resolved = Hashtbl.create 64 in\n    Hashtbl.iter (fun bid (repl_opt, off) ->\n      let base_buf = match repl_opt with\n        | Some b -> b\n        | None -> Hashtbl.find global_bufs (lane_key bid) in\n      Hashtbl.replace resolved bid (base_buf, off)) replace;\n\n    (* Build replacement map *)\n\n    (* Base buffers that got a different physical buffer. *)\n    Hashtbl.iter (fun bid (repl, off) ->\n      let base = Hashtbl.find bases bid in\n      if B.id base <> B.id repl then\n        Hashtbl.replace assigned (B.id base)\n          (match off with\n           | None -> repl\n           | Some off ->\n               B.view repl ~size:(B.size base)\n                 ~dtype:(B.dtype base) ~offset:off))\n      resolved;\n\n    (* Sub-buffers: rebase onto the (possibly replaced) parent. 
*)\n    Hashtbl.iter (fun _ buf ->\n      if B.id buf <> B.base_id buf then begin\n        let base = B.base buf in\n        let pbuf =\n          match Hashtbl.find_opt assigned (B.id base) with\n          | Some b -> b | None -> base in\n        Hashtbl.replace assigned (B.id buf)\n          (B.view (B.base pbuf)\n             ~size:(B.size buf) ~dtype:(B.dtype buf)\n             ~offset:(B.offset pbuf + B.offset buf))\n      end) to_opt;\n\n    (* Debug *)\n\n    if debug >= 1 then begin\n      let seen_k = Hashtbl.create 16 in\n      let seen_v = Hashtbl.create 16 in\n      let omem = ref 0 and nmem = ref 0 in\n      let nk = ref 0 and nv = ref 0 in\n      Hashtbl.iter (fun buf_id new_buf ->\n        (match Hashtbl.find_opt bases buf_id with\n         | Some orig when not (Hashtbl.mem seen_k buf_id) ->\n             Hashtbl.replace seen_k buf_id ();\n             omem := !omem + B.nbytes orig;\n             incr nk\n         | _ -> ());\n        let vid = B.base_id new_buf in\n        if B.id new_buf = vid && not (Hashtbl.mem seen_v vid) then begin\n          Hashtbl.replace seen_v vid ();\n          nmem := !nmem + B.nbytes new_buf;\n          incr nv\n        end) assigned;\n      Hashtbl.iter (fun _ gb ->\n        let vid = B.id gb in\n        if not (Hashtbl.mem seen_v vid) then begin\n          Hashtbl.replace seen_v vid ();\n          nmem := !nmem + B.nbytes gb;\n          incr nv\n        end) global_bufs;\n      if !omem <> !nmem then\n        Printf.printf\n          \"%smemory reduced from %.2f MB -> %.2f MB, %d -> %d bufs\\n\"\n          debug_prefix\n          (Float.of_int !omem /. 1e6) (Float.of_int !nmem /. 
1e6) !nk !nv\n    end;\n\n    assigned\n  end\n\nlet memory_planner schedule =\n  let buffers =\n    List.map (fun si ->\n      List.filter_map Fun.id (Realize.Exec_item.bufs si))\n      schedule\n  in\n  let copies =\n    List.filter_map (fun si ->\n      let is_copy = match Tolk_ir.Tensor.view (Realize.Exec_item.ast si) with\n        | Tolk_ir.Tensor.Call { callee = Ref r; _ } ->\n            (match Tolk_ir.Tensor.view r with\n             | Tolk_ir.Tensor.Copy _ -> true | _ -> false)\n        | _ -> false in\n      if is_copy then\n        match Realize.Exec_item.bufs si with\n        | Some dst :: Some src :: _ -> Some (dst, src)\n        | _ -> None\n      else None)\n      schedule\n  in\n  let assigned = internal_memory_planner ~copies buffers in\n  List.map (fun si ->\n    let new_bufs =\n      List.map (function\n        | None -> None\n        | Some buf ->\n            Some (match Hashtbl.find_opt assigned (B.id buf) with\n                  | Some repl -> repl | None -> buf))\n        (Realize.Exec_item.bufs si)\n    in\n    Realize.Exec_item.make\n      ~ast:(Realize.Exec_item.ast si)\n      ~bufs:new_bufs\n      ~var_vals:(Realize.Exec_item.var_vals si) ())\n    schedule\n"
  },
  {
    "path": "packages/tolk/lib/engine/memory.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\n(** Memory planning.\n\n    Reduces peak memory by reusing buffers whose lifetimes don't overlap.\n    Each schedule step lists the buffers it touches; the planner computes\n    live ranges per base buffer, then either suballocates from a per-lane\n    {!Tlsf} arena (when the device supports offset views) or recycles freed\n    buffers from a pool keyed by (device, dtype, spec, nbytes).\n\n    Copy and compute buffers are separated into distinct lanes so that\n    freeing a copy buffer never forces a dependency between the copy and\n    compute queues.\n\n    Disabled when the [NO_MEMORY_PLANNER] environment variable is non-zero. *)\n\n(** {1:planner Core planner} *)\n\nval internal_memory_planner :\n  ?copies:(Device.Buffer.t * Device.Buffer.t) list ->\n  ?ignore_checks:bool ->\n  ?debug_prefix:string ->\n  Device.Buffer.t list list ->\n  (int, Device.Buffer.t) Hashtbl.t\n(** [internal_memory_planner ?copies ?ignore_checks ?debug_prefix buffers]\n    is a buffer replacement table that minimises peak memory.\n\n    [buffers] is a list of per-step buffer lists (one inner list per\n    schedule item). [copies] is the [(dst, src)] pairs from copy\n    operations; defaults to [\\[\\]]. Buffers involved in copies are\n    placed in a separate lane and held longer to avoid cross-queue\n    dependencies.\n\n    The returned hashtable maps {!Device.Buffer.id} to a replacement\n    {!Device.Buffer.t}. 
Buffers absent from the table keep their\n    original allocation.\n\n    When [ignore_checks] is [false] (the default), buffers that are\n    already allocated or have a positive reference count are skipped.\n\n    When [DEBUG >= 1], prints memory reduction statistics to stdout,\n    prefixed by [debug_prefix] (defaults to [\"\"]). *)\n\n(** {1:schedule Schedule integration} *)\n\nval memory_planner :\n  Realize.Exec_item.t list -> Realize.Exec_item.t list\n(** [memory_planner schedule] applies {!internal_memory_planner} to\n    [schedule] and returns a new schedule with buffers replaced.\n\n    Copy operations are detected by matching on\n    {!Tolk_ir.Tensor.Copy} nodes and their first two buffer\n    arguments. Each exec item is reconstructed with replaced buffers;\n    the AST and variable bindings are preserved. *)\n"
  },
  {
    "path": "packages/tolk/lib/engine/realize.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\nlet strf = Printf.sprintf\n\n(* Environment *)\n\nlet debug = Helpers.getenv \"DEBUG\" 0\n\n(* Runners *)\n\nmodule Runner = struct\n  type t = {\n    display_name : string;\n    device : Device.t;\n    estimates : Program_spec.Estimates.t;\n    mutable first_run : bool;\n    call :\n      Device.Buffer.t list -> (string * int) list ->\n      wait:bool -> timeout:int option -> float option;\n  }\n\n  let make ~display_name ~device ?(estimates = Program_spec.Estimates.zero) call =\n    { display_name; device; estimates; first_run = true; call }\n\n  let dev t = t.device\n  let display_name t = t.display_name\n  let estimates t = t.estimates\n\n  let call t bufs var_vals ~wait ~timeout =\n    t.call bufs var_vals ~wait ~timeout\n\n  let exec t rawbufs ?(var_vals = []) () =\n    t.call rawbufs var_vals ~wait:false ~timeout:None\nend\n\n(* Local size optimization *)\n\nlet max_workgroup = 1024\n\nlet optimize_local_size ~device (prg : Device.prog) global_size\n    (rawbufs : Device.Buffer.t list) =\n  (* Avoid clobbering output if it also appears as input. 
*)\n  let bufs = match rawbufs with\n    | out :: rest when\n        List.exists (fun b ->\n          Device.Buffer.base_id b = Device.Buffer.base_id out) rest ->\n        let test_out =\n          Device.create_buffer ~size:(Device.Buffer.size out)\n            ~dtype:(Device.Buffer.dtype out) device\n        in\n        Device.Buffer.ensure_allocated test_out;\n        test_out :: rest\n    | _ -> rawbufs\n  in\n  let buf_addrs = Array.of_list (List.map Device.Buffer.addr bufs) in\n  let ndims = Array.length global_size in\n  let powers = [| 1; 2; 4; 8; 16; 32; 64; 128; 256; max_workgroup |] in\n  (* For each dimension, valid local sizes are {sz} ∪ powers that fit. *)\n  let local_dims = Array.init ndims (fun i ->\n    let sz = global_size.(i) in\n    List.filter (fun x -> x <= sz)\n      (List.sort_uniq Int.compare (sz :: Array.to_list powers)))\n  in\n  (* Enumerate all combinations with product ≤ max_workgroup. *)\n  let local_sizes = ref [] in\n  let rec enumerate acc dim =\n    if dim >= ndims then begin\n      let ls = Array.of_list (List.rev acc) in\n      if Array.fold_left ( * ) 1 ls <= max_workgroup then\n        local_sizes := ls :: !local_sizes\n    end else\n      List.iter (fun x -> enumerate (x :: acc) (dim + 1)) local_dims.(dim)\n  in\n  enumerate [] 0;\n  (* Try each size twice, in random order. 
*)\n  let all = Array.of_list (!local_sizes @ !local_sizes) in\n  let n = Array.length all in\n  for i = n - 1 downto 1 do\n    let j = Random.int (i + 1) in\n    let tmp = all.(i) in all.(i) <- all.(j); all.(j) <- tmp\n  done;\n  let best_time = ref infinity in\n  let best_local = ref (Array.make ndims 1) in\n  for k = 0 to n - 1 do\n    let local_size = all.(k) in\n    let global = Array.init ndims (fun i -> global_size.(i) / local_size.(i)) in\n    let tm =\n      try\n        match prg.call buf_addrs ~global ~local:(Some local_size)\n                ~vals:[||] ~wait:true ~timeout:None with\n        | Some t -> t\n        | None -> infinity\n      with _ -> infinity\n    in\n    if tm < !best_time then begin\n      best_time := tm;\n      best_local := local_size\n    end\n  done;\n  if Float.is_infinite !best_time then\n    invalid_arg \"all optimize_local_size exec failed\";\n  !best_local\n\n(* Compiled runner *)\n\nmodule Compiled_runner = struct\n  type t = {\n    runner : Runner.t;\n    p : Program_spec.t;\n    prg : Device.prog;\n  }\n\n  let runtimevars_of_spec p =\n    List.filter_map (fun (i, (v : Program_spec.var)) ->\n      if v.name = \"core_id\" then Some (v.name, i) else None)\n      (List.mapi (fun i v -> (i, v)) (Program_spec.vars p))\n\n  let create ~device ?prg (p : Program_spec.t) =\n    if debug >= 3 && Program_spec.applied_opts p <> [] then\n      Printf.eprintf \"%s\\n%!\"\n        (String.concat \", \"\n           (List.map Tolk_ir.Kernel.Opt.to_string\n              (Program_spec.applied_opts p)));\n    if debug >= 4 then\n      Printf.eprintf \"%s\\n%!\" (Program_spec.src p);\n    let p, lib = match Program_spec.lib p with\n      | Some lib -> p, lib\n      | None ->\n          let comp = match Renderer.compiler (Device.renderer device) with\n            | Some c -> c\n            | None -> invalid_arg \"no compiler for device\"\n          in\n          let lib = Compiler.compile_cached comp (Program_spec.src p) in\n          
Program_spec.with_lib lib p, lib\n    in\n    let prg = match prg with\n      | Some h -> h\n      | None ->\n          Device.runtime device (Program_spec.name p) lib\n            ~runtimevars:(runtimevars_of_spec p)\n    in\n    let vars = Program_spec.vars p in\n    let call bufs var_vals ~wait ~timeout =\n      let global, local = Program_spec.launch_dims p var_vals in\n      let vals = Array.of_list (List.map (fun (v : Program_spec.var) ->\n        match List.assoc_opt v.name var_vals with\n        | Some n -> Int64.of_int n | None -> 0L) vars)\n      in\n      let buf_addrs = Array.of_list (List.map Device.Buffer.addr bufs) in\n      prg.call buf_addrs ~global ~local ~vals ~wait ~timeout\n    in\n    let runner =\n      Runner.make ~display_name:(Program_spec.name p)\n        ~device ~estimates:(Program_spec.estimates p) call\n    in\n    { runner; p; prg }\n\n  let p t = t.p\n  let runner t = t.runner\n\n  let call t bufs var_vals ~wait ~timeout =\n    t.runner.call bufs var_vals ~wait ~timeout\nend\n\n(* View op *)\n\nlet view_op ~device (buf : Device.Buffer.t) =\n  let display_name =\n    strf \"view %8d @ %-10d\"\n      (Device.Buffer.nbytes buf) (Device.Buffer.offset buf)\n  in\n  let call rawbufs _var_vals ~wait:_ ~timeout:_ =\n    (match rawbufs with\n     | [ dst; src ] ->\n         if Device.Buffer.base_id dst <> Device.Buffer.base_id src then\n           invalid_arg \"view: dst must share base with src\"\n     | _ -> invalid_arg \"view: expected exactly two buffers\");\n    None\n  in\n  Runner.make ~display_name ~device call\n\n(* Buffer copy *)\n\nlet buffer_copy ~device ~total_sz ~dest_device ~src_device =\n  let sz =\n    if total_sz >= 1_000_000\n    then strf \"%7.2fM\" (Float.of_int total_sz /. 
1e6)\n    else strf \"%8d\" total_sz\n  in\n  let dest_short = String.sub dest_device 0 (min 7 (String.length dest_device)) in\n  let src_short = String.sub src_device 0 (min 7 (String.length src_device)) in\n  let display_name = strf \"copy %s, %7s <- %-7s\" sz dest_short src_short in\n  let call rawbufs _var_vals ~wait ~timeout:_ =\n    match rawbufs with\n    | [ dest; src ] ->\n        if Device.Buffer.size dest <> Device.Buffer.size src\n           || not (Tolk_ir.Dtype.equal\n                     (Device.Buffer.dtype dest) (Device.Buffer.dtype src))\n        then invalid_arg \"buffer copy: size or dtype mismatch\";\n        let st = Unix.gettimeofday () in\n        let tmp = Bytes.create (Device.Buffer.nbytes src) in\n        Device.Buffer.copyout src tmp;\n        Device.Buffer.copyin dest tmp;\n        if wait then begin\n          Device.synchronize device;\n          Some (Unix.gettimeofday () -. st)\n        end else None\n    | _ -> invalid_arg \"buffer copy: expected exactly two buffers\"\n  in\n  let estimates = Program_spec.Estimates.{\n    ops = Int 0; lds = Int total_sz; mem = Int total_sz } in\n  Runner.make ~display_name ~device ~estimates call\n\n(* XXX: BufferXfer — device-to-device transfer via allocator._transfer.\n   Implement when multi-device support lands. *)\n\n(* XXX: EncDec — hardware encode/decode (HEVC).  Out of scope. 
*)\n\n(* Method cache *)\n\nlet method_cache : (string, Compiled_runner.t) Hashtbl.t = Hashtbl.create 64\n\nlet cache_key ~device ~ast_key ~base =\n  let compiler_name = match Renderer.compiler (Device.renderer device) with\n    | Some c -> Compiler.name c | None -> \"\" in\n  strf \"%s:%s:%s:%b\" (Device.name device) compiler_name ast_key base\n\nlet get_runner ~device ~get_program (ast : Tolk_ir.Kernel.t) =\n  let ast_key =\n    Digest.to_hex (Digest.string (Marshal.to_string ast [])) in\n  let ckey = cache_key ~device ~ast_key ~base:false in\n  match Hashtbl.find_opt method_cache ckey with\n  | Some car -> car\n  | None ->\n      let bkey = cache_key ~device ~ast_key ~base:true in\n      match Hashtbl.find_opt method_cache bkey with\n      | Some bcar ->\n          let car = Compiled_runner.create ~device (Compiled_runner.p bcar) in\n          Hashtbl.replace method_cache ckey car;\n          car\n      | None ->\n          let p = get_program ast in\n          let car = Compiled_runner.create ~device p in\n          Hashtbl.replace method_cache ckey car;\n          Hashtbl.replace method_cache bkey car;\n          car\n\n(* Resolve a scheduled Call node to a runner by dispatching on its\n   callee: inline kernel ASTs are compiled, buffer views and copies\n   are mapped to their respective runners. 
*)\nlet lower_ast ~device ~get_program (ast : Tolk_ir.Tensor.t)\n    (bufs : Device.Buffer.t list) : Runner.t =\n  let module T = Tolk_ir.Tensor in\n  match T.view ast with\n  | Call { callee = Ast kernel; _ } ->\n      Compiled_runner.runner (get_runner ~device ~get_program kernel)\n  | Call { callee = Ref ref_node; _ } -> begin\n      match T.view ref_node with\n      | Buffer_view _ ->\n          view_op ~device (List.hd bufs)\n      | Copy _ ->\n          let dest = List.hd bufs and src = List.nth bufs 1 in\n          buffer_copy ~device\n            ~total_sz:(Device.Buffer.nbytes dest)\n            ~dest_device:(Device.Buffer.device dest)\n            ~src_device:(Device.Buffer.device src)\n      | v ->\n          invalid_arg (Format.asprintf\n            \"lower_ast: unsupported callee %a\" T.pp_view v)\n    end\n  | v ->\n      invalid_arg (Format.asprintf\n        \"lower_ast: expected Call, got %a\" T.pp_view v)\n\n(* Exec item *)\n\nmodule Exec_item = struct\n  type t = {\n    ast : Tolk_ir.Tensor.t;\n    bufs : Device.Buffer.t option list;\n    var_vals : (string * int) list;\n    mutable prg : Runner.t option;\n  }\n\n  let make ~ast ~bufs ?(var_vals = []) ?prg () =\n    { ast; bufs; var_vals; prg }\n\n  let ast t = t.ast\n  let bufs t = t.bufs\n  let var_vals t = t.var_vals\n\n  let lower ~device ~get_program t =\n    if Option.is_some t.prg then t\n    else begin\n      let bufs = List.filter_map Fun.id t.bufs in\n      t.prg <- Some (lower_ast ~device ~get_program t.ast bufs);\n      t\n    end\n\n  let run t ?(var_vals = []) ?(wait = false) ?(do_update_stats = true) () =\n    let prg = match t.prg with\n      | Some p -> p\n      | None -> invalid_arg \"exec item not lowered\"\n    in\n    let merged = t.var_vals @ var_vals in\n    let bufs = List.filter_map (fun b ->\n      match b with\n      | Some buf -> Device.Buffer.ensure_allocated buf; Some buf\n      | None -> None)\n      t.bufs\n    in\n    let et = prg.call bufs merged\n        
~wait:(wait || debug >= 2) ~timeout:None in\n    if do_update_stats then\n      prg.first_run <- false;\n    et\nend\n\n(* Run schedule *)\n\nlet run_schedule ~device ~get_program schedule\n    ?(var_vals = []) ?(do_update_stats = true) () =\n  List.iter (fun ei ->\n    let ei = Exec_item.lower ~device ~get_program ei in\n    ignore (Exec_item.run ei ~var_vals ~do_update_stats ()))\n    schedule\n"
  },
  {
    "path": "packages/tolk/lib/engine/realize.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\n(** Schedule execution and kernel dispatch.\n\n    A {e runner} ({!Runner.t}) is the common dispatch interface for\n    all executable operations: compiled kernels, buffer copies, and\n    views. {!Compiled_runner} compiles kernel programs and creates\n    runners. {!Exec_item} pairs a runner with its buffers for\n    scheduled execution. {!run_schedule} executes a list of items\n    in order.\n\n    See also {!Device.prog} for the low-level device dispatch\n    handle. *)\n\n(** {1:runners Runners} *)\n\n(** Common dispatch interface.\n\n    A runner wraps a single dispatchable operation (compiled kernel,\n    buffer copy, view). Dispatch takes a list of buffers and\n    name-keyed variable bindings and optionally returns execution\n    time. *)\nmodule Runner : sig\n  type t\n  (** The type for runners. *)\n\n  val make :\n    display_name:string ->\n    device:Device.t ->\n    ?estimates:Program_spec.Estimates.t ->\n    (Device.Buffer.t list -> (string * int) list ->\n     wait:bool -> timeout:int option -> float option) ->\n    t\n  (** [make ~display_name ~device ?estimates call] is a runner that\n      dispatches via [call].\n\n      [estimates] defaults to {!Program_spec.Estimates.zero}. *)\n\n  val dev : t -> Device.t\n  (** [dev t] is [t]'s device. *)\n\n  val display_name : t -> string\n  (** [display_name t] is [t]'s human-readable name for debug\n      output. *)\n\n  val estimates : t -> Program_spec.Estimates.t\n  (** [estimates t] is [t]'s cost estimates. 
*)\n\n  val call :\n    t -> Device.Buffer.t list -> (string * int) list ->\n    wait:bool -> timeout:int option -> float option\n  (** [call t bufs var_vals ~wait ~timeout] dispatches the operation\n      on [bufs] with variable bindings [var_vals].\n\n      Returns [Some time] when [wait] is [true] and the backend\n      supports timing, [None] otherwise. *)\n\n  val exec :\n    t -> Device.Buffer.t list -> ?var_vals:(string * int) list ->\n    unit -> float option\n  (** [exec t bufs ?var_vals ()] is {!call} with [~wait:false] and\n      [~timeout:None]. Always returns [None].\n\n      [var_vals] defaults to [[]]. *)\nend\n\n(** {1:local_size Local size optimization} *)\n\nval optimize_local_size :\n  device:Device.t ->\n  Device.prog ->\n  int array ->\n  Device.Buffer.t list ->\n  int array\n(** [optimize_local_size ~device prg global_size rawbufs] finds the\n    local workgroup size that minimises execution time for [prg]\n    with [global_size].\n\n    Enumerates all valid local sizes (each dimension drawn from\n    powers of two up to [1024], total product at most [1024]),\n    tries each twice in random order, and returns the fastest.\n\n    When the first buffer in [rawbufs] also appears later in the\n    list, a temporary buffer is allocated to avoid clobbering\n    output during measurement.\n\n    Raises [Invalid_argument] if every candidate fails. *)\n\n(** {1:compiled_runner Compiled runner} *)\n\n(** Kernel compilation and dispatch.\n\n    A compiled runner wraps a {!Program_spec.t}, compiles it if\n    its {!Program_spec.lib} is [None], creates a {!Device.prog}\n    handle via {!Device.runtime}, and dispatches kernels through\n    it. *)\nmodule Compiled_runner : sig\n  type t\n  (** The type for compiled runners. 
*)\n\n  val create :\n    device:Device.t ->\n    ?prg:Device.prog ->\n    Program_spec.t ->\n    t\n  (** [create ~device ?prg p] is a compiled runner for [p] on\n      [device].\n\n      When {!Program_spec.lib} [p] is [None], the source is\n      compiled via the device's {!Renderer.compiler}.\n\n      [prg] overrides the {!Device.prog} handle. When [None]\n      (default), one is created via {!Device.runtime}.\n\n      Raises [Invalid_argument] if the device has no compiler and\n      [p] has no compiled binary. *)\n\n  val p : t -> Program_spec.t\n  (** [p t] is [t]'s program spec. *)\n\n  val runner : t -> Runner.t\n  (** [runner t] is [t]'s underlying runner. *)\n\n  val call :\n    t -> Device.Buffer.t list -> (string * int) list ->\n    wait:bool -> timeout:int option -> float option\n  (** [call t bufs var_vals ~wait ~timeout] dispatches the kernel\n      on [bufs] with variable bindings [var_vals].\n\n      See {!Runner.call} for the return value semantics. *)\nend\n\n(** {1:view_op View operation} *)\n\nval view_op : device:Device.t -> Device.Buffer.t -> Runner.t\n(** [view_op ~device buf] is a runner that asserts [dst] and [src]\n    share the same base buffer. No data is copied.\n\n    Raises [Invalid_argument] if the buffers do not share a base\n    or if the argument list does not contain exactly two buffers. *)\n\n(** {1:buffer_copy Buffer copy} *)\n\nval buffer_copy :\n  device:Device.t ->\n  total_sz:int ->\n  dest_device:string ->\n  src_device:string ->\n  Runner.t\n(** [buffer_copy ~device ~total_sz ~dest_device ~src_device] is a\n    runner that copies data between buffers via a host-memory\n    bounce. [dest_device] and [src_device] are device names used\n    in the display string.\n\n    Raises [Invalid_argument] if the two buffers differ in size or\n    dtype, or if the argument list does not contain exactly two\n    buffers. 
*)\n\n(** {1:method_cache Method cache} *)\n\nval get_runner :\n  device:Device.t ->\n  get_program:(Tolk_ir.Kernel.t -> Program_spec.t) ->\n  Tolk_ir.Kernel.t ->\n  Compiled_runner.t\n(** [get_runner ~device ~get_program ast] is a compiled runner for\n    [ast] on [device]. Returns a cached runner when available;\n    on a miss, calls [get_program ast] to compile the kernel and\n    caches the result. A base-device entry is shared across\n    device instances with the same compiler and renderer. *)\n\n(** {1:exec_item Execution items} *)\n\n(** A scheduled execution step.\n\n    An exec item pairs an AST reference with buffer arguments and\n    fixed variable bindings. {!lower} resolves the AST to a\n    {!Runner.t}; {!run} dispatches it. *)\nmodule Exec_item : sig\n  type t\n  (** The type for execution items. *)\n\n  val make :\n    ast:Tolk_ir.Tensor.t ->\n    bufs:Device.Buffer.t option list ->\n    ?var_vals:(string * int) list ->\n    ?prg:Runner.t ->\n    unit ->\n    t\n  (** [make ~ast ~bufs ?var_vals ?prg ()] is an exec item.\n\n      [ast] is the tensor graph node that describes the operation\n      (kernel SINK, BUFFER_VIEW, COPY, etc.).\n\n      [var_vals] defaults to [[]]. [prg] defaults to [None]. *)\n\n  val ast : t -> Tolk_ir.Tensor.t\n  (** [ast t] is the tensor graph node. *)\n\n  val bufs : t -> Device.Buffer.t option list\n  (** [bufs t] is the buffer argument list. *)\n\n  val var_vals : t -> (string * int) list\n  (** [var_vals t] is the fixed variable bindings. *)\n\n  val lower :\n    device:Device.t ->\n    get_program:(Tolk_ir.Kernel.t -> Program_spec.t) ->\n    t -> t\n  (** [lower ~device ~get_program t] resolves [t]'s AST to a\n      runner if not already set. Kernel SINKs are compiled via\n      {!get_runner}; BUFFER_VIEWs become {!view_op}; COPYs become\n      {!buffer_copy}.\n\n      Returns [t] unchanged if the runner is already set. 
*)\n\n  val run :\n    t ->\n    ?var_vals:(string * int) list ->\n    ?wait:bool ->\n    ?do_update_stats:bool ->\n    unit ->\n    float option\n  (** [run t ?var_vals ?wait ?do_update_stats ()] dispatches [t]'s\n      runner. Variable bindings are [t]'s fixed bindings merged\n      with [var_vals]. [None] buffer slots are skipped; remaining\n      buffers are allocated if needed.\n\n      [var_vals] defaults to [[]]. [wait] defaults to [false]\n      (forced to [true] when [DEBUG >= 2]). [do_update_stats]\n      defaults to [true].\n\n      Raises [Invalid_argument] if the runner has not been set. *)\nend\n\n(** {1:run_schedule Schedule execution} *)\n\nval run_schedule :\n  device:Device.t ->\n  get_program:(Tolk_ir.Kernel.t -> Program_spec.t) ->\n  Exec_item.t list ->\n  ?var_vals:(string * int) list ->\n  ?do_update_stats:bool ->\n  unit ->\n  unit\n(** [run_schedule ~device ~get_program items ?var_vals\n    ?do_update_stats ()] lowers and executes each item in order.\n\n    [var_vals] defaults to [[]]. [do_update_stats] defaults to\n    [true]. *)\n"
  },
  {
    "path": "packages/tolk/lib/engine/schedule.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\nopen Tolk_ir\nmodule T = Tensor\n\nlet debug = Helpers.getenv \"DEBUG\" 0\nlet scache_enabled = Helpers.getenv \"SCACHE\" 1\n\n(* Schedule linearizer *)\n\n(* Follow src[0] chains through movement ops until hitting a data\n   source: After, Buffer, Param, Mselect, Mstack, or Bind. *)\nlet rec unwrap_src (node : T.t) : T.t =\n  match T.view node with\n  | After _ | Buffer _ | Param _ | Mselect _ | Mstack _ | Bind _ -> node\n  | v ->\n      match T.children_of v with\n      | src :: _ -> unwrap_src src\n      | [] -> node\n\n(* Build kernel dependency graph from the scheduled tensor graph and\n   topologically sort it into a LINEAR node.\n\n   In an After node, deps.(0) is the kernel (Call or End) and\n   deps.(1..) are WAR dependencies from rangeify. *)\nlet create_schedule (sink : T.t) : T.t =\n  (* Phase 1: build dependency graph.\n     children_map.(k) = kernels that depend on k.\n     in_degree.(k) = number of unresolved dependencies of k. 
*)\n  let children_map : (int, T.t list) Hashtbl.t = Hashtbl.create 64 in\n  let in_degree : (int, int) Hashtbl.t = Hashtbl.create 64 in\n  let degree_nodes : (int, T.t) Hashtbl.t = Hashtbl.create 64 in\n  let key n = T.tag n in\n  let add_child producer consumer =\n    let pk = key producer in\n    let prev = match Hashtbl.find_opt children_map pk with\n      | Some l -> l | None -> [] in\n    Hashtbl.replace children_map pk (consumer :: prev);\n    let ck = key consumer in\n    let deg = match Hashtbl.find_opt in_degree ck with\n      | Some n -> n | None -> 0 in\n    Hashtbl.replace in_degree ck (deg + 1)\n  in\n  let ensure_in_degree k =\n    let tag = key k in\n    if not (Hashtbl.mem in_degree tag) then begin\n      Hashtbl.replace in_degree tag 0;\n      Hashtbl.replace degree_nodes tag k\n    end\n  in\n  let slice = T.backward_slice sink in\n  List.iter (fun u ->\n    match T.view u with\n    | After { deps; _ } -> begin\n        match deps with\n        | [] -> ()\n        | k :: war_deps ->\n            (match T.view k with\n             (* Skip unprocessed STORE+AFTER inside precompiled CALL bodies *)\n             | Store _ -> ()\n             | Call _ | End _ ->\n                 ensure_in_degree k;\n                 (match T.view k with\n                  | End { value; _ } ->\n                      (match T.view value with\n                       | Call _ -> ()\n                       | v -> invalid_arg (Format.asprintf\n                           \"END src[0] should be CALL, not %a\" T.pp_view v))\n                  | _ -> ());\n                 let kernel_deps = match T.view k with\n                   | End { value; _ } ->\n                       (match T.view value with\n                        | Call { args; _ } -> args\n                        | _ -> [])\n                   | Call { args; _ } -> args\n                   | _ -> []\n                 in\n                 List.iter (fun s ->\n                   let s = unwrap_src s in\n             
      match T.view s with\n                   | After { deps = s_deps; _ } ->\n                       (match s_deps with\n                        | s_kernel :: _ -> add_child s_kernel k\n                        | [] -> ())\n                   | Mselect _ | Mstack _ ->\n                       List.iter (fun ss ->\n                         let ss = match T.view ss with\n                           | Mselect { src; _ } -> src | _ -> ss in\n                         match T.view ss with\n                         | Buffer _ | Param _ -> ()\n                         | After { deps = ss_deps; _ } ->\n                             (match ss_deps with\n                              | ss_kernel :: _ -> add_child ss_kernel k\n                              | [] -> ())\n                         | v -> invalid_arg (Format.asprintf\n                             \"expected AFTER, got %a\" T.pp_view v))\n                         (T.children s)\n                   | Buffer _ | Param _ | Bind _ -> ()\n                   | v -> invalid_arg (Format.asprintf\n                       \"input to kernel must be AFTER, BUFFER, PARAM, \\\n                        MSELECT, MSTACK, or BIND, not %a\" T.pp_view v))\n                   (kernel_deps @ war_deps)\n             | v -> invalid_arg (Format.asprintf\n                 \"AFTER deps[0] should be CALL or END, not %a\" T.pp_view v))\n      end\n    | _ -> ())\n    slice;\n  (* Phase 2: BFS topological sort. 
*)\n  let queue = Queue.create () in\n  Hashtbl.iter (fun tag deg ->\n    if deg = 0 then\n      Queue.add (Hashtbl.find degree_nodes tag) queue)\n    in_degree;\n  let linearized = ref [] in\n  while not (Queue.is_empty queue) do\n    let rk = Queue.pop queue in\n    (match T.view rk with\n     | Linear { srcs } ->\n         linearized := List.rev_append srcs !linearized\n     | _ ->\n         let k = match T.view rk with\n           | End { value; _ } -> value | _ -> rk in\n         (match T.view k with\n          | Call { callee; args; info; dtype } ->\n              let buf_nodes = List.filter (fun s ->\n                match T.view s with Bind _ -> false | _ -> true) args in\n              let buf_nodes = List.map unwrap_src buf_nodes in\n              let new_call = T.call ~callee ~args:buf_nodes ~info ~dtype in\n              linearized := new_call :: !linearized\n          | v -> invalid_arg (Format.asprintf\n              \"unexpected op in queue: %a\" T.pp_view v)));\n    let succs = match Hashtbl.find_opt children_map (key rk) with\n      | Some l -> l | None -> [] in\n    List.iter (fun x ->\n      let xk = key x in\n      let deg = Hashtbl.find in_degree xk - 1 in\n      Hashtbl.replace in_degree xk deg;\n      if deg = 0 then Queue.add x queue)\n      succs\n  done;\n  T.linear (List.rev !linearized)\n\n(* Convert a Linear node to ExecItem list.\n   [buffers] maps tensor nodes to runtime Buffer.t values. 
*)\nlet linear_to_schedule (linear : T.t)\n    ~(buffers : T.t -> Device.Buffer.t option) : Realize.Exec_item.t list =\n  let srcs = match T.view linear with\n    | Linear { srcs } -> srcs\n    | _ -> invalid_arg \"linear_to_schedule: expected Linear node\"\n  in\n  let schedule = ref [] in\n  List.iter (fun si ->\n    match T.view si with\n    | Call { callee; args; _ } ->\n        (* Create subbuffer views if the callee is a Buffer_view *)\n        (match callee with\n         | Ref ref_node -> begin\n             match T.view ref_node with\n             | Buffer_view { size; offset; dtype; _ } -> begin\n                 match args with\n                 | _dst :: base_node :: _ ->\n                     (match buffers base_node with\n                      | Some base ->\n                          let _view = Device.Buffer.view base ~size ~dtype\n                            ~offset:(offset * Dtype.itemsize dtype) in\n                          (* XXX: register view in buffer table *)\n                          ()\n                      | None -> ())\n                 | _ -> ()\n               end\n             | _ -> ()\n           end\n         | Ast _ -> ());\n        let buf_nodes = List.filter (fun s ->\n          match T.view s with Bind _ -> false | _ -> true) args in\n        let bufs = List.map buffers buf_nodes in\n        (* XXX: multi-device expansion not yet implemented *)\n        schedule :=\n          Realize.Exec_item.make ~ast:si ~bufs () :: !schedule\n    | _ -> ())\n    srcs;\n  List.rev !schedule\n\n(* Resolve PARAM nodes to actual buffers. *)\nlet post_sched_cache_rule ~(param_bufs : T.t list) (node : T.t)\n    : T.t option =\n  match T.view node with\n  | Param { slot; _ } ->\n      if slot >= 0 && slot < List.length param_bufs then\n        Some (List.nth param_bufs slot)\n      else None\n  | _ -> None\n\n(* Resolve CALL(LINEAR, ...) by substituting PARAMs with buffer\n   arguments. Flatten nested LINEAR nodes. 
*)\nlet resolve_linear_call_rule (node : T.t) : T.t option =\n  match T.view node with\n  | Call { callee = Ref ref_node; args; _ } -> begin\n      match T.view ref_node with\n      | Linear _ ->\n          Some (T.graph_rewrite\n            (post_sched_cache_rule ~param_bufs:args)\n            ref_node)\n      | _ -> None\n    end\n  | Linear { srcs } ->\n      let has_nested = List.exists (fun s ->\n        match T.view s with Linear _ -> true | _ -> false) srcs in\n      if has_nested then\n        let flat = List.concat_map (fun s ->\n          match T.view s with\n          | Linear { srcs = inner } -> inner\n          | _ -> [s]) srcs\n        in\n        Some (T.linear flat)\n      else None\n  | _ -> None\n\n(* Schedule cache *)\n\nlet schedule_cache : (int, T.t) Hashtbl.t = Hashtbl.create 64\n\n(* Convert a tensor-level SINK into a LINEAR node. *)\nlet lower_sink_to_linear ~get_kernel_graph (sink : T.t) : T.t option =\n  match T.view sink with\n  | Sink { kernel_info = Some _; _ } -> None\n  | Sink _ ->\n      let st = Unix.gettimeofday () in\n      let cache_key = T.tag sink in\n      let linear =\n        if scache_enabled <> 0 then\n          match Hashtbl.find_opt schedule_cache cache_key with\n          | Some cached -> cached\n          | None ->\n              let kernel_graph = get_kernel_graph sink in\n              let r = create_schedule kernel_graph in\n              Hashtbl.replace schedule_cache cache_key r;\n              r\n        else\n          create_schedule (get_kernel_graph sink)\n      in\n      if (debug >= 1 && (match T.view linear with\n            | Linear { srcs } -> List.length srcs > 1\n            | _ -> false))\n         || debug >= 3\n      then begin\n        let n = match T.view linear with\n          | Linear { srcs } -> List.length srcs | _ -> 0 in\n        Printf.eprintf \"scheduled %5d kernels in %8.2f ms\\n%!\"\n          n ((Unix.gettimeofday () -. st) *. 
1000.)\n      end;\n      Some linear\n  | _ -> None\n\n(* Full schedule pipeline: tensor graph → ExecItem list + var_vals.\n\n   [big_sink] is either a raw SINK (legacy) or a CALL produced by\n   {!Allocations_next.transform_to_call}.\n\n   1. Lower each SINK to a LINEAR via get_kernel_graph + create_schedule.\n      [enter_calls] lets us descend into CALL bodies from allocations.\n   2. Resolve CALL(LINEAR, ...) by substituting PARAMs with buffers.\n   3. Extract bound variable values from BIND nodes.\n   4. Convert the final LINEAR to ExecItems. *)\nlet complete_create_schedule_with_vars ~get_kernel_graph\n    ~(buffers : T.t -> Device.Buffer.t option)\n    (big_sink : T.t)\n    : Realize.Exec_item.t list * (string * int) list =\n  (* Step 1: lower SINKs to LINEARs *)\n  let graph = T.graph_rewrite ~enter_calls:true (fun node ->\n    lower_sink_to_linear ~get_kernel_graph node)\n    big_sink\n  in\n  (* Step 2: resolve CALL(LINEAR, ...) *)\n  let graph = T.graph_rewrite resolve_linear_call_rule graph in\n  (* Step 3: find the LINEAR result *)\n  let linear = match T.view graph with\n    | Linear _ -> graph\n    | _ -> graph\n  in\n  (* Step 4: extract var_vals from BIND nodes.\n     When big_sink is a CALL from allocations, BINDs are among the\n     args; when it is a raw SINK, they are among the srcs. 
*)\n  let var_vals = ref [] in\n  let extract_binds nodes =\n    List.iter (fun src ->\n      match T.view src with\n      | Bind { var; value = Some v; _ } -> begin\n          match T.view var, T.view v with\n          | Define_var { name; _ }, Const { value; _ } ->\n              (match Const.view value with\n               | Int n ->\n                   let n = Int64.to_int n in\n                   (match List.assoc_opt name !var_vals with\n                    | Some prev when prev <> n ->\n                        invalid_arg (Printf.sprintf\n                          \"bind mismatch on %s, %d <> %d\" name prev n)\n                    | _ -> var_vals := (name, n) :: !var_vals)\n               | _ -> ())\n          | _ -> ()\n        end\n      | _ -> ()) nodes\n  in\n  (match T.view big_sink with\n   | Sink { srcs; _ } -> extract_binds srcs\n   | Call { args; _ } -> extract_binds args\n   | _ -> ());\n  (* Step 5: convert LINEAR to ExecItems *)\n  let schedule = linear_to_schedule linear ~buffers in\n  (schedule, !var_vals)\n"
  },
  {
    "path": "packages/tolk/lib/gpu_target.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\ntype cuda = SM75 | SM80 | SM89\ntype amd = RDNA3 | RDNA4 | CDNA3 | CDNA4\n\nlet parse_cuda_arch arch =\n  let buf = Buffer.create (String.length arch) in\n  String.iter\n    (fun c -> if c >= '0' && c <= '9' then Buffer.add_char buf c else ())\n    arch;\n  if Buffer.length buf = 0 then None\n  else int_of_string_opt (Buffer.contents buf)\n\nlet cuda_of_env () =\n  let raw =\n    match Sys.getenv_opt \"CUDA_ARCH\" with\n    | Some arch when String.trim arch <> \"\" -> String.trim arch\n    | _ -> (\n        match Sys.getenv_opt \"CUDA_SM\" with\n        | Some arch when String.trim arch <> \"\" -> String.trim arch\n        | _ -> \"\")\n  in\n  match parse_cuda_arch raw with\n  | Some ver when ver >= 89 -> Some SM89\n  | Some ver when ver >= 80 -> Some SM80\n  | Some ver when ver >= 75 -> Some SM75\n  | _ -> None\n\nlet parse_amd_arch arch =\n  let arch = String.trim arch |> String.lowercase_ascii in\n  let contains needle =\n    let nlen = String.length needle in\n    let alen = String.length arch in\n    let rec loop i =\n      if i + nlen > alen then false\n      else if String.sub arch i nlen = needle then true\n      else loop (i + 1)\n    in\n    nlen > 0 && loop 0\n  in\n  if contains \"gfx950\" || contains \"9.5.0\" then Some CDNA4\n  else if contains \"gfx942\" || contains \"9.4.2\" then Some CDNA3\n  else if\n    contains \"gfx1200\" || contains \"gfx1201\" || contains \"12.0.0\"\n    || contains \"12.0.1\"\n  then Some RDNA4\n  else if contains \"gfx11\" || contains \"11.\" then Some RDNA3\n  else None\n\nlet amd_of_env () =\n  let first_set vars =\n    let rec loop = function\n      | [] -> \"\"\n      | var :: 
vars -> (\n          match Sys.getenv_opt var with\n          | Some value when String.trim value <> \"\" -> String.trim value\n          | _ -> loop vars)\n    in\n    loop vars\n  in\n  first_set\n    [ \"AMD_ARCH\"; \"HIP_ARCH\"; \"HCC_AMDGPU_TARGET\"; \"HSA_OVERRIDE_GFX_VERSION\" ]\n  |> parse_amd_arch\n"
  },
  {
    "path": "packages/tolk/lib/gpu_target.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\n(** Resolved GPU target descriptors for renderer construction.\n\n    This module owns CUDA/AMD target selection policy. Renderers consume these\n    resolved targets and do not read environment variables themselves. *)\n\n(** CUDA SM architecture tiers used by source generation. *)\ntype cuda = SM75 | SM80 | SM89\n\n(** AMD GPU architecture families used by source generation. *)\ntype amd = RDNA3 | RDNA4 | CDNA3 | CDNA4\n\nval cuda_of_env : unit -> cuda option\n(** [cuda_of_env ()] resolves [CUDA_ARCH] or [CUDA_SM] to the nearest supported\n    CUDA SM tier. Returns [None] when no supported tier is configured. *)\n\nval amd_of_env : unit -> amd option\n(** [amd_of_env ()] resolves common AMD arch environment variables such as\n    [AMD_ARCH], [HIP_ARCH], [HCC_AMDGPU_TARGET], or [HSA_OVERRIDE_GFX_VERSION].\n    Returns [None] when no supported arch family is configured. *)\n"
  },
  {
    "path": "packages/tolk/lib/helpers.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\n(* Environment *)\n\nlet getenv name default =\n  match Sys.getenv_opt name with\n  | Some s -> (try int_of_string s with Failure _ -> default)\n  | None -> default\n\nlet getenv_str name default =\n  match Sys.getenv_opt name with\n  | Some s when s <> \"\" -> s\n  | _ -> default\n\nlet amx = getenv \"AMX\" 0 <> 0\nlet allow_half8 = getenv \"ALLOW_HALF8\" 0 <> 0\n\n(* Context variables *)\n\nmodule Context_var = struct\n  type 'a t = { key : string; value : 'a ref }\n\n  let int ~key ~default =\n    let value = getenv key default in\n    { key; value = ref value }\n\n  let string ~key ~default =\n    let value =\n      match Sys.getenv_opt key with\n      | Some s ->\n          let v = String.trim s in\n          if v = \"\" then default else v\n      | None -> default\n    in\n    { key; value = ref value }\n\n  let get v = !(v.value)\n\n  type binding = B : 'a t * 'a -> binding\n\n  let with_context overrides f =\n    let saved = List.map (fun (B (v, _)) -> B (v, !(v.value))) overrides in\n    List.iter (fun (B (v, x)) -> v.value := x) overrides;\n    Fun.protect\n      ~finally:(fun () -> List.iter (fun (B (v, old)) -> v.value := old) saved)\n      f\nend\n\n(* Collections *)\n\n(* Preserves first occurrence, removes duplicates. *)\nlet dedup_by eq lst =\n  let rec loop acc = function\n    | [] -> List.rev acc\n    | x :: rest ->\n        if List.exists (eq x) acc then loop acc rest\n        else loop (x :: acc) rest\n  in\n  loop [] lst\n\n(* Partitions old_shape indices into contiguous groups whose cumulative products\n   match the corresponding new_shape elements, returning None if no valid\n   partition exists. 
Used to determine whether a reshape is a simple view\n   (contraction of contiguous axes) or requires a copy. *)\nlet get_contraction old_shape new_shape =\n  let n_old = Array.length old_shape in\n  let n_new = Array.length new_shape in\n  let acc_old = Array.make n_old 1 in\n  let acc_new = Array.make n_new 1 in\n  if n_old > 0 then acc_old.(0) <- old_shape.(0);\n  for i = 1 to n_old - 1 do\n    acc_old.(i) <- acc_old.(i - 1) * old_shape.(i)\n  done;\n  if n_new > 0 then acc_new.(0) <- new_shape.(0);\n  for i = 1 to n_new - 1 do\n    acc_new.(i) <- acc_new.(i - 1) * new_shape.(i)\n  done;\n  let split = Array.make n_new 0 in\n  let ok = ref true in\n  for i = 0 to n_new - 1 do\n    if !ok then begin\n      if acc_new.(i) = 1 then split.(i) <- 0\n      else\n        match\n          let found = ref (-1) in\n          for j = 0 to n_old - 1 do\n            if !found = -1 && acc_old.(j) = acc_new.(i) then found := j + 1\n          done;\n          !found\n        with\n        | -1 -> ok := false\n        | idx -> split.(i) <- idx\n    end\n  done;\n  if not !ok then None\n  else\n    let starts = Array.make n_new 0 in\n    let ends = Array.make n_new 0 in\n    for i = 0 to n_new - 1 do\n      starts.(i) <- (if i = 0 then 0 else split.(i - 1));\n      ends.(i) <- (if i = n_new - 1 then n_old else split.(i))\n    done;\n    Some\n      (Array.to_list\n         (Array.init n_new (fun i ->\n              List.init (ends.(i) - starts.(i)) (fun j -> starts.(i) + j))))\n"
  },
  {
    "path": "packages/tolk/lib/ir/axis_kind.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\ntype t =\n  | Global\n  | Thread\n  | Local\n  | Warp\n  | Loop\n  | Group_reduce\n  | Reduce\n  | Upcast\n  | Unroll\n  | Placeholder\n\nlet equal = ( = )\nlet compare = Stdlib.compare\n\nlet to_string = function\n  | Global -> \"global\"\n  | Thread -> \"thread\"\n  | Local -> \"local\"\n  | Warp -> \"warp\"\n  | Loop -> \"loop\"\n  | Group_reduce -> \"group_reduce\"\n  | Reduce -> \"reduce\"\n  | Upcast -> \"upcast\"\n  | Unroll -> \"unroll\"\n  | Placeholder -> \"placeholder\"\n\nlet pp fmt kind = Format.pp_print_string fmt (to_string kind)\n\n(* Sorting priority: Loop=-1, Thread=Global=0, Warp=1,\n   Local=Group_reduce=2, Upcast=3, Reduce=4, Unroll=5. *)\nlet to_pos = function\n  | Loop -> -1\n  | Thread | Global -> 0\n  | Warp -> 1\n  | Local | Group_reduce -> 2\n  | Upcast -> 3\n  | Reduce -> 4\n  | Unroll -> 5\n  | Placeholder -> 6\n\nlet letter = function\n  | Global -> \"g\"\n  | Thread -> \"t\"\n  | Local -> \"l\"\n  | Warp -> \"w\"\n  | Loop -> \"L\"\n  | Upcast -> \"u\"\n  | Group_reduce -> \"G\"\n  | Reduce -> \"R\"\n  | Unroll -> \"r\"\n  | Placeholder -> \"?\"\n\nlet color = function\n  | Global -> \"blue\"\n  | Thread -> \"BLUE\"\n  | Local -> \"cyan\"\n  | Warp -> \"CYAN\"\n  | Loop -> \"WHITE\"\n  | Upcast -> \"yellow\"\n  | Group_reduce -> \"RED\"\n  | Reduce -> \"red\"\n  | Unroll -> \"magenta\"\n  | Placeholder -> \"white\"\n"
  },
  {
    "path": "packages/tolk/lib/ir/axis_kind.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\n(** Kernel axis kinds.\n\n    An axis kind classifies each loop axis in a kernel schedule. The kind\n    determines how the axis maps to hardware execution dimensions (threads,\n    workgroups, registers) or to compiler transforms (unroll, upcast, reduce).\n\n    {b Note.} The variant declaration order is load-bearing: {!compare}\n    delegates to {!Stdlib.compare}, so reordering variants changes the sort\n    order used by the optimizer. *)\n\n(** {1:types Types} *)\n\n(** The type for axis kinds. *)\ntype t =\n  | Global  (** Global work dimension. *)\n  | Thread  (** Per-thread dimension. *)\n  | Local  (** Workgroup-local dimension. *)\n  | Warp  (** Warp-level dimension. *)\n  | Loop  (** Software loop. *)\n  | Group_reduce  (** Group-level reduction. *)\n  | Reduce  (** Reduction axis. *)\n  | Upcast  (** Vectorization (upcast) axis. *)\n  | Unroll  (** Unrolled loop axis. *)\n  | Placeholder  (** Placeholder for unassigned axes. *)\n\n(** {1:predicates Predicates and comparisons} *)\n\nval equal : t -> t -> bool\n(** [equal a b] is [true] iff [a] and [b] are the same kind. *)\n\nval compare : t -> t -> int\n(** [compare a b] totally orders axis kinds by variant declaration order. *)\n\n(** {1:fmt Formatting} *)\n\nval to_string : t -> string\n(** [to_string kind] is a lowercase string (e.g. [\"global\"], [\"group_reduce\"]). *)\n\nval pp : Format.formatter -> t -> unit\n(** [pp] formats an axis kind with {!to_string}. *)\n\n(** {1:data Data definitions} *)\n\n(* CR: that's a weird API, I suspect this is used to implement something that should be provided in axis_kind itself possibly? 
*)\nval to_pos : t -> int\n(** [to_pos kind] is the sorting priority for [kind]. *)\n\n(* CR: what is this used for? if it's really useful we should document *)\nval letter : t -> string\n(** [letter kind] is a single-character label (e.g. [\"g\"], [\"l\"], [\"R\"]). *)\n\n(* CR: is this even used? That's such a weird api, seems out of place? *)\nval color : t -> string\n(** [color kind] is a debug color name (e.g. [\"blue\"], [\"cyan\"]). *)\n"
  },
  {
    "path": "packages/tolk/lib/ir/const.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\nlet strf = Printf.sprintf\n\nlet err_not_scalar kind dt =\n  strf \"Const.%s expects a scalar dtype, got %s\" kind (Dtype.Val.to_string dt)\n\nlet err_not_int dt =\n  strf \"Const.int64 expects an integer dtype, got %s\" (Dtype.Val.to_string dt)\n\nlet err_not_float dt =\n  strf \"Const.float expects a floating-point dtype, got %s\" (Dtype.Val.to_string dt)\n\ntype view = Bool of bool | Int of int64 | Float of float\ntype t = { dtype : Dtype.Val.t; view : view }\n\nlet view t = t.view\nlet dtype t = t.dtype\nlet bool value = { dtype = Dtype.Val.bool; view = Bool value }\n\nlet int64 (dtype : Dtype.Val.t) value =\n  if Dtype.Val.count dtype <> 1 then invalid_arg (err_not_scalar \"int64\" dtype);\n  if not (Dtype.Val.is_int dtype) then invalid_arg (err_not_int dtype);\n  { dtype; view = Int value }\n\nlet int dtype value = int64 dtype (Int64.of_int value)\n\nlet float (dtype : Dtype.Val.t) value =\n  if Dtype.Val.count dtype <> 1 then invalid_arg (err_not_scalar \"float\" dtype);\n  if not (Dtype.Val.is_float dtype) then invalid_arg (err_not_float dtype);\n  { dtype; view = Float value }\n\nlet equal_view a b =\n  match a, b with\n  | Bool x, Bool y -> Bool.equal x y\n  | Int x, Int y -> Int64.equal x y\n  | Float x, Float y -> Int64.equal (Int64.bits_of_float x) (Int64.bits_of_float y)\n  | _ -> false\n\nlet equal a b = Dtype.Val.equal a.dtype b.dtype && equal_view a.view b.view\n\nlet compare_view a b =\n  match a, b with\n  | Bool x, Bool y -> Bool.compare x y\n  | Int x, Int y -> Int64.compare x y\n  | Float x, Float y -> Int64.compare (Int64.bits_of_float x) (Int64.bits_of_float y)\n  | Bool _, _ -> -1 | _, Bool _ -> 1\n  
| Int _, _ -> -1 | _, Int _ -> 1\n\nlet compare a b =\n  let c = Dtype.Val.compare a.dtype b.dtype in\n  if c <> 0 then c else compare_view a.view b.view\n\nlet to_string t =\n  let s = Dtype.Val.to_string t.dtype in\n  match t.view with\n  | Bool v -> strf \"%b:%s\" v s\n  | Int v -> strf \"%Ld:%s\" v s\n  | Float v -> strf \"%g:%s\" v s\n\nlet pp fmt t = Format.pp_print_string fmt (to_string t)\n\nlet zero dtype =\n  let dt = Dtype.Val dtype in\n  if Dtype.is_float dt then float dtype 0.0\n  else if Dtype.is_bool dt then bool false\n  else int dtype 0\n\nlet identity_element (op : Op.reduce) dtype =\n  let dt = Dtype.Val dtype in\n  match op with\n  | `Add -> zero dtype\n  | `Mul -> if Dtype.is_float dt then float dtype 1.0 else int dtype 1\n  | `Max -> (\n      match Dtype.min dt with\n      | `Float f -> float dtype f\n      | `SInt i | `UInt i -> int64 dtype i\n      | `Bool _ -> bool false)\n"
  },
  {
    "path": "packages/tolk/lib/ir/const.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\n(** Typed compile-time constants.\n\n    A constant pairs a scalar value with its {!Dtype.t}. Constructors validate\n    that the dtype matches the value kind (boolean, integer, or float) and that\n    the dtype is scalar. *)\n\n(** {1:types Types} *)\n\ntype t\n(** The type for typed constants. *)\n\ntype view =\n  | Bool of bool  (** Boolean payload. *)\n  | Int of int64  (** Integer payload (signed or unsigned, stored as [int64]). *)\n  | Float of float  (** Floating-point payload. *)\n(** Read-only constant payload. Pattern-match via {!view}. *)\n\n(** {1:access Accessors} *)\n\nval view : t -> view\n(** [view c] is the payload of [c]. *)\n\nval dtype : t -> Dtype.Val.t\n(** [dtype c] is the dtype of [c]. *)\n\n(** {1:constructors Constructors} *)\n\nval bool : bool -> t\n(** [bool b] is a boolean constant with dtype {!Dtype.bool}. *)\n\nval int : Dtype.Val.t -> int -> t\n(** [int dtype n] is an integer constant.\n\n    Raises [Invalid_argument] if [dtype] is not a scalar integer or boolean\n    dtype. *)\n\nval int64 : Dtype.Val.t -> int64 -> t\n(** [int64 dtype n] is an integer constant.\n\n    Raises [Invalid_argument] if [dtype] is not a scalar integer or boolean\n    dtype. *)\n\nval float : Dtype.Val.t -> float -> t\n(** [float dtype x] is a floating-point constant.\n\n    Raises [Invalid_argument] if [dtype] is not a scalar float dtype. *)\n\n(** {1:predicates Predicates and comparisons} *)\n\nval equal : t -> t -> bool\n(** [equal a b] is [true] iff [a] and [b] carry the same dtype and payload. *)\n\nval compare : t -> t -> int\n(** [compare a b] totally orders constants, first by dtype then by payload. 
*)\n\n(** {1:fmt Formatting} *)\n\nval to_string : t -> string\n(** [to_string c] is a compact [value:dtype] representation (e.g. [\"42:int32\"],\n    [\"3.14:float32\"], [\"true:bool\"]). *)\n\nval pp : Format.formatter -> t -> unit\n(** [pp] formats a constant with {!to_string}. *)\n\n(** {1:helpers Dtype-aware helpers} *)\n\nval zero : Dtype.Val.t -> t\n(** [zero dtype] is the zero constant for [dtype]: [0.0] for floats,\n    [false] for bools, [0] for integers. *)\n\nval identity_element : Op.reduce -> Dtype.Val.t -> t\n(** [identity_element op dtype] is the identity element for reduction [op]\n    on [dtype]: [0] for [`Add], [1] for [`Mul], [dtype.min] for [`Max]. *)\n"
  },
  {
    "path": "packages/tolk/lib/ir/decomposition.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\nmodule K = Kernel\n\n(* Helpers *)\n\nlet powers_of_two : (int64, int) Hashtbl.t =\n  let tbl = Hashtbl.create 64 in\n  for i = 0 to 62 do Hashtbl.replace tbl (Int64.shift_left 1L i) i done;\n  tbl\n\nlet log2_of_power n = Hashtbl.find_opt powers_of_two n\n\nlet const_int_val node =\n  match K.view node with\n  | Const { value; _ } ->\n    (match Const.view value with Int v -> Some v | _ -> None)\n  | _ -> None\n\nlet iconst v = K.const (Const.int64 Dtype.Val.index v)\n\n(* Transcendentals *)\n\nlet xpow ~base ~exponent =\n  let is_neg = K.binary ~op:`Cmplt ~lhs:base ~rhs:(K.const_float 0.0) in\n  let abs_base = K.ternary ~op:`Where ~a:is_neg\n    ~b:(K.unary ~op:`Neg ~src:base) ~c:base in\n  let ret = K.unary ~op:`Exp2\n    ~src:(K.binary ~op:`Mul ~lhs:exponent\n            ~rhs:(K.unary ~op:`Log2 ~src:abs_base)) in\n  let fdt = K.dtype ret in\n  let int_exp = K.cast ~src:(K.cast ~src:exponent ~dtype:Dtype.int32)\n    ~dtype:(K.dtype exponent) in\n  let non_int = K.binary ~op:`Cmpne ~lhs:exponent ~rhs:int_exp in\n  let abs_exp = K.ternary ~op:`Where\n    ~a:(K.binary ~op:`Cmplt ~lhs:exponent ~rhs:(K.const_float 0.0))\n    ~b:(K.unary ~op:`Neg ~src:exponent) ~c:exponent in\n  let is_odd = K.cast\n    ~src:(K.binary ~op:`Mod\n            ~lhs:(K.cast ~src:abs_exp ~dtype:Dtype.int32)\n            ~rhs:(K.const (Const.int Dtype.Val.int32 2)))\n    ~dtype:Dtype.bool in\n  let nan_c = K.const (Const.float (Dtype.val_of fdt) Float.nan) in\n  let neg_base = K.ternary ~op:`Where ~a:non_int ~b:nan_c\n    ~c:(K.ternary ~op:`Where ~a:is_odd\n          ~b:(K.unary ~op:`Neg ~src:ret) ~c:ret) in\n  let zero_zero = K.binary ~op:`And\n    
~lhs:(K.binary ~op:`Cmpeq ~lhs:base ~rhs:(K.const_float 0.0))\n    ~rhs:(K.binary ~op:`Cmpeq ~lhs:exponent ~rhs:(K.const_float 0.0)) in\n  K.ternary ~op:`Where ~a:zero_zero\n    ~b:(K.const (Const.float (Dtype.val_of fdt) 1.0))\n    ~c:(K.ternary ~op:`Where ~a:is_neg ~b:neg_base ~c:ret)\n\n(* IEEE 754 helpers *)\n\nlet mantissa_bits dt = snd (Dtype.finfo dt)\n\nlet exponent_bias dt =\n  let e, _ = Dtype.finfo dt in (1 lsl (e - 1)) - 1\n\nlet exponent_mask dt =\n  let e, _ = Dtype.finfo dt in (1 lsl e) - 1\n\n(* Shift by constant via mul/div by power of 2. *)\n\nlet shr_const x n =\n  let dt = K.dtype x in\n  K.binary ~op:`Idiv ~lhs:x\n    ~rhs:(K.const (Const.int64 (Dtype.val_of dt) (Int64.shift_left 1L n)))\n\nlet shl_const x n =\n  let dt = K.dtype x in\n  K.binary ~op:`Mul ~lhs:x\n    ~rhs:(K.const (Const.int64 (Dtype.val_of dt) (Int64.shift_left 1L n)))\n\nlet const_of_node_int node =\n  match K.view node with\n  | Const { value; _ } ->\n    (match Const.view value with Int v -> Some (Int64.to_int v) | _ -> None)\n  | _ -> None\n\nlet expr_shr x y =\n  match const_of_node_int y with\n  | Some n -> shr_const x n\n  | None -> failwith \"expr_shr: non-constant shift amount\"\n\nlet expr_shl x y =\n  match const_of_node_int y with\n  | Some n -> shl_const x n\n  | None -> failwith \"expr_shl: non-constant shift amount\"\n\nlet lazy_map_numbers x ~inf:inf_val ~ninf:ninf_val ~nan:nan_val ~ratio =\n  let fdt = K.dtype x in\n  let pos_inf = K.const (Const.float (Dtype.val_of fdt) Float.infinity) in\n  let neg_inf = K.const (Const.float (Dtype.val_of fdt) Float.neg_infinity) in\n  K.ternary ~op:`Where\n    ~a:(K.binary ~op:`Cmpne ~lhs:x ~rhs:pos_inf)\n    ~b:(K.ternary ~op:`Where\n          ~a:(K.binary ~op:`Cmpne ~lhs:x ~rhs:x) ~b:nan_val\n          ~c:(K.ternary ~op:`Where\n                ~a:(K.binary ~op:`Cmpne ~lhs:x ~rhs:neg_inf)\n                ~b:ratio ~c:ninf_val))\n    ~c:inf_val\n\nlet polyN x coeffs =\n  let fdt = K.dtype x in\n  let c v = K.const 
(Const.float (Dtype.val_of fdt) v) in\n  match coeffs with\n  | [] -> c 0.0\n  | first :: rest ->\n    List.fold_left\n      (fun acc ci -> K.binary ~op:`Add ~lhs:(K.binary ~op:`Mul ~lhs:acc ~rhs:x) ~rhs:(c ci))\n      (c first) rest\n\nlet const_like node v = K.const (Const.float (K.dtype node |> Dtype.scalarize |> Dtype.val_of) v)\nlet int_const_like node v = K.const (Const.int64 (Dtype.val_of (K.dtype node)) v)\n\nlet int_for_float = function\n  | Dtype.Float64 -> Dtype.int64\n  | Float32 -> Dtype.int32\n  | Float16 -> Dtype.int16\n  | _ -> Dtype.int32\n\nlet uint_for_float = function\n  | Dtype.Float64 -> Dtype.uint64\n  | Float32 -> Dtype.uint32\n  | Float16 -> Dtype.uint16\n  | _ -> Dtype.uint32\n\nlet float_for_int = function\n  | Dtype.Int64 -> Dtype.float64\n  | Int32 -> Dtype.float32\n  | Int16 -> Dtype.float16\n  | _ -> Dtype.float32\n\nlet rintk d =\n  let fdt = K.dtype d in\n  let out_dtype = Dtype.vec (Dtype.count fdt) (int_for_float (Dtype.scalar fdt)) in\n  let zero = const_like d 0.0 in\n  let rounded = K.binary ~op:`Add ~lhs:d\n    ~rhs:(K.ternary ~op:`Where\n            ~a:(K.binary ~op:`Cmplt ~lhs:d ~rhs:zero)\n            ~b:(const_like d (-0.5)) ~c:(const_like d 0.5)) in\n  K.cast ~src:rounded ~dtype:out_dtype\n\nlet pow2if q float_dtype =\n  let qdt = K.dtype q in\n  let scalar = Dtype.scalar qdt in\n  let out_scalar = match scalar with\n    | Int16 -> Dtype.scalarize float_dtype\n    | _ -> float_for_int scalar\n  in\n  let out_dtype = Dtype.vec (Dtype.count qdt) out_scalar in\n  let q_biased = K.binary ~op:`Add ~lhs:q\n    ~rhs:(int_const_like q (Int64.of_int (exponent_bias out_dtype))) in\n  K.bitcast ~src:(shl_const q_biased (mantissa_bits out_dtype)) ~dtype:(Dtype.val_of out_dtype)\n\nlet ilogb2k d =\n  let fdt = K.dtype d in\n  let int_dtype = Dtype.vec (Dtype.count fdt) (int_for_float (Dtype.scalar fdt)) in\n  let dint = K.bitcast ~src:d ~dtype:(Dtype.val_of int_dtype) in\n  let masked = K.binary ~op:`And ~lhs:(shr_const dint 
(mantissa_bits fdt))\n    ~rhs:(K.const (Const.int64 (Dtype.val_of int_dtype) (Int64.of_int (exponent_mask fdt)))) in\n  K.binary ~op:`Sub ~lhs:masked\n    ~rhs:(K.const (Const.int64 (Dtype.val_of int_dtype) (Int64.of_int (exponent_bias fdt))))\n\nlet ldexp3k d e =\n  let fdt = K.dtype d in\n  let int_dtype = Dtype.vec (Dtype.count fdt) (int_for_float (Dtype.scalar fdt)) in\n  let m1 = K.bitcast ~src:d ~dtype:(Dtype.val_of int_dtype) in\n  let m2 = shl_const (K.cast ~src:e ~dtype:int_dtype) (mantissa_bits fdt) in\n  K.bitcast ~src:(K.binary ~op:`Add ~lhs:m1 ~rhs:m2) ~dtype:(Dtype.val_of fdt)\n\nlet ldexp2k d e =\n  let fdt = K.dtype d in\n  let half = shr_const e 1 in\n  let other = K.binary ~op:`Sub ~lhs:e ~rhs:half in\n  K.binary ~op:`Mul\n    ~lhs:(K.binary ~op:`Mul ~lhs:d ~rhs:(pow2if half fdt))\n    ~rhs:(pow2if other fdt)\n\nlet frexp_decomp v =\n  let fdt = K.dtype v in\n  let scalar = Dtype.scalar fdt in\n  let mantissa_mask, half_exp_bits = match scalar with\n    | Dtype.Float64 -> (0x000FFFFFFFFFFFFFL, 0x3FE0000000000000L)\n    | Float32 -> (0x807FFFFFL, 0x3F000000L)\n    | Float16 -> (0x83FFL, 0x3800L)\n    | _ -> (0x807FFFFFL, 0x3F000000L)\n  in\n  let uint_dtype = Dtype.vec (Dtype.count fdt) (uint_for_float scalar) in\n  let bits = K.bitcast ~src:v ~dtype:(Dtype.val_of uint_dtype) in\n  let exponent = K.binary ~op:`And ~lhs:(shr_const bits (mantissa_bits fdt))\n    ~rhs:(K.const (Const.int64 (Dtype.val_of uint_dtype) (Int64.of_int (exponent_mask fdt)))) in\n  let mantissa = K.bitcast ~dtype:(Dtype.val_of fdt)\n    ~src:(K.binary ~op:`Or\n            ~lhs:(K.binary ~op:`And ~lhs:bits\n                    ~rhs:(K.const (Const.int64 (Dtype.val_of uint_dtype) mantissa_mask)))\n            ~rhs:(K.const (Const.int64 (Dtype.val_of uint_dtype) half_exp_bits))) in\n  let exp = K.binary ~op:`Add\n    ~lhs:(K.binary ~op:`Sub ~lhs:exponent\n            ~rhs:(K.const (Const.int64 (Dtype.val_of uint_dtype) (Int64.of_int (exponent_bias fdt)))))\n    ~rhs:(K.const 
(Const.int64 (Dtype.val_of uint_dtype) 1L)) in\n  (mantissa, exp)\n\n(* Payne-Hanek range reduction: reduce an arbitrary floating-point angle d\n   to the interval [-pi/4, pi/4] with a quadrant indicator. Uses a table of\n   2/pi digits and 64-bit integer arithmetic to maintain precision far beyond\n   what Cody-Waite can handle (needed when |d| >> 1). Returns (r, q) where\n   r is the reduced angle and q mod 4 selects the trig quadrant. *)\n\nlet payne_hanek_reduction d =\n  let fdt = K.dtype d in\n  let two_over_pi_f =\n    [| 0x00000000; 0x28be60db; 0x9391054a; 0x7f09d5f4; 0x7d4d3770;\n       0x36d8a566; 0x4f10e410 |] in\n  let intermediate_dtype =\n    if Dtype.scalar fdt = Dtype.Float16 then\n      Dtype.vec (Dtype.count fdt) Dtype.float32\n    else fdt in\n  let f, e_raw = frexp_decomp d in\n  let uint64_dt = Dtype.vec (Dtype.count fdt) Dtype.uint64 in\n  let int32_dt = Dtype.vec (Dtype.count fdt) Dtype.int32 in\n  let uint32_dt = Dtype.vec (Dtype.count fdt) Dtype.uint32 in\n  let ia = K.cast ~dtype:uint64_dt\n    ~src:(K.binary ~op:`Mul\n            ~lhs:(K.cast ~src:f ~dtype:intermediate_dtype)\n            ~rhs:(K.const (Const.float (intermediate_dtype |> Dtype.scalarize |> Dtype.val_of) 4.294967296e9))) in\n  let i = shr_const (K.cast ~src:e_raw ~dtype:uint64_dt) 5 in\n  let e = K.binary ~op:`And\n    ~lhs:(K.cast ~src:e_raw ~dtype:int32_dt)\n    ~rhs:(K.const (Const.int Dtype.Val.int32 31)) in\n  let offset = K.binary ~op:`Sub\n    ~lhs:(K.const (Const.int Dtype.Val.int32 32)) ~rhs:e in\n  let rec take an off count =\n    if count + off < Array.length two_over_pi_f - 1 then\n      let inner = take an off (count + 1) in\n      K.ternary ~op:`Where\n        ~a:(K.binary ~op:`Cmpne ~lhs:i\n              ~rhs:(K.const (Const.int64 (Dtype.val_of uint64_dt) (Int64.of_int count))))\n        ~b:inner\n        ~c:(K.const (Const.int64 (Dtype.val_of uint32_dt) (Int64.of_int two_over_pi_f.(count + off))))\n    else an in\n  let shift_lazy op x y =\n    K.cast 
~dtype:uint32_dt\n      ~src:(K.binary ~op\n              ~lhs:(K.cast ~src:x ~dtype:uint64_dt)\n              ~rhs:(K.cast ~src:(pow2if y fdt) ~dtype:uint64_dt)) in\n  let zero_u32 = K.const (Const.int64 (Dtype.val_of uint32_dt) 0L) in\n  let a = Array.init 4 (fun off -> take zero_u32 off 0) in\n  let combine ai aj =\n    K.binary ~op:`Or\n      ~lhs:(shift_lazy `Mul a.(ai) e)\n      ~rhs:(shift_lazy `Idiv a.(aj) offset) in\n  let hi = combine 0 1 in\n  let mi = combine 1 2 in\n  let lo = combine 2 3 in\n  let hp_mul x y =\n    K.binary ~op:`Mul\n      ~lhs:(K.cast ~src:x ~dtype:uint64_dt)\n      ~rhs:(K.cast ~src:y ~dtype:uint64_dt) in\n  let p = K.binary ~op:`Add\n    ~lhs:(K.binary ~op:`Add\n            ~lhs:(shl_const (hp_mul ia hi) 32)\n            ~rhs:(hp_mul ia mi))\n    ~rhs:(shr_const (hp_mul ia lo) 32) in\n  let q = K.cast ~src:(shr_const p 62) ~dtype:int32_dt in\n  let p_masked = K.binary ~op:`And ~lhs:p\n    ~rhs:(K.const (Const.int64 Dtype.Val.uint64 0x3ffffffffffffffFL)) in\n  let r = K.cast ~dtype:fdt\n    ~src:(K.binary ~op:`Mul\n            ~lhs:(K.cast ~src:p_masked ~dtype:intermediate_dtype)\n            ~rhs:(K.const (Const.float (intermediate_dtype |> Dtype.scalarize |> Dtype.val_of) 3.4061215800865545e-19))) in\n  let f_lt_half = K.binary ~op:`Cmplt ~lhs:f ~rhs:(const_like f 0.5) in\n  let r_adj = K.binary ~op:`Sub ~lhs:r ~rhs:(const_like r (Float.pi /. 
2.0)) in\n  let q_adj = K.binary ~op:`Add ~lhs:q ~rhs:(K.const (Const.int Dtype.Val.int32 1)) in\n  (K.ternary ~op:`Where ~a:f_lt_half ~b:r ~c:r_adj,\n   K.ternary ~op:`Where ~a:f_lt_half ~b:q ~c:q_adj)\n\nlet cody_waite_reduction d =\n  let fdt = K.dtype d in\n  let scalar = Dtype.scalar fdt in\n  let m_1_pi = 0.318309886183790671537767526745028724 in\n  let muladd q c r = K.binary ~op:`Add\n    ~lhs:(K.binary ~op:`Mul ~lhs:q ~rhs:c) ~rhs:r in\n  let qdh =\n    if scalar = Dtype.Float64 then\n      K.binary ~op:`Mul\n        ~lhs:(K.cast ~dtype:fdt\n                ~src:(K.cast ~dtype:(Dtype.vec (Dtype.count fdt) Dtype.int64)\n                        ~src:(K.binary ~op:`Mul ~lhs:d\n                                ~rhs:(const_like d (m_1_pi /. Float.of_int (1 lsl 24))))))\n        ~rhs:(const_like d (Float.of_int (1 lsl 24)))\n    else const_like d 0.0 in\n  let quadrant =\n    if scalar = Dtype.Float64 then\n      rintk (K.binary ~op:`Sub\n               ~lhs:(K.binary ~op:`Mul ~lhs:d ~rhs:(const_like d m_1_pi))\n               ~rhs:qdh)\n    else rintk (K.binary ~op:`Mul ~lhs:d ~rhs:(const_like d m_1_pi)) in\n  let q_float = K.cast ~src:quadrant ~dtype:fdt in\n  let r = match scalar with\n    | Dtype.Float64 ->\n      let pi_a = 3.1415926218032836914 and pi_b = 3.1786509424591713469e-08 in\n      let pi_c = 1.2246467864107188502e-16 and pi_d = 1.2736634327021899816e-24 in\n      let r = muladd qdh (const_like d (-.pi_a)) d in\n      let r = muladd q_float (const_like d (-.pi_a)) r in\n      let r = muladd qdh (const_like d (-.pi_b)) r in\n      let r = muladd q_float (const_like d (-.pi_b)) r in\n      let r = muladd qdh (const_like d (-.pi_c)) r in\n      let r = muladd q_float (const_like d (-.pi_c)) r in\n      muladd (K.binary ~op:`Add ~lhs:qdh ~rhs:q_float) (const_like d (-.pi_d)) r\n    | Dtype.Float16 ->\n      let f32_dt = Dtype.vec (Dtype.count fdt) Dtype.float32 in\n      let q32 = K.cast ~src:q_float ~dtype:f32_dt in\n      let c v = K.const 
(Const.float Dtype.Val.float32 v) in\n      let r = muladd q32 (c (-3.1414794921875)) (K.cast ~src:d ~dtype:f32_dt) in\n      let r = muladd q32 (c (-0.00011315941810607910156)) r in\n      let r = muladd q32 (c (-1.9841872589410058936e-09)) r in\n      K.cast ~src:(muladd q32 (c (-1.2154201256553420762e-10)) r) ~dtype:fdt\n    | _ ->\n      let r = muladd q_float (const_like d (-3.1414794921875)) d in\n      let r = muladd q_float (const_like d (-0.00011315941810607910156)) r in\n      let r = muladd q_float (const_like d (-1.9841872589410058936e-09)) r in\n      muladd q_float (const_like d (-1.2154201256553420762e-10)) r\n  in\n  (r, K.cast ~src:quadrant ~dtype:(Dtype.vec (Dtype.count fdt) Dtype.int32))\n\n(* Sine polynomial *)\n\nlet trig_poly d coeff32 coeff64 =\n  let fdt = K.dtype d in\n  let d2 = K.binary ~op:`Mul ~lhs:d ~rhs:d in\n  let coeffs =\n    if Dtype.scalar fdt = Dtype.Float64 then coeff64\n    else coeff32\n  in\n  K.binary ~op:`Mul ~lhs:d ~rhs:(polyN d2 coeffs)\n\nlet sin_poly d =\n  trig_poly d\n    [ 2.6083159809786593541503e-06; -0.0001981069071916863322258;\n      0.00833307858556509017944336; -0.166666597127914428710938; 1.0 ]\n    [ -7.97255955009037868891952e-18; 2.81009972710863200091251e-15;\n      -7.64712219118158833288484e-13; 1.60590430605664501629054e-10;\n      -2.50521083763502045810755e-08; 2.75573192239198747630416e-06;\n      -0.000198412698412696162806809; 0.00833333333333332974823815;\n      -0.166666666666666657414808; 1.0 ]\n\nlet ifand q n =\n  let dt = K.dtype q in\n  K.binary ~op:`Cmpne\n    ~lhs:(K.binary ~op:`And ~lhs:q ~rhs:(K.const (Const.int64 (Dtype.val_of dt) (Int64.of_int n))))\n    ~rhs:(K.const (Const.int64 (Dtype.val_of dt) 0L))\n\nlet sign_flip r q n =\n  K.binary ~op:`Mul ~lhs:r\n    ~rhs:(K.ternary ~op:`Where ~a:(ifand q n)\n            ~b:(const_like r (-1.0)) ~c:(const_like r 1.0))\n\nlet sin_poly_small d q = sign_flip (sin_poly d) q 1\n\nlet sin_poly_large d q =\n  let d_adj = K.binary ~op:`Add ~lhs:d\n 
   ~rhs:(K.ternary ~op:`Where ~a:(ifand q 1)\n            ~b:(const_like d (Float.pi /. 2.0)) ~c:(const_like d 0.0)) in\n  sign_flip (sin_poly d_adj) q 2\n\n(* Toplevel transcendentals *)\n\nlet xsin ?(fast = false) ?(switch_over = 30.0) d =\n  let fdt = K.dtype d in\n  let nan_c = K.const (Const.float (fdt |> Dtype.scalarize |> Dtype.val_of) Float.nan) in\n  let zero = const_like d 0.0 in\n  let x = lazy_map_numbers d ~inf:zero ~ninf:zero ~nan:zero ~ratio:d in\n  let x_sign = K.ternary ~op:`Where\n    ~a:(K.binary ~op:`Cmpne ~lhs:x ~rhs:zero)\n    ~b:(K.ternary ~op:`Where\n          ~a:(K.binary ~op:`Cmplt ~lhs:x ~rhs:zero)\n          ~b:(const_like x (-1.0)) ~c:(const_like x 1.0))\n    ~c:zero in\n  let x_abs = K.binary ~op:`Mul ~lhs:x ~rhs:x_sign in\n  let result =\n    if fast then\n      let r, q = cody_waite_reduction x_abs in\n      sin_poly_small r q\n    else\n      let r_large, q_large = payne_hanek_reduction x_abs in\n      let r_small, q_small = cody_waite_reduction x_abs in\n      K.ternary ~op:`Where\n        ~a:(K.binary ~op:`Cmplt ~lhs:x_abs ~rhs:(const_like x_abs switch_over))\n        ~b:(sin_poly_small r_small q_small)\n        ~c:(sin_poly_large r_large q_large) in\n  let result = K.binary ~op:`Mul ~lhs:result ~rhs:x_sign in\n  lazy_map_numbers d ~inf:nan_c ~ninf:nan_c ~nan:nan_c ~ratio:result\n\nlet xexp2 d =\n  let fdt = K.dtype d in\n  let scalar = Dtype.scalar fdt in\n  let zero = const_like d 0.0 in\n  let x = lazy_map_numbers d ~inf:zero ~ninf:zero ~nan:zero ~ratio:d in\n  let q = rintk x in\n  let s = K.binary ~op:`Sub ~lhs:x ~rhs:(K.cast ~src:q ~dtype:fdt) in\n  let u =\n    if scalar = Dtype.Float64 then\n      polyN s\n        [ 0.4434359082926529454e-9; 0.7073164598085707425e-8;\n          0.1017819260921760451e-6; 0.1321543872511327615e-5;\n          0.1525273353517584730e-4; 0.1540353045101147808e-3;\n          0.1333355814670499073e-2; 0.9618129107597600536e-2;\n          0.5550410866482046596e-1; 0.2402265069591012214e+0;\n        
  0.6931471805599452862e+0; 0.1000000000000000000e+1 ]\n    else\n      polyN s\n        [ 0.1535920892e-3; 0.1339262701e-2; 0.9618384764e-2;\n          0.5550347269e-1; 0.2402264476e+0; 0.6931471825e+0; 1.0 ] in\n  let u = ldexp2k u q in\n  let upper, lower = match scalar with\n    | Dtype.Float64 -> (1024.0, -2000.0)\n    | Dtype.Float16 -> (23.0, -22.0)\n    | _ -> (128.0, -150.0) in\n  let inf = const_like d Float.infinity in\n  let u = K.ternary ~op:`Where\n    ~a:(K.binary ~op:`Cmplt ~lhs:d ~rhs:(const_like d upper)) ~b:u ~c:inf in\n  let u = K.ternary ~op:`Where\n    ~a:(K.binary ~op:`Cmplt ~lhs:d ~rhs:(const_like d lower)) ~b:zero ~c:u in\n  let nan_c = K.const (Const.float (fdt |> Dtype.scalarize |> Dtype.val_of) Float.nan) in\n  K.ternary ~op:`Where\n    ~a:(K.binary ~op:`Cmpne ~lhs:d ~rhs:d) ~b:nan_c ~c:u\n\nlet xlog2 d =\n  let fdt = K.dtype d in\n  let scalar = Dtype.scalar fdt in\n  let denormal_exp = if scalar = Dtype.Float16 then 10 else 64 in\n  let flt_min_val = if scalar = Dtype.Float16 then 6.1e-5 else 1e-4 in\n  let is_denormal = K.binary ~op:`Cmplt ~lhs:d ~rhs:(const_like d flt_min_val) in\n  let a = K.ternary ~op:`Where ~a:is_denormal\n    ~b:(K.binary ~op:`Mul ~lhs:d ~rhs:(const_like d (Float.of_int (1 lsl denormal_exp))))\n    ~c:d in\n  let e = K.cast ~src:(ilogb2k (K.binary ~op:`Mul ~lhs:a ~rhs:(const_like a (1.0 /. 
0.75))))\n    ~dtype:fdt in\n  let m = ldexp3k a (K.unary ~op:`Neg ~src:e) in\n  let e = K.ternary ~op:`Where ~a:is_denormal\n    ~b:(K.binary ~op:`Sub ~lhs:e ~rhs:(const_like e (Float.of_int denormal_exp)))\n    ~c:e in\n  let one = const_like m 1.0 in\n  let x = K.binary ~op:`Fdiv\n    ~lhs:(K.binary ~op:`Sub ~lhs:m ~rhs:one)\n    ~rhs:(K.binary ~op:`Add ~lhs:m ~rhs:one) in\n  let x2 = K.binary ~op:`Mul ~lhs:x ~rhs:x in\n  let x_x2 = K.binary ~op:`Mul ~lhs:x ~rhs:x2 in\n  let r =\n    if scalar = Dtype.Float64 then\n      let t = polyN x2\n        [ 0.2211941750456081490e+0; 0.2200768693152277689e+0;\n          0.2623708057488514656e+0; 0.3205977477944495502e+0;\n          0.4121985945485324709e+0; 0.5770780162997058982e+0;\n          0.96179669392608091449 ] in\n      K.binary ~op:`Add\n        ~lhs:(K.binary ~op:`Add ~lhs:(K.binary ~op:`Mul ~lhs:t ~rhs:x_x2) ~rhs:e)\n        ~rhs:(K.binary ~op:`Mul ~lhs:x ~rhs:(const_like x 2.885390081777926774))\n    else\n      let t = polyN x2\n        [ 0.4374550283e+0; 0.5764790177e+0; 0.9618012905120 ] in\n      let base = K.binary ~op:`Add\n        ~lhs:(K.binary ~op:`Add ~lhs:(K.binary ~op:`Mul ~lhs:t ~rhs:x_x2) ~rhs:e)\n        ~rhs:(K.binary ~op:`Mul ~lhs:x ~rhs:(const_like x 2.8853900432586669922)) in\n      if scalar = Dtype.Float32 then\n        K.binary ~op:`Add ~lhs:base\n          ~rhs:(K.binary ~op:`Mul ~lhs:x ~rhs:(const_like x 3.2734474483568488616e-08))\n      else base in\n  let inf = const_like d Float.infinity in\n  let neg_inf = const_like d Float.neg_infinity in\n  let nan_c = K.const (Const.float (fdt |> Dtype.scalarize |> Dtype.val_of) Float.nan) in\n  let r = K.ternary ~op:`Where\n    ~a:(K.binary ~op:`Cmpne ~lhs:d ~rhs:inf) ~b:r ~c:inf in\n  let r = K.ternary ~op:`Where\n    ~a:(K.binary ~op:`Cmpne ~lhs:d ~rhs:(const_like d 0.0)) ~b:r ~c:neg_inf in\n  let r = K.ternary ~op:`Where\n    ~a:(K.binary ~op:`Cmplt ~lhs:d ~rhs:(const_like d (-0.0))) ~b:nan_c ~c:r in\n  let r = K.ternary ~op:`Where\n    
~a:(K.binary ~op:`Cmpne ~lhs:d ~rhs:d) ~b:nan_c ~c:r in\n  (* reciprocal trick: devices where x == -0.0 fails *)\n  K.ternary ~op:`Where\n    ~a:(K.binary ~op:`Cmpne\n          ~lhs:(K.unary ~op:`Recip ~src:d)\n          ~rhs:(const_like d Float.neg_infinity))\n    ~b:r ~c:neg_inf\n\n(* Threefry *)\n\nlet threefry2x32 x key =\n  let u64 = K.dtype x in\n  let u32 = Dtype.uint32 in\n  let mask32 = K.const (Const.int64 (Dtype.val_of u64) 0xFFFFFFFFL) in\n  let lo v = K.cast ~src:(K.binary ~op:`And ~lhs:v ~rhs:mask32) ~dtype:u32 in\n  let hi v = K.cast ~src:(K.binary ~op:`And ~lhs:(shr_const v 32) ~rhs:mask32) ~dtype:u32 in\n  let x0 = lo x and x1 = hi x in\n  let key0 = lo key and key1 = hi key in\n  let rotations = [| [| 13; 15; 26; 6 |]; [| 17; 29; 16; 24 |] |] in\n  let ks = [| key1;\n    K.binary ~op:`Xor\n      ~lhs:(K.binary ~op:`Xor ~lhs:key0 ~rhs:key1)\n      ~rhs:(K.const (Const.int64 (Dtype.val_of u32) 0x1BD11BDAL));\n    key0 |] in\n  let xr0 = ref (K.binary ~op:`Add ~lhs:x0 ~rhs:ks.(2)) in\n  let xr1 = ref (K.binary ~op:`Add ~lhs:x1 ~rhs:ks.(0)) in\n  for i = 0 to 4 do\n    let rots = rotations.(i mod 2) in\n    for j = 0 to 3 do\n      let r = rots.(j) in\n      let x0_new = K.binary ~op:`Add ~lhs:!xr0 ~rhs:!xr1 in\n      let rotated = K.binary ~op:`Add\n        ~lhs:(shl_const !xr1 r) ~rhs:(shr_const !xr1 (32 - r)) in\n      xr1 := K.binary ~op:`Xor ~lhs:x0_new ~rhs:rotated;\n      xr0 := x0_new\n    done;\n    xr0 := K.binary ~op:`Add ~lhs:!xr0 ~rhs:ks.(i mod 3);\n    xr1 := K.binary ~op:`Add ~lhs:!xr1\n      ~rhs:(K.binary ~op:`Add ~lhs:ks.((i + 1) mod 3)\n              ~rhs:(K.const (Const.int64 (Dtype.val_of u32) (Int64.of_int (i + 1)))))\n  done;\n  K.binary ~op:`Or\n    ~lhs:(shl_const (K.cast ~src:!xr1 ~dtype:u64) 32)\n    ~rhs:(K.cast ~src:!xr0 ~dtype:u64)\n\n(* Pattern matching *)\n\ntype supported_ops = {\n  has_exp2 : bool;\n  has_log2 : bool;\n  has_sin : bool;\n  has_sqrt : bool;\n  has_recip : bool;\n  has_neg : bool;\n  has_sub : bool;\n  
has_max : bool;\n  has_shl : bool;\n  has_shr : bool;\n  has_and : bool;\n  has_or : bool;\n  has_cmplt : bool;\n  has_cmpeq : bool;\n  has_fdiv : bool;\n  has_threefry : bool;\n  has_mulacc : bool;\n  disable_fast_idiv : bool;\n  force_transcendental : bool;\n}\n\nlet transcendental_dtypes dt =\n  let s = Dtype.Val.scalar dt in\n  s = Dtype.Float16 || s = Dtype.Float32 || s = Dtype.Float64\n\nlet get_transcendental_patterns (ops : supported_ops) node =\n  let via_f32 f d dtype =\n    if transcendental_dtypes dtype then Some (f d)\n    else if Dtype.Val.is_float dtype then\n      Some (K.cast ~src:(f (K.cast ~src:d ~dtype:Dtype.float32)) ~dtype:(Dtype.Val dtype))\n    else None in\n  match K.view node with\n  | Unary { op = `Exp2; src = d; dtype }\n    when not ops.has_exp2 || ops.force_transcendental -> via_f32 xexp2 d dtype\n  | Unary { op = `Log2; src = d; dtype }\n    when not ops.has_log2 || ops.force_transcendental -> via_f32 xlog2 d dtype\n  | Unary { op = `Sin; src = d; dtype }\n    when not ops.has_sin || ops.force_transcendental -> via_f32 xsin d dtype\n  | Unary { op = `Sqrt; src = d; _ }\n    when not ops.has_sqrt || ops.force_transcendental ->\n      Some (xpow ~base:d ~exponent:(const_like d 0.5))\n  | _ -> None\n\n(* Integer division *)\n\nlet magicgu ~vmax ~d =\n  assert (d > 0);\n  let nc = (vmax + 1) / d * d - 1 in\n  let nbits =\n    let rec bits v = if v <= 0 then 0 else 1 + bits (v lsr 1) in\n    bits vmax in\n  let rec find_s s =\n    if s > 2 * nbits then failwith \"magicgu: no solution found\"\n    else\n      let two_s = 1 lsl s in\n      if two_s > nc * (d - 1 - (two_s - 1) mod d) then\n        ((two_s + d - 1 - (two_s - 1) mod d) / d, s)\n      else find_s (s + 1) in\n  find_s 0\n\nlet fast_idiv x d =\n  assert (d > 0L);\n  let d = Int64.to_int d in\n  let bound_of = function `SInt n -> Int64.to_int n | _ -> 0 in\n  let bound_of_max = function `SInt n -> Int64.to_int n | _ -> Int.max_int in\n  let xmin = max (Divandmod.vmin x) 
(Int64.of_int (bound_of (Dtype.min Dtype.index))) in\n  let xmax = min (Divandmod.vmax x) (Int64.of_int (bound_of_max (Dtype.max Dtype.index))) in\n  let vmin_i = Int64.to_int xmin and vmax_i = Int64.to_int xmax in\n  let m, s = magicgu ~vmax:(max (abs vmax_i) (abs vmin_i)) ~d in\n  let m64 = Int64.of_int m in\n  let fits =\n    Int64.mul m64 (Int64.of_int vmin_i) >= Int64.of_int Int.min_int\n    && Int64.mul m64 (Int64.of_int vmax_i) <= Int64.of_int Int.max_int in\n  if fits then\n    let shifted = K.binary ~op:`Shr\n      ~lhs:(K.binary ~op:`Mul ~lhs:x ~rhs:(iconst m64))\n      ~rhs:(iconst (Int64.of_int s)) in\n    if xmin >= 0L then Some shifted\n    else\n      let correction = K.ternary ~op:`Where\n        ~a:(K.binary ~op:`Cmplt ~lhs:x ~rhs:(iconst 0L))\n        ~b:(iconst 1L) ~c:(iconst 0L) in\n      Some (K.binary ~op:`Add ~lhs:shifted ~rhs:correction)\n  else None\n\n(* Long decomposition: int64 -> int32 pairs *)\n\nlet is_long_dtype (dt : Dtype.t) =\n  Dtype.scalar dt = Dtype.Int64 || Dtype.scalar dt = Dtype.Uint64\n\nlet long_to_int_dtype (dt : Dtype.t) = match Dtype.scalar dt with\n  | Int64 -> Dtype.int32 | Uint64 -> Dtype.uint32 | _ -> dt\n\nlet reindex_long (idx : K.t) off mul =\n  match K.view idx with\n  | Index { ptr; idxs = [ i ]; gate; _ } ->\n    let open K.O in\n    K.index ~ptr ~idxs:[ i * int_ mul + int_ off ] ?gate ()\n  | _ -> idx\n\ntype l2i_op =\n  [ `Neg | `Shl | `Shr | `Add | `Sub | `Mul | `Cmplt | `Cmpeq | `Cmpne\n  | `Xor | `Or | `And | `Where | `Max | `Cast | `Bitcast ]\n\nlet rec l2i (op : l2i_op) (dt : Dtype.t) (uops : K.t list) : K.t * K.t =\n  let zero = K.const (Const.int (Dtype.val_of dt) 0) in\n  let a0, a1 = match uops with\n    | [a0; a1] -> (a0, a1)\n    | [a0; a1; _; _] -> (a0, a1)\n    | _ -> failwith \"l2i: unexpected operand count\"\n  in\n  let b0, b1 = match uops with\n    | [_; _; b0; b1] -> (b0, b1)\n    | _ -> (zero, zero)\n  in\n  match op with\n  | `Neg -> l2i `Sub dt [zero; zero; a0; a1]\n  | `Shl ->\n      
let b0_mod = K.binary ~op:`And ~lhs:b0 ~rhs:(K.const (Const.int (Dtype.val_of dt) 31)) in\n      let lo = expr_shl a0 b0_mod in\n      let hi = K.binary ~op:`Or\n        ~lhs:(expr_shl a1 b0_mod)\n        ~rhs:(expr_shr (expr_shr a0 (K.const (Const.int (Dtype.val_of dt) 1)))\n                (K.binary ~op:`Sub ~lhs:(K.const (Const.int (Dtype.val_of dt) 31)) ~rhs:b0_mod)) in\n      let ge32 = K.binary ~op:`Cmplt ~lhs:(K.const (Const.int (Dtype.val_of dt) 31)) ~rhs:b0 in\n      (K.ternary ~op:`Where ~a:ge32 ~b:zero ~c:lo,\n       K.ternary ~op:`Where ~a:ge32 ~b:lo ~c:hi)\n  | `Shr ->\n      let b0_mod = K.binary ~op:`And ~lhs:b0 ~rhs:(K.const (Const.int (Dtype.val_of dt) 31)) in\n      let lo = K.binary ~op:`Or\n        ~lhs:(expr_shr a0 b0_mod)\n        ~rhs:(expr_shl (expr_shl a1 (K.const (Const.int (Dtype.val_of dt) 1)))\n                (K.binary ~op:`Sub ~lhs:(K.const (Const.int (Dtype.val_of dt) 31)) ~rhs:b0_mod)) in\n      let hi = expr_shr a1 b0_mod in\n      let ge32 = K.binary ~op:`Cmplt ~lhs:(K.const (Const.int (Dtype.val_of dt) 31)) ~rhs:b0 in\n      (K.ternary ~op:`Where ~a:ge32 ~b:hi ~c:lo,\n       K.ternary ~op:`Where ~a:ge32 ~b:zero ~c:hi)\n  | `Add ->\n      let low = K.binary ~op:`Add ~lhs:a0 ~rhs:b0 in\n      let carry =\n        K.cast\n          ~src:(K.binary ~op:`Cmplt\n            ~lhs:(K.bitcast ~src:low ~dtype:Dtype.Val.uint32)\n            ~rhs:(K.bitcast ~src:a0 ~dtype:Dtype.Val.uint32))\n          ~dtype:dt\n      in\n      (low, K.binary ~op:`Add ~lhs:(K.binary ~op:`Add ~lhs:a1 ~rhs:b1) ~rhs:carry)\n  | `Sub ->\n      let borrow =\n        K.cast\n          ~src:(K.binary ~op:`Cmplt\n            ~lhs:(K.bitcast ~src:a0 ~dtype:Dtype.Val.uint32)\n            ~rhs:(K.bitcast ~src:b0 ~dtype:Dtype.Val.uint32))\n          ~dtype:dt\n      in\n      (K.binary ~op:`Sub ~lhs:a0 ~rhs:b0,\n       K.binary ~op:`Sub ~lhs:(K.binary ~op:`Sub ~lhs:a1 ~rhs:b1) ~rhs:borrow)\n  | `Cmplt ->\n      let hi_lt = K.binary ~op:`Cmplt ~lhs:a1 ~rhs:b1 in\n      
let hi_eq = K.binary ~op:`Cmpeq ~lhs:a1 ~rhs:b1 in\n      let lo_lt = K.binary ~op:`Cmplt\n        ~lhs:(K.bitcast ~src:a0 ~dtype:Dtype.Val.uint32)\n        ~rhs:(K.bitcast ~src:b0 ~dtype:Dtype.Val.uint32) in\n      (K.binary ~op:`Or ~lhs:hi_lt ~rhs:(K.binary ~op:`And ~lhs:hi_eq ~rhs:lo_lt), zero)\n  | `Cmpeq ->\n      (K.binary ~op:`And\n        ~lhs:(K.binary ~op:`Cmpeq ~lhs:a0 ~rhs:b0)\n        ~rhs:(K.binary ~op:`Cmpeq ~lhs:a1 ~rhs:b1), zero)\n  | `Cmpne ->\n      (K.binary ~op:`Or\n        ~lhs:(K.binary ~op:`Cmpne ~lhs:a0 ~rhs:b0)\n        ~rhs:(K.binary ~op:`Cmpne ~lhs:a1 ~rhs:b1), zero)\n  | `Xor -> (K.binary ~op:`Xor ~lhs:a0 ~rhs:b0, K.binary ~op:`Xor ~lhs:a1 ~rhs:b1)\n  | `Or -> (K.binary ~op:`Or ~lhs:a0 ~rhs:b0, K.binary ~op:`Or ~lhs:a1 ~rhs:b1)\n  | `And -> (K.binary ~op:`And ~lhs:a0 ~rhs:b0, K.binary ~op:`And ~lhs:a1 ~rhs:b1)\n  | `Where ->\n    (match uops with\n    | [cond; t_lo; t_hi; f_lo; f_hi] ->\n      (K.ternary ~op:`Where ~a:cond ~b:t_lo ~c:f_lo,\n       K.ternary ~op:`Where ~a:cond ~b:t_hi ~c:f_hi)\n    | _ -> failwith \"l2i Where: need 5 operands\")\n  | `Max -> l2i `Where dt (fst (l2i `Cmplt dt uops) :: b0 :: b1 :: a0 :: [a1])\n  | _ -> failwith \"l2i: unsupported op\"\n\nlet widen_long_ptr (dtype : Dtype.Ptr.t) size =\n  let new_base = Dtype.Val (Dtype.Ptr.base dtype) |> Dtype.scalarize |> long_to_int_dtype in\n  Dtype.Ptr.create (Dtype.val_of new_base) ~addrspace:(Dtype.Ptr.addrspace dtype) ~size:(size * 2)\n\nlet pm_long_decomp (node : K.t) : K.t option =\n  match K.view node with\n  | Param { idx; dtype } when is_long_dtype (Dtype.Val (Dtype.Ptr.base dtype)) ->\n    Some (K.param ~idx ~dtype:(widen_long_ptr dtype (Dtype.Ptr.size dtype)))\n  | Define_local { size; dtype } when is_long_dtype (Dtype.Val (Dtype.Ptr.base dtype)) ->\n    Some (K.define_local ~size:(size * 2) ~dtype:(widen_long_ptr dtype size))\n  | Define_reg { size; dtype; slot } when is_long_dtype (Dtype.Val (Dtype.Ptr.base dtype)) ->\n    Some (K.define_reg ~size:(size * 
2) ~dtype:(widen_long_ptr dtype size) ~slot)\n  | Index { dtype = Dtype.Ptr pty; _ }\n    when is_long_dtype (Dtype.Val (Dtype.Ptr.base pty) |> Dtype.scalarize) ->\n    let off = match K.tag node with Some \"1\" -> 1 | _ -> 0 in\n    Some (K.replace (reindex_long node off 2)\n            ~dtype:(Dtype.Val (Dtype.Ptr.base pty) |> Dtype.scalarize |> long_to_int_dtype) ())\n  | Store { dst; value; ranges } when K.tag node = None ->\n    (match K.dtype_opt value with\n    | Some dt when is_long_dtype dt ->\n      Some (K.group [\n        K.with_tag \"0\" (K.store ~dst:(reindex_long dst 0 2)\n          ~value:(K.with_tag \"0\" value) ~ranges);\n        K.with_tag \"1\" (K.store ~dst:(reindex_long dst 1 2)\n          ~value:(K.with_tag \"1\" value) ~ranges)])\n    | _ -> None)\n  | Load { src; dtype; _ } when is_long_dtype (Dtype.Val dtype) ->\n    (match K.tag node with\n    | Some tag_str ->\n      Some (K.load ~src:(reindex_long src (if tag_str = \"1\" then 1 else 0) 2) ())\n    | None -> None)\n  | Const { value; dtype } when is_long_dtype (Dtype.Val dtype) ->\n    (match K.tag node, Const.view value with\n    | Some \"1\", Int n ->\n      Some (K.const (Const.int (Dtype.val_of (long_to_int_dtype (Dtype.Val dtype)))\n              (Int64.to_int (Int64.shift_right_logical n 32))))\n    | Some _, Int n ->\n      Some (K.const (Const.int (Dtype.val_of (long_to_int_dtype (Dtype.Val dtype)))\n              (Int64.to_int (Int64.logand n 0xFFFFFFFFL))))\n    | _ -> None)\n  | Binary { op = (`Cmplt | `Cmpeq | `Cmpne) as op; lhs; rhs; _ }\n    when (match K.dtype_opt lhs with Some dt -> is_long_dtype dt | None -> false) ->\n    let dt = long_to_int_dtype (K.dtype lhs) in\n    Some (fst (l2i op dt [\n      K.with_tag \"0\" lhs; K.with_tag \"1\" lhs;\n      K.with_tag \"0\" rhs; K.with_tag \"1\" rhs]))\n  | (Binary { dtype; _ } | Unary { dtype; _ } | Ternary { dtype; _ })\n    when is_long_dtype (Dtype.Val dtype) && K.tag node <> None ->\n    let dt = long_to_int_dtype 
(Dtype.Val dtype) in\n    let expanded = List.concat_map (fun c ->\n      match K.dtype_opt c with\n      | Some cdt when is_long_dtype cdt ->\n        [K.cast ~src:(K.with_tag \"0\" c) ~dtype:dt;\n         K.cast ~src:(K.with_tag \"1\" c) ~dtype:dt]\n      | _ -> [c]) (K.children node) in\n    let to_l2i_op : K.view -> l2i_op = function\n      | Binary { op = `Add; _ } -> `Add | Binary { op = `Sub; _ } -> `Sub\n      | Binary { op = `Mul; _ } -> `Mul | Binary { op = `Shl; _ } -> `Shl\n      | Binary { op = `Shr; _ } -> `Shr | Binary { op = `And; _ } -> `And\n      | Binary { op = `Or; _ } -> `Or | Binary { op = `Xor; _ } -> `Xor\n      | Binary { op = `Cmplt; _ } -> `Cmplt | Binary { op = `Cmpeq; _ } -> `Cmpeq\n      | Binary { op = `Cmpne; _ } -> `Cmpne | Binary { op = `Max; _ } -> `Max\n      | Unary { op = `Neg; _ } -> `Neg\n      | _ -> failwith \"l2i: unsupported op\" in\n    let lo, hi = l2i (to_l2i_op (K.view node)) dt expanded in\n    (match K.tag node with\n    | Some \"0\" -> Some lo | Some \"1\" -> Some hi | _ -> None)\n  | _ -> None\n\n(* Float decomposition *)\n\nlet f2f_dt : Dtype.scalar -> Dtype.scalar = function\n  | Float16 | Bfloat16 -> Uint16\n  | Float32 -> Uint32 | Float64 -> Uint64\n  | s -> s\n\nlet f2f (v : K.t) ~(fr : Dtype.scalar) ~(to_ : Dtype.scalar) : K.t =\n  let dt_of s = Dtype.of_scalar s in\n  let fs = Dtype.bitsize (dt_of fr) and fb = exponent_bias (dt_of fr) in\n  let fe, fm = Dtype.finfo (dt_of fr) in\n  let ts = Dtype.bitsize (dt_of to_) and tb = exponent_bias (dt_of to_) in\n  let te, tm = Dtype.finfo (dt_of to_) in\n  let to_uint = Dtype.of_scalar (f2f_dt to_) in\n  let fr_uint = Dtype.of_scalar (f2f_dt fr) in\n  (* Use Int64 for all mask/shift computations to avoid overflow on wide floats *)\n  let i64_const dt n = K.const (Const.int64 (Dtype.val_of dt) n) in\n  if fe <= te && fm < tm then begin\n    let sign = shl_const\n      (K.cast ~src:(K.binary ~op:`And ~lhs:v\n        ~rhs:(i64_const fr_uint (Int64.shift_left 1L (fs - 
1))))\n        ~dtype:to_uint)\n      (ts - fs) in\n    let nosign = K.cast\n      ~src:(K.binary ~op:`And ~lhs:v\n        ~rhs:(i64_const fr_uint (Int64.sub (Int64.shift_left 1L (fs - 1)) 1L)))\n      ~dtype:to_uint in\n    let exp = shr_const nosign fm in\n    let norm = K.binary ~op:`Add\n      ~lhs:(shl_const nosign (tm - fm))\n      ~rhs:(i64_const to_uint (Int64.shift_left (Int64.of_int (tb - fb)) tm)) in\n    let nan = K.binary ~op:`Or\n      ~lhs:(shl_const nosign (tm - fm))\n      ~rhs:(i64_const to_uint (Int64.shift_left (Int64.sub (Int64.shift_left 1L te) 1L) tm)) in\n    let is_nan = K.binary ~op:`Cmpeq ~lhs:exp\n      ~rhs:(i64_const to_uint (Int64.sub (Int64.shift_left 1L fe) 1L)) in\n    let is_zero = K.binary ~op:`Cmpeq ~lhs:exp\n      ~rhs:(i64_const to_uint 0L) in\n    K.bitcast\n      ~src:(K.binary ~op:`Or ~lhs:sign\n        ~rhs:(K.ternary ~op:`Where ~a:is_zero\n          ~b:(K.const (Const.int (Dtype.val_of to_uint) 0))\n          ~c:(K.ternary ~op:`Where ~a:is_nan ~b:nan ~c:norm)))\n      ~dtype:(Dtype.Val.of_scalar to_)\n  end else\n    K.cast ~src:v ~dtype:(Dtype.of_scalar to_)\n\nlet f2f_clamp (v : K.t) ~(dt_scalar : Dtype.scalar) : K.t =\n  let dt = Dtype.of_scalar dt_scalar in\n  let e, m = Dtype.finfo dt in\n  let max_exp, max_man = (1 lsl e) - 2, (1 lsl m) - 1 in\n  let max_val = 2.0 ** Float.of_int (max_exp - exponent_bias dt) *.\n    (1.0 +. Float.of_int max_man /. 
Float.of_int (1 lsl m)) in\n  let mx = K.const (Const.float (Dtype.val_of (K.dtype v)) max_val) in\n  let neg_mx = K.unary ~op:`Neg ~src:mx in\n  let inf = K.const (Const.float (Dtype.val_of (K.dtype v)) infinity) in\n  let is_nan = K.binary ~op:`Cmpne ~lhs:v ~rhs:v in\n  let lt_neg = K.binary ~op:`Cmplt ~lhs:v ~rhs:neg_mx in\n  let gt_pos = K.binary ~op:`Cmplt ~lhs:mx ~rhs:v in\n  K.ternary ~op:`Where ~a:is_nan ~b:v\n    ~c:(K.ternary ~op:`Where ~a:lt_neg ~b:(K.unary ~op:`Neg ~src:inf)\n      ~c:(K.ternary ~op:`Where ~a:gt_pos ~b:inf ~c:v))\n\ntype float_decomp_ctx = {\n  from_dtype : Dtype.scalar;\n  to_dtype : Dtype.scalar;\n}\n\nlet pm_float_decomp (ctx : float_decomp_ctx) (node : K.t) : K.t option =\n  let fr = ctx.from_dtype and to_ = ctx.to_dtype in\n  let rebase_ptr (dtype : Dtype.Ptr.t) =\n    let new_base = Dtype.Val.vec (Dtype.Ptr.count dtype) (Dtype.Val.of_scalar (f2f_dt fr)) in\n    Dtype.Ptr.create new_base ~addrspace:(Dtype.Ptr.addrspace dtype) ~size:(Dtype.Ptr.size dtype) in\n  let tag n = K.with_tag (Dtype.scalar_to_string fr) n in\n  match K.view node with\n  | Param { idx; dtype } when Dtype.Ptr.scalar dtype = fr ->\n    Some (tag (K.param ~idx ~dtype:(rebase_ptr dtype)))\n  | Define_local { size; dtype } when Dtype.Ptr.scalar dtype = fr ->\n    Some (tag (K.define_local ~size ~dtype:(rebase_ptr dtype)))\n  | Define_reg { size; dtype; slot } when Dtype.Ptr.scalar dtype = fr ->\n    Some (tag (K.define_reg ~size ~dtype:(rebase_ptr dtype) ~slot))\n  | Load { src; dtype; _ } when Dtype.Val.scalar dtype = fr ->\n    let storage_dt = Dtype.vec (Dtype.Val.count dtype) (Dtype.of_scalar (f2f_dt fr)) in\n    Some (f2f (K.replace (K.load ~src ()) ~dtype:storage_dt ()) ~fr ~to_)\n  | Cast { src; dtype } when Dtype.scalar dtype = fr ->\n    Some (f2f_clamp (K.cast ~src ~dtype:(Dtype.vec (Dtype.count dtype) (Dtype.of_scalar to_))) ~dt_scalar:fr)\n  | (Binary { dtype; _ } | Unary { dtype; _ } | Ternary { dtype; _ })\n    when Dtype.Val.scalar dtype = fr ->\n   
 let new_children = List.map (fun c ->\n      match K.dtype_opt c with\n      | Some cdt when Dtype.scalar cdt = fr ->\n        K.cast ~src:c ~dtype:(Dtype.vec (Dtype.count cdt) (Dtype.of_scalar to_))\n      | _ -> c) (K.children node) in\n    Some (K.replace node ~children:new_children\n            ~dtype:(Dtype.vec (Dtype.Val.count dtype) (Dtype.of_scalar to_)) ())\n  | _ -> None\n\n(* Late rewrite patterns *)\n\nlet get_late_rewrite_patterns (ops : supported_ops) node =\n  match K.view node with\n  | Binary { op = `Max; lhs; rhs; _ }\n    when not ops.has_max && ops.has_cmplt ->\n    Some (K.ternary ~op:`Where ~a:(K.binary ~op:`Cmplt ~lhs ~rhs) ~b:rhs ~c:lhs)\n  | Binary { op = `Mod; lhs = x; rhs; dtype }\n    when ops.has_and && Dtype.Val.is_int dtype\n         && (Dtype.Val.is_unsigned dtype || Divandmod.vmin x >= 0L) ->\n    (match const_int_val rhs with\n    | Some c when c > 0L && Option.is_some (log2_of_power c) ->\n      Some (K.binary ~op:`And ~lhs:x ~rhs:(K.const (Const.int64 dtype (Int64.sub c 1L))))\n    | _ -> None)\n  | Binary { op = `Mul; lhs; rhs; dtype }\n    when ops.has_shl && Dtype.Val.is_int dtype ->\n    let try_shift base c_node = match const_int_val c_node with\n      | Some c when c > 0L ->\n        Option.map (fun n ->\n          K.binary ~op:`Shl ~lhs:base\n            ~rhs:(K.const (Const.int64 dtype (Int64.of_int n))))\n          (log2_of_power c)\n      | _ -> None in\n    (match try_shift lhs rhs with Some _ as r -> r | None -> try_shift rhs lhs)\n  | Binary { op = `Idiv; lhs = x; rhs; dtype }\n    when ops.has_shr && Dtype.Val.is_int dtype && Dtype.Val.is_unsigned dtype ->\n    (match const_int_val rhs with\n    | Some c when c > 0L ->\n      Option.map (fun n ->\n        K.binary ~op:`Shr ~lhs:x\n          ~rhs:(K.const (Const.int64 dtype (Int64.of_int n))))\n        (log2_of_power c)\n    | _ -> None)\n  | Binary { op = `Idiv; lhs = x; rhs; dtype }\n    when ops.has_shr && Dtype.Val.is_int dtype && not (Dtype.Val.is_unsigned 
dtype) ->\n    (match const_int_val rhs with\n    | Some c when c > 0L ->\n      (match log2_of_power c with\n      | Some n ->\n        let correction = K.ternary ~op:`Where\n          ~a:(K.binary ~op:`Cmplt ~lhs:x ~rhs:(K.const (Const.int64 dtype 0L)))\n          ~b:(K.const (Const.int64 dtype (Int64.sub c 1L)))\n          ~c:(K.const (Const.int64 dtype 0L)) in\n        Some (K.binary ~op:`Shr\n                ~lhs:(K.binary ~op:`Add ~lhs:x ~rhs:correction)\n                ~rhs:(K.const (Const.int64 dtype (Int64.of_int n))))\n      | None -> if not ops.disable_fast_idiv then fast_idiv x c else None)\n    | _ -> None)\n  | Binary { op = `Idiv; lhs = x; rhs; dtype }\n    when ops.has_shr && Dtype.Val.is_int dtype && not ops.disable_fast_idiv ->\n    (match const_int_val rhs with\n    | Some d when d > 0L && Option.is_none (log2_of_power d) -> fast_idiv x d\n    | _ -> None)\n  | Binary { op = `Mul; lhs = x; rhs; _ } when ops.has_neg ->\n    (match const_int_val rhs with\n    | Some (-1L) -> Some (K.unary ~op:`Neg ~src:x)\n    | _ -> match const_int_val x with\n      | Some (-1L) -> Some (K.unary ~op:`Neg ~src:rhs)\n      | _ -> None)\n  | Binary { op = `Add; lhs; rhs = c; _ } when ops.has_mulacc ->\n    (match K.view lhs with\n    | Binary { op = `Mul; lhs = a; rhs = b; _ } ->\n      Some (K.ternary ~op:`Mulacc ~a ~b ~c)\n    | _ -> match K.view c with\n      | Binary { op = `Mul; lhs = a; rhs = b; _ } ->\n        Some (K.ternary ~op:`Mulacc ~a ~b ~c:lhs)\n      | _ -> None)\n  | Unary { op = `Recip; src = x; _ } when ops.has_fdiv ->\n    Some (K.binary ~op:`Fdiv\n            ~lhs:(K.const (Const.float (Dtype.val_of (K.dtype x)) 1.0)) ~rhs:x)\n  | _ -> None\n"
  },
  {
    "path": "packages/tolk/lib/ir/decomposition.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\n(** Hardware-level decompositions for operations not directly supported.\n\n    Provides:\n\n    - Transcendental decompositions: [xpow], [xsin], [xexp2], [xlog2] with\n      1.0 ULP accuracy using Sleef-based polynomial approximations.\n    - Counter-based PRNG: [threefry2x32].\n    - Late rewrite patterns: MUL → SHL, IDIV → SHR, MOD → AND,\n      MAX → WHERE, fast integer division via magic numbers.\n    - Transcendental pattern factory: [get_transcendental_patterns].\n\n    Used by {!Lowering} at pipeline steps 18-21 and by {!Symbolic} (phase 3)\n    for POW folding. *)\n\n(** {1 Transcendentals} *)\n\nval xpow : base:Kernel.t -> exponent:Kernel.t -> Kernel.t\n(** [xpow ~base ~exponent] decomposes [base ** exponent] into\n    [exp2(exponent * log2(|base|))] with correct handling of negative bases,\n    non-integer exponents (NaN), and [0 ** 0 = 1]. *)\n\nval xsin : ?fast:bool -> ?switch_over:float -> Kernel.t -> Kernel.t\n(** [xsin d] decomposes [sin(d)] into a 1.0 ULP polynomial approximation.\n    Uses Cody-Waite reduction for small angles and Payne-Hanek reduction\n    for large angles. [fast] assumes [|d| <= switch_over]. *)\n\nval xexp2 : Kernel.t -> Kernel.t\n(** [xexp2 d] decomposes [exp2(d)] into a 1.0 ULP polynomial approximation\n    using the Sleef algorithm. *)\n\nval xlog2 : Kernel.t -> Kernel.t\n(** [xlog2 d] decomposes [log2(d)] into a 1.0 ULP polynomial approximation\n    with denormal handling. 
*)\n\nval threefry2x32 : Kernel.t -> Kernel.t -> Kernel.t\n(** [threefry2x32 x key] implements the Threefry 2x32 counter-based PRNG.\n    Splits uint64 [x] and [key] into uint32 halves, performs 5 rounds of\n    rotation-XOR-addition, and reassembles the result as uint64. *)\n\n(** {1 Integer division} *)\n\nval magicgu : vmax:int -> d:int -> int * int\n(** [magicgu ~vmax ~d] computes [(m, s)] such that [x // d == (x * m) >> s]\n    for all [0 <= x <= vmax] and [d > 0]. Adapted from Hacker's Delight,\n    Chapter 10. *)\n\n(** {1 Long decomposition (int64 → int32 pairs)} *)\n\nval pm_long_decomp : Kernel.t -> Kernel.t option\n(** [pm_long_decomp node] decomposes int64/uint64 operations into pairs of\n    int32/uint32 operations using the node tag for hi/lo tracking.\n    Run conditionally when the device does not support [int64]. *)\n\n(** {1 Float decomposition (unsupported float → supported float)} *)\n\ntype float_decomp_ctx = {\n  from_dtype : Dtype.scalar;\n  to_dtype : Dtype.scalar;\n}\n(** Context for float decomposition: convert [from_dtype] to [to_dtype]. *)\n\nval pm_float_decomp : float_decomp_ctx -> Kernel.t -> Kernel.t option\n(** [pm_float_decomp ctx node] promotes operations on unsupported float\n    dtypes (fp8, bf16) to a supported dtype (typically f32).\n    Run conditionally per emulated dtype pair. *)\n\n(** {1 Late rewrite patterns} *)\n\n(* CR: Is this the right place for late rewrite patterns? Should this live in codegen/late instead? *)\n\ntype supported_ops = {\n  has_exp2 : bool;\n  has_log2 : bool;\n  has_sin : bool;\n  has_sqrt : bool;\n  has_recip : bool;\n  has_neg : bool;\n  has_sub : bool;\n  has_max : bool;\n  has_shl : bool;\n  has_shr : bool;\n  has_and : bool;\n  has_or : bool;\n  has_cmplt : bool;\n  has_cmpeq : bool;\n  has_fdiv : bool;\n  has_threefry : bool;\n  has_mulacc : bool;\n  disable_fast_idiv : bool;\n  force_transcendental : bool;\n}\n(** Backend capability flags for decomposition passes. 
Each [has_*] flag is\n    [true] iff the backend natively supports the corresponding operation.\n    Unsupported operations are lowered into sequences of supported ones.\n\n    A single flat set of supported operations consumed by both\n    [get_late_rewrite_patterns] and [get_transcendental_patterns]. *)\n\nval get_late_rewrite_patterns : supported_ops -> Kernel.t -> Kernel.t option\n(** Device-specific late rewrite rules. Decomposes operations that the target\n    renderer does not support directly. *)\n\nval get_transcendental_patterns :\n  supported_ops -> Kernel.t -> Kernel.t option\n(** Conditionally rewrite EXP2/LOG2/SIN/SQRT into software implementations\n    when the target device does not support them natively. Non-transcendental\n    float dtypes (e.g. bfloat16) are cast to float32 first. *)\n"
  },
  {
    "path": "packages/tolk/lib/ir/divandmod.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\n(* Division and modulo folding for index-typed expressions. *)\n\nmodule K = Kernel\n\n(* Int64 arithmetic *)\n\nlet floordiv a b =\n  let q = Int64.div a b and r = Int64.rem a b in\n  if r <> 0L && Int64.compare a 0L < 0 <> (Int64.compare b 0L < 0) then\n    Int64.sub q 1L\n  else q\n\nlet floormod a b = Int64.sub a (Int64.mul (floordiv a b) b)\n\n(* C-style truncation division and remainder (rounds toward zero),\n   matching the IR's Idiv constant folding (Int64.div). *)\nlet cdiv a b = Int64.div a b\nlet cmod a b = Int64.rem a b\n\nlet rec gcd a b =\n  let a = Int64.abs a in\n  let b = Int64.abs b in\n  if b = 0L then a else gcd b (Int64.rem a b)\n\nlet min4 a b c d = min a (min b (min c d))\nlet max4 a b c d = max a (max b (max c d))\n\n(* Node helpers *)\n\nlet iconst v = K.const (Const.int64 Dtype.Val.index v)\n\nlet const_int_val node =\n  match K.view node with\n  | Const { value; _ } -> (\n      match Const.view value with Int v -> Some v | _ -> None)\n  | _ -> None\n\n(* Recursive interval analysis: compute the tightest lower (vmin) and upper\n   (vmax) bounds of an index expression. For Mul, Idiv, and Mod the extremes\n   can occur at any corner of the two operands' ranges, so we evaluate all\n   four products/quotients and take the min/max respectively. Division and\n   modulo bail to Int64.min/max_int when the divisor range spans zero. 
*)\n\nlet rec vmin node =\n  match K.view node with\n  | Const { value; _ } -> (\n      match Const.view value with\n      | Int v -> v\n      | Bool b -> if b then 1L else 0L\n      | Float _ -> Int64.min_int)\n  | Range _ -> 0L\n  | Define_var { lo; _ } -> Int64.of_int lo\n  | Special _ -> 0L\n  | Binary { op = `Add; lhs; rhs; _ } -> Int64.add (vmin lhs) (vmin rhs)\n  | Binary { op = `Sub; lhs; rhs; _ } -> Int64.sub (vmin lhs) (vmax rhs)\n  | Binary { op = `Mul; lhs; rhs; _ } ->\n      let a = vmin lhs and b = vmax lhs and c = vmin rhs and d = vmax rhs in\n      min4 (Int64.mul a c) (Int64.mul a d) (Int64.mul b c) (Int64.mul b d)\n  | Binary { op = `Idiv; lhs; rhs; _ } ->\n      if vmin rhs > 0L then\n        let xlo = vmin lhs and xhi = vmax lhs\n        and ylo = vmin rhs and yhi = vmax rhs in\n        min4 (Int64.div xlo ylo) (Int64.div xlo yhi)\n          (Int64.div xhi ylo) (Int64.div xhi yhi)\n      else Int64.min_int\n  | Binary { op = `Mod; lhs; _ } ->\n      if vmin lhs >= 0L then 0L else Int64.min_int\n  | Binary { op = `Max; lhs; rhs; _ } -> max (vmin lhs) (vmin rhs)\n  | Binary { op = `Cmplt; lhs; rhs; _ } ->\n      if vmax lhs < vmin rhs then 1L else 0L\n  | Binary { op = `Cmpne; lhs; rhs; _ } ->\n      if vmax lhs < vmin rhs || vmax rhs < vmin lhs then 1L else 0L\n  | Binary { op = `And; lhs; rhs; _ } ->\n      if vmin lhs >= 0L && vmin rhs >= 0L then 0L else Int64.min_int\n  | Unary { op = `Neg; src; _ } -> Int64.neg (vmax src)\n  | Cast { src; dtype } ->\n      let dt = Dtype.val_of dtype in\n      if Dtype.Val.is_int dt then vmin src else Int64.min_int\n  | Ternary { op = `Where; b; c; _ } -> min (vmin b) (vmin c)\n  | _ -> Int64.min_int\n\nand vmax node =\n  match K.view node with\n  | Const { value; _ } -> (\n      match Const.view value with\n      | Int v -> v\n      | Bool b -> if b then 1L else 0L\n      | Float _ -> Int64.max_int)\n  | Range { size; _ } -> Int64.sub (vmax size) 1L\n  | Define_var { hi; _ } -> Int64.of_int hi\n  | Special { 
size; _ } -> Int64.sub (vmax size) 1L\n  | Binary { op = `Add; lhs; rhs; _ } -> Int64.add (vmax lhs) (vmax rhs)\n  | Binary { op = `Sub; lhs; rhs; _ } -> Int64.sub (vmax lhs) (vmin rhs)\n  | Binary { op = `Mul; lhs; rhs; _ } ->\n      let a = vmin lhs and b = vmax lhs and c = vmin rhs and d = vmax rhs in\n      max4 (Int64.mul a c) (Int64.mul a d) (Int64.mul b c) (Int64.mul b d)\n  | Binary { op = `Idiv; lhs; rhs; _ } ->\n      if vmin rhs > 0L then\n        let xlo = vmin lhs and xhi = vmax lhs\n        and ylo = vmin rhs and yhi = vmax rhs in\n        max4 (Int64.div xlo ylo) (Int64.div xlo yhi)\n          (Int64.div xhi ylo) (Int64.div xhi yhi)\n      else Int64.max_int\n  | Binary { op = `Mod; lhs; rhs; _ } ->\n      if vmin lhs >= 0L then min (vmax lhs) (Int64.sub (vmax rhs) 1L)\n      else Int64.max_int\n  | Binary { op = `Max; lhs; rhs; _ } -> max (vmax lhs) (vmax rhs)\n  | Binary { op = `Cmplt; lhs; rhs; _ } ->\n      if vmin lhs >= vmax rhs then 0L else 1L\n  | Binary { op = `Cmpne; lhs; rhs; _ } ->\n      if vmin lhs = vmax lhs && vmin lhs = vmin rhs && vmin rhs = vmax rhs\n      then 0L else 1L\n  | Unary { op = `Neg; src; _ } -> Int64.neg (vmin src)\n  | Cast { src; dtype } ->\n      let dt = Dtype.val_of dtype in\n      if Dtype.Val.is_int dt then vmax src else Int64.max_int\n  | Ternary { op = `Where; b; c; _ } -> max (vmax b) (vmax c)\n  | _ -> Int64.max_int\n\nlet split_add node =\n  let rec go acc node =\n    match K.view node with\n    | Binary { op = `Add; lhs; rhs; _ } -> go (go acc rhs) lhs\n    | _ -> node :: acc\n  in\n  go [] node\n\nlet const_factor node =\n  match K.view node with\n  | Const { value; _ } -> (\n      match Const.view value with Int v -> v | _ -> 1L)\n  | Binary { op = `Mul; rhs; _ } -> (\n      match const_int_val rhs with Some v -> v | None -> 1L)\n  | _ -> 1L\n\nlet sum_nodes = function\n  | [] -> iconst 0L\n  | [ x ] -> x\n  | x :: rest ->\n      List.fold_left (fun acc t -> K.binary ~op:`Add ~lhs:acc ~rhs:t) x 
rest\n\nlet pop_const node =\n  let terms = split_add node in\n  let consts, non_consts =\n    List.partition (fun t -> Option.is_some (const_int_val t)) terms\n  in\n  let const_sum =\n    List.fold_left\n      (fun acc t ->\n        match const_int_val t with Some v -> Int64.add acc v | None -> acc)\n      0L consts\n  in\n  (sum_nodes non_consts, const_sum)\n\n(* Check whether an expression is statically divisible by the constant [c],\n   returning [Some (node / c)] with the quotient simplified. Recurses into\n   Add (all summands must be divisible) and Mul (the constant factor must\n   be divisible), returning None if any sub-expression is not evenly divisible. *)\nlet rec divides node c =\n  if c = 1L then Some node\n  else if c = 0L then None\n  else\n    match K.view node with\n    | Const { value; dtype } -> (\n        match Const.view value with\n        | Int v when Int64.rem v c = 0L ->\n            Some (K.const (Const.int64 dtype (Int64.div v c)))\n        | _ -> None)\n    | Binary { op = `Add; _ } ->\n        let terms = split_add node in\n        let divided = List.filter_map (fun t -> divides t c) terms in\n        if List.length divided = List.length terms then Some (sum_nodes divided)\n        else None\n    | Binary { op = `Mul; lhs; rhs; _ } -> (\n        match K.view rhs with\n        | Const { value; dtype } -> (\n            match Const.view value with\n            | Int v when Int64.rem v c = 0L ->\n                let q = Int64.div v c in\n                if q = 1L then Some lhs\n                else Some (K.binary ~op:`Mul ~lhs\n                       ~rhs:(K.const (Const.int64 dtype q)))\n            | _ -> None)\n        | _ -> None)\n    | _ -> None\n\nlet rec cartesian = function\n  | [] -> [ [] ]\n  | choices :: rest ->\n      let rest_products = cartesian rest in\n      List.concat_map\n        (fun c -> List.map (fun rp -> c :: rp) rest_products)\n        choices\n\n(* fold_divmod_general *)\n\nlet is_index_dtype dtype =\n  
Dtype.Val.is_int dtype && Dtype.Val.equal (Dtype.Val.scalarize dtype) Dtype.Val.index\n\nlet ( ||| ) a b = match a with Some _ -> a | None -> b ()\n\n(* Top-level algebraic simplifier for Idiv and Mod on index expressions.\n   Tries a cascade of strategies via the short-circuit combinator (|||):\n   cancel when the quotient is provably constant, fold nested div/mod,\n   remove redundant inner mods, linearize binary-valued numerators,\n   exploit congruences mod c, factor out the GCD, and finally try\n   recursive nesting by candidate divisors. Returns Some simplified_node\n   on the first strategy that succeeds, or None. *)\nlet rec fold_divmod_general node =\n  match K.view node with\n  | Binary { op = ((`Idiv | `Mod) as op); lhs = x; rhs = y; dtype }\n    when is_index_dtype dtype ->\n      let is_mod = op = `Mod in\n      let x_min = vmin x and x_max = vmax x\n      and y_min = vmin y and y_max = vmax y in\n      if y_min = 0L && y_max = 0L then None\n      else\n        cancel_divmod ~is_mod x y x_min x_max y_min y_max\n        ||| fun () ->\n        let x_peeled, const = pop_const x in\n        let uops_no_const = split_add x_peeled in\n        let const_denom =\n          match const_int_val y with\n          | Some c when c > 0L ->\n              fold_const_denom ~is_mod ~x ~x_min ~x_peeled ~uops_no_const\n                ~const ~c ~y\n          | _ -> None\n        in\n        const_denom\n        ||| fun () ->\n        let all_uops = split_add x in\n        divide_by_gcd ~is_mod ~op ~x ~y all_uops\n        ||| fun () -> factor_remainder ~is_mod ~x_min ~y_min ~x ~y all_uops\n  | _ -> None\n\nand cancel_divmod ~is_mod x y x_min x_max y_min y_max =\n  if Int64.mul y_min y_max > 0L then\n    let q1 = cdiv x_min y_min and q2 = cdiv x_min y_max\n    and q3 = cdiv x_max y_min and q4 = cdiv x_max y_max in\n    if q1 = q2 && q2 = q3 && q3 = q4 then\n      if is_mod then\n        Some (K.binary ~op:`Sub ~lhs:x\n                ~rhs:(K.binary ~op:`Mul ~lhs:(iconst 
q1) ~rhs:y))\n      else Some (iconst q1)\n    else None\n  else None\n\nand fold_const_denom ~is_mod ~x ~x_min ~x_peeled ~uops_no_const ~const ~c ~y =\n  nested_div_mod ~is_mod ~x ~c ~y\n  ||| fun () -> remove_nested_mod ~is_mod ~x_min ~uops_no_const ~const ~c ~y\n  ||| fun () ->\n  let decomp =\n    List.map\n      (fun u ->\n        let f = const_factor u in\n        let t = match divides u f with Some d -> d | None -> u in\n        (t, f))\n      uops_no_const\n  in\n  let terms = List.map fst decomp and factors = List.map snd decomp in\n  fold_binary_numerator ~is_mod ~terms ~factors ~const ~c\n  ||| fun () -> fold_congruence ~is_mod ~x_min ~terms ~factors ~const ~c\n  ||| fun () -> gcd_with_remainder ~is_mod ~x_peeled ~factors ~const ~c\n  ||| fun () -> nest_by_factor ~is_mod ~x_min ~x ~terms ~factors ~const ~c\n\nand nested_div_mod ~is_mod ~x ~c ~y =\n  match K.view x with\n  | Binary { op = `Mod; lhs = x0; rhs = mod_rhs; _ } -> (\n      match divides mod_rhs c with\n      | Some k ->\n          if is_mod then Some (K.binary ~op:`Mod ~lhs:x0 ~rhs:y)\n          else\n            Some (K.binary ~op:`Mod\n                    ~lhs:(K.binary ~op:`Idiv ~lhs:x0 ~rhs:y) ~rhs:k)\n      | None -> None)\n  | _ -> None\n\nand remove_nested_mod ~is_mod ~x_min ~uops_no_const ~const ~c ~y =\n  if not (is_mod && x_min >= 0L) then None\n  else\n    let new_xs, changed =\n      List.fold_right\n        (fun u (acc, ch) ->\n          match K.view u with\n          | Binary { op = `Mod; lhs = u0; rhs = mr; _ } -> (\n              match divides mr c with\n              | Some _ -> (u0 :: acc, true)\n              | None -> (u :: acc, ch))\n          | _ -> (u :: acc, ch))\n        uops_no_const ([], false)\n    in\n    if not changed then None\n    else\n      let new_x =\n        K.binary ~op:`Add ~lhs:(sum_nodes new_xs) ~rhs:(iconst const)\n      in\n      if vmin new_x >= 0L then Some (K.binary ~op:`Mod ~lhs:new_x ~rhs:y)\n      else None\n\nand fold_binary_numerator ~is_mod 
~terms ~factors ~const ~c =\n  match terms, factors with\n  | [ v ], [ f ] when Int64.sub (vmax v) (vmin v) = 1L ->\n      let eval = if is_mod then cmod else cdiv in\n      let y1 = eval (Int64.add (Int64.mul f (vmin v)) const) c in\n      let y2 = eval (Int64.add (Int64.mul f (vmax v)) const) c in\n      Some\n        (K.binary ~op:`Add\n           ~lhs:(K.binary ~op:`Mul ~lhs:(iconst (Int64.sub y2 y1))\n                   ~rhs:(K.binary ~op:`Sub ~lhs:v ~rhs:(iconst (vmin v))))\n           ~rhs:(iconst y1))\n  | _ -> None\n\nand fold_congruence ~is_mod ~x_min ~terms ~factors ~const ~c =\n  if x_min < 0L then None\n  else\n    let rem_choices =\n      List.map\n        (fun f ->\n          let r = floormod f c in\n          if Int64.mul r 2L = c then [ r; Int64.sub r c ]\n          else\n            let rc = Int64.sub r c in\n            if Int64.abs r <= Int64.abs rc then [ r ] else [ rc ])\n        factors\n    in\n    List.find_map\n      (fun rems ->\n        let rem_terms =\n          List.filter_map\n            (fun (r, v) ->\n              if r = 0L then None\n              else if r = 1L then Some v\n              else Some (K.binary ~op:`Mul ~lhs:v ~rhs:(iconst r)))\n            (List.combine rems terms)\n        in\n        let const_rem = floormod const c in\n        let all_rem =\n          if const_rem <> 0L then rem_terms @ [ iconst const_rem ]\n          else rem_terms\n        in\n        let rem = sum_nodes all_rem in\n        let rem_lo = floordiv (vmin rem) c in\n        let rem_hi = floordiv (vmax rem) c in\n        if rem_lo <> rem_hi then None\n        else if is_mod then\n          Some (K.binary ~op:`Sub ~lhs:rem\n                  ~rhs:(iconst (Int64.mul rem_lo c)))\n        else\n          let div_terms =\n            List.filter_map\n              (fun ((f, r), v) ->\n                let q = floordiv (Int64.sub f r) c in\n                if q = 0L then None\n                else if q = 1L then Some v\n                else Some (K.binary 
~op:`Mul ~lhs:v ~rhs:(iconst q)))\n              (List.combine (List.combine factors rems) terms)\n          in\n          let const_div = Int64.add (floordiv const c) rem_lo in\n          let all_div =\n            if const_div <> 0L then div_terms @ [ iconst const_div ]\n            else div_terms\n          in\n          Some (sum_nodes all_div))\n      (cartesian rem_choices)\n\nand gcd_with_remainder ~is_mod ~x_peeled ~factors ~const ~c =\n  if vmin x_peeled < 0L then None\n  else\n    let g = List.fold_left gcd c factors in\n    if g <= 1L then None\n    else\n      match divides x_peeled g with\n      | None -> None\n      | Some divided ->\n          let cg = Int64.div c g in\n          let new_x =\n            K.binary ~op:`Add ~lhs:divided\n              ~rhs:(iconst (floormod (floordiv const g) cg))\n          in\n          if vmin new_x < 0L then None\n          else if is_mod then\n            Some\n              (K.binary ~op:`Add\n                 ~lhs:(K.binary ~op:`Mul\n                         ~lhs:(K.binary ~op:`Mod ~lhs:new_x ~rhs:(iconst cg))\n                         ~rhs:(iconst g))\n                 ~rhs:(iconst (floormod const g)))\n          else\n            Some\n              (K.binary ~op:`Add\n                 ~lhs:(K.binary ~op:`Idiv ~lhs:new_x ~rhs:(iconst cg))\n                 ~rhs:(iconst (floordiv const c)))\n\n(* Try to simplify (x div c) or (x mod c) by factoring through an\n   intermediate divisor: for each non-trivial factor f that divides c,\n   compute (x/f) and recursively simplify that, then finish with the\n   remaining (c/f). Among all candidates that succeed, pick the one\n   whose backward slice (expression size) is smallest. 
*)\nand nest_by_factor ~is_mod ~x_min ~x ~terms ~factors ~const ~c =\n  if x_min < 0L then None\n  else\n    let x_peeled, _ = pop_const x in\n    let uops_no_const = split_add x_peeled in\n    let candidates =\n      List.filter_map\n        (fun (u, f) ->\n          match K.view u with\n          | Const _ -> None\n          | _ ->\n              let af = Int64.abs f in\n              if af > 1L && af < c && Int64.rem c f = 0L then Some af\n              else None)\n        (List.combine uops_no_const factors)\n      |> List.sort_uniq Int64.compare\n    in\n    let results =\n      List.filter_map\n        (fun div ->\n          let xd = K.binary ~op:`Idiv ~lhs:x ~rhs:(iconst div) in\n          match fold_divmod_general xd with\n          | Some newxs when vmin newxs >= 0L ->\n              let cd = Int64.div c div in\n              if not is_mod then\n                Some (List.length (K.backward_slice newxs),\n                      K.binary ~op:`Idiv ~lhs:newxs ~rhs:(iconst cd))\n              else\n                let b_parts =\n                  List.filter_map\n                    (fun (f, t) ->\n                      let r = floormod f div in\n                      if r <> 0L then\n                        Some (K.binary ~op:`Mul ~lhs:t ~rhs:(iconst r))\n                      else None)\n                    (List.combine factors terms)\n                in\n                let cr = floormod const div in\n                let b_parts =\n                  if cr <> 0L then b_parts @ [ iconst cr ] else b_parts\n                in\n                let b = sum_nodes b_parts in\n                if vmin b >= 0L && vmax b < div then\n                  let r =\n                    K.binary ~op:`Add\n                      ~lhs:(K.binary ~op:`Mul\n                              ~lhs:(K.binary ~op:`Mod ~lhs:newxs\n                                      ~rhs:(iconst cd))\n                              ~rhs:(iconst div))\n                      ~rhs:b\n                  in\n    
              Some (List.length (K.backward_slice r), r)\n                else None\n          | _ -> None)\n        candidates\n    in\n    match results with\n    | [] -> None\n    | first :: rest ->\n        let _, best =\n          List.fold_left\n            (fun ((bc, _) as best) ((cc, _) as cur) ->\n              if cc < bc then cur else best)\n            first rest\n        in\n        Some best\n\nand divide_by_gcd ~is_mod ~op ~x ~y all_uops =\n  let gcd_val =\n    List.fold_left (fun acc u -> gcd acc (const_factor u))\n      (const_factor y) all_uops\n  in\n  if gcd_val <= 1L then None\n  else\n    match divides x gcd_val, divides y gcd_val with\n    | Some x_div, Some y_div ->\n        let ret = K.binary ~op:(op :> Op.binary) ~lhs:x_div ~rhs:y_div in\n        if is_mod then\n          Some (K.binary ~op:`Mul ~lhs:ret ~rhs:(iconst gcd_val))\n        else Some ret\n    | _ -> None\n\nand factor_remainder ~is_mod ~x_min ~y_min ~x ~y all_uops =\n  if y_min < 0L || x_min < 0L then None\n  else\n    let quo, rem =\n      List.fold_right\n        (fun u (q, r) ->\n          match const_int_val y with\n          | Some yv ->\n              let cf = const_factor u in\n              if Int64.rem cf yv <> cf then\n                let base =\n                  match divides u cf with Some d -> d | None -> u\n                in\n                let r_part =\n                  K.binary ~op:`Mul ~lhs:base ~rhs:(iconst (floormod cf yv))\n                in\n                let q_part =\n                  if is_mod then iconst 0L\n                  else K.binary ~op:`Mul ~lhs:base\n                         ~rhs:(iconst (floordiv cf yv))\n                in\n                (q_part :: q, r_part :: r)\n              else (q, u :: r)\n          | None -> (q, u :: r))\n        all_uops ([], [])\n    in\n    if quo = [] then None\n    else\n      let new_x =\n        K.binary ~op:`Add ~lhs:(sum_nodes rem) ~rhs:(iconst 0L)\n      in\n      if vmin new_x < 0L then None\n      
else if is_mod then Some (K.binary ~op:`Mod ~lhs:new_x ~rhs:y)\n      else\n        Some (K.binary ~op:`Add\n                ~lhs:(K.binary ~op:`Idiv ~lhs:new_x ~rhs:y)\n                ~rhs:(sum_nodes quo))\n\n(* Fast inline rules *)\n\n(* (x / c + a) / d => (x + a*c) / (c*d) when c and d are positive constants,\n   a is a non-negative constant and x is known non-negative. *)\nlet fast_div_combine node =\n  match K.view node with\n  | Binary { op = `Idiv; lhs; rhs = d; dtype } when is_index_dtype dtype -> (\n      match const_int_val d with\n      | Some dv when dv > 0L -> (\n          let try_pattern inner_div a =\n            match K.view inner_div with\n            | Binary { op = `Idiv; lhs = x; rhs = c; _ } -> (\n                match const_int_val c, const_int_val a with\n                | Some cv, Some av when cv > 0L && av >= 0L && vmin x >= 0L ->\n                    Some\n                      (K.binary ~op:`Idiv\n                         ~lhs:(K.binary ~op:`Add ~lhs:x\n                                 ~rhs:(iconst (Int64.mul av cv)))\n                         ~rhs:(iconst (Int64.mul cv dv)))\n                | _ -> None)\n            | _ -> None\n          in\n          match K.view lhs with\n          | Binary { op = `Add; lhs = l; rhs = r; _ } -> (\n              match try_pattern l r with\n              | Some _ as result -> result\n              | None -> try_pattern r l)\n          | _ -> None)\n      | _ -> None)\n  | _ -> None\n\n(* x / d => -(x / -d) when the divisor is known negative. *)\nlet neg_divisor_div node =\n  match K.view node with\n  | Binary { op = `Idiv; lhs = x; rhs = d; dtype }\n    when is_index_dtype dtype && vmax d < 0L ->\n      Some (K.unary ~op:`Neg\n              ~src:(K.binary ~op:`Idiv ~lhs:x ~rhs:(K.unary ~op:`Neg ~src:d)))\n  | _ -> None\n\n(* x / d => -(-x / d) when the dividend is known non-positive. *)\nlet neg_dividend_div node =\n  match K.view node with\n  | Binary { op = `Idiv; lhs = x; rhs = d; dtype }\n    when is_index_dtype dtype && vmax x <= 0L ->\n      Some (K.unary ~op:`Neg\n              ~src:(K.binary ~op:`Idiv ~lhs:(K.unary ~op:`Neg ~src:x) ~rhs:d))\n  | _ -> None\n\n(* (x + c) / d => (x + c mod d) / d + c div d when c is a constant, d is a\n   positive constant and both x and the full dividend are non-negative. *)\nlet const_split_div node =\n  match K.view node with\n  | Binary { op = `Idiv; lhs; rhs = d; dtype } 
when is_index_dtype dtype -> (\n      match const_int_val d with\n      | Some dv when dv > 0L -> (\n          let try_split x c_node =\n            match const_int_val c_node with\n            | Some cv when floormod cv dv <> cv ->\n                if vmin x >= 0L && vmin lhs >= 0L then\n                  Some\n                    (K.binary ~op:`Add\n                       ~lhs:(K.binary ~op:`Idiv\n                               ~lhs:(K.binary ~op:`Add ~lhs:x\n                                       ~rhs:(iconst (floormod cv dv)))\n                               ~rhs:(iconst dv))\n                       ~rhs:(iconst (floordiv cv dv)))\n                else None\n            | _ -> None\n          in\n          match K.view lhs with\n          | Binary { op = `Add; lhs = l; rhs = r; _ } -> (\n              match try_split l r with\n              | Some _ as result -> result\n              | None -> try_split r l)\n          | _ -> None)\n      | _ -> None)\n  | _ -> None\n\n(* x mod d => -(-x mod d) when the dividend is known non-positive. *)\nlet neg_dividend_mod node =\n  match K.view node with\n  | Binary { op = `Mod; lhs = x; rhs = d; dtype }\n    when is_index_dtype dtype && vmax x <= 0L ->\n      Some (K.unary ~op:`Neg\n              ~src:(K.binary ~op:`Mod ~lhs:(K.unary ~op:`Neg ~src:x) ~rhs:d))\n  | _ -> None\n\n(* x mod d => x mod -d when the divisor is known negative. *)\nlet neg_divisor_mod node =\n  match K.view node with\n  | Binary { op = `Mod; lhs = x; rhs = d; dtype }\n    when is_index_dtype dtype && vmax d < 0L ->\n      Some (K.binary ~op:`Mod ~lhs:x ~rhs:(K.unary ~op:`Neg ~src:d))\n  | _ -> None\n\n(* Entry point *)\n\nlet div_and_mod_symbolic =\n  K.first_match\n    [ fast_div_combine; neg_divisor_div; neg_dividend_div; const_split_div;\n      fold_divmod_general; neg_dividend_mod; neg_divisor_mod ]\n"
  },
  {
    "path": "packages/tolk/lib/ir/divandmod.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\n(** Division and modulo folding for index-typed expressions. *)\n\nval div_and_mod_symbolic : Kernel.t -> Kernel.t option\n(** [div_and_mod_symbolic node] tries all div/mod folding rules on [node].\n    Only fires on index-typed {!Op.Idiv} and {!Op.Mod} nodes. *)\n\n(** {1 Expression analysis helpers} *)\n\nval vmin : Kernel.t -> int64\n(** [vmin node] is a conservative lower bound for an index-typed expression. *)\n\nval vmax : Kernel.t -> int64\n(** [vmax node] is a conservative upper bound for an index-typed expression. *)\n\nval split_add : Kernel.t -> Kernel.t list\n(** [split_add node] flattens an addition tree into its additive terms:\n    [(a + b) + c] yields [[a; b; c]]. *)\n\nval const_factor : Kernel.t -> int64\n(** [const_factor node] extracts the constant multiplicative factor, e.g.\n    [6L] for [v * 6]. Returns [1L] for non-multiply or non-constant-factor\n    terms. *)\n\nval divides : Kernel.t -> int64 -> Kernel.t option\n(** [divides node c] returns [Some (node / c)] if [node] is evenly divisible\n    by [c], or [None] otherwise. *)\n"
  },
  {
    "path": "packages/tolk/lib/ir/dtype.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\ntype scalar =\n  | Void\n  | Bool\n  | Int8\n  | Int16\n  | Int32\n  | Int64\n  | Uint8\n  | Uint16\n  | Uint32\n  | Uint64\n  | Float16\n  | Bfloat16\n  | Float32\n  | Float64\n  | Fp8e4m3\n  | Fp8e5m2\n  | Index\n\ntype addr_space = Global | Local | Reg\n\n(* Shared scalar-level functions *)\n\nlet scalar_bitsize = function\n  | Void -> 0\n  | Bool -> 1\n  | Int8 | Uint8 | Fp8e4m3 | Fp8e5m2 -> 8\n  | Int16 | Uint16 | Float16 | Bfloat16 -> 16\n  | Int32 | Uint32 | Float32 -> 32\n  | Int64 | Uint64 | Float64 -> 64\n  | Index -> 800\n\nlet scalar_priority = function\n  | Void | Index -> -1\n  | Bool -> 0\n  | Int8 -> 1 | Uint8 -> 2\n  | Int16 -> 3 | Uint16 -> 4\n  | Int32 -> 5 | Uint32 -> 6\n  | Int64 -> 7 | Uint64 -> 8\n  | Fp8e4m3 -> 9 | Fp8e5m2 -> 10\n  | Float16 -> 11 | Bfloat16 -> 12\n  | Float32 -> 13 | Float64 -> 14\n\nlet scalar_compare a b =\n  let c = Int.compare (scalar_priority a) (scalar_priority b) in\n  if c <> 0 then c\n  else\n    let c = Int.compare (scalar_bitsize a) (scalar_bitsize b) in\n    if c <> 0 then c else Stdlib.compare a b\n\nlet scalar_is_float = function\n  | Float16 | Bfloat16 | Float32 | Float64 | Fp8e4m3 | Fp8e5m2 -> true\n  | _ -> false\n\nlet scalar_is_fp8 = function Fp8e4m3 | Fp8e5m2 -> true | _ -> false\n\nlet scalar_is_int = function\n  | Int8 | Int16 | Int32 | Int64 | Uint8 | Uint16 | Uint32 | Uint64 | Index ->\n      true\n  | _ -> false\n\nlet scalar_is_unsigned = function\n  | Uint8 | Uint16 | Uint32 | Uint64 -> true\n  | _ -> false\n\nlet scalar_is_bool = function Bool -> true | _ -> false\n\n(* Promotion lattice *)\n\nlet promo_lattice =\n  [ Bool, [ Int8; Uint8 ];\n    Int8, [ 
Int16 ];       Int16, [ Int32 ];\n    Int32, [ Int64 ];      Int64, [ Uint64 ];\n    Uint8, [ Int16; Uint16 ];\n    Uint16, [ Int32; Uint32 ];\n    Uint32, [ Int64; Uint64 ];\n    Uint64, [ Fp8e4m3; Fp8e5m2 ];\n    Fp8e4m3, [ Float16; Bfloat16 ];\n    Fp8e5m2, [ Float16; Bfloat16 ];\n    Float16, [ Float32 ];  Bfloat16, [ Float32 ];\n    Float32, [ Float64 ] ]\n\nmodule Scalar_set = Set.Make (struct\n  type t = scalar\n  let compare = Stdlib.compare\nend)\n\nlet ancestor_cache : (scalar, Scalar_set.t) Hashtbl.t = Hashtbl.create 16\n\nlet rec scalar_ancestors s =\n  match Hashtbl.find_opt ancestor_cache s with\n  | Some set -> set\n  | None ->\n      let parents = Option.value ~default:[] (List.assoc_opt s promo_lattice) in\n      let set =\n        List.fold_left\n          (fun acc p -> Scalar_set.union acc (scalar_ancestors p))\n          (Scalar_set.singleton s) parents\n      in\n      Hashtbl.add ancestor_cache s set;\n      set\n\nlet min_by_priority scalars =\n  Scalar_set.fold\n    (fun s best ->\n      match best with\n      | None -> Some s\n      | Some b when scalar_compare s b < 0 -> Some s\n      | _ -> best)\n    scalars None\n\n(* Val module *)\n\nmodule Val = struct\n  type t = { scalar : scalar; count : int }\n\n  let scalar dt = dt.scalar\n  let count dt = dt.count\n  let of_scalar s = { scalar = s; count = 1 }\n  let void = of_scalar Void\n  let bool = of_scalar Bool\n  let int8 = of_scalar Int8\n  let int16 = of_scalar Int16\n  let int32 = of_scalar Int32\n  let int64 = of_scalar Int64\n  let uint8 = of_scalar Uint8\n  let uint16 = of_scalar Uint16\n  let uint32 = of_scalar Uint32\n  let uint64 = of_scalar Uint64\n  let float16 = of_scalar Float16\n  let bfloat16 = of_scalar Bfloat16\n  let float32 = of_scalar Float32\n  let float64 = of_scalar Float64\n  let fp8e4m3 = of_scalar Fp8e4m3\n  let fp8e5m2 = of_scalar Fp8e5m2\n  let index = of_scalar Index\n  let default_float = float32\n  let default_int = int32\n\n  let scalarize dt = if dt.count 
= 1 then dt else { dt with count = 1 }\n\n  let vec n dt =\n    if dt.count <> 1 then\n      invalid_arg (Printf.sprintf \"can't vectorize type with count %d\" dt.count);\n    if n < 0 then invalid_arg (Printf.sprintf \"vector size must be >= 0, got %d\" n);\n    if n = 0 && dt.scalar <> Index then\n      invalid_arg \"only index dtype can use zero-length vectors\";\n    if n = 1 || dt.scalar = Void then dt else { dt with count = n }\n\n  let with_scalar s dt = { dt with scalar = s }\n\n  let is_float dt = scalar_is_float dt.scalar\n  let is_int dt = scalar_is_int dt.scalar\n  let is_unsigned dt = scalar_is_unsigned dt.scalar\n  let is_bool dt = scalar_is_bool dt.scalar\n  let is_fp8 dt = scalar_is_fp8 dt.scalar\n  let bitsize dt = scalar_bitsize dt.scalar * dt.count\n  let itemsize dt = (bitsize dt + 7) / 8\n  let priority dt = scalar_priority dt.scalar\n\n  let least_upper_dtype dts =\n    if List.exists (fun d -> d.scalar = Index) dts then\n      invalid_arg \"Index does not participate in dtype promotion\";\n    match dts with\n    | [] -> invalid_arg \"least_upper_dtype requires at least one dtype\"\n    | [ d ] -> scalarize d\n    | first :: rest ->\n        let intersection =\n          List.fold_left\n            (fun acc d -> Scalar_set.inter acc (scalar_ancestors d.scalar))\n            (scalar_ancestors first.scalar) rest\n        in\n        (match min_by_priority intersection with\n        | Some s -> of_scalar s\n        | None ->\n            invalid_arg \"least_upper_dtype: no common type in promotion lattice\")\n\n  let least_upper_float dt =\n    if scalar_is_float dt.scalar then scalarize dt\n    else least_upper_dtype [ scalarize dt; float32 ]\n\n  let can_lossless_cast dt0 dt1 =\n    let s0 = dt0.scalar and s1 = dt1.scalar in\n    s0 = s1 || s0 = Bool ||\n    match s1 with\n    | Index ->\n        List.mem s0 [ Uint8; Uint16; Uint32; Uint64; Int8; Int16; Int32; Int64 ]\n    | Float64 ->\n        List.mem s0\n          [ Float32; Float16; 
Bfloat16; Fp8e4m3; Fp8e5m2;\n            Uint32; Uint16; Uint8; Int32; Int16; Int8 ]\n    | Float32 ->\n        List.mem s0\n          [ Float16; Bfloat16; Fp8e4m3; Fp8e5m2; Uint16; Uint8; Int16; Int8 ]\n    | Float16 -> List.mem s0 [ Fp8e4m3; Fp8e5m2; Uint8; Int8 ]\n    | Uint64 -> List.mem s0 [ Uint32; Uint16; Uint8 ]\n    | Uint32 -> List.mem s0 [ Uint16; Uint8 ]\n    | Uint16 -> s0 = Uint8\n    | Int64 -> List.mem s0 [ Uint32; Uint16; Uint8; Int32; Int16; Int8 ]\n    | Int32 -> List.mem s0 [ Uint16; Uint8; Int16; Int8 ]\n    | Int16 -> List.mem s0 [ Uint8; Int8 ]\n    | _ -> false\n\n  let sum_acc_dtype dt =\n    if dt.scalar = Index then\n      invalid_arg \"sum_acc_dtype does not accept index dtype\";\n    let dt = scalarize dt in\n    if scalar_is_unsigned dt.scalar then least_upper_dtype [ dt; uint32 ]\n    else if scalar_is_int dt.scalar || scalar_is_bool dt.scalar then\n      least_upper_dtype [ dt; int32 ]\n    else least_upper_dtype [ dt; float32 ]\n\n  let equal a b = a.scalar = b.scalar && a.count = b.count\n\n  let compare a b =\n    let c = scalar_compare a.scalar b.scalar in\n    if c <> 0 then c else Int.compare a.count b.count\n\n  let to_string t =\n    let s = match t.scalar with\n      | Void -> \"void\"   | Bool -> \"bool\"   | Index -> \"index\"\n      | Int8 -> \"i8\"     | Int16 -> \"i16\"   | Int32 -> \"i32\"   | Int64 -> \"i64\"\n      | Uint8 -> \"u8\"    | Uint16 -> \"u16\"  | Uint32 -> \"u32\"  | Uint64 -> \"u64\"\n      | Float16 -> \"f16\" | Bfloat16 -> \"bf16\"\n      | Float32 -> \"f32\" | Float64 -> \"f64\"\n      | Fp8e4m3 -> \"fp8e4m3\" | Fp8e5m2 -> \"fp8e5m2\"\n    in\n    if t.count = 1 then s else Printf.sprintf \"%s×%d\" s t.count\n\n  let pp fmt t = Format.pp_print_string fmt (to_string t)\nend\n\n(* Ptr module *)\n\nmodule Ptr = struct\n  type t = {\n    scalar : scalar;\n    count : int;\n    addrspace : addr_space;\n    v : int;\n    size : int;\n  }\n\n  let scalar p = p.scalar\n  let count p = p.count\n  let addrspace 
p = p.addrspace\n  let v p = p.v\n  let size p = p.size\n  let base p : Val.t = { scalar = p.scalar; count = p.count }\n\n  let err_vcount n = Printf.sprintf \"pointer vcount must be >= 1, got %d\" n\n\n  let create (base : Val.t) ~addrspace ~size =\n    { scalar = base.scalar; count = base.count; addrspace; v = 1; size }\n\n  let create_v (base : Val.t) ~addrspace ~size ~v =\n    if v < 1 then invalid_arg (err_vcount v);\n    { scalar = base.scalar; count = base.count; addrspace; v; size }\n\n  let scalarize p =\n    if p.v = 1 && p.count = 1 then p\n    else { p with count = 1; v = 1 }\n\n  let vec n p =\n    if n < 1 then invalid_arg (err_vcount n);\n    if p.v = n then p else { p with v = n }\n\n  let with_base (dt : Val.t) p =\n    { p with scalar = dt.scalar; count = dt.count }\n\n  let with_size n p = { p with size = n }\n\n  let equal a b =\n    a.scalar = b.scalar && a.count = b.count\n    && a.addrspace = b.addrspace && a.v = b.v && a.size = b.size\n\n  let compare a b =\n    let ( |? ) c f = if c <> 0 then c else f () in\n    scalar_compare a.scalar b.scalar |? fun () ->\n    Int.compare a.count b.count |? fun () ->\n    Stdlib.compare a.addrspace b.addrspace |? fun () ->\n    Int.compare a.v b.v |? 
fun () -> Int.compare a.size b.size\n\n  let to_string p =\n    let base = Val.to_string { Val.scalar = p.scalar; count = p.count } in\n    let vec = if p.v = 1 then \"\" else Printf.sprintf \".vec(%d)\" p.v in\n    let space = match p.addrspace with\n      | Global -> \"global\" | Local -> \"local\" | Reg -> \"reg\"\n    in\n    Printf.sprintf \"%s*%s [%s]\" base vec space\n\n  let pp fmt p = Format.pp_print_string fmt (to_string p)\nend\n\n(* Unified type *)\n\ntype t = Val of Val.t | Ptr of Ptr.t\n\n(* Dispatching accessors *)\n\nlet scalar = function Val v -> Val.scalar v | Ptr p -> Ptr.scalar p\nlet count = function Val v -> Val.count v | Ptr p -> Ptr.count p\nlet vcount = function Val v -> Val.count v | Ptr p -> Ptr.v p\nlet is_ptr = function Ptr _ -> true | Val _ -> false\nlet val_of = function Val v -> v | Ptr p -> Ptr.base p\n\n(* Dispatching transformers *)\n\nlet scalarize = function\n  | Val v -> Val (Val.scalarize v)\n  | Ptr p -> Ptr (Ptr.scalarize p)\n\nlet vec n = function\n  | Val v -> Val (Val.vec n v)\n  | Ptr p -> Ptr (Ptr.vec n p)\n\n(* Predicates *)\n\nlet is_float dt = scalar_is_float (scalar dt)\nlet is_int dt = scalar_is_int (scalar dt)\nlet is_unsigned dt = scalar_is_unsigned (scalar dt)\nlet is_bool dt = scalar_is_bool (scalar dt)\nlet is_fp8 dt = scalar_is_fp8 (scalar dt)\n\n(* Properties *)\n\nlet bitsize dt = scalar_bitsize (scalar dt) * count dt\nlet itemsize dt = (bitsize dt + 7) / 8\nlet priority dt = scalar_priority (scalar dt)\n\n(* Bounds *)\n\ntype bound =\n  [ `Bool of bool | `SInt of int64 | `UInt of int64 | `Float of float ]\n\nlet err_void_bounds = \"void has no numeric bounds\"\n\nlet min dt =\n  let s = scalar dt in\n  let b = scalar_bitsize s in\n  match s with\n  | Bool -> `Bool false\n  | Uint8 | Uint16 | Uint32 | Uint64 -> `UInt 0L\n  | Index -> `SInt Int64.min_int\n  | Int8 | Int16 | Int32 | Int64 ->\n      if b >= 64 then `SInt Int64.min_int\n      else `SInt Int64.(neg (shift_left 1L (b - 1)))\n  | Float16 | 
Bfloat16 | Float32 | Float64 | Fp8e4m3 | Fp8e5m2 ->\n      `Float neg_infinity\n  | Void -> invalid_arg err_void_bounds\n\nlet max dt =\n  let s = scalar dt in\n  let b = scalar_bitsize s in\n  match s with\n  | Bool -> `Bool true\n  | Uint8 | Uint16 | Uint32 | Uint64 ->\n      if b >= 64 then `UInt Int64.minus_one\n      else `UInt Int64.(sub (shift_left 1L b) 1L)\n  | Index -> `SInt Int64.max_int\n  | Int8 | Int16 | Int32 | Int64 ->\n      if b >= 64 then `SInt Int64.max_int\n      else `SInt Int64.(sub (shift_left 1L (b - 1)) 1L)\n  | Float16 | Bfloat16 | Float32 | Float64 | Fp8e4m3 | Fp8e5m2 ->\n      `Float infinity\n  | Void -> invalid_arg err_void_bounds\n\nlet finfo dt =\n  match scalar dt with\n  | Float16 -> 5, 10   | Bfloat16 -> 8, 7\n  | Float32 -> 8, 23   | Float64 -> 11, 52\n  | Fp8e5m2 -> 5, 2    | Fp8e4m3 -> 4, 3\n  | _ -> invalid_arg \"finfo expects a floating-point dtype\"\n\n(* Comparison *)\n\nlet equal a b =\n  match a, b with\n  | Val a, Val b -> Val.equal a b\n  | Ptr a, Ptr b -> Ptr.equal a b\n  | _ -> false\n\nlet compare a b =\n  match a, b with\n  | Val a, Val b -> Val.compare a b\n  | Ptr a, Ptr b -> Ptr.compare a b\n  | Val _, Ptr _ -> -1\n  | Ptr _, Val _ -> 1\n\n(* Formatting *)\n\nlet to_string = function Val v -> Val.to_string v | Ptr p -> Ptr.to_string p\nlet pp fmt dt = Format.pp_print_string fmt (to_string dt)\n\n(* Scalar formatting *)\n\nlet scalar_to_string = function\n  | Void -> \"void\"   | Bool -> \"bool\"   | Index -> \"index\"\n  | Int8 -> \"i8\"     | Int16 -> \"i16\"   | Int32 -> \"i32\"   | Int64 -> \"i64\"\n  | Uint8 -> \"u8\"    | Uint16 -> \"u16\"  | Uint32 -> \"u32\"  | Uint64 -> \"u64\"\n  | Float16 -> \"f16\" | Bfloat16 -> \"bf16\"\n  | Float32 -> \"f32\" | Float64 -> \"f64\"\n  | Fp8e4m3 -> \"fp8e4m3\" | Fp8e5m2 -> \"fp8e5m2\"\n\nlet pp_scalar fmt s = Format.pp_print_string fmt (scalar_to_string s)\n\nlet addr_space_to_string = function\n  | Global -> \"global\" | Local -> \"local\" | Reg -> \"reg\"\n\nlet 
pp_addr_space fmt a = Format.pp_print_string fmt (addr_space_to_string a)\n\nlet scalar_cname = function\n  | Void -> \"void\"          | Bool -> \"bool\"           | Index -> \"index\"\n  | Int8 -> \"signed char\"   | Int16 -> \"short\"\n  | Int32 -> \"int\"          | Int64 -> \"long\"\n  | Uint8 -> \"unsigned char\"  | Uint16 -> \"unsigned short\"\n  | Uint32 -> \"unsigned int\"  | Uint64 -> \"unsigned long\"\n  | Float16 -> \"half\"       | Bfloat16 -> \"__bf16\"\n  | Float32 -> \"float\"      | Float64 -> \"double\"\n  | Fp8e4m3 -> \"float8_e4m3\"  | Fp8e5m2 -> \"float8_e5m2\"\n\n(* Convenience constructors — wrapped as Dtype.t *)\n\nlet of_scalar s = Val (Val.of_scalar s)\nlet void = Val Val.void\nlet bool = Val Val.bool\nlet int8 = Val Val.int8\nlet int16 = Val Val.int16\nlet int32 = Val Val.int32\nlet int64 = Val Val.int64\nlet uint8 = Val Val.uint8\nlet uint16 = Val Val.uint16\nlet uint32 = Val Val.uint32\nlet uint64 = Val Val.uint64\nlet float16 = Val Val.float16\nlet bfloat16 = Val Val.bfloat16\nlet float32 = Val Val.float32\nlet float64 = Val Val.float64\nlet fp8e4m3 = Val Val.fp8e4m3\nlet fp8e5m2 = Val Val.fp8e5m2\nlet index = Val Val.index\nlet default_float = Val Val.default_float\nlet default_int = Val Val.default_int\n\n(* FP conversion *)\n\nlet float_to_fp16 x =\n  if Float.is_nan x then Float.nan\n  else if Float.is_infinite x then x\n  else if x = 0.0 then x\n  else\n    let bits = Int64.bits_of_float x in\n    let sign = Int64.logand (Int64.shift_right_logical bits 63) 1L in\n    let exp =\n      Int64.to_int (Int64.logand (Int64.shift_right_logical bits 52) 0x7FFL)\n    in\n    let mant = Int64.logand bits 0xFFFFFFFFFFFFFL in\n    let unbiased = exp - 1023 in\n    if unbiased > 15 then\n      if sign = 1L then Float.neg Float.infinity else Float.infinity\n    else if unbiased < -24 then if sign = 1L then -0.0 else 0.0\n    else\n      let fp16_sign = Int64.shift_left sign 15 in\n      let fp16_bits =\n        if unbiased < -14 then begin\n    
      let shift = -14 - unbiased in\n          let full_mant = Int64.logor mant 0x10000000000000L in\n          let total_shift = 42 + shift in\n          let shifted = Int64.shift_right_logical full_mant total_shift in\n          let round_bit =\n            Int64.to_int\n              (Int64.logand\n                 (Int64.shift_right_logical full_mant (total_shift - 1))\n                 1L)\n          in\n          let sticky =\n            let mask = Int64.sub (Int64.shift_left 1L (total_shift - 1)) 1L in\n            if Int64.logand full_mant mask <> 0L then 1 else 0\n          in\n          let rounded =\n            if round_bit = 1 && (sticky = 1 || Int64.logand shifted 1L <> 0L)\n            then Int64.add shifted 1L\n            else shifted\n          in\n          Int64.logor fp16_sign rounded\n        end\n        else begin\n          let biased16 = unbiased + 15 in\n          let shifted_mant = Int64.shift_right_logical mant 42 in\n          let round_bit =\n            Int64.to_int (Int64.logand (Int64.shift_right_logical mant 41) 1L)\n          in\n          let sticky =\n            if Int64.logand mant 0x1FFFFFFFFFFL <> 0L then 1 else 0\n          in\n          let rounded =\n            if\n              round_bit = 1 && (sticky = 1 || Int64.logand shifted_mant 1L <> 0L)\n            then Int64.add shifted_mant 1L\n            else shifted_mant\n          in\n          let final_exp, final_mant =\n            if rounded > 0x3FFL then (biased16 + 1, 0L) else (biased16, rounded)\n          in\n          if final_exp > 30 then Int64.logor fp16_sign 0x7C00L\n          else\n            Int64.logor fp16_sign\n              (Int64.logor (Int64.of_int (final_exp lsl 10)) final_mant)\n        end\n      in\n      let fp16_exp =\n        Int64.to_int\n          (Int64.logand (Int64.shift_right_logical fp16_bits 10) 0x1FL)\n      in\n      let fp16_mant = Int64.logand fp16_bits 0x3FFL in\n      let f =\n        if fp16_exp = 0x1F then\n          if 
fp16_mant = 0L then Float.infinity else Float.nan\n        else if fp16_exp = 0 then Float.ldexp (Int64.to_float fp16_mant) (-24)\n        else\n          Float.ldexp\n            (Int64.to_float (Int64.logor fp16_mant 0x400L))\n            (fp16_exp - 25)\n      in\n      if sign = 1L then Float.neg f else f\n\nlet float_to_bf16 x =\n  if not (Float.is_finite x) then x\n  else\n    let u = Int32.bits_of_float x in\n    let u =\n      Int32.logand\n        (Int32.add u\n           (Int32.add 0x7FFFl\n              (Int32.logand (Int32.shift_right_logical u 16) 1l)))\n        0xFFFF_0000l\n    in\n    Int32.float_of_bits u\n\ntype fp8_params = {\n  exp_bias : int; sig_bits : int; mantissa_mask : int;\n  mindenorm_o2 : int64; overflow_threshold : int64;\n  maxnorm : int; minnorm : int64;\n}\n\nlet fp8e4m3_params =\n  { exp_bias = 7; sig_bits = 4; mantissa_mask = 0x7;\n    mindenorm_o2 = 0x3F50000000000000L; overflow_threshold = 0x407D000000000000L;\n    maxnorm = 0x7E; minnorm = 0x3F90000000000000L }\n\nlet fp8e5m2_params =\n  { exp_bias = 15; sig_bits = 3; mantissa_mask = 0x3;\n    mindenorm_o2 = 0x3EE0000000000000L;\n    overflow_threshold = Int64.sub 0x40EE000000000000L 1L;\n    maxnorm = 0x7B; minnorm = 0x3F10000000000000L }\n\nlet float_to_fp8 scalar x =\n  match scalar with\n  | Fp8e4m3 when not (Float.is_finite x) ->\n      if Float.copy_sign 1.0 x > 0.0 then 0x7f else 0xff\n  | Fp8e5m2 when Float.is_infinite x ->\n      if Float.copy_sign 1.0 x > 0.0 then 0x7c else 0xfc\n  | Fp8e4m3 | Fp8e5m2 ->\n      let p = match scalar with\n        | Fp8e4m3 -> fp8e4m3_params | _ -> fp8e5m2_params\n      in\n      let xbits = Int64.bits_of_float x in\n      let half_ulp = Int64.shift_left 1L (53 - p.sig_bits - 1) in\n      let sign =\n        Int64.to_int (Int64.logand (Int64.shift_right_logical xbits 63) 1L)\n        lsl 7\n      in\n      let raw_exp =\n        Int64.to_int (Int64.logand (Int64.shift_right_logical xbits 52) 0x7FFL)\n      in\n      let exp = raw_exp - 
1023 + p.exp_bias in\n      let mantissa =\n        Int64.to_int\n          (Int64.logand\n             (Int64.shift_right_logical xbits (53 - p.sig_bits))\n             (Int64.of_int p.mantissa_mask))\n      in\n      let absx = Int64.logand xbits 0x7FFFFFFFFFFFFFFFL in\n      let res =\n        if Int64.compare absx p.mindenorm_o2 <= 0 then 0\n        else if Int64.compare absx 0x7FF0000000000000L > 0 then\n          if scalar = Fp8e4m3 then 0x7F else 0x7E lor mantissa\n        else if Int64.compare absx p.overflow_threshold > 0 then p.maxnorm\n        else if Int64.compare absx p.minnorm >= 0 then begin\n          let base = (exp lsl (p.sig_bits - 1)) lor mantissa in\n          let round_mask = Int64.sub (Int64.shift_left half_ulp 1) 1L in\n          let round_bits = Int64.logand xbits round_mask in\n          if Int64.compare round_bits half_ulp > 0\n             || (round_bits = half_ulp && mantissa land 1 <> 0)\n          then base + 1 else base\n        end\n        else begin\n          let shift = 1 - exp in\n          let mant_with_implicit = mantissa lor (1 lsl (p.sig_bits - 1)) in\n          let base = mant_with_implicit asr shift in\n          let round_bits =\n            Int64.logand\n              (Int64.logor xbits (Int64.shift_left 1L 52))\n              (Int64.sub (Int64.shift_left half_ulp (shift + 1)) 1L)\n          in\n          let threshold = Int64.shift_left half_ulp shift in\n          if Int64.compare round_bits threshold > 0\n             || (round_bits = threshold && base land 1 <> 0)\n          then base + 1 else base\n        end\n      in\n      res lor sign\n  | _ -> invalid_arg \"float_to_fp8: expected Fp8e4m3 or Fp8e5m2\"\n\nlet fp8_to_float scalar x =\n  match scalar with\n  | Fp8e4m3 | Fp8e5m2 ->\n      let ur = x lsl 8 in\n      let ur =\n        if scalar = Fp8e5m2 && ur land 0x7FFF > 0x7C00 then 0x7FFF\n        else if scalar = Fp8e4m3 then begin\n          let sign = ur land 0x8000 in\n          let exponent = ((ur land 
0x7800) asr 1) + 0x2000 in\n          let mantissa_init = (ur land 0x0700) asr 1 in\n          let absx = x land 0x7F in\n          if absx = 0x7F then 0x7FFF\n          else if exponent = 0x2000 then begin\n            if mantissa_init <> 0 then begin\n              let rec normalize m e =\n                if m land 0x0400 <> 0 then (m, e)\n                else normalize (m lsl 1) (e - 0x0400)\n              in\n              let m, e = normalize (mantissa_init lsl 1) exponent in\n              sign lor e lor (m land 0x03FF)\n            end\n            else sign\n          end\n          else sign lor exponent lor mantissa_init\n        end\n        else ur\n      in\n      let fp16_sign = (ur asr 15) land 1 in\n      let fp16_exp = (ur asr 10) land 0x1F in\n      let fp16_mant = ur land 0x3FF in\n      let f =\n        if fp16_exp = 0x1F then\n          if fp16_mant = 0 then Float.infinity else Float.nan\n        else if fp16_exp = 0 then Float.ldexp (Float.of_int fp16_mant) (-24)\n        else Float.ldexp (Float.of_int (fp16_mant + 1024)) (fp16_exp - 25)\n      in\n      if fp16_sign = 1 then Float.neg f else f\n  | _ -> invalid_arg \"fp8_to_float: expected Fp8e4m3 or Fp8e5m2\"\n\nlet truncate_float (dt : Val.t) x =\n  match dt.scalar with\n  | Float64 -> x\n  | Float32 -> Int32.float_of_bits (Int32.bits_of_float x)\n  | Float16 -> float_to_fp16 x\n  | Bfloat16 -> float_to_bf16 x\n  | Fp8e4m3 | Fp8e5m2 -> fp8_to_float dt.scalar (float_to_fp8 dt.scalar x)\n  | _ -> invalid_arg \"truncate_float: expected a floating-point dtype\"\n\nlet truncate_int (dt : Val.t) x =\n  let b = scalar_bitsize dt.scalar in\n  match dt.scalar with\n  | Bool -> if x <> 0 then 1 else 0\n  | Uint8 | Uint16 | Uint32 | Uint64 ->\n      if b >= Sys.int_size then x else x land ((1 lsl b) - 1)\n  | Int8 | Int16 | Int32 | Int64 | Index ->\n      if b >= Sys.int_size then x\n      else\n        let mask = (1 lsl b) - 1 in\n        let unsigned = x land mask in\n        if unsigned land (1 lsl 
(b - 1)) <> 0 then unsigned lor lnot mask\n        else unsigned\n  | _ -> invalid_arg \"truncate_int: expected an integer or bool dtype\"\n"
  },
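The `truncate_int` implementation above reduces an OCaml `int` to a narrow signed lane by masking and then sign-extending when the lane's top bit is set. A standalone sketch of that trick, specialized to a signed 8-bit lane (the helper name `truncate_int8` is ours, not part of the library):

```ocaml
(* Mask to the low 8 bits, then sign-extend: if bit 7 is set, OR in the
   complement of the mask so the value reads as a negative OCaml int. *)
let truncate_int8 x =
  let mask = (1 lsl 8) - 1 in
  let unsigned = x land mask in
  if unsigned land (1 lsl 7) <> 0 then unsigned lor lnot mask
  else unsigned

let () =
  assert (truncate_int8 200 = -56);  (* 200 wraps around to -56 in int8 *)
  assert (truncate_int8 (-1) = -1);  (* already in range: unchanged *)
  assert (truncate_int8 127 = 127)
```

This is the same shape as the `Int8 | Int16 | Int32 | Int64 | Index` branch of `truncate_int`, with `b` fixed to 8 and the `b >= Sys.int_size` fast path dropped.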
  {
    "path": "packages/tolk/lib/ir/dtype.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\n(** Data types for tensor computations.\n\n    This module defines two levels of data type and their union:\n\n    - {!Val.t} — value dtypes (a {!scalar} identity with a vector width).\n    - {!Ptr.t} — pointer dtypes (a base value dtype plus address space, buffer\n      element count, and pointer vector width).\n    - {!t} — the union of both, for IR nodes whose dtype can be either\n      (e.g., [Index]).\n\n    Val-specific operations live in {!Val}, pointer-specific operations in\n    {!Ptr}, and dispatching operations at the top level. Kernel view fields\n    use the most precise type: {!Val.t} for value-only nodes, {!Ptr.t} for\n    pointer-only nodes, and {!t} where either is possible.\n\n    {b Promotion.} Scalar types are organized in a promotion lattice based\n    on {{:https://jax.readthedocs.io/en/latest/jep/9407-type-promotion.html}\n    JAX JEP-9407}. Promotion is total: any pair of numeric types has a\n    common supertype, at the cost of some lossy edges (e.g., [Uint64]\n    promotes through fp8 to reach floats). See {!Val.least_upper_dtype}.\n\n    {b Vectorization.} A value dtype represents [count] lanes of a\n    [scalar]. Most operations work on the scalar component and ignore\n    [count]. Use {!Val.vec} and {!Val.scalarize} to convert.\n\n    {b Index.} The {!Index} scalar is a symbolic type for loop counters and\n    address arithmetic. It does not participate in promotion and is lowered\n    to [int32] or [int64] by backends. *)\n\n(** {1:scalars Scalar types} *)\n\n(** Scalar data type identity. 
Ordered by promotion priority from {!Bool}\n    (lowest, priority [0]) to {!Float64} (highest, priority [14]). {!Void} and\n    {!Index} have priority [-1]. *)\ntype scalar =\n  | Void  (** Absence of a value. Has zero bitsize. *)\n  | Bool  (** Boolean. 1 bit. *)\n  | Int8  (** Signed 8-bit integer. *)\n  | Int16  (** Signed 16-bit integer. *)\n  | Int32  (** Signed 32-bit integer. *)\n  | Int64  (** Signed 64-bit integer. *)\n  | Uint8  (** Unsigned 8-bit integer. *)\n  | Uint16  (** Unsigned 16-bit integer. *)\n  | Uint32  (** Unsigned 32-bit integer. *)\n  | Uint64  (** Unsigned 64-bit integer. *)\n  | Float16  (** IEEE 754 binary16 (half precision). *)\n  | Bfloat16  (** Brain floating-point 16: 8-bit exponent, 7-bit mantissa. *)\n  | Float32  (** IEEE 754 binary32 (single precision). *)\n  | Float64  (** IEEE 754 binary64 (double precision). *)\n  | Fp8e4m3  (** 8-bit float: 4-bit exponent, 3-bit mantissa. *)\n  | Fp8e5m2  (** 8-bit float: 5-bit exponent, 2-bit mantissa. *)\n  | Index\n      (** Symbolic index type for loop counters and address arithmetic. Uses 800\n          bits as a non-machine sentinel. Does not participate in dtype\n          promotion. *)\n\n(** {1:addr_spaces Address spaces} *)\n\n(** GPU memory address space. *)\ntype addr_space =\n  | Global  (** Global device memory. *)\n  | Local  (** Shared/local memory (workgroup scope). *)\n  | Reg  (** Register storage. *)\n\n(** {1:val_mod Value dtypes} *)\n\nmodule Val : sig\n  type t\n  (** Value dtype: a {!scalar} identity with a vector width ([count]).\n      Invariant: [count >= 0]; [count = 0] only when [scalar = Index]. *)\n\n  (** {2 Accessors} *)\n\n  val scalar : t -> scalar\n  (** [scalar dt] is the element type of [dt]. *)\n\n  val count : t -> int\n  (** [count dt] is the vector width of [dt]. [1] for scalar types. *)\n\n  (** {2 Constructors} *)\n\n  val of_scalar : scalar -> t\n  (** [of_scalar s] is the scalar dtype for [s] with [count = 1]. 
*)\n\n  val void : t\n  val bool : t\n  val int8 : t\n  val int16 : t\n  val int32 : t\n  val int64 : t\n  val uint8 : t\n  val uint16 : t\n  val uint32 : t\n  val uint64 : t\n  val float16 : t\n  val bfloat16 : t\n  val float32 : t\n  val float64 : t\n  val fp8e4m3 : t\n  val fp8e5m2 : t\n  val index : t\n\n  val default_float : t\n  (** [default_float] is {!float32}. Used by {!least_upper_float} and\n      {!sum_acc_dtype} when promoting non-float types to a floating-point\n      domain. *)\n\n  val default_int : t\n  (** [default_int] is {!int32}. Used by {!sum_acc_dtype} to widen narrow\n      integer accumulators. *)\n\n  (** {2 Transformers} *)\n\n  val scalarize : t -> t\n  (** [scalarize dt] is [dt] with [count = 1].\n\n      See also {!vec}. *)\n\n  val vec : int -> t -> t\n  (** [vec n dt] is a vector type with [n] lanes of [scalar dt].\n\n      If [n = 1] or [scalar dt] is {!Void}, returns [dt] unchanged.\n      [vec 0 index] is permitted for empty shape vectors.\n\n      Raises [Invalid_argument] if [count dt <> 1] (already vectorized),\n      [n < 0], or [n = 0] on a non-{!Index} dtype.\n\n      See also {!scalarize}. *)\n\n  val with_scalar : scalar -> t -> t\n  (** [with_scalar s dt] is [dt] with its scalar identity replaced by [s]. *)\n\n  (** {2 Predicates} *)\n\n  val is_float : t -> bool\n  (** [is_float dt] is [true] iff [scalar dt] is a floating-point type. *)\n\n  val is_int : t -> bool\n  (** [is_int dt] is [true] iff [scalar dt] is an integer type (including\n      {!Index}). *)\n\n  val is_unsigned : t -> bool\n  (** [is_unsigned dt] is [true] iff [scalar dt] is unsigned. *)\n\n  val is_bool : t -> bool\n  (** [is_bool dt] is [true] iff [scalar dt] is {!Bool}. *)\n\n  val is_fp8 : t -> bool\n  (** [is_fp8 dt] is [true] iff [scalar dt] is {!Fp8e4m3} or {!Fp8e5m2}. *)\n\n  (** {2 Properties} *)\n\n  val bitsize : t -> int\n  (** [bitsize dt] is the total size in bits (scalar bit width × count). 
*)\n\n  val itemsize : t -> int\n  (** [itemsize dt] is {!bitsize} rounded up to bytes. *)\n\n  val priority : t -> int\n  (** [priority dt] is the promotion priority of [scalar dt]. *)\n\n  (** {2 Promotion} *)\n\n  val least_upper_dtype : t list -> t\n  (** [least_upper_dtype ts] is the least upper bound of [ts] in the promotion\n      lattice. The result always has [count = 1].\n\n      Promotion is total for numeric types: any pair has a common supertype.\n      Some edges are lossy (e.g., [Int64] to [Uint64] loses negative values,\n      [Uint64] to fp8 loses most precision).\n\n      Raises [Invalid_argument] if [ts] is empty or contains {!Index}.\n\n      See also {!least_upper_float} and {!can_lossless_cast}. *)\n\n  val least_upper_float : t -> t\n  (** [least_upper_float t] is [scalarize t] if [t] is floating-point, or\n      [least_upper_dtype [scalarize t; float32]] otherwise.\n\n      See also {!least_upper_dtype}. *)\n\n  val can_lossless_cast : t -> t -> bool\n  (** [can_lossless_cast src dst] is [true] iff every value representable by\n      [scalar src] is exactly representable by [scalar dst]. {!Bool} casts\n      losslessly to any type. {!Index} accepts all integer types as lossless\n      sources.\n\n      This checks exact representability, not promotion: for example,\n      [can_lossless_cast int32 float32] is [false] (float32 cannot represent\n      all 32-bit integers).\n\n      See also {!least_upper_dtype}. *)\n\n  val sum_acc_dtype : t -> t\n  (** [sum_acc_dtype dt] is the accumulator dtype for sum-like reductions\n      over [dt]. The result always has [count = 1].\n\n      Widening rules:\n      - Unsigned integers promote to at least {!uint32}.\n      - Signed integers and booleans promote to at least {!int32}.\n      - Floats promote to at least {!float32}.\n\n      Raises [Invalid_argument] if [scalar dt] is {!Index}. 
*)\n\n  (** {2 Comparison} *)\n\n  val equal : t -> t -> bool\n  (** [equal a b] is [true] iff [a] and [b] have the same scalar and count. *)\n\n  val compare : t -> t -> int\n  (** [compare a b] is a total order over value dtypes. Orders first by scalar\n      promotion priority and bit width, then by count. *)\n\n  (** {2 Formatting} *)\n\n  val to_string : t -> string\n  (** [to_string dt] is the short name of [dt]. For scalar types this is\n      the scalar name (e.g., [\"f32\"]). For vector types it appends the count\n      (e.g., [\"f32×4\"]). *)\n\n  val pp : Format.formatter -> t -> unit\n  (** [pp] formats a value dtype using {!to_string}. *)\nend\n\n(** {1:ptr_mod Pointer dtypes} *)\n\nmodule Ptr : sig\n  type t\n  (** Pointer dtype: a base {!Val.t} plus address space, buffer element count,\n      and pointer vector width.\n      Invariant: pointer vector width [>= 1]. *)\n\n  (** {2 Accessors} *)\n\n  val scalar : t -> scalar\n  (** [scalar p] is the scalar identity of the base value dtype. *)\n\n  val count : t -> int\n  (** [count p] is the vector width of the base value dtype. *)\n\n  val addrspace : t -> addr_space\n  (** [addrspace p] is the memory address space of [p]. *)\n\n  val v : t -> int\n  (** [v p] is the pointer vector width of [p]. *)\n\n  val size : t -> int\n  (** [size p] is the element count of [p], or [-1] for unbounded. *)\n\n  val base : t -> Val.t\n  (** [base p] is the pointed-to value dtype. *)\n\n  (** {2 Constructors} *)\n\n  val create : Val.t -> addrspace:addr_space -> size:int -> t\n  (** [create base ~addrspace ~size] is a pointer to [base] in [addrspace]\n      with [size] elements. Pointer vector width defaults to [1]. *)\n\n  val create_v : Val.t -> addrspace:addr_space -> size:int -> v:int -> t\n  (** [create_v base ~addrspace ~size ~v] is like {!create} with explicit\n      pointer vector width [v].\n\n      Raises [Invalid_argument] if [v < 1]. 
*)\n\n  (** {2 Transformers} *)\n\n  val scalarize : t -> t\n  (** [scalarize p] is [p] with pointer vector width [1] and base\n      [count = 1]. *)\n\n  val vec : int -> t -> t\n  (** [vec n p] is [p] with pointer vector width [n].\n\n      Raises [Invalid_argument] if [n < 1]. *)\n\n  val with_base : Val.t -> t -> t\n  (** [with_base dt p] is [p] with base value dtype replaced by [dt]. *)\n\n  val with_size : int -> t -> t\n  (** [with_size n p] is [p] with element count [n]. *)\n\n  (** {2 Comparison} *)\n\n  val equal : t -> t -> bool\n  (** [equal a b] is [true] iff all fields of [a] and [b] are\n      structurally equal (base, addrspace, v, size).\n\n      {b Note.} IR validators may use partial field comparisons (e.g.,\n      index validation ignores size). Those are intentionally different\n      from this structural equality. *)\n\n  val compare : t -> t -> int\n  (** [compare a b] is a total order over pointer types. *)\n\n  (** {2 Formatting} *)\n\n  val to_string : t -> string\n  (** [to_string p] is a human-readable representation of [p] (e.g.,\n      [\"f32* \\[global\\]\"]). *)\n\n  val pp : Format.formatter -> t -> unit\n  (** [pp] formats a pointer dtype using {!to_string}. *)\nend\n\n(** {1:types Unified dtype} *)\n\ntype t = Val of Val.t | Ptr of Ptr.t\n(** A dtype that is either a value or a pointer. Used in IR nodes whose\n    dtype can be either (e.g., [Index] nodes that may or may not carry\n    pointer semantics). *)\n\n(** {2:dispatch_access Dispatching accessors} *)\n\nval scalar : t -> scalar\n(** [scalar dt] is the scalar identity.\n    [scalar (Val v)] is [Val.scalar v].\n    [scalar (Ptr p)] is [Ptr.scalar p]. *)\n\nval count : t -> int\n(** [count dt] is the value vector width.\n    [count (Val v)] is [Val.count v].\n    [count (Ptr p)] is [Ptr.count p]. *)\n\nval vcount : t -> int\n(** [vcount dt] is the vector count.\n    [vcount (Val v)] is [Val.count v].\n    [vcount (Ptr p)] is [Ptr.v p]. 
*)\n\nval is_ptr : t -> bool\n(** [is_ptr dt] is [true] iff [dt] is [Ptr _]. *)\n\nval val_of : t -> Val.t\n(** [val_of (Val v)] is [v].\n    [val_of (Ptr p)] is [Ptr.base p]. *)\n\n(** {2:dispatch_transform Dispatching transformers} *)\n\nval scalarize : t -> t\n(** [scalarize dt] dispatches to {!Val.scalarize} or {!Ptr.scalarize},\n    preserving the [Val]/[Ptr] wrapper. *)\n\nval vec : int -> t -> t\n(** [vec n dt] dispatches to {!Val.vec} or {!Ptr.vec}, preserving the\n    [Val]/[Ptr] wrapper. *)\n\n(** {2:predicates Predicates} *)\n\nval is_float : t -> bool\n(** [is_float dt] is [true] iff [scalar dt] is a floating-point type\n    ({!Float16}, {!Bfloat16}, {!Float32}, {!Float64}, {!Fp8e4m3}, or\n    {!Fp8e5m2}). *)\n\nval is_int : t -> bool\n(** [is_int dt] is [true] iff [scalar dt] is a signed or unsigned integer\n    type, including {!Index}. *)\n\nval is_unsigned : t -> bool\n(** [is_unsigned dt] is [true] iff [scalar dt] is one of {!Uint8},\n    {!Uint16}, {!Uint32}, or {!Uint64}. *)\n\nval is_bool : t -> bool\n(** [is_bool dt] is [true] iff [scalar dt] is {!Bool}. *)\n\nval is_fp8 : t -> bool\n(** [is_fp8 dt] is [true] iff [scalar dt] is {!Fp8e4m3} or {!Fp8e5m2}. *)\n\n(** {2:properties Properties} *)\n\nval bitsize : t -> int\n(** [bitsize dt] is the total size of [dt] in bits, i.e., the scalar bit\n    width multiplied by [count dt]. {!Void} has bitsize [0]. {!Index} has\n    a sentinel bitsize of [800]. *)\n\nval itemsize : t -> int\n(** [itemsize dt] is [{!bitsize} dt] rounded up to the nearest byte. *)\n\nval priority : t -> int\n(** [priority dt] is the promotion priority of [scalar dt]. Higher priority\n    types absorb lower ones in {!Val.least_upper_dtype}. Ranges from [-1]\n    ({!Void}, {!Index}) through [0] ({!Bool}) to [14] ({!Float64}). *)\n\n(** {2:bounds Bounds} *)\n\ntype bound =\n  [ `Bool of bool | `SInt of int64 | `UInt of int64 | `Float of float ]\n(** Numeric bounds for dtypes. 
Returned by {!min} and {!max}.\n\n    - [`Bool b] for boolean bounds.\n    - [`SInt n] for signed integer bounds (including {!Index}, which\n      approximates with [Int64] bounds).\n    - [`UInt n] for unsigned integer bounds. Values are raw 64-bit unsigned bit\n      patterns in [int64] (e.g., uint64 max is [`UInt Int64.minus_one]).\n    - [`Float f] for floating-point bounds ([-infinity] and [infinity]). *)\n\nval min : t -> bound\n(** [min dt] is the smallest value representable by [scalar dt].\n\n    Raises [Invalid_argument] if [scalar dt] is {!Void}.\n\n    See also {!max}. *)\n\nval max : t -> bound\n(** [max dt] is the largest value representable by [scalar dt].\n\n    Raises [Invalid_argument] if [scalar dt] is {!Void}.\n\n    See also {!min}. *)\n\nval finfo : t -> int * int\n(** [finfo dt] is [(exponent_bits, mantissa_bits)] for the floating-point dtype\n    [dt]. For example, [finfo (Val float32)] is [(8, 23)] and\n    [finfo (Val float16)] is [(5, 10)].\n\n    Raises [Invalid_argument] if not floating-point.\n\n    See also {!is_float}. *)\n\n(** {2:comparison Comparison} *)\n\nval equal : t -> t -> bool\n(** [equal a b] is [true] iff [a] and [b] carry the same dtype. *)\n\nval compare : t -> t -> int\n(** [compare a b] is a total order over {!t} values. *)\n\n(** {2:fmt Formatting} *)\n\nval to_string : t -> string\n(** [to_string dt] formats [dt] using {!Val.to_string} or {!Ptr.to_string}. *)\n\nval pp : Format.formatter -> t -> unit\n(** [pp] formats a dtype using {!to_string}. *)\n\n(** {1:scalar_fmt Scalar formatting} *)\n\nval scalar_to_string : scalar -> string\n(** [scalar_to_string s] is the short name of [s] (e.g., [\"f32\"], [\"i64\"],\n    [\"bool\"], [\"void\"], [\"index\"]). *)\n\nval pp_scalar : Format.formatter -> scalar -> unit\n(** [pp_scalar] formats a {!scalar} using {!scalar_to_string}. *)\n\nval addr_space_to_string : addr_space -> string\n(** [addr_space_to_string a] is [\"global\"], [\"local\"], or [\"reg\"]. 
*)\n\nval pp_addr_space : Format.formatter -> addr_space -> unit\n(** [pp_addr_space] formats an address space. *)\n\n(** {1:cnames C type names} *)\n\nval scalar_cname : scalar -> string\n(** [scalar_cname s] is the C-language type name for [s], used by codegen\n    renderers as a fallback when no device-specific type map override exists.\n    For example, [\"int\"] for {!Int32}, [\"signed char\"] for {!Int8}, [\"half\"] for\n    {!Float16}, [\"__bf16\"] for {!Bfloat16}. *)\n\n(** {1:convenience Convenience constructors}\n\n    Value dtype constants wrapped as {!t} for direct use in any context\n    expecting a unified dtype. For the unwrapped {!Val.t} versions, use\n    the {!Val} module directly. *)\n\nval of_scalar : scalar -> t\n(** [of_scalar s] is [Val (Val.of_scalar s)]. *)\n\nval void : t\nval bool : t\nval int8 : t\nval int16 : t\nval int32 : t\nval int64 : t\nval uint8 : t\nval uint16 : t\nval uint32 : t\nval uint64 : t\nval float16 : t\nval bfloat16 : t\nval float32 : t\nval float64 : t\nval fp8e4m3 : t\nval fp8e5m2 : t\nval index : t\nval default_float : t\nval default_int : t\n\n(** {1:fp_conv Floating-point conversion}\n\n    Precision-truncation utilities for constant folding. These functions round\n    or convert between floating-point precisions using round-to-nearest-even\n    semantics. *)\n\nval float_to_fp16 : float -> float\n(** [float_to_fp16 x] rounds [x] to IEEE 754 binary16 (half) precision using\n    round-to-nearest-even. The result is a [float] holding the exact\n    half-representable value. Overflows to infinity, underflows to zero or\n    denormal, preserves NaN and infinity. *)\n\nval float_to_bf16 : float -> float\n(** [float_to_bf16 x] rounds [x] to bfloat16 precision using\n    round-to-nearest-even. Non-finite values pass through unchanged. 
*)\n\nval float_to_fp8 : scalar -> float -> int\n(** [float_to_fp8 s x] converts [x] to an fp8 byte value.\n\n    Raises [Invalid_argument] if [s] is not {!Fp8e4m3} or {!Fp8e5m2}.\n\n    See also {!fp8_to_float}. *)\n\nval fp8_to_float : scalar -> int -> float\n(** [fp8_to_float s byte] converts fp8 byte value [byte] to a [float].\n\n    Raises [Invalid_argument] if [s] is not {!Fp8e4m3} or {!Fp8e5m2}.\n\n    See also {!float_to_fp8}. *)\n\nval truncate_float : Val.t -> float -> float\n(** [truncate_float dt x] truncates [x] to the precision of floating-point\n    dtype [dt]. For {!Float64} this is the identity. For {!Float32} it\n    round-trips through [Int32.bits_of_float]. For narrower types it uses\n    {!float_to_fp16}, {!float_to_bf16}, or the fp8 conversion pair.\n\n    Raises [Invalid_argument] if [dt] is not floating-point.\n\n    See also {!truncate_int}. *)\n\nval truncate_int : Val.t -> int -> int\n(** [truncate_int dt x] truncates integer [x] to the range of integer dtype\n    [dt]. Unsigned types use bitwise masking. Signed types and {!Index} use\n    modular arithmetic with sign extension. {!Bool} maps nonzero to [1] and\n    zero to [0].\n\n    Raises [Invalid_argument] if [dt] is not an integer, index, or bool type.\n\n    See also {!truncate_float}. *)\n"
  },
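The `float_to_bf16` declaration above relies on a bit-level round-to-nearest-even: add a bias of `0x7FFF` plus the parity of the bit that will become the result's LSB, then clear the low 16 bits of the float32 pattern. A self-contained sketch of that rounding step (mirroring the implementation in `dtype.ml`, not a separate algorithm):

```ocaml
(* Round a float to bfloat16 precision via its float32 bit pattern.
   Adding (0x7FFF + lsb) implements round-to-nearest, ties-to-even. *)
let bf16_round x =
  if not (Float.is_finite x) then x
  else
    let u = Int32.bits_of_float x in
    let lsb = Int32.logand (Int32.shift_right_logical u 16) 1l in
    let u = Int32.add u (Int32.add 0x7FFFl lsb) in
    Int32.float_of_bits (Int32.logand u 0xFFFF_0000l)

let () =
  assert (bf16_round 1.0 = 1.0);
  (* 1 + 2^-8 sits exactly halfway between 1.0 and the next bf16 value;
     the tie resolves toward the even mantissa, i.e. down to 1.0. *)
  assert (bf16_round 1.00390625 = 1.0)
```

Note that `Int32.bits_of_float` already rounds the input to binary32, so the only extra loss here is the discarded low 16 mantissa bits.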
  {
    "path": "packages/tolk/lib/ir/dune",
    "content": "(include_subdirs no)\n\n(library\n (name tolk_ir)\n (public_name tolk.ir)\n (libraries unix))\n"
  },
  {
    "path": "packages/tolk/lib/ir/hashcons.ml",
    "content": "(**************************************************************************)\n(*                                                                        *)\n(*  Copyright (C) Jean-Christophe Filliatre                               *)\n(*                                                                        *)\n(*  This software is free software; you can redistribute it and/or        *)\n(*  modify it under the terms of the GNU Library General Public           *)\n(*  License version 2.1, with the special exception on linking            *)\n(*  described in file LICENSE.                                            *)\n(*                                                                        *)\n(*  This software is distributed in the hope that it will be useful,      *)\n(*  but WITHOUT ANY WARRANTY; without even the implied warranty of        *)\n(*  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.                  *)\n(*                                                                        *)\n(**************************************************************************)\n\n(* Vendored from https://github.com/backtracking/hashcons\n   Modifications:\n   - Removed the non-functorial generic interface.\n   - Removed [Hmap] and [Hset]. 
*)\n\ntype +'a hash_consed = { hkey : int; tag : int; node : 'a }\n\nlet tag_counter = ref 0\nlet gentag () = incr tag_counter; !tag_counter\nlet gentag_peek () = !tag_counter\n\nmodule type HashedType = sig\n  type t\n  val equal : t -> t -> bool\n  val hash : t -> int\nend\n\nmodule type S = sig\n  type key\n  type t\n  val create : int -> t\n  val clear : t -> unit\n  val hashcons : t -> key -> key hash_consed\n  val iter : (key hash_consed -> unit) -> t -> unit\n  val stats : t -> int * int * int * int * int * int\nend\n\nmodule Make (H : HashedType) : S with type key = H.t = struct\n  type key = H.t\n  type data = H.t hash_consed\n\n  type t = {\n    mutable table : data Weak.t array;\n    mutable totsize : int;\n    mutable limit : int;\n  }\n\n  let emptybucket = Weak.create 0\n\n  let create sz =\n    let sz = if sz < 7 then 7 else sz in\n    let sz = if sz > Sys.max_array_length then Sys.max_array_length else sz in\n    { table = Array.make sz emptybucket; totsize = 0; limit = 3 }\n\n  let clear t =\n    for i = 0 to Array.length t.table - 1 do\n      t.table.(i) <- emptybucket\n    done;\n    t.totsize <- 0;\n    t.limit <- 3\n\n  let iter f t =\n    let rec iter_bucket i b =\n      if i >= Weak.length b then ()\n      else\n        match Weak.get b i with\n        | Some v -> f v; iter_bucket (i + 1) b\n        | None -> iter_bucket (i + 1) b\n    in\n    Array.iter (iter_bucket 0) t.table\n\n  let count t =\n    let rec count_bucket i b accu =\n      if i >= Weak.length b then accu\n      else count_bucket (i + 1) b (accu + if Weak.check b i then 1 else 0)\n    in\n    Array.fold_right (count_bucket 0) t.table 0\n\n  let next_sz n = min (3 * n / 2 + 3) (Sys.max_array_length - 1)\n\n  let rec resize t =\n    let oldlen = Array.length t.table in\n    let newlen = next_sz oldlen in\n    if newlen > oldlen then begin\n      let newt = create newlen in\n      newt.limit <- t.limit + 100;\n      iter (fun d -> add newt d) t;\n      t.table <- newt.table\n    
end\n\n  and add t d =\n    let index = d.hkey mod Array.length t.table in\n    let bucket = t.table.(index) in\n    let sz = Weak.length bucket in\n    let rec loop i =\n      if i >= sz then begin\n        let newsz = min (3 * sz / 2 + 3) (Sys.max_array_length - 1) in\n        if newsz <= sz then\n          failwith \"Hashcons.Make: hash bucket cannot grow more\";\n        let newbucket = Weak.create newsz in\n        Weak.blit bucket 0 newbucket 0 sz;\n        Weak.set newbucket i (Some d);\n        t.table.(index) <- newbucket;\n        t.totsize <- t.totsize + (newsz - sz);\n        if t.totsize > t.limit * Array.length t.table then resize t\n      end else begin\n        if Weak.check bucket i then loop (i + 1)\n        else Weak.set bucket i (Some d)\n      end\n    in\n    loop 0\n\n  let hashcons t d =\n    let hkey = H.hash d land max_int in\n    let index = hkey mod Array.length t.table in\n    let bucket = t.table.(index) in\n    let sz = Weak.length bucket in\n    let rec loop i =\n      if i >= sz then begin\n        let hnode = { hkey; tag = gentag (); node = d } in\n        add t hnode;\n        hnode\n      end else\n        match Weak.get bucket i with\n        | Some v when H.equal v.node d -> begin\n            match Weak.get bucket i with\n            | Some v -> v\n            | None -> loop (i + 1)\n          end\n        | _ -> loop (i + 1)\n    in\n    loop 0\n\n  let stats t =\n    let len = Array.length t.table in\n    let lens = Array.map Weak.length t.table in\n    Array.sort compare lens;\n    let totlen = Array.fold_left ( + ) 0 lens in\n    (len, count t, totlen, lens.(0), lens.(len / 2), lens.(len - 1))\nend\n"
  },
  {
    "path": "packages/tolk/lib/ir/hashcons.mli",
    "content": "(**************************************************************************)\n(*                                                                        *)\n(*  Copyright (C) Jean-Christophe Filliatre                               *)\n(*                                                                        *)\n(*  This software is free software; you can redistribute it and/or        *)\n(*  modify it under the terms of the GNU Library General Public           *)\n(*  License version 2.1, with the special exception on linking            *)\n(*  described in file LICENSE.                                            *)\n(*                                                                        *)\n(*  This software is distributed in the hope that it will be useful,      *)\n(*  but WITHOUT ANY WARRANTY; without even the implied warranty of        *)\n(*  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.                  *)\n(*                                                                        *)\n(**************************************************************************)\n\n(* Vendored from https://github.com/backtracking/hashcons\n   Modifications:\n   - Removed the non-functorial generic interface ([create], [hashcons],\n     etc. at the top level) — we only use the [Make] functor.\n   - Removed [Hmap] and [Hset] — we use [Ref_tbl] (keyed by integer tag)\n     instead of Patricia-tree maps/sets. *)\n\n(*s Hash tables for hash consing.\n\n    The technique is described in this paper:\n      Sylvain Conchon and Jean-Christophe Filliâtre.\n      Type-Safe Modular Hash-Consing.\n      In ACM SIGPLAN Workshop on ML, Portland, Oregon, September 2006.\n      https://www.lri.fr/~filliatr/ftp/publis/hash-consing2.pdf\n\n    Hash consed values are of the\n    following type [hash_consed]. The field [tag] contains a unique\n    integer (for values hash consed with the same table). 
The field\n    [hkey] contains the hash key of the value (without modulo) for\n    possible use in other hash tables (and internally when hash\n    consing tables are resized). The field [node] contains the value\n    itself.\n\n    Hash consing tables are using weak pointers, so that values that are no\n    more referenced from anywhere else can be erased by the GC. *)\n\ntype +'a hash_consed = private {\n  hkey: int;\n  tag : int;\n  node: 'a;\n}\n\nval gentag_peek : unit -> int\n\n(*s Functorial interface. *)\n\nmodule type HashedType =\n  sig\n    type t\n    val equal : t -> t -> bool\n    val hash : t -> int\n  end\n\nmodule type S =\n  sig\n    type key\n    type t\n    val create : int -> t\n    val clear : t -> unit\n    val hashcons : t -> key -> key hash_consed\n    val iter : (key hash_consed -> unit) -> t -> unit\n    val stats : t -> int * int * int * int * int * int\n  end\n\nmodule Make(H : HashedType) : (S with type key = H.t)\n"
  },
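The point of `Hashcons.Make` above is that structurally equal keys intern to one physically shared node with a unique integer `tag`. A minimal illustration of that contract, deliberately using a plain `Hashtbl` instead of the vendored weak arrays (so interned nodes are never collected; this is a simplification, not the library's implementation):

```ocaml
(* Same record shape as Hashcons.hash_consed: a cached hash, a unique
   tag, and the value itself. *)
type 'a hash_consed = { hkey : int; tag : int; node : 'a }

let intern : (string, string hash_consed) Hashtbl.t = Hashtbl.create 16
let counter = ref 0

let hashcons s =
  match Hashtbl.find_opt intern s with
  | Some n -> n  (* already interned: return the shared node *)
  | None ->
      incr counter;
      let n = { hkey = Hashtbl.hash s; tag = !counter; node = s } in
      Hashtbl.add intern s n;
      n

let () =
  let a = hashcons "f32" and b = hashcons "f32" and c = hashcons "i64" in
  assert (a == b);         (* equal keys share one node: physical equality *)
  assert (a.tag <> c.tag)  (* distinct keys get distinct tags *)
```

With weak buckets, as in the vendored module, the table additionally lets the GC reclaim nodes that are no longer referenced elsewhere, which is why `kernel.ml` can hash children by their stable `tag` in `shallow_hash_view`.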
  {
    "path": "packages/tolk/lib/ir/kernel.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\n(* Types *)\n\ntype bufferize_device =\n  | Device_single of string\n  | Device_multi of string list\n  | Device_index of int\n\ntype sort = Value | Pointer | Index | Effect\n\nmodule Opt = struct\n  type t =\n    | Local of { axis : int; amount : int }\n    | Upcast of { axis : int; amount : int }\n    | Unroll of { axis : int; amount : int }\n    | Group of { axis : int; amount : int }\n    | Grouptop of { axis : int; amount : int }\n    | Thread of { axis : int; amount : int }\n    | Nolocals\n    | Tc of { axis : int; tc_select : int; tc_opt : int; use_tc : int }\n    | Padto of { axis : int; amount : int }\n    | Swap of { axis : int; with_axis : int }\n\n  let to_string = function\n    | Local { axis; amount } -> Printf.sprintf \"LOCAL:%d:%d\" axis amount\n    | Upcast { axis; amount } -> Printf.sprintf \"UPCAST:%d:%d\" axis amount\n    | Unroll { axis; amount } -> Printf.sprintf \"UNROLL:%d:%d\" axis amount\n    | Group { axis; amount } -> Printf.sprintf \"GROUP:%d:%d\" axis amount\n    | Grouptop { axis; amount } -> Printf.sprintf \"GROUPTOP:%d:%d\" axis amount\n    | Thread { axis; amount } -> Printf.sprintf \"THREAD:%d:%d\" axis amount\n    | Nolocals -> \"NOLOCALS\"\n    | Tc { axis; tc_select; tc_opt; use_tc } ->\n        Printf.sprintf \"TC:%d:%d:%d:%d\" axis tc_select tc_opt use_tc\n    | Padto { axis; amount } -> Printf.sprintf \"PADTO:%d:%d\" axis amount\n    | Swap { axis; with_axis } -> Printf.sprintf \"SWAP:%d:%d\" axis with_axis\n\n  let pp fmt t = Format.pp_print_string fmt (to_string t)\n\n  let axis = function\n    | Local { axis; _ } | Upcast { axis; _ } | Unroll { axis; _ }\n    | Group { axis; _ } | 
Grouptop { axis; _ } | Thread { axis; _ }\n    | Tc { axis; _ } | Padto { axis; _ } | Swap { axis; _ } -> Some axis\n    | Nolocals -> None\n\n  let amount = function\n    | Local { amount; _ } | Upcast { amount; _ } | Unroll { amount; _ }\n    | Group { amount; _ } | Grouptop { amount; _ } | Thread { amount; _ }\n    | Padto { amount; _ } -> Some amount\n    | Tc _ | Swap _ | Nolocals -> None\n\n  let with_amount t amt = match t with\n    | Local r -> Local { r with amount = amt }\n    | Upcast r -> Upcast { r with amount = amt }\n    | Unroll r -> Unroll { r with amount = amt }\n    | Group r -> Group { r with amount = amt }\n    | Grouptop r -> Grouptop { r with amount = amt }\n    | Thread r -> Thread { r with amount = amt }\n    | Padto r -> Padto { r with amount = amt }\n    | (Tc _ | Swap _ | Nolocals) as t -> t\nend\n\ntype bufferize_opts = {\n  device : bufferize_device option;\n  addrspace : Dtype.addr_space;\n  removable : bool;\n}\n\ntype tagged_view = { view : view; tag : string option }\n\nand t = tagged_view Hashcons.hash_consed\n\nand estimate = Int of int | Symbolic of t\n\nand estimates = { ops : estimate; lds : estimate; mem : estimate }\n\nand kernel_info = {\n  name : string;\n  axis_kinds : Axis_kind.t list;\n  dont_use_locals : bool;\n  applied_opts : Opt.t list;\n  opts_to_apply : Opt.t list option;\n  estimates : estimates option;\n}\n\nand view =\n  | Sink of { srcs : t list; kernel_info : kernel_info option }\n  | Group of { srcs : t list }\n  | After of { src : t; deps : t list }\n  | Param of { idx : int; dtype : Dtype.Ptr.t }\n  | Param_image of { idx : int; dtype : Dtype.Ptr.t; width : int; height : int }\n  | Define_local of { size : int; dtype : Dtype.Ptr.t }\n  | Define_reg of { size : int; dtype : Dtype.Ptr.t; slot : int }\n  | Define_var of { name : string; lo : int; hi : int; dtype : Dtype.Val.t }\n  | Bufferize of {\n      src : t;\n      ranges : t list;\n      dtype : Dtype.Ptr.t;\n      opts : bufferize_opts;\n    }\n  | 
Const of { value : Const.t; dtype : Dtype.Val.t }\n  | Vconst of { values : Const.t list; dtype : Dtype.Val.t }\n  | Invalid_index of { dtype : Dtype.Val.t }\n  | Index of { ptr : t; idxs : t list; gate : t option; dtype : Dtype.t }\n  | Ptrcat of { srcs : t list; dtype : Dtype.Ptr.t }\n  | Load of { src : t; alt : t option; dtype : Dtype.Val.t }\n  | Store of { dst : t; value : t; ranges : t list }\n  | Unary of { op : Op.unary; src : t; dtype : Dtype.Val.t }\n  | Binary of { op : Op.binary; lhs : t; rhs : t; dtype : Dtype.Val.t }\n  | Ternary of { op : Op.ternary; a : t; b : t; c : t; dtype : Dtype.Val.t }\n  | Cast of { src : t; dtype : Dtype.t }\n  | Bitcast of { src : t; dtype : Dtype.Val.t }\n  | Vectorize of { srcs : t list; dtype : Dtype.t }\n  | Vcat of { srcs : t list; dtype : Dtype.Val.t }\n  | Gep of { src : t; idxs : int list; dtype : Dtype.Val.t }\n  | Range of { size : t; dtype : Dtype.Val.t; axis : int; sub : int list; kind : Axis_kind.t }\n  | End of { value : t; ranges : t list }\n  | Barrier\n  | Special of { dim : Special_dim.t; size : t; dtype : Dtype.Val.t }\n  | Reduce of { op : Op.reduce; src : t; ranges : t list; dtype : Dtype.Val.t }\n  | Unroll of { src : t; axes : (int * int) list; dtype : Dtype.Val.t }\n  | Contract of { src : t; axes : (int * int) list; dtype : Dtype.Val.t }\n  | Wmma of {\n      name : string;\n      a : t;\n      b : t;\n      c : t;\n      dtype : Dtype.Val.t;\n      dims : int * int * int;\n      dtype_in : Dtype.scalar;\n      dtype_out : Dtype.scalar;\n      device : string;\n      threads : int;\n      upcast_axes : (int * int) list * (int * int) list * (int * int) list;\n      reduce_axes : int list;\n    }\n  | Custom of { fmt : string; args : t list }\n  | Custom_inline of { fmt : string; args : t list; dtype : Dtype.Val.t }\n\n(* Building *)\n\nlet view node = node.Hashcons.node.view\nlet tag node = node.Hashcons.node.tag\n\n(* Shallow hash of a view: hash the variant tag, children by their unique\n   
hashcons tag (physical identity), and non-child fields by generic hash.\n   Children are already interned, so their tag is stable. *)\nlet shallow_hash_view v =\n  let h = ref (Hashtbl.hash (Obj.tag (Obj.repr v))) in\n  let add_id node = h := !h * 31 + node.Hashcons.tag in\n  let add x = h := !h * 31 + Hashtbl.hash x in\n  (match v with\n   | Sink { srcs; kernel_info } -> List.iter add_id srcs; add kernel_info\n   | Group { srcs } -> List.iter add_id srcs\n   | After { src; deps } -> add_id src; List.iter add_id deps\n   | Param { idx; dtype } -> add idx; add dtype\n   | Param_image { idx; dtype; width; height } -> add idx; add dtype; add width; add height\n   | Define_local { size; dtype } -> add size; add dtype\n   | Define_reg { size; dtype; slot } -> add size; add dtype; add slot\n   | Define_var { name; lo; hi; dtype } -> add name; add lo; add hi; add dtype\n   | Bufferize { src; ranges; dtype; opts } -> add_id src; List.iter add_id ranges; add dtype; add opts\n   | Const { value; dtype } ->\n       (match Const.view value with\n        | Bool b -> add b | Int i -> add i\n        | Float f -> add (Int64.bits_of_float f));\n       add dtype\n   | Vconst { values; dtype } ->\n       List.iter (fun v ->\n         match Const.view v with\n         | Bool b -> add b | Int i -> add i\n         | Float f -> add (Int64.bits_of_float f)) values;\n       add dtype\n   | Invalid_index { dtype } -> add dtype\n   | Index { ptr; idxs; gate; dtype } -> add_id ptr; List.iter add_id idxs; (match gate with Some g -> add_id g | None -> ()); add dtype\n   | Ptrcat { srcs; dtype } -> List.iter add_id srcs; add dtype\n   | Load { src; alt; dtype } -> add_id src; (match alt with Some a -> add_id a | None -> ()); add dtype\n   | Store { dst; value; ranges } -> add_id dst; add_id value; List.iter add_id ranges\n   | Unary { op; src; dtype } -> add op; add_id src; add dtype\n   | Binary { op; lhs; rhs; dtype } -> add op; add_id lhs; add_id rhs; add dtype\n   | Ternary { op; a; b; c; 
dtype } -> add op; add_id a; add_id b; add_id c; add dtype\n   | Cast { src; dtype } -> add_id src; add dtype\n   | Bitcast { src; dtype } -> add_id src; add dtype\n   | Vectorize { srcs; dtype } -> List.iter add_id srcs; add dtype\n   | Vcat { srcs; dtype } -> List.iter add_id srcs; add dtype\n   | Gep { src; idxs; dtype } -> add_id src; add idxs; add dtype\n   | Range { size; dtype; axis; sub; kind } -> add_id size; add dtype; add axis; add sub; add kind\n   | End { value; ranges } -> add_id value; List.iter add_id ranges\n   | Barrier -> ()\n   | Special { dim; size; dtype } -> add dim; add_id size; add dtype\n   | Reduce { op; src; ranges; dtype } -> add op; add_id src; List.iter add_id ranges; add dtype\n   | Unroll { src; axes; dtype } -> add_id src; add axes; add dtype\n   | Contract { src; axes; dtype } -> add_id src; add axes; add dtype\n   | Wmma { name; a; b; c; dtype; dims; dtype_in; dtype_out; device; threads; upcast_axes; reduce_axes } ->\n       add name; add_id a; add_id b; add_id c; add dtype; add dims; add dtype_in; add dtype_out; add device; add threads; add upcast_axes; add reduce_axes\n   | Custom { fmt; args } -> add fmt; List.iter add_id args\n   | Custom_inline { fmt; args; dtype } -> add fmt; List.iter add_id args; add dtype);\n  !h\n\n(* Shallow equality: same variant, children physically equal, non-child\n   fields structurally equal. 
*)\nlet shallow_equal_view v1 v2 =\n  match v1, v2 with\n  | Sink { srcs = s1; kernel_info = k1 }, Sink { srcs = s2; kernel_info = k2 } ->\n      List.length s1 = List.length s2 && List.for_all2 (==) s1 s2 && k1 = k2\n  | Group { srcs = s1 }, Group { srcs = s2 } ->\n      List.length s1 = List.length s2 && List.for_all2 (==) s1 s2\n  | After { src = s1; deps = d1 }, After { src = s2; deps = d2 } ->\n      s1 == s2 && List.length d1 = List.length d2 && List.for_all2 (==) d1 d2\n  | Param r1, Param r2 -> r1.idx = r2.idx && Dtype.Ptr.equal r1.dtype r2.dtype\n  | Param_image r1, Param_image r2 ->\n      r1.idx = r2.idx && Dtype.Ptr.equal r1.dtype r2.dtype && r1.width = r2.width && r1.height = r2.height\n  | Define_local r1, Define_local r2 -> r1.size = r2.size && Dtype.Ptr.equal r1.dtype r2.dtype\n  | Define_reg r1, Define_reg r2 -> r1.size = r2.size && Dtype.Ptr.equal r1.dtype r2.dtype && r1.slot = r2.slot\n  | Define_var r1, Define_var r2 ->\n      r1.name = r2.name && r1.lo = r2.lo && r1.hi = r2.hi && Dtype.Val.equal r1.dtype r2.dtype\n  | Bufferize { src = s1; ranges = r1; dtype = d1; opts = o1 },\n    Bufferize { src = s2; ranges = r2; dtype = d2; opts = o2 } ->\n      s1 == s2 && List.length r1 = List.length r2 && List.for_all2 (==) r1 r2 && Dtype.Ptr.equal d1 d2 && o1 = o2\n  | Const r1, Const r2 -> Const.equal r1.value r2.value && Dtype.Val.equal r1.dtype r2.dtype\n  | Vconst r1, Vconst r2 ->\n      Dtype.Val.equal r1.dtype r2.dtype && List.length r1.values = List.length r2.values\n      && List.for_all2 Const.equal r1.values r2.values\n  | Invalid_index r1, Invalid_index r2 -> Dtype.Val.equal r1.dtype r2.dtype\n  | Index { ptr = p1; idxs = i1; gate = g1; dtype = d1 },\n    Index { ptr = p2; idxs = i2; gate = g2; dtype = d2 } ->\n      p1 == p2 && List.length i1 = List.length i2 && List.for_all2 (==) i1 i2\n      && (match g1, g2 with None, None -> true | Some a, Some b -> a == b | _ -> false) && Dtype.equal d1 d2\n  | Ptrcat { srcs = s1; dtype = d1 }, Ptrcat { 
srcs = s2; dtype = d2 } ->\n      List.length s1 = List.length s2 && List.for_all2 (==) s1 s2 && Dtype.Ptr.equal d1 d2\n  | Load { src = s1; alt = a1; dtype = d1 }, Load { src = s2; alt = a2; dtype = d2 } ->\n      s1 == s2 && (match a1, a2 with None, None -> true | Some x, Some y -> x == y | _ -> false) && Dtype.Val.equal d1 d2\n  | Store { dst = d1; value = v1; ranges = r1 }, Store { dst = d2; value = v2; ranges = r2 } ->\n      d1 == d2 && v1 == v2 && List.length r1 = List.length r2 && List.for_all2 (==) r1 r2\n  | Unary { op = o1; src = s1; dtype = d1 }, Unary { op = o2; src = s2; dtype = d2 } ->\n      o1 = o2 && s1 == s2 && Dtype.Val.equal d1 d2\n  | Binary { op = o1; lhs = l1; rhs = r1; dtype = d1 }, Binary { op = o2; lhs = l2; rhs = r2; dtype = d2 } ->\n      o1 = o2 && l1 == l2 && r1 == r2 && Dtype.Val.equal d1 d2\n  | Ternary { op = o1; a = a1; b = b1; c = c1; dtype = d1 },\n    Ternary { op = o2; a = a2; b = b2; c = c2; dtype = d2 } ->\n      o1 = o2 && a1 == a2 && b1 == b2 && c1 == c2 && Dtype.Val.equal d1 d2\n  | Cast { src = s1; dtype = d1 }, Cast { src = s2; dtype = d2 } -> s1 == s2 && Dtype.equal d1 d2\n  | Bitcast { src = s1; dtype = d1 }, Bitcast { src = s2; dtype = d2 } -> s1 == s2 && Dtype.Val.equal d1 d2\n  | Vectorize { srcs = s1; dtype = d1 }, Vectorize { srcs = s2; dtype = d2 } ->\n      List.length s1 = List.length s2 && List.for_all2 (==) s1 s2 && Dtype.equal d1 d2\n  | Vcat { srcs = s1; dtype = d1 }, Vcat { srcs = s2; dtype = d2 } ->\n      List.length s1 = List.length s2 && List.for_all2 (==) s1 s2 && Dtype.Val.equal d1 d2\n  | Gep { src = s1; idxs = i1; dtype = d1 }, Gep { src = s2; idxs = i2; dtype = d2 } ->\n      s1 == s2 && i1 = i2 && Dtype.Val.equal d1 d2\n  | Range r1, Range r2 ->\n      r1.size == r2.size && Dtype.Val.equal r1.dtype r2.dtype && r1.axis = r2.axis && r1.sub = r2.sub && r1.kind = r2.kind\n  | End { value = v1; ranges = r1 }, End { value = v2; ranges = r2 } ->\n      v1 == v2 && List.length r1 = List.length r2 && 
List.for_all2 (==) r1 r2\n  | Barrier, Barrier -> true\n  | Special { dim = d1; size = s1; dtype = t1 }, Special { dim = d2; size = s2; dtype = t2 } ->\n      d1 = d2 && s1 == s2 && Dtype.Val.equal t1 t2\n  | Reduce { op = o1; src = s1; ranges = r1; dtype = d1 }, Reduce { op = o2; src = s2; ranges = r2; dtype = d2 } ->\n      o1 = o2 && s1 == s2 && List.length r1 = List.length r2 && List.for_all2 (==) r1 r2 && Dtype.Val.equal d1 d2\n  | Unroll { src = s1; axes = a1; dtype = d1 }, Unroll { src = s2; axes = a2; dtype = d2 } ->\n      s1 == s2 && a1 = a2 && Dtype.Val.equal d1 d2\n  | Contract { src = s1; axes = a1; dtype = d1 }, Contract { src = s2; axes = a2; dtype = d2 } ->\n      s1 == s2 && a1 = a2 && Dtype.Val.equal d1 d2\n  | Wmma w1, Wmma w2 ->\n      w1.name = w2.name && w1.a == w2.a && w1.b == w2.b && w1.c == w2.c\n      && Dtype.Val.equal w1.dtype w2.dtype && w1.dims = w2.dims && w1.dtype_in = w2.dtype_in\n      && w1.dtype_out = w2.dtype_out && w1.device = w2.device && w1.threads = w2.threads\n      && w1.upcast_axes = w2.upcast_axes && w1.reduce_axes = w2.reduce_axes\n  | Custom { fmt = f1; args = a1 }, Custom { fmt = f2; args = a2 } ->\n      f1 = f2 && List.length a1 = List.length a2 && List.for_all2 (==) a1 a2\n  | Custom_inline { fmt = f1; args = a1; dtype = d1 }, Custom_inline { fmt = f2; args = a2; dtype = d2 } ->\n      f1 = f2 && List.length a1 = List.length a2 && List.for_all2 (==) a1 a2 && Dtype.Val.equal d1 d2\n  | _ -> false\n\nlet shallow_hash_tagged tv =\n  shallow_hash_view tv.view * 31 + Hashtbl.hash tv.tag\n\nlet shallow_equal_tagged tv1 tv2 =\n  tv1.tag = tv2.tag && shallow_equal_view tv1.view tv2.view\n\nmodule View_hc = Hashcons.Make (struct\n  type t = tagged_view\n  let equal = shallow_equal_tagged\n  let hash = shallow_hash_tagged\nend)\n\nlet hc_table = View_hc.create 4096\nlet mk ?tag v = View_hc.hashcons hc_table { view = v; tag }\nlet with_tag t node = mk ~tag:t (view node)\nlet sink ?kernel_info srcs = mk (Sink { srcs; 
kernel_info })\nlet group srcs = match srcs with [x] -> x | _ -> mk (Group { srcs })\nlet after ~src ~deps = match deps with [] -> src | _ -> mk (After { src; deps })\nlet param ~idx ~dtype = mk (Param { idx; dtype })\n\nlet param_image ~idx ~dtype ~width ~height =\n  mk (Param_image { idx; dtype; width; height })\n\nlet define_local ~size ~dtype = mk (Define_local { size; dtype })\nlet define_reg ~size ~dtype ~slot = mk (Define_reg { size; dtype; slot })\n\nlet define_var ~name ~lo ~hi ?(dtype = Dtype.Val.index) () =\n  mk (Define_var { name; lo; hi; dtype })\n\nlet bufferize ~src ~ranges ~dtype ~opts = mk (Bufferize { src; ranges; dtype; opts })\nlet const value = mk (Const { value; dtype = Const.dtype value })\nlet vconst ~values ~dtype = mk (Vconst { values; dtype })\n\nlet invalid_index ?(lanes = 1) () =\n  mk (Invalid_index { dtype = Dtype.Val.vec lanes Dtype.Val.index })\n\nlet rec get_ptr_dtype node = match node.Hashcons.node.view with\n  | Param { dtype; _ }\n  | Param_image { dtype; _ }\n  | Define_local { dtype; _ }\n  | Define_reg { dtype; _ }\n  | Bufferize { dtype; _ }\n  | Ptrcat { dtype; _ } ->\n      Some dtype\n  | Index { dtype = Dtype.Ptr p; _ } -> Some p\n  | Index { dtype = Dtype.Val _; _ } -> None\n  | Vectorize { dtype = Dtype.Ptr p; _ } -> Some p\n  | Vectorize { dtype = Dtype.Val _; _ } -> None\n  | Cast { dtype = Dtype.Ptr p; _ } -> Some p\n  | After { src; _ } | Cast { src; _ } | Bitcast { src; _ } -> get_ptr_dtype src\n  | _ -> None\n\nlet pointer_dtype_exn ctx node =\n  match get_ptr_dtype node with\n  | Some dtype -> dtype\n  | None -> Printf.ksprintf invalid_arg \"Kernel.%s expects a pointer node\" ctx\n\nlet rec node_dtype node = match node.Hashcons.node.view with\n  | Param { dtype; _ } | Param_image { dtype; _ } | Define_local { dtype; _ }\n  | Define_reg { dtype; _ } | Bufferize { dtype; _ }\n  | Ptrcat { dtype; _ } ->\n      Some (Dtype.Ptr.base dtype)\n  | Index { dtype; _ } | Cast { dtype; _ } | Vectorize { dtype; _ } ->\n     
 Some (Dtype.val_of dtype)\n  | Define_var { dtype; _ } | Const { dtype; _ } | Vconst { dtype; _ }\n  | Invalid_index { dtype; _ }\n  | Load { dtype; _ } | Unary { dtype; _ } | Binary { dtype; _ }\n  | Ternary { dtype; _ } | Bitcast { dtype; _ }\n  | Vcat { dtype; _ } | Gep { dtype; _ }\n  | Range { dtype; _ } | Special { dtype; _ } | Reduce { dtype; _ }\n  | Unroll { dtype; _ } | Contract { dtype; _ } | Wmma { dtype; _ }\n  | Custom_inline { dtype; _ } ->\n      Some dtype\n  | Sink _ | Group _ | After _ | Store _ | End _ | Barrier | Custom _ -> None\n\nlet node_any_dtype node = match node.Hashcons.node.view with\n  | Param { dtype; _ } | Param_image { dtype; _ } | Define_local { dtype; _ }\n  | Define_reg { dtype; _ } | Bufferize { dtype; _ }\n  | Ptrcat { dtype; _ } ->\n      Some (Dtype.Ptr dtype)\n  | Index { dtype; _ } | Cast { dtype; _ } | Vectorize { dtype; _ } ->\n      Some dtype\n  | Define_var { dtype; _ } | Const { dtype; _ } | Vconst { dtype; _ }\n  | Invalid_index { dtype; _ }\n  | Load { dtype; _ } | Unary { dtype; _ } | Binary { dtype; _ }\n  | Ternary { dtype; _ } | Bitcast { dtype; _ }\n  | Vcat { dtype; _ } | Gep { dtype; _ }\n  | Range { dtype; _ } | Special { dtype; _ } | Reduce { dtype; _ }\n  | Unroll { dtype; _ } | Contract { dtype; _ } | Wmma { dtype; _ }\n  | Custom_inline { dtype; _ } ->\n      Some (Dtype.Val dtype)\n  | Sink _ | Group _ | After _ | Store _ | End _ | Barrier | Custom _ -> None\n\nlet value_dtype_exn ctx node =\n  match node_dtype node with\n  | Some dtype -> dtype\n  | None -> Printf.ksprintf invalid_arg \"Kernel.%s expects a value dtype\" ctx\n\nlet index ~ptr ~idxs ?gate ?(as_ptr = true) () =\n  let pty = pointer_dtype_exn \"index\" ptr in\n  let dtype =\n    if as_ptr then Dtype.Ptr pty else Dtype.Val (Dtype.Ptr.base pty)\n  in\n  mk (Index { ptr; idxs; gate; dtype })\n\nlet index_raw ~ptr ~idxs ?gate ~dtype () =\n  mk (Index { ptr; idxs; gate; dtype })\n\nlet ptrcat ~srcs ~dtype = mk (Ptrcat { srcs; dtype })\n\nlet 
load ~src ?alt () =\n  let dtype = Dtype.Ptr.base (pointer_dtype_exn \"load\" src) in\n  mk (Load { src; alt; dtype })\n\nlet store ~dst ~value ~ranges = mk (Store { dst; value; ranges })\nlet unary ~op ~src = mk (Unary { op; src; dtype = value_dtype_exn \"unary\" src })\n\nlet binary ~op ~lhs ~rhs =\n  let lhs_dtype = value_dtype_exn \"binary\" lhs in\n  let dtype = match op with\n    | `Cmplt | `Cmpeq | `Cmpne ->\n        Dtype.Val.vec (Dtype.Val.count lhs_dtype) Dtype.Val.bool\n    | _ -> lhs_dtype\n  in\n  mk (Binary { op; lhs; rhs; dtype })\n\nlet ternary ~op ~a ~b ~c =\n  let dtype = match op with\n    | `Where -> value_dtype_exn \"ternary\" b\n    | `Mulacc -> value_dtype_exn \"ternary\" a\n  in\n  mk (Ternary { op; a; b; c; dtype })\n\nlet cast ~src ~dtype = mk (Cast { src; dtype })\nlet bitcast ~src ~dtype = mk (Bitcast { src; dtype })\n\nlet vectorize ~srcs =\n  match srcs with\n  | [] -> invalid_arg \"Kernel.vectorize expects at least one source\"\n  | src :: rest ->\n      let count = 1 + List.length rest in\n      let dtype : Dtype.t =\n        match get_ptr_dtype src with\n        | Some pty -> Dtype.Ptr pty |> Dtype.vec count\n        | None ->\n            let dt = value_dtype_exn \"vectorize\" src in\n            Dtype.Val dt |> Dtype.scalarize |> Dtype.vec count\n      in\n      mk (Vectorize { srcs; dtype })\n\nlet vcat ~srcs =\n  match srcs with\n  | [] -> invalid_arg \"Kernel.vcat expects at least one source\"\n  | _ ->\n      let dtypes = List.map (value_dtype_exn \"vcat\") srcs in\n      let first = List.hd dtypes in\n      let total_count =\n        List.fold_left\n          (fun acc dtype ->\n            if Dtype.Val.scalar dtype <> Dtype.Val.scalar first then\n              invalid_arg \"Kernel.vcat expects a common scalar dtype\";\n            acc + Dtype.Val.count dtype)\n          0 dtypes\n      in\n      mk (Vcat { srcs; dtype = Dtype.Val.vec total_count (Dtype.Val.scalarize first) })\n\n(* Eager GEP folding.\n   VECTORIZE → extract 
lane.  CONST → scalar const.\n   Everything else → create GEP node (no source validation). *)\nlet gep ~src ~idx =\n  match src.Hashcons.node.view with\n  | Vectorize { srcs; _ } when idx >= 0 && idx < List.length srcs ->\n      List.nth srcs idx\n  | Const { value; _ } ->\n      mk (Const { value; dtype = Dtype.Val.scalarize (Const.dtype value) })\n  | Vconst { values; dtype } when idx >= 0 && idx < List.length values ->\n      mk (Const { value = List.nth values idx; dtype = Dtype.Val.scalarize dtype })\n  | _ -> (\n      match node_dtype src with\n      | Some dt -> mk (Gep { src; idxs = [idx]; dtype = Dtype.Val.scalarize dt })\n      | None -> mk (Gep { src; idxs = [idx]; dtype = Dtype.Val.void }))\n\nlet range ~size ~axis ?(sub = []) ~kind ?(dtype = Dtype.Val.index) () =\n  mk (Range { size; dtype; axis; sub; kind })\n\nlet end_ ~value ~ranges ?tag () =\n  mk ?tag (End { value; ranges })\nlet barrier = mk Barrier\nlet special ~dim ~size ?(dtype = Dtype.Val.int32) () = mk (Special { dim; size; dtype })\nlet reduce ~op ~src ~ranges ~dtype = mk (Reduce { op; src; ranges; dtype })\nlet unroll ~src ~axes ~dtype = mk (Unroll { src; axes; dtype })\nlet contract ~src ~axes ~dtype = mk (Contract { src; axes; dtype })\n\nlet wmma ~name ~a ~b ~c ~dtype ~dims ~dtype_in ~dtype_out ~device ~threads\n    ~upcast_axes ~reduce_axes =\n  mk (Wmma {\n      name; a; b; c; dtype; dims; dtype_in; dtype_out;\n      device; threads; upcast_axes; reduce_axes })\n\nlet custom ~fmt ~args = mk (Custom { fmt; args })\nlet custom_inline ~fmt ~args ~dtype = mk (Custom_inline { fmt; args; dtype })\n\nlet gep_multi ~src ~idxs =\n  match idxs with\n  | [] -> invalid_arg \"Kernel.gep_multi expects at least one index\"\n  | [ idx ] ->\n      let is_scalar_zero =\n        match node_dtype src with Some dt -> Dtype.Val.count dt = 1 && idx = 0 | None -> false\n      in\n      if is_scalar_zero then src else gep ~src ~idx\n  | _ ->\n      let dt = match node_dtype src with Some dt -> dt | None -> 
Dtype.Val.void in\n      if Dtype.Val.equal dt Dtype.Val.void then src\n      else\n        let scalar = Dtype.Val.scalarize dt in\n        mk (Gep { src; idxs; dtype = Dtype.Val.vec (List.length idxs) scalar })\n\nlet broadcast node n =\n  if n <= 1 then node\n  else match get_ptr_dtype node with\n    | Some pty ->\n        let copies = List.init n (fun _ -> node) in\n        mk (Vectorize { srcs = copies; dtype = Dtype.Ptr pty |> Dtype.vec n })\n    | None ->\n        match node_dtype node with\n        | None -> node\n        | Some dt ->\n            let copies = List.init n (fun _ -> node) in\n            if Dtype.Val.count dt = 1 then mk (Vectorize { srcs = copies; dtype = Dtype.Val dt |> Dtype.vec n })\n            else mk (Vcat { srcs = copies; dtype = Dtype.Val.vec (Dtype.Val.count dt * n) (Dtype.Val.scalarize dt) })\n\nlet const_int n = mk (Const { value = Const.int Dtype.Val.index n; dtype = Dtype.Val.index })\nlet const_float x = mk (Const { value = Const.float Dtype.Val.float32 x; dtype = Dtype.Val.float32 })\nlet const_bool b = mk (Const { value = Const.bool b; dtype = Dtype.Val.bool })\n\nlet zero_like node =\n  match node_dtype node with\n  | None -> invalid_arg \"Kernel.zero_like: node has no dtype\"\n  | Some dt ->\n      let scalar_dt = Dtype.Val.scalarize dt in\n      let value =\n        if Dtype.Val.is_float dt then Const.float scalar_dt 0.0\n        else if Dtype.Val.scalar dt = Dtype.Bool then Const.bool false\n        else Const.int scalar_dt 0\n      in\n      mk (Const { value; dtype = dt })\n\n(* Inspecting *)\n\nlet dtype_opt = node_any_dtype\n\nlet dtype node =\n  match node_any_dtype node with\n  | Some dt -> dt\n  | None -> invalid_arg \"Kernel.dtype: node has no dtype\"\n\nlet ptr_dtype node = pointer_dtype_exn \"ptr_dtype\" node\n\nlet sort node =\n  match node.Hashcons.node.view with\n  | Param _ | Param_image _ | Define_local _ | Define_reg _ | Bufferize _\n  | Index { dtype = Dtype.Ptr _; _ } | Vectorize { dtype = Dtype.Ptr _; _ 
}\n  | Cast { dtype = Dtype.Ptr _; _ } | Ptrcat _ ->\n      Pointer\n  | Sink _ | Group _ | After _ | Store _ | End _ | Barrier | Custom _ -> Effect\n  | Define_var _ | Invalid_index _ | Range _ | Special _ -> Index\n  | _ -> (\n      match node_dtype node with\n      | Some dtype when Dtype.Val.scalar dtype = Dtype.Index -> Index\n      | _ -> Value)\n\nlet children node = match node.Hashcons.node.view with\n  | Sink { srcs; _ } | Group { srcs } -> srcs\n  | After { src; deps } -> src :: deps\n  | Param _ | Param_image _ | Define_local _ | Define_reg _ | Define_var _\n  | Const _ | Vconst _ | Invalid_index _ | Barrier ->\n      []\n  | Bufferize { src; ranges; _ } -> src :: ranges\n  | Index { ptr; idxs; gate; _ } -> (ptr :: idxs) @ Option.to_list gate\n  | Ptrcat { srcs; _ } -> srcs\n  | Load { src; alt; _ } -> src :: Option.to_list alt\n  | Store { dst; value; ranges } -> dst :: value :: ranges\n  | Unary { src; _ }\n  | Cast { src; _ }\n  | Bitcast { src; _ }\n  | Gep { src; _ }\n  | Unroll { src; _ }\n  | Contract { src; _ } ->\n      [src]\n  | Range { size; _ } | Special { size; _ } -> [ size ]\n  | End { value; ranges } -> value :: ranges\n  | Binary { lhs; rhs; _ } -> [ lhs; rhs ]\n  | Ternary { a; b; c; _ } -> [ a; b; c ]\n  | Vectorize { srcs; _ } | Vcat { srcs; _ } -> srcs\n  | Reduce { src; ranges; _ } -> src :: ranges\n  | Wmma { a; b; c; _ } -> [ a; b; c ]\n  | Custom { args; _ } | Custom_inline { args; _ } -> args\n\nlet replace node ?children:childs ?(dtype : Dtype.t option) () =\n  let src =\n    Array.of_list (match childs with Some c -> c | None -> children node)\n  in\n  let pos = ref 0 in\n  let take () =\n    let v = src.(!pos) in\n    incr pos;\n    v\n  in\n  let take_n n = List.init n (fun _ -> take ()) in\n  let take_opt present = if present then Some (take ()) else None in\n  let take_rest () =\n    let len = Array.length src in\n    let r = List.init (len - !pos) (fun j -> src.(!pos + j)) in\n    pos := len;\n    r\n  in\n  (* For nodes 
with Val.t dtype fields, extract val_of from the unified dtype. *)\n  let vdt old = match dtype with Some d -> Dtype.val_of d | None -> old in\n  let new_view =\n    match node.Hashcons.node.view with\n    | Sink { kernel_info; _ } -> Sink { srcs = take_rest (); kernel_info }\n    | Group _ -> Group { srcs = take_rest () }\n    | After _ ->\n        let s = take () in\n        After { src = s; deps = take_rest () }\n    | Param _ | Param_image _ | Define_local _ | Define_reg _ | Barrier ->\n        node.Hashcons.node.view\n    | Define_var { name; lo; hi; dtype = old_dt } ->\n        Define_var { name; lo; hi; dtype = vdt old_dt }\n    | Const { value; dtype = old_dt } ->\n        Const { value; dtype = vdt old_dt }\n    | Vconst { values; dtype = old_dt } ->\n        Vconst { values; dtype = vdt old_dt }\n    | Invalid_index { dtype = old_dt } ->\n        Invalid_index { dtype = vdt old_dt }\n    | Bufferize { dtype = ptr_dt; opts; _ } ->\n        let s = take () in\n        Bufferize { src = s; ranges = take_rest (); dtype = ptr_dt; opts }\n    | Index { idxs; gate; dtype = udt; _ } ->\n        let ptr = take () in\n        let idxs = take_n (List.length idxs) in\n        let gate = take_opt (Option.is_some gate) in\n        let udt = match dtype with\n          | Some d ->\n              (* Preserve the ptr/value distinction: if the Index was ptr-typed,\n                 update the base of the ptr dtype; if value-typed, update the\n                 value dtype. 
*)\n              (match udt with\n               | Dtype.Ptr p -> Dtype.Ptr (Dtype.Ptr.with_base (Dtype.val_of d) p)\n               | Dtype.Val _ -> d)\n          | None -> udt\n        in\n        Index { ptr; idxs; gate; dtype = udt }\n    | Ptrcat { dtype = ptr_dt; _ } -> Ptrcat { srcs = take_rest (); dtype = ptr_dt }\n    | Load { alt; dtype = old_dt; _ } ->\n        let s = take () in\n        let alt = take_opt (Option.is_some alt) in\n        Load { src = s; alt; dtype = vdt old_dt }\n    | Store _ ->\n        let dst = take () in\n        let value = take () in\n        Store { dst; value; ranges = take_rest () }\n    | Unary { op; dtype = old_dt; _ } -> Unary { op; src = take (); dtype = vdt old_dt }\n    | Binary { op; dtype = old_dt; _ } ->\n        let lhs = take () in\n        let rhs = take () in\n        Binary { op; lhs; rhs; dtype = vdt old_dt }\n    | Ternary { op; dtype = old_dt; _ } ->\n        let a = take () in\n        let b = take () in\n        let c = take () in\n        Ternary { op; a; b; c; dtype = vdt old_dt }\n    | Cast { dtype = udt; _ } ->\n        let udt = match dtype with\n          | Some d -> (match udt with Dtype.Ptr _ -> udt | Dtype.Val _ -> d)\n          | None -> udt\n        in\n        Cast { src = take (); dtype = udt }\n    | Bitcast { dtype = old_dt; _ } -> Bitcast { src = take (); dtype = vdt old_dt }\n    | Vectorize { dtype = udt; _ } ->\n        let udt = match dtype with\n          | Some d -> (match udt with Dtype.Ptr _ -> udt | Dtype.Val _ -> d)\n          | None -> udt\n        in\n        Vectorize { srcs = take_rest (); dtype = udt }\n    | Vcat { dtype = old_dt; _ } -> Vcat { srcs = take_rest (); dtype = vdt old_dt }\n    | Gep { idxs; dtype = old_dt; _ } -> Gep { src = take (); idxs; dtype = vdt old_dt }\n    | Range { axis; sub; kind; dtype = old_dt; _ } ->\n        Range { size = take (); dtype = vdt old_dt; axis; sub; kind }\n    | End _ ->\n        let value = take () in\n        End { value; ranges 
= take_rest () }\n    | Special { dim; dtype = old_dt; _ } ->\n        Special { dim; size = take (); dtype = vdt old_dt }\n    | Reduce { op; dtype = old_dt; _ } ->\n        let s = take () in\n        let ranges = take_rest () in\n        Reduce { op; src = s; ranges; dtype = vdt old_dt }\n    | Unroll { axes; dtype = old_dt; _ } ->\n        Unroll { src = take (); axes; dtype = vdt old_dt }\n    | Contract { axes; dtype = old_dt; _ } ->\n        Contract { src = take (); axes; dtype = vdt old_dt }\n    | Wmma ({ dtype = old_dt; _ } as w) ->\n        let a = take () in\n        let b = take () in\n        let c = take () in\n        Wmma { w with a; b; c; dtype = vdt old_dt }\n    | Custom { fmt; _ } -> Custom { fmt; args = take_rest () }\n    | Custom_inline { fmt; dtype = old_dt; _ } ->\n        Custom_inline { fmt; args = take_rest (); dtype = vdt old_dt }\n  in\n  mk ?tag:(tag node) new_view\n\nlet map_children f (instr : view) : view =\n  let fl = List.map f and fo = Option.map f in\n  match instr with\n  | Sink { srcs; kernel_info } -> Sink { srcs = fl srcs; kernel_info }\n  | Group { srcs } -> Group { srcs = fl srcs }\n  | After { src; deps } -> After { src = f src; deps = fl deps }\n  | Param _ | Param_image _ | Define_local _ | Define_reg _ | Define_var _\n  | Const _ | Vconst _ | Invalid_index _ | Barrier ->\n      instr\n  | Bufferize { src; ranges; dtype; opts } ->\n      let src = f src in let ranges = fl ranges in\n      Bufferize { src; ranges; dtype; opts }\n  | Index { ptr; idxs; gate; dtype } ->\n      let ptr = f ptr in let idxs = fl idxs in let gate = fo gate in\n      Index { ptr; idxs; gate; dtype }\n  | Ptrcat { srcs; dtype } -> Ptrcat { srcs = fl srcs; dtype }\n  | Load { src; alt; dtype } ->\n      let src = f src in let alt = fo alt in\n      Load { src; alt; dtype }\n  | Store { dst; value; ranges } ->\n      let dst = f dst in let value = f value in let ranges = fl ranges in\n      Store { dst; value; ranges }\n  | Unary { op; src; 
dtype } -> Unary { op; src = f src; dtype }\n  | Binary { op; lhs; rhs; dtype } ->\n      let lhs = f lhs in let rhs = f rhs in\n      Binary { op; lhs; rhs; dtype }\n  | Ternary { op; a; b; c; dtype } ->\n      let a = f a in let b = f b in let c = f c in\n      Ternary { op; a; b; c; dtype }\n  | Cast { src; dtype } -> Cast { src = f src; dtype }\n  | Bitcast { src; dtype } -> Bitcast { src = f src; dtype }\n  | Vectorize { srcs; dtype } -> Vectorize { srcs = fl srcs; dtype }\n  | Vcat { srcs; dtype } -> Vcat { srcs = fl srcs; dtype }\n  | Gep { src; idxs; dtype } -> Gep { src = f src; idxs; dtype }\n  | Range { size; dtype; axis; sub; kind } ->\n      Range { size = f size; dtype; axis; sub; kind }\n  | End { value; ranges } ->\n      let value = f value in let ranges = fl ranges in\n      End { value; ranges }\n  | Special { dim; size; dtype } -> Special { dim; size = f size; dtype }\n  | Reduce { op; src; ranges; dtype } ->\n      let src = f src in let ranges = fl ranges in\n      Reduce { op; src; ranges; dtype }\n  | Unroll { src; axes; dtype } -> Unroll { src = f src; axes; dtype }\n  | Contract { src; axes; dtype } -> Contract { src = f src; axes; dtype }\n  | Wmma w ->\n      let a = f w.a in let b = f w.b in let c = f w.c in\n      Wmma { w with a; b; c }\n  | Custom { fmt; args } -> Custom { fmt; args = fl args }\n  | Custom_inline { fmt; args; dtype } ->\n      Custom_inline { fmt; args = fl args; dtype }\n\n(* Hash tables *)\n\nmodule Tbl = Hashtbl.Make (struct\n  type nonrec t = t\n  let equal = ( = )\n  let hash node = node.Hashcons.tag\nend)\n\nmodule Ref_tbl = (Hashtbl.Make (struct\n  type nonrec t = t\n  let equal = ( == )\n  let hash node = node.Hashcons.tag\nend) : Hashtbl.S with type key = t)\n\nlet toposort root =\n  let state = Ref_tbl.create 256 in\n  let order = ref [] in\n  let rec visit node = match Ref_tbl.find_opt state node with\n    | Some 2 -> ()\n    | Some 1 -> failwith \"Kernel.toposort: cyclic graph\"\n    | Some _ -> assert 
false\n    | None ->\n        Ref_tbl.add state node 1;\n        List.iter visit (children node);\n        Ref_tbl.replace state node 2;\n        order := node :: !order\n  in\n  visit root;\n  List.rev !order\n\nlet intern (root : t) : t =\n  let nodes = toposort root in\n  let n = List.length nodes in\n  let canon = Ref_tbl.create n in\n  let lookup node = match Ref_tbl.find_opt canon node with Some n -> n | None -> node in\n  List.iter\n    (fun original ->\n      let changed = List.exists (fun c -> lookup c != c) (children original) in\n      let shared =\n        if changed then\n          mk ?tag:(tag original) (map_children lookup (view original))\n        else original\n      in\n      Ref_tbl.replace canon original shared)\n    nodes;\n  lookup root\n\nlet const_arg node = match node.Hashcons.node.view with\n  | Const { value; _ } -> Some (Const.view value)\n  | _ -> None\n\nlet is_alu node = match node.Hashcons.node.view with\n  | Unary _ | Binary _ | Ternary _ -> true\n  | _ -> false\n\nlet is_ptr node = Option.is_some (get_ptr_dtype node)\n\nlet first_match rules node =\n  List.find_map (fun rule -> rule node) rules\n\n(* Validation *)\n\nlet validate root =\n  let nodes = toposort root in\n  let ids = Ref_tbl.create (List.length nodes) in\n  List.iteri (fun i node -> Ref_tbl.add ids node i) nodes;\n  let id node = match Ref_tbl.find_opt ids node with Some i -> i | None -> -1 in\n  let fail node msg =\n    Printf.ksprintf failwith \"Kernel.validate: instruction %d: %s\" (id node) msg\n  in\n  let rec get_dtype node =\n    match node.Hashcons.node.view with\n    | After { src; _ } -> get_dtype src\n    | End { value; _ } -> get_dtype value\n    | _ -> node_dtype node\n  in\n  let check_dtype_eq node ~ctx ~expected ~got =\n    match (expected, got) with\n    | Some e, Some g when Dtype.Val.equal e g -> ()\n    | Some e, Some g ->\n        fail node\n          (Printf.sprintf \"%s: expected %s, got %s\" ctx (Dtype.Val.to_string e)\n             
(Dtype.Val.to_string g))\n    | None, _ ->\n        fail node (Printf.sprintf \"%s: expected dtype not available\" ctx)\n    | _, None ->\n        fail node (Printf.sprintf \"%s: operand dtype not available\" ctx)\n  in\n  let check_dtype_match node ~ctx dt1 dt2 =\n    match (dt1, dt2) with\n    | Some d1, Some d2 when Dtype.Val.equal d1 d2 -> ()\n    | Some _, Some _ ->\n        fail node (Printf.sprintf \"%s: operand dtypes don't match\" ctx)\n    | _ -> fail node (Printf.sprintf \"%s: operand dtype not available\" ctx)\n  in\n  let check_bool_scalar node ~ctx value =\n    match get_dtype value with\n    | Some dt when Dtype.Val.scalar dt = Dtype.Bool && Dtype.Val.count dt = 1 -> ()\n    | Some _ -> fail node (Printf.sprintf \"%s must be bool scalar\" ctx)\n    | None -> fail node (Printf.sprintf \"%s dtype not available\" ctx)\n  in\n  let check_bool node ~ctx value =\n    match get_dtype value with\n    | Some dt when Dtype.Val.scalar dt = Dtype.Bool -> ()\n    | Some _ -> fail node (Printf.sprintf \"%s must be bool\" ctx)\n    | None -> fail node (Printf.sprintf \"%s dtype not available\" ctx)\n  in\n  let check_shift_rhs node rhs dtype =\n    match get_dtype rhs with\n    | Some dt when Dtype.Val.equal dt dtype || Dtype.Val.equal dt Dtype.Val.uint32 -> ()\n    | Some _ -> fail node \"shift rhs must match lhs dtype or be uint32\"\n    | None -> fail node \"shift rhs dtype not available\"\n  in\n  let check_index_like node ~ctx value =\n    match get_dtype value with\n    | Some dt when Dtype.Val.scalar dt = Dtype.Index || Dtype.Val.scalar dt = Dtype.Int32 -> ()\n    | Some _ -> fail node (Printf.sprintf \"%s must be index-like\" ctx)\n    | None -> fail node (Printf.sprintf \"%s dtype not available\" ctx)\n  in\n  let check_horizontal_reduce_src node ~src ~dtype =\n    match get_dtype src with\n    | Some src_dtype when Dtype.Val.equal src_dtype dtype -> ()\n    | Some src_dtype\n      when Dtype.Val.scalar src_dtype = Dtype.Val.scalar dtype\n           && 
Dtype.Val.count src_dtype >= Dtype.Val.count dtype\n           && Dtype.Val.count src_dtype mod Dtype.Val.count dtype = 0 ->\n        ()\n    | Some got ->\n        fail node\n          (Printf.sprintf \"Reduce src: expected %s or horizontal vector, got %s\"\n             (Dtype.Val.to_string dtype) (Dtype.Val.to_string got))\n    | None -> fail node \"Reduce src dtype not available\"\n  in\n  let rec index_base node = match node.Hashcons.node.view with\n    | After { src; _ } | Cast { src; _ } | Bitcast { src; _ } -> index_base src\n    | Param _ | Param_image _ | Define_local _ | Define_reg _ | Bufferize _\n    | Ptrcat _ ->\n        true\n    | Vectorize { srcs; _ } | Vcat { srcs; _ } ->\n        (* After do_expand, buffer ptrs may be wrapped in Vectorize/Vcat *)\n        srcs <> [] && List.for_all index_base srcs\n    | _ -> false\n  in\n  let rec ptr_ref node = match node.Hashcons.node.view with\n    | Index { dtype = Dtype.Ptr pty; gate; _ } -> Some (node, pty, gate)\n    | Index { dtype = Dtype.Val _; _ } -> None\n    | ( Ptrcat { dtype; _ }\n      | Param { dtype; _ }\n      | Param_image { dtype; _ }\n      | Define_local { dtype; _ }\n      | Define_reg { dtype; _ }\n      | Bufferize { dtype; _ } ) ->\n        Some (node, dtype, None)\n    | Gep { src; dtype; _ } -> (\n        match ptr_ref src with\n        | Some (_, pty, gate) ->\n            let pty = Dtype.Ptr.with_base dtype pty in\n            Some (node, pty, gate)\n        | None -> None)\n    | Cast { src; dtype = Dtype.Ptr pty } ->\n        let gate = match ptr_ref src with Some (_, _, g) -> g | None -> None in\n        Some (node, pty, gate)\n    | After { src; _ } | Cast { src; _ } | Bitcast { src; _ } -> ptr_ref src\n    | _ -> None\n  in\n  let prod lst = List.fold_left ( * ) 1 lst in\n  List.iter\n    (fun instr ->\n      match instr.Hashcons.node.view with\n      | Sink _ | Group _ | After _ -> ()\n      | Param { dtype; _ } | Param_image { dtype; _ } ->\n          if Dtype.Ptr.addrspace 
dtype <> Dtype.Global then\n            fail instr \"Param must have Global addrspace\"\n      | Define_local { dtype; _ } ->\n          if Dtype.Ptr.addrspace dtype <> Dtype.Local then\n            fail instr \"Define_local must have Local addrspace\"\n      | Define_reg { dtype; _ } ->\n          if Dtype.Ptr.addrspace dtype <> Dtype.Reg then\n            fail instr \"Define_reg must have Reg addrspace\"\n      | Define_var { lo; hi; dtype; _ } ->\n          if Dtype.Val.count dtype <> 1 then fail instr \"Define_var must be scalar\";\n          if not (Dtype.Val.is_int dtype) then\n            fail instr \"Define_var must be int/index\";\n          if lo > hi then fail instr \"Define_var bounds invalid (lo > hi)\"\n      | Bufferize { ranges; dtype; opts; _ } ->\n          if Dtype.Ptr.addrspace dtype <> opts.addrspace then\n            fail instr \"Bufferize dtype addrspace mismatch\";\n          List.iter (check_index_like instr ~ctx:\"Bufferize range\") ranges\n      | Const { value; dtype } -> (\n          match Const.view value with\n          | Bool _ ->\n              if Dtype.Val.scalar dtype <> Dtype.Bool then\n                fail instr \"Bool const must have bool dtype\"\n          | Int _ ->\n              if not (Dtype.Val.is_int dtype) then\n                fail instr \"Int const must have int/index dtype\"\n          | Float _ ->\n              if not (Dtype.Val.is_float dtype) then\n                fail instr \"Float const must have float dtype\")\n      | Vconst { values; dtype } ->\n          if values = [] then fail instr \"Vconst must have at least one value\";\n          if Dtype.Val.count dtype <> List.length values then\n            fail instr \"Vconst dtype count must match values length\"\n      | Invalid_index { dtype } ->\n          if Dtype.Val.scalar dtype <> Dtype.Index then\n            fail instr \"Invalid_index must have Index dtype\"\n      | Range { size; dtype; _ } ->\n          if not (Dtype.Val.is_int dtype) then\n            
fail instr \"Range must have int/index\";\n          if Dtype.Val.count dtype <> 1 then fail instr \"Range must be scalar\";\n          check_dtype_eq instr ~ctx:\"Range size\" ~expected:(Some dtype)\n            ~got:(get_dtype size)\n      | End { ranges; _ } ->\n          List.iter (check_index_like instr ~ctx:\"End range\") ranges\n      | Barrier -> ()\n      | Special { size; dtype; _ } ->\n          if Dtype.Val.count dtype <> 1 then fail instr \"Special must be scalar\";\n          if not (Dtype.Val.scalar dtype = Dtype.Index || Dtype.Val.scalar dtype = Dtype.Int32) then\n            fail instr \"Special must be index or int32\";\n          check_dtype_eq instr ~ctx:\"Special size\" ~expected:(Some dtype)\n            ~got:(get_dtype size)\n      | Index { ptr; idxs; gate; dtype } -> (\n          if idxs = [] then fail instr \"Index must have at least one index\";\n          if not (index_base ptr) then\n            fail instr \"Index base must be a buffer define/bufferize\";\n          List.iter (check_index_like instr ~ctx:\"Index operand\") idxs;\n          Option.iter (check_bool_scalar instr ~ctx:\"Index gate\") gate;\n          (* Walk through Vectorize/Cast/After to find the underlying ptr dtype *)\n          let rec find_ptr_dtype n = match n.Hashcons.node.view with\n            | Vectorize { srcs = s :: _; _ } | Vcat { srcs = s :: _; _ } -> find_ptr_dtype s\n            | After { src; _ } | Cast { src; _ } | Bitcast { src; _ } -> find_ptr_dtype src\n            | _ -> get_ptr_dtype n\n          in\n          match find_ptr_dtype ptr, dtype with\n          | Some base_pty, Dtype.Ptr pty\n            when Dtype.Val.equal (Dtype.Ptr.base base_pty) (Dtype.Ptr.base pty)\n                 && Dtype.Ptr.addrspace base_pty = Dtype.Ptr.addrspace pty\n                 && Dtype.Ptr.v base_pty = Dtype.Ptr.v pty ->\n              ()\n          | Some base_pty, Dtype.Val dt\n            when Dtype.Val.scalar (Dtype.Ptr.base base_pty) = Dtype.Val.scalar dt ->\n    
          (* Allow vectorized Index: base scalar matches but count may differ *)\n              ()\n          | Some base_pty, Dtype.Ptr pty\n            when Dtype.Val.scalar (Dtype.Ptr.base base_pty) = Dtype.Val.scalar (Dtype.Ptr.base pty) ->\n              (* Allow vectorized Index: scalar matches, addrspace matches *)\n              ()\n          | Some _, _ -> fail instr \"Index dtype must match base pointer type\"\n          | None, _ -> fail instr \"Index base dtype not available\")\n      | Ptrcat { srcs; dtype } ->\n          if srcs = [] then fail instr \"Ptrcat must have at least one source\";\n          let total_vcount = ref 0 in\n          List.iter\n            (fun src ->\n              match ptr_ref src with\n              | Some (_, pty, _) ->\n                  if Dtype.Ptr.addrspace pty <> Dtype.Ptr.addrspace dtype then\n                    fail instr \"Ptrcat addrspace mismatch\";\n                  if not (Dtype.Val.equal (Dtype.Ptr.base pty) (Dtype.Ptr.base dtype)) then\n                    fail instr \"Ptrcat base dtype mismatch\";\n                  total_vcount := !total_vcount + Dtype.Val.count (Dtype.Ptr.base pty)\n              | None -> fail instr \"Ptrcat sources must be pointers\")\n            srcs;\n          if !total_vcount <> Dtype.Ptr.v dtype then fail instr \"Ptrcat vcount mismatch\"\n      | Load { src; alt; dtype } -> (\n          match ptr_ref src with\n          | Some (_, pty, gate) -> (\n              (* Allow widened Load dtype (e.g. f32×4 from f32 pointer).\n                 Intermediate state after do_expand.  Check scalar match. 
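\n                 For instance, a Load with dtype f32×4 through a pointer whose\n                 base is f32 passes, since only the scalar bases are compared;\n                 loading i32 through an f32 pointer is still rejected. 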
*)\n              if Dtype.Val.scalar dtype <> Dtype.Val.scalar (Dtype.Ptr.base pty) then\n                check_dtype_eq instr ~ctx:\"Load dtype\" ~expected:(Some (Dtype.Ptr.base pty))\n                  ~got:(Some dtype);\n              match alt with\n              | None -> ()\n              | Some alt_ref -> (\n                  check_dtype_eq instr ~ctx:\"Load alt\" ~expected:(Some dtype)\n                    ~got:(get_dtype alt_ref);\n                  match gate with\n                  | None -> fail instr \"Load alt requires gated Index\"\n                  | Some _ -> ()))\n          | None -> fail instr \"Load src must reference a pointer\")\n      | Store { dst; value; ranges } -> (\n          List.iter (check_index_like instr ~ctx:\"Store range\") ranges;\n          let dst_ok = match ptr_ref dst with\n            | Some (_, pty, _) -> (\n                (* Allow widened Store value (e.g. i32×4 to i32 pointer).\n                   Intermediate state after do_expand.  Check scalar match. *)\n                match get_dtype value with\n                | Some vdt when Dtype.Val.scalar vdt <> Dtype.Val.scalar (Dtype.Ptr.base pty) ->\n                    check_dtype_eq instr ~ctx:\"Store value\"\n                      ~expected:(Some (Dtype.Ptr.base pty)) ~got:(Some vdt)\n                | _ -> ());\n                true\n            | None -> false\n          in\n          (* Also accept value-typed Index as dst (before pm_add_loads). 
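\n             Concretely, Store { dst = Index { dtype = Dtype.Val _; ... }; ... }\n             is legal at that stage; the has_index walk below also looks through\n             After/Cast/Bitcast wrappers to find such an Index. 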
*)\n          if not dst_ok then\n            let rec has_index n = match n.Hashcons.node.view with\n              | Index { dtype = Dtype.Val _; _ } -> true\n              | After { src; _ } | Cast { src; _ } | Bitcast { src; _ } -> has_index src\n              | _ -> false\n            in\n            if not (has_index dst) then\n              fail instr \"Store dst must reference a pointer or value-typed Index\")\n      | Unary { src; dtype; _ } ->\n          check_dtype_eq instr ~ctx:\"Unary operand\" ~expected:(Some dtype)\n            ~got:(get_dtype src)\n      | Binary { op; lhs; rhs; dtype } -> (\n          let ldt = get_dtype lhs and rdt = get_dtype rhs in\n          match op with\n          | `Shl | `Shr ->\n              check_dtype_eq instr ~ctx:\"Shift lhs\" ~expected:(Some dtype)\n                ~got:ldt;\n              check_shift_rhs instr rhs dtype;\n              if not (Dtype.Val.is_int dtype) then\n                fail instr \"Shift must have int/index dtype\"\n          | `Cmplt | `Cmpeq | `Cmpne ->\n              if Dtype.Val.scalar dtype <> Dtype.Bool then\n                fail instr \"Comparison must produce bool\";\n              check_dtype_match instr ~ctx:\"Comparison operands\" ldt rdt\n          | `Idiv | `Mod ->\n              check_dtype_match instr ~ctx:\"Binary operands\" ldt rdt;\n              check_dtype_eq instr ~ctx:\"Binary result\" ~expected:(Some dtype)\n                ~got:ldt;\n              if not (Dtype.Val.is_int dtype) then\n                fail instr \"Idiv/Mod must have int/index dtype\"\n          | _ ->\n              check_dtype_match instr ~ctx:\"Binary operands\" ldt rdt;\n              check_dtype_eq instr ~ctx:\"Binary result\" ~expected:(Some dtype)\n                ~got:ldt)\n      | Ternary { op; a; b; c; dtype } -> (\n          match op with\n          | `Where ->\n              check_bool instr ~ctx:\"Where condition\" a;\n              check_dtype_match instr ~ctx:\"Where arms\" (get_dtype b)\n       
         (get_dtype c);\n              check_dtype_eq instr ~ctx:\"Where result\" ~expected:(Some dtype)\n                ~got:(get_dtype b)\n          | `Mulacc ->\n              check_dtype_match instr ~ctx:\"Mulacc a/b\" (get_dtype a)\n                (get_dtype b);\n              check_dtype_match instr ~ctx:\"Mulacc a/c\" (get_dtype a)\n                (get_dtype c);\n              check_dtype_eq instr ~ctx:\"Mulacc result\" ~expected:(Some dtype)\n                ~got:(get_dtype a))\n      | Vectorize { srcs; dtype } ->\n          if srcs = [] then\n            fail instr \"Vectorize must have at least one operand\";\n          (* For ptr-typed Vectorize, vcount is the ptr v field;\n             for value-typed, it's the dtype count. *)\n          let vcount = match dtype with\n            | Dtype.Ptr p -> Dtype.Ptr.v p\n            | Dtype.Val t -> Dtype.Val.count t\n          in\n          if vcount <> List.length srcs then\n            fail instr \"Vectorize dtype count must match operand count\";\n          List.iter\n            (fun src ->\n              let ok = match dtype, get_dtype src with\n                | Dtype.Val dt, Some sdt ->\n                    Dtype.Val.count sdt = 1 && Dtype.Val.scalar sdt = Dtype.Val.scalar dt\n                | Dtype.Ptr p, _ -> (\n                    match get_ptr_dtype src with\n                    | Some sp ->\n                        Dtype.Val.equal (Dtype.Ptr.base sp) (Dtype.Ptr.base p)\n                        && Dtype.Ptr.addrspace sp = Dtype.Ptr.addrspace p\n                    | None -> false)\n                | _, None -> false\n              in\n              if not ok then\n                fail instr \"Vectorize operands must be scalar and match\")\n            srcs\n      | Vcat { srcs; dtype } ->\n          if srcs = [] then fail instr \"Vcat must have at least one operand\";\n          let total = ref 0 in\n          List.iter\n            (fun src ->\n              match get_dtype src with\n            
  | Some dt ->\n                  if Dtype.Val.scalar dt <> Dtype.Val.scalar dtype then\n                    fail instr \"Vcat operand scalar mismatch\";\n                  total := !total + Dtype.Val.count dt\n              | None -> fail instr \"Vcat operand dtype not available\")\n            srcs;\n          if !total <> Dtype.Val.count dtype then fail instr \"Vcat count mismatch\"\n      | Gep { src; idxs; dtype } -> (\n          if idxs = [] then fail instr \"Gep must have at least one index\";\n          match get_dtype src with\n          | Some dt when Dtype.Val.count dt > 1 ->\n              List.iter (fun idx ->\n                if idx < 0 || idx >= Dtype.Val.count dt then\n                  fail instr \"Gep index out of bounds\") idxs;\n              let n = List.length idxs in\n              if n = 1 then begin\n                if Dtype.Val.count dtype <> 1 || Dtype.Val.scalar dtype <> Dtype.Val.scalar dt then\n                  fail instr \"Gep dtype must be scalar of source\"\n              end else begin\n                if Dtype.Val.count dtype <> n || Dtype.Val.scalar dtype <> Dtype.Val.scalar dt then\n                  fail instr \"Gep dtype must be vec(scalar, len(idxs))\"\n              end\n          | Some _ ->\n              (* Scalar source: GEP on a non-vector node is valid.\n                 This arises from do_contract on non-vector sources\n                 (e.g. WMMA with scalar result dtype). *)\n              ()\n          | None ->\n              (* Void/effect source: GEP produces void.\n                 Arises from do_contract on void sources (Store, End).\n                 Cleaned up by gep_pushing's gep_void rule. 
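\n                 Illustrative shape: a void-dtyped Gep whose src is a Store or\n                 End validates here unchanged and is rewritten away later. 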
*)\n              ())\n      | Reduce { src; ranges; dtype; _ } ->\n          check_horizontal_reduce_src instr ~src ~dtype;\n          List.iter (check_index_like instr ~ctx:\"Reduce range\") ranges\n      | Unroll { src; axes; dtype } ->\n          if Dtype.Val.scalar dtype <> Dtype.Void then begin\n            let expected = prod (List.map snd axes) * Dtype.Val.count dtype in\n            match get_dtype src with\n            | Some dt when Dtype.Val.count dt = expected -> ()\n            | Some _ -> fail instr \"Unroll source count mismatch\"\n            | None -> fail instr \"Unroll source dtype not available\"\n          end\n      | Contract { axes; dtype; _ } ->\n          if Dtype.Val.scalar dtype <> Dtype.Void then begin\n            let expected = prod (List.map snd axes) in\n            if Dtype.Val.count dtype <> expected then\n              fail instr \"Contract dtype count mismatch\"\n          end\n      | Wmma _ -> ()\n      | Cast _ | Bitcast _ | Custom _ | Custom_inline _ -> ())\n    nodes\n\n(* Rewriting *)\n\n(* Debug: columnar print_uops for kernel DAG inspection.\n   Defined before graph_rewrite so the debug hook can call it. 
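\n   Each row prints: instruction id, op name, live range axes, dtype, source\n   ids, and the op argument, mirroring the eprintf format below. A row might\n   read (column widths abridged):\n     3 Ops.ADD : 0 dtypes.int [1, 2] None 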
*)\n\nlet debug_level =\n  lazy (match Sys.getenv_opt \"DEBUG\" with\n    | Some s -> (try int_of_string s with _ -> 0) | None -> 0)\n\nlet view_op_name = function\n  | Sink _ -> \"Sink\"\n  | Group _ -> \"Group\"\n  | After _ -> \"After\"\n  | Param _ -> \"Param\"\n  | Param_image _ -> \"Param_image\"\n  | Define_local _ -> \"Define_local\"\n  | Define_reg _ -> \"Define_reg\"\n  | Define_var _ -> \"Define_var\"\n  | Bufferize _ -> \"Bufferize\"\n  | Const _ -> \"Const\"\n  | Vconst _ -> \"Vconst\"\n  | Invalid_index _ -> \"Invalid_index\"\n  | Index _ -> \"Index\"\n  | Ptrcat _ -> \"Ptrcat\"\n  | Load _ -> \"Load\"\n  | Store _ -> \"Store\"\n  | Unary { op; _ } -> Format.asprintf \"%a\" Op.pp_unary op\n  | Binary { op; _ } -> Format.asprintf \"%a\" Op.pp_binary op\n  | Ternary { op; _ } -> Format.asprintf \"%a\" Op.pp_ternary op\n  | Cast _ -> \"Cast\"\n  | Bitcast _ -> \"Bitcast\"\n  | Vectorize _ -> \"Vectorize\"\n  | Vcat _ -> \"Vcat\"\n  | Gep _ -> \"Gep\"\n  | Range _ -> \"Range\"\n  | End _ -> \"End\"\n  | Barrier -> \"Barrier\"\n  | Special _ -> \"Special\"\n  | Reduce _ -> \"Reduce\"\n  | Unroll _ -> \"Unroll\"\n  | Contract _ -> \"Contract\"\n  | Wmma _ -> \"Wmma\"\n  | Custom _ -> \"Custom\"\n  | Custom_inline _ -> \"Custom_inline\"\n\nlet scalar_name (s : Dtype.scalar) = match s with\n  | Float32 -> \"dtypes.float\" | Float16 -> \"dtypes.half\"\n  | Float64 -> \"dtypes.double\" | Int32 -> \"dtypes.int\"\n  | Int64 -> \"dtypes.long\" | Int16 -> \"dtypes.short\"\n  | Int8 -> \"dtypes.char\" | Uint8 -> \"dtypes.uchar\"\n  | Uint16 -> \"dtypes.ushort\" | Uint32 -> \"dtypes.uint\"\n  | Uint64 -> \"dtypes.ulong\" | Bool -> \"dtypes.bool\"\n  | Index -> \"dtypes.weakint\"\n  | _ -> Printf.sprintf \"dtypes.%s\" (Format.asprintf \"%a\" Dtype.pp_scalar s)\n\nlet debug_dtype_str dt =\n  let count = Dtype.Val.count dt in\n  let base = scalar_name (Dtype.Val.scalar dt) in\n  if count > 1 then Printf.sprintf \"%s.vec(%d)\" base count\n  else base\n\nlet debug_ptr_str 
(ptr : Dtype.Ptr.t) =\n  let base = debug_dtype_str (Dtype.Ptr.base ptr) in\n  let s = Printf.sprintf \"%s.ptr(%d)\" base (Dtype.Ptr.size ptr) in\n  if Dtype.Ptr.v ptr > 1 then Printf.sprintf \"%s.vec(%d)\" s (Dtype.Ptr.v ptr)\n  else s\n\nlet dtype_str_full node = match view node with\n  | Param { dtype; _ } | Param_image { dtype; _ } | Define_local { dtype; _ }\n  | Define_reg { dtype; _ } | Ptrcat { dtype; _ } | Bufferize { dtype; _ } ->\n      debug_ptr_str dtype\n  | Index { dtype = Dtype.Ptr p; _ } | Cast { dtype = Dtype.Ptr p; _ }\n  | Vectorize { dtype = Dtype.Ptr p; _ } ->\n      debug_ptr_str p\n  | Index { dtype = Dtype.Val t; _ } | Cast { dtype = Dtype.Val t; _ }\n  | Vectorize { dtype = Dtype.Val t; _ } ->\n      debug_dtype_str t\n  | Sink _ | Group _ | After _ | Store _ | End _ | Barrier | Custom _ ->\n      \"dtypes.void\"\n  | _ ->\n      match node_dtype node with Some dt -> debug_dtype_str dt | None -> \"dtypes.void\"\n\nlet axis_kind_str = function\n  | Axis_kind.Global -> \"AxisType.GLOBAL\"\n  | Axis_kind.Thread -> \"AxisType.THREAD\"\n  | Axis_kind.Local -> \"AxisType.LOCAL\"\n  | Axis_kind.Warp -> \"AxisType.WARP\"\n  | Axis_kind.Loop -> \"AxisType.LOOP\"\n  | Axis_kind.Upcast -> \"AxisType.UPCAST\"\n  | Axis_kind.Group_reduce -> \"AxisType.GROUP_REDUCE\"\n  | Axis_kind.Reduce -> \"AxisType.REDUCE\"\n  | Axis_kind.Unroll -> \"AxisType.UNROLL\"\n  | Axis_kind.Placeholder -> \"AxisType.PLACEHOLDER\"\n\n(* Python tuple repr: () for empty, (x,) for single, (x, y) for multi *)\nlet py_tuple = function\n  | [] -> \"()\"\n  | [x] -> Printf.sprintf \"(%s,)\" x\n  | items -> Printf.sprintf \"(%s)\" (String.concat \", \" items)\n\nlet const_value_str value =\n  match Const.view value with\n  | Bool v -> string_of_bool v\n  | Int v -> Int64.to_string v\n  | Float v -> Printf.sprintf \"%g\" v\n\nlet view_arg = function\n  | Const { value; _ } -> const_value_str value\n  | Vconst { values; _ } ->\n      Printf.sprintf \"(%s)\" (String.concat \", \" 
(List.map const_value_str values))\n  | Param { idx; _ } | Param_image { idx; _ } -> string_of_int idx\n  | Define_var { name; lo; hi; _ } ->\n      Printf.sprintf \"('%s', %d, %d)\" name lo hi\n  | Define_local { size; _ } | Define_reg { size; _ } ->\n      Printf.sprintf \"size=%d\" size\n  | Range { axis; kind; _ } ->\n      Printf.sprintf \"(%d, %s)\" axis (axis_kind_str kind)\n  | Special { dim; _ } -> Format.asprintf \"%a\" Special_dim.pp dim\n  | Reduce { op; _ } ->\n      \"Ops.\" ^ String.uppercase_ascii (Format.asprintf \"%a\" Op.pp_reduce op)\n  | Wmma { name; dims = n, m, k; _ } ->\n      Printf.sprintf \"%s %dx%dx%d\" name n m k\n  | Gep { idxs; _ } ->\n      Printf.sprintf \"(%s,)\" (String.concat \", \" (List.map string_of_int idxs))\n  | Custom { fmt; _ } | Custom_inline { fmt; _ } -> fmt\n  | Sink { kernel_info = Some ki; _ } ->\n      let axis_types = py_tuple (List.map axis_kind_str ki.axis_kinds) in\n      let opt_repr opt =\n        let op, axis, arg = match opt with\n          | Opt.Local { axis; amount } -> \"OptOps.LOCAL\", string_of_int axis, string_of_int amount\n          | Opt.Upcast { axis; amount } -> \"OptOps.UPCAST\", string_of_int axis, string_of_int amount\n          | Opt.Unroll { axis; amount } -> \"OptOps.UNROLL\", string_of_int axis, string_of_int amount\n          | Opt.Group { axis; amount } -> \"OptOps.GROUP\", string_of_int axis, string_of_int amount\n          | Opt.Grouptop { axis; amount } -> \"OptOps.GROUPTOP\", string_of_int axis, string_of_int amount\n          | Opt.Thread { axis; amount } -> \"OptOps.THREAD\", string_of_int axis, string_of_int amount\n          | Opt.Nolocals -> \"OptOps.NOLOCALS\", \"None\", \"None\"\n          | Opt.Tc { axis; tc_select; tc_opt; use_tc } ->\n              \"OptOps.TC\", string_of_int axis,\n              Printf.sprintf \"(%d, %d, %d)\" tc_select tc_opt use_tc\n          | Opt.Padto { axis; amount } -> \"OptOps.PADTO\", string_of_int axis, string_of_int amount\n          | Opt.Swap 
{ axis; with_axis } -> \"OptOps.SWAP\", string_of_int axis, string_of_int with_axis\n        in\n        Printf.sprintf \"Opt(op=%s, axis=%s, arg=%s)\" op axis arg\n      in\n      let applied_opts = py_tuple (List.map opt_repr ki.applied_opts) in\n      let opts_to_apply = match ki.opts_to_apply with\n        | None -> \"None\"\n        | Some opts -> py_tuple (List.map opt_repr opts)\n      in\n      let estimates = match ki.estimates with\n        | None -> \"None\"\n        | Some _ -> \"...\"\n      in\n      Printf.sprintf\n        \"KernelInfo(name='%s', axis_types=%s, dont_use_locals=%s, applied_opts=%s, opts_to_apply=%s, estimates=%s)\"\n        ki.name axis_types\n        (if ki.dont_use_locals then \"True\" else \"False\")\n        applied_opts opts_to_apply estimates\n  | _ -> \"None\"\n\nlet print_uops ?label root =\n  let nodes = toposort root in\n  let ids = Tbl.create (List.length nodes) in\n  List.iteri (fun i node -> Tbl.add ids node i) nodes;\n  (* Compute ranges per node (which RANGE nodes each value lives within).\n     For each node, its ranges are the union of its children's ranges\n     minus any ended ranges. 
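\n     A Range node also contributes itself, so a value computed inside Range\n     axes 0 and 2 shows "0,2" in the printed ranges column. 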
*)\n  let range_map : t list Ref_tbl.t = Ref_tbl.create (List.length nodes) in\n  List.iter (fun node ->\n    let child_ranges =\n      List.fold_left (fun acc c ->\n        let c_rngs = match Ref_tbl.find_opt range_map c with\n          | Some r -> r | None -> [] in\n        List.fold_left (fun a r ->\n          if List.exists (fun x -> x == r) a then a else r :: a) acc c_rngs)\n        [] (children node)\n    in\n    let ended = match view node with\n      | End { ranges; _ } -> ranges\n      | Reduce { ranges; _ } -> ranges\n      | Store { ranges; _ } -> ranges\n      | Bufferize { ranges; _ } -> ranges\n      | _ -> []\n    in\n    let rngs =\n      List.filter (fun r ->\n        not (List.exists (fun e -> e == r) ended)) child_ranges\n    in\n    let rngs = match view node with\n      | Range _ -> if List.exists (fun r -> r == node) rngs then rngs else node :: rngs\n      | _ -> rngs\n    in\n    Ref_tbl.replace range_map node rngs) nodes;\n  (match label with\n   | Some l -> Printf.eprintf \"=== %s ===\\n\" l\n   | None -> ());\n  List.iteri\n    (fun i node ->\n      let v = view node in\n      let src_strs =\n        List.map\n          (fun c -> match view c with\n            | Const { value; _ } -> Printf.sprintf \"'%s'\" (const_value_str value)\n            | _ ->\n                (match Tbl.find_opt ids c with\n                 | Some idx -> string_of_int idx | None -> \"--\"))\n          (children node)\n      in\n      let srcs = Printf.sprintf \"[%s]\" (String.concat \", \" src_strs) in\n      let ranges =\n        match Ref_tbl.find_opt range_map node with\n        | None | Some [] -> \"\"\n        | Some rngs ->\n            let get_axis r = match view r with Range { axis; _ } -> axis | _ -> max_int in\n            let sorted = List.sort (fun a b -> compare (get_axis a) (get_axis b)) rngs in\n            String.concat \",\" (List.map (fun r -> string_of_int (get_axis r)) sorted)\n      in\n      Printf.eprintf \"%4d %-20s: %-10s %-40s %-32s 
%s\\n\"\n        i (\"Ops.\" ^ view_op_name v) ranges (dtype_str_full node) srcs (view_arg v))\n    nodes;\n  Printf.eprintf \"%!\"\n\n(* Stack-based graph rewrite (unified_rewrite).\n   Stage 0: push children, advance to stage 1.\n   Stage 1: rebuild with rewritten children, apply rewrite.\n   Stage 2: link original node to the final result of the rewritten node.\n   Uses a waitlist for children not yet ready. *)\nlet graph_rewrite ?(name=\"\") rewrite root =\n  let replace : t Ref_tbl.t = Ref_tbl.create 256 in\n  let on_stack : unit Ref_tbl.t = Ref_tbl.create 256 in\n  let waitlist : (t * int * t) list Ref_tbl.t = Ref_tbl.create 16 in\n  let stack : (t * int * t) Stack.t = Stack.create () in\n  let lookup c =\n    match Ref_tbl.find_opt replace c with Some r -> r | None -> c\n  in\n  let set_replace n v =\n    Ref_tbl.replace replace n v;\n    match Ref_tbl.find_opt waitlist n with\n    | Some waiting ->\n        Ref_tbl.remove waitlist n;\n        List.iter (fun entry -> Stack.push entry stack) waiting\n    | None -> ()\n  in\n  Stack.push (root, 0, root) stack;\n  Ref_tbl.replace on_stack root ();\n  let counter = ref 0 in\n  while not (Stack.is_empty stack) do\n    let n, stage, new_n = Stack.pop stack in\n    if Ref_tbl.mem replace n then ()\n    else begin\n      incr counter;\n      if !counter > 250000 then\n        failwith (Printf.sprintf \"graph_rewrite(%s): %d nodes\" name !counter);\n      if !counter >= 249990 then\n        Printf.eprintf \"  [%s] %d: stage=%d %s tag=%d in_replace=%b\\n%!\" name !counter stage\n          (\"Ops.\" ^ view_op_name new_n.Hashcons.node.view) new_n.Hashcons.tag (Ref_tbl.mem replace new_n);\n      if stage = 0 then begin\n        (* Stage 0: push self at stage 1, then push children *)\n        Stack.push (n, 1, new_n) stack;\n        List.iter (fun x ->\n          if not (Ref_tbl.mem on_stack x) then begin\n            Stack.push (x, 0, x) stack;\n            Ref_tbl.replace on_stack x ()\n          end) (List.rev 
(children new_n))\n      end\n      else if stage = 1 then begin\n        (* Stage 1: check all children are ready *)\n        let all_ready = ref true in\n        let new_src =\n          List.map (fun x ->\n            match Ref_tbl.find_opt replace x with\n            | Some r -> r\n            | None -> all_ready := false; x)\n            (children new_n)\n        in\n        if not !all_ready then begin\n          (* Some child not ready — register in waitlist *)\n          let missing = List.find (fun x -> not (Ref_tbl.mem replace x)) (children new_n) in\n          let prev = match Ref_tbl.find_opt waitlist missing with Some l -> l | None -> [] in\n          Ref_tbl.replace waitlist missing ((n, 1, new_n) :: prev)\n        end\n        else begin\n          let old_src = children new_n in\n          let changed = not (List.for_all2 (fun a b -> a == b) old_src new_src) in\n          if not changed then begin\n            (* Children unchanged. Try rewrite. *)\n            match rewrite new_n with\n            | None ->\n                set_replace n new_n\n            | Some rewritten when rewritten == new_n ->\n                (* Identity rewrite — treat as no match. *)\n                set_replace n new_n\n            | Some rewritten ->\n                Stack.push (n, 2, rewritten) stack;\n                Stack.push (rewritten, 0, rewritten) stack\n          end\n          else begin\n            (* Children changed. Rebuild and push for full processing. 
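\n               Pushing the rebuilt node at stage 0 lets the rewrite rules see\n               it with its updated children; the stage-2 entry then links n to\n               whatever the rebuilt node ultimately resolves to. 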
*)\n            let rebuilt = mk ?tag:(tag new_n) (map_children lookup (view new_n)) in\n            Stack.push (n, 2, rebuilt) stack;\n            Stack.push (rebuilt, 0, rebuilt) stack\n          end\n        end\n      end\n      else begin\n        (* Stage 2: link n → result of new_n *)\n        match Ref_tbl.find_opt replace new_n with\n        | Some result ->\n            set_replace n result\n        | None ->\n            (* new_n not ready — register in waitlist *)\n            let prev = match Ref_tbl.find_opt waitlist new_n with Some l -> l | None -> [] in\n            Ref_tbl.replace waitlist new_n ((n, 2, new_n) :: prev)\n      end\n    end\n  done;\n  let result = lookup root in\n  if Lazy.force debug_level >= 6 && name <> \"\" then\n    print_uops ~label:name result;\n  result\n\nlet propagate_tag tags old_node new_node =\n  Option.iter\n    (fun t -> Option.iter (Ref_tbl.replace t new_node) (Ref_tbl.find_opt t old_node))\n    tags\n\nlet substitute ?tags mappings root =\n  let tbl = Ref_tbl.create (List.length mappings) in\n  List.iter\n    (fun (old_node, new_node) ->\n      Ref_tbl.replace tbl old_node new_node;\n      propagate_tag tags old_node new_node)\n    mappings;\n  let nodes = toposort root in\n  let rebuilt = Ref_tbl.create (List.length nodes) in\n  let lookup node = match Ref_tbl.find_opt rebuilt node with Some n -> n | None -> node in\n  List.iter\n    (fun node ->\n      match Ref_tbl.find_opt tbl node with\n      | Some replacement -> Ref_tbl.replace rebuilt node replacement\n      | None ->\n          if List.exists (fun c -> lookup c != c) (children node) then begin\n            let new_node = mk ?tag:(tag node) (map_children lookup (view node)) in\n            Ref_tbl.replace rebuilt node new_node;\n            propagate_tag tags node new_node\n          end)\n    nodes;\n  lookup root\n\n(* Analysis *)\n\nlet backward_slice root = toposort root\n\nlet in_backward_slice needle haystack =\n  let visited = Ref_tbl.create 64 in\n  
let rec search node =\n    if node == needle then true\n    else if Ref_tbl.mem visited node then false\n    else begin\n      Ref_tbl.add visited node ();\n      List.exists search (children node)\n    end\n  in\n  search haystack\n\nlet find_nodes pred root =\n  List.filter pred (toposort root)\n\n(* Symbolic divisibility *)\n\nlet rec divides node v =\n  if v = 1 then Some node\n  else match node.Hashcons.node.view with\n  | Const { value; _ } -> (\n      match Const.view value with\n      | Int c when Int64.rem c (Int64.of_int v) = 0L ->\n          Some (const_int (Int64.to_int (Int64.div c (Int64.of_int v))))\n      | _ -> None)\n  | Binary { op = `Add; lhs; rhs; _ } -> (\n      match divides lhs v, divides rhs v with\n      | Some d0, Some d1 -> Some (binary ~op:`Add ~lhs:d0 ~rhs:d1)\n      | _ -> None)\n  | Binary { op = `Mul; lhs; rhs; _ } -> (\n      match divides lhs v with\n      | Some d0 -> Some (binary ~op:`Mul ~lhs:d0 ~rhs:rhs)\n      | None ->\n          match divides rhs v with\n          | Some d1 -> Some (binary ~op:`Mul ~lhs ~rhs:d1)\n          | None -> None)\n  | _ -> None\n\n(* Evaluate a node tree to a concrete integer given variable bindings. 
*)\nlet rec sym_infer node var_vals =\n  match node.Hashcons.node.view with\n  | Const { value; _ } -> (\n      match Const.view value with\n      | Int n -> Int64.to_int n\n      | _ -> failwith \"sym_infer: non-integer constant\")\n  | Define_var { name; _ } -> (\n      match List.assoc_opt name var_vals with\n      | Some v -> v\n      | None -> failwith (Printf.sprintf \"sym_infer: unbound variable %S\" name))\n  | Binary { op; lhs; rhs; _ } ->\n      let a = sym_infer lhs var_vals and b = sym_infer rhs var_vals in\n      (match op with\n       | `Add -> a + b | `Sub -> a - b | `Mul -> a * b\n       | `Idiv -> a / b | `Mod -> a mod b\n       | `Max -> max a b\n       | `Cmplt -> if a < b then 1 else 0\n       | `Cmpeq -> if a = b then 1 else 0\n       | `Cmpne -> if a <> b then 1 else 0\n       | `And -> a land b | `Or -> a lor b | `Xor -> a lxor b\n       | `Shl -> a lsl b | `Shr -> a asr b\n       | _ -> failwith (Printf.sprintf \"sym_infer: unsupported binary op\"))\n  | Unary { op = `Neg; src; _ } -> - (sym_infer src var_vals)\n  | Ternary { op = `Where; a = cond; b = t; c = f; _ } ->\n      if sym_infer cond var_vals <> 0 then sym_infer t var_vals\n      else sym_infer f var_vals\n  | _ -> failwith (Printf.sprintf \"sym_infer: cannot evaluate %s\"\n           (view_op_name node.Hashcons.node.view))\n\n(* Value bounds *)\n\nlet bound_to_int : Dtype.bound -> int = function\n  | `Bool b -> Bool.to_int b\n  | `SInt n | `UInt n ->\n      if n < Int64.of_int Int.min_int then Int.min_int\n      else if n > Int64.of_int Int.max_int then Int.max_int\n      else Int64.to_int n\n  | `Float f ->\n      if Float.is_nan f then 0\n      else if f <= Float.of_int Int.min_int then Int.min_int\n      else if f >= Float.of_int Int.max_int then Int.max_int\n      else Float.to_int f\n\nlet dtype_bounds node =\n  match node_dtype node with\n  | Some dt -> bound_to_int (Dtype.min (Dtype.Val dt)), bound_to_int (Dtype.max (Dtype.Val dt))\n  | None -> 0, Int.max_int\n\nlet rec 
vmin_vmax node =\n  match view node with\n  | Const { value; _ } -> (match Const.view value with\n    | Int v -> let v = Int64.to_int v in (v, v)\n    | Bool b -> let v = Bool.to_int b in (v, v)\n    | Float _ -> dtype_bounds node)\n  | Range { size; _ } | Special { size; _ } -> (0, snd (vmin_vmax size) - 1)\n  | Define_var { lo; hi; _ } -> (lo, hi)\n  | Unary { op = `Neg; src; _ } ->\n      let lo, hi = vmin_vmax src in (-hi, -lo)\n  | Ternary { op = `Where; b; c; dtype } when Dtype.Val.is_int dtype ->\n      let b_lo, b_hi = vmin_vmax b in\n      let c_lo, c_hi = vmin_vmax c in\n      (min b_lo c_lo, max b_hi c_hi)\n  | Gep { src; _ } | Unroll { src; _ } -> vmin_vmax src\n  | Vectorize { srcs; _ } ->\n      List.fold_left (fun (lo, hi) s ->\n        let s_lo, s_hi = vmin_vmax s in (min lo s_lo, max hi s_hi))\n        (Int.max_int, Int.min_int) srcs\n  | Cast { src; dtype }\n    when Dtype.val_of dtype |> Dtype.Val.is_int ->\n      let dt = Dtype.val_of dtype in\n      let s_lo, s_hi = vmin_vmax src in\n      (max (bound_to_int (Dtype.min (Dtype.Val dt))) s_lo,\n       min s_hi (bound_to_int (Dtype.max (Dtype.Val dt))))\n  | Cast { src; _ } | Bitcast { src; _ } -> vmin_vmax src\n  | Binary { op; lhs; rhs; dtype } when not (Dtype.Val.is_float dtype) ->\n      let s0_lo, s0_hi = vmin_vmax lhs in\n      let s1_lo, s1_hi = vmin_vmax rhs in\n      (match op with\n      | `Add -> (s0_lo + s1_lo, s0_hi + s1_hi)\n      | `Sub -> (s0_lo - s1_hi, s0_hi - s1_lo)\n      | `Mul ->\n          let a = s0_lo * s1_lo and b = s0_lo * s1_hi in\n          let c = s0_hi * s1_lo and d = s0_hi * s1_hi in\n          (min (min a b) (min c d), max (max a b) (max c d))\n      | `Max -> (max s0_lo s1_lo, max s0_hi s1_hi)\n      | `Mod ->\n          if s1_lo = s1_hi && s1_lo > 0 then\n            ((if s0_lo >= 0 then 0\n              else if s0_lo > -s1_lo then s0_lo\n              else -(s1_hi - 1)),\n             (if s0_hi < 0 then 0\n              else if s0_hi < s1_lo then s0_hi\n         
     else s1_lo - 1))\n          else if s1_lo > 0 then\n            ((if s0_lo >= 0 then 0 else -(s1_hi - 1)),\n             (if s0_hi <= 0 then 0 else s1_hi - 1))\n          else dtype_bounds node\n      | `Idiv ->\n          if s1_lo * s1_hi > 0 then\n            let a = s0_lo / s1_lo and b = s0_lo / s1_hi in\n            let c = s0_hi / s1_lo and d = s0_hi / s1_hi in\n            (min (min a b) (min c d), max (max a b) (max c d))\n          else dtype_bounds node\n      | `Cmplt -> (Bool.to_int (s0_hi < s1_lo), Bool.to_int (s0_lo < s1_hi))\n      | `Cmpne ->\n          (Bool.to_int (s0_hi < s1_lo || s1_hi < s0_lo),\n           Bool.to_int (not (s0_lo = s0_hi && s0_lo = s1_lo && s1_lo = s1_hi)))\n      | `Cmpeq ->\n          (Bool.to_int (s0_lo = s0_hi && s0_lo = s1_lo && s1_lo = s1_hi),\n           Bool.to_int (s0_lo <= s1_hi && s1_lo <= s0_hi))\n      | `And when Dtype.Val.is_int dtype && s1_lo = s1_hi && s1_lo >= 0 ->\n          (0, if s0_lo < 0 then s1_hi else min s0_hi s1_hi)\n      | `And when Dtype.Val.is_bool dtype ->\n          (Bool.to_int (s0_lo > 0 && s1_lo > 0),\n           Bool.to_int (s0_hi > 0 && s1_hi > 0))\n      | `Or when Dtype.Val.is_bool dtype ->\n          (Bool.to_int (s0_lo > 0 || s1_lo > 0),\n           Bool.to_int (s0_hi > 0 || s1_hi > 0))\n      | `Shl when s1_lo = s1_hi && s1_lo >= 0 && s1_lo < Sys.int_size - 1 ->\n          (s0_lo lsl s1_lo, s0_hi lsl s1_lo)\n      | `Shr when s1_lo = s1_hi && s1_lo >= 0 && s1_lo < Sys.int_size - 1 ->\n          (s0_lo asr s1_lo, s0_hi asr s1_lo)\n      | _ -> dtype_bounds node)\n  | _ -> dtype_bounds node\n\nlet vmin node = fst (vmin_vmax node)\nlet vmax node = snd (vmin_vmax node)\n\n(* Node predicates *)\n\nlet is_range node = match node.Hashcons.node.view with Range _ -> true | _ -> false\nlet is_const node = match node.Hashcons.node.view with Const _ -> true | _ -> false\n\n(* Range analysis *)\n\n(* range_start: index at which range args begin for ops that carry ranges.\n   Bufferize: 1, 
Reduce: 1, Store: 2, Wmma: 3, End: 1. *)\nlet range_start node = match node.Hashcons.node.view with\n  | Bufferize _ -> Some 1\n  | Reduce _ -> Some 1\n  | Store _ -> Some 2\n  | Wmma _ -> Some 3\n  | End _ -> Some 1\n  | _ -> None\n\nlet rec ended_ranges ?(live = fun _ -> []) (node : t) : t list =\n  match range_start node with\n  | Some off -> List.filteri (fun i _ -> i >= off) (children node)\n  | None -> match node.Hashcons.node.view with\n    | After { deps; _ } -> List.concat_map (ended_ranges ~live) deps\n    | Contract { axes; _ } ->\n        let axis_ids = List.map fst axes in\n        let src = List.hd (children node) in\n        List.filter\n          (fun r -> match r.Hashcons.node.view with\n            | Range { axis; _ } -> List.mem axis axis_ids\n            | _ -> false)\n          (live src)\n    | _ -> []\n\nlet live_ranges_tbl root =\n  let nodes = toposort root in\n  let tbl = Ref_tbl.create (List.length nodes) in\n  let get node = match Ref_tbl.find_opt tbl node with Some r -> r | None -> [] in\n  List.iter\n    (fun node ->\n      let live = Ref_tbl.create 16 in\n      List.iter (fun c -> List.iter (fun r -> Ref_tbl.replace live r ()) (get c)) (children node);\n      List.iter\n        (fun er ->\n          if is_range er then Ref_tbl.remove live er\n          else List.iter (fun r -> Ref_tbl.remove live r) (get er))\n        (ended_ranges ~live:get node);\n      if is_range node then Ref_tbl.replace live node ();\n      Ref_tbl.replace tbl node (Ref_tbl.fold (fun k () acc -> k :: acc) live []))\n    nodes;\n  tbl\n\nlet live_ranges node =\n  let tbl = live_ranges_tbl node in\n  match Ref_tbl.find_opt tbl node with Some r -> r | None -> []\n\n(* Accessors *)\n\nlet range_size node = match node.Hashcons.node.view with\n  | Range { size; _ } -> size\n  | _ -> invalid_arg \"Kernel.range_size: not a Range node\"\n\nlet range_axis node = match node.Hashcons.node.view with\n  | Range { axis; _ } -> axis\n  | _ -> invalid_arg \"Kernel.range_axis: 
not a Range node\"\n\nlet range_kind node = match node.Hashcons.node.view with\n  | Range { kind; _ } -> kind\n  | _ -> invalid_arg \"Kernel.range_kind: not a Range node\"\n\nlet range_sub node = match node.Hashcons.node.view with\n  | Range { sub; _ } -> sub\n  | _ -> invalid_arg \"Kernel.range_sub: not a Range node\"\n\nlet const_to_int node = match node.Hashcons.node.view with\n  | Const { value; _ } -> (\n      match Const.view value with\n      | Int n -> Int64.to_int n\n      | Bool b -> if b then 1 else 0\n      | Float _ -> invalid_arg \"Kernel.const_to_int: float constant\")\n  | _ -> invalid_arg \"Kernel.const_to_int: not a Const node\"\n\n(* Operators *)\n\nmodule O = struct\n  let ( + ) a b = binary ~op:`Add ~lhs:a ~rhs:b\n  let ( * ) a b = binary ~op:`Mul ~lhs:a ~rhs:b\n  let ( / ) a b = binary ~op:`Idiv ~lhs:a ~rhs:b\n  let ( mod ) a b = binary ~op:`Mod ~lhs:a ~rhs:b\n  let ( < ) a b = binary ~op:`Cmplt ~lhs:a ~rhs:b\n  let eq a b = binary ~op:`Cmpeq ~lhs:a ~rhs:b\n  let ne a b = binary ~op:`Cmpne ~lhs:a ~rhs:b\n  let where cond then_ else_ = ternary ~op:`Where ~a:cond ~b:then_ ~c:else_\n  let neg x = unary ~op:`Neg ~src:x\n  let not_ x = binary ~op:`Cmpeq ~lhs:x ~rhs:(zero_like x)\n  let cast dtype node = mk (Cast { src = node; dtype })\n  let int_ = const_int\n  let float_ = const_float\n  let bool_ = const_bool\nend\n\n(* Structural comparison — canonical ordering for commutative operands. 
*)\n\nlet view_ordinal = function\n  | Define_var _ -> 1 | Special _ -> 3 | Define_local _ -> 4\n  | Define_reg _ -> 5 | Param _ -> 8 | Param_image _ -> 9\n  | Sink _ -> 15 | After _ -> 16 | Group _ -> 17 | Gep _ -> 18\n  | Vectorize _ -> 19 | Index _ -> 20 | Load _ -> 21 | Store _ -> 22\n  | Wmma _ -> 23 | Cast _ -> 24 | Bitcast _ -> 25 | Unary _ -> 26\n  | Binary _ -> 27 | Ternary _ -> 28 | Barrier -> 29 | Range _ -> 30\n  | End _ -> 31 | Const _ -> 32 | Vconst _ -> 33 | Custom _ -> 34\n  | Custom_inline _ -> 35\n  | Reduce _ -> 100 | Vcat _ | Ptrcat _ | Unroll _ | Contract _\n  | Bufferize _ | Invalid_index _ -> 100\n\n(* Compare the non-child, non-dtype payload fields of two views of the same\n   variant.  Returns 0 when both views are the same constructor with identical\n   payload, a negative or positive integer otherwise. Assumes [view_ordinal a =\n   view_ordinal b]; cross-constructor comparison is handled by the caller. *)\nlet compare_view_args a b =\n  match a, b with\n  | Sink _, Sink _ | Group _, Group _ | After _, After _\n  | Barrier, Barrier | End _, End _ ->\n      0\n  | Param { idx = i1; _ }, Param { idx = i2; _ } -> Int.compare i1 i2\n  | Param_image { idx = i1; width = w1; height = h1; _ },\n    Param_image { idx = i2; width = w2; height = h2; _ } ->\n      let c = Int.compare i1 i2 in\n      if c <> 0 then c\n      else let c = Int.compare w1 w2 in\n      if c <> 0 then c else Int.compare h1 h2\n  | Define_local { size = s1; _ }, Define_local { size = s2; _ } ->\n      Int.compare s1 s2\n  | Define_reg { size = s1; slot = sl1; _ }, Define_reg { size = s2; slot = sl2; _ } ->\n      let c = Int.compare s1 s2 in\n      if c <> 0 then c else Int.compare sl1 sl2\n  | Define_var { name = n1; lo = l1; hi = h1; _ },\n    Define_var { name = n2; lo = l2; hi = h2; _ } ->\n      let c = String.compare n1 n2 in\n      if c <> 0 then c\n      else let c = Int.compare l1 l2 in\n      if c <> 0 then c else Int.compare h1 h2\n  | Bufferize { opts = o1; _ }, 
Bufferize { opts = o2; _ } ->\n      Stdlib.compare o1 o2\n  | Const { value = v1; _ }, Const { value = v2; _ } -> Const.compare v1 v2\n  | Vconst { values = v1; _ }, Vconst { values = v2; _ } ->\n      let rec cmp a b = match a, b with\n        | [], [] -> 0 | [], _ -> -1 | _, [] -> 1\n        | x :: xs, y :: ys ->\n            let c = Const.compare x y in\n            if c <> 0 then c else cmp xs ys\n      in\n      cmp v1 v2\n  | Invalid_index _, Invalid_index _ -> 0\n  | Index _, Index _ -> 0\n  | Ptrcat _, Ptrcat _ -> 0\n  | Load _, Load _ -> 0\n  | Store _, Store _ -> 0\n  | Unary { op = o1; _ }, Unary { op = o2; _ } -> Op.compare_unary o1 o2\n  | Binary { op = o1; _ }, Binary { op = o2; _ } -> Op.compare_binary o1 o2\n  | Ternary { op = o1; _ }, Ternary { op = o2; _ } -> Op.compare_ternary o1 o2\n  | Cast _, Cast _ -> 0\n  | Bitcast _, Bitcast _ -> 0\n  | Vectorize _, Vectorize _ -> 0\n  | Vcat _, Vcat _ -> 0\n  | Gep { idxs = i1; _ }, Gep { idxs = i2; _ } -> Stdlib.compare i1 i2\n  | Range { axis = a1; sub = s1; kind = k1; _ },\n    Range { axis = a2; sub = s2; kind = k2; _ } ->\n      let c = Int.compare a1 a2 in\n      if c <> 0 then c\n      else let c = Stdlib.compare s1 s2 in\n      if c <> 0 then c else Axis_kind.compare k1 k2\n  | Special { dim = d1; _ }, Special { dim = d2; _ } -> Special_dim.compare d1 d2\n  | Reduce { op = o1; _ }, Reduce { op = o2; _ } -> Op.compare_reduce o1 o2\n  | Unroll { axes = a1; _ }, Unroll { axes = a2; _ } -> Stdlib.compare a1 a2\n  | Contract { axes = a1; _ }, Contract { axes = a2; _ } -> Stdlib.compare a1 a2\n  | Wmma { name = n1; dims = d1; dtype_in = di1; dtype_out = do1;\n           device = dv1; threads = t1; upcast_axes = u1; reduce_axes = r1; _ },\n    Wmma { name = n2; dims = d2; dtype_in = di2; dtype_out = do2;\n           device = dv2; threads = t2; upcast_axes = u2; reduce_axes = r2; _ } ->\n      let c = String.compare n1 n2 in\n      if c <> 0 then c\n      else let c = Stdlib.compare d1 d2 in\n      if c 
<> 0 then c\n      else let c = Stdlib.compare di1 di2 in\n      if c <> 0 then c\n      else let c = Stdlib.compare do1 do2 in\n      if c <> 0 then c\n      else let c = String.compare dv1 dv2 in\n      if c <> 0 then c\n      else let c = Int.compare t1 t2 in\n      if c <> 0 then c\n      else let c = Stdlib.compare u1 u2 in\n      if c <> 0 then c else Stdlib.compare r1 r2\n  | Custom { fmt = f1; _ }, Custom { fmt = f2; _ } -> String.compare f1 f2\n  | Custom_inline { fmt = f1; _ }, Custom_inline { fmt = f2; _ } ->\n      String.compare f1 f2\n  | _ ->\n      (* Different constructors — fall through to ordinal comparison. *)\n      0\n\n(* Full dtype of a node for structural comparison, using the precise unified type. *)\nlet node_any_dtype node = match node.Hashcons.node.view with\n  | Param { dtype; _ } | Param_image { dtype; _ } | Define_local { dtype; _ }\n  | Define_reg { dtype; _ } | Bufferize { dtype; _ } | Ptrcat { dtype; _ } ->\n      Some (Dtype.Ptr dtype)\n  | Index { dtype; _ } | Cast { dtype; _ } | Vectorize { dtype; _ } ->\n      Some dtype\n  | Define_var { dtype; _ } | Const { dtype; _ } | Vconst { dtype; _ }\n  | Invalid_index { dtype; _ }\n  | Load { dtype; _ } | Unary { dtype; _ } | Binary { dtype; _ }\n  | Ternary { dtype; _ } | Bitcast { dtype; _ }\n  | Vcat { dtype; _ } | Gep { dtype; _ }\n  | Range { dtype; _ } | Special { dtype; _ } | Reduce { dtype; _ }\n  | Unroll { dtype; _ } | Contract { dtype; _ } | Wmma { dtype; _ }\n  | Custom_inline { dtype; _ } ->\n      Some (Dtype.Val dtype)\n  | Sink _ | Group _ | After _ | Store _ | End _ | Barrier | Custom _ ->\n      None\n\nlet compare_opt_any_dtype a b =\n  match a, b with\n  | None, None -> 0\n  | None, Some _ -> -1\n  | Some _, None -> 1\n  | Some a, Some b -> Dtype.compare a b\n\nlet rec compare_structure a b =\n  if a == b then 0\n  else\n    let va = view a and vb = view b in\n    let c = Int.compare (view_ordinal va) (view_ordinal vb) in\n    if c <> 0 then c\n    else let c = 
compare_opt_any_dtype (node_any_dtype a) (node_any_dtype b) in\n    if c <> 0 then c\n    else let c = compare_view_args va vb in\n    if c <> 0 then c\n    else compare_children_struct (children a) (children b)\nand compare_children_struct xs ys =\n  match xs, ys with\n  | [], [] -> 0 | [], _ -> -1 | _, [] -> 1\n  | x :: xs', y :: ys' ->\n      let c = compare_structure x y in\n      if c <> 0 then c else compare_children_struct xs' ys'\n\n(* Formatting *)\n\nlet pp_comma fmt () = Format.fprintf fmt \", \"\n\nlet pp_ptr fmt (dtype : Dtype.Ptr.t) =\n  Format.fprintf fmt \"%s\" (Dtype.Ptr.to_string dtype)\n\nlet pp_axes fmt axes =\n  Format.pp_print_list ~pp_sep:pp_comma\n    (fun fmt (a, s) -> Format.fprintf fmt \"(%d, %d)\" a s)\n    fmt axes\n\nlet pp_view_with ids fmt instr =\n  let pp_ref fmt node = Format.fprintf fmt \"%%%d\" (Tbl.find ids node) in\n  let pp_refs fmt refs = Format.pp_print_list ~pp_sep:pp_comma pp_ref fmt refs in\n  let pp_opt_ref label fmt = function\n    | None -> () | Some n -> Format.fprintf fmt \" %s=%a\" label pp_ref n\n  in\n  (match tag instr with Some t -> Format.fprintf fmt \"[%s] \" t | None -> ());\n  match instr.Hashcons.node.view with\n  | Sink { srcs; kernel_info = _ } -> Format.fprintf fmt \"sink %a\" pp_refs srcs\n  | Group { srcs } -> Format.fprintf fmt \"group %a\" pp_refs srcs\n  | After { src; deps } ->\n      Format.fprintf fmt \"after %a, deps=[%a]\" pp_ref src pp_refs deps\n  | Param { idx; dtype } -> Format.fprintf fmt \"param %d : %a\" idx pp_ptr dtype\n  | Param_image { idx; dtype; width; height } ->\n      Format.fprintf fmt \"param_image %d : %a [%dx%d]\" idx pp_ptr dtype width\n        height\n  | Define_local { size; dtype } ->\n      Format.fprintf fmt \"define_local %a, size=%d\" pp_ptr dtype size\n  | Define_reg { size; dtype; slot } ->\n      Format.fprintf fmt \"define_reg %a, size=%d, slot=%d\" pp_ptr dtype size slot\n  | Define_var { name; lo; hi; dtype } ->\n      Format.fprintf fmt \"define_var %s : %a 
[%d..%d]\" name Dtype.Val.pp dtype lo hi\n  | Bufferize { src; ranges; dtype; _ } ->\n      Format.fprintf fmt \"bufferize %a, ranges=[%a] : %a\" pp_ref src pp_refs\n        ranges pp_ptr dtype\n  | Const { value; dtype } ->\n      Format.fprintf fmt \"const %a : %a\" Const.pp value Dtype.Val.pp dtype\n  | Vconst { values; dtype } ->\n      Format.fprintf fmt \"vconst (%a) : %a\"\n        (Format.pp_print_list ~pp_sep:pp_comma Const.pp) values Dtype.Val.pp dtype\n  | Invalid_index { dtype } ->\n      Format.fprintf fmt \"invalid_index : %a\" Dtype.Val.pp dtype\n  | Index { ptr; idxs; gate; dtype } ->\n      Format.fprintf fmt \"index %a, %a%a : %a\" pp_ref ptr pp_refs idxs\n        (pp_opt_ref \"gate\") gate Dtype.pp dtype\n  | Ptrcat { srcs; dtype } ->\n      Format.fprintf fmt \"ptrcat %a : %a\" pp_refs srcs pp_ptr dtype\n  | Load { src; alt; dtype } ->\n      Format.fprintf fmt \"load %a%a : %a\" pp_ref src\n        (pp_opt_ref \"alt\") alt Dtype.Val.pp dtype\n  | Store { dst; value; ranges } ->\n      Format.fprintf fmt \"store %a, %a, ranges=[%a]\" pp_ref dst pp_ref value\n        pp_refs ranges\n  | Unary { op; src; dtype } ->\n      Format.fprintf fmt \"%a %a : %a\" Op.pp_unary op pp_ref src Dtype.Val.pp dtype\n  | Cast { src; dtype } ->\n      Format.fprintf fmt \"cast %a : %a\" pp_ref src Dtype.pp dtype\n  | Bitcast { src; dtype } ->\n      Format.fprintf fmt \"bitcast %a : %a\" pp_ref src Dtype.Val.pp dtype\n  | Binary { op; lhs; rhs; dtype } ->\n      Format.fprintf fmt \"%a %a, %a : %a\" Op.pp_binary op pp_ref lhs pp_ref rhs\n        Dtype.Val.pp dtype\n  | Ternary { op; a; b; c; dtype } ->\n      Format.fprintf fmt \"%a %a, %a, %a : %a\" Op.pp_ternary op pp_ref a pp_ref b\n        pp_ref c Dtype.Val.pp dtype\n  | Vectorize { srcs; dtype } ->\n      Format.fprintf fmt \"vec %a : %a\" pp_refs srcs Dtype.pp dtype\n  | Vcat { srcs; dtype } ->\n      Format.fprintf fmt \"cat %a : %a\" pp_refs srcs Dtype.Val.pp dtype\n  | Gep { src; idxs; dtype } ->\n      
Format.fprintf fmt \"gep %a, [%a] : %a\" pp_ref src\n        (Format.pp_print_list ~pp_sep:(fun fmt () -> Format.fprintf fmt \";\")\n           Format.pp_print_int) idxs\n        Dtype.Val.pp dtype\n  | Range { size; dtype; axis; sub; kind } ->\n      Format.fprintf fmt \"range %a : %a [axis=%d, %a%a]\" pp_ref size Dtype.Val.pp\n        dtype axis Axis_kind.pp kind\n        (fun fmt -> function\n          | [] -> ()\n          | sub ->\n              let pp_semi fmt () = Format.fprintf fmt \";\" in\n              Format.fprintf fmt \", sub=[%a]\"\n                (Format.pp_print_list ~pp_sep:pp_semi Format.pp_print_int) sub)\n        sub\n  | End { value; ranges } ->\n      Format.fprintf fmt \"end %a, ranges=[%a]\" pp_ref value pp_refs ranges\n  | Barrier -> Format.fprintf fmt \"barrier\"\n  | Special { dim; size; dtype } ->\n      Format.fprintf fmt \"special %a, %a : %a\" Special_dim.pp dim pp_ref size\n        Dtype.Val.pp dtype\n  | Reduce { op; src; ranges; dtype } ->\n      Format.fprintf fmt \"reduce.%a %a, ranges=[%a] : %a\" Op.pp_reduce op pp_ref\n        src pp_refs ranges Dtype.Val.pp dtype\n  | Unroll { src; axes; dtype } ->\n      Format.fprintf fmt \"unroll %a, axes=[%a] : %a\" pp_ref src pp_axes axes\n        Dtype.Val.pp dtype\n  | Contract { src; axes; dtype } ->\n      Format.fprintf fmt \"contract %a, axes=[%a] : %a\" pp_ref src pp_axes axes\n        Dtype.Val.pp dtype\n  | Wmma { name; a; b; c; dtype; dims = n, m, k; dtype_in; dtype_out;\n           device; threads; _ } ->\n      Format.fprintf fmt\n        \"wmma.%s %a, %a, %a : %a [%dx%dx%d, %a -> %a, %s, threads=%d]\"\n        name pp_ref a pp_ref b pp_ref c Dtype.Val.pp dtype n m k\n        Dtype.pp_scalar dtype_in Dtype.pp_scalar dtype_out device threads\n  | Custom { fmt = f; args } ->\n      Format.fprintf fmt \"custom \\\"%s\\\" %a\" f pp_refs args\n  | Custom_inline { fmt = f; args; dtype } ->\n      Format.fprintf fmt \"custom_inline \\\"%s\\\" %a : %a\" f pp_refs args Dtype.Val.pp\n 
       dtype\n\nlet assign_ids root =\n  let nodes = toposort root in\n  let ids = Tbl.create (List.length nodes) in\n  List.iteri (fun i node -> Tbl.add ids node i) nodes;\n  (ids, nodes)\n\nlet pp_view fmt instr =\n  let ids, _ = assign_ids instr in\n  pp_view_with ids fmt instr\n\nlet pp fmt root =\n  let ids, nodes = assign_ids root in\n  List.iteri\n    (fun i node -> Format.fprintf fmt \"%3d: %a@\\n\" i (pp_view_with ids) node)\n    nodes\n\n"
  },
  {
    "path": "packages/tolk/lib/ir/kernel.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\n(** Codegen-oriented DAG IR.\n\n    [Kernel] is the memory-level graph stage of [ir_next]. Nodes describe\n    indexed buffer accesses, loop structure, and late kernel operations that\n    still precede linear backend emission.\n\n    The public surface is intentionally narrow:\n    - build nodes with the top-level constructors;\n    - inspect nodes with {!view};\n    - analyze them with {!dtype}, {!sort}, {!children}, and {!toposort};\n    - validate with {!validate};\n    - rewrite DAGs with {!map_children} and {!graph_rewrite}.\n\n    Validation is intentionally relaxed for late-kernel IR: transient\n    vectorized index-like values are allowed before devectorization, and\n    {!Reduce} sources may be wider vectors of the same scalar type for later\n    horizontal reduction lowering. *)\n\n(** {1:types Types} *)\n\ntype t\n(** A kernel DAG node. Values are hash-consed: structurally identical nodes\n    are physically identical, enabling correct deduplication in\n    {!graph_rewrite}. *)\n\ntype sort =\n  | Value  (** Scalar or vector computation. *)\n  | Pointer  (** Pointer into a buffer. *)\n  | Index  (** Index-like value or loop variable. *)\n  | Effect  (** Side-effecting node (store, barrier). *)\n(** Coarse node role.\n\n    Pointer and effect nodes are visible directly through {!sort} rather\n    than recovered indirectly from validators. [Index] covers index-like\n    values and loop variables. *)\n\ntype bufferize_device =\n  | Device_single of string  (** A single named device. *)\n  | Device_multi of string list  (** Multiple devices for sharded buffers. 
*)\n  | Device_index of int  (** Device selected by index. *)\n(** Bufferization device selector. *)\n\ntype estimate =\n  | Int of int  (** Concrete count. *)\n  | Symbolic of t  (** Symbolic expression depending on runtime variables. *)\n(** Static or symbolic cost estimate. *)\n\ntype estimates = {\n  ops : estimate;  (** Arithmetic operation count. *)\n  lds : estimate;  (** Local data share (LDS) access count. *)\n  mem : estimate;  (** Global memory access count. *)\n}\n(** Kernel cost estimates. *)\n\nmodule Opt : sig\n  type t =\n    | Local of { axis : int; amount : int }\n        (** Split [axis] into local (workgroup-shared) tiles of [amount]. *)\n    | Upcast of { axis : int; amount : int }\n        (** Vectorize [axis] by [amount] lanes. *)\n    | Unroll of { axis : int; amount : int }\n        (** Unroll [axis] by [amount] iterations. *)\n    | Group of { axis : int; amount : int }\n        (** Split [axis] into workgroups of [amount]. *)\n    | Grouptop of { axis : int; amount : int }\n        (** Like {!Group} but takes the top portion of [axis]. *)\n    | Thread of { axis : int; amount : int }\n        (** Split [axis] into per-thread tiles of [amount]. *)\n    | Nolocals  (** Disable local memory usage for this kernel. *)\n    | Tc of { axis : int; tc_select : int; tc_opt : int; use_tc : int }\n        (** Tensor-core configuration. *)\n    | Padto of { axis : int; amount : int }\n        (** Pad [axis] to a multiple of [amount]. *)\n    | Swap of { axis : int; with_axis : int }\n        (** Swap two axes in the schedule. *)\n  (** Search and schedule options attached to kernel metadata. *)\n\n  val to_string : t -> string\n  (** [to_string opt] is a compact textual form of [opt]. *)\n\n  val pp : Format.formatter -> t -> unit\n  (** [pp] formats options with {!to_string}. *)\n\n  val axis : t -> int option\n  (** [axis opt] is the axis of [opt], or [None] for [Nolocals]. 
*)\n\n  val amount : t -> int option\n  (** [amount opt] is the amount/arg of [opt], or [None] for [Tc], [Swap],\n      and [Nolocals]. *)\n\n  val with_amount : t -> int -> t\n  (** [with_amount opt n] returns [opt] with its amount replaced by [n].\n      No-op for [Tc], [Swap], and [Nolocals]. *)\nend\n\ntype bufferize_opts = {\n  device : bufferize_device option;\n      (** Target device, or [None] for default placement. *)\n  addrspace : Dtype.addr_space;  (** Memory address space for the buffer. *)\n  removable : bool;\n      (** [true] if the buffer can be elided by later optimizations. *)\n}\n(** Bufferization options. *)\n\ntype kernel_info = {\n  name : string;  (** Kernel name for debugging and codegen. *)\n  axis_kinds : Axis_kind.t list;  (** Kind assignment per schedule axis. *)\n  dont_use_locals : bool;\n      (** [true] if local memory was disabled (e.g. via {!Opt.Nolocals}). *)\n  applied_opts : Opt.t list;  (** Schedule options already applied. *)\n  opts_to_apply : Opt.t list option;\n      (** Remaining options to apply, or [None] for auto-tuning. *)\n  estimates : estimates option;  (** Cost estimates, if computed. *)\n}\n(** Non-semantic kernel annotations currently carried by {!Sink}. *)\n\ntype view =\n  | Sink of { srcs : t list; kernel_info : kernel_info option }\n      (** Kernel root gathering semantic sources. *)\n  | Group of { srcs : t list }\n      (** Groups effect children without producing a value. *)\n  | After of { src : t; deps : t list }\n      (** Sequences [src] after [deps]. *)\n  | Param of { idx : int; dtype : Dtype.Ptr.t }\n      (** Global buffer parameter at index [idx]. *)\n  | Param_image of { idx : int; dtype : Dtype.Ptr.t; width : int; height : int }\n      (** Image buffer parameter with pixel dimensions. *)\n  | Define_local of { size : int; dtype : Dtype.Ptr.t }\n      (** Local (workgroup-shared) memory buffer of [size] elements. 
*)\n  | Define_reg of { size : int; dtype : Dtype.Ptr.t; slot : int }\n      (** Register-backed buffer of [size] elements at accumulator [slot]. *)\n  | Define_var of { name : string; lo : int; hi : int; dtype : Dtype.Val.t }\n      (** Scalar loop or index variable bounded by \\[[lo];[hi]\\]. *)\n  | Bufferize of {\n      src : t;\n      ranges : t list;\n      dtype : Dtype.Ptr.t;\n      opts : bufferize_opts;\n    }  (** Materializes [src] into a buffer. *)\n  | Const of { value : Const.t; dtype : Dtype.Val.t }\n      (** Compile-time constant. *)\n  | Vconst of { values : Const.t list; dtype : Dtype.Val.t }\n      (** Vector of compile-time constants (one per lane). *)\n  | Invalid_index of { dtype : Dtype.Val.t }\n      (** Invalid index sentinel. *)\n  | Index of { ptr : t; idxs : t list; gate : t option; dtype : Dtype.t }\n      (** Indexes into [ptr] with per-dimension [idxs] and optional [gate].\n          When [dtype] is [Ptr _], the node is a pointer-typed index (buffer\n          address). When [dtype] is [Val _], it is a value-typed index that\n          [pm_add_loads] will later wrap with {!Load}. *)\n  | Ptrcat of { srcs : t list; dtype : Dtype.Ptr.t }\n      (** Concatenates pointer bundles. *)\n  | Load of { src : t; alt : t option; dtype : Dtype.Val.t }\n      (** Loads from pointer [src]. [alt] is used when gated. *)\n  | Store of { dst : t; value : t; ranges : t list }\n      (** Stores [value] through pointer [dst]. *)\n  | Unary of { op : Op.unary; src : t; dtype : Dtype.Val.t }\n      (** Unary arithmetic or transcendental. *)\n  | Binary of { op : Op.binary; lhs : t; rhs : t; dtype : Dtype.Val.t }\n      (** Binary arithmetic, logic, or comparison. *)\n  | Ternary of { op : Op.ternary; a : t; b : t; c : t; dtype : Dtype.Val.t }\n      (** Ternary operation ([Where] or [Mulacc]). *)\n  | Cast of { src : t; dtype : Dtype.t }\n      (** Type cast. When [dtype] is [Ptr _], this is a pointer reinterpretation\n          (e.g. 
widening an Index pointer for grouped loads). *)\n  | Bitcast of { src : t; dtype : Dtype.Val.t }\n      (** Bit-preserving reinterpretation. *)\n  | Vectorize of { srcs : t list; dtype : Dtype.t }\n      (** Packs scalar [srcs] into a vector.  When the sources are pointers,\n          [dtype] is [Ptr _] with [v = List.length srcs]. *)\n  | Vcat of { srcs : t list; dtype : Dtype.Val.t }\n      (** Concatenates vectors with a common scalar type. *)\n  | Gep of { src : t; idxs : int list; dtype : Dtype.Val.t }\n      (** Extracts elements at [idxs] from a vector. When [idxs] has one\n          element, the result is scalar. When [idxs] has multiple elements,\n          the result is a vector of the extracted elements. *)\n  | Range of { size : t; dtype : Dtype.Val.t; axis : int; sub : int list; kind : Axis_kind.t }\n      (** Loop or index variable over \\[[0];[size-1]\\] on [axis]. *)\n  | End of { value : t; ranges : t list }\n      (** Closes loop [ranges] around [value]. *)\n  | Barrier  (** Workgroup barrier. *)\n  | Special of { dim : Special_dim.t; size : t; dtype : Dtype.Val.t }\n      (** Backend-provided hardware index. *)\n  | Reduce of { op : Op.reduce; src : t; ranges : t list; dtype : Dtype.Val.t }\n      (** Reduces [src] over [ranges] with [op]. *)\n  | Unroll of { src : t; axes : (int * int) list; dtype : Dtype.Val.t }\n      (** Encodes unrolled lanes of [src]. *)\n  | Contract of { src : t; axes : (int * int) list; dtype : Dtype.Val.t }\n      (** Contracts unrolled structure back into a vector dtype. *)\n  | Wmma of {\n      name : string;\n      a : t;\n      b : t;\n      c : t;\n      dtype : Dtype.Val.t;\n      dims : int * int * int;\n      dtype_in : Dtype.scalar;\n      dtype_out : Dtype.scalar;\n      device : string;\n      threads : int;\n      upcast_axes : (int * int) list * (int * int) list * (int * int) list;\n      reduce_axes : int list;\n    }  (** Tensor-core matrix multiply-accumulate primitive. 
*)\n  | Custom of { fmt : string; args : t list }\n      (** Backend-specific effect or statement. *)\n  | Custom_inline of { fmt : string; args : t list; dtype : Dtype.Val.t }\n      (** Backend-specific inline value expression. *)\n(** Read-only node view. Pattern-match via {!view}. *)\n\n(** {1:building Building} *)\n\nval sink : ?kernel_info:kernel_info -> t list -> t\n(** [sink ?kernel_info srcs] is a kernel root with semantic sources [srcs]. *)\n\nval group : t list -> t\n(** [group srcs] groups effect-like children without introducing a value.\n\n    Returns the sole element unchanged when [srcs] is a singleton list. *)\n\nval after : src:t -> deps:t list -> t\n(** [after ~src ~deps] sequences [src] after [deps].\n\n    Returns [src] unchanged when [deps] is empty. *)\n\nval param : idx:int -> dtype:Dtype.Ptr.t -> t\n(** [param ~idx ~dtype] is a global buffer parameter. *)\n\nval param_image : idx:int -> dtype:Dtype.Ptr.t -> width:int -> height:int -> t\n(** [param_image ~idx ~dtype ~width ~height] is an image parameter. *)\n\nval define_local : size:int -> dtype:Dtype.Ptr.t -> t\n(** [define_local ~size ~dtype] defines a local-memory buffer. *)\n\nval define_reg : size:int -> dtype:Dtype.Ptr.t -> slot:int -> t\n(** [define_reg ~size ~dtype ~slot] defines a register-backed buffer.\n\n    [slot] is a unique accumulator index that prevents parallel reduce\n    accumulators from being merged by {!intern}. *)\n\nval define_var : name:string -> lo:int -> hi:int -> ?dtype:Dtype.Val.t -> unit -> t\n(** [define_var ~name ~lo ~hi ()] is a scalar loop or index variable.\n\n    [dtype] defaults to {!Dtype.Val.index}. *)\n\nval bufferize :\n  src:t -> ranges:t list -> dtype:Dtype.Ptr.t -> opts:bufferize_opts -> t\n(** [bufferize ~src ~ranges ~dtype ~opts] materializes [src] into a buffer. *)\n\nval const : Const.t -> t\n(** [const c] is a constant node with dtype derived from [c]. 
*)\n\nval vconst : values:Const.t list -> dtype:Dtype.Val.t -> t\n(** [vconst ~values ~dtype] is a vector constant with one value per lane. *)\n\nval invalid_index : ?lanes:int -> unit -> t\n(** [invalid_index ?lanes ()] is the invalid index sentinel.\n\n    [lanes] defaults to [1]. *)\n\nval index : ptr:t -> idxs:t list -> ?gate:t -> ?as_ptr:bool -> unit -> t\n(** [index ~ptr ~idxs ?gate ?as_ptr ()] indexes pointer [ptr].\n\n    When [as_ptr] is [true] (the default), the result is a pointer-typed\n    index ([dtype = Ptr _]). When [as_ptr] is [false], the result is a\n    value-typed index ([dtype = Val _]) that [pm_add_loads] will later\n    wrap with {!Load}.\n\n    Raises [Invalid_argument] if [ptr] does not produce a pointer. *)\n\nval index_raw : ptr:t -> idxs:t list -> ?gate:t -> dtype:Dtype.t -> unit -> t\n(** [index_raw ~ptr ~idxs ?gate ~dtype ()] creates an Index node with an\n    explicit dtype. Unlike {!index}, this does not validate [ptr] and does\n    not derive the dtype from [ptr]. Used by rewrite rules that need to\n    change an Index's dtype directly (e.g., [pm_add_loads]). *)\n\nval ptrcat : srcs:t list -> dtype:Dtype.Ptr.t -> t\n(** [ptrcat ~srcs ~dtype] concatenates pointer bundles. *)\n\nval load : src:t -> ?alt:t -> unit -> t\n(** [load ~src ?alt ()] loads from pointer [src].\n\n    The result dtype is derived from [src].\n\n    Raises [Invalid_argument] if [src] does not produce a pointer. *)\n\nval store : dst:t -> value:t -> ranges:t list -> t\n(** [store ~dst ~value ~ranges] stores [value] through pointer [dst]. *)\n\nval unary : op:Op.unary -> src:t -> t\n(** [unary ~op ~src] applies [op] to [src]. The result dtype is derived from\n    [src]. *)\n\nval binary : op:Op.binary -> lhs:t -> rhs:t -> t\n(** [binary ~op ~lhs ~rhs] applies a binary operation.\n\n    Comparisons return a boolean dtype with the lane count of [lhs]. Other\n    operators inherit the dtype of [lhs]. 
*)\n\nval ternary : op:Op.ternary -> a:t -> b:t -> c:t -> t\n(** [ternary ~op ~a ~b ~c] applies a ternary operation.\n\n    [Where] inherits the dtype of [b]. [Mulacc] inherits the dtype of [a]. *)\n\nval cast : src:t -> dtype:Dtype.t -> t\n(** [cast ~src ~dtype] casts [src] to [dtype]. When [dtype] is [Ptr _], the\n    result is a pointer-typed node (e.g. widening an Index for grouped loads). *)\n\nval bitcast : src:t -> dtype:Dtype.Val.t -> t\n(** [bitcast ~src ~dtype] bitcasts [src] to [dtype]. *)\n\nval vectorize : srcs:t list -> t\n(** [vectorize ~srcs] vectorizes scalar sources.\n\n    Raises [Invalid_argument] if [srcs] is empty or a source dtype is not\n    available. *)\n\nval vcat : srcs:t list -> t\n(** [vcat ~srcs] concatenates vectors with a common scalar type.\n\n    Raises [Invalid_argument] if [srcs] is empty or a source dtype is not\n    available. *)\n\nval gep : src:t -> idx:int -> t\n(** [gep ~src ~idx] extracts element [idx] from vector [src].\n\n    Raises [Invalid_argument] if [src] does not produce a dtype. *)\n\nval range :\n  size:t -> axis:int -> ?sub:int list -> kind:Axis_kind.t -> ?dtype:Dtype.Val.t -> unit -> t\n(** [range ~size ~axis ~kind ()] is a loop/index variable over [size].\n\n    [dtype] defaults to {!Dtype.Val.index}. *)\n\nval end_ : value:t -> ranges:t list -> ?tag:string -> unit -> t\n(** [end_ ~value ~ranges ()] closes loop ranges around [value].\n\n    [tag] sets the node's tag. Pass [~tag:\"mergeable\"] to mark Ends\n    created by reduce-to-accumulator lowering. *)\n\nval tag : t -> string option\n(** [tag node] is the node's tag, or [None]. *)\n\nval with_tag : string -> t -> t\n(** [with_tag s node] returns a node with the same view as [node] and tag\n    [Some s]. Because tags are part of the hash-consing key, the result may\n    be a different physical node than [node]. *)\n\nval barrier : t\n(** [barrier] is a barrier effect. 
*)\n\nval special : dim:Special_dim.t -> size:t -> ?dtype:Dtype.Val.t -> unit -> t\n(** [special ~dim ~size ()] is a backend special index.\n\n    [dtype] defaults to {!Dtype.Val.int32}. *)\n\nval reduce : op:Op.reduce -> src:t -> ranges:t list -> dtype:Dtype.Val.t -> t\n(** [reduce ~op ~src ~ranges ~dtype] reduces [src] over [ranges]. *)\n\nval unroll : src:t -> axes:(int * int) list -> dtype:Dtype.Val.t -> t\n(** [unroll ~src ~axes ~dtype] encodes unrolled lanes of [src]. *)\n\nval contract : src:t -> axes:(int * int) list -> dtype:Dtype.Val.t -> t\n(** [contract ~src ~axes ~dtype] contracts unrolled structure back into a vector\n    dtype. *)\n\nval wmma :\n  name:string ->\n  a:t ->\n  b:t ->\n  c:t ->\n  dtype:Dtype.Val.t ->\n  dims:int * int * int ->\n  dtype_in:Dtype.scalar ->\n  dtype_out:Dtype.scalar ->\n  device:string ->\n  threads:int ->\n  upcast_axes:(int * int) list * (int * int) list * (int * int) list ->\n  reduce_axes:int list ->\n  t\n(** [wmma ~name ~a ~b ~c ~dtype ~dims ~dtype_in ~dtype_out ~device ~threads\n    ~upcast_axes ~reduce_axes] is a tensor-core matrix multiply-accumulate\n    primitive. [dims] is [(M, N, K)], [dtype_in] and [dtype_out] are the\n    input and output scalar types, and [threads] is the warp thread count. *)\n\nval custom : fmt:string -> args:t list -> t\n(** [custom ~fmt ~args] is a backend-specific effect or statement node. *)\n\nval custom_inline : fmt:string -> args:t list -> dtype:Dtype.Val.t -> t\n(** [custom_inline ~fmt ~args ~dtype] is a backend-specific inline value node.\n*)\n\nval gep_multi : src:t -> idxs:int list -> t\n(** [gep_multi ~src ~idxs] extracts elements at [idxs] from vector [src].\n\n    Returns [src] unchanged if [idxs] is [[0]] and [src] is scalar.\n    Returns a single scalar {!Gep} for one index. Returns a multi-element\n    {!Gep} for multiple indices. 
*)\n\nval broadcast : t -> int -> t\n(** [broadcast node n] repeats [node] into an [n]-wide vector.\n\n    Scalars become {!Vectorize} with [n] copies. Vectors become {!Vcat} of\n    [n] copies. Pointer nodes become {!Vectorize} with pointer vector\n    width [n]. [n <= 1] returns [node]. *)\n\nval const_int : int -> t\n(** [const_int n] is an {!Dtype.index} constant [n]. *)\n\nval const_float : float -> t\n(** [const_float x] is a {!Dtype.float32} constant [x]. *)\n\nval const_bool : bool -> t\n(** [const_bool b] is a {!Dtype.bool} constant [b]. *)\n\nval zero_like : t -> t\n(** [zero_like node] is a zero constant matching [node]'s dtype (including\n    vector width). Float dtypes get [0.0], bool gets [false], integers get [0].\n\n    Raises [Invalid_argument] if [node] has no dtype. *)\n\n(** {1:inspection Inspecting} *)\n\nval view : t -> view\n(** [view n] is the read-only view of [n]. *)\n\nval dtype : t -> Dtype.t\n(** [dtype n] is the dtype of [n].\n\n    Raises [Invalid_argument] if [n] has no dtype (e.g. effect nodes). *)\n\nval dtype_opt : t -> Dtype.t option\n(** [dtype_opt n] is the dtype of [n], or [None] for effect nodes. *)\n\nval sort : t -> sort\n(** [sort n] is the coarse role of [n]. *)\n\nval children : t -> t list\n(** [children n] are the direct input nodes of [n]. *)\n\nval toposort : t -> t list\n(** [toposort root] is [root]'s dependency order, from leaves to [root]. *)\n\nval intern : t -> t\n(** [intern root] hash-conses equal nodes within the DAG reachable from [root].\n*)\n\n(* CR: replace the ad-hoc is_* and range_kind/range_axis accessors with a\n   small set of *_arg projections (const_arg, range_arg, …) that return\n   option types suitable for pattern matching.  See const_arg below. *)\n\nval const_arg : t -> Const.view option\n(** [const_arg node] is [Some v] when [node] is a {!Const}, where [v] is\n    the constant's value as a {!Const.view}. 
*)\n\nval is_alu : t -> bool\n(** [is_alu node] is [true] for {!Unary}, {!Binary}, and {!Ternary} nodes. *)\n\nval is_ptr : t -> bool\n(** [is_ptr node] is [true] for pointer-producing nodes ({!Param},\n    {!Param_image}, {!Define_local}, {!Define_reg}, {!Bufferize}, {!Index},\n    {!Ptrcat}, {!Vectorize} with [Ptr _] dtype), including through\n    {!After}/{!Cast}/{!Bitcast} wrappers. *)\n\nval ptr_dtype : t -> Dtype.Ptr.t\n(** [ptr_dtype n] is the pointer dtype of [n]. Follows through\n    {!After}/{!Cast}/{!Bitcast} wrappers.\n\n    Raises [Invalid_argument] if [n] is not a pointer-producing node. *)\n\nval is_range : t -> bool\n(** [is_range node] is [true] for {!Range} nodes. *)\n\nval is_const : t -> bool\n(** [is_const node] is [true] for {!Const} nodes. *)\n\nval range_size : t -> t\n(** [range_size node] is the [size] child of a {!Range} node.\n\n    Raises [Invalid_argument] if [node] is not a {!Range}. *)\n\nval range_axis : t -> int\n(** [range_axis node] is the [axis] of a {!Range} node.\n\n    Raises [Invalid_argument] if [node] is not a {!Range}. *)\n\nval range_kind : t -> Axis_kind.t\n(** [range_kind node] is the [kind] of a {!Range} node.\n\n    Raises [Invalid_argument] if [node] is not a {!Range}. *)\n\nval range_sub : t -> int list\n(** [range_sub node] is the [sub] indices of a {!Range} node.\n\n    Raises [Invalid_argument] if [node] is not a {!Range}. *)\n\nval const_to_int : t -> int\n(** [const_to_int node] extracts the integer value of a {!Const} node.\n\n    Raises [Invalid_argument] if [node] is not an integer constant. *)\n\n\nmodule Ref_tbl : Hashtbl.S with type key = t\n(** Hash table keyed by physical identity ([==]). *)\n\n(** {1:validation Validation} *)\n\nval validate : t -> unit\n(** [validate root] checks kernel invariants.\n\n    Raises [Failure] on the first violation. 
*)\n\n(** {1:rewriting Rewriting} *)\n\nval first_match : (t -> t option) list -> t -> t option\n(** [first_match rules node] tries each rule in order, returning the first\n    [Some]. Returns [None] if no rule matches. *)\n\nval replace : t -> ?children:t list -> ?dtype:Dtype.t -> unit -> t\n(** [replace node ?children ?dtype ()] rebuilds [node], substituting\n    [children] and/or [dtype] where provided. Unchanged fields are preserved.\n\n    [children] must have the same length as [children node]. [dtype] applies\n    to nodes that carry a dtype field; effect nodes ignore it.\n\n    The result is interned (hash-consed via {!mk}). *)\n\nval map_children : (t -> t) -> view -> view\n(** [map_children f v] rebuilds the direct children of [v] with [f]. *)\n\n\nval graph_rewrite : ?name:string -> (t -> t option) -> t -> t\n(** [graph_rewrite ?name f root] applies [f] to every node in the DAG\n    rooted at [root] in a single pass. Each node is processed at most\n    once. When a rewrite produces a new node, that node is fully\n    processed (its children are visited), but already-processed nodes\n    are never re-visited. [name] is used in error messages. *)\n\nval substitute : ?tags:int Ref_tbl.t -> (t * t) list -> t -> t\n(** [substitute ?tags mappings root] replaces nodes in [root] by physical\n    identity ([==]). Each [(old, new_)] pair causes [old] to be replaced\n    with [new_].\n\n    When [tags] is provided, tag propagation is enabled: if a replaced or\n    rebuilt node has an entry in [tags], the entry is copied to the new node. *)\n\n(** {1:analysis Analysis} *)\n\nval backward_slice : t -> t list\n(** [backward_slice root] is all nodes transitively reachable from [root]\n    (walking children), in topological order (leaves first).\n\n    {b Note.} The result includes [root] itself (as the last element). 
*)\n\nval in_backward_slice : t -> t -> bool\n(** [in_backward_slice needle haystack] is [true] if [needle] appears in the\n    transitive dependencies of [haystack]. Uses physical identity ([==]). *)\n\nval find_nodes : (t -> bool) -> t -> t list\n(** [find_nodes pred root] returns all nodes in [root]'s DAG satisfying [pred],\n    in topological order. *)\n\nval divides : t -> int -> t option\n(** [divides node v] is [Some q] if [node] can be symbolically shown to be\n    divisible by [v], where [q] is the quotient node ([node / v]).  Returns\n    [None] when divisibility cannot be proved.\n\n    Handles {!Const}, {!Binary} [Add] (both operands must divide), and\n    {!Binary} [Mul] (either operand may divide). *)\n\nval vmin : t -> int\n(** [vmin node] is a lower bound on the value [node] can take. *)\n\nval vmax : t -> int\n(** [vmax node] is an upper bound on the value [node] can take. *)\n\nval sym_infer : t -> (string * int) list -> int\n(** [sym_infer node var_vals] evaluates [node] to a concrete integer by\n    substituting each {!Define_var} with its value from [var_vals]\n    (matched by name).\n\n    Raises [Failure] if the expression contains nodes that cannot be\n    evaluated (e.g. loads, stores, non-arithmetic ops). *)\n\nval range_start : t -> int option\n(** [range_start v] is the child index at which range arguments begin for\n    nodes that carry them.\n\n    Returns [Some 1] for {!view.Bufferize}, {!view.Reduce}, {!view.End};\n    [Some 2] for {!view.Store}; [Some 3] for {!view.Wmma};\n    [None] for all other nodes. *)\n\nval ended_ranges : ?live:(t -> t list) -> t -> t list\n(** [ended_ranges ?live node] is the list of ranges closed by [node].\n\n    For {!view.Bufferize}, {!view.Reduce}, {!view.Store}, {!view.Wmma}, and\n    {!view.End}: range children from the range-start offset onward. For\n    {!view.After}: the union of [ended_ranges] of deps. 
For {!view.Contract}:\n    ranges from the source whose axis matches one of the contract's axis IDs,\n    looked up via [live]. Otherwise: empty.\n\n    [live] defaults to [fun _ -> []] and is required for correct {!view.Contract}\n    handling. {!live_ranges_tbl} provides the appropriate lookup automatically. *)\n\nval live_ranges : t -> t list\n(** [live_ranges node] is the set of {!view.Range} nodes that are transitively\n    reachable from [node]'s children and have not been ended by any inner\n    {!view.Reduce}, {!view.Store}, or {!view.End} node. If [node] is itself a\n    {!view.Range}, it is included.\n\n    {b Note.} Computed by a full bottom-up traversal of [node]'s DAG.\n    Not cached — callers that need live ranges for many nodes in the same\n    DAG should use {!live_ranges_tbl} instead. *)\n\nval live_ranges_tbl : t -> t list Ref_tbl.t\n(** [live_ranges_tbl root] precomputes {!live_ranges} for every node in the\n    DAG rooted at [root]. The returned table maps each node to its live\n    ranges.\n\n    Use this when the gate function of a traversal needs live-range\n    information for many nodes. *)\n\n(** {1:operators Operators}\n\n    {!module-O} provides infix operators for building arithmetic Kernel DAG\n    nodes. Open locally in codegen modules:\n\n    {[\n      let open Kernel.O in\n      let idx = base * int_ stride + offset in\n      ...\n    ]} *)\n\nmodule O : sig\n  val ( + ) : t -> t -> t\n  (** Binary {!Op.Add}. *)\n\n  val ( * ) : t -> t -> t\n  (** Binary {!Op.Mul}. *)\n\n  val ( / ) : t -> t -> t\n  (** Binary {!Op.Idiv}. *)\n\n  val ( mod ) : t -> t -> t\n  (** Binary {!Op.Mod}. *)\n\n  val ( < ) : t -> t -> t\n  (** Binary {!Op.Cmplt}. Result has boolean scalar dtype. *)\n\n  val eq : t -> t -> t\n  (** Binary {!Op.Cmpeq}. *)\n\n  val ne : t -> t -> t\n  (** Binary {!Op.Cmpne}. *)\n\n  val where : t -> t -> t -> t\n  (** [where cond then_ else_] is {!Op.Where}. *)\n\n  val neg : t -> t\n  (** Unary {!Op.Neg}. 
*)\n\n  val not_ : t -> t\n  (** Logical NOT: [eq node (bool_ false)]. *)\n\n  val cast : Dtype.t -> t -> t\n  (** [cast dtype node] casts [node] to [dtype]. *)\n\n  val int_ : int -> t\n  (** [int_ n] is [const_int n] ({!Dtype.index}-typed). *)\n\n  val float_ : float -> t\n  (** [float_ x] is [const_float x] ({!Dtype.float32}-typed). *)\n\n  val bool_ : bool -> t\n  (** [bool_ b] is [const_bool b]. *)\nend\n\n(** {1:comparison Comparison} *)\n\nval compare_structure : t -> t -> int\n(** [compare_structure a b] compares two nodes by recursive structural key\n    (op ordinal, arg, dtype, children). Used for canonicalizing commutative\n    operations. *)\n\n(** {1:formatting Formatting} *)\n\nval pp_view : Format.formatter -> t -> unit\n(** [pp_view] formats one node with local ids relative to that node's DAG. *)\n\nval pp : Format.formatter -> t -> unit\n(** [pp] formats the whole DAG rooted at its argument. *)\n\nval view_op_name : view -> string\n(** [view_op_name v] is the operation name of [v] as an [\"Ops.XXX\"] string\n    (e.g., [\"Ops.SINK\"], [\"Ops.LOAD\"], [\"Ops.ADD\"]). *)\n\nval print_uops : ?label:string -> t -> unit\n(** [print_uops ?label root] prints the DAG rooted at [root] in columnar\n    format to stderr (one node per line: id, op, dtype, sources, value).\n    When [label] is provided, [\"=== label ===\"] is printed before the\n    listing. *)\n"
  },
  {
    "path": "packages/tolk/lib/ir/op.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\ntype reduce = [ `Add | `Mul | `Max ]\ntype unary = [ `Neg | `Exp2 | `Log2 | `Sin | `Sqrt | `Recip | `Trunc ]\n\ntype binary =\n  [ `Add\n  | `Sub\n  | `Mul\n  | `Fdiv\n  | `Idiv\n  | `Mod\n  | `Max\n  | `Pow\n  | `Shl\n  | `Shr\n  | `And\n  | `Or\n  | `Xor\n  | `Threefry\n  | `Cmplt\n  | `Cmpeq\n  | `Cmpne ]\n\ntype ternary = [ `Where | `Mulacc ]\n\nlet equal_reduce = ( = )\nlet compare_reduce = Stdlib.compare\nlet equal_unary = ( = )\nlet compare_unary = Stdlib.compare\nlet equal_binary = ( = )\nlet compare_binary = Stdlib.compare\nlet equal_ternary = ( = )\nlet compare_ternary = Stdlib.compare\n\nlet string_of_reduce = function\n  | `Add -> \"add\" | `Mul -> \"mul\" | `Max -> \"max\"\n\nlet string_of_unary = function\n  | `Neg -> \"neg\" | `Exp2 -> \"exp2\" | `Log2 -> \"log2\" | `Sin -> \"sin\"\n  | `Sqrt -> \"sqrt\" | `Recip -> \"recip\" | `Trunc -> \"trunc\"\n\nlet string_of_binary = function\n  | `Add -> \"add\" | `Sub -> \"sub\" | `Mul -> \"mul\" | `Fdiv -> \"fdiv\"\n  | `Idiv -> \"idiv\" | `Mod -> \"mod\" | `Max -> \"max\" | `Pow -> \"pow\"\n  | `Shl -> \"shl\" | `Shr -> \"shr\" | `And -> \"and\" | `Or -> \"or\"\n  | `Xor -> \"xor\" | `Threefry -> \"threefry\" | `Cmplt -> \"cmplt\"\n  | `Cmpeq -> \"cmpeq\" | `Cmpne -> \"cmpne\"\n\nlet string_of_ternary = function\n  | `Where -> \"where\" | `Mulacc -> \"mulacc\"\n\nlet pp_reduce fmt op = Format.pp_print_string fmt (string_of_reduce op)\nlet pp_unary fmt op = Format.pp_print_string fmt (string_of_unary op)\nlet pp_binary fmt op = Format.pp_print_string fmt (string_of_binary op)\nlet pp_ternary fmt op = Format.pp_print_string fmt (string_of_ternary op)\n"
  },
  {
    "path": "packages/tolk/lib/ir/op.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\n(** Arithmetic and logical operations.\n\n    Operations are represented as polymorphic variants grouped by arity. Each\n    group has {!equal_reduce}, {!compare_reduce}, {!pp_reduce} (and likewise for\n    {!unary}, {!binary}, {!ternary}). *)\n\n(** {1:types Types} *)\n\ntype reduce = [ `Add | `Mul | `Max ]\n(** The type for reduction operations. *)\n\ntype unary = [ `Neg | `Exp2 | `Log2 | `Sin | `Sqrt | `Recip | `Trunc ]\n(** The type for unary operations. *)\n\ntype binary =\n  [ `Add\n  | `Sub\n  | `Mul\n  | `Fdiv  (** Floating-point division. *)\n  | `Idiv  (** Integer division. *)\n  | `Mod\n  | `Max\n  | `Pow\n  | `Shl  (** Left shift. *)\n  | `Shr  (** Right shift. *)\n  | `And  (** Bitwise and. *)\n  | `Or  (** Bitwise or. *)\n  | `Xor  (** Bitwise xor. *)\n  | `Threefry  (** Threefry PRNG mixing. *)\n  | `Cmplt  (** Less-than comparison (result is bool). *)\n  | `Cmpeq  (** Equality comparison (result is bool). *)\n  | `Cmpne  (** Not-equal comparison (result is bool). *) ]\n(** The type for binary operations.\n\n    Comparison operators ([`Cmplt], [`Cmpeq], [`Cmpne]) produce a boolean dtype\n    regardless of their operand dtype. All other operators preserve the operand\n    dtype. *)\n\ntype ternary = [ `Where | `Mulacc ]\n(** The type for ternary operations.\n\n    [`Where] selects between two values based on a boolean condition. [`Mulacc]\n    is fused multiply-accumulate. *)\n\n(** {1:reduce Reduce operations} *)\n\nval equal_reduce : reduce -> reduce -> bool\n(** [equal_reduce a b] is [true] iff [a] and [b] are the same. 
*)\n\nval compare_reduce : reduce -> reduce -> int\n(** [compare_reduce a b] totally orders reduce operations. *)\n\nval pp_reduce : Format.formatter -> reduce -> unit\n(** [pp_reduce] formats a reduce operation as a lowercase string. *)\n\n(** {1:unary Unary operations} *)\n\nval equal_unary : unary -> unary -> bool\n(** [equal_unary a b] is [true] iff [a] and [b] are the same. *)\n\nval compare_unary : unary -> unary -> int\n(** [compare_unary a b] totally orders unary operations. *)\n\nval pp_unary : Format.formatter -> unary -> unit\n(** [pp_unary] formats a unary operation as a lowercase string. *)\n\n(** {1:binary Binary operations} *)\n\nval equal_binary : binary -> binary -> bool\n(** [equal_binary a b] is [true] iff [a] and [b] are the same. *)\n\nval compare_binary : binary -> binary -> int\n(** [compare_binary a b] totally orders binary operations. *)\n\nval pp_binary : Format.formatter -> binary -> unit\n(** [pp_binary] formats a binary operation as a lowercase string. *)\n\n(** {1:ternary Ternary operations} *)\n\nval equal_ternary : ternary -> ternary -> bool\n(** [equal_ternary a b] is [true] iff [a] and [b] are the same. *)\n\nval compare_ternary : ternary -> ternary -> int\n(** [compare_ternary a b] totally orders ternary operations. *)\n\nval pp_ternary : Format.formatter -> ternary -> unit\n(** [pp_ternary] formats a ternary operation as a lowercase string. *)\n"
  },
  {
    "path": "packages/tolk/lib/ir/program.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\nlet strf = Printf.sprintf\n\n(* Types *)\n\ntype id = int\ntype sort = Value | Pointer | Index | Effect\n\ntype view =\n  | Param of { idx : int; dtype : Dtype.Ptr.t }\n  | Param_image of { idx : int; dtype : Dtype.Ptr.t; width : int; height : int }\n  | Define_local of { size : int; dtype : Dtype.Ptr.t }\n  | Define_reg of { size : int; dtype : Dtype.Ptr.t }\n  | Define_var of { name : string; lo : int; hi : int; dtype : Dtype.Val.t }\n  | Const of { value : Const.t; dtype : Dtype.Val.t }\n  | Index of { ptr : id; idxs : id list; gate : id option; dtype : Dtype.Ptr.t }\n  | Load of { src : id; alt : id option; dtype : Dtype.Val.t }\n  | After of { src : id; deps : id list; dtype : Dtype.Val.t }\n  | Store of { dst : id; value : id }\n  | Unary of { op : Op.unary; src : id; dtype : Dtype.Val.t }\n  | Binary of { op : Op.binary; lhs : id; rhs : id; dtype : Dtype.Val.t }\n  | Ternary of { op : Op.ternary; a : id; b : id; c : id; dtype : Dtype.Val.t }\n  | Cast of { src : id; dtype : Dtype.Val.t }\n  | Bitcast of { src : id; dtype : Dtype.Val.t }\n  | Vectorize of { srcs : id list; dtype : Dtype.Val.t }\n  | Gep of { src : id; idxs : int list; dtype : Dtype.Val.t }\n  | Range of {\n      size : id;\n      dtype : Dtype.Val.t;\n      axis : int;\n      sub : int list;\n      kind : Axis_kind.t;\n    }\n  | End_range of { dep : id; range : id }\n  | If of { cond : id; idx_for_dedup : id }\n  | Endif of { if_ : id }\n  | Barrier\n  | Special of { dim : Special_dim.t; size : id; dtype : Dtype.Val.t }\n  | Wmma of {\n      name : string;\n      a : id;\n      b : id;\n      c : id;\n      dtype : Dtype.Val.t;\n      dims : int * int 
* int;\n      dtype_in : Dtype.scalar;\n      dtype_out : Dtype.scalar;\n      device : string;\n      threads : int;\n      upcast_axes : (int * int) list * (int * int) list * (int * int) list;\n      reduce_axes : int list;\n    }\n  | Custom of { fmt : string; args : id list }\n  | Custom_inline of { fmt : string; args : id list; dtype : Dtype.Val.t }\n\ntype t = view array\n\nlet unary ~op ~src ~dtype = Unary { op; src; dtype }\nlet binary ~op ~lhs ~rhs ~dtype = Binary { op; lhs; rhs; dtype }\nlet ternary ~op ~a ~b ~c ~dtype = Ternary { op; a; b; c; dtype }\n\nlet dtype_of = function\n  | Param { dtype; _ } | Param_image { dtype; _ } | Define_local { dtype; _ }\n  | Define_reg { dtype; _ } | Index { dtype; _ } ->\n      Some (Dtype.Ptr.base dtype)\n  | Define_var { dtype; _ } | Const { dtype; _ } | Load { dtype; _ }\n  | After { dtype; _ } | Unary { dtype; _ } | Binary { dtype; _ }\n  | Ternary { dtype; _ } | Cast { dtype; _ } | Bitcast { dtype; _ }\n  | Vectorize { dtype; _ } | Gep { dtype; _ } | Range { dtype; _ }\n  | Special { dtype; _ } | Wmma { dtype; _ } | Custom_inline { dtype; _ } ->\n      Some dtype\n  | Store _ | End_range _ | If _ | Endif _ | Barrier | Custom _ -> None\n\nlet refs_of = function\n  | Param _ | Param_image _ | Define_local _ | Define_reg _ | Define_var _\n  | Const _ | Barrier ->\n      []\n  | Unary { src; _ } | Cast { src; _ } | Bitcast { src; _ } | Gep { src; _ } ->\n      [ src ]\n  | Range { size; _ } | Special { size; _ } -> [ size ]\n  | End_range { dep; range } -> [ dep; range ]\n  | Endif { if_ } -> [ if_ ]\n  | Store { dst; value } -> [ dst; value ]\n  | After { src; deps; _ } -> src :: deps\n  | If { cond; idx_for_dedup } -> [ cond; idx_for_dedup ]\n  | Binary { lhs; rhs; _ } -> [ lhs; rhs ]\n  | Ternary { a; b; c; _ } | Wmma { a; b; c; _ } -> [ a; b; c ]\n  | Vectorize { srcs; _ } -> srcs\n  | Custom { args; _ } | Custom_inline { args; _ } -> args\n  | Index { ptr; idxs; gate; _ } -> (ptr :: idxs) @ Option.to_list gate\n  
| Load { src; alt; _ } -> src :: Option.to_list alt\n\nlet map_children f instr =\n  match instr with\n  | Param _ | Param_image _ | Define_local _ | Define_reg _ | Define_var _\n  | Const _ | Barrier ->\n      instr\n  | Index r ->\n      let ptr = f r.ptr in let idxs = List.map f r.idxs in\n      let gate = Option.map f r.gate in\n      Index { r with ptr; idxs; gate }\n  | Load r ->\n      let src = f r.src in let alt = Option.map f r.alt in\n      Load { r with src; alt }\n  | After r ->\n      let src = f r.src in let deps = List.map f r.deps in\n      After { r with src; deps }\n  | Store { dst; value } ->\n      let dst = f dst in let value = f value in\n      Store { dst; value }\n  | Unary r -> Unary { r with src = f r.src }\n  | Binary r ->\n      let lhs = f r.lhs in let rhs = f r.rhs in\n      Binary { r with lhs; rhs }\n  | Ternary r ->\n      let a = f r.a in let b = f r.b in let c = f r.c in\n      Ternary { r with a; b; c }\n  | Cast r -> Cast { r with src = f r.src }\n  | Bitcast r -> Bitcast { r with src = f r.src }\n  | Vectorize r -> Vectorize { r with srcs = List.map f r.srcs }\n  | Gep r -> Gep { r with src = f r.src }\n  | Range r -> Range { r with size = f r.size }\n  | End_range { dep; range } ->\n      let dep = f dep in let range = f range in\n      End_range { dep; range }\n  | If { cond; idx_for_dedup } ->\n      let cond = f cond in let idx_for_dedup = f idx_for_dedup in\n      If { cond; idx_for_dedup }\n  | Endif { if_ } -> Endif { if_ = f if_ }\n  | Special r -> Special { r with size = f r.size }\n  | Wmma w -> Wmma { w with a = f w.a; b = f w.b; c = f w.c }\n  | Custom r -> Custom { r with args = List.map f r.args }\n  | Custom_inline r -> Custom_inline { r with args = List.map f r.args }\n\n(* Validation *)\n\nlet validate (program : t) =\n  let range_stack = ref [] in\n  let if_stack = ref [] in\n  let seen_specials = Hashtbl.create 8 in\n  let fail i msg =\n    failwith (strf \"Program.validate: instruction %d: %s\" i msg)\n  
in\n  let get_dtype r =\n    if r < 0 || r >= Array.length program then None else dtype_of program.(r)\n  in\n  let check_dtype_eq i ~ctx ~expected ~got =\n    match expected, got with\n    | Some expected, Some got when Dtype.Val.equal expected got -> ()\n    | Some expected, Some got ->\n        fail i (strf \"%s: expected %s, got %s\" ctx\n                  (Dtype.Val.to_string expected) (Dtype.Val.to_string got))\n    | None, _ -> fail i (strf \"%s: expected dtype not available\" ctx)\n    | _, None -> fail i (strf \"%s: operand dtype not available\" ctx)\n  in\n  let check_dtype_match i ~ctx left right =\n    match left, right with\n    | Some left, Some right when Dtype.Val.equal left right -> ()\n    | Some _, Some _ -> fail i (strf \"%s: operand dtypes don't match\" ctx)\n    | _ -> fail i (strf \"%s: operand dtype not available\" ctx)\n  in\n  let check_shift_rhs i rhs dtype =\n    match get_dtype rhs with\n    | Some rhs_dtype when Dtype.Val.equal rhs_dtype dtype -> ()\n    | Some rhs_dtype when Dtype.Val.equal rhs_dtype Dtype.Val.uint32 -> ()\n    | Some _ -> fail i \"shift rhs must match lhs dtype or be uint32\"\n    | None -> fail i \"shift rhs dtype not available\"\n  in\n  let check_scalar_bool i ~ctx r =\n    match get_dtype r with\n    | Some dt when Dtype.Val.scalar dt = Dtype.Bool && Dtype.Val.count dt = 1 -> ()\n    | Some dt when Dtype.Val.count dt > 1 ->\n        fail i (strf \"%s must be scalar (not vector)\" ctx)\n    | Some _ -> fail i (strf \"%s must be bool\" ctx)\n    | None -> fail i (strf \"%s dtype not available\" ctx)\n  in\n  let rec index_ref r =\n    match program.(r) with\n    | Index _ -> Some r\n    | Cast { src; _ } | Bitcast { src; _ } | After { src; _ } -> index_ref src\n    | _ -> None\n  in\n  let require_index_ref i ~ctx r =\n    match index_ref r with\n    | Some idx -> idx\n    | None -> fail i (strf \"%s must reference Index (or casted Index)\" ctx)\n  in\n  let check_int_scalar i ~ctx r =\n    match get_dtype r with\n 
   | Some dt when Dtype.Val.is_int dt && Dtype.Val.count dt = 1 -> ()\n    | Some dt when Dtype.Val.count dt <> 1 ->\n        fail i (strf \"%s must be scalar (not vector)\" ctx)\n    | Some _ -> fail i (strf \"%s must be int\" ctx)\n    | None -> fail i (strf \"%s dtype not available\" ctx)\n  in\n  let check_index_base i ptr =\n    match program.(ptr) with\n    | Param _ | Param_image _ | Define_local _ | Define_reg _ -> ()\n    | _ ->\n        fail i \"Index base must be a Param/Param_image/Define_local/Define_reg\"\n  in\n  let ptr_dtype_of i ptr : Dtype.Ptr.t =\n    match program.(ptr) with\n    | Param { dtype; _ } | Param_image { dtype; _ }\n    | Define_local { dtype; _ } | Define_reg { dtype; _ } -> dtype\n    | _ -> fail i \"must index a pointer definition\"\n  in\n  Array.iteri\n    (fun i instr ->\n      List.iter\n        (fun r ->\n          if r < 0 || r >= i then\n            fail i (strf \"references %%%d (out of bounds or forward)\" r))\n        (refs_of instr);\n      (match dtype_of instr with\n      | Some dt when Dtype.Val.scalar dt = Dtype.Index ->\n          fail i\n            \"Index dtype not allowed in linearized program (should be lowered)\"\n      | _ -> ());\n      match instr with\n      | Param { dtype; _ } | Param_image { dtype; _ } ->\n          if Dtype.Ptr.addrspace dtype <> Dtype.Global then\n            fail i \"Param must have Global addrspace\"\n      | Define_local { dtype; _ } ->\n          if Dtype.Ptr.addrspace dtype <> Dtype.Local then\n            fail i \"Define_local must have Local addrspace\"\n      | Define_reg { dtype; _ } ->\n          if Dtype.Ptr.addrspace dtype <> Dtype.Reg then\n            fail i \"Define_reg must have Reg addrspace\"\n      | Define_var { lo; hi; dtype; _ } ->\n          if Dtype.Val.count dtype <> 1 then fail i \"Define_var must be scalar\";\n          if not (Dtype.Val.is_int dtype || Dtype.Val.scalar dtype = Dtype.Index) then\n            fail i \"Define_var must be int/index\";\n       
   if lo > hi then fail i \"Define_var bounds invalid (lo > hi)\"\n      | Range { size; dtype; _ } ->\n          if not (Dtype.Val.is_int dtype) then fail i \"Range must have int dtype\";\n          if Dtype.Val.count dtype <> 1 then fail i \"Range must be scalar\";\n          check_dtype_eq i ~ctx:\"Range size\" ~expected:(Some dtype)\n            ~got:(get_dtype size);\n          range_stack := i :: !range_stack\n      | End_range { dep; range } -> (\n          if dep < 0 || dep >= i then\n            fail i (strf \"End_range dep references %%%d (invalid)\" dep);\n          (match program.(range) with\n          | Range _ -> ()\n          | _ -> fail i \"End_range must reference a Range\");\n          match !range_stack with\n          | top :: rest when top = range -> range_stack := rest\n          | _ -> fail i \"unbalanced End_range\")\n      | If { cond; idx_for_dedup } ->\n          if_stack := i :: !if_stack;\n          check_scalar_bool i ~ctx:\"If condition\" cond;\n          ignore (require_index_ref i ~ctx:\"If idx_for_dedup\" idx_for_dedup)\n      | Endif { if_ } -> (\n          (match program.(if_) with\n          | If _ -> ()\n          | _ -> fail i \"Endif must reference an If\");\n          match !if_stack with\n          | top :: rest when top = if_ -> if_stack := rest\n          | _ -> fail i \"unbalanced Endif\")\n      | Special { dim; size; dtype } ->\n          (match Hashtbl.find_opt seen_specials dim with\n          | Some first_idx ->\n              fail i\n                (Format.asprintf \"duplicate Special %a (first at %d)\"\n                   Special_dim.pp dim first_idx)\n          | None -> Hashtbl.add seen_specials dim i);\n          if Dtype.Val.scalar dtype <> Dtype.Int32 || Dtype.Val.count dtype <> 1 then\n            fail i \"Special must be int32 scalar\";\n          check_dtype_eq i ~ctx:\"Special size\" ~expected:(Some dtype)\n            ~got:(get_dtype size)\n      | Index { ptr; idxs; gate; _ } ->\n          
check_index_base i ptr;\n          (match idxs with\n          | [ _ ] -> ()\n          | [] -> fail i \"Index must have exactly one index\"\n          | _ ->\n              fail i\n                (strf \"Index must have exactly one index (got %d)\"\n                   (List.length idxs)));\n          List.iter (check_int_scalar i ~ctx:\"Index operand\") idxs;\n          Option.iter (check_scalar_bool i ~ctx:\"Index gate\") gate\n      | Load { src; alt; dtype } -> (\n          let idx_ref = require_index_ref i ~ctx:\"Load src\" src in\n          let ptr, gate =\n            match program.(idx_ref) with\n            | Index { ptr; gate; _ } -> (ptr, gate)\n            | _ -> fail i \"Load src requires Index\"\n          in\n          let ptr_dtype = ptr_dtype_of i ptr in\n          check_dtype_eq i ~ctx:\"Load dtype\" ~expected:(Some (Dtype.Ptr.base ptr_dtype))\n            ~got:(Some dtype);\n          match alt with\n          | None -> ()\n          | Some alt_ref -> (\n              check_dtype_eq i ~ctx:\"Load alt\" ~expected:(Some dtype)\n                ~got:(get_dtype alt_ref);\n              match gate with\n              | Some _ -> ()\n              | None -> fail i \"Load alt requires gated Index\"))\n      | After { src; dtype; _ } -> (\n          match program.(src) with\n          | Barrier | Store _ | End_range _ | Custom _ ->\n              if not (Dtype.Val.equal dtype Dtype.Val.void) then\n                fail i \"After void-source must have void dtype\"\n          | _ ->\n              check_dtype_eq i ~ctx:\"After src\" ~expected:(Some dtype)\n                ~got:(get_dtype src))\n      | Ternary { op = `Where; a = cond; b = then_; c = else_; dtype } ->\n          check_scalar_bool i ~ctx:\"Where condition\" cond;\n          check_dtype_eq i ~ctx:\"Where branch then\" ~expected:(Some dtype)\n            ~got:(get_dtype then_);\n          check_dtype_eq i ~ctx:\"Where branch else\" ~expected:(Some dtype)\n            ~got:(get_dtype else_)\n   
   | Binary { op = `Cmplt | `Cmpeq | `Cmpne; lhs; rhs; dtype } ->\n          if Dtype.Val.scalar dtype <> Dtype.Bool then\n            fail i \"comparison result must be bool\";\n          check_dtype_match i ~ctx:\"comparison operands\" (get_dtype lhs)\n            (get_dtype rhs)\n      | Binary { op = `Idiv | `Mod; dtype; _ } ->\n          if not (Dtype.Val.is_int dtype) then fail i \"Idiv/Mod must have int dtype\"\n      | Binary\n          { op =\n              ( `Add | `Sub | `Mul | `Fdiv | `Max | `Pow | `And | `Or | `Xor\n              | `Threefry );\n            lhs; rhs; dtype } ->\n          check_dtype_match i ~ctx:\"binary ALU lhs\" (Some dtype) (get_dtype lhs);\n          check_dtype_match i ~ctx:\"binary ALU rhs\" (Some dtype) (get_dtype rhs)\n      | Binary { op = `Shl | `Shr; lhs; rhs; dtype } ->\n          check_dtype_match i ~ctx:\"shift operand\" (Some dtype) (get_dtype lhs);\n          check_shift_rhs i rhs dtype\n      | Unary { src; dtype; _ } ->\n          check_dtype_match i ~ctx:\"unary ALU\" (Some dtype) (get_dtype src)\n      | Ternary { op = `Mulacc; a; b; c; dtype } ->\n          check_dtype_match i ~ctx:\"Mulacc a\" (Some dtype) (get_dtype a);\n          check_dtype_match i ~ctx:\"Mulacc b\" (Some dtype) (get_dtype b);\n          check_dtype_match i ~ctx:\"Mulacc c\" (Some dtype) (get_dtype c)\n      | Vectorize { srcs; dtype } ->\n          let n = List.length srcs in\n          if n <= 1 then fail i \"Vectorize must have more than one source\";\n          if n <> Dtype.Val.count dtype then\n            fail i (strf \"Vectorize has %d sources but dtype.count=%d\"\n                      n (Dtype.Val.count dtype));\n          List.iteri\n            (fun j src_ref ->\n              match get_dtype src_ref with\n              | Some src_dt ->\n                  if Dtype.Val.count src_dt <> 1 then\n                    fail i (strf \"Vectorize source %d must be scalar\" j);\n                  if Dtype.Val.scalar src_dt <> Dtype.Val.scalar 
dtype then\n                    fail i (strf \"Vectorize source %d has wrong scalar type\" j)\n              | None ->\n                  fail i (strf \"Vectorize source %d dtype not available\" j))\n            srcs\n      | Gep { src; idxs; dtype } -> (\n          if idxs = [] then fail i \"Gep must have at least one index\";\n          match get_dtype src with\n          | Some src_dt ->\n              if Dtype.Val.count src_dt <= 1 then fail i \"Gep source must be a vector\";\n              List.iter (fun idx ->\n                if idx < 0 || idx >= Dtype.Val.count src_dt then\n                  fail i (strf \"Gep index %d out of bounds (vector has %d elements)\"\n                            idx (Dtype.Val.count src_dt))) idxs;\n              let n = List.length idxs in\n              if n = 1 then begin\n                if Dtype.Val.scalar dtype <> Dtype.Val.scalar src_dt || Dtype.Val.count dtype <> 1 then\n                  fail i \"Gep result must be scalar of source vector type\"\n              end else begin\n                if Dtype.Val.scalar dtype <> Dtype.Val.scalar src_dt || Dtype.Val.count dtype <> n then\n                  fail i \"Gep result must be vec(scalar, len(idxs))\"\n              end\n          | None -> fail i \"Gep source dtype not available\")\n      | Wmma { dims = n, m, k; dtype; dtype_out; _ } ->\n          if n <= 0 || m <= 0 || k <= 0 then fail i \"Wmma dims must be positive\";\n          if Dtype.Val.scalar dtype <> dtype_out then\n            fail i \"Wmma result dtype must match dtype_out\"\n      | Store { dst; value } ->\n          let idx_ref = require_index_ref i ~ctx:\"Store dst\" dst in\n          let ptr =\n            match program.(idx_ref) with\n            | Index { ptr; _ } -> ptr\n            | _ -> fail i \"Store dst requires Index\"\n          in\n          let ptr_dtype = ptr_dtype_of i ptr in\n          check_dtype_eq i ~ctx:\"Store value\"\n            ~expected:(Some (Dtype.Ptr.base ptr_dtype)) ~got:(get_dtype 
value)\n      | Const _ | Cast _ | Bitcast _ | Barrier | Custom _ | Custom_inline _ ->\n          ())\n    program;\n  if !range_stack <> [] then\n    failwith (strf \"Program.validate: %d unclosed Range(s) at end of program\"\n                (List.length !range_stack));\n  if !if_stack <> [] then\n    failwith (strf \"Program.validate: %d unclosed If(s) at end of program\"\n                (List.length !if_stack))\n\n(* Formatting *)\n\nlet pp_ref fmt r = Format.fprintf fmt \"%%%d\" r\n\nlet pp_refs fmt refs =\n  Format.pp_print_list ~pp_sep:(fun fmt () -> Format.fprintf fmt \", \")\n    pp_ref fmt refs\n\nlet pp_ptr fmt (dtype : Dtype.Ptr.t) =\n  Format.fprintf fmt \"%s\" (Dtype.Ptr.to_string dtype)\n\nlet pp_opt_ref label fmt = function\n  | None -> ()\n  | Some r -> Format.fprintf fmt \" %s=%%%d\" label r\n\nlet pp_view fmt = function\n  | Param { idx; dtype } -> Format.fprintf fmt \"param %d : %a\" idx pp_ptr dtype\n  | Param_image { idx; dtype; width; height } ->\n      Format.fprintf fmt \"param_image %d : %a [%dx%d]\" idx pp_ptr dtype width\n        height\n  | Define_local { size; dtype } ->\n      Format.fprintf fmt \"define_local %a, size=%d\" pp_ptr dtype size\n  | Define_reg { size; dtype } ->\n      Format.fprintf fmt \"define_reg %a, size=%d\" pp_ptr dtype size\n  | Define_var { name; lo; hi; dtype } ->\n      Format.fprintf fmt \"define_var %s : %a [%d..%d]\" name Dtype.Val.pp dtype lo hi\n  | Const { value; dtype } ->\n      Format.fprintf fmt \"const %a : %a\" Const.pp value Dtype.Val.pp dtype\n  | Index { ptr; idxs; gate; dtype } ->\n      Format.fprintf fmt \"index %a, %a%a : %a\" pp_ref ptr pp_refs idxs\n        (pp_opt_ref \"gate\") gate pp_ptr dtype\n  | Load { src; alt; dtype } ->\n      Format.fprintf fmt \"load %a%a : %a\" pp_ref src\n        (pp_opt_ref \"alt\") alt Dtype.Val.pp dtype\n  | After { src; deps; dtype } ->\n      Format.fprintf fmt \"after %a, deps=[%a] : %a\" pp_ref src pp_refs deps\n        Dtype.Val.pp dtype\n  | Store { 
dst; value } ->\n      Format.fprintf fmt \"store %a, %a\" pp_ref dst pp_ref value\n  | Unary { op; src; dtype } ->\n      Format.fprintf fmt \"%a %a : %a\" Op.pp_unary op pp_ref src Dtype.Val.pp dtype\n  | Cast { src; dtype } ->\n      Format.fprintf fmt \"cast %a : %a\" pp_ref src Dtype.Val.pp dtype\n  | Bitcast { src; dtype } ->\n      Format.fprintf fmt \"bitcast %a : %a\" pp_ref src Dtype.Val.pp dtype\n  | Binary { op; lhs; rhs; dtype } ->\n      Format.fprintf fmt \"%a %a, %a : %a\" Op.pp_binary op pp_ref lhs pp_ref rhs\n        Dtype.Val.pp dtype\n  | Ternary { op; a; b; c; dtype } ->\n      Format.fprintf fmt \"%a %a, %a, %a : %a\" Op.pp_ternary op pp_ref a pp_ref b\n        pp_ref c Dtype.Val.pp dtype\n  | Vectorize { srcs; dtype } ->\n      Format.fprintf fmt \"vec %a : %a\" pp_refs srcs Dtype.Val.pp dtype\n  | Gep { src; idxs; dtype } ->\n      Format.fprintf fmt \"gep %a, [%a] : %a\" pp_ref src\n        (Format.pp_print_list ~pp_sep:(fun fmt () -> Format.fprintf fmt \";\")\n           Format.pp_print_int) idxs\n        Dtype.Val.pp dtype\n  | Range { size; dtype; axis; sub; kind } ->\n      Format.fprintf fmt \"range %a : %a [axis=%d, %a%a]\" pp_ref size Dtype.Val.pp\n        dtype axis Axis_kind.pp kind\n        (fun fmt sub ->\n          if sub <> [] then\n            Format.fprintf fmt \", sub=[%a]\"\n              (Format.pp_print_list\n                 ~pp_sep:(fun fmt () -> Format.fprintf fmt \";\")\n                 Format.pp_print_int)\n              sub)\n        sub\n  | End_range { dep; range } ->\n      Format.fprintf fmt \"end_range %a, dep=%a\" pp_ref range pp_ref dep\n  | If { cond; idx_for_dedup } ->\n      Format.fprintf fmt \"if %a, %a\" pp_ref cond pp_ref idx_for_dedup\n  | Endif { if_ } -> Format.fprintf fmt \"endif %a\" pp_ref if_\n  | Barrier -> Format.fprintf fmt \"barrier\"\n  | Special { dim; size; dtype } ->\n      Format.fprintf fmt \"special %a, %a : %a\" Special_dim.pp dim pp_ref size\n        Dtype.Val.pp dtype\n  | Wmma { 
name; a; b; c; dtype; dims = n, m, k; dtype_in; dtype_out;\n           device; threads; _ } ->\n      Format.fprintf fmt\n        \"wmma.%s %a, %a, %a : %a [%dx%dx%d, %a -> %a, %s, threads=%d]\" name\n        pp_ref a pp_ref b pp_ref c Dtype.Val.pp dtype n m k Dtype.pp_scalar dtype_in\n        Dtype.pp_scalar dtype_out device threads\n  | Custom { fmt = f; args } ->\n      Format.fprintf fmt \"custom \\\"%s\\\" %a\" f pp_refs args\n  | Custom_inline { fmt = f; args; dtype } ->\n      Format.fprintf fmt \"custom_inline \\\"%s\\\" %a : %a\" f pp_refs args Dtype.Val.pp\n        dtype\n\nlet pp fmt t =\n  Array.iteri (fun i instr -> Format.fprintf fmt \"%3d: %a@\\n\" i pp_view instr) t\n\n(* Building *)\n\ntype builder = { mutable data : view array; mutable len : int }\n\nlet create () = { data = Array.make 32 Barrier; len = 0 }\n\nlet ensure builder =\n  if builder.len = Array.length builder.data then begin\n    let next = Array.make (max 1 (builder.len * 2)) Barrier in\n    Array.blit builder.data 0 next 0 builder.len;\n    builder.data <- next\n  end\n\nlet emit builder instr =\n  ensure builder;\n  let id = builder.len in\n  builder.data.(id) <- instr;\n  builder.len <- id + 1;\n  id\n\nlet finish builder = Array.sub builder.data 0 builder.len\nlet view (program : t) id = program.(id)\n\n(* Rewrite a program's instruction array in one forward pass. [f] is called\n   on each instruction and may return [Some id] to replace it (e.g. for\n   deduplication or custom lowering), or [None] to keep the default behavior\n   of remapping children and emitting into the new array. A [remap] array\n   translates old instruction ids to new ones so that later instructions\n   referencing earlier ones stay consistent after rewriting. 
*)\n\nlet rebuild f program =\n  let n = Array.length program in\n  let remap = Array.make n (-1) in\n  let b = create () in\n  let emit_view instr = emit b instr in\n  let map_ref r = remap.(r) in\n  Array.iteri\n    (fun i instr ->\n      match f ~emit:emit_view ~map_ref instr with\n      | Some idx -> remap.(i) <- idx\n      | None -> remap.(i) <- emit_view (map_children map_ref instr))\n    program;\n  finish b\n\n(* Inspecting *)\n\nlet length program = Array.length program\n\nlet dtype program id =\n  if id < 0 || id >= Array.length program then None else dtype_of program.(id)\n\nlet children program id = refs_of program.(id)\nlet iteri f program = Array.iteri f program\nlet is_alu = function Unary _ | Binary _ | Ternary _ -> true | _ -> false\n\nlet is_ptr program id =\n  match program.(id) with\n  | Param _ | Param_image _ | Define_local _ | Index _ -> true\n  | _ -> false\nlet dtype_of_view = dtype_of\n\nlet index_gate program id =\n  let rec walk id =\n    match program.(id) with\n    | Index { gate; _ } -> gate\n    | Cast { src; _ } | Bitcast { src; _ } | After { src; _ } -> walk src\n    | _ -> None\n  in\n  walk id\n\nlet map_alu ~map_ref ~dtype = function\n  | Unary { op; src; _ } -> Unary { op; src = map_ref src; dtype }\n  | Binary { op; lhs; rhs; _ } ->\n      Binary { op; lhs = map_ref lhs; rhs = map_ref rhs; dtype }\n  | Ternary { op; a; b; c; _ } ->\n      Ternary { op; a = map_ref a; b = map_ref b; c = map_ref c; dtype }\n  | _ -> invalid_arg \"Program.map_alu expects an ALU view\"\n\nlet sort program id =\n  match program.(id) with\n  | Param _ | Param_image _ | Define_local _ | Define_reg _ | Index _ -> Pointer\n  | Define_var _ | Range _ | Special _ -> Index\n  | Store _ | End_range _ | If _ | Endif _ | Barrier | Custom _ -> Effect\n  | After { dtype; _ } when Dtype.Val.equal dtype Dtype.Val.void -> Effect\n  | Const _ | Load _ | After _ | Unary _ | Binary _ | Ternary _ | Cast _\n  | Bitcast _ | Vectorize _ | Gep _ | Wmma _ | Custom_inline 
_ ->\n      Value\n"
  },
  {
    "path": "packages/tolk/lib/ir/program.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\n(** Render-ready SSA IR.\n\n    [Program] is the linear backend stage of [ir_next]. Values are array-backed\n    sequences of instructions with stable ids and backward-only references.\n\n    - build programs with {!create}, {!emit}, and {!finish};\n    - inspect with {!view}, {!dtype}, {!sort}, {!children};\n    - validate with {!validate};\n    - rewrite with {!map_children}, {!map_alu}, and {!rebuild}. *)\n\n(** {1:types Types} *)\n\ntype t\n(** A linear SSA program. *)\n\ntype id = int\n(** Instruction id. An index into the program array. *)\n\ntype builder\n(** Mutable program builder. *)\n\ntype sort = Value | Pointer | Index | Effect\n(** Coarse instruction role. *)\n\ntype view =\n  | Param of { idx : int; dtype : Dtype.Ptr.t }\n      (** Global buffer parameter at index [idx]. *)\n  | Param_image of { idx : int; dtype : Dtype.Ptr.t; width : int; height : int }\n      (** Image buffer parameter with pixel dimensions. *)\n  | Define_local of { size : int; dtype : Dtype.Ptr.t }\n      (** Local (workgroup-shared) memory buffer of [size] elements. *)\n  | Define_reg of { size : int; dtype : Dtype.Ptr.t }\n      (** Register-backed buffer of [size] elements. *)\n  | Define_var of { name : string; lo : int; hi : int; dtype : Dtype.Val.t }\n      (** Scalar loop or index variable bounded by \\[[lo];[hi]\\]. *)\n  | Const of { value : Const.t; dtype : Dtype.Val.t }\n      (** Compile-time constant. *)\n  | Index of { ptr : id; idxs : id list; gate : id option; dtype : Dtype.Ptr.t }\n      (** Indexes into [ptr] with per-dimension [idxs] and optional [gate]. 
*)\n  | Load of { src : id; alt : id option; dtype : Dtype.Val.t }\n      (** Loads from pointer [src]. [alt] is used when gated. *)\n  | After of { src : id; deps : id list; dtype : Dtype.Val.t }\n      (** Sequences [src] after [deps]. *)\n  | Store of { dst : id; value : id }\n      (** Stores [value] through pointer [dst]. *)\n  | Unary of { op : Op.unary; src : id; dtype : Dtype.Val.t }\n      (** Unary arithmetic or transcendental. *)\n  | Binary of { op : Op.binary; lhs : id; rhs : id; dtype : Dtype.Val.t }\n      (** Binary arithmetic, logic, or comparison. *)\n  | Ternary of { op : Op.ternary; a : id; b : id; c : id; dtype : Dtype.Val.t }\n      (** Ternary operation ([Where] or [Mulacc]). *)\n  | Cast of { src : id; dtype : Dtype.Val.t }\n      (** Type cast. *)\n  | Bitcast of { src : id; dtype : Dtype.Val.t }\n      (** Bit-preserving reinterpretation. *)\n  | Vectorize of { srcs : id list; dtype : Dtype.Val.t }\n      (** Packs scalar [srcs] into a vector. *)\n  | Gep of { src : id; idxs : int list; dtype : Dtype.Val.t }\n      (** Extracts elements at [idxs] from a vector. When [idxs] has one\n          element, the result is scalar. When [idxs] has multiple elements,\n          the result is a vector of the extracted elements. *)\n  | Range of { size : id; dtype : Dtype.Val.t; axis : int; sub : int list; kind : Axis_kind.t }\n      (** Loop variable over \\[[0];[size-1]\\] on [axis]. *)\n  | End_range of { dep : id; range : id }\n      (** Closes the loop opened by [range]. [dep] is the last value produced\n          inside the loop body, ensuring the body completes before the loop\n          closes. *)\n  | If of { cond : id; idx_for_dedup : id }\n      (** Conditional branch on [cond]. *)\n  | Endif of { if_ : id }\n      (** Closes the conditional opened by [if_]. *)\n  | Barrier  (** Workgroup barrier. *)\n  | Special of { dim : Special_dim.t; size : id; dtype : Dtype.Val.t }\n      (** Backend-provided hardware index. 
*)\n  | Wmma of {\n      name : string;\n      a : id;\n      b : id;\n      c : id;\n      dtype : Dtype.Val.t;\n      dims : int * int * int;\n      dtype_in : Dtype.scalar;\n      dtype_out : Dtype.scalar;\n      device : string;\n      threads : int;\n      upcast_axes : (int * int) list * (int * int) list * (int * int) list;\n      reduce_axes : int list;\n    }  (** Tensor-core matrix multiply-accumulate primitive. *)\n  | Custom of { fmt : string; args : id list }\n      (** Backend-specific effect or statement. *)\n  | Custom_inline of { fmt : string; args : id list; dtype : Dtype.Val.t }\n      (** Backend-specific inline value expression. *)\n(** Read-only instruction view. Pattern-match via {!view}. *)\n\n(** {1:building Building} *)\n\nval create : unit -> builder\n(** [create ()] is an empty program builder. *)\n\nval emit : builder -> view -> id\n(** [emit b v] appends [v] to [b] and returns its id. *)\n\nval finish : builder -> t\n(** [finish b] is the program built so far. *)\n\n(** {1:inspection Inspecting} *)\n\nval view : t -> id -> view\n(** [view t id] is the instruction at [id]. *)\n\nval length : t -> int\n(** [length t] is the number of instructions in [t]. *)\n\nval dtype : t -> id -> Dtype.Val.t option\n(** [dtype t id] is the value dtype of [id], if any. Effect instructions\n    return [None]. *)\n\nval sort : t -> id -> sort\n(** [sort t id] is the coarse role of [id]. *)\n\nval children : t -> id -> id list\n(** [children t id] are the direct input ids of instruction [id]. *)\n\nval iteri : (id -> view -> unit) -> t -> unit\n(** [iteri f t] calls [f id view] for each instruction in program order. *)\n\n(** {1:predicates Predicates} *)\n\nval is_alu : view -> bool\n(** [is_alu v] is [true] iff [v] is {!Unary}, {!Binary}, or {!Ternary}. *)\n\nval is_ptr : t -> id -> bool\n(** [is_ptr t id] is [true] iff instruction [id] produces a pointer\n    ({!Param}, {!Param_image}, {!Define_local}, or {!Index}). 
*)\n\nval dtype_of_view : view -> Dtype.Val.t option\n(** [dtype_of_view v] is the result dtype of [v], if any. Effect views\n    return [None]. For pointer views, returns the base type. *)\n\nval index_gate : t -> id -> id option\n(** [index_gate t id] walks through {!Cast}, {!Bitcast}, and {!After} to\n    find the underlying {!Index} gate, if any. *)\n\n(** {1:validation Validation} *)\n\nval validate : t -> unit\n(** [validate t] checks program invariants.\n\n    Raises [Failure] on the first violation. *)\n\n(** {1:rewriting Rewriting} *)\n\nval map_children : (id -> id) -> view -> view\n(** [map_children f v] rebuilds [v] with [f] applied to every child ref.\n    Non-reference fields (dtype, constants, options) are preserved. *)\n\nval map_alu : map_ref:(id -> id) -> dtype:Dtype.Val.t -> view -> view\n(** [map_alu ~map_ref ~dtype v] remaps child refs and replaces the dtype\n    of an ALU view.\n\n    Raises [Invalid_argument] if [v] is not {!Unary}, {!Binary}, or\n    {!Ternary}. *)\n\nval rebuild :\n  (emit:(view -> id) -> map_ref:(id -> id) -> view -> id option) -> t -> t\n(** [rebuild f t] constructs a new program by iterating forward through [t].\n\n    For each instruction, [f ~emit ~map_ref view] may emit replacement\n    instructions via [emit] and return the new id, or return [None] to keep\n    the instruction with refs automatically remapped via [map_ref]. *)\n\n(** {1:formatting Formatting} *)\n\nval pp_view : Format.formatter -> view -> unit\n(** [pp_view] formats one instruction view. *)\n\nval pp : Format.formatter -> t -> unit\n(** [pp] formats a whole program. *)\n"
  },
  {
    "path": "packages/tolk/lib/ir/shape.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\ntype dim = Static of int | Symbol of { name : string; lo : int; hi : int }\ntype t = dim list\n\nlet scalar = []\nlet of_dims ns = List.map (fun n -> Static n) ns\nlet of_dim_list ds = ds\nlet dims s = s\nlet rank s = List.length s\n\nlet static_dims s =\n  let rec loop acc = function\n    | [] -> Some (List.rev acc)\n    | Static n :: rest -> loop (n :: acc) rest\n    | Symbol _ :: _ -> None\n  in\n  loop [] s\n\nlet pp_dim ppf = function\n  | Static n -> Format.pp_print_int ppf n\n  | Symbol { name; lo; hi } -> Format.fprintf ppf \"%s[%d..%d]\" name lo hi\n\nlet pp_sep ppf () = Format.pp_print_string ppf \", \"\nlet pp ppf s = Format.fprintf ppf \"[%a]\" (Format.pp_print_list ~pp_sep pp_dim) s\n"
  },
  {
    "path": "packages/tolk/lib/ir/shape.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\n(** Tensor shapes.\n\n    A shape is an ordered sequence of dimensions. Each dimension is either a\n    static integer or a bounded symbolic variable. *)\n\n(** {1:types Types} *)\n\n(** The type for individual dimensions. *)\ntype dim =\n  | Static of int  (** A concrete dimension size. *)\n  | Symbol of { name : string; lo : int; hi : int }\n      (** A symbolic dimension bounded by \\[[lo];[hi]\\]. *)\n\ntype t\n(** The type for shapes. *)\n\n(** {1:constructors Constructors} *)\n\nval scalar : t\n(** [scalar] is the empty shape (rank 0). *)\n\nval of_dims : int list -> t\n(** [of_dims ns] is a shape where every dimension is {!Static}. *)\n\nval of_dim_list : dim list -> t\n(** [of_dim_list ds] is a shape from an explicit list of dimensions. *)\n\n(** {1:access Accessors} *)\n\nval dims : t -> dim list\n(** [dims s] is the dimension list of [s]. *)\n\nval rank : t -> int\n(** [rank s] is [List.length (dims s)]. *)\n\nval static_dims : t -> int list option\n(** [static_dims s] is [Some ns] if every dimension of [s] is {!Static}, where\n    [ns] are the sizes. Returns [None] if any dimension is symbolic. *)\n\n(** {1:fmt Formatting} *)\n\nval pp_dim : Format.formatter -> dim -> unit\n(** [pp_dim] formats a dimension. Static dimensions are formatted as integers;\n    symbolic dimensions as [name[lo..hi]]. *)\n\nval pp : Format.formatter -> t -> unit\n(** [pp] formats a shape as a bracketed, comma-separated dimension list (e.g.\n    [[3, 4, 5]]). *)\n"
  },
  {
    "path": "packages/tolk/lib/ir/special_dim.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\ntype t = Group_id of int | Local_id of int | Global_idx of int\n\nlet axis = function Group_id a | Local_id a | Global_idx a -> a\nlet equal = ( = )\nlet compare = Stdlib.compare\n\nlet pp fmt t =\n  let s = match t with Group_id _ -> \"gid\" | Local_id _ -> \"lid\" | Global_idx _ -> \"idx\" in\n  Format.fprintf fmt \"%s%d\" s (axis t)\n"
  },
  {
    "path": "packages/tolk/lib/ir/special_dim.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\n(** Backend-provided special dimensions.\n\n    A special dimension identifies a hardware execution index supplied by the\n    compute backend (e.g. OpenCL's [get_group_id], [get_local_id], or a fused\n    global index). Each variant carries the axis number it refers to. *)\n\n(** {1:types Types} *)\n\n(** The type for special dimensions. *)\ntype t =\n  | Group_id of int  (** Workgroup id on the given axis. *)\n  | Local_id of int  (** Thread-local id within a workgroup. *)\n  | Global_idx of int  (** Global thread index on the given axis. *)\n\n(** {1:access Accessors} *)\n\nval axis : t -> int\n(** [axis d] is the axis number carried by [d]. *)\n\n(** {1:predicates Predicates and comparisons} *)\n\nval equal : t -> t -> bool\n(** [equal a b] is [true] iff [a] and [b] denote the same special dimension and\n    axis. *)\n\nval compare : t -> t -> int\n(** [compare a b] totally orders special dimensions. *)\n\n(** {1:fmt Formatting} *)\n\nval pp : Format.formatter -> t -> unit\n(** [pp] formats a special dimension as a compact string (e.g. [gid0], [lid1],\n    [idx2]). *)\n"
  },
  {
    "path": "packages/tolk/lib/ir/symbolic.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\n(* Symbolic simplification rules for Kernel IR.\n\n   Three phases:\n   - symbolic_simple (phase 1): generic folding\n   - symbolic (phase 2): deeper algebraic rules + divandmod\n   - sym (phase 3): full symbolic + GEP/vectorize + decompositions *)\n\nmodule K = Kernel\n\n(* Helpers *)\n\nlet is_const_int n node =\n  match K.view node with\n  | Const { value; _ } ->\n      (match Const.view value with Int v -> v = Int64.of_int n | _ -> false)\n  | _ -> false\n\nlet is_const_float f node =\n  match K.view node with\n  | Const { value; _ } ->\n      (match Const.view value with Float v -> v = f | _ -> false)\n  | _ -> false\n\nlet const_int_val node =\n  match K.view node with\n  | Const { value; _ } ->\n      (match Const.view value with Int v -> Some v | _ -> None)\n  | _ -> None\n\nlet is_const_bool b node =\n  match K.view node with\n  | Const { value; _ } ->\n      (match Const.view value with Bool v -> v = b | _ -> false)\n  | _ -> false\n\nlet is_index_dtype dt =\n  Dtype.Val.is_int dt && Dtype.Val.equal (Dtype.Val.scalarize dt) Dtype.Val.index\n\n(* GEP pushing *)\n\nlet gep_vectorize node =\n  match K.view node with\n  | Gep { src; idxs; _ } ->\n      (match K.view src with\n      | Vectorize { srcs; _ } ->\n          let n = List.length srcs in\n          if List.for_all (fun i -> i >= 0 && i < n) idxs then\n            match idxs with\n            | [idx] -> Some (List.nth srcs idx)\n            | _ ->\n                let extracted = List.map (fun i -> List.nth srcs i) idxs in\n                Some (K.vectorize ~srcs:extracted)\n          else None\n      | _ -> None)\n  | _ -> None\n\nlet gep_const node =\n  match 
K.view node with\n  | Gep { src; idxs; _ } ->\n      (match K.view src with\n      | Const { value; _ } ->\n          (match idxs with\n          | [_] -> Some (K.const value)\n          | _ ->\n              let c = K.const value in\n              Some (K.vectorize ~srcs:(List.init (List.length idxs) (fun _ -> c))))\n      | _ -> None)\n  | _ -> None\n\nlet gep_vconst node =\n  match K.view node with\n  | Gep { src; idxs; _ } ->\n      (match K.view src with\n      | Vconst { values; dtype } ->\n          let n = List.length values in\n          if List.for_all (fun i -> i >= 0 && i < n) idxs then\n            match idxs with\n            | [idx] -> Some (K.const (List.nth values idx))\n            | _ ->\n                Some\n                  (K.vconst\n                     ~values:(List.map (List.nth values) idxs)\n                     ~dtype:(Dtype.Val.vec (List.length idxs) (Dtype.Val.scalarize dtype)))\n          else None\n      | _ -> None)\n  | _ -> None\n\nlet gep_void node =\n  match K.view node with\n  | Gep { src; dtype; _ } when Dtype.Val.equal dtype Dtype.Val.void -> Some src\n  | _ -> None\n\nlet vcat_to_vectorize node =\n  match K.view node with\n  | Vcat { srcs; _ } ->\n      let expanded =\n        List.concat_map\n          (fun s ->\n            let count = Dtype.count (K.dtype s) in\n            List.init count (fun i -> K.gep ~src:s ~idx:i))\n          srcs\n      in\n      Some (K.vectorize ~srcs:expanded)\n  | _ -> None\n\nlet vectorize_in_order_gep node =\n  match K.view node with\n  | Vectorize { srcs = (first :: _ as srcs); _ } ->\n      (match K.view first with\n      | Gep { src = base; idxs = [0]; _ } ->\n          let count = List.length srcs in\n          if count = Dtype.count (K.dtype base) then\n            let rec check i = function\n              | [] -> true\n              | s :: rest ->\n                  (match K.view s with\n                  | Gep { src; idxs = [idx]; _ } ->\n                      src == base && idx = i 
&& check (i + 1) rest\n                  | _ -> false)\n            in\n            if check 0 srcs then Some base else None\n          else None\n      | _ -> None)\n  | _ -> None\n\nlet gep_through_alu node =\n  match K.view node with\n  | Gep { src; idxs = [idx]; dtype } when Dtype.Val.equal (Dtype.Val.scalarize dtype) Dtype.Val.index\n    ->\n      (match K.view src with\n      | Binary { op; lhs; rhs; dtype = alu_dt } when Dtype.Val.is_int alu_dt ->\n          Some\n            (K.binary ~op\n               ~lhs:(K.gep ~src:lhs ~idx)\n               ~rhs:(K.gep ~src:rhs ~idx))\n      | Unary { op; src = inner; dtype = alu_dt } when Dtype.Val.is_int alu_dt ->\n          Some (K.unary ~op ~src:(K.gep ~src:inner ~idx))\n      | Cast { src = inner; _ } ->\n          Some (K.cast ~src:(K.gep ~src:inner ~idx) ~dtype:(Dtype.Val dtype))\n      | _ -> None)\n  | _ -> None\n\n(* GEP with identity permutation on the full vector → remove the GEP.\n   Skips ptr-typed sources (Ptrcat, Cast with ptr dtype) — those need\n   to stay so gep_after_load can update the Load dtype. 
*)\nlet gep_identity node =\n  match K.view node with\n  | Gep { src; idxs; _ } when not (K.is_ptr src) ->\n      let src_count = match K.dtype_opt src with\n        | Some dt -> Dtype.count dt | None -> -1\n      in\n      if src_count > 0 && idxs = List.init src_count Fun.id then Some src\n      else None\n  | _ -> None\n\nlet gep_pushing =\n  K.first_match\n    [ gep_vectorize; gep_const; gep_vconst; gep_void; gep_identity;\n      vcat_to_vectorize; vectorize_in_order_gep; gep_through_alu ]\n\n(* ALU/Vectorize reordering *)\n\nlet alu_through_broadcast_vectorize node =\n  let is_broadcast srcs =\n    match srcs with\n    | x :: _ -> List.for_all (fun s -> s == x) srcs\n    | [] -> false\n  in\n  match K.view node with\n  | Binary { op; lhs; rhs; _ } ->\n      (match K.view lhs, K.view rhs with\n      | Vectorize { srcs = lsrcs; dtype = ldt },\n        Vectorize { srcs = rsrcs; dtype = rdt }\n        when Dtype.count ldt = Dtype.count rdt\n             && Dtype.count ldt > 0\n             && is_broadcast lsrcs && is_broadcast rsrcs ->\n          let scalar = K.binary ~op ~lhs:(List.hd lsrcs) ~rhs:(List.hd rsrcs) in\n          Some (K.vectorize ~srcs:(List.init (Dtype.count ldt) (fun _ -> scalar)))\n      | _ -> None)\n  | Unary { op; src; _ } ->\n      (match K.view src with\n      | Vectorize { srcs; dtype } when Dtype.count dtype > 0 && is_broadcast srcs ->\n          let scalar = K.unary ~op ~src:(List.hd srcs) in\n          Some (K.vectorize ~srcs:(List.init (Dtype.count dtype) (fun _ -> scalar)))\n      | _ -> None)\n  | _ -> None\n\n(* Constant folding *)\n\nlet exec_unary op v =\n  match op with\n  | `Neg -> -.v\n  | `Exp2 -> Float.pow 2.0 v\n  | `Log2 -> Float.log v /. Float.log 2.0\n  | `Sin -> Float.sin v\n  | `Sqrt -> Float.sqrt v\n  | `Recip -> 1.0 /. v\n  | `Trunc -> if v >= 0.0 then Float.floor v else Float.ceil v\n\nlet exec_binary_float op l r =\n  match op with\n  | `Add -> Some (l +. r)\n  | `Sub -> Some (l -. r)\n  | `Mul -> Some (l *. 
r)\n  | `Fdiv -> Some (l /. r)\n  | `Max -> Some (Float.max l r)\n  | `Pow -> Some (Float.pow l r)\n  | _ -> None\n\nlet exec_binary_int op (l : int64) (r : int64) =\n  match op with\n  | `Add -> Some (Int64.add l r)\n  | `Sub -> Some (Int64.sub l r)\n  | `Mul -> Some (Int64.mul l r)\n  | `Idiv -> if r = 0L then None else Some (Int64.div l r)\n  | `Mod -> if r = 0L then None else Some (Int64.rem l r)\n  | `Max -> Some (if Int64.compare l r >= 0 then l else r)\n  | `Shl -> Some (Int64.shift_left l (Int64.to_int r))\n  | `Shr -> Some (Int64.shift_right l (Int64.to_int r))\n  | `And -> Some (Int64.logand l r)\n  | `Or -> Some (Int64.logor l r)\n  | `Xor -> Some (Int64.logxor l r)\n  | `Cmplt -> Some (if Int64.compare l r < 0 then 1L else 0L)\n  | `Cmpeq -> Some (if l = r then 1L else 0L)\n  | `Cmpne -> Some (if l <> r then 1L else 0L)\n  | _ -> None\n\nlet const_fold node =\n  match K.view node with\n  | Unary { op; src; _ } ->\n      (match K.view src with\n      | Const { value; dtype } ->\n          (match Const.view value with\n          | Float f -> Some (K.const (Const.float dtype (exec_unary op f)))\n          | _ -> None)\n      | _ -> None)\n  | Binary { op; lhs; rhs; _ } ->\n      (match K.view lhs, K.view rhs with\n      | Const { value = lv; dtype = ld }, Const { value = rv; _ } ->\n          (match Const.view lv, Const.view rv with\n          | Float lf, Float rf ->\n              (match op with\n              | `Cmplt -> Some (K.const (Const.bool (lf < rf)))\n              | `Cmpeq -> Some (K.const (Const.bool (lf = rf)))\n              | `Cmpne -> Some (K.const (Const.bool (lf <> rf)))\n              | _ ->\n                  Option.map (fun r -> K.const (Const.float ld r))\n                    (exec_binary_float op lf rf))\n          | Int li, Int ri ->\n              (match op with\n              | `Cmplt -> Some (K.const (Const.bool (Int64.compare li ri < 0)))\n              | `Cmpeq -> Some (K.const (Const.bool (li = ri)))\n              | `Cmpne -> 
Some (K.const (Const.bool (li <> ri)))\n              | _ ->\n                  Option.map (fun r -> K.const (Const.int64 ld r))\n                    (exec_binary_int op li ri))\n          | Bool lb, Bool rb ->\n              (match op with\n              | `And -> Some (K.const (Const.bool (lb && rb)))\n              | `Or -> Some (K.const (Const.bool (lb || rb)))\n              | `Xor | `Cmpne -> Some (K.const (Const.bool (lb <> rb)))\n              | `Cmpeq -> Some (K.const (Const.bool (lb = rb)))\n              | _ -> None)\n          | _ -> None)\n      | _ -> None)\n  | _ -> None\n\n(* Phase 1: self-folding and identity rules (symbolic_simple) *)\n\nlet self_fold node =\n  match K.view node with\n  | Binary { op = `Idiv; lhs; rhs; dtype } when lhs == rhs ->\n      Some (K.const (Const.int64 dtype 1L))\n  | Binary { op = `Idiv; lhs; rhs; _ } when is_const_int (-1) rhs ->\n      Some (K.unary ~op:`Neg ~src:lhs)\n  | Binary { op = `Mod; lhs; rhs; dtype } when lhs == rhs ->\n      Some (K.const (Const.int64 dtype 0L))\n  | Binary { op = `Mod; lhs; rhs; _ } ->\n      (match K.view lhs with\n      | Binary { op = `Mod; rhs = inner_rhs; _ } when inner_rhs == rhs ->\n          Some lhs\n      | _ -> None)\n  | Binary { op = `Cmplt; lhs; rhs; _ } when lhs == rhs ->\n      Some (K.const_bool false)\n  | Binary { op = `Xor; lhs; rhs; dtype } when lhs == rhs ->\n      Some (K.const (Const.int64 dtype 0L))\n  | Binary { op = `Cmpne; lhs; rhs; dtype }\n    when lhs == rhs && Dtype.Val.is_int dtype ->\n      Some (K.const_bool false)\n  | _ -> None\n\nlet identity_fold node =\n  match K.view node with\n  | Binary { op = `Add; lhs; rhs; _ } ->\n      if is_const_int 0 rhs || is_const_float 0.0 rhs then Some lhs\n      else if is_const_int 0 lhs || is_const_float 0.0 lhs then Some rhs\n      else None\n  | Binary { op = `Sub; lhs; rhs; _ } ->\n      if is_const_int 0 rhs || is_const_float 0.0 rhs then Some lhs else None\n  | Binary { op = `Mul; lhs; rhs; dtype } ->\n      if 
is_const_int 1 rhs || is_const_float 1.0 rhs then Some lhs\n      else if is_const_int 1 lhs || is_const_float 1.0 lhs then Some rhs\n      else if is_const_int 0 rhs || is_const_int 0 lhs then\n        Some (K.const (Const.int64 dtype 0L))\n      else None\n  | Binary { op = (`Idiv | `Fdiv); lhs; rhs; _ } ->\n      if is_const_int 1 rhs || is_const_float 1.0 rhs then Some lhs else None\n  | Binary { op = `Or; lhs; rhs; _ } ->\n      if is_const_int 0 rhs then Some lhs\n      else if is_const_int 0 lhs then Some rhs\n      else None\n  | Binary { op = `And; lhs; rhs; _ } ->\n      if is_const_int 0 rhs then Some rhs\n      else if is_const_int 0 lhs then Some lhs\n      else None\n  | Binary { op = `Xor; lhs; rhs; _ } ->\n      if is_const_int 0 rhs then Some lhs\n      else if is_const_int 0 lhs then Some rhs\n      else None\n  | Cast { src; dtype } ->\n      (* Only fold identity cast for value-typed nodes.  Ptr-typed casts\n         (e.g. Cast(Index, Ptr pty)) must remain for the devectorizer's\n         extract_cast_index pattern.  Use the value projection so that a\n         ptr-typed source never matches a Ptr-typed Cast target. 
*)\n      let src_dt =\n        Option.map (fun d -> Dtype.Val (Dtype.val_of d)) (K.dtype_opt src)\n      in\n      if src_dt = Some dtype then Some src else None\n  | _ -> None\n\nlet float_div_fold node =\n  match K.view node with\n  | Binary { op = `Fdiv; lhs; rhs; _ } when lhs == rhs ->\n      Some (K.const (Const.float (Dtype.val_of (K.dtype node)) 1.0))\n  | Binary { op = `Fdiv; lhs; rhs; _ } ->\n      (match K.view lhs with\n      | Binary { op = `Mul; lhs = x; rhs = y; _ } when y == rhs -> Some x\n      | Binary { op = `Mul; lhs = y; rhs = x; _ } when y == rhs -> Some x\n      | _ -> None)\n  | _ -> None\n\nlet where_fold node =\n  match K.view node with\n  | Ternary { op = `Where; b; c; _ } when b == c -> Some b\n  | Ternary { op = `Where; a; b; c; _ } ->\n      (match K.view a with\n      | Const { value; _ } ->\n          (match Const.view value with\n          | Bool true -> Some b\n          | Bool false -> Some c\n          | Int v -> if v <> 0L then Some b else Some c\n          | _ -> None)\n      | _ -> None)\n  | _ -> None\n\nlet idempotent_fold node =\n  match K.view node with\n  | Binary { op = (`And | `Or | `Max); lhs; rhs; _ } when lhs == rhs ->\n      Some lhs\n  | _ -> None\n\nlet trunc_int node =\n  match K.view node with\n  | Unary { op = `Trunc; src; _ } when Dtype.is_int (K.dtype src)\n    -> Some src\n  | _ -> None\n\nlet bool_arith_fold node =\n  match K.dtype_opt node with\n  | None -> None\n  | Some dt ->\n  if not (Dtype.is_bool dt) then None\n  else\n    match K.view node with\n    | Binary { op = `Mul; lhs; rhs; _ } ->\n        Some (K.binary ~op:`And ~lhs ~rhs)\n    | Binary { op = (`Add | `Max); lhs; rhs; _ } ->\n        Some (K.binary ~op:`Or ~lhs ~rhs)\n    | Binary { op = `And; lhs = x; rhs; _ } ->\n        if is_const_bool true rhs then Some x\n        else if is_const_bool false rhs then Some rhs\n        else None\n    | Binary { op = `Or; lhs = x; rhs; _ } ->\n        if is_const_bool true rhs then Some rhs\n        else 
if is_const_bool false rhs then Some x\n        else None\n    | _ -> None\n\nlet double_not_fold node =\n  match K.view node with\n  | Binary { op = `Cmpne; lhs; rhs; _ } when is_const_bool true rhs ->\n      (match K.view lhs with\n      | Binary { op = `Cmpne; lhs = z; rhs = inner_rhs; _ }\n        when is_const_bool true inner_rhs -> Some z\n      | _ -> None)\n  | _ -> None\n\nlet bool_where_identity node =\n  match K.view node with\n  | Ternary { op = `Where; a; b; c; _ }\n    when Dtype.is_bool (K.dtype a) ->\n      (match K.view b, K.view c with\n      | Const { value = bv; _ }, Const { value = cv; _ } ->\n          (match Const.view bv, Const.view cv with\n          | Bool true, Bool false -> Some a\n          | Bool false, Bool true ->\n              Some (K.binary ~op:`Cmpne ~lhs:a ~rhs:(K.const_bool true))\n          | _ -> None)\n      | _ -> None)\n  | _ -> None\n\nlet const_cast_fold node =\n  match K.view node with\n  | Cast { src; dtype = Dtype.Val dtype } ->\n      (match K.view src with\n      | Const { value; _ } ->\n          (match Const.view value with\n          | Int v ->\n              if Dtype.Val.is_int dtype then Some (K.const (Const.int64 dtype v))\n              else if Dtype.Val.is_float dtype then\n                Some (K.const (Const.float dtype (Int64.to_float v)))\n              else if Dtype.Val.is_bool dtype then\n                Some (K.const (Const.bool (v <> 0L)))\n              else None\n          | Float f ->\n              if Dtype.Val.is_float dtype then Some (K.const (Const.float dtype f))\n              else if Dtype.Val.is_int dtype then\n                Some (K.const (Const.int64 dtype (Int64.of_float f)))\n              else if Dtype.Val.is_bool dtype then\n                Some (K.const (Const.bool (f <> 0.0)))\n              else None\n          | Bool b ->\n              if Dtype.Val.is_int dtype then\n                Some (K.const (Const.int64 dtype (if b then 1L else 0L)))\n              else if 
Dtype.Val.is_float dtype then\n                Some (K.const (Const.float dtype (if b then 1.0 else 0.0)))\n              else None)\n      | _ -> None)\n  | _ -> None\n\nlet double_cast_fold node =\n  match K.view node with\n  | Cast { src; dtype = b_dt } ->\n      (match K.view src with\n      | Cast { src = x; dtype = a_dt } ->\n          let x_dt = K.dtype x in\n          let b = Dtype.val_of b_dt and a = Dtype.val_of a_dt in\n          if Dtype.equal x_dt (Dtype.Val b) && Dtype.Val.can_lossless_cast b a then\n            Some x\n          else None\n      | _ -> None)\n  | _ -> None\n\nlet cast_to_bool node =\n  match K.view node with\n  | Cast { src; dtype = Dtype.Val dt } when Dtype.Val.is_bool dt ->\n      Some (K.binary ~op:`Cmpne ~lhs:src ~rhs:(K.zero_like src))\n  | _ -> None\n\n(* Where(a, Where(b, c, d), d) -> Where(a && b, c, d). *)\nlet nested_where_fold node =\n  match K.view node with\n  | Ternary { op = `Where; a; b = inner; c = d_outer; _ } ->\n      (match K.view inner with\n      | Ternary { op = `Where; a = b_cond; b = c_val; c = d_inner; _ }\n        when d_inner == d_outer ->\n          Some\n            (K.ternary ~op:`Where\n               ~a:(K.binary ~op:`And ~lhs:a ~rhs:b_cond)\n               ~b:c_val ~c:d_outer)\n      | _ -> None)\n  | _ -> None\n\n(* Reconstitute div/mod pairs inside an index-typed sum:\n   (x % d) * m + (x / d) * (d * m) + rest  ->  x * m + rest. *)\nlet divmod_reconstitute node =\n  match K.view node with\n  | Binary { op = `Add; dtype; _ }\n    when is_index_dtype dtype ->\n      let terms = Divandmod.split_add node in\n      List.find_mapi\n        (fun i u ->\n          let base_div_mul =\n            match K.view u with\n            | Binary { op = `Mod; lhs = base; rhs = d_node; _ } ->\n                Option.map (fun d -> (base, d, 1L)) (const_int_val d_node)\n            | Binary { op = `Mul; lhs = mod_node; rhs = m_node; _ } ->\n                (match K.view mod_node with\n                | Binary { op = `Mod; lhs = base; rhs = d_node; _ } ->\n                    (match const_int_val d_node, const_int_val m_node with\n                    | Some d, Some m -> Some (base, d, m)\n         
           | _ -> None)\n                | _ -> None)\n            | _ -> None\n          in\n          match base_div_mul with\n          | None -> None\n          | Some (base, div, mul) ->\n              List.find_mapi\n                (fun j v ->\n                  if i = j then None\n                  else\n                    match K.view v with\n                    | Binary { op = `Mul; lhs = q; rhs = dm_node; _ } ->\n                        (match const_int_val dm_node with\n                        | Some dm when dm = Int64.mul div mul ->\n                            (match K.view q with\n                            | Binary { op = `Idiv; lhs = q_base; rhs = q_div; _ }\n                              ->\n                                (match const_int_val q_div with\n                                | Some qd when qd = div && q_base == base ->\n                                    let remaining =\n                                      List.filteri\n                                        (fun k _ -> k <> i && k <> j)\n                                        terms\n                                    in\n                                    let base_mul =\n                                      if mul = 1L then base\n                                      else\n                                        K.binary ~op:`Mul ~lhs:base\n                                          ~rhs:(K.const (Const.int64 Dtype.Val.index mul))\n                                    in\n                                    Some\n                                      (List.fold_left\n                                        (fun acc t -> K.binary ~op:`Add ~lhs:acc ~rhs:t)\n                                        base_mul remaining)\n                                | _ -> None)\n                            | _ -> None)\n                        | _ -> None)\n                    | _ -> None)\n                terms)\n        terms\n  | _ -> None\n\n(* Phase 2: deeper algebraic rules *)\n\nlet 
is_commutative = function\n  | `Add | `Mul | `And | `Or | `Xor -> true\n  | _ -> false\n\nlet commutative_flip node =\n  match K.view node with\n  | Binary { op; lhs; rhs; dtype }\n    when is_index_dtype dtype\n         && is_commutative op\n         && K.compare_structure rhs lhs < 0 ->\n      Some (K.binary ~op ~lhs:rhs ~rhs:lhs)\n  | _ -> None\n\nlet combine_terms node =\n  match K.view node with\n  | Binary { op = `Add; lhs; rhs; dtype } when lhs == rhs && Dtype.Val.is_int dtype ->\n      Some (K.binary ~op:`Mul ~lhs ~rhs:(K.const (Const.int64 dtype 2L)))\n  | Binary { op = `Add; lhs; rhs; _ } ->\n      let extract_mul_const n =\n        match K.view n with\n        | Binary { op = `Mul; lhs = x; rhs = c; _ } ->\n            Option.map (fun cv -> (x, cv)) (const_int_val c)\n        | _ -> None\n      in\n      let dt = Dtype.val_of (K.dtype node) in\n      (match extract_mul_const lhs, extract_mul_const rhs with\n      | Some (x1, c1), Some (x2, c2) when x1 == x2 ->\n          Some (K.binary ~op:`Mul ~lhs:x1\n                  ~rhs:(K.const (Const.int64 dt (Int64.add c1 c2))))\n      | None, Some (x2, c2) when lhs == x2 ->\n          Some (K.binary ~op:`Mul ~lhs\n                  ~rhs:(K.const (Const.int64 dt (Int64.add c2 1L))))\n      | Some (x1, c1), None when x1 == rhs ->\n          Some (K.binary ~op:`Mul ~lhs:x1\n                  ~rhs:(K.const (Const.int64 dt (Int64.add c1 1L))))\n      | _ -> None)\n  | _ -> None\n\nlet associative_fold node =\n  match K.view node with\n  | Binary { op; lhs; rhs = c2; _ } when is_commutative op ->\n      (match K.view lhs, const_int_val c2 with\n      | Binary { op = inner_op; lhs = x; rhs = c1; _ }, Some _\n        when inner_op = op ->\n          (match const_int_val c1 with\n          | Some _ ->\n              Some (K.binary ~op ~lhs:x ~rhs:(K.binary ~op ~lhs:c1 ~rhs:c2))\n          | None -> None)\n      | _ -> None)\n  | _ -> None\n\nlet const_to_end node =\n  let try_op op lhs rhs =\n    if not (K.is_const rhs) 
then\n      match K.view lhs with\n      | Binary { op = inner_op; lhs = x; rhs = c1; _ }\n        when inner_op = op && K.is_const c1 ->\n          Some\n            (K.binary ~op\n               ~lhs:(K.binary ~op ~lhs:x ~rhs)\n               ~rhs:c1)\n      | _ -> None\n    else None\n  in\n  match K.view node with\n  | Binary { op = (`Add as op); lhs; rhs; _ } -> try_op op lhs rhs\n  | Binary { op = (`Mul as op); lhs; rhs; _ } -> try_op op lhs rhs\n  | _ -> None\n\nlet nested_div_fold node =\n  match K.view node with\n  | Binary { op = `Idiv; lhs; rhs = c2; _ } ->\n      (match K.view lhs with\n      | Binary { op = `Idiv; lhs = x; rhs = c1; _ } ->\n          Some (K.binary ~op:`Idiv ~lhs:x\n                  ~rhs:(K.binary ~op:`Mul ~lhs:c1 ~rhs:c2))\n      | _ -> None)\n  | _ -> None\n\nlet range_self_divmod node =\n  match K.view node with\n  | Binary { op = `Mod; lhs; rhs; _ } ->\n      (match K.view lhs with\n      | Range { size; _ } when size == rhs -> Some lhs\n      | _ -> None)\n  | Binary { op = `Idiv; lhs; rhs; dtype } ->\n      (match K.view lhs with\n      | Range { size; _ } when size == rhs ->\n          Some (K.const (Const.int64 dtype 0L))\n      | _ -> None)\n  | _ -> None\n\nlet max_fold node =\n  match K.view node with\n  | Binary { op = `Max; lhs; rhs; _ } ->\n      if Divandmod.vmin lhs >= Divandmod.vmax rhs then Some lhs\n      else if Divandmod.vmax lhs <= Divandmod.vmin rhs then Some rhs\n      else None\n  | _ -> None\n\nlet range_collapse node =\n  match K.dtype_opt node with\n  | None -> None\n  | Some dt ->\n  let dtype = Dtype.val_of dt in\n  let is_cmp = match K.view node with\n    | Binary { op = `Cmplt | `Cmpne; _ } -> true | _ -> false in\n  if not (is_cmp || is_index_dtype dtype) then None\n  else match K.view node with\n  | Binary _ | Unary _ | Ternary _ | Define_var _ | Range _ | Special _ ->\n      let lo = Divandmod.vmin node and hi = Divandmod.vmax node in\n      if lo = hi && (is_cmp || (lo <> Int64.min_int && hi <> 
Int64.max_int))\n      then Some (K.const (if is_cmp then Const.bool (lo <> 0L)\n                          else Const.int64 dtype lo))\n      else None\n  | _ -> None\n\nlet lt_const_fold node =\n  match K.view node with\n  | Binary { op = `Cmplt; lhs; rhs; _ } ->\n      (* c0 + x < c1 or x + c0 < c1 -> x < c1 - c0 *)\n      (match K.view lhs with\n      | Binary { op = `Add; lhs = a; rhs = b; _ } ->\n          let c0, x =\n            if K.is_const a then Some a, b\n            else if K.is_const b then Some b, a\n            else None, a\n          in\n          (match c0 with\n          | Some c0 ->\n              Some (K.binary ~op:`Cmplt ~lhs:x\n                      ~rhs:(K.binary ~op:`Sub ~lhs:rhs ~rhs:c0))\n          | None -> None)\n      | _ -> None)\n  | _ -> None\n\nlet try_both_orderings lhs rhs f =\n  match f lhs rhs with\n  | Some _ as r -> r\n  | None -> f rhs lhs\n\nlet distribute_neg node =\n  match K.view node with\n  | Binary { op = `Mul; lhs; rhs; _ } ->\n      try_both_orderings lhs rhs (fun neg_one sum ->\n        if is_const_int (-1) neg_one then\n          match K.view sum with\n          | Binary { op = `Add; lhs = x; rhs = c; _ } when K.is_const c ->\n              Some\n                (K.binary ~op:`Add\n                   ~lhs:(K.unary ~op:`Neg ~src:x)\n                   ~rhs:(K.unary ~op:`Neg ~src:c))\n          | _ -> None\n        else None)\n  | _ -> None\n\nlet float_div_chain node =\n  match K.view node with\n  | Binary { op = `Fdiv; lhs; rhs = x3; _ } ->\n      (match K.view lhs with\n      | Binary { op = `Fdiv; lhs = x; rhs = x2; _ } when not (x2 == x3) ->\n          Some (K.binary ~op:`Fdiv ~lhs:x\n                  ~rhs:(K.binary ~op:`Mul ~lhs:x2 ~rhs:x3))\n      | _ -> None)\n  | _ -> None\n\nlet distribute_const_mul node =\n  match K.view node with\n  | Binary { op = `Mul; lhs; rhs; dtype }\n    when is_index_dtype dtype && K.is_const lhs ->\n      (match K.view rhs with\n      | Binary { op = `Add; lhs = x; rhs = c; _ } 
->\n          Some\n            (K.binary ~op:`Add\n               ~lhs:(K.binary ~op:`Mul ~lhs ~rhs:x)\n               ~rhs:(K.binary ~op:`Mul ~lhs ~rhs:c))\n      | _ -> None)\n  | _ -> None\n\nlet where_not_inversion node =\n  match K.view node with\n  | Ternary { op = `Where; a; b = t; c = f; _ } ->\n      (match K.view a with\n      | Binary { op = `Cmpne; lhs = cond; rhs; _ }\n        when is_const_bool true rhs ->\n          Some (K.ternary ~op:`Where ~a:cond ~b:f ~c:t)\n      | _ -> None)\n  | _ -> None\n\nlet lt_mul_fold node =\n  match K.view node with\n  | Binary { op = `Cmplt; lhs; rhs; _ } ->\n      let extract_cmul n =\n        match K.view n with\n        | Binary { op = `Mul; lhs = a; rhs = b; dtype } when is_index_dtype dtype ->\n            (match const_int_val a with\n            | Some cv -> Some (cv, b, dtype)\n            | None -> Option.map (fun cv -> (cv, a, dtype)) (const_int_val b))\n        | _ -> None\n      in\n      (match extract_cmul lhs with\n      | Some (c0, x, dtype) ->\n          (match const_int_val rhs with\n          | Some c1 when c0 > 0L && c1 > 0L ->\n              let ceil_div = Int64.div (Int64.add c1 (Int64.sub c0 1L)) c0 in\n              Some (K.binary ~op:`Cmplt ~lhs:x\n                      ~rhs:(K.const (Const.int64 dtype ceil_div)))\n          | Some c1 when c0 < 0L && c0 <> -1L && c1 <= 0L ->\n              (* c0*x < c1 with c0 < 0 flips to -x < -floor(c1/c0).  Both\n                 negated operands are non-negative, so truncating Int64.div\n                 computes that floor, and div_val is already the negated\n                 quotient: use it directly, without a second negation. *)\n              let div_val =\n                Int64.neg (Int64.div (Int64.neg c1) (Int64.neg c0))\n              in\n              Some (K.binary ~op:`Cmplt\n                      ~lhs:(K.unary ~op:`Neg ~src:x)\n                      ~rhs:(K.const (Const.int64 dtype div_val)))\n          | _ -> None)\n      | None -> None)\n  | _ -> None\n\nlet lt_div_fold node =\n  match K.view node with\n  | Binary { op = `Cmplt; lhs; rhs; _ } ->\n      (match K.view lhs with\n      | Binary { op = `Idiv; lhs = x; rhs = d; dtype }\n        when is_index_dtype dtype ->\n          (match const_int_val d, const_int_val rhs with\n 
         | Some dv, Some cv when dv > 0L ->\n              (* x / d < c under truncating division with d > 0:\n                 c > 0 gives x < c*d; c <= 0 gives x < c*d - (d - 1). *)\n              let bound =\n                if cv > 0L then Int64.mul cv dv\n                else Int64.sub (Int64.mul cv dv) (Int64.sub dv 1L)\n              in\n              Some (K.binary ~op:`Cmplt ~lhs:x\n                      ~rhs:(K.const (Const.int64 dtype bound)))\n          | _ -> None)\n      | _ -> None)\n  | _ -> None\n\nlet lt_sign_flip node =\n  match K.view node with\n  | Binary { op = `Cmplt; lhs; rhs; _ } ->\n      (match K.view lhs, K.view rhs with\n      | Binary { op = `Mul; lhs = x; rhs = lc; _ },\n        Binary { op = `Mul; lhs = y; rhs = rc; _ }\n        when is_const_int (-1) lc && is_const_int (-1) rc ->\n          Some (K.binary ~op:`Cmplt ~lhs:y ~rhs:x)\n      | _ -> None)\n  | _ -> None\n\nlet cast_chain_fold node =\n  match K.view node with\n  | Cast { src; dtype = b_dt } ->\n      (match K.view src with\n      | Cast { src = x; dtype = a_dt } ->\n          if Dtype.Val.can_lossless_cast (Dtype.val_of (K.dtype x)) (Dtype.val_of a_dt) then\n            Some (K.cast ~src:x ~dtype:b_dt)\n          else None\n      | _ -> None)\n  | _ -> None\n\nlet is_side_effecting node =\n  match K.view node with\n  | Range _ | Store _ | End _ | Unroll _ | Barrier -> true\n  | _ -> false\n\nlet after_cleanup node =\n  match K.view node with\n  | After { src; deps = [] } -> Some src\n  | _ -> None\n\nlet bool_or_not node =\n  match K.view node with\n  | Binary { op = `Or; lhs = x; rhs; _ }\n    when Dtype.is_bool (K.dtype node) ->\n      (* x | !x -> true, or !x | x -> true *)\n      let is_not_of target n =\n        match K.view n with\n        | Binary { op = `Cmpne; lhs = nx; rhs = t; _ }\n          when nx == target && is_const_bool true t -> true\n        | _ -> false\n      in\n      if is_not_of x rhs || is_not_of rhs x then Some (K.const_bool true)\n      else None\n  | _ -> None\n\n(* SINK/GROUP cleanup *)\n\nlet is_removable_from_sink node =\n  match K.view node with\n  | Unroll _ | Vectorize _ -> 
true\n  | _ -> false\n\nlet flatten_removable srcs =\n  List.concat_map\n    (fun s -> if is_removable_from_sink s then K.children s else [ s ])\n    srcs\n\nlet sink_cleanup node =\n  match K.view node with\n  | Sink { srcs; kernel_info } when List.exists is_removable_from_sink srcs ->\n      Some (K.sink ?kernel_info (flatten_removable srcs))\n  | Group { srcs } when List.exists is_removable_from_sink srcs ->\n      Some (K.group (flatten_removable srcs))\n  | _ -> None\n\nlet group_singleton node =\n  match K.view node with\n  | Group { srcs = [ x ] } -> Some x\n  | _ -> None\n\nlet empty_unroll node =\n  match K.view node with\n  | Unroll { src; axes = []; _ } -> Some src\n  | _ -> None\n\n(* Phase 3: POW decomposition, reciprocal algebra, etc. *)\n\nlet pow_fold node =\n  match K.view node with\n  | Binary { op = `Pow; lhs = base; rhs = exp; dtype }\n    when Dtype.Val.is_float dtype ->\n      Some (Decomposition.xpow ~base ~exponent:exp)\n  | _ -> None\n\nlet where_cast_push node =\n  match K.view node with\n  | Cast { src; dtype } ->\n      (match K.view src with\n      | Ternary { op = `Where; a = s; b = a; c = b; _ } ->\n          Some (K.ternary ~op:`Where ~a:s\n                  ~b:(K.cast ~src:a ~dtype)\n                  ~c:(K.cast ~src:b ~dtype))\n      | _ -> None)\n  | _ -> None\n\nlet reciprocal_algebra node =\n  match K.view node with\n  | Unary { op = `Recip; src; _ } ->\n      (match K.view src with\n      | Binary { op = `Mul; lhs = x1; rhs = x2; _ } when x1 == x2 ->\n          let rx = K.unary ~op:`Recip ~src:x1 in\n          Some (K.binary ~op:`Mul ~lhs:rx ~rhs:rx)\n      | Binary { op = `Mul; lhs = x; rhs = c; _ } when K.is_const c ->\n          Some (K.binary ~op:`Mul\n                  ~lhs:(K.unary ~op:`Recip ~src:x)\n                  ~rhs:(K.unary ~op:`Recip ~src:c))\n      | _ -> None)\n  (* x * 1/(1+x) -> 1 - 1/(1+x) *)\n  | Binary { op = `Mul; lhs = x; rhs; _ } ->\n      (match K.view rhs with\n      | Unary { op = `Recip; src = sum; 
_ } ->\n          (match K.view sum with\n          | Binary { op = `Add; lhs = a; rhs = b; _ }\n            when (a == x && is_const_int 1 b) ||\n                 (b == x && is_const_int 1 a) ->\n              let one = K.const (Const.float (Dtype.val_of (K.dtype node)) 1.0) in\n              Some (K.binary ~op:`Sub ~lhs:one ~rhs)\n          | _ -> None)\n      | _ -> None)\n  | _ -> None\n\nlet distribute_neg_full node =\n  match K.view node with\n  | Binary { op = `Mul; lhs; rhs; _ } ->\n      try_both_orderings lhs rhs (fun neg_one sum ->\n        if is_const_int (-1) neg_one then\n          match K.view sum with\n          | Binary { op = `Add; lhs = x; rhs = y; _ } ->\n              Some (K.binary ~op:`Add\n                      ~lhs:(K.unary ~op:`Neg ~src:x)\n                      ~rhs:(K.unary ~op:`Neg ~src:y))\n          | _ -> None\n        else None)\n  | _ -> None\n\nlet distribute_mul_index node =\n  match K.view node with\n  | Binary { op = `Mul; lhs; rhs; dtype }\n    when is_index_dtype dtype && K.is_const rhs ->\n      (match K.view lhs with\n      | Binary { op = `Add; lhs = x; rhs = y; _ } ->\n          Some (K.binary ~op:`Add\n                  ~lhs:(K.binary ~op:`Mul ~lhs:x ~rhs)\n                  ~rhs:(K.binary ~op:`Mul ~lhs:y ~rhs))\n      | _ -> None)\n  | _ -> None\n\n(* Propagate Invalid_index *)\n\nlet is_invalid node =\n  match K.view node with Invalid_index _ -> true | _ -> false\n\nlet is_comparison = function\n  | `Cmplt | `Cmpeq | `Cmpne -> true\n  | _ -> false\n\nlet decompose_invalid_gate node =\n  match K.view node with\n  | Ternary { op = `Where; a = cond; b = x; c = inv; _ }\n    when is_invalid inv -> Some (cond, x, inv)\n  | _ -> None\n\nlet invalid_is_index inv =\n  match K.view inv with\n  | Invalid_index { dtype; _ } -> Dtype.Val.equal (Dtype.Val.scalarize dtype) Dtype.Val.index\n  | _ -> false\n\nlet propagate_invalid node =\n  match K.view node with\n  | Cast { src; dtype } ->\n      (match decompose_invalid_gate src 
with\n      | Some (cond, x, inv) ->\n          if invalid_is_index inv then Some (K.cast ~src:x ~dtype)\n          else\n            Some (K.ternary ~op:`Where ~a:cond\n                    ~b:(K.cast ~src:x ~dtype)\n                    ~c:(K.cast ~src:inv ~dtype))\n      | None -> None)\n  | Unary { op; src; _ } ->\n      (match decompose_invalid_gate src with\n      | Some (cond, x, inv) ->\n          Some (K.ternary ~op:`Where ~a:cond ~b:(K.unary ~op ~src:x) ~c:inv)\n      | None ->\n          (match K.view src with Invalid_index _ -> Some src | _ -> None))\n  | Binary { op; lhs; rhs; _ } when not (is_comparison op) ->\n      (match decompose_invalid_gate lhs, decompose_invalid_gate rhs with\n      | Some (cond, x, inv), None ->\n          Some (K.ternary ~op:`Where ~a:cond\n                  ~b:(K.binary ~op ~lhs:x ~rhs) ~c:inv)\n      | None, Some (cond, x, inv) ->\n          Some (K.ternary ~op:`Where ~a:cond\n                  ~b:(K.binary ~op ~lhs ~rhs:x) ~c:inv)\n      | _ ->\n          if is_invalid lhs then Some lhs\n          else if is_invalid rhs then Some rhs\n          else None)\n  | Binary { op; lhs; rhs; _ } when is_comparison op ->\n      let handle_side gate ~build =\n        match decompose_invalid_gate gate with\n        | Some (cond, x, inv) ->\n            if invalid_is_index inv then Some (build x)\n            else\n              Some (K.ternary ~op:`Where ~a:cond\n                      ~b:(build x)\n                      ~c:(K.cast ~src:inv ~dtype:Dtype.bool))\n        | None -> None\n      in\n      (match handle_side lhs\n               ~build:(fun x -> K.binary ~op ~lhs:x ~rhs) with\n      | Some _ as r -> r\n      | None ->\n          handle_side rhs\n            ~build:(fun x -> K.binary ~op ~lhs ~rhs:x))\n  | Ternary { op = `Where; a = cond; b = inv; c = val_; _ }\n    when is_invalid inv ->\n      if is_invalid val_ then Some inv\n      else\n        let not_cond =\n          K.binary ~op:`Cmpeq ~lhs:cond ~rhs:(K.const (Const.bool 
false))\n        in\n        Some (K.ternary ~op:`Where ~a:not_cond ~b:val_ ~c:inv)\n  | Ternary { op = `Where; a; b = gate; c; _ }\n    when not (is_invalid c) ->\n      (match decompose_invalid_gate gate with\n      | Some (cond, x, inv) ->\n          Some (K.ternary ~op:`Where ~a:cond\n                  ~b:(K.ternary ~op:`Where ~a ~b:x ~c)\n                  ~c:inv)\n      | None -> None)\n  | Ternary { op = `Where; a; b; c = gate; _ }\n    when not (is_invalid b) ->\n      (match decompose_invalid_gate gate with\n      | Some (cond, x, inv) ->\n          Some (K.ternary ~op:`Where ~a:cond\n                  ~b:(K.ternary ~op:`Where ~a ~b ~c:x)\n                  ~c:inv)\n      | None -> None)\n  | Bitcast { src; dtype } when is_invalid src ->\n      Some (K.cast ~src ~dtype:(Dtype.Val dtype))\n  | Bitcast { src; dtype } ->\n      (match decompose_invalid_gate src with\n      | Some (cond, x, inv) ->\n          Some (K.ternary ~op:`Where ~a:cond\n                  ~b:(K.bitcast ~src:x ~dtype)\n                  ~c:(K.bitcast ~src:inv ~dtype))\n      | None -> None)\n  | _ -> None\n\nlet fold_gated_load_store node =\n  match K.view node with\n  | Load { src; alt; dtype } ->\n      (match K.view src with\n      | Index { idxs; _ } when List.exists is_invalid idxs ->\n          let zero =\n            match alt with\n            | Some a -> a\n            | None ->\n                if Dtype.Val.is_float dtype then K.const (Const.float dtype 0.0)\n                else K.const (Const.int64 dtype 0L)\n          in\n          Some zero\n      | _ -> None)\n  | Store { dst; value; ranges } ->\n      (match decompose_invalid_gate value with\n      | Some (cond, val_, _inv) ->\n          (match K.view dst with\n          | Index { ptr; idxs; gate; _ } ->\n              let gated_idxs =\n                List.map\n                  (fun idx ->\n                    K.ternary ~op:`Where ~a:cond ~b:idx\n                      ~c:(K.invalid_index ()))\n                  idxs\n   
           in\n              let combined_gate = match gate with\n                | Some g -> K.binary ~op:`And ~lhs:cond ~rhs:g\n                | None -> cond\n              in\n              Some (K.store\n                      ~dst:(K.index ~ptr ~idxs:gated_idxs ~gate:combined_gate ())\n                      ~value:val_ ~ranges)\n          | _ -> None)\n      | None -> None)\n  | _ -> None\n\n(* Composed passes *)\n\nlet phase1_rules =\n  [ propagate_invalid;\n    const_fold;\n    self_fold;\n    identity_fold;\n    float_div_fold;\n    where_fold;\n    idempotent_fold;\n    trunc_int;\n    bool_arith_fold;\n    double_not_fold;\n    bool_where_identity;\n    const_cast_fold;\n    double_cast_fold;\n    cast_to_bool;\n    nested_where_fold;\n    divmod_reconstitute ]\n\nlet phase2_rules =\n  [ commutative_flip;\n    combine_terms;\n    associative_fold;\n    const_to_end;\n    nested_div_fold;\n    range_self_divmod;\n    max_fold;\n    range_collapse;\n    lt_const_fold;\n    distribute_neg;\n    float_div_chain;\n    distribute_const_mul;\n    where_not_inversion;\n    lt_mul_fold;\n    lt_div_fold;\n    lt_sign_flip;\n    cast_chain_fold;\n    after_cleanup;\n    bool_or_not;\n    Divandmod.div_and_mod_symbolic;\n    sink_cleanup;\n    group_singleton;\n    empty_unroll ]\n\nlet symbolic_simple = K.first_match phase1_rules\n\nlet symbolic = K.first_match (phase1_rules @ phase2_rules)\n\nlet sym =\n  K.first_match\n    ([ gep_pushing; alu_through_broadcast_vectorize ]\n     @ phase1_rules @ phase2_rules\n     @ [ pow_fold; where_cast_push; reciprocal_algebra;\n         distribute_neg_full; distribute_mul_index;\n         fold_gated_load_store ])\n\n(* Validity analysis — parse bound constraints from AND-chained conditions\n   and simplify expressions given those bounds.  Used by load_store_indexing\n   in the devectorizer to simplify index validity gates. 
*)\n\nlet split_and node =\n  let rec go acc = function\n    | [] -> List.rev acc\n    | n :: rest -> match K.view n with\n      | Binary { op = `And; lhs; rhs; _ } -> go acc (lhs :: rhs :: rest)\n      | _ -> go (n :: acc) rest\n  in\n  go [] [node]\n\n(* Parse a comparison into (expr, is_upper_bound, constant).\n   Returns (X, true, c) for X <= c, and (X, false, c) for X >= c. *)\nlet parse_valid v =\n  match K.view v with\n  | Binary { op = `Cmpne; lhs = s0; rhs = one; _ }\n    when K.const_arg one = Some (Int 1L) ->\n      (match K.view s0 with\n       | Binary { op = `Cmplt; lhs = x; rhs = c; _ }\n         when Dtype.is_int (K.dtype x) ->\n           Some (x, false, K.vmin c)\n       | _ -> None)\n  | Binary { op = `Cmplt; lhs = x; rhs = c; _ }\n    when Dtype.is_int (K.dtype x) ->\n      Some (x, true, K.vmax c - 1)\n  | _ -> None\n\nlet is_irreducible node = match K.view node with\n  | Const _ | Vconst _ | Define_var _ | Special _ | Range _ -> true\n  | _ -> false\n\n(* Simplify [uop] given that [valid] is known to be true.  Parses bound\n   constraints from [valid], substitutes bounded expressions with fresh\n   define_var proxies, simplifies, then substitutes back. 
*)\nlet uop_given_valid ?(try_simplex = true) valid uop =\n  let bounds : (K.t, int option * int option) Hashtbl.t = Hashtbl.create 8 in\n  List.iter (fun stmt ->\n    match parse_valid stmt with\n    | None -> ()\n    | Some (expr, is_upper, c) ->\n        let lo, hi = match Hashtbl.find_opt bounds expr with\n          | Some (lo, hi) -> lo, hi | None -> None, None in\n        let lo, hi = if is_upper then lo, Some c else Some c, hi in\n        Hashtbl.replace bounds expr (lo, hi))\n    (split_and valid);\n\n  let simplify node = K.graph_rewrite symbolic node in\n  let all_same = function\n    | [] -> true | x :: rest -> List.for_all (fun y -> y = x) rest in\n\n  let uop = ref uop in\n  let all_candidates = ref [] in\n  let i = ref 0 in\n  Hashtbl.iter (fun expr (lo_opt, hi_opt) ->\n    let v0 = match lo_opt with Some v -> v | None -> K.vmin expr in\n    let v1 = match hi_opt with Some v -> v | None -> K.vmax expr in\n    let fake = K.define_var ~name:(Printf.sprintf \"fake%d\" !i)\n      ~lo:v0 ~hi:v1 ~dtype:(Dtype.val_of (K.dtype expr)) () in\n    all_candidates := (expr, fake) :: !all_candidates;\n\n    if try_simplex then begin\n      let candidates = ref [[(expr, fake)]] in\n      if v0 = 1 then\n        (match K.view expr with\n         | Binary { op = `Add; _ } ->\n             let addends = Divandmod.split_add expr in\n             if List.for_all (fun u ->\n               is_irreducible u && K.vmin u = 0) addends\n             then\n               candidates := (List.mapi (fun _j xi ->\n                 (xi, K.define_var\n                   ~name:(Printf.sprintf \"fake%d\" !i)\n                   ~lo:1 ~hi:(K.vmax xi)\n                   ~dtype:(Dtype.val_of (K.dtype xi)) ())) addends) :: !candidates\n         | _ -> ());\n\n      List.iter (fun candidate ->\n        let newuops = List.map (fun (x, new_x) ->\n          K.substitute [(x, new_x)] !uop) candidate in\n        if not (List.exists (fun u -> u == !uop) newuops) then begin\n          let newuops 
= List.map2 (fun (_, new_x) u ->\n            let s = simplify u in\n            simplify (K.substitute\n              [(new_x, fst (List.find (fun (_, f) -> f = new_x) candidate))] s))\n            candidate newuops in\n          if all_same newuops then\n            uop := List.hd newuops\n          else match K.view !uop with\n          | Vectorize { srcs = [_; _]; _ } ->\n              let src0s = List.map (fun u ->\n                List.hd (K.children u)) newuops in\n              let src1s = List.map (fun u ->\n                List.nth (K.children u) 1) newuops in\n              if all_same src0s then\n                uop := K.replace !uop\n                  ~children:[List.hd src0s; List.nth (K.children !uop) 1] ();\n              if all_same src1s then\n                uop := K.replace !uop\n                  ~children:[List.hd (K.children !uop); List.hd src1s] ()\n          | _ -> ()\n        end)\n        !candidates\n    end;\n    incr i)\n    bounds;\n\n  let sub_dict = !all_candidates in\n  let s_uop = K.substitute sub_dict !uop in\n  if s_uop != !uop then begin\n    let rev = List.map (fun (x, new_x) -> (new_x, x)) sub_dict in\n    uop := simplify (K.substitute rev (simplify s_uop))\n  end;\n  !uop\n\n(* Extract the real index from a possibly-gated index expression.\n   where(cond, x, Invalid) → x;  anything else → self. *)\nlet get_idx node =\n  match K.view node with\n  | Ternary { op = `Where; b = x; c = inv; _ }\n    when (match K.view inv with Invalid_index _ -> true | _ -> false) -> x\n  | _ -> node\n\n(* Extract the validity condition from a possibly-gated index expression.\n   where(cond, x, Invalid) → cond;  anything else → const true. *)\nlet get_valid node =\n  match K.view node with\n  | Ternary { op = `Where; a = cond; c = inv; _ }\n    when (match K.view inv with Invalid_index _ -> true | _ -> false) -> cond\n  | _ -> K.const_bool true\n\n(* Wrap an index in a validity gate: where(cond, idx, Invalid). 
*)\nlet with_valid cond idx =\n  match K.view cond with\n  | Ternary { op = `Where; _ } when K.const_arg cond = Some (Bool true) -> idx\n  | _ ->\n      K.ternary ~op:`Where ~a:cond ~b:idx\n        ~c:(K.invalid_index ~lanes:(Dtype.count (K.dtype idx)) ())\n\n(* Move WHERE conditions from around loads into the INDEX gate.\n   Matches where(cond, buf.index(idx), 0): conditions whose ranges are a\n   subset of idx's ranges (and that don't introduce new INDEX deps) are\n   moved into the index's validity gate. *)\nlet where_on_load cond buf idx =\n  let where_clauses = split_and cond in\n  let load_valid = get_valid idx in\n  let in_load = split_and load_valid in\n  let idx_indexes = List.filter (fun u ->\n    match K.view u with Index _ -> true | _ -> false)\n    (K.backward_slice idx) in\n  let idx_ranges = K.live_ranges idx in\n  let can_move c =\n    let c_ranges = K.live_ranges c in\n    List.for_all (fun r -> List.exists (fun ir -> ir == r) idx_ranges) c_ranges\n    && List.for_all (fun u -> match K.view u with\n         | Index _ -> List.exists (fun iu -> iu == u) idx_indexes\n         | _ -> true)\n         (K.backward_slice c)\n  in\n  let movable = List.filter (fun c ->\n    not (List.exists (fun il -> il == c) in_load)) where_clauses in\n  let moved, keep = List.partition can_move movable in\n  if List.length keep = List.length where_clauses then None\n  else\n    let new_valid = List.fold_left (fun acc c ->\n      K.binary ~op:`And ~lhs:acc ~rhs:c) load_valid moved in\n    let new_idx = K.index ~ptr:buf ~idxs:[with_valid new_valid (get_idx idx)] () in\n    let outer = match keep with\n      | [] -> new_idx\n      | _ ->\n          let outer_cond = List.fold_left (fun acc c ->\n            K.binary ~op:`And ~lhs:acc ~rhs:c) (K.const_bool true) keep in\n          K.ternary ~op:`Where ~a:outer_cond ~b:new_idx\n            ~c:(K.const_int 0) in\n    Some outer\n\nlet is_zero_const n = match K.const_arg n with\n  | Some (Int 0L) | Some (Bool false) -> true\n  | 
Some (Float f) -> f = 0.0\n  | _ -> false\n\n(* Move WHERE conditions into INDEX validity gates.\n   Rule 1: where(cond, buf.index(idx), 0)\n   Rule 2: where(cond, 0, buf.index(idx)) — negates the condition. *)\nlet pm_move_where_on_load node =\n  match K.view node with\n  | Ternary { op = `Where; a = cond; b = load; c = zero; _ }\n    when is_zero_const zero ->\n      (match K.view load with\n       | Index { ptr; idxs = [idx]; _ } ->\n           where_on_load cond ptr idx\n       | _ -> None)\n  | Ternary { op = `Where; a = cond; b = zero; c = load; _ }\n    when is_zero_const zero ->\n      (match K.view load with\n       | Index { ptr; idxs = [idx]; _ } ->\n           let negated = K.binary ~op:`Cmpne ~lhs:cond ~rhs:(K.const_bool true) in\n           where_on_load negated ptr idx\n       | _ -> None)\n  | _ -> None\n"
  },
  {
    "path": "packages/tolk/lib/ir/symbolic.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\n(** Symbolic simplification rules for {!Kernel} IR.\n\n    Rules have type [Kernel.t -> Kernel.t option] and compose with\n    {!Kernel.first_match}.\n\n    {b Layering.} Three phases:\n\n    - {!symbolic_simple} (phase 1): generic folding — constant folding,\n      identity removal, self-folding, where folding, divmod reconstitution.\n    - {!symbolic} (phase 2): adds algebraic rules (combine terms, associative\n      folding, lt folding, range collapse) and {!Divandmod.div_and_mod_symbolic}.\n    - {!sym} (phase 3): adds GEP pushing, vectorize reordering, POW\n      decomposition.\n\n    Callers pick the layer they need and compose it with their own rules. *)\n\n(* CR: documentation is not following our guidelines, need to update with /ocaml-doc skill *)\n\nval gep_pushing : Kernel.t -> Kernel.t option\n(** GEP (get-element-pointer) simplification rules.\n\n    - [Gep(Vectorize(a,b,c,...), i)] → lane [i]\n    - [Gep(Const(c), _)] → [Const(c)]\n    - [Gep(void, _)] → source\n    - [Vcat(a, b)] → [Vectorize(Gep(a,0), ..., Gep(b,0), ...)] *)\n\nval symbolic_simple : Kernel.t -> Kernel.t option\n(** Phase 1: generic folding. Constant folding, identity removal,\n    self-folding (x//x→1, x%x→0, x^x→0), where folding, idempotent ops,\n    divmod reconstitution. *)\n\nval symbolic : Kernel.t -> Kernel.t option\n(** Phase 2: algebraic simplification. 
Includes {!symbolic_simple} plus:\n\n    - Combine terms: [x*c0 + x*c1 → x*(c0+c1)], [x+x → x*2]\n    - Associative folding: [(x + c1) + c2 → x + (c1 + c2)]\n    - Nested div: [(x // c1) // c2 → x // (c1*c2)]\n    - Range self-div/mod: [Range(n) % n → Range(n)]\n    - Max folding via vmin/vmax\n    - Range/ALU collapse when vmin==vmax\n    - Lt constant folding: [c0 + x < c1 → x < c1 - c0]\n    - {!Divandmod.div_and_mod_symbolic}\n\n    Use this in substitution contexts. *)\n\nval sym : Kernel.t -> Kernel.t option\n(** Phase 3: full symbolic simplification. Includes {!symbolic} plus:\n\n    - GEP pushing and vectorize reordering\n    - [ALU(Vectorize(x,...), Vectorize(y,...))] → [Vectorize(ALU(x,y), ...)]\n    - POW → [exp2(exponent * log2(base))] via {!Decomposition.xpow} *)\n\nval split_and : Kernel.t -> Kernel.t list\n(** [split_and node] flattens an AND tree into a list of conjuncts. *)\n\nval is_irreducible : Kernel.t -> bool\n(** [is_irreducible node] is [true] for Const, Vconst, Define_var,\n    Special, and Range nodes. *)\n\nval parse_valid : Kernel.t -> (Kernel.t * bool * int) option\n(** [parse_valid v] parses a comparison into [(expr, is_upper_bound, c)].\n    Returns [(X, true, c)] for [X <= c], [(X, false, c)] for [X >= c]. *)\n\nval uop_given_valid : ?try_simplex:bool -> Kernel.t -> Kernel.t -> Kernel.t\n(** [uop_given_valid valid uop] simplifies [uop] given that [valid] is true.\n    Parses bound constraints from [valid], substitutes bounded expressions\n    with proxies, simplifies, and substitutes back. *)\n\nval pm_move_where_on_load : Kernel.t -> Kernel.t option\n(** [pm_move_where_on_load node] moves WHERE conditions from around\n    loads into the INDEX validity gate when the condition's ranges are\n    a subset of the index's ranges. *)\n"
  },
  {
    "path": "packages/tolk/lib/ir/tensor.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\n(* Hash-consed tensor graph IR, modelled after Kernel.t. *)\n\ntype device = Single of string | Multi of string list\ntype metadata = { name : string; caller : string; backward : bool }\n\ntype view =\n  | Sink of { srcs : t list; kernel_info : Kernel.kernel_info option }\n  | Group of { srcs : t list }\n  | After of { src : t; deps : t list; dtype : Dtype.t }\n  | Unique of { id : int }\n  | Lunique of { id : int }\n  | Device of { device : device }\n  | Buffer of { unique : t; device : t; size : int; dtype : Dtype.t }\n  | Buffer_view of { src : t; size : int; offset : int; dtype : Dtype.t }\n  | Const of { value : Const.t; dtype : Dtype.t; srcs : t list }\n  | Vconst of { values : Const.t list; dtype : Dtype.t; srcs : t list }\n  | Define_var of { name : string; lo : int; hi : int; dtype : Dtype.t }\n  | Bind of { var : t; value : t option; dtype : Dtype.t }\n  | Param of { slot : int; dtype : Dtype.t; shape : t option; device : t option }\n  | Call of { callee : callee; args : t list; info : call_info; dtype : Dtype.t }\n  | Detach of { src : t; dtype : Dtype.t }\n  | Contiguous of { src : t; ranges : t list; opts : Kernel.Opt.t list; dtype : Dtype.t }\n  | Contiguous_backward of { src : t; dtype : Dtype.t }\n  | Copy of { src : t; device : t; dtype : Dtype.t }\n  | Allreduce of { src : t; device : t; op : Op.reduce; dtype : Dtype.t }\n  | Multi of { src : t; axis : int; dtype : Dtype.t }\n  | Mstack of { srcs : t list; dtype : Dtype.t }\n  | Mselect of { src : t; index : int; dtype : Dtype.t }\n  | Reduce_axis of { src : t; op : Op.reduce; axes : int list; dtype : Dtype.t }\n  | Reduce of { src : t; ranges : t list; op 
: Op.reduce; dtype : Dtype.t }\n  | Reshape of { src : t; shape : t; dtype : Dtype.t }\n  | Expand of { src : t; shape : t; dtype : Dtype.t }\n  | Pad of { src : t; before : t; after : t; dtype : Dtype.t }\n  | Shrink of { src : t; before : t; after : t; dtype : Dtype.t }\n  | Permute of { src : t; order : int list; dtype : Dtype.t }\n  | Flip of { src : t; dims : bool list; dtype : Dtype.t }\n  | Range of { size : t; dtype : Dtype.t; axis : int; sub : int list; kind : Axis_kind.t }\n  | End of { value : t; ranges : t list }\n  | Index of { ptr : t; idxs : t list; gate : t option; dtype : Dtype.t }\n  | Store of { dst : t; value : t }\n  | Vectorize of { srcs : t list; dtype : Dtype.t }\n  | Cast of { src : t; dtype : Dtype.t }\n  | Bitcast of { src : t; dtype : Dtype.t }\n  | Unary of { op : Op.unary; src : t; dtype : Dtype.t }\n  | Binary of { op : Op.binary; lhs : t; rhs : t; dtype : Dtype.t }\n  | Ternary of { op : Op.ternary; a : t; b : t; c : t; dtype : Dtype.t }\n  | Noop of { src : t option; dtype : Dtype.t }\n  | Bufferize of { src : t; ranges : t list; dtype : Dtype.t; opts : Kernel.bufferize_opts }\n  | Invalid_index of { dtype : Dtype.t }\n  | Define_local of { size : int; dtype : Dtype.Ptr.t }\n  | Barrier\n  | Linear of { srcs : t list }\n  | Shaped_wmma of {\n      a : t; b : t; acc : t;\n      dims : int * int * int;\n      device : string;\n      threads : int;\n      dtype : Dtype.t;\n    }\n\nand t = view Hashcons.hash_consed\n\nand callee = Ref of t | Ast of Kernel.t\n\nand call_info = {\n  grad_fxn : grad_fxn option;\n  metadata : metadata list;\n  name : string option;\n  precompile : bool;\n}\n\nand grad_fxn = grad_output:t -> call:t -> t option list\n\n(* Hash-consing *)\n\nlet phys_list_eq a b =\n  List.length a = List.length b && List.for_all2 (==) a b\n\nlet phys_opt_eq a b =\n  match a, b with None, None -> true | Some x, Some y -> x == y | _ -> false\n\nlet rec shallow_hash_view = function\n  | Sink { srcs; kernel_info } ->\n      
Hashtbl.hash (0, List.map (fun n -> n.Hashcons.tag) srcs, kernel_info)\n  | Group { srcs } ->\n      Hashtbl.hash (1, List.map (fun n -> n.Hashcons.tag) srcs)\n  | After { src; deps; dtype } ->\n      Hashtbl.hash (2, src.Hashcons.tag,\n        List.map (fun n -> n.Hashcons.tag) deps, dtype)\n  | Unique { id } -> Hashtbl.hash (3, id)\n  | Lunique { id } -> Hashtbl.hash (4, id)\n  | Device { device } -> Hashtbl.hash (5, device)\n  | Buffer { unique; device; size; dtype } ->\n      Hashtbl.hash (6, unique.Hashcons.tag, device.Hashcons.tag, size, dtype)\n  | Buffer_view { src; size; offset; dtype } ->\n      Hashtbl.hash (7, src.Hashcons.tag, size, offset, dtype)\n  | Const { value; dtype; srcs } ->\n      Hashtbl.hash (8, value, dtype, List.map (fun n -> n.Hashcons.tag) srcs)\n  | Vconst { values; dtype; srcs } ->\n      Hashtbl.hash (9, values, dtype, List.map (fun n -> n.Hashcons.tag) srcs)\n  | Define_var { name; lo; hi; dtype } ->\n      Hashtbl.hash (10, name, lo, hi, dtype)\n  | Bind { var; value; dtype } ->\n      Hashtbl.hash (11, var.Hashcons.tag,\n        (match value with None -> -1 | Some v -> v.Hashcons.tag), dtype)\n  | Param { slot; dtype; shape; device } ->\n      Hashtbl.hash (12, slot, dtype,\n        (match shape with None -> -1 | Some s -> s.Hashcons.tag),\n        (match device with None -> -1 | Some d -> d.Hashcons.tag))\n  | Call { callee; args; dtype; _ } ->\n      Hashtbl.hash (13,\n        (match callee with Ref r -> r.Hashcons.tag | Ast _ -> -1),\n        List.map (fun n -> n.Hashcons.tag) args, dtype)\n  | Detach { src; dtype } -> Hashtbl.hash (14, src.Hashcons.tag, dtype)\n  | Contiguous { src; ranges; opts; dtype } ->\n      Hashtbl.hash (15, src.Hashcons.tag,\n        List.map (fun n -> n.Hashcons.tag) ranges, opts, dtype)\n  | Contiguous_backward { src; dtype } ->\n      Hashtbl.hash (16, src.Hashcons.tag, dtype)\n  | Copy { src; device; dtype } ->\n      Hashtbl.hash (17, src.Hashcons.tag, device.Hashcons.tag, dtype)\n  | Allreduce { 
src; device; op; dtype } ->\n      Hashtbl.hash (18, src.Hashcons.tag, device.Hashcons.tag, op, dtype)\n  | Multi { src; axis; dtype } ->\n      Hashtbl.hash (19, src.Hashcons.tag, axis, dtype)\n  | Mstack { srcs; dtype } ->\n      Hashtbl.hash (20, List.map (fun n -> n.Hashcons.tag) srcs, dtype)\n  | Mselect { src; index; dtype } ->\n      Hashtbl.hash (21, src.Hashcons.tag, index, dtype)\n  | Reduce_axis { src; op; axes; dtype } ->\n      Hashtbl.hash (22, src.Hashcons.tag, op, axes, dtype)\n  | Reduce { src; ranges; op; dtype } ->\n      Hashtbl.hash (23, src.Hashcons.tag,\n        List.map (fun n -> n.Hashcons.tag) ranges, op, dtype)\n  | Reshape { src; shape; dtype } ->\n      Hashtbl.hash (24, src.Hashcons.tag, shape.Hashcons.tag, dtype)\n  | Expand { src; shape; dtype } ->\n      Hashtbl.hash (25, src.Hashcons.tag, shape.Hashcons.tag, dtype)\n  | Pad { src; before; after; dtype } ->\n      Hashtbl.hash (26, src.Hashcons.tag, before.Hashcons.tag,\n        after.Hashcons.tag, dtype)\n  | Shrink { src; before; after; dtype } ->\n      Hashtbl.hash (27, src.Hashcons.tag, before.Hashcons.tag,\n        after.Hashcons.tag, dtype)\n  | Permute { src; order; dtype } ->\n      Hashtbl.hash (28, src.Hashcons.tag, order, dtype)\n  | Flip { src; dims; dtype } ->\n      Hashtbl.hash (29, src.Hashcons.tag, dims, dtype)\n  | Range { size; dtype; axis; sub; kind } ->\n      Hashtbl.hash (30, size.Hashcons.tag, dtype, axis, sub, kind)\n  | End { value; ranges } ->\n      Hashtbl.hash (31, value.Hashcons.tag,\n        List.map (fun n -> n.Hashcons.tag) ranges)\n  | Index { ptr; idxs; gate; dtype } ->\n      Hashtbl.hash (32, ptr.Hashcons.tag,\n        List.map (fun n -> n.Hashcons.tag) idxs,\n        (match gate with None -> -1 | Some g -> g.Hashcons.tag), dtype)\n  | Store { dst; value } ->\n      Hashtbl.hash (33, dst.Hashcons.tag, value.Hashcons.tag)\n  | Vectorize { srcs; dtype } ->\n      Hashtbl.hash (34, List.map (fun n -> n.Hashcons.tag) srcs, dtype)\n  | Cast { src; 
dtype } -> Hashtbl.hash (35, src.Hashcons.tag, dtype)\n  | Bitcast { src; dtype } -> Hashtbl.hash (36, src.Hashcons.tag, dtype)\n  | Unary { op; src; dtype } ->\n      Hashtbl.hash (37, op, src.Hashcons.tag, dtype)\n  | Binary { op; lhs; rhs; dtype } ->\n      Hashtbl.hash (38, op, lhs.Hashcons.tag, rhs.Hashcons.tag, dtype)\n  | Ternary { op; a; b; c; dtype } ->\n      Hashtbl.hash (39, op, a.Hashcons.tag, b.Hashcons.tag,\n        c.Hashcons.tag, dtype)\n  | Noop { src; dtype } ->\n      Hashtbl.hash (40, (match src with None -> -1 | Some s -> s.Hashcons.tag),\n        dtype)\n  | Bufferize { src; ranges; dtype; opts } ->\n      Hashtbl.hash (41, src.Hashcons.tag,\n        List.map (fun n -> n.Hashcons.tag) ranges, dtype, opts)\n  | Invalid_index { dtype } -> Hashtbl.hash (42, dtype)\n  | Define_local { size; dtype } -> Hashtbl.hash (43, size, dtype)\n  | Barrier -> Hashtbl.hash 44\n  | Linear { srcs } ->\n      Hashtbl.hash (45, List.map (fun n -> n.Hashcons.tag) srcs)\n  | Shaped_wmma { a; b; acc; dims; device; threads; dtype } ->\n      Hashtbl.hash (46, a.Hashcons.tag, b.Hashcons.tag, acc.Hashcons.tag,\n        dims, device, threads, dtype)\n\nand shallow_equal_view v1 v2 =\n  match v1, v2 with\n  | Sink s1, Sink s2 ->\n      phys_list_eq s1.srcs s2.srcs && s1.kernel_info = s2.kernel_info\n  | Group g1, Group g2 -> phys_list_eq g1.srcs g2.srcs\n  | After a1, After a2 ->\n      a1.src == a2.src && phys_list_eq a1.deps a2.deps && a1.dtype = a2.dtype\n  | Unique u1, Unique u2 -> u1.id = u2.id\n  | Lunique l1, Lunique l2 -> l1.id = l2.id\n  | Device d1, Device d2 -> d1.device = d2.device\n  | Buffer b1, Buffer b2 ->\n      b1.unique == b2.unique && b1.device == b2.device\n      && b1.size = b2.size && b1.dtype = b2.dtype\n  | Buffer_view b1, Buffer_view b2 ->\n      b1.src == b2.src && b1.size = b2.size\n      && b1.offset = b2.offset && b1.dtype = b2.dtype\n  | Const c1, Const c2 ->\n      c1.value = c2.value && c1.dtype = c2.dtype && phys_list_eq c1.srcs 
c2.srcs\n  | Vconst v1, Vconst v2 ->\n      v1.values = v2.values && v1.dtype = v2.dtype && phys_list_eq v1.srcs v2.srcs\n  | Define_var d1, Define_var d2 ->\n      d1.name = d2.name && d1.lo = d2.lo && d1.hi = d2.hi && d1.dtype = d2.dtype\n  | Bind b1, Bind b2 ->\n      b1.var == b2.var && phys_opt_eq b1.value b2.value && b1.dtype = b2.dtype\n  | Param p1, Param p2 ->\n      p1.slot = p2.slot && p1.dtype = p2.dtype\n      && phys_opt_eq p1.shape p2.shape && phys_opt_eq p1.device p2.device\n  | Call c1, Call c2 ->\n      (match c1.callee, c2.callee with\n       | Ref r1, Ref r2 -> r1 == r2\n       | Ast a1, Ast a2 -> a1 == a2\n       | _ -> false)\n      && phys_list_eq c1.args c2.args && c1.dtype = c2.dtype\n  | Detach d1, Detach d2 -> d1.src == d2.src && d1.dtype = d2.dtype\n  | Contiguous c1, Contiguous c2 ->\n      c1.src == c2.src && phys_list_eq c1.ranges c2.ranges\n      && c1.opts = c2.opts && c1.dtype = c2.dtype\n  | Contiguous_backward c1, Contiguous_backward c2 ->\n      c1.src == c2.src && c1.dtype = c2.dtype\n  | Copy c1, Copy c2 ->\n      c1.src == c2.src && c1.device == c2.device && c1.dtype = c2.dtype\n  | Allreduce a1, Allreduce a2 ->\n      a1.src == a2.src && a1.device == a2.device\n      && a1.op = a2.op && a1.dtype = a2.dtype\n  | Multi m1, Multi m2 ->\n      m1.src == m2.src && m1.axis = m2.axis && m1.dtype = m2.dtype\n  | Mstack m1, Mstack m2 ->\n      phys_list_eq m1.srcs m2.srcs && m1.dtype = m2.dtype\n  | Mselect m1, Mselect m2 ->\n      m1.src == m2.src && m1.index = m2.index && m1.dtype = m2.dtype\n  | Reduce_axis r1, Reduce_axis r2 ->\n      r1.src == r2.src && r1.op = r2.op && r1.axes = r2.axes && r1.dtype = r2.dtype\n  | Reduce r1, Reduce r2 ->\n      r1.src == r2.src && phys_list_eq r1.ranges r2.ranges\n      && r1.op = r2.op && r1.dtype = r2.dtype\n  | Reshape r1, Reshape r2 ->\n      r1.src == r2.src && r1.shape == r2.shape && r1.dtype = r2.dtype\n  | Expand e1, Expand e2 ->\n      e1.src == e2.src && e1.shape == e2.shape && 
e1.dtype = e2.dtype\n  | Pad p1, Pad p2 ->\n      p1.src == p2.src && p1.before == p2.before\n      && p1.after == p2.after && p1.dtype = p2.dtype\n  | Shrink s1, Shrink s2 ->\n      s1.src == s2.src && s1.before == s2.before\n      && s1.after == s2.after && s1.dtype = s2.dtype\n  | Permute p1, Permute p2 ->\n      p1.src == p2.src && p1.order = p2.order && p1.dtype = p2.dtype\n  | Flip f1, Flip f2 ->\n      f1.src == f2.src && f1.dims = f2.dims && f1.dtype = f2.dtype\n  | Range r1, Range r2 ->\n      r1.size == r2.size && r1.dtype = r2.dtype && r1.axis = r2.axis\n      && r1.sub = r2.sub && r1.kind = r2.kind\n  | End e1, End e2 ->\n      e1.value == e2.value && phys_list_eq e1.ranges e2.ranges\n  | Index i1, Index i2 ->\n      i1.ptr == i2.ptr && phys_list_eq i1.idxs i2.idxs\n      && phys_opt_eq i1.gate i2.gate && i1.dtype = i2.dtype\n  | Store s1, Store s2 -> s1.dst == s2.dst && s1.value == s2.value\n  | Vectorize v1, Vectorize v2 ->\n      phys_list_eq v1.srcs v2.srcs && v1.dtype = v2.dtype\n  | Cast c1, Cast c2 -> c1.src == c2.src && c1.dtype = c2.dtype\n  | Bitcast b1, Bitcast b2 -> b1.src == b2.src && b1.dtype = b2.dtype\n  | Unary u1, Unary u2 ->\n      u1.op = u2.op && u1.src == u2.src && u1.dtype = u2.dtype\n  | Binary b1, Binary b2 ->\n      b1.op = b2.op && b1.lhs == b2.lhs && b1.rhs == b2.rhs && b1.dtype = b2.dtype\n  | Ternary t1, Ternary t2 ->\n      t1.op = t2.op && t1.a == t2.a && t1.b == t2.b\n      && t1.c == t2.c && t1.dtype = t2.dtype\n  | Noop n1, Noop n2 -> phys_opt_eq n1.src n2.src && n1.dtype = n2.dtype\n  | Bufferize b1, Bufferize b2 ->\n      b1.src == b2.src && phys_list_eq b1.ranges b2.ranges\n      && b1.dtype = b2.dtype && b1.opts = b2.opts\n  | Invalid_index i1, Invalid_index i2 -> i1.dtype = i2.dtype\n  | Define_local d1, Define_local d2 -> d1.size = d2.size && d1.dtype = d2.dtype\n  | Barrier, Barrier -> true\n  | Linear l1, Linear l2 -> phys_list_eq l1.srcs l2.srcs\n  | Shaped_wmma w1, Shaped_wmma w2 ->\n      w1.a == w2.a && 
w1.b == w2.b && w1.acc == w2.acc\n      && w1.dims = w2.dims && w1.device = w2.device\n      && w1.threads = w2.threads && Dtype.equal w1.dtype w2.dtype\n  | _ -> false\n\nmodule View_hc = Hashcons.Make (struct\n  type nonrec t = view\n  let equal = shallow_equal_view\n  let hash = shallow_hash_view\nend)\n\nlet hc_table = View_hc.create 4096\nlet mk v = View_hc.hashcons hc_table v\n\n(* Accessors *)\n\nlet view (n : t) = n.Hashcons.node\nlet tag (n : t) = n.Hashcons.tag\n\nlet node_dtype = function\n  | After { dtype; _ } | Buffer { dtype; _ } | Buffer_view { dtype; _ }\n  | Const { dtype; _ } | Vconst { dtype; _ } | Define_var { dtype; _ }\n  | Bind { dtype; _ } | Param { dtype; _ } | Call { dtype; _ }\n  | Detach { dtype; _ } | Contiguous { dtype; _ }\n  | Contiguous_backward { dtype; _ } | Copy { dtype; _ }\n  | Allreduce { dtype; _ } | Multi { dtype; _ } | Mstack { dtype; _ }\n  | Mselect { dtype; _ } | Reduce_axis { dtype; _ } | Reduce { dtype; _ }\n  | Reshape { dtype; _ } | Expand { dtype; _ } | Pad { dtype; _ }\n  | Shrink { dtype; _ } | Permute { dtype; _ } | Flip { dtype; _ }\n  | Range { dtype; _ } | Index { dtype; _ } | Vectorize { dtype; _ }\n  | Cast { dtype; _ } | Bitcast { dtype; _ } | Unary { dtype; _ }\n  | Binary { dtype; _ } | Ternary { dtype; _ } | Noop { dtype; _ }\n  | Bufferize { dtype; _ } | Invalid_index { dtype; _ }\n  | Shaped_wmma { dtype; _ } ->\n      Some dtype\n  | Define_local { dtype; _ } -> Some (Dtype.Val (Dtype.Ptr.base dtype))\n  | Sink _ | Group _ | Unique _ | Lunique _ | Device _ | End _ | Store _\n  | Barrier | Linear _ -> None\n\nlet dtype n = node_dtype (view n)\nlet node_dtype_of n = node_dtype (view n)\n\nlet children_of = function\n  | Sink { srcs; _ } | Group { srcs } | Linear { srcs } -> srcs\n  | After { src; deps; _ } -> src :: deps\n  | Unique _ | Lunique _ | Device _ | Define_var _ | Invalid_index _\n  | Barrier | Define_local _ -> []\n  | Buffer { unique; device; _ } -> [ unique; device ]\n  | Buffer_view { 
src; _ } -> [ src ]\n  | Const { srcs; _ } | Vconst { srcs; _ } -> srcs\n  | Bind { var; value; _ } ->\n      var :: (match value with Some v -> [v] | None -> [])\n  | Param { shape; device; _ } ->\n      List.filter_map Fun.id [shape; device]\n  | Call { callee; args; _ } ->\n      (match callee with Ref r -> r :: args | Ast _ -> args)\n  | Detach { src; _ } | Contiguous_backward { src; _ }\n  | Multi { src; _ } | Mselect { src; _ }\n  | Cast { src; _ } | Bitcast { src; _ }\n  | Unary { src; _ } -> [src]\n  | Contiguous { src; ranges; _ } | Reduce { src; ranges; _ }\n  | Bufferize { src; ranges; _ } -> src :: ranges\n  | Copy { src; device; _ } | Allreduce { src; device; _ } -> [src; device]\n  | Mstack { srcs; _ } | Vectorize { srcs; _ } -> srcs\n  | Reduce_axis { src; _ } -> [src]\n  | Reshape { src; shape; _ } | Expand { src; shape; _ } -> [src; shape]\n  | Pad { src; before; after; _ } | Shrink { src; before; after; _ } ->\n      [src; before; after]\n  | Permute { src; _ } | Flip { src; _ } -> [src]\n  | Range { size; _ } -> [size]\n  | End { value; ranges } -> value :: ranges\n  | Index { ptr; idxs; gate; _ } ->\n      ptr :: idxs @ (match gate with Some g -> [g] | None -> [])\n  | Store { dst; value } -> [dst; value]\n  | Binary { lhs; rhs; _ } -> [lhs; rhs]\n  | Ternary { a; b; c; _ } -> [a; b; c]\n  | Shaped_wmma { a; b; acc; _ } -> [a; b; acc]\n  | Noop { src; _ } -> (match src with Some s -> [s] | None -> [])\n\nlet children n = children_of (view n)\n\n(* [map_children] applies [f] to every child in the same order as\n   [children_of].  All [f] calls use explicit [let]-bindings so that\n   evaluation order is left-to-right regardless of the compiler's\n   record-field evaluation order.  This matters when [f] carries\n   mutable state (e.g. the index counter in [replace]). 
*)\nlet map_children (f : t -> t) = function\n  | Sink { srcs; kernel_info } -> Sink { srcs = List.map f srcs; kernel_info }\n  | Group { srcs } -> Group { srcs = List.map f srcs }\n  | After { src; deps; dtype } ->\n      let src = f src in let deps = List.map f deps in\n      After { src; deps; dtype }\n  | (Unique _ | Lunique _ | Device _ | Define_var _ | Invalid_index _\n    | Barrier | Define_local _) as v -> v\n  | Buffer { unique; device; size; dtype } ->\n      let unique = f unique in let device = f device in\n      Buffer { unique; device; size; dtype }\n  | Buffer_view { src; size; offset; dtype } ->\n      Buffer_view { src = f src; size; offset; dtype }\n  | Const { value; dtype; srcs } -> Const { value; dtype; srcs = List.map f srcs }\n  | Vconst { values; dtype; srcs } -> Vconst { values; dtype; srcs = List.map f srcs }\n  | Bind { var; value; dtype } ->\n      let var = f var in let value = Option.map f value in\n      Bind { var; value; dtype }\n  | Param { slot; dtype; shape; device } ->\n      let shape = Option.map f shape in let device = Option.map f device in\n      Param { slot; dtype; shape; device }\n  | Call { callee; args; info; dtype } ->\n      let callee = match callee with Ref r -> Ref (f r) | Ast _ -> callee in\n      let args = List.map f args in\n      Call { callee; args; info; dtype }\n  | Detach { src; dtype } -> Detach { src = f src; dtype }\n  | Contiguous { src; ranges; opts; dtype } ->\n      let src = f src in let ranges = List.map f ranges in\n      Contiguous { src; ranges; opts; dtype }\n  | Contiguous_backward { src; dtype } ->\n      Contiguous_backward { src = f src; dtype }\n  | Copy { src; device; dtype } ->\n      let src = f src in let device = f device in\n      Copy { src; device; dtype }\n  | Allreduce { src; device; op; dtype } ->\n      let src = f src in let device = f device in\n      Allreduce { src; device; op; dtype }\n  | Multi { src; axis; dtype } -> Multi { src = f src; axis; dtype }\n  | Mstack { 
srcs; dtype } -> Mstack { srcs = List.map f srcs; dtype }\n  | Mselect { src; index; dtype } -> Mselect { src = f src; index; dtype }\n  | Reduce_axis { src; op; axes; dtype } ->\n      Reduce_axis { src = f src; op; axes; dtype }\n  | Reduce { src; ranges; op; dtype } ->\n      let src = f src in let ranges = List.map f ranges in\n      Reduce { src; ranges; op; dtype }\n  | Reshape { src; shape; dtype } ->\n      let src = f src in let shape = f shape in\n      Reshape { src; shape; dtype }\n  | Expand { src; shape; dtype } ->\n      let src = f src in let shape = f shape in\n      Expand { src; shape; dtype }\n  | Pad { src; before; after; dtype } ->\n      let src = f src in let before = f before in let after = f after in\n      Pad { src; before; after; dtype }\n  | Shrink { src; before; after; dtype } ->\n      let src = f src in let before = f before in let after = f after in\n      Shrink { src; before; after; dtype }\n  | Permute { src; order; dtype } -> Permute { src = f src; order; dtype }\n  | Flip { src; dims; dtype } -> Flip { src = f src; dims; dtype }\n  | Range { size; dtype; axis; sub; kind } ->\n      Range { size = f size; dtype; axis; sub; kind }\n  | End { value; ranges } ->\n      let value = f value in let ranges = List.map f ranges in\n      End { value; ranges }\n  | Index { ptr; idxs; gate; dtype } ->\n      let ptr = f ptr in let idxs = List.map f idxs in\n      let gate = Option.map f gate in\n      Index { ptr; idxs; gate; dtype }\n  | Store { dst; value } ->\n      let dst = f dst in let value = f value in\n      Store { dst; value }\n  | Vectorize { srcs; dtype } -> Vectorize { srcs = List.map f srcs; dtype }\n  | Cast { src; dtype } -> Cast { src = f src; dtype }\n  | Bitcast { src; dtype } -> Bitcast { src = f src; dtype }\n  | Unary { op; src; dtype } -> Unary { op; src = f src; dtype }\n  | Binary { op; lhs; rhs; dtype } ->\n      let lhs = f lhs in let rhs = f rhs in\n      Binary { op; lhs; rhs; dtype }\n  | Ternary { op; a; b; 
c; dtype } ->\n      let a = f a in let b = f b in let c = f c in\n      Ternary { op; a; b; c; dtype }\n  | Noop { src; dtype } -> Noop { src = Option.map f src; dtype }\n  | Bufferize { src; ranges; dtype; opts } ->\n      let src = f src in let ranges = List.map f ranges in\n      Bufferize { src; ranges; dtype; opts }\n  | Linear { srcs } -> Linear { srcs = List.map f srcs }\n  | Shaped_wmma w ->\n      let a = f w.a in let b = f w.b in let acc = f w.acc in\n      Shaped_wmma { w with a; b; acc }\n\n(* Helpers used by both validation and analysis *)\n\nlet extract_int_shape n =\n  match view n with\n  | Const { value; _ } ->\n      (match Const.view value with\n       | Int i -> Some [ Int64.to_int i ]\n       | _ -> None)\n  | Vconst { values = []; _ } -> Some []\n  | Vectorize { srcs; _ } ->\n      let ints = List.filter_map (fun s ->\n        match view s with\n        | Const { value; _ } ->\n            (match Const.view value with\n             | Int i -> Some (Int64.to_int i)\n             | _ -> None)\n        | _ -> None) srcs\n      in\n      if List.length ints = List.length srcs then Some ints else None\n  | _ -> None\n\n(* Validation *)\n\nlet check cond msg = if not cond then failwith msg\n\nlet is_device_node n = match view n with Device _ -> true | _ -> false\nlet is_unique_node n = match view n with Unique _ | Lunique _ -> true | _ -> false\nlet is_define_var_node n = match view n with Define_var _ -> true | _ -> false\n\nlet is_index_dtype dt = Dtype.equal (Dtype.scalarize dt) Dtype.index\n\nlet is_index_vector_node n =\n  match view n with\n  | Const { dtype; _ } -> is_index_dtype dtype\n  | Vectorize { dtype; _ } -> is_index_dtype dtype\n  | Vconst { dtype; _ } -> is_index_dtype dtype\n  | _ -> false\n\nlet is_comparison = function `Cmplt | `Cmpeq | `Cmpne -> true | _ -> false\nlet is_shift = function `Shl | `Shr -> true | _ -> false\n\n(* Constructors *)\n\nlet sink ?kernel_info srcs = mk (Sink { srcs; kernel_info })\nlet group srcs = match 
srcs with [x] -> x | _ -> mk (Group { srcs })\nlet after ~src ~deps =\n  let dtype = Option.value ~default:Dtype.void (dtype src) in\n  mk (After { src; deps; dtype })\nlet unique ~id = mk (Unique { id })\nlet lunique ~id = mk (Lunique { id })\nlet device d = mk (Device { device = d })\n\nlet buffer ~unique ~device ~size ~dtype =\n  check (size >= 0) \"Buffer size must be non-negative\";\n  check (is_unique_node unique) \"Buffer unique must be Unique/Lunique\";\n  check (is_device_node device) \"Buffer device must be Device\";\n  mk (Buffer { unique; device; size; dtype })\n\nlet buffer_view ~src ~size ~offset ~dtype =\n  check (size >= 0) \"Buffer_view size must be non-negative\";\n  check (offset >= 0) \"Buffer_view offset must be non-negative\";\n  check (match view src with Buffer _ | Index _ -> true | _ -> false)\n    \"Buffer_view src must be Buffer or Index\";\n  mk (Buffer_view { src; size; offset; dtype })\n\nlet const ?(srcs = []) value dtype =\n  (match Const.view value with\n   | Bool _ -> check (Dtype.is_bool dtype) \"Bool const must have bool dtype\"\n   | Int _ -> check (Dtype.is_int dtype) \"Int const must have int/index dtype\"\n   | Float _ -> check (Dtype.is_float dtype) \"Float const must have float dtype\");\n  mk (Const { value; dtype; srcs })\n\nlet vconst ~values ~dtype ?(srcs = []) () =\n  check (List.length values = Dtype.count dtype)\n    \"Vconst values must match vector width\";\n  let scalar_dt = Dtype.scalarize dtype in\n  List.iter (fun v ->\n    match Const.view v with\n    | Int _ -> check (Dtype.is_int scalar_dt) \"Vconst: expected int elements\"\n    | Float _ -> check (Dtype.is_float scalar_dt) \"Vconst: expected float elements\"\n    | Bool _ -> check (Dtype.is_bool scalar_dt) \"Vconst: expected bool elements\")\n    values;\n  mk (Vconst { values; dtype; srcs })\n\nlet define_var ~name ~lo ~hi ?(dtype = Dtype.index) () =\n  check (Dtype.is_int dtype) \"Define_var dtype must be int/index\";\n  check (lo <= hi) \"Define_var lo > 
hi\";\n  mk (Define_var { name; lo; hi; dtype })\n\nlet bind ~var ?value ~dtype () =\n  check (is_define_var_node var) \"Bind var must be Define_var\";\n  (match value with\n   | Some v ->\n     let vdt = Option.value ~default:Dtype.void (node_dtype_of v) in\n     check (Dtype.equal vdt dtype) \"Bind value dtype must match\"\n   | None -> ());\n  mk (Bind { var; value; dtype })\n\nlet param ~slot ~dtype ?shape ?device () =\n  (match shape with\n   | Some s -> check (is_index_vector_node s) \"Param shape must be index vector\"\n   | None -> ());\n  (match device with\n   | Some d -> check (is_device_node d) \"Param device must be Device\"\n   | None -> ());\n  mk (Param { slot; dtype; shape; device })\n\nlet call ~callee ~args ~info ~dtype =\n  (match callee with\n   | Ref r ->\n     let rdt = Option.value ~default:Dtype.void (node_dtype_of r) in\n     check (Dtype.equal rdt dtype) \"Call dtype must match Ref dtype\"\n   | Ast _ -> ());\n  mk (Call { callee; args; info; dtype })\n\nlet detach ~src =\n  let dtype = Option.value ~default:Dtype.void (dtype src) in\n  mk (Detach { src; dtype })\n\nlet contiguous ~src ?(ranges = []) ?(opts = []) () =\n  let dtype = Option.value ~default:Dtype.void (dtype src) in\n  List.iter (fun r ->\n    let rdt = Option.value ~default:Dtype.void (node_dtype_of r) in\n    check (is_index_dtype rdt && Dtype.count rdt = 1)\n      \"Contiguous range must be index scalar\")\n    ranges;\n  mk (Contiguous { src; ranges; opts; dtype })\n\nlet contiguous_backward ~src =\n  let dtype = Option.value ~default:Dtype.void (dtype src) in\n  mk (Contiguous_backward { src; dtype })\n\nlet copy ~src ~device () =\n  check (is_device_node device) \"Copy device must be Device\";\n  let dt = Option.value ~default:Dtype.void (dtype src) in\n  mk (Copy { src; device; dtype = dt })\n\nlet allreduce ~src ~device ~op ~dtype =\n  check (is_device_node device) \"Allreduce device must be Device\";\n  mk (Allreduce { src; device; op; dtype })\n\nlet multi ~src 
~axis =\n  let dtype = Option.value ~default:Dtype.void (dtype src) in\n  mk (Multi { src; axis; dtype })\n\nlet mstack ~srcs =\n  check (srcs <> []) \"Mstack must have srcs\";\n  let dtype = match srcs with\n    | s :: _ -> Option.value ~default:Dtype.void (dtype s)\n    | [] -> Dtype.void in\n  List.iter (fun s ->\n    let sdt = Option.value ~default:Dtype.void (node_dtype_of s) in\n    check (Dtype.equal sdt dtype) \"Mstack src dtypes must match\")\n    srcs;\n  mk (Mstack { srcs; dtype })\n\nlet mselect ~src ~index =\n  let dtype = Option.value ~default:Dtype.void (dtype src) in\n  mk (Mselect { src; index; dtype })\n\nlet reduce_axis ~src ~op ~axes =\n  check (axes <> []) \"Reduce_axis must have at least one axis\";\n  check (List.length (List.sort_uniq Int.compare axes) = List.length axes)\n    \"Reduce_axis axes must be unique\";\n  let dtype = Option.value ~default:Dtype.void (dtype src) in\n  mk (Reduce_axis { src; op; axes; dtype })\n\nlet reduce ~src ~ranges ~op ~dtype = mk (Reduce { src; ranges; op; dtype })\n\nlet reshape ~src ~shape =\n  (match extract_int_shape shape with\n   | Some dims ->\n     check (List.for_all (fun d -> d >= 0) dims)\n       \"Reshape dims must not be negative\"\n   | None -> ());\n  let dtype = Option.value ~default:Dtype.void (dtype src) in\n  mk (Reshape { src; shape; dtype })\n\nlet expand ~src ~shape =\n  let dtype = Option.value ~default:Dtype.void (dtype src) in\n  mk (Expand { src; shape; dtype })\n\nlet pad_shrink_width n =\n  match view n with\n  | Vectorize { srcs; _ } -> Some (List.length srcs)\n  | Const _ -> Some 1\n  | Vconst { values; _ } -> Some (List.length values)\n  | _ -> None\n\nlet pad ~src ~before ~after =\n  (match pad_shrink_width before, pad_shrink_width after with\n   | Some bw, Some aw ->\n     check (bw = aw) \"Pad before/after width mismatch\"\n   | _ -> ());\n  let dtype = Option.value ~default:Dtype.void (dtype src) in\n  mk (Pad { src; before; after; dtype })\n\nlet shrink ~src ~before ~after 
=\n  (match pad_shrink_width before, pad_shrink_width after with\n   | Some bw, Some aw ->\n     check (bw = aw) \"Shrink before/after width mismatch\"\n   | _ -> ());\n  let dtype = Option.value ~default:Dtype.void (dtype src) in\n  mk (Shrink { src; before; after; dtype })\n\nlet permute ~src ~order =\n  check (List.sort Int.compare order = List.init (List.length order) Fun.id)\n    \"Permute order must be valid permutation\";\n  let dtype = Option.value ~default:Dtype.void (dtype src) in\n  mk (Permute { src; order; dtype })\n\nlet flip ~src ~dims =\n  let dtype = Option.value ~default:Dtype.void (dtype src) in\n  mk (Flip { src; dims; dtype })\nlet range ~size ~axis ?(sub = []) ~kind ?(dtype = Dtype.index) () =\n  mk (Range { size; dtype; axis; sub; kind })\nlet end_ ~value ~ranges = mk (End { value; ranges })\n\nlet index ~ptr ~idxs ?gate ~dtype () = mk (Index { ptr; idxs; gate; dtype })\n\nlet store ~dst ~value = mk (Store { dst; value })\n\nlet vectorize ~srcs =\n  if srcs = [] then invalid_arg \"Vectorize: srcs must not be empty\";\n  let dtype = match srcs with\n    | s :: _ -> (match dtype s with Some d -> Dtype.vec (List.length srcs) (Dtype.scalarize d) | None -> Dtype.void)\n    | [] -> Dtype.void in\n  mk (Vectorize { srcs; dtype })\n\nlet cast ~src ~dtype =\n  let src_dt = Option.value ~default:Dtype.void (node_dtype_of src) in\n  check (Dtype.count src_dt = Dtype.count dtype)\n    \"Cast must preserve vector width\";\n  mk (Cast { src; dtype })\n\nlet bitcast ~src ~dtype = mk (Bitcast { src; dtype })\n\nlet unary ~op ~src =\n  let dtype = Option.value ~default:Dtype.void (dtype src) in\n  mk (Unary { op; src; dtype })\n\nlet binary ~op ~lhs ~rhs =\n  let lhs_dt = Option.value ~default:Dtype.void (dtype lhs) in\n  let rhs_dt = Option.value ~default:Dtype.void (dtype rhs) in\n  if is_comparison op then begin\n    check (Dtype.equal (Dtype.scalarize lhs_dt) (Dtype.scalarize rhs_dt))\n      \"Comparison operands don't match\";\n    let c = Dtype.count 
lhs_dt in\n    let dtype = if c > 1 then Dtype.vec c Dtype.bool else Dtype.bool in\n    mk (Binary { op; lhs; rhs; dtype })\n  end else if is_shift op then begin\n    check (Dtype.is_int lhs_dt) \"Shift lhs must be int/index\";\n    check (Dtype.equal (Dtype.scalarize rhs_dt) (Dtype.scalarize lhs_dt)\n           || Dtype.equal rhs_dt (Dtype.Val Dtype.Val.uint32))\n      \"Shift rhs dtype must match lhs or be uint\";\n    mk (Binary { op; lhs; rhs; dtype = lhs_dt })\n  end else begin\n    (match op with\n     | `Idiv | `Mod ->\n       check (Dtype.is_int lhs_dt) \"Idiv/Mod must be int/index\"\n     | _ -> ());\n    mk (Binary { op; lhs; rhs; dtype = lhs_dt })\n  end\n\nlet ternary ~op ~a ~b ~c =\n  (match op with\n   | `Where ->\n     let adt = Option.value ~default:Dtype.void (dtype a) in\n     check (Dtype.is_bool adt && Dtype.count adt = 1)\n       \"Where condition must be bool scalar\";\n     let bdt = Option.value ~default:Dtype.void (dtype b) in\n     let cdt = Option.value ~default:Dtype.void (dtype c) in\n     check (Dtype.equal bdt cdt) \"Where arms must match\"\n   | `Mulacc ->\n     let adt = Option.value ~default:Dtype.void (dtype a) in\n     let bdt = Option.value ~default:Dtype.void (dtype b) in\n     let cdt = Option.value ~default:Dtype.void (dtype c) in\n     check (Dtype.equal adt bdt && Dtype.equal adt cdt)\n       \"Mulacc operands must all match\");\n  let dtype = Option.value ~default:Dtype.void (dtype b) in\n  mk (Ternary { op; a; b; c; dtype })\n\nlet noop ?src ~dtype () = mk (Noop { src; dtype })\nlet bufferize ~src ~ranges ~dtype ~opts = mk (Bufferize { src; ranges; dtype; opts })\nlet invalid_index ~dtype = mk (Invalid_index { dtype })\nlet define_local ~size ~dtype = mk (Define_local { size; dtype })\nlet barrier = mk Barrier\nlet linear srcs = mk (Linear { srcs })\nlet shaped_wmma ~a ~b ~acc ~dims ~device ~threads ~dtype =\n  mk (Shaped_wmma { a; b; acc; dims; device; threads; dtype })\n\nlet assign ~target ~value ?(extras = []) () =\n  
let st = store ~dst:target ~value in\n  after ~src:target ~deps:(st :: extras)\n\n(* Replace *)\n\nlet replace n ?children:new_ch ?dtype:new_dt () =\n  let v = view n in\n  let v = match new_ch with\n    | None -> v\n    | Some ch ->\n        let i = ref 0 in\n        map_children (fun _ -> let c = List.nth ch !i in incr i; c) v\n  in\n  let v = match new_dt with\n    | None -> v\n    | Some dt ->\n        (match v with\n         | After r -> After { r with dtype = dt }\n         | Buffer r -> Buffer { r with dtype = dt }\n         | Buffer_view r -> Buffer_view { r with dtype = dt }\n         | Const r -> Const { r with dtype = dt }\n         | Vconst r -> Vconst { r with dtype = dt }\n         | Define_var r -> Define_var { r with dtype = dt }\n         | Bind r -> Bind { r with dtype = dt }\n         | Param r -> Param { r with dtype = dt }\n         | Call r -> Call { r with dtype = dt }\n         | Detach r -> Detach { r with dtype = dt }\n         | Contiguous r -> Contiguous { r with dtype = dt }\n         | Contiguous_backward r -> Contiguous_backward { r with dtype = dt }\n         | Copy r -> Copy { r with dtype = dt }\n         | Allreduce r -> Allreduce { r with dtype = dt }\n         | Multi r -> Multi { r with dtype = dt }\n         | Mstack r -> Mstack { r with dtype = dt }\n         | Mselect r -> Mselect { r with dtype = dt }\n         | Reduce_axis r -> Reduce_axis { r with dtype = dt }\n         | Reduce r -> Reduce { r with dtype = dt }\n         | Reshape r -> Reshape { r with dtype = dt }\n         | Expand r -> Expand { r with dtype = dt }\n         | Pad r -> Pad { r with dtype = dt }\n         | Shrink r -> Shrink { r with dtype = dt }\n         | Permute r -> Permute { r with dtype = dt }\n         | Flip r -> Flip { r with dtype = dt }\n         | Range r -> Range { r with dtype = dt }\n         | Index r -> Index { r with dtype = dt }\n         | Vectorize r -> Vectorize { r with dtype = dt }\n         | Cast r -> Cast { r with dtype = 
dt }\n         | Bitcast r -> Bitcast { r with dtype = dt }\n         | Unary r -> Unary { r with dtype = dt }\n         | Binary r -> Binary { r with dtype = dt }\n         | Ternary r -> Ternary { r with dtype = dt }\n         | Noop r -> Noop { r with dtype = dt }\n         | Bufferize r -> Bufferize { r with dtype = dt }\n         | Invalid_index _ -> Invalid_index { dtype = dt }\n         | Shaped_wmma r -> Shaped_wmma { r with dtype = dt }\n         | v -> v)\n  in\n  let result = mk v in\n  if result == n then n else result\n\n(* Traversal *)\n\nlet toposort ?(gate = fun _ -> true) ?(enter_calls = true) root =\n  let visited : (int, unit) Hashtbl.t = Hashtbl.create 256 in\n  let result = ref [] in\n  let stack : (t * bool) Stack.t = Stack.create () in\n  Stack.push (root, false) stack;\n  while not (Stack.is_empty stack) do\n    let node, processed = Stack.pop stack in\n    if Hashtbl.mem visited node.Hashcons.tag then ()\n    else if not processed then begin\n      if gate node then begin\n        Stack.push (node, true) stack;\n        let srcs = match view node with\n          | Call { callee = Ref c; args; _ } when not enter_calls ->\n              args @ [c]  (* skip callee body but include the ref *)\n          | Call { args; _ } when not enter_calls -> args\n          | _ -> children node\n        in\n        List.iter (fun s ->\n          if not (Hashtbl.mem visited s.Hashcons.tag) then\n            Stack.push (s, false) stack)\n          (List.rev srcs)\n      end\n    end else begin\n      Hashtbl.replace visited node.Hashcons.tag ();\n      result := node :: !result\n    end\n  done;\n  List.rev !result\n\nlet backward_slice root =\n  let nodes = toposort root in\n  List.filter (fun n -> n != root) nodes\n\nlet variables root =\n  List.filter (fun n ->\n    match view n with Define_var _ -> true | _ -> false)\n    (toposort root)\n\nlet ranges root =\n  List.filter (fun n ->\n    match view n with Range _ -> true | _ -> false)\n    (toposort 
root)\n\n(* Rewriting *)\n\nmodule Ref_tbl = Hashtbl.Make (struct\n  type nonrec t = t\n  let equal a b = a == b\n  let hash (n : t) = n.Hashcons.tag\nend)\n\nlet first_match rules n =\n  List.find_map (fun rule -> rule n) rules\n\n(* 3-stage stack-based graph rewrite. When a rewrite produces a new\n   node, that node is fully processed (children visited, rewrite\n   applied). Waitlists handle nodes whose dependencies aren't yet\n   resolved. *)\nlet graph_rewrite ?(name = \"\") ?(enter_calls = true)\n    ?(on_rebuild : old_n:t -> new_n:t -> unit = fun ~old_n:_ ~new_n:_ -> ())\n    rewrite root =\n  let replace : t Ref_tbl.t = Ref_tbl.create 256 in\n  let on_stack : unit Ref_tbl.t = Ref_tbl.create 256 in\n  let waitlist : (t * int * t) list Ref_tbl.t = Ref_tbl.create 16 in\n  let stack : (t * int * t) Stack.t = Stack.create () in\n  let lookup c =\n    match Ref_tbl.find_opt replace c with Some r -> r | None -> c\n  in\n  let set_replace n v =\n    Ref_tbl.replace replace n v;\n    match Ref_tbl.find_opt waitlist n with\n    | Some waiting ->\n        Ref_tbl.remove waitlist n;\n        List.iter (fun entry -> Stack.push entry stack) waiting\n    | None -> ()\n  in\n  Stack.push (root, 0, root) stack;\n  Ref_tbl.replace on_stack root ();\n  let counter = ref 0 in\n  while not (Stack.is_empty stack) do\n    let n, stage, new_n = Stack.pop stack in\n    if Ref_tbl.mem replace n then ()\n    else begin\n      incr counter;\n      if !counter > 250000 then\n        failwith (Printf.sprintf \"graph_rewrite(%s): %d nodes\" name !counter);\n      if stage = 0 then begin\n        (* Stage 0: push self at stage 1, then push children *)\n        Stack.push (n, 1, new_n) stack;\n        let srcs =\n          if not enter_calls then\n            match view new_n with\n            | Call { args; _ } -> args\n            | _ -> children new_n\n          else children new_n\n        in\n        List.iter (fun x ->\n          if not (Ref_tbl.mem on_stack x) then begin\n            
Stack.push (x, 0, x) stack;\n            Ref_tbl.replace on_stack x ()\n          end) (List.rev srcs)\n      end\n      else if stage = 1 then begin\n        (* Stage 1: check all children are ready *)\n        let all_ready = ref true in\n        let new_src =\n          List.map (fun x ->\n            match Ref_tbl.find_opt replace x with\n            | Some r -> r\n            | None -> all_ready := false; x)\n            (children new_n)\n        in\n        if not !all_ready then begin\n          let missing =\n            List.find (fun x -> not (Ref_tbl.mem replace x))\n              (children new_n)\n          in\n          let prev = match Ref_tbl.find_opt waitlist missing with\n            | Some l -> l | None -> [] in\n          Ref_tbl.replace waitlist missing ((n, 1, new_n) :: prev)\n        end\n        else begin\n          let old_src = children new_n in\n          let changed =\n            List.length old_src = List.length new_src\n            && not (List.for_all2 (==) old_src new_src)\n          in\n          if not changed then begin\n            match rewrite new_n with\n            | None -> set_replace n new_n\n            | Some rewritten when rewritten == new_n -> set_replace n new_n\n            | Some rewritten ->\n                Stack.push (n, 2, rewritten) stack;\n                Stack.push (rewritten, 0, rewritten) stack\n          end\n          else begin\n            let rebuilt = mk (map_children lookup (view new_n)) in\n            on_rebuild ~old_n:new_n ~new_n:rebuilt;\n            Stack.push (n, 2, rebuilt) stack;\n            Stack.push (rebuilt, 0, rebuilt) stack\n          end\n        end\n      end\n      else begin\n        (* Stage 2: link n → result of new_n *)\n        match Ref_tbl.find_opt replace new_n with\n        | Some result -> set_replace n result\n        | None ->\n            let prev = match Ref_tbl.find_opt waitlist new_n with\n              | Some l -> l | None -> [] in\n            Ref_tbl.replace 
waitlist new_n ((n, 2, new_n) :: prev)\n      end\n    end\n  done;\n  lookup root\n\nlet substitute mappings root =\n  let tbl : t Ref_tbl.t = Ref_tbl.create (List.length mappings) in\n  List.iter (fun (old_n, new_n) -> Ref_tbl.replace tbl old_n new_n) mappings;\n  graph_rewrite (fun n ->\n    Ref_tbl.find_opt tbl n)\n    root\n\n(* Analysis *)\n\nlet rec base n =\n  match view n with\n  | Reshape { src; _ } | Expand { src; _ } | Pad { src; _ }\n  | Shrink { src; _ } | Permute { src; _ } | Flip { src; _ }\n  | Multi { src; _ } | Detach { src; _ } -> base src\n  | _ -> n\n\nlet extract_marg v =\n  match v with\n  | Reshape { shape; _ } | Expand { shape; _ } -> extract_int_shape shape\n  | _ -> None\n\nlet extract_marg_pairs v =\n  match v with\n  | Pad { before; after; _ } | Shrink { before; after; _ } ->\n      (match extract_int_shape before, extract_int_shape after with\n       | Some bs, Some als when List.length bs = List.length als ->\n           Some (List.combine bs als)\n       | _ -> None)\n  | _ -> None\n\nlet compute_shapes root =\n  let tbl : (int, int list option) Hashtbl.t = Hashtbl.create 256 in\n  let nodes = toposort root in\n  let sh n = match Hashtbl.find_opt tbl n.Hashcons.tag with\n    | Some s -> s | None -> None in\n  List.iter (fun n ->\n    let shape = match view n with\n      | Sink _ | Group _ | Unique _ | Lunique _ | Device _ | Range _\n      | Store _ | End _ | Barrier | Define_local _ | Linear _ -> None\n      | Const _ | Vconst _ | Define_var _ | Bind _ | Invalid_index _ ->\n          Some []\n      | Buffer { size; _ } | Buffer_view { size; _ } -> Some [ size ]\n      | Param { shape; _ } ->\n          Option.bind shape extract_int_shape\n      | Reshape { shape; _ } | Expand { shape; _ } ->\n          extract_int_shape shape\n      | Pad { src; before; after; _ } ->\n          (match sh src, extract_int_shape before, extract_int_shape after with\n           | Some s, Some b, Some a ->\n               Some (List.map2 (fun si (bi, 
ai) -> si + bi + ai)\n                 s (List.combine b a))\n           | _ -> None)\n      | Shrink { src; before; after; _ } ->\n          (match sh src, extract_int_shape before, extract_int_shape after with\n           | Some s, Some b, Some a ->\n               Some (List.map2 (fun si (bi, ai) -> si - bi - ai)\n                 s (List.combine b a))\n           | _ -> None)\n      | Permute { src; order; _ } ->\n          Option.map (fun s ->\n            List.map (fun i -> List.nth s i) order) (sh src)\n      | Flip { src; _ } -> sh src\n      | Vectorize { srcs; _ } ->\n          (match srcs with\n           | s :: _ ->\n               Option.map (fun dims -> List.length srcs :: dims) (sh s)\n           | [] -> Some [0])\n      | Reduce_axis { src; axes; _ } ->\n          Option.map (fun s ->\n            List.mapi (fun i d -> if List.mem i axes then 1 else d) s) (sh src)\n      | Multi { src; _ } | Mselect { src; _ }\n      | Detach { src; _ } | Contiguous { src; _ }\n      | Contiguous_backward { src; _ } | Copy { src; _ }\n      | Cast { src; _ } | Bitcast { src; _ }\n      | Unary { src; _ } | Noop { src = Some src; _ } -> sh src\n      | Mstack { srcs; _ } ->\n          (match srcs with s :: _ -> sh s | [] -> None)\n      | Binary { lhs; _ } -> sh lhs\n      | Ternary { b; _ } -> sh b\n      | Call { callee = Ast _; args; _ } ->\n          (match args with a :: _ -> sh a | [] -> None)\n      | _ -> None\n    in\n    Hashtbl.replace tbl n.Hashcons.tag shape)\n    nodes;\n  fun n -> match Hashtbl.find_opt tbl n.Hashcons.tag with\n    | Some s -> s | None -> None\n\nlet compute_devices root =\n  let tbl : (int, device option) Hashtbl.t = Hashtbl.create 256 in\n  let nodes = toposort root in\n  let dev n = match Hashtbl.find_opt tbl n.Hashcons.tag with\n    | Some d -> d | None -> None in\n  List.iter (fun n ->\n    let d = match view n with\n      | Device { device = d } -> Some d\n      | Buffer { device = d; _ } -> dev d\n      | Copy { device = d; _ } 
-> dev d\n      | After { src; _ } | Detach { src; _ }\n      | Contiguous { src; _ } | Contiguous_backward { src; _ }\n      | Cast { src; _ } | Bitcast { src; _ }\n      | Unary { src; _ } | Reshape { src; _ }\n      | Expand { src; _ } | Pad { src; _ }\n      | Shrink { src; _ } | Permute { src; _ }\n      | Flip { src; _ } | Reduce_axis { src; _ }\n      | Multi { src; _ } | Mselect { src; _ }\n      | Noop { src = Some src; _ } -> dev src\n      | Binary { lhs; _ } -> dev lhs\n      | Ternary { b; _ } -> dev b\n      | Mstack { srcs; _ } ->\n          (match srcs with s :: _ -> dev s | [] -> None)\n      | Param { device = d; _ } ->\n          Option.bind d dev\n      | Call { callee = Ast _; args; _ } ->\n          (match args with a :: _ -> dev a | [] -> None)\n      | Allreduce { device = d; _ } -> dev d\n      | _ -> None\n    in\n    Hashtbl.replace tbl n.Hashcons.tag d)\n    nodes;\n  fun n -> match Hashtbl.find_opt tbl n.Hashcons.tag with\n    | Some d -> d | None -> None\n\nlet consumer_map root =\n  let tbl : (int, t list) Hashtbl.t = Hashtbl.create 256 in\n  let nodes = toposort root in\n  List.iter (fun n ->\n    List.iter (fun c ->\n      let prev = match Hashtbl.find_opt tbl c.Hashcons.tag with\n        | Some l -> l | None -> [] in\n      Hashtbl.replace tbl c.Hashcons.tag (n :: prev))\n      (children n))\n    nodes;\n  fun n -> match Hashtbl.find_opt tbl n.Hashcons.tag with\n    | Some l -> l | None -> []\n\n(* Formatting *)\n\nlet pp_view fmt v =\n  let pp_node fmt (n : t) = Format.fprintf fmt \"%%%d\" n.Hashcons.tag in\n  let pp_nodes fmt ns =\n    Format.pp_print_list ~pp_sep:(fun fmt () -> Format.fprintf fmt \", \")\n      pp_node fmt ns\n  in\n  match v with\n  | Sink { srcs; _ } -> Format.fprintf fmt \"sink [%a]\" pp_nodes srcs\n  | Group { srcs } -> Format.fprintf fmt \"group [%a]\" pp_nodes srcs\n  | After { src; deps; _ } ->\n      Format.fprintf fmt \"after %a, deps=[%a]\" pp_node src pp_nodes deps\n  | Unique { id } -> Format.fprintf 
fmt \"unique %d\" id\n  | Lunique { id } -> Format.fprintf fmt \"lunique %d\" id\n  | Device { device = Single d } -> Format.fprintf fmt \"device %s\" d\n  | Device { device = Multi ds } ->\n      Format.fprintf fmt \"device [%s]\" (String.concat \", \" ds)\n  | Buffer { unique; device; size; dtype } ->\n      Format.fprintf fmt \"buffer unique=%a device=%a size=%d : %a\"\n        pp_node unique pp_node device size Dtype.pp dtype\n  | Buffer_view { src; size; offset; dtype } ->\n      Format.fprintf fmt \"buffer_view %a size=%d offset=%d : %a\"\n        pp_node src size offset Dtype.pp dtype\n  | Const { dtype; _ } -> Format.fprintf fmt \"const : %a\" Dtype.pp dtype\n  | Vconst { dtype; _ } -> Format.fprintf fmt \"vconst : %a\" Dtype.pp dtype\n  | Define_var { name; lo; hi; dtype } ->\n      Format.fprintf fmt \"define_var %s [%d, %d] : %a\" name lo hi Dtype.pp dtype\n  | Bind { var; value; _ } ->\n      Format.fprintf fmt \"bind %a = %a\" pp_node var\n        (Format.pp_print_option pp_node) value\n  | Param { slot; dtype; _ } ->\n      Format.fprintf fmt \"param %d : %a\" slot Dtype.pp dtype\n  | Call { args; dtype; _ } ->\n      Format.fprintf fmt \"call [%a] : %a\" pp_nodes args Dtype.pp dtype\n  | Linear { srcs } -> Format.fprintf fmt \"linear [%a]\" pp_nodes srcs\n  | Shaped_wmma { a; b; acc; dims = (m, n, k); device; threads; dtype } ->\n      Format.fprintf fmt \"shaped_wmma %a, %a, %a dims=(%d,%d,%d) dev=%s thr=%d : %a\"\n        pp_node a pp_node b pp_node acc m n k device threads Dtype.pp dtype\n  | Store { dst; value } ->\n      Format.fprintf fmt \"store %a <- %a\" pp_node dst pp_node value\n  | End { value; ranges } ->\n      Format.fprintf fmt \"end %a ranges=[%a]\" pp_node value pp_nodes ranges\n  | Barrier -> Format.fprintf fmt \"barrier\"\n  | _ -> Format.fprintf fmt \"<%s>\" (match v with\n      | Detach _ -> \"detach\" | Contiguous _ -> \"contiguous\"\n      | Contiguous_backward _ -> \"contiguous_backward\"\n      | Copy _ -> \"copy\" | 
Allreduce _ -> \"allreduce\"\n      | Multi _ -> \"multi\" | Mstack _ -> \"mstack\" | Mselect _ -> \"mselect\"\n      | Reduce_axis _ -> \"reduce_axis\" | Reduce _ -> \"reduce\"\n      | Reshape _ -> \"reshape\" | Expand _ -> \"expand\"\n      | Pad _ -> \"pad\" | Shrink _ -> \"shrink\"\n      | Permute _ -> \"permute\" | Flip _ -> \"flip\"\n      | Range _ -> \"range\" | Index _ -> \"index\"\n      | Vectorize _ -> \"vectorize\" | Cast _ -> \"cast\" | Bitcast _ -> \"bitcast\"\n      | Unary _ -> \"unary\" | Binary _ -> \"binary\" | Ternary _ -> \"ternary\"\n      | Noop _ -> \"noop\" | Bufferize _ -> \"bufferize\"\n      | Invalid_index _ -> \"invalid_index\" | Define_local _ -> \"define_local\"\n      | _ -> \"unknown\")\n\nlet pp fmt root =\n  let nodes = toposort root in\n  List.iter (fun n ->\n    Format.fprintf fmt \"%3d: %a@\\n\" n.Hashcons.tag pp_view (view n))\n    nodes\n"
  },
  {
    "path": "packages/tolk/lib/ir/tensor.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\n(** High-level tensor graph IR.\n\n    Nodes are hash-consed: structurally identical nodes are physically\n    identical ([==]), enabling efficient deduplication during graph\n    rewriting.\n\n    Build nodes with the smart constructors ({!sink}, {!after},\n    {!const}, …); inspect with {!view}, {!dtype}, and {!children};\n    rewrite with {!graph_rewrite} and {!substitute}. *)\n\n(** {1:types Types} *)\n\ntype t\n(** A tensor graph node. Hash-consed: structurally identical nodes\n    are physically identical. *)\n\ntype device =\n  | Single of string  (** A single named device. *)\n  | Multi of string list  (** Multiple devices for sharded tensors. *)\n(** Device placement selector. *)\n\ntype metadata = {\n  name : string;  (** Operation name. *)\n  caller : string;  (** Call-site identifier. *)\n  backward : bool;  (** [true] if emitted during backward pass. *)\n}\n(** Call-site metadata. 
*)\n\n(** {2:view Node views} *)\n\ntype view =\n  | Sink of { srcs : t list; kernel_info : Kernel.kernel_info option }\n  | Group of { srcs : t list }\n  | After of { src : t; deps : t list; dtype : Dtype.t }\n  | Unique of { id : int }\n  | Lunique of { id : int }\n  | Device of { device : device }\n  | Buffer of { unique : t; device : t; size : int; dtype : Dtype.t }\n  | Buffer_view of { src : t; size : int; offset : int; dtype : Dtype.t }\n  | Const of { value : Const.t; dtype : Dtype.t; srcs : t list }\n  | Vconst of { values : Const.t list; dtype : Dtype.t; srcs : t list }\n  | Define_var of { name : string; lo : int; hi : int; dtype : Dtype.t }\n  | Bind of { var : t; value : t option; dtype : Dtype.t }\n  | Param of {\n      slot : int;\n      dtype : Dtype.t;\n      shape : t option;\n      device : t option;\n    }\n  | Call of {\n      callee : callee;\n      args : t list;\n      info : call_info;\n      dtype : Dtype.t;\n    }\n  | Detach of { src : t; dtype : Dtype.t }\n  | Contiguous of { src : t; ranges : t list; opts : Kernel.Opt.t list; dtype : Dtype.t }\n  | Contiguous_backward of { src : t; dtype : Dtype.t }\n  | Copy of { src : t; device : t; dtype : Dtype.t }\n  | Allreduce of { src : t; device : t; op : Op.reduce; dtype : Dtype.t }\n  | Multi of { src : t; axis : int; dtype : Dtype.t }\n  | Mstack of { srcs : t list; dtype : Dtype.t }\n  | Mselect of { src : t; index : int; dtype : Dtype.t }\n  | Reduce_axis of {\n      src : t;\n      op : Op.reduce;\n      axes : int list;\n      dtype : Dtype.t;\n    }\n  | Reduce of { src : t; ranges : t list; op : Op.reduce; dtype : Dtype.t }\n  | Reshape of { src : t; shape : t; dtype : Dtype.t }\n  | Expand of { src : t; shape : t; dtype : Dtype.t }\n  | Pad of { src : t; before : t; after : t; dtype : Dtype.t }\n  | Shrink of { src : t; before : t; after : t; dtype : Dtype.t }\n  | Permute of { src : t; order : int list; dtype : Dtype.t }\n  | Flip of { src : t; dims : bool list; dtype : Dtype.t }\n  
| Range of {\n      size : t;\n      dtype : Dtype.t;\n      axis : int;\n      sub : int list;\n      kind : Axis_kind.t;\n    }\n  | End of { value : t; ranges : t list }\n  | Index of { ptr : t; idxs : t list; gate : t option; dtype : Dtype.t }\n  | Store of { dst : t; value : t }\n  | Vectorize of { srcs : t list; dtype : Dtype.t }\n  | Cast of { src : t; dtype : Dtype.t }\n  | Bitcast of { src : t; dtype : Dtype.t }\n  | Unary of { op : Op.unary; src : t; dtype : Dtype.t }\n  | Binary of { op : Op.binary; lhs : t; rhs : t; dtype : Dtype.t }\n  | Ternary of { op : Op.ternary; a : t; b : t; c : t; dtype : Dtype.t }\n  | Noop of { src : t option; dtype : Dtype.t }\n  | Bufferize of {\n      src : t;\n      ranges : t list;\n      dtype : Dtype.t;\n      opts : Kernel.bufferize_opts;\n    }\n  | Invalid_index of { dtype : Dtype.t }\n  | Define_local of { size : int; dtype : Dtype.Ptr.t }\n  | Barrier\n  | Linear of { srcs : t list }\n  | Shaped_wmma of {\n      a : t; b : t; acc : t;\n      dims : int * int * int;\n      device : string;\n      threads : int;\n      dtype : Dtype.t;\n    }\n(** Node views. Each variant describes one tensor operation with\n    direct references to child nodes. *)\n\nand callee =\n  | Ref of t  (** Reference to an in-graph callable. *)\n  | Ast of Kernel.t  (** Inline kernel AST. *)\n(** Call target. *)\n\nand call_info = {\n  grad_fxn : grad_fxn option;\n  metadata : metadata list;\n  name : string option;\n  precompile : bool;\n}\n(** Call annotations. *)\n\nand grad_fxn = grad_output:t -> call:t -> t option list\n(** Custom gradient callback. *)\n\n(** {1:constructors Constructors} *)\n\nval sink : ?kernel_info:Kernel.kernel_info -> t list -> t\n(** [sink ?kernel_info srcs] is a graph root gathering [srcs]. *)\n\nval group : t list -> t\n(** [group srcs] groups effect children. Returns [src] directly\n    when [srcs] is a singleton. *)\n\nval after : src:t -> deps:t list -> t\n(** [after ~src ~deps] sequences [src] after [deps]. 
*)\n\nval unique : id:int -> t\n(** [unique ~id] is a unique buffer identity tag. *)\n\nval lunique : id:int -> t\n(** [lunique ~id] is a lazy unique buffer identity tag. *)\n\nval device : device -> t\n(** [device d] is a device placement node. *)\n\nval buffer : unique:t -> device:t -> size:int -> dtype:Dtype.t -> t\n(** [buffer ~unique ~device ~size ~dtype] is a buffer allocation. *)\n\nval buffer_view : src:t -> size:int -> offset:int -> dtype:Dtype.t -> t\n(** [buffer_view ~src ~size ~offset ~dtype] is a view into [src]. *)\n\nval const : ?srcs:t list -> Const.t -> Dtype.t -> t\n(** [const ?srcs c dt] is a constant [c] of type [dt]. [srcs] are\n    scheduling dependencies (default [[]]). *)\n\nval vconst :\n  values:Const.t list -> dtype:Dtype.t -> ?srcs:t list -> unit -> t\n(** [vconst ~values ~dtype ()] is a vector of constants. *)\n\nval define_var :\n  name:string -> lo:int -> hi:int -> ?dtype:Dtype.t -> unit -> t\n(** [define_var ~name ~lo ~hi ()] is a symbolic variable bounded by\n    \\[[lo];[hi]\\]. [dtype] defaults to {!Dtype.index}. *)\n\nval bind : var:t -> ?value:t -> dtype:Dtype.t -> unit -> t\n(** [bind ~var ?value ~dtype ()] binds [var] to [value]. *)\n\nval param :\n  slot:int -> dtype:Dtype.t -> ?shape:t -> ?device:t -> unit -> t\n(** [param ~slot ~dtype ()] is a function parameter at [slot]. *)\n\nval call :\n  callee:callee -> args:t list -> info:call_info -> dtype:Dtype.t -> t\n(** [call ~callee ~args ~info ~dtype] calls [callee] with [args]. *)\n\nval assign : target:t -> value:t -> ?extras:t list -> unit -> t\n(** [assign ~target ~value ()] stores [value] into [target] and\n    returns an {!After} sequencing [target] after the store. *)\n\nval detach : src:t -> t\n(** [detach ~src] detaches [src] from the gradient tape. *)\n\nval contiguous : src:t -> ?ranges:t list -> ?opts:Kernel.Opt.t list -> unit -> t\n(** [contiguous ~src ()] forces [src] into contiguous layout. 
*)\n\nval contiguous_backward : src:t -> t\n(** [contiguous_backward ~src] is a backward-pass contiguous marker. *)\n\nval copy : src:t -> device:t -> unit -> t\n(** [copy ~src ~device ()] copies [src] to [device]. *)\n\nval allreduce : src:t -> device:t -> op:Op.reduce -> dtype:Dtype.t -> t\n(** [allreduce ~src ~device ~op ~dtype] all-reduces [src]. *)\n\nval multi : src:t -> axis:int -> t\n(** [multi ~src ~axis] distributes [src] along [axis]. *)\n\nval mstack : srcs:t list -> t\n(** [mstack ~srcs] stacks per-device shards. *)\n\nval mselect : src:t -> index:int -> t\n(** [mselect ~src ~index] selects shard [index]. *)\n\nval reduce_axis : src:t -> op:Op.reduce -> axes:int list -> t\n(** [reduce_axis ~src ~op ~axes] reduces [src] along [axes]. *)\n\nval reduce : src:t -> ranges:t list -> op:Op.reduce -> dtype:Dtype.t -> t\n(** [reduce ~src ~ranges ~op ~dtype] reduces [src] over [ranges]. *)\n\nval reshape : src:t -> shape:t -> t\n(** [reshape ~src ~shape] reshapes [src] to [shape]. *)\n\nval expand : src:t -> shape:t -> t\n(** [expand ~src ~shape] broadcasts [src] to [shape]. *)\n\nval pad : src:t -> before:t -> after:t -> t\n(** [pad ~src ~before ~after] pads [src] with zeros. *)\n\nval shrink : src:t -> before:t -> after:t -> t\n(** [shrink ~src ~before ~after] trims edges of [src]. *)\n\nval permute : src:t -> order:int list -> t\n(** [permute ~src ~order] permutes axes of [src]. *)\n\nval flip : src:t -> dims:bool list -> t\n(** [flip ~src ~dims] reverses [src] along flagged dimensions. *)\n\nval range :\n  size:t -> axis:int -> ?sub:int list -> kind:Axis_kind.t ->\n  ?dtype:Dtype.t -> unit -> t\n(** [range ~size ~axis ~kind ()] is a loop variable over\n    \[[0];[size-1]\]. [dtype] defaults to {!Dtype.index}. *)\n\nval end_ : value:t -> ranges:t list -> t\n(** [end_ ~value ~ranges] closes loop [ranges] around [value]. 
*)\n\nval index :\n  ptr:t -> idxs:t list -> ?gate:t -> dtype:Dtype.t -> unit -> t\n(** [index ~ptr ~idxs ?gate ~dtype ()] indexes into [ptr]. *)\n\nval store : dst:t -> value:t -> t\n(** [store ~dst ~value] stores [value] through [dst]. *)\n\nval vectorize : srcs:t list -> t\n(** [vectorize ~srcs] packs scalar [srcs] into a vector. *)\n\nval cast : src:t -> dtype:Dtype.t -> t\n(** [cast ~src ~dtype] casts [src] to [dtype]. *)\n\nval bitcast : src:t -> dtype:Dtype.t -> t\n(** [bitcast ~src ~dtype] bitcasts [src] to [dtype]. *)\n\nval unary : op:Op.unary -> src:t -> t\n(** [unary ~op ~src] applies unary [op]. *)\n\nval binary : op:Op.binary -> lhs:t -> rhs:t -> t\n(** [binary ~op ~lhs ~rhs] applies binary [op]. *)\n\nval ternary : op:Op.ternary -> a:t -> b:t -> c:t -> t\n(** [ternary ~op ~a ~b ~c] applies ternary [op]. *)\n\nval noop : ?src:t -> dtype:Dtype.t -> unit -> t\n(** [noop ?src ~dtype ()] is a pass-through scheduling marker. *)\n\nval bufferize :\n  src:t -> ranges:t list -> dtype:Dtype.t ->\n  opts:Kernel.bufferize_opts -> t\n(** [bufferize ~src ~ranges ~dtype ~opts] materializes [src]. *)\n\nval invalid_index : dtype:Dtype.t -> t\n(** [invalid_index ~dtype] is an invalid index sentinel. *)\n\nval define_local : size:int -> dtype:Dtype.Ptr.t -> t\n(** [define_local ~size ~dtype] defines a local-memory buffer. *)\n\nval barrier : t\n(** [barrier] is a workgroup barrier. *)\n\nval linear : t list -> t\n(** [linear srcs] is a linearized schedule of [srcs]. *)\n\nval shaped_wmma :\n  a:t -> b:t -> acc:t ->\n  dims:(int * int * int) -> device:string -> threads:int ->\n  dtype:Dtype.t -> t\n(** [shaped_wmma ~a ~b ~acc ~dims ~device ~threads ~dtype] is a shaped\n    tensor-core WMMA operation. Lowered to kernel-level {!Kernel.view.Wmma}\n    during scheduling. *)\n\nval replace : t -> ?children:t list -> ?dtype:Dtype.t -> unit -> t\n(** [replace n ?children ?dtype ()] rebuilds [n] with substituted\n    children and/or dtype. 
Unchanged fields are preserved.\n\n    [children] must have the same length as [children n]. *)\n\n(** {1:inspection Inspection} *)\n\nval view : t -> view\n(** [view n] is the operation [n] represents. *)\n\nval children : t -> t list\n(** [children n] are the direct input nodes of [n]. *)\n\nval dtype : t -> Dtype.t option\n(** [dtype n] is [n]'s dtype, if any. *)\n\nval tag : t -> int\n(** [tag n] is [n]'s unique identity. Two nodes are physically\n    identical iff their tags are equal. *)\n\n(** {1:traversal Traversal} *)\n\nval toposort :\n  ?gate:(t -> bool) -> ?enter_calls:bool -> t -> t list\n(** [toposort ?gate ?enter_calls root] is all transitive\n    dependencies of [root] in topological order (leaves first).\n\n    [gate] controls descent: when it returns [false] for a node,\n    that node's children are not visited. Defaults to [fun _ -> true].\n\n    [enter_calls] controls whether CALL bodies (the callee) are\n    entered. Defaults to [true]. When [false], [callee] is treated\n    as opaque. *)\n\nval backward_slice : t -> t list\n(** [backward_slice root] is {!toposort} [root] without [root]\n    itself. *)\n\nval variables : t -> t list\n(** [variables root] is all {!Define_var} nodes reachable from\n    [root], in topological order. *)\n\nval ranges : t -> t list\n(** [ranges root] is all {!Range} nodes reachable from [root],\n    in topological order. *)\n\n(** {1:rewriting Rewriting} *)\n\nval children_of : view -> t list\n(** [children_of v] are the direct child nodes of [v]. *)\n\nval map_children : (t -> t) -> view -> view\n(** [map_children f v] rebuilds the children of [v] with [f]. *)\n\nval node_dtype : view -> Dtype.t option\n(** [node_dtype v] is the dtype of [v], if any. 
*)\n\nval graph_rewrite :\n  ?name:string -> ?enter_calls:bool ->\n  ?on_rebuild:(old_n:t -> new_n:t -> unit) ->\n  (t -> t option) -> t -> t\n(** [graph_rewrite ?name ?enter_calls f root] rewrites [root]'s DAG.\n\n    Processes nodes bottom-up using a 3-stage stack:\n    {ul\n    {- Stage 0: push children for processing.}\n    {- Stage 1: rebuild with rewritten children, apply [f].\n       When [f] returns [Some n'], [n'] replaces the node and\n       is re-processed.}\n    {- Stage 2: link the original node to its final replacement.}}\n\n    Nodes that depend on not-yet-ready replacements are added to\n    a waitlist and resumed when the dependency resolves.\n\n    [enter_calls] controls whether CALL bodies are entered.\n    Defaults to [true]. *)\n\nval substitute : (t * t) list -> t -> t\n(** [substitute mappings root] replaces nodes by physical identity\n    ([==]). Each [(old, new_)] pair causes [old] to be replaced\n    with [new_] throughout the DAG. *)\n\nval first_match : (t -> t option) list -> t -> t option\n(** [first_match rules n] tries each rule in order, returning\n    the first [Some]. Returns [None] if no rule matches. *)\n\n(** {1:analysis Analysis} *)\n\nval base : t -> t\n(** [base n] follows through movement ops (Reshape, Expand, Pad,\n    Shrink, Permute, Flip, Multi, Detach) to the underlying buffer\n    node. *)\n\nval extract_int_shape : t -> int list option\n(** [extract_int_shape n] decodes a concrete int list from a\n    shape-encoding node (Vectorize of Consts, single Const, or\n    empty Vconst). Returns [None] if any dimension is symbolic. *)\n\nval extract_marg : view -> int list option\n(** [extract_marg v] extracts the shape argument from a Reshape\n    or Expand view. Returns [None] for other ops or symbolic\n    shapes. *)\n\nval extract_marg_pairs : view -> (int * int) list option\n(** [extract_marg_pairs v] extracts (before, after) pairs from\n    a Pad or Shrink view. Returns [None] for other ops or\n    symbolic values. 
*)\n\nval compute_shapes : t -> (t -> int list option)\n(** [compute_shapes root] computes the shape of every node\n    reachable from [root]. Returns a lookup function. *)\n\nval compute_devices : t -> (t -> device option)\n(** [compute_devices root] computes the device of every node\n    reachable from [root]. Returns a lookup function. *)\n\nval consumer_map : t -> (t -> t list)\n(** [consumer_map root] builds a consumer map: for each node\n    reachable from [root], the list of nodes that reference it\n    as a child. Returns a lookup function. *)\n\n(** {1:formatting Formatting} *)\n\nval pp_view : Format.formatter -> view -> unit\n(** [pp_view] formats one tensor node view. *)\n\nval pp : Format.formatter -> t -> unit\n(** [pp] formats the DAG rooted at a node. *)\n\n"
  },
  {
    "path": "packages/tolk/lib/ir/tolk_ir.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\nmodule Dtype = Dtype\nmodule Const = Const\nmodule Shape = Shape\nmodule Axis_kind = Axis_kind\nmodule Special_dim = Special_dim\nmodule Op = Op\nmodule Kernel = Kernel\nmodule Divandmod = Divandmod\nmodule Decompositions = Decomposition\nmodule Symbolic = Symbolic\nmodule Tensor = Tensor\nmodule Program = Program\n"
  },
  {
    "path": "packages/tolk/lib/ir/tolk_ir.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\n(** Intermediate representations for tensor computation.\n\n    [Tolk_ir] provides three IR stages that lower a tensor program from\n    high-level operations down to render-ready linear code:\n    - {!Tensor} — value-graph of high-level tensor operations.\n    - {!Kernel} — codegen-oriented DAG of indexed buffer accesses and loops.\n    - {!Program} — linear SSA instruction sequence for backend emission.\n\n    Supporting modules define the shared type vocabulary:\n    {!modules:\n    Dtype\n    Const\n    Shape\n    Op\n    Axis_kind\n    Special_dim\n    Symbolic} *)\n\nmodule Dtype = Dtype\n(** Scalar, vector, and pointer data types. *)\n\nmodule Const = Const\n(** Typed compile-time constants. *)\n\nmodule Shape = Shape\n(** Tensor shapes with static and symbolic dimensions. *)\n\nmodule Axis_kind = Axis_kind\n(** Kernel axis kinds (thread, local, reduce, etc.). *)\n\nmodule Special_dim = Special_dim\n(** Backend-provided hardware execution indices. *)\n\nmodule Op = Op\n(** Arithmetic and logical operations grouped by arity. *)\n\nmodule Kernel = Kernel\n(** Codegen-oriented DAG IR (memory-level graph stage). *)\n\nmodule Divandmod = Divandmod\n(** Division and modulo folding for index-typed expressions. *)\n\nmodule Decompositions = Decomposition\n(** Hardware-level decompositions for unsupported operations. *)\n\nmodule Symbolic = Symbolic\n(** Symbolic simplification rules for {!Kernel} IR. *)\n\nmodule Tensor = Tensor\n(** High-level tensor graph IR (value-graph stage). *)\n\nmodule Program = Program\n(** Render-ready linear SSA IR (backend emission stage). *)\n"
  },
  {
    "path": "packages/tolk/lib/program_spec.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\nopen Tolk_ir\n\ntype var = { name : string; lo : int; hi : int; dtype : Dtype.t }\ntype core_id = { var_index : int; lo : int; hi : int }\n\nlet thread_count core_id = core_id.hi - core_id.lo + 1\n\ntype launch_kind = Serial | Thread_groups | Threads\n\nmodule K = Kernel\n\nmodule Estimates = struct\n  type estimate = Int of int | Symbolic of Kernel.t\n  type t = { ops : estimate; lds : estimate; mem : estimate }\n\n  let zero = { ops = Int 0; lds = Int 0; mem = Int 0 }\n\n  let add_estimate a b =\n    match (a, b) with\n    | Int a, Int b -> Int (a + b)\n    | Symbolic s, Int 0 | Int 0, Symbolic s -> Symbolic s\n    | Symbolic a, Symbolic b when a == b -> Symbolic a\n    | Symbolic a, Int b ->\n        Symbolic (Kernel.binary ~op:`Add ~lhs:a ~rhs:(Kernel.const_int b))\n    | Int a, Symbolic b ->\n        Symbolic (Kernel.binary ~op:`Add ~lhs:(Kernel.const_int a) ~rhs:b)\n    | Symbolic a, Symbolic b ->\n        Symbolic (Kernel.binary ~op:`Add ~lhs:a ~rhs:b)\n\n  let ( + ) a b =\n    {\n      ops = add_estimate a.ops b.ops;\n      lds = add_estimate a.lds b.lds;\n      mem = add_estimate a.mem b.mem;\n    }\n\n  let of_kernel (estimates : Kernel.estimates) =\n    let of_estimate = function\n      | Kernel.Int n -> Int n\n      | Kernel.Symbolic s -> Symbolic s\n    in\n    {\n      ops = of_estimate estimates.ops;\n      lds = of_estimate estimates.lds;\n      mem = of_estimate estimates.mem;\n    }\n\n  let of_program (program : Program.t) =\n    let ( + ) = Stdlib.( + ) in\n    let ( * ) = Stdlib.( * ) in\n    let module P = Program in\n    let flops = ref 0 in\n    let lds = ref 0 in\n    let mem : (int * bool, int) 
Hashtbl.t = Hashtbl.create 16 in\n    let mults = ref 1 in\n    let mult_stack = Stack.create () in\n    let const_of_id id =\n      match P.view program id with\n      | Const { value; _ } ->\n          (match Const.view value with Int n -> Int64.to_int n | _ -> 1)\n      | _ -> 1\n    in\n    let scalar_itemsize (dtype : Dtype.t) =\n      Dtype.itemsize (Dtype.scalarize dtype)\n    in\n    let rec find_param id =\n      match P.view program id with\n      | Param { idx; dtype; _ } -> Some (idx, dtype)\n      | Index { ptr; _ } -> find_param ptr\n      | After { src; _ } -> find_param src\n      | _ -> None\n    in\n    let track_mem key (ptr : Dtype.Ptr.t) itemsize =\n      let prev = Option.value ~default:0 (Hashtbl.find_opt mem key) in\n      let accessed = prev + itemsize * !mults in\n      Hashtbl.replace mem key\n        (if Dtype.Ptr.size ptr > 0 then\n           min accessed (Dtype.Ptr.size ptr * Dtype.Val.itemsize (Dtype.Ptr.base ptr))\n         else accessed)\n    in\n    let is_reg_access id =\n      match P.view program id with\n      | Index { dtype = ptr; _ } -> Dtype.Ptr.addrspace ptr = Reg\n      | _ -> false\n    in\n    let store_itemsize value =\n      match P.dtype program value with\n      | Some dt -> scalar_itemsize (Dtype.Val dt)\n      | None -> 1\n    in\n    (* Exclude load/store indexing and if-conditions from FLOP counting. 
*)\n    let dont_count : (P.id, unit) Hashtbl.t = Hashtbl.create 64 in\n    let rec collect_deps_range_gated id =\n      if not (Hashtbl.mem dont_count id) then begin\n        Hashtbl.replace dont_count id ();\n        match P.view program id with\n        | Range _ -> ()\n        | _ -> List.iter collect_deps_range_gated (P.children program id)\n      end\n    in\n    let rec collect_deps_all id =\n      if not (Hashtbl.mem dont_count id) then begin\n        Hashtbl.replace dont_count id ();\n        List.iter collect_deps_all (P.children program id)\n      end\n    in\n    P.iteri\n      (fun _id v ->\n        match v with\n        | Load { src; _ } | Store { dst = src; _ } ->\n            (match P.view program src with\n             | Index { idxs; gate; _ } ->\n                 List.iter collect_deps_range_gated idxs;\n                 Option.iter collect_deps_range_gated gate\n             | _ -> ())\n        | If { cond; _ } -> collect_deps_all cond\n        | _ -> ())\n      program;\n    P.iteri\n      (fun id v ->\n        (match v with\n         | Load { src; dtype; _ } ->\n             (match find_param src with\n              | Some (idx, ptr) ->\n                  track_mem (idx, false) ptr (scalar_itemsize (Dtype.Val dtype))\n              | None -> ())\n         | Store { dst; value; _ } ->\n             (match find_param dst with\n              | Some (idx, ptr) ->\n                  track_mem (idx, true) ptr (store_itemsize value)\n              | None -> ())\n         | _ -> ());\n        match v with\n        | Range { size; _ } ->\n            Stack.push !mults mult_stack;\n            mults := !mults * const_of_id size\n        | End_range _ -> mults := Stack.pop mult_stack\n        | Special { size; _ } -> mults := !mults * const_of_id size\n        | Define_var { name; hi; _ } when name = \"core_id\" ->\n            mults := !mults * (hi + 1)\n        | Load { src; dtype; _ } ->\n            if not (is_reg_access src) then\n              lds 
:= !lds + scalar_itemsize (Dtype.Val dtype) * !mults\n        | Store { dst; value; _ } ->\n            if not (is_reg_access dst) then\n              lds := !lds + store_itemsize value * !mults\n        | Unary { dtype; _ } | Binary { dtype; _ }\n          when not (Hashtbl.mem dont_count id) ->\n            flops := !flops + !mults * Dtype.Val.count dtype\n        | Ternary { op = `Mulacc; dtype; _ }\n          when not (Hashtbl.mem dont_count id) ->\n            flops := !flops + 2 * !mults * Dtype.Val.count dtype\n        | Ternary { dtype; _ } when not (Hashtbl.mem dont_count id) ->\n            flops := !flops + !mults * Dtype.Val.count dtype\n        | Wmma { dims = m, n, k; threads; _ }\n          when not (Hashtbl.mem dont_count id) ->\n            flops := !flops + 2 * (m * n * k / threads) * !mults\n        | _ -> ())\n      program;\n    let total_mem = Hashtbl.fold (fun _ bytes acc -> acc + bytes) mem 0 in\n    { ops = Int !flops; lds = Int !lds; mem = Int total_mem }\nend\n\ntype launch = {\n  kind : launch_kind;\n  global : K.t array;\n  local : K.t array option;\n}\n\ntype t = {\n  name : string;\n  src : string;\n  device : string;\n  program : Program.t;\n  lib : bytes option;\n  applied_opts : Kernel.Opt.t list;\n  vars : var list;\n  globals : int list;\n  outs : int list;\n  ins : int list;\n  launch : launch;\n  estimates : Estimates.t;\n  core_id : core_id option;\n}\n\nlet unsupported_launch_expr ~ref_ view =\n  invalid_arg\n    (Format.asprintf \"unsupported launch expression at ref %d: %a\" ref_\n       Program.pp_view view)\n\nlet mark_axis seen ~kind axis =\n  if axis < 0 || axis >= Array.length seen then\n    invalid_arg (Printf.sprintf \"launch axis %d out of bounds\" axis);\n  if seen.(axis) then\n    invalid_arg (Printf.sprintf \"%s axis %d appears more than once\" kind axis);\n  seen.(axis) <- true\n\nlet set_axis dims axis value =\n  if axis < 0 || axis >= Array.length dims then\n    invalid_arg (Printf.sprintf \"launch axis %d out 
of bounds\" axis);\n  dims.(axis) <- value\n\nlet trace_to_param (program : Program.t) (ref_ : int) : int option =\n  let index_ptr =\n    match Program.view program ref_ with\n    | Index { ptr; _ } -> Some ptr\n    | Cast { src; _ } | Bitcast { src; _ } -> (\n        match Program.view program src with\n        | Index { ptr; _ } -> Some ptr\n        | _ -> None)\n    | _ -> None\n  in\n  Option.bind index_ptr (fun ptr ->\n      match Program.view program ptr with\n      | Param { idx; _ } -> Some idx\n      | _ -> None)\n\n(* Convert a Program IR reference to a K.t expression for launch dimensions. *)\nlet kernel_expr_of_program (program : Program.t) var_nodes =\n  let rec expr_of_ref ref_ =\n    match Program.view program ref_ with\n    | Const { value; _ } -> (\n        match Const.view value with\n        | Int n -> K.const_int (Int64.to_int n)\n        | _ ->\n            invalid_arg\n              (Printf.sprintf \"non-integer constant at ref %d\" ref_))\n    | Define_var _ -> (\n        match Hashtbl.find_opt var_nodes ref_ with\n        | Some node -> node\n        | None ->\n            invalid_arg\n              (Printf.sprintf \"unknown scalar variable at ref %d\" ref_))\n    | Cast { src; _ } | Bitcast { src; _ } -> expr_of_ref src\n    | Unary { op = `Neg; src; _ } -> K.unary ~op:`Neg ~src:(expr_of_ref src)\n    | Binary { op; lhs; rhs; _ } ->\n        K.binary ~op ~lhs:(expr_of_ref lhs) ~rhs:(expr_of_ref rhs)\n    | v -> unsupported_launch_expr ~ref_ v\n  in\n  expr_of_ref\n\nlet default_dims () = [| K.const_int 1; K.const_int 1; K.const_int 1 |]\n\nlet collect_vars (program : Program.t) =\n  let raw = ref [] in\n  let var_nodes = Hashtbl.create 8 in\n  Program.iteri\n    (fun ref_ (v : Program.view) ->\n      match v with\n      | Define_var { name; lo; hi; dtype } ->\n          raw := (ref_, { name; lo; hi; dtype = Dtype.Val dtype }) :: !raw;\n          Hashtbl.replace var_nodes ref_\n            (K.define_var ~name ~lo ~hi ~dtype ())\n      | _ 
-> ())\n    program;\n  let sorted =\n    List.sort\n      (fun (_, (a : var)) (_, (b : var)) ->\n        compare (a.name, a.lo, a.hi) (b.name, b.lo, b.hi))\n      !raw\n  in\n  let var_index_of_ref = Hashtbl.create (List.length sorted) in\n  List.iteri\n    (fun index (ref_, _) -> Hashtbl.add var_index_of_ref ref_ index)\n    sorted;\n  (List.map snd sorted, var_index_of_ref, var_nodes)\n\nlet collect_buffers (program : Program.t) =\n  let outs = ref [] in\n  let ins = ref [] in\n  Program.iteri\n    (fun _id (v : Program.view) ->\n      match v with\n      | Store { dst; _ } ->\n          trace_to_param program dst\n          |> Option.iter (fun idx -> outs := idx :: !outs)\n      | Load { src; _ } ->\n          trace_to_param program src\n          |> Option.iter (fun idx -> ins := idx :: !ins)\n      | _ -> ())\n    program;\n  (List.sort_uniq Int.compare !outs, List.sort_uniq Int.compare !ins)\n\n(* Extracts launch grid/block dimensions from Special nodes in the program.\n   Enforces mutual exclusion between the flat-thread paradigm (global_idx only)\n   and the thread-group paradigm (group_id + local_id), raising if both are\n   mixed. Also captures the optional core_id variable for CPU dispatch. 
*)\nlet collect_launch (program : Program.t) var_index_of_ref scalar_expr =\n  let global = default_dims () in\n  let local = default_dims () in\n  let seen_global = Array.make 3 false in\n  let seen_local = Array.make 3 false in\n  let has_thread_groups = ref false in\n  let has_threads = ref false in\n  let core_id = ref None in\n  Program.iteri\n    (fun ref_ (v : Program.view) ->\n      match v with\n      | Special { dim; size; _ } ->\n          let expr = scalar_expr size in\n          let axis = Special_dim.axis dim in\n          begin match dim with\n          | Group_id _ ->\n              if !has_threads then\n                invalid_arg\n                  \"launch metadata cannot mix flat-thread and thread-group \\\n                   specials\";\n              has_thread_groups := true;\n              mark_axis seen_global ~kind:\"group_id\" axis;\n              set_axis global axis expr\n          | Local_id _ ->\n              if !has_threads then\n                invalid_arg\n                  \"launch metadata cannot mix flat-thread and thread-group \\\n                   specials\";\n              has_thread_groups := true;\n              mark_axis seen_local ~kind:\"local_id\" axis;\n              set_axis local axis expr\n          | Global_idx _ ->\n              if !has_thread_groups then\n                invalid_arg\n                  \"launch metadata cannot mix flat-thread and thread-group \\\n                   specials\";\n              has_threads := true;\n              mark_axis seen_global ~kind:\"global_idx\" axis;\n              set_axis global axis expr\n          end\n      | Define_var { name = \"core_id\"; lo; hi; _ } -> (\n          match !core_id with\n          | Some _ -> invalid_arg \"core_id must be defined at most once\"\n          | None when lo <> 0 -> invalid_arg \"core_id must have lower bound 0\"\n          | None ->\n              let var_index =\n                match Hashtbl.find_opt var_index_of_ref ref_ with\n    
            | Some index -> index\n                | None -> invalid_arg \"core_id missing from variable table\"\n              in\n              global.(0) <- K.const_int (hi + 1);\n              core_id := Some { var_index; lo; hi })\n      | _ -> ())\n    program;\n  let launch =\n    if !has_threads then { kind = Threads; global; local = None }\n    else if !has_thread_groups then\n      { kind = Thread_groups; global; local = Some local }\n    else { kind = Serial; global; local = Some local }\n  in\n  (launch, !core_id)\n\nlet of_program ~name ~src ~device ?lib ?(applied_opts = [])\n    ?(estimates = Estimates.zero) (program : Program.t) : t =\n  let vars, var_index_of_ref, var_nodes = collect_vars program in\n  let kernel_expr = kernel_expr_of_program program var_nodes in\n  let outs, ins = collect_buffers program in\n  let globals = List.sort_uniq Int.compare (outs @ ins) in\n  let launch, core_id = collect_launch program var_index_of_ref kernel_expr in\n  { name; src; device; program; lib; applied_opts; vars; globals; outs; ins;\n    launch; estimates; core_id }\n\nlet with_lib lib t = { t with lib = Some lib }\nlet with_estimates estimates t = { t with estimates }\n\nlet with_global_dims dims t =\n  { t with launch = { t.launch with global = Array.map K.const_int dims } }\n\nlet name t = t.name\nlet src t = t.src\nlet device t = t.device\nlet program t = t.program\nlet lib t = t.lib\nlet applied_opts t = t.applied_opts\nlet vars t = t.vars\nlet globals t = t.globals\nlet outs t = t.outs\nlet ins t = t.ins\nlet core_id t = t.core_id\nlet launch_kind t = t.launch.kind\nlet estimates t = t.estimates\n\nlet global_size t = t.launch.global\nlet local_size t = t.launch.local\n\nlet launch_dims t var_vals =\n  let eval_dims dims = Array.map (fun d -> K.sym_infer d var_vals) dims in\n  let global = eval_dims t.launch.global in\n  let local = Option.map eval_dims t.launch.local in\n  (global, local)\n"
  },
  {
    "path": "packages/tolk/lib/program_spec.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\n(** Compile-time kernel descriptions extracted from {!Ir.Program.t}.\n\n    A {!t} is the runtime-facing description of a lowered kernel before\n    device-specific preparation. It captures the lowered program, kernel name,\n    launch metadata, scalar variables, buffer reads and writes, and cost\n    estimates.\n\n    Scalar {e variables} are runtime parameters defined by\n    {!Ir.Program.instr.Define_var} instructions. Buffer {e reads} and {e writes}\n    are parameter indices traced from {!Ir.Program.instr.Load} and\n    {!Ir.Program.instr.Store} instructions respectively. *)\n\n(** {1:types Types} *)\n\ntype var = {\n  name : string;  (** Variable name matching the IR definition. *)\n  lo : int;  (** Inclusive lower bound. *)\n  hi : int;  (** Inclusive upper bound. *)\n  dtype : Tolk_ir.Dtype.t;  (** Scalar data type. *)\n}\n(** The type for scalar kernel parameters with runtime bounds.\n\n    Each [var] corresponds to one {!Ir.Program.instr.Define_var} instruction in\n    the lowered program. *)\n\ntype core_id = {\n  var_index : int;  (** Index into {!vars} identifying this variable. *)\n  lo : int;  (** Inclusive lower bound. Always [0]. *)\n  hi : int;  (** Inclusive upper bound. *)\n}\n(** The type for the runtime-managed [\"core_id\"] variable.\n\n    When present, [\"core_id\"] enables multi-core dispatch. The runtime assigns\n    each core a value in \\[[lo];[hi]\\]. *)\n\nval thread_count : core_id -> int\n(** [thread_count cid] is [cid.hi - cid.lo + 1]. *)\n\n(** The type for kernel launch models. A kernel uses exactly one model. *)\ntype launch_kind =\n  | Serial  (** No parallelism. 
Global and local sizes are all [1]. *)\n  | Thread_groups\n      (** Thread-group model (e.g. Metal, CUDA blocks). Both global and local\n          dimensions are meaningful. *)\n  | Threads\n      (** Flat-thread model (e.g. OpenCL global work). Only global dimensions\n          are meaningful; local size is [None]. *)\n\n(** {1:estimates Cost estimates} *)\n\nmodule Estimates : sig\n  (** Estimated kernel costs for scheduling and profiling.\n\n      Each cost component is either an exact integer or a symbolic expression\n      preserved from the upstream {!Ir.Kernel.estimates}. *)\n\n  (** The type for a single cost component. *)\n  type estimate =\n    | Int of int  (** Exact integer count. *)\n    | Symbolic of Tolk_ir.Kernel.t  (** Symbolic expression depending on runtime variables. *)\n\n  type t = {\n    ops : estimate;  (** Arithmetic operation count. *)\n    lds : estimate;  (** Local data share (shared memory) access count. *)\n    mem : estimate;  (** Global memory access count. *)\n  }\n  (** The type for kernel cost estimates. *)\n\n  val zero : t\n  (** [zero] is [{ops = Int 0; lds = Int 0; mem = Int 0}]. *)\n\n  val ( + ) : t -> t -> t\n  (** [a + b] is the component-wise sum of [a] and [b]. Two [Int] values produce\n      an [Int]. When either side is [Symbolic], the result is [Symbolic] with\n      the expressions concatenated. *)\n\n  val of_kernel : Tolk_ir.Kernel.estimates -> t\n  (** [of_kernel e] is the lossless conversion of {!Tolk_ir.Kernel.estimates} [e]. *)\n\n  val of_program : Tolk_ir.Program.t -> t\n  (** [of_program p] computes estimates by walking [p]. Counts FLOPs (excluding\n      index arithmetic), load/store bytes, and total memory accessed (capped at\n      buffer size for re-reads). Loop multipliers are stacked through\n      {!Tolk_ir.Program.view.Range}/{!Tolk_ir.Program.view.End_range} and\n      {!Tolk_ir.Program.view.Special} nodes. 
*)\nend\n\n(** {1:spec Kernel specifications} *)\n\ntype t\n(** The type for compile-time kernel descriptions.\n\n    Invariants:\n    - Variable order is stable and sorted by [(name, lo, hi)].\n    - Read and write parameter indices are sorted and deduplicated.\n    - Launch metadata uses exactly one model: {!Serial}, {!Thread_groups}, or\n      {!Threads}.\n    - [\"core_id\"], when present, is unique and has [lo = 0]. *)\n\n(** {2:constructors Constructors} *)\n\nval of_program :\n  name:string ->\n  src:string ->\n  device:string ->\n  ?lib:bytes ->\n  ?applied_opts:Tolk_ir.Kernel.Opt.t list ->\n  ?estimates:Estimates.t ->\n  Tolk_ir.Program.t ->\n  t\n(** [of_program ~name ~src ~device ?lib ?applied_opts ?estimates program]\n    extracts a kernel description from [program].\n\n    [lib] defaults to [None] (not yet compiled). [applied_opts] defaults\n    to [[]]. [estimates] defaults to {!Estimates.zero}.\n\n    Raises [Invalid_argument] if:\n    - launch metadata depends on an unsupported scalar instruction,\n    - a launch axis is outside [0..2],\n    - a launch axis is repeated,\n    - launch metadata mixes flat-thread and thread-group models,\n    - [\"core_id\"] is defined more than once, or\n    - [\"core_id\"] has a lower bound different from [0]. *)\n\nval with_lib : bytes -> t -> t\n(** [with_lib lib spec] is [spec] with [lib] set to [Some lib]. *)\n\nval with_estimates : Estimates.t -> t -> t\n(** [with_estimates e spec] is [spec] with estimates replaced by [e]. *)\n\nval with_global_dims : int array -> t -> t\n(** [with_global_dims dims spec] is [spec] with the global launch dimensions\n    replaced by constant values [dims]. *)\n\n(** {2:accessors Accessors} *)\n\nval name : t -> string\n(** [name spec] is the kernel entry-point name. *)\n\nval src : t -> string\n(** [src spec] is the rendered source code. *)\n\nval device : t -> string\n(** [device spec] is the target device name. 
*)\n\nval program : t -> Tolk_ir.Program.t\n(** [program spec] is the lowered IR program. *)\n\nval lib : t -> bytes option\n(** [lib spec] is the compiled binary, or [None] if not yet compiled. *)\n\nval applied_opts : t -> Tolk_ir.Kernel.Opt.t list\n(** [applied_opts spec] is the optimization options applied during codegen. *)\n\nval vars : t -> var list\n(** [vars spec] is the scalar variable definitions in stable argument order. *)\n\nval outs : t -> int list\n(** [outs spec] is the sorted, deduplicated parameter indices written by the\n    kernel. *)\n\nval ins : t -> int list\n(** [ins spec] is the sorted, deduplicated parameter indices read by the kernel.\n*)\n\nval globals : t -> int list\n(** [globals spec] is the sorted, deduplicated union of {!outs} and {!ins}. *)\n\nval core_id : t -> core_id option\n(** [core_id spec] is the runtime-managed [\"core_id\"] variable, if any. *)\n\nval launch_kind : t -> launch_kind\n(** [launch_kind spec] is the kernel launch model. *)\n\nval estimates : t -> Estimates.t\n(** [estimates spec] is the kernel cost estimates. *)\n\n(** {2:launch Launch dimensions} *)\n\nval global_size : t -> Tolk_ir.Kernel.t array\n(** [global_size spec] is the symbolic global launch dimensions (length [3]).\n    Use {!launch_dims} to evaluate them to concrete integers. *)\n\nval local_size : t -> Tolk_ir.Kernel.t array option\n(** [local_size spec] is the symbolic local launch dimensions, or [None]\n    for flat-thread ({!Threads}) kernels. *)\n\nval launch_dims : t -> (string * int) list -> int array * int array option\n(** [launch_dims spec var_vals] evaluates the launch dimensions of [spec]\n    using name-keyed variable bindings [var_vals].\n\n    Returns [(global, local)] where [global] has length [3] and [local] is:\n    - [Some [|1; 1; 1|]] for {!Serial}.\n    - [Some local] for {!Thread_groups}.\n    - [None] for {!Threads}. *)\n"
  },
  {
    "path": "packages/tolk/lib/renderer/cstyle.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\nopen Tolk_ir\nmodule P = Program\n\nlet strf = Printf.sprintf\n\n(* Environment *)\n\nlet threads = Helpers.getenv \"THREADS\" 1 <> 0\nlet amx = Helpers.getenv \"AMX\" 0 <> 0\nlet expand_ssa = Helpers.getenv \"EXPAND_SSA\" 0 <> 0\nlet aligned = Helpers.getenv \"ALIGNED\" 1 <> 0\n\n(* Helpers *)\n\nlet strip_parens s =\n  let n = String.length s in\n  if n < 2 || s.[0] <> '(' || s.[n - 1] <> ')' then s\n  else\n    (* Only strip when the inner content has balanced parens,\n       so (a+b)*(c+d) is left unchanged. *)\n    let d = ref 0 in\n    try\n      for i = 1 to n - 2 do\n        if s.[i] = '(' then incr d\n        else if s.[i] = ')' then (decr d; if !d < 0 then raise_notrace Exit)\n      done;\n      if !d = 0 then String.sub s 1 (n - 2) else s\n    with Exit -> s\n\nlet dedup lst =\n  let seen = Hashtbl.create 16 in\n  List.filter (fun x ->\n    let fresh = not (Hashtbl.mem seen x) in\n    if fresh then Hashtbl.replace seen x ();\n    fresh\n  ) lst\n\n(* Subset of Python str.format() used by CUSTOM node format strings:\n   positional ({0}, {1}), auto-numbered ({}), and escaped braces ({{ }}). 
*)\nlet render_custom_fmt fmt args =\n  let a = Array.of_list args in\n  let n = Array.length a in\n  let buf = Buffer.create (String.length fmt) in\n  let len = String.length fmt in\n  let rec scan i auto =\n    if i >= len then ()\n    else match fmt.[i] with\n    | '{' when i + 1 < len && fmt.[i + 1] = '{' ->\n        Buffer.add_char buf '{'; scan (i + 2) auto\n    | '}' when i + 1 < len && fmt.[i + 1] = '}' ->\n        Buffer.add_char buf '}'; scan (i + 2) auto\n    | '{' ->\n        let j = match String.index_from_opt fmt (i + 1) '}' with\n          | Some j -> j\n          | None -> invalid_arg \"render_custom_fmt: unclosed '{'\"\n        in\n        let field = String.trim (String.sub fmt (i + 1) (j - i - 1)) in\n        let idx, auto =\n          if field = \"\" then auto, auto + 1\n          else match int_of_string_opt field with\n          | Some k -> k, auto\n          | None -> invalid_arg (strf \"render_custom_fmt: non-numeric field {%s}\" field)\n        in\n        if idx >= 0 && idx < n then Buffer.add_string buf a.(idx);\n        scan (j + 1) auto\n    | c ->\n        Buffer.add_char buf c; scan (i + 1) auto\n  in\n  scan 0 0;\n  Buffer.contents buf\n\nlet vec_elem_name i =\n  if i < 16 then String.make 1 \"xyzwabcdefghijkl\".[i]\n  else strf \"v%d\" i\n\nlet prod = List.fold_left ( * ) 1\n\nlet is_associative = function\n  | `Add | `Mul | `Xor | `Or | `And -> true | _ -> false\n\n(* Type rendering *)\n\ntype scalar_name = Dtype.scalar -> string\n\nlet base_scalar_name : scalar_name = function\n  | Dtype.Void -> \"void\" | Bool -> \"bool\"\n  | Int8 -> \"signed char\" | Int16 -> \"short\" | Int32 -> \"int\"\n  | Int64 -> \"long\" | Uint8 -> \"unsigned char\" | Uint16 -> \"unsigned short\"\n  | Uint32 -> \"unsigned int\" | Uint64 -> \"unsigned long\"\n  | Float16 -> \"half\" | Bfloat16 -> \"__bf16\" | Float32 -> \"float\"\n  | Float64 -> \"double\" | Fp8e4m3 -> \"float8_e4m3\"\n  | Fp8e5m2 -> \"float8_e5m2\" | Index -> \"long\"\n\nlet 
render_dtype_str (sn : scalar_name) (dt : Dtype.t) =\n  let base = sn (Dtype.scalar dt) in\n  if Dtype.count dt > 1 then\n    strf \"%s%d\" (String.map (fun c -> if c = ' ' then '_' else c) base) (Dtype.count dt)\n  else base\n\n(* Constant rendering *)\n\nlet render_float_lit f =\n  let s = strf \"%.17g\" f in\n  if String.contains s '.' || String.contains s 'e' || String.contains s 'E'\n  then s else s ^ \".0\"\n\nlet truncate_u32 i = i land 0xFFFFFFFF\n\n(* Upper 16 bits of the float32 encoding of f, after round-to-nearest-even.\n   Used to render bf16 constants as their bit pattern (OpenCL). *)\nlet float_to_bf16_bits (f : float) =\n  let bits = Int32.bits_of_float f in\n  if not (Float.is_finite f) then\n    Int32.to_int (Int32.shift_right_logical bits 16)\n  else\n    (* Round-to-nearest-even: add 0x7FFF + bit16 for tie-breaking. *)\n    let rounded =\n      Int32.add bits\n        (Int32.add 0x7fffl\n           (Int32.logand (Int32.shift_right_logical bits 16) 1l))\n    in\n    Int32.to_int (Int32.shift_right_logical rounded 16)\n\nlet render_const_base ~infinity ~nan_ ~render_cast (c : Const.t) (dt : Dtype.t) =\n  let cast s = strf \"(%s)\" (render_cast dt s) in\n  match Const.view c with\n  | Bool b -> if b then \"1\" else \"0\"\n  | Float f ->\n      if Float.is_nan f then cast nan_\n      else if f = Float.infinity then cast infinity\n      else if f = Float.neg_infinity then cast (\"-\" ^ infinity)\n      else\n        let lit = render_float_lit f in\n        (match Dtype.scalar dt with\n        | Float64 -> lit\n        | Float16 | Bfloat16 | Fp8e4m3 | Fp8e5m2 -> cast (lit ^ \"f\")\n        | _ -> lit ^ \"f\")\n  | Int v ->\n      (match Dtype.scalar dt with\n      | Int64 -> strf \"%Ldll\" v\n      | Uint64 -> strf \"%Luull\" v\n      | Uint32 -> strf \"%uu\" (truncate_u32 (Int64.to_int v))\n      | Uint8 | Uint16 -> cast (strf \"%Ldu\" v)\n      | Int8 | Int16 -> cast (strf \"%Ld\" v)\n      | _ -> strf \"%Ld\" v)\n\n(* Cast rendering *)\n\nlet 
base_render_cast type_map dt v = strf \"(%s)(%s)\" (type_map dt) v\n\nlet render_bitcast_with fmt type_map program src_id dst v =\n  let src_dt = match P.dtype program src_id with Some dt -> dt | None -> dst in\n  strf fmt (type_map dst) (type_map src_dt) v\n\nlet base_render_bitcast = render_bitcast_with \"__builtin_bit_cast(%s, (%s)(%s))\"\n\n(* Code for op *)\n\ntype code_for_op = {\n  unary : Op.unary -> string -> Dtype.t -> string;\n  binary : Op.binary -> string -> string -> Dtype.t -> string;\n  ternary : Op.ternary -> string -> string -> string -> Dtype.t -> string;\n}\n\n(* Ops handled by base_code_for_op — passed to Renderer.make so\n   supported_ops_of_code_for_op derives accurate decomposition flags. *)\nlet base_code_for_op_list : Renderer.code_op list =\n  [ Sqrt; Recip; Neg; Exp2; Log2; Sin; Trunc;\n    And; Xor; Or; Add; Sub; Mul; Mod; Idiv; Cmpne;\n    Shr; Shl; Cmplt; Where; Cmpeq ]\n\nlet base_code_for_op = {\n  unary = (fun op x _dt -> match op with\n    | `Neg -> strf \"-%s\" x | `Exp2 -> strf \"exp2(%s)\" x\n    | `Log2 -> strf \"log2(%s)\" x | `Sin -> strf \"sin(%s)\" x\n    | `Sqrt -> strf \"sqrt(%s)\" x | `Recip -> strf \"(1/%s)\" x\n    | `Trunc -> strf \"trunc(%s)\" x);\n  binary = (fun op a b _dt -> match op with\n    | `Add -> strf \"(%s+%s)\" a b | `Sub -> strf \"(%s-%s)\" a b\n    | `Mul -> strf \"(%s*%s)\" a b | `Fdiv -> strf \"(%s/%s)\" a b\n    | `Idiv -> strf \"(%s/%s)\" a b | `Mod -> strf \"(%s%%%s)\" a b\n    | `Shl -> strf \"(%s<<%s)\" a b | `Shr -> strf \"(%s>>%s)\" a b\n    | `And -> strf \"(%s&%s)\" a b | `Or -> strf \"(%s|%s)\" a b\n    | `Xor -> strf \"(%s^%s)\" a b\n    | `Cmplt -> strf \"(%s<%s)\" a b | `Cmpeq -> strf \"(%s==%s)\" a b\n    | `Cmpne -> strf \"(%s!=%s)\" a b\n    | _ -> invalid_arg \"binary op not handled in renderer\");\n  ternary = (fun op a b c _dt -> match op with\n    | `Where -> strf \"(%s?%s:%s)\" a b c\n    | _ -> invalid_arg \"ternary op not handled in renderer\");\n}\n\n(* Language configuration 
*)\n\ntype rule = P.t -> P.id -> P.view -> lang -> string array -> string option\n\nand lang = {\n  kernel_typedef : int -> string;\n  buffer_prefix : string;\n  buffer_suffix : string;\n  smem_align : string;\n  smem_prefix : string;\n  smem_prefix_for_cast : bool;\n  arg_int_prefix : string;\n  barrier : string;\n  extra_args : string list;\n  float4_ctor : Dtype.t -> string;\n  float4_style : string * string;\n  gep_arr_threshold : int;\n  code_for_workitem : Special_dim.t -> string;\n  type_map : Dtype.t -> string;\n  render_const : Const.t -> Dtype.t -> string;\n  render_cast : Dtype.t -> string -> string;\n  render_bitcast : P.t -> P.id -> Dtype.t -> string -> string;\n  code_for_op : code_for_op;\n  rules : rule list;\n  render_kernel_hook : lang -> string -> string list -> rendered_buf list -> P.t -> string;\n  infinity : string;\n  nan_ : string;\n}\n\nand buf_kind =\n  | Buf_ptr of Dtype.Ptr.t\n  | Buf_image of Dtype.Ptr.t\n  | Buf_int\n\nand rendered_buf = {\n  buf_name : string;\n  buf_kind : buf_kind;\n  buf_mutable : bool;\n}\n\n(* Base rendering rule *)\n\nlet base_render : rule = fun program id v lang r ->\n  let open P in\n  match v with\n  | Define_reg { size; dtype } ->\n      Some (strf \"%s %s[%d];\" (lang.type_map (Dtype.Val (Dtype.Ptr.base dtype))) r.(id) size)\n  | If { cond; _ } -> Some (strf \"if (%s) {\" r.(cond))\n  | End_range _ | Endif _ -> Some \"}\"\n  | Wmma { name; a; b; c; _ } ->\n      Some (strf \"__%s(%s, %s, %s)\" name r.(a) r.(b) r.(c))\n  | Range { size; dtype; _ } ->\n      let n = r.(id) in\n      Some (strf \"for (%s %s = 0; %s < %s; %s++) {\"\n        (lang.type_map (Dtype.Val dtype)) n n r.(size) n)\n  | Vectorize { srcs; dtype } ->\n      let l, rr = lang.float4_style in\n      Some (strf \"%s%s%s%s\" (lang.float4_ctor (Dtype.Val dtype)) l\n        (String.concat \",\" (List.map (fun s -> r.(s)) srcs)) rr)\n  | Cast { src; dtype } when Dtype.Val.count dtype > 1\n      && not (P.is_ptr program src) ->\n      Some (strf 
\"__builtin_convertvector(%s, %s)\" r.(src) (lang.type_map (Dtype.Val dtype)))\n  | Cast { src; dtype } when P.is_ptr program src ->\n      (* Pointer cast: (type_ptr)(ptr_expr) *)\n      Some (strf \"((%s*)(%s))\" (lang.type_map (Dtype.Val dtype)) r.(src))\n  | Cast { src; dtype } ->\n      Some (strf \"(%s)\" (lang.render_cast (Dtype.Val dtype) r.(src)))\n  | Bitcast { src; dtype } ->\n      Some (lang.render_bitcast program src (Dtype.Val dtype) r.(src))\n  | Define_local { size; dtype } ->\n      Some (strf \"%s%s%s %s[%d];\"\n        lang.smem_align lang.smem_prefix (lang.type_map (Dtype.Val (Dtype.Ptr.base dtype))) r.(id) size)\n  | Barrier -> Some lang.barrier\n  | Special { dim; size; _ } ->\n      Some (strf \"%s; /* %s */\" (lang.code_for_workitem dim) r.(size))\n  | Const { value; dtype } -> Some (lang.render_const value (Dtype.Val dtype))\n  | Index { ptr; idxs; _ } ->\n      let idx_str = match idxs with\n        | [] -> \"0\"\n        | [idx] -> r.(idx)\n        | _ -> String.concat \"+\" (List.map (fun s -> r.(s)) idxs)\n      in\n      Some (strf \"(%s+%s)\" r.(ptr) idx_str)\n  | Load { src; alt = Some alt; _ } ->\n      (match P.index_gate program src with\n      | Some gate -> Some (strf \"(%s?*%s:%s)\" r.(gate) r.(src) r.(alt))\n      | None -> Some (strf \"(*%s)\" r.(src)))\n  | Load { src; _ } -> Some (strf \"(*%s)\" r.(src))\n  | Store { dst; value } -> Some (strf \"*%s = %s;\" r.(dst) r.(value))\n  | Unary { op; src; dtype } ->\n      Some (lang.code_for_op.unary op r.(src) (Dtype.Val dtype))\n  | Binary { op; lhs; rhs; dtype } ->\n      let strip_if_same child =\n        match P.view program child with\n        | Binary { op = cop; _ } when cop = op && is_associative op ->\n            strip_parens r.(child)\n        | _ -> r.(child)\n      in\n      Some (lang.code_for_op.binary op (strip_if_same lhs) (strip_if_same rhs) (Dtype.Val dtype))\n  | Ternary { op; a; b; c; dtype } ->\n      Some (lang.code_for_op.ternary op r.(a) r.(b) r.(c) 
(Dtype.Val dtype))\n  | Gep { src; idxs; dtype } ->\n      let src_count = match P.dtype program src with\n        | Some dt -> Dtype.Val.count dt | None -> 1\n      in\n      let elem idx =\n        if src_count > lang.gep_arr_threshold then r.(src) ^ strf \"[%d]\" idx\n        else r.(src) ^ strf \".%s\" (vec_elem_name idx)\n      in\n      (match idxs with\n      | [idx] -> Some (elem idx)\n      | _ ->\n          let l, rr = lang.float4_style in\n          Some (strf \"%s%s%s%s\" (lang.float4_ctor (Dtype.Val dtype)) l\n            (String.concat \",\" (List.map elem idxs)) rr))\n  | Custom { fmt; args } | Custom_inline { fmt; args; _ } ->\n      Some (render_custom_fmt fmt (List.map (fun s -> r.(s)) args))\n  | _ -> None\n\nlet apply_rules rules program id v lang r =\n  let rec loop = function\n    | [] -> None\n    | rule :: rest ->\n        (match rule program id v lang r with Some _ as s -> s | None -> loop rest)\n  in\n  loop rules\n\n(* Inlining heuristic — decides which nodes get their rendered string\n   substituted at use sites rather than assigned to a named temporary. 
*)\n\nlet should_inline use_count program id (v : P.view) =\n  let open P in\n  match v with\n  | Const _ | Gep _ | Index _ | Custom_inline _ -> true\n  | Load { src; alt = None; _ } -> (\n      match P.view program src with\n      | Index { dtype; _ } -> Dtype.Ptr.addrspace dtype = Dtype.Reg\n      | _ -> false)\n  | Unary _ | Binary _ | Ternary _ ->\n      not expand_ssa &&\n      (match v with Ternary { op = `Where; _ } -> false | _ -> use_count <= 1)\n  | Cast { src; dtype } ->\n      P.is_ptr program src ||\n      (Dtype.Val.count dtype = 1 && not expand_ssa && use_count <= 1)\n  | Bitcast _ | Vectorize _ -> not expand_ssa && use_count <= 1\n  | _ -> false\n\n(* Naming *)\n\nlet prefix_of : P.view -> string = fun v ->\n  let open P in\n  match v with\n  | Wmma _ -> \"wmma\" | Define_local _ -> \"temp\" | Const _ -> \"const\"\n  | Cast _ | Bitcast _ | Vectorize _ -> \"cast\" | Gep _ -> \"gep\"\n  | Index _ -> \"bidx\" | Define_reg _ -> \"acc\" | Load _ -> \"val\"\n  | _ -> \"alu\"\n\nlet special_name = function\n  | Special_dim.Group_id a -> strf \"gidx%d\" a\n  | Local_id a -> strf \"lidx%d\" a\n  | Global_idx a -> strf \"idx%d\" a\n\nlet sub_str i = if i >= 0 then string_of_int i else \"m\" ^ string_of_int (-i)\n\nlet range_name kind axis sub =\n  let base = strf \"%sidx%d\" (Axis_kind.letter kind) axis in\n  match sub with\n  | [] -> base\n  | _ -> strf \"%s_%s\" base (String.concat \"_\" (List.map sub_str sub))\n\n(* Metadata collection *)\n\ntype wmma_info = {\n  wi_name : string;\n  wi_dims : int * int * int;\n  wi_dtype_in : Dtype.scalar;\n  wi_dtype_out : Dtype.scalar;\n  wi_upcast_axes : (int * int) list * (int * int) list * (int * int) list;\n}\n\nlet collect_used_dtypes program =\n  let acc = ref [] in\n  P.iteri (fun id _v ->\n    match P.dtype program id with\n    | Some dt -> acc := dt :: !acc\n    | None -> ()\n  ) program;\n  dedup (List.rev !acc)\n\nlet collect_wmma_args program =\n  let acc = ref [] in\n  P.iteri (fun _id v ->\n    let open P 
in\n    match v with\n    | Wmma { name; dims; dtype_in; dtype_out; upcast_axes; _ } ->\n        acc := { wi_name = name; wi_dims = dims;\n                 wi_dtype_in = dtype_in; wi_dtype_out = dtype_out;\n                 wi_upcast_axes = upcast_axes } :: !acc\n    | _ -> ()\n  ) program;\n  dedup (List.rev !acc)\n\n(* Core rendering loop.\n\n   Walks the program in topological order, assigning each node a name and\n   rendering it to a C expression string.  Single-use expressions are inlined\n   at their use site; everything else is assigned to a named temporary and\n   appended to the kernel body. *)\n\nlet render_program (lang : lang) (program : P.t) =\n  let open P in\n  let n = P.length program in\n  let r = Array.make n \"\" in\n  (* 1. child_count — how many times each node is referenced as an operand *)\n  let child_count = Array.make n 0 in\n  P.iteri (fun id _v ->\n    List.iter (fun c ->\n      if c >= 0 && c < n then child_count.(c) <- child_count.(c) + 1\n    ) (P.children program id)\n  ) program;\n  (* 2. writable — mark PARAMs reachable from STORE destinations *)\n  let writable = Array.make n false in\n  let rec mark_writable id =\n    if id >= 0 && id < n then begin\n      writable.(id) <- true;\n      match P.view program id with\n      | Index { ptr; _ } -> mark_writable ptr\n      | Cast { src; _ } | Bitcast { src; _ } | After { src; _ } -> mark_writable src\n      | _ -> ()\n    end\n  in\n  P.iteri (fun _id v -> match v with\n    | Store { dst; _ } -> mark_writable dst\n    | Custom { args; _ } ->\n        List.iter (fun arg ->\n          if arg >= 0 && arg < n then\n            match P.view program arg with\n            | Param_image _ -> mark_writable arg\n            | _ -> ()\n        ) args\n    | _ -> ()) program;\n  (* 3. 
main walk *)\n  let bufs = ref [] in\n  let kernel = ref [] in\n  let depth = ref 1 in\n  let counters : (string, int) Hashtbl.t = Hashtbl.create 16 in\n  let counter_get pfx =\n    match Hashtbl.find_opt counters pfx with Some n -> n | None -> 0 in\n  P.iteri (fun id v ->\n    match v with\n    (* AFTER: alias — r[u] = r[u.src[0]] *)\n    | After { src; _ } -> r.(id) <- r.(src)\n    (* PARAM / DEFINE_VAR: name and register as buffer *)\n    | Param { idx; dtype } ->\n        let sz = Dtype.Ptr.size dtype in\n        r.(id) <- (if sz > 0 then strf \"data%d_%d\" idx sz else strf \"data%d\" idx);\n        bufs := { buf_name = r.(id); buf_kind = Buf_ptr dtype;\n                  buf_mutable = writable.(id) } :: !bufs\n    | Param_image { idx; dtype; width; height } ->\n        r.(id) <- strf \"data%d_%dx%d\" idx width height;\n        bufs := { buf_name = r.(id); buf_kind = Buf_image dtype;\n                  buf_mutable = writable.(id) } :: !bufs\n    | Define_var { name; _ } ->\n        r.(id) <- name;\n        bufs := { buf_name = name; buf_kind = Buf_int;\n                  buf_mutable = false } :: !bufs\n    | _ ->\n        (* naming *)\n        let prefix = match v with\n          | Special { dim; _ } -> r.(id) <- special_name dim; None\n          | Range { axis; kind; sub; _ } ->\n              r.(id) <- range_name kind axis sub; None\n          | _ ->\n              let p = prefix_of v in\n              r.(id) <- strf \"%s%d\" p (counter_get p);\n              Some p\n        in\n        (* render *)\n        let l = match apply_rules lang.rules program id v lang r with\n          | Some s -> s\n          | None -> strf \"/* unhandled */\"\n        in\n        (* depth adjustment: ENDIF/END decrement before emitting *)\n        (match v with End_range _ | Endif _ -> decr depth | _ -> ());\n        (* inline decision *)\n        if should_inline child_count.(id) program id v then\n          r.(id) <- l\n        else begin\n          let line = match v with\n    
        | Range _ | Define_local _ | Store _ | Define_reg _ -> l\n            | Special { dtype; _ } ->\n                strf \"%s %s = %s\" (lang.type_map (Dtype.Val dtype)) r.(id) l\n            | _ -> (match P.dtype program id with\n                | Some dt when not (Dtype.Val.equal dt Dtype.Val.void) ->\n                    strf \"%s %s = %s;\" (lang.type_map (Dtype.Val dt)) r.(id) l\n                | _ -> l)\n          in\n          kernel := (String.make (!depth * 2) ' ' ^ line) :: !kernel;\n          (match prefix with\n           | Some p -> Hashtbl.replace counters p (counter_get p + 1)\n           | None -> ())\n        end;\n        (* depth adjustment: IF/RANGE increment after emitting *)\n        (match v with If _ | Range _ -> incr depth | _ -> ())\n  ) program;\n  (List.rev !kernel, List.rev !bufs)\n\n(* Kernel assembly.\n\n   Wraps the rendered kernel body in a function signature with typed\n   parameters, launch bounds, and an optional prefix (headers/defines). *)\n\nlet default_render_kernel lang name kernel bufs program =\n  let open P in\n  (* image sampler preamble *)\n  let tmp =\n    if List.exists (fun b -> match b.buf_kind with Buf_image _ -> true | _ -> false) bufs\n    then \"const sampler_t smp = CLK_NORMALIZED_COORDS_FALSE | CLK_ADDRESS_CLAMP | CLK_FILTER_NEAREST;\\n\"\n    else \"\"\n  in\n  let param_strs = List.map (fun b -> match b.buf_kind with\n    | Buf_image _ ->\n        let q = if b.buf_mutable then \"write_only\" else \"read_only\" in\n        strf \"%s image2d_t %s\" q b.buf_name\n    | Buf_ptr dtype ->\n        let base_str = render_dtype_str\n          (fun s -> lang.type_map (Dtype.of_scalar s))\n          (Dtype.Val (Dtype.Ptr.base dtype)) in\n        strf \"%s%s*%s %s\" lang.buffer_prefix base_str lang.buffer_suffix b.buf_name\n    | Buf_int ->\n        strf \"%s %s\" lang.arg_int_prefix b.buf_name\n  ) bufs in\n  (* launch_bounds = product of local dimension sizes *)\n  let launch_bounds = ref 1 in\n  P.iteri (fun 
_id v -> match v with\n    | Special { dim = Local_id _; size; _ } -> (\n        match P.view program size with\n        | Const { value; _ } -> (match Const.view value with\n            | Int n -> launch_bounds := !launch_bounds * Int64.to_int n | _ -> ())\n        | _ -> ())\n    | _ -> ()) program;\n  let all_params = param_strs @ lang.extra_args in\n  strf \"%s %s(%s) {\\n%s%s\\n}\"\n    (lang.kernel_typedef !launch_bounds) name\n    (String.concat \", \" all_params) tmp (String.concat \"\\n\" kernel)\n\nlet render_kernel (lang : lang) ?(name = \"kernel\") (program : P.t) =\n  let kernel, bufs = render_program lang program in\n  lang.render_kernel_hook lang name kernel bufs program\n\n(* Language constructor *)\n\nlet make_lang ~scalar_name\n    ?(kernel_typedef = fun _ -> \"void\")\n    ?(buffer_prefix = \"\") ?(buffer_suffix = \"\")\n    ?(smem_align = \"\") ?(smem_prefix = \"\")\n    ?(smem_prefix_for_cast = true)\n    ?(arg_int_prefix = \"const int\")\n    ?(barrier = \"\")\n    ?(extra_args = [])\n    ?(float4_ctor = fun _type_map _dt -> \"(float4)\")\n    ?(float4_style = (\"(\", \")\"))\n    ?(gep_arr_threshold = 4)\n    ?(code_for_workitem = fun _ -> failwith \"no workitem support\")\n    ?(code_for_op = base_code_for_op)\n    ?render_bitcast:render_bitcast_opt\n    ?(rules = [base_render])\n    ?(render_kernel_hook = fun lang name kernel bufs program ->\n        default_render_kernel lang name kernel bufs program)\n    ?(infinity = \"INFINITY\") ?(nan_ = \"NAN\")\n    () =\n  let type_map = render_dtype_str scalar_name in\n  let render_cast = base_render_cast type_map in\n  let render_bitcast = match render_bitcast_opt with\n    | Some f -> f type_map\n    | None -> (fun program src_id dst v -> base_render_bitcast (fun dt -> type_map (Dtype.Val dt)) program src_id (Dtype.val_of dst) v)\n  in\n  let render_const = render_const_base ~infinity ~nan_ ~render_cast in\n  let float4_ctor = float4_ctor type_map in\n  { kernel_typedef; buffer_prefix; 
buffer_suffix; smem_align; smem_prefix;\n    smem_prefix_for_cast; arg_int_prefix; barrier; extra_args;\n    float4_ctor; float4_style; gep_arr_threshold; code_for_workitem;\n    type_map; render_const; render_cast; render_bitcast;\n    code_for_op; rules; render_kernel_hook; infinity; nan_ }\n\nlet render_fn lang ?(name = \"kernel\") program = render_kernel lang ~name program\n\nmodule K = Kernel\n\n(* extra_pm: devectorize bool-typed ALU, CAST-from-bool, and WHERE.\n   These can't be vectorized on C-style backends.  Requires\n   Devectorizer.no_vectorized_alu to be exported. *)\nlet extra_pm (node : K.t) : K.t option =\n  match K.view node with\n  (* ALU/CAST/BITCAST/INDEX returning bool *)\n  | (Unary _ | Binary _ | Ternary _ | Cast _ | Bitcast _ | Index _)\n    when Dtype.scalar (K.dtype node) = Dtype.Bool\n         && Dtype.vcount (K.dtype node) > 1 ->\n      Devectorizer.no_vectorized_alu node\n  (* CAST from bool source *)\n  | Cast { src; _ } when Dtype.scalar (K.dtype src) = Dtype.Bool\n                         && Dtype.vcount (K.dtype node) > 1 ->\n      Devectorizer.no_vectorized_alu node\n  (* WHERE can't be vectorized *)\n  | Ternary { op = `Where; _ } when Dtype.vcount (K.dtype node) > 1 ->\n      Devectorizer.no_vectorized_alu node\n  | _ -> None\n\n(* create_non_native_float_pats: promote ALU on non-native float dtypes\n   through float32.  Only promotes ALU — storage stays in the original\n   dtype (unlike pm_float_decomp which rewrites everything). 
*)\nlet create_non_native_float_pats ?(casting = true)\n    (dts : Dtype.scalar list) (node : K.t) : K.t option =\n  let f32 = Dtype.of_scalar Dtype.Float32 in\n  let is_nn dt = List.mem (Dtype.scalar dt) dts in\n  let cast_f32 src = K.cast ~src ~dtype:f32 in\n  match K.view node with\n  (* WHERE with non-native float result *)\n  | Ternary { op = `Where; a; b; c; _ }\n    when is_nn (K.dtype node) ->\n      let w = K.ternary ~op:`Where ~a ~b:(cast_f32 b) ~c:(cast_f32 c) in\n      Some (K.cast ~src:w ~dtype:(K.dtype node))\n  (* ALU returning non-native float *)\n  | (Unary _ | Binary _ | Ternary _)\n    when is_nn (K.dtype node) ->\n      let new_children = List.map (fun c ->\n        if is_nn (K.dtype c) then cast_f32 c else c\n      ) (K.children node) in\n      let promoted = K.replace node ~children:new_children\n        ~dtype:(Dtype.vec (Dtype.count (K.dtype node)) f32) () in\n      Some (K.cast ~src:promoted ~dtype:(K.dtype node))\n  (* Bool-returning ALU with non-native float inputs *)\n  | (Binary _ | Ternary _)\n    when Dtype.scalar (K.dtype node) = Dtype.Bool ->\n      let children = K.children node in\n      if List.for_all (fun c ->\n        match K.dtype_opt c with Some dt -> is_nn dt | None -> false\n      ) children then\n        let new_children = List.map cast_f32 children in\n        Some (K.replace node ~children:new_children ())\n      else None\n  (* Cast TO non-native from non-float: insert f32 intermediate *)\n  | Cast { src; dtype } when casting && is_nn dtype ->\n      let src_dt = K.dtype src in\n      if Dtype.scalar src_dt <> Dtype.Float32 then\n        Some (K.cast ~src:(cast_f32 src) ~dtype)\n      else None\n  (* Cast FROM non-native: insert f32 intermediate *)\n  | Cast { src; dtype } when casting ->\n      let src_dt = K.dtype src in\n      if is_nn src_dt && Dtype.scalar dtype <> Dtype.Float32 then\n        Some (K.cast ~src:(cast_f32 src) ~dtype)\n      else None\n  | _ -> None\n\n(* Software bf16 ↔ f32 cast via bit 
manipulation.  Used by renderers\n   that lack native bf16 cast instructions (Clang, OpenCL, AMD non-CDNA4). *)\nlet cast_float_to_bf16 (x : K.t) : K.t =\n  let open K.O in\n  (* x must be float32; result is bf16 via uint16 bit pattern *)\n  let bits = K.bitcast ~src:x ~dtype:Dtype.Val.uint32 in\n  let neg_bits = K.binary ~op:`And ~lhs:(K.unary ~op:`Neg ~src:bits)\n    ~rhs:(K.const (Const.int Dtype.Val.uint32 0x7f800000)) in\n  let is_not_inf = ne neg_bits (K.const (Const.int Dtype.Val.uint32 0)) in\n  let bit16 = K.binary ~op:`And ~lhs:(K.binary ~op:`Shr ~lhs:bits\n    ~rhs:(K.const (Const.int Dtype.Val.uint32 16)))\n    ~rhs:(K.const (Const.int Dtype.Val.uint32 1)) in\n  let rounded = K.binary ~op:`Add ~lhs:bits\n    ~rhs:(K.binary ~op:`Add ~lhs:bit16\n      ~rhs:(K.const (Const.int Dtype.Val.uint32 0x7fff))) in\n  let mantissa_nz = ne (K.binary ~op:`And ~lhs:bits\n    ~rhs:(K.const (Const.int Dtype.Val.uint32 0xffff)))\n    (K.const (Const.int Dtype.Val.uint32 0)) in\n  let inf_nan = where mantissa_nz\n    (K.binary ~op:`Or ~lhs:bits ~rhs:(K.const (Const.int Dtype.Val.uint32 0x10000)))\n    bits in\n  let result = where is_not_inf rounded inf_nan in\n  let shifted = K.binary ~op:`Shr ~lhs:result\n    ~rhs:(K.const (Const.int Dtype.Val.uint32 16)) in\n  K.bitcast ~src:(K.cast ~src:shifted ~dtype:(Dtype.of_scalar Dtype.Uint16))\n    ~dtype:Dtype.Val.bfloat16\n\nlet pm_manual_bf16_cast (node : K.t) : K.t option =\n  match K.view node with\n  (* bf16 → f32: shift left 16 bits and bitcast *)\n  | Cast { src; dtype }\n    when Dtype.scalar dtype = Dtype.Float32\n         && Dtype.scalar (K.dtype src) = Dtype.Bfloat16 ->\n      let bits = K.cast ~src:(K.bitcast ~src ~dtype:Dtype.Val.uint16)\n        ~dtype:(Dtype.of_scalar Dtype.Uint32) in\n      let shifted = K.binary ~op:`Shl ~lhs:bits\n        ~rhs:(K.const (Const.int Dtype.Val.uint32 16)) in\n      Some (K.bitcast ~src:shifted ~dtype:Dtype.Val.float32)\n  (* f32 → bf16: round-to-nearest-even via bit manipulation 
*)\n  | Cast { src; dtype }\n    when Dtype.scalar dtype = Dtype.Bfloat16\n         && Dtype.scalar (K.dtype src) = Dtype.Float32 ->\n      Some (cast_float_to_bf16 src)\n  | _ -> None\n\n(* Clang *)\n\nlet clang_scalar_name : scalar_name = function\n  | Dtype.Bool -> \"_Bool\" | Float16 -> \"__fp16\" | s -> base_scalar_name s\n\n(* Round down to power-of-two alignment for ext_vector_type typedefs. *)\nlet clang_render_vector_prefix scalar_name (dt : Dtype.Val.t) =\n  let type_map = render_dtype_str scalar_name in\n  let scalar = type_map (Dtype.scalarize (Dtype.Val dt)) in\n  let vec = type_map (Dtype.Val dt) in\n  let alignment =\n    if (not aligned) || Dtype.Val.scalar dt = Bool then 1\n    else 1 lsl (int_of_float (log (float_of_int (Dtype.Val.itemsize dt)) /. log 2.0))\n  in\n  strf \"typedef %s %s __attribute__((aligned(%d),ext_vector_type(%d)));\"\n    scalar vec alignment (Dtype.Val.count dt)\n\nlet clang_render_kernel lang name kernel bufs program =\n  let used = collect_used_dtypes program in\n  let vec_defs = List.filter_map (fun dt ->\n    if Dtype.Val.count dt > 1 then Some (clang_render_vector_prefix clang_scalar_name dt)\n    else None) used\n  in\n  let type_map = render_dtype_str clang_scalar_name in\n  let wmma_defs = List.concat_map (fun wi ->\n    let n, m, _ = wi.wi_dims in\n    let out = type_map (Dtype.vec (n * n) (Dtype.of_scalar wi.wi_dtype_in)) in\n    let dt1 = type_map (Dtype.vec n (Dtype.of_scalar wi.wi_dtype_in)) in\n    let dt2 = type_map (Dtype.vec m (Dtype.of_scalar wi.wi_dtype_in)) in\n    [ {|#define AMX_SET(imm5) __asm(\"nop\\\\nnop\\\\nnop\\\\n.word (0x201000+(%0<<5)+%1)\" : : \"i\"(17), \"i\"(imm5) : \"memory\")|};\n      {|#define AMX(op, gpr, btf) __asm(\".word (0x201000+(%0 << 5)+0%1-((0%1>>4)*6))\" : : \"i\"(op), \"r\"((unsigned long long)(gpr)+(btf)) : \"memory\")|};\n      strf {|static %s __%s(%s data1, %s data2, %s data0){\n  AMX_SET(0);\n  for(int ridx0 = 0; ridx0 < 16; ridx0++){ AMX(4, (int *)(&data0), 0ull<<62 | 
(ridx0*4ull)<<56 | ridx0*64ull); }\n  AMX(0, (int *)(&data2), 0ull<<62); AMX(1, (int *)(&data1), 0ull<<62); AMX(12, 0, 0ull);\n  for(int ridx0 = 0; ridx0 < 16; ridx0++){ AMX(5, (int *)(&data0), 0ull<<62 | (ridx0*4ull)<<56 | ridx0*64ull); }\n  AMX_SET(1);\n  return data0;\n}|} out wi.wi_name dt1 dt2 out ]\n  ) (collect_wmma_args program)\n  in\n  let prefix = String.concat \"\\n\" (vec_defs @ wmma_defs) in\n  let body = default_render_kernel lang name kernel bufs program in\n  if prefix = \"\" then body else prefix ^ \"\\n\" ^ body\n\nlet clang_code_for_op = { base_code_for_op with\n  unary = (fun op x dt -> match op with\n    | `Sqrt ->\n        strf \"%s(%s)\" (if Dtype.scalar dt = Float64 then \"__builtin_sqrt\" else \"__builtin_sqrtf\") x\n    | `Trunc ->\n        strf \"%s(%s)\" (if Dtype.scalar dt = Float64 then \"__builtin_trunc\" else \"__builtin_truncf\") x\n    | _ -> base_code_for_op.unary op x dt);\n  binary = (fun op a b dt -> match op with\n    | `Fdiv -> strf \"(%s/%s)\" a b\n    | _ -> base_code_for_op.binary op a b dt);\n}\n\nlet clang_lang = make_lang ~scalar_name:clang_scalar_name\n  ~buffer_suffix:\" restrict\" ~gep_arr_threshold:0\n  ~float4_ctor:(fun tm dt -> strf \"(%s)\" (tm dt)) ~float4_style:(\"{\", \"}\")\n  ~infinity:{|__builtin_inff()|} ~nan_:{|__builtin_nanf(\"\")|}\n  ~code_for_op:clang_code_for_op\n  ~render_kernel_hook:clang_render_kernel\n  ()\n\n(* Clang fixed-ABI wrapper — generates a public entry point that unpacks\n   bufs/vals arrays into the kernel's typed parameters. 
*)\n\nlet clang_abi_wrapper name bufs =\n  let inner = name ^ \"_\" in\n  let buf_idx = ref 0 in\n  let val_idx = ref 0 in\n  let call_args = List.map (fun b -> match b.buf_kind with\n    | Buf_ptr dtype | Buf_image dtype ->\n        let c_type = base_scalar_name (Dtype.Val.scalar (Dtype.Ptr.base dtype)) in\n        let arg = strf \"(%s*)bufs[%d]\" c_type !buf_idx in\n        incr buf_idx; arg\n    | Buf_int ->\n        let arg = strf \"vals[%d]\" !val_idx in\n        incr val_idx; arg\n  ) bufs in\n  strf \"void %s(const unsigned long long *bufs, const long long *vals) {\\n  %s(%s);\\n}\"\n    name inner (String.concat \", \" call_args)\n\nlet clang_abi_render_kernel lang name kernel bufs program =\n  let inner_name = name ^ \"_\" in\n  let body = clang_render_kernel lang inner_name kernel bufs program in\n  let wrapper = clang_abi_wrapper name bufs in\n  body ^ \"\\n\" ^ wrapper\n\nlet clang_abi_lang = make_lang ~scalar_name:clang_scalar_name\n  ~kernel_typedef:(fun _ -> \"static void\")\n  ~buffer_suffix:\" restrict\" ~gep_arr_threshold:0\n  ~float4_ctor:(fun tm dt -> strf \"(%s)\" (tm dt)) ~float4_style:(\"{\", \"}\")\n  ~infinity:{|__builtin_inff()|} ~nan_:{|__builtin_nanf(\"\")|}\n  ~code_for_op:clang_code_for_op\n  ~render_kernel_hook:clang_abi_render_kernel\n  ()\n\n(* OpenCL *)\n\nlet opencl_scalar_name : scalar_name = function\n  | Dtype.Int8 -> \"char\" | Uint8 -> \"uchar\" | Uint16 -> \"ushort\"\n  | Uint32 -> \"uint\" | Uint64 -> \"ulong\" | Bfloat16 -> \"ushort\"\n  | s -> base_scalar_name s\n\nlet opencl_render_bitcast = render_bitcast_with \"as_%s((%s)(%s))\"\n\n(* bf16 constants rendered as their bit pattern since bf16 is stored as ushort *)\nlet opencl_bf16_const_rule : rule = fun _program _id v _lang _r ->\n  let open P in\n  match v with\n  | Const { value; dtype } when Dtype.Val.scalar dtype = Dtype.Bfloat16 -> (\n      match Const.view value with\n      | Float f -> Some (strf \"%uu\" (float_to_bf16_bits f))\n      | _ -> None)\n  | _ -> 
None\n\nlet opencl_render_kernel lang name kernel bufs program =\n  let has_half = List.exists (fun dt ->\n    Dtype.Val.scalar dt = Dtype.Float16) (collect_used_dtypes program) in\n  let prefix = if has_half then\n    \"#pragma OPENCL EXTENSION cl_khr_fp16 : enable\\n\" else \"\" in\n  prefix ^ default_render_kernel lang name kernel bufs program\n\nlet opencl_lang = make_lang ~scalar_name:opencl_scalar_name\n  ~kernel_typedef:(fun _ -> \"__kernel void\")\n  ~buffer_prefix:\"__global \"\n  ~smem_align:{|__attribute__ ((aligned (16))) |}\n  ~smem_prefix:\"__local \"\n  ~barrier:\"barrier(CLK_LOCAL_MEM_FENCE);\"\n  ~float4_ctor:(fun tm dt -> strf \"(%s)\" (tm dt))\n  ~render_bitcast:(fun tm -> fun p s dst v -> opencl_render_bitcast (fun dt -> tm (Dtype.Val dt)) p s (Dtype.val_of dst) v)\n  ~code_for_workitem:(fun dim ->\n    let a = Special_dim.axis dim in\n    match dim with\n    | Group_id _ -> strf \"get_group_id(%d)\" a\n    | Local_id _ -> strf \"get_local_id(%d)\" a\n    | Global_idx _ -> strf \"get_global_id(%d)\" a)\n  ~rules:[opencl_bf16_const_rule; base_render]\n  ~render_kernel_hook:opencl_render_kernel\n  ()\n\n(* Intel *)\n\nlet intel_bf16_cast_rule : rule = fun program _id v _lang r ->\n  let open P in\n  match v with\n  | Cast { src; dtype } -> (\n      match Dtype.Val.scalar dtype, Option.map Dtype.Val.scalar (P.dtype program src) with\n      | Dtype.Bfloat16, Some Float32 ->\n          Some (strf \"intel_convert_bfloat16_as_ushort(%s)\" r.(src))\n      | Float32, Some Bfloat16 ->\n          Some (strf \"intel_convert_as_bfloat16_float(%s)\" r.(src))\n      | _ -> None)\n  | _ -> None\n\nlet intel_render_kernel lang name kernel bufs program =\n  let prefix = List.map (fun wi ->\n    let dt_in_name, dt_in_sfx =\n      if wi.wi_dtype_in = Dtype.Bfloat16 then (\"ushort\", \"bf16\")\n      else (\"half\", \"f16\")\n    in\n    let dt_out = base_scalar_name wi.wi_dtype_out in\n    strf {|%s8 __%s(%s16 a, %s16 b, %s8 c) {\n    return 
intel_sub_group_%s_%s_matrix_mad_k16(as_int8(a), as_int8(b), c);\n}|} dt_out wi.wi_name dt_in_name dt_in_name dt_out dt_in_sfx dt_in_sfx\n  ) (collect_wmma_args program) in\n  let preamble = opencl_render_kernel lang name kernel bufs program in\n  if prefix = [] then preamble\n  else String.concat \"\\n\" prefix ^ \"\\n\" ^ preamble\n\nlet intel_lang =\n  { opencl_lang with\n    kernel_typedef = (fun _ ->\n      \"__attribute__((intel_reqd_sub_group_size(8)))\\n__kernel void\");\n    rules = [intel_bf16_cast_rule; opencl_bf16_const_rule; base_render];\n    render_kernel_hook = intel_render_kernel;\n  }\n\n(* Metal *)\n\nlet metal_scalar_name : scalar_name = function\n  | Dtype.Bfloat16 -> \"bfloat\" | s -> base_scalar_name s\n\nlet metal_render_bitcast = render_bitcast_with \"as_type<%s>((%s)(%s))\"\n\nlet metal_render_kernel lang name kernel bufs program =\n  let type_map = render_dtype_str metal_scalar_name in\n  let prefix = [\"#include <metal_stdlib>\"; \"using namespace metal;\"] in\n  let wmma_prefix = List.map (fun wi ->\n    let dstr_out = type_map (Dtype.vec 2 (Dtype.of_scalar wi.wi_dtype_out)) in\n    let dstr_in = type_map (Dtype.vec 2 (Dtype.of_scalar wi.wi_dtype_in)) in\n    let simd_in = type_map (Dtype.of_scalar wi.wi_dtype_in) in\n    let simd_out = type_map (Dtype.of_scalar wi.wi_dtype_out) in\n    strf {|%s __%s(%s a, %s b, %s c){\n  simdgroup_%s8x8 mat_a, mat_b; simdgroup_%s8x8 mat_c;\n  mat_a.thread_elements()[0] = a[0]; mat_b.thread_elements()[0] = b[0]; mat_c.thread_elements()[0] = c[0];\n  mat_a.thread_elements()[1] = a[1]; mat_b.thread_elements()[1] = b[1]; mat_c.thread_elements()[1] = c[1];\n  simdgroup_multiply_accumulate(mat_c, mat_a, mat_b, mat_c);\n  return %s(mat_c.thread_elements()[0], mat_c.thread_elements()[1]);\n}|} dstr_out wi.wi_name dstr_in dstr_in dstr_out simd_in simd_out dstr_out\n  ) (collect_wmma_args program) in\n  let all_prefix = String.concat \"\\n\" (prefix @ wmma_prefix) ^ \"\\n\" in\n  all_prefix ^ 
default_render_kernel lang name kernel bufs program\n\nlet metal_lang = make_lang ~scalar_name:metal_scalar_name\n  ~kernel_typedef:(fun _ -> \"kernel void\")\n  ~buffer_prefix:\"device \"\n  ~smem_prefix:{|threadgroup __attribute__((aligned(16))) |}\n  ~arg_int_prefix:\"constant int&\"\n  ~barrier:\"threadgroup_barrier(mem_flags::mem_threadgroup);\"\n  ~float4_ctor:(fun tm dt -> tm dt)\n  ~render_bitcast:(fun tm -> fun p s dst v -> metal_render_bitcast (fun dt -> tm (Dtype.Val dt)) p s (Dtype.val_of dst) v)\n  ~extra_args:[\n    \"uint3 gid [[threadgroup_position_in_grid]]\";\n    \"uint3 lid [[thread_position_in_threadgroup]]\"]\n  ~code_for_workitem:(fun dim ->\n    let a = Special_dim.axis dim in\n    match dim with\n    | Group_id _ -> strf \"gid.%c\" (Char.chr (120 + a))\n    | Local_id _ -> strf \"lid.%c\" (Char.chr (120 + a))\n    | Global_idx _ -> failwith \"Metal does not support Global_idx specials\")\n  ~code_for_op:{ base_code_for_op with\n    unary = (fun op x _dt -> match op with\n      | `Sin -> strf \"precise::sin(%s)\" x\n      | _ -> base_code_for_op.unary op x _dt);\n  }\n  ~render_kernel_hook:metal_render_kernel\n  ()\n\n(* CUDA *)\n\nlet cuda_scalar_name : scalar_name = function\n  | Dtype.Bfloat16 -> \"nv_bfloat16\"\n  | Fp8e4m3 -> \"__nv_fp8_e4m3\" | Fp8e5m2 -> \"__nv_fp8_e5m2\"\n  | s -> base_scalar_name s\n\nlet is_half_or_bf16 (dt : Dtype.Val.t) =\n  match Dtype.Val.scalar dt with Float16 | Bfloat16 -> true | _ -> false\n\nlet cuda_hfn name (dt : Dtype.t) x =\n  strf \"%s(%s)\" (if is_half_or_bf16 (Dtype.val_of dt) then \"h\" ^ name else name) x\n\nlet cuda_render_bitcast = render_bitcast_with \"tg_bitcast<%s>((%s)(%s))\"\n\nlet cuda_render_vector_prefix scalar_name (dt : Dtype.Val.t) =\n  let type_map = render_dtype_str scalar_name in\n  let vec = type_map (Dtype.Val dt) in\n  let scal = type_map (Dtype.scalarize (Dtype.Val dt)) in\n  let nms = List.init (Dtype.Val.count dt) vec_elem_name in\n  let elems = String.concat \", \" nms in\n  
let header = String.concat \", \"\n    (List.map (fun x -> strf \"%s %s\" scal x) nms) in\n  strf \"struct __align__(%d) %s { %s %s; }; __device__ %s make_%s(%s) { %s r={%s}; return r; }\"\n    (Dtype.Val.itemsize dt) vec scal elems vec vec header vec elems\n\nlet cuda_dt_map_in = function\n  | Dtype.Float32 -> \"tf32\" | Float16 -> \"f16\" | Bfloat16 -> \"bf16\"\n  | Fp8e4m3 -> \"e4m3\" | Fp8e5m2 -> \"e5m2\" | _ -> \"f32\"\n\nlet cuda_dt_map_out = function\n  | Dtype.Float32 -> \"f32\" | Float16 -> \"f16\" | _ -> \"f32\"\n\nlet cuda_render_kernel lang name kernel bufs program =\n  let type_map = render_dtype_str cuda_scalar_name in\n  let prefix = ref [\n    {|#define INFINITY (__int_as_float(0x7f800000))|};\n    {|#define NAN (__int_as_float(0x7fffffff))|};\n    {|template <class T, class F> __device__ __forceinline__ T tg_bitcast(F v) { union U { F f; T t; }; U u; u.f = v; return u.t; }|};\n  ] in\n  let used = collect_used_dtypes program in\n  if List.exists (fun dt ->\n    let s = Dtype.Val.scalar dt in s = Fp8e4m3 || s = Fp8e5m2) used then\n    prefix := !prefix @ [\"#include <cuda_fp8.h>\"];\n  if List.exists (fun dt -> Dtype.Val.scalar dt = Float16) used then\n    prefix := !prefix @ [\"#include <cuda_fp16.h>\"];\n  if List.exists (fun dt -> Dtype.Val.scalar dt = Bfloat16) used then\n    prefix := !prefix @ [\"#include <cuda_bf16.h>\"];\n  (* vector type prefixes for half/bf16 count 4,8 and fp8 count 2,4,8,16 *)\n  let vec_defs = List.filter_map (fun dt ->\n    let need = match Dtype.Val.scalar dt with\n      | Float16 | Bfloat16 -> List.mem (Dtype.Val.count dt) [4; 8]\n      | Fp8e4m3 | Fp8e5m2 -> List.mem (Dtype.Val.count dt) [2; 4; 8; 16]\n      | _ -> false\n    in\n    if need then Some (cuda_render_vector_prefix cuda_scalar_name dt)\n    else None) used\n  in\n  prefix := !prefix @ vec_defs;\n  (* WMMA preambles *)\n  List.iter (fun wi ->\n    let n, m, k = wi.wi_dims in\n    let ua, ub, uc = wi.wi_upcast_axes in\n    let upcast_sizes = [prod 
(List.map snd ua); prod (List.map snd ub); prod (List.map snd uc)] in\n    let wmma_dtypes = List.map2 (fun dt size ->\n      type_map (Dtype.vec size (Dtype.of_scalar dt)))\n      [wi.wi_dtype_in; wi.wi_dtype_in; wi.wi_dtype_out] upcast_sizes in\n    (* number of 32-bit registers per operand *)\n    let n_operands = List.map2 (fun dt size ->\n      size * Dtype.itemsize (Dtype.of_scalar dt) / 4)\n      [wi.wi_dtype_in; wi.wi_dtype_in; wi.wi_dtype_out] upcast_sizes in\n    let total = List.fold_left (+) 0 n_operands in\n    let operands = List.init total (fun i -> strf \"%%%d\" i) in\n    let nc = List.nth n_operands 2 in\n    let na = List.nth n_operands 0 in\n    let nb = List.nth n_operands 1 in\n    let slice from len = List.filteri (fun i _ -> i >= from && i < from + len) operands in\n    let join = String.concat \", \" in\n    prefix := !prefix @ [\n      strf {|__device__ %s __%s(%s a, %s b, %s c){\n  int *a_pk = (int *)(&a), *b_pk = (int *)(&b), *c_pk = (int *)(&c);\n  asm(\"mma.sync.aligned.m%dn%dk%d.row.col.%s.%s.%s.%s\"\n      \"{%s}, {%s},\"\n      \"{%s}, {%s};\"\n    : %s\n    : %s, %s);\n  return c;\n}|}\n        (List.nth wmma_dtypes 2) wi.wi_name\n        (List.nth wmma_dtypes 0) (List.nth wmma_dtypes 1) (List.nth wmma_dtypes 2)\n        m n k\n        (cuda_dt_map_out wi.wi_dtype_out)\n        (cuda_dt_map_in wi.wi_dtype_in) (cuda_dt_map_in wi.wi_dtype_in)\n        (cuda_dt_map_out wi.wi_dtype_out)\n        (join (slice 0 nc)) (join (slice nc na))\n        (join (slice (nc + na) nb)) (join (slice 0 nc))\n        (join (List.init nc (fun i -> strf {|\"+r\"(c_pk[%d])|} i)))\n        (join (List.init na (fun i -> strf {|\"r\"(a_pk[%d])|} i)))\n        (join (List.init nb (fun i -> strf {|\"r\"(b_pk[%d])|} i)))\n    ]\n  ) (collect_wmma_args program);\n  let preamble = String.concat \"\\n\" !prefix ^ \"\\n\" in\n  preamble ^ default_render_kernel lang name kernel bufs program\n\nlet cuda_lang = make_lang ~scalar_name:cuda_scalar_name\n  
~kernel_typedef:(fun lb ->\n    strf {|extern \"C\" __global__ void __launch_bounds__(%d)|} lb)\n  ~smem_prefix:\"__shared__ __align__(16) \"\n  ~smem_prefix_for_cast:false\n  ~barrier:\"__syncthreads();\"\n  ~float4_ctor:(fun tm dt -> strf \"make_%s\" (tm dt))\n  ~render_bitcast:(fun tm -> fun p s dst v -> cuda_render_bitcast (fun dt -> tm (Dtype.Val dt)) p s (Dtype.val_of dst) v)\n  ~gep_arr_threshold:8\n  ~infinity:\"INFINITY\"\n  ~nan_:\"NAN\"\n  ~code_for_workitem:(fun dim ->\n    let a = Special_dim.axis dim in\n    let c = Char.chr (120 + a) in\n    match dim with\n    | Group_id _ -> strf \"blockIdx.%c\" c\n    | Local_id _ -> strf \"threadIdx.%c\" c\n    | Global_idx _ -> strf \"(blockIdx.%c*blockDim.%c+threadIdx.%c)\" c c c)\n  ~code_for_op:{ base_code_for_op with\n    unary = (fun op x dt -> match op with\n      | `Exp2 -> cuda_hfn \"exp2\" dt x | `Log2 -> cuda_hfn \"log2\" dt x\n      | `Sin -> cuda_hfn \"sin\" dt x | `Sqrt -> cuda_hfn \"sqrt\" dt x\n      | `Trunc -> cuda_hfn \"trunc\" dt x\n      | `Recip -> if is_half_or_bf16 (Dtype.val_of dt) then strf \"hrcp(%s)\" x else strf \"(1/%s)\" x\n      | _ -> base_code_for_op.unary op x dt);\n  }\n  ~render_kernel_hook:cuda_render_kernel\n  ()\n\n(* AMD HIP *)\n\nlet amd_scalar_name : scalar_name = function\n  | Dtype.Bfloat16 -> \"hip_bfloat16\"\n  | Fp8e4m3 -> \"hip_fp8\" | Fp8e5m2 -> \"hip_bf8\"\n  | s -> base_scalar_name s\n\nlet ocml op (dt : Dtype.Val.t) x =\n  let bits = match Dtype.Val.scalar dt with Float16 -> 16 | Float64 -> 64 | _ -> 32 in\n  strf \"__ocml_%s_f%d(%s)\" op bits x\n\nlet fp8_index = function\n  | Dtype.Fp8e5m2 -> 1 | _ -> 0\n\nlet amd_render_vector_prefix scalar_name (dt : Dtype.Val.t) =\n  let type_map = render_dtype_str scalar_name in\n  let vec = type_map (Dtype.Val dt) in\n  let scal = type_map (Dtype.scalarize (Dtype.Val dt)) in\n  let nms = List.init (Dtype.Val.count dt) vec_elem_name in\n  strf \"typedef %s %s __attribute__((ext_vector_type(%d)));\\n\\\n        static 
inline __attribute__((device)) %s make_%s(%s) { return { %s }; }\"\n    scal vec (Dtype.Val.count dt) vec vec\n    (String.concat \", \" (List.map (fun x -> strf \"%s %s\" scal x) nms))\n    (String.concat \", \" nms)\n\nlet amd_wmma_type_map = function\n  | Dtype.Bfloat16 -> \"bf16\" | Float32 -> \"f32\" | Float16 -> \"f16\"\n  | Fp8e4m3 -> \"_fp8_fp8\" | Fp8e5m2 -> \"_bf8_bf8\" | _ -> \"f32\"\n\nlet amd_wmma_out_map = function\n  | Dtype.Float32 -> \"f32\" | Float16 -> \"f16\" | _ -> \"f32\"\n\n(* CDNA WMMA rule: k=128 gets fp8_index args + 4 zeros, others get 3 zeros *)\nlet amd_cdna_wmma_rule : rule = fun program _id v _lang r ->\n  let open P in\n  match v with\n  | Wmma { name; a; b; c; dims = (_, _, k); dtype_in; _ } when k = 128 ->\n      let fi = fp8_index dtype_in in\n      Some (strf \"__%s(%s, %s, %s, %d, %d, 0, 0, 0, 0)\" name r.(a) r.(b) r.(c) fi fi)\n  | Wmma { name; a; b; c; _ } ->\n      Some (strf \"__%s(%s, %s, %s, 0, 0, 0)\" name r.(a) r.(b) r.(c))\n  | _ -> None\n\nlet amd_cdna_fp8_cast_rule : rule = fun program _id v _lang r ->\n  let open P in\n  match v with\n  | Cast { src; dtype } -> (\n      let dst_s = Dtype.Val.scalar dtype in\n      let src_s = Option.map Dtype.Val.scalar (P.dtype program src) in\n      match dst_s, src_s with\n      | (Fp8e4m3 | Fp8e5m2), Some Float32 ->\n          Some (strf \"f32_to_fp8(%s, %d)\" r.(src) (fp8_index dst_s))\n      | Float32, Some ((Fp8e4m3 | Fp8e5m2) as s) ->\n          let cvt = if s = Fp8e5m2 then \"bf8\" else \"fp8\" in\n          Some (strf \"__builtin_amdgcn_cvt_f32_%s((unsigned int)%s, 0)\" cvt r.(src))\n      | _ -> None)\n  | _ -> None\n\nlet amd_render_kernel ~cdna ~cdna4 ~rdna4 ~tensor_cores:_ scalar_name\n    lang name kernel bufs program =\n  let open P in\n  let prefix = ref [] in\n  let ockl = ref [] in\n  let used = collect_used_dtypes program in\n  let has_non_finite = ref false in\n  let has_specials = ref false in\n  P.iteri (fun _id v -> match v with\n    | Const { value; _ } -> 
(match Const.view value with\n        | Float f when not (Float.is_finite f) -> has_non_finite := true\n        | _ -> ())\n    | Special _ -> has_specials := true\n    | _ -> ()) program;\n  if !has_non_finite then\n    prefix := [{|#define INFINITY (__builtin_inff())|}; {|#define NAN (__builtin_nanf(\"\"))|}];\n  if !has_specials then begin\n    prefix := !prefix @ [\"typedef long unsigned int size_t;\"];\n    ockl := List.map (fun n ->\n      strf {|extern \"C\" __attribute__((device, const)) unsigned int __ockl_get_%s(size_t);|} n\n    ) [\"local_id\"; \"group_id\"; \"local_size\"]\n  end;\n  (* OCML math function declarations *)\n  let ocml_ops = [(`Exp2, \"exp2\", \"pure\"); (`Log2, \"log2\", \"pure\");\n    (`Sqrt, \"sqrt\", \"const\"); (`Sin, \"sin\", \"\"); (`Trunc, \"trunc\", \"\")] in\n  let ocml_decls = ref [] in\n  P.iteri (fun _id v -> match v with\n    | Unary { op; dtype; _ } ->\n        List.iter (fun (tag, name, attr) ->\n          if op = tag && (Dtype.Val.scalar dtype = Float16 || Dtype.Val.scalar dtype = Float32 || Dtype.Val.scalar dtype = Float64) then\n            let bits = match Dtype.Val.scalar dtype with Float16 -> 16 | Float64 -> 64 | _ -> 32 in\n            let dt_name = base_scalar_name (Dtype.Val.scalar dtype) in\n            let decl = strf {|extern \"C\" __attribute__((device%s)) %s __ocml_%s_f%d(%s);|}\n              (if attr = \"\" then \"\" else \", \" ^ attr) dt_name name bits dt_name in\n            ocml_decls := decl :: !ocml_decls\n        ) ocml_ops\n    | _ -> ()) program;\n  prefix := !prefix @ !ockl @ dedup (List.rev !ocml_decls);\n  (* Type definitions *)\n  if List.exists (fun dt -> Dtype.Val.scalar dt = Bfloat16) used then\n    prefix := !prefix @\n      [strf \"typedef %s hip_bfloat16;\" (if cdna4 then \"__bf16\" else \"unsigned short\")];\n  if List.exists (fun dt -> Dtype.Val.scalar dt = Float16) used then\n    prefix := !prefix @ [\"#define half _Float16\"];\n  if List.exists (fun dt ->\n    let s = 
Dtype.Val.scalar dt in s = Fp8e4m3 || s = Fp8e5m2) used then begin\n    prefix := !prefix @ [\"typedef unsigned char hip_bf8;\"; \"typedef unsigned char hip_fp8;\"];\n    prefix := !prefix @ [{|static inline __attribute__((device)) unsigned char f32_to_fp8(float v, int is_bf8) {\n  v = (((*(unsigned*)&v)&0x7F800000)!=0x7F800000)?__builtin_amdgcn_fmed3f(v,is_bf8?57344.0f:448.0f,is_bf8?-57344.0f:-448.0f) : v;\n  return (unsigned char)(is_bf8?__builtin_amdgcn_cvt_pk_bf8_f32(v,v,0,false):__builtin_amdgcn_cvt_pk_fp8_f32(v,v,0,false));\n}|}]\n  end;\n  (* Vector type prefixes *)\n  prefix := !prefix @ List.filter_map (fun dt ->\n    if Dtype.Val.count dt > 1 then Some (amd_render_vector_prefix scalar_name dt)\n    else None) used;\n  (* WMMA defines *)\n  let wmma_type_map = ref [(Dtype.Bfloat16, \"bf16\"); (Dtype.Float32, \"f32\"); (Dtype.Float16, \"f16\");\n    (Dtype.Fp8e4m3, \"_fp8_fp8\"); (Dtype.Fp8e5m2, \"_bf8_bf8\")] in\n  List.iter (fun wi ->\n    let n, m, k = wi.wi_dims in\n    let type_in = List.assoc wi.wi_dtype_in !wmma_type_map in\n    let type_out = amd_wmma_out_map wi.wi_dtype_out in\n    if cdna then begin\n      (if (n, m, k) = (16, 16, 16) && wi.wi_dtype_in = Dtype.Bfloat16 then\n        wmma_type_map := (Dtype.Bfloat16, \"bf16_1k\") :: !wmma_type_map);\n      (if (n, m, k) = (16, 16, 32) then\n        wmma_type_map := (Dtype.Bfloat16, \"_bf16\") :: (Dtype.Float16, \"_f16\") :: !wmma_type_map);\n      (if (n, m, k) = (16, 16, 128) then\n        wmma_type_map := (Dtype.Fp8e4m3, \"_f8f6f4\") :: (Dtype.Fp8e5m2, \"_f8f6f4\") :: !wmma_type_map);\n      let type_in' = List.assoc wi.wi_dtype_in !wmma_type_map in\n      let scale = if k = 128 then \"scale_\" else \"\" in\n      prefix := !prefix @\n        [strf \"#define __%s __builtin_amdgcn_mfma_%s%s_%dx%dx%d%s\"\n           wi.wi_name scale type_out n m k type_in']\n    end else if rdna4 then\n      prefix := !prefix @\n        [strf \"#define __%s __builtin_amdgcn_wmma_%s_16x16x16_%s_w32_gfx12\"\n         
  wi.wi_name type_out type_in]\n    else if wi.wi_dtype_out = Float32 then\n      prefix := !prefix @\n        [strf \"#define __%s __builtin_amdgcn_wmma_f32_16x16x16_%s_w32\"\n           wi.wi_name (if wi.wi_dtype_in = Float16 then \"f16\" else \"bf16\")]\n    else\n      prefix := !prefix @\n        [strf {|static inline __attribute__((device)) half8 __%s(half16 a, half16 b, half8 c) {\n  half16 c_frag = {}; half8 d; for (int n = 0; n < 8; n++) { c_frag[n*2] = c[n]; }\n  c_frag = __builtin_amdgcn_wmma_f16_16x16x16_f16_w32(a, b, c_frag, false);\n  for (int n = 0; n < 8; n++) { d[n] = c_frag[n*2]; } return d;\n}|} wi.wi_name]\n  ) (collect_wmma_args program);\n  let preamble = String.concat \"\\n\" !prefix ^ \"\\n\" in\n  preamble ^ default_render_kernel lang name kernel bufs program\n\nlet amd_lang ~cdna ~cdna4 ~rdna4 ~tensor_cores =\n  let scalar_name = amd_scalar_name in\n  let base_rules =\n    if cdna then [amd_cdna_fp8_cast_rule; amd_cdna_wmma_rule; base_render]\n    else [base_render]\n  in\n  make_lang ~scalar_name\n    ~kernel_typedef:(fun lb ->\n      strf {|extern \"C\" __attribute__((global)) void __attribute__((amdgpu_flat_work_group_size(1, %d)))|} lb)\n    ~smem_prefix:{|__attribute__((shared, aligned(16)))|}\n    ~smem_prefix_for_cast:false\n    ~float4_ctor:(fun tm dt -> strf \"make_%s\" (tm dt))\n    ~barrier:(\n      {|__builtin_amdgcn_fence(__ATOMIC_RELEASE, \"workgroup\");|} ^\n      \"__builtin_amdgcn_s_barrier();\" ^\n      {|__builtin_amdgcn_fence(__ATOMIC_ACQUIRE, \"workgroup\");|})\n    ~code_for_workitem:(fun dim ->\n      let a = Special_dim.axis dim in\n      match dim with\n      | Group_id _ -> strf \"__ockl_get_group_id(%d)\" a\n      | Local_id _ -> strf \"__ockl_get_local_id(%d)\" a\n      | Global_idx _ ->\n          strf \"(__ockl_get_group_id(%d)*__ockl_get_local_size(%d)+__ockl_get_local_id(%d))\" a a a)\n    ~code_for_op:{ base_code_for_op with\n      unary = (fun op x dt -> match op with\n        | `Exp2 -> ocml \"exp2\" 
(Dtype.val_of dt) x | `Log2 -> ocml \"log2\" (Dtype.val_of dt) x\n        | `Sin -> ocml \"sin\" (Dtype.val_of dt) x | `Sqrt -> ocml \"sqrt\" (Dtype.val_of dt) x\n        | `Trunc -> ocml \"trunc\" (Dtype.val_of dt) x\n        | _ -> base_code_for_op.unary op x dt);\n    }\n    ~rules:base_rules\n    ~render_kernel_hook:(amd_render_kernel ~cdna ~cdna4 ~rdna4 ~tensor_cores scalar_name)\n    ()\n\n(* Exported renderers *)\n\n(* Clang extra_matcher: f64/bf16→f16/bf16 cast through f32, devectorize\n   sqrt/trunc, non-native bf16 ALU, manual bf16 cast, base extra_pm *)\nlet clang_extra_matcher =\n  K.first_match [\n    (* LLVM can't legalize f64→f16/bf16 or bf16→f16 on some CPUs *)\n    (fun node -> match K.view node with\n      | Cast { src; dtype }\n        when (Dtype.scalar (K.dtype src) = Float64 || Dtype.scalar (K.dtype src) = Bfloat16)\n             && (Dtype.scalar dtype = Float16 || Dtype.scalar dtype = Bfloat16)\n             && Dtype.scalar (K.dtype src) <> Dtype.scalar dtype ->\n          Some (K.cast ~src:(K.cast ~src ~dtype:(Dtype.of_scalar Float32)) ~dtype)\n      | _ -> None);\n    (* sqrt/trunc can't be vectorized on Clang *)\n    (fun node -> match K.view node with\n      | Unary { op = (`Sqrt | `Trunc); _ }\n        when Dtype.vcount (K.dtype node) > 1 ->\n          Devectorizer.no_vectorized_alu node\n      | _ -> None);\n    create_non_native_float_pats [Bfloat16];\n    pm_manual_bf16_cast;\n    extra_pm;\n  ]\n\nlet clang =\n  Renderer.make ~code_for_op:base_code_for_op_list\n    ~name:\"clang\" ~device:\"CPU\"\n    ~has_local:false ~has_shared:false ~shared_max:0\n    ~has_threads:threads\n    ~tensor_cores:(if amx then Tc.amx else [])\n    ~extra_matcher:clang_extra_matcher\n    ~render:(render_fn clang_abi_lang) ()\n\nlet clang_no_abi =\n  Renderer.make ~code_for_op:base_code_for_op_list\n    ~name:\"clang\" ~device:\"CPU\"\n    ~has_local:false ~has_shared:false ~shared_max:0\n    ~has_threads:threads\n    ~tensor_cores:(if amx then Tc.amx else 
[])\n    ~extra_matcher:clang_extra_matcher\n    ~render:(render_fn clang_lang) ()\n\n(* OpenCL extra_matcher: non-native bf16 ALU, manual bf16 cast, extra_pm *)\nlet opencl_extra_matcher =\n  K.first_match [\n    create_non_native_float_pats [Bfloat16];\n    pm_manual_bf16_cast;\n    extra_pm;\n  ]\n\nlet opencl =\n  Renderer.make ~code_for_op:base_code_for_op_list\n    ~name:\"opencl\" ~device:\"CL\"\n    ~has_local:true ~has_shared:true ~shared_max:(32 * 1024)\n    ~extra_matcher:opencl_extra_matcher\n    ~render:(render_fn opencl_lang) ()\n\nlet intel =\n  Renderer.make ~code_for_op:base_code_for_op_list\n    ~name:\"intel\" ~device:\"CL\"\n    ~tensor_cores:Tc.intel\n    ~has_local:true ~has_shared:true ~shared_max:(32 * 1024)\n    ~extra_matcher:opencl_extra_matcher\n    ~render:(render_fn intel_lang) ()\n\nlet qcom =\n  Renderer.make ~code_for_op:base_code_for_op_list\n    ~name:\"qcom\" ~device:\"QCOM\"\n    ~has_local:true ~has_shared:true ~shared_max:(32 * 1024)\n    ~extra_matcher:opencl_extra_matcher\n    ~render:(render_fn opencl_lang) ()\n\n(* Metal extra_matcher: bf16 transcendental promotion through f32, extra_pm *)\nlet metal_is_arm64 =\n  try\n    let ic = Unix.open_process_in \"uname -m\" in\n    let machine = String.trim (input_line ic) in\n    ignore (Unix.close_process_in ic);\n    machine = \"arm64\"\n  with _ -> false\n\nlet metal_extra_matcher =\n  K.first_match [\n    (* bf16 sqrt/exp2/log2/sin → promote through f32 *)\n    (fun node -> match K.view node with\n      | Unary { op = (`Sqrt | `Exp2 | `Log2 | `Sin); _ }\n        when Dtype.scalar (K.dtype node) = Bfloat16 ->\n          let f32 = Dtype.of_scalar Float32 in\n          let new_children = List.map (fun c -> K.cast ~src:c ~dtype:f32) (K.children node) in\n          let promoted = K.replace node ~children:new_children ~dtype:f32 () in\n          Some (K.cast ~src:promoted ~dtype:(K.dtype node))\n      | _ -> None);\n    extra_pm;\n  ]\n\nlet metal =\n  Renderer.make 
~code_for_op:base_code_for_op_list\n    ~name:\"metal\" ~device:\"METAL\"\n    ~tensor_cores:(if metal_is_arm64 then Tc.metal else [])\n    ~has_local:true ~has_shared:true ~shared_max:(32 * 1024)\n    ~extra_matcher:metal_extra_matcher\n    ~render:(render_fn metal_lang) ()\n\n(* CUDA extra_matcher: non-native fp8 ALU (no casting), fp8 cross-cast, extra_pm *)\nlet cuda_tensor_cores = function\n  | Gpu_target.SM75 -> Tc.cuda_sm75\n  | Gpu_target.SM80 -> Tc.cuda_sm80\n  | Gpu_target.SM89 -> Tc.cuda_sm89\n\nlet cuda_extra_matcher =\n  K.first_match [\n    create_non_native_float_pats ~casting:false [Fp8e4m3; Fp8e5m2];\n    (* fp8 → fp8 cross-cast through f32 *)\n    (fun node -> match K.view node with\n      | Cast { src; dtype }\n        when (Dtype.scalar dtype = Fp8e4m3 || Dtype.scalar dtype = Fp8e5m2)\n             && (Dtype.scalar (K.dtype src) = Fp8e4m3 || Dtype.scalar (K.dtype src) = Fp8e5m2)\n             && Dtype.scalar (K.dtype src) <> Dtype.scalar dtype ->\n          Some (K.cast ~src:(K.cast ~src ~dtype:(Dtype.of_scalar Float32)) ~dtype)\n      | _ -> None);\n    extra_pm;\n  ]\n\nlet cuda (arch : Gpu_target.cuda) =\n  Renderer.make ~code_for_op:base_code_for_op_list\n    ~name:\"cuda\" ~device:\"CUDA\"\n    ~tensor_cores:(cuda_tensor_cores arch)\n    ~has_local:true ~has_shared:true ~shared_max:(48 * 1024)\n    ~global_max:[0x7FFFFFFF; 65535; 65535]\n    ~local_max:[1024; 1024; 64]\n    ~extra_matcher:cuda_extra_matcher\n    ~render:(render_fn cuda_lang) ()\n\n(* AMD extra_matcher: varies by arch *)\nlet amd_tensor_cores = function\n  | Gpu_target.RDNA3 -> Tc.amd_rdna3\n  | Gpu_target.RDNA4 -> Tc.amd_rdna4\n  | Gpu_target.CDNA3 -> Tc.amd_cdna3\n  | Gpu_target.CDNA4 -> Tc.amd_cdna4\n\nlet amd (arch : Gpu_target.amd) =\n  let tensor_cores = amd_tensor_cores arch in\n  let cdna, cdna4, rdna4 = match arch with\n    | Gpu_target.CDNA3 -> (true, false, false)\n    | Gpu_target.CDNA4 -> (true, true, false)\n    | Gpu_target.RDNA4 -> (false, false, true)\n    | 
Gpu_target.RDNA3 -> (false, false, false)\n  in\n  let lang = amd_lang ~cdna ~cdna4 ~rdna4 ~tensor_cores in\n  (* CDNA4 skips manual_bf16_cast and extra_pm (native bf16 support) *)\n  let extra =\n    if cdna4 then\n      K.first_match [\n        create_non_native_float_pats [Bfloat16; Fp8e4m3; Fp8e5m2];\n      ]\n    else\n      K.first_match [\n        create_non_native_float_pats [Bfloat16; Fp8e4m3; Fp8e5m2];\n        pm_manual_bf16_cast;\n        extra_pm;\n      ]\n  in\n  Renderer.make ~code_for_op:base_code_for_op_list\n    ~name:\"amd\" ~device:\"AMD\"\n    ~tensor_cores\n    ~has_local:true ~has_shared:true ~shared_max:(64 * 1024)\n    ~global_max:[0x7FFFFFFF; 65535; 65535]\n    ~extra_matcher:extra\n    ~render:(render_fn lang) ()\n"
  },
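The CUDA matcher above routes fp8-to-fp8 casts through f32 via a first-match rule list. A self-contained sketch of that rewriting pattern, using a toy expression type rather than the real `Tolk_ir`/`K` API (all type and rule names here are illustrative):

```ocaml
(* Toy dtypes and expressions; stand-ins for the real Tolk IR nodes. *)
type dtype = Fp8e4m3 | Fp8e5m2 | Float32
type expr = Const of dtype | Cast of expr * dtype

let dtype_of = function Const d -> d | Cast (_, d) -> d
let is_fp8 = function Fp8e4m3 | Fp8e5m2 -> true | Float32 -> false

(* first_match: try each rule in order; return the first rewrite that fires. *)
let first_match rules node =
  List.fold_left
    (fun acc rule -> match acc with Some _ -> acc | None -> rule node)
    None rules

(* fp8 -> fp8 cross-casts are not native, so route them through f32. *)
let fp8_cross_cast node =
  match node with
  | Cast (src, dst)
    when is_fp8 dst && is_fp8 (dtype_of src) && dtype_of src <> dst ->
      Some (Cast (Cast (src, Float32), dst))
  | _ -> None

let matcher = first_match [ fp8_cross_cast ]

let () =
  match matcher (Cast (Const Fp8e4m3, Fp8e5m2)) with
  | Some (Cast (Cast (_, Float32), Fp8e5m2)) -> print_endline "rewritten"
  | _ -> print_endline "unchanged"
```

Rules returning `None` leave the node untouched, so later rules (like `extra_pm` in the real matcher list) get their turn.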
  {
    "path": "packages/tolk/lib/renderer/cstyle.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\n(** C-family language renderers.\n\n    {!Renderer.t} values for C-style GPU and CPU backends: CUDA, HIP, Metal,\n    OpenCL, and Clang. Each renderer converts a {!Tolk_ir.Program.t} into\n    backend-specific source code via {!Renderer.render}.\n\n    GPU renderers map {!Tolk_ir.Special_dim.t} values to backend-specific\n    workitem expressions (e.g., [blockIdx]/[threadIdx] for CUDA,\n    [get_global_id] for OpenCL, [gid]/[lid] for Metal). The CPU renderer\n    ({!clang}) has no GPU thread support.\n\n    See {!Renderer} for the renderer interface. *)\n\n(** {1:cpu CPU} *)\n\nval clang : Renderer.t\n(** [clang] is a Clang/CPU renderer with SIMD support.\n\n    Generates C code for host CPU execution using Clang extensions:\n    - [ext_vector_type] for SIMD vector types.\n    - [__builtin_convertvector] for vector casts.\n    - [__builtin_sqrtf], [__builtin_truncf], etc. for math.\n\n    Emits a fixed-ABI wrapper\n    ([void name(const unsigned long long *bufs, const long long *vals)]) around\n    the kernel to avoid a libffi dependency at JIT time.\n\n    Device is [\"CPU\"]. No GPU thread support ({!Renderer.has_local} is [false]).\n    No shared memory.\n\n    {b Note.} Reads environment variables at module initialization:\n    - [AMX]: set to [1] to enable Apple AMX tensor cores.\n    - [THREADS]: set to [0] to disable host-side threading (default: enabled).\n    - [CPU_COUNT]: override logical CPU count for thread pool size.\n\n    See also {!clang_no_abi}. 
*)\n\nval clang_no_abi : Renderer.t\n(** [clang_no_abi] is {!clang} without the fixed-ABI wrapper.\n\n    Generates a plain [void name(...)] signature with individual typed\n    parameters. Useful for testing and integration with runtimes that use native\n    calling conventions.\n\n    See also {!clang}. *)\n\n(** {1:opencl OpenCL} *)\n\nval opencl : Renderer.t\n(** [opencl] is an OpenCL renderer.\n\n    Generates OpenCL C code using [get_group_id], [get_local_id],\n    [get_global_id] for thread indexing. Kernel functions are annotated with\n    [__kernel], buffers with [__global], and shared memory with [__local].\n\n    Device is [\"CL\"]. Shared memory limit is 32KB. Bfloat16 is emulated via\n    promotion to float32.\n\n    See also {!intel} and {!qcom}. *)\n\nval intel : Renderer.t\n(** [intel] is an Intel OpenCL renderer.\n\n    {!opencl} variant with [intel_reqd_sub_group_size(8)] for sub-group WMMA\n    operations. Uses Intel-specific bf16 conversion intrinsics\n    ([intel_convert_bfloat16_as_ushort], [intel_convert_as_bfloat16_float])\n    instead of manual bit manipulation.\n\n    Device is [\"CL\"]. Shared memory limit is 32KB. Tensor cores use 8x8x16 tiles\n    with 8 threads.\n\n    See also {!opencl}. *)\n\nval qcom : Renderer.t\n(** [qcom] is a Qualcomm OpenCL renderer for Adreno GPUs.\n\n    Identical to {!opencl} in code generation. The separate renderer allows\n    device-specific scheduling in codegen passes.\n\n    Device is [\"QCOM\"]. Shared memory limit is 32KB. *)\n\n(** {1:metal Metal} *)\n\nval metal : Renderer.t\n(** [metal] is a Metal Shading Language renderer for Apple GPUs.\n\n    Generates MSL code with [threadgroup_position_in_grid] and\n    [thread_position_in_threadgroup] attributes for thread indexing. Uses\n    [threadgroup] storage for shared memory and [threadgroup_barrier] for\n    synchronization.\n\n    Device is [\"METAL\"]. 
Shared memory limit is 32KB.\n\n    {b Note.} Tensor cores (simdgroup 8x8 matrix operations) are only available\n    on arm64 Apple Silicon. On Intel Macs, no tensor cores are configured. *)\n\n(** {1:cuda CUDA} *)\n\nval cuda : Gpu_target.cuda -> Renderer.t\n(** [cuda arch] is a CUDA renderer for NVIDIA GPUs.\n\n    Generates CUDA C code using [blockIdx]/[threadIdx] for thread indexing. Uses\n    [extern \"C\" __global__] with [__launch_bounds__] when local dimensions are\n    known. Half-precision intrinsics ([hexp2], [hlog2], [hsqrt], etc.) are used\n    for float16 and bfloat16 transcendentals.\n\n    Device is [\"CUDA\"]. Shared memory limit is 48KB. Global grid max is\n    \\[2{^ 31}-1, 65535, 65535\\]. Local block max is \\[1024, 1024, 64\\].\n\n    [arch] selects tensor core configurations:\n    - {!Gpu_target.SM75}: 8x16x8 tiles, f16 input.\n    - {!Gpu_target.SM80}: 8x16x16 tiles (f16, bf16) + 8x16x8 (f16, tf32).\n    - {!Gpu_target.SM89}: {!Gpu_target.SM80} + 8x16x32 tiles for fp8. *)\n\n(** {1:amd AMD} *)\n\nval amd : Gpu_target.amd -> Renderer.t\n(** [amd arch] is an AMD HIP renderer.\n\n    Generates HIP code using OCKL work item functions ([__ockl_get_group_id],\n    [__ockl_get_local_id]) for thread indexing and OCML transcendentals\n    ([__ocml_*_f\\{16,32,64\\}]) for math. Uses [__builtin_amdgcn_fence] for\n    release-acquire barriers (unlike CUDA's [__syncthreads], AMD barriers do not\n    imply a fence). Bfloat16 is emulated via software bit manipulation. Fp8 uses\n    [__builtin_amdgcn_cvt_*] intrinsics.\n\n    Device is [\"AMD\"]. Shared memory limit is 64KB. Global grid max is\n    \\[2{^ 31}-1, 65535, 65535\\].\n\n    [arch] selects tensor core configurations:\n    - {!Gpu_target.RDNA3}: WMMA 16x16x16, 32-thread wavefront.\n    - {!Gpu_target.RDNA4}: WMMA 16x16x16, gfx12 builtins.\n    - {!Gpu_target.CDNA3}: MFMA fp8/bf16, 16x16x32/16, 64-thread wavefront.\n    - {!Gpu_target.CDNA4}: MFMA fp8/bf16/f16, 16x16x128/32/16. *)\n"
  },
  {
    "path": "packages/tolk/lib/renderer.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\n(* Types *)\n\n\n(* ALU operations a backend can provide custom rendering for. *)\ntype code_op =\n  | Sqrt\n  | Recip\n  | Neg\n  | Exp2\n  | Log2\n  | Sin\n  | Trunc\n  | And\n  | Xor\n  | Or\n  | Add\n  | Sub\n  | Mul\n  | Mod\n  | Idiv\n  | Cmpne\n  | Shr\n  | Shl\n  | Cmplt\n  | Where\n  | Cmpeq\n  | Fdiv\n  | Max\n  | Mulacc\n  | Threefry\n\nlet all_supported_ops : Tolk_ir.Decompositions.supported_ops =\n  {\n    has_exp2 = true; has_log2 = true; has_sin = true; has_sqrt = true;\n    has_recip = true; has_neg = true; has_sub = true; has_max = true;\n    has_shl = true; has_shr = true; has_and = true; has_or = true;\n    has_cmplt = true; has_cmpeq = true; has_fdiv = true;\n    has_threefry = true; has_mulacc = true;\n    disable_fast_idiv = false; force_transcendental = false;\n  }\n\nlet supported_ops_of_code_for_op (ops : code_op list) : Tolk_ir.Decompositions.supported_ops =\n  let has op = List.mem op ops in\n  {\n    has_exp2 = has Exp2; has_log2 = has Log2; has_sin = has Sin;\n    has_sqrt = has Sqrt; has_recip = has Recip; has_neg = has Neg;\n    has_sub = has Sub; has_max = has Max; has_shl = has Shl;\n    has_shr = has Shr; has_and = has And; has_or = has Or;\n    has_cmplt = has Cmplt; has_cmpeq = has Cmpeq; has_fdiv = has Fdiv;\n    has_threefry = has Threefry;\n    has_mulacc = has Mulacc;\n    disable_fast_idiv = false;\n    force_transcendental = false;\n  }\n\ntype t = {\n  name : string;\n  device : string;\n  compiler : Compiler.t option;\n  has_local : bool;\n  has_threads : bool;\n  has_shared : bool;\n  global_max : int list option;\n  global_prod_max : int list option;\n  local_max : int list 
option;\n  shared_max : int;\n  tensor_cores : Tc.t list;\n  supports_float4 : bool;\n  render : ?name:string -> Tolk_ir.Program.t -> string;\n  code_for_op : code_op list;\n  supported_ops : Tolk_ir.Decompositions.supported_ops;\n  pre_matcher : (Tolk_ir.Kernel.t -> Tolk_ir.Kernel.t option) option;\n  extra_matcher : (Tolk_ir.Kernel.t -> Tolk_ir.Kernel.t option) option;\n}\n\n(* Accessors *)\n\nlet name t = t.name\nlet device t = t.device\nlet compiler t = t.compiler\nlet has_local t = t.has_local\nlet has_threads t = t.has_threads\nlet has_shared t = t.has_shared\nlet global_max t = t.global_max\nlet global_prod_max t = t.global_prod_max\nlet local_max t = t.local_max\nlet shared_max t = t.shared_max\nlet tensor_cores t = t.tensor_cores\nlet supports_float4 t = t.supports_float4\nlet render t = t.render\nlet code_for_op t = t.code_for_op\nlet supported_ops t = t.supported_ops\nlet pre_matcher t = t.pre_matcher\nlet extra_matcher t = t.extra_matcher\n\n(* dtype support — checks whether the backend natively supports a given dtype\n   and lists float types that need emulation (promoted to a wider float). *)\nlet supports_dtype _t _dt = true\nlet emulated_float_dtypes _t : (Tolk_ir.Dtype.scalar * Tolk_ir.Dtype.scalar) list = []\n\n(* Construction *)\n\n(* 0x8FFFFFFF: effectively-unbounded sentinel for grid/block dimensions.\n   Backends with real hardware limits override it (e.g., CUDA\n   gridDim.x = 2^31-1). 
*)\nlet with_compiler compiler t = { t with compiler = Some compiler }\n\nlet make ?(tensor_cores = []) ?(supports_float4 = true)\n    ?(has_threads = false)\n    ?(global_max = [ 0x8FFFFFFF; 0x8FFFFFFF; 0x8FFFFFFF ])\n    ?global_prod_max\n    ?(local_max = [ 0x8FFFFFFF; 0x8FFFFFFF; 0x8FFFFFFF ])\n    ?(code_for_op = []) ?supported_ops ?compiler ?pre_matcher ?extra_matcher\n    ~name ~device ~has_local ~has_shared ~shared_max ~render () =\n  let supported_ops =\n    match supported_ops with\n    | Some ops -> ops\n    | None ->\n        if code_for_op = [] then all_supported_ops\n        else supported_ops_of_code_for_op code_for_op\n  in\n  {\n    name;\n    device;\n    compiler;\n    has_local;\n    has_threads;\n    has_shared;\n    global_max = Some global_max;\n    global_prod_max;\n    local_max = Some local_max;\n    shared_max;\n    tensor_cores;\n    supports_float4;\n    render;\n    code_for_op;\n    supported_ops;\n    pre_matcher;\n    extra_matcher;\n  }\n"
  },
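`make` derives `supported_ops` from `code_for_op` by simple list membership. A condensed sketch of that derivation with a toy three-flag record (the real `Tolk_ir.Decompositions.supported_ops` has one flag per decomposable op):

```ocaml
type code_op = Sqrt | Exp2 | Sin | Max

(* Three of the capability flags; ops without a flag are always required
   and never decomposed. *)
type supported = { has_sqrt : bool; has_exp2 : bool; has_sin : bool }

let supported_of_ops ops =
  let has op = List.mem op ops in
  { has_sqrt = has Sqrt; has_exp2 = has Exp2; has_sin = has Sin }

(* A backend rendering sqrt/exp2/max natively still gets sin decomposed. *)
let caps = supported_of_ops [ Sqrt; Exp2; Max ]

let () =
  (* prints sqrt=true exp2=true sin=false *)
  Printf.printf "sqrt=%b exp2=%b sin=%b\n" caps.has_sqrt caps.has_exp2 caps.has_sin
```

This is why passing `~code_for_op:[]` falls back to `all_supported_ops`: an empty list would otherwise mark every decomposable op as unsupported.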
  {
    "path": "packages/tolk/lib/renderer.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\n(** GPU kernel renderer.\n\n    A renderer converts {!Tolk_ir.Program.t} programs to backend-specific source\n    code and owns its {!Compiler.t}. The abstract type {!type-t} encapsulates\n    target capabilities (memory hierarchy, grid limits, supported operations),\n    a rendering function, and an optional compiler.\n\n    Backends construct renderers via {!make}, supplying only the fields that\n    differ from the defaults. The compiler is typically attached by the\n    device backend via {!with_compiler} or the [?compiler] parameter of\n    {!make}.\n\n    See {!Cstyle} for C-family language backends (CUDA, Metal, OpenCL, HIP,\n    Clang). *)\n\n(** {1:types Types} *)\n\n(** ALU operations that a backend can provide custom rendering for.\n\n    The decomposition pass uses {!type-supported_ops} to decide which composite\n    operations to lower; {!val-code_for_op} lists the operations a renderer\n    handles natively.\n\n    Operations without a corresponding flag in {!type-supported_ops} ([Add],\n    [Mul], [Mod], [Idiv], [Cmpne], [Xor], [Where], [Trunc]) are always\n    required and never decomposed. *)\ntype code_op =\n  | Sqrt  (** Square root. *)\n  | Recip  (** Reciprocal ([1/x]). *)\n  | Neg  (** Arithmetic negation. *)\n  | Exp2  (** Base-2 exponential. *)\n  | Log2  (** Base-2 logarithm. *)\n  | Sin  (** Sine. *)\n  | Trunc  (** Truncation to integer. *)\n  | And  (** Bitwise AND. *)\n  | Xor  (** Bitwise XOR. *)\n  | Or  (** Bitwise OR. *)\n  | Add  (** Addition. *)\n  | Sub  (** Subtraction. *)\n  | Mul  (** Multiplication. *)\n  | Mod  (** Modulo. *)\n  | Idiv  (** Integer division. 
*)\n  | Cmpne  (** Not-equal comparison. *)\n  | Shr  (** Bitwise right shift. *)\n  | Shl  (** Bitwise left shift. *)\n  | Cmplt  (** Less-than comparison. *)\n  | Where  (** Ternary select ([cond ? a : b]). *)\n  | Cmpeq  (** Equality comparison. *)\n  | Fdiv  (** Floating-point division. *)\n  | Max  (** Maximum. *)\n  | Mulacc  (** Fused multiply-accumulate. *)\n  | Threefry  (** Threefry 2x32 PRNG mixing function. *)\n\n(** {2:supported_ops Supported operations} *)\n\nval all_supported_ops : Tolk_ir.Decompositions.supported_ops\n(** [all_supported_ops] marks every decomposable operation as natively\n    supported. *)\n\nval supported_ops_of_code_for_op : code_op list -> Tolk_ir.Decompositions.supported_ops\n(** [supported_ops_of_code_for_op ops] derives capability flags from a list of\n    natively rendered operations. An operation absent from [ops] is marked\n    unsupported. *)\n\n(** {1:renderer Renderer} *)\n\ntype t\n(** The type for renderers. *)\n\n(** {1:properties Properties} *)\n\nval name : t -> string\n(** [name r] is the renderer name (e.g., [\"metal\"], [\"cuda\"]). *)\n\nval device : t -> string\n(** [device r] is the target device identifier (e.g., [\"CUDA\"], [\"AMD\"],\n    [\"CPU\"]). Passed as context to codegen rewrite passes for device-specific\n    transformations. *)\n\nval compiler : t -> Compiler.t option\n(** [compiler r] is [r]'s compiler, or [None] if the renderer has no\n    associated compiler (e.g., interpreter backends). *)\n\nval has_local : t -> bool\n(** [has_local r] is [true] iff [r] supports local thread IDs. *)\n\nval has_threads : t -> bool\n(** [has_threads r] is [true] iff [r] supports host-side threading instead of\n    GPU grid dimensions. *)\n\nval has_shared : t -> bool\n(** [has_shared r] is [true] iff [r] supports shared memory. *)\n\nval global_max : t -> int list option\n(** [global_max r] is the maximum global grid dimensions [[x; y; z]], or [None]\n    when unconstrained. 
The list has exactly three elements when present. *)\n\nval global_prod_max : t -> int list option\n(** [global_prod_max r] is the per-axis product limit for global dimensions, or\n    [None] when unconstrained. When present, each global dimension is capped at\n    [min(global_max.(i), global_prod_max.(i) / local_hw.(i))]. *)\n\nval local_max : t -> int list option\n(** [local_max r] is the maximum local workgroup dimensions [[x; y; z]], or\n    [None] when unconstrained. The list has exactly three elements when present.\n*)\n\nval shared_max : t -> int\n(** [shared_max r] is the maximum shared memory size in bytes.\n\n    - [0] when shared memory is unsupported ({!has_shared} is [false]).\n    - For GPU backends, a conservative default (e.g., 32 KB for OpenCL, 48 KB\n      for CUDA). Actual limits may vary by device. *)\n\nval tensor_cores : t -> Tc.t list\n(** [tensor_cores r] is the list of {!Tc.t} tensor-core configurations supported\n    by [r]. Empty when the backend has no hardware matrix-multiply support. *)\n\n(** {1:capabilities Capabilities} *)\n\nval code_for_op : t -> code_op list\n(** [code_for_op r] is the list of ALU operations that [r] provides custom\n    rendering for.\n\n    See also {!val-supported_ops}. *)\n\nval supported_ops : t -> Tolk_ir.Decompositions.supported_ops\n(** [supported_ops r] is the backend capability flags for the decomposition\n    pass, derived from {!val-code_for_op} unless explicitly overridden via\n    {!make}. *)\n\nval supports_dtype : t -> Tolk_ir.Dtype.t -> bool\n(** [supports_dtype r dt] is [true] iff the backend natively supports [dt].\n    When [false], the decomposition pass emulates [dt] using supported types. *)\n\nval emulated_float_dtypes : t -> (Tolk_ir.Dtype.scalar * Tolk_ir.Dtype.scalar) list\n(** [emulated_float_dtypes r] is the list of [(from, to)] dtype pairs for\n    float emulation. Each [from] float is promoted to [to] (typically f32).\n    Empty for backends that natively support all float types. 
*)\n\nval pre_matcher : t -> (Tolk_ir.Kernel.t -> Tolk_ir.Kernel.t option) option\n(** [pre_matcher r] is an optional device-specific rewrite rule applied\n    before decompositions. *)\n\nval extra_matcher : t -> (Tolk_ir.Kernel.t -> Tolk_ir.Kernel.t option) option\n(** [extra_matcher r] is an optional device-specific rewrite rule composed\n    into the final rewrite fixpoint. *)\n\n(** {1:load_store Load/store policy} *)\n\nval supports_float4 : t -> bool\n(** [supports_float4 r] is [true] iff [r] supports vectorized (float4/float2)\n    load and store operations.  The devectorizer uses this to decide whether\n    wide accesses can be folded.  Defaults to [true]. *)\n\n(** {1:rendering Rendering} *)\n\nval render : t -> ?name:string -> Tolk_ir.Program.t -> string\n(** [render r ~name program] converts [program] to backend-specific source code.\n\n    [name] defaults to [\"kernel\"]. *)\n\n(** {1:construction Construction} *)\n\nval make :\n  ?tensor_cores:Tc.t list ->\n  ?supports_float4:bool ->\n  ?has_threads:bool ->\n  ?global_max:int list ->\n  ?global_prod_max:int list ->\n  ?local_max:int list ->\n  ?code_for_op:code_op list ->\n  ?supported_ops:Tolk_ir.Decompositions.supported_ops ->\n  ?compiler:Compiler.t ->\n  ?pre_matcher:(Tolk_ir.Kernel.t -> Tolk_ir.Kernel.t option) ->\n  ?extra_matcher:(Tolk_ir.Kernel.t -> Tolk_ir.Kernel.t option) ->\n  name:string ->\n  device:string ->\n  has_local:bool ->\n  has_shared:bool ->\n  shared_max:int ->\n  render:(?name:string -> Tolk_ir.Program.t -> string) ->\n  unit ->\n  t\n(** [make ~name ~device ~has_local ~has_shared ~shared_max ~render ()] is a\n    renderer with the given capabilities.\n\n    Optional parameters and their defaults:\n    - [tensor_cores]: [[]] (none).\n    - [supports_float4]: [true].\n    - [has_threads]: [false].\n    - [global_max]: [Some [0x8FFFFFFF; 0x8FFFFFFF; 0x8FFFFFFF]].\n    - [global_prod_max]: [None].\n    - [local_max]: [Some [0x8FFFFFFF; 0x8FFFFFFF; 0x8FFFFFFF]].\n    - 
[code_for_op]: [[]] (no custom ops).\n    - [supported_ops]: derived from [code_for_op] via\n      {!supported_ops_of_code_for_op}. When [code_for_op] is [[]], defaults to\n      {!all_supported_ops}.\n    - [compiler]: [None]. *)\n\nval with_compiler : Compiler.t -> t -> t\n(** [with_compiler c r] is [r] with compiler set to [Some c]. *)\n"
  },
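The `global_prod_max` doc above gives the capping rule `min(global_max.(i), global_prod_max.(i) / local_hw.(i))`. A small arithmetic sketch of that rule (the helper name and the numeric limits are illustrative, not taken from a real device):

```ocaml
(* Cap each global grid axis by its hard maximum and by the product budget
   divided by the hardware local size chosen on that axis. *)
let cap_dims ~global_max ~global_prod_max ~local_hw =
  List.map2
    (fun gmax (pmax, l) -> min gmax (pmax / l))
    global_max
    (List.combine global_prod_max local_hw)

let capped =
  cap_dims ~global_max:[ 65535; 65535; 65535 ]
    ~global_prod_max:[ 2097152; 2097152; 2097152 ]
    ~local_hw:[ 256; 1; 1 ]

(* Axis 0 is limited by the product budget (2097152 / 256 = 8192); the
   other axes keep the hard maximum. *)
let () = List.iter (Printf.printf "%d ") capped
```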
  {
    "path": "packages/tolk/lib/runtime/cpu/compiler_cpu.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\nopen Tolk\n\n(* Host Detection *)\n\nlet uname flag =\n  try\n    let ic = Unix.open_process_in (\"uname \" ^ flag) in\n    let value = input_line ic in\n    let _ = Unix.close_process_in ic in\n    String.trim value\n  with _ -> \"\"\n\nlet cc =\n  let var = Helpers.Context_var.string ~key:\"CC\" ~default:\"clang\" in\n  fun () -> Helpers.Context_var.get var\n\nlet host_arch = uname \"-m\"\nlet is_windows = String.equal Sys.os_type \"Win32\"\n\n(* Subprocess Helpers *)\n\nlet write_all_fd fd bytes =\n  let len = Bytes.length bytes in\n  let rec loop off =\n    if off < len then\n      let wrote = Unix.write fd bytes off (len - off) in\n      loop (off + wrote)\n  in\n  loop 0\n\nlet read_pipes stdout_fd stderr_fd =\n  let buf = Bytes.create 4096 in\n  let stdout_buf = Buffer.create 4096 in\n  let stderr_buf = Buffer.create 4096 in\n  Fun.protect\n    ~finally:(fun () ->\n      (try Unix.close stdout_fd with Unix.Unix_error _ -> ());\n      try Unix.close stderr_fd with Unix.Unix_error _ -> ())\n    (fun () ->\n      let rec loop fds =\n        match fds with\n        | [] -> ()\n        | _ ->\n            let ready, _, _ = Unix.select fds [] [] (-1.) 
in\n            let fds' =\n              List.fold_left\n                (fun acc fd ->\n                  if not (List.mem fd ready) then fd :: acc\n                  else\n                    match Unix.read fd buf 0 (Bytes.length buf) with\n                    | 0 -> acc\n                    | n ->\n                        let target =\n                          if fd = stdout_fd then stdout_buf else stderr_buf\n                        in\n                        Buffer.add_string target (Bytes.sub_string buf 0 n);\n                        fd :: acc)\n                [] fds\n            in\n            loop (List.rev fds')\n      in\n      loop [ stdout_fd; stderr_fd ];\n      (Buffer.contents stdout_buf, Buffer.contents stderr_buf))\n\n(* Compilation *)\n\n(* Spawns a C compiler subprocess (clang by default) with stdin/stdout/stderr\n   pipes, feeding source on stdin and collecting object code from stdout via\n   select-based multiplexing. Key flags: -fno-math-errno ensures sqrt becomes a\n   single instruction; -ffixed-x18 avoids ARM's platform-reserved register\n   (macOS context switch / Windows TEB); --target=<arch>-none-unknown-elf\n   produces a relocatable ELF regardless of host triple. 
*)\nlet compile ~lang src =\n  let arch = if is_windows then \"AMD64\" else host_arch in\n  let target = if is_windows then \"x86_64\" else arch in\n  let arch_flag =\n    match arch with\n    | \"x86_64\" | \"AMD64\" -> \"-march=native\"\n    | \"riscv64\" -> \"-march=rv64g\"\n    | _ -> \"-mcpu=native\"\n  in\n  let extra_args =\n    if String.equal target \"arm64\" then [ \"-ffixed-x18\" ] else []\n  in\n  let base_args =\n    [\n      Printf.sprintf \"--target=%s-none-unknown-elf\" target;\n      arch_flag;\n      \"-O2\";\n      \"-fPIC\";\n      \"-ffreestanding\";\n      \"-fno-math-errno\";\n      \"-nostdlib\";\n      \"-fno-ident\";\n    ]\n  in\n  let stdin_r, stdin_w = Unix.pipe () in\n  let stdout_r, stdout_w = Unix.pipe () in\n  let stderr_r, stderr_w = Unix.pipe () in\n  Unix.set_close_on_exec stdin_w;\n  Unix.set_close_on_exec stdout_r;\n  Unix.set_close_on_exec stderr_r;\n  let argv =\n    Array.of_list\n      ((cc () :: \"-c\" :: \"-x\" :: lang :: base_args)\n      @ extra_args @ [ \"-\"; \"-o\"; \"-\" ])\n  in\n  let pid = Unix.create_process (cc ()) argv stdin_r stdout_w stderr_w in\n  Unix.close stdin_r;\n  Unix.close stdout_w;\n  Unix.close stderr_w;\n  write_all_fd stdin_w (Bytes.of_string src);\n  Unix.close stdin_w;\n  let obj, err = read_pipes stdout_r stderr_r in\n  let _, status = Unix.waitpid [] pid in\n  match status with\n  | Unix.WEXITED 0 -> Bytes.of_string obj\n  | _ ->\n      let label = Printf.sprintf \"clang -x %s\" lang in\n      let msg =\n        if String.equal err \"\" then label ^ \" failed (no stderr output)\"\n        else Printf.sprintf \"%s failed:\\n%s\" label err\n      in\n      raise (Compiler.Compile_error msg)\n\nlet compile_clang src = compile ~lang:\"c\" src\n\n(* Compiles LLVM IR to object code by invoking clang with -x ir.\n   This avoids a library dependency on LLVM at the cost of per-compilation\n   subprocess overhead. *)\nlet compile_llvmir src = compile ~lang:\"ir\" src\n"
  },
  {
    "path": "packages/tolk/lib/runtime/cpu/compiler_cpu.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\n(** CPU backend compiler for the tolk JIT runtime.\n\n    Compiles C or LLVM IR source to relocatable ELF objects by invoking clang as\n    a subprocess. The compiler targets the host architecture in freestanding\n    mode ([-ffreestanding], [-nostdlib], [-fPIC]), producing\n    position-independent objects suitable for JIT loading via\n    {!Compiler}.\n\n    Source is fed on stdin and the object is read from stdout, so no temporary\n    files are created.\n\n    The compiler executable is controlled by the [CC] environment variable (via\n    {!Helpers.Context_var.string}), defaulting to [\"clang\"]. *)\n\n(** {1:compiling Compiling} *)\n\nval compile_clang : string -> bytes\n(** [compile_clang src] compiles C source [src] to a relocatable ELF object.\n\n    The returned {!bytes} contains the raw object file contents.\n\n    Compilation uses [-O2] and the following architecture-specific flags:\n    - x86_64: [-march=native]\n    - ARM64: [-mcpu=native] and [-ffixed-x18] (avoids the platform-reserved\n      register on macOS and Windows)\n    - RISC-V 64: [-march=rv64g]\n\n    [-fno-math-errno] is always passed so that intrinsics like [sqrt] compile to\n    single instructions rather than function calls.\n\n    {b Note.} On Windows the target is forced to [x86_64] regardless of the\n    reported host architecture.\n\n    Raises {!Compiler.Compile_error} if clang exits with a non-zero\n    status. The error message includes clang's stderr output when available. 
*)\n\nval compile_llvmir : string -> bytes\n(** [compile_llvmir src] compiles LLVM IR source [src] to a relocatable ELF\n    object.\n\n    Behaves identically to {!compile_clang} except the input language is LLVM IR\n    ([-x ir]) instead of C.\n\n    {b Note.} This invokes clang as a subprocess rather than using the\n    LLVM C API directly. This avoids a library dependency on LLVM at the\n    cost of per-compilation subprocess overhead.\n\n    Raises {!Compiler.Compile_error} if clang exits with a non-zero\n    status. *)\n"
  },
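The compiler builds its whole clang invocation from the host architecture string. A pure sketch of that argv assembly, mirroring the flags documented above (`clang_argv` is an illustrative helper, not part of the module; `arch` follows `uname -m` conventions):

```ocaml
(* Assemble the argv for: clang -c -x <lang> <flags> - -o -
   (source on stdin, relocatable object on stdout). *)
let clang_argv ~lang ~arch =
  let arch_flag =
    match arch with
    | "x86_64" | "AMD64" -> "-march=native"
    | "riscv64" -> "-march=rv64g"
    | _ -> "-mcpu=native"
  in
  (* x18 is platform-reserved on ARM64 macOS/Windows, so keep it fixed. *)
  let extra = if String.equal arch "arm64" then [ "-ffixed-x18" ] else [] in
  [ "clang"; "-c"; "-x"; lang;
    Printf.sprintf "--target=%s-none-unknown-elf" arch;
    arch_flag; "-O2"; "-fPIC"; "-ffreestanding"; "-fno-math-errno";
    "-nostdlib"; "-fno-ident" ]
  @ extra @ [ "-"; "-o"; "-" ]

let () = print_endline (String.concat " " (clang_argv ~lang:"c" ~arch:"arm64"))
```

The `--target=<arch>-none-unknown-elf` triple is what guarantees an ELF object even on macOS or Windows hosts, which is why the loader only has to handle ELF relocations.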
  {
    "path": "packages/tolk/lib/runtime/cpu/dune",
    "content": "(include_subdirs no)\n\n(library\n (name tolk_cpu)\n (public_name tolk.cpu)\n (libraries tolk unix threads)\n (foreign_stubs\n  (language c)\n  (names tolk_cpu_stubs)))\n"
  },
  {
    "path": "packages/tolk/lib/runtime/cpu/elf_cpu_loader.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\nopen Tolk\n\nmodule Image = struct\n  type t = { mutable data : Bytes.t; mutable len : int }\n\n  let of_bytes ?(extra_capacity = 0) bytes =\n    let len = Bytes.length bytes in\n    let data = Bytes.make (len + extra_capacity) '\\000' in\n    Bytes.blit bytes 0 data 0 len;\n    { data; len }\n\n  let ensure t size =\n    let capacity = Bytes.length t.data in\n    if size <= capacity then ()\n    else\n      let rec grow cap = if cap >= size then cap else grow (cap * 2) in\n      let next = Bytes.make (grow (max 1 capacity)) '\\000' in\n      Bytes.blit t.data 0 next 0 t.len;\n      t.data <- next\n\n  let append_bytes t src =\n    let off = t.len in\n    let src_len = Bytes.length src in\n    let needed = off + src_len in\n    ensure t needed;\n    Bytes.blit src 0 t.data off src_len;\n    t.len <- needed;\n    off\n\n  let get_u32 t off =\n    let b0 = Char.code (Bytes.get t.data off) in\n    let b1 = Char.code (Bytes.get t.data (off + 1)) in\n    let b2 = Char.code (Bytes.get t.data (off + 2)) in\n    let b3 = Char.code (Bytes.get t.data (off + 3)) in\n    b0 lor (b1 lsl 8) lor (b2 lsl 16) lor (b3 lsl 24)\n\n  let set_u32 t off v =\n    let byte n = Char.chr ((v lsr n) land 0xFF) in\n    Bytes.set t.data off (byte 0);\n    Bytes.set t.data (off + 1) (byte 8);\n    Bytes.set t.data (off + 2) (byte 16);\n    Bytes.set t.data (off + 3) (byte 24)\n\n  let to_bytes t = Bytes.sub t.data 0 t.len\nend\n\ntype reloc = { offset : int; target : nativeint; r_type : int; addend : int }\n\ntype t = {\n  image : Bytes.t;\n  relocs : reloc list;\n  entry_offset : int;\n  extra_capacity : int;\n}\n\nlet r_x86_64_pc32 = 2\nlet 
r_x86_64_plt32 = 4\nlet r_aarch64_adr_prel_pg_hi21 = 275\nlet r_aarch64_add_abs_lo12_nc = 277\nlet r_aarch64_jump26 = 282\nlet r_aarch64_call26 = 283\nlet r_aarch64_ldst16_abs_lo12_nc = 284\nlet r_aarch64_ldst32_abs_lo12_nc = 285\nlet r_aarch64_ldst64_abs_lo12_nc = 286\nlet r_aarch64_ldst128_abs_lo12_nc = 299\nlet alloc_size t = Bytes.length t.image + t.extra_capacity\nlet entry_offset t = t.entry_offset\nlet mask_bits n = if n <= 0 then 0L else Int64.(sub (shift_left 1L n) 1L)\n\nlet getbits x lo hi =\n  let width = hi - lo + 1 in\n  Int64.(to_int (logand (shift_right_logical x lo) (mask_bits width)))\n\nlet i2u32 x = Int64.to_int (Int64.logand x 0xFFFFFFFFL)\n\nlet resolve_reloc ~link_symbol sections reloc =\n  let symbol = reloc.Elf.symbol in\n  let target =\n    if symbol.shndx = 0 then link_symbol symbol.name\n    else\n      let section = sections.(symbol.shndx) in\n      Nativeint.of_int (section.Elf.addr + symbol.value)\n  in\n  {\n    offset = reloc.offset;\n    target;\n    r_type = reloc.r_type;\n    addend = reloc.addend;\n  }\n\nlet prepare ~link_symbol ~entry elf =\n  let sections = Elf.sections elf in\n  let relocs_rev, extra_capacity =\n    List.fold_left\n      (fun (acc, cap) r ->\n        let reloc = resolve_reloc ~link_symbol sections r in\n        let cap =\n          if reloc.r_type = r_aarch64_call26 || reloc.r_type = r_aarch64_jump26\n          then cap + 16\n          else cap\n        in\n        (reloc :: acc, cap))\n      ([], 0) (Elf.relocs elf)\n  in\n  {\n    image = Elf.image elf;\n    relocs = List.rev relocs_rev;\n    entry_offset = Elf.find_symbol_offset elf entry;\n    extra_capacity;\n  }\n\nlet load ~link_symbol ~entry obj =\n  let elf = Elf.load obj in\n  prepare ~link_symbol ~entry elf\n\n(* Patches a single relocation in the loaded ELF image at runtime. Handles\n   x86_64 PC-relative (PC32, PLT32) and ARM64 page-relative (ADRP, ADD/LDSTn\n   lo12, CALL26/JUMP26) relocation types. 
For ARM64 CALL26/JUMP26 targets\n   beyond the +/-128 MiB direct-branch range, emits a trampoline stub (LDR X17\n   + BR X17 + 8-byte absolute address) appended to the image. *)\nlet apply_reloc image ~base reloc =\n  let open Int64 in\n  let ploc = reloc.offset in\n  let base_i64 = of_nativeint base in\n  let ploc_i64 = of_int ploc in\n  let tgt = add (of_nativeint reloc.target) (of_int reloc.addend) in\n  let patch_lo12 ~shift =\n    let instr = Image.get_u32 image ploc in\n    let patched = instr lor (getbits tgt shift 11 lsl 10) in\n    Image.set_u32 image ploc patched\n  in\n  let rt = reloc.r_type in\n  if rt = r_x86_64_pc32 then Image.set_u32 image ploc (i2u32 (sub tgt ploc_i64))\n  else if rt = r_x86_64_plt32 then\n    Image.set_u32 image ploc (i2u32 (sub tgt (add ploc_i64 base_i64)))\n  else if rt = r_aarch64_adr_prel_pg_hi21 then begin\n    let instr = Image.get_u32 image ploc in\n    let rel_pg =\n      sub\n        (logand tgt (lognot 0xFFFL))\n        (logand (add base_i64 ploc_i64) (lognot 0xFFFL))\n    in\n    let patched =\n      instr lor (getbits rel_pg 12 13 lsl 29) lor (getbits rel_pg 14 32 lsl 5)\n    in\n    Image.set_u32 image ploc patched\n  end\n  else if rt = r_aarch64_add_abs_lo12_nc then patch_lo12 ~shift:0\n  else if rt = r_aarch64_ldst16_abs_lo12_nc then patch_lo12 ~shift:1\n  else if rt = r_aarch64_ldst32_abs_lo12_nc then patch_lo12 ~shift:2\n  else if rt = r_aarch64_ldst64_abs_lo12_nc then patch_lo12 ~shift:3\n  else if rt = r_aarch64_ldst128_abs_lo12_nc then patch_lo12 ~shift:4\n  else if rt = r_aarch64_call26 || rt = r_aarch64_jump26 then begin\n    let delta = sub tgt (add base_i64 ploc_i64) in\n    let lo = of_int (-((1 lsl 25) * 4)) in\n    let hi = of_int (((1 lsl 25) - 1) * 4) in\n    if compare delta lo >= 0 && compare delta hi <= 0 then\n      let instr = Image.get_u32 image ploc in\n      let patched = instr lor getbits delta 2 27 in\n      Image.set_u32 image ploc patched\n    else\n      let tramp = Bytes.make 16 '\\000' 
in\n      let tramp_img = Image.of_bytes tramp in\n      (* 0x58000051 = LDR X17, .+8: PC-relative load of the 8-byte absolute\n         target address stored below. *)\n      Image.set_u32 tramp_img 0 0x58000051;\n      (* 0xD61F0220 = BR X17 *)\n      Image.set_u32 tramp_img 4 0xD61F0220;\n      (* Write the 64-bit target address little-endian at offset 8. *)\n      let bytes =\n        Bytes.init 8 (fun i ->\n            Char.chr (to_int (logand (shift_right_logical tgt (i * 8)) 0xFFL)))\n      in\n      Bytes.blit bytes 0 tramp 8 8;\n      let tramp_off = Image.append_bytes image tramp in\n      let instr = Image.get_u32 image ploc in\n      let patched = instr lor getbits (of_int (tramp_off - ploc)) 2 27 in\n      Image.set_u32 image ploc patched\n  end\n  else invalid_arg \"unknown relocation type\"\n\nlet link ~base t =\n  let image = Image.of_bytes ~extra_capacity:t.extra_capacity t.image in\n  List.iter (apply_reloc image ~base) t.relocs;\n  Image.to_bytes image\n"
  },
  {
    "path": "packages/tolk/lib/runtime/cpu/elf_cpu_loader.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\n(** CPU ELF relocation loader.\n\n    This module turns an {!Elf.t} relocatable object into executable machine\n    code for the CPU JIT runtime. Loading is split into two phases:\n\n    + {!load} parses the ELF bytes, resolves external symbols, and collects\n      relocations. No final addresses are needed yet.\n    + {!link} patches all relocations against a concrete load base and returns\n      ready-to-execute bytes.\n\n    The supported relocation types are the subset of x86-64 and AArch64 used by\n    the Tolk JIT backend:\n\n    - x86-64: [R_X86_64_PC32], [R_X86_64_PLT32].\n    - AArch64: [R_AARCH64_ADR_PREL_PG_HI21], [R_AARCH64_ADD_ABS_LO12_NC],\n      [R_AARCH64_CALL26], [R_AARCH64_JUMP26], [R_AARCH64_LDST16_ABS_LO12_NC],\n      [R_AARCH64_LDST32_ABS_LO12_NC], [R_AARCH64_LDST64_ABS_LO12_NC],\n      [R_AARCH64_LDST128_ABS_LO12_NC].\n\n    See {!Elf} for the underlying object parser. *)\n\n(** {1:types Types} *)\n\ntype t\n(** A prepared CPU image awaiting relocation at a concrete load address. Holds\n    the flat image bytes, resolved relocation entries, and the entry-point\n    offset. *)\n\n(** {1:loading Loading} *)\n\nval load : link_symbol:(string -> nativeint) -> entry:string -> Bytes.t -> t\n(** [load ~link_symbol ~entry obj] parses ELF object bytes [obj] and prepares a\n    CPU image for later linking.\n\n    [link_symbol name] resolves external (undefined) symbols to their runtime\n    addresses. 
It is called once per undefined symbol reference during loading.\n    Defined symbols are resolved from the ELF section layout.\n\n    [entry] names the symbol whose image offset becomes the entry point (see\n    {!entry_offset}).\n\n    Raises [Invalid_argument] if [entry] is missing or undefined in the symbol\n    table.\n\n    Raises [Invalid_argument] if [obj] is not a valid ELF object (propagated\n    from {!Elf.load}). *)\n\n(** {1:querying Querying} *)\n\nval alloc_size : t -> int\n(** [alloc_size t] is the number of bytes needed to materialize the final\n    executable image. This is at least the flat image size, plus conservative\n    slack for AArch64 branch trampolines that {!link} may emit for out-of-range\n    [CALL26] / [JUMP26] relocations (16 bytes per such relocation). *)\n\nval entry_offset : t -> int\n(** [entry_offset t] is the byte offset of the entry symbol within the image. *)\n\n(** {1:linking Linking} *)\n\nval link : base:nativeint -> t -> Bytes.t\n(** [link ~base t] applies all relocations assuming the image will be loaded at\n    address [base] and returns the final executable bytes.\n\n    For AArch64 [CALL26] and [JUMP26] relocations whose target is outside the\n    +/-128 MiB direct-branch range, a trampoline\n    ([LDR X17, #8; BR X17; <8-byte absolute address>]) is appended to the image\n    and the original branch is redirected to it.\n\n    The returned {!Bytes.t} has length at most {!alloc_size} [t].\n\n    Raises [Invalid_argument] if any relocation has an unsupported type. *)\n"
  },
  {
    "path": "packages/tolk/lib/runtime/cpu/tolk_cpu.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\nopen Tolk\n\n(* FFI Externals *)\n\nexternal cpu_alloc : int -> nativeint = \"caml_tolk_cpu_alloc\"\nexternal cpu_free : nativeint -> unit = \"caml_tolk_cpu_free\"\nexternal cpu_copyin : nativeint -> bytes -> unit = \"caml_tolk_cpu_copyin\"\nexternal cpu_copyout : bytes -> nativeint -> unit = \"caml_tolk_cpu_copyout\"\nexternal exec_alloc : int -> nativeint = \"caml_tolk_cpu_jit_alloc\"\nexternal exec_free : nativeint -> int -> unit = \"caml_tolk_cpu_jit_free\"\nexternal exec_write : nativeint -> bytes -> unit = \"caml_tolk_cpu_jit_write\"\n\nexternal exec_call : nativeint -> nativeint array -> int64 array -> unit\n  = \"caml_tolk_cpu_jit_call\"\n\nexternal link_symbol_raw : string array -> string -> nativeint\n  = \"caml_tolk_cpu_jit_link_symbol\"\n\ntype loaded_program = { base : nativeint; entry : nativeint; size : int }\n\nlet link_symbol ?(libs = []) name =\n  let libs = Array.of_list libs in\n  link_symbol_raw libs name\n\nlet load_program ~name ~lib =\n  let prepared =\n    Elf_cpu_loader.load ~link_symbol ~entry:name lib\n  in\n  let size = Elf_cpu_loader.alloc_size prepared in\n  let base = exec_alloc size in\n  try\n    let image = Elf_cpu_loader.link ~base prepared in\n    exec_write base image;\n    let entry =\n      Nativeint.add base\n        (Nativeint.of_int (Elf_cpu_loader.entry_offset prepared))\n    in\n    { base; entry; size }\n  with exn ->\n    exec_free base size;\n    raise exn\n\nlet unload_program loaded = exec_free loaded.base loaded.size\n\n(* Allocator *)\n\n(* Tinygrad uses mmap (MAP_ANON | MAP_SHARED) for CPU buffers. Tolk uses\n   calloc/free for simplicity. 
Mmap becomes relevant for shared-memory IPC or\n   very large allocations; calloc suffices for single-process CPU execution. *)\nlet raw_allocator =\n  let alloc size spec =\n    match spec.Device.Buffer_spec.external_ptr with\n    | Some ptr -> ptr\n    | None -> cpu_alloc size\n  in\n  let free buf _size spec =\n    match spec.Device.Buffer_spec.external_ptr with\n    | Some _ -> ()\n    | None -> cpu_free buf\n  in\n  {\n    Device.Allocator.alloc;\n    free;\n    copyin = cpu_copyin;\n    copyout = cpu_copyout;\n    addr = Fun.id;\n    offset = None;\n    transfer = None;\n    supports_transfer = false;\n    copy_from_disk = None;\n    supports_copy_from_disk = false;\n  }\n\n(* Execution Queue *)\n\n(* Tinygrad uses recursive CPUWorker threads: each worker can spawn sub-workers\n   for parallel kernel execution. Tolk uses a flat two-tier model: a single\n   dispatcher domain pulls tasks from the queue, and a shared Domain pool\n   provides true parallelism for multi-threaded kernels. Domains (not Threads)\n   are used because OCaml 5 Domains run on separate OS threads with\n   independent minor heaps, giving actual CPU parallelism for kernel\n   execution. 
*)\nmodule Cpu_queue = struct\n  type pool_job = Run of (unit -> unit) | Stop\n\n  type pool = {\n    tasks : pool_job Queue.t;\n    mutex : Mutex.t;\n    cond : Condition.t;\n    mutable workers : unit Domain.t list;\n  }\n\n  let pool_create () =\n    {\n      tasks = Queue.create ();\n      mutex = Mutex.create ();\n      cond = Condition.create ();\n      workers = [];\n    }\n\n  let rec pool_worker_loop pool =\n    Mutex.lock pool.mutex;\n    while Queue.is_empty pool.tasks do\n      Condition.wait pool.cond pool.mutex\n    done;\n    let job = Queue.take pool.tasks in\n    Mutex.unlock pool.mutex;\n    match job with\n    | Stop -> ()\n    | Run fn ->\n        fn ();\n        pool_worker_loop pool\n\n  let pool_start_worker pool = Domain.spawn (fun () -> pool_worker_loop pool)\n\n  (* Only called from the single dispatch thread (worker), so no lock needed. *)\n  let pool_ensure pool count =\n    let existing = List.length pool.workers in\n    if count > existing then\n      let new_workers =\n        List.init (count - existing) (fun _ -> pool_start_worker pool)\n      in\n      pool.workers <- pool.workers @ new_workers\n\n  let pool_enqueue pool job =\n    Mutex.lock pool.mutex;\n    Queue.add (Run job) pool.tasks;\n    Condition.signal pool.cond;\n    Mutex.unlock pool.mutex\n\n  let pool_shutdown pool =\n    match pool.workers with\n    | [] -> ()\n    | workers ->\n        Mutex.lock pool.mutex;\n        List.iter (fun _ -> Queue.add Stop pool.tasks) workers;\n        Condition.broadcast pool.cond;\n        Mutex.unlock pool.mutex;\n        List.iter Domain.join workers;\n        pool.workers <- []\n\n  type work = {\n    entry : nativeint;\n    bufs : nativeint array;\n    vals : int64 array;\n    threads : int;\n    core_id_index : int option;\n  }\n\n  type task = Work of work | Stop\n\n  type t = {\n    tasks : task Queue.t;\n    mutex : Mutex.t;\n    cond : Condition.t;\n    pool : pool;\n    mutable worker_thread : unit Domain.t option;\n    
mutable pending : int;\n    mutable error : exn option;\n  }\n\n  let run_kernel task tid =\n    let vals = Array.copy task.vals in\n    (match task.core_id_index with\n    | None -> ()\n    | Some idx ->\n        if idx >= 0 && idx < Array.length vals then\n          vals.(idx) <- Int64.of_int tid);\n    exec_call task.entry task.bufs vals\n\n  (* Fan out kernel execution across the Domain pool. Thread 0 runs on the\n     dispatch thread; threads 1..N-1 are enqueued to pool workers. The dispatch\n     thread blocks until all threads complete, propagating the first error. *)\n  let run_task t task =\n    let threads = max 1 task.threads in\n    if threads = 1 then run_kernel task 0\n    else (\n      pool_ensure t.pool (threads - 1);\n      let remaining = ref threads in\n      let mutex = Mutex.create () in\n      let cond = Condition.create () in\n      let error : exn option ref = ref None in\n      let record_error exn =\n        Mutex.lock mutex;\n        if !error = None then error := Some exn;\n        Mutex.unlock mutex\n      in\n      let finish () =\n        Mutex.lock mutex;\n        remaining := !remaining - 1;\n        if !remaining = 0 then Condition.signal cond;\n        Mutex.unlock mutex\n      in\n      let run tid =\n        (try run_kernel task tid with exn -> record_error exn);\n        finish ()\n      in\n      for tid = 1 to threads - 1 do\n        pool_enqueue t.pool (fun () -> run tid)\n      done;\n      run 0;\n      Mutex.lock mutex;\n      while !remaining > 0 do\n        Condition.wait cond mutex\n      done;\n      let task_error = !error in\n      Mutex.unlock mutex;\n      match task_error with None -> () | Some exn -> raise exn)\n\n  let worker t =\n    let rec loop () =\n      Mutex.lock t.mutex;\n      while Queue.is_empty t.tasks do\n        Condition.wait t.cond t.mutex\n      done;\n      let task = Queue.take t.tasks in\n      Mutex.unlock t.mutex;\n      match task with\n      | Stop -> ()\n      | Work work ->\n          
let error =\n            try\n              run_task t work;\n              None\n            with exn -> Some exn\n          in\n          Mutex.lock t.mutex;\n          (match error with\n          | None -> ()\n          | Some exn -> if t.error = None then t.error <- Some exn);\n          t.pending <- t.pending - 1;\n          Condition.broadcast t.cond;\n          Mutex.unlock t.mutex;\n          loop ()\n    in\n    loop ()\n\n  let create () =\n    let pool = pool_create () in\n    let t =\n      {\n        tasks = Queue.create ();\n        mutex = Mutex.create ();\n        cond = Condition.create ();\n        pool;\n        worker_thread = None;\n        pending = 0;\n        error = None;\n      }\n    in\n    let worker_thread = Domain.spawn (fun () -> worker t) in\n    t.worker_thread <- Some worker_thread;\n    t\n\n  let exec t ~entry ~bufs ~vals ~threads ~core_id_index =\n    let task = Work { entry; bufs; vals; threads; core_id_index } in\n    Mutex.lock t.mutex;\n    Queue.add task t.tasks;\n    t.pending <- t.pending + 1;\n    Condition.signal t.cond;\n    Mutex.unlock t.mutex\n\n  let synchronize t =\n    Mutex.lock t.mutex;\n    while t.pending > 0 do\n      Condition.wait t.cond t.mutex\n    done;\n    let error = t.error in\n    t.error <- None;\n    Mutex.unlock t.mutex;\n    match error with None -> () | Some exn -> raise exn\n\n  let shutdown t =\n    match t.worker_thread with\n    | None -> ()\n    | Some worker ->\n        Mutex.lock t.mutex;\n        Queue.add Stop t.tasks;\n        Condition.signal t.cond;\n        Mutex.unlock t.mutex;\n        Domain.join worker;\n        t.worker_thread <- None;\n        pool_shutdown t.pool\nend\n\n(* Device Registration *)\n\nlet create name =\n  let clang =\n    Compiler.make ~name:\"CLANG\" ~cachekey:\"compile_clang_jit\"\n      ~compile:Compiler_cpu.compile_clang ()\n  in\n  let state = Cpu_queue.create () in\n  at_exit (fun () -> Cpu_queue.shutdown state);\n  let runtime entry_name lib 
~runtimevars =\n    let loaded = load_program ~name:entry_name ~lib in\n    let entry = loaded.entry in\n    let core_id_index = List.assoc_opt \"core_id\" runtimevars in\n    let call bufs ~global ~local:_ ~vals ~wait ~timeout:_ =\n      let threads = match core_id_index with\n        | Some _ -> max 1 global.(0)\n        | None -> 1\n      in\n      Cpu_queue.exec state ~entry ~bufs ~vals ~threads ~core_id_index;\n      if wait then begin\n        Cpu_queue.synchronize state;\n        None\n      end else\n        None\n    in\n    let free () = unload_program loaded in\n    Device.{ call; free }\n  in\n  let synchronize () = Cpu_queue.synchronize state in\n  let renderer = Renderer.with_compiler clang Cstyle.clang in\n  let renderer_set = Device.Renderer_set.make [renderer, None] in\n  let allocator =\n    Device.Allocator.Pack (Device.Lru_allocator.wrap raw_allocator)\n  in\n  Device.make ~name ~allocator ~renderer_set ~runtime ~synchronize ()\n"
  },
  {
    "path": "packages/tolk/lib/runtime/cpu/tolk_cpu.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\n(** CPU device backend.\n\n    [Tolk_cpu] provides a CPU execution backend for tolk. It compiles kernels to\n    native object code via an external C compiler, loads them into executable\n    memory, and dispatches execution across multiple CPU cores using a domain\n    pool.\n\n    The single entry point is {!val-create}, which returns a {!Tolk.Device.t}\n    ready for use with the tolk runtime. *)\n\n(** {1:device Device creation} *)\n\nval create : string -> Tolk.Device.t\n(** [create name] is a CPU device named [name].\n\n    The device uses clang (or the compiler specified by the [CC] environment\n    variable) to compile kernel source to native object code. Compiled objects\n    are loaded into executable memory via an ELF loader and JIT stubs.\n\n    Kernel execution is dispatched through a background worker thread.\n    Multi-threaded kernels fan out across a shared domain pool, providing true\n    OS-level parallelism via OCaml 5 domains.\n\n    Memory allocation uses [calloc]/[free]. An LRU cache wraps the raw allocator\n    to reuse recently freed buffers.\n\n    [CC] defaults to [\"clang\"] when unset. *)\n"
  },
  {
    "path": "packages/tolk/lib/runtime/cpu/tolk_cpu_stubs.c",
    "content": "#include <caml/alloc.h>\n#include <caml/fail.h>\n#include <caml/memory.h>\n#include <caml/mlvalues.h>\n#include <caml/threads.h>\n\n#if defined(_WIN32)\n#include <malloc.h>\n#else\n#include <alloca.h>\n#endif\n#include <stdio.h>\n#include <stdint.h>\n#include <stdlib.h>\n#include <string.h>\n\n#if defined(_WIN32)\n#include <windows.h>\n#else\n#include <pthread.h>\n#include <dlfcn.h>\n#include <sys/mman.h>\n#endif\n\n/* MAP_ANON is the BSD spelling of MAP_ANONYMOUS; older systems may only define one. */\n#if !defined(_WIN32)\n#ifndef MAP_ANON\n#define MAP_ANON MAP_ANONYMOUS\n#endif\n\n#ifndef MAP_JIT\n#define MAP_JIT 0x0800\n#endif\n#endif\n\nstatic void *jit_alloc_executable(size_t size) {\n#if defined(_WIN32)\n  return VirtualAlloc(NULL, size, MEM_COMMIT | MEM_RESERVE, PAGE_EXECUTE_READWRITE);\n#else\n  int flags = MAP_PRIVATE | MAP_ANON;\n#if defined(__APPLE__)\n  flags |= MAP_JIT;\n#endif\n  void *mem = mmap(NULL, size, PROT_READ | PROT_WRITE | PROT_EXEC, flags, -1, 0);\n  if (mem == MAP_FAILED) {\n    return NULL;\n  }\n  return mem;\n#endif\n}\n\nCAMLprim value caml_tolk_cpu_alloc(value v_size) {\n  CAMLparam1(v_size);\n  size_t size = (size_t)Long_val(v_size);\n  void *ptr = calloc(1, size);\n  if (ptr == NULL) {\n    caml_failwith(\"cpu_alloc failed\");\n  }\n  CAMLreturn(caml_copy_nativeint((intnat)ptr));\n}\n\nCAMLprim value caml_tolk_cpu_free(value v_ptr) {\n  CAMLparam1(v_ptr);\n  void *ptr = (void *)Nativeint_val(v_ptr);\n  free(ptr);\n  CAMLreturn(Val_unit);\n}\n\nCAMLprim value caml_tolk_cpu_copyin(value v_ptr, value v_bytes) {\n  CAMLparam2(v_ptr, v_bytes);\n  void *ptr = (void *)Nativeint_val(v_ptr);\n  size_t len = (size_t)caml_string_length(v_bytes);\n  memcpy(ptr, Bytes_val(v_bytes), len);\n  CAMLreturn(Val_unit);\n}\n\nCAMLprim value caml_tolk_cpu_copyout(value v_bytes, value v_ptr) {\n  CAMLparam2(v_bytes, v_ptr);\n  void *ptr = (void *)Nativeint_val(v_ptr);\n  size_t len = (size_t)caml_string_length(v_bytes);\n  
memcpy(Bytes_val(v_bytes), ptr, len);\n  CAMLreturn(Val_unit);\n}\n\nCAMLprim value caml_tolk_cpu_jit_alloc(value v_size) {\n  CAMLparam1(v_size);\n  size_t size = (size_t)Long_val(v_size);\n  void *mem = jit_alloc_executable(size);\n  if (mem == NULL) {\n    caml_failwith(\"jit_alloc failed\");\n  }\n  CAMLreturn(caml_copy_nativeint((intnat)mem));\n}\n\nCAMLprim value caml_tolk_cpu_jit_free(value v_ptr, value v_size) {\n  CAMLparam2(v_ptr, v_size);\n  void *ptr = (void *)Nativeint_val(v_ptr);\n  size_t size = (size_t)Long_val(v_size);\n#if defined(_WIN32)\n  VirtualFree(ptr, 0, MEM_RELEASE);\n#else\n  munmap(ptr, size);\n#endif\n  CAMLreturn(Val_unit);\n}\n\nCAMLprim value caml_tolk_cpu_jit_write(value v_ptr, value v_bytes) {\n  CAMLparam2(v_ptr, v_bytes);\n  void *ptr = (void *)Nativeint_val(v_ptr);\n  size_t len = (size_t)caml_string_length(v_bytes);\n#if defined(__APPLE__)\n  pthread_jit_write_protect_np(0);\n#endif\n  memcpy(ptr, Bytes_val(v_bytes), len);\n#if defined(__APPLE__)\n  pthread_jit_write_protect_np(1);\n#endif\n#if defined(_WIN32)\n  FlushInstructionCache(GetCurrentProcess(), ptr, len);\n#else\n  __builtin___clear_cache((char *)ptr, (char *)ptr + len);\n#endif\n  CAMLreturn(Val_unit);\n}\n\n/* Tinygrad passes each buffer address and scalar value as individual C function\n   arguments (varargs-style via ctypes). Tolk uses a fixed two-pointer convention:\n   fn(const uint64_t *bufs, const int64_t *vals). This simplifies the FFI to a\n   constant-arity call regardless of kernel argument count. The C and LLVM IR\n   renderers must generate code that reads from these arrays (bufs[i], vals[j])\n   rather than named parameters. */\nCAMLprim value caml_tolk_cpu_jit_call(value v_entry, value v_bufs, value v_vals) {\n  CAMLparam3(v_entry, v_bufs, v_vals);\n  size_t nbufs = (size_t)Wosize_val(v_bufs);\n  size_t nvals = (size_t)Wosize_val(v_vals);\n  uint64_t *bufs = nbufs > 0 ? 
(uint64_t *)alloca(sizeof(uint64_t) * nbufs) : NULL;\n  int64_t *vals = nvals > 0 ? (int64_t *)alloca(sizeof(int64_t) * nvals) : NULL;\n  for (size_t i = 0; i < nbufs; ++i) {\n    bufs[i] = (uint64_t)Nativeint_val(Field(v_bufs, i));\n  }\n  for (size_t i = 0; i < nvals; ++i) {\n    vals[i] = (int64_t)Int64_val(Field(v_vals, i));\n  }\n  void (*fn)(const uint64_t *, const int64_t *) =\n      (void (*)(const uint64_t *, const int64_t *))Nativeint_val(v_entry);\n  caml_release_runtime_system();\n  fn(bufs, vals);\n  caml_acquire_runtime_system();\n  CAMLreturn(Val_unit);\n}\n\nstatic void *try_dlopen(const char *name) {\n#if defined(_WIN32)\n  void *handle = (void *)LoadLibraryA(name);\n  if (handle != NULL) {\n    return handle;\n  }\n  char buf[256];\n  if (snprintf(buf, sizeof(buf), \"%s.dll\", name) > 0) {\n    handle = (void *)LoadLibraryA(buf);\n    if (handle != NULL) {\n      return handle;\n    }\n  }\n  if (snprintf(buf, sizeof(buf), \"lib%s.dll\", name) > 0) {\n    handle = (void *)LoadLibraryA(buf);\n    if (handle != NULL) {\n      return handle;\n    }\n  }\n  return NULL;\n#else\n  void *handle = dlopen(name, RTLD_LAZY | RTLD_LOCAL);\n  if (handle != NULL) {\n    return handle;\n  }\n  char buf[256];\n  if (snprintf(buf, sizeof(buf), \"lib%s.so\", name) > 0) {\n    handle = dlopen(buf, RTLD_LAZY | RTLD_LOCAL);\n    if (handle != NULL) {\n      return handle;\n    }\n  }\n  if (snprintf(buf, sizeof(buf), \"lib%s.dylib\", name) > 0) {\n    handle = dlopen(buf, RTLD_LAZY | RTLD_LOCAL);\n    if (handle != NULL) {\n      return handle;\n    }\n  }\n  return NULL;\n#endif\n}\n\nCAMLprim value caml_tolk_cpu_jit_link_symbol(value v_libs, value v_sym) {\n  CAMLparam2(v_libs, v_sym);\n  const char *sym = String_val(v_sym);\n  void *addr = NULL;\n\n  size_t nlibs = (size_t)Wosize_val(v_libs);\n#if defined(_WIN32)\n  if (nlibs > 0) {\n    for (size_t i = 0; i < nlibs && addr == NULL; ++i) {\n      const char *lib = String_val(Field(v_libs, i));\n      void *handle 
= try_dlopen(lib);\n      if (handle != NULL) {\n        addr = (void *)GetProcAddress((HMODULE)handle, sym);\n      }\n    }\n  }\n#else\n  if (nlibs > 0) {\n    for (size_t i = 0; i < nlibs && addr == NULL; ++i) {\n      const char *lib = String_val(Field(v_libs, i));\n      void *handle = try_dlopen(lib);\n      if (handle != NULL) {\n        addr = dlsym(handle, sym);\n      }\n    }\n  }\n#endif\n\n  if (addr == NULL) {\n    char msg[320];\n    snprintf(msg, sizeof(msg), \"link_symbol failed: %s\", sym);\n    caml_failwith(msg);\n  }\n  CAMLreturn(caml_copy_nativeint((intnat)addr));\n}\n"
  },
  {
    "path": "packages/tolk/lib/runtime/metal/dune",
    "content": "(include_subdirs no)\n\n(library\n (name tolk_metal)\n (public_name tolk.metal)\n (enabled_if\n  (= %{system} macosx))\n (libraries tolk threads)\n (c_library_flags\n  (-framework Metal -framework Foundation -framework CoreGraphics))\n (foreign_stubs\n  (language c)\n  (names tolk_metal_stubs)\n  (flags\n   (:standard -fblocks -x objective-c))))\n"
  },
  {
    "path": "packages/tolk/lib/runtime/metal/tolk_metal.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\nopen Tolk\n\nmodule Ffi = struct\n  external create_device : unit -> nativeint = \"caml_tolk_metal_create_device\"\n  external release_device : nativeint -> unit = \"caml_tolk_metal_release_device\"\n\n  external create_command_queue : nativeint -> nativeint\n    = \"caml_tolk_metal_create_command_queue\"\n\n  external release_command_queue : nativeint -> unit\n    = \"caml_tolk_metal_release_command_queue\"\n\n  external buffer_alloc : nativeint -> int -> nativeint\n    = \"caml_tolk_metal_buffer_alloc\"\n\n  external buffer_free : nativeint -> unit = \"caml_tolk_metal_buffer_free\"\n\n  external buffer_copyin : nativeint -> bytes -> unit\n    = \"caml_tolk_metal_buffer_copyin\"\n\n  external buffer_copyout : bytes -> nativeint -> unit\n    = \"caml_tolk_metal_buffer_copyout\"\n\n  external program_create : nativeint -> string -> bytes -> nativeint\n    = \"caml_tolk_metal_program_create\"\n\n  external program_free : nativeint -> unit = \"caml_tolk_metal_program_free\"\n\n  external program_dispatch :\n    nativeint ->\n    nativeint ->\n    nativeint array ->\n    int array ->\n    int array ->\n    int array ->\n    int array ->\n    nativeint\n    = \"caml_tolk_metal_program_dispatch_bc\" \"caml_tolk_metal_program_dispatch\"\n\n  external command_buffer_wait : nativeint -> unit\n    = \"caml_tolk_metal_command_buffer_wait\"\n\n  external compile : string -> bytes option = \"caml_tolk_metal_compile\"\n\n  external icb_create : nativeint -> int -> nativeint\n    = \"caml_tolk_metal_icb_create\"\n\n  external icb_encode :\n    nativeint ->\n    int ->\n    nativeint ->\n    nativeint array ->\n    nativeint ->\n    int 
array ->\n    int array ->\n    int array ->\n    unit = \"caml_tolk_metal_icb_encode_bc\" \"caml_tolk_metal_icb_encode\"\n\n  external icb_update_buffer : nativeint -> int -> int -> nativeint -> unit\n    = \"caml_tolk_metal_icb_update_buffer\"\n\n  external icb_update_dispatch :\n    nativeint -> int -> int array -> int array -> unit\n    = \"caml_tolk_metal_icb_update_dispatch_bc\"\n      \"caml_tolk_metal_icb_update_dispatch\"\n\n  external icb_execute :\n    nativeint ->\n    nativeint ->\n    int ->\n    nativeint array ->\n    nativeint array ->\n    nativeint = \"caml_tolk_metal_icb_execute\"\n\n  external icb_release : nativeint -> unit = \"caml_tolk_metal_icb_release\"\n  external needs_icb_fix : nativeint -> bool = \"caml_tolk_metal_needs_icb_fix\"\n\n  external blit_copy :\n    nativeint -> nativeint -> int -> nativeint -> int -> int -> nativeint\n    = \"caml_tolk_metal_blit_copy_bc\" \"caml_tolk_metal_blit_copy\"\n\n  external create_shared_event : nativeint -> nativeint\n    = \"caml_tolk_metal_create_shared_event\"\n\n  external release_shared_event : nativeint -> unit\n    = \"caml_tolk_metal_release_shared_event\"\n\n  external encode_signal_event : nativeint -> nativeint -> int -> unit\n    = \"caml_tolk_metal_encode_signal_event\"\n\n  external encode_wait_event : nativeint -> nativeint -> int -> unit\n    = \"caml_tolk_metal_encode_wait_event\"\n\n  external command_buffer_gpu_time : nativeint -> float * float\n    = \"caml_tolk_metal_command_buffer_gpu_time\"\n\n  external device_name : nativeint -> string = \"caml_tolk_metal_device_name\"\nend\n\nmodule State = struct\n  type t = {\n    device : nativeint;\n    queue : nativeint;\n    shared_event : nativeint;\n    mutable timeline_value : int;\n    mutable in_flight : nativeint list;\n    mutable closed : bool;\n    needs_icb_fix : bool;\n    device_name : string;\n  }\n\n  let create () =\n    let device = Ffi.create_device () in\n    let queue = Ffi.create_command_queue device in\n    let 
shared_event = Ffi.create_shared_event device in\n    let needs_icb_fix = Ffi.needs_icb_fix device in\n    let device_name = Ffi.device_name device in\n    {\n      device;\n      queue;\n      shared_event;\n      timeline_value = 0;\n      in_flight = [];\n      closed = false;\n      needs_icb_fix;\n      device_name;\n    }\n\n  let synchronize t =\n    List.iter Ffi.command_buffer_wait t.in_flight;\n    t.in_flight <- []\n\n  let shutdown t =\n    if not t.closed then (\n      synchronize t;\n      Ffi.release_shared_event t.shared_event;\n      Ffi.release_command_queue t.queue;\n      Ffi.release_device t.device;\n      t.closed <- true)\n\n  let is_virtual t =\n    let name = String.lowercase_ascii t.device_name in\n    let rec has_substring s sub i =\n      if i + String.length sub > String.length s then false\n      else if String.sub s i (String.length sub) = sub then true\n      else has_substring s sub (i + 1)\n    in\n    has_substring name \"virtual\" 0\nend\n\nmodule Allocator = struct\n  let raw state =\n    let alloc size spec =\n      match spec.Device.Buffer_spec.external_ptr with\n      | Some ptr -> ptr\n      | None -> Ffi.buffer_alloc state.State.device size\n    in\n    let free buf _size spec =\n      match spec.Device.Buffer_spec.external_ptr with\n      | Some _ -> ()\n      | None -> Ffi.buffer_free buf\n    in\n    let copyin buf bytes =\n      State.synchronize state;\n      Ffi.buffer_copyin buf bytes\n    in\n    let copyout bytes buf =\n      State.synchronize state;\n      Ffi.buffer_copyout bytes buf\n    in\n    let transfer ~dest ~src nbytes =\n      State.synchronize state;\n      let cmd = Ffi.blit_copy state.State.queue src 0 dest 0 nbytes in\n      state.State.in_flight <- cmd :: state.State.in_flight;\n      State.synchronize state\n    in\n    let addr buf = buf in\n    {\n      Device.Allocator.alloc;\n      free;\n      copyin;\n      copyout;\n      addr;\n      offset = Some (fun buf _size _offset -> buf);\n      
transfer = Some transfer;\n      supports_transfer = true;\n      copy_from_disk = None;\n      supports_copy_from_disk = false;\n    }\n\n  let create state =\n    Device.Allocator.Pack (Device.Lru_allocator.wrap (raw state))\nend\n\nmodule Compiler = struct\n  let compile src =\n    match Ffi.compile src with\n    | Some binary -> binary\n    | None -> Bytes.of_string src\n    | exception Failure _ -> Bytes.of_string src\n\n  let create () = Compiler.make ~name:\"METAL\" ~cachekey:\"compile_metal\" ~compile ()\nend\n\nmodule Program = struct\n  let runtime state entry_name lib ~runtimevars:_ =\n    let handle = Ffi.program_create state.State.device entry_name lib in\n    let local_dims = [| 1; 1; 1 |] in\n    let call bufs ~global ~local ~vals:_ ~wait ~timeout:_ =\n      let local = Option.value local ~default:local_dims in\n      let buf_offsets = Array.make (Array.length bufs) 0 in\n      let cmd =\n        Ffi.program_dispatch state.State.queue handle bufs buf_offsets\n          [||] global local\n      in\n      state.State.in_flight <- cmd :: state.State.in_flight;\n      if wait then begin\n        State.synchronize state;\n        None\n      end else\n        None\n    in\n    let free () = Ffi.program_free handle in\n    Device.{ call; free }\nend\n\nmodule Icb = struct\n  type t = { handle : nativeint; count : int }\n\n  let create state ~count =\n    let handle = Ffi.icb_create state.State.device count in\n    { handle; count }\n\n  let encode t ~index ~program ~buffers ~arg_buf ~arg_offsets ~global ~local =\n    Ffi.icb_encode t.handle index program buffers arg_buf arg_offsets global\n      local\n\n  let update_buffer t ~index ~buf_index ~buf =\n    Ffi.icb_update_buffer t.handle index buf_index buf\n\n  let update_dispatch t ~index ~global ~local =\n    Ffi.icb_update_dispatch t.handle index global local\n\n  let execute state t ~resources ~pipelines =\n    let fix_pipelines = if state.State.needs_icb_fix then pipelines else [||] in\n    let cmd =\n 
     Ffi.icb_execute state.State.queue t.handle t.count resources fix_pipelines\n    in\n    state.State.in_flight <- cmd :: state.State.in_flight\n\n  let release t = Ffi.icb_release t.handle\nend\n\nlet create name =\n  let state = State.create () in\n  at_exit (fun () -> State.shutdown state);\n  let allocator = Allocator.create state in\n  let renderer =\n    Renderer.with_compiler (Compiler.create ()) Cstyle.metal\n  in\n  let renderer_set = Device.Renderer_set.make [renderer, None] in\n  let runtime = Program.runtime state in\n  let synchronize () = State.synchronize state in\n  Device.make ~name ~allocator ~renderer_set ~runtime ~synchronize ()\n"
  },
  {
    "path": "packages/tolk/lib/runtime/metal/tolk_metal.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\n(** Metal GPU device backend.\n\n    [Tolk_metal] provides a {!Tolk.Device.t} that executes compiled kernels on\n    Apple Metal GPUs. Construct a device with {!create} and interact with it\n    through the {!Tolk.Device} interface.\n\n    For batched multi-kernel execution, {!Icb} encodes a sequence of compute\n    dispatches into a Metal indirect command buffer that can be replayed with a\n    single GPU submission.\n\n    {1:compilation Kernel compilation}\n\n    Kernels are compiled in two stages. The compiler first attempts offline\n    compilation via Apple's private MTLCompiler framework (source to MTLB\n    binary). When that framework is unavailable, it falls back to runtime source\n    compilation through the Metal API.\n\n    {1:env Environment variables}\n\n    - [METAL_FAST_MATH] — when set to a non-zero integer, enables fast-math mode\n      for runtime source compilation (the fallback path). Defaults to [0]\n      (disabled). *)\n\n(** {1:device Device} *)\n\nval create : string -> Tolk.Device.t\n(** [create name] is a Metal device identified by [name].\n\n    The device uses the system default Metal GPU, an LRU-cached shared-memory\n    allocator with blit-based buffer transfers, and the {!Tolk.Cstyle.metal}\n    renderer. An {!Stdlib.at_exit} handler synchronizes in-flight work and\n    releases the underlying Metal device and command queue.\n\n    Raises [Failure] if no Metal GPU is available (e.g. running in a VM or on\n    unsupported hardware). *)\n\n(** {1:state Device state} *)\n\nmodule State : sig\n  type t\n  (** The type for Metal device state. 
Holds the GPU device handle, command\n      queue, shared timeline event, and in-flight command buffer list. *)\n\n  val create : unit -> t\n  (** [create ()] initializes the system default Metal device, command queue,\n      and shared event.\n\n      Raises [Failure] if no Metal GPU is available. *)\n\n  val synchronize : t -> unit\n  (** [synchronize t] blocks until all in-flight command buffers complete. After\n      return, the in-flight list is empty.\n\n      Raises [Failure] if any command buffer completed with an error. *)\n\n  val shutdown : t -> unit\n  (** [shutdown t] synchronizes and releases all Metal resources (command queue,\n      shared event, device). Subsequent calls are no-ops. *)\n\n  val is_virtual : t -> bool\n  (** [is_virtual t] is [true] iff the device name contains [\"virtual\"],\n      indicating a paravirtualized Metal device (e.g. macOS VM). ICB-based graph\n      execution is unreliable on virtual devices. *)\nend\n\n(** {1:icb Indirect command buffers}\n\n    An indirect command buffer (ICB) pre-encodes a fixed sequence of compute\n    dispatches that can be replayed with a single GPU submission. Buffers and\n    dispatch dimensions can be updated between replays without re-encoding the\n    full command sequence.\n\n    Typical usage:\n    + {!Icb.create} to allocate the ICB.\n    + {!Icb.encode} for each kernel in the batch.\n    + {!Icb.execute} to submit.\n    + {!Icb.update_buffer} / {!Icb.update_dispatch} then {!Icb.execute} for\n      subsequent iterations.\n    + {!Icb.release} when done. *)\n\nmodule Icb : sig\n  type t\n  (** The type for indirect command buffers. *)\n\n  val create : State.t -> count:int -> t\n  (** [create state ~count] allocates an ICB with capacity for [count] compute\n      commands.\n\n      Raises [Failure] if Metal cannot allocate the ICB. 
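\n\n      A minimal usage sketch of the module-level workflow, assuming [state],\n      [program], and [buffers] are handles obtained elsewhere (the names here\n      are illustrative):\n      {[\n        let icb = Icb.create state ~count:1 in\n        Icb.encode icb ~index:0 ~program ~buffers ~arg_buf:0n\n          ~arg_offsets:[||] ~global:[| 64; 1; 1 |] ~local:[| 32; 1; 1 |];\n        Icb.execute state icb ~resources:buffers ~pipelines:[| program |];\n        State.synchronize state;\n        Icb.release icb\n      ]}\n\n      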
*)\n\n  val encode :\n    t ->\n    index:int ->\n    program:nativeint ->\n    buffers:nativeint array ->\n    arg_buf:nativeint ->\n    arg_offsets:int array ->\n    global:int array ->\n    local:int array ->\n    unit\n  (** [encode t ~index ~program ~buffers ~arg_buf ~arg_offsets ~global ~local]\n      encodes a compute dispatch at command [index] with:\n      - [program] — pipeline handle from {!Tolk.Device.Program.entry_addr}.\n      - [buffers] — kernel buffer bindings (array of Metal buffer addresses).\n      - [arg_buf] — Metal buffer holding packed [int32] variable parameters, or\n        [0n] if there are none.\n      - [arg_offsets] — byte offsets into [arg_buf] for each variable parameter.\n      - [global] — threadgroup grid dimensions, length 3.\n      - [local] — threads per threadgroup, length 3.\n\n      A memory barrier is inserted after the dispatch so commands execute in\n      order.\n\n      Raises [Failure] if [local] threads exceed the pipeline's maximum. *)\n\n  val update_buffer : t -> index:int -> buf_index:int -> buf:nativeint -> unit\n  (** [update_buffer t ~index ~buf_index ~buf] replaces the buffer at binding\n      [buf_index] for command [index]. The buffer offset is set to [0]. *)\n\n  val update_dispatch :\n    t -> index:int -> global:int array -> local:int array -> unit\n  (** [update_dispatch t ~index ~global ~local] updates the threadgroup\n      dimensions for command [index]. Both arrays must have length 3. *)\n\n  val execute :\n    State.t ->\n    t ->\n    resources:nativeint array ->\n    pipelines:nativeint array ->\n    unit\n  (** [execute state t ~resources ~pipelines] submits the ICB for GPU execution.\n\n      [resources] are Metal buffer handles marked for read and write access by\n      the GPU. 
Every buffer referenced by encoded commands must appear here.\n\n      [pipelines] are pipeline handles for the M1/M2 ICB workaround: on pre-M3\n      GPUs (AGXG family < 15), a zero-size dummy dispatch is issued per pipeline\n      before executing the ICB to prevent\n      [kIOGPUCommandBufferCallbackErrorInvalidResource] crashes. On M3+ the\n      array is ignored.\n\n      The resulting command buffer is appended to the in-flight list. *)\n\n  val release : t -> unit\n  (** [release t] frees the underlying Metal ICB. *)\nend\n"
  },
  {
    "path": "packages/tolk/lib/runtime/metal/tolk_metal_stubs.c",
    "content": "#import <Foundation/Foundation.h>\n#import <Metal/Metal.h>\n#include <caml/alloc.h>\n#include <caml/fail.h>\n#include <caml/memory.h>\n#include <caml/mlvalues.h>\n#include <caml/threads.h>\n#import <dispatch/dispatch.h>\n#include <dlfcn.h>\n#import <objc/message.h>\n#include <stdint.h>\n#include <stdlib.h>\n#include <string.h>\n\n// 13 is the undocumented request type Metal uses to compile source into MTLB.\n// This mirrors tinygrad's Metal compiler path.\n#define REQUEST_TYPE_COMPILE 13\n\ntypedef struct {\n  id<MTLLibrary> library;\n  id<MTLFunction> function;\n  id<MTLComputePipelineState> pipeline;\n  // Cached to avoid repeated ObjC message sends (tinygrad: \"cache these msg\n  // calls\"). Used to validate local threadgroup size before dispatch.\n  uint64_t max_total_threads;\n  char* name;\n  NSString* label; // cached NSString for command buffer labeling\n} tolk_metal_program;\n\nstatic void fail_with_nserror(NSError* error, const char* fallback) {\n  if (error != nil) {\n    NSString* desc = [error localizedDescription];\n    const char* msg = desc != nil ? 
[desc UTF8String] : fallback;\n    caml_failwith(msg);\n  }\n  caml_failwith(fallback);\n}\n\n// METAL_FAST_MATH mirrors tinygrad's fast-math toggle for source compilation.\nstatic bool metal_fast_math_enabled(void) {\n  const char* raw = getenv(\"METAL_FAST_MATH\");\n  if (raw == NULL) return false;\n  while (*raw == ' ' || *raw == '\\t' || *raw == '\\n') raw++;\n  if (*raw == '\\0') return false;\n  return atoi(raw) != 0;\n}\n\nstatic NSString* metal_cache_dir(void) {\n  const char* xdg = getenv(\"XDG_CACHE_HOME\");\n  NSString* base = nil;\n  if (xdg != NULL && xdg[0] != '\\0') {\n    base = [NSString stringWithUTF8String:xdg];\n  } else {\n    base = [[NSHomeDirectory() stringByAppendingPathComponent:@\"Library\"]\n        stringByAppendingPathComponent:@\"Caches\"];\n  }\n  NSString* dir = [base stringByAppendingPathComponent:@\"tolk\"];\n  [[NSFileManager defaultManager] createDirectoryAtPath:dir\n                            withIntermediateDirectories:YES\n                                             attributes:nil\n                                                  error:nil];\n  return dir;\n}\n\nCAMLprim value caml_tolk_metal_create_device(value unit) {\n  CAMLparam1(unit);\n  @autoreleasepool {\n    // MTLCreateSystemDefaultDevice can return nil on unsupported/virtualized\n    // setups. 
The OCaml side will surface the failure if that happens.\n    id<MTLDevice> device = MTLCreateSystemDefaultDevice();\n    if (device == nil) caml_failwith(\"Metal device unavailable\");\n    [device retain];\n    CAMLreturn(caml_copy_nativeint((intnat)device));\n  }\n}\n\nCAMLprim value caml_tolk_metal_release_device(value v_device) {\n  CAMLparam1(v_device);\n  @autoreleasepool {\n    id<MTLDevice> device = (id<MTLDevice>)Nativeint_val(v_device);\n    [device release];\n    CAMLreturn(Val_unit);\n  }\n}\n\nCAMLprim value caml_tolk_metal_create_command_queue(value v_device) {\n  CAMLparam1(v_device);\n  @autoreleasepool {\n    id<MTLDevice> device = (id<MTLDevice>)Nativeint_val(v_device);\n    id<MTLCommandQueue> queue =\n        [device newCommandQueueWithMaxCommandBufferCount:1024];\n    if (queue == nil) caml_failwith(\"Cannot allocate Metal command queue\");\n    CAMLreturn(caml_copy_nativeint((intnat)queue));\n  }\n}\n\nCAMLprim value caml_tolk_metal_release_command_queue(value v_queue) {\n  CAMLparam1(v_queue);\n  @autoreleasepool {\n    id<MTLCommandQueue> queue = (id<MTLCommandQueue>)Nativeint_val(v_queue);\n    [queue release];\n    CAMLreturn(Val_unit);\n  }\n}\n\nCAMLprim value caml_tolk_metal_buffer_alloc(value v_device, value v_size) {\n  CAMLparam2(v_device, v_size);\n  @autoreleasepool {\n    id<MTLDevice> device = (id<MTLDevice>)Nativeint_val(v_device);\n    NSUInteger size = (NSUInteger)Long_val(v_size);\n    id<MTLBuffer> buf =\n        [device newBufferWithLength:size options:MTLResourceStorageModeShared];\n    if (buf == nil) caml_failwith(\"Metal OOM while allocating buffer\");\n    CAMLreturn(caml_copy_nativeint((intnat)buf));\n  }\n}\n\nCAMLprim value caml_tolk_metal_buffer_free(value v_buf) {\n  CAMLparam1(v_buf);\n  @autoreleasepool {\n    id<MTLBuffer> buf = (id<MTLBuffer>)Nativeint_val(v_buf);\n    [buf release];\n    CAMLreturn(Val_unit);\n  }\n}\n\nCAMLprim value caml_tolk_metal_buffer_copyin(value v_buf, value v_bytes) {\n  
CAMLparam2(v_buf, v_bytes);\n  id<MTLBuffer> buf = (id<MTLBuffer>)Nativeint_val(v_buf);\n  void* dst = [buf contents];\n  size_t len = (size_t)caml_string_length(v_bytes);\n  memcpy(dst, Bytes_val(v_bytes), len);\n  CAMLreturn(Val_unit);\n}\n\nCAMLprim value caml_tolk_metal_buffer_copyout(value v_bytes, value v_buf) {\n  CAMLparam2(v_bytes, v_buf);\n  id<MTLBuffer> buf = (id<MTLBuffer>)Nativeint_val(v_buf);\n  void* src = [buf contents];\n  size_t len = (size_t)caml_string_length(v_bytes);\n  memcpy(Bytes_val(v_bytes), src, len);\n  CAMLreturn(Val_unit);\n}\n\nCAMLprim value caml_tolk_metal_program_create(value v_device, value v_name,\n                                         value v_lib) {\n  CAMLparam3(v_device, v_name, v_lib);\n  @autoreleasepool {\n    id<MTLDevice> device = (id<MTLDevice>)Nativeint_val(v_device);\n    const char* name = String_val(v_name);\n    size_t lib_len = (size_t)caml_string_length(v_lib);\n    const uint8_t* lib = (const uint8_t*)String_val(v_lib);\n    id<MTLLibrary> library = nil;\n    if (lib_len >= 4 && memcmp(lib, \"MTLB\", 4) == 0) {\n      void* copy = malloc(lib_len);\n      if (copy == NULL) caml_failwith(\"Metal library allocation failed\");\n      memcpy(copy, lib, lib_len);\n      dispatch_data_t data = dispatch_data_create(\n          copy, lib_len, NULL, DISPATCH_DATA_DESTRUCTOR_DEFAULT);\n      NSError* error = nil;\n      library = [device newLibraryWithData:data error:&error];\n      dispatch_release(data);\n      if (library == nil)\n        fail_with_nserror(error, \"Failed to load Metal library\");\n    } else {\n      NSString* src = [[NSString alloc] initWithBytes:lib\n                                               length:lib_len\n                                             encoding:NSUTF8StringEncoding];\n      if (src == nil) caml_failwith(\"Metal source is not valid UTF-8\");\n      MTLCompileOptions* options = [[MTLCompileOptions alloc] init];\n      BOOL fast_math = metal_fast_math_enabled();\n#if 
defined(__MAC_OS_X_VERSION_MAX_ALLOWED) && \\\n    __MAC_OS_X_VERSION_MAX_ALLOWED >= 150000\n      if (@available(macOS 15.0, *)) {\n        options.mathMode = fast_math ? MTLMathModeFast : MTLMathModeSafe;\n      } else {\n        // Use ObjC runtime to avoid deprecation warnings on older SDKs.\n        if ([options respondsToSelector:@selector(setFastMathEnabled:)]) {\n          ((void (*)(id, SEL, BOOL))objc_msgSend)(\n              options, @selector(setFastMathEnabled:), fast_math);\n        }\n      }\n#else\n      options.fastMathEnabled = fast_math;\n#endif\n      NSError* error = nil;\n      library = [device newLibraryWithSource:src options:options error:&error];\n      [options release];\n      [src release];\n      if (library == nil)\n        fail_with_nserror(error, \"Metal source compile failed\");\n    }\n    NSString* ns_name = [NSString stringWithUTF8String:name];\n    id<MTLFunction> function = [library newFunctionWithName:ns_name];\n    if (function == nil) {\n      [library release];\n      caml_failwith(\"Metal function not found\");\n    }\n    MTLComputePipelineDescriptor* desc =\n        [[MTLComputePipelineDescriptor alloc] init];\n    desc.computeFunction = function;\n    desc.supportIndirectCommandBuffers = YES;\n    NSError* error = nil;\n    id<MTLComputePipelineState> pipeline =\n        [device newComputePipelineStateWithDescriptor:desc\n                                              options:MTLPipelineOptionNone\n                                           reflection:nil\n                                                error:&error];\n    [desc release];\n    if (pipeline == nil) {\n      [function release];\n      [library release];\n      fail_with_nserror(error, \"Metal pipeline creation failed\");\n    }\n    tolk_metal_program* prog =\n        (tolk_metal_program*)calloc(1, sizeof(tolk_metal_program));\n    if (prog == NULL) {\n      [pipeline release];\n      [function release];\n      [library release];\n      
caml_failwith(\"Metal program allocation failed\");\n    }\n    prog->library = library;\n    prog->function = function;\n    prog->pipeline = pipeline;\n    prog->max_total_threads =\n        (uint64_t)[pipeline maxTotalThreadsPerThreadgroup];\n    prog->name = strdup(name);\n    prog->label = [[NSString stringWithUTF8String:name] retain];\n    CAMLreturn(caml_copy_nativeint((intnat)prog));\n  }\n}\n\nCAMLprim value caml_tolk_metal_program_free(value v_prog) {\n  CAMLparam1(v_prog);\n  @autoreleasepool {\n    tolk_metal_program* prog = (tolk_metal_program*)Nativeint_val(v_prog);\n    if (prog != NULL) {\n      [prog->pipeline release];\n      [prog->function release];\n      [prog->library release];\n      if (prog->label != nil) [prog->label release];\n      free(prog->name);\n      free(prog);\n    }\n    CAMLreturn(Val_unit);\n  }\n}\n\nCAMLprim value caml_tolk_metal_program_dispatch(value v_queue, value v_prog,\n                                           value v_buffers, value v_offsets,\n                                           value v_args, value v_global,\n                                           value v_local) {\n  CAMLparam5(v_queue, v_prog, v_buffers, v_offsets, v_args);\n  CAMLxparam2(v_global, v_local);\n  @autoreleasepool {\n    id<MTLCommandQueue> queue = (id<MTLCommandQueue>)Nativeint_val(v_queue);\n    tolk_metal_program* prog = (tolk_metal_program*)Nativeint_val(v_prog);\n    mlsize_t buf_count = Wosize_val(v_buffers);\n    mlsize_t arg_count = Wosize_val(v_args);\n    if (Wosize_val(v_offsets) != buf_count) {\n      caml_failwith(\"Metal dispatch: buffer and offset array length mismatch\");\n    }\n    if (Wosize_val(v_global) != 3 || Wosize_val(v_local) != 3) {\n      caml_failwith(\"Metal dispatch expects 3D sizes\");\n    }\n    int gx = Int_val(Field(v_global, 0));\n    int gy = Int_val(Field(v_global, 1));\n    int gz = Int_val(Field(v_global, 2));\n    int lx = Int_val(Field(v_local, 0));\n    int ly = Int_val(Field(v_local, 1));\n    
int lz = Int_val(Field(v_local, 2));\n    uint64_t local_threads = (uint64_t)lx * (uint64_t)ly * (uint64_t)lz;\n    if (local_threads > prog->max_total_threads) {\n      caml_failwith(\"Metal local size exceeds max threads per threadgroup\");\n    }\n\n    id<MTLCommandBuffer> cmd = [queue commandBuffer];\n    if (cmd == nil) caml_failwith(\"Metal command buffer creation failed\");\n    id<MTLComputeCommandEncoder> encoder = [cmd computeCommandEncoder];\n    if (encoder == nil) caml_failwith(\"Metal compute encoder creation failed\");\n    [encoder setComputePipelineState:prog->pipeline];\n\n    for (mlsize_t i = 0; i < buf_count; ++i) {\n      id<MTLBuffer> buf = (id<MTLBuffer>)Nativeint_val(Field(v_buffers, i));\n      NSUInteger offset = (NSUInteger)Int_val(Field(v_offsets, i));\n      [encoder setBuffer:buf offset:offset atIndex:i];\n    }\n    for (mlsize_t i = 0; i < arg_count; ++i) {\n      int32_t arg_value = (int32_t)Int_val(Field(v_args, i));\n      [encoder setBytes:&arg_value\n                 length:sizeof(arg_value)\n                atIndex:(buf_count + i)];\n    }\n\n    MTLSize global =\n        MTLSizeMake((NSUInteger)gx, (NSUInteger)gy, (NSUInteger)gz);\n    MTLSize local = MTLSizeMake((NSUInteger)lx, (NSUInteger)ly, (NSUInteger)lz);\n    [encoder dispatchThreadgroups:global threadsPerThreadgroup:local];\n    [encoder endEncoding];\n\n    if (prog->label != nil) [cmd setLabel:prog->label];\n    [cmd commit];\n    [cmd retain];\n    CAMLreturn(caml_copy_nativeint((intnat)cmd));\n  }\n}\n\nCAMLprim value caml_tolk_metal_program_dispatch_bc(value* argv, int argc) {\n  (void)argc;\n  // Bytecode stub for the 7-arg native entrypoint.\n  return caml_tolk_metal_program_dispatch(argv[0], argv[1], argv[2], argv[3],\n                                     argv[4], argv[5], argv[6]);\n}\n\nCAMLprim value caml_tolk_metal_icb_create(value v_device, value v_count) {\n  CAMLparam2(v_device, v_count);\n  @autoreleasepool {\n    id<MTLDevice> device = 
(id<MTLDevice>)Nativeint_val(v_device);\n    NSUInteger count = (NSUInteger)Long_val(v_count);\n    MTLIndirectCommandBufferDescriptor* desc =\n        [[MTLIndirectCommandBufferDescriptor alloc] init];\n    desc.commandTypes = MTLIndirectCommandTypeConcurrentDispatch;\n    desc.inheritBuffers = NO;\n    desc.inheritPipelineState = NO;\n    // 31 is Metal's hardware limit on kernel buffer bindings per command.\n    desc.maxKernelBufferBindCount = 31;\n    id<MTLIndirectCommandBuffer> icb =\n        [device newIndirectCommandBufferWithDescriptor:desc\n                                       maxCommandCount:count\n                                               options:MTLResourceCPUCacheModeDefaultCache];\n    [desc release];\n    if (icb == nil) caml_failwith(\"Metal ICB creation failed\");\n    CAMLreturn(caml_copy_nativeint((intnat)icb));\n  }\n}\n\nCAMLprim value caml_tolk_metal_icb_encode(value v_icb, value v_index, value v_prog,\n                                     value v_buffers, value v_arg_buf,\n                                     value v_arg_offsets, value v_global,\n                                     value v_local) {\n  CAMLparam5(v_icb, v_index, v_prog, v_buffers, v_arg_buf);\n  CAMLxparam3(v_arg_offsets, v_global, v_local);\n  @autoreleasepool {\n    id<MTLIndirectCommandBuffer> icb =\n        (id<MTLIndirectCommandBuffer>)Nativeint_val(v_icb);\n    NSUInteger index = (NSUInteger)Int_val(v_index);\n    tolk_metal_program* prog = (tolk_metal_program*)Nativeint_val(v_prog);\n    mlsize_t buf_count = Wosize_val(v_buffers);\n    mlsize_t arg_count = Wosize_val(v_arg_offsets);\n    if (Wosize_val(v_global) != 3 || Wosize_val(v_local) != 3) {\n      caml_failwith(\"Metal ICB expects 3D sizes\");\n    }\n    int gx = Int_val(Field(v_global, 0));\n    int gy = Int_val(Field(v_global, 1));\n    int gz = Int_val(Field(v_global, 2));\n    int lx = Int_val(Field(v_local, 0));\n    int ly = Int_val(Field(v_local, 1));\n    int lz = Int_val(Field(v_local, 2));\n   
 uint64_t local_threads = (uint64_t)lx * (uint64_t)ly * (uint64_t)lz;\n    if (local_threads > prog->max_total_threads) {\n      caml_failwith(\"Metal local size exceeds max threads per threadgroup\");\n    }\n\n    id<MTLIndirectComputeCommand> cmd =\n        [icb indirectComputeCommandAtIndex:index];\n    [cmd setComputePipelineState:prog->pipeline];\n\n    for (mlsize_t i = 0; i < buf_count; ++i) {\n      id<MTLBuffer> buf = (id<MTLBuffer>)Nativeint_val(Field(v_buffers, i));\n      [cmd setKernelBuffer:buf offset:0 atIndex:i];\n    }\n    if (Nativeint_val(v_arg_buf) != 0 && arg_count > 0) {\n      id<MTLBuffer> arg_buf = (id<MTLBuffer>)Nativeint_val(v_arg_buf);\n      for (mlsize_t i = 0; i < arg_count; ++i) {\n        NSUInteger offset = (NSUInteger)Int_val(Field(v_arg_offsets, i));\n        [cmd setKernelBuffer:arg_buf offset:offset atIndex:(buf_count + i)];\n      }\n    }\n\n    MTLSize global =\n        MTLSizeMake((NSUInteger)gx, (NSUInteger)gy, (NSUInteger)gz);\n    MTLSize local = MTLSizeMake((NSUInteger)lx, (NSUInteger)ly, (NSUInteger)lz);\n    [cmd concurrentDispatchThreadgroups:global threadsPerThreadgroup:local];\n    // Barrier ensures sequential execution: each command completes before the\n    // next begins. 
Without this, commands in the ICB execute concurrently.\n    [cmd setBarrier];\n    CAMLreturn(Val_unit);\n  }\n}\n\nCAMLprim value caml_tolk_metal_icb_encode_bc(value* argv, int argc) {\n  (void)argc;\n  return caml_tolk_metal_icb_encode(argv[0], argv[1], argv[2], argv[3], argv[4],\n                              argv[5], argv[6], argv[7]);\n}\n\nCAMLprim value caml_tolk_metal_icb_update_buffer(value v_icb, value v_index,\n                                           value v_buf_index, value v_buf) {\n  CAMLparam4(v_icb, v_index, v_buf_index, v_buf);\n  @autoreleasepool {\n    id<MTLIndirectCommandBuffer> icb =\n        (id<MTLIndirectCommandBuffer>)Nativeint_val(v_icb);\n    NSUInteger index = (NSUInteger)Int_val(v_index);\n    NSUInteger buf_index = (NSUInteger)Int_val(v_buf_index);\n    id<MTLBuffer> buf = (id<MTLBuffer>)Nativeint_val(v_buf);\n    id<MTLIndirectComputeCommand> cmd =\n        [icb indirectComputeCommandAtIndex:index];\n    [cmd setKernelBuffer:buf offset:0 atIndex:buf_index];\n    CAMLreturn(Val_unit);\n  }\n}\n\nCAMLprim value caml_tolk_metal_icb_update_dispatch(value v_icb, value v_index,\n                                             value v_global, value v_local) {\n  CAMLparam3(v_icb, v_index, v_global);\n  CAMLxparam1(v_local);\n  @autoreleasepool {\n    id<MTLIndirectCommandBuffer> icb =\n        (id<MTLIndirectCommandBuffer>)Nativeint_val(v_icb);\n    NSUInteger index = (NSUInteger)Int_val(v_index);\n    if (Wosize_val(v_global) != 3 || Wosize_val(v_local) != 3) {\n      caml_failwith(\"Metal ICB expects 3D sizes\");\n    }\n    int gx = Int_val(Field(v_global, 0));\n    int gy = Int_val(Field(v_global, 1));\n    int gz = Int_val(Field(v_global, 2));\n    int lx = Int_val(Field(v_local, 0));\n    int ly = Int_val(Field(v_local, 1));\n    int lz = Int_val(Field(v_local, 2));\n\n    id<MTLIndirectComputeCommand> cmd =\n        [icb indirectComputeCommandAtIndex:index];\n    MTLSize global =\n        MTLSizeMake((NSUInteger)gx, (NSUInteger)gy, 
(NSUInteger)gz);\n    MTLSize local = MTLSizeMake((NSUInteger)lx, (NSUInteger)ly, (NSUInteger)lz);\n    [cmd concurrentDispatchThreadgroups:global threadsPerThreadgroup:local];\n    CAMLreturn(Val_unit);\n  }\n}\n\nCAMLprim value caml_tolk_metal_icb_update_dispatch_bc(value* argv, int argc) {\n  (void)argc;\n  return caml_tolk_metal_icb_update_dispatch(argv[0], argv[1], argv[2], argv[3]);\n}\n\nCAMLprim value caml_tolk_metal_icb_execute(value v_queue, value v_icb,\n                                     value v_count, value v_resources,\n                                     value v_pipelines) {\n  CAMLparam5(v_queue, v_icb, v_count, v_resources, v_pipelines);\n  @autoreleasepool {\n    id<MTLCommandQueue> queue = (id<MTLCommandQueue>)Nativeint_val(v_queue);\n    id<MTLIndirectCommandBuffer> icb =\n        (id<MTLIndirectCommandBuffer>)Nativeint_val(v_icb);\n    NSUInteger count = (NSUInteger)Long_val(v_count);\n    mlsize_t res_count = Wosize_val(v_resources);\n    mlsize_t pipeline_count = Wosize_val(v_pipelines);\n\n    id<MTLCommandBuffer> cmd = [queue commandBuffer];\n    if (cmd == nil) caml_failwith(\"Metal command buffer creation failed\");\n    id<MTLComputeCommandEncoder> encoder = [cmd computeCommandEncoder];\n    if (encoder == nil) caml_failwith(\"Metal compute encoder creation failed\");\n\n    if (res_count > 0) {\n      id<MTLResource>* resources =\n          (id<MTLResource>*)malloc(sizeof(id<MTLResource>) * res_count);\n      if (resources == NULL) caml_failwith(\"Metal resource allocation failed\");\n      for (mlsize_t i = 0; i < res_count; ++i) {\n        id<MTLBuffer> buf =\n            (id<MTLBuffer>)Nativeint_val(Field(v_resources, i));\n        resources[i] = buf;\n      }\n      [encoder useResources:resources\n                      count:res_count\n                      usage:MTLResourceUsageRead | MTLResourceUsageWrite];\n      free(resources);\n    }\n\n    // M1/M2 workaround: dummy dispatch with each pipeline to mark them as used.\n    
// Without this, ICB execution can crash on AGXG<15 (pre-M3) GPUs.\n    for (mlsize_t i = 0; i < pipeline_count; ++i) {\n      tolk_metal_program* prog =\n          (tolk_metal_program*)Nativeint_val(Field(v_pipelines, i));\n      [encoder setComputePipelineState:prog->pipeline];\n      [encoder dispatchThreadgroups:MTLSizeMake(0, 0, 0)\n           threadsPerThreadgroup:MTLSizeMake(0, 0, 0)];\n    }\n\n    NSRange range = NSMakeRange(0, count);\n    [encoder executeCommandsInBuffer:icb withRange:range];\n    [encoder endEncoding];\n    [cmd commit];\n    [cmd retain];\n    CAMLreturn(caml_copy_nativeint((intnat)cmd));\n  }\n}\n\nCAMLprim value caml_tolk_metal_icb_release(value v_icb) {\n  CAMLparam1(v_icb);\n  @autoreleasepool {\n    id<MTLIndirectCommandBuffer> icb =\n        (id<MTLIndirectCommandBuffer>)Nativeint_val(v_icb);\n    [icb release];\n    CAMLreturn(Val_unit);\n  }\n}\n\n// Detect whether this GPU needs the M1/M2 ICB workaround.\n// Returns true for AGXG<15 (pre-M3) families.\nCAMLprim value caml_tolk_metal_needs_icb_fix(value v_device) {\n  CAMLparam1(v_device);\n  @autoreleasepool {\n    id<MTLDevice> device = (id<MTLDevice>)Nativeint_val(v_device);\n    NSString* desc = [device description];\n    if (desc == nil) CAMLreturn(Val_true);\n    NSRange range = [desc rangeOfString:@\"AGXG\"];\n    if (range.location == NSNotFound) CAMLreturn(Val_true);\n    NSString* rest = [desc substringFromIndex:range.location + 4];\n    int family = atoi([rest UTF8String]);\n    CAMLreturn(Val_bool(family < 15));\n  }\n}\n\nCAMLprim value caml_tolk_metal_blit_copy(value v_queue, value v_src_buf,\n                                    value v_src_offset, value v_dst_buf,\n                                    value v_dst_offset, value v_size) {\n  CAMLparam5(v_queue, v_src_buf, v_src_offset, v_dst_buf, v_dst_offset);\n  CAMLxparam1(v_size);\n  @autoreleasepool {\n    id<MTLCommandQueue> queue = (id<MTLCommandQueue>)Nativeint_val(v_queue);\n    id<MTLBuffer> src = 
(id<MTLBuffer>)Nativeint_val(v_src_buf);\n    NSUInteger src_offset = (NSUInteger)Long_val(v_src_offset);\n    id<MTLBuffer> dst = (id<MTLBuffer>)Nativeint_val(v_dst_buf);\n    NSUInteger dst_offset = (NSUInteger)Long_val(v_dst_offset);\n    NSUInteger size = (NSUInteger)Long_val(v_size);\n\n    id<MTLCommandBuffer> cmd = [queue commandBuffer];\n    if (cmd == nil) caml_failwith(\"Metal command buffer creation failed\");\n    id<MTLBlitCommandEncoder> encoder = [cmd blitCommandEncoder];\n    if (encoder == nil) caml_failwith(\"Metal blit encoder creation failed\");\n    [encoder copyFromBuffer:src\n               sourceOffset:src_offset\n                   toBuffer:dst\n          destinationOffset:dst_offset\n                       size:size];\n    [encoder endEncoding];\n    [cmd commit];\n    [cmd retain];\n    CAMLreturn(caml_copy_nativeint((intnat)cmd));\n  }\n}\n\nCAMLprim value caml_tolk_metal_blit_copy_bc(value* argv, int argc) {\n  (void)argc;\n  return caml_tolk_metal_blit_copy(argv[0], argv[1], argv[2], argv[3], argv[4],\n                              argv[5]);\n}\n\nCAMLprim value caml_tolk_metal_create_shared_event(value v_device) {\n  CAMLparam1(v_device);\n  @autoreleasepool {\n    id<MTLDevice> device = (id<MTLDevice>)Nativeint_val(v_device);\n    id<MTLSharedEvent> event = [device newSharedEvent];\n    if (event == nil) caml_failwith(\"Metal shared event creation failed\");\n    CAMLreturn(caml_copy_nativeint((intnat)event));\n  }\n}\n\nCAMLprim value caml_tolk_metal_release_shared_event(value v_event) {\n  CAMLparam1(v_event);\n  @autoreleasepool {\n    id<MTLSharedEvent> event = (id<MTLSharedEvent>)Nativeint_val(v_event);\n    [event release];\n    CAMLreturn(Val_unit);\n  }\n}\n\nCAMLprim value caml_tolk_metal_encode_signal_event(value v_cmd, value v_event,\n                                              value v_timeline_value) {\n  CAMLparam3(v_cmd, v_event, v_timeline_value);\n  @autoreleasepool {\n    id<MTLCommandBuffer> cmd = 
(id<MTLCommandBuffer>)Nativeint_val(v_cmd);\n    id<MTLEvent> event = (id<MTLEvent>)Nativeint_val(v_event);\n    uint64_t val = (uint64_t)Long_val(v_timeline_value);\n    [cmd encodeSignalEvent:event value:val];\n    CAMLreturn(Val_unit);\n  }\n}\n\nCAMLprim value caml_tolk_metal_encode_wait_event(value v_cmd, value v_event,\n                                            value v_timeline_value) {\n  CAMLparam3(v_cmd, v_event, v_timeline_value);\n  @autoreleasepool {\n    id<MTLCommandBuffer> cmd = (id<MTLCommandBuffer>)Nativeint_val(v_cmd);\n    id<MTLEvent> event = (id<MTLEvent>)Nativeint_val(v_event);\n    uint64_t val = (uint64_t)Long_val(v_timeline_value);\n    [cmd encodeWaitForEvent:event value:val];\n    CAMLreturn(Val_unit);\n  }\n}\n\nCAMLprim value caml_tolk_metal_command_buffer_gpu_time(value v_cmd) {\n  CAMLparam1(v_cmd);\n  CAMLlocal1(v_pair);\n  id<MTLCommandBuffer> cmd = (id<MTLCommandBuffer>)Nativeint_val(v_cmd);\n  double start = [cmd GPUStartTime];\n  double end = [cmd GPUEndTime];\n  v_pair = caml_alloc(2 * Double_wosize, Double_array_tag);\n  Store_double_field(v_pair, 0, start);\n  Store_double_field(v_pair, 1, end);\n  CAMLreturn(v_pair);\n}\n\nCAMLprim value caml_tolk_metal_device_name(value v_device) {\n  CAMLparam1(v_device);\n  @autoreleasepool {\n    id<MTLDevice> device = (id<MTLDevice>)Nativeint_val(v_device);\n    NSString* name = [device name];\n    const char* str = name != nil ? [name UTF8String] : \"unknown\";\n    CAMLreturn(caml_copy_string(str));\n  }\n}\n\nCAMLprim value caml_tolk_metal_command_buffer_wait(value v_cmd) {\n  CAMLparam1(v_cmd);\n  id<MTLCommandBuffer> cmd = (id<MTLCommandBuffer>)Nativeint_val(v_cmd);\n\n  caml_release_runtime_system();\n  [cmd waitUntilCompleted];\n  caml_acquire_runtime_system();\n\n  @autoreleasepool {\n    NSError* error = [cmd error];\n    if (error != nil) {\n      NSString* desc = [error localizedDescription];\n      const char* msg =\n          desc != nil ? 
[desc UTF8String] : \"Metal command buffer failed\";\n      char buf[512];\n      snprintf(buf, sizeof(buf), \"%s\", msg);\n      [cmd release];\n      caml_failwith(buf);\n    }\n    [cmd release];\n  }\n  CAMLreturn(Val_unit);\n}\n\ntypedef void* (*MTLCodeGenServiceCreate_t)(const char* label);\ntypedef void (*MTLCodeGenServiceBuildRequest_t)(void* cgs, void* queue,\n                                                int request_type,\n                                                const void* request,\n                                                size_t request_len,\n                                                void* callback);\n\nstatic void* mtlcompiler_handle = NULL;\nstatic MTLCodeGenServiceCreate_t mtl_create = NULL;\nstatic MTLCodeGenServiceBuildRequest_t mtl_build = NULL;\nstatic void* mtl_service = NULL;\n\n// MTLCompiler is a private framework used for fast source->MTLB compilation.\n// If it can't be loaded, we fall back to runtime source compilation.\nstatic int ensure_mtlcompiler(void) {\n  if (mtl_create != NULL && mtl_build != NULL && mtl_service != NULL) return 1;\n  if (mtlcompiler_handle == NULL) {\n    mtlcompiler_handle = dlopen(\n        \"/System/Library/PrivateFrameworks/MTLCompiler.framework/MTLCompiler\",\n        RTLD_LAZY);\n    if (mtlcompiler_handle == NULL) {\n      mtlcompiler_handle = dlopen(\"MTLCompiler\", RTLD_LAZY);\n    }\n  }\n  if (mtlcompiler_handle == NULL) return 0;\n  if (mtl_create == NULL) {\n    mtl_create = (MTLCodeGenServiceCreate_t)dlsym(mtlcompiler_handle,\n                                                  \"MTLCodeGenServiceCreate\");\n  }\n  if (mtl_build == NULL) {\n    mtl_build = (MTLCodeGenServiceBuildRequest_t)dlsym(\n        mtlcompiler_handle, \"MTLCodeGenServiceBuildRequest\");\n  }\n  if (mtl_create == NULL || mtl_build == NULL) return 0;\n  if (mtl_service == NULL) {\n    mtl_service = mtl_create(\"tolk\");\n  }\n  return mtl_service != NULL;\n}\n\ntypedef struct {\n  int error;\n  char* 
error_msg;\n  uint8_t* data;\n  size_t len;\n} compile_result;\n\nstatic size_t round_up(size_t value, size_t align) {\n  size_t rem = value % align;\n  if (rem == 0) return value;\n  return value + (align - rem);\n}\n\n// Compile Metal source to MTLB binary via Apple's private MTLCompiler.\n// Returns Some(bytes) on success, None if MTLCompiler is unavailable.\n// The request format is: [src_len:8][params_len:8][src_padded][params].\n// The reply format is: [?:8][header_size:4][warning_size:4][header][warnings][MTLB].\nCAMLprim value caml_tolk_metal_compile(value v_src) {\n  CAMLparam1(v_src);\n  CAMLlocal2(v_bytes, v_some);\n  @autoreleasepool {\n    if (!ensure_mtlcompiler()) {\n      CAMLreturn(Val_int(0));\n    }\n    const char* src = String_val(v_src);\n    size_t src_len = (size_t)caml_string_length(v_src);\n\n    NSOperatingSystemVersion ver =\n        [[NSProcessInfo processInfo] operatingSystemVersion];\n    int major = (int)ver.majorVersion;\n    const char* metal_version = \"macos-metal2.0\";\n    if (major >= 14)\n      metal_version = \"metal3.1\";\n    else if (major >= 13)\n      metal_version = \"metal3.0\";\n\n    NSString* cache_dir = metal_cache_dir();\n    const char* cache_path = [cache_dir UTF8String];\n\n    char params[1024];\n    snprintf(params, sizeof(params),\n             \"-fno-fast-math -std=%s --driver-mode=metal -x metal \"\n             \"-fmodules-cache-path=\\\"%s\\\" -fno-caret-diagnostics\",\n             metal_version, cache_path);\n\n    size_t src_padded_len = round_up(src_len + 1, 4);\n    size_t params_len = strlen(params) + 1;\n    size_t request_len = 16 + src_padded_len + params_len;\n    uint8_t* request = (uint8_t*)malloc(request_len);\n    if (request == NULL)\n      caml_failwith(\"Metal compiler request allocation failed\");\n\n    uint64_t src_len64 = (uint64_t)src_padded_len;\n    uint64_t params_len64 = (uint64_t)params_len;\n    memcpy(request, &src_len64, 8);\n    memcpy(request + 8, &params_len64, 8);\n    
memcpy(request + 16, src, src_len);\n    request[16 + src_len] = '\\0';\n    if (src_padded_len > src_len + 1) {\n      memset(request + 16 + src_len + 1, 0, src_padded_len - (src_len + 1));\n    }\n    memcpy(request + 16 + src_padded_len, params, params_len);\n\n    __block compile_result res = {0, NULL, NULL, 0};\n    void* service = mtl_service;\n    // MTLCodeGenServiceBuildRequest expects a block (Apple's C extension).\n    // We use a stack block here to mirror tinygrad's callback behavior.\n    mtl_build(service, NULL, REQUEST_TYPE_COMPILE, request, request_len,\n              ^(void* blockptr, int32_t error, void* dataPtr, size_t dataLen,\n                const char* errorMessage) {\n                (void)blockptr;\n                if (error == 0 && dataPtr != NULL && dataLen > 0) {\n                  res.data = (uint8_t*)malloc(dataLen);\n                  if (res.data != NULL) {\n                    memcpy(res.data, dataPtr, dataLen);\n                    res.len = dataLen;\n                  }\n                } else {\n                  res.error = error != 0 ? (int)error : -1;\n                  if (errorMessage != NULL)\n                    res.error_msg = strdup(errorMessage);\n                }\n              });\n    free(request);\n\n    if (res.error != 0 || res.data == NULL) {\n      char buf[256];\n      const char* msg =\n          res.error_msg != NULL ? 
res.error_msg : \"Metal compiler failed\";\n      snprintf(buf, sizeof(buf), \"%s\", msg);\n      free(res.error_msg);\n      free(res.data);\n      caml_failwith(buf);\n    }\n\n    if (res.len < 16) {\n      free(res.data);\n      caml_failwith(\"Invalid Metal compiler output\");\n    }\n    // The compiler reply includes a header + warnings before the MTLB blob.\n    uint32_t header_size = 0;\n    uint32_t warning_size = 0;\n    memcpy(&header_size, res.data + 8, 4);\n    memcpy(&warning_size, res.data + 12, 4);\n    size_t offset = (size_t)header_size + (size_t)warning_size;\n    if (offset > res.len) {\n      free(res.data);\n      caml_failwith(\"Invalid Metal compiler output\");\n    }\n    uint8_t* mtlb = res.data + offset;\n    size_t mtlb_len = res.len - offset;\n    if (mtlb_len < 8 || memcmp(mtlb, \"MTLB\", 4) != 0 ||\n        memcmp(mtlb + mtlb_len - 4, \"ENDT\", 4) != 0) {\n      free(res.data);\n      caml_failwith(\"Invalid Metal library output\");\n    }\n\n    v_bytes = caml_alloc_string(mtlb_len);\n    memcpy((char*)String_val(v_bytes), mtlb, mtlb_len);\n    free(res.data);\n\n    v_some = caml_alloc(1, 0);\n    Store_field(v_some, 0, v_bytes);\n    CAMLreturn(v_some);\n  }\n}\n"
  },
  {
    "path": "packages/tolk/lib/runtime/support/elf.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\nmodule Image = struct\n  type t = { mutable data : Bytes.t; mutable len : int }\n\n  let create len = { data = Bytes.make len '\\000'; len }\n\n  let ensure t size =\n    let capacity = Bytes.length t.data in\n    if size <= capacity then ()\n    else\n      let rec grow cap = if cap >= size then cap else grow (cap * 2) in\n      let next_cap = if capacity = 0 then size else grow capacity in\n      let next = Bytes.make next_cap '\\000' in\n      Bytes.blit t.data 0 next 0 t.len;\n      t.data <- next\n\n  let extend_zero t len =\n    let needed = t.len + len in\n    ensure t needed;\n    t.len <- needed\n\n  let set_bytes t off src =\n    let src_len = Bytes.length src in\n    let needed = off + src_len in\n    ensure t needed;\n    Bytes.blit src 0 t.data off src_len;\n    if needed > t.len then t.len <- needed\n\n  let append_bytes t src =\n    let off = t.len in\n    set_bytes t off src;\n    off\n\n  let align t alignment =\n    let rem = t.len mod alignment in\n    if rem <> 0 then extend_zero t (alignment - rem)\n\n  let length t = t.len\n\n  let set_zero t off len =\n    let needed = off + len in\n    ensure t needed;\n    Bytes.fill t.data off len '\\000';\n    if needed > t.len then t.len <- needed\n\n  let to_bytes t = Bytes.sub t.data 0 t.len\nend\n\ntype section_header = {\n  sh_name : int;\n  sh_type : int;\n  sh_flags : Int64.t;\n  mutable sh_addr : int;\n  sh_offset : int;\n  sh_size : int;\n  sh_link : int;\n  sh_info : int;\n  sh_addralign : int;\n  sh_entsize : int;\n}\n\ntype raw_section = { name : string; header : section_header; content : Bytes.t }\ntype section = { name : string; addr : int; size : 
int; content : Bytes.t }\ntype symbol = { name : string; shndx : int; value : int }\ntype reloc = { offset : int; symbol : symbol; r_type : int; addend : int }\n\ntype t = {\n  image : Bytes.t;\n  sections : section array;\n  symbols : symbol array;\n  relocs : reloc list;\n}\n\nlet array_find_opt f a =\n  let len = Array.length a in\n  let rec aux i =\n    if i >= len then None else if f a.(i) then Some a.(i) else aux (i + 1)\n  in\n  aux 0\n\nlet sht_null = 0\nlet sht_symtab = 2\nlet sht_rela = 4\nlet sht_nobits = 8\nlet sht_rel = 9\nlet shf_alloc = 0x2L\nlet image t = t.image\nlet sections t = t.sections\nlet relocs t = t.relocs\nlet u8 bytes off = Char.code (Bytes.get bytes off)\n\nlet u16 bytes off =\n  let b0 = u8 bytes off in\n  let b1 = u8 bytes (off + 1) in\n  b0 lor (b1 lsl 8)\n\nlet u32 bytes off =\n  let b0 = u8 bytes off in\n  let b1 = u8 bytes (off + 1) in\n  let b2 = u8 bytes (off + 2) in\n  let b3 = u8 bytes (off + 3) in\n  b0 lor (b1 lsl 8) lor (b2 lsl 16) lor (b3 lsl 24)\n\nlet u64 bytes off =\n  let open Int64 in\n  let lo = of_int (u32 bytes off) in\n  let hi = of_int (u32 bytes (off + 4)) in\n  logor lo (shift_left hi 32)\n\nlet strtab_get bytes off =\n  let rec find_end idx =\n    if idx >= Bytes.length bytes then idx\n    else if Bytes.get bytes idx = '\\000' then idx\n    else find_end (idx + 1)\n  in\n  let last = find_end off in\n  Bytes.sub_string bytes off (last - off)\n\nlet read_headers obj =\n  if Bytes.length obj < 64 then invalid_arg \"invalid ELF\";\n  if\n    Bytes.get obj 0 <> '\\x7f'\n    || Bytes.get obj 1 <> 'E'\n    || Bytes.get obj 2 <> 'L'\n    || Bytes.get obj 3 <> 'F'\n  then invalid_arg \"invalid ELF\";\n  let class_ = u8 obj 4 in\n  let data = u8 obj 5 in\n  if class_ <> 2 || data <> 1 then invalid_arg \"unsupported ELF format\";\n  let e_type = u16 obj 16 in\n  if e_type <> 1 then invalid_arg \"unsupported ELF type\";\n  let e_shoff = Int64.to_int (u64 obj 40) in\n  let e_shentsize = u16 obj 58 in\n  let e_shnum = u16 
obj 60 in\n  let e_shstrndx = u16 obj 62 in\n  let headers =\n    Array.init e_shnum (fun i ->\n        let off = e_shoff + (i * e_shentsize) in\n        let sh_name = u32 obj off in\n        let sh_type = u32 obj (off + 4) in\n        let sh_flags = u64 obj (off + 8) in\n        let sh_addr = Int64.to_int (u64 obj (off + 16)) in\n        let sh_offset = Int64.to_int (u64 obj (off + 24)) in\n        let sh_size = Int64.to_int (u64 obj (off + 32)) in\n        let sh_link = u32 obj (off + 40) in\n        let sh_info = u32 obj (off + 44) in\n        let sh_addralign = Int64.to_int (u64 obj (off + 48)) in\n        let sh_entsize = Int64.to_int (u64 obj (off + 56)) in\n        {\n          sh_name;\n          sh_type;\n          sh_flags;\n          sh_addr;\n          sh_offset;\n          sh_size;\n          sh_link;\n          sh_info;\n          sh_addralign;\n          sh_entsize;\n        })\n  in\n  let sh_strtab =\n    let hdr = headers.(e_shstrndx) in\n    Bytes.sub obj hdr.sh_offset hdr.sh_size\n  in\n  Array.map\n    (fun header ->\n      let name = strtab_get sh_strtab header.sh_name in\n      let content =\n        if header.sh_type = sht_nobits then Bytes.create 0\n        else Bytes.sub obj header.sh_offset header.sh_size\n      in\n      { name; header; content })\n    headers\n\nlet is_alloc_section section =\n  Int64.logand section.header.sh_flags shf_alloc <> 0L\n\nlet build_image ?(force_section_align = 1) sections =\n  let max_fixed =\n    Array.fold_left\n      (fun acc s ->\n        if is_alloc_section s && s.header.sh_addr <> 0 then\n          max acc (s.header.sh_addr + s.header.sh_size)\n        else acc)\n      0 sections\n  in\n  let image = Image.create max_fixed in\n  Array.iter\n    (fun s ->\n      if not (is_alloc_section s) then ()\n      else if s.header.sh_addr <> 0 then\n        if s.header.sh_type = sht_nobits then\n          Image.set_zero image s.header.sh_addr s.header.sh_size\n        else Image.set_bytes image s.header.sh_addr 
s.content\n      else begin\n        let align = max force_section_align (max s.header.sh_addralign 1) in\n        Image.align image align;\n        s.header.sh_addr <- Image.length image;\n        if s.header.sh_type = sht_nobits then\n          Image.extend_zero image s.header.sh_size\n        else ignore (Image.append_bytes image s.content)\n      end)\n    sections;\n  image\n\nlet symtab sections =\n  array_find_opt (fun s -> s.header.sh_type = sht_symtab) sections\n\nlet read_symbols sections =\n  match symtab sections with\n  | None -> [||]\n  | Some sym_sec ->\n      let strtab = sections.(sym_sec.header.sh_link).content in\n      let entsize = max sym_sec.header.sh_entsize 24 in\n      let count = sym_sec.header.sh_size / entsize in\n      Array.init count (fun i ->\n          let off = i * entsize in\n          let st_name = u32 sym_sec.content off in\n          let st_shndx = u16 sym_sec.content (off + 6) in\n          let st_value = Int64.to_int (u64 sym_sec.content (off + 8)) in\n          let name = strtab_get strtab st_name in\n          { name; shndx = st_shndx; value = st_value })\n\nlet read_relocs sections symbols =\n  let acc = ref [] in\n  Array.iter\n    (fun rel_sec ->\n      if rel_sec.header.sh_type <> sht_rel && rel_sec.header.sh_type <> sht_rela\n      then ()\n      else\n        let target : raw_section = sections.(rel_sec.header.sh_info) in\n        if not (String.equal target.name \".eh_frame\") then begin\n          let entsize =\n            if rel_sec.header.sh_entsize <> 0 then rel_sec.header.sh_entsize\n            else if rel_sec.header.sh_type = sht_rel then 16\n            else 24\n          in\n          let count = rel_sec.header.sh_size / entsize in\n          for i = 0 to count - 1 do\n            let off = i * entsize in\n            let r_offset = Int64.to_int (u64 rel_sec.content off) in\n            let r_info = u64 rel_sec.content (off + 8) in\n            let addend =\n              if rel_sec.header.sh_type = 
sht_rela then\n                Int64.to_int (u64 rel_sec.content (off + 16))\n              else 0\n            in\n            let sym_idx = Int64.(to_int (shift_right_logical r_info 32)) in\n            if sym_idx < 0 || sym_idx >= Array.length symbols then\n              invalid_arg \"invalid relocation symbol\";\n            let symbol = symbols.(sym_idx) in\n            let r_type = Int64.(to_int (logand r_info 0xFFFFFFFFL)) in\n            acc :=\n              {\n                offset = target.header.sh_addr + r_offset;\n                symbol;\n                r_type;\n                addend;\n              }\n              :: !acc\n          done\n        end)\n    sections;\n  List.rev !acc\n\nlet public_section raw =\n  let content =\n    if raw.header.sh_type = sht_nobits then Bytes.make raw.header.sh_size '\\000'\n    else raw.content\n  in\n  {\n    name = raw.name;\n    addr = raw.header.sh_addr;\n    size = raw.header.sh_size;\n    content;\n  }\n\nlet load ?(force_section_align = 1) obj =\n  let sections = read_headers obj in\n  let image = build_image ~force_section_align sections in\n  let symbols = read_symbols sections in\n  let relocs = read_relocs sections symbols in\n  {\n    image = Image.to_bytes image;\n    sections = Array.map public_section sections;\n    symbols;\n    relocs;\n  }\n\nlet find_section (t : t) name =\n  array_find_opt (fun (s : section) -> s.name = name) t.sections\n\nlet find_symbol_offset t name =\n  match array_find_opt (fun (s : symbol) -> s.name = name) t.symbols with\n  | None -> invalid_arg (\"missing symbol: \" ^ name)\n  | Some sym ->\n      if sym.shndx = sht_null then invalid_arg (\"symbol is undefined: \" ^ name)\n      else\n        let section = t.sections.(sym.shndx) in\n        section.addr + sym.value\n"
  },
  {
    "path": "packages/tolk/lib/runtime/support/elf.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\n(** Relocatable ELF object loading.\n\n    Parses 64-bit little-endian ELF relocatable objects ([ET_REL]) and lays out\n    their allocatable sections into a contiguous flat image. Section and\n    relocation metadata is preserved for backend-specific loaders, but no\n    machine-specific relocations are applied. *)\n\n(** {1:types Types} *)\n\ntype section = private {\n  name : string;\n      (** The ELF section name (e.g. [\".text\"], [\".data\"], [\".bss\"]). *)\n  addr : int;  (** Byte offset of the section within the flat {!image}. *)\n  size : int;  (** Size of the section in bytes. *)\n  content : Bytes.t;\n      (** Section contents. For [SHT_NOBITS] sections (e.g. [.bss]), [content]\n          is a zero-filled buffer of length {!size}. *)\n}\n(** The type for sections after image layout. *)\n\ntype symbol = private {\n  name : string;  (** The symbol name from the string table. *)\n  shndx : int;\n      (** Section header index the symbol belongs to. [0] for undefined symbols.\n      *)\n  value : int;\n      (** Symbol value: byte offset from the start of the symbol's section. *)\n}\n(** The type for symbols from the object's symbol table. *)\n\ntype reloc = private {\n  offset : int;\n      (** Absolute byte offset within the flat {!image} where the relocation\n          applies. *)\n  symbol : symbol;  (** The referenced {!type-symbol}. *)\n  r_type : int;\n      (** Machine-specific relocation type (e.g. [R_AARCH64_CALL26],\n          [R_X86_64_PC32]). *)\n  addend : int;  (** Relocation addend. [0] for [SHT_REL] entries. *)\n}\n(** The type for relocations anchored at absolute image offsets. 
*)\n\ntype t\n(** The type for a laid-out relocatable ELF object. Holds the flat image,\n    resolved section addresses, symbols, and pending relocations. *)\n\n(** {1:loading Loading} *)\n\nval load : ?force_section_align:int -> Bytes.t -> t\n(** [load ?force_section_align obj] parses ELF relocatable object [obj] and lays\n    out its allocatable sections into a flat image.\n\n    Sections with a fixed address ([sh_addr <> 0]) are placed first. Remaining\n    allocatable sections are appended sequentially, each aligned to the maximum\n    of the ELF section alignment and [force_section_align] (defaults to [1]).\n\n    Raises [Invalid_argument] if [obj] is not a valid 64-bit little-endian ELF\n    relocatable object. *)\n\n(** {1:accessors Accessors} *)\n\nval image : t -> Bytes.t\n(** [image t] is the flat image built from allocatable sections. *)\n\nval sections : t -> section array\n(** [sections t] is all object sections in section-header order, with\n    {!field-addr} set to their final image offsets. *)\n\nval relocs : t -> reloc list\n(** [relocs t] is the list of relocations with offsets resolved to absolute\n    image positions. *)\n\n(** {1:lookup Lookup} *)\n\nval find_section : t -> string -> section option\n(** [find_section t name] is the section named [name], if any. *)\n\nval find_symbol_offset : t -> string -> int\n(** [find_symbol_offset t name] is the absolute byte offset in {!image} of the\n    defined symbol [name].\n\n    Raises [Invalid_argument] if no symbol named [name] exists or if the symbol\n    is undefined. *)\n"
  },
  {
    "path": "packages/tolk/lib/runtime/support/tlsf.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\n(* Two-Level Segregated Fit allocator.\n\n   Maintains two levels of free-list buckets for O(1) best-fit allocation:\n     - Level 1 is the most significant bit of the block size.\n     - Level 2 subdivides each L1 range into [2^l2_cnt] entries.\n\n   Allocation searches for the smallest block that fits, splitting if\n   oversized. Deallocation merges the freed block with its neighbours. *)\n\nlet round_up n align = (n + align - 1) / align * align\n\nlet bit_length n =\n  let rec go acc n = if n = 0 then acc else go (acc + 1) (n lsr 1) in\n  go 0 n\n\ntype block = {\n  size : int;\n  next : int option;\n  prev : int option;\n  is_free : bool;\n}\n\ntype t = {\n  base : int;\n  block_size : int;\n  l2_cnt : int;\n  storage : (int, int list) Hashtbl.t array;\n  lv1_entries : int array;\n  blocks : (int, block) Hashtbl.t;\n}\n\nlet lv1 size = bit_length size\n\nlet lv2 t size =\n  let bl = bit_length size in\n  (size - (1 lsl (bl - 1))) / (1 lsl (max 0 (bl - t.l2_cnt)))\n\nlet insert_block t start size ?prev () =\n  let prev = match prev with\n    | Some p -> p\n    | None -> (Hashtbl.find t.blocks start).prev in\n  let l1 = lv1 size and l2 = lv2 t size in\n  let cur = match Hashtbl.find_opt t.storage.(l1) l2 with\n    | Some l -> l | None -> [] in\n  Hashtbl.replace t.storage.(l1) l2 (start :: cur);\n  t.lv1_entries.(l1) <- t.lv1_entries.(l1) + 1;\n  Hashtbl.replace t.blocks start\n    { size; next = Some (start + size); prev; is_free = true }\n\nlet remove_block t start size ?prev () =\n  let prev = match prev with\n    | Some p -> p\n    | None -> (Hashtbl.find t.blocks start).prev in\n  let l1 = lv1 size and l2 = lv2 t size 
in\n  let cur = match Hashtbl.find_opt t.storage.(l1) l2 with\n    | Some l -> l | None -> [] in\n  Hashtbl.replace t.storage.(l1) l2\n    (List.filter (fun s -> s <> start) cur);\n  t.lv1_entries.(l1) <- t.lv1_entries.(l1) - 1;\n  Hashtbl.replace t.blocks start\n    { size; next = Some (start + size); prev; is_free = false }\n\nlet split_block t start size new_size =\n  let blk = Hashtbl.find t.blocks start in\n  assert blk.is_free;\n  let nxt = blk.next in\n  remove_block t start size ();\n  insert_block t start new_size ();\n  insert_block t (start + new_size) (size - new_size) ~prev:(Some start) ();\n  (match nxt with\n   | Some n when Hashtbl.mem t.blocks n ->\n       let b = Hashtbl.find t.blocks n in\n       Hashtbl.replace t.blocks n\n         { b with prev = Some (start + new_size) }\n   | _ -> ())\n\nlet merge_right t start =\n  let blk = Hashtbl.find t.blocks start in\n  assert blk.is_free;\n  let size = ref blk.size in\n  let nxt = ref blk.next in\n  let continue = ref true in\n  while !continue do\n    match !nxt with\n    | Some n when Hashtbl.mem t.blocks n ->\n        let b = Hashtbl.find t.blocks n in\n        if not b.is_free then continue := false\n        else begin\n          remove_block t start !size ();\n          remove_block t n b.size ();\n          size := !size + b.size;\n          insert_block t start !size ();\n          assert ((Hashtbl.find t.blocks start).next = b.next);\n          nxt := (Hashtbl.find t.blocks n).next;\n          Hashtbl.remove t.blocks n\n        end\n    | _ -> continue := false\n  done;\n  (match !nxt with\n   | Some n when Hashtbl.mem t.blocks n ->\n       let b = Hashtbl.find t.blocks n in\n       Hashtbl.replace t.blocks n { b with prev = Some start }\n   | _ -> ())\n\nlet merge_block t start =\n  let start = ref start in\n  let continue = ref true in\n  while !continue do\n    match (Hashtbl.find t.blocks !start).prev with\n    | Some x when (Hashtbl.find t.blocks x).is_free -> start := x\n    | _ -> 
continue := false\n  done;\n  merge_right t !start\n\nlet create ~size ?(base = 0) ?(block_size = 16) ?(lv2_cnt = 16) () =\n  let l2_cnt = bit_length lv2_cnt in\n  let n_levels = bit_length size + 1 in\n  let storage = Array.init n_levels (fun _ -> Hashtbl.create 4) in\n  let lv1_entries = Array.make n_levels 0 in\n  let blocks = Hashtbl.create 64 in\n  let t = { base; block_size; l2_cnt; storage; lv1_entries; blocks } in\n  Hashtbl.replace blocks 0\n    { size; next = None; prev = None; is_free = true };\n  if size > 0 then insert_block t 0 size ();\n  t\n\nlet alloc t req_size ?(align = 1) () =\n  let req_size = max t.block_size req_size in\n  let size = max t.block_size (req_size + align - 1) in\n  (* Round up to the next bucket boundary so any entry there fits. *)\n  let size = round_up size (1 lsl (bit_length size - t.l2_cnt)) in\n  let n_levels = Array.length t.storage in\n  let result = ref (-1) in\n  let l1 = ref (lv1 size) in\n  while !l1 < n_levels && !result = -1 do\n    if t.lv1_entries.(!l1) <> 0 then begin\n      let l2_start =\n        if !l1 = bit_length size then lv2 t size else 0 in\n      let l2_end = 1 lsl t.l2_cnt in\n      let l2 = ref l2_start in\n      while !l2 < l2_end && !result = -1 do\n        let entries =\n          match Hashtbl.find_opt t.storage.(!l1) !l2 with\n          | Some l -> l | None -> [] in\n        if entries <> [] then begin\n          let start = ref (List.hd entries) in\n          let nsize = ref (Hashtbl.find t.blocks !start).size in\n          assert (!nsize >= size);\n          (* Alignment: split off a prefix if the start isn't aligned. *)\n          let new_start = round_up !start align in\n          if new_start <> !start then begin\n            split_block t !start !nsize (new_start - !start);\n            start := new_start;\n            nsize := (Hashtbl.find t.blocks new_start).size\n          end;\n          (* Split off the tail if the block is larger than needed. 
*)\n          if !nsize > req_size then\n            split_block t !start !nsize req_size;\n          remove_block t !start req_size ();\n          result := !start + t.base\n        end;\n        incr l2\n      done\n    end;\n    incr l1\n  done;\n  if !result = -1 then raise Out_of_memory;\n  !result\n\nlet free t start =\n  let s = start - t.base in\n  let blk = Hashtbl.find t.blocks s in\n  insert_block t s blk.size ();\n  merge_block t s\n"
  },
  {
    "path": "packages/tolk/lib/runtime/support/tlsf.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\n(** Two-Level Segregated Fit allocator.\n\n    Manages a contiguous address range with O(1) best-fit allocation and\n    O(1) deallocation with coalescing. Free blocks are indexed by two\n    levels of buckets:\n    {ul\n    {- Level 1 is the most significant bit of the block size.}\n    {- Level 2 subdivides each L1 range into [2{^l2_cnt}] entries.}}\n\n    Allocation finds the smallest free block that fits, splitting the\n    remainder. Deallocation merges the freed block with its neighbours. *)\n\n(** {1:types Types} *)\n\ntype t\n(** The type for TLSF allocators. Mutable. *)\n\n(** {1:constructors Constructors} *)\n\nval create :\n  size:int ->\n  ?base:int ->\n  ?block_size:int ->\n  ?lv2_cnt:int ->\n  unit ->\n  t\n(** [create ~size ?base ?block_size ?lv2_cnt ()] is a TLSF allocator\n    managing [size] bytes starting at virtual address [base].\n\n    [base] defaults to [0]. [block_size] is the minimum allocation\n    granularity and defaults to [16]. [lv2_cnt] is the number of\n    level-2 subdivisions per level-1 bucket and defaults to [16]. *)\n\n(** {1:operations Operations} *)\n\nval alloc : t -> int -> ?align:int -> unit -> int\n(** [alloc t size ?align ()] is the start address of a newly allocated\n    region of [size] bytes. The returned address is a multiple of [align].\n\n    [align] defaults to [1]. The actual allocation is at least\n    [block_size] bytes.\n\n    Raises [Out_of_memory] if no free block can satisfy the request. *)\n\nval free : t -> int -> unit\n(** [free t addr] returns the block at [addr] to the free pool and\n    merges it with any adjacent free blocks. 
[addr] must have been\n    previously returned by {!alloc} on the same allocator. *)\n"
  },
  {
    "path": "packages/tolk/lib/schedule/allreduce.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\n(* Multi-device collective reduction.\n\n   Implements naive, ring, and all-to-all allreduce strategies for reducing\n   buffers across multiple devices. *)\n\nopen Tolk_ir\n\nmodule T = Tensor\n\n(* Environment *)\n\nlet ring_var = Helpers.Context_var.int ~key:\"RING\" ~default:0\nlet all2all_var = Helpers.Context_var.int ~key:\"ALL2ALL\" ~default:0\n\nlet ring_allreduce_threshold =\n  Helpers.Context_var.int ~key:\"RING_ALLREDUCE_THRESHOLD\" ~default:256_000\n\n(* Shape encoding\n\n   Shapes and bounds are tensor nodes: a single dim is a scalar const,\n   multiple dims become a vectorize of scalar consts. *)\n\nlet dim d = T.const (Const.int Dtype.Val.index d) Dtype.index\n\nlet emit_shape = function\n  | [d] -> dim d\n  | dims -> T.vectorize ~srcs:(List.map dim dims)\n\nlet emit_pairs pairs =\n  (emit_shape (List.map fst pairs), emit_shape (List.map snd pairs))\n\n(* Int-list wrappers over tensor-node shape/bounds APIs. 
*)\n\nlet reshape src dims = T.reshape ~src ~shape:(emit_shape dims)\n\nlet shrink src bounds =\n  let before, after = emit_pairs bounds in\n  T.shrink ~src ~before ~after\n\nlet pad src padding =\n  let before, after = emit_pairs padding in\n  T.pad ~src ~before ~after\n\nlet copy_to_device src dev = T.copy ~src ~device:(T.device (Single dev)) ()\n\n(* Reduction *)\n\nlet reduce op lhs rhs = T.binary ~op:(op :> Op.binary) ~lhs ~rhs\n\nlet fold_reduce op = function\n  | [] -> failwith \"fold_reduce: empty list\"\n  | x :: xs -> List.fold_left (reduce op) x xs\n\n(* handle_allreduce *)\n\nlet handle_allreduce buf ~op ~device =\n  let devices = T.compute_devices buf in\n  match devices buf with\n  | Some (Multi devs) ->\n      let devs = Array.of_list devs in\n      let ndev = Array.length devs in\n      let shapes = T.compute_shapes buf in\n      let shape = match shapes buf with\n        | Some s -> s\n        | None -> failwith \"handle_allreduce: buf has no shape\"\n      in\n      let numel = List.fold_left ( * ) 1 shape in\n      let threshold = Helpers.Context_var.get ring_allreduce_threshold in\n      let all2all = Helpers.Context_var.get all2all_var in\n      let ring = Helpers.Context_var.get ring_var in\n      (* Ring allreduce doesn't benefit with <=2 nodes or <256k elements —\n         fall back to naive to save on dispatch and chunking. *)\n      let use_all2all =\n        all2all >= 2 || (ndev > 2 && numel > threshold && all2all >= 1)\n      in\n      let use_ring =\n        not use_all2all\n        && (ring >= 2 || (ndev > 2 && numel > threshold && ring >= 1))\n      in\n      let buf = T.contiguous ~src:buf () in\n      if not use_ring && not use_all2all then\n        (* Naive: copy every shard to the target device and reduce. 
*)\n        let shards = List.init ndev (fun i ->\n          T.copy ~src:(T.mselect ~src:buf ~index:i) ~device ()) in\n        Some (fold_reduce op shards)\n      else begin\n        (* Divide into ndev chunks, aligned to the largest power-of-2 factor\n           (up to 32) that divides numel. Larger chunks go to earlier devices. *)\n        let factor =\n          match List.find_opt (fun f -> numel mod f = 0) [32; 16; 8; 4; 2] with\n          | Some f -> f | None -> 1\n        in\n        let base = numel / factor / ndev in\n        let left = numel / factor mod ndev in\n        let chunks = Array.init ndev (fun i ->\n          (if i < left then base + 1 else base) * factor) in\n        (* Prefix-sum to get (start, end) pairs. *)\n        let bounds =\n          let pos = ref 0 in\n          Array.map (fun sz -> let s = !pos in pos := s + sz; (s, s + sz)) chunks\n        in\n        (* Reduce-scatter: each device ends up with one fully-reduced chunk. *)\n        let reduced_chunks = Array.mapi (fun i (s, e) ->\n          if use_all2all then\n            (* All-to-all: gather chunk [s,e) from every device onto device i. *)\n            let chunks_on_i = List.init ndev (fun j ->\n              let shard = T.mselect ~src:buf ~index:j in\n              copy_to_device (shrink (reshape shard [numel]) [(s, e)]) devs.(i)) in\n            fold_reduce op chunks_on_i\n          else begin\n            (* Ring: walk chunk around the ring, accumulating at each hop. *)\n            let flat = reshape buf [numel] in\n            let chunk = shrink flat [(s, e)] in\n            let reduced = ref (shrink flat [(s, e)]) in\n            for step = 0 to ndev - 2 do\n              let src_idx = (i + step) mod ndev in\n              let dest_idx = (i + step + 1) mod ndev in\n              (* On the first step, reduced is still multi-device (inherits from\n                 buf) and needs mselect. After that it lives on a single device. 
*)\n              let r = if step = 0 then T.mselect ~src:!reduced ~index:src_idx\n                      else !reduced in\n              let cp = copy_to_device r devs.(dest_idx) in\n              let ch = copy_to_device (T.mselect ~src:chunk ~index:dest_idx)\n                         devs.(dest_idx) in\n              reduced := reduce op cp ch\n            done;\n            !reduced\n          end) bounds\n        in\n        (* Allgather: broadcast each reduced chunk to all devices. *)\n        let copied_chunks = Array.mapi (fun i rc ->\n          match T.view device with\n          | Device { device = Single target } ->\n              (* Target is a single device — just copy there. *)\n              copy_to_device rc target\n          | _ when use_all2all ->\n              (* All-to-all: copy to every device and stack. *)\n              T.mstack ~srcs:(List.init ndev (fun j ->\n                copy_to_device rc devs.(j)))\n          | _ ->\n              (* Ring: chain copies around the ring, then reorder. *)\n              let chain = Array.make ndev rc in\n              let current = ref rc in\n              for step = 0 to ndev - 2 do\n                current := copy_to_device !current devs.((i + step) mod ndev);\n                chain.(step + 1) <- !current\n              done;\n              T.mstack ~srcs:(List.init ndev (fun j ->\n                chain.((j - i + 1 + ndev) mod ndev)))) reduced_chunks\n        in\n        (* Reassemble: pad each chunk back to full size and sum. 
*)\n        let padded = List.init ndev (fun i ->\n          let (s, e) = bounds.(i) in\n          pad copied_chunks.(i) [(s, numel - e)]) in\n        Some (reshape (fold_reduce `Add padded) shape)\n      end\n  | _ -> None\n\n(* create_allreduce_function *)\n\nlet create_allreduce_function buf ~op ~device ~dtype ~shape ?output () =\n  let output = match output with\n    | Some o -> o\n    | None ->\n        let size = List.fold_left ( * ) 1 shape in\n        let unique = T.noop ~dtype () in\n        T.contiguous ~src:(reshape (T.buffer ~unique ~device ~size ~dtype) shape) ()\n  in\n  (* Build params mirroring the output and source signatures. *)\n  let to_ = T.param ~slot:0 ~dtype ~shape:(emit_shape shape) ~device () in\n  let buf_shapes = T.compute_shapes buf in\n  let buf_devices = T.compute_devices buf in\n  let src_shape = Option.value ~default:shape (buf_shapes buf) in\n  let src_device = match buf_devices buf with\n    | Some dev -> T.device dev\n    | None -> device\n  in\n  let src = T.param ~slot:1 ~dtype ~shape:(emit_shape src_shape)\n              ~device:src_device () in\n  match handle_allreduce src ~op ~device with\n  | Some result ->\n      let assigned = T.assign ~target:to_ ~value:result () in\n      let sink = T.sink [assigned] in\n      let info : T.call_info =\n        { grad_fxn = None; metadata = []; name = Some \"allreduce\";\n          precompile = true }\n      in\n      let kernel = T.call ~callee:(Ref sink) ~args:[output; T.contiguous ~src:buf ()]\n                     ~info ~dtype in\n      Some (T.after ~src:output ~deps:[kernel])\n  | None -> None\n"
  },
  {
    "path": "packages/tolk/lib/schedule/allreduce.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\n(** Multi-device collective reduction.\n\n    Builds allreduce computation graphs using naive, ring, or all-to-all\n    strategies depending on device count, element count, and the [RING],\n    [ALL2ALL], and [RING_ALLREDUCE_THRESHOLD] context variables. *)\n\n(** {1:encoding Shape encoding} *)\n\nval emit_shape : int list -> Tolk_ir.Tensor.t\n(** [emit_shape dims] encodes [dims] as a tensor shape node. A single\n    dimension becomes a scalar constant; multiple dimensions become a\n    {!Tolk_ir.Tensor.vectorize} of scalar constants. *)\n\nval emit_pairs :\n  (int * int) list -> Tolk_ir.Tensor.t * Tolk_ir.Tensor.t\n(** [emit_pairs pairs] splits [(lo, hi)] int pairs into two shape\n    nodes [(emit_shape los, emit_shape his)]. *)\n\n(** {1:allreduce Allreduce} *)\n\nval handle_allreduce :\n  Tolk_ir.Tensor.t ->\n  op:Tolk_ir.Op.reduce ->\n  device:Tolk_ir.Tensor.t ->\n  Tolk_ir.Tensor.t option\n(** [handle_allreduce buf ~op ~device] builds a reduction graph that\n    combines every shard of [buf] with [op] and places the result\n    on [device].\n\n    Returns [None] if [buf] is not on a multi-device. 
Raises\n    [Failure] if [buf] has no concrete shape.\n\n    The strategy is selected automatically:\n    {ul\n    {- {e Naive} when the device count is [<= 2] or the element\n       count is below [RING_ALLREDUCE_THRESHOLD] (default 256k).}\n    {- {e All-to-all} when [ALL2ALL >= 2], or [ALL2ALL >= 1] and\n       the size exceeds the threshold with [> 2] devices.}\n    {- {e Ring} when [RING >= 2], or [RING >= 1] and the size\n       exceeds the threshold with [> 2] devices.}} *)\n\nval create_allreduce_function :\n  Tolk_ir.Tensor.t ->\n  op:Tolk_ir.Op.reduce ->\n  device:Tolk_ir.Tensor.t ->\n  dtype:Tolk_ir.Dtype.t ->\n  shape:int list ->\n  ?output:Tolk_ir.Tensor.t ->\n  unit ->\n  Tolk_ir.Tensor.t option\n(** [create_allreduce_function buf ~op ~device ~dtype ~shape ()]\n    wraps {!handle_allreduce} into a precompiled [CALL] kernel with\n    parameter and buffer setup.\n\n    [output] defaults to a fresh contiguous buffer of the given\n    [dtype], [shape], and [device].\n\n    Returns [None] if [buf] is not on a multi-device. *)\n"
  },
  {
    "path": "packages/tolk/lib/schedule/indexing.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\n(* Core rangeify algorithm.\n\n   Converts the high-level tensor graph (with movement ops, REDUCE_AXIS, etc.)\n   into an indexed representation with explicit RANGE loops, BUFFERIZE nodes,\n   and INDEX operations.\n\n   The algorithm has three phases:\n\n   1. Build the realize map: decide which nodes need their own buffer\n      (realization boundary).  Realized nodes get fresh ranges and produce\n      BUFFERIZE + INDEX pairs in the final graph.\n\n   2. Backward range propagation (run_rangeify): walk the graph in reverse\n      toposort.  Each node either inherits ranges from its single consumer,\n      merges ranges from multiple consumers, or gets fresh ranges when\n      realized.  Movement ops transform ranges (permute, reshape, etc.)\n      instead of existing as nodes in the output.\n\n   3. Apply rangeify (pm_apply_rangeify): rewrite the graph bottom-up,\n      replacing REDUCE_AXIS with REDUCE, PAD with WHERE, inserting\n      BUFFERIZE/INDEX/END nodes, and removing movement ops. *)\n\nopen Tolk_ir\nmodule T = Tensor\nmodule D = Dtype\nmodule C = Const\nmodule Ak = Axis_kind\n\n(* Ops that never need realization — they produce contiguous output by\n   definition, so their consumers can always index directly into them.\n   tinygrad also includes DEFINE_REG and LOAD, which don't exist in the\n   tensor-level IR (they are kernel-only ops). 
*)\nlet is_always_contiguous = function\n  | T.Contiguous _ | T.After _ | T.Copy _ | T.Buffer _ | T.Buffer_view _\n  | T.Const _ | T.Bind _ | T.Device _ | T.Mselect _ | T.Mstack _ | T.Param _\n  | T.Define_local _ | T.Call _ ->\n      true\n  | _ -> false\n\n(* Helpers *)\n\nlet idx n = T.const (C.int D.Val.index n) D.index\nlet btrue = T.const (C.bool true) D.bool\nlet bfalse = T.const (C.bool false) D.bool\n\nlet select_axes axes xs = List.filteri (fun i _ -> List.mem i axes) xs\n\nlet movement_src = function\n  | T.Reshape { src; _ } | T.Expand { src; _ } | T.Pad { src; _ }\n  | T.Shrink { src; _ } | T.Permute { src; _ } | T.Flip { src; _ } ->\n      Some src\n  | _ -> None\n\nlet is_movement_op v = Option.is_some (movement_src v)\n\n(* Boolean fold: MUL for conjunction, ADD for disjunction — matching\n   tinygrad's .prod() and .sum() on bool UOps. *)\nlet bool_reduce op identity vs =\n  List.fold_left (fun acc v -> T.binary ~op ~lhs:acc ~rhs:v) identity vs\n\nlet prod_valid vs = bool_reduce `Mul btrue vs\nlet sum_valid vs = bool_reduce `Add bfalse vs\n\n(* r >= s  encoded as  NOT(r < s),  matching tinygrad's\n   (self < x).logical_not() → CMPNE(CMPLT(r, s), true). *)\nlet ge r s =\n  T.binary ~op:`Cmpne\n    ~lhs:(T.binary ~op:`Cmplt ~lhs:r ~rhs:(idx s)) ~rhs:btrue\n\n(* Indexing context *)\n\ntype realize_state =\n  | Marked        (* pending realization — set during realize map construction *)\n  | Realized of int list  (* resolved — records which axes were realized *)\n\ntype indexing_context = {\n  realize_map : (int, realize_state) Hashtbl.t;\n  range_map : (int, T.t list * T.t list) Hashtbl.t;\n  mutable range_idx : int;\n}\n\nlet create_context () = {\n  realize_map = Hashtbl.create 256;\n  range_map = Hashtbl.create 256;\n  range_idx = 0;\n}\n\n(* Size-1 dimensions collapse to constant 0.  [size] is concrete — tinygrad\n   accepts [sint] (symbolic or int) but we only handle static shapes here. 
*)\nlet new_range ctx size ?(kind = Ak.Loop) () =\n  if size = 1 then idx 0\n  else begin\n    let axis = ctx.range_idx in\n    ctx.range_idx <- ctx.range_idx + 1;\n    T.range ~size:(idx size) ~axis ~kind ()\n  end\n\n(* Context accessors — keyed by T.tag (unique per hash-consed node). *)\n\nlet realize_get ctx n = Hashtbl.find_opt ctx.realize_map (T.tag n)\nlet realize_set ctx n v = Hashtbl.replace ctx.realize_map (T.tag n) v\nlet realize_del ctx n = Hashtbl.remove ctx.realize_map (T.tag n)\nlet realize_mem ctx n = Hashtbl.mem ctx.realize_map (T.tag n)\n\nlet range_get ctx n = Hashtbl.find_opt ctx.range_map (T.tag n)\nlet range_set ctx n v = Hashtbl.replace ctx.range_map (T.tag n) v\n\n(* Generate realize map *)\n\nlet has_store_dep deps =\n  List.exists (fun d -> match T.view d with T.Store _ -> true | _ -> false) deps\n\n(* Does [n] or any node in its backward slice match one of the\n   non-injective view ops?  (RESHAPE and EXPAND are excluded —\n   tinygrad only checks SHRINK, PERMUTE, FLIP, PAD here.) *)\nlet has_view_op_in_slice n =\n  List.exists (fun x ->\n    match T.view x with\n    | T.Shrink _ | T.Permute _ | T.Flip _ | T.Pad _ -> true\n    | _ -> false)\n    (n :: T.backward_slice n)\n\nlet mark_non_contiguous_src ctx s =\n  if not (is_always_contiguous (T.view (T.base s))) then\n    realize_set ctx s Marked\n\n(* Mirrors tinygrad's pm_generate_realize_map PatternMatcher.\n   All four blocks fire independently per node — a PatternMatcher rule\n   that returns None continues to the next rule rather than short-\n   circuiting. 
*)\nlet generate_realize_map ctx root =\n  let nodes = T.toposort root in\n  List.iter (fun n ->\n    let v = T.view n in\n    (* Rule 1: always realize COPY and CONTIGUOUS *)\n    (match v with\n     | T.Copy _ | T.Contiguous _ -> realize_set ctx n Marked\n     | _ -> ());\n    (* Rule 2: realize AFTER that has a STORE dep *)\n    (match v with\n     | T.After { deps; _ } when has_store_dep deps -> realize_set ctx n Marked\n     | _ -> ());\n    (* Rule 3: realize non-contiguous sources of COPY/MSELECT/MSTACK *)\n    (match v with\n     | T.Copy { src; _ } | T.Mselect { src; _ } ->\n         mark_non_contiguous_src ctx src\n     | T.Mstack { srcs; _ } ->\n         List.iter (mark_non_contiguous_src ctx) srcs\n     | _ -> ());\n    (* Rule 4: conditionally unrealize or re-realize the value in a\n       single-dep Store+After.  Only fires when deps = [Store]. *)\n    (match v with\n     | T.After { deps = [d]; _ } -> begin\n         match T.view d with\n         | T.Store { dst; value } ->\n             (* Unrealize COPY/BUFFER_VIEW when the target buffer IS the\n                output and no view ops distort the destination. *)\n             (match T.view value with\n              | T.Copy _ | T.Buffer_view _\n                when realize_mem ctx value\n                     && not (has_view_op_in_slice dst) ->\n                  realize_del ctx value\n              | _ -> ());\n             (* WAR hazard: dest's base in value's backward slice means\n                the write aliases a read — force a temporary. *)\n             let base = T.base dst in\n             if List.exists (fun x -> x == base)\n                  (value :: T.backward_slice value) then\n               realize_set ctx value Marked\n         | _ -> ()\n       end\n     | _ -> ()))\n    nodes\n\n(* Tensor ↔ Kernel conversion for symbolic simplification.\n   Only index arithmetic nodes are expected — anything else is a bug. 
*)\n\nmodule K = Kernel\n\nlet rec tensor_to_kernel n =\n  match T.view n with\n  | T.Const { value; _ } -> K.const value\n  | T.Range { size; axis; sub; kind; dtype } ->\n      K.range ~size:(tensor_to_kernel size) ~axis ~sub ~kind\n        ~dtype:(D.val_of dtype) ()\n  | T.Binary { op; lhs; rhs; _ } ->\n      K.binary ~op ~lhs:(tensor_to_kernel lhs) ~rhs:(tensor_to_kernel rhs)\n  | T.Unary { op; src; _ } ->\n      K.unary ~op ~src:(tensor_to_kernel src)\n  | T.Ternary { op; a; b; c; _ } ->\n      K.ternary ~op ~a:(tensor_to_kernel a)\n        ~b:(tensor_to_kernel b) ~c:(tensor_to_kernel c)\n  | T.Invalid_index { dtype } ->\n      K.const (C.int (D.val_of (D.scalarize dtype)) 0)\n  | v -> failwith (Format.asprintf \"tensor_to_kernel: unexpected %a\" T.pp_view v)\n\nlet rec kernel_to_tensor k =\n  match K.view k with\n  | K.Const { value; dtype } -> T.const value (D.Val dtype)\n  | K.Range { size; axis; sub; kind; dtype } ->\n      T.range ~size:(kernel_to_tensor size) ~axis ~sub ~kind\n        ~dtype:(D.Val dtype) ()\n  | K.Binary { op; lhs; rhs; _ } ->\n      T.binary ~op ~lhs:(kernel_to_tensor lhs) ~rhs:(kernel_to_tensor rhs)\n  | K.Unary { op; src; _ } ->\n      T.unary ~op ~src:(kernel_to_tensor src)\n  | K.Ternary { op; a; b; c; _ } ->\n      T.ternary ~op ~a:(kernel_to_tensor a)\n        ~b:(kernel_to_tensor b) ~c:(kernel_to_tensor c)\n  | _ -> failwith (Format.asprintf \"kernel_to_tensor: unexpected %a\" K.pp_view k)\n\n(* Round-trip through Kernel IR to apply symbolic simplification. 
*)\nlet simplify_tensor_expr expr =\n  let k = tensor_to_kernel expr in\n  let k = K.graph_rewrite (K.first_match [Symbolic.sym]) k in\n  kernel_to_tensor k\n\n(* Movement ops — reshape *)\n\nlet argsort order =\n  let indexed = List.mapi (fun i v -> (v, i)) order in\n  List.map snd (List.sort (fun (a, _) (b, _) -> compare a b) indexed)\n\n(* Reshape: linearize output dims into a scalar index, decompose into input\n   dims via mod/div, then simplify the resulting expressions.\n\n   A placeholder substitution trick keeps the simplifier from confusing actual\n   range identities with the arithmetic it needs to reduce. *)\nlet apply_reshape in_shape out_shape rngs =\n  let rngs = List.map simplify_tensor_expr rngs in\n  (* Collect all Range nodes and create Placeholder stand-ins *)\n  let all_ranges = T.ranges (T.sink rngs) in\n  let sub_fwd = List.mapi (fun i r ->\n    let size = match T.view r with\n      | T.Range { size; _ } -> size | _ -> idx 1 in\n    (r, T.range ~size ~axis:i ~kind:Ak.Placeholder ())) all_ranges in\n  let sub_rev = List.map (fun (k, v) -> (v, k)) sub_fwd in\n  let rngs = List.map (T.substitute sub_fwd) rngs in\n  (* Linearize: weighted positional sum of output ranges *)\n  let _, terms = List.fold_right (fun (s, r) (stride, ts) ->\n    let t = if stride = 1 then r\n      else T.binary ~op:`Mul ~lhs:(idx stride) ~rhs:r in\n    (stride * s, t :: ts)) (List.combine out_shape rngs) (1, []) in\n  let combined = List.fold_left (fun a t ->\n    T.binary ~op:`Add ~lhs:a ~rhs:t) (idx 0) terms in\n  (* Decompose: peel off input dimensions right-to-left.\n     The ref + rev_map/rev processes in_shape in reverse while the ref\n     accumulates the running quotient; rev_map reverses the result. 
*)\n  let combined = ref combined in\n  let axes = List.rev_map (fun s ->\n    let r = T.binary ~op:`Mod ~lhs:!combined ~rhs:(idx s) in\n    combined := T.binary ~op:`Idiv ~lhs:!combined ~rhs:(idx s);\n    r) (List.rev in_shape) in\n  (* Simplify, then restore actual ranges *)\n  List.map (fun r ->\n    T.substitute sub_rev (simplify_tensor_expr r)) axes\n\n(* Transform ranges through a movement op.  Each case defines how output\n   indices map to input indices — this is the inverse of the movement. *)\nlet apply_movement_op ~shapes v rngs =\n  match v with\n  | T.Shrink _ -> begin\n      match T.extract_marg_pairs v with\n      | Some pairs ->\n          List.map2 (fun r (ss, _) ->\n            if ss = 0 then r\n            else T.binary ~op:`Add ~lhs:r ~rhs:(idx ss))\n            rngs pairs\n      | None -> rngs\n    end\n  | T.Permute { order; _ } ->\n      List.map (fun p -> List.nth rngs p) (argsort order)\n  | T.Flip { src; dims; _ } -> begin\n      match shapes src with\n      | Some in_shape ->\n          List.map2 (fun r (f, s) ->\n            if not f then r\n            else T.binary ~op:`Sub ~lhs:(idx (s - 1)) ~rhs:r)\n            rngs (List.combine dims in_shape)\n      | None -> rngs\n    end\n  | T.Expand { src; shape; _ } -> begin\n      match shapes src, T.extract_int_shape shape with\n      | Some in_shape, Some out_shape ->\n          List.map2 (fun r (in_s, out_s) ->\n            if in_s = out_s then r else idx 0)\n            rngs (List.combine in_shape out_shape)\n      | _ -> rngs\n    end\n  | T.Pad { src; _ } -> begin\n      match shapes src, T.extract_marg_pairs v with\n      | Some in_shape, Some pairs ->\n          (* The where(valid, r - s, invalid) is intentionally built\n             outside the graph_rewrite so that convert_pad_to_where wraps\n             the pad with only the newly added validity condition. *)\n          List.map2 (fun (r, sh) (s, e) ->\n            if s = 0 && e = 0 then r\n            else\n              let valid = 
simplify_tensor_expr\n                (T.binary ~op:`And ~lhs:(ge r s)\n                   ~rhs:(T.binary ~op:`Cmplt ~lhs:r\n                            ~rhs:(idx (sh + s)))) in\n              T.ternary ~op:`Where ~a:valid\n                ~b:(T.binary ~op:`Sub ~lhs:r ~rhs:(idx s))\n                ~c:(T.invalid_index ~dtype:D.index))\n            (List.combine rngs in_shape) pairs\n      | _ -> rngs\n    end\n  | T.Reshape { src; shape; _ } -> begin\n      match shapes src, T.extract_int_shape shape with\n      | Some in_shape, Some out_shape ->\n          apply_reshape in_shape out_shape rngs\n      | _ -> rngs\n    end\n  | _ -> assert false\n\n(* Apply rangeify — graph rewrite rules *)\n\n(* Extract the index value from a possibly-gated range expression.\n   where(valid, index, invalid) → index;  anything else → itself. *)\nlet get_idx r =\n  match T.view r with\n  | T.Ternary { op = `Where; b = value; c = else_; _ } ->\n      (match T.view else_ with T.Invalid_index _ -> value | _ -> r)\n  | _ -> r\n\n(* Extract the validity condition from a possibly-gated range expression.\n   where(valid, _, invalid) → valid;  invalid → false;  else → true. *)\nlet get_valid r =\n  match T.view r with\n  | T.Ternary { op = `Where; a = valid; c = else_; _ } ->\n      (match T.view else_ with T.Invalid_index _ -> valid | _ -> btrue)\n  | T.Invalid_index _ -> bfalse\n  | _ -> btrue\n\n(* Direct buffer sources: can be indexed without realization.\n   Matches PARAM, BUFFER_VIEW, MSTACK, MSELECT, and AFTER nodes whose\n   deps don't include STORE or END (plain scheduling barriers). 
*)\nlet is_direct_buffer = function\n  | T.Param _ | T.Buffer_view _ | T.Mstack _ | T.Mselect _ -> true\n  | T.After { deps; _ } ->\n      not (List.exists (fun d ->\n        match T.view d with T.Store _ | T.End _ -> true | _ -> false) deps)\n  | _ -> false\n\nlet map_device = function\n  | Some (T.Single d) -> Some (K.Device_single d)\n  | Some (T.Multi ds) -> Some (K.Device_multi ds)\n  | None -> None\n\n(* REDUCE_AXIS → REDUCE with explicit range children.\n   Selects the input ranges at the reduce axes. *)\nlet convert_reduce_axis ctx n =\n  match T.view n with\n  | T.Reduce_axis { src; op; axes; dtype } -> begin\n      match range_get ctx n with\n      | Some ((in_rngs, _) as entry) ->\n          let ranges = select_axes axes in_rngs in\n          let ret = T.reduce ~src ~ranges ~op ~dtype in\n          range_set ctx ret entry;\n          Some ret\n      | None -> None\n    end\n  | _ -> None\n\n(* PAD → WHERE(valid, src, 0).\n   Collects validity conditions from each input range and MULs them. *)\nlet convert_pad_to_where ctx n =\n  match range_get ctx n with\n  | None -> None\n  | Some ((in_rngs, _) as entry) ->\n      let valid = prod_valid (List.map get_valid in_rngs) in\n      let src = match T.view n with T.Pad { src; _ } -> src | _ -> assert false in\n      let dtype = match T.dtype n with Some d -> d | None -> D.float32 in\n      let ret = T.ternary ~op:`Where ~a:valid ~b:src\n        ~c:(T.const (C.zero (D.val_of dtype)) dtype) in\n      range_set ctx ret entry;\n      Some ret\n\n(* Strip movement ops — their effect is already captured in the range_map.\n   Also remove when the source is an INDEX (already lowered). 
*)\nlet remove_movement_op ctx n =\n  match movement_src (T.view n) with\n  | Some src ->\n      if Option.is_some (range_get ctx n) then Some src\n      else (match T.view src with T.Index _ -> Some src | _ -> None)\n  | None -> None\n\n(* For each child of [n], insert BUFFERIZE/INDEX/END as needed:\n   - Direct buffer sources get an INDEX with the consumer's input ranges.\n   - Realized non-STORE sources get BUFFERIZE + INDEX.\n   - Realized STORE sources get END (closing ranges).\n   Returns None when no children changed. *)\nlet create_bufferize_and_index ctx ~devices n =\n  match T.view n with\n  | T.Bufferize _ | T.Index _ -> None\n  | _ ->\n      let parent_is_copy = match T.view n with T.Copy _ -> true | _ -> false in\n      let parent_rngs = range_get ctx n in\n      let children = T.children n in\n      let changed = ref false in\n      let new_children = List.map (fun s ->\n        let sv = T.view s in\n        if is_direct_buffer sv then\n          match parent_rngs with\n          | Some (in_rngs, _) ->\n              changed := true;\n              (* Strip pointer → value dtype, matching tinygrad's .dtype.base *)\n              let dtype = match T.dtype s with\n                | Some d -> D.Val (D.val_of d) | None -> D.index in\n              T.index ~ptr:s ~idxs:in_rngs ~dtype ()\n          | None -> s\n        else match realize_get ctx s with\n        | Some (Realized realized_axes) ->\n            changed := true;\n            let out_rngs = match range_get ctx s with\n              | Some (_, out) -> out | None -> [] in\n            let closed = select_axes realized_axes out_rngs in\n            (match sv with\n             | T.Store _ ->\n                 let ranges = List.filter (fun r ->\n                   match T.view r with T.Range _ -> true | _ -> false) closed in\n                 realize_del ctx s;\n                 T.end_ ~value:s ~ranges\n             | _ ->\n                 let removable = not parent_is_copy\n                   && 
not (is_always_contiguous sv) in\n                 let is_local =\n                   List.length out_rngs <> List.length realized_axes in\n                 let addrspace = if is_local then D.Local else D.Global in\n                 let device = map_device (devices s) in\n                 let opts : K.bufferize_opts =\n                   { device; addrspace; removable } in\n                 let src_dtype = match T.dtype s with\n                   | Some d -> d | None -> D.float32 in\n                 let buf = T.bufferize ~src:s ~ranges:closed\n                   ~dtype:src_dtype ~opts in\n                 match parent_rngs with\n                 | Some (in_rngs, _) ->\n                     let idxs = select_axes realized_axes in_rngs in\n                     let idx_dtype = D.Val (D.val_of src_dtype) in\n                     T.index ~ptr:buf ~idxs ~dtype:idx_dtype ()\n                 | None -> buf)\n        | _ -> s) children in\n      if !changed then Some (T.replace n ~children:new_children ())\n      else None\n\n(* Cascading rules matching tinygrad's pm_apply_rangeify PatternMatcher.\n   Rules 1–2 are op-specific; rule 3 (All) matches everything; rule 4\n   matches movement ops.  On None, each falls through to the next. 
*)\nlet apply_rangeify_pass ctx ~devices root =\n  T.graph_rewrite ~name:\"apply rangeify\"\n    ~on_rebuild:(fun ~old_n ~new_n ->\n      if T.tag old_n <> T.tag new_n then begin\n        (match realize_get ctx old_n with\n        | Some v -> realize_set ctx new_n v | None -> ());\n        (match range_get ctx old_n with\n        | Some v -> range_set ctx new_n v | None -> ())\n      end)\n    (fun n ->\n      let specific = match T.view n with\n        | T.Reduce_axis _ -> convert_reduce_axis ctx n\n        | T.Pad _ -> convert_pad_to_where ctx n\n        | _ -> None in\n      match specific with\n      | Some _ -> specific\n      | None ->\n      match create_bufferize_and_index ctx ~devices n with\n      | Some _ as r -> r\n      | None -> remove_movement_op ctx n) root\n\n(* Run rangeify — backward range propagation *)\n\nlet pcontig_var = Helpers.Context_var.int ~key:\"PCONTIG\" ~default:0\n\nlet all_same = function\n  | [] -> true\n  | x :: rest -> List.for_all (fun y -> y == x) rest\n\nlet is_elementwise_or_reduce = function\n  | T.Unary _ | T.Binary _ | T.Ternary _ | T.Cast _ | T.Bitcast _\n  | T.Reduce_axis _ -> true\n  | _ -> false\n\n(* Only called on nodes from T.ranges, which are always Range. *)\nlet range_axis r = match T.view r with\n  | T.Range { axis; _ } -> axis | _ -> assert false\n\n(* Transpose a list of equal-length lists: one per consumer →\n   one per axis. *)\nlet transpose = function\n  | [] -> []\n  | first :: _ as lists ->\n      List.mapi (fun i _ -> List.map (fun l -> List.nth l i) lists) first\n\n(* Check whether ended ranges force additional axes to be realized.\n   Clears ending ranges and returns the (possibly updated) out_rngs. 
*)\nlet check_ending_ranges ctx ~pcontig ~get_ending ~set_ending ~out_shape x out_rngs =\n  if get_ending x = [] then out_rngs\n  else begin\n    let existing = match realize_get ctx x with\n      | Some (Realized axes) -> axes | _ -> [] in\n    let realize_axis = ref existing in\n    List.iteri (fun i r ->\n      if not (List.mem i !realize_axis) then\n        if pcontig <= 1 ||\n           List.exists (fun rr ->\n             List.exists (fun e ->\n               range_axis rr > range_axis e) (get_ending x))\n             (T.ranges r)\n        then realize_axis := !realize_axis @ [i]) out_rngs;\n    set_ending x [];\n    if !realize_axis <> [] then begin\n      realize_set ctx x (Realized !realize_axis);\n      List.mapi (fun i r ->\n        if List.mem i !realize_axis then\n          new_range ctx (List.nth out_shape i) ()\n        else r) out_rngs\n    end else out_rngs\n  end\n\n(* Main backward walk.  For each node (roots-to-leaves) we determine:\n   - out_rngs: one range expression per output axis\n   - rngs: one range expression per input axis (= out_rngs transformed\n     by movement ops, with fresh Reduce ranges for REDUCE_AXIS)\n\n   The pair (rngs, out_rngs) is stored in range_map for use by\n   apply_rangeify_pass. *)\nlet run_rangeify root ~shapes =\n  let ctx = create_context () in\n  generate_realize_map ctx root;\n  let consumers = T.consumer_map root in\n  let toposort = T.toposort ~enter_calls:false root in\n  let ending : (int, T.t list) Hashtbl.t = Hashtbl.create 256 in\n  let get_ending x =\n    Option.value ~default:[] (Hashtbl.find_opt ending (T.tag x)) in\n  let set_ending x v = Hashtbl.replace ending (T.tag x) v in\n  let pcontig = Helpers.Context_var.get pcontig_var in\n\n  List.iter (fun x ->\n    let v = T.view x in\n\n    (* Skip non-rangeable nodes.\n       Lunique is OCaml-specific (tinygrad only skips UNIQUE). 
*)\n    let skip = match v with\n      | T.Device _ | T.Unique _ | T.Lunique _\n      | T.Call _ | T.Linear _\n      | T.Mstack _ | T.Mselect _ -> true\n      | T.After { deps; _ } -> not (has_store_dep deps)\n      | _ ->\n          match T.dtype x with\n          | Some dt -> D.scalar dt = D.Index\n          | None -> false in\n    if skip then () else begin\n\n    (* Propagate ending ranges from consumers *)\n    set_ending x (List.concat_map get_ending (consumers x));\n\n    let out_shape = Option.value ~default:[] (shapes x) in\n\n    (* Input ranges of consumers that already have ranges *)\n    let consumer_rngs = List.filter_map (fun c ->\n      match range_get ctx c with\n      | Some (in_rngs, _) -> Some in_rngs\n      | None -> None) (consumers x) in\n\n    (* --- Determine output ranges --- *)\n    let out_rngs =\n      if realize_mem ctx x then begin\n        (* 1. Realized → fresh ranges, end all, mark all axes *)\n        let out = List.map (fun s -> new_range ctx s ()) out_shape in\n        set_ending x [];\n        assert (realize_get ctx x = Some Marked);\n        realize_set ctx x\n          (Realized (List.init (List.length out_shape) Fun.id));\n        Some out\n      end\n      else match List.length consumer_rngs with\n      | 0 -> None  (* no consumer has ranges → skip *)\n      | 1 -> Some (List.hd consumer_rngs)\n      | _ ->\n          (* 3. Multiple consumers → merge per-axis *)\n          let n = List.length (List.hd consumer_rngs) in\n          if not (List.for_all (fun l -> List.length l = n) consumer_rngs) then begin\n            (* Consumer ranges disagree on rank → realize *)\n            let n_out = List.length out_shape in\n            let out = List.map (fun s -> new_range ctx s ()) out_shape in\n            realize_set ctx x (Realized (List.init n_out Fun.id));\n            Some out\n          end else\n          (* Truncate to min of consumer rank and output rank, matching\n             tinygrad's zip truncation behavior. 
*)\n          let per_axis_full = transpose consumer_rngs in\n          let n_out = List.length out_shape in\n          let per_axis = List.filteri (fun i _ -> i < n_out) per_axis_full in\n          let rngs_valids = List.map (fun axis_rngs ->\n            let local = List.map get_idx axis_rngs in\n            let valids = List.map get_valid axis_rngs in\n            (local, valids)) per_axis in\n          let all_all_same =\n            List.for_all (fun (lr, _) -> all_same lr) rngs_valids in\n          let out = ref [] and realize_axes = ref [] in\n          List.iteri (fun i (local_rngs, valids) ->\n            if all_all_same || (pcontig > 0 && all_same local_rngs) then begin\n              (* Ranges agree — merge validity with OR *)\n              let merged = simplify_tensor_expr\n                (T.ternary ~op:`Where ~a:(sum_valid valids)\n                   ~b:(List.hd local_rngs)\n                   ~c:(T.invalid_index ~dtype:D.index)) in\n              out := merged :: !out\n            end else begin\n              (* Ranges disagree — fresh range, mark axis *)\n              out := new_range ctx (List.nth out_shape i) () :: !out;\n              realize_axes := i :: !realize_axes\n            end) rngs_valids;\n          let realize_axes = List.rev !realize_axes in\n          if realize_axes <> [] then\n            realize_set ctx x (Realized realize_axes);\n          Some (List.rev !out)\n    in\n\n    match out_rngs with\n    | None -> ()\n    | Some out_rngs ->\n\n    (* --- Ending range check --- *)\n    (* Elementwise/reduce ops with ended ranges may need to realize\n       additional axes to prevent stale range references. 
*)\n    let out_rngs =\n      if is_elementwise_or_reduce v then\n        check_ending_ranges ctx ~pcontig ~get_ending ~set_ending\n          ~out_shape x out_rngs\n      else out_rngs\n    in\n\n    (* --- Compute input ranges --- *)\n    let rngs = out_rngs in\n\n    (* Movement ops transform output ranges into input ranges *)\n    let rngs =\n      if is_movement_op v then apply_movement_op ~shapes v rngs\n      else rngs in\n\n    (* EXPAND: track ending ranges for axes that changed\n       (range was replaced by const 0 for broadcasted dims).\n       tinygrad guards this with all(isinstance(y,int) or y.op is not\n       Ops.RANGE for y in x.shape) to skip when EXPAND injects a range\n       via a symbolic shape.  With static int shapes this is always true. *)\n    (match v with\n     | T.Expand _ ->\n         let diff = List.filter_map (fun (ri, ro) ->\n           if ri != ro then Some ro else None)\n           (List.combine rngs out_rngs) in\n         if diff <> [] then\n           set_ending x (get_ending x @ T.ranges (T.sink diff))\n     | _ -> ());\n\n    (* REDUCE_AXIS: create Reduce-kind ranges for the reduction axes *)\n    let rngs = match v with\n      | T.Reduce_axis { axes; src; _ } -> begin\n          match shapes src with\n          | Some src_shape ->\n              List.mapi (fun i (r, s) ->\n                if List.mem i axes\n                then new_range ctx s ~kind:Ak.Reduce ()\n                else r) (List.combine rngs src_shape)\n          | None -> rngs\n        end\n      | _ -> rngs in\n\n    range_set ctx x (rngs, out_rngs)\n    end)\n    (List.rev toposort);\n\n  ctx\n"
  },
  {
    "path": "packages/tolk/lib/schedule/indexing.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\n(** Rangeify: tensor graph to indexed representation.\n\n    Converts the high-level tensor graph (movement ops, REDUCE_AXIS,\n    etc.) into an indexed representation with explicit RANGE loops,\n    BUFFERIZE nodes, and INDEX operations.\n\n    The algorithm runs in three phases:\n\n    {ol\n    {- {b Realize map.}  Decide which nodes need their own buffer\n       (realization boundary).  See {!generate_realize_map}.}\n    {- {b Range propagation.}  Walk the graph root-to-leaf, assigning\n       one range expression per axis to every node.  Realized nodes get\n       fresh ranges; others inherit or merge from consumers.  Movement\n       ops transform ranges instead of persisting as nodes.\n       See {!run_rangeify}.}\n    {- {b Apply.}  Bottom-up graph rewrite: REDUCE_AXIS becomes REDUCE,\n       PAD becomes WHERE, realized sources are wrapped in\n       BUFFERIZE + INDEX or END, and movement ops are removed.\n       See {!apply_rangeify_pass}.}} *)\n\n(** {1:predicates Predicates} *)\n\nval is_always_contiguous : Tolk_ir.Tensor.view -> bool\n(** [is_always_contiguous v] is [true] for ops whose output is\n    contiguous by definition (Contiguous, After, Copy, Buffer,\n    Buffer_view, Const, Bind, Device, Mselect, Mstack, Param,\n    Define_local, Call).  Their consumers can index directly\n    without realization. *)\n\n(** {1:context Indexing context} *)\n\ntype realize_state =\n  | Marked\n      (** Pending realization — set during realize-map construction,\n          before axis resolution. *)\n  | Realized of int list\n      (** Resolved — records which output axes were realized. 
*)\n(** Realization state for a single node. *)\n\ntype indexing_context = {\n  realize_map : (int, realize_state) Hashtbl.t;\n  range_map : (int, Tolk_ir.Tensor.t list * Tolk_ir.Tensor.t list) Hashtbl.t;\n      (** Maps {!Tolk_ir.Tensor.tag} to [(input_ranges, output_ranges)]. *)\n  mutable range_idx : int;\n      (** Monotonic counter for fresh range axis indices. *)\n}\n(** Per-node state populated by {!run_rangeify}.  All maps are keyed\n    by {!Tolk_ir.Tensor.tag}. *)\n\nval create_context : unit -> indexing_context\n(** [create_context ()] is a fresh, empty context. *)\n\nval new_range :\n  indexing_context -> int -> ?kind:Tolk_ir.Axis_kind.t -> unit ->\n  Tolk_ir.Tensor.t\n(** [new_range ctx size ?kind ()] is a fresh RANGE node over\n    \\[[0];[size-1]\\] with axis kind [kind] (default {!Tolk_ir.Axis_kind.Loop}).\n    Returns a constant [0] when [size] is [1]. *)\n\n(** {1:simplify Symbolic simplification} *)\n\nval simplify_tensor_expr : Tolk_ir.Tensor.t -> Tolk_ir.Tensor.t\n(** [simplify_tensor_expr e] round-trips [e] through the Kernel IR,\n    applies {!Tolk_ir.Symbolic.sym}, and converts back.  Only handles\n    index-arithmetic nodes (Const, Range, Binary, Unary, Ternary,\n    Invalid_index). *)\n\n(** {1:movement Movement ops} *)\n\nval apply_movement_op :\n  shapes:(Tolk_ir.Tensor.t -> int list option) ->\n  Tolk_ir.Tensor.view ->\n  Tolk_ir.Tensor.t list ->\n  Tolk_ir.Tensor.t list\n(** [apply_movement_op ~shapes view rngs] transforms [rngs] (output\n    ranges) through a movement op, producing the corresponding input\n    ranges.  Handles Shrink, Permute, Flip, Expand, Pad, and Reshape.\n\n    Raises [Assert_failure] if [view] is not a movement op. 
*)\n\n(** {1:rangeify Rangeify passes} *)\n\nval run_rangeify :\n  Tolk_ir.Tensor.t ->\n  shapes:(Tolk_ir.Tensor.t -> int list option) ->\n  indexing_context\n(** [run_rangeify root ~shapes] builds the realize map, then walks\n    the graph from roots to leaves assigning per-node ranges.\n    Returns a populated {!indexing_context} ready for\n    {!apply_rangeify_pass}. *)\n\nval apply_rangeify_pass :\n  indexing_context ->\n  devices:(Tolk_ir.Tensor.t -> Tolk_ir.Tensor.device option) ->\n  Tolk_ir.Tensor.t ->\n  Tolk_ir.Tensor.t\n(** [apply_rangeify_pass ctx ~devices root] rewrites [root] bottom-up:\n\n    {ul\n    {- REDUCE_AXIS → REDUCE with explicit range children.}\n    {- PAD → WHERE guarded by the input ranges' validity.}\n    {- Realized sources → BUFFERIZE + INDEX (or END for stores).}\n    {- Direct buffer sources (Param, Buffer_view, …) → INDEX.}\n    {- Movement ops → removed (their effect is in the range map).}} *)\n\n(** {1:helpers Range helpers} *)\n\nval get_idx : Tolk_ir.Tensor.t -> Tolk_ir.Tensor.t\n(** [get_idx r] extracts the index value from a possibly-gated range.\n    [where(valid, index, invalid)] yields [index]; anything else\n    yields [r] unchanged. *)\n\nval get_valid : Tolk_ir.Tensor.t -> Tolk_ir.Tensor.t\n(** [get_valid r] extracts the validity condition from a\n    possibly-gated range.  [where(valid, _, invalid)] yields [valid];\n    [invalid] yields [false]; anything else yields [true]. *)\n"
  },
  {
    "path": "packages/tolk/lib/schedule/multi.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\n(* Multi-device sharding transformations.\n\n   Transforms operations on MULTI-wrapped (sharded) buffers into per-shard\n   operations.  Each handler strips the MULTI wrapper, applies the operation\n   to the inner per-shard tensor, and re-wraps the result. *)\n\nopen Tolk_ir\nmodule T = Tensor\n\n(* Helpers *)\n\nlet prod l = List.fold_left ( * ) 1 l\n\nlet index_of x l =\n  let rec loop i = function\n    | [] -> invalid_arg \"index_of: element not found\"\n    | y :: _ when y = x -> i\n    | _ :: rest -> loop (i + 1) rest\n  in\n  loop 0 l\n\nlet ndev_of devices node =\n  match (devices node : T.device option) with\n  | Some (T.Multi ds) -> List.length ds\n  | _ -> 1\n\nlet int_ n = T.const (Const.int Dtype.Val.int32 n) Dtype.int32\n\n(* Build a shape-like vectorized node from scalar tensor expressions. *)\nlet emit_symbolic = function\n  | [d] -> d | ds -> T.vectorize ~srcs:ds\n\n(* Partition [src] along [axis] using a symbolic device index.\n   Each device takes its slice: [dnum*sz .. dnum*sz + sz). 
*)\nlet shard shape ndev src axis =\n  let dim = List.nth shape axis in\n  if dim mod ndev <> 0 then failwith \"multi axis uneven\";\n  let sz = dim / ndev in\n  let dnum = T.define_var ~name:\"_device_num\" ~lo:0 ~hi:(ndev - 1)\n    ~dtype:Dtype.int32 () in\n  let off = T.binary ~op:`Mul ~lhs:dnum ~rhs:(int_ sz) in\n  let before = List.mapi (fun i _ ->\n    if i <> axis then int_ 0 else off) shape in\n  let after = List.mapi (fun i s ->\n    if i <> axis then int_ s\n    else T.binary ~op:`Add ~lhs:off ~rhs:(int_ sz)) shape in\n  T.shrink ~src ~before:(emit_symbolic before) ~after:(emit_symbolic after)\n\n(* Inverse of [shard]: pad each device's shard so it covers the full range,\n   with zeros outside its slice.  Summing across devices reconstructs the\n   full tensor. *)\nlet unshard shape ndev src axis =\n  let bsz = List.nth shape axis in\n  let dnum = T.define_var ~name:\"_device_num\" ~lo:0 ~hi:(ndev - 1)\n    ~dtype:Dtype.int32 () in\n  let off = T.binary ~op:`Mul ~lhs:(int_ bsz) ~rhs:dnum in\n  let before = List.mapi (fun i _ ->\n    if i <> axis then int_ 0 else off) shape in\n  let after = List.mapi (fun i _ ->\n    if i <> axis then int_ 0\n    else T.binary ~op:`Sub ~lhs:(int_ (bsz * (ndev - 1))) ~rhs:off) shape in\n  T.pad ~src ~before:(emit_symbolic before) ~after:(emit_symbolic after)\n\n(* MSELECT/MSTACK rewrite *)\n\n(* Substitute every [_device_num] variable in [node] with constant [i]. *)\nlet subst_device_num node i =\n  let is_device_num v = match T.view v with\n    | Define_var { name; _ } -> name = \"_device_num\" | _ -> false\n  in\n  match List.find_opt is_device_num (T.variables node) with\n  | None -> node\n  | Some dvar ->\n    let dt = Option.value ~default:Dtype.int32 (T.dtype dvar) in\n    T.substitute [(dvar, T.const (Const.int (Dtype.val_of dt) i) dt)] node\n\n(* Decompose a vectorized shape node into its per-axis scalars. 
*)\nlet shape_elems node = match T.view node with\n  | Vectorize { srcs; _ } -> srcs | _ -> [node]\n\n(* Move SHRINK before MSTACK: substitute [_device_num] with each device\n   index and apply the shrink to each MSTACK element individually. *)\nlet mstack_early_shrink ms before after =\n  let bs = shape_elems before and es = shape_elems after in\n  let srcs = match T.view ms with\n    | Mstack { srcs; _ } -> srcs\n    | _ -> failwith \"mstack_early_shrink: expected MSTACK\"\n  in\n  let apply_shrink s i =\n    let bs' = List.map (fun b -> subst_device_num b i) bs in\n    let es' = List.map (fun e -> subst_device_num e i) es in\n    T.shrink ~src:s ~before:(emit_symbolic bs') ~after:(emit_symbolic es')\n  in\n  let new_srcs = List.mapi (fun i x ->\n    match T.view x with\n    | Copy { src; device; _ } ->\n        T.copy ~src:(apply_shrink src i) ~device ()\n    | _ ->\n        T.contiguous ~src:(apply_shrink x i) ()) srcs\n  in\n  T.mstack ~srcs:new_srcs\n\n(* BROADCAST: copy from single to multi-device → per-device copies in MSTACK. *)\nlet broadcast_copy ~devices node =\n  match T.view node with\n  | Copy { src; device; _ } ->\n    (match (devices src : T.device option), T.view device with\n     | Some (T.Single _), Device { device = Multi ds } ->\n         let copies = List.map (fun d ->\n           T.copy ~src ~device:(T.device (Single d)) ()) ds in\n         Some (T.mstack ~srcs:copies)\n     | _ -> None)\n  | _ -> None\n\n(* COPY_TO_ONE: copy from multi-device to single → select shard 0 and copy. *)\nlet copy_to_one ~devices node =\n  match T.view node with\n  | Copy { src; device; _ } ->\n    (match (devices src : T.device option), T.view device with\n     | Some (T.Multi _), Device { device = T.Single _ } ->\n         Some (T.copy ~src:(T.mselect ~src ~index:0) ~device ())\n     | _ -> None)\n  | _ -> None\n\n(* MSELECT(MSTACK) → direct indexing. 
*)\nlet mselect_mstack node =\n  match T.view node with\n  | Mselect { src; index; _ } ->\n    (match T.view src with\n     | Mstack { srcs; _ } -> List.nth_opt srcs index\n     | _ -> None)\n  | _ -> None\n\n(* MSELECT(movement(s)) → movement(MSELECT(s)): push select inside. *)\nlet mselect_before_movement node =\n  match T.view node with\n  | Mselect { src; index; _ } ->\n    let sel inner = T.mselect ~src:inner ~index in\n    (match T.view src with\n     | Reshape { src = inner; shape; _ } ->\n         Some (T.reshape ~src:(sel inner) ~shape)\n     | Expand { src = inner; shape; _ } ->\n         Some (T.expand ~src:(sel inner) ~shape)\n     | Permute { src = inner; order; _ } ->\n         Some (T.permute ~src:(sel inner) ~order)\n     | Flip { src = inner; dims; _ } ->\n         Some (T.flip ~src:(sel inner) ~dims)\n     | Pad { src = inner; before; after; _ } ->\n         Some (T.pad ~src:(sel inner) ~before ~after)\n     | Shrink { src = inner; before; after; _ } ->\n         Some (T.shrink ~src:(sel inner) ~before ~after)\n     | _ -> None)\n  | _ -> None\n\n(* Multi functions *)\n\n(* The expand target shape uses the full multi-device size at the shard\n   axis, but the inner source has the per-shard size — keep it. 
*)\nlet expand_multi ~shapes shape src axis =\n  match T.extract_int_shape shape with\n  | None -> None\n  | Some target ->\n    let adjusted = match shapes src with\n      | Some src_shape ->\n        List.mapi (fun i s ->\n          if i = axis then List.nth src_shape axis else s) target\n      | None -> target\n    in\n    Some (T.multi ~src:(T.expand ~src ~shape:(Allreduce.emit_shape adjusted))\n            ~axis)\n\nlet pad_multi before after src axis =\n  match T.extract_int_shape before, T.extract_int_shape after with\n  | Some bs, Some es ->\n    if List.nth bs axis <> 0 || List.nth es axis <> 0 then\n      failwith \"padding not supported on sharded axis\";\n    Some (T.multi ~src:(T.pad ~src ~before ~after) ~axis)\n  | _ -> None\n\nlet permute_multi order src axis =\n  T.multi ~src:(T.permute ~src ~order) ~axis:(index_of axis order)\n\nlet flip_multi dims src axis =\n  if List.nth dims axis then failwith \"flipping not supported on sharded axis\";\n  T.multi ~src:(T.flip ~src ~dims) ~axis\n\nlet reduce_multi ~devices op axes src axis multi =\n  let reduced = T.reduce_axis ~src ~op ~axes in\n  if List.mem axis axes then\n    (* Shard axis is being reduced — allreduce across devices. *)\n    let dev = match devices multi with\n      | Some d -> T.device d\n      | None -> failwith \"reduce_multi: no device\"\n    in\n    let dt = Option.value ~default:Dtype.void (T.dtype reduced) in\n    T.allreduce ~src:reduced ~device:dev ~op ~dtype:dt\n  else\n    T.multi ~src:reduced ~axis\n\n(* In tinygrad, store_after_multi receives the MULTI-wrapped dest and\n   uses it directly — inner MULTIs are stripped by later rewrite passes.\n   We match that: the caller should pass the MULTI-wrapped dest. 
*)\nlet store_after_multi dest src_inner src_axis =\n  T.multi ~src:(T.after ~src:dest ~deps:[T.store ~dst:dest ~value:src_inner])\n    ~axis:src_axis\n\nlet unwrap_multi x = match T.view x with T.Multi { src; _ } -> src | _ -> x\n\n(* Apply op to inner shard, unwrap any other MULTI sources, re-wrap. *)\nlet passthrough_multi root src axis =\n  let wrap inner = Some (T.multi ~src:inner ~axis) in\n  match T.view root with\n  | Cast { dtype; _ } -> wrap (T.cast ~src ~dtype)\n  | Bitcast { dtype; _ } -> wrap (T.bitcast ~src ~dtype)\n  | Contiguous { ranges; opts; _ } ->\n    wrap (T.contiguous ~src ~ranges:(List.map unwrap_multi ranges) ~opts ())\n  | Detach _ -> wrap (T.detach ~src)\n  | Contiguous_backward _ -> wrap (T.contiguous_backward ~src)\n  | After { deps; _ } ->\n    wrap (T.after ~src ~deps:(List.map unwrap_multi deps))\n  | _ -> None\n\n(* Find the last position in [new_shape] where the cumulative product of\n   all preceding dimensions equals [prior_prod]. Returns [None] when the\n   shard boundary cannot be placed (e.g. dimensions were merged across it). 
*)\nlet find_shard_axis prior_prod new_shape =\n  let acc = ref 1 in\n  let found = ref None in\n  List.iteri (fun i s ->\n    if !acc = prior_prod then found := Some i;\n    acc := !acc * s)\n    new_shape;\n  !found\n\nlet reshape_multi ~shapes ~devices shape src axis multi =\n  match T.extract_int_shape shape, shapes multi with\n  | None, _ | _, None -> None\n  | Some new_shape, Some multi_shape ->\n    let ndev = ndev_of devices multi in\n    if prod multi_shape <> prod new_shape then\n      failwith \"reshape must maintain prod(shape)\";\n    let prior_prod = prod (List.filteri (fun i _ -> i < axis) multi_shape) in\n    match find_shard_axis prior_prod new_shape with\n    | None -> None\n    | Some new_axis ->\n      let adjusted = List.mapi (fun i s ->\n        if i = new_axis then s / ndev else s) new_shape in\n      Some (T.multi ~src:(T.reshape ~src ~shape:(Allreduce.emit_shape adjusted))\n              ~axis:new_axis)\n\nlet shrink_multi ~shapes ~devices before after src axis multi =\n  match T.extract_int_shape before, T.extract_int_shape after,\n        shapes src, shapes multi, devices multi with\n  | Some starts, Some ends, Some src_shape, Some multi_shape, Some dev ->\n    let pairs = List.combine starts ends in\n    let shard_pair = List.nth pairs axis in\n    let shard_dim = List.nth src_shape axis in\n    let full_pair = (0, List.nth multi_shape axis) in\n    let ndev = ndev_of devices multi in\n    let bounds = List.init ndev (fun i ->\n      (i * shard_dim, (i + 1) * shard_dim)) in\n    if shard_pair <> full_pair && not (List.mem shard_pair bounds) then\n      failwith \"shrinking not supported on sharded axis\";\n    let replace_shard p =\n      List.mapi (fun i (s, e) ->\n        if i = axis then (0, shard_dim) else (s, e)) p\n    in\n    if shard_pair <> full_pair then begin\n      (* Shrink targets exactly one partition — select that shard,\n         copy to all devices, drop the MULTI wrapper. 
*)\n      let idx = index_of shard_pair bounds in\n      let bef, aft = Allreduce.emit_pairs (replace_shard pairs) in\n      Some (T.shrink\n              ~src:(T.copy ~src:(T.mselect ~src:multi ~index:idx)\n                      ~device:(T.device dev) ())\n              ~before:bef ~after:aft)\n    end else begin\n      (* Full-axis shrink: adjust to per-shard range, shrink independently. *)\n      let bef, aft = Allreduce.emit_pairs (replace_shard pairs) in\n      Some (T.multi ~src:(T.shrink ~src ~before:bef ~after:aft) ~axis)\n    end\n  | _ -> None\n\n(* Gather a sharded MULTI tensor onto [device]: extract the inner shard,\n   unshard it (symbolic pad per device), then allreduce-sum. *)\nlet copy_multi ~shapes ~devices multi device =\n  match T.view multi with\n  | Multi { src = inner; axis; _ } ->\n    let inner_shape = match shapes inner with\n      | Some sh -> sh\n      | None -> failwith \"copy_multi: unknown inner shape\"\n    in\n    let ndev = ndev_of devices multi in\n    let unsharded = unshard inner_shape ndev inner axis in\n    let dt = Option.value ~default:Dtype.void (T.dtype unsharded) in\n    T.allreduce ~src:unsharded ~device ~op:`Add ~dtype:dt\n  | _ -> failwith \"copy_multi: expected MULTI\"\n\nlet alu_multi ~shapes ~devices root =\n  let srcs = match T.view root with\n    | Unary { src; _ } -> [src]\n    | Binary { lhs; rhs; _ } -> [lhs; rhs]\n    | Ternary { a; b; c; _ } -> [a; b; c]\n    | _ -> []\n  in\n  (* Result shard axis: last axis among MULTI sources. 
*)\n  let axes = List.filter_map (fun s ->\n    match T.view s with Multi { axis; _ } -> Some axis | _ -> None) srcs in\n  match axes with\n  | [] -> None\n  | _ ->\n    let axis = List.nth axes (List.length axes - 1) in\n    let ndev = match List.find_map (fun s ->\n      match devices s with\n      | Some (dev : T.device) -> (match dev with T.Multi ds -> Some (List.length ds) | T.Single _ -> None)\n      | _ -> None) srcs with\n      | Some n -> n\n      | None -> failwith \"alu_multi: no multi device\"\n    in\n    let shape_exn s = match shapes s with\n      | Some sh -> sh\n      | None -> failwith \"alu_multi: unknown shape\"\n    in\n    (* Align each source to [axis]: unwrap matching MULTIs, shard\n       non-sharded sources, gather-then-reshard mismatched ones. *)\n    let aligned = List.map (fun s ->\n      match T.view s with\n      | Multi { src = inner; axis = a; _ } when a = axis -> inner\n      | Multi _ ->\n          let dev = match devices s with\n            | Some d -> T.device d\n            | None -> failwith \"alu_multi: no device\"\n          in\n          shard (shape_exn s) ndev\n            (copy_multi ~shapes ~devices s dev) axis\n      | _ -> shard (shape_exn s) ndev s axis) srcs\n    in\n    let result = match T.view root, aligned with\n      | Unary { op; _ }, [s] -> T.unary ~op ~src:s\n      | Binary { op; _ }, [l; r] -> T.binary ~op ~lhs:l ~rhs:r\n      | Ternary { op; _ }, [a; b; c] -> T.ternary ~op ~a ~b ~c\n      | _ -> failwith \"alu_multi: unexpected\"\n    in\n    Some (T.multi ~src:result ~axis)\n\n(* PARAM: if a PARAM has a MULTI child (indicating it lives on multiple\n   devices), rebuild with per-shard shape and wrap in MULTI. 
*)\nlet param_to_multi ~shapes ~devices node =\n  match T.view node with\n  | Param { slot; dtype; shape; device } ->\n      let axis_opt = List.find_map (fun c ->\n        match T.view c with T.Multi { axis; _ } -> Some axis | _ -> None)\n        (T.children node) in\n      (match axis_opt with\n       | None -> None\n       | Some axis ->\n           let ndev = ndev_of devices node in\n           let shard_shape = match shape with\n             | Some s ->\n                 let s = unwrap_multi s in\n                 (match T.extract_int_shape s with\n                  | Some dims ->\n                      let adjusted = List.mapi (fun i d ->\n                        if i = axis then d / ndev else d) dims in\n                      Some (Allreduce.emit_shape adjusted)\n                  | None -> Some s)\n             | None -> None\n           in\n           let device = Option.map unwrap_multi device in\n           Some (T.multi\n                   ~src:(T.param ~slot ~dtype ?shape:shard_shape ?device ())\n                   ~axis))\n  | _ -> None\n\n(* Don't resolve CALL bodies that are already compiled kernels. *)\nlet should_resolve_call callee (info : T.call_info) =\n  if info.precompile then false\n  else match callee with\n  | T.Ast _ -> false\n  | T.Ref body ->\n      (match T.view body with\n       | Sink { kernel_info = Some _; _ } | Linear _ | Copy _ -> false\n       | _ -> true)\n\n(* Pattern matcher *)\n\nlet is_multi x = match T.view x with T.Multi _ -> true | _ -> false\n\nlet multi_axis x = match T.view x with\n  | T.Multi { axis; _ } -> axis | _ -> assert false\n\nlet rec multi_pm ~shapes ~devices node =\n  match T.view node with\n  (* PARAM with MULTI children → shard shape, wrap in MULTI. *)\n  | Param _ -> param_to_multi ~shapes ~devices node\n\n  (* ALU: align shard axes across sources, apply per-shard. 
*)\n  | (Unary _ | Binary _ | Ternary _)\n    when List.exists is_multi (T.children node) ->\n      alu_multi ~shapes ~devices node\n\n  (* Movement/reduction ops with MULTI source. *)\n  | Reduce_axis { src; op; axes; _ } when is_multi src ->\n      (match T.view src with\n       | Multi { src = inner; axis; _ } ->\n           Some (reduce_multi ~devices op axes inner axis src)\n       | _ -> None)\n\n  | Reshape { src; shape; _ } when is_multi src ->\n      (match T.view src with\n       | Multi { src = inner; axis; _ } ->\n           reshape_multi ~shapes ~devices shape inner axis src\n       | _ -> None)\n\n  | Expand { src; shape; _ } when is_multi src ->\n      (match T.view src with\n       | Multi { src = inner; axis; _ } ->\n           expand_multi ~shapes shape inner axis\n       | _ -> None)\n\n  | Pad { src; before; after; _ } when is_multi src ->\n      (match T.view src with\n       | Multi { src = inner; axis; _ } ->\n           pad_multi before after inner axis\n       | _ -> None)\n\n  | Permute { src; order; _ } when is_multi src ->\n      (match T.view src with\n       | Multi { src = inner; axis; _ } ->\n           Some (permute_multi order inner axis)\n       | _ -> None)\n\n  | Flip { src; dims; _ } when is_multi src ->\n      (match T.view src with\n       | Multi { src = inner; axis; _ } ->\n           Some (flip_multi dims inner axis)\n       | _ -> None)\n\n  (* SHRINK: multi_pm rule (MULTI source) or replace_allreduce (MSTACK). *)\n  | Shrink { src; before; after; _ } ->\n      (match T.view src with\n       | Multi { src = inner; axis; _ } ->\n           shrink_multi ~shapes ~devices before after inner axis src\n       | Mstack _ ->\n           Some (mstack_early_shrink src before after)\n       | _ -> None)\n\n  (* AFTER(MULTI, STORE(MULTI, MULTI)) → store_after_multi;\n     AFTER(MULTI, ...) → passthrough. 
*)\n  | After { src; deps; _ } when is_multi src ->\n      let try_store = match deps with\n        | [dep] ->\n            (match T.view dep with\n             | Store { dst; value } when is_multi dst && is_multi value ->\n                 Some (store_after_multi dst (unwrap_multi value)\n                         (multi_axis value))\n             | _ -> None)\n        | _ -> None\n      in\n      (match try_store with\n       | Some _ as r -> r\n       | None -> passthrough_multi node (unwrap_multi src) (multi_axis src))\n\n  (* COPY(MULTI, device) → gather via unshard + allreduce.\n     COPY(single→multi) → broadcast.  COPY(multi→single) → select shard 0. *)\n  | Copy { src; device; _ } ->\n      if is_multi src then\n        Some (copy_multi ~shapes ~devices src device)\n      else\n        (match broadcast_copy ~devices node with\n         | Some _ as r -> r\n         | None -> copy_to_one ~devices node)\n\n  (* ALLREDUCE(MULTI, device) → unwrap, allreduce inner, re-wrap. *)\n  | Allreduce { src; device; op; _ } when is_multi src ->\n      (match T.view src with\n       | Multi { src = inner; axis; _ } ->\n           let dt = Option.value ~default:Dtype.void (T.dtype inner) in\n           Some (T.multi ~src:(T.allreduce ~src:inner ~device ~op ~dtype:dt)\n                   ~axis)\n       | _ -> None)\n\n  (* CALL: resolve body through multi_pm, then passthrough or void strip.\n     Tinygrad's GETTUPLE/TUPLE rules have no equivalent here — our CALL\n     nodes return typed values directly, not through a TUPLE wrapper. *)\n  | Call { callee; args; info; dtype } ->\n      (* 1. Recursive body resolution (tinygrad's rewrite_into_call). 
*)\n      let resolved = match callee with\n        | Ref body when should_resolve_call callee info ->\n            let rewrite = multi_pm ~shapes ~devices in\n            let new_body = T.graph_rewrite ~name:\"subcall\" rewrite body in\n            let new_args = List.map unwrap_multi args in\n            if is_multi new_body then\n              let axis = multi_axis new_body in\n              Some (T.multi\n                      ~src:(T.call ~callee:(Ref (unwrap_multi new_body))\n                              ~args:new_args ~info ~dtype)\n                      ~axis)\n            else if new_body == body\n                    && List.for_all2 (fun a b -> a == b) new_args args\n            then None\n            else Some (T.call ~callee:(Ref new_body) ~args:new_args\n                         ~info ~dtype)\n        | _ -> None\n      in\n      (match resolved with\n       | Some _ -> resolved\n       | None ->\n           (* 2. Passthrough: callee ref is MULTI. *)\n           (match callee with\n            | Ref r when is_multi r ->\n                let axis = multi_axis r in\n                Some (T.multi\n                        ~src:(T.call ~callee:(Ref (unwrap_multi r))\n                                ~args:(List.map unwrap_multi args) ~info ~dtype)\n                        ~axis)\n            | _ ->\n                (* 3. void CALL: strip MULTI from all sources. *)\n                let all_srcs = match callee with\n                  | Ref r -> r :: args | Ast _ -> args in\n                if dtype = Dtype.void && List.exists is_multi all_srcs then\n                  let callee = match callee with\n                    | Ref r -> T.Ref (unwrap_multi r) | c -> c in\n                  Some (T.call ~callee ~args:(List.map unwrap_multi args)\n                          ~info ~dtype)\n                else None))\n\n  (* Passthrough: CAST, BITCAST, CONTIGUOUS, DETACH, CONTIGUOUS_BACKWARD. 
*)\n  | (Cast { src; _ } | Bitcast { src; _ } | Contiguous { src; _ }\n    | Detach { src; _ } | Contiguous_backward { src; _ })\n    when is_multi src ->\n      passthrough_multi node (unwrap_multi src) (multi_axis src)\n\n  (* STORE: strip MULTI from dst and value. *)\n  | Store { dst; value } when is_multi dst ->\n      Some (T.store ~dst:(unwrap_multi dst) ~value:(unwrap_multi value))\n\n  (* MSELECT: resolve on MSTACK, or push inside movement ops. *)\n  | Mselect _ ->\n      (match mselect_mstack node with\n       | Some _ as r -> r\n       | None -> mselect_before_movement node)\n\n  | _ -> None\n"
  },
  {
    "path": "packages/tolk/lib/schedule/multi.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\n(** Multi-device sharding transformations.\n\n    Rewrites operations on {!Tolk_ir.Tensor.Multi}-wrapped (sharded)\n    tensors into per-shard operations. Each rule strips the MULTI\n    wrapper, applies the operation to the inner per-shard tensor, and\n    re-wraps the result.\n\n    Covers ALU, movement, reduction, copy, allreduce, store, and\n    passthrough ops. CALL bodies are resolved recursively. *)\n\nval multi_pm :\n  shapes:(Tolk_ir.Tensor.t -> int list option) ->\n  devices:(Tolk_ir.Tensor.t -> Tolk_ir.Tensor.device option) ->\n  Tolk_ir.Tensor.t ->\n  Tolk_ir.Tensor.t option\n(** [multi_pm ~shapes ~devices node] rewrites [node] if it involves\n    multi-device sharding.\n\n    [shapes] maps a tensor node to its concrete shape, if known.\n    [devices] maps a tensor node to its device placement, if known.\n\n    Returns [Some node'] when the node is rewritten, [None] when no\n    rule applies. Intended as the rewrite function for\n    {!Tolk_ir.Tensor.graph_rewrite}. *)\n"
  },
  {
    "path": "packages/tolk/lib/schedule/rangeify.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\n(* Schedule pipeline.\n\n   Transforms a tensor-level SINK into a kernel graph with CALL nodes\n   wrapping Kernel.t ASTs. The pipeline:\n\n    1. multi_pm — multi-device rewriting\n    2. fold_moved_after — openpilot AFTER folding (when enabled)\n    3. earliest_rewrites — syntactic sugar, movement ops, canonicalization\n    4. run_rangeify — core range analysis (in Indexing)\n    5. apply_rangeify — bottom-up rewrite with rangeify context\n    6. post-rangeify — buffer folding, const folding, buffer removal\n    7. limit_bufs — insert bufferize when too many input buffers\n    8. add_buffers — BUFFERIZE → STORE + BUFFER\n    9. split_kernels — STORE/END → CALL(kernel SINK)\n   10. WAR deps — write-after-read dependency fixup *)\n\nopen Tolk_ir\nmodule T = Tensor\nmodule K = Kernel\nmodule D = Dtype\nmodule C = Const\n\n(* Context variables *)\n\nlet openpilot_hacks_var =\n  Helpers.Context_var.int ~key:\"OPENPILOT_HACKS\" ~default:0\nlet float16_var = Helpers.Context_var.int ~key:\"FLOAT16\" ~default:0\nlet split_reduceop_var =\n  Helpers.Context_var.int ~key:\"SPLIT_REDUCEOP\" ~default:1\nlet split_threshold_var =\n  Helpers.Context_var.int ~key:\"REDUCEOP_SPLIT_THRESHOLD\" ~default:32768\nlet split_size_var =\n  Helpers.Context_var.int ~key:\"REDUCEOP_SPLIT_SIZE\" ~default:22\nlet max_kernel_buffers_var =\n  Helpers.Context_var.int ~key:\"MAX_KERNEL_BUFFERS\" ~default:0\n\n(* Helpers *)\n\nlet int_ n = T.const (C.int D.Val.index n) D.index\nlet shape_prod = List.fold_left ( * ) 1\nlet dtype_or_void n = match T.dtype n with Some d -> d | None -> D.void\n\n(* Encode an int list as a shape tensor (Vectorize of Consts). 
*)\nlet shape_node dims =\n  match List.map int_ dims with [d] -> d | ds -> T.vectorize ~srcs:ds\n\n(* Follow src through AFTER nodes to the root buffer. *)\nlet rec root_after n =\n  match T.view n with After { src; _ } -> root_after src | _ -> n\n\n(* Extract the concrete size of each range (1 for non-range or symbolic). *)\nlet range_sizes rngs =\n  List.map (fun r ->\n    match T.view r with\n    | Range { size; _ } ->\n        (match T.view size with\n        | Const { value; _ } ->\n            (match C.view value with Int n -> Int64.to_int n | _ -> 1)\n        | _ -> 1)\n    | _ -> 1) rngs\n\n(* Compute a single flat index from multi-dimensional ranges and shape\n   using row-major strides. *)\nlet compute_flat_index rngs shape =\n  let n = List.length shape in\n  if n = 0 then int_ 0\n  else\n    let shape = Array.of_list shape in\n    let strides = Array.make n 1 in\n    for i = n - 2 downto 0 do\n      strides.(i) <- strides.(i + 1) * shape.(i + 1)\n    done;\n    let terms = List.filter_map (fun (i, rng) ->\n      if strides.(i) = 0 then None\n      else if strides.(i) = 1 then Some rng\n      else Some (T.binary ~op:`Mul ~lhs:rng ~rhs:(int_ strides.(i))))\n      (List.mapi (fun i rng -> (i, rng)) rngs) in\n    match terms with\n    | [] -> int_ 0\n    | first :: rest ->\n        List.fold_left (fun acc t ->\n          T.binary ~op:`Add ~lhs:acc ~rhs:t) first rest\n\n(* lower_shaped_wmma: lowers tensor-level Shaped_wmma to kernel-level Wmma\n   with CONTRACT/UNROLL. Blocked on the Tensor IR gaining a Shaped_wmma\n   variant — tinygrad defines SHAPED_WMMA in uop/ops.py but our tensor.mli\n   doesn't have it yet. The kernel IR (Wmma, Contract, Gep) is ready. 
*)\n\nlet is_elementwise = function\n  | T.Unary _ | T.Binary _ | T.Ternary _ | T.Cast _ | T.Bitcast _\n  | T.Const _ -> true\n  | _ -> false\n\nlet is_movement = function\n  | T.Reshape _ | T.Expand _ | T.Pad _ | T.Shrink _ | T.Permute _\n  | T.Flip _ -> true\n  | _ -> false\n\nlet movement_src = function\n  | T.Reshape { src; _ } | T.Expand { src; _ } | T.Pad { src; _ }\n  | T.Shrink { src; _ } | T.Permute { src; _ } | T.Flip { src; _ } -> src\n  | _ -> assert false\n\nlet argsort order =\n  let indexed = List.mapi (fun i o -> (o, i)) order in\n  let sorted = List.sort (fun (a, _) (b, _) -> compare a b) indexed in\n  List.map snd sorted\n\nlet device_max_bufs = function\n  | \"METAL\" -> 31 | \"WEBGPU\" -> 8 | _ -> 0\n\n(* Syntactic sugar *)\n\n(* INDEX(INDEX(ptr, idxs1), idxs2) → INDEX(ptr, idxs1 @ idxs2) when the\n   inner INDEX is a pointer type and the outer is not. *)\nlet index_concat n =\n  match T.view n with\n  | Index { ptr; idxs; gate; dtype } ->\n      (match T.view ptr with\n      | Index { ptr = inner_ptr; idxs = inner_idxs; dtype = inner_dt; _ } ->\n          (match inner_dt with\n          | D.Ptr _ when (match dtype with D.Ptr _ -> false | _ -> true) ->\n              Some (T.index ~ptr:inner_ptr\n                ~idxs:(inner_idxs @ idxs) ?gate ~dtype ())\n          | _ -> None)\n      | _ -> None)\n  | _ -> None\n\n(* INDEX on elementwise/const: push the INDEX into the sources. *)\nlet early_rangeify n =\n  match T.view n with\n  | Index { ptr; idxs; dtype; _ } when idxs <> [] ->\n      let v = T.view ptr in\n      if is_elementwise v then\n        let new_children = List.map (fun s ->\n          T.index ~ptr:s ~idxs ~dtype ()) (T.children ptr) in\n        Some (T.replace ptr ~children:new_children ())\n      else None\n  | _ -> None\n\n(* Movement ops *)\n\n(* Push a movement op through INDEX by applying it to the ranges. 
*)\nlet mop_through_index shapes n =\n  match T.view n with\n  | Index { ptr; idxs; gate; dtype } when is_movement (T.view ptr) ->\n      let v = T.view ptr in\n      let src = movement_src v in\n      (* Check len(idxs) == len(ptr.shape), matching tinygrad's\n         len(idx.src[1:]) == len(r.shape) where r is the movement op.\n         For Ptr-typed PARAM sources (from debuf), derive src_shape from\n         the ptr dtype size when compute_shapes returns None. *)\n      (let src_shape = match shapes src with\n        | Some _ as s -> s\n        | None -> match T.dtype src with\n          | Some (D.Ptr p) -> Some [D.Ptr.size p]\n          | _ -> None in\n      match src_shape, shapes ptr with\n      | Some _, Some ptr_shape when List.length idxs = List.length ptr_shape ->\n          let shapes_with_ptr n = match shapes n with\n            | Some _ as s -> s\n            | None -> match T.dtype n with\n              | Some (D.Ptr p) -> Some [D.Ptr.size p]\n              | _ -> None in\n          let new_idxs = Indexing.apply_movement_op ~shapes:shapes_with_ptr v idxs in\n          Some (T.index ~ptr:src ~idxs:new_idxs ?gate ~dtype ())\n      | _ -> None)\n  | _ -> None\n\n(* Move movement ops and INDEX past AFTER (but not when AFTER has a raw\n   STORE with shaped children — from replace_contig_with_store_after). 
*)\nlet mop_past_after shapes n =\n  let v = T.view n in\n  if not (is_movement v || (match v with Index _ -> true | _ -> false))\n  then None\n  else\n    let src = match v with\n      | Index { ptr; _ } -> ptr | _ -> movement_src v in\n    match T.view src with\n    | After { src = after_src; deps; _ } ->\n        if shapes after_src = None then None\n        else if List.exists (fun d ->\n          match T.view d with\n          | Store { dst; _ } -> shapes dst <> None\n          | _ -> false) deps\n        then None\n        else\n          let new_after = T.after ~src:after_src ~deps in\n          Some (T.replace n\n            ~children:(new_after :: List.tl (T.children n)) ())\n    | _ -> None\n\n(* Strip movement ops from END: they don't affect the closed ranges. *)\nlet mop_past_end n =\n  match T.view n with\n  | End { value; ranges } when is_movement (T.view value) ->\n      Some (T.end_ ~value:(movement_src (T.view value)) ~ranges)\n  | _ -> None\n\n(* Fold moved AFTERs — openpilot hack *)\n\n(* Walk through PERMUTE/RESHAPE/WHERE+PAD on the store value to find\n   the underlying source, adjusting the AFTER inverse accordingly.\n   Called only when OPENPILOT_HACKS is set. 
*)\nlet found_after ctx ~after ~value =\n  let x = ref value in\n  let a = ref after in\n  (* CAST float16 → walk through *)\n  (if Helpers.Context_var.get float16_var <> 0 then\n    match T.view !x with\n    | Cast { src; dtype } when dtype = D.float16 ->\n        x := src;\n        a := T.cast ~src:!a ~dtype:D.float32\n    | _ -> ());\n  let continue_ = ref true in\n  while !continue_ do\n    match T.view !x with\n    | Permute { src; order; _ } ->\n        x := src;\n        a := T.permute ~src:!a ~order:(argsort order)\n    | Reshape { src; _ } ->\n        let src_shape = T.extract_int_shape\n          (List.nth (T.children !x) 1) in\n        (match src_shape with\n        | Some s ->\n            x := src;\n            a := T.reshape ~src:!a ~shape:(shape_node s)\n        | None -> continue_ := false)\n    | Ternary { op = `Where; b = pad_src; c = false_; _ }\n      when (match T.view (T.base false_) with\n            | Invalid_index _ -> true | _ -> false)\n        && (match T.view pad_src with Pad _ -> true | _ -> false) ->\n        let pad_inner = match T.view pad_src with\n          | Pad { src; _ } -> src | _ -> assert false in\n        (* XXX shrink bounds from pad.marg not yet extracted — we walk\n           through the pad to its source without adjusting the AFTER.\n           Tinygrad shrinks using (l, s-r) from the PAD arg and shape. *)\n        x := pad_inner\n    | _ -> continue_ := false\n  done;\n  Hashtbl.replace ctx (T.tag !x) !a\n\n(* Earliest rewrites *)\n\n(* Walk AFTER chain on the store target to find the root buffer. If the\n   store value depends on the target, insert contiguous to break the\n   dependency cycle. 
*)\nlet normalize_store_after_target_chain ~target ~value =\n  let root = root_after target in\n  let value =\n    if List.exists (fun n -> n == target) (T.toposort value)\n    then T.contiguous ~src:value ()\n    else value\n  in\n  T.after ~src:root ~deps:[T.store ~dst:root ~value]\n\n(* Make the store value contiguous if it reaches the target buffer through\n   hazardous movement ops. PERMUTE and FLIP reorder indices; SHRINK can\n   have overlapping regions when the destination is also shrunk. *)\nlet fix_store_after_hazard ~buf ~target ~value =\n  let is_unsafe =\n    let has_shrink = List.exists (fun n ->\n      match T.view n with Shrink _ -> true | _ -> false)\n      (T.toposort target) in\n    fun v -> match v with\n    | T.Permute _ | T.Flip _ -> true\n    | T.Shrink _ -> has_shrink\n    | _ -> false\n  in\n  let base = T.base target in\n  let slice = T.toposort value\n    ~gate:(fun s -> match T.view s with Contiguous _ -> false | _ -> true) in\n  let reaches : (int, bool) Hashtbl.t = Hashtbl.create (List.length slice) in\n  let found = ref false in\n  List.iter (fun s ->\n    if not !found then begin\n      let r = s == base ||\n        List.exists (fun c ->\n          Hashtbl.find_opt reaches (T.tag c) = Some true)\n          (T.children s) in\n      Hashtbl.replace reaches (T.tag s) r;\n      if r && is_unsafe (T.view s) then found := true\n    end) slice;\n  if !found then\n    Some (T.after ~src:buf\n            ~deps:[T.store ~dst:target ~value:(T.contiguous ~src:value ())])\n  else None\n\n(* Resolve a CALL by inlining the callee: gather PARAMs from the body,\n   map each to the corresponding argument by slot, and substitute.\n   Kernel calls (SINK with kernel_info), precompiled calls, and Ast\n   callees are never resolved — they are real invocations. 
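*)\n\n(* Illustrative, stdlib-only sketch of the slot mapping resolve_call\n   builds below: each in-range slot pairs with the argument at that\n   position, and out-of-range slots are dropped. Slots [1; 0] with args\n   [10; 20; 30] map to [(0, 10); (1, 20)]. *)\nlet _map_slots_sketch slots args =\n  List.filter_map\n    (fun s -> Option.map (fun a -> (s, a)) (List.nth_opt args s))\n    (List.sort compare slots)\n\n(* 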
*)\nlet resolve_call n =\n  match T.view n with\n  | Call { callee = Ref body; args; info; _ } ->\n      let is_kernel_sink = match T.view body with\n        | Sink { kernel_info = Some _; _ } -> true | _ -> false in\n      if info.precompile || is_kernel_sink then None\n      else\n        let params =\n          List.filter (fun x -> match T.view x with Param _ -> true | _ -> false)\n            (T.toposort body) in\n        let params = List.sort (fun a b ->\n          match T.view a, T.view b with\n          | Param { slot = sa; _ }, Param { slot = sb; _ } -> compare sa sb\n          | _ -> 0) params in\n        let mappings = List.filter_map (fun p ->\n          match T.view p with\n          | Param { slot; _ } when slot < List.length args ->\n              Some (p, List.nth args slot)\n          | _ -> None) params in\n        Some (T.substitute mappings body)\n  | _ -> None\n\n(* Detect which axes of [src] are expanded (broadcast from size 1) by\n   pushing ranges through the movement-op chain and seeing which survive.\n   An axis whose range disappears was introduced by EXPAND.  Tinygrad\n   uses index+substitute+pm_mops; we walk the chain directly since\n   movement ops are linear and apply_movement_op is the same transform\n   pm_mops invokes. 
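*)\n\n(* Illustrative, stdlib-only sketch of the simplest case detect_expanded\n   below handles through arbitrary movement chains: comparing a source\n   shape directly against its broadcast shape, an axis is expanded when\n   the source extent is 1 and the output extent is larger. *)\nlet _expanded_axes_sketch src_shape out_shape =\n  List.map2 (fun s o -> s = 1 && o > 1) src_shape out_shape\n\n(* 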
*)\nlet detect_expanded shapes src =\n  let src_shape = match shapes src with Some s -> s | None -> [] in\n  let n = List.length src_shape in\n  if n = 0 then []\n  else\n    let rngs = List.mapi (fun i s ->\n      if s > 1 then T.range ~size:(int_ s) ~axis:i ~kind:Axis_kind.Loop ()\n      else int_ 0) src_shape in\n    let rec push node rngs =\n      let v = T.view node in\n      match v with\n      | Reshape { src; _ } | Expand { src; _ } | Pad { src; _ }\n      | Shrink { src; _ } | Permute { src; _ } | Flip { src; _ } ->\n          push src (Indexing.apply_movement_op ~shapes v rngs)\n      | _ -> rngs\n    in\n    let final = push src rngs in\n    let live = List.concat_map (fun r ->\n      List.filter_map (fun x ->\n        match T.view x with Range { axis; _ } -> Some axis | _ -> None)\n        (r :: T.backward_slice r)) final in\n    List.init n (fun i -> not (List.mem i live))\n\n(* Split a large reduce into two phases for better GPU occupancy. The\n   dimension is factored: phase 1 reduces within each chunk, phase 2\n   reduces across chunks. Only applies when the reduction ratio exceeds\n   a threshold and the chosen axis is not an expanded broadcast. 
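*)\n\n(* Illustrative, stdlib-only sketch (not used by the pipeline) of the\n   two-phase reduction split_reduceop below sets up, over a plain float\n   array: phase 1 produces one partial sum per chunk, phase 2 reduces the\n   partials. The result matches the single-pass sum up to float rounding. *)\nlet _split_reduce_sketch xs divisor =\n  let n = Array.length xs in\n  assert (n mod divisor = 0);\n  let chunk = n / divisor in\n  let phase1 = Array.init divisor (fun c ->\n    let s = ref 0.0 in\n    for i = 0 to chunk - 1 do s := !s +. xs.(c * chunk + i) done;\n    !s) in\n  Array.fold_left ( +. ) 0.0 phase1\n\n(* 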
*)\nlet split_reduceop shapes n =\n  match T.view n with\n  | Reduce_axis { src; op; axes; _ } ->\n      (match shapes n, shapes src with\n      | Some red_shape, Some src_shape\n        when shape_prod red_shape > 0\n          && Helpers.Context_var.get split_reduceop_var <> 0\n          && shape_prod src_shape / shape_prod red_shape\n             >= Helpers.Context_var.get split_threshold_var ->\n          let is_expanded = detect_expanded shapes src in\n          let max_div = min 256\n            (1 lsl Helpers.Context_var.get split_size_var\n             / shape_prod red_shape) in\n          let candidates = List.concat_map (fun i ->\n            if List.nth is_expanded i then []\n            else\n              let dim = List.nth src_shape i in\n              let rec try_div d acc =\n                if d < 8 then List.rev acc\n                else if dim mod d = 0 then try_div (d - 1) ((i, d) :: acc)\n                else try_div (d - 1) acc\n              in\n              try_div max_div []) axes in\n          (match candidates with\n          | [] -> None\n          | (dim, divisor) :: _ ->\n              let nd = List.length src_shape in\n              let split_shape = List.init (nd + 1) (fun i ->\n                if i < dim then List.nth src_shape i\n                else if i = dim then divisor\n                else if i = dim + 1 then List.nth src_shape dim / divisor\n                else List.nth src_shape (i - 1)) in\n              let perm = List.init (nd + 1) (fun i ->\n                if i < dim then i\n                else if i < nd then i + 1\n                else dim) in\n              let splitted =\n                T.permute ~order:perm\n                  ~src:(T.reshape ~src ~shape:(shape_node split_shape)) in\n              let phase1 = T.contiguous\n                ~src:(T.reduce_axis ~src:splitted ~op ~axes) () in\n              let phase2 = T.reduce_axis ~src:phase1 ~op\n                ~axes:[List.length red_shape] in\n              
Some (T.reshape ~src:phase2 ~shape:(shape_node red_shape)))\n      | _ -> None)\n  | _ -> None\n\n(* Post-rangeify cleanups *)\n\n(* BUFFERIZE(INDEX(ptr, idxs), ranges) is identity when idxs = ranges.\n   Remove both and return ptr, shrunk to the bufferize shape. *)\nlet remove_noop_bufferize ~idxs ~ranges ~ptr ~buf_shape =\n  if not (List.equal (==) idxs ranges) then None\n  else begin match T.view ptr with\n  | Buffer_view _ -> None\n  | _ ->\n    match buf_shape with\n    | Some shape when shape <> [] ->\n        Some (T.shrink ~src:ptr\n                ~before:(shape_node (List.map (fun _ -> 0) shape))\n                ~after:(shape_node shape))\n    | _ -> Some ptr\n  end\n\nlet is_always_run = function\n  | T.Contiguous _ | T.Copy _ | T.Noop _ -> true | _ -> false\n\n(* Remove dead axes from BUFFERIZE. An axis is dead if its range is a\n   constant or is not referenced by the source computation. Dead axes\n   are collapsed to size 1 via reshape, then restored via expand. *)\nlet cleanup_dead_axes shapes n =\n  match T.view n with\n  | Bufferize { src; ranges; dtype; opts } ->\n      let src_v = T.view src in\n      (* Never touch CONTIGUOUS/COPY/NOOP sources or plain AFTERs *)\n      if is_always_run src_v then None\n      else if (match src_v with After _ -> true | _ -> false) then None\n      else\n        let shape = match shapes n with Some s -> s | None -> [] in\n        if List.length shape <> List.length ranges then None\n        else\n          (* Bail on symbolic range sizes *)\n          let has_symbolic = List.exists (fun r ->\n            match T.view r with\n            | Range { size; _ } ->\n                (match T.view size with Const _ -> false | _ -> true)\n            | _ -> false) ranges in\n          if has_symbolic then None\n          else\n            let src_ranges = T.ranges src in\n            let hit = ref false in\n            let new_ranges = ref [] in\n            let reshape_dims = ref [] in\n            List.iter2 (fun s rng 
->\n              let dead = match T.view rng with\n                | Const _ -> true\n                | Range _ ->\n                    not (List.exists (fun r -> r == rng) src_ranges)\n                | _ -> false in\n              if dead then begin\n                reshape_dims := 1 :: !reshape_dims;\n                hit := true\n              end else begin\n                reshape_dims := s :: !reshape_dims;\n                new_ranges := rng :: !new_ranges\n              end) shape ranges;\n            if not !hit then None\n            else\n              let new_ranges = List.rev !new_ranges in\n              let reshape_shape = List.rev !reshape_dims in\n              let ret = T.bufferize ~src ~ranges:new_ranges ~dtype ~opts in\n              let ret = T.reshape ~src:ret ~shape:(shape_node reshape_shape) in\n              Some (T.expand ~src:ret ~shape:(shape_node shape))\n  | _ -> None\n\nlet pcontig_var = Helpers.Context_var.int ~key:\"PCONTIG\" ~default:0\n\n(* Decide whether a BUFFERIZE can be removed by re-expressing its source\n   inline. The cost function counts accessed buffers and checks whether\n   reduce sources reference buffers — if so, the intermediate buffer is\n   needed for locality and we keep it. *)\nlet remove_bufferize ~src ~buf_ranges ~buf_shape ~idx_ranges ~removable =\n  assert (List.length buf_ranges = List.length idx_ranges);\n  let src_v = T.view src in\n  if is_always_run src_v || not removable then None\n  else\n    (* Walk source subtree: count accessed buffers, collect indexes and\n       reduces. Stop descending at global BUFFERIZE and MSTACK. 
*)\n    let accessed = Hashtbl.create 8 in\n    let indexes = ref [] in\n    let reduces = ref [] in\n    ignore (T.toposort src ~gate:(fun x ->\n      match T.view x with\n      | Bufferize { opts = { addrspace = D.Global; _ }; _ } ->\n          Hashtbl.replace accessed (T.tag x) (); false\n      | Mstack _ ->\n          Hashtbl.replace accessed (T.tag x) (); false\n      | Param _ ->\n          Hashtbl.replace accessed (T.tag x) (); true\n      | Index _ ->\n          indexes := x :: !indexes; true\n      | Reduce _ ->\n          reduces := x :: !reduces; true\n      | _ -> true));\n    let pcontig = Helpers.Context_var.get pcontig_var in\n    if Hashtbl.length accessed > 3 && pcontig <= 2 then None\n    else\n      (* Check if any reduce's source transitively references a buffer *)\n      let buffer_in_reduce =\n        if !reduces = [] then false\n        else begin\n          let rsrcs = List.filter_map (fun r ->\n            match T.view r with\n            | Reduce { src; _ } -> Some src | _ -> None) !reduces in\n          let found = ref false in\n          ignore (T.toposort (T.sink rsrcs) ~gate:(fun x ->\n            if !found then false\n            else match T.view x with\n            | Param _ | Bufferize _ -> found := true; false\n            | _ -> true));\n          !found\n        end\n      in\n      if buffer_in_reduce then begin\n        if pcontig > 2 then begin\n          (* Partial contig: keep ranges that overlap local indexes or\n             are used by reduce axes, bufferize only those. *)\n          let buf_size = match buf_shape with\n            | Some s -> shape_prod s | None -> 1 in\n          let in_size = Hashtbl.length accessed in\n          let out_in_ratio =\n            float_of_int (buf_size + 1) /. 
float_of_int (in_size + 1) in\n          if out_in_ratio < 10.0 then None\n          else\n            let local_indexes = List.filter (fun x ->\n              match T.view x with\n              | Index { ptr; _ } ->\n                  (match T.view ptr with\n                  | Bufferize { opts = { addrspace = D.Local; _ }; _ } -> true\n                  | _ -> false)\n              | _ -> false) !indexes in\n            let exclude_ranges =\n              List.concat_map (fun x ->\n                match T.view x with\n                | Index { idxs; _ } -> T.ranges (T.group idxs)\n                | _ -> []) local_indexes in\n            let subs = List.filter_map (fun (k, v) ->\n              match T.view k with Const _ -> None | _ -> Some (k, v))\n              (List.combine buf_ranges idx_ranges) in\n            let is_pcontig, is_subs = List.partition (fun (k, v) ->\n              List.exists (fun r -> r == k) exclude_ranges ||\n              List.exists (fun r ->\n                match T.view r with\n                | Range { kind; _ } -> kind = Axis_kind.Reduce\n                | _ -> false) (T.ranges v)) subs in\n            if is_subs = [] then None\n            else\n              let ret = T.substitute is_subs src in\n              if is_pcontig = [] then Some ret\n              else\n                let pc_rngs = List.map fst is_pcontig in\n                let pc_idxs = List.map snd is_pcontig in\n                let opts : K.bufferize_opts =\n                  { device = None; addrspace = D.Local; removable = true } in\n                let dtype = match T.dtype src with\n                  | Some d -> d | None -> D.float32 in\n                let buf = T.bufferize ~src:ret ~ranges:pc_rngs ~dtype ~opts in\n                Some (T.index ~ptr:buf ~idxs:pc_idxs ~dtype ())\n        end else None\n      end\n      else\n        (* Safe to remove: substitute BUFFERIZE ranges → INDEX ranges *)\n        let mappings = List.filter_map (fun (k, v) ->\n          
match T.view k with\n          | Const _ -> None\n          | _ -> (match T.view v with\n            | Invalid_index _ -> None\n            | _ -> Some (k, v)))\n          (List.combine buf_ranges idx_ranges) in\n        Some (T.substitute mappings src)\n\n(* Handle DISK/TINYFS buffer views: compute offset from the INDEX and\n   create a BUFFER_VIEW node. *)\nlet late_buffer_view devices n =\n  match T.view n with\n  | Bufferize { src; ranges; _ } ->\n      (match T.view src with\n      | Bitcast _ | Contiguous _ ->\n          let dev = devices n in\n          let is_disk = match dev with\n            | Some (T.Single d) ->\n                (String.length d >= 4 && String.sub d 0 4 = \"DISK\")\n                || (String.length d >= 6 && String.sub d 0 6 = \"TINYFS\")\n            | _ -> false in\n          if not is_disk then None\n          else\n            let shape = range_sizes ranges in\n            let size = shape_prod shape in\n            (* Walk up to find the INDEX *)\n            let rec find_index x =\n              match List.find_opt (fun u ->\n                match T.view u with Index _ -> true | _ -> false)\n                (T.children x) with\n              | Some idx -> idx\n              | None -> match T.children x with\n                | c :: _ -> find_index c | [] -> x\n            in\n            let idx = find_index src in\n            let offset = match T.view idx with\n              | Index { idxs; _ } when idxs = [] -> 0\n              | Index { idxs; _ } ->\n                  (* XXX tinygrad uses idx.vmin (symbolic minimum) for\n                     each index. We approximate with const values only;\n                     symbolic indices contribute 0. This is wrong for\n                     non-const offsets on DISK buffers. 
*)\n                  List.fold_left (fun acc i ->\n                    match T.view i with\n                    | Const { value; _ } ->\n                        (match C.view value with\n                        | Int n -> acc + Int64.to_int n | _ -> acc)\n                    | _ -> acc) 0 idxs\n                  |> max 0\n              | _ -> 0\n            in\n            let idx_base = T.base idx in\n            let bv = T.buffer_view ~src:idx_base ~size ~offset\n              ~dtype:(dtype_or_void src) in\n            let rng_node = match ranges with\n              | [r] -> r | _ -> List.hd ranges in\n            Some (T.replace n\n              ~children:[bv; rng_node] ())\n      | _ -> None)\n  | _ -> None\n\n(* Insert BUFFERIZE for elementwise sources when a kernel exceeds the\n   device's buffer limit. Each source gets its own ranges so it\n   materializes independently. *)\nlet limit_bufs (ctx : Indexing.indexing_context) devices n =\n  match T.view n with\n  | Binary _ | Ternary _ ->\n      let dev_name = match devices n with\n        | Some (T.Single d) ->\n            Some (List.hd (String.split_on_char ':' d))\n        | Some (T.Multi ds) ->\n            Some (List.hd (String.split_on_char ':' (List.hd ds)))\n        | None -> None in\n      Option.bind dev_name (fun dname ->\n        let max_bufs =\n          match Helpers.Context_var.get max_kernel_buffers_var with\n          | 0 -> device_max_bufs dname\n          | n -> n in\n        if max_bufs = 0 then None\n        else\n          let bufs = Hashtbl.create 16 in\n          ignore (T.toposort n ~gate:(fun u ->\n            match T.view u with\n            | Bufferize _ | After _ | Param _ | Mselect _\n            | Mstack _ | Define_var _ ->\n                Hashtbl.replace bufs (T.tag u) (); false\n            | _ -> true));\n          if Hashtbl.length bufs <= max_bufs - 1 then None\n          else\n            let children = T.children n in\n            let new_children = List.map (fun s ->\n  
            let sv = T.view s in\n              if is_elementwise sv && devices s <> None then\n                let orig_ranges = T.ranges s in\n                let new_ranges = List.map (fun x ->\n                  match T.view x with\n                  | Range { size; sub; dtype; _ } ->\n                      let axis = ctx.range_idx in\n                      ctx.range_idx <- ctx.range_idx + 1;\n                      T.range ~size ~axis ~sub\n                        ~kind:Axis_kind.Loop ~dtype ()\n                  | _ -> x) orig_ranges in\n                let dev = match devices s with\n                  | Some (T.Single d) -> Some (K.Device_single d)\n                  | Some (T.Multi ds) -> Some (K.Device_multi ds)\n                  | None -> None in\n                let opts : K.bufferize_opts =\n                  { device = dev; addrspace = D.Global;\n                    removable = true } in\n                let dtype = match T.dtype s with\n                  | Some d -> d | None -> D.float32 in\n                let subst = T.substitute\n                  (List.combine orig_ranges new_ranges) s in\n                let buf = T.bufferize ~src:subst ~ranges:new_ranges\n                  ~dtype ~opts in\n                T.index ~ptr:buf ~idxs:orig_ranges ~dtype ()\n              else s) children in\n            if List.for_all2 (==) children new_children then None\n            else Some (T.replace n ~children:new_children ()))\n  | _ -> None\n\n(* Add buffers *)\n\n(* Collapse multi-range BUFFERIZE into a single flat index. If the\n   BUFFERIZE already has one range, nothing to do. 
*)\nlet flatten_bufferize shapes n =\n  match T.view n with\n  | Bufferize { src; ranges; dtype; opts } when List.length ranges > 1 ->\n      let flat_idx = compute_flat_index ranges (range_sizes ranges) in\n      let ret = T.bufferize ~src ~ranges:[flat_idx] ~dtype ~opts in\n      let buf_shape = match shapes n with\n        | Some s -> s\n        | None -> range_sizes ranges\n      in\n      let ret = T.reshape ~src:ret ~shape:(shape_node buf_shape) in\n      (* If any range has symbolic size, shrink to actual bounds *)\n      let has_symbolic = List.exists (fun r ->\n        match T.view r with\n        | Range { size; _ } ->\n            (match T.view size with Const _ -> false | _ -> true)\n        | _ -> false) ranges in\n      if has_symbolic then\n        let sym = List.map (fun r ->\n          match T.view r with\n          | Range { size; _ } ->\n              (match T.view size with Const _ -> int_ 1 | _ -> size)\n          | _ -> int_ 1) ranges in\n        let before = shape_node (List.map (fun _ -> 0) sym) in\n        let after = match sym with [d] -> d | ds -> T.vectorize ~srcs:ds in\n        Some (T.shrink ~src:ret ~before ~after)\n      else Some ret\n  | _ -> None\n\n(* Convert BUFFERIZE to STORE + BUFFER. Three paths:\n   - AFTER: the source is an assign — wrap existing store in END\n   - GLOBAL: allocate a new buffer, index, store, END\n   - LOCAL: like GLOBAL but with DEFINE_LOCAL and barrier (not used when\n     allow_locals=false in the main pipeline) *)\nlet bufferize_to_store counter n =\n  match T.view n with\n  | Bufferize { src; ranges; dtype; opts } ->\n      (* Extract Range nodes from the expression tree — after\n         flatten_bufferize, ranges may contain a single flat index\n         expression rather than raw Range nodes. 
*)\n      let all_ranges = List.filter (fun r ->\n        match T.view r with T.Range _ -> true | _ -> false)\n        (T.toposort (T.sink ranges)) in\n      let size = shape_prod (range_sizes all_ranges) in\n      if size <= 0 then None\n      else\n        let range_nodes =\n          List.sort (fun a b ->\n            match T.view a, T.view b with\n            | Range { axis = a1; _ }, Range { axis = a2; _ } -> compare a1 a2\n            | _ -> 0) all_ranges in\n        let rngs = range_nodes in\n        let ptr_dt =\n          D.Ptr.create (D.val_of dtype) ~addrspace:opts.addrspace ~size in\n        (match T.view src with\n        (* AFTER path: source is an assign (AFTER+STORE) *)\n        | After { src = after_src; deps; _ } ->\n            let stores = List.filter (fun d ->\n              match T.view d with\n              | Store { dst; _ } ->\n                  (match T.view dst with Index _ -> true | _ -> false)\n              | _ -> false) deps in\n            let buf = T.base after_src in\n            (match stores with\n            | [] -> Some buf\n            | store :: _ ->\n                let dst, store_val = match T.view store with\n                  | Store { dst; value } -> dst, value\n                  | _ -> assert false in\n                (* Walk through BUFFERIZE(INDEX(…)) on the store target *)\n                let target = match T.view dst with\n                  | Index { ptr; _ } ->\n                      (match T.view ptr with\n                      | Bufferize { src = inner; _ } ->\n                          (match T.view inner with\n                          | Index _ -> inner | _ -> dst)\n                      | _ -> dst)\n                  | _ -> dst in\n                if store_val == target then\n                  Some (T.after ~src:buf ~deps:[])\n                else\n                  let target_rngs = T.ranges target in\n                  let all_rngs = List.sort_uniq (fun a b ->\n                    match T.view a, T.view b 
with\n                    | Range { axis = a1; _ }, Range { axis = a2; _ } ->\n                        compare a1 a2\n                    | _ -> compare (T.tag a) (T.tag b))\n                    (target_rngs @ range_nodes) in\n                  let ended = T.end_\n                    ~value:(T.store\n                      ~dst:(T.replace target ~dtype:(D.Ptr ptr_dt) ())\n                      ~value:store_val)\n                    ~ranges:all_rngs in\n                  Some (T.after ~src:buf ~deps:[ended]))\n        (* GLOBAL path: new buffer *)\n        | _ when opts.addrspace = D.Global ->\n            let luniq_id = !counter in\n            incr counter;\n            let dev = match opts.device with\n              | Some (K.Device_single d) -> T.device (T.Single d)\n              | Some (K.Device_multi ds) -> T.device (T.Multi ds)\n              | Some (K.Device_index _) | None ->\n                  T.device (T.Single \"CPU\") in\n            let buf = T.buffer ~unique:(T.lunique ~id:luniq_id)\n              ~device:dev ~size ~dtype in\n            let idx = T.index ~ptr:buf ~idxs:rngs\n              ~dtype:(D.Ptr ptr_dt) () in\n            let ended = T.end_\n              ~value:(T.store ~dst:idx ~value:src)\n              ~ranges:range_nodes in\n            Some (T.after ~src:buf ~deps:[ended])\n        (* LOCAL path: DEFINE_LOCAL + barrier *)\n        | _ when opts.addrspace = D.Local ->\n            incr counter;\n            let buf = T.define_local ~size ~dtype:ptr_dt in\n            let idx = T.index ~ptr:buf ~idxs:rngs\n              ~dtype:(D.Ptr ptr_dt) () in\n            let ended = T.end_\n              ~value:(T.store ~dst:idx ~value:src)\n              ~ranges:range_nodes in\n            let bar = T.barrier in\n            let ended_bar = T.end_ ~value:bar ~ranges:[ended] in\n            Some (T.after ~src:buf ~deps:[ended_bar])\n        | _ -> None)\n  | _ -> None\n\n(* Split into kernels *)\n\n(* Per-kernel context accumulated during the local 
graph rewrite that\n   converts a STORE/END subtree into a CALL(kernel SINK). *)\ntype split_context = {\n  mutable slot : int;\n  buf_map : (int, T.t) Hashtbl.t;\n  vars : (int, T.t) Hashtbl.t;\n  mutable range_ctr : int;\n  mutable opts : K.Opt.t list option;\n  renumbered : (int, unit) Hashtbl.t;\n  buf_shapes : (int, int list) Hashtbl.t;\n}\n\nlet create_split_context () = {\n  slot = 0;\n  buf_map = Hashtbl.create 16;\n  vars = Hashtbl.create 4;\n  range_ctr = 0;\n  opts = None;\n  renumbered = Hashtbl.create 16;\n  buf_shapes = Hashtbl.create 16;\n}\n\n(* Convert BUFFER/PARAM to a kernel PARAM with a slot index. The buffer\n   is reshaped to its shape so downstream indexing sees the right layout. *)\nlet debuf ctx shapes n =\n  let dtype = dtype_or_void n in\n  let size = match T.view n with\n    | Buffer { size; _ } -> size\n    | _ -> (match shapes n with\n      | Some dims -> List.fold_left ( * ) 1 dims\n      | None -> 1)\n  in\n  let ptr_dt = D.Ptr.create (D.val_of dtype) ~addrspace:D.Global ~size in\n  let slot = ctx.slot in\n  ctx.slot <- ctx.slot + 1;\n  let ret = T.param ~slot ~dtype:(D.Ptr ptr_dt) () in\n  (* Use multi-dim shape from buf_shapes (precomputed from INDEX consumers)\n     when available, falling back to shapes. *)\n  let buf_shape = match Hashtbl.find_opt ctx.buf_shapes (T.tag n) with\n    | Some s -> Some s\n    | None -> shapes n in\n  let ret = match buf_shape with\n    | Some shape when shape <> [] ->\n        T.reshape ~src:ret ~shape:(shape_node shape)\n    | _ -> ret\n  in\n  (* XXX tinygrad distinguishes max_shape (static upper bound) from shape\n     (possibly symbolic) and adds a shrink when they differ. *)\n  if not (Hashtbl.mem ctx.buf_map (T.tag n)) then\n    Hashtbl.replace ctx.buf_map (T.tag n) n;\n  Some ret\n\n(* Handle AFTER/MSTACK/MSELECT during kernel split: record the buffer\n   mapping and return the buffer node so downstream sees BUFFER not the\n   wrapper. Local-memory AFTERs are left in the kernel. 
*)\nlet handle_after ctx n =\n  let v = T.view n in\n  let is_local = match v with\n    | After { dtype = D.Ptr p; _ } -> D.Ptr.addrspace p = D.Local\n    | _ -> false in\n  if is_local then None\n  else\n    let buf = match v with\n      | After { src; _ } -> T.base src\n      | Mstack _ | Mselect _ -> List.hd (T.children n)\n      | _ -> n in\n    let buf = match T.view buf with\n      | Mstack _ | Mselect _ -> List.hd (T.children buf)\n      | _ -> buf in\n    assert (not (Hashtbl.mem ctx.buf_map (T.tag buf)));\n    Hashtbl.replace ctx.buf_map (T.tag buf) n;\n    Some buf\n\n(* Cycle detection: verify each buffer is accessed through a single\n   index path. Tinygrad compares idx.src[0].op (operation type); we\n   compare node identity which is strictly more conservative — it may\n   report cycles that tinygrad allows, but will never miss one. *)\nlet find_bufs n =\n  let slice = T.toposort n\n    ~gate:(fun x -> match T.view x with After _ -> false | _ -> true) in\n  let read_from : (int, int) Hashtbl.t = Hashtbl.create 8 in\n  List.iter (fun s ->\n    match T.view s with\n    | Index { ptr; _ } ->\n        let buf = T.base ptr in\n        (match T.view buf with\n        | Buffer _ | Param _ ->\n            let tag = T.tag ptr in\n            (match Hashtbl.find_opt read_from (T.tag buf) with\n            | Some prev when prev <> tag ->\n                failwith \"cycle detected while indexing buffer\"\n            | _ -> Hashtbl.replace read_from (T.tag buf) tag)\n        | _ -> ())\n    | _ -> ()) slice;\n  None\n\n(* Record a BIND node for the kernel argument list, pass through to the\n   bound variable. *)\nlet unbind_kernel ctx n =\n  Hashtbl.replace ctx.vars (T.tag n) n;\n  match T.view n with Bind { var; _ } -> Some var | _ -> assert false\n\n(* Renumber range axes starting from 0 so that kernel deduplication works\n   regardless of the original axis numbering.  
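*)\n\n(* Illustrative, stdlib-only sketch of first-seen renumbering: axis ids\n   [3; 7; 3; 9] and [2; 5; 2; 8] both become [0; 1; 0; 2], so two\n   structurally equal kernels compare equal after the rewrite. *)\nlet _renumber_sketch axes =\n  let seen = Hashtbl.create 8 in\n  let ctr = ref 0 in\n  List.map (fun a ->\n    match Hashtbl.find_opt seen a with\n    | Some i -> i\n    | None -> let i = !ctr in incr ctr; Hashtbl.replace seen a i; i) axes\n\n(* 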
Tinygrad uses a tag field\n   on UOps to avoid renumbering the replacement; we track the output\n   nodes explicitly since the OCaml IR has no mutable tag. *)\nlet renumber_range ctx n =\n  if Hashtbl.mem ctx.renumbered (T.tag n) then None\n  else match T.view n with\n  | Range { size; sub; kind; dtype; _ } ->\n      let axis = ctx.range_ctr in\n      ctx.range_ctr <- ctx.range_ctr + 1;\n      let r = T.range ~size ~axis ~sub ~kind ~dtype () in\n      Hashtbl.replace ctx.renumbered (T.tag r) ();\n      Some r\n  | _ -> assert false\n\n(* Strip CONTIGUOUS, saving any Opt hints to ctx for KernelInfo. *)\nlet get_contiguous ctx n =\n  match T.view n with\n  | Contiguous { src; opts; _ } ->\n      if opts <> [] then ctx.opts <- Some opts;\n      Some src\n  | _ -> assert false\n\n(* Local rewrite for kernel split: convert BUFFER/PARAM to kernel PARAMs,\n   record bindings, renumber ranges, strip CONTIGUOUS/NOOP/CONST srcs. *)\nlet to_define_global ctx shapes n =\n  match T.view n with\n  | Store _ -> find_bufs n\n  | Buffer _ -> debuf ctx shapes n\n  | Param { device = Some _; _ } -> debuf ctx shapes n\n  | Bind _ -> unbind_kernel ctx n\n  | After _ | Mstack _ | Mselect _ -> handle_after ctx n\n  | Index { ptr; idxs = []; _ } ->\n      (* INDEX(DEFINE_VAR) → DEFINE_VAR *)\n      (match T.view ptr with Define_var _ -> Some ptr | _ -> None)\n  | Bufferize { src; ranges; dtype; opts } ->\n      (* Remove device from local BUFFERIZE *)\n      if opts.device <> None then\n        Some (T.bufferize ~src ~ranges ~dtype\n                ~opts:{ opts with device = None })\n      else None\n  | Const _ ->\n      (* Remove UNIQUE/DEVICE children to dedup constants *)\n      if T.children n <> [] then Some (T.replace n ~children:[] ())\n      else None\n  | Range _ -> renumber_range ctx n\n  | Contiguous _ -> get_contiguous ctx n\n  | Noop { src = Some s; _ } -> Some s\n  | Noop { src = None; _ } -> None\n  (* XXX tinygrad's rangeify_codegen has AFTER.broadcast and AFTER.gep\n     
rules for DEFINE_LOCAL vectorized access. These fire when local\n     buffers use vector dtypes. Not yet needed — add when vector-typed\n     DEFINE_LOCAL is emitted. *)\n  | _ -> None\n\n(* Linearize multi-dim kernel index expressions into a single flat offset.\n   Iterate dims right-to-left, accumulating stride, building sum(acc * src).\n   The expression structure must match tinygrad's so that range renumbering\n   in the split_store local rewrite assigns axis numbers in the same order. *)\nlet linearize_idxs idxs =\n  if List.length idxs <= 1 then idxs\n  else\n    let dim_sizes = List.map (fun idx ->\n      match K.view idx with\n      | Range { size; _ } -> K.const_arg size\n      | _ -> None) idxs in\n    if List.exists Option.is_none dim_sizes then idxs\n    else\n      let dims = List.map (fun s ->\n        match s with Some (Const.Int n) -> Int64.to_int n | _ -> 0)\n        dim_sizes in\n      (* Right-to-left: accumulate stride, build terms — matching\n         tinygrad's _apply_reshape. Then simplify to canonicalize,\n         matching tinygrad's graph_rewrite(combined, symbolic+...). *)\n      let acc = ref 1 in\n      let terms = List.rev_map (fun (s, idx) ->\n        let t = if !acc = 1 then idx\n          else K.binary ~op:`Mul ~lhs:(K.const_int !acc) ~rhs:idx in\n        acc := !acc * s; t)\n        (List.rev (List.combine dims idxs)) in\n      let flat = List.fold_left (fun a t -> K.binary ~op:`Add ~lhs:a ~rhs:t)\n        (K.const_int 0) terms in\n      let flat = K.graph_rewrite ~name:\"linearize_simplify\"\n        (K.first_match [Symbolic.sym]) flat in\n      [flat]\n\n(* Convert a tensor subtree (after to_define_global) into kernel IR.\n   Each tensor node maps 1:1 to its kernel equivalent.  Shaped_wmma\n   should already be lowered by earliest_rewrites; hitting it here\n   is a pipeline bug. 
*)\nlet tensor_subtree_to_kernel root =\n  let slice = T.toposort root in\n  let tbl : (int, K.t) Hashtbl.t = Hashtbl.create (List.length slice) in\n  let lookup n = match Hashtbl.find_opt tbl (T.tag n) with\n    | Some k -> k | None -> K.const (C.int D.Val.index 0) in\n  let map ns = List.map lookup ns in\n  List.iter (fun n ->\n    let k = match T.view n with\n      | Const { value; _ } -> K.const value\n      | Range { size; axis; sub; kind; dtype } ->\n          K.range ~size:(lookup size) ~axis ~sub ~kind\n            ~dtype:(D.val_of dtype) ()\n      | End { value; ranges } ->\n          K.end_ ~value:(lookup value) ~ranges:(map ranges) ()\n      | Index { ptr; idxs; gate; _ } ->\n          K.index ~ptr:(lookup ptr) ~idxs:(map idxs)\n            ?gate:(Option.map lookup gate) ~as_ptr:false ()\n      | Store { dst; value } ->\n          K.store ~dst:(lookup dst) ~value:(lookup value) ~ranges:[]\n      | Reduce { src; ranges; op; dtype } ->\n          K.reduce ~op ~src:(lookup src) ~ranges:(map ranges)\n            ~dtype:(D.val_of dtype)\n      | Unary { op; src; _ } -> K.unary ~op ~src:(lookup src)\n      | Binary { op; lhs; rhs; _ } ->\n          K.binary ~op ~lhs:(lookup lhs) ~rhs:(lookup rhs)\n      | Ternary { op; a; b; c; _ } ->\n          K.ternary ~op ~a:(lookup a) ~b:(lookup b) ~c:(lookup c)\n      | Cast { src; dtype } -> K.cast ~src:(lookup src) ~dtype\n      | Bitcast { src; dtype } ->\n          K.bitcast ~src:(lookup src) ~dtype:(D.val_of dtype)\n      | Vectorize { srcs; _ } -> K.vectorize ~srcs:(map srcs)\n      | Define_var { name; lo; hi; dtype } ->\n          K.define_var ~name ~lo ~hi ~dtype:(D.val_of dtype) ()\n      | Define_local { size; dtype } -> K.define_local ~size ~dtype\n      | Barrier -> K.barrier\n      | Invalid_index _ -> K.invalid_index ()\n      | Bufferize { src; ranges; dtype; opts } ->\n          K.bufferize ~src:(lookup src) ~ranges:(map ranges)\n            ~dtype:(D.Ptr.create (D.val_of dtype)\n              
~addrspace:opts.addrspace ~size:1) ~opts\n      | After { src; deps; _ } ->\n          K.after ~src:(lookup src) ~deps:(map deps)\n      | Sink { srcs; kernel_info } -> K.sink ?kernel_info (map srcs)\n      | Noop { src = Some s; _ } -> lookup s\n      | Noop { src = None; _ } -> K.const (C.int D.Val.index 0)\n      | Param { slot; dtype; _ } ->\n          let pt = match dtype with\n            | D.Ptr p -> p\n            | D.Val v -> D.Ptr.create v ~addrspace:D.Global ~size:1 in\n          K.param ~idx:slot ~dtype:pt\n      | Bind { var; _ } -> lookup var\n      | Contiguous { src; _ } | Reshape { src; _ } | Expand { src; _ }\n      | Pad { src; _ } | Shrink { src; _ } | Permute { src; _ }\n      | Flip { src; _ } | Detach { src; _ } -> lookup src\n      | Device _ | Unique _ | Lunique _ ->\n          K.const (C.int D.Val.index 0)\n      | v -> failwith (Format.asprintf\n          \"tensor_subtree_to_kernel: unexpected %a\" T.pp_view v)\n    in\n    Hashtbl.replace tbl (T.tag n) k) slice;\n  lookup root\n\n(* Ranges reachable from [n] that are not closed by any END or consumed\n   by a REDUCE in the subtree. Tinygrad's .ranges excludes ended and\n   reduce-internal ranges. *)\nlet open_ranges n =\n  let all = T.ranges n in\n  let closed = List.concat_map (fun x ->\n    match T.view x with\n    | End { ranges; _ } -> ranges\n    | Reduce { ranges; _ } -> ranges\n    | _ -> [])\n    (T.toposort n) in\n  List.filter (fun r -> not (List.exists (fun e -> e == r) closed)) all\n\n(* Convert a STORE/END subtree into a CALL(kernel SINK, bufs, vars). 
*)\nlet split_store shapes n =\n  match T.view n with\n  | Store _ | End _ ->\n      (* Don't split if there are open ranges *)\n      if open_ranges n <> [] then None\n      (* Raw shaped STORE should be processed through its END wrapper *)\n      else if (match T.view n with\n        | Store { dst; _ } -> shapes dst <> None\n        | _ -> false)\n      then None\n      else\n        let ctx = create_split_context () in\n        (* Precompute multi-dim shapes for Buffer nodes from their INDEX\n           consumers. *)\n        List.iter (fun nd -> match T.view nd with\n          | Index { ptr; idxs; _ } when List.length idxs > 1 ->\n            (match T.view ptr with\n             | Buffer _ ->\n               let dims = List.filter_map (fun r -> match T.view r with\n                 | Range { size; _ } -> (match T.view size with\n                   | Const { value; _ } -> (match C.view value with\n                     | Int n -> Some (Int64.to_int n) | _ -> None)\n                   | _ -> None)\n                 | _ -> None) idxs in\n               if List.length dims = List.length idxs then\n                 Hashtbl.replace ctx.buf_shapes (T.tag ptr) dims\n             | _ -> ())\n          | _ -> ()) (T.toposort n);\n        (* Flatten range: toposort-reorder range children of End/Store.\n           Tensor-level equivalent of Simplify.flatten_range. *)\n        let flatten_range_t n =\n          match T.view n with\n          | End { value; ranges } when ranges <> [] ->\n              let new_rngs = List.filter (fun r ->\n                match T.view r with Range _ -> true | _ -> false)\n                (T.toposort (T.sink ranges)) in\n              if List.equal (==) ranges new_rngs then None\n              else Some (T.end_ ~value ~ranges:new_rngs)\n          | _ -> None\n        in\n        (* Use on-the-fly shape computation for the local rewrite,\n           since debuf creates new RESHAPE nodes not in the precomputed\n           shapes table. 
*)\n        let local_shapes n = T.compute_shapes (T.sink [n]) n in\n        let rewrite = T.first_match [\n          to_define_global ctx shapes;\n          flatten_range_t;\n          mop_through_index local_shapes;\n        ] in\n        let ret = T.graph_rewrite ~name:\"kernel_split\" rewrite n in\n        (* Determine callee type based on the stored value.\n           If the END already wraps a CALL (the inner STORE was split\n           first in the bottom-up pass), nothing more to do. *)\n        let stored = match T.view ret with\n          | Store { value; _ } -> Some value\n          | End { value; _ } ->\n              (match T.view value with\n              | Store { value; _ } -> Some value\n              | Call _ -> None\n              | _ -> failwith \"split_store: END wraps non-STORE\")\n          | Call _ -> None\n          | _ -> failwith \"split_store: unexpected result\" in\n        begin match stored with\n        | None -> None\n        | Some stored ->\n        let bufs = Hashtbl.fold (fun _ v acc -> v :: acc)\n          ctx.buf_map [] in\n        let vars = Hashtbl.fold (fun _ v acc -> v :: acc)\n          ctx.vars [] in\n        let info : T.call_info = {\n          grad_fxn = None; metadata = [];\n          name = None; precompile = false } in\n        let dtype = match T.dtype n with\n          | Some d -> d | None -> D.void in\n        (* COPY/BUFFER_VIEW are cross-device ops — keep as Ref *)\n        let callee : T.callee = match T.view stored with\n          | Copy _ | Buffer_view _ ->\n              let ended = match T.view ret with\n                | End { ranges; _ } -> ranges | _ -> [] in\n              Ref (T.replace stored\n                ~children:(T.children stored @ ended) ())\n          | _ ->\n              (* Normal kernel: convert tensor subtree to kernel IR *)\n              let kernel_sink = T.sink\n                ~kernel_info:{ K.name = \"\"; axis_kinds = [];\n                  dont_use_locals = false; applied_opts = 
[];\n                  opts_to_apply = ctx.opts; estimates = None }\n                [ret] in\n              Ast (tensor_subtree_to_kernel kernel_sink)\n        in\n        Some (T.call ~callee ~args:(bufs @ vars) ~info ~dtype)\n        end\n  | _ -> None\n\n(* WAR dependency fixup *)\n\n(* If kernel U reads buffer S, and S is also written by another kernel,\n   S's write must complete before U runs. Add explicit ordering deps. *)\nlet fix_war_deps root =\n  let nodes = T.toposort root in\n  let afters = List.filter (fun n ->\n    match T.view n with After _ -> true | _ -> false) nodes in\n  if afters = [] then root\n  else\n    let buf_of n = match T.view n with\n      | After { src; _ } -> T.base src | _ -> n in\n    let kernel_assign : (int, T.t) Hashtbl.t = Hashtbl.create 16 in\n    List.iter (fun u ->\n      Hashtbl.replace kernel_assign (T.tag (buf_of u)) u) afters;\n    let call_of u = match T.view u with\n      | After { deps; _ } ->\n          List.find_opt (fun d ->\n            match T.view d with Call _ -> true | _ -> false) deps\n      | _ -> None in\n    let assign_rep : (int, T.t list) Hashtbl.t = Hashtbl.create 16 in\n    List.iter (fun u ->\n      let u_buf = buf_of u in\n      let reads = match call_of u with\n        | Some call -> (match T.view call with\n          | Call { args; _ } ->\n              List.filter (fun a -> match T.view a with\n                | Buffer _ | Param _ -> true | _ -> false) args\n          | _ -> [])\n        | None -> [] in\n      List.iter (fun s ->\n        if s != u_buf then\n          match Hashtbl.find_opt kernel_assign (T.tag s) with\n          | Some a ->\n              if call_of a <> None && call_of a = call_of u then ()\n              else begin\n                let prev = match Hashtbl.find_opt assign_rep (T.tag a) with\n                  | Some l -> l | None -> [] in\n                if not (List.exists (fun p -> p == u) prev) then\n                  Hashtbl.replace assign_rep (T.tag a) (u :: prev)\n      
        end\n          | None -> ()) reads) afters;\n    if Hashtbl.length assign_rep = 0 then root\n    else\n      T.graph_rewrite ~name:\"fix_war_deps\" (fun n ->\n        match Hashtbl.find_opt assign_rep (T.tag n) with\n        | Some extra_deps -> (match T.view n with\n          | After { src; deps; _ } ->\n              Some (T.after ~src ~deps:(deps @ extra_deps))\n          | _ -> None)\n        | None -> None) root\n\n(* Main pipeline *)\n\nlet get_kernel_graph (root : T.t) : T.t =\n  let shapes = T.compute_shapes root in\n  let devices = T.compute_devices root in\n  (* 1. multi_pm *)\n  let root =\n    T.graph_rewrite ~name:\"multi_pm\"\n      (Multi.multi_pm ~shapes ~devices) root in\n  let shapes = T.compute_shapes root in\n  let devices = T.compute_devices root in\n  (* 2. fold moved AFTERs (openpilot hack) *)\n  let root =\n    if Helpers.Context_var.get openpilot_hacks_var = 0 then root\n    else\n      let ctx : (int, T.t) Hashtbl.t = Hashtbl.create 16 in\n      T.graph_rewrite ~name:\"fold_moved_after\" (fun n ->\n        match T.view n with\n        | After { deps; _ } ->\n            let store = List.find_opt (fun d ->\n              match T.view d with Store _ -> true | _ -> false) deps in\n            (match store with\n            | Some s ->\n                let value = match T.view s with\n                  | Store { value; _ } -> value | _ -> assert false in\n                let after = n in\n                (match T.view value with\n                | Reshape _ | Expand _ | Pad _ | Shrink _ | Permute _\n                | Flip _ | Cast _ | Ternary { op = `Where; _ } ->\n                    found_after ctx ~after ~value; None\n                | _ -> None)\n            | None -> None)\n        | Unary _ | Binary _ | Ternary _ | Cast _ | Bitcast _ ->\n            let children = T.children n in\n            let new_children = List.map (fun s ->\n              match Hashtbl.find_opt ctx (T.tag s) with\n              | Some after -> after | None 
-> s) children in\n            if List.for_all2 (==) children new_children then None\n            else Some (T.replace n ~children:new_children ())\n        | _ -> None) root\n  in\n  (* 3. earliest_rewrites (syntactic sugar + mops + canonicalization) *)\n  let root =\n    T.graph_rewrite ~name:\"earliest_rewrites\" (T.first_match [\n      index_concat;\n      early_rangeify;\n      mop_through_index shapes;\n      mop_past_after shapes;\n      mop_past_end;\n      (* Merge adjacent reshapes *)\n      (fun n -> match T.view n with\n        | Reshape { src; shape; _ } ->\n            (match T.view src with\n            | Reshape { src = inner; _ } ->\n                Some (T.reshape ~src:inner ~shape)\n            | _ -> None)\n        | _ -> None);\n      resolve_call;\n      (* Resolve allreduce *)\n      (fun n -> match T.view n with\n        | Allreduce { src = buf; device; op; dtype } ->\n            let shape = match shapes n with Some s -> s | None -> [] in\n            Allreduce.create_allreduce_function buf ~op ~device ~dtype\n              ~shape ()\n        | _ -> None);\n      split_reduceop shapes;\n      (* Remove DETACH/CONTIGUOUS_BACKWARD *)\n      (fun n -> match T.view n with\n        | Detach { src; _ } | Contiguous_backward { src; _ } -> Some src\n        | _ -> None);\n      (* COPY size mismatch: wrap in contiguous if movement ops changed size *)\n      (fun n -> match T.view n with\n        | Copy { src; _ } when is_movement (T.view src) ->\n            let base_shape = shapes (T.base src) in\n            let src_shape = shapes src in\n            if base_shape <> src_shape then\n              Some (T.replace n\n                ~children:(T.contiguous ~src () :: List.tl (T.children n)) ())\n            else None\n        | _ -> None);\n      (* Same-device COPY → NOOP *)\n      (fun n -> match T.view n with\n        | Copy { src; _ } ->\n            (match devices src, devices n with\n            | Some d1, Some d2 when d1 = d2 ->\n            
    Some (T.noop ~src ~dtype:(dtype_or_void src) ())\n            | _ -> None)\n        | _ -> None);\n      (* Assign rules (AFTER+STORE) *)\n      (fun n -> match T.view n with\n        | After { src = buf; deps; _ } ->\n            let store = List.find_opt (fun d ->\n              match T.view d with Store _ -> true | _ -> false) deps in\n            (match store with\n            | Some s ->\n                let target, value = match T.view s with\n                  | Store { dst; value } -> dst, value\n                  | _ -> assert false in\n                (* Bitcast on target → move to value *)\n                (match T.view target with\n                | Bitcast { src = inner; _ } ->\n                    Some (T.after ~src:inner\n                      ~deps:[T.store ~dst:inner\n                        ~value:(T.bitcast ~src:value\n                          ~dtype:(dtype_or_void inner))])\n                | _ ->\n                    (* View shape mismatch → wrap in inner AFTER *)\n                    let target_shape = shapes target in\n                    let buf_shape = shapes n in\n                    if target_shape <> buf_shape\n                       && target_shape <> None && buf_shape <> None then\n                      let inner = T.after ~src:target ~deps:[s] in\n                      let extras = List.filter (fun d -> d != s) deps in\n                      Some (T.after ~src:buf ~deps:(inner :: extras))\n                    else\n                      match fix_store_after_hazard ~buf ~target ~value with\n                      | Some _ as r -> r\n                      | None ->\n                        match T.view target with\n                        | After _ ->\n                            Some (normalize_store_after_target_chain\n                              ~target ~value)\n                        | _ -> None)\n            | None -> None)\n        | _ -> None);\n      (* Size-0 reduce → identity element *)\n      (fun n -> match T.view n 
with\n        | Reduce_axis { src; op; dtype; _ }\n          when (match shapes src with\n            | Some s -> shape_prod s = 0 | None -> false)\n            && (match shapes n with\n              | Some s -> shape_prod s > 0 | None -> false) ->\n            Some (T.const (C.identity_element op (D.val_of dtype)) dtype)\n        | _ -> None);\n      (* Size-0 → zero *)\n      (fun n -> match T.view n with\n        | Sink _ -> None\n        | _ when (match shapes n with\n            | Some s -> shape_prod s = 0 | None -> false) ->\n            let dt = dtype_or_void n in\n            Some (T.const (C.zero (D.val_of dt)) dt)\n        | _ -> None);\n    ]) root in\n  let shapes = T.compute_shapes root in\n  let devices = T.compute_devices root in\n  (* 4. run_rangeify *)\n  let ctx = Indexing.run_rangeify root ~shapes in\n  (* 5. apply_rangeify *)\n  let root = Indexing.apply_rangeify_pass ctx ~devices root in\n  let shapes = T.compute_shapes root in\n  (* 6. post-rangeify: buffer folding + buffer removal.\n     Tinygrad also composes symbolic + pm_reduce_simplify here, but in\n     our split IR, symbolic operates on Kernel.t and is applied during\n     run_rangeify/apply_rangeify_pass via simplify_tensor_expr. 
*)\n  let root =\n    T.graph_rewrite ~name:\"post_rangeify\" (T.first_match [\n      cleanup_dead_axes shapes;\n      (* Remove noop bufferize *)\n      (fun n -> match T.view n with\n        | Bufferize { src; ranges; _ } ->\n            (match T.view src with\n            | Index { ptr; idxs; _ } ->\n                remove_noop_bufferize ~idxs ~ranges ~ptr\n                  ~buf_shape:(shapes n)\n            | _ -> None)\n        | _ -> None);\n      (* No buffers for const *)\n      (fun n -> match T.view n with\n        | Bufferize { src; _ } ->\n            (match T.view src with\n            | Const { value; _ } ->\n                Some (T.const value (dtype_or_void n))\n            | _ -> None)\n        | _ -> None);\n      (* Indexing a const is a const *)\n      (fun n -> match T.view n with\n        | Index { ptr; _ } ->\n            (match T.view ptr with Const _ -> Some ptr | _ -> None)\n        | _ -> None);\n      (* Copy on const is const *)\n      (fun n -> match T.view n with\n        | Copy { src; _ } ->\n            (match T.view src with\n            | Const { value; _ } ->\n                Some (T.const value (dtype_or_void n))\n            | _ -> None)\n        | _ -> None);\n      (* Noop on const *)\n      (fun n -> match T.view n with\n        | Noop { src = Some s; _ } ->\n            (match T.view s with Const _ -> Some s | _ -> None)\n        | _ -> None);\n      (* MSTACK(CONST).INDEX → CONST *)\n      (fun n -> match T.view n with\n        | Index { ptr; _ } ->\n            (match T.view ptr with\n            | Mstack { srcs; _ } ->\n                (match srcs with\n                | s :: _ ->\n                    let base = T.base s in\n                    (match T.view base with\n                    | Const { value; dtype; _ } ->\n                        Some (T.const value dtype)\n                    | _ -> None)\n                | [] -> None)\n            | _ -> None)\n        | _ -> None);\n      (* Remove bufferize with cost 
function *)\n      (fun n -> match T.view n with\n        | Index { ptr; idxs; _ } ->\n            (match T.view ptr with\n            | Bufferize { src; ranges; opts; _ } ->\n                remove_bufferize ~src ~buf_ranges:ranges\n                  ~buf_shape:(shapes ptr) ~idx_ranges:idxs\n                  ~removable:opts.removable\n            | _ -> None)\n        | _ -> None);\n    ]) root in\n  (* 7. limit_bufs *)\n  let root =\n    let devices = T.compute_devices root in\n    T.graph_rewrite ~name:\"limit_bufs\"\n      (limit_bufs ctx devices) root in\n  (* 8. add buffers (BUFFERIZE → STORE + BUFFER) *)\n  let root =\n    let devices = T.compute_devices root in\n    let lunique_start =\n      List.fold_left (fun acc x ->\n        match T.view x with\n        | Lunique { id; _ } -> max acc (id + 1)\n        | _ -> acc) 0 (T.toposort root) in\n    let counter = ref lunique_start in\n    T.graph_rewrite ~name:\"add_buffers\" (T.first_match [\n      mop_through_index shapes;\n      mop_past_after shapes;\n      mop_past_end;\n      flatten_bufferize shapes;\n      late_buffer_view devices;\n      bufferize_to_store counter;\n      (* Move RESHAPEs through MSELECT/MSTACK *)\n      (fun n -> match T.view n with\n        | Mselect _ | Mstack _ ->\n            let children = T.children n in\n            if List.for_all (fun c ->\n              match T.view c with Reshape _ -> true | _ -> false)\n              children\n            then\n              let unwrapped = List.map (fun c ->\n                T.base (match T.view c with\n                  | Reshape { src; _ } -> src | _ -> c)) children in\n              let inner = T.replace n ~children:unwrapped () in\n              let shape = match shapes n with\n                | Some s -> s | None -> [] in\n              if shape <> [] then\n                Some (T.reshape ~src:inner ~shape:(shape_node shape))\n              else Some inner\n            else None\n        | _ -> None);\n      (* Remove RESHAPEs on 
CALL args *)\n      (fun n -> match T.view n with\n        | Call { callee; args; info; dtype } ->\n            let new_args = List.map (fun a ->\n              match T.view a with\n              | Reshape { src; _ } -> src | _ -> a) args in\n            if List.for_all2 (==) args new_args then None\n            else Some (T.call ~callee ~args:new_args ~info ~dtype)\n        | _ -> None);\n      (* Remove MOP on AFTER deps, flatten nested AFTERs *)\n      (fun n -> match T.view n with\n        | After { src; deps; _ } ->\n            let new_deps = List.map (fun d ->\n              match T.view d with\n              | Reshape { src; _ } | Expand { src; _ }\n              | Permute { src; _ } | Flip { src; _ }\n              | Pad { src; _ } | Shrink { src; _ } -> src\n              | _ -> d) deps in\n            let flat_deps = List.concat_map (fun d ->\n              match T.view d with\n              | After { deps; _ } -> deps\n              | _ -> [d]) new_deps in\n            (* concat_map can change the length, and for_all2 raises on\n               unequal lengths — guard with compare_lengths first. *)\n            if List.compare_lengths deps flat_deps = 0\n               && List.for_all2 (==) deps flat_deps then None\n            else Some (T.after ~src ~deps:flat_deps)\n        | _ -> None);\n      (* Remove invalid writes *)\n      (fun n -> match T.view n with\n        | After { src; deps; _ } ->\n            let real_deps = List.filter (fun d ->\n              match T.view d with\n              | Noop { src = None; _ } -> false\n              | End { value; _ } ->\n                  (match T.view value with\n                  | Noop { src = None; _ } -> false\n                  | _ -> true)\n              | _ -> true) deps in\n            if List.length real_deps < List.length deps then\n              (match real_deps with\n              | [] -> Some src\n              | _ -> Some (T.after ~src ~deps:real_deps))\n            else None\n        | _ -> None);\n    ]) root in\n  (* 9. split kernels *)\n  let shapes = T.compute_shapes root in\n  let root =\n    T.graph_rewrite ~name:\"split_kernels\" (split_store shapes) root in\n  (* 10. 
WAR deps *)\n  fix_war_deps root\n"
  },
  {
    "path": "packages/tolk/lib/schedule/rangeify.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2024 the tiny corp. MIT License (see LICENSE-tinygrad).\n  Copyright (c) 2026 The Raven authors. ISC License.\n\n  SPDX-License-Identifier: MIT AND ISC\n  ---------------------------------------------------------------------------*)\n\n(** Schedule pipeline: tensor graph to kernel graph.\n\n    Transforms a tensor-level SINK into a graph of CALL nodes wrapping\n    {!Tolk_ir.Kernel.t} ASTs ready for codegen.  The pipeline has ten\n    passes:\n\n    {ol\n    {- {e multi_pm} — multi-device rewriting.}\n    {- {e fold_moved_after} — openpilot AFTER folding (when enabled).}\n    {- {e earliest_rewrites} — syntactic sugar, movement ops,\n       call resolution, allreduce, split-reduce, size-0 folding.}\n    {- {e run_rangeify} — core range analysis (in {!Indexing}).}\n    {- {e apply_rangeify} — bottom-up rewrite with rangeify context.}\n    {- {e post-rangeify} — dead-axis cleanup, buffer folding, const\n       folding, cost-based buffer removal.}\n    {- {e limit_bufs} — insert BUFFERIZE when a kernel exceeds the\n       device buffer limit.}\n    {- {e add_buffers} — lower BUFFERIZE to STORE + BUFFER.}\n    {- {e split_kernels} — convert STORE/END subtrees into\n       CALL(kernel SINK).}\n    {- {e WAR deps} — write-after-read dependency fixup.}} *)\n\nval get_kernel_graph : Tolk_ir.Tensor.t -> Tolk_ir.Tensor.t\n(** [get_kernel_graph sink] is the kernel graph for [sink].\n\n    [sink] is a tensor-level SINK node.  The returned graph contains\n    AFTER nodes whose deps are CALL nodes wrapping\n    {!Tolk_ir.Kernel.t} ASTs, connected by WAR dependency edges. *)\n"
  },
  {
    "path": "packages/tolk/test/golden/codegen/clang_dot_product.expected",
    "content": "void dot_product(float* restrict data0, float* restrict data1, float* restrict data2) {\n  float acc0[1];\n  *(acc0+0) = 0.0f;\n  for (int Ridx0 = 0; Ridx0 < 128; Ridx0++) {\n    float val0 = (*(data0+Ridx0));\n    float val1 = (*(data1+Ridx0));\n    *(acc0+0) = ((*(acc0+0))+(val0*val1));\n  }\n  *(data2+0) = (*(acc0+0));\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/codegen/clang_elementwise_add.expected",
    "content": "void elementwise_add(float* restrict data0, float* restrict data1, float* restrict data2, const int core_id) {\n  float val0 = (*(data0+core_id));\n  float val1 = (*(data1+core_id));\n  *(data2+core_id) = (val0+val1);\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/codegen/clang_elementwise_cast_f16.expected",
    "content": "void elementwise_cast_f16(__fp16* restrict data0, float* restrict data1, float* restrict data2, const int core_id) {\n  __fp16 val0 = (*(data0+core_id));\n  float val1 = (*(data1+core_id));\n  *(data2+core_id) = (((float)(val0))+val1);\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/codegen/clang_elementwise_int32.expected",
    "content": "void elementwise_int32(int* restrict data0, int* restrict data1, int* restrict data2, const int core_id) {\n  int val0 = (*(data0+core_id));\n  int val1 = (*(data1+core_id));\n  *(data2+core_id) = (val0+val1);\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/codegen/clang_elementwise_sqrt.expected",
    "content": "void elementwise_sqrt(float* restrict data0, float* restrict data1, const int core_id) {\n  float val0 = (*(data0+core_id));\n  *(data1+core_id) = __builtin_sqrtf(val0);\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/codegen/clang_elementwise_where.expected",
    "content": "void elementwise_where(float* restrict data0, float* restrict data1, const int core_id) {\n  float val0 = (*(data0+core_id));\n  float alu0 = ((0.0f<val0)?val0:0.0f);\n  *(data1+core_id) = alu0;\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/codegen/clang_gated_store.expected",
    "content": "void gated_store(float* restrict data0, float* restrict data1, float* restrict data2, const int core_id) {\n  float val0 = (*(data0+core_id));\n  float val1 = (*(data1+core_id));\n  _Bool alu0 = (core_id<200);\n  if (alu0) {\n    *(data2+core_id) = (val0+val1);\n  }\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/codegen/clang_max_reduce.expected",
    "content": "void max_reduce(float* restrict data0, float* restrict data1) {\n  float acc0[1];\n  *(acc0+0) = ((float)(-__builtin_inff()));\n  for (int Ridx0 = 0; Ridx0 < 64; Ridx0++) {\n    float val0 = (*(data0+Ridx0));\n    float alu1 = (((*(acc0+0))<val0)?val0:(*(acc0+0)));\n    *(acc0+0) = alu1;\n  }\n  *(data1+0) = (*(acc0+0));\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/codegen/clang_multi_output.expected",
    "content": "void multi_output(float* restrict data0, float* restrict data1, float* restrict data2, const int core_id) {\n  float val0 = (*(data0+core_id));\n  *(data1+core_id) = (val0+1.0f);\n  *(data2+core_id) = (val0*2.0f);\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/codegen/clang_no_optimize.expected",
    "content": "void no_optimize(float* restrict data0, float* restrict data1, float* restrict data2, const int core_id) {\n  float val0 = (*(data0+core_id));\n  float val1 = (*(data1+core_id));\n  *(data2+core_id) = (val0+val1);\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/codegen/clang_parallel_reduce.expected",
    "content": "void parallel_reduce(float* restrict data0, float* restrict data1, float* restrict data2) {\n  float acc0[1];\n  float acc1[1];\n  *(acc0+0) = 0.0f;\n  *(acc1+0) = 0.0f;\n  for (int Ridx0 = 0; Ridx0 < 128; Ridx0++) {\n    float val0 = (*(data0+Ridx0));\n    *(acc0+0) = ((*(acc0+0))+val0);\n    *(acc1+0) = ((*(acc1+0))+(val0*val0));\n  }\n  *(data1+0) = (*(acc0+0));\n  *(data2+0) = (*(acc1+0));\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/codegen/clang_reduce_rows.expected",
    "content": "void reduce_rows(float* restrict data0, float* restrict data1, const int core_id) {\n  float acc0[1];\n  *(acc0+0) = 0.0f;\n  for (int Ridx1 = 0; Ridx1 < 32; Ridx1++) {\n    float val0 = (*(data0+((core_id<<5)+Ridx1)));\n    *(acc0+0) = ((*(acc0+0))+val0);\n  }\n  *(data1+core_id) = (*(acc0+0));\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/codegen/clang_sum_reduce.expected",
    "content": "void sum_reduce(float* restrict data0, float* restrict data1) {\n  float acc0[1];\n  *(acc0+0) = 0.0f;\n  for (int Ridx0 = 0; Ridx0 < 256; Ridx0++) {\n    float val0 = (*(data0+Ridx0));\n    *(acc0+0) = ((*(acc0+0))+val0);\n  }\n  *(data1+0) = (*(acc0+0));\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/codegen/cuda_dot_product.expected",
    "content": "#define INFINITY (__int_as_float(0x7f800000))\n#define NAN (__int_as_float(0x7fffffff))\ntemplate <class T, class F> __device__ __forceinline__ T tg_bitcast(F v) { union U { F f; T t; }; U u; u.f = v; return u.t; }\nextern \"C\" __global__ void __launch_bounds__(1) dot_product(float* data0, float* data1, float* data2) {\n  float acc0[1];\n  *(acc0+0) = 0.0f;\n  for (int Ridx0 = 0; Ridx0 < 128; Ridx0++) {\n    float val0 = (*(data0+Ridx0));\n    float val1 = (*(data1+Ridx0));\n    *(acc0+0) = ((*(acc0+0))+(val0*val1));\n  }\n  *(data2+0) = (*(acc0+0));\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/codegen/cuda_elementwise_2d.expected",
    "content": "#define INFINITY (__int_as_float(0x7f800000))\n#define NAN (__int_as_float(0x7fffffff))\ntemplate <class T, class F> __device__ __forceinline__ T tg_bitcast(F v) { union U { F f; T t; }; U u; u.f = v; return u.t; }\nextern \"C\" __global__ void __launch_bounds__(1) elementwise_2d(float* data0, float* data1, float* data2) {\n  int gidx0 = blockIdx.x; /* 128 */\n  float val0 = (*(data0+gidx0));\n  float val1 = (*(data1+gidx0));\n  *(data2+gidx0) = (val0+val1);\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/codegen/cuda_elementwise_add.expected",
    "content": "#define INFINITY (__int_as_float(0x7f800000))\n#define NAN (__int_as_float(0x7fffffff))\ntemplate <class T, class F> __device__ __forceinline__ T tg_bitcast(F v) { union U { F f; T t; }; U u; u.f = v; return u.t; }\nextern \"C\" __global__ void __launch_bounds__(1) elementwise_add(float* data0, float* data1, float* data2) {\n  int gidx0 = blockIdx.x; /* 256 */\n  float val0 = (*(data0+gidx0));\n  float val1 = (*(data1+gidx0));\n  *(data2+gidx0) = (val0+val1);\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/codegen/cuda_elementwise_cast_f16.expected",
    "content": "#define INFINITY (__int_as_float(0x7f800000))\n#define NAN (__int_as_float(0x7fffffff))\ntemplate <class T, class F> __device__ __forceinline__ T tg_bitcast(F v) { union U { F f; T t; }; U u; u.f = v; return u.t; }\n#include <cuda_fp16.h>\nextern \"C\" __global__ void __launch_bounds__(1) elementwise_cast_f16(half* data0, float* data1, float* data2) {\n  int gidx0 = blockIdx.x; /* 256 */\n  half val0 = (*(data0+gidx0));\n  float val1 = (*(data1+gidx0));\n  *(data2+gidx0) = (((float)(val0))+val1);\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/codegen/cuda_elementwise_int32.expected",
    "content": "#define INFINITY (__int_as_float(0x7f800000))\n#define NAN (__int_as_float(0x7fffffff))\ntemplate <class T, class F> __device__ __forceinline__ T tg_bitcast(F v) { union U { F f; T t; }; U u; u.f = v; return u.t; }\nextern \"C\" __global__ void __launch_bounds__(1) elementwise_int32(int* data0, int* data1, int* data2) {\n  int gidx0 = blockIdx.x; /* 256 */\n  int val0 = (*(data0+gidx0));\n  int val1 = (*(data1+gidx0));\n  *(data2+gidx0) = (val0+val1);\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/codegen/cuda_elementwise_sqrt.expected",
    "content": "#define INFINITY (__int_as_float(0x7f800000))\n#define NAN (__int_as_float(0x7fffffff))\ntemplate <class T, class F> __device__ __forceinline__ T tg_bitcast(F v) { union U { F f; T t; }; U u; u.f = v; return u.t; }\nextern \"C\" __global__ void __launch_bounds__(1) elementwise_sqrt(float* data0, float* data1) {\n  int gidx0 = blockIdx.x; /* 256 */\n  float val0 = (*(data0+gidx0));\n  *(data1+gidx0) = sqrt(val0);\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/codegen/cuda_elementwise_where.expected",
    "content": "#define INFINITY (__int_as_float(0x7f800000))\n#define NAN (__int_as_float(0x7fffffff))\ntemplate <class T, class F> __device__ __forceinline__ T tg_bitcast(F v) { union U { F f; T t; }; U u; u.f = v; return u.t; }\nextern \"C\" __global__ void __launch_bounds__(1) elementwise_where(float* data0, float* data1) {\n  int gidx0 = blockIdx.x; /* 256 */\n  float val0 = (*(data0+gidx0));\n  float alu0 = ((0.0f<val0)?val0:0.0f);\n  *(data1+gidx0) = alu0;\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/codegen/cuda_gated_store.expected",
    "content": "#define INFINITY (__int_as_float(0x7f800000))\n#define NAN (__int_as_float(0x7fffffff))\ntemplate <class T, class F> __device__ __forceinline__ T tg_bitcast(F v) { union U { F f; T t; }; U u; u.f = v; return u.t; }\nextern \"C\" __global__ void __launch_bounds__(1) gated_store(float* data0, float* data1, float* data2) {\n  int gidx0 = blockIdx.x; /* 256 */\n  float val0 = (*(data0+gidx0));\n  float val1 = (*(data1+gidx0));\n  bool alu0 = (gidx0<200);\n  if (alu0) {\n    *(data2+gidx0) = (val0+val1);\n  }\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/codegen/cuda_matmul_small.expected",
    "content": "#define INFINITY (__int_as_float(0x7f800000))\n#define NAN (__int_as_float(0x7fffffff))\ntemplate <class T, class F> __device__ __forceinline__ T tg_bitcast(F v) { union U { F f; T t; }; U u; u.f = v; return u.t; }\nextern \"C\" __global__ void __launch_bounds__(1) matmul_small(float* data0, float* data1, float* data2) {\n  float acc0[1];\n  int gidx0 = blockIdx.x; /* 4 */\n  int gidx1 = blockIdx.y; /* 4 */\n  int alu0 = (gidx1<<2);\n  *(acc0+0) = 0.0f;\n  for (int Ridx2 = 0; Ridx2 < 4; Ridx2++) {\n    float val0 = (*(data0+(alu0+Ridx2)));\n    float val1 = (*(data1+(gidx0+(Ridx2<<2))));\n    *(acc0+0) = ((*(acc0+0))+(val0*val1));\n  }\n  *(data2+(gidx0+alu0)) = (*(acc0+0));\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/codegen/cuda_max_reduce.expected",
    "content": "#define INFINITY (__int_as_float(0x7f800000))\n#define NAN (__int_as_float(0x7fffffff))\ntemplate <class T, class F> __device__ __forceinline__ T tg_bitcast(F v) { union U { F f; T t; }; U u; u.f = v; return u.t; }\nextern \"C\" __global__ void __launch_bounds__(1) max_reduce(float* data0, float* data1) {\n  float acc0[1];\n  *(acc0+0) = ((float)(-INFINITY));\n  for (int Ridx0 = 0; Ridx0 < 64; Ridx0++) {\n    float val0 = (*(data0+Ridx0));\n    float alu1 = (((*(acc0+0))<val0)?val0:(*(acc0+0)));\n    *(acc0+0) = alu1;\n  }\n  *(data1+0) = (*(acc0+0));\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/codegen/cuda_multi_output.expected",
    "content": "#define INFINITY (__int_as_float(0x7f800000))\n#define NAN (__int_as_float(0x7fffffff))\ntemplate <class T, class F> __device__ __forceinline__ T tg_bitcast(F v) { union U { F f; T t; }; U u; u.f = v; return u.t; }\nextern \"C\" __global__ void __launch_bounds__(1) multi_output(float* data0, float* data1, float* data2) {\n  int gidx0 = blockIdx.x; /* 256 */\n  float val0 = (*(data0+gidx0));\n  *(data1+gidx0) = (val0+1.0f);\n  *(data2+gidx0) = (val0*2.0f);\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/codegen/cuda_no_optimize.expected",
    "content": "#define INFINITY (__int_as_float(0x7f800000))\n#define NAN (__int_as_float(0x7fffffff))\ntemplate <class T, class F> __device__ __forceinline__ T tg_bitcast(F v) { union U { F f; T t; }; U u; u.f = v; return u.t; }\nextern \"C\" __global__ void __launch_bounds__(1) no_optimize(float* data0, float* data1, float* data2) {\n  int gidx0 = blockIdx.x; /* 256 */\n  float val0 = (*(data0+gidx0));\n  float val1 = (*(data1+gidx0));\n  *(data2+gidx0) = (val0+val1);\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/codegen/cuda_parallel_reduce.expected",
    "content": "#define INFINITY (__int_as_float(0x7f800000))\n#define NAN (__int_as_float(0x7fffffff))\ntemplate <class T, class F> __device__ __forceinline__ T tg_bitcast(F v) { union U { F f; T t; }; U u; u.f = v; return u.t; }\nextern \"C\" __global__ void __launch_bounds__(1) parallel_reduce(float* data0, float* data1, float* data2) {\n  float acc0[1];\n  float acc1[1];\n  *(acc0+0) = 0.0f;\n  *(acc1+0) = 0.0f;\n  for (int Ridx0 = 0; Ridx0 < 128; Ridx0++) {\n    float val0 = (*(data0+Ridx0));\n    *(acc0+0) = ((*(acc0+0))+val0);\n    *(acc1+0) = ((*(acc1+0))+(val0*val0));\n  }\n  *(data1+0) = (*(acc0+0));\n  *(data2+0) = (*(acc1+0));\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/codegen/cuda_reduce_rows.expected",
    "content": "#define INFINITY (__int_as_float(0x7f800000))\n#define NAN (__int_as_float(0x7fffffff))\ntemplate <class T, class F> __device__ __forceinline__ T tg_bitcast(F v) { union U { F f; T t; }; U u; u.f = v; return u.t; }\nextern \"C\" __global__ void __launch_bounds__(1) reduce_rows(float* data0, float* data1) {\n  float acc0[1];\n  int gidx0 = blockIdx.x; /* 8 */\n  *(acc0+0) = 0.0f;\n  for (int Ridx1 = 0; Ridx1 < 32; Ridx1++) {\n    float val0 = (*(data0+((gidx0<<5)+Ridx1)));\n    *(acc0+0) = ((*(acc0+0))+val0);\n  }\n  *(data1+gidx0) = (*(acc0+0));\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/codegen/cuda_sum_reduce.expected",
    "content": "#define INFINITY (__int_as_float(0x7f800000))\n#define NAN (__int_as_float(0x7fffffff))\ntemplate <class T, class F> __device__ __forceinline__ T tg_bitcast(F v) { union U { F f; T t; }; U u; u.f = v; return u.t; }\nextern \"C\" __global__ void __launch_bounds__(1) sum_reduce(float* data0, float* data1) {\n  float acc0[1];\n  *(acc0+0) = 0.0f;\n  for (int Ridx0 = 0; Ridx0 < 256; Ridx0++) {\n    float val0 = (*(data0+Ridx0));\n    *(acc0+0) = ((*(acc0+0))+val0);\n  }\n  *(data1+0) = (*(acc0+0));\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/codegen/dune",
    "content": "(executable\n (name generate_actual)\n (libraries tolk tolk.ir))\n\n(rule\n (package tolk)\n (targets\n  clang_dot_product.actual\n  clang_elementwise_add.actual\n  clang_elementwise_cast_f16.actual\n  clang_elementwise_int32.actual\n  clang_elementwise_sqrt.actual\n  clang_elementwise_where.actual\n  clang_gated_store.actual\n  clang_multi_output.actual\n  clang_no_optimize.actual\n  clang_parallel_reduce.actual\n  clang_reduce_rows.actual\n  clang_sum_reduce.actual\n  cuda_dot_product.actual\n  cuda_elementwise_2d.actual\n  cuda_elementwise_add.actual\n  cuda_elementwise_cast_f16.actual\n  cuda_elementwise_int32.actual\n  cuda_elementwise_sqrt.actual\n  cuda_elementwise_where.actual\n  cuda_gated_store.actual\n  cuda_matmul_small.actual\n  cuda_multi_output.actual\n  cuda_no_optimize.actual\n  cuda_parallel_reduce.actual\n  cuda_reduce_rows.actual\n  cuda_sum_reduce.actual\n  metal_dot_product.actual\n  metal_elementwise_2d.actual\n  metal_elementwise_add.actual\n  metal_elementwise_cast_f16.actual\n  metal_elementwise_int32.actual\n  metal_elementwise_sqrt.actual\n  metal_elementwise_where.actual\n  metal_gated_store.actual\n  metal_matmul_small.actual\n  metal_multi_output.actual\n  metal_no_optimize.actual\n  metal_parallel_reduce.actual\n  metal_reduce_rows.actual\n  metal_sum_reduce.actual\n  opencl_dot_product.actual\n  opencl_elementwise_2d.actual\n  opencl_elementwise_add.actual\n  opencl_elementwise_cast_f16.actual\n  opencl_elementwise_int32.actual\n  opencl_elementwise_sqrt.actual\n  opencl_elementwise_where.actual\n  opencl_gated_store.actual\n  opencl_matmul_small.actual\n  opencl_multi_output.actual\n  opencl_no_optimize.actual\n  opencl_parallel_reduce.actual\n  opencl_reduce_rows.actual\n  opencl_sum_reduce.actual)\n (action\n  (run ./generate_actual.exe .)))\n\n; max_reduce .expected files are kept as reference for when decomposition\n; steps (18-21) are ported. 
No .actual is generated yet so no diff rules.\n\n(rule\n (alias runtest)\n (package tolk)\n (action\n  (progn\n   (diff clang_dot_product.expected clang_dot_product.actual)\n   (diff clang_elementwise_add.expected clang_elementwise_add.actual)\n   (diff\n    clang_elementwise_cast_f16.expected\n    clang_elementwise_cast_f16.actual)\n   (diff clang_elementwise_int32.expected clang_elementwise_int32.actual)\n   (diff clang_elementwise_sqrt.expected clang_elementwise_sqrt.actual)\n   (diff clang_elementwise_where.expected clang_elementwise_where.actual)\n   (diff clang_gated_store.expected clang_gated_store.actual)\n   (diff clang_multi_output.expected clang_multi_output.actual)\n   (diff clang_no_optimize.expected clang_no_optimize.actual)\n   (diff clang_parallel_reduce.expected clang_parallel_reduce.actual)\n   (diff clang_reduce_rows.expected clang_reduce_rows.actual)\n   (diff clang_sum_reduce.expected clang_sum_reduce.actual)\n   (diff cuda_dot_product.expected cuda_dot_product.actual)\n   (diff cuda_elementwise_2d.expected cuda_elementwise_2d.actual)\n   (diff cuda_elementwise_add.expected cuda_elementwise_add.actual)\n   (diff cuda_elementwise_cast_f16.expected cuda_elementwise_cast_f16.actual)\n   (diff cuda_elementwise_int32.expected cuda_elementwise_int32.actual)\n   (diff cuda_elementwise_sqrt.expected cuda_elementwise_sqrt.actual)\n   (diff cuda_elementwise_where.expected cuda_elementwise_where.actual)\n   (diff cuda_gated_store.expected cuda_gated_store.actual)\n   (diff cuda_matmul_small.expected cuda_matmul_small.actual)\n   (diff cuda_multi_output.expected cuda_multi_output.actual)\n   (diff cuda_no_optimize.expected cuda_no_optimize.actual)\n   (diff cuda_parallel_reduce.expected cuda_parallel_reduce.actual)\n   (diff cuda_reduce_rows.expected cuda_reduce_rows.actual)\n   (diff cuda_sum_reduce.expected cuda_sum_reduce.actual)\n   (diff metal_dot_product.expected metal_dot_product.actual)\n   (diff metal_elementwise_2d.expected 
metal_elementwise_2d.actual)\n   (diff metal_elementwise_add.expected metal_elementwise_add.actual)\n   (diff\n    metal_elementwise_cast_f16.expected\n    metal_elementwise_cast_f16.actual)\n   (diff metal_elementwise_int32.expected metal_elementwise_int32.actual)\n   (diff metal_elementwise_sqrt.expected metal_elementwise_sqrt.actual)\n   (diff metal_elementwise_where.expected metal_elementwise_where.actual)\n   (diff metal_gated_store.expected metal_gated_store.actual)\n   (diff metal_matmul_small.expected metal_matmul_small.actual)\n   (diff metal_multi_output.expected metal_multi_output.actual)\n   (diff metal_no_optimize.expected metal_no_optimize.actual)\n   (diff metal_parallel_reduce.expected metal_parallel_reduce.actual)\n   (diff metal_reduce_rows.expected metal_reduce_rows.actual)\n   (diff metal_sum_reduce.expected metal_sum_reduce.actual)\n   (diff opencl_dot_product.expected opencl_dot_product.actual)\n   (diff opencl_elementwise_2d.expected opencl_elementwise_2d.actual)\n   (diff opencl_elementwise_add.expected opencl_elementwise_add.actual)\n   (diff\n    opencl_elementwise_cast_f16.expected\n    opencl_elementwise_cast_f16.actual)\n   (diff opencl_elementwise_int32.expected opencl_elementwise_int32.actual)\n   (diff opencl_elementwise_sqrt.expected opencl_elementwise_sqrt.actual)\n   (diff opencl_elementwise_where.expected opencl_elementwise_where.actual)\n   (diff opencl_gated_store.expected opencl_gated_store.actual)\n   (diff opencl_matmul_small.expected opencl_matmul_small.actual)\n   (diff opencl_multi_output.expected opencl_multi_output.actual)\n   (diff opencl_no_optimize.expected opencl_no_optimize.actual)\n   (diff opencl_parallel_reduce.expected opencl_parallel_reduce.actual)\n   (diff opencl_reduce_rows.expected opencl_reduce_rows.actual)\n   (diff opencl_sum_reduce.expected opencl_sum_reduce.actual))))\n"
  },
  {
    "path": "packages/tolk/test/golden/codegen/generate_actual.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Generates .actual files for codegen pipeline golden tests. Each file\n   contains tolk's rendered output for a specific backend + test case after\n   running the full codegen pipeline (Pipeline.full_rewrite_to_sink ->\n   Linearizer.linearize -> Renderer.render). Dune diff rules compare .actual\n   against .expected. *)\n\nopen Tolk\nopen Tolk_ir\nmodule K = Kernel\n\nlet global_fptr = Dtype.Ptr.create Dtype.Val.float32 ~addrspace:Global ~size:(-1)\nlet idx n = K.const (Const.int Dtype.Val.index n)\n\nlet kernel_info ?(axis_kinds = []) name =\n  {\n    K.name;\n    axis_kinds;\n    dont_use_locals = false;\n    applied_opts = [];\n    opts_to_apply = Some [];\n    estimates = None;\n  }\n\n(* Extract the kernel name from a pipeline-processed Sink. *)\nlet name_of_sink sink =\n  match K.view sink with\n  | K.Sink { kernel_info = Some ki; _ } -> ki.name\n  | _ -> \"kernel\"\n\n(* Full pipeline chain: Kernel.t -> source string. 
*)\nlet pipeline_to_source ?(optimize = true) ren sink =\n  let processed = Codegen.full_rewrite_to_sink ~optimize ren sink in\n  let name = name_of_sink processed in\n  let program = Linearizer.linearize processed in\n  String.trim (Renderer.render ren ~name program)\n\n(* ── Kernel AST builders ── *)\n\nlet make_elementwise_add () =\n  let p0 = K.param ~idx:0 ~dtype:global_fptr in\n  let p1 = K.param ~idx:1 ~dtype:global_fptr in\n  let p2 = K.param ~idx:2 ~dtype:global_fptr in\n  let r0 = K.range ~size:(idx 256) ~axis:0 ~kind:Axis_kind.Global () in\n  let ld_a = K.load ~src:(K.index ~ptr:p0 ~idxs:[ r0 ] ()) () in\n  let ld_b = K.load ~src:(K.index ~ptr:p1 ~idxs:[ r0 ] ()) () in\n  let add = K.binary ~op:`Add ~lhs:ld_a ~rhs:ld_b in\n  let st = K.store ~dst:(K.index ~ptr:p2 ~idxs:[ r0 ] ()) ~value:add ~ranges:[] in\n  let e = K.end_ ~value:st ~ranges:[ r0 ] () in\n  K.sink ~kernel_info:(kernel_info ~axis_kinds:[ Axis_kind.Global ] \"elementwise_add\") [ e ]\n\nlet make_sum_reduce () =\n  let p0 = K.param ~idx:0 ~dtype:global_fptr in\n  let p1 = K.param ~idx:1 ~dtype:global_fptr in\n  let r0 = K.range ~size:(idx 256) ~axis:0 ~kind:Axis_kind.Reduce () in\n  let ld = K.load ~src:(K.index ~ptr:p0 ~idxs:[ r0 ] ()) () in\n  let red = K.reduce ~op:`Add ~src:ld ~ranges:[ r0 ] ~dtype:Dtype.Val.float32 in\n  let st =\n    K.store ~dst:(K.index ~ptr:p1 ~idxs:[ idx 0 ] ()) ~value:red ~ranges:[]\n  in\n  K.sink ~kernel_info:(kernel_info ~axis_kinds:[ Axis_kind.Reduce ] \"sum_reduce\") [ st ]\n\n(* max_reduce is excluded: requires Max->Where decomposition (pipeline Steps\n   18-21) which is not yet ported. The expected file is generated for reference\n   but has no matching .actual until decompositions land. 
*)\n\nlet make_dot_product () =\n  let p0 = K.param ~idx:0 ~dtype:global_fptr in\n  let p1 = K.param ~idx:1 ~dtype:global_fptr in\n  let p2 = K.param ~idx:2 ~dtype:global_fptr in\n  let r0 = K.range ~size:(idx 128) ~axis:0 ~kind:Axis_kind.Reduce () in\n  let ld_a = K.load ~src:(K.index ~ptr:p0 ~idxs:[ r0 ] ()) () in\n  let ld_b = K.load ~src:(K.index ~ptr:p1 ~idxs:[ r0 ] ()) () in\n  let mul = K.binary ~op:`Mul ~lhs:ld_a ~rhs:ld_b in\n  let red = K.reduce ~op:`Add ~src:mul ~ranges:[ r0 ] ~dtype:Dtype.Val.float32 in\n  let st =\n    K.store ~dst:(K.index ~ptr:p2 ~idxs:[ idx 0 ] ()) ~value:red ~ranges:[]\n  in\n  K.sink ~kernel_info:(kernel_info ~axis_kinds:[ Axis_kind.Reduce ] \"dot_product\") [ st ]\n\nlet make_matmul_small () =\n  let m, n, k = (4, 4, 4) in\n  let pA = K.param ~idx:0 ~dtype:global_fptr in\n  let pB = K.param ~idx:1 ~dtype:global_fptr in\n  let pC = K.param ~idx:2 ~dtype:global_fptr in\n  let ri = K.range ~size:(idx m) ~axis:0 ~kind:Axis_kind.Global () in\n  let rj = K.range ~size:(idx n) ~axis:1 ~kind:Axis_kind.Global () in\n  let rk = K.range ~size:(idx k) ~axis:2 ~kind:Axis_kind.Reduce () in\n  let open K.O in\n  let a_idx = ri * int_ k + rk in\n  let b_idx = rk * int_ n + rj in\n  let c_idx = ri * int_ n + rj in\n  let ld_a = K.load ~src:(K.index ~ptr:pA ~idxs:[ a_idx ] ()) () in\n  let ld_b = K.load ~src:(K.index ~ptr:pB ~idxs:[ b_idx ] ()) () in\n  let mul = K.binary ~op:`Mul ~lhs:ld_a ~rhs:ld_b in\n  let red = K.reduce ~op:`Add ~src:mul ~ranges:[ rk ] ~dtype:Dtype.Val.float32 in\n  let st =\n    K.store ~dst:(K.index ~ptr:pC ~idxs:[ c_idx ] ()) ~value:red ~ranges:[]\n  in\n  let e = K.end_ ~value:st ~ranges:[ ri; rj ] () in\n  K.sink\n    ~kernel_info:\n      (kernel_info\n         ~axis_kinds:[ Axis_kind.Global; Axis_kind.Global; Axis_kind.Reduce ]\n         \"matmul_small\")\n    [ e ]\n\nlet make_elementwise_2d () =\n  let rows, cols = (8, 16) in\n  let p0 = K.param ~idx:0 ~dtype:global_fptr in\n  let p1 = K.param ~idx:1 
~dtype:global_fptr in\n  let p2 = K.param ~idx:2 ~dtype:global_fptr in\n  let ri = K.range ~size:(idx rows) ~axis:0 ~kind:Axis_kind.Global () in\n  let rj = K.range ~size:(idx cols) ~axis:1 ~kind:Axis_kind.Global () in\n  let open K.O in\n  let flat = ri * int_ cols + rj in\n  let ld_a = K.load ~src:(K.index ~ptr:p0 ~idxs:[ flat ] ()) () in\n  let ld_b = K.load ~src:(K.index ~ptr:p1 ~idxs:[ flat ] ()) () in\n  let add = K.binary ~op:`Add ~lhs:ld_a ~rhs:ld_b in\n  let st =\n    K.store ~dst:(K.index ~ptr:p2 ~idxs:[ flat ] ()) ~value:add ~ranges:[]\n  in\n  let e = K.end_ ~value:st ~ranges:[ ri; rj ] () in\n  K.sink\n    ~kernel_info:\n      (kernel_info\n         ~axis_kinds:[ Axis_kind.Global; Axis_kind.Global ]\n         \"elementwise_2d\")\n    [ e ]\n\nlet make_reduce_rows () =\n  let rows, cols = (8, 32) in\n  let p0 = K.param ~idx:0 ~dtype:global_fptr in\n  let p1 = K.param ~idx:1 ~dtype:global_fptr in\n  let ri = K.range ~size:(idx rows) ~axis:0 ~kind:Axis_kind.Global () in\n  let rj = K.range ~size:(idx cols) ~axis:1 ~kind:Axis_kind.Reduce () in\n  let open K.O in\n  let flat = ri * int_ cols + rj in\n  let ld = K.load ~src:(K.index ~ptr:p0 ~idxs:[ flat ] ()) () in\n  let red = K.reduce ~op:`Add ~src:ld ~ranges:[ rj ] ~dtype:Dtype.Val.float32 in\n  let st =\n    K.store ~dst:(K.index ~ptr:p1 ~idxs:[ ri ] ()) ~value:red ~ranges:[]\n  in\n  let e = K.end_ ~value:st ~ranges:[ ri ] () in\n  K.sink\n    ~kernel_info:\n      (kernel_info\n         ~axis_kinds:[ Axis_kind.Global; Axis_kind.Reduce ]\n         \"reduce_rows\")\n    [ e ]\n\nlet make_no_optimize () =\n  let p0 = K.param ~idx:0 ~dtype:global_fptr in\n  let p1 = K.param ~idx:1 ~dtype:global_fptr in\n  let p2 = K.param ~idx:2 ~dtype:global_fptr in\n  let r0 = K.range ~size:(idx 256) ~axis:0 ~kind:Axis_kind.Global () in\n  let ld_a = K.load ~src:(K.index ~ptr:p0 ~idxs:[ r0 ] ()) () in\n  let ld_b = K.load ~src:(K.index ~ptr:p1 ~idxs:[ r0 ] ()) () in\n  let add = K.binary ~op:`Add ~lhs:ld_a ~rhs:ld_b in\n  
let st = K.store ~dst:(K.index ~ptr:p2 ~idxs:[ r0 ] ()) ~value:add ~ranges:[] in\n  let e = K.end_ ~value:st ~ranges:[ r0 ] () in\n  K.sink ~kernel_info:(kernel_info ~axis_kinds:[ Axis_kind.Global ] \"no_optimize\") [ e ]\n\nlet make_multi_output () =\n  let p0 = K.param ~idx:0 ~dtype:global_fptr in\n  let p1 = K.param ~idx:1 ~dtype:global_fptr in\n  let p2 = K.param ~idx:2 ~dtype:global_fptr in\n  let r0 = K.range ~size:(idx 256) ~axis:0 ~kind:Axis_kind.Global () in\n  let ld_a = K.load ~src:(K.index ~ptr:p0 ~idxs:[ r0 ] ()) () in\n  let one = K.const (Const.float Dtype.Val.float32 1.0) in\n  let two = K.const (Const.float Dtype.Val.float32 2.0) in\n  let st1 =\n    K.store\n      ~dst:(K.index ~ptr:p1 ~idxs:[ r0 ] ())\n      ~value:(K.binary ~op:`Add ~lhs:ld_a ~rhs:one)\n      ~ranges:[]\n  in\n  let e1 = K.end_ ~value:st1 ~ranges:[ r0 ] () in\n  let st2 =\n    K.store\n      ~dst:(K.index ~ptr:p2 ~idxs:[ r0 ] ())\n      ~value:(K.binary ~op:`Mul ~lhs:ld_a ~rhs:two)\n      ~ranges:[]\n  in\n  let e2 = K.end_ ~value:st2 ~ranges:[ r0 ] () in\n  K.sink ~kernel_info:(kernel_info ~axis_kinds:[ Axis_kind.Global ] \"multi_output\") [ e1; e2 ]\n\nlet make_gated_store () =\n  let p0 = K.param ~idx:0 ~dtype:global_fptr in\n  let p1 = K.param ~idx:1 ~dtype:global_fptr in\n  let p2 = K.param ~idx:2 ~dtype:global_fptr in\n  let r0 = K.range ~size:(idx 256) ~axis:0 ~kind:Axis_kind.Global () in\n  let ld_a = K.load ~src:(K.index ~ptr:p0 ~idxs:[ r0 ] ()) () in\n  let ld_b = K.load ~src:(K.index ~ptr:p1 ~idxs:[ r0 ] ()) () in\n  let add = K.binary ~op:`Add ~lhs:ld_a ~rhs:ld_b in\n  let gate = K.binary ~op:`Cmplt ~lhs:r0 ~rhs:(idx 200) in\n  let st =\n    K.store\n      ~dst:(K.index ~ptr:p2 ~idxs:[ r0 ] ~gate ())\n      ~value:add ~ranges:[]\n  in\n  let e = K.end_ ~value:st ~ranges:[ r0 ] () in\n  K.sink ~kernel_info:(kernel_info ~axis_kinds:[ Axis_kind.Global ] \"gated_store\") [ e ]\n\nlet make_elementwise_where () =\n  let p0 = K.param ~idx:0 ~dtype:global_fptr in\n  let p1 = 
K.param ~idx:1 ~dtype:global_fptr in\n  let r0 = K.range ~size:(idx 256) ~axis:0 ~kind:Axis_kind.Global () in\n  let ld = K.load ~src:(K.index ~ptr:p0 ~idxs:[ r0 ] ()) () in\n  let zero = K.const (Const.float Dtype.Val.float32 0.0) in\n  let cond = K.binary ~op:`Cmplt ~lhs:zero ~rhs:ld in\n  let w = K.ternary ~op:`Where ~a:cond ~b:ld ~c:zero in\n  let st = K.store ~dst:(K.index ~ptr:p1 ~idxs:[ r0 ] ()) ~value:w ~ranges:[] in\n  let e = K.end_ ~value:st ~ranges:[ r0 ] () in\n  K.sink ~kernel_info:(kernel_info ~axis_kinds:[ Axis_kind.Global ] \"elementwise_where\") [ e ]\n\nlet make_elementwise_cast_f16 () =\n  (* c[i] = (float32)a_f16[i] + b[i]. Param order: 0=f16, 1=f32, 2=out_f32.\n     Build the Add as cast(ld_f16) + ld_f32 to match the reference load ordering. *)\n  let f16_ptr = Dtype.Ptr.create Dtype.Val.float16 ~addrspace:Global ~size:(-1) in\n  let p0 = K.param ~idx:0 ~dtype:f16_ptr in\n  let p1 = K.param ~idx:1 ~dtype:global_fptr in\n  let p2 = K.param ~idx:2 ~dtype:global_fptr in\n  let r0 = K.range ~size:(idx 256) ~axis:0 ~kind:Axis_kind.Global () in\n  let ld_a = K.load ~src:(K.index ~ptr:p0 ~idxs:[ r0 ] ()) () in\n  let cast_a = K.cast ~src:ld_a ~dtype:Dtype.float32 in\n  let ld_b = K.load ~src:(K.index ~ptr:p1 ~idxs:[ r0 ] ()) () in\n  let add = K.binary ~op:`Add ~lhs:cast_a ~rhs:ld_b in\n  let st = K.store ~dst:(K.index ~ptr:p2 ~idxs:[ r0 ] ()) ~value:add ~ranges:[] in\n  let e = K.end_ ~value:st ~ranges:[ r0 ] () in\n  K.sink ~kernel_info:(kernel_info ~axis_kinds:[ Axis_kind.Global ] \"elementwise_cast_f16\") [ e ]\n\nlet make_elementwise_sqrt () =\n  let p0 = K.param ~idx:0 ~dtype:global_fptr in\n  let p1 = K.param ~idx:1 ~dtype:global_fptr in\n  let r0 = K.range ~size:(idx 256) ~axis:0 ~kind:Axis_kind.Global () in\n  let ld = K.load ~src:(K.index ~ptr:p0 ~idxs:[ r0 ] ()) () in\n  let sq = K.unary ~op:`Sqrt ~src:ld in\n  let st = K.store ~dst:(K.index ~ptr:p1 ~idxs:[ r0 ] ()) ~value:sq ~ranges:[] in\n  let e = K.end_ ~value:st ~ranges:[ r0 ] () in\n 
 K.sink ~kernel_info:(kernel_info ~axis_kinds:[ Axis_kind.Global ] \"elementwise_sqrt\") [ e ]\n\nlet make_parallel_reduce () =\n  let p0 = K.param ~idx:0 ~dtype:global_fptr in\n  let p1 = K.param ~idx:1 ~dtype:global_fptr in\n  let p2 = K.param ~idx:2 ~dtype:global_fptr in\n  let r0 = K.range ~size:(idx 128) ~axis:0 ~kind:Axis_kind.Reduce () in\n  let ld = K.load ~src:(K.index ~ptr:p0 ~idxs:[ r0 ] ()) () in\n  let red1 = K.reduce ~op:`Add ~src:ld ~ranges:[ r0 ] ~dtype:Dtype.Val.float32 in\n  let sq = K.binary ~op:`Mul ~lhs:ld ~rhs:ld in\n  let red2 = K.reduce ~op:`Add ~src:sq ~ranges:[ r0 ] ~dtype:Dtype.Val.float32 in\n  let c0 = idx 0 in\n  let st1 =\n    K.store ~dst:(K.index ~ptr:p1 ~idxs:[ c0 ] ()) ~value:red1 ~ranges:[]\n  in\n  let st2 =\n    K.store ~dst:(K.index ~ptr:p2 ~idxs:[ c0 ] ()) ~value:red2 ~ranges:[]\n  in\n  K.sink ~kernel_info:(kernel_info ~axis_kinds:[ Axis_kind.Reduce ] \"parallel_reduce\") [ st1; st2 ]\n\nlet make_elementwise_int32 () =\n  let i32_ptr = Dtype.Ptr.create Dtype.Val.int32 ~addrspace:Global ~size:(-1) in\n  let p0 = K.param ~idx:0 ~dtype:i32_ptr in\n  let p1 = K.param ~idx:1 ~dtype:i32_ptr in\n  let p2 = K.param ~idx:2 ~dtype:i32_ptr in\n  let r0 = K.range ~size:(idx 256) ~axis:0 ~kind:Axis_kind.Global () in\n  let ld_a = K.load ~src:(K.index ~ptr:p0 ~idxs:[ r0 ] ()) () in\n  let ld_b = K.load ~src:(K.index ~ptr:p1 ~idxs:[ r0 ] ()) () in\n  let add = K.binary ~op:`Add ~lhs:ld_a ~rhs:ld_b in\n  let st = K.store ~dst:(K.index ~ptr:p2 ~idxs:[ r0 ] ()) ~value:add ~ranges:[] in\n  let e = K.end_ ~value:st ~ranges:[ r0 ] () in\n  K.sink ~kernel_info:(kernel_info ~axis_kinds:[ Axis_kind.Global ] \"elementwise_int32\") [ e ]\n\n(* ── Test cases ── *)\n\ntype test_case = {\n  name : string;\n  kernel : Kernel.t;\n  backends : (string * Renderer.t) list;\n  optimize : bool;\n}\n\nlet all_renderers =\n  [\n    (\"clang\", Cstyle.clang_no_abi);\n    (\"cuda\", Cstyle.cuda Gpu_target.SM80);\n    (\"metal\", Cstyle.metal);\n    (\"opencl\", 
Cstyle.opencl);\n  ]\n\nlet gpu_renderers = List.filter (fun (name, _) -> name <> \"clang\") all_renderers\n\nlet test_cases =\n  [\n    { name = \"elementwise_add\"; kernel = make_elementwise_add ();\n      backends = all_renderers; optimize = true };\n    { name = \"sum_reduce\"; kernel = make_sum_reduce ();\n      backends = all_renderers; optimize = true };\n    (* max_reduce excluded: requires Max→Where decomposition (Steps 18-21). *)\n    { name = \"dot_product\"; kernel = make_dot_product ();\n      backends = all_renderers; optimize = true };\n    { name = \"matmul_small\"; kernel = make_matmul_small ();\n      backends = gpu_renderers; optimize = true };\n    { name = \"elementwise_2d\"; kernel = make_elementwise_2d ();\n      backends = gpu_renderers; optimize = true };\n    { name = \"reduce_rows\"; kernel = make_reduce_rows ();\n      backends = all_renderers; optimize = true };\n    { name = \"no_optimize\"; kernel = make_no_optimize ();\n      backends = all_renderers; optimize = false };\n    { name = \"multi_output\"; kernel = make_multi_output ();\n      backends = all_renderers; optimize = true };\n    { name = \"gated_store\"; kernel = make_gated_store ();\n      backends = all_renderers; optimize = true };\n    { name = \"elementwise_where\"; kernel = make_elementwise_where ();\n      backends = all_renderers; optimize = true };\n    { name = \"elementwise_cast_f16\"; kernel = make_elementwise_cast_f16 ();\n      backends = all_renderers; optimize = true };\n    { name = \"elementwise_sqrt\"; kernel = make_elementwise_sqrt ();\n      backends = all_renderers; optimize = true };\n    { name = \"parallel_reduce\"; kernel = make_parallel_reduce ();\n      backends = all_renderers; optimize = true };\n    { name = \"elementwise_int32\"; kernel = make_elementwise_int32 ();\n      backends = all_renderers; optimize = true };\n  ]\n\nlet () =\n  let dir = Sys.argv.(1) in\n  List.iter\n    (fun { name; kernel; backends; optimize } ->\n      List.iter\n  
      (fun (backend_name, renderer) ->\n          let snap = Printf.sprintf \"%s_%s\" backend_name name in\n          match pipeline_to_source ~optimize renderer kernel with\n          | out ->\n              let filename = Filename.concat dir (snap ^ \".actual\") in\n              let oc = open_out filename in\n              output_string oc out;\n              output_char oc '\\n';\n              close_out oc\n          | exception exn ->\n              Printf.eprintf \"FAIL %s: %s\\n%!\" snap (Printexc.to_string exn);\n              raise exn)\n        backends)\n    test_cases\n"
  },
  {
    "path": "packages/tolk/test/golden/codegen/generate_expected.py",
    "content": "#!/usr/bin/env python3\n\"\"\"Generate tinygrad reference .expected files for codegen pipeline golden tests.\n\nConstructs kernel-level UOp DAGs (SINK-rooted) and runs them through tinygrad's\nfull_rewrite_to_sink + linearize + render pipeline.  This produces the reference\nsource code that Tolk's Pipeline.full_rewrite_to_sink must match.\n\nUsage:\n    uv run packages/tolk/test/golden/codegen/generate_expected.py\n\nAfter running, commit the generated .expected files.\n\"\"\"\n\nimport os\nimport sys\n\nsys.path.insert(\n    0,\n    os.path.join(\n        os.path.dirname(__file__), \"..\", \"..\", \"..\", \"..\", \"..\", \"_tinygrad\"\n    ),\n)\n\nfrom tinygrad.uop.ops import UOp, Ops, KernelInfo, AxisType\nfrom tinygrad.dtype import dtypes\nfrom tinygrad.codegen import full_rewrite_to_sink, line_rewrite, pm_linearize_cleanups\nfrom tinygrad.codegen.late.linearizer import linearize\nfrom tinygrad.renderer.cstyle import (\n    ClangRenderer,\n    CUDARenderer,\n    MetalRenderer,\n    OpenCLRenderer,\n)\nimport tinygrad.renderer.cstyle as _cstyle_mod\n\nOUT_DIR = os.path.dirname(__file__)\n\n\nclass _RenderOnlyCUDARenderer(CUDARenderer):\n    \"\"\"CUDARenderer that skips compiler init (nvrtc not needed for rendering).\"\"\"\n\n    def __init__(self, arch):\n        self.device, self.arch, self.use_nvcc = \"NV\", arch, False\n        self.compiler = None\n        ver = int(arch[3:])\n        tc = _cstyle_mod.tc\n        self.tensor_cores = (\n            tc.cuda_sm89 if ver >= 89\n            else tc.cuda_sm80 if ver >= 80\n            else tc.cuda_sm75 if ver >= 75\n            else []\n        )\n\n\nRENDERERS = {}\nfor _name, _ctor in [\n    (\"clang\", lambda: ClangRenderer()),\n    (\"cuda\", lambda: _RenderOnlyCUDARenderer(arch=\"sm_80\")),\n    (\"metal\", lambda: MetalRenderer()),\n    (\"opencl\", lambda: OpenCLRenderer()),\n]:\n    try:\n        RENDERERS[_name] = _ctor()\n    except Exception as e:\n        print(f\"WARNING: skipping 
{_name} renderer: {e}\")\n\n\ndef write_expected(name, content):\n    path = os.path.join(OUT_DIR, f\"{name}.expected\")\n    with open(path, \"w\") as f:\n        f.write(content + \"\\n\")\n    print(f\"  wrote {path}\")\n\n\ndef get_source(sink, renderer, optimize=True):\n    \"\"\"Run the full tinygrad codegen pipeline and return rendered source.\"\"\"\n    rewritten = full_rewrite_to_sink(sink, renderer, optimize=optimize)\n    lst = linearize(rewritten)\n    lst = line_rewrite(lst, pm_linearize_cleanups)\n    return renderer.render(lst).strip()\n\n\ndef ki(name=\"test\", **kwargs):\n    \"\"\"Build a KernelInfo with deterministic defaults.\n\n    Using name != \"test\" forces apply_opts to preserve the name rather than\n    auto-generating one with a global counter, which avoids order-dependent\n    naming mismatches between the Python and OCaml generators.\n    \"\"\"\n    defaults = dict(name=name, axis_types=(), opts_to_apply=())\n    defaults.update(kwargs)\n    return KernelInfo(**defaults)\n\n\n# ── Kernel AST builders ──\n# Each builds a SINK-rooted kernel DAG matching the equivalent Tolk Kernel.t\n# construction in generate_actual.ml.\n\n\ndef build_elementwise_add():\n    \"\"\"c[i] = a[i] + b[i], 1 Global range.\"\"\"\n    p0 = UOp(Ops.PARAM, dtypes.float32.ptr(), (), 0)\n    p1 = UOp(Ops.PARAM, dtypes.float32.ptr(), (), 1)\n    p2 = UOp(Ops.PARAM, dtypes.float32.ptr(), (), 2)\n    r0 = UOp.range(256, 0, AxisType.GLOBAL)\n    ld_a = p0.index(r0, ptr=True).load()\n    ld_b = p1.index(r0, ptr=True).load()\n    add = ld_a + ld_b\n    st = p2.index(r0, ptr=True).store(add)\n    end = st.end(r0)\n    return UOp.sink(end, arg=ki(\"elementwise_add\", axis_types=(AxisType.GLOBAL,)))\n\n\ndef build_sum_reduce():\n    \"\"\"b[0] = sum(a[i]), 1 Reduce range.\"\"\"\n    p0 = UOp(Ops.PARAM, dtypes.float32.ptr(), (), 0)\n    p1 = UOp(Ops.PARAM, dtypes.float32.ptr(), (), 1)\n    r0 = UOp.range(256, 0, AxisType.REDUCE)\n    ld = p0.index(r0, ptr=True).load()\n    
red = UOp(Ops.REDUCE, dtypes.float32, (ld, r0), Ops.ADD)\n    c0 = UOp.const(dtypes.index, 0)\n    st = p1.index(c0, ptr=True).store(red)\n    return UOp.sink(st, arg=ki(\"sum_reduce\", axis_types=(AxisType.REDUCE,)))\n\n\ndef build_max_reduce():\n    \"\"\"b[0] = max(a[i]), 1 Reduce range.\"\"\"\n    p0 = UOp(Ops.PARAM, dtypes.float32.ptr(), (), 0)\n    p1 = UOp(Ops.PARAM, dtypes.float32.ptr(), (), 1)\n    r0 = UOp.range(64, 0, AxisType.REDUCE)\n    ld = p0.index(r0, ptr=True).load()\n    red = UOp(Ops.REDUCE, dtypes.float32, (ld, r0), Ops.MAX)\n    c0 = UOp.const(dtypes.index, 0)\n    st = p1.index(c0, ptr=True).store(red)\n    return UOp.sink(st, arg=ki(\"max_reduce\", axis_types=(AxisType.REDUCE,)))\n\n\ndef build_dot_product():\n    \"\"\"c[0] = sum_k(a[k] * b[k]), 1 Reduce range.\"\"\"\n    p0 = UOp(Ops.PARAM, dtypes.float32.ptr(), (), 0)\n    p1 = UOp(Ops.PARAM, dtypes.float32.ptr(), (), 1)\n    p2 = UOp(Ops.PARAM, dtypes.float32.ptr(), (), 2)\n    r0 = UOp.range(128, 0, AxisType.REDUCE)\n    ld_a = p0.index(r0, ptr=True).load()\n    ld_b = p1.index(r0, ptr=True).load()\n    mul = ld_a * ld_b\n    red = UOp(Ops.REDUCE, dtypes.float32, (mul, r0), Ops.ADD)\n    c0 = UOp.const(dtypes.index, 0)\n    st = p2.index(c0, ptr=True).store(red)\n    return UOp.sink(st, arg=ki(\"dot_product\", axis_types=(AxisType.REDUCE,)))\n\n\ndef build_matmul_small():\n    \"\"\"C[i*4+j] = sum_k(A[i*4+k] * B[k*4+j]), M=N=K=4.\"\"\"\n    M, N, K = 4, 4, 4\n    pA = UOp(Ops.PARAM, dtypes.float32.ptr(), (), 0)\n    pB = UOp(Ops.PARAM, dtypes.float32.ptr(), (), 1)\n    pC = UOp(Ops.PARAM, dtypes.float32.ptr(), (), 2)\n    ri = UOp.range(M, 0, AxisType.GLOBAL)\n    rj = UOp.range(N, 1, AxisType.GLOBAL)\n    rk = UOp.range(K, 2, AxisType.REDUCE)\n    a_idx = ri * K + rk\n    b_idx = rk * N + rj\n    c_idx = ri * N + rj\n    ld_a = pA.index(a_idx, ptr=True).load()\n    ld_b = pB.index(b_idx, ptr=True).load()\n    mul = ld_a * ld_b\n    red = UOp(Ops.REDUCE, dtypes.float32, (mul, rk), 
Ops.ADD)\n    st = pC.index(c_idx, ptr=True).store(red)\n    end = st.end(ri, rj)\n    return UOp.sink(\n        end,\n        arg=ki(\n            \"matmul_small\",\n            axis_types=(AxisType.GLOBAL, AxisType.GLOBAL, AxisType.REDUCE),\n        ),\n    )\n\n\ndef build_elementwise_2d():\n    \"\"\"c[i*16+j] = a[i*16+j] + b[i*16+j], 2 Global ranges.\"\"\"\n    ROWS, COLS = 8, 16\n    p0 = UOp(Ops.PARAM, dtypes.float32.ptr(), (), 0)\n    p1 = UOp(Ops.PARAM, dtypes.float32.ptr(), (), 1)\n    p2 = UOp(Ops.PARAM, dtypes.float32.ptr(), (), 2)\n    ri = UOp.range(ROWS, 0, AxisType.GLOBAL)\n    rj = UOp.range(COLS, 1, AxisType.GLOBAL)\n    flat = ri * COLS + rj\n    ld_a = p0.index(flat, ptr=True).load()\n    ld_b = p1.index(flat, ptr=True).load()\n    add = ld_a + ld_b\n    st = p2.index(flat, ptr=True).store(add)\n    end = st.end(ri, rj)\n    return UOp.sink(\n        end, arg=ki(\"elementwise_2d\", axis_types=(AxisType.GLOBAL, AxisType.GLOBAL))\n    )\n\n\ndef build_reduce_rows():\n    \"\"\"b[i] = sum_j(a[i*32+j]), 1 Global + 1 Reduce range.\"\"\"\n    ROWS, COLS = 8, 32\n    p0 = UOp(Ops.PARAM, dtypes.float32.ptr(), (), 0)\n    p1 = UOp(Ops.PARAM, dtypes.float32.ptr(), (), 1)\n    ri = UOp.range(ROWS, 0, AxisType.GLOBAL)\n    rj = UOp.range(COLS, 1, AxisType.REDUCE)\n    flat = ri * COLS + rj\n    ld = p0.index(flat, ptr=True).load()\n    red = UOp(Ops.REDUCE, dtypes.float32, (ld, rj), Ops.ADD)\n    st = p1.index(ri, ptr=True).store(red)\n    end = st.end(ri)\n    return UOp.sink(\n        end, arg=ki(\"reduce_rows\", axis_types=(AxisType.GLOBAL, AxisType.REDUCE))\n    )\n\n\ndef build_multi_output():\n    \"\"\"b[i] = a[i] + 1.0; c[i] = a[i] * 2.0, 1 Global range, 2 stores.\"\"\"\n    p0 = UOp(Ops.PARAM, dtypes.float32.ptr(), (), 0)\n    p1 = UOp(Ops.PARAM, dtypes.float32.ptr(), (), 1)\n    p2 = UOp(Ops.PARAM, dtypes.float32.ptr(), (), 2)\n    r0 = UOp.range(256, 0, AxisType.GLOBAL)\n    ld_a = p0.index(r0, ptr=True).load()\n    st1 = p1.index(r0, 
ptr=True).store(ld_a + UOp.const(dtypes.float32, 1.0))\n    e1 = st1.end(r0)\n    st2 = p2.index(r0, ptr=True).store(ld_a * UOp.const(dtypes.float32, 2.0))\n    e2 = st2.end(r0)\n    return UOp.sink(e1, e2, arg=ki(\"multi_output\", axis_types=(AxisType.GLOBAL,)))\n\n\ndef build_gated_store():\n    \"\"\"c[i] = a[i] + b[i] with store gated by i < 200, range size=256.\"\"\"\n    p0 = UOp(Ops.PARAM, dtypes.float32.ptr(), (), 0)\n    p1 = UOp(Ops.PARAM, dtypes.float32.ptr(), (), 1)\n    p2 = UOp(Ops.PARAM, dtypes.float32.ptr(), (), 2)\n    r0 = UOp.range(256, 0, AxisType.GLOBAL)\n    ld_a = p0.index(r0, ptr=True).load()\n    ld_b = p1.index(r0, ptr=True).load()\n    add = ld_a + ld_b\n    gate = r0 < UOp.const(dtypes.index, 200)\n    st = UOp(Ops.INDEX, dtypes.float32.ptr(), (p2, r0, gate)).store(add)\n    end = st.end(r0)\n    return UOp.sink(end, arg=ki(\"gated_store\", axis_types=(AxisType.GLOBAL,)))\n\n\n# ── Test cases ──\n# (name, builder, backends_or_None, optimize)\n\nGPU_RENDERERS = [\"cuda\", \"metal\", \"opencl\"]\n\n\ndef build_no_optimize():\n    \"\"\"Same as elementwise_add but with optimize=false and unique name.\"\"\"\n    p0 = UOp(Ops.PARAM, dtypes.float32.ptr(), (), 0)\n    p1 = UOp(Ops.PARAM, dtypes.float32.ptr(), (), 1)\n    p2 = UOp(Ops.PARAM, dtypes.float32.ptr(), (), 2)\n    r0 = UOp.range(256, 0, AxisType.GLOBAL)\n    ld_a = p0.index(r0, ptr=True).load()\n    ld_b = p1.index(r0, ptr=True).load()\n    add = ld_a + ld_b\n    st = p2.index(r0, ptr=True).store(add)\n    end = st.end(r0)\n    return UOp.sink(end, arg=ki(\"no_optimize\", axis_types=(AxisType.GLOBAL,)))\n\n\ndef build_elementwise_where():\n    \"\"\"c[i] = (a[i] > 0) ? 
a[i] : 0.0 (ReLU pattern), 1 Global range.\"\"\"\n    p0 = UOp(Ops.PARAM, dtypes.float32.ptr(), (), 0)\n    p1 = UOp(Ops.PARAM, dtypes.float32.ptr(), (), 1)\n    r0 = UOp.range(256, 0, AxisType.GLOBAL)\n    ld = p0.index(r0, ptr=True).load()\n    zero = UOp.const(dtypes.float32, 0.0)\n    cond = zero.alu(Ops.CMPLT, ld)  # 0.0 < a[i] => a[i] > 0\n    val = cond.where(ld, zero)\n    st = p1.index(r0, ptr=True).store(val)\n    end = st.end(r0)\n    return UOp.sink(end, arg=ki(\"elementwise_where\", axis_types=(AxisType.GLOBAL,)))\n\n\ndef build_elementwise_cast_f16():\n    \"\"\"c[i] = (float32)a_f16[i] + b[i], 1 Global range, mixed dtypes.\"\"\"\n    p0 = UOp(Ops.PARAM, dtypes.half.ptr(), (), 0)\n    p1 = UOp(Ops.PARAM, dtypes.float32.ptr(), (), 1)\n    p2 = UOp(Ops.PARAM, dtypes.float32.ptr(), (), 2)\n    r0 = UOp.range(256, 0, AxisType.GLOBAL)\n    ld_a = p0.index(r0, ptr=True).load()\n    cast_a = UOp(Ops.CAST, dtypes.float32, (ld_a,))\n    ld_b = p1.index(r0, ptr=True).load()\n    add = cast_a + ld_b\n    st = p2.index(r0, ptr=True).store(add)\n    end = st.end(r0)\n    return UOp.sink(end, arg=ki(\"elementwise_cast_f16\", axis_types=(AxisType.GLOBAL,)))\n\n\ndef build_elementwise_sqrt():\n    \"\"\"c[i] = sqrt(a[i]), 1 Global range, exercises unary SQRT through pipeline.\"\"\"\n    p0 = UOp(Ops.PARAM, dtypes.float32.ptr(), (), 0)\n    p1 = UOp(Ops.PARAM, dtypes.float32.ptr(), (), 1)\n    r0 = UOp.range(256, 0, AxisType.GLOBAL)\n    ld = p0.index(r0, ptr=True).load()\n    sq = UOp(Ops.SQRT, dtypes.float32, (ld,))\n    st = p1.index(r0, ptr=True).store(sq)\n    end = st.end(r0)\n    return UOp.sink(end, arg=ki(\"elementwise_sqrt\", axis_types=(AxisType.GLOBAL,)))\n\n\ndef build_parallel_reduce():\n    \"\"\"b[0] = sum(a[i]); c[0] = sum(a[i]*a[i]), 1 Reduce range, 2 stores.\"\"\"\n    p0 = UOp(Ops.PARAM, dtypes.float32.ptr(), (), 0)\n    p1 = UOp(Ops.PARAM, dtypes.float32.ptr(), (), 1)\n    p2 = UOp(Ops.PARAM, dtypes.float32.ptr(), (), 2)\n    r0 = UOp.range(128, 
0, AxisType.REDUCE)\n    ld = p0.index(r0, ptr=True).load()\n    red1 = UOp(Ops.REDUCE, dtypes.float32, (ld, r0), Ops.ADD)\n    red2 = UOp(Ops.REDUCE, dtypes.float32, (ld * ld, r0), Ops.ADD)\n    c0 = UOp.const(dtypes.index, 0)\n    st1 = p1.index(c0, ptr=True).store(red1)\n    st2 = p2.index(c0, ptr=True).store(red2)\n    return UOp.sink(st1, st2, arg=ki(\"parallel_reduce\", axis_types=(AxisType.REDUCE,)))\n\n\ndef build_elementwise_int32():\n    \"\"\"c[i] = a[i] + b[i] (all int32), 1 Global range.\"\"\"\n    p0 = UOp(Ops.PARAM, dtypes.int32.ptr(), (), 0)\n    p1 = UOp(Ops.PARAM, dtypes.int32.ptr(), (), 1)\n    p2 = UOp(Ops.PARAM, dtypes.int32.ptr(), (), 2)\n    r0 = UOp.range(256, 0, AxisType.GLOBAL)\n    ld_a = p0.index(r0, ptr=True).load()\n    ld_b = p1.index(r0, ptr=True).load()\n    add = ld_a + ld_b\n    st = p2.index(r0, ptr=True).store(add)\n    end = st.end(r0)\n    return UOp.sink(end, arg=ki(\"elementwise_int32\", axis_types=(AxisType.GLOBAL,)))\n\n\nTEST_CASES = [\n    (\"elementwise_add\", build_elementwise_add, None, True),\n    (\"sum_reduce\", build_sum_reduce, None, True),\n    (\"max_reduce\", build_max_reduce, None, True),\n    (\"dot_product\", build_dot_product, None, True),\n    (\"matmul_small\", build_matmul_small, GPU_RENDERERS, True),\n    (\"elementwise_2d\", build_elementwise_2d, GPU_RENDERERS, True),\n    (\"reduce_rows\", build_reduce_rows, None, True),\n    (\"no_optimize\", build_no_optimize, None, False),\n    (\"multi_output\", build_multi_output, None, True),\n    (\"gated_store\", build_gated_store, None, True),\n    (\"elementwise_where\", build_elementwise_where, None, True),\n    (\"elementwise_cast_f16\", build_elementwise_cast_f16, None, True),\n    (\"elementwise_sqrt\", build_elementwise_sqrt, None, True),\n    (\"parallel_reduce\", build_parallel_reduce, None, True),\n    (\"elementwise_int32\", build_elementwise_int32, None, True),\n]\n\n\ndef main():\n    total = 0\n    for case_name, builder, backends, optimize in 
TEST_CASES:\n        print(f\"\\n{case_name} (optimize={optimize}):\")\n        sink = builder()\n        targets = backends if backends else list(RENDERERS.keys())\n        for backend_name in targets:\n            if backend_name not in RENDERERS:\n                print(f\"  SKIP {backend_name}_{case_name}: renderer not available\")\n                continue\n            renderer = RENDERERS[backend_name]\n            snap_name = f\"{backend_name}_{case_name}\"\n            try:\n                src = get_source(sink, renderer, optimize=optimize)\n                write_expected(snap_name, src)\n                total += 1\n            except Exception as e:\n                print(f\"  FAIL {snap_name}: {e}\")\n                import traceback\n                traceback.print_exc()\n\n    print(f\"\\nDone. Generated {total} .expected files in {OUT_DIR}\")\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "packages/tolk/test/golden/codegen/metal_dot_product.expected",
    "content": "#include <metal_stdlib>\nusing namespace metal;\nkernel void dot_product(device float* data0, device float* data1, device float* data2, uint3 gid [[threadgroup_position_in_grid]], uint3 lid [[thread_position_in_threadgroup]]) {\n  float acc0[1];\n  *(acc0+0) = 0.0f;\n  for (int Ridx0 = 0; Ridx0 < 128; Ridx0++) {\n    float val0 = (*(data0+Ridx0));\n    float val1 = (*(data1+Ridx0));\n    *(acc0+0) = ((*(acc0+0))+(val0*val1));\n  }\n  *(data2+0) = (*(acc0+0));\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/codegen/metal_elementwise_2d.expected",
    "content": "#include <metal_stdlib>\nusing namespace metal;\nkernel void elementwise_2d(device float* data0, device float* data1, device float* data2, uint3 gid [[threadgroup_position_in_grid]], uint3 lid [[thread_position_in_threadgroup]]) {\n  int gidx0 = gid.x; /* 128 */\n  float val0 = (*(data0+gidx0));\n  float val1 = (*(data1+gidx0));\n  *(data2+gidx0) = (val0+val1);\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/codegen/metal_elementwise_add.expected",
    "content": "#include <metal_stdlib>\nusing namespace metal;\nkernel void elementwise_add(device float* data0, device float* data1, device float* data2, uint3 gid [[threadgroup_position_in_grid]], uint3 lid [[thread_position_in_threadgroup]]) {\n  int gidx0 = gid.x; /* 256 */\n  float val0 = (*(data0+gidx0));\n  float val1 = (*(data1+gidx0));\n  *(data2+gidx0) = (val0+val1);\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/codegen/metal_elementwise_cast_f16.expected",
    "content": "#include <metal_stdlib>\nusing namespace metal;\nkernel void elementwise_cast_f16(device half* data0, device float* data1, device float* data2, uint3 gid [[threadgroup_position_in_grid]], uint3 lid [[thread_position_in_threadgroup]]) {\n  int gidx0 = gid.x; /* 256 */\n  half val0 = (*(data0+gidx0));\n  float val1 = (*(data1+gidx0));\n  *(data2+gidx0) = (((float)(val0))+val1);\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/codegen/metal_elementwise_int32.expected",
    "content": "#include <metal_stdlib>\nusing namespace metal;\nkernel void elementwise_int32(device int* data0, device int* data1, device int* data2, uint3 gid [[threadgroup_position_in_grid]], uint3 lid [[thread_position_in_threadgroup]]) {\n  int gidx0 = gid.x; /* 256 */\n  int val0 = (*(data0+gidx0));\n  int val1 = (*(data1+gidx0));\n  *(data2+gidx0) = (val0+val1);\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/codegen/metal_elementwise_sqrt.expected",
    "content": "#include <metal_stdlib>\nusing namespace metal;\nkernel void elementwise_sqrt(device float* data0, device float* data1, uint3 gid [[threadgroup_position_in_grid]], uint3 lid [[thread_position_in_threadgroup]]) {\n  int gidx0 = gid.x; /* 256 */\n  float val0 = (*(data0+gidx0));\n  *(data1+gidx0) = sqrt(val0);\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/codegen/metal_elementwise_where.expected",
    "content": "#include <metal_stdlib>\nusing namespace metal;\nkernel void elementwise_where(device float* data0, device float* data1, uint3 gid [[threadgroup_position_in_grid]], uint3 lid [[thread_position_in_threadgroup]]) {\n  int gidx0 = gid.x; /* 256 */\n  float val0 = (*(data0+gidx0));\n  float alu0 = ((0.0f<val0)?val0:0.0f);\n  *(data1+gidx0) = alu0;\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/codegen/metal_gated_store.expected",
    "content": "#include <metal_stdlib>\nusing namespace metal;\nkernel void gated_store(device float* data0, device float* data1, device float* data2, uint3 gid [[threadgroup_position_in_grid]], uint3 lid [[thread_position_in_threadgroup]]) {\n  int gidx0 = gid.x; /* 256 */\n  float val0 = (*(data0+gidx0));\n  float val1 = (*(data1+gidx0));\n  bool alu0 = (gidx0<200);\n  if (alu0) {\n    *(data2+gidx0) = (val0+val1);\n  }\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/codegen/metal_matmul_small.expected",
    "content": "#include <metal_stdlib>\nusing namespace metal;\nkernel void matmul_small(device float* data0, device float* data1, device float* data2, uint3 gid [[threadgroup_position_in_grid]], uint3 lid [[thread_position_in_threadgroup]]) {\n  float acc0[1];\n  int gidx0 = gid.x; /* 4 */\n  int gidx1 = gid.y; /* 4 */\n  int alu0 = (gidx1<<2);\n  *(acc0+0) = 0.0f;\n  for (int Ridx2 = 0; Ridx2 < 4; Ridx2++) {\n    float val0 = (*(data0+(alu0+Ridx2)));\n    float val1 = (*(data1+(gidx0+(Ridx2<<2))));\n    *(acc0+0) = ((*(acc0+0))+(val0*val1));\n  }\n  *(data2+(gidx0+alu0)) = (*(acc0+0));\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/codegen/metal_max_reduce.expected",
    "content": "#include <metal_stdlib>\nusing namespace metal;\nkernel void max_reduce(device float* data0, device float* data1, uint3 gid [[threadgroup_position_in_grid]], uint3 lid [[thread_position_in_threadgroup]]) {\n  float acc0[1];\n  *(acc0+0) = ((float)(-INFINITY));\n  for (int Ridx0 = 0; Ridx0 < 64; Ridx0++) {\n    float val0 = (*(data0+Ridx0));\n    float alu1 = (((*(acc0+0))<val0)?val0:(*(acc0+0)));\n    *(acc0+0) = alu1;\n  }\n  *(data1+0) = (*(acc0+0));\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/codegen/metal_multi_output.expected",
    "content": "#include <metal_stdlib>\nusing namespace metal;\nkernel void multi_output(device float* data0, device float* data1, device float* data2, uint3 gid [[threadgroup_position_in_grid]], uint3 lid [[thread_position_in_threadgroup]]) {\n  int gidx0 = gid.x; /* 256 */\n  float val0 = (*(data0+gidx0));\n  *(data1+gidx0) = (val0+1.0f);\n  *(data2+gidx0) = (val0*2.0f);\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/codegen/metal_no_optimize.expected",
    "content": "#include <metal_stdlib>\nusing namespace metal;\nkernel void no_optimize(device float* data0, device float* data1, device float* data2, uint3 gid [[threadgroup_position_in_grid]], uint3 lid [[thread_position_in_threadgroup]]) {\n  int gidx0 = gid.x; /* 256 */\n  float val0 = (*(data0+gidx0));\n  float val1 = (*(data1+gidx0));\n  *(data2+gidx0) = (val0+val1);\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/codegen/metal_parallel_reduce.expected",
    "content": "#include <metal_stdlib>\nusing namespace metal;\nkernel void parallel_reduce(device float* data0, device float* data1, device float* data2, uint3 gid [[threadgroup_position_in_grid]], uint3 lid [[thread_position_in_threadgroup]]) {\n  float acc0[1];\n  float acc1[1];\n  *(acc0+0) = 0.0f;\n  *(acc1+0) = 0.0f;\n  for (int Ridx0 = 0; Ridx0 < 128; Ridx0++) {\n    float val0 = (*(data0+Ridx0));\n    *(acc0+0) = ((*(acc0+0))+val0);\n    *(acc1+0) = ((*(acc1+0))+(val0*val0));\n  }\n  *(data1+0) = (*(acc0+0));\n  *(data2+0) = (*(acc1+0));\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/codegen/metal_reduce_rows.expected",
    "content": "#include <metal_stdlib>\nusing namespace metal;\nkernel void reduce_rows(device float* data0, device float* data1, uint3 gid [[threadgroup_position_in_grid]], uint3 lid [[thread_position_in_threadgroup]]) {\n  float acc0[1];\n  int gidx0 = gid.x; /* 8 */\n  *(acc0+0) = 0.0f;\n  for (int Ridx1 = 0; Ridx1 < 32; Ridx1++) {\n    float val0 = (*(data0+((gidx0<<5)+Ridx1)));\n    *(acc0+0) = ((*(acc0+0))+val0);\n  }\n  *(data1+gidx0) = (*(acc0+0));\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/codegen/metal_sum_reduce.expected",
    "content": "#include <metal_stdlib>\nusing namespace metal;\nkernel void sum_reduce(device float* data0, device float* data1, uint3 gid [[threadgroup_position_in_grid]], uint3 lid [[thread_position_in_threadgroup]]) {\n  float acc0[1];\n  *(acc0+0) = 0.0f;\n  for (int Ridx0 = 0; Ridx0 < 256; Ridx0++) {\n    float val0 = (*(data0+Ridx0));\n    *(acc0+0) = ((*(acc0+0))+val0);\n  }\n  *(data1+0) = (*(acc0+0));\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/codegen/opencl_dot_product.expected",
    "content": "__kernel void dot_product(__global float* data0, __global float* data1, __global float* data2) {\n  float acc0[1];\n  *(acc0+0) = 0.0f;\n  for (int Ridx0 = 0; Ridx0 < 128; Ridx0++) {\n    float val0 = (*(data0+Ridx0));\n    float val1 = (*(data1+Ridx0));\n    *(acc0+0) = ((*(acc0+0))+(val0*val1));\n  }\n  *(data2+0) = (*(acc0+0));\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/codegen/opencl_elementwise_2d.expected",
    "content": "__kernel void elementwise_2d(__global float* data0, __global float* data1, __global float* data2) {\n  int gidx0 = get_group_id(0); /* 128 */\n  float val0 = (*(data0+gidx0));\n  float val1 = (*(data1+gidx0));\n  *(data2+gidx0) = (val0+val1);\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/codegen/opencl_elementwise_add.expected",
    "content": "__kernel void elementwise_add(__global float* data0, __global float* data1, __global float* data2) {\n  int gidx0 = get_group_id(0); /* 256 */\n  float val0 = (*(data0+gidx0));\n  float val1 = (*(data1+gidx0));\n  *(data2+gidx0) = (val0+val1);\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/codegen/opencl_elementwise_cast_f16.expected",
    "content": "#pragma OPENCL EXTENSION cl_khr_fp16 : enable\n__kernel void elementwise_cast_f16(__global half* data0, __global float* data1, __global float* data2) {\n  int gidx0 = get_group_id(0); /* 256 */\n  half val0 = (*(data0+gidx0));\n  float val1 = (*(data1+gidx0));\n  *(data2+gidx0) = (((float)(val0))+val1);\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/codegen/opencl_elementwise_int32.expected",
    "content": "__kernel void elementwise_int32(__global int* data0, __global int* data1, __global int* data2) {\n  int gidx0 = get_group_id(0); /* 256 */\n  int val0 = (*(data0+gidx0));\n  int val1 = (*(data1+gidx0));\n  *(data2+gidx0) = (val0+val1);\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/codegen/opencl_elementwise_sqrt.expected",
    "content": "__kernel void elementwise_sqrt(__global float* data0, __global float* data1) {\n  int gidx0 = get_group_id(0); /* 256 */\n  float val0 = (*(data0+gidx0));\n  *(data1+gidx0) = sqrt(val0);\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/codegen/opencl_elementwise_where.expected",
    "content": "__kernel void elementwise_where(__global float* data0, __global float* data1) {\n  int gidx0 = get_group_id(0); /* 256 */\n  float val0 = (*(data0+gidx0));\n  float alu0 = ((0.0f<val0)?val0:0.0f);\n  *(data1+gidx0) = alu0;\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/codegen/opencl_gated_store.expected",
    "content": "__kernel void gated_store(__global float* data0, __global float* data1, __global float* data2) {\n  int gidx0 = get_group_id(0); /* 256 */\n  float val0 = (*(data0+gidx0));\n  float val1 = (*(data1+gidx0));\n  bool alu0 = (gidx0<200);\n  if (alu0) {\n    *(data2+gidx0) = (val0+val1);\n  }\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/codegen/opencl_matmul_small.expected",
    "content": "__kernel void matmul_small(__global float* data0, __global float* data1, __global float* data2) {\n  float acc0[1];\n  int gidx0 = get_group_id(0); /* 4 */\n  int gidx1 = get_group_id(1); /* 4 */\n  int alu0 = (gidx1<<2);\n  *(acc0+0) = 0.0f;\n  for (int Ridx2 = 0; Ridx2 < 4; Ridx2++) {\n    float val0 = (*(data0+(alu0+Ridx2)));\n    float val1 = (*(data1+(gidx0+(Ridx2<<2))));\n    *(acc0+0) = ((*(acc0+0))+(val0*val1));\n  }\n  *(data2+(gidx0+alu0)) = (*(acc0+0));\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/codegen/opencl_max_reduce.expected",
    "content": "__kernel void max_reduce(__global float* data0, __global float* data1) {\n  float acc0[1];\n  *(acc0+0) = ((float)(-INFINITY));\n  for (int Ridx0 = 0; Ridx0 < 64; Ridx0++) {\n    float val0 = (*(data0+Ridx0));\n    float alu1 = (((*(acc0+0))<val0)?val0:(*(acc0+0)));\n    *(acc0+0) = alu1;\n  }\n  *(data1+0) = (*(acc0+0));\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/codegen/opencl_multi_output.expected",
    "content": "__kernel void multi_output(__global float* data0, __global float* data1, __global float* data2) {\n  int gidx0 = get_group_id(0); /* 256 */\n  float val0 = (*(data0+gidx0));\n  *(data1+gidx0) = (val0+1.0f);\n  *(data2+gidx0) = (val0*2.0f);\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/codegen/opencl_no_optimize.expected",
    "content": "__kernel void no_optimize(__global float* data0, __global float* data1, __global float* data2) {\n  int gidx0 = get_group_id(0); /* 256 */\n  float val0 = (*(data0+gidx0));\n  float val1 = (*(data1+gidx0));\n  *(data2+gidx0) = (val0+val1);\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/codegen/opencl_parallel_reduce.expected",
    "content": "__kernel void parallel_reduce(__global float* data0, __global float* data1, __global float* data2) {\n  float acc0[1];\n  float acc1[1];\n  *(acc0+0) = 0.0f;\n  *(acc1+0) = 0.0f;\n  for (int Ridx0 = 0; Ridx0 < 128; Ridx0++) {\n    float val0 = (*(data0+Ridx0));\n    *(acc0+0) = ((*(acc0+0))+val0);\n    *(acc1+0) = ((*(acc1+0))+(val0*val0));\n  }\n  *(data1+0) = (*(acc0+0));\n  *(data2+0) = (*(acc1+0));\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/codegen/opencl_reduce_rows.expected",
    "content": "__kernel void reduce_rows(__global float* data0, __global float* data1) {\n  float acc0[1];\n  int gidx0 = get_group_id(0); /* 8 */\n  *(acc0+0) = 0.0f;\n  for (int Ridx1 = 0; Ridx1 < 32; Ridx1++) {\n    float val0 = (*(data0+((gidx0<<5)+Ridx1)));\n    *(acc0+0) = ((*(acc0+0))+val0);\n  }\n  *(data1+gidx0) = (*(acc0+0));\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/codegen/opencl_sum_reduce.expected",
    "content": "__kernel void sum_reduce(__global float* data0, __global float* data1) {\n  float acc0[1];\n  *(acc0+0) = 0.0f;\n  for (int Ridx0 = 0; Ridx0 < 256; Ridx0++) {\n    float val0 = (*(data0+Ridx0));\n    *(acc0+0) = ((*(acc0+0))+val0);\n  }\n  *(data1+0) = (*(acc0+0));\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/cstyle/clang_bitcast_f32_to_i32.expected",
    "content": "void test(float* restrict data0, int* restrict data1) {\n  float val0 = (*(data0+0));\n  *(data1+0) = __builtin_bit_cast(int, (float)(val0));\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/cstyle/clang_cast_f16_to_f32.expected",
    "content": "void test(__fp16* restrict data0, float* restrict data1) {\n  __fp16 val0 = (*(data0+0));\n  *(data1+0) = ((float)(val0));\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/cstyle/clang_conditional.expected",
    "content": "void test(float* restrict data0) {\n  if (1) {\n    *(data0+0) = 1.0f;\n  }\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/cstyle/clang_const_inf_nan.expected",
    "content": "void test(float* restrict data0) {\n  *(data0+0) = ((float)(__builtin_inff()));\n  *(data0+1) = ((float)(__builtin_nanf(\"\")));\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/cstyle/clang_gated_load.expected",
    "content": "void test(float* restrict data0, float* restrict data1) {\n  float val0 = (1?*(data0+0):0.0f);\n  *(data1+0) = val0;\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/cstyle/clang_loop.expected",
    "content": "void test(float* restrict data0) {\n  for (int Lidx0 = 0; Lidx0 < 10; Lidx0++) {\n    float val0 = (*(data0+Lidx0));\n    *(data0+Lidx0) = val0;\n  }\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/cstyle/clang_multi_param.expected",
    "content": "void test(float* restrict data0, float* restrict data1, float* restrict data2, float* restrict data3) {\n  float val0 = (*(data0+0));\n  float val1 = (*(data1+0));\n  *(data3+0) = (val0+val1);\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/cstyle/clang_nested_loops.expected",
    "content": "void test(float* restrict data0) {\n  for (int Lidx0 = 0; Lidx0 < 10; Lidx0++) {\n    for (int Lidx1 = 0; Lidx1 < 5; Lidx1++) {\n      int alu0 = (Lidx0+Lidx1);\n      float val0 = (*(data0+alu0));\n      *(data0+alu0) = val0;\n    }\n  }\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/cstyle/clang_simple_add_f32.expected",
    "content": "void test(float* restrict data0, float* restrict data1, float* restrict data2) {\n  float val0 = (*(data0+0));\n  float val1 = (*(data1+0));\n  *(data2+0) = (val0+val1);\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/cstyle/clang_simple_mul_i32.expected",
    "content": "void test(int* restrict data0, int* restrict data1, int* restrict data2) {\n  int val0 = (*(data0+0));\n  int val1 = (*(data1+0));\n  *(data2+0) = (val0*val1);\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/cstyle/clang_unary_sqrt_f16.expected",
    "content": "void test(__fp16* restrict data0, __fp16* restrict data1) {\n  __fp16 val0 = (*(data0+0));\n  *(data1+0) = __builtin_sqrtf(val0);\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/cstyle/clang_unary_sqrt_f32.expected",
    "content": "void test(float* restrict data0, float* restrict data1) {\n  float val0 = (*(data0+0));\n  *(data1+0) = __builtin_sqrtf(val0);\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/cstyle/clang_vectorize_gep.expected",
    "content": "typedef float float4 __attribute__((aligned(16),ext_vector_type(4)));\nvoid test(float* restrict data0, float* restrict data1) {\n  float val0 = (*(data0+0));\n  float val1 = (*(data0+1));\n  float val2 = (*(data0+2));\n  float val3 = (*(data0+3));\n  *(data1+0) = (float4){val0,val1,val2,val3}[2];\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/cstyle/clang_where_select.expected",
    "content": "void test(float* restrict data0, float* restrict data1, float* restrict data2) {\n  float val0 = (*(data0+0));\n  float val1 = (*(data1+0));\n  float alu0 = (1?val0:val1);\n  *(data2+0) = alu0;\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/cstyle/cuda_bitcast_f32_to_i32.expected",
    "content": "#define INFINITY (__int_as_float(0x7f800000))\n#define NAN (__int_as_float(0x7fffffff))\ntemplate <class T, class F> __device__ __forceinline__ T tg_bitcast(F v) { union U { F f; T t; }; U u; u.f = v; return u.t; }\nextern \"C\" __global__ void __launch_bounds__(1) test(float* data0, int* data1) {\n  float val0 = (*(data0+0));\n  *(data1+0) = tg_bitcast<int>((float)(val0));\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/cstyle/cuda_cast_f16_to_f32.expected",
    "content": "#define INFINITY (__int_as_float(0x7f800000))\n#define NAN (__int_as_float(0x7fffffff))\ntemplate <class T, class F> __device__ __forceinline__ T tg_bitcast(F v) { union U { F f; T t; }; U u; u.f = v; return u.t; }\n#include <cuda_fp16.h>\nextern \"C\" __global__ void __launch_bounds__(1) test(half* data0, float* data1) {\n  half val0 = (*(data0+0));\n  *(data1+0) = ((float)(val0));\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/cstyle/cuda_conditional.expected",
    "content": "#define INFINITY (__int_as_float(0x7f800000))\n#define NAN (__int_as_float(0x7fffffff))\ntemplate <class T, class F> __device__ __forceinline__ T tg_bitcast(F v) { union U { F f; T t; }; U u; u.f = v; return u.t; }\nextern \"C\" __global__ void __launch_bounds__(1) test(float* data0) {\n  if (1) {\n    *(data0+0) = 1.0f;\n  }\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/cstyle/cuda_const_inf_nan.expected",
    "content": "#define INFINITY (__int_as_float(0x7f800000))\n#define NAN (__int_as_float(0x7fffffff))\ntemplate <class T, class F> __device__ __forceinline__ T tg_bitcast(F v) { union U { F f; T t; }; U u; u.f = v; return u.t; }\nextern \"C\" __global__ void __launch_bounds__(1) test(float* data0) {\n  *(data0+0) = ((float)(INFINITY));\n  *(data0+1) = ((float)(NAN));\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/cstyle/cuda_gated_load.expected",
    "content": "#define INFINITY (__int_as_float(0x7f800000))\n#define NAN (__int_as_float(0x7fffffff))\ntemplate <class T, class F> __device__ __forceinline__ T tg_bitcast(F v) { union U { F f; T t; }; U u; u.f = v; return u.t; }\nextern \"C\" __global__ void __launch_bounds__(1) test(float* data0, float* data1) {\n  float val0 = (1?*(data0+0):0.0f);\n  *(data1+0) = val0;\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/cstyle/cuda_loop.expected",
    "content": "#define INFINITY (__int_as_float(0x7f800000))\n#define NAN (__int_as_float(0x7fffffff))\ntemplate <class T, class F> __device__ __forceinline__ T tg_bitcast(F v) { union U { F f; T t; }; U u; u.f = v; return u.t; }\nextern \"C\" __global__ void __launch_bounds__(1) test(float* data0) {\n  for (int Lidx0 = 0; Lidx0 < 10; Lidx0++) {\n    float val0 = (*(data0+Lidx0));\n    *(data0+Lidx0) = val0;\n  }\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/cstyle/cuda_multi_param.expected",
    "content": "#define INFINITY (__int_as_float(0x7f800000))\n#define NAN (__int_as_float(0x7fffffff))\ntemplate <class T, class F> __device__ __forceinline__ T tg_bitcast(F v) { union U { F f; T t; }; U u; u.f = v; return u.t; }\nextern \"C\" __global__ void __launch_bounds__(1) test(float* data0, float* data1, float* data2, float* data3) {\n  float val0 = (*(data0+0));\n  float val1 = (*(data1+0));\n  *(data3+0) = (val0+val1);\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/cstyle/cuda_nested_loops.expected",
    "content": "#define INFINITY (__int_as_float(0x7f800000))\n#define NAN (__int_as_float(0x7fffffff))\ntemplate <class T, class F> __device__ __forceinline__ T tg_bitcast(F v) { union U { F f; T t; }; U u; u.f = v; return u.t; }\nextern \"C\" __global__ void __launch_bounds__(1) test(float* data0) {\n  for (int Lidx0 = 0; Lidx0 < 10; Lidx0++) {\n    for (int Lidx1 = 0; Lidx1 < 5; Lidx1++) {\n      int alu0 = (Lidx0+Lidx1);\n      float val0 = (*(data0+alu0));\n      *(data0+alu0) = val0;\n    }\n  }\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/cstyle/cuda_shared_memory.expected",
    "content": "#define INFINITY (__int_as_float(0x7f800000))\n#define NAN (__int_as_float(0x7fffffff))\ntemplate <class T, class F> __device__ __forceinline__ T tg_bitcast(F v) { union U { F f; T t; }; U u; u.f = v; return u.t; }\nextern \"C\" __global__ void __launch_bounds__(1) test(float* data0) {\n  __shared__ __align__(16) float temp0[256];\n  *(temp0+0) = 0.0f;\n  __syncthreads();\n  float val0 = (*(temp0+0));\n  *(data0+0) = val0;\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/cstyle/cuda_simple_add_f32.expected",
    "content": "#define INFINITY (__int_as_float(0x7f800000))\n#define NAN (__int_as_float(0x7fffffff))\ntemplate <class T, class F> __device__ __forceinline__ T tg_bitcast(F v) { union U { F f; T t; }; U u; u.f = v; return u.t; }\nextern \"C\" __global__ void __launch_bounds__(1) test(float* data0, float* data1, float* data2) {\n  float val0 = (*(data0+0));\n  float val1 = (*(data1+0));\n  *(data2+0) = (val0+val1);\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/cstyle/cuda_simple_mul_i32.expected",
    "content": "#define INFINITY (__int_as_float(0x7f800000))\n#define NAN (__int_as_float(0x7fffffff))\ntemplate <class T, class F> __device__ __forceinline__ T tg_bitcast(F v) { union U { F f; T t; }; U u; u.f = v; return u.t; }\nextern \"C\" __global__ void __launch_bounds__(1) test(int* data0, int* data1, int* data2) {\n  int val0 = (*(data0+0));\n  int val1 = (*(data1+0));\n  *(data2+0) = (val0*val1);\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/cstyle/cuda_special_dims.expected",
    "content": "#define INFINITY (__int_as_float(0x7f800000))\n#define NAN (__int_as_float(0x7fffffff))\ntemplate <class T, class F> __device__ __forceinline__ T tg_bitcast(F v) { union U { F f; T t; }; U u; u.f = v; return u.t; }\nextern \"C\" __global__ void __launch_bounds__(32) test(float* data0) {\n  int gidx0 = blockIdx.x; /* 32 */\n  int lidx0 = threadIdx.x; /* 32 */\n  int alu0 = (gidx0+lidx0);\n  float val0 = (*(data0+alu0));\n  *(data0+alu0) = val0;\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/cstyle/cuda_unary_sqrt_f16.expected",
    "content": "#define INFINITY (__int_as_float(0x7f800000))\n#define NAN (__int_as_float(0x7fffffff))\ntemplate <class T, class F> __device__ __forceinline__ T tg_bitcast(F v) { union U { F f; T t; }; U u; u.f = v; return u.t; }\n#include <cuda_fp16.h>\nextern \"C\" __global__ void __launch_bounds__(1) test(half* data0, half* data1) {\n  half val0 = (*(data0+0));\n  *(data1+0) = hsqrt(val0);\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/cstyle/cuda_unary_sqrt_f32.expected",
    "content": "#define INFINITY (__int_as_float(0x7f800000))\n#define NAN (__int_as_float(0x7fffffff))\ntemplate <class T, class F> __device__ __forceinline__ T tg_bitcast(F v) { union U { F f; T t; }; U u; u.f = v; return u.t; }\nextern \"C\" __global__ void __launch_bounds__(1) test(float* data0, float* data1) {\n  float val0 = (*(data0+0));\n  *(data1+0) = sqrt(val0);\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/cstyle/cuda_vectorize_gep.expected",
    "content": "#define INFINITY (__int_as_float(0x7f800000))\n#define NAN (__int_as_float(0x7fffffff))\ntemplate <class T, class F> __device__ __forceinline__ T tg_bitcast(F v) { union U { F f; T t; }; U u; u.f = v; return u.t; }\nextern \"C\" __global__ void __launch_bounds__(1) test(float* data0, float* data1) {\n  float val0 = (*(data0+0));\n  float val1 = (*(data0+1));\n  float val2 = (*(data0+2));\n  float val3 = (*(data0+3));\n  *(data1+0) = make_float4(val0,val1,val2,val3).z;\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/cstyle/cuda_where_select.expected",
    "content": "#define INFINITY (__int_as_float(0x7f800000))\n#define NAN (__int_as_float(0x7fffffff))\ntemplate <class T, class F> __device__ __forceinline__ T tg_bitcast(F v) { union U { F f; T t; }; U u; u.f = v; return u.t; }\nextern \"C\" __global__ void __launch_bounds__(1) test(float* data0, float* data1, float* data2) {\n  float val0 = (*(data0+0));\n  float val1 = (*(data1+0));\n  float alu0 = (1?val0:val1);\n  *(data2+0) = alu0;\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/cstyle/dune",
    "content": "(executable\n (name generate_actual)\n (libraries tolk tolk.ir))\n\n(rule\n (package tolk)\n (targets\n  clang_bitcast_f32_to_i32.actual\n  clang_cast_f16_to_f32.actual\n  clang_conditional.actual\n  clang_const_inf_nan.actual\n  clang_gated_load.actual\n  clang_loop.actual\n  clang_multi_param.actual\n  clang_nested_loops.actual\n  clang_simple_add_f32.actual\n  clang_simple_mul_i32.actual\n  clang_unary_sqrt_f16.actual\n  clang_unary_sqrt_f32.actual\n  clang_vectorize_gep.actual\n  clang_where_select.actual\n  cuda_bitcast_f32_to_i32.actual\n  cuda_cast_f16_to_f32.actual\n  cuda_conditional.actual\n  cuda_const_inf_nan.actual\n  cuda_gated_load.actual\n  cuda_loop.actual\n  cuda_multi_param.actual\n  cuda_nested_loops.actual\n  cuda_shared_memory.actual\n  cuda_simple_add_f32.actual\n  cuda_simple_mul_i32.actual\n  cuda_special_dims.actual\n  cuda_unary_sqrt_f16.actual\n  cuda_unary_sqrt_f32.actual\n  cuda_vectorize_gep.actual\n  cuda_where_select.actual\n  metal_bitcast_f32_to_i32.actual\n  metal_cast_f16_to_f32.actual\n  metal_conditional.actual\n  metal_const_inf_nan.actual\n  metal_gated_load.actual\n  metal_loop.actual\n  metal_multi_param.actual\n  metal_nested_loops.actual\n  metal_shared_memory.actual\n  metal_simple_add_f32.actual\n  metal_simple_mul_i32.actual\n  metal_special_dims.actual\n  metal_unary_sqrt_f16.actual\n  metal_unary_sqrt_f32.actual\n  metal_vectorize_gep.actual\n  metal_where_select.actual\n  opencl_bitcast_f32_to_i32.actual\n  opencl_cast_f16_to_f32.actual\n  opencl_conditional.actual\n  opencl_const_inf_nan.actual\n  opencl_gated_load.actual\n  opencl_loop.actual\n  opencl_multi_param.actual\n  opencl_nested_loops.actual\n  opencl_shared_memory.actual\n  opencl_simple_add_f32.actual\n  opencl_simple_mul_i32.actual\n  opencl_special_dims.actual\n  opencl_unary_sqrt_f16.actual\n  opencl_unary_sqrt_f32.actual\n  opencl_vectorize_gep.actual\n  opencl_where_select.actual)\n (action\n  (run ./generate_actual.exe 
.)))\n\n(rule\n (alias runtest)\n (package tolk)\n (action\n  (progn\n   (diff clang_bitcast_f32_to_i32.expected clang_bitcast_f32_to_i32.actual)\n   (diff clang_cast_f16_to_f32.expected clang_cast_f16_to_f32.actual)\n   (diff clang_conditional.expected clang_conditional.actual)\n   (diff clang_const_inf_nan.expected clang_const_inf_nan.actual)\n   (diff clang_gated_load.expected clang_gated_load.actual)\n   (diff clang_loop.expected clang_loop.actual)\n   (diff clang_multi_param.expected clang_multi_param.actual)\n   (diff clang_nested_loops.expected clang_nested_loops.actual)\n   (diff clang_simple_add_f32.expected clang_simple_add_f32.actual)\n   (diff clang_simple_mul_i32.expected clang_simple_mul_i32.actual)\n   (diff clang_unary_sqrt_f16.expected clang_unary_sqrt_f16.actual)\n   (diff clang_unary_sqrt_f32.expected clang_unary_sqrt_f32.actual)\n   (diff clang_vectorize_gep.expected clang_vectorize_gep.actual)\n   (diff clang_where_select.expected clang_where_select.actual)\n   (diff cuda_bitcast_f32_to_i32.expected cuda_bitcast_f32_to_i32.actual)\n   (diff cuda_cast_f16_to_f32.expected cuda_cast_f16_to_f32.actual)\n   (diff cuda_conditional.expected cuda_conditional.actual)\n   (diff cuda_const_inf_nan.expected cuda_const_inf_nan.actual)\n   (diff cuda_gated_load.expected cuda_gated_load.actual)\n   (diff cuda_loop.expected cuda_loop.actual)\n   (diff cuda_multi_param.expected cuda_multi_param.actual)\n   (diff cuda_nested_loops.expected cuda_nested_loops.actual)\n   (diff cuda_shared_memory.expected cuda_shared_memory.actual)\n   (diff cuda_simple_add_f32.expected cuda_simple_add_f32.actual)\n   (diff cuda_simple_mul_i32.expected cuda_simple_mul_i32.actual)\n   (diff cuda_special_dims.expected cuda_special_dims.actual)\n   (diff cuda_unary_sqrt_f16.expected cuda_unary_sqrt_f16.actual)\n   (diff cuda_unary_sqrt_f32.expected cuda_unary_sqrt_f32.actual)\n   (diff cuda_vectorize_gep.expected cuda_vectorize_gep.actual)\n   (diff cuda_where_select.expected 
cuda_where_select.actual)\n   (diff metal_bitcast_f32_to_i32.expected metal_bitcast_f32_to_i32.actual)\n   (diff metal_cast_f16_to_f32.expected metal_cast_f16_to_f32.actual)\n   (diff metal_conditional.expected metal_conditional.actual)\n   (diff metal_const_inf_nan.expected metal_const_inf_nan.actual)\n   (diff metal_gated_load.expected metal_gated_load.actual)\n   (diff metal_loop.expected metal_loop.actual)\n   (diff metal_multi_param.expected metal_multi_param.actual)\n   (diff metal_nested_loops.expected metal_nested_loops.actual)\n   (diff metal_shared_memory.expected metal_shared_memory.actual)\n   (diff metal_simple_add_f32.expected metal_simple_add_f32.actual)\n   (diff metal_simple_mul_i32.expected metal_simple_mul_i32.actual)\n   (diff metal_special_dims.expected metal_special_dims.actual)\n   (diff metal_unary_sqrt_f16.expected metal_unary_sqrt_f16.actual)\n   (diff metal_unary_sqrt_f32.expected metal_unary_sqrt_f32.actual)\n   (diff metal_vectorize_gep.expected metal_vectorize_gep.actual)\n   (diff metal_where_select.expected metal_where_select.actual)\n   (diff opencl_bitcast_f32_to_i32.expected opencl_bitcast_f32_to_i32.actual)\n   (diff opencl_cast_f16_to_f32.expected opencl_cast_f16_to_f32.actual)\n   (diff opencl_conditional.expected opencl_conditional.actual)\n   (diff opencl_const_inf_nan.expected opencl_const_inf_nan.actual)\n   (diff opencl_gated_load.expected opencl_gated_load.actual)\n   (diff opencl_loop.expected opencl_loop.actual)\n   (diff opencl_multi_param.expected opencl_multi_param.actual)\n   (diff opencl_nested_loops.expected opencl_nested_loops.actual)\n   (diff opencl_shared_memory.expected opencl_shared_memory.actual)\n   (diff opencl_simple_add_f32.expected opencl_simple_add_f32.actual)\n   (diff opencl_simple_mul_i32.expected opencl_simple_mul_i32.actual)\n   (diff opencl_special_dims.expected opencl_special_dims.actual)\n   (diff opencl_unary_sqrt_f16.expected opencl_unary_sqrt_f16.actual)\n   (diff 
opencl_unary_sqrt_f32.expected opencl_unary_sqrt_f32.actual)\n   (diff opencl_vectorize_gep.expected opencl_vectorize_gep.actual)\n   (diff opencl_where_select.expected opencl_where_select.actual))))\n"
  },
  {
    "path": "packages/tolk/test/golden/cstyle/generate_actual.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Generates .actual files for expect tests. Each file contains tolk's rendered\n   output for a specific backend + test case, matching the programs in\n   generate_expected.py. Dune diff rules compare .actual against .expected\n   (generated from the reference renderer). *)\n\nopen Tolk\nopen Tolk_ir\nmodule P = Program\n\nlet global_ptr dt = Dtype.Ptr.create dt ~addrspace:Global ~size:(-1)\nlet local_ptr dt = Dtype.Ptr.create dt ~addrspace:Local ~size:(-1)\n\n(* IR program builders — must match generate_expected.py exactly. *)\n\nlet make_simple_add_f32 () =\n  let dt = Dtype.Val.float32 in\n  let ptr = global_ptr dt in\n  let b = P.create () in\n  let p0 = P.emit b (Param { idx = 0; dtype = ptr }) in\n  let p1 = P.emit b (Param { idx = 1; dtype = ptr }) in\n  let p2 = P.emit b (Param { idx = 2; dtype = ptr }) in\n  let c0 = P.emit b (Const { value = Const.int Dtype.Val.int32 0; dtype = Dtype.Val.int32 }) in\n  let idx0 = P.emit b (Index { ptr = p0; idxs = [ c0 ]; gate = None; dtype = ptr }) in\n  let idx1 = P.emit b (Index { ptr = p1; idxs = [ c0 ]; gate = None; dtype = ptr }) in\n  let ld0 = P.emit b (Load { src = idx0; alt = None; dtype = dt }) in\n  let ld1 = P.emit b (Load { src = idx1; alt = None; dtype = dt }) in\n  let sum = P.emit b (Binary { op = `Add; lhs = ld0; rhs = ld1; dtype = dt }) in\n  let idx2 = P.emit b (Index { ptr = p2; idxs = [ c0 ]; gate = None; dtype = ptr }) in\n  let _ = P.emit b (Store { dst = idx2; value = sum }) in\n  P.finish b\n\nlet make_simple_mul_i32 () =\n  let dt = Dtype.Val.int32 in\n  let ptr = global_ptr dt in\n  let b = P.create () in\n  let p0 = P.emit b (Param { idx = 0; dtype = ptr }) in\n  let p1 = P.emit b (Param { idx = 1; dtype = ptr }) in\n  let p2 
= P.emit b (Param { idx = 2; dtype = ptr }) in\n  let c0 = P.emit b (Const { value = Const.int Dtype.Val.int32 0; dtype = Dtype.Val.int32 }) in\n  let idx0 = P.emit b (Index { ptr = p0; idxs = [ c0 ]; gate = None; dtype = ptr }) in\n  let idx1 = P.emit b (Index { ptr = p1; idxs = [ c0 ]; gate = None; dtype = ptr }) in\n  let ld0 = P.emit b (Load { src = idx0; alt = None; dtype = dt }) in\n  let ld1 = P.emit b (Load { src = idx1; alt = None; dtype = dt }) in\n  let prod = P.emit b (Binary { op = `Mul; lhs = ld0; rhs = ld1; dtype = dt }) in\n  let idx2 = P.emit b (Index { ptr = p2; idxs = [ c0 ]; gate = None; dtype = ptr }) in\n  let _ = P.emit b (Store { dst = idx2; value = prod }) in\n  P.finish b\n\nlet make_loop () =\n  let dt = Dtype.Val.float32 in\n  let ptr = global_ptr dt in\n  let b = P.create () in\n  let p0 = P.emit b (Param { idx = 0; dtype = ptr }) in\n  let c10 = P.emit b (Const { value = Const.int Dtype.Val.int32 10; dtype = Dtype.Val.int32 }) in\n  let r = P.emit b (Range { size = c10; dtype = Dtype.Val.int32; axis = 0; sub = []; kind = Axis_kind.Loop }) in\n  let idx0 = P.emit b (Index { ptr = p0; idxs = [ r ]; gate = None; dtype = ptr }) in\n  let ld = P.emit b (Load { src = idx0; alt = None; dtype = dt }) in\n  let idx1 = P.emit b (Index { ptr = p0; idxs = [ r ]; gate = None; dtype = ptr }) in\n  let _ = P.emit b (Store { dst = idx1; value = ld }) in\n  let _ = P.emit b (End_range { dep = ld; range = r }) in\n  P.finish b\n\nlet make_gated_load () =\n  let dt = Dtype.Val.float32 in\n  let ptr = global_ptr dt in\n  let b = P.create () in\n  let p0 = P.emit b (Param { idx = 0; dtype = ptr }) in\n  let p1 = P.emit b (Param { idx = 1; dtype = ptr }) in\n  let c0 = P.emit b (Const { value = Const.int Dtype.Val.int32 0; dtype = Dtype.Val.int32 }) in\n  let gate = P.emit b (Const { value = Const.bool true; dtype = Dtype.Val.bool }) in\n  let idx0 = P.emit b (Index { ptr = p0; idxs = [ c0 ]; gate = Some gate; dtype = ptr }) in\n  let alt = P.emit b (Const 
{ value = Const.float dt 0.0; dtype = dt }) in\n  let ld = P.emit b (Load { src = idx0; alt = Some alt; dtype = dt }) in\n  let idx1 = P.emit b (Index { ptr = p1; idxs = [ c0 ]; gate = None; dtype = ptr }) in\n  let _ = P.emit b (Store { dst = idx1; value = ld }) in\n  P.finish b\n\nlet make_shared_memory () =\n  let dt = Dtype.Val.float32 in\n  let gptr = global_ptr dt in\n  let lptr = local_ptr dt in\n  let b = P.create () in\n  let p0 = P.emit b (Param { idx = 0; dtype = gptr }) in\n  let dl = P.emit b (Define_local { size = 256; dtype = lptr }) in\n  let c0 = P.emit b (Const { value = Const.int Dtype.Val.int32 0; dtype = Dtype.Val.int32 }) in\n  let lidx = P.emit b (Index { ptr = dl; idxs = [ c0 ]; gate = None; dtype = lptr }) in\n  let fzero = P.emit b (Const { value = Const.float dt 0.0; dtype = dt }) in\n  let _ = P.emit b (Store { dst = lidx; value = fzero }) in\n  let _ = P.emit b Barrier in\n  let ld = P.emit b (Load { src = lidx; alt = None; dtype = dt }) in\n  let gidx = P.emit b (Index { ptr = p0; idxs = [ c0 ]; gate = None; dtype = gptr }) in\n  let _ = P.emit b (Store { dst = gidx; value = ld }) in\n  P.finish b\n\nlet make_where_select () =\n  let dt = Dtype.Val.float32 in\n  let ptr = global_ptr dt in\n  let b = P.create () in\n  let p0 = P.emit b (Param { idx = 0; dtype = ptr }) in\n  let p1 = P.emit b (Param { idx = 1; dtype = ptr }) in\n  let p2 = P.emit b (Param { idx = 2; dtype = ptr }) in\n  let c0 = P.emit b (Const { value = Const.int Dtype.Val.int32 0; dtype = Dtype.Val.int32 }) in\n  let idx0 = P.emit b (Index { ptr = p0; idxs = [ c0 ]; gate = None; dtype = ptr }) in\n  let idx1 = P.emit b (Index { ptr = p1; idxs = [ c0 ]; gate = None; dtype = ptr }) in\n  let ld0 = P.emit b (Load { src = idx0; alt = None; dtype = dt }) in\n  let ld1 = P.emit b (Load { src = idx1; alt = None; dtype = dt }) in\n  let cond = P.emit b (Const { value = Const.bool true; dtype = Dtype.Val.bool }) in\n  let w = P.emit b (Ternary { op = `Where; a = cond; b = ld0; 
c = ld1; dtype = dt }) in\n  let idx2 = P.emit b (Index { ptr = p2; idxs = [ c0 ]; gate = None; dtype = ptr }) in\n  let _ = P.emit b (Store { dst = idx2; value = w }) in\n  P.finish b\n\nlet make_cast_f16_to_f32 () =\n  let from_dt = Dtype.Val.float16 in\n  let to_dt = Dtype.Val.float32 in\n  let from_ptr = global_ptr from_dt in\n  let to_ptr = global_ptr to_dt in\n  let b = P.create () in\n  let p0 = P.emit b (Param { idx = 0; dtype = from_ptr }) in\n  let p1 = P.emit b (Param { idx = 1; dtype = to_ptr }) in\n  let c0 = P.emit b (Const { value = Const.int Dtype.Val.int32 0; dtype = Dtype.Val.int32 }) in\n  let idx0 = P.emit b (Index { ptr = p0; idxs = [ c0 ]; gate = None; dtype = from_ptr }) in\n  let idx1 = P.emit b (Index { ptr = p1; idxs = [ c0 ]; gate = None; dtype = to_ptr }) in\n  let ld = P.emit b (Load { src = idx0; alt = None; dtype = from_dt }) in\n  let cast = P.emit b (Cast { src = ld; dtype = to_dt }) in\n  let _ = P.emit b (Store { dst = idx1; value = cast }) in\n  P.finish b\n\nlet make_nested_loops () =\n  let dt = Dtype.Val.float32 in\n  let ptr = global_ptr dt in\n  let b = P.create () in\n  let p0 = P.emit b (Param { idx = 0; dtype = ptr }) in\n  let c10 = P.emit b (Const { value = Const.int Dtype.Val.int32 10; dtype = Dtype.Val.int32 }) in\n  let c5 = P.emit b (Const { value = Const.int Dtype.Val.int32 5; dtype = Dtype.Val.int32 }) in\n  let r0 = P.emit b (Range { size = c10; dtype = Dtype.Val.int32; axis = 0; sub = []; kind = Axis_kind.Loop }) in\n  let r1 = P.emit b (Range { size = c5; dtype = Dtype.Val.int32; axis = 1; sub = []; kind = Axis_kind.Loop }) in\n  let sum = P.emit b (Binary { op = `Add; lhs = r0; rhs = r1; dtype = Dtype.Val.int32 }) in\n  let idx0 = P.emit b (Index { ptr = p0; idxs = [ sum ]; gate = None; dtype = ptr }) in\n  let ld = P.emit b (Load { src = idx0; alt = None; dtype = dt }) in\n  let idx1 = P.emit b (Index { ptr = p0; idxs = [ sum ]; gate = None; dtype = ptr }) in\n  let _ = P.emit b (Store { dst = idx1; value = 
ld }) in\n  let _ = P.emit b (End_range { dep = ld; range = r1 }) in\n  let _ = P.emit b (End_range { dep = r0; range = r0 }) in\n  P.finish b\n\nlet make_multi_param () =\n  let dt = Dtype.Val.float32 in\n  let ptr = global_ptr dt in\n  let b = P.create () in\n  let p0 = P.emit b (Param { idx = 0; dtype = ptr }) in\n  let p1 = P.emit b (Param { idx = 1; dtype = ptr }) in\n  let _ = P.emit b (Param { idx = 2; dtype = ptr }) in\n  let p3 = P.emit b (Param { idx = 3; dtype = ptr }) in\n  let c0 = P.emit b (Const { value = Const.int Dtype.Val.int32 0; dtype = Dtype.Val.int32 }) in\n  let idx0 = P.emit b (Index { ptr = p0; idxs = [ c0 ]; gate = None; dtype = ptr }) in\n  let idx1 = P.emit b (Index { ptr = p1; idxs = [ c0 ]; gate = None; dtype = ptr }) in\n  let ld0 = P.emit b (Load { src = idx0; alt = None; dtype = dt }) in\n  let ld1 = P.emit b (Load { src = idx1; alt = None; dtype = dt }) in\n  let sum = P.emit b (Binary { op = `Add; lhs = ld0; rhs = ld1; dtype = dt }) in\n  let idx3 = P.emit b (Index { ptr = p3; idxs = [ c0 ]; gate = None; dtype = ptr }) in\n  let _ = P.emit b (Store { dst = idx3; value = sum }) in\n  P.finish b\n\nlet make_unary_sqrt_f32 () =\n  let dt = Dtype.Val.float32 in\n  let ptr = global_ptr dt in\n  let b = P.create () in\n  let p0 = P.emit b (Param { idx = 0; dtype = ptr }) in\n  let p1 = P.emit b (Param { idx = 1; dtype = ptr }) in\n  let c0 = P.emit b (Const { value = Const.int Dtype.Val.int32 0; dtype = Dtype.Val.int32 }) in\n  let idx0 = P.emit b (Index { ptr = p0; idxs = [ c0 ]; gate = None; dtype = ptr }) in\n  let ld = P.emit b (Load { src = idx0; alt = None; dtype = dt }) in\n  let sq = P.emit b (Unary { op = `Sqrt; src = ld; dtype = dt }) in\n  let idx1 = P.emit b (Index { ptr = p1; idxs = [ c0 ]; gate = None; dtype = ptr }) in\n  let _ = P.emit b (Store { dst = idx1; value = sq }) in\n  P.finish b\n\nlet make_unary_sqrt_f16 () =\n  let dt = Dtype.Val.float16 in\n  let ptr = global_ptr dt in\n  let b = P.create () in\n  let p0 = 
P.emit b (Param { idx = 0; dtype = ptr }) in\n  let p1 = P.emit b (Param { idx = 1; dtype = ptr }) in\n  let c0 = P.emit b (Const { value = Const.int Dtype.Val.int32 0; dtype = Dtype.Val.int32 }) in\n  let idx0 = P.emit b (Index { ptr = p0; idxs = [ c0 ]; gate = None; dtype = ptr }) in\n  let ld = P.emit b (Load { src = idx0; alt = None; dtype = dt }) in\n  let sq = P.emit b (Unary { op = `Sqrt; src = ld; dtype = dt }) in\n  let idx1 = P.emit b (Index { ptr = p1; idxs = [ c0 ]; gate = None; dtype = ptr }) in\n  let _ = P.emit b (Store { dst = idx1; value = sq }) in\n  P.finish b\n\nlet make_special_dims () =\n  let dt = Dtype.Val.float32 in\n  let ptr = global_ptr dt in\n  let b = P.create () in\n  let p0 = P.emit b (Param { idx = 0; dtype = ptr }) in\n  let c32 = P.emit b (Const { value = Const.int Dtype.Val.int32 32; dtype = Dtype.Val.int32 }) in\n  let gid = P.emit b (Special { dim = Special_dim.Group_id 0; size = c32; dtype = Dtype.Val.int32 }) in\n  let lid = P.emit b (Special { dim = Special_dim.Local_id 0; size = c32; dtype = Dtype.Val.int32 }) in\n  let sum = P.emit b (Binary { op = `Add; lhs = gid; rhs = lid; dtype = Dtype.Val.int32 }) in\n  let idx0 = P.emit b (Index { ptr = p0; idxs = [ sum ]; gate = None; dtype = ptr }) in\n  let ld = P.emit b (Load { src = idx0; alt = None; dtype = dt }) in\n  let idx1 = P.emit b (Index { ptr = p0; idxs = [ sum ]; gate = None; dtype = ptr }) in\n  let _ = P.emit b (Store { dst = idx1; value = ld }) in\n  P.finish b\n\nlet make_bitcast_f32_to_i32 () =\n  let from_dt = Dtype.Val.float32 in\n  let to_dt = Dtype.Val.int32 in\n  let from_ptr = global_ptr from_dt in\n  let to_ptr = global_ptr to_dt in\n  let b = P.create () in\n  let p0 = P.emit b (Param { idx = 0; dtype = from_ptr }) in\n  let p1 = P.emit b (Param { idx = 1; dtype = to_ptr }) in\n  let c0 = P.emit b (Const { value = Const.int Dtype.Val.int32 0; dtype = Dtype.Val.int32 }) in\n  let idx0 = P.emit b (Index { ptr = p0; idxs = [ c0 ]; gate = None; dtype = 
from_ptr }) in\n  let ld = P.emit b (Load { src = idx0; alt = None; dtype = from_dt }) in\n  let bc = P.emit b (Bitcast { src = ld; dtype = to_dt }) in\n  let idx1 = P.emit b (Index { ptr = p1; idxs = [ c0 ]; gate = None; dtype = to_ptr }) in\n  let _ = P.emit b (Store { dst = idx1; value = bc }) in\n  P.finish b\n\nlet make_conditional () =\n  let dt = Dtype.Val.float32 in\n  let ptr = global_ptr dt in\n  let b = P.create () in\n  let p0 = P.emit b (Param { idx = 0; dtype = ptr }) in\n  let c0 = P.emit b (Const { value = Const.int Dtype.Val.int32 0; dtype = Dtype.Val.int32 }) in\n  let cond = P.emit b (Const { value = Const.bool true; dtype = Dtype.Val.bool }) in\n  let if_ = P.emit b (If { cond; idx_for_dedup = c0 }) in\n  let idx0 = P.emit b (Index { ptr = p0; idxs = [ c0 ]; gate = None; dtype = ptr }) in\n  let fone = P.emit b (Const { value = Const.float dt 1.0; dtype = dt }) in\n  let _ = P.emit b (Store { dst = idx0; value = fone }) in\n  let _ = P.emit b (Endif { if_ }) in\n  P.finish b\n\nlet make_const_inf_nan () =\n  let dt = Dtype.Val.float32 in\n  let ptr = global_ptr dt in\n  let b = P.create () in\n  let p0 = P.emit b (Param { idx = 0; dtype = ptr }) in\n  let c0 = P.emit b (Const { value = Const.int Dtype.Val.int32 0; dtype = Dtype.Val.int32 }) in\n  let c1 = P.emit b (Const { value = Const.int Dtype.Val.int32 1; dtype = Dtype.Val.int32 }) in\n  let finf = P.emit b (Const { value = Const.float dt infinity; dtype = dt }) in\n  let fnan = P.emit b (Const { value = Const.float dt nan; dtype = dt }) in\n  let idx0 = P.emit b (Index { ptr = p0; idxs = [ c0 ]; gate = None; dtype = ptr }) in\n  let _ = P.emit b (Store { dst = idx0; value = finf }) in\n  let idx1 = P.emit b (Index { ptr = p0; idxs = [ c1 ]; gate = None; dtype = ptr }) in\n  let _ = P.emit b (Store { dst = idx1; value = fnan }) in\n  P.finish b\n\nlet make_vectorize_gep () =\n  let dt = Dtype.Val.float32 in\n  let vdt = Dtype.Val.vec 4 dt in\n  let ptr = global_ptr dt in\n  let b = P.create 
() in\n  let p0 = P.emit b (Param { idx = 0; dtype = ptr }) in\n  let p1 = P.emit b (Param { idx = 1; dtype = ptr }) in\n  let c0 = P.emit b (Const { value = Const.int Dtype.Val.int32 0; dtype = Dtype.Val.int32 }) in\n  let c1 = P.emit b (Const { value = Const.int Dtype.Val.int32 1; dtype = Dtype.Val.int32 }) in\n  let c2 = P.emit b (Const { value = Const.int Dtype.Val.int32 2; dtype = Dtype.Val.int32 }) in\n  let c3 = P.emit b (Const { value = Const.int Dtype.Val.int32 3; dtype = Dtype.Val.int32 }) in\n  let idx0 = P.emit b (Index { ptr = p0; idxs = [ c0 ]; gate = None; dtype = ptr }) in\n  let idx1 = P.emit b (Index { ptr = p0; idxs = [ c1 ]; gate = None; dtype = ptr }) in\n  let idx2 = P.emit b (Index { ptr = p0; idxs = [ c2 ]; gate = None; dtype = ptr }) in\n  let idx3 = P.emit b (Index { ptr = p0; idxs = [ c3 ]; gate = None; dtype = ptr }) in\n  let ld0 = P.emit b (Load { src = idx0; alt = None; dtype = dt }) in\n  let ld1 = P.emit b (Load { src = idx1; alt = None; dtype = dt }) in\n  let ld2 = P.emit b (Load { src = idx2; alt = None; dtype = dt }) in\n  let ld3 = P.emit b (Load { src = idx3; alt = None; dtype = dt }) in\n  let vec = P.emit b (Vectorize { srcs = [ ld0; ld1; ld2; ld3 ]; dtype = vdt }) in\n  let gep = P.emit b (Gep { src = vec; idxs = [2]; dtype = dt }) in\n  let oidx = P.emit b (Index { ptr = p1; idxs = [ c0 ]; gate = None; dtype = ptr }) in\n  let _ = P.emit b (Store { dst = oidx; value = gep }) in\n  P.finish b\n\n(* Test cases: each pairs a program builder with the list of\n   (name, renderer) backends it runs on. 
*)\n\ntype test_case = {\n  name : string;\n  prog : Program.t;\n  backends : (string * Renderer.t) list;\n}\n\nlet all_renderers =\n  [\n    (\"clang\", Cstyle.clang_no_abi);\n    (\"cuda\", Cstyle.cuda Gpu_target.SM80);\n    (\"metal\", Cstyle.metal);\n    (\"opencl\", Cstyle.opencl);\n  ]\n\nlet gpu_renderers = List.filter (fun (name, _) -> name <> \"clang\") all_renderers\n\nlet test_cases =\n  [\n    {\n      name = \"simple_add_f32\";\n      prog = make_simple_add_f32 ();\n      backends = all_renderers;\n    };\n    {\n      name = \"simple_mul_i32\";\n      prog = make_simple_mul_i32 ();\n      backends = all_renderers;\n    };\n    { name = \"loop\"; prog = make_loop (); backends = all_renderers };\n    { name = \"gated_load\"; prog = make_gated_load (); backends = all_renderers };\n    {\n      name = \"shared_memory\";\n      prog = make_shared_memory ();\n      backends = gpu_renderers;\n    };\n    {\n      name = \"where_select\";\n      prog = make_where_select ();\n      backends = all_renderers;\n    };\n    {\n      name = \"cast_f16_to_f32\";\n      prog = make_cast_f16_to_f32 ();\n      backends = all_renderers;\n    };\n    {\n      name = \"nested_loops\";\n      prog = make_nested_loops ();\n      backends = all_renderers;\n    };\n    {\n      name = \"multi_param\";\n      prog = make_multi_param ();\n      backends = all_renderers;\n    };\n    {\n      name = \"unary_sqrt_f32\";\n      prog = make_unary_sqrt_f32 ();\n      backends = all_renderers;\n    };\n    {\n      name = \"unary_sqrt_f16\";\n      prog = make_unary_sqrt_f16 ();\n      backends = all_renderers;\n    };\n    {\n      name = \"special_dims\";\n      prog = make_special_dims ();\n      backends = gpu_renderers;\n    };\n    {\n      name = \"bitcast_f32_to_i32\";\n      prog = make_bitcast_f32_to_i32 ();\n      backends = all_renderers;\n    };\n    {\n      name = \"conditional\";\n      prog = make_conditional ();\n      backends = all_renderers;\n    };\n    {\n      
name = \"const_inf_nan\";\n      prog = make_const_inf_nan ();\n      backends = all_renderers;\n    };\n    {\n      name = \"vectorize_gep\";\n      prog = make_vectorize_gep ();\n      backends = all_renderers;\n    };\n  ]\n\nlet () =\n  let dir = Sys.argv.(1) in\n  List.iter\n    (fun { name; prog; backends } ->\n      List.iter\n        (fun (backend_name, renderer) ->\n          let out = String.trim (Renderer.render renderer ~name:\"test\" prog) in\n          let filename =\n            Filename.concat dir\n              (Printf.sprintf \"%s_%s.actual\" backend_name name)\n          in\n          let oc = open_out filename in\n          output_string oc out;\n          output_char oc '\\n';\n          close_out oc)\n        backends)\n    test_cases\n"
  },
  {
    "path": "packages/tolk/test/golden/cstyle/generate_expected.py",
    "content": "#!/usr/bin/env python3\n\"\"\"Generate tinygrad reference .expected files for expect tests.\n\nConstructs linearized UOp programs and calls the renderer directly (bypassing\nget_program's rewrite pipeline). This produces rendered source code from\ntinygrad's renderer that matches the flat IR programs constructed in tolk's\ngenerate_actual.ml.\n\nUsage:\n    uv run tolk/test/golden/cstyle/generate_expected.py\n\nAfter running, commit the generated .expected files. Dune's expect tests diff\ntolk's .actual output against these tinygrad-generated .expected files.\n\"\"\"\n\nimport os\nimport sys\n\nsys.path.insert(\n    0,\n    os.path.join(\n        os.path.dirname(__file__), \"..\", \"..\", \"..\", \"..\", \"..\", \"_tinygrad\"\n    ),\n)\n\nfrom tinygrad.uop.ops import UOp, Ops, KernelInfo, AxisType\nfrom tinygrad.dtype import dtypes, AddrSpace\nfrom tinygrad.renderer.cstyle import ClangRenderer, CUDARenderer, MetalRenderer, OpenCLRenderer\n\nOUT_DIR = os.path.dirname(__file__)\n\nRENDERERS = {}\n\nfor _name, _ctor in [\n    (\"cuda\", lambda: CUDARenderer(arch=\"sm_80\")),\n    (\"metal\", lambda: MetalRenderer()),\n    (\"opencl\", lambda: OpenCLRenderer()),\n    (\"clang\", lambda: ClangRenderer()),\n]:\n    try:\n        RENDERERS[_name] = _ctor()\n    except Exception as e:\n        print(f\"WARNING: skipping {_name} renderer: {e}\")\n\n\ndef write_expected(name, content):\n    \"\"\"Write a .expected file.\"\"\"\n    path = os.path.join(OUT_DIR, f\"{name}.expected\")\n    with open(path, \"w\") as f:\n        f.write(content + \"\\n\")\n    print(f\"  wrote {path}\")\n\n\n# ── Linearized program builders ──\n# Each returns a list[UOp] in linearized (topologically sorted) form.\n# These correspond to the OCaml make_* functions in test_renderer.ml.\n#\n# Key differences from kernel-level UOps:\n# - INDEX uses ptr=True so dtype is PtrDType (required by renderer)\n# - RANGE has a single source (upper bound), not (start, end)\n# - RANGE arg is 
(axis_index, AxisType) tuple\n# - DEFINE_LOCAL uses ptr(size=N, addrspace=LOCAL) for the dtype\n\n\ndef build_simple_add_f32():\n    \"\"\"Two loads, one add, one store (float32).\"\"\"\n    sink = UOp(Ops.SINK, dtypes.void, (), arg=KernelInfo())\n    a = UOp(Ops.PARAM, dtypes.float32.ptr(), (), 0)\n    b = UOp(Ops.PARAM, dtypes.float32.ptr(), (), 1)\n    c = UOp(Ops.PARAM, dtypes.float32.ptr(), (), 2)\n    idx = UOp.const(dtypes.int, 0)\n    idx_a = a.index(idx, ptr=True)\n    ld_a = UOp(Ops.LOAD, dtypes.float32, (idx_a,))\n    idx_b = b.index(idx, ptr=True)\n    ld_b = UOp(Ops.LOAD, dtypes.float32, (idx_b,))\n    add = ld_a + ld_b\n    idx_c = c.index(idx, ptr=True)\n    store = UOp(Ops.STORE, dtypes.void, (idx_c, add))\n    return [sink, a, b, c, idx, idx_a, ld_a, idx_b, ld_b, add, idx_c, store]\n\n\ndef build_simple_mul_i32():\n    \"\"\"Integer multiply.\"\"\"\n    sink = UOp(Ops.SINK, dtypes.void, (), arg=KernelInfo())\n    a = UOp(Ops.PARAM, dtypes.int32.ptr(), (), 0)\n    b = UOp(Ops.PARAM, dtypes.int32.ptr(), (), 1)\n    c = UOp(Ops.PARAM, dtypes.int32.ptr(), (), 2)\n    idx = UOp.const(dtypes.int, 0)\n    idx_a = a.index(idx, ptr=True)\n    ld_a = UOp(Ops.LOAD, dtypes.int32, (idx_a,))\n    idx_b = b.index(idx, ptr=True)\n    ld_b = UOp(Ops.LOAD, dtypes.int32, (idx_b,))\n    mul = ld_a * ld_b\n    idx_c = c.index(idx, ptr=True)\n    store = UOp(Ops.STORE, dtypes.void, (idx_c, mul))\n    return [sink, a, b, c, idx, idx_a, ld_a, idx_b, ld_b, mul, idx_c, store]\n\n\ndef build_loop():\n    \"\"\"For loop with load/store.\"\"\"\n    sink = UOp(Ops.SINK, dtypes.void, (), arg=KernelInfo())\n    a = UOp(Ops.PARAM, dtypes.float32.ptr(), (), 0)\n    ten = UOp.const(dtypes.int, 10)\n    ridx = UOp(Ops.RANGE, dtypes.int, (ten,), (0, AxisType.LOOP))\n    idx_ld = a.index(ridx, ptr=True)\n    ld = UOp(Ops.LOAD, dtypes.float32, (idx_ld,))\n    idx_st = a.index(ridx, ptr=True)\n    store = UOp(Ops.STORE, dtypes.void, (idx_st, ld))\n    end = UOp(Ops.END, dtypes.void, 
(ridx,))\n    return [sink, a, ten, ridx, idx_ld, ld, idx_st, store, end]\n\n\ndef build_gated_load():\n    \"\"\"Gated load with alt value.\"\"\"\n    sink = UOp(Ops.SINK, dtypes.void, (), arg=KernelInfo())\n    a = UOp(Ops.PARAM, dtypes.float32.ptr(), (), 0)\n    b = UOp(Ops.PARAM, dtypes.float32.ptr(), (), 1)\n    idx = UOp.const(dtypes.int, 0)\n    gate = UOp.const(dtypes.bool, True)\n    alt = UOp.const(dtypes.float32, 0.0)\n    idx_a = UOp(Ops.INDEX, dtypes.float32.ptr(), (a, idx, gate))\n    ld = UOp(Ops.LOAD, dtypes.float32, (idx_a, alt))\n    idx_b = b.index(idx, ptr=True)\n    store = UOp(Ops.STORE, dtypes.void, (idx_b, ld))\n    return [sink, a, b, idx, gate, alt, idx_a, ld, idx_b, store]\n\n\ndef build_shared_memory():\n    \"\"\"Shared memory + barrier.\"\"\"\n    local_ptr = dtypes.float32.ptr(size=256, addrspace=AddrSpace.LOCAL)\n    sink = UOp(Ops.SINK, dtypes.void, (), arg=KernelInfo())\n    a = UOp(Ops.PARAM, dtypes.float32.ptr(), (), 0)\n    temp = UOp(Ops.DEFINE_LOCAL, local_ptr, (), \"smem\")\n    idx = UOp.const(dtypes.int, 0)\n    zero = UOp.const(dtypes.float32, 0.0)\n    idx_local = temp.index(idx, ptr=True)\n    store_local = UOp(Ops.STORE, dtypes.void, (idx_local, zero))\n    barrier = UOp(Ops.BARRIER, dtypes.void, (store_local,))\n    after = UOp(Ops.AFTER, local_ptr, (temp, barrier))\n    idx_local2 = after.index(idx, ptr=True)\n    ld = UOp(Ops.LOAD, dtypes.float32, (idx_local2,))\n    idx_global = a.index(idx, ptr=True)\n    store_global = UOp(Ops.STORE, dtypes.void, (idx_global, ld))\n    return [sink, a, temp, idx, zero, idx_local, store_local, barrier, after, idx_local2, ld, idx_global, store_global]\n\n\ndef build_where_select():\n    \"\"\"Ternary where.\"\"\"\n    sink = UOp(Ops.SINK, dtypes.void, (), arg=KernelInfo())\n    a = UOp(Ops.PARAM, dtypes.float32.ptr(), (), 0)\n    b = UOp(Ops.PARAM, dtypes.float32.ptr(), (), 1)\n    c = UOp(Ops.PARAM, dtypes.float32.ptr(), (), 2)\n    idx = UOp.const(dtypes.int, 0)\n    idx_a = 
a.index(idx, ptr=True)\n    ld_a = UOp(Ops.LOAD, dtypes.float32, (idx_a,))\n    idx_b = b.index(idx, ptr=True)\n    ld_b = UOp(Ops.LOAD, dtypes.float32, (idx_b,))\n    cond = UOp.const(dtypes.bool, True)\n    where = cond.where(ld_a, ld_b)\n    idx_c = c.index(idx, ptr=True)\n    store = UOp(Ops.STORE, dtypes.void, (idx_c, where))\n    return [sink, a, b, c, idx, idx_a, ld_a, idx_b, ld_b, cond, where, idx_c, store]\n\n\ndef build_cast_f16_to_f32():\n    \"\"\"Float16 to Float32 cast.\"\"\"\n    sink = UOp(Ops.SINK, dtypes.void, (), arg=KernelInfo())\n    a = UOp(Ops.PARAM, dtypes.half.ptr(), (), 0)\n    b = UOp(Ops.PARAM, dtypes.float32.ptr(), (), 1)\n    idx = UOp.const(dtypes.int, 0)\n    idx_a = a.index(idx, ptr=True)\n    ld = UOp(Ops.LOAD, dtypes.half, (idx_a,))\n    cast = UOp(Ops.CAST, dtypes.float32, (ld,))\n    idx_b = b.index(idx, ptr=True)\n    store = UOp(Ops.STORE, dtypes.void, (idx_b, cast))\n    return [sink, a, b, idx, idx_a, ld, cast, idx_b, store]\n\n\ndef build_nested_loops():\n    \"\"\"Two nested loops.\"\"\"\n    sink = UOp(Ops.SINK, dtypes.void, (), arg=KernelInfo())\n    a = UOp(Ops.PARAM, dtypes.float32.ptr(), (), 0)\n    ten = UOp.const(dtypes.int, 10)\n    five = UOp.const(dtypes.int, 5)\n    ridx0 = UOp(Ops.RANGE, dtypes.int, (ten,), (0, AxisType.LOOP))\n    ridx1 = UOp(Ops.RANGE, dtypes.int, (five,), (1, AxisType.LOOP))\n    combined = ridx0 + ridx1\n    idx_ld = a.index(combined, ptr=True)\n    ld = UOp(Ops.LOAD, dtypes.float32, (idx_ld,))\n    idx_st = a.index(combined, ptr=True)\n    store = UOp(Ops.STORE, dtypes.void, (idx_st, ld))\n    end1 = UOp(Ops.END, dtypes.void, (ridx1,))\n    end0 = UOp(Ops.END, dtypes.void, (ridx0,))\n    return [sink, a, ten, five, ridx0, ridx1, combined, idx_ld, ld, idx_st, store, end1, end0]\n\n\ndef build_multi_param():\n    \"\"\"4 params, add two and store.\"\"\"\n    sink = UOp(Ops.SINK, dtypes.void, (), arg=KernelInfo())\n    a = UOp(Ops.PARAM, dtypes.float32.ptr(), (), 0)\n    b = UOp(Ops.PARAM, 
dtypes.float32.ptr(), (), 1)\n    c = UOp(Ops.PARAM, dtypes.float32.ptr(), (), 2)\n    d = UOp(Ops.PARAM, dtypes.float32.ptr(), (), 3)\n    idx = UOp.const(dtypes.int, 0)\n    idx_a = a.index(idx, ptr=True)\n    ld_a = UOp(Ops.LOAD, dtypes.float32, (idx_a,))\n    idx_b = b.index(idx, ptr=True)\n    ld_b = UOp(Ops.LOAD, dtypes.float32, (idx_b,))\n    add = ld_a + ld_b\n    idx_d = d.index(idx, ptr=True)\n    store = UOp(Ops.STORE, dtypes.void, (idx_d, add))\n    return [sink, a, b, c, d, idx, idx_a, ld_a, idx_b, ld_b, add, idx_d, store]\n\n\ndef build_unary_sqrt_f32():\n    \"\"\"Sqrt on float32.\"\"\"\n    sink = UOp(Ops.SINK, dtypes.void, (), arg=KernelInfo())\n    a = UOp(Ops.PARAM, dtypes.float32.ptr(), (), 0)\n    b = UOp(Ops.PARAM, dtypes.float32.ptr(), (), 1)\n    idx = UOp.const(dtypes.int, 0)\n    idx_a = a.index(idx, ptr=True)\n    ld = UOp(Ops.LOAD, dtypes.float32, (idx_a,))\n    sq = UOp(Ops.SQRT, dtypes.float32, (ld,))\n    idx_b = b.index(idx, ptr=True)\n    store = UOp(Ops.STORE, dtypes.void, (idx_b, sq))\n    return [sink, a, b, idx, idx_a, ld, sq, idx_b, store]\n\n\ndef build_unary_sqrt_f16():\n    \"\"\"Sqrt on float16 — exercises half-precision intrinsic paths.\"\"\"\n    sink = UOp(Ops.SINK, dtypes.void, (), arg=KernelInfo())\n    a = UOp(Ops.PARAM, dtypes.half.ptr(), (), 0)\n    b = UOp(Ops.PARAM, dtypes.half.ptr(), (), 1)\n    idx = UOp.const(dtypes.int, 0)\n    idx_a = a.index(idx, ptr=True)\n    ld = UOp(Ops.LOAD, dtypes.half, (idx_a,))\n    sq = UOp(Ops.SQRT, dtypes.half, (ld,))\n    idx_b = b.index(idx, ptr=True)\n    store = UOp(Ops.STORE, dtypes.void, (idx_b, sq))\n    return [sink, a, b, idx, idx_a, ld, sq, idx_b, store]\n\n\ndef build_special_dims():\n    \"\"\"GPU special dimensions (group_id, local_id).\"\"\"\n    sink = UOp(Ops.SINK, dtypes.void, (), arg=KernelInfo())\n    a = UOp(Ops.PARAM, dtypes.float32.ptr(), (), 0)\n    bound = UOp.const(dtypes.int, 32)\n    gid = UOp(Ops.SPECIAL, dtypes.int, (bound,), \"gidx0\")\n    lid = 
UOp(Ops.SPECIAL, dtypes.int, (bound,), \"lidx0\")\n    combined = gid + lid\n    idx_a = a.index(combined, ptr=True)\n    ld = UOp(Ops.LOAD, dtypes.float32, (idx_a,))\n    idx_st = a.index(combined, ptr=True)\n    store = UOp(Ops.STORE, dtypes.void, (idx_st, ld))\n    return [sink, a, bound, gid, lid, combined, idx_a, ld, idx_st, store]\n\n\ndef build_bitcast_f32_to_i32():\n    \"\"\"Bitcast float32 to int32.\"\"\"\n    sink = UOp(Ops.SINK, dtypes.void, (), arg=KernelInfo())\n    a = UOp(Ops.PARAM, dtypes.float32.ptr(), (), 0)\n    b = UOp(Ops.PARAM, dtypes.int32.ptr(), (), 1)\n    idx = UOp.const(dtypes.int, 0)\n    idx_a = a.index(idx, ptr=True)\n    ld = UOp(Ops.LOAD, dtypes.float32, (idx_a,))\n    bc = UOp(Ops.BITCAST, dtypes.int32, (ld,))\n    idx_b = b.index(idx, ptr=True)\n    store = UOp(Ops.STORE, dtypes.void, (idx_b, bc))\n    return [sink, a, b, idx, idx_a, ld, bc, idx_b, store]\n\n\ndef build_conditional():\n    \"\"\"If/Endif control flow.\"\"\"\n    sink = UOp(Ops.SINK, dtypes.void, (), arg=KernelInfo())\n    a = UOp(Ops.PARAM, dtypes.float32.ptr(), (), 0)\n    idx = UOp.const(dtypes.int, 0)\n    cond = UOp.const(dtypes.bool, True)\n    if_op = UOp(Ops.IF, dtypes.void, (cond,))\n    idx_a = a.index(idx, ptr=True)\n    one = UOp.const(dtypes.float32, 1.0)\n    store = UOp(Ops.STORE, dtypes.void, (idx_a, one))\n    endif = UOp(Ops.ENDIF, dtypes.void, (if_op,))\n    return [sink, a, idx, cond, if_op, idx_a, one, store, endif]\n\n\ndef build_const_inf_nan():\n    \"\"\"Special float constants: infinity and NaN.\"\"\"\n    import math\n    sink = UOp(Ops.SINK, dtypes.void, (), arg=KernelInfo())\n    a = UOp(Ops.PARAM, dtypes.float32.ptr(), (), 0)\n    idx0 = UOp.const(dtypes.int, 0)\n    idx1 = UOp.const(dtypes.int, 1)\n    inf_val = UOp.const(dtypes.float32, math.inf)\n    nan_val = UOp.const(dtypes.float32, math.nan)\n    idx_a0 = a.index(idx0, ptr=True)\n    store0 = UOp(Ops.STORE, dtypes.void, (idx_a0, inf_val))\n    idx_a1 = a.index(idx1, ptr=True)\n  
  store1 = UOp(Ops.STORE, dtypes.void, (idx_a1, nan_val))\n    return [sink, a, idx0, idx1, inf_val, nan_val, idx_a0, store0, idx_a1, store1]\n\n\ndef build_vectorize_gep():\n    \"\"\"Vectorize 4 floats, then GEP element 2.\"\"\"\n    sink = UOp(Ops.SINK, dtypes.void, (), arg=KernelInfo())\n    a = UOp(Ops.PARAM, dtypes.float32.ptr(), (), 0)\n    b = UOp(Ops.PARAM, dtypes.float32.ptr(), (), 1)\n    idx0 = UOp.const(dtypes.int, 0)\n    idx1 = UOp.const(dtypes.int, 1)\n    idx2 = UOp.const(dtypes.int, 2)\n    idx3 = UOp.const(dtypes.int, 3)\n    ia0 = a.index(idx0, ptr=True)\n    ia1 = a.index(idx1, ptr=True)\n    ia2 = a.index(idx2, ptr=True)\n    ia3 = a.index(idx3, ptr=True)\n    v0 = UOp(Ops.LOAD, dtypes.float32, (ia0,))\n    v1 = UOp(Ops.LOAD, dtypes.float32, (ia1,))\n    v2 = UOp(Ops.LOAD, dtypes.float32, (ia2,))\n    v3 = UOp(Ops.LOAD, dtypes.float32, (ia3,))\n    vec = UOp(Ops.VECTORIZE, dtypes.float32.vec(4), (v0, v1, v2, v3))\n    gep = UOp(Ops.GEP, dtypes.float32, (vec,), (2,))\n    idx_b = b.index(idx0, ptr=True)\n    store = UOp(Ops.STORE, dtypes.void, (idx_b, gep))\n    return [sink, a, b, idx0, idx1, idx2, idx3,\n            ia0, ia1, ia2, ia3, v0, v1, v2, v3,\n            vec, gep, idx_b, store]\n\n\n# ── Main ──\n\nTEST_CASES = [\n    (\"simple_add_f32\", build_simple_add_f32, None),\n    (\"simple_mul_i32\", build_simple_mul_i32, None),\n    (\"loop\", build_loop, None),\n    (\"gated_load\", build_gated_load, None),\n    (\"shared_memory\", build_shared_memory, [\"cuda\", \"metal\", \"opencl\"]),\n    (\"where_select\", build_where_select, None),\n    (\"cast_f16_to_f32\", build_cast_f16_to_f32, None),\n    (\"nested_loops\", build_nested_loops, None),\n    (\"multi_param\", build_multi_param, None),\n    (\"unary_sqrt_f32\", build_unary_sqrt_f32, None),\n    (\"unary_sqrt_f16\", build_unary_sqrt_f16, None),\n    (\"special_dims\", build_special_dims, [\"metal\", \"opencl\"]),\n    (\"bitcast_f32_to_i32\", build_bitcast_f32_to_i32, None),\n    
(\"conditional\", build_conditional, None),\n    (\"const_inf_nan\", build_const_inf_nan, None),\n    (\"vectorize_gep\", build_vectorize_gep, None),\n]\n\n\ndef main():\n    total = 0\n    for case_name, builder, backends in TEST_CASES:\n        print(f\"\\n{case_name}:\")\n        uops = builder()\n        targets = backends if backends else list(RENDERERS.keys())\n        for backend_name in targets:\n            if backend_name not in RENDERERS:\n                print(f\"  SKIP {backend_name}_{case_name}: renderer not available\")\n                continue\n            renderer = RENDERERS[backend_name]\n            snap_name = f\"{backend_name}_{case_name}\"\n            try:\n                src = renderer.render(uops).strip()\n                write_expected(snap_name, src)\n                total += 1\n            except Exception as e:\n                print(f\"  SKIP {snap_name}: {e}\")\n\n    print(f\"\\nDone. Generated {total} .expected files in {OUT_DIR}\")\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "packages/tolk/test/golden/cstyle/metal_bitcast_f32_to_i32.expected",
    "content": "#include <metal_stdlib>\nusing namespace metal;\nkernel void test(device float* data0, device int* data1, uint3 gid [[threadgroup_position_in_grid]], uint3 lid [[thread_position_in_threadgroup]]) {\n  float val0 = (*(data0+0));\n  *(data1+0) = as_type<int>((float)(val0));\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/cstyle/metal_cast_f16_to_f32.expected",
    "content": "#include <metal_stdlib>\nusing namespace metal;\nkernel void test(device half* data0, device float* data1, uint3 gid [[threadgroup_position_in_grid]], uint3 lid [[thread_position_in_threadgroup]]) {\n  half val0 = (*(data0+0));\n  *(data1+0) = ((float)(val0));\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/cstyle/metal_conditional.expected",
    "content": "#include <metal_stdlib>\nusing namespace metal;\nkernel void test(device float* data0, uint3 gid [[threadgroup_position_in_grid]], uint3 lid [[thread_position_in_threadgroup]]) {\n  if (1) {\n    *(data0+0) = 1.0f;\n  }\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/cstyle/metal_const_inf_nan.expected",
    "content": "#include <metal_stdlib>\nusing namespace metal;\nkernel void test(device float* data0, uint3 gid [[threadgroup_position_in_grid]], uint3 lid [[thread_position_in_threadgroup]]) {\n  *(data0+0) = ((float)(INFINITY));\n  *(data0+1) = ((float)(NAN));\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/cstyle/metal_gated_load.expected",
    "content": "#include <metal_stdlib>\nusing namespace metal;\nkernel void test(device float* data0, device float* data1, uint3 gid [[threadgroup_position_in_grid]], uint3 lid [[thread_position_in_threadgroup]]) {\n  float val0 = (1?*(data0+0):0.0f);\n  *(data1+0) = val0;\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/cstyle/metal_loop.expected",
    "content": "#include <metal_stdlib>\nusing namespace metal;\nkernel void test(device float* data0, uint3 gid [[threadgroup_position_in_grid]], uint3 lid [[thread_position_in_threadgroup]]) {\n  for (int Lidx0 = 0; Lidx0 < 10; Lidx0++) {\n    float val0 = (*(data0+Lidx0));\n    *(data0+Lidx0) = val0;\n  }\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/cstyle/metal_multi_param.expected",
    "content": "#include <metal_stdlib>\nusing namespace metal;\nkernel void test(device float* data0, device float* data1, device float* data2, device float* data3, uint3 gid [[threadgroup_position_in_grid]], uint3 lid [[thread_position_in_threadgroup]]) {\n  float val0 = (*(data0+0));\n  float val1 = (*(data1+0));\n  *(data3+0) = (val0+val1);\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/cstyle/metal_nested_loops.expected",
    "content": "#include <metal_stdlib>\nusing namespace metal;\nkernel void test(device float* data0, uint3 gid [[threadgroup_position_in_grid]], uint3 lid [[thread_position_in_threadgroup]]) {\n  for (int Lidx0 = 0; Lidx0 < 10; Lidx0++) {\n    for (int Lidx1 = 0; Lidx1 < 5; Lidx1++) {\n      int alu0 = (Lidx0+Lidx1);\n      float val0 = (*(data0+alu0));\n      *(data0+alu0) = val0;\n    }\n  }\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/cstyle/metal_shared_memory.expected",
    "content": "#include <metal_stdlib>\nusing namespace metal;\nkernel void test(device float* data0, uint3 gid [[threadgroup_position_in_grid]], uint3 lid [[thread_position_in_threadgroup]]) {\n  threadgroup __attribute__((aligned(16))) float temp0[256];\n  *(temp0+0) = 0.0f;\n  threadgroup_barrier(mem_flags::mem_threadgroup);\n  float val0 = (*(temp0+0));\n  *(data0+0) = val0;\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/cstyle/metal_simple_add_f32.expected",
    "content": "#include <metal_stdlib>\nusing namespace metal;\nkernel void test(device float* data0, device float* data1, device float* data2, uint3 gid [[threadgroup_position_in_grid]], uint3 lid [[thread_position_in_threadgroup]]) {\n  float val0 = (*(data0+0));\n  float val1 = (*(data1+0));\n  *(data2+0) = (val0+val1);\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/cstyle/metal_simple_mul_i32.expected",
    "content": "#include <metal_stdlib>\nusing namespace metal;\nkernel void test(device int* data0, device int* data1, device int* data2, uint3 gid [[threadgroup_position_in_grid]], uint3 lid [[thread_position_in_threadgroup]]) {\n  int val0 = (*(data0+0));\n  int val1 = (*(data1+0));\n  *(data2+0) = (val0*val1);\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/cstyle/metal_special_dims.expected",
    "content": "#include <metal_stdlib>\nusing namespace metal;\nkernel void test(device float* data0, uint3 gid [[threadgroup_position_in_grid]], uint3 lid [[thread_position_in_threadgroup]]) {\n  int gidx0 = gid.x; /* 32 */\n  int lidx0 = lid.x; /* 32 */\n  int alu0 = (gidx0+lidx0);\n  float val0 = (*(data0+alu0));\n  *(data0+alu0) = val0;\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/cstyle/metal_unary_sqrt_f16.expected",
    "content": "#include <metal_stdlib>\nusing namespace metal;\nkernel void test(device half* data0, device half* data1, uint3 gid [[threadgroup_position_in_grid]], uint3 lid [[thread_position_in_threadgroup]]) {\n  half val0 = (*(data0+0));\n  *(data1+0) = sqrt(val0);\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/cstyle/metal_unary_sqrt_f32.expected",
    "content": "#include <metal_stdlib>\nusing namespace metal;\nkernel void test(device float* data0, device float* data1, uint3 gid [[threadgroup_position_in_grid]], uint3 lid [[thread_position_in_threadgroup]]) {\n  float val0 = (*(data0+0));\n  *(data1+0) = sqrt(val0);\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/cstyle/metal_vectorize_gep.expected",
    "content": "#include <metal_stdlib>\nusing namespace metal;\nkernel void test(device float* data0, device float* data1, uint3 gid [[threadgroup_position_in_grid]], uint3 lid [[thread_position_in_threadgroup]]) {\n  float val0 = (*(data0+0));\n  float val1 = (*(data0+1));\n  float val2 = (*(data0+2));\n  float val3 = (*(data0+3));\n  *(data1+0) = float4(val0,val1,val2,val3).z;\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/cstyle/metal_where_select.expected",
    "content": "#include <metal_stdlib>\nusing namespace metal;\nkernel void test(device float* data0, device float* data1, device float* data2, uint3 gid [[threadgroup_position_in_grid]], uint3 lid [[thread_position_in_threadgroup]]) {\n  float val0 = (*(data0+0));\n  float val1 = (*(data1+0));\n  float alu0 = (1?val0:val1);\n  *(data2+0) = alu0;\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/cstyle/opencl_bitcast_f32_to_i32.expected",
    "content": "__kernel void test(__global float* data0, __global int* data1) {\n  float val0 = (*(data0+0));\n  *(data1+0) = as_int((float)(val0));\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/cstyle/opencl_cast_f16_to_f32.expected",
    "content": "#pragma OPENCL EXTENSION cl_khr_fp16 : enable\n__kernel void test(__global half* data0, __global float* data1) {\n  half val0 = (*(data0+0));\n  *(data1+0) = ((float)(val0));\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/cstyle/opencl_conditional.expected",
    "content": "__kernel void test(__global float* data0) {\n  if (1) {\n    *(data0+0) = 1.0f;\n  }\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/cstyle/opencl_const_inf_nan.expected",
    "content": "__kernel void test(__global float* data0) {\n  *(data0+0) = ((float)(INFINITY));\n  *(data0+1) = ((float)(NAN));\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/cstyle/opencl_gated_load.expected",
    "content": "__kernel void test(__global float* data0, __global float* data1) {\n  float val0 = (1?*(data0+0):0.0f);\n  *(data1+0) = val0;\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/cstyle/opencl_loop.expected",
    "content": "__kernel void test(__global float* data0) {\n  for (int Lidx0 = 0; Lidx0 < 10; Lidx0++) {\n    float val0 = (*(data0+Lidx0));\n    *(data0+Lidx0) = val0;\n  }\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/cstyle/opencl_multi_param.expected",
    "content": "__kernel void test(__global float* data0, __global float* data1, __global float* data2, __global float* data3) {\n  float val0 = (*(data0+0));\n  float val1 = (*(data1+0));\n  *(data3+0) = (val0+val1);\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/cstyle/opencl_nested_loops.expected",
    "content": "__kernel void test(__global float* data0) {\n  for (int Lidx0 = 0; Lidx0 < 10; Lidx0++) {\n    for (int Lidx1 = 0; Lidx1 < 5; Lidx1++) {\n      int alu0 = (Lidx0+Lidx1);\n      float val0 = (*(data0+alu0));\n      *(data0+alu0) = val0;\n    }\n  }\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/cstyle/opencl_shared_memory.expected",
    "content": "__kernel void test(__global float* data0) {\n  __attribute__ ((aligned (16))) __local float temp0[256];\n  *(temp0+0) = 0.0f;\n  barrier(CLK_LOCAL_MEM_FENCE);\n  float val0 = (*(temp0+0));\n  *(data0+0) = val0;\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/cstyle/opencl_simple_add_f32.expected",
    "content": "__kernel void test(__global float* data0, __global float* data1, __global float* data2) {\n  float val0 = (*(data0+0));\n  float val1 = (*(data1+0));\n  *(data2+0) = (val0+val1);\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/cstyle/opencl_simple_mul_i32.expected",
    "content": "__kernel void test(__global int* data0, __global int* data1, __global int* data2) {\n  int val0 = (*(data0+0));\n  int val1 = (*(data1+0));\n  *(data2+0) = (val0*val1);\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/cstyle/opencl_special_dims.expected",
    "content": "__kernel void test(__global float* data0) {\n  int gidx0 = get_group_id(0); /* 32 */\n  int lidx0 = get_local_id(0); /* 32 */\n  int alu0 = (gidx0+lidx0);\n  float val0 = (*(data0+alu0));\n  *(data0+alu0) = val0;\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/cstyle/opencl_unary_sqrt_f16.expected",
    "content": "#pragma OPENCL EXTENSION cl_khr_fp16 : enable\n__kernel void test(__global half* data0, __global half* data1) {\n  half val0 = (*(data0+0));\n  *(data1+0) = sqrt(val0);\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/cstyle/opencl_unary_sqrt_f32.expected",
    "content": "__kernel void test(__global float* data0, __global float* data1) {\n  float val0 = (*(data0+0));\n  *(data1+0) = sqrt(val0);\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/cstyle/opencl_vectorize_gep.expected",
    "content": "__kernel void test(__global float* data0, __global float* data1) {\n  float val0 = (*(data0+0));\n  float val1 = (*(data0+1));\n  float val2 = (*(data0+2));\n  float val3 = (*(data0+3));\n  *(data1+0) = (float4)(val0,val1,val2,val3).z;\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/cstyle/opencl_where_select.expected",
    "content": "__kernel void test(__global float* data0, __global float* data1, __global float* data2) {\n  float val0 = (*(data0+0));\n  float val1 = (*(data1+0));\n  float alu0 = (1?val0:val1);\n  *(data2+0) = alu0;\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/debug/dune",
    "content": "(executable\n (name generate_actual)\n (libraries tolk tolk.ir unix))\n\n(rule\n (package tolk)\n (targets elementwise_add.actual elementwise_add_opt.actual)\n (action\n  (setenv\n   DEBUG\n   6\n   (run ./generate_actual.exe .))))\n\n(rule\n (alias runtest)\n (package tolk)\n (action\n  (diff elementwise_add.expected elementwise_add.actual)))\n\n(rule\n (alias runtest)\n (package tolk)\n (action\n  (diff elementwise_add_opt.expected elementwise_add_opt.actual)))\n"
  },
  {
    "path": "packages/tolk/test/golden/debug/elementwise_add.expected",
    "content": "=== early movement ops ===\n   0 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               2\n   1 Ops.CONST           :            dtypes.weakint                           []                               256\n   2 Ops.RANGE           : 0          dtypes.weakint                           ['256']                          (0, AxisType.GLOBAL)\n   3 Ops.INDEX           : 0          dtypes.float.ptr(-1)                     [0, 2]                           None\n   4 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               0\n   5 Ops.INDEX           : 0          dtypes.float.ptr(-1)                     [4, 2]                           None\n   6 Ops.LOAD            : 0          dtypes.float                             [5]                              None\n   7 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               1\n   8 Ops.INDEX           : 0          dtypes.float.ptr(-1)                     [7, 2]                           None\n   9 Ops.LOAD            : 0          dtypes.float                             [8]                              None\n  10 Ops.ADD             : 0          dtypes.float                             [6, 9]                           None\n  11 Ops.STORE           : 0          dtypes.void                              [3, 10]                          None\n  12 Ops.END             :            dtypes.void                              [11, 2]                          None\n  13 Ops.SINK            :            dtypes.void                              [12]                             KernelInfo(name='elementwise_add', axis_types=(AxisType.GLOBAL,), dont_use_locals=False, applied_opts=(), opts_to_apply=(), estimates=None)\n=== load collapse ===\n   0 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               2\n   1 Ops.CONST           
:            dtypes.weakint                           []                               256\n   2 Ops.RANGE           : 0          dtypes.weakint                           ['256']                          (0, AxisType.GLOBAL)\n   3 Ops.INDEX           : 0          dtypes.float.ptr(-1)                     [0, 2]                           None\n   4 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               0\n   5 Ops.INDEX           : 0          dtypes.float.ptr(-1)                     [4, 2]                           None\n   6 Ops.LOAD            : 0          dtypes.float                             [5]                              None\n   7 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               1\n   8 Ops.INDEX           : 0          dtypes.float.ptr(-1)                     [7, 2]                           None\n   9 Ops.LOAD            : 0          dtypes.float                             [8]                              None\n  10 Ops.ADD             : 0          dtypes.float                             [6, 9]                           None\n  11 Ops.STORE           : 0          dtypes.void                              [3, 10]                          None\n  12 Ops.END             :            dtypes.void                              [11, 2]                          None\n  13 Ops.SINK            :            dtypes.void                              [12]                             KernelInfo(name='elementwise_add', axis_types=(AxisType.GLOBAL,), dont_use_locals=False, applied_opts=(), opts_to_apply=(), estimates=None)\n=== split ranges ===\n   0 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               2\n   1 Ops.CONST           :            dtypes.weakint                           []                               256\n   2 Ops.RANGE           : 0          dtypes.weakint                           ['256']       
                   (0, AxisType.GLOBAL)\n   3 Ops.INDEX           : 0          dtypes.float.ptr(-1)                     [0, 2]                           None\n   4 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               0\n   5 Ops.INDEX           : 0          dtypes.float.ptr(-1)                     [4, 2]                           None\n   6 Ops.LOAD            : 0          dtypes.float                             [5]                              None\n   7 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               1\n   8 Ops.INDEX           : 0          dtypes.float.ptr(-1)                     [7, 2]                           None\n   9 Ops.LOAD            : 0          dtypes.float                             [8]                              None\n  10 Ops.ADD             : 0          dtypes.float                             [6, 9]                           None\n  11 Ops.STORE           : 0          dtypes.void                              [3, 10]                          None\n  12 Ops.END             :            dtypes.void                              [11, 2]                          None\n  13 Ops.SINK            :            dtypes.void                              [12]                             KernelInfo(name='elementwise_add', axis_types=(AxisType.GLOBAL,), dont_use_locals=False, applied_opts=(), opts_to_apply=(), estimates=None)\n=== initial symbolic ===\n   0 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               2\n   1 Ops.CONST           :            dtypes.weakint                           []                               256\n   2 Ops.RANGE           : 0          dtypes.weakint                           ['256']                          (0, AxisType.GLOBAL)\n   3 Ops.INDEX           : 0          dtypes.float.ptr(-1)                     [0, 2]                           None\n   4 Ops.PARAM        
   :            dtypes.float.ptr(-1)                     []                               0\n   5 Ops.INDEX           : 0          dtypes.float.ptr(-1)                     [4, 2]                           None\n   6 Ops.LOAD            : 0          dtypes.float                             [5]                              None\n   7 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               1\n   8 Ops.INDEX           : 0          dtypes.float.ptr(-1)                     [7, 2]                           None\n   9 Ops.LOAD            : 0          dtypes.float                             [8]                              None\n  10 Ops.ADD             : 0          dtypes.float                             [6, 9]                           None\n  11 Ops.STORE           : 0          dtypes.void                              [3, 10]                          None\n  12 Ops.END             :            dtypes.void                              [11, 2]                          None\n  13 Ops.SINK            :            dtypes.void                              [12]                             KernelInfo(name='elementwise_add', axis_types=(AxisType.GLOBAL,), dont_use_locals=False, applied_opts=(), opts_to_apply=(), estimates=None)\n=== simplify ranges ===\n   0 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               2\n   1 Ops.CONST           :            dtypes.weakint                           []                               256\n   2 Ops.RANGE           : 0          dtypes.weakint                           ['256']                          (0, AxisType.GLOBAL)\n   3 Ops.INDEX           : 0          dtypes.float.ptr(-1)                     [0, 2]                           None\n   4 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               0\n   5 Ops.INDEX           : 0          dtypes.float.ptr(-1)                     [4, 2]    
                       None\n   6 Ops.LOAD            : 0          dtypes.float                             [5]                              None\n   7 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               1\n   8 Ops.INDEX           : 0          dtypes.float.ptr(-1)                     [7, 2]                           None\n   9 Ops.LOAD            : 0          dtypes.float                             [8]                              None\n  10 Ops.ADD             : 0          dtypes.float                             [6, 9]                           None\n  11 Ops.STORE           : 0          dtypes.void                              [3, 10]                          None\n  12 Ops.END             :            dtypes.void                              [11, 2]                          None\n  13 Ops.SINK            :            dtypes.void                              [12]                             KernelInfo(name='elementwise_add', axis_types=(AxisType.GLOBAL,), dont_use_locals=False, applied_opts=(), opts_to_apply=(), estimates=None)\n=== postopt symbolic ===\n   0 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               2\n   1 Ops.CONST           :            dtypes.weakint                           []                               256\n   2 Ops.RANGE           : 0          dtypes.weakint                           ['256']                          (0, AxisType.GLOBAL)\n   3 Ops.INDEX           : 0          dtypes.float.ptr(-1)                     [0, 2]                           None\n   4 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               0\n   5 Ops.INDEX           : 0          dtypes.float.ptr(-1)                     [4, 2]                           None\n   6 Ops.LOAD            : 0          dtypes.float                             [5]                              None\n   7 Ops.PARAM           :        
    dtypes.float.ptr(-1)                     []                               1\n   8 Ops.INDEX           : 0          dtypes.float.ptr(-1)                     [7, 2]                           None\n   9 Ops.LOAD            : 0          dtypes.float                             [8]                              None\n  10 Ops.ADD             : 0          dtypes.float                             [6, 9]                           None\n  11 Ops.STORE           : 0          dtypes.void                              [3, 10]                          None\n  12 Ops.END             :            dtypes.void                              [11, 2]                          None\n  13 Ops.SINK            :            dtypes.void                              [12]                             KernelInfo(name='elementwise_add', axis_types=(), dont_use_locals=False, applied_opts=(), opts_to_apply=None, estimates=None)\n=== expander ===\n   0 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               2\n   1 Ops.CONST           :            dtypes.weakint                           []                               256\n   2 Ops.RANGE           : 0          dtypes.weakint                           ['256']                          (0, AxisType.GLOBAL)\n   3 Ops.INDEX           : 0          dtypes.float.ptr(-1)                     [0, 2]                           None\n   4 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               0\n   5 Ops.INDEX           : 0          dtypes.float.ptr(-1)                     [4, 2]                           None\n   6 Ops.LOAD            : 0          dtypes.float                             [5]                              None\n   7 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               1\n   8 Ops.INDEX           : 0          dtypes.float.ptr(-1)                     [7, 2]                           None\n   
9 Ops.LOAD            : 0          dtypes.float                             [8]                              None\n  10 Ops.ADD             : 0          dtypes.float                             [6, 9]                           None\n  11 Ops.STORE           : 0          dtypes.void                              [3, 10]                          None\n  12 Ops.END             :            dtypes.void                              [11, 2]                          None\n  13 Ops.SINK            :            dtypes.void                              [12]                             KernelInfo(name='elementwise_add', axis_types=(), dont_use_locals=False, applied_opts=(), opts_to_apply=None, estimates=None)\n=== add local buffers ===\n   0 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               2\n   1 Ops.CONST           :            dtypes.weakint                           []                               256\n   2 Ops.RANGE           : 0          dtypes.weakint                           ['256']                          (0, AxisType.GLOBAL)\n   3 Ops.INDEX           : 0          dtypes.float.ptr(-1)                     [0, 2]                           None\n   4 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               0\n   5 Ops.INDEX           : 0          dtypes.float.ptr(-1)                     [4, 2]                           None\n   6 Ops.LOAD            : 0          dtypes.float                             [5]                              None\n   7 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               1\n   8 Ops.INDEX           : 0          dtypes.float.ptr(-1)                     [7, 2]                           None\n   9 Ops.LOAD            : 0          dtypes.float                             [8]                              None\n  10 Ops.ADD             : 0          dtypes.float                             
[6, 9]                           None\n  11 Ops.STORE           : 0          dtypes.void                              [3, 10]                          None\n  12 Ops.END             :            dtypes.void                              [11, 2]                          None\n  13 Ops.SINK            :            dtypes.void                              [12]                             KernelInfo(name='elementwise_add', axis_types=(), dont_use_locals=False, applied_opts=(), opts_to_apply=None, estimates=None)\n=== remove_reduce ===\n   0 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               2\n   1 Ops.CONST           :            dtypes.weakint                           []                               256\n   2 Ops.RANGE           : 0          dtypes.weakint                           ['256']                          (0, AxisType.GLOBAL)\n   3 Ops.INDEX           : 0          dtypes.float.ptr(-1)                     [0, 2]                           None\n   4 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               0\n   5 Ops.INDEX           : 0          dtypes.float.ptr(-1)                     [4, 2]                           None\n   6 Ops.LOAD            : 0          dtypes.float                             [5]                              None\n   7 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               1\n   8 Ops.INDEX           : 0          dtypes.float.ptr(-1)                     [7, 2]                           None\n   9 Ops.LOAD            : 0          dtypes.float                             [8]                              None\n  10 Ops.ADD             : 0          dtypes.float                             [6, 9]                           None\n  11 Ops.STORE           : 0          dtypes.void                              [3, 10]                          None\n  12 Ops.END             :            
dtypes.void                              [11, 2]                          None\n  13 Ops.SINK            :            dtypes.void                              [12]                             KernelInfo(name='elementwise_add', axis_types=(), dont_use_locals=False, applied_opts=(), opts_to_apply=None, estimates=None)\n=== add gpudims ===\n   0 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               2\n   1 Ops.DEFINE_VAR      :            dtypes.int                               []                               ('core_id', 0, 255)\n   2 Ops.CAST            :            dtypes.weakint                           [1]                              None\n   3 Ops.INDEX           :            dtypes.float.ptr(-1)                     [0, 2]                           None\n   4 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               0\n   5 Ops.INDEX           :            dtypes.float.ptr(-1)                     [4, 2]                           None\n   6 Ops.LOAD            :            dtypes.float                             [5]                              None\n   7 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               1\n   8 Ops.INDEX           :            dtypes.float.ptr(-1)                     [7, 2]                           None\n   9 Ops.LOAD            :            dtypes.float                             [8]                              None\n  10 Ops.ADD             :            dtypes.float                             [6, 9]                           None\n  11 Ops.STORE           :            dtypes.void                              [3, 10]                          None\n  12 Ops.END             :            dtypes.void                              [11, 2]                          None\n  13 Ops.SINK            :            dtypes.void                              [12]                             
KernelInfo(name='elementwise_add', axis_types=(), dont_use_locals=False, applied_opts=(), opts_to_apply=None, estimates=None)\n=== ** add loads (code) ===\n   0 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               2\n   1 Ops.DEFINE_VAR      :            dtypes.int                               []                               ('core_id', 0, 255)\n   2 Ops.CAST            :            dtypes.weakint                           [1]                              None\n   3 Ops.INDEX           :            dtypes.float.ptr(-1)                     [0, 2]                           None\n   4 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               0\n   5 Ops.INDEX           :            dtypes.float.ptr(-1)                     [4, 2]                           None\n   6 Ops.LOAD            :            dtypes.float                             [5]                              None\n   7 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               1\n   8 Ops.INDEX           :            dtypes.float.ptr(-1)                     [7, 2]                           None\n   9 Ops.LOAD            :            dtypes.float                             [8]                              None\n  10 Ops.ADD             :            dtypes.float                             [6, 9]                           None\n  11 Ops.STORE           :            dtypes.void                              [3, 10]                          None\n  12 Ops.END             :            dtypes.void                              [11, 2]                          None\n  13 Ops.SINK            :            dtypes.void                              [12]                             KernelInfo(name='elementwise_add', axis_types=(), dont_use_locals=False, applied_opts=(), opts_to_apply=None, estimates=None)\n=== devectorize ===\n   0 Ops.PARAM           :            
dtypes.float.ptr(-1)                     []                               2\n   1 Ops.DEFINE_VAR      :            dtypes.int                               []                               ('core_id', 0, 255)\n   2 Ops.CAST            :            dtypes.weakint                           [1]                              None\n   3 Ops.INDEX           :            dtypes.float.ptr(-1)                     [0, 2]                           None\n   4 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               0\n   5 Ops.INDEX           :            dtypes.float.ptr(-1)                     [4, 2]                           None\n   6 Ops.LOAD            :            dtypes.float                             [5]                              None\n   7 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               1\n   8 Ops.INDEX           :            dtypes.float.ptr(-1)                     [7, 2]                           None\n   9 Ops.LOAD            :            dtypes.float                             [8]                              None\n  10 Ops.ADD             :            dtypes.float                             [6, 9]                           None\n  11 Ops.STORE           :            dtypes.void                              [3, 10]                          None\n  12 Ops.END             :            dtypes.void                              [11, 2]                          None\n  13 Ops.SINK            :            dtypes.void                              [12]                             KernelInfo(name='elementwise_add', axis_types=(), dont_use_locals=False, applied_opts=(), opts_to_apply=None, estimates=None)\n=== lower all index dtypes ===\n   0 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               2\n   1 Ops.DEFINE_VAR      :            dtypes.int                               []                               
('core_id', 0, 255)\n   2 Ops.INDEX           :            dtypes.float.ptr(-1)                     [0, 1]                           None\n   3 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               0\n   4 Ops.INDEX           :            dtypes.float.ptr(-1)                     [3, 1]                           None\n   5 Ops.LOAD            :            dtypes.float                             [4]                              None\n   6 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               1\n   7 Ops.INDEX           :            dtypes.float.ptr(-1)                     [6, 1]                           None\n   8 Ops.LOAD            :            dtypes.float                             [7]                              None\n   9 Ops.ADD             :            dtypes.float                             [5, 8]                           None\n  10 Ops.STORE           :            dtypes.void                              [2, 9]                           None\n  11 Ops.END             :            dtypes.void                              [10, 1]                          None\n  12 Ops.SINK            :            dtypes.void                              [11]                             KernelInfo(name='elementwise_add', axis_types=(), dont_use_locals=False, applied_opts=(), opts_to_apply=None, estimates=None)\n=== post index symbolic ===\n   0 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               2\n   1 Ops.DEFINE_VAR      :            dtypes.int                               []                               ('core_id', 0, 255)\n   2 Ops.INDEX           :            dtypes.float.ptr(-1)                     [0, 1]                           None\n   3 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               0\n   4 Ops.INDEX           :            
dtypes.float.ptr(-1)                     [3, 1]                           None\n   5 Ops.LOAD            :            dtypes.float                             [4]                              None\n   6 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               1\n   7 Ops.INDEX           :            dtypes.float.ptr(-1)                     [6, 1]                           None\n   8 Ops.LOAD            :            dtypes.float                             [7]                              None\n   9 Ops.ADD             :            dtypes.float                             [5, 8]                           None\n  10 Ops.STORE           :            dtypes.void                              [2, 9]                           None\n  11 Ops.END             :            dtypes.void                              [10, 1]                          None\n  12 Ops.SINK            :            dtypes.void                              [11]                             KernelInfo(name='elementwise_add', axis_types=(), dont_use_locals=False, applied_opts=(), opts_to_apply=None, estimates=None)\n=== decompositions ===\n   0 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               2\n   1 Ops.DEFINE_VAR      :            dtypes.int                               []                               ('core_id', 0, 255)\n   2 Ops.INDEX           :            dtypes.float.ptr(-1)                     [0, 1]                           None\n   3 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               0\n   4 Ops.INDEX           :            dtypes.float.ptr(-1)                     [3, 1]                           None\n   5 Ops.LOAD            :            dtypes.float                             [4]                              None\n   6 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               1\n  
 7 Ops.INDEX           :            dtypes.float.ptr(-1)                     [6, 1]                           None\n   8 Ops.LOAD            :            dtypes.float                             [7]                              None\n   9 Ops.ADD             :            dtypes.float                             [5, 8]                           None\n  10 Ops.STORE           :            dtypes.void                              [2, 9]                           None\n  11 Ops.END             :            dtypes.void                              [10, 1]                          None\n  12 Ops.SINK            :            dtypes.void                              [11]                             KernelInfo(name='elementwise_add', axis_types=(), dont_use_locals=False, applied_opts=(), opts_to_apply=None, estimates=None)\n=== decomp dtypes ===\n   0 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               2\n   1 Ops.DEFINE_VAR      :            dtypes.int                               []                               ('core_id', 0, 255)\n   2 Ops.INDEX           :            dtypes.float.ptr(-1)                     [0, 1]                           None\n   3 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               0\n   4 Ops.INDEX           :            dtypes.float.ptr(-1)                     [3, 1]                           None\n   5 Ops.LOAD            :            dtypes.float                             [4]                              None\n   6 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               1\n   7 Ops.INDEX           :            dtypes.float.ptr(-1)                     [6, 1]                           None\n   8 Ops.LOAD            :            dtypes.float                             [7]                              None\n   9 Ops.ADD             :            dtypes.float                             
[5, 8]                           None\n  10 Ops.STORE           :            dtypes.void                              [2, 9]                           None\n  11 Ops.END             :            dtypes.void                              [10, 1]                          None\n  12 Ops.SINK            :            dtypes.void                              [11]                             KernelInfo(name='elementwise_add', axis_types=(), dont_use_locals=False, applied_opts=(), opts_to_apply=None, estimates=None)\n=== transcendental ===\n   0 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               2\n   1 Ops.DEFINE_VAR      :            dtypes.int                               []                               ('core_id', 0, 255)\n   2 Ops.INDEX           :            dtypes.float.ptr(-1)                     [0, 1]                           None\n   3 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               0\n   4 Ops.INDEX           :            dtypes.float.ptr(-1)                     [3, 1]                           None\n   5 Ops.LOAD            :            dtypes.float                             [4]                              None\n   6 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               1\n   7 Ops.INDEX           :            dtypes.float.ptr(-1)                     [6, 1]                           None\n   8 Ops.LOAD            :            dtypes.float                             [7]                              None\n   9 Ops.ADD             :            dtypes.float                             [5, 8]                           None\n  10 Ops.STORE           :            dtypes.void                              [2, 9]                           None\n  11 Ops.END             :            dtypes.void                              [10, 1]                          None\n  12 Ops.SINK            :            
dtypes.void                              [11]                             KernelInfo(name='elementwise_add', axis_types=(), dont_use_locals=False, applied_opts=(), opts_to_apply=None, estimates=None)\n=== final rewrite ===\n   0 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               2\n   1 Ops.DEFINE_VAR      :            dtypes.int                               []                               ('core_id', 0, 255)\n   2 Ops.INDEX           :            dtypes.float.ptr(-1)                     [0, 1]                           None\n   3 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               0\n   4 Ops.INDEX           :            dtypes.float.ptr(-1)                     [3, 1]                           None\n   5 Ops.LOAD            :            dtypes.float                             [4]                              None\n   6 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               1\n   7 Ops.INDEX           :            dtypes.float.ptr(-1)                     [6, 1]                           None\n   8 Ops.LOAD            :            dtypes.float                             [7]                              None\n   9 Ops.ADD             :            dtypes.float                             [5, 8]                           None\n  10 Ops.STORE           :            dtypes.void                              [2, 9]                           None\n  11 Ops.SINK            :            dtypes.void                              [10]                             KernelInfo(name='elementwise_add', axis_types=(), dont_use_locals=False, applied_opts=(), opts_to_apply=None, estimates=None)\n=== add control flow ===\n   0 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               2\n   1 Ops.DEFINE_VAR      :            dtypes.int                               []           
                    ('core_id', 0, 255)\n   2 Ops.INDEX           :            dtypes.float.ptr(-1)                     [0, 1]                           None\n   3 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               0\n   4 Ops.INDEX           :            dtypes.float.ptr(-1)                     [3, 1]                           None\n   5 Ops.LOAD            :            dtypes.float                             [4]                              None\n   6 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               1\n   7 Ops.INDEX           :            dtypes.float.ptr(-1)                     [6, 1]                           None\n   8 Ops.LOAD            :            dtypes.float                             [7]                              None\n   9 Ops.ADD             :            dtypes.float                             [5, 8]                           None\n  10 Ops.STORE           :            dtypes.void                              [2, 9]                           None\n  11 Ops.SINK            :            dtypes.void                              [10]                             KernelInfo(name='elementwise_add', axis_types=(), dont_use_locals=False, applied_opts=(), opts_to_apply=None, estimates=None)\n"
  },
  {
    "path": "packages/tolk/test/golden/debug/elementwise_add_opt.expected",
    "content": "=== early movement ops ===\n   0 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               2\n   1 Ops.CONST           :            dtypes.weakint                           []                               256\n   2 Ops.RANGE           : 0          dtypes.weakint                           ['256']                          (0, AxisType.GLOBAL)\n   3 Ops.INDEX           : 0          dtypes.float.ptr(-1)                     [0, 2]                           None\n   4 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               0\n   5 Ops.INDEX           : 0          dtypes.float.ptr(-1)                     [4, 2]                           None\n   6 Ops.LOAD            : 0          dtypes.float                             [5]                              None\n   7 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               1\n   8 Ops.INDEX           : 0          dtypes.float.ptr(-1)                     [7, 2]                           None\n   9 Ops.LOAD            : 0          dtypes.float                             [8]                              None\n  10 Ops.ADD             : 0          dtypes.float                             [6, 9]                           None\n  11 Ops.STORE           : 0          dtypes.void                              [3, 10]                          None\n  12 Ops.END             :            dtypes.void                              [11, 2]                          None\n  13 Ops.SINK            :            dtypes.void                              [12]                             KernelInfo(name='elementwise_add_opt', axis_types=(AxisType.GLOBAL,), dont_use_locals=False, applied_opts=(), opts_to_apply=None, estimates=None)\n=== load collapse ===\n   0 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               2\n   1 Ops.CONST     
      :            dtypes.weakint                           []                               256\n   2 Ops.RANGE           : 0          dtypes.weakint                           ['256']                          (0, AxisType.GLOBAL)\n   3 Ops.INDEX           : 0          dtypes.float.ptr(-1)                     [0, 2]                           None\n   4 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               0\n   5 Ops.INDEX           : 0          dtypes.float.ptr(-1)                     [4, 2]                           None\n   6 Ops.LOAD            : 0          dtypes.float                             [5]                              None\n   7 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               1\n   8 Ops.INDEX           : 0          dtypes.float.ptr(-1)                     [7, 2]                           None\n   9 Ops.LOAD            : 0          dtypes.float                             [8]                              None\n  10 Ops.ADD             : 0          dtypes.float                             [6, 9]                           None\n  11 Ops.STORE           : 0          dtypes.void                              [3, 10]                          None\n  12 Ops.END             :            dtypes.void                              [11, 2]                          None\n  13 Ops.SINK            :            dtypes.void                              [12]                             KernelInfo(name='elementwise_add_opt', axis_types=(AxisType.GLOBAL,), dont_use_locals=False, applied_opts=(), opts_to_apply=None, estimates=None)\n=== split ranges ===\n   0 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               2\n   1 Ops.CONST           :            dtypes.weakint                           []                               256\n   2 Ops.RANGE           : 0          dtypes.weakint                           
['256']                          (0, AxisType.GLOBAL)\n   3 Ops.INDEX           : 0          dtypes.float.ptr(-1)                     [0, 2]                           None\n   4 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               0\n   5 Ops.INDEX           : 0          dtypes.float.ptr(-1)                     [4, 2]                           None\n   6 Ops.LOAD            : 0          dtypes.float                             [5]                              None\n   7 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               1\n   8 Ops.INDEX           : 0          dtypes.float.ptr(-1)                     [7, 2]                           None\n   9 Ops.LOAD            : 0          dtypes.float                             [8]                              None\n  10 Ops.ADD             : 0          dtypes.float                             [6, 9]                           None\n  11 Ops.STORE           : 0          dtypes.void                              [3, 10]                          None\n  12 Ops.END             :            dtypes.void                              [11, 2]                          None\n  13 Ops.SINK            :            dtypes.void                              [12]                             KernelInfo(name='elementwise_add_opt', axis_types=(AxisType.GLOBAL,), dont_use_locals=False, applied_opts=(), opts_to_apply=None, estimates=None)\n=== initial symbolic ===\n   0 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               2\n   1 Ops.CONST           :            dtypes.weakint                           []                               256\n   2 Ops.RANGE           : 0          dtypes.weakint                           ['256']                          (0, AxisType.GLOBAL)\n   3 Ops.INDEX           : 0          dtypes.float.ptr(-1)                     [0, 2]                           None\n  
   4 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               0
   5 Ops.INDEX           : 0          dtypes.float.ptr(-1)                     [4, 2]                           None
   6 Ops.LOAD            : 0          dtypes.float                             [5]                              None
   7 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               1
   8 Ops.INDEX           : 0          dtypes.float.ptr(-1)                     [7, 2]                           None
   9 Ops.LOAD            : 0          dtypes.float                             [8]                              None
  10 Ops.ADD             : 0          dtypes.float                             [6, 9]                           None
  11 Ops.STORE           : 0          dtypes.void                              [3, 10]                          None
  12 Ops.END             :            dtypes.void                              [11, 2]                          None
  13 Ops.SINK            :            dtypes.void                              [12]                             KernelInfo(name='elementwise_add_opt', axis_types=(AxisType.GLOBAL,), dont_use_locals=False, applied_opts=(), opts_to_apply=None, estimates=None)
=== simplify ranges ===
   0 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               2
   1 Ops.CONST           :            dtypes.weakint                           []                               256
   2 Ops.RANGE           : 0          dtypes.weakint                           ['256']                          (0, AxisType.GLOBAL)
   3 Ops.INDEX           : 0          dtypes.float.ptr(-1)                     [0, 2]                           None
   4 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               0
   5 Ops.INDEX           : 0          dtypes.float.ptr(-1)                     [4, 2]                           None
   6 Ops.LOAD            : 0          dtypes.float                             [5]                              None
   7 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               1
   8 Ops.INDEX           : 0          dtypes.float.ptr(-1)                     [7, 2]                           None
   9 Ops.LOAD            : 0          dtypes.float                             [8]                              None
  10 Ops.ADD             : 0          dtypes.float                             [6, 9]                           None
  11 Ops.STORE           : 0          dtypes.void                              [3, 10]                          None
  12 Ops.END             :            dtypes.void                              [11, 2]                          None
  13 Ops.SINK            :            dtypes.void                              [12]                             KernelInfo(name='elementwise_add_opt', axis_types=(AxisType.GLOBAL,), dont_use_locals=False, applied_opts=(), opts_to_apply=None, estimates=None)
=== postopt symbolic ===
   0 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               2
   1 Ops.CONST           :            dtypes.weakint                           []                               64
   2 Ops.RANGE           : 0          dtypes.weakint                           ['64']                           (0, AxisType.GLOBAL)
   3 Ops.CONST           :            dtypes.weakint                           []                               4
   4 Ops.MUL             : 0          dtypes.weakint                           [2, '4']                         None
   5 Ops.RANGE           : 1          dtypes.weakint                           ['4']                            (1, AxisType.UPCAST)
   6 Ops.ADD             : 0,1        dtypes.weakint                           [4, 5]                           None
   7 Ops.INDEX           : 0,1        dtypes.float.ptr(-1)                     [0, 6]                           None
   8 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               0
   9 Ops.INDEX           : 0,1        dtypes.float.ptr(-1)                     [8, 6]                           None
  10 Ops.LOAD            : 0,1        dtypes.float                             [9]                              None
  11 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               1
  12 Ops.INDEX           : 0,1        dtypes.float.ptr(-1)                     [11, 6]                          None
  13 Ops.LOAD            : 0,1        dtypes.float                             [12]                             None
  14 Ops.ADD             : 0,1        dtypes.float                             [10, 13]                         None
  15 Ops.STORE           : 0,1        dtypes.void                              [7, 14]                          None
  16 Ops.END             :            dtypes.void                              [15, 2, 5]                       None
  17 Ops.SINK            :            dtypes.void                              [16]                             KernelInfo(name='elementwise_add_opt', axis_types=(), dont_use_locals=False, applied_opts=(Opt(op=OptOps.UPCAST, axis=0, arg=4),), opts_to_apply=None, estimates=None)
=== expander ===
   0 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               2
   1 Ops.VECTORIZE       :            dtypes.float.ptr(-1).vec(4)              [0, 0, 0, 0]                     None
   2 Ops.CONST           :            dtypes.weakint                           []                               64
   3 Ops.RANGE           : 0          dtypes.weakint                           ['64']                           (0, AxisType.GLOBAL)
   4 Ops.CONST           :            dtypes.weakint                           []                               4
   5 Ops.MUL             : 0          dtypes.weakint                           [3, '4']                         None
   6 Ops.VECTORIZE       : 0          dtypes.weakint.vec(4)                    [5, 5, 5, 5]                     None
   7 Ops.VCONST          :            dtypes.weakint.vec(4)                    []                               (0, 1, 2, 3)
   8 Ops.ADD             : 0          dtypes.weakint.vec(4)                    [6, 7]                           None
   9 Ops.INDEX           : 0          dtypes.float.ptr(-1).vec(4)              [1, 8]                           None
  10 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               0
  11 Ops.VECTORIZE       :            dtypes.float.ptr(-1).vec(4)              [10, 10, 10, 10]                 None
  12 Ops.INDEX           : 0          dtypes.float.ptr(-1).vec(4)              [11, 8]                          None
  13 Ops.LOAD            : 0          dtypes.float.vec(4)                      [12]                             None
  14 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               1
  15 Ops.VECTORIZE       :            dtypes.float.ptr(-1).vec(4)              [14, 14, 14, 14]                 None
  16 Ops.INDEX           : 0          dtypes.float.ptr(-1).vec(4)              [15, 8]                          None
  17 Ops.LOAD            : 0          dtypes.float.vec(4)                      [16]                             None
  18 Ops.ADD             : 0          dtypes.float.vec(4)                      [13, 17]                         None
  19 Ops.STORE           : 0          dtypes.void                              [9, 18]                          None
  20 Ops.END             :            dtypes.void                              [19, 3]                          None
  21 Ops.SINK            :            dtypes.void                              [20]                             KernelInfo(name='elementwise_add_opt', axis_types=(), dont_use_locals=False, applied_opts=(Opt(op=OptOps.UPCAST, axis=0, arg=4),), opts_to_apply=None, estimates=None)
=== add local buffers ===
   0 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               2
   1 Ops.VECTORIZE       :            dtypes.float.ptr(-1).vec(4)              [0, 0, 0, 0]                     None
   2 Ops.CONST           :            dtypes.weakint                           []                               64
   3 Ops.RANGE           : 0          dtypes.weakint                           ['64']                           (0, AxisType.GLOBAL)
   4 Ops.CONST           :            dtypes.weakint                           []                               4
   5 Ops.MUL             : 0          dtypes.weakint                           [3, '4']                         None
   6 Ops.VECTORIZE       : 0          dtypes.weakint.vec(4)                    [5, 5, 5, 5]                     None
   7 Ops.VCONST          :            dtypes.weakint.vec(4)                    []                               (0, 1, 2, 3)
   8 Ops.ADD             : 0          dtypes.weakint.vec(4)                    [6, 7]                           None
   9 Ops.INDEX           : 0          dtypes.float.ptr(-1).vec(4)              [1, 8]                           None
  10 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               0
  11 Ops.VECTORIZE       :            dtypes.float.ptr(-1).vec(4)              [10, 10, 10, 10]                 None
  12 Ops.INDEX           : 0          dtypes.float.ptr(-1).vec(4)              [11, 8]                          None
  13 Ops.LOAD            : 0          dtypes.float.vec(4)                      [12]                             None
  14 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               1
  15 Ops.VECTORIZE       :            dtypes.float.ptr(-1).vec(4)              [14, 14, 14, 14]                 None
  16 Ops.INDEX           : 0          dtypes.float.ptr(-1).vec(4)              [15, 8]                          None
  17 Ops.LOAD            : 0          dtypes.float.vec(4)                      [16]                             None
  18 Ops.ADD             : 0          dtypes.float.vec(4)                      [13, 17]                         None
  19 Ops.STORE           : 0          dtypes.void                              [9, 18]                          None
  20 Ops.END             :            dtypes.void                              [19, 3]                          None
  21 Ops.SINK            :            dtypes.void                              [20]                             KernelInfo(name='elementwise_add_opt', axis_types=(), dont_use_locals=False, applied_opts=(Opt(op=OptOps.UPCAST, axis=0, arg=4),), opts_to_apply=None, estimates=None)
=== remove_reduce ===
   0 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               2
   1 Ops.VECTORIZE       :            dtypes.float.ptr(-1).vec(4)              [0, 0, 0, 0]                     None
   2 Ops.CONST           :            dtypes.weakint                           []                               64
   3 Ops.RANGE           : 0          dtypes.weakint                           ['64']                           (0, AxisType.GLOBAL)
   4 Ops.CONST           :            dtypes.weakint                           []                               4
   5 Ops.MUL             : 0          dtypes.weakint                           [3, '4']                         None
   6 Ops.VECTORIZE       : 0          dtypes.weakint.vec(4)                    [5, 5, 5, 5]                     None
   7 Ops.VCONST          :            dtypes.weakint.vec(4)                    []                               (0, 1, 2, 3)
   8 Ops.ADD             : 0          dtypes.weakint.vec(4)                    [6, 7]                           None
   9 Ops.INDEX           : 0          dtypes.float.ptr(-1).vec(4)              [1, 8]                           None
  10 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               0
  11 Ops.VECTORIZE       :            dtypes.float.ptr(-1).vec(4)              [10, 10, 10, 10]                 None
  12 Ops.INDEX           : 0          dtypes.float.ptr(-1).vec(4)              [11, 8]                          None
  13 Ops.LOAD            : 0          dtypes.float.vec(4)                      [12]                             None
  14 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               1
  15 Ops.VECTORIZE       :            dtypes.float.ptr(-1).vec(4)              [14, 14, 14, 14]                 None
  16 Ops.INDEX           : 0          dtypes.float.ptr(-1).vec(4)              [15, 8]                          None
  17 Ops.LOAD            : 0          dtypes.float.vec(4)                      [16]                             None
  18 Ops.ADD             : 0          dtypes.float.vec(4)                      [13, 17]                         None
  19 Ops.STORE           : 0          dtypes.void                              [9, 18]                          None
  20 Ops.END             :            dtypes.void                              [19, 3]                          None
  21 Ops.SINK            :            dtypes.void                              [20]                             KernelInfo(name='elementwise_add_opt', axis_types=(), dont_use_locals=False, applied_opts=(Opt(op=OptOps.UPCAST, axis=0, arg=4),), opts_to_apply=None, estimates=None)
=== add gpudims ===
   0 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               2
   1 Ops.VECTORIZE       :            dtypes.float.ptr(-1).vec(4)              [0, 0, 0, 0]                     None
   2 Ops.DEFINE_VAR      :            dtypes.int                               []                               ('core_id', 0, 63)
   3 Ops.CAST            :            dtypes.weakint                           [2]                              None
   4 Ops.CONST           :            dtypes.weakint                           []                               4
   5 Ops.MUL             :            dtypes.weakint                           [3, '4']                         None
   6 Ops.VECTORIZE       :            dtypes.weakint.vec(4)                    [5, 5, 5, 5]                     None
   7 Ops.VCONST          :            dtypes.weakint.vec(4)                    []                               (0, 1, 2, 3)
   8 Ops.ADD             :            dtypes.weakint.vec(4)                    [6, 7]                           None
   9 Ops.INDEX           :            dtypes.float.ptr(-1).vec(4)              [1, 8]                           None
  10 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               0
  11 Ops.VECTORIZE       :            dtypes.float.ptr(-1).vec(4)              [10, 10, 10, 10]                 None
  12 Ops.INDEX           :            dtypes.float.ptr(-1).vec(4)              [11, 8]                          None
  13 Ops.LOAD            :            dtypes.float.vec(4)                      [12]                             None
  14 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               1
  15 Ops.VECTORIZE       :            dtypes.float.ptr(-1).vec(4)              [14, 14, 14, 14]                 None
  16 Ops.INDEX           :            dtypes.float.ptr(-1).vec(4)              [15, 8]                          None
  17 Ops.LOAD            :            dtypes.float.vec(4)                      [16]                             None
  18 Ops.ADD             :            dtypes.float.vec(4)                      [13, 17]                         None
  19 Ops.STORE           :            dtypes.void                              [9, 18]                          None
  20 Ops.END             :            dtypes.void                              [19, 3]                          None
  21 Ops.SINK            :            dtypes.void                              [20]                             KernelInfo(name='elementwise_add_opt', axis_types=(), dont_use_locals=False, applied_opts=(Opt(op=OptOps.UPCAST, axis=0, arg=4),), opts_to_apply=None, estimates=None)
=== ** add loads (code) ===
   0 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               2
   1 Ops.VECTORIZE       :            dtypes.float.ptr(-1).vec(4)              [0, 0, 0, 0]                     None
   2 Ops.DEFINE_VAR      :            dtypes.int                               []                               ('core_id', 0, 63)
   3 Ops.CAST            :            dtypes.weakint                           [2]                              None
   4 Ops.CONST           :            dtypes.weakint                           []                               4
   5 Ops.MUL             :            dtypes.weakint                           [3, '4']                         None
   6 Ops.VECTORIZE       :            dtypes.weakint.vec(4)                    [5, 5, 5, 5]                     None
   7 Ops.VCONST          :            dtypes.weakint.vec(4)                    []                               (0, 1, 2, 3)
   8 Ops.ADD             :            dtypes.weakint.vec(4)                    [6, 7]                           None
   9 Ops.INDEX           :            dtypes.float.ptr(-1).vec(4)              [1, 8]                           None
  10 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               0
  11 Ops.VECTORIZE       :            dtypes.float.ptr(-1).vec(4)              [10, 10, 10, 10]                 None
  12 Ops.INDEX           :            dtypes.float.ptr(-1).vec(4)              [11, 8]                          None
  13 Ops.LOAD            :            dtypes.float.vec(4)                      [12]                             None
  14 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               1
  15 Ops.VECTORIZE       :            dtypes.float.ptr(-1).vec(4)              [14, 14, 14, 14]                 None
  16 Ops.INDEX           :            dtypes.float.ptr(-1).vec(4)              [15, 8]                          None
  17 Ops.LOAD            :            dtypes.float.vec(4)                      [16]                             None
  18 Ops.ADD             :            dtypes.float.vec(4)                      [13, 17]                         None
  19 Ops.STORE           :            dtypes.void                              [9, 18]                          None
  20 Ops.END             :            dtypes.void                              [19, 3]                          None
  21 Ops.SINK            :            dtypes.void                              [20]                             KernelInfo(name='elementwise_add_opt', axis_types=(), dont_use_locals=False, applied_opts=(Opt(op=OptOps.UPCAST, axis=0, arg=4),), opts_to_apply=None, estimates=None)
=== devectorize ===
   0 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               2
   1 Ops.DEFINE_VAR      :            dtypes.int                               []                               ('core_id', 0, 63)
   2 Ops.CAST            :            dtypes.weakint                           [1]                              None
   3 Ops.CONST           :            dtypes.weakint                           []                               4
   4 Ops.MUL             :            dtypes.weakint                           [2, '4']                         None
   5 Ops.INDEX           :            dtypes.float.ptr(-1)                     [0, 4]                           None
   6 Ops.CAST            :            dtypes.float.vec(4).ptr(-1)              [5]                              None
   7 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               0
   8 Ops.INDEX           :            dtypes.float.ptr(-1)                     [7, 4]                           None
   9 Ops.CAST            :            dtypes.float.vec(4).ptr(-1)              [8]                              None
  10 Ops.LOAD            :            dtypes.float.vec(4)                      [9]                              None
  11 Ops.GEP             :            dtypes.float                             [10]                             (0,)
  12 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               1
  13 Ops.INDEX           :            dtypes.float.ptr(-1)                     [12, 4]                          None
  14 Ops.CAST            :            dtypes.float.vec(4).ptr(-1)              [13]                             None
  15 Ops.LOAD            :            dtypes.float.vec(4)                      [14]                             None
  16 Ops.GEP             :            dtypes.float                             [15]                             (0,)
  17 Ops.ADD             :            dtypes.float                             [11, 16]                         None
  18 Ops.GEP             :            dtypes.float                             [10]                             (1,)
  19 Ops.GEP             :            dtypes.float                             [15]                             (1,)
  20 Ops.ADD             :            dtypes.float                             [18, 19]                         None
  21 Ops.GEP             :            dtypes.float                             [10]                             (2,)
  22 Ops.GEP             :            dtypes.float                             [15]                             (2,)
  23 Ops.ADD             :            dtypes.float                             [21, 22]                         None
  24 Ops.GEP             :            dtypes.float                             [10]                             (3,)
  25 Ops.GEP             :            dtypes.float                             [15]                             (3,)
  26 Ops.ADD             :            dtypes.float                             [24, 25]                         None
  27 Ops.VECTORIZE       :            dtypes.float.vec(4)                      [17, 20, 23, 26]                 None
  28 Ops.STORE           :            dtypes.void                              [6, 27]                          None
  29 Ops.END             :            dtypes.void                              [28, 2]                          None
  30 Ops.SINK            :            dtypes.void                              [29]                             KernelInfo(name='elementwise_add_opt', axis_types=(), dont_use_locals=False, applied_opts=(Opt(op=OptOps.UPCAST, axis=0, arg=4),), opts_to_apply=None, estimates=None)
=== lower all index dtypes ===
   0 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               2
   1 Ops.DEFINE_VAR      :            dtypes.int                               []                               ('core_id', 0, 63)
   2 Ops.CONST           :            dtypes.int                               []                               4
   3 Ops.MUL             :            dtypes.int                               [1, '4']                         None
   4 Ops.INDEX           :            dtypes.float.ptr(-1)                     [0, 3]                           None
   5 Ops.CAST            :            dtypes.float.vec(4).ptr(-1)              [4]                              None
   6 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               0
   7 Ops.INDEX           :            dtypes.float.ptr(-1)                     [6, 3]                           None
   8 Ops.CAST            :            dtypes.float.vec(4).ptr(-1)              [7]                              None
   9 Ops.LOAD            :            dtypes.float.vec(4)                      [8]                              None
  10 Ops.GEP             :            dtypes.float                             [9]                              (0,)
  11 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               1
  12 Ops.INDEX           :            dtypes.float.ptr(-1)                     [11, 3]                          None
  13 Ops.CAST            :            dtypes.float.vec(4).ptr(-1)              [12]                             None
  14 Ops.LOAD            :            dtypes.float.vec(4)                      [13]                             None
  15 Ops.GEP             :            dtypes.float                             [14]                             (0,)
  16 Ops.ADD             :            dtypes.float                             [10, 15]                         None
  17 Ops.GEP             :            dtypes.float                             [9]                              (1,)
  18 Ops.GEP             :            dtypes.float                             [14]                             (1,)
  19 Ops.ADD             :            dtypes.float                             [17, 18]                         None
  20 Ops.GEP             :            dtypes.float                             [9]                              (2,)
  21 Ops.GEP             :            dtypes.float                             [14]                             (2,)
  22 Ops.ADD             :            dtypes.float                             [20, 21]                         None
  23 Ops.GEP             :            dtypes.float                             [9]                              (3,)
  24 Ops.GEP             :            dtypes.float                             [14]                             (3,)
  25 Ops.ADD             :            dtypes.float                             [23, 24]                         None
  26 Ops.VECTORIZE       :            dtypes.float.vec(4)                      [16, 19, 22, 25]                 None
  27 Ops.STORE           :            dtypes.void                              [5, 26]                          None
  28 Ops.END             :            dtypes.void                              [27, 1]                          None
  29 Ops.SINK            :            dtypes.void                              [28]                             KernelInfo(name='elementwise_add_opt', axis_types=(), dont_use_locals=False, applied_opts=(Opt(op=OptOps.UPCAST, axis=0, arg=4),), opts_to_apply=None, estimates=None)
=== post index symbolic ===
   0 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               2
   1 Ops.DEFINE_VAR      :            dtypes.int                               []                               ('core_id', 0, 63)
   2 Ops.CONST           :            dtypes.int                               []                               4
   3 Ops.MUL             :            dtypes.int                               [1, '4']                         None
   4 Ops.INDEX           :            dtypes.float.ptr(-1)                     [0, 3]                           None
   5 Ops.CAST            :            dtypes.float.vec(4).ptr(-1)              [4]                              None
   6 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               0
   7 Ops.INDEX           :            dtypes.float.ptr(-1)                     [6, 3]                           None
   8 Ops.CAST            :            dtypes.float.vec(4).ptr(-1)              [7]                              None
   9 Ops.LOAD            :            dtypes.float.vec(4)                      [8]                              None
  10 Ops.GEP             :            dtypes.float                             [9]                              (0,)
  11 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               1
  12 Ops.INDEX           :            dtypes.float.ptr(-1)                     [11, 3]                          None
  13 Ops.CAST            :            dtypes.float.vec(4).ptr(-1)              [12]                             None
  14 Ops.LOAD            :            dtypes.float.vec(4)                      [13]                             None
  15 Ops.GEP             :            dtypes.float                             [14]                             (0,)
  16 Ops.ADD             :            dtypes.float                             [10, 15]                         None
  17 Ops.GEP             :            dtypes.float                             [9]                              (1,)
  18 Ops.GEP             :            dtypes.float                             [14]                             (1,)
  19 Ops.ADD             :            dtypes.float                             [17, 18]                         None
  20 Ops.GEP             :            dtypes.float                             [9]                              (2,)
  21 Ops.GEP             :            dtypes.float                             [14]                             (2,)
  22 Ops.ADD             :            dtypes.float                             [20, 21]                         None
  23 Ops.GEP             :            dtypes.float                             [9]                              (3,)
  24 Ops.GEP             :            dtypes.float                             [14]                             (3,)
  25 Ops.ADD             :            dtypes.float                             [23, 24]                         None
  26 Ops.VECTORIZE       :            dtypes.float.vec(4)                      [16, 19, 22, 25]                 None
  27 Ops.STORE           :            dtypes.void                              [5, 26]                          None
  28 Ops.END             :            dtypes.void                              [27, 1]                          None
  29 Ops.SINK            :            dtypes.void                              [28]                             KernelInfo(name='elementwise_add_opt', axis_types=(), dont_use_locals=False, applied_opts=(Opt(op=OptOps.UPCAST, axis=0, arg=4),), opts_to_apply=None, estimates=None)
=== decompositions ===
   0 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               2
   1 Ops.DEFINE_VAR      :            dtypes.int                               []                               ('core_id', 0, 63)
   2 Ops.CONST           :            dtypes.int                               []                               2
   3 Ops.SHL             :            dtypes.int                               [1, '2']                         None
   4 Ops.INDEX           :            dtypes.float.ptr(-1)                     [0, 3]                           None
   5 Ops.CAST            :            dtypes.float.vec(4).ptr(-1)              [4]                              None
   6 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               0
   7 Ops.INDEX           :            dtypes.float.ptr(-1)                     [6, 3]                           None
   8 Ops.CAST            :            dtypes.float.vec(4).ptr(-1)              [7]                              None
   9 Ops.LOAD            :            dtypes.float.vec(4)                      [8]                              None
  10 Ops.GEP             :            dtypes.float                             [9]                              (0,)
  11 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               1
  12 Ops.INDEX           :            dtypes.float.ptr(-1)                     [11, 3]                          None
  13 Ops.CAST            :            dtypes.float.vec(4).ptr(-1)              [12]                             None
  14 Ops.LOAD            :            dtypes.float.vec(4)                      [13]                             None
  15 Ops.GEP             :            dtypes.float                             [14]                             (0,)
  16 Ops.ADD             :            dtypes.float                             [10, 15]                         None
  17 Ops.GEP             :            dtypes.float                             [9]                              (1,)
  18 Ops.GEP             :            dtypes.float                             [14]                             (1,)
  19 Ops.ADD             :            dtypes.float                             [17, 18]                         None
  20 Ops.GEP             :            dtypes.float                             [9]                              (2,)
  21 Ops.GEP             :            dtypes.float                             [14]                             (2,)
  22 Ops.ADD             :            dtypes.float                             [20, 21]                         None
  23 Ops.GEP             :            dtypes.float                             [9]                              (3,)
  24 Ops.GEP             :            dtypes.float                             [14]                             (3,)
  25 Ops.ADD             :            dtypes.float                             [23, 24]                         None
  26 Ops.VECTORIZE       :            dtypes.float.vec(4)                      [16, 19, 22, 25]                 None
  27 Ops.STORE           :            dtypes.void                              [5, 26]                          None
  28 Ops.END             :            dtypes.void                              [27, 1]                          None
  29 Ops.SINK            :            dtypes.void                              [28]                             KernelInfo(name='elementwise_add_opt', axis_types=(), dont_use_locals=False, applied_opts=(Opt(op=OptOps.UPCAST, axis=0, arg=4),), opts_to_apply=None, estimates=None)
=== decomp dtypes ===
   0 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               2
   1 Ops.DEFINE_VAR      :            dtypes.int                               []                               ('core_id', 0, 63)
   2 Ops.CONST           :            dtypes.int                               []                               2
   3 Ops.SHL             :            dtypes.int                               [1, '2']                         None
   4 Ops.INDEX           :            dtypes.float.ptr(-1)                     [0, 3]                           None
   5 Ops.CAST            :            dtypes.float.vec(4).ptr(-1)              [4]                              None
   6 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               0
   7 Ops.INDEX           :            dtypes.float.ptr(-1)                     [6, 3]                           None
   8 Ops.CAST            :            dtypes.float.vec(4).ptr(-1)              [7]                              None
   9 Ops.LOAD            :            dtypes.float.vec(4)                      [8]                              None
  10 Ops.GEP             :            dtypes.float                             [9]                              (0,)
  11 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               1
  12 Ops.INDEX           :            dtypes.float.ptr(-1)                     [11, 3]                          None
  13 Ops.CAST            :            dtypes.float.vec(4).ptr(-1)              [12]                             None
  14 Ops.LOAD            :            dtypes.float.vec(4)                      [13]                             None
  15 Ops.GEP             :            dtypes.float                             [14]                             (0,)
  16 Ops.ADD             :            dtypes.float                             [10, 15]                         None
  17 Ops.GEP             :            dtypes.float                             [9]                              (1,)
  18 Ops.GEP             :            dtypes.float                             [14]                             (1,)
  19 Ops.ADD             :            dtypes.float                             [17, 18]                         None
  20 Ops.GEP             :            dtypes.float                             [9]                              (2,)
  21 Ops.GEP             :            dtypes.float                             [14]                             (2,)
  22 Ops.ADD             :            dtypes.float                             [20, 21]                         None
  23 Ops.GEP             :            dtypes.float                             [9]                              (3,)
  24 Ops.GEP             :            dtypes.float                             [14]                             (3,)
  25 Ops.ADD             :            dtypes.float                             [23, 24]                         None
  26 Ops.VECTORIZE       :            dtypes.float.vec(4)                      [16, 19, 22, 25]                 None
  27 Ops.STORE           :            dtypes.void                              [5, 26]                          None
  28 Ops.END             :            dtypes.void                              [27, 1]                          None
  29 Ops.SINK            :      
      dtypes.void                              [28]                             KernelInfo(name='elementwise_add_opt', axis_types=(), dont_use_locals=False, applied_opts=(Opt(op=OptOps.UPCAST, axis=0, arg=4),), opts_to_apply=None, estimates=None)\n=== transcendental ===\n   0 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               2\n   1 Ops.DEFINE_VAR      :            dtypes.int                               []                               ('core_id', 0, 63)\n   2 Ops.CONST           :            dtypes.int                               []                               2\n   3 Ops.SHL             :            dtypes.int                               [1, '2']                         None\n   4 Ops.INDEX           :            dtypes.float.ptr(-1)                     [0, 3]                           None\n   5 Ops.CAST            :            dtypes.float.vec(4).ptr(-1)              [4]                              None\n   6 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               0\n   7 Ops.INDEX           :            dtypes.float.ptr(-1)                     [6, 3]                           None\n   8 Ops.CAST            :            dtypes.float.vec(4).ptr(-1)              [7]                              None\n   9 Ops.LOAD            :            dtypes.float.vec(4)                      [8]                              None\n  10 Ops.GEP             :            dtypes.float                             [9]                              (0,)\n  11 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               1\n  12 Ops.INDEX           :            dtypes.float.ptr(-1)                     [11, 3]                          None\n  13 Ops.CAST            :            dtypes.float.vec(4).ptr(-1)              [12]                             None\n  14 Ops.LOAD            :            dtypes.float.vec(4)                 
     [13]                             None\n  15 Ops.GEP             :            dtypes.float                             [14]                             (0,)\n  16 Ops.ADD             :            dtypes.float                             [10, 15]                         None\n  17 Ops.GEP             :            dtypes.float                             [9]                              (1,)\n  18 Ops.GEP             :            dtypes.float                             [14]                             (1,)\n  19 Ops.ADD             :            dtypes.float                             [17, 18]                         None\n  20 Ops.GEP             :            dtypes.float                             [9]                              (2,)\n  21 Ops.GEP             :            dtypes.float                             [14]                             (2,)\n  22 Ops.ADD             :            dtypes.float                             [20, 21]                         None\n  23 Ops.GEP             :            dtypes.float                             [9]                              (3,)\n  24 Ops.GEP             :            dtypes.float                             [14]                             (3,)\n  25 Ops.ADD             :            dtypes.float                             [23, 24]                         None\n  26 Ops.VECTORIZE       :            dtypes.float.vec(4)                      [16, 19, 22, 25]                 None\n  27 Ops.STORE           :            dtypes.void                              [5, 26]                          None\n  28 Ops.END             :            dtypes.void                              [27, 1]                          None\n  29 Ops.SINK            :            dtypes.void                              [28]                             KernelInfo(name='elementwise_add_opt', axis_types=(), dont_use_locals=False, applied_opts=(Opt(op=OptOps.UPCAST, axis=0, arg=4),), opts_to_apply=None, estimates=None)\n=== final rewrite ===\n 
  0 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               2\n   1 Ops.DEFINE_VAR      :            dtypes.int                               []                               ('core_id', 0, 63)\n   2 Ops.CONST           :            dtypes.int                               []                               2\n   3 Ops.SHL             :            dtypes.int                               [1, '2']                         None\n   4 Ops.INDEX           :            dtypes.float.ptr(-1)                     [0, 3]                           None\n   5 Ops.CAST            :            dtypes.float.vec(4).ptr(-1)              [4]                              None\n   6 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               0\n   7 Ops.INDEX           :            dtypes.float.ptr(-1)                     [6, 3]                           None\n   8 Ops.CAST            :            dtypes.float.vec(4).ptr(-1)              [7]                              None\n   9 Ops.LOAD            :            dtypes.float.vec(4)                      [8]                              None\n  10 Ops.GEP             :            dtypes.float                             [9]                              (0,)\n  11 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               1\n  12 Ops.INDEX           :            dtypes.float.ptr(-1)                     [11, 3]                          None\n  13 Ops.CAST            :            dtypes.float.vec(4).ptr(-1)              [12]                             None\n  14 Ops.LOAD            :            dtypes.float.vec(4)                      [13]                             None\n  15 Ops.GEP             :            dtypes.float                             [14]                             (0,)\n  16 Ops.ADD             :            dtypes.float                             [10, 15]                        
 None\n  17 Ops.GEP             :            dtypes.float                             [9]                              (1,)\n  18 Ops.GEP             :            dtypes.float                             [14]                             (1,)\n  19 Ops.ADD             :            dtypes.float                             [17, 18]                         None\n  20 Ops.GEP             :            dtypes.float                             [9]                              (2,)\n  21 Ops.GEP             :            dtypes.float                             [14]                             (2,)\n  22 Ops.ADD             :            dtypes.float                             [20, 21]                         None\n  23 Ops.GEP             :            dtypes.float                             [9]                              (3,)\n  24 Ops.GEP             :            dtypes.float                             [14]                             (3,)\n  25 Ops.ADD             :            dtypes.float                             [23, 24]                         None\n  26 Ops.VECTORIZE       :            dtypes.float.vec(4)                      [16, 19, 22, 25]                 None\n  27 Ops.STORE           :            dtypes.void                              [5, 26]                          None\n  28 Ops.SINK            :            dtypes.void                              [27]                             KernelInfo(name='elementwise_add_opt', axis_types=(), dont_use_locals=False, applied_opts=(Opt(op=OptOps.UPCAST, axis=0, arg=4),), opts_to_apply=None, estimates=None)\n=== add control flow ===\n   0 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               2\n   1 Ops.DEFINE_VAR      :            dtypes.int                               []                               ('core_id', 0, 63)\n   2 Ops.CONST           :            dtypes.int                               []                               2\n   3 Ops.SHL             : 
           dtypes.int                               [1, '2']                         None\n   4 Ops.INDEX           :            dtypes.float.ptr(-1)                     [0, 3]                           None\n   5 Ops.CAST            :            dtypes.float.vec(4).ptr(-1)              [4]                              None\n   6 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               0\n   7 Ops.INDEX           :            dtypes.float.ptr(-1)                     [6, 3]                           None\n   8 Ops.CAST            :            dtypes.float.vec(4).ptr(-1)              [7]                              None\n   9 Ops.LOAD            :            dtypes.float.vec(4)                      [8]                              None\n  10 Ops.GEP             :            dtypes.float                             [9]                              (0,)\n  11 Ops.PARAM           :            dtypes.float.ptr(-1)                     []                               1\n  12 Ops.INDEX           :            dtypes.float.ptr(-1)                     [11, 3]                          None\n  13 Ops.CAST            :            dtypes.float.vec(4).ptr(-1)              [12]                             None\n  14 Ops.LOAD            :            dtypes.float.vec(4)                      [13]                             None\n  15 Ops.GEP             :            dtypes.float                             [14]                             (0,)\n  16 Ops.ADD             :            dtypes.float                             [10, 15]                         None\n  17 Ops.GEP             :            dtypes.float                             [9]                              (1,)\n  18 Ops.GEP             :            dtypes.float                             [14]                             (1,)\n  19 Ops.ADD             :            dtypes.float                             [17, 18]                         None\n  20 Ops.GEP             : 
           dtypes.float                             [9]                              (2,)\n  21 Ops.GEP             :            dtypes.float                             [14]                             (2,)\n  22 Ops.ADD             :            dtypes.float                             [20, 21]                         None\n  23 Ops.GEP             :            dtypes.float                             [9]                              (3,)\n  24 Ops.GEP             :            dtypes.float                             [14]                             (3,)\n  25 Ops.ADD             :            dtypes.float                             [23, 24]                         None\n  26 Ops.VECTORIZE       :            dtypes.float.vec(4)                      [16, 19, 22, 25]                 None\n  27 Ops.STORE           :            dtypes.void                              [5, 26]                          None\n  28 Ops.SINK            :            dtypes.void                              [27]                             KernelInfo(name='elementwise_add_opt', axis_types=(), dont_use_locals=False, applied_opts=(Opt(op=OptOps.UPCAST, axis=0, arg=4),), opts_to_apply=None, estimates=None)\n"
  },
  {
    "path": "packages/tolk/test/golden/debug/generate_actual.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Generates elementwise_add.actual for the debug golden test. Dune runs\n   this with DEBUG=6 so graph_rewrite emits print_uops after each named\n   stage. Stderr is redirected to the output file. *)\n\nopen Tolk\nopen Tolk_ir\nmodule K = Kernel\n\nlet global_fptr = Dtype.Ptr.create (Dtype.val_of Dtype.float32) ~addrspace:Global ~size:(-1)\nlet idx n = K.const (Const.int Dtype.Val.index n)\n\nlet make_kernel ~name ~opts_to_apply =\n  let p0 = K.param ~idx:0 ~dtype:global_fptr in\n  let p1 = K.param ~idx:1 ~dtype:global_fptr in\n  let p2 = K.param ~idx:2 ~dtype:global_fptr in\n  let r0 = K.range ~size:(idx 256) ~axis:0 ~kind:Axis_kind.Global () in\n  let ld_a = K.load ~src:(K.index ~ptr:p0 ~idxs:[ r0 ] ()) () in\n  let ld_b = K.load ~src:(K.index ~ptr:p1 ~idxs:[ r0 ] ()) () in\n  let add = K.binary ~op:`Add ~lhs:ld_a ~rhs:ld_b in\n  let st = K.store ~dst:(K.index ~ptr:p2 ~idxs:[ r0 ] ()) ~value:add ~ranges:[] in\n  let e = K.end_ ~value:st ~ranges:[ r0 ] () in\n  K.sink\n    ~kernel_info:{ K.name = name;\n      axis_kinds = [ Axis_kind.Global ]; dont_use_locals = false;\n      applied_opts = []; opts_to_apply; estimates = None }\n    [ e ]\n\nlet ren = Cstyle.clang_no_abi\n\nlet saved_stderr = Unix.dup Unix.stderr\n\nlet run_test ~name ~sink =\n  let path = Filename.concat Sys.argv.(1) (name ^ \".actual\") in\n  let fd = Unix.openfile path [ O_WRONLY; O_CREAT; O_TRUNC ] 0o644 in\n  Unix.dup2 fd Unix.stderr;\n  Unix.close fd;\n  ignore (Codegen.full_rewrite_to_sink ~optimize:true ren sink);\n  flush stderr;\n  Unix.dup2 saved_stderr Unix.stderr\n\nlet () =\n  (* Test 1: no optimization (scalar) *)\n  run_test ~name:\"elementwise_add\"\n    ~sink:(make_kernel ~name:\"elementwise_add\" 
~opts_to_apply:(Some []));\n  (* Test 2: auto-optimized (float4 upcast) *)\n  run_test ~name:\"elementwise_add_opt\"\n    ~sink:(make_kernel ~name:\"elementwise_add_opt\" ~opts_to_apply:None)\n"
  },
  {
    "path": "packages/tolk/test/golden/debug/generate_expected.py",
    "content": "#!/usr/bin/env python3\n\"\"\"Generate tinygrad reference .expected files for debug golden tests.\n\nCaptures the exact print_uops output after each named graph_rewrite stage\nin full_rewrite_to_sink, concatenated into a single file with === headers.\n\nUsage:\n    uv run packages/tolk/test/golden/debug/generate_expected.py\n\"\"\"\n\nimport io\nimport os\nimport re\nimport sys\nimport contextlib\n\nsys.path.insert(\n    0,\n    os.path.join(\n        os.path.dirname(__file__), \"..\", \"..\", \"..\", \"..\", \"..\", \"_tinygrad\"\n    ),\n)\n\nfrom tinygrad.uop.ops import UOp, Ops, KernelInfo, AxisType, print_uops, graph_rewrite\nfrom tinygrad.dtype import dtypes\nfrom tinygrad.codegen import full_rewrite_to_sink\nfrom tinygrad.renderer.cstyle import ClangRenderer\n\nOUT_DIR = os.path.dirname(__file__)\nRENDERER = ClangRenderer()\nANSI_RE = re.compile(r\"\\x1b\\[[0-9;]*m\")\n\n# Names of codegen stages in full_rewrite_to_sink.\nCODEGEN_STAGES = {\n    \"early movement ops\",\n    \"load collapse\",\n    \"split ranges\",\n    \"initial symbolic\",\n    \"simplify ranges\",\n    \"postopt symbolic\",\n    \"expander\",\n    \"add local buffers\",\n    \"remove_reduce\",\n    \"add gpudims\",\n    \"** add loads (code)\",\n    \"devectorize\",\n    \"lower all index dtypes\",\n    \"post index symbolic\",\n    \"decompositions\",\n    \"decomp dtypes\",\n    \"transcendental\",\n    \"final rewrite\",\n    \"add control flow\",\n}\n\n\ndef strip_ansi(s):\n    return ANSI_RE.sub(\"\", s)\n\n\ndef capture_print_uops(uops):\n    buf = io.StringIO()\n    with contextlib.redirect_stdout(buf):\n        print_uops(uops)\n    return strip_ansi(buf.getvalue().rstrip(\"\\n\"))\n\n\ndef build_elementwise_add(name, opts_to_apply):\n    p0 = UOp(Ops.PARAM, dtypes.float32.ptr(), (), 0)\n    p1 = UOp(Ops.PARAM, dtypes.float32.ptr(), (), 1)\n    p2 = UOp(Ops.PARAM, dtypes.float32.ptr(), (), 2)\n    r0 = UOp.range(256, 0, AxisType.GLOBAL)\n    ld_a = p0.index(r0, 
ptr=True).load()\n    ld_b = p1.index(r0, ptr=True).load()\n    add = ld_a + ld_b\n    st = p2.index(r0, ptr=True).store(add)\n    end = st.end(r0)\n    return UOp.sink(end, arg=KernelInfo(\n        name=name, axis_types=(AxisType.GLOBAL,), opts_to_apply=opts_to_apply))\n\n\ndef generate_test(name, sink):\n    import tinygrad.codegen as codegen_mod\n    orig_gr = codegen_mod.graph_rewrite\n    sections = []\n\n    def capturing_graph_rewrite(*args, **kwargs):\n        result = orig_gr(*args, **kwargs)\n        stage_name = kwargs.get(\"name\", \"\")\n        if stage_name in CODEGEN_STAGES:\n            uops = list(result.toposort())\n            sections.append(f\"=== {stage_name} ===\\n\" + capture_print_uops(uops))\n        return result\n\n    codegen_mod.graph_rewrite = capturing_graph_rewrite\n    try:\n        full_rewrite_to_sink(sink, RENDERER, optimize=True)\n    finally:\n        codegen_mod.graph_rewrite = orig_gr\n\n    content = \"\\n\".join(sections)\n    path = os.path.join(OUT_DIR, f\"{name}.expected\")\n    with open(path, \"w\") as f:\n        f.write(content + \"\\n\")\n    print(f\"wrote {path} ({len(sections)} stages)\")\n\n\ndef main():\n    # Test 1: no optimization (scalar)\n    generate_test(\"elementwise_add\",\n        build_elementwise_add(\"elementwise_add\", opts_to_apply=()))\n    # Test 2: auto-optimized (float4 upcast)\n    generate_test(\"elementwise_add_opt\",\n        build_elementwise_add(\"elementwise_add_opt\", opts_to_apply=None))\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "packages/tolk/test/golden/rangeify/clang_binop_permute.expected",
    "content": "void E_5_2n3(float* restrict data0_10, float* restrict data1_10, float* restrict data2_10, float* restrict data3_10) {\n  for (int Lidx0 = 0; Lidx0 < 5; Lidx0++) {\n    for (int Lidx1 = 0; Lidx1 < 2; Lidx1++) {\n      int alu0 = ((Lidx1*5)+Lidx0);\n      float val0 = (*(data1_10+alu0));\n      float val1 = (*(data2_10+alu0));\n      int alu1 = ((Lidx0<<1)+Lidx1);\n      float val2 = (*(data3_10+alu1));\n      *(data0_10+alu1) = (val0+val1+val2);\n    }\n  }\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/rangeify/clang_binop_reshape.expected",
    "content": "void E_10(float* restrict data0_10, float* restrict data1_10, float* restrict data2_10, float* restrict data3_10) {\n  for (int Lidx0 = 0; Lidx0 < 10; Lidx0++) {\n    float val0 = (*(data1_10+Lidx0));\n    float val1 = (*(data2_10+Lidx0));\n    float val2 = (*(data3_10+Lidx0));\n    *(data0_10+Lidx0) = (val0+val1+val2);\n  }\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/rangeify/clang_contiguous_add.expected",
    "content": "typedef float float4 __attribute__((aligned(16),ext_vector_type(4)));\nvoid E_8_4(float* restrict data0_32, float* restrict data1_32, float* restrict data2_32) {\n  for (int Lidx0 = 0; Lidx0 < 8; Lidx0++) {\n    int alu0 = (Lidx0<<2);\n    float4 val0 = (*((float4*)((data1_32+alu0))));\n    float4 val1 = (*((float4*)((data2_32+alu0))));\n    *((float4*)((data0_32+alu0))) = (float4){(val0[0]+val1[0]),(val0[1]+val1[1]),(val0[2]+val1[2]),(val0[3]+val1[3])};\n  }\n}\n---\ntypedef float float4 __attribute__((aligned(16),ext_vector_type(4)));\nvoid E_8_4n1(float* restrict data0_32, float* restrict data1_32, float* restrict data2_32) {\n  for (int Lidx0 = 0; Lidx0 < 8; Lidx0++) {\n    int alu0 = (Lidx0<<2);\n    float4 val0 = (*((float4*)((data1_32+alu0))));\n    float4 val1 = (*((float4*)((data2_32+alu0))));\n    *((float4*)((data0_32+alu0))) = (float4){(val0[0]+val1[0]),(val0[1]+val1[1]),(val0[2]+val1[2]),(val0[3]+val1[3])};\n  }\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/rangeify/clang_diamond.expected",
    "content": "void E_10n1(float* restrict data0_10, float* restrict data1_10, float* restrict data2_10, float* restrict data3_10, float* restrict data4_10) {\n  for (int Lidx0 = 0; Lidx0 < 10; Lidx0++) {\n    float val0 = (*(data1_10+Lidx0));\n    float val1 = (*(data2_10+Lidx0));\n    float val2 = (*(data3_10+Lidx0));\n    float val3 = (*(data4_10+Lidx0));\n    *(data0_10+Lidx0) = (val0+((val1+val2)*2.0f)+val3);\n  }\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/rangeify/clang_elementwise_3way.expected",
    "content": "typedef float float4 __attribute__((aligned(16),ext_vector_type(4)));\nvoid E_64_4n1(float* restrict data0_256, float* restrict data1_256, float* restrict data2_256, float* restrict data3_256) {\n  for (int Lidx0 = 0; Lidx0 < 64; Lidx0++) {\n    int alu0 = (Lidx0<<2);\n    float4 val0 = (*((float4*)((data1_256+alu0))));\n    float4 val1 = (*((float4*)((data2_256+alu0))));\n    float4 val2 = (*((float4*)((data3_256+alu0))));\n    *((float4*)((data0_256+alu0))) = (float4){(val0[0]+val1[0]+val2[0]),(val0[1]+val1[1]+val2[1]),(val0[2]+val1[2]+val2[2]),(val0[3]+val1[3]+val2[3])};\n  }\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/rangeify/clang_elementwise_add.expected",
    "content": "typedef float float4 __attribute__((aligned(16),ext_vector_type(4)));\nvoid E_64_4(float* restrict data0_256, float* restrict data1_256, float* restrict data2_256) {\n  for (int Lidx0 = 0; Lidx0 < 64; Lidx0++) {\n    int alu0 = (Lidx0<<2);\n    float4 val0 = (*((float4*)((data1_256+alu0))));\n    float4 val1 = (*((float4*)((data2_256+alu0))));\n    *((float4*)((data0_256+alu0))) = (float4){(val0[0]+val1[0]),(val0[1]+val1[1]),(val0[2]+val1[2]),(val0[3]+val1[3])};\n  }\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/rangeify/clang_expand_permute.expected",
    "content": "void E_10_10_10(float* restrict data0_1000, float* restrict data1_100, float* restrict data2_100) {\n  for (int Lidx0 = 0; Lidx0 < 10; Lidx0++) {\n    for (int Lidx1 = 0; Lidx1 < 10; Lidx1++) {\n      int alu0 = ((Lidx0*10)+Lidx1);\n      float val0 = (*(data1_100+alu0));\n      float val1 = (*(data2_100+alu0));\n      for (int Lidx2 = 0; Lidx2 < 10; Lidx2++) {\n        int alu1 = ((Lidx2*10)+Lidx1);\n        float val2 = (*(data1_100+alu1));\n        float val3 = (*(data2_100+alu1));\n        *(data0_1000+((Lidx1*10)+Lidx2+(Lidx0*100))) = (val0+val1+val2+val3);\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/rangeify/clang_mulacc.expected",
    "content": "typedef float float4 __attribute__((aligned(16),ext_vector_type(4)));\nvoid r_64_4(float* restrict data0_1, float* restrict data1_256, float* restrict data2_256) {\n  float acc0[1];\n  *(acc0+0) = 0.0f;\n  for (int Ridx0 = 0; Ridx0 < 64; Ridx0++) {\n    int alu1 = (Ridx0<<2);\n    float4 val0 = (*((float4*)((data1_256+alu1))));\n    float4 val1 = (*((float4*)((data2_256+alu1))));\n    *(acc0+0) = ((*(acc0+0))+(val0[0]*val1[0])+(val0[1]*val1[1])+(val0[2]*val1[2])+(val0[3]*val1[3]));\n  }\n  *(data0_1+0) = (*(acc0+0));\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/rangeify/clang_multistage_reduce.expected",
    "content": "void r_32_32_32(float* restrict data0_32, float* restrict data1_32768) {\n  float acc0[32];\n  for (int Lidx2 = 0; Lidx2 < 32; Lidx2++) {\n    *(acc0+0) = 0.0f;\n    *(acc0+1) = 0.0f;\n    *(acc0+2) = 0.0f;\n    *(acc0+3) = 0.0f;\n    *(acc0+4) = 0.0f;\n    *(acc0+5) = 0.0f;\n    *(acc0+6) = 0.0f;\n    *(acc0+7) = 0.0f;\n    *(acc0+8) = 0.0f;\n    *(acc0+9) = 0.0f;\n    *(acc0+10) = 0.0f;\n    *(acc0+11) = 0.0f;\n    *(acc0+12) = 0.0f;\n    *(acc0+13) = 0.0f;\n    *(acc0+14) = 0.0f;\n    *(acc0+15) = 0.0f;\n    *(acc0+16) = 0.0f;\n    *(acc0+17) = 0.0f;\n    *(acc0+18) = 0.0f;\n    *(acc0+19) = 0.0f;\n    *(acc0+20) = 0.0f;\n    *(acc0+21) = 0.0f;\n    *(acc0+22) = 0.0f;\n    *(acc0+23) = 0.0f;\n    *(acc0+24) = 0.0f;\n    *(acc0+25) = 0.0f;\n    *(acc0+26) = 0.0f;\n    *(acc0+27) = 0.0f;\n    *(acc0+28) = 0.0f;\n    *(acc0+29) = 0.0f;\n    *(acc0+30) = 0.0f;\n    *(acc0+31) = 0.0f;\n    for (int Ridx0 = 0; Ridx0 < 32; Ridx0++) {\n      int alu32 = ((Lidx2<<10)+Ridx0);\n      float val0 = (*(data1_32768+(alu32+64)));\n      float val1 = (*(data1_32768+alu32));\n      float val2 = (*(data1_32768+(alu32+32)));\n      float val3 = (*(data1_32768+(alu32+96)));\n      float val4 = (*(data1_32768+(alu32+128)));\n      float val5 = (*(data1_32768+(alu32+160)));\n      float val6 = (*(data1_32768+(alu32+192)));\n      float val7 = (*(data1_32768+(alu32+224)));\n      float val8 = (*(data1_32768+(alu32+256)));\n      float val9 = (*(data1_32768+(alu32+288)));\n      float val10 = (*(data1_32768+(alu32+320)));\n      float val11 = (*(data1_32768+(alu32+352)));\n      float val12 = (*(data1_32768+(alu32+384)));\n      float val13 = (*(data1_32768+(alu32+416)));\n      float val14 = (*(data1_32768+(alu32+448)));\n      float val15 = (*(data1_32768+(alu32+480)));\n      float val16 = (*(data1_32768+(alu32+512)));\n      float val17 = (*(data1_32768+(alu32+544)));\n      float val18 = (*(data1_32768+(alu32+576)));\n      float val19 = 
(*(data1_32768+(alu32+608)));\n      float val20 = (*(data1_32768+(alu32+640)));\n      float val21 = (*(data1_32768+(alu32+672)));\n      float val22 = (*(data1_32768+(alu32+704)));\n      float val23 = (*(data1_32768+(alu32+736)));\n      float val24 = (*(data1_32768+(alu32+768)));\n      float val25 = (*(data1_32768+(alu32+800)));\n      float val26 = (*(data1_32768+(alu32+832)));\n      float val27 = (*(data1_32768+(alu32+864)));\n      float val28 = (*(data1_32768+(alu32+896)));\n      float val29 = (*(data1_32768+(alu32+928)));\n      float val30 = (*(data1_32768+(alu32+960)));\n      float val31 = (*(data1_32768+(alu32+992)));\n      *(acc0+0) = ((*(acc0+0))+val1);\n      *(acc0+1) = ((*(acc0+1))+val2);\n      *(acc0+2) = ((*(acc0+2))+val0);\n      *(acc0+3) = ((*(acc0+3))+val3);\n      *(acc0+4) = ((*(acc0+4))+val4);\n      *(acc0+5) = ((*(acc0+5))+val5);\n      *(acc0+6) = ((*(acc0+6))+val6);\n      *(acc0+7) = ((*(acc0+7))+val7);\n      *(acc0+8) = ((*(acc0+8))+val8);\n      *(acc0+9) = ((*(acc0+9))+val9);\n      *(acc0+10) = ((*(acc0+10))+val10);\n      *(acc0+11) = ((*(acc0+11))+val11);\n      *(acc0+12) = ((*(acc0+12))+val12);\n      *(acc0+13) = ((*(acc0+13))+val13);\n      *(acc0+14) = ((*(acc0+14))+val14);\n      *(acc0+15) = ((*(acc0+15))+val15);\n      *(acc0+16) = ((*(acc0+16))+val16);\n      *(acc0+17) = ((*(acc0+17))+val17);\n      *(acc0+18) = ((*(acc0+18))+val18);\n      *(acc0+19) = ((*(acc0+19))+val19);\n      *(acc0+20) = ((*(acc0+20))+val20);\n      *(acc0+21) = ((*(acc0+21))+val21);\n      *(acc0+22) = ((*(acc0+22))+val22);\n      *(acc0+23) = ((*(acc0+23))+val23);\n      *(acc0+24) = ((*(acc0+24))+val24);\n      *(acc0+25) = ((*(acc0+25))+val25);\n      *(acc0+26) = ((*(acc0+26))+val26);\n      *(acc0+27) = ((*(acc0+27))+val27);\n      *(acc0+28) = ((*(acc0+28))+val28);\n      *(acc0+29) = ((*(acc0+29))+val29);\n      *(acc0+30) = ((*(acc0+30))+val30);\n      *(acc0+31) = ((*(acc0+31))+val31);\n    }\n    float alu66 = 
(((*(acc0+0))<0.0f)?0.0f:(*(acc0+0)));\n    float alu67 = (((*(acc0+1))<0.0f)?0.0f:(*(acc0+1)));\n    float alu68 = (((*(acc0+2))<0.0f)?0.0f:(*(acc0+2)));\n    float alu69 = (((*(acc0+3))<0.0f)?0.0f:(*(acc0+3)));\n    float alu70 = (((*(acc0+4))<0.0f)?0.0f:(*(acc0+4)));\n    float alu71 = (((*(acc0+5))<0.0f)?0.0f:(*(acc0+5)));\n    float alu72 = (((*(acc0+6))<0.0f)?0.0f:(*(acc0+6)));\n    float alu73 = (((*(acc0+7))<0.0f)?0.0f:(*(acc0+7)));\n    float alu74 = (((*(acc0+8))<0.0f)?0.0f:(*(acc0+8)));\n    float alu75 = (((*(acc0+9))<0.0f)?0.0f:(*(acc0+9)));\n    float alu76 = (((*(acc0+10))<0.0f)?0.0f:(*(acc0+10)));\n    float alu77 = (((*(acc0+11))<0.0f)?0.0f:(*(acc0+11)));\n    float alu78 = (((*(acc0+12))<0.0f)?0.0f:(*(acc0+12)));\n    float alu79 = (((*(acc0+13))<0.0f)?0.0f:(*(acc0+13)));\n    float alu80 = (((*(acc0+14))<0.0f)?0.0f:(*(acc0+14)));\n    float alu81 = (((*(acc0+15))<0.0f)?0.0f:(*(acc0+15)));\n    float alu82 = (((*(acc0+16))<0.0f)?0.0f:(*(acc0+16)));\n    float alu83 = (((*(acc0+17))<0.0f)?0.0f:(*(acc0+17)));\n    float alu84 = (((*(acc0+18))<0.0f)?0.0f:(*(acc0+18)));\n    float alu85 = (((*(acc0+19))<0.0f)?0.0f:(*(acc0+19)));\n    float alu86 = (((*(acc0+20))<0.0f)?0.0f:(*(acc0+20)));\n    float alu87 = (((*(acc0+21))<0.0f)?0.0f:(*(acc0+21)));\n    float alu88 = (((*(acc0+22))<0.0f)?0.0f:(*(acc0+22)));\n    float alu89 = (((*(acc0+23))<0.0f)?0.0f:(*(acc0+23)));\n    float alu90 = (((*(acc0+24))<0.0f)?0.0f:(*(acc0+24)));\n    float alu91 = (((*(acc0+25))<0.0f)?0.0f:(*(acc0+25)));\n    float alu92 = (((*(acc0+26))<0.0f)?0.0f:(*(acc0+26)));\n    float alu93 = (((*(acc0+27))<0.0f)?0.0f:(*(acc0+27)));\n    float alu94 = (((*(acc0+28))<0.0f)?0.0f:(*(acc0+28)));\n    float alu95 = (((*(acc0+29))<0.0f)?0.0f:(*(acc0+29)));\n    float alu96 = (((*(acc0+30))<0.0f)?0.0f:(*(acc0+30)));\n    float alu97 = (((*(acc0+31))<0.0f)?0.0f:(*(acc0+31)));\n    *(data0_32+Lidx2) = 
(alu66+alu67+alu68+alu69+alu70+alu71+alu72+alu73+alu74+alu75+alu76+alu77+alu78+alu79+alu80+alu81+alu82+alu83+alu84+alu85+alu86+alu87+alu88+alu89+alu90+alu91+alu92+alu93+alu94+alu95+alu96+alu97);\n  }\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/rangeify/clang_permute_through_reshape.expected",
    "content": "typedef float float4 __attribute__((aligned(16),ext_vector_type(4)));\nvoid E_16_4_4(float* restrict data0_256, float* restrict data1_256, float* restrict data2_256) {\n  for (int Lidx0 = 0; Lidx0 < 16; Lidx0++) {\n    for (int Lidx2 = 0; Lidx2 < 4; Lidx2++) {\n      int alu0 = ((Lidx2<<6)+Lidx0);\n      int alu1 = (alu0+16);\n      float val0 = (*(data1_256+alu1));\n      int alu2 = (alu0+32);\n      float val1 = (*(data1_256+alu2));\n      int alu3 = (alu0+48);\n      float val2 = (*(data1_256+alu3));\n      float val3 = (*(data1_256+alu0));\n      float val4 = (*(data2_256+alu1));\n      float val5 = (*(data2_256+alu2));\n      float val6 = (*(data2_256+alu3));\n      float val7 = (*(data2_256+alu0));\n      *((float4*)((data0_256+((Lidx0<<4)+(Lidx2<<2))))) = (float4){(val3+val7),(val0+val4),(val1+val5),(val2+val6)};\n    }\n  }\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/rangeify/clang_reduce_permute_binop.expected",
    "content": "void r_10_10_10(float* restrict data0_100, float* restrict data1_1000, float* restrict data2_100) {\n  for (int Lidx1 = 0; Lidx1 < 10; Lidx1++) {\n    for (int Lidx2 = 0; Lidx2 < 10; Lidx2++) {\n      int alu0 = ((Lidx2*10)+Lidx1);\n      float val0 = (*(data1_1000+(alu0+100)));\n      float val1 = (*(data1_1000+(alu0+200)));\n      float val2 = (*(data1_1000+(alu0+300)));\n      float val3 = (*(data1_1000+(alu0+400)));\n      float val4 = (*(data1_1000+(alu0+500)));\n      float val5 = (*(data1_1000+(alu0+600)));\n      float val6 = (*(data1_1000+(alu0+700)));\n      float val7 = (*(data1_1000+(alu0+800)));\n      float val8 = (*(data1_1000+(alu0+900)));\n      float val9 = (*(data1_1000+alu0));\n      int alu1 = ((Lidx1*10)+Lidx2);\n      float val10 = (*(data2_100+alu1));\n      *(data0_100+alu1) = (val9+val0+val1+val2+val3+val4+val5+val6+val7+val8+val10);\n    }\n  }\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/rangeify/clang_reduce_reshape_binop.expected",
    "content": "void r_10_10(float* restrict data0_10, float* restrict data1_100, float* restrict data2_10) {\n  for (int Lidx1 = 0; Lidx1 < 10; Lidx1++) {\n    float val0 = (*(data1_100+(Lidx1+10)));\n    float val1 = (*(data1_100+(Lidx1+20)));\n    float val2 = (*(data1_100+(Lidx1+30)));\n    float val3 = (*(data1_100+(Lidx1+40)));\n    float val4 = (*(data1_100+(Lidx1+50)));\n    float val5 = (*(data1_100+(Lidx1+60)));\n    float val6 = (*(data1_100+(Lidx1+70)));\n    float val7 = (*(data1_100+(Lidx1+80)));\n    float val8 = (*(data1_100+(Lidx1+90)));\n    float val9 = (*(data1_100+Lidx1));\n    float val10 = (*(data2_10+Lidx1));\n    *(data0_10+Lidx1) = (val9+val0+val1+val2+val3+val4+val5+val6+val7+val8+val10);\n  }\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/rangeify/clang_reduce_shrink.expected",
    "content": "typedef float float4 __attribute__((aligned(16),ext_vector_type(4)));\nvoid r_16_32(float* restrict data0_16, float* restrict data1_1024, float* restrict data2_16) {\n  for (int Lidx1 = 0; Lidx1 < 16; Lidx1++) {\n    float val0 = (*(data2_16+Lidx1));\n    int alu0 = (Lidx1<<5);\n    float4 val1 = (*((float4*)((data1_1024+(alu0+4)))));\n    float4 val2 = (*((float4*)((data1_1024+(alu0+8)))));\n    float4 val3 = (*((float4*)((data1_1024+(alu0+12)))));\n    float4 val4 = (*((float4*)((data1_1024+(alu0+16)))));\n    float4 val5 = (*((float4*)((data1_1024+(alu0+20)))));\n    float4 val6 = (*((float4*)((data1_1024+(alu0+24)))));\n    float4 val7 = (*((float4*)((data1_1024+(alu0+28)))));\n    float4 val8 = (*((float4*)((data1_1024+alu0))));\n    *(data0_16+Lidx1) = (val8[0]+val8[1]+val8[2]+val8[3]+val1[0]+val1[1]+val1[2]+val1[3]+val2[0]+val2[1]+val2[2]+val2[3]+val3[0]+val3[1]+val3[2]+val3[3]+val4[0]+val4[1]+val4[2]+val4[3]+val5[0]+val5[1]+val5[2]+val5[3]+val6[0]+val6[1]+val6[2]+val6[3]+val7[0]+val7[1]+val7[2]+val7[3]+val0);\n  }\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/rangeify/clang_reduce_unary.expected",
    "content": "typedef float float4 __attribute__((aligned(16),ext_vector_type(4)));\nvoid r_16(float* restrict data0_1, float* restrict data1_16) {\n  float4 val0 = (*((float4*)((data1_16+0))));\n  float4 val1 = (*((float4*)((data1_16+4))));\n  float4 val2 = (*((float4*)((data1_16+8))));\n  float4 val3 = (*((float4*)((data1_16+12))));\n  *(data0_1+0) = -__builtin_sqrtf((val0[0]+val0[1]+val0[2]+val0[3]+val1[0]+val1[1]+val1[2]+val1[3]+val2[0]+val2[1]+val2[2]+val2[3]+val3[0]+val3[1]+val3[2]+val3[3]));\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/rangeify/clang_reshape_chain.expected",
    "content": "typedef float float4 __attribute__((aligned(16),ext_vector_type(4)));\nvoid E_4_4n4(float* restrict data0_16, float* restrict data1_16, float* restrict data2_16) {\n  for (int Lidx0 = 0; Lidx0 < 4; Lidx0++) {\n    int alu0 = (Lidx0<<2);\n    float4 val0 = (*((float4*)((data1_16+alu0))));\n    float4 val1 = (*((float4*)((data2_16+alu0))));\n    *((float4*)((data0_16+alu0))) = (float4){(val0[0]+val1[0]),(val0[1]+val1[1]),(val0[2]+val1[2]),(val0[3]+val1[3])};\n  }\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/rangeify/clang_shrink_fuse.expected",
    "content": "typedef float float4 __attribute__((aligned(16),ext_vector_type(4)));\nvoid E_4_4(float* restrict data0_16, float* restrict data1_131072, float* restrict data2_131072, float* restrict data3_16) {\n  for (int Lidx0 = 0; Lidx0 < 4; Lidx0++) {\n    int alu0 = (Lidx0<<2);\n    float4 val0 = (*((float4*)((data1_131072+alu0))));\n    float4 val1 = (*((float4*)((data2_131072+alu0))));\n    float4 val2 = (*((float4*)((data3_16+alu0))));\n    *((float4*)((data0_16+alu0))) = (float4){(val0[0]*val1[0]*val2[0]),(val0[1]*val1[1]*val2[1]),(val0[2]*val1[2]*val2[2]),(val0[3]*val1[3]*val2[3])};\n  }\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/rangeify/clang_two_sum.expected",
    "content": "typedef float float4 __attribute__((aligned(16),ext_vector_type(4)));\nvoid r_32_2_64_16_4(float* restrict data0_64, float* restrict data1_4096, const int core_id) {\n  float acc0[1];\n  float acc1[1];\n  for (int Lidx2 = 0; Lidx2 < 32; Lidx2++) {\n    int alu0 = ((core_id<<5)+Lidx2);\n    *(acc0+0) = 0.0f;\n    for (int Ridx0 = 0; Ridx0 < 64; Ridx0++) {\n      float val0 = (*(data1_4096+(alu0+(Ridx0<<6))));\n      *(acc0+0) = ((*(acc0+0))+val0);\n    }\n    *(acc1+0) = 0.0f;\n    for (int Ridx1 = 0; Ridx1 < 16; Ridx1++) {\n      float4 val1 = (*((float4*)((data1_4096+((core_id<<11)+(Lidx2<<6)+(Ridx1<<2))))));\n      *(acc1+0) = ((*(acc1+0))+val1[0]+val1[1]+val1[2]+val1[3]);\n    }\n    *(data0_64+alu0) = ((*(acc0+0))+(*(acc1+0)));\n  }\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/rangeify/cuda_binop_permute.expected",
    "content": "#define INFINITY (__int_as_float(0x7f800000))\n#define NAN (__int_as_float(0x7fffffff))\ntemplate <class T, class F> __device__ __forceinline__ T tg_bitcast(F v) { union U { F f; T t; }; U u; u.f = v; return u.t; }\nextern \"C\" __global__ void __launch_bounds__(2) E_5_2n4(float* data0_10, float* data1_10, float* data2_10, float* data3_10) {\n  int gidx0 = blockIdx.x; /* 5 */\n  int lidx0 = threadIdx.x; /* 2 */\n  int alu0 = (gidx0+(lidx0*5));\n  float val0 = (*(data1_10+alu0));\n  float val1 = (*(data2_10+alu0));\n  int alu1 = (lidx0+(gidx0<<1));\n  float val2 = (*(data3_10+alu1));\n  *(data0_10+alu1) = (val0+val1+val2);\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/rangeify/cuda_binop_reshape.expected",
    "content": "#define INFINITY (__int_as_float(0x7f800000))\n#define NAN (__int_as_float(0x7fffffff))\ntemplate <class T, class F> __device__ __forceinline__ T tg_bitcast(F v) { union U { F f; T t; }; U u; u.f = v; return u.t; }\nextern \"C\" __global__ void __launch_bounds__(2) E_5_2(float* data0_10, float* data1_10, float* data2_10, float* data3_10) {\n  int gidx0 = blockIdx.x; /* 5 */\n  int lidx0 = threadIdx.x; /* 2 */\n  int alu0 = (lidx0+(gidx0<<1));\n  float val0 = (*(data1_10+alu0));\n  float val1 = (*(data2_10+alu0));\n  float val2 = (*(data3_10+alu0));\n  *(data0_10+alu0) = (val0+val1+val2);\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/rangeify/cuda_contiguous_add.expected",
    "content": "#define INFINITY (__int_as_float(0x7f800000))\n#define NAN (__int_as_float(0x7fffffff))\ntemplate <class T, class F> __device__ __forceinline__ T tg_bitcast(F v) { union U { F f; T t; }; U u; u.f = v; return u.t; }\nextern \"C\" __global__ void __launch_bounds__(8) E_8_4n2(float* data0_32, float* data1_32, float* data2_32) {\n  int lidx0 = threadIdx.x; /* 8 */\n  int alu0 = (lidx0<<2);\n  float4 val0 = (*((float4*)((data1_32+alu0))));\n  float4 val1 = (*((float4*)((data2_32+alu0))));\n  *((float4*)((data0_32+alu0))) = make_float4((val0.x+val1.x),(val0.y+val1.y),(val0.z+val1.z),(val0.w+val1.w));\n}\n---\n#define INFINITY (__int_as_float(0x7f800000))\n#define NAN (__int_as_float(0x7fffffff))\ntemplate <class T, class F> __device__ __forceinline__ T tg_bitcast(F v) { union U { F f; T t; }; U u; u.f = v; return u.t; }\nextern \"C\" __global__ void __launch_bounds__(8) E_8_4n3(float* data0_32, float* data1_32, float* data2_32) {\n  int lidx0 = threadIdx.x; /* 8 */\n  int alu0 = (lidx0<<2);\n  float4 val0 = (*((float4*)((data1_32+alu0))));\n  float4 val1 = (*((float4*)((data2_32+alu0))));\n  *((float4*)((data0_32+alu0))) = make_float4((val0.x+val1.x),(val0.y+val1.y),(val0.z+val1.z),(val0.w+val1.w));\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/rangeify/cuda_diamond.expected",
    "content": "#define INFINITY (__int_as_float(0x7f800000))\n#define NAN (__int_as_float(0x7fffffff))\ntemplate <class T, class F> __device__ __forceinline__ T tg_bitcast(F v) { union U { F f; T t; }; U u; u.f = v; return u.t; }\nextern \"C\" __global__ void __launch_bounds__(2) E_5_2n7(float* data0_10, float* data1_10, float* data2_10, float* data3_10, float* data4_10) {\n  int gidx0 = blockIdx.x; /* 5 */\n  int lidx0 = threadIdx.x; /* 2 */\n  int alu0 = (lidx0+(gidx0<<1));\n  float val0 = (*(data1_10+alu0));\n  float val1 = (*(data2_10+alu0));\n  float val2 = (*(data3_10+alu0));\n  float val3 = (*(data4_10+alu0));\n  *(data0_10+alu0) = (val0+((val1+val2)*2.0f)+val3);\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/rangeify/cuda_elementwise_3way.expected",
    "content": "#define INFINITY (__int_as_float(0x7f800000))\n#define NAN (__int_as_float(0x7fffffff))\ntemplate <class T, class F> __device__ __forceinline__ T tg_bitcast(F v) { union U { F f; T t; }; U u; u.f = v; return u.t; }\nextern \"C\" __global__ void __launch_bounds__(32) E_2_32_4n3(float* data0_256, float* data1_256, float* data2_256, float* data3_256) {\n  int gidx0 = blockIdx.x; /* 2 */\n  int lidx0 = threadIdx.x; /* 32 */\n  int alu0 = ((gidx0<<7)+(lidx0<<2));\n  float4 val0 = (*((float4*)((data1_256+alu0))));\n  float4 val1 = (*((float4*)((data2_256+alu0))));\n  float4 val2 = (*((float4*)((data3_256+alu0))));\n  *((float4*)((data0_256+alu0))) = make_float4((val0.x+val1.x+val2.x),(val0.y+val1.y+val2.y),(val0.z+val1.z+val2.z),(val0.w+val1.w+val2.w));\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/rangeify/cuda_elementwise_add.expected",
    "content": "#define INFINITY (__int_as_float(0x7f800000))\n#define NAN (__int_as_float(0x7fffffff))\ntemplate <class T, class F> __device__ __forceinline__ T tg_bitcast(F v) { union U { F f; T t; }; U u; u.f = v; return u.t; }\nextern \"C\" __global__ void __launch_bounds__(32) E_2_32_4(float* data0_256, float* data1_256, float* data2_256) {\n  int gidx0 = blockIdx.x; /* 2 */\n  int lidx0 = threadIdx.x; /* 32 */\n  int alu0 = ((gidx0<<7)+(lidx0<<2));\n  float4 val0 = (*((float4*)((data1_256+alu0))));\n  float4 val1 = (*((float4*)((data2_256+alu0))));\n  *((float4*)((data0_256+alu0))) = make_float4((val0.x+val1.x),(val0.y+val1.y),(val0.z+val1.z),(val0.w+val1.w));\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/rangeify/cuda_expand_permute.expected",
    "content": "#define INFINITY (__int_as_float(0x7f800000))\n#define NAN (__int_as_float(0x7fffffff))\ntemplate <class T, class F> __device__ __forceinline__ T tg_bitcast(F v) { union U { F f; T t; }; U u; u.f = v; return u.t; }\nextern \"C\" __global__ void __launch_bounds__(8) E_5_5_5_2_2_2(float* data0_1000, float* data1_100, float* data2_100) {\n  int gidx0 = blockIdx.x; /* 5 */\n  int gidx1 = blockIdx.y; /* 5 */\n  int lidx1 = threadIdx.y; /* 2 */\n  int lidx2 = threadIdx.z; /* 2 */\n  int alu0 = (lidx1+(gidx1<<1));\n  int alu1 = (alu0+(gidx0*20)+(lidx2*10));\n  float val0 = (*(data1_100+alu1));\n  int gidx2 = blockIdx.z; /* 5 */\n  int lidx0 = threadIdx.x; /* 2 */\n  int alu2 = (alu0+(gidx2*20)+(lidx0*10));\n  float val1 = (*(data1_100+alu2));\n  float val2 = (*(data2_100+alu1));\n  float val3 = (*(data2_100+alu2));\n  *(data0_1000+(lidx2+(gidx0<<1)+(gidx1*20)+(lidx1*10)+(gidx2*200)+(lidx0*100))) = (val1+val3+val0+val2);\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/rangeify/cuda_mulacc.expected",
    "content": "#define INFINITY (__int_as_float(0x7f800000))\n#define NAN (__int_as_float(0x7fffffff))\ntemplate <class T, class F> __device__ __forceinline__ T tg_bitcast(F v) { union U { F f; T t; }; U u; u.f = v; return u.t; }\nextern \"C\" __global__ void __launch_bounds__(16) r_16_16(float* data0_1, float* data1_256, float* data2_256) {\n  __shared__ __align__(16) float temp0[16];\n  float acc0[1];\n  float acc1[1];\n  int lidx0 = threadIdx.x; /* 16 */\n  *(acc0+0) = 0.0f;\n  for (int Ridx0 = 0; Ridx0 < 16; Ridx0++) {\n    int alu1 = ((lidx0<<4)+Ridx0);\n    float val0 = (*(data1_256+alu1));\n    float val1 = (*(data2_256+alu1));\n    *(acc0+0) = ((*(acc0+0))+(val0*val1));\n  }\n  *(temp0+lidx0) = (*(acc0+0));\n  __syncthreads();\n  *(acc1+0) = 0.0f;\n  for (int Ridx101 = 0; Ridx101 < 16; Ridx101++) {\n    float val2 = (*(temp0+Ridx101));\n    *(acc1+0) = ((*(acc1+0))+val2);\n  }\n  bool alu9 = (lidx0==0);\n  if (alu9) {\n    *(data0_1+0) = (*(acc1+0));\n  }\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/rangeify/cuda_multistage_reduce.expected",
    "content": "#define INFINITY (__int_as_float(0x7f800000))\n#define NAN (__int_as_float(0x7fffffff))\ntemplate <class T, class F> __device__ __forceinline__ T tg_bitcast(F v) { union U { F f; T t; }; U u; u.f = v; return u.t; }\nextern \"C\" __global__ void __launch_bounds__(16) r_32_16_32_2(float* data0_32, float* data1_32768) {\n  __shared__ __align__(16) float temp0[16];\n  float acc0[1];\n  float acc1[1];\n  float acc2[1];\n  int gidx0 = blockIdx.x; /* 32 */\n  int lidx0 = threadIdx.x; /* 16 */\n  *(acc1+0) = 0.0f;\n  for (int Ridx1 = 0; Ridx1 < 2; Ridx1++) {\n    *(acc0+0) = 0.0f;\n    for (int Ridx0 = 0; Ridx0 < 32; Ridx0++) {\n      float val0 = (*(data1_32768+((lidx0<<6)+(Ridx1<<5)+Ridx0+(gidx0<<10))));\n      *(acc0+0) = ((*(acc0+0))+val0);\n    }\n    float alu4 = (((*(acc0+0))<0.0f)?0.0f:(*(acc0+0)));\n    *(acc1+0) = ((*(acc1+0))+alu4);\n  }\n  *(temp0+lidx0) = (*(acc1+0));\n  __syncthreads();\n  *(acc2+0) = 0.0f;\n  for (int Ridx103 = 0; Ridx103 < 16; Ridx103++) {\n    float val1 = (*(temp0+Ridx103));\n    *(acc2+0) = ((*(acc2+0))+val1);\n  }\n  bool alu12 = (lidx0==0);\n  if (alu12) {\n    *(data0_32+gidx0) = (*(acc2+0));\n  }\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/rangeify/cuda_permute_through_reshape.expected",
    "content": "#define INFINITY (__int_as_float(0x7f800000))\n#define NAN (__int_as_float(0x7fffffff))\ntemplate <class T, class F> __device__ __forceinline__ T tg_bitcast(F v) { union U { F f; T t; }; U u; u.f = v; return u.t; }\nextern \"C\" __global__ void __launch_bounds__(64) E_16_4_4n1(float* data0_256, float* data1_256, float* data2_256) {\n  int lidx0 = threadIdx.x; /* 16 */\n  int lidx1 = threadIdx.y; /* 4 */\n  int alu0 = (lidx0+(lidx1<<6));\n  float val0 = (*(data1_256+alu0));\n  int alu1 = (alu0+16);\n  float val1 = (*(data1_256+alu1));\n  int alu2 = (alu0+32);\n  float val2 = (*(data1_256+alu2));\n  int alu3 = (alu0+48);\n  float val3 = (*(data1_256+alu3));\n  float val4 = (*(data2_256+alu0));\n  float val5 = (*(data2_256+alu1));\n  float val6 = (*(data2_256+alu2));\n  float val7 = (*(data2_256+alu3));\n  *((float4*)((data0_256+((lidx0<<4)+(lidx1<<2))))) = make_float4((val0+val4),(val1+val5),(val2+val6),(val3+val7));\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/rangeify/cuda_reduce_permute_binop.expected",
    "content": "#define INFINITY (__int_as_float(0x7f800000))\n#define NAN (__int_as_float(0x7fffffff))\ntemplate <class T, class F> __device__ __forceinline__ T tg_bitcast(F v) { union U { F f; T t; }; U u; u.f = v; return u.t; }\nextern \"C\" __global__ void __launch_bounds__(4) r_5_5_2_2_10(float* data0_100, float* data1_1000, float* data2_100) {\n  int gidx0 = blockIdx.x; /* 5 */\n  int gidx1 = blockIdx.y; /* 5 */\n  int lidx0 = threadIdx.x; /* 2 */\n  int lidx1 = threadIdx.y; /* 2 */\n  int alu0 = (lidx0+(gidx1<<1)+(gidx0*20)+(lidx1*10));\n  float val0 = (*(data1_1000+alu0));\n  float val1 = (*(data1_1000+(alu0+100)));\n  float val2 = (*(data1_1000+(alu0+200)));\n  float val3 = (*(data1_1000+(alu0+300)));\n  float val4 = (*(data1_1000+(alu0+400)));\n  float val5 = (*(data1_1000+(alu0+500)));\n  float val6 = (*(data1_1000+(alu0+600)));\n  float val7 = (*(data1_1000+(alu0+700)));\n  float val8 = (*(data1_1000+(alu0+800)));\n  float val9 = (*(data1_1000+(alu0+900)));\n  int alu1 = (lidx1+(gidx0<<1)+(gidx1*20)+(lidx0*10));\n  float val10 = (*(data2_100+alu1));\n  *(data0_100+alu1) = (val0+val1+val2+val3+val4+val5+val6+val7+val8+val9+val10);\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/rangeify/cuda_reduce_reshape_binop.expected",
    "content": "#define INFINITY (__int_as_float(0x7f800000))\n#define NAN (__int_as_float(0x7fffffff))\ntemplate <class T, class F> __device__ __forceinline__ T tg_bitcast(F v) { union U { F f; T t; }; U u; u.f = v; return u.t; }\nextern \"C\" __global__ void __launch_bounds__(2) r_5_2_10(float* data0_10, float* data1_100, float* data2_10) {\n  int gidx0 = blockIdx.x; /* 5 */\n  int lidx0 = threadIdx.x; /* 2 */\n  int alu0 = (lidx0+(gidx0<<1));\n  float val0 = (*(data1_100+alu0));\n  float val1 = (*(data1_100+(alu0+10)));\n  float val2 = (*(data1_100+(alu0+20)));\n  float val3 = (*(data1_100+(alu0+30)));\n  float val4 = (*(data1_100+(alu0+40)));\n  float val5 = (*(data1_100+(alu0+50)));\n  float val6 = (*(data1_100+(alu0+60)));\n  float val7 = (*(data1_100+(alu0+70)));\n  float val8 = (*(data1_100+(alu0+80)));\n  float val9 = (*(data1_100+(alu0+90)));\n  float val10 = (*(data2_10+alu0));\n  *(data0_10+alu0) = (val0+val1+val2+val3+val4+val5+val6+val7+val8+val9+val10);\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/rangeify/cuda_reduce_shrink.expected",
    "content": "#define INFINITY (__int_as_float(0x7f800000))\n#define NAN (__int_as_float(0x7fffffff))\ntemplate <class T, class F> __device__ __forceinline__ T tg_bitcast(F v) { union U { F f; T t; }; U u; u.f = v; return u.t; }\nextern \"C\" __global__ void __launch_bounds__(16) r_16_16_2(float* data0_16, float* data1_1024, float* data2_16) {\n  __shared__ __align__(16) float temp0[16];\n  float acc0[1];\n  float acc1[1];\n  int gidx0 = blockIdx.x; /* 16 */\n  int lidx0 = threadIdx.x; /* 16 */\n  *(acc0+0) = 0.0f;\n  for (int Ridx0 = 0; Ridx0 < 2; Ridx0++) {\n    float val0 = (*(data1_1024+((lidx0<<1)+Ridx0+(gidx0<<5))));\n    *(acc0+0) = ((*(acc0+0))+val0);\n  }\n  *(temp0+lidx0) = (*(acc0+0));\n  __syncthreads();\n  *(acc1+0) = 0.0f;\n  for (int Ridx102 = 0; Ridx102 < 16; Ridx102++) {\n    float val1 = (*(temp0+Ridx102));\n    *(acc1+0) = ((*(acc1+0))+val1);\n  }\n  float val2 = (*(data2_16+gidx0));\n  bool alu8 = (lidx0==0);\n  if (alu8) {\n    *(data0_16+gidx0) = ((*(acc1+0))+val2);\n  }\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/rangeify/cuda_reduce_unary.expected",
    "content": "#define INFINITY (__int_as_float(0x7f800000))\n#define NAN (__int_as_float(0x7fffffff))\ntemplate <class T, class F> __device__ __forceinline__ T tg_bitcast(F v) { union U { F f; T t; }; U u; u.f = v; return u.t; }\nextern \"C\" __global__ void __launch_bounds__(16) r_16n1(float* data0_1, float* data1_16) {\n  __shared__ __align__(16) float temp0[16];\n  float acc0[1];\n  int lidx0 = threadIdx.x; /* 16 */\n  float val0 = (*(data1_16+lidx0));\n  *(temp0+lidx0) = val0;\n  __syncthreads();\n  *(acc0+0) = 0.0f;\n  for (int Ridx101 = 0; Ridx101 < 16; Ridx101++) {\n    float val1 = (*(temp0+Ridx101));\n    *(acc0+0) = ((*(acc0+0))+val1);\n  }\n  bool alu5 = (lidx0==0);\n  if (alu5) {\n    *(data0_1+0) = -sqrt((*(acc0+0)));\n  }\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/rangeify/cuda_reshape_chain.expected",
    "content": "#define INFINITY (__int_as_float(0x7f800000))\n#define NAN (__int_as_float(0x7fffffff))\ntemplate <class T, class F> __device__ __forceinline__ T tg_bitcast(F v) { union U { F f; T t; }; U u; u.f = v; return u.t; }\nextern \"C\" __global__ void __launch_bounds__(4) E_4_4n5(float* data0_16, float* data1_16, float* data2_16) {\n  int lidx0 = threadIdx.x; /* 4 */\n  int alu0 = (lidx0<<2);\n  float4 val0 = (*((float4*)((data1_16+alu0))));\n  float4 val1 = (*((float4*)((data2_16+alu0))));\n  *((float4*)((data0_16+alu0))) = make_float4((val0.x+val1.x),(val0.y+val1.y),(val0.z+val1.z),(val0.w+val1.w));\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/rangeify/cuda_shrink_fuse.expected",
    "content": "#define INFINITY (__int_as_float(0x7f800000))\n#define NAN (__int_as_float(0x7fffffff))\ntemplate <class T, class F> __device__ __forceinline__ T tg_bitcast(F v) { union U { F f; T t; }; U u; u.f = v; return u.t; }\nextern \"C\" __global__ void __launch_bounds__(4) E_4_4n1(float* data0_16, float* data1_131072, float* data2_131072, float* data3_16) {\n  int lidx0 = threadIdx.x; /* 4 */\n  int alu0 = (lidx0<<2);\n  float4 val0 = (*((float4*)((data1_131072+alu0))));\n  float4 val1 = (*((float4*)((data2_131072+alu0))));\n  float4 val2 = (*((float4*)((data3_16+alu0))));\n  *((float4*)((data0_16+alu0))) = make_float4((val0.x*val1.x*val2.x),(val0.y*val1.y*val2.y),(val0.z*val1.z*val2.z),(val0.w*val1.w*val2.w));\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/rangeify/cuda_two_sum.expected",
    "content": "#define INFINITY (__int_as_float(0x7f800000))\n#define NAN (__int_as_float(0x7fffffff))\ntemplate <class T, class F> __device__ __forceinline__ T tg_bitcast(F v) { union U { F f; T t; }; U u; u.f = v; return u.t; }\nextern \"C\" __global__ void __launch_bounds__(16) r_64_16_4_64(float* data0_64, float* data1_4096) {\n  __shared__ __align__(16) float temp0[16];\n  float acc0[1];\n  float acc1[1];\n  float acc2[1];\n  int gidx0 = blockIdx.x; /* 64 */\n  int lidx0 = threadIdx.x; /* 16 */\n  *(acc1+0) = 0.0f;\n  for (int Ridx0 = 0; Ridx0 < 4; Ridx0++) {\n    float val0 = (*(data1_4096+(gidx0+(lidx0<<8)+(Ridx0<<6))));\n    *(acc1+0) = ((*(acc1+0))+val0);\n  }\n  *(acc0+0) = 0.0f;\n  for (int Ridx1 = 0; Ridx1 < 64; Ridx1++) {\n    float val1 = (*(data1_4096+((gidx0<<6)+Ridx1)));\n    *(acc0+0) = ((*(acc0+0))+val1);\n  }\n  *(temp0+lidx0) = (*(acc1+0));\n  __syncthreads();\n  *(acc2+0) = 0.0f;\n  for (int Ridx103 = 0; Ridx103 < 16; Ridx103++) {\n    float val2 = (*(temp0+Ridx103));\n    *(acc2+0) = ((*(acc2+0))+val2);\n  }\n  bool alu11 = (lidx0==0);\n  if (alu11) {\n    *(data0_64+gidx0) = ((*(acc2+0))+(*(acc0+0)));\n  }\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/rangeify/dune",
    "content": "(executable\n (name generate_actual)\n (libraries tolk tolk.ir))\n\n(rule\n (package tolk)\n (targets\n  clang_binop_permute.actual\n  clang_binop_reshape.actual\n  clang_contiguous_add.actual\n  clang_diamond.actual\n  clang_elementwise_3way.actual\n  clang_elementwise_add.actual\n  clang_expand_permute.actual\n  clang_mulacc.actual\n  clang_multistage_reduce.actual\n  clang_permute_through_reshape.actual\n  clang_reduce_permute_binop.actual\n  clang_reduce_reshape_binop.actual\n  clang_reduce_shrink.actual\n  clang_reduce_unary.actual\n  clang_reshape_chain.actual\n  clang_shrink_fuse.actual\n  clang_two_sum.actual\n  cuda_binop_permute.actual\n  cuda_binop_reshape.actual\n  cuda_contiguous_add.actual\n  cuda_diamond.actual\n  cuda_elementwise_3way.actual\n  cuda_elementwise_add.actual\n  cuda_expand_permute.actual\n  cuda_mulacc.actual\n  cuda_multistage_reduce.actual\n  cuda_permute_through_reshape.actual\n  cuda_reduce_permute_binop.actual\n  cuda_reduce_reshape_binop.actual\n  cuda_reduce_shrink.actual\n  cuda_reduce_unary.actual\n  cuda_reshape_chain.actual\n  cuda_shrink_fuse.actual\n  cuda_two_sum.actual\n  metal_binop_permute.actual\n  metal_binop_reshape.actual\n  metal_contiguous_add.actual\n  metal_diamond.actual\n  metal_elementwise_3way.actual\n  metal_elementwise_add.actual\n  metal_expand_permute.actual\n  metal_mulacc.actual\n  metal_multistage_reduce.actual\n  metal_permute_through_reshape.actual\n  metal_reduce_permute_binop.actual\n  metal_reduce_reshape_binop.actual\n  metal_reduce_shrink.actual\n  metal_reduce_unary.actual\n  metal_reshape_chain.actual\n  metal_shrink_fuse.actual\n  metal_two_sum.actual\n  opencl_binop_permute.actual\n  opencl_binop_reshape.actual\n  opencl_contiguous_add.actual\n  opencl_diamond.actual\n  opencl_elementwise_3way.actual\n  opencl_elementwise_add.actual\n  opencl_expand_permute.actual\n  opencl_mulacc.actual\n  opencl_multistage_reduce.actual\n  opencl_permute_through_reshape.actual\n  
opencl_reduce_permute_binop.actual\n  opencl_reduce_reshape_binop.actual\n  opencl_reduce_shrink.actual\n  opencl_reduce_unary.actual\n  opencl_reshape_chain.actual\n  opencl_shrink_fuse.actual\n  opencl_two_sum.actual)\n (action\n  (run ./generate_actual.exe .)))\n\n(rule\n (alias runtest)\n (package tolk)\n (action\n  (progn\n   (diff clang_binop_permute.expected clang_binop_permute.actual)\n   (diff clang_binop_reshape.expected clang_binop_reshape.actual)\n   (diff clang_contiguous_add.expected clang_contiguous_add.actual)\n   (diff clang_diamond.expected clang_diamond.actual)\n   (diff clang_elementwise_3way.expected clang_elementwise_3way.actual)\n   (diff clang_elementwise_add.expected clang_elementwise_add.actual)\n   (diff clang_expand_permute.expected clang_expand_permute.actual)\n   (diff clang_mulacc.expected clang_mulacc.actual)\n   (diff clang_multistage_reduce.expected clang_multistage_reduce.actual)\n   (diff\n    clang_permute_through_reshape.expected\n    clang_permute_through_reshape.actual)\n   (diff\n    clang_reduce_permute_binop.expected\n    clang_reduce_permute_binop.actual)\n   (diff\n    clang_reduce_reshape_binop.expected\n    clang_reduce_reshape_binop.actual)\n   (diff clang_reduce_shrink.expected clang_reduce_shrink.actual)\n   (diff clang_reduce_unary.expected clang_reduce_unary.actual)\n   (diff clang_reshape_chain.expected clang_reshape_chain.actual)\n   (diff clang_shrink_fuse.expected clang_shrink_fuse.actual)\n   (diff clang_two_sum.expected clang_two_sum.actual)\n   (diff cuda_binop_permute.expected cuda_binop_permute.actual)\n   (diff cuda_binop_reshape.expected cuda_binop_reshape.actual)\n   (diff cuda_contiguous_add.expected cuda_contiguous_add.actual)\n   (diff cuda_diamond.expected cuda_diamond.actual)\n   (diff cuda_elementwise_3way.expected cuda_elementwise_3way.actual)\n   (diff cuda_elementwise_add.expected cuda_elementwise_add.actual)\n   (diff cuda_expand_permute.expected cuda_expand_permute.actual)\n   (diff 
cuda_mulacc.expected cuda_mulacc.actual)\n   (diff cuda_multistage_reduce.expected cuda_multistage_reduce.actual)\n   (diff\n    cuda_permute_through_reshape.expected\n    cuda_permute_through_reshape.actual)\n   (diff cuda_reduce_permute_binop.expected cuda_reduce_permute_binop.actual)\n   (diff cuda_reduce_reshape_binop.expected cuda_reduce_reshape_binop.actual)\n   (diff cuda_reduce_shrink.expected cuda_reduce_shrink.actual)\n   (diff cuda_reduce_unary.expected cuda_reduce_unary.actual)\n   (diff cuda_reshape_chain.expected cuda_reshape_chain.actual)\n   (diff cuda_shrink_fuse.expected cuda_shrink_fuse.actual)\n   (diff cuda_two_sum.expected cuda_two_sum.actual)\n   (diff metal_binop_permute.expected metal_binop_permute.actual)\n   (diff metal_binop_reshape.expected metal_binop_reshape.actual)\n   (diff metal_contiguous_add.expected metal_contiguous_add.actual)\n   (diff metal_diamond.expected metal_diamond.actual)\n   (diff metal_elementwise_3way.expected metal_elementwise_3way.actual)\n   (diff metal_elementwise_add.expected metal_elementwise_add.actual)\n   (diff metal_expand_permute.expected metal_expand_permute.actual)\n   (diff metal_mulacc.expected metal_mulacc.actual)\n   (diff metal_multistage_reduce.expected metal_multistage_reduce.actual)\n   (diff\n    metal_permute_through_reshape.expected\n    metal_permute_through_reshape.actual)\n   (diff\n    metal_reduce_permute_binop.expected\n    metal_reduce_permute_binop.actual)\n   (diff\n    metal_reduce_reshape_binop.expected\n    metal_reduce_reshape_binop.actual)\n   (diff metal_reduce_shrink.expected metal_reduce_shrink.actual)\n   (diff metal_reduce_unary.expected metal_reduce_unary.actual)\n   (diff metal_reshape_chain.expected metal_reshape_chain.actual)\n   (diff metal_shrink_fuse.expected metal_shrink_fuse.actual)\n   (diff metal_two_sum.expected metal_two_sum.actual)\n   (diff opencl_binop_permute.expected opencl_binop_permute.actual)\n   (diff opencl_binop_reshape.expected 
opencl_binop_reshape.actual)\n   (diff opencl_contiguous_add.expected opencl_contiguous_add.actual)\n   (diff opencl_diamond.expected opencl_diamond.actual)\n   (diff opencl_elementwise_3way.expected opencl_elementwise_3way.actual)\n   (diff opencl_elementwise_add.expected opencl_elementwise_add.actual)\n   (diff opencl_expand_permute.expected opencl_expand_permute.actual)\n   (diff opencl_mulacc.expected opencl_mulacc.actual)\n   (diff opencl_multistage_reduce.expected opencl_multistage_reduce.actual)\n   (diff\n    opencl_permute_through_reshape.expected\n    opencl_permute_through_reshape.actual)\n   (diff\n    opencl_reduce_permute_binop.expected\n    opencl_reduce_permute_binop.actual)\n   (diff\n    opencl_reduce_reshape_binop.expected\n    opencl_reduce_reshape_binop.actual)\n   (diff opencl_reduce_shrink.expected opencl_reduce_shrink.actual)\n   (diff opencl_reduce_unary.expected opencl_reduce_unary.actual)\n   (diff opencl_reshape_chain.expected opencl_reshape_chain.actual)\n   (diff opencl_shrink_fuse.expected opencl_shrink_fuse.actual)\n   (diff opencl_two_sum.expected opencl_two_sum.actual))))\n"
  },
  {
    "path": "packages/tolk/test/golden/rangeify/generate_actual.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Generates .actual files for rangeify pipeline golden tests. Each file\n   contains tolk's rendered output for a specific backend + test case after\n   running the full pipeline: Tensor.t -> Rangeify.get_kernel_graph ->\n   Kernel.t -> Codegen.full_rewrite_to_sink -> Linearizer.linearize ->\n   Renderer.render. Dune diff rules compare .actual against .expected. *)\n\nopen Tolk\nopen Tolk_ir\nmodule T = Tensor\nmodule K = Kernel\nmodule C = Const\nmodule D = Dtype\n\n(* Helpers *)\n\n(* Emit a shape-encoding node from a concrete int list. *)\nlet mk_shape b (dims : int list) : T.t =\n  let ids = List.map (fun s -> T.const (C.int D.Val.index s) D.index) dims in\n  match ids with\n  | [ d ] -> d\n  | ds ->\n      T.vectorize ~srcs:ds\n\n(* Emit a PARAM with a known shape and CPU device. *)\nlet mk_param b ~slot (shape : int list) : T.t =\n  let shape_id = if shape = [] then None else Some (mk_shape b shape) in\n  let dev = T.device (Single \"CPU\") in\n  T.param ~slot ~dtype:D.float32 ?shape:shape_id ~device:dev ()\n\n(* Wrap source(s) in CONTIGUOUS -> SINK. *)\nlet wrap_sink b (srcs : T.t list) : T.t =\n  let contigs =\n    List.map (fun src -> T.contiguous ~src ()) srcs\n  in\n  T.sink contigs\n\n(* Extract Kernel.t ASTs from CALL nodes in topological (id) order. *)\nlet extract_kernels (root : T.t) : K.t list =\n  let kernels = ref [] in\n  List.iter (fun node ->\n    match T.view node with\n    | Call { callee = Ast k; _ } -> kernels := k :: !kernels\n    | _ -> ())\n    (T.toposort root);\n  List.rev !kernels\n\n(* Extract kernel name from a pipeline-processed Sink. 
*)\nlet name_of_sink sink =\n  match K.view sink with\n  | K.Sink { kernel_info = Some ki; _ } -> ki.name\n  | _ -> \"kernel\"\n\n(* Run the full pipeline: Tensor.t -> rendered source string. *)\nlet tensor_to_source renderer (build_fn : unit -> T.t) : string =\n  let _sink = build_fn () in\n  let program = _sink in\n  let kernel_graph = Rangeify.get_kernel_graph program in\n  let kernels = extract_kernels kernel_graph in\n  let sources =\n    List.map\n      (fun k ->\n        let processed =\n          Codegen.full_rewrite_to_sink ~optimize:true renderer k\n        in\n        let name = name_of_sink processed in\n        let prog = Linearizer.linearize processed in\n        String.trim (Renderer.render renderer ~name prog))\n      kernels\n  in\n  String.concat \"\\n---\\n\" sources\n\n(* Tensor graph builders *)\n\n(* Each builder constructs a Tensor.t graph matching the corresponding\n   builder in generate_expected.py. *)\n\nlet build_elementwise_add b =\n  let a = mk_param b ~slot:0 [ 256 ] in\n  let bp = mk_param b ~slot:1 [ 256 ] in\n  let add = T.binary ~op:`Add ~lhs:a ~rhs:bp in\n  wrap_sink b [ add ]\n\nlet build_elementwise_3way b =\n  let a = mk_param b ~slot:0 [ 256 ] in\n  let bp = mk_param b ~slot:1 [ 256 ] in\n  let c = mk_param b ~slot:2 [ 256 ] in\n  let ab = T.binary ~op:`Add ~lhs:a ~rhs:bp in\n  let abc = T.binary ~op:`Add ~lhs:ab ~rhs:c in\n  wrap_sink b [ abc ]\n\nlet build_mulacc b =\n  let a = mk_param b ~slot:0 [ 256 ] in\n  let bp = mk_param b ~slot:1 [ 256 ] in\n  let mul = T.binary ~op:`Mul ~lhs:a ~rhs:bp in\n  let red =\n    T.reduce_axis ~src:mul ~op:`Add ~axes:[ 0 ] in\n  wrap_sink b [ red ]\n\nlet build_binop_reshape b =\n  let a = mk_param b ~slot:0 [ 10 ] in\n  let bp = mk_param b ~slot:1 [ 10 ] in\n  let c = mk_param b ~slot:2 [ 5; 2 ] in\n  let add = T.binary ~op:`Add ~lhs:a ~rhs:bp in\n  let reshaped = T.reshape ~src:add ~shape:(mk_shape b [ 5; 2 ]) in\n  let result = T.binary ~op:`Add ~lhs:reshaped ~rhs:c in\n  wrap_sink b [ 
result ]\n\nlet build_binop_permute b =\n  let a = mk_param b ~slot:0 [ 2; 5 ] in\n  let bp = mk_param b ~slot:1 [ 2; 5 ] in\n  let c = mk_param b ~slot:2 [ 5; 2 ] in\n  let add = T.binary ~op:`Add ~lhs:a ~rhs:bp in\n  let permed = T.permute ~src:add ~order:[ 1; 0 ] in\n  let result = T.binary ~op:`Add ~lhs:permed ~rhs:c in\n  wrap_sink b [ result ]\n\nlet build_diamond b =\n  let a = mk_param b ~slot:0 [ 10 ] in\n  let bp = mk_param b ~slot:1 [ 10 ] in\n  let c = mk_param b ~slot:2 [ 10 ] in\n  let d = mk_param b ~slot:3 [ 10 ] in\n  let ab = T.binary ~op:`Add ~lhs:a ~rhs:bp in\n  let abc = T.binary ~op:`Add ~lhs:ab ~rhs:c in\n  let abd = T.binary ~op:`Add ~lhs:ab ~rhs:d in\n  let result = T.binary ~op:`Add ~lhs:abc ~rhs:abd in\n  wrap_sink b [ result ]\n\nlet build_reduce_unary b =\n  let a = mk_param b ~slot:0 [ 16 ] in\n  let red =\n    T.reduce_axis ~src:a ~op:`Add ~axes:[ 0 ] in\n  let sq = T.unary ~op:`Sqrt ~src:red in\n  let neg = T.unary ~op:`Neg ~src:sq in\n  wrap_sink b [ neg ]\n\nlet build_reduce_reshape_binop b =\n  let a = mk_param b ~slot:0 [ 10; 10 ] in\n  let bp = mk_param b ~slot:1 [ 10 ] in\n  let red =\n    T.reduce_axis ~src:a ~op:`Add ~axes:[ 0 ] in\n  let reshaped = T.reshape ~src:red ~shape:(mk_shape b [ 10 ]) in\n  let result = T.binary ~op:`Add ~lhs:reshaped ~rhs:bp in\n  wrap_sink b [ result ]\n\nlet build_reduce_permute_binop b =\n  let a = mk_param b ~slot:0 [ 10; 10; 10 ] in\n  let bp = mk_param b ~slot:1 [ 10; 10; 1 ] in\n  let red =\n    T.reduce_axis ~src:a ~op:`Add ~axes:[ 0 ] in\n  let permed = T.permute ~src:red ~order:[ 2; 1; 0 ] in\n  let result = T.binary ~op:`Add ~lhs:permed ~rhs:bp in\n  wrap_sink b [ result ]\n\nlet build_permute_through_reshape b =\n  let a = mk_param b ~slot:0 [ 16; 16 ] in\n  let bp = mk_param b ~slot:1 [ 16; 16 ] in\n  let add = T.binary ~op:`Add ~lhs:a ~rhs:bp in\n  let reshaped =\n    T.reshape ~src:add ~shape:(mk_shape b [ 4; 4; 4; 4 ])\n  in\n  let permed = T.permute ~src:reshaped ~order:[ 2; 3; 0; 
1 ] in\n  wrap_sink b [ permed ]\n\nlet build_expand_permute b =\n  let a = mk_param b ~slot:0 [ 10; 10; 1 ] in\n  let bp = mk_param b ~slot:1 [ 10; 10; 1 ] in\n  let ab = T.binary ~op:`Add ~lhs:a ~rhs:bp in\n  let expanded =\n    T.expand ~src:ab ~shape:(mk_shape b [ 10; 10; 10 ])\n  in\n  let permed = T.permute ~src:ab ~order:[ 2; 1; 0 ] in\n  let permed_expanded =\n    T.expand ~src:permed ~shape:(mk_shape b [ 10; 10; 10 ])\n  in\n  let result = T.binary ~op:`Add ~lhs:expanded ~rhs:permed_expanded in\n  wrap_sink b [ result ]\n\nlet build_shrink_fuse b =\n  let a = mk_param b ~slot:0 [ 8192; 16 ] in\n  let bp = mk_param b ~slot:1 [ 8192; 16 ] in\n  let d = mk_param b ~slot:2 [ 1; 16 ] in\n  let mul = T.binary ~op:`Mul ~lhs:a ~rhs:bp in\n  let before = mk_shape b [ 0; 0 ] in\n  let after = mk_shape b [ 1; 16 ] in\n  let shrunk = T.shrink ~src:mul ~before ~after in\n  let result = T.binary ~op:`Mul ~lhs:shrunk ~rhs:d in\n  wrap_sink b [ result ]\n\nlet build_multistage_reduce b =\n  let a = mk_param b ~slot:0 [ 32; 32; 32 ] in\n  let red1 =\n    T.reduce_axis ~src:a ~op:`Add ~axes:[ 2 ] in\n  (* relu: max(red1, 0) — zero must be broadcast to [32,32,1] *)\n  let zero = T.const (C.float D.Val.float32 0.0) D.float32 in\n  let zero_reshaped =\n    T.reshape ~src:zero ~shape:(mk_shape b [ 1; 1; 1 ])\n  in\n  let zero_expanded =\n    T.expand ~src:zero_reshaped ~shape:(mk_shape b [ 32; 32; 1 ])\n  in\n  let relu = T.binary ~op:`Max ~lhs:red1 ~rhs:zero_expanded in\n  let reshaped =\n    T.reshape ~src:relu ~shape:(mk_shape b [ 32; 32 ])\n  in\n  let red2 =\n    T.reduce_axis ~src:reshaped ~op:`Add ~axes:[ 1 ] in\n  wrap_sink b [ red2 ]\n\nlet build_two_sum b =\n  let a = mk_param b ~slot:0 [ 64; 64 ] in\n  let red0 =\n    T.reduce_axis ~src:a ~op:`Add ~axes:[ 0 ] in\n  let red1 =\n    T.reduce_axis ~src:a ~op:`Add ~axes:[ 1 ] in\n  let reshaped0 = T.reshape ~src:red0 ~shape:(mk_shape b [ 64 ]) in\n  let reshaped1 = T.reshape ~src:red1 ~shape:(mk_shape b [ 64 ]) in\n  let 
result = T.binary ~op:`Add ~lhs:reshaped0 ~rhs:reshaped1 in\n  wrap_sink b [ result ]\n\nlet build_reduce_shrink b =\n  let a = mk_param b ~slot:0 [ 32; 32 ] in\n  let bp = mk_param b ~slot:1 [ 16 ] in\n  let red =\n    T.reduce_axis ~src:a ~op:`Add ~axes:[ 1 ] in\n  let reshaped = T.reshape ~src:red ~shape:(mk_shape b [ 32 ]) in\n  let before = mk_shape b [ 0 ] in\n  let after = mk_shape b [ 16 ] in\n  let shrunk = T.shrink ~src:reshaped ~before ~after in\n  let result = T.binary ~op:`Add ~lhs:shrunk ~rhs:bp in\n  wrap_sink b [ result ]\n\nlet build_contiguous_add b =\n  let x = mk_param b ~slot:0 [ 32 ] in\n  let y = mk_param b ~slot:1 [ 32 ] in\n  let z = mk_param b ~slot:2 [ 32 ] in\n  let add = T.binary ~op:`Add ~lhs:x ~rhs:y in\n  let contig = T.contiguous ~src:add () in\n  let result = T.binary ~op:`Add ~lhs:contig ~rhs:z in\n  wrap_sink b [ result ]\n\nlet build_reshape_chain b =\n  let a = mk_param b ~slot:0 [ 4; 4 ] in\n  let bp = mk_param b ~slot:1 [ 2; 8 ] in\n  let r1 = T.reshape ~src:a ~shape:(mk_shape b [ 16 ]) in\n  let r2 = T.reshape ~src:r1 ~shape:(mk_shape b [ 2; 8 ]) in\n  let result = T.binary ~op:`Add ~lhs:r2 ~rhs:bp in\n  wrap_sink b [ result ]\n\n(* Test case type *)\n\ntype test_case = {\n  name : string;\n  build : unit -> T.t;\n  backends : (string * Renderer.t) list;\n}\n\nlet all_renderers =\n  [\n    (\"clang\", Cstyle.clang_no_abi);\n    (\"cuda\", Cstyle.cuda Gpu_target.SM80);\n    (\"metal\", Cstyle.metal);\n    (\"opencl\", Cstyle.opencl);\n  ]\n\n(* GPU renderers (every backend except clang). Tests with REDUCE_AXIS on GPU\n   require local bufferize (pm_add_buffers_local, Step 10 DEFERRED in\n   lowering.ml); this list is kept so those tests can restrict their\n   backends if that becomes necessary. For now every test runs on all\n   backends; elementwise-only and multi-kernel elementwise tests work\n   everywhere. 
*)\nlet gpu_renderers =\n  List.filter (fun (name, _) -> name <> \"clang\") all_renderers\n\nlet test_cases =\n  [\n    (* Tier 1: Core fusion *)\n    { name = \"elementwise_add\"; build = build_elementwise_add;\n      backends = all_renderers };\n    { name = \"elementwise_3way\"; build = build_elementwise_3way;\n      backends = all_renderers };\n    { name = \"mulacc\"; build = build_mulacc;\n      (* GPU reduce needs pm_add_buffers_local (Step 10 DEFERRED) *)\n      backends = all_renderers };\n    { name = \"binop_reshape\"; build = build_binop_reshape;\n      backends = all_renderers };\n    { name = \"binop_permute\"; build = build_binop_permute;\n      backends = all_renderers };\n    { name = \"diamond\"; build = build_diamond;\n      backends = all_renderers };\n    { name = \"reduce_unary\"; build = build_reduce_unary;\n      (* GPU reduce needs pm_add_buffers_local (Step 10 DEFERRED) *)\n      backends = all_renderers };\n    { name = \"reduce_reshape_binop\"; build = build_reduce_reshape_binop;\n      (* GPU reduce needs pm_add_buffers_local (Step 10 DEFERRED) *)\n      backends = all_renderers };\n    (* Tier 2: Movement ops *)\n    { name = \"reduce_permute_binop\"; build = build_reduce_permute_binop;\n      (* GPU reduce needs pm_add_buffers_local (Step 10 DEFERRED) *)\n      backends = all_renderers };\n    { name = \"permute_through_reshape\"; build = build_permute_through_reshape;\n      backends = all_renderers };\n    { name = \"expand_permute\"; build = build_expand_permute;\n      backends = all_renderers };\n    { name = \"shrink_fuse\"; build = build_shrink_fuse;\n      backends = all_renderers };\n    (* Tier 3: Multi-reduce / multi-kernel *)\n    { name = \"multistage_reduce\"; build = build_multistage_reduce;\n      (* GPU reduce needs pm_add_buffers_local (Step 10 DEFERRED) *)\n      backends = all_renderers };\n    { name = \"two_sum\"; build = build_two_sum;\n      backends = all_renderers };\n    { name = \"reduce_shrink\"; build = 
build_reduce_shrink;\n      (* GPU reduce needs pm_add_buffers_local (Step 10 DEFERRED) *)\n      backends = all_renderers };\n    (* Tier 4: Edge cases *)\n    { name = \"contiguous_add\"; build = build_contiguous_add;\n      backends = all_renderers };\n    { name = \"reshape_chain\"; build = build_reshape_chain;\n      backends = all_renderers };\n  ]\n\n(* Main *)\n\nlet () =\n  let dir = Sys.argv.(1) in\n  List.iter\n    (fun { name; build; backends } ->\n      List.iter\n        (fun (backend_name, renderer) ->\n          let snap = Printf.sprintf \"%s_%s\" backend_name name in\n          let out =\n            match tensor_to_source renderer build with\n            | out -> out\n            | exception exn ->\n                Printf.eprintf \"FAIL %s: %s\\n%!\" snap\n                  (Printexc.to_string exn);\n                Printf.sprintf \"ERROR: %s\" (Printexc.to_string exn)\n          in\n          let filename = Filename.concat dir (snap ^ \".actual\") in\n          let oc = open_out filename in\n          output_string oc out;\n          output_char oc '\\n';\n          close_out oc)\n        backends)\n    test_cases\n"
  },
  {
    "path": "packages/tolk/test/golden/rangeify/generate_expected.py",
    "content": "#!/usr/bin/env python3\n\"\"\"Generate tinygrad reference .expected files for rangeify pipeline golden tests.\n\nConstructs tensor-level UOp DAGs and runs them through tinygrad's\nget_kernel_graph + full_rewrite_to_sink + linearize + render pipeline.\nThis produces the reference source code that Tolk's\nRangeify.get_kernel_graph -> Pipeline -> Linearizer -> Renderer must match.\n\nUsage:\n    uv run packages/tolk/test/golden/rangeify/generate_expected.py\n\nAfter running, commit the generated .expected files.\n\"\"\"\n\nimport os\nimport sys\n\nsys.path.insert(\n    0,\n    os.path.join(\n        os.path.dirname(__file__), \"..\", \"..\", \"..\", \"..\", \"..\", \"_tinygrad\"\n    ),\n)\n\nfrom tinygrad.uop.ops import UOp, Ops, KernelInfo, AxisType\nfrom tinygrad.dtype import dtypes\nfrom tinygrad.schedule.rangeify import get_kernel_graph\nfrom tinygrad.codegen import full_rewrite_to_sink, line_rewrite, pm_linearize_cleanups\nfrom tinygrad.codegen.late.linearizer import linearize\nfrom tinygrad.renderer.cstyle import (\n    ClangRenderer,\n    CUDARenderer,\n    MetalRenderer,\n    OpenCLRenderer,\n)\nimport tinygrad.renderer.cstyle as _cstyle_mod\n\nOUT_DIR = os.path.dirname(__file__)\n\n\nclass _RenderOnlyCUDARenderer(CUDARenderer):\n    \"\"\"CUDARenderer that skips compiler init (nvrtc not needed for rendering).\"\"\"\n\n    def __init__(self, arch):\n        self.device, self.arch, self.use_nvcc = \"NV\", arch, False\n        self.compiler = None\n        ver = int(arch[3:])\n        tc = _cstyle_mod.tc\n        self.tensor_cores = (\n            tc.cuda_sm89 if ver >= 89\n            else tc.cuda_sm80 if ver >= 80\n            else tc.cuda_sm75 if ver >= 75\n            else []\n        )\n\n\nRENDERERS = {}\nfor _name, _ctor in [\n    (\"clang\", lambda: ClangRenderer()),\n    (\"cuda\", lambda: _RenderOnlyCUDARenderer(arch=\"sm_80\")),\n    (\"metal\", lambda: MetalRenderer()),\n    (\"opencl\", lambda: OpenCLRenderer()),\n]:\n    try:\n     
   RENDERERS[_name] = _ctor()\n    except Exception as e:\n        print(f\"WARNING: skipping {_name} renderer: {e}\")\n\n\ndef write_expected(name, content):\n    path = os.path.join(OUT_DIR, f\"{name}.expected\")\n    with open(path, \"w\") as f:\n        f.write(content + \"\\n\")\n    print(f\"  wrote {path}\")\n\n\ndef render_kernel(ast, renderer, optimize=True):\n    \"\"\"Run full codegen pipeline on a kernel AST and return rendered source.\"\"\"\n    rewritten = full_rewrite_to_sink(ast, renderer, optimize=optimize)\n    lst = linearize(rewritten)\n    lst = line_rewrite(lst, pm_linearize_cleanups)\n    return renderer.render(lst).strip()\n\n\ndef get_source(sink, renderer, optimize=True):\n    \"\"\"Build tensor graph, rangeify, codegen, render all kernels.\"\"\"\n    kg = get_kernel_graph(sink)\n    sources = []\n    for u in kg.toposort():\n        if u.op is Ops.CALL and isinstance(u.src[0].arg, KernelInfo):\n            sources.append(render_kernel(u.src[0], renderer, optimize))\n    return \"\\n---\\n\".join(sources)\n\n\n# ── Helpers ──\n\n\ndef mk_shape(*dims):\n    \"\"\"Encode a shape as a VECTORIZE of index consts (or single const for 1-D).\"\"\"\n    if len(dims) == 1:\n        return UOp.const(dtypes.index, dims[0])\n    return UOp(\n        Ops.VECTORIZE,\n        dtypes.index.vec(len(dims)),\n        tuple(UOp.const(dtypes.index, d) for d in dims),\n    )\n\n\ndef mk_param(slot, *shape, dtype=dtypes.float32):\n    \"\"\"Build a PARAM with a known shape and CPU device.\"\"\"\n    dev = UOp(Ops.DEVICE, arg=\"CPU\")\n    return UOp(Ops.PARAM, dtype, (mk_shape(*shape), dev), slot)\n\n\ndef wrap_sink(*srcs):\n    \"\"\"Wrap source(s) in CONTIGUOUS -> SINK.\"\"\"\n    contigs = [UOp(Ops.CONTIGUOUS, s.dtype, (s,)) for s in srcs]\n    return UOp.sink(*contigs)\n\n\n# ── Tensor graph builders ──\n# Each builds a tensor-level SINK-rooted graph that get_kernel_graph will\n# transform into kernel(s). 
These match the Tolk generate_actual.ml builders.\n\n\ndef build_elementwise_add():\n    \"\"\"c = a + b, shape [256].\"\"\"\n    a = mk_param(0, 256)\n    b = mk_param(1, 256)\n    return wrap_sink(a + b)\n\n\ndef build_elementwise_3way():\n    \"\"\"d = a + b + c, shape [256].\"\"\"\n    a = mk_param(0, 256)\n    b = mk_param(1, 256)\n    c = mk_param(2, 256)\n    return wrap_sink(a + b + c)\n\n\ndef build_mulacc():\n    \"\"\"c = sum(a * b), shape [256] -> scalar.\"\"\"\n    a = mk_param(0, 256)\n    b = mk_param(1, 256)\n    mul = a * b\n    red = UOp(Ops.REDUCE_AXIS, dtypes.float32, (mul,), (Ops.ADD, (0,)))\n    return wrap_sink(red)\n\n\ndef build_binop_reshape():\n    \"\"\"d = (a + b).reshape(5, 2) + c.\"\"\"\n    a = mk_param(0, 10)\n    b = mk_param(1, 10)\n    c = mk_param(2, 5, 2)\n    add = a + b\n    reshaped = UOp(Ops.RESHAPE, dtypes.float32, (add, mk_shape(5, 2)))\n    return wrap_sink(reshaped + c)\n\n\ndef build_binop_permute():\n    \"\"\"d = (a + b).permute(1, 0) + c.\"\"\"\n    a = mk_param(0, 2, 5)\n    b = mk_param(1, 2, 5)\n    c = mk_param(2, 5, 2)\n    add = a + b\n    permed = UOp(Ops.PERMUTE, dtypes.float32, (add,), (1, 0))\n    return wrap_sink(permed + c)\n\n\ndef build_diamond():\n    \"\"\"e = (a+b+c) + (a+b+d), shared subexpression a+b.\"\"\"\n    a = mk_param(0, 10)\n    b = mk_param(1, 10)\n    c = mk_param(2, 10)\n    d = mk_param(3, 10)\n    ab = a + b\n    return wrap_sink(ab + c + ab + d)\n\n\ndef build_reduce_unary():\n    \"\"\"c = neg(sqrt(sum(a))), shape [16] -> scalar.\"\"\"\n    a = mk_param(0, 16)\n    red = UOp(Ops.REDUCE_AXIS, dtypes.float32, (a,), (Ops.ADD, (0,)))\n    sq = UOp(Ops.SQRT, dtypes.float32, (red,))\n    neg = UOp(Ops.NEG, dtypes.float32, (sq,))\n    return wrap_sink(neg)\n\n\ndef build_reduce_reshape_binop():\n    \"\"\"c = a.sum(0).reshape(10) + b, shape [10, 10] -> [10].\"\"\"\n    a = mk_param(0, 10, 10)\n    b = mk_param(1, 10)\n    red = UOp(Ops.REDUCE_AXIS, dtypes.float32, (a,), (Ops.ADD, (0,)))\n  
  reshaped = UOp(Ops.RESHAPE, dtypes.float32, (red, mk_shape(10)))\n    return wrap_sink(reshaped + b)\n\n\ndef build_reduce_permute_binop():\n    \"\"\"c = a.sum(0, keepdim=True).permute(2,1,0) + b, shape [10,10,10].\"\"\"\n    a = mk_param(0, 10, 10, 10)\n    b = mk_param(1, 10, 10, 1)\n    red = UOp(Ops.REDUCE_AXIS, dtypes.float32, (a,), (Ops.ADD, (0,)))\n    permed = UOp(Ops.PERMUTE, dtypes.float32, (red,), (2, 1, 0))\n    return wrap_sink(permed + b)\n\n\ndef build_permute_through_reshape():\n    \"\"\"c = (a+b).reshape(4,4,4,4).permute(2,3,0,1).\"\"\"\n    a = mk_param(0, 16, 16)\n    b = mk_param(1, 16, 16)\n    add = a + b\n    reshaped = UOp(Ops.RESHAPE, dtypes.float32, (add, mk_shape(4, 4, 4, 4)))\n    permed = UOp(Ops.PERMUTE, dtypes.float32, (reshaped,), (2, 3, 0, 1))\n    return wrap_sink(permed)\n\n\ndef build_expand_permute():\n    \"\"\"d = (a+b).expand(10,10,10) + (a+b).permute(2,1,0).expand(10,10,10).\"\"\"\n    a = mk_param(0, 10, 10, 1)\n    b = mk_param(1, 10, 10, 1)\n    ab = a + b\n    expanded = UOp(Ops.EXPAND, dtypes.float32, (ab, mk_shape(10, 10, 10)))\n    permed = UOp(Ops.PERMUTE, dtypes.float32, (ab,), (2, 1, 0))\n    permed_expanded = UOp(Ops.EXPAND, dtypes.float32, (permed, mk_shape(10, 10, 10)))\n    return wrap_sink(expanded + permed_expanded)\n\n\ndef build_shrink_fuse():\n    \"\"\"e = (a*b)[0] * d, shape [8192,16], d=[1,16].\"\"\"\n    a = mk_param(0, 8192, 16)\n    b = mk_param(1, 8192, 16)\n    d = mk_param(2, 1, 16)\n    mul = a * b\n    shrunk = mul.shrink(((0, 1), (0, 16)))\n    return wrap_sink(shrunk * d)\n\n\ndef build_multistage_reduce():\n    \"\"\"c = a.sum(2).relu().sum(1), shape [32,32,32].\"\"\"\n    a = mk_param(0, 32, 32, 32)\n    red1 = UOp(Ops.REDUCE_AXIS, dtypes.float32, (a,), (Ops.ADD, (2,)))\n    # relu: max(red1, 0) — zero must match red1 shape [32,32,1]\n    zero = UOp.const(dtypes.float32, 0.0)\n    zero_bc = UOp(Ops.EXPAND, dtypes.float32,\n                  (zero.reshape((1, 1, 1)), mk_shape(32, 32, 
1)))\n    relu = red1.alu(Ops.MAX, zero_bc)\n    reshaped = UOp(Ops.RESHAPE, dtypes.float32, (relu, mk_shape(32, 32)))\n    red2 = UOp(Ops.REDUCE_AXIS, dtypes.float32, (reshaped,), (Ops.ADD, (1,)))\n    return wrap_sink(red2)\n\n\ndef build_two_sum():\n    \"\"\"c = a.sum(0) + a.sum(1), shape [64,64].\"\"\"\n    a = mk_param(0, 64, 64)\n    red0 = UOp(Ops.REDUCE_AXIS, dtypes.float32, (a,), (Ops.ADD, (0,)))\n    red1 = UOp(Ops.REDUCE_AXIS, dtypes.float32, (a,), (Ops.ADD, (1,)))\n    reshaped0 = UOp(Ops.RESHAPE, dtypes.float32, (red0, mk_shape(64)))\n    reshaped1 = UOp(Ops.RESHAPE, dtypes.float32, (red1, mk_shape(64)))\n    return wrap_sink(reshaped0 + reshaped1)\n\n\ndef build_reduce_shrink():\n    \"\"\"c = a.sum(1)[:16] + b, shape [32,32], b=[16].\"\"\"\n    a = mk_param(0, 32, 32)\n    b = mk_param(1, 16)\n    red = UOp(Ops.REDUCE_AXIS, dtypes.float32, (a,), (Ops.ADD, (1,)))\n    reshaped = UOp(Ops.RESHAPE, dtypes.float32, (red, mk_shape(32)))\n    shrunk = reshaped.shrink(((0, 16),))\n    return wrap_sink(shrunk + b)\n\n\ndef build_contiguous_add():\n    \"\"\"d = (x+y).contiguous() + z, produces 2 kernels.\"\"\"\n    x = mk_param(0, 32)\n    y = mk_param(1, 32)\n    z = mk_param(2, 32)\n    add = x + y\n    contig = UOp(Ops.CONTIGUOUS, dtypes.float32, (add,))\n    return wrap_sink(contig + z)\n\n\ndef build_reshape_chain():\n    \"\"\"c = a.reshape(16).reshape(2,8) + b, shape [4,4], b=[2,8].\"\"\"\n    a = mk_param(0, 4, 4)\n    b = mk_param(1, 2, 8)\n    r1 = UOp(Ops.RESHAPE, dtypes.float32, (a, mk_shape(16)))\n    r2 = UOp(Ops.RESHAPE, dtypes.float32, (r1, mk_shape(2, 8)))\n    return wrap_sink(r2 + b)\n\n\n# ── Test cases ──\n# (name, builder, backends_or_None, optimize)\n\nGPU_RENDERERS = [\"cuda\", \"metal\", \"opencl\"]\n\nTEST_CASES = [\n    # Tier 1: Core fusion (1 kernel each)\n    (\"elementwise_add\", build_elementwise_add, None, True),\n    (\"elementwise_3way\", build_elementwise_3way, None, True),\n    (\"mulacc\", 
build_mulacc, None, True),\n    (\"binop_reshape\", build_binop_reshape, None, True),\n    (\"binop_permute\", build_binop_permute, None, True),\n    (\"diamond\", build_diamond, None, True),\n    (\"reduce_unary\", build_reduce_unary, None, True),\n    (\"reduce_reshape_binop\", build_reduce_reshape_binop, None, True),\n    # Tier 2: Movement ops (1 kernel each)\n    (\"reduce_permute_binop\", build_reduce_permute_binop, None, True),\n    (\"permute_through_reshape\", build_permute_through_reshape, None, True),\n    (\"expand_permute\", build_expand_permute, None, True),\n    (\"shrink_fuse\", build_shrink_fuse, None, True),\n    # Tier 3: Multi-reduce / multi-kernel\n    (\"multistage_reduce\", build_multistage_reduce, None, True),\n    (\"two_sum\", build_two_sum, None, True),\n    (\"reduce_shrink\", build_reduce_shrink, None, True),\n    # Tier 4: Edge cases\n    (\"contiguous_add\", build_contiguous_add, None, True),\n    (\"reshape_chain\", build_reshape_chain, None, True),\n]\n\n\ndef main():\n    total = 0\n    for case_name, builder, backends, optimize in TEST_CASES:\n        print(f\"\\n{case_name} (optimize={optimize}):\")\n        sink = builder()\n        targets = backends if backends else list(RENDERERS.keys())\n        for backend_name in targets:\n            if backend_name not in RENDERERS:\n                print(f\"  SKIP {backend_name}_{case_name}: renderer not available\")\n                continue\n            renderer = RENDERERS[backend_name]\n            snap_name = f\"{backend_name}_{case_name}\"\n            try:\n                src = get_source(sink, renderer, optimize=optimize)\n                write_expected(snap_name, src)\n                total += 1\n            except Exception as e:\n                print(f\"  FAIL {snap_name}: {e}\")\n                import traceback\n\n                traceback.print_exc()\n\n    print(f\"\\nDone. Generated {total} .expected files in {OUT_DIR}\")\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "packages/tolk/test/golden/rangeify/metal_binop_permute.expected",
    "content": "#include <metal_stdlib>\nusing namespace metal;\nkernel void E_5_2n5(device float* data0_10, device float* data1_10, device float* data2_10, device float* data3_10, uint3 gid [[threadgroup_position_in_grid]], uint3 lid [[thread_position_in_threadgroup]]) {\n  int gidx0 = gid.x; /* 5 */\n  int lidx0 = lid.x; /* 2 */\n  int alu0 = (gidx0+(lidx0*5));\n  float val0 = (*(data1_10+alu0));\n  float val1 = (*(data2_10+alu0));\n  int alu1 = (lidx0+(gidx0<<1));\n  float val2 = (*(data3_10+alu1));\n  *(data0_10+alu1) = (val0+val1+val2);\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/rangeify/metal_binop_reshape.expected",
    "content": "#include <metal_stdlib>\nusing namespace metal;\nkernel void E_5_2n1(device float* data0_10, device float* data1_10, device float* data2_10, device float* data3_10, uint3 gid [[threadgroup_position_in_grid]], uint3 lid [[thread_position_in_threadgroup]]) {\n  int gidx0 = gid.x; /* 5 */\n  int lidx0 = lid.x; /* 2 */\n  int alu0 = (lidx0+(gidx0<<1));\n  float val0 = (*(data1_10+alu0));\n  float val1 = (*(data2_10+alu0));\n  float val2 = (*(data3_10+alu0));\n  *(data0_10+alu0) = (val0+val1+val2);\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/rangeify/metal_contiguous_add.expected",
    "content": "#include <metal_stdlib>\nusing namespace metal;\nkernel void E_8_4n4(device float* data0_32, device float* data1_32, device float* data2_32, uint3 gid [[threadgroup_position_in_grid]], uint3 lid [[thread_position_in_threadgroup]]) {\n  int lidx0 = lid.x; /* 8 */\n  int alu0 = (lidx0<<2);\n  float4 val0 = (*((device float4*)((data1_32+alu0))));\n  float4 val1 = (*((device float4*)((data2_32+alu0))));\n  *((device float4*)((data0_32+alu0))) = float4((val0.x+val1.x),(val0.y+val1.y),(val0.z+val1.z),(val0.w+val1.w));\n}\n---\n#include <metal_stdlib>\nusing namespace metal;\nkernel void E_8_4n5(device float* data0_32, device float* data1_32, device float* data2_32, uint3 gid [[threadgroup_position_in_grid]], uint3 lid [[thread_position_in_threadgroup]]) {\n  int lidx0 = lid.x; /* 8 */\n  int alu0 = (lidx0<<2);\n  float4 val0 = (*((device float4*)((data1_32+alu0))));\n  float4 val1 = (*((device float4*)((data2_32+alu0))));\n  *((device float4*)((data0_32+alu0))) = float4((val0.x+val1.x),(val0.y+val1.y),(val0.z+val1.z),(val0.w+val1.w));\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/rangeify/metal_diamond.expected",
    "content": "#include <metal_stdlib>\nusing namespace metal;\nkernel void E_5_2n8(device float* data0_10, device float* data1_10, device float* data2_10, device float* data3_10, device float* data4_10, uint3 gid [[threadgroup_position_in_grid]], uint3 lid [[thread_position_in_threadgroup]]) {\n  int gidx0 = gid.x; /* 5 */\n  int lidx0 = lid.x; /* 2 */\n  int alu0 = (lidx0+(gidx0<<1));\n  float val0 = (*(data1_10+alu0));\n  float val1 = (*(data2_10+alu0));\n  float val2 = (*(data3_10+alu0));\n  float val3 = (*(data4_10+alu0));\n  *(data0_10+alu0) = (val0+((val1+val2)*2.0f)+val3);\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/rangeify/metal_elementwise_3way.expected",
    "content": "#include <metal_stdlib>\nusing namespace metal;\nkernel void E_2_32_4n4(device float* data0_256, device float* data1_256, device float* data2_256, device float* data3_256, uint3 gid [[threadgroup_position_in_grid]], uint3 lid [[thread_position_in_threadgroup]]) {\n  int gidx0 = gid.x; /* 2 */\n  int lidx0 = lid.x; /* 32 */\n  int alu0 = ((gidx0<<7)+(lidx0<<2));\n  float4 val0 = (*((device float4*)((data1_256+alu0))));\n  float4 val1 = (*((device float4*)((data2_256+alu0))));\n  float4 val2 = (*((device float4*)((data3_256+alu0))));\n  *((device float4*)((data0_256+alu0))) = float4((val0.x+val1.x+val2.x),(val0.y+val1.y+val2.y),(val0.z+val1.z+val2.z),(val0.w+val1.w+val2.w));\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/rangeify/metal_elementwise_add.expected",
    "content": "#include <metal_stdlib>\nusing namespace metal;\nkernel void E_2_32_4n1(device float* data0_256, device float* data1_256, device float* data2_256, uint3 gid [[threadgroup_position_in_grid]], uint3 lid [[thread_position_in_threadgroup]]) {\n  int gidx0 = gid.x; /* 2 */\n  int lidx0 = lid.x; /* 32 */\n  int alu0 = ((gidx0<<7)+(lidx0<<2));\n  float4 val0 = (*((device float4*)((data1_256+alu0))));\n  float4 val1 = (*((device float4*)((data2_256+alu0))));\n  *((device float4*)((data0_256+alu0))) = float4((val0.x+val1.x),(val0.y+val1.y),(val0.z+val1.z),(val0.w+val1.w));\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/rangeify/metal_expand_permute.expected",
    "content": "#include <metal_stdlib>\nusing namespace metal;\nkernel void E_5_5_5_2_2_2n1(device float* data0_1000, device float* data1_100, device float* data2_100, uint3 gid [[threadgroup_position_in_grid]], uint3 lid [[thread_position_in_threadgroup]]) {\n  int gidx0 = gid.x; /* 5 */\n  int gidx1 = gid.y; /* 5 */\n  int lidx1 = lid.y; /* 2 */\n  int lidx2 = lid.z; /* 2 */\n  int alu0 = (lidx1+(gidx1<<1));\n  int alu1 = (alu0+(gidx0*20)+(lidx2*10));\n  float val0 = (*(data1_100+alu1));\n  int gidx2 = gid.z; /* 5 */\n  int lidx0 = lid.x; /* 2 */\n  int alu2 = (alu0+(gidx2*20)+(lidx0*10));\n  float val1 = (*(data1_100+alu2));\n  float val2 = (*(data2_100+alu1));\n  float val3 = (*(data2_100+alu2));\n  *(data0_1000+(lidx2+(gidx0<<1)+(gidx1*20)+(lidx1*10)+(gidx2*200)+(lidx0*100))) = (val1+val3+val0+val2);\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/rangeify/metal_mulacc.expected",
    "content": "#include <metal_stdlib>\nusing namespace metal;\nkernel void r_16_16n1(device float* data0_1, device float* data1_256, device float* data2_256, uint3 gid [[threadgroup_position_in_grid]], uint3 lid [[thread_position_in_threadgroup]]) {\n  threadgroup __attribute__((aligned(16))) float temp0[16];\n  float acc0[1];\n  float acc1[1];\n  int lidx0 = lid.x; /* 16 */\n  *(acc0+0) = 0.0f;\n  for (int Ridx0 = 0; Ridx0 < 16; Ridx0++) {\n    int alu1 = ((lidx0<<4)+Ridx0);\n    float val0 = (*(data1_256+alu1));\n    float val1 = (*(data2_256+alu1));\n    *(acc0+0) = ((*(acc0+0))+(val0*val1));\n  }\n  *(temp0+lidx0) = (*(acc0+0));\n  threadgroup_barrier(mem_flags::mem_threadgroup);\n  *(acc1+0) = 0.0f;\n  for (int Ridx101 = 0; Ridx101 < 16; Ridx101++) {\n    float val2 = (*(temp0+Ridx101));\n    *(acc1+0) = ((*(acc1+0))+val2);\n  }\n  bool alu9 = (lidx0==0);\n  if (alu9) {\n    *(data0_1+0) = (*(acc1+0));\n  }\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/rangeify/metal_multistage_reduce.expected",
    "content": "#include <metal_stdlib>\nusing namespace metal;\nkernel void r_32_16_32_2n1(device float* data0_32, device float* data1_32768, uint3 gid [[threadgroup_position_in_grid]], uint3 lid [[thread_position_in_threadgroup]]) {\n  threadgroup __attribute__((aligned(16))) float temp0[16];\n  float acc0[1];\n  float acc1[1];\n  float acc2[1];\n  int gidx0 = gid.x; /* 32 */\n  int lidx0 = lid.x; /* 16 */\n  *(acc1+0) = 0.0f;\n  for (int Ridx1 = 0; Ridx1 < 2; Ridx1++) {\n    *(acc0+0) = 0.0f;\n    for (int Ridx0 = 0; Ridx0 < 32; Ridx0++) {\n      float val0 = (*(data1_32768+((lidx0<<6)+(Ridx1<<5)+Ridx0+(gidx0<<10))));\n      *(acc0+0) = ((*(acc0+0))+val0);\n    }\n    float alu4 = (((*(acc0+0))<0.0f)?0.0f:(*(acc0+0)));\n    *(acc1+0) = ((*(acc1+0))+alu4);\n  }\n  *(temp0+lidx0) = (*(acc1+0));\n  threadgroup_barrier(mem_flags::mem_threadgroup);\n  *(acc2+0) = 0.0f;\n  for (int Ridx103 = 0; Ridx103 < 16; Ridx103++) {\n    float val1 = (*(temp0+Ridx103));\n    *(acc2+0) = ((*(acc2+0))+val1);\n  }\n  bool alu12 = (lidx0==0);\n  if (alu12) {\n    *(data0_32+gidx0) = (*(acc2+0));\n  }\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/rangeify/metal_permute_through_reshape.expected",
    "content": "#include <metal_stdlib>\nusing namespace metal;\nkernel void E_16_4_4n2(device float* data0_256, device float* data1_256, device float* data2_256, uint3 gid [[threadgroup_position_in_grid]], uint3 lid [[thread_position_in_threadgroup]]) {\n  int lidx0 = lid.x; /* 16 */\n  int lidx1 = lid.y; /* 4 */\n  int alu0 = (lidx0+(lidx1<<6));\n  float val0 = (*(data1_256+alu0));\n  int alu1 = (alu0+16);\n  float val1 = (*(data1_256+alu1));\n  int alu2 = (alu0+32);\n  float val2 = (*(data1_256+alu2));\n  int alu3 = (alu0+48);\n  float val3 = (*(data1_256+alu3));\n  float val4 = (*(data2_256+alu0));\n  float val5 = (*(data2_256+alu1));\n  float val6 = (*(data2_256+alu2));\n  float val7 = (*(data2_256+alu3));\n  *((device float4*)((data0_256+((lidx0<<4)+(lidx1<<2))))) = float4((val0+val4),(val1+val5),(val2+val6),(val3+val7));\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/rangeify/metal_reduce_permute_binop.expected",
    "content": "#include <metal_stdlib>\nusing namespace metal;\nkernel void r_5_5_2_2_10n1(device float* data0_100, device float* data1_1000, device float* data2_100, uint3 gid [[threadgroup_position_in_grid]], uint3 lid [[thread_position_in_threadgroup]]) {\n  int gidx0 = gid.x; /* 5 */\n  int gidx1 = gid.y; /* 5 */\n  int lidx0 = lid.x; /* 2 */\n  int lidx1 = lid.y; /* 2 */\n  int alu0 = (lidx0+(gidx1<<1)+(gidx0*20)+(lidx1*10));\n  float val0 = (*(data1_1000+alu0));\n  float val1 = (*(data1_1000+(alu0+100)));\n  float val2 = (*(data1_1000+(alu0+200)));\n  float val3 = (*(data1_1000+(alu0+300)));\n  float val4 = (*(data1_1000+(alu0+400)));\n  float val5 = (*(data1_1000+(alu0+500)));\n  float val6 = (*(data1_1000+(alu0+600)));\n  float val7 = (*(data1_1000+(alu0+700)));\n  float val8 = (*(data1_1000+(alu0+800)));\n  float val9 = (*(data1_1000+(alu0+900)));\n  int alu1 = (lidx1+(gidx0<<1)+(gidx1*20)+(lidx0*10));\n  float val10 = (*(data2_100+alu1));\n  *(data0_100+alu1) = (val0+val1+val2+val3+val4+val5+val6+val7+val8+val9+val10);\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/rangeify/metal_reduce_reshape_binop.expected",
    "content": "#include <metal_stdlib>\nusing namespace metal;\nkernel void r_5_2_10n1(device float* data0_10, device float* data1_100, device float* data2_10, uint3 gid [[threadgroup_position_in_grid]], uint3 lid [[thread_position_in_threadgroup]]) {\n  int gidx0 = gid.x; /* 5 */\n  int lidx0 = lid.x; /* 2 */\n  int alu0 = (lidx0+(gidx0<<1));\n  float val0 = (*(data1_100+alu0));\n  float val1 = (*(data1_100+(alu0+10)));\n  float val2 = (*(data1_100+(alu0+20)));\n  float val3 = (*(data1_100+(alu0+30)));\n  float val4 = (*(data1_100+(alu0+40)));\n  float val5 = (*(data1_100+(alu0+50)));\n  float val6 = (*(data1_100+(alu0+60)));\n  float val7 = (*(data1_100+(alu0+70)));\n  float val8 = (*(data1_100+(alu0+80)));\n  float val9 = (*(data1_100+(alu0+90)));\n  float val10 = (*(data2_10+alu0));\n  *(data0_10+alu0) = (val0+val1+val2+val3+val4+val5+val6+val7+val8+val9+val10);\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/rangeify/metal_reduce_shrink.expected",
    "content": "#include <metal_stdlib>\nusing namespace metal;\nkernel void r_16_16_2n1(device float* data0_16, device float* data1_1024, device float* data2_16, uint3 gid [[threadgroup_position_in_grid]], uint3 lid [[thread_position_in_threadgroup]]) {\n  threadgroup __attribute__((aligned(16))) float temp0[16];\n  float acc0[1];\n  float acc1[1];\n  int gidx0 = gid.x; /* 16 */\n  int lidx0 = lid.x; /* 16 */\n  *(acc0+0) = 0.0f;\n  for (int Ridx0 = 0; Ridx0 < 2; Ridx0++) {\n    float val0 = (*(data1_1024+((lidx0<<1)+Ridx0+(gidx0<<5))));\n    *(acc0+0) = ((*(acc0+0))+val0);\n  }\n  *(temp0+lidx0) = (*(acc0+0));\n  threadgroup_barrier(mem_flags::mem_threadgroup);\n  *(acc1+0) = 0.0f;\n  for (int Ridx102 = 0; Ridx102 < 16; Ridx102++) {\n    float val1 = (*(temp0+Ridx102));\n    *(acc1+0) = ((*(acc1+0))+val1);\n  }\n  float val2 = (*(data2_16+gidx0));\n  bool alu8 = (lidx0==0);\n  if (alu8) {\n    *(data0_16+gidx0) = ((*(acc1+0))+val2);\n  }\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/rangeify/metal_reduce_unary.expected",
    "content": "#include <metal_stdlib>\nusing namespace metal;\nkernel void r_16n2(device float* data0_1, device float* data1_16, uint3 gid [[threadgroup_position_in_grid]], uint3 lid [[thread_position_in_threadgroup]]) {\n  threadgroup __attribute__((aligned(16))) float temp0[16];\n  float acc0[1];\n  int lidx0 = lid.x; /* 16 */\n  float val0 = (*(data1_16+lidx0));\n  *(temp0+lidx0) = val0;\n  threadgroup_barrier(mem_flags::mem_threadgroup);\n  *(acc0+0) = 0.0f;\n  for (int Ridx101 = 0; Ridx101 < 16; Ridx101++) {\n    float val1 = (*(temp0+Ridx101));\n    *(acc0+0) = ((*(acc0+0))+val1);\n  }\n  bool alu5 = (lidx0==0);\n  if (alu5) {\n    *(data0_1+0) = -sqrt((*(acc0+0)));\n  }\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/rangeify/metal_reshape_chain.expected",
    "content": "#include <metal_stdlib>\nusing namespace metal;\nkernel void E_4_4n6(device float* data0_16, device float* data1_16, device float* data2_16, uint3 gid [[threadgroup_position_in_grid]], uint3 lid [[thread_position_in_threadgroup]]) {\n  int lidx0 = lid.x; /* 4 */\n  int alu0 = (lidx0<<2);\n  float4 val0 = (*((device float4*)((data1_16+alu0))));\n  float4 val1 = (*((device float4*)((data2_16+alu0))));\n  *((device float4*)((data0_16+alu0))) = float4((val0.x+val1.x),(val0.y+val1.y),(val0.z+val1.z),(val0.w+val1.w));\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/rangeify/metal_shrink_fuse.expected",
    "content": "#include <metal_stdlib>\nusing namespace metal;\nkernel void E_4_4n2(device float* data0_16, device float* data1_131072, device float* data2_131072, device float* data3_16, uint3 gid [[threadgroup_position_in_grid]], uint3 lid [[thread_position_in_threadgroup]]) {\n  int lidx0 = lid.x; /* 4 */\n  int alu0 = (lidx0<<2);\n  float4 val0 = (*((device float4*)((data1_131072+alu0))));\n  float4 val1 = (*((device float4*)((data2_131072+alu0))));\n  float4 val2 = (*((device float4*)((data3_16+alu0))));\n  *((device float4*)((data0_16+alu0))) = float4((val0.x*val1.x*val2.x),(val0.y*val1.y*val2.y),(val0.z*val1.z*val2.z),(val0.w*val1.w*val2.w));\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/rangeify/metal_two_sum.expected",
    "content": "#include <metal_stdlib>\nusing namespace metal;\nkernel void r_64_16_4_64n1(device float* data0_64, device float* data1_4096, uint3 gid [[threadgroup_position_in_grid]], uint3 lid [[thread_position_in_threadgroup]]) {\n  threadgroup __attribute__((aligned(16))) float temp0[16];\n  float acc0[1];\n  float acc1[1];\n  float acc2[1];\n  int gidx0 = gid.x; /* 64 */\n  int lidx0 = lid.x; /* 16 */\n  *(acc1+0) = 0.0f;\n  for (int Ridx0 = 0; Ridx0 < 4; Ridx0++) {\n    float val0 = (*(data1_4096+(gidx0+(lidx0<<8)+(Ridx0<<6))));\n    *(acc1+0) = ((*(acc1+0))+val0);\n  }\n  *(acc0+0) = 0.0f;\n  for (int Ridx1 = 0; Ridx1 < 64; Ridx1++) {\n    float val1 = (*(data1_4096+((gidx0<<6)+Ridx1)));\n    *(acc0+0) = ((*(acc0+0))+val1);\n  }\n  *(temp0+lidx0) = (*(acc1+0));\n  threadgroup_barrier(mem_flags::mem_threadgroup);\n  *(acc2+0) = 0.0f;\n  for (int Ridx103 = 0; Ridx103 < 16; Ridx103++) {\n    float val2 = (*(temp0+Ridx103));\n    *(acc2+0) = ((*(acc2+0))+val2);\n  }\n  bool alu11 = (lidx0==0);\n  if (alu11) {\n    *(data0_64+gidx0) = ((*(acc2+0))+(*(acc0+0)));\n  }\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/rangeify/opencl_binop_permute.expected",
    "content": "__kernel void E_5_2n6(__global float* data0_10, __global float* data1_10, __global float* data2_10, __global float* data3_10) {\n  int gidx0 = get_group_id(0); /* 5 */\n  int lidx0 = get_local_id(0); /* 2 */\n  int alu0 = (gidx0+(lidx0*5));\n  float val0 = (*(data1_10+alu0));\n  float val1 = (*(data2_10+alu0));\n  int alu1 = (lidx0+(gidx0<<1));\n  float val2 = (*(data3_10+alu1));\n  *(data0_10+alu1) = (val0+val1+val2);\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/rangeify/opencl_binop_reshape.expected",
    "content": "__kernel void E_5_2n2(__global float* data0_10, __global float* data1_10, __global float* data2_10, __global float* data3_10) {\n  int gidx0 = get_group_id(0); /* 5 */\n  int lidx0 = get_local_id(0); /* 2 */\n  int alu0 = (lidx0+(gidx0<<1));\n  float val0 = (*(data1_10+alu0));\n  float val1 = (*(data2_10+alu0));\n  float val2 = (*(data3_10+alu0));\n  *(data0_10+alu0) = (val0+val1+val2);\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/rangeify/opencl_contiguous_add.expected",
    "content": "__kernel void E_8_4n6(__global float* data0_32, __global float* data1_32, __global float* data2_32) {\n  int lidx0 = get_local_id(0); /* 8 */\n  int alu0 = (lidx0<<2);\n  float4 val0 = (*((__global float4*)((data1_32+alu0))));\n  float4 val1 = (*((__global float4*)((data2_32+alu0))));\n  *((__global float4*)((data0_32+alu0))) = (float4)((val0.x+val1.x),(val0.y+val1.y),(val0.z+val1.z),(val0.w+val1.w));\n}\n---\n__kernel void E_8_4n7(__global float* data0_32, __global float* data1_32, __global float* data2_32) {\n  int lidx0 = get_local_id(0); /* 8 */\n  int alu0 = (lidx0<<2);\n  float4 val0 = (*((__global float4*)((data1_32+alu0))));\n  float4 val1 = (*((__global float4*)((data2_32+alu0))));\n  *((__global float4*)((data0_32+alu0))) = (float4)((val0.x+val1.x),(val0.y+val1.y),(val0.z+val1.z),(val0.w+val1.w));\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/rangeify/opencl_diamond.expected",
    "content": "__kernel void E_5_2n9(__global float* data0_10, __global float* data1_10, __global float* data2_10, __global float* data3_10, __global float* data4_10) {\n  int gidx0 = get_group_id(0); /* 5 */\n  int lidx0 = get_local_id(0); /* 2 */\n  int alu0 = (lidx0+(gidx0<<1));\n  float val0 = (*(data1_10+alu0));\n  float val1 = (*(data2_10+alu0));\n  float val2 = (*(data3_10+alu0));\n  float val3 = (*(data4_10+alu0));\n  *(data0_10+alu0) = (val0+((val1+val2)*2.0f)+val3);\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/rangeify/opencl_elementwise_3way.expected",
    "content": "__kernel void E_2_32_4n5(__global float* data0_256, __global float* data1_256, __global float* data2_256, __global float* data3_256) {\n  int gidx0 = get_group_id(0); /* 2 */\n  int lidx0 = get_local_id(0); /* 32 */\n  int alu0 = ((gidx0<<7)+(lidx0<<2));\n  float4 val0 = (*((__global float4*)((data1_256+alu0))));\n  float4 val1 = (*((__global float4*)((data2_256+alu0))));\n  float4 val2 = (*((__global float4*)((data3_256+alu0))));\n  *((__global float4*)((data0_256+alu0))) = (float4)((val0.x+val1.x+val2.x),(val0.y+val1.y+val2.y),(val0.z+val1.z+val2.z),(val0.w+val1.w+val2.w));\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/rangeify/opencl_elementwise_add.expected",
    "content": "__kernel void E_2_32_4n2(__global float* data0_256, __global float* data1_256, __global float* data2_256) {\n  int gidx0 = get_group_id(0); /* 2 */\n  int lidx0 = get_local_id(0); /* 32 */\n  int alu0 = ((gidx0<<7)+(lidx0<<2));\n  float4 val0 = (*((__global float4*)((data1_256+alu0))));\n  float4 val1 = (*((__global float4*)((data2_256+alu0))));\n  *((__global float4*)((data0_256+alu0))) = (float4)((val0.x+val1.x),(val0.y+val1.y),(val0.z+val1.z),(val0.w+val1.w));\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/rangeify/opencl_expand_permute.expected",
    "content": "__kernel void E_5_5_5_2_2_2n2(__global float* data0_1000, __global float* data1_100, __global float* data2_100) {\n  int gidx0 = get_group_id(0); /* 5 */\n  int gidx1 = get_group_id(1); /* 5 */\n  int lidx1 = get_local_id(1); /* 2 */\n  int lidx2 = get_local_id(2); /* 2 */\n  int alu0 = (lidx1+(gidx1<<1));\n  int alu1 = (alu0+(gidx0*20)+(lidx2*10));\n  float val0 = (*(data1_100+alu1));\n  int gidx2 = get_group_id(2); /* 5 */\n  int lidx0 = get_local_id(0); /* 2 */\n  int alu2 = (alu0+(gidx2*20)+(lidx0*10));\n  float val1 = (*(data1_100+alu2));\n  float val2 = (*(data2_100+alu1));\n  float val3 = (*(data2_100+alu2));\n  *(data0_1000+(lidx2+(gidx0<<1)+(gidx1*20)+(lidx1*10)+(gidx2*200)+(lidx0*100))) = (val1+val3+val0+val2);\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/rangeify/opencl_mulacc.expected",
    "content": "__kernel void r_16_16n2(__global float* data0_1, __global float* data1_256, __global float* data2_256) {\n  __attribute__ ((aligned (16))) __local float temp0[16];\n  float acc0[1];\n  float acc1[1];\n  int lidx0 = get_local_id(0); /* 16 */\n  *(acc0+0) = 0.0f;\n  for (int Ridx0 = 0; Ridx0 < 16; Ridx0++) {\n    int alu1 = ((lidx0<<4)+Ridx0);\n    float val0 = (*(data1_256+alu1));\n    float val1 = (*(data2_256+alu1));\n    *(acc0+0) = ((*(acc0+0))+(val0*val1));\n  }\n  *(temp0+lidx0) = (*(acc0+0));\n  barrier(CLK_LOCAL_MEM_FENCE);\n  *(acc1+0) = 0.0f;\n  for (int Ridx101 = 0; Ridx101 < 16; Ridx101++) {\n    float val2 = (*(temp0+Ridx101));\n    *(acc1+0) = ((*(acc1+0))+val2);\n  }\n  bool alu9 = (lidx0==0);\n  if (alu9) {\n    *(data0_1+0) = (*(acc1+0));\n  }\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/rangeify/opencl_multistage_reduce.expected",
    "content": "__kernel void r_32_16_32_2n2(__global float* data0_32, __global float* data1_32768) {\n  __attribute__ ((aligned (16))) __local float temp0[16];\n  float acc0[1];\n  float acc1[1];\n  float acc2[1];\n  int gidx0 = get_group_id(0); /* 32 */\n  int lidx0 = get_local_id(0); /* 16 */\n  *(acc1+0) = 0.0f;\n  for (int Ridx1 = 0; Ridx1 < 2; Ridx1++) {\n    *(acc0+0) = 0.0f;\n    for (int Ridx0 = 0; Ridx0 < 32; Ridx0++) {\n      float val0 = (*(data1_32768+((lidx0<<6)+(Ridx1<<5)+Ridx0+(gidx0<<10))));\n      *(acc0+0) = ((*(acc0+0))+val0);\n    }\n    float alu4 = (((*(acc0+0))<0.0f)?0.0f:(*(acc0+0)));\n    *(acc1+0) = ((*(acc1+0))+alu4);\n  }\n  *(temp0+lidx0) = (*(acc1+0));\n  barrier(CLK_LOCAL_MEM_FENCE);\n  *(acc2+0) = 0.0f;\n  for (int Ridx103 = 0; Ridx103 < 16; Ridx103++) {\n    float val1 = (*(temp0+Ridx103));\n    *(acc2+0) = ((*(acc2+0))+val1);\n  }\n  bool alu12 = (lidx0==0);\n  if (alu12) {\n    *(data0_32+gidx0) = (*(acc2+0));\n  }\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/rangeify/opencl_permute_through_reshape.expected",
    "content": "__kernel void E_16_4_4n3(__global float* data0_256, __global float* data1_256, __global float* data2_256) {\n  int lidx0 = get_local_id(0); /* 16 */\n  int lidx1 = get_local_id(1); /* 4 */\n  int alu0 = (lidx0+(lidx1<<6));\n  float val0 = (*(data1_256+alu0));\n  int alu1 = (alu0+16);\n  float val1 = (*(data1_256+alu1));\n  int alu2 = (alu0+32);\n  float val2 = (*(data1_256+alu2));\n  int alu3 = (alu0+48);\n  float val3 = (*(data1_256+alu3));\n  float val4 = (*(data2_256+alu0));\n  float val5 = (*(data2_256+alu1));\n  float val6 = (*(data2_256+alu2));\n  float val7 = (*(data2_256+alu3));\n  *((__global float4*)((data0_256+((lidx0<<4)+(lidx1<<2))))) = (float4)((val0+val4),(val1+val5),(val2+val6),(val3+val7));\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/rangeify/opencl_reduce_permute_binop.expected",
    "content": "__kernel void r_5_5_2_2_10n2(__global float* data0_100, __global float* data1_1000, __global float* data2_100) {\n  int gidx0 = get_group_id(0); /* 5 */\n  int gidx1 = get_group_id(1); /* 5 */\n  int lidx0 = get_local_id(0); /* 2 */\n  int lidx1 = get_local_id(1); /* 2 */\n  int alu0 = (lidx0+(gidx1<<1)+(gidx0*20)+(lidx1*10));\n  float val0 = (*(data1_1000+alu0));\n  float val1 = (*(data1_1000+(alu0+100)));\n  float val2 = (*(data1_1000+(alu0+200)));\n  float val3 = (*(data1_1000+(alu0+300)));\n  float val4 = (*(data1_1000+(alu0+400)));\n  float val5 = (*(data1_1000+(alu0+500)));\n  float val6 = (*(data1_1000+(alu0+600)));\n  float val7 = (*(data1_1000+(alu0+700)));\n  float val8 = (*(data1_1000+(alu0+800)));\n  float val9 = (*(data1_1000+(alu0+900)));\n  int alu1 = (lidx1+(gidx0<<1)+(gidx1*20)+(lidx0*10));\n  float val10 = (*(data2_100+alu1));\n  *(data0_100+alu1) = (val0+val1+val2+val3+val4+val5+val6+val7+val8+val9+val10);\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/rangeify/opencl_reduce_reshape_binop.expected",
    "content": "__kernel void r_5_2_10n2(__global float* data0_10, __global float* data1_100, __global float* data2_10) {\n  int gidx0 = get_group_id(0); /* 5 */\n  int lidx0 = get_local_id(0); /* 2 */\n  int alu0 = (lidx0+(gidx0<<1));\n  float val0 = (*(data1_100+alu0));\n  float val1 = (*(data1_100+(alu0+10)));\n  float val2 = (*(data1_100+(alu0+20)));\n  float val3 = (*(data1_100+(alu0+30)));\n  float val4 = (*(data1_100+(alu0+40)));\n  float val5 = (*(data1_100+(alu0+50)));\n  float val6 = (*(data1_100+(alu0+60)));\n  float val7 = (*(data1_100+(alu0+70)));\n  float val8 = (*(data1_100+(alu0+80)));\n  float val9 = (*(data1_100+(alu0+90)));\n  float val10 = (*(data2_10+alu0));\n  *(data0_10+alu0) = (val0+val1+val2+val3+val4+val5+val6+val7+val8+val9+val10);\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/rangeify/opencl_reduce_shrink.expected",
    "content": "__kernel void r_16_16_2n2(__global float* data0_16, __global float* data1_1024, __global float* data2_16) {\n  __attribute__ ((aligned (16))) __local float temp0[16];\n  float acc0[1];\n  float acc1[1];\n  int gidx0 = get_group_id(0); /* 16 */\n  int lidx0 = get_local_id(0); /* 16 */\n  *(acc0+0) = 0.0f;\n  for (int Ridx0 = 0; Ridx0 < 2; Ridx0++) {\n    float val0 = (*(data1_1024+((lidx0<<1)+Ridx0+(gidx0<<5))));\n    *(acc0+0) = ((*(acc0+0))+val0);\n  }\n  *(temp0+lidx0) = (*(acc0+0));\n  barrier(CLK_LOCAL_MEM_FENCE);\n  *(acc1+0) = 0.0f;\n  for (int Ridx102 = 0; Ridx102 < 16; Ridx102++) {\n    float val1 = (*(temp0+Ridx102));\n    *(acc1+0) = ((*(acc1+0))+val1);\n  }\n  float val2 = (*(data2_16+gidx0));\n  bool alu8 = (lidx0==0);\n  if (alu8) {\n    *(data0_16+gidx0) = ((*(acc1+0))+val2);\n  }\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/rangeify/opencl_reduce_unary.expected",
    "content": "__kernel void r_16n3(__global float* data0_1, __global float* data1_16) {\n  __attribute__ ((aligned (16))) __local float temp0[16];\n  float acc0[1];\n  int lidx0 = get_local_id(0); /* 16 */\n  float val0 = (*(data1_16+lidx0));\n  *(temp0+lidx0) = val0;\n  barrier(CLK_LOCAL_MEM_FENCE);\n  *(acc0+0) = 0.0f;\n  for (int Ridx101 = 0; Ridx101 < 16; Ridx101++) {\n    float val1 = (*(temp0+Ridx101));\n    *(acc0+0) = ((*(acc0+0))+val1);\n  }\n  bool alu5 = (lidx0==0);\n  if (alu5) {\n    *(data0_1+0) = -sqrt((*(acc0+0)));\n  }\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/rangeify/opencl_reshape_chain.expected",
    "content": "__kernel void E_4_4n7(__global float* data0_16, __global float* data1_16, __global float* data2_16) {\n  int lidx0 = get_local_id(0); /* 4 */\n  int alu0 = (lidx0<<2);\n  float4 val0 = (*((__global float4*)((data1_16+alu0))));\n  float4 val1 = (*((__global float4*)((data2_16+alu0))));\n  *((__global float4*)((data0_16+alu0))) = (float4)((val0.x+val1.x),(val0.y+val1.y),(val0.z+val1.z),(val0.w+val1.w));\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/rangeify/opencl_shrink_fuse.expected",
    "content": "__kernel void E_4_4n3(__global float* data0_16, __global float* data1_131072, __global float* data2_131072, __global float* data3_16) {\n  int lidx0 = get_local_id(0); /* 4 */\n  int alu0 = (lidx0<<2);\n  float4 val0 = (*((__global float4*)((data1_131072+alu0))));\n  float4 val1 = (*((__global float4*)((data2_131072+alu0))));\n  float4 val2 = (*((__global float4*)((data3_16+alu0))));\n  *((__global float4*)((data0_16+alu0))) = (float4)((val0.x*val1.x*val2.x),(val0.y*val1.y*val2.y),(val0.z*val1.z*val2.z),(val0.w*val1.w*val2.w));\n}\n"
  },
  {
    "path": "packages/tolk/test/golden/rangeify/opencl_two_sum.expected",
    "content": "__kernel void r_64_16_4_64n2(__global float* data0_64, __global float* data1_4096) {\n  __attribute__ ((aligned (16))) __local float temp0[16];\n  float acc0[1];\n  float acc1[1];\n  float acc2[1];\n  int gidx0 = get_group_id(0); /* 64 */\n  int lidx0 = get_local_id(0); /* 16 */\n  *(acc1+0) = 0.0f;\n  for (int Ridx0 = 0; Ridx0 < 4; Ridx0++) {\n    float val0 = (*(data1_4096+(gidx0+(lidx0<<8)+(Ridx0<<6))));\n    *(acc1+0) = ((*(acc1+0))+val0);\n  }\n  *(acc0+0) = 0.0f;\n  for (int Ridx1 = 0; Ridx1 < 64; Ridx1++) {\n    float val1 = (*(data1_4096+((gidx0<<6)+Ridx1)));\n    *(acc0+0) = ((*(acc0+0))+val1);\n  }\n  *(temp0+lidx0) = (*(acc1+0));\n  barrier(CLK_LOCAL_MEM_FENCE);\n  *(acc2+0) = 0.0f;\n  for (int Ridx103 = 0; Ridx103 < 16; Ridx103++) {\n    float val2 = (*(temp0+Ridx103));\n    *(acc2+0) = ((*(acc2+0))+val2);\n  }\n  bool alu11 = (lidx0==0);\n  if (alu11) {\n    *(data0_64+gidx0) = ((*(acc2+0))+(*(acc0+0)));\n  }\n}\n"
  },
  {
    "path": "packages/tolk/test/unit/dune",
    "content": "(tests\n (names\n  test_program_spec\n  test_cstyle\n  test_elf\n  test_runtime_cpu\n  test_runtime_search)\n (package tolk)\n (libraries unix tolk tolk.ir tolk.cpu windtrap))\n\n(tests\n (names\n  test_ir_dtype\n  test_ir_program\n  test_ir_kernel\n  test_ir_tensor\n  test_ir_symbolic)\n (package tolk)\n (libraries tolk.ir windtrap))\n\n(tests\n (names\n  test_codegen_devectorizer\n  test_codegen_expander\n  test_codegen_gpudims\n  test_codegen_heuristic\n  test_codegen_images\n  test_codegen_linearizer\n  test_codegen_postrange\n  test_codegen_simplify\n  test_codegen_tc\n  test_schedule_rangeify)\n (package tolk)\n (libraries unix tolk tolk.ir windtrap))\n\n(tests\n (names test_runtime_metal)\n (package tolk)\n (enabled_if\n  (= %{system} macosx))\n (libraries tolk tolk_ir tolk.metal windtrap))\n"
  },
  {
    "path": "packages/tolk/test/unit/test_codegen_devectorizer.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Windtrap\nopen Tolk\nopen Tolk_ir\n\nmodule K = Kernel\n\nlet dt = Dtype.Val.float32\nlet ptr = Dtype.Ptr.create dt ~addrspace:Global ~size:(-1)\nlet pp_kernel kernel = Format.asprintf \"%a\" K.pp kernel\n\nlet stub_renderer ?(supports_float4 = false) () =\n  Renderer.make ~name:\"test\" ~device:\"TEST\" ~has_local:true ~has_shared:true\n    ~shared_max:32768 ~supports_float4 ~render:(fun ?name:_ _ -> \"\") ()\n\nlet expect_dtype msg expected actual =\n  if not (Dtype.Val.equal expected actual) then\n    failwith\n      (Printf.sprintf \"%s: expected %s, got %s\" msg\n         (Format.asprintf \"%a\" Dtype.Val.pp expected)\n         (Format.asprintf \"%a\" Dtype.Val.pp actual))\n\nlet expect_ptr_dtype msg expected actual =\n  if not (Dtype.Ptr.equal expected actual) then\n    failwith\n      (Printf.sprintf \"%s: expected %s, got %s\" msg\n         (Format.asprintf \"%a\" Dtype.Ptr.pp expected)\n         (Format.asprintf \"%a\" Dtype.Ptr.pp actual))\n\nlet topo_array root =\n  let arr = Array.of_list (K.toposort root) in\n  let tbl = K.Ref_tbl.create (Array.length arr) in\n  Array.iteri (fun i node -> K.Ref_tbl.replace tbl node i) arr;\n  (arr, tbl)\n\nlet node_at (arr, _) i = arr.(i)\nlet id_of (_, tbl) node = K.Ref_tbl.find tbl node\nlet topo_length (arr, _) = Array.length arr\n\nlet find_sink root =\n  let arr, _ = topo_array root in\n  let found = ref None in\n  Array.iteri\n    (fun i n ->\n      match K.view n with\n      | K.Sink _ -> (\n          match !found with\n          | None -> found := Some i\n          | Some _ -> failwith \"expected a single Sink\")\n      | _ -> ())\n    arr;\n  match !found with Some i -> i | None -> failwith \"expected a Sink\"\n\nlet reachable_indices 
root (idx : int) =\n  let topo = topo_array root in\n  let len = topo_length topo in\n  let seen = Array.make len false in\n  let rec visit r =\n    if r >= 0 && r < len && not seen.(r) then begin\n      seen.(r) <- true;\n      List.iter\n        (fun dep -> visit (id_of topo dep))\n        (K.children (node_at topo r))\n    end\n  in\n  visit idx;\n  seen\n\nlet count_reachable root ~root_idx pred =\n  let topo = topo_array root in\n  let seen = reachable_indices root root_idx in\n  let count = ref 0 in\n  Array.iteri\n    (fun i n -> if seen.(i) && pred (K.view n) then incr count)\n    (fst topo);\n  !count\n\nlet find_reachable root ~root_idx pred =\n  let topo = topo_array root in\n  let seen = reachable_indices root root_idx in\n  let result = ref None in\n  Array.iteri\n    (fun i n ->\n      if !result = None && seen.(i) && pred (K.view n) then\n        result := Some (i, K.view n))\n    (fst topo);\n  !result\n\nlet find_all_reachable root ~root_idx pred =\n  let topo = topo_array root in\n  let seen = reachable_indices root root_idx in\n  let results = ref [] in\n  Array.iteri\n    (fun i n ->\n      if seen.(i) && pred (K.view n) then\n        results := (i, K.view n) :: !results)\n    (fst topo);\n  List.rev !results\n\n(* Helpers for common node construction *)\n\nlet f32 f = K.const (Const.float Dtype.Val.float32 f)\nlet i32 n = K.const (Const.int Dtype.Val.int32 n)\nlet idx n = K.const (Const.int Dtype.Val.index n)\nlet idx0 = idx 0\n\nlet rec unwrap_const n =\n  match K.view n with\n  | K.Const { value; _ } -> Some value\n  | K.Cast { src; _ } -> unwrap_const src\n  | _ -> None\n\nlet no_geps lowered sink =\n  equal int 0\n    (count_reachable lowered ~root_idx:sink (function\n      | K.Gep _ -> true\n      | _ -> false))\n\nlet no_reduces lowered sink =\n  equal int 0\n    (count_reachable lowered ~root_idx:sink (function\n      | K.Reduce _ -> true\n      | _ -> false))\n\nlet has_reachable lowered sink pred =\n  is_true\n    (count_reachable 
lowered ~root_idx:sink pred >= 1)\n\n(* Expect to find a reachable node matching pred; fail with msg if absent *)\nlet expect_reachable lowered sink msg pred =\n  match find_reachable lowered ~root_idx:sink pred with\n  | Some v -> v\n  | None -> failwith (msg ^ \":\\n\" ^ pp_kernel lowered)\n\n(* Check that devectorization produces a Vectorize of per-lane ops.\n   Used by binary, unary, cast, and bitcast scalarization tests. *)\nlet check_scalarized_vectorize lowered sink ~vec_dt ~lane_count ~lane_pred\n    ~desc =\n  let _, view =\n    expect_reachable lowered sink (\"expected vectorized scalar \" ^ desc)\n      (function\n      | K.Vectorize { dtype; srcs } when Dtype.Val.equal (Dtype.val_of dtype) vec_dt ->\n          List.for_all (fun r -> lane_pred (K.view r)) srcs\n      | _ -> false)\n  in\n  match view with\n  | K.Vectorize { srcs; dtype } ->\n      expect_dtype (desc ^ \" dtype\") vec_dt (Dtype.val_of dtype);\n      equal int lane_count (List.length srcs);\n      srcs\n  | _ -> failwith (\"expected Vectorize: \" ^ pp_kernel lowered)\n\n(* Test runner *)\n\nlet () =\n  run \"Devectorizer\"\n    [\n      group \"pm_reduce\"\n        [\n          test \"reduce_to_acc creates accumulator loop\" (fun () ->\n            let p0 = K.param ~idx:0 ~dtype:ptr in\n            let p1 = K.param ~idx:1 ~dtype:ptr in\n            let r =\n              K.range ~size:(idx 8) ~axis:0 ~kind:Axis_kind.Reduce ()\n            in\n            let ld = K.load ~src:(K.index ~ptr:p0 ~idxs:[ r ] ()) () in\n            let red = K.reduce ~op:`Add ~src:ld ~ranges:[ r ] ~dtype:dt in\n            let st =\n              K.store ~dst:(K.index ~ptr:p1 ~idxs:[ idx0 ] ())\n                ~value:red ~ranges:[]\n            in\n            let lowered = K.sink [ st ] |> Devectorizer.pm_reduce in\n            let sink = find_sink lowered in\n            no_reduces lowered sink;\n            has_reachable lowered sink (function\n              | K.Define_reg _ -> true\n              | _ -> 
false);\n            has_reachable lowered sink (function\n              | K.End _ -> true\n              | _ -> false);\n            has_reachable lowered sink (function\n              | K.Const { value; dtype } when Dtype.Val.equal dtype dt ->\n                  (match Const.view value with\n                   | Const.Float 0.0 -> true\n                   | _ -> false)\n              | _ -> false);\n            K.validate lowered);\n          test \"reduce identity elements match op\" (fun () ->\n            let p0 = K.param ~idx:0 ~dtype:ptr in\n            let make_reduce op axis =\n              let r =\n                K.range ~size:(idx 4) ~axis ~kind:Axis_kind.Reduce ()\n              in\n              let ld =\n                K.load ~src:(K.index ~ptr:p0 ~idxs:[ r ] ()) ()\n              in\n              K.reduce ~op ~src:ld ~ranges:[ r ] ~dtype:dt\n            in\n            let red_add = make_reduce `Add 0 in\n            let red_mul = make_reduce `Mul 1 in\n            let red_max = make_reduce `Max 2 in\n            let mk_store pidx value =\n              let p = K.param ~idx:pidx ~dtype:ptr in\n              K.store ~dst:(K.index ~ptr:p ~idxs:[ idx0 ] ())\n                ~value ~ranges:[]\n            in\n            let kernel =\n              K.sink\n                [ mk_store 1 red_add; mk_store 2 red_mul;\n                  mk_store 3 red_max ]\n            in\n            let lowered = Devectorizer.pm_reduce kernel in\n            let sink = find_sink lowered in\n            let stores =\n              find_all_reachable lowered ~root_idx:sink (function\n                | K.Store { value; _ } ->\n                    (match K.view value with K.Const _ -> true | _ -> false)\n                | _ -> false)\n            in\n            let store_vals =\n              List.filter_map\n                (fun (_, v) ->\n                  match v with\n                  | K.Store { value; _ } -> (\n                      match K.view value with\n        
              | K.Const { value = cv; _ } -> (\n                          match Const.view cv with\n                          | Const.Float f -> Some f\n                          | _ -> None)\n                      | _ -> None)\n                  | _ -> None)\n                stores\n            in\n            is_true (List.mem 0.0 store_vals);\n            is_true (List.mem 1.0 store_vals);\n            is_true (List.mem neg_infinity store_vals);\n            K.validate lowered);\n          test \"reduce lowers parallel reduces\" (fun () ->\n            let p0 = K.param ~idx:0 ~dtype:ptr in\n            let p1 = K.param ~idx:1 ~dtype:ptr in\n            let p2 = K.param ~idx:2 ~dtype:ptr in\n            let r =\n              K.range ~size:(idx 4) ~axis:0 ~kind:Axis_kind.Reduce ()\n            in\n            let ld = K.load ~src:(K.index ~ptr:p0 ~idxs:[ r ] ()) () in\n            let red_add = K.reduce ~op:`Add ~src:ld ~ranges:[ r ] ~dtype:dt in\n            let red_max = K.reduce ~op:`Max ~src:ld ~ranges:[ r ] ~dtype:dt in\n            let mk_store p value =\n              K.store ~dst:(K.index ~ptr:p ~idxs:[ idx0 ] ())\n                ~value ~ranges:[]\n            in\n            let kernel =\n              K.sink [ mk_store p1 red_add; mk_store p2 red_max ]\n            in\n            let lowered = Devectorizer.pm_reduce kernel in\n            let sink = find_sink lowered in\n            no_reduces lowered sink;\n            has_reachable lowered sink (function\n              | K.End { ranges = [ _ ]; _ } -> true\n              | _ -> false);\n            K.validate lowered);\n          test \"reduce folds WMMA accumulate\" (fun () ->\n            let dt2 = Dtype.Val.vec 2 Dtype.Val.float32 in\n            let va = K.vectorize ~srcs:[ f32 1.0; f32 2.0 ] in\n            let vb = K.vectorize ~srcs:[ f32 3.0; f32 4.0 ] in\n            let vc = K.vectorize ~srcs:[ f32 5.0; f32 6.0 ] in\n            let w =\n              K.wmma ~name:\"WMMA_test\" ~a:va ~b:vb 
~c:vc ~dtype:dt2\n                ~dims:(1, 1, 1) ~dtype_in:Dtype.Float32 ~dtype_out:Dtype.Float32\n                ~device:\"TEST\" ~threads:1\n                ~upcast_axes:([ (0, 1) ], [ (0, 1) ], [ (0, 2) ])\n                ~reduce_axes:[]\n            in\n            let sum =\n              K.binary ~op:`Add ~lhs:w\n                ~rhs:(K.vectorize ~srcs:[ f32 10.0; f32 20.0 ])\n            in\n            let lowered = K.sink [ sum ] |> Devectorizer.pm_reduce in\n            let sink = find_sink lowered in\n            ignore\n              (expect_reachable lowered sink\n                 \"expected WMMA with folded accumulate in c operand\"\n                 (function\n                 | K.Wmma { c; _ } ->\n                     (match K.view c with\n                      | K.Binary { op = `Add; _ } -> true\n                      | _ -> false)\n                 | _ -> false));\n            K.validate lowered);\n        ];\n      group \"pm_add_loads\"\n        [\n          test \"add_loads inserts loads only for value uses of Index\" (fun () ->\n            let p0 = K.param ~idx:0 ~dtype:ptr in\n            let p1 = K.param ~idx:1 ~dtype:ptr in\n            let idx0_node = K.index ~ptr:p0 ~idxs:[ idx0 ] ~as_ptr:false () in\n            let neg = K.unary ~op:`Neg ~src:idx0_node in\n            let st =\n              K.store ~dst:(K.index ~ptr:p1 ~idxs:[ idx0 ] ~as_ptr:false ())\n                ~value:neg ~ranges:[]\n            in\n            let lowered = K.sink [ st ] |> Devectorizer.pm_add_loads in\n            let topo = topo_array lowered in\n            equal int 9 (topo_length topo);\n            let sink = find_sink lowered in\n            let _, load_view =\n              expect_reachable lowered sink \"expected inserted Load\"\n                (function\n                | K.Load { src; alt = None; _ } -> (\n                    match K.view src with\n                    | K.Index { ptr = ptr_ref; idxs = [ _ ]; gate = None; _ }\n                   
   ->\n                        (match K.view ptr_ref with\n                         | K.Param { idx = 0; _ } -> true\n                         | _ -> false)\n                    | _ -> false)\n                | _ -> false)\n            in\n            (match load_view with\n            | K.Load { dtype; _ } ->\n                expect_dtype \"inserted load dtype\" dt dtype\n            | _ -> failwith \"unreachable\");\n            let _, neg_view =\n              expect_reachable lowered sink\n                \"expected Neg to consume inserted Load\"\n                (function\n                | K.Unary { op = `Neg; src; _ } ->\n                    (match K.view src with K.Load _ -> true | _ -> false)\n                | _ -> false)\n            in\n            (match neg_view with\n            | K.Unary { op = `Neg; dtype; _ } ->\n                expect_dtype \"neg dtype\" dt dtype\n            | _ -> failwith \"unreachable\");\n            let _, idx_view =\n              expect_reachable lowered sink\n                \"expected store destination Index to stay untouched\"\n                (function\n                | K.Index { ptr = ptr_ref; idxs = [ _ ]; gate = None; _ } ->\n                    (match K.view ptr_ref with\n                     | K.Param { idx = 1; _ } -> true\n                     | _ -> false)\n                | _ -> false)\n            in\n            (match idx_view with\n            | K.Index { dtype = Dtype.Ptr pty; _ } ->\n                expect_ptr_dtype \"store destination pointer dtype\" ptr pty\n            | K.Index { dtype = Dtype.Val _; _ } ->\n                failwith \"store destination Index should be ptr-typed after pm_add_loads\"\n            | _ -> failwith \"unreachable\");\n            K.validate lowered);\n        ];\n      group \"pm_devectorize\"\n        [\n          test \"splits vector ALU into scalar ops\" (fun () ->\n            let p0 = K.param ~idx:0 ~dtype:(Dtype.Ptr.create dt ~addrspace:Global ~size:(-1)) in\n      
      let mk_load i =\n              K.load ~src:(K.index ~ptr:p0 ~idxs:[ idx i ] ()) ()\n            in\n            let v1 =\n              K.vectorize\n                ~srcs:[ mk_load 0; mk_load 1; mk_load 2; mk_load 3; mk_load 4 ]\n            in\n            let v2 =\n              K.vectorize\n                ~srcs:[ mk_load 5; mk_load 6; mk_load 7; mk_load 8; mk_load 9 ]\n            in\n            let add = K.binary ~op:`Add ~lhs:v1 ~rhs:v2 in\n            let lowered =\n              K.sink [ add ] |> Devectorizer.pm_devectorize (stub_renderer ())\n            in\n            let sink = find_sink lowered in\n            (* sym's sink_cleanup flattens the Vectorize, so we check for\n               5 scalar Adds directly reachable from the sink. *)\n            let scalar_adds =\n              find_all_reachable lowered ~root_idx:sink (function\n                | K.Binary { op = `Add; dtype = lane_dt; _ } ->\n                    Dtype.Val.equal lane_dt dt\n                | _ -> false)\n            in\n            equal int 5 (List.length scalar_adds);\n            K.validate lowered);\n          test \"splits small vector comparisons\" (fun () ->\n            let i32_ptr = Dtype.Ptr.create Dtype.Val.int32 ~addrspace:Global ~size:(-1) in\n            let p0 = K.param ~idx:0 ~dtype:i32_ptr in\n            let p1 = K.param ~idx:1 ~dtype:i32_ptr in\n            let ld0 = K.load ~src:(K.index ~ptr:p0 ~idxs:[ idx0 ] ()) () in\n            let ld1 = K.load ~src:(K.index ~ptr:p1 ~idxs:[ idx0 ] ()) () in\n            let v1 = K.vectorize ~srcs:[ ld0; ld1 ] in\n            let v2 = K.vectorize ~srcs:[ ld1; ld0 ] in\n            let lowered =\n              K.sink [ K.binary ~op:`Cmpeq ~lhs:v1 ~rhs:v2 ]\n              |> Devectorizer.pm_devectorize (stub_renderer ())\n            in\n            let sink = find_sink lowered in\n            let scalar_cmps =\n              find_all_reachable lowered ~root_idx:sink (function\n                | K.Binary { op = `Cmpeq; 
dtype; _ } ->\n                    Dtype.Val.equal dtype Dtype.Val.bool\n                | _ -> false)\n            in\n            equal int 2 (List.length scalar_cmps);\n            K.validate lowered);\n          test \"scalarizes unary ops on vectors\" (fun () ->\n            let p0 = K.param ~idx:0 ~dtype:(Dtype.Ptr.create dt ~addrspace:Global ~size:(-1)) in\n            let mk_load i =\n              K.load ~src:(K.index ~ptr:p0 ~idxs:[ idx i ] ()) ()\n            in\n            let vec = K.vectorize ~srcs:[ mk_load 0; mk_load 1; mk_load 2 ] in\n            let lowered =\n              K.sink [ K.unary ~op:`Neg ~src:vec ]\n              |> Devectorizer.pm_devectorize (stub_renderer ())\n            in\n            let sink = find_sink lowered in\n            let scalar_negs =\n              find_all_reachable lowered ~root_idx:sink (function\n                | K.Unary { op = `Neg; dtype; _ } ->\n                    Dtype.Val.equal dtype dt\n                | _ -> false)\n            in\n            equal int 3 (List.length scalar_negs);\n            K.validate lowered);\n          test \"scalarizes Cast on vectors\" (fun () ->\n            let p0 = K.param ~idx:0 ~dtype:(Dtype.Ptr.create dt ~addrspace:Global ~size:(-1)) in\n            let ld0 = K.load ~src:(K.index ~ptr:p0 ~idxs:[ idx 0 ] ()) () in\n            let ld1 = K.load ~src:(K.index ~ptr:p0 ~idxs:[ idx 1 ] ()) () in\n            let vec = K.vectorize ~srcs:[ ld0; ld1 ] in\n            let cst = K.cast ~src:vec ~dtype:(Dtype.Val (Dtype.Val.vec 2 Dtype.Val.int32)) in\n            let lowered =\n              K.sink [ cst ] |> Devectorizer.pm_devectorize (stub_renderer ())\n            in\n            let sink = find_sink lowered in\n            let scalar_casts =\n              find_all_reachable lowered ~root_idx:sink (function\n                | K.Cast { dtype; _ } ->\n                    Dtype.equal dtype Dtype.int32\n                | _ -> false)\n            in\n            equal int 2 
(List.length scalar_casts);\n            K.validate lowered);\n          test \"scalarizes Bitcast on vectors\" (fun () ->\n            let p0 = K.param ~idx:0 ~dtype:(Dtype.Ptr.create dt ~addrspace:Global ~size:(-1)) in\n            let ld0 = K.load ~src:(K.index ~ptr:p0 ~idxs:[ idx 0 ] ()) () in\n            let ld1 = K.load ~src:(K.index ~ptr:p0 ~idxs:[ idx 1 ] ()) () in\n            let vec = K.vectorize ~srcs:[ ld0; ld1 ] in\n            let bc = K.bitcast ~src:vec ~dtype:(Dtype.Val.vec 2 Dtype.Val.int32) in\n            let lowered =\n              K.sink [ bc ] |> Devectorizer.pm_devectorize (stub_renderer ())\n            in\n            let sink = find_sink lowered in\n            let scalar_bitcasts =\n              find_all_reachable lowered ~root_idx:sink (function\n                | K.Bitcast { dtype; _ } ->\n                    Dtype.Val.equal dtype Dtype.Val.int32\n                | _ -> false)\n            in\n            equal int 2 (List.length scalar_bitcasts);\n            K.validate lowered);\n          test \"reorders Cast after After\" (fun () ->\n            (* cast_after_after: After(Cast(x, dt), deps) -> Cast(After(x, deps), dt) *)\n            let i32_ptr = Dtype.Ptr.create Dtype.Val.int32 ~addrspace:Global ~size:(-1) in\n            let p0 = K.param ~idx:0 ~dtype:i32_ptr in\n            let ld = K.load ~src:(K.index ~ptr:p0 ~idxs:[ idx0 ] ()) () in\n            let cst = K.cast ~src:ld ~dtype:(Dtype.float32) in\n            let aft = K.after ~src:cst ~deps:[ idx0 ] in\n            let lowered =\n              K.sink [ aft ] |> Devectorizer.pm_devectorize (stub_renderer ())\n            in\n            let sink = find_sink lowered in\n            let _, view =\n              expect_reachable lowered sink\n                \"expected After(Cast) to become Cast(After)\"\n                (function K.Cast _ -> true | _ -> false)\n            in\n            (match view with\n            | K.Cast { src = after_ref; dtype } ->\n                
expect_dtype \"reordered cast dtype\" Dtype.Val.float32 (Dtype.val_of dtype);\n                (match K.view after_ref with\n                | K.After { src = load_ref; deps = [ _ ] } ->\n                    (match K.view load_ref with\n                    | K.Load _ -> ()\n                    | _ -> failwith \"expected Load under After\")\n                | _ -> failwith \"expected After under Cast\")\n            | _ -> failwith \"expected Cast wrapping After\");\n            K.validate lowered);\n          test \"splits oversized WMMA\" (fun () ->\n            let dt2 = Dtype.Val.vec 2 Dtype.Val.float32 in\n            let dt4 = Dtype.Val.vec 4 Dtype.Val.float32 in\n            let p0 = K.param ~idx:0 ~dtype:(Dtype.Ptr.create dt ~addrspace:Global ~size:(-1)) in\n            let mk_load i =\n              K.load ~src:(K.index ~ptr:p0 ~idxs:[ idx i ] ()) ()\n            in\n            let va = K.vectorize ~srcs:[ mk_load 0; mk_load 1 ] in\n            let vb = K.vectorize ~srcs:[ mk_load 2; mk_load 3 ] in\n            let vc =\n              K.vectorize ~srcs:[ mk_load 4; mk_load 5; mk_load 6; mk_load 7 ]\n            in\n            let w =\n              K.wmma ~name:\"WMMA_test\" ~a:va ~b:vb ~c:vc ~dtype:dt4\n                ~dims:(1, 1, 1) ~dtype_in:Dtype.Float32\n                ~dtype_out:Dtype.Float32 ~device:\"TEST\" ~threads:1\n                ~upcast_axes:([ (0, 1) ], [ (0, 1) ], [ (0, 2) ])\n                ~reduce_axes:[]\n            in\n            let lowered =\n              K.sink [ w ] |> Devectorizer.pm_devectorize (stub_renderer ())\n            in\n            let sink = find_sink lowered in\n            (* sym's sink_cleanup flattens the Vectorize, so we check that\n               the oversized WMMA was split into 2 smaller dt2 WMMAs. 
*)\n            equal int 2\n              (count_reachable lowered ~root_idx:sink (function\n                | K.Wmma { dtype; _ } when Dtype.Val.equal dtype dt2 -> true\n                | _ -> false));\n            K.validate lowered);\n          test \"scalarizes vector register buffers\" (fun () ->\n            let vec_dt = Dtype.Val.vec 2 Dtype.Val.float32 in\n            let reg_ptr =\n              Dtype.Ptr.create vec_dt ~addrspace:Reg ~size:1\n            in\n            let def = K.define_reg ~size:1 ~dtype:reg_ptr ~slot:0 in\n            let idx_ld = K.index ~ptr:def ~idxs:[ idx0 ] () in\n            let ld = K.load ~src:idx_ld () in\n            let idx_st = K.index ~ptr:def ~idxs:[ idx0 ] () in\n            let st = K.store ~dst:idx_st ~value:ld ~ranges:[] in\n            let lowered =\n              K.sink [ st ] |> Devectorizer.pm_devectorize (stub_renderer ())\n            in\n            let sink = find_sink lowered in\n            let _, dreg_view =\n              expect_reachable lowered sink\n                \"expected Define_reg to scalarize\"\n                (function K.Define_reg _ -> true | _ -> false)\n            in\n            (match dreg_view with\n            | K.Define_reg { size = 2; dtype } ->\n                expect_ptr_dtype \"scalarized register dtype\"\n                  (Dtype.Ptr.create dt ~addrspace:Reg ~size:2)\n                  dtype\n            | _ -> failwith (\"expected Define_reg to scalarize: \"\n                             ^ pp_kernel lowered));\n            (* pm_devectorize scalarizes the buffer and vectorizes the\n               index, but the Load stays vector-typed. Register buffers\n               are skipped by correct_load_store. 
*)\n            let _, ld_view =\n              expect_reachable lowered sink\n                \"expected vector Load to remain after register devectorize\"\n                (function\n                | K.Load { dtype; _ } when Dtype.Val.equal dtype vec_dt -> true\n                | _ -> false)\n            in\n            (match ld_view with\n            | K.Load { dtype; _ } ->\n                expect_dtype \"register load stays vector\" vec_dt dtype\n            | _ -> failwith \"unreachable\");\n            let _, st_view =\n              expect_reachable lowered sink\n                \"expected store in scalarized register kernel\"\n                (function K.Store _ -> true | _ -> false)\n            in\n            (match st_view with K.Store _ -> () | _ -> failwith \"unreachable\"));\n          test \"scalarizes vector local buffers\" (fun () ->\n            let vec_dt = Dtype.Val.vec 2 Dtype.Val.float32 in\n            let local_ptr =\n              Dtype.Ptr.create vec_dt ~addrspace:Local ~size:1\n            in\n            let def = K.define_local ~size:1 ~dtype:local_ptr in\n            let idx_ld = K.index ~ptr:def ~idxs:[ idx0 ] () in\n            let ld = K.load ~src:idx_ld () in\n            let idx_st = K.index ~ptr:def ~idxs:[ idx0 ] () in\n            let st = K.store ~dst:idx_st ~value:ld ~ranges:[] in\n            let lowered =\n              K.sink [ st ] |> Devectorizer.pm_devectorize (stub_renderer ())\n            in\n            let sink = find_sink lowered in\n            let _, dloc_view =\n              expect_reachable lowered sink\n                \"expected Define_local to scalarize\"\n                (function K.Define_local _ -> true | _ -> false)\n            in\n            (match dloc_view with\n            | K.Define_local { size = 2; dtype } ->\n                expect_ptr_dtype \"scalarized local dtype\"\n                  (Dtype.Ptr.create dt ~addrspace:Local ~size:2)\n                  dtype\n            | _ -> failwith 
(\"expected Define_local to scalarize: \"\n                             ^ pp_kernel lowered)));\n          test \"rewrites vector index on local/reg\" (fun () ->\n            let vec_dt = Dtype.Val.vec 2 Dtype.Val.float32 in\n            let local_ptr =\n              Dtype.Ptr.create vec_dt ~addrspace:Local ~size:4\n            in\n            let def = K.define_local ~size:4 ~dtype:local_ptr in\n            let var = K.define_var ~name:\"i\" ~lo:0 ~hi:3 () in\n            let ld =\n              K.load ~src:(K.index ~ptr:def ~idxs:[ var ] ()) ()\n            in\n            let lowered =\n              K.sink [ ld ] |> Devectorizer.pm_devectorize (stub_renderer ())\n            in\n            let sink = find_sink lowered in\n            (* The vector index is rewritten to scalar indices with stride\n               multiplication.  Check we get 2 scalar loads. *)\n            let scalar_loads =\n              find_all_reachable lowered ~root_idx:sink (function\n                | K.Load { dtype; _ } -> Dtype.Val.equal dtype dt\n                | _ -> false)\n            in\n            equal int 2 (List.length scalar_loads);\n            K.validate lowered);\n          test \"preserves WHERE with Invalid_index\" (fun () ->\n            let vec_dt = Dtype.Val.vec 2 Dtype.Val.index in\n            let p0 = K.param ~idx:0 ~dtype:(Dtype.Ptr.create Dtype.Val.bool ~addrspace:Global ~size:(-1)) in\n            let cond = K.load ~src:(K.index ~ptr:p0 ~idxs:[ idx0 ] ()) () in\n            let val_vec = K.vectorize ~srcs:[ idx 0; idx 1 ] in\n            let inv = K.invalid_index ~lanes:2 () in\n            let wh =\n              K.ternary ~op:`Where\n                ~a:cond\n                ~b:val_vec ~c:inv\n            in\n            let lowered =\n              K.sink [ wh ] |> Devectorizer.pm_devectorize (stub_renderer ())\n            in\n            let sink = find_sink lowered in\n            let _, view =\n              expect_reachable lowered sink\n              
  \"expected WHERE with Invalid_index to be preserved\"\n                (function\n                | K.Ternary { op = `Where; _ } -> true\n                | _ -> false)\n            in\n            (match view with\n            | K.Ternary { dtype; _ } ->\n                expect_dtype \"preserved Where dtype\" vec_dt dtype\n            | _ -> failwith \"unreachable\");\n            K.validate lowered);\n          test \"drops true gate from Index\" (fun () ->\n            let p0 = K.param ~idx:0 ~dtype:ptr in\n            let gate = K.const (Const.bool true) in\n            let gated_idx = K.index ~ptr:p0 ~idxs:[ idx0 ] ~gate () in\n            let ld = K.load ~src:gated_idx () in\n            let lowered =\n              K.sink [ ld ] |> Devectorizer.pm_devectorize (stub_renderer ())\n            in\n            let sink = find_sink lowered in\n            equal int 0\n              (count_reachable lowered ~root_idx:sink (function\n                | K.Index { gate = Some _; _ } -> true\n                | _ -> false));\n            has_reachable lowered sink (function\n              | K.Index { gate = None; _ } -> true\n              | _ -> false);\n            K.validate lowered);\n        ];\n      group \"pm_correct_load_store\"\n        [\n          (* Tinygrad's correct_load_store matches Load(Cast(Index)) /\n             Store(Cast(Index)). Input must wrap Index in Cast. Output\n             is Vcat of scalar Loads / Group of scalar Stores,\n             each with a new Index src (not Gep). 
*)\n          test \"splits vector load to scalar\" (fun () ->\n            let ren = stub_renderer () in\n            let vec_ptr =\n              Dtype.Ptr.create (Dtype.Val.vec 4 dt) ~addrspace:Global ~size:(-1)\n            in\n            let p0 = K.param ~idx:0 ~dtype:vec_ptr in\n            let index = K.index ~ptr:p0 ~idxs:[ idx0 ] () in\n            let cast_idx =\n              K.cast ~src:index ~dtype:(Dtype.Ptr vec_ptr)\n            in\n            let ld = K.load ~src:cast_idx () in\n            let lowered =\n              K.sink [ ld ] |> Devectorizer.pm_devectorize ren\n            in\n            let sink = find_sink lowered in\n            (* pm_devectorize splits the vector load and sym simplifies\n               the Vcat away; verify 4 scalar loads with Index sources. *)\n            let scalar_loads =\n              find_all_reachable lowered ~root_idx:sink (function\n                | K.Load { dtype; _ } -> Dtype.Val.equal dtype dt\n                | _ -> false)\n            in\n            equal int 4 (List.length scalar_loads);\n            List.iter\n              (fun (_, view) ->\n                match view with\n                | K.Load { src; dtype; _ } ->\n                    expect_dtype \"scalar load dtype\" dt dtype;\n                    (match K.view src with\n                    | K.Index _ -> ()\n                    | _ ->\n                        failwith (\"expected Index source for split load: \"\n                                  ^ pp_kernel lowered))\n                | _ -> failwith \"unreachable\")\n              scalar_loads;\n            K.validate lowered);\n          test \"splits vector store to scalar\" (fun () ->\n            let ren = stub_renderer () in\n            let vec_ptr =\n              Dtype.Ptr.create (Dtype.Val.vec 4 dt) ~addrspace:Global ~size:(-1)\n            in\n            let p0 = K.param ~idx:0 ~dtype:vec_ptr in\n            let index = K.index ~ptr:p0 ~idxs:[ idx0 ] () in\n            let 
cast_idx =\n              K.cast ~src:index ~dtype:(Dtype.Ptr vec_ptr)\n            in\n            let vec_val =\n              K.vectorize ~srcs:[ f32 1.0; f32 2.0; f32 3.0; f32 4.0 ]\n            in\n            let st = K.store ~dst:cast_idx ~value:vec_val ~ranges:[] in\n            let lowered =\n              K.sink [ st ] |> Devectorizer.pm_devectorize ren\n            in\n            let sink = find_sink lowered in\n            let _, view =\n              expect_reachable lowered sink\n                \"expected vector Store split into Group of scalar Stores\"\n                (function\n                | K.Group { srcs } ->\n                    List.for_all\n                      (fun r ->\n                        match K.view r with\n                        | K.Store _ -> true\n                        | _ -> false)\n                      srcs\n                | _ -> false)\n            in\n            (match view with\n            | K.Group { srcs } ->\n                equal int 4 (List.length srcs);\n                List.iteri\n                  (fun lane_idx st_node ->\n                    match K.view st_node with\n                    | K.Store { dst; value; _ } ->\n                        (* dst is an Index (not Gep) *)\n                        (match K.view dst with\n                        | K.Index _ -> ()\n                        | _ ->\n                            failwith (\"expected Index dst for split store: \"\n                                      ^ pp_kernel lowered));\n                        (* value is scalar (Gep may simplify away) *)\n                        (match K.dtype_opt value with\n                        | Some vdt ->\n                            expect_dtype \"scalar store value\" dt (Dtype.val_of vdt)\n                        | None ->\n                            failwith (\"expected scalar value dtype: \"\n                                      ^ pp_kernel lowered))\n                    | _ ->\n                        
failwith (\"expected Store in Group: \"\n                                  ^ pp_kernel lowered))\n                  srcs\n            | _ -> failwith \"unreachable\"));\n          test \"preserves alt per lane\" (fun () ->\n            let ren = stub_renderer () in\n            let vec_ptr =\n              Dtype.Ptr.create (Dtype.Val.vec 2 dt) ~addrspace:Global ~size:(-1)\n            in\n            let p0 = K.param ~idx:0 ~dtype:vec_ptr in\n            let gate = K.const (Const.bool true) in\n            let index =\n              K.index ~ptr:p0 ~idxs:[ idx0 ] ~gate ()\n            in\n            let cast_idx =\n              K.cast ~src:index ~dtype:(Dtype.Ptr vec_ptr)\n            in\n            let vec_alt = K.vectorize ~srcs:[ f32 42.0; f32 99.0 ] in\n            let ld = K.load ~src:cast_idx ~alt:vec_alt () in\n            let lowered =\n              K.sink [ ld ] |> Devectorizer.pm_devectorize ren\n            in\n            let sink = find_sink lowered in\n            (* pm_devectorize splits the vector load and sym simplifies\n               the Vcat away; verify 2 scalar loads with alt preserved. 
*)\n            let scalar_loads =\n              find_all_reachable lowered ~root_idx:sink (function\n                | K.Load { alt = Some _; dtype; _ } ->\n                    Dtype.Val.equal dtype dt\n                | _ -> false)\n            in\n            equal int 2 (List.length scalar_loads);\n            List.iter\n              (fun (_, view) ->\n                match view with\n                | K.Load { alt = Some _; dtype; _ } ->\n                    expect_dtype \"scalar split load dtype\" dt dtype\n                | _ -> failwith \"unreachable\")\n              scalar_loads);\n          test \"skips Reg addrspace\" (fun () ->\n            let ren = stub_renderer () in\n            let reg_ptr =\n              Dtype.Ptr.create (Dtype.Val.vec 2 dt) ~addrspace:Reg ~size:(-1)\n            in\n            let p0 = K.param ~idx:0 ~dtype:reg_ptr in\n            let ld =\n              K.load ~src:(K.index ~ptr:p0 ~idxs:[ idx0 ] ()) ()\n            in\n            let lowered =\n              K.sink [ ld ] |> Devectorizer.pm_devectorize ren\n            in\n            let sink = find_sink lowered in\n            equal int 0\n              (count_reachable lowered ~root_idx:sink (function\n                | K.Vectorize _ -> true\n                | _ -> false));\n            has_reachable lowered sink (function\n              | K.Load { dtype; _ } -> Dtype.Val.equal dtype (Dtype.Val.vec 2 dt)\n              | _ -> false));\n          test \"skips when renderer supports width\" (fun () ->\n            let ren =\n              stub_renderer ~supports_float4:true ()\n            in\n            let vec_ptr =\n              Dtype.Ptr.create (Dtype.Val.vec 4 dt) ~addrspace:Global ~size:(-1)\n            in\n            let p0 = K.param ~idx:0 ~dtype:vec_ptr in\n            let ld =\n              K.load ~src:(K.index ~ptr:p0 ~idxs:[ idx0 ] ()) ()\n            in\n            let lowered =\n              K.sink [ ld ] |> Devectorizer.pm_devectorize ren\n          
  in\n            let sink = find_sink lowered in\n            equal int 0\n              (count_reachable lowered ~root_idx:sink (function\n                | K.Vectorize _ -> true\n                | _ -> false));\n            has_reachable lowered sink (function\n              | K.Load { dtype; _ } -> Dtype.Val.equal dtype (Dtype.Val.vec 4 dt)\n              | _ -> false));\n        ];\n      group \"pm_render\"\n        [\n          test \"adds a zero alt to gated loads\" (fun () ->\n            let p0 = K.param ~idx:0 ~dtype:ptr in\n            let gate = K.const (Const.bool true) in\n            let gated_idx = K.index ~ptr:p0 ~idxs:[ idx0 ] ~gate () in\n            let ld = K.load ~src:gated_idx () in\n            let lowered =\n              K.sink [ ld ] |> Devectorizer.pm_render\n            in\n            let sink = find_sink lowered in\n            let _, view =\n              expect_reachable lowered sink\n                \"expected masked load alt insertion\"\n                (function\n                | K.Load { alt = Some alt; _ } ->\n                    (match K.view alt with\n                    | K.Const { value; _ } ->\n                        (match Const.view value with\n                        | Const.Float 0.0 -> true\n                        | _ -> false)\n                    | _ -> false)\n                | _ -> false)\n            in\n            (match view with\n            | K.Load { alt = Some alt; dtype; _ } ->\n                expect_dtype \"masked load dtype\" dt dtype;\n                (match K.view alt with\n                | K.Const { value; dtype } ->\n                    (match Const.view value with\n                    | Const.Float 0.0 ->\n                        expect_dtype \"masked load alt dtype\" dt dtype\n                    | _ -> failwith \"expected zero alt constant\")\n                | _ -> failwith \"expected zero alt constant\")\n            | _ -> failwith \"unreachable\");\n            K.validate lowered);\n    
      test \"folds Where after gated load into alt\" (fun () ->\n            let p0 = K.param ~idx:0 ~dtype:ptr in\n            let gate = K.const (Const.bool true) in\n            let alt_val = K.const (Const.float dt 9.0) in\n            let gated_idx = K.index ~ptr:p0 ~idxs:[ idx0 ] ~gate () in\n            let ld = K.load ~src:gated_idx () in\n            let wh = K.ternary ~op:`Where ~a:gate ~b:ld ~c:alt_val in\n            let lowered =\n              K.sink [ wh ] |> Devectorizer.pm_render\n            in\n            let sink = find_sink lowered in\n            let _, view =\n              expect_reachable lowered sink\n                \"expected Where(gated Load, alt) to fold into Load alt\"\n                (function\n                | K.Load { alt = Some alt; _ } ->\n                    (match unwrap_const alt with\n                    | Some v ->\n                        (match Const.view v with\n                        | Const.Float 9.0 -> true\n                        | _ -> false)\n                    | None -> false)\n                | _ -> false)\n            in\n            (match view with\n            | K.Load { alt = Some alt; dtype; _ } ->\n                expect_dtype \"folded where load dtype\" dt dtype;\n                (match unwrap_const alt with\n                | Some value ->\n                    (match Const.view value with\n                    | Const.Float 9.0 -> ()\n                    | _ -> failwith\n                             \"expected Where(gated Load, alt) to fold\")\n                | None -> failwith\n                         \"expected Where(gated Load, alt) to fold\")\n            | _ -> failwith \"unreachable\");\n            K.validate lowered);\n          test \"folds Where with negated gate into alt\" (fun () ->\n            let p0 = K.param ~idx:0 ~dtype:ptr in\n            let gate = K.const (Const.bool true) in\n            let negated_gate =\n              K.binary ~op:`Xor ~lhs:gate\n                
~rhs:(K.const (Const.bool true))\n            in\n            let alt_val = K.const (Const.float dt 5.0) in\n            let gated_idx =\n              K.index ~ptr:p0 ~idxs:[ idx0 ] ~gate:negated_gate ()\n            in\n            let ld = K.load ~src:gated_idx () in\n            let wh = K.ternary ~op:`Where ~a:gate ~b:alt_val ~c:ld in\n            let lowered =\n              K.sink [ wh ] |> Devectorizer.pm_render\n            in\n            let sink = find_sink lowered in\n            equal int 0\n              (count_reachable lowered ~root_idx:sink (function\n                | K.Ternary { op = `Where; _ } -> true\n                | _ -> false));\n            let _, view =\n              expect_reachable lowered sink\n                \"expected negated gate Where to fold into Load alt\"\n                (function\n                | K.Load { alt = Some alt; _ } ->\n                    (match unwrap_const alt with\n                    | Some v ->\n                        (match Const.view v with\n                        | Const.Float 5.0 -> true\n                        | _ -> false)\n                    | None -> false)\n                | _ -> false)\n            in\n            (match view with\n            | K.Load { alt = Some _; dtype; _ } ->\n                expect_dtype \"negated gate folded load dtype\" dt dtype\n            | _ -> failwith \"unreachable\");\n            K.validate lowered);\n          test \"folds Where with Cast-wrapped gated load\" (fun () ->\n            let load_dt = Dtype.Val.int32 in\n            let cast_dt = Dtype.Val.float32 in\n            let load_ptr =\n              Dtype.Ptr.create load_dt ~addrspace:Global ~size:(-1)\n            in\n            let p0 = K.param ~idx:0 ~dtype:load_ptr in\n            let gate = K.const (Const.bool true) in\n            let gated_idx = K.index ~ptr:p0 ~idxs:[ idx0 ] ~gate () in\n            let ld = K.load ~src:gated_idx () in\n            let casted_load = K.cast ~src:ld 
~dtype:(Dtype.Val cast_dt) in\n            let alt_val = K.const (Const.float cast_dt 5.0) in\n            let wh =\n              K.ternary ~op:`Where ~a:gate ~b:casted_load ~c:alt_val\n            in\n            let lowered =\n              K.sink [ wh ] |> Devectorizer.pm_render\n            in\n            let sink = find_sink lowered in\n            equal int 0\n              (count_reachable lowered ~root_idx:sink (function\n                | K.Ternary { op = `Where; _ } -> true\n                | _ -> false));\n            has_reachable lowered sink (function\n              | K.Cast { dtype; _ } when Dtype.equal dtype (Dtype.Val cast_dt) -> true\n              | _ -> false);\n            has_reachable lowered sink (function\n              | K.Load { alt = Some _; _ } -> true\n              | _ -> false);\n            K.validate lowered);\n          test \"Where with different gate does not fold\" (fun () ->\n            let p0 = K.param ~idx:0 ~dtype:ptr in\n            let gate1 = K.const (Const.bool true) in\n            let r =\n              K.range\n                ~size:(idx 10) ~axis:0 ~kind:Axis_kind.Loop ()\n            in\n            let gate2 =\n              K.binary ~op:`Cmplt ~lhs:r ~rhs:(idx 5)\n            in\n            let alt_val = K.const (Const.float dt 5.0) in\n            let gated_idx =\n              K.index ~ptr:p0 ~idxs:[ idx0 ] ~gate:gate1 ()\n            in\n            let ld = K.load ~src:gated_idx () in\n            let wh = K.ternary ~op:`Where ~a:gate2 ~b:ld ~c:alt_val in\n            let lowered =\n              K.sink [ wh ] |> Devectorizer.pm_render\n            in\n            let sink = find_sink lowered in\n            has_reachable lowered sink (function\n              | K.Ternary { op = `Where; _ } -> true\n              | _ -> false);\n            K.validate lowered);\n        ];\n      group \"integration\"\n        [\n          test \"full pipeline: reduce + gated load\" (fun () ->\n            let p0 = K.param 
~idx:0 ~dtype:ptr in\n            let p1 = K.param ~idx:1 ~dtype:ptr in\n            let r =\n              K.range ~size:(idx 4) ~axis:0 ~kind:Axis_kind.Reduce ()\n            in\n            let gate =\n              K.binary ~op:`Cmplt ~lhs:r ~rhs:(idx 3)\n            in\n            let ld =\n              K.load ~src:(K.index ~ptr:p0 ~idxs:[ r ] ~gate ()) ()\n            in\n            let red = K.reduce ~op:`Add ~src:ld ~ranges:[ r ] ~dtype:dt in\n            let st =\n              K.store ~dst:(K.index ~ptr:p1 ~idxs:[ idx0 ] ())\n                ~value:red ~ranges:[]\n            in\n            let result =\n              K.sink [ st ]\n              |> Devectorizer.pm_reduce\n              |> Devectorizer.pm_add_loads\n              |> Devectorizer.pm_devectorize (stub_renderer ())\n              |> Devectorizer.pm_render\n            in\n            let sink = find_sink result in\n            no_reduces result sink;\n            equal int 0\n              (count_reachable result ~root_idx:sink (function\n                | K.Load { alt = None; src; _ } ->\n                    let rec has_gate n =\n                      match K.view n with\n                      | K.Index { gate = Some _; _ } -> true\n                      | K.Cast { src; _ } | K.Bitcast { src; _ } ->\n                          has_gate src\n                      | _ -> false\n                    in\n                    has_gate src\n                | _ -> false));\n            K.validate result);\n        ];\n    ]\n"
  },
  {
    "path": "packages/tolk/test/unit/test_codegen_expander.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Windtrap\nopen Tolk\nopen Tolk_ir\n\nmodule K = Kernel\n\nlet dt = Dtype.Val.float32\nlet ptr = Dtype.Ptr.create dt ~addrspace:Global ~size:(-1)\nlet global_ptr dt = Dtype.Ptr.create dt ~addrspace:Global ~size:(-1)\nlet pp_kernel root = Format.asprintf \"%a\" K.pp root\nlet i32 n = K.const (Const.int Dtype.Val.int32 n)\nlet idx_const n = K.const (Const.int Dtype.Val.index n)\n\nlet kernel_info ?opts_to_apply () =\n  {\n    K.name = \"\";\n    axis_kinds = [];\n    dont_use_locals = false;\n    applied_opts = [];\n    opts_to_apply;\n    estimates = None;\n  }\n\nlet metal_like_tc =\n  {\n    Tc.dims = (8, 8, 8);\n    threads = 32;\n    elements_per_thread = (2, 2, 2);\n    dtype_in = Dtype.Float32;\n    dtype_out = Dtype.Float32;\n    opts = [ \"u0\"; \"l0\"; \"l1\"; \"l1\"; \"l0\"; \"l1\" ];\n    swizzle =\n      ( ([ \"r1\"; \"l1\"; \"l2\"; \"r2\"; \"l4\" ], [ \"r0\" ], [ \"u0\"; \"l0\"; \"l3\" ]),\n        ([ \"l0\"; \"r0\"; \"r1\"; \"l3\"; \"r2\" ], [ \"u0\" ], [ \"l1\"; \"l2\"; \"l4\" ]) );\n  }\n\nlet noop_renderer ?(tensor_cores = []) () =\n  Renderer.make ~name:\"test\" ~device:\"TEST\" ~has_local:true ~has_shared:true\n    ~shared_max:32768 ~tensor_cores\n    ~render:(fun ?name:_ _ -> \"\")\n    ()\n\nlet topo_array root =\n  let arr = Array.of_list (K.toposort root) in\n  let tbl = K.Ref_tbl.create (Array.length arr) in\n  Array.iteri (fun i node -> K.Ref_tbl.replace tbl node i) arr;\n  (arr, tbl)\n\nlet id_of (_, tbl) node = K.Ref_tbl.find tbl node\nlet topo_length (arr, _) = Array.length arr\n\nlet reachable_set root =\n  let topo = topo_array root in\n  let len = topo_length topo in\n  let arr, _ = topo in\n  let sink_idx =\n    let found = ref None in\n    Array.iteri\n      (fun 
i n ->\n        match K.view n with K.Sink _ -> found := Some i | _ -> ())\n      arr;\n    Option.get !found\n  in\n  let seen = Array.make len false in\n  let rec visit idx =\n    if idx >= 0 && idx < len && not seen.(idx) then begin\n      seen.(idx) <- true;\n      List.iter\n        (fun dep -> visit (id_of topo dep))\n        (K.children arr.(idx))\n    end\n  in\n  visit sink_idx;\n  (topo, seen, sink_idx)\n\nlet find_sink root =\n  let arr, _ = topo_array root in\n  let found = ref None in\n  Array.iteri\n    (fun _ n ->\n      match K.view n with K.Sink _ -> found := Some n | _ -> ())\n    arr;\n  Option.get !found\n\nlet count_reachable root pred =\n  let (topo, seen, _) = reachable_set root in\n  let arr, _ = topo in\n  let count = ref 0 in\n  Array.iteri\n    (fun i n -> if seen.(i) && pred (K.view n) then incr count)\n    arr;\n  !count\n\nlet sink_children root =\n  match K.view (find_sink root) with\n  | K.Sink { srcs; _ } -> srcs\n  | _ -> failwith \"expected Sink\"\n\nlet const_int_value node =\n  match K.view node with\n  | K.Const { value; _ } -> (\n      match Const.view value with\n      | Int n -> Int64.to_int n\n      | _ -> failwith \"expected int const\")\n  | _ ->\n      failwith\n        (Printf.sprintf \"expected Const, got %s\"\n           (Format.asprintf \"%a\" K.pp_view node))\n\nlet sink_int_values root =\n  List.map const_int_value (sink_children root)\n\nlet rec take n xs =\n  if n <= 0 then []\n  else match xs with [] -> [] | x :: xs -> x :: take (n - 1) xs\n\n(* Shared assertion: no Contract or Unroll markers remain after expansion *)\nlet assert_no_contract_unroll expanded =\n  let _ = find_sink expanded in\n  equal int 0\n    (count_reachable expanded (function\n      | K.Contract _ | K.Unroll _ -> true\n      | _ -> false))\n\n(* Build vectorize -> unroll -> contract -> sink, expand, assert clean *)\nlet expand_vec_contract ~consts ~unroll_axes ~contract_axes ~vec_width =\n  let vec = K.vectorize ~srcs:consts in\n  let unroll 
=\n    K.unroll ~src:vec ~axes:unroll_axes ~dtype:Dtype.Val.int32\n  in\n  let contract =\n    K.contract ~src:unroll ~axes:contract_axes\n      ~dtype:(Dtype.Val.vec vec_width Dtype.Val.int32)\n  in\n  let root = K.sink ~kernel_info:(kernel_info ()) [ contract ] in\n  let expanded = Expander.expand root in\n  assert_no_contract_unroll expanded;\n  expanded\n\nlet grouped_reduce_kernel () =\n  let p1 = K.param ~idx:1 ~dtype:ptr in\n  let r0 = K.range ~size:(idx_const 2) ~axis:0 ~kind:Axis_kind.Local () in\n  let r1 =\n    K.range ~size:(idx_const 4) ~axis:1 ~kind:Axis_kind.Group_reduce ()\n  in\n  let idx = K.index ~ptr:p1 ~idxs:[ r0; r1 ] () in\n  let ld = K.load ~src:idx () in\n  let red = K.reduce ~op:`Add ~src:ld ~ranges:[ r1 ] ~dtype:dt in\n  K.sink ~kernel_info:(kernel_info ()) [ red ]\n\nlet () =\n  run \"Expander\"\n    [\n      group \"expander core\"\n        [\n          test \"expand lowers reachable contract markers\" (fun () ->\n            let _ = K.range ~size:(idx_const 4) ~axis:0\n              ~kind:Axis_kind.Global () in\n            let cf = K.const (Const.float Dtype.Val.float32 3.0) in\n            let contract =\n              K.contract ~src:cf ~axes:[ (0, 2) ]\n                ~dtype:(Dtype.Val.vec 2 Dtype.Val.float32)\n            in\n            let root = K.sink ~kernel_info:(kernel_info ()) [ contract ] in\n            let expanded = Expander.expand root in\n            assert_no_contract_unroll expanded;\n            let children = sink_children expanded in\n            equal int 2 (List.length children);\n            is_true (List.hd children == List.nth children 1);\n            K.validate expanded);\n\n          test \"expand lowers tensor-core contract and unroll markers\" (fun () ->\n            let _renderer =\n              noop_renderer ~tensor_cores:[ metal_like_tc ] ()\n            in\n            let p0 = K.param ~idx:0 ~dtype:ptr in\n            let p1 = K.param ~idx:1 ~dtype:ptr in\n            let c8 = idx_const 8 in\n   
         let r0 = K.range ~size:c8 ~axis:0 ~kind:Axis_kind.Global () in\n            let r1 = K.range ~size:c8 ~axis:1 ~kind:Axis_kind.Global () in\n            let r2 = K.range ~size:c8 ~axis:2 ~kind:Axis_kind.Reduce () in\n            let idx0 = K.index ~ptr:p0 ~idxs:[ r0; r2 ] () in\n            let idx1 = K.index ~ptr:p1 ~idxs:[ r2; r1 ] () in\n            let ld0 = K.load ~src:idx0 () in\n            let ld1 = K.load ~src:idx1 () in\n            let zero = K.const (Const.float Dtype.Val.float32 0.0) in\n            let wmma =\n              K.wmma ~name:\"__metal_simdgroup_matrix_fma\" ~a:ld0 ~b:ld1\n                ~c:zero ~dtype:dt ~dims:(8, 8, 8)\n                ~dtype_in:Dtype.Float32 ~dtype_out:Dtype.Float32\n                ~device:\"TEST\" ~threads:32\n                ~upcast_axes:([ (0, 2) ], [ (0, 2) ], [ (0, 2); (1, 2) ])\n                ~reduce_axes:[ 2 ]\n            in\n            let unroll =\n              K.unroll ~src:wmma ~axes:[ (0, 2); (1, 2); (2, 2) ] ~dtype:dt\n            in\n            let contract =\n              K.contract ~src:unroll ~axes:[ (0, 2); (1, 2) ]\n                ~dtype:(Dtype.Val.vec 4 dt)\n            in\n            let root =\n              K.sink\n                ~kernel_info:\n                  (kernel_info\n                     ~opts_to_apply:\n                       [\n                         K.Opt.Tc\n                           { axis = 0; tc_select = -1; tc_opt = 0;\n                             use_tc = 1 };\n                       ]\n                     ())\n                [ contract ]\n            in\n            let expanded = Expander.expand root in\n            assert_no_contract_unroll expanded;\n            equal int 1\n              (count_reachable expanded (function\n                | K.Wmma _ -> true\n                | _ -> false));\n            is_true\n              (count_reachable expanded (function\n                | K.Gep _ -> true\n                | _ -> false) > 0);\n            
K.validate expanded);\n\n          test \"expand fully contracts consumed unroll markers\" (fun () ->\n            let consts = List.init 4 (fun i -> i32 i) in\n            let expanded =\n              expand_vec_contract ~consts ~unroll_axes:[ (1, 4) ]\n                ~contract_axes:[ (1, 4) ] ~vec_width:4\n            in\n            equal (list int) [ 0; 1; 2; 3 ] (sink_int_values expanded);\n            K.validate expanded);\n\n          test \"expand flattens nested unroll markers\" (fun () ->\n            let consts = List.init 8 (fun i -> i32 i) in\n            let vec = K.vectorize ~srcs:consts in\n            let unroll1 =\n              K.unroll ~src:vec ~axes:[ (1, 4) ]\n                ~dtype:(Dtype.Val.vec 2 Dtype.Val.int32)\n            in\n            let unroll2 =\n              K.unroll ~src:unroll1 ~axes:[ (2, 2) ] ~dtype:Dtype.Val.int32\n            in\n            let root = K.sink ~kernel_info:(kernel_info ()) [ unroll2 ] in\n            let expanded = Expander.expand root in\n            let _ = find_sink expanded in\n            equal int 0\n              (count_reachable expanded (function\n                | K.Unroll _ -> true\n                | _ -> false));\n            equal (list int) [ 0; 1; 2; 3; 4; 5; 6; 7 ]\n              (sink_int_values expanded);\n            K.validate expanded);\n\n          test \"expand preserves remaining unroll axes after contract\" (fun () ->\n            let consts = List.init 8 (fun i -> i32 i) in\n            let vec = K.vectorize ~srcs:consts in\n            let unroll =\n              K.unroll ~src:vec ~axes:[ (1, 4); (2, 2) ] ~dtype:Dtype.Val.int32\n            in\n            let contract =\n              K.contract ~src:unroll ~axes:[ (1, 4) ]\n                ~dtype:(Dtype.Val.vec 4 Dtype.Val.int32)\n            in\n            let root = K.sink ~kernel_info:(kernel_info ()) [ contract ] in\n            let expanded = Expander.expand root in\n            assert_no_contract_unroll expanded;\n      
      equal (list int) [ 0; 2; 4; 6; 1; 3; 5; 7 ]\n              (sink_int_values expanded);\n            K.validate expanded);\n\n          test \"expand contract without unroll repeats scalar\" (fun () ->\n            let contract =\n              K.contract ~src:(i32 7) ~axes:[ (2, 2) ]\n                ~dtype:(Dtype.Val.vec 2 Dtype.Val.int32)\n            in\n            let root = K.sink ~kernel_info:(kernel_info ()) [ contract ] in\n            let expanded = Expander.expand root in\n            assert_no_contract_unroll expanded;\n            let children = sink_children expanded in\n            equal int 2 (List.length children);\n            is_true (List.hd children == List.nth children 1);\n            equal (list int) [ 7; 7 ] (sink_int_values expanded);\n            K.validate expanded);\n\n          test \"expand contract axis order\" (fun () ->\n            let consts = List.init 16 (fun i -> i32 i) in\n            let expanded =\n              expand_vec_contract ~consts\n                ~unroll_axes:[ (1, 4); (2, 4) ]\n                ~contract_axes:[ (1, 4) ] ~vec_width:4\n            in\n            equal (list int)\n              [ 0; 4; 8; 12; 1; 5; 9; 13; 2; 6; 10; 14; 3; 7; 11; 15 ]\n              (sink_int_values expanded);\n            K.validate expanded);\n\n          test \"expand contract half-expand duplicates lanes\" (fun () ->\n            let consts = List.init 4 (fun i -> i32 i) in\n            let vec = K.vectorize ~srcs:consts in\n            let unroll =\n              K.unroll ~src:vec ~axes:[ (1, 4) ] ~dtype:Dtype.Val.int32\n            in\n            let contract =\n              K.contract ~src:unroll ~axes:[ (1, 4); (2, 2) ]\n                ~dtype:(Dtype.Val.vec 8 Dtype.Val.int32)\n            in\n            let root = K.sink ~kernel_info:(kernel_info ()) [ contract ] in\n            let expanded = Expander.expand root in\n            assert_no_contract_unroll expanded;\n            equal (list int) [ 0; 0; 1; 1; 2; 2; 
3; 3 ]\n              (sink_int_values expanded);\n            K.validate expanded);\n\n          test \"expand add broadcast\" (fun () ->\n            let consts = List.init 4 (fun i -> i32 i) in\n            let vec = K.vectorize ~srcs:consts in\n            let unroll =\n              K.unroll ~src:vec ~axes:[ (1, 4) ] ~dtype:Dtype.Val.int32\n            in\n            let add = K.binary ~op:`Add ~lhs:unroll ~rhs:(i32 3) in\n            let root = K.sink ~kernel_info:(kernel_info ()) [ add ] in\n            let expanded = Expander.expand root in\n            assert_no_contract_unroll expanded;\n            equal int 1\n              (count_reachable expanded (function\n                | K.Binary _ -> true\n                | _ -> false));\n            is_true\n              (count_reachable expanded (function\n                 | K.Vectorize { srcs = x :: xs; _ } ->\n                     List.for_all (fun s -> s == x) xs\n                 | _ -> false)\n              > 0);\n            K.validate expanded);\n\n          test \"expand same-axis add\" (fun () ->\n            let consts_a = List.init 4 (fun i -> i32 i) in\n            let consts_b = List.init 4 (fun i -> i32 (i * 4)) in\n            let unroll_a =\n              K.unroll ~src:(K.vectorize ~srcs:consts_a)\n                ~axes:[ (1, 4) ] ~dtype:Dtype.Val.int32\n            in\n            let unroll_b =\n              K.unroll ~src:(K.vectorize ~srcs:consts_b)\n                ~axes:[ (1, 4) ] ~dtype:Dtype.Val.int32\n            in\n            let add = K.binary ~op:`Add ~lhs:unroll_a ~rhs:unroll_b in\n            let root = K.sink ~kernel_info:(kernel_info ()) [ add ] in\n            let expanded = Expander.expand root in\n            assert_no_contract_unroll expanded;\n            equal int 1\n              (count_reachable expanded (function\n                | K.Binary _ -> true\n                | _ -> false));\n            equal int 2\n              (count_reachable expanded (function\n        
        | K.Vectorize _ -> true\n                | _ -> false));\n            K.validate expanded);\n\n          test \"expand different-axis add\" (fun () ->\n            let consts_a = List.init 4 (fun i -> i32 (i * 4)) in\n            let consts_b = List.init 4 (fun i -> i32 i) in\n            let unroll_a =\n              K.unroll ~src:(K.vectorize ~srcs:consts_a)\n                ~axes:[ (1, 4) ] ~dtype:Dtype.Val.int32\n            in\n            let unroll_b =\n              K.unroll ~src:(K.vectorize ~srcs:consts_b)\n                ~axes:[ (2, 4) ] ~dtype:Dtype.Val.int32\n            in\n            let add = K.binary ~op:`Add ~lhs:unroll_a ~rhs:unroll_b in\n            let root = K.sink ~kernel_info:(kernel_info ()) [ add ] in\n            let expanded = Expander.expand root in\n            assert_no_contract_unroll expanded;\n            equal int 1\n              (count_reachable expanded (function\n                | K.Binary _ -> true\n                | _ -> false));\n            equal int 2\n              (count_reachable expanded (function\n                | K.Vectorize _ -> true\n                | _ -> false));\n            K.validate expanded);\n\n          test \"expand contract multi-axis order\" (fun () ->\n            let build axes =\n              let consts = List.init 16 (fun i -> i32 i) in\n              let vec = K.vectorize ~srcs:consts in\n              let unroll =\n                K.unroll ~src:vec\n                  ~axes:[ (1, 2); (2, 2); (3, 2); (4, 2) ]\n                  ~dtype:Dtype.Val.int32\n              in\n              let contract =\n                K.contract ~src:unroll ~axes\n                  ~dtype:(Dtype.Val.vec 4 Dtype.Val.int32)\n              in\n              K.sink ~kernel_info:(kernel_info ()) [ contract ]\n            in\n            let assert_prefix axes expected =\n              let expanded = Expander.expand (build axes) in\n              assert_no_contract_unroll expanded;\n              equal (list int) 
expected\n                (sink_int_values expanded |> take 4);\n              K.validate expanded\n            in\n            assert_prefix [ (3, 2); (2, 2) ] [ 0; 4; 2; 6 ];\n            assert_prefix [ (2, 2); (3, 2) ] [ 0; 2; 4; 6 ]);\n        ];\n\n      group \"contract and expand edge cases\"\n        [\n          test \"contract axis 2 from 2-axis UNROLL\" (fun () ->\n            let consts = List.init 16 (fun i -> i32 i) in\n            let expanded =\n              expand_vec_contract ~consts\n                ~unroll_axes:[ (1, 4); (2, 4) ]\n                ~contract_axes:[ (2, 4) ] ~vec_width:4\n            in\n            let vals = sink_int_values expanded in\n            equal (list int) [ 0; 1; 2; 3 ] (take 4 vals);\n            equal (list int) [ 12; 13; 14; 15 ]\n              (take 4 (List.filteri (fun i _ -> i >= 12) vals));\n            K.validate expanded);\n\n          test \"contract axis 2 from 4-axis UNROLL\" (fun () ->\n            let consts = List.init 16 (fun i -> i32 i) in\n            let expanded =\n              expand_vec_contract ~consts\n                ~unroll_axes:[ (1, 2); (2, 2); (3, 2); (4, 2) ]\n                ~contract_axes:[ (2, 2) ] ~vec_width:2\n            in\n            let vals = sink_int_values expanded in\n            equal (list int) [ 0; 4 ] (take 2 vals);\n            equal (list int) [ 10; 14 ]\n              (take 2 (List.filteri (fun i _ -> i >= 12) vals));\n            K.validate expanded);\n\n          test \"contract middle axis of 3-axis UNROLL\" (fun () ->\n            let consts = List.init 8 (fun i -> i32 i) in\n            let expanded =\n              expand_vec_contract ~consts\n                ~unroll_axes:[ (1, 2); (2, 2); (3, 2) ]\n                ~contract_axes:[ (2, 2) ] ~vec_width:2\n            in\n            equal (list int) [ 0; 2; 1; 3; 4; 6; 5; 7 ]\n              (sink_int_values expanded);\n            K.validate expanded);\n\n          test \"different-axis add with flipped 
operands\" (fun () ->\n            let consts_a = List.init 4 (fun i -> i32 (i * 4)) in\n            let consts_b = List.init 4 (fun i -> i32 i) in\n            let unroll_a =\n              K.unroll ~src:(K.vectorize ~srcs:consts_a)\n                ~axes:[ (1, 4) ] ~dtype:Dtype.Val.int32\n            in\n            let unroll_b =\n              K.unroll ~src:(K.vectorize ~srcs:consts_b)\n                ~axes:[ (2, 4) ] ~dtype:Dtype.Val.int32\n            in\n            let add = K.binary ~op:`Add ~lhs:unroll_b ~rhs:unroll_a in\n            let root = K.sink ~kernel_info:(kernel_info ()) [ add ] in\n            let expanded = Expander.expand root in\n            assert_no_contract_unroll expanded;\n            equal int 1\n              (count_reachable expanded (function\n                | K.Binary _ -> true\n                | _ -> false));\n            equal int 2\n              (count_reachable expanded (function\n                | K.Vectorize _ -> true\n                | _ -> false));\n            K.validate expanded);\n\n          test \"contract simple exact GEP indices\" (fun () ->\n            let consts = List.init 4 (fun i -> i32 i) in\n            let expanded =\n              expand_vec_contract ~consts ~unroll_axes:[ (1, 4) ]\n                ~contract_axes:[ (1, 4) ] ~vec_width:4\n            in\n            equal (list int) [ 0; 1; 2; 3 ] (sink_int_values expanded);\n            K.validate expanded);\n        ];\n\n      group \"edge cases\"\n        [\n          test \"empty UNROLL is a no-op\" (fun () ->\n            let unroll =\n              K.unroll ~src:(i32 42) ~axes:[] ~dtype:Dtype.Val.int32\n            in\n            let root = K.sink ~kernel_info:(kernel_info ()) [ unroll ] in\n            let expanded = Expander.expand root in\n            let _ = find_sink expanded in\n            equal int 0\n              (count_reachable expanded (function\n                | K.Unroll _ -> true\n                | _ -> false));\n            
K.validate expanded);\n\n          test \"push broadcast through AFTER\" (fun () ->\n            let c5 = i32 5 in\n            let r0 =\n              K.range ~size:(idx_const 4) ~axis:0 ~kind:Axis_kind.Global ()\n            in\n            let end_node = K.end_ ~value:c5 ~ranges:[ r0 ] () in\n            let bcast = K.vectorize ~srcs:[ c5; c5; c5; c5 ] in\n            let after = K.after ~src:bcast ~deps:[ end_node ] in\n            let root = K.sink ~kernel_info:(kernel_info ()) [ after ] in\n            let expanded = Expander.expand root in\n            let _ = find_sink expanded in\n            is_true\n              (count_reachable expanded (function\n                 | K.After { src; _ } ->\n                     Dtype.count (K.dtype src) = 1\n                 | _ -> false)\n              > 0);\n            K.validate expanded);\n\n          test \"push broadcast through END\" (fun () ->\n            let c5 = i32 5 in\n            let r0 =\n              K.range ~size:(idx_const 4) ~axis:0 ~kind:Axis_kind.Global ()\n            in\n            let bcast = K.vectorize ~srcs:[ c5; c5; c5; c5 ] in\n            let end_node = K.end_ ~value:bcast ~ranges:[ r0 ] () in\n            let root = K.sink ~kernel_info:(kernel_info ()) [ end_node ] in\n            let expanded = Expander.expand root in\n            let _ = find_sink expanded in\n            is_true\n              (count_reachable expanded (function\n                 | K.End { value; _ } ->\n                     Dtype.count (K.dtype value) = 1\n                 | _ -> false)\n              > 0);\n            K.validate expanded);\n\n          test \"double UNROLL axis order\" (fun () ->\n            let consts = List.init 8 (fun i -> i32 i) in\n            let vec = K.vectorize ~srcs:consts in\n            let inner =\n              K.unroll ~src:vec ~axes:[ (1, 4) ]\n                ~dtype:(Dtype.Val.vec 2 Dtype.Val.int32)\n            in\n            let outer =\n              K.unroll ~src:inner 
~axes:[ (2, 2) ] ~dtype:Dtype.Val.int32\n            in\n            let contract =\n              K.contract ~src:outer ~axes:[ (1, 4); (2, 2) ]\n                ~dtype:(Dtype.Val.vec 8 Dtype.Val.int32)\n            in\n            let root = K.sink ~kernel_info:(kernel_info ()) [ contract ] in\n            let expanded = Expander.expand root in\n            assert_no_contract_unroll expanded;\n            K.validate expanded);\n        ];\n\n      group \"pre-expand\"\n        [\n          test \"converts Upcast range to Unroll\" (fun () ->\n            let r0 =\n              K.range ~size:(idx_const 4) ~axis:0 ~kind:Axis_kind.Upcast ()\n            in\n            let add = K.binary ~op:`Add ~lhs:r0 ~rhs:r0 in\n            let root = K.sink ~kernel_info:(kernel_info ()) [ add ] in\n            let expanded = Expander.expand root in\n            let _ = find_sink expanded in\n            equal int 0\n              (count_reachable expanded (function\n                 | K.Range { kind = Axis_kind.Upcast; _ } -> true\n                 | _ -> false));\n            equal int 0\n              (count_reachable expanded (function\n                | K.Unroll _ -> true\n                | _ -> false));\n            K.validate expanded);\n\n          test \"converts Unroll range to Unroll marker\" (fun () ->\n            let r0 =\n              K.range ~size:(idx_const 3) ~axis:0 ~kind:Axis_kind.Unroll ()\n            in\n            let add = K.binary ~op:`Add ~lhs:r0 ~rhs:r0 in\n            let root = K.sink ~kernel_info:(kernel_info ()) [ add ] in\n            let expanded = Expander.expand root in\n            let _ = find_sink expanded in\n            equal int 0\n              (count_reachable expanded (function\n                 | K.Range { kind = Axis_kind.Unroll; _ } -> true\n                 | _ -> false));\n            equal int 0\n              (count_reachable expanded (function\n                | K.Unroll _ -> true\n                | _ -> false));\n           
 K.validate expanded);\n\n          test \"ignores Reduce range\" (fun () ->\n            let c8 = idx_const 8 in\n            let r0 = K.range ~size:c8 ~axis:0 ~kind:Axis_kind.Reduce () in\n            let p0 = K.param ~idx:0 ~dtype:ptr in\n            let idx = K.index ~ptr:p0 ~idxs:[ r0 ] () in\n            let ld = K.load ~src:idx () in\n            let red = K.reduce ~op:`Add ~src:ld ~ranges:[ r0 ] ~dtype:dt in\n            let root = K.sink ~kernel_info:(kernel_info ()) [ red ] in\n            let expanded = Expander.expand root in\n            let _ = find_sink expanded in\n            equal int 1\n              (count_reachable expanded (function\n                 | K.Range { kind = Axis_kind.Reduce; _ } -> true\n                 | _ -> false));\n            equal int 0\n              (count_reachable expanded (function\n                | K.Unroll _ -> true\n                | _ -> false));\n            K.validate expanded);\n\n          test \"fix_reduce_unroll wraps source in CONTRACT\" (fun () ->\n            let consts = List.init 4 (fun i -> i32 i) in\n            let vec = K.vectorize ~srcs:consts in\n            let unroll =\n              K.unroll ~src:vec ~axes:[ (0, 4) ] ~dtype:Dtype.Val.int32\n            in\n            let cf = K.const (Const.float Dtype.Val.float32 1.0) in\n            let r1 =\n              K.range ~size:(idx_const 8) ~axis:1 ~kind:Axis_kind.Reduce ()\n            in\n            let red =\n              K.reduce ~op:`Add ~src:cf ~ranges:[ unroll; r1 ] ~dtype:dt\n            in\n            let root = K.sink ~kernel_info:(kernel_info ()) [ red ] in\n            let expanded = Expander.expand root in\n            let _ = find_sink expanded in\n            is_true\n              (count_reachable expanded (function\n                 | K.Reduce { ranges; _ } ->\n                     List.for_all\n                       (fun r ->\n                         match K.view r with K.Range _ -> true | _ -> false)\n                       
ranges\n                 | _ -> false)\n              > 0);\n            K.validate expanded);\n\n          test \"fix_store_unroll wraps store in CONTRACT\" (fun () ->\n            let consts = List.init 4 (fun i -> i32 i) in\n            let vec = K.vectorize ~srcs:consts in\n            let unroll =\n              K.unroll ~src:vec ~axes:[ (0, 4) ] ~dtype:Dtype.Val.int32\n            in\n            let i32_ptr = global_ptr Dtype.Val.int32 in\n            let p0 = K.param ~idx:0 ~dtype:i32_ptr in\n            let idx = K.index ~ptr:p0 ~idxs:[ idx_const 0 ] () in\n            let store = K.store ~dst:idx ~value:(i32 7) ~ranges:[ unroll ] in\n            let root = K.sink ~kernel_info:(kernel_info ()) [ store ] in\n            let expanded = Expander.expand root in\n            let _ = find_sink expanded in\n            equal int 0\n              (count_reachable expanded (function\n                | K.Unroll _ -> true\n                | _ -> false));\n            K.validate expanded);\n        ];\n\n      group \"group for reduce\"\n        [\n          test \"basic group-reduce transform\" (fun () ->\n            let expanded = Expander.expand (grouped_reduce_kernel ()) in\n            let _ = find_sink expanded in\n            equal int 1\n              (count_reachable expanded (function\n                | K.Bufferize _ -> true\n                | _ -> false));\n            equal int 2\n              (count_reachable expanded (function\n                | K.Reduce _ -> true\n                | _ -> false));\n            K.validate expanded);\n\n          test \"no-op without Group_reduce ranges\" (fun () ->\n            let r0 =\n              K.range ~size:(idx_const 8) ~axis:0 ~kind:Axis_kind.Reduce ()\n            in\n            let p0 = K.param ~idx:0 ~dtype:ptr in\n            let idx = K.index ~ptr:p0 ~idxs:[ r0 ] () in\n            let ld = K.load ~src:idx () in\n            let red = K.reduce ~op:`Add ~src:ld ~ranges:[ r0 ] ~dtype:dt in\n            let 
root = K.sink ~kernel_info:(kernel_info ()) [ red ] in\n            let expanded = Expander.expand root in\n            let _ = find_sink expanded in\n            equal int 0\n              (count_reachable expanded (function\n                | K.Bufferize _ -> true\n                | _ -> false));\n            equal int 1\n              (count_reachable expanded (function\n                | K.Reduce _ -> true\n                | _ -> false));\n            K.validate expanded);\n\n          test \"new reduce loop axis is original + 100\" (fun () ->\n            let expanded = Expander.expand (grouped_reduce_kernel ()) in\n            let _ = find_sink expanded in\n            equal int 1\n              (count_reachable expanded (function\n                 | K.Range { axis = 101; kind = Axis_kind.Reduce; _ } -> true\n                 | _ -> false));\n            K.validate expanded);\n\n          test \"upstream locals in buffer ranges\" (fun () ->\n            let expanded = Expander.expand (grouped_reduce_kernel ()) in\n            let _ = find_sink expanded in\n            let buf_node =\n              List.find\n                (fun n ->\n                  match K.view n with K.Bufferize _ -> true | _ -> false)\n                (K.toposort expanded)\n            in\n            (match K.view buf_node with\n             | K.Bufferize { ranges; _ } ->\n                 is_true\n                   (List.exists\n                      (fun r -> match K.view r with\n                         | K.Range { kind = Axis_kind.Local; _ } -> true\n                         | _ -> false)\n                      ranges);\n                 is_true\n                   (List.exists\n                      (fun r -> match K.view r with\n                         | K.Range { kind = Axis_kind.Group_reduce; _ } -> true\n                         | _ -> false)\n                      ranges)\n             | _ -> failwith (pp_kernel expanded));\n            K.validate expanded);\n        ];\n\n 
     group \"full pipeline\"\n        [\n          test \"expand rewrites grouped reduce through bufferize plus index\"\n            (fun () ->\n            let expanded = Expander.expand (grouped_reduce_kernel ()) in\n            let _ = find_sink expanded in\n            equal int 1\n              (count_reachable expanded (function\n                | K.Bufferize _ -> true\n                | _ -> false));\n            equal int 2\n              (count_reachable expanded (function\n                | K.Reduce _ -> true\n                | _ -> false));\n            let buf_node =\n              List.find\n                (fun n ->\n                  match K.view n with K.Bufferize _ -> true | _ -> false)\n                (K.toposort expanded)\n            in\n            begin\n              match K.view buf_node with\n              | K.Bufferize { ranges; _ } ->\n                  equal int 2 (List.length ranges);\n                  is_true\n                    (List.exists\n                       (fun r ->\n                         match K.view r with\n                         | K.Range { kind = Axis_kind.Local; _ } -> true\n                         | _ -> false)\n                       ranges);\n                  is_true\n                    (List.exists\n                       (fun r ->\n                         match K.view r with\n                         | K.Range { kind = Axis_kind.Group_reduce; _ } ->\n                             true\n                         | _ -> false)\n                       ranges)\n              | _ -> failwith (pp_kernel expanded)\n            end;\n            equal int 1\n              (count_reachable expanded (function\n                | K.Index { ptr; _ } -> (\n                    match K.view ptr with\n                    | K.Bufferize _ -> true\n                    | _ -> false)\n                | _ -> false));\n            equal int 1\n              (count_reachable expanded (function\n                | K.Reduce { ranges = 
[ r ]; _ } -> (\n                    match K.view r with\n                    | K.Range { kind = Axis_kind.Reduce; _ } -> true\n                    | _ -> false)\n                | _ -> false));\n            K.validate expanded);\n\n          test \"expand consumes upcast ranges in reduce\" (fun () ->\n            let p0 = K.param ~idx:0 ~dtype:ptr in\n            let c4 = idx_const 4 in\n            let c8 = idx_const 8 in\n            let r0 = K.range ~size:c4 ~axis:0 ~kind:Axis_kind.Upcast () in\n            let r1 = K.range ~size:c8 ~axis:1 ~kind:Axis_kind.Reduce () in\n            let idx = K.index ~ptr:p0 ~idxs:[ r0; r1 ] () in\n            let ld = K.load ~src:idx () in\n            let red =\n              K.reduce ~op:`Add ~src:ld ~ranges:[ r0; r1 ] ~dtype:dt\n            in\n            let root = K.sink ~kernel_info:(kernel_info ()) [ red ] in\n            let expanded = Expander.expand root in\n            assert_no_contract_unroll expanded;\n            is_true\n              (count_reachable expanded (function\n                 | K.Reduce { ranges = [ _ ]; _ } -> true\n                 | _ -> false)\n              = 1);\n            try K.validate expanded\n            with exn ->\n              failwith\n                (Printexc.to_string exn ^ \"\\n\" ^ pp_kernel expanded));\n\n          test \"expand consumes upcast ranges in store\" (fun () ->\n            let i32_ptr = global_ptr Dtype.Val.int32 in\n            let p0 = K.param ~idx:0 ~dtype:i32_ptr in\n            let c4 = idx_const 4 in\n            let r0 = K.range ~size:c4 ~axis:0 ~kind:Axis_kind.Upcast () in\n            let idx = K.index ~ptr:p0 ~idxs:[ r0 ] () in\n            let store = K.store ~dst:idx ~value:(i32 7) ~ranges:[ r0 ] in\n            let root = K.sink ~kernel_info:(kernel_info ()) [ store ] in\n            let expanded = Expander.expand root in\n            assert_no_contract_unroll expanded;\n            equal int 1\n              (count_reachable expanded (function\n     
           | K.Store _ -> true\n                | _ -> false));\n            K.validate expanded);\n\n          test \"expand consumes upcast ranges in end\" (fun () ->\n            let i32_ptr = global_ptr Dtype.Val.int32 in\n            let p0 = K.param ~idx:0 ~dtype:i32_ptr in\n            let c4 = idx_const 4 in\n            let r0 = K.range ~size:c4 ~axis:0 ~kind:Axis_kind.Upcast () in\n            let idx = K.index ~ptr:p0 ~idxs:[ r0 ] () in\n            let store = K.store ~dst:idx ~value:(i32 7) ~ranges:[] in\n            let end_node = K.end_ ~value:store ~ranges:[ r0 ] () in\n            let root = K.sink ~kernel_info:(kernel_info ()) [ end_node ] in\n            let expanded = Expander.expand root in\n            assert_no_contract_unroll expanded;\n            equal int 1\n              (count_reachable expanded (function\n                | K.Store _ -> true\n                | _ -> false));\n            equal int 1\n              (count_reachable expanded (function\n                | K.End _ -> true\n                | _ -> false));\n            K.validate expanded);\n        ];\n    ]\n"
  },
  {
    "path": "packages/tolk/test/unit/test_codegen_gpudims.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Unit tests for Gpudims.\n\n   Every _check_grouped_dims assertion is covered. The Z3 bijectivity proof\n   is replaced by exhaustive enumeration (all test products <= 131072). *)\n\nopen Windtrap\nopen Tolk\nopen Tolk_ir\nmodule K = Kernel\nmodule D = Dtype\nmodule C = Const\nmodule Ak = Axis_kind\n\n(* Helpers *)\n\nlet idx n = K.const (C.int D.Val.index n)\nlet global_fptr = D.Ptr.create D.Val.float32 ~addrspace:Global ~size:(-1)\n\nlet kernel_info ?(dont_use_locals = false) () =\n  {\n    K.name = \"\";\n    axis_kinds = [];\n    dont_use_locals;\n    applied_opts = [];\n    opts_to_apply = None;\n    estimates = None;\n  }\n\nlet wrap_sink ?(ki = kernel_info ()) srcs = K.sink ~kernel_info:ki srcs\n\n(* Expression evaluator *)\n\n(* Evaluate a Kernel expression tree by substituting integer values for\n   SPECIAL nodes.  Only handles the node types produced by\n   get_grouped_dims: Const, Special, Add, Mul, Idiv, Mod. *)\nlet rec eval_expr (env : Special_dim.t -> int) (node : K.t) : int =\n  match K.view node with\n  | K.Const { value; _ } ->\n      (match C.view value with C.Int n -> Int64.to_int n | _ -> failwith \"eval_expr: not int\")\n  | K.Special { dim; _ } -> env dim\n  | K.Binary { op = `Add; lhs; rhs; _ } -> eval_expr env lhs + eval_expr env rhs\n  | K.Binary { op = `Mul; lhs; rhs; _ } -> eval_expr env lhs * eval_expr env rhs\n  | K.Binary { op = `Idiv; lhs; rhs; _ } -> eval_expr env lhs / eval_expr env rhs\n  | K.Binary { op = `Mod; lhs; rhs; _ } -> eval_expr env lhs mod eval_expr env rhs\n  | _ -> failwith \"eval_expr: unexpected node kind\"\n\n(* SPECIAL collection *)\n\n(* Collect unique SPECIAL (dim, size) pairs from a list of index expressions,\n   sorted by dim.  
Deduplication is by Special_dim equality. *)\nlet collect_specials idxs =\n  let all =\n    List.concat_map\n      (fun root ->\n        List.filter_map\n          (fun n ->\n            match K.view n with\n            | K.Special { dim; size; _ } -> Some (dim, K.const_to_int size)\n            | _ -> None)\n          (K.toposort root))\n      idxs\n  in\n  let deduped =\n    List.fold_left\n      (fun acc (dim, size) ->\n        if List.exists (fun (d, _) -> Special_dim.equal d dim) acc then acc\n        else (dim, size) :: acc)\n      [] all\n  in\n  List.sort (fun (a, _) (b, _) -> Special_dim.compare a b) deduped\n\nlet special_sizes idxs = List.map snd (collect_specials idxs)\n\n(* Bijectivity verifier *)\n\n(* Exhaustive check: for every valid combination of SPECIAL values, compute the\n   flat index into the original dims array and verify that the mapping is a\n   bijection (every flat index in [0, total) is hit exactly once). *)\nlet verify_bijectivity idxs (dims : int array) =\n  let n = Array.length dims in\n  let total = Array.fold_left ( * ) 1 dims in\n  let specials = collect_specials idxs in\n  let spec_dims = List.map fst specials in\n  let spec_sizes = List.map snd specials in\n  let seen = Array.make total false in\n  (* suffix products: suffix.(i) = prod(dims[i..n-1]), suffix.(n) = 1 *)\n  let suffix = Array.make (n + 1) 1 in\n  for i = n - 1 downto 0 do\n    suffix.(i) <- suffix.(i + 1) * dims.(i)\n  done;\n  (* Enumerate all SPECIAL value combinations *)\n  let rec enumerate vals remaining =\n    match remaining with\n    | [] ->\n        let bindings = List.combine spec_dims (List.rev vals) in\n        let env dim =\n          let rec find = function\n            | (d, v) :: _ when Special_dim.equal d dim -> v\n            | _ :: rest -> find rest\n            | [] -> failwith \"eval_expr: unknown SPECIAL dim\"\n          in\n          find bindings\n        in\n        let flat = ref 0 in\n        List.iteri\n          (fun i idx_expr ->\n            
flat := !flat + (eval_expr env idx_expr * suffix.(i + 1)))\n          idxs;\n        if !flat < 0 || !flat >= total then\n          failwith\n            (Printf.sprintf \"bijectivity: flat=%d out of bounds [0,%d)\" !flat total);\n        if seen.(!flat) then\n          failwith (Printf.sprintf \"bijectivity: flat=%d already seen (not injective)\" !flat);\n        seen.(!flat) <- true\n    | size :: rest ->\n        for v = 0 to size - 1 do\n          enumerate (v :: vals) rest\n        done\n  in\n  enumerate [] spec_sizes;\n  Array.iteri\n    (fun i b ->\n      if not b then\n        failwith (Printf.sprintf \"bijectivity: flat=%d never hit (not surjective)\" i))\n    seen\n\n(* Unified check (mirrors _check_grouped_dims) *)\n\n(* Calls get_grouped_dims, asserts len(idxs)==len(dims), asserts SPECIAL\n   sizes match expected, and verifies bijectivity. *)\nlet check_grouped_dims ?(assert_same_length = true) prefix dims max_sizes reverse expected_sizes =\n  let kt_dims = Array.map (fun n -> idx n) dims in\n  let idxs = Gpudims.get_grouped_dims prefix kt_dims max_sizes ~reverse in\n  equal int (List.length idxs) (Array.length dims)\n    ~msg:\"idxs length should equal dims length\";\n  let sizes = special_sizes idxs in\n  if assert_same_length then begin\n    let num_specials = List.length (collect_specials idxs) in\n    equal int num_specials (min num_specials (Array.length dims))\n      ~msg:\"unique SPECIAL count should not exceed dims count\"\n  end;\n  equal (list int) sizes expected_sizes ~msg:\"SPECIAL sizes\";\n  verify_bijectivity idxs dims\n\n(* Group 1: no-op *)\n\nlet noop_tests =\n  group \"no-op\"\n    [\n      test \"single dim fits\" (fun () ->\n          check_grouped_dims \"gidx\" [| 2 |] (Some [ 16; 16; 16 ]) false [ 2 ]);\n      test \"two dims fit\" (fun () ->\n          check_grouped_dims \"gidx\" [| 2; 3 |] (Some [ 16; 16; 16 ]) false [ 2; 3 ]);\n    ]\n\n(* Group 2: reverse *)\n\nlet reverse_tests =\n  group \"reverse\"\n    [\n      test 
\"reverse two dims\" (fun () ->\n          check_grouped_dims \"gidx\" [| 2; 3 |] (Some [ 16; 16; 16 ]) true [ 3; 2 ]);\n      test \"three dims not reversed\" (fun () ->\n          check_grouped_dims \"gidx\" [| 2; 3; 4 |] (Some [ 16; 16; 16 ]) false\n            [ 2; 3; 4 ]);\n    ]\n\n(* Group 3: splitting (len(dims)==len(max)) *)\n\nlet split_same_len_tests =\n  group \"splitting same-length\"\n    [\n      test \"(64,3,4) / (16,16,16)\" (fun () ->\n          check_grouped_dims \"gidx\" [| 64; 3; 4 |] (Some [ 16; 16; 16 ]) false\n            [ 16; 12; 4 ]);\n      test \"(64,3,4) / (16,4,16)\" (fun () ->\n          check_grouped_dims \"gidx\" [| 64; 3; 4 |] (Some [ 16; 4; 16 ]) false\n            [ 16; 3; 16 ]);\n      test \"(64,3,4) reversed / (16,16,16)\" (fun () ->\n          check_grouped_dims \"gidx\" [| 64; 3; 4 |] (Some [ 16; 16; 16 ]) true\n            [ 16; 3; 16 ]);\n      test \"(128,3,4) / (16,4,256)\" (fun () ->\n          check_grouped_dims \"gidx\" [| 128; 3; 4 |] (Some [ 16; 4; 256 ]) false\n            [ 16; 3; 32 ]);\n      test \"(4,4,512) / (16,4,256)\" (fun () ->\n          check_grouped_dims \"gidx\" [| 4; 4; 512 |] (Some [ 16; 4; 256 ]) false\n            [ 8; 4; 256 ]);\n      test \"(5,12,7) / (8,4,16)\" (fun () ->\n          check_grouped_dims \"gidx\" [| 5; 12; 7 |] (Some [ 8; 4; 16 ]) false\n            [ 10; 3; 14 ]);\n    ]\n\n(* Group 4: grouping preferred *)\n\nlet grouping_preferred_tests =\n  group \"grouping preferred\"\n    [\n      test \"(512,4,2) / (8192,2,2)\" (fun () ->\n          check_grouped_dims \"gidx\" [| 512; 4; 2 |] (Some [ 8192; 2; 2 ]) false\n            [ 2048; 2 ]);\n    ]\n\n(* Group 5: expansion (len(dims) < len(max)) *)\n\nlet expansion_tests =\n  group \"expansion\"\n    [\n      test \"(128,) -> (16,8)\" (fun () ->\n          check_grouped_dims ~assert_same_length:false \"gidx\" [| 128 |]\n            (Some [ 16; 16; 256 ]) false [ 16; 8 ]);\n      test \"(65536,) -> (16,16,256)\" (fun () ->\n          
check_grouped_dims ~assert_same_length:false \"gidx\" [| 65536 |]\n            (Some [ 16; 16; 256 ]) false [ 16; 16; 256 ]);\n      test \"(65536,2) -> (32768,4)\" (fun () ->\n          check_grouped_dims ~assert_same_length:false \"gidx\" [| 65536; 2 |]\n            (Some [ 65535; 65535; 65535 ]) false [ 32768; 4 ]);\n      test \"(121,) -> (11,11) sqrt factor\" (fun () ->\n          check_grouped_dims ~assert_same_length:false \"gidx\" [| 121 |]\n            (Some [ 12; 12; 12 ]) false [ 11; 11 ]);\n      test \"(128,128) -> (16,16,64)\" (fun () ->\n          check_grouped_dims ~assert_same_length:false \"gidx\" [| 128; 128 |]\n            (Some [ 16; 16; 256 ]) false [ 16; 16; 64 ]);\n    ]\n\n(* Group 6: contraction (len(dims) > len(max)) *)\n\nlet contraction_tests =\n  group \"contraction\"\n    [\n      test \"(2,3,4,5) / (16,16,16)\" (fun () ->\n          check_grouped_dims \"gidx\" [| 2; 3; 4; 5 |] (Some [ 16; 16; 16 ]) false\n            [ 6; 4; 5 ]);\n      test \"(2,3,4,5) / (32,16,16) reversed\" (fun () ->\n          check_grouped_dims \"gidx\" [| 2; 3; 4; 5 |] (Some [ 32; 16; 16 ]) true\n            [ 20; 3; 2 ]);\n      test \"(2,3,4,5) / (4,16,16) left too small\" (fun () ->\n          check_grouped_dims \"gidx\" [| 2; 3; 4; 5 |] (Some [ 4; 16; 16 ]) false\n            [ 2; 12; 5 ]);\n      test \"(2,3,4,5) / (16,16,16) reversed\" (fun () ->\n          check_grouped_dims \"gidx\" [| 2; 3; 4; 5 |] (Some [ 16; 16; 16 ]) true\n            [ 5; 12; 2 ]);\n    ]\n\n(* Group 7: error cases *)\n\nlet is_failure = function Failure _ -> true | _ -> false\n\nlet error_tests =\n  group \"errors\"\n    [\n      test \"prime dim 23 unfactorable\" (fun () ->\n          raises_match is_failure (fun () ->\n              ignore\n                (Gpudims.get_grouped_dims \"gidx\" (Array.map idx [| 23 |]) (Some [ 16; 16; 16 ])\n                   ~reverse:false)));\n      test \"unfactorable (128,3,4) / (16,2,2)\" (fun () ->\n          raises_match is_failure (fun () 
->\n              ignore\n                (Gpudims.get_grouped_dims \"gidx\" (Array.map idx [| 128; 3; 4 |])\n                   (Some [ 16; 2; 2 ]) ~reverse:false)));\n      test \"too many dims (2,3,4,5,6)\" (fun () ->\n          raises_match is_failure (fun () ->\n              ignore\n                (Gpudims.get_grouped_dims \"gidx\" (Array.map idx [| 2; 3; 4; 5; 6 |])\n                   (Some [ 16; 16; 16 ]) ~reverse:false)));\n    ]\n\n(* Group 8: direct-mapped SPECIAL *)\n\n(* When (2,3,4,5) contracts to 3 SPECIAL dims (6,4,5), unmerged dims (4,5)\n   map directly to SPECIAL ops. *)\n\nlet direct_special_tests =\n  group \"direct-mapped SPECIAL\"\n    [\n      test \"unmerged dims are bare SPECIAL\" (fun () ->\n          let idxs =\n            Gpudims.get_grouped_dims \"gidx\" (Array.map idx [| 2; 3; 4; 5 |])\n              (Some [ 16; 16; 16 ]) ~reverse:false\n          in\n          (match K.view (List.nth idxs 2) with\n          | K.Special _ -> ()\n          | _ -> fail \"expected SPECIAL for idxs[2]\");\n          (match K.view (List.nth idxs 3) with\n          | K.Special _ -> ()\n          | _ -> fail \"expected SPECIAL for idxs[3]\"));\n    ]\n\n(* Group 9: max_sizes=None passthrough *)\n\nlet none_passthrough_tests =\n  group \"max_sizes=None\"\n    [\n      test \"three dims passthrough\" (fun () ->\n          check_grouped_dims \"gidx\" [| 2; 3; 4 |] None false [ 2; 3; 4 ]);\n      test \"single dim passthrough\" (fun () ->\n          check_grouped_dims \"gidx\" [| 100 |] None false [ 100 ]);\n    ]\n\n(* Group 10: integration via pm_add_gpudims *)\n\nlet gpu_renderer\n    ?(global_max = [ 0x8FFFFFFF; 0x8FFFFFFF; 0x8FFFFFFF ])\n    ?(local_max = [ 0x8FFFFFFF; 0x8FFFFFFF; 0x8FFFFFFF ]) () =\n  Renderer.make ~name:\"test\" ~device:\"TEST\" ~has_local:true ~has_shared:true\n    ~shared_max:32768 ~global_max ~local_max ~render:(fun ?name:_ _ -> \"\") ()\n\nlet thread_renderer () =\n  Renderer.make ~name:\"thread\" ~device:\"CPU\" ~has_local:false 
~has_shared:false\n    ~shared_max:0 ~has_threads:true\n    ~global_max:[ 8; 0; 0 ]\n    ~render:(fun ?name:_ _ -> \"\") ()\n\n(* Build a simple kernel: load from data0[range_sum], store to data0[range_sum]. *)\nlet make_global_kernel ?(ki = kernel_info ()) ranges =\n  let p = K.param ~idx:0 ~dtype:global_fptr in\n  let open K.O in\n  let combined =\n    List.fold_left (fun acc r -> acc + r) (List.hd ranges) (List.tl ranges)\n  in\n  let index_node = K.index ~ptr:p ~idxs:[ combined ] () in\n  let ld = K.load ~src:index_node () in\n  let store_idx = K.index ~ptr:p ~idxs:[ combined ] () in\n  let st = K.store ~dst:store_idx ~value:ld ~ranges in\n  K.sink ~kernel_info:ki [ st ]\n\nlet find_specials root =\n  List.filter\n    (fun n -> match K.view n with K.Special _ -> true | _ -> false)\n    (K.toposort root)\n\nlet find_ranges root =\n  List.filter K.is_range (K.toposort root)\n\nlet find_define_vars root =\n  List.filter\n    (fun n -> match K.view n with K.Define_var _ -> true | _ -> false)\n    (K.toposort root)\n\nlet integration_tests =\n  group \"pm_add_gpudims\"\n    [\n      test \"replaces global ranges with SPECIAL\" (fun () ->\n          let r0 =\n            K.range ~size:(idx 32) ~axis:0 ~kind:Ak.Global ~dtype:D.Val.index ()\n          in\n          let r1 =\n            K.range ~size:(idx 16) ~axis:1 ~kind:Ak.Global ~dtype:D.Val.index ()\n          in\n          let sink = make_global_kernel [ r0; r1 ] in\n          let ren = gpu_renderer () in\n          let result = Gpudims.pm_add_gpudims ren sink in\n          equal int (List.length (find_ranges result)) 0\n            ~msg:\"no ranges after pass\";\n          is_true (List.length (find_specials result) > 0)\n            ~msg:\"SPECIAL nodes present\");\n      test \"replaces global+local ranges\" (fun () ->\n          let g0 =\n            K.range ~size:(idx 32) ~axis:0 ~kind:Ak.Global ~dtype:D.Val.index ()\n          in\n          let l0 =\n            K.range ~size:(idx 8) ~axis:1 ~kind:Ak.Local 
~dtype:D.Val.index ()\n          in\n          let sink = make_global_kernel [ g0; l0 ] in\n          let ren = gpu_renderer () in\n          let result = Gpudims.pm_add_gpudims ren sink in\n          let specials = find_specials result in\n          let has_gid =\n            List.exists\n              (fun n ->\n                match K.view n with\n                | K.Special { dim = Special_dim.Group_id _; _ } -> true\n                | _ -> false)\n              specials\n          in\n          let has_lid =\n            List.exists\n              (fun n ->\n                match K.view n with\n                | K.Special { dim = Special_dim.Local_id _; _ } -> true\n                | _ -> false)\n              specials\n          in\n          is_true has_gid ~msg:\"has Group_id SPECIAL\";\n          is_true has_lid ~msg:\"has Local_id SPECIAL\");\n      test \"no-op when no GPU ranges\" (fun () ->\n          let r =\n            K.range ~size:(idx 4) ~axis:0 ~kind:Ak.Reduce ~dtype:D.Val.index ()\n          in\n          let p = K.param ~idx:0 ~dtype:global_fptr in\n          let index_node = K.index ~ptr:p ~idxs:[ r ] () in\n          let ld = K.load ~src:index_node () in\n          let end_node = K.end_ ~value:ld ~ranges:[ r ] () in\n          let sink = wrap_sink [ end_node ] in\n          let ren = gpu_renderer () in\n          let result = Gpudims.pm_add_gpudims ren sink in\n          equal int (List.length (find_specials result)) 0\n            ~msg:\"no SPECIALs for reduce-only kernel\");\n      test \"no-op when SPECIAL already present\" (fun () ->\n          let s =\n            K.special ~dim:(Special_dim.Group_id 0) ~size:(idx 32)\n              ~dtype:D.Val.int32 ()\n          in\n          let p = K.param ~idx:0 ~dtype:global_fptr in\n          let index_node = K.index ~ptr:p ~idxs:[ s ] () in\n          let ld = K.load ~src:index_node () in\n          let store_idx = K.index ~ptr:p ~idxs:[ s ] () in\n          let st = K.store ~dst:store_idx 
~value:ld ~ranges:[] in\n          let sink = wrap_sink [ st ] in\n          let ren = gpu_renderer () in\n          let result = Gpudims.pm_add_gpudims ren sink in\n          let specials_before = find_specials sink in\n          let specials_after = find_specials result in\n          equal int (List.length specials_before) (List.length specials_after)\n            ~msg:\"same SPECIAL count (idempotent)\");\n      test \"threaded renderer uses core_id\" (fun () ->\n          let r0 =\n            K.range ~size:(idx 4) ~axis:0 ~kind:Ak.Global ~dtype:D.Val.index ()\n          in\n          let sink = make_global_kernel [ r0 ] in\n          let ren = thread_renderer () in\n          let result = Gpudims.pm_add_gpudims ren sink in\n          let dvars = find_define_vars result in\n          is_true (List.length dvars > 0) ~msg:\"has Define_var\";\n          (match K.view (List.hd dvars) with\n          | K.Define_var { name; _ } ->\n              equal string name \"core_id\" ~msg:\"variable named core_id\"\n          | _ -> fail \"expected Define_var\"));\n    ]\n\n(* Group 11: missing-locals gating *)\n\n(* When a STORE writes to a global address and the INDEX does not depend on a\n   local range, the missing local range should be gated with an Invalid mask\n   (l0 == 0 ? value : Invalid). 
*)\n\nlet missing_locals_tests =\n  group \"missing-locals gating\"\n    [\n      test \"missing local range gets gated with Invalid\" (fun () ->\n          let g0 =\n            K.range ~size:(idx 32) ~axis:0 ~kind:Ak.Global ~dtype:D.Val.index ()\n          in\n          let l0 =\n            K.range ~size:(idx 8) ~axis:1 ~kind:Ak.Local ~dtype:D.Val.index ()\n          in\n          let p = K.param ~idx:0 ~dtype:global_fptr in\n          (* Load using both ranges *)\n          let load_idx = K.index ~ptr:p ~idxs:[ K.O.(g0 + l0) ] () in\n          let loaded = K.load ~src:load_idx () in\n          (* End local loop (local reduction) *)\n          let reduced = K.end_ ~value:loaded ~ranges:[ l0 ] () in\n          (* Store using ONLY the global range in the index *)\n          let store_idx = K.index ~ptr:p ~idxs:[ g0 ] () in\n          let st = K.store ~dst:store_idx ~value:reduced ~ranges:[ g0 ] in\n          let ki = kernel_info () in\n          let sink = K.sink ~kernel_info:ki [ st ] in\n          let ren = gpu_renderer () in\n          let result = Gpudims.pm_add_gpudims ren sink in\n          (* Should have Invalid_index node (the mask gating) *)\n          let has_invalid =\n            List.exists\n              (fun n ->\n                match K.view n with K.Invalid_index _ -> true | _ -> false)\n              (K.toposort result)\n          in\n          is_true has_invalid\n            ~msg:\"has Invalid_index node from missing-locals gating\";\n          (* Should have a Ternary Where (the condition) *)\n          let has_where =\n            List.exists\n              (fun n ->\n                match K.view n with\n                | K.Ternary { op = `Where; _ } -> true\n                | _ -> false)\n              (K.toposort result)\n          in\n          is_true has_where ~msg:\"has Where node for gating\");\n    ]\n\n(* Group 12: dont_use_locals *)\n\nlet dont_use_locals_tests =\n  group \"dont_use_locals\"\n    [\n      test \"uses idx prefix with 
no local SPECIALs\" (fun () ->\n          let g0 =\n            K.range ~size:(idx 32) ~axis:0 ~kind:Ak.Global ~dtype:D.Val.index ()\n          in\n          let g1 =\n            K.range ~size:(idx 16) ~axis:1 ~kind:Ak.Global ~dtype:D.Val.index ()\n          in\n          let ki = kernel_info ~dont_use_locals:true () in\n          let sink = make_global_kernel ~ki [ g0; g1 ] in\n          let ren = gpu_renderer () in\n          let result = Gpudims.pm_add_gpudims ren sink in\n          let specials = find_specials result in\n          is_true (List.length specials > 0) ~msg:\"has SPECIAL nodes\";\n          (* All specials should be Global_idx, not Group_id or Local_id *)\n          let all_global_idx =\n            List.for_all\n              (fun n ->\n                match K.view n with\n                | K.Special { dim = Special_dim.Global_idx _; _ } -> true\n                | _ -> false)\n              specials\n          in\n          is_true all_global_idx ~msg:\"all SPECIAL nodes are Global_idx\";\n          (* No local SPECIALs *)\n          let has_local =\n            List.exists\n              (fun n ->\n                match K.view n with\n                | K.Special { dim = Special_dim.Local_id _; _ } -> true\n                | _ -> false)\n              specials\n          in\n          is_true (not has_local) ~msg:\"no Local_id SPECIALs\");\n    ]\n\n(* Entry point *)\n\nlet () =\n  run \"Codegen.Gpudims\"\n    [\n      noop_tests;\n      reverse_tests;\n      split_same_len_tests;\n      grouping_preferred_tests;\n      expansion_tests;\n      contraction_tests;\n      error_tests;\n      direct_special_tests;\n      none_passthrough_tests;\n      integration_tests;\n      missing_locals_tests;\n      dont_use_locals_tests;\n    ]\n"
  },
  {
    "path": "packages/tolk/test/unit/test_codegen_heuristic.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Unit and integration tests for Heuristic.hand_coded_optimizations.\n\n   Tests the 11-step heuristic optimization sequence. These tests verify\n   structural decision logic by checking which opts are applied for specific\n   kernel shapes and renderers. *)\n\nopen Windtrap\nopen Tolk\nopen Tolk_ir\nmodule K = Kernel\nmodule D = Dtype\nmodule C = Const\nmodule Ak = Axis_kind\nmodule P = Postrange\n\n(* Helpers *)\n\nlet idx n = K.const (C.int D.Val.index n)\nlet global_fptr = D.Ptr.create D.Val.float32 ~addrspace:Global ~size:(-1)\n\nlet kernel_info () =\n  { K.name = \"test\";\n    axis_kinds = [];\n    dont_use_locals = false;\n    applied_opts = [];\n    opts_to_apply = None;\n    estimates = None;\n  }\n\nlet wrap_sink srcs = K.sink ~kernel_info:(kernel_info ()) srcs\n\nlet loop_range ~axis size =\n  K.range ~size:(idx size) ~axis ~kind:Ak.Loop ~dtype:D.Val.index ()\n\nlet reduce_range ~axis size =\n  K.range ~size:(idx size) ~axis ~kind:Ak.Reduce ~dtype:D.Val.index ()\n\nlet global_range ~axis size =\n  K.range ~size:(idx size) ~axis ~kind:Ak.Global ~dtype:D.Val.index ()\n\n(* Renderers *)\n\nlet gpu_renderer () =\n  Renderer.make ~name:\"test\" ~device:\"TEST\" ~has_local:true ~has_shared:true\n    ~shared_max:32768 ~render:(fun ?name:_ _ -> \"\") ()\n\nlet cpu_renderer () =\n  Renderer.make ~name:\"cpu\" ~device:\"CPU\" ~has_local:false ~has_shared:false\n    ~shared_max:0 ~render:(fun ?name:_ _ -> \"\") ()\n\nlet thread_renderer () =\n  Renderer.make ~name:\"thread\" ~device:\"CPU\" ~has_local:false ~has_shared:false\n    ~shared_max:0 ~has_threads:true ~global_max:[ 32; 32; 32 ]\n    ~render:(fun ?name:_ _ -> \"\") ()\n\n(* Opt Inspection Helpers *)\n\nlet run_heuristic ast ren =\n  let 
t = P.create ast ren in\n  let result = Heuristic.hand_coded_optimizations t in\n  P.applied_opts result\n\nlet run_heuristic_scheduler ast ren =\n  let t = P.create ast ren in\n  Heuristic.hand_coded_optimizations t\n\nlet is_grouptop = function K.Opt.Grouptop _ -> true | _ -> false\nlet is_upcast = function K.Opt.Upcast _ -> true | _ -> false\nlet is_unroll = function K.Opt.Unroll _ -> true | _ -> false\nlet is_local = function K.Opt.Local _ -> true | _ -> false\nlet is_thread = function K.Opt.Thread _ -> true | _ -> false\nlet is_group = function K.Opt.Group _ -> true | _ -> false\nlet is_nolocals = function K.Opt.Nolocals -> true | _ -> false\n\nlet thread_axis = function K.Opt.Thread { axis; _ } -> Some axis | _ -> None\nlet local_axis = function K.Opt.Local { axis; _ } -> Some axis | _ -> None\n\nlet count pred opts = List.length (List.filter pred opts)\nlet has pred opts = List.exists pred opts\n\n(* Env var helper: sets var, runs f, restores original. *)\nlet with_env name value f =\n  let old = Sys.getenv_opt name in\n  Unix.putenv name value;\n  Fun.protect\n    ~finally:(fun () ->\n      match old with\n      | Some v -> Unix.putenv name v\n      | None ->\n          (* Can't truly unsetenv in OCaml; empty string causes\n             getenv_int to return the default via int_of_string failure. 
*)\n          Unix.putenv name \"\")\n    f\n\n(* AST Fixtures *)\n\n(* Elementwise: out[i,j] = exp2(in[i,j]) — Global ranges for GPU tests *)\nlet elementwise_global_ast ~s0 ~s1 =\n  let p0 = K.param ~idx:0 ~dtype:global_fptr in\n  let p1 = K.param ~idx:1 ~dtype:global_fptr in\n  let r0 = global_range ~axis:0 s0 in\n  let r1 = global_range ~axis:1 s1 in\n  let open K.O in\n  let in_idx = K.index ~ptr:p1 ~idxs:[ r0 * idx s1 + r1 ] () in\n  let ld = K.load ~src:in_idx () in\n  let value = K.unary ~op:`Exp2 ~src:ld in\n  let out_idx = K.index ~ptr:p0 ~idxs:[ r0 * idx s1 + r1 ] () in\n  let st = K.store ~dst:out_idx ~value ~ranges:[] in\n  let e = K.end_ ~value:st ~ranges:[ r0; r1 ] () in\n  wrap_sink [ e ]\n\n(* Elementwise with Loop ranges — for thread renderer tests *)\nlet elementwise_loop_ast ~s0 ~s1 =\n  let p0 = K.param ~idx:0 ~dtype:global_fptr in\n  let p1 = K.param ~idx:1 ~dtype:global_fptr in\n  let r0 = loop_range ~axis:0 s0 in\n  let r1 = loop_range ~axis:1 s1 in\n  let open K.O in\n  let in_idx = K.index ~ptr:p1 ~idxs:[ r0 * idx s1 + r1 ] () in\n  let ld = K.load ~src:in_idx () in\n  let value = K.unary ~op:`Exp2 ~src:ld in\n  let out_idx = K.index ~ptr:p0 ~idxs:[ r0 * idx s1 + r1 ] () in\n  let st = K.store ~dst:out_idx ~value ~ranges:[] in\n  let e = K.end_ ~value:st ~ranges:[ r0; r1 ] () in\n  wrap_sink [ e ]\n\n(* Reduce: out[i,j] = sum_k(in[i,k,j]) — Global output + Reduce *)\nlet reduce_global_ast ~s0 ~s1 ~sr =\n  let p0 = K.param ~idx:0 ~dtype:global_fptr in\n  let p1 = K.param ~idx:1 ~dtype:global_fptr in\n  let r0 = global_range ~axis:0 s0 in\n  let r1 = global_range ~axis:1 s1 in\n  let rr = reduce_range ~axis:2 sr in\n  let open K.O in\n  let in_idx =\n    K.index ~ptr:p1 ~idxs:[ r0 * idx sr * idx s1 + rr * idx s1 + r1 ] ()\n  in\n  let ld = K.load ~src:in_idx () in\n  let red = K.reduce ~op:`Add ~src:ld ~ranges:[ rr ] ~dtype:D.Val.float32 in\n  let out_idx = K.index ~ptr:p0 ~idxs:[ r0 * idx s1 + r1 ] () in\n  let st = K.store ~dst:out_idx 
~value:red ~ranges:[] in\n  let e = K.end_ ~value:st ~ranges:[ r0; r1 ] () in\n  wrap_sink [ e ]\n\n(* Double reduce: out[i,j] = sum_{k1,k2}(in[i,j,k1,k2])\n   Two reduce ranges for testing double unroll. *)\nlet double_reduce_global_ast ~s0 ~s1 ~sr1 ~sr2 =\n  let p0 = K.param ~idx:0 ~dtype:global_fptr in\n  let p1 = K.param ~idx:1 ~dtype:global_fptr in\n  let r0 = global_range ~axis:0 s0 in\n  let r1 = global_range ~axis:1 s1 in\n  let rr1 = reduce_range ~axis:2 sr1 in\n  let rr2 = reduce_range ~axis:3 sr2 in\n  let open K.O in\n  let in_idx =\n    K.index ~ptr:p1\n      ~idxs:\n        [ r0 * idx (Stdlib.( * ) s1 (Stdlib.( * ) sr1 sr2))\n          + r1 * idx (Stdlib.( * ) sr1 sr2)\n          + rr1 * idx sr2\n          + rr2\n        ]\n      ()\n  in\n  let ld = K.load ~src:in_idx () in\n  let red =\n    K.reduce ~op:`Add ~src:ld ~ranges:[ rr1; rr2 ] ~dtype:D.Val.float32\n  in\n  let out_idx = K.index ~ptr:p0 ~idxs:[ r0 * idx s1 + r1 ] () in\n  let st = K.store ~dst:out_idx ~value:red ~ranges:[] in\n  let e = K.end_ ~value:st ~ranges:[ r0; r1 ] () in\n  wrap_sink [ e ]\n\n(* Matmul: out[m,n] = sum_k(a[m,k] * b[k,n]) *)\nlet matmul_global_ast ~m ~n ~k =\n  let p_out = K.param ~idx:0 ~dtype:global_fptr in\n  let p_a = K.param ~idx:1 ~dtype:global_fptr in\n  let p_b = K.param ~idx:2 ~dtype:global_fptr in\n  let r_m = global_range ~axis:0 m in\n  let r_n = global_range ~axis:1 n in\n  let r_k = reduce_range ~axis:2 k in\n  let open K.O in\n  let idx_a = K.index ~ptr:p_a ~idxs:[ r_m * idx k + r_k ] () in\n  let idx_b = K.index ~ptr:p_b ~idxs:[ r_k * idx n + r_n ] () in\n  let ld_a = K.load ~src:idx_a () in\n  let ld_b = K.load ~src:idx_b () in\n  let mul = K.binary ~op:`Mul ~lhs:ld_a ~rhs:ld_b in\n  let red = K.reduce ~op:`Add ~src:mul ~ranges:[ r_k ] ~dtype:D.Val.float32 in\n  let out_idx = K.index ~ptr:p_out ~idxs:[ r_m * idx n + r_n ] () in\n  let st = K.store ~dst:out_idx ~value:red ~ranges:[] in\n  let e = K.end_ ~value:st ~ranges:[ r_m; r_n ] () in\n  wrap_sink 
[ e ]\n\n(* Matvec: out[i] = sum_j(x[j] * A[i,j])\n\n   NOTE: MUL operands are INDEX nodes directly (no LOAD). The heuristic's\n   matvec detection checks for INDEX as MUL operands. In the standard Tolk IR,\n   MUL operands are LOADs wrapping INDEXes. This fixture tests the matvec\n   decision logic assuming the expected pattern. *)\nlet matvec_global_ast ~rows ~cols =\n  let p_out = K.param ~idx:0 ~dtype:global_fptr in\n  let p_x = K.param ~idx:1 ~dtype:global_fptr in\n  let p_a = K.param ~idx:2 ~dtype:global_fptr in\n  let r_i = global_range ~axis:0 rows in\n  let r_j = reduce_range ~axis:1 cols in\n  let open K.O in\n  let idx_x = K.index ~ptr:p_x ~idxs:[ r_j ] () in\n  let idx_a = K.index ~ptr:p_a ~idxs:[ r_i * idx cols + r_j ] () in\n  let mul = K.binary ~op:`Mul ~lhs:idx_x ~rhs:idx_a in\n  let red = K.reduce ~op:`Add ~src:mul ~ranges:[ r_j ] ~dtype:D.Val.float32 in\n  let out_idx = K.index ~ptr:p_out ~idxs:[ r_i ] () in\n  let st = K.store ~dst:out_idx ~value:red ~ranges:[] in\n  let e = K.end_ ~value:st ~ranges:[ r_i ] () in\n  wrap_sink [ e ]\n\n(* Broadcast elementwise: out[i,j] = a[j] + b[i]\n   Buffer a indexed by j only (broadcast on i), b indexed by i only\n   (broadcast on j). Triggers heuristic upcast stride analysis. 
*)\nlet broadcast_ewise_global_ast ~s0 ~s1 =\n  let p_out = K.param ~idx:0 ~dtype:global_fptr in\n  let p_a = K.param ~idx:1 ~dtype:global_fptr in\n  let p_b = K.param ~idx:2 ~dtype:global_fptr in\n  let r0 = global_range ~axis:0 s0 in\n  let r1 = global_range ~axis:1 s1 in\n  let open K.O in\n  let idx_a = K.index ~ptr:p_a ~idxs:[ r1 ] () in\n  let ld_a = K.load ~src:idx_a () in\n  let idx_b = K.index ~ptr:p_b ~idxs:[ r0 ] () in\n  let ld_b = K.load ~src:idx_b () in\n  let value = K.binary ~op:`Add ~lhs:ld_a ~rhs:ld_b in\n  let out_idx = K.index ~ptr:p_out ~idxs:[ r0 * idx s1 + r1 ] () in\n  let st = K.store ~dst:out_idx ~value ~ranges:[] in\n  let e = K.end_ ~value:st ~ranges:[ r0; r1 ] () in\n  wrap_sink [ e ]\n\n(* Masked elementwise: out[i,j] = where(j < (s1-1), in[i,j], 0.0)\n   Range j appears in WHERE condition → triggers masked upcast (step 6). *)\nlet masked_ewise_global_ast ~s0 ~s1 =\n  let p0 = K.param ~idx:0 ~dtype:global_fptr in\n  let p1 = K.param ~idx:1 ~dtype:global_fptr in\n  let r0 = global_range ~axis:0 s0 in\n  let r1 = global_range ~axis:1 s1 in\n  let open K.O in\n  let in_idx = K.index ~ptr:p1 ~idxs:[ r0 * idx s1 + r1 ] () in\n  let ld = K.load ~src:in_idx () in\n  let cond = K.binary ~op:`Cmplt ~lhs:r1 ~rhs:(idx (s1 - 1)) in\n  let zero = K.const (C.float D.Val.float32 0.0) in\n  let value = K.ternary ~op:`Where ~a:cond ~b:ld ~c:zero in\n  let out_idx = K.index ~ptr:p0 ~idxs:[ r0 * idx s1 + r1 ] () in\n  let st = K.store ~dst:out_idx ~value ~ranges:[] in\n  let e = K.end_ ~value:st ~ranges:[ r0; r1 ] () in\n  wrap_sink [ e ]\n\n(* Partial broadcast: out[i,j] = a[i,j] + b[i]\n   Buffer b indexed by r0 only → axis 1 is expand for b.\n   Axis 0 is NOT expand (all bufs use r0). Tests local expand priority. 
*)\nlet partial_broadcast_global_ast ~s0 ~s1 =\n  let p_out = K.param ~idx:0 ~dtype:global_fptr in\n  let p_a = K.param ~idx:1 ~dtype:global_fptr in\n  let p_b = K.param ~idx:2 ~dtype:global_fptr in\n  let r0 = global_range ~axis:0 s0 in\n  let r1 = global_range ~axis:1 s1 in\n  let open K.O in\n  let idx_a = K.index ~ptr:p_a ~idxs:[ r0 * idx s1 + r1 ] () in\n  let ld_a = K.load ~src:idx_a () in\n  let idx_b = K.index ~ptr:p_b ~idxs:[ r0 ] () in\n  let ld_b = K.load ~src:idx_b () in\n  let value = K.binary ~op:`Add ~lhs:ld_a ~rhs:ld_b in\n  let out_idx = K.index ~ptr:p_out ~idxs:[ r0 * idx s1 + r1 ] () in\n  let st = K.store ~dst:out_idx ~value ~ranges:[] in\n  let e = K.end_ ~value:st ~ranges:[ r0; r1 ] () in\n  wrap_sink [ e ]\n\n(* Tests *)\n\n(* Group 1: Grouping *)\n\nlet grouping_tests =\n  group \"grouping\"\n    [\n      (* Small upcastable product triggers GROUPTOP.\n         prod(upcastable) = 4*4 = 16 ≤ 2048 → GROUPTOP applied. *)\n      test \"applies GROUPTOP when upcastable prod small\" (fun () ->\n          let ast = reduce_global_ast ~s0:4 ~s1:4 ~sr:128 in\n          let ren = gpu_renderer () in\n          let opts = run_heuristic ast ren in\n          is_true (has is_grouptop opts));\n      (* After GROUPTOP, group_for_reduces > 0 → early return.\n         No UPCAST, LOCAL, or UNROLL should appear. *)\n      test \"early return after grouping\" (fun () ->\n          let ast = reduce_global_ast ~s0:4 ~s1:4 ~sr:128 in\n          let ren = gpu_renderer () in\n          let opts = run_heuristic ast ren in\n          is_true (has is_grouptop opts);\n          is_true (not (has is_upcast opts));\n          is_true (not (has is_local opts));\n          is_true (not (has is_unroll opts)));\n      (* Large upcastable product skips grouping.\n         prod(upcastable) = 64*64 = 4096 > 2048 → no GROUPTOP. 
*)\n      test \"skips grouping when upcastable prod large\" (fun () ->\n          let ast = reduce_global_ast ~s0:64 ~s1:64 ~sr:128 in\n          let ren = gpu_renderer () in\n          let opts = run_heuristic ast ren in\n          is_true (not (has is_grouptop opts)));\n    ]\n\n(* Group 2: Reduce unroll *)\n\nlet reduce_unroll_tests =\n  group \"reduce unroll\"\n    [\n      (* Full unroll when reduce size ≤ 32.\n         reduce_size=16, upcastable_prod=4096>2048 → skips grouping.\n         Unroll amount=0 (full). *)\n      test \"full unrolls small reduce\" (fun () ->\n          let ast = reduce_global_ast ~s0:64 ~s1:64 ~sr:16 in\n          let ren = gpu_renderer () in\n          let opts = run_heuristic ast ren in\n          is_true (has is_unroll opts));\n      (* Split unroll by 4 when reduce > 32 and divisible by 4.\n         reduce_size=64 > 32. 64%4=0 → UNROLL amount=4. *)\n      test \"split unrolls large reduce by 4\" (fun () ->\n          let ast = reduce_global_ast ~s0:64 ~s1:64 ~sr:64 in\n          let ren = gpu_renderer () in\n          let opts = run_heuristic ast ren in\n          let unrolls = List.filter is_unroll opts in\n          is_true (unrolls <> []);\n          (match unrolls with\n          | K.Opt.Unroll { amount; _ } :: _ -> equal int 4 amount\n          | _ -> failwith \"expected Unroll\"));\n      (* Double unroll when both reduce dims ≤ 3.\n         Two reduce axes of size 3 each. Both get fully unrolled. 
*)\n      test \"double unrolls tiny reduces\" (fun () ->\n          let ast =\n            double_reduce_global_ast ~s0:64 ~s1:64 ~sr1:3 ~sr2:3\n          in\n          let ren = gpu_renderer () in\n          let opts = run_heuristic ast ren in\n          equal int 2 (count is_unroll opts));\n    ]\n\n(* Group 3: Default upcast *)\n\nlet default_upcast_tests =\n  group \"default upcast\"\n    [\n      (* When nothing is upcasted, applies 4x upcast on last upcastable dim.\n         Elementwise on CPU: no locals, no threads, no reduce →\n         only step 9 fires. *)\n      test \"applies 4x upcast when nothing upcasted\" (fun () ->\n          let ast = elementwise_loop_ast ~s0:128 ~s1:128 in\n          let ren = cpu_renderer () in\n          let opts = run_heuristic ast ren in\n          is_true (has is_upcast opts);\n          let upcasts = List.filter is_upcast opts in\n          (match upcasts with\n          | [ K.Opt.Upcast { amount; _ } ] -> equal int 4 amount\n          | _ -> failwith \"expected exactly one Upcast with amount=4\"));\n    ]\n\n(* Group 4: Heuristic upcast (broadcast) *)\n\nlet heuristic_upcast_tests =\n  group \"heuristic upcast\"\n    [\n      (* Broadcast triggers upcast; lower stride axis chosen first.\n         out[i,j] = a[j] + b[i]: axis=1 has stride sum 2 vs axis=0 sum 65.\n         On CPU to avoid local interference. 
*)\n      test \"broadcast upcast prefers lower stride axis\" (fun () ->\n          let ast = broadcast_ewise_global_ast ~s0:64 ~s1:64 in\n          let ren = cpu_renderer () in\n          let opts = run_heuristic ast ren in\n          let upcasts = List.filter is_upcast opts in\n          is_true (List.length upcasts >= 1);\n          (match upcasts with\n          | K.Opt.Upcast { axis = 1; _ } :: _ -> ()\n          | K.Opt.Upcast { axis; _ } :: _ ->\n              failwith (Printf.sprintf \"expected first upcast axis=1, got %d\" axis)\n          | _ -> failwith \"no upcast found\"));\n      (* Upcast size stays under 32. *)\n      test \"upcast size bounded by 32\" (fun () ->\n          let ast = broadcast_ewise_global_ast ~s0:64 ~s1:64 in\n          let ren = cpu_renderer () in\n          let result = run_heuristic_scheduler ast ren in\n          is_true (P.upcast_size result <= 32));\n    ]\n\n(* Group 5: Matvec detection *)\n\nlet matvec_tests =\n  group \"matvec\"\n    [\n      (* Matvec pattern detected: GROUP + LOCAL + UPCAST applied. *)\n      test \"detects matvec and applies GROUP LOCAL UPCAST\" (fun () ->\n          let ast = matvec_global_ast ~rows:128 ~cols:128 in\n          let ren = gpu_renderer () in\n          let opts = run_heuristic ast ren in\n          is_true (has is_group opts);\n          is_true (has is_local opts);\n          is_true (has is_upcast opts));\n      (* Matvec early return: no further opts after matvec. *)\n      test \"matvec early return prevents further opts\" (fun () ->\n          let ast = matvec_global_ast ~rows:128 ~cols:128 in\n          let ren = gpu_renderer () in\n          let opts = run_heuristic ast ren in\n          is_true (not (has is_grouptop opts));\n          is_true (not (has is_unroll opts));\n          is_true (not (has is_thread opts)));\n      (* Matvec skipped on CPU (no local/shared). 
*)\n      test \"matvec skipped on CPU\" (fun () ->\n          let ast = matvec_global_ast ~rows:128 ~cols:128 in\n          let ren = cpu_renderer () in\n          let opts = run_heuristic ast ren in\n          is_true (not (has is_group opts)));\n    ]\n\n(* Group 6: Masked upcast *)\n\nlet masked_upcast_tests =\n  group \"masked upcast\"\n    [\n      (* Small dim with WHERE guard triggers masked upcast (step 6).\n         s1=5 ≤ 7, WHERE condition references r1, prod=1*5=5 ≤ 49.\n         s0=1024 makes prod(upcastable)=5120 > 2048, skipping grouping. *)\n      test \"upcasts small WHERE-guarded dim\" (fun () ->\n          let ast = masked_ewise_global_ast ~s0:1024 ~s1:5 in\n          let ren = gpu_renderer () in\n          let opts = run_heuristic ast ren in\n          is_true (has is_upcast opts));\n    ]\n\n(* Group 7: Local groups *)\n\nlet local_groups_tests =\n  group \"local groups\"\n    [\n      (* GPU renderer applies LOCAL opts on elementwise. *)\n      test \"applies locals on GPU\" (fun () ->\n          let ast = elementwise_global_ast ~s0:128 ~s1:128 in\n          let ren = gpu_renderer () in\n          let opts = run_heuristic ast ren in\n          is_true (has is_local opts));\n      (* At most 3 LOCAL opts applied (take_n 3 in heuristic). 
*)\n      test \"at most 3 locals\" (fun () ->\n          let p0 = K.param ~idx:0 ~dtype:global_fptr in\n          let p1 = K.param ~idx:1 ~dtype:global_fptr in\n          let r0 = global_range ~axis:0 8 in\n          let r1 = global_range ~axis:1 8 in\n          let r2 = global_range ~axis:2 8 in\n          let r3 = global_range ~axis:3 8 in\n          let r4 = global_range ~axis:4 8 in\n          let open K.O in\n          let flat =\n            r0 * idx 4096\n            + r1 * idx 512\n            + r2 * idx 64\n            + r3 * idx 8\n            + r4\n          in\n          let in_idx = K.index ~ptr:p1 ~idxs:[ flat ] () in\n          let ld = K.load ~src:in_idx () in\n          let value = K.unary ~op:`Exp2 ~src:ld in\n          let out_idx = K.index ~ptr:p0 ~idxs:[ flat ] () in\n          let st = K.store ~dst:out_idx ~value ~ranges:[] in\n          let e = K.end_ ~value:st ~ranges:[ r0; r1; r2; r3; r4 ] () in\n          let ast = wrap_sink [ e ] in\n          let ren = gpu_renderer () in\n          let opts = run_heuristic ast ren in\n          is_true (count is_local opts <= 3));\n      (* Local budget: product of local sizes ≤ 128. *)\n      test \"local budget respected\" (fun () ->\n          let ast = elementwise_global_ast ~s0:128 ~s1:128 in\n          let ren = gpu_renderer () in\n          let result = run_heuristic_scheduler ast ren in\n          let local_ranges = P.ranges_of result [ Ak.Local ] in\n          let local_prod =\n            List.fold_left\n              (fun acc r ->\n                let sz = K.range_size r in\n                if K.is_const sz then acc * K.const_to_int sz else acc)\n              1 local_ranges\n          in\n          is_true (local_prod <= 128));\n      (* NOLOCALS=1 applies Nolocals opt instead of LOCAL. 
*)\n      test \"NOLOCALS env applies Nolocals\" (fun () ->\n          let ast = elementwise_global_ast ~s0:128 ~s1:128 in\n          let ren = gpu_renderer () in\n          let opts =\n            Helpers.Context_var.with_context\n              [ B (Heuristic.nolocals_var, 1) ]\n              (fun () -> run_heuristic ast ren)\n          in\n          is_true (has is_nolocals opts);\n          is_true (not (has is_local opts)));\n      (* NOLOCALS=1 changes grouping threshold from 2048 to 240.\n         prod(upcastable)=256: ≤2048 (groups) but >240 (no group). *)\n      test \"NOLOCALS adjusts grouping threshold\" (fun () ->\n          let ren = gpu_renderer () in\n          let ast1 = reduce_global_ast ~s0:8 ~s1:32 ~sr:128 in\n          let opts_default = run_heuristic ast1 ren in\n          is_true (has is_grouptop opts_default);\n          let ast2 = reduce_global_ast ~s0:8 ~s1:32 ~sr:128 in\n          let opts_nolocals =\n            Helpers.Context_var.with_context\n              [ B (Heuristic.nolocals_var, 1) ]\n              (fun () -> run_heuristic ast2 ren)\n          in\n          is_true (not (has is_grouptop opts_nolocals)));\n      (* Expand axes are prioritized for LOCAL.\n         out[i,j] = a[i,j] + b[i]: axis 1 is expand for buffer b.\n         Expand axis is ranked first, getting larger local_sz from the\n         budget. Without priority, axis 0 (non-expand) would take 32\n         first, leaving only 4 for axis 1. With priority, axis 1 gets\n         16, axis 0 gets 8. Application order is by axis index. 
*)\n      test \"expand axis gets larger LOCAL from budget\" (fun () ->\n          let ast = partial_broadcast_global_ast ~s0:128 ~s1:128 in\n          let ren = gpu_renderer () in\n          let opts = run_heuristic ast ren in\n          let locals =\n            List.filter_map\n              (function\n                | K.Opt.Local { axis; amount } -> Some (axis, amount)\n                | _ -> None)\n              opts\n          in\n          (* Both axes get LOCAL'd *)\n          is_true (List.length locals >= 2);\n          (* Expand axis 1 gets local_sz=16 (ranked first, more budget).\n             Without priority it would only get 4. *)\n          let axis1_amount =\n            List.find_map\n              (fun (a, amt) -> if a = 1 then Some amt else None)\n              locals\n          in\n          equal (option int) (Some 16) axis1_amount);\n      (* deleted_shape tracking: when local_sz equals axis size, the axis\n         is deleted and subsequent axis indices are adjusted.\n         s0=16, s1=128: after upcast by 4, shape=[16,32,4].\n         Axis 0 gets local_sz=16 (will_delete). Axis 1 shifts to 0. 
*)\n      test \"deleted_shape adjusts axis indices\" (fun () ->\n          let ast = elementwise_global_ast ~s0:16 ~s1:128 in\n          let ren = gpu_renderer () in\n          let result = run_heuristic_scheduler ast ren in\n          let opts = P.applied_opts result in\n          (* Two locals applied without crash *)\n          is_true (count is_local opts >= 2);\n          (* Local product respects 128 budget *)\n          let local_ranges = P.ranges_of result [ Ak.Local ] in\n          let local_prod =\n            List.fold_left\n              (fun acc r ->\n                let sz = K.range_size r in\n                if K.is_const sz then acc * K.const_to_int sz else acc)\n              1 local_ranges\n          in\n          is_true (local_prod <= 128));\n    ]\n\n(* Group 7: Threading *)\n\nlet threading_tests =\n  group \"threading\"\n    [\n      (* Large kernel: 4096×4096=16M. 16M/131072=128 ≥ 32 → THREAD. *)\n      test \"threading on large kernel\" (fun () ->\n          let ast = elementwise_loop_ast ~s0:4096 ~s1:4096 in\n          let ren = thread_renderer () in\n          let opts = run_heuristic ast ren in\n          is_true (has is_thread opts));\n      (* Small kernel: 4×4=16. 16/131072=0 < any thread count → no THREAD. *)\n      test \"threading skipped on small kernel\" (fun () ->\n          let ast = elementwise_loop_ast ~s0:4 ~s1:4 in\n          let ren = thread_renderer () in\n          let opts = run_heuristic ast ren in\n          is_true (not (has is_thread opts)));\n      (* First divisible loop axis is picked. Both axes div by 32; THREAD\n         should target axis 0. 
*)\n      test \"threading picks first divisible axis\" (fun () ->\n          let ast = elementwise_loop_ast ~s0:4096 ~s1:4096 in\n          let ren = thread_renderer () in\n          let opts = run_heuristic ast ren in\n          let axes = List.filter_map thread_axis opts in\n          is_true (axes <> []);\n          equal int 0 (List.hd axes));\n      (* First axis not divisible by any thread count (size 7); second axis\n         is divisible. THREAD should target the second loop axis. *)\n      test \"threading skips non-divisible axis\" (fun () ->\n          (* 7 × 4194304 = 29M, 29M/131072=224 ≥ 32. Axis 0 has size 7,\n             not divisible by any of [32,16,12,8,6,5,4,3,2].\n             After step 9 upcast by 4 on last dim: shape=[7, 1048576, 4].\n             Axis 1 (1048576) is divisible by 32. *)\n          let ast = elementwise_loop_ast ~s0:7 ~s1:4194304 in\n          let ren = thread_renderer () in\n          let opts = run_heuristic ast ren in\n          let axes = List.filter_map thread_axis opts in\n          is_true (axes <> []);\n          (* Axis 1 after upcast — exact value depends on shape after\n             upcast splitting, but it must NOT be axis 0 (size 7). *)\n          is_true (List.hd axes <> 0));\n    ]\n\n(* Group 8: Integration *)\n\nlet integration_tests =\n  group \"integration\"\n    [\n      (* Elementwise on GPU: default upcast + locals, no grouping. *)\n      test \"elementwise on GPU\" (fun () ->\n          let ast = elementwise_global_ast ~s0:128 ~s1:128 in\n          let ren = gpu_renderer () in\n          let opts = run_heuristic ast ren in\n          is_true (has is_upcast opts);\n          is_true (has is_local opts);\n          is_true (not (has is_grouptop opts)));\n      (* Reduce on GPU: small output → grouping → early return. 
*)\n      test \"reduce on GPU with grouping\" (fun () ->\n          let ast = reduce_global_ast ~s0:4 ~s1:4 ~sr:128 in\n          let ren = gpu_renderer () in\n          let opts = run_heuristic ast ren in\n          is_true (has is_grouptop opts);\n          is_true (not (has is_local opts)));\n      (* Reduce on GPU: large output → no grouping → unroll + locals. *)\n      test \"reduce on GPU without grouping\" (fun () ->\n          let ast = reduce_global_ast ~s0:64 ~s1:64 ~sr:16 in\n          let ren = gpu_renderer () in\n          let opts = run_heuristic ast ren in\n          is_true (not (has is_grouptop opts));\n          is_true (has is_unroll opts);\n          is_true (has is_local opts));\n      (* Matmul on GPU: heuristic upcast + reduce unroll + locals. *)\n      test \"matmul on GPU\" (fun () ->\n          let ast = matmul_global_ast ~m:128 ~n:128 ~k:128 in\n          let ren = gpu_renderer () in\n          let opts = run_heuristic ast ren in\n          is_true (has is_upcast opts);\n          is_true (has is_unroll opts);\n          is_true (has is_local opts);\n          is_true (not (has is_grouptop opts)));\n      (* Elementwise on CPU: default upcast only. *)\n      test \"elementwise on CPU\" (fun () ->\n          let ast = elementwise_loop_ast ~s0:128 ~s1:128 in\n          let ren = cpu_renderer () in\n          let opts = run_heuristic ast ren in\n          is_true (has is_upcast opts);\n          is_true (not (has is_local opts));\n          is_true (not (has is_thread opts)));\n      (* Large kernel on thread renderer: upcast + thread. 
*)\n      test \"large kernel on thread renderer\" (fun () ->\n          let ast = elementwise_loop_ast ~s0:4096 ~s1:4096 in\n          let ren = thread_renderer () in\n          let opts = run_heuristic ast ren in\n          is_true (has is_upcast opts);\n          is_true (has is_thread opts);\n          is_true (not (has is_local opts)));\n    ]\n\n(* Entry *)\n\nlet () =\n  run __FILE__\n    [\n      grouping_tests;\n      reduce_unroll_tests;\n      default_upcast_tests;\n      heuristic_upcast_tests;\n      matvec_tests;\n      masked_upcast_tests;\n      local_groups_tests;\n      threading_tests;\n      integration_tests;\n    ]\n"
  },
  {
    "path": "packages/tolk/test/unit/test_codegen_images.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Windtrap\nopen Tolk\nopen Tolk_ir\nmodule K = Kernel\n\n(* Constants *)\n\nlet float4 = Dtype.Val.vec 4 Dtype.Val.float32\nlet int2 = Dtype.Val.vec 2 Dtype.Val.int32\nlet cl = Cstyle.opencl\n\n(* Helpers *)\n\nlet global dt = Dtype.Ptr.create dt ~addrspace:Global ~size:(-1)\nlet img_ptr = global float4\nlet buf_ptr = global float4\n\nlet contains s sub =\n  let sl = String.length s and subl = String.length sub in\n  subl <= sl &&\n  let rec loop i =\n    i <= sl - subl && (String.sub s i subl = sub || loop (i + 1))\n  in\n  loop 0\n\nlet rewrite k = Images.rewrite cl k\n\nlet render r k =\n  Renderer.render r (Linearizer.linearize (Images.rewrite r k))\n\nlet find_unique msg pred root =\n  match K.find_nodes pred root with\n  | [n] -> n\n  | ns ->\n      failwith\n        (Printf.sprintf \"%s: expected 1, got %d\" msg (List.length ns))\n\nlet count pred root = List.length (K.find_nodes pred root)\n\nlet failure_contains needle fn =\n  raises_match\n    (function Failure msg -> contains msg needle | _ -> false)\n    fn\n\nlet assert_rendered msg s sub =\n  if not (contains s sub) then\n    failwith (Printf.sprintf \"%s: expected %S in output\" msg sub)\n\nlet assert_dtype msg expected actual =\n  if not (Dtype.Val.equal expected actual) then\n    failwith\n      (Printf.sprintf \"%s: expected %s, got %s\" msg\n         (Format.asprintf \"%a\" Dtype.Val.pp expected)\n         (Format.asprintf \"%a\" Dtype.Val.pp actual))\n\n(* Kernel builders *)\n\nlet float4_zero () =\n  let z = K.const_float 0.0 in\n  K.vectorize ~srcs:[ z; z; z; z ]\n\nlet mk_ungated_load () =\n  let img = K.param_image ~idx:0 ~dtype:img_ptr ~width:4 ~height:4 in\n  let buf = K.param ~idx:1 ~dtype:buf_ptr in\n  let c0 = 
K.const_int 0 and c1 = K.const_int 1 in\n  let src = K.index ~ptr:img ~idxs:[ c0; c1 ] () in\n  let dst = K.index ~ptr:buf ~idxs:[ c0 ] () in\n  K.sink [ K.store ~dst ~value:(K.load ~src ()) ~ranges:[] ]\n\nlet mk_gated_load () =\n  let img = K.param_image ~idx:0 ~dtype:img_ptr ~width:4 ~height:4 in\n  let buf = K.param ~idx:1 ~dtype:buf_ptr in\n  let c0 = K.const_int 0 and c1 = K.const_int 1 in\n  let gate = K.const_bool true in\n  let src = K.index ~ptr:img ~idxs:[ c0; c1 ] ~gate () in\n  let dst = K.index ~ptr:buf ~idxs:[ c0 ] () in\n  K.sink [ K.store ~dst ~value:(K.load ~src ~alt:(float4_zero ()) ()) ~ranges:[] ]\n\nlet mk_ungated_store () =\n  let img = K.param_image ~idx:0 ~dtype:img_ptr ~width:4 ~height:4 in\n  let buf = K.param ~idx:1 ~dtype:buf_ptr in\n  let c0 = K.const_int 0 and c1 = K.const_int 1 in\n  let src = K.index ~ptr:buf ~idxs:[ c0 ] () in\n  let dst = K.index ~ptr:img ~idxs:[ c0; c1 ] () in\n  K.sink [ K.store ~dst ~value:(K.load ~src ()) ~ranges:[] ]\n\nlet mk_gated_store () =\n  let img = K.param_image ~idx:0 ~dtype:img_ptr ~width:4 ~height:4 in\n  let buf = K.param ~idx:1 ~dtype:buf_ptr in\n  let c0 = K.const_int 0 and c1 = K.const_int 1 in\n  let gate = K.const_bool true in\n  let src = K.index ~ptr:buf ~idxs:[ c0 ] () in\n  let dst = K.index ~ptr:img ~idxs:[ c0; c1 ] ~gate () in\n  K.sink [ K.store ~dst ~value:(K.load ~src ()) ~ranges:[] ]\n\nlet mk_mixed () =\n  let f32_ptr = global Dtype.Val.float32 in\n  let img = K.param_image ~idx:0 ~dtype:img_ptr ~width:4 ~height:4 in\n  let p1 = K.param ~idx:1 ~dtype:f32_ptr in\n  let p2 = K.param ~idx:2 ~dtype:buf_ptr in\n  let c0 = K.const_int 0 and c1 = K.const_int 1 in\n  (* buffer load/store *)\n  let idx_buf = K.index ~ptr:p1 ~idxs:[ c0 ] () in\n  let st_buf =\n    K.store ~dst:idx_buf ~value:(K.load ~src:idx_buf ()) ~ranges:[]\n  in\n  (* image load → buffer store *)\n  let idx_img = K.index ~ptr:img ~idxs:[ c0; c1 ] () in\n  let idx_out = K.index ~ptr:p2 ~idxs:[ c0 ] () in\n  let st_out =\n 
   K.store ~dst:idx_out ~value:(K.load ~src:idx_img ()) ~ranges:[]\n  in\n  (* buffer load → image store *)\n  let idx_img2 = K.index ~ptr:img ~idxs:[ c1; c0 ] () in\n  let idx_in = K.index ~ptr:p2 ~idxs:[ c1 ] () in\n  let st_img =\n    K.store ~dst:idx_img2 ~value:(K.load ~src:idx_in ()) ~ranges:[]\n  in\n  K.sink [ st_buf; st_out; st_img ]\n\nlet mk_no_images () =\n  let f32_ptr = global Dtype.Val.float32 in\n  let p0 = K.param ~idx:0 ~dtype:f32_ptr in\n  let p1 = K.param ~idx:1 ~dtype:f32_ptr in\n  let c0 = K.const_int 0 in\n  let src = K.index ~ptr:p0 ~idxs:[ c0 ] () in\n  let dst = K.index ~ptr:p1 ~idxs:[ c0 ] () in\n  K.sink [ K.store ~dst ~value:(K.load ~src ()) ~ranges:[] ]\n\n(* Error-case builders *)\n\nlet mk_bad_dtype () =\n  let bad = global Dtype.Val.int32 in\n  let img = K.param_image ~idx:0 ~dtype:bad ~width:4 ~height:4 in\n  let buf = K.param ~idx:1 ~dtype:buf_ptr in\n  let c0 = K.const_int 0 and c1 = K.const_int 1 in\n  let src = K.index ~ptr:img ~idxs:[ c0; c1 ] () in\n  let dst = K.index ~ptr:buf ~idxs:[ c0 ] () in\n  K.sink [ K.store ~dst ~value:(K.load ~src ()) ~ranges:[] ]\n\nlet mk_wrong_idx_count n =\n  let img = K.param_image ~idx:0 ~dtype:img_ptr ~width:4 ~height:4 in\n  let buf = K.param ~idx:1 ~dtype:buf_ptr in\n  let idxs = List.init n (fun i -> K.const_int i) in\n  let src = K.index ~ptr:img ~idxs () in\n  let c0 = K.const_int 0 in\n  let dst = K.index ~ptr:buf ~idxs:[ c0 ] () in\n  K.sink [ K.store ~dst ~value:(K.load ~src ()) ~ranges:[] ]\n\nlet mk_wrong_load_dtype () =\n  let f32_ptr = global Dtype.Val.float32 in\n  let img = K.param_image ~idx:0 ~dtype:f32_ptr ~width:4 ~height:4 in\n  let buf = K.param ~idx:1 ~dtype:f32_ptr in\n  let c0 = K.const_int 0 and c1 = K.const_int 1 in\n  let src = K.index ~ptr:img ~idxs:[ c0; c1 ] () in\n  let dst = K.index ~ptr:buf ~idxs:[ c0 ] () in\n  K.sink [ K.store ~dst ~value:(K.load ~src ()) ~ranges:[] ]\n\nlet mk_wrong_store_dtype () =\n  let f32_ptr = global Dtype.Val.float32 in\n  let img = 
K.param_image ~idx:0 ~dtype:f32_ptr ~width:4 ~height:4 in\n  let buf = K.param ~idx:1 ~dtype:f32_ptr in\n  let c0 = K.const_int 0 and c1 = K.const_int 1 in\n  let src = K.index ~ptr:buf ~idxs:[ c0 ] () in\n  let dst = K.index ~ptr:img ~idxs:[ c0; c1 ] () in\n  K.sink [ K.store ~dst ~value:(K.load ~src ()) ~ranges:[] ]\n\nlet mk_gated_no_alt () =\n  let img = K.param_image ~idx:0 ~dtype:img_ptr ~width:4 ~height:4 in\n  let buf = K.param ~idx:1 ~dtype:buf_ptr in\n  let c0 = K.const_int 0 and c1 = K.const_int 1 in\n  let gate = K.const_bool true in\n  let src = K.index ~ptr:img ~idxs:[ c0; c1 ] ~gate () in\n  let dst = K.index ~ptr:buf ~idxs:[ c0 ] () in\n  K.sink [ K.store ~dst ~value:(K.load ~src ()) ~ranges:[] ]\n\nlet mk_alt_no_gate () =\n  let img = K.param_image ~idx:0 ~dtype:img_ptr ~width:4 ~height:4 in\n  let buf = K.param ~idx:1 ~dtype:buf_ptr in\n  let c0 = K.const_int 0 and c1 = K.const_int 1 in\n  let src = K.index ~ptr:img ~idxs:[ c0; c1 ] () in\n  let dst = K.index ~ptr:buf ~idxs:[ c0 ] () in\n  K.sink\n    [ K.store ~dst ~value:(K.load ~src ~alt:(float4_zero ()) ()) ~ranges:[] ]\n\n(* Runner *)\n\nlet () =\n  run \"Images\"\n    [\n      group \"Index rewriting\"\n        [\n          test \"ungated image index becomes int2\" (fun () ->\n            let root = rewrite (mk_ungated_load ()) in\n            let n =\n              find_unique \"int2\"\n                (fun n ->\n                  match K.view n with\n                  | Custom_inline { fmt; _ } -> contains fmt \"(int2)\"\n                  | _ -> false)\n                root\n            in\n            match K.view n with\n            | Custom_inline { fmt; args; dtype } ->\n                equal string \"(int2)({0}, {1})\" fmt;\n                equal int 2 (List.length args);\n                assert_dtype \"int2 index\" int2 dtype\n            | _ -> assert false);\n          test \"non-image index unchanged\" (fun () ->\n            equal int 3\n              (count\n                 
(fun n ->\n                   match K.view n with Index _ -> true | _ -> false)\n                 (rewrite (mk_mixed ()))));\n        ];\n      group \"Load rewriting\"\n        [\n          test \"ungated image load becomes read_imagef\" (fun () ->\n            let root = rewrite (mk_ungated_load ()) in\n            let n =\n              find_unique \"read_imagef\"\n                (fun n ->\n                  match K.view n with\n                  | Custom_inline { fmt; _ } -> contains fmt \"read_imagef\"\n                  | _ -> false)\n                root\n            in\n            match K.view n with\n            | Custom_inline { fmt; args; dtype } ->\n                equal string \"read_imagef({0}, smp, {1})\" fmt;\n                equal int 2 (List.length args);\n                assert_dtype \"read_imagef\" float4 dtype\n            | _ -> assert false);\n          test \"gated image load becomes conditional read_imagef\" (fun () ->\n            let root = rewrite (mk_gated_load ()) in\n            let n =\n              find_unique \"gated read_imagef\"\n                (fun n ->\n                  match K.view n with\n                  | Custom_inline { fmt; _ } -> contains fmt \"read_imagef\"\n                  | _ -> false)\n                root\n            in\n            match K.view n with\n            | Custom_inline { fmt; args; dtype } ->\n                equal string \"({2}?read_imagef({0}, smp, {1}):{3})\" fmt;\n                equal int 4 (List.length args);\n                assert_dtype \"gated read_imagef\" float4 dtype\n            | _ -> assert false);\n          test \"non-image load unchanged\" (fun () ->\n            equal int 2\n              (count\n                 (fun n ->\n                   match K.view n with Load _ -> true | _ -> false)\n                 (rewrite (mk_mixed ()))));\n        ];\n      group \"Store rewriting\"\n        [\n          test \"ungated image store becomes write_imagef\" (fun () ->\n            let 
root = rewrite (mk_ungated_store ()) in\n            let n =\n              find_unique \"write_imagef\"\n                (fun n ->\n                  match K.view n with\n                  | Custom { fmt; _ } -> contains fmt \"write_imagef\"\n                  | _ -> false)\n                root\n            in\n            match K.view n with\n            | Custom { fmt; args } ->\n                equal string \"write_imagef({0}, {1}, {2});\" fmt;\n                equal int 3 (List.length args)\n            | _ -> assert false);\n          test \"gated image store becomes conditional write_imagef\" (fun () ->\n            let root = rewrite (mk_gated_store ()) in\n            let n =\n              find_unique \"gated write_imagef\"\n                (fun n ->\n                  match K.view n with\n                  | Custom { fmt; _ } -> contains fmt \"write_imagef\"\n                  | _ -> false)\n                root\n            in\n            match K.view n with\n            | Custom { fmt; args } ->\n                equal string \"if ({3}) write_imagef({0}, {1}, {2});\" fmt;\n                equal int 4 (List.length args)\n            | _ -> assert false);\n          test \"non-image store unchanged\" (fun () ->\n            equal int 2\n              (count\n                 (fun n ->\n                   match K.view n with Store _ -> true | _ -> false)\n                 (rewrite (mk_mixed ()))));\n        ];\n      group \"Passthrough\"\n        [\n          test \"Param_image preserved\" (fun () ->\n            let root = rewrite (mk_ungated_load ()) in\n            let n =\n              find_unique \"Param_image\"\n                (fun n ->\n                  match K.view n with\n                  | Param_image _ -> true\n                  | _ -> false)\n                root\n            in\n            match K.view n with\n            | Param_image { idx; width; height; _ } ->\n                equal int 0 idx;\n                equal int 4 width;\n  
              equal int 4 height\n            | _ -> assert false);\n          test \"no-image kernel is identity\" (fun () ->\n            let k = mk_no_images () in\n            equal int\n              (List.length (K.toposort k))\n              (List.length (K.toposort (rewrite k))));\n        ];\n      group \"Device support\"\n        [\n          test \"CL accepts images\" (fun () ->\n            ignore (Images.rewrite Cstyle.opencl (mk_ungated_load ())));\n          test \"QCOM accepts images\" (fun () ->\n            ignore (Images.rewrite Cstyle.qcom (mk_ungated_load ())));\n          test \"Metal rejects images\" (fun () ->\n            failure_contains \"does not support\" (fun () ->\n              ignore\n                (Images.rewrite Cstyle.metal (mk_ungated_load ()))));\n          test \"CUDA rejects images\" (fun () ->\n            failure_contains \"does not support\" (fun () ->\n              ignore\n                (Images.rewrite\n                   (Cstyle.cuda Gpu_target.SM80)\n                   (mk_ungated_load ()))));\n          test \"no images on unsupported renderer passes\" (fun () ->\n            ignore (Images.rewrite Cstyle.metal (mk_no_images ())));\n        ];\n      group \"Validation\"\n        [\n          test \"rejects unsupported base dtype\" (fun () ->\n            failure_contains \"unsupported base dtype\" (fun () ->\n              ignore (rewrite (mk_bad_dtype ()))));\n          test \"rejects 1D image access\" (fun () ->\n            failure_contains \"exactly two coordinates\" (fun () ->\n              ignore (rewrite (mk_wrong_idx_count 1))));\n          test \"rejects 3D image access\" (fun () ->\n            failure_contains \"exactly two coordinates\" (fun () ->\n              ignore (rewrite (mk_wrong_idx_count 3))));\n          test \"rejects non-float4 load\" (fun () ->\n            failure_contains \"must produce float4\" (fun () ->\n              ignore (rewrite (mk_wrong_load_dtype ()))));\n          test 
\"rejects non-float4 store\" (fun () ->\n            failure_contains \"must write float4\" (fun () ->\n              ignore (rewrite (mk_wrong_store_dtype ()))));\n          test \"rejects gated load without alt\" (fun () ->\n            failure_contains \"requires alt value\" (fun () ->\n              ignore (rewrite (mk_gated_no_alt ()))));\n          test \"rejects alt without gate\" (fun () ->\n            failure_contains \"requires gated index\" (fun () ->\n              ignore (rewrite (mk_alt_no_gate ()))));\n        ];\n      group \"Rendered output\"\n        [\n          test \"gated load renders correctly\" (fun () ->\n            let out = render cl (mk_gated_load ()) in\n            assert_rendered \"gated read_imagef\" out \"?read_imagef(\";\n            assert_rendered \"gated alt colon\" out \":\");\n          test \"gated store renders correctly\" (fun () ->\n            let out = render cl (mk_gated_store ()) in\n            assert_rendered \"if-guarded write\" out \"if (\";\n            assert_rendered \"write_imagef\" out \"write_imagef(\");\n        ];\n    ]\n"
  },
  {
    "path": "packages/tolk/test/unit/test_codegen_linearizer.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Windtrap\nopen Tolk\nopen Tolk_ir\nmodule K = Kernel\nmodule P = Program\n\n(* Helpers *)\n\nlet dt = Dtype.Val.float32\nlet global_ptr dt = Dtype.Ptr.create dt ~addrspace:Global ~size:(-1)\nlet ptr = global_ptr dt\n\nlet i32 n = K.const (Const.int Dtype.Val.int32 n)\nlet f32 x = K.const (Const.float Dtype.Val.float32 x)\n\nlet loop_range ~axis size =\n  K.range ~size ~axis ~kind:Axis_kind.Loop ~dtype:Dtype.Val.int32 ()\n\nlet reduce_range ~axis size =\n  K.range ~size ~axis ~kind:Axis_kind.Reduce ~dtype:Dtype.Val.int32 ()\n\nlet global_range ~axis size =\n  K.range ~size ~axis ~kind:Axis_kind.Global ~dtype:Dtype.Val.int32 ()\n\nlet load_one_elem () =\n  let p0 = K.param ~idx:0 ~dtype:ptr in\n  K.load ~src:(K.index ~ptr:p0 ~idxs:[ i32 0 ] ()) ()\n\nlet contains haystack needle =\n  let hl = String.length haystack and nl = String.length needle in\n  if nl = 0 then true\n  else if nl > hl then false\n  else\n    let rec loop i =\n      if i > hl - nl then false\n      else if String.sub haystack i nl = needle then true\n      else loop (i + 1)\n    in\n    loop 0\n\nlet pp_view view = Format.asprintf \"%a\" P.pp_view view\nlet pp_program program = Format.asprintf \"%a\" P.pp program\n\nlet fail_view msg view =\n  failwith (Printf.sprintf \"%s: %s\" msg (pp_view view))\n\nlet find_positions (program : P.t) pred =\n  let acc = ref [] in\n  P.iteri (fun i view -> if pred view then acc := i :: !acc) program;\n  List.rev !acc\n\nlet find_unique_position label program pred =\n  match find_positions program pred with\n  | [ i ] -> i\n  | xs ->\n      failwith\n        (Printf.sprintf \"%s: expected one match, got %d\\n%s\" label\n           (List.length xs) (pp_program program))\n\nlet count program 
pred =\n  let n = ref 0 in\n  P.iteri (fun _ view -> if pred view then incr n) program;\n  !n\n\nlet linearize sink =\n  let sink = Linearizer.pm_split_ends sink in\n  let sink = Linearizer.pm_add_control_flow sink in\n  Linearizer.linearize sink\n\nlet count_ranges prog = count prog (function P.Range _ -> true | _ -> false)\n\nlet count_end_ranges prog =\n  count prog (function P.End_range _ -> true | _ -> false)\n\nlet find_ranges prog =\n  find_positions prog (function P.Range _ -> true | _ -> false)\n\nlet find_end_ranges prog =\n  find_positions prog (function P.End_range _ -> true | _ -> false)\n\nlet find_range ~axis prog =\n  find_unique_position \"range\" prog (function\n    | P.Range { axis = a; _ } -> a = axis\n    | _ -> false)\n\nlet find_range_by_kind ~kind prog =\n  find_unique_position \"range\" prog (function\n    | P.Range { kind = k; _ } -> k = kind\n    | _ -> false)\n\nlet find_load prog =\n  find_unique_position \"load\" prog (function P.Load _ -> true | _ -> false)\n\nlet find_store prog =\n  find_unique_position \"store\" prog (function P.Store _ -> true | _ -> false)\n\nlet raises_linearize substring fn =\n  raises_match (function Failure msg -> contains msg substring | _ -> false) fn\n\nlet test_unlowered_rejected name build_node =\n  raises_linearize (name ^ \" must be lowered before linearize\") (fun () ->\n      ignore (linearize (K.sink [ build_node () ])))\n\nlet () =\n  run \"Linearizer\"\n    [\n      group \"Late kernel to program\"\n        [\n          test \"multi-range End lowers to nested End_range pairs\" (fun () ->\n            let p0 = K.param ~idx:0 ~dtype:ptr in\n            let r0 = loop_range ~axis:0 (i32 2) in\n            let r1 = loop_range ~axis:1 (i32 3) in\n            let sum = K.binary ~op:`Add ~lhs:r0 ~rhs:r1 in\n            let idx = K.index ~ptr:p0 ~idxs:[ sum ] () in\n            let st = K.store ~dst:idx ~value:(f32 1.0) ~ranges:[] in\n            let e = K.end_ ~value:st ~ranges:[ r0; r1 ] () in\n          
  let program = linearize (K.sink [ e ]) in\n            P.validate program;\n            equal int 2 (count_ranges program);\n            equal int 2 (count_end_ranges program);\n            let outer = find_range ~axis:0 program in\n            let inner = find_range ~axis:1 program in\n            let inner_end =\n              find_unique_position \"inner end\" program (function\n                | P.End_range { range } ->\n                    (match P.view program range with\n                     | P.Range { axis = 1; _ } -> true\n                     | _ -> false)\n                | _ -> false)\n            in\n            let outer_end =\n              find_unique_position \"outer end\" program (function\n                | P.End_range { range } ->\n                    (match P.view program range with\n                     | P.Range { axis = 0; _ } -> true\n                     | _ -> false)\n                | _ -> false)\n            in\n            is_true (outer < inner);\n            is_true (inner < inner_end);\n            is_true (inner_end < outer_end));\n          test \"outer-range loads are scheduled before entering inner ranges\"\n            (fun () ->\n            let p0 = K.param ~idx:0 ~dtype:ptr in\n            let p1 = K.param ~idx:1 ~dtype:ptr in\n            let r0 = loop_range ~axis:0 (i32 2) in\n            let idx_in = K.index ~ptr:p0 ~idxs:[ r0 ] () in\n            let ld = K.load ~src:idx_in () in\n            let r1 = loop_range ~axis:1 (i32 3) in\n            let sum = K.binary ~op:`Add ~lhs:r0 ~rhs:r1 in\n            let idx_out = K.index ~ptr:p1 ~idxs:[ sum ] () in\n            let st = K.store ~dst:idx_out ~value:ld ~ranges:[] in\n            let e = K.end_ ~value:st ~ranges:[ r0; r1 ] () in\n            let program = linearize (K.sink [ e ]) in\n            P.validate program;\n            let load_pos = find_load program in\n            is_true (find_range ~axis:0 program < load_pos);\n            is_true (load_pos < find_range 
~axis:1 program));\n          test \"After nodes stay in Program ownership after linearize\"\n            (fun () ->\n            let p0 = K.param ~idx:0 ~dtype:ptr in\n            let idx = K.index ~ptr:p0 ~idxs:[ i32 0 ] () in\n            let ld = K.load ~src:idx () in\n            let af = K.after ~src:ld ~deps:[ f32 1.0 ] in\n            let program = linearize (K.sink [ af ]) in\n            P.validate program;\n            let after_pos =\n              find_unique_position \"after\" program (function\n                | P.After _ -> true\n                | _ -> false)\n            in\n            (match P.view program after_pos with\n             | P.After { src; deps = [ dep ]; dtype } ->\n                 is_true (Dtype.Val.equal dtype dt);\n                 (match (P.view program src, P.view program dep) with\n                  | P.Load _, P.Const { value; _ } ->\n                      (match Const.view value with\n                       | Float f -> is_true (f = 1.0)\n                       | _ -> failwith \"expected Float const\")\n                  | src_view, dep_view ->\n                      failwith\n                        (Printf.sprintf \"unexpected After operands:\\n%s\\n%s\"\n                           (pp_view src_view) (pp_view dep_view)))\n             | view -> fail_view \"expected After\" view));\n          test \"effect-only After nodes preserve store ordering\" (fun () ->\n            let p0 = K.param ~idx:0 ~dtype:ptr in\n            let idx0 = K.index ~ptr:p0 ~idxs:[ i32 0 ] () in\n            let st0 = K.store ~dst:idx0 ~value:(f32 1.0) ~ranges:[] in\n            let idx1 = K.index ~ptr:p0 ~idxs:[ i32 1 ] () in\n            let st1 = K.store ~dst:idx1 ~value:(f32 2.0) ~ranges:[] in\n            let af = K.after ~src:st0 ~deps:[ st1 ] in\n            let program = linearize (K.sink [ af ]) in\n            let after_pos =\n              find_unique_position \"effect after\" program (function\n                | P.After _ -> true\n       
         | _ -> false)\n            in\n            (match P.view program after_pos with\n             | P.After { src; deps = [ dep ]; dtype } ->\n                 is_true (Dtype.Val.equal dtype Dtype.Val.void);\n                 (match (P.view program src, P.view program dep) with\n                  | P.Store _, P.Store _ -> ()\n                  | src_view, dep_view ->\n                      failwith\n                        (Printf.sprintf\n                           \"unexpected void After operands:\\n%s\\n%s\"\n                           (pp_view src_view) (pp_view dep_view)))\n             | view -> fail_view \"expected effect-only After\" view));\n          test \"nested alt-index loads stay between the two ranges\" (fun () ->\n            let p0 = K.param ~idx:0 ~dtype:ptr in\n            let p1 = K.param ~idx:1 ~dtype:ptr in\n            let r0 = loop_range ~axis:0 (i32 2) in\n            let gate = K.binary ~op:`Cmplt ~lhs:r0 ~rhs:(i32 2) in\n            let idx_gated = K.index ~ptr:p0 ~idxs:[ r0 ] ~gate () in\n            let ld = K.load ~src:idx_gated ~alt:(f32 2.0) () in\n            let r1 = loop_range ~axis:1 (i32 3) in\n            let add = K.binary ~op:`Add ~lhs:ld ~rhs:(f32 1.0) in\n            let flat_idx =\n              K.binary ~op:`Add\n                ~lhs:(K.binary ~op:`Mul ~lhs:r0 ~rhs:(i32 3))\n                ~rhs:r1\n            in\n            let idx_out = K.index ~ptr:p1 ~idxs:[ flat_idx ] () in\n            let st = K.store ~dst:idx_out ~value:add ~ranges:[] in\n            let e = K.end_ ~value:st ~ranges:[ r0; r1 ] () in\n            let program = linearize (K.sink [ e ]) in\n            P.validate program;\n            let outer = find_range ~axis:0 program in\n            let inner = find_range ~axis:1 program in\n            let load_pos = find_load program in\n            is_true (outer < load_pos);\n            is_true (load_pos < inner);\n            is_true\n              (List.exists\n                 (fun pos ->\n      
             match P.view program pos with\n                   | P.Binary { op = `Cmplt; _ } | P.Index _ -> true\n                   | _ -> false)\n                 (List.init (inner - outer - 1) (fun i -> outer + i + 1))));\n          test \"gated stores become IF/STORE/ENDIF\"\n            (fun () ->\n            (* Gated stores are converted to IF/STORE/ENDIF in the linearizer. *)\n            let p0 = K.param ~idx:0 ~dtype:ptr in\n            let gate = K.const (Const.bool true) in\n            let idx = K.index ~ptr:p0 ~idxs:[ i32 0 ] ~gate () in\n            let st = K.store ~dst:idx ~value:(f32 1.0) ~ranges:[] in\n            let program = linearize (K.sink [ st ]) in\n            P.validate program;\n            equal int 1\n              (count program (function P.If _ -> true | _ -> false));\n            equal int 1\n              (count program (function P.Endif _ -> true | _ -> false)));\n          test \"equal-priority nodes use structural tie-breaks\" (fun () ->\n            let sub = K.binary ~op:`Sub ~lhs:(i32 2) ~rhs:(i32 1) in\n            let add = K.binary ~op:`Add ~lhs:(i32 1) ~rhs:(i32 2) in\n            let program = linearize (K.sink [ add; sub ]) in\n            P.validate program;\n            let add_pos =\n              find_unique_position \"add\" program (function\n                | P.Binary { op = `Add; _ } -> true\n                | _ -> false)\n            in\n            let sub_pos =\n              find_unique_position \"sub\" program (function\n                | P.Binary { op = `Sub; _ } -> true\n                | _ -> false)\n            in\n            is_true (add_pos < sub_pos));\n          test \"late bias loads are scheduled after reduce end\" (fun () ->\n            let out_ptr = global_ptr dt in\n            let in_ptr = global_ptr dt in\n            let bias_ptr = global_ptr dt in\n            let reg_ptr = Dtype.Ptr.create dt ~addrspace:Reg ~size:1 in\n            let p0 = K.param ~idx:0 ~dtype:out_ptr in\n            
let p1 = K.param ~idx:1 ~dtype:in_ptr in\n            let p2 = K.param ~idx:2 ~dtype:bias_ptr in\n            let dreg = K.define_reg ~size:1 ~dtype:reg_ptr ~slot:0 in\n            let reg_idx = K.index ~ptr:dreg ~idxs:[ i32 0 ] () in\n            let st_init = K.store ~dst:reg_idx ~value:(f32 0.0) ~ranges:[] in\n            let r0 = reduce_range ~axis:0 (i32 4) in\n            let idx_in = K.index ~ptr:p1 ~idxs:[ r0 ] () in\n            let st_acc =\n              K.store ~dst:reg_idx ~value:(K.load ~src:idx_in ()) ~ranges:[]\n            in\n            let e = K.end_ ~value:st_acc ~ranges:[ r0 ] () in\n            let acc_after = K.after ~src:dreg ~deps:[ e ] in\n            let acc_val =\n              K.load ~src:(K.index ~ptr:acc_after ~idxs:[ i32 0 ] ()) ()\n            in\n            let idx_bias = K.index ~ptr:p2 ~idxs:[ i32 0 ] () in\n            let ld_bias = K.load ~src:idx_bias () in\n            let add = K.binary ~op:`Add ~lhs:acc_val ~rhs:ld_bias in\n            let idx_out = K.index ~ptr:p0 ~idxs:[ i32 0 ] () in\n            let st_out = K.store ~dst:idx_out ~value:add ~ranges:[] in\n            let program = linearize (K.sink [ st_init; st_out ]) in\n            P.validate program;\n            let last_end =\n              find_end_ranges program |> List.rev |> List.hd\n            in\n            let bias_load =\n              find_positions program (function\n                | P.Load { src; _ } ->\n                    (match P.view program src with\n                     | P.Index { ptr; _ } ->\n                         (match P.view program ptr with\n                          | P.Param { idx = 2; _ } -> true\n                          | _ -> false)\n                     | _ -> false)\n                | _ -> false)\n              |> List.hd\n            in\n            is_true (last_end < bias_load));\n          test \"outer ops are placed before loop phis\" (fun () ->\n            let out_ptr = global_ptr dt in\n            let in_ptr = 
global_ptr dt in\n            let bias_ptr = global_ptr dt in\n            let reg_ptr = Dtype.Ptr.create dt ~addrspace:Reg ~size:1 in\n            let p0 = K.param ~idx:0 ~dtype:out_ptr in\n            let p1 = K.param ~idx:1 ~dtype:in_ptr in\n            let p2 = K.param ~idx:2 ~dtype:bias_ptr in\n            let dreg = K.define_reg ~size:1 ~dtype:reg_ptr ~slot:0 in\n            let reg_idx = K.index ~ptr:dreg ~idxs:[ i32 0 ] () in\n            let st_init = K.store ~dst:reg_idx ~value:(f32 0.0) ~ranges:[] in\n            let idx_bias = K.index ~ptr:p2 ~idxs:[ i32 0 ] () in\n            let ld_bias = K.load ~src:idx_bias () in\n            let r0 = reduce_range ~axis:0 (i32 4) in\n            let idx_in = K.index ~ptr:p1 ~idxs:[ r0 ] () in\n            let ld_in = K.load ~src:idx_in () in\n            let add_in = K.binary ~op:`Add ~lhs:ld_in ~rhs:ld_bias in\n            let st_reg = K.store ~dst:reg_idx ~value:add_in ~ranges:[] in\n            let e = K.end_ ~value:st_reg ~ranges:[ r0 ] () in\n            let acc_after = K.after ~src:dreg ~deps:[ e ] in\n            let acc_val =\n              K.load ~src:(K.index ~ptr:acc_after ~idxs:[ i32 0 ] ()) ()\n            in\n            let add_out = K.binary ~op:`Add ~lhs:acc_val ~rhs:ld_bias in\n            let idx_out = K.index ~ptr:p0 ~idxs:[ i32 0 ] () in\n            let st_out = K.store ~dst:idx_out ~value:add_out ~ranges:[] in\n            let program = linearize (K.sink [ st_init; st_out ]) in\n            P.validate program;\n            let range_pos =\n              find_unique_position \"range\" program (function\n                | P.Range _ -> true\n                | _ -> false)\n            in\n            let pre_range_loads =\n              List.filter\n                (fun pos ->\n                  match P.view program pos with\n                  | P.Load { src; _ } ->\n                      (match P.view program src with\n                       | P.Index { ptr; _ } ->\n                           
(match P.view program ptr with\n                            | P.Param { idx = 2; _ } -> true\n                            | _ -> false)\n                       | _ -> false)\n                  | _ -> false)\n                (find_positions program (function\n                  | P.Load _ -> true\n                  | _ -> false))\n            in\n            equal int 1 (List.length pre_range_loads);\n            is_true (List.hd pre_range_loads < range_pos));\n          test \"loop-carried reg stores stay inside the range\" (fun () ->\n            let input_ptr = global_ptr dt in\n            let reg_ptr = Dtype.Ptr.create dt ~addrspace:Reg ~size:4 in\n            let p0 = K.param ~idx:0 ~dtype:input_ptr in\n            let dreg = K.define_reg ~size:4 ~dtype:reg_ptr ~slot:0 in\n            let ri n = K.index ~ptr:dreg ~idxs:[ i32 n ] () in\n            let st_init n = K.store ~dst:(ri n) ~value:(f32 0.0) ~ranges:[] in\n            let r0 = loop_range ~axis:0 (i32 4) in\n            let idx_in = K.index ~ptr:p0 ~idxs:[ r0 ] () in\n            let ld = K.load ~src:idx_in () in\n            let st_loop n =\n              let add = K.binary ~op:`Add ~lhs:ld ~rhs:(f32 0.0) in\n              K.store ~dst:(ri n) ~value:add ~ranges:[]\n            in\n            let e = K.end_ ~value:ld ~ranges:[ r0 ] () in\n            let program =\n              linearize\n                (K.sink\n                   [\n                     st_init 0; st_init 1; st_init 2; st_init 3;\n                     st_loop 0; st_loop 1; st_loop 2; st_loop 3;\n                     e;\n                   ])\n            in\n            P.validate program;\n            let range_pos =\n              find_unique_position \"range\" program (function\n                | P.Range _ -> true\n                | _ -> false)\n            in\n            let end_pos =\n              find_unique_position \"end\" program (function\n                | P.End_range _ -> true\n                | _ -> false)\n            
in\n            P.iteri\n              (fun i view ->\n                match view with\n                | P.Store { dst; value } ->\n                    (match P.view program dst with\n                     | P.Index { ptr; _ } ->\n                         (match P.view program ptr with\n                          | P.Define_reg _ when i < range_pos ->\n                              (match P.view program value with\n                               | P.Const _ -> ()\n                               | other ->\n                                   fail_view \"expected reg init before range\"\n                                     other)\n                          | P.Define_reg _\n                            when i > range_pos && i < end_pos ->\n                              (match P.view program value with\n                               | P.Binary { op = `Add; _ } -> ()\n                               | other ->\n                                   fail_view\n                                     \"expected ALU-fed reg store inside range\"\n                                     other)\n                          | _ -> ())\n                     | _ -> ())\n                | _ -> ())\n              program);\n          test \"gated loads without alts are rejected\" (fun () ->\n            raises_linearize\n              \"gated loads require an alt value before linearize\"\n              (fun () ->\n                let p0 = K.param ~idx:0 ~dtype:ptr in\n                let gate = K.const (Const.bool true) in\n                let idx = K.index ~ptr:p0 ~idxs:[ i32 0 ] ~gate () in\n                let ld = K.load ~src:idx () in\n                ignore (linearize (K.sink [ ld ]))));\n          test \"load alts require gated indices\" (fun () ->\n            raises_linearize \"Load alt requires gated Index\" (fun () ->\n                let p0 = K.param ~idx:0 ~dtype:ptr in\n                let idx = K.index ~ptr:p0 ~idxs:[ i32 0 ] () in\n                let ld = K.load ~src:idx 
~alt:(f32 0.0) () in\n                ignore (linearize (K.sink [ ld ]))));\n          test \"unlowered Reduce nodes are rejected\" (fun () ->\n            raises_linearize \"Reduce must be lowered before linearize\"\n              (fun () ->\n                let p0 = K.param ~idx:0 ~dtype:ptr in\n                let r0 = reduce_range ~axis:0 (i32 4) in\n                let idx = K.index ~ptr:p0 ~idxs:[ r0 ] () in\n                let ld = K.load ~src:idx () in\n                let red =\n                  K.reduce ~op:`Add ~src:ld ~ranges:[ r0 ] ~dtype:dt\n                in\n                ignore (linearize (K.sink [ red ]))));\n        ];\n      group \"CFG context\"\n        [\n          test \"sibling ends under sink are ordered\" (fun () ->\n            let p0 = K.param ~idx:0 ~dtype:ptr in\n            let p1 = K.param ~idx:1 ~dtype:ptr in\n            let r0 = loop_range ~axis:0 (i32 4) in\n            let st0 =\n              K.store\n                ~dst:(K.index ~ptr:p0 ~idxs:[ r0 ] ())\n                ~value:(f32 1.0) ~ranges:[]\n            in\n            let e0 = K.end_ ~value:st0 ~ranges:[ r0 ] () in\n            let r1 = loop_range ~axis:1 (i32 4) in\n            let st1 =\n              K.store\n                ~dst:(K.index ~ptr:p1 ~idxs:[ r1 ] ())\n                ~value:(f32 1.0) ~ranges:[]\n            in\n            let e1 = K.end_ ~value:st1 ~ranges:[ r1 ] () in\n            let program = linearize (K.sink [ e0; e1 ]) in\n            P.validate program;\n            equal int 2 (count_ranges program);\n            equal int 2 (count_end_ranges program);\n            let ranges = find_ranges program in\n            let ends = find_end_ranges program in\n            is_true (List.hd ends < List.nth ranges 1));\n          test \"three-range end exercises cfg nesting\" (fun () ->\n            let p0 = K.param ~idx:0 ~dtype:ptr in\n            let r0 = loop_range ~axis:0 (i32 4) in\n            let r1 = loop_range ~axis:1 (i32 4) in\n          
  let r2 = loop_range ~axis:2 (i32 4) in\n            let sum = K.binary ~op:`Add ~lhs:r0 ~rhs:r1 in\n            let sum2 = K.binary ~op:`Add ~lhs:sum ~rhs:r2 in\n            let idx = K.index ~ptr:p0 ~idxs:[ sum2 ] () in\n            let st = K.store ~dst:idx ~value:(f32 1.0) ~ranges:[] in\n            let e = K.end_ ~value:st ~ranges:[ r0; r1; r2 ] () in\n            let program = linearize (K.sink [ e ]) in\n            P.validate program;\n            equal int 3 (count_ranges program);\n            equal int 3 (count_end_ranges program);\n            let ranges = find_ranges program in\n            let ends = find_end_ranges program in\n            is_true (List.nth ranges 0 < List.nth ranges 1);\n            is_true (List.nth ranges 1 < List.nth ranges 2);\n            is_true (List.nth ends 0 < List.nth ends 1);\n            is_true (List.nth ends 1 < List.nth ends 2);\n            is_true (List.nth ranges 2 < List.nth ends 0));\n          test \"two independent reduces are sequenced\" (fun () ->\n            let out_ptr = global_ptr dt in\n            let in_ptr_a = global_ptr dt in\n            let in_ptr_b = global_ptr dt in\n            let reg_ptr = Dtype.Ptr.create dt ~addrspace:Reg ~size:1 in\n            let p0 = K.param ~idx:0 ~dtype:out_ptr in\n            let p1 = K.param ~idx:1 ~dtype:in_ptr_a in\n            let p2 = K.param ~idx:2 ~dtype:in_ptr_b in\n            let make_reduce dreg param axis =\n              let ri = K.index ~ptr:dreg ~idxs:[ i32 0 ] () in\n              let st_init = K.store ~dst:ri ~value:(f32 0.0) ~ranges:[] in\n              let r = reduce_range ~axis (i32 4) in\n              let idx = K.index ~ptr:param ~idxs:[ r ] () in\n              let st_acc =\n                K.store ~dst:ri ~value:(K.load ~src:idx ()) ~ranges:[]\n              in\n              let e = K.end_ ~value:st_acc ~ranges:[ r ] () in\n 
             (st_init, e)\n            in\n            let dreg_a = K.define_reg ~size:1 ~dtype:reg_ptr ~slot:0 in\n            let dreg_b = K.define_reg ~size:1 ~dtype:reg_ptr ~slot:1 in\n            let st_init_a, e0 = make_reduce dreg_a p1 0 in\n            let st_init_b, e1 = make_reduce dreg_b p2 1 in\n            let af_a = K.after ~src:dreg_a ~deps:[ e0 ] in\n            let af_b = K.after ~src:dreg_b ~deps:[ e1 ] in\n            let ld_res_a =\n              K.load ~src:(K.index ~ptr:af_a ~idxs:[ i32 0 ] ()) ()\n            in\n            let ld_res_b =\n              K.load ~src:(K.index ~ptr:af_b ~idxs:[ i32 0 ] ()) ()\n            in\n            let sum = K.binary ~op:`Add ~lhs:ld_res_a ~rhs:ld_res_b in\n            let idx_out = K.index ~ptr:p0 ~idxs:[ i32 0 ] () in\n            let st_out = K.store ~dst:idx_out ~value:sum ~ranges:[] in\n            let program =\n              linearize (K.sink [ st_init_a; st_init_b; st_out ])\n            in\n            P.validate program;\n            equal int 2 (count_ranges program);\n            equal int 2 (count_end_ranges program);\n            let ranges = find_ranges program in\n            let ends = find_end_ranges program in\n            is_true (List.nth ends 0 < List.nth ranges 1));\n          test \"three sibling ends are chain-ordered\" (fun () ->\n            let make_branch idx axis =\n              let p = K.param ~idx ~dtype:(global_ptr dt) in\n              let r = loop_range ~axis (i32 4) in\n              let st =\n                K.store\n                  ~dst:(K.index ~ptr:p ~idxs:[ r ] ())\n                  ~value:(f32 1.0) ~ranges:[]\n              in\n              K.end_ ~value:st ~ranges:[ r ] ()\n            in\n            let e0 = make_branch 0 0 in\n            let e1 = make_branch 1 1 in\n            let e2 = make_branch 2 2 in\n            let program = linearize (K.sink [ e0; e1; e2 ]) in\n            P.validate program;\n            equal int 3 (count_ranges program);\n     
       equal int 3 (count_end_ranges program);\n            let ranges = find_ranges program in\n            let ends = find_end_ranges program in\n            is_true (List.nth ends 0 < List.nth ranges 1);\n            is_true (List.nth ends 1 < List.nth ranges 2));\n        ];\n      group \"Error paths\"\n        [\n          test \"unlowered Unroll is rejected\" (fun () ->\n            test_unlowered_rejected \"Unroll\" (fun () ->\n                K.unroll ~src:(load_one_elem ()) ~axes:[ (0, 4) ] ~dtype:dt));\n          test \"unlowered Contract is rejected\" (fun () ->\n            test_unlowered_rejected \"Contract\" (fun () ->\n                K.contract ~src:(load_one_elem ()) ~axes:[ (0, 4) ]\n                  ~dtype:dt));\n          test \"unlowered Bufferize is rejected\" (fun () ->\n            test_unlowered_rejected \"Bufferize\" (fun () ->\n                let buf_ptr = Dtype.Ptr.create dt ~addrspace:Global ~size:(-1) in\n                let opts : Kernel.bufferize_opts =\n                  { device = None; addrspace = Global; removable = false }\n                in\n                K.bufferize ~src:(load_one_elem ()) ~ranges:[] ~dtype:buf_ptr\n                  ~opts));\n          test \"unlowered Vcat is rejected\" (fun () ->\n            test_unlowered_rejected \"Vcat\" (fun () ->\n                let v = K.vectorize ~srcs:[ f32 1.0; f32 2.0 ] in\n                K.vcat ~srcs:[ v; v ]));\n          test \"unlowered Ptrcat is rejected\" (fun () ->\n            test_unlowered_rejected \"Ptrcat\" (fun () ->\n                let p0 = K.param ~idx:0 ~dtype:ptr in\n                let p1 = K.param ~idx:1 ~dtype:ptr in\n                K.ptrcat ~srcs:[ p0; p1 ] ~dtype:ptr));\n          test \"unlowered Invalid_index is rejected\" (fun () ->\n            test_unlowered_rejected \"Invalid_index\" (fun () ->\n                K.invalid_index ()));\n          test \"empty Group is rejected\" (fun () ->\n            raises_linearize \"empty Group\" (fun () 
->\n                ignore (linearize (K.sink [ K.group [] ]))));\n        ];\n      group \"Priority ordering\"\n        [\n          test \"params ordered by index\" (fun () ->\n            let p2 = K.param ~idx:2 ~dtype:ptr in\n            let p0 = K.param ~idx:0 ~dtype:ptr in\n            let p1 = K.param ~idx:1 ~dtype:ptr in\n            let ld _n p = K.load ~src:(K.index ~ptr:p ~idxs:[ i32 0 ] ()) () in\n            let sum =\n              K.binary ~op:`Add ~lhs:(ld 0 p0)\n                ~rhs:\n                  (K.binary ~op:`Add ~lhs:(ld 1 p1)\n                     ~rhs:(ld 2 p2))\n            in\n            let program = linearize (K.sink [ sum ]) in\n            P.validate program;\n            let find_param idx =\n              find_unique_position \"param\" program (function\n                | P.Param { idx = i; _ } -> i = idx\n                | _ -> false)\n            in\n            is_true (find_param 0 < find_param 1);\n            is_true (find_param 1 < find_param 2));\n          test \"define_var ordered by name\" (fun () ->\n            let vb =\n              K.define_var ~name:\"b\" ~lo:0 ~hi:10 ~dtype:Dtype.Val.int32 ()\n            in\n            let va =\n              K.define_var ~name:\"a\" ~lo:0 ~hi:10 ~dtype:Dtype.Val.int32 ()\n            in\n            let sum = K.binary ~op:`Add ~lhs:va ~rhs:vb in\n            let program = linearize (K.sink [ sum ]) in\n            P.validate program;\n            let find_var name =\n              find_unique_position \"var\" program (function\n                | P.Define_var { name = n; _ } -> n = name\n                | _ -> false)\n            in\n            is_true (find_var \"a\" < find_var \"b\"));\n          test \"define_local before define_reg\" (fun () ->\n            let local_ptr = Dtype.Ptr.create dt ~addrspace:Local ~size:256 in\n            let reg_ptr = Dtype.Ptr.create dt ~addrspace:Reg ~size:1 in\n            let dl = K.define_local ~size:256 ~dtype:local_ptr in\n           
 let dr = K.define_reg ~size:1 ~dtype:reg_ptr ~slot:0 in\n            let st ptr_node =\n              K.store\n                ~dst:(K.index ~ptr:ptr_node ~idxs:[ i32 0 ] ())\n                ~value:(f32 0.0) ~ranges:[]\n            in\n            let program = linearize (K.sink [ st dl; st dr ]) in\n            P.validate program;\n            let pos_local =\n              find_unique_position \"define_local\" program (function\n                | P.Define_local _ -> true\n                | _ -> false)\n            in\n            let pos_reg =\n              find_unique_position \"define_reg\" program (function\n                | P.Define_reg _ -> true\n                | _ -> false)\n            in\n            is_true (pos_local < pos_reg));\n          test \"nested range increases run_count\" (fun () ->\n            let p0 = K.param ~idx:0 ~dtype:ptr in\n            let r_outer = loop_range ~axis:0 (i32 4) in\n            let r_inner = loop_range ~axis:1 (i32 8) in\n            let sum = K.binary ~op:`Add ~lhs:r_outer ~rhs:r_inner in\n            let idx = K.index ~ptr:p0 ~idxs:[ sum ] () in\n            let st = K.store ~dst:idx ~value:(f32 1.0) ~ranges:[] in\n            let e = K.end_ ~value:st ~ranges:[ r_outer; r_inner ] () in\n            let program = linearize (K.sink [ e ]) in\n            P.validate program;\n            let outer = find_range ~axis:0 program in\n            let inner = find_range ~axis:1 program in\n            let store_pos = find_store program in\n            is_true (outer < inner);\n            is_true (inner < store_pos));\n        ];\n      group \"Split ends\"\n        [\n          test \"three ranges with mixed kinds are sorted\" (fun () ->\n            let p0 = K.param ~idx:0 ~dtype:ptr in\n            let r_global = global_range ~axis:0 (i32 4) in\n            let r_loop = loop_range ~axis:1 (i32 4) in\n            let r_reduce = reduce_range ~axis:2 (i32 4) in\n            let sum = K.binary ~op:`Add ~lhs:r_global 
~rhs:r_loop in\n            let sum2 = K.binary ~op:`Add ~lhs:sum ~rhs:r_reduce in\n            let idx = K.index ~ptr:p0 ~idxs:[ sum2 ] () in\n            let st = K.store ~dst:idx ~value:(f32 1.0) ~ranges:[] in\n            let e =\n              K.end_ ~value:st ~ranges:[ r_global; r_loop; r_reduce ] ()\n            in\n            let program = linearize (K.sink [ e ]) in\n            P.validate program;\n            equal int 3 (count_ranges program);\n            equal int 3 (count_end_ranges program);\n            let pos_reduce = find_range_by_kind ~kind:Axis_kind.Reduce program in\n            let pos_loop = find_range_by_kind ~kind:Axis_kind.Loop program in\n            let pos_global = find_range_by_kind ~kind:Axis_kind.Global program in\n            is_true (pos_global < pos_loop);\n            is_true (pos_loop < pos_reduce));\n          test \"end with zero ranges passes through\" (fun () ->\n            let p0 = K.param ~idx:0 ~dtype:ptr in\n            let idx = K.index ~ptr:p0 ~idxs:[ i32 0 ] () in\n            let st = K.store ~dst:idx ~value:(f32 1.0) ~ranges:[] in\n            let e = K.end_ ~value:st ~ranges:[] () in\n            let program = linearize (K.sink [ e ]) in\n            P.validate program;\n            equal int 0 (count_ranges program);\n            equal int 0 (count_end_ranges program);\n            equal int 1\n              (count program (function P.Store _ -> true | _ -> false)));\n        ];\n      group \"Emission\"\n        [\n          test \"barrier emission\" (fun () ->\n            let p0 = K.param ~idx:0 ~dtype:ptr in\n            let idx = K.index ~ptr:p0 ~idxs:[ i32 0 ] () in\n            let st = K.store ~dst:idx ~value:(f32 1.0) ~ranges:[] in\n            let program = linearize (K.sink [ st; K.barrier ]) in\n            P.validate program;\n            equal int 1\n              (count program (function P.Barrier -> true | _ -> false)));\n          test \"special emission\" (fun () ->\n            let p0 = 
K.param ~idx:0 ~dtype:ptr in\n            let sp =\n              K.special ~dim:(Special_dim.Global_idx 0) ~size:(i32 32) ()\n            in\n            let idx = K.index ~ptr:p0 ~idxs:[ sp ] () in\n            let st = K.store ~dst:idx ~value:(f32 1.0) ~ranges:[] in\n            let program = linearize (K.sink [ st ]) in\n            P.validate program;\n            ignore\n              (find_unique_position \"special\" program (function\n                | P.Special { dim = Special_dim.Global_idx 0; _ } -> true\n                | _ -> false)));\n          test \"cast and bitcast emission\" (fun () ->\n            let c1f = f32 1.0 in\n            let casted = K.cast ~src:c1f ~dtype:Dtype.Val.int32 in\n            let bitcasted = K.bitcast ~src:c1f ~dtype:Dtype.Val.int32 in\n            let sum = K.binary ~op:`Add ~lhs:casted ~rhs:bitcasted in\n            let program = linearize (K.sink [ sum ]) in\n            P.validate program;\n            equal int 1\n              (count program (function P.Cast _ -> true | _ -> false));\n            equal int 1\n              (count program (function P.Bitcast _ -> true | _ -> false)));\n          test \"vectorize emission\" (fun () ->\n            let v = K.vectorize ~srcs:[ f32 1.0; f32 2.0; f32 3.0; f32 4.0 ] in\n            let program = linearize (K.sink [ v ]) in\n            P.validate program;\n            ignore\n              (find_unique_position \"vectorize\" program (function\n                | P.Vectorize { srcs; _ } -> List.length srcs = 4\n                | _ -> false)));\n          test \"gep emission\" (fun () ->\n            let v = K.vectorize ~srcs:[ f32 1.0; f32 2.0 ] in\n            let add = K.binary ~op:`Add ~lhs:v ~rhs:v in\n            let program = linearize (K.sink [ K.gep ~src:add ~idx:1 ]) in\n            P.validate program;\n            ignore\n              (find_unique_position \"gep\" program (function\n                | P.Gep { idxs = [ 1 ]; _ } -> true\n                | _ -> false)));\n    
      test \"custom and custom_inline emission\" (fun () ->\n            let ci =\n              K.custom_inline ~fmt:\"get_val(%d)\" ~args:[ i32 0 ]\n                ~dtype:Dtype.Val.int32\n            in\n            let ce = K.custom ~fmt:\"barrier()\" ~args:[] in\n            let af = K.after ~src:ci ~deps:[ ce ] in\n            let program = linearize (K.sink [ af ]) in\n            P.validate program;\n            equal int 1\n              (count program (function P.Custom _ -> true | _ -> false));\n            equal int 1\n              (count program (function\n                | P.Custom_inline _ -> true\n                | _ -> false)));\n          test \"after on ptr maps directly\" (fun () ->\n            let reg_ptr = Dtype.Ptr.create dt ~addrspace:Reg ~size:1 in\n            let p0 = K.param ~idx:0 ~dtype:ptr in\n            let dreg = K.define_reg ~size:1 ~dtype:reg_ptr ~slot:0 in\n            let reg_idx = K.index ~ptr:dreg ~idxs:[ i32 0 ] () in\n            let st_init = K.store ~dst:reg_idx ~value:(f32 0.0) ~ranges:[] in\n            let af = K.after ~src:dreg ~deps:[ st_init ] in\n            let ld =\n              K.load ~src:(K.index ~ptr:af ~idxs:[ i32 0 ] ()) ()\n            in\n            let st_out =\n              K.store\n                ~dst:(K.index ~ptr:p0 ~idxs:[ i32 0 ] ())\n                ~value:ld ~ranges:[]\n            in\n            let program = linearize (K.sink [ st_out ]) in\n            P.validate program;\n            equal int 0\n              (count program (function P.After _ -> true | _ -> false)));\n          test \"group forwards first source\" (fun () ->\n            let p0 = K.param ~idx:0 ~dtype:ptr in\n            let st n =\n              K.store\n                ~dst:(K.index ~ptr:p0 ~idxs:[ i32 n ] ())\n                ~value:(f32 1.0) ~ranges:[]\n            in\n            let g = K.group [ st 0; st 1 ] in\n            let program = linearize (K.sink [ g ]) in\n            P.validate program;\n            
equal int 2\n              (count program (function P.Store _ -> true | _ -> false)));\n        ];\n    ]\n"
  },
  {
    "path": "packages/tolk/test/unit/test_codegen_postrange.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Unit tests for Postrange.\n\n   Tests the kernel optimization scheduler: shift_to, apply_opt for all\n   optimization variants, validation guards, and the apply_opts entry point. *)\n\nopen Windtrap\nopen Tolk\nopen Tolk_ir\nmodule K = Kernel\nmodule D = Dtype\nmodule C = Const\nmodule Ak = Axis_kind\nmodule P = Postrange\n\n(* Helpers *)\n\nlet idx n = K.const (C.int D.Val.index n)\nlet global_fptr = D.Ptr.create D.Val.float32 ~addrspace:Global ~size:(-1)\n\nlet kernel_info ?(opts_to_apply = None) ?(dont_use_locals = false) () =\n  {\n    K.name = \"test\";\n    axis_kinds = [];\n    dont_use_locals;\n    applied_opts = [];\n    opts_to_apply;\n    estimates = None;\n  }\n\nlet wrap_sink ?opts_to_apply ?dont_use_locals srcs =\n  K.sink ~kernel_info:(kernel_info ?opts_to_apply ?dont_use_locals ()) srcs\n\nlet loop_range ~axis size =\n  K.range ~size:(idx size) ~axis ~kind:Ak.Loop ~dtype:D.Val.index ()\n\nlet reduce_range ~axis size =\n  K.range ~size:(idx size) ~axis ~kind:Ak.Reduce ~dtype:D.Val.index ()\n\nlet global_range ~axis size =\n  K.range ~size:(idx size) ~axis ~kind:Ak.Global ~dtype:D.Val.index ()\n\n(* Renderers *)\n\nlet gpu_renderer () =\n  Renderer.make ~name:\"test\" ~device:\"TEST\" ~has_local:true ~has_shared:true\n    ~shared_max:32768 ~render:(fun ?name:_ _ -> \"\") ()\n\nlet cpu_renderer () =\n  Renderer.make ~name:\"cpu\" ~device:\"CPU\" ~has_local:false ~has_shared:false\n    ~shared_max:0 ~render:(fun ?name:_ _ -> \"\") ()\n\nlet thread_renderer () =\n  Renderer.make ~name:\"thread\" ~device:\"CPU\" ~has_local:false ~has_shared:false\n    ~shared_max:0 ~has_threads:true ~global_max:[ 8; 8; 8 ]\n    ~render:(fun ?name:_ _ -> \"\") ()\n\n(* Small shared memory renderer for 
testing budget *)\nlet small_smem_renderer () =\n  Renderer.make ~name:\"test\" ~device:\"TEST\" ~has_local:true ~has_shared:true\n    ~shared_max:64 ~render:(fun ?name:_ _ -> \"\") ()\n\n(* AST Fixture Builders *)\n\n(* Elementwise kernel: output[r0, r1] = exp2(input[r0, r1])\n   Two LOOP ranges, load → unary → store → end. *)\nlet elementwise_ast ~s0 ~s1 =\n  let p0 = K.param ~idx:0 ~dtype:global_fptr in\n  let p1 = K.param ~idx:1 ~dtype:global_fptr in\n  let r0 = loop_range ~axis:0 s0 in\n  let r1 = loop_range ~axis:1 s1 in\n  let open K.O in\n  let in_idx = K.index ~ptr:p1 ~idxs:[ r0 * idx s1 + r1 ] () in\n  let ld = K.load ~src:in_idx () in\n  let value = K.unary ~op:`Exp2 ~src:ld in\n  let out_idx = K.index ~ptr:p0 ~idxs:[ r0 * idx s1 + r1 ] () in\n  let st = K.store ~dst:out_idx ~value ~ranges:[] in\n  let e = K.end_ ~value:st ~ranges:[ r0; r1 ] () in\n  wrap_sink [ e ]\n\n(* Reduce kernel: output[r0, r1] = sum_rr(input[r0, rr, r1])\n   Two LOOP ranges + one REDUCE range. *)\nlet reduce_ast ~s0 ~s1 ~sr =\n  let p0 = K.param ~idx:0 ~dtype:global_fptr in\n  let p1 = K.param ~idx:1 ~dtype:global_fptr in\n  let r0 = loop_range ~axis:0 s0 in\n  let r1 = loop_range ~axis:1 s1 in\n  let rr = reduce_range ~axis:2 sr in\n  let open K.O in\n  let in_idx =\n    K.index ~ptr:p1 ~idxs:[ r0 * idx sr * idx s1 + rr * idx s1 + r1 ] ()\n  in\n  let ld = K.load ~src:in_idx () in\n  let red = K.reduce ~op:`Add ~src:ld ~ranges:[ rr ] ~dtype:D.Val.float32 in\n  let out_idx = K.index ~ptr:p0 ~idxs:[ r0 * idx s1 + r1 ] () in\n  let st = K.store ~dst:out_idx ~value:red ~ranges:[] in\n  let e = K.end_ ~value:st ~ranges:[ r0; r1 ] () in\n  wrap_sink [ e ]\n\n(* Reduce kernel with unsafe pad op (exp2 before reduce) *)\nlet reduce_unsafe_pad_ast ~s0 ~sr =\n  let p0 = K.param ~idx:0 ~dtype:global_fptr in\n  let p1 = K.param ~idx:1 ~dtype:global_fptr in\n  let r0 = loop_range ~axis:0 s0 in\n  let rr = reduce_range ~axis:1 sr in\n  let open K.O in\n  let in_idx = K.index ~ptr:p1 ~idxs:[ r0 
* idx sr + rr ] () in\n  let ld = K.load ~src:in_idx () in\n  let exp_ld = K.unary ~op:`Exp2 ~src:ld in\n  let red =\n    K.reduce ~op:`Add ~src:exp_ld ~ranges:[ rr ] ~dtype:D.Val.float32\n  in\n  let out_idx = K.index ~ptr:p0 ~idxs:[ r0 ] () in\n  let st = K.store ~dst:out_idx ~value:red ~ranges:[] in\n  let e = K.end_ ~value:st ~ranges:[ r0 ] () in\n  wrap_sink [ e ]\n\n(* Max reduce kernel: output[r0] = max_rr(input[r0, rr]) *)\nlet max_reduce_ast ~s0 ~sr =\n  let p0 = K.param ~idx:0 ~dtype:global_fptr in\n  let p1 = K.param ~idx:1 ~dtype:global_fptr in\n  let r0 = loop_range ~axis:0 s0 in\n  let rr = reduce_range ~axis:1 sr in\n  let open K.O in\n  let in_idx = K.index ~ptr:p1 ~idxs:[ r0 * idx sr + rr ] () in\n  let ld = K.load ~src:in_idx () in\n  let red = K.reduce ~op:`Max ~src:ld ~ranges:[ rr ] ~dtype:D.Val.float32 in\n  let out_idx = K.index ~ptr:p0 ~idxs:[ r0 ] () in\n  let st = K.store ~dst:out_idx ~value:red ~ranges:[] in\n  let e = K.end_ ~value:st ~ranges:[ r0 ] () in\n  wrap_sink [ e ]\n\n(* Elementwise kernel with pre-assigned Global ranges *)\nlet elementwise_global_ast ~s0 ~s1 =\n  let p0 = K.param ~idx:0 ~dtype:global_fptr in\n  let p1 = K.param ~idx:1 ~dtype:global_fptr in\n  let r0 = global_range ~axis:0 s0 in\n  let r1 = global_range ~axis:1 s1 in\n  let open K.O in\n  let in_idx = K.index ~ptr:p1 ~idxs:[ r0 * idx s1 + r1 ] () in\n  let ld = K.load ~src:in_idx () in\n  let value = K.unary ~op:`Exp2 ~src:ld in\n  let out_idx = K.index ~ptr:p0 ~idxs:[ r0 * idx s1 + r1 ] () in\n  let st = K.store ~dst:out_idx ~value ~ranges:[] in\n  let e = K.end_ ~value:st ~ranges:[ r0; r1 ] () in\n  wrap_sink [ e ]\n\n(* Reduce kernel with Global ranges *)\nlet reduce_global_ast ~s0 ~s1 ~sr =\n  let p0 = K.param ~idx:0 ~dtype:global_fptr in\n  let p1 = K.param ~idx:1 ~dtype:global_fptr in\n  let r0 = global_range ~axis:0 s0 in\n  let r1 = global_range ~axis:1 s1 in\n  let rr = reduce_range ~axis:2 sr in\n  let open K.O in\n  let in_idx =\n    K.index ~ptr:p1 
~idxs:[ r0 * idx sr * idx s1 + rr * idx s1 + r1 ] ()\n  in\n  let ld = K.load ~src:in_idx () in\n  let red = K.reduce ~op:`Add ~src:ld ~ranges:[ rr ] ~dtype:D.Val.float32 in\n  let out_idx = K.index ~ptr:p0 ~idxs:[ r0 * idx s1 + r1 ] () in\n  let st = K.store ~dst:out_idx ~value:red ~ranges:[] in\n  let e = K.end_ ~value:st ~ranges:[ r0; r1 ] () in\n  wrap_sink [ e ]\n\n(* Analysis Helpers *)\n\nlet raises_opt_error f =\n  raises_match (function P.Opt_error _ -> true | _ -> false) f\n\nlet range_size_int r = K.const_to_int (K.range_size r)\n\nlet count_kind kind rngs =\n  List.length (List.filter (fun r -> K.range_kind r = kind) rngs)\n\n(* Tests *)\n\n(* Group 1: Shift_to *)\n\nlet shift_to_tests =\n  group \"shift_to\"\n    [\n      test \"splits range evenly\" (fun () ->\n        let ast = elementwise_global_ast ~s0:16 ~s1:4 in\n        let ren = gpu_renderer () in\n        let t = P.create ast ren in\n        let initial_len = P.shape_len t in\n        let rngs = P.rngs t in\n        let rng = List.hd rngs in\n        let replaced, new_rng = P.shift_to t rng 4 Ak.Local in\n        equal int (initial_len + 1) (P.shape_len t);\n        equal int 4 (range_size_int replaced);\n        equal int 4 (range_size_int new_rng);\n        is_true (K.range_kind replaced = Ak.Global);\n        is_true (K.range_kind new_rng = Ak.Local));\n      (* GROUPTOP/THREAD depend on top=true reversing the expression order *)\n      test \"top=true reverses expression order\" (fun () ->\n        let ast = elementwise_global_ast ~s0:8 ~s1:4 in\n        let ren = gpu_renderer () in\n        let t = P.create ast ren in\n        let rngs = P.rngs t in\n        let rng = List.hd rngs in\n        (* top=false: replaced * amount + new *)\n        let t_bot = P.copy t in\n        let _replaced_b, _new_b = P.shift_to t_bot rng 4 Ak.Upcast in\n        let shape_bot = P.shape_len t_bot in\n        (* top=true: new * old_sz + replaced *)\n        let t_top = P.copy t in\n        let _replaced_t, 
_new_t =\n          P.shift_to ~top:true t_top rng 4 Ak.Upcast\n        in\n        let shape_top = P.shape_len t_top in\n        (* Both should add one range *)\n        equal int (shape_bot) (shape_top));\n      (* Divisibility guard: 10 % 3 ≠ 0 *)\n      test \"rejects non-divisible amount\" (fun () ->\n        let ast = elementwise_global_ast ~s0:10 ~s1:4 in\n        let ren = gpu_renderer () in\n        let t = P.create ast ren in\n        let rngs = P.rngs t in\n        let rng = List.hd rngs in\n        raises_opt_error (fun () -> ignore (P.shift_to t rng 3 Ak.Local)));\n      (* Full split: amount=size → replaced=1, new=full *)\n      test \"full amount creates size-1 replaced range\" (fun () ->\n        let ast = elementwise_global_ast ~s0:8 ~s1:4 in\n        let ren = gpu_renderer () in\n        let t = P.create ast ren in\n        let rngs = P.rngs t in\n        let rng = List.hd rngs in\n        let replaced, new_rng = P.shift_to t rng 8 Ak.Upcast in\n        equal int 1 (range_size_int replaced);\n        equal int 8 (range_size_int new_rng));\n      (* TC warp path: input_new_rng is used as-is *)\n      test \"input_new_rng is used as provided node\" (fun () ->\n        let ast = elementwise_global_ast ~s0:16 ~s1:4 in\n        let ren = gpu_renderer () in\n        let t = P.create ast ren in\n        let rngs = P.rngs t in\n        let rng = List.hd rngs in\n        let custom_rng =\n          K.range ~size:(idx 4) ~axis:99 ~kind:Ak.Warp ~dtype:D.Val.index ()\n        in\n        let _replaced, new_rng =\n          P.shift_to ~input_new_rng:custom_rng t rng 4 Ak.Warp\n        in\n        is_true (new_rng == custom_rng));\n    ]\n\n(* Group 2: Apply_opt validation guards *)\n\nlet validation_tests =\n  group \"validation guards\"\n    [\n      test \"UPCAST rejects amount > 16\" (fun () ->\n        let ast = elementwise_global_ast ~s0:32 ~s1:4 in\n        let ren = gpu_renderer () in\n        let t = P.create ast ren in\n        raises_opt_error (fun 
() ->\n          ignore (P.apply_opt t (K.Opt.Upcast { axis = 0; amount = 17 }))));\n      test \"UNROLL rejects amount > 32\" (fun () ->\n        let ast = reduce_global_ast ~s0:4 ~s1:4 ~sr:64 in\n        let ren = gpu_renderer () in\n        let t = P.create ast ren in\n        raises_opt_error (fun () ->\n          ignore (P.apply_opt t (K.Opt.Unroll { axis = 0; amount = 33 }))));\n      test \"UPCAST rejects reduce axis\" (fun () ->\n        let ast = reduce_global_ast ~s0:4 ~s1:4 ~sr:8 in\n        let ren = gpu_renderer () in\n        let t = P.create ast ren in\n        (* The reduce axis is the last one in sorted rngs (pos=4).\n           With 2 globals + 1 reduce, the reduce is at index 2. *)\n        raises_opt_error (fun () ->\n          ignore (P.apply_opt t (K.Opt.Upcast { axis = 2; amount = 2 }))));\n      (* No unrollable dims in elementwise kernel → IndexError equivalent *)\n      test \"UNROLL rejects non-reduce axis\" (fun () ->\n        let ast = elementwise_global_ast ~s0:8 ~s1:8 in\n        let ren = gpu_renderer () in\n        let t = P.create ast ren in\n        raises_opt_error (fun () ->\n          ignore (P.apply_opt t (K.Opt.Unroll { axis = 0; amount = 2 }))));\n      test \"LOCAL after NOLOCALS rejected\" (fun () ->\n        let ast = elementwise_global_ast ~s0:8 ~s1:8 in\n        let ren = gpu_renderer () in\n        let t = P.create ast ren in\n        ignore (P.apply_opt t K.Opt.Nolocals);\n        raises_opt_error (fun () ->\n          ignore (P.apply_opt t (K.Opt.Local { axis = 0; amount = 2 }))));\n      test \"LOCAL without renderer locals rejected\" (fun () ->\n        let ast = elementwise_ast ~s0:8 ~s1:8 in\n        let ren = cpu_renderer () in\n        let t = P.create ast ren in\n        raises_opt_error (fun () ->\n          ignore (P.apply_opt t (K.Opt.Local { axis = 0; amount = 2 }))));\n      test \"shared memory budget exceeded\" (fun () ->\n        (* small_smem_renderer has shared_max=64 bytes.\n           reduce f32 
with GROUP amt=32: smem = 32 * 1 * 4 = 128 > 64 *)\n        let ast = reduce_global_ast ~s0:4 ~s1:4 ~sr:128 in\n        let ren = small_smem_renderer () in\n        let t = P.create ast ren in\n        raises_opt_error (fun () ->\n          ignore (P.apply_opt t (K.Opt.Grouptop { axis = 0; amount = 32 }))));\n      test \"THREAD rejects double-thread\" (fun () ->\n        let ast = elementwise_ast ~s0:8 ~s1:8 in\n        let ren = thread_renderer () in\n        let t = P.create ast ren in\n        P.convert_loop_to_global t;\n        ignore (P.apply_opt t (K.Opt.Thread { axis = 0; amount = 2 }));\n        raises_opt_error (fun () ->\n          ignore (P.apply_opt t (K.Opt.Thread { axis = 0; amount = 2 }))));\n      test \"NOLOCALS rejects existing locals\" (fun () ->\n        let ast = elementwise_global_ast ~s0:8 ~s1:8 in\n        let ren = gpu_renderer () in\n        let t = P.create ast ren in\n        ignore (P.apply_opt t (K.Opt.Local { axis = 0; amount = 2 }));\n        raises_opt_error (fun () -> ignore (P.apply_opt t K.Opt.Nolocals)));\n      test \"LOCAL rejects non-global axis\" (fun () ->\n        let ast = reduce_global_ast ~s0:4 ~s1:4 ~sr:8 in\n        let ren = gpu_renderer () in\n        let t = P.create ast ren in\n        (* axis 2 is the reduce range *)\n        raises_opt_error (fun () ->\n          ignore (P.apply_opt t (K.Opt.Local { axis = 2; amount = 2 }))));\n    ]\n\n(* Group 3: Apply_opt shift-based optimizations *)\n\nlet shift_opt_tests =\n  group \"apply_opt shift-based\"\n    [\n      (* Port of test_local_and_grouped_reduce: LOCAL splits global into local\n         tile *)\n      test \"LOCAL splits global into local tile\" (fun () ->\n        let ast = elementwise_global_ast ~s0:16 ~s1:16 in\n        let ren = gpu_renderer () in\n        let t = P.create ast ren in\n        let initial_len = P.shape_len t in\n        ignore (P.apply_opt t (K.Opt.Local { axis = 0; amount = 4 }));\n        equal int (initial_len + 1) (P.shape_len t);\n 
       let ats = P.axis_types t in\n        is_true (List.exists (fun at -> at = Ak.Local) ats));\n      (* Port of test_upcasts: UPCAST on global range *)\n      test \"UPCAST on global range\" (fun () ->\n        let ast = elementwise_global_ast ~s0:16 ~s1:16 in\n        let ren = gpu_renderer () in\n        let t = P.create ast ren in\n        ignore (P.apply_opt t (K.Opt.Upcast { axis = 0; amount = 4 }));\n        let ats = P.axis_types t in\n        is_true (List.exists (fun at -> at = Ak.Upcast) ats);\n        equal int 4 (P.upcast_size t));\n      (* Port of test_full_upcast: UPCAST with amount=0 uses full range size *)\n      test \"UPCAST with amount=0 uses full range size\" (fun () ->\n        let ast = elementwise_global_ast ~s0:4 ~s1:4 in\n        let ren = gpu_renderer () in\n        let t = P.create ast ren in\n        ignore (P.apply_opt t (K.Opt.Upcast { axis = 0; amount = 0 }));\n        equal int 4 (P.upcast_size t);\n        equal int 1 (P.upcasted t));\n      (* Port of test_local_and_grouped_reduce: GROUPTOP on reduce *)\n      test \"GROUPTOP on reduce creates group_reduce range\" (fun () ->\n        let ast = reduce_global_ast ~s0:32 ~s1:32 ~sr:128 in\n        let ren = gpu_renderer () in\n        let t = P.create ast ren in\n        ignore (P.apply_opt t (K.Opt.Grouptop { axis = 0; amount = 32 }));\n        equal int 1 (P.group_for_reduces t);\n        let ats = P.axis_types t in\n        is_true (List.exists (fun at -> at = Ak.Group_reduce) ats));\n      (* Port of test_matmul: GROUPTOP + UNROLL *)\n      test \"UNROLL after GROUPTOP\" (fun () ->\n        let ast = reduce_global_ast ~s0:32 ~s1:32 ~sr:128 in\n        let ren = gpu_renderer () in\n        let t = P.create ast ren in\n        ignore (P.apply_opt t (K.Opt.Grouptop { axis = 0; amount = 32 }));\n        ignore (P.apply_opt t (K.Opt.Unroll { axis = 0; amount = 4 }));\n        equal int 1 (P.upcasted t);\n        let ats = P.axis_types t in\n        is_true (List.exists (fun at -> 
at = Ak.Unroll) ats);\n        is_true (List.exists (fun at -> at = Ak.Group_reduce) ats));\n      (* Port of test_matmul combo: LOCAL×2 + GROUPTOP + UNROLL + UPCAST×2 *)\n      test \"combined LOCAL + GROUPTOP + UNROLL + UPCAST\" (fun () ->\n        let ast = reduce_global_ast ~s0:128 ~s1:128 ~sr:128 in\n        let ren = gpu_renderer () in\n        let t = P.create ast ren in\n        ignore (P.apply_opt t (K.Opt.Local { axis = 0; amount = 4 }));\n        ignore (P.apply_opt t (K.Opt.Local { axis = 0; amount = 4 }));\n        ignore (P.apply_opt t (K.Opt.Grouptop { axis = 0; amount = 8 }));\n        ignore (P.apply_opt t (K.Opt.Unroll { axis = 0; amount = 4 }));\n        ignore (P.apply_opt t (K.Opt.Upcast { axis = 0; amount = 4 }));\n        ignore (P.apply_opt t (K.Opt.Upcast { axis = 1; amount = 2 }));\n        let ats = P.axis_types t in\n        is_true (List.exists (fun at -> at = Ak.Local) ats);\n        is_true (List.exists (fun at -> at = Ak.Upcast) ats);\n        is_true (List.exists (fun at -> at = Ak.Unroll) ats);\n        is_true (List.exists (fun at -> at = Ak.Group_reduce) ats));\n      (* Port of test_thread_opts: THREAD on threadable renderer *)\n      test \"THREAD on threadable renderer\" (fun () ->\n        let ast = elementwise_ast ~s0:8 ~s1:8 in\n        let ren = thread_renderer () in\n        let t = P.create ast ren in\n        P.convert_loop_to_global t;\n        ignore (P.apply_opt t (K.Opt.Thread { axis = 0; amount = 2 }));\n        let ats = P.axis_types t in\n        is_true (List.exists (fun at -> at = Ak.Thread) ats));\n      (* Port of test_double_reduce: Multiple GROUPTOPs on double reduce.\n         We use a single reduce with two reduce ranges. 
*)\n      test \"double GROUPTOP on reduce\" (fun () ->\n        let ast = reduce_global_ast ~s0:8 ~s1:8 ~sr:128 in\n        let ren = gpu_renderer () in\n        let t = P.create ast ren in\n        ignore (P.apply_opt t (K.Opt.Grouptop { axis = 0; amount = 4 }));\n        equal int 1 (P.group_for_reduces t));\n    ]\n\n(* Group 4: PADTO *)\n\nlet padto_tests =\n  group \"apply_opt PADTO\"\n    [\n      (* Port of test_padto_matmul: PADTO pads 17 → 32 *)\n      test \"PADTO pads axis to next multiple\" (fun () ->\n        let ast = elementwise_global_ast ~s0:17 ~s1:4 in\n        let ren = gpu_renderer () in\n        let t = P.create ast ren in\n        ignore (P.apply_opt t (K.Opt.Padto { axis = 0; amount = 32 }));\n        (* After padding, the range should have size 32 *)\n        let rngs = P.rngs t in\n        let sizes =\n          List.map (fun r -> range_size_int r) rngs\n        in\n        is_true (List.mem 32 sizes));\n      (* Port of test_padto_upcasted_not_ok: PADTO rejects upcast axis *)\n      test \"PADTO rejects upcast axis\" (fun () ->\n        let ast = elementwise_global_ast ~s0:4 ~s1:4 in\n        let ren = gpu_renderer () in\n        let t = P.create ast ren in\n        ignore (P.apply_opt t (K.Opt.Upcast { axis = 0; amount = 0 }));\n        (* axis 0 is now Global size-1, the upcast is at the end.\n           Find the upcast axis index *)\n        raises_opt_error (fun () ->\n          (* After full upcast of axis 0, the original is size-1 (filtered\n             from rngs). The upcast range is now in rngs at some index.\n             The upcast kind makes it non-paddable. 
*)\n          let ats = P.axis_types t in\n          let upcast_idx =\n            let rec find i = function\n              | [] -> failwith \"no upcast\"\n              | at :: _ when at = Ak.Upcast -> i\n              | _ :: rest -> find (i + 1) rest\n            in\n            find 0 ats\n          in\n          ignore\n            (P.apply_opt t (K.Opt.Padto { axis = upcast_idx; amount = 8 }))));\n      (* Port of test_padto_sum_not_ok: exp2 in backward slice → error *)\n      test \"PADTO rejects unsafe pad ops in reduce backward slice\" (fun () ->\n        let ast = reduce_unsafe_pad_ast ~s0:17 ~sr:32 in\n        let ren = gpu_renderer () in\n        let t = P.create ast ren in\n        P.convert_loop_to_global t;\n        (* The reduce axis (axis index 1) has unsafe pad ops in its\n           backward slice *)\n        raises_opt_error (fun () ->\n          ignore (P.apply_opt t (K.Opt.Padto { axis = 1; amount = 64 }))));\n      (* Port of test_padto_max: max reduce can't be padded on reduce axis *)\n      test \"PADTO rejects max reduce on reduce axis\" (fun () ->\n        let ast = max_reduce_ast ~s0:17 ~sr:32 in\n        let ren = gpu_renderer () in\n        let t = P.create ast ren in\n        P.convert_loop_to_global t;\n        raises_opt_error (fun () ->\n          ignore (P.apply_opt t (K.Opt.Padto { axis = 1; amount = 64 }))));\n    ]\n\n(* Group 5: SWAP and NOLOCALS *)\n\nlet swap_nolocals_tests =\n  group \"apply_opt SWAP and NOLOCALS\"\n    [\n      (* SWAP exchanges two global axes: sizes swap positions.\n         Before: axis 0 → size 8, axis 1 → size 16\n         After:  axis 0 → size 16, axis 1 → size 8  (axis numbers swapped) *)\n      test \"SWAP exchanges two global axes\" (fun () ->\n        let ast = elementwise_global_ast ~s0:8 ~s1:16 in\n        let ren = gpu_renderer () in\n        let t = P.create ast ren in\n        let rngs_before = P.rngs t in\n        let sz0_before = range_size_int (List.nth rngs_before 0) in\n        let 
sz1_before = range_size_int (List.nth rngs_before 1) in\n        ignore (P.apply_opt t (K.Opt.Swap { axis = 0; with_axis = 1 }));\n        let rngs_after = P.rngs t in\n        let sz0_after = range_size_int (List.nth rngs_after 0) in\n        let sz1_after = range_size_int (List.nth rngs_after 1) in\n        (* After swap, the sizes at each sorted position are exchanged *)\n        equal int sz1_before sz0_after;\n        equal int sz0_before sz1_after);\n      (* SWAP rejects non-global axes *)\n      test \"SWAP rejects non-global axes\" (fun () ->\n        let ast = elementwise_global_ast ~s0:8 ~s1:8 in\n        let ren = gpu_renderer () in\n        let t = P.create ast ren in\n        ignore (P.apply_opt t (K.Opt.Local { axis = 0; amount = 2 }));\n        (* Now axis 0 is Global(4), axis 1 is Global(8), axis 2 is Local(2).\n           Swapping axis 0 with axis 2 (Local) should fail. *)\n        raises_opt_error (fun () ->\n          ignore (P.apply_opt t (K.Opt.Swap { axis = 0; with_axis = 2 }))));\n      (* NOLOCALS sets dont_use_locals *)\n      test \"NOLOCALS disables locals\" (fun () ->\n        let ast = elementwise_global_ast ~s0:8 ~s1:8 in\n        let ren = gpu_renderer () in\n        let t = P.create ast ren in\n        ignore (P.apply_opt t K.Opt.Nolocals);\n        (* Subsequent LOCAL should fail *)\n        raises_opt_error (fun () ->\n          ignore (P.apply_opt t (K.Opt.Local { axis = 0; amount = 2 }))));\n    ]\n\n(* Group 6: State queries *)\n\nlet state_query_tests =\n  group \"state queries\"\n    [\n      (* rngs sorts by axis_to_pos: Loop(-1) < Global(0) < Reduce(4) *)\n      test \"rngs sorted by axis_to_pos then axis\" (fun () ->\n        let ast = reduce_ast ~s0:4 ~s1:4 ~sr:8 in\n        let ren = gpu_renderer () in\n        let t = P.create ast ren in\n        let ats = P.axis_types t in\n        (* Two Loop ranges first (pos=-1), then Reduce (pos=4) *)\n        equal int 3 (List.length ats);\n        is_true (List.nth ats 0 = 
Ak.Loop);\n        is_true (List.nth ats 1 = Ak.Loop);\n        is_true (List.nth ats 2 = Ak.Reduce));\n      (* rngs filters out size-1 ranges *)\n      test \"rngs filters out size-1 ranges\" (fun () ->\n        let ast = elementwise_global_ast ~s0:8 ~s1:4 in\n        let ren = gpu_renderer () in\n        let t = P.create ast ren in\n        (* Full upcast of axis 0: replaced range becomes size 1 *)\n        ignore (P.apply_opt t (K.Opt.Upcast { axis = 0; amount = 0 }));\n        (* Size-1 replaced range should be filtered from rngs *)\n        let rngs = P.rngs t in\n        List.iter\n          (fun r -> is_true (range_size_int r > 1))\n          rngs);\n      (* shape_str produces correct labels *)\n      test \"shape_str produces correct labels\" (fun () ->\n        let ast = reduce_global_ast ~s0:4 ~s1:4 ~sr:8 in\n        let ren = gpu_renderer () in\n        let t = P.create ast ren in\n        let ss = P.shape_str t in\n        equal int 3 (List.length ss);\n        equal string \"g0\" (List.nth ss 0);\n        equal string \"g1\" (List.nth ss 1);\n        equal string \"R0\" (List.nth ss 2));\n      (* shape_str_to_axis resolves labels *)\n      test \"shape_str_to_axis resolves labels\" (fun () ->\n        let ast = reduce_global_ast ~s0:4 ~s1:4 ~sr:8 in\n        let ren = gpu_renderer () in\n        let t = P.create ast ren in\n        let axes = P.shape_str_to_axis t [ \"g1\"; \"R0\" ] in\n        equal int 1 (List.nth axes 0);\n        equal int 2 (List.nth axes 1));\n      (* copy preserves state *)\n      test \"copy preserves mutable state\" (fun () ->\n        let ast = elementwise_global_ast ~s0:8 ~s1:8 in\n        let ren = gpu_renderer () in\n        let t = P.create ast ren in\n        ignore (P.apply_opt t K.Opt.Nolocals);\n        let t2 = P.copy t in\n        raises_opt_error (fun () ->\n          ignore (P.apply_opt t2 (K.Opt.Local { axis = 0; amount = 2 }))));\n      (* Helper queries *)\n      test \"upcastable_dims and unrollable_dims\" 
(fun () ->\n        let ast = reduce_global_ast ~s0:4 ~s1:4 ~sr:8 in\n        let ren = gpu_renderer () in\n        let t = P.create ast ren in\n        let up = P.upcastable_dims t in\n        let un = P.unrollable_dims t in\n        (* 2 global axes with size > 1 → 2 upcastable dims *)\n        equal int 2 (List.length up);\n        (* 1 reduce axis with size > 1 → 1 unrollable dim *)\n        equal int 1 (List.length un));\n      (* output_shape replaces reduce/unroll/group_reduce with 1 *)\n      test \"output_shape replaces non-output axes with 1\" (fun () ->\n        let ast = reduce_global_ast ~s0:4 ~s1:4 ~sr:8 in\n        let ren = gpu_renderer () in\n        let t = P.create ast ren in\n        let os = P.output_shape t in\n        equal int 3 (List.length os);\n        equal int 4 (K.const_to_int (List.nth os 0));\n        equal int 4 (K.const_to_int (List.nth os 1));\n        equal int 1 (K.const_to_int (List.nth os 2)));\n    ]\n\n(* Group 7: Integration *)\n\nlet integration_tests =\n  group \"integration\"\n    [\n      (* get_optimized_ast produces valid kernel_info *)\n      test \"get_optimized_ast produces valid kernel_info\" (fun () ->\n        let ast = reduce_global_ast ~s0:32 ~s1:32 ~sr:128 in\n        let ren = gpu_renderer () in\n        let t = P.create ast ren in\n        ignore\n          (P.apply_opt t (K.Opt.Upcast { axis = 0; amount = 4 }));\n        let result = P.get_optimized_ast t in\n        match K.view result with\n        | Sink { kernel_info = Some ki; _ } ->\n            equal int 1 (List.length ki.applied_opts);\n            is_true (K.tag result <> None)\n        | _ -> failwith \"expected Sink with kernel_info\");\n      (* Name generation: \"r_\" for reduce, \"E_\" for elementwise *)\n      test \"get_optimized_ast name generation\" (fun () ->\n        let ast_r = reduce_global_ast ~s0:4 ~s1:4 ~sr:8 in\n        let ren = gpu_renderer () in\n        let t_r = P.create ast_r ren in\n        let result_r = 
P.get_optimized_ast t_r in\n        (match K.view result_r with\n        | Sink { kernel_info = Some ki; _ } ->\n            is_true (String.length ki.name > 0);\n            is_true (ki.name.[0] = 'r')\n        | _ -> failwith \"expected Sink\");\n        let ast_e = elementwise_global_ast ~s0:4 ~s1:4 in\n        let t_e = P.create ast_e ren in\n        let result_e = P.get_optimized_ast t_e in\n        match K.view result_e with\n        | Sink { kernel_info = Some ki; _ } ->\n            is_true (String.length ki.name > 0);\n            is_true (ki.name.[0] = 'E')\n        | _ -> failwith \"expected Sink\");\n      (* apply_opts respects opts_to_apply *)\n      test \"apply_opts respects opts_to_apply\" (fun () ->\n        let p0 = K.param ~idx:0 ~dtype:global_fptr in\n        let p1 = K.param ~idx:1 ~dtype:global_fptr in\n        let r0 = global_range ~axis:0 16 in\n        let r1 = global_range ~axis:1 16 in\n        let open K.O in\n        let in_idx = K.index ~ptr:p1 ~idxs:[ r0 * idx 16 + r1 ] () in\n        let ld = K.load ~src:in_idx () in\n        let value = K.unary ~op:`Exp2 ~src:ld in\n        let out_idx = K.index ~ptr:p0 ~idxs:[ r0 * idx 16 + r1 ] () in\n        let st = K.store ~dst:out_idx ~value ~ranges:[] in\n        let e = K.end_ ~value:st ~ranges:[ r0; r1 ] () in\n        let opts = [ K.Opt.Upcast { axis = 0; amount = 4 } ] in\n        let ast =\n          K.sink\n            ~kernel_info:(kernel_info ~opts_to_apply:(Some opts) ())\n            [ e ]\n        in\n        let ren = gpu_renderer () in\n        (* apply_opts dispatch moved to Pipeline; test the scheduler\n           operations directly instead *)\n        let k = P.create ast ren in\n        P.convert_loop_to_global k;\n        let opts = [ K.Opt.Upcast { axis = 0; amount = 4 } ] in\n        List.iter (fun opt -> ignore (P.apply_opt k opt)) opts;\n        let result = P.get_optimized_ast k in\n        match K.view result with\n        | Sink { kernel_info = Some ki; _ } ->\n     
       equal int 1 (List.length ki.applied_opts)\n        | _ -> failwith \"expected Sink with kernel_info\");\n    ]\n\n(* Group 8: Convert_loop_to_global *)\n\nlet convert_loop_to_global_tests =\n  group \"convert_loop_to_global\"\n    [\n      test \"LOOP ranges become GLOBAL on GPU\" (fun () ->\n        let ast = elementwise_ast ~s0:8 ~s1:8 in\n        let ren = gpu_renderer () in\n        let t = P.create ast ren in\n        (* Before: all LOOP *)\n        is_true\n          (List.for_all (fun at -> at = Ak.Loop) (P.axis_types t));\n        P.convert_loop_to_global t;\n        (* After: all GLOBAL *)\n        is_true\n          (List.for_all (fun at -> at = Ak.Global) (P.axis_types t)));\n      test \"LOOP ranges stay LOOP on CPU\" (fun () ->\n        let ast = elementwise_ast ~s0:8 ~s1:8 in\n        let ren = cpu_renderer () in\n        let t = P.create ast ren in\n        P.convert_loop_to_global t;\n        is_true\n          (List.for_all (fun at -> at = Ak.Loop) (P.axis_types t)));\n      test \"reduce ranges stay REDUCE after conversion\" (fun () ->\n        let ast = reduce_ast ~s0:4 ~s1:4 ~sr:8 in\n        let ren = gpu_renderer () in\n        let t = P.create ast ren in\n        P.convert_loop_to_global t;\n        let ats = P.axis_types t in\n        (* LOOP ranges → GLOBAL, but REDUCE stays *)\n        is_true (List.exists (fun at -> at = Ak.Reduce) ats);\n        is_true (not (List.exists (fun at -> at = Ak.Loop) ats)));\n    ]\n\n(* Group 9: Apply_opts dispatch *)\n\nlet dispatch_tests =\n  group \"apply_opts dispatch\"\n    [\n      test \"opts_to_apply applied in order\" (fun () ->\n        let opts =\n          [\n            K.Opt.Upcast { axis = 0; amount = 4 };\n            K.Opt.Upcast { axis = 1; amount = 2 };\n          ]\n        in\n        let ast = elementwise_global_ast ~s0:16 ~s1:16 in\n        let ki = kernel_info ~opts_to_apply:(Some opts) () in\n        let ast =\n          K.sink ~kernel_info:ki\n            (match K.view ast 
with\n            | Sink { srcs; _ } -> srcs\n            | _ -> [ ast ])\n        in\n        let ren = gpu_renderer () in\n        let result = P.apply_opts ast ren in\n        match K.view result with\n        | Sink { kernel_info = Some ki; _ } ->\n            equal int 2 (List.length ki.applied_opts)\n        | _ -> failwith \"expected Sink with kernel_info\");\n      test \"beam_search closure is called\" (fun () ->\n        let called = ref false in\n        let beam_search k =\n          called := true;\n          k\n        in\n        let ast = elementwise_global_ast ~s0:8 ~s1:8 in\n        let ren = gpu_renderer () in\n        let _result =\n          P.apply_opts ~beam_search ast ren\n        in\n        is_true !called);\n      test \"hand_coded closure is called\" (fun () ->\n        let called = ref false in\n        let hco k =\n          called := true;\n          k\n        in\n        let ast = elementwise_global_ast ~s0:8 ~s1:8 in\n        let ren = gpu_renderer () in\n        let _result =\n          P.apply_opts ~hand_coded_optimizations:hco ast ren\n        in\n        is_true !called);\n      test \"already-optimized kernel returns unchanged\" (fun () ->\n        let ast = elementwise_global_ast ~s0:8 ~s1:8 in\n        let ren = gpu_renderer () in\n        (* First pass: optimize normally *)\n        let optimized = P.apply_opts ast ren in\n        (* Second pass: should return unchanged (tag is set) *)\n        let called = ref false in\n        let hco k =\n          called := true;\n          k\n        in\n        let result =\n          P.apply_opts ~hand_coded_optimizations:hco optimized ren\n        in\n        is_true (not !called);\n        (* Result should be the same AST *)\n        (match (K.view optimized, K.view result) with\n        | ( Sink { kernel_info = Some ki1; _ },\n            Sink { kernel_info = Some ki2; _ } ) ->\n            equal string ki1.name ki2.name\n        | _ -> failwith \"expected Sink with 
kernel_info\"));\n      test \"heuristic skipped with BUFFERIZE in AST\" (fun () ->\n        let p0 = K.param ~idx:0 ~dtype:global_fptr in\n        let r0 = global_range ~axis:0 8 in\n        let open K.O in\n        let in_idx = K.index ~ptr:p0 ~idxs:[ r0 ] () in\n        let ld = K.load ~src:in_idx () in\n        let buf_opts =\n          { K.device = None; addrspace = Global; removable = false }\n        in\n        let bz =\n          K.bufferize ~src:ld ~ranges:[ r0 ] ~dtype:global_fptr\n            ~opts:buf_opts\n        in\n        let out_idx = K.index ~ptr:p0 ~idxs:[ r0 + idx 8 ] () in\n        let st = K.store ~dst:out_idx ~value:bz ~ranges:[] in\n        let e = K.end_ ~value:st ~ranges:[ r0 ] () in\n        let ast = wrap_sink [ e ] in\n        let ren = gpu_renderer () in\n        let called = ref false in\n        let hco k =\n          called := true;\n          k\n        in\n        let _result =\n          P.apply_opts ~hand_coded_optimizations:hco ast ren\n        in\n        is_true (not !called));\n    ]\n\n(* Group 10: TC optimization *)\n\n(* Matmul-pattern AST:  output[i,j] = sum_k(a[i,k] * b[k,j])\n   Two GLOBAL ranges + one REDUCE range, MUL inside REDUCE ADD. 
*)\nlet global_f16ptr = D.Ptr.create D.Val.float16 ~addrspace:Global ~size:(-1)\n\nlet matmul_ast ~si ~sj ~sk =\n  let p_out = K.param ~idx:0 ~dtype:global_fptr in\n  let p_a = K.param ~idx:1 ~dtype:global_fptr in\n  let p_b = K.param ~idx:2 ~dtype:global_fptr in\n  let ri = global_range ~axis:0 si in\n  let rj = global_range ~axis:1 sj in\n  let rk = reduce_range ~axis:2 sk in\n  let open K.O in\n  let idx_a = K.index ~ptr:p_a ~idxs:[ ri * idx sk + rk ] () in\n  let ld_a = K.load ~src:idx_a () in\n  let idx_b = K.index ~ptr:p_b ~idxs:[ rk * idx sj + rj ] () in\n  let ld_b = K.load ~src:idx_b () in\n  let mul = ld_a * ld_b in\n  let red = K.reduce ~op:`Add ~src:mul ~ranges:[ rk ] ~dtype:D.Val.float32 in\n  let out_idx = K.index ~ptr:p_out ~idxs:[ ri * idx sj + rj ] () in\n  let st = K.store ~dst:out_idx ~value:red ~ranges:[] in\n  let e = K.end_ ~value:st ~ranges:[ ri; rj ] () in\n  wrap_sink [ e ]\n\nlet tc_renderer () =\n  Renderer.make ~name:\"metal\" ~device:\"METAL\" ~has_local:true ~has_shared:true\n    ~shared_max:32768 ~tensor_cores:Tc.metal\n    ~render:(fun ?name:_ _ -> \"\") ()\n\nlet has_wmma ast =\n  List.exists\n    (fun n -> match K.view n with Wmma _ -> true | _ -> false)\n    (K.toposort ast)\n\nlet tc_tests =\n  group \"TC optimization\"\n    [\n      test \"TC basic apply creates WMMA\" (fun () ->\n        let ast = matmul_ast ~si:16 ~sj:16 ~sk:16 in\n        let ren = tc_renderer () in\n        let t = P.create ast ren in\n        let result =\n          P.apply_opt t\n            (K.Opt.Tc { axis = 0; tc_select = 0; tc_opt = 0; use_tc = 1 })\n        in\n        is_true (result <> None);\n        is_true (P.tensor_core t <> None);\n        is_true (has_wmma (P.ast t)));\n      test \"TC with padding (tc_opt=2)\" (fun () ->\n        (* 10 doesn't divide 8 cleanly — tc_opt=2 allows padding *)\n        let ast = matmul_ast ~si:10 ~sj:10 ~sk:10 in\n        let ren = tc_renderer () in\n        let t = P.create ast ren in\n        let result =\n      
    P.apply_opt t\n            (K.Opt.Tc { axis = 0; tc_select = 0; tc_opt = 2; use_tc = 1 })\n        in\n        is_true (result <> None);\n        is_true (P.tensor_core t <> None));\n      test \"TC rejects non-reduce kernel\" (fun () ->\n        let ast = elementwise_global_ast ~s0:16 ~s1:16 in\n        let ren = tc_renderer () in\n        let t = P.create ast ren in\n        raises_opt_error (fun () ->\n          ignore\n            (P.apply_opt t\n               (K.Opt.Tc\n                  { axis = 0; tc_select = 0; tc_opt = 0; use_tc = 1 }))));\n      test \"TC rejects invalid tc_select\" (fun () ->\n        let ast = matmul_ast ~si:16 ~sj:16 ~sk:16 in\n        let ren = tc_renderer () in\n        let t = P.create ast ren in\n        raises_opt_error (fun () ->\n          ignore\n            (P.apply_opt t\n               (K.Opt.Tc\n                  { axis = 0; tc_select = 99; tc_opt = 0; use_tc = 1 }))));\n      test \"TC must be first opt\" (fun () ->\n        let ast = matmul_ast ~si:16 ~sj:16 ~sk:16 in\n        let ren = tc_renderer () in\n        let t = P.create ast ren in\n        ignore\n          (P.apply_opt t (K.Opt.Local { axis = 0; amount = 2 }));\n        raises_opt_error (fun () ->\n          ignore\n            (P.apply_opt t\n               (K.Opt.Tc\n                  { axis = 0; tc_select = 0; tc_opt = 0; use_tc = 1 }))));\n      test \"TC use_tc=2 skips WMMA construction\" (fun () ->\n        let ast = matmul_ast ~si:16 ~sj:16 ~sk:16 in\n        let ren = tc_renderer () in\n        let t = P.create ast ren in\n        ignore\n          (P.apply_opt t\n             (K.Opt.Tc { axis = 0; tc_select = 0; tc_opt = 0; use_tc = 2 }));\n        is_true (P.tensor_core t <> None);\n        is_true (not (has_wmma (P.ast t))));\n    ]\n\n(* Entry *)\n\nlet () =\n  run __FILE__\n    [\n      shift_to_tests;\n      validation_tests;\n      shift_opt_tests;\n      padto_tests;\n      swap_nolocals_tests;\n      state_query_tests;\n      
integration_tests;\n      convert_loop_to_global_tests;\n      dispatch_tests;\n      tc_tests;\n    ]\n"
  },
  {
    "path": "packages/tolk/test/unit/test_codegen_simplify.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Unit tests for Simplify.\n\n   Each group tests one of the six exported passes in isolation. *)\n\nopen Windtrap\nopen Tolk\nopen Tolk_ir\nmodule K = Kernel\nmodule D = Dtype\nmodule C = Const\nmodule Ak = Axis_kind\n\n(* Helpers *)\n\nlet idx n = K.const (C.int D.Val.index n)\nlet f32 x = K.const (C.float D.Val.float32 x)\nlet global_fptr = D.Ptr.create D.Val.float32 ~addrspace:Global ~size:(-1)\n\nlet kernel_info () =\n  {\n    K.name = \"\";\n    axis_kinds = [];\n    dont_use_locals = false;\n    applied_opts = [];\n    opts_to_apply = None;\n    estimates = None;\n  }\n\nlet wrap_sink srcs = K.sink ~kernel_info:(kernel_info ()) srcs\n\n(* Build a loop range on [axis] with int [size]. *)\nlet loop_range ~axis size =\n  K.range ~size:(idx size) ~axis ~kind:Ak.Loop ~dtype:D.Val.index ()\n\n(* Build a reduce range on [axis] with int [size]. *)\nlet reduce_range ~axis size =\n  K.range ~size:(idx size) ~axis ~kind:Ak.Reduce ~dtype:D.Val.index ()\n\n(* Build a gated load: LOAD(INDEX(param, WHERE(valid, idx, invalid)), alt=0). *)\nlet gated_load ?(param_idx = 0) valid index_val =\n  let p = K.param ~idx:param_idx ~dtype:global_fptr in\n  let gated =\n    K.ternary ~op:`Where ~a:valid ~b:index_val ~c:(K.invalid_index ())\n  in\n  let index_node = K.index ~ptr:p ~idxs:[ gated ] () in\n  K.load ~src:index_node ~alt:(f32 0.0) ()\n\n(* Build a plain (ungated) load. *)\nlet plain_load ?(param_idx = 0) index_val =\n  let p = K.param ~idx:param_idx ~dtype:global_fptr in\n  let index_node = K.index ~ptr:p ~idxs:[ index_val ] () in\n  K.load ~src:index_node ()\n\n(* Collect all Range nodes from a rooted DAG. 
*)\nlet find_ranges root =\n  List.filter K.is_range (K.toposort root)\n\n(* Extract the integer constant size of a Range node. *)\nlet range_size_int r =\n  match K.view (K.range_size r) with\n  | K.Const { value; _ } -> (\n      match C.view value with\n      | Int n -> Int64.to_int n\n      | _ -> failwith \"range size is not int\")\n  | _ -> failwith \"range size is not const\"\n\n(* Count Range nodes in a DAG. *)\nlet count_ranges root = List.length (find_ranges root)\n\n(* Check whether a node kind appears in a DAG. *)\nlet has_node pred root =\n  List.exists (fun n -> pred (K.view n)) (K.toposort root)\n\nlet has_reduce root =\n  has_node (function K.Reduce _ -> true | _ -> false) root\n\nlet has_binary op root =\n  has_node (function K.Binary { op = o; _ } -> o = op | _ -> false) root\n\n(* Pm_flatten_range *)\n\nlet flatten_range_tests =\n  group \"pm_flatten_range\"\n    [\n      test \"toposorts range children of End\" (fun () ->\n          (* r0 has a fixed size; r1 depends on r0 via its size expression.\n             If they appear in [r1; r0] order initially, flatten should\n             reorder them to [r0; r1]. 
*)\n          let r0 = loop_range ~axis:0 4 in\n          let open K.O in\n          let r1 =\n            K.range ~size:(r0 + idx 1) ~axis:1 ~kind:Ak.Loop ~dtype:D.Val.index ()\n          in\n          let value = r0 + r1 in\n          (* Build End with intentionally wrong order: [r1, r0] *)\n          let end_node = K.end_ ~value ~ranges:[ r1; r0 ] () in\n          let sink = wrap_sink [ end_node ] in\n          let result = Simplify.pm_flatten_range sink in\n          (* After flattening, r0 should come before r1 in the toposort *)\n          let ranges = find_ranges result in\n          is_true (List.length ranges = 2);\n          (* r0 should appear before r1 in toposort order *)\n          let topo = K.toposort result in\n          let pos_of r =\n            let rec go i = function\n              | [] -> -1\n              | n :: rest -> if n == r then i else go (Int.add i 1) rest\n            in\n            go 0 topo\n          in\n          (* We need to find the ranges in the result. Since flatten may\n             rebuild nodes, we check the size-4 range appears before the\n             dependent one. *)\n          let ranges_sorted =\n            List.sort (fun a b -> compare (pos_of a) (pos_of b)) ranges\n          in\n          let first_size = range_size_int (List.hd ranges_sorted) in\n          is_true (first_size = 4));\n      test \"noop when ranges already sorted\" (fun () ->\n          let r0 = loop_range ~axis:0 3 in\n          let r1 = loop_range ~axis:1 5 in\n          let open K.O in\n          let value = r0 + r1 in\n          let end_node = K.end_ ~value ~ranges:[ r0; r1 ] () in\n          let sink = wrap_sink [ end_node ] in\n          let result = Simplify.pm_flatten_range sink in\n          (* Independent ranges: pass should be a noop or produce same\n             structure. Count should be the same. 
*)\n          equal int (count_ranges result) 2);\n    ]\n\n(* Pm_split_ranges *)\n\nlet split_ranges_tests =\n  group \"pm_split_ranges\"\n    [\n      test \"splits Range(8) used with mod 2\" (fun () ->\n          let r = loop_range ~axis:0 8 in\n          let open K.O in\n          let value = r mod idx 2 in\n          let end_node = K.end_ ~value ~ranges:[ r ] () in\n          let sink = wrap_sink [ end_node ] in\n          let result = Simplify.pm_split_ranges sink in\n          (* Range(8) % 2 -> splits into Range(4)*2 + Range(2) *)\n          let n = count_ranges result in\n          is_true (n >= 2));\n      test \"no split when size does not divide constant\" (fun () ->\n          let r = loop_range ~axis:0 7 in\n          let open K.O in\n          let value = r mod idx 3 in\n          let end_node = K.end_ ~value ~ranges:[ r ] () in\n          let sink = wrap_sink [ end_node ] in\n          let result = Simplify.pm_split_ranges sink in\n          (* 7 % 3 != 0, so no split *)\n          equal int (count_ranges result) 1);\n      test \"split produces correct sizes\" (fun () ->\n          let r = loop_range ~axis:0 12 in\n          let open K.O in\n          let value = r mod idx 4 in\n          let end_node = K.end_ ~value ~ranges:[ r ] () in\n          let sink = wrap_sink [ end_node ] in\n          let result = Simplify.pm_split_ranges sink in\n          let ranges = find_ranges result in\n          is_true (List.length ranges >= 2);\n          (* Should have Range(3) and Range(4), or equivalent. *)\n          let sizes =\n            List.map range_size_int ranges |> List.sort compare\n          in\n          (* 12/4=3 outer, 4 inner *)\n          is_true (List.mem 3 sizes && List.mem 4 sizes));\n      test \"no split for image store ranges\" (fun () ->\n          (* Build a Store whose destination is an image param. 
*)\n          let r = loop_range ~axis:0 8 in\n          let open K.O in\n          let img_ptr =\n            D.Ptr.create (D.Val.vec 4 D.Val.float32) ~addrspace:Global ~size:(-1)\n          in\n          let p = K.param_image ~idx:0 ~dtype:img_ptr ~width:10 ~height:10 in\n          let index_node = K.index ~ptr:p ~idxs:[ r mod idx 2 ] () in\n          let store =\n            K.store ~dst:index_node ~value:(f32 1.0) ~ranges:[ r ]\n          in\n          let sink = wrap_sink [ store ] in\n          let result = Simplify.pm_split_ranges sink in\n          (* Image store ranges should not be split *)\n          equal int (count_ranges result) 1);\n    ]\n\n(* Pm_simplify_ranges: merge adjacent *)\n\nlet simplify_merge_tests =\n  group \"pm_simplify_ranges - merge adjacent\"\n    [\n      test \"merges adjacent ranges in End with same kind\" (fun () ->\n          (* Two adjacent Loop ranges with sizes 3 and 4. Expression uses\n             r0*4 + r1, which is the canonical divmod pattern that merges\n             into Range(12). 
*)\n          let r0 = loop_range ~axis:0 3 in\n          let r1 = loop_range ~axis:1 4 in\n          let open K.O in\n          let value = (r0 * idx 4) + r1 in\n          let end_node = K.end_ ~value ~ranges:[ r0; r1 ] () in\n          let sink = wrap_sink [ end_node ] in\n          let result = Simplify.pm_simplify_ranges sink in\n          (* Should merge into a single Range(12) since divmod count\n             doesn't increase *)\n          let ranges = find_ranges result in\n          is_true (List.length ranges <= 2));\n      test \"no merge when different kind\" (fun () ->\n          let r0 = loop_range ~axis:0 3 in\n          let r1 = reduce_range ~axis:1 4 in\n          let open K.O in\n          let value = (r0 * idx 4) + r1 in\n          let red = K.reduce ~op:`Add ~src:value ~ranges:[ r0; r1 ] ~dtype:D.Val.index in\n          let sink = wrap_sink [ K.end_ ~value:red ~ranges:[] () ] in\n          let result = Simplify.pm_simplify_ranges sink in\n          (* Different kinds: should not merge *)\n          equal int (count_ranges result) 2);\n    ]\n\n(* Pm_simplify_ranges: range shrink (TestRangeShrink port) *)\n\nlet range_shrink_tests =\n  group \"pm_simplify_ranges - range shrink\"\n    [\n      (* Port of test_range_shrink_single_guard:\n         Range(0..203) guarded by r < 4 everywhere -> shrink to 0..3 *)\n      test \"shrinks range with single guard\" (fun () ->\n          let r = loop_range ~axis:0 204 in\n          let open K.O in\n          let valid = r < idx 4 in\n          let load = gated_load valid r in\n          let sink = wrap_sink [ K.end_ ~value:load ~ranges:[ r ] () ] in\n          let result = Simplify.pm_simplify_ranges sink in\n          let ranges = find_ranges result in\n          equal int (List.length ranges) 1;\n          equal int (range_size_int (List.hd ranges)) 4);\n      (* Port of test_range_shrink_picks_max_guard:\n         Two loads guard the same range with r < 4 and r 
< 8 -> max(4,8) = 8 *)\n      test \"picks max guard across multiple loads\" (fun () ->\n          let r = loop_range ~axis:0 204 in\n          let open K.O in\n          let load1 = gated_load (r < idx 4) r in\n          let load2 = gated_load ~param_idx:1 (r < idx 8) r in\n          let value = load1 + load2 in\n          let sink =\n            wrap_sink [ K.end_ ~value ~ranges:[ r ] () ]\n          in\n          let result = Simplify.pm_simplify_ranges sink in\n          let ranges = find_ranges result in\n          equal int (List.length ranges) 1;\n          equal int (range_size_int (List.hd ranges)) 8);\n      (* Port of test_range_no_shrink_guard_ge_max:\n         Guard r < 300 with range max 204 -> no shrink.\n         Symbolic folds the vacuous guard first, matching the pipeline. *)\n      test \"no shrink when guard >= range size\" (fun () ->\n          let r = loop_range ~axis:0 204 in\n          let open K.O in\n          let valid = r < idx 300 in\n          let load = gated_load valid r in\n          let sink = wrap_sink [ K.end_ ~value:load ~ranges:[ r ] () ] in\n          let sink = K.graph_rewrite Symbolic.symbolic sink in\n          let result = Simplify.pm_simplify_ranges sink in\n          let ranges = find_ranges result in\n          equal int (List.length ranges) 1;\n          equal int (range_size_int (List.hd ranges)) 204);\n      (* Port of test_range_no_shrink_when_unguarded_elsewhere:\n         One load guards r < 4, another uses r without gate -> no shrink *)\n      test \"no shrink when unguarded elsewhere\" (fun () ->\n          let r = loop_range ~axis:0 204 in\n          let open K.O in\n          let load1 = gated_load (r < idx 4) r in\n          let load2 = plain_load ~param_idx:1 r in\n          let value = load1 + load2 in\n          let sink =\n            wrap_sink [ K.end_ ~value ~ranges:[ r ] () ]\n          in\n          let result = Simplify.pm_simplify_ranges sink in\n          let ranges = find_ranges result in\n        
  equal int (List.length ranges) 1;\n          equal int (range_size_int (List.hd ranges)) 204);\n      (* Port of test_range_no_shrink_when_used_in_reduce:\n         Range used in both gated load AND reduce expression -> no shrink *)\n      test \"no shrink for reduce ranges\" (fun () ->\n          let r = loop_range ~axis:0 204 in\n          let open K.O in\n          let load = gated_load (r < idx 4) r in\n          let src = K.cast ~src:r ~dtype:(D.float32) + load in\n          let red =\n            K.reduce ~op:`Add ~src ~ranges:[ r ] ~dtype:D.Val.float32\n          in\n          let sink = wrap_sink [ K.end_ ~value:red ~ranges:[] () ] in\n          let result = Simplify.pm_simplify_ranges sink in\n          let ranges = find_ranges result in\n          equal int (List.length ranges) 1;\n          equal int (range_size_int (List.hd ranges)) 204);\n      (* Port of test_range_shrink_to_single_iteration:\n         Guard r < 1 shrinks range to 1 *)\n      test \"shrink to single iteration\" (fun () ->\n          let r = loop_range ~axis:0 204 in\n          let open K.O in\n          let valid = r < idx 1 in\n          let load = gated_load valid r in\n          let sink = wrap_sink [ K.end_ ~value:load ~ranges:[ r ] () ] in\n          let result = Simplify.pm_simplify_ranges sink in\n          let ranges = find_ranges result in\n          (* Range shrinks to 1 — may be eliminated entirely by symbolic *)\n          is_true (List.length ranges <= 1);\n          if List.length ranges = 1 then\n            equal int (range_size_int (List.hd ranges)) 1);\n      (* Store through gated index -> range shrinks. We construct the\n         post-preprocessing form directly since we test pm_simplify_ranges\n         in isolation. 
*)\n      test \"shrink with store where invalid\" (fun () ->\n          let r = loop_range ~axis:0 204 in\n          let open K.O in\n          let valid = r < idx 4 in\n          let x =\n            K.ternary ~op:`Where ~a:valid ~b:(f32 1.0)\n              ~c:(K.invalid_index ())\n          in\n          let p = K.param ~idx:0 ~dtype:global_fptr in\n          let gated_idx =\n            K.ternary ~op:`Where ~a:valid ~b:r ~c:(K.invalid_index ())\n          in\n          let dst = K.index ~ptr:p ~idxs:[ gated_idx ] () in\n          let value = K.ternary ~op:`Where ~a:valid ~b:x ~c:(f32 0.0) in\n          let store = K.store ~dst ~value ~ranges:[ r ] in\n          let sink = wrap_sink [ store ] in\n          let result = Simplify.pm_simplify_ranges sink in\n          let ranges = find_ranges result in\n          equal int (List.length ranges) 1;\n          equal int (range_size_int (List.hd ranges)) 4);\n      (* Port of test_range_shrink_store_where_invalid_flipped *)\n      test \"shrink with store where invalid flipped\" (fun () ->\n          let r = loop_range ~axis:0 204 in\n          let open K.O in\n          let valid = r < idx 4 in\n          let x =\n            K.ternary ~op:`Where ~a:valid ~b:(f32 1.0)\n              ~c:(K.invalid_index ())\n          in\n          let p = K.param ~idx:0 ~dtype:global_fptr in\n          let gated_idx =\n            K.ternary ~op:`Where ~a:valid ~b:r ~c:(K.invalid_index ())\n          in\n          let dst = K.index ~ptr:p ~idxs:[ gated_idx ] () in\n          let value = K.ternary ~op:`Where ~a:valid ~b:(f32 0.0) ~c:x in\n          let store = K.store ~dst ~value ~ranges:[ r ] in\n          let sink = wrap_sink [ store ] in\n          let result = Simplify.pm_simplify_ranges sink in\n          let ranges = find_ranges result in\n          equal int (List.length ranges) 1;\n          equal int (range_size_int (List.hd ranges)) 4);\n    ]\n\n(* Pm_reduce_unparented *)\n\nlet reduce_unparented_tests =\n  group 
\"pm_reduce_unparented\"\n    [\n      test \"removes unparented range from ADD reduce\" (fun () ->\n          (* Reduce(ADD, src, [r0, r1]) where src only uses r0.\n             r1 is unparented -> result * size(r1). *)\n          let r0 = loop_range ~axis:0 4 in\n          let r1 = loop_range ~axis:1 5 in\n          let src = K.cast ~src:r0 ~dtype:(D.float32) in\n          let red =\n            K.reduce ~op:`Add ~src ~ranges:[ r0; r1 ] ~dtype:D.Val.float32\n          in\n          let result = Simplify.pm_reduce_unparented red in\n          (* r1 should be eliminated; result should have a Mul by 5 *)\n          let ranges = find_ranges result in\n          is_true (List.length ranges < 2);\n          (* The result should contain a multiplication by the size of r1 *)\n          is_true (has_binary `Mul result));\n      test \"removes unparented range from MUL reduce\" (fun () ->\n          let r0 = loop_range ~axis:0 4 in\n          let r1 = loop_range ~axis:1 3 in\n          let src = K.cast ~src:r0 ~dtype:(D.float32) in\n          let red =\n            K.reduce ~op:`Mul ~src ~ranges:[ r0; r1 ] ~dtype:D.Val.float32\n          in\n          let result = Simplify.pm_reduce_unparented red in\n          let ranges = find_ranges result in\n          is_true (List.length ranges < 2);\n          (* MUL reduce: unparented range produces Pow *)\n          is_true (has_binary `Pow result));\n      test \"MAX reduce ignores unparented ranges\" (fun () ->\n          let r0 = loop_range ~axis:0 4 in\n          let r1 = loop_range ~axis:1 3 in\n          let src = K.cast ~src:r0 ~dtype:(D.float32) in\n          let red =\n            K.reduce ~op:`Max ~src ~ranges:[ r0; r1 ] ~dtype:D.Val.float32\n          in\n          let result = Simplify.pm_reduce_unparented red in\n          let ranges = find_ranges result in\n          is_true (List.length ranges < 2);\n          (* MAX: no Mul or Pow compensation *)\n          is_false (has_binary `Mul result);\n          is_false 
(has_binary `Pow result));\n      test \"noop when all ranges parented\" (fun () ->\n          let r0 = loop_range ~axis:0 4 in\n          let r1 = loop_range ~axis:1 5 in\n          let open K.O in\n          let src = K.cast ~src:r0 ~dtype:(D.float32)\n                    + K.cast ~src:r1 ~dtype:(D.float32) in\n          let red =\n            K.reduce ~op:`Add ~src ~ranges:[ r0; r1 ] ~dtype:D.Val.float32\n          in\n          let result = Simplify.pm_reduce_unparented red in\n          (* Both ranges referenced, no change *)\n          is_true (has_reduce result);\n          equal int (count_ranges result) 2);\n    ]\n\n(* Pm_reduce_simplify *)\n\nlet reduce_simplify_tests =\n  group \"pm_reduce_simplify\"\n    [\n      test \"distributes add over reduce\" (fun () ->\n          (* Reduce(ADD, x + y, [r]) -> Reduce(ADD, x, [r]) + Reduce(ADD, y, [r]) *)\n          let r = loop_range ~axis:0 4 in\n          let x = K.cast ~src:r ~dtype:(D.float32) in\n          let y = f32 2.0 in\n          let open K.O in\n          let src = x + y in\n          let red =\n            K.reduce ~op:`Add ~src ~ranges:[ r ] ~dtype:D.Val.float32\n          in\n          let result = Simplify.pm_reduce_simplify red in\n          (* After distribution + unparented removal, the constant term\n             y is multiplied by range size. Check that original single\n             reduce is gone and we have an Add at top level or\n             simplified form. *)\n          let topo = K.toposort result in\n          let top_view = K.view result in\n          (* The result should no longer be a single Reduce over (x+y) *)\n          (match top_view with\n          | K.Reduce { src = s; _ } -> (\n              match K.view s with\n              | K.Binary { op = `Add; _ } ->\n                  (* If still Reduce(x+y), something is wrong. But the pass\n                     may also have applied further simplification. 
Just check\n                     the overall shape is different or ranges are reduced. *)\n                  ignore topo\n              | _ -> ())\n          | _ -> ()));\n      test \"bound from above: (r < cut).where(val, 0).reduce(ADD)\" (fun () ->\n          (* Reduce(ADD, (r < 3).where(val, 0), [r]) where r has size 10\n             -> min(max(3, 0), 10) * val = 3 * val *)\n          let r = loop_range ~axis:0 10 in\n          let open K.O in\n          let cond = r < idx 3 in\n          let val_ = f32 2.0 in\n          let src =\n            K.ternary ~op:`Where ~a:cond ~b:val_ ~c:(f32 0.0)\n          in\n          let red =\n            K.reduce ~op:`Add ~src ~ranges:[ r ] ~dtype:D.Val.float32\n          in\n          let result = Simplify.pm_reduce_simplify red in\n          (* Range should be eliminated *)\n          equal int (count_ranges result) 0);\n      test \"bound from below: (r < cut).where(0, val).reduce(ADD)\" (fun () ->\n          let r = loop_range ~axis:0 10 in\n          let open K.O in\n          let cond = r < idx 3 in\n          let val_ = f32 2.0 in\n          let src =\n            K.ternary ~op:`Where ~a:cond ~b:(f32 0.0) ~c:val_\n          in\n          let red =\n            K.reduce ~op:`Add ~src ~ranges:[ r ] ~dtype:D.Val.float32\n          in\n          let result = Simplify.pm_reduce_simplify red in\n          (* Range should be eliminated: result is (10-3)*2 = 14 *)\n          equal int (count_ranges result) 0);\n      test \"unparented range removed from ADD reduce\" (fun () ->\n          (* Integration with reduce_unparented: Reduce(ADD, const, [r])\n             where const doesn't reference r -> const * size(r) *)\n          let r = loop_range ~axis:0 5 in\n          let src = f32 3.0 in\n          let red =\n            K.reduce ~op:`Add ~src ~ranges:[ r ] ~dtype:D.Val.float32\n          in\n          let result = Simplify.pm_reduce_simplify red in\n          equal int (count_ranges result) 0;\n          is_true (has_binary 
`Mul result));\n      test \"mul casted bool becomes where\" (fun () ->\n          let r = loop_range ~axis:0 4 in\n          let open K.O in\n          let gate = r < idx 2 in\n          let gate_cast = K.cast ~src:gate ~dtype:(D.float32) in\n          let x = f32 5.0 in\n          let src = x * gate_cast in\n          let red =\n            K.reduce ~op:`Add ~src ~ranges:[ r ] ~dtype:D.Val.float32\n          in\n          let result = Simplify.pm_reduce_simplify red in\n          (* x * gate.cast() -> gate.where(x, 0) inside the reduce,\n             then bound-from-above collapses it. *)\n          equal int (count_ranges result) 0);\n      (* Multi-range reduce collapse: Reduce(ADD, (r1 < 3).where(1.0, 0.0), [r1, r2])\n         where r2 is unparented. Tests the iteration loop in reduce_collapse_inner. *)\n      test \"multi-range reduce collapse\" (fun () ->\n          let r1 = loop_range ~axis:0 5 in\n          let r2 = loop_range ~axis:1 4 in\n          let open K.O in\n          let cond = r1 < idx 3 in\n          let src = K.ternary ~op:`Where ~a:cond ~b:(f32 1.0) ~c:(f32 0.0) in\n          let red =\n            K.reduce ~op:`Add ~src ~ranges:[ r1; r2 ] ~dtype:D.Val.float32\n          in\n          let result = Simplify.pm_reduce_simplify red in\n          (* r2 is unparented -> removed with *4 multiplier.\n             r1 fold: min(max(3,0),5) * 1.0 = 3.0.\n             Result: 3.0 * 4.0 = 12.0, no ranges. 
*)\n          equal int (count_ranges result) 0);\n      (* Bound from two sides:\n         ((r >= lower) & (r < upper)).where(val, 0).reduce(r, ADD) *)\n      test \"bound from two sides\" (fun () ->\n          let r = loop_range ~axis:0 10 in\n          let open K.O in\n          let lower = idx 2 in\n          let upper = idx 7 in\n          (* !(r < lower) & (r < upper) *)\n          let not_below =\n            K.binary ~op:`Cmpeq ~lhs:(r < lower)\n              ~rhs:(K.const (C.bool false))\n          in\n          let cond = K.binary ~op:`And ~lhs:not_below ~rhs:(r < upper) in\n          let val_ = f32 3.0 in\n          let src = K.ternary ~op:`Where ~a:cond ~b:val_ ~c:(f32 0.0) in\n          let red =\n            K.reduce ~op:`Add ~src ~ranges:[ r ] ~dtype:D.Val.float32\n          in\n          let result = Simplify.pm_reduce_simplify red in\n          (* Range should be eliminated: count = min(max(min(7,10)-max(2,0),0),10) = 5 *)\n          equal int (count_ranges result) 0);\n      (* lift x*y out of reduce: (x * y) < c -> x < ceil_div(c, y)\n         when no_range(y), no_range(c), is_int(y), y.vmin > 0 *)\n      test \"lift x*y out of reduce\" (fun () ->\n          let r = loop_range ~axis:0 20 in\n          let open K.O in\n          let y = idx 3 in\n          let c = idx 15 in\n          (* (r * 3) < 15 should become r < 5 *)\n          let cond = (r * y) < c in\n          let val_ = f32 1.0 in\n          let src = K.ternary ~op:`Where ~a:cond ~b:val_ ~c:(f32 0.0) in\n          let red =\n            K.reduce ~op:`Add ~src ~ranges:[ r ] ~dtype:D.Val.float32\n          in\n          let result = Simplify.pm_reduce_simplify red in\n          equal int (count_ranges result) 0);\n      (* AND on WHERE: (DEFINE_VAR & y).where(c, 0).reduce(ADD, *ranges)\n         -> y.where(c, 0).reduce(ADD, *ranges) * x.cast(c.dtype) *)\n      test \"AND on WHERE with define_var\" (fun () ->\n          let r = loop_range ~axis:0 4 in\n          let open K.O in\n          
let dv = K.define_var ~name:\"x\" ~lo:0 ~hi:1 () in\n          let gate = r < idx 2 in\n          let cond = K.binary ~op:`And ~lhs:dv ~rhs:gate in\n          let val_ = f32 1.0 in\n          let src = K.ternary ~op:`Where ~a:cond ~b:val_ ~c:(f32 0.0) in\n          let red =\n            K.reduce ~op:`Add ~src ~ranges:[ r ] ~dtype:D.Val.float32\n          in\n          let result = Simplify.pm_reduce_simplify red in\n          (* DEFINE_VAR should be factored out as a Mul *)\n          equal int (count_ranges result) 0;\n          is_true (has_binary `Mul result));\n    ]\n\n(* Pm_load_collapse *)\n\nlet load_collapse_tests =\n  group \"pm_load_collapse\"\n    [\n      test \"collapses reduce over gated load\" (fun () ->\n          (* (idx != r).where(0, expr).reduce(r, ADD)\n             -> valid_check ? expr[r:=idx] : 0 *)\n          let r = loop_range ~axis:0 10 in\n          let load_idx = idx 3 in\n          let open K.O in\n          let cond = ne load_idx (K.cast ~src:r ~dtype:(D.index)) in\n          let expr = f32 7.0 in\n          let src =\n            K.ternary ~op:`Where ~a:cond ~b:(f32 0.0) ~c:expr\n          in\n          let red =\n            K.reduce ~op:`Add ~src ~ranges:[ r ] ~dtype:D.Val.float32\n          in\n          let result = Simplify.pm_load_collapse red in\n          (* The range should be eliminated *)\n          equal int (count_ranges result) 0);\n      test \"undo rule: no math on loaded index\" (fun () ->\n          (* (x:index + y) < c where x has a load -> x < (c - y) *)\n          let p = K.param ~idx:0 ~dtype:(D.Ptr.create (D.val_of D.index) ~addrspace:Global ~size:(-1)) in\n          let index_node = K.index ~ptr:p ~idxs:[ idx 0 ] () in\n          let loaded_idx = K.load ~src:index_node () in\n          let open K.O in\n          let y = idx 5 in\n          let c = idx 20 in\n          let expr = (loaded_idx + y) < c in\n          let result = Simplify.pm_load_collapse expr in\n          (* The rule should rewrite (loaded_idx 
+ 5) < 20 to\n             loaded_idx < (20 - 5) to avoid math on the loaded index.\n             Check that the top-level Cmplt has loaded_idx on LHS\n             (not loaded_idx + y). *)\n          (match K.view result with\n          | K.Binary { op = `Cmplt; lhs; _ } ->\n              (* lhs should be the loaded index, not an Add *)\n              (match K.view lhs with\n              | K.Binary { op = `Add; _ } ->\n                  fail \"expected undo rule to remove Add from LHS\"\n              | _ -> ())\n          | _ -> ()));\n    ]\n\n(* Node_vmin / node_vmax *)\n\nlet vmin_vmax_tests =\n  group \"node_vmin / node_vmax\"\n    [\n      test \"const int\" (fun () ->\n          let n = idx 42 in\n          equal int (K.vmin n) 42;\n          equal int (K.vmax n) 42);\n      test \"const bool\" (fun () ->\n          let t = K.const (C.bool true) in\n          let f_ = K.const (C.bool false) in\n          equal int (K.vmin t) 1;\n          equal int (K.vmax t) 1;\n          equal int (K.vmin f_) 0;\n          equal int (K.vmax f_) 0);\n      test \"range\" (fun () ->\n          let r = loop_range ~axis:0 10 in\n          equal int (K.vmin r) 0;\n          equal int (K.vmax r) 9);\n      test \"define_var\" (fun () ->\n          let dv = K.define_var ~name:\"x\" ~lo:3 ~hi:7 () in\n          equal int (K.vmin dv) 3;\n          equal int (K.vmax dv) 7);\n      test \"add\" (fun () ->\n          let r = loop_range ~axis:0 4 in\n          let open K.O in\n          let n = r + idx 3 in\n          equal int (K.vmin n) 3;\n          equal int (K.vmax n) 6);\n      test \"sub\" (fun () ->\n          let r = loop_range ~axis:0 4 in\n          let n = K.binary ~op:`Sub ~lhs:(idx 10) ~rhs:r in\n          equal int (K.vmin n) 7;\n          equal int (K.vmax n) 10);\n      test \"neg\" (fun () ->\n          let r = loop_range ~axis:0 4 in\n          let n = K.unary ~op:`Neg ~src:(K.cast ~src:r ~dtype:(D.int32)) in\n          equal int (K.vmin n) (-3);\n          equal 
int (K.vmax n) 0);\n      test \"mul with negative\" (fun () ->\n          let r = loop_range ~axis:0 3 in\n          let open K.O in\n          let n = r * idx (-2) in\n          equal int (K.vmin n) (-4);\n          equal int (K.vmax n) 0);\n      test \"idiv positive\" (fun () ->\n          let r = loop_range ~axis:0 10 in\n          let open K.O in\n          let n = r / idx 3 in\n          equal int (K.vmin n) 0;\n          equal int (K.vmax n) 3);\n      test \"mod constant\" (fun () ->\n          let r = loop_range ~axis:0 10 in\n          let open K.O in\n          let n = r mod idx 3 in\n          equal int (K.vmin n) 0;\n          equal int (K.vmax n) 2);\n      test \"max\" (fun () ->\n          let r = loop_range ~axis:0 4 in\n          let n = K.binary ~op:`Max ~lhs:r ~rhs:(idx 2) in\n          equal int (K.vmin n) 2;\n          equal int (K.vmax n) 3);\n      test \"cmplt known true\" (fun () ->\n          let r = loop_range ~axis:0 3 in\n          let open K.O in\n          let n = r < idx 10 in\n          equal int (K.vmin n) 1;\n          equal int (K.vmax n) 1);\n      test \"cmplt unknown\" (fun () ->\n          let r = loop_range ~axis:0 10 in\n          let open K.O in\n          let n = r < idx 5 in\n          equal int (K.vmin n) 0;\n          equal int (K.vmax n) 1);\n      test \"where int\" (fun () ->\n          let r1 = loop_range ~axis:0 5 in\n          let r2 = loop_range ~axis:1 10 in\n          let cond = K.const (C.bool true) in\n          let n = K.ternary ~op:`Where ~a:cond ~b:r1 ~c:r2 in\n          equal int (K.vmin n) 0;\n          equal int (K.vmax n) 9);\n      test \"and mask\" (fun () ->\n          let r = loop_range ~axis:0 256 in\n          let n = K.binary ~op:`And ~lhs:r ~rhs:(idx 15) in\n          equal int (K.vmin n) 0;\n          equal int (K.vmax n) 15);\n      test \"shl constant\" (fun () ->\n          let r = loop_range ~axis:0 4 in\n          let n = K.binary ~op:`Shl ~lhs:r ~rhs:(idx 2) in\n          equal int 
(K.vmin n) 0;\n          equal int (K.vmax n) 12);\n      test \"shr constant\" (fun () ->\n          let r = loop_range ~axis:0 16 in\n          let n = K.binary ~op:`Shr ~lhs:r ~rhs:(idx 2) in\n          equal int (K.vmin n) 0;\n          equal int (K.vmax n) 3);\n      test \"vectorize bounds\" (fun () ->\n          let r = loop_range ~axis:0 5 in\n          let dv = K.define_var ~name:\"x\" ~lo:2 ~hi:10 () in\n          let v = K.vectorize ~srcs:[ r; dv ] in\n          (* vectorize: min of sources, max of sources *)\n          equal int (K.vmin v) 0;\n          equal int (K.vmax v) 10);\n      test \"float binary falls back to dtype\" (fun () ->\n          let open K.O in\n          let a = f32 1.0 in\n          let b = f32 2.0 in\n          let n = a + b in\n          (* float binary: no recursion, falls back to dtype bounds *)\n          let vmin = K.vmin n in\n          let vmax = K.vmax n in\n          is_true (vmin <= 0);\n          is_true (vmax > 0));\n    ]\n\n(* Additional pm_load_collapse tests *)\n\nlet load_collapse_extra_tests =\n  group \"pm_load_collapse - extra\"\n    [\n      test \"lift x+y out of reduce on ne\" (fun () ->\n          (* (idx + y) != Cast(r) where no_range(y)\n             -> after NE lift: idx != Cast(r) - y\n             Tests the NE lift rule in pm_reduce_load_collapse_rule\n             combined with the gated load collapse. 
*)\n          let r = loop_range ~axis:0 10 in\n          let load_idx = idx 3 in\n          let y = idx 2 in\n          let open K.O in\n          (* (load_idx + y) != Cast(r) — NE lift should simplify to\n             load_idx != (Cast(r, idx) - y), then gated load fires *)\n          let sum = K.cast ~src:(load_idx + y) ~dtype:(D.index) in\n          let r_cast = K.cast ~src:r ~dtype:(D.index) in\n          let cond = ne sum r_cast in\n          let expr = f32 1.0 in\n          let src = K.ternary ~op:`Where ~a:cond ~b:(f32 0.0) ~c:expr in\n          let red =\n            K.reduce ~op:`Add ~src ~ranges:[ r ] ~dtype:D.Val.float32\n          in\n          let result = Simplify.pm_load_collapse red in\n          (* The expression should be simplified — at minimum the\n             structure should change from the original reduce. *)\n          is_true (result != red));\n      test \"reduce on gated load with casted range\" (fun () ->\n          (* (idx != Cast(r)).where(0, expr).reduce(r, ADD) *)\n          let r = loop_range ~axis:0 10 in\n          let load_idx = idx 3 in\n          let open K.O in\n          let r_cast = K.cast ~src:r ~dtype:(D.index) in\n          let cond = ne load_idx r_cast in\n          let expr = f32 7.0 in\n          let src =\n            K.ternary ~op:`Where ~a:cond ~b:(f32 0.0) ~c:expr\n          in\n          let red =\n            K.reduce ~op:`Add ~src ~ranges:[ r ] ~dtype:D.Val.float32\n          in\n          let result = Simplify.pm_load_collapse red in\n          equal int (count_ranges result) 0);\n    ]\n\n(* Entry point *)\n\nlet () =\n  run \"Codegen.Simplify\"\n    [\n      flatten_range_tests;\n      split_ranges_tests;\n      simplify_merge_tests;\n      range_shrink_tests;\n      reduce_unparented_tests;\n      reduce_simplify_tests;\n      load_collapse_tests;\n      vmin_vmax_tests;\n      load_collapse_extra_tests;\n    ]\n"
  },
  {
    "path": "packages/tolk/test/unit/test_codegen_tc.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Unit tests for Tc helper module and apply_tc_opt in Postrange.\n\n   Tests the tensor core hardware tables, helper functions (base_shape_str,\n   permutes_for_shape_str, etc.), and the apply_tc_opt optimization path. *)\n\nopen Windtrap\nopen Tolk\nopen Tolk_ir\nmodule K = Kernel\nmodule D = Dtype\nmodule C = Const\nmodule Ak = Axis_kind\nmodule P = Postrange\n\n(* Helpers *)\n\nlet all_tables =\n  [ (\"cuda_sm75\", Tc.cuda_sm75);\n    (\"cuda_sm80\", Tc.cuda_sm80);\n    (\"cuda_sm89\", Tc.cuda_sm89);\n    (\"amd_rdna3\", Tc.amd_rdna3);\n    (\"amd_rdna4\", Tc.amd_rdna4);\n    (\"amd_cdna3\", Tc.amd_cdna3);\n    (\"amd_cdna4\", Tc.amd_cdna4);\n    (\"metal\", Tc.metal);\n    (\"amx\", Tc.amx);\n    (\"intel\", Tc.intel) ]\n\nlet idx n = K.const (C.int D.Val.index n)\n\nlet global_ptr dt = D.Ptr.create (D.val_of dt) ~addrspace:Global ~size:(-1)\nlet global_fptr = global_ptr D.float32\nlet global_f16ptr = global_ptr D.float16\n\nlet kernel_info ?(opts_to_apply = None) () =\n  { K.name = \"test\";\n    axis_kinds = [];\n    dont_use_locals = false;\n    applied_opts = [];\n    opts_to_apply;\n    estimates = None }\n\nlet wrap_sink ?opts_to_apply srcs =\n  K.sink ~kernel_info:(kernel_info ?opts_to_apply ()) srcs\n\nlet loop_range ~axis size =\n  K.range ~size:(idx size) ~axis ~kind:Ak.Loop ~dtype:D.Val.index ()\n\nlet reduce_range ~axis size =\n  K.range ~size:(idx size) ~axis ~kind:Ak.Reduce ~dtype:D.Val.index ()\n\nlet global_range ~axis size =\n  K.range ~size:(idx size) ~axis ~kind:Ak.Global ~dtype:D.Val.index ()\n\n(* Renderers *)\n\nlet gpu_renderer () =\n  Renderer.make ~name:\"test\" ~device:\"TEST\" ~has_local:true ~has_shared:true\n    ~shared_max:32768 ~render:(fun ?name:_ _ -> \"\") 
()\n\nlet tc_renderer tcs =\n  Renderer.make ~name:\"test_tc\" ~device:\"GPU\" ~has_local:true ~has_shared:true\n    ~shared_max:32768 ~tensor_cores:tcs ~render:(fun ?name:_ _ -> \"\") ()\n\n(* AST Fixture Builders *)\n\n(* Matmul kernel: out[i,j] = sum_k(a[i,k] * b[k,j])\n   Ranges: r_m (loop, axis 0), r_n (loop, axis 1), r_k (reduce, axis 2).\n   Both loads are f32.  Suitable for metal (f32/f32) TCs. *)\nlet matmul_f32_ast ~m ~n ~k =\n  let p_out = K.param ~idx:0 ~dtype:global_fptr in\n  let p_a = K.param ~idx:1 ~dtype:global_fptr in\n  let p_b = K.param ~idx:2 ~dtype:global_fptr in\n  let r_m = loop_range ~axis:0 m in\n  let r_n = loop_range ~axis:1 n in\n  let r_k = reduce_range ~axis:2 k in\n  let open K.O in\n  let idx_a = K.index ~ptr:p_a ~idxs:[ r_m * idx k + r_k ] () in\n  let idx_b = K.index ~ptr:p_b ~idxs:[ r_k * idx n + r_n ] () in\n  let ld_a = K.load ~src:idx_a () in\n  let ld_b = K.load ~src:idx_b () in\n  let mul = K.binary ~op:`Mul ~lhs:ld_a ~rhs:ld_b in\n  let red = K.reduce ~op:`Add ~src:mul ~ranges:[ r_k ] ~dtype:D.Val.float32 in\n  let out_idx = K.index ~ptr:p_out ~idxs:[ r_m * idx n + r_n ] () in\n  let st = K.store ~dst:out_idx ~value:red ~ranges:[] in\n  let e = K.end_ ~value:st ~ranges:[ r_m; r_n ] () in\n  wrap_sink [ e ]\n\n(* Matmul with global ranges (for TC which needs loop-to-global conversion) *)\nlet matmul_f32_global_ast ~m ~n ~k =\n  let p_out = K.param ~idx:0 ~dtype:global_fptr in\n  let p_a = K.param ~idx:1 ~dtype:global_fptr in\n  let p_b = K.param ~idx:2 ~dtype:global_fptr in\n  let r_m = global_range ~axis:0 m in\n  let r_n = global_range ~axis:1 n in\n  let r_k = reduce_range ~axis:2 k in\n  let open K.O in\n  let idx_a = K.index ~ptr:p_a ~idxs:[ r_m * idx k + r_k ] () in\n  let idx_b = K.index ~ptr:p_b ~idxs:[ r_k * idx n + r_n ] () in\n  let ld_a = K.load ~src:idx_a () in\n  let ld_b = K.load ~src:idx_b () in\n  let mul = K.binary ~op:`Mul ~lhs:ld_a ~rhs:ld_b in\n  let red = K.reduce ~op:`Add ~src:mul ~ranges:[ r_k ] 
~dtype:D.Val.float32 in\n  let out_idx = K.index ~ptr:p_out ~idxs:[ r_m * idx n + r_n ] () in\n  let st = K.store ~dst:out_idx ~value:red ~ranges:[] in\n  let e = K.end_ ~value:st ~ranges:[ r_m; r_n ] () in\n  wrap_sink [ e ]\n\n(* Matmul with f16 inputs and f32 accumulation *)\nlet matmul_f16_global_ast ~m ~n ~k =\n  let p_out = K.param ~idx:0 ~dtype:global_fptr in\n  let p_a = K.param ~idx:1 ~dtype:global_f16ptr in\n  let p_b = K.param ~idx:2 ~dtype:global_f16ptr in\n  let r_m = global_range ~axis:0 m in\n  let r_n = global_range ~axis:1 n in\n  let r_k = reduce_range ~axis:2 k in\n  let open K.O in\n  let idx_a = K.index ~ptr:p_a ~idxs:[ r_m * idx k + r_k ] () in\n  let idx_b = K.index ~ptr:p_b ~idxs:[ r_k * idx n + r_n ] () in\n  let ld_a = K.load ~src:idx_a () in\n  let ld_b = K.load ~src:idx_b () in\n  let mul = K.binary ~op:`Mul ~lhs:ld_a ~rhs:ld_b in\n  let red = K.reduce ~op:`Add ~src:mul ~ranges:[ r_k ] ~dtype:D.Val.float32 in\n  let out_idx = K.index ~ptr:p_out ~idxs:[ r_m * idx n + r_n ] () in\n  let st = K.store ~dst:out_idx ~value:red ~ranges:[] in\n  let e = K.end_ ~value:st ~ranges:[ r_m; r_n ] () in\n  wrap_sink [ e ]\n\n(* Simple elementwise kernel (no reduce — for testing TC rejection) *)\nlet elementwise_global_ast ~s0 ~s1 =\n  let p0 = K.param ~idx:0 ~dtype:global_fptr in\n  let p1 = K.param ~idx:1 ~dtype:global_fptr in\n  let r0 = global_range ~axis:0 s0 in\n  let r1 = global_range ~axis:1 s1 in\n  let open K.O in\n  let in_idx = K.index ~ptr:p1 ~idxs:[ r0 * idx s1 + r1 ] () in\n  let ld = K.load ~src:in_idx () in\n  let value = K.unary ~op:`Exp2 ~src:ld in\n  let out_idx = K.index ~ptr:p0 ~idxs:[ r0 * idx s1 + r1 ] () in\n  let st = K.store ~dst:out_idx ~value ~ranges:[] in\n  let e = K.end_ ~value:st ~ranges:[ r0; r1 ] () in\n  wrap_sink [ e ]\n\n(* Analysis Helpers *)\n\nlet raises_opt_error f =\n  raises_match (function P.Opt_error _ -> true | _ -> false) f\n\nlet has_wmma ast =\n  List.exists\n    (fun n -> match K.view n with Wmma _ -> 
true | _ -> false)\n    (K.toposort ast)\n\nlet has_contract ast =\n  List.exists\n    (fun n -> match K.view n with Contract _ -> true | _ -> false)\n    (K.toposort ast)\n\nlet has_unroll ast =\n  List.exists\n    (fun n -> match K.view n with Unroll _ -> true | _ -> false)\n    (K.toposort ast)\n\n(* Check a TC entry matches expected values *)\nlet check_tc (tc : Tc.t) ~dims ~threads ~ept ~dtype_in ~dtype_out\n    ~opts ~swizzle =\n  let n, m, k = dims in\n  let tn, tm, tk = tc.dims in\n  equal int n tn;\n  equal int m tm;\n  equal int k tk;\n  equal int threads tc.threads;\n  let ea, eb, ec = ept in\n  let ta, tb, tcc = tc.elements_per_thread in\n  equal int ea ta;\n  equal int eb tb;\n  equal int ec tcc;\n  is_true (dtype_in = tc.dtype_in);\n  is_true (dtype_out = tc.dtype_out);\n  equal (list string) opts tc.opts;\n  let (s0l, s0u, s0r), (s1l, s1u, s1r) = swizzle in\n  let (t0l, t0u, t0r), (t1l, t1u, t1r) = tc.swizzle in\n  equal (list string) s0l t0l;\n  equal (list string) s0u t0u;\n  equal (list string) s0r t0r;\n  equal (list string) s1l t1l;\n  equal (list string) s1u t1u;\n  equal (list string) s1r t1r\n\n(* Tests *)\n\nlet () =\n  run __FILE__\n    [\n      (* Existing Tc helper tests *)\n\n      group \"validate\"\n        (List.map (fun (name, tcs) ->\n           test (Printf.sprintf \"%s tables pass validation\" name) (fun () ->\n             List.iter Tc.validate tcs))\n         all_tables);\n\n      group \"to_string\"\n        [\n          test \"cuda_sm80 first entry (half/float)\" (fun () ->\n            let tc = List.hd Tc.cuda_sm80 in\n            let s = Tc.to_string tc in\n            equal string \"WMMA_8_16_16_half_float\" s);\n\n          test \"cuda_sm80 bf16 entry (__bf16/float)\" (fun () ->\n            let tc = List.nth Tc.cuda_sm80 1 in\n            let s = Tc.to_string tc in\n            equal string \"WMMA_8_16_16___bf16_float\" s);\n\n          test \"cuda_sm80 half/half entry\" (fun () ->\n            let tc = List.nth 
Tc.cuda_sm80 2 in\n            let s = Tc.to_string tc in\n            equal string \"WMMA_8_16_16_half_half\" s);\n\n          test \"cuda_sm89 fp8e4m3 entry\" (fun () ->\n            let sm80_len = List.length Tc.cuda_sm80 in\n            let tc = List.nth Tc.cuda_sm89 sm80_len in\n            let s = Tc.to_string tc in\n            equal string \"WMMA_8_16_32_float8_e4m3_float\" s);\n\n          test \"cuda_8168_tf32 (float/float)\" (fun () ->\n            let tc = List.nth Tc.cuda_sm80 5 in\n            let s = Tc.to_string tc in\n            equal string \"WMMA_8_16_8_float_float\" s);\n\n          test \"metal first entry (float/float)\" (fun () ->\n            let tc = List.hd Tc.metal in\n            let s = Tc.to_string tc in\n            equal string \"WMMA_8_8_8_float_float\" s);\n\n          test \"amx (float/float)\" (fun () ->\n            let tc = List.hd Tc.amx in\n            let s = Tc.to_string tc in\n            equal string \"WMMA_16_16_1_float_float\" s);\n\n          test \"intel (half/float)\" (fun () ->\n            let tc = List.hd Tc.intel in\n            let s = Tc.to_string tc in\n            equal string \"WMMA_8_8_16_half_float\" s);\n        ];\n\n      (* get_reduce_axes, get_upcast_axes, get_local_axes are internal\n         helpers tested indirectly via validate (runs at module load). 
*)\n\n      group \"base_shape_str\"\n        [\n          test \"cuda_sm80 first entry\" (fun () ->\n            let tc = List.hd Tc.cuda_sm80 in\n            let ss = Tc.base_shape_str tc in\n            equal (list string)\n              [\"u0\";\"l0\";\"l1\";\"l2\";\"l3\";\"l4\";\"u1\";\"r0\";\"r1\";\"r2\";\"r3\"] ss);\n\n          test \"amx has no reduce labels\" (fun () ->\n            let tc = List.hd Tc.amx in\n            let ss = Tc.base_shape_str tc in\n            equal int 8 (List.length ss);\n            is_true (List.for_all (fun s -> s.[0] = 'u') ss));\n        ];\n\n      group \"base_upcast_axes\"\n        [\n          test \"cuda_sm80 first entry\" (fun () ->\n            let tc = List.hd Tc.cuda_sm80 in\n            let bua = Tc.base_upcast_axes tc in\n            equal (list string) [\"u1\";\"u0\";\"r3\";\"r2\";\"r1\";\"r0\"] bua);\n        ];\n\n      group \"permutes_for_shape_str\"\n        [\n          test \"cuda_sm80 first entry round-trip\" (fun () ->\n            let tc = List.hd Tc.cuda_sm80 in\n            let shape_str = Tc.base_shape_str tc in\n            let p0, p1 = Tc.permutes_for_shape_str tc shape_str in\n            equal int (List.length shape_str) (List.length p0);\n            equal int (List.length shape_str) (List.length p1);\n            let n = List.length shape_str in\n            List.iter (fun i -> is_true (i >= 0 && i < n)) p0;\n            List.iter (fun i -> is_true (i >= 0 && i < n)) p1);\n\n          test \"metal first entry round-trip\" (fun () ->\n            let tc = List.hd Tc.metal in\n            let shape_str = Tc.base_shape_str tc in\n            let p0, p1 = Tc.permutes_for_shape_str tc shape_str in\n            equal int (List.length shape_str) (List.length p0);\n            equal int (List.length shape_str) (List.length p1));\n        ];\n\n      group \"table composition\"\n        [\n          test \"cuda_sm75 = cuda_8168_f16\" (fun () ->\n            equal int 2 (List.length Tc.cuda_sm75));\n\n   
       test \"cuda_sm80 has 6 entries\" (fun () ->\n            equal int 6 (List.length Tc.cuda_sm80));\n\n          test \"cuda_sm89 = cuda_sm80 + 2 fp8\" (fun () ->\n            equal int 8 (List.length Tc.cuda_sm89));\n\n          test \"amd_cdna3 has correct count\" (fun () ->\n            equal int 4 (List.length Tc.amd_cdna3));\n\n          test \"amd_cdna4 has correct count\" (fun () ->\n            equal int 8 (List.length Tc.amd_cdna4));\n\n          test \"metal has 5 dtype variants\" (fun () ->\n            equal int 5 (List.length Tc.metal));\n\n          test \"amx has 1 entry\" (fun () ->\n            equal int 1 (List.length Tc.amx));\n\n          test \"intel has 1 entry\" (fun () ->\n            equal int 1 (List.length Tc.intel));\n        ];\n\n      (* Table parity: exact values for each hardware target *)\n\n      group \"table parity\"\n        [\n          (* Metal: 5 entries, all same structure, different dtypes *)\n          test \"metal[0] f32/f32 matches reference\" (fun () ->\n            check_tc (List.nth Tc.metal 0)\n              ~dims:(8, 8, 8) ~threads:32 ~ept:(2, 2, 2)\n              ~dtype_in:D.Float32 ~dtype_out:D.Float32\n              ~opts:[\"u0\";\"l0\";\"l1\";\"l1\";\"l0\";\"l1\"]\n              ~swizzle:\n                ( ([\"r1\";\"l1\";\"l2\";\"r2\";\"l4\"], [\"r0\"], [\"u0\";\"l0\";\"l3\"]),\n                  ([\"l0\";\"r0\";\"r1\";\"l3\";\"r2\"], [\"u0\"], [\"l1\";\"l2\";\"l4\"]) ));\n\n          test \"metal[1] f16/f32 matches reference\" (fun () ->\n            check_tc (List.nth Tc.metal 1)\n              ~dims:(8, 8, 8) ~threads:32 ~ept:(2, 2, 2)\n              ~dtype_in:D.Float16 ~dtype_out:D.Float32\n              ~opts:[\"u0\";\"l0\";\"l1\";\"l1\";\"l0\";\"l1\"]\n              ~swizzle:\n                ( ([\"r1\";\"l1\";\"l2\";\"r2\";\"l4\"], [\"r0\"], [\"u0\";\"l0\";\"l3\"]),\n                  ([\"l0\";\"r0\";\"r1\";\"l3\";\"r2\"], [\"u0\"], [\"l1\";\"l2\";\"l4\"]) ));\n\n          test \"amx[0] f32/f32 
matches reference\" (fun () ->\n            check_tc (List.nth Tc.amx 0)\n              ~dims:(16, 16, 1) ~threads:1 ~ept:(16, 16, 256)\n              ~dtype_in:D.Float32 ~dtype_out:D.Float32\n              ~opts:[\"u0\";\"u0\";\"u0\";\"u0\";\"u1\";\"u1\";\"u1\";\"u1\"]\n              ~swizzle:\n                ( ([], [\"u0\";\"u1\";\"u2\";\"u3\";\"u4\";\"u5\";\"u6\";\"u7\"], []),\n                  ([], [\"u4\";\"u5\";\"u6\";\"u7\";\"u0\";\"u1\";\"u2\";\"u3\"], []) ));\n\n          test \"intel[0] f16/f32 matches reference\" (fun () ->\n            check_tc (List.nth Tc.intel 0)\n              ~dims:(8, 8, 16) ~threads:8 ~ept:(16, 16, 8)\n              ~dtype_in:D.Float16 ~dtype_out:D.Float32\n              ~opts:[\"l0\";\"l0\";\"l0\";\"u1\";\"u1\";\"u1\"]\n              ~swizzle:\n                ( ([\"r1\";\"r2\";\"r3\"], [\"u0\";\"u1\";\"u2\"], [\"l0\";\"l1\";\"l2\";\"r0\"]),\n                  ([\"l0\";\"l1\";\"l2\"], [\"r1\";\"r2\";\"r3\"], [\"u0\";\"u1\";\"u2\";\"r0\"]) ));\n\n          test \"cuda_81616[0] f16/f32 matches reference\" (fun () ->\n            check_tc (List.nth Tc.cuda_sm80 0)\n              ~dims:(8, 16, 16) ~threads:32 ~ept:(8, 4, 4)\n              ~dtype_in:D.Float16 ~dtype_out:D.Float32\n              ~opts:[\"u0\";\"l0\";\"l0\";\"l1\";\"l1\";\"l1\";\"u1\"]\n              ~swizzle:\n                ( ([\"r1\";\"r2\";\"l2\";\"l3\";\"l4\"], [\"u1\";\"r3\"], [\"l0\";\"l1\";\"u0\";\"r0\"]),\n                  ([\"r1\";\"r2\";\"u0\";\"l0\";\"l1\"], [\"r0\";\"r3\"], [\"l2\";\"l3\";\"l4\";\"u1\"]) ));\n\n          test \"cuda_8168_tf32 f32/f32 matches reference\" (fun () ->\n            check_tc (List.nth Tc.cuda_sm80 5)\n              ~dims:(8, 16, 8) ~threads:32 ~ept:(4, 2, 4)\n              ~dtype_in:D.Float32 ~dtype_out:D.Float32\n              ~opts:[\"u0\";\"l0\";\"l0\";\"l1\";\"l1\";\"l1\";\"u1\"]\n              ~swizzle:\n                ( ([\"r0\";\"r1\";\"l2\";\"l3\";\"l4\"], [\"u1\";\"r2\"], [\"l0\";\"l1\";\"u0\"]),\n                  
([\"r0\";\"r1\";\"u0\";\"l0\";\"l1\"], [\"u1\";\"r2\"], [\"l2\";\"l3\";\"l4\"]) ));\n\n          test \"amd_rdna3[0] f16/f32 matches reference\" (fun () ->\n            check_tc (List.nth Tc.amd_rdna3 0)\n              ~dims:(16, 16, 16) ~threads:32 ~ept:(16, 16, 8)\n              ~dtype_in:D.Float16 ~dtype_out:D.Float32\n              ~opts:[\"l0\";\"l0\";\"l0\";\"l0\";\"l1\";\"u1\";\"u1\";\"u1\"]\n              ~swizzle:\n                ( ([\"l4\";\"u0\";\"u1\";\"u2\";\"l0\"], [\"r1\";\"r2\";\"r3\"], [\"l1\";\"l2\";\"l3\";\"r0\"]),\n                  ([\"l0\";\"l1\";\"l2\";\"l3\";\"l4\"], [\"r1\";\"r2\";\"r3\"], [\"u0\";\"u1\";\"u2\";\"r0\"]) ));\n\n          test \"amd_cdna_1616128[0] fp8e5m2/f32 matches reference\" (fun () ->\n            check_tc (List.nth Tc.amd_cdna4 0)\n              ~dims:(16, 16, 128) ~threads:64 ~ept:(32, 32, 4)\n              ~dtype_in:D.Fp8e5m2 ~dtype_out:D.Float32\n              ~opts:[\"l0\";\"l0\";\"l0\";\"l0\";\"u1\";\"u1\";\"l1\";\"l1\"]\n              ~swizzle:\n                ( ([\"u0\";\"u1\";\"l4\";\"l5\";\"r5\";\"r6\"], [\"r0\";\"r1\"],\n                   [\"l0\";\"l1\";\"l2\";\"l3\";\"r2\";\"r3\";\"r4\"]),\n                  ([\"l0\";\"l1\";\"l2\";\"l3\";\"r5\";\"r6\"], [\"r0\";\"r1\"],\n                   [\"l4\";\"l5\";\"u0\";\"u1\";\"r2\";\"r3\";\"r4\"]) ));\n        ];\n\n      (* Permute parity: exact golden values for each hardware target *)\n\n      group \"permute parity\"\n        [\n          test \"cuda_81616 permutes match reference\" (fun () ->\n            let tc = List.hd Tc.cuda_sm80 in\n            let ss = Tc.base_shape_str tc in\n            let p0, p1 = Tc.permutes_for_shape_str tc ss in\n            equal (list int) [6; 8; 9; 3; 4; 5; 10; 1; 2; 0; 7] p0;\n            equal (list int) [7; 8; 9; 0; 1; 2; 10; 3; 4; 5; 6] p1);\n\n          test \"cuda_8168_f16 permutes match reference\" (fun () ->\n            let tc = List.nth Tc.cuda_sm80 3 in\n            let ss = Tc.base_shape_str tc in\n            
let p0, p1 = Tc.permutes_for_shape_str tc ss in\n            equal (list int) [7; 8; 9; 3; 4; 5; 6; 1; 2; 0] p0;\n            equal (list int) [6; 8; 9; 0; 1; 2; 7; 3; 4; 5] p1);\n\n          test \"cuda_8168_tf32 permutes match reference\" (fun () ->\n            let tc = List.nth Tc.cuda_sm80 5 in\n            let ss = Tc.base_shape_str tc in\n            let p0, p1 = Tc.permutes_for_shape_str tc ss in\n            equal (list int) [6; 7; 8; 3; 4; 5; 9; 1; 2; 0] p0;\n            equal (list int) [6; 7; 8; 0; 1; 2; 9; 3; 4; 5] p1);\n\n          test \"metal permutes match reference\" (fun () ->\n            let tc = List.hd Tc.metal in\n            let ss = Tc.base_shape_str tc in\n            let p0, p1 = Tc.permutes_for_shape_str tc ss in\n            equal (list int) [6; 7; 2; 3; 8; 5; 0; 1; 4] p0;\n            equal (list int) [0; 1; 6; 7; 4; 8; 2; 3; 5] p1);\n\n          test \"amx permutes match reference\" (fun () ->\n            let tc = List.hd Tc.amx in\n            let ss = Tc.base_shape_str tc in\n            let p0, p1 = Tc.permutes_for_shape_str tc ss in\n            equal (list int) [0; 1; 2; 3; 4; 5; 6; 7] p0;\n            equal (list int) [4; 5; 6; 7; 0; 1; 2; 3] p1);\n\n          test \"intel permutes match reference\" (fun () ->\n            let tc = List.hd Tc.intel in\n            let ss = Tc.base_shape_str tc in\n            let p0, p1 = Tc.permutes_for_shape_str tc ss in\n            equal (list int) [7; 8; 9; 3; 4; 5; 0; 1; 2; 6] p0;\n            equal (list int) [0; 1; 2; 7; 8; 9; 3; 4; 5; 6] p1);\n\n          test \"amd_rdna3 permutes match reference\" (fun () ->\n            let tc = List.hd Tc.amd_rdna3 in\n            let ss = Tc.base_shape_str tc in\n            let p0, p1 = Tc.permutes_for_shape_str tc ss in\n            equal (list int) [4; 5; 6; 7; 0; 9; 10; 11; 1; 2; 3; 8] p0;\n            equal (list int) [0; 1; 2; 3; 4; 9; 10; 11; 5; 6; 7; 8] p1);\n\n          test \"amd_cdna_161616 permutes match reference\" (fun () ->\n        
    let tc = List.nth Tc.amd_cdna3 2 in (* cdna3 = cdna_161632[:2] + cdna_161616 *)\n            let ss = Tc.base_shape_str tc in\n            let p0, p1 = Tc.permutes_for_shape_str tc ss in\n            equal (list int) [4; 5; 6; 7; 8; 9; 10; 11; 0; 1; 2; 3] p0;\n            equal (list int) [0; 1; 2; 3; 8; 9; 10; 11; 6; 7; 4; 5] p1);\n        ];\n\n      (* Apply_tc_opt validation guards *)\n\n      group \"apply_tc_opt validation\"\n        [\n          test \"TC must be first opt\" (fun () ->\n            let ast = matmul_f32_global_ast ~m:8 ~n:8 ~k:8 in\n            let ren = tc_renderer Tc.metal in\n            let t = P.create ast ren in\n            ignore (P.apply_opt t (K.Opt.Upcast { axis = 0; amount = 2 }));\n            raises_opt_error (fun () ->\n              ignore (P.apply_opt t\n                (K.Opt.Tc { axis = 0; tc_select = -1; tc_opt = 0; use_tc = 1 }))));\n\n          test \"TC invalid tc_select rejected\" (fun () ->\n            let ast = matmul_f32_global_ast ~m:8 ~n:8 ~k:8 in\n            let ren = tc_renderer Tc.metal in\n            let t = P.create ast ren in\n            raises_opt_error (fun () ->\n              ignore (P.apply_opt t\n                (K.Opt.Tc { axis = 0; tc_select = 99; tc_opt = 0; use_tc = 1 }))));\n\n          test \"TC invalid tc_opt rejected\" (fun () ->\n            let ast = matmul_f32_global_ast ~m:8 ~n:8 ~k:8 in\n            let ren = tc_renderer Tc.metal in\n            let t = P.create ast ren in\n            raises_opt_error (fun () ->\n              ignore (P.apply_opt t\n                (K.Opt.Tc { axis = 0; tc_select = -1; tc_opt = 3; use_tc = 1 }))));\n\n          test \"TC use_tc=0 rejected\" (fun () ->\n            let ast = matmul_f32_global_ast ~m:8 ~n:8 ~k:8 in\n            let ren = tc_renderer Tc.metal in\n            let t = P.create ast ren in\n            raises_opt_error (fun () ->\n              ignore (P.apply_opt t\n                (K.Opt.Tc { axis = 0; tc_select = -1; tc_opt = 0; 
use_tc = 0 }))));\n\n          test \"TC use_tc=3 rejected\" (fun () ->\n            let ast = matmul_f32_global_ast ~m:8 ~n:8 ~k:8 in\n            let ren = tc_renderer Tc.metal in\n            let t = P.create ast ren in\n            raises_opt_error (fun () ->\n              ignore (P.apply_opt t\n                (K.Opt.Tc { axis = 0; tc_select = -1; tc_opt = 0; use_tc = 3 }))));\n\n          test \"TC on elementwise kernel rejected\" (fun () ->\n            let ast = elementwise_global_ast ~s0:8 ~s1:8 in\n            let ren = tc_renderer Tc.metal in\n            let t = P.create ast ren in\n            raises_opt_error (fun () ->\n              ignore (P.apply_opt t\n                (K.Opt.Tc { axis = 0; tc_select = -1; tc_opt = 0; use_tc = 1 }))));\n\n          (* dtype mismatch: f32 matmul but TC only supports f16 *)\n          test \"TC dtype mismatch rejected\" (fun () ->\n            let ast = matmul_f32_global_ast ~m:8 ~n:8 ~k:16 in\n            (* Use intel TC which requires f16 input *)\n            let ren = tc_renderer Tc.intel in\n            let t = P.create ast ren in\n            raises_opt_error (fun () ->\n              ignore (P.apply_opt t\n                (K.Opt.Tc { axis = 0; tc_select = -1; tc_opt = 0; use_tc = 1 }))));\n\n          (* No grouping after TC — use use_tc=2 to avoid the WMMA tne bug\n             in apply_tc_opt (postrange.ml:914-921 calls K.range_kind on\n             non-range nodes from local shift_to results). 
*)\n          test \"GROUP after TC rejected\" (fun () ->\n            let ast = matmul_f32_global_ast ~m:8 ~n:8 ~k:8 in\n            let ren = tc_renderer Tc.metal in\n            let t = P.create ast ren in\n            ignore (P.apply_opt t\n              (K.Opt.Tc { axis = 0; tc_select = 0; tc_opt = 0; use_tc = 2 }));\n            raises_opt_error (fun () ->\n              ignore (P.apply_opt t (K.Opt.Grouptop { axis = 0; amount = 2 }))));\n        ];\n\n      (* Apply_tc_opt triggering *)\n\n      (* NOTE: Tests that use use_tc=1 on TCs with local opts (metal, cuda,\n         amd) hit a bug in apply_tc_opt (postrange.ml:914-921): the ne list\n         contains non-range nodes (warp % 2) from local shift_to, but tne\n         creation assumes all ne elements are ranges and calls K.range_kind\n         on them. AMX has no local opts (all 'u' opts), so it avoids this\n         bug. We test use_tc=2 for metal (skips WMMA construction) and\n         use_tc=1 for AMX (full path). *)\n\n      group \"apply_tc_opt triggering\"\n        [\n          (* use_tc=2 tests TC matching and shift_to without WMMA construction *)\n          test \"TC triggers on f32 8x8x8 matmul with metal tc (use_tc=2)\" (fun () ->\n            let ast = matmul_f32_global_ast ~m:8 ~n:8 ~k:8 in\n            let ren = tc_renderer Tc.metal in\n            let t = P.create ast ren in\n            let result =\n              P.apply_opt t\n                (K.Opt.Tc { axis = 0; tc_select = 0; tc_opt = 0; use_tc = 2 })\n            in\n            is_true (result <> None);\n            is_true\n              (List.exists\n                 (function K.Opt.Tc _ -> true | _ -> false)\n                 (P.applied_opts t)));\n\n          test \"TC auto-selects with tc_select=-1 (use_tc=2)\" (fun () ->\n            let ast = matmul_f32_global_ast ~m:8 ~n:8 ~k:8 in\n            let ren = tc_renderer Tc.metal in\n            let t = P.create ast ren in\n            let result =\n              P.apply_opt t\n  
              (K.Opt.Tc { axis = 0; tc_select = -1; tc_opt = 0; use_tc = 2 })\n            in\n            is_true (result <> None));\n\n          test \"TC triggers on f16 matmul with cuda sm80 tc (use_tc=2)\" (fun () ->\n            let ast = matmul_f16_global_ast ~m:16 ~n:16 ~k:16 in\n            let ren = tc_renderer Tc.cuda_sm80 in\n            let t = P.create ast ren in\n            let result =\n              P.apply_opt t\n                (K.Opt.Tc { axis = 0; tc_select = -1; tc_opt = 0; use_tc = 2 })\n            in\n            is_true (result <> None));\n\n          (* AMX has no local opts, so use_tc=2 avoids the tne bug.\n             use_tc=1 triggers a second bug: shape_str_to_axis fails because\n             AMX's 8 upcast opts create ranges that base_upcast_axes can't\n             resolve.  Tested with use_tc=2 to verify matching. *)\n          test \"TC triggers on f32 16x16 matmul with AMX tc (use_tc=2)\" (fun () ->\n            let ast = matmul_f32_global_ast ~m:16 ~n:16 ~k:2 in\n            let ren = tc_renderer Tc.amx in\n            let t = P.create ast ren in\n            let result =\n              P.apply_opt t\n                (K.Opt.Tc { axis = 0; tc_select = 0; tc_opt = 0; use_tc = 2 })\n            in\n            is_true (result <> None));\n        ];\n\n      (* Apply_tc_opt padding *)\n\n      group \"apply_tc_opt padding\"\n        [\n          (* tc_opt=2 enables padding.\n             Metal TC is 8x8x8; a 7x7x7 matmul needs padding to 8x8x8. 
*)\n          test \"TC padding with tc_opt=2 succeeds on unaligned dims\" (fun () ->\n            let ast = matmul_f32_global_ast ~m:7 ~n:7 ~k:7 in\n            let ren = tc_renderer Tc.metal in\n            let t = P.create ast ren in\n            let result =\n              P.apply_opt t\n                (K.Opt.Tc { axis = 0; tc_select = 0; tc_opt = 2; use_tc = 2 })\n            in\n            is_true (result <> None));\n\n          (* tc_opt=0 on unaligned dims should fail *)\n          test \"TC padding rejected with tc_opt=0\" (fun () ->\n            let ast = matmul_f32_global_ast ~m:9 ~n:9 ~k:9 in\n            let ren = tc_renderer Tc.metal in\n            let t = P.create ast ren in\n            raises_opt_error (fun () ->\n              ignore (P.apply_opt t\n                (K.Opt.Tc { axis = 0; tc_select = 0; tc_opt = 0; use_tc = 1 }))));\n\n          (* tc_opt=1 on unaligned dims should also fail *)\n          test \"TC padding rejected with tc_opt=1\" (fun () ->\n            let ast = matmul_f32_global_ast ~m:9 ~n:9 ~k:9 in\n            let ren = tc_renderer Tc.metal in\n            let t = P.create ast ren in\n            raises_opt_error (fun () ->\n              ignore (P.apply_opt t\n                (K.Opt.Tc { axis = 0; tc_select = 0; tc_opt = 1; use_tc = 1 }))));\n\n          (* Excessive padding: dims/4 *)\n          test \"TC excessive padding rejected (dims/4)\" (fun () ->\n            let ast = matmul_f32_global_ast ~m:2 ~n:2 ~k:2 in\n            let ren = tc_renderer Tc.metal in\n            let t = P.create ast ren in\n            raises_opt_error (fun () ->\n              ignore (P.apply_opt t\n                (K.Opt.Tc { axis = 0; tc_select = 0; tc_opt = 2; use_tc = 1 }))));\n        ];\n\n      (* Apply_tc_opt WMMA construction (AMX -- no local opts) *)\n\n      group \"apply_tc_opt WMMA construction\"\n        [\n          (* use_tc=2 applies shifts but skips WMMA construction *)\n          test \"TC with use_tc=2 skips WMMA 
construction\" (fun () ->\n            let ast = matmul_f32_global_ast ~m:8 ~n:8 ~k:8 in\n            let ren = tc_renderer Tc.metal in\n            let t = P.create ast ren in\n            let result =\n              P.apply_opt t\n                (K.Opt.Tc { axis = 0; tc_select = 0; tc_opt = 0; use_tc = 2 })\n            in\n            is_true (result <> None);\n            is_true (not (has_wmma (P.ast t))));\n\n          (* use_tc=2 records the TC opt in applied_opts *)\n          test \"TC records opt in applied_opts (use_tc=2)\" (fun () ->\n            let ast = matmul_f32_global_ast ~m:8 ~n:8 ~k:8 in\n            let ren = tc_renderer Tc.metal in\n            let t = P.create ast ren in\n            ignore (P.apply_opt t\n              (K.Opt.Tc { axis = 0; tc_select = 0; tc_opt = 0; use_tc = 2 }));\n            is_true\n              (List.exists\n                 (function K.Opt.Tc _ -> true | _ -> false)\n                 (P.applied_opts t)));\n\n          (* Port of test_tensor_cores_codegen: WMMA node in AST with\n             metal TC (use_tc=1, full path including WMMA construction) *)\n          test \"TC produces WMMA node in AST (metal use_tc=1)\" (fun () ->\n            let ast = matmul_f32_global_ast ~m:8 ~n:8 ~k:8 in\n            let ren = tc_renderer Tc.metal in\n            let t = P.create ast ren in\n            ignore (P.apply_opt t\n              (K.Opt.Tc { axis = 0; tc_select = 0; tc_opt = 0; use_tc = 1 }));\n            is_true (has_wmma (P.ast t));\n            is_true (has_contract (P.ast t));\n            is_true (has_unroll (P.ast t)));\n        ];\n\n      (* Port of test_tensor_core_opts / test_tensor_core_opts_locals *)\n      group \"apply_tc_opt with other opts\"\n        [\n          (* TC + UPCAST: use 32x32x8 so global axes remain > 1 after TC\n             splits.  Metal TC splits 8 elements per dim, leaving 32/8=4 per\n             global axis. Port of test_tensor_core_opts [Opt(UPCAST,0,4)]. 
*)\n          test \"UPCAST after TC\" (fun () ->\n            let ast = matmul_f32_global_ast ~m:32 ~n:32 ~k:8 in\n            let ren = tc_renderer Tc.metal in\n            let t = P.create ast ren in\n            ignore (P.apply_opt t\n              (K.Opt.Tc { axis = 0; tc_select = 0; tc_opt = 0; use_tc = 1 }));\n            let upcastable = P.upcastable_dims t in\n            is_true (List.length upcastable > 0);\n            let axis = List.hd upcastable in\n            let fs = P.full_shape t in\n            let sz = K.const_to_int (List.nth fs axis) in\n            if sz >= 2 then\n              ignore (P.apply_opt t (K.Opt.Upcast { axis; amount = 2 })));\n\n          (* TC + UNROLL *)\n          test \"UNROLL after TC\" (fun () ->\n            let ast = matmul_f32_global_ast ~m:8 ~n:8 ~k:8 in\n            let ren = tc_renderer Tc.metal in\n            let t = P.create ast ren in\n            ignore (P.apply_opt t\n              (K.Opt.Tc { axis = 0; tc_select = 0; tc_opt = 0; use_tc = 1 }));\n            let unroll_dims = P.unrollable_dims t in\n            if List.length unroll_dims > 0 then begin\n              let fs = P.full_shape t in\n              let axis_idx = List.hd unroll_dims in\n              let sz = K.const_to_int (List.nth fs axis_idx) in\n              if sz >= 2 then\n                ignore (P.apply_opt t (K.Opt.Unroll { axis = 0; amount = min sz 2 }))\n            end);\n\n          (* TC + LOCAL *)\n          test \"LOCAL after TC\" (fun () ->\n            let ast = matmul_f32_global_ast ~m:8 ~n:8 ~k:8 in\n            let ren = tc_renderer Tc.metal in\n            let t = P.create ast ren in\n            ignore (P.apply_opt t\n              (K.Opt.Tc { axis = 0; tc_select = 0; tc_opt = 0; use_tc = 1 }));\n            let upcastable = P.upcastable_dims t in\n            if List.length upcastable > 0 then begin\n              let axis = List.hd upcastable in\n              let fs = P.full_shape t in\n              let sz = K.const_to_int 
(List.nth fs axis) in\n              if sz >= 2 then\n                ignore (P.apply_opt t (K.Opt.Local { axis; amount = 2 }))\n            end);\n        ];\n    ]\n"
  },
  {
    "path": "packages/tolk/test/unit/test_cstyle.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Windtrap\nopen Tolk\nopen Tolk_ir\nmodule P = Program\n\nlet all_renderers =\n  [\n    (\"clang\", Cstyle.clang);\n    (\"cuda\", Cstyle.cuda Gpu_target.SM80);\n    (\"metal\", Cstyle.metal);\n    (\"opencl\", Cstyle.opencl);\n  ]\n\nlet gpu_renderers = List.filter (fun (name, _) -> name <> \"clang\") all_renderers\n\n(* Helpers *)\n\nlet dt = Dtype.Val.float32\nlet global_ptr dt = Dtype.Ptr.create dt ~addrspace:Global ~size:(-1)\nlet local_ptr dt = Dtype.Ptr.create dt ~addrspace:Local ~size:(-1)\nlet render r prog = Renderer.render r prog\nlet render_with_images r kernel =\n  Renderer.render r (Linearizer.linearize (Images.rewrite r kernel))\nlet int32_c n = Const.int Dtype.Val.int32 n\nlet float_c dt v = Const.float dt v\n\nlet contains haystack needle =\n  let hl = String.length haystack and nl = String.length needle in\n  if nl = 0 then true\n  else if nl > hl then false\n  else\n    let rec loop i =\n      if i > hl - nl then false\n      else if String.sub haystack i nl = needle then true\n      else loop (i + 1)\n    in\n    loop 0\n\nlet count_char s c =\n  let n = ref 0 in\n  String.iter (fun ch -> if ch = c then incr n) s;\n  !n\n\nlet count_substring s sub =\n  let sl = String.length s and nl = String.length sub in\n  let rec loop i acc =\n    if i > sl - nl then acc\n    else if String.sub s i nl = sub then loop (i + 1) (acc + 1)\n    else loop (i + 1) acc\n  in\n  loop 0 0\n\nlet assert_contains msg haystack needle =\n  if not (contains haystack needle) then\n    failwith\n      (Printf.sprintf \"%s: expected output to contain %S, got:\\n%s\" msg needle\n         haystack)\n\nlet assert_not_contains msg haystack needle =\n  if contains haystack needle then\n    failwith\n      
(Printf.sprintf \"%s: expected output NOT to contain %S, got:\\n%s\" msg\n         needle haystack)\n\nlet for_each_renderer renderers f =\n  List.iter (fun (name, renderer) -> f name renderer) renderers\n\nlet assert_equal_string msg expected actual =\n  if not (String.equal expected actual) then\n    failwith\n      (Printf.sprintf \"%s: expected:\\n%s\\n\\ngot:\\n%s\" msg expected actual)\n\n(* IR Program Builders *)\n\nlet make_store_const dt const_value =\n  let ptr = global_ptr dt in\n  let b = P.create () in\n  let p0 = P.emit b (Param { idx = 0; dtype = ptr }) in\n  let cv = P.emit b (Const { value = const_value; dtype = dt }) in\n  let c0 = P.emit b (Const { value = int32_c 0; dtype = Dtype.Val.int32 }) in\n  let idx = P.emit b (Index { ptr = p0; idxs = [ c0 ]; gate = None; dtype = ptr }) in\n  let _ = P.emit b (Store { dst = idx; value = cv }) in\n  P.finish b\n\nlet make_binop dt mk_op =\n  let ptr = global_ptr dt in\n  let b = P.create () in\n  let p0 = P.emit b (Param { idx = 0; dtype = ptr }) in\n  let p1 = P.emit b (Param { idx = 1; dtype = ptr }) in\n  let p2 = P.emit b (Param { idx = 2; dtype = ptr }) in\n  let c0 = P.emit b (Const { value = int32_c 0; dtype = Dtype.Val.int32 }) in\n  let idx0 = P.emit b (Index { ptr = p0; idxs = [ c0 ]; gate = None; dtype = ptr }) in\n  let idx1 = P.emit b (Index { ptr = p1; idxs = [ c0 ]; gate = None; dtype = ptr }) in\n  let idx2 = P.emit b (Index { ptr = p2; idxs = [ c0 ]; gate = None; dtype = ptr }) in\n  let ld0 = P.emit b (Load { src = idx0; alt = None; dtype = dt }) in\n  let ld1 = P.emit b (Load { src = idx1; alt = None; dtype = dt }) in\n  let op_result = P.emit b (mk_op ld0 ld1 dt) in\n  let _ = P.emit b (Store { dst = idx2; value = op_result }) in\n  P.finish b\n\nlet make_simple_add_f32 () =\n  make_binop dt (fun lhs rhs dtype ->\n      P.Binary { op = `Add; lhs; rhs; dtype })\n\nlet make_unop dt mk_op =\n  let ptr = global_ptr dt in\n  let b = P.create () in\n  let p0 = P.emit b (Param { idx = 0; 
dtype = ptr }) in\n  let p1 = P.emit b (Param { idx = 1; dtype = ptr }) in\n  let c0 = P.emit b (Const { value = int32_c 0; dtype = Dtype.Val.int32 }) in\n  let idx0 = P.emit b (Index { ptr = p0; idxs = [ c0 ]; gate = None; dtype = ptr }) in\n  let idx1 = P.emit b (Index { ptr = p1; idxs = [ c0 ]; gate = None; dtype = ptr }) in\n  let ld = P.emit b (Load { src = idx0; alt = None; dtype = dt }) in\n  let op_result = P.emit b (mk_op ld dt) in\n  let _ = P.emit b (Store { dst = idx1; value = op_result }) in\n  P.finish b\n\nlet make_ternary_where dt =\n  let ptr = global_ptr dt in\n  let b = P.create () in\n  let p0 = P.emit b (Param { idx = 0; dtype = ptr }) in\n  let p1 = P.emit b (Param { idx = 1; dtype = ptr }) in\n  let p2 = P.emit b (Param { idx = 2; dtype = ptr }) in\n  let c0 = P.emit b (Const { value = int32_c 0; dtype = Dtype.Val.int32 }) in\n  let idx0 = P.emit b (Index { ptr = p0; idxs = [ c0 ]; gate = None; dtype = ptr }) in\n  let idx1 = P.emit b (Index { ptr = p1; idxs = [ c0 ]; gate = None; dtype = ptr }) in\n  let idx2 = P.emit b (Index { ptr = p2; idxs = [ c0 ]; gate = None; dtype = ptr }) in\n  let ld0 = P.emit b (Load { src = idx0; alt = None; dtype = dt }) in\n  let ld1 = P.emit b (Load { src = idx1; alt = None; dtype = dt }) in\n  let cond = P.emit b (Const { value = Const.bool true; dtype = Dtype.Val.bool }) in\n  let w = P.emit b (Ternary { op = `Where; a = cond; b = ld0; c = ld1; dtype = dt }) in\n  let _ = P.emit b (Store { dst = idx2; value = w }) in\n  P.finish b\n\nlet make_mulacc dt =\n  let ptr = global_ptr dt in\n  let b = P.create () in\n  let p0 = P.emit b (Param { idx = 0; dtype = ptr }) in\n  let p1 = P.emit b (Param { idx = 1; dtype = ptr }) in\n  let p2 = P.emit b (Param { idx = 2; dtype = ptr }) in\n  let p3 = P.emit b (Param { idx = 3; dtype = ptr }) in\n  let c0 = P.emit b (Const { value = int32_c 0; dtype = Dtype.Val.int32 }) in\n  let idx0 = P.emit b (Index { ptr = p0; idxs = [ c0 ]; gate = None; dtype = ptr }) in\n  let idx1 
= P.emit b (Index { ptr = p1; idxs = [ c0 ]; gate = None; dtype = ptr }) in\n  let idx2 = P.emit b (Index { ptr = p2; idxs = [ c0 ]; gate = None; dtype = ptr }) in\n  let idx3 = P.emit b (Index { ptr = p3; idxs = [ c0 ]; gate = None; dtype = ptr }) in\n  let ld0 = P.emit b (Load { src = idx0; alt = None; dtype = dt }) in\n  let ld1 = P.emit b (Load { src = idx1; alt = None; dtype = dt }) in\n  let ld2 = P.emit b (Load { src = idx2; alt = None; dtype = dt }) in\n  let mac = P.emit b (Ternary { op = `Mulacc; a = ld0; b = ld1; c = ld2; dtype = dt }) in\n  let _ = P.emit b (Store { dst = idx3; value = mac }) in\n  P.finish b\n\nlet make_loop () =\n  let ptr = global_ptr dt in\n  let b = P.create () in\n  let p0 = P.emit b (Param { idx = 0; dtype = ptr }) in\n  let c10 = P.emit b (Const { value = int32_c 10; dtype = Dtype.Val.int32 }) in\n  let r = P.emit b (Range { size = c10; dtype = Dtype.Val.int32; axis = 0; sub = []; kind = Axis_kind.Loop }) in\n  let idx0 = P.emit b (Index { ptr = p0; idxs = [ r ]; gate = None; dtype = ptr }) in\n  let ld = P.emit b (Load { src = idx0; alt = None; dtype = dt }) in\n  let idx1 = P.emit b (Index { ptr = p0; idxs = [ r ]; gate = None; dtype = ptr }) in\n  let _ = P.emit b (Store { dst = idx1; value = ld }) in\n  let _ = P.emit b (End_range { dep = ld; range = r }) in\n  P.finish b\n\nlet make_nested_loops () =\n  let ptr = global_ptr dt in\n  let b = P.create () in\n  let p0 = P.emit b (Param { idx = 0; dtype = ptr }) in\n  let c10 = P.emit b (Const { value = int32_c 10; dtype = Dtype.Val.int32 }) in\n  let c5 = P.emit b (Const { value = int32_c 5; dtype = Dtype.Val.int32 }) in\n  let r0 = P.emit b (Range { size = c10; dtype = Dtype.Val.int32; axis = 0; sub = []; kind = Axis_kind.Loop }) in\n  let r1 = P.emit b (Range { size = c5; dtype = Dtype.Val.int32; axis = 1; sub = []; kind = Axis_kind.Loop }) in\n  let sum = P.emit b (Binary { op = `Add; lhs = r0; rhs = r1; dtype = Dtype.Val.int32 }) in\n  let idx0 = P.emit b (Index { ptr = 
p0; idxs = [ sum ]; gate = None; dtype = ptr }) in\n  let ld = P.emit b (Load { src = idx0; alt = None; dtype = dt }) in\n  let idx1 = P.emit b (Index { ptr = p0; idxs = [ sum ]; gate = None; dtype = ptr }) in\n  let _ = P.emit b (Store { dst = idx1; value = ld }) in\n  let _ = P.emit b (End_range { dep = ld; range = r1 }) in\n  let _ = P.emit b (End_range { dep = r0; range = r0 }) in\n  P.finish b\n\nlet make_special dim =\n  let ptr = global_ptr Dtype.Val.int32 in\n  let b = P.create () in\n  let p0 = P.emit b (Param { idx = 0; dtype = ptr }) in\n  let c64 = P.emit b (Const { value = int32_c 64; dtype = Dtype.Val.int32 }) in\n  let sp = P.emit b (Special { dim; size = c64; dtype = Dtype.Val.int32 }) in\n  let idx = P.emit b (Index { ptr = p0; idxs = [ sp ]; gate = None; dtype = ptr }) in\n  let _ = P.emit b (Store { dst = idx; value = sp }) in\n  P.finish b\n\nlet make_shared_memory () =\n  let gptr = global_ptr dt in\n  let lptr = local_ptr dt in\n  let b = P.create () in\n  let p0 = P.emit b (Param { idx = 0; dtype = gptr }) in\n  let dl = P.emit b (Define_local { size = 256; dtype = lptr }) in\n  let c0 = P.emit b (Const { value = int32_c 0; dtype = Dtype.Val.int32 }) in\n  let lidx = P.emit b (Index { ptr = dl; idxs = [ c0 ]; gate = None; dtype = lptr }) in\n  let fzero = P.emit b (Const { value = float_c dt 0.0; dtype = dt }) in\n  let _ = P.emit b (Store { dst = lidx; value = fzero }) in\n  let _ = P.emit b Barrier in\n  let ld = P.emit b (Load { src = lidx; alt = None; dtype = dt }) in\n  let gidx = P.emit b (Index { ptr = p0; idxs = [ c0 ]; gate = None; dtype = gptr }) in\n  let _ = P.emit b (Store { dst = gidx; value = ld }) in\n  P.finish b\n\nlet make_gated_load () =\n  let ptr = global_ptr dt in\n  let b = P.create () in\n  let p0 = P.emit b (Param { idx = 0; dtype = ptr }) in\n  let p1 = P.emit b (Param { idx = 1; dtype = ptr }) in\n  let c0 = P.emit b (Const { value = int32_c 0; dtype = Dtype.Val.int32 }) in\n  let gate = P.emit b (Const { value = 
Const.bool true; dtype = Dtype.Val.bool }) in\n  let idx0 = P.emit b (Index { ptr = p0; idxs = [ c0 ]; gate = Some gate; dtype = ptr }) in\n  let alt = P.emit b (Const { value = float_c dt 0.0; dtype = dt }) in\n  let ld = P.emit b (Load { src = idx0; alt = Some alt; dtype = dt }) in\n  let idx1 = P.emit b (Index { ptr = p1; idxs = [ c0 ]; gate = None; dtype = ptr }) in\n  let _ = P.emit b (Store { dst = idx1; value = ld }) in\n  P.finish b\n\nlet make_image_load () =\n  let module K = Kernel in\n  let float4 = Dtype.Val.vec 4 dt in\n  let img_ptr = global_ptr float4 in\n  let buf_ptr = global_ptr float4 in\n  let img = K.param_image ~idx:0 ~dtype:img_ptr ~width:4 ~height:4 in\n  let buf = K.param ~idx:1 ~dtype:buf_ptr in\n  let c0 = K.const_int 0 and c1 = K.const_int 1 in\n  let src = K.index ~ptr:img ~idxs:[ c0; c1 ] () in\n  let dst = K.index ~ptr:buf ~idxs:[ c0 ] () in\n  K.sink [ K.store ~dst ~value:(K.load ~src ()) ~ranges:[] ]\n\nlet make_image_store () =\n  let module K = Kernel in\n  let float4 = Dtype.Val.vec 4 dt in\n  let img_ptr = global_ptr float4 in\n  let buf_ptr = global_ptr float4 in\n  let img = K.param_image ~idx:0 ~dtype:img_ptr ~width:4 ~height:4 in\n  let buf = K.param ~idx:1 ~dtype:buf_ptr in\n  let c0 = K.const_int 0 and c1 = K.const_int 1 in\n  let src = K.index ~ptr:buf ~idxs:[ c0 ] () in\n  let dst = K.index ~ptr:img ~idxs:[ c0; c1 ] () in\n  K.sink [ K.store ~dst ~value:(K.load ~src ()) ~ranges:[] ]\n\nlet make_type_convert ~from_dt ~to_dt mk_convert =\n  let from_ptr = global_ptr from_dt in\n  let to_ptr = global_ptr to_dt in\n  let b = P.create () in\n  let p0 = P.emit b (Param { idx = 0; dtype = from_ptr }) in\n  let p1 = P.emit b (Param { idx = 1; dtype = to_ptr }) in\n  let c0 = P.emit b (Const { value = int32_c 0; dtype = Dtype.Val.int32 }) in\n  let idx0 = P.emit b (Index { ptr = p0; idxs = [ c0 ]; gate = None; dtype = from_ptr }) in\n  let idx1 = P.emit b (Index { ptr = p1; idxs = [ c0 ]; gate = None; dtype = to_ptr }) in\n  let 
ld = P.emit b (Load { src = idx0; alt = None; dtype = from_dt }) in\n  let converted = P.emit b (mk_convert ld) in\n  let _ = P.emit b (Store { dst = idx1; value = converted }) in\n  P.finish b\n\nlet make_cast ~from_dt ~to_dt =\n  make_type_convert ~from_dt ~to_dt (fun src -> P.Cast { src; dtype = to_dt })\n\nlet make_bitcast ~from_dt ~to_dt =\n  make_type_convert ~from_dt ~to_dt (fun src -> P.Bitcast { src; dtype = to_dt })\n\nlet make_vectorize_gep () =\n  let vdt = Dtype.Val.vec 4 dt in\n  let ptr = global_ptr dt in\n  let b = P.create () in\n  let p0 = P.emit b (Param { idx = 0; dtype = ptr }) in\n  let p1 = P.emit b (Param { idx = 1; dtype = ptr }) in\n  let c0 = P.emit b (Const { value = int32_c 0; dtype = Dtype.Val.int32 }) in\n  let c1 = P.emit b (Const { value = int32_c 1; dtype = Dtype.Val.int32 }) in\n  let c2 = P.emit b (Const { value = int32_c 2; dtype = Dtype.Val.int32 }) in\n  let c3 = P.emit b (Const { value = int32_c 3; dtype = Dtype.Val.int32 }) in\n  let idx0 = P.emit b (Index { ptr = p0; idxs = [ c0 ]; gate = None; dtype = ptr }) in\n  let idx1 = P.emit b (Index { ptr = p0; idxs = [ c1 ]; gate = None; dtype = ptr }) in\n  let idx2 = P.emit b (Index { ptr = p0; idxs = [ c2 ]; gate = None; dtype = ptr }) in\n  let idx3 = P.emit b (Index { ptr = p0; idxs = [ c3 ]; gate = None; dtype = ptr }) in\n  let ld0 = P.emit b (Load { src = idx0; alt = None; dtype = dt }) in\n  let ld1 = P.emit b (Load { src = idx1; alt = None; dtype = dt }) in\n  let ld2 = P.emit b (Load { src = idx2; alt = None; dtype = dt }) in\n  let ld3 = P.emit b (Load { src = idx3; alt = None; dtype = dt }) in\n  let vec = P.emit b (Vectorize { srcs = [ ld0; ld1; ld2; ld3 ]; dtype = vdt }) in\n  let gep = P.emit b (Gep { src = vec; idxs = [2]; dtype = dt }) in\n  let oidx = P.emit b (Index { ptr = p1; idxs = [ c0 ]; gate = None; dtype = ptr }) in\n  let _ = P.emit b (Store { dst = oidx; value = gep }) in\n  P.finish b\n\nlet make_custom () =\n  let ptr = global_ptr dt in\n  let b = 
P.create () in\n  let p0 = P.emit b (Param { idx = 0; dtype = ptr }) in\n  let c0 = P.emit b (Const { value = int32_c 0; dtype = Dtype.Val.int32 }) in\n  let idx = P.emit b (Index { ptr = p0; idxs = [ c0 ]; gate = None; dtype = ptr }) in\n  let ld = P.emit b (Load { src = idx; alt = None; dtype = dt }) in\n  let ci = P.emit b (Custom_inline { fmt = \"custom_func({0}, {0})\"; args = [ ld ]; dtype = dt }) in\n  let _ = P.emit b (Store { dst = idx; value = ci }) in\n  P.finish b\n\nlet make_define_var () =\n  let ptr = global_ptr dt in\n  let b = P.create () in\n  let p0 = P.emit b (Param { idx = 0; dtype = ptr }) in\n  let dv = P.emit b (Define_var { name = \"n\"; lo = 0; hi = 1024; dtype = Dtype.Val.int32 }) in\n  let idx = P.emit b (Index { ptr = p0; idxs = [ dv ]; gate = None; dtype = ptr }) in\n  let ld = P.emit b (Load { src = idx; alt = None; dtype = dt }) in\n  let _ = P.emit b (Store { dst = idx; value = ld }) in\n  P.finish b\n\nlet make_chained_binop dt mk_op n =\n  let ptr = global_ptr dt in\n  let b = P.create () in\n  let p0 = P.emit b (Param { idx = 0; dtype = ptr }) in\n  let p1 = P.emit b (Param { idx = 1; dtype = ptr }) in\n  let c0 = P.emit b (Const { value = int32_c 0; dtype = Dtype.Val.int32 }) in\n  let idx_in = P.emit b (Index { ptr = p1; idxs = [ c0 ]; gate = None; dtype = ptr }) in\n  let ld = P.emit b (Load { src = idx_in; alt = None; dtype = dt }) in\n  let result = ref ld in\n  for _ = 0 to n - 1 do\n    result := P.emit b (mk_op !result ld dt)\n  done;\n  let idx_out = P.emit b (Index { ptr = p0; idxs = [ c0 ]; gate = None; dtype = ptr }) in\n  let _ = P.emit b (Store { dst = idx_out; value = !result }) in\n  P.finish b\n\nlet make_conditional () =\n  let ptr = global_ptr dt in\n  let b = P.create () in\n  let p0 = P.emit b (Param { idx = 0; dtype = ptr }) in\n  let c0 = P.emit b (Const { value = int32_c 0; dtype = Dtype.Val.int32 }) in\n  let idx = P.emit b (Index { ptr = p0; idxs = [ c0 ]; gate = None; dtype = ptr }) in\n  let cond = 
P.emit b (Const { value = Const.bool true; dtype = Dtype.Val.bool }) in\n  let if_ = P.emit b (If { cond; idx_for_dedup = idx }) in\n  let fval = P.emit b (Const { value = float_c dt 42.0; dtype = dt }) in\n  let _ = P.emit b (Store { dst = idx; value = fval }) in\n  let _ = P.emit b (Endif { if_ }) in\n  P.finish b\n\nlet make_launch_bounds () =\n  let ptr = global_ptr Dtype.Val.int32 in\n  let b = P.create () in\n  let p0 = P.emit b (Param { idx = 0; dtype = ptr }) in\n  let c64 = P.emit b (Const { value = int32_c 64; dtype = Dtype.Val.int32 }) in\n  let lid0 = P.emit b (Special { dim = Special_dim.Local_id 0; size = c64; dtype = Dtype.Val.int32 }) in\n  let c4 = P.emit b (Const { value = int32_c 4; dtype = Dtype.Val.int32 }) in\n  let lid1 = P.emit b (Special { dim = Special_dim.Local_id 1; size = c4; dtype = Dtype.Val.int32 }) in\n  let sum = P.emit b (Binary { op = `Add; lhs = lid0; rhs = lid1; dtype = Dtype.Val.int32 }) in\n  let idx = P.emit b (Index { ptr = p0; idxs = [ sum ]; gate = None; dtype = ptr }) in\n  let _ = P.emit b (Store { dst = idx; value = sum }) in\n  P.finish b\n\n(* Frequently-used programs *)\n\nlet f32_1 = make_store_const dt (float_c dt 1.0)\n\n(* Comparison program builder: loads from two float32 inputs, applies cmp, stores bool *)\nlet make_comparison mk_op =\n  let in_ptr = global_ptr dt in\n  let out_ptr = global_ptr Dtype.Val.bool in\n  let b = P.create () in\n  let p0 = P.emit b (Param { idx = 0; dtype = in_ptr }) in\n  let p1 = P.emit b (Param { idx = 1; dtype = in_ptr }) in\n  let p2 = P.emit b (Param { idx = 2; dtype = out_ptr }) in\n  let c0 = P.emit b (Const { value = int32_c 0; dtype = Dtype.Val.int32 }) in\n  let idx0 = P.emit b (Index { ptr = p0; idxs = [ c0 ]; gate = None; dtype = in_ptr }) in\n  let idx1 = P.emit b (Index { ptr = p1; idxs = [ c0 ]; gate = None; dtype = in_ptr }) in\n  let idx2 = P.emit b (Index { ptr = p2; idxs = [ c0 ]; gate = None; dtype = out_ptr }) in\n  let ld0 = P.emit b (Load { src = idx0; alt = 
None; dtype = dt }) in\n  let ld1 = P.emit b (Load { src = idx1; alt = None; dtype = dt }) in\n  let cmp = P.emit b (mk_op ld0 ld1 Dtype.Val.bool) in\n  let _ = P.emit b (Store { dst = idx2; value = cmp }) in\n  P.finish b\n\n(* Property test support *)\n\nlet renderer_testable =\n  let gen = Gen.oneofl all_renderers in\n  let pp fmt (name, _) = Format.pp_print_string fmt name in\n  testable ~pp ~equal:(fun (a, _) (b, _) -> String.equal a b) ~gen ()\n\nlet safe_dtypes = [ Dtype.Val.int32; Dtype.Val.float32; Dtype.Val.float64; Dtype.Val.uint32 ]\n\nlet safe_dtype =\n  let gen = Gen.oneofl safe_dtypes in\n  testable ~pp:Dtype.Val.pp ~equal:Dtype.Val.equal ~gen ()\n\n(* Runner *)\n\nlet () =\n  run \"Renderer\"\n    [\n      group \"Constants\"\n        [\n          test \"int constant\" (fun () ->\n            let prog = make_store_const Dtype.Val.int32 (int32_c 42) in\n            for_each_renderer all_renderers (fun name r ->\n                assert_contains (name ^ \" int 42\") (render r prog) \"42\"));\n          test \"float32 constant\" (fun () ->\n            let prog = make_store_const dt (float_c dt 3.14) in\n            for_each_renderer all_renderers (fun name r ->\n                let out = render r prog in\n                assert_contains (name ^ \" float32 3.14\") out \"3.14\";\n                assert_contains (name ^ \" float32 f suffix\") out \"f\"));\n          test \"float64 constant\" (fun () ->\n            let prog = make_store_const Dtype.Val.float64 (float_c Dtype.Val.float64 3.14) in\n            for_each_renderer all_renderers (fun name r ->\n                assert_contains (name ^ \" float64 3.14\") (render r prog) \"3.14\"));\n          test \"bool constants\" (fun () ->\n            for_each_renderer all_renderers (fun name r ->\n                assert_contains (name ^ \" bool true\")\n                  (render r (make_store_const Dtype.Val.bool (Const.bool true))) \"1\";\n                assert_contains (name ^ \" bool false\")\n          
        (render r (make_store_const Dtype.Val.bool (Const.bool false))) \"0\"));\n          test \"nan/inf constants\" (fun () ->\n            let nan_prog = make_store_const dt (float_c dt Float.nan) in\n            let inf_prog = make_store_const dt (float_c dt Float.infinity) in\n            let neg_inf_prog = make_store_const dt (float_c dt Float.neg_infinity) in\n            List.iter\n              (fun (name, r) ->\n                assert_contains (name ^ \" NAN\") (render r nan_prog) \"NAN\";\n                assert_contains (name ^ \" INFINITY\") (render r inf_prog) \"INFINITY\";\n                assert_contains (name ^ \" -INFINITY\") (render r neg_inf_prog) \"INFINITY\")\n              [\n                (\"cuda\", Cstyle.cuda Gpu_target.SM80);\n                (\"metal\", Cstyle.metal);\n                (\"opencl\", Cstyle.opencl);\n              ];\n            let nan_out = render Cstyle.clang nan_prog in\n            assert_contains \"clang NAN\" nan_out \"__builtin_nanf\";\n            assert_contains \"clang INF\" (render Cstyle.clang inf_prog) \"__builtin_inff\");\n          test \"int64 suffix\" (fun () ->\n            let prog = make_store_const Dtype.Val.int64 (Const.int Dtype.Val.int64 12345) in\n            for_each_renderer all_renderers (fun name r ->\n                assert_contains (name ^ \" int64 ll suffix\") (render r prog) \"12345ll\"));\n          test \"uint32 suffix\" (fun () ->\n            let prog = make_store_const Dtype.Val.uint32 (Const.int Dtype.Val.uint32 42) in\n            for_each_renderer all_renderers (fun name r ->\n                assert_contains (name ^ \" uint32 u suffix\") (render r prog) \"42u\"));\n          test \"uint64 suffix\" (fun () ->\n            let prog = make_store_const Dtype.Val.uint64 (Const.int Dtype.Val.uint64 42) in\n            for_each_renderer all_renderers (fun name r ->\n                assert_contains (name ^ \" uint64 ull suffix\") (render r prog) \"42ull\"));\n        ];\n      group 
\"ALU Operations\"\n        [\n          group \"Binary\"\n            [\n              test \"arithmetic operators\" (fun () ->\n                let ops =\n                  [\n                    (\"Add +\", (fun l r dt -> P.Binary { op = `Add; lhs = l; rhs = r; dtype = dt }), \"+\");\n                    (\"Sub -\", (fun l r dt -> P.Binary { op = `Sub; lhs = l; rhs = r; dtype = dt }), \"-\");\n                    (\"Mul *\", (fun l r dt -> P.Binary { op = `Mul; lhs = l; rhs = r; dtype = dt }), \"*\");\n                    (\"Fdiv /\", (fun l r dt -> P.Binary { op = `Fdiv; lhs = l; rhs = r; dtype = dt }), \"/\");\n                    (\"Mod %\", (fun l r dt -> P.Binary { op = `Mod; lhs = l; rhs = r; dtype = dt }), \"%\");\n                    (\"Shl <<\", (fun l r dt -> P.Binary { op = `Shl; lhs = l; rhs = r; dtype = dt }), \"<<\");\n                    (\"Shr >>\", (fun l r dt -> P.Binary { op = `Shr; lhs = l; rhs = r; dtype = dt }), \">>\");\n                    (\"And &\", (fun l r dt -> P.Binary { op = `And; lhs = l; rhs = r; dtype = dt }), \"&\");\n                    (\"Or |\", (fun l r dt -> P.Binary { op = `Or; lhs = l; rhs = r; dtype = dt }), \"|\");\n                    (\"Xor ^\", (fun l r dt -> P.Binary { op = `Xor; lhs = l; rhs = r; dtype = dt }), \"^\");\n                  ]\n                in\n                List.iter\n                  (fun (label, mk_op, expected) ->\n                    let op_dt =\n                      if String.length expected = 1 && expected.[0] = '/' then dt\n                      else Dtype.Val.int32\n                    in\n                    let prog = make_binop op_dt mk_op in\n                    for_each_renderer all_renderers (fun name r ->\n                        assert_contains (name ^ \" \" ^ label) (render r prog) expected))\n                  ops);\n              test \"integer division\" (fun () ->\n                let prog =\n                  make_binop Dtype.Val.int32 (fun l r dt ->\n                     
 P.Binary { op = `Idiv; lhs = l; rhs = r; dtype = dt })\n                in\n                for_each_renderer all_renderers (fun name r ->\n                    assert_contains (name ^ \" Idiv /\") (render r prog) \"/\"));\n              test \"comparison operators\" (fun () ->\n                let ops =\n                  [\n                    (\"Cmplt <\", (fun l r dt -> P.Binary { op = `Cmplt; lhs = l; rhs = r; dtype = dt }), \"<\");\n                    (\"Cmpeq ==\", (fun l r dt -> P.Binary { op = `Cmpeq; lhs = l; rhs = r; dtype = dt }), \"==\");\n                    (\"Cmpne !=\", (fun l r dt -> P.Binary { op = `Cmpne; lhs = l; rhs = r; dtype = dt }), \"!=\");\n                  ]\n                in\n                List.iter\n                  (fun (label, mk_op, expected) ->\n                    let prog = make_comparison mk_op in\n                    for_each_renderer all_renderers (fun name r ->\n                        assert_contains (name ^ \" \" ^ label) (render r prog) expected))\n                  ops);\n              test \"max\" (fun () ->\n                let prog =\n                  make_binop dt (fun l r dt ->\n                      P.Binary { op = `Max; lhs = l; rhs = r; dtype = dt })\n                in\n                for_each_renderer all_renderers (fun name r ->\n                    raises_match\n                      (function\n                        | Invalid_argument msg -> contains msg \"not handled\"\n                        | _ -> false)\n                      (fun () ->\n                        ignore (render r prog);\n                        failwith (name ^ \" should reject raw Max in renderer\"))));\n            ];\n          group \"Unary\"\n            [\n              test \"operators\" (fun () ->\n                let ops =\n                  [\n                    (\"Neg\", (fun s dt -> P.Unary { op = `Neg; src = s; dtype = dt }), \"-\");\n                    (\"Exp2\", (fun s dt -> P.Unary { op = `Exp2; src = s; dtype = 
dt }), \"exp2\");\n                    (\"Log2\", (fun s dt -> P.Unary { op = `Log2; src = s; dtype = dt }), \"log2\");\n                    (\"Sin\", (fun s dt -> P.Unary { op = `Sin; src = s; dtype = dt }), \"sin\");\n                    (\"Sqrt\", (fun s dt -> P.Unary { op = `Sqrt; src = s; dtype = dt }), \"sqrt\");\n                    (\"Trunc\", (fun s dt -> P.Unary { op = `Trunc; src = s; dtype = dt }), \"trunc\");\n                  ]\n                in\n                List.iter\n                  (fun (label, mk_op, expected) ->\n                    let prog = make_unop dt mk_op in\n                    for_each_renderer all_renderers (fun name r ->\n                        assert_contains (name ^ \" \" ^ label) (render r prog) expected))\n                  ops);\n              test \"reciprocal\" (fun () ->\n                let prog =\n                  make_unop dt (fun s dt -> P.Unary { op = `Recip; src = s; dtype = dt })\n                in\n                for_each_renderer all_renderers (fun name r ->\n                    assert_contains (name ^ \" Recip\") (render r prog) \"1/\"));\n            ];\n          group \"Ternary\"\n            [\n              test \"where\" (fun () ->\n                let prog = make_ternary_where dt in\n                for_each_renderer all_renderers (fun name r ->\n                    let out = render r prog in\n                    assert_contains (name ^ \" Where ?\") out \"?\";\n                    assert_contains (name ^ \" Where :\") out \":\"));\n              test \"mulacc\" (fun () ->\n                let prog = make_mulacc dt in\n                for_each_renderer all_renderers (fun name r ->\n                    raises_match\n                      (function\n                        | Invalid_argument msg -> contains msg \"not handled\"\n                        | _ -> false)\n                      (fun () ->\n                        ignore (render r prog);\n                        failwith (name ^ \" should 
reject raw Mulacc in renderer\"))));\n            ];\n          group \"Backend-specific\"\n            [\n              test \"CUDA half intrinsics\" (fun () ->\n                let cuda = Cstyle.cuda Gpu_target.SM80 in\n                List.iter\n                  (fun (expected, mk_op) ->\n                    let out = render cuda (make_unop Dtype.Val.float16 mk_op) in\n                    assert_contains (\"CUDA \" ^ expected) out expected)\n                  [\n                    (\"hexp2\", fun s dt -> P.Unary { op = `Exp2; src = s; dtype = dt });\n                    (\"hlog2\", fun s dt -> P.Unary { op = `Log2; src = s; dtype = dt });\n                    (\"hsin\", fun s dt -> P.Unary { op = `Sin; src = s; dtype = dt });\n                    (\"hsqrt\", fun s dt -> P.Unary { op = `Sqrt; src = s; dtype = dt });\n                    (\"hrcp\", fun s dt -> P.Unary { op = `Recip; src = s; dtype = dt });\n                    (\"htrunc\", fun s dt -> P.Unary { op = `Trunc; src = s; dtype = dt });\n                  ]);\n              test \"Metal precise sin\" (fun () ->\n                let prog =\n                  make_unop dt (fun s dt -> P.Unary { op = `Sin; src = s; dtype = dt })\n                in\n                assert_contains \"Metal precise::sin\" (render Cstyle.metal prog) \"precise::sin\");\n              test \"Clang builtins\" (fun () ->\n                let clang = Cstyle.clang in\n                let sqrt_out =\n                  render clang\n                    (make_unop dt (fun s dt -> P.Unary { op = `Sqrt; src = s; dtype = dt }))\n                in\n                assert_contains \"clang __builtin_sqrtf\" sqrt_out \"__builtin_sqrtf\";\n                let trunc_out =\n                  render clang\n                    (make_unop dt (fun s dt -> P.Unary { op = `Trunc; src = s; dtype = dt }))\n                in\n                assert_contains \"clang __builtin_truncf\" trunc_out \"__builtin_truncf\");\n            ];\n          test 
\"paren stripping\" (fun () ->\n            let mk_add l r dt = P.Binary { op = `Add; lhs = l; rhs = r; dtype = dt } in\n            let mk_sub l r dt = P.Binary { op = `Sub; lhs = l; rhs = r; dtype = dt } in\n            let mk_mul l r dt = P.Binary { op = `Mul; lhs = l; rhs = r; dtype = dt } in\n            let mk_xor l r dt = P.Binary { op = `Xor; lhs = l; rhs = r; dtype = dt } in\n            let mk_or l r dt = P.Binary { op = `Or; lhs = l; rhs = r; dtype = dt } in\n            let mk_and l r dt = P.Binary { op = `And; lhs = l; rhs = r; dtype = dt } in\n            let prog_add = make_chained_binop dt mk_add 5 in\n            let prog_sub = make_chained_binop dt mk_sub 5 in\n            let prog_mul = make_chained_binop dt mk_mul 5 in\n            let prog_xor = make_chained_binop Dtype.Val.int32 mk_xor 5 in\n            let prog_or = make_chained_binop Dtype.Val.int32 mk_or 5 in\n            let prog_and = make_chained_binop Dtype.Val.int32 mk_and 5 in\n            for_each_renderer all_renderers (fun name r ->\n                assert_not_contains (name ^ \" Add no deep parens\") (render r prog_add) \"(((((\";\n                assert_not_contains (name ^ \" Mul no deep parens\") (render r prog_mul) \"(((((\";\n                assert_not_contains (name ^ \" Xor no deep parens\") (render r prog_xor) \"(((((\";\n                assert_not_contains (name ^ \" Or no deep parens\") (render r prog_or) \"(((((\";\n                assert_not_contains (name ^ \" And no deep parens\") (render r prog_and) \"(((((\";\n                assert_contains (name ^ \" Sub deep parens\") (render r prog_sub) \"(((((\"));\n        ];\n      group \"Control Flow\"\n        [\n          test \"for loop\" (fun () ->\n            let prog = make_loop () in\n            for_each_renderer all_renderers (fun name r ->\n                assert_contains (name ^ \" for loop\") (render r prog) \"for (\"));\n          test \"nested loops\" (fun () ->\n            let prog = make_nested_loops () 
in\n            for_each_renderer all_renderers (fun name r ->\n                let out = render r prog in\n                let count = count_substring out \"for \" in\n                if count < 2 then\n                  failwith\n                    (Printf.sprintf \"%s: expected 2 'for ' occurrences, got %d\" name count)));\n          test \"conditional\" (fun () ->\n            let prog = make_conditional () in\n            for_each_renderer all_renderers (fun name r ->\n                assert_contains (name ^ \" if\") (render r prog) \"if (\"));\n        ];\n      group \"Memory\"\n        [\n          test \"simple load/store\" (fun () ->\n            let prog =\n              make_binop dt (fun l r dt ->\n                  P.Binary { op = `Add; lhs = l; rhs = r; dtype = dt })\n            in\n            for_each_renderer all_renderers (fun name r ->\n                assert_contains (name ^ \" dereference\") (render r prog) \"*\"));\n          test \"gated load\" (fun () ->\n            let prog = make_gated_load () in\n            for_each_renderer all_renderers (fun name r ->\n                assert_contains (name ^ \" gated load ternary\") (render r prog) \"?\"));\n          test \"opencl image load/store\" (fun () ->\n            let load_out = render_with_images Cstyle.opencl (make_image_load ()) in\n            assert_contains \"opencl image param\" load_out \"read_only image2d_t\";\n            assert_contains \"opencl sampler preamble\" load_out \"const sampler_t smp\";\n            assert_contains \"opencl read_imagef\" load_out \"read_imagef(\";\n            let store_out = render_with_images Cstyle.opencl (make_image_store ()) in\n            assert_contains \"opencl mutable image param\" store_out \"write_only image2d_t\";\n            assert_contains \"opencl write_imagef\" store_out \"write_imagef(\");\n          test \"non-opencl image rejected\" (fun () ->\n            raises_match\n              (function\n                | Failure msg -> 
contains msg \"does not support images\"\n                | _ -> false)\n              (fun () -> ignore (Images.rewrite Cstyle.metal (make_image_load ()))));\n        ];\n      group \"Cast and Bitcast\"\n        [\n          test \"cast per backend\" (fun () ->\n            let prog = make_cast ~from_dt:Dtype.Val.int32 ~to_dt:dt in\n            let metal_out = render Cstyle.metal prog in\n            assert_contains \"metal cast\" metal_out \"(float)\";\n            let cuda_out = render (Cstyle.cuda Gpu_target.SM80) prog in\n            assert_contains \"cuda cast\" cuda_out \"(float)\";\n            let opencl_out = render Cstyle.opencl prog in\n            assert_contains \"opencl cast\" opencl_out \"(float)\";\n            assert_contains \"clang cast\" (render Cstyle.clang prog) \"(float)\");\n          test \"bitcast per backend\" (fun () ->\n            let prog = make_bitcast ~from_dt:dt ~to_dt:Dtype.Val.int32 in\n            assert_contains \"clang __builtin_bit_cast\"\n              (render Cstyle.clang prog) \"__builtin_bit_cast\";\n            assert_contains \"cuda tg_bitcast\"\n              (render (Cstyle.cuda Gpu_target.SM80) prog) \"tg_bitcast\";\n            assert_contains \"metal as_type\"\n              (render Cstyle.metal prog) \"as_type<\";\n            assert_contains \"opencl as_\"\n              (render Cstyle.opencl prog) \"as_\");\n        ];\n      group \"Special Dimensions\"\n        [\n          test \"Group_id\" (fun () ->\n            let prog = make_special (Special_dim.Group_id 0) in\n            assert_contains \"cuda blockIdx.x\"\n              (render (Cstyle.cuda Gpu_target.SM80) prog) \"blockIdx.x\";\n            assert_contains \"metal gid.x\"\n              (render Cstyle.metal prog) \"gid.x\";\n            assert_contains \"opencl get_group_id(0)\"\n              (render Cstyle.opencl prog) \"get_group_id(0)\");\n          test \"Local_id\" (fun () ->\n            let prog = make_special (Special_dim.Local_id 1) in\n  
          assert_contains \"cuda threadIdx.y\"\n              (render (Cstyle.cuda Gpu_target.SM80) prog) \"threadIdx.y\";\n            assert_contains \"metal lid.y\"\n              (render Cstyle.metal prog) \"lid.y\";\n            assert_contains \"opencl get_local_id(1)\"\n              (render Cstyle.opencl prog) \"get_local_id(1)\");\n          test \"Global_idx\" (fun () ->\n            let prog = make_special (Special_dim.Global_idx 2) in\n            assert_contains \"cuda global idx formula\"\n              (render (Cstyle.cuda Gpu_target.SM80) prog)\n              \"(blockIdx.z*blockDim.z+threadIdx.z)\";\n            raises_match\n              (function Failure _ -> true | _ -> false)\n              (fun () -> ignore (render Cstyle.metal prog));\n            assert_contains \"opencl get_global_id(2)\"\n              (render Cstyle.opencl prog) \"get_global_id(2)\");\n          test \"Clang fails\" (fun () ->\n            raises_match\n              (function Failure _ -> true | _ -> false)\n              (fun () -> ignore (render Cstyle.clang (make_special (Special_dim.Group_id 0)))));\n        ];\n      group \"Shared Memory and Barrier\"\n        [\n          test \"shared memory qualifiers\" (fun () ->\n            let prog = make_shared_memory () in\n            assert_contains \"cuda __shared__\"\n              (render (Cstyle.cuda Gpu_target.SM80) prog) \"__shared__\";\n            assert_contains \"metal threadgroup\"\n              (render Cstyle.metal prog) \"threadgroup\";\n            assert_contains \"opencl __local\"\n              (render Cstyle.opencl prog) \"__local\");\n          test \"barrier syntax\" (fun () ->\n            let prog = make_shared_memory () in\n            assert_contains \"cuda __syncthreads\"\n              (render (Cstyle.cuda Gpu_target.SM80) prog) \"__syncthreads()\";\n            assert_contains \"metal threadgroup_barrier\"\n              (render Cstyle.metal prog) \"threadgroup_barrier\";\n            
assert_contains \"opencl barrier\"\n              (render Cstyle.opencl prog) \"barrier(CLK_LOCAL_MEM_FENCE)\");\n        ];\n      group \"Vectorize and Gep\"\n        [\n          test \"vectorize\" (fun () ->\n            let prog = make_vectorize_gep () in\n            for_each_renderer all_renderers (fun name r ->\n                assert_contains (name ^ \" vectorize val elements\")\n                  (render r prog) \"val0,val1,val2,val3\"));\n          test \"gep\" (fun () ->\n            let prog = make_vectorize_gep () in\n            for_each_renderer all_renderers (fun name r ->\n                let out = render r prog in\n                if not (contains out \"[2]\" || contains out \".z\") then\n                  failwith\n                    (Printf.sprintf\n                       \"%s: expected GEP element 2 access ([2] or .z), got:\\n%s\"\n                       name out)));\n        ];\n      group \"Kernel Signature\"\n        [\n          test \"function prefix\" (fun () ->\n            let cuda_out = render (Cstyle.cuda Gpu_target.SM80) f32_1 in\n            assert_contains \"cuda extern C\" cuda_out {|extern \"C\"|};\n            assert_contains \"cuda __global__\" cuda_out \"__global__\";\n            assert_contains \"metal kernel void\"\n              (render Cstyle.metal f32_1) \"kernel void\";\n            assert_contains \"opencl __kernel\"\n              (render Cstyle.opencl f32_1) \"__kernel\";\n            assert_contains \"clang void\" (render Cstyle.clang f32_1) \"void\");\n          test \"kernel name\" (fun () ->\n            for_each_renderer all_renderers (fun name r ->\n                assert_contains (name ^ \" kernel name\")\n                  (Renderer.render r ~name:\"my_test_kernel\" f32_1) \"my_test_kernel\"));\n          test \"parameter qualifiers\" (fun () ->\n            assert_contains \"opencl __global\"\n              (render Cstyle.opencl f32_1) \"__global\";\n            assert_contains \"metal device\"\n          
    (render Cstyle.metal f32_1) \"device\");\n          test \"scalar parameter\" (fun () ->\n            let prog = make_define_var () in\n            for_each_renderer all_renderers (fun name r ->\n                assert_contains (name ^ \" scalar param n\") (render r prog) \"n\"));\n        ];\n      group \"Preamble\"\n        [\n          test \"CUDA bitcast template\" (fun () ->\n            let prog = make_bitcast ~from_dt:dt ~to_dt:Dtype.Val.int32 in\n            assert_contains \"cuda tg_bitcast template\"\n              (render (Cstyle.cuda Gpu_target.SM80) prog) \"tg_bitcast\");\n          test \"CUDA fp16 include\" (fun () ->\n            let prog = make_store_const Dtype.Val.float16 (float_c Dtype.Val.float16 1.0) in\n            assert_contains \"cuda fp16 include\"\n              (render (Cstyle.cuda Gpu_target.SM80) prog) \"cuda_fp16\");\n          test \"Metal stdlib\" (fun () ->\n            assert_contains \"metal stdlib\" (render Cstyle.metal f32_1) \"metal_stdlib\");\n          test \"OpenCL fp16 pragma\" (fun () ->\n            let prog = make_store_const Dtype.Val.float16 (float_c Dtype.Val.float16 1.0) in\n            assert_contains \"opencl fp16 pragma\"\n              (render Cstyle.opencl prog) \"cl_khr_fp16\");\n        ];\n      group \"Non-native Rewrites\"\n        [\n          (* bf16 promotion is handled by extra_matcher at the Kernel level\n             (during codegen), not at render time.  Verify the matcher is set. 
*)\n          test \"clang has bf16 extra_matcher\" (fun () ->\n            match Renderer.extra_matcher Cstyle.clang with\n            | None -> failwith \"clang should have extra_matcher for bf16 promotion\"\n            | Some _ -> ());\n        ];\n      group \"Clang ABI\"\n        [\n          test \"fixed ABI wrapper\" (fun () ->\n            let out = Renderer.render Cstyle.clang ~name:\"kern\" f32_1 in\n            assert_contains \"clang fixed ABI\" out\n              \"void kern(const unsigned long long *bufs\");\n          test \"fixed ABI wraps inner kernel\" (fun () ->\n            let out = Renderer.render Cstyle.clang ~name:\"kern\" f32_1 in\n            assert_contains \"clang fixed ABI static inner\" out \"static void kern_(\";\n            assert_contains \"clang fixed ABI wrapper signature\" out\n              \"void kern(const unsigned long long *bufs, const long long *vals)\";\n            assert_contains \"clang fixed ABI wrapper call\" out \"kern_((float*)bufs[0]);\");\n        ];\n      group \"CUDA Launch Bounds\"\n        [\n          test \"launch bounds\" (fun () ->\n            assert_contains \"cuda __launch_bounds__\"\n              (render (Cstyle.cuda Gpu_target.SM80) (make_launch_bounds ()))\n              \"__launch_bounds__\");\n        ];\n      group \"Variable Naming\"\n        [\n          test \"range variable prefix\" (fun () ->\n            let prog = make_loop () in\n            for_each_renderer all_renderers (fun name r ->\n                assert_contains (name ^ \" Loop prefix Lidx\") (render r prog) \"Lidx0\"));\n          test \"special variable names\" (fun () ->\n            let prog = make_special (Special_dim.Group_id 0) in\n            for_each_renderer gpu_renderers (fun name r ->\n                assert_contains (name ^ \" gidx0\") (render r prog) \"gidx0\");\n            let prog_lid = make_special (Special_dim.Local_id 1) in\n            for_each_renderer gpu_renderers (fun name r ->\n                
assert_contains (name ^ \" lidx1\") (render r prog_lid) \"lidx1\"));\n        ];\n      group \"Custom\"\n        [\n          test \"custom_inline\" (fun () ->\n            let prog = make_custom () in\n            for_each_renderer all_renderers (fun name r ->\n                assert_contains (name ^ \" custom_func\") (render r prog) \"custom_func\"));\n        ];\n      group \"AMD/HIP\"\n        [\n          test \"special dims\" (fun () ->\n            let rdna3 = Cstyle.amd Gpu_target.RDNA3 in\n            assert_contains \"amd group_id\"\n              (render rdna3 (make_special (Special_dim.Group_id 0)))\n              \"__ockl_get_group_id(0)\";\n            assert_contains \"amd local_id\"\n              (render rdna3 (make_special (Special_dim.Local_id 0)))\n              \"__ockl_get_local_id(0)\");\n          test \"transcendentals\" (fun () ->\n            let rdna3 = Cstyle.amd Gpu_target.RDNA3 in\n            assert_contains \"amd __ocml_sqrt_f32\"\n              (render rdna3\n                 (make_unop dt (fun s dt -> P.Unary { op = `Sqrt; src = s; dtype = dt })))\n              \"__ocml_sqrt_f32\";\n            assert_contains \"amd __ocml_sin_f32\"\n              (render rdna3\n                 (make_unop dt (fun s dt -> P.Unary { op = `Sin; src = s; dtype = dt })))\n              \"__ocml_sin_f32\");\n          test \"barrier\" (fun () ->\n            let out = render (Cstyle.amd Gpu_target.RDNA3) (make_shared_memory ()) in\n            assert_contains \"amd fence\" out \"__builtin_amdgcn_fence\";\n            assert_contains \"amd s_barrier\" out \"__builtin_amdgcn_s_barrier\");\n          test \"kernel attribute\" (fun () ->\n            assert_contains \"amd amdgpu_flat_work_group_size\"\n              (render (Cstyle.amd Gpu_target.RDNA3) f32_1)\n              \"amdgpu_flat_work_group_size\");\n          test \"bf16 target paths\" (fun () ->\n            let prog =\n              make_binop Dtype.Val.bfloat16 (fun l r dt ->\n              
    P.Binary { op = `Add; lhs = l; rhs = r; dtype = dt })\n            in\n            let rdna3_out = render (Cstyle.amd Gpu_target.RDNA3) prog in\n            assert_contains \"amd rdna3 uses hip_bfloat16\" rdna3_out \"hip_bfloat16\";\n            assert_contains \"amd rdna3 typedefs software bf16\" rdna3_out\n              \"typedef unsigned short hip_bfloat16;\";\n            let cdna4_out = render (Cstyle.amd Gpu_target.CDNA4) prog in\n            assert_contains \"amd cdna4 typedefs __bf16 hip_bfloat16\" cdna4_out\n              \"typedef __bf16 hip_bfloat16;\";\n            assert_not_contains \"amd cdna4 does not typedef ushort hip_bfloat16\"\n              cdna4_out \"typedef unsigned short hip_bfloat16;\");\n        ];\n      group \"Intel\"\n        [\n          test \"kernel attribute\" (fun () ->\n            assert_contains \"intel sub_group_size\"\n              (render Cstyle.intel f32_1) \"intel_reqd_sub_group_size(8)\");\n        ];\n      group \"Properties\"\n        [\n          prop \"non-empty output\"\n            (pair safe_dtype renderer_testable)\n            (fun (dt, (_name, renderer)) ->\n              let const_value =\n                match Dtype.Val.scalar dt with\n                | Dtype.Float32 | Dtype.Float64 -> Const.float dt 1.0\n                | _ -> Const.int dt 1\n              in\n              String.length (render renderer (make_store_const dt const_value)) > 0);\n          prop \"contains kernel name\" renderer_testable (fun (_name, renderer) ->\n            contains\n              (Renderer.render renderer ~name:\"test_prop_kernel\" f32_1)\n              \"test_prop_kernel\");\n          prop \"balanced braces\" renderer_testable (fun (_name, renderer) ->\n            let output = render renderer (make_loop ()) in\n            count_char output '{' = count_char output '}');\n          prop \"deterministic\" renderer_testable (fun (_name, renderer) ->\n            String.equal (render renderer f32_1) (render renderer 
f32_1));\n        ];\n    ]\n"
  },
  {
    "path": "packages/tolk/test/unit/test_elf.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Windtrap\nopen Tolk\n\nlet uname flag =\n  try\n    let ic = Unix.open_process_in (\"uname \" ^ flag) in\n    let value = input_line ic in\n    let _ = Unix.close_process_in ic in\n    String.trim value\n  with _ -> \"\"\n\nlet host_arch () = uname \"-m\"\nlet cc () = match Sys.getenv_opt \"CC\" with Some cc -> cc | None -> \"clang\"\n\nlet read_file path =\n  let ic = open_in_bin path in\n  Fun.protect\n    ~finally:(fun () -> close_in_noerr ic)\n    (fun () ->\n      let len = in_channel_length ic in\n      really_input_string ic len)\n\nlet compile_c src =\n  let arch = host_arch () in\n  let arch_flag =\n    match arch with\n    | \"x86_64\" | \"AMD64\" -> \"-march=native\"\n    | \"riscv64\" -> \"-march=rv64g\"\n    | _ -> \"-mcpu=native\"\n  in\n  let src_path = Filename.temp_file \"tolk_elf\" \".c\" in\n  let obj_path = Filename.temp_file \"tolk_elf\" \".o\" in\n  let err_path = Filename.temp_file \"tolk_elf\" \".err\" in\n  Fun.protect\n    ~finally:(fun () ->\n      List.iter\n        (fun path -> try Sys.remove path with Sys_error _ -> ())\n        [ src_path; obj_path; err_path ])\n    (fun () ->\n      let oc = open_out_bin src_path in\n      output_string oc src;\n      close_out oc;\n      let command =\n        String.concat \" \"\n          [\n            Filename.quote (cc ());\n            \"-c\"; \"-x\"; \"c\"; arch_flag;\n            Filename.quote (Printf.sprintf \"--target=%s-none-unknown-elf\" arch);\n            \"-O2\"; \"-fPIC\"; \"-ffreestanding\"; \"-fno-math-errno\";\n            \"-nostdlib\"; \"-fno-ident\";\n            Filename.quote src_path; \"-o\"; Filename.quote obj_path;\n            \"2>\"; Filename.quote err_path;\n          ]\n      in\n      match 
Sys.command command with\n      | 0 -> Bytes.of_string (read_file obj_path)\n      | _ ->\n          let err = read_file err_path in\n          failwith\n            (if String.equal err \"\" then \"clang failed\"\n             else \"clang failed:\\n\" ^ err))\n\nlet load_c src = Elf.load (compile_c src)\n\nlet require_section elf name =\n  match Elf.find_section elf name with\n  | Some s -> s\n  | None -> failwith (\"expected \" ^ name ^ \" section\")\n\nlet () =\n  run \"Elf\"\n    [\n      group \"Parsing\"\n        [\n          test \"clang object exposes relocation sections\" (fun () ->\n            let elf =\n              load_c\n                {|\n                  int something;\n                  int test(int x) { return something + x; }\n                |}\n            in\n            let names =\n              Elf.sections elf |> Array.to_list\n              |> List.map (fun (s : Elf.section) -> s.name)\n            in\n            is_true (List.mem \".text\" names);\n            is_true\n              (List.mem \".rela.text\" names || List.mem \".rel.text\" names));\n          test \"bss is laid out in image\" (fun () ->\n            let elf =\n              load_c\n                {|\n                  int counter;\n                  int test(void) { return 1; }\n                |}\n            in\n            let bss = require_section elf \".bss\" in\n            equal int 4 bss.size;\n            is_true (Bytes.length (Elf.image elf) >= bss.addr + bss.size);\n            equal string \"\\000\\000\\000\\000\" (Bytes.to_string bss.content));\n          test \"entry symbol offset is reported\" (fun () ->\n            let elf =\n              load_c {|\n                int test(int x) { return x + 1; }\n              |}\n            in\n            let off = Elf.find_symbol_offset elf \"test\" in\n            is_true (off >= 0);\n            let text = require_section elf \".text\" in\n            is_true (off >= text.addr && off < text.addr + 
text.size));\n          test \"undefined external is preserved in relocations\" (fun () ->\n            let elf =\n              load_c\n                {|\n                  float powf(float, float);\n                  float test(float x, float y) { return powf(x, y); }\n                |}\n            in\n            let names =\n              Elf.relocs elf\n              |> List.map (fun (r : Elf.reloc) -> r.symbol.name)\n            in\n            is_true (List.mem \"powf\" names));\n        ];\n    ]\n"
  },
  {
    "path": "packages/tolk/test/unit/test_ir_dtype.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Windtrap\nopen Tolk_ir\n\nlet dtype = testable ~pp:Dtype.Val.pp ~equal:Dtype.Val.equal ()\n\nlet bound =\n  let pp fmt = function\n    | `Bool b -> Format.fprintf fmt \"`Bool %b\" b\n    | `SInt n -> Format.fprintf fmt \"`SInt %Ld\" n\n    | `UInt n -> Format.fprintf fmt \"`UInt %Ld\" n\n    | `Float f -> Format.fprintf fmt \"`Float %g\" f\n  in\n  let equal a b =\n    match (a, b) with\n    | `Bool a, `Bool b -> a = b\n    | `SInt a, `SInt b -> Int64.equal a b\n    | `UInt a, `UInt b -> Int64.equal a b\n    | `Float a, `Float b ->\n        (Float.is_nan a && Float.is_nan b) || Float.equal a b\n    | _ -> false\n  in\n  testable ~pp ~equal ()\n\nlet int_pair =\n  let pp fmt (a, b) = Format.fprintf fmt \"(%d, %d)\" a b in\n  testable ~pp ~equal:( = ) ()\n\nlet raises_invalid (f : unit -> _) =\n  raises_match (function Invalid_argument _ -> true | _ -> false) f\n\n(* Dtypes that participate in promotion (excludes Void and Index). *)\nlet promotable_dtypes =\n  Dtype.Val.\n    [\n      bool; int8; int16; int32; int64; uint8; uint16; uint32; uint64; float16;\n      bfloat16; float32; float64; fp8e4m3; fp8e5m2;\n    ]\n\nlet promotable_dtype =\n  let gen = Gen.oneofl promotable_dtypes in\n  testable ~pp:Dtype.Val.pp ~equal:Dtype.Val.equal ~gen ()\n\n(* Integer dtypes suitable for truncate_int (excludes Index). 
*)\nlet int_dtypes = Dtype.Val.[ bool; int8; int16; int32; uint8; uint16; uint32 ]\n\nlet int_dtype =\n  let gen = Gen.oneofl int_dtypes in\n  testable ~pp:Dtype.Val.pp ~equal:Dtype.Val.equal ~gen ()\n\nlet fp8_byte =\n  let gen = Gen.int_range 0 255 in\n  testable ~pp:Format.pp_print_int ~equal:Int.equal ~gen ()\n\nlet lub = Dtype.Val.least_upper_dtype\n\nlet () =\n  run \"Dtype\"\n    [\n      group \"Type Promotion\"\n        [\n          test \"lattice edges\" (fun () ->\n            equal dtype Dtype.Val.int8 (lub [ Dtype.Val.bool; Dtype.Val.int8 ]);\n            equal dtype Dtype.Val.int16 (lub [ Dtype.Val.int8; Dtype.Val.uint8 ]);\n            equal dtype Dtype.Val.int32 (lub [ Dtype.Val.int16; Dtype.Val.uint16 ]);\n            equal dtype Dtype.Val.int64 (lub [ Dtype.Val.int32; Dtype.Val.uint32 ]);\n            (* Cross-category: int through float. *)\n            equal dtype Dtype.Val.float16 (lub [ Dtype.Val.float16; Dtype.Val.int64 ]);\n            (* FP8 siblings meet at float16. *)\n            equal dtype Dtype.Val.float16 (lub [ Dtype.Val.fp8e4m3; Dtype.Val.fp8e5m2 ]);\n            (* Float16 and bfloat16 are incomparable; they meet at float32. 
*)\n            equal dtype Dtype.Val.float32 (lub [ Dtype.Val.float16; Dtype.Val.bfloat16 ]));\n          test \"strips vectorization\" (fun () ->\n            let vec4 = Dtype.Val.vec 4 Dtype.Val.int8 in\n            equal dtype Dtype.Val.int16 (lub [ vec4; Dtype.Val.uint8 ]));\n          test \"errors\" (fun () ->\n            raises_invalid_arg \"least_upper_dtype requires at least one dtype\"\n              (fun () -> lub []);\n            raises_invalid_arg \"Index does not participate in dtype promotion\"\n              (fun () -> lub [ Dtype.Val.index ]));\n          prop2 \"commutative\" promotable_dtype promotable_dtype (fun a b ->\n            Dtype.Val.equal (lub [ a; b ]) (lub [ b; a ]));\n          prop \"idempotent\" promotable_dtype (fun a ->\n            Dtype.Val.equal (lub [ a; a ]) (Dtype.Val.scalarize a));\n        ];\n      group \"Lossless Cast\"\n        [\n          test \"widening\" (fun () ->\n            is_true (Dtype.Val.can_lossless_cast Dtype.Val.int8 Dtype.Val.int16);\n            is_true (Dtype.Val.can_lossless_cast Dtype.Val.int16 Dtype.Val.int32);\n            is_true (Dtype.Val.can_lossless_cast Dtype.Val.uint8 Dtype.Val.uint16);\n            is_true (Dtype.Val.can_lossless_cast Dtype.Val.float16 Dtype.Val.float32);\n            is_true (Dtype.Val.can_lossless_cast Dtype.Val.float32 Dtype.Val.float64);\n            is_true (Dtype.Val.can_lossless_cast Dtype.Val.fp8e4m3 Dtype.Val.float16);\n            is_true (Dtype.Val.can_lossless_cast Dtype.Val.fp8e5m2 Dtype.Val.float16));\n          test \"narrowing fails\" (fun () ->\n            is_false (Dtype.Val.can_lossless_cast Dtype.Val.int32 Dtype.Val.int16);\n            is_false (Dtype.Val.can_lossless_cast Dtype.Val.float64 Dtype.Val.float32);\n            is_false (Dtype.Val.can_lossless_cast Dtype.Val.float16 Dtype.Val.fp8e4m3));\n          test \"cross-sign\" (fun () ->\n            (* uint8 fits in int16 (wider signed). 
*)\n            is_true (Dtype.Val.can_lossless_cast Dtype.Val.uint8 Dtype.Val.int16);\n            (* int8 doesn't fit in uint8 (loses negatives). *)\n            is_false (Dtype.Val.can_lossless_cast Dtype.Val.int8 Dtype.Val.uint8);\n            is_false (Dtype.Val.can_lossless_cast Dtype.Val.int16 Dtype.Val.uint16));\n          test \"to index\" (fun () ->\n            is_true (Dtype.Val.can_lossless_cast Dtype.Val.int32 Dtype.Val.index);\n            is_true (Dtype.Val.can_lossless_cast Dtype.Val.uint64 Dtype.Val.index);\n            is_false (Dtype.Val.can_lossless_cast Dtype.Val.float32 Dtype.Val.index));\n          prop \"reflexive\" promotable_dtype (fun a ->\n            Dtype.Val.can_lossless_cast a a);\n        ];\n      group \"Sum Accumulator\"\n        [\n          test \"all categories\" (fun () ->\n            (* Unsigned widens to at least uint32. *)\n            equal dtype Dtype.Val.uint32 (Dtype.Val.sum_acc_dtype Dtype.Val.uint8);\n            equal dtype Dtype.Val.uint32 (Dtype.Val.sum_acc_dtype Dtype.Val.uint32);\n            equal dtype Dtype.Val.uint64 (Dtype.Val.sum_acc_dtype Dtype.Val.uint64);\n            (* Signed widens to at least int32. *)\n            equal dtype Dtype.Val.int32 (Dtype.Val.sum_acc_dtype Dtype.Val.int8);\n            equal dtype Dtype.Val.int64 (Dtype.Val.sum_acc_dtype Dtype.Val.int64);\n            (* Bool accumulates as int32. *)\n            equal dtype Dtype.Val.int32 (Dtype.Val.sum_acc_dtype Dtype.Val.bool);\n            (* Floats widen to at least float32. *)\n            equal dtype Dtype.Val.float32 (Dtype.Val.sum_acc_dtype Dtype.Val.float16);\n            equal dtype Dtype.Val.float64 (Dtype.Val.sum_acc_dtype Dtype.Val.float64);\n            (* Index rejected. 
*)\n            raises_invalid_arg \"sum_acc_dtype does not accept index dtype\"\n              (fun () -> Dtype.Val.sum_acc_dtype Dtype.Val.index));\n          prop \"idempotent\" promotable_dtype (fun a ->\n            Dtype.Val.equal\n              (Dtype.Val.sum_acc_dtype (Dtype.Val.sum_acc_dtype a))\n              (Dtype.Val.sum_acc_dtype a));\n        ];\n      group \"FP16 Conversion\"\n        [\n          test \"boundaries\" (fun () ->\n            let eq = equal (float 0.0) in\n            eq 1.0 (Dtype.float_to_fp16 1.0);\n            eq (-1.0) (Dtype.float_to_fp16 (-1.0));\n            eq 0.0 (Dtype.float_to_fp16 0.0);\n            eq (-0.0) (Dtype.float_to_fp16 (-0.0));\n            (* Max representable. *)\n            eq 65504.0 (Dtype.float_to_fp16 65504.0);\n            (* Overflow to infinity. *)\n            eq infinity (Dtype.float_to_fp16 65520.0);\n            eq neg_infinity (Dtype.float_to_fp16 (-65520.0));\n            (* Underflow to zero. *)\n            eq 0.0 (Dtype.float_to_fp16 1e-8);\n            (* Non-finite passthrough. *)\n            eq infinity (Dtype.float_to_fp16 infinity);\n            eq neg_infinity (Dtype.float_to_fp16 neg_infinity);\n            is_true (Float.is_nan (Dtype.float_to_fp16 Float.nan)));\n          test \"denormal range\" (fun () ->\n            (* Smallest positive fp16 denormal: 2^-24 *)\n            let x = Float.ldexp 1.0 (-24) in\n            equal (float 0.0) x (Dtype.float_to_fp16 x);\n            (* Largest fp16 denormal: just below 2^-14. *)\n            let x = Float.ldexp 1.0 (-14) -. 
Float.ldexp 1.0 (-24) in\n            let r = Dtype.float_to_fp16 x in\n            is_true ~msg:\"denormal round-trips to finite\" (Float.is_finite r);\n            is_true ~msg:\"denormal non-zero\" (r > 0.0));\n          prop \"idempotent\" (float 0.0) (fun x ->\n            let r = Dtype.float_to_fp16 x in\n            if Float.is_nan r then Float.is_nan (Dtype.float_to_fp16 r)\n            else Float.equal r (Dtype.float_to_fp16 r));\n        ];\n      group \"BF16 Conversion\"\n        [\n          test \"boundaries\" (fun () ->\n            let eq = equal (float 0.0) in\n            eq 1.0 (Dtype.float_to_bf16 1.0);\n            eq 0.0 (Dtype.float_to_bf16 0.0);\n            (* 128.0 = 1.0 * 2^7, exactly representable. *)\n            eq 128.0 (Dtype.float_to_bf16 128.0);\n            (* 1234.0 needs 10 mantissa bits, rounds to 1232.0 in bf16's 7. *)\n            eq 1232.0 (Dtype.float_to_bf16 1234.0);\n            (* Non-finite passthrough. *)\n            eq infinity (Dtype.float_to_bf16 infinity);\n            eq neg_infinity (Dtype.float_to_bf16 neg_infinity);\n            is_true (Float.is_nan (Dtype.float_to_bf16 Float.nan)));\n          prop \"idempotent\" (float 0.0) (fun x ->\n            let r = Dtype.float_to_bf16 x in\n            if Float.is_nan r then Float.is_nan (Dtype.float_to_bf16 r)\n            else Float.equal r (Dtype.float_to_bf16 r));\n        ];\n      group \"FP8 Conversion\"\n        [\n          test \"boundaries\" (fun () ->\n            let eq = equal (float 0.0) in\n            equal int 0 (Dtype.float_to_fp8 Fp8e4m3 0.0);\n            equal int 0 (Dtype.float_to_fp8 Fp8e5m2 0.0);\n            eq 0.0 (Dtype.fp8_to_float Fp8e4m3 0);\n            eq 0.0 (Dtype.fp8_to_float Fp8e5m2 0);\n            (* E4m3 max normal: 448.0. *)\n            eq 448.0\n              (Dtype.fp8_to_float Fp8e4m3\n                 (Dtype.float_to_fp8 Fp8e4m3 448.0));\n            (* E4m3 is saturating: infinity -> NaN, above-max -> maxnorm. 
*)\n            is_true\n              (Float.is_nan\n                 (Dtype.fp8_to_float Fp8e4m3\n                    (Dtype.float_to_fp8 Fp8e4m3 infinity)));\n            eq 448.0\n              (Dtype.fp8_to_float Fp8e4m3\n                 (Dtype.float_to_fp8 Fp8e4m3 500.0));\n            (* E5m2 max normal: 57344.0. *)\n            eq 57344.0\n              (Dtype.fp8_to_float Fp8e5m2\n                 (Dtype.float_to_fp8 Fp8e5m2 57344.0));\n            (* E5m2 is IEEE-like: infinity -> infinity, NaN -> NaN. *)\n            eq infinity\n              (Dtype.fp8_to_float Fp8e5m2\n                 (Dtype.float_to_fp8 Fp8e5m2 infinity));\n            is_true\n              (Float.is_nan\n                 (Dtype.fp8_to_float Fp8e5m2\n                    (Dtype.float_to_fp8 Fp8e5m2 Float.nan)));\n            raises_invalid (fun () -> Dtype.float_to_fp8 Int8 1.0);\n            raises_invalid (fun () -> Dtype.fp8_to_float Int8 0));\n          prop \"byte round-trip stable\" fp8_byte (fun byte ->\n            List.for_all\n              (fun s ->\n                let f = Dtype.fp8_to_float s byte in\n                let byte' = Dtype.float_to_fp8 s f in\n                let f' = Dtype.fp8_to_float s byte' in\n                (Float.is_nan f && Float.is_nan f') || Float.equal f f')\n              [ Fp8e4m3; Fp8e5m2 ]);\n        ];\n      group \"Integer Truncation\"\n        [\n          test \"boundaries\" (fun () ->\n            (* In-range identity. *)\n            equal int 42 (Dtype.truncate_int Dtype.Val.int8 42);\n            equal int (-1) (Dtype.truncate_int Dtype.Val.int8 (-1));\n            (* Unsigned wrap. *)\n            equal int 0 (Dtype.truncate_int Dtype.Val.uint8 256);\n            equal int 255 (Dtype.truncate_int Dtype.Val.uint8 255);\n            equal int 0 (Dtype.truncate_int Dtype.Val.uint16 65536);\n            (* Signed wrap with sign extension. 
*)\n            equal int (-128) (Dtype.truncate_int Dtype.Val.int8 128);\n            equal int (-1) (Dtype.truncate_int Dtype.Val.int8 255);\n            equal int (-1) (Dtype.truncate_int Dtype.Val.int16 65535);\n            (* Bool: 0 -> 0, nonzero -> 1. *)\n            equal int 0 (Dtype.truncate_int Dtype.Val.bool 0);\n            equal int 1 (Dtype.truncate_int Dtype.Val.bool 1);\n            equal int 1 (Dtype.truncate_int Dtype.Val.bool 2);\n            raises_invalid (fun () -> Dtype.truncate_int Dtype.Val.float32 1));\n          prop \"idempotent\" (pair int_dtype int) (fun (dt, x) ->\n            let r = Dtype.truncate_int dt x in\n            r = Dtype.truncate_int dt r);\n        ];\n      group \"Vec\"\n        [\n          test \"operations\" (fun () ->\n            let v = Dtype.Val.vec 4 Dtype.Val.int32 in\n            equal dtype (Dtype.Val.vec 4 Dtype.Val.int32) v;\n            (* Count=1 is identity. *)\n            equal dtype Dtype.Val.int32 (Dtype.Val.vec 1 Dtype.Val.int32);\n            (* Void ignores count. *)\n            equal dtype Dtype.Val.void (Dtype.Val.vec 4 Dtype.Val.void);\n            (* index.vec(0) for empty shape vectors. *)\n            equal int 0 (Dtype.Val.count (Dtype.Val.vec 0 Dtype.Val.index));\n            (* scalarize strips count. 
*)\n            equal dtype Dtype.Val.int32 (Dtype.Val.scalarize v);\n            equal dtype Dtype.Val.float64 (Dtype.Val.scalarize Dtype.Val.float64));\n          test \"errors\" (fun () ->\n            raises_invalid_arg\n              \"only index dtype can use zero-length vectors\" (fun () ->\n                Dtype.Val.vec 0 Dtype.Val.int32);\n            raises_invalid (fun () -> Dtype.Val.vec 2 (Dtype.Val.vec 4 Dtype.Val.int32));\n            raises_invalid (fun () -> Dtype.Val.vec (-1) Dtype.Val.int32));\n        ];\n      group \"Bounds\"\n        [\n          test \"spot checks\" (fun () ->\n            equal bound (`Bool false) (Dtype.min (Dtype.Val Dtype.Val.bool));\n            equal bound (`Bool true) (Dtype.max (Dtype.Val Dtype.Val.bool));\n            equal bound (`SInt (-128L)) (Dtype.min (Dtype.Val Dtype.Val.int8));\n            equal bound (`SInt 127L) (Dtype.max (Dtype.Val Dtype.Val.int8));\n            equal bound (`UInt 0L) (Dtype.min (Dtype.Val Dtype.Val.uint8));\n            equal bound (`UInt 255L) (Dtype.max (Dtype.Val Dtype.Val.uint8));\n            equal bound (`SInt Int64.min_int) (Dtype.min (Dtype.Val Dtype.Val.int64));\n            equal bound (`SInt Int64.max_int) (Dtype.max (Dtype.Val Dtype.Val.int64));\n            equal bound (`UInt Int64.minus_one) (Dtype.max (Dtype.Val Dtype.Val.uint64));\n            equal bound (`Float neg_infinity) (Dtype.min (Dtype.Val Dtype.Val.float32));\n            equal bound (`Float infinity) (Dtype.max (Dtype.Val Dtype.Val.float64));\n            (* Vec inherits scalar bounds. 
*)\n            equal bound (`SInt (-128L)) (Dtype.min (Dtype.Val (Dtype.Val.vec 4 Dtype.Val.int8)));\n            raises_invalid_arg \"void has no numeric bounds\" (fun () ->\n              Dtype.min (Dtype.Val Dtype.Val.void)));\n        ];\n      group \"Float Info\"\n        [\n          test \"all types\" (fun () ->\n            equal int_pair (5, 10) (Dtype.finfo (Dtype.Val Dtype.Val.float16));\n            equal int_pair (8, 7) (Dtype.finfo (Dtype.Val Dtype.Val.bfloat16));\n            equal int_pair (8, 23) (Dtype.finfo (Dtype.Val Dtype.Val.float32));\n            equal int_pair (11, 52) (Dtype.finfo (Dtype.Val Dtype.Val.float64));\n            equal int_pair (4, 3) (Dtype.finfo (Dtype.Val Dtype.Val.fp8e4m3));\n            equal int_pair (5, 2) (Dtype.finfo (Dtype.Val Dtype.Val.fp8e5m2));\n            raises_invalid_arg \"finfo expects a floating-point dtype\" (fun () ->\n              Dtype.finfo (Dtype.Val Dtype.Val.int32)));\n        ];\n    ]\n"
  },
  {
    "path": "packages/tolk/test/unit/test_ir_kernel.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Windtrap\nmodule D = Tolk_ir.Dtype\nmodule C = Tolk_ir.Const\nmodule K = Tolk_ir.Kernel\nmodule Ak = Tolk_ir.Axis_kind\nmodule Sd = Tolk_ir.Special_dim\n\n(* Helpers *)\n\nlet global_ptr dt = D.Ptr.create (D.val_of dt) ~addrspace:Global ~size:(-1)\nlet local_ptr dt = D.Ptr.create (D.val_of dt) ~addrspace:Local ~size:(-1)\nlet reg_ptr dt = D.Ptr.create (D.val_of dt) ~addrspace:Reg ~size:(-1)\nlet mk_idx () = K.const (C.int D.Val.index 0)\nlet mk_f32 () = K.const (C.float D.Val.float32 1.0)\nlet mk_i32 () = K.const (C.int D.Val.int32 0)\nlet mk_param () = K.param ~idx:0 ~dtype:(global_ptr D.float32)\n\nlet mk_load () =\n  let idx = K.index ~ptr:(mk_param ()) ~idxs:[ mk_idx () ] () in\n  K.load ~src:idx ()\n\nlet contains haystack needle =\n  let hlen = String.length haystack in\n  let nlen = String.length needle in\n  let rec loop i =\n    if i + nlen > hlen then false\n    else if String.sub haystack i nlen = needle then true\n    else loop (i + 1)\n  in\n  loop 0\n\nlet validate_ok node = K.validate (K.sink [ node ])\n\nlet raises_validate substring fn =\n  raises_match (function Failure msg -> contains msg substring | _ -> false) fn\n\nlet raises_invalid fn =\n  raises_match (function Invalid_argument _ -> true | _ -> false) fn\n\nlet dtype_eq expected node =\n  match K.dtype_opt node with\n  | Some dt -> is_true (D.equal dt expected)\n  | None -> fail \"expected a dtype but got None\"\n\nlet has_const_int n nodes =\n  List.exists\n    (fun node ->\n      match K.view node with\n      | Const { value; _ } ->\n          (match C.view value with Int v -> Int64.to_int v = n | _ -> false)\n      | _ -> false)\n    nodes\n\nlet const_int_value node =\n  match K.view node with\n  | Const { value; _ 
} ->\n      (match C.view value with Int n -> Some (Int64.to_int n) | _ -> None)\n  | _ -> None\n\nlet () =\n  run \"Tolk_ir.Kernel\"\n    [\n      group \"Smart constructor dtype inference\"\n        [\n          test \"binary cmplt produces bool\" (fun () ->\n            dtype_eq D.bool\n              (K.binary ~op:`Cmplt ~lhs:(mk_f32 ()) ~rhs:(mk_f32 ())));\n          test \"binary cmpeq preserves lanes\" (fun () ->\n            let s1 = mk_f32 () and s2 = mk_f32 () in\n            let s3 = mk_f32 () and s4 = mk_f32 () in\n            let v1 = K.vectorize ~srcs:[ s1; s2; s3; s4 ] in\n            let v2 = K.vectorize ~srcs:[ s4; s3; s2; s1 ] in\n            dtype_eq (D.vec 4 D.bool) (K.binary ~op:`Cmpeq ~lhs:v1 ~rhs:v2));\n          test \"binary cmpne produces bool\" (fun () ->\n            dtype_eq D.bool\n              (K.binary ~op:`Cmpne ~lhs:(mk_i32 ()) ~rhs:(mk_i32 ())));\n          test \"binary add inherits lhs\" (fun () ->\n            dtype_eq D.float32\n              (K.binary ~op:`Add ~lhs:(mk_f32 ()) ~rhs:(mk_f32 ())));\n          test \"binary shl inherits lhs\" (fun () ->\n            dtype_eq D.int32\n              (K.binary ~op:`Shl ~lhs:(mk_i32 ()) ~rhs:(mk_i32 ())));\n          test \"ternary where inherits b\" (fun () ->\n            let b = mk_f32 () and c = mk_f32 () in\n            dtype_eq D.float32\n              (K.ternary ~op:`Where ~a:(K.const_bool true) ~b ~c));\n          test \"ternary mulacc inherits a\" (fun () ->\n            let a = mk_i32 () and b = mk_i32 () and c = mk_i32 () in\n            dtype_eq D.int32 (K.ternary ~op:`Mulacc ~a ~b ~c));\n          test \"unary sqrt inherits src\" (fun () ->\n            dtype_eq D.float32 (K.unary ~op:`Sqrt ~src:(mk_f32 ())));\n          test \"index derives ptr dtype\" (fun () ->\n            let index = K.index ~ptr:(mk_param ()) ~idxs:[ mk_idx () ] () in\n            is_true (K.is_ptr index));\n          test \"load derives base dtype\" (fun () ->\n            dtype_eq D.float32 
(mk_load ()));\n          test \"vectorize dtype\" (fun () ->\n            let s1 = mk_f32 () and s2 = mk_f32 () and s3 = mk_f32 () in\n            dtype_eq (D.vec 3 D.float32) (K.vectorize ~srcs:[ s1; s2; s3 ]));\n          test \"cat sums counts\" (fun () ->\n            let a = mk_f32 () and b = mk_f32 () in\n            let c = mk_f32 () and d = mk_f32 () and e = mk_f32 () in\n            let v2 = K.vectorize ~srcs:[ a; b ] in\n            let v3 = K.vectorize ~srcs:[ c; d; e ] in\n            dtype_eq (D.vec 5 D.float32) (K.vcat ~srcs:[ v2; v3 ]));\n          test \"gep gives scalar\" (fun () ->\n            let s1 = mk_f32 () and s2 = mk_f32 () in\n            let s3 = mk_f32 () and s4 = mk_f32 () in\n            let v4 = K.vectorize ~srcs:[ s1; s2; s3; s4 ] in\n            dtype_eq D.float32 (K.gep ~src:v4 ~idx:2));\n          test \"const_int is index\" (fun () ->\n            dtype_eq D.index (K.const_int 42));\n          test \"const_float is float32\" (fun () ->\n            dtype_eq D.float32 (K.const_float 3.14));\n          test \"const_bool is bool\" (fun () ->\n            dtype_eq D.bool (K.const_bool false));\n        ];\n      group \"Smart constructor edge cases\"\n        [\n          test \"vectorize empty raises\" (fun () ->\n            raises_invalid (fun () -> ignore (K.vectorize ~srcs:[])));\n          test \"vcat empty raises\" (fun () ->\n            raises_invalid (fun () -> ignore (K.vcat ~srcs:[])));\n          test \"vcat mixed scalar raises\" (fun () ->\n            let f = mk_f32 () and i = mk_i32 () in\n            let vf = K.vectorize ~srcs:[ f; f ] in\n            let vi = K.vectorize ~srcs:[ i; i ] in\n            raises_invalid (fun () -> ignore (K.vcat ~srcs:[ vf; vi ])));\n          test \"gep_multi empty raises\" (fun () ->\n            let v = K.vectorize ~srcs:[ mk_f32 (); mk_f32 () ] in\n            raises_invalid (fun () -> ignore (K.gep_multi ~src:v ~idxs:[])));\n          test \"gep_multi identity on scalar idx 0\" 
(fun () ->\n            let s = mk_f32 () in\n            is_true (K.gep_multi ~src:s ~idxs:[ 0 ] == s));\n          test \"gep_multi single gives gep\" (fun () ->\n            let add = K.binary ~op:`Add ~lhs:(mk_f32 ()) ~rhs:(mk_f32 ()) in\n            let v = K.vectorize ~srcs:[ add; add ] in\n            is_true (K.gep_multi ~src:v ~idxs:[ 0 ] == add);\n            let result = K.gep_multi ~src:v ~idxs:[ 0; 1 ] in\n            (match K.view result with\n             | Gep { idxs = [0; 1]; _ } -> ()\n             | _ -> fail \"expected multi-element Gep\"));\n          test \"gep_multi multi gives gep\" (fun () ->\n            let v =\n              K.vectorize\n                ~srcs:[ mk_f32 (); mk_f32 (); mk_f32 (); mk_f32 () ]\n            in\n            let result = K.gep_multi ~src:v ~idxs:[ 0; 2 ] in\n            (match K.view result with\n             | Gep { idxs = [0; 2]; _ } -> ()\n             | _ -> fail \"expected multi-element Gep\"));\n          test \"broadcast scalar to n\" (fun () ->\n            let s = mk_f32 () in\n            let b = K.broadcast s 4 in\n            dtype_eq (D.vec 4 D.float32) b;\n            (match K.view b with\n             | Vectorize { srcs; _ } -> equal int 4 (List.length srcs)\n             | _ -> fail \"expected Vectorize\"));\n          test \"broadcast pointer creates vectorize\" (fun () ->\n            let p = mk_param () in\n            let b = K.broadcast p 4 in\n            is_true (match K.view b with K.Vectorize { srcs; _ } -> List.length srcs = 4 | _ -> false));\n          test \"broadcast n <= 1 identity\" (fun () ->\n            let s = mk_f32 () in\n            is_true (K.broadcast s 1 == s);\n            is_true (K.broadcast s 0 == s));\n          test \"zero_like float int bool\" (fun () ->\n            dtype_eq D.float32 (K.zero_like (mk_f32 ()));\n            dtype_eq D.int32 (K.zero_like (mk_i32 ()));\n            dtype_eq D.bool (K.zero_like (K.const_bool true)));\n          test \"zero_like no 
dtype raises\" (fun () ->\n            raises_invalid (fun () -> ignore (K.zero_like K.barrier)));\n        ];\n      group \"Validation acceptance\"\n        [\n          test \"param global\" (fun () ->\n            validate_ok (K.param ~idx:0 ~dtype:(global_ptr D.float32)));\n          test \"define_local local\" (fun () ->\n            validate_ok (K.define_local ~size:8 ~dtype:(local_ptr D.float32)));\n          test \"define_reg reg\" (fun () ->\n            validate_ok (K.define_reg ~size:4 ~dtype:(reg_ptr D.float32) ~slot:0));\n          test \"define_var\" (fun () ->\n            validate_ok (K.define_var ~name:\"n\" ~lo:0 ~hi:10 ()));\n          test \"const all types\" (fun () ->\n            validate_ok (K.const_bool true);\n            validate_ok (K.const_int 42);\n            validate_ok (K.const_float 3.14));\n          test \"binary shift uint32 rhs\" (fun () ->\n            validate_ok\n              (K.binary ~op:`Shl ~lhs:(mk_i32 ())\n                 ~rhs:(K.const (C.int D.Val.uint32 2))));\n          test \"full load/store chain\" (fun () ->\n            let ptr = K.param ~idx:0 ~dtype:(global_ptr D.float32) in\n            let idx = mk_idx () in\n            let index = K.index ~ptr ~idxs:[ idx ] () in\n            let loaded = K.load ~src:index () in\n            let added = K.binary ~op:`Add ~lhs:loaded ~rhs:(mk_f32 ()) in\n            let dst_idx = K.index ~ptr ~idxs:[ idx ] () in\n            K.validate\n              (K.sink [ K.store ~dst:dst_idx ~value:added ~ranges:[] ]));\n          test \"store with ranges\" (fun () ->\n            let ptr = K.param ~idx:0 ~dtype:(global_ptr D.float32) in\n            let index = K.index ~ptr ~idxs:[ mk_idx () ] () in\n            let size = K.const (C.int D.Val.index 10) in\n            let range = K.range ~size ~axis:0 ~kind:Ak.Loop () in\n            K.validate\n              (K.sink\n                 [ K.store ~dst:index ~value:(mk_f32 ()) ~ranges:[ range ] ]));\n          test \"contract axes\" 
(fun () ->\n            validate_ok\n              (K.contract ~src:(mk_f32 ()) ~axes:[ (0, 3); (1, 2) ]\n                 ~dtype:(D.Val.vec 6 D.Val.float32)));\n          test \"unroll axes\" (fun () ->\n            let s = mk_f32 () and s2 = mk_f32 () and s3 = mk_f32 () in\n            let s4 = mk_f32 () and s5 = mk_f32 () and s6 = mk_f32 () in\n            let src = K.vectorize ~srcs:[ s; s2; s3; s4; s5; s6 ] in\n            validate_ok\n              (K.unroll ~src ~axes:[ (0, 3); (1, 2) ] ~dtype:D.Val.float32));\n          test \"vectorized local index operand\" (fun () ->\n            let ptr = local_ptr D.float32 in\n            let zero = K.const (C.int D.Val.index 0) in\n            let one = K.const (C.int D.Val.index 1) in\n            let idxs = K.vectorize ~srcs:[ zero; one ] in\n            let local = K.define_local ~size:8 ~dtype:ptr in\n            K.validate (K.sink [ K.index ~ptr:local ~idxs:[ idxs ] () ]));\n          test \"horizontal reduce src\" (fun () ->\n            let one = K.const (C.float D.Val.float32 1.0) in\n            let two = K.const (C.float D.Val.float32 2.0) in\n            let src = K.vectorize ~srcs:[ one; two ] in\n            let size = K.const (C.int D.Val.index 2) in\n            let range = K.range ~size ~axis:0 ~kind:Ak.Reduce () in\n            K.validate\n              (K.sink\n                 [ K.reduce ~op:`Add ~src ~ranges:[ range ] ~dtype:D.Val.float32 ]));\n        ];\n      group \"Validation rejection\"\n        [\n          test \"reject param local addrspace\" (fun () ->\n            raises_validate \"Global addrspace\" (fun () ->\n                validate_ok (K.param ~idx:0 ~dtype:(local_ptr D.float32))));\n          test \"reject define_local global addrspace\" (fun () ->\n            raises_validate \"Local addrspace\" (fun () ->\n                validate_ok\n                  (K.define_local ~size:8 ~dtype:(global_ptr D.float32))));\n          test \"reject define_reg local addrspace\" (fun () ->\n     
       raises_validate \"Reg addrspace\" (fun () ->\n                validate_ok\n                  (K.define_reg ~size:4 ~dtype:(local_ptr D.float32) ~slot:0)));\n          test \"reject define_var vector\" (fun () ->\n            raises_validate \"must be scalar\" (fun () ->\n                validate_ok\n                  (K.define_var ~name:\"v\" ~lo:0 ~hi:4\n                     ~dtype:(D.Val.vec 4 D.Val.int32) ())));\n          test \"reject define_var float\" (fun () ->\n            raises_validate \"must be int/index\" (fun () ->\n                validate_ok\n                  (K.define_var ~name:\"f\" ~lo:0 ~hi:4 ~dtype:D.Val.float32 ())));\n          test \"reject define_var lo > hi\" (fun () ->\n            raises_validate \"lo > hi\" (fun () ->\n                validate_ok (K.define_var ~name:\"x\" ~lo:5 ~hi:3 ())));\n          test \"reject range float\" (fun () ->\n            raises_validate \"Range must have int\" (fun () ->\n                validate_ok\n                  (K.range ~size:(mk_f32 ()) ~axis:0 ~kind:Ak.Loop\n                     ~dtype:D.Val.float32 ())));\n          test \"reject range vector\" (fun () ->\n            raises_validate \"Range must be scalar\" (fun () ->\n                validate_ok\n                  (K.range ~size:(mk_i32 ()) ~axis:0 ~kind:Ak.Loop\n                     ~dtype:(D.Val.vec 4 D.Val.int32) ())));\n          test \"reject special vector\" (fun () ->\n            raises_validate \"must be scalar\" (fun () ->\n                validate_ok\n                  (K.special ~dim:(Sd.Group_id 0) ~size:(mk_i32 ())\n                     ~dtype:(D.Val.vec 2 D.Val.int32) ())));\n          test \"reject special float\" (fun () ->\n            raises_validate \"must be index or int32\" (fun () ->\n                validate_ok\n                  (K.special ~dim:(Sd.Group_id 0) ~size:(mk_f32 ())\n                     ~dtype:D.Val.float32 ())));\n          test \"reject index empty idxs\" (fun () ->\n            raises_validate 
\"at least one index\" (fun () ->\n                validate_ok (K.index ~ptr:(mk_param ()) ~idxs:[] ())));\n          test \"reject index non-buffer base\" (fun () ->\n            raises_invalid (fun () ->\n                ignore (K.index ~ptr:(mk_f32 ()) ~idxs:[ mk_idx () ] ())));\n          test \"reject index non-index operand\" (fun () ->\n            raises_validate \"must be index-like\" (fun () ->\n                validate_ok\n                  (K.index ~ptr:(mk_param ()) ~idxs:[ mk_f32 () ] ())));\n          test \"reject index non-bool gate\" (fun () ->\n            raises_validate \"must be bool scalar\" (fun () ->\n                validate_ok\n                  (K.index ~ptr:(mk_param ()) ~idxs:[ mk_idx () ]\n                     ~gate:(mk_i32 ()) ())));\n          test \"reject cmp dtype mismatch\" (fun () ->\n            raises_validate \"don't match\" (fun () ->\n                validate_ok\n                  (K.binary ~op:`Cmplt ~lhs:(mk_f32 ()) ~rhs:(mk_i32 ()))));\n          test \"reject shift non-int\" (fun () ->\n            raises_validate \"Shift must have int\" (fun () ->\n                validate_ok\n                  (K.binary ~op:`Shl ~lhs:(mk_f32 ()) ~rhs:(mk_f32 ()))));\n          test \"reject idiv non-int\" (fun () ->\n            raises_validate \"Idiv/Mod must have int\" (fun () ->\n                validate_ok\n                  (K.binary ~op:`Idiv ~lhs:(mk_f32 ()) ~rhs:(mk_f32 ()))));\n          test \"reject where non-bool cond\" (fun () ->\n            raises_validate \"must be bool\" (fun () ->\n                validate_ok\n                  (K.ternary ~op:`Where ~a:(mk_i32 ()) ~b:(mk_f32 ())\n                     ~c:(mk_f32 ()))));\n          test \"reject where mismatched arms\" (fun () ->\n            raises_validate \"arms\" (fun () ->\n                validate_ok\n                  (K.ternary ~op:`Where ~a:(K.const_bool true)\n                     ~b:(mk_f32 ()) ~c:(mk_i32 ()))));\n          test \"reject gep out of bounds\" 
(fun () ->\n            let v =\n              K.vectorize\n                ~srcs:[ mk_f32 (); mk_f32 (); mk_f32 (); mk_f32 () ]\n            in\n            raises_validate \"out of bounds\" (fun () ->\n                validate_ok (K.gep ~src:v ~idx:5)));\n          test \"accept gep scalar source\" (fun () ->\n            validate_ok (K.gep ~src:(mk_f32 ()) ~idx:0));\n          test \"reject store value dtype mismatch\" (fun () ->\n            let ptr = K.param ~idx:0 ~dtype:(global_ptr D.float32) in\n            let index = K.index ~ptr ~idxs:[ mk_idx () ] () in\n            raises_validate \"Store value\" (fun () ->\n                K.validate\n                  (K.sink\n                     [ K.store ~dst:index ~value:(mk_i32 ()) ~ranges:[] ])));\n          test \"reject load alt without gate\" (fun () ->\n            let ptr = K.param ~idx:0 ~dtype:(global_ptr D.float32) in\n            let index = K.index ~ptr ~idxs:[ mk_idx () ] () in\n            raises_validate \"alt requires gated\" (fun () ->\n                K.validate\n                  (K.sink [ K.load ~src:index ~alt:(mk_f32 ()) () ])));\n          test \"reject ptrcat empty\" (fun () ->\n            raises_validate \"at least one source\" (fun () ->\n                validate_ok\n                  (K.ptrcat ~srcs:[] ~dtype:(global_ptr D.float32))));\n          test \"reject unroll count mismatch\" (fun () ->\n            let s = mk_f32 () and s2 = mk_f32 () in\n            let s3 = mk_f32 () and s4 = mk_f32 () in\n            let src = K.vectorize ~srcs:[ s; s2; s3; s4 ] in\n            raises_validate \"count mismatch\" (fun () ->\n                validate_ok\n                  (K.unroll ~src ~axes:[ (0, 3); (1, 2) ] ~dtype:D.Val.float32)));\n          test \"reject contract count mismatch\" (fun () ->\n            raises_validate \"count mismatch\" (fun () ->\n                validate_ok\n                  (K.contract ~src:(mk_f32 ()) ~axes:[ (0, 3) ]\n                     ~dtype:(D.Val.vec 2 
D.Val.float32))));\n          test \"reject bufferize addrspace mismatch\" (fun () ->\n            let opts : K.bufferize_opts =\n              { device = None; addrspace = D.Global; removable = false }\n            in\n            raises_validate \"addrspace mismatch\" (fun () ->\n                validate_ok\n                  (K.bufferize ~src:(mk_f32 ()) ~ranges:[]\n                     ~dtype:(local_ptr D.float32) ~opts)));\n        ];\n      group \"Graph infrastructure\"\n        [\n          test \"toposort leaf to root\" (fun () ->\n            let a = mk_f32 () in\n            let b = K.unary ~op:`Neg ~src:a in\n            let c = K.unary ~op:`Neg ~src:b in\n            let root = K.sink [ c ] in\n            let order = K.toposort root in\n            equal int 4 (List.length order);\n            is_true (List.hd order == a);\n            is_true (List.nth order 3 == root));\n          test \"toposort diamond\" (fun () ->\n            let a = mk_f32 () in\n            let b = K.unary ~op:`Neg ~src:a in\n            let c = K.unary ~op:`Sqrt ~src:a in\n            let d = K.binary ~op:`Add ~lhs:b ~rhs:c in\n            let root = K.sink [ d ] in\n            let order = K.toposort root in\n            equal int 1\n              (List.length (List.filter (fun n -> n == a) order));\n            is_true (List.nth order (List.length order - 1) == root));\n          test \"intern dedup\" (fun () ->\n            let b1 = K.unary ~op:`Neg ~src:(K.const_int 42) in\n            let b2 = K.unary ~op:`Neg ~src:(K.const_int 42) in\n            let interned =\n              K.intern (K.binary ~op:`Add ~lhs:b1 ~rhs:b2)\n            in\n            (match K.children interned with\n             | [ lhs; rhs ] -> is_true (lhs == rhs)\n             | _ -> fail \"expected two children\"));\n          test \"intern preserves validity\" (fun () ->\n            let ptr = K.param ~idx:0 ~dtype:(global_ptr D.float32) in\n            let index = K.index ~ptr ~idxs:[ mk_idx () ] 
() in\n            let root = K.sink [ K.load ~src:index () ] in\n            K.validate root;\n            K.validate (K.intern root));\n          test \"sort pointer variants\" (fun () ->\n            is_true (K.sort (mk_param ()) = K.Pointer);\n            is_true\n              (K.sort\n                 (K.define_local ~size:8 ~dtype:(local_ptr D.float32))\n               = K.Pointer);\n            is_true\n              (K.sort (K.define_reg ~size:4 ~dtype:(reg_ptr D.float32) ~slot:0)\n               = K.Pointer);\n            is_true\n              (K.sort\n                 (K.index ~ptr:(mk_param ()) ~idxs:[ mk_idx () ] ())\n               = K.Pointer));\n          test \"sort effect variants\" (fun () ->\n            is_true (K.sort (K.sink []) = K.Effect);\n            is_true (K.sort K.barrier = K.Effect);\n            let index =\n              K.index ~ptr:(mk_param ()) ~idxs:[ mk_idx () ] ()\n            in\n            is_true\n              (K.sort (K.store ~dst:index ~value:(mk_f32 ()) ~ranges:[])\n               = K.Effect));\n          test \"sort index variants\" (fun () ->\n            let size = K.const (C.int D.Val.index 10) in\n            is_true\n              (K.sort (K.range ~size ~axis:0 ~kind:Ak.Loop ()) = K.Index);\n            is_true\n              (K.sort (K.special ~dim:(Sd.Group_id 0) ~size ()) = K.Index);\n            is_true\n              (K.sort (K.define_var ~name:\"n\" ~lo:0 ~hi:10 ()) = K.Index));\n          test \"sort value vs index\" (fun () ->\n            is_true\n              (K.sort\n                 (K.binary ~op:`Add ~lhs:(mk_f32 ()) ~rhs:(mk_f32 ()))\n               = K.Value);\n            is_true\n              (K.sort\n                 (K.binary ~op:`Add ~lhs:(mk_idx ()) ~rhs:(mk_idx ()))\n               = K.Index));\n          test \"is_alu and is_ptr\" (fun () ->\n            is_true (K.is_alu (K.unary ~op:`Neg ~src:(mk_f32 ())));\n            is_true\n              (K.is_alu\n                 (K.binary 
~op:`Add ~lhs:(mk_f32 ()) ~rhs:(mk_f32 ())));\n            is_true\n              (K.is_alu\n                 (K.ternary ~op:`Where ~a:(K.const_bool true)\n                    ~b:(mk_f32 ()) ~c:(mk_f32 ())));\n            is_false (K.is_alu (mk_f32 ()));\n            is_true (K.is_ptr (mk_param ()));\n            is_true\n              (K.is_ptr\n                 (K.index ~ptr:(mk_param ()) ~idxs:[ mk_idx () ] ()));\n            is_false (K.is_ptr (mk_f32 ())));\n        ];\n      group \"Rewriting\"\n        [\n          test \"rebuild replaces const\" (fun () ->\n            let four = K.const (C.int D.Val.index 4) in\n            let neg = K.unary ~op:`Neg ~src:(K.const (C.int D.Val.index 3)) in\n            let root = K.sink [ neg ] in\n            let rewrite node =\n              match K.view node with\n              | Const { value; _ } ->\n                  (match C.view value with\n                   | Int n when Int64.to_int n = 3 -> Some four\n                   | _ -> None)\n              | _ -> None\n            in\n            let nodes = K.toposort (K.graph_rewrite rewrite root) in\n            is_false (has_const_int 3 nodes);\n            is_true (has_const_int 4 nodes));\n          test \"graph_rewrite no match identity\" (fun () ->\n            let root = K.sink [ K.unary ~op:`Neg ~src:(mk_f32 ()) ] in\n            let result = K.graph_rewrite (fun _ -> None) root in\n            equal int\n              (List.length (K.toposort root))\n              (List.length (K.toposort result)));\n          test \"graph_rewrite simplifies\" (fun () ->\n            let x = mk_f32 () in\n            let zero = K.const (C.float D.Val.float32 0.0) in\n            let root =\n              K.sink [ K.binary ~op:`Add ~lhs:x ~rhs:zero ]\n            in\n            let rewrite node =\n              match K.view node with\n              | Binary { op = `Add; rhs; _ } ->\n                  (match K.view rhs with\n                   | Const { value; _ } ->\n          
             (match C.view value with\n                        | Float f when f = 0.0 -> Some x\n                        | _ -> None)\n                   | _ -> None)\n              | _ -> None\n            in\n            is_true\n              (List.length (K.toposort (K.graph_rewrite rewrite root))\n               < List.length (K.toposort root)));\n          test \"first_match returns first\" (fun () ->\n            let r1 _ = Some (K.const_int 1) in\n            let r2 _ = Some (K.const_int 2) in\n            (match K.first_match [ r1; r2 ] (mk_f32 ()) with\n             | Some result -> equal (option int) (Some 1) (const_int_value result)\n             | None -> fail \"expected Some\"));\n          test \"first_match skips none\" (fun () ->\n            let r1 _ = None in\n            let r2 _ = Some (K.const_int 2) in\n            (match K.first_match [ r1; r2 ] (mk_f32 ()) with\n             | Some result -> equal (option int) (Some 2) (const_int_value result)\n             | None -> fail \"expected Some\"));\n          test \"replace binary children\" (fun () ->\n            let a = mk_f32 () and b = mk_f32 () in\n            let add = K.binary ~op:`Add ~lhs:a ~rhs:b in\n            let c = mk_f32 () and d = mk_f32 () in\n            (match K.children (K.replace add ~children:[ c; d ] ()) with\n             | [ lhs; rhs ] ->\n                 is_true (lhs == c);\n                 is_true (rhs == d)\n             | _ -> fail \"expected two children\"));\n        ];\n      group \"Formatting and Opt\"\n        [\n          test \"pp diamond bounded size\" (fun () ->\n            let rec build depth node =\n              if depth = 0 then node\n              else\n                let left = K.unary ~op:`Neg ~src:node in\n                let right = K.unary ~op:`Sqrt ~src:node in\n                build (depth - 1) (K.binary ~op:`Add ~lhs:left ~rhs:right)\n            in\n            let root = K.sink [ build 20 (mk_f32 ()) ] in\n            is_true\n           
   (String.length (Format.asprintf \"%a\" K.pp root) < 10_000));\n          test \"pp includes ops and dtypes\" (fun () ->\n            let root =\n              K.sink\n                [ K.binary ~op:`Add ~lhs:(mk_f32 ()) ~rhs:(mk_f32 ()) ]\n            in\n            let output = Format.asprintf \"%a\" K.pp root in\n            is_true (contains output \"add\");\n            is_true (contains output \"f32\"));\n          test \"opt to_string all variants\" (fun () ->\n            equal string \"LOCAL:0:4\"\n              (K.Opt.to_string (Local { axis = 0; amount = 4 }));\n            equal string \"UPCAST:1:8\"\n              (K.Opt.to_string (Upcast { axis = 1; amount = 8 }));\n            equal string \"UNROLL:2:3\"\n              (K.Opt.to_string (Unroll { axis = 2; amount = 3 }));\n            equal string \"GROUP:0:16\"\n              (K.Opt.to_string (Group { axis = 0; amount = 16 }));\n            equal string \"GROUPTOP:1:32\"\n              (K.Opt.to_string (Grouptop { axis = 1; amount = 32 }));\n            equal string \"THREAD:0:2\"\n              (K.Opt.to_string (Thread { axis = 0; amount = 2 }));\n            equal string \"NOLOCALS\" (K.Opt.to_string Nolocals);\n            equal string \"TC:0:1:2:3\"\n              (K.Opt.to_string\n                 (Tc { axis = 0; tc_select = 1; tc_opt = 2; use_tc = 3 }));\n            equal string \"PADTO:3:64\"\n              (K.Opt.to_string (Padto { axis = 3; amount = 64 }));\n            equal string \"SWAP:0:1\"\n              (K.Opt.to_string (Swap { axis = 0; with_axis = 1 })));\n          test \"kernel_info in sink survives validate\" (fun () ->\n            let ki : K.kernel_info =\n              {\n                name = \"test_kernel\";\n                axis_kinds = [ Ak.Global; Ak.Loop; Ak.Reduce ];\n                dont_use_locals = false;\n                applied_opts = [ K.Opt.Local { axis = 0; amount = 4 } ];\n                opts_to_apply = None;\n                estimates = None;\n           
   }\n            in\n            K.validate (K.sink ~kernel_info:ki []));\n        ];\n      group \"Constructor short-circuits\"\n        [\n          test \"group singleton returns child\" (fun () ->\n            let x = mk_f32 () in\n            is_true (K.group [ x ] == x));\n          test \"group empty creates Group\" (fun () ->\n            (match K.view (K.group []) with\n             | Group _ -> ()\n             | _ -> fail \"expected Group\"));\n          test \"group multi creates Group\" (fun () ->\n            let a = mk_f32 () and b = mk_f32 () in\n            (match K.view (K.group [ a; b ]) with\n             | Group { srcs } -> equal int 2 (List.length srcs)\n             | _ -> fail \"expected Group\"));\n          test \"after empty deps returns src\" (fun () ->\n            let x = mk_f32 () in\n            is_true (K.after ~src:x ~deps:[] == x));\n          test \"after with deps creates After\" (fun () ->\n            let x = mk_f32 () and d = mk_f32 () in\n            (match K.view (K.after ~src:x ~deps:[ d ]) with\n             | After { src; deps } ->\n                 is_true (src == x);\n                 equal int 1 (List.length deps)\n             | _ -> fail \"expected After\"));\n        ];\n      group \"Range analysis\"\n        [\n          test \"ended_ranges for End\" (fun () ->\n            let size = K.const (C.int D.Val.index 4) in\n            let r0 = K.range ~size ~axis:0 ~kind:Ak.Loop () in\n            let ended = K.end_ ~value:(mk_f32 ()) ~ranges:[ r0 ] () in\n            let ers = K.ended_ranges ended in\n            equal int 1 (List.length ers);\n            is_true (List.hd ers == r0));\n          test \"ended_ranges for Reduce\" (fun () ->\n            let size = K.const (C.int D.Val.index 4) in\n            let r0 = K.range ~size ~axis:0 ~kind:Ak.Reduce () in\n            let red = K.reduce ~op:`Add ~src:(mk_f32 ()) ~ranges:[ r0 ]\n                ~dtype:D.Val.float32 in\n            let ers = K.ended_ranges red 
in\n            equal int 1 (List.length ers);\n            is_true (List.hd ers == r0));\n          test \"ended_ranges for Store\" (fun () ->\n            let size = K.const (C.int D.Val.index 4) in\n            let r0 = K.range ~size ~axis:0 ~kind:Ak.Loop () in\n            let idx = K.index ~ptr:(mk_param ()) ~idxs:[ mk_idx () ] () in\n            let st = K.store ~dst:idx ~value:(mk_f32 ()) ~ranges:[ r0 ] in\n            let ers = K.ended_ranges st in\n            equal int 1 (List.length ers);\n            is_true (List.hd ers == r0));\n          test \"ended_ranges for After delegates to deps\" (fun () ->\n            let size = K.const (C.int D.Val.index 4) in\n            let r0 = K.range ~size ~axis:0 ~kind:Ak.Loop () in\n            let ended = K.end_ ~value:(mk_f32 ()) ~ranges:[ r0 ] () in\n            let aft = K.after ~src:(mk_f32 ()) ~deps:[ ended ] in\n            let ers = K.ended_ranges aft in\n            equal int 1 (List.length ers);\n            is_true (List.hd ers == r0));\n          test \"ended_ranges for leaf is empty\" (fun () ->\n            equal int 0 (List.length (K.ended_ranges (mk_f32 ()))));\n          test \"ended_ranges for Contract with live\" (fun () ->\n            let size = K.const (C.int D.Val.index 4) in\n            let r0 = K.range ~size ~axis:0 ~kind:Ak.Upcast () in\n            let r1 = K.range ~size ~axis:1 ~kind:Ak.Upcast () in\n            let r2 = K.range ~size ~axis:2 ~kind:Ak.Loop () in\n            let live _ = [ r0; r1; r2 ] in\n            let contract =\n              K.contract ~src:(mk_f32 ()) ~axes:[ (0, 4); (1, 4) ]\n                ~dtype:(D.Val.vec 16 D.Val.float32)\n            in\n            let ers = K.ended_ranges ~live contract in\n            equal int 2 (List.length ers);\n            is_true (List.exists (fun r -> r == r0) ers);\n            is_true (List.exists (fun r -> r == r1) ers);\n            is_false (List.exists (fun r -> r == r2) ers));\n          test \"ended_ranges for Contract 
without live is empty\" (fun () ->\n            let contract =\n              K.contract ~src:(mk_f32 ()) ~axes:[ (0, 4) ]\n                ~dtype:(D.Val.vec 4 D.Val.float32)\n            in\n            equal int 0 (List.length (K.ended_ranges contract)));\n          test \"live_ranges_tbl simple reduce\" (fun () ->\n            let size = K.const (C.int D.Val.index 4) in\n            let r0 = K.range ~size ~axis:0 ~kind:Ak.Reduce () in\n            let red = K.reduce ~op:`Add ~src:(mk_f32 ()) ~ranges:[ r0 ]\n                ~dtype:D.Val.float32 in\n            let root = K.sink [ red ] in\n            let tbl = K.live_ranges_tbl root in\n            (* r0 is live at itself *)\n            let r0_live =\n              match K.Ref_tbl.find_opt tbl r0 with Some r -> r | None -> []\n            in\n            is_true (List.exists (fun r -> r == r0) r0_live);\n            (* r0 is NOT live at reduce (it's ended there) *)\n            let red_live =\n              match K.Ref_tbl.find_opt tbl red with Some r -> r | None -> []\n            in\n            is_false (List.exists (fun r -> r == r0) red_live));\n          test \"live_ranges_tbl nested ranges\" (fun () ->\n            let size = K.const (C.int D.Val.index 4) in\n            let r0 = K.range ~size ~axis:0 ~kind:Ak.Loop () in\n            let r1 = K.range ~size ~axis:1 ~kind:Ak.Reduce () in\n            (* src depends on both ranges so both are in its backward slice *)\n            let src = K.binary ~op:`Add ~lhs:r0 ~rhs:r1 in\n            let red = K.reduce ~op:`Add ~src ~ranges:[ r1 ]\n                ~dtype:D.Val.index in\n            let ended = K.end_ ~value:red ~ranges:[ r0 ] () in\n            let root = K.sink [ ended ] in\n            let tbl = K.live_ranges_tbl root in\n            (* r0 and r1 are both live at src (it depends on both ranges) *)\n            let src_live =\n              match K.Ref_tbl.find_opt tbl src with Some r -> r | None -> []\n            in\n            is_true (List.exists 
(fun r -> r == r0) src_live);\n            is_true (List.exists (fun r -> r == r1) src_live);\n            (* r1 is ended at reduce, but r0 is still live *)\n            let red_live =\n              match K.Ref_tbl.find_opt tbl red with Some r -> r | None -> []\n            in\n            is_true (List.exists (fun r -> r == r0) red_live);\n            is_false (List.exists (fun r -> r == r1) red_live));\n        ];\n      group \"Substitute\"\n        [\n          test \"substitute replaces by identity\" (fun () ->\n            let a = K.const (C.float D.Val.float32 1.0) in\n            let b = K.const (C.float D.Val.float32 2.0) in\n            let add = K.binary ~op:`Add ~lhs:a ~rhs:b in\n            let root = K.sink [ add ] in\n            let c = K.const (C.float D.Val.float32 3.0) in\n            let result = K.substitute [ (a, c) ] root in\n            (* The old a should be replaced with c *)\n            let nodes = K.toposort result in\n            is_true (List.exists (fun n -> n == c) nodes);\n            is_false (List.exists (fun n -> n == a) nodes));\n          test \"substitute no match identity\" (fun () ->\n            let a = mk_f32 () and b = mk_f32 () in\n            let add = K.binary ~op:`Add ~lhs:a ~rhs:b in\n            let root = K.sink [ add ] in\n            let result = K.substitute [] root in\n            is_true (result == root));\n          test \"substitute propagates tags\" (fun () ->\n            let tags = K.Ref_tbl.create 4 in\n            let a = mk_f32 () in\n            K.Ref_tbl.replace tags a 42;\n            let b = mk_f32 () in\n            let add = K.binary ~op:`Add ~lhs:a ~rhs:b in\n            let root = K.sink [ add ] in\n            let c = mk_f32 () in\n            let _ = K.substitute ~tags [ (a, c) ] root in\n            (* Tag should be copied from a to c *)\n            equal (option int) (Some 42) (K.Ref_tbl.find_opt tags c));\n        ];\n    ]\n"
  },
  {
    "path": "packages/tolk/test/unit/test_ir_program.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Windtrap\nmodule C = Tolk_ir.Const\nmodule D = Tolk_ir.Dtype\nmodule P = Tolk_ir.Program\nmodule Ak = Tolk_ir.Axis_kind\nmodule Sd = Tolk_ir.Special_dim\n\n(* Helpers *)\n\nlet global_ptr dt = D.Ptr.create (D.val_of dt) ~addrspace:Global ~size:(-1)\nlet local_ptr dt = D.Ptr.create (D.val_of dt) ~addrspace:Local ~size:(-1)\nlet reg_ptr dt = D.Ptr.create (D.val_of dt) ~addrspace:Reg ~size:(-1)\nlet dt = D.float32\nlet gptr = global_ptr dt\n\nlet contains haystack needle =\n  let hlen = String.length haystack in\n  let nlen = String.length needle in\n  let rec loop i =\n    if i + nlen > hlen then false\n    else if String.sub haystack i nlen = needle then true\n    else loop (i + 1)\n  in\n  loop 0\n\nlet raises_validate substring fn =\n  raises_match\n    (function Failure msg -> contains msg substring | _ -> false)\n    fn\n\nlet raises_invalid fn =\n  raises_match (function Invalid_argument _ -> true | _ -> false) fn\n\nlet emit_i32 b n =\n  P.emit b (Const { value = C.int D.Val.int32 n; dtype = D.Val.int32 })\n\nlet emit_f32 b x =\n  P.emit b (Const { value = C.float (D.val_of dt) x; dtype = D.val_of dt })\n\nlet emit_bool b v =\n  P.emit b (Const { value = C.bool v; dtype = D.Val.bool })\n\n(* Emit Param(global f32) -> Const(0:i32) -> Index -> Load.\n   Returns (ptr_id, idx_id, addr_id, value_id). 
*)\nlet emit_load_chain ?(dtype = D.val_of dt) b =\n  let ptr_dt = global_ptr (D.Val dtype) in\n  let ptr = P.emit b (Param { idx = 0; dtype = ptr_dt }) in\n  let idx = emit_i32 b 0 in\n  let addr =\n    P.emit b (Index { ptr; idxs = [ idx ]; gate = None; dtype = ptr_dt })\n  in\n  let value = P.emit b (Load { src = addr; alt = None; dtype }) in\n  (ptr, idx, addr, value)\n\n(* Emit a gated load chain with gate and alt.\n   Returns (ptr_id, idx_id, gate_id, addr_id, alt_id, value_id). *)\nlet emit_gated_load_chain ?(dtype = D.val_of dt) b =\n  let ptr_dt = global_ptr (D.Val dtype) in\n  let ptr = P.emit b (Param { idx = 0; dtype = ptr_dt }) in\n  let idx = emit_i32 b 0 in\n  let gate = emit_bool b true in\n  let addr =\n    P.emit b\n      (Index { ptr; idxs = [ idx ]; gate = Some gate; dtype = ptr_dt })\n  in\n  let alt = P.emit b (Const { value = C.float dtype 0.0; dtype }) in\n  let value = P.emit b (Load { src = addr; alt = Some alt; dtype }) in\n  (ptr, idx, gate, addr, alt, value)\n\n(* Emit Param(global f32) -> Const(0:i32) -> Index(ungated).\n   Returns addr_id. *)\nlet emit_index_chain b =\n  ignore (P.emit b (Param { idx = 0; dtype = gptr }));\n  let idx = emit_i32 b 0 in\n  P.emit b (Index { ptr = 0; idxs = [ idx ]; gate = None; dtype = gptr })\n\n(* Build, finish, validate. *)\nlet validates fn =\n  let b = P.create () in\n  fn b;\n  P.validate (P.finish b)\n\nlet rejects substring fn =\n  let b = P.create () in\n  fn b;\n  raises_validate substring (fun () -> P.validate (P.finish b))\n\n(* Wmma with default fields, overridable. 
*)\nlet wmma_fields ?(dims = (16, 16, 16)) ?(dtype_in = D.Float32)\n    ?(dtype_out = D.Float32) ~a ~b ~c () : P.view =\n  Wmma\n    {\n      name = \"test\";\n      a;\n      b;\n      c;\n      dtype = D.val_of dt;\n      dims;\n      dtype_in;\n      dtype_out;\n      device = \"METAL\";\n      threads = 32;\n      upcast_axes = ([], [], []);\n      reduce_axes = [];\n    }\n\nlet () =\n  run \"Ir_next.Program\"\n    [\n      group \"Builder\"\n        [\n          test \"empty program\" (fun () ->\n            equal int 0 (P.length (P.finish (P.create ()))));\n          test \"emit sequential ids\" (fun () ->\n            let b = P.create () in\n            let id0 = P.emit b Barrier in\n            let id1 = P.emit b Barrier in\n            let id2 = P.emit b Barrier in\n            let id3 = P.emit b Barrier in\n            let id4 = P.emit b Barrier in\n            equal int 0 id0;\n            equal int 1 id1;\n            equal int 2 id2;\n            equal int 3 id3;\n            equal int 4 id4);\n          test \"finish preserves order\" (fun () ->\n            let b = P.create () in\n            ignore (P.emit b (Param { idx = 0; dtype = gptr }));\n            ignore (emit_i32 b 0);\n            ignore\n              (P.emit b\n                 (Index\n                    { ptr = 0; idxs = [ 1 ]; gate = None; dtype = gptr }));\n            let p = P.finish b in\n            (match P.view p 0 with\n            | Param { idx = 0; _ } -> ()\n            | _ -> fail \"expected Param\");\n            (match P.view p 1 with Const _ -> () | _ -> fail \"expected Const\");\n            (match P.view p 2 with\n            | Index _ -> ()\n            | _ -> fail \"expected Index\"));\n          test \"reallocation\" (fun () ->\n            let b = P.create () in\n            for _ = 0 to 63 do\n              ignore (P.emit b Barrier)\n            done;\n            let p = P.finish b in\n            equal int 64 (P.length p);\n            match P.view p 63 
with\n            | Barrier -> ()\n            | _ -> fail \"expected Barrier\");\n          test \"finish is snapshot\" (fun () ->\n            let b = P.create () in\n            ignore (P.emit b Barrier);\n            ignore (P.emit b Barrier);\n            let p1 = P.finish b in\n            ignore (P.emit b Barrier);\n            let p2 = P.finish b in\n            equal int 2 (P.length p1);\n            equal int 3 (P.length p2));\n          test \"barrier has no dtype\" (fun () ->\n            let b = P.create () in\n            ignore (P.emit b Barrier);\n            is_none (P.dtype (P.finish b) 0));\n          test \"view roundtrip\" (fun () ->\n            let b = P.create () in\n            ignore (P.emit b (Param { idx = 3; dtype = global_ptr D.int32 }));\n            match P.view (P.finish b) 0 with\n            | Param { idx = 3; dtype } ->\n                is_true (D.Ptr.addrspace dtype = D.Global);\n                is_true (D.Val.equal (D.Ptr.base dtype) D.Val.int32)\n            | _ -> fail \"expected Param with idx=3\");\n        ];\n      group \"Inspection\"\n        [\n          test \"dtype value\" (fun () ->\n            let b = P.create () in\n            ignore (emit_f32 b 1.0);\n            some (of_equal D.Val.equal) (D.val_of dt) (P.dtype (P.finish b) 0));\n          test \"dtype pointer\" (fun () ->\n            let b = P.create () in\n            ignore (P.emit b (Param { idx = 0; dtype = gptr }));\n            some (of_equal D.Val.equal) (D.val_of dt) (P.dtype (P.finish b) 0));\n          test \"dtype effect\" (fun () ->\n            let b = P.create () in\n            let _ptr, _idx, addr, value = emit_load_chain b in\n            ignore (P.emit b (Store { dst = addr; value }));\n            is_none (P.dtype (P.finish b) 4));\n          test \"dtype end_range\" (fun () ->\n            let b = P.create () in\n            let size = emit_i32 b 10 in\n            let range =\n              P.emit b\n                (Range { size; dtype 
= D.Val.int32; axis = 0; sub = []; kind = Ak.Loop })\n            in\n            ignore (P.emit b (End_range { dep = range; range }));\n            is_none (P.dtype (P.finish b) 2));\n          test \"sort pointer\" (fun () ->\n            let b = P.create () in\n            ignore (P.emit b (Param { idx = 0; dtype = gptr }));\n            ignore\n              (P.emit b\n                 (Define_local { size = 8; dtype = local_ptr dt }));\n            ignore (P.emit b (Define_reg { size = 4; dtype = reg_ptr dt }));\n            let idx = emit_i32 b 0 in\n            ignore\n              (P.emit b\n                 (Index\n                    { ptr = 0; idxs = [ idx ]; gate = None; dtype = gptr }));\n            let p = P.finish b in\n            is_true (P.sort p 0 = P.Pointer);\n            is_true (P.sort p 1 = P.Pointer);\n            is_true (P.sort p 2 = P.Pointer);\n            is_true (P.sort p 4 = P.Pointer));\n          test \"sort index\" (fun () ->\n            let b = P.create () in\n            ignore\n              (P.emit b\n                 (Define_var\n                    { name = \"n\"; lo = 0; hi = 10; dtype = D.Val.int32 }));\n            let size = emit_i32 b 10 in\n            ignore\n              (P.emit b\n                 (Range { size; dtype = D.Val.int32; axis = 0; sub = []; kind = Ak.Loop }));\n            ignore\n              (P.emit b\n                 (Special { dim = Sd.Group_id 0; size; dtype = D.Val.int32 }));\n            let p = P.finish b in\n            is_true (P.sort p 0 = P.Index);\n            is_true (P.sort p 2 = P.Index);\n            is_true (P.sort p 3 = P.Index));\n          test \"sort effect\" (fun () ->\n            let b = P.create () in\n            let _ptr, _idx, addr, value = emit_load_chain b in\n            ignore (P.emit b (Store { dst = addr; value }));\n            ignore (P.emit b Barrier);\n            let p = P.finish b in\n            is_true (P.sort p 4 = P.Effect);\n            is_true (P.sort 
p 5 = P.Effect));\n          test \"sort value\" (fun () ->\n            let b = P.create () in\n            let _ptr, _idx, _addr, value = emit_load_chain b in\n            ignore (P.emit b (Unary { op = `Neg; src = value; dtype = D.val_of dt }));\n            ignore\n              (P.emit b\n                 (Binary\n                    { op = `Add; lhs = value; rhs = value; dtype = D.val_of dt }));\n            let p = P.finish b in\n            is_true (P.sort p 1 = P.Value);\n            is_true (P.sort p 3 = P.Value);\n            is_true (P.sort p 4 = P.Value);\n            is_true (P.sort p 5 = P.Value));\n          test \"sort after void is effect\" (fun () ->\n            let b = P.create () in\n            let barrier = P.emit b Barrier in\n            ignore\n              (P.emit b (After { src = barrier; deps = []; dtype = D.Val.void }));\n            is_true (P.sort (P.finish b) 1 = P.Effect));\n          test \"children binary\" (fun () ->\n            let b = P.create () in\n            let a = emit_f32 b 1.0 in\n            let c = emit_f32 b 2.0 in\n            ignore (P.emit b (Binary { op = `Add; lhs = a; rhs = c; dtype = D.val_of dt }));\n            equal (list int) [ a; c ] (P.children (P.finish b) 2));\n        ];\n      group \"Predicates\"\n        [\n          test \"is_alu true\" (fun () ->\n            is_true (P.is_alu (Unary { op = `Neg; src = 0; dtype = D.val_of dt }));\n            is_true\n              (P.is_alu (Binary { op = `Add; lhs = 0; rhs = 1; dtype = D.val_of dt }));\n            is_true\n              (P.is_alu\n                 (Ternary\n                    { op = `Where; a = 0; b = 1; c = 2; dtype = D.val_of dt })));\n          test \"is_alu false\" (fun () ->\n            is_false\n              (P.is_alu (Const { value = C.float (D.val_of dt) 1.0; dtype = D.val_of dt }));\n            is_false (P.is_alu Barrier);\n            is_false (P.is_alu (Store { dst = 0; value = 1 }));\n            is_false (P.is_alu (Cast { 
src = 0; dtype = D.val_of dt })));\n          test \"index_gate direct\" (fun () ->\n            let b = P.create () in\n            ignore (P.emit b (Param { idx = 0; dtype = gptr }));\n            let idx = emit_i32 b 0 in\n            let gate = emit_bool b true in\n            let addr =\n              P.emit b\n                (Index\n                   { ptr = 0; idxs = [ idx ]; gate = Some gate; dtype = gptr })\n            in\n            some int gate (P.index_gate (P.finish b) addr));\n          test \"index_gate through chain\" (fun () ->\n            let b = P.create () in\n            ignore (P.emit b (Param { idx = 0; dtype = gptr }));\n            let idx = emit_i32 b 0 in\n            let gate = emit_bool b true in\n            let addr =\n              P.emit b\n                (Index\n                   { ptr = 0; idxs = [ idx ]; gate = Some gate; dtype = gptr })\n            in\n            let cast = P.emit b (Cast { src = addr; dtype = D.val_of dt }) in\n            some int gate (P.index_gate (P.finish b) cast));\n          test \"index_gate none\" (fun () ->\n            let b = P.create () in\n            ignore (P.emit b (Param { idx = 0; dtype = gptr }));\n            let idx = emit_i32 b 0 in\n            let addr =\n              P.emit b\n                (Index\n                   { ptr = 0; idxs = [ idx ]; gate = None; dtype = gptr })\n            in\n            is_none (P.index_gate (P.finish b) addr));\n        ];\n      group \"Validation general\"\n        [\n          test \"forward ref rejected\" (fun () ->\n            rejects \"out of bounds or forward\" (fun b ->\n                ignore\n                  (P.emit b (Unary { op = `Neg; src = 1; dtype = D.val_of dt }));\n                ignore (emit_f32 b 1.0)));\n          test \"self ref rejected\" (fun () ->\n            rejects \"out of bounds or forward\" (fun b ->\n                ignore (P.emit b (Unary { op = `Neg; src = 0; dtype = D.val_of dt }))));\n          test 
\"index dtype rejected\" (fun () ->\n            rejects \"Index dtype not allowed\" (fun b ->\n                ignore\n                  (P.emit b\n                     (Const { value = C.int D.Val.index 0; dtype = D.Val.index }))));\n          test \"empty program accepted\" (fun () ->\n            validates (fun _b -> ()));\n        ];\n      group \"Validation per-instruction\"\n        [\n          (* Addrspace *)\n          test \"param global ok\" (fun () ->\n            validates (fun b ->\n                ignore (P.emit b (Param { idx = 0; dtype = gptr }))));\n          test \"param local rejected\" (fun () ->\n            rejects \"Global addrspace\" (fun b ->\n                ignore\n                  (P.emit b (Param { idx = 0; dtype = local_ptr dt }))));\n          test \"define_local ok\" (fun () ->\n            validates (fun b ->\n                ignore\n                  (P.emit b\n                     (Define_local { size = 8; dtype = local_ptr dt }))));\n          test \"define_local global rejected\" (fun () ->\n            rejects \"Local addrspace\" (fun b ->\n                ignore\n                  (P.emit b (Define_local { size = 8; dtype = gptr }))));\n          test \"define_reg ok\" (fun () ->\n            validates (fun b ->\n                ignore\n                  (P.emit b (Define_reg { size = 4; dtype = reg_ptr dt }))));\n          test \"define_reg local rejected\" (fun () ->\n            rejects \"Reg addrspace\" (fun b ->\n                ignore\n                  (P.emit b\n                     (Define_reg { size = 4; dtype = local_ptr dt }))));\n          (* Define_var *)\n          test \"define_var ok\" (fun () ->\n            validates (fun b ->\n                ignore\n                  (P.emit b\n                     (Define_var\n                        { name = \"n\"; lo = 0; hi = 10; dtype = D.Val.int32 }))));\n          test \"define_var float rejected\" (fun () ->\n            rejects \"must be int/index\" (fun b 
->\n                ignore\n                  (P.emit b\n                     (Define_var\n                        { name = \"x\"; lo = 0; hi = 4; dtype = D.val_of dt }))));\n          test \"define_var vector rejected\" (fun () ->\n            rejects \"must be scalar\" (fun b ->\n                ignore\n                  (P.emit b\n                     (Define_var\n                        {\n                          name = \"v\";\n                          lo = 0;\n                          hi = 4;\n                          dtype = D.Val.vec 4 D.Val.int32;\n                        }))));\n          test \"define_var lo > hi rejected\" (fun () ->\n            rejects \"lo > hi\" (fun b ->\n                ignore\n                  (P.emit b\n                     (Define_var\n                        { name = \"x\"; lo = 5; hi = 3; dtype = D.Val.int32 }))));\n          (* Range / End_range *)\n          test \"range int ok\" (fun () ->\n            validates (fun b ->\n                let size = emit_i32 b 10 in\n                let range =\n                  P.emit b\n                    (Range { size; dtype = D.Val.int32; axis = 0; sub = []; kind = Ak.Loop })\n                in\n                ignore (P.emit b (End_range { dep = range; range }))));\n          test \"range float rejected\" (fun () ->\n            rejects \"int dtype\" (fun b ->\n                let size = emit_f32 b 10.0 in\n                let range =\n                  P.emit b\n                    (Range { size; dtype = D.val_of dt; axis = 0; sub = []; kind = Ak.Loop })\n                in\n                ignore (P.emit b (End_range { dep = range; range }))));\n          test \"range vector rejected\" (fun () ->\n            rejects \"scalar\" (fun b ->\n                let size = emit_i32 b 10 in\n                let range =\n                  P.emit b\n                    (Range\n                       {\n                         size;\n                         dtype = D.Val.vec 4 
D.Val.int32;\n                         axis = 0;\n                         sub = [];\n                         kind = Ak.Loop;\n                       })\n                in\n                ignore (P.emit b (End_range { dep = range; range }))));\n          test \"range size mismatch rejected\" (fun () ->\n            rejects \"Range size\" (fun b ->\n                let size =\n                  P.emit b\n                    (Const { value = C.int D.Val.int64 10; dtype = D.Val.int64 })\n                in\n                let range =\n                  P.emit b\n                    (Range { size; dtype = D.Val.int32; axis = 0; sub = []; kind = Ak.Loop })\n                in\n                ignore (P.emit b (End_range { dep = range; range }))));\n          test \"end_range not range rejected\" (fun () ->\n            rejects \"must reference a Range\" (fun b ->\n                let c = emit_i32 b 0 in\n                ignore (P.emit b (End_range { dep = c; range = c }))));\n          test \"unclosed range rejected\" (fun () ->\n            rejects \"unclosed Range\" (fun b ->\n                let size = emit_i32 b 10 in\n                ignore\n                  (P.emit b\n                     (Range\n                        { size; dtype = D.Val.int32; axis = 0; sub = []; kind = Ak.Loop }))));\n          test \"end_range unbalanced rejected\" (fun () ->\n            rejects \"unbalanced End_range\" (fun b ->\n                let size = emit_i32 b 10 in\n                let outer =\n                  P.emit b\n                    (Range { size; dtype = D.Val.int32; axis = 0; sub = []; kind = Ak.Loop })\n                in\n                ignore\n                  (P.emit b\n                     (Range\n                        { size; dtype = D.Val.int32; axis = 1; sub = []; kind = Ak.Loop }));\n                ignore (P.emit b (End_range { dep = outer; range = outer }))));\n          (* If / Endif *)\n          test \"if/endif ok\" (fun () ->\n            
validates (fun b ->\n                let addr = emit_index_chain b in\n                let cond = emit_bool b true in\n                let if_ = P.emit b (If { cond; idx_for_dedup = addr }) in\n                ignore (P.emit b (Endif { if_ }))));\n          test \"if non-bool cond rejected\" (fun () ->\n            rejects \"must be bool\" (fun b ->\n                let addr = emit_index_chain b in\n                let if_ = P.emit b (If { cond = 1; idx_for_dedup = addr }) in\n                ignore (P.emit b (Endif { if_ }))));\n          test \"if idx not index rejected\" (fun () ->\n            rejects \"must reference Index\" (fun b ->\n                let cond = emit_bool b true in\n                let not_index = emit_i32 b 0 in\n                let if_ =\n                  P.emit b (If { cond; idx_for_dedup = not_index })\n                in\n                ignore (P.emit b (Endif { if_ }))));\n          test \"if idx through cast ok\" (fun () ->\n            validates (fun b ->\n                let addr = emit_index_chain b in\n                let cast = P.emit b (Cast { src = addr; dtype = D.Val.int32 }) in\n                let cond = emit_bool b true in\n                let if_ = P.emit b (If { cond; idx_for_dedup = cast }) in\n                ignore (P.emit b (Endif { if_ }))));\n          test \"endif not if rejected\" (fun () ->\n            rejects \"must reference an If\" (fun b ->\n                let c = emit_i32 b 0 in\n                ignore (P.emit b (Endif { if_ = c }))));\n          test \"unclosed if rejected\" (fun () ->\n            rejects \"unclosed If\" (fun b ->\n                let addr = emit_index_chain b in\n                let cond = emit_bool b true in\n                ignore (P.emit b (If { cond; idx_for_dedup = addr }))));\n          (* Special *)\n          test \"special int32 ok\" (fun () ->\n            validates (fun b ->\n                let size = emit_i32 b 32 in\n                ignore\n                  (P.emit b\n    
                 (Special\n                        { dim = Sd.Group_id 0; size; dtype = D.Val.int32 }))));\n          test \"special float rejected\" (fun () ->\n            rejects \"must be int32 scalar\" (fun b ->\n                let size = emit_f32 b 32.0 in\n                ignore\n                  (P.emit b\n                     (Special { dim = Sd.Group_id 0; size; dtype = D.val_of dt }))));\n          test \"special duplicate rejected\" (fun () ->\n            rejects \"duplicate Special\" (fun b ->\n                let size = emit_i32 b 32 in\n                ignore\n                  (P.emit b\n                     (Special\n                        { dim = Sd.Group_id 0; size; dtype = D.Val.int32 }));\n                ignore\n                  (P.emit b\n                     (Special\n                        { dim = Sd.Group_id 0; size; dtype = D.Val.int32 }))));\n          test \"special different dims ok\" (fun () ->\n            validates (fun b ->\n                let size = emit_i32 b 32 in\n                ignore\n                  (P.emit b\n                     (Special\n                        { dim = Sd.Group_id 0; size; dtype = D.Val.int32 }));\n                ignore\n                  (P.emit b\n                     (Special\n                        { dim = Sd.Local_id 1; size; dtype = D.Val.int32 }))));\n          (* Index *)\n          test \"index ok\" (fun () ->\n            validates (fun b -> ignore (emit_index_chain b)));\n          test \"index bad base rejected\" (fun () ->\n            rejects \"must be a Param\" (fun b ->\n                let c = emit_i32 b 0 in\n                ignore\n                  (P.emit b\n                     (Index\n                        {\n                          ptr = c;\n                          idxs = [ c ];\n                          gate = None;\n                          dtype = gptr;\n                        }))));\n          test \"index empty idxs rejected\" (fun () ->\n            
rejects \"exactly one index\" (fun b ->\n                ignore (P.emit b (Param { idx = 0; dtype = gptr }));\n                ignore\n                  (P.emit b\n                     (Index\n                        { ptr = 0; idxs = []; gate = None; dtype = gptr }))));\n          test \"index multi-element idxs rejected\" (fun () ->\n            rejects \"exactly one index\" (fun b ->\n                ignore (P.emit b (Param { idx = 0; dtype = gptr }));\n                let i0 = emit_i32 b 0 in\n                let i1 = emit_i32 b 1 in\n                ignore\n                  (P.emit b\n                     (Index\n                        {\n                          ptr = 0;\n                          idxs = [ i0; i1 ];\n                          gate = None;\n                          dtype = gptr;\n                        }))));\n          test \"index float operand rejected\" (fun () ->\n            rejects \"must be int\" (fun b ->\n                ignore (P.emit b (Param { idx = 0; dtype = gptr }));\n                let fidx = emit_f32 b 0.0 in\n                ignore\n                  (P.emit b\n                     (Index\n                        {\n                          ptr = 0;\n                          idxs = [ fidx ];\n                          gate = None;\n                          dtype = gptr;\n                        }))));\n          test \"index non-bool gate rejected\" (fun () ->\n            rejects \"must be bool\" (fun b ->\n                ignore (P.emit b (Param { idx = 0; dtype = gptr }));\n                let idx = emit_i32 b 0 in\n                ignore\n                  (P.emit b\n                     (Index\n                        {\n                          ptr = 0;\n                          idxs = [ idx ];\n                          gate = Some idx;\n                          dtype = gptr;\n                        }))));\n          (* Load *)\n          test \"load ok\" (fun () ->\n            validates (fun b -> ignore 
(emit_load_chain b)));\n          test \"load not index rejected\" (fun () ->\n            rejects \"must reference Index\" (fun b ->\n                let c = emit_i32 b 0 in\n                ignore\n                  (P.emit b\n                     (Load { src = c; alt = None; dtype = D.Val.int32 }))));\n          test \"load alt gated ok\" (fun () ->\n            validates (fun b -> ignore (emit_gated_load_chain b)));\n          test \"load alt without gate rejected\" (fun () ->\n            rejects \"alt requires gated\" (fun b ->\n                ignore (P.emit b (Param { idx = 0; dtype = gptr }));\n                let idx = emit_i32 b 0 in\n                let addr =\n                  P.emit b\n                    (Index\n                       {\n                         ptr = 0;\n                         idxs = [ idx ];\n                         gate = None;\n                         dtype = gptr;\n                       })\n                in\n                let alt = emit_f32 b 0.0 in\n                ignore\n                  (P.emit b\n                     (Load { src = addr; alt = Some alt; dtype = D.val_of dt }))));\n          (* After *)\n          test \"after barrier void ok\" (fun () ->\n            validates (fun b ->\n                let barrier = P.emit b Barrier in\n                ignore\n                  (P.emit b\n                     (After { src = barrier; deps = []; dtype = D.Val.void }))));\n          test \"after barrier non-void rejected\" (fun () ->\n            rejects \"void dtype\" (fun b ->\n                let barrier = P.emit b Barrier in\n                ignore\n                  (P.emit b\n                     (After { src = barrier; deps = []; dtype = D.val_of dt }))));\n          test \"after value mismatch rejected\" (fun () ->\n            rejects \"After src\" (fun b ->\n                let _ptr, _idx, _addr, value = emit_load_chain b in\n                ignore\n                  (P.emit b\n                     (After 
{ src = value; deps = []; dtype = D.Val.int32 }))));\n          (* ALU: Where *)\n          test \"where ok\" (fun () ->\n            validates (fun b ->\n                let cond = emit_bool b true in\n                let t = emit_f32 b 1.0 in\n                let e = emit_f32 b 0.0 in\n                ignore\n                  (P.emit b\n                     (Ternary\n                        {\n                          op = `Where;\n                          a = cond;\n                          b = t;\n                          c = e;\n                          dtype = D.val_of dt;\n                        }))));\n          test \"where non-bool rejected\" (fun () ->\n            rejects \"must be bool\" (fun b ->\n                let cond = emit_i32 b 1 in\n                let t = emit_f32 b 1.0 in\n                let e = emit_f32 b 0.0 in\n                ignore\n                  (P.emit b\n                     (Ternary\n                        {\n                          op = `Where;\n                          a = cond;\n                          b = t;\n                          c = e;\n                          dtype = D.val_of dt;\n                        }))));\n          test \"where mismatched arms rejected\" (fun () ->\n            rejects \"Where branch\" (fun b ->\n                let cond = emit_bool b true in\n                let t = emit_f32 b 1.0 in\n                let e = emit_i32 b 0 in\n                ignore\n                  (P.emit b\n                     (Ternary\n                        {\n                          op = `Where;\n                          a = cond;\n                          b = t;\n                          c = e;\n                          dtype = D.val_of dt;\n                        }))));\n          (* ALU: Cmp *)\n          test \"cmp ok\" (fun () ->\n            validates (fun b ->\n                let a = emit_f32 b 1.0 in\n                let c = emit_f32 b 2.0 in\n                ignore\n                  
(P.emit b\n                     (Binary\n                        { op = `Cmplt; lhs = a; rhs = c; dtype = D.Val.bool }))));\n          test \"cmp non-bool result rejected\" (fun () ->\n            rejects \"comparison result must be bool\" (fun b ->\n                let a = emit_f32 b 1.0 in\n                let c = emit_f32 b 2.0 in\n                ignore\n                  (P.emit b\n                     (Binary\n                        { op = `Cmplt; lhs = a; rhs = c; dtype = D.Val.int32 }))));\n          test \"cmp operands mismatch rejected\" (fun () ->\n            rejects \"don't match\" (fun b ->\n                let a = emit_f32 b 1.0 in\n                let c = emit_i32 b 2 in\n                ignore\n                  (P.emit b\n                     (Binary\n                        { op = `Cmpeq; lhs = a; rhs = c; dtype = D.Val.bool }))));\n          (* ALU: Idiv/Mod *)\n          test \"idiv int ok\" (fun () ->\n            validates (fun b ->\n                let a = emit_i32 b 10 in\n                let c = emit_i32 b 3 in\n                ignore\n                  (P.emit b\n                     (Binary\n                        { op = `Idiv; lhs = a; rhs = c; dtype = D.Val.int32 }))));\n          test \"idiv float rejected\" (fun () ->\n            rejects \"int dtype\" (fun b ->\n                let a = emit_f32 b 1.0 in\n                let c = emit_f32 b 2.0 in\n                ignore\n                  (P.emit b\n                     (Binary { op = `Idiv; lhs = a; rhs = c; dtype = D.val_of dt }))));\n          (* ALU: Shift *)\n          test \"shift ok\" (fun () ->\n            validates (fun b ->\n                let a = emit_i32 b 8 in\n                let c = emit_i32 b 2 in\n                ignore\n                  (P.emit b\n                     (Binary\n                        { op = `Shl; lhs = a; rhs = c; dtype = D.Val.int32 }))));\n          test \"shift rhs mismatch rejected\" (fun () ->\n            rejects \"shift rhs must match\" 
(fun b ->\n                let a = emit_i32 b 8 in\n                let c =\n                  P.emit b\n                    (Const { value = C.int D.Val.int64 2; dtype = D.Val.int64 })\n                in\n                ignore\n                  (P.emit b\n                     (Binary\n                        { op = `Shl; lhs = a; rhs = c; dtype = D.Val.int32 }))));\n          (* ALU: Unary *)\n          test \"unary ok\" (fun () ->\n            validates (fun b ->\n                let a = emit_f32 b 1.0 in\n                ignore\n                  (P.emit b (Unary { op = `Neg; src = a; dtype = D.val_of dt }))));\n          test \"unary mismatch rejected\" (fun () ->\n            rejects \"unary ALU\" (fun b ->\n                let a = emit_f32 b 1.0 in\n                ignore\n                  (P.emit b\n                     (Unary { op = `Neg; src = a; dtype = D.Val.int32 }))));\n          (* ALU: Binary general *)\n          test \"binary alu ok\" (fun () ->\n            validates (fun b ->\n                let a = emit_f32 b 1.0 in\n                let c = emit_f32 b 2.0 in\n                ignore\n                  (P.emit b\n                     (Binary { op = `Add; lhs = a; rhs = c; dtype = D.val_of dt }))));\n          test \"binary alu lhs mismatch rejected\" (fun () ->\n            rejects \"binary ALU lhs\" (fun b ->\n                let a = emit_i32 b 1 in\n                let c = emit_f32 b 2.0 in\n                ignore\n                  (P.emit b\n                     (Binary { op = `Add; lhs = a; rhs = c; dtype = D.val_of dt }))));\n          (* ALU: Mulacc *)\n          test \"mulacc ok\" (fun () ->\n            validates (fun b ->\n                let a = emit_f32 b 1.0 in\n                let c = emit_f32 b 2.0 in\n                let d = emit_f32 b 3.0 in\n                ignore\n                  (P.emit b\n                     (Ternary\n                        { op = `Mulacc; a; b = c; c = d; dtype = D.val_of dt }))));\n          test 
\"mulacc mismatch rejected\" (fun () ->\n            rejects \"Mulacc\" (fun b ->\n                let a = emit_f32 b 1.0 in\n                let c = emit_i32 b 2 in\n                let d = emit_f32 b 3.0 in\n                ignore\n                  (P.emit b\n                     (Ternary\n                        { op = `Mulacc; a; b = c; c = d; dtype = D.val_of dt }))));\n          (* Vectorize *)\n          test \"vectorize ok\" (fun () ->\n            validates (fun b ->\n                let a = emit_f32 b 1.0 in\n                let c = emit_f32 b 2.0 in\n                let d = emit_f32 b 3.0 in\n                ignore\n                  (P.emit b\n                     (Vectorize\n                        { srcs = [ a; c; d ]; dtype = D.Val.vec 3 (D.val_of dt) }))));\n          test \"vectorize one source rejected\" (fun () ->\n            rejects \"more than one source\" (fun b ->\n                let a = emit_f32 b 1.0 in\n                ignore\n                  (P.emit b\n                     (Vectorize { srcs = [ a ]; dtype = D.Val.vec 1 (D.val_of dt) }))));\n          (* Gep *)\n          test \"gep ok\" (fun () ->\n            validates (fun b ->\n                let a = emit_f32 b 1.0 in\n                let c = emit_f32 b 2.0 in\n                let v =\n                  P.emit b\n                    (Vectorize { srcs = [ a; c ]; dtype = D.Val.vec 2 (D.val_of dt) })\n                in\n                ignore (P.emit b (Gep { src = v; idxs = [1]; dtype = D.val_of dt }))));\n          test \"gep out of bounds rejected\" (fun () ->\n            rejects \"out of bounds\" (fun b ->\n                let a = emit_f32 b 1.0 in\n                let c = emit_f32 b 2.0 in\n                let v =\n                  P.emit b\n                    (Vectorize { srcs = [ a; c ]; dtype = D.Val.vec 2 (D.val_of dt) })\n                in\n                ignore (P.emit b (Gep { src = v; idxs = [5]; dtype = D.val_of dt }))));\n          (* Store *)\n          test 
\"store ok\" (fun () ->\n            validates (fun b ->\n                let _ptr, _idx, addr, _value = emit_load_chain b in\n                let new_val = emit_f32 b 42.0 in\n                ignore (P.emit b (Store { dst = addr; value = new_val }))));\n          test \"store not index rejected\" (fun () ->\n            rejects \"must reference Index\" (fun b ->\n                let c = emit_i32 b 0 in\n                let v = emit_f32 b 1.0 in\n                ignore (P.emit b (Store { dst = c; value = v }))));\n          test \"store dtype mismatch rejected\" (fun () ->\n            rejects \"Store value\" (fun b ->\n                let _ptr, _idx, addr, _value = emit_load_chain b in\n                let wrong = emit_i32 b 7 in\n                ignore (P.emit b (Store { dst = addr; value = wrong }))));\n          (* Wmma *)\n          test \"wmma ok\" (fun () ->\n            validates (fun b ->\n                let a = emit_f32 b 1.0 in\n                let c = emit_f32 b 2.0 in\n                let d = emit_f32 b 3.0 in\n                ignore (P.emit b (wmma_fields ~a ~b:c ~c:d ()))));\n          test \"wmma zero dim rejected\" (fun () ->\n            rejects \"dims must be positive\" (fun b ->\n                let a = emit_f32 b 1.0 in\n                let c = emit_f32 b 2.0 in\n                let d = emit_f32 b 3.0 in\n                ignore\n                  (P.emit b\n                     (wmma_fields ~a ~b:c ~c:d ~dims:(0, 16, 16) ()))));\n          test \"wmma dtype mismatch rejected\" (fun () ->\n            rejects \"must match dtype_out\" (fun b ->\n                let a = emit_f32 b 1.0 in\n                let c = emit_f32 b 2.0 in\n                let d = emit_f32 b 3.0 in\n                ignore\n                  (P.emit b\n                     (wmma_fields ~a ~b:c ~c:d ~dtype_in:D.Float16\n                        ~dtype_out:D.Float16 ()))));\n        ];\n      group \"Control flow balancing\"\n        [\n          test \"nested ranges 
balanced\" (fun () ->\n            validates (fun b ->\n                let size = emit_i32 b 10 in\n                let outer =\n                  P.emit b\n                    (Range { size; dtype = D.Val.int32; axis = 0; sub = []; kind = Ak.Loop })\n                in\n                let inner =\n                  P.emit b\n                    (Range { size; dtype = D.Val.int32; axis = 1; sub = []; kind = Ak.Loop })\n                in\n                ignore (P.emit b (End_range { dep = inner; range = inner }));\n                ignore (P.emit b (End_range { dep = outer; range = outer }))));\n          test \"nested ifs balanced\" (fun () ->\n            validates (fun b ->\n                let addr = emit_index_chain b in\n                let cond = emit_bool b true in\n                let outer_if =\n                  P.emit b (If { cond; idx_for_dedup = addr })\n                in\n                let inner_if =\n                  P.emit b (If { cond; idx_for_dedup = addr })\n                in\n                ignore (P.emit b (Endif { if_ = inner_if }));\n                ignore (P.emit b (Endif { if_ = outer_if }))));\n          test \"interleaved range/if\" (fun () ->\n            validates (fun b ->\n                let addr = emit_index_chain b in\n                let size = emit_i32 b 10 in\n                let range =\n                  P.emit b\n                    (Range { size; dtype = D.Val.int32; axis = 0; sub = []; kind = Ak.Loop })\n                in\n                let cond = emit_bool b true in\n                let if_ = P.emit b (If { cond; idx_for_dedup = addr }) in\n                ignore (P.emit b (Endif { if_ }));\n                ignore (P.emit b (End_range { dep = range; range }))));\n          test \"sequential ranges\" (fun () ->\n            validates (fun b ->\n                let size = emit_i32 b 10 in\n                let r1 =\n                  P.emit b\n                    (Range { size; dtype = D.Val.int32; axis = 0; sub = 
[]; kind = Ak.Loop })\n                in\n                ignore (P.emit b (End_range { dep = r1; range = r1 }));\n                let r2 =\n                  P.emit b\n                    (Range { size; dtype = D.Val.int32; axis = 1; sub = []; kind = Ak.Loop })\n                in\n                ignore (P.emit b (End_range { dep = r2; range = r2 }))));\n        ];\n      group \"Rewriting\"\n        [\n          test \"map_children binary\" (fun () ->\n            let view : P.view =\n              Binary { op = `Add; lhs = 2; rhs = 5; dtype = D.val_of dt }\n            in\n            match P.map_children (fun id -> id + 10) view with\n            | Binary { lhs = 12; rhs = 15; _ } -> ()\n            | _ -> fail \"expected remapped Binary\");\n          test \"map_children leaf identity\" (fun () ->\n            let view : P.view =\n              Param { idx = 0; dtype = global_ptr D.int32 }\n            in\n            match P.map_children (fun id -> id + 10) view with\n            | Param { idx = 0; _ } -> ()\n            | _ -> fail \"expected unchanged Param\");\n          test \"map_alu remaps\" (fun () ->\n            let view : P.view =\n              Unary { op = `Neg; src = 5; dtype = D.val_of dt }\n            in\n            match P.map_alu ~map_ref:(fun r -> r + 1) ~dtype:D.Val.int32 view with\n            | Unary { op = `Neg; src = 6; dtype } when D.Val.equal dtype D.Val.int32 ->\n                ()\n            | _ -> fail \"expected remapped Unary with int32 dtype\");\n          test \"map_alu non-alu raises\" (fun () ->\n            raises_invalid (fun () ->\n                ignore\n                  (P.map_alu ~map_ref:Fun.id ~dtype:(D.val_of dt)\n                     (Const { value = C.float (D.val_of dt) 1.0; dtype = D.val_of dt }))));\n          test \"rebuild identity\" (fun () ->\n            let b = P.create () in\n            let _ptr, _idx, addr, value = emit_load_chain b in\n            ignore (P.emit b (Store { dst = addr; value 
}));\n            let p = P.finish b in\n            let p' = P.rebuild (fun ~emit:_ ~map_ref:_ _ -> None) p in\n            equal int (P.length p) (P.length p'));\n        ];\n      group \"Formatting\"\n        [\n          test \"pp_view param\" (fun () ->\n            let s =\n              Format.asprintf \"%a\" P.pp_view\n                (Param { idx = 0; dtype = gptr })\n            in\n            is_true (contains s \"param\");\n            is_true (contains s \"global\"));\n          test \"pp program indexed\" (fun () ->\n            let b = P.create () in\n            ignore (P.emit b (Param { idx = 0; dtype = gptr }));\n            ignore (emit_i32 b 0);\n            ignore (P.emit b Barrier);\n            let s = Format.asprintf \"%a\" P.pp (P.finish b) in\n            is_true (contains s \"  0:\");\n            is_true (contains s \"  1:\");\n            is_true (contains s \"  2:\"));\n        ];\n    ]\n"
  },
  {
    "path": "packages/tolk/test/unit/test_ir_symbolic.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Unit tests for Symbolic simplification rules.\n\n   Tests each phase of symbolic simplification in isolation. *)\n\nopen Windtrap\nopen Tolk_ir\nmodule K = Kernel\nmodule D = Dtype\nmodule C = Const\n\n(* Helpers *)\n\nlet idx n = K.const (C.int D.Val.index n)\nlet f32 x = K.const (C.float D.Val.float32 x)\n\nlet var name lo hi = K.define_var ~name ~lo ~hi ~dtype:D.Val.index ()\nlet range size = K.range ~size:(idx size) ~axis:0 ~kind:Axis_kind.Loop ~dtype:D.Val.index ()\n\n(* Apply sym to a single node (not bottom-up). *)\nlet sym n = Symbolic.sym n\n\n(* Apply sym as a single-pass graph rewrite. *)\nlet simplify n = K.graph_rewrite (K.first_match [ Symbolic.sym ]) n\n\n(* Check that rule fires and produces the expected node (by physical identity). *)\nlet fires rule node expected =\n  match rule node with\n  | Some r -> is_true (r == expected)\n  | None -> fail \"expected rule to fire\"\n\n(* Check that applying sym produces a specific constant. 
*)\nlet simplifies_to_int node expected_val =\n  let result = simplify node in\n  match K.view result with\n  | Const { value; _ } -> (\n      match C.view value with\n      | Int v -> equal int64 v (Int64.of_int expected_val)\n      | _ -> fail \"expected int const\")\n  | _ -> fail \"expected const\"\n\nlet simplifies_to_float node expected_val =\n  let result = simplify node in\n  match K.view result with\n  | Const { value; _ } -> (\n      match C.view value with\n      | Float v -> is_true (Float.equal v expected_val)\n      | _ -> fail \"expected float const\")\n  | _ -> fail \"expected const\"\n\n(* Constant folding *)\n\nlet const_fold_tests =\n  group \"const_fold\"\n    [\n      test \"int add\" (fun () ->\n          simplifies_to_int (K.binary ~op:`Add ~lhs:(idx 3) ~rhs:(idx 4)) 7);\n      test \"int mul\" (fun () ->\n          simplifies_to_int (K.binary ~op:`Mul ~lhs:(idx 3) ~rhs:(idx 5)) 15);\n      test \"int sub\" (fun () ->\n          simplifies_to_int (K.binary ~op:`Sub ~lhs:(idx 10) ~rhs:(idx 3)) 7);\n      test \"int idiv\" (fun () ->\n          simplifies_to_int (K.binary ~op:`Idiv ~lhs:(idx 10) ~rhs:(idx 3)) 3);\n      test \"int mod\" (fun () ->\n          simplifies_to_int (K.binary ~op:`Mod ~lhs:(idx 10) ~rhs:(idx 3)) 1);\n      test \"float add\" (fun () ->\n          simplifies_to_float\n            (K.binary ~op:`Add ~lhs:(f32 1.5) ~rhs:(f32 2.5))\n            4.0);\n      test \"float mul\" (fun () ->\n          simplifies_to_float\n            (K.binary ~op:`Mul ~lhs:(f32 3.0) ~rhs:(f32 2.0))\n            6.0);\n      test \"unary neg float\" (fun () ->\n          simplifies_to_float (K.unary ~op:`Neg ~src:(f32 5.0)) (-5.0));\n    ]\n\n(* Identity folding *)\n\nlet identity_fold_tests =\n  group \"identity_fold\"\n    [\n      test \"x + 0 → x\" (fun () ->\n          let x = var \"x\" 0 10 in\n          fires sym (K.binary ~op:`Add ~lhs:x ~rhs:(idx 0)) x);\n      test \"0 + x → x\" (fun () ->\n          let x = var \"x\" 0 10 in\n      
    fires sym (K.binary ~op:`Add ~lhs:(idx 0) ~rhs:x) x);\n      test \"x * 1 → x\" (fun () ->\n          let x = var \"x\" 0 10 in\n          fires sym (K.binary ~op:`Mul ~lhs:x ~rhs:(idx 1)) x);\n      test \"1 * x → x\" (fun () ->\n          let x = var \"x\" 0 10 in\n          fires sym (K.binary ~op:`Mul ~lhs:(idx 1) ~rhs:x) x);\n      test \"x // 1 → x\" (fun () ->\n          let x = var \"x\" 0 10 in\n          fires sym (K.binary ~op:`Idiv ~lhs:x ~rhs:(idx 1)) x);\n      test \"x | 0 → x\" (fun () ->\n          let x = var \"x\" 0 10 in\n          fires sym (K.binary ~op:`Or ~lhs:x ~rhs:(idx 0)) x);\n      test \"x & 0 → 0\" (fun () ->\n          let x = var \"x\" 0 10 in\n          let result = sym (K.binary ~op:`And ~lhs:x ~rhs:(idx 0)) in\n          (match result with\n          | Some r -> (\n              match K.view r with\n              | Const { value; _ } -> (\n                  match C.view value with\n                  | Int 0L -> ()\n                  | _ -> fail \"expected 0\")\n              | _ -> fail \"expected const\")\n          | None -> fail \"expected rule to fire\"));\n      test \"x ^ 0 → x\" (fun () ->\n          let x = var \"x\" 0 10 in\n          fires sym (K.binary ~op:`Xor ~lhs:x ~rhs:(idx 0)) x);\n    ]\n\n(* Self-folding *)\n\nlet self_fold_tests =\n  group \"self_fold\"\n    [\n      test \"x // x → 1\" (fun () ->\n          let x = var \"x\" 1 10 in\n          simplifies_to_int (K.binary ~op:`Idiv ~lhs:x ~rhs:x) 1);\n      test \"x // -1 → -x\" (fun () ->\n          let x = var \"x\" 0 10 in\n          let expr = K.binary ~op:`Idiv ~lhs:x ~rhs:(idx (-1)) in\n          let result = sym expr in\n          (match result with\n          | Some r -> (\n              match K.view r with\n              | Unary { op = `Neg; src; _ } -> is_true (src == x)\n              | _ -> fail \"expected Neg\")\n          | None -> fail \"expected rule to fire\"));\n      test \"x ^ x → 0\" (fun () ->\n          let x = var \"x\" 0 10 in\n     
     simplifies_to_int (K.binary ~op:`Xor ~lhs:x ~rhs:x) 0);\n      test \"x < x → false\" (fun () ->\n          let x = var \"x\" 0 10 in\n          let result = sym (K.binary ~op:`Cmplt ~lhs:x ~rhs:x) in\n          (match result with\n          | Some r -> (\n              match K.view r with\n              | Const { value; _ } -> (\n                  match C.view value with\n                  | Bool false -> ()\n                  | _ -> fail \"expected false\")\n              | _ -> fail \"expected const\")\n          | None -> fail \"expected rule to fire\"));\n    ]\n\n(* Divmod reconstitution *)\n\nlet divmod_reconstitute_tests =\n  group \"divmod_reconstitute\"\n    [\n      test \"(x // y) * y + (x % y) → x\" (fun () ->\n          let x = var \"x\" 0 100 in\n          let y = idx 4 in\n          let div = K.binary ~op:`Idiv ~lhs:x ~rhs:y in\n          let mul = K.binary ~op:`Mul ~lhs:div ~rhs:y in\n          let mod_ = K.binary ~op:`Mod ~lhs:x ~rhs:y in\n          let expr = K.binary ~op:`Add ~lhs:mul ~rhs:mod_ in\n          fires sym expr x);\n      test \"(x % y) + (x // y) * y → x (commuted)\" (fun () ->\n          let x = var \"x\" 0 100 in\n          let y = idx 4 in\n          let div = K.binary ~op:`Idiv ~lhs:x ~rhs:y in\n          let mul = K.binary ~op:`Mul ~lhs:div ~rhs:y in\n          let mod_ = K.binary ~op:`Mod ~lhs:x ~rhs:y in\n          let expr = K.binary ~op:`Add ~lhs:mod_ ~rhs:mul in\n          fires sym expr x);\n    ]\n\n(* Divandmod *)\n\nlet divandmod_tests =\n  group \"divandmod\"\n    [\n      test \"Range(8) // 8 → 0 (cancel)\" (fun () ->\n          let r = range 8 in\n          (* Range(8) has vmin=0, vmax=7. cdiv(0,8)=cdiv(7,8)=0. 
*)\n          simplifies_to_int (K.binary ~op:`Idiv ~lhs:r ~rhs:(idx 8)) 0);\n      test \"Range(8) % 8 → Range(8) (cancel)\" (fun () ->\n          let r = range 8 in\n          let expr = K.binary ~op:`Mod ~lhs:r ~rhs:(idx 8) in\n          (* Range(8) % 8: since range is [0,7], result is just Range(8) *)\n          let result = simplify expr in\n          (* Should simplify to just r. Check it's a range with size 8. *)\n          (match K.view result with\n          | Range _ -> equal int (Int64.to_int (Divandmod.vmax result) + 1) 8\n          | _ ->\n              (* Might be the original r if no simplification was needed *)\n              is_true true));\n      test \"(x % 12) // 4 → (x // 4) % 3 (nested)\" (fun () ->\n          let x = var \"x\" 0 100 in\n          let expr =\n            K.binary ~op:`Idiv\n              ~lhs:(K.binary ~op:`Mod ~lhs:x ~rhs:(idx 12))\n              ~rhs:(idx 4)\n          in\n          let result = simplify expr in\n          (* Should be (x // 4) % 3 *)\n          (match K.view result with\n          | Binary { op = `Mod; _ } -> ()\n          | _ -> fail \"expected mod in result\"));\n    ]\n\n(* Combine terms *)\n\nlet combine_terms_tests =\n  group \"combine_terms\"\n    [\n      test \"x + x → x * 2\" (fun () ->\n          let x = var \"x\" 0 10 in\n          let expr = K.binary ~op:`Add ~lhs:x ~rhs:x in\n          let result = simplify expr in\n          (match K.view result with\n          | Binary { op = `Mul; lhs; _ } -> is_true (lhs == x)\n          | _ -> fail \"expected x * 2\"));\n    ]\n\n(* Associative folding *)\n\nlet associative_tests =\n  group \"associative_fold\"\n    [\n      test \"(x + 3) + 5 → x + 8\" (fun () ->\n          let x = var \"x\" 0 100 in\n          let expr =\n            K.binary ~op:`Add\n              ~lhs:(K.binary ~op:`Add ~lhs:x ~rhs:(idx 3))\n              ~rhs:(idx 5)\n          in\n          let result = simplify expr in\n          (* Should fold to x + 8 *)\n          (match K.view 
result with\n          | Binary { op = `Add; lhs; rhs } ->\n              is_true (lhs == x);\n              (match K.view rhs with\n              | Const { value; _ } -> (\n                  match C.view value with\n                  | Int 8L -> ()\n                  | _ -> fail \"expected 8\")\n              | _ -> fail \"expected const\")\n          | _ -> fail \"expected add\"));\n    ]\n\n(* GEP pushing *)\n\nlet gep_tests =\n  group \"gep_pushing\"\n    [\n      test \"GEP(Vectorize(a, b, c), 1) → b via simplify\" (fun () ->\n          let a = idx 10 and b = idx 20 and _c = idx 30 in\n          let vec = K.vectorize ~srcs:[ a; b; _c ] in\n          let gep = K.gep ~src:vec ~idx:1 in\n          let result = simplify gep in\n          (* After simplification, should resolve to b = const 20 *)\n          (match K.view result with\n          | Const { value; _ } -> (\n              match C.view value with\n              | Int 20L -> ()\n              | _ -> fail \"expected 20\")\n          | _ -> fail \"expected const\"));\n    ]\n\n(* Decompositions *)\n\nlet decomp_tests =\n  group \"decompositions\"\n    [\n      test \"MUL to SHL: x * 8 → x << 3\" (fun () ->\n          let ops : Decompositions.supported_ops =\n            { has_shl = true; has_shr = true; has_and = true; has_or = true;\n              has_max = true; has_cmplt = true; has_cmpeq = true;\n              has_neg = true; has_sub = true; has_mulacc = false;\n              has_fdiv = false; has_threefry = false;\n              disable_fast_idiv = true; has_exp2 = true; has_log2 = true; has_sin = true; has_sqrt = true; has_recip = true; force_transcendental = false }\n          in\n          let x = var \"x\" 0 100 in\n          let expr = K.binary ~op:`Mul ~lhs:x ~rhs:(idx 8) in\n          let result = Decompositions.get_late_rewrite_patterns ops expr in\n          (match result with\n          | Some r -> (\n              match K.view r with\n              | Binary { op = `Shl; lhs; _ } -> is_true 
(lhs == x)\n              | _ -> fail \"expected SHL\")\n          | None -> fail \"expected rule to fire\"));\n      test \"MOD to AND: x % 4 → x & 3\" (fun () ->\n          let ops : Decompositions.supported_ops =\n            { has_shl = true; has_shr = true; has_and = true; has_or = true;\n              has_max = true; has_cmplt = true; has_cmpeq = true;\n              has_neg = true; has_sub = true; has_mulacc = false;\n              has_fdiv = false; has_threefry = false;\n              disable_fast_idiv = true; has_exp2 = true; has_log2 = true; has_sin = true; has_sqrt = true; has_recip = true; force_transcendental = false }\n          in\n          let x = var \"x\" 0 100 in\n          let expr = K.binary ~op:`Mod ~lhs:x ~rhs:(idx 4) in\n          let result = Decompositions.get_late_rewrite_patterns ops expr in\n          (match result with\n          | Some r -> (\n              match K.view r with\n              | Binary { op = `And; lhs; _ } -> is_true (lhs == x)\n              | _ -> fail \"expected AND\")\n          | None -> fail \"expected rule to fire\"));\n      test \"MAX to WHERE when has_max=false\" (fun () ->\n          let ops : Decompositions.supported_ops =\n            { has_shl = true; has_shr = true; has_and = true; has_or = true;\n              has_max = false; has_cmplt = true; has_cmpeq = true;\n              has_neg = true; has_sub = true; has_mulacc = false;\n              has_fdiv = false; has_threefry = false;\n              disable_fast_idiv = true; has_exp2 = true; has_log2 = true; has_sin = true; has_sqrt = true; has_recip = true; force_transcendental = false }\n          in\n          let x = var \"x\" 0 100 in\n          let y = var \"y\" 0 100 in\n          let expr = K.binary ~op:`Max ~lhs:x ~rhs:y in\n          let result = Decompositions.get_late_rewrite_patterns ops expr in\n          (match result with\n          | Some r -> (\n              match K.view r with\n              | Ternary { op = `Where; _ } -> ()\n         
     | _ -> fail \"expected WHERE\")\n          | None -> fail \"expected rule to fire\"));\n    ]\n\n(* New phase 1 rules *)\n\nlet bool_cast_fold_tests =\n  group \"bool_cast_fold\"\n    [\n      test \"x % x → 0\" (fun () ->\n          let x = var \"x\" 1 10 in\n          simplifies_to_int (K.binary ~op:`Mod ~lhs:x ~rhs:x) 0);\n      test \"bool MUL → AND\" (fun () ->\n          let x = K.const_bool true and y = K.const_bool false in\n          let expr = K.binary ~op:`Mul ~lhs:x ~rhs:y in\n          let result = simplify expr in\n          (match K.view result with\n          | Binary { op = `And; _ } | Const _ -> ()\n          | _ -> fail \"expected AND or const\"));\n      test \"cast(const(3), float32) → const(3.0)\" (fun () ->\n          let expr = K.cast ~src:(idx 3) ~dtype:(D.float32) in\n          simplifies_to_float expr 3.0);\n      test \"cast to same dtype → x\" (fun () ->\n          let x = var \"x\" 0 10 in\n          fires sym (K.cast ~src:x ~dtype:(D.index)) x);\n      test \"nested where: a.where(b.where(c,d), d) → (a&b).where(c,d)\" (fun () ->\n          let a = K.const_bool true and b = K.const_bool true in\n          let c = idx 1 and d = idx 0 in\n          let inner = K.ternary ~op:`Where ~a:b ~b:c ~c:d in\n          let outer = K.ternary ~op:`Where ~a ~b:inner ~c:d in\n          let result = simplify outer in\n          (* Should simplify to just 1 since both conditions are true *)\n          (match K.view result with\n          | Const { value; _ } -> (\n              match C.view value with\n              | Int 1L -> ()\n              | _ -> fail \"expected 1\")\n          | _ -> fail \"expected const\"));\n    ]\n\n(* New phase 2 rules *)\n\nlet lt_fold_tests =\n  group \"lt_fold\"\n    [\n      test \"lt mul fold: 2*x < 10 → x < 5\" (fun () ->\n          let x = var \"x\" 0 100 in\n          let expr =\n            K.binary ~op:`Cmplt\n              ~lhs:(K.binary ~op:`Mul ~lhs:(idx 2) ~rhs:x)\n              ~rhs:(idx 10)\n          
in\n          let result = simplify expr in\n          (* Should fold to x < 5 *)\n          (match K.view result with\n          | Binary { op = `Cmplt; lhs; rhs } ->\n              is_true (lhs == x);\n              (match K.view rhs with\n              | Const { value; _ } -> (\n                  match C.view value with\n                  | Int 5L -> ()\n                  | _ -> fail \"expected 5\")\n              | _ -> fail \"expected const\")\n          | _ -> fail \"expected cmplt\"));\n      test \"lt div fold: x // 4 < 3 → x < 12\" (fun () ->\n          let x = var \"x\" 0 100 in\n          let expr =\n            K.binary ~op:`Cmplt\n              ~lhs:(K.binary ~op:`Idiv ~lhs:x ~rhs:(idx 4))\n              ~rhs:(idx 3)\n          in\n          let result = simplify expr in\n          (match K.view result with\n          | Binary { op = `Cmplt; lhs; rhs } ->\n              is_true (lhs == x);\n              (match K.view rhs with\n              | Const { value; _ } -> (\n                  match C.view value with\n                  | Int 12L -> ()\n                  | _ -> fail \"expected 12\")\n              | _ -> fail \"expected const\")\n          | _ -> fail \"expected cmplt\"));\n      test \"lt sign flip: x*-1 < y*-1 → y < x\" (fun () ->\n          let x = var \"x\" 0 10 and y = var \"y\" 0 10 in\n          let expr =\n            K.binary ~op:`Cmplt\n              ~lhs:(K.binary ~op:`Mul ~lhs:x ~rhs:(idx (-1)))\n              ~rhs:(K.binary ~op:`Mul ~lhs:y ~rhs:(idx (-1)))\n          in\n          let result = simplify expr in\n          (match K.view result with\n          | Binary { op = `Cmplt; lhs; rhs } ->\n              is_true (lhs == y);\n              is_true (rhs == x)\n          | _ -> fail \"expected y < x\"));\n      test \"float div chain: (x/y)/z → x/(y*z)\" (fun () ->\n          let x = f32 12.0 and y = f32 3.0 and z = f32 2.0 in\n          let expr =\n            K.binary ~op:`Fdiv\n              ~lhs:(K.binary ~op:`Fdiv ~lhs:x 
~rhs:y)\n              ~rhs:z\n          in\n          simplifies_to_float expr 2.0);\n    ]\n\n(* New phase 3 rules *)\n\nlet where_fold_tests =\n  group \"where_fold\"\n    [\n      test \"where cast push: where(s,a,b).cast(dt)\" (fun () ->\n          let s = K.const_bool true in\n          let a = idx 5 and b = idx 0 in\n          let w = K.ternary ~op:`Where ~a:s ~b:a ~c:b in\n          let expr = K.cast ~src:w ~dtype:(D.float32) in\n          let result = simplify expr in\n          simplifies_to_float expr 5.0;\n          ignore result);\n    ]\n\n(* Entry point *)\n\nlet () =\n  run \"Ir.Symbolic\"\n    [\n      const_fold_tests;\n      identity_fold_tests;\n      self_fold_tests;\n      divmod_reconstitute_tests;\n      divandmod_tests;\n      combine_terms_tests;\n      associative_tests;\n      gep_tests;\n      decomp_tests;\n      bool_cast_fold_tests;\n      lt_fold_tests;\n      where_fold_tests;\n    ]\n"
  },
  {
    "path": "packages/tolk/test/unit/test_ir_tensor.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Windtrap\nmodule C = Tolk_ir.Const\nmodule D = Tolk_ir.Dtype\nmodule T = Tolk_ir.Tensor\nmodule Ak = Tolk_ir.Axis_kind\nmodule Sh = Tolk_ir.Shape\n\nlet contains haystack needle =\n  let hlen = String.length haystack in\n  let nlen = String.length needle in\n  let rec loop i =\n    if i + nlen > hlen then false\n    else if String.sub haystack i nlen = needle then true\n    else loop (i + 1)\n  in\n  loop 0\n\nlet raises_validate substring fn =\n  raises_match\n    (function Failure msg -> contains msg substring | _ -> false)\n    fn\n\nlet mk_f32 () = T.const (C.float D.Val.float32 1.0) D.float32\nlet mk_i32 () = T.const (C.int D.Val.int32 0) D.int32\nlet mk_idx () = T.const (C.int D.Val.index 0) D.index\nlet mk_bool () = T.const (C.bool true) D.bool\n\nlet emit_buffer ?(dtype = D.float32) () =\n  let u = T.unique ~id:0 in\n  let d = T.device (Single \"CPU\") in\n  let buf = T.buffer ~unique:u ~device:d ~size:1024 ~dtype in\n  (u, d, buf)\n\nlet mk_shape_2x3 () =\n  let d1 = T.const (C.int D.Val.index 2) D.index in\n  let d2 = T.const (C.int D.Val.index 3) D.index in\n  T.vectorize ~srcs:[ d1; d2 ]\n\nlet dtype_eq expected id =\n  match T.dtype id with\n  | Some dt -> is_true (D.equal dt expected)\n  | None -> fail \"expected a dtype but got None\"\n\nlet mk_index_on_buf () =\n  let _u, _d, buf = emit_buffer () in\n  let idx = mk_idx () in\n  (buf, T.index ~ptr:buf ~idxs:[ idx ] ~dtype:D.float32 ())\n\nlet call_info : T.call_info =\n  { grad_fxn = None; metadata = []; name = None; precompile = false }\n\nlet pp_to_string pp v =\n  let buf = Buffer.create 64 in\n  let fmt = Format.formatter_of_buffer buf in\n  pp fmt v;\n  Format.pp_print_flush fmt ();\n  Buffer.contents buf\n\nlet () =\n  run 
\"Ir_next.Tensor\"\n    [\n      group \"Hash-consing and inspection\"\n        [\n          test \"structurally equal nodes are physically equal\" (fun () ->\n            let a = T.unique ~id:42 in\n            let b = T.unique ~id:42 in\n            is_true (a == b));\n          test \"different nodes are distinct\" (fun () ->\n            let a = T.unique ~id:1 in\n            let b = T.unique ~id:2 in\n            is_true (a != b));\n          test \"tags are unique\" (fun () ->\n            let a = T.unique ~id:1 in\n            let b = T.unique ~id:2 in\n            is_true (T.tag a <> T.tag b));\n          test \"toposort leaves first\" (fun () ->\n            let u = T.unique ~id:0 in\n            let d = T.device (Single \"CPU\") in\n            let buf = T.buffer ~unique:u ~device:d ~size:4 ~dtype:D.float32 in\n            let nodes = T.toposort buf in\n            is_true (List.length nodes >= 3);\n            is_true (List.hd nodes == u || List.hd nodes == d));\n          test \"dtype value\" (fun () ->\n            let n = mk_f32 () in\n            some (of_equal D.equal) D.float32 (T.dtype n));\n          test \"dtype effect\" (fun () ->\n            let n = T.sink [] in\n            is_none (T.dtype n));\n          test \"dtype buffer\" (fun () ->\n            let _u, _d, buf = emit_buffer () in\n            some (of_equal D.equal) D.float32 (T.dtype buf));\n          test \"children binary\" (fun () ->\n            let a = mk_f32 () and c = mk_f32 () in\n            let bin = T.binary ~op:`Add ~lhs:a ~rhs:c in\n            is_true (List.length (T.children bin) = 2));\n          test \"children buffer\" (fun () ->\n            let u, d, buf = emit_buffer () in\n            let ch = T.children buf in\n            is_true (List.exists (fun c -> c == u) ch);\n            is_true (List.exists (fun c -> c == d) ch));\n          test \"children pad\" (fun () ->\n            let _u, _d, buf = emit_buffer () in\n            let bef = mk_idx () and aft = 
mk_idx () in\n            let p = T.pad ~src:buf ~before:bef ~after:aft in\n            is_true (List.length (T.children p) = 3));\n          test \"children leaf\" (fun () ->\n            let u = T.unique ~id:0 in\n            let d = T.device (Single \"CPU\") in\n            let dv = T.define_var ~name:\"n\" ~lo:0 ~hi:10 () in\n            equal (list int) [] (List.map T.tag (T.children u));\n            equal (list int) [] (List.map T.tag (T.children d));\n            equal (list int) [] (List.map T.tag (T.children dv)));\n        ];\n      group \"Smart constructor dtype inference\"\n        [\n          test \"binary cmplt produces bool\" (fun () ->\n            let a = mk_f32 () and c = mk_f32 () in\n            dtype_eq D.bool (T.binary ~op:`Cmplt ~lhs:a ~rhs:c));\n          test \"binary add inherits lhs\" (fun () ->\n            let a = mk_f32 () and c = mk_f32 () in\n            dtype_eq D.float32 (T.binary ~op:`Add ~lhs:a ~rhs:c));\n          test \"ternary where inherits b\" (fun () ->\n            let cond = mk_bool () and t = mk_f32 () and e = mk_f32 () in\n            dtype_eq D.float32 (T.ternary ~op:`Where ~a:cond ~b:t ~c:e));\n          test \"ternary mulacc inherits a\" (fun () ->\n            let a = mk_i32 () and c = mk_i32 () and d = mk_i32 () in\n            dtype_eq D.int32 (T.ternary ~op:`Mulacc ~a ~b:c ~c:d));\n          test \"unary inherits src\" (fun () ->\n            dtype_eq D.float32 (T.unary ~op:`Neg ~src:(mk_f32 ())));\n          test \"after inherits src dtype\" (fun () ->\n            dtype_eq D.float32 (T.after ~src:(mk_f32 ()) ~deps:[]));\n          test \"after void for effect src\" (fun () ->\n            dtype_eq D.void (T.after ~src:(T.sink []) ~deps:[]));\n          test \"const derives dtype\" (fun () ->\n            dtype_eq D.float64 (T.const (C.float D.Val.float64 3.14) D.float64));\n          test \"reshape inherits src\" (fun () ->\n            let _u, _d, buf = emit_buffer () in\n            dtype_eq D.float32 
(T.reshape ~src:buf ~shape:(mk_idx ())));\n          test \"permute inherits src\" (fun () ->\n            let _u, _d, buf = emit_buffer () in\n            dtype_eq D.float32 (T.permute ~src:buf ~order:[ 0 ]));\n          test \"flip inherits src\" (fun () ->\n            let _u, _d, buf = emit_buffer () in\n            dtype_eq D.float32 (T.flip ~src:buf ~dims:[ true ]));\n          test \"detach inherits src\" (fun () ->\n            dtype_eq D.float32 (T.detach ~src:(mk_f32 ())));\n          test \"contiguous inherits src\" (fun () ->\n            dtype_eq D.float32 (T.contiguous ~src:(mk_f32 ()) ()));\n          test \"vectorize from scalar and count\" (fun () ->\n            let s1 = mk_f32 () and s2 = mk_f32 () and s3 = mk_f32 () in\n            dtype_eq (D.vec 3 D.float32) (T.vectorize ~srcs:[ s1; s2; s3 ]));\n        ];\n      group \"Smart constructor edge cases\"\n        [\n          test \"vectorize empty raises\" (fun () ->\n            raises_match\n              (function Invalid_argument _ -> true | _ -> false)\n              (fun () -> ignore (T.vectorize ~srcs:[])));\n          test \"mstack empty raises\" (fun () ->\n            raises_match\n              (function Failure _ -> true | _ -> false)\n              (fun () -> ignore (T.mstack ~srcs:[])));\n          test \"shape static\" (fun () ->\n            let s = T.const (C.int D.Val.index 3) D.index in\n            match T.view s with\n            | Const _ -> ()\n            | _ -> fail \"expected Const for 1-d static shape\");\n          test \"shape symbolic\" (fun () ->\n            let s = T.define_var ~name:\"n\" ~lo:1 ~hi:8 () in\n            match T.view s with\n            | Define_var _ -> ()\n            | _ -> fail \"expected Define_var\");\n          test \"shape multi\" (fun () ->\n            let d1 = T.const (C.int D.Val.index 2) D.index in\n            let d2 = T.const (C.int D.Val.index 3) D.index in\n            let s = T.vectorize ~srcs:[ d1; d2 ] in\n            match 
T.view s with\n            | Vectorize _ -> ()\n            | _ -> fail \"expected Vectorize for multi-dim\");\n        ];\n      group \"Validation acceptance\"\n        [\n          test \"buffer chain ok\" (fun () ->\n            let _u, _d, _buf = emit_buffer () in\n            ignore (T.unique ~id:0));\n          test \"buffer view ok\" (fun () ->\n            let _buf, index = mk_index_on_buf () in\n            ignore (T.buffer_view ~src:index ~size:512 ~offset:0 ~dtype:D.float32);\n            ignore (T.unique ~id:0));\n          test \"const all types ok\" (fun () ->\n            ignore (T.const (C.bool true) D.bool);\n            ignore (T.const (C.int D.Val.int32 42) D.int32);\n            ignore (T.const (C.float D.Val.float32 3.14) D.float32);\n            ignore (T.unique ~id:0));\n          test \"vconst ok\" (fun () ->\n            ignore\n              (T.vconst\n                 ~values:[ C.float D.Val.float32 1.0; C.float D.Val.float32 2.0 ]\n                 ~dtype:(D.vec 2 D.float32) ());\n            ignore (T.unique ~id:0));\n          test \"define_var ok\" (fun () ->\n            ignore (T.define_var ~name:\"n\" ~lo:0 ~hi:10 ~dtype:D.int32 ());\n            ignore (T.unique ~id:0));\n          test \"bind ok\" (fun () ->\n            let var = T.define_var ~name:\"n\" ~lo:0 ~hi:10 ~dtype:D.int32 () in\n            ignore (T.bind ~var ~value:(mk_i32 ()) ~dtype:D.int32 ());\n            ignore (T.unique ~id:0));\n          test \"param ok\" (fun () ->\n            let shape = mk_shape_2x3 () in\n            let dev = T.device (Single \"CPU\") in\n            ignore (T.param ~slot:0 ~dtype:D.float32 ~shape ~device:dev ());\n            ignore (T.unique ~id:0));\n          test \"call ref ok\" (fun () ->\n            let fn = mk_f32 () in\n            ignore (T.call ~callee:(Ref fn) ~args:[] ~info:call_info ~dtype:D.float32);\n            ignore (T.unique ~id:0));\n          test \"assign ok\" (fun () ->\n            let _u, _d, buf = 
emit_buffer () in\n            let assigned = T.assign ~target:buf ~value:(mk_f32 ()) () in\n            (* assign emits Store+After *)\n            (match T.view assigned with\n            | After { deps; _ } ->\n                is_true (List.exists (fun d ->\n                  match T.view d with Store _ -> true | _ -> false)\n                  deps)\n            | _ -> fail \"expected After from assign\");\n            ignore (T.unique ~id:0));\n          test \"detach ok\" (fun () ->\n            ignore (T.detach ~src:(mk_f32 ()));\n            ignore (T.unique ~id:0));\n          test \"contiguous ok\" (fun () ->\n            ignore (T.contiguous ~src:(mk_f32 ()) ());\n            ignore (T.unique ~id:0));\n          test \"copy ok\" (fun () ->\n            ignore (T.copy ~src:(mk_f32 ()) ~device:(T.device (Single \"GPU\")) ());\n            ignore (T.unique ~id:0));\n          test \"allreduce ok\" (fun () ->\n            let a = mk_f32 () in\n            let dev = T.device (Single \"GPU\") in\n            ignore (T.allreduce ~src:a ~device:dev ~op:`Add ~dtype:D.float32);\n            ignore (T.unique ~id:0));\n          test \"mstack ok\" (fun () ->\n            ignore (T.mstack ~srcs:[ mk_f32 (); mk_f32 () ]);\n            ignore (T.unique ~id:0));\n          test \"reduce_axis ok\" (fun () ->\n            ignore (T.reduce_axis ~src:(mk_f32 ()) ~op:`Add ~axes:[ 0 ]);\n            ignore (T.unique ~id:0));\n          test \"reshape ok\" (fun () ->\n            let _u, _d, buf = emit_buffer () in\n            ignore (T.reshape ~src:buf ~shape:(mk_shape_2x3 ()));\n            ignore (T.unique ~id:0));\n          test \"expand ok\" (fun () ->\n            let _u, _d, buf = emit_buffer () in\n            ignore (T.expand ~src:buf ~shape:(mk_shape_2x3 ()));\n            ignore (T.unique ~id:0));\n          test \"pad/shrink ok\" (fun () ->\n            let a = mk_f32 () in\n            let bef = mk_idx () and aft = mk_idx () in\n            ignore (T.pad ~src:a 
~before:bef ~after:aft);\n            ignore (T.shrink ~src:a ~before:bef ~after:aft);\n            ignore (T.unique ~id:0));\n          test \"permute ok\" (fun () ->\n            ignore (T.permute ~src:(mk_f32 ()) ~order:[ 1; 0; 2 ]);\n            ignore (T.unique ~id:0));\n          test \"flip ok\" (fun () ->\n            ignore (T.flip ~src:(mk_f32 ()) ~dims:[ true; false ]);\n            ignore (T.unique ~id:0));\n          test \"range/end ok\" (fun () ->\n            let range = T.range ~size:(mk_idx ()) ~axis:0 ~kind:Ak.Loop () in\n            ignore (T.end_ ~value:(mk_f32 ()) ~ranges:[ range ]);\n            ignore (T.unique ~id:0));\n          test \"index/store ok\" (fun () ->\n            let _buf, index = mk_index_on_buf () in\n            ignore (T.store ~dst:index ~value:(mk_f32 ()));\n            ignore (T.unique ~id:0));\n          test \"alu chain ok\" (fun () ->\n            let a = mk_f32 () in\n            let u = T.unary ~op:`Neg ~src:a in\n            let bin = T.binary ~op:`Add ~lhs:u ~rhs:(mk_f32 ()) in\n            ignore (T.ternary ~op:`Where ~a:(mk_bool ()) ~b:bin ~c:a);\n            ignore (T.unique ~id:0));\n        ];\n      group \"Validation rejection — tensor ops\"\n        [\n          test \"reject buffer negative size\" (fun () ->\n            let u = T.unique ~id:0 in\n            let d = T.device (Single \"CPU\") in\n            raises_validate \"non-negative\" (fun () ->\n              ignore (T.buffer ~unique:u ~device:d ~size:(-1) ~dtype:D.float32)));\n          test \"reject buffer unique not unique\" (fun () ->\n            let d = T.device (Single \"CPU\") in\n            raises_validate \"Unique/Lunique\" (fun () ->\n              ignore (T.buffer ~unique:(mk_f32 ()) ~device:d ~size:1024 ~dtype:D.float32)));\n          test \"reject buffer device not device\" (fun () ->\n            let u = T.unique ~id:0 in\n            raises_validate \"Device\" (fun () ->\n              ignore (T.buffer ~unique:u ~device:(mk_f32 ()) 
~size:1024 ~dtype:D.float32)));\n          test \"reject buffer_view negative size\" (fun () ->\n            let _buf, index = mk_index_on_buf () in\n            raises_validate \"non-negative\" (fun () ->\n              ignore (T.buffer_view ~src:index ~size:(-1) ~offset:0 ~dtype:D.float32)));\n          test \"reject buffer_view negative offset\" (fun () ->\n            let _buf, index = mk_index_on_buf () in\n            raises_validate \"non-negative\" (fun () ->\n              ignore (T.buffer_view ~src:index ~size:512 ~offset:(-1) ~dtype:D.float32)));\n          test \"reject buffer_view src not buffer or index\" (fun () ->\n            raises_validate \"must be Buffer or Index\" (fun () ->\n              ignore (T.buffer_view ~src:(mk_f32 ()) ~size:512 ~offset:0 ~dtype:D.float32)));\n          test \"reject vconst count mismatch\" (fun () ->\n            raises_validate \"match vector width\" (fun () ->\n              ignore\n                (T.vconst\n                   ~values:[ C.float D.Val.float32 1.0 ]\n                   ~dtype:(D.vec 3 D.float32) ())));\n          test \"reject vconst element type mismatch\" (fun () ->\n            raises_validate \"int elements\" (fun () ->\n              ignore\n                (T.vconst\n                   ~values:[ C.int D.Val.int32 1; C.int D.Val.int32 2 ]\n                   ~dtype:(D.vec 2 D.float32) ())));\n          test \"reject bind var not define_var\" (fun () ->\n            raises_validate \"Define_var\" (fun () ->\n              ignore (T.bind ~var:(mk_f32 ()) ~dtype:D.float32 ())));\n          test \"reject bind value dtype mismatch\" (fun () ->\n            let var = T.define_var ~name:\"n\" ~lo:0 ~hi:10 ~dtype:D.int32 () in\n            raises_validate \"Bind value\" (fun () ->\n              ignore (T.bind ~var ~value:(mk_f32 ()) ~dtype:D.int32 ())));\n          test \"reject param shape not index vector\" (fun () ->\n            raises_validate \"must be index vector\" (fun () ->\n              
ignore (T.param ~slot:0 ~dtype:D.float32 ~shape:(mk_f32 ()) ())));\n          test \"reject param device not device\" (fun () ->\n            raises_validate \"Device\" (fun () ->\n              ignore (T.param ~slot:0 ~dtype:D.float32 ~device:(mk_f32 ()) ())));\n          test \"reject reduce_axis empty axes\" (fun () ->\n            raises_validate \"at least one axis\" (fun () ->\n              ignore (T.reduce_axis ~src:(mk_f32 ()) ~op:`Add ~axes:[])));\n          test \"reject reduce_axis duplicate axes\" (fun () ->\n            raises_validate \"unique\" (fun () ->\n              ignore (T.reduce_axis ~src:(mk_f32 ()) ~op:`Add ~axes:[ 0; 1; 0 ])));\n          test \"reject permute invalid order\" (fun () ->\n            raises_validate \"valid permutation\" (fun () ->\n              ignore (T.permute ~src:(mk_f32 ()) ~order:[ 0; 0 ])));\n          test \"reject reshape negative dim\" (fun () ->\n            let _u, _d, buf = emit_buffer () in\n            raises_validate \"negative\" (fun () ->\n              ignore (T.reshape ~src:buf ~shape:(T.const (C.int D.Val.index (-1)) D.index))));\n          test \"reject pad/shrink width mismatch\" (fun () ->\n            let a = mk_f32 () in\n            let d1 = T.const (C.int D.Val.index 1) D.index in\n            let d2 = T.const (C.int D.Val.index 2) D.index in\n            let bef = T.vectorize ~srcs:[ d1 ] in\n            let aft = T.vectorize ~srcs:[ d1; d2 ] in\n            raises_validate \"width mismatch\" (fun () ->\n              ignore (T.pad ~src:a ~before:bef ~after:aft)));\n          test \"reject copy device not device\" (fun () ->\n            raises_validate \"Device\" (fun () ->\n              ignore (T.copy ~src:(mk_f32 ()) ~device:(mk_f32 ()) ())));\n          test \"reject allreduce device not device\" (fun () ->\n            raises_validate \"Device\" (fun () ->\n              ignore (T.allreduce ~src:(mk_f32 ()) ~device:(mk_f32 ()) ~op:`Add ~dtype:D.float32)));\n          test \"reject 
contiguous range not index\" (fun () ->\n            raises_validate \"must be index scalar\" (fun () ->\n              ignore (T.contiguous ~src:(mk_f32 ()) ~ranges:[ mk_f32 () ] ())));\n          test \"reject mstack empty\" (fun () ->\n            raises_validate \"must have srcs\" (fun () ->\n              ignore (T.mstack ~srcs:[])));\n          test \"reject mstack dtype mismatch\" (fun () ->\n            raises_validate \"Mstack src\" (fun () ->\n              ignore (T.mstack ~srcs:[ mk_f32 (); mk_i32 () ])));\n          test \"reject cast width change\" (fun () ->\n            let v = T.vectorize ~srcs:[ mk_f32 (); mk_f32 () ] in\n            raises_validate \"vector width\" (fun () ->\n              ignore (T.cast ~src:v ~dtype:D.float32)));\n          test \"reject call ref dtype mismatch\" (fun () ->\n            let fn = mk_f32 () in\n            raises_validate \"Call dtype\" (fun () ->\n              ignore (T.call ~callee:(Ref fn) ~args:[] ~info:call_info ~dtype:D.int32)));\n        ];\n      group \"Validation rejection — ALU\"\n        [\n          test \"reject define_var float\" (fun () ->\n            raises_validate \"must be int/index\" (fun () ->\n              ignore (T.define_var ~name:\"x\" ~lo:0 ~hi:4 ~dtype:D.float32 ())));\n          test \"reject define_var lo > hi\" (fun () ->\n            raises_validate \"lo > hi\" (fun () ->\n              ignore (T.define_var ~name:\"x\" ~lo:5 ~hi:3 ())));\n          test \"reject const type mismatch\" (fun () ->\n            raises_validate \"Bool const\" (fun () ->\n              ignore (T.const (C.bool true) D.int32)));\n          test \"reject binary cmp operands mismatch\" (fun () ->\n            raises_validate \"don't match\" (fun () ->\n              ignore (T.binary ~op:`Cmplt ~lhs:(mk_f32 ()) ~rhs:(mk_i32 ()))));\n          test \"reject binary idiv float\" (fun () ->\n            raises_validate \"int/index\" (fun () ->\n              ignore (T.binary ~op:`Idiv ~lhs:(mk_f32 ()) 
~rhs:(mk_f32 ()))));\n          test \"reject shift non-int\" (fun () ->\n            raises_validate \"int/index\" (fun () ->\n              ignore (T.binary ~op:`Shl ~lhs:(mk_f32 ()) ~rhs:(mk_f32 ()))));\n          test \"reject shift rhs mismatch\" (fun () ->\n            let a = mk_i32 () in\n            let c = T.const (C.int D.Val.int64 2) D.int64 in\n            raises_validate \"Shift rhs\" (fun () ->\n              ignore (T.binary ~op:`Shl ~lhs:a ~rhs:c)));\n          test \"reject where non-bool cond\" (fun () ->\n            raises_validate \"bool scalar\" (fun () ->\n              ignore (T.ternary ~op:`Where ~a:(mk_i32 ()) ~b:(mk_f32 ()) ~c:(mk_f32 ()))));\n          test \"reject where mismatched arms\" (fun () ->\n            raises_validate \"arms\" (fun () ->\n              ignore (T.ternary ~op:`Where ~a:(mk_bool ()) ~b:(mk_f32 ()) ~c:(mk_i32 ()))));\n          test \"reject mulacc mismatch\" (fun () ->\n            raises_validate \"Mulacc\" (fun () ->\n              ignore (T.ternary ~op:`Mulacc ~a:(mk_f32 ()) ~b:(mk_i32 ()) ~c:(mk_f32 ()))));\n        ];\n      group \"check and exn\"\n        [\n          test \"check ok returns Ok\" (fun () ->\n            ignore (mk_f32 ());\n            ignore (T.unique ~id:0));\n          test \"validation raises Failure\" (fun () ->\n            raises_validate \"must be int/index\" (fun () ->\n              ignore (T.define_var ~name:\"x\" ~lo:0 ~hi:4 ~dtype:D.float32 ())));\n        ];\n      group \"Rewriting\"\n        [\n          test \"rebuild replaces const\" (fun () ->\n            ignore (T.const (C.int D.Val.int32 3) D.int32);\n            ignore (T.const (C.int D.Val.int32 5) D.int32);\n            let c3 = T.const (C.int D.Val.int32 3) D.int32 in\n            let g' =\n              T.graph_rewrite\n                (fun n ->\n                  match T.view n with\n                  | Const { value; _ } -> (\n                      match C.view value with\n                      | Int n when 
Int64.to_int n = 3 ->\n                          Some (T.const (C.int D.Val.int32 4) D.int32)\n                      | _ -> None)\n                  | _ -> None)\n                c3\n            in\n            match T.view g' with\n            | Const { value; _ } -> (\n                match C.view value with\n                | Int n -> equal int 4 (Int64.to_int n)\n                | _ -> fail \"expected Int\")\n            | _ -> fail \"expected Const\");\n          test \"rebuild no match identity\" (fun () ->\n            ignore (mk_f32 ());\n            ignore (mk_i32 ());\n            \n            let g = mk_f32 () in\n            let g' = T.graph_rewrite (fun _ -> None) g in\n            is_true (g == g'));\n          test \"graph_rewrite replaces binary\" (fun () ->\n            let a = mk_f32 () in\n            let g = T.binary ~op:`Add ~lhs:a ~rhs:(T.const (C.float D.Val.float32 0.0) D.float32) in\n            let g' =\n              T.graph_rewrite\n                (fun n ->\n                  match T.view n with\n                  | Binary { op = `Add; _ } ->\n                      Some (T.const (C.float D.Val.float32 99.0) D.float32)\n                  | _ -> None)\n                g\n            in\n            (match T.view g' with Const _ -> () | _ -> fail \"expected Const\"));\n          (* rewrite_fixpoint is not in the new API — graph_rewrite\n             handles re-processing internally *)\n          test \"graph_rewrite diverges raises\" (fun () ->\n            let c = T.const (C.int D.Val.int32 3) D.int32 in\n            raises_match\n              (function Failure _ -> true | _ -> false)\n              (fun () ->\n                ignore\n                  (T.graph_rewrite\n                     (fun n ->\n                       match T.view n with\n                       | Const { value; _ } -> (\n                           match C.view value with\n                           | Int i when Int64.to_int i = 3 ->\n                               
Some (T.const (C.int D.Val.int32 4) D.int32)\n                           | Int i when Int64.to_int i = 4 ->\n                               Some (T.const (C.int D.Val.int32 3) D.int32)\n                           | _ -> None)\n                       | _ -> None)\n                     c)));\n          test \"hash-consing deduplicates\" (fun () ->\n            let a1 = T.const (C.float D.Val.float32 1.0) D.float32 in\n            let a2 = T.const (C.float D.Val.float32 1.0) D.float32 in\n            is_true (a1 == a2));\n          test \"map_children remaps\" (fun () ->\n            let a = mk_f32 () and b = mk_i32 () in\n            let replacement = mk_idx () in\n            let view : T.view =\n              Binary { op = `Add; lhs = a; rhs = b; dtype = D.float32 }\n            in\n            match T.map_children (fun n -> if n == a then replacement else n) view with\n            | Binary { lhs; _ } when lhs == replacement -> ()\n            | _ -> fail \"expected remapped Binary\");\n        ];\n      group \"Formatting\"\n        [\n          test \"pp_instr contains op name\" (fun () ->\n            let d = mk_f32 () in\n            is_true\n              (contains\n                 (pp_to_string T.pp_view\n                    (Reshape { src = d; shape = d; dtype = D.float32 }))\n                 \"reshape\");\n            is_true\n              (contains\n                 (pp_to_string T.pp_view\n                    (Buffer { unique = d; device = d; size = 1024; dtype = D.float32 }))\n                 \"buffer\"));\n          test \"pp program indexed\" (fun () ->\n            let u = T.unique ~id:0 in\n            let d = T.device (Single \"CPU\") in\n            let c = mk_f32 () in\n            let root = T.sink [ u; d; c ] in\n            let s = pp_to_string T.pp root in\n            is_true (contains s \"unique\");\n            is_true (contains s \"device\");\n            is_true (contains s \"const\"));\n        ];\n      group \"Shape computation\"\n 
       [\n          test \"buffer shape\" (fun () ->\n            let _u, _d, buf = emit_buffer () in\n            let shapes = T.compute_shapes buf in\n            equal (option (list int)) (Some [ 1024 ]) (shapes buf));\n          test \"const shape is empty\" (fun () ->\n            let c = mk_f32 () in\n            let shapes = T.compute_shapes c in\n            equal (option (list int)) (Some []) (shapes c));\n          test \"reshape shape\" (fun () ->\n            let _u, _d, buf = emit_buffer () in\n            let shape = mk_shape_2x3 () in\n            let r = T.reshape ~src:buf ~shape in\n            let shapes = T.compute_shapes r in\n            equal (option (list int)) (Some [ 2; 3 ]) (shapes r));\n          test \"permute shape\" (fun () ->\n            let _u, _d, buf = emit_buffer ~dtype:D.float32 () in\n            let d1 = T.const (C.int D.Val.index 4) D.index in\n            let d2 = T.const (C.int D.Val.index 8) D.index in\n            let shape = T.vectorize ~srcs:[ d1; d2 ] in\n            let reshaped = T.reshape ~src:buf ~shape in\n            let p = T.permute ~src:reshaped ~order:[ 1; 0 ] in\n            let shapes = T.compute_shapes p in\n            equal (option (list int)) (Some [ 8; 4 ]) (shapes p));\n          test \"unary inherits shape\" (fun () ->\n            let _u, _d, buf = emit_buffer () in\n            let neg = T.unary ~op:`Neg ~src:buf in\n            let shapes = T.compute_shapes neg in\n            equal (option (list int)) (shapes buf) (shapes neg));\n          test \"binary inherits lhs shape\" (fun () ->\n            let _u, _d, buf = emit_buffer () in\n            let c = mk_f32 () in\n            let add = T.binary ~op:`Add ~lhs:buf ~rhs:c in\n            let shapes = T.compute_shapes add in\n            equal (option (list int)) (shapes buf) (shapes add));\n          test \"reduce_axis collapses axes\" (fun () ->\n            let _u, _d, buf = emit_buffer ~dtype:D.float32 () in\n            let d1 = T.const 
(C.int D.Val.index 4) D.index in\n            let d2 = T.const (C.int D.Val.index 8) D.index in\n            let shape = T.vectorize ~srcs:[ d1; d2 ] in\n            let reshaped = T.reshape ~src:buf ~shape in\n            let red = T.reduce_axis ~src:reshaped ~op:`Add ~axes:[ 1 ] in\n            let shapes = T.compute_shapes red in\n            equal (option (list int)) (Some [ 4; 1 ]) (shapes red));\n          test \"sink has no shape\" (fun () ->\n            let sink = T.sink [] in\n            let shapes = T.compute_shapes sink in\n            is_none (shapes sink));\n        ];\n      group \"Device computation\"\n        [\n          test \"device node\" (fun () ->\n            let d = T.device (Single \"GPU\") in\n            let devs = T.compute_devices d in\n            equal (option string) (Some \"GPU\")\n              (match devs d with\n               | Some (Single s) -> Some s\n               | _ -> None));\n          test \"buffer inherits device\" (fun () ->\n            let u = T.unique ~id:0 in\n            let d = T.device (Single \"CPU\") in\n            let buf = T.buffer ~unique:u ~device:d ~size:64 ~dtype:D.float32 in\n            let devs = T.compute_devices buf in\n            equal (option string) (Some \"CPU\")\n              (match devs buf with\n               | Some (Single s) -> Some s\n               | _ -> None));\n        ];\n      group \"Analysis\"\n        [\n          test \"backward_slice excludes root\" (fun () ->\n            let a = mk_f32 () in\n            let neg = T.unary ~op:`Neg ~src:a in\n            let slice = T.backward_slice neg in\n            is_true (List.exists (fun n -> n == a) slice));\n          test \"toposort is topological\" (fun () ->\n            let a = mk_f32 () in\n            let neg = T.unary ~op:`Neg ~src:a in\n            let topo = T.toposort neg in\n            let idx_of n = List.find_opt (fun (_, x) -> x == n)\n              (List.mapi (fun i x -> (i, x)) topo)\n              |> 
Option.map fst |> Option.value ~default:(-1) in\n            is_true (idx_of a < idx_of neg));\n          test \"consumer_map tracks consumers\" (fun () ->\n            let a = mk_f32 () in\n            let neg = T.unary ~op:`Neg ~src:a in\n            let consumers = T.consumer_map neg in\n            is_true (List.exists (fun c -> c == neg) (consumers a)));\n          test \"base follows through movement ops\" (fun () ->\n            let _u, _d, buf = emit_buffer () in\n            let shape = mk_shape_2x3 () in\n            let reshaped = T.reshape ~src:buf ~shape in\n            let perm = T.permute ~src:reshaped ~order:[ 1; 0 ] in\n            is_true (T.base perm == buf));\n          test \"base stops at non-movement\" (fun () ->\n            let a = mk_f32 () in\n            let neg = T.unary ~op:`Neg ~src:a in\n            is_true (T.base neg == neg));\n        ];\n    ]\n"
  },
  {
    "path": "packages/tolk/test/unit/test_program_spec.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Windtrap\nopen Tolk\nopen Tolk_ir\nmodule P = Program\n\nlet global_ptr dt = Dtype.Ptr.create dt ~addrspace:Global ~size:(-1)\n\nlet spec_of ?(estimates : Program_spec.Estimates.t option) b =\n  Program_spec.of_program ~name:\"kern\" ~src:\"\" ~device:\"CPU\" ?estimates (P.finish b)\n\nlet empty_spec ?estimates () = spec_of ?estimates (P.create ())\n\nlet () =\n  run \"Program_spec\"\n    [\n      group \"Extraction\"\n        [\n          test \"reads and writes are deduplicated\" (fun () ->\n            let ptr = global_ptr Dtype.Val.float32 in\n            let b = P.create () in\n            let p0 = P.emit b (Param { idx = 0; dtype = ptr }) in\n            let p1 = P.emit b (Param { idx = 1; dtype = ptr }) in\n            let c0 =\n              P.emit b (Const { value = Const.int Dtype.Val.int32 0; dtype = Dtype.Val.int32 })\n            in\n            let idx1 = P.emit b (Index { ptr = p0; idxs = [ c0 ]; gate = None; dtype = ptr }) in\n            let idx2 = P.emit b (Index { ptr = p1; idxs = [ c0 ]; gate = None; dtype = ptr }) in\n            let ld1 = P.emit b (Load { src = idx2; alt = None; dtype = Dtype.Val.float32 }) in\n            let ld2 = P.emit b (Load { src = idx2; alt = None; dtype = Dtype.Val.float32 }) in\n            let _ = P.emit b (Store { dst = idx1; value = ld1 }) in\n            let _ = P.emit b (Store { dst = idx1; value = ld2 }) in\n            let spec = spec_of b in\n            equal (list int) [ 0 ] (Program_spec.outs spec);\n            equal (list int) [ 1 ] (Program_spec.ins spec));\n          test \"thread-group launch expressions are preserved\" (fun () ->\n            let b = P.create () in\n            let dv = P.emit b (Define_var { name = \"m\"; lo = 
1; hi = 32; dtype = Dtype.Val.int32 }) in\n            let c4 =\n              P.emit b (Const { value = Const.int Dtype.Val.int32 4; dtype = Dtype.Val.int32 })\n            in\n            let mul = P.emit b (Binary { op = `Mul; lhs = dv; rhs = c4; dtype = Dtype.Val.int32 }) in\n            let _ = P.emit b (Special { dim = Special_dim.Group_id 0; size = mul; dtype = Dtype.Val.index }) in\n            let _ = P.emit b (Special { dim = Special_dim.Local_id 1; size = dv; dtype = Dtype.Val.index }) in\n            let spec = spec_of b in\n            match Program_spec.launch_kind spec with\n            | Program_spec.Thread_groups ->\n                let global, local = Program_spec.launch_dims spec [ \"m\", 3 ] in\n                equal (array int) [| 12; 1; 1 |] global;\n                begin match local with\n                | None -> failwith \"expected local dims\"\n                | Some local -> equal (array int) [| 1; 3; 1 |] local\n                end\n            | _ -> failwith \"expected thread-group launch metadata\");\n          test \"launch variables are resolved by name\" (fun () ->\n            let b = P.create () in\n            let _ = P.emit b (Define_var { name = \"m\"; lo = 0; hi = 7; dtype = Dtype.Val.int32 }) in\n            let dv1 = P.emit b (Define_var { name = \"n\"; lo = 0; hi = 15; dtype = Dtype.Val.int32 }) in\n            let _ = P.emit b (Special { dim = Special_dim.Group_id 0; size = dv1; dtype = Dtype.Val.index }) in\n            let global, _local = Program_spec.launch_dims (spec_of b) [ \"m\", 3; \"n\", 9 ] in\n            equal (array int) [| 9; 1; 1 |] global);\n          test \"global idx uses thread launch\" (fun () ->\n            let b = P.create () in\n            let dv = P.emit b (Define_var { name = \"threads\"; lo = 1; hi = 64; dtype = Dtype.Val.int32 }) in\n            let _ = P.emit b (Special { dim = Special_dim.Global_idx 2; size = dv; dtype = Dtype.Val.index }) in\n            let spec = spec_of b in\n            
match Program_spec.launch_kind spec with\n            | Program_spec.Threads ->\n                let global, local = Program_spec.launch_dims spec [ \"threads\", 11 ] in\n                equal (array int) [| 1; 1; 11 |] global;\n                equal (option pass) None local\n            | _ -> failwith \"expected flat thread launch metadata\");\n          test \"core_id is explicit runtime metadata\" (fun () ->\n            let b = P.create () in\n            let _ = P.emit b (Define_var { name = \"arg\"; lo = 0; hi = 9; dtype = Dtype.Val.int32 }) in\n            let _ = P.emit b (Define_var { name = \"core_id\"; lo = 0; hi = 7; dtype = Dtype.Val.int32 }) in\n            let spec = spec_of b in\n            match Program_spec.core_id spec with\n            | None -> failwith \"expected core_id metadata\"\n            | Some core_id ->\n                equal int 1 core_id.var_index;\n                equal int 8 (Program_spec.thread_count core_id);\n                begin match Program_spec.launch_kind spec with\n                | Program_spec.Serial -> ()\n                | _ -> failwith \"core_id should not synthesize GPU launch metadata\"\n                end);\n          test \"duplicate launch axis is rejected\" (fun () ->\n            let b = P.create () in\n            let c4 =\n              P.emit b (Const { value = Const.int Dtype.Val.int32 4; dtype = Dtype.Val.int32 })\n            in\n            let _ = P.emit b (Special { dim = Special_dim.Group_id 0; size = c4; dtype = Dtype.Val.index }) in\n            let _ = P.emit b (Special { dim = Special_dim.Group_id 0; size = c4; dtype = Dtype.Val.index }) in\n            raises_invalid_arg \"group_id axis 0 appears more than once\" (fun () ->\n                ignore (spec_of b)));\n          test \"mixed launch models are rejected\" (fun () ->\n            let b = P.create () in\n            let c4 =\n              P.emit b (Const { value = Const.int Dtype.Val.int32 4; dtype = Dtype.Val.int32 })\n            
in\n            let _ = P.emit b (Special { dim = Special_dim.Group_id 0; size = c4; dtype = Dtype.Val.index }) in\n            let _ = P.emit b (Special { dim = Special_dim.Global_idx 1; size = c4; dtype = Dtype.Val.index }) in\n            raises_invalid_arg\n              \"launch metadata cannot mix flat-thread and thread-group specials\"\n              (fun () -> ignore (spec_of b)));\n          test \"core_id lower bound must be zero\" (fun () ->\n            let b = P.create () in\n            let _ = P.emit b (Define_var { name = \"core_id\"; lo = 2; hi = 7; dtype = Dtype.Val.int32 }) in\n            raises_invalid_arg \"core_id must have lower bound 0\" (fun () ->\n                ignore (spec_of b)));\n          test \"exact estimates are forwarded\" (fun () ->\n            let estimates =\n              Program_spec.Estimates.of_kernel\n                Kernel.{ ops = Int 7; lds = Int 11; mem = Int 13 }\n            in\n            let est = Program_spec.estimates (empty_spec ~estimates ()) in\n            begin match est.ops with\n            | Program_spec.Estimates.Int 7 -> ()\n            | _ -> failwith \"expected exact ops estimate\"\n            end;\n            begin match est.lds with\n            | Program_spec.Estimates.Int 11 -> ()\n            | _ -> failwith \"expected exact lds estimate\"\n            end;\n            begin match est.mem with\n            | Program_spec.Estimates.Int 13 -> ()\n            | _ -> failwith \"expected exact mem estimate\"\n            end);\n          test \"symbolic estimates require caller handling\" (fun () ->\n            let sym_node = Kernel.define_var ~name:\"n\" ~lo:1 ~hi:100 () in\n            let estimates =\n              Program_spec.Estimates.of_kernel\n                Kernel.{ ops = Symbolic sym_node; lds = Int 1; mem = Int 2 }\n            in\n            match estimates.ops with\n            | Program_spec.Estimates.Symbolic _ -> ()\n            | _ -> failwith \"expected symbolic ops 
estimate\");\n        ];\n      group \"Estimates.of_program\"\n        [\n          test \"counts basic ALU ops\" (fun () ->\n            let b = P.create () in\n            let a =\n              P.emit b (Const { value = Const.float Dtype.Val.float32 1.0; dtype = Dtype.Val.float32 })\n            in\n            let c =\n              P.emit b (Const { value = Const.float Dtype.Val.float32 2.0; dtype = Dtype.Val.float32 })\n            in\n            let _ = P.emit b (Binary { op = `Add; lhs = a; rhs = c; dtype = Dtype.Val.float32 }) in\n            let _ = P.emit b (Unary { op = `Neg; src = a; dtype = Dtype.Val.float32 }) in\n            let est = Program_spec.Estimates.of_program (P.finish b) in\n            begin match est.ops with\n            | Program_spec.Estimates.Int 2 -> ()\n            | Program_spec.Estimates.Int n ->\n                failwith (Printf.sprintf \"expected 2 FLOPs, got %d\" n)\n            | _ -> failwith \"expected exact int ops estimate\"\n            end);\n          test \"mulacc counts as 2 FLOPs\" (fun () ->\n            let b = P.create () in\n            let a =\n              P.emit b (Const { value = Const.float Dtype.Val.float32 1.0; dtype = Dtype.Val.float32 })\n            in\n            let c =\n              P.emit b (Const { value = Const.float Dtype.Val.float32 2.0; dtype = Dtype.Val.float32 })\n            in\n            let d =\n              P.emit b (Const { value = Const.float Dtype.Val.float32 3.0; dtype = Dtype.Val.float32 })\n            in\n            let _ =\n              P.emit b (Ternary { op = `Mulacc; a; b = c; c = d; dtype = Dtype.Val.float32 })\n            in\n            let est = Program_spec.Estimates.of_program (P.finish b) in\n            begin match est.ops with\n            | Program_spec.Estimates.Int 2 -> ()\n            | Program_spec.Estimates.Int n ->\n                failwith (Printf.sprintf \"expected 2 FLOPs, got %d\" n)\n            | _ -> failwith \"expected exact int ops 
estimate\"\n            end);\n          test \"loop multiplier stacks\" (fun () ->\n            let b = P.create () in\n            let c10 =\n              P.emit b (Const { value = Const.int Dtype.Val.int32 10; dtype = Dtype.Val.int32 })\n            in\n            let range =\n              P.emit b\n                (Range\n                   { size = c10; dtype = Dtype.Val.int32; axis = 0; sub = [];\n                     kind = Axis_kind.Loop })\n            in\n            let a =\n              P.emit b (Const { value = Const.float Dtype.Val.float32 1.0; dtype = Dtype.Val.float32 })\n            in\n            let add =\n              P.emit b (Binary { op = `Add; lhs = a; rhs = a; dtype = Dtype.Val.float32 })\n            in\n            let _ = P.emit b (End_range { dep = add; range }) in\n            let est = Program_spec.Estimates.of_program (P.finish b) in\n            begin match est.ops with\n            | Program_spec.Estimates.Int 10 -> ()\n            | Program_spec.Estimates.Int n ->\n                failwith (Printf.sprintf \"expected 10 FLOPs (1 op * 10 iters), got %d\" n)\n            | _ -> failwith \"expected exact int ops estimate\"\n            end);\n          test \"load/store tracks lds bytes\" (fun () ->\n            let ptr = global_ptr Dtype.Val.float32 in\n            let b = P.create () in\n            let p0 = P.emit b (Param { idx = 0; dtype = ptr }) in\n            let c0 =\n              P.emit b (Const { value = Const.int Dtype.Val.int32 0; dtype = Dtype.Val.int32 })\n            in\n            let idx = P.emit b (Index { ptr = p0; idxs = [ c0 ]; gate = None; dtype = ptr }) in\n            let ld = P.emit b (Load { src = idx; alt = None; dtype = Dtype.Val.float32 }) in\n            let _ = P.emit b (Store { dst = idx; value = ld }) in\n            let est = Program_spec.Estimates.of_program (P.finish b) in\n            begin match est.lds with\n            | Program_spec.Estimates.Int n when n = 4 + 4 -> ()\n            | 
Program_spec.Estimates.Int n ->\n                failwith (Printf.sprintf \"expected 8 lds bytes (4 load + 4 store), got %d\" n)\n            | _ -> failwith \"expected exact int lds estimate\"\n            end);\n          test \"index arithmetic excluded from FLOPs\" (fun () ->\n            let ptr = global_ptr Dtype.Val.float32 in\n            let b = P.create () in\n            let p0 = P.emit b (Param { idx = 0; dtype = ptr }) in\n            let c0 =\n              P.emit b (Const { value = Const.int Dtype.Val.int32 0; dtype = Dtype.Val.int32 })\n            in\n            let c1 =\n              P.emit b (Const { value = Const.int Dtype.Val.int32 1; dtype = Dtype.Val.int32 })\n            in\n            (* This add is used as an index operand — should be excluded. *)\n            let idx_expr =\n              P.emit b (Binary { op = `Add; lhs = c0; rhs = c1; dtype = Dtype.Val.int32 })\n            in\n            let idx =\n              P.emit b\n                (Index { ptr = p0; idxs = [ idx_expr ]; gate = None; dtype = ptr })\n            in\n            let _ = P.emit b (Load { src = idx; alt = None; dtype = Dtype.Val.float32 }) in\n            let est = Program_spec.Estimates.of_program (P.finish b) in\n            begin match est.ops with\n            | Program_spec.Estimates.Int 0 -> ()\n            | Program_spec.Estimates.Int n ->\n                failwith (Printf.sprintf \"expected 0 FLOPs (index add excluded), got %d\" n)\n            | _ -> failwith \"expected exact int ops estimate\"\n            end);\n        ];\n    ]\n"
  },
  {
    "path": "packages/tolk/test/unit/test_runtime_cpu.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Windtrap\nopen Tolk\nopen Tolk_ir\nmodule P = Program\n\nlet global_ptr dt = Dtype.Ptr.create dt ~addrspace:Global ~size:(-1)\n\nlet int32_to_bytes values =\n  let bytes = Bytes.create (List.length values * 4) in\n  let set off value =\n    let open Int32 in\n    Bytes.set bytes off (Char.chr (to_int (logand value 0xFFl)));\n    Bytes.set bytes (off + 1)\n      (Char.chr (to_int (logand (shift_right_logical value 8) 0xFFl)));\n    Bytes.set bytes (off + 2)\n      (Char.chr (to_int (logand (shift_right_logical value 16) 0xFFl)));\n    Bytes.set bytes (off + 3)\n      (Char.chr (to_int (logand (shift_right_logical value 24) 0xFFl)))\n  in\n  List.iteri (fun i value -> set (i * 4) (Int32.of_int value)) values;\n  bytes\n\nlet int32_list_of_bytes bytes =\n  let len = Bytes.length bytes / 4 in\n  let get off =\n    let open Int32 in\n    logor\n      (of_int (Char.code (Bytes.get bytes off)))\n      (logor\n         (shift_left (of_int (Char.code (Bytes.get bytes (off + 1)))) 8)\n         (logor\n            (shift_left (of_int (Char.code (Bytes.get bytes (off + 2)))) 16)\n            (shift_left (of_int (Char.code (Bytes.get bytes (off + 3)))) 24)))\n  in\n  List.init len (fun i -> Int32.to_int (get (i * 4)))\n\nlet cpu name = Tolk_cpu.create (\"CPU:\" ^ name)\n\nlet create_i32_buffer device values =\n  let buf =\n    Device.create_buffer ~size:(List.length values) ~dtype:Dtype.int32 device\n  in\n  Device.Buffer.ensure_allocated buf;\n  Device.Buffer.copyin buf (int32_to_bytes values);\n  buf\n\nlet read_i32_buffer buf = Device.Buffer.as_bytes buf |> int32_list_of_bytes\n\nlet increment_program () =\n  let dt = Dtype.Val.int32 in\n  let ptr = global_ptr dt in\n  let b = P.create () in\n  let p0 = 
P.emit b (Param { idx = 0; dtype = ptr }) in\n  let p1 = P.emit b (Param { idx = 1; dtype = ptr }) in\n  let c0 = P.emit b (Const { value = Const.int Dtype.Val.int32 0; dtype = Dtype.Val.int32 }) in\n  let idx_src = P.emit b (Index { ptr = p1; idxs = [ c0 ]; gate = None; dtype = ptr }) in\n  let idx_dst = P.emit b (Index { ptr = p0; idxs = [ c0 ]; gate = None; dtype = ptr }) in\n  let l0 = P.emit b (Load { src = idx_src; alt = None; dtype = dt }) in\n  let c1 = P.emit b (Const { value = Const.int dt 1; dtype = dt }) in\n  let sum = P.emit b (Binary { op = `Add; lhs = l0; rhs = c1; dtype = dt }) in\n  let _ = P.emit b (Store { dst = idx_dst; value = sum }) in\n  P.finish b\n\nlet core_id_program ~threads =\n  let dt = Dtype.Val.int32 in\n  let ptr = global_ptr dt in\n  let b = P.create () in\n  let p0 = P.emit b (Param { idx = 0; dtype = ptr }) in\n  let dv = P.emit b (Define_var { name = \"core_id\"; lo = 0; hi = threads - 1; dtype = dt }) in\n  let idx = P.emit b (Index { ptr = p0; idxs = [ dv ]; gate = None; dtype = ptr }) in\n  let _ = P.emit b (Store { dst = idx; value = dv }) in\n  P.finish b\n\nlet run_spec device spec bufs =\n  let car = Realize.Compiled_runner.create ~device spec in\n  ignore (Realize.Compiled_runner.call car bufs [] ~wait:true ~timeout:None);\n  Device.synchronize device\n\nlet () =\n  run \"Cpu_runtime\"\n    [\n      group \"Execution\"\n        [\n          test \"compile and run one kernel\" (fun () ->\n            let device = cpu \"run-one\" in\n            let spec =\n              Device.compile_program device ~name:\"add_one\"\n                (increment_program ())\n            in\n            let dst = create_i32_buffer device [ 0 ] in\n            let src = create_i32_buffer device [ 41 ] in\n            run_spec device spec [ dst; src ];\n            equal (list int) [ 42 ] (read_i32_buffer dst));\n          test \"exec is ordered\" (fun () ->\n            let device = cpu \"ordered\" in\n            let spec =\n              
Device.compile_program device ~name:\"ordered_add_one\"\n                (increment_program ())\n            in\n            let a = create_i32_buffer device [ 0 ] in\n            let b = create_i32_buffer device [ 0 ] in\n            run_spec device spec [ b; a ];\n            run_spec device spec [ a; b ];\n            equal (list int) [ 2 ] (read_i32_buffer a);\n            equal (list int) [ 1 ] (read_i32_buffer b));\n          test \"core_id drives parallel execution\" (fun () ->\n            let device = cpu \"core-id\" in\n            let threads = 4 in\n            let spec =\n              Device.compile_program device ~name:\"write_core_id\"\n                (core_id_program ~threads)\n            in\n            let dst = create_i32_buffer device [ 0; 0; 0; 0 ] in\n            run_spec device spec [ dst ];\n            equal (list int) [ 0; 1; 2; 3 ] (read_i32_buffer dst));\n        ];\n    ]\n"
  },
  {
    "path": "packages/tolk/test/unit/test_runtime_metal.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Windtrap\nopen Tolk\nopen Tolk_ir\nmodule P = Program\n\nlet global_ptr dt = Dtype.Ptr.create dt ~addrspace:Global ~size:(-1)\n\nlet int32_to_bytes values =\n  let bytes = Bytes.create (List.length values * 4) in\n  let set off value =\n    let open Int32 in\n    Bytes.set bytes off (Char.chr (to_int (logand value 0xFFl)));\n    Bytes.set bytes (off + 1)\n      (Char.chr (to_int (logand (shift_right_logical value 8) 0xFFl)));\n    Bytes.set bytes (off + 2)\n      (Char.chr (to_int (logand (shift_right_logical value 16) 0xFFl)));\n    Bytes.set bytes (off + 3)\n      (Char.chr (to_int (logand (shift_right_logical value 24) 0xFFl)))\n  in\n  List.iteri (fun i value -> set (i * 4) (Int32.of_int value)) values;\n  bytes\n\nlet int32_list_of_bytes bytes =\n  let len = Bytes.length bytes / 4 in\n  let get off =\n    let open Int32 in\n    logor\n      (of_int (Char.code (Bytes.get bytes off)))\n      (logor\n         (shift_left (of_int (Char.code (Bytes.get bytes (off + 1)))) 8)\n         (logor\n            (shift_left (of_int (Char.code (Bytes.get bytes (off + 2)))) 16)\n            (shift_left (of_int (Char.code (Bytes.get bytes (off + 3)))) 24)))\n  in\n  List.init len (fun i -> Int32.to_int (get (i * 4)))\n\nlet metal_device =\n  let cached : Tolk.Device.t option ref = ref None in\n  fun () ->\n    match !cached with\n    | Some device -> device\n    | None -> (\n        try\n          let device = Tolk_metal.create \"METAL:test\" in\n          cached := Some device;\n          device\n        with Failure msg -> skip ~reason:msg ())\n\nlet i32_buf device values =\n  let buf =\n    Device.create_buffer ~size:(List.length values) ~dtype:Dtype.int32 device\n  in\n  Device.Buffer.ensure_allocated 
buf;\n  Device.Buffer.copyin buf (int32_to_bytes values);\n  buf\n\nlet read_i32 buf = Device.Buffer.as_bytes buf |> int32_list_of_bytes\n\nlet increment_program () =\n  let dt = Dtype.Val.int32 in\n  let ptr = global_ptr dt in\n  let b = P.create () in\n  let p0 = P.emit b (Param { idx = 0; dtype = ptr }) in\n  let p1 = P.emit b (Param { idx = 1; dtype = ptr }) in\n  let c0 = P.emit b (Const { value = Const.int Dtype.Val.int32 0; dtype = Dtype.Val.int32 }) in\n  let idx_src = P.emit b (Index { ptr = p1; idxs = [ c0 ]; gate = None; dtype = ptr }) in\n  let idx_dst = P.emit b (Index { ptr = p0; idxs = [ c0 ]; gate = None; dtype = ptr }) in\n  let l0 = P.emit b (Load { src = idx_src; alt = None; dtype = dt }) in\n  let c1 = P.emit b (Const { value = Const.int dt 1; dtype = dt }) in\n  let sum = P.emit b (Binary { op = `Add; lhs = l0; rhs = c1; dtype = dt }) in\n  let _ = P.emit b (Store { dst = idx_dst; value = sum }) in\n  P.finish b\n\nlet compile_incr device name =\n  Device.compile_program device ~name (increment_program ())\n\nlet run_spec device spec bufs =\n  let car = Realize.Compiled_runner.create ~device spec in\n  ignore (Realize.Compiled_runner.call car bufs [] ~wait:true ~timeout:None);\n  Device.synchronize device\n\nlet () =\n  run \"Metal_runtime\"\n    [\n      group \"Execution\"\n        [\n          test \"compile and run one kernel\" (fun () ->\n            let device = metal_device () in\n            let spec = compile_incr device \"metal_add_one\" in\n            let dst = i32_buf device [ 0 ] in\n            let src = i32_buf device [ 41 ] in\n            run_spec device spec [ dst; src ];\n            equal (list int) [ 42 ] (read_i32 dst));\n          test \"exec is ordered\" (fun () ->\n            let device = metal_device () in\n            let spec = compile_incr device \"metal_ordered_add_one\" in\n            let a = i32_buf device [ 0 ] in\n            let b = i32_buf device [ 0 ] in\n            run_spec device spec [ b; a ];\n       
     run_spec device spec [ a; b ];\n            equal (list int) [ 2 ] (read_i32 a);\n            equal (list int) [ 1 ] (read_i32 b));\n        ];\n    ]\n"
  },
  {
    "path": "packages/tolk/test/unit/test_runtime_search.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Runtime tests for Search.\n\n   Tests that beam_search compiles and executes kernels correctly on real\n   hardware (CPU via Clang). Complements the pure-logic unit tests in\n   test_codegen_search.ml. *)\n\nopen Windtrap\nopen Tolk\nopen Tolk_ir\nmodule K = Kernel\nmodule D = Dtype\nmodule C = Const\nmodule Ak = Axis_kind\nmodule P = Postrange\n\n(* Helpers *)\n\nlet idx n = K.const (C.int D.Val.index n)\nlet ren = Cstyle.clang\n\nlet f32_ptr n = D.Ptr.create D.Val.float32 ~addrspace:Global ~size:n\n\nlet cpu name = Tolk_cpu.create (\"CPU:\" ^ name)\n\nlet f32_to_bytes values =\n  let bytes = Bytes.create (List.length values * 4) in\n  List.iteri\n    (fun i v -> Bytes.set_int32_le bytes (i * 4) (Int32.bits_of_float v))\n    values;\n  bytes\n\nlet read_f32_buffer buf =\n  let bytes = Device.Buffer.as_bytes buf in\n  let n = Bytes.length bytes / 4 in\n  List.init n (fun i -> Int32.float_of_bits (Bytes.get_int32_le bytes (i * 4)))\n\nlet create_f32_buffer device n values =\n  let buf = Device.create_buffer ~size:n ~dtype:D.float32 device in\n  Device.Buffer.ensure_allocated buf;\n  Device.Buffer.copyin buf (f32_to_bytes values);\n  buf\n\nlet create_bufs_for_kernel device ast =\n  List.map\n    (fun p ->\n      match K.view p with\n      | Param { dtype = pty; _ } ->\n          let buf = Device.create_buffer ~size:(D.Ptr.size pty) ~dtype:(D.Val (D.Ptr.base pty)) device in\n          Device.Buffer.ensure_allocated buf;\n          buf\n      | _ -> assert false)\n    (P.bufs_from_ast ast)\n\n(* AST Fixture Builders *)\n\n(* Elementwise: output[i] = input[i] + input[i], single flat loop.\n   Avoids transcendental ops (exp2/sin/log2) because the Clang freestanding\n   backend compiles to ELF without 
libm — those ops require the transcendental\n   decomposition pass which is not yet ported. *)\nlet elementwise_1d_ast ~n =\n  let p0 = K.param ~idx:0 ~dtype:(f32_ptr n) in\n  let p1 = K.param ~idx:1 ~dtype:(f32_ptr n) in\n  let r0 = K.range ~size:(idx n) ~axis:0 ~kind:Ak.Loop ~dtype:D.Val.index () in\n  let in_idx = K.index ~ptr:p1 ~idxs:[ r0 ] () in\n  let ld = K.load ~src:in_idx () in\n  let value = K.binary ~op:`Add ~lhs:ld ~rhs:ld in\n  let out_idx = K.index ~ptr:p0 ~idxs:[ r0 ] () in\n  let st = K.store ~dst:out_idx ~value ~ranges:[] in\n  let e = K.end_ ~value:st ~ranges:[ r0 ] () in\n  let ki =\n    {\n      K.name = \"test\";\n      axis_kinds = [];\n      dont_use_locals = false;\n      applied_opts = [];\n      opts_to_apply = None;\n      estimates = None;\n    }\n  in\n  K.sink ~kernel_info:ki [ e ]\n\n(* Elementwise 2D: output[r0,r1] = input[r0,r1] + input[r0,r1] *)\nlet elementwise_2d_ast ~s0 ~s1 =\n  let n = s0 * s1 in\n  let p0 = K.param ~idx:0 ~dtype:(f32_ptr n) in\n  let p1 = K.param ~idx:1 ~dtype:(f32_ptr n) in\n  let r0 = K.range ~size:(idx s0) ~axis:0 ~kind:Ak.Loop ~dtype:D.Val.index () in\n  let r1 = K.range ~size:(idx s1) ~axis:1 ~kind:Ak.Loop ~dtype:D.Val.index () in\n  let open K.O in\n  let in_idx = K.index ~ptr:p1 ~idxs:[ r0 * idx s1 + r1 ] () in\n  let ld = K.load ~src:in_idx () in\n  let value = K.binary ~op:`Add ~lhs:ld ~rhs:ld in\n  let out_idx = K.index ~ptr:p0 ~idxs:[ r0 * idx s1 + r1 ] () in\n  let st = K.store ~dst:out_idx ~value ~ranges:[] in\n  let e = K.end_ ~value:st ~ranges:[ r0; r1 ] () in\n  let ki =\n    {\n      K.name = \"test\";\n      axis_kinds = [];\n      dont_use_locals = false;\n      applied_opts = [];\n      opts_to_apply = None;\n      estimates = None;\n    }\n  in\n  K.sink ~kernel_info:ki [ e ]\n\n(* Tests *)\n\nlet beam_search_tests =\n  group \"beam_search on CPU\"\n    [\n      slow \"Lowering.compile produces correct output\" (fun () ->\n          let device = cpu \"compile-test\" in\n          let n = 
16 in\n          let ast = elementwise_1d_ast ~n in\n          let s = P.create ast ren in\n          let opt_ast = P.get_optimized_ast (P.copy s) in\n          let program = Device.compile_program device (Linearizer.linearize (Codegen_lower.lower ren opt_ast)) in\n          let out_buf = create_f32_buffer device n (List.init n (fun _ -> 0.0)) in\n          let in_buf =\n            create_f32_buffer device n (List.init n (fun i -> Float.of_int i))\n          in\n          let car = Realize.Compiled_runner.create ~device program in\n          ignore (Realize.Compiled_runner.call car [ out_buf; in_buf ] []\n            ~wait:true ~timeout:None);\n          Device.synchronize device;\n          let output = read_f32_buffer out_buf in\n          let expected =\n            List.init n (fun i -> let x = Float.of_int i in x +. x)\n          in\n          List.iter2\n            (fun exp act ->\n              is_true\n                ~msg:(Printf.sprintf \"expected %.4f, got %.4f\" exp act)\n                (Float.abs (exp -. 
act) < 1e-4))\n            expected output);\n      slow \"completes on 1D elementwise kernel\" (fun () ->\n          let device = cpu \"beam-1d\" in\n          let n = 16 in\n          let ast = elementwise_1d_ast ~n in\n          let s = P.create ast ren in\n          let rawbufs = create_bufs_for_kernel device ast in\n          let input_data = List.init n (fun i -> Float.of_int i) in\n          Device.Buffer.copyin (List.nth rawbufs 1) (f32_to_bytes input_data);\n          let result = Search.beam_search s rawbufs 1 device in\n          is_true (P.shape_len result >= 1));\n      slow \"completes on 2D elementwise kernel\" (fun () ->\n          let device = cpu \"beam-2d\" in\n          let ast = elementwise_2d_ast ~s0:8 ~s1:8 in\n          let s = P.create ast ren in\n          let rawbufs = create_bufs_for_kernel device ast in\n          let result = Search.beam_search s rawbufs 1 device in\n          is_true (P.shape_len result >= 1));\n      slow \"optimized kernel produces correct output\" (fun () ->\n          let device = cpu \"beam-correct\" in\n          let n = 16 in\n          let ast = elementwise_1d_ast ~n in\n          let s = P.create ast ren in\n          let rawbufs = create_bufs_for_kernel device ast in\n          let input_data = List.init n (fun i -> Float.of_int i) in\n          Device.Buffer.copyin (List.nth rawbufs 1) (f32_to_bytes input_data);\n          let result = Search.beam_search s rawbufs 1 device in\n          let out_buf = create_f32_buffer device n (List.init n (fun _ -> 0.0)) in\n          let in_buf = create_f32_buffer device n input_data in\n          let opt_ast = P.get_optimized_ast (P.copy result) in\n          let program = Device.compile_program device (Linearizer.linearize (Codegen_lower.lower (P.ren result) opt_ast)) in\n          let car = Realize.Compiled_runner.create ~device program in\n          ignore (Realize.Compiled_runner.call car [ out_buf; in_buf ] []\n            ~wait:true ~timeout:None);\n          
Device.synchronize device;\n          let output = read_f32_buffer out_buf in\n          let expected = List.map (fun x -> x +. x) input_data in\n          List.iter2\n            (fun exp act ->\n              is_true\n                ~msg:(Printf.sprintf \"expected %.4f, got %.4f\" exp act)\n                (Float.abs (exp -. act) < 1e-4))\n            expected output);\n      (* Verify beam_search does not corrupt input buffer contents. *)\n      slow \"beam_search does not corrupt input buffers\" (fun () ->\n          let device = cpu \"no-mutate\" in\n          let n = 16 in\n          let ast = elementwise_1d_ast ~n in\n          let s = P.create ast ren in\n          let rawbufs = create_bufs_for_kernel device ast in\n          let input_data = List.init n (fun i -> Float.of_int (i + 1)) in\n          Device.Buffer.copyin (List.nth rawbufs 1) (f32_to_bytes input_data);\n          let input_before = read_f32_buffer (List.nth rawbufs 1) in\n          ignore (Search.beam_search s rawbufs 1 device : P.t);\n          let input_after = read_f32_buffer (List.nth rawbufs 1) in\n          List.iter2\n            (fun before after ->\n              is_true\n                ~msg:\n                  (Printf.sprintf \"input buffer mutated: %.4f -> %.4f\" before\n                     after)\n                (Float.abs (before -. after) < 1e-6))\n            input_before input_after);\n      (* Verify beam_search completes on a kernel with variable-sized range. 
*)\n      slow \"completes on variable-sized kernel\" (fun () ->\n          let device = cpu \"beam-var\" in\n          let n = 16 in\n          let p0 = K.param ~idx:0 ~dtype:(f32_ptr n) in\n          let p1 = K.param ~idx:1 ~dtype:(f32_ptr n) in\n          let var = K.define_var ~name:\"v\" ~lo:1 ~hi:n () in\n          let r0 =\n            K.range ~size:var ~axis:0 ~kind:Ak.Loop ~dtype:D.Val.index ()\n          in\n          let in_idx = K.index ~ptr:p1 ~idxs:[ r0 ] () in\n          let ld = K.load ~src:in_idx () in\n          let value = K.binary ~op:`Add ~lhs:ld ~rhs:ld in\n          let out_idx = K.index ~ptr:p0 ~idxs:[ r0 ] () in\n          let st = K.store ~dst:out_idx ~value ~ranges:[] in\n          let e = K.end_ ~value:st ~ranges:[ r0 ] () in\n          let ki =\n            {\n              K.name = \"test\";\n              axis_kinds = [];\n              dont_use_locals = false;\n              applied_opts = [];\n              opts_to_apply = None;\n              estimates = None;\n            }\n          in\n          let ast = K.sink ~kernel_info:ki [ e ] in\n          let s = P.create ast ren in\n          let rawbufs = create_bufs_for_kernel device ast in\n          let result = Search.beam_search s rawbufs 1 device in\n          ignore (result : P.t));\n      (* Verify disable_cache parameter works: running beam_search twice\n         with disable_cache=true should both complete (no stale cache). 
*)\n      slow \"disable_cache bypasses cache\" (fun () ->\n          let device = cpu \"beam-nocache\" in\n          let n = 16 in\n          let ast = elementwise_1d_ast ~n in\n          let s = P.create ast ren in\n          let rawbufs = create_bufs_for_kernel device ast in\n          let r1 =\n            Search.beam_search ~disable_cache:true s rawbufs 1 device\n          in\n          let r2 =\n            Search.beam_search ~disable_cache:true s rawbufs 1 device\n          in\n          is_true (P.shape_len r1 >= 1);\n          is_true (P.shape_len r2 >= 1));\n    ]\n\n(* Entry *)\n\nlet () = run __FILE__ [ beam_search_tests ]\n"
  },
  {
    "path": "packages/tolk/test/unit/test_schedule_rangeify.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Tests for the schedule rangeify pipeline.\n\n   Covers indexing.ml (core rangeify algorithm) and rangeify.ml (pipeline\n   orchestrator). Tests are organized by responsibility:\n   - is_always_contiguous: op classification\n   - new_range: range creation with size-1 folding\n   - apply_movement_op: range transforms for all 6 movement ops\n   - run_rangeify: backward walk producing range_map/realize_map\n   - get_kernel_graph: full pipeline kernel count tests\n\n   Covers core rangeify correctness and schedule-level fusion decisions. *)\n\nopen Windtrap\nopen Tolk\nmodule C = Tolk_ir.Const\nmodule D = Tolk_ir.Dtype\nmodule T = Tolk_ir.Tensor\nmodule K = Tolk_ir.Kernel\nmodule Ak = Tolk_ir.Axis_kind\n\n(* Extract an int from a Const value, assuming it's Int. *)\nlet const_to_int (v : C.t) : int =\n  match C.view v with Int n -> Int64.to_int n | _ -> failwith \"not Int\"\n\n(* Helpers *)\n\n(* Emit a shape-encoding node from a concrete int list.\n   For 1-D: emits a single Const index.\n   For N-D: emits a Vectorize of Const index nodes. *)\nlet mk_shape (dims : int list) : T.t =\n  let ids = List.map (fun s -> T.const (C.int D.Val.index s) D.index) dims in\n  match ids with\n  | [ d ] -> d\n  | ds ->\n      T.vectorize ~srcs:ds\n\n(* Emit a PARAM with a known shape and CPU device. *)\nlet mk_param ~slot (shape : int list) : T.t =\n  let shape_id = if shape = [] then None else Some (mk_shape shape) in\n  let dev = T.device (Single \"CPU\") in\n  T.param ~slot ~dtype:D.float32 ?shape:shape_id ~device:dev ()\n\n(* Count CALL nodes in a program. 
*)\nlet count_calls (root : T.t) : int =\n  let n = ref 0 in\n  List.iter (fun node ->\n    match T.view node with Call _ -> incr n | _ -> ())\n    (T.toposort root);\n  !n\n\n(* Wrap an expression in CONTIGUOUS -> SINK for get_kernel_graph. *)\nlet wrap_sink b (src : T.t) : T.t =\n  let c = T.contiguous ~src () in\n  T.sink [ c ]\n\n(* Build a program from a builder function and run get_kernel_graph.\n   Returns the kernel graph and CALL count. *)\nlet run_pipeline (build_fn : unit -> T.t) : T.t * int =\n  let _sink = build_fn () in\n  let result = Rangeify.get_kernel_graph (build_fn ()) in\n  (result, count_calls result)\n\n(* is_always_contiguous tests *)\n\nlet dummy = T.const (C.int D.Val.index 0) D.index\nlet dummy2 = T.const (C.int D.Val.index 1) D.index\n\nlet is_always_contiguous_tests =\n  group \"is_always_contiguous\"\n    [\n      test \"contiguous\" (fun () ->\n          let dummy = T.const (C.int D.Val.index 0) D.index in\n          is_true\n            (Indexing.is_always_contiguous\n               (T.Contiguous { src = dummy; ranges = []; opts = []; dtype = D.float32 })));\n      test \"after with store (assign pattern)\" (fun () ->\n          let dummy = T.const (C.int D.Val.index 0) D.index in\n          let dummy2 = T.const (C.int D.Val.index 1) D.index in\n          (* AFTER is a buffer identity — always contiguous *)\n          is_true\n            (Indexing.is_always_contiguous\n               (T.After { src = dummy; deps = [ dummy2 ]; dtype = D.float32 })));\n      test \"copy\" (fun () ->\n          let dummy = T.const (C.int D.Val.index 0) D.index in\n          let dummy2 = T.const (C.int D.Val.index 1) D.index in\n          is_true\n            (Indexing.is_always_contiguous\n               (T.Copy { src = dummy; device = dummy2; dtype = D.float32 })));\n      test \"buffer\" (fun () ->\n          let dummy = T.const (C.int D.Val.index 0) D.index in\n          let dummy2 = T.const (C.int D.Val.index 1) D.index in\n          is_true\n      
      (Indexing.is_always_contiguous\n               (T.Buffer\n                  { unique = dummy; device = dummy2; size = 4; dtype = D.float32 })));\n      test \"const\" (fun () ->\n          is_true\n            (Indexing.is_always_contiguous\n               (T.Const\n                  { value = C.int D.Val.int32 0; dtype = D.int32; srcs = [] })));\n      test \"param\" (fun () ->\n          is_true\n            (Indexing.is_always_contiguous\n               (T.Param\n                  {\n                    slot = 0;\n                    dtype = D.float32;\n                    shape = None;\n                    device = None;\n                  })));\n      test \"call\" (fun () ->\n          is_true\n            (Indexing.is_always_contiguous\n               (T.Call\n                  {\n                    callee = Ast (K.const (C.int D.Val.int32 0));\n                    args = [];\n                    info =\n                      {\n                        grad_fxn = None;\n                        metadata = [];\n                        name = None;\n                        precompile = false;\n                      };\n                    dtype = D.float32;\n                  })));\n      test \"reshape not contiguous\" (fun () ->\n          is_false\n            (Indexing.is_always_contiguous\n               (T.Reshape { src = dummy; shape = dummy2; dtype = D.float32 })));\n      test \"expand not contiguous\" (fun () ->\n          is_false\n            (Indexing.is_always_contiguous\n               (T.Expand { src = dummy; shape = dummy2; dtype = D.float32 })));\n      test \"reduce_axis not contiguous\" (fun () ->\n          is_false\n            (Indexing.is_always_contiguous\n               (T.Reduce_axis\n                  { src = dummy; op = `Add; axes = [ 0 ]; dtype = D.float32 })));\n      test \"unary not contiguous\" (fun () ->\n          is_false\n            (Indexing.is_always_contiguous\n               (T.Unary { op = `Neg; src = dummy; 
dtype = D.float32 })));\n      test \"binary not contiguous\" (fun () ->\n          is_false\n            (Indexing.is_always_contiguous\n               (T.Binary { op = `Add; lhs = dummy; rhs = dummy2; dtype = D.float32 })));\n    ]\n\n(* new_range tests *)\n\nlet new_range_tests =\n  group \"new_range\"\n    [\n      test \"size 1 gives const 0\" (fun () ->\n          \n          let ctx = Indexing.create_context () in\n          let id = Indexing.new_range ctx 1 ~kind:Ak.Loop () in\n          (match T.view id with\n          | Const { value; _ } ->\n              equal int 0 (const_to_int value)\n          | _ -> fail \"expected Const for size 1\"));\n      test \"size 0 gives Range (resolve(s!=1) is true)\" (fun () ->\n          \n          let ctx = Indexing.create_context () in\n          let id = Indexing.new_range ctx 0 ~kind:Ak.Loop () in\n          (match T.view id with\n          | Range { size; axis; kind; _ } ->\n              equal int 0 axis;\n              is_true (kind = Ak.Loop);\n              (match T.view size with\n              | Const { value; _ } ->\n                  equal int 0 (const_to_int value)\n              | _ -> fail \"expected Const for range size\")\n          | _ -> fail \"expected Range for size 0\"));\n      test \"size > 1 gives Range\" (fun () ->\n          \n          let ctx = Indexing.create_context () in\n          let id = Indexing.new_range ctx 4 ~kind:Ak.Loop () in\n          (match T.view id with\n          | Range { size; axis; kind; _ } ->\n              equal int 0 axis;\n              is_true (kind = Ak.Loop);\n              (match T.view size with\n              | Const { value; _ } ->\n                  equal int 4 (const_to_int value)\n              | _ -> fail \"expected Const for range size\")\n          | _ -> fail \"expected Range for size > 1\"));\n      test \"axis increments\" (fun () ->\n          \n          let ctx = Indexing.create_context () in\n          let id1 = Indexing.new_range ctx 4 
~kind:Ak.Loop () in\n          let id2 = Indexing.new_range ctx 8 ~kind:Ak.Loop () in\n          let axis1 =\n            match T.view id1 with\n            | Range { axis; _ } -> axis\n            | _ -> fail \"expected Range\"\n          in\n          let axis2 =\n            match T.view id2 with\n            | Range { axis; _ } -> axis\n            | _ -> fail \"expected Range\"\n          in\n          equal int 0 axis1;\n          equal int 1 axis2);\n      test \"kind propagates\" (fun () ->\n          \n          let ctx = Indexing.create_context () in\n          let id = Indexing.new_range ctx 8 ~kind:Ak.Reduce () in\n          (match T.view id with\n          | Range { kind; _ } -> is_true (kind = Ak.Reduce)\n          | _ -> fail \"expected Range\"));\n    ]\n\n(* apply_movement_op tests *)\n\nlet apply_movement_op_tests =\n  group \"apply_movement_op\"\n    [\n      (* SHRINK *)\n      group \"shrink\"\n        [\n          test \"zero offset passthrough\" (fun () ->\n              let param = mk_param ~slot:0 [ 4; 4 ] in\n              let ctx = Indexing.create_context () in\n              let rng0 = Indexing.new_range ctx 4 ~kind:Ak.Loop () in\n              let rng1 = Indexing.new_range ctx 4 ~kind:Ak.Loop () in\n              let before = mk_shape [ 0; 0 ] in\n              let after = mk_shape [ 4; 4 ] in\n              let shapes = T.compute_shapes param in\n              let v =\n                T.Shrink { src = param; before; after; dtype = D.float32 }\n              in\n              let result =\n                Indexing.apply_movement_op ~shapes v\n                  [ rng0; rng1 ]\n              in\n              (* zero offsets: output ranges should be same ids as input *)\n              equal int (T.tag rng0) (T.tag (List.nth result 0));\n              equal int (T.tag rng1) (T.tag (List.nth result 1)));\n          test \"nonzero offset adds\" (fun () ->\n              let param = mk_param ~slot:0 [ 4; 4 ] in\n              let ctx = 
Indexing.create_context () in\n              let rng0 = Indexing.new_range ctx 4 ~kind:Ak.Loop () in\n              let rng1 = Indexing.new_range ctx 4 ~kind:Ak.Loop () in\n              let before = mk_shape [ 1; 2 ] in\n              let after = mk_shape [ 3; 4 ] in\n              let shapes = T.compute_shapes param in\n              let v =\n                T.Shrink { src = param; before; after; dtype = D.float32 }\n              in\n              let result =\n                Indexing.apply_movement_op ~shapes v\n                  [ rng0; rng1 ]\n              in\n              equal int 2 (List.length result);\n              (match T.view (List.nth result 0) with\n              | Binary { op = `Add; _ } -> ()\n              | _ -> fail \"expected Add for shrink offset 1\");\n              (match T.view (List.nth result 1) with\n              | Binary { op = `Add; _ } -> ()\n              | _ -> fail \"expected Add for shrink offset 2\"));\n        ];\n      (* PERMUTE *)\n      group \"permute\"\n        [\n          test \"swap [1;0]\" (fun () ->\n              let param = mk_param ~slot:0 [ 4; 8 ] in\n              let ctx = Indexing.create_context () in\n              let rng0 = Indexing.new_range ctx 4 ~kind:Ak.Loop () in\n              let rng1 = Indexing.new_range ctx 8 ~kind:Ak.Loop () in\n              let shapes = T.compute_shapes param in\n              let v =\n                T.Permute { src = dummy; order = [ 1; 0 ]; dtype = D.float32 }\n              in\n              let result =\n                Indexing.apply_movement_op ~shapes v\n                  [ rng0; rng1 ]\n              in\n              (* permute [1;0]: argsort = [1;0] → result = [rng1; rng0] *)\n              equal int (T.tag rng1) (T.tag (List.nth result 0));\n              equal int (T.tag rng0) (T.tag (List.nth result 1)));\n          test \"identity [0;1;2]\" (fun () ->\n              let param = mk_param ~slot:0 [ 2; 3; 4 ] in\n              ignore param;\n              let 
ctx = Indexing.create_context () in\n              let rng0 = Indexing.new_range ctx 2 ~kind:Ak.Loop () in\n              let rng1 = Indexing.new_range ctx 3 ~kind:Ak.Loop () in\n              let rng2 = Indexing.new_range ctx 4 ~kind:Ak.Loop () in\n              let shapes = T.compute_shapes param in\n              let v =\n                T.Permute\n                  { src = dummy; order = [ 0; 1; 2 ]; dtype = D.float32 }\n              in\n              let result =\n                Indexing.apply_movement_op ~shapes v\n                  [ rng0; rng1; rng2 ]\n              in\n              equal int (T.tag rng0) (T.tag (List.nth result 0));\n              equal int (T.tag rng1) (T.tag (List.nth result 1));\n              equal int (T.tag rng2) (T.tag (List.nth result 2)));\n        ];\n      (* FLIP *)\n      group \"flip\"\n        [\n          test \"flip true reverses\" (fun () ->\n              let param = mk_param ~slot:0 [ 4 ] in\n              let ctx = Indexing.create_context () in\n              let rng = Indexing.new_range ctx 4 ~kind:Ak.Loop () in\n              let shapes = T.compute_shapes param in\n              let v =\n                T.Flip { src = param; dims = [ true ]; dtype = D.float32 }\n              in\n              let result =\n                Indexing.apply_movement_op ~shapes v [ rng ]\n              in\n              (match T.view (List.nth result 0) with\n              | Binary { op = `Sub; _ } -> ()\n              | _ -> fail \"expected Sub for flip\"));\n          test \"flip false passthrough\" (fun () ->\n              let param = mk_param ~slot:0 [ 4 ] in\n              let ctx = Indexing.create_context () in\n              let rng = Indexing.new_range ctx 4 ~kind:Ak.Loop () in\n              let shapes = T.compute_shapes param in\n              let v =\n                T.Flip { src = param; dims = [ false ]; dtype = D.float32 }\n              in\n              let result =\n                Indexing.apply_movement_op ~shapes 
v [ rng ]\n              in\n              equal int (T.tag rng) (T.tag (List.nth result 0)));\n        ];\n      (* EXPAND *)\n      group \"expand\"\n        [\n          test \"same shape passthrough\" (fun () ->\n              let param = mk_param ~slot:0 [ 4; 4 ] in\n              let ctx = Indexing.create_context () in\n              let rng0 = Indexing.new_range ctx 4 ~kind:Ak.Loop () in\n              let rng1 = Indexing.new_range ctx 4 ~kind:Ak.Loop () in\n              let out_shape = mk_shape [ 4; 4 ] in\n              let shapes = T.compute_shapes param in\n              let v =\n                T.Expand\n                  { src = param; shape = out_shape; dtype = D.float32 }\n              in\n              let result =\n                Indexing.apply_movement_op ~shapes v\n                  [ rng0; rng1 ]\n              in\n              equal int (T.tag rng0) (T.tag (List.nth result 0));\n              equal int (T.tag rng1) (T.tag (List.nth result 1)));\n          test \"broadcast 1->N gives const 0\" (fun () ->\n              let param = mk_param ~slot:0 [ 1; 4 ] in\n              let ctx = Indexing.create_context () in\n              let rng0 = Indexing.new_range ctx 4 ~kind:Ak.Loop () in\n              let rng1 = Indexing.new_range ctx 4 ~kind:Ak.Loop () in\n              let out_shape = mk_shape [ 4; 4 ] in\n              let shapes = T.compute_shapes param in\n              let v =\n                T.Expand\n                  { src = param; shape = out_shape; dtype = D.float32 }\n              in\n              let result =\n                Indexing.apply_movement_op ~shapes v\n                  [ rng0; rng1 ]\n              in\n              (* axis 0: in_shape=1, out_shape=4 -> const 0 *)\n              (match T.view (List.nth result 0) with\n              | Const { value; _ } ->\n                  equal int 0 (const_to_int value)\n              | _ -> fail \"expected Const 0 for expanded dim\");\n              (* axis 1: in_shape=4, 
out_shape=4 -> passthrough *)\n              equal int (T.tag rng1) (T.tag (List.nth result 1)));\n        ];\n      (* PAD *)\n      group \"pad\"\n        [\n          test \"zero pad passthrough\" (fun () ->\n              let param = mk_param ~slot:0 [ 4; 4 ] in\n              let ctx = Indexing.create_context () in\n              let rng0 = Indexing.new_range ctx 4 ~kind:Ak.Loop () in\n              let rng1 = Indexing.new_range ctx 4 ~kind:Ak.Loop () in\n              let before = mk_shape [ 0; 0 ] in\n              let after = mk_shape [ 0; 0 ] in\n              let shapes = T.compute_shapes param in\n              let v =\n                T.Pad { src = param; before; after; dtype = D.float32 }\n              in\n              let result =\n                Indexing.apply_movement_op ~shapes v\n                  [ rng0; rng1 ]\n              in\n              equal int (T.tag rng0) (T.tag (List.nth result 0));\n              equal int (T.tag rng1) (T.tag (List.nth result 1)));\n          test \"nonzero pad creates WHERE\" (fun () ->\n              let param = mk_param ~slot:0 [ 4; 4 ] in\n              let ctx = Indexing.create_context () in\n              let rng0 = Indexing.new_range ctx 6 ~kind:Ak.Loop () in\n              let rng1 = Indexing.new_range ctx 4 ~kind:Ak.Loop () in\n              let before = mk_shape [ 2; 0 ] in\n              let after = mk_shape [ 0; 0 ] in\n              let shapes = T.compute_shapes param in\n              let v =\n                T.Pad { src = param; before; after; dtype = D.float32 }\n              in\n              let result =\n                Indexing.apply_movement_op ~shapes v\n                  [ rng0; rng1 ]\n              in\n              (* axis 0: pad_before=2 -> WHERE(valid, offset, invalid) *)\n              (match T.view (List.nth result 0) with\n              | Ternary { op = `Where; _ } -> ()\n              | _ -> fail \"expected WHERE for padded dim\");\n              (* axis 1: pad_before=0 -> 
passthrough *)\n              equal int (T.tag rng1) (T.tag (List.nth result 1)));\n          test \"end-only pad creates WHERE (F2)\" (fun () ->\n              (* PAD with start=0, end=2 on a dim of size 4:\n                 output range goes 0..5 but valid indices are only 0..3.\n                 Must emit WHERE(r < 4, r, invalid). *)\n              let param = mk_param ~slot:0 [ 4 ] in\n              let ctx = Indexing.create_context () in\n              let rng = Indexing.new_range ctx 6 ~kind:Ak.Loop () in\n              let before = mk_shape [ 0 ] in\n              let after = mk_shape [ 2 ] in\n              let shapes = T.compute_shapes param in\n              let v =\n                T.Pad { src = param; before; after; dtype = D.float32 }\n              in\n              let result =\n                Indexing.apply_movement_op ~shapes v [ rng ]\n              in\n              (* end padding nonzero -> WHERE must be generated *)\n              (match T.view (List.nth result 0) with\n              | Ternary { op = `Where; _ } -> ()\n              | _ -> fail \"expected WHERE for end-only padded dim\"));\n        ];\n      (* RESHAPE *)\n      group \"reshape\"\n        [\n          test \"flatten [2;3] to [6]\" (fun () ->\n              (* apply_movement_op receives output ranges and returns input ranges.\n                 Reshape [2;3] -> [6]: output shape [6], input shape [2;3].\n                 Pass 1 output range, get back 2 input ranges. 
*)\n              let param = mk_param ~slot:0 [ 2; 3 ] in\n              let ctx = Indexing.create_context () in\n              let rng_out = Indexing.new_range ctx 6 ~kind:Ak.Loop () in\n              let new_shape = mk_shape [ 6 ] in\n              let shapes = T.compute_shapes param in\n              let v =\n                T.Reshape\n                  { src = param; shape = new_shape; dtype = D.float32 }\n              in\n              let result =\n                Indexing.apply_movement_op ~shapes v [ rng_out ]\n              in\n              equal int 2 (List.length result));\n          test \"unflatten [6] to [2;3]\" (fun () ->\n              (* Reshape [6] -> [2;3]: output shape [2;3], input shape [6].\n                 Pass 2 output ranges, get back 1 input range. *)\n              let param = mk_param ~slot:0 [ 6 ] in\n              let ctx = Indexing.create_context () in\n              let rng0 = Indexing.new_range ctx 2 ~kind:Ak.Loop () in\n              let rng1 = Indexing.new_range ctx 3 ~kind:Ak.Loop () in\n              let new_shape = mk_shape [ 2; 3 ] in\n              let shapes = T.compute_shapes param in\n              let v =\n                T.Reshape\n                  { src = param; shape = new_shape; dtype = D.float32 }\n              in\n              let result =\n                Indexing.apply_movement_op ~shapes v\n                  [ rng0; rng1 ]\n              in\n              equal int 1 (List.length result));\n          test \"identity [4] to [4]\" (fun () ->\n              let param = mk_param ~slot:0 [ 4 ] in\n              let ctx = Indexing.create_context () in\n              let rng = Indexing.new_range ctx 4 ~kind:Ak.Loop () in\n              let new_shape = mk_shape [ 4 ] in\n              let shapes = T.compute_shapes param in\n              let v =\n                T.Reshape\n                  { src = param; shape = new_shape; dtype = D.float32 }\n              in\n              let result =\n                
Indexing.apply_movement_op ~shapes v [ rng ]\n              in\n              equal int 1 (List.length result));\n        ];\n    ]\n\n(* run_rangeify tests *)\n\nlet run_rangeify_tests =\n  group \"run_rangeify\"\n    [\n      test \"realized node creates Realized\" (fun () ->\n          \n          let param = mk_param ~slot:0 [ 4 ] in\n          let contig = T.contiguous ~src:param () in\n          let _sink = T.sink [ contig ] in\n          let shapes = T.compute_shapes _sink in\n          let ctx = Indexing.run_rangeify _sink ~shapes in\n          (match Hashtbl.find_opt ctx.realize_map (T.tag contig) with\n          | Some (Indexing.Realized axes) ->\n              equal (list int) [ 0 ] axes\n          | Some Indexing.Marked -> fail \"expected Realized, got Marked\"\n          | None -> fail \"expected Realized, got None\"));\n      test \"realized node has range_map entry\" (fun () ->\n          let param = mk_param ~slot:0 [ 4 ] in\n          let contig = T.contiguous ~src:param () in\n          let sink = T.sink [ contig ] in\n          let shapes = T.compute_shapes sink in\n          let ctx = Indexing.run_rangeify sink ~shapes in\n          is_true (Hashtbl.mem ctx.range_map (T.tag contig)));\n      test \"elementwise inherits consumer ranges\" (fun () ->\n          let param = mk_param ~slot:0 [ 4 ] in\n          let neg = T.unary ~op:`Neg ~src:param in\n          let contig = T.contiguous ~src:neg () in\n          let sink = T.sink [ contig ] in\n          let shapes = T.compute_shapes sink in\n          let ctx = Indexing.run_rangeify sink ~shapes in\n          is_true (Hashtbl.mem ctx.range_map (T.tag neg)));\n      test \"reduce creates reduce-kind ranges\" (fun () ->\n          let param = mk_param ~slot:0 [ 4; 4 ] in\n          let red = T.reduce_axis ~src:param ~op:`Add ~axes:[ 1 ] in\n          let contig = T.contiguous ~src:red () in\n          let sink = T.sink [ contig ] in\n          let shapes = T.compute_shapes sink in\n          let ctx 
= Indexing.run_rangeify sink ~shapes in\n          (match Hashtbl.find_opt ctx.range_map (T.tag red) with\n          | Some (in_rngs, _out_rngs) ->\n              equal int 2 (List.length in_rngs);\n              (match T.view (List.nth in_rngs 1) with\n              | Range { kind; _ } -> is_true (kind = Ak.Reduce)\n              | Const _ -> fail \"expected Range for reduce axis, got Const\"\n              | _ -> fail \"expected Range for reduce axis\")\n          | None -> fail \"expected range_map entry for reduce\"));\n      test \"movement op has different in and out ranges\" (fun () ->\n          let param = mk_param ~slot:0 [ 4; 8 ] in\n          let perm = T.permute ~src:param ~order:[ 1; 0 ] in\n          let contig = T.contiguous ~src:perm () in\n          let sink = T.sink [ contig ] in\n          let shapes = T.compute_shapes sink in\n          let ctx = Indexing.run_rangeify sink ~shapes in\n          (match Hashtbl.find_opt ctx.range_map (T.tag perm) with\n          | Some (in_rngs, out_rngs) ->\n              equal int 2 (List.length in_rngs);\n              equal int 2 (List.length out_rngs);\n              is_true (List.nth out_rngs 1 == List.nth in_rngs 0);\n              is_true (List.nth out_rngs 0 == List.nth in_rngs 1)\n          | None -> fail \"expected range_map entry for permute\"));\n      test \"2D realized node has all axes\" (fun () ->\n          \n          let param = mk_param ~slot:0 [ 4; 8 ] in\n          let contig = T.contiguous ~src:param () in\n          let _sink = T.sink [ contig ] in\n          let shapes = T.compute_shapes _sink in\n          let ctx = Indexing.run_rangeify _sink ~shapes in\n          (match Hashtbl.find_opt ctx.realize_map (T.tag contig) with\n          | Some (Indexing.Realized axes) ->\n              equal (list int) [ 0; 1 ] axes\n          | _ -> fail \"expected Realized with [0;1]\"));\n    ]\n\n(* get_kernel_graph integration tests *)\n\n(* Helper to build a pipeline test: build graph, run 
get_kernel_graph,\n   assert CALL count. *)\nlet pipeline_test name ~expected_calls build_fn =\n  test name (fun () ->\n      let _, calls = run_pipeline build_fn in\n      equal int expected_calls calls)\n\nlet get_kernel_graph_tests =\n  group \"get_kernel_graph\"\n    [\n      (* test_basic_binop_fusion *)\n      pipeline_test \"elementwise fusion\" ~expected_calls:1 (fun b ->\n          let a = mk_param ~slot:0 [ 10 ] in\n          let bp = mk_param ~slot:1 [ 10 ] in\n          let c = mk_param ~slot:2 [ 10 ] in\n          let ab = T.binary ~op:`Add ~lhs:a ~rhs:bp in\n          let abc = T.binary ~op:`Add ~lhs:ab ~rhs:c in\n          wrap_sink b abc);\n      (* test_mulacc_fusion *)\n      pipeline_test \"mulacc fusion\" ~expected_calls:1 (fun b ->\n          let a = mk_param ~slot:0 [ 10 ] in\n          let bp = mk_param ~slot:1 [ 10 ] in\n          let mul = T.binary ~op:`Mul ~lhs:a ~rhs:bp in\n          let red =\n            T.reduce_axis ~src:mul ~op:`Add ~axes:[ 0 ]\n          in\n          wrap_sink b red);\n      (* test_binop_reshape_fusion *)\n      pipeline_test \"binop reshape fusion\" ~expected_calls:1 (fun b ->\n          let a = mk_param ~slot:0 [ 10 ] in\n          let bp = mk_param ~slot:1 [ 10 ] in\n          let c = mk_param ~slot:2 [ 5; 2 ] in\n          let ab = T.binary ~op:`Add ~lhs:a ~rhs:bp in\n          let new_shape = mk_shape [ 5; 2 ] in\n          let reshaped = T.reshape ~src:ab ~shape:new_shape in\n          let result = T.binary ~op:`Add ~lhs:reshaped ~rhs:c in\n          wrap_sink b result);\n      (* test_binop_permute_fusion *)\n      pipeline_test \"binop permute fusion\" ~expected_calls:1 (fun b ->\n          let a = mk_param ~slot:0 [ 2; 5 ] in\n          let bp = mk_param ~slot:1 [ 2; 5 ] in\n          let c = mk_param ~slot:2 [ 5; 2 ] in\n          let ab = T.binary ~op:`Add ~lhs:a ~rhs:bp in\n          let permed = T.permute ~src:ab ~order:[ 1; 0 ] in\n          let result = T.binary ~op:`Add ~lhs:permed ~rhs:c in\n      
    wrap_sink b result);\n      (* test_diamond_folded *)\n      pipeline_test \"diamond folded\" ~expected_calls:1 (fun b ->\n          let a = mk_param ~slot:0 [ 10 ] in\n          let bp = mk_param ~slot:1 [ 10 ] in\n          let c = mk_param ~slot:2 [ 10 ] in\n          let d = mk_param ~slot:3 [ 10 ] in\n          let ab = T.binary ~op:`Add ~lhs:a ~rhs:bp in\n          let abc = T.binary ~op:`Add ~lhs:ab ~rhs:c in\n          let abd = T.binary ~op:`Add ~lhs:ab ~rhs:d in\n          let result = T.binary ~op:`Add ~lhs:abc ~rhs:abd in\n          wrap_sink b result);\n      (* test_fold_double_unary *)\n      pipeline_test \"fold double unary\" ~expected_calls:1 (fun b ->\n          let param = mk_param ~slot:0 [ 2 ] in\n          let red =\n            T.reduce_axis ~src:param ~op:`Add ~axes:[ 0 ]\n          in\n          let sq = T.unary ~op:`Sqrt ~src:red in\n          let neg = T.unary ~op:`Neg ~src:sq in\n          wrap_sink b neg);\n      (* test_reduce_reshape_binop_fusion *)\n      pipeline_test \"reduce reshape binop fusion\" ~expected_calls:1\n        (fun b ->\n          let a = mk_param ~slot:0 [ 10; 10 ] in\n          let bp = mk_param ~slot:1 [ 10 ] in\n          let red =\n            T.reduce_axis ~src:a ~op:`Add ~axes:[ 0 ]\n          in\n          let new_shape = mk_shape [ 10 ] in\n          let reshaped = T.reshape ~src:red ~shape:new_shape in\n          let result = T.binary ~op:`Add ~lhs:reshaped ~rhs:bp in\n          wrap_sink b result);\n      (* test_reduce_permute_binop_fusion *)\n      pipeline_test \"reduce permute binop fusion\" ~expected_calls:1\n        (fun b ->\n          let a = mk_param ~slot:0 [ 10; 10; 10 ] in\n          let bp = mk_param ~slot:1 [ 10; 10; 1 ] in\n          let red =\n            T.reduce_axis ~src:a ~op:`Add ~axes:[ 0 ]\n          in\n          let permed = T.permute ~src:red ~order:[ 2; 1; 0 ] in\n          let result = T.binary ~op:`Add ~lhs:permed ~rhs:bp in\n          wrap_sink b result);\n      (* 
test_push_permute_through_reshape *)\n      pipeline_test \"push permute through reshape\" ~expected_calls:1\n        (fun b ->\n          let a = mk_param ~slot:0 [ 16; 16 ] in\n          let bp = mk_param ~slot:1 [ 16; 16 ] in\n          let ab = T.binary ~op:`Add ~lhs:a ~rhs:bp in\n          let s4 = mk_shape [ 4; 4; 4; 4 ] in\n          let reshaped = T.reshape ~src:ab ~shape:s4 in\n          let permed =\n            T.permute ~src:reshaped ~order:[ 2; 3; 0; 1 ]\n          in\n          wrap_sink b permed);\n      (* test_multistage_reduce *)\n      pipeline_test \"multistage reduce\" ~expected_calls:1 (fun b ->\n          let a = mk_param ~slot:0 [ 32; 32; 32 ] in\n          let red1 =\n            T.reduce_axis ~src:a ~op:`Add ~axes:[ 2 ]\n          in\n          let relu =\n            T.binary ~op:`Max ~lhs:red1\n              ~rhs:(T.const (C.float D.Val.float32 0.0) D.float32)\n          in\n          let new_shape = mk_shape [ 32; 32 ] in\n          let reshaped = T.reshape ~src:relu ~shape:new_shape in\n          let red2 =\n            T.reduce_axis ~src:reshaped ~op:`Add ~axes:[ 1 ]\n          in\n          wrap_sink b red2);\n      (* test_children_dont_push:\n         TODO: should be 1 kernel. remove_bufferize correctly identifies the\n         removable bufferize but the substitution (inlining ranges into source)\n         is not yet implemented. 
*)\n      pipeline_test \"children dont push\" ~expected_calls:2 (fun b ->\n          let a = mk_param ~slot:0 [ 10; 10; 1 ] in\n          let bp = mk_param ~slot:1 [ 10; 10; 1 ] in\n          let ab = T.binary ~op:`Add ~lhs:a ~rhs:bp in\n          let exp_shape = mk_shape [ 10; 10; 10 ] in\n          let expanded = T.expand ~src:ab ~shape:exp_shape in\n          let permed = T.permute ~src:ab ~order:[ 2; 1; 0 ] in\n          let result = T.binary ~op:`Add ~lhs:expanded ~rhs:permed in\n          wrap_sink b result);\n      (* test_reduce_permute_nofuse *)\n      pipeline_test \"reduce permute nofuse\" ~expected_calls:1 (fun b ->\n          let x = mk_param ~slot:0 [ 32; 32; 32 ] in\n          let y = mk_param ~slot:1 [ 32; 32 ] in\n          let red =\n            T.reduce_axis ~src:x ~op:`Add ~axes:[ 2 ]\n          in\n          let new_shape = mk_shape [ 32; 32 ] in\n          let reshaped = T.reshape ~src:red ~shape:new_shape in\n          let permed = T.permute ~src:reshaped ~order:[ 1; 0 ] in\n          let result = T.binary ~op:`Add ~lhs:permed ~rhs:y in\n          wrap_sink b result);\n    ]\n\n(* Reshape merge (tested through get_kernel_graph pipeline) *)\n\nlet reshape_merge_tests =\n  group \"reshape merge\"\n    [\n      (* Adjacent reshapes should be merged by earliest_rewrites. Verified\n         indirectly: if they weren't merged, the graph might produce\n         incorrect kernel structure. We test that the pipeline handles\n         Reshape(Reshape(x, s1), s2) without error. 
*)\n      pipeline_test \"reshape chain produces 1 kernel\" ~expected_calls:1\n        (fun b ->\n          let param = mk_param ~slot:0 [ 4; 4 ] in\n          let s1 = mk_shape [ 16 ] in\n          let r1 = T.reshape ~src:param ~shape:s1 in\n          let s2 = mk_shape [ 2; 8 ] in\n          let r2 = T.reshape ~src:r1 ~shape:s2 in\n          let other = mk_param ~slot:1 [ 2; 8 ] in\n          let result = T.binary ~op:`Add ~lhs:r2 ~rhs:other in\n          wrap_sink b result);\n    ]\n\n(* Main *)\n\nlet () =\n  run \"Schedule.Rangeify\"\n    [\n      is_always_contiguous_tests;\n      new_range_tests;\n      apply_movement_op_tests;\n      run_rangeify_tests;\n      get_kernel_graph_tests;\n      reshape_merge_tests;\n    ]\n"
  },
  {
    "path": "packages/vega/README.md",
    "content": "# Vega\n\nComposable gradient-based optimizers for OCaml, inspired by [Optax](https://github.com/google-deepmind/optax)\n\nVega provides typed, per-parameter optimizer primitives that compose via\nchaining. Each primitive is a gradient transformation: it takes updates\n(gradients) and returns modified updates. Primitives are chained to build\ncomplete optimizers, giving you full control over the optimization pipeline\nwhile common recipes are available as one-line aliases.\n\n## Quick Start\n\nMinimize `f(x) = 0.5 * ||x||^2` with Adam:\n\n```ocaml\nopen Vega\n\nlet () =\n  let lr = Schedule.constant 0.01 in\n  let tx = adam lr in\n\n  let param = ref (Nx.create Nx.float32 [| 2 |] [| 5.0; -3.0 |]) in\n  let st = ref (init tx !param) in\n\n  for i = 1 to 50 do\n    (* For f(x) = 0.5 * ||x||^2, the gradient is x *)\n    let p, s = step !st ~grad:!param ~param:!param in\n    param := p;\n    st := s;\n    if i mod 10 = 0 then\n      Printf.printf \"step %2d  x = %s\\n\" i (Nx.data_to_string !param)\n  done\n```\n\n## Features\n\n- **Optimizer aliases**: `adam`, `adamw`, `sgd`, `rmsprop`, `adagrad`, `lamb`, `lion`, `radam`, `lars`, `adan`, `adafactor`\n- **Composable primitives**: `scale_by_adam`, `scale_by_rms`, `trace`, `add_decayed_weights`, `scale_by_trust_ratio`, and more -- combine via `chain`\n- **Learning rate schedules**: `constant`, `cosine_decay`, `warmup_cosine_decay`, `one_cycle`, `cosine_decay_restarts`, `piecewise_constant`, `join`\n- **Gradient clipping**: `clip_by_value`, `clip_by_norm`\n- **Gradient processing**: `centralize`, `add_noise`\n- **Robustness**: `apply_if_finite` skips updates containing NaN/Inf\n- **Serialization**: `state_to_tensors` / `state_of_tensors` for checkpointing\n- **No autodiff dependency**: works with Nx directly\n\n## Examples\n\n- **01-basic-optimizers** -- Minimize a quadratic using SGD, Adam, and AdamW\n- **02-composing-transforms** -- Build custom optimizers from primitives\n- 
**03-learning-rate-schedules** -- Explore warmup, cosine decay, one-cycle, and more\n\n## Contributing\n\nSee the [Raven monorepo README](../../README.md) for guidelines.\n\n## License\n\nISC License. See [LICENSE](../../LICENSE) for details.\n"
  },
  {
    "path": "packages/vega/bench/bench_vega.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nlet lr = Vega.Schedule.constant 1e-3\nlet shapes = [ (\"256\", [| 256; 256 |]); (\"1024\", [| 1024; 1024 |]) ]\n\nlet make_step_bench name tx (label, shape) =\n  let param = Nx.rand Nx.Float32 shape in\n  let grad = Nx.rand Nx.Float32 shape in\n  let state = Vega.init tx param in\n  let state = ref state in\n  Thumper.bench (Printf.sprintf \"%s/%s\" name label) (fun () ->\n      let new_param, new_state = Vega.step !state ~grad ~param in\n      state := new_state;\n      new_param)\n\nlet optimizer_benches name tx = List.map (make_step_bench name tx) shapes\n\nlet build_benchmarks () =\n  [\n    Thumper.group \"SGD\" (optimizer_benches \"SGD\" (Vega.sgd lr));\n    Thumper.group \"SGD+Momentum\"\n      (optimizer_benches \"SGD+Momentum\" (Vega.sgd ~momentum:0.9 lr));\n    Thumper.group \"Adam\" (optimizer_benches \"Adam\" (Vega.adam lr));\n    Thumper.group \"AdamW\" (optimizer_benches \"AdamW\" (Vega.adamw lr));\n    Thumper.group \"RMSprop\" (optimizer_benches \"RMSprop\" (Vega.rmsprop lr));\n    Thumper.group \"Adagrad\" (optimizer_benches \"Adagrad\" (Vega.adagrad lr));\n    Thumper.group \"Lion\" (optimizer_benches \"Lion\" (Vega.lion lr));\n    Thumper.group \"RAdam\" (optimizer_benches \"RAdam\" (Vega.radam lr));\n    Thumper.group \"LAMB\" (optimizer_benches \"LAMB\" (Vega.lamb lr));\n    Thumper.group \"LARS\" (optimizer_benches \"LARS\" (Vega.lars lr));\n    Thumper.group \"Adan\" (optimizer_benches \"Adan\" (Vega.adan lr));\n    Thumper.group \"Adafactor\"\n      (optimizer_benches \"Adafactor\" (Vega.adafactor ()));\n  ]\n\nlet () =\n  let benchmarks = build_benchmarks () in\n  Thumper.run \"vega\" benchmarks\n"
  },
  {
    "path": "packages/vega/bench/dune",
    "content": "(executable\n (name bench_vega)\n (libraries nx vega thumper))\n\n(rule\n (alias runtest)\n (action\n  (progn\n   (run %{exe:bench_vega.exe} -q)\n   (diff? vega.thumper vega.thumper.corrected))))\n"
  },
  {
    "path": "packages/vega/bench/vega.thumper",
    "content": "# thumper baseline\n# version: 1\n# suite_name: vega\n# host: 1480401c3b76ed18\n# cpu: Apple M1 Max\n# ocaml: 5.4.1\n# git: 31747323\n# dirty: true\n# command: /Users/tmattio/Workspace/raven/_build/default/packages/vega/bench/bench_vega.exe --bless --quick\n\nadafactor/adafactor_1024\talloc_words\t7.379000e+03\t7.379000e+03\t7.379000e+03\t0.000000e+00\t6\t1\nadafactor/adafactor_1024\tcpu_time\t3.772890e-02\t3.621944e-02\t3.918347e-02\t3.928066e-02\t6\t0\nadafactor/adafactor_1024\twall_time\t3.376657e-02\t3.273775e-02\t3.474388e-02\t2.970594e-02\t6\t0\nadafactor/adafactor_256\talloc_words\t7.379000e+03\t7.379000e+03\t7.379000e+03\t0.000000e+00\t5\t0\nadafactor/adafactor_256\tcpu_time\t4.248163e-03\t4.204892e-03\t4.300359e-03\t1.123619e-02\t5\t1\nadafactor/adafactor_256\twall_time\t3.413570e-03\t3.372499e-03\t3.460579e-03\t1.290153e-02\t5\t1\nadagrad/adagrad_1024\talloc_words\t1.645000e+03\t1.645000e+03\t1.645000e+03\t0.000000e+00\t5\t1\nadagrad/adagrad_1024\tcpu_time\t1.438640e-02\t1.429114e-02\t1.472547e-02\t1.509513e-02\t5\t0\nadagrad/adagrad_1024\twall_time\t1.174961e-02\t1.169396e-02\t1.193502e-02\t1.025825e-02\t5\t0\nadagrad/adagrad_256\talloc_words\t1.645000e+03\t1.645000e+03\t1.645000e+03\t0.000000e+00\t5\t0\nadagrad/adagrad_256\tcpu_time\t1.815310e-03\t1.759471e-03\t1.866326e-03\t2.943144e-02\t5\t0\nadagrad/adagrad_256\twall_time\t1.553853e-03\t1.531986e-03\t1.572731e-03\t1.311116e-02\t5\t0\nadam/adam_1024\talloc_words\t4.485000e+03\t4.485000e+03\t4.485000e+03\t0.000000e+00\t5\t1\nadam/adam_1024\tcpu_time\t5.117711e-02\t5.071938e-02\t5.166416e-02\t9.230480e-03\t5\t0\nadam/adam_1024\twall_time\t4.540472e-02\t4.514677e-02\t4.591099e-02\t8.415732e-03\t5\t1\nadam/adam_256\talloc_words\t4.485000e+03\t4.485000e+03\t4.485000e+03\t0.000000e+00\t5\t0\nadam/adam_256\tcpu_time\t6.023322e-03\t5.913064e-03\t6.176393e-03\t2.185909e-02\t5\t1\nadam/adam_256\twall_time\t4.790472e-03\t4.753650e-03\t4.845705e-03\t9.608070e-03\t5\t0\nadamw/adamw_1024\talloc_words
\t5.092000e+03\t5.092000e+03\t5.092000e+03\t0.000000e+00\t6\t1\nadamw/adamw_1024\tcpu_time\t5.971130e-02\t5.665413e-02\t6.113653e-02\t3.753400e-02\t6\t0\nadamw/adamw_1024\twall_time\t5.252358e-02\t5.025975e-02\t5.371553e-02\t3.289742e-02\t6\t0\nadamw/adamw_256\talloc_words\t5.092000e+03\t5.092000e+03\t5.092000e+03\t0.000000e+00\t5\t0\nadamw/adamw_256\tcpu_time\t6.340322e-03\t6.305766e-03\t6.382597e-03\t6.058950e-03\t5\t1\nadamw/adamw_256\twall_time\t5.312767e-03\t5.300098e-03\t5.334077e-03\t3.197817e-03\t5\t1\nadan/adan_1024\talloc_words\t6.622000e+03\t6.622000e+03\t6.622000e+03\t0.000000e+00\t5\t1\nadan/adan_1024\tcpu_time\t7.260545e-02\t7.228025e-02\t7.294407e-02\t4.571413e-03\t5\t0\nadan/adan_1024\twall_time\t6.365080e-02\t6.329511e-02\t6.388930e-02\t4.667538e-03\t5\t2\nadan/adan_256\talloc_words\t6.622000e+03\t6.622000e+03\t6.622000e+03\t0.000000e+00\t5\t0\nadan/adan_256\tcpu_time\t9.280633e-03\t9.082946e-03\t9.401430e-03\t1.715849e-02\t5\t0\nadan/adan_256\twall_time\t7.481574e-03\t7.242978e-03\t7.683269e-03\t2.942506e-02\t5\t1\nlamb/lamb_1024\talloc_words\t6.768000e+03\t6.768000e+03\t6.768000e+03\t0.000000e+00\t5\t1\nlamb/lamb_1024\tcpu_time\t6.685995e-02\t6.656900e-02\t6.704892e-02\t3.589018e-03\t5\t2\nlamb/lamb_1024\twall_time\t5.741052e-02\t5.711717e-02\t5.764923e-02\t4.633855e-03\t5\t1\nlamb/lamb_256\talloc_words\t6.768000e+03\t6.768000e+03\t6.768000e+03\t0.000000e+00\t5\t0\nlamb/lamb_256\tcpu_time\t7.480822e-03\t7.276476e-03\t7.572106e-03\t1.975920e-02\t5\t1\nlamb/lamb_256\twall_time\t6.297606e-03\t6.245168e-03\t6.345828e-03\t7.991907e-03\t5\t0\nlars/lars_1024\talloc_words\t3.515000e+03\t3.515000e+03\t3.515000e+03\t0.000000e+00\t5\t1\nlars/lars_1024\tcpu_time\t2.934985e-02\t2.912893e-02\t2.967841e-02\t9.360792e-03\t5\t0\nlars/lars_1024\twall_time\t2.398166e-02\t2.395391e-02\t2.407921e-02\t2.612359e-03\t5\t1\nlars/lars_256\talloc_words\t3.515000e+03\t3.515000e+03\t3.515000e+03\t0.000000e+00\t5\t0\nlars/lars_256\tcpu_time\t3.960421e-03\t3.900252e-03\t4.03798
8e-03\t1.738901e-02\t5\t1\nlars/lars_256\twall_time\t3.010140e-03\t2.972880e-03\t3.067971e-03\t1.579503e-02\t5\t0\nlion/lion_1024\talloc_words\t2.825000e+03\t2.825000e+03\t2.825000e+03\t0.000000e+00\t5\t1\nlion/lion_1024\tcpu_time\t3.114285e-02\t3.090694e-02\t3.127559e-02\t5.918672e-03\t5\t0\nlion/lion_1024\twall_time\t2.746662e-02\t2.734103e-02\t2.755798e-02\t3.949292e-03\t5\t2\nlion/lion_256\talloc_words\t2.825000e+03\t2.825000e+03\t2.825000e+03\t0.000000e+00\t5\t0\nlion/lion_256\tcpu_time\t3.361558e-03\t3.340400e-03\t3.383800e-03\t6.455340e-03\t5\t0\nlion/lion_256\twall_time\t2.890833e-03\t2.881734e-03\t2.903313e-03\t3.732212e-03\t5\t0\nradam/radam_1024\talloc_words\t4.932000e+03\t4.932000e+03\t4.932000e+03\t0.000000e+00\t5\t1\nradam/radam_1024\tcpu_time\t5.562996e-02\t5.516837e-02\t5.577618e-02\t5.462897e-03\t5\t0\nradam/radam_1024\twall_time\t4.963922e-02\t4.905381e-02\t4.981040e-02\t7.620872e-03\t5\t0\nradam/radam_256\talloc_words\t4.932000e+03\t4.932000e+03\t4.932000e+03\t0.000000e+00\t5\t0\nradam/radam_256\tcpu_time\t5.914142e-03\t5.838698e-03\t6.001715e-03\t1.378197e-02\t5\t0\nradam/radam_256\twall_time\t5.084347e-03\t5.042129e-03\t5.133876e-03\t9.022509e-03\t5\t0\nrmsprop/rmsprop_1024\talloc_words\t2.537000e+03\t2.537000e+03\t2.537000e+03\t0.000000e+00\t5\t1\nrmsprop/rmsprop_1024\tcpu_time\t2.691480e-02\t2.649645e-02\t2.743741e-02\t1.748038e-02\t5\t0\nrmsprop/rmsprop_1024\twall_time\t2.328188e-02\t2.297942e-02\t2.372377e-02\t1.598553e-02\t5\t2\nrmsprop/rmsprop_256\talloc_words\t2.537000e+03\t2.537000e+03\t2.537000e+03\t0.000000e+00\t7\t0\nrmsprop/rmsprop_256\tcpu_time\t3.350205e-03\t3.313144e-03\t3.407388e-03\t1.406549e-02\t7\t1\nrmsprop/rmsprop_256\twall_time\t2.769409e-03\t2.750218e-03\t2.785217e-03\t6.318764e-03\t7\t0\nsgd/sgd_1024\talloc_words\t6.190000e+02\t6.190000e+02\t6.190000e+02\t0.000000e+00\t5\t0\nsgd/sgd_1024\tcpu_time\t6.671744e-03\t6.632292e-03\t6.730083e-03\t7.328793e-03\t5\t0\nsgd/sgd_1024\twall_time\t5.857654e-03\t5.828912e-03\t5.904516e-
03\t6.453447e-03\t5\t0\nsgd/sgd_256\talloc_words\t6.190000e+02\t6.190000e+02\t6.190000e+02\t0.000000e+00\t5\t0\nsgd/sgd_256\tcpu_time\t7.354628e-04\t7.167765e-04\t7.553499e-04\t2.622392e-02\t5\t2\nsgd/sgd_256\twall_time\t6.137406e-04\t6.029555e-04\t6.227054e-04\t1.608983e-02\t5\t0\nsgd_momentum/sgd_momentum_1024\talloc_words\t1.232000e+03\t1.232000e+03\t1.232000e+03\t0.000000e+00\t5\t1\nsgd_momentum/sgd_momentum_1024\tcpu_time\t1.328411e-02\t1.311089e-02\t1.340961e-02\t1.124322e-02\t5\t1\nsgd_momentum/sgd_momentum_1024\twall_time\t1.149804e-02\t1.124852e-02\t1.167815e-02\t1.868268e-02\t5\t0\nsgd_momentum/sgd_momentum_256\talloc_words\t1.232000e+03\t1.232000e+03\t1.232000e+03\t0.000000e+00\t5\t0\nsgd_momentum/sgd_momentum_256\tcpu_time\t1.511477e-03\t1.497683e-03\t1.530273e-03\t1.078084e-02\t5\t0\nsgd_momentum/sgd_momentum_256\twall_time\t1.277731e-03\t1.242033e-03\t1.328993e-03\t3.402916e-02\t5\t1\n"
  },
  {
    "path": "packages/vega/doc/01-getting-started.md",
    "content": "# Getting Started\n\nThis guide shows you how to create optimizers, initialize state, and run\noptimization steps.\n\n## Installation\n\n<!-- $MDX skip -->\n```bash\nopam install vega\n```\n\nOr build from source:\n\n<!-- $MDX skip -->\n```bash\ngit clone https://github.com/raven-ml/raven\ncd raven && dune build vega\n```\n\nAdd to your `dune` file:\n\n<!-- $MDX skip -->\n```dune\n(executable\n (name main)\n (libraries vega nx))\n```\n\n## Your First Optimizer\n\nVega optimizers transform gradients into parameter updates. Here we minimize\n`f(x) = 0.5 * ||x||²` (whose gradient is simply `x`) using SGD:\n\n<!-- $MDX skip -->\n```ocaml\nopen Vega\n\nlet () =\n  (* Create an SGD optimizer with learning rate 0.1 *)\n  let lr = Schedule.constant 0.1 in\n  let tx = sgd lr in\n\n  (* Start from x = [5.0; -3.0] *)\n  let param = ref (Nx.create Nx.float32 [| 2 |] [| 5.0; -3.0 |]) in\n\n  (* Initialize optimizer state from the parameter shape *)\n  let st = ref (init tx !param) in\n\n  for i = 1 to 30 do\n    (* step takes state, gradient, and current param;\n       returns (new_param, new_state) *)\n    let p, s = step !st ~grad:!param ~param:!param in\n    param := p;\n    st := s;\n    if i mod 10 = 0 then\n      Printf.printf \"step %2d  x = %s\\n\" i (Nx.data_to_string !param)\n  done\n```\n\nKey points:\n- `Schedule.constant 0.1` creates a fixed learning rate\n- `init tx param` creates optimizer state matching the parameter's shape and dtype\n- `step` returns both the updated parameter and the new optimizer state\n- The optimizer state must be threaded through each step\n\n## Using Adam\n\nReplace `sgd` with `adam` for adaptive learning rates. Adam adjusts the\neffective step size per-parameter using running moment estimates:\n\n<!-- $MDX skip -->\n```ocaml\nlet lr = Vega.Schedule.constant 0.001 in\nlet tx = Vega.adam lr\n```\n\nAdam takes optional parameters `~b1` (default 0.9), `~b2` (default 0.999),\nand `~eps` (default 1e-8). 
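For example, passing them explicitly (this sketch simply spells out the default values listed above):\n\n<!-- $MDX skip -->\n```ocaml\nlet lr = Vega.Schedule.constant 0.001 in\n(* Same behavior as [Vega.adam lr]; override any of these to tune Adam *)\nlet tx = Vega.adam ~b1:0.9 ~b2:0.999 ~eps:1e-8 lr\n```\n\n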
The rest of the training loop is identical — just\nswap the optimizer.\n\n## The Update API\n\n`step` is a convenience that combines two lower-level operations:\n\n<!-- $MDX skip -->\n```ocaml\n(* step = update + apply_updates *)\nlet new_param, new_state = Vega.step state ~grad ~param\n\n(* is equivalent to: *)\nlet updates, new_state = Vega.update state ~grad ~param in\nlet new_param = Vega.apply_updates ~param ~updates\n```\n\nThe two-step API is useful when you need to inspect or modify the raw updates\nbefore applying them (e.g., logging gradient norms, applying custom masks).\n\n## Optimizer Aliases\n\nVega provides ready-to-use aliases that compose primitives internally:\n\n| Alias | Description | Key Parameters |\n|-------|-------------|----------------|\n| `sgd` | Stochastic gradient descent | `~momentum`, `~nesterov` |\n| `adam` | Adam with bias correction | `~b1`, `~b2`, `~eps` |\n| `adamw` | Adam with decoupled weight decay | `~b1`, `~b2`, `~eps`, `~weight_decay` |\n| `rmsprop` | RMSprop | `~decay`, `~eps`, `~momentum` |\n| `adagrad` | Adagrad | `~eps` |\n| `lamb` | LAMB for large-batch training | `~b1`, `~b2`, `~eps`, `~weight_decay` |\n| `lion` | Evolved sign momentum | `~b1`, `~b2` |\n| `radam` | Rectified Adam | `~b1`, `~b2`, `~eps` |\n| `lars` | LARS for large-batch SGD | `~momentum`, `~weight_decay`, `~nesterov` |\n| `adan` | Adan with gradient difference | `~b1`, `~b2`, `~b3`, `~eps`, `~weight_decay` |\n| `adafactor` | Memory-efficient factored moments | `~b2_decay` |\n\nAll aliases take `lr` (a `Schedule.t`) as their last positional argument.\n`adafactor` is the exception — it includes its own learning rate schedule\ninternally.\n\n## Next Steps\n\n- [Composing Transforms](../02-composing-transforms/) — build custom optimizers from primitives\n- [Learning Rate Schedules](../03-schedules/) — decay, warmup, restarts, and composition\n- [Optax Comparison](../04-optax-comparison/) — mapping from Python's Optax to Vega\n"
  },
  {
    "path": "packages/vega/doc/02-composing-transforms.md",
    "content": "# Composing Transforms\n\nVega's core abstraction is the composable gradient transformation. Every\noptimizer — `adam`, `sgd`, `adamw` — is built by chaining small, focused\nprimitives. You can use these same primitives to build custom optimizers.\n\n## How Aliases Work\n\nEach alias is shorthand for `chain`. For example, `adamw` is:\n\n<!-- $MDX skip -->\n```ocaml\nlet adamw ?(b1 = 0.9) ?(b2 = 0.999) ?(eps = 1e-8) ?(weight_decay = 0.01) lr =\n  Vega.chain [\n    Vega.scale_by_adam ~b1 ~b2 ~eps ();\n    Vega.add_decayed_weights ~rate:(Vega.Schedule.constant weight_decay) ();\n    Vega.scale_by_learning_rate lr;\n  ]\n```\n\nThe gradient flows through each primitive in order:\n1. `scale_by_adam` — normalize by bias-corrected first and second moment estimates\n2. `add_decayed_weights` — add `weight_decay * param` to the updates\n3. `scale_by_learning_rate` — multiply by `-lr` for gradient descent\n\n## Building Custom Optimizers\n\nSince `chain` accepts any list of primitives, you can mix and match freely.\n\n### Adding Gradient Clipping\n\nPrepend a clipping transform to any optimizer:\n\n<!-- $MDX skip -->\n```ocaml\n(* Clip gradient L2 norm before Adam *)\nlet tx =\n  Vega.chain [\n    Vega.clip_by_norm 1.0;\n    Vega.adam (Vega.Schedule.constant 1e-3);\n  ]\n\n(* Or clip element-wise *)\nlet tx =\n  Vega.chain [\n    Vega.clip_by_value 0.5;\n    Vega.adam (Vega.Schedule.constant 1e-3);\n  ]\n```\n\n### Centralized Adam with Weight Decay\n\nCombine gradient centralization, Adam, weight decay, and a schedule:\n\n<!-- $MDX skip -->\n```ocaml\nlet lr =\n  Vega.Schedule.warmup_cosine_decay\n    ~init_value:0.0 ~peak_value:1e-3\n    ~warmup_steps:1000 ~decay_steps:9000 ()\nin\nlet tx =\n  Vega.chain [\n    Vega.centralize;\n    Vega.scale_by_adam ();\n    Vega.add_decayed_weights ~rate:(Vega.Schedule.constant 0.01) ();\n    Vega.scale_by_learning_rate lr;\n  ]\n```\n\n### LAMB from Primitives\n\nLAMB adds a trust ratio on top of Adam with weight 
decay:\n\n<!-- $MDX skip -->\n```ocaml\nlet tx =\n  Vega.chain [\n    Vega.scale_by_adam ();\n    Vega.add_decayed_weights ~rate:(Vega.Schedule.constant 0.01) ();\n    Vega.scale_by_trust_ratio ();\n    Vega.scale_by_learning_rate lr;\n  ]\n```\n\n## Primitives Reference\n\n### Scaling\n\n| Primitive | Description | State |\n|-----------|-------------|-------|\n| `scale s` | Multiply updates by constant `s` | 0 tensors |\n| `scale_by_schedule f` | Multiply updates by `f step` | 0 tensors |\n| `scale_by_learning_rate lr` | Multiply by `-lr step` (negates for descent) | 0 tensors |\n\n### Adaptive Scaling\n\n| Primitive | Description | State |\n|-----------|-------------|-------|\n| `scale_by_adam` | Bias-corrected 1st/2nd moments (Adam core) | 2-3 tensors |\n| `scale_by_rms` | Inverse RMS of past gradients (RMSprop core) | 1 tensor |\n| `scale_by_adagrad` | Inverse root of accumulated squared gradients | 1 tensor |\n| `scale_by_lion` | Sign-based updates with dual momentum | 1 tensor |\n| `scale_by_radam` | Rectified Adam (adaptive vs momentum switching) | 2 tensors |\n| `scale_by_trust_ratio` | LAMB/LARS trust ratio `\\|\\|param\\|\\| / \\|\\|updates\\|\\|` | 0 tensors |\n| `scale_by_adafactor` | Factored 2nd moments for memory efficiency | 2 tensors |\n| `scale_by_adan` | Adan with gradient difference momentum | 4 tensors |\n\n### Accumulation\n\n| Primitive | Description | State |\n|-----------|-------------|-------|\n| `trace` | Momentum (EMA of updates), optional Nesterov | 1 tensor |\n\n### Regularization\n\n| Primitive | Description | State |\n|-----------|-------------|-------|\n| `add_decayed_weights` | Add `rate * param` (decoupled weight decay) | 0 tensors |\n\n### Clipping\n\n| Primitive | Description | State |\n|-----------|-------------|-------|\n| `clip_by_value delta` | Clamp to `[-delta, +delta]` | 0 tensors |\n| `clip_by_norm max_norm` | Rescale if L2 norm exceeds `max_norm` | 0 tensors |\n\n### Gradient Processing\n\n| Primitive | Description | 
State |\n|-----------|-------------|-------|\n| `centralize` | Subtract mean (all axes except first for 2D+) | 0 tensors |\n| `add_noise` | Gaussian noise with annealing schedule | 0 tensors |\n\n### Robustness\n\n| Primitive | Description | State |\n|-----------|-------------|-------|\n| `apply_if_finite tx` | Skip updates containing NaN/Inf | inner + 1 tensor |\n\n## Chain Associativity\n\n`chain` is associative — nesting chains produces the same optimizer:\n\n<!-- $MDX skip -->\n```ocaml\n(* These are equivalent: *)\nlet tx1 = Vega.chain [a; b; c]\nlet tx2 = Vega.chain [Vega.chain [a; b]; c]\nlet tx3 = Vega.chain [a; Vega.chain [b; c]]\n```\n\nThis means you can build reusable sub-chains and compose them freely.\n\n## Serialization\n\nSave and restore optimizer state for checkpointing:\n\n<!-- $MDX skip -->\n```ocaml\n(* Save *)\nlet count, tensors = Vega.state_to_tensors state in\n(* ... persist count and tensors to disk ... *)\n\n(* Restore *)\nlet state = Vega.state_of_tensors tx ~count tensors\n```\n\n`n_tensors tx` returns the total number of state tensors, useful for\npre-allocating storage.\n\n## Next Steps\n\n- [Learning Rate Schedules](../03-schedules/) — decay, warmup, restarts, and composition\n- [Getting Started](../01-getting-started/) — basic usage and optimizer aliases\n- [Optax Comparison](../04-optax-comparison/) — mapping from Python's Optax to Vega\n"
  },
  {
    "path": "packages/vega/doc/03-schedules.md",
    "content": "# Learning Rate Schedules\n\nA learning rate schedule controls how the learning rate changes over the\ncourse of training. In Vega, a schedule is simply a function from step\nnumber to learning rate.\n\n## How Schedules Work\n\n`Schedule.t` is `int -> float`. Given a 1-based step number, it returns\nthe learning rate for that step:\n\n<!-- $MDX skip -->\n```ocaml\nlet lr = Vega.Schedule.constant 0.001 in\nPrintf.printf \"step 1:   %f\\n\" (lr 1);    (* 0.001 *)\nPrintf.printf \"step 100: %f\\n\" (lr 100)   (* 0.001 *)\n```\n\nSchedules plug into optimizers as the last positional argument:\n\n<!-- $MDX skip -->\n```ocaml\nlet tx = Vega.adam lr\n```\n\nOr directly as a primitive:\n\n<!-- $MDX skip -->\n```ocaml\nlet tx =\n  Vega.chain [\n    Vega.scale_by_adam ();\n    Vega.scale_by_learning_rate lr;\n  ]\n```\n\n## Basic Schedules\n\n### constant\n\nA fixed learning rate:\n\n<!-- $MDX skip -->\n```ocaml\nVega.Schedule.constant 0.001\n```\n\n### linear\n\nLinear interpolation from `init_value` to `end_value` over `steps`. Clamps\nto `end_value` after:\n\n<!-- $MDX skip -->\n```ocaml\nVega.Schedule.linear ~init_value:0.0 ~end_value:0.001 ~steps:1000\n(* step 1: ~0.0, step 500: ~0.0005, step 1000: 0.001, step 2000: 0.001 *)\n```\n\n## Decay Schedules\n\n### cosine_decay\n\nCosine annealing from `init_value` to `alpha * init_value` over `decay_steps`:\n\n<!-- $MDX skip -->\n```ocaml\nVega.Schedule.cosine_decay ~init_value:0.01 ~decay_steps:10000 ()\n(* Decays from 0.01 to 0.0 following a cosine curve *)\n\n(* With a minimum floor *)\nVega.Schedule.cosine_decay ~init_value:0.01 ~decay_steps:10000 ~alpha:0.001 ()\n(* Decays from 0.01 to 0.00001 *)\n```\n\n### exponential_decay\n\nMultiply by `decay_rate` every `decay_steps`:\n\n<!-- $MDX skip -->\n```ocaml\nVega.Schedule.exponential_decay ~init_value:0.01 ~decay_rate:0.96 ~decay_steps:1000\n(* lr = 0.01 * 0.96^(step/1000) *)\n```\n\n### polynomial_decay\n\nPolynomial decay from `init_value` to `end_value`. 
`power` defaults to 1.0\n(linear). Clamps to `end_value` after `decay_steps`:\n\n<!-- $MDX skip -->\n```ocaml\n(* Linear decay (power=1) *)\nVega.Schedule.polynomial_decay ~init_value:0.01 ~end_value:0.0 ~decay_steps:10000 ()\n\n(* Quadratic decay (power=2) — decays faster initially *)\nVega.Schedule.polynomial_decay ~init_value:0.01 ~end_value:0.0 ~decay_steps:10000\n  ~power:2.0 ()\n```\n\n## Warmup Schedules\n\n### warmup_cosine\n\nCosine warmup from `init_value` to `peak_value` over `warmup_steps`. Clamps\nto `peak_value` after:\n\n<!-- $MDX skip -->\n```ocaml\nVega.Schedule.warmup_cosine ~init_value:0.0 ~peak_value:0.001 ~warmup_steps:1000\n```\n\n### warmup_cosine_decay\n\nThe most common schedule for transformer training: linear warmup followed\nby cosine decay:\n\n<!-- $MDX skip -->\n```ocaml\nVega.Schedule.warmup_cosine_decay\n  ~init_value:0.0       (* start from 0 *)\n  ~peak_value:0.001     (* warm up to 0.001 *)\n  ~warmup_steps:1000    (* over 1000 steps *)\n  ~decay_steps:9000     (* then decay over 9000 steps *)\n  ~end_value:0.0        (* down to 0 *)\n  ()\n```\n\n## Warm Restarts\n\n### cosine_decay_restarts\n\nSGDR: cosine decay that periodically resets to the initial value. After each\nrestart, the period is multiplied by `t_mul` and the peak by `m_mul`:\n\n<!-- $MDX skip -->\n```ocaml\n(* Fixed-period restarts *)\nVega.Schedule.cosine_decay_restarts ~init_value:0.01 ~decay_steps:1000 ()\n\n(* Increasing period: 1000, 2000, 4000, ... *)\nVega.Schedule.cosine_decay_restarts ~init_value:0.01 ~decay_steps:1000\n  ~t_mul:2.0 ()\n\n(* Decreasing peak: 0.01, 0.005, 0.0025, ... 
*)\nVega.Schedule.cosine_decay_restarts ~init_value:0.01 ~decay_steps:1000\n  ~m_mul:0.5 ()\n```\n\n### one_cycle\n\nThe 1cycle policy: linear warmup from `max_value / div_factor` to `max_value`,\nthen cosine decay to `max_value / final_div_factor`:\n\n<!-- $MDX skip -->\n```ocaml\nVega.Schedule.one_cycle ~max_value:0.01 ~total_steps:10000 ()\n\n(* Custom phase split: 40% warmup *)\nVega.Schedule.one_cycle ~max_value:0.01 ~total_steps:10000\n  ~pct_start:0.4 ()\n```\n\n## Composition\n\n### piecewise_constant\n\nA step function. `values` has one more element than `boundaries`:\n\n<!-- $MDX skip -->\n```ocaml\nVega.Schedule.piecewise_constant\n  ~boundaries:[1000; 5000]\n  ~values:[0.01; 0.001; 0.0001]\n(* steps 1–1000: 0.01, steps 1001–5000: 0.001, steps 5001+: 0.0001 *)\n```\n\n### join\n\nSequence multiple schedules end-to-end. Each `(n, schedule)` pair runs\n`schedule` for `n` steps. Step numbers restart from 1 within each segment:\n\n<!-- $MDX skip -->\n```ocaml\nVega.Schedule.join [\n  (1000, Vega.Schedule.linear ~init_value:0.0 ~end_value:0.001 ~steps:1000);\n  (9000, Vega.Schedule.cosine_decay ~init_value:0.001 ~decay_steps:9000 ());\n]\n```\n\n### Custom Schedules\n\nSince `Schedule.t` is just `int -> float`, you can write arbitrary functions:\n\n<!-- $MDX skip -->\n```ocaml\n(* Step decay: halve every 1000 steps *)\nlet step_decay : Vega.Schedule.t = fun step ->\n  0.01 *. 
(0.5 ** float_of_int (step / 1000))\n```\n\n## Using Schedules with Optimizers\n\nSchedules are passed to optimizer aliases as the last positional argument:\n\n<!-- $MDX skip -->\n```ocaml\nlet lr =\n  Vega.Schedule.warmup_cosine_decay\n    ~init_value:0.0 ~peak_value:1e-3\n    ~warmup_steps:1000 ~decay_steps:9000 ()\nin\nlet tx = Vega.adamw ~weight_decay:0.01 lr\n```\n\nWhen building from primitives, pass the schedule to `scale_by_learning_rate`:\n\n<!-- $MDX skip -->\n```ocaml\nlet tx =\n  Vega.chain [\n    Vega.scale_by_adam ();\n    Vega.scale_by_learning_rate lr;\n  ]\n```\n\nOther primitives accept schedules too. For instance, `add_decayed_weights`\ntakes a `~rate` schedule for dynamic weight decay:\n\n<!-- $MDX skip -->\n```ocaml\nVega.add_decayed_weights\n  ~rate:(Vega.Schedule.cosine_decay ~init_value:0.01 ~decay_steps:10000 ())\n  ()\n```\n\n## Next Steps\n\n- [Composing Transforms](../02-composing-transforms/) — building custom optimizers from primitives\n- [Getting Started](../01-getting-started/) — basic usage and optimizer aliases\n- [Optax Comparison](../04-optax-comparison/) — mapping from Python's Optax to Vega\n"
  },
  {
    "path": "packages/vega/doc/04-optax-comparison.md",
    "content": "# Optax Comparison\n\nThis page maps [Optax](https://github.com/google-deepmind/optax) concepts\nand API to their Vega equivalents. Both libraries share the same core idea:\noptimizers are composable gradient transformations.\n\n## Creating Optimizers\n\n| Optax (Python) | Vega (OCaml) |\n|----------------|--------------|\n| `optax.sgd(0.1)` | `Vega.sgd (Schedule.constant 0.1)` |\n| `optax.sgd(0.1, momentum=0.9)` | `Vega.sgd ~momentum:0.9 (Schedule.constant 0.1)` |\n| `optax.adam(1e-3)` | `Vega.adam (Schedule.constant 1e-3)` |\n| `optax.adamw(1e-3, weight_decay=0.01)` | `Vega.adamw ~weight_decay:0.01 (Schedule.constant 1e-3)` |\n| `optax.rmsprop(1e-3)` | `Vega.rmsprop (Schedule.constant 1e-3)` |\n| `optax.adagrad(0.01)` | `Vega.adagrad (Schedule.constant 0.01)` |\n| `optax.lamb(1e-3)` | `Vega.lamb (Schedule.constant 1e-3)` |\n| `optax.lion(1e-4)` | `Vega.lion (Schedule.constant 1e-4)` |\n| `optax.radam(1e-3)` | `Vega.radam (Schedule.constant 1e-3)` |\n| `optax.adafactor()` | `Vega.adafactor ()` |\n\n## Init and Update\n\n**Optax:**\n\n```python\nimport optax\n\ntx = optax.adam(1e-3)\nstate = tx.init(params)\nupdates, state = tx.update(grads, state, params)\nparams = optax.apply_updates(params, updates)\n```\n\n**Vega:**\n\n<!-- $MDX skip -->\n```ocaml\nlet tx = Vega.adam (Vega.Schedule.constant 1e-3) in\nlet state = Vega.init tx param in\nlet updates, state = Vega.update state ~grad ~param in\nlet param = Vega.apply_updates ~param ~updates\n\n(* Or use the convenience function: *)\nlet param, state = Vega.step state ~grad ~param\n```\n\nThe key difference: Optax passes `(grads, state, params)` to `tx.update`,\nwhile Vega passes `state ~grad ~param` — the optimizer is baked into the\nstate at `init` time.\n\n## Chaining Transforms\n\n**Optax:**\n\n```python\ntx = optax.chain(\n    optax.clip_by_global_norm(1.0),\n    optax.scale_by_adam(),\n    optax.add_decayed_weights(0.01),\n    optax.scale_by_learning_rate(1e-3),\n)\n```\n\n**Vega:**\n\n<!-- $MDX 
skip -->\n```ocaml\nlet tx =\n  Vega.chain [\n    Vega.clip_by_norm 1.0;\n    Vega.scale_by_adam ();\n    Vega.add_decayed_weights ~rate:(Vega.Schedule.constant 0.01) ();\n    Vega.scale_by_learning_rate (Vega.Schedule.constant 1e-3);\n  ]\n```\n\n## Primitives\n\n| Optax | Vega | Notes |\n|-------|------|-------|\n| `scale(s)` | `scale s` | |\n| `scale_by_adam()` | `scale_by_adam ()` | Supports `~nesterov`, `~amsgrad` |\n| `scale_by_rms()` | `scale_by_rms ()` | |\n| `scale_by_lion()` | `scale_by_lion ()` | |\n| `scale_by_radam()` | `scale_by_radam ()` | |\n| `scale_by_trust_ratio()` | `scale_by_trust_ratio ()` | |\n| `scale_by_factored_rms()` | `scale_by_adafactor ()` | Different name |\n| `trace(decay)` | `trace ~decay ()` | |\n| `add_decayed_weights(wd)` | `add_decayed_weights ~rate:(Schedule.constant wd) ()` | Vega uses a schedule |\n| `clip_by_global_norm(max)` | `clip_by_norm max` | Per-tensor, not global |\n| `clip(delta)` | `clip_by_value delta` | |\n| `centralize()` | `centralize` | Value, not function |\n| `add_noise(eta, gamma)` | `add_noise ~eta ~gamma ()` | `eta` is a schedule in Vega |\n| `apply_if_finite(tx)` | `apply_if_finite tx` | |\n| `scale_by_learning_rate(lr)` | `scale_by_learning_rate (Schedule.constant lr)` | Vega uses a schedule |\n| `scale_by_schedule(fn)` | `scale_by_schedule fn` | |\n\n## Schedules\n\n| Optax | Vega |\n|-------|------|\n| `constant_schedule(lr)` | `Schedule.constant lr` |\n| `linear_schedule(init, end, steps)` | `Schedule.linear ~init_value ~end_value ~steps` |\n| `cosine_decay_schedule(init, steps)` | `Schedule.cosine_decay ~init_value ~decay_steps ()` |\n| `exponential_decay(init, steps, rate)` | `Schedule.exponential_decay ~init_value ~decay_rate ~decay_steps` |\n| `polynomial_schedule(init, end, power, steps)` | `Schedule.polynomial_decay ~init_value ~end_value ~decay_steps ~power ()` |\n| `warmup_cosine_decay_schedule(...)` | `Schedule.warmup_cosine_decay ~init_value ~peak_value ~warmup_steps ~decay_steps ()` |\n| 
`sgdr_schedule(...)` | `Schedule.cosine_decay_restarts ~init_value ~decay_steps ()` |\n| `piecewise_constant_schedule(...)` | `Schedule.piecewise_constant ~boundaries ~values` |\n| `join_schedules(...)` | `Schedule.join segments` |\n\n## Key Differences\n\n| Aspect | Optax | Vega |\n|--------|-------|------|\n| Language | Python/JAX | OCaml/Nx |\n| State type | PyTree of arrays | Typed `('a, 'b) state` |\n| Learning rate | Float or schedule | Always `Schedule.t` (`int -> float`) |\n| Weight decay rate | Float | `Schedule.t` (dynamic decay) |\n| Noise eta | Float | `Schedule.t` (dynamic noise) |\n| Gradient clipping | Global norm across all params | Per-tensor norm |\n| Parameter trees | Built-in (JAX pytrees) | Handled by Kaun's `Ptree.t` |\n| `centralize` | Function call `centralize()` | Value `centralize` (no arguments) |\n"
  },
  {
    "path": "packages/vega/doc/dune",
    "content": "(mdx\n (files *.md)\n (package vega)\n (libraries vega nx))\n"
  },
  {
    "path": "packages/vega/doc/index.md",
    "content": "# Vega\n\nVega provides composable gradient-based optimizers for OCaml. Each optimizer is built from small, typed gradient transformations that compose via `chain`. The library depends only on Nx — no autodiff framework is required.\n\n## Features\n\n- **Optimizer aliases** — `adam`, `adamw`, `sgd`, `rmsprop`, `adagrad`, `lamb`, `lion`, `radam`, `lars`, `adan`, `adafactor`\n- **Composable primitives** — `scale_by_adam`, `trace`, `add_decayed_weights`, `clip_by_norm`, and more, combined via `chain`\n- **Learning rate schedules** — `constant`, `cosine_decay`, `warmup_cosine_decay`, `one_cycle`, `piecewise_constant`, `join`\n- **Gradient processing** — clipping, centralization, noise injection\n- **Robustness** — `apply_if_finite` skips NaN/Inf updates automatically\n- **Serialization** — `state_to_tensors` / `state_of_tensors` for checkpointing\n\n## Quick Start\n\n<!-- $MDX skip -->\n```ocaml\nopen Vega\n\nlet () =\n  let lr = Schedule.constant 0.01 in\n  let tx = adam lr in\n\n  let param = ref (Nx.create Nx.float32 [| 2 |] [| 5.0; -3.0 |]) in\n  let st = ref (init tx !param) in\n\n  for i = 1 to 100 do\n    (* For f(x) = 0.5 * ||x||², the gradient is x *)\n    let p, s = step !st ~grad:!param ~param:!param in\n    param := p;\n    st := s;\n    if i mod 25 = 0 then\n      Printf.printf \"step %3d  x = %s\\n\" i (Nx.data_to_string !param)\n  done\n```\n\n## Next Steps\n\n- [Getting Started](01-getting-started/) — installation, first optimizer, the step/update API\n- [Composing Transforms](02-composing-transforms/) — building custom optimizers from primitives\n- [Learning Rate Schedules](03-schedules/) — decay, warmup, restarts, and composition\n- [Optax Comparison](04-optax-comparison/) — mapping from Python's Optax to Vega\n"
  },
  {
    "path": "packages/vega/examples/01-basic-optimizers/README.md",
    "content": "# `01-basic-optimizers`\n\nYour first optimizer. This example minimizes `f(x) = 0.5 * ||x||²` from a\nstarting point using SGD, Adam, and AdamW to compare convergence behavior.\n\n```bash\ndune exec packages/vega/examples/01-basic-optimizers/main.exe\n```\n\n## What You'll Learn\n\n- Creating optimizers with `Vega.sgd`, `Vega.adam`, `Vega.adamw`\n- Setting a constant learning rate with `Vega.Schedule.constant`\n- Initializing per-parameter state with `Vega.init`\n- Running optimization steps with `Vega.step`\n- How different optimizers converge at different rates\n\n## Key Functions\n\n| Function            | Purpose                                         |\n| ------------------- | ----------------------------------------------- |\n| `Schedule.constant` | Create a fixed learning rate                    |\n| `sgd`               | Stochastic gradient descent with optional momentum |\n| `adam`               | Adam with bias-corrected moment estimates       |\n| `adamw`             | Adam with decoupled weight decay                |\n| `init`              | Create optimizer state matching a parameter     |\n| `step`              | Apply one optimization step, return new param and state |\n\n## How It Works\n\nFor `f(x) = 0.5 * ||x||²`, the gradient is simply `x`. Each optimizer starts\nfrom `x = [5.0; -3.0]` and runs 50 steps toward the minimum at `[0; 0]`:\n\n- **SGD** with `lr=0.1` converges fastest on this simple problem\n- **Adam** with `lr=0.01` uses adaptive per-coordinate learning rates\n- **AdamW** adds weight decay, which also helps push parameters toward zero\n\n## Try It\n\n1. Increase the learning rate for Adam and observe the effect on convergence.\n2. Add momentum to SGD with `~momentum:0.9` and compare.\n3. Try `Vega.lion` or `Vega.radam` as alternative optimizers.\n\n## Next Steps\n\nContinue to [02-composing-transforms](../02-composing-transforms/) to learn\nhow to build custom optimizers from primitives.\n"
  },
  {
    "path": "packages/vega/examples/01-basic-optimizers/dune",
    "content": "(executable\n (name main)\n (libraries vega nx))\n"
  },
  {
    "path": "packages/vega/examples/01-basic-optimizers/main.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Minimize f(x) = 0.5 * ||x||^2 using different optimizers.\n\n   The gradient is simply x, so this is a clean testbed for comparing\n   convergence behavior. Each optimizer starts from the same point x = [5.0;\n   -3.0] and runs 50 steps. *)\n\nlet dt = Nx.float32\nlet x0 () = Nx.create dt [| 2 |] [| 5.0; -3.0 |]\n\nlet run name tx =\n  Printf.printf \"--- %s ---\\n\" name;\n  let param = ref (x0 ()) in\n  let st = ref (Vega.init tx !param) in\n  for i = 1 to 50 do\n    let p, s = Vega.step !st ~grad:!param ~param:!param in\n    param := p;\n    st := s;\n    if i mod 10 = 0 then\n      Printf.printf \"  step %2d  x = %s\\n\" i (Nx.data_to_string !param)\n  done;\n  Printf.printf \"\\n\"\n\nlet () =\n  let lr = Vega.Schedule.constant 0.1 in\n  run \"SGD (lr=0.1)\" (Vega.sgd lr);\n\n  let lr = Vega.Schedule.constant 0.01 in\n  run \"Adam (lr=0.01)\" (Vega.adam lr);\n\n  let lr = Vega.Schedule.constant 0.01 in\n  run \"AdamW (lr=0.01, wd=0.01)\" (Vega.adamw ~weight_decay:0.01 lr)\n"
  },
  {
    "path": "packages/vega/examples/02-composing-transforms/README.md",
    "content": "# `02-composing-transforms`\n\nBuild custom optimizers by composing gradient transformation primitives.\nShows that optimizer aliases like `adamw` are just shorthand for `chain`.\n\n```bash\ndune exec packages/vega/examples/02-composing-transforms/main.exe\n```\n\n## What You'll Learn\n\n- Recreating `adamw` from primitives using `Vega.chain`\n- Adding gradient clipping to any optimizer\n- That `chain` is associative (nesting doesn't change behavior)\n- Using `Vega.update` + `Vega.apply_updates` for explicit two-step control\n\n## Key Functions\n\n| Function               | Purpose                                          |\n| ---------------------- | ------------------------------------------------ |\n| `chain`                | Compose gradient transformations sequentially    |\n| `scale_by_adam`        | Adam's bias-corrected moment scaling             |\n| `add_decayed_weights`  | Decoupled weight decay (add `rate * param`)      |\n| `scale_by_learning_rate` | Multiply by `-lr` for gradient descent         |\n| `clip_by_norm`         | Rescale updates if L2 norm exceeds a threshold   |\n| `clip_by_value`        | Clamp updates element-wise to `[-delta, +delta]` |\n| `update`               | Compute raw updates without applying them        |\n| `apply_updates`        | Add updates to parameters                        |\n\n## How Composition Works\n\nGradient transformations are chained left to right. The gradient flows through\neach primitive in order:\n\n```\ngrad → [clip_by_norm] → [scale_by_adam] → [add_decayed_weights] → [scale_by_learning_rate] → updates\n```\n\nSince `chain` is associative, you can build reusable sub-chains:\n\n```ocaml\nlet adaptive = Vega.chain [Vega.scale_by_adam (); Vega.add_decayed_weights ...] in\nlet tx = Vega.chain [Vega.clip_by_norm 1.0; adaptive; Vega.scale_by_learning_rate lr]\n```\n\n## Try It\n\n1. Add `Vega.centralize` at the beginning of the chain and observe the effect.\n2. 
Move `clip_by_norm` after `scale_by_adam` instead of before — does it matter?\n3. Try wrapping the chain with `Vega.apply_if_finite` for NaN protection.\n\n## Next Steps\n\nContinue to [03-learning-rate-schedules](../03-learning-rate-schedules/) to\nlearn about warmup, cosine decay, and schedule composition.\n"
  },
  {
    "path": "packages/vega/examples/02-composing-transforms/dune",
    "content": "(executable\n (name main)\n (libraries vega nx))\n"
  },
  {
    "path": "packages/vega/examples/02-composing-transforms/main.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Build custom optimizers by composing gradient transformation primitives.\n\n   Vega's core abstraction is the composable gradient transformation. Optimizer\n   aliases like [adam] are just shorthand for [chain]. This example shows how\n   to:\n\n   1. Recreate AdamW from primitives 2. Add gradient clipping to any optimizer\n   3. Use update + apply_updates for explicit two-step control *)\n\nlet dt = Nx.float32\nlet x0 () = Nx.create dt [| 2 |] [| 5.0; -3.0 |]\n\nlet run name tx steps =\n  let param = ref (x0 ()) in\n  let st = ref (Vega.init tx !param) in\n  for _ = 1 to steps do\n    let p, s = Vega.step !st ~grad:!param ~param:!param in\n    param := p;\n    st := s\n  done;\n  Printf.printf \"  %-40s x = %s\\n\" name (Nx.data_to_string !param)\n\nlet () =\n  let lr = Vega.Schedule.constant 0.01 in\n\n  (* 1. AdamW is just a chain of primitives *)\n  Printf.printf \"--- AdamW: alias vs primitives (50 steps) ---\\n\";\n\n  run \"adamw (alias)\" (Vega.adamw ~weight_decay:0.01 lr) 50;\n\n  run \"chain [adam; decay; lr] (manual)\"\n    (Vega.chain\n       [\n         Vega.scale_by_adam ();\n         Vega.add_decayed_weights ~rate:(Vega.Schedule.constant 0.01) ();\n         Vega.scale_by_learning_rate lr;\n       ])\n    50;\n\n  Printf.printf \"\\n\";\n\n  (* 2. Gradient clipping composes with any optimizer *)\n  Printf.printf \"--- Adding gradient clipping (50 steps) ---\\n\";\n\n  run \"adam (no clipping)\" (Vega.adam lr) 50;\n\n  run \"clip_by_norm 1.0 + adam\"\n    (Vega.chain [ Vega.clip_by_norm 1.0; Vega.adam lr ])\n    50;\n\n  run \"clip_by_value 0.5 + adam\"\n    (Vega.chain [ Vega.clip_by_value 0.5; Vega.adam lr ])\n    50;\n\n  Printf.printf \"\\n\";\n\n  (* 3. 
chain is associative: nesting doesn't change behavior *)\n  Printf.printf \"--- chain is associative (50 steps) ---\\n\";\n\n  let a = Vega.scale_by_adam () in\n  let b = Vega.add_decayed_weights ~rate:(Vega.Schedule.constant 0.01) () in\n  let c = Vega.scale_by_learning_rate lr in\n\n  run \"chain [a; b; c]\" (Vega.chain [ a; b; c ]) 50;\n  run \"chain [chain [a; b]; c]\" (Vega.chain [ Vega.chain [ a; b ]; c ]) 50;\n\n  Printf.printf \"\\n\";\n\n  (* 4. update + apply_updates: the explicit two-step API *)\n  Printf.printf \"--- update + apply_updates (explicit) ---\\n\";\n  let tx = Vega.adam lr in\n  let param = ref (x0 ()) in\n  let st = ref (Vega.init tx !param) in\n  for i = 1 to 50 do\n    let updates, s = Vega.update !st ~grad:!param ~param:!param in\n    param := Vega.apply_updates ~param:!param ~updates;\n    st := s;\n    if i mod 10 = 0 then\n      Printf.printf \"  step %2d  x = %s\\n\" i (Nx.data_to_string !param)\n  done\n"
  },
  {
    "path": "packages/vega/examples/03-learning-rate-schedules/README.md",
    "content": "# `03-learning-rate-schedules`\n\nExplore learning rate schedules. Evaluates several schedules at sampled steps,\nthen uses warmup + cosine decay in an optimization loop.\n\n```bash\ndune exec packages/vega/examples/03-learning-rate-schedules/main.exe\n```\n\n## What You'll Learn\n\n- That a schedule is simply `int -> float` (step number to learning rate)\n- How `constant`, `cosine_decay`, `warmup_cosine_decay`, `one_cycle`, and\n  `piecewise_constant` shape the learning rate curve\n- Composing schedules end-to-end with `Schedule.join`\n- Plugging schedules into optimizers as the last positional argument\n\n## Key Functions\n\n| Function                | Purpose                                          |\n| ----------------------- | ------------------------------------------------ |\n| `Schedule.constant`     | Fixed learning rate                              |\n| `Schedule.cosine_decay` | Cosine annealing to zero (or `alpha * init`)     |\n| `Schedule.warmup_cosine_decay` | Linear warmup then cosine decay            |\n| `Schedule.one_cycle`    | 1cycle: linear warmup then cosine decay          |\n| `Schedule.piecewise_constant` | Step function with boundaries and values   |\n| `Schedule.join`         | Sequence schedules end-to-end                    |\n\n## Schedule Shapes\n\n| Schedule | Shape |\n| -------- | ----- |\n| `constant` | Flat line |\n| `cosine_decay` | Smooth decrease following a cosine curve |\n| `warmup_cosine_decay` | Ramp up, then smooth decrease |\n| `one_cycle` | Ramp up to peak, then cosine back down to near zero |\n| `piecewise_constant` | Staircase drops at specified boundaries |\n\n## Try It\n\n1. Change `warmup_steps` in `warmup_cosine_decay` and observe how it affects\n   the transition point.\n2. Use `Schedule.cosine_decay_restarts` to see periodic warm restarts (SGDR).\n3. 
Write a custom schedule as a plain function: `let my_schedule step = ...`\n\n## Further Reading\n\n- [Composing Transforms](../02-composing-transforms/) — how schedules plug\n  into the `chain` API via `scale_by_learning_rate`\n"
  },
  {
    "path": "packages/vega/examples/03-learning-rate-schedules/dune",
    "content": "(executable\n (name main)\n (libraries vega nx))\n"
  },
  {
    "path": "packages/vega/examples/03-learning-rate-schedules/main.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(* Learning rate schedules control how the learning rate changes over training.\n\n   A schedule is simply a function [int -> float]: given a 1-based step number,\n   it returns the learning rate. This example evaluates several schedules and\n   prints their values, then uses warmup + cosine decay in an optimization\n   loop. *)\n\nmodule S = Vega.Schedule\n\nlet print_schedule name s steps =\n  Printf.printf \"  %-30s\" name;\n  List.iter (fun step -> Printf.printf \"  %3d:%.6f\" step (s step)) steps;\n  Printf.printf \"\\n\"\n\nlet sample = [ 1; 25; 50; 75; 100 ]\n\nlet () =\n  Printf.printf \"--- Schedule values at steps %s ---\\n\"\n    (String.concat \", \" (List.map string_of_int sample));\n\n  print_schedule \"constant 0.01\" (S.constant 0.01) sample;\n\n  print_schedule \"cosine_decay\"\n    (S.cosine_decay ~init_value:0.01 ~decay_steps:100 ())\n    sample;\n\n  print_schedule \"warmup_cosine_decay\"\n    (S.warmup_cosine_decay ~init_value:0.0 ~peak_value:0.01 ~warmup_steps:25\n       ~decay_steps:75 ())\n    sample;\n\n  print_schedule \"one_cycle\"\n    (S.one_cycle ~max_value:0.01 ~total_steps:100 ())\n    sample;\n\n  print_schedule \"piecewise_constant\"\n    (S.piecewise_constant ~boundaries:[ 30; 70 ] ~values:[ 0.01; 0.001; 0.0001 ])\n    sample;\n\n  Printf.printf \"\\n\";\n\n  (* join: sequence two schedules end-to-end *)\n  Printf.printf \"--- join: linear warmup then cosine decay ---\\n\";\n  let joined =\n    S.join\n      [\n        (20, S.linear ~init_value:0.0 ~end_value:0.01 ~steps:20);\n        (80, S.cosine_decay ~init_value:0.01 ~decay_steps:80 ());\n      ]\n  in\n  print_schedule \"join [warmup; cosine]\" joined sample;\n\n  Printf.printf \"\\n\";\n\n  (* Use warmup + 
cosine decay in an optimization loop *)\n  Printf.printf \"--- Adam with warmup_cosine_decay (100 steps) ---\\n\";\n  let lr =\n    S.warmup_cosine_decay ~init_value:0.0 ~peak_value:0.01 ~warmup_steps:20\n      ~decay_steps:80 ()\n  in\n  let tx = Vega.adam lr in\n  let param = ref (Nx.create Nx.float32 [| 2 |] [| 5.0; -3.0 |]) in\n  let st = ref (Vega.init tx !param) in\n  for i = 1 to 100 do\n    let p, s = Vega.step !st ~grad:!param ~param:!param in\n    param := p;\n    st := s;\n    if i mod 20 = 0 then\n      Printf.printf \"  step %3d  lr=%.6f  x = %s\\n\" i (lr i)\n        (Nx.data_to_string !param)\n  done\n"
  },
  {
    "path": "packages/vega/examples/README.md",
    "content": "# Vega Examples\n\nLearn Vega through progressively complex examples. Start with `01-basic-optimizers`\nand work through the numbered examples in order.\n\n## Examples\n\n| Example | Concept | Key Functions |\n|---------|---------|---------------|\n| [`01-basic-optimizers`](./01-basic-optimizers/) | Minimize a quadratic with SGD, Adam, AdamW | `init`, `step`, `Schedule.constant` |\n| [`02-composing-transforms`](./02-composing-transforms/) | Build custom optimizers from primitives | `chain`, `scale_by_adam`, `clip_by_norm`, `update` |\n| [`03-learning-rate-schedules`](./03-learning-rate-schedules/) | Explore warmup, cosine decay, one-cycle | `Schedule.warmup_cosine_decay`, `Schedule.one_cycle`, `Schedule.join` |\n\n## Running Examples\n\nAll examples can be run with:\n\n```bash\ndune exec packages/vega/examples/<name>/main.exe\n```\n\nFor example:\n\n```bash\ndune exec packages/vega/examples/01-basic-optimizers/main.exe\n```\n\n## Quick Reference\n\n### Basic Optimizer\n\n```ocaml\nopen Vega\n\nlet lr = Schedule.constant 0.01 in\nlet tx = adam lr in\nlet st = ref (init tx param) in\nfor _ = 1 to steps do\n  let p, s = step !st ~grad ~param:!param in\n  param := p; st := s\ndone\n```\n\n### Custom Optimizer via chain\n\n```ocaml\nlet tx =\n  Vega.chain [\n    Vega.clip_by_norm 1.0;\n    Vega.scale_by_adam ();\n    Vega.add_decayed_weights ~rate:(Vega.Schedule.constant 0.01) ();\n    Vega.scale_by_learning_rate lr;\n  ]\n```\n\n### Learning Rate Schedule\n\n```ocaml\nlet lr =\n  Vega.Schedule.warmup_cosine_decay\n    ~init_value:0.0 ~peak_value:0.001\n    ~warmup_steps:1000 ~decay_steps:9000 ()\nin\nlet tx = Vega.adam lr\n```\n"
  },
  {
    "path": "packages/vega/lib/dune",
    "content": "(library\n (name vega)\n (public_name vega)\n (libraries nx.core nx))\n"
  },
  {
    "path": "packages/vega/lib/schedule.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\ntype t = int -> float\n\n(* Cosine annealing factor: 1 -> 0 as ratio goes 0 -> 1. *)\nlet cosine_decay_factor ratio =\n  0.5 *. (1. +. Stdlib.cos (Float.pi *. ratio))\n\nlet constant value _ = value\n\nlet linear ~init_value ~end_value ~steps step =\n  if steps <= 0 then invalid_arg \"Schedule.linear: steps must be positive\";\n  if step >= steps then end_value\n  else\n    let ratio = float_of_int step /. float_of_int steps in\n    init_value +. ((end_value -. init_value) *. ratio)\n\nlet cosine_decay ~init_value ~decay_steps ?(alpha = 0.) () step =\n  if decay_steps <= 0 then\n    invalid_arg \"Schedule.cosine_decay: decay_steps must be positive\";\n  if step >= decay_steps then alpha *. init_value\n  else\n    let ratio = float_of_int step /. float_of_int decay_steps in\n    let cosine_val = cosine_decay_factor ratio in\n    (((1. -. alpha) *. cosine_val) +. alpha) *. init_value\n\nlet exponential_decay ~init_value ~decay_rate ~decay_steps step =\n  if decay_steps <= 0 then\n    invalid_arg \"Schedule.exponential_decay: decay_steps must be positive\";\n  let ratio = float_of_int step /. float_of_int decay_steps in\n  init_value *. (decay_rate ** ratio)\n\nlet polynomial_decay ~init_value ~end_value ~decay_steps ?(power = 1.0) ()\n    step =\n  if decay_steps <= 0 then\n    invalid_arg \"Schedule.polynomial_decay: decay_steps must be positive\";\n  if step >= decay_steps then end_value\n  else\n    let ratio = float_of_int step /. float_of_int decay_steps in\n    end_value +. ((init_value -. end_value) *. ((1. -. 
ratio) ** power))\n\nlet warmup_cosine ~init_value ~peak_value ~warmup_steps step =\n  if warmup_steps <= 0 then\n    invalid_arg \"Schedule.warmup_cosine: warmup_steps must be positive\";\n  if step >= warmup_steps then peak_value\n  else\n    let ratio = float_of_int step /. float_of_int warmup_steps in\n    let cosine_val = 1. -. cosine_decay_factor ratio in\n    init_value +. ((peak_value -. init_value) *. cosine_val)\n\nlet warmup_cosine_decay ~init_value ~peak_value ~warmup_steps ~decay_steps\n    ?(end_value = 0.) () step =\n  if warmup_steps <= 0 then\n    invalid_arg \"Schedule.warmup_cosine_decay: warmup_steps must be positive\";\n  if decay_steps <= 0 then\n    invalid_arg \"Schedule.warmup_cosine_decay: decay_steps must be positive\";\n  if step <= warmup_steps then\n    let ratio = float_of_int step /. float_of_int warmup_steps in\n    init_value +. ((peak_value -. init_value) *. ratio)\n  else\n    let decay_step = step - warmup_steps in\n    if decay_step >= decay_steps then end_value\n    else\n      let ratio = float_of_int decay_step /. float_of_int decay_steps in\n      let cosine_val = cosine_decay_factor ratio in\n      end_value +. ((peak_value -. end_value) *. cosine_val)\n\nlet cosine_decay_restarts ~init_value ~decay_steps ?(t_mul = 1.0)\n    ?(m_mul = 1.0) ?(alpha = 0.) () =\n  if decay_steps <= 0 then\n    invalid_arg \"Schedule.cosine_decay_restarts: decay_steps must be positive\";\n  fun step ->\n    (* Fast path for uniform period (exact float comparison is\n       intentional: 1.0 is the unmodified default). *)\n    if t_mul = 1.0 then\n      let cycle = step / decay_steps in\n      let pos = step - (cycle * decay_steps) in\n      let amp = init_value *. (m_mul ** float_of_int cycle) in\n      let ratio = float_of_int pos /. float_of_int decay_steps in\n      let cosine_val = cosine_decay_factor ratio in\n      (((1. -. alpha) *. cosine_val) +. alpha) *. amp\n    else begin\n      (* Geometric period: find which cycle [step] falls in. 
*)\n      let remaining = ref step in\n      let cycle = ref 0 in\n      let period = ref (float_of_int decay_steps) in\n      while float_of_int !remaining >= !period do\n        remaining := !remaining - int_of_float !period;\n        period := !period *. t_mul;\n        incr cycle\n      done;\n      let amp = init_value *. (m_mul ** float_of_int !cycle) in\n      let ratio = float_of_int !remaining /. !period in\n      let cosine_val = cosine_decay_factor ratio in\n      (((1. -. alpha) *. cosine_val) +. alpha) *. amp\n    end\n\nlet one_cycle ~max_value ~total_steps ?(div_factor = 25.0)\n    ?(final_div_factor = 10000.0) ?(pct_start = 0.3) () =\n  if total_steps <= 0 then\n    invalid_arg \"Schedule.one_cycle: total_steps must be positive\";\n  fun step ->\n    let warmup_steps = int_of_float (pct_start *. float_of_int total_steps) in\n    let init_value = max_value /. div_factor in\n    let end_value = max_value /. final_div_factor in\n    if step <= warmup_steps then\n      let ratio = float_of_int step /. float_of_int warmup_steps in\n      init_value +. ((max_value -. init_value) *. ratio)\n    else\n      let decay_steps = total_steps - warmup_steps in\n      let decay_step = step - warmup_steps in\n      if decay_step >= decay_steps then end_value\n      else\n        let ratio = float_of_int decay_step /. float_of_int decay_steps in\n        let cosine_val = cosine_decay_factor ratio in\n        end_value +. ((max_value -. end_value) *. 
cosine_val)\n\nlet piecewise_constant ~boundaries ~values =\n  let n_boundaries = List.length boundaries in\n  let n_values = List.length values in\n  if n_values <> n_boundaries + 1 then\n    invalid_arg\n      (Printf.sprintf\n         \"Schedule.piecewise_constant: expected %d values for %d boundaries, \\\n          got %d\"\n         (n_boundaries + 1) n_boundaries n_values);\n  let boundaries = Array.of_list boundaries in\n  let values = Array.of_list values in\n  for i = 1 to Array.length boundaries - 1 do\n    if boundaries.(i) <= boundaries.(i - 1) then\n      invalid_arg\n        \"Schedule.piecewise_constant: boundaries must be strictly increasing\"\n  done;\n  fun step ->\n    let rec find i =\n      if i >= Array.length boundaries then values.(Array.length values - 1)\n      else if step <= boundaries.(i) then values.(i)\n      else find (i + 1)\n    in\n    find 0\n\nlet join segments =\n  if segments = [] then invalid_arg \"Schedule.join: segments must not be empty\";\n  List.iter\n    (fun (n, _) ->\n      if n <= 0 then\n        invalid_arg \"Schedule.join: segment lengths must be positive\")\n    segments;\n  let segments = Array.of_list segments in\n  fun step ->\n    let remaining = ref step in\n    let i = ref 0 in\n    while !i < Array.length segments - 1 && !remaining > fst segments.(!i) do\n      remaining := !remaining - fst segments.(!i);\n      incr i\n    done;\n    let _, sched = segments.(!i) in\n    sched !remaining\n"
  },
  {
    "path": "packages/vega/lib/schedule.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Learning-rate schedules. *)\n\ntype t = int -> float\n(** The type for learning-rate schedules.\n\n    [s step] is the learning rate for 1-based [step]. *)\n\n(** {1:basic Basic} *)\n\nval constant : float -> t\n(** [constant lr] is the schedule that always returns [lr]. *)\n\nval linear : init_value:float -> end_value:float -> steps:int -> t\n(** [linear ~init_value ~end_value ~steps] interpolates linearly from\n    [init_value] to [end_value] over [steps]. Clamps to [end_value] after\n    [steps]. *)\n\n(** {1:decay Decay} *)\n\nval cosine_decay :\n  init_value:float -> decay_steps:int -> ?alpha:float -> unit -> t\n(** [cosine_decay ~init_value ~decay_steps ?alpha ()] is cosine decay from\n    [init_value] to [alpha * init_value] over [decay_steps].\n\n    [alpha] defaults to [0.]. *)\n\nval exponential_decay :\n  init_value:float -> decay_rate:float -> decay_steps:int -> t\n(** [exponential_decay ~init_value ~decay_rate ~decay_steps] is\n    [init_value * decay_rate{^ (step / decay_steps)}]. *)\n\nval polynomial_decay :\n  init_value:float ->\n  end_value:float ->\n  decay_steps:int ->\n  ?power:float ->\n  unit ->\n  t\n(** [polynomial_decay ~init_value ~end_value ~decay_steps ?power ()] decays from\n    [init_value] to [end_value] over [decay_steps] using a polynomial schedule:\n    [end_value + (init_value - end_value) * (1 - step/decay_steps)^power].\n\n    [power] defaults to [1.0] (linear decay). Clamps to [end_value] after\n    [decay_steps]. 
*)\n\n(** {1:warmup Warmup} *)\n\nval warmup_cosine :\n  init_value:float -> peak_value:float -> warmup_steps:int -> t\n(** [warmup_cosine ~init_value ~peak_value ~warmup_steps] is cosine warmup from\n    [init_value] to [peak_value] over [warmup_steps]. Clamps to [peak_value]\n    after [warmup_steps]. *)\n\nval warmup_cosine_decay :\n  init_value:float ->\n  peak_value:float ->\n  warmup_steps:int ->\n  decay_steps:int ->\n  ?end_value:float ->\n  unit ->\n  t\n(** [warmup_cosine_decay ~init_value ~peak_value ~warmup_steps ~decay_steps\n     ?end_value ()] is linear warmup from [init_value] to [peak_value] over\n    [warmup_steps], then cosine decay to [end_value] over [decay_steps].\n\n    [end_value] defaults to [0.]. *)\n\n(** {1:restarts Warm Restarts} *)\n\nval cosine_decay_restarts :\n  init_value:float ->\n  decay_steps:int ->\n  ?t_mul:float ->\n  ?m_mul:float ->\n  ?alpha:float ->\n  unit ->\n  t\n(** [cosine_decay_restarts ~init_value ~decay_steps ?t_mul ?m_mul ?alpha ()] is\n    cosine decay that periodically resets to [init_value] (SGDR).\n\n    After each restart the period is multiplied by [t_mul] and the peak\n    amplitude by [m_mul]. [alpha] is the minimum fraction of [init_value].\n\n    [t_mul] defaults to [1.0]. [m_mul] defaults to [1.0]. [alpha] defaults to\n    [0.0]. *)\n\nval one_cycle :\n  max_value:float ->\n  total_steps:int ->\n  ?div_factor:float ->\n  ?final_div_factor:float ->\n  ?pct_start:float ->\n  unit ->\n  t\n(** [one_cycle ~max_value ~total_steps ?div_factor ?final_div_factor ?pct_start\n     ()] is the 1cycle schedule.\n\n    Phase 1 (warmup): linear from [max_value / div_factor] to [max_value] over\n    [pct_start * total_steps] steps. Phase 2 (decay): cosine from [max_value] to\n    [max_value / final_div_factor] over the remaining steps.\n\n    [div_factor] defaults to [25.0]. [final_div_factor] defaults to [10000.0].\n    [pct_start] defaults to [0.3]. 
*)\n\n(** {1:composition Composition} *)\n\nval piecewise_constant : boundaries:int list -> values:float list -> t\n(** [piecewise_constant ~boundaries ~values] is a step function. [values] has\n    one more element than [boundaries]. The schedule returns [values.(i)] for\n    steps in the i-th segment.\n\n    For example,\n    [piecewise_constant ~boundaries:[100; 200] ~values:[0.1; 0.01; 0.001]]\n    returns [0.1] for steps 1--100, [0.01] for 101--200, and [0.001] thereafter.\n\n    Raises [Invalid_argument] if\n    [List.length values <> List.length boundaries + 1] or if [boundaries] is not\n    strictly increasing. *)\n\nval join : (int * t) list -> t\n(** [join segments] sequences schedules end-to-end. Each [(n, s)] runs [s] for\n    [n] steps. Step numbers are restarted from 1 within each segment. The last\n    segment's schedule is used for all steps beyond the total.\n\n    Raises [Invalid_argument] if [segments] is empty or any [n <= 0]. *)\n"
  },
  {
    "path": "packages/vega/lib/vega.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nmodule Schedule = Schedule\nmodule Dtype = Nx_core.Dtype\n\n(* Helpers *)\n\nlet scalar (type a b) (dt : (a, b) Dtype.t) x =\n  Nx.scalar dt (Dtype.of_float dt x)\n\nlet float_of_scalar (type a b) (dt : (a, b) Dtype.t) (v : a) : float =\n  match dt with\n  | Dtype.Float16 -> (v : float)\n  | Dtype.Float32 -> (v : float)\n  | Dtype.Float64 -> (v : float)\n  | Dtype.BFloat16 -> (v : float)\n  | Dtype.Float8_e4m3 -> (v : float)\n  | Dtype.Float8_e5m2 -> (v : float)\n  | _ -> invalid_arg \"Vega: expected floating-point dtype\"\n\n(* Validation *)\n\nlet invalid_argf fmt = Printf.ksprintf invalid_arg fmt\n\nlet validate_positive ctx name value =\n  if value <= 0.0 then\n    invalid_argf \"%s: expected %s > 0.0, got %g\" ctx name value\n\nlet validate_non_negative ctx name value =\n  if value < 0.0 then\n    invalid_argf \"%s: expected %s >= 0.0, got %g\" ctx name value\n\nlet validate_unit_interval ctx name value =\n  if value < 0.0 || value >= 1.0 then\n    invalid_argf \"%s: expected 0.0 <= %s < 1.0, got %g\" ctx name value\n\n(* Primitive: a single composable gradient transformation *)\n\ntype prim = {\n  n_tensors : int;\n  prim_init : 'a 'b. 
('a, 'b) Nx.t -> ('a, 'b) Nx.t array;\n  prim_update :\n    'a 'b.\n    int ->\n    ('a, 'b) Nx.t array ->\n    ('a, 'b) Nx.t ->\n    ('a, 'b) Nx.t ->\n    ('a, 'b) Nx.t * ('a, 'b) Nx.t array;\n      (* count -> sub_state -> updates -> param -> (new_updates,\n         new_sub_state) *)\n}\n\ntype t = prim list\n\ntype ('a, 'b) state = {\n  prims : prim array;\n  count : int;\n  tensors : ('a, 'b) Nx.t array;\n}\n\n(* Core *)\n\nlet chain ts = List.concat ts\nlet n_tensors tx = List.fold_left (fun acc p -> acc + p.n_tensors) 0 tx\n\nlet init tx param =\n  let prims = Array.of_list tx in\n  let tensors =\n    Array.concat (Array.to_list (Array.map (fun p -> p.prim_init param) prims))\n  in\n  { prims; count = 0; tensors }\n\nlet update st ~grad ~param =\n  let count = st.count + 1 in\n  let n_prims = Array.length st.prims in\n  let all_tensors = Array.copy st.tensors in\n  let offset = ref 0 in\n  let updates = ref grad in\n  for i = 0 to n_prims - 1 do\n    let p = st.prims.(i) in\n    let sub_state = Array.sub st.tensors !offset p.n_tensors in\n    let new_updates, new_sub_state =\n      p.prim_update count sub_state !updates param\n    in\n    Array.blit new_sub_state 0 all_tensors !offset p.n_tensors;\n    updates := new_updates;\n    offset := !offset + p.n_tensors\n  done;\n  (!updates, { prims = st.prims; count; tensors = all_tensors })\n\nlet apply_updates ~param ~updates = Nx.add param updates\n\nlet step st ~grad ~param =\n  let updates, st = update st ~grad ~param in\n  (apply_updates ~param ~updates, st)\n\n(* Scaling transforms *)\n\nlet scale s =\n  [\n    {\n      n_tensors = 0;\n      prim_init = (fun _ -> [||]);\n      prim_update =\n        (fun _count _st updates _param ->\n          let dt = Nx.dtype updates in\n          (Nx.mul updates (scalar dt s), [||]));\n    };\n  ]\n\nlet scale_by_schedule sched =\n  [\n    {\n      n_tensors = 0;\n      prim_init = (fun _ -> [||]);\n      prim_update =\n        (fun count _st updates _param ->\n          
let dt = Nx.dtype updates in\n          let s = sched count in\n          (Nx.mul updates (scalar dt s), [||]));\n    };\n  ]\n\nlet scale_by_learning_rate lr =\n  [\n    {\n      n_tensors = 0;\n      prim_init = (fun _ -> [||]);\n      prim_update =\n        (fun count _st updates _param ->\n          let dt = Nx.dtype updates in\n          let s = -.lr count in\n          (Nx.mul updates (scalar dt s), [||]));\n    };\n  ]\n\n(* Adaptive scaling transforms *)\n\nlet scale_by_adam ?(b1 = 0.9) ?(b2 = 0.999) ?(eps = 1e-8) ?(nesterov = false)\n    ?(amsgrad = false) () =\n  validate_unit_interval \"Vega.scale_by_adam\" \"b1\" b1;\n  validate_unit_interval \"Vega.scale_by_adam\" \"b2\" b2;\n  validate_positive \"Vega.scale_by_adam\" \"eps\" eps;\n  let n_tensors = if amsgrad then 3 else 2 in\n  [\n    {\n      n_tensors;\n      prim_init =\n        (fun param ->\n          if amsgrad then\n            [| Nx.zeros_like param; Nx.zeros_like param; Nx.zeros_like param |]\n          else [| Nx.zeros_like param; Nx.zeros_like param |]);\n      prim_update =\n        (fun count st updates _param ->\n          let mu = st.(0) and nu = st.(1) in\n          let dt = Nx.dtype updates in\n          let new_mu =\n            Nx.add\n              (Nx.mul mu (scalar dt b1))\n              (Nx.mul updates (scalar dt (1. -. b1)))\n          in\n          let new_nu =\n            Nx.add\n              (Nx.mul nu (scalar dt b2))\n              (Nx.mul (Nx.mul updates updates) (scalar dt (1. -. b2)))\n          in\n          let bc1 = 1. -. (b1 ** float_of_int count) in\n          let bc2 = 1. -. 
(b2 ** float_of_int count) in\n          let m_hat = Nx.div new_mu (scalar dt bc1) in\n          let v_hat, new_st =\n            if amsgrad then\n              let v_max = Nx.maximum st.(2) new_nu in\n              (Nx.div v_max (scalar dt bc2), [| new_mu; new_nu; v_max |])\n            else (Nx.div new_nu (scalar dt bc2), [| new_mu; new_nu |])\n          in\n          let out =\n            if nesterov then\n              let m_hat_nesterov =\n                Nx.add\n                  (Nx.mul (scalar dt (b1 /. bc1)) new_mu)\n                  (Nx.mul (scalar dt ((1. -. b1) /. bc1)) updates)\n              in\n              Nx.div m_hat_nesterov (Nx.add (Nx.sqrt v_hat) (scalar dt eps))\n            else Nx.div m_hat (Nx.add (Nx.sqrt v_hat) (scalar dt eps))\n          in\n          (out, new_st));\n    };\n  ]\n\nlet scale_by_rms ?(decay = 0.9) ?(eps = 1e-8) () =\n  validate_unit_interval \"Vega.scale_by_rms\" \"decay\" decay;\n  validate_positive \"Vega.scale_by_rms\" \"eps\" eps;\n  [\n    {\n      n_tensors = 1;\n      prim_init = (fun param -> [| Nx.zeros_like param |]);\n      prim_update =\n        (fun _count st updates _param ->\n          let nu = st.(0) in\n          let dt = Nx.dtype updates in\n          let new_nu =\n            Nx.add\n              (Nx.mul nu (scalar dt decay))\n              (Nx.mul (Nx.mul updates updates) (scalar dt (1. -. 
decay)))\n          in\n          let out = Nx.div updates (Nx.add (Nx.sqrt new_nu) (scalar dt eps)) in\n          (out, [| new_nu |]));\n    };\n  ]\n\nlet scale_by_adagrad ?(eps = 1e-8) () =\n  validate_positive \"Vega.scale_by_adagrad\" \"eps\" eps;\n  [\n    {\n      n_tensors = 1;\n      prim_init = (fun param -> [| Nx.zeros_like param |]);\n      prim_update =\n        (fun _count st updates _param ->\n          let accum = st.(0) in\n          let dt = Nx.dtype updates in\n          let new_accum = Nx.add accum (Nx.mul updates updates) in\n          let out =\n            Nx.div updates (Nx.add (Nx.sqrt new_accum) (scalar dt eps))\n          in\n          (out, [| new_accum |]));\n    };\n  ]\n\nlet scale_by_lion ?(b1 = 0.9) ?(b2 = 0.99) () =\n  validate_unit_interval \"Vega.scale_by_lion\" \"b1\" b1;\n  validate_unit_interval \"Vega.scale_by_lion\" \"b2\" b2;\n  [\n    {\n      n_tensors = 1;\n      prim_init = (fun param -> [| Nx.zeros_like param |]);\n      prim_update =\n        (fun _count st updates _param ->\n          let mu = st.(0) in\n          let dt = Nx.dtype updates in\n          (* Update direction: sign of interpolation with b1 *)\n          let interp =\n            Nx.add\n              (Nx.mul mu (scalar dt b1))\n              (Nx.mul updates (scalar dt (1. -. b1)))\n          in\n          let out = Nx.sign interp in\n          (* Momentum state: EMA with b2 *)\n          let new_mu =\n            Nx.add\n              (Nx.mul mu (scalar dt b2))\n              (Nx.mul updates (scalar dt (1. -. b2)))\n          in\n          (out, [| new_mu |]));\n    };\n  ]\n\nlet scale_by_radam ?(b1 = 0.9) ?(b2 = 0.999) ?(eps = 1e-8) () =\n  validate_unit_interval \"Vega.scale_by_radam\" \"b1\" b1;\n  validate_unit_interval \"Vega.scale_by_radam\" \"b2\" b2;\n  validate_positive \"Vega.scale_by_radam\" \"eps\" eps;\n  let rho_inf = (2. /. (1. -. b2)) -. 1. 
in\n  [\n    {\n      n_tensors = 2;\n      prim_init = (fun param -> [| Nx.zeros_like param; Nx.zeros_like param |]);\n      prim_update =\n        (fun count st updates _param ->\n          let mu = st.(0) and nu = st.(1) in\n          let dt = Nx.dtype updates in\n          let new_mu =\n            Nx.add\n              (Nx.mul mu (scalar dt b1))\n              (Nx.mul updates (scalar dt (1. -. b1)))\n          in\n          let new_nu =\n            Nx.add\n              (Nx.mul nu (scalar dt b2))\n              (Nx.mul (Nx.mul updates updates) (scalar dt (1. -. b2)))\n          in\n          let bc1 = 1. -. (b1 ** float_of_int count) in\n          let m_hat = Nx.div new_mu (scalar dt bc1) in\n          let b2_t = b2 ** float_of_int count in\n          let rho_t =\n            rho_inf -. (2. *. float_of_int count *. b2_t /. (1. -. b2_t))\n          in\n          let out =\n            if rho_t > 5. then begin\n              let bc2 = 1. -. b2_t in\n              let v_hat = Nx.div new_nu (scalar dt bc2) in\n              let rect =\n                sqrt\n                  ((rho_t -. 4.) *. (rho_t -. 2.) *. rho_inf\n                  /. ((rho_inf -. 4.) *. (rho_inf -. 2.) *. 
rho_t))\n              in\n              Nx.mul (scalar dt rect)\n                (Nx.div m_hat (Nx.add (Nx.sqrt v_hat) (scalar dt eps)))\n            end\n            else m_hat\n          in\n          (out, [| new_mu; new_nu |]));\n    };\n  ]\n\nlet scale_by_trust_ratio ?(eps = 1e-6) () =\n  validate_positive \"Vega.scale_by_trust_ratio\" \"eps\" eps;\n  [\n    {\n      n_tensors = 0;\n      prim_init = (fun _ -> [||]);\n      prim_update =\n        (fun _count _st updates param ->\n          let dt = Nx.dtype updates in\n          let param_norm =\n            float_of_scalar dt\n              (Nx.item [] (Nx.sqrt (Nx.sum (Nx.mul param param))))\n          in\n          let update_norm =\n            float_of_scalar dt\n              (Nx.item [] (Nx.sqrt (Nx.sum (Nx.mul updates updates))))\n          in\n          let ratio =\n            if param_norm > 0. && update_norm > 0. then\n              param_norm /. (update_norm +. eps)\n            else 1.\n          in\n          (Nx.mul updates (scalar dt ratio), [||]));\n    };\n  ]\n\nlet scale_by_adafactor ?(b2_decay = `Rms) ?(eps = 1e-30) ?(eps_scale = 1e-3)\n    ?(factored = true) ?(clipping_threshold = 1.0) () =\n  validate_positive \"Vega.scale_by_adafactor\" \"eps\" eps;\n  validate_positive \"Vega.scale_by_adafactor\" \"eps_scale\" eps_scale;\n  validate_positive \"Vega.scale_by_adafactor\" \"clipping_threshold\"\n    clipping_threshold;\n  let rms_clip (type a b) (dt : (a, b) Dtype.t) (u : (a, b) Nx.t) =\n    if Float.is_finite clipping_threshold then\n      let rms =\n        float_of_scalar dt (Nx.item [] (Nx.sqrt (Nx.mean (Nx.mul u u))))\n      in\n      let scale =\n        if rms > 0. then Float.min 1. (clipping_threshold /. rms) else 1.\n      in\n      if scale < 1. 
then Nx.mul u (scalar dt scale) else u\n    else u\n  in\n  [\n    {\n      n_tensors = 2;\n      prim_init =\n        (fun param ->\n          let shape = Nx.shape param in\n          let ndim = Array.length shape in\n          if factored && ndim >= 2 then (\n            let row_shape = Array.copy shape in\n            row_shape.(ndim - 1) <- 1;\n            let col_shape = Array.copy shape in\n            col_shape.(ndim - 2) <- 1;\n            [|\n              Nx.zeros (Nx.dtype param) row_shape;\n              Nx.zeros (Nx.dtype param) col_shape;\n            |])\n          else\n            [|\n              Nx.zeros_like param;\n              Nx.scalar (Nx.dtype param) (Dtype.of_float (Nx.dtype param) 0.);\n            |]);\n      prim_update =\n        (fun count st updates _param ->\n          let dt = Nx.dtype updates in\n          let shape = Nx.shape updates in\n          let ndim = Array.length shape in\n          let rho =\n            match b2_decay with\n            | `Constant rho -> rho\n            | `Rms ->\n                let t = float_of_int (max count 1) in\n                1. -. (t ** -0.8)\n          in\n          let lr = -.eps_scale /. sqrt (float_of_int (max count 1)) in\n          let g_sq = Nx.mul updates updates in\n          if factored && ndim >= 2 then begin\n            let row_ax = ndim - 1 in\n            let col_ax = ndim - 2 in\n            let row_mean = Nx.mean ~axes:[ row_ax ] ~keepdims:true g_sq in\n            let col_mean = Nx.mean ~axes:[ col_ax ] ~keepdims:true g_sq in\n            let new_rf =\n              Nx.add\n                (Nx.mul st.(0) (scalar dt rho))\n                (Nx.mul row_mean (scalar dt (1. -. rho)))\n            in\n            let new_cf =\n              Nx.add\n                (Nx.mul st.(1) (scalar dt rho))\n                (Nx.mul col_mean (scalar dt (1. -. 
rho)))\n            in\n            let rf_mean = Nx.mean ~axes:[ col_ax ] ~keepdims:true new_rf in\n            let v_est =\n              Nx.div (Nx.mul new_rf new_cf) (Nx.add rf_mean (scalar dt eps))\n            in\n            let u = Nx.div updates (Nx.add (Nx.sqrt v_est) (scalar dt eps)) in\n            let out = rms_clip dt u in\n            (Nx.mul out (scalar dt lr), [| new_rf; new_cf |])\n          end\n          else begin\n            let new_nu =\n              Nx.add\n                (Nx.mul st.(0) (scalar dt rho))\n                (Nx.mul g_sq (scalar dt (1. -. rho)))\n            in\n            let u = Nx.div updates (Nx.add (Nx.sqrt new_nu) (scalar dt eps)) in\n            let out = rms_clip dt u in\n            (Nx.mul out (scalar dt lr), [| new_nu; st.(1) |])\n          end);\n    };\n  ]\n\nlet scale_by_adan ?(b1 = 0.98) ?(b2 = 0.92) ?(b3 = 0.99) ?(eps = 1e-8) () =\n  validate_unit_interval \"Vega.scale_by_adan\" \"b1\" b1;\n  validate_unit_interval \"Vega.scale_by_adan\" \"b2\" b2;\n  validate_unit_interval \"Vega.scale_by_adan\" \"b3\" b3;\n  validate_positive \"Vega.scale_by_adan\" \"eps\" eps;\n  [\n    {\n      n_tensors = 4;\n      prim_init =\n        (fun param ->\n          [|\n            Nx.zeros_like param;\n            Nx.zeros_like param;\n            Nx.zeros_like param;\n            Nx.zeros_like param;\n          |]);\n      prim_update =\n        (fun _count st updates _param ->\n          let m = st.(0) and v = st.(1) and n = st.(2) and prev_g = st.(3) in\n          let dt = Nx.dtype updates in\n          let diff = Nx.sub updates prev_g in\n          let new_m =\n            Nx.add\n              (Nx.mul m (scalar dt b1))\n              (Nx.mul updates (scalar dt (1. -. b1)))\n          in\n          let new_v =\n            Nx.add\n              (Nx.mul v (scalar dt b2))\n              (Nx.mul diff (scalar dt (1. -. 
b2)))\n          in\n          let nesterov_g = Nx.add updates (Nx.mul diff (scalar dt b2)) in\n          let new_n =\n            Nx.add\n              (Nx.mul n (scalar dt b3))\n              (Nx.mul (Nx.mul nesterov_g nesterov_g) (scalar dt (1. -. b3)))\n          in\n          let out =\n            Nx.div\n              (Nx.add new_m (Nx.mul new_v (scalar dt b2)))\n              (Nx.add (Nx.sqrt new_n) (scalar dt eps))\n          in\n          (out, [| new_m; new_v; new_n; updates |]));\n    };\n  ]\n\n(* Accumulation transforms *)\n\nlet trace ?(decay = 0.9) ?(nesterov = false) () =\n  validate_unit_interval \"Vega.trace\" \"decay\" decay;\n  [\n    {\n      n_tensors = 1;\n      prim_init = (fun param -> [| Nx.zeros_like param |]);\n      prim_update =\n        (fun _count st updates _param ->\n          let vel = st.(0) in\n          let dt = Nx.dtype updates in\n          let new_vel = Nx.add (Nx.mul vel (scalar dt decay)) updates in\n          let out =\n            if nesterov then Nx.add updates (Nx.mul new_vel (scalar dt decay))\n            else new_vel\n          in\n          (out, [| new_vel |]));\n    };\n  ]\n\n(* Regularization transforms *)\n\nlet add_decayed_weights ?(rate = Schedule.constant 0.01) () =\n  [\n    {\n      n_tensors = 0;\n      prim_init = (fun _ -> [||]);\n      prim_update =\n        (fun count _st updates param ->\n          let dt = Nx.dtype updates in\n          let r = rate count in\n          (Nx.add updates (Nx.mul param (scalar dt r)), [||]));\n    };\n  ]\n\n(* Clipping transforms *)\n\nlet clip_by_value delta =\n  validate_positive \"Vega.clip_by_value\" \"delta\" delta;\n  [\n    {\n      n_tensors = 0;\n      prim_init = (fun _ -> [||]);\n      prim_update =\n        (fun _count _st updates _param ->\n          let dt = Nx.dtype updates in\n          let min_v = Dtype.of_float dt (-.delta) in\n          let max_v = Dtype.of_float dt delta in\n          (Nx.clip updates ~min:min_v ~max:max_v, [||]));\n    };\n  
]\n\nlet clip_by_norm max_norm =\n  validate_positive \"Vega.clip_by_norm\" \"max_norm\" max_norm;\n  [\n    {\n      n_tensors = 0;\n      prim_init = (fun _ -> [||]);\n      prim_update =\n        (fun _count _st updates _param ->\n          let dt = Nx.dtype updates in\n          let norm =\n            float_of_scalar dt\n              (Nx.item [] (Nx.sqrt (Nx.sum (Nx.mul updates updates))))\n          in\n          if norm <= max_norm then (updates, [||])\n          else\n            let s = max_norm /. norm in\n            (Nx.mul updates (scalar dt s), [||]));\n    };\n  ]\n\n(* Gradient processing *)\n\nlet centralize =\n  [\n    {\n      n_tensors = 0;\n      prim_init = (fun _ -> [||]);\n      prim_update =\n        (fun _count _st updates _param ->\n          let ndim = Array.length (Nx.shape updates) in\n          if ndim < 2 then (updates, [||])\n          else\n            let axes = List.init (ndim - 1) (fun i -> i + 1) in\n            let mean = Nx.mean ~axes ~keepdims:true updates in\n            (Nx.sub updates mean, [||]));\n    };\n  ]\n\nlet add_noise ~eta ?(gamma = 0.55) () =\n  [\n    {\n      n_tensors = 0;\n      prim_init = (fun _ -> [||]);\n      prim_update =\n        (fun count _st updates _param ->\n          let dt = Nx.dtype updates in\n          let variance =\n            eta count /. Float.pow (1. +. 
float_of_int count) gamma\n          in\n          let noise =\n            Nx.mul (Nx.randn dt (Nx.shape updates)) (scalar dt (sqrt variance))\n          in\n          (Nx.add updates noise, [||]));\n    };\n  ]\n\n(* Robustness *)\n\nlet apply_if_finite tx =\n  let inner_prims = Array.of_list tx in\n  let inner_n =\n    Array.fold_left (fun acc p -> acc + p.n_tensors) 0 inner_prims\n  in\n  [\n    {\n      n_tensors = inner_n + 1;\n      prim_init =\n        (fun param ->\n          let inner_st =\n            Array.concat\n              (Array.to_list\n                 (Array.map (fun p -> p.prim_init param) inner_prims))\n          in\n          let counter =\n            Nx.scalar (Nx.dtype param) (Dtype.of_float (Nx.dtype param) 0.)\n          in\n          Array.append inner_st [| counter |]);\n      prim_update =\n        (fun count st updates param ->\n          let dt = Nx.dtype updates in\n          let inner_st = Array.sub st 0 inner_n in\n          (* Run the inner chain *)\n          let offset = ref 0 in\n          let upd = ref updates in\n          let new_inner = Array.copy inner_st in\n          for i = 0 to Array.length inner_prims - 1 do\n            let p = inner_prims.(i) in\n            let sub = Array.sub inner_st !offset p.n_tensors in\n            let new_upd, new_sub = p.prim_update count sub !upd param in\n            Array.blit new_sub 0 new_inner !offset p.n_tensors;\n            upd := new_upd;\n            offset := !offset + p.n_tensors\n          done;\n          (* Check if result is finite *)\n          let is_finite =\n            let fin = Nx.isfinite !upd in\n            let all_fin = Nx.all fin in\n            Nx.item [] all_fin\n          in\n          if is_finite then\n            let new_st =\n              Array.append new_inner [| Nx.scalar dt (Dtype.of_float dt 0.) 
|]\n            in\n            (!upd, new_st)\n          else\n            let counter = st.(inner_n) in\n            let new_counter =\n              Nx.add counter (Nx.scalar dt (Dtype.of_float dt 1.))\n            in\n            let new_st = Array.append inner_st [| new_counter |] in\n            (Nx.zeros_like updates, new_st));\n    };\n  ]\n\n(* Optimizer aliases *)\n\nlet sgd ?(momentum = 0.) ?(nesterov = false) lr =\n  validate_unit_interval \"Vega.sgd\" \"momentum\" momentum;\n  if momentum > 0. then\n    chain [ trace ~decay:momentum ~nesterov (); scale_by_learning_rate lr ]\n  else chain [ scale_by_learning_rate lr ]\n\nlet adam ?b1 ?b2 ?eps lr =\n  chain [ scale_by_adam ?b1 ?b2 ?eps (); scale_by_learning_rate lr ]\n\nlet adamw ?b1 ?b2 ?eps ?(weight_decay = 0.01) lr =\n  validate_non_negative \"Vega.adamw\" \"weight_decay\" weight_decay;\n  chain\n    [\n      scale_by_adam ?b1 ?b2 ?eps ();\n      add_decayed_weights ~rate:(Schedule.constant weight_decay) ();\n      scale_by_learning_rate lr;\n    ]\n\nlet rmsprop ?decay ?eps ?(momentum = 0.) lr =\n  validate_unit_interval \"Vega.rmsprop\" \"momentum\" momentum;\n  let base = scale_by_rms ?decay ?eps () in\n  if momentum > 0. 
then\n    chain [ base; trace ~decay:momentum (); scale_by_learning_rate lr ]\n  else chain [ base; scale_by_learning_rate lr ]\n\nlet adagrad ?eps lr =\n  chain [ scale_by_adagrad ?eps (); scale_by_learning_rate lr ]\n\nlet lamb ?b1 ?b2 ?eps ?(weight_decay = 0.01) lr =\n  chain\n    [\n      scale_by_adam ?b1 ?b2 ?eps ();\n      add_decayed_weights ~rate:(Schedule.constant weight_decay) ();\n      scale_by_trust_ratio ();\n      scale_by_learning_rate lr;\n    ]\n\nlet lion ?b1 ?b2 lr =\n  chain [ scale_by_lion ?b1 ?b2 (); scale_by_learning_rate lr ]\n\nlet radam ?b1 ?b2 ?eps lr =\n  chain [ scale_by_radam ?b1 ?b2 ?eps (); scale_by_learning_rate lr ]\n\nlet lars ?(momentum = 0.9) ?(weight_decay = 0.01) ?(nesterov = false) lr =\n  chain\n    [\n      trace ~decay:momentum ~nesterov ();\n      add_decayed_weights ~rate:(Schedule.constant weight_decay) ();\n      scale_by_trust_ratio ();\n      scale_by_learning_rate lr;\n    ]\n\nlet adan ?b1 ?b2 ?b3 ?eps ?(weight_decay = 0.02) lr =\n  validate_non_negative \"Vega.adan\" \"weight_decay\" weight_decay;\n  chain\n    [\n      scale_by_adan ?b1 ?b2 ?b3 ?eps ();\n      add_decayed_weights ~rate:(Schedule.constant weight_decay) ();\n      scale_by_learning_rate lr;\n    ]\n\nlet adafactor ?b2_decay () = chain [ scale_by_adafactor ?b2_decay () ]\n\n(* Serialization *)\n\nlet state_to_tensors st = (st.count, st.tensors)\n\nlet state_of_tensors tx ~count tensors =\n  let prims = Array.of_list tx in\n  let expected = Array.fold_left (fun acc p -> acc + p.n_tensors) 0 prims in\n  let got = Array.length tensors in\n  if got <> expected then\n    invalid_arg\n      (Printf.sprintf \"Vega.state_of_tensors: expected %d tensors, got %d\"\n         expected got);\n  { prims; count; tensors }\n"
  },
  {
    "path": "packages/vega/lib/vega.mli",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\n(** Composable gradient-based optimizers.\n\n    Vega provides typed, per-parameter optimizer primitives that compose via\n    {!chain}. Each primitive is a gradient transformation: it takes updates\n    (gradients) and returns modified updates. Primitives are chained to build\n    optimizers:\n\n    {[\n    let tx =\n      Vega.chain\n        [\n          Vega.scale_by_adam ();\n          Vega.add_decayed_weights ~rate:(Schedule.constant 0.01) ();\n          Vega.scale_by_learning_rate lr;\n        ]\n    ]}\n\n    Common optimizers are provided as aliases: {!adam}, {!sgd}, {!adamw}, etc.\n\n    {b Narrow waist.} The core abstraction is [t]: a composable gradient\n    transformation. The per-parameter {!state} is fully self-contained and\n    tracks moments, step count, and the update rule. *)\n\n(** {1:schedules Learning-Rate Schedules} *)\n\nmodule Schedule = Schedule\n\n(** {1:types Types} *)\n\ntype t\n(** A composable gradient transformation. Constructed via primitives like\n    {!scale_by_adam}, {!trace}, etc., and composed via {!chain}. *)\n\ntype ('a, 'b) state\n(** Per-parameter optimizer state. Typed to match the parameter tensor. Tracks\n    moments, step count, and the transformation chain. Created via {!init},\n    advanced via {!update} or {!step}. *)\n\n(** {1:core Core} *)\n\nval chain : t list -> t\n(** [chain transforms] composes transforms sequentially. {!update} applies each\n    transform in order, threading the modified updates through.\n\n    {!chain} is associative: [chain [chain [a; b]; c]] is equivalent to\n    [chain [a; b; c]]. 
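Modelling each transform as a plain function on a scalar update, the threading and the associativity of [chain] can be illustrated in a few lines (a sketch only; the library threads tensor updates plus per-primitive state):

```ocaml
(* Sketch: transforms as plain [float -> float] functions on the update.
   [chain] threads the update through each transform in order. *)
let chain ts u = List.fold_left (fun u f -> f u) u ts

let a u = 2.0 *. u                           (* like scale 2.0 *)
let b u = Float.min 1.0 (Float.max (-1.0) u) (* like clip_by_value 1.0 *)
let c u = (-0.1) *. u                        (* like a learning-rate stage *)
```

Here [chain [chain [a; b]; c]] and [chain [a; b; c]] produce the same update for every input, which is the associativity property stated above.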
*)\n\nval init : t -> ('a, 'b) Nx.t -> ('a, 'b) state\n(** [init tx param] creates initial optimizer state matching [param]'s shape and\n    dtype. Step count starts at [0]. *)\n\nval update :\n  ('a, 'b) state ->\n  grad:('a, 'b) Nx.t ->\n  param:('a, 'b) Nx.t ->\n  ('a, 'b) Nx.t * ('a, 'b) state\n(** [update state ~grad ~param] returns [(updates, new_state)].\n\n    The returned [updates] are gradient-scale values that include the\n    learning-rate sign. Apply them via {!apply_updates}. *)\n\nval apply_updates :\n  param:('a, 'b) Nx.t -> updates:('a, 'b) Nx.t -> ('a, 'b) Nx.t\n(** [apply_updates ~param ~updates] is [Nx.add param updates]. *)\n\nval step :\n  ('a, 'b) state ->\n  grad:('a, 'b) Nx.t ->\n  param:('a, 'b) Nx.t ->\n  ('a, 'b) Nx.t * ('a, 'b) state\n(** [step state ~grad ~param] returns [(new_param, new_state)].\n\n    Convenience for:\n    {[\n    let updates, state = update state ~grad ~param in\n    (apply_updates ~param ~updates, state)\n    ]} *)\n\n(** {1:scaling Scaling Transforms} *)\n\nval scale : float -> t\n(** [scale s] multiplies updates by [s]. Stateless. *)\n\nval scale_by_schedule : Schedule.t -> t\n(** [scale_by_schedule f] multiplies updates by [f step]. *)\n\nval scale_by_learning_rate : Schedule.t -> t\n(** [scale_by_learning_rate lr] multiplies updates by [-lr step]. Negates the\n    learning rate so that {!apply_updates} performs gradient descent. *)\n\n(** {1:adaptive Adaptive Scaling Transforms} *)\n\nval scale_by_adam :\n  ?b1:float ->\n  ?b2:float ->\n  ?eps:float ->\n  ?nesterov:bool ->\n  ?amsgrad:bool ->\n  unit ->\n  t\n(** [scale_by_adam ?b1 ?b2 ?eps ?nesterov ?amsgrad ()] scales updates by Adam's\n    bias-corrected first and second moment estimates.\n\n    When [amsgrad] is [true], the denominator uses the running maximum of past\n    second moments, preventing the adaptive learning rate from increasing.\n\n    [b1] defaults to [0.9]. [b2] defaults to [0.999]. 
[eps] defaults to [1e-8].\n    [nesterov] defaults to [false]. [amsgrad] defaults to [false].\n\n    State: 2 tensors when [amsgrad] is [false], 3 when [true] (first moment,\n    second moment, max second moment). *)\n\nval scale_by_rms : ?decay:float -> ?eps:float -> unit -> t\n(** [scale_by_rms ?decay ?eps ()] scales updates by the inverse root mean square\n    of past gradients (the core of RMSprop).\n\n    [decay] defaults to [0.9]. [eps] defaults to [1e-8].\n\n    State: 1 tensor (second moment EMA). *)\n\nval scale_by_adagrad : ?eps:float -> unit -> t\n(** [scale_by_adagrad ?eps ()] scales updates by the inverse root of accumulated\n    squared gradients.\n\n    [eps] defaults to [1e-8].\n\n    State: 1 tensor (accumulated squared gradients). *)\n\nval scale_by_lion : ?b1:float -> ?b2:float -> unit -> t\n(** [scale_by_lion ?b1 ?b2 ()] produces sign-based updates using two momentum\n    rates: [b1] for the update direction, [b2] for the momentum state.\n\n    [b1] defaults to [0.9]. [b2] defaults to [0.99].\n\n    State: 1 tensor (momentum). *)\n\nval scale_by_radam : ?b1:float -> ?b2:float -> ?eps:float -> unit -> t\n(** [scale_by_radam ?b1 ?b2 ?eps ()] scales by rectified Adam. Uses the length\n    of the approximated SMA to decide between adaptive and momentum-only\n    updates, avoiding unstable variance in early steps.\n\n    [b1] defaults to [0.9]. [b2] defaults to [0.999]. [eps] defaults to [1e-8].\n\n    State: 2 tensors (first moment, second moment). *)\n\nval scale_by_trust_ratio : ?eps:float -> unit -> t\n(** [scale_by_trust_ratio ?eps ()] scales updates by the ratio\n    [||param|| / (||updates|| + eps)] (the LAMB/LARS trust ratio).\n\n    [eps] defaults to [1e-6].\n\n    State: 0 tensors. 
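The trust-ratio computation can be sketched on plain float arrays (the library computes the same quantity on Nx tensors):

```ocaml
(* Sketch of the LAMB/LARS trust ratio on float arrays. A zero parameter or
   update norm falls back to a ratio of 1, as documented above. *)
let l2_norm v = sqrt (Array.fold_left (fun acc x -> acc +. (x *. x)) 0.0 v)

let trust_ratio ?(eps = 1e-6) ~param ~updates () =
  let pn = l2_norm param and un = l2_norm updates in
  if pn > 0. && un > 0. then pn /. (un +. eps) else 1.0
```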
*)\n\nval scale_by_adafactor :\n  ?b2_decay:[ `Constant of float | `Rms ] ->\n  ?eps:float ->\n  ?eps_scale:float ->\n  ?factored:bool ->\n  ?clipping_threshold:float ->\n  unit ->\n  t\n(** [scale_by_adafactor ?b2_decay ?eps ?eps_scale ?factored ?clipping_threshold\n     ()] scales updates using Adafactor's factored second-moment estimation. For\n    2D+ parameters, row and column factors are maintained instead of the full\n    second moment matrix, reducing memory from O(mn) to O(m+n).\n\n    [b2_decay] controls second moment decay. [`Rms] (default) uses\n    [1 - step{^-0.8}]. [`Constant rho] uses fixed decay [rho]. [eps] defaults to\n    [1e-30]. [eps_scale] defaults to [1e-3]. [factored] defaults to [true]; when\n    [false], uses a full second moment. [clipping_threshold] defaults to [1.0];\n    set to [infinity] to disable.\n\n    State: 2 tensors (row factor, col factor for factored 2D+; full second\n    moment + dummy for 1D or unfactored). *)\n\nval scale_by_adan :\n  ?b1:float -> ?b2:float -> ?b3:float -> ?eps:float -> unit -> t\n(** [scale_by_adan ?b1 ?b2 ?b3 ?eps ()] scales updates using Adan's adaptive\n    Nesterov momentum estimation. Maintains first moment, gradient difference\n    moment, and second moment.\n\n    [b1] defaults to [0.98]. [b2] defaults to [0.92]. [b3] defaults to [0.99].\n    [eps] defaults to [1e-8].\n\n    State: 4 tensors (first moment, gradient difference moment, second moment,\n    previous gradient). *)\n\n(** {1:accumulation Accumulation Transforms} *)\n\nval trace : ?decay:float -> ?nesterov:bool -> unit -> t\n(** [trace ?decay ?nesterov ()] accumulates a trace (momentum) of updates.\n\n    [decay] defaults to [0.9]. [nesterov] defaults to [false].\n\n    State: 1 tensor (trace/velocity). *)\n\n(** {1:regularization Regularization Transforms} *)\n\nval add_decayed_weights : ?rate:Schedule.t -> unit -> t\n(** [add_decayed_weights ?rate ()] adds [rate step * param] to updates. 
When\n    placed before {!scale_by_learning_rate}, this implements decoupled weight\n    decay.\n\n    [rate] defaults to [Schedule.constant 0.01].\n\n    State: 0 tensors. *)\n\n(** {1:clipping Clipping Transforms} *)\n\nval clip_by_value : float -> t\n(** [clip_by_value delta] clips updates element-wise to [[-delta, +delta]].\n\n    State: 0 tensors. *)\n\nval clip_by_norm : float -> t\n(** [clip_by_norm max_norm] rescales updates so their L2 norm does not exceed\n    [max_norm]. Returns updates unchanged if the norm is already within bounds.\n\n    State: 0 tensors. *)\n\n(** {1:gradient_processing Gradient Processing} *)\n\nval centralize : t\n(** [centralize] subtracts the mean from each gradient tensor. For tensors with\n    2+ dimensions, the mean is computed over all axes except the first (output\n    features). Scalars and 1D tensors are left unchanged.\n\n    State: 0 tensors. *)\n\nval add_noise : eta:Schedule.t -> ?gamma:float -> unit -> t\n(** [add_noise ~eta ?gamma ()] adds Gaussian noise with variance\n    [eta step / (1 + step){^ gamma}] to updates. The annealing ensures noise\n    decreases over training.\n\n    [gamma] defaults to [0.55].\n\n    State: 0 tensors. *)\n\n(** {1:robustness Robustness} *)\n\nval apply_if_finite : t -> t\n(** [apply_if_finite tx] wraps [tx] so that if any update produced by [tx]\n    contains non-finite values (NaN or Inf), the update is skipped: zero updates\n    are returned and the inner state is not advanced.\n\n    State: inner state + 1 tensor (count of consecutive non-finite steps). *)\n\n(** {1:aliases Optimizer Aliases} *)\n\nval sgd : ?momentum:float -> ?nesterov:bool -> Schedule.t -> t\n(** [sgd lr] is stochastic gradient descent.\n\n    Without momentum: [chain [scale_by_learning_rate lr]]. With momentum:\n    [chain [trace ~decay:momentum ~nesterov (); scale_by_learning_rate lr]].\n\n    [momentum] defaults to [0.]. [nesterov] defaults to [false]. 
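One momentum step of the composition above can be sketched on a scalar (illustrative only; the library applies this element-wise per tensor): [trace] accumulates velocity, then the learning-rate stage negates and scales.

```ocaml
(* Scalar sketch of one SGD-with-momentum step as composed by [sgd]. *)
let sgd_step ~lr ~momentum ?(nesterov = false) ~vel ~grad () =
  let vel' = (momentum *. vel) +. grad in                        (* trace *)
  let direction = if nesterov then grad +. (momentum *. vel') else vel' in
  ((-.lr) *. direction, vel')               (* scale_by_learning_rate, state *)
```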
*)\n\nval adam : ?b1:float -> ?b2:float -> ?eps:float -> Schedule.t -> t\n(** [adam lr] is Adam with bias correction.\n\n    Equivalent to\n    [chain [scale_by_adam ~b1 ~b2 ~eps (); scale_by_learning_rate lr]]. *)\n\nval adamw :\n  ?b1:float -> ?b2:float -> ?eps:float -> ?weight_decay:float -> Schedule.t -> t\n(** [adamw lr] is AdamW with decoupled weight decay.\n\n    Equivalent to\n    [chain [scale_by_adam ~b1 ~b2 ~eps (); add_decayed_weights\n     ~rate:(Schedule.constant weight_decay) (); scale_by_learning_rate lr]]. *)\n\nval rmsprop : ?decay:float -> ?eps:float -> ?momentum:float -> Schedule.t -> t\n(** [rmsprop lr] is RMSprop.\n\n    Equivalent to\n    [chain [scale_by_rms ~decay ~eps (); (* trace if momentum > 0 *)\n     scale_by_learning_rate lr]]. *)\n\nval adagrad : ?eps:float -> Schedule.t -> t\n(** [adagrad lr] is Adagrad.\n\n    Equivalent to [chain [scale_by_adagrad ~eps (); scale_by_learning_rate lr]].\n*)\n\nval lamb :\n  ?b1:float -> ?b2:float -> ?eps:float -> ?weight_decay:float -> Schedule.t -> t\n(** [lamb lr] is LAMB (Layer-wise Adaptive Moments) for large-batch training.\n\n    Equivalent to\n    [chain [scale_by_adam ~b1 ~b2 ~eps (); add_decayed_weights\n     ~rate:(Schedule.constant weight_decay) (); scale_by_trust_ratio ();\n     scale_by_learning_rate lr]]. *)\n\nval lion : ?b1:float -> ?b2:float -> Schedule.t -> t\n(** [lion lr] is Lion (Evolved Sign Momentum).\n\n    Equivalent to [chain [scale_by_lion ~b1 ~b2 (); scale_by_learning_rate lr]].\n*)\n\nval radam : ?b1:float -> ?b2:float -> ?eps:float -> Schedule.t -> t\n(** [radam lr] is Rectified Adam.\n\n    Equivalent to\n    [chain [scale_by_radam ~b1 ~b2 ~eps (); scale_by_learning_rate lr]]. 
*)\n\nval lars :\n  ?momentum:float -> ?weight_decay:float -> ?nesterov:bool -> Schedule.t -> t\n(** [lars lr] is LARS (Layer-wise Adaptive Rate Scaling) for large-batch SGD\n    training.\n\n    Equivalent to\n    [chain [trace ~decay:momentum ~nesterov (); add_decayed_weights\n     ~rate:(Schedule.constant weight_decay) (); scale_by_trust_ratio ();\n     scale_by_learning_rate lr]].\n\n    [momentum] defaults to [0.9]. [weight_decay] defaults to [0.01]. [nesterov]\n    defaults to [false]. *)\n\nval adan :\n  ?b1:float ->\n  ?b2:float ->\n  ?b3:float ->\n  ?eps:float ->\n  ?weight_decay:float ->\n  Schedule.t ->\n  t\n(** [adan lr] is Adan with decoupled weight decay.\n\n    Equivalent to\n    [chain [scale_by_adan ~b1 ~b2 ~b3 ~eps (); add_decayed_weights\n     ~rate:(Schedule.constant weight_decay) (); scale_by_learning_rate lr]].\n\n    [weight_decay] defaults to [0.02]. *)\n\nval adafactor : ?b2_decay:[ `Constant of float | `Rms ] -> unit -> t\n(** [adafactor ?b2_decay ()] is Adafactor with default parameters.\n\n    Equivalent to [chain [scale_by_adafactor ?b2_decay ()]].\n\n    Adafactor includes its own learning rate schedule (inverse root of step) so\n    no separate {!scale_by_learning_rate} is needed. *)\n\n(** {1:serialization Serialization} *)\n\nval n_tensors : t -> int\n(** [n_tensors tx] is the total number of state tensors across all primitives in\n    the chain. *)\n\nval state_to_tensors : ('a, 'b) state -> int * ('a, 'b) Nx.t array\n(** [state_to_tensors state] is [(count, tensors)] where [count] is the current\n    step count and [tensors] are the internal state tensors (flat array, ordered\n    by primitive in the chain). *)\n\nval state_of_tensors : t -> count:int -> ('a, 'b) Nx.t array -> ('a, 'b) state\n(** [state_of_tensors tx ~count tensors] reconstructs state from a\n    transformation, step count, and previously serialized tensors.\n\n    Raises [Invalid_argument] if [Array.length tensors <> n_tensors tx]. *)\n"
  },
  {
    "path": "packages/vega/test/dune",
    "content": "(test\n (name test_vega)\n (package vega)\n (libraries vega nx nx.core windtrap))\n"
  },
  {
    "path": "packages/vega/test/test_vega.ml",
    "content": "(*---------------------------------------------------------------------------\n  Copyright (c) 2026 The Raven authors. All rights reserved.\n  SPDX-License-Identifier: ISC\n  ---------------------------------------------------------------------------*)\n\nopen Windtrap\nmodule S = Vega.Schedule\n\n(* Helpers *)\n\nlet f32 = Nx.float32\nlet vec xs = Nx.create f32 [| Array.length xs |] xs\nlet mat r c xs = Nx.create f32 [| r; c |] xs\nlet to_arr t = Nx.to_array (Nx.reshape [| -1 |] t)\nlet eps = float 1e-5\nlet lr01 = S.constant 0.1\n\nlet converges ~msg ~tol tx =\n  let param = ref (vec [| 5.0; -3.0 |]) in\n  let st = ref (Vega.init tx !param) in\n  for _ = 1 to 200 do\n    let p, s = Vega.step !st ~grad:!param ~param:!param in\n    param := p;\n    st := s\n  done;\n  let v = to_arr !param in\n  equal ~msg:(msg ^ \"[0]\") (float tol) 0.0 v.(0);\n  equal ~msg:(msg ^ \"[1]\") (float tol) 0.0 v.(1)\n\nlet raises_invalid_arg f =\n  raises_match\n    (fun exn -> match exn with Invalid_argument _ -> true | _ -> false)\n    f\n\n(* Schedules *)\n\nlet test_polynomial_decay () =\n  let s =\n    S.polynomial_decay ~init_value:1.0 ~end_value:0.0 ~decay_steps:100 ()\n  in\n  equal ~msg:\"step 0\" (float 1e-10) 1.0 (s 0);\n  equal ~msg:\"step 50 (power=1, linear)\" (float 1e-6) 0.5 (s 50);\n  equal ~msg:\"step 100\" (float 1e-10) 0.0 (s 100);\n  equal ~msg:\"clamps past end\" (float 1e-10) 0.0 (s 200);\n  let s2 =\n    S.polynomial_decay ~init_value:1.0 ~end_value:0.0 ~decay_steps:100\n      ~power:2.0 ()\n  in\n  equal ~msg:\"power=2 at midpoint\" (float 1e-6) 0.25 (s2 50)\n\nlet test_warmup_cosine_decay () =\n  let s =\n    S.warmup_cosine_decay ~init_value:0.0 ~peak_value:1.0 ~warmup_steps:10\n      ~decay_steps:90 ()\n  in\n  equal ~msg:\"step 0\" (float 1e-10) 0.0 (s 0);\n  equal ~msg:\"step 5 (warmup midpoint)\" (float 1e-6) 0.5 (s 5);\n  equal ~msg:\"step 10 (peak)\" (float 1e-6) 1.0 (s 10);\n  equal ~msg:\"step 100 (fully decayed)\" (float 1e-10) 0.0 (s 
100);\n  equal ~msg:\"past end\" (float 1e-10) 0.0 (s 200)\n\nlet test_piecewise_constant () =\n  let s =\n    S.piecewise_constant ~boundaries:[ 10; 20 ] ~values:[ 1.0; 0.1; 0.01 ]\n  in\n  equal ~msg:\"segment 1\" (float 1e-10) 1.0 (s 5);\n  equal ~msg:\"boundary\" (float 1e-10) 1.0 (s 10);\n  equal ~msg:\"segment 2\" (float 1e-10) 0.1 (s 15);\n  equal ~msg:\"segment 3\" (float 1e-10) 0.01 (s 25)\n\nlet test_piecewise_constant_validation () =\n  raises_invalid_arg (fun () ->\n      ignore (S.piecewise_constant ~boundaries:[ 10 ] ~values:[ 1.0 ] 0));\n  raises_invalid_arg (fun () ->\n      ignore\n        (S.piecewise_constant ~boundaries:[ 20; 10 ] ~values:[ 1.; 2.; 3. ] 0))\n\nlet test_join () =\n  let s =\n    S.join [ (10, S.constant 1.0); (10, S.constant 2.0); (10, S.constant 3.0) ]\n  in\n  equal ~msg:\"segment 1\" (float 1e-10) 1.0 (s 5);\n  equal ~msg:\"segment 2\" (float 1e-10) 2.0 (s 15);\n  equal ~msg:\"segment 3\" (float 1e-10) 3.0 (s 25);\n  equal ~msg:\"past end extends last\" (float 1e-10) 3.0 (s 100)\n\nlet test_join_step_reset () =\n  let calls = ref [] in\n  let spy name =\n    S.join\n      [\n        ( 5,\n          fun step ->\n            calls := (name, step) :: !calls;\n            0. 
);\n      ]\n  in\n  let s = spy \"a\" in\n  ignore (s 3);\n  equal ~msg:\"step passed to inner schedule\"\n    (list (pair string int))\n    [ (\"a\", 3) ]\n    (List.rev !calls)\n\nlet test_join_validation () =\n  raises_invalid_arg (fun () -> ignore (S.join [] 0));\n  raises_invalid_arg (fun () -> ignore (S.join [ (0, S.constant 1.0) ] 0))\n\nlet test_cosine_decay_restarts () =\n  let s = S.cosine_decay_restarts ~init_value:1.0 ~decay_steps:100 () in\n  equal ~msg:\"step 0 (peak)\" (float 1e-10) 1.0 (s 0);\n  equal ~msg:\"step 100 (restart)\" (float 1e-6) 1.0 (s 100);\n  equal ~msg:\"step 200 (second restart)\" (float 1e-6) 1.0 (s 200);\n  equal ~msg:\"step 50 (midpoint)\" (float 1e-6) 0.5 (s 50)\n\nlet test_cosine_decay_restarts_t_mul () =\n  let s =\n    S.cosine_decay_restarts ~init_value:1.0 ~decay_steps:10 ~t_mul:2.0 ()\n  in\n  (* First cycle: 10 steps. Second: 20 steps. *)\n  equal ~msg:\"step 0 (start)\" (float 1e-6) 1.0 (s 0);\n  equal ~msg:\"step 10 (second cycle start)\" (float 1e-6) 1.0 (s 10);\n  equal ~msg:\"step 30 (third cycle start)\" (float 1e-6) 1.0 (s 30)\n\nlet test_cosine_decay_restarts_m_mul () =\n  let s =\n    S.cosine_decay_restarts ~init_value:1.0 ~decay_steps:100 ~m_mul:0.5 ()\n  in\n  equal ~msg:\"cycle 0 peak\" (float 1e-6) 1.0 (s 0);\n  equal ~msg:\"cycle 1 peak\" (float 1e-6) 0.5 (s 100);\n  equal ~msg:\"cycle 2 peak\" (float 1e-6) 0.25 (s 200)\n\nlet test_one_cycle () =\n  let s = S.one_cycle ~max_value:1.0 ~total_steps:100 () in\n  (* warmup: 30 steps (pct_start=0.3), init=1/25=0.04, peak=1.0 *)\n  equal ~msg:\"step 0\" (float 1e-6) 0.04 (s 0);\n  equal ~msg:\"step 30 (peak)\" (float 1e-6) 1.0 (s 30);\n  (* decay: 70 steps, from 1.0 to 1/10000=0.0001 *)\n  let end_val = 1.0 /. 10000.0 in\n  equal ~msg:\"step 100 (end)\" (float 1e-6) end_val (s 100)\n\n(* Schedule property tests — these are `test` values, placed directly in the\n   group list below. 
*)\n\n(* Primitives *)\n\nlet test_scale () =\n  let tx = Vega.scale 2.0 in\n  let grad = vec [| 1.0; -0.5 |] in\n  let param = vec [| 0.; 0. |] in\n  let upd, _ = Vega.update (Vega.init tx param) ~grad ~param in\n  equal ~msg:\"scaled\" (array eps) [| 2.0; -1.0 |] (to_arr upd)\n\nlet test_scale_by_schedule () =\n  let tx = Vega.scale_by_schedule (S.constant 3.0) in\n  let grad = vec [| 1.0; 2.0 |] in\n  let param = vec [| 0.; 0. |] in\n  let upd, _ = Vega.update (Vega.init tx param) ~grad ~param in\n  equal ~msg:\"scheduled\" (array eps) [| 3.0; 6.0 |] (to_arr upd)\n\nlet test_scale_by_learning_rate () =\n  let tx = Vega.scale_by_learning_rate (S.constant 0.1) in\n  let grad = vec [| 10.0; -5.0 |] in\n  let param = vec [| 0.; 0. |] in\n  let upd, _ = Vega.update (Vega.init tx param) ~grad ~param in\n  (* negated: updates = grad * (-0.1) *)\n  equal ~msg:\"negated lr\" (array eps) [| -1.0; 0.5 |] (to_arr upd)\n\nlet test_trace () =\n  let tx = Vega.trace ~decay:0.9 () in\n  let grad = vec [| 1.0; 2.0 |] in\n  let param = vec [| 0.; 0. |] in\n  let upd, _ = Vega.update (Vega.init tx param) ~grad ~param in\n  (* step 1: vel = 0.9*0 + grad = grad; output = vel *)\n  equal ~msg:\"step 1\" (array eps) [| 1.0; 2.0 |] (to_arr upd)\n\nlet test_trace_nesterov () =\n  let tx = Vega.trace ~decay:0.9 ~nesterov:true () in\n  let grad = vec [| 1.0; 2.0 |] in\n  let param = vec [| 0.; 0. 
|] in\n  let upd, _ = Vega.update (Vega.init tx param) ~grad ~param in\n  (* vel = grad; nesterov output = grad + 0.9 * vel = grad + 0.9*grad =\n     1.9*grad *)\n  equal ~msg:\"nesterov\" (array eps) [| 1.9; 3.8 |] (to_arr upd)\n\nlet test_add_decayed_weights () =\n  let tx = Vega.add_decayed_weights ~rate:(S.constant 0.1) () in\n  let grad = vec [| 1.0; 0.0 |] in\n  let param = vec [| 10.0; -5.0 |] in\n  let upd, _ = Vega.update (Vega.init tx param) ~grad ~param in\n  (* updates + 0.1 * param *)\n  equal ~msg:\"wd\" (array eps) [| 2.0; -0.5 |] (to_arr upd)\n\nlet test_add_decayed_weights_scheduled () =\n  let rate step = 0.01 *. float_of_int step in\n  let tx = Vega.add_decayed_weights ~rate () in\n  let grad = vec [| 0.0 |] in\n  let param = vec [| 10.0 |] in\n  let st = Vega.init tx param in\n  let upd1, st = Vega.update st ~grad ~param in\n  (* step 1: rate=0.01, updates = 0 + 0.01*10 = 0.1 *)\n  equal ~msg:\"step 1\" (array eps) [| 0.1 |] (to_arr upd1);\n  let upd2, _ = Vega.update st ~grad ~param in\n  (* step 2: rate=0.02, updates = 0 + 0.02*10 = 0.2 *)\n  equal ~msg:\"step 2\" (array eps) [| 0.2 |] (to_arr upd2)\n\nlet test_clip () =\n  let tx = Vega.clip_by_value 1.0 in\n  let grad = vec [| 5.0; -0.5; -3.0 |] in\n  let param = vec [| 0.; 0.; 0. |] in\n  let upd, _ = Vega.update (Vega.init tx param) ~grad ~param in\n  equal ~msg:\"clipped\" (array eps) [| 1.0; -0.5; -1.0 |] (to_arr upd)\n\nlet test_clip_by_norm () =\n  (* norm of [3, 4] = 5, clip to 2.5 → scale by 0.5 *)\n  let tx = Vega.clip_by_norm 2.5 in\n  let grad = vec [| 3.0; 4.0 |] in\n  let param = vec [| 0.; 0. |] in\n  let upd, _ = Vega.update (Vega.init tx param) ~grad ~param in\n  equal ~msg:\"rescaled\" (array eps) [| 1.5; 2.0 |] (to_arr upd)\n\nlet test_clip_by_norm_no_op () =\n  let tx = Vega.clip_by_norm 10.0 in\n  let grad = vec [| 1.0; 1.0 |] in\n  let param = vec [| 0.; 0. 
|] in\n  let upd, _ = Vega.update (Vega.init tx param) ~grad ~param in\n  equal ~msg:\"unchanged\" (array eps) [| 1.0; 1.0 |] (to_arr upd)\n\nlet test_trust_ratio () =\n  let tx = Vega.scale_by_trust_ratio () in\n  let grad = vec [| 1.0; 0.0 |] in\n  let param = vec [| 3.0; 4.0 |] in\n  let upd, _ = Vega.update (Vega.init tx param) ~grad ~param in\n  (* ||param|| = 5, ||grad|| = 1, ratio = 5/1 = 5 *)\n  equal ~msg:\"ratio\" (array (float 1e-4)) [| 5.0; 0.0 |] (to_arr upd)\n\nlet test_trust_ratio_zero_param () =\n  let tx = Vega.scale_by_trust_ratio () in\n  let grad = vec [| 1.0 |] in\n  let param = vec [| 0.0 |] in\n  let upd, _ = Vega.update (Vega.init tx param) ~grad ~param in\n  (* zero param norm → ratio = 1 *)\n  equal ~msg:\"fallback\" (array eps) [| 1.0 |] (to_arr upd)\n\n(* Gradient processing *)\n\nlet test_centralize_2d () =\n  let tx = Vega.centralize in\n  (* 2x3 matrix: row 0 = [1,2,3] mean=2, row 1 = [4,5,6] mean=5 *)\n  let grad = mat 2 3 [| 1.; 2.; 3.; 4.; 5.; 6. |] in\n  let param = mat 2 3 [| 0.; 0.; 0.; 0.; 0.; 0. |] in\n  let upd, _ = Vega.update (Vega.init tx param) ~grad ~param in\n  equal ~msg:\"centralized\" (array eps)\n    [| -1.; 0.; 1.; -1.; 0.; 1. |]\n    (to_arr upd)\n\nlet test_centralize_1d () =\n  let tx = Vega.centralize in\n  let grad = vec [| 1.; 2.; 3. |] in\n  let param = vec [| 0.; 0.; 0. |] in\n  let upd, _ = Vega.update (Vega.init tx param) ~grad ~param in\n  equal ~msg:\"1d unchanged\" (array eps) [| 1.; 2.; 3. |] (to_arr upd)\n\nlet test_add_noise () =\n  let tx = Vega.add_noise ~eta:(S.constant 1.0) () in\n  let grad = vec [| 0.; 0. |] in\n  let param = vec [| 0.; 0. 
|] in\n  let upd, _ = Vega.update (Vega.init tx param) ~grad ~param in\n  let v = to_arr upd in\n  (* With zero grad, output is pure noise — should be non-zero with high prob *)\n  is_true ~msg:\"noise injected\"\n    (Float.abs v.(0) > 1e-10 || Float.abs v.(1) > 1e-10)\n\n(* Adam variants *)\n\nlet test_scale_by_adam_step1 () =\n  let tx = Vega.scale_by_adam ~b1:0.9 ~b2:0.999 ~eps:1e-8 () in\n  let grad = vec [| 2.0 |] in\n  let param = vec [| 0. |] in\n  let upd, _ = Vega.update (Vega.init tx param) ~grad ~param in\n  (* mu = 0.1*2 = 0.2, nu = 0.001*4 = 0.004 bc1 = 0.1, bc2 = 0.001 m_hat =\n     0.2/0.1 = 2.0, v_hat = 0.004/0.001 = 4.0 out = 2 / (sqrt(4) + 1e-8) = 2/2 =\n     1.0 *)\n  equal ~msg:\"adam step 1\" (array (float 1e-4)) [| 1.0 |] (to_arr upd)\n\nlet test_amsgrad () =\n  let tx = Vega.scale_by_adam ~amsgrad:true () in\n  let param = vec [| 0. |] in\n  let st = Vega.init tx param in\n  (* Step 1: large gradient → large v *)\n  let _, st = Vega.update st ~grad:(vec [| 10.0 |]) ~param in\n  (* Step 2: small gradient → v decreases, but v_max holds *)\n  let _, st = Vega.update st ~grad:(vec [| 0.01 |]) ~param in\n  let _, tensors = Vega.state_to_tensors st in\n  let nu = to_arr tensors.(1) in\n  let v_max = to_arr tensors.(2) in\n  is_true ~msg:\"v_max >= nu\" (v_max.(0) >= nu.(0))\n\nlet test_nesterov_differs () =\n  let tx_std = Vega.scale_by_adam () in\n  let tx_nes = Vega.scale_by_adam ~nesterov:true () in\n  let grad = vec [| 3.0; -1.0 |] in\n  let param = vec [| 0.; 0. |] in\n  let upd_std, _ = Vega.update (Vega.init tx_std param) ~grad ~param in\n  let upd_nes, _ = Vega.update (Vega.init tx_nes param) ~grad ~param in\n  let a = to_arr upd_std and b = to_arr upd_nes in\n  is_true ~msg:\"nesterov differs from standard\"\n    (Float.abs (a.(0) -. 
b.(0)) > 1e-6)\n\n(* Optimizer convergence *)\n\nlet test_lion_converges () = converges ~msg:\"lion\" ~tol:1.0 (Vega.lion lr01)\nlet test_radam_converges () = converges ~msg:\"radam\" ~tol:0.5 (Vega.radam lr01)\n\nlet test_adan_converges () =\n  converges ~msg:\"adan\" ~tol:1.0 (Vega.adan (S.constant 0.05))\n\nlet test_lamb_converges () = converges ~msg:\"lamb\" ~tol:0.5 (Vega.lamb lr01)\n\nlet test_lars_converges () =\n  converges ~msg:\"lars\" ~tol:0.5 (Vega.lars (S.constant 0.05))\n\nlet test_adafactor_converges () =\n  (* Adafactor includes its own LR (eps_scale/sqrt(step) ≈ 0.001/sqrt(t)).\n     Cumulative displacement after N steps is ~0.002*sqrt(N), so use small\n     initial values and 2D shape to exercise the factored path. *)\n  let tx = Vega.adafactor () in\n  let param = ref (mat 2 2 [| 0.1; -0.05; 0.08; -0.03 |]) in\n  let st = ref (Vega.init tx !param) in\n  for _ = 1 to 5000 do\n    let p, s = Vega.step !st ~grad:!param ~param:!param in\n    param := p;\n    st := s\n  done;\n  let v = to_arr !param in\n  Array.iter\n    (fun x -> is_true ~msg:\"adafactor converges\" (Float.abs x < 0.05))\n    v\n\nlet test_adam_amsgrad_converges () =\n  (* [Vega.adam] does not expose [?amsgrad], so build the chain directly. *)\n  converges ~msg:\"adam+amsgrad\" ~tol:0.5\n    (Vega.chain\n       [\n         Vega.scale_by_adam ~b1:0.9 ~b2:0.999 ~eps:1e-8 ~amsgrad:true ();\n         Vega.scale_by_learning_rate lr01;\n       ])\n\n(* Chain composition *)\n\nlet test_chain_associativity () =\n  let a = Vega.scale 2.0 in\n  let b = Vega.clip_by_value 5.0 in\n  let c = Vega.scale 0.5 in\n  let tx1 = Vega.chain [ Vega.chain [ a; b ]; c ] in\n  let tx2 = Vega.chain [ a; b; c ] in\n  let grad = vec [| 3.0; -4.0 |] in\n  let param = vec [| 0.; 0. |] in\n  let upd1, _ = Vega.update (Vega.init tx1 param) ~grad ~param in\n  let upd2, _ = Vega.update (Vega.init tx2 param) ~grad ~param in\n  equal ~msg:\"associative\" (array eps) (to_arr upd1) (to_arr upd2)\n\nlet test_chain_identity () =\n  let tx = Vega.scale_by_adam () in\n  let tx_wrapped = Vega.chain [ tx ] in\n  let grad = vec [| 1.0; -2.0 |] in\n  let param = vec [| 0.; 0. 
|] in\n  let upd1, _ = Vega.update (Vega.init tx param) ~grad ~param in\n  let upd2, _ = Vega.update (Vega.init tx_wrapped param) ~grad ~param in\n  equal ~msg:\"identity\" (array eps) (to_arr upd1) (to_arr upd2)\n\nlet test_chain_ordering_matters () =\n  let tx1 = Vega.chain [ Vega.clip_by_value 1.0; Vega.scale 10.0 ] in\n  let tx2 = Vega.chain [ Vega.scale 10.0; Vega.clip_by_value 1.0 ] in\n  let grad = vec [| 0.5 |] in\n  let param = vec [| 0. |] in\n  let upd1, _ = Vega.update (Vega.init tx1 param) ~grad ~param in\n  let upd2, _ = Vega.update (Vega.init tx2 param) ~grad ~param in\n  (* clip then scale: 0.5 → 0.5 → 5.0 ; scale then clip: 0.5 → 5.0 → 1.0 *)\n  equal ~msg:\"clip then scale\" (array eps) [| 5.0 |] (to_arr upd1);\n  equal ~msg:\"scale then clip\" (array eps) [| 1.0 |] (to_arr upd2)\n\n(* apply_if_finite *)\n\nlet test_finite_passes_through () =\n  let inner = Vega.scale 2.0 in\n  let tx = Vega.apply_if_finite inner in\n  let grad = vec [| 1.0; -0.5 |] in\n  let param = vec [| 0.; 0. |] in\n  let upd, _ = Vega.update (Vega.init tx param) ~grad ~param in\n  equal ~msg:\"pass-through\" (array eps) [| 2.0; -1.0 |] (to_arr upd)\n\nlet test_nan_skipped () =\n  let inner = Vega.scale 1.0 in\n  let tx = Vega.apply_if_finite inner in\n  let param = vec [| 0.; 0. |] in\n  let grad = vec [| Float.nan; 1.0 |] in\n  let upd, _ = Vega.update (Vega.init tx param) ~grad ~param in\n  let v = to_arr upd in\n  equal ~msg:\"nan → zero[0]\" (float 1e-10) 0.0 v.(0);\n  equal ~msg:\"nan → zero[1]\" (float 1e-10) 0.0 v.(1)\n\nlet test_inf_skipped () =\n  let inner = Vega.scale 1.0 in\n  let tx = Vega.apply_if_finite inner in\n  let param = vec [| 0.; 0. 
|] in\n  let grad = vec [| Float.infinity; 1.0 |] in\n  let upd, _ = Vega.update (Vega.init tx param) ~grad ~param in\n  let v = to_arr upd in\n  equal ~msg:\"inf → zero\" (float 1e-10) 0.0 v.(0)\n\nlet test_nonfinite_counter () =\n  let inner = Vega.scale 1.0 in\n  let tx = Vega.apply_if_finite inner in\n  let param = vec [| 0. |] in\n  let nan_grad = vec [| Float.nan |] in\n  let st = Vega.init tx param in\n  let _, st = Vega.update st ~grad:nan_grad ~param in\n  let _, st = Vega.update st ~grad:nan_grad ~param in\n  let _, tensors = Vega.state_to_tensors st in\n  (* Last tensor is the counter *)\n  let counter = Nx.item [] tensors.(Array.length tensors - 1) in\n  equal ~msg:\"2 consecutive non-finite\" (float 1e-10) 2.0 counter\n\n(* Serialization *)\n\nlet test_n_tensors () =\n  equal ~msg:\"sgd\" int 0 (Vega.n_tensors (Vega.sgd lr01));\n  equal ~msg:\"sgd+momentum\" int 1 (Vega.n_tensors (Vega.sgd ~momentum:0.9 lr01));\n  equal ~msg:\"adam\" int 2 (Vega.n_tensors (Vega.adam lr01));\n  equal ~msg:\"adam+amsgrad\" int 3\n    (Vega.n_tensors\n       (Vega.chain\n          [\n            Vega.scale_by_adam ~amsgrad:true ();\n            Vega.scale_by_learning_rate lr01;\n          ]));\n  equal ~msg:\"lion\" int 1 (Vega.n_tensors (Vega.lion lr01));\n  equal ~msg:\"adan\" int 4 (Vega.n_tensors (Vega.adan lr01));\n  equal ~msg:\"adafactor\" int 2 (Vega.n_tensors (Vega.adafactor ()))\n\nlet test_serialization_round_trip () =\n  let optimizers =\n    [\n      (\"adam\", Vega.adam lr01);\n      (\"adamw\", Vega.adamw lr01);\n      (\"lion\", Vega.lion lr01);\n      (\"radam\", Vega.radam lr01);\n    ]\n  in\n  List.iter\n    (fun (name, tx) ->\n      let param = vec [| 3.0; -2.0 |] in\n      let grad = vec [| 1.0; -1.0 |] in\n      (* Step once *)\n      let st = Vega.init tx param in\n      let _, st = Vega.update st ~grad ~param in\n      (* Serialize and deserialize *)\n      let count, tensors = Vega.state_to_tensors st in\n      let st2 = Vega.state_of_tensors tx 
~count tensors in\n      (* Step again from both *)\n      let upd1, _ = Vega.update st ~grad ~param in\n      let upd2, _ = Vega.update st2 ~grad ~param in\n      equal ~msg:(name ^ \" round-trip\") (array eps) (to_arr upd1) (to_arr upd2))\n    optimizers\n\nlet test_wrong_tensor_count () =\n  let tx = Vega.adam lr01 in\n  raises_invalid_arg (fun () ->\n      ignore (Vega.state_of_tensors tx ~count:1 [| vec [| 0. |] |]))\n\n(* Validation *)\n\nlet test_validation () =\n  raises_invalid_arg (fun () -> ignore (Vega.scale_by_lion ~b1:1.0 ()));\n  raises_invalid_arg (fun () -> ignore (Vega.scale_by_lion ~b2:(-0.1) ()));\n  raises_invalid_arg (fun () -> ignore (Vega.scale_by_adan ~b1:1.0 ()));\n  raises_invalid_arg (fun () -> ignore (Vega.scale_by_adan ~b2:(-0.1) ()));\n  raises_invalid_arg (fun () -> ignore (Vega.scale_by_adan ~b3:1.0 ()));\n  raises_invalid_arg (fun () -> ignore (Vega.adan ~weight_decay:(-1.) lr01));\n  raises_invalid_arg (fun () ->\n      ignore (S.cosine_decay_restarts ~init_value:1. ~decay_steps:0 () 0));\n  raises_invalid_arg (fun () ->\n      ignore (S.one_cycle ~max_value:1. 
~total_steps:0 () 0))\n\n(* Entry point *)\n\nlet () =\n  run \"Vega\"\n    [\n      group \"schedule\"\n        [\n          test \"polynomial_decay\" test_polynomial_decay;\n          test \"warmup_cosine_decay\" test_warmup_cosine_decay;\n          test \"piecewise_constant\" test_piecewise_constant;\n          test \"piecewise_constant validation\"\n            test_piecewise_constant_validation;\n          test \"join\" test_join;\n          test \"join step reset\" test_join_step_reset;\n          test \"join validation\" test_join_validation;\n          test \"cosine_decay_restarts\" test_cosine_decay_restarts;\n          test \"cosine_decay_restarts t_mul\" test_cosine_decay_restarts_t_mul;\n          test \"cosine_decay_restarts m_mul\" test_cosine_decay_restarts_m_mul;\n          test \"one_cycle\" test_one_cycle;\n          prop2 \"constant is constant\" (float 0.) nat (fun v step ->\n              S.constant v step = v);\n          prop' \"cosine_decay bounded\" nat (fun step ->\n              let s = S.cosine_decay ~init_value:1.0 ~decay_steps:100 () in\n              let v = s step in\n              is_true ~msg:\">=0\" (v >= 0.0);\n              is_true ~msg:\"<=1\" (v <= 1.0 +. 1e-10));\n          prop' \"one_cycle bounded\" nat (fun step ->\n              let s = S.one_cycle ~max_value:1.0 ~total_steps:100 () in\n              let v = s step in\n              is_true ~msg:\">=0\" (v >= 0.0);\n              is_true ~msg:\"<=max\" (v <= 1.0 +. 
1e-10));\n          prop' \"cosine_decay_restarts periodic\" nat (fun step ->\n              let period = 50 in\n              let s =\n                S.cosine_decay_restarts ~init_value:1.0 ~decay_steps:period ()\n              in\n              let v1 = s step in\n              let v2 = s (step + period) in\n              equal ~msg:\"periodic\" (float 1e-10) v1 v2);\n        ];\n      group \"primitives\"\n        [\n          test \"scale\" test_scale;\n          test \"scale_by_schedule\" test_scale_by_schedule;\n          test \"scale_by_learning_rate\" test_scale_by_learning_rate;\n          test \"trace\" test_trace;\n          test \"trace nesterov\" test_trace_nesterov;\n          test \"add_decayed_weights\" test_add_decayed_weights;\n          test \"add_decayed_weights scheduled\"\n            test_add_decayed_weights_scheduled;\n          test \"clip\" test_clip;\n          test \"clip_by_norm\" test_clip_by_norm;\n          test \"clip_by_norm no-op\" test_clip_by_norm_no_op;\n          test \"trust_ratio\" test_trust_ratio;\n          test \"trust_ratio zero param\" test_trust_ratio_zero_param;\n          test \"centralize 2d\" test_centralize_2d;\n          test \"centralize 1d\" test_centralize_1d;\n          test \"add_noise\" test_add_noise;\n        ];\n      group \"adam\"\n        [\n          test \"step 1 exact\" test_scale_by_adam_step1;\n          test \"amsgrad holds max\" test_amsgrad;\n          test \"nesterov differs\" test_nesterov_differs;\n        ];\n      group \"optimizers\"\n        [\n          test \"lion converges\" test_lion_converges;\n          test \"radam converges\" test_radam_converges;\n          test \"adan converges\" test_adan_converges;\n          test \"lamb converges\" test_lamb_converges;\n          test \"lars converges\" test_lars_converges;\n          test \"adafactor converges\" test_adafactor_converges;\n          test \"adam+amsgrad converges\" test_adam_amsgrad_converges;\n        ];\n      group 
\"chain\"\n        [\n          test \"associativity\" test_chain_associativity;\n          test \"identity\" test_chain_identity;\n          test \"ordering matters\" test_chain_ordering_matters;\n        ];\n      group \"apply_if_finite\"\n        [\n          test \"finite passes through\" test_finite_passes_through;\n          test \"nan skipped\" test_nan_skipped;\n          test \"inf skipped\" test_inf_skipped;\n          test \"counter tracks failures\" test_nonfinite_counter;\n        ];\n      group \"serialization\"\n        [\n          test \"n_tensors\" test_n_tensors;\n          test \"round-trip\" test_serialization_round_trip;\n          test \"wrong count raises\" test_wrong_tensor_count;\n        ];\n      group \"validation\" [ test \"invalid hyperparameters\" test_validation ];\n    ]\n"
  },
  {
    "path": "scripts/ubench.py",
    "content": "\"\"\"\nUbench - Micro-benchmarking library for Python.\n\nThis module mirrors the public surface of the OCaml `ubench` library,\nproviding comparable semantics while remaining idiomatic Python.\nIt focuses on high-resolution timing, light-weight statistical analysis, and\nflexible output formats suitable for comparing backends such as Nx and NumPy.\n\"\"\"\n\nfrom __future__ import annotations\n\nimport argparse\nimport dataclasses\nimport gc\nimport json\nimport math\nimport os\nimport statistics\nimport sys\nimport time\nfrom dataclasses import dataclass, field, replace\nfrom enum import Enum\nfrom typing import Any, Callable, Iterable, List, Optional, Sequence, Tuple, Union\n\ntry:\n    import resource\nexcept ImportError:  # pragma: no cover - Windows fallback\n    resource = None  # type: ignore\n\nBenchmarkFn = Callable[[], Any]\n\n\n# ---------------------------------------------------------------------------\n# Core types\n\n\n@dataclass(frozen=True)\nclass TimeLimit:\n    seconds: float\n\n\n@dataclass(frozen=True)\nclass IterationLimit:\n    iterations: int\n\n\n@dataclass(frozen=True)\nclass VarianceLimit:\n    coefficient: float\n\n\nQuota = Union[TimeLimit, IterationLimit, VarianceLimit]\n\n\nclass BenchmarkMode(str, Enum):\n    LATENCY = \"latency\"\n    THROUGHPUT = \"throughput\"\n\n\n@dataclass\nclass ProgressInfo:\n    name: str\n    current_measurement: int\n    total_measurements: Optional[int]\n    elapsed_time: float\n    estimated_remaining: Optional[float]\n\n\nclass Predictor(str, Enum):\n    ONE = \"one\"\n    RUNS = \"runs\"\n    TIME_NS = \"time_ns\"\n    WALL_NS = \"wall_ns\"\n    CYCLES = \"cycles\"\n    USER_TIME = \"user_time\"\n    SYSTEM_TIME = \"system_time\"\n    CHILD_TIME = \"child_time\"\n    MINOR_WORDS = \"minor_words\"\n    MAJOR_WORDS = \"major_words\"\n    PROMOTED_WORDS = \"promoted_words\"\n    MINOR_COLLECTIONS = \"minor_collections\"\n    MAJOR_COLLECTIONS = \"major_collections\"\n    COMPACTIONS = 
\"compactions\"\n    CUSTOM = \"custom\"\n\n\nclass Responder(str, Enum):\n    TIME_PER_RUN = \"time_per_run\"\n    WALL_PER_RUN = \"wall_per_run\"\n    MEMORY_PER_RUN = \"memory_per_run\"\n    TOTAL_TIME = \"total_time\"\n    TOTAL_WALL = \"total_wall\"\n    ALLOCATION_RATE = \"allocation_rate\"\n    CUSTOM = \"custom\"\n\n\n@dataclass\nclass Measurement:\n    time_ns: float\n    wall_ns: float\n    utime_ns: float\n    stime_ns: float\n    cutime_ns: float\n    cstime_ns: float\n    cycles: float\n    runs: int\n    minor_words: float = 0.0\n    major_words: float = 0.0\n    promoted_words: float = 0.0\n    minor_collections: int = 0\n    major_collections: int = 0\n    compactions: int = 0\n    custom_predictors: Tuple[Tuple[str, float], ...] = field(default_factory=tuple)\n\n\n@dataclass\nclass Statistics:\n    avg: float\n    min: float\n    max: float\n    std_dev: float\n    ci95_lower: float\n    ci95_upper: float\n\n\n@dataclass\nclass RegressionResult:\n    responder: Responder\n    predictors: Tuple[Predictor, ...]\n    coefficients: Tuple[float, ...]\n    r_squared: float\n    adjusted_r_squared: float\n    intercept: Optional[float]\n    confidence_intervals: Optional[Tuple[Tuple[float, float], ...]]\n\n\n@dataclass\nclass BenchData:\n    measurements: List[Measurement]\n    time_stats: Statistics\n    wall_stats: Statistics\n    memory_stats: Statistics\n    regressions: List[RegressionResult]\n    total_time_ns: float\n    total_runs: int\n\n\n@dataclass\nclass AnalysisResult:\n    name: str\n    measurements: List[Measurement]\n    time_stats: Statistics\n    wall_stats: Statistics\n    memory_stats: Statistics\n    regressions: List[RegressionResult]\n    total_time_ns: float\n    total_runs: int\n\n\n@dataclass(frozen=True)\nclass Config:\n    mode: BenchmarkMode = BenchmarkMode.THROUGHPUT\n    quota: Quota = field(default_factory=lambda: TimeLimit(0.3))\n    warmup_iterations: int = 1\n    min_measurements_required: int = 5\n    stabilize_gc: 
bool = False\n    geometric_scale_factor: float = 1.3\n    fork_benchmarks: bool = False\n    regressions_spec: Tuple[\n        Tuple[Responder, Tuple[Predictor, ...], bool], ...\n    ] = field(\n        default_factory=lambda: (\n            (Responder.TIME_PER_RUN, (Predictor.ONE, Predictor.RUNS), False),\n            (Responder.MEMORY_PER_RUN, (Predictor.RUNS,), True),\n        )\n    )\n    custom_measurer_fn: Optional[\n        Callable[[Callable[[], None], int], Measurement]\n    ] = None\n    ascii_only_output: bool = False\n    null_loop_subtraction: bool = True\n    min_cpu_seconds: float = 0.002\n    repeat: int = 1\n    progress_callback_fn: Optional[Callable[[ProgressInfo], None]] = None\n\n    @staticmethod\n    def default() -> Config:\n        return Config()\n\n    def time_limit(self, seconds: float) -> Config:\n        return replace(self, quota=TimeLimit(float(seconds)))\n\n    def iteration_limit(self, iterations: int) -> Config:\n        return replace(self, quota=IterationLimit(int(iterations)))\n\n    def variance_limit(self, coefficient: float) -> Config:\n        return replace(self, quota=VarianceLimit(float(coefficient)))\n\n    def warmup(self, iterations: int) -> Config:\n        return replace(self, warmup_iterations=int(iterations))\n\n    def min_measurements(self, count: int) -> Config:\n        return replace(self, min_measurements_required=int(count))\n\n    def gc_stabilization(self, enabled: bool) -> Config:\n        return replace(self, stabilize_gc=bool(enabled))\n\n    def fork(self, enabled: bool) -> Config:\n        return replace(self, fork_benchmarks=bool(enabled))\n\n    def ascii_only(self, enabled: bool) -> Config:\n        return replace(self, ascii_only_output=bool(enabled))\n\n    def geometric_scale(self, factor: float) -> Config:\n        if factor <= 1.0:\n            raise ValueError(\"geometric_scale must be > 1.0\")\n        return replace(self, geometric_scale_factor=float(factor))\n\n    def regressions(\n   
     self, entries: Sequence[Tuple[Responder, Sequence[Predictor], bool]]\n    ) -> Config:\n        normalized = tuple(\n            (resp, tuple(preds), bool(include_intercept))\n            for resp, preds, include_intercept in entries\n        )\n        return replace(self, regressions_spec=normalized)\n\n    def custom_measurer(\n        self, measurer: Optional[Callable[[Callable[[], None], int], Measurement]]\n    ) -> Config:\n        return replace(self, custom_measurer_fn=measurer)\n\n    def progress_callback(\n        self, callback: Optional[Callable[[ProgressInfo], None]]\n    ) -> Config:\n        return replace(self, progress_callback_fn=callback)\n\n    def null_loop(self, enabled: bool) -> Config:\n        return replace(self, null_loop_subtraction=bool(enabled))\n\n    def min_cpu(self, seconds: float) -> Config:\n        return replace(self, min_cpu_seconds=float(seconds))\n\n    def repeat_runs(self, count: int) -> Config:\n        return replace(self, repeat=max(1, int(count)))\n\n    def build(self) -> Config:\n        return self\n\n\ndefault_config = Config.default()\n\n\n# ---------------------------------------------------------------------------\n# Statistics utilities\n\n\ndef _mean(values: Sequence[float]) -> float:\n    return statistics.mean(values) if values else 0.0\n\n\ndef _std_dev(values: Sequence[float]) -> float:\n    if len(values) < 2:\n        return 0.0\n    return statistics.stdev(values)\n\n\ndef random_state() -> \"random.Random\":\n    import random\n\n    if not hasattr(random_state, \"_rng\"):\n        random_state._rng = random.Random()\n        random_state._rng.seed(int(time.time() * 1e9) ^ os.getpid())\n    return random_state._rng  # type: ignore[attr-defined]\n\n\ndef _bootstrap_interval(\n    values: List[float], confidence: float = 0.95\n) -> Tuple[float, float]:\n    if len(values) < 3:\n        mean_val = _mean(values)\n        return (mean_val, mean_val)\n\n    rng = random_state()\n    n = len(values)\n  
  resamples = max(1000, 10 * n)\n    stats = []\n    for _ in range(resamples):\n        sample = [values[rng.randrange(0, n)] for _ in range(n)]\n        stats.append(_mean(sample))\n    stats.sort()\n    alpha = 1.0 - confidence\n    lower_idx = int(resamples * (alpha / 2.0))\n    upper_idx = min(resamples - 1, int(resamples * (1.0 - alpha / 2.0)))\n    return (stats[lower_idx], stats[upper_idx])\n\n\ndef _confidence_interval(values: List[float]) -> Tuple[float, float]:\n    if len(values) >= 20:\n        return _bootstrap_interval(values)\n    if len(values) < 3:\n        mean_val = _mean(values)\n        return (mean_val, mean_val)\n    sorted_vals = sorted(values)\n    n = len(sorted_vals)\n    lower_idx = max(0, n * 25 // 1000)\n    upper_idx = min(n - 1, n * 975 // 1000)\n    return (sorted_vals[lower_idx], sorted_vals[upper_idx])\n\n\ndef compute_statistics(values: List[float]) -> Statistics:\n    if not values:\n        return Statistics(0.0, 0.0, 0.0, 0.0, 0.0, 0.0)\n\n    avg = _mean(values)\n    std_dev = _std_dev(values)\n    ci_lower, ci_upper = _confidence_interval(values)\n    return Statistics(\n        avg=avg,\n        min=min(values),\n        max=max(values),\n        std_dev=std_dev,\n        ci95_lower=ci_lower,\n        ci95_upper=ci_upper,\n    )\n\n\n# ---------------------------------------------------------------------------\n# Math helpers for statistical tests\n\n\ndef log_gamma(x: float) -> float:\n    return math.lgamma(x)\n\n\ndef _max_tiny(x: float) -> float:\n    return max(1e-30, x)\n\n\n_BETAI_CF_EPS = sys.float_info.epsilon\n\n\ndef _betai_cf(x: float, a: float, b: float) -> float:\n    apb = a + b\n    ap1 = a + 1.0\n    am1 = a - 1.0\n    # Initialize Lentz's method\n    d = 1.0 / _max_tiny(1.0 - (apb * x / ap1))\n    c = 1.0\n    f = d\n    m = 1.0\n    while True:\n        m2 = 2.0 * m\n        cf_d2m = m * (b - m) * x / ((am1 + m2) * (a + m2))\n        d = 1.0 / _max_tiny(1.0 + (cf_d2m * d))\n        c = _max_tiny(1.0 + 
(cf_d2m / c))\n        f *= d * c\n\n        cf_d2m1 = -(a + m) * (apb + m) * x / ((a + m2) * (ap1 + m2))\n        d = 1.0 / _max_tiny(1.0 + (cf_d2m1 * d))\n        c = _max_tiny(1.0 + (cf_d2m1 / c))\n        delta = c * d\n        f *= delta\n        if abs(delta - 1.0) < _BETAI_CF_EPS:\n            break\n        m += 1.0\n        if m > 300.0:\n            # Guard against non-convergence on pathological inputs; Lentz's\n            # method normally converges in well under 100 iterations.\n            break\n    return f\n\n\ndef betai(x: float, a: float, b: float) -> float:\n    if a <= 0.0 or b <= 0.0:\n        raise ValueError(\"betai: a and b must be positive\")\n    if x < 0.0 or x > 1.0:\n        raise ValueError(\"betai: x must be in [0, 1]\")\n    if x == 0.0:\n        return 0.0\n    if x == 1.0:\n        return 1.0\n\n    m = math.exp(\n        log_gamma(a + b)\n        - log_gamma(a)\n        - log_gamma(b)\n        + (a * math.log(x))\n        + (b * math.log(1.0 - x))\n    )\n    if x < (a + 1.0) / (a + b + 2.0):\n        return m * _betai_cf(x, a, b) / a\n    return 1.0 - (m * _betai_cf(1.0 - x, b, a) / b)\n\n\ndef cpl_student_t(t: float, nu: float) -> float:\n    return betai(nu / (nu + (t * t)), 0.5 * nu, 0.5)\n\n\ndef different_rates(\n    significance: float,\n    n1: int,\n    r1: float,\n    var1: float,\n    n2: int,\n    r2: float,\n    var2: float,\n) -> bool:\n    if n1 <= 0 or n2 <= 0:\n        return False\n    if n1 == 1 and n2 == 1:\n        return True\n\n    df = float(n1 + n2 - 2)\n    n1f = float(n1)\n    n2f = float(n2)\n    pooled = (var1 + var2) / df\n    if pooled <= 0.0:\n        return False\n    se = math.sqrt(pooled * ((1.0 / n1f) + (1.0 / n2f)))\n    if se == 0.0:\n        return False\n    t_val = abs(r1 - r2) / se\n    return cpl_student_t(t_val, df) <= significance\n\n\n# ---------------------------------------------------------------------------\n# Formatting helpers\n\n\ndef format_time_ns(ns: float) -> str:\n    if ns < 0.0:\n        return f\"-{format_time_ns(-ns)}\"\n    units = [\n        (1e9, \"s\"),\n        (1e6, \"ms\"),\n        (1e3, \"µs\"),\n        (1.0, \"ns\"),\n    ]\n    for scale, 
suffix in units:\n        if ns >= scale:\n            value = ns / scale\n            return f\"{value:,.2f}{suffix}\".replace(\",\", \"_\")\n    return f\"{ns:.2f}ns\"\n\n\ndef format_words(words: float) -> str:\n    if words < 0.0:\n        return f\"-{format_words(-words)}\"\n    units = [\n        (1e9, \"Gw\"),\n        (1e6, \"Mw\"),\n        (1e3, \"kw\"),\n        (1.0, \"w\"),\n    ]\n    value = words\n    for scale, suffix in units:\n        if value >= scale:\n            res = value / scale\n            return f\"{res:,.2f}{suffix}\".replace(\",\", \"_\")\n    return f\"{value:.2f}w\"\n\n\ndef format_number(value: float) -> str:\n    units = [\n        (1e9, \"G\"),\n        (1e6, \"M\"),\n        (1e3, \"k\"),\n    ]\n    abs_value = abs(value)\n    for scale, suffix in units:\n        if abs_value >= scale:\n            res = value / scale\n            return f\"{res:,.2f}{suffix}\".replace(\",\", \"_\")\n    return f\"{value:.2f}\"\n\n\n# ---------------------------------------------------------------------------\n# Measurement primitives\n\n\ndef _collect_times() -> Tuple[int, int, os.times_result]:\n    return time.perf_counter_ns(), time.process_time_ns(), os.times()\n\n\ndef _to_measurement(\n    before: Tuple[int, int, os.times_result],\n    after: Tuple[int, int, os.times_result],\n    runs: int,\n) -> Measurement:\n    wall_start, cpu_start, tms_start = before\n    wall_end, cpu_end, tms_end = after\n    utime = (tms_end.user - tms_start.user) * 1e9\n    stime = (tms_end.system - tms_start.system) * 1e9\n    cutime = (tms_end.children_user - tms_start.children_user) * 1e9\n    cstime = (tms_end.children_system - tms_start.children_system) * 1e9\n    wall_ns = float(wall_end - wall_start)\n    time_ns = float(cpu_end - cpu_start)\n    # Rough estimate assuming a nominal 3 GHz clock; hardware cycle counters\n    # are not accessible from pure Python.\n    estimated_cycles = (wall_ns / 1e9) * 3e9\n    measurement = Measurement(\n        time_ns=time_ns,\n        wall_ns=wall_ns,\n        utime_ns=utime,\n        stime_ns=stime,\n        cutime_ns=cutime,\n        cstime_ns=cstime,\n        cycles=estimated_cycles,\n        
runs=runs,\n    )\n    # Where the resource module is available, record page-fault counts as a\n    # rough proxy for allocation activity; these are not OCaml-style word\n    # counts despite the field names.\n    if resource is not None:\n        usage = resource.getrusage(resource.RUSAGE_SELF)\n        measurement.minor_words = getattr(usage, \"ru_minflt\", 0.0)\n        measurement.major_words = getattr(usage, \"ru_majflt\", 0.0)\n    return measurement\n\n\ndef _subtract_measurements(target: Measurement, baseline: Measurement) -> Measurement:\n    for attr in [\n        \"time_ns\",\n        \"wall_ns\",\n        \"utime_ns\",\n        \"stime_ns\",\n        \"cutime_ns\",\n        \"cstime_ns\",\n        \"cycles\",\n    ]:\n        value = getattr(target, attr) - getattr(baseline, attr)\n        setattr(target, attr, max(0.0, value))\n    return target\n\n\ndef _measure_callable(func: Callable[[], Any], runs: int) -> Measurement:\n    before = _collect_times()\n    for _ in range(runs):\n        func()\n    after = _collect_times()\n    return _to_measurement(before, after, runs)\n\n\ndef _measure_null_loop(runs: int) -> Measurement:\n    return _measure_callable(lambda: None, runs)\n\n\ndef _measure_one_batch(\n    func: Callable[[], Any],\n    batch_size: int,\n    *,\n    null_loop_subtraction: bool,\n) -> Measurement:\n    # Short per-batch warmup to touch caches before timing.\n    for _ in range(3):\n        func()\n    measurement = _measure_callable(func, batch_size)\n    if null_loop_subtraction and batch_size > 0:\n        baseline = _measure_null_loop(batch_size)\n        measurement = _subtract_measurements(measurement, baseline)\n    return measurement\n\n\ndef stabilize_gc() -> None:\n    gc.collect()\n\n\n# ---------------------------------------------------------------------------\n# Regression analysis\n\n\ndef _predictor_value(measurement: Measurement, predictor: Predictor) -> float:\n    if predictor == Predictor.ONE:\n        return 1.0\n    if predictor == Predictor.RUNS:\n        return float(measurement.runs)\n    if predictor == Predictor.TIME_NS:\n        return measurement.time_ns\n    if predictor == Predictor.WALL_NS:\n        return measurement.wall_ns\n    if predictor == 
Predictor.CYCLES:\n        return measurement.cycles\n    if predictor == Predictor.USER_TIME:\n        return measurement.utime_ns\n    if predictor == Predictor.SYSTEM_TIME:\n        return measurement.stime_ns\n    if predictor == Predictor.CHILD_TIME:\n        return measurement.cutime_ns + measurement.cstime_ns\n    if predictor == Predictor.MINOR_WORDS:\n        return measurement.minor_words\n    if predictor == Predictor.MAJOR_WORDS:\n        return measurement.major_words\n    if predictor == Predictor.PROMOTED_WORDS:\n        return measurement.promoted_words\n    if predictor == Predictor.MINOR_COLLECTIONS:\n        return float(measurement.minor_collections)\n    if predictor == Predictor.MAJOR_COLLECTIONS:\n        return float(measurement.major_collections)\n    if predictor == Predictor.COMPACTIONS:\n        return float(measurement.compactions)\n    if predictor == Predictor.CUSTOM:\n        # Fallback to first custom predictor, mirroring OCaml semantics.\n        return measurement.custom_predictors[0][1] if measurement.custom_predictors else 0.0\n    raise ValueError(f\"Unsupported predictor: {predictor}\")\n\n\ndef _responder_value(measurement: Measurement, responder: Responder) -> float:\n    if responder == Responder.TIME_PER_RUN:\n        return measurement.time_ns / max(1, measurement.runs)\n    if responder == Responder.WALL_PER_RUN:\n        return measurement.wall_ns / max(1, measurement.runs)\n    if responder == Responder.MEMORY_PER_RUN:\n        return measurement.minor_words / max(1, measurement.runs)\n    if responder == Responder.TOTAL_TIME:\n        return measurement.time_ns\n    if responder == Responder.TOTAL_WALL:\n        return measurement.wall_ns\n    if responder == Responder.ALLOCATION_RATE:\n        seconds = measurement.time_ns / 1e9\n        return measurement.minor_words / seconds if seconds > 0 else 0.0\n    if responder == Responder.CUSTOM:\n        return measurement.custom_predictors[0][1] if 
measurement.custom_predictors else 0.0\n    raise ValueError(f\"Unsupported responder: {responder}\")\n\n\ndef _transpose(matrix: List[List[float]]) -> List[List[float]]:\n    return [list(row) for row in zip(*matrix)]\n\n\ndef _matmul(a: List[List[float]], b: List[List[float]]) -> List[List[float]]:\n    result = [[0.0 for _ in range(len(b[0]))] for _ in range(len(a))]\n    for i in range(len(a)):\n        for k in range(len(b)):\n            aik = a[i][k]\n            if aik == 0.0:\n                continue\n            for j in range(len(b[0])):\n                result[i][j] += aik * b[k][j]\n    return result\n\n\ndef _matvec_mul(matrix: List[List[float]], vector: List[float]) -> List[float]:\n    return [sum(row[j] * vector[j] for j in range(len(vector))) for row in matrix]\n\n\ndef _solve_normal_equations(xtx: List[List[float]], xty: List[float]) -> List[float]:\n    n = len(xtx)\n    augmented = [row[:] + [value] for row, value in zip(xtx, xty)]\n\n    for i in range(n):\n        pivot = augmented[i][i]\n        if abs(pivot) < 1e-12:\n            for j in range(i + 1, n):\n                if abs(augmented[j][i]) > abs(pivot):\n                    augmented[i], augmented[j] = augmented[j], augmented[i]\n                    pivot = augmented[i][i]\n                    break\n        if abs(pivot) < 1e-12:\n            raise ValueError(\"Matrix is singular\")\n        pivot_inv = 1.0 / pivot\n        for j in range(i, n + 1):\n            augmented[i][j] *= pivot_inv\n        for k in range(n):\n            if k == i:\n                continue\n            factor = augmented[k][i]\n            if factor == 0.0:\n                continue\n            for j in range(i, n + 1):\n                augmented[k][j] -= factor * augmented[i][j]\n    return [augmented[i][n] for i in range(n)]\n\n\ndef ordinary_least_squares(\n    measurements: Sequence[Measurement],\n    responder: Responder,\n    predictors: Sequence[Predictor],\n    include_intercept: bool,\n) -> 
RegressionResult:\n    if not measurements:\n        return RegressionResult(\n            responder=responder,\n            predictors=tuple(predictors),\n            coefficients=tuple(),\n            r_squared=0.0,\n            adjusted_r_squared=0.0,\n            intercept=None,\n            confidence_intervals=None,\n        )\n\n    y = [_responder_value(m, responder) for m in measurements]\n    x_rows = []\n    for m in measurements:\n        row = [_predictor_value(m, p) for p in predictors]\n        if include_intercept:\n            row.insert(0, 1.0)\n        x_rows.append(row)\n\n    xt = _transpose(x_rows)\n    xtx = _matmul(xt, x_rows)\n    xty = _matvec_mul(xt, y)\n    try:\n        coeffs = _solve_normal_equations(xtx, xty)\n    except ValueError:\n        coeffs = [0.0 for _ in range(len(predictors) + (1 if include_intercept else 0))]\n\n    predictions = [_matvec_mul([row], coeffs)[0] for row in x_rows]\n    mean_y = _mean(y)\n    ss_tot = sum((val - mean_y) ** 2 for val in y)\n    ss_res = sum((y_i - y_hat) ** 2 for y_i, y_hat in zip(y, predictions))\n    r_squared = 0.0 if ss_tot == 0 else max(0.0, 1.0 - (ss_res / ss_tot))\n    n = len(measurements)\n    p = len(coeffs)\n    adjusted_r_squared = (\n        1.0 - ((1.0 - r_squared) * (n - 1) / (n - p - 1)) if n > p + 1 else r_squared\n    )\n\n    intercept = coeffs[0] if include_intercept else None\n    coeff_tuple = tuple(coeffs[1:]) if include_intercept else tuple(coeffs)\n\n    return RegressionResult(\n        responder=responder,\n        predictors=tuple(predictors),\n        coefficients=coeff_tuple,\n        r_squared=r_squared,\n        adjusted_r_squared=adjusted_r_squared,\n        intercept=intercept,\n        confidence_intervals=None,\n    )\n\n\n# ---------------------------------------------------------------------------\n# Benchmark definitions\n\n\n@dataclass\nclass _Benchmark:\n    name: str\n    fn: BenchmarkFn\n\n\n@dataclass\nclass _BenchmarkGroup:\n    name: str\n    
benchmarks: Sequence[Union[\"_Benchmark\", \"_BenchmarkGroup\"]]\n\n\nBenchmark = Union[_Benchmark, _BenchmarkGroup]\n\n\ndef bench(name: str, fn: Callable[[], Any]) -> _Benchmark:\n    return _Benchmark(name=name, fn=lambda: fn())\n\n\ndef create(name: str, fn: Callable[[], Any]) -> _Benchmark:\n    return bench(name, fn)\n\n\ndef group(name: str, benchmarks: Sequence[Benchmark]) -> _BenchmarkGroup:\n    return _BenchmarkGroup(name=name, benchmarks=list(benchmarks))\n\n\ndef create_group(name: str, benchmarks: Sequence[Benchmark]) -> _BenchmarkGroup:\n    return group(name, benchmarks)\n\n\ndef bench_with_setup(\n    name: str,\n    *,\n    setup: Callable[[], Any],\n    teardown: Callable[[Any], None],\n    f: Callable[[Any], Any],\n) -> _Benchmark:\n    def wrapped() -> None:\n        resource = setup()\n        try:\n            f(resource)\n        finally:\n            teardown(resource)\n\n    return _Benchmark(name=name, fn=wrapped)\n\n\ndef create_with_setup(\n    name: str,\n    *,\n    setup: Callable[[], Any],\n    teardown: Callable[[Any], None],\n    f: Callable[[Any], Any],\n) -> _Benchmark:\n    return bench_with_setup(name, setup=setup, teardown=teardown, f=f)\n\n\ndef bench_param(\n    base_name: str,\n    func: Callable[..., Any],\n    *,\n    params: Sequence[Tuple[str, Any]],\n) -> List[_Benchmark]:\n    benchmarks = []\n    for label, value in params:\n        name = f\"{base_name}[{label}]\"\n\n        def wrapped(\n            fn: Callable[..., Any] = func, param=value\n        ) -> None:  # default values capture loop vars\n            fn(param=param)\n\n        benchmarks.append(_Benchmark(name=name, fn=wrapped))\n    return benchmarks\n\n\ndef create_param(\n    base_name: str,\n    func: Callable[..., Any],\n    *,\n    params: Sequence[Tuple[str, Any]],\n) -> List[_Benchmark]:\n    return bench_param(base_name, func, params=params)\n\n\ndef _flatten(benchmark: Benchmark, prefix: str = \"\") -> List[_Benchmark]:\n    if 
isinstance(benchmark, _Benchmark):\n        full_name = benchmark.name if not prefix else f\"{prefix}/{benchmark.name}\"\n        return [_Benchmark(name=full_name, fn=benchmark.fn)]\n    new_prefix = benchmark.name if not prefix else f\"{prefix}/{benchmark.name}\"\n    flattened: List[_Benchmark] = []\n    for child in benchmark.benchmarks:\n        flattened.extend(_flatten(child, new_prefix))\n    return flattened\n\n\ndef flatten_benchmarks(benchmarks: Sequence[Benchmark]) -> List[_Benchmark]:\n    flattened: List[_Benchmark] = []\n    for benchmark in benchmarks:\n        flattened.extend(_flatten(benchmark))\n    return flattened\n\n\n# ---------------------------------------------------------------------------\n# Execution engine\n\n\ndef run_bench_with_config(config: Config, fn: Callable[[], None]) -> BenchData:\n    if config.geometric_scale_factor <= 1.0:\n        raise ValueError(\"geometric_scale must be > 1.0\")\n\n    measurements: List[Measurement] = []\n    total_time_ns = 0.0\n    total_runs = 0\n    measurement_count = 0\n    batch_size = 1\n    start_cpu = time.process_time()\n    samples: List[float] = []\n\n    for _ in range(config.warmup_iterations):\n        fn()\n\n    if config.custom_measurer_fn is not None:\n        measure_batch = lambda runs: config.custom_measurer_fn(fn, runs)\n    else:\n        measure_batch = lambda runs: _measure_one_batch(\n            fn, runs, null_loop_subtraction=config.null_loop_subtraction\n        )\n\n    def should_continue(elapsed_cpu: float) -> bool:\n        min_met = measurement_count >= config.min_measurements_required\n        quota = config.quota\n        if isinstance(quota, TimeLimit):\n            return not min_met or elapsed_cpu < quota.seconds\n        if isinstance(quota, IterationLimit):\n            return total_runs < quota.iterations\n        if isinstance(quota, VarianceLimit):\n            if measurement_count < config.min_measurements_required:\n                return True\n          
  mean_val = _mean(samples)\n            if mean_val == 0.0:\n                return False\n            std_val = _std_dev(samples)\n            return (std_val / mean_val) > quota.coefficient\n        return False\n\n    while True:\n        if config.stabilize_gc:\n            stabilize_gc()\n        measurement = measure_batch(batch_size)\n        if measurement.time_ns / 1e9 < config.min_cpu_seconds:\n            batch_size = max(batch_size + 1, int(batch_size * config.geometric_scale_factor))\n            continue\n        measurement_count += 1\n        measurement_per_run = (\n            measurement.time_ns / max(1, measurement.runs) if measurement.runs else 0.0\n        )\n        measurements.append(measurement)\n        samples.append(measurement_per_run)\n        total_time_ns += measurement.time_ns\n        total_runs += measurement.runs\n\n        elapsed_cpu = time.process_time() - start_cpu\n        if not should_continue(elapsed_cpu):\n            break\n\n        next_batch = max(\n            batch_size + 1, int(math.ceil(batch_size * config.geometric_scale_factor))\n        )\n        if isinstance(config.quota, IterationLimit):\n            remaining = config.quota.iterations - total_runs\n            next_batch = max(1, min(next_batch, remaining))\n        batch_size = next_batch if next_batch > 0 else 1\n\n    time_values = [m.time_ns / max(1, m.runs) for m in measurements]\n    wall_values = [m.wall_ns / max(1, m.runs) for m in measurements]\n    memory_values = [m.minor_words / max(1, m.runs) for m in measurements]\n\n    regressions = [\n        ordinary_least_squares(\n            measurements,\n            responder=resp,\n            predictors=preds,\n            include_intercept=include_intercept,\n        )\n        for resp, preds, include_intercept in config.regressions_spec\n    ]\n\n    return BenchData(\n        measurements=measurements,\n        time_stats=compute_statistics(time_values),\n        
wall_stats=compute_statistics(wall_values),\n        memory_stats=compute_statistics(memory_values),\n        regressions=regressions,\n        total_time_ns=total_time_ns,\n        total_runs=total_runs,\n    )\n\n\ndef run_silent(\n    benchmarks: Sequence[Benchmark],\n    *,\n    config: Config = default_config,\n) -> List[AnalysisResult]:\n    flattened = flatten_benchmarks(benchmarks)\n    results: List[AnalysisResult] = []\n    start_wall = time.perf_counter()\n\n    for index, bench_impl in enumerate(flattened, start=1):\n        # Deliberately no console output here: run_silent reports progress\n        # only through the optional progress callback.\n        bench_data = run_bench_with_config(config, bench_impl.fn)\n\n        if config.progress_callback_fn is not None:\n            elapsed = time.perf_counter() - start_wall\n            info = ProgressInfo(\n                name=bench_impl.name,\n                current_measurement=index,\n                total_measurements=len(flattened),\n                elapsed_time=elapsed,\n                estimated_remaining=None,\n            )\n            config.progress_callback_fn(info)\n\n        results.append(\n            AnalysisResult(\n                name=bench_impl.name,\n                measurements=bench_data.measurements,\n                time_stats=bench_data.time_stats,\n                wall_stats=bench_data.wall_stats,\n                memory_stats=bench_data.memory_stats,\n                regressions=bench_data.regressions,\n                total_time_ns=bench_data.total_time_ns,\n                total_runs=bench_data.total_runs,\n            )\n        )\n\n    return results\n\n\ndef run_and_print(\n    benchmarks: Sequence[Benchmark],\n    *,\n    config: Config = default_config,\n    output_format: str = \"pretty\",\n    verbose: bool = False,\n) -> List[AnalysisResult]:\n    results = run_silent(benchmarks, config=config)\n    print(\"\\nBenchmark Results:\")\n    fmt = output_format.lower()\n    
if fmt in {\"pretty\", \"table\"}:\n        print_pretty_table(results, ascii_only=config.ascii_only_output)\n    elif fmt == \"json\":\n        print_json(results)\n    elif fmt == \"csv\":\n        print_csv(results)\n    else:\n        raise ValueError(f\"Unsupported output format: {output_format}\")\n    if verbose:\n        print_regression_analysis(results)\n    return results\n\n\ndef run(\n    benchmarks: Sequence[Benchmark],\n    *,\n    config: Config = default_config,\n    output_format: str = \"pretty\",\n    verbose: bool = False,\n) -> List[AnalysisResult]:\n    return run_and_print(\n        benchmarks, config=config, output_format=output_format, verbose=verbose\n    )\n\n\n# ---------------------------------------------------------------------------\n# Output helpers\n\n\ndef print_pretty_table(\n    results: Sequence[AnalysisResult],\n    *,\n    ascii_only: bool = False,\n) -> None:\n    if not results:\n        print(\"No benchmark results to display.\")\n        return\n\n    reset = \"\\x1b[0m\"\n    bold = \"\\x1b[1m\"\n    green = \"\\x1b[32m\"\n    cyan = \"\\x1b[36m\"\n\n    def colorize(code: str, text: str) -> str:\n        return f\"{code}{text}{reset}\"\n\n    def strip_ansi_codes(text: str) -> str:\n        result_chars: List[str] = []\n        i = 0\n        while i < len(text):\n            if text[i] == \"\\x1b\" and i + 1 < len(text) and text[i + 1] == \"[\":\n                end = text.find(\"m\", i + 2)\n                if end == -1:\n                    result_chars.append(text[i])\n                    i += 1\n                else:\n                    i = end + 1\n            else:\n                result_chars.append(text[i])\n                i += 1\n        return \"\".join(result_chars)\n\n    def visual_width(text: str) -> int:\n        stripped = strip_ansi_codes(text)\n        return sum(1 for _ in stripped)\n\n    def pad_left(text: str, width: int) -> str:\n        length = visual_width(text)\n        return text if 
length >= width else \" \" * (width - length) + text\n\n    def pad_right(text: str, width: int) -> str:\n        length = visual_width(text)\n        return text if length >= width else text + \" \" * (width - length)\n\n    fastest_wall = min(r.wall_stats.avg for r in results)\n    fastest_cpu = min(r.time_stats.avg for r in results)\n    lowest_memory = min(r.memory_stats.avg for r in results)\n\n    sorted_results = sorted(results, key=lambda r: r.wall_stats.avg)\n\n    rows_data: List[Tuple[AnalysisResult, List[str]]] = []\n    for entry in sorted_results:\n        wall_avg = entry.wall_stats.avg\n        cpu_avg = entry.time_stats.avg\n        wall_str = format_time_ns(wall_avg)\n        cpu_str = format_time_ns(cpu_avg)\n        mem_str = format_words(entry.memory_stats.avg)\n        speedup = fastest_wall / wall_avg if wall_avg > 0.0 else float(\"inf\")\n        vs_fastest = wall_avg / fastest_wall if fastest_wall > 0.0 else float(\"inf\")\n        row = [\n            entry.name,\n            wall_str,\n            cpu_str,\n            mem_str,\n            f\"{speedup:.2f}x\",\n            f\"{vs_fastest * 100.0:.0f}%\",\n        ]\n        if math.isclose(wall_avg, fastest_wall):\n            row[1] = colorize(green, row[1])\n        if math.isclose(cpu_avg, fastest_cpu):\n            row[2] = colorize(green, row[2])\n        if entry.memory_stats.avg == lowest_memory:\n            row[3] = colorize(cyan, row[3])\n        if speedup >= 1.0:\n            row[4] = colorize(green, row[4])\n        if math.isclose(vs_fastest, 1.0):\n            row[5] = colorize(green, row[5])\n        rows_data.append((entry, row))\n\n    headers = [\"Name\", \"Wall/Run\", \"CPU/Run\", \"mWd/Run\", \"Speedup\", \"vs Fastest\"]\n    widths = [visual_width(h) for h in headers]\n    for _, row in rows_data:\n        for index, value in enumerate(row):\n            widths[index] = max(widths[index], visual_width(value))\n\n    if ascii_only:\n        top_left, top_mid, 
top_right = \"+\", \"+\", \"+\"\n        mid_left, mid_mid, mid_right = \"+\", \"+\", \"+\"\n        bot_left, bot_mid, bot_right = \"+\", \"+\", \"+\"\n        hline = \"-\"\n        vline = \"|\"\n    else:\n        top_left, top_mid, top_right = \"┌\", \"┬\", \"┐\"\n        mid_left, mid_mid, mid_right = \"├\", \"┼\", \"┤\"\n        bot_left, bot_mid, bot_right = \"└\", \"┴\", \"┘\"\n        hline = \"─\"\n        vline = \"│\"\n\n    def repeat_str(char: str, count: int) -> str:\n        return char * count\n\n    def make_border(left: str, mid: str, right: str) -> str:\n        segments = [repeat_str(hline, width + 2) for width in widths]\n        joined = f\"{mid}\".join(segments)\n        return f\"{left}{joined}{right}\"\n\n    top_border = make_border(top_left, top_mid, top_right)\n    separator = make_border(mid_left, mid_mid, mid_right)\n    bottom_border = make_border(bot_left, bot_mid, bot_right)\n\n    print(top_border)\n    header_row = []\n    for index, header in enumerate(headers):\n        padded = pad_right(header, widths[index]) if index == 0 else pad_left(header, widths[index])\n        header_row.append(colorize(bold, padded))\n    header_str = f\" {vline} \".join(header_row)\n    print(f\"{vline} {header_str} {vline}\")\n    print(separator)\n\n    for _, row in rows_data:\n        padded_row = []\n        for index, value in enumerate(row):\n            padded = pad_right(value, widths[index]) if index == 0 else pad_left(value, widths[index])\n            padded_row.append(padded)\n        row_str = f\" {vline} \".join(padded_row)\n        print(f\"{vline} {row_str} {vline}\")\n    print(bottom_border)\n\n\ndef print_json(results: Sequence[AnalysisResult]) -> None:\n    payload = []\n    for result in results:\n        payload.append(\n            {\n                \"name\": result.name,\n                \"time_stats\": dataclasses.asdict(result.time_stats),\n                \"wall_stats\": dataclasses.asdict(result.wall_stats),\n          
      \"memory_stats\": dataclasses.asdict(result.memory_stats),\n                \"total_time_ns\": result.total_time_ns,\n                \"total_runs\": result.total_runs,\n                \"regressions\": [\n                    {\n                        \"responder\": reg.responder.value,\n                        \"predictors\": [pred.value for pred in reg.predictors],\n                        \"coefficients\": list(reg.coefficients),\n                        \"r_squared\": reg.r_squared,\n                        \"adjusted_r_squared\": reg.adjusted_r_squared,\n                        \"intercept\": reg.intercept,\n                    }\n                    for reg in result.regressions\n                ],\n            }\n        )\n    print(json.dumps(payload, indent=2))\n\n\ndef print_csv(results: Sequence[AnalysisResult]) -> None:\n    headers = [\n        \"name\",\n        \"time_avg\",\n        \"time_min\",\n        \"time_max\",\n        \"time_std_dev\",\n        \"time_ci95_lower\",\n        \"time_ci95_upper\",\n        \"wall_avg\",\n        \"wall_min\",\n        \"wall_max\",\n        \"wall_std_dev\",\n        \"wall_ci95_lower\",\n        \"wall_ci95_upper\",\n        \"memory_avg\",\n        \"memory_min\",\n        \"memory_max\",\n        \"memory_std_dev\",\n        \"memory_ci95_lower\",\n        \"memory_ci95_upper\",\n        \"total_runs\",\n        \"time_r_squared\",\n        \"time_adjusted_r_squared\",\n    ]\n    print(\",\".join(headers))\n    for result in results:\n        time_reg = next(\n            (reg for reg in result.regressions if reg.responder == Responder.TIME_PER_RUN),\n            RegressionResult(\n                responder=Responder.TIME_PER_RUN,\n                predictors=tuple(),\n                coefficients=tuple(),\n                r_squared=0.0,\n                adjusted_r_squared=0.0,\n                intercept=None,\n                confidence_intervals=None,\n            ),\n        )\n        row = [\n 
           result.name,\n            f\"{result.time_stats.avg:.2f}\",\n            f\"{result.time_stats.min:.2f}\",\n            f\"{result.time_stats.max:.2f}\",\n            f\"{result.time_stats.std_dev:.2f}\",\n            f\"{result.time_stats.ci95_lower:.2f}\",\n            f\"{result.time_stats.ci95_upper:.2f}\",\n            f\"{result.wall_stats.avg:.2f}\",\n            f\"{result.wall_stats.min:.2f}\",\n            f\"{result.wall_stats.max:.2f}\",\n            f\"{result.wall_stats.std_dev:.2f}\",\n            f\"{result.wall_stats.ci95_lower:.2f}\",\n            f\"{result.wall_stats.ci95_upper:.2f}\",\n            f\"{result.memory_stats.avg:.2f}\",\n            f\"{result.memory_stats.min:.2f}\",\n            f\"{result.memory_stats.max:.2f}\",\n            f\"{result.memory_stats.std_dev:.2f}\",\n            f\"{result.memory_stats.ci95_lower:.2f}\",\n            f\"{result.memory_stats.ci95_upper:.2f}\",\n            f\"{result.total_runs}\",\n            f\"{time_reg.r_squared:.4f}\",\n            f\"{time_reg.adjusted_r_squared:.4f}\",\n        ]\n        print(\",\".join(row))\n\n\ndef print_regression_analysis(results: Sequence[AnalysisResult]) -> None:\n    if not results:\n        return\n    print(\"\\nRegression analysis:\")\n    for result in results:\n        print(f\"\\n{result.name}:\")\n        for reg in result.regressions:\n            predictor_str = \", \".join(pred.value for pred in reg.predictors)\n            coeffs = \", \".join(f\"{coef:.4g}\" for coef in reg.coefficients)\n            intercept = f\"{reg.intercept:.4g}\" if reg.intercept is not None else \"None\"\n            print(\n                f\"  {reg.responder.value}: intercept={intercept}; \"\n                f\"predictors=({predictor_str}); coeffs=[{coeffs}]; \"\n                f\"R²={reg.r_squared:.4f}; adj.R²={reg.adjusted_r_squared:.4f}\"\n            )\n\n\n# ---------------------------------------------------------------------------\n# Comparison 
utilities\n\n\n@dataclass\nclass ComparisonResult:\n    baseline: AnalysisResult\n    compared: AnalysisResult\n    speedup: float\n    speedup_ci_lower: float\n    speedup_ci_upper: float\n    significant: bool\n    p_value: Optional[float]\n\n\ndef compare(\n    baseline: AnalysisResult,\n    compared: AnalysisResult,\n    *,\n    confidence: float = 0.95,\n) -> ComparisonResult:\n    n1 = len(baseline.measurements)\n    n2 = len(compared.measurements)\n    if n1 == 0 or n2 == 0:\n        nan = float(\"nan\")\n        return ComparisonResult(\n            baseline=baseline,\n            compared=compared,\n            speedup=nan,\n            speedup_ci_lower=nan,\n            speedup_ci_upper=nan,\n            significant=False,\n            p_value=None,\n        )\n\n    rates1 = [\n        float(m.runs) / (m.wall_ns / 1e9) if m.wall_ns > 0 else 0.0\n        for m in baseline.measurements\n    ]\n    rates2 = [\n        float(m.runs) / (m.wall_ns / 1e9) if m.wall_ns > 0 else 0.0\n        for m in compared.measurements\n    ]\n\n    avg1 = _mean(rates1)\n    avg2 = _mean(rates2)\n    var1 = statistics.variance(rates1) if len(rates1) > 1 else 0.0\n    var2 = statistics.variance(rates2) if len(rates2) > 1 else 0.0\n    significance = 1.0 - confidence\n    significant = different_rates(significance, n1, avg1, var1, n2, avg2, var2)\n    speedup = avg2 / avg1 if avg1 else float(\"inf\")\n\n    if n1 > 1 and n2 > 1:\n        se1 = math.sqrt(var1 / n1)\n        se2 = math.sqrt(var2 / n2)\n        z = 1.96  # Approximate 95% CI\n        lower = (avg2 - (z * se2)) / (avg1 + (z * se1)) if (avg1 + z * se1) else speedup\n        upper = (avg2 + (z * se2)) / (avg1 - (z * se1)) if (avg1 - z * se1) else speedup\n    else:\n        lower = upper = speedup\n\n    return ComparisonResult(\n        baseline=baseline,\n        compared=compared,\n        speedup=speedup,\n        speedup_ci_lower=lower,\n        speedup_ci_upper=upper,\n        significant=significant,\n        
p_value=None,\n    )\n\n\ndef print_comparison(comparison: ComparisonResult) -> None:\n    print(\"\\n=== Benchmark Comparison ===\")\n    print(f\"Baseline: {comparison.baseline.name}\")\n    print(f\"Compared: {comparison.compared.name}\")\n    if math.isnan(comparison.speedup):\n        print(\"Insufficient data for comparison.\")\n        return\n    # Report slowdowns as \"Nx slower\" rather than the confusing \"0.Nx slower\".\n    if comparison.speedup >= 1.0:\n        factor, adjective = comparison.speedup, \"faster\"\n    else:\n        factor = 1.0 / comparison.speedup if comparison.speedup > 0.0 else float(\"inf\")\n        adjective = \"slower\"\n    print(\n        f\"{comparison.compared.name} is {factor:.2f}x {adjective} \"\n        f\"than {comparison.baseline.name}\"\n    )\n    print(\n        f\"{comparison.baseline.name}: {format_time_ns(comparison.baseline.time_stats.avg)} \"\n        f\"(±{comparison.baseline.time_stats.std_dev / max(1e-9, comparison.baseline.time_stats.avg) * 100.0:.2f}%)\"\n    )\n    print(\n        f\"{comparison.compared.name}: {format_time_ns(comparison.compared.time_stats.avg)} \"\n        f\"(±{comparison.compared.time_stats.std_dev / max(1e-9, comparison.compared.time_stats.avg) * 100.0:.2f}%)\"\n    )\n    print(\n        f\"95% CI on speedup: [{comparison.speedup_ci_lower:.2f}x, \"\n        f\"{comparison.speedup_ci_upper:.2f}x]\"\n    )\n    print(\n        \"Difference is statistically significant.\"\n        if comparison.significant\n        else \"Difference is not statistically significant.\"\n    )\n\n\ndef tabulate(\n    results: Sequence[AnalysisResult],\n    *,\n    confidence: float = 0.95,\n    cpu_selector: str = \"process\",\n) -> None:\n    if not results:\n        print(\"(no benchmarks)\")\n        return\n    selector = cpu_selector.lower()\n\n    def cpu_time(m: Measurement) -> float:\n        if selector == \"process\":\n            return m.time_ns\n        if selector == \"user\":\n            return m.utime_ns\n        if selector == \"system\":\n            return m.stime_ns\n        if selector == \"children\":\n            return m.cutime_ns + m.cstime_ns\n        if selector == \"all\":\n            return 
m.time_ns + m.cutime_ns + m.cstime_ns\n        raise ValueError(f\"Unsupported cpu_selector: {cpu_selector}\")\n\n    entries: List[Tuple[str, int, float, float]] = []\n    for result in results:\n        n = len(result.measurements)\n        if n == 0:\n            entries.append((result.name, 0, float(\"nan\"), 0.0))\n            continue\n        rates = [\n            float(meas.runs) / (cpu_time(meas) / 1e9) if cpu_time(meas) > 0 else 0.0\n            for meas in result.measurements\n        ]\n        avg = _mean(rates)\n        var = statistics.variance(rates) if len(rates) > 1 else 0.0\n        entries.append((result.name, n, avg, var))\n\n    entries.sort(key=lambda item: item[2], reverse=True)\n    header = f\"{'Benchmark':<30} {'Rate (runs/s)':>16} {'Vs fastest':>12}\"\n    print(header)\n    print(\"-\" * len(header))\n    for name, count, rate, _ in entries:\n        if count == 0 or math.isnan(rate):\n            print(f\"{name:<30} {'N/A':>16} {'N/A':>12}\")\n            continue\n        pct = rate / entries[0][2] * 100.0 if entries[0][2] else 0.0\n        print(f\"{name:<30} {rate:>16.2f} {pct:>11.0f}%\")\n\n    print(\"\\nPairwise comparison:\")\n    significance = 1.0 - confidence\n    for row_name, row_n, row_rate, row_var in entries:\n        print(f\"{row_name:<30}\", end=\" \")\n        for col_name, col_n, col_rate, col_var in entries:\n            if row_name == col_name:\n                cell = \"--\"\n            elif row_n == 0 or col_n == 0:\n                cell = \"N/A\"\n            else:\n                diff = (row_rate / col_rate - 1.0) * 100.0 if col_rate else 0.0\n                sig = different_rates(significance, row_n, row_rate, row_var, col_n, col_rate, col_var)\n                cell = f\"{diff:>7.0f}%\" if sig else f\"[{diff:>5.0f}%]\"\n            print(f\"{cell:>10}\", end=\" \")\n        print()\n\n\n# ---------------------------------------------------------------------------\n# CLI\n\n\ndef parse_quota(text: str) -> 
Quota:\n    text = text.strip()\n    if text.endswith(\"s\"):\n        return TimeLimit(float(text[:-1]))\n    if text.endswith(\"x\"):\n        return IterationLimit(int(text[:-1]))\n    if text.endswith(\"%\"):\n        return VarianceLimit(float(text[:-1]) / 100.0)\n    try:\n        return TimeLimit(float(text))\n    except ValueError as exc:  # pragma: no cover - defensive\n        raise ValueError(f\"Invalid quota: {text}\") from exc\n\n\ndef parse_output(text: str) -> str:\n    text = text.lower()\n    if text in {\"pretty\", \"table\"}:\n        return \"pretty\"\n    if text in {\"json\", \"csv\"}:\n        return text\n    raise ValueError(f\"Unsupported format: {text}\")\n\n\ndef run_cli(benchmarks: Sequence[Benchmark]) -> None:\n    parser = argparse.ArgumentParser(description=\"Ubench - Python microbenchmarking\")\n    parser.add_argument(\n        \"-q\",\n        \"--quota\",\n        default=\"1s\",\n        help=\"Quota: e.g. '5s', '1000x', or '1%%' (variance)\",\n    )\n    parser.add_argument(\n        \"-f\",\n        \"--format\",\n        default=\"pretty\",\n        help=\"Output format: pretty, json, csv\",\n    )\n    parser.add_argument(\n        \"--fork\",\n        action=\"store_true\",\n        help=\"(Ignored) Compatibility with OCaml fork mode\",\n    )\n    parser.add_argument(\n        \"-w\",\n        \"--warmup\",\n        type=int,\n        default=3,\n        help=\"Number of warmup iterations\",\n    )\n    parser.add_argument(\n        \"--gc\",\n        action=\"store_true\",\n        help=\"Collect garbage between measurements\",\n    )\n    parser.add_argument(\n        \"--ascii-only\",\n        action=\"store_true\",\n        help=\"Disable Unicode box drawing characters\",\n    )\n    parser.add_argument(\n        \"-v\",\n        \"--verbose\",\n        action=\"store_true\",\n        help=\"Print regression analysis\",\n    )\n    args = parser.parse_args()\n\n    quota = parse_quota(args.quota)\n    output_format = 
parse_output(args.format)\n\n    config = (\n        Config.default()\n        .warmup(args.warmup)\n        .gc_stabilization(args.gc)\n        .ascii_only(args.ascii_only)\n        .build()\n    )\n    if isinstance(quota, TimeLimit):\n        config = config.time_limit(quota.seconds)\n    elif isinstance(quota, IterationLimit):\n        config = config.iteration_limit(quota.iterations)\n    elif isinstance(quota, VarianceLimit):\n        config = config.variance_limit(quota.coefficient)\n\n    run_and_print(\n        benchmarks,\n        config=config,\n        output_format=output_format,\n        verbose=args.verbose,\n    )\n\n\n__all__ = [\n    # Core API\n    \"Config\",\n    \"default_config\",\n    \"Quota\",\n    \"TimeLimit\",\n    \"IterationLimit\",\n    \"VarianceLimit\",\n    \"BenchmarkMode\",\n    \"Measurement\",\n    \"Statistics\",\n    \"RegressionResult\",\n    \"BenchData\",\n    \"AnalysisResult\",\n    \"ProgressInfo\",\n    # Benchmark creation\n    \"bench\",\n    \"create\",\n    \"group\",\n    \"create_group\",\n    \"bench_with_setup\",\n    \"create_with_setup\",\n    \"bench_param\",\n    \"create_param\",\n    \"flatten_benchmarks\",\n    # Execution\n    \"run\",\n    \"run_silent\",\n    \"run_and_print\",\n    \"run_bench_with_config\",\n    \"run_cli\",\n    # Output and utilities\n    \"print_pretty_table\",\n    \"print_json\",\n    \"print_csv\",\n    \"print_regression_analysis\",\n    \"compare\",\n    \"print_comparison\",\n    \"tabulate\",\n    \"format_time_ns\",\n    \"format_words\",\n    \"format_number\",\n    \"different_rates\",\n]\n"
  },
  {
    "path": "www/.gitignore",
    "content": "# Generated site\nbuild/\nodoc/\n"
  },
  {
    "path": "www/README.md",
    "content": "# Raven Website\n\nStatic site for [raven-ml.dev](https://raven-ml.dev). Built with a small OCaml script (`generate/generate.ml`) that converts Markdown to HTML using cmarkit.\n\n## Build and serve\n\n```bash\ndune build www/build\npython3 -m http.server -d _build/default/www/build\n```\n\n## Structure\n\n- `site/` — HTML landing pages and static assets\n- `../doc/` — general documentation (installation, roadmap, etc.)\n- `templates/` — HTML templates (`main.html`, `layout_docs.html`, `layout_docs_lib.html`)\n- `generate/` — site generator\n- `process/` — odoc API docs integration (WIP, not part of the build)\n\nLibrary-specific docs live in each library's `doc/` directory (e.g., `packages/nx/doc/`, `packages/rune/doc/`) where they're tested with mdx. The site generator pulls them in automatically.\n"
  },
  {
    "path": "www/dune",
    "content": "(dirs :standard \\ process)\n\n(rule\n (targets\n  (dir build))\n (deps\n  (source_tree templates)\n  (source_tree site)\n  (source_tree ../doc)\n  (source_tree ../packages/nx/doc)\n  (source_tree ../packages/tolk/doc)\n  (source_tree ../packages/rune/doc)\n  (source_tree ../packages/kaun/doc)\n  (source_tree ../packages/hugin/doc)\n  (source_tree ../packages/brot/doc)\n  (source_tree ../packages/talon/doc)\n  (source_tree ../packages/sowilo/doc)\n  (source_tree ../packages/fehu/doc)\n  (source_tree ../packages/quill/doc)\n  (source_tree ../packages/munin/doc)\n  (source_tree ../packages/nx/examples)\n  (source_tree ../packages/tolk/examples)\n  (source_tree ../packages/rune/examples)\n  (source_tree ../packages/kaun/examples)\n  (source_tree ../packages/hugin/examples)\n  (source_tree ../packages/brot/examples)\n  (source_tree ../packages/talon/examples)\n  (source_tree ../packages/sowilo/examples)\n  (source_tree ../packages/fehu/examples)\n  (source_tree ../packages/quill/examples)\n  (source_tree ../packages/munin/examples)\n  generate/generate.exe)\n (action\n  (run generate/generate.exe)))\n"
  },
  {
    "path": "www/dune-project",
    "content": "(lang dune 3.19)\n\n(using directory-targets 0.1)\n\n(name raven-www)\n\n(package\n (name raven-www)\n (allow_empty)\n (depends cmarkit hilite))\n"
  },
  {
    "path": "www/generate/api.ml",
    "content": "(* API reference generation from odoc HTML output.\n\n   Extracts content from odoc-generated HTML pages, rewrites internal links to\n   match the site URL scheme, and produces pages wrapped in the site\n   template. *)\n\nlet odoc_dir = Filename.concat \"..\" \"_doc/_html\"\n\n(* Libraries to include per package. Each entry is (library_name,\n   entry_module_name). The library name is displayed in the sidebar; the module\n   name is the odoc directory name. *)\nlet libraries =\n  [\n    ( \"nx\",\n      [\n        (\"nx\", \"Nx\");\n        (\"nx.backend\", \"Nx_backend\");\n        (\"nx.buffer\", \"Nx_buffer\");\n        (\"nx.core\", \"Nx_core\");\n        (\"nx.effect\", \"Nx_effect\");\n        (\"nx.io\", \"Nx_io\");\n      ] );\n    ( \"tolk\",\n      [\n        (\"tolk\", \"Tolk\");\n        (\"tolk.ir\", \"Tolk_ir\");\n        (\"tolk.cpu\", \"Tolk_cpu\");\n        (\"tolk.metal\", \"Tolk_metal\");\n      ] );\n    (\"rune\", [ (\"rune\", \"Rune\") ]);\n    ( \"kaun\",\n      [\n        (\"kaun\", \"Kaun\");\n        (\"kaun.datasets\", \"Kaun_datasets\");\n        (\"kaun.hf\", \"Kaun_hf\");\n      ] );\n    (\"brot\", [ (\"brot\", \"Brot\") ]);\n    (\"talon\", [ (\"talon\", \"Talon\"); (\"talon.csv\", \"Talon_csv\") ]);\n    (\"hugin\", [ (\"hugin\", \"Hugin\") ]);\n    (\"quill\", [ (\"quill\", \"Quill\") ]);\n    (\"fehu\", [ (\"fehu\", \"Fehu\"); (\"fehu.envs\", \"Fehu_envs\") ]);\n    (\"sowilo\", [ (\"sowilo\", \"Sowilo\") ]);\n  ]\n\n(*---------------------------------------------------------------------------\n  Module discovery\n  ---------------------------------------------------------------------------*)\n\n(* Walk [dir] recursively, collecting all [index.html] files. Returns\n   [(rel_path, full_path)] where [rel_path] is relative to [dir]. 
*)\nlet rec walk_modules dir rel =\n  let full = Filename.concat dir rel in\n  let index = Filename.concat full \"index.html\" in\n  let self = if Sys.file_exists index then [ (rel, index) ] else [] in\n  let children =\n    (* Guard existence first: [Sys.is_directory] raises [Sys_error] on a\n       missing path, e.g. when a listed module was not built by odoc. *)\n    if not (Sys.file_exists full && Sys.is_directory full) then []\n    else\n      Sys.readdir full |> Array.to_list\n      |> List.filter (fun e -> Sys.is_directory (Filename.concat full e))\n      |> List.sort String.compare\n      |> List.concat_map (fun e -> walk_modules dir (Filename.concat rel e))\n  in\n  self @ children\n\n(* All module pages for a package, filtered by [libraries]. *)\nlet package_modules pkg_name =\n  let pkg_dir = Filename.concat odoc_dir pkg_name in\n  if not (Sys.file_exists pkg_dir && Sys.is_directory pkg_dir) then []\n  else\n    match List.assoc_opt pkg_name libraries with\n    | None -> []\n    | Some libs ->\n        libs\n        |> List.concat_map (fun (_lib_name, mod_name) ->\n            walk_modules pkg_dir mod_name)\n\n(* Direct child subdirectories of [dir/mod_name] that have an index.html. *)\nlet direct_submodules pkg_name mod_name =\n  let mod_dir = Filename.concat (Filename.concat odoc_dir pkg_name) mod_name in\n  if not (Sys.file_exists mod_dir && Sys.is_directory mod_dir) then []\n  else\n    Sys.readdir mod_dir |> Array.to_list\n    |> List.filter (fun e ->\n        let d = Filename.concat mod_dir e in\n        Sys.is_directory d && Sys.file_exists (Filename.concat d \"index.html\"))\n    |> List.sort String.compare\n\n(*---------------------------------------------------------------------------\n  HTML extraction\n  ---------------------------------------------------------------------------*)\n\n(* Extract the preamble and content from an odoc HTML page. Drops <nav>, <head>,\n   scripts, and the local TOC. 
*)\nlet extract_content html =\n  let preamble =\n    let tag = {|<header class=\"odoc-preamble\">|} in\n    match Site.find_sub html tag with\n    | None -> \"\"\n    | Some i -> (\n        match Site.find_sub ~start:i html \"</header>\" with\n        | None -> \"\"\n        | Some j -> String.sub html i (j + 9 - i))\n  in\n  let body =\n    let tag = {|<div class=\"odoc-content\">|} in\n    match Site.find_sub html tag with\n    | None -> \"\"\n    | Some i -> (\n        match Site.find_sub html \"</body>\" with\n        | None -> \"\"\n        | Some j -> String.trim (String.sub html i (j - i)))\n  in\n  preamble ^ \"\\n\" ^ body\n\n(* Extract the local TOC from odoc HTML. Returns the inner <nav> wrapped in a\n   right-sidebar container. *)\nlet extract_toc html =\n  let tag = {|<nav class=\"odoc-toc odoc-local-toc\">|} in\n  match Site.find_sub html tag with\n  | None -> \"\"\n  | Some i -> (\n      match Site.find_sub ~start:i html \"</nav>\" with\n      | None -> \"\"\n      | Some j ->\n          let nav = String.sub html i (j + 6 - i) in\n          Printf.sprintf\n            {|    <aside class=\"toc\">\n      <div class=\"toc-inner\">\n        <div class=\"toc-title\">On this page</div>\n%s\n      </div>\n    </aside>|}\n            nav)\n\n(* Extract the module name from the <h1> in the preamble. 
\"Module\n   <code><span>Nx</span></code>\" -> \"Nx\" *)\nlet extract_title html =\n  let tag = {|<header class=\"odoc-preamble\">|} in\n  match Site.find_sub html tag with\n  | None -> None\n  | Some start -> (\n      match Site.find_sub ~start html \"<h1>\" with\n      | None -> None\n      | Some h1 -> (\n          match Site.find_sub ~start:h1 html \"</h1>\" with\n          | None -> None\n          | Some h1_end ->\n              let raw = String.sub html (h1 + 4) (h1_end - h1 - 4) in\n              let text = String.trim (Site.strip_tags raw) in\n              if String.length text > 7 && String.sub text 0 7 = \"Module \" then\n                Some (String.sub text 7 (String.length text - 7))\n              else if\n                String.length text > 12 && String.sub text 0 12 = \"Module type \"\n              then Some (String.sub text 12 (String.length text - 12))\n              else Some text))\n\n(*---------------------------------------------------------------------------\n  Link rewriting\n  ---------------------------------------------------------------------------*)\n\n(* Resolve [..] segments in a path. *)\nlet resolve_relative ~base href =\n  let base_parts = List.rev (String.split_on_char '/' base) in\n  let href_parts = String.split_on_char '/' href in\n  let rec go base = function\n    | \"..\" :: rest ->\n        let base' = match base with _ :: tl -> tl | [] -> [] in\n        go base' rest\n    | rest -> List.rev base @ rest\n  in\n  String.concat \"/\" (go base_parts href_parts)\n\n(* Rewrite a single href from odoc-relative to site-absolute. [current_dir]: the\n   module's position relative to the odoc root, e.g. \"nx/Nx/Infix\". 
*)\nlet rewrite_href ~current_dir href =\n  if String.length href = 0 || href.[0] = '#' then href\n  else if String.contains href ':' then href\n  else\n    let anchor, path =\n      match String.index_opt href '#' with\n      | Some i ->\n          (String.sub href i (String.length href - i), String.sub href 0 i)\n      | None -> (\"\", href)\n    in\n    let resolved = resolve_relative ~base:current_dir path in\n    let parts =\n      String.split_on_char '/' resolved\n      |> List.filter (fun s -> s <> \"\" && s <> \"index.html\")\n    in\n    match parts with\n    | pkg :: rest when Site.find_library pkg <> None ->\n        \"/docs/\" ^ pkg ^ \"/api/\" ^ String.concat \"/\" rest ^ \"/\" ^ anchor\n    | _ -> href\n\n(* Rewrite all href=\"...\" in [html]. *)\nlet rewrite_hrefs ~current_dir html =\n  let buf = Buffer.create (String.length html) in\n  let attr = {|href=\"|} in\n  let attr_len = String.length attr in\n  let len = String.length html in\n  let i = ref 0 in\n  while !i < len do\n    match Site.find_sub ~start:!i html attr with\n    | None ->\n        Buffer.add_string buf (String.sub html !i (len - !i));\n        i := len\n    | Some pos -> (\n        Buffer.add_string buf (String.sub html !i (pos + attr_len - !i));\n        let href_start = pos + attr_len in\n        match String.index_from_opt html href_start '\"' with\n        | None -> i := href_start\n        | Some href_end ->\n            let href = String.sub html href_start (href_end - href_start) in\n            Buffer.add_string buf (rewrite_href ~current_dir href);\n            Buffer.add_char buf '\"';\n            i := href_end + 1)\n  done;\n  Buffer.contents buf\n\n(*---------------------------------------------------------------------------\n  Sidebar\n  ---------------------------------------------------------------------------*)\n\n(* Generate the API tab panel contents for a package's sidebar. 
*)\nlet nav_items ~current_url pkg_name =\n  match List.assoc_opt pkg_name libraries with\n  | None -> \"\"\n  | Some libs ->\n      let buf = Buffer.create 512 in\n      libs\n      |> List.iter (fun (lib_name, mod_name) ->\n          Printf.bprintf buf {|            <li class=\"nav-group\">%s</li>|}\n            lib_name;\n          Buffer.add_char buf '\\n';\n          let url = Printf.sprintf \"/docs/%s/api/%s/\" pkg_name mod_name in\n          let active = if url = current_url then {| class=\"active\"|} else \"\" in\n          Printf.bprintf buf {|            <li><a href=\"%s\"%s>%s</a></li>|} url\n            active mod_name;\n          Buffer.add_char buf '\\n';\n          direct_submodules pkg_name mod_name\n          |> List.iter (fun sub ->\n              let sub_url =\n                Printf.sprintf \"/docs/%s/api/%s/%s/\" pkg_name mod_name sub\n              in\n              let active =\n                if sub_url = current_url then {| class=\"active\"|} else \"\"\n              in\n              Printf.bprintf buf\n                {|            <li class=\"nav-sub\"><a href=\"%s\"%s>%s</a></li>|}\n                sub_url active sub;\n              Buffer.add_char buf '\\n'));\n      Buffer.contents buf\n\n(*---------------------------------------------------------------------------\n  Page generation\n  ---------------------------------------------------------------------------*)\n\nlet process_page ~build_dir ~templates_dir ~tab_nav ~lib ~module_rel full_path =\n  let html = Site.read_file full_path in\n  let toc = extract_toc html in\n  let content = extract_content html in\n  let current_dir = Filename.concat lib.Site.name module_rel in\n  let content = rewrite_hrefs ~current_dir content in\n  let content = Printf.sprintf {|<div class=\"odoc-api\">%s</div>|} content in\n  let mod_title =\n    match extract_title html with\n    | Some t -> t\n    | None -> (\n        match List.rev (String.split_on_char '/' module_rel) with\n        | last :: _ -> 
last\n        | [] -> \"API\")\n  in\n  let path = Printf.sprintf \"docs/%s/api/%s/index.html\" lib.name module_rel in\n  let current_url = Site.url_of_path path in\n  let title = mod_title ^ \" - \" ^ lib.name ^ \" - raven\" in\n  let segments =\n    \"docs\" :: lib.name :: \"api\" :: String.split_on_char '/' module_rel\n  in\n  let breadcrumbs = Site.make_breadcrumbs segments mod_title in\n  let template =\n    Site.read_file (Filename.concat templates_dir \"layout_docs_lib.html\")\n  in\n  let result =\n    Site.apply_template ~template ~title ~breadcrumbs ~content ~lib:(Some lib)\n      ~is_lib_index:false ~page_title:mod_title ~path\n      ~tab_nav:(tab_nav current_url) ~prev_next:\"\" ~toc ()\n  in\n  let out =\n    Filename.concat build_dir\n      (Printf.sprintf \"docs/%s/api/%s/index.html\" lib.name module_rel)\n  in\n  Site.ensure_dir (Filename.dirname out);\n  Site.write_file out result\n\nlet generate ~build_dir ~templates_dir ~tab_nav =\n  Site.libraries\n  |> List.iter (fun lib ->\n      let modules = package_modules lib.Site.name in\n      let tab_nav = tab_nav lib.Site.name in\n      modules\n      |> List.iter (fun (module_rel, full_path) ->\n          process_page ~build_dir ~templates_dir ~tab_nav ~lib ~module_rel\n            full_path))\n"
  },
  {
    "path": "www/generate/dune",
    "content": "(executable\n (name generate)\n (modules generate api site)\n (libraries cmarkit hilite hilite.markdown unix))\n"
  },
  {
    "path": "www/generate/generate.ml",
    "content": "let site_dir = \"site\"\nlet docs_dir = \"../doc\"\nlet build_dir = \"build\"\nlet templates_dir = \"templates\"\n\nlet lib_doc_dir lib_name =\n  Filename.concat\n    (Filename.concat (Filename.concat \"..\" \"packages\") lib_name)\n    \"doc\"\n\nlet lib_examples_dir lib_name =\n  Filename.concat\n    (Filename.concat (Filename.concat \"..\" \"packages\") lib_name)\n    \"examples\"\n\n(* -- Library navigation -- *)\n\nlet lib_doc_entries lib_name =\n  let dir = lib_doc_dir lib_name in\n  if not (Sys.file_exists dir && Sys.is_directory dir) then []\n  else\n    let files = Sys.readdir dir |> Array.to_list |> List.sort String.compare in\n    let entries =\n      files\n      |> List.filter_map (fun f ->\n          if Filename.extension f = \".md\" then\n            let stem = Filename.chop_extension f in\n            let slug = Site.strip_order_prefix stem in\n            let title =\n              if slug = \"index\" then \"Overview\" else Site.title_case slug\n            in\n            let url =\n              if slug = \"index\" then Printf.sprintf \"/docs/%s/\" lib_name\n              else Printf.sprintf \"/docs/%s/%s/\" lib_name slug\n            in\n            Some (stem, title, url)\n          else None)\n    in\n    List.sort\n      (fun (a, _, _) (b, _, _) ->\n        match (a, b) with\n        | \"index\", _ -> -1\n        | _, \"index\" -> 1\n        | _ -> String.compare a b)\n      entries\n\nlet generate_lib_nav ~current_url lib_name =\n  lib_doc_entries lib_name\n  |> List.map (fun (_, title, url) ->\n      let active = if url = current_url then {| class=\"active\"|} else \"\" in\n      Printf.sprintf {|          <li><a href=\"%s\"%s>%s</a></li>|} url active\n        title)\n  |> String.concat \"\\n\"\n\nlet generate_prev_next ~current_url lib_name =\n  let entries = lib_doc_entries lib_name in\n  let arr = Array.of_list entries in\n  let len = Array.length arr in\n  let cur =\n    let rec find i =\n      if i >= len then -1\n      
else\n        let _, _, url = arr.(i) in\n        if url = current_url then i else find (i + 1)\n    in\n    find 0\n  in\n  if cur < 0 then \"\"\n  else\n    let prev =\n      if cur > 0 then\n        let _, title, url = arr.(cur - 1) in\n        Printf.sprintf {|<a href=\"%s\" class=\"prev-link\">← %s</a>|} url title\n      else \"\"\n    in\n    let next =\n      if cur < len - 1 then\n        let _, title, url = arr.(cur + 1) in\n        Printf.sprintf {|<a href=\"%s\" class=\"next-link\">%s →</a>|} url title\n      else \"\"\n    in\n    if prev = \"\" && next = \"\" then \"\"\n    else Printf.sprintf {|<nav class=\"prev-next\">%s%s</nav>|} prev next\n\nlet generate_lib_examples_nav_items ~current_url lib_name =\n  let dir = lib_examples_dir lib_name in\n  if not (Sys.file_exists dir && Sys.is_directory dir) then \"\"\n  else\n    let entries =\n      Sys.readdir dir |> Array.to_list\n      |> List.filter (fun entry -> Sys.is_directory (Filename.concat dir entry))\n      |> List.sort String.compare\n      |> List.map (fun entry ->\n          let slug = Site.strip_order_prefix entry in\n          let title = Site.title_case slug in\n          let url = Printf.sprintf \"/docs/%s/examples/%s/\" lib_name slug in\n          (title, url))\n    in\n    match entries with\n    | [] -> \"\"\n    | _ ->\n        entries\n        |> List.map (fun (title, url) ->\n            let active =\n              if url = current_url then {| class=\"active\"|} else \"\"\n            in\n            Printf.sprintf {|          <li><a href=\"%s\"%s>%s</a></li>|} url\n              active title)\n        |> String.concat \"\\n\"\n\nlet generate_tab_nav ~current_url lib_name =\n  let guides_items = generate_lib_nav ~current_url lib_name in\n  let examples_items = generate_lib_examples_nav_items ~current_url lib_name in\n  let api_items = Api.nav_items ~current_url lib_name in\n  let has_examples = examples_items <> \"\" in\n  let has_api = api_items <> \"\" in\n  let is_examples_page 
=\n    match Site.find_sub current_url \"/examples/\" with\n    | Some _ -> true\n    | None -> false\n  in\n  let is_api_page =\n    match Site.find_sub current_url \"/api/\" with\n    | Some _ -> true\n    | None -> false\n  in\n  let active_tab =\n    if is_api_page && has_api then \"api\"\n    else if is_examples_page && has_examples then \"examples\"\n    else \"guides\"\n  in\n  let checked tab = if tab = active_tab then \" checked\" else \"\" in\n  let buf = Buffer.create 1024 in\n  Buffer.add_string buf {|      <div class=\"nav-tabs\">|};\n  Buffer.add_char buf '\\n';\n  Printf.bprintf buf\n    {|        <input type=\"radio\" id=\"tab-guides\" name=\"nav-tab\"%s>|}\n    (checked \"guides\");\n  Buffer.add_char buf '\\n';\n  Buffer.add_string buf {|        <label for=\"tab-guides\">Guides</label>|};\n  Buffer.add_char buf '\\n';\n  if has_examples then (\n    Printf.bprintf buf\n      {|        <input type=\"radio\" id=\"tab-examples\" name=\"nav-tab\"%s>|}\n      (checked \"examples\");\n    Buffer.add_char buf '\\n';\n    Buffer.add_string buf {|        <label for=\"tab-examples\">Examples</label>|};\n    Buffer.add_char buf '\\n');\n  if has_api then (\n    Printf.bprintf buf\n      {|        <input type=\"radio\" id=\"tab-api\" name=\"nav-tab\"%s>|}\n      (checked \"api\");\n    Buffer.add_char buf '\\n';\n    Buffer.add_string buf {|        <label for=\"tab-api\">API</label>|};\n    Buffer.add_char buf '\\n');\n  Buffer.add_string buf {|        <div class=\"tab-panel\" id=\"panel-guides\">|};\n  Buffer.add_char buf '\\n';\n  Buffer.add_string buf {|          <ul class=\"nav-links\">|};\n  Buffer.add_char buf '\\n';\n  Buffer.add_string buf guides_items;\n  Buffer.add_char buf '\\n';\n  Buffer.add_string buf {|          </ul>|};\n  Buffer.add_char buf '\\n';\n  Buffer.add_string buf {|        </div>|};\n  Buffer.add_char buf '\\n';\n  if has_examples then (\n    Buffer.add_string buf\n      {|        <div class=\"tab-panel\" id=\"panel-examples\">|};\n  
  Buffer.add_char buf '\\n';\n    Buffer.add_string buf {|          <ul class=\"nav-links\">|};\n    Buffer.add_char buf '\\n';\n    Buffer.add_string buf examples_items;\n    Buffer.add_char buf '\\n';\n    Buffer.add_string buf {|          </ul>|};\n    Buffer.add_char buf '\\n';\n    Buffer.add_string buf {|        </div>|};\n    Buffer.add_char buf '\\n');\n  if has_api then (\n    Buffer.add_string buf {|        <div class=\"tab-panel\" id=\"panel-api\">|};\n    Buffer.add_char buf '\\n';\n    Buffer.add_string buf {|          <ul class=\"nav-links\">|};\n    Buffer.add_char buf '\\n';\n    Buffer.add_string buf api_items;\n    Buffer.add_string buf {|          </ul>|};\n    Buffer.add_char buf '\\n';\n    Buffer.add_string buf {|        </div>|};\n    Buffer.add_char buf '\\n');\n  Buffer.add_string buf {|      </div>|};\n  Buffer.contents buf\n\n(* -- Template -- *)\n\nlet select_template path =\n  let parts = String.split_on_char '/' path in\n  let name =\n    match parts with\n    | \"docs\" :: lib :: _ when Site.find_library lib <> None ->\n        \"layout_docs_lib.html\"\n    | \"docs\" :: _ -> \"layout_docs.html\"\n    | _ -> \"main.html\"\n  in\n  Filename.concat templates_dir name\n\nlet apply_template ~template ~title ~breadcrumbs ~content ~lib ~is_lib_index\n    ~page_title ~path =\n  let current_url = Site.url_of_path path in\n  let tab_nav =\n    match lib with\n    | Some lib -> generate_tab_nav ~current_url lib.Site.name\n    | None -> \"\"\n  in\n  let prev_next =\n    match lib with\n    | Some lib -> generate_prev_next ~current_url lib.Site.name\n    | None -> \"\"\n  in\n  Site.apply_template ~template ~title ~breadcrumbs ~content ~lib ~is_lib_index\n    ~page_title ~path ~tab_nav ~prev_next ()\n\nlet dest_path path =\n  let stem = Filename.chop_extension path in\n  if Filename.basename stem = \"index\" then\n    Filename.concat build_dir (stem ^ \".html\")\n  else Filename.concat build_dir (Filename.concat stem \"index.html\")\n\n(* -- 
Processing -- *)\n\nlet render_markdown content =\n  content\n  |> Cmarkit.Doc.of_string ~heading_auto_ids:true ~strict:false\n  |> Hilite_markdown.transform ~skip_unknown_languages:true\n  |> Cmarkit_html.of_doc ~safe:false\n\nlet process_markdown path content =\n  let html = render_markdown content |> Site.rewrite_doc_hrefs in\n  let h1 = Site.extract_h1 html in\n  let title = match h1 with Some t -> t ^ \" - raven\" | None -> \"raven\" in\n  let page_title =\n    match h1 with\n    | Some t -> t\n    | None -> Site.title_case (Filename.chop_extension (Filename.basename path))\n  in\n  let breadcrumbs = Site.make_breadcrumbs (Site.url_segments path) page_title in\n  let lib =\n    match String.split_on_char '/' path with\n    | \"docs\" :: lib_name :: _ -> Site.find_library lib_name\n    | _ -> None\n  in\n  let is_lib_index =\n    match String.split_on_char '/' path with\n    | \"docs\" :: lib_name :: rest when Site.find_library lib_name <> None -> (\n        match rest with [ \"index.md\" ] | [] -> true | _ -> false)\n    | _ -> false\n  in\n  let html = if is_lib_index then Site.strip_h1 html else html in\n  let template = Site.read_file (select_template path) in\n  apply_template ~template ~title ~breadcrumbs ~content:html ~lib ~is_lib_index\n    ~page_title ~path\n\nlet highlight_html_code_blocks html =\n  let buf = Buffer.create (String.length html) in\n  let len = String.length html in\n  let i = ref 0 in\n  let pre_open = {|<pre class=\"language-|} in\n  let pre_open_len = String.length pre_open in\n  let code_close = \"</code></pre>\" in\n  let code_close_len = String.length code_close in\n  while !i < len do\n    match Site.find_sub ~start:!i html pre_open with\n    | None ->\n        Buffer.add_string buf (String.sub html !i (len - !i));\n        i := len\n    | Some pre_start -> (\n        Buffer.add_string buf (String.sub html !i (pre_start - !i));\n        let lang_start = pre_start + pre_open_len in\n        match String.index_from_opt html 
lang_start '\"' with\n        | None ->\n            Buffer.add_char buf html.[!i];\n            i := !i + 1\n        | Some lang_end -> (\n            let lang = String.sub html lang_start (lang_end - lang_start) in\n            let code_tag = {|<code class=\"language-|} ^ lang ^ {|\">|} in\n            match Site.find_sub ~start:lang_end html code_tag with\n            | None ->\n                Buffer.add_char buf html.[!i];\n                i := !i + 1\n            | Some code_tag_start -> (\n                let content_start = code_tag_start + String.length code_tag in\n                match Site.find_sub ~start:content_start html code_close with\n                | None ->\n                    Buffer.add_char buf html.[!i];\n                    i := !i + 1\n                | Some content_end -> (\n                    let raw =\n                      String.sub html content_start (content_end - content_start)\n                    in\n                    let code =\n                      raw |> Site.replace \"&amp;\" \"&\" |> Site.replace \"&lt;\" \"<\"\n                      |> Site.replace \"&gt;\" \">\"\n                    in\n                    match Hilite.src_code_to_html ~lang code with\n                    | Ok highlighted ->\n                        Buffer.add_string buf highlighted;\n                        i := content_end + code_close_len\n                    | Error _ ->\n                        Buffer.add_string buf\n                          (String.sub html pre_start\n                             (content_end + code_close_len - pre_start));\n                        i := content_end + code_close_len))))\n  done;\n  Buffer.contents buf\n\nlet process_file ~path full_path =\n  let ext = Filename.extension path in\n  let out, content =\n    match ext with\n    | \".md\" -> (dest_path path, process_markdown path (Site.read_file full_path))\n    | \".html\" | \".htm\" ->\n        (dest_path path, highlight_html_code_blocks (Site.read_file 
full_path))\n    | _ -> (Filename.concat build_dir path, Site.read_file full_path)\n  in\n  Site.ensure_dir (Filename.dirname out);\n  Site.write_file out content\n\nlet escape_html s =\n  let buf = Buffer.create (String.length s) in\n  String.iter\n    (fun c ->\n      match c with\n      | '&' -> Buffer.add_string buf \"&amp;\"\n      | '<' -> Buffer.add_string buf \"&lt;\"\n      | '>' -> Buffer.add_string buf \"&gt;\"\n      | _ -> Buffer.add_char buf c)\n    s;\n  Buffer.contents buf\n\nlet process_example ~lib example_dir =\n  let entry = Filename.basename example_dir in\n  let slug = Site.strip_order_prefix entry in\n  let path = Printf.sprintf \"docs/%s/examples/%s.md\" lib.Site.name slug in\n  let readme_path = Filename.concat example_dir \"README.md\" in\n  let prose_html =\n    if Sys.file_exists readme_path then\n      render_markdown (Site.read_file readme_path)\n    else Printf.sprintf \"<h1>%s</h1>\" (escape_html (Site.title_case slug))\n  in\n  let ml_files =\n    Sys.readdir example_dir |> Array.to_list\n    |> List.filter (fun f -> Filename.extension f = \".ml\")\n    |> List.sort String.compare\n  in\n  let multi = List.length ml_files > 1 in\n  let code_html =\n    ml_files\n    |> List.map (fun f ->\n        let code = Site.read_file (Filename.concat example_dir f) in\n        let header =\n          if multi then Printf.sprintf \"<h3>%s</h3>\\n\" (escape_html f) else \"\"\n        in\n        let highlighted =\n          match Hilite.src_code_to_html ~lang:\"ocaml\" code with\n          | Ok html -> html\n          | Error _ ->\n              Printf.sprintf \"<pre><code>%s</code></pre>\" (escape_html code)\n        in\n        header ^ highlighted)\n    |> String.concat \"\\n\"\n  in\n  let html = prose_html ^ \"\\n\" ^ code_html in\n  let h1 = Site.extract_h1 html in\n  let title = match h1 with Some t -> t ^ \" - raven\" | None -> \"raven\" in\n  let page_title = match h1 with Some t -> t | None -> Site.title_case slug in\n  let breadcrumbs 
= Site.make_breadcrumbs (Site.url_segments path) page_title in\n  let template = Site.read_file (select_template path) in\n  let content =\n    apply_template ~template ~title ~breadcrumbs ~content:html ~lib:(Some lib)\n      ~is_lib_index:false ~page_title ~path\n  in\n  let out = dest_path path in\n  Site.ensure_dir (Filename.dirname out);\n  Site.write_file out content\n\nlet () =\n  Site.walk site_dir\n  |> List.iter (fun p ->\n      process_file ~path:(Site.strip_prefix ~prefix:site_dir p) p);\n  Site.walk docs_dir\n  |> List.iter (fun p ->\n      let rel = Site.strip_prefix ~prefix:docs_dir p in\n      let path = Filename.concat \"docs\" rel in\n      process_file ~path p);\n  Site.libraries\n  |> List.iter (fun lib ->\n      let dir = lib_doc_dir lib.Site.name in\n      if Sys.file_exists dir && Sys.is_directory dir then\n        Site.walk dir\n        |> List.iter (fun full_path ->\n            let ext = Filename.extension full_path in\n            let base = Filename.basename full_path in\n            if base = \"dune\" || ext = \".mld\" then ()\n            else\n              let rel = Site.strip_prefix ~prefix:dir full_path in\n              let rel_base = Filename.basename rel in\n              let rel_dir = Filename.dirname rel in\n              let clean_base =\n                if Filename.extension rel_base = \".md\" then\n                  Site.strip_order_prefix (Filename.chop_extension rel_base)\n                  ^ \".md\"\n                else rel_base\n              in\n              let clean_rel =\n                if rel_dir = \".\" then clean_base\n                else Filename.concat rel_dir clean_base\n              in\n              let path =\n                Filename.concat (Filename.concat \"docs\" lib.name) clean_rel\n              in\n              process_file ~path full_path));\n  Site.libraries\n  |> List.iter (fun lib ->\n      let dir = lib_examples_dir lib.Site.name in\n      if Sys.file_exists dir && Sys.is_directory dir 
then\n        Sys.readdir dir |> Array.to_list |> List.sort String.compare\n        |> List.iter (fun entry ->\n            let full = Filename.concat dir entry in\n            if Sys.is_directory full then process_example ~lib full));\n  let tab_nav lib_name =\n   fun current_url -> generate_tab_nav ~current_url lib_name\n  in\n  Api.generate ~build_dir ~templates_dir ~tab_nav\n"
  },
  {
    "path": "www/generate/site.ml",
    "content": "(* Shared types and utilities for the site generator. *)\n\ntype library = {\n  name : string;\n  display : string;\n  color : string;\n  description : string;\n  tagline : string;\n  symbol : string;\n}\n\nlet libraries =\n  [\n    {\n      name = \"nx\";\n      display = \"nx\";\n      color = \"color-blue\";\n      description = \"N-dimensional arrays with linear algebra operations\";\n      tagline = \"N-dimensional arrays for OCaml\";\n      symbol = \"\";\n    };\n    {\n      name = \"tolk\";\n      display = {|<span class=\"rune-symbol\">ᛏ</span> tolk|};\n      color = \"color-slate\";\n      description = \"Minimal ML compiler for GPU tensor computation\";\n      tagline = \"GPU tensor compiler for OCaml\";\n      symbol = {|ᛏ|};\n    };\n    {\n      name = \"rune\";\n      display = {|<span class=\"rune-symbol\">ᚱ</span> rune|};\n      color = \"color-orange\";\n      description = \"Automatic differentiation and functional transformations\";\n      tagline = \"Functional transformations for Nx arrays\";\n      symbol = {|ᚱ|};\n    };\n    {\n      name = \"kaun\";\n      display = {|<span class=\"rune-symbol\">ᚲ</span> kaun|};\n      color = \"color-red\";\n      description = \"Neural networks and training\";\n      tagline = \"Neural networks for OCaml\";\n      symbol = {|ᚲ|};\n    };\n    {\n      name = \"vega\";\n      display = {|<span class=\"rune-symbol\">ᚹ</span> vega|};\n      color = \"color-yellow\";\n      description = \"Composable gradient-based optimizers\";\n      tagline = \"Composable gradient-based optimizers for OCaml\";\n      symbol = {|ᚹ|};\n    };\n    {\n      name = \"norn\";\n      display = {|<span class=\"rune-symbol\">ᚾ</span> norn|};\n      color = \"color-teal\";\n      description = \"MCMC sampling with automatic gradients\";\n      tagline = \"MCMC sampling for OCaml\";\n      symbol = {|ᚾ|};\n    };\n    {\n      name = \"hugin\";\n      display = {|<span class=\"rune-symbol\">ᛞ</span> hugin|};\n      
color = \"color-purple\";\n      description = \"Publication-quality plotting\";\n      tagline = \"Plotting for OCaml\";\n      symbol = {|ᛞ|};\n    };\n    {\n      name = \"brot\";\n      display = {|<span class=\"rune-symbol\">ᚨ</span> brot|};\n      color = \"color-cyan\";\n      description =\n        \"Fast, HuggingFace-compatible tokenization for language models\";\n      tagline = \"Tokenization for OCaml\";\n      symbol = {|ᚨ|};\n    };\n    {\n      name = \"talon\";\n      display = {|<span class=\"rune-symbol\">ᛃ</span> talon|};\n      color = \"color-pink\";\n      description = \"Fast and elegant dataframes with type-safe operations\";\n      tagline = \"Dataframes for OCaml\";\n      symbol = {|ᛃ|};\n    };\n    {\n      name = \"quill\";\n      display = {|<span class=\"rune-symbol\">ᛈ</span> quill|};\n      color = \"color-green\";\n      description = \"Interactive REPL and markdown notebooks\";\n      tagline = \"Interactive REPL and markdown notebooks\";\n      symbol = {|ᛈ|};\n    };\n    {\n      name = \"fehu\";\n      display = {|<span class=\"rune-symbol\">ᚠ</span> fehu|};\n      color = \"color-lime\";\n      description = \"Reinforcement learning for OCaml\";\n      tagline = \"Reinforcement learning for OCaml\";\n      symbol = {|ᚠ|};\n    };\n    {\n      name = \"sowilo\";\n      display = {|<span class=\"rune-symbol\">ᛋ</span> sowilo|};\n      color = \"color-indigo\";\n      description = \"Differentiable computer vision\";\n      tagline = \"Differentiable computer vision for OCaml\";\n      symbol = {|ᛋ|};\n    };\n    {\n      name = \"munin\";\n      display = {|<span class=\"rune-symbol\">ᛗ</span> munin|};\n      color = \"color-munin\";\n      description = \"Local experiment tracking with live TUI dashboard\";\n      tagline = \"Experiment tracking for OCaml\";\n      symbol = {|ᛗ|};\n    };\n  ]\n\nlet find_library name = List.find_opt (fun lib -> lib.name = name) 
libraries\n\n(*---------------------------------------------------------------------------\n  String utilities\n  ---------------------------------------------------------------------------*)\n\nlet replace pattern replacement s =\n  let plen = String.length pattern in\n  let slen = String.length s in\n  if plen = 0 then s\n  else\n    let buf = Buffer.create slen in\n    let i = ref 0 in\n    while !i <= slen - plen do\n      if String.sub s !i plen = pattern then (\n        Buffer.add_string buf replacement;\n        i := !i + plen)\n      else (\n        Buffer.add_char buf (String.unsafe_get s !i);\n        incr i)\n    done;\n    while !i < slen do\n      Buffer.add_char buf (String.unsafe_get s !i);\n      incr i\n    done;\n    Buffer.contents buf\n\nlet find_sub ?(start = 0) s sub =\n  let slen = String.length s in\n  let sublen = String.length sub in\n  if sublen = 0 || sublen > slen then None\n  else\n    let rec loop i =\n      if i > slen - sublen then None\n      else if String.sub s i sublen = sub then Some i\n      else loop (i + 1)\n    in\n    loop start\n\nlet strip_tags s =\n  let buf = Buffer.create (String.length s) in\n  let in_tag = ref false in\n  String.iter\n    (fun c ->\n      if c = '<' then in_tag := true\n      else if c = '>' then in_tag := false\n      else if not !in_tag then Buffer.add_char buf c)\n    s;\n  Buffer.contents buf\n\nlet strip_order_prefix s =\n  let len = String.length s in\n  if\n    len >= 3\n    && s.[0] >= '0'\n    && s.[0] <= '9'\n    && s.[1] >= '0'\n    && s.[1] <= '9'\n    && s.[2] = '-'\n  then String.sub s 3 (len - 3)\n  else s\n\n(* Strip order prefixes from each segment of a relative href path.\n   \"../02-pipeline/\" → \"../pipeline/\", \"05-algorithms/\" → \"algorithms/\" *)\nlet strip_href_order_prefixes href =\n  if String.length href = 0 || href.[0] = '/' || String.contains href ':' then\n    href\n  else\n    let anchor, path =\n      match String.index_opt href '#' with\n      | Some i ->\n        
  (String.sub href i (String.length href - i), String.sub href 0 i)\n      | None -> (\"\", href)\n    in\n    let parts = String.split_on_char '/' path in\n    let cleaned =\n      List.map\n        (fun seg -> if seg = \"..\" then seg else strip_order_prefix seg)\n        parts\n    in\n    String.concat \"/\" cleaned ^ anchor\n\n(* Rewrite all href=\"...\" in [html] to strip order prefixes from relative\n   paths. *)\nlet rewrite_doc_hrefs html =\n  let buf = Buffer.create (String.length html) in\n  let attr = {|href=\"|} in\n  let attr_len = String.length attr in\n  let len = String.length html in\n  let i = ref 0 in\n  while !i < len do\n    match find_sub ~start:!i html attr with\n    | None ->\n        Buffer.add_string buf (String.sub html !i (len - !i));\n        i := len\n    | Some pos -> (\n        Buffer.add_string buf (String.sub html !i (pos + attr_len - !i));\n        let href_start = pos + attr_len in\n        match String.index_from_opt html href_start '\"' with\n        | None -> i := href_start\n        | Some href_end ->\n            let href = String.sub html href_start (href_end - href_start) in\n            Buffer.add_string buf (strip_href_order_prefixes href);\n            Buffer.add_char buf '\"';\n            i := href_end + 1)\n  done;\n  Buffer.contents buf\n\nlet title_case s =\n  s |> String.split_on_char '-'\n  |> List.map (fun w ->\n      if w = \"\" then w\n      else\n        String.make 1 (Char.uppercase_ascii w.[0])\n        ^ String.sub w 1 (String.length w - 1))\n  |> String.concat \" \"\n\n(*---------------------------------------------------------------------------\n  File system\n  ---------------------------------------------------------------------------*)\n\nlet read_file path = In_channel.with_open_bin path In_channel.input_all\n\nlet write_file path content =\n  Out_channel.with_open_bin path (fun oc -> output_string oc content)\n\nlet rec ensure_dir path =\n  if path <> \"\" && path <> \".\" && path <> \"/\" then\n   
 if not (Sys.file_exists path) then (\n      ensure_dir (Filename.dirname path);\n      Unix.mkdir path 0o755)\n\nlet rec walk dir =\n  Sys.readdir dir |> Array.to_list\n  |> List.concat_map (fun entry ->\n      let path = Filename.concat dir entry in\n      if Sys.is_directory path then walk path else [ path ])\n\n(*---------------------------------------------------------------------------\n  HTML utilities\n  ---------------------------------------------------------------------------*)\n\nlet extract_h1 html =\n  match find_sub html \"<h1\" with\n  | None -> None\n  | Some i -> (\n      match String.index_from_opt html i '>' with\n      | None -> None\n      | Some j -> (\n          match find_sub ~start:(j + 1) html \"</h1>\" with\n          | None -> None\n          | Some k -> Some (strip_tags (String.sub html (j + 1) (k - j - 1)))))\n\nlet strip_h1 html =\n  match find_sub html \"<h1\" with\n  | None -> html\n  | Some i -> (\n      match find_sub ~start:i html \"</h1>\" with\n      | None -> html\n      | Some k ->\n          let after = k + 5 in\n          String.sub html 0 i\n          ^ String.sub html after (String.length html - after))\n\n(*---------------------------------------------------------------------------\n  Paths and URLs\n  ---------------------------------------------------------------------------*)\n\nlet strip_prefix ~prefix path =\n  let plen = String.length prefix + 1 in\n  String.sub path plen (String.length path - plen)\n\nlet url_segments path =\n  let stem = Filename.chop_extension path in\n  let parts = String.split_on_char '/' stem in\n  match List.rev parts with \"index\" :: rest -> List.rev rest | _ -> parts\n\nlet url_of_path path =\n  let segments = url_segments path in\n  \"/\" ^ String.concat \"/\" segments ^ \"/\"\n\n(*---------------------------------------------------------------------------\n  Breadcrumbs\n  ---------------------------------------------------------------------------*)\n\nlet breadcrumb_sep = {|<span 
class=\"breadcrumb-separator\">/</span>|}\n\nlet make_breadcrumbs segments title =\n  match segments with\n  | [] | [ _ ] -> \"\"\n  | _ ->\n      let ancestors = List.rev (List.tl (List.rev segments)) in\n      let links =\n        let rec go acc url_path = function\n          | [] -> List.rev acc\n          | seg :: rest ->\n              let url_path = url_path ^ \"/\" ^ seg in\n              let link =\n                Printf.sprintf {|<a href=\"%s/\" class=\"breadcrumb-link\">%s</a>|}\n                  url_path (title_case seg)\n              in\n              go (link :: acc) url_path rest\n        in\n        go [] \"\" ancestors\n      in\n      Printf.sprintf\n        {|<div id=\"breadcrumbs\" class=\"breadcrumbs\">%s%s<span class=\"breadcrumb-current\">%s</span></div>|}\n        (String.concat breadcrumb_sep links)\n        breadcrumb_sep title\n\n(*---------------------------------------------------------------------------\n  Template application\n  ---------------------------------------------------------------------------*)\n\nlet apply_template ~template ~title ~breadcrumbs ~content ~lib ~is_lib_index\n    ~page_title ~path ~tab_nav ~prev_next ?(toc = \"\") () =\n  let t =\n    template |> replace \"{{title}}\" title\n    |> replace \"{{breadcrumbs}}\" breadcrumbs\n    |> replace \"{{content}}\" content\n    |> replace \"{{toc}}\" toc\n  in\n  match lib with\n  | None ->\n      let docs_breadcrumb =\n        let segments = url_segments path in\n        match segments with\n        | [ \"docs\" ] | [ \"docs\"; \"index\" ] ->\n            {|<span class=\"breadcrumb-text\">docs</span>|}\n        | \"docs\" :: _ ->\n            Printf.sprintf\n              {|<a href=\"/docs/\" class=\"breadcrumb-link\">docs</a>\n      <span class=\"breadcrumb-separator\">/</span>\n      <span class=\"breadcrumb-text\">%s</span>|}\n              page_title\n        | _ -> \"\"\n      in\n      t |> replace \"{{lib_hero}}\" \"\"\n      |> replace \"{{docs_breadcrumb}}\" 
docs_breadcrumb\n  | Some lib ->\n      let hero =\n        if is_lib_index then\n          let symbol_html =\n            if lib.symbol = \"\" then \"\"\n            else\n              Printf.sprintf {|<span class=\"rune-symbol\">%s</span> |} lib.symbol\n          in\n          Printf.sprintf\n            {|      <div class=\"docs-hero\" style=\"--lib-color: var(--color-%s)\">\n        <h1>%s%s</h1>\n        <p class=\"tagline\">%s</p>\n      </div>\n|}\n            lib.name symbol_html lib.name lib.tagline\n        else \"\"\n      in\n      let lib_breadcrumb =\n        if is_lib_index then\n          Printf.sprintf {|<span class=\"breadcrumb-text %s\">%s</span>|}\n            lib.color lib.display\n        else\n          Printf.sprintf\n            {|<a href=\"/docs/%s/\" class=\"breadcrumb-text %s\">%s</a>\n      <span class=\"breadcrumb-separator\">/</span>\n      <span class=\"breadcrumb-text\">%s</span>|}\n            lib.name lib.color lib.display page_title\n      in\n      t\n      |> replace \"{{lib_name}}\" lib.name\n      |> replace \"{{lib_display}}\" lib.display\n      |> replace \"{{lib_color}}\" lib.color\n      |> replace \"{{lib_description}}\" lib.description\n      |> replace \"{{lib_tab_nav}}\" tab_nav\n      |> replace \"{{lib_hero}}\" hero\n      |> replace \"{{lib_breadcrumb}}\" lib_breadcrumb\n      |> replace \"{{prev_next}}\" prev_next\n"
  },
  {
    "path": "www/process/dream_process.ml",
    "content": "(* This file is part of Dream, released under the MIT license. See LICENSE.md\n   for details, or visit https://github.com/camlworks/dream.\n\n   Copyright 2021 Anton Bachin *)\n\n\n\nlet if_expected = Common.if_expected\n\nopen Soup\n\nlet promise_expected = {|<div class=\"spec type\" id=\"type-promise\">\n <a href=\"#type-promise\" class=\"anchor\"></a><code><span><span class=\"keyword\">and</span> <span>'a promise</span></span><span> = <span><span class=\"type-var\">'a</span> <span class=\"xref-unresolved\">Lwt</span>.t</span></span></code>\n</div>\n|}\n\nlet promise_replacement = {|\n<code><span class=\"keyword\">and</span> 'a promise = 'a <a href=\"https://ocsigen.org/lwt/latest/api/Lwt#2_Fundamentals\">Lwt.t</a></code>\n|}\n\nlet method_expected = {|<div class=\"spec type\" id=\"type-method_\">\n <a href=\"#type-method_\" class=\"anchor\"></a><code><span><span class=\"keyword\">type</span> method_</span><span> = </span><span>[ </span></code>\n <table>\n  <tbody>\n   <tr id=\"type-method_.GET\" class=\"anchored\">\n    <td class=\"def constructor\">\n     <a href=\"#type-method_.GET\" class=\"anchor\"></a><code><span>| </span></code><code><span>`GET</span></code>\n    </td>\n   </tr>\n   <tr id=\"type-method_.POST\" class=\"anchored\">\n    <td class=\"def constructor\">\n     <a href=\"#type-method_.POST\" class=\"anchor\"></a><code><span>| </span></code><code><span>`POST</span></code>\n    </td>\n   </tr>\n   <tr id=\"type-method_.PUT\" class=\"anchored\">\n    <td class=\"def constructor\">\n     <a href=\"#type-method_.PUT\" class=\"anchor\"></a><code><span>| </span></code><code><span>`PUT</span></code>\n    </td>\n   </tr>\n   <tr id=\"type-method_.DELETE\" class=\"anchored\">\n    <td class=\"def constructor\">\n     <a href=\"#type-method_.DELETE\" class=\"anchor\"></a><code><span>| </span></code><code><span>`DELETE</span></code>\n    </td>\n   </tr>\n   <tr id=\"type-method_.HEAD\" class=\"anchored\">\n    <td class=\"def 
constructor\">\n     <a href=\"#type-method_.HEAD\" class=\"anchor\"></a><code><span>| </span></code><code><span>`HEAD</span></code>\n    </td>\n   </tr>\n   <tr id=\"type-method_.CONNECT\" class=\"anchored\">\n    <td class=\"def constructor\">\n     <a href=\"#type-method_.CONNECT\" class=\"anchor\"></a><code><span>| </span></code><code><span>`CONNECT</span></code>\n    </td>\n   </tr>\n   <tr id=\"type-method_.OPTIONS\" class=\"anchored\">\n    <td class=\"def constructor\">\n     <a href=\"#type-method_.OPTIONS\" class=\"anchor\"></a><code><span>| </span></code><code><span>`OPTIONS</span></code>\n    </td>\n   </tr>\n   <tr id=\"type-method_.TRACE\" class=\"anchored\">\n    <td class=\"def constructor\">\n     <a href=\"#type-method_.TRACE\" class=\"anchor\"></a><code><span>| </span></code><code><span>`TRACE</span></code>\n    </td>\n   </tr>\n   <tr id=\"type-method_.PATCH\" class=\"anchored\">\n    <td class=\"def constructor\">\n     <a href=\"#type-method_.PATCH\" class=\"anchor\"></a><code><span>| </span></code><code><span>`PATCH</span></code>\n    </td>\n   </tr>\n   <tr id=\"type-method_.Method\" class=\"anchored\">\n    <td class=\"def constructor\">\n     <a href=\"#type-method_.Method\" class=\"anchor\"></a><code><span>| </span></code><code><span>`Method <span class=\"keyword\">of</span> string</span></code>\n    </td>\n   </tr>\n  </tbody>\n </table>\n <code><span> ]</span></code>\n</div>\n|}\n\nlet method_replacement = {|\n<pre class=\"compact\"><span class=\"keyword\">type</span> method_ = [\n  | `GET\n  | `POST\n  | `PUT\n  | `DELETE\n  | `HEAD\n  | `CONNECT\n  | `OPTIONS\n  | `TRACE\n  | `PATCH\n  | `Method <span class=\"of\">of</span> <a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a>\n]\n</pre>\n|}\n\nlet method_to_string_expected = {|<div class=\"spec value\" id=\"val-method_to_string\">\n <a href=\"#val-method_to_string\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> method_to_string : 
<span><span>[&lt; <a href=\"#type-method_\">method_</a> ]</span> <span class=\"arrow\">-&gt;</span></span> string</span></code>\n</div>\n|}\n\nlet method_to_string_replacement = {|\n<code><span class=\"keyword\">val</span> method_to_string : [&lt; <a href=\"#type-method_\">method_</a> ] <span class=\"arrow\">-&gt;</span> <a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a></code>\n|}\n\nlet string_to_method_expected = {|<div class=\"spec value\" id=\"val-string_to_method\">\n <a href=\"#val-string_to_method\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> string_to_method : <span>string <span class=\"arrow\">-&gt;</span></span> <a href=\"#type-method_\">method_</a></span></code>\n</div>\n|}\n\nlet string_to_method_replacement = {|\n<code><span class=\"keyword\">val</span> string_to_method : <a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a> <span class=\"arrow\">-&gt;</span> <a href=\"#type-method_\">method_</a></code>\n|}\n\nlet methods_equal_expected = {|<div class=\"spec value\" id=\"val-methods_equal\">\n <a href=\"#val-methods_equal\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> methods_equal : <span><span>[&lt; <a href=\"#type-method_\">method_</a> ]</span> <span class=\"arrow\">-&gt;</span></span> <span><span>[&lt; <a href=\"#type-method_\">method_</a> ]</span> <span class=\"arrow\">-&gt;</span></span> bool</span></code>\n</div>\n|}\n\nlet methods_equal_replacement = {|\n<code><span class=\"keyword\">val</span> methods_equal : [&lt; <a href=\"#type-method_\">method_</a> ] <span class=\"arrow\">-&gt;</span> [&lt; <a href=\"#type-method_\">method_</a> ] <span class=\"arrow\">-&gt;</span> <a href=\"https://ocaml.org/manual/latest/api/Bool.html\">bool</a></code>\n|}\n\nlet informational_expected = {|<div class=\"spec type\" id=\"type-informational\">\n <a href=\"#type-informational\" class=\"anchor\"></a><code><span><span class=\"keyword\">type</span> informational</span><span> = 
</span><span>[ </span></code>\n <table>\n  <tbody>\n   <tr id=\"type-informational.Continue\" class=\"anchored\">\n    <td class=\"def constructor\">\n     <a href=\"#type-informational.Continue\" class=\"anchor\"></a><code><span>| </span></code><code><span>`Continue</span></code>\n    </td>\n   </tr>\n   <tr id=\"type-informational.Switching_Protocols\" class=\"anchored\">\n    <td class=\"def constructor\">\n     <a href=\"#type-informational.Switching_Protocols\" class=\"anchor\"></a><code><span>| </span></code><code><span>`Switching_Protocols</span></code>\n    </td>\n   </tr>\n  </tbody>\n </table>\n <code><span> ]</span></code>\n</div>\n|}\n\nlet informational_replacement = {|\n<pre class=\"compact\"><span class=\"keyword\">type</span> informational = [\n  | `Continue\n  | `Switching_Protocols\n]\n</pre>\n|}\n\nlet success_expected = {|<div class=\"spec type\" id=\"type-successful\">\n <a href=\"#type-successful\" class=\"anchor\"></a><code><span><span class=\"keyword\">type</span> successful</span><span> = </span><span>[ </span></code>\n <table>\n  <tbody>\n   <tr id=\"type-successful.OK\" class=\"anchored\">\n    <td class=\"def constructor\">\n     <a href=\"#type-successful.OK\" class=\"anchor\"></a><code><span>| </span></code><code><span>`OK</span></code>\n    </td>\n   </tr>\n   <tr id=\"type-successful.Created\" class=\"anchored\">\n    <td class=\"def constructor\">\n     <a href=\"#type-successful.Created\" class=\"anchor\"></a><code><span>| </span></code><code><span>`Created</span></code>\n    </td>\n   </tr>\n   <tr id=\"type-successful.Accepted\" class=\"anchored\">\n    <td class=\"def constructor\">\n     <a href=\"#type-successful.Accepted\" class=\"anchor\"></a><code><span>| </span></code><code><span>`Accepted</span></code>\n    </td>\n   </tr>\n   <tr id=\"type-successful.Non_Authoritative_Information\" class=\"anchored\">\n    <td class=\"def constructor\">\n     <a href=\"#type-successful.Non_Authoritative_Information\" 
class=\"anchor\"></a><code><span>| </span></code><code><span>`Non_Authoritative_Information</span></code>\n    </td>\n   </tr>\n   <tr id=\"type-successful.No_Content\" class=\"anchored\">\n    <td class=\"def constructor\">\n     <a href=\"#type-successful.No_Content\" class=\"anchor\"></a><code><span>| </span></code><code><span>`No_Content</span></code>\n    </td>\n   </tr>\n   <tr id=\"type-successful.Reset_Content\" class=\"anchored\">\n    <td class=\"def constructor\">\n     <a href=\"#type-successful.Reset_Content\" class=\"anchor\"></a><code><span>| </span></code><code><span>`Reset_Content</span></code>\n    </td>\n   </tr>\n   <tr id=\"type-successful.Partial_Content\" class=\"anchored\">\n    <td class=\"def constructor\">\n     <a href=\"#type-successful.Partial_Content\" class=\"anchor\"></a><code><span>| </span></code><code><span>`Partial_Content</span></code>\n    </td>\n   </tr>\n  </tbody>\n </table>\n <code><span> ]</span></code>\n</div>\n|}\n\nlet success_replacement = {|\n<pre class=\"compact\"><span class=\"keyword\">type</span> successful = [\n  | `OK\n  | `Created\n  | `Accepted\n  | `Non_Authoritative_Information\n  | `No_Content\n  | `Reset_Content\n  | `Partial_Content\n]</pre>\n|}\n\nlet redirect_expected = {|<div class=\"spec type\" id=\"type-redirection\">\n <a href=\"#type-redirection\" class=\"anchor\"></a><code><span><span class=\"keyword\">type</span> redirection</span><span> = </span><span>[ </span></code>\n <table>\n  <tbody>\n   <tr id=\"type-redirection.Multiple_Choices\" class=\"anchored\">\n    <td class=\"def constructor\">\n     <a href=\"#type-redirection.Multiple_Choices\" class=\"anchor\"></a><code><span>| </span></code><code><span>`Multiple_Choices</span></code>\n    </td>\n   </tr>\n   <tr id=\"type-redirection.Moved_Permanently\" class=\"anchored\">\n    <td class=\"def constructor\">\n     <a href=\"#type-redirection.Moved_Permanently\" class=\"anchor\"></a><code><span>| 
</span></code><code><span>`Moved_Permanently</span></code>\n    </td>\n   </tr>\n   <tr id=\"type-redirection.Found\" class=\"anchored\">\n    <td class=\"def constructor\">\n     <a href=\"#type-redirection.Found\" class=\"anchor\"></a><code><span>| </span></code><code><span>`Found</span></code>\n    </td>\n   </tr>\n   <tr id=\"type-redirection.See_Other\" class=\"anchored\">\n    <td class=\"def constructor\">\n     <a href=\"#type-redirection.See_Other\" class=\"anchor\"></a><code><span>| </span></code><code><span>`See_Other</span></code>\n    </td>\n   </tr>\n   <tr id=\"type-redirection.Not_Modified\" class=\"anchored\">\n    <td class=\"def constructor\">\n     <a href=\"#type-redirection.Not_Modified\" class=\"anchor\"></a><code><span>| </span></code><code><span>`Not_Modified</span></code>\n    </td>\n   </tr>\n   <tr id=\"type-redirection.Temporary_Redirect\" class=\"anchored\">\n    <td class=\"def constructor\">\n     <a href=\"#type-redirection.Temporary_Redirect\" class=\"anchor\"></a><code><span>| </span></code><code><span>`Temporary_Redirect</span></code>\n    </td>\n   </tr>\n   <tr id=\"type-redirection.Permanent_Redirect\" class=\"anchored\">\n    <td class=\"def constructor\">\n     <a href=\"#type-redirection.Permanent_Redirect\" class=\"anchor\"></a><code><span>| </span></code><code><span>`Permanent_Redirect</span></code>\n    </td>\n   </tr>\n  </tbody>\n </table>\n <code><span> ]</span></code>\n</div>\n|}\n\nlet redirect_replacement = {|\n<pre class=\"compact\"><span class=\"keyword\">type</span> redirection = [\n  | `Multiple_Choices\n  | `Moved_Permanently\n  | `Found\n  | `See_Other\n  | `Not_Modified\n  | `Temporary_Redirect\n  | `Permanent_Redirect\n]</pre>\n|}\n\nlet client_error_expected = {|<div class=\"spec type\" id=\"type-client_error\">\n <a href=\"#type-client_error\" class=\"anchor\"></a><code><span><span class=\"keyword\">type</span> client_error</span><span> = </span><span>[ </span></code>\n <table>\n  <tbody>\n   <tr 
id=\"type-client_error.Bad_Request\" class=\"anchored\">\n    <td class=\"def constructor\">\n     <a href=\"#type-client_error.Bad_Request\" class=\"anchor\"></a><code><span>| </span></code><code><span>`Bad_Request</span></code>\n    </td>\n   </tr>\n   <tr id=\"type-client_error.Unauthorized\" class=\"anchored\">\n    <td class=\"def constructor\">\n     <a href=\"#type-client_error.Unauthorized\" class=\"anchor\"></a><code><span>| </span></code><code><span>`Unauthorized</span></code>\n    </td>\n   </tr>\n   <tr id=\"type-client_error.Payment_Required\" class=\"anchored\">\n    <td class=\"def constructor\">\n     <a href=\"#type-client_error.Payment_Required\" class=\"anchor\"></a><code><span>| </span></code><code><span>`Payment_Required</span></code>\n    </td>\n   </tr>\n   <tr id=\"type-client_error.Forbidden\" class=\"anchored\">\n    <td class=\"def constructor\">\n     <a href=\"#type-client_error.Forbidden\" class=\"anchor\"></a><code><span>| </span></code><code><span>`Forbidden</span></code>\n    </td>\n   </tr>\n   <tr id=\"type-client_error.Not_Found\" class=\"anchored\">\n    <td class=\"def constructor\">\n     <a href=\"#type-client_error.Not_Found\" class=\"anchor\"></a><code><span>| </span></code><code><span>`Not_Found</span></code>\n    </td>\n   </tr>\n   <tr id=\"type-client_error.Method_Not_Allowed\" class=\"anchored\">\n    <td class=\"def constructor\">\n     <a href=\"#type-client_error.Method_Not_Allowed\" class=\"anchor\"></a><code><span>| </span></code><code><span>`Method_Not_Allowed</span></code>\n    </td>\n   </tr>\n   <tr id=\"type-client_error.Not_Acceptable\" class=\"anchored\">\n    <td class=\"def constructor\">\n     <a href=\"#type-client_error.Not_Acceptable\" class=\"anchor\"></a><code><span>| </span></code><code><span>`Not_Acceptable</span></code>\n    </td>\n   </tr>\n   <tr id=\"type-client_error.Proxy_Authentication_Required\" class=\"anchored\">\n    <td class=\"def constructor\">\n     <a 
href=\"#type-client_error.Proxy_Authentication_Required\" class=\"anchor\"></a><code><span>| </span></code><code><span>`Proxy_Authentication_Required</span></code>\n    </td>\n   </tr>\n   <tr id=\"type-client_error.Request_Timeout\" class=\"anchored\">\n    <td class=\"def constructor\">\n     <a href=\"#type-client_error.Request_Timeout\" class=\"anchor\"></a><code><span>| </span></code><code><span>`Request_Timeout</span></code>\n    </td>\n   </tr>\n   <tr id=\"type-client_error.Conflict\" class=\"anchored\">\n    <td class=\"def constructor\">\n     <a href=\"#type-client_error.Conflict\" class=\"anchor\"></a><code><span>| </span></code><code><span>`Conflict</span></code>\n    </td>\n   </tr>\n   <tr id=\"type-client_error.Gone\" class=\"anchored\">\n    <td class=\"def constructor\">\n     <a href=\"#type-client_error.Gone\" class=\"anchor\"></a><code><span>| </span></code><code><span>`Gone</span></code>\n    </td>\n   </tr>\n   <tr id=\"type-client_error.Length_Required\" class=\"anchored\">\n    <td class=\"def constructor\">\n     <a href=\"#type-client_error.Length_Required\" class=\"anchor\"></a><code><span>| </span></code><code><span>`Length_Required</span></code>\n    </td>\n   </tr>\n   <tr id=\"type-client_error.Precondition_Failed\" class=\"anchored\">\n    <td class=\"def constructor\">\n     <a href=\"#type-client_error.Precondition_Failed\" class=\"anchor\"></a><code><span>| </span></code><code><span>`Precondition_Failed</span></code>\n    </td>\n   </tr>\n   <tr id=\"type-client_error.Payload_Too_Large\" class=\"anchored\">\n    <td class=\"def constructor\">\n     <a href=\"#type-client_error.Payload_Too_Large\" class=\"anchor\"></a><code><span>| </span></code><code><span>`Payload_Too_Large</span></code>\n    </td>\n   </tr>\n   <tr id=\"type-client_error.URI_Too_Long\" class=\"anchored\">\n    <td class=\"def constructor\">\n     <a href=\"#type-client_error.URI_Too_Long\" class=\"anchor\"></a><code><span>| 
</span></code><code><span>`URI_Too_Long</span></code>\n    </td>\n   </tr>\n   <tr id=\"type-client_error.Unsupported_Media_Type\" class=\"anchored\">\n    <td class=\"def constructor\">\n     <a href=\"#type-client_error.Unsupported_Media_Type\" class=\"anchor\"></a><code><span>| </span></code><code><span>`Unsupported_Media_Type</span></code>\n    </td>\n   </tr>\n   <tr id=\"type-client_error.Range_Not_Satisfiable\" class=\"anchored\">\n    <td class=\"def constructor\">\n     <a href=\"#type-client_error.Range_Not_Satisfiable\" class=\"anchor\"></a><code><span>| </span></code><code><span>`Range_Not_Satisfiable</span></code>\n    </td>\n   </tr>\n   <tr id=\"type-client_error.Expectation_Failed\" class=\"anchored\">\n    <td class=\"def constructor\">\n     <a href=\"#type-client_error.Expectation_Failed\" class=\"anchor\"></a><code><span>| </span></code><code><span>`Expectation_Failed</span></code>\n    </td>\n   </tr>\n   <tr id=\"type-client_error.Misdirected_Request\" class=\"anchored\">\n    <td class=\"def constructor\">\n     <a href=\"#type-client_error.Misdirected_Request\" class=\"anchor\"></a><code><span>| </span></code><code><span>`Misdirected_Request</span></code>\n    </td>\n   </tr>\n   <tr id=\"type-client_error.Too_Early\" class=\"anchored\">\n    <td class=\"def constructor\">\n     <a href=\"#type-client_error.Too_Early\" class=\"anchor\"></a><code><span>| </span></code><code><span>`Too_Early</span></code>\n    </td>\n   </tr>\n   <tr id=\"type-client_error.Upgrade_Required\" class=\"anchored\">\n    <td class=\"def constructor\">\n     <a href=\"#type-client_error.Upgrade_Required\" class=\"anchor\"></a><code><span>| </span></code><code><span>`Upgrade_Required</span></code>\n    </td>\n   </tr>\n   <tr id=\"type-client_error.Precondition_Required\" class=\"anchored\">\n    <td class=\"def constructor\">\n     <a href=\"#type-client_error.Precondition_Required\" class=\"anchor\"></a><code><span>| 
</span></code><code><span>`Precondition_Required</span></code>\n    </td>\n   </tr>\n   <tr id=\"type-client_error.Too_Many_Requests\" class=\"anchored\">\n    <td class=\"def constructor\">\n     <a href=\"#type-client_error.Too_Many_Requests\" class=\"anchor\"></a><code><span>| </span></code><code><span>`Too_Many_Requests</span></code>\n    </td>\n   </tr>\n   <tr id=\"type-client_error.Request_Header_Fields_Too_Large\" class=\"anchored\">\n    <td class=\"def constructor\">\n     <a href=\"#type-client_error.Request_Header_Fields_Too_Large\" class=\"anchor\"></a><code><span>| </span></code><code><span>`Request_Header_Fields_Too_Large</span></code>\n    </td>\n   </tr>\n   <tr id=\"type-client_error.Unavailable_For_Legal_Reasons\" class=\"anchored\">\n    <td class=\"def constructor\">\n     <a href=\"#type-client_error.Unavailable_For_Legal_Reasons\" class=\"anchor\"></a><code><span>| </span></code><code><span>`Unavailable_For_Legal_Reasons</span></code>\n    </td>\n   </tr>\n  </tbody>\n </table>\n <code><span> ]</span></code>\n</div>\n|}\n\nlet client_error_replacement = {|\n<pre class=\"compact\"><span class=\"keyword\">type</span> client_error = [\n  | `Bad_Request\n  | `Unauthorized\n  | `Payment_Required\n  | `Forbidden\n  | `Not_Found\n  | `Method_Not_Allowed\n  | `Not_Acceptable\n  | `Proxy_Authentication_Required\n  | `Request_Timeout\n  | `Conflict\n  | `Gone\n  | `Length_Required\n  | `Precondition_Failed\n  | `Payload_Too_Large\n  | `URI_Too_Long\n  | `Unsupported_Media_Type\n  | `Range_Not_Satisfiable\n  | `Expectation_Failed\n  | `Misdirected_Request\n  | `Too_Early\n  | `Upgrade_Required\n  | `Precondition_Required\n  | `Too_Many_Requests\n  | `Request_Header_Fields_Too_Large\n  | `Unavailable_For_Legal_Reasons\n]</pre>\n|}\n\nlet server_expected = {|<div class=\"spec type\" id=\"type-server_error\">\n <a href=\"#type-server_error\" class=\"anchor\"></a><code><span><span class=\"keyword\">type</span> server_error</span><span> = </span><span>[ 
</span></code>\n <table>\n  <tbody>\n   <tr id=\"type-server_error.Internal_Server_Error\" class=\"anchored\">\n    <td class=\"def constructor\">\n     <a href=\"#type-server_error.Internal_Server_Error\" class=\"anchor\"></a><code><span>| </span></code><code><span>`Internal_Server_Error</span></code>\n    </td>\n   </tr>\n   <tr id=\"type-server_error.Not_Implemented\" class=\"anchored\">\n    <td class=\"def constructor\">\n     <a href=\"#type-server_error.Not_Implemented\" class=\"anchor\"></a><code><span>| </span></code><code><span>`Not_Implemented</span></code>\n    </td>\n   </tr>\n   <tr id=\"type-server_error.Bad_Gateway\" class=\"anchored\">\n    <td class=\"def constructor\">\n     <a href=\"#type-server_error.Bad_Gateway\" class=\"anchor\"></a><code><span>| </span></code><code><span>`Bad_Gateway</span></code>\n    </td>\n   </tr>\n   <tr id=\"type-server_error.Service_Unavailable\" class=\"anchored\">\n    <td class=\"def constructor\">\n     <a href=\"#type-server_error.Service_Unavailable\" class=\"anchor\"></a><code><span>| </span></code><code><span>`Service_Unavailable</span></code>\n    </td>\n   </tr>\n   <tr id=\"type-server_error.Gateway_Timeout\" class=\"anchored\">\n    <td class=\"def constructor\">\n     <a href=\"#type-server_error.Gateway_Timeout\" class=\"anchor\"></a><code><span>| </span></code><code><span>`Gateway_Timeout</span></code>\n    </td>\n   </tr>\n   <tr id=\"type-server_error.HTTP_Version_Not_Supported\" class=\"anchored\">\n    <td class=\"def constructor\">\n     <a href=\"#type-server_error.HTTP_Version_Not_Supported\" class=\"anchor\"></a><code><span>| </span></code><code><span>`HTTP_Version_Not_Supported</span></code>\n    </td>\n   </tr>\n  </tbody>\n </table>\n <code><span> ]</span></code>\n</div>\n|}\n\nlet server_replacement = {|\n<pre class=\"compact\"><span class=\"keyword\">type</span> server_error = [\n  | `Internal_Server_Error\n  | `Not_Implemented\n  | `Bad_Gateway\n  | `Service_Unavailable\n  | 
`Gateway_Timeout\n  | `HTTP_Version_Not_Supported\n]</pre>\n|}\n\nlet standard_expected = {|<div class=\"spec type\" id=\"type-standard_status\">\n <a href=\"#type-standard_status\" class=\"anchor\"></a><code><span><span class=\"keyword\">type</span> standard_status</span><span> = </span><span>[ </span></code>\n <table>\n  <tbody>\n   <tr id=\"type-standard_status.informational\" class=\"anchored\">\n    <td class=\"def type\">\n     <a href=\"#type-standard_status.informational\" class=\"anchor\"></a><code><span>| </span></code><code><span><a href=\"#type-informational\">informational</a></span></code>\n    </td>\n   </tr>\n   <tr id=\"type-standard_status.successful\" class=\"anchored\">\n    <td class=\"def type\">\n     <a href=\"#type-standard_status.successful\" class=\"anchor\"></a><code><span>| </span></code><code><span><a href=\"#type-successful\">successful</a></span></code>\n    </td>\n   </tr>\n   <tr id=\"type-standard_status.redirection\" class=\"anchored\">\n    <td class=\"def type\">\n     <a href=\"#type-standard_status.redirection\" class=\"anchor\"></a><code><span>| </span></code><code><span><a href=\"#type-redirection\">redirection</a></span></code>\n    </td>\n   </tr>\n   <tr id=\"type-standard_status.client_error\" class=\"anchored\">\n    <td class=\"def type\">\n     <a href=\"#type-standard_status.client_error\" class=\"anchor\"></a><code><span>| </span></code><code><span><a href=\"#type-client_error\">client_error</a></span></code>\n    </td>\n   </tr>\n   <tr id=\"type-standard_status.server_error\" class=\"anchored\">\n    <td class=\"def type\">\n     <a href=\"#type-standard_status.server_error\" class=\"anchor\"></a><code><span>| </span></code><code><span><a href=\"#type-server_error\">server_error</a></span></code>\n    </td>\n   </tr>\n  </tbody>\n </table>\n <code><span> ]</span></code>\n</div>\n|}\n\nlet standard_replacement = {|\n<pre class=\"compact\"><span class=\"keyword\">type</span> standard_status = [\n  | <a 
href=\"#type-informational\">informational</a>\n  | <a href=\"#type-successful\">successful</a>\n  | <a href=\"#type-redirection\">redirection</a>\n  | <a href=\"#type-client_error\">client_error</a>\n  | <a href=\"#type-server_error\">server_error</a>\n]</pre>\n|}\n\nlet status_expected = {|<div class=\"spec type\" id=\"type-status\">\n <a href=\"#type-status\" class=\"anchor\"></a><code><span><span class=\"keyword\">type</span> status</span><span> = </span><span>[ </span></code>\n <table>\n  <tbody>\n   <tr id=\"type-status.standard_status\" class=\"anchored\">\n    <td class=\"def type\">\n     <a href=\"#type-status.standard_status\" class=\"anchor\"></a><code><span>| </span></code><code><span><a href=\"#type-standard_status\">standard_status</a></span></code>\n    </td>\n   </tr>\n   <tr id=\"type-status.Status\" class=\"anchored\">\n    <td class=\"def constructor\">\n     <a href=\"#type-status.Status\" class=\"anchor\"></a><code><span>| </span></code><code><span>`Status <span class=\"keyword\">of</span> int</span></code>\n    </td>\n   </tr>\n  </tbody>\n </table>\n <code><span> ]</span></code>\n</div>\n|}\n\nlet status_replacement = {|\n<pre class=\"compact\"><span class=\"keyword\">type</span> status = [\n  | <a href=\"#type-standard_status\">standard_status</a>\n  | `Status <span class=\"of\">of</span> <a href=\"https://ocaml.org/manual/latest/api/Int.html\">int</a>\n]</pre>\n|}\n\nlet status_to_string_expected = {|<div class=\"spec value\" id=\"val-status_to_string\">\n <a href=\"#val-status_to_string\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> status_to_string : <span><span>[&lt; <a href=\"#type-status\">status</a> ]</span> <span class=\"arrow\">-&gt;</span></span> string</span></code>\n</div>\n|}\n\nlet status_to_string_replacement = {|\n<code><span class=\"keyword\">val</span> status_to_string : [&lt; <a href=\"#type-status\">status</a> ] <span class=\"arrow\">-&gt;</span> <a 
href=\"https://ocaml.org/manual/latest/api/String.html\">string</a></code>\n|}\n\nlet status_to_reason_expected = {|<div class=\"spec value\" id=\"val-status_to_reason\">\n <a href=\"#val-status_to_reason\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> status_to_reason : <span><span>[&lt; <a href=\"#type-status\">status</a> ]</span> <span class=\"arrow\">-&gt;</span></span> <span>string option</span></span></code>\n</div>\n|}\n\nlet status_to_reason_replacement = {|\n<code><span class=\"keyword\">val</span> status_to_reason : [&lt; <a href=\"#type-status\">status</a> ] <span class=\"arrow\">-&gt;</span> <a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a> <a href=\"https://ocaml.org/manual/latest/api/Option.html\">option</a></code>\n|}\n\nlet status_to_int_expected = {|<div class=\"spec value\" id=\"val-status_to_int\">\n <a href=\"#val-status_to_int\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> status_to_int : <span><span>[&lt; <a href=\"#type-status\">status</a> ]</span> <span class=\"arrow\">-&gt;</span></span> int</span></code>\n</div>\n|}\n\nlet status_to_int_replacement = {|\n<code><span class=\"keyword\">val</span> status_to_int : [&lt; <a href=\"#type-status\">status</a> ] <span class=\"arrow\">-&gt;</span> <a href=\"https://ocaml.org/manual/latest/api/Int.html\">int</a></code>\n|}\n\nlet int_to_status_expected = {|<div class=\"spec value\" id=\"val-int_to_status\">\n <a href=\"#val-int_to_status\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> int_to_status : <span>int <span class=\"arrow\">-&gt;</span></span> <a href=\"#type-status\">status</a></span></code>\n</div>\n|}\n\nlet int_to_status_replacement = {|\n<code><span class=\"keyword\">val</span> int_to_status : <a href=\"https://ocaml.org/manual/latest/api/Int.html\">int</a> <span class=\"arrow\">-&gt;</span> <a href=\"#type-status\">status</a></code>\n|}\n\nlet is_informational_expected = {|<div class=\"spec value\" 
id=\"val-is_informational\">\n <a href=\"#val-is_informational\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> is_informational : <span><span>[&lt; <a href=\"#type-status\">status</a> ]</span> <span class=\"arrow\">-&gt;</span></span> bool</span></code>\n</div>\n|}\n\nlet is_informational_replacement = {|\n<code><span class=\"keyword\">val</span> is_informational : [&lt; <a href=\"#type-status\">status</a> ] <span class=\"arrow\">-&gt;</span> <a href=\"https://ocaml.org/manual/latest/api/Bool.html\">bool</a></code>\n|}\n\nlet is_successful_expected = {|<div class=\"spec value\" id=\"val-is_successful\">\n <a href=\"#val-is_successful\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> is_successful : <span><span>[&lt; <a href=\"#type-status\">status</a> ]</span> <span class=\"arrow\">-&gt;</span></span> bool</span></code>\n</div>\n|}\n\nlet is_successful_replacement = {|\n<code><span class=\"keyword\">val</span> is_successful : [&lt; <a href=\"#type-status\">status</a> ] <span class=\"arrow\">-&gt;</span> <a href=\"https://ocaml.org/manual/latest/api/Bool.html\">bool</a></code>\n|}\n\nlet is_redirection_expected = {|<div class=\"spec value\" id=\"val-is_redirection\">\n <a href=\"#val-is_redirection\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> is_redirection : <span><span>[&lt; <a href=\"#type-status\">status</a> ]</span> <span class=\"arrow\">-&gt;</span></span> bool</span></code>\n</div>\n|}\n\nlet is_redirection_replacement = {|\n<code><span class=\"keyword\">val</span> is_redirection : [&lt; <a href=\"#type-status\">status</a> ] <span class=\"arrow\">-&gt;</span> <a href=\"https://ocaml.org/manual/latest/api/Bool.html\">bool</a></code>\n|}\n\nlet is_client_error_expected = {|<div class=\"spec value\" id=\"val-is_client_error\">\n <a href=\"#val-is_client_error\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> is_client_error : <span><span>[&lt; <a 
href=\"#type-status\">status</a> ]</span> <span class=\"arrow\">-&gt;</span></span> bool</span></code>\n</div>\n|}\n\nlet is_client_error_replacement = {|\n<code><span class=\"keyword\">val</span> is_client_error : [&lt; <a href=\"#type-status\">status</a> ] <span class=\"arrow\">-&gt;</span> <a href=\"https://ocaml.org/manual/latest/api/Bool.html\">bool</a></code>\n|}\n\nlet is_server_error_expected = {|<div class=\"spec value\" id=\"val-is_server_error\">\n <a href=\"#val-is_server_error\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> is_server_error : <span><span>[&lt; <a href=\"#type-status\">status</a> ]</span> <span class=\"arrow\">-&gt;</span></span> bool</span></code>\n</div>\n|}\n\nlet is_server_error_replacement = {|\n<code><span class=\"keyword\">val</span> is_server_error : [&lt; <a href=\"#type-status\">status</a> ] <span class=\"arrow\">-&gt;</span> <a href=\"https://ocaml.org/manual/latest/api/Bool.html\">bool</a></code>\n|}\n\nlet status_codes_equal_expected = {|<div class=\"spec value\" id=\"val-status_codes_equal\">\n <a href=\"#val-status_codes_equal\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> status_codes_equal : <span><span>[&lt; <a href=\"#type-status\">status</a> ]</span> <span class=\"arrow\">-&gt;</span></span> <span><span>[&lt; <a href=\"#type-status\">status</a> ]</span> <span class=\"arrow\">-&gt;</span></span> bool</span></code>\n</div>\n|}\n\nlet status_codes_equal_replacement = {|\n<code><span class=\"keyword\">val</span> status_codes_equal : [&lt; <a href=\"#type-status\">status</a> ] <span class=\"arrow\">-&gt;</span> [&lt; <a href=\"#type-status\">status</a> ] <span class=\"arrow\">-&gt;</span> <a href=\"https://ocaml.org/manual/latest/api/Bool.html\">bool</a></code>\n|}\n\nlet client_expected = {|<div class=\"spec value\" id=\"val-client\">\n <a href=\"#val-client\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> client : <span><a 
href=\"#type-request\">request</a> <span class=\"arrow\">-&gt;</span></span> string</span></code>\n</div>\n|}\n\nlet tls_expected = {|<div class=\"spec value\" id=\"val-tls\">\n <a href=\"#val-tls\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> tls : <span><a href=\"#type-request\">request</a> <span class=\"arrow\">-&gt;</span></span> bool</span></code>\n</div>\n|}\n\nlet target_expected = {|<div class=\"spec value\" id=\"val-target\">\n <a href=\"#val-target\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> target : <span><a href=\"#type-request\">request</a> <span class=\"arrow\">-&gt;</span></span> string</span></code>\n</div>\n|}\n\nlet set_client_expected = {|<div class=\"spec value\" id=\"val-set_client\">\n <a href=\"#val-set_client\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> set_client : <span><a href=\"#type-request\">request</a> <span class=\"arrow\">-&gt;</span></span> <span>string <span class=\"arrow\">-&gt;</span></span> unit</span></code>\n</div>\n|}\n\nlet set_method_expected = {|<div class=\"spec value\" id=\"val-set_method_\">\n <a href=\"#val-set_method_\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> set_method_ : <span><a href=\"#type-request\">request</a> <span class=\"arrow\">-&gt;</span></span> <span><span>[&lt; <a href=\"#type-method_\">method_</a> ]</span> <span class=\"arrow\">-&gt;</span></span> unit</span></code>\n</div>\n|}\n\nlet query_expected = {|<div class=\"spec value\" id=\"val-query\">\n <a href=\"#val-query\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> query : <span><a href=\"#type-request\">request</a> <span class=\"arrow\">-&gt;</span></span> <span>string <span class=\"arrow\">-&gt;</span></span> <span>string option</span></span></code>\n</div>\n|}\n\nlet queries_expected = {|<div class=\"spec value\" id=\"val-queries\">\n <a href=\"#val-queries\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> 
queries : <span><a href=\"#type-request\">request</a> <span class=\"arrow\">-&gt;</span></span> <span>string <span class=\"arrow\">-&gt;</span></span> <span>string list</span></span></code>\n</div>\n|}\n\nlet all_queries_expected = {|<div class=\"spec value\" id=\"val-all_queries\">\n <a href=\"#val-all_queries\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> all_queries : <span><a href=\"#type-request\">request</a> <span class=\"arrow\">-&gt;</span></span> <span><span>(string * string)</span> list</span></span></code>\n</div>\n|}\n\nlet response_expected = {|<div class=\"spec value\" id=\"val-response\">\n <a href=\"#val-response\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> response : <span>?status:<span>[&lt; <a href=\"#type-status\">status</a> ]</span> <span class=\"arrow\">-&gt;</span></span> <span>?code:int <span class=\"arrow\">-&gt;</span></span> <span>?headers:<span><span>(string * string)</span> list</span> <span class=\"arrow\">-&gt;</span></span>\n<span>string <span class=\"arrow\">-&gt;</span></span> <a href=\"#type-response\">response</a></span></code>\n</div>\n|}\n\nlet response_replacement = {|\n<pre><span class=\"keyword\">val</span> response :\n  <span class=\"optional\">?status:[&lt; <a href=\"#type-status\">status</a> ] -&gt;\n  ?code:<a href=\"https://ocaml.org/manual/latest/api/Int.html\">int</a> -&gt;\n  ?headers:(<a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a> * <a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a>) <a href=\"https://ocaml.org/manual/latest/api/List.html\">list</a> -&gt;</span>\n    <a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a> -&gt; <a href=\"#type-response\">response</a>\n</pre>\n|}\n\nlet respond_expected = {|<div class=\"spec value\" id=\"val-respond\">\n <a href=\"#val-respond\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> respond : <span>?status:<span>[&lt; <a 
href=\"#type-status\">status</a> ]</span> <span class=\"arrow\">-&gt;</span></span> <span>?code:int <span class=\"arrow\">-&gt;</span></span> <span>?headers:<span><span>(string * string)</span> list</span> <span class=\"arrow\">-&gt;</span></span>\n<span>string <span class=\"arrow\">-&gt;</span></span> <span><a href=\"#type-response\">response</a> <a href=\"#type-promise\">promise</a></span></span></code>\n</div>\n|}\n\nlet respond_replacement = {|\n<pre><span class=\"keyword\">val</span> respond :\n  <span class=\"optional\">?status:[&lt; <a href=\"#type-status\">status</a> ] -&gt;\n  ?code:<a href=\"https://ocaml.org/manual/latest/api/Int.html\">int</a> -&gt;\n  ?headers:(<a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a> * <a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a>) <a href=\"https://ocaml.org/manual/latest/api/List.html\">list</a> -&gt;</span>\n    <a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a> -&gt; <a href=\"#type-response\">response</a> <a href=\"#type-promise\">promise</a>\n</pre>\n|}\n\nlet html_expected = {|<div class=\"spec value\" id=\"val-html\">\n <a href=\"#val-html\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> html : <span>?status:<span>[&lt; <a href=\"#type-status\">status</a> ]</span> <span class=\"arrow\">-&gt;</span></span> <span>?code:int <span class=\"arrow\">-&gt;</span></span> <span>?headers:<span><span>(string * string)</span> list</span> <span class=\"arrow\">-&gt;</span></span>\n<span>string <span class=\"arrow\">-&gt;</span></span> <span><a href=\"#type-response\">response</a> <a href=\"#type-promise\">promise</a></span></span></code>\n</div>\n|}\n\nlet html_replacement = {|\n<pre><span class=\"keyword\">val</span> html :\n  <span class=\"optional\">?status:[&lt; <a href=\"#type-status\">status</a> ] ->\n  ?code:<a href=\"https://ocaml.org/manual/latest/api/Int.html\">int</a> -&gt;\n  ?headers:(<a 
href=\"https://ocaml.org/manual/latest/api/String.html\">string</a> * <a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a>) <a href=\"https://ocaml.org/manual/latest/api/List.html\">list</a> -&gt;</span>\n    <a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a> -> <a href=\"#type-response\">response</a> <a href=\"#type-promise\">promise</a>\n</pre>\n|}\n\nlet json_expected = {|<div class=\"spec value\" id=\"val-json\">\n <a href=\"#val-json\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> json : <span>?status:<span>[&lt; <a href=\"#type-status\">status</a> ]</span> <span class=\"arrow\">-&gt;</span></span> <span>?code:int <span class=\"arrow\">-&gt;</span></span> <span>?headers:<span><span>(string * string)</span> list</span> <span class=\"arrow\">-&gt;</span></span>\n<span>string <span class=\"arrow\">-&gt;</span></span> <span><a href=\"#type-response\">response</a> <a href=\"#type-promise\">promise</a></span></span></code>\n</div>\n|}\n\nlet json_replacement = {|\n<pre><span class=\"keyword\">val</span> json :\n  <span class=\"optional\">?status:[&lt; <a href=\"#type-status\">status</a> ] ->\n  ?code:<a href=\"https://ocaml.org/manual/latest/api/Int.html\">int</a> -&gt;\n  ?headers:(<a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a> * <a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a>) <a href=\"https://ocaml.org/manual/latest/api/List.html\">list</a> -&gt;</span>\n    <a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a> -> <a href=\"#type-response\">response</a> <a href=\"#type-promise\">promise</a>\n</pre>\n|}\n\nlet val_redirect_expected = {|<div class=\"spec value\" id=\"val-redirect\">\n <a href=\"#val-redirect\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> redirect : <span>?status:<span>[&lt; <a href=\"#type-redirection\">redirection</a> ]</span> <span class=\"arrow\">-&gt;</span></span> <span>?code:int <span 
class=\"arrow\">-&gt;</span></span> <span>?headers:<span><span>(string * string)</span> list</span> <span class=\"arrow\">-&gt;</span></span>\n<span><a href=\"#type-request\">request</a> <span class=\"arrow\">-&gt;</span></span> <span>string <span class=\"arrow\">-&gt;</span></span> <span><a href=\"#type-response\">response</a> <a href=\"#type-promise\">promise</a></span></span></code>\n</div>\n|}\n\nlet val_redirect_replacement = {|\n<pre><span class=\"keyword\">val</span> redirect :\n  <span class=\"optional\">?status:[&lt; <a href=\"#type-redirection\">redirection</a> ] ->\n  ?code:<a href=\"https://ocaml.org/manual/latest/api/Int.html\">int</a> -&gt;\n  ?headers:(<a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a> * <a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a>) <a href=\"https://ocaml.org/manual/latest/api/List.html\">list</a> -&gt;</span>\n    <a href=\"#type-request\">request</a> -> <a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a> -> <a href=\"#type-response\">response</a> <a href=\"#type-promise\">promise</a>\n</pre>\n|}\n\nlet stream_expected = {|<div class=\"spec value\" id=\"val-stream\">\n <a href=\"#val-stream\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> stream : <span>?status:<span>[&lt; <a href=\"#type-status\">status</a> ]</span> <span class=\"arrow\">-&gt;</span></span> <span>?code:int <span class=\"arrow\">-&gt;</span></span> <span>?headers:<span><span>(string * string)</span> list</span> <span class=\"arrow\">-&gt;</span></span>\n<span>?close:bool <span class=\"arrow\">-&gt;</span></span> <span><span>(<span><a href=\"#type-stream\">stream</a> <span class=\"arrow\">-&gt;</span></span> <span>unit <a href=\"#type-promise\">promise</a></span>)</span> <span class=\"arrow\">-&gt;</span></span> <span><a href=\"#type-response\">response</a> <a href=\"#type-promise\">promise</a></span></span></code>\n</div>\n|}\n\nlet stream_replacement = {|\n<pre><span 
class=\"keyword\">val</span> stream :\n  <span class=\"optional\">?status:[&lt; <a href=\"#type-status\">status</a> ] ->\n  ?code:<a href=\"https://ocaml.org/manual/latest/api/Int.html\">int</a> -&gt;\n  ?headers:(<a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a> * <a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a>) <a href=\"https://ocaml.org/manual/latest/api/List.html\">list</a> -&gt;\n  ?close:<a href=\"https://ocaml.org/manual/latest/api/Bool.html\">bool</a> -></span>\n    (<a href=\"#type-stream\">stream</a> -> <a href=\"https://ocaml.org/manual/latest/api/Unit.html\">unit</a> <a href=\"#type-promise\">promise</a>) -> <a href=\"#type-response\">response</a> <a href=\"#type-promise\">promise</a>\n</pre>\n|}\n\nlet empty_expected = {|<div class=\"spec value\" id=\"val-empty\">\n <a href=\"#val-empty\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> empty : <span>?headers:<span><span>(string * string)</span> list</span> <span class=\"arrow\">-&gt;</span></span> <span><a href=\"#type-status\">status</a> <span class=\"arrow\">-&gt;</span></span> <span><a href=\"#type-response\">response</a> <a href=\"#type-promise\">promise</a></span></span></code>\n</div>\n|}\n\nlet empty_replacement = {|\n<pre><span class=\"keyword\">val</span> empty :\n  <span class=\"optional\">?headers:(<a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a> * <a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a>) <a href=\"https://ocaml.org/manual/latest/api/List.html\">list</a> -&gt;</span>\n    <a href=\"#type-status\">status</a> -> <a href=\"#type-response\">response</a> <a href=\"#type-promise\">promise</a>\n</pre>\n|}\n\nlet set_status_expected = {|<div class=\"spec value\" id=\"val-set_status\">\n <a href=\"#val-set_status\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> set_status : <span><a href=\"#type-response\">response</a> <span class=\"arrow\">-&gt;</span></span> <span><a href=\"#type-status\">status</a> <span 
class=\"arrow\">-&gt;</span></span> unit</span></code>\n</div>\n|}\n\nlet header_expected = {|<div class=\"spec value\" id=\"val-header\">\n <a href=\"#val-header\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> header : <span><span><span class=\"type-var\">'a</span> <a href=\"#type-message\">message</a></span> <span class=\"arrow\">-&gt;</span></span> <span>string <span class=\"arrow\">-&gt;</span></span> <span>string option</span></span></code>\n</div>\n|}\n\nlet headers_expected = {|<div class=\"spec value\" id=\"val-headers\">\n <a href=\"#val-headers\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> headers : <span><span><span class=\"type-var\">'a</span> <a href=\"#type-message\">message</a></span> <span class=\"arrow\">-&gt;</span></span> <span>string <span class=\"arrow\">-&gt;</span></span> <span>string list</span></span></code>\n</div>\n|}\n\nlet all_headers_expected = {|<div class=\"spec value\" id=\"val-all_headers\">\n <a href=\"#val-all_headers\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> all_headers : <span><span><span class=\"type-var\">'a</span> <a href=\"#type-message\">message</a></span> <span class=\"arrow\">-&gt;</span></span> <span><span>(string * string)</span> list</span></span></code>\n</div>\n|}\n\nlet has_header_expected = {|<div class=\"spec value\" id=\"val-has_header\">\n <a href=\"#val-has_header\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> has_header : <span><span><span class=\"type-var\">'a</span> <a href=\"#type-message\">message</a></span> <span class=\"arrow\">-&gt;</span></span> <span>string <span class=\"arrow\">-&gt;</span></span> bool</span></code>\n</div>\n|}\n\nlet drop_header_expected = {|<div class=\"spec value\" id=\"val-drop_header\">\n <a href=\"#val-drop_header\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> drop_header : <span><span><span class=\"type-var\">'a</span> <a 
href=\"#type-message\">message</a></span> <span class=\"arrow\">-&gt;</span></span> <span>string <span class=\"arrow\">-&gt;</span></span> unit</span></code>\n</div>\n|}\n\nlet add_header_expected = {|<div class=\"spec value\" id=\"val-add_header\">\n <a href=\"#val-add_header\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> add_header : <span><span><span class=\"type-var\">'a</span> <a href=\"#type-message\">message</a></span> <span class=\"arrow\">-&gt;</span></span> <span>string <span class=\"arrow\">-&gt;</span></span> <span>string <span class=\"arrow\">-&gt;</span></span> unit</span></code>\n</div>\n|}\n\nlet add_header_replacement = {|\n<pre><span class=\"keyword\">val</span> add_header :\n  'a <a href=\"#type-message\">message</a> -> <a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a> -> <a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a> -> <a href=\"https://ocaml.org/manual/latest/api/Unit.html\">unit</a>\n</pre>\n|}\n\nlet set_header_expected = {|<div class=\"spec value\" id=\"val-set_header\">\n <a href=\"#val-set_header\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> set_header : <span><span><span class=\"type-var\">'a</span> <a href=\"#type-message\">message</a></span> <span class=\"arrow\">-&gt;</span></span> <span>string <span class=\"arrow\">-&gt;</span></span> <span>string <span class=\"arrow\">-&gt;</span></span> unit</span></code>\n</div>\n|}\n\nlet set_header_replacement = {|\n<pre><span class=\"keyword\">val</span> set_header :\n  'a <a href=\"#type-message\">message</a> -> <a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a> -> <a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a> -> <a href=\"https://ocaml.org/manual/latest/api/Unit.html\">unit</a>\n</pre>\n|}\n\nlet add_set_cookie_expected = {|<div class=\"spec value\" id=\"val-set_cookie\">\n <a href=\"#val-set_cookie\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> 
set_cookie : <span>?prefix:<span><span>[&lt; `Host <span>| `Secure</span> ]</span> option</span> <span class=\"arrow\">-&gt;</span></span> <span>?encrypt:bool <span class=\"arrow\">-&gt;</span></span>\n<span>?expires:float <span class=\"arrow\">-&gt;</span></span> <span>?max_age:float <span class=\"arrow\">-&gt;</span></span> <span>?domain:string <span class=\"arrow\">-&gt;</span></span> <span>?path:<span>string option</span> <span class=\"arrow\">-&gt;</span></span>\n<span>?secure:bool <span class=\"arrow\">-&gt;</span></span> <span>?http_only:bool <span class=\"arrow\">-&gt;</span></span> <span>?same_site:<span><span>[&lt; `Strict <span>| `Lax</span> <span>| `None</span> ]</span> option</span> <span class=\"arrow\">-&gt;</span></span>\n<span><a href=\"#type-response\">response</a> <span class=\"arrow\">-&gt;</span></span> <span><a href=\"#type-request\">request</a> <span class=\"arrow\">-&gt;</span></span> <span>string <span class=\"arrow\">-&gt;</span></span> <span>string <span class=\"arrow\">-&gt;</span></span> unit</span></code>\n</div>\n|}\n\nlet add_set_cookie_replacement = {|\n<pre><span class=\"keyword\">val</span> set_cookie :\n  <span class=\"optional\">?prefix:[&lt; `Host | `Secure ] <a href=\"https://ocaml.org/manual/latest/api/Option.html\">option</a> ->\n  ?encrypt:<a href=\"https://ocaml.org/manual/latest/api/Bool.html\">bool</a> ->\n  ?expires:<a href=\"https://ocaml.org/manual/latest/api/Float.html\">float</a> ->\n  ?max_age:<a href=\"https://ocaml.org/manual/latest/api/Float.html\">float</a> ->\n  ?domain:<a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a> ->\n  ?path:<a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a> <a href=\"https://ocaml.org/manual/latest/api/Option.html\">option</a> ->\n  ?secure:<a href=\"https://ocaml.org/manual/latest/api/Bool.html\">bool</a> ->\n  ?http_only:<a href=\"https://ocaml.org/manual/latest/api/Bool.html\">bool</a> ->\n  ?same_site:[&lt; `Strict | `Lax | `None ] <a 
href=\"https://ocaml.org/manual/latest/api/Option.html\">option</a> -></span>\n    <a href=\"#type-response\">response</a> -> <a href=\"#type-request\">request</a> -> <a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a> -> <a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a> -> <a href=\"https://ocaml.org/manual/latest/api/Unit.html\">unit</a>\n</pre>|}\n\nlet drop_cookie_expected = {|<div class=\"spec value\" id=\"val-drop_cookie\">\n <a href=\"#val-drop_cookie\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> drop_cookie : <span>?prefix:<span><span>[&lt; `Host <span>| `Secure</span> ]</span> option</span> <span class=\"arrow\">-&gt;</span></span> <span>?domain:string <span class=\"arrow\">-&gt;</span></span>\n<span>?path:<span>string option</span> <span class=\"arrow\">-&gt;</span></span> <span>?secure:bool <span class=\"arrow\">-&gt;</span></span> <span>?http_only:bool <span class=\"arrow\">-&gt;</span></span>\n<span>?same_site:<span><span>[&lt; `Strict <span>| `Lax</span> <span>| `None</span> ]</span> option</span> <span class=\"arrow\">-&gt;</span></span> <span><a href=\"#type-response\">response</a> <span class=\"arrow\">-&gt;</span></span> <span><a href=\"#type-request\">request</a> <span class=\"arrow\">-&gt;</span></span> <span>string <span class=\"arrow\">-&gt;</span></span> unit</span></code>\n</div>\n|}\n\nlet drop_cookie_replacement = {|\n<pre><span class=\"keyword\">val</span> drop_cookie :\n  <span class=\"optional\">?prefix:[&lt; `Host | `Secure ] <a href=\"https://ocaml.org/manual/latest/api/Option.html\">option</a> ->\n  ?domain:<a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a> ->\n  ?path:<a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a> <a href=\"https://ocaml.org/manual/latest/api/Option.html\">option</a> ->\n  ?secure:<a href=\"https://ocaml.org/manual/latest/api/Bool.html\">bool</a> ->\n  ?http_only:<a 
href=\"https://ocaml.org/manual/latest/api/Bool.html\">bool</a> ->\n  ?same_site:[&lt; `Strict | `Lax | `None ] <a href=\"https://ocaml.org/manual/latest/api/Option.html\">option</a> -></span>\n    <a href=\"#type-response\">response</a> -> <a href=\"#type-request\">request</a> -> <a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a> -> <a href=\"https://ocaml.org/manual/latest/api/Unit.html\">unit</a>\n</pre>|}\n\nlet cookie_expected = {|<div class=\"spec value\" id=\"val-cookie\">\n <a href=\"#val-cookie\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> cookie : <span>?prefix:<span><span>[&lt; `Host <span>| `Secure</span> ]</span> option</span> <span class=\"arrow\">-&gt;</span></span> <span>?decrypt:bool <span class=\"arrow\">-&gt;</span></span>\n<span>?domain:string <span class=\"arrow\">-&gt;</span></span> <span>?path:<span>string option</span> <span class=\"arrow\">-&gt;</span></span> <span>?secure:bool <span class=\"arrow\">-&gt;</span></span> <span><a href=\"#type-request\">request</a> <span class=\"arrow\">-&gt;</span></span> <span>string <span class=\"arrow\">-&gt;</span></span> <span>string option</span></span></code>\n</div>\n|}\n\nlet cookie_replacement = {|\n<pre><span class=\"keyword\">val</span> cookie :\n  ?prefix:[&lt; `Host | `Secure ] <a href=\"https://ocaml.org/manual/latest/api/Option.html\">option</a> ->\n  ?decrypt:<a href=\"https://ocaml.org/manual/latest/api/Bool.html\">bool</a> ->\n  ?domain:<a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a> ->\n  ?path:<a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a> <a href=\"https://ocaml.org/manual/latest/api/Option.html\">option</a> ->\n  ?secure:<a href=\"https://ocaml.org/manual/latest/api/Bool.html\">bool</a> ->\n    <a href=\"#type-request\">request</a> -> <a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a> -> <a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a> <a 
href=\"https://ocaml.org/manual/latest/api/Option.html\">option</a>\n</pre>\n|}\n\nlet all_cookies_expected = {|<div class=\"spec value\" id=\"val-all_cookies\">\n <a href=\"#val-all_cookies\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> all_cookies : <span><a href=\"#type-request\">request</a> <span class=\"arrow\">-&gt;</span></span> <span><span>(string * string)</span> list</span></span></code>\n</div>\n|}\n\nlet body_expected = {|<div class=\"spec value\" id=\"val-body\">\n <a href=\"#val-body\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> body : <span><span><span class=\"type-var\">'a</span> <a href=\"#type-message\">message</a></span> <span class=\"arrow\">-&gt;</span></span> <span>string <a href=\"#type-promise\">promise</a></span></span></code>\n</div>\n|}\n\nlet set_body_expected = {|<div class=\"spec value\" id=\"val-set_body\">\n <a href=\"#val-set_body\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> set_body : <span><span><span class=\"type-var\">'a</span> <a href=\"#type-message\">message</a></span> <span class=\"arrow\">-&gt;</span></span> <span>string <span class=\"arrow\">-&gt;</span></span> unit</span></code>\n</div>\n|}\n\nlet read_expected = {|<div class=\"spec value\" id=\"val-read\">\n <a href=\"#val-read\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> read : <span><a href=\"#type-stream\">stream</a> <span class=\"arrow\">-&gt;</span></span> <span><span>string option</span> <a href=\"#type-promise\">promise</a></span></span></code>\n</div>\n|}\n\nlet write_expected = {|<div class=\"spec value\" id=\"val-write\">\n <a href=\"#val-write\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> write : <span><a href=\"#type-stream\">stream</a> <span class=\"arrow\">-&gt;</span></span> <span>string <span class=\"arrow\">-&gt;</span></span> <span>unit <a href=\"#type-promise\">promise</a></span></span></code>\n</div>\n|}\n\nlet flush_expected = 
{|<div class=\"spec value\" id=\"val-flush\">\n <a href=\"#val-flush\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> flush : <span><a href=\"#type-stream\">stream</a> <span class=\"arrow\">-&gt;</span></span> <span>unit <a href=\"#type-promise\">promise</a></span></span></code>\n</div>\n|}\n\nlet close_expected = {|<div class=\"spec value\" id=\"val-close\">\n <a href=\"#val-close\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> close : <span><a href=\"#type-stream\">stream</a> <span class=\"arrow\">-&gt;</span></span> <span>unit <a href=\"#type-promise\">promise</a></span></span></code>\n</div>\n|}\n\nlet bigstring_expected = {|<div class=\"spec type\" id=\"type-buffer\">\n <a href=\"#type-buffer\" class=\"anchor\"></a><code><span><span class=\"keyword\">type</span> buffer</span><span> = <span><span>(char,&nbsp;<span class=\"xref-unresolved\">Stdlib</span>.Bigarray.int8_unsigned_elt,&nbsp;<span class=\"xref-unresolved\">Stdlib</span>.Bigarray.c_layout)</span> <span class=\"xref-unresolved\">Stdlib</span>.Bigarray.Array1.t</span></span></code>\n</div>\n|}\n\nlet bigstring_replacement = {|\n<pre><span class=\"keyword\">type</span> buffer =\n  (<a href=\"https://ocaml.org/manual/latest/api/Char.html\">char</a>, <a href=\"https://ocaml.org/manual/latest/api/Bigarray.html#1_Elementkinds\">Bigarray.int8_unsigned_elt</a>, <a href=\"https://ocaml.org/manual/latest/api/Bigarray.html#1_Arraylayouts\">Bigarray.c_layout</a>)\n    <a href=\"https://ocaml.org/manual/latest/api/Bigarray.Array1.html\">Bigarray.Array1.t</a>\n</pre>\n|}\n\nlet read_stream_expected = {|<div class=\"spec value\" id=\"val-read_stream\">\n <a href=\"#val-read_stream\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> read_stream : <span><a href=\"#type-stream\">stream</a> <span class=\"arrow\">-&gt;</span></span> <span>data:<span>(<span><a href=\"#type-buffer\">buffer</a> <span class=\"arrow\">-&gt;</span></span> <span>int <span 
class=\"arrow\">-&gt;</span></span> <span>int <span class=\"arrow\">-&gt;</span></span> <span>bool <span class=\"arrow\">-&gt;</span></span> <span>bool <span class=\"arrow\">-&gt;</span></span> unit)</span> <span class=\"arrow\">-&gt;</span></span> <span>flush:<span>(<span>unit <span class=\"arrow\">-&gt;</span></span> unit)</span> <span class=\"arrow\">-&gt;</span></span>\n<span>ping:<span>(<span><a href=\"#type-buffer\">buffer</a> <span class=\"arrow\">-&gt;</span></span> <span>int <span class=\"arrow\">-&gt;</span></span> <span>int <span class=\"arrow\">-&gt;</span></span> unit)</span> <span class=\"arrow\">-&gt;</span></span> <span>pong:<span>(<span><a href=\"#type-buffer\">buffer</a> <span class=\"arrow\">-&gt;</span></span> <span>int <span class=\"arrow\">-&gt;</span></span> <span>int <span class=\"arrow\">-&gt;</span></span> unit)</span> <span class=\"arrow\">-&gt;</span></span> <span>close:<span>(<span>int <span class=\"arrow\">-&gt;</span></span> unit)</span> <span class=\"arrow\">-&gt;</span></span>\n<span>exn:<span>(<span>exn <span class=\"arrow\">-&gt;</span></span> unit)</span> <span class=\"arrow\">-&gt;</span></span> unit</span></code>\n</div>\n|}\n\nlet read_stream_replacement = {|\n<pre><span class=\"keyword\">val</span> read_stream :\n  <a href=\"#type-stream\">stream</a> ->\n  data:(<a href=\"#type-buffer\">buffer</a> -> <a href=\"https://ocaml.org/manual/latest/api/Int.html\">int</a> -> <a href=\"https://ocaml.org/manual/latest/api/Int.html\">int</a> -> <a href=\"https://ocaml.org/manual/latest/api/Bool.html\">bool</a> -> <a href=\"https://ocaml.org/manual/latest/api/Bool.html\">bool</a> -> <a href=\"https://ocaml.org/manual/latest/api/Unit.html\">unit</a>) ->\n  flush:(<a href=\"https://ocaml.org/manual/latest/api/Unit.html\">unit</a> -> <a href=\"https://ocaml.org/manual/latest/api/Unit.html\">unit</a>) ->\n  ping:(<a href=\"#type-buffer\">buffer</a> -> <a href=\"https://ocaml.org/manual/latest/api/Int.html\">int</a> -> <a 
href=\"https://ocaml.org/manual/latest/api/Int.html\">int</a> -> <a href=\"https://ocaml.org/manual/latest/api/Unit.html\">unit</a>) ->\n  pong:(<a href=\"#type-buffer\">buffer</a> -> <a href=\"https://ocaml.org/manual/latest/api/Int.html\">int</a> -> <a href=\"https://ocaml.org/manual/latest/api/Int.html\">int</a> -> <a href=\"https://ocaml.org/manual/latest/api/Unit.html\">unit</a>) ->\n  close:(<a href=\"https://ocaml.org/manual/latest/api/Int.html\">int</a> -> <a href=\"https://ocaml.org/manual/latest/api/Unit.html\">unit</a>) ->\n  exn:(<a href=\"https://ocaml.org/manual/latest/coreexamples.html#s%3Aexceptions\">exn</a> -> <a href=\"https://ocaml.org/manual/latest/api/Unit.html\">unit</a>) ->\n    <a href=\"https://ocaml.org/manual/latest/api/Unit.html\">unit</a>\n</pre>\n|}\n\nlet write_stream_expected = {|<div class=\"spec value\" id=\"val-write_stream\">\n <a href=\"#val-write_stream\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> write_stream : <span><a href=\"#type-stream\">stream</a> <span class=\"arrow\">-&gt;</span></span> <span><a href=\"#type-buffer\">buffer</a> <span class=\"arrow\">-&gt;</span></span> <span>int <span class=\"arrow\">-&gt;</span></span> <span>int <span class=\"arrow\">-&gt;</span></span> <span>bool <span class=\"arrow\">-&gt;</span></span> <span>bool <span class=\"arrow\">-&gt;</span></span> <span>close:<span>(<span>int <span class=\"arrow\">-&gt;</span></span> unit)</span> <span class=\"arrow\">-&gt;</span></span>\n<span>exn:<span>(<span>exn <span class=\"arrow\">-&gt;</span></span> unit)</span> <span class=\"arrow\">-&gt;</span></span> <span><span>(<span>unit <span class=\"arrow\">-&gt;</span></span> unit)</span> <span class=\"arrow\">-&gt;</span></span> unit</span></code>\n</div>\n|}\n\nlet write_stream_replacement = {|\n<pre><span class=\"keyword\">val</span> write_stream :\n  <a href=\"#type-stream\">stream</a> ->\n  <a href=\"#type-buffer\">buffer</a> -> <a 
href=\"https://ocaml.org/manual/latest/api/Int.html\">int</a> -> <a href=\"https://ocaml.org/manual/latest/api/Int.html\">int</a> ->\n  <a href=\"https://ocaml.org/manual/latest/api/Bool.html\">bool</a> -> <a href=\"https://ocaml.org/manual/latest/api/Bool.html\">bool</a> ->\n  close:(<a href=\"https://ocaml.org/manual/latest/api/Int.html\">int</a> -> <a href=\"https://ocaml.org/manual/latest/api/Unit.html\">unit</a>) ->\n  exn:(<a href=\"https://ocaml.org/manual/latest/coreexamples.html#s%3Aexceptions\">exn</a> -> <a href=\"https://ocaml.org/manual/latest/api/Unit.html\">unit</a>) ->\n  (<a href=\"https://ocaml.org/manual/latest/api/Unit.html\">unit</a> -> <a href=\"https://ocaml.org/manual/latest/api/Unit.html\">unit</a>) ->\n    <a href=\"https://ocaml.org/manual/latest/api/Unit.html\">unit</a>\n</pre>|}\n\nlet flush_stream_expected = {|<div class=\"spec value\" id=\"val-flush_stream\">\n <a href=\"#val-flush_stream\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> flush_stream : <span><a href=\"#type-stream\">stream</a> <span class=\"arrow\">-&gt;</span></span> <span>close:<span>(<span>int <span class=\"arrow\">-&gt;</span></span> unit)</span> <span class=\"arrow\">-&gt;</span></span> <span>exn:<span>(<span>exn <span class=\"arrow\">-&gt;</span></span> unit)</span> <span class=\"arrow\">-&gt;</span></span> <span><span>(<span>unit <span class=\"arrow\">-&gt;</span></span> unit)</span> <span class=\"arrow\">-&gt;</span></span> unit</span></code>\n</div>\n|}\n\nlet flush_stream_replacement = {|\n<pre><span class=\"keyword\">val</span> flush_stream :\n  <a href=\"#type-stream\">stream</a> ->\n  close:(<a href=\"https://ocaml.org/manual/latest/api/Int.html\">int</a> -> <a href=\"https://ocaml.org/manual/latest/api/Unit.html\">unit</a>) ->\n  exn:(<a href=\"https://ocaml.org/manual/latest/coreexamples.html#s%3Aexceptions\">exn</a> -> <a href=\"https://ocaml.org/manual/latest/api/Unit.html\">unit</a>) ->\n  (<a 
href=\"https://ocaml.org/manual/latest/api/Unit.html\">unit</a> -> <a href=\"https://ocaml.org/manual/latest/api/Unit.html\">unit</a>) ->\n    <a href=\"https://ocaml.org/manual/latest/api/Unit.html\">unit</a>\n</pre>|}\n\nlet ping_stream_expected = {|<div class=\"spec value\" id=\"val-ping_stream\">\n <a href=\"#val-ping_stream\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> ping_stream : <span><a href=\"#type-stream\">stream</a> <span class=\"arrow\">-&gt;</span></span> <span><a href=\"#type-buffer\">buffer</a> <span class=\"arrow\">-&gt;</span></span> <span>int <span class=\"arrow\">-&gt;</span></span> <span>int <span class=\"arrow\">-&gt;</span></span> <span>close:<span>(<span>int <span class=\"arrow\">-&gt;</span></span> unit)</span> <span class=\"arrow\">-&gt;</span></span> <span>exn:<span>(<span>exn <span class=\"arrow\">-&gt;</span></span> unit)</span> <span class=\"arrow\">-&gt;</span></span>\n<span><span>(<span>unit <span class=\"arrow\">-&gt;</span></span> unit)</span> <span class=\"arrow\">-&gt;</span></span> unit</span></code>\n</div>\n|}\n\nlet ping_stream_replacement = {|\n<pre>\n<span class=\"keyword\">val</span> ping_stream :\n  <a href=\"#type-stream\">stream</a> ->\n  <a href=\"#type-buffer\">buffer</a> -> <a href=\"https://ocaml.org/manual/latest/api/Int.html\">int</a> -> <a href=\"https://ocaml.org/manual/latest/api/Int.html\">int</a> ->\n  close:(<a href=\"https://ocaml.org/manual/latest/api/Int.html\">int</a> -> <a href=\"https://ocaml.org/manual/latest/api/Unit.html\">unit</a>) ->\n  exn:(<a href=\"https://ocaml.org/manual/latest/coreexamples.html#s%3Aexceptions\">exn</a> -> <a href=\"https://ocaml.org/manual/latest/api/Unit.html\">unit</a>) ->\n  (<a href=\"https://ocaml.org/manual/latest/api/Unit.html\">unit</a> -> <a href=\"https://ocaml.org/manual/latest/api/Unit.html\">unit</a>) ->\n    <a href=\"https://ocaml.org/manual/latest/api/Unit.html\">unit</a>\n</pre>|}\n\nlet pong_stream_expected = {|<div class=\"spec 
value\" id=\"val-pong_stream\">\n <a href=\"#val-pong_stream\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> pong_stream : <span><a href=\"#type-stream\">stream</a> <span class=\"arrow\">-&gt;</span></span> <span><a href=\"#type-buffer\">buffer</a> <span class=\"arrow\">-&gt;</span></span> <span>int <span class=\"arrow\">-&gt;</span></span> <span>int <span class=\"arrow\">-&gt;</span></span> <span>close:<span>(<span>int <span class=\"arrow\">-&gt;</span></span> unit)</span> <span class=\"arrow\">-&gt;</span></span> <span>exn:<span>(<span>exn <span class=\"arrow\">-&gt;</span></span> unit)</span> <span class=\"arrow\">-&gt;</span></span>\n<span><span>(<span>unit <span class=\"arrow\">-&gt;</span></span> unit)</span> <span class=\"arrow\">-&gt;</span></span> unit</span></code>\n</div>\n|}\n\nlet pong_stream_replacement = {|\n<pre>\n<span class=\"keyword\">val</span> pong_stream :\n  <a href=\"#type-stream\">stream</a> ->\n  <a href=\"#type-buffer\">buffer</a> -> <a href=\"https://ocaml.org/manual/latest/api/Int.html\">int</a> -> <a href=\"https://ocaml.org/manual/latest/api/Int.html\">int</a> ->\n  close:(<a href=\"https://ocaml.org/manual/latest/api/Int.html\">int</a> -> <a href=\"https://ocaml.org/manual/latest/api/Unit.html\">unit</a>) ->\n  exn:(<a href=\"https://ocaml.org/manual/latest/coreexamples.html#s%3Aexceptions\">exn</a> -> <a href=\"https://ocaml.org/manual/latest/api/Unit.html\">unit</a>) ->\n  (<a href=\"https://ocaml.org/manual/latest/api/Unit.html\">unit</a> -> <a href=\"https://ocaml.org/manual/latest/api/Unit.html\">unit</a>) ->\n    <a href=\"https://ocaml.org/manual/latest/api/Unit.html\">unit</a>\n</pre>|}\n\nlet close_stream_expected = {|<div class=\"spec value\" id=\"val-close_stream\">\n <a href=\"#val-close_stream\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> close_stream : <span><a href=\"#type-stream\">stream</a> <span class=\"arrow\">-&gt;</span></span> <span>int <span 
class=\"arrow\">-&gt;</span></span> unit</span></code>\n</div>\n|}\n\nlet abort_stream_expected = {|<div class=\"spec value\" id=\"val-abort_stream\">\n <a href=\"#val-abort_stream\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> abort_stream : <span><a href=\"#type-stream\">stream</a> <span class=\"arrow\">-&gt;</span></span> <span>exn <span class=\"arrow\">-&gt;</span></span> unit</span></code>\n</div>\n|}\n\nlet abort_stream_replacement = {|\n<code><span class=\"keyword\">val</span> abort_stream : <a href=\"#type-stream\">stream</a> <span class=\"arrow\">-&gt;</span> <a href=\"https://ocaml.org/manual/latest/coreexamples.html#s%3Aexceptions\">exn</a> <span class=\"arrow\">-&gt;</span> <a href=\"https://ocaml.org/manual/latest/api/Unit.html\">unit</a></code>\n|}\n\nlet form_expected = {|<div class=\"spec type\" id=\"type-form_result\">\n <a href=\"#type-form_result\" class=\"anchor\"></a><code><span><span class=\"keyword\">type</span> <span>'a form_result</span></span><span> = </span><span>[ </span></code>\n <table>\n  <tbody>\n   <tr id=\"type-form_result.Ok\" class=\"anchored\">\n    <td class=\"def constructor\">\n     <a href=\"#type-form_result.Ok\" class=\"anchor\"></a><code><span>| </span></code><code><span>`Ok <span class=\"keyword\">of</span> <span class=\"type-var\">'a</span></span></code>\n    </td>\n   </tr>\n   <tr id=\"type-form_result.Expired\" class=\"anchored\">\n    <td class=\"def constructor\">\n     <a href=\"#type-form_result.Expired\" class=\"anchor\"></a><code><span>| </span></code><code><span>`Expired <span class=\"keyword\">of</span> <span class=\"type-var\">'a</span> * float</span></code>\n    </td>\n   </tr>\n   <tr id=\"type-form_result.Wrong_session\" class=\"anchored\">\n    <td class=\"def constructor\">\n     <a href=\"#type-form_result.Wrong_session\" class=\"anchor\"></a><code><span>| </span></code><code><span>`Wrong_session <span class=\"keyword\">of</span> <span class=\"type-var\">'a</span></span></code>\n 
   </td>\n   </tr>\n   <tr id=\"type-form_result.Invalid_token\" class=\"anchored\">\n    <td class=\"def constructor\">\n     <a href=\"#type-form_result.Invalid_token\" class=\"anchor\"></a><code><span>| </span></code><code><span>`Invalid_token <span class=\"keyword\">of</span> <span class=\"type-var\">'a</span></span></code>\n    </td>\n   </tr>\n   <tr id=\"type-form_result.Missing_token\" class=\"anchored\">\n    <td class=\"def constructor\">\n     <a href=\"#type-form_result.Missing_token\" class=\"anchor\"></a><code><span>| </span></code><code><span>`Missing_token <span class=\"keyword\">of</span> <span class=\"type-var\">'a</span></span></code>\n    </td>\n   </tr>\n   <tr id=\"type-form_result.Many_tokens\" class=\"anchored\">\n    <td class=\"def constructor\">\n     <a href=\"#type-form_result.Many_tokens\" class=\"anchor\"></a><code><span>| </span></code><code><span>`Many_tokens <span class=\"keyword\">of</span> <span class=\"type-var\">'a</span></span></code>\n    </td>\n   </tr>\n   <tr id=\"type-form_result.Wrong_content_type\" class=\"anchored\">\n    <td class=\"def constructor\">\n     <a href=\"#type-form_result.Wrong_content_type\" class=\"anchor\"></a><code><span>| </span></code><code><span>`Wrong_content_type</span></code>\n    </td>\n   </tr>\n  </tbody>\n </table>\n <code><span> ]</span></code>\n</div>\n|}\n\nlet form_replacement = {|\n<pre class=\"compact\"><span class=\"keyword\">type</span> 'a form_result = [\n  | `Ok            <span class=\"of\">of</span> 'a\n  | `Expired       <span class=\"of\">of</span> 'a * <a href=\"https://ocaml.org/manual/latest/api/Float.html\">float</a>\n  | `Wrong_session <span class=\"of\">of</span> 'a\n  | `Invalid_token <span class=\"of\">of</span> 'a\n  | `Missing_token <span class=\"of\">of</span> 'a\n  | `Many_tokens   <span class=\"of\">of</span> 'a\n  | `Wrong_content_type\n]\n|}\n\nlet form'_expected = {|<div class=\"spec value\" id=\"val-form\">\n <a href=\"#val-form\" 
class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> form : <span>?csrf:bool <span class=\"arrow\">-&gt;</span></span> <span><a href=\"#type-request\">request</a> <span class=\"arrow\">-&gt;</span></span> <span><span><span><span>(string * string)</span> list</span> <a href=\"#type-form_result\">form_result</a></span> <a href=\"#type-promise\">promise</a></span></span></code>\n</div>\n|}\n\nlet form'_replacement = {|\n<pre><span class=\"keyword\">val</span> form :\n  ?csrf:<a href=\"https://ocaml.org/manual/latest/api/Bool.html\">bool</a> ->\n    <a href=\"#type-request\">request</a> -> (<a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a> * <a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a>) <a href=\"https://ocaml.org/manual/latest/api/List.html\">list</a> <a href=\"#type-form_result\">form_result</a> <a href=\"#type-promise\">promise</a>\n</pre>\n|}\n\nlet multipart_form_expected = {|<div class=\"spec type\" id=\"type-multipart_form\">\n <a href=\"#type-multipart_form\" class=\"anchor\"></a><code><span><span class=\"keyword\">type</span> multipart_form</span><span> = <span><span>(string * <span><span>(<span>string option</span> * string)</span> list</span>)</span> list</span></span></code>\n</div>\n|}\n\nlet multipart_form_replacement = {|\n<pre><span class=\"keyword\">type</span> multipart_form =\n  (<a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a> * ((<a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a> <a href=\"https://ocaml.org/manual/latest/api/Option.html\">option</a> * <a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a>) <a href=\"https://ocaml.org/manual/latest/api/List.html\">list</a>)) <a href=\"https://ocaml.org/manual/latest/api/List.html\">list</a>\n</pre>\n|}\n\nlet multipart_expected = {|<div class=\"spec value\" id=\"val-multipart\">\n <a href=\"#val-multipart\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> 
multipart : <span>?csrf:bool <span class=\"arrow\">-&gt;</span></span> <span><a href=\"#type-request\">request</a> <span class=\"arrow\">-&gt;</span></span> <span><span><a href=\"#type-multipart_form\">multipart_form</a> <a href=\"#type-form_result\">form_result</a></span> <a href=\"#type-promise\">promise</a></span></span></code>\n</div>\n|}\n\nlet multipart_replacement = {|\n<pre><span class=\"keyword\">val</span> multipart :\n  ?csrf:<a href=\"https://ocaml.org/manual/latest/api/Bool.html\">bool</a> ->\n    <a href=\"#type-request\">request</a> -> <a href=\"#type-multipart_form\">multipart_form</a> <a href=\"#type-form_result\">form_result</a> <a href=\"#type-promise\">promise</a>\n</pre>\n|}\n\nlet part_expected = {|<div class=\"spec type\" id=\"type-part\">\n <a href=\"#type-part\" class=\"anchor\"></a><code><span><span class=\"keyword\">type</span> part</span><span> = <span>string option</span> * <span>string option</span> * <span><span>(string * string)</span> list</span></span></code>\n</div>\n|}\n\nlet part_replacement = {|\n<pre><span class=\"keyword\">type</span> part =\n  <a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a> <a href=\"https://ocaml.org/manual/latest/api/Option.html\">option</a> * <a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a> <a href=\"https://ocaml.org/manual/latest/api/Option.html\">option</a> * ((<a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a> * <a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a>) <a href=\"https://ocaml.org/manual/latest/api/List.html\">list</a>)\n</pre>\n|}\n\nlet csrf_result_expected = {|<div class=\"spec type\" id=\"type-csrf_result\">\n <a href=\"#type-csrf_result\" class=\"anchor\"></a><code><span><span class=\"keyword\">type</span> csrf_result</span><span> = </span><span>[ </span></code>\n <table>\n  <tbody>\n   <tr id=\"type-csrf_result.Ok\" class=\"anchored\">\n    <td class=\"def constructor\">\n     <a 
href=\"#type-csrf_result.Ok\" class=\"anchor\"></a><code><span>| </span></code><code><span>`Ok</span></code>\n    </td>\n   </tr>\n   <tr id=\"type-csrf_result.Expired\" class=\"anchored\">\n    <td class=\"def constructor\">\n     <a href=\"#type-csrf_result.Expired\" class=\"anchor\"></a><code><span>| </span></code><code><span>`Expired <span class=\"keyword\">of</span> float</span></code>\n    </td>\n   </tr>\n   <tr id=\"type-csrf_result.Wrong_session\" class=\"anchored\">\n    <td class=\"def constructor\">\n     <a href=\"#type-csrf_result.Wrong_session\" class=\"anchor\"></a><code><span>| </span></code><code><span>`Wrong_session</span></code>\n    </td>\n   </tr>\n   <tr id=\"type-csrf_result.Invalid\" class=\"anchored\">\n    <td class=\"def constructor\">\n     <a href=\"#type-csrf_result.Invalid\" class=\"anchor\"></a><code><span>| </span></code><code><span>`Invalid</span></code>\n    </td>\n   </tr>\n  </tbody>\n </table>\n <code><span> ]</span></code>\n</div>\n|}\n\nlet csrf_result_replacement = {|\n<pre class=\"compact\"><span class=\"keyword\">type</span> csrf_result = [\n  | `Ok\n  | `Expired <span class=\"of\">of</span> <a href=\"https://ocaml.org/manual/latest/api/Float.html\">float</a>\n  | `Wrong_session\n  | `Invalid\n]\n|}\n\nlet verify_csrf_token_expected = {|<div class=\"spec value\" id=\"val-verify_csrf_token\">\n <a href=\"#val-verify_csrf_token\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> verify_csrf_token : <span><a href=\"#type-request\">request</a> <span class=\"arrow\">-&gt;</span></span> <span>string <span class=\"arrow\">-&gt;</span></span> <span><a href=\"#type-csrf_result\">csrf_result</a> <a href=\"#type-promise\">promise</a></span></span></code>\n</div>\n|}\n\nlet verify_csrf_token_replacement = {|\n<pre><span class=\"keyword\">val</span> verify_csrf_token :\n  <a href=\"#type-request\">request</a> -> <a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a> -> <a 
href=\"#type-csrf_result\">csrf_result</a> <a href=\"#type-promise\">promise</a>\n</pre>\n|}\n\nlet scope_expected = {|<div class=\"spec value\" id=\"val-scope\">\n <a href=\"#val-scope\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> scope : <span>string <span class=\"arrow\">-&gt;</span></span> <span><span><a href=\"#type-middleware\">middleware</a> list</span> <span class=\"arrow\">-&gt;</span></span> <span><span><a href=\"#type-route\">route</a> list</span> <span class=\"arrow\">-&gt;</span></span> <a href=\"#type-route\">route</a></span></code>\n</div>\n|}\n\nlet scope_replacement = {|\n<pre><span class=\"keyword\">val</span> scope :\n  <a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a> -> <a href=\"#type-middleware\">middleware</a> <a href=\"https://ocaml.org/manual/latest/api/List.html\">list</a> -> <a href=\"#type-route\">route</a> <a href=\"https://ocaml.org/manual/latest/api/List.html\">list</a> -> <a href=\"#type-route\">route</a>\n</pre>\n|}\n\nlet get_expected = {|<div class=\"spec value\" id=\"val-get\">\n <a href=\"#val-get\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> get : <span>string <span class=\"arrow\">-&gt;</span></span> <span><a href=\"#type-handler\">handler</a> <span class=\"arrow\">-&gt;</span></span> <a href=\"#type-route\">route</a></span></code>\n</div>\n|}\n\nlet get_replacement = {|\n<code><span><span class=\"keyword\">val</span> get &nbsp;&nbsp;&nbsp;&nbsp;: <span><a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a> <span class=\"arrow\">-&gt;</span></span> <span><a href=\"#type-handler\">handler</a> <span class=\"arrow\">-&gt;</span></span> <a href=\"#type-route\">route</a></span></code>\n|}\n\nlet post_expected = {|<div class=\"spec value\" id=\"val-post\">\n <a href=\"#val-post\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> post : <span>string <span class=\"arrow\">-&gt;</span></span> <span><a 
href=\"#type-handler\">handler</a> <span class=\"arrow\">-&gt;</span></span> <a href=\"#type-route\">route</a></span></code>\n</div>\n|}\n\nlet post_replacement = {|\n<code><span><span class=\"keyword\">val</span> post &nbsp;&nbsp;&nbsp;: <span><a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a> <span class=\"arrow\">-&gt;</span></span> <span><a href=\"#type-handler\">handler</a> <span class=\"arrow\">-&gt;</span></span> <a href=\"#type-route\">route</a></span></code>\n|}\n\nlet put_expected = {|<div class=\"spec value\" id=\"val-put\">\n <a href=\"#val-put\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> put : <span>string <span class=\"arrow\">-&gt;</span></span> <span><a href=\"#type-handler\">handler</a> <span class=\"arrow\">-&gt;</span></span> <a href=\"#type-route\">route</a></span></code>\n</div>\n|}\n\nlet put_replacement = {|\n<code><span><span class=\"keyword\">val</span> put &nbsp;&nbsp;&nbsp;&nbsp;: <span><a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a> <span class=\"arrow\">-&gt;</span></span> <span><a href=\"#type-handler\">handler</a> <span class=\"arrow\">-&gt;</span></span> <a href=\"#type-route\">route</a></span></code>\n|}\n\nlet delete_expected = {|<div class=\"spec value\" id=\"val-delete\">\n <a href=\"#val-delete\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> delete : <span>string <span class=\"arrow\">-&gt;</span></span> <span><a href=\"#type-handler\">handler</a> <span class=\"arrow\">-&gt;</span></span> <a href=\"#type-route\">route</a></span></code>\n</div>\n|}\n\nlet delete_replacement = {|\n<code><span><span class=\"keyword\">val</span> delete &nbsp;: <span><a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a> <span class=\"arrow\">-&gt;</span></span> <span><a href=\"#type-handler\">handler</a> <span class=\"arrow\">-&gt;</span></span> <a href=\"#type-route\">route</a></span></code>\n|}\n\nlet head_expected = {|<div class=\"spec 
value\" id=\"val-head\">\n <a href=\"#val-head\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> head : <span>string <span class=\"arrow\">-&gt;</span></span> <span><a href=\"#type-handler\">handler</a> <span class=\"arrow\">-&gt;</span></span> <a href=\"#type-route\">route</a></span></code>\n</div>\n|}\n\nlet head_replacement = {|\n<code><span><span class=\"keyword\">val</span> head &nbsp;&nbsp;&nbsp;: <span><a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a> <span class=\"arrow\">-&gt;</span></span> <span><a href=\"#type-handler\">handler</a> <span class=\"arrow\">-&gt;</span></span> <a href=\"#type-route\">route</a></span></code>\n|}\n\nlet trace_expected = {|<div class=\"spec value\" id=\"val-trace\">\n <a href=\"#val-trace\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> trace : <span>string <span class=\"arrow\">-&gt;</span></span> <span><a href=\"#type-handler\">handler</a> <span class=\"arrow\">-&gt;</span></span> <a href=\"#type-route\">route</a></span></code>\n</div>\n|}\n\nlet trace_replacement = {|\n<code><span><span class=\"keyword\">val</span> trace &nbsp;&nbsp;: <span><a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a> <span class=\"arrow\">-&gt;</span></span> <span><a href=\"#type-handler\">handler</a> <span class=\"arrow\">-&gt;</span></span> <a href=\"#type-route\">route</a></span></code>\n|}\n\nlet patch_expected = {|<div class=\"spec value\" id=\"val-patch\">\n <a href=\"#val-patch\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> patch : <span>string <span class=\"arrow\">-&gt;</span></span> <span><a href=\"#type-handler\">handler</a> <span class=\"arrow\">-&gt;</span></span> <a href=\"#type-route\">route</a></span></code>\n</div>\n|}\n\nlet patch_replacement = {|\n<code><span><span class=\"keyword\">val</span> patch &nbsp;&nbsp;: <span><a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a> <span class=\"arrow\">-&gt;</span></span> 
<span><a href=\"#type-handler\">handler</a> <span class=\"arrow\">-&gt;</span></span> <a href=\"#type-route\">route</a></span></code>\n|}\n\nlet any_expected = {|<div class=\"spec value\" id=\"val-any\">\n <a href=\"#val-any\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> any : <span>string <span class=\"arrow\">-&gt;</span></span> <span><a href=\"#type-handler\">handler</a> <span class=\"arrow\">-&gt;</span></span> <a href=\"#type-route\">route</a></span></code>\n</div>\n|}\n\nlet any_replacement = {|\n<code><span><span class=\"keyword\">val</span> any &nbsp;&nbsp;&nbsp;&nbsp;: <span><a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a> <span class=\"arrow\">-&gt;</span></span> <span><a href=\"#type-handler\">handler</a> <span class=\"arrow\">-&gt;</span></span> <a href=\"#type-route\">route</a></span></code>\n|}\n\nlet static_expected = {|<div class=\"spec value\" id=\"val-static\">\n <a href=\"#val-static\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> static : <span>?loader:<span>(<span>string <span class=\"arrow\">-&gt;</span></span> <span>string <span class=\"arrow\">-&gt;</span></span> <a href=\"#type-handler\">handler</a>)</span> <span class=\"arrow\">-&gt;</span></span> <span>string <span class=\"arrow\">-&gt;</span></span> <a href=\"#type-handler\">handler</a></span></code>\n</div>\n|}\n\nlet static_replacement = {|\n<pre><span class=\"keyword\">val</span> static :\n  ?loader:(<a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a> -> <a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a> -> <a href=\"#type-handler\">handler</a>) ->\n    <a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a> -> <a href=\"#type-handler\">handler</a>\n</pre>\n|}\n\nlet set_session_expected = {|<div class=\"spec value\" id=\"val-set_session_field\">\n <a href=\"#val-set_session_field\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> 
set_session_field : <span><a href=\"#type-request\">request</a> <span class=\"arrow\">-&gt;</span></span> <span>string <span class=\"arrow\">-&gt;</span></span> <span>string <span class=\"arrow\">-&gt;</span></span> <span>unit <a href=\"#type-promise\">promise</a></span></span></code>\n</div>\n|}\n\nlet set_session_replacement = {|\n<pre><span class=\"keyword\">val</span> set_session_field :\n  <a href=\"#type-request\">request</a> -> <a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a> -> <a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a> -> <a href=\"https://ocaml.org/manual/latest/api/Unit.html\">unit</a> <a href=\"#type-promise\">promise</a>\n</pre>\n|}\n\nlet websocket_expected = {|<div class=\"spec value\" id=\"val-websocket\">\n <a href=\"#val-websocket\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> websocket : <span>?headers:<span><span>(string * string)</span> list</span> <span class=\"arrow\">-&gt;</span></span> <span>?close:bool <span class=\"arrow\">-&gt;</span></span> <span><span>(<span><a href=\"#type-websocket\">websocket</a> <span class=\"arrow\">-&gt;</span></span> <span>unit <a href=\"#type-promise\">promise</a></span>)</span> <span class=\"arrow\">-&gt;</span></span> <span><a href=\"#type-response\">response</a> <a href=\"#type-promise\">promise</a></span></span></code>\n</div>\n|}\n\nlet websocket_replacement = {|\n<pre><span class=\"keyword\">val</span> websocket :\n  ?headers:(<a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a> * <a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a>) <a href=\"https://ocaml.org/manual/latest/api/List.html\">list</a> ->\n  ?close:<a href=\"https://ocaml.org/manual/latest/api/Bool.html\">bool</a> ->\n    (<a href=\"#type-websocket\">websocket</a> -> <a href=\"https://ocaml.org/manual/latest/api/Unit.html\">unit</a> <a href=\"#type-promise\">promise</a>) -> <a href=\"#type-response\">response</a> <a 
href=\"#type-promise\">promise</a>\n</pre>\n|}\n\nlet text_or_binary_expected = {|<div class=\"spec type\" id=\"type-text_or_binary\">\n <a href=\"#type-text_or_binary\" class=\"anchor\"></a><code><span><span class=\"keyword\">type</span> text_or_binary</span><span> = </span><span>[ </span></code>\n <table>\n  <tbody>\n   <tr id=\"type-text_or_binary.Text\" class=\"anchored\">\n    <td class=\"def constructor\">\n     <a href=\"#type-text_or_binary.Text\" class=\"anchor\"></a><code><span>| </span></code><code><span>`Text</span></code>\n    </td>\n   </tr>\n   <tr id=\"type-text_or_binary.Binary\" class=\"anchored\">\n    <td class=\"def constructor\">\n     <a href=\"#type-text_or_binary.Binary\" class=\"anchor\"></a><code><span>| </span></code><code><span>`Binary</span></code>\n    </td>\n   </tr>\n  </tbody>\n </table>\n <code><span> ]</span></code>\n</div>\n|}\n\nlet text_or_binary_replacement = {|\n<pre class=\"compact\"><span class=\"keyword\">type</span> text_or_binary = [ `Text | `Binary ]</pre>\n|}\n\nlet end_of_message_expected = {|<div class=\"spec type\" id=\"type-end_of_message\">\n <a href=\"#type-end_of_message\" class=\"anchor\"></a><code><span><span class=\"keyword\">type</span> end_of_message</span><span> = </span><span>[ </span></code>\n <table>\n  <tbody>\n   <tr id=\"type-end_of_message.End_of_message\" class=\"anchored\">\n    <td class=\"def constructor\">\n     <a href=\"#type-end_of_message.End_of_message\" class=\"anchor\"></a><code><span>| </span></code><code><span>`End_of_message</span></code>\n    </td>\n   </tr>\n   <tr id=\"type-end_of_message.Continues\" class=\"anchored\">\n    <td class=\"def constructor\">\n     <a href=\"#type-end_of_message.Continues\" class=\"anchor\"></a><code><span>| </span></code><code><span>`Continues</span></code>\n    </td>\n   </tr>\n  </tbody>\n </table>\n <code><span> ]</span></code>\n</div>\n|}\n\nlet end_of_message_replacement = {|\n<pre class=\"compact\"><span class=\"keyword\">type</span> 
end_of_message = [ `End_of_message | `Continues ]</pre>\n|}\n\nlet send_expected = {|<div class=\"spec value\" id=\"val-send\">\n <a href=\"#val-send\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> send : <span>?text_or_binary:<span>[&lt; <a href=\"#type-text_or_binary\">text_or_binary</a> ]</span> <span class=\"arrow\">-&gt;</span></span> <span>?end_of_message:<span>[&lt; <a href=\"#type-end_of_message\">end_of_message</a> ]</span> <span class=\"arrow\">-&gt;</span></span> <span><a href=\"#type-websocket\">websocket</a> <span class=\"arrow\">-&gt;</span></span> <span>string <span class=\"arrow\">-&gt;</span></span> <span>unit <a href=\"#type-promise\">promise</a></span></span></code>\n</div>\n|}\n\nlet send_replacement = {|\n<pre><span class=\"keyword\">val</span> send :\n  ?text_or_binary:[&lt; <a href=\"#type-text_or_binary\">text_or_binary</a> ] ->\n  ?end_of_message:[&lt; <a href=\"#type-end_of_message\">end_of_message</a> ] ->\n    <a href=\"#type-websocket\">websocket</a> -> <a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a> -> <a href=\"https://ocaml.org/manual/latest/api/Unit.html\">unit</a> <a href=\"#type-promise\">promise</a>\n</pre>\n|}\n\nlet receive_expected = {|<div class=\"spec value\" id=\"val-receive\">\n <a href=\"#val-receive\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> receive : <span><a href=\"#type-websocket\">websocket</a> <span class=\"arrow\">-&gt;</span></span> <span><span>string option</span> <a href=\"#type-promise\">promise</a></span></span></code>\n</div>\n|}\n\nlet receive_fragment_expected = {|<div class=\"spec value\" id=\"val-receive_fragment\">\n <a href=\"#val-receive_fragment\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> receive_fragment : <span><a href=\"#type-websocket\">websocket</a> <span class=\"arrow\">-&gt;</span></span> <span><span><span>(string * <a href=\"#type-text_or_binary\">text_or_binary</a> * <a 
href=\"#type-end_of_message\">end_of_message</a>)</span> option</span> <a href=\"#type-promise\">promise</a></span></span></code>\n</div>\n|}\n\nlet receive_fragment_replacement = {|\n<pre><span class=\"keyword\">val</span> receive_fragment :\n  <a href=\"#type-websocket\">websocket</a> ->\n    (<a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a> * <a href=\"#type-text_or_binary\">text_or_binary</a> * <a href=\"#type-end_of_message\">end_of_message</a>) <a href=\"https://ocaml.org/manual/latest/api/Option.html\">option</a> <a href=\"#type-promise\">promise</a>\n</pre>\n|}\n\nlet close_websocket_expected = {|<div class=\"spec value\" id=\"val-close_websocket\">\n <a href=\"#val-close_websocket\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> close_websocket : <span>?code:int <span class=\"arrow\">-&gt;</span></span> <span><a href=\"#type-websocket\">websocket</a> <span class=\"arrow\">-&gt;</span></span> <span>unit <a href=\"#type-promise\">promise</a></span></span></code>\n</div>\n|}\n\nlet close_websocket_replacement = {|\n<pre><span class=\"keyword\">val</span> close_websocket :\n  ?code:<a href=\"https://ocaml.org/manual/latest/api/Int.html\">int</a> -> <a href=\"#type-websocket\">websocket</a> -> <a href=\"https://ocaml.org/manual/latest/api/Unit.html\">unit</a> <a href=\"#type-promise\">promise</a>\n</pre>\n|}\n\nlet graphql_expected = {|<div class=\"spec value\" id=\"val-graphql\">\n <a href=\"#val-graphql\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> graphql : <span><span>(<span><a href=\"#type-request\">request</a> <span class=\"arrow\">-&gt;</span></span> <span><span class=\"type-var\">'a</span> <a href=\"#type-promise\">promise</a></span>)</span> <span class=\"arrow\">-&gt;</span></span> <span><span><span class=\"type-var\">'a</span> <span class=\"xref-unresolved\">Graphql_lwt</span>.Schema.schema</span> <span class=\"arrow\">-&gt;</span></span> <a 
href=\"#type-handler\">handler</a></span></code>\n</div>\n|}\n\nlet graphql_replacement = {|\n<pre><span class=\"keyword\">val</span> graphql :\n  (<a href=\"#type-request\">request</a> -> 'a <a href=\"#type-promise\">promise</a>) ->\n  'a <a href=\"https://github.com/andreas/ocaml-graphql-server?tab=readme-ov-file#defining-a-schema\">Graphql_lwt.Schema.schema</a> ->\n    <a href=\"#type-handler\">handler</a>\n</pre>\n|}\n\nlet sql_expected = {|<div class=\"spec value\" id=\"val-sql\">\n <a href=\"#val-sql\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> sql : <span><a href=\"#type-request\">request</a> <span class=\"arrow\">-&gt;</span></span> <span><span>(<span><span class=\"xref-unresolved\">Caqti_lwt</span>.connection <span class=\"arrow\">-&gt;</span></span> <span><span class=\"type-var\">'a</span> <a href=\"#type-promise\">promise</a></span>)</span> <span class=\"arrow\">-&gt;</span></span> <span><span class=\"type-var\">'a</span> <a href=\"#type-promise\">promise</a></span></span></code>\n</div>\n|}\n\nlet sql_replacement = {|\n<pre><span class=\"keyword\">val</span> sql :\n  <a href=\"#type-request\">request</a> -> (<a href=\"https://github.com/paurkedal/caqti-study/#readme\">Caqti_lwt.connection</a> -> 'a <a href=\"#type-promise\">promise</a>) ->\n    'a <a href=\"#type-promise\">promise</a>\n</pre>\n|}\n\nlet conditional_log_expected = {|<div class=\"spec type\" id=\"type-conditional_log\">\n <a href=\"#type-conditional_log\" class=\"anchor\"></a><code><span><span class=\"keyword\">type</span> <span>('a, 'b) conditional_log</span></span><span> = <span><span>(<span><span>(<span>?request:<a href=\"#type-request\">request</a> <span class=\"arrow\">-&gt;</span></span> <span><span><span>(<span class=\"type-var\">'a</span>,&nbsp;<span class=\"xref-unresolved\">Stdlib</span>.Format.formatter,&nbsp;unit,&nbsp;<span class=\"type-var\">'b</span>)</span> <span class=\"xref-unresolved\">Stdlib</span>.format4</span> <span 
class=\"arrow\">-&gt;</span></span> <span class=\"type-var\">'a</span>)</span> <span class=\"arrow\">-&gt;</span></span> <span class=\"type-var\">'b</span>)</span> <span class=\"arrow\">-&gt;</span></span> unit</span></code>\n</div>\n|}\n\nlet conditional_log_replacement = {|\n<pre class=\"compact\"><span class=\"keyword\">type</span> ('a, 'b) conditional_log =\n  ((?request:<a href=\"#type-request\">request</a> ->\n   <a href=\"https://ocaml.org/manual/latest/api/Format.html\">('a, Format.formatter, unit, 'b) format4</a> -> 'a) -> 'b) ->\n    <a href=\"https://ocaml.org/manual/latest/api/Unit.html\">unit</a>\n</pre>\n|}\n\nlet sub_log_expected = {|<div class=\"spec type\" id=\"type-sub_log\">\n <a href=\"#type-sub_log\" class=\"anchor\"></a><code><span><span class=\"keyword\">type</span> sub_log</span><span> = </span><span>{</span></code>\n <table>\n  <tbody>\n   <tr id=\"type-sub_log.error\" class=\"anchored\">\n    <td class=\"def record field\">\n     <a href=\"#type-sub_log.error\" class=\"anchor\"></a><code><span>error : a. <span><span>(<span class=\"type-var\">'a</span>,&nbsp;unit)</span> <a href=\"#type-conditional_log\">conditional_log</a></span>;</span></code>\n    </td>\n   </tr>\n   <tr id=\"type-sub_log.warning\" class=\"anchored\">\n    <td class=\"def record field\">\n     <a href=\"#type-sub_log.warning\" class=\"anchor\"></a><code><span>warning : a. <span><span>(<span class=\"type-var\">'a</span>,&nbsp;unit)</span> <a href=\"#type-conditional_log\">conditional_log</a></span>;</span></code>\n    </td>\n   </tr>\n   <tr id=\"type-sub_log.info\" class=\"anchored\">\n    <td class=\"def record field\">\n     <a href=\"#type-sub_log.info\" class=\"anchor\"></a><code><span>info : a. 
<span><span>(<span class=\"type-var\">'a</span>,&nbsp;unit)</span> <a href=\"#type-conditional_log\">conditional_log</a></span>;</span></code>\n    </td>\n   </tr>\n   <tr id=\"type-sub_log.debug\" class=\"anchored\">\n    <td class=\"def record field\">\n     <a href=\"#type-sub_log.debug\" class=\"anchor\"></a><code><span>debug : a. <span><span>(<span class=\"type-var\">'a</span>,&nbsp;unit)</span> <a href=\"#type-conditional_log\">conditional_log</a></span>;</span></code>\n    </td>\n   </tr>\n  </tbody>\n </table>\n <code><span>}</span></code>\n</div>\n|}\n\nlet sub_log_replacement = {|\n<pre class=\"compact\"><span class=\"keyword\">type</span> sub_log = {\n  error   <span class=\"of\">:</span> 'a. ('a, <a href=\"https://ocaml.org/manual/latest/api/Unit.html\">unit</a>) <a href=\"#type-conditional_log\">conditional_log</a>;\n  warning <span class=\"of\">:</span> 'a. ('a, <a href=\"https://ocaml.org/manual/latest/api/Unit.html\">unit</a>) <a href=\"#type-conditional_log\">conditional_log</a>;\n  info    <span class=\"of\">:</span> 'a. ('a, <a href=\"https://ocaml.org/manual/latest/api/Unit.html\">unit</a>) <a href=\"#type-conditional_log\">conditional_log</a>;\n  debug   <span class=\"of\">:</span> 'a. 
('a, <a href=\"https://ocaml.org/manual/latest/api/Unit.html\">unit</a>) <a href=\"#type-conditional_log\">conditional_log</a>;\n}\n</pre>|}\n\nlet sub_log_expected' = {|<div class=\"spec value\" id=\"val-sub_log\">\n <a href=\"#val-sub_log\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> sub_log : <span>?level:<span>[&lt; <a href=\"#type-log_level\">log_level</a> ]</span> <span class=\"arrow\">-&gt;</span></span> <span>string <span class=\"arrow\">-&gt;</span></span> <a href=\"#type-sub_log\">sub_log</a></span></code>\n</div>\n|}\n\nlet log_level_expected = {|<div class=\"spec type\" id=\"type-log_level\">\n <a href=\"#type-log_level\" class=\"anchor\"></a><code><span><span class=\"keyword\">type</span> log_level</span><span> = </span><span>[ </span></code>\n <table>\n  <tbody>\n   <tr id=\"type-log_level.Error\" class=\"anchored\">\n    <td class=\"def constructor\">\n     <a href=\"#type-log_level.Error\" class=\"anchor\"></a><code><span>| </span></code><code><span>`Error</span></code>\n    </td>\n   </tr>\n   <tr id=\"type-log_level.Warning\" class=\"anchored\">\n    <td class=\"def constructor\">\n     <a href=\"#type-log_level.Warning\" class=\"anchor\"></a><code><span>| </span></code><code><span>`Warning</span></code>\n    </td>\n   </tr>\n   <tr id=\"type-log_level.Info\" class=\"anchored\">\n    <td class=\"def constructor\">\n     <a href=\"#type-log_level.Info\" class=\"anchor\"></a><code><span>| </span></code><code><span>`Info</span></code>\n    </td>\n   </tr>\n   <tr id=\"type-log_level.Debug\" class=\"anchored\">\n    <td class=\"def constructor\">\n     <a href=\"#type-log_level.Debug\" class=\"anchor\"></a><code><span>| </span></code><code><span>`Debug</span></code>\n    </td>\n   </tr>\n  </tbody>\n </table>\n <code><span> ]</span></code>\n</div>\n|}\n\nlet log_level_replacement = {|\n<code><span class=\"keyword\">type</span> log_level = [ `Error | `Warning | `Info | `Debug ]</code>\n|}\n\nlet val_error_expected = {|<div 
class=\"spec value\" id=\"val-error\">\n <a href=\"#val-error\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> error : <span><span>(<span class=\"type-var\">'a</span>,&nbsp;unit)</span> <a href=\"#type-conditional_log\">conditional_log</a></span></span></code>\n</div>\n|}\n\nlet val_error_replacement = {|\n<code><span><span class=\"keyword\">val</span> error &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;: <span><span>(<span class=\"type-var\">'a</span>,&nbsp;<a href=\"https://ocaml.org/manual/latest/api/Unit.html\">unit</a>)</span> <a href=\"#type-conditional_log\">conditional_log</a></span></span></code>\n|}\n\nlet warning_expected = {|<div class=\"spec value\" id=\"val-warning\">\n <a href=\"#val-warning\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> warning : <span><span>(<span class=\"type-var\">'a</span>,&nbsp;unit)</span> <a href=\"#type-conditional_log\">conditional_log</a></span></span></code>\n</div>\n|}\n\nlet warning_replacement = {|\n<code><span><span class=\"keyword\">val</span> warning &nbsp;&nbsp;&nbsp;: <span><span>(<span class=\"type-var\">'a</span>,&nbsp;<a href=\"https://ocaml.org/manual/latest/api/Unit.html\">unit</a>)</span> <a href=\"#type-conditional_log\">conditional_log</a></span></span></code>\n|}\n\nlet info_expected = {|<div class=\"spec value\" id=\"val-info\">\n <a href=\"#val-info\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> info : <span><span>(<span class=\"type-var\">'a</span>,&nbsp;unit)</span> <a href=\"#type-conditional_log\">conditional_log</a></span></span></code>\n</div>\n|}\n\nlet info_replacement = {|\n<code><span><span class=\"keyword\">val</span> info &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;: <span><span>(<span class=\"type-var\">'a</span>,&nbsp;<a href=\"https://ocaml.org/manual/latest/api/Unit.html\">unit</a>)</span> <a href=\"#type-conditional_log\">conditional_log</a></span></span></code>\n|}\n\nlet debug_expected = {|<div class=\"spec value\" id=\"val-debug\">\n <a 
href=\"#val-debug\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> debug : <span><span>(<span class=\"type-var\">'a</span>,&nbsp;unit)</span> <a href=\"#type-conditional_log\">conditional_log</a></span></span></code>\n</div>\n|}\n\nlet debug_replacement = {|\n<code><span><span class=\"keyword\">val</span> debug &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;: <span><span>(<span class=\"type-var\">'a</span>,&nbsp;<a href=\"https://ocaml.org/manual/latest/api/Unit.html\">unit</a>)</span> <a href=\"#type-conditional_log\">conditional_log</a></span></span></code>\n|}\n\nlet initialize_log_expected = {|<div class=\"spec value\" id=\"val-initialize_log\">\n <a href=\"#val-initialize_log\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> initialize_log : <span>?backtraces:bool <span class=\"arrow\">-&gt;</span></span> <span>?async_exception_hook:bool <span class=\"arrow\">-&gt;</span></span>\n<span>?level:<span>[&lt; <a href=\"#type-log_level\">log_level</a> ]</span> <span class=\"arrow\">-&gt;</span></span> <span>?enable:bool <span class=\"arrow\">-&gt;</span></span> <span>unit <span class=\"arrow\">-&gt;</span></span> unit</span></code>\n</div>\n|}\n\nlet initialize_log_replacement = {|\n<pre><span class=\"keyword\">val</span> initialize_log :\n  <span class=\"optional\">?backtraces:<a href=\"https://ocaml.org/manual/latest/api/Bool.html\">bool</a> ->\n  ?async_exception_hook:<a href=\"https://ocaml.org/manual/latest/api/Bool.html\">bool</a> ->\n  ?level:[&lt; <a href=\"#type-log_level\">log_level</a> ] ->\n  ?enable:<a href=\"https://ocaml.org/manual/latest/api/Bool.html\">bool</a> -></span>\n    <a href=\"https://ocaml.org/manual/latest/api/Unit.html\">unit</a> -> <a href=\"https://ocaml.org/manual/latest/api/Unit.html\">unit</a>\n</pre>|}\n\nlet error_template_expected = {|<div class=\"spec value\" id=\"val-error_template\">\n <a href=\"#val-error_template\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> error_template : 
<span><span>(<span><a href=\"#type-error\">error</a> <span class=\"arrow\">-&gt;</span></span> <span>string <span class=\"arrow\">-&gt;</span></span> <span><a href=\"#type-response\">response</a> <span class=\"arrow\">-&gt;</span></span> <span><a href=\"#type-response\">response</a> <a href=\"#type-promise\">promise</a></span>)</span> <span class=\"arrow\">-&gt;</span></span> <a href=\"#type-error_handler\">error_handler</a></span></code>\n</div>\n|}\n\nlet error_template_replacement = {|\n<pre><span class=\"keyword\">val</span> error_template :\n  (<a href=\"#type-error\">error</a> -> <a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a> -> <a href=\"#type-response\">response</a> -> <a href=\"#type-response\">response</a> <a href=\"#type-promise\">promise</a>) ->\n    <a href=\"#type-error_handler\">error_handler</a>\n</pre>\n|}\n\nlet error_expected = {|<div class=\"spec type\" id=\"type-error\">\n <a href=\"#type-error\" class=\"anchor\"></a><code><span><span class=\"keyword\">type</span> error</span><span> = </span><span>{</span></code>\n <table>\n  <tbody>\n   <tr id=\"type-error.condition\" class=\"anchored\">\n    <td class=\"def record field\">\n     <a href=\"#type-error.condition\" class=\"anchor\"></a><code><span>condition : <span>[ <span>`Response of <a href=\"#type-response\">response</a></span> <span><span>| `String</span> of string</span> <span><span>| `Exn</span> of exn</span> ]</span>;</span></code>\n    </td>\n   </tr>\n   <tr id=\"type-error.layer\" class=\"anchored\">\n    <td class=\"def record field\">\n     <a href=\"#type-error.layer\" class=\"anchor\"></a><code><span>layer : <span>[ `App <span>| `HTTP</span> <span>| `HTTP2</span> <span>| `TLS</span> <span>| `WebSocket</span> ]</span>;</span></code>\n    </td>\n   </tr>\n   <tr id=\"type-error.caused_by\" class=\"anchored\">\n    <td class=\"def record field\">\n     <a href=\"#type-error.caused_by\" class=\"anchor\"></a><code><span>caused_by : <span>[ `Server <span>| 
`Client</span> ]</span>;</span></code>\n    </td>\n   </tr>\n   <tr id=\"type-error.request\" class=\"anchored\">\n    <td class=\"def record field\">\n     <a href=\"#type-error.request\" class=\"anchor\"></a><code><span>request : <span><a href=\"#type-request\">request</a> option</span>;</span></code>\n    </td>\n   </tr>\n   <tr id=\"type-error.response\" class=\"anchored\">\n    <td class=\"def record field\">\n     <a href=\"#type-error.response\" class=\"anchor\"></a><code><span>response : <span><a href=\"#type-response\">response</a> option</span>;</span></code>\n    </td>\n   </tr>\n   <tr id=\"type-error.client\" class=\"anchored\">\n    <td class=\"def record field\">\n     <a href=\"#type-error.client\" class=\"anchor\"></a><code><span>client : <span>string option</span>;</span></code>\n    </td>\n   </tr>\n   <tr id=\"type-error.severity\" class=\"anchored\">\n    <td class=\"def record field\">\n     <a href=\"#type-error.severity\" class=\"anchor\"></a><code><span>severity : <a href=\"#type-log_level\">log_level</a>;</span></code>\n    </td>\n   </tr>\n   <tr id=\"type-error.will_send_response\" class=\"anchored\">\n    <td class=\"def record field\">\n     <a href=\"#type-error.will_send_response\" class=\"anchor\"></a><code><span>will_send_response : bool;</span></code>\n    </td>\n   </tr>\n  </tbody>\n </table>\n <code><span>}</span></code>\n</div>\n|}\n\nlet error_replacement = {|\n<pre class=\"compact\"><span class=\"keyword\">type</span> error = {\n  condition <span class=\"of\">:</span> [\n    | `Response of <a href=\"#type-response\">response</a>\n    | `String of <a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a>\n    | `Exn of <a href=\"https://ocaml.org/manual/latest/coreexamples.html#s%3Aexceptions\">exn</a>\n  ];\n  layer     <span class=\"of\">:</span> [ `App | `HTTP | `HTTP2 | `TLS | `WebSocket ];\n  caused_by <span class=\"of\">:</span> [ `Server | `Client ];\n  request   <span class=\"of\">:</span> <a 
href=\"#type-request\">request</a>  <a href=\"https://ocaml.org/manual/latest/api/Option.html\">option</a>;\n  response  <span class=\"of\">:</span> <a href=\"#type-response\">response</a> <a href=\"https://ocaml.org/manual/latest/api/Option.html\">option</a>;\n  client    <span class=\"of\">:</span> <a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a>   <a href=\"https://ocaml.org/manual/latest/api/Option.html\">option</a>;\n  severity  <span class=\"of\">:</span> <a href=\"#type-log_level\">log_level</a>;\n  will_send_response <span class=\"of\">:</span> <a href=\"https://ocaml.org/manual/latest/api/Bool.html\">bool</a>;\n}\n</pre>|}\n\nlet new_field_expected = {|<div class=\"spec value\" id=\"val-new_field\">\n <a href=\"#val-new_field\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> new_field : <span>?name:string <span class=\"arrow\">-&gt;</span></span> <span>?show_value:<span>(<span><span class=\"type-var\">'a</span> <span class=\"arrow\">-&gt;</span></span> string)</span> <span class=\"arrow\">-&gt;</span></span> <span>unit <span class=\"arrow\">-&gt;</span></span> <span><span class=\"type-var\">'a</span> <a href=\"#type-field\">field</a></span></span></code>\n</div>\n|}\n\nlet new_field_replacement = {|\n<pre><span class=\"keyword\">val</span> new_field :\n  ?name:<a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a> ->\n  ?show_value:('a -> <a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a>) ->\n    <a href=\"https://ocaml.org/manual/latest/api/Unit.html\">unit</a> -> 'a <a href=\"#type-field\">field</a>\n</pre>\n|}\n\nlet run_expected = {|<div class=\"spec value\" id=\"val-run\">\n <a href=\"#val-run\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> run : <span>?interface:string <span class=\"arrow\">-&gt;</span></span> <span>?port:int <span class=\"arrow\">-&gt;</span></span> <span>?socket_path:string <span class=\"arrow\">-&gt;</span></span> 
<span>?stop:<span>unit <a href=\"#type-promise\">promise</a></span> <span class=\"arrow\">-&gt;</span></span>\n<span>?error_handler:<a href=\"#type-error_handler\">error_handler</a> <span class=\"arrow\">-&gt;</span></span> <span>?tls:bool <span class=\"arrow\">-&gt;</span></span> <span>?certificate_file:string <span class=\"arrow\">-&gt;</span></span> <span>?key_file:string <span class=\"arrow\">-&gt;</span></span>\n<span>?builtins:bool <span class=\"arrow\">-&gt;</span></span> <span>?greeting:bool <span class=\"arrow\">-&gt;</span></span> <span>?adjust_terminal:bool <span class=\"arrow\">-&gt;</span></span> <span><a href=\"#type-handler\">handler</a> <span class=\"arrow\">-&gt;</span></span> unit</span></code>\n</div>\n|}\n\nlet run_replacement = {|\n<pre><span class=\"keyword\">val</span> run :\n  <span class=\"optional\">?interface:<a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a> ->\n  ?port:<a href=\"https://ocaml.org/manual/latest/api/Int.html\">int</a> ->\n  ?socket_path:<a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a> ->\n  ?stop:<a href=\"https://ocaml.org/manual/latest/api/Unit.html\">unit</a> <a href=\"#type-promise\">promise</a> ->\n  ?error_handler:<a href=\"#type-error_handler\">error_handler</a> ->\n  ?tls:<a href=\"https://ocaml.org/manual/latest/api/Bool.html\">bool</a> ->\n  ?certificate_file:<a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a> ->\n  ?key_file:<a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a> ->\n  ?builtins:<a href=\"https://ocaml.org/manual/latest/api/Bool.html\">bool</a> ->\n  ?greeting:<a href=\"https://ocaml.org/manual/latest/api/Bool.html\">bool</a> ->\n  ?adjust_terminal:<a href=\"https://ocaml.org/manual/latest/api/Bool.html\">bool</a> -></span>\n    <a href=\"#type-handler\">handler</a> -> <a href=\"https://ocaml.org/manual/latest/api/Unit.html\">unit</a>\n</pre>|}\n\nlet serve_expected = {|<div class=\"spec value\" id=\"val-serve\">\n <a 
href=\"#val-serve\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> serve : <span>?interface:string <span class=\"arrow\">-&gt;</span></span> <span>?port:int <span class=\"arrow\">-&gt;</span></span> <span>?socket_path:string <span class=\"arrow\">-&gt;</span></span> <span>?stop:<span>unit <a href=\"#type-promise\">promise</a></span> <span class=\"arrow\">-&gt;</span></span>\n<span>?error_handler:<a href=\"#type-error_handler\">error_handler</a> <span class=\"arrow\">-&gt;</span></span> <span>?tls:bool <span class=\"arrow\">-&gt;</span></span> <span>?certificate_file:string <span class=\"arrow\">-&gt;</span></span> <span>?key_file:string <span class=\"arrow\">-&gt;</span></span>\n<span>?builtins:bool <span class=\"arrow\">-&gt;</span></span> <span><a href=\"#type-handler\">handler</a> <span class=\"arrow\">-&gt;</span></span> <span>unit <a href=\"#type-promise\">promise</a></span></span></code>\n</div>\n|}\n\nlet serve_replacement = {|\n<pre><span class=\"keyword\">val</span> serve :\n  <span class=\"optional\">?interface:<a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a> ->\n  ?port:<a href=\"https://ocaml.org/manual/latest/api/Int.html\">int</a> ->\n  ?socket_path:<a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a> ->\n  ?stop:<a href=\"https://ocaml.org/manual/latest/api/Unit.html\">unit</a> <a href=\"#type-promise\">promise</a> ->\n  ?error_handler:<a href=\"#type-error_handler\">error_handler</a> ->\n  ?tls:<a href=\"https://ocaml.org/manual/latest/api/Bool.html\">bool</a> ->\n  ?certificate_file:<a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a> ->\n  ?key_file:<a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a> ->\n  ?builtins:<a href=\"https://ocaml.org/manual/latest/api/Bool.html\">bool</a> -></span>\n    <a href=\"#type-handler\">handler</a> -> <a href=\"https://ocaml.org/manual/latest/api/Unit.html\">unit</a> <a 
href=\"#type-promise\">promise</a>\n</pre>|}\n\nlet to_percent_encoded_expected = {|<div class=\"spec value\" id=\"val-to_percent_encoded\">\n <a href=\"#val-to_percent_encoded\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> to_percent_encoded : <span>?international:bool <span class=\"arrow\">-&gt;</span></span> <span>string <span class=\"arrow\">-&gt;</span></span> string</span></code>\n</div>\n|}\n\nlet to_percent_encoded_replacement = {|\n<pre><span class=\"keyword\">val</span> to_percent_encoded :\n  ?international:<a href=\"https://ocaml.org/manual/latest/api/Bool.html\">bool</a> -> <a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a> -> <a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a>\n</pre>\n|}\n\nlet to_set_cookie_expected = {|<div class=\"spec value\" id=\"val-to_set_cookie\">\n <a href=\"#val-to_set_cookie\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> to_set_cookie : <span>?expires:float <span class=\"arrow\">-&gt;</span></span> <span>?max_age:float <span class=\"arrow\">-&gt;</span></span> <span>?domain:string <span class=\"arrow\">-&gt;</span></span>\n<span>?path:string <span class=\"arrow\">-&gt;</span></span> <span>?secure:bool <span class=\"arrow\">-&gt;</span></span> <span>?http_only:bool <span class=\"arrow\">-&gt;</span></span>\n<span>?same_site:<span>[ `Strict <span>| `Lax</span> <span>| `None</span> ]</span> <span class=\"arrow\">-&gt;</span></span> <span>string <span class=\"arrow\">-&gt;</span></span> <span>string <span class=\"arrow\">-&gt;</span></span> string</span></code>\n</div>\n|}\n\nlet to_set_cookie_replacement = {|\n<pre><span class=\"keyword\">val</span> to_set_cookie :\n  ?expires:<a href=\"https://ocaml.org/manual/latest/api/Float.html\">float</a> ->\n  ?max_age:<a href=\"https://ocaml.org/manual/latest/api/Float.html\">float</a> ->\n  ?domain:<a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a> ->\n  ?path:<a 
href=\"https://ocaml.org/manual/latest/api/String.html\">string</a> ->\n  ?secure:<a href=\"https://ocaml.org/manual/latest/api/Bool.html\">bool</a> ->\n  ?http_only:<a href=\"https://ocaml.org/manual/latest/api/Bool.html\">bool</a> ->\n  ?same_site:[ `Strict | `Lax | `None ] ->\n    <a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a> -> <a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a> -> <a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a>\n</pre>\n|}\n\nlet to_path_expected = {|<div class=\"spec value\" id=\"val-to_path\">\n <a href=\"#val-to_path\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> to_path : <span>?relative:bool <span class=\"arrow\">-&gt;</span></span> <span>?international:bool <span class=\"arrow\">-&gt;</span></span> <span><span>string list</span> <span class=\"arrow\">-&gt;</span></span> string</span></code>\n</div>\n|}\n\nlet to_path_replacement = {|\n<pre><span class=\"keyword\">val</span> to_path :\n  ?relative:<a href=\"https://ocaml.org/manual/latest/api/Bool.html\">bool</a> ->\n  ?international:<a href=\"https://ocaml.org/manual/latest/api/Bool.html\">bool</a> ->\n    <a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a> <a href=\"https://ocaml.org/manual/latest/api/List.html\">list</a> -> <a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a>\n</pre>\n|}\n\nlet encrypt_expected = {|<div class=\"spec value\" id=\"val-encrypt\">\n <a href=\"#val-encrypt\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> encrypt : <span>?associated_data:string <span class=\"arrow\">-&gt;</span></span> <span><a href=\"#type-request\">request</a> <span class=\"arrow\">-&gt;</span></span> <span>string <span class=\"arrow\">-&gt;</span></span> string</span></code>\n</div>\n|}\n\nlet encrypt_replacement = {|\n<pre><span class=\"keyword\">val</span> encrypt :\n  ?associated_data:<a 
href=\"https://ocaml.org/manual/latest/api/String.html\">string</a> ->\n    <a href=\"#type-request\">request</a> -> <a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a> -> <a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a>\n</pre>\n|}\n\nlet decrypt_expected = {|<div class=\"spec value\" id=\"val-decrypt\">\n <a href=\"#val-decrypt\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> decrypt : <span>?associated_data:string <span class=\"arrow\">-&gt;</span></span> <span><a href=\"#type-request\">request</a> <span class=\"arrow\">-&gt;</span></span> <span>string <span class=\"arrow\">-&gt;</span></span> <span>string option</span></span></code>\n</div>\n|}\n\nlet decrypt_replacement = {|\n<pre><span class=\"keyword\">val</span> decrypt :\n  ?associated_data:<a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a> ->\n    <a href=\"#type-request\">request</a> -> <a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a> -> <a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a> <a href=\"https://ocaml.org/manual/latest/api/Option.html\">option</a>\n</pre>\n|}\n\nlet request_expected = {|<div class=\"spec value\" id=\"val-request\">\n <a href=\"#val-request\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> request : <span>?method_:<span>[&lt; <a href=\"#type-method_\">method_</a> ]</span> <span class=\"arrow\">-&gt;</span></span> <span>?target:string <span class=\"arrow\">-&gt;</span></span> <span>?headers:<span><span>(string * string)</span> list</span>\n<span class=\"arrow\">-&gt;</span></span> <span>string <span class=\"arrow\">-&gt;</span></span> <a href=\"#type-request\">request</a></span></code>\n</div>\n|}\n\nlet request_replacement = {|\n<pre><span class=\"keyword\">val</span> request :\n  <span class=\"optional\">?method_:[&lt; <a href=\"#type-method_\">method_</a> ] ->\n  ?target:<a 
href=\"https://ocaml.org/manual/latest/api/String.html\">string</a> ->\n  ?headers:(<a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a> * <a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a>) <a href=\"https://ocaml.org/manual/latest/api/List.html\">list</a> -></span>\n    <a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a> -> <a href=\"#type-request\">request</a>\n</pre>|}\n\nlet sort_headers_expected = {|<div class=\"spec value\" id=\"val-sort_headers\">\n <a href=\"#val-sort_headers\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> sort_headers : <span><span><span>(string * string)</span> list</span> <span class=\"arrow\">-&gt;</span></span> <span><span>(string * string)</span> list</span></span></code>\n</div>\n|}\n\nlet sort_headers_replacement = {|\n<pre><span class=\"keyword\">val</span> sort_headers :\n  (<a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a> * <a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a>) <a href=\"https://ocaml.org/manual/latest/api/List.html\">list</a> -> (<a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a> * <a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a>) <a href=\"https://ocaml.org/manual/latest/api/List.html\">list</a>\n</pre>|}\n\nlet message_expected = {|<div class=\"spec type\" id=\"type-message\">\n <a href=\"#type-message\" class=\"anchor\"></a><code><span><span class=\"keyword\">and</span> <span>'a message</span></span><span> = <span><span class=\"type-var\">'a</span> <a href=\"../../dream-pure/Dream_pure/Message/index.html#type-message\">Dream_pure.Message.message</a></span></span></code>\n</div>\n|}\n\nlet message_replacement = {|\n<code><span><span class=\"keyword\">and</span> 'a message</span></code>\n|}\n\nlet client_expected' = {|<div class=\"spec type\" id=\"type-client\">\n <a href=\"#type-client\" class=\"anchor\"></a><code><span><span 
class=\"keyword\">and</span> client</span><span> = <a href=\"../../dream-pure/Dream_pure/Message/index.html#type-client\">Dream_pure.Message.client</a></span></code>\n</div>\n|}\n\nlet client_replacement' = {|\n<code><span><span class=\"keyword\">and</span> client</span></code>\n|}\n\nlet server_expected' = {|<div class=\"spec type\" id=\"type-server\">\n <a href=\"#type-server\" class=\"anchor\"></a><code><span><span class=\"keyword\">and</span> server</span><span> = <a href=\"../../dream-pure/Dream_pure/Message/index.html#type-server\">Dream_pure.Message.server</a></span></code>\n</div>\n|}\n\nlet server_replacement' = {|\n<code><span><span class=\"keyword\">and</span> server</span></code>\n|}\n\nlet set_secret_expected = {|<div class=\"spec value\" id=\"val-set_secret\">\n <a href=\"#val-set_secret\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> set_secret : <span>?old_secrets:<span>string list</span> <span class=\"arrow\">-&gt;</span></span> <span>string <span class=\"arrow\">-&gt;</span></span> <a href=\"#type-middleware\">middleware</a></span></code>\n</div>\n|}\n\nlet set_secret_replacement = {|\n<pre><span class=\"keyword\">val</span> set_secret :\n  ?old_secrets:<a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a> <a href=\"https://ocaml.org/manual/latest/api/List.html\">list</a> -> <a href=\"https://ocaml.org/manual/latest/api/String.html\">string</a> -> <a href=\"#type-middleware\">middleware</a>\n</pre>|}\n\nlet upload_expected = {|<div class=\"spec value\" id=\"val-upload\">\n <a href=\"#val-upload\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> upload : <span><a href=\"#type-request\">request</a> <span class=\"arrow\">-&gt;</span></span> <span><span><a href=\"#type-part\">part</a> option</span> <a href=\"#type-promise\">promise</a></span></span></code>\n</div>\n|}\n\nlet upload_part_expected = {|<div class=\"spec value\" id=\"val-upload_part\">\n <a href=\"#val-upload_part\" 
class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> upload_part : <span><a href=\"#type-request\">request</a> <span class=\"arrow\">-&gt;</span></span> <span><span>string option</span> <a href=\"#type-promise\">promise</a></span></span></code>\n</div>\n|}\n\nlet csrf_token_expected = {|<div class=\"spec value\" id=\"val-csrf_token\">\n <a href=\"#val-csrf_token\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> csrf_token : <span>?valid_for:float <span class=\"arrow\">-&gt;</span></span> <span><a href=\"#type-request\">request</a> <span class=\"arrow\">-&gt;</span></span> string</span></code>\n</div>\n|}\n\nlet csrf_tag_expected = {|<div class=\"spec value\" id=\"val-csrf_tag\">\n <a href=\"#val-csrf_tag\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> csrf_tag : <span><a href=\"#type-request\">request</a> <span class=\"arrow\">-&gt;</span></span> string</span></code>\n</div>\n|}\n\nlet pipeline_expected = {|<div class=\"spec value\" id=\"val-pipeline\">\n <a href=\"#val-pipeline\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> pipeline : <span><span><a href=\"#type-middleware\">middleware</a> list</span> <span class=\"arrow\">-&gt;</span></span> <a href=\"#type-middleware\">middleware</a></span></code>\n</div>\n|}\n\nlet set_client_stream_expected = {|<div class=\"spec value\" id=\"val-set_client_stream\">\n <a href=\"#val-set_client_stream\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> set_client_stream : <span><a href=\"#type-response\">response</a> <span class=\"arrow\">-&gt;</span></span> <span><a href=\"#type-stream\">stream</a> <span class=\"arrow\">-&gt;</span></span> unit</span></code>\n</div>\n|}\n\nlet set_server_stream_expected = {|<div class=\"spec value\" id=\"val-set_server_stream\">\n <a href=\"#val-set_server_stream\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> set_server_stream : <span><a href=\"#type-request\">request</a> 
<span class=\"arrow\">-&gt;</span></span> <span><a href=\"#type-stream\">stream</a> <span class=\"arrow\">-&gt;</span></span> unit</span></code>\n</div>\n|}\n\nlet router_expected = {|<div class=\"spec value\" id=\"val-router\">\n <a href=\"#val-router\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> router : <span><span><a href=\"#type-route\">route</a> list</span> <span class=\"arrow\">-&gt;</span></span> <a href=\"#type-handler\">handler</a></span></code>\n</div>\n|}\n\nlet param_expected = {|<div class=\"spec value\" id=\"val-param\">\n <a href=\"#val-param\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> param : <span><a href=\"#type-request\">request</a> <span class=\"arrow\">-&gt;</span></span> <span>string <span class=\"arrow\">-&gt;</span></span> string</span></code>\n</div>\n|}\n\nlet from_filesystem_expected = {|<div class=\"spec value\" id=\"val-from_filesystem\">\n <a href=\"#val-from_filesystem\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> from_filesystem : <span>string <span class=\"arrow\">-&gt;</span></span> <span>string <span class=\"arrow\">-&gt;</span></span> <a href=\"#type-handler\">handler</a></span></code>\n</div>\n|}\n\nlet mime_lookup_expected = {|<div class=\"spec value\" id=\"val-mime_lookup\">\n <a href=\"#val-mime_lookup\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> mime_lookup : <span>string <span class=\"arrow\">-&gt;</span></span> <span><span>(string * string)</span> list</span></span></code>\n</div>\n|}\n\nlet session_field_expected = {|<div class=\"spec value\" id=\"val-session_field\">\n <a href=\"#val-session_field\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> session_field : <span><a href=\"#type-request\">request</a> <span class=\"arrow\">-&gt;</span></span> <span>string <span class=\"arrow\">-&gt;</span></span> <span>string option</span></span></code>\n</div>\n|}\n\nlet drop_session_field_expected = {|<div 
class=\"spec value\" id=\"val-drop_session_field\">\n <a href=\"#val-drop_session_field\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> drop_session_field : <span><a href=\"#type-request\">request</a> <span class=\"arrow\">-&gt;</span></span> <span>string <span class=\"arrow\">-&gt;</span></span> <span>unit <a href=\"#type-promise\">promise</a></span></span></code>\n</div>\n|}\n\nlet all_session_fields_expected = {|<div class=\"spec value\" id=\"val-all_session_fields\">\n <a href=\"#val-all_session_fields\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> all_session_fields : <span><a href=\"#type-request\">request</a> <span class=\"arrow\">-&gt;</span></span> <span><span>(string * string)</span> list</span></span></code>\n</div>\n|}\n\nlet invalidate_session_expected = {|<div class=\"spec value\" id=\"val-invalidate_session\">\n <a href=\"#val-invalidate_session\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> invalidate_session : <span><a href=\"#type-request\">request</a> <span class=\"arrow\">-&gt;</span></span> <span>unit <a href=\"#type-promise\">promise</a></span></span></code>\n</div>\n|}\n\nlet memory_sessions_expected = {|<div class=\"spec value\" id=\"val-memory_sessions\">\n <a href=\"#val-memory_sessions\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> memory_sessions : <span>?lifetime:float <span class=\"arrow\">-&gt;</span></span> <a href=\"#type-middleware\">middleware</a></span></code>\n</div>\n|}\n\nlet cookie_sessions_expected = {|<div class=\"spec value\" id=\"val-cookie_sessions\">\n <a href=\"#val-cookie_sessions\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> cookie_sessions : <span>?lifetime:float <span class=\"arrow\">-&gt;</span></span> <a href=\"#type-middleware\">middleware</a></span></code>\n</div>\n|}\n\nlet sql_sessions_expected = {|<div class=\"spec value\" id=\"val-sql_sessions\">\n <a href=\"#val-sql_sessions\" 
class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> sql_sessions : <span>?lifetime:float <span class=\"arrow\">-&gt;</span></span> <a href=\"#type-middleware\">middleware</a></span></code>\n</div>\n|}\n\nlet session_id_expected = {|<div class=\"spec value\" id=\"val-session_id\">\n <a href=\"#val-session_id\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> session_id : <span><a href=\"#type-request\">request</a> <span class=\"arrow\">-&gt;</span></span> string</span></code>\n</div>\n|}\n\nlet session_label_expected = {|<div class=\"spec value\" id=\"val-session_label\">\n <a href=\"#val-session_label\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> session_label : <span><a href=\"#type-request\">request</a> <span class=\"arrow\">-&gt;</span></span> string</span></code>\n</div>\n|}\n\nlet session_expires_at_expected = {|<div class=\"spec value\" id=\"val-session_expires_at\">\n <a href=\"#val-session_expires_at\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> session_expires_at : <span><a href=\"#type-request\">request</a> <span class=\"arrow\">-&gt;</span></span> float</span></code>\n</div>\n|}\n\nlet flash_messages_expected = {|<div class=\"spec value\" id=\"val-flash_messages\">\n <a href=\"#val-flash_messages\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> flash_messages : <span><a href=\"#type-request\">request</a> <span class=\"arrow\">-&gt;</span></span> <span><span>(string * string)</span> list</span></span></code>\n</div>\n|}\n\nlet add_flash_message_expected = {|<div class=\"spec value\" id=\"val-add_flash_message\">\n <a href=\"#val-add_flash_message\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> add_flash_message : <span><a href=\"#type-request\">request</a> <span class=\"arrow\">-&gt;</span></span> <span>string <span class=\"arrow\">-&gt;</span></span> <span>string <span class=\"arrow\">-&gt;</span></span> 
unit</span></code>\n</div>\n|}\n\nlet graphiql_expected = {|<div class=\"spec value\" id=\"val-graphiql\">\n <a href=\"#val-graphiql\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> graphiql : <span>?default_query:string <span class=\"arrow\">-&gt;</span></span> <span>string <span class=\"arrow\">-&gt;</span></span> <a href=\"#type-handler\">handler</a></span></code>\n</div>\n|}\n\nlet sql_pool_expected = {|<div class=\"spec value\" id=\"val-sql_pool\">\n <a href=\"#val-sql_pool\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> sql_pool : <span>?size:int <span class=\"arrow\">-&gt;</span></span> <span>string <span class=\"arrow\">-&gt;</span></span> <a href=\"#type-middleware\">middleware</a></span></code>\n</div>\n|}\n\nlet connect_expected = {|<div class=\"spec value\" id=\"val-connect\">\n <a href=\"#val-connect\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> connect : <span>string <span class=\"arrow\">-&gt;</span></span> <span><a href=\"#type-handler\">handler</a> <span class=\"arrow\">-&gt;</span></span> <a href=\"#type-route\">route</a></span></code>\n</div>\n|}\n\nlet options_expected = {|<div class=\"spec value\" id=\"val-options\">\n <a href=\"#val-options\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> options : <span>string <span class=\"arrow\">-&gt;</span></span> <span><a href=\"#type-handler\">handler</a> <span class=\"arrow\">-&gt;</span></span> <a href=\"#type-route\">route</a></span></code>\n</div>\n|}\n\nlet log_expected = {|<div class=\"spec value\" id=\"val-log\">\n <a href=\"#val-log\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> log : <span><span><span>(<span class=\"type-var\">'a</span>,&nbsp;<span class=\"xref-unresolved\">Stdlib</span>.Format.formatter,&nbsp;unit,&nbsp;unit)</span> <span class=\"xref-unresolved\">Stdlib</span>.format4</span> <span class=\"arrow\">-&gt;</span></span> <span 
class=\"type-var\">'a</span></span></code>\n</div>\n|}\n\nlet log_replacement = {|\n<code><span class=\"keyword\">val</span> log : <a href=\"https://ocaml.org/manual/latest/api/Format.html\">(<span class=\"type-var\">'a</span>,&nbsp;Format.formatter,&nbsp;unit,&nbsp;unit) format4</a> <span class=\"arrow\">-&gt;</span> <span class=\"type-var\">'a</span></code>\n|}\n\nlet set_log_level_expected = {|<div class=\"spec value\" id=\"val-set_log_level\">\n <a href=\"#val-set_log_level\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> set_log_level : <span>string <span class=\"arrow\">-&gt;</span></span> <span><span>[&lt; <a href=\"#type-log_level\">log_level</a> ]</span> <span class=\"arrow\">-&gt;</span></span> unit</span></code>\n</div>\n|}\n\nlet error_handler_expected = {|<div class=\"spec type\" id=\"type-error_handler\">\n <a href=\"#type-error_handler\" class=\"anchor\"></a><code><span><span class=\"keyword\">type</span> error_handler</span><span> = <span><a href=\"#type-error\">error</a> <span class=\"arrow\">-&gt;</span></span> <span><span><a href=\"#type-response\">response</a> option</span> <a href=\"#type-promise\">promise</a></span></span></code>\n</div>\n|}\n\nlet with_site_prefix_expected = {|<div class=\"spec value\" id=\"val-with_site_prefix\">\n <a href=\"#val-with_site_prefix\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> with_site_prefix : <span>string <span class=\"arrow\">-&gt;</span></span> <a href=\"#type-middleware\">middleware</a></span></code>\n</div>\n|}\n\nlet html_escape_expected = {|<div class=\"spec value\" id=\"val-html_escape\">\n <a href=\"#val-html_escape\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> html_escape : <span>string <span class=\"arrow\">-&gt;</span></span> string</span></code>\n</div>\n|}\n\nlet to_base64url_expected = {|<div class=\"spec value\" id=\"val-to_base64url\">\n <a href=\"#val-to_base64url\" class=\"anchor\"></a><code><span><span 
class=\"keyword\">val</span> to_base64url : <span>string <span class=\"arrow\">-&gt;</span></span> string</span></code>\n</div>\n|}\n\nlet from_base64url_expected = {|<div class=\"spec value\" id=\"val-from_base64url\">\n <a href=\"#val-from_base64url\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> from_base64url : <span>string <span class=\"arrow\">-&gt;</span></span> <span>string option</span></span></code>\n</div>\n|}\n\nlet from_percent_encoded_expected = {|<div class=\"spec value\" id=\"val-from_percent_encoded\">\n <a href=\"#val-from_percent_encoded\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> from_percent_encoded : <span>string <span class=\"arrow\">-&gt;</span></span> string</span></code>\n</div>\n|}\n\nlet to_form_urlencoded_expected = {|<div class=\"spec value\" id=\"val-to_form_urlencoded\">\n <a href=\"#val-to_form_urlencoded\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> to_form_urlencoded : <span><span><span>(string * string)</span> list</span> <span class=\"arrow\">-&gt;</span></span> string</span></code>\n</div>\n|}\n\nlet from_form_urlencoded_expected = {|<div class=\"spec value\" id=\"val-from_form_urlencoded\">\n <a href=\"#val-from_form_urlencoded\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> from_form_urlencoded : <span>string <span class=\"arrow\">-&gt;</span></span> <span><span>(string * string)</span> list</span></span></code>\n</div>\n|}\n\nlet from_cookie_expected = {|<div class=\"spec value\" id=\"val-from_cookie\">\n <a href=\"#val-from_cookie\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> from_cookie : <span>string <span class=\"arrow\">-&gt;</span></span> <span><span>(string * string)</span> list</span></span></code>\n</div>\n|}\n\nlet split_target_expected = {|<div class=\"spec value\" id=\"val-split_target\">\n <a href=\"#val-split_target\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> 
split_target : <span>string <span class=\"arrow\">-&gt;</span></span> string * string</span></code>\n</div>\n|}\n\nlet from_path_expected = {|<div class=\"spec value\" id=\"val-from_path\">\n <a href=\"#val-from_path\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> from_path : <span>string <span class=\"arrow\">-&gt;</span></span> <span>string list</span></span></code>\n</div>\n|}\n\nlet drop_trailing_slash_expected = {|<div class=\"spec value\" id=\"val-drop_trailing_slash\">\n <a href=\"#val-drop_trailing_slash\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> drop_trailing_slash : <span><span>string list</span> <span class=\"arrow\">-&gt;</span></span> <span>string list</span></span></code>\n</div>\n|}\n\nlet text_html_expected = {|<div class=\"spec value\" id=\"val-text_html\">\n <a href=\"#val-text_html\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> text_html : string</span></code>\n</div>\n|}\n\nlet application_json_expected = {|<div class=\"spec value\" id=\"val-application_json\">\n <a href=\"#val-application_json\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> application_json : string</span></code>\n</div>\n|}\n\nlet random_expected = {|<div class=\"spec value\" id=\"val-random\">\n <a href=\"#val-random\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> random : <span>int <span class=\"arrow\">-&gt;</span></span> string</span></code>\n</div>\n|}\n\nlet field_expected = {|<div class=\"spec value\" id=\"val-field\">\n <a href=\"#val-field\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> field : <span><span><span class=\"type-var\">'b</span> <a href=\"#type-message\">message</a></span> <span class=\"arrow\">-&gt;</span></span> <span><span><span class=\"type-var\">'a</span> <a href=\"#type-field\">field</a></span> <span class=\"arrow\">-&gt;</span></span> <span><span class=\"type-var\">'a</span> 
option</span></span></code>\n</div>\n|}\n\nlet set_field_expected = {|<div class=\"spec value\" id=\"val-set_field\">\n <a href=\"#val-set_field\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> set_field : <span><span><span class=\"type-var\">'b</span> <a href=\"#type-message\">message</a></span> <span class=\"arrow\">-&gt;</span></span> <span><span><span class=\"type-var\">'a</span> <a href=\"#type-field\">field</a></span> <span class=\"arrow\">-&gt;</span></span> <span><span class=\"type-var\">'a</span> <span class=\"arrow\">-&gt;</span></span> unit</span></code>\n</div>\n|}\n\nlet test_expected = {|<div class=\"spec value\" id=\"val-test\">\n <a href=\"#val-test\" class=\"anchor\"></a><code><span><span class=\"keyword\">val</span> test : <span>?prefix:string <span class=\"arrow\">-&gt;</span></span> <span><a href=\"#type-handler\">handler</a> <span class=\"arrow\">-&gt;</span></span> <span><a href=\"#type-request\">request</a> <span class=\"arrow\">-&gt;</span></span> <a href=\"#type-response\">response</a></span></code>\n</div>\n|}\n\nlet pretty_print_signatures soup =\n  let method_ = soup $ \"#type-method_\" in\n  if_expected\n    method_expected\n    (fun () -> pretty_print method_)\n    (fun () ->\n      method_ $$ \"> code\" |> Soup.iter Soup.delete;\n      Soup.replace (method_ $ \"> table\") (Soup.parse method_replacement);\n      Soup.add_class \"multiline\" method_);\n\n  let rewrite_status_group ?(multiline = true) id expected replacement =\n    let group = soup $ id in\n    if_expected\n      expected\n      (fun () -> pretty_print group)\n      (fun () ->\n        group $$ \"> code\" |> Soup.iter Soup.delete;\n        Soup.replace (group $ \"> table\") (Soup.parse replacement);\n        if multiline then\n          Soup.add_class \"multiline\" group)\n  in\n\n  rewrite_status_group\n    \"#type-informational\"\n    informational_expected\n    informational_replacement;\n\n  rewrite_status_group\n    \"#type-successful\"\n    
success_expected\n    success_replacement;\n\n  rewrite_status_group\n    \"#type-redirection\"\n    redirect_expected\n    redirect_replacement;\n\n  rewrite_status_group\n    \"#type-client_error\"\n    client_error_expected\n    client_error_replacement;\n\n  rewrite_status_group\n    \"#type-server_error\"\n    server_expected\n    server_replacement;\n\n  rewrite_status_group\n    \"#type-standard_status\"\n    standard_expected\n    standard_replacement;\n\n  let status = soup $ \"#type-status\" in\n  if_expected\n    status_expected\n    (fun () -> pretty_print status)\n    (fun () ->\n      status $$ \"> code\" |> Soup.iter Soup.delete;\n      Soup.replace (status $ \"> table\") (Soup.parse status_replacement);\n      Soup.add_class \"multiline\" status);\n\n  let multiline selector expected replacement =\n    let element = soup $ selector in\n    if_expected\n      expected\n      (fun () -> pretty_print element)\n      (fun () ->\n        Soup.replace (element $ \"> code\") (Soup.parse replacement);\n        Soup.add_class \"multiline\" element)\n  in\n\n  let response = soup $ \"#val-response\" in\n  if_expected\n    response_expected\n    (fun () -> pretty_print response)\n    (fun () ->\n      Soup.replace (response $ \"> code\") (Soup.parse response_replacement);\n      Soup.add_class \"multiline\" response);\n\n  let respond = soup $ \"#val-respond\" in\n  if_expected\n    respond_expected\n    (fun () -> pretty_print respond)\n    (fun () ->\n      Soup.replace (respond $ \"> code\") (Soup.parse respond_replacement);\n      Soup.add_class \"multiline\" respond);\n\n  multiline \"#val-html\" html_expected html_replacement;\n  multiline \"#val-json\" json_expected json_replacement;\n  multiline \"#val-redirect\" val_redirect_expected val_redirect_replacement;\n\n  let stream = soup $ \"#val-stream\" in\n  if_expected\n    stream_expected\n    (fun () -> pretty_print stream)\n    (fun () ->\n      Soup.replace (stream $ \"> code\") (Soup.parse 
stream_replacement);\n      Soup.add_class \"multiline\" stream);\n\n  let empty = soup $ \"#val-empty\" in\n  if_expected\n    empty_expected\n    (fun () -> pretty_print empty)\n    (fun () ->\n      Soup.replace (empty $ \"> code\") (Soup.parse empty_replacement);\n      Soup.add_class \"multiline\" empty);\n\n  let replace selector expected replacement =\n    let element = soup $ selector in\n    if_expected\n      expected\n      (fun () -> pretty_print element)\n      (fun () ->\n        Soup.replace (element $ \"> code\") (Soup.parse replacement))\n  in\n\n  replace \"#type-promise\" promise_expected promise_replacement;\n\n  replace\n    \"#val-method_to_string\"\n    method_to_string_expected\n    method_to_string_replacement;\n  replace\n    \"#val-string_to_method\"\n    string_to_method_expected\n    string_to_method_replacement;\n  replace\n    \"#val-methods_equal\"\n    methods_equal_expected\n    methods_equal_replacement;\n\n  replace\n    \"#val-status_to_string\"\n    status_to_string_expected\n    status_to_string_replacement;\n  replace\n    \"#val-status_to_reason\"\n    status_to_reason_expected\n    status_to_reason_replacement;\n  replace \"#val-status_to_int\" status_to_int_expected status_to_int_replacement;\n  replace \"#val-int_to_status\" int_to_status_expected int_to_status_replacement;\n  replace\n    \"#val-is_informational\"\n    is_informational_expected\n    is_informational_replacement;\n  replace \"#val-is_successful\" is_successful_expected is_successful_replacement;\n  replace\n    \"#val-is_redirection\" is_redirection_expected is_redirection_replacement;\n  replace\n    \"#val-is_client_error\" is_client_error_expected is_client_error_replacement;\n  replace\n    \"#val-is_server_error\" is_server_error_expected is_server_error_replacement;\n  replace\n    \"#val-status_codes_equal\"\n    status_codes_equal_expected\n    status_codes_equal_replacement;\n\n  let link_stdlib_type selector expected types =\n    let replacement 
=\n      types\n      |> List.fold_left begin fun replacement type_ ->\n        let link =\n          {|<a href=\"https://ocaml.org/manual/latest/api/|} ^\n          (String.capitalize_ascii type_) ^\n          {|.html\">|} ^\n          type_ ^\n          {|</a>|}\n        in\n        Str.global_replace (Str.regexp (Str.quote type_)) link replacement\n      end\n        (Soup.parse expected $ \"code\" |> Soup.to_string)\n    in\n    replace selector expected replacement\n  in\n\n  link_stdlib_type \"#val-client\" client_expected [\"string\"];\n  link_stdlib_type \"#val-tls\" tls_expected [\"bool\"];\n  link_stdlib_type \"#val-target\" target_expected [\"string\"];\n  link_stdlib_type \"#val-set_client\" set_client_expected [\"string\"; \"unit\"];\n  link_stdlib_type \"#val-set_method_\" set_method_expected [\"unit\"];\n  link_stdlib_type \"#val-query\" query_expected [\"string\"; \"option\"];\n  link_stdlib_type \"#val-queries\" queries_expected [\"string\"; \"list\"];\n  link_stdlib_type \"#val-all_queries\" all_queries_expected [\"string\"; \"list\"];\n\n  link_stdlib_type \"#val-set_status\" set_status_expected [\"unit\"];\n\n  link_stdlib_type \"#val-header\" header_expected [\"string\"; \"option\"];\n  link_stdlib_type \"#val-headers\" headers_expected [\"string\"; \"list\"];\n  link_stdlib_type \"#val-all_headers\" all_headers_expected [\"string\"; \"list\"];\n  link_stdlib_type \"#val-has_header\" has_header_expected [\"string\"; \"bool\"];\n  link_stdlib_type \"#val-drop_header\" drop_header_expected [\"string\"; \"unit\"];\n  replace \"#val-add_header\" add_header_expected add_header_replacement;\n  multiline \"#val-set_header\" set_header_expected set_header_replacement;\n\n  let add_set_cookie = soup $ \"#val-set_cookie\" in\n  if_expected\n    add_set_cookie_expected\n    (fun () -> pretty_print add_set_cookie)\n    (fun () ->\n      Soup.replace\n        (add_set_cookie $ \"> code\")\n        (Soup.parse add_set_cookie_replacement);\n      
Soup.add_class \"multiline\" add_set_cookie);\n\n  let drop_cookie = soup $ \"#val-drop_cookie\" in\n  if_expected\n    drop_cookie_expected\n    (fun () -> pretty_print drop_cookie)\n    (fun () ->\n      Soup.replace\n        (drop_cookie $ \"> code\")\n        (Soup.parse drop_cookie_replacement);\n      Soup.add_class \"multiline\" drop_cookie);\n\n  multiline \"#val-cookie\" cookie_expected cookie_replacement;\n  link_stdlib_type \"#val-all_cookies\" all_cookies_expected [\"string\"; \"list\"];\n\n  link_stdlib_type \"#val-body\" body_expected [\"string\"];\n  link_stdlib_type \"#val-set_body\" set_body_expected [\"string\"; \"unit\"];\n\n  link_stdlib_type \"#val-read\" read_expected [\"string\"; \"option\"];\n  link_stdlib_type \"#val-write\" write_expected [\"string\"; \"unit\"];\n  link_stdlib_type \"#val-flush\" flush_expected [\"unit\"];\n  link_stdlib_type \"#val-close\" close_expected [\"unit\"];\n\n  let bigstring = soup $ \"#type-buffer\" in\n  if_expected\n    bigstring_expected\n    (fun () -> pretty_print bigstring)\n    (fun () ->\n      Soup.replace (bigstring $ \"> code\") (Soup.parse bigstring_replacement);\n      Soup.add_class \"multiline\" bigstring);\n\n  link_stdlib_type \"#val-close_stream\" close_stream_expected [\"int\"; \"unit\"];\n  replace \"#val-abort_stream\" abort_stream_expected abort_stream_replacement;\n\n  let form = soup $ \"#type-form_result\" in\n  if_expected\n    form_expected\n    (fun () -> pretty_print form)\n    (fun () ->\n      form $$ \"> code\" |> Soup.iter Soup.delete;\n      Soup.replace (form $ \"> table\") (Soup.parse form_replacement);\n      Soup.add_class \"multiline\" form);\n\n  multiline \"#val-form\" form'_expected form'_replacement;\n\n  (* let type_table selector expected replacement =\n    let element = soup $ selector in\n    if_expected\n      expected\n      (fun () -> pretty_print element)\n      (fun () ->\n        element $$ \"> code\" |> Soup.iter Soup.delete;\n        Soup.replace (element $ 
\"> table\") (Soup.parse replacement);\n        Soup.add_class \"multiline\" element)\n  in *)\n\n  multiline\n    \"#type-multipart_form\" multipart_form_expected multipart_form_replacement;\n  multiline \"#val-multipart\" multipart_expected multipart_replacement;\n  multiline \"#type-part\" part_expected part_replacement;\n  link_stdlib_type \"#val-upload\" upload_expected [\"option\"];\n  link_stdlib_type \"#val-upload_part\" upload_part_expected [\"string\"; \"option\"];\n  link_stdlib_type \"#val-csrf_token\" csrf_token_expected [\"float\"; \"string\"];\n  link_stdlib_type \"#val-csrf_tag\" csrf_tag_expected [\"string\"];\n\n  let csrf_result = soup $ \"#type-csrf_result\" in\n  if_expected\n    csrf_result_expected\n    (fun () -> pretty_print csrf_result)\n    (fun () ->\n      csrf_result $$ \"> code\" |> Soup.iter Soup.delete;\n      Soup.replace (csrf_result $ \"> table\")\n        (Soup.parse csrf_result_replacement);\n      Soup.add_class \"multiline\" csrf_result);\n\n  multiline\n    \"#val-verify_csrf_token\"\n    verify_csrf_token_expected\n    verify_csrf_token_replacement;\n\n  link_stdlib_type \"#val-pipeline\" pipeline_expected [\"list\"];\n  link_stdlib_type \"#val-set_client_stream\" set_client_stream_expected [\"unit\"];\n  link_stdlib_type \"#val-set_server_stream\" set_server_stream_expected [\"unit\"];\n\n  link_stdlib_type \"#val-router\" router_expected [\"list\"];\n  multiline \"#val-scope\" scope_expected scope_replacement;\n  replace \"#val-get\" get_expected get_replacement;\n  replace \"#val-post\" post_expected post_replacement;\n  replace \"#val-put\" put_expected put_replacement;\n  replace \"#val-delete\" delete_expected delete_replacement;\n  replace \"#val-head\" head_expected head_replacement;\n  link_stdlib_type \"#val-connect\" connect_expected [\"string\"];\n  link_stdlib_type \"#val-options\" options_expected [\"string\"];\n  replace \"#val-trace\" trace_expected trace_replacement;\n  replace \"#val-patch\" patch_expected 
patch_replacement;\n  replace \"#val-any\" any_expected any_replacement;\n  link_stdlib_type \"#val-param\" param_expected [\"string\"];\n  multiline \"#val-static\" static_expected static_replacement;\n  link_stdlib_type \"#val-from_filesystem\" from_filesystem_expected [\"string\"];\n  link_stdlib_type \"#val-mime_lookup\" mime_lookup_expected [\"string\"; \"list\"];\n\n  link_stdlib_type\n    \"#val-session_field\" session_field_expected [\"string\"; \"option\"];\n  multiline \"#val-set_session_field\"\n    set_session_expected set_session_replacement;\n  link_stdlib_type\n    \"#val-drop_session_field\" drop_session_field_expected [\"string\"; \"unit\"];\n  link_stdlib_type\n    \"#val-all_session_fields\" all_session_fields_expected [\"string\"; \"list\"];\n  link_stdlib_type\n    \"#val-invalidate_session\" invalidate_session_expected [\"unit\"];\n  link_stdlib_type \"#val-memory_sessions\" memory_sessions_expected [\"float\"];\n  link_stdlib_type \"#val-cookie_sessions\" cookie_sessions_expected [\"float\"];\n  link_stdlib_type \"#val-sql_sessions\" sql_sessions_expected [\"float\"];\n  link_stdlib_type \"#val-session_id\" session_id_expected [\"string\"];\n  link_stdlib_type \"#val-session_label\" session_label_expected [\"string\"];\n  link_stdlib_type\n    \"#val-session_expires_at\" session_expires_at_expected [\"float\"];\n\n  link_stdlib_type\n    \"#val-flash_messages\" flash_messages_expected [\"string\"; \"list\"];\n  link_stdlib_type\n    \"#val-add_flash_message\" add_flash_message_expected [\"string\"; \"unit\"];\n\n  multiline \"#val-websocket\" websocket_expected websocket_replacement;\n  multiline \"#val-send\" send_expected send_replacement;\n  link_stdlib_type \"#val-receive\" receive_expected [\"string\"; \"option\"];\n  multiline \"#val-close_websocket\"\n    close_websocket_expected close_websocket_replacement;\n\n  multiline \"#val-graphql\" graphql_expected graphql_replacement;\n  link_stdlib_type \"#val-graphiql\" graphiql_expected 
[\"string\"];\n\n  link_stdlib_type \"#val-sql_pool\" sql_pool_expected [\"int\"; \"string\"];\n  multiline \"#val-sql\" sql_expected sql_replacement;\n\n  replace \"#val-log\" log_expected log_replacement;\n\n  let conditional_log = soup $ \"#type-conditional_log\" in\n  if_expected\n    conditional_log_expected\n    (fun () -> pretty_print conditional_log)\n    (fun () ->\n      Soup.replace\n        (conditional_log $ \"> code\")\n        (Soup.parse conditional_log_replacement);\n      Soup.add_class \"multiline\" conditional_log);\n\n  let sub_log = soup $ \"#type-sub_log\" in\n  if_expected\n    sub_log_expected\n    (fun () -> pretty_print sub_log)\n    (fun () ->\n      sub_log $$ \"> code\" |> Soup.iter Soup.delete;\n      Soup.replace\n        (sub_log $ \"> table\")\n        (Soup.parse sub_log_replacement);\n      Soup.add_class \"multiline\" sub_log);\n\n  let log_level = soup $ \"#type-log_level\" in\n  if_expected\n    log_level_expected\n    (fun () -> pretty_print log_level)\n    (fun () ->\n      log_level $$ \"> code\" |> Soup.iter Soup.delete;\n      Soup.replace (log_level $ \"> table\") (Soup.parse log_level_replacement));\n\n  replace \"#val-error\" val_error_expected val_error_replacement;\n  replace \"#val-warning\" warning_expected warning_replacement;\n  replace \"#val-info\" info_expected info_replacement;\n  replace \"#val-debug\" debug_expected debug_replacement;\n\n  link_stdlib_type \"#val-sub_log\" sub_log_expected' [\"string\"];\n  link_stdlib_type\n    \"#val-set_log_level\" set_log_level_expected [\"string\"; \"unit\"];\n\n  let initialize_log = soup $ \"#val-initialize_log\" in\n  if_expected\n    initialize_log_expected\n    (fun () -> pretty_print initialize_log)\n    (fun () ->\n      Soup.replace\n        (initialize_log $ \"> code\")\n        (Soup.parse initialize_log_replacement);\n      Soup.add_class \"multiline\" initialize_log);\n\n  multiline\n    \"#val-error_template\" error_template_expected 
error_template_replacement;\n\n  let error = soup $ \"#type-error\" in\n  if_expected\n    error_expected\n    (fun () -> pretty_print error)\n    (fun () ->\n      error $$ \"> code\" |> Soup.iter Soup.delete;\n      Soup.replace (error $ \"> table\") (Soup.parse error_replacement);\n      Soup.add_class \"multiline\" error);\n\n  link_stdlib_type \"#type-error_handler\" error_handler_expected [\"option\"];\n\n  multiline \"#val-new_field\" new_field_expected new_field_replacement;\n\n  let run = soup $ \"#val-run\" in\n  if_expected\n    run_expected\n    (fun () -> pretty_print run)\n    (fun () ->\n      Soup.replace\n        (run $ \"> code\")\n        (Soup.parse run_replacement);\n      Soup.add_class \"multiline\" run);\n\n  let serve = soup $ \"#val-serve\" in\n  if_expected\n    serve_expected\n    (fun () -> pretty_print serve)\n    (fun () ->\n      Soup.replace\n        (serve $ \"> code\")\n        (Soup.parse serve_replacement);\n      Soup.add_class \"multiline\" serve);\n\n  link_stdlib_type \"#val-with_site_prefix\" with_site_prefix_expected [\"string\"];\n\n  link_stdlib_type \"#val-html_escape\" html_escape_expected [\"string\"];\n  link_stdlib_type \"#val-to_base64url\" to_base64url_expected [\"string\"];\n  link_stdlib_type\n    \"#val-from_base64url\" from_base64url_expected [\"string\"; \"option\"];\n  multiline \"#val-to_percent_encoded\"\n    to_percent_encoded_expected to_percent_encoded_replacement;\n  link_stdlib_type\n    \"#val-from_percent_encoded\" from_percent_encoded_expected [\"string\"];\n  link_stdlib_type\n    \"#val-to_form_urlencoded\" to_form_urlencoded_expected [\"string\"; \"list\"];\n  link_stdlib_type\n    \"#val-from_form_urlencoded\"\n    from_form_urlencoded_expected\n    [\"string\"; \"list\"];\n  link_stdlib_type \"#val-from_cookie\" from_cookie_expected [\"string\"; \"list\"];\n  multiline\n    \"#val-to_set_cookie\" to_set_cookie_expected to_set_cookie_replacement;\n  link_stdlib_type \"#val-split_target\" 
split_target_expected [\"string\"];\n  link_stdlib_type \"#val-from_path\" from_path_expected [\"string\"; \"list\"];\n  multiline \"#val-to_path\" to_path_expected to_path_replacement;\n  link_stdlib_type\n    \"#val-drop_trailing_slash\" drop_trailing_slash_expected [\"string\"; \"list\"];\n\n  link_stdlib_type \"#val-text_html\" text_html_expected [\"string\"];\n  link_stdlib_type \"#val-application_json\" application_json_expected [\"string\"];\n\n  link_stdlib_type \"#val-random\" random_expected [\"int\"; \"string\"];\n  multiline \"#val-encrypt\" encrypt_expected encrypt_replacement;\n  multiline \"#val-decrypt\" decrypt_expected decrypt_replacement;\n\n  link_stdlib_type \"#val-field\" field_expected [\"option\"];\n  link_stdlib_type \"#val-set_field\" set_field_expected [\"unit\"];\n\n  let request = soup $ \"#val-request\" in\n  if_expected\n    request_expected\n    (fun () -> pretty_print request)\n    (fun () ->\n      Soup.replace (request $ \"> code\") (Soup.parse request_replacement);\n      Soup.add_class \"multiline\" request);\n\n  multiline \"#val-sort_headers\" sort_headers_expected sort_headers_replacement;\n\n  replace \"#type-message\" message_expected message_replacement;\n  replace \"#type-client\" client_expected' client_replacement';\n  replace \"#type-server\" server_expected' server_replacement';\n\n  multiline \"#val-read_stream\" read_stream_expected read_stream_replacement;\n  multiline \"#val-write_stream\" write_stream_expected write_stream_replacement;\n  multiline \"#val-flush_stream\" flush_stream_expected flush_stream_replacement;\n  multiline \"#val-ping_stream\" ping_stream_expected ping_stream_replacement;\n  multiline \"#val-pong_stream\" pong_stream_expected pong_stream_replacement;\n\n  rewrite_status_group ~multiline:false\n    \"#type-text_or_binary\" text_or_binary_expected text_or_binary_replacement;\n  rewrite_status_group ~multiline:false\n    \"#type-end_of_message\" end_of_message_expected 
end_of_message_replacement;\n\n  multiline\n    \"#val-receive_fragment\"\n    receive_fragment_expected receive_fragment_replacement;\n\n  multiline \"#val-set_secret\" set_secret_expected set_secret_replacement;\n\n  link_stdlib_type \"#val-test\" test_expected [\"string\"]\n\nlet remove_stdlib soup =\n  soup $$ \".xref-unresolved:contains(\\\"Stdlib\\\")\" |> Soup.iter (fun element ->\n    begin match Soup.next_sibling element with\n    | None -> ()\n    | Some next ->\n      match Soup.element next with\n      | Some _ -> ()\n      | None ->\n        match Soup.leaf_text next with\n        | None -> ()\n        | Some s ->\n          match s.[0] with\n          | '.' ->\n            String.sub s 1 (String.length s - 1)\n            |> Soup.create_text\n            |> Soup.replace next\n          | _ | exception _ -> ()\n    end;\n    delete element)\n\nlet retarget_status soup =\n  soup $$ \"a[href=#type-status]\"\n  |> Soup.(iter (set_attribute \"href\" \"#status_codes\"))\n\nlet links_new_tabs soup =\n  soup $$ \"a[href^=http]\"\n  |> Soup.(iter (fun a ->\n    set_attribute \"target\" \"_blank\" a;\n    set_attribute \"rel\" \"noreferrer noopener\" a))\n\nlet () =\n  let source = Sys.argv.(1) in\n  let destination = Sys.argv.(2) in\n  let soup = Soup.(read_file source |> parse) in\n  let content = soup $ \"div.odoc-content\" in\n\n  soup $$ \"nav.odoc-toc li > ul\" |> Soup.iter delete;\n\n  soup\n  $ \"nav.odoc-toc\"\n  |> Soup.prepend_child content;\n\n  pretty_print_signatures soup;\n  (* remove_specs soup; *)\n\n  let error_template = soup $ \"#val-error_template\" |> Soup.R.parent in\n  let error = soup $ \"#type-error\" |> Soup.R.parent in\n  Soup.prepend_child error error_template;\n\n  Common.add_backing_lines soup;\n\n  remove_stdlib content;\n  retarget_status content;\n  links_new_tabs content;\n\n  Soup.(to_string content |> write_file destination)"
  },
  {
    "path": "www/process/dune",
    "content": "(executables\n (names index sidebar generate_api_rules)\n (libraries lambdasoup str markup jsont jsont.bytesrw))\n"
  },
  {
    "path": "www/process/generate_api_rules.ml",
    "content": "let read_sidebar_json path =\n  if Sys.file_exists path then\n    let ic = open_in path in\n    let content =\n      Fun.protect ~finally:(fun () -> close_in ic) (fun () ->\n          In_channel.input_all ic)\n    in\n    match Jsont_bytesrw.decode_string Jsont.json content with\n    | Ok v -> v\n    | Error e -> failwith e\n  else failwith (Printf.sprintf \"sidebar.json not found at %s\" path)\n\nlet find_field name mems =\n  match Jsont.Json.find_mem name mems with\n  | Some (_, v) -> v\n  | None -> raise Not_found\n\ntype module_info = {\n  name : string; [@warning \"-69\"]\n  url : string;\n  path : string list; [@warning \"-69\"]\n}\n\nlet rec extract_modules ?(path = []) json =\n  match json with\n  | Jsont.Object (mems, _) -> (\n      let node =\n        try find_field \"node\" mems\n        with Not_found -> failwith \"No node field in entry\"\n      in\n      let children =\n        try\n          match find_field \"children\" mems with\n          | Jsont.Array (l, _) -> l\n          | _ -> []\n        with Not_found -> []\n      in\n      match node with\n      | Jsont.Object (node_fields, _) -> (\n          let url =\n            try\n              match find_field \"url\" node_fields with\n              | Jsont.String (s, _) -> Some s\n              | _ -> None\n            with Not_found -> None\n          in\n          let content =\n            try\n              match find_field \"content\" node_fields with\n              | Jsont.String (s, _) -> s\n              | _ -> \"\"\n            with Not_found -> \"\"\n          in\n          let kind =\n            try\n              match find_field \"kind\" node_fields with\n              | Jsont.String (s, _) -> Some s\n              | _ -> None\n            with Not_found -> None\n          in\n          match (kind, url) with\n          | Some (\"module\" | \"module-type\"), Some url ->\n              let module_info = { name = content; url; path } in\n              module_info\n       
       :: List.concat_map\n                   (extract_modules ~path:(path @ [ content ]))\n                   children\n          | _ -> List.concat_map (extract_modules ~path) children)\n      | _ -> [])\n  | _ -> []\n\nlet process_sidebar json =\n  match json with\n  | Jsont.Array (entries, _) ->\n      List.concat_map (extract_modules ~path:[]) entries\n  | _ -> failwith \"Expected array at top level of sidebar.json\"\n\nlet generate_dune_rule library module_info =\n  let open Printf in\n  (* Skip URLs with anchors *)\n  if String.contains module_info.url '#' then None\n  else\n    (* Extract the path from the URL *)\n    let url_parts = String.split_on_char '/' module_info.url in\n    let odoc_path =\n      match List.filter (fun s -> s <> \"\") url_parts with\n      | \"nx\" :: rest -> String.concat \"/\" rest\n      | _ -> failwith (sprintf \"Unexpected URL format: %s\" module_info.url)\n    in\n    let target_path =\n      String.sub odoc_path 0 (String.length odoc_path - 10)\n      (* Remove \"index.html\" *)\n    in\n    let depth = List.length (String.split_on_char '/' target_path) in\n    let back_path = String.concat \"/\" (List.init (depth + 4) (fun _ -> \"..\")) in\n\n    Some\n      (sprintf\n         \"(rule\\n\\\n         \\  (mode promote)\\n\\\n         \\  (deps\\n\\\n         \\   (:index %s/process/index.exe)\\n\\\n         \\   (:source %s/odoc/%s/%s))\\n\\\n         \\  (targets %sindex.html)\\n\\\n         \\  (action\\n\\\n         \\   (run %%{index} %%{source} %s %sindex.html)))\"\n         back_path back_path library odoc_path target_path library target_path)\n\nlet () =\n  if Array.length Sys.argv < 4 then\n    failwith \"Usage: generate_api_rules <library> <sidebar.json> <output.inc>\"\n  else\n    let library = Sys.argv.(1) in\n    let sidebar_json_path = Sys.argv.(2) in\n    let output_path = Sys.argv.(3) in\n\n    let json = read_sidebar_json sidebar_json_path in\n    let modules = process_sidebar json in\n\n    (* Skip the main 
Nx module as it's already handled *)\n    let other_modules =\n      List.filter (fun m -> m.url <> \"nx/nx/Nx/index.html\") modules\n    in\n\n    let rules = List.filter_map (generate_dune_rule library) other_modules in\n    let output = String.concat \"\\n\\n\" rules in\n\n    let oc = open_out output_path in\n    output_string oc output;\n    close_out oc\n"
  },
  {
    "path": "www/process/index.ml",
    "content": "open Soup\n\n(* Extract the main content from an odoc-generated HTML file *)\nlet extract_odoc_content soup =\n  (* For odoc 3.x, the main content is in the body directly *)\n  match soup $? \"body\" with\n  | Some body ->\n      (* Extract header and content sections *)\n      let header = body $? \"header.odoc-preamble\" in\n      let content_div = body $? \"div.odoc-content\" in\n      let create_wrapper () = create_element \"div\" ~class_:\"odoc-extracted\" in\n\n      let wrapper = create_wrapper () in\n\n      (* Add header if it exists *)\n      (match header with Some h -> append_child wrapper h | None -> ());\n\n      (* Add content if it exists *)\n      (match content_div with Some c -> append_child wrapper c | None -> ());\n\n      wrapper\n  | None -> failwith \"Could not find body in odoc HTML\"\n\n(* Remove odoc-specific navigation and header elements *)\nlet remove_odoc_navigation content =\n  (* Remove the odoc nav elements *)\n  content $$ \"nav.odoc-nav\" |> iter delete;\n  content $$ \"nav.odoc-toc\" |> iter delete;\n\n  (* Remove the odoc search bar *)\n  content $$ \"div.odoc-search\" |> iter delete;\n\n  (* Remove any script tags *)\n  content $$ \"script\" |> iter delete;\n\n  (* Remove any link tags for stylesheets *)\n  content $$ \"link[rel=stylesheet]\" |> iter delete;\n\n  content\n\n(* Convert odoc CSS classes to Raven website classes *)\nlet adapt_css_classes content =\n  (* Map odoc classes to Raven classes *)\n  let class_mappings =\n    [\n      (\"odoc-doc\", \"doc-content\");\n      (\"odoc-spec\", \"api-spec\");\n      (\"odoc-val\", \"api-value\");\n      (\"odoc-type\", \"api-type\");\n      (\"odoc-module\", \"api-module\");\n      (\"odoc-include\", \"api-include\");\n      (\"odoc-comment\", \"api-comment\");\n    ]\n  in\n\n  (* Apply class mappings *)\n  List.iter\n    (fun (old_class, new_class) ->\n      content $$ \".\" ^ old_class\n      |> iter (fun elem ->\n             remove_class old_class elem;\n 
            add_class new_class elem))\n    class_mappings;\n\n  content\n\n(* Update internal links to match site structure *)\nlet fix_internal_links ~library content =\n  (* Find all links *)\n  content $$ \"a\"\n  |> iter (fun link ->\n         match attribute \"href\" link with\n         | Some href when not (String.contains href ':') ->\n             (* This is a relative link - update it *)\n             let new_href =\n               if String.starts_with ~prefix:\"../\" href then\n                 (* Link to parent module *)\n                 \"/docs/\" ^ library ^ \"/api/\"\n                 ^ String.sub href 3 (String.length href - 3)\n               else if String.contains href '#' then\n                 (* Link with anchor *)\n                 href\n               else\n                 (* Link to another module *)\n                 \"/docs/\" ^ library ^ \"/api/\" ^ href\n             in\n             set_attribute \"href\" new_href link\n         | _ -> ())\n\n(* Apply syntax highlighting to code blocks *)\nlet enhance_code_blocks content =\n  (* Find all code blocks *)\n  content $$ \"pre code\"\n  |> iter (fun code ->\n         (* Add syntax highlighting class *)\n         add_class \"language-ocaml\" code;\n\n         (* Ensure the pre element has proper styling *)\n         match parent code with\n         | Some pre -> add_class \"code-block\" pre\n         | None -> ());\n\n  (* Also handle inline code *)\n  content $$ \"code\"\n  |> iter (fun code ->\n         match parent code with\n         | Some p when name p <> \"pre\" -> add_class \"inline-code\" code\n         | _ -> ())\n\n(* Add backing lines to headers for visual style *)\nlet add_backing_lines content =\n  (* Add visual backing to h1, h2, h3 elements *)\n  content $$ \"h1\" |> iter (fun header -> add_class \"with-backing\" header);\n  content $$ \"h2\" |> iter (fun header -> add_class \"with-backing\" header);\n  content $$ \"h3\" |> iter (fun header -> add_class \"with-backing\" 
header);\n\n  (* Add backing to specification blocks *)\n  content $$ \".api-spec\" |> iter (fun spec -> add_class \"spec-backing\" spec)\n\n(* Clean up Stdlib references *)\nlet remove_stdlib_prefix content =\n  (* Pattern to match Stdlib. prefixes *)\n  let stdlib_regex = Str.regexp \"\\\\bStdlib\\\\.\" in\n\n  (* Process all text nodes *)\n  content |> descendants |> elements\n  |> iter (fun elem ->\n         let text_content = texts elem |> String.concat \"\" in\n         if\n           String.length text_content > 0\n           (* Search anywhere in the text, not only at position 0:\n              Str.string_match would only match a leading \"Stdlib.\" *)\n           && (try\n                 ignore (Str.search_forward stdlib_regex text_content 0);\n                 true\n               with Not_found -> false)\n         then (\n           let new_text = Str.global_replace stdlib_regex \"\" text_content in\n           (* Don't parse as HTML, just set as text *)\n           clear elem;\n           append_child elem (Soup.create_text new_text)))\n\n(* Extract module name from the page title or content *)\nlet extract_module_name content =\n  (* Try to find module name from h1 or title *)\n  match content $? \"h1\" with\n  | Some h1 ->\n      (* Get only the direct text content, not from child elements *)\n      let get_direct_text node =\n        List.fold_left\n          (fun acc child ->\n            match element child with\n            | None -> acc ^ to_string child\n            | Some elem ->\n                if name elem = \"a\" then acc (* Skip anchor links *)\n                else acc ^ (texts elem |> String.concat \"\"))\n          \"\"\n          (children node |> to_list)\n      in\n      let text = get_direct_text h1 |> String.trim in\n      (* Extract module name from \"Module Nx.Tensor\" -> \"Nx.Tensor\" *)\n      if String.starts_with ~prefix:\"Module \" text then\n        String.sub text 7 (String.length text - 7)\n      else text\n  | None -> \"Unknown\"\n\n(* Main processing function *)\nlet process_odoc_html ~source ~library ~destination =\n  (* Read HTML using permissive parsing *)\n  let soup =\n    let stream, close = Markup.file source in\n    let signals =\n      stream |> 
Markup.parse_html ~context:`Document |> Markup.signals\n    in\n    let result = Soup.from_signals signals in\n    close ();\n    result\n  in\n\n  (* Extract the main content *)\n  let content = extract_odoc_content soup in\n\n  (* Apply transformations *)\n  let content = remove_odoc_navigation content in\n  let content = adapt_css_classes content in\n  fix_internal_links ~library content;\n  enhance_code_blocks content;\n  add_backing_lines content;\n  remove_stdlib_prefix content;\n\n  (* Extract module name for metadata *)\n  let module_name = extract_module_name content in\n\n  (* Generate the final HTML with proper structure *)\n  let final_html =\n    Printf.sprintf \"---\\nlayout: layout_docs_%s\\ntitle: %s\\n---\\n\\n%s\" library\n      module_name (Soup.to_string content)\n  in\n\n  (* Write to destination *)\n  let oc = open_out destination in\n  output_string oc final_html;\n  close_out oc\n\n(* Entry point *)\nlet () =\n  match Sys.argv with\n  | [| _; source; library; destination |] ->\n      process_odoc_html ~source ~library ~destination\n  | _ ->\n      Printf.eprintf \"Usage: %s <source.html> <library> <destination.html>\\n\"\n        Sys.argv.(0);\n      Printf.eprintf\n        \"Example: %s _html/nx/Nx.html nx www/site/docs/nx/api/Nx.html\\n\"\n        Sys.argv.(0);\n      exit 1\n"
  },
  {
    "path": "www/process/sidebar.ml",
    "content": "let read_sidebar_json path =\n  if Sys.file_exists path then\n    let ic = open_in path in\n    let content =\n      Fun.protect ~finally:(fun () -> close_in ic) (fun () ->\n          In_channel.input_all ic)\n    in\n    match Jsont_bytesrw.decode_string Jsont.json content with\n    | Ok v -> v\n    | Error e -> failwith e\n  else failwith (Printf.sprintf \"sidebar.json not found at %s\" path)\n\nlet find_field name mems =\n  match Jsont.Json.find_mem name mems with\n  | Some (_, v) -> v\n  | None -> raise Not_found\n\ntype entry = {\n  name : string;\n  url : string option;\n  kind : string option;\n  children : entry list;\n}\n\nlet rec process_entry json =\n  match json with\n  | Jsont.Object (mems, _) -> (\n      let node =\n        try find_field \"node\" mems\n        with Not_found -> failwith \"No node field in entry\"\n      in\n      let children =\n        try\n          match find_field \"children\" mems with\n          | Jsont.Array (l, _) -> l\n          | _ -> []\n        with Not_found -> []\n      in\n      match node with\n      | Jsont.Object (node_fields, _) ->\n          let url =\n            try\n              match find_field \"url\" node_fields with\n              | Jsont.String (s, _) -> Some s\n              | Jsont.Null _ -> None\n              | _ -> None\n            with Not_found -> None\n          in\n          let content =\n            try\n              match find_field \"content\" node_fields with\n              | Jsont.String (s, _) -> s\n              | _ -> \"\"\n            with Not_found -> \"\"\n          in\n          let kind =\n            try\n              match find_field \"kind\" node_fields with\n              | Jsont.String (s, _) -> Some s\n              | Jsont.Null _ -> None\n              | _ -> None\n            with Not_found -> None\n          in\n          let children_entries = List.filter_map process_entry children in\n          Some { name = content; url; kind; children = 
children_entries }\n      | _ -> None)\n  | _ -> None\n\nlet rec collect_modules entry =\n  match entry.kind with\n  | Some \"module\" | Some \"module-type\" -> [ entry ]\n  | _ ->\n      List.concat_map collect_modules entry.children\n\nlet process_sidebar json =\n  match json with\n  | Jsont.Array (entries, _) ->\n      let all_entries = List.filter_map process_entry entries in\n      List.concat_map collect_modules all_entries\n  | _ -> failwith \"Expected array at top level of sidebar.json\"\n\nlet rec generate_html ~base_path entries =\n  let open Printf in\n  List.map\n    (fun entry ->\n      match entry.url with\n      | Some url ->\n          (* Convert URL from nx/... to /docs/nx/api/... *)\n          let link =\n            if String.length url > 3 && String.sub url 0 3 = \"nx/\" then\n              let rest = String.sub url 3 (String.length url - 3) in\n              sprintf \"/docs/nx/api/%s\" rest\n            else sprintf \"%s%s\" base_path url\n          in\n          let children_html =\n            if entry.children = [] then \"\"\n            else\n              let child_modules =\n                List.concat_map collect_modules entry.children\n              in\n              if child_modules = [] then \"\"\n              else\n                sprintf \"\\n<ul>\\n%s</ul>\"\n                  (generate_html ~base_path child_modules)\n          in\n          sprintf \"<li><a href=\\\"%s\\\">%s</a>%s</li>\" link entry.name\n            children_html\n      | None ->\n          (* Group without URL - shouldn't happen for modules *)\n          \"\")\n    entries\n  |> List.filter (fun s -> s <> \"\")\n  |> String.concat \"\\n\"\n\nlet generate_api_nav_html ~library:_ ~sidebar_json_path ~output_path =\n  let json = read_sidebar_json sidebar_json_path in\n  let entries = process_sidebar json in\n  let html = generate_html ~base_path:\"/\" entries in\n  let full_html =\n    if html = \"\" then\n      \"<ul class=\\\"nav-links\\\">\\n<li>No API 
documentation available</li>\\n</ul>\"\n    else Printf.sprintf \"<ul class=\\\"nav-links\\\">\\n%s\\n</ul>\" html\n  in\n  let oc = open_out output_path in\n  output_string oc full_html;\n  close_out oc\n\nlet () =\n  if Array.length Sys.argv < 4 then\n    failwith \"Usage: sidebar <library> <sidebar.json> <output.html>\"\n  else\n    let library = Sys.argv.(1) in\n    let sidebar_json_path = Sys.argv.(2) in\n    let output_path = Sys.argv.(3) in\n    generate_api_nav_html ~library ~sidebar_json_path ~output_path\n"
  },
  {
    "path": "www/site/docs.css",
    "content": "/* Documentation specific styles */\nbody {\n  padding: 0;\n  font-size: 15px;\n}\n\n/* Layout */\n.header {\n  border-bottom: 2px solid var(--color-border);\n  padding: 20px;\n}\n\n/* Header navigation */\n.header nav {\n  display: flex;\n  align-items: center;\n  gap: 8px;\n}\n.breadcrumb-link {\n  text-decoration: none;\n  font-size: 18px;\n  color: var(--color-text);\n}\n.breadcrumb-link:hover {\n  color: var(--color-text-bright);\n}\n.breadcrumb-separator {\n  color: var(--color-border-light);\n  margin: 0;\n}\n.breadcrumb-text {\n  font-size: 18px;\n}\n\n/* Logo in header */\n.logo-link {\n  display: flex;\n  align-items: center;\n  gap: 12px;\n}\n.header-logo {\n  width: 28px;\n  height: 28px;\n  filter: invert(1);\n  opacity: 0.85;\n  margin-right: 4px;\n}\n.container {\n  display: flex;\n  min-height: calc(100vh - 60px);\n}\n.sidebar {\n  width: 250px;\n  border-right: 1px solid var(--color-border);\n  padding: 30px 20px;\n  background-color: var(--color-bg-dark);\n}\n.content {\n  flex: 1;\n  padding: 40px;\n  max-width: 900px;\n}\n\n/* Navigation */\n.nav-section {\n  margin-bottom: 30px;\n}\n.nav-title {\n  font-size: 13px;\n  text-transform: uppercase;\n  color: var(--color-text-dim);\n  margin-bottom: 10px;\n  letter-spacing: 0.5px;\n}\n.nav-links {\n  list-style: none;\n  padding: 0;\n  margin: 0;\n}\n.nav-links li {\n  margin-bottom: 5px;\n}\n.nav-links a {\n  text-decoration: none;\n  color: var(--color-text);\n  font-size: 14px;\n  display: block;\n  padding: 3px 0;\n}\n.nav-links a:hover {\n  color: var(--color-text-bright);\n}\n.nav-links a.active {\n  color: var(--color-text-bright);\n  border-left: 2px solid var(--color-text-bright);\n  padding-left: 8px;\n}\n\n/* Override for library color classes */\n.nav-links a.color-blue { color: var(--color-nx) !important; }      /* nx */\n.nav-links a.color-cyan { color: var(--color-brot) !important; }    /* brot */\n.nav-links a.color-pink { color: var(--color-talon) !important; }   /* 
talon */\n.nav-links a.color-purple { color: var(--color-hugin) !important; } /* hugin */\n.nav-links a.color-green { color: var(--color-quill) !important; }  /* quill */\n.nav-links a.color-orange { color: var(--color-rune) !important; }  /* rune */\n.nav-links a.color-red { color: var(--color-kaun) !important; }     /* kaun */\n.nav-links a.color-indigo { color: var(--color-sowilo) !important; } /* sowilo */\n.nav-links a.color-lime { color: var(--color-fehu) !important; }    /* fehu */\n.nav-links a.color-slate { color: var(--color-tolk) !important; }   /* tolk */\n\n/* Tab navigation */\n.nav-tabs {\n  margin-bottom: 30px;\n}\n.nav-tabs input[type=\"radio\"] {\n  display: none;\n}\n.nav-tabs label {\n  display: inline-block;\n  padding: 6px 12px;\n  font-size: 13px;\n  color: var(--color-text-dim);\n  cursor: pointer;\n  border-bottom: 2px solid transparent;\n  text-transform: uppercase;\n  letter-spacing: 0.5px;\n}\n.nav-tabs label:hover {\n  color: var(--color-text);\n}\n.nav-tabs input[type=\"radio\"]:checked + label {\n  color: var(--color-text-bright);\n  border-bottom-color: var(--color-text-bright);\n}\n.tab-panel {\n  display: none;\n  margin-top: 12px;\n}\n#tab-guides:checked ~ #panel-guides,\n#tab-examples:checked ~ #panel-examples,\n#tab-api:checked ~ #panel-api {\n  display: block;\n}\n\n/* Library info section */\n.library-info {\n  margin-bottom: 30px;\n  padding-bottom: 20px;\n  border-bottom: 1px solid var(--color-border);\n}\n.library-info h3 {\n  margin-bottom: 10px;\n}\n.library-info p {\n  font-size: 13px;\n  color: var(--color-text-dim);\n  margin-bottom: 10px;\n}\n.library-info a {\n  font-size: 13px;\n}\n\n/* Breadcrumbs */\n.breadcrumbs {\n  padding: 15px 0;\n  font-size: 13px;\n  color: var(--color-text-dim);\n  display: flex;\n  align-items: center;\n  gap: 4px;\n  border-bottom: 1px solid var(--color-border);\n  margin-bottom: 30px;\n}\n.breadcrumbs:empty {\n  display: none;\n}\n.breadcrumbs a {\n  color: var(--color-text);\n  
text-decoration: none;\n}\n.breadcrumbs a:hover {\n  color: var(--color-text-bright);\n  text-decoration: underline;\n}\n.breadcrumbs > span {\n  color: var(--color-border-light);\n}\n.breadcrumbs > span:last-child {\n  color: var(--color-text);\n  font-weight: normal;\n}\n\n/* Breadcrumb separator and current page */\n.breadcrumb-separator {\n  color: var(--color-border-light);\n  margin: 0 8px;\n}\n.breadcrumb-current {\n  color: var(--color-text);\n}\n\n/* Docs hero (library index pages) */\n.docs-hero {\n  border: 3px solid var(--lib-color, var(--color-border-light));\n  padding: 36px 20px;\n  text-align: center;\n  margin-bottom: 40px;\n}\n.docs-hero h1 {\n  font-size: 72px;\n  margin: 0;\n  color: var(--lib-color, inherit);\n}\n.docs-hero h1 .rune-symbol {\n  display: inline-block;\n  vertical-align: baseline;\n}\n.docs-hero .tagline {\n  font-size: 20px;\n  margin: 20px 0;\n  color: var(--color-text-dim);\n}\n\n/* Sidebar toggle (mobile) */\n.sidebar-toggle {\n  display: none;\n  background: none;\n  border: 1px solid var(--color-border);\n  color: var(--color-text);\n  font-size: 14px;\n  padding: 6px 12px;\n  cursor: pointer;\n  font-family: inherit;\n}\n.sidebar-toggle:hover {\n  color: var(--color-text-bright);\n  border-color: var(--color-border-light);\n}\n\n/* Mobile responsive */\n@media (max-width: 768px) {\n  .sidebar-toggle {\n    display: block;\n  }\n  .container {\n    flex-direction: column;\n  }\n  .sidebar {\n    width: auto;\n    border-right: none;\n    border-bottom: 1px solid var(--color-border);\n    padding: 20px;\n    display: none;\n  }\n  .sidebar.open {\n    display: block;\n  }\n  .content {\n    padding: 20px;\n  }\n  .docs-hero h1 {\n    font-size: 48px;\n  }\n  .docs-hero .tagline {\n    font-size: 16px;\n  }\n  .docs-hero {\n    padding: 24px 16px;\n  }\n  .header nav {\n    flex-wrap: wrap;\n    gap: 8px;\n  }\n}\n\n/* Prev/next navigation */\n.prev-next {\n  display: flex;\n  justify-content: space-between;\n  margin-top: 
60px;\n  padding-top: 20px;\n  border-top: 1px solid var(--color-border);\n}\n.prev-next a {\n  text-decoration: none;\n  color: var(--color-text-dim);\n  font-size: 15px;\n}\n.prev-next a:hover {\n  color: var(--color-text-bright);\n}\n.prev-next .next-link {\n  margin-left: auto;\n}\n\n/* Content specific overrides */\nblockquote {\n  border-left: 3px solid var(--color-border);\n  margin: 20px 0;\n  padding-left: 20px;\n  color: var(--color-text-dim);\n}\n\n"
  },
  {
    "path": "www/site/index.html",
    "content": "<!DOCTYPE html>\n<html lang=\"en\">\n\n<head>\n  <meta charset=\"UTF-8\" />\n  <meta name=\"viewport\" content=\"width=device-width, initial-scale=1.0\" />\n  <title>raven: Modern scientific computing for OCaml</title>\n  <link rel=\"stylesheet\" href=\"/styles.css\">\n</head>\n\n<body>\n\n  <!-- Main Content -->\n  <main class=\"main-content\">\n\n    <div class=\"landing\">\n      <!-- <img src=\"/raven-anim.svg\" alt=\"Raven logo\" class=\"landing-logo\"> -->\n      <svg xmlns=\"http://www.w3.org/2000/svg\" viewBox=\"0 0 137.22 143.7\" class=\"landing-logo\">\n\n        <style>\n          /* one quick flap as soon as the SVG appears */\n          @keyframes flap {\n\n            0%,\n            100% {\n              transform: rotate(0deg);\n            }\n\n            50% {\n              transform: rotate(-8deg);\n            }\n          }\n\n          #wing {\n            transform-origin: 72% 40%;\n            transform-box: fill-box;\n            animation: flap 1.2s ease-in-out 1 forwards;\n          }\n        </style>\n\n        <!-- mask that cuts a hole for the eye -->\n        <defs>\n          <mask id=\"eye-hole\">\n            <!-- keep everything -->\n            <rect width=\"100%\" height=\"100%\" fill=\"white\" />\n            <!-- knock out the eye -->\n            <circle cx=\"103.44\" cy=\"10.83\" r=\"3.48\" fill=\"black\" />\n          </mask>\n        </defs>\n\n        <!-- whole logo, with the mask applied -->\n        <g fill=\"#231f20\" mask=\"url(#eye-hole)\">\n          <path d=\"M109.74,15.86l4.76-6.91,12.97,1.18s9.76.43,9.76,10.2h-23.73l-3.76-4.48Z\" />\n          <polygon points=\"0 136.29 32.74 118.7 37.3 101.43 7.98 114.3 0 136.29\" />\n          <polygon points=\"18.14 89.38 9.12 109.09 38.93 96.71 45.94 84.33 18.14 89.38\" />\n          <path d=\"M19.38,85.95l28.67-6.03,14.82-24.92-10.1-18.3s-23.51,17.81-33.39,49.25Z\" />\n          <path id=\"wing\" 
d=\"M56.04,35.4l5.86-4.67,30.3,10.1,9.61-5.05s8.29,3.26,7.98,15.8c-.27,11.19-2.05,16.17-2.05,16.17l-13.05.02\n             s4.13-12.39-3.37-21.41c0,0,4.36,10.79-.54,26.92-1.49,4.88-9.5,26.67-52.67,42.56\n             c0,0,2.28-24.42,28.34-59.93l.73-1.08-11.16-19.44Z\" />\n          <polygon points=\"47.08 116.74 55.55 127.33 59.62 122.04 60.27 109.09 47.08 116.74\" />\n          <polygon points=\"65.97 115.11 78.35 107.78 78.35 120 73.96 125.05 65.97 115.11\" />\n          <path d=\"M58.15,129.94l2.93-3.75,7.33,9.12h7.49l-1.97,4.56h-11.06\n             s-5.21.65-7,2.93c0,0-1.22-3.58,5.62-6.52v-2.5l-3.34-3.86Z\" />\n          <path d=\"M75.91,128.04l5.1-5.86,9.88,12.92,6.09.97s6.29.96,6.29,6.74\n             c0,0-5.7-3.01-13.03-3.01s-13.11.57-14.33,3.91\n             c0,0-.65-6.71,7.82-7.66l-.08-2.28-7.74-5.73Z\" />\n          <path d=\"M64.18,106.64v6.52s35.84-18.24,43.17-42.84h-13.41s-1.41,19.38-29.76,36.33Z\" />\n          <path d=\"M112.94,7.6s-5.43-7.6-18.24-7.6-18.9,6.73-18.9,6.73\n             c0,0-8.36,4.89-11.62,20.96l27.91,9.99,9.34-4.83,10.43-10.89-5-5.89,6.08-8.47Z\" />\n        </g>\n      </svg>\n      <h1 style=\"display: inline;\">raven</h1>\n      <p class=\"landing-tagline\">modern scientific computing for OCaml</p>\n      <div class=\"links\">\n        <div class=\"links-group\">\n          <a href=\"/docs/nx/\" class=\"color-blue\" data-tooltip=\"N-dimensional arrays with linear algebra operations\">nx</a>\n          <span class=\"separator\">•</span>\n          <a href=\"/docs/tolk/\" class=\"color-slate\" data-tooltip=\"Minimal ML compiler for GPU tensor computation\"><span class=\"rune-symbol\" style=\"font-size: 20px;\">ᛏ</span> tolk</a>\n          <span class=\"separator\">•</span>\n          <a href=\"/docs/rune/\" class=\"color-orange\" data-tooltip=\"Automatic differentiation and functional transformations\"><span class=\"rune-symbol\" style=\"font-size: 20px;\">ᚱ</span> rune</a>\n          <span class=\"separator\">•</span>\n          <a 
href=\"/docs/kaun/\" class=\"color-red\" data-tooltip=\"Neural networks and training\"><span class=\"rune-symbol\" style=\"font-size: 20px;\">ᚲ</span> kaun</a>\n          <span class=\"separator\">•</span>\n          <a href=\"/docs/brot/\" class=\"color-cyan\" data-tooltip=\"Fast, HuggingFace-compatible tokenization for language models\"><span class=\"rune-symbol\" style=\"font-size: 20px;\">ᚨ</span> brot</a>\n          <span class=\"separator\">•</span>\n          <a href=\"/docs/talon/\" class=\"color-pink\" data-tooltip=\"Fast and elegant dataframes with type-safe operations\"><span class=\"rune-symbol\" style=\"font-size: 20px;\">ᛃ</span> talon</a>\n        </div>\n        <div class=\"links-group\" style=\"margin-top: 16px;\">\n          <a href=\"/docs/hugin/\" class=\"color-purple\" data-tooltip=\"Publication-quality plotting\"><span class=\"rune-symbol\" style=\"font-size: 20px;\">ᛞ</span> hugin</a>\n          <span class=\"separator\">•</span>\n          <a href=\"/docs/quill/\" class=\"color-green\" data-tooltip=\"Interactive REPL and markdown notebooks\"><span class=\"rune-symbol\" style=\"font-size: 20px;\">ᛈ</span> quill</a>\n          <span class=\"separator\">•</span>\n          <a href=\"/docs/fehu/\" class=\"color-lime\" data-tooltip=\"Reinforcement learning for OCaml\"><span class=\"rune-symbol\" style=\"font-size: 20px;\">ᚠ</span> fehu</a>\n          <span class=\"separator\">•</span>\n          <a href=\"/docs/sowilo/\" class=\"color-indigo\" data-tooltip=\"Differentiable computer vision\"><span class=\"rune-symbol\" style=\"font-size: 20px;\">ᛋ</span> sowilo</a>\n          <span class=\"separator\">•</span>\n          <a href=\"/docs/munin/\" class=\"color-munin\" data-tooltip=\"Local experiment tracking with live TUI dashboard\"><span class=\"rune-symbol\" style=\"font-size: 20px;\">ᛗ</span> munin</a>\n        </div>\n      </div>\n      <div class=\"landing-footer-nav\">\n        <a href=\"#raven\">about</a>\n        <span 
class=\"separator\">|</span>\n        <a href=\"/docs/\">docs</a>\n        <span class=\"separator\">|</span>\n        <a href=\"https://github.com/raven-ml/raven\">github</a>\n        <span class=\"separator\">|</span>\n        <a href=\"/docs/installation/\" class=\"cta-inline\">get started →</a>\n      </div>\n    </div>\n\n    <h2 id=\"raven\">raven</h2>\n\n    <p><a href=\"https://github.com/raven-ml/raven\">Raven</a> is an ecosystem of composable libraries for numerical computing in OCaml. Tensors, automatic differentiation, neural networks, dataframes, plotting, tokenization, computer vision, reinforcement learning, and interactive notebooks — each library does one thing well, and they compose cleanly together.</p>\n\n    <p>Built on OCaml 5's effect system, Raven uses function transformations — <code>grad</code>, <code>vmap</code>, <code>jit</code> — that compose freely because they are nested effect handlers. The same code that runs a training loop on your laptop can target Metal or CUDA. 
Types catch shape and dtype mismatches at compile time.</p>\n\n    <hr>\n\n    <h2>see it in action</h2>\n\n    <p>Tokenize text, build a classifier, and train it — three libraries working together:</p>\n\n    <pre class=\"language-ocaml\"><code class=\"language-ocaml\">open Kaun\n\n(* Tokenize text with Brot *)\nlet tokenizer = Brot.from_file \"tokenizer.json\" |> Result.get_ok\nlet encode text = Brot.encode_ids tokenizer text\n\n(* Build a model with Kaun *)\nlet model = Layer.sequential [\n  Layer.embedding ~vocab_size:30522 ~embed_dim:128 ();\n  Layer.relu ();\n  Layer.linear ~in_features:128 ~out_features:2 ();\n]\n\n(* Train with automatic differentiation — Rune under the hood *)\nlet trainer = Train.make ~model\n  ~optimizer:(Optim.adam ~lr:(Optim.Schedule.constant 1e-3) ())\nlet st = Train.init trainer ~dtype:Nx.Float32\nlet st = Train.fit trainer st train_data</code></pre>\n\n    <hr>\n\n    <h2>the ecosystem</h2>\n\n    <div class=\"library-map\">\n      <h3>foundation</h3>\n      <div class=\"library-map-group\">\n        <div class=\"library-map-item\">\n          <a href=\"/docs/nx/\" class=\"color-blue\">nx</a>\n          <span class=\"desc\">N-dimensional arrays with pluggable backends (NumPy)</span>\n        </div>\n        <div class=\"library-map-item\">\n          <a href=\"/docs/tolk/\" class=\"color-slate\"><span class=\"rune-symbol\">ᛏ</span> tolk</a>\n          <span class=\"desc\">Minimal ML compiler for GPU tensor computation (tinygrad)</span>\n        </div>\n        <div class=\"library-map-item\">\n          <a href=\"/docs/rune/\" class=\"color-orange\"><span class=\"rune-symbol\">ᚱ</span> rune</a>\n          <span class=\"desc\">Automatic differentiation and vectorizing maps (JAX)</span>\n        </div>\n      </div>\n\n      <h3>machine learning</h3>\n      <div class=\"library-map-group\">\n        <div class=\"library-map-item\">\n          <a href=\"/docs/kaun/\" class=\"color-red\"><span class=\"rune-symbol\">ᚲ</span> kaun</a>\n   
       <span class=\"desc\">Neural networks and training (Flax / PyTorch)</span>\n        </div>\n        <div class=\"library-map-item\">\n          <a href=\"/docs/brot/\" class=\"color-cyan\"><span class=\"rune-symbol\">ᚨ</span> brot</a>\n          <span class=\"desc\">Tokenization for language models (HuggingFace Tokenizers)</span>\n        </div>\n        <div class=\"library-map-item\">\n          <a href=\"/docs/sowilo/\" class=\"color-indigo\"><span class=\"rune-symbol\">ᛋ</span> sowilo</a>\n          <span class=\"desc\">Differentiable computer vision (OpenCV)</span>\n        </div>\n        <div class=\"library-map-item\">\n          <a href=\"/docs/fehu/\" class=\"color-lime\"><span class=\"rune-symbol\">ᚠ</span> fehu</a>\n          <span class=\"desc\">Reinforcement learning environments (Gymnasium)</span>\n        </div>\n      </div>\n\n      <h3>data and visualization</h3>\n      <div class=\"library-map-group\">\n        <div class=\"library-map-item\">\n          <a href=\"/docs/talon/\" class=\"color-pink\"><span class=\"rune-symbol\">ᛃ</span> talon</a>\n          <span class=\"desc\">Dataframes with type-safe columns (pandas / Polars)</span>\n        </div>\n        <div class=\"library-map-item\">\n          <a href=\"/docs/hugin/\" class=\"color-purple\"><span class=\"rune-symbol\">ᛞ</span> hugin</a>\n          <span class=\"desc\">Publication-quality plotting (Matplotlib)</span>\n        </div>\n      </div>\n\n      <h3>tools</h3>\n      <div class=\"library-map-group\">\n        <div class=\"library-map-item\">\n          <a href=\"/docs/quill/\" class=\"color-green\"><span class=\"rune-symbol\">ᛈ</span> quill</a>\n          <span class=\"desc\">Interactive REPL and markdown notebooks (Jupyter + IPython)</span>\n        </div>\n        <div class=\"library-map-item\">\n          <a href=\"/docs/munin/\" class=\"color-munin\"><span class=\"rune-symbol\">ᛗ</span> munin</a>\n          <span class=\"desc\">Local experiment tracking and live 
dashboard (W&B / MLflow)</span>\n        </div>\n      </div>\n    </div>\n\n    <hr>\n\n    <h2>get involved</h2>\n\n    <p>Raven is built in public. We need your help:</p>\n\n    <ul>\n      <li><b>Try it out</b> — install the libraries, run the examples, tell us what breaks</li>\n      <li><b>Report issues</b> — found a bug? Missing a feature? <a href=\"https://github.com/raven-ml/raven/issues\">Let us know</a></li>\n      <li><b>Join the conversation</b> — API decisions happen in GitHub discussions</li>\n      <li><b>Contribute</b> — check out <a href=\"https://github.com/raven-ml/raven/labels/good%20first%20issue\">good first issues</a></li>\n    </ul>\n\n    <p><a href=\"https://github.com/raven-ml/raven\" class=\"cta-inline\">View on GitHub →</a></p>\n\n    <hr>\n\n    <h2>support the project</h2>\n\n    <p>Building a scientific computing ecosystem takes time and focus. Your sponsorship helps us deliver on our <a href=\"/docs/roadmap\">roadmap</a> — GPU backends, performance parity, and comprehensive documentation.</p>\n\n    <p><a href=\"/docs/support-raven\" class=\"cta-inline\">Support Raven →</a></p>\n  </main>\n\n</body>\n\n</html>\n"
  },
  {
    "path": "www/site/odoc.css",
    "content": "/* Styles for odoc API reference pages.\n   Adapted from odig (Daniel Bünzli) for the raven dark theme. */\n\n/*---------------------------------------------------------------------------\n   Right sidebar (table of contents)\n  ---------------------------------------------------------------------------*/\n\n.toc {\n  width: 220px;\n  flex-shrink: 0;\n  padding: 30px 16px;\n  border-left: 1px solid var(--color-border);\n}\n.toc-inner {\n  position: sticky;\n  top: 20px;\n  max-height: calc(100vh - 40px);\n  overflow-y: auto;\n}\n.toc-title {\n  font-size: 11px;\n  text-transform: uppercase;\n  letter-spacing: 0.5px;\n  color: var(--color-text-dim);\n  margin-bottom: 12px;\n}\n.toc .odoc-toc {\n  font-size: 13px;\n}\n.toc .odoc-toc ul {\n  list-style: none;\n  padding: 0;\n  margin: 0;\n}\n.toc .odoc-toc > ul > li {\n  margin-bottom: 2px;\n}\n.toc .odoc-toc > ul > li > a {\n  font-weight: 500;\n  color: var(--color-text);\n}\n.toc .odoc-toc li li {\n  border-left: 1px solid var(--color-border);\n  margin-left: 4px;\n  padding-left: 10px;\n}\n.toc .odoc-toc li {\n  padding: 2px 0;\n}\n.toc .odoc-toc a {\n  color: var(--color-text-dim);\n  text-decoration: none;\n  display: block;\n  line-height: 1.4;\n}\n.toc .odoc-toc a:hover {\n  color: var(--color-text-bright);\n}\n\n@media (max-width: 1100px) {\n  .toc {\n    display: none;\n  }\n}\n\n/*---------------------------------------------------------------------------\n   Left sidebar: API navigation\n  ---------------------------------------------------------------------------*/\n\n.nav-links .nav-group {\n  font-size: 11px;\n  text-transform: uppercase;\n  letter-spacing: 0.3px;\n  color: var(--color-text-dim);\n  margin-top: 14px;\n  padding-top: 10px;\n  border-top: 1px solid var(--color-border);\n  list-style: none;\n}\n.nav-links .nav-group:first-child {\n  margin-top: 0;\n  padding-top: 0;\n  border-top: none;\n}\n.nav-links .nav-sub {\n  padding-left: 10px;\n}\n.nav-links .nav-sub a {\n  
font-size: 13px;\n  color: var(--color-text-dim);\n}\n.nav-links .nav-sub a:hover {\n  color: var(--color-text);\n}\n.nav-links .nav-sub a.active {\n  color: var(--color-text-bright);\n}\n\n/*---------------------------------------------------------------------------\n   API page layout\n  ---------------------------------------------------------------------------*/\n\n.odoc-api {\n  line-height: 1.5;\n}\n\n/*---------------------------------------------------------------------------\n   Code and code highlighting\n  ---------------------------------------------------------------------------*/\n\n.odoc-api code {\n  font-family: inherit;\n  font-size: inherit;\n  color: var(--color-text);\n  background: none;\n  border: none;\n  padding: 0;\n  overflow-wrap: anywhere;\n}\n.odoc-api code span span {\n  white-space: nowrap;\n}\n.odoc-api pre code {\n  font-size: inherit;\n}\n.odoc-api a code {\n  color: inherit;\n}\n.odoc-api .odoc-content h2 code,\n.odoc-api .odoc-content h3 code,\n.odoc-api .odoc-content h4 code {\n  font-size: inherit;\n  font-weight: inherit;\n  text-transform: none;\n}\n\n/* Syntax highlighting */\n.odoc-api .keyword {\n  color: var(--color-hugin);\n}\n.odoc-api .arrow {\n  white-space: nowrap;\n}\n.odoc-api .type-var {\n  color: var(--color-rune);\n}\n.odoc-api .label,\n.odoc-api .optlabel {\n  color: #8ec07c;\n}\n.odoc-api .constructor {\n  color: var(--color-rune);\n}\n\n/*---------------------------------------------------------------------------\n   Preamble (module header)\n  ---------------------------------------------------------------------------*/\n\n.odoc-api .odoc-preamble h1 {\n  font-size: 24px;\n  margin: 0 0 12px;\n  padding-bottom: 8px;\n  border-bottom: 1px solid var(--color-border);\n}\n.odoc-api .odoc-preamble h1 code {\n  font-size: inherit;\n  font-weight: inherit;\n}\n.odoc-api .odoc-preamble p {\n  margin: 8px 0;\n  color: var(--color-text);\n}\n.odoc-api .odoc-preamble > *:first-child {\n  margin-top: 0;\n  padding-top: 
0;\n}\n.odoc-api .odoc-preamble:has(> :nth-child(2)) {\n  margin-bottom: 24px;\n}\n\n/*---------------------------------------------------------------------------\n   Pre-formatted code blocks\n  ---------------------------------------------------------------------------*/\n\n.odoc-api pre {\n  background: var(--color-bg-dark);\n  padding: 1ch 0.8ch;\n  margin-left: -0.8ch;\n  margin-right: -0.8ch;\n  white-space: pre-wrap;\n  overflow-wrap: break-word;\n}\n\n/*---------------------------------------------------------------------------\n   Spec blocks (val, type, module, etc.)\n  ---------------------------------------------------------------------------*/\n\n.odoc-api .odoc-spec {\n  padding-bottom: 4px;\n}\n\n/* Half-line spacing between sibling spec items */\n.odoc-api .odoc-spec + .odoc-spec {\n  margin-top: 12px;\n}\n\n.odoc-api .spec {\n  margin-top: 0;\n  position: relative;\n  padding-left: 4ch;\n  padding-top: 4px;\n  padding-bottom: 4px;\n  text-indent: -4ch;\n  font-size: calc(1em * 1.05);\n}\n\n/* Don't use hanging indent on type specs (variants/records break it) */\n.odoc-api .spec.type {\n  padding-left: 0;\n  text-indent: 0;\n}\n.odoc-api .spec.type > a.anchor {\n  padding-left: 1ch;\n  padding-right: 1ch;\n}\n\n/*---------------------------------------------------------------------------\n   Spec documentation\n  ---------------------------------------------------------------------------*/\n\n.odoc-api .spec-doc {\n  margin-top: 0;\n  padding-left: 1ch;\n  color: var(--color-text);\n}\n.odoc-api .spec-doc > *:first-child {\n  margin-top: 0;\n}\n.odoc-api .spec-doc p {\n  margin: 4px 0;\n}\n.odoc-api .spec-doc ul {\n  margin: 4px 0;\n  padding-left: 20px;\n}\n.odoc-api .spec-doc li {\n  margin-bottom: 2px;\n}\n\n/* Inline code inside documentation text */\n.odoc-api .spec-doc code,\n.odoc-api .def-doc code {\n  font-size: calc(1em * 0.90);\n}\n\n/*---------------------------------------------------------------------------\n   Definition documentation 
(inside type variants/records)\n  ---------------------------------------------------------------------------*/\n\n.odoc-api .def-doc {\n  display: inline-block;\n  padding-left: 7ch;\n  color: var(--color-text-dim);\n}\n.odoc-api .def-doc p {\n  margin: 2px 0;\n  margin-left: -4ch;\n  text-indent: 0;\n}\n.odoc-api .def-doc > *:first-child {\n  margin-top: 0;\n}\n.odoc-api .spec .def-doc .comment-delim {\n  position: absolute;\n  width: 1px;\n  height: 1px;\n  overflow: hidden;\n}\n.odoc-api .spec .def-doc .comment-delim + * {\n  margin-top: 0;\n}\n\n/* Variant/record layout inside spec blocks */\n.odoc-api .spec.type .variant,\n.odoc-api .spec.type .record {\n  margin-left: 2ch;\n}\n.odoc-api .spec.type li.variant,\n.odoc-api .spec.type li.record {\n  list-style: none;\n}\n.odoc-api .spec.type .variant p,\n.odoc-api .spec.type .record p {\n  margin: 4px;\n}\n.odoc-api .spec.type > ol {\n  margin-top: 0;\n  margin-bottom: 0;\n}\n.odoc-api .spec ol {\n  margin: 0;\n  list-style-type: none;\n}\n.odoc-api .spec li {\n  margin-left: 0;\n  padding-left: 4ch;\n  text-indent: -4ch;\n}\n.odoc-api .spec li.record.field {\n  margin-left: 2ch;\n}\n.odoc-api div.def {\n  margin-top: 0;\n  text-indent: -2ex;\n  padding-left: 2ex;\n}\n\n/*---------------------------------------------------------------------------\n   Include blocks (collapsible)\n  ---------------------------------------------------------------------------*/\n\n.odoc-api .odoc-include {\n  margin-bottom: 28px;\n}\n.odoc-api .odoc-include.shadowed-include {\n  display: none;\n}\n.odoc-api .odoc-include summary {\n  cursor: pointer;\n}\n\n/*---------------------------------------------------------------------------\n   Links and anchors\n  ---------------------------------------------------------------------------*/\n\n.odoc-api a {\n  color: var(--color-nx);\n  text-decoration: none;\n}\n.odoc-api a:hover {\n  box-shadow: 0 1px 0 0 var(--color-nx);\n}\n.odoc-api .xref-unresolved {\n  box-shadow: 0 1px 0 0 
var(--color-kaun);\n}\n\n.odoc-api a.anchor {\n  visibility: hidden;\n  position: absolute;\n  font-weight: normal;\n  font-style: normal;\n  margin-left: -2.5ch;\n  padding-right: 1ch;\n  padding-left: 1ch;\n  color: var(--color-nx);\n  text-align: right;\n}\n.odoc-api a.anchor:before {\n  content: \"#\";\n}\n.odoc-api *:hover > a.anchor {\n  visibility: visible;\n}\n.odoc-api a.anchor:hover {\n  box-shadow: none;\n  text-decoration: underline;\n}\n.odoc-api .spec > a.anchor,\n.odoc-api .spec li > a.anchor {\n  padding-right: 0.5ch;\n  padding-left: 2ch;\n}\n\n/* Target highlight when navigating to an anchor */\n.odoc-api *:target {\n  background-color: color-mix(in srgb, var(--color-bg) 70%, var(--color-nx) 30%);\n  box-shadow: 0 0 0 3px color-mix(in srgb, var(--color-bg) 70%, var(--color-nx) 30%);\n}\n\n/*---------------------------------------------------------------------------\n   Section headings\n  ---------------------------------------------------------------------------*/\n\n.odoc-api .odoc-content h2 {\n  font-size: 18px;\n  margin: 40px 0 12px;\n  padding-top: 12px;\n  border-top: 1px solid var(--color-border);\n}\n.odoc-api .odoc-content h3 {\n  font-size: 15px;\n  margin: 28px 0 8px;\n}\n.odoc-api .odoc-content h4 {\n  font-size: 14px;\n  margin: 20px 0 6px;\n}\n.odoc-api .odoc-content > *:first-child {\n  border-top: 1px solid var(--color-border);\n  padding-top: 12px;\n  margin-top: 16px;\n}\n\n/*---------------------------------------------------------------------------\n   Comment delimiters (accessible but hidden)\n  ---------------------------------------------------------------------------*/\n\n.odoc-api .comment-delim {\n  position: absolute;\n  width: 1px;\n  height: 1px;\n  padding: 0;\n  margin: -1px;\n  overflow: hidden;\n  clip: rect(0, 0, 0, 0);\n  white-space: nowrap;\n  border: 0;\n}\n\n/*---------------------------------------------------------------------------\n   Lists of modules and @-tags\n  
---------------------------------------------------------------------------*/\n\n.odoc-api .modules {\n  list-style-type: none;\n  margin-left: -2ch;\n}\n.odoc-api .modules li {\n  padding-left: 2ch;\n  text-indent: -2ch;\n  margin-top: 5px;\n}\n.odoc-api .modules .synopsis {\n  padding-left: 1ch;\n}\n.odoc-api .at-tags {\n  list-style-type: none;\n  margin-left: -2ch;\n}\n.odoc-api .at-tags li {\n  padding-left: 2ch;\n  text-indent: -2ch;\n}\n.odoc-api .at-tags .at-tag {\n  text-transform: capitalize;\n}\n\n/*---------------------------------------------------------------------------\n   Alert and since markers\n  ---------------------------------------------------------------------------*/\n\n.odoc-api .alert::before,\n.odoc-api .deprecated::before {\n  content: \"\\26A0\\FE0F \" / \"\";\n}\n.odoc-api .since::before {\n  content: \"\\1F55A \" / \"\";\n}\n\n/*---------------------------------------------------------------------------\n   Tables\n  ---------------------------------------------------------------------------*/\n\n.odoc-api .odoc-table {\n  margin: 16px 0;\n  border-collapse: collapse;\n}\n.odoc-api .odoc-table td,\n.odoc-api .odoc-table th {\n  padding: 6px 10px;\n  border: 1px solid var(--color-border);\n}\n.odoc-api .odoc-table th {\n  font-weight: bold;\n}\n\n/*---------------------------------------------------------------------------\n   Source links\n  ---------------------------------------------------------------------------*/\n\n.odoc-api a.source_link {\n  float: right;\n  color: var(--color-text-dim);\n  font-size: 13px;\n}\n\n/*---------------------------------------------------------------------------\n   Basic markup\n  ---------------------------------------------------------------------------*/\n\n.odoc-api b,\n.odoc-api strong {\n  font-weight: bold;\n}\n.odoc-api em {\n  font-style: italic;\n}\n.odoc-api sup {\n  vertical-align: super;\n  font-size: 12px;\n  line-height: 0;\n}\n.odoc-api sub {\n  vertical-align: sub;\n  font-size: 
12px;\n  line-height: 0;\n}\n.odoc-api ul > li {\n  margin-left: 22px;\n}\n.odoc-api ol > li {\n  margin-left: 27px;\n}\n"
  },
  {
    "path": "www/site/styles.css",
    "content": "/* Global styles for Raven website */\n\n:root {\n  /* Base colors */\n  --color-bg: #1a1a1a;\n  --color-bg-dark: #0f0f0f;\n  --color-text: #d0d0d0;\n  --color-text-bright: #ffffff;\n  --color-text-dim: #888;\n  --color-border: #333;\n  --color-border-light: #666;\n  \n  /* Library colors */\n  --color-nx: #4dabf7;        /* blue */\n  --color-brot: #06b6d4;      /* cyan */\n  --color-hugin: #b197fc;     /* purple */\n  --color-quill: #10b981;     /* emerald */\n  --color-rune: #f59e0b;      /* amber */\n  --color-kaun: #ef4444;      /* red */\n  --color-sowilo: #6366f1;    /* indigo */\n  --color-talon: #ec4899;     /* pink */\n  --color-fehu: #84cc16;      /* lime */\n  --color-vega: #eab308;      /* yellow */\n  --color-norn: #14b8a6;      /* teal */\n  --color-munin: #f97316;     /* orange-warm */\n  --color-tolk: #64748b;      /* slate */\n  \n  /* Syntax highlighting */\n  --syntax-comment: #666;\n  --syntax-keyword: var(--color-rune);\n  --syntax-string: var(--color-quill);\n  --syntax-number: var(--color-nx);\n  --syntax-function: var(--color-hugin);\n  --syntax-operator: var(--color-text);\n  --syntax-type: var(--color-sowilo);\n}\n\n/* Base styles */\nbody {\n  font-family: 'Lucida Console', 'DejaVu Sans Mono', monospace;\n  font-size: 16px;\n  color: var(--color-text);\n  background-color: var(--color-bg);\n  margin: 0;\n  padding: 0;\n  line-height: 1.6;\n}\n\n/* Content wrapper for centered pages */\n.main-content {\n  max-width: 900px;\n  margin: 0 auto;\n  padding: 40px;\n}\n\n@media (min-width: 1600px) {\n  .main-content {\n    max-width: 1000px;\n  }\n}\n\n/* Full-width sections */\n\n/* Typography */\nh1, h2, h3, h4, h5, h6 {\n  font-weight: normal;\n  color: var(--color-text-bright);\n  margin-top: 1.5em;\n  margin-bottom: 0.5em;\n}\n\nh1 { font-size: 32px; }\nh2 {\n  font-size: 24px;\n  padding-top: 1em;\n  border-top: 1px solid var(--color-border);\n  margin-top: 2em;\n}\nh3 { font-size: 20px; }\n\n/* Links */\na {\n  color: 
inherit;\n  text-decoration: underline;\n}\n\na:hover {\n  color: var(--color-text-bright);\n}\n\n/* Code blocks */\npre {\n  background: var(--color-bg-dark);\n  color: var(--color-text);\n  padding: 15px;\n  overflow-x: auto;\n  margin: 20px 0;\n}\n\ncode {\n  background: #2a2a2a;\n  border: 1px solid var(--color-border);\n  padding: 2px 6px;\n  color: var(--color-text);\n  font-size: 14px;\n}\n\npre code {\n  background: none;\n  border: none;\n  padding: 0;\n}\n\n/* Tables */\ntable {\n  border-collapse: collapse;\n  width: 100%;\n  margin: 20px 0;\n}\n\ntd, th {\n  padding: 8px 15px;\n  border: 1px solid var(--color-border);\n  text-align: left;\n}\n\nth {\n  color: var(--color-text-bright);\n  background: var(--color-bg-dark);\n}\n\n/* Horizontal rules */\nhr {\n  border: none;\n  border-top: 1px solid var(--color-border);\n  margin: 40px auto;\n  max-width: 900px;\n}\n\n/* Library colors */\n.color-blue { color: var(--color-nx); }      /* nx */\n.color-cyan { color: var(--color-brot); }    /* brot */\n.color-purple { color: var(--color-hugin); } /* hugin */\n.color-green { color: var(--color-quill); }  /* quill */\n.color-orange { color: var(--color-rune); }  /* rune */\n.color-red { color: var(--color-kaun); }     /* kaun */\n.color-indigo { color: var(--color-sowilo); } /* sowilo */\n.color-pink { color: var(--color-talon); }   /* talon */\n.color-lime { color: var(--color-fehu); }    /* fehu */\n.color-yellow { color: var(--color-vega); }  /* vega */\n.color-teal { color: var(--color-norn); }   /* norn */\n.color-munin { color: var(--color-munin); } /* munin */\n.color-slate { color: var(--color-tolk); }  /* tolk */\n\n/* Remove underlines from colored library links */\na.color-blue,\na.color-cyan,\na.color-purple,\na.color-orange,\na.color-red,\na.color-indigo,\na.color-green,\na.color-pink,\na.color-lime,\na.color-yellow,\na.color-teal,\na.color-munin,\na.color-slate {\n  text-decoration: none;\n}\n\n/* Rune symbols */\n.rune-symbol {\n  opacity: 0.7;\n 
 margin-left: 2px;\n  display: inline-block;\n  transform: translateY(-0.1em);\n}\n\n/* Landing page specific */\n.main-content h2 {\n  border-top: none;\n  padding-top: 0;\n  margin-top: 1.5em;\n}\n\n.landing {\n  height: 100vh;\n  display: flex;\n  flex-direction: column;\n  justify-content: center;\n  text-align: center;\n  padding: 40px;\n  padding-bottom: 12vh; /* Pull content up into visual golden third */\n  box-sizing: border-box;\n}\n\n.landing h1 {\n  font-size: 72px;\n  margin: 0;\n  letter-spacing: -2px;\n}\n\n.landing-logo {\n  width: 110px;\n  height: 110px;\n  margin: 0 auto 8px auto; /* Reduced to bring title closer to logo */\n  display: block;\n  filter: invert(1);\n  opacity: 0.9;\n}\n\n.landing-tagline {\n  margin: 8px 0 56px 0; /* Less gap from title, more gap to links */\n  font-size: 20px;\n  color: var(--color-text);\n  text-transform: capitalize;\n  letter-spacing: 0.05em;\n}\n\n.landing .links {\n  margin: 0 auto 64px auto; /* 64px to footer nav */\n  font-size: 24px;\n  display: flex;\n  flex-direction: column;\n  justify-content: center;\n  align-items: center;\n  gap: 0;\n}\n\n.landing .links-group {\n  display: flex;\n  align-items: center;\n  gap: 20px;\n}\n\n.landing .links a {\n  text-decoration: none;\n  transition: opacity 0.2s ease, filter 0.2s ease;\n  position: relative;\n  display: inline-block;\n}\n\n.landing .links a:hover {\n  opacity: 1;\n  filter: brightness(1.3);\n}\n\n.landing .separator {\n  color: var(--color-border-light);\n  margin: 0;\n}\n\n.landing-footer-nav {\n  font-size: 1rem;\n  letter-spacing: 0.05em;\n  padding-bottom: 36px;\n}\n\n.landing-footer-nav a {\n  color: var(--color-text-dim);\n  text-decoration: none;\n  transition: color 0.2s ease;\n}\n\n.landing-footer-nav a:hover {\n  color: var(--color-text);\n}\n\n.landing-footer-nav .cta-inline {\n  color: var(--color-text);\n  font-weight: bold;\n  position: relative;\n}\n\n.landing-footer-nav .cta-inline:hover {\n  color: 
var(--color-text-bright);\n}\n\n.landing-footer-nav .separator {\n  margin: 0 12px;\n  color: var(--color-border-light);\n}\n\n/* Reference tables — compact, for API/algorithm listings */\n.reference-table {\n  border-collapse: collapse;\n  width: 100%;\n  margin: 20px 0;\n  font-size: 14px;\n}\n\n.reference-table td,\n.reference-table th {\n  padding: 6px 12px;\n  border: 1px solid var(--color-border);\n  text-align: left;\n}\n\n.reference-table th {\n  color: var(--color-text-dim);\n  background: var(--color-bg-dark);\n  font-weight: normal;\n  text-transform: uppercase;\n  letter-spacing: 0.5px;\n  font-size: 12px;\n}\n\n.reference-table code {\n  font-size: 13px;\n  background: none;\n  padding: 0;\n}\n\n/* Library map on index page */\n.library-map {\n  margin: 30px 0;\n}\n\n.library-map h3 {\n  font-size: 14px;\n  text-transform: uppercase;\n  letter-spacing: 0.5px;\n  color: var(--color-text-dim);\n  margin-top: 30px;\n  margin-bottom: 15px;\n}\n\n.library-map-group {\n  display: grid;\n  grid-template-columns: 1fr 1fr;\n  gap: 15px;\n}\n\n.library-map-item {\n  display: flex;\n  align-items: baseline;\n  gap: 10px;\n}\n\n.library-map-item a {\n  text-decoration: none;\n  font-size: 18px;\n  white-space: nowrap;\n}\n\n.library-map-item .desc {\n  color: var(--color-text-dim);\n  font-size: 14px;\n}\n\n/* Syntax highlighting — hilite (build-time, TextMate classes) */\n\n/* Comments */\nspan[class^='ocaml-comment'],\nspan[class^='bash-comment'],\nspan[class^='bash-punctuation-definition-comment'],\nspan[class^='dune-comment'] {\n  color: var(--syntax-comment);\n  font-style: italic;\n}\n\n/* Keywords */\nspan[class='ocaml-keyword'],\nspan[class='ocaml-keyword-other'],\nspan[class^='ocaml-keyword-operator'],\nspan[class^='ocaml-keyword-other-attribute'],\nspan[class^='ocaml-storage'],\nspan[class^='bash-keyword'],\nspan[class^='dune-keyword'] {\n  color: var(--syntax-keyword);\n}\n\n/* Strings and characters 
*/\nspan[class^='ocaml-string'],\nspan[class*='ocaml-constant-character'] {\n  color: var(--syntax-string);\n}\n\n/* Numbers */\nspan[class*='ocaml-constant-numeric'],\nspan[class^='dune-constant'] {\n  color: var(--syntax-number);\n}\n\n/* Functions and bindings */\nspan[class^='ocaml-entity'],\nspan[class^='ocaml-variable'],\nspan[class^='bash-support'] {\n  color: var(--syntax-function);\n}\n\n/* Types, modules, constructors */\nspan[class*='ocaml-constant-language-capital'],\nspan[class*='ocaml-constant-language-polymorphic'],\nspan[class*='ocaml-constant-language-boolean'],\nspan[class^='ocaml-support'] {\n  color: var(--syntax-type);\n}\n\n/* Responsive */\n@media (max-width: 1023px) {\n  .landing-logo {\n    width: 90px;\n    height: 90px;\n  }\n  \n  .landing h1 {\n    font-size: 56px;\n  }\n  \n  .landing-tagline {\n    margin: 19px 0 32px 0; /* 20% reduction */\n    font-size: 18px;\n  }\n  \n  .landing .links {\n    margin: 0 auto 51px auto; /* 20% reduction */\n    font-size: 21px;\n  }\n  \n  .landing .links-group {\n    gap: 16px;\n  }\n}\n\n@media (max-width: 700px) {\n  body {\n    padding: 20px;\n  }\n  \n  .main-content {\n    padding: 20px;\n  }\n  \n  .library-map-group {\n    grid-template-columns: 1fr;\n  }\n  \n  .landing-logo {\n    width: 70px;\n    height: 70px;\n    margin-bottom: 12px;\n  }\n  \n  .landing h1 {\n    font-size: 48px;\n  }\n  \n  .landing-tagline {\n    margin: 16px 0 24px 0;\n    font-size: 16px;\n  }\n  \n  .landing .links {\n    font-size: 18px;\n    margin: 0 auto 40px auto;\n  }\n  \n  .landing .links-group {\n    gap: 14px;\n    font-size: 18px;\n  }\n  \n  .landing-footer-nav {\n    font-size: 0.875rem;\n  }\n  \n}\n\n/* Custom tooltips for project links */\n.landing .links a[data-tooltip] {\n  position: relative;\n}\n\n.landing .links a[data-tooltip]::after {\n  content: attr(data-tooltip);\n  position: absolute;\n  bottom: 150%;\n  left: 50%;\n  transform: translateX(-50%) scale(0.9);\n  background: 
var(--color-bg-dark);\n  color: var(--color-text);\n  padding: 6px 12px;\n  border: 1px solid var(--color-border-light);\n  font-size: 14px;\n  white-space: nowrap;\n  opacity: 0;\n  pointer-events: none;\n  transition: all 0.15s ease;\n  z-index: 1000;\n}\n\n.landing .links a[data-tooltip]:hover::after {\n  opacity: 1;\n  transform: translateX(-50%) scale(1);\n}\n\n/* Arrow for tooltip */\n.landing .links a[data-tooltip]::before {\n  content: '';\n  position: absolute;\n  bottom: 140%;\n  left: 50%;\n  transform: translateX(-50%) scale(0.9);\n  width: 0;\n  height: 0;\n  border-left: 6px solid transparent;\n  border-right: 6px solid transparent;\n  border-top: 6px solid var(--color-border-light);\n  opacity: 0;\n  pointer-events: none;\n  transition: all 0.15s ease;\n  z-index: 999;\n}\n\n.landing .links a[data-tooltip]:hover::before {\n  opacity: 1;\n  transform: translateX(-50%) scale(1);\n}\n\n"
  },
  {
    "path": "www/templates/layout_docs.html",
    "content": "<!DOCTYPE html>\n<html lang=\"en\">\n\n<head>\n  <meta charset=\"UTF-8\" />\n  <meta name=\"viewport\" content=\"width=device-width, initial-scale=1.0\" />\n  <title>{{title}}</title>\n  <link rel=\"stylesheet\" href=\"/styles.css\">\n  <link rel=\"stylesheet\" href=\"/docs.css\">\n</head>\n\n<body>\n  <!-- Header -->\n  <header class=\"header\">\n    <nav>\n      <a href=\"/\" class=\"breadcrumb-link logo-link\">\n        <img src=\"/raven.svg\" alt=\"Raven\" class=\"header-logo\">\n        raven\n      </a>\n      <span class=\"breadcrumb-separator\">/</span>\n      {{docs_breadcrumb}}\n    </nav>\n  </header>\n\n  <div class=\"container\">\n    <button class=\"sidebar-toggle\" onclick=\"document.querySelector('.sidebar').classList.toggle('open')\">Menu</button>\n    <!-- Sidebar -->\n    <nav class=\"sidebar\">\n      <div class=\"nav-section\">\n        <div class=\"nav-title\">Documentation</div>\n        <ul class=\"nav-links\">\n          <li><a href=\"/docs/\">Overview</a></li>\n          <li><a href=\"/docs/introduction/\">Introduction</a></li>\n          <li><a href=\"/docs/installation/\">Installation</a></li>\n          <li><a href=\"/docs/ecosystem-overview/\">Ecosystem Overview</a></li>\n          <li><a href=\"/docs/roadmap/\">Roadmap</a></li>\n          <li><a href=\"/docs/support-raven/\">Support Raven</a></li>\n        </ul>\n      </div>\n\n      <div class=\"nav-section\">\n        <div class=\"nav-title\">Libraries</div>\n        <ul class=\"nav-links\">\n          <li><a href=\"/docs/nx/\" class=\"color-blue\">nx</a></li>\n          <li><a href=\"/docs/rune/\" class=\"color-orange\"><span class=\"rune-symbol\">ᚱ</span> rune</a></li>\n          <li><a href=\"/docs/kaun/\" class=\"color-red\"><span class=\"rune-symbol\">ᚲ</span> kaun</a></li>\n          <li><a href=\"/docs/brot/\" class=\"color-cyan\"><span class=\"rune-symbol\">ᚨ</span> brot</a></li>\n          <li><a href=\"/docs/talon/\" class=\"color-pink\"><span 
class=\"rune-symbol\">ᛃ</span> talon</a></li>\n          <li><a href=\"/docs/hugin/\" class=\"color-purple\"><span class=\"rune-symbol\">ᛞ</span> hugin</a></li>\n          <li><a href=\"/docs/quill/\" class=\"color-green\"><span class=\"rune-symbol\">ᛈ</span> quill</a></li>\n          <li><a href=\"/docs/fehu/\" class=\"color-lime\"><span class=\"rune-symbol\">ᚠ</span> fehu</a></li>\n          <li><a href=\"/docs/sowilo/\" class=\"color-indigo\"><span class=\"rune-symbol\">ᛋ</span> sowilo</a></li>\n        </ul>\n      </div>\n    </nav>\n\n    <!-- Main Content -->\n    <div class=\"content\">\n      <main>\n{{content}}\n      </main>\n    </div>\n  </div>\n\n</body>\n\n</html>"
  },
  {
    "path": "www/templates/layout_docs_lib.html",
    "content": "<!DOCTYPE html>\n<html lang=\"en\">\n\n<head>\n  <meta charset=\"UTF-8\" />\n  <meta name=\"viewport\" content=\"width=device-width, initial-scale=1.0\" />\n  <title>{{title}}</title>\n  <link rel=\"stylesheet\" href=\"/styles.css\">\n  <link rel=\"stylesheet\" href=\"/docs.css\">\n  <link rel=\"stylesheet\" href=\"/odoc.css\">\n</head>\n\n<body>\n  <!-- Header -->\n  <header class=\"header\">\n    <nav>\n      <a href=\"/\" class=\"breadcrumb-link logo-link\">\n        <img src=\"/raven.svg\" alt=\"Raven\" class=\"header-logo\">\n        raven\n      </a>\n      <span class=\"breadcrumb-separator\">/</span>\n      <a href=\"/docs/\" class=\"breadcrumb-link\">docs</a>\n      <span class=\"breadcrumb-separator\">/</span>\n      {{lib_breadcrumb}}\n    </nav>\n  </header>\n\n  <div class=\"container\">\n    <button class=\"sidebar-toggle\" onclick=\"document.querySelector('.sidebar').classList.toggle('open')\">Menu</button>\n    <!-- Sidebar -->\n    <nav class=\"sidebar\">\n      <!-- Library Info -->\n      <div class=\"library-info\">\n        <h3 class=\"{{lib_color}}\">{{lib_display}}</h3>\n        <p>{{lib_description}}</p>\n        <a href=\"/docs/\" class=\"{{lib_color}}\">← raven</a>\n      </div>\n\n      <!-- Navigation -->\n{{lib_tab_nav}}\n    </nav>\n\n    <!-- Main Content -->\n    <div class=\"content\">\n      <main>\n{{lib_hero}}\n{{content}}\n{{prev_next}}\n      </main>\n    </div>\n{{toc}}\n  </div>\n\n</body>\n\n</html>\n"
  },
  {
    "path": "www/templates/main.html",
    "content": "<!DOCTYPE html>\n<html lang=\"en\">\n\n<head>\n  <meta charset=\"UTF-8\" />\n  <meta name=\"viewport\" content=\"width=device-width, initial-scale=1.0\" />\n  <title>{{title}}</title>\n  <link rel=\"stylesheet\" href=\"/styles.css\">\n</head>\n\n<body>\n\n  <!-- Main Content -->\n  <main class=\"main-content\">\n{{content}}\n  </main>\n\n</body>\n\n</html>"
  }
]